GitHub Pull Request Merge Ref and TeamCity CI fail

GitHub has an awesome feature that allows us to build on the potential merge result of a pull request.

This allows us to run unit and UI tests against the result of a merge, so we know with certainty that it works, before we merge the code.

To get this working with TeamCity is a pain in the ass though.

Let's look at a basic workflow with this:

First we will look at two active pull requests, where we are about to merge the first one.

MasterFeatureBranchGitHubFlow

Pull request 2 advertises the /head (actual branch) and /merge (result “if we merged”)

TeamCity says you should tie your builds to the /merge ref for CI; this will build the merge result, and I agree.
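For reference, the branch specification on the VCS root that picks up these refs typically looks something like the below (the exact pattern depends on your setup, so treat it as a starting point):

+:refs/pull/(*/merge)
+:refs/pull/(*/head)

The part in brackets becomes the logical branch name that TeamCity displays.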

However, let's look at what happens in GitHub when we merge in Feature 1.

MasterFeatureBranchGitHubFlowMergedFeature1

The new code goes into master, which will recalculate the merge result on Pull request 2. TeamCity correctly builds the merge reference and validates that the Pull Request will succeed.

However if we look in GitHub we will see the below

UpdateGitHubBranch

It now blocks you and prompts you to update your branch.

After you click this, the /head and /merge refs will update, as it adds a commit to your branch and recalculates the merge result again. Then you need to wait for another build to validate the new commit on your branch.

MergeAndHeadRefOnGitHubBranchUpdate

This now triggers a second build. And when it completes, you can merge.

The issue here is that we are double building. There are two solutions as I see it:

  1. GitHub should allow you to merge without updating your branch
  2. TeamCity should allow you to trigger from one ref and build on a different one

I was able to implement the second option using a build configuration that calls the TeamCity API to trigger a build. However my preference would be number 1, as this is more automated.

BuildOffDifferentBranchFromTrigger

Inside it looks like this

BuildOffDifferentBranchFromTrigger1

Below is the example PowerShell that is used in the trigger build. We had an issue with the SSL cert (even though it wasn't self-signed), so we had to disable the check for it to work.

# Disable SSL certificate validation (we had problems with the cert check in this environment)
add-type @"
using System.Net;
using System.Security.Cryptography.X509Certificates;
public class TrustAllCertsPolicy : ICertificatePolicy {
    public bool CheckValidationResult(
        ServicePoint srvPoint, X509Certificate certificate,
        WebRequest request, int certificateProblem) {
        return true;
    }
}
"@
[System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy

# Swap the /head ref we were triggered on for the /merge ref we actually want to build
$buildBranch = "%teamcity.build.branch%"
Write-Host $buildBranch
$buildBranch = $buildBranch.Replace("/head","/merge")

# Build the REST request body that queues the target build configuration on that branch
$postbody = "<build branchName='$buildBranch'>
<buildType id='%TargetBuildType%'/>
</build>"
Write-Host $postbody

$user = '%TeamCityUser%'
$pass = '%TeamCityPassword%'

$secpasswd = ConvertTo-SecureString $pass -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential($user, $secpasswd)

# Queue the build via the TeamCity REST API
Invoke-RestMethod https://teamcity/httpAuth/app/rest/buildQueue -Method POST -Body $postbody -Credential $credential -Headers @{"accept"="application/xml";"Content-Type"="application/xml"}

You will see that we replace /head in the branch name with /merge, so we only trigger after someone clicks the update branch button, but still queue the build against the merge result.

Also don’t forget to add a VCS trigger for file changes “+:.”, so that it will only run builds when there are changes.

VCSTriggerRule

We are running with this solution this week and I am going to put a request into GitHub support about option 1.

This is a really big issue for us as we have 30-40 open pull requests on our repo, so double building creates a LOT of traffic.

If anyone has a better solution please leave some comments.

TeamCity and avoiding redownloading of npm packages

I haven't been able to find a better solution than this. My original post on Stack Overflow is here.

TeamCity clears out the node_modules folder on every build, so when running npm install it re-downloads them all. So I've moved to using a PowerShell step to back up and restore this folder at the end and start of each build respectively.

Here is the PS step I run at the start


param (
    [string]$projId,
    [string]$WorkDir
)

# If we have a backup of node_modules for this build configuration, move it back into the working directory
if (Test-Path "c:\node_backup\$projId\node_modules")
{
    Move-Item "c:\node_backup\$projId\node_modules" "$WorkDir\node_modules"
}

And here is the one at the end


param (
    [string]$projId,
    [string]$WorkDir
)

# Make sure the backup folders exist
If (!(Test-Path "c:\node_backup"))
{
    mkdir "c:\node_backup"
}
If (!(Test-Path "c:\node_backup\$projId"))
{
    mkdir "c:\node_backup\$projId"
}

# Throw away any stale backup (needs -Recurse -Force or Remove-Item will choke on a non-empty folder)
If (Test-Path "c:\node_backup\$projId\node_modules")
{
    Remove-Item "c:\node_backup\$projId\node_modules" -Recurse -Force
}

# Stash the node_modules folder for the next build
Move-Item "$WorkDir\node_modules" "c:\node_backup\$projId"

Then in the script arguments I pass this

-projId %system.teamcity.buildType.id% -WorkDir %NodeWorkingDirectory%

NodeWorkingDirectory is a parameter I set in my projects that I use to tell node where my gulp file is.

Running up TeamCity build Agents on HyperV

I had to run up a bucket load of build servers in a Hyper-V environment the other day, so I decided to automate the process a little.

I have also started using RAM disks for the agents too, to speed them up.

The steps for the build server setup were:

  1. Install OS (Win Server 2012 R2)
  2. Install SQL 2012 Express
  3. Install VS 2015
  4. Install the build agent to drive E: and point it at the TC server
  5. Install RAMDisk (here)
  6. Run windows updates

If you are unsure where to get the Build Agent installer from, click the Agents tab in TeamCity and there is a link in the top left.

InstallTeamCityBuildAgent

At this point move the build agent to the RAM disk (a rough PowerShell sketch of the scriptable parts follows the list) by:

  1. Stop the TeamCity build agent service
  2. Change the drive letter of E: to F:
  3. Create a RAM drive as E: with its image file on F:, set to save to disk on shutdown
    1. I used a 5GB drive and it handles it OK
  4. Copy the build agent folder from F: to E:
  5. Start the build agent service
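Here is a rough PowerShell sketch of the scriptable parts (steps 1, 4 and 5); the RAM drive itself is created through the RAMDisk tool. The service name TCBuildAgent and the BuildAgent folder name are assumptions, so check what your installer actually created:

# Stop the agent before touching its files (service name assumed to be TCBuildAgent)
Stop-Service -Name "TCBuildAgent"

# After re-lettering the old disk to F: and creating the RAM drive as E:,
# copy the agent folder across to the RAM drive
Copy-Item -Path "F:\BuildAgent" -Destination "E:\BuildAgent" -Recurse

# Start the agent again, now running from the RAM drive
Start-Service -Name "TCBuildAgent"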

After this I also installed IIS and disabled the firewall.

Lastly sysprep the server and check OOBE/Shutdown
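If you want to run that from the command line rather than the UI, the standard invocation is:

C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown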

Then I deleted the VM I was using, and kept the VHDX file that was generated for use as a template; I backed it up to use on other hosts in the future as well.

Then I wrote the below script that runs on the host; it does the following for me:

  1. Creates a new VHDX using a diff based on the original (makes run up really fast)
  2. Creates a VM that uses that VHDX
  3. Sets some values that I can't set in the New-VM command
  4. Changes the network config of the guest

 


param(
    [string]$MachineName = "MyMachine",
    [string]$IPv4Address = "10.0.0.20"
)

# Sets a static IP (or DHCP) on a VM's network adapter from the Hyper-V host via WMI
Function Set-VMNetworkConfiguration {
    [CmdletBinding()]
    Param (
        [Parameter(Mandatory=$true,
                   Position=1,
                   ParameterSetName='DHCP',
                   ValueFromPipeline=$true)]
        [Parameter(Mandatory=$true,
                   Position=0,
                   ParameterSetName='Static',
                   ValueFromPipeline=$true)]
        [Microsoft.HyperV.PowerShell.VMNetworkAdapter]$NetworkAdapter,

        [Parameter(Mandatory=$true,
                   Position=1,
                   ParameterSetName='Static')]
        [String[]]$IPAddress=@(),

        [Parameter(Mandatory=$false,
                   Position=2,
                   ParameterSetName='Static')]
        [String[]]$Subnet=@(),

        [Parameter(Mandatory=$false,
                   Position=3,
                   ParameterSetName='Static')]
        [String[]]$DefaultGateway = @(),

        [Parameter(Mandatory=$false,
                   Position=4,
                   ParameterSetName='Static')]
        [String[]]$DNSServer = @(),

        [Parameter(Mandatory=$false,
                   Position=0,
                   ParameterSetName='DHCP')]
        [Switch]$Dhcp
    )

    # Find the guest network adapter configuration that matches the adapter's MAC address
    $VM = Get-WmiObject -Namespace 'root\virtualization\v2' -Class 'Msvm_ComputerSystem' | Where-Object { $_.ElementName -eq $NetworkAdapter.VMName }
    $VMSettings = $vm.GetRelated('Msvm_VirtualSystemSettingData') | Where-Object { $_.VirtualSystemType -eq 'Microsoft:Hyper-V:System:Realized' }
    $VMNetAdapters = $VMSettings.GetRelated('Msvm_SyntheticEthernetPortSettingData')

    $NetworkSettings = @()
    foreach ($NetAdapter in $VMNetAdapters) {
        if ($NetAdapter.Address -eq $NetworkAdapter.MacAddress) {
            $NetworkSettings = $NetworkSettings + $NetAdapter.GetRelated("Msvm_GuestNetworkAdapterConfiguration")
        }
    }

    $NetworkSettings[0].IPAddresses = $IPAddress
    $NetworkSettings[0].Subnets = $Subnet
    $NetworkSettings[0].DefaultGateways = $DefaultGateway
    $NetworkSettings[0].DNSServers = $DNSServer
    $NetworkSettings[0].ProtocolIFType = 4096

    if ($dhcp) {
        $NetworkSettings[0].DHCPEnabled = $true
    } else {
        $NetworkSettings[0].DHCPEnabled = $false
    }

    # Push the new configuration into the guest and wait for the WMI job to finish
    $Service = Get-WmiObject -Class "Msvm_VirtualSystemManagementService" -Namespace "root\virtualization\v2"
    $setIP = $Service.SetGuestNetworkAdapterConfiguration($VM, $NetworkSettings[0].GetText(1))

    if ($setip.ReturnValue -eq 4096) {
        $job = [WMI]$setip.job

        while ($job.JobState -eq 3 -or $job.JobState -eq 4) {
            start-sleep 1
            $job = [WMI]$setip.job
        }

        if ($job.JobState -eq 7) {
            write-host "Success"
        }
        else {
            $job.GetError()
        }
    } elseif ($setip.ReturnValue -eq 0) {
        Write-Host "Success"
    }
}

Write-Host $MachineName

# Create a differencing disk based on the template VHDX, then build the VM on top of it
New-VHD -Path "D:\Hyper-V\Diff.$MachineName.vhdx" -ParentPath "D:\Hyper-V\agent_template2.VHDX" -Differencing

New-VM -Name $MachineName -MemoryStartupBytes 6024000000 -Generation 2 -BootDevice VHD -VHDPath "D:\Hyper-V\Diff.$MachineName.vhdx" -SwitchName "10.0.0.xx"

Set-VM -Name $MachineName -DynamicMemory -ProcessorCount 8
Write-Host $IPv4Address

# Give the new guest its static network configuration
Get-VMNetworkAdapter -VMName $MachineName -Name "Network Adapter" | Set-VMNetworkConfiguration -IPAddress $IPv4Address -Subnet 255.255.255.0 -DNSServer 10.0.0.15 -DefaultGateway 10.0.0.1

The above script takes a parameter for the new machine name and the IP you want to give the server. I have hard coded the subnet, gateway and DNS, but these should probably be made parameters too, depending on your environment.
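If you did want to parameterise those as well, the param block at the top could be extended to something like this (defaults taken from the values hard coded above):

param(
    [string]$MachineName = "MyMachine",
    [string]$IPv4Address = "10.0.0.20",
    [string]$SubnetMask = "255.255.255.0",
    [string]$DefaultGateway = "10.0.0.1",
    [string]$DNSServer = "10.0.0.15",
    [string]$SwitchName = "10.0.0.xx"
)

You would then pass those variables through to New-VM and Set-VMNetworkConfiguration instead of the literals.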

After this I just have to log in to the agents and domain join them after they are spun up. I used to use VMM, which would domain join out of the box, but it looked painful to do from a script on the host so I have left it.

And also authorize them on the TeamCity server.
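Authorizing can be done from the Agents page in the web UI, but TeamCity also exposes it through the REST API, so it could be scripted too. A minimal sketch, assuming the same credential set-up as the trigger script earlier and a made-up agent name of MyNewAgent:

$secpasswd = ConvertTo-SecureString '%TeamCityPassword%' -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential('%TeamCityUser%', $secpasswd)

# PUT a plain-text "true" to the agent's authorized flag
Invoke-RestMethod "https://teamcity/httpAuth/app/rest/agents/name:MyNewAgent/authorized" -Method PUT -Body "true" -Credential $credential -Headers @{"Content-Type"="text/plain"}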

 

Connecting TeamCity to GitLab with a self-signed SSL

So I spent hours today beating my head against a wall and cursing the JRE, so a pretty normal day for me.

I had to connect our TeamCity server to the GitLab server. The GitLab server uses an SSL cert that was generated from the AD Domain CA, so it is trusted by all the domain machines. Our TC server is on the domain as well, and when connecting to the https site it comes up as green.

However, the git connection in TeamCity runs inside the JRE, which for some reason doesn't use the machine trusts; it has its own cert store you need to add the cert to.

Here's the error I was facing:

 

List remote refs failed: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

 

To test the trust from JRE you need to run this

java SSLPoke git.mycompany.local 443

Where git.mycompany.local is your GitLab server.

You can get the SSLPoke class here.

If it's untrusted you will see an error here.

You can use your web browser to export the public key.

GitLabUntrustedSSLCertificate.PNG

Most docs tell you that you can export your root CA public cert, but this didn't work for me; I actually had to export the specific cert for this site.

Then use this command line to import the cert into JRE and restart TeamCity.

C:\TeamCity\jre\bin>C:\TeamCity\jre\bin\keytool.exe -importcert -trustcacerts -file C:\MyGitLabSSLCert.cer -alias MyGitLabSSLCert -keystore "C:\TeamCity\jre\lib\security\cacerts"
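To double check the import worked, you can list the alias back out of the keystore (the default keystore password is "changeit" unless it has been changed):

C:\TeamCity\jre\bin\keytool.exe -list -keystore "C:\TeamCity\jre\lib\security\cacerts" -alias MyGitLabSSLCert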

After this we are in business!

Swagger/Swashbuckle and WebAPI Notes

If you aren't using Swagger/Swashbuckle on your WebAPI project, you may have been living under a rock; if so, go out and download it now 🙂

It's a port of a Node.js project that rocks! And MS is really getting behind it in a big way. If you haven't heard of it before, imagine WSDL for REST with a snazzy web UI for testing.

Swagger is relatively straightforward to set up with WebAPI, however there were a few gotchas that I ran into that I thought I would blog about.

The first one we ran into is so common that MS have a blog post about it. The issue is an exception that gets logged due to the way Swashbuckle auto-generates the operation ID from the method names.

A common example is when you have methods like the following:

GET /api/Company // Returns all companies

GET /api/Company/{id} // Returns company of given ID

In this case the swagger IDs will both be "Company_Get"; the generation of the swagger JSON content will still work, but if you try to run AutoRest or swagger-codegen on it they will fail.

The solution is to create a custom attribute to apply to the methods like so


// Attribute
using System;

namespace MyCompany.MyProject.Attributes
{
    [AttributeUsage(AttributeTargets.Method)]
    public sealed class SwaggerOperationAttribute : Attribute
    {
        public SwaggerOperationAttribute(string operationId)
        {
            this.OperationId = operationId;
        }

        public string OperationId { get; private set; }
    }
}

// Filter
using System.Linq;
using System.Web.Http.Description;
using Swashbuckle.Swagger;
using MyCompany.MyProject.Attributes;

namespace MyCompany.MyProject.Filters
{
    public class SwaggerOperationNameFilter : IOperationFilter
    {
        public void Apply(Operation operation, SchemaRegistry schemaRegistry, ApiDescription apiDescription)
        {
            // Use the operation ID from our attribute where the action has one
            operation.operationId = apiDescription.ActionDescriptor
                .GetCustomAttributes<SwaggerOperationAttribute>()
                .Select(a => a.OperationId)
                .FirstOrDefault();
        }
    }
}

// SwaggerConfig.cs file
using System.Web.Http;
using Swashbuckle.Application;
using MyCompany.MyProject.Filters;

namespace MyCompany.MyProject
{
    public class SwaggerConfig
    {
        private static string GetXmlCommentsPath()
        {
            return string.Format(@"{0}\MyCompany.MyProject.XML",
                System.AppDomain.CurrentDomain.BaseDirectory);
        }

        public static void Register()
        {
            var thisAssembly = typeof(SwaggerConfig).Assembly;

            GlobalConfiguration.Configuration
                .EnableSwagger(c =>
                {
                    c.OperationFilter<SwaggerOperationNameFilter>();

                    c.IncludeXmlComments(GetXmlCommentsPath());

                    // the above is for the comments doco that I will talk about next.

                    // there will be a LOT of additional code here that I have omitted
                });
        }
    }
}

Then apply like this:


[Attributes.SwaggerOperation("CompanyGetOne")]
[Route("api/Company/{Id}")]
[HttpGet]
public Company CompanyGet(int id)
{
    // code here
}

[Attributes.SwaggerOperation("CompanyGetAll")]
[Route("api/Company")]
[HttpGet]
public List<Company> CompanyGet()
{
    // code here
}

Also mentioned in the MS article is XML code comments; these are awesome for documentation, but make sure you don't have any potty-mouthed programmers.

This is pretty straight forward, see the setting below

XmlCommentsOutputDocumentationSwaggerSwashbuckle

The issue we had though was packaging them with Octopus, as it's an output file that is generated at build time. We use the OctoPack NuGet package to wrap up our web projects, so in order to package build-time output (other than bin folder content) we need to create a nuspec file in the project. OctoPack will default to using this instead of the csproj file if it has the same name.

e.g. if your project is called MyCompany.MyProject.csproj, create a nuspec file in this project called MyCompany.MyProject.nuspec.

Once you add a files tag into the nuspec file, this will override OctoPack's behaviour of looking up the csproj file for files, but you can override this behaviour by using this msbuild switch.

/p:OctoPackEnforceAddingFiles=true

This will make OctoPack package files from the csproj first, then use what is specified in the files tag in the nuspec file as additional files.
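For example, assuming OctoPack is switched on with the usual RunOctoPack property, the build invocation would look something like this:

msbuild MyCompany.MyProject.csproj /t:Build /p:RunOctoPack=true /p:OctoPackEnforceAddingFiles=true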

So our files tag just specifies the MyCompany.MyProject.XML file, and we are away and deploying comments as doco!

We used to use Sandcastle, so most of the main code comment doco marries up between the two.

Autofac DI is a bit odd with the WebAPI controllers. We generally use DI on the constructor params, but WebAPI controllers require a parameterless constructor, so we need to use properties for DI. This is pretty straightforward: you just need to call the PropertiesAutowired method when registering them, and the same goes for the filters and attributes. In the example below I put my filters in a "Filters" folder/namespace, and my attributes in an "Attributes" folder/namespace.


// this code goes in your Application_Start

var containerBuilder = new ContainerBuilder();

// Register the attribute and filter types with property injection enabled
containerBuilder.RegisterAssemblyTypes(typeof(WebApiApplication).Assembly)
    .Where(t => t.IsInNamespace("MyCompany.MyProject.Attributes")).PropertiesAutowired();
containerBuilder.RegisterAssemblyTypes(typeof(WebApiApplication).Assembly)
    .Where(t => t.IsInNamespace("MyCompany.MyProject.Filters")).PropertiesAutowired();

// Controllers get property injection too, since they need a parameterless constructor here
containerBuilder.RegisterApiControllers(Assembly.GetExecutingAssembly()).PropertiesAutowired();

containerBuilder.RegisterWebApiFilterProvider(config);

Agile-Scrum Interview Questions to a Company from a Candidate

I recently went for a job interview with a company and wanted to know how evolved they were with respect to agile practices. So using the Agile manifesto I created a series of questions to ask them in the interview to rank how evolved they were in following Agile/Scrum processes.

I think if you are looking at your own company you could use these as a method to judge your own progress in your journey into Agile/Scrum.

Below is each principle followed by the question I asked.

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

How do you do a release? Do you hand over to another team for releases? Do you practice continuous delivery?

With the rise of DevOps, pushing releases should be easy these days; if you have a complex process in place for pushing releases to live, it might be a sign that you need to change.

The way I've seen people try to justify complex release and approval processes before is when they have critical systems where any sort of downtime will have a large business impact. You will hear them say things like "We work in e-commerce, any minute offline is lost money" or "We are in finance, one wrong number could cost us large amounts of money". These statements are true, but an evolved company has things implemented such as A/B testing, and tests that run in the deployment process to verify applications on an inactive node before live traffic is cut over to it. AWS's Elastic Beanstalk out of the box will run you up a new set of servers in the deployment process that tests can be performed on before a DNS cut-over is done and the old environment is completely shut down.

While you do need to take the context into account, there are few companies I have seen that could not aim for continuous delivery.

Zero-downtime deployment and continuous delivery are the two key phrases that give you a big tick here.

Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage

How do you handle change in requirements after development work has started, or finished?

If they tell you that they have “Change Requests” that’s a sure sign they aren’t following Agile/Scrum process.

Another common mistake I see people make is to track this and report on it so they can "improve planning". While I am not saying that you shouldn't try to plan ahead where possible, doing this will give you a lot of false positives, because one of the theories of scrum is that "the customer doesn't know what the right product is until they've seen the wrong product".

Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.

How often do you release? How long roughly does it take you to do a release of your product?

Similar to the first question, releases should be done "on demand" by the team; if there is any hand-over process in place, or senior management that needs to be involved beyond acceptance testing in the demos, then this might be a sign of problems.

Business people and developers must work together daily throughout the project

Where do your requirements come from? Where are they stored? Who manages them? What contact does this person have with the team?

This question in summary is "Do you have product owners, and are they doing their job?". Product owners should have daily contact with the team; however, having them in the same room might be too much. The company I interviewed with has their team, PO and scrum master all in the same desk cluster; I'm not sure about this 🙂

Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.

Who manages your teams and makes sure they are keeping focused day-to-day?

This is a loaded question. Unmotivated people need people to motivate them, and don't make for a good dev team.

The answer should be no one, because the teams are self-motivated and self-organizing; the scrum master checks in with them daily to make sure they don't have any impediments and keeps the team free from distractions.

Do you have any regular reporting to upper management that needs to be done inside the sprint?

The answer here should be no; the measure of progress is working software, which is reported in the demo. There may be reports to the business of a rolled-up result of sprints, for example one feature may take 3 sprints to complete, so at the end of those 3 sprints some additional reporting needs to be done. But beware of anyone that says something like "The output of our daily stand-up in the chat room is emailed to all senior managers in the business"; this means there is a lack of trust in the organisation.

The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.

Inquire about which ceremonies they conduct: daily stand-up, retrospective, etc.

Are there any they don't do? Are there any they do in a chat room? Where are the scrum master and product owner located? Does the team have easy access to them and vice versa?

In some teams that aren’t co-located this is difficult, but let me tell you from experience that video conferencing is the answer if you aren’t co-located.

While I think chat rooms are an important part of a modern team (go download Slack or HipChat now if you aren't using them already, but don't ask me which is better), they should NOT be used for your ceremonies. When you do a planning meeting and you are in the same office as someone, you see and hear the emotion in their voice when they are talking; this is vital to good communication.

Also, talking as opposed to typing lets people go off on tangents more easily, which generally leads to better innovation. The same is true of stand-ups, retrospectives and so on.

Working software is the primary measure of progress.

How do you measure progress of your dev team?

This one is pretty easy, if they say from the deployed running software then they get a tick.

Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.

How often do your staff work overtime? Is any weekend work required? If so, how often?

If you have staff regularly working overtime or weekends, and this is accepted by the organisation, it is a sign that your pace is not sustainable. You will burn your staff out and they will look for new jobs.

Continuous attention to technical excellence and good design enhances agility.

This is hard to put into a single question; I would start by asking them where they are at with practices like:

  • Unit Testing
  • Test Automation
  • Do they use Dependency Injection
  • etc

These types of technology will change over time, but once you have an indication of where they are at, how do they improve? Good answers will be things like involvement in open source projects, user groups (VMWare, etc), partner programs (Microsoft, RedHat,etc) and so on.

One of the processes we used to use was "Friday Afternoon Channel 9": each week the team members would take turns picking a video from MSDN Channel 9 (and sometimes other sources) and we would all watch it together.

The best architectures, requirements, and designs emerge from self-organizing teams.

Do you have team leaders or managers?

If you have a manager then you are not self-organised. By the same token, leaders are bad too; a lot of people are astounded by this concept and would argue that leadership is a good thing.

If you promote leadership then you give your team a head that can be cut off; what happens when your leader goes on holiday for 3 weeks?

It also prevents your team from taking ownership if they are working under someone's directions. You will end up with a product architected from a single person's view as opposed to the entire team's. It also allows the team to say things such as "I was just doing what I was told"; your team needs to own their work, and in the long run this will give them more job satisfaction as well.

In short, everyone on your team should be a leader, and the scrum master should be there to mediate any conflicts of direction.

Your scrum master should also not be a member of the team; he/she needs to be impartial to the team's decisions in order to give good guidance. If your scrum master is a team member he will end up becoming a leader and you will have problems.

At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

What is the most astounding change one of your teams has proposed from a retrospective that you can remember?

A bit of a loaded question again. If you simply ask them "Do you do retrospectives?" this won't tell you much; what you need to be asking is what impact their retrospectives have.

The answer to the above will give you an indication of whether they are following the Scrum retrospective process and seeing a positive outcome from it.

One of the keys of Scrum is empirical process control; if the team is not in charge of themselves, then they do not own their actions. Retrospectives are a key point in the team shaping their own direction.

Compact Framework Builds in Appveyor

I picked up an issue for NUnit to get their Compact Framework CI builds running in AppVeyor; previously they had a manual process for CF checking.

The changes are all in this commit 

To go into a bit of detail, I found a similar requirement with an R project someone had done, so did something similar: we used the appveyor.yml "install" step to execute a PowerShell script that downloads and installs an MSI on the box.

The bulk of the script below is pretty straightforward. All I've done is create a repo for the MSI files, then download and run them with msiexec on the box, with some very verbose checking. On isolated build boxes like AppVeyor, if something goes wrong it's nice to have a lot of log output.


# Progress is assumed to be a logging helper defined elsewhere in the build scripts
$url = "https://github.com/dicko2/CompactFrameworkBuildBins/raw/master/NETCFSetupv35.msi";
Progress ("Downloading NETCFSetupv35 from: " + $url)
Invoke-WebRequest -Uri $url -OutFile NETCFSetupv35.msi

$url = "https://github.com/dicko2/CompactFrameworkBuildBins/raw/master/NETCFv35PowerToys.msi";
Progress ("Downloading NETCFv35PowerToys from: " + $url)
Invoke-WebRequest -Uri $url -OutFile NETCFv35PowerToys.msi

Progress("Running NETCFSetupv35 installer")

# Install both MSIs quietly with msiexec, capturing stdout/stderr so failures show up in the build log
$msi = @("NETCFSetupv35.msi","NETCFv35PowerToys.msi")
foreach ($msifile in $msi)
{
    if(!(Test-Path($msifile)))
    {
        throw "MSI files are not present, please check logs."
    }
    Progress("Installing msi " + $msifile)
    Start-Process -FilePath "$env:systemroot\system32\msiexec.exe" -ArgumentList "/i `"$msifile`" /qn /norestart" -Wait -WorkingDirectory $pwd -RedirectStandardOutput stdout.txt -RedirectStandardError stderr.txt
    $OutputText = get-content stdout.txt
    Progress($OutputText)
    $OutputText = get-content stderr.txt
    Progress($OutputText)
}

# Sanity check that the Compact Framework MSBuild targets actually landed on disk
if(!(Test-Path("C:\Windows\Microsoft.NET\Framework\v3.5\Microsoft.CompactFramework.CSharp.targets")))
{
    throw "Compact framework files not found after install, install may have failed, please check logs."
}

RegistryWorkAround

You'll note at the end there is a call to "RegistryWorkAround"; I got these errors after setting the above up:

The “AddHighDPIResource” task failed unexpectedly.
System.ArgumentNullException: Value cannot be null.
Parameter name: path1

After a quick Google I found this forum post about the error, and it solved my problem. You can see my workaround below.


# Workaround: make sure the VS 9.0 ProductDir registry value exists (see the forum post linked above)
$registryPaths = @("HKLM:\SOFTWARE\Microsoft\VisualStudio\9.0\Setup\VS","HKLM:\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\9.0\Setup\VS")
$Name = "ProductDir"
$value = "C:\Program Files (x86)\Microsoft Visual Studio 9.0"

foreach($registryPath in $registryPaths)
{
    If(!(Test-Path $registryPath))
    {
        New-Item -Path $registryPath -Force | Out-Null
    }
    If(!(Get-ItemProperty -Path $registryPath -Name $Name -ErrorAction SilentlyContinue))
    {
        New-ItemProperty -Path $registryPath -Name $name -Value $value `
            -PropertyType String -Force | Out-Null
    }
    If(!((Get-ItemProperty -Path $registryPath -Name $Name).ProductDir -eq $value))
    {
        throw ("Registry path " + $registryPath + " not set to correct value, please check logs")
    }
    else
    {
        Progress("Registry update ok to value " + (Get-ItemProperty -Path $registryPath -Name $Name).ProductDir)
    }
}

After all that, it only adds less than 10 seconds on average to the build, which is good.

And now we have a working Compact Framework build in Appveyor!