Swagger/Swashbuckle and WebAPI Notes

If you aren’t using Swagger/Swashbuckle on your WebAPI project you may have been living under a rock; if so, go out and download it now 🙂

It’s a port of a node.js project that rocks, and Microsoft is really getting behind it in a big way. If you haven’t heard of it before, imagine WSDL for REST with a snazzy web UI for testing.

Swagger is relatively straightforward to set up with WebAPI, however there were a few gotchas I ran into that I thought I would blog about.

The first one we ran into is so common that Microsoft has a blog post about it. The issue is an exception that gets logged due to the way Swashbuckle auto-generates the operation ID from the method names.

A common example is when you have methods like the following:

GET /api/Company // Returns all companies

GET /api/Company/{id} // Returns company of given ID

In this case the swagger operation IDs will both be “Company_Get”. The generation of the swagger JSON content will still work, but if you try to run AutoRest or swagger-codegen against it they will fail.
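To make the clash concrete, a trimmed swagger.json for the two routes would contain something like the fragment below (field values assumed); code generators fail because both operations carry the same operationId:

```json
{
  "paths": {
    "/api/Company": {
      "get": { "operationId": "Company_Get" }
    },
    "/api/Company/{id}": {
      "get": { "operationId": "Company_Get" }
    }
  }
}
```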

The solution is to create a custom attribute to apply to the methods, like so:

// Attribute
namespace MyCompany.MyProject.Attributes
{
    [AttributeUsage(AttributeTargets.Method)]
    public sealed class SwaggerOperationAttribute : Attribute
    {
        public SwaggerOperationAttribute(string operationId)
        {
            this.OperationId = operationId;
        }

        public string OperationId { get; private set; }
    }
}

// Filter that reads the attribute and overrides the generated operationId
namespace MyCompany.MyProject.Filters
{
    public class SwaggerOperationNameFilter : IOperationFilter
    {
        public void Apply(Operation operation, SchemaRegistry schemaRegistry, ApiDescription apiDescription)
        {
            operation.operationId = apiDescription.ActionDescriptor
                .GetCustomAttributes<SwaggerOperationAttribute>()
                .Select(a => a.OperationId)
                .FirstOrDefault() ?? operation.operationId;
        }
    }
}

//SwaggerConfig.cs file
namespace MyCompany.MyProject
{
    public class SwaggerConfig
    {
        private static string GetXmlCommentsPath()
        {
            return string.Format(@"{0}\MyCompany.MyProject.XML",
                System.AppDomain.CurrentDomain.BaseDirectory);
        }

        public static void Register()
        {
            var thisAssembly = typeof(SwaggerConfig).Assembly;

            GlobalConfiguration.Configuration
                .EnableSwagger(c =>
                {
                    c.SingleApiVersion("v1", "MyCompany.MyProject");
                    c.OperationFilter<SwaggerOperationNameFilter>();

                    // the below is for comments doco that I will talk about next.
                    c.IncludeXmlComments(GetXmlCommentsPath());

                    // there will be a LOT of additional code here that I have omitted
                })
                .EnableSwaggerUi();
        }
    }
}





Then apply like this:

[SwaggerOperation("Company_GetById")]
public Company CompanyGet(int id)
{
    // code here
}

[SwaggerOperation("Company_GetAll")]
public List<Company> CompanyGet()
{
    // code here
}

Also mentioned in the Microsoft article are XML code comments. These are awesome for documentation, but make sure you don’t have any potty-mouth programmers.

This is pretty straightforward, see the setting below.
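The setting in question is the project’s XML documentation file option (the screenshot has been lost from this copy of the post); in the csproj it corresponds to a property roughly like this, with the output path an assumption:

```xml
<PropertyGroup>
  <!-- Emit the XML doc comments file at build time -->
  <DocumentationFile>bin\MyCompany.MyProject.XML</DocumentationFile>
</PropertyGroup>
```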


The issue we had, though, was packaging them with Octopus, as it’s an output file that is generated at build time. We use the OctoPack NuGet package to wrap up our web projects, so in order to package build-time output (other than bin folder content) we need to create a nuspec file in the project. OctoPack will default to using this instead of the csproj file if it has the same name.

e.g. if your project is called MyCompany.MyProject.csproj, create a nuspec file in the project called MyCompany.MyProject.nuspec.

Once you add a files tag into the nuspec file, this overrides OctoPack’s default behaviour of looking up the csproj file for files, but you can override this behaviour by using this msbuild switch.
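The switch itself was lost from the original formatting; if memory of OctoPack serves it is the OctoPackEnforceAddingFiles MSBuild property (treat the exact name as an assumption and check the OctoPack docs):

```
msbuild MySolution.sln /p:RunOctoPack=true /p:OctoPackEnforceAddingFiles=true
```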


This will make octopack package files from the csproj first, then use what is specified in the files tag in the nuspec file as additional files.

So our files tag just specifies the MyCompany.MyProject.XML file, and we are away and deploying comments as doco!
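As a sketch, the nuspec (ids, versions and paths here are assumed) just needs the XML comments file in its files section alongside the usual metadata:

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyCompany.MyProject</id>
    <version>1.0.0</version>
    <authors>MyCompany</authors>
    <description>Web project package for Octopus</description>
  </metadata>
  <files>
    <!-- build-time output that OctoPack would otherwise miss -->
    <file src="bin\MyCompany.MyProject.XML" target="bin" />
  </files>
</package>
```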

We used to use sandcastle so most of the main code comment doco marries up between the two.

Autofac DI is a bit odd with the WebAPI controllers. We generally use DI on the constructor params, but WebAPI controllers require a parameterless constructor, so we need to use properties for DI. This is pretty straightforward, you just need to call the PropertiesAutowired method when registering them, and likewise with the filters and attributes. In the example below I put my filters in a “Filters” folder/namespace, and my attributes in an “Attributes” folder/namespace.

// this code goes in your Application_Start

var containerBuilder = new ContainerBuilder();

// WebApiApplication is your HttpApplication class from Global.asax
containerBuilder.RegisterAssemblyTypes(typeof(WebApiApplication).Assembly)
    .Where(t => t.IsInNamespace("MyCompany.MyProject.Attributes")).PropertiesAutowired();
containerBuilder.RegisterAssemblyTypes(typeof(WebApiApplication).Assembly)
    .Where(t => t.IsInNamespace("MyCompany.MyProject.Filters")).PropertiesAutowired();

var container = containerBuilder.Build();
GlobalConfiguration.Configuration.DependencyResolver =
    new AutofacWebApiDependencyResolver(container);





Agile-Scrum Interview Questions to a Company from a Candidate

I recently went for a job interview with a company and wanted to know how evolved they were with respect to agile practices. So using the Agile manifesto I created a series of questions to ask them in the interview to rank how evolved they were in following Agile/Scrum processes.

I think if you are looking at your own company you could use these as a method to judge your own progress in your journey into Agile/Scrum.

Below is each principle, followed by the question I asked.

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

How do you do a release? Do you hand over to another team for releases? Do you practice continuous delivery?

With the rise of DevOps, pushing releases should be easy these days. If you have a complex process in place for pushing releases to live, it might be a sign that you need change.

How I’ve seen people try to justify complex release and approval processes before is when you have critical systems where any sort of downtime will have a large business impact. You will hear them say things like “We work in E-Commerce, any minute offline is lost money” or “We are in finance, one wrong number could cost us large amounts of money”. These statements are true, but an evolved company has things implemented such as A/B testing, and tests that run in the deployment process to verify applications on an inactive node before live traffic is cut over. AWS’s Elastic Beanstalk out of the box will run you up a new set of servers in the deployment process that tests can be performed on before a DNS cut-over is done and the old environment is completely shut down.

While you do need to take the context into account, there are few companies I have seen that could not aim for continuous delivery.

Zero-Downtime deployment, and Continuous delivery are the two key words that give you a big tick here.

Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.

How do you handle change in requirements after development work has started, or finished?

If they tell you that they have “Change Requests” that’s a sure sign they aren’t following Agile/Scrum process.

Another common mistake I see people make is tracking this and reporting on it so they can “improve planning”. While I am not saying that you shouldn’t try to plan ahead where possible, doing this will give you a lot of false positives, because one of the theories of scrum is that “the customer doesn’t know what the right product is until they’ve seen the wrong product”.

Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.

How often do you release? How long, roughly, does it take you to do a release of your product?

Similar to the first question, releases should be done “on demand” by the team. If there is any hand-over process in place, or senior management that needs to be involved beyond acceptance testing in the demos, then this might be a sign of problems.

Business people and developers must work together daily throughout the project.

Where do your requirements come from? Where are they stored? Who manages them? What contact does this person have with the team?

This question in summary is “Do you have product owners, and are they doing their job?”. Product owners should have daily contact with the team, however having them in the same room might be too much. The company I went for the interview with has their team, PO and scrum master all in the same desk cluster; I’m not sure about this 🙂

Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.

Who manages your teams and makes sure they are keeping focused day-to-day?

This is a loaded question. Unmotivated people need people to motivate them, and don’t make for a good dev team.

The answer should be no one, because our teams are self-motivated and self-organizing; our scrum master checks in with them daily to make sure they don’t have any impediments and keeps the team from distractions.

Do you have any regular reporting to upper management that needs to be done inside the sprint?

The answer here should be no; the measure of progress is working software, which is reported in the demo. There may be reports to the business of a rolled-up result of sprints, for example one feature may take 3 sprints to complete, so at the end of those 3 sprints some additional reporting needs to be done. But beware of anyone that says something like “The output of our daily stand-up in the chat room is emailed to all senior managers in the business”; this means there is a lack of trust in the organisation.

The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.

Inquire about which ceremonies they conduct, Daily stand-up, retrospective, etc.

Are there any they don’t do? Are there any they do in a chat room? Where are the scrum master and product owner located? Does the team have easy access to them and vice versa?

In some teams that aren’t co-located this is difficult, but let me tell you from experience that video conferencing is the answer if you aren’t co-located.

While I think chat rooms are an important part of a modern team (go download Slack or HipChat now if you aren’t using them already, but don’t ask me which is better), they should NOT be used for your ceremonies. When you do a planning meeting and you are in the same office as someone, you see and hear the emotion in their voice when they are talking; this is vital to good communication.

Also, talking as opposed to typing lets people go off on tangents more easily, which generally leads to better innovation. The same is true of stand-ups, retrospectives and so on.

Working software is the primary measure of progress.

How do you measure progress of your dev team?

This one is pretty easy, if they say from the deployed running software then they get a tick.

Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.

How often do your staff work overtime? Is any weekend work required? If so, how often?

If you have staff regularly working overtime or weekends, and this is accepted by the organisation, this is a sign that your pace is not sustainable. You will burn your staff out, and they will look for new jobs.

Continuous attention to technical excellence and good design enhances agility.

This is hard to put into a single question; I would start with asking them where they are at with practices like:

  • Unit testing
  • Test automation
  • Dependency injection
  • etc.

These types of technology will change over time, but once you have an indication of where they are at, ask how they improve. Good answers will be things like involvement in open source projects, user groups (VMware, etc.), partner programs (Microsoft, RedHat, etc.) and so on.

One of the processes we used to use was “Friday Afternoon Channel 9”: each week the team members would take turns picking a video from MSDN Channel 9 (and sometimes other sources) and we would all watch it together.


The best architectures, requirements, and designs emerge from self-organizing teams.

Do you have team leaders or managers?

If you have a manager then you are not self-organised. By the same token, leaders are bad too; a lot of people are astounded by this concept and would argue that leadership is a good thing.

If you promote leadership then you give your team a head that can be cut off; what happens when your leader goes on holiday for 3 weeks?

It also prevents your team from taking ownership if they are working under someone’s direction. You will end up with a product architected from a single person’s view as opposed to the entire team’s. It also allows the team to say things such as “I was just doing what I was told”; your team needs to own their work, and in the long run this will give them more job satisfaction as well.

In short, everyone on your team should be a leader, and the scrum master should be there to mediate any conflicts of direction.

Your scrum master should not be a member of the team either; he/she needs to be impartial to the team’s decisions in order to give good guidance. If your scrum master is a team member he will end up becoming a leader and you will have problems.

At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

What is the most astounding change one of your teams has proposed from a retrospective that you can remember?

A bit of a loaded question again. If you simply ask them “Do you do retrospectives?” this won’t tell you much; what you need to be asking is what impact their retrospectives have.

The answer to the above will give you an indication of whether they are following the Scrum retrospective process and seeing a positive outcome from it.

One of the keys of Scrum is empirical process control: if the team is not in charge of themselves, then they do not own their actions, and retrospectives are a key point in the team shaping their own direction.



Compact Framework Builds in AppVeyor

I worked on an issue for NUnit to get their Compact Framework CI builds running in AppVeyor; previously they had a manual process for CF checking.

The changes are all in this commit 

To go into a bit of detail: I found a similar requirement in an R project someone had done, so I did something similar, in that we used the appveyor.yml “install” step to execute a PowerShell script that downloads and installs an MSI on the box.

The bulk of the script below is pretty straightforward: all I’ve done is create a repo for the MSI files, then download and run them with msiexec on the box, with some very verbose checking. On isolated build boxes like AppVeyor, if something goes wrong it’s nice to have a lot of log output.

$url = "https://github.com/dicko2/CompactFrameworkBuildBins/raw/master/NETCFSetupv35.msi"
Progress("Downloading NETCFSetupv35 from: $url")
Invoke-WebRequest -Uri $url -OutFile NETCFSetupv35.msi

$url = "https://github.com/dicko2/CompactFrameworkBuildBins/raw/master/NETCFv35PowerToys.msi"
Progress("Downloading NETCFv35PowerToys from: $url")
Invoke-WebRequest -Uri $url -OutFile NETCFv35PowerToys.msi

Progress("Running NETCFSetupv35 installer")

$msi = @("NETCFSetupv35.msi","NETCFv35PowerToys.msi")
foreach ($msifile in $msi)
{
    if (!(Test-Path $msifile))
    {
        throw "MSI files are not present, please check logs."
    }
    Progress("Installing msi $msifile")
    Start-Process -FilePath "$env:systemroot\system32\msiexec.exe" -ArgumentList "/i `"$msifile`" /qn /norestart" -Wait -WorkingDirectory $pwd -RedirectStandardOutput stdout.txt -RedirectStandardError stderr.txt
    # dump the installer output so failures are diagnosable in the build log
    Progress((Get-Content stdout.txt | Out-String))
    Progress((Get-Content stderr.txt | Out-String))
}
# the targets path here is an assumption; check something the install should have created
if (!(Test-Path "$env:windir\Microsoft.NET\Framework\v3.5\Microsoft.CompactFramework.CSharp.targets"))
{
    throw "Compact framework files not found after install, install may have failed, please check logs."
}

You’ll note at the end, though, that there is a call to “RegistryWorkaround”; I got these errors after setting up the above:

The “AddHighDPIResource” task failed unexpectedly.
System.ArgumentNullException: Value cannot be null.
Parameter name: path1

A quick Google found this forum post about the error, and it solved my problem. You can see my workaround below.

$registryPaths = @("HKLM:\SOFTWARE\Microsoft\VisualStudio\9.0\Setup\VS","HKLM:\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\9.0\Setup\VS")
$Name = "ProductDir"
$value = "C:\Program Files (x86)\Microsoft Visual Studio 9.0"

foreach ($registryPath in $registryPaths)
{
    If (!(Test-Path $registryPath))
    {
        New-Item -Path $registryPath -Force | Out-Null
    }
    If (!(Get-ItemProperty -Path $registryPath -Name $Name -ErrorAction SilentlyContinue))
    {
        New-ItemProperty -Path $registryPath -Name $Name -Value $value `
            -PropertyType String -Force | Out-Null
    }
    If (!((Get-ItemProperty -Path $registryPath -Name $Name).ProductDir -eq $value))
    {
        throw "Registry path $registryPath not set to correct value, please check logs"
    }
    Progress("Registry update ok to value " + (Get-ItemProperty -Path $registryPath -Name $Name).ProductDir)
}

After all that, it adds less than 10 seconds on average to the build, which is good.

And now we have a working Compact Framework build in AppVeyor!



Private NuGet Servers – VS Team Services Package Management

A while back I set up a Klondike server for hosting our internal NuGet packages. We use it for both internal libraries and Octopus.

Microsoft recently released the Package Management feature for VSTS (formerly known as VSO). The exciting thing about Package Management is that they have hinted they will include support for npm and bower in future, so you will have a single source for all your package management.


After installing it in VSTS you will get a new “Package” option in the top bar.


From here you can create new feeds. In my case I’ve decided to break up my feeds to one per project, but you could easily create more per project if you had, for example, separate responsibilities where you wanted more granular permissions. You can restrict the Publish and Read rights on the feeds to users OR groups within VSTS, so it’s very easy to manage, unlike my hack-around for permissions in my previous post about Klondike.


Now, because we use TeamCity, I have considered creating the build service its own account in VSTS, as it needs credentials, but in this example I’m just using my own account.

You will need to change the “pick your tool” option to NuGet 2.x to get your credentials to use in the TeamCity steps.


Then click “Generate NuGet Credentials” and grab the username and password out.



Next hop over to your TeamCity Server, and edit/add your build configuration.

It’s important to note that you will require at least TeamCity version 9.1.6 to do this, as there is a fix in here for nuget credentials.

First jump into “Build Features”, and add a set of NuGet credentials with the URL of your feed that you got from the VSTS interface.


Then jump over to your Build steps and edit/add your nuget steps. Below is an example of my publish step.


The API key I’ve set to “VSTS” as per the instructions in the web interface of VSTS.
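Outside TeamCity, the equivalent push from a command line would look something like this; the feed URL is a placeholder for the one VSTS shows you:

```
nuget push MyCompany.MyProject.1.0.0.nupkg -Source https://myaccount.pkgs.visualstudio.com/_packaging/MyFeed/nuget/v2 -ApiKey VSTS
```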

And we are publishing.


You will see the built packages in the VSTS interface when you are done.


Now if you have an Octopus server like us you will need to add the credentials into it as well into the nuget feeds section.



And its that easy.

One of our concerns about the Klondike server we set up was capacity. Because we have more than 12 developers and run CI with auto deployment to the development environment, we are generating a large number of packages daily as developers check in/commit work, so over a period of months and years the server has become quite bloated, though to give it credit I am surprised at how long it took.

Some queries are taking upwards of 15-20 seconds at times, and we have an issue (which I have not confirmed is related) where packages are randomly “not there” after the build log says they have been successfully published.

I am hoping that the VSTS platform will do us for longer, and it has the added advantage of the granular permissions which we will be taking advantage of as we grow.






Exception Logging and Tracking with Application Insights 4.2

After finally getting the latest version of the App Insights extension installed into Visual Studio, it’s been a breath of fresh air to use.

Just a note: to get it installed, I had to go to installed programs, hit Modify on VS2015 and make sure everything else was updated, then run the installer 3 times; it failed twice, and the 3rd time worked.

Now it’s installed I get a new option under my config file in each of my projects called “Search”.


This will open the Search window inside a tab in Visual Studio to allow me to search my application data. The first time you hit it you will need to log in and link the project to the correct store, though; after that it remembers.


From here you can filter for and find exceptions in your applications and view a whole host of information about them, including information about the server, client, etc. But my favorite feature is the small blue link at the bottom.


Clicking on this will take you to the faulting function. It doesn’t take you to the faulting line (which I think it should), but you can mouse over it to see the line.


One of the other nice features, which was also in the web portal, is the +/- 5 minutes search.


You can use this to run a search for all telemetry within 5 minutes either side of the exception. In the web portal there is also an option of “all telemetry of this session”, which is missing from the VS interface, I hope they will introduce this soon as well.

But the big advantage to this is that if you have set up App Insights for all of your tracking, you will be able to see all of the following for that session or period:

  • Other Exceptions
  • Page Views
  • Debug logging (Trace)
  • Custom Events (if you are tracking things like feature usage in JavaScript this is very handy)
  • Raw Requests to the web server
  • Dependencies (SQL calls)
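As a sketch of how the custom events, traces and exceptions above get into App Insights in the first place (the event and property names here are made up), the server-side TelemetryClient API looks like this:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.ApplicationInsights;

var telemetry = new TelemetryClient();

// Custom event, e.g. feature usage
telemetry.TrackEvent("ExportToPdfUsed",
    new Dictionary<string, string> { { "reportType", "invoice" } });

// Debug logging (shows up as Trace telemetry)
telemetry.TrackTrace("Starting invoice export");

try
{
    // ... the work being monitored ...
}
catch (Exception ex)
{
    // Exceptions logged here show up in the Search window
    telemetry.TrackException(ex);
    throw;
}
```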

Let’s take a look at some of the detail we get on the above for my +/- 5 minute view.

Below is a SQL dependency; this is logging all my queries, so I can see what’s called, when, the time the query took to run, from which server, etc. This isn’t any extra code I had to write: once set up, App Insights will track all SQL queries that run from your application out of the box.


And dependencies won’t just be SQL; they will also be SOAP and REST requests to external web services.

For HTTP request monitoring the detail is pretty basic, but useful.


And for page views you get some pretty basic info also. Not as good as some systems I have seen, but definitely enough to help out with diagnosing exceptions.


I’ve been using this for a few days now and find it so easy to just open my solution in the morning and do a quick check for exceptions, narrow them down and fix them. It still needs another version or two before it has all the same features as the web interface, but it’s definitely worth a try if you have App Insights set up.




Building C# projects with Cake Build

I’ve been helping out the NUnit team the last few weeks in my spare time; one thing I was interested to check out is Cake, which they are using for building.

Off the bat, I don’t recommend using Visual Studio to edit the cake files; I was unable to get the IDE to work correctly, it just made a mess of the file. I’ve been using Visual Studio Code instead, which has an add-on for Cake.

There is also a VSTS add-on available for a Cake build step, or you can just run a PowerShell command line if your build server doesn’t have a build step for it OOTB.

Cake chains tasks together with dependencies and criteria.

Task("Build")
    .Does(() =>
{
    // code to build your project here
});

You can then use the IsDependentOn and WithCriteria methods to chain dependencies, like below.

Task("Build")
    .IsDependentOn("Clean")
    .Does(() =>
{
    // code to build your project here
});

Task("Clean")
    .WithCriteria(!BuildSystem.IsLocalBuild)
    .Does(() =>
{
    // code to do stuff before building your project here
});

When you run Cake from the command line you need to specify the task name which is your entry point. With NUnit, what the guys have done is define a number of tasks that are the main target entry points at the end of the cake file. An example would be something like the below: when you specify a “ReleaseBuild” target it does the steps to build, test and then package, in that order.


Task("Build")
    .Does(() =>
{
    // code to Build your project here
});

Task("Test")
    .IsDependentOn("Build")
    .Does(() =>
{
    // code to run your tests here
});

Task("Package")
    .IsDependentOn("Test")
    .Does(() =>
{
    // code to package up your project here
});

// the entry point target: Build -> Test -> Package
Task("ReleaseBuild")
    .IsDependentOn("Package");

Arguments from the command line can be picked up from within the script using the Argument method.

// if not specified at the command line the value will be Debug
var configuration = Argument("configuration", "Debug");

Building projects from within Cake is pretty straightforward; you just need to call the built-in MSBuild method.

// Use MSBuild
MSBuild("MySolution/MyProject.csproj", new MSBuildSettings()
    .SetConfiguration(configuration));

If you want to use Travis CI you’ll need to use XBuild instead of MSBuild, and also add a build.sh file to your project to kick off the cake script.

Here is a ref to their cake file for nunit that we have been working on

I’m not “blown away” with Cake so far. The good parts are:

  • It’s multi-platform, so you can run free Travis CI builds with it if you are working on an open source project like NUnit.
  • It’s easy to get a consistent experience if you are using multiple build servers (AppVeyor, Travis, TFS, etc.).
  • The scripting language is C#, so for developers who might struggle with PowerShell, or with node.js/JavaScript in the case of gulp, it’s a lot more familiar.
  • While there isn’t a “huge” amount of libraries out there, it’s got all the core stuff you need (nuget, git, msbuild, etc.).

The bad parts that I don’t like are:

  • The documentation is pretty light, and it’s hard to find sites out there with examples.
  • If you actually want to support builds on Travis there are language limitations to be aware of, so you can end up with a cake script that functions on a Windows box but fails when run on Travis. The example code below fails on Cake in Travis but works on Windows.


var MyString = "A string";

// a script-level helper method referencing a script-level variable
void WriteStringConsole()
{
    Console.WriteLine(MyString);
}

Task("Default")
    .Does(() =>
{
    WriteStringConsole();
});
  • While the ability to have a consistent build experience across multiple build servers is good, who actually uses multiple build servers to build a single product? I think this will be useful for open source products, where another company may want to fork and start building a version for their own use, but I don’t see it as advantageous for private code.
  • I haven’t found an effective way to do templates with it yet (happy to be corrected on this, as the documentation is light). The build templating system we use in TeamCity is awesome; the ability to use one central template then tweak it slightly for each project is second to none, and TFS seems to be following suit with this too. Using a single script file makes this a lot harder; sure, you can toggle things on and off with code, but in TeamCity there is a “GUI” to do this.

Overall I think Cake is cute. It definitely has its place in open source projects, but I’m not going to be moving projects over to it en masse.




TeamCity Build Artifacts

Build artifacts, I find, are handy for things you want to throw around on the build server, such as command-line tools from open source projects, etc.

Where I would make the call between a build artifact and a package (npm, nuget, etc.) is when it’s something that needs to run on the developer’s local machine, i.e. packages will download to both the build server and the developer’s machine, whereas a build artifact is really only designed for use on the build server.

Build Artifacts are also handy for when you need to break builds into multiple builds for stages.

We’ve got a few TFS command-line tools that we use to update data in TFS from our TeamCity server, all built from GitHub projects, so these are good examples of a command-line tool we build internally that is only used on the build server itself.

You could use build artifacts for other things as well, but I prefer for anything serious putting it into a package manager, as this allows for better version control management.

The artifact output is controlled from the General tab in your build’s Configuration Settings.


Once you have at least one successful build you can use the folder tab to browse the build output and pick what you need (normally in the bin\release folder).

The format you need to use is

SourceFileOrFolder1 => TargetFileOrFolder1
SourceFileOrFolder2 => TargetFileOrFolder2

You can specify a zip file for your output, which I would recommend to save space. To do this you simply give it the location in the format “MyFile.zip!/Subfolder” and it will compress your output into a folder in the zip file.

SourceFileOrFolder1 => TargetZipFile!/TargetFileOrFolder1
SourceFileOrFolder2 => TargetZipFile!/TargetFileOrFolder2

After that’s done you can run a build and check the output in the Artifacts tab of the completed build.


Once you have this working you can then go to other builds and add this output as a dependency.

So in the other builds you will use the Dependencies tab, as seen below.


And you need to use a similar format to include the files in this build.

SourceArtifactFolderOrFile => TargetBuildDirOutputFodlerOrFile

Again, you can also use the ! to browse inside zip files to pull out content.


In the above example I will have the command-line app I need in the build output folder, under the TfsCreateBuildCmd folder.

So I can now add a build step that calls this command using “TfsCreateBuildCmd\TfsCreateBuild.exe” and does something.

And it’s that easy 🙂




Startup/Shutdown VMs in Azure after hours – Gotchas

A few of our VMs (dev/test servers) don’t need to be on overnight, so we have some scripts to shut them down. This is a little bit tricky in Azure because of the runbook credentials. These are easy to create; there is a good post here about it. But in all the articles I’ve read, no one mentions that the passwords in Azure AD expire, so every 90 days or so you have to go in and reset your passwords.

Another gotcha I ran into was that with the runbooks, errors don’t make them fail, only exceptions do. So I had to check for error states and throw.

So when my automation user’s credentials expired and started throwing errors, I got no alerts about it. Until, that is, someone read that month’s bill 🙂


So I’ve put together a little post on how to work around these, as it’s not easy.

First of all, let’s assume you have followed the above post already and have automation credentials.

You then need to use PowerShell to set the user’s password to never expire. To do this you need to download and install the following:

  1. Microsoft Online Services Sign-In Assistant for IT Professionals RTW
  2. Windows Azure Active Directory Module for Windows PowerShell 

After that you can use the following PowerShell script from your local machine to set the user’s password to never expire.

WARNING: You cannot use a Microsoft LIVE account to run this script, you need to use an organisational account.

Import-Module MSOnline
# you cannot login with a LIVE account, it must be an organisational account
Set-MsolUser -UserPrincipalName "myaccount@myorg.onmicrosoft.com" -PasswordNeverExpires $true
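To confirm the change took effect, you can query the flag back with a cmdlet from the same MSOnline module (account name is the same placeholder as above):

```powershell
Get-MsolUser -UserPrincipalName "myaccount@myorg.onmicrosoft.com" |
    Select-Object UserPrincipalName, PasswordNeverExpires
```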

Now, below are my shutdown and startup scripts that I set on a schedule, with error detection for common errors in them.

workflow shutdown
{
    $Cred = Get-AutomationPSCredential -Name 'MyAutomationCred'

    $a = Add-AzureAccount -Credential $Cred -ErrorAction Stop
    if ($a.Subscriptions) {
        Write-Output 'User Logged in'
    } else {
        throw 'User logged in with no subscriptions'
    }
    Select-AzureSubscription 'MySubscription'
    #Array of server names here
    $VMS = "web02","web03"
    ForEach ($VM in $VMS)
    {
        $aVM = Get-AzureVM $VM
        if ($aVM -eq $null)
        {
            throw "Unable to get VM, check permissions perhaps?"
        }
        $VMName = $aVM.Name
        Write-Output "Attempting to stop VM: $VMName"
        Stop-AzureVM -ServiceName $aVM.ServiceName -StayProvisioned $true -Name $aVM.Name
    }
}


workflow startup
{
    $Cred = Get-AutomationPSCredential -Name 'MyAutomationCred'

    $a = Add-AzureAccount -Credential $Cred -ErrorAction Stop
    if ($a.Subscriptions) {
        Write-Output 'User Logged in'
    } else {
        throw 'User logged in with no subscriptions'
    }
    Select-AzureSubscription 'MySubscription'
    #Array of server names here
    $VMS = "web02","web03"
    ForEach ($VM in $VMS)
    {
        $aVM = Get-AzureVM $VM
        if ($aVM -eq $null)
        {
            throw "Unable to get VM, check permissions perhaps?"
        }
        $VMName = $aVM.Name
        Write-Output "Attempting to start VM: $VMName"
        Start-AzureVM -ServiceName $aVM.ServiceName -Name $aVM.Name
    }
}

You will note the checks and throws: without them, errors are silently ignored by the runbook.
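If you prefer to script the schedules as well rather than clicking through the portal, the Azure Service Management automation cmdlets of the same era can do it. The automation account name and times below are made up for the example, and I’m writing these parameter names from memory, so double-check them with Get-Help before relying on this:

```powershell
# Assumes an automation account called "MyAutomation" (hypothetical name)
New-AzureAutomationSchedule -AutomationAccountName "MyAutomation" `
    -Name "NightlyShutdown" -StartTime (Get-Date "19:00").AddDays(1) -DayInterval 1

# Attach the shutdown runbook to the schedule
Register-AzureAutomationScheduledRunbook -AutomationAccountName "MyAutomation" `
    -RunbookName "shutdown" -ScheduleName "NightlyShutdown"
```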


Sharing files between Visual Studio projects, where the file is included in the project

We have a standard deployment script that runs within the scope of the web app. It’s an Octopus PreDeploy.ps1 script. It uses things like the name of the project to make decisions about what to call the user account on the app pool, the web site name, the app pool name, etc. There are a few things we need that can’t be covered by the standard Octopus IIS step (e.g. one is that we deploy our web services to a versioned URL, https://myservice.com/v1.1/endpoint/).

If you are starting from scratch I’d be inclined not to do what we did, and instead use separate steps for this; the new features in Octopus 3.3 support storing a script in a package that you could use for this.

So to share this between our projects we decided to put it into a NuGet package and install it that way. This means we need to treat it like content rather than a DLL, but it still needs to be included in the project so that OctoPack will bundle it up into the package.

To do this we created install.ps1 and uninstall.ps1 files that include the file from the NuGet package as a linked item in the Visual Studio project.

So the nuspec file needed to be modified as follows.

You will note the target of the (un)install files is set to tools; this makes Visual Studio execute them on install and uninstall. The file we want to add is placed in the package root.

<file src="PreDeploy.ps1" target="." />
<file src="install.ps1" target="tools" />
<file src="uninstall.ps1" target="tools" />
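Putting those file entries in context, a complete minimal nuspec would look something like this (the package id, version, and description are placeholders for the example):

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyDeploymentPackage</id>
    <version>1.0.0</version>
    <authors>MyCompany</authors>
    <description>Shared Octopus PreDeploy script</description>
  </metadata>
  <files>
    <file src="PreDeploy.ps1" target="." />
    <file src="install.ps1" target="tools" />
    <file src="uninstall.ps1" target="tools" />
  </files>
</package>
```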

Then the install.ps1 file looks as follows.

You will note it uses the MSBuild libraries from PowerShell, executing inside Visual Studio. This gives us the handy GetItems method on the project, returning all content items, so we can check for previous versions and remove them.

It needs to be a content item because octopack will only package content items out of the box.

This is further filtered for items which have the package name in the path (e.g. it would look something like “packages\MyDeploymentPackage\predeploy.ps1”). If you had multiple files to add you could use an array here to remove all files instead of one.

We store the matched item in a variable because we can’t call Remove mid-loop (you’ll get an error); we remove it after the loop has completed.

Then prepare a new content item and save it into the project. You could do a dir listing on that folder and add all files, if you wanted to do multiple.

param($installPath, $toolsPath, $package, $project)

$predeployfilename = "predeploy.ps1"

# Need to load the MSBuild assembly if it's not loaded yet.
Add-Type -AssemblyName 'Microsoft.Build, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'

# Grab the loaded MSBuild project for the project
$buildProject = [Microsoft.Build.Evaluation.ProjectCollection]::GlobalProjectCollection.GetLoadedProjects($project.FullName) | Select-Object -First 1

Write-Host ("Adding $predeployfilename into project " + $project.Name)
$PackName = $package.Id
Write-Host '$package.Id' = $package.Id
$nodeDelegate = $null

# Find any previously installed copy of the file (its path contains the package name).
# We can't remove mid-loop, so stash the item and remove it afterwards.
$buildProject.GetItems('Content') | Where-Object { $_.EvaluatedInclude -match $PackName } | ForEach-Object {
    Write-Host "Removing previous $predeployfilename item"
    $nodeDelegate = $_
}

Write-Host '$nodeDelegate' = $nodeDelegate
if ($nodeDelegate -ne $null)
{
    Write-Host ("Removing old item: " + $predeployfilename)
    $buildProject.RemoveItem($nodeDelegate)
}

$projectItem = Get-ChildItem $project.FullName
$predeployfile = Resolve-Path ($installPath + "\" + $predeployfilename)
Set-Location $projectItem.Directory
$predeployrel = Get-Item $predeployfile | Resolve-Path -Relative

# For linked items the Include attribute is the relative path to that item,
# and the Link metadata is the local display name.
$metadata = New-Object 'System.Collections.Generic.Dictionary[System.String, System.String]'
$metadata.Add('Link', $predeployfilename)

$target = $buildProject.AddItem("Content", $predeployrel, $metadata)

$project.Save()

Write-Host ("$predeployfilename added.")

The uninstall.ps1 looks the same, except it only has the step to remove, not add. And it’s that easy!
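For completeness, here is a sketch of what that uninstall.ps1 can look like, reusing the same search-by-package-name approach as install.ps1 (file and variable names are the same placeholders as above):

```powershell
param($installPath, $toolsPath, $package, $project)

$predeployfilename = "predeploy.ps1"
Add-Type -AssemblyName 'Microsoft.Build, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'

$buildProject = [Microsoft.Build.Evaluation.ProjectCollection]::GlobalProjectCollection.GetLoadedProjects($project.FullName) | Select-Object -First 1

# Find the linked item that install.ps1 added (path contains the package name),
# stashing it so we remove it outside the pipeline
$node = $null
$buildProject.GetItems('Content') | Where-Object { $_.EvaluatedInclude -match $package.Id } | ForEach-Object { $node = $_ }

if ($node -ne $null)
{
    Write-Host ("Removing " + $predeployfilename)
    $buildProject.RemoveItem($node)
    $project.Save()
}
```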

Also note that the contentFiles feature in NuGet 3.3, which is not supported by Visual Studio yet, may solve this too; I haven’t seen it in action yet.




Use Parameters from TFS Test Cases in NUnit Unit Tests

One of the things we have been doing a lot lately is getting our testers more involved in API method creation/development. With technologies like Swagger this is becoming a lot easier to do, and I find it’s great to have a “test driven” mind helping developers out with creating unit tests at a low level like this.

One of our problems though is that the testers have been coming up with test cases fine, but getting them into our code has been a mission.

Our testers know enough code to have a conversation about it, and to read basic loops and ifs, but not enough to edit it, so in some cases we’ve been having Chinese-whispers issues getting the cases into the code. We have also had some complex test methods recently (10+ parameters, 30+ test cases), and data like that is hard to visualize when stored in C# code rather than something like a grid/table (Excel, etc.).

For the latter requirement we had considered CSV storage, and I have done some code for this in my library if you want to use it. But we decided to go with TFS test cases for storing the data, because I was working on updating the test cases from the NUnit tests anyway.
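To illustrate the CSV option we considered (this is a sketch, not the code from my library; the file name and naive comma split are assumptions, and quoted fields would need a real CSV parser):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;

public static class CsvTestData
{
    // Reads "param1,param2,..." rows from a CSV file checked in next to the tests,
    // skipping the header row, and yields one object[] per test case.
    public static IEnumerable<object[]> GetTestData()
    {
        return File.ReadLines("MyMethodCases.csv")
                   .Skip(1) // header row
                   .Select(line => line.Split(',').Cast<object>().ToArray());
    }
}

// Usage with NUnit:
// [Test, TestCaseSource(typeof(CsvTestData), "GetTestData")]
// public void MyUnitTestMethod(string someParam1, string someParam2) { ... }
```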

Ideally I wanted to do something like this:

public void MyUnitTestMethod()
{
    // Test stuff here
}

And have it draw from the test case in TFS (example below).


However there are some issues with passing parameters to NUnit tests that prevented me from doing this.

So instead what we have to do is create a class for each unit test that wants to get its data from a test case in TFS. This is the current workaround. I may yet fix this in NUnit one weekend if I am not too hungover (but there is little chance of a weekend like that).

The library I am using here is available on nuget.org.

So using the library, you have to do the following:

1. Create a class for the data source that inherits from NUnitTfsTestCase.TestCaseData.TestCaseDataTfs, and hard-code your TFS test case ID in it.

using System.Collections.Generic;
using NUnitTfsTestCase.TestCaseData;

namespace MyNamespace.MyFixture.MyMethod // use a good folder structure please 🙂
{
    class GetTestCaseData : TestCaseDataTfs
    {
        public static IEnumerable<dynamic> GetTestData()
        {
            return TestCaseDataTfs.GetTestDataInternal(65079);
        }
    }
}

2. Add an attribute to your test method that references this:

class MyTestClass
{
    [Test, TestCaseSource(typeof(MyFixture.MyMethod.GetTestCaseData), "GetTestData")]
    public void MyUnitTestMethod(string someParam1, string someParam2)
    {
        // Test stuff here
    }
}

3. Add appSettings entries for the TFS server and TFS project name:

<add key="tpcUri" value="https://mytfsserver.com.au/tfs/DefaultCollection" />
<add key="teamProjectName" value="MyProjectName" />
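In context, these entries sit in the test project’s app.config (the server URL and project name are the same placeholders as above):

```xml
<configuration>
  <appSettings>
    <add key="tpcUri" value="https://mytfsserver.com.au/tfs/DefaultCollection" />
    <add key="teamProjectName" value="MyProjectName" />
  </appSettings>
</configuration>
```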

It’s important to note here that I am using a build agent on the same domain as an on-premise TFS server, using a domain account with appropriate permissions on the TFS server. I will work on example code in the near future for this to hit VSTS (formerly VSO) with credentials baked into the appSettings.

Another thing to note is that NUnit pulls out the data and binds it to parameters by ordinal, so you need to make sure the parameter data columns in TFS are in the same order as the parameters on the unit test method. I think I should be able to fix this so it matches the columns in TFS to the parameters on the method; I’ll have another crack in a few weeks.

The source code for the library is available here. Feel free to send a PR if you want to improve or fix anything 🙂