Updating Test Cases in TFS from NUnit Tests run in TeamCity Builds

One of the things I always try to impress upon the business is the need for dashboards, and TFS provides a great interface for this. We use them to report on our manual testing, and we were looking at also using them to report on test automation.

In the past I've seen CodedUI test results update test cases via links to test automation in Test Manager. I thought we might be able to use this, but it's pretty hard; instead I just used the TFS client (the old one, not the new REST client unfortunately; I'll revise the code in another post). Below is the final outcome I wanted to end up with on our dashboard:


The concept is pretty straightforward: we wanted to achieve something like the following with our NUnit unit tests (the attribute name in the sketch below is illustrative):

[Test]
[TfsTestCase(65301)] // hypothetical attribute name carrying the TFS Test Case work item ID
public void MyUnitTestMethod()
{
    // Test stuff here
}

So we could simply add an attribute to the unit test with the ID of the test case and the library would take care of the rest. And it's "almost" that easy.

In order to log test results you also need the Test Suite ID and the Test Plan ID from TFS, and we need to initialize the run at the start, then close it at the end, so we end up with all of our unit tests in the same run (I've done one run per fixture, but you could change this).

So in the library there are two static ints that need to be set, and I also created a "Test Rig" class that my test fixtures inherit from, which has the appropriate OneTimeSetUp and OneTimeTearDown methods to start the test run and then close it off.
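As a rough sketch, the rig looks something like the below; StartRun and CloseRun are hypothetical stand-ins for the library's actual calls:

using NUnit.Framework;

public abstract class ControllerTestRig
{
    [OneTimeSetUp]
    public void OpenTfsRun()
    {
        // Open a TFS test run for this fixture (the plan/suite IDs must already be set)
        NUnitTfsTestCase.TfsService.StartRun(); // hypothetical call
    }

    [OneTimeTearDown]
    public void CloseTfsRun()
    {
        // Flush the cached test results and close the run off
        NUnitTfsTestCase.TfsService.CloseRun(); // hypothetical call
    }
}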

There are also some static variables for caching all the test results and adding them in one go at the end (I couldn't work out how to add them as they are running).

The end result is something like the below:

class MyTestClass : ControllerTestRig
{
    public MyTestClass() // ctor for setting static vars, runs before OneTimeSetUp
    {
        NUnitTfsTestCase.TfsService.RunData.TestPlanId = 65213;
        NUnitTfsTestCase.TfsService.RunData.TestSuiteId = 65275;
    }

    [Test]
    [TfsTestCase(65301)] // hypothetical attribute name, as above
    public void MyUnitTestMethod()
    {
        // Test stuff here
    }
}

It's important to note at this stage that this library assumes your build agent can authenticate with your TFS server. In our first case we are using on-premise TFS and a build agent running under a domain Windows user account with the correct permissions. I will do an update later for different auth methods, and also do an example for VSTS (formerly VSO), as I am going to move this over to a VSTS hosted project in the next few weeks.

Also, you need to put the following app settings in for the library to pick up your TFS server address and project name:

<add key="tpcUri" value="https://mytfsserver.com.au/tfs/DefaultCollection" />
<add key="teamProjectName" value="MyProjectName" />

Once the above is done we get a nice test run for each test fixture in our test library, as well as the test cases having their status updated.



For any test that errors, it will pass on the error status and also the message. We are using NUnit 3 for this, because 2.x isn't under active development and doesn't provide a lot of detail in the TestContext for us to log to TFS.

The source code is available on GitHub here and the NuGet package is available on nuget.org. Feel free to send a PR if you want to change or fix something!


Swagger WebAPI JSON Object formatting standards between C#, TypeScript and Others

When designing objects in C# you use Pascal casing for your properties, but in other languages you don't; an example (other than Java) is TypeScript, and here's an article from Microsoft about it.

And that's cool, a lot of languages have different standards, and depending on which one you are in you write a little differently.

The problem comes when you work with a standard for cross-platform communication that is case sensitive, an example being Swagger using REST and JSON.

So the issue we had today was a WebAPI project generating objects like this:

"ObjectId": 203
"ObjectName" : "My Object Name"

When Swaggerated, the object comes out with the correct Pascal casing; however, when using Swagger codegen the object is converted to camel case (TypeScript below):

export interface MyObject {
    objectId: number;
    objectName: string;
}

The final output is a generated client library that can’t read any objects from the API because JavaScript is case sensitive.

After some investigation we found that when the Swagger output is camel cased, the C# client generators (AutoRest and Swagger codegen) will output Pascal-cased C# code with JsonProperty attributes that do the translating from camel to Pascal, like the below example:

/// <summary>
/// Gets or Sets TimeZoneName
/// </summary>
[JsonProperty(PropertyName = "timeZoneName")]
public string TimeZoneName { get; set; }

So to avoid pushing shit up hill we decided to roll down it. I found this excellent article on creating a filter for WebAPI to convert all your Pascal-cased objects to camel case on the fly.
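If you just want the behaviour globally rather than per action, a minimal sketch using Json.NET's built-in resolver looks like this:

// WebApiConfig.cs: serialize Pascal-cased C# properties as camelCase JSON on the wire
using System.Web.Http;
using Newtonsoft.Json.Serialization;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.Formatters.JsonFormatter.SerializerSettings.ContractResolver =
            new CamelCasePropertyNamesContractResolver();
    }
}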

So we found that the best practice is:

  • Write your Web API C# in Pascal casing
  • Convert from Pascal to camel-cased JSON objects using an action filter
  • Generating the client with TypeScript (or another camel-cased language) using the default options will then just work
  • Generating the C# client will add the JsonProperty attributes to translate from camel to Pascal, and the resulting C# client will be Pascal cased

I raised a GitHub Issue here with a link into the source code that I found in Swagger codegen, however I later realized that changing the way we do things would mitigate the long-term pain.

Automated SSRS Report Deployments from Octopus

Recently we started using the SQL Data Tools preview in 2015. It's looking good so far; for the first time in a long time I don't have to "match" the Visual Studio version I'm using with SQL, I can use the latest version of VS and support SQL Server versions from 2008 to 2016, which is great. It isn't perfect though: there are a lot of bugs in the latest version of Data Tools, and some that Microsoft is refusing to fix affect us.

One of the sticking points for me has always been deployment; we use Octopus internally for deployment and try to run everything through that if we can, to simplify our deployment environment.

SSRS is the big one, so I’m going to go into how we do deployments of that.

We use a tool that comes with SQL Server called RS.EXE; this is a command line tool that pre-installs with SSRS for running VB scripts to deploy reports and other objects in SSRS.

Be aware with this tool, though, that the available methods differ based on which API endpoint you call using the "-e" command line parameter. Out of the box it defaults to calling the 2005 endpoint, which has massive changes compared to 2010.

A lot of the code I wrote was based on Tom's blog post on the subject in 2011, however he was using the 2005 endpoint and I have updated the code to use the 2010 endpoint.

The assumption is that you will have a folder full of RDL files (and possibly some PNGs and connection objects) that you need to deploy, so the VB script takes parameters for a "source folder", the "target folder" on the SSRS server, and the "SSRS server address" itself. I call this from a predeploy.ps1 file that I package up into a NuGet file for Octopus; the line in the pre-deploy script looks like the below.

rs.exe -i Deploy_VBScript.rss -s $SSRS_Server_Address -e Mgmt2010 -v sourcePath="$sourcePath" -v targetFolder="$targetFolder"

You will note the "-e Mgmt2010"; this is important for the script to work. Next is Deploy_VBScript.rss:

Dim definition As [Byte]() = Nothing
Dim warnings As Warning() = Nothing

Public Sub Main()
    ' Create the folder(s) for the project if they don't exist
    Try
        Dim descriptionProp As New [Property]
        descriptionProp.Name = "Description"
        descriptionProp.Value = ""
        Dim visibleProp As New [Property]
        visibleProp.Name = "Visible"
        visibleProp.Value = True
        Dim props(1) As [Property]
        props(0) = descriptionProp
        props(1) = visibleProp
        If targetFolder.Substring(1).IndexOf("/") <> -1 Then
            ' Two-level target folder: create the parent, then the child
            Dim level2 As String = targetFolder.Substring(targetFolder.LastIndexOf("/") + 1)
            Dim level1 As String = targetFolder.Substring(1, targetFolder.LastIndexOf("/") - 1)
            rs.CreateFolder(level1, "/", props)
            rs.CreateFolder(level2, "/" + level1, props)
        Else
            Console.WriteLine(targetFolder.Replace("/", ""))
            rs.CreateFolder(targetFolder.Replace("/", ""), "/", props)
        End If
    Catch ex As Exception
        If ex.Message.IndexOf("AlreadyExists") > 0 Then
            Console.WriteLine("Folder {0} exists on the server", targetFolder)
        Else
            Throw
        End If
    End Try

    Dim Files As String()
    Dim rdlFile As String
    Files = Directory.GetFiles(sourcePath)
    For Each rdlFile In Files
        If rdlFile.EndsWith(".rds") Then
            'TODO Implement handler for RDS files
        End If
        If rdlFile.EndsWith(".rsd") Then
            'TODO Implement handler for RSD files
        End If
        If rdlFile.EndsWith(".png") Then
            Console.WriteLine(String.Format("Deploying PNG file {2} {0} to folder {1}", rdlFile, targetFolder, sourcePath))
            Dim stream As FileStream = File.OpenRead(rdlFile)
            Dim x As Integer = rdlFile.LastIndexOf("\")

            definition = New [Byte](stream.Length - 1) {}
            stream.Read(definition, 0, CInt(stream.Length))
            stream.Close()
            Dim fileName As String = rdlFile.Substring(x + 1)
            Dim prop As New [Property]
            prop.Name = "MimeType"
            prop.Value = "image/png"
            Dim pngProps(0) As [Property]
            pngProps(0) = prop
            rs.CreateCatalogItem("Resource", fileName, targetFolder, True, definition, pngProps, warnings)
        End If
        If rdlFile.EndsWith(".rdl") Then
            Console.WriteLine(String.Format("Deploying report {2} {0} to folder {1}", rdlFile, targetFolder, sourcePath))
            Dim stream As FileStream = File.OpenRead(rdlFile)
            Dim x As Integer = rdlFile.LastIndexOf("\")

            Dim reportName As String = rdlFile.Substring(x + 1).Replace(".rdl", "")

            definition = New [Byte](stream.Length - 1) {}
            stream.Read(definition, 0, CInt(stream.Length))
            stream.Close()

            Try
                rs.CreateCatalogItem("Report", reportName, targetFolder, True, definition, Nothing, warnings)
                If Not (warnings Is Nothing) Then
                    Dim warning As Warning
                    For Each warning In warnings
                        Console.WriteLine(warning.Message)
                    Next warning
                Else
                    Console.WriteLine("Report: {0} published successfully with no warnings", reportName)
                End If
            Catch e As Exception
                Console.WriteLine(e.Message)
            End Try
        End If

    Next rdlFile
End Sub

I haven't yet added support for RDS and RSD files to this script; if someone wants to finish it off, please share. For now it handles PNGs and RDLs, which are the main things we update.

Now that all sounds too easy, and it was. The next issue I ran into was this one when deploying: the VS 2015 Data Tools client saves everything in 2016 format, and when I say 2016 format, I mean it changes the xmlns and adds a tag called "ReportParametersLayout".

When you deploy from the VS client it appears to “rollback” these changes before passing the report to the 2010 endpoint, but if you try to deploy the RDL file from source it will fail (insert fun).

To work around this I had to write my own "roll back" step in the Octopus pre-deploy PowerShell script, below (the namespace swap is the key part):

Get-ChildItem $sourcePath -Filter "*.rdl" | ForEach-Object {
    [Xml]$xml = [xml](Get-Content $_.FullName)
    if ($xml.Report.GetAttribute("xmlns") -eq "http://schemas.microsoft.com/sqlserver/reporting/2016/01/reportdefinition") {
        # Strip the 2016-only element, then roll the namespace back
        # (2010/01 as the target is an assumption; adjust if your server expects a different schema)
        if ($xml.Report.ReportParametersLayout -ne $null) {
            $xml.Report.RemoveChild($xml.Report["ReportParametersLayout"]) | Out-Null
        }
        [Xml]$xml = $xml.OuterXml -replace '2016/01/reportdefinition', '2010/01/reportdefinition'
        $xml.Save($sourcePath + '\' + $_.BaseName + '.rdl')
    }
}

Now things were deploying w… no, wait! There's more!

Another error I hit was this next one; I mentioned before that MS is refusing to fix some bugs, and here's one. We work in Australia, so time zones are a pain in the ass: we have some states on daylight savings, some not, and some that were but have now stopped. Add to this that we work on a multi-tenanted application, so while some companies can rule "We are in Sydney time, end of story", we cannot. Using the .NET functions for time zone conversion is a must, as they take care of all this nonsense for you.

I make a habit of storing all date/time data in UTC; that way, if you need to do a comparison in the database or application layer, it's easy, and I then convert to the local time zone in the UI based on user preference, business rule, etc. It's because of this that we have been using System.TimeZoneInfo in the RDL files. As of VS 2012 and up, you get an error from this library in VS; the report will still run fine on SSRS when deployed, it just errors in the editor in VS and won't preview or deploy.
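For reference, the kind of report expression I mean looks something like this (the field name is illustrative):

=System.TimeZoneInfo.ConvertTimeBySystemTimeZoneId(Fields!CreatedDateUtc.Value, "AUS Eastern Standard Time")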

We've worked around this by using a CLR function that wraps TimeZoneInfo, example here. If you are reading this, please vote this one up on Connect; converting time zones in the UI is better.

After this fix things are running smoothly.






AutoRest, Swagger-codegen and Swagger

One of the best things about Swagger is being able to generate a client. For me Swagger is for REST what WSDL was for SOAP. One of my big dislikes about REST from the start was that it was hard to build clients because the standard was so loose, and with most services, if you got one letter's casing wrong in a large object, you would get a generic 400 response with no clue as to what the actual problem might be.

Enter Swagger-codegen, a Java-based command line app for generating proxy clients based on the Swagger standard. Awesomesauce! However, I'm a .NET developer and I try to avoid adding new dependencies to my development environment (like J2SE); that's OK though, they have a REST API you can use to generate the clients as well.

In working on this though, I found that MS is also working on their own version of codegen, called AutoRest. AutoRest only supports 3 output formats at the moment: Ruby, Node.js (TypeScript) and C#. But looking at the output from both and comparing them, I am much happier with the AutoRest output; it's a lot cleaner.

So in our case we have 3 client requirements: C#, client-side JavaScript, and client-side TypeScript.

Now whichever way you go with this, one requirement is that you need to be able to "run" your WebAPI service on a web server to generate the swagger JSON file that will be used in the client code generation. You could add it into a CI pipeline with your Web API, but you would need build steps like:

  1. Build WebAPI project
  2. Deploy Web API project to Dev server
  3. Download json file from Dev Server
  4. Build client

Or you could make a separate build that you run; I've tried both ways and they work fine.

So we decided to use AutoRest for the C# client. This was pretty straightforward; the AutoRest exe is available in a NuGet package, so for our WebAPI project we simply added this, which made it available at build time. Then it was simply a matter of adding a PowerShell step into TeamCity for the client library creation. AutoRest will output a bunch of C# .cs files that you will need to compile, which is simply a matter of using csc.exe; after this I copy over a nuspec file that I have pre-baked for the client library.


.\Packages\autorest.0.13.0\tools\AutoRest.exe -OutputDirectory GeneratedCSharp -Namespace MyWebAPI -Input http://MyWebAPI.net/swagger/docs/v1 -AddCredentials
& "C:\Program Files (x86)\MSBuild\14.0\bin\csc.exe" /out:GeneratedCSharp\MyWebAPI.Client.dll /reference:Packages\Newtonsoft.Json.6.0.4\lib\net45\Newtonsoft.Json.dll /reference:Packages\Microsoft.Rest.ClientRuntime.1.8.2\lib\net45\Microsoft.Rest.ClientRuntime.dll /recurse:GeneratedCSharp\*.cs /reference:System.Net.Http.dll /target:library
xcopy MyWebAPI\ClientNuspecs\CSharp\MyWebAPI.Client.nuspec GeneratedCSharp

You will note from the above command lines for csc that I have had to add in some references to get it to compile; these need to go into your nuspec file as well, so people installing your client package will have the correct dependencies. A snip from my nuspec file below:

<frameworkAssembly assemblyName="System.Net.Http" targetFramework="net45" />
<dependency id="Microsoft.Rest.ClientRuntime" version="1.8.2" />
<dependency id="Newtonsoft.Json" version="6.0.8" />

After this just add a NuGet Publish step and you can start pushing your library to nuget.org, or in our case just our private internal server.

For authentication we use Basic Auth over SSL, so adding the "-AddCredentials" command line parameter is needed to generate the extra methods and properties for us; you may or may not need this.

Below is an example console app where I have installed the NuGet package that AutoRest created; this uses basic auth, which you may not need.

using System;
using Microsoft.Rest; // for BasicAuthenticationCredentials

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            var svc = new MyClient(); // the generated client
            svc.BaseUri = new Uri("https://MyWebAPILive.com");
            svc.Credentials = new BasicAuthenticationCredentials { UserName = "MyUser", Password = "MyPassword!" };
        }
    }
}

Next we have Swagger codegen for our client-side libraries. As I said before, I don't want to add J2SE into our build environment, to avoid complexity, so we are using the API. I've built a gulp job to do this.

Why gulp? The JavaScript client output from codegen is pretty rubbish, so instead of using it I'm getting the TypeScript library and compiling it, then minifying; I find this easier to do in gulp.

The Swagger UI for the Swagger codegen API is here. When you call the POST /gen/clients method you pass in your JSON file, and it returns a URL that you can use to download a zip file with the client package. Below is my gulpfile:

var gulp = require('gulp');
var fs = require('fs');
var request = require('request');
var concat = require('gulp-concat');
var unzip = require('gulp-unzip');
var ts = require('gulp-typescript');
var tsd = require('gulp-tsd');
var tempFolder = 'temp';

// doco https://generator.swagger.io/#/
gulp.task('default', ['ProcessJSONFile'], function () {
});

gulp.task('ProcessJSONFile', function (callback) {
    return request('http://MyWebAPI.net/swagger/docs/v1',
        function (error, response, body) {
            if (error != null) {
                return console.error('downloading the swagger json failed:', error);
            }
            ProcessJSONFileSwagOnline(body);
        });
});

function ProcessJSONFileSwagOnline(bodyData) {
    bodyData = "{\"spec\":" + bodyData + "}"; // Swagger codegen web API requires the spec be wrapped in another object
    return request({
        method: 'POST',
        uri: 'http://generator.swagger.io/api/gen/clients/typescript-angular',
        body: bodyData,
        headers: {
            "content-type": "application/json"
        }
    },
    function (error, response, body) {
        if (error) {
            return console.error('upload failed:', error);
        }
        var responseData = JSON.parse(body);
        var Url = responseData.link;
        downloadPackage(Url);
    });
}

function downloadPackage(Url) {
    // the setTimeout here (and below) is my workaround for not chaining the async callbacks properly
    return request(Url,
        function (error, response, body) {
        }).pipe(fs.createWriteStream('client.zip'), setTimeout(exctractPackage, 2000));
}

function exctractPackage() {
    // unzip the generated client into the temp folder, then move the .ts files out
    return gulp.src('client.zip')
        .pipe(unzip())
        .pipe(gulp.dest(tempFolder), setTimeout(moveFiles, 2000));
}

function moveFiles() {
    return gulp.src(tempFolder + '/typescript-angular-client/API/Client/*.ts')
        .pipe(gulp.dest('App/ClientProxy')); // destination folder is an assumption
}

Now I am no expert at Node.js, I'll be the first to admit, so I've added a few workarounds using setTimeout in my script as I couldn't get the async functions to work correctly; if anyone wants to correct me on how these should be done properly, please do 🙂

At the end of this you will end up with the TypeScript files in a folder that you can then process into a package. We are still working on a push to GitHub for this so that we can compile a bower package; I will make another blog post about this.

In the TypeScript output there will always be an api.d.ts file that you can reference in your TypeScript project to expose the client. I'll do another post about how we set up our dev environment for compiling the TypeScript from bower packages.

For our JavaScript library we just need to add one more step:

function compileTypeScriptClientLib() {
    var sourceFiles = [tempFolder + '/**/*.ts'];

    var tsResult = gulp.src(sourceFiles)
        .pipe(ts({
            out: 'clientProxy.js' // compile everything into a single file
        }));
    return tsResult.js.pipe(gulp.dest(tempFolder)); // output folder is an assumption
}

This will compile our JS script library; we can then also minify it in gulp before packaging. Again, bower is the technology for distributing client packages, so after this we push to GitHub, but I'll do another blog post about that.

The TypeScript output you get from codegen is AngularJS-based, which is fine as "most" of our apps use Angular already; however, a couple of our legacy ones don't, so the client proxy object that is created needs a bit of work to inject its dependencies.

Below is an example of a module in JavaScript that I use to wrap the AngularJS service and return it as a JavaScript object with the Angular dependencies injected:

var apiClient = (function (global) {
    var ClientProxyMod = angular.module("ClientProxyMod", []);
    ClientProxyMod.value("basePath", "http://MyWebAPILive.com/"); // normally I'd have a settings.js file where I would store this
    ClientProxyMod.service("MyWebAPIController1", ['$http', '$httpParamSerializer', 'basePath', API.Client.MyWebAPIController1]);
    var prx = angular.injector(['ng', 'ClientProxyMod']).get('MyWebAPIController1');
    return {
        proxy: prx
    };
})(this);

You would need to do the above once for each controller you have in your WebAPI project; the codegen outputs one service for each controller.

One of the dependencies of the service that is created by codegen is the "basePath"; this is the URL to the live service, so I pass it in as a value. You will need to add this value to your AngularJS module when using the client in an AngularJS app as well.

Using basic auth in AngularJS is pretty straightforward because you can set it on the $http object, which is exposed as a property on the service:

apiClient.proxy.$http.defaults.headers.common['Authorization'] = "Basic " + btoa(username + ":" + password);

Then you can simply call your methods from this apiClient.proxy object.



Swagger/Swashbuckle displaying Error with no information

Ran into an interesting problem today when implementing Swagger UI on one of our WebAPI 2 projects.

Locally it was working fine, but when the site was deployed to dev/test it would display an ambiguous error message:

<Message>An error has occurred.</Message>

After hunting around I found that Swashbuckle respects the customErrors mode in the system.web section of the web.config.

Setting this to Off displayed the real error, in our case a missing file:

<customErrors mode="Off"/>


<Message>An error has occurred.</Message>
Could not find file 'C:\Octopus\Applications\Development\Oztix.GreenRoom.WebAPI\2.0.332.0\Oztix.GreenRoom.WebAPI.XML'.
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath) at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy, Boolean useLongPath, Boolean checkHost) at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options, String msgPath, Boolean bFromProxy) at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize) at System.Xml.XmlUrlResolver.GetEntity(Uri absoluteUri, String role, Type ofObjectToReturn) at System.Xml.XmlTextReaderImpl.OpenUrlDelegate(Object xmlResolver) at System.Threading.CompressedStack.runTryCode(Object userData) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.CompressedStack.Run(CompressedStack compressedStack, ContextCallback callback, Object state) at System.Xml.XmlTextReaderImpl.OpenUrl() at System.Xml.XmlTextReaderImpl.Read() at System.Xml.XPath.XPathDocument.LoadFromReader(XmlReader reader, XmlSpace space) at System.Xml.XPath.XPathDocument..ctor(String uri, XmlSpace space) at Swashbuckle.Application.SwaggerDocsConfig.<>c__DisplayClass8.<IncludeXmlComments>b__6() at Swashbuckle.Application.SwaggerDocsConfig.<GetSwaggerProvider>b__e(Func`1 factory) at System.Linq.Enumerable.WhereSelectListIterator`2.MoveNext() at Swashbuckle.Swagger.SwaggerGenerator.CreateOperation(ApiDescription apiDescription, SchemaRegistry schemaRegistry) at Swashbuckle.Swagger.SwaggerGenerator.CreatePathItem(IEnumerable`1 apiDescriptions, SchemaRegistry schemaRegistry) at System.Linq.Enumerable.ToDictionary[TSource,TKey,TElement](IEnumerable`1 source, Func`2 keySelector, Func`2 elementSelector, IEqualityComparer`1 comparer) at Swashbuckle.Swagger.SwaggerGenerator.GetSwagger(String rootUrl, String apiVersion) at Swashbuckle.Application.SwaggerDocsHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken) at System.Net.Http.HttpMessageInvoker.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken) at System.Web.Http.Dispatcher.HttpRoutingDispatcher.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken) at System.Web.Http.HttpServer.<SendAsync>d__0.MoveNext()
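Digging into the trace, it's Swashbuckle's IncludeXmlComments that is reading the file. A registration like the below sketch will throw exactly this when the XML documentation file isn't generated for the deployed build configuration, or doesn't get shipped with the site:

// SwaggerConfig.cs (sketch): IncludeXmlComments reads the file at startup,
// so make sure "XML documentation file" is ticked for the configuration you deploy
GlobalConfiguration.Configuration
    .EnableSwagger(c =>
    {
        c.SingleApiVersion("v1", "Oztix.GreenRoom.WebAPI");
        c.IncludeXmlComments(string.Format(@"{0}\bin\Oztix.GreenRoom.WebAPI.XML",
            AppDomain.CurrentDomain.BaseDirectory));
    })
    .EnableSwaggerUi();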

Application Insights, and why you need them

Metrics are really important for decision making. Too often in business I hear people make statements based on assumptions, followed by justifications like "It's an educated guess". If you want to get educated, you should use metrics to get your facts first.

Microsoft recently published a great article (From Agile to DevOps at Microsoft Developer Division) about how they changed their agile processes, and one of the key things I took out of it was the change away from the product owner as the ultimate source of information.

A tacit assumption of Agile was that the Product Owner
was omniscient and could groom the backlog correctly…

… the Product Owner ranks the Product Backlog Items (PBIs) and these are treated more or less as requirements. They may be written as user stories, and they may be lightweight in form, but they have been decided.

We used to work this way, but we have since developed a more flexible and effective approach. In keeping with DevOps practices, we think of our PBIs as hypotheses. These hypotheses need to be turned into experiments that produce evidence to support or diminish the experiment, and that evidence in turn produces validated learning.

So you can use metrics to validate your hypotheses and make informed decisions about the direction of your product.

If you are already in Azure it's easy and free for most small to mid-sized apps. You can do pretty similar things with Google Analytics too, but the reason I like App Insights is that it combines server-side monitoring and works nicely with the Azure dashboard in the new portal, so I can keep everything in one place.

From Visual Studio you can simply right click on a project and click "Add Application Insights Telemetry".


This will bring up a wizard that will guide you through the process.


One gotcha I found though was that because my account was linked to multiple subscriptions, I had to cancel out of the wizard, log in with the Server Explorer, then go through the wizard again.

It adds a lot of gear to your projects, including:

  • JavaScript libraries
  • An HTTP module for tracking requests
  • NuGet packages for all the ApplicationInsights libraries
  • An ApplicationInsights.config file with the key from your account

After that's loaded in, you'll also need to add tracking to various areas. The help pages in the Azure interface have all the info for this and come complete with snippets you can easily copy and paste for a dozen languages and platforms (PHP, C#, Python, JavaScript, etc), but not VB in a lot of instances, which surprised me 🙂 You can use this link inside the Azure interface to get at all the goodies you'll need.


Where I recommend adding tracking at a minimum is:

Global.asax for web apps

public void Application_Error(object sender, EventArgs e)
{
    // Code that runs when an unhandled error occurs

    // Get the exception object.
    Exception exc = Server.GetLastError();

    var telemetry = new TelemetryClient();
    telemetry.TrackException(exc); // send the unhandled exception to Application Insights
}


For client-side tracking, copy the snippet from their interface and add it to your master page, or a shared template cshtml for MVC.



Also, you should load it onto your VMs as well; there is a Web Platform Installer you can run on your Windows boxes that will install a service for collecting the stats.



A warning though: the above needs an IISReset to complete. A good article on the setup is here.

For tracking events I normally create a helper class that wraps all my calls to make things easier; a minimal sketch is below.
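In this sketch the class and method names are mine; TelemetryClient is the real Application Insights API:

using System;
using System.Collections.Generic;
using Microsoft.ApplicationInsights;

public static class Telemetry
{
    private static readonly TelemetryClient Client = new TelemetryClient();

    public static void Event(string name, IDictionary<string, string> properties = null)
    {
        Client.TrackEvent(name, properties);
    }

    public static void Error(Exception ex)
    {
        Client.TrackException(ex);
    }
}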

Also, as a lot of my apps these days are heavy JavaScript, I'm using the tracking feature in JavaScript a lot more. The snippet they give you will create an appInsights object in the page that you can reuse through your app after startup.

Below is an example drawn from code I put into a survey app recently; it's not exactly the same code, but I will use it as an example. This is so we could track when a user answers a question incorrectly (i.e. validation fails); there is a good detailed article on this here.

SurveyName: "My Survey",
SurveyId: 22,
SurveyVersion: 3,
FailedQuestion: 261

By tracking this we can answer questions like:

  • Are there particular questions that users fail to answer more often?
  • Which surveys have a higher failure rate for data input?

Now that you have the ability to ask these questions, you can start to form hypotheses. For example, I might make a statement like:

“I think Survey X is too complicated for users and needs to be split into multiple smaller surveys”

To validate this I could look at the metrics collected for the above event, compare the user error rate on survey X to other surveys, and use this as a basis for my hypothesis. You can use the Metrics Explorer to create charts that map this out for you, filtering by your event (example below).


Then after the change is complete I can use the same metrics to measure the impact of my change.

This last point is another mistake I see in business a lot: too often a change is made, then no one reviews its success. What Microsoft has done with their DevOps process is embed metrics into the process, so by default you are checking your facts before and after a change, which is the right way to go about things IMO.



TypeScript Project AppSettings

OK, so there are a few nice things that MS has done with TypeScript, but between dev time, build time and deploy time they don't all work the same, so there are a few things we've done to make the F5 experience nice while making sure the built and deployed results are the same.

In the below examples we are using

  • Visual Studio 2015
  • Team City
  • Octopus Deploy

Firstly, TypeScript in Visual Studio has a nice feature to wrap up your TypeScript into a single JS file while you're coding. It'll save to this file as you are saving your TS files, so it's nice for making code changes on the fly while debugging, and you get a single file output.


This will also build when TeamCity runs msbuild. But don't check this file in; it's compiled TS output and should be treated like the binary output from a C# project.

And you can further minify and compact this using npm (see this post for details).

This isn't perfect though, because we shovel our static content to a separate CDN that is shared between environments (folder per version), and we have environment-specific JavaScript variables that need to be set. This can't be done in a TypeScript file as they all compile into a single file. I could use npm to generate the TypeScript into two files, but from my tests this conflicts too much with the developers' local setup using the above feature.

So we pulled it into a JS file that is separate from the TypeScript. This obviously causes the TypeScript to break, as the object doesn't exist, so we added a declaration file for it like below:

declare var AppSetting: {
    url: string;
    baseCredential: string;
    ApiKey: string;
};

Then drop the actual object in a JavaScript file with the tokens ready for Octopus, with a simple if statement to drop in the developers' local settings when running locally:

var AppSetting;
(function (AppSetting) {
    if (window.location.hostname == "localhost") {
        AppSetting.url = "http://localhost:9129/";
        AppSetting.baseCredential = "XXXVVVWWW";
        AppSetting.ApiKey = "82390189wuiuu";
    } else {
        AppSetting.url = "#{url}";
        AppSetting.baseCredential = "#{baseCredential}";
        AppSetting.ApiKey = "#{ApiKey}";
    }
})(AppSetting || (AppSetting = {}));

This does mean we need an extra HTTP request to pull down the AppSetting JS file for our environment, but it means we can maintain our single CDN for static content/code like we have traditionally done.

SQL Query Slow? Some Basic MSSQL tips

A lot of developers I've worked with in the past don't have good experience with SQL out of the box; when I say good, I mean beyond knowing the basics. A lot of the systems I work on are high performance, so I end up training developers in SQL as a priority when they come onto the team. There are a few basic things I always show them which get them up to speed pretty easily.

Common Mistakes to Avoid

Backwards Conversions in WHERE clauses

A common mistake is a backwards WHERE clause; converting the column forces the engine to convert the data in that column for every row, and it can no longer use an index on it. You should always convert the parameter, not the column. The backwards version looks like this:

WHERE CONVERT(int,column1) = @param1
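The right way around converts the parameter once and leaves the column alone (the types here are illustrative):

-- Good: one conversion of the parameter; the column (and its index) is untouched
WHERE column1 = CONVERT(varchar(10), @param1)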

Cursors, they aren’t bad, just use them correctly

If you are using cursors for doing transactional workloads I will be scared and probably not talk to you again. If you are using them simply to iterate through a temporary table or table var and do “something” you are probably using them correctly.

Just remember to use these two hints all the time, READ_ONLY and FAST_FORWARD; they will give you a speed boost of an order of magnitude.


Also, limit them to a few thousand rows if you can; don't use them for millions of rows if you can avoid it.
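A minimal sketch of the shape I mean, iterating over a table variable (the names are illustrative; note that FAST_FORWARD already implies a forward-only, READ_ONLY cursor):

DECLARE @Work TABLE (Id INT PRIMARY KEY);
INSERT INTO @Work (Id) VALUES (1), (2), (3);

DECLARE @Id INT;
DECLARE row_cursor CURSOR LOCAL FAST_FORWARD
FOR SELECT Id FROM @Work;

OPEN row_cursor;
FETCH NEXT FROM row_cursor INTO @Id;
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @Id; -- do "something" with each row here
    FETCH NEXT FROM row_cursor INTO @Id;
END
CLOSE row_cursor;
DEALLOCATE row_cursor;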

INSERT is your friend, UPDATE is your enemy

Try to design your schema around INSERTing rather than UPDATEing; you will mitigate contention this way. Contention for resources is what makes you build big indexes and will slow you down in the long run. A sketch of the idea is below.
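Rather than rewriting a status column in place, append a row and read the latest one (the tables here are illustrative):

-- Contended version: every status change rewrites the same row
-- UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderId = @OrderId;

-- Append-only version: each change is a new row, so writers don't fight over existing rows
INSERT INTO dbo.OrderStatusHistory (OrderId, Status, CreatedUtc)
VALUES (@OrderId, 'Shipped', SYSUTCDATETIME());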

Get some CTEs up ya

Common Table Expressions (CTEs) are useful for breaking sub-queries out, and I find they are cleaner in the code; the performance difference is arguable though. For example:
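Here the sub-query gets a name up front instead of being nested in the join (tables illustrative):

WITH LastOrders AS (
    SELECT CustomerId, MAX(OrderDate) AS LastOrderDate
    FROM dbo.Orders
    GROUP BY CustomerId
)
SELECT c.Name, lo.LastOrderDate
FROM dbo.Customers c
JOIN LastOrders lo ON lo.CustomerId = c.CustomerId;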

Table Variables over Temp Tables

Yes, I’ve said it, now I may get flamed. Temp tables have their place, primarily when you need to index your content, but if you have temporary data that is large enough to require an index, maybe it should be a real table.

Also, table variables make for easier debugging because you don't have to drop them before F5ing again in SSMS:
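For example (the table and query are illustrative):

-- Re-runs cleanly in SSMS: no DROP TABLE needed between F5s
DECLARE @Recent TABLE (Id INT PRIMARY KEY, Name NVARCHAR(50));

INSERT INTO @Recent (Id, Name)
SELECT Id, Name
FROM dbo.Customers
WHERE CreatedUtc > DATEADD(DAY, -7, SYSUTCDATETIME());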

Missing Indexes

Number one cause of query slow down is bad indexing. If you are doing large amounts of UPDATE/INSERT on your tables though, too much indexing can be bad too.

SSMS is your friend in this case. There are a lot more advanced tools out there, sure, but you should be starting with SSMS; you'll be able to find your basic slow-downs.

Look at query plans

Hit this button in the toolbar and you're away:


If there is an obvious slow-down, SSMS will recommend an index to fix your problem.

You can see the green text it displays sometimes (not all the time); you can right click on this and select "Missing Index Details…" and it will give you a CREATE INDEX statement that you can use to create your index.


Most of the index hints in here are pretty spot on, but there are a few things to consider before going "yeah! here's the index that will solve my problem":

  1. Don't index bit columns, or columns that have a small cardinality
  2. Look for covering indexes; what it suggests might be the same as an index you already have but with one extra column, which means you could use a single index for both jobs
  3. Think about any high-volume updates you have; you might slow down your updates if you add more indexes

The query plan itself will give you some more detailed info than the hints.

Each block is labelled with its percentage of the entire statement (below is one block which is 13% of the entire statement); then within each block it breaks things up further, and the below 3 index seeks use 12% of the total cost each.


When looking at the above it can get very confusing to know what to do if you are not very familiar with SQL; this interface gives you a lot of info when you mouse over each point. I think this is why some developers like to hide behind entity frameworks 🙂

The basic thing I tell people to look out for is the below:


Index scans are usually the source of pain that you can fix, and when you get big ones SSMS will generally suggest indexes for you based on these. You will want to make sure they are consuming a fair chunk of the query before creating an index for them though; the above example of 2% is not a good candidate.

Maintain your Database

Your indexes will need defragmenting/rebuilding, and you will need to reclaim space by backing up the database and logs.

I won't go into this too much in the scope of this post; I might do another post about it as it's a rather large subject. I recommend googling this for recommendations, but at least use the wizard in SSMS to set up a "default" maintenance plan job nightly. Don't leave your database un-maintained; that will slow it down in the long run.
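At its simplest, the kind of statement such a nightly job runs looks like the below (the table is illustrative; a real plan would pick REBUILD vs REORGANIZE based on fragmentation):

ALTER INDEX ALL ON dbo.Orders REBUILD;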

People to watch and Learn from

Pinal Dave from SQL Authority is "the man"; he has come up in more of my Google searches for SQL issues than Stack Overflow.

Packaging Large Scale JS projects with gulp

Large scale JS projects have become a lot more common with the rise of AngularJS and other JavaScript frameworks.

Laying out your project, you want to have your source in a lot of separate files to keep things neat and structured, but you don't want your browser making a few hundred requests to the web server on every page load.

I previously blogged about a simple JS and CSS minify and compress for a C#-based project, but this time I'm looking at a small AngularJS project that has over 50 JS files.

I'm using VS 2015 with the Web Essentials add-on, so I can put my npm requirements in the package.json file and they will download. With 2013 I previously had to run npm on my local machine each time I set up to get them; the new VS is way better for these tasks, so if you haven't upgraded to 2015 you should download it now.

There are 3 important files highlighted below:


Firstly, let's look at my package.json file:


In here you can add all your npm dependencies and VS will run npm in the background to install them as you are typing.
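Roughly what ours contains (the package names match the gulpfile below; the versions are illustrative):

{
  "name": "my-angular-app",
  "version": "1.0.0",
  "devDependencies": {
    "gulp": "^3.9.0",
    "gulp-concat": "^2.6.0",
    "gulp-uglify": "^1.5.0",
    "gulp-rename": "^1.2.0",
    "gulp-minify-css": "^1.2.0",
    "gulp-sourcemaps": "^1.6.0",
    "del": "^2.0.0"
  }
}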

Next is the bower.json file


This is used for managing your 3rd party libraries. Traditionally, when throwing in 3rd party JS libraries I would use a CDN; however, with Angular you will end up with a bunch of smaller libraries that would leave you with too many CDN references, so you are better off using bower to download them into your project, then combining them from there (they will already be uglified in most cases).
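Something like the below (the library names and versions are illustrative):

{
  "name": "my-angular-app",
  "dependencies": {
    "angular": "~1.4.0",
    "angular-ui-router": "~0.2.0",
    "ui-router-tabs": "~1.1.0"
  }
}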

Now, there is also a ".bowerrc" file that I have added, because I don't like the default path:

"directory": "app/lib"

Again, as with the package.json, it will start downloading these as you type and press save. Also note it doesn't clean up if you delete one, so you'll need to go into the target location and delete the folder manually.

Lastly the Gulpfile.js file, here is where we bring it all together.

In the example below I am processing 2 lots of JavaScript: the 3rd party libraries into a lib.min.js, and our code into scripts.min.js. I could also add a final step to combine them as well.

The good thing about the below is that as I add new JS files to my /app/scripts folder, they automatically get combined into the main script file, and when I add new 3rd party libraries they will automatically get added to the 3rd party script file (hence the aforementioned change to the .bowerrc file).

The 3rd party libraries sometimes tend to get painful though. You can see below that I am not minifying them, only joining; the ones I'm using are all minified already, but sometimes you will get ones that aren't. Also, one of the packages I am using contains the jQuery 1.8 libraries, which it appears to use for some sort of unit test, so I had to exclude this file specifically; be prepared for some troubleshooting with this.

/// <binding BeforeBuild='clean' AfterBuild='minify, scripts' />
// include plug-ins
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var del = require('del');
var rename = require('gulp-rename');
var minifyCss = require('gulp-minify-css');
var sourcemaps = require('gulp-sourcemaps');

gulp.task('default', function () {
});

//Define javascript files for processing
var config = {
    AppJSsrc: ['App/Scripts/**/*.js'],
    LibJSsrc: ['App/lib/**/*.min.js', '!App/lib/**/*.min.js.map',
        '!App/lib/**/*.min.js.gzip',
        '!App/lib/**/jquery-1.8*.js', // the exact jQuery 1.8 exclusion pattern is an assumption
        'App/lib/**/ui-router-tabs.js']
};

//delete the output file(s)
gulp.task('clean', function () {
    return del(['Content/*.min.css']);
});

gulp.task('scripts', function () {
    // Process js from us: combine, uglify, and write a source map next to the bundle
    gulp.src(config.AppJSsrc)
        .pipe(sourcemaps.init())
        .pipe(concat('scripts.min.js'))
        .pipe(uglify())
        .pipe(sourcemaps.write('.'))
        .pipe(gulp.dest('App'));
    // Process js from 3rd parties: already minified, so just join them
    return gulp.src(config.LibJSsrc)
        .pipe(sourcemaps.init())
        .pipe(concat('lib.min.js'))
        .pipe(sourcemaps.write('.'))
        .pipe(gulp.dest('App'));
});

gulp.task('minify', function () {
    return gulp.src('Content/*.css')
        .pipe(minifyCss())
        .pipe(rename({
            suffix: '.min'
        }))
        .pipe(gulp.dest('Content'));
});

So now I end up with two files output:


Oh, and don't forget to add these files to your .gitignore or .tfignore so they don't get checked in. Also make sure you add "node_modules" and your 3rd party library folder as well; you don't want to be checking in your dependencies. I'll do another blog post about using the above dependencies in the build environment. Something like the below:
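# compiled bundle output: treat like the bin/obj of a C# project
App/scripts.min.js
App/lib.min.js
App/*.min.js.map

# dependencies: restored by npm/bower, never checked in
node_modules/
App/lib/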

Just to note, not having node_modules in my .gitignore killed VS 2015's integration with GitHub, causing an error about the path being longer than 260 characters. It prevented me checking in and refused to acknowledge the project was under source control.

And in my app I only need two script tags:

<script src="App/lib.min.js"></script>
<script src="App/scripts.min.js"></script>

You will also note in the above processing that I have a source map file output; this is essential for debugging the scripts. The map files won't get downloaded unless the browser is in debugging mode, and you should use your build/deployment environment to exclude them from production, to stop people decompressing your JS code.

You can see from the screenshot below, with me debugging on my local machine, that Firebug is able to render the original JavaScript code for me to debug and step through just like it was uncompressed, with the original file names and all.


Handling Client Side error in AngularJS and sending them Server Side

I did an example in my AzureDNS app today of how to add an ApiController that will log client-side errors to a server-side store.

Coming from a C# background, I am used to having a central logging store (e.g. a SQL table, or SEQ more recently) that all errors are dumped to. When running code on a server it's easy for an app to throw its logs into something that is behind the firewall and catalogs the logs from all your apps nicely.

When you have code that is running in someone's web browser, that's not always as easy. What I've done with this example is create a controller on /api/Log/Error within the running app that I can throw my JavaScript exceptions at, then added a call from Angular's global exception handler.

The Log Controller can be found here


The log error method is pretty straightforward; the logger object is passed in via DI from Autofac:

// POST: api/Log/Error
public void PostError([FromBody]string value)
{
    // _logger is the Serilog ILogger that Autofac injects via the constructor
    _logger.Error("Client side error: {ClientError}", value);
}

The global error handler in Angular is done as follows:


'use strict';
angular.module('AzureDNSUI') // This would be changed to your app name if reusing this code
    .factory('$exceptionHandler', function ($injector) {
        return function (exception, cause) {
            var $http = $injector.get("$http");
            var $log = $injector.get("$log");
            exception.message += ' (caused by "' + cause + '")';
            $log.log(exception); // Logs to console
            $http.post('/api/Log/Error', JSON.stringify(exception)); // Where the magic happens
            throw exception;
        };
    });

An example error from Angular in our SEQ server below:


There are a few fields that we can customize that I set by default from the Serilog config; some of these are not relevant for the JavaScript errors, for example the SourceContext will always be the same. I'll do a follow-up post about collecting client information to pass through later.