Use Parameters from TFS Test Cases in NUnit Unit Tests

One of the things we have been doing a lot lately is getting our testers more involved in API method creation/development. With technologies like Swagger it is becoming a lot easier to get non-developers involved at this level, and I find it's great to have a "test driven" mind helping developers out with creating unit tests at a level this low.

One of our problems, though, is that while the testers have been coming up with test cases fine, getting those cases into our code has been a mission.

Our testers know enough code to have a conversation about it and follow basic loops and ifs, but not enough to edit it, so in some cases we've been having Chinese-whispers issues getting the cases into the code. We have also had some complex test methods recently (10+ parameters, 30+ test cases), and data like that is hard to visualize when it is stored in C# code rather than in something like a grid/table (Excel, etc.).

For the latter requirement we had considered CSV storage, and I have done some code for this in my library if you want to use it. But we decided to go with TFS test cases for storing the data, because I was working on updating the test cases from the NUnit tests anyway.
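If the CSV route suits you better, below is a rough sketch of what it can look like with a plain NUnit TestCaseSource; the file name and column layout are made up for illustration, and this is not the library's code.

using System.Collections.Generic;
using System.IO;
using System.Linq;
using NUnit.Framework;

[TestFixture]
public class CsvBackedTests
{
    // Each data row in the CSV becomes one test case; columns map to parameters by position.
    public static IEnumerable<TestCaseData> CsvCases()
    {
        return File.ReadLines("TestData/MyMethodCases.csv") // hypothetical path
            .Skip(1) // skip the header row
            .Select(line => line.Split(','))
            .Select(cols => new TestCaseData(cols[0], cols[1]));
    }

    [TestCaseSource(nameof(CsvCases))]
    public void MyUnitTestMethod(string someParam1, string someParam2)
    {
        // Test stuff here
    }
}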

Ideally I wanted to do something like this:


[Test]
[TfsTestCaseDataSource(65871)]
public void MyUnitTestMethod()
{
    // Test stuff here
}

And have it draw from the test case in TFS (example below).

(Screenshot: the test case in TFS storing the data to use in the NUnit unit test)

However, there are some issues with passing parameters to NUnit tests that prevented me from doing this.

So instead, the current workaround is to create a class for each unit test that we want to pull its data from a test case in TFS. I may yet fix this in NUnit one weekend if I am not too hungover (but there is little chance of a weekend like that).

The library I am using here is available on nuget.org.

So, using the library, you have to do the following:

1. Create a class for the data source that inherits from the class NUnitTfsTestCase.TestCaseData.TestCaseDataTfs, and hard-code your TFS test case ID in here:


using System.Collections.Generic;
using NUnitTfsTestCase.TestCaseData;

namespace MyNamespace.MyFixture.MyMethod // use a good folder structure please 🙂
{
    class GetTestCaseData : TestCaseDataTfs
    {
        public static IEnumerable<dynamic> GetTestData()
        {
            return TestCaseDataTfs.GetTestDataInternal(65079);
        }
    }
}

2. Add an attribute to your test method that references this:


[TestFixture]
class MyTestClass
{
    [Test, TestCaseSource(typeof(MyFixture.MyMethod.GetTestCaseData), "GetTestData")]
    public void MyUnitTestMethod(string someParam1, string someParam2)
    {
        // Test stuff here
    }
}

3. Add app settings for the TFS server and TFS project name:


<add key="tpcUri" value="https://mytfsserver.com.au/tfs/DefaultCollection" />
<add key="teamProjectName" value="MyProjectName" />

It's important to note here that I am using a build agent that is on the same domain as an on-premise TFS server, using a domain account with appropriate permissions to the TFS server. I will work on example code in the near future for pointing this at VSTS (formerly VSO) with credentials baked into the appSettings.

Another thing to note is that NUnit pulls the data out and maps it to the parameters by ordinal, so you need to make sure the parameter columns in TFS are in the same order as the parameters on the unit test method. I think I should be able to fix this so that it matches the method's parameter names to the columns in TFS; I'll have another crack at it in a few weeks.

The source code for the library is available here. Feel free to send a PR if you want to improve or fix anything 🙂


Updating Test Cases in TFS from NUnit Tests run in TeamCity Builds

One of the things I always try to impress upon the business is the need for dashboards, and TFS provides a great interface for this. We use them to report on our manual testing, and we were looking at also using them to report on test automation.

In the past I've seen CodedUI test results update test cases via the links to test automation in Test Manager. I thought we might be able to use this, but it's pretty hard; instead I just used the TFS client (the old one, not the new REST client unfortunately; I will revise the code in another post). Below is the final outcome I wanted to end up with on our dashboard:

(Screenshot: the TFS dashboard showing test case status for the unit tests)

The concept is pretty straightforward; we wanted to achieve something like the following with our NUnit unit tests:


[Test]
[TfsTestCase(65871)]
public void MyUnitTestMethod()
{
    // Test stuff here
}

So we could simply add an attribute to the unit test with the ID of the test case and the library would take care of the rest. And it's "almost" that easy.

In order to log test results you also need the Test Suite ID and the Test Plan ID from TFS, and we need to initialize the run at the start and close it at the end so we end up with all of our unit tests in the same run (I've done one run per fixture, but you could change this).

So in the library there are two static ints that need to be set, and I also created a "test rig" class that I inherit my test fixtures from, which has the appropriate OneTimeSetUp and OneTimeTearDown methods to start the test run and then close it off.

There are also some static variables for caching all the test results and adding them in one go at the end (I couldn't work out how to add them as they are running).
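For illustration, a rough sketch of what such a test rig base class might look like is below; the commented-out TfsService calls are placeholders for the library's internals, not its actual method names.

using NUnit.Framework;

// Minimal sketch only - the TfsService calls below are illustrative placeholders,
// not the real method names from the NUnitTfsTestCase library.
public abstract class ControllerTestRig
{
    [OneTimeSetUp]
    public void StartTfsTestRun()
    {
        // Open a TFS test run for this fixture using the static plan/suite IDs
        // set in the derived fixture's constructor.
        // NUnitTfsTestCase.TfsService.RunData.StartRun(); // hypothetical
    }

    [OneTimeTearDown]
    public void CompleteTfsTestRun()
    {
        // Flush the cached test results to TFS and close the run off.
        // NUnitTfsTestCase.TfsService.RunData.PublishCachedResults(); // hypothetical
    }
}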

The end result is something like the below:


[TestFixture]
class MyTestClass : ControllerTestRig
{
    public MyTestClass() // ctor for setting static vars, runs before OneTimeSetUp
    {
        NUnitTfsTestCase.TfsService.RunData.TestPlanId = 65213;
        NUnitTfsTestCase.TfsService.RunData.TestSuiteId = 65275;
    }

    [Test]
    [TfsTestCase(65871)]
    public void MyUnitTestMethod()
    {
        // Test stuff here
    }
}

It's important to note at this stage that this library assumes your build agent can authenticate with your TFS server. In our first case we are using on-premise TFS and a build agent that is running under a domain Windows user account with the correct permissions. I will do an update later for different auth methods, and also do an example for VSTS (formerly VSO), as I am going to move this over to a VSTS-hosted project in the next few weeks.

You also need to put the following app settings in for the library to pick up your TFS server address and project name:

<add key="tpcUri" value="https://mytfsserver.com.au/tfs/DefaultCollection" />
<add key="teamProjectName" value="MyProjectName" />

Once the above is done we get a nice test run for each test fixture in our test library, as well as the test cases having their status updated.

(Screenshot: TFS test run charts)

(Screenshot: TFS test run details from the TeamCity NUnit run)

For any test that errors, the library will pass on the error status and also the message. We are using NUnit 3 for this, because 2.x isn't under active development and doesn't provide a lot of detail in the TestContext for us to log to TFS.
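For reference, the kind of detail NUnit 3 exposes comes from TestContext; a minimal sketch of reading it in a TearDown is below (the library's own capture code will differ).

using NUnit.Framework;
using NUnit.Framework.Interfaces;

public abstract class ResultCaptureSketch
{
    [TearDown]
    public void CaptureOutcome()
    {
        // NUnit 3's TestContext exposes the outcome and failure message per test,
        // which is the sort of detail that gets pushed to the TFS test case result.
        var result = TestContext.CurrentContext.Result;

        if (result.Outcome.Status == TestStatus.Failed)
        {
            // result.Message holds the assertion/exception message to forward to TFS.
            TestContext.WriteLine("Failed: " + result.Message);
        }
    }
}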

The source code is available on GitHub here and the nuget package is available on nuget.org. Feel free to send a PR if you want to change or fix something!


Swagger WebAPI JSON Object formatting standards between C#, TypeScript and Others

When designing objects in C# you use Pascal casing for your properties, but in other languages you don't; an example (other than Java) is TypeScript, and here's an article from Microsoft about it.

And that's cool; a lot of languages have different standards, and depending on which one you are in, you write a little differently.

The problem is when you try to work on a standard that defines cross platform communication that is case sensitive, an example being Swagger using REST and JSON.

So the issue we had today was that a WebAPI project was generating objects like this:


{
    "ObjectId": 203,
    "ObjectName": "My Object Name"
}
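For reference, the C# model behind JSON like that is Pascal-cased, something like the below (reconstructed from the JSON above, so the class name is an assumption).

// Standard Pascal-cased C# DTO, which serializes to the JSON shown above by default.
public class MyObject
{
    public int ObjectId { get; set; }

    public string ObjectName { get; set; }
}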

When swaggerated, the object comes out with the correct Pascal casing; however, when using swagger-codegen the object is converted to camel case (TypeScript below):


export interface MyObject {
 objectId: number;
 objectName: string;
}

The final output is a generated client library that can’t read any objects from the API because JavaScript is case sensitive.

After some investigation we found that when the swagger output is camel cased, the C# client generators (AutoRest and swagger-codegen) will output C# code with Pascal-cased properties plus attributes to do the translating from camel to Pascal, like the below example:


/// <summary>
/// Gets or Sets TimeZoneName
/// </summary>
[JsonProperty(PropertyName = "timeZoneName")]
public string TimeZoneName { get; set; }

So, to avoid pushing shit up hill, we decided to roll down it. I found this excellent article on creating a filter for WebAPI to convert all your Pascal-cased objects to camel case on the fly.
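The article does it with an action filter; as a rough alternative sketch, the same effect can also be had globally with Json.NET's camel-case contract resolver. This assumes a standard ASP.NET Web API project; the WebApiConfig name is just the usual template default.

using System.Web.Http;
using Newtonsoft.Json.Serialization;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Serialize Pascal-cased C# properties out as camelCase JSON for every action.
        config.Formatters.JsonFormatter.SerializerSettings.ContractResolver =
            new CamelCasePropertyNamesContractResolver();

        config.MapHttpAttributeRoutes();
    }
}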

So we found that the best practice is:

  • Write the Web API C# in Pascal casing
  • Convert Pascal-cased objects to camel-cased JSON on the fly using an action filter (or the global setting sketched above)
  • Creating the client with the TypeScript (or other camel-cased language) default options will then just work
  • Creating the C# client will add the JsonProperty attributes to translate from camel to Pascal, so the resulting C# client will be Pascal cased

I raised a GitHub issue here with a link into the source code that I found in swagger-codegen, however I later realized that changing the way we do things will mitigate the long-term pain.

Automated SSRS Report Deployments from Octopus

Recently we started using the SQL Server Data Tools preview in 2015, and it's looking good so far. For the first time in a long time I don't have to "match" the Visual Studio version I'm using with SQL; I can use the latest version of VS and support SQL versions from 2008 to 2016, which is great. But it isn't perfect: there are a lot of bugs in the latest version of Data Tools, including some that affect us which Microsoft are refusing to fix.

One of the sticking points for me has always been deployment. We use Octopus internally for deployment and try to run everything through that if we can, to simplify our deployment environment.

SSRS is the big one, so I’m going to go into how we do deployments of that.

We use a tool that comes with SQL Server called RS.EXE; this is a command line tool that is pre-installed with SSRS for running VB scripts that deploy reports and other objects to SSRS.

You need to be aware with this tool that, based on which API endpoint you call using the "-e" command line parameter, the available methods will be different. Out of the box it defaults to calling the 2005 endpoint, which differs massively from the 2010 one.

A lot of the code I wrote was based on Tom's blog post on the subject from 2011; however, he was using the 2005 endpoint and I have updated the code to use the 2010 endpoint.

The assumption is that you will have a folder full of RDL files (and possibly some PNGs and connection objects) that you need to deploy, so the VB script takes as parameters a "source folder", the "target folder" on the SSRS server, and the "SSRS server address" itself. I package this into a predeploy.ps1 file that I bundle into a NuGet file for Octopus; the line in the pre-deploy script looks like the below.


rs.exe -i Deploy_VBScript.rss -s $SSRS_Server_Address -e Mgmt2010 -v sourcePATH="$sourcePath"   -v targetFolder="$targetFolder"

You will note the "-e Mgmt2010"; this is important for the script to work. Next is the Deploy_VBScript.rss:


Dim definition As [Byte]() = Nothing
Dim warnings As Warning() = Nothing

Public Sub Main()
    ' Create the folder(s) for the project if they do not already exist
    Try
        Dim descriptionProp As New [Property]
        descriptionProp.Name = "Description"
        descriptionProp.Value = ""
        Dim visibleProp As New [Property]
        visibleProp.Name = "Visible"
        visibleProp.Value = True
        Dim props(1) As [Property]
        props(0) = descriptionProp
        props(1) = visibleProp
        If targetFolder.Substring(1).IndexOf("/") <> -1 Then
            ' Target folder is two levels deep, e.g. /Level1/Level2, so create both levels
            Dim level2 As String = targetFolder.Substring(targetFolder.LastIndexOf("/") + 1)
            Dim level1 As String = targetFolder.Substring(1, targetFolder.LastIndexOf("/") - 1)
            Console.WriteLine(level1)
            Console.WriteLine(level2)
            rs.CreateFolder(level1, "/", props)
            rs.CreateFolder(level2, "/" + level1, props)
        Else
            Console.WriteLine(targetFolder.Replace("/", ""))
            rs.CreateFolder(targetFolder.Replace("/", ""), "/", props)
        End If
    Catch ex As Exception
        If ex.Message.IndexOf("AlreadyExists") > 0 Then
            Console.WriteLine("Folder {0} exists on the server", targetFolder)
        Else
            Throw
        End If
    End Try

    Dim Files As String()
    Dim rdlFile As String
    Files = Directory.GetFiles(sourcePath)
    For Each rdlFile In Files
        If rdlFile.EndsWith(".rds") Then
            'TODO Implement handler for RDS files
        End If
        If rdlFile.EndsWith(".rsd") Then
            'TODO Implement handler for RSD files
        End If
        If rdlFile.EndsWith(".png") Then
            Console.WriteLine(String.Format("Deploying PNG file {2} {0} to folder {1}", rdlFile, targetFolder, sourcePath))
            Dim stream As FileStream = File.OpenRead(rdlFile)
            Dim x As Integer = rdlFile.LastIndexOf("\")

            definition = New [Byte](stream.Length - 1) {}
            stream.Read(definition, 0, CInt(stream.Length))
            Dim fileName As String = rdlFile.Substring(x + 1)
            Dim prop As New [Property]
            prop.Name = "MimeType"
            prop.Value = "image/png"
            Dim props(0) As [Property]
            props(0) = prop
            rs.CreateCatalogItem("Resource", fileName, targetFolder, True, definition, props, Nothing)
        End If
        If rdlFile.EndsWith(".rdl") Then
            Console.WriteLine(String.Format("Deploying report {2} {0} to folder {1}", rdlFile, targetFolder, sourcePath))
            Try
                Dim stream As FileStream = File.OpenRead(rdlFile)
                Dim x As Integer = rdlFile.LastIndexOf("\")

                Dim reportName As String = rdlFile.Substring(x + 1).Replace(".rdl", "")

                definition = New [Byte](stream.Length - 1) {}
                stream.Read(definition, 0, CInt(stream.Length))

                rs.CreateCatalogItem("Report", reportName, targetFolder, True, definition, Nothing, warnings)
                If Not (warnings Is Nothing) Then
                    Dim warning As Warning
                    For Each warning In warnings
                        Console.WriteLine(warning.Message)
                    Next warning
                Else
                    Console.WriteLine("Report: {0} published successfully with no warnings", reportName)
                End If
            Catch e As Exception
                Console.WriteLine(e.Message)
            End Try
        End If
    Next
End Sub

I haven't yet added support for RDS and RSD files in this script; if someone wants to finish it off, please share. For now it handles PNGs and RDLs, which is the main thing we update.

Now, that all sounds too easy, and it was. The next issue I ran into was this one when deploying: the VS 2015 Data Tools client saves everything in 2016 format, and when I say 2016 format, I mean it changes the xmlns and adds a tag called "ReportParametersLayout".

When you deploy from the VS client it appears to "roll back" these changes before passing the report to the 2010 endpoint, but if you try to deploy the RDL file straight from source it will fail (insert fun).

To work around this I had to write my own "roll back" step in the Octopus pre-deploy PowerShell script, below:

Get-ChildItem $sourcePath -Filter "*.rdl" | `
    Foreach-Object {
        [Xml]$xml = [xml](Get-Content $_.FullName)
        if ($xml.Report.GetAttribute("xmlns") -eq "http://schemas.microsoft.com/sqlserver/reporting/2016/01/reportdefinition")
        {
            $xml.Report.SetAttribute("xmlns", "http://schemas.microsoft.com/sqlserver/reporting/2010/01/reportdefinition")
        }
        if ($xml.Report.ReportParametersLayout -ne $null)
        {
            $xml.Report.RemoveChild($xml.Report.ReportParametersLayout)
        }
        $xml.Save($sourcePath + '\' + $_.BaseName + '.rdl')
    }

Now things were deploying w… no, wait! There's more!

Another error I hit was this next one. I mentioned before that MS are refusing to fix some bugs; here's one. We work in Australia, so time zones are a pain in the ass: we have some states on daylight savings, some not, and some that were and have now stopped. Add to this that we work on a multi-tenanted application, so while some companies can rule "We are in Sydney time, end of story", we cannot. So using the .NET functions for time zone conversion is a must, as they take care of all this nonsense for you.

I make a habit of storing all date/time data in UTC; that way, if you need to do a comparison in the database or application layer it's easy, and I then convert to the local time zone in the UI based on user preference or business rule, etc. It's because of this that we have been using System.TimeZoneInfo in the RDL files. As of VS 2012 and up, you get an error from this library in VS; the report will still run fine on SSRS when deployed, it just errors in the editor in VS and won't preview or deploy from there.
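For reference, this is the kind of conversion I mean; a minimal C# sketch (the time zone ID is just an example).

using System;

class TimeZoneExample
{
    static void Main()
    {
        // Store in UTC, convert to a local zone only at the edge (UI/report layer).
        DateTime utcNow = DateTime.UtcNow;

        // .NET knows the daylight-saving rules for each zone, so the conversion
        // is correct year-round without any manual offset handling.
        TimeZoneInfo sydney = TimeZoneInfo.FindSystemTimeZoneById("AUS Eastern Standard Time");
        DateTime sydneyTime = TimeZoneInfo.ConvertTimeFromUtc(utcNow, sydney);

        Console.WriteLine("{0:o} UTC -> {1:o} Sydney", utcNow, sydneyTime);
    }
}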

We've worked around this by using a CLR function that wraps TimeZoneInfo; example here. If you are reading this, please vote this one up on Connect; converting the time zone in the UI is better.

After this fix things are running smoothly.


AutoRest, Swagger-codegen and Swagger

One of the best things about Swagger is being able to generate a client. For me, Swagger is for REST what WSDL was for SOAP. One of my big dislikes about REST from the start was that it was hard to build clients because the standard was so loose, and with most services, if you got one letter's casing wrong in a large object, you would get a generic 400 response with no clue as to what the actual problem might be.

Enter swagger-codegen, a Java-based command line app for generating proxy clients based on the Swagger standard. Awesomesauce! However, I'm a .NET developer and I try to avoid adding new dependencies into my development environment (like J2SE); that's OK though, they have a REST API you can use to generate the clients as well.

While working on this I found that MS is also working on their own version of codegen, called AutoRest. AutoRest only supports 3 output formats at the moment: Ruby, Node.js (TypeScript) and C#. But looking at the output from both and comparing them, I am much happier with the code AutoRest produces; it's a lot cleaner.

So in our case we have three client requirements: C#, client-side JavaScript, and client-side TypeScript.

Now, whichever way you go with this, one requirement is that you need to be able to "run" your WebAPI service on a web server to generate the swagger JSON file that will be used in the client code generation. So you could add it into a CI pipeline with your Web API, but you would need build steps like:

  1. Build WebAPI project
  2. Deploy Web API project to Dev server
  3. Download json file from Dev Server
  4. Build client

Or you could make a separate build that you run; I've tried both ways and both work fine.

So we decided to use AutoRest for the C# client. This was pretty straightforward: the AutoRest exe is available in a NuGet package, so for our WebAPI project we simply added this, which made it available at build time. Then it was simply a matter of adding a PowerShell step into TeamCity for the client library creation. AutoRest will output a bunch of C# .cs files that you will need to compile, which is simply a matter of using csc.exe; after this I copy over a nuspec file that I have pre-baked for the client library.

(Screenshot: the PowerShell AutoRest step in TeamCity)


.\Packages\autorest.0.13.0\tools\AutoRest.exe -OutputDirectory GeneratedCSharp -Namespace MyWebAPI -Input http://MyWebAPI.net/swagger/docs/v1 -AddCredentials
& "C:\Program Files (x86)\MSBuild\14.0\bin\csc.exe" /out:GeneratedCSharp\MyWebAPI.Client.dll /reference:Packages\Newtonsoft.Json.6.0.4\lib\net45\Newtonsoft.Json.dll /reference:Packages\Microsoft.Rest.ClientRuntime.1.8.2\lib\net45\Microsoft.Rest.ClientRuntime.dll /recurse:GeneratedCSharp\*.cs /reference:System.Net.Http.dll /target:library
xcopy MyWebAPI\ClientNuspecs\CSharp\MyWebAPI.Client.nuspec GeneratedCSharp

You will note from the above csc command line that I have had to add in some references to get it to compile; these need to go into your nuspec file as well, so people installing your client package will have the correct dependencies. A snippet from my nuspec file is below:


<frameworkAssemblies>
  <frameworkAssembly assemblyName="System.Net.Http" targetFramework="net45" />
</frameworkAssemblies>
<dependencies>
  <dependency id="Microsoft.Rest.ClientRuntime" version="1.8.2" />
  <dependency id="Newtonsoft.Json" version="6.0.8" />
</dependencies>

After this, just add a NuGet publish step and you can start pushing your library to nuget.org, or in our case just to our private internal server.

For authentication we use Basic Auth over SSL, so adding the "-AddCredentials" command line parameter is needed to generate the extra methods and properties for us; you may or may not need this.

Below is an example console app where I have installed the NuGet package that AutoRest created; this uses basic auth, which you may not need.

using System;
using Microsoft.Rest; // for BasicAuthenticationCredentials

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            var svc = new MyClient();
            svc.BaseUri = new Uri("https://MyWebAPILive.com");
            svc.Credentials = new BasicAuthenticationCredentials { UserName = "MyUser", Password = "MyPassword!" };
            Console.WriteLine(svc.HelloWorld());
            Console.ReadLine();
        }
    }
}

Next we have swagger-codegen for our client-side libraries. As I said before, I don't want to add J2SE into our build environment (to avoid complexity), so we are using the API. I've built a gulp job to do this.

Why gulp? The JavaScript client output from codegen is pretty rubbish, so instead of using that I'm getting the TypeScript library and compiling it, then minifying it; I find this easier to do in gulp.

The Swagger UI for the swagger-codegen API is here. When you call the POST /gen/clients method you pass in your JSON file; it then returns a URL you can use to download a zip file with the client package. Below is my gulpfile:

var gulp = require('gulp');
var fs = require('fs');
var request = require('request');
var concat = require('gulp-concat');
var unzip = require('gulp-unzip');
var ts = require('gulp-typescript');
var tsd = require('gulp-tsd');
var tempFolder = 'temp';

gulp.task('default', ['ProcessJSONFile'], function () {
    // doco https://generator.swagger.io/#/
});

gulp.task('ProcessJSONFile', function (callback) {
    // Pull the swagger JSON from the running WebAPI instance
    return request('http://MyWebAPI.net/swagger/docs/v1',
        function (error, response, body) {
            if (error != null) {
                console.log(error);
                return;
            }
            ProcessJSONFileSwagOnline(body);
        });
});

function ProcessJSONFileSwagOnline(bodyData) {
    bodyData = "{\"spec\":" + bodyData + "}"; // the swagger-codegen web API requires the spec to be wrapped in another object
    return request({
            method: 'POST',
            uri: 'http://generator.swagger.io/api/gen/clients/typescript-angular',
            body: bodyData,
            headers: {
                "content-type": "application/json"
            }
        },
        function (error, response, body) {
            if (error) {
                console.log(error);
                return console.error('upload failed:', error);
            }
            // The generator responds with a link to a zip of the generated client
            var responseData = JSON.parse(body);
            var Url = responseData.link;
            console.log(Url);
            downloadPackage(Url);
        });
};

function downloadPackage(Url) {
    return request(Url,
        function (error, response, body) {
            console.log(error);
        }).pipe(fs.createWriteStream('client.zip'), setTimeout(extractPackage, 2000));
};

function extractPackage() {
    gulp.src("client.zip")
        .pipe(unzip())
        .pipe(gulp.dest(tempFolder));
    setTimeout(moveFiles, 2000);
};

function moveFiles() {
    return gulp.src(tempFolder + '/typescript-angular-client/API/Client/*.ts')
        .pipe(gulp.dest('generatedTS/'));
};

Now, I am no expert at Node.js, I'll be the first to admit, so I've added a few workarounds using setTimeout in my script as I couldn't get the async functions to work correctly; if anyone wants to correct me on how these should be done properly, please do 🙂

At the end of this you will end up with the TypeScript files in a folder that you can then process into a package. We are still working on a push to GitHub for this so that we can compile a bower package from it; I will make another blog post about that.

In the TypeScript output there will always be an api.d.ts file that you can reference in your TypeScript project to expose the client. I'll do another post about how we set up our dev environment for compiling the TypeScript from bower packages.

For our JavaScript library we just need to add one more step.


function compileTypeScriptClientLib() {
    var sourceFiles = [tempFolder + '/**/*.ts'];

    gulp.src(sourceFiles)
        .pipe(ts({
            out: 'clientProxy.js'
        }))
        .pipe(gulp.dest('outputtedJS/'));
};

This will compile our JS client library for us; we can then also minify it in gulp before packaging. Again, bower is the technology for distributing client packages, so after this we push to GitHub, but I'll do another blog post about that.

The TypeScript output you get from codegen is AngularJS based, which is fine as "most" of our apps use Angular already; however, a couple of our legacy ones don't, so the client proxy object that is created needs a bit of work to inject its dependencies.

Below is an example of a JavaScript module that I use to wrap the AngularJS service and return it as a JavaScript object with the Angular dependencies injected:


var apiClient = (function (global) {
    var ClientProxyMod = angular.module("ClientProxyMod", []);
    ClientProxyMod.value("basePath", "http://MyWebAPILive.com/"); // normally I'd have a settings.js file where I would store this
    ClientProxyMod.service("MyWebAPIController1", ['$http', '$httpParamSerializer', 'basePath', API.Client.MyWebAPIController1]);
    var prx = angular.injector(['ng', 'ClientProxyMod']).get('MyWebAPIController1');
    return {
        proxy: prx
    };
}());

You will need to do the above once for each controller you have in your WebAPI project; the codegen outputs one service per controller.

One of the dependencies of the service that codegen creates is "basePath"; this is the URL of the live service, so I pass it in as a value. You will need to add this value to your AngularJS module when using the service in an AngularJS app as well.

Using basic auth in AngularJS is pretty straightforward because you can set it on the $http object, which is exposed as a property on the service:


apiClient.proxy.$http.defaults.headers.common['Authorization'] = "Basic " + btoa(username + ":" + password);

Then you can simply call your methods from this apiClient.proxy object.