Dependency Injection Recommendations

I recently started working on a project that has a class called “UnityConfiguration” containing 2,000 lines of this:

container.RegisterType<ISearchProvider, SearchProvider>();

This fast becomes unmanageable. “Wait,” I hear you say, “not all types are registered in the same way!” True, and you won’t get away with a single line to wire up your whole IoC container, but you should be able to get it well under 50 lines of code, even in big projects.

I prefer to go a bit Hungarian and file things into folders/namespaces by their type, then use the IoC framework to load dependencies based on that structure. This works because the type is generally where the registration differences lie.

For example, I put all my attributes under a namespace called “Attributes” (with sub-folders if there are too many, of course), and so on.

Below is an example from a WebAPI application I have worked on in the past. This technique is called assembly scanning and is covered in the Autofac doco here.

var containerBuilder = new ContainerBuilder();

containerBuilder.RegisterAssemblyTypes(Assembly.GetExecutingAssembly())
    .Where(t => t.IsInNamespace("Company.Project.WebAPI.Lib")).AsImplementedInterfaces();
containerBuilder.RegisterAssemblyTypes(Assembly.GetExecutingAssembly())
    .Where(t => t.IsInNamespace("Company.Project.WebAPI.Attributes")).PropertiesAutowired();
containerBuilder.RegisterAssemblyTypes(Assembly.GetExecutingAssembly())
    .Where(t => t.IsInNamespace("Company.Project.WebAPI.Filters")).PropertiesAutowired();

_container = containerBuilder.Build();

You can see from the above code that things like the attributes and filters require PropertiesAutowired, as I use property injection as opposed to constructor injection, since these types require a parameterless constructor. So I end up with roughly one registration line per sub-folder in my project.

So as long as I keep my filing system correct I don’t have to worry about maintaining a giant “Configuration” class for my IoC container.

You can also make use of modules in Autofac by implementing the Module class. I recommend using these for libraries external to your project that you want to load in. You can use the RegisterAssemblyModules method in Autofac in a similar way.
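For instance, a module for a hypothetical logging library might look something like the sketch below (the LoggingModule, ILogger and FileLogger names are just illustrative, not from a real library):

```csharp
using Autofac;

// Illustrative types standing in for an external library's API.
public interface ILogger { void Log(string message); }
public class FileLogger : ILogger
{
    public void Log(string message) { /* write to a file */ }
}

// The module keeps the external library's registrations in one place,
// out of your application's own wire-up code.
public class LoggingModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder.RegisterType<FileLogger>().As<ILogger>().SingleInstance();
    }
}
```

You can then pull in every module in that assembly with containerBuilder.RegisterAssemblyModules(typeof(LoggingModule).Assembly); rather than registering its types one by one.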






GitHub Pull Request Merge Ref and TeamCity CI fail

GitHub has an awesome feature that allows us to build on the potential merge result of a pull request.

This allows us to run unit and UI tests against the result of a merge, so we know with certainty that it works, before we merge the code.

To get this working with TeamCity is a pain in the ass though.

Let’s look at a basic workflow with this:

First we will look at two active pull requests, which we are about to merge.


Pull request 2 advertises the /head ref (the actual branch) and the /merge ref (the result “if we merged”).

TeamCity says you should tie your builds to the /merge ref for CI; this builds the merge result, and I agree.

However, let’s look at what happens in GitHub when we merge in Feature 1.


The new code goes into master, which will recalculate the merge result on Pull request 2. TeamCity correctly builds the merge reference and validates that the Pull Request will succeed.

However if we look in GitHub we will see the below


It now blocks you and prompts you to update your branch.

After you click this, the /head and /merge refs will update, as it adds a commit to your branch and recalculates the merge result again. Then you need to wait for another build to validate the new commit on your branch.


This now triggers a second build. And when it completes, you can merge.

The issue here is that we are double building. There are two solutions as I see it:

  1. GitHub should allow you to merge without updating your branch
  2. TeamCity should allow you to trigger from one ref and build on a different one

I was able to implement the second option using a build configuration that calls the TeamCity API to trigger a build. However, my preference would be number 1, as it is more automated.


Inside it looks like this


Below is example PowerShell that is used in the trigger build. We had an issue with the SSL cert (even though it wasn’t self-signed) so we had to disable the check for it to work.

add-type @"
using System.Net;
using System.Security.Cryptography.X509Certificates;
public class TrustAllCertsPolicy : ICertificatePolicy {
    public bool CheckValidationResult(
        ServicePoint srvPoint, X509Certificate certificate,
        WebRequest request, int certificateProblem) {
        return true;
    }
}
"@
[System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy

$buildBranch = ""   # populated from TeamCity, e.g. the %teamcity.build.branch% parameter
Write-Host $buildBranch
$buildBranch = $buildBranch.Replace("/head","/merge")
$postbody = "<build branchName='$buildBranch'>
<buildType id='%TargetBuildType%'/>
</build>"
Write-Host $postbody
$user = '%TeamCityUser%'
$pass = '%TeamCityPassword%'

$secpasswd = ConvertTo-SecureString $pass -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential($user, $secpasswd)

Invoke-RestMethod https://teamcity/httpAuth/app/rest/buildQueue -Method POST -Body $postbody -Credential $credential -Headers @{"accept"="application/xml";"Content-Type"="application/xml"}

You will see that we replace /head with /merge in the branch name, so we only trigger after someone clicks the update branch button.

Also, don’t forget to add a VCS trigger with the file-change rule “+:.”, so that it will only run builds when there are changes.


We are running with this solution this week, and I am going to put a request in to GitHub support about option 1.

This is a really big issue for us as we have 30-40 open pull requests on our repo, so double building creates a LOT of traffic.

If anyone has a better solution please leave some comments.





TeamCity and avoiding redownloading of npm packages

I haven’t been able to find a better solution than this. My original post on Stack Overflow is here.

TeamCity clears out the node_modules folder on every build, so when running npm install it re-downloads them all. So I’ve moved to using a PowerShell step to back up and restore this folder at the end and start of each build respectively.

Here is the PS step I run at the start:

param (
    [string]$projId,
    [string]$WorkDir
)

if (Test-Path "c:\node_backup\$projId\node_modules") {
    Move-Item "c:\node_backup\$projId\node_modules" "$WorkDir\node_modules"
}

And here is the one at the end

param (
    [string]$projId,
    [string]$WorkDir
)

If (!(Test-Path "c:\node_backup")) {
    mkdir "c:\node_backup"
}
If (!(Test-Path "c:\node_backup\$projId")) {
    mkdir "c:\node_backup\$projId"
}
If (Test-Path "c:\node_backup\$projId\node_modules") {
    Remove-Item "c:\node_backup\$projId\node_modules" -Recurse -Force
}

Move-Item "$WorkDir\node_modules" "c:\node_backup\$projId"

Then in the script arguments I pass this:

-projId -WorkDir %NodeWorkingDirectory%

The NodeWorkingDirectory is a parameter I set in my projects that I use to tell node where my gulp file is.


Running up TeamCity build Agents on HyperV

I had to run up a bucket load of build servers in a Hyper-V environment the other day, so I decided to automate the process a little.

I have also started using RAM disks for the agents too, to speed them up.

Steps for the build server load were:

  1. Install OS (Win Server 2012 R2)
  2. Install SQL 2012 Express
  3. Install VS 2015
  4. Install the TeamCity build agent and point it at drive E:
  5. Install RAMDisk (here)
  6. Run windows updates

If you are unsure where to get the build agent installer from, click the Agents tab in TeamCity and there is a link in the top left.


At this point move the build agent to the RAM disk by:

  1. Stop Team City Build agent service
  2. Change the drive letter of E: to F:
  3. Create a RAM drive as E: with the image file on F: and save-to-disk on shutdown
    1. I used a 5GB drive and it handles it OK
  4. Copy the build agent folder from F: to E:
  5. Start Build agent service

After this I also installed IIS and disabled the firewall.

Lastly sysprep the server and check OOBE/Shutdown

Then I deleted the VM I was using and kept the VHDX file that was generated to use as a template; I backed it up to use on other hosts in future as well.

Then I wrote the below script that runs on the host. It does the following for me:

  1. Creates a new VHDX using a diff based on the original (makes run up really fast)
  2. Creates a VM that uses that VHDX
  3. Sets some values that I can’t set in the New-VM command
  4. Changes the network config of the guest


param (
    [string]$MachineName,
    [string]$IPv4Address
)

Function Set-VMNetworkConfiguration {
    Param (
        [Parameter(Mandatory=$true, ValueFromPipeline=$true)]
        $NetworkAdapter,

        [String[]]$IPAddress = @(),
        [String[]]$Subnet = @(),
        [String[]]$DefaultGateway = @(),
        [String[]]$DNSServer = @(),
        [Switch]$Dhcp
    )

    $VM = Get-WmiObject -Namespace 'root\virtualization\v2' -Class 'Msvm_ComputerSystem' | Where-Object { $_.ElementName -eq $NetworkAdapter.VMName }
    $VMSettings = $VM.GetRelated('Msvm_VirtualSystemSettingData') | Where-Object { $_.VirtualSystemType -eq 'Microsoft:Hyper-V:System:Realized' }
    $VMNetAdapters = $VMSettings.GetRelated('Msvm_SyntheticEthernetPortSettingData')

    $NetworkSettings = @()
    foreach ($NetAdapter in $VMNetAdapters) {
        if ($NetAdapter.Address -eq $NetworkAdapter.MacAddress) {
            $NetworkSettings = $NetworkSettings + $NetAdapter.GetRelated("Msvm_GuestNetworkAdapterConfiguration")
        }
    }

    $NetworkSettings[0].IPAddresses = $IPAddress
    $NetworkSettings[0].Subnets = $Subnet
    $NetworkSettings[0].DefaultGateways = $DefaultGateway
    $NetworkSettings[0].DNSServers = $DNSServer
    $NetworkSettings[0].ProtocolIFType = 4096 # IPv4

    if ($Dhcp) {
        $NetworkSettings[0].DHCPEnabled = $true
    } else {
        $NetworkSettings[0].DHCPEnabled = $false
    }

    $Service = Get-WmiObject -Class "Msvm_VirtualSystemManagementService" -Namespace "root\virtualization\v2"
    $setIP = $Service.SetGuestNetworkAdapterConfiguration($VM, $NetworkSettings[0].GetText(1))

    if ($setIP.ReturnValue -eq 4096) {
        # long-running WMI job: poll until it leaves the Starting/Running states
        $job = [WMI]$setIP.Job
        while ($job.JobState -eq 3 -or $job.JobState -eq 4) {
            Start-Sleep 1
            $job = [WMI]$setIP.Job
        }
        if ($job.JobState -eq 7) {
            Write-Host "Success"
        } else {
            Write-Host "Failed: $($job.ErrorDescription)"
        }
    } elseif ($setIP.ReturnValue -eq 0) {
        Write-Host "Success"
    }
}

Write-Host $MachineName

New-VHD -Path "D:\Hyper-V\Diff.$MachineName.vhdx" -ParentPath "D:\Hyper-V\agent_template2.VHDX" -Differencing

New-VM -Name $MachineName -MemoryStartupBytes 6024000000 -Generation 2 -BootDevice VHD -VHDPath "D:\Hyper-V\Diff.$MachineName.vhdx" -SwitchName "10.0.0.xx"

Set-VM -Name $MachineName -DynamicMemory -ProcessorCount 8
Write-Host $IPv4Address

# subnet, DNS and gateway are hard-coded for our environment
Get-VMNetworkAdapter -VMName $MachineName -Name "Network Adapter" | Set-VMNetworkConfiguration -IPAddress $IPv4Address -Subnet "255.255.255.0" -DNSServer "10.0.0.xx" -DefaultGateway "10.0.0.xx"

The above script takes parameters for the new machine name and the IP you want to give the server. I have hard-coded the subnet, gateway and DNS, but these should probably be made parameters too, depending on your environment.

After this I just have to log in to the agents and domain-join them after they are spun up. I used to use VMM, which would domain-join out of the box, but it looked painful to do from a script on the host so I have left it.

I also have to authorize them on the TeamCity server.


Connecting TeamCity to GitLab with a self-signed SSL

So I spent hours today beating my head against a wall and cursing the JRE, so a pretty normal day for me.

I had to connect our TeamCity server to the GitLab server. The GitLab server uses an SSL cert that was generated from the AD Domain CA, so it is trusted by all the domain machines. Our TC server is on the domain as well, and when connecting to the https site it comes up as green.

However, when connecting to git through TeamCity, it is running inside the JRE, which for some reason doesn’t use the machine trust store; it has its own cert store you need to add the cert to.

Here’s the error I was facing:


List remote refs failed: PKIX path building failed: unable to find valid certification path to requested target


To test the trust from the JRE you need to run this:

java SSLPoke git.mycompany.local 443

Where git.mycompany.local is your GitLab server.

You can get the SSLPoke class here.

If it’s untrusted you will see an error here.

You can use your web browser to export the public key.


Most docs tell me that you can export your root CA public cert, but this didn’t work for me, I actually had to export the specific cert for this site.

Then use this command line to import the cert into JRE and restart TeamCity.

C:\TeamCity\jre\bin>C:\TeamCity\jre\bin\keytool.exe -importcert -trustcacerts -file C:\MyGitLabSSLCert.cer -alias MyGitLabSSLCert -keystore "C:\TeamCity\jre\lib\security\cacerts"

After this we are in business!




Swagger/Swashbuckle and WebAPI Notes

If you aren’t using Swagger/Swashbuckle on your WebAPI project, you may have been living under a rock; if so, go out and download it now 🙂

It’s a port from a node.js project that rocks! And MS is really getting behind it in a big way. If you haven’t heard of it before, imagine WSDL for REST with a snazzy web UI for testing.

Swagger is relatively straightforward to set up with WebAPI; however, there were a few gotchas that I ran into that I thought I would blog about.

The first one we ran into is so common that MS has a blog post about it. The issue deals with an exception you’ll get logged due to the way Swashbuckle auto-generates the operation ID from the method names.

A common example is when you have methods like the following:

GET /api/Company // Returns all companies

GET /api/Company/{id} // Returns company of given ID

In this case the swagger operation IDs will both be “Company_Get”. The generation of the swagger JSON content will still work, but if you try to run AutoRest or swagger-codegen on it, they will fail.

The solution is to create a custom attribute to apply to the methods, like so:

// Attribute
namespace MyCompany.MyProject.Attributes
{
    public sealed class SwaggerOperationAttribute : Attribute
    {
        public SwaggerOperationAttribute(string operationId)
        {
            this.OperationId = operationId;
        }

        public string OperationId { get; private set; }
    }
}

// Operation filter
namespace MyCompany.MyProject.Filters
{
    public class SwaggerOperationNameFilter : IOperationFilter
    {
        public void Apply(Operation operation, SchemaRegistry schemaRegistry, ApiDescription apiDescription)
        {
            operation.operationId = apiDescription.ActionDescriptor
                .GetCustomAttributes<SwaggerOperationAttribute>()
                .Select(a => a.OperationId)
                .FirstOrDefault() ?? operation.operationId;
        }
    }
}

// SwaggerConfig.cs file
namespace MyCompany.MyProject
{
    public class SwaggerConfig
    {
        private static string GetXmlCommentsPath()
        {
            return string.Format(@"{0}\MyCompany.MyProject.XML",
                AppDomain.CurrentDomain.BaseDirectory);
        }

        public static void Register()
        {
            var thisAssembly = typeof(SwaggerConfig).Assembly;

            GlobalConfiguration.Configuration
                .EnableSwagger(c =>
                    {
                        c.OperationFilter<SwaggerOperationNameFilter>();
                        c.IncludeXmlComments(GetXmlCommentsPath());
                        // the above is for the comments doco that I will talk about next

                        // there will be a LOT of additional code here that I have omitted
                    })
                .EnableSwaggerUi();
        }
    }
}



Then apply it like this (the operation IDs here are just examples):

[SwaggerOperation("Company_GetById")]
public Company CompanyGet(int id)
{
    // code here
}

[SwaggerOperation("Company_GetAll")]
public List<Company> CompanyGet()
{
    // code here
}

Also mentioned in the MS article is XML code comments. These are awesome for documentation, but make sure you don’t have any potty-mouth programmers.

This is pretty straightforward, see the setting below.
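The setting in question is the “XML documentation file” checkbox on the project’s Build properties tab, which ends up in the csproj as something like this (the output path here is an example):

```xml
<PropertyGroup>
  <!-- example path; Visual Studio fills this in when you tick the checkbox -->
  <DocumentationFile>bin\MyCompany.MyProject.XML</DocumentationFile>
</PropertyGroup>
```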


The issue we had, though, was packaging them with Octopus, as it’s an output file that is generated at build time. We use the OctoPack NuGet package to wrap up our web projects, so in order to package build-time output (other than bin folder content) we need to create a nuspec file in the project. OctoPack will default to using this instead of the csproj file if it has the same name.

e.g. if your project is called MyCompany.MyProject.csproj, create a nuspec file in that project called MyCompany.MyProject.nuspec.

Once you add a files tag into the nuspec file, this will override OctoPack’s default behaviour of looking up the csproj file for files, but you can override this behaviour again by using the /p:OctoPackEnforceAddingFiles=true msbuild switch.


This will make OctoPack package files from the csproj first, then use what is specified in the files tag in the nuspec file as additional files.

So our files tag just specifies the MyCompany.MyProject.XML file, and we are away and deploying comments as doco!
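As a rough sketch, the nuspec ends up looking something like this (the metadata values are placeholders; the files tag is the part that matters):

```xml
<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
  <metadata>
    <id>MyCompany.MyProject</id>
    <version>1.0.0</version>
    <authors>MyCompany</authors>
    <description>Web project package</description>
  </metadata>
  <files>
    <!-- ship the build-time XML comments file in the package -->
    <file src="bin\MyCompany.MyProject.XML" target="bin" />
  </files>
</package>
```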

We used to use sandcastle so most of the main code comment doco marries up between the two.

Autofac DI is a bit odd with the WebAPI controllers. We generally use DI on the constructor params, but WebAPI controllers require a parameterless constructor, so we need to use properties for DI. This is pretty straightforward: you just need to call the PropertiesAutowired method when registering them, and likewise with the filters and attributes. In the example below I put my filters in a “Filters” folder/namespace, and my attributes in an “Attributes” folder/namespace.

// this code goes in your Application_Start

var containerBuilder = new ContainerBuilder();

containerBuilder.RegisterAssemblyTypes(Assembly.GetExecutingAssembly())
    .Where(t => t.IsInNamespace("MyCompany.MyProject.Attributes")).PropertiesAutowired();
containerBuilder.RegisterAssemblyTypes(Assembly.GetExecutingAssembly())
    .Where(t => t.IsInNamespace("MyCompany.MyProject.Filters")).PropertiesAutowired();

var container = containerBuilder.Build();





Agile-Scrum Interview Questions to a Company from a Candidate

I recently went for a job interview with a company and wanted to know how evolved they were with respect to agile practices. So using the Agile manifesto I created a series of questions to ask them in the interview to rank how evolved they were in following Agile/Scrum processes.

I think if you are looking at your own company you could use these as a method to judge your own progress in your journey into Agile/Scrum.

Below is listed each principle, followed by the question I asked.

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

How do you do a release? Do you hand over to another team for releases? Do you practice continuous delivery?

With the rise of DevOps, pushing releases should be easy these days; if you have a complex process in place for pushing releases to live, it might be a sign that you need change.

How I’ve seen people try to justify complex release/approval processes before is when you have critical systems where any sort of downtime will have a large business impact. You will hear them say things like “We work in e-commerce, any minute offline is lost money” or “We are in finance, one wrong number could cost us large amounts of money”. These statements are true, but an evolved company has things implemented such as A/B testing, and tests that run in the deployment process to verify applications on an inactive node before live traffic is cut over to it. AWS’s Elastic Beanstalk out of the box will run you up a new set of servers in the deployment process that tests can be performed on before a DNS cut-over is done and the old environment is completely shut down.

While you do need to take the context into account, there are few companies I have seen that could not aim for continuous delivery.

Zero-Downtime deployment, and Continuous delivery are the two key words that give you a big tick here.

Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage

How do you handle change in requirements after development work has started, or finished?

If they tell you that they have “Change Requests” that’s a sure sign they aren’t following Agile/Scrum process.

Another common mistake I see people make is tracking this and reporting on it so they can “improve planning”. While I am not saying that you shouldn’t try to plan ahead where possible, trying this will give you a lot of false positives, because one of the theories of scrum is that “the customer doesn’t know what the right product is until they’ve seen the wrong product”.

Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.

How often do you release? How long roughly does it take you to do a release of your product?

Similar to the first question, releases should be done “on demand” by the team. If there is any hand-over process in place, or senior management needs to be involved beyond acceptance testing in the demos, then this might be a sign of problems.

Business people and developers must work together daily throughout the project

Where do your requirements come from? Where are they stored? Who manages them? What contact does this person have with the team?

This question in summary is “Do you have product owners, and are they doing their job?”. Product owners should have daily contact with the team; however, having them in the same room might be too much. The company I went for the interview with has their team, PO and scrum master all in the same desk cluster, and I’m not sure about this 🙂

Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.

Who manages your teams and makes sure they are keeping focused day-to-day?

This is a loaded question. Unmotivated people need people to motivate them and don’t make for a good dev team.

The answer should be no one, because our teams are self-motivated and self-organizing; our scrum master checks in with them daily to make sure they don’t have any impediments and keeps the team free from distractions.

Do you have any regular reporting to upper management that needs to be done inside the sprint?

The answer here should be no; the measure of progress is working software, which is reported in the demo. There may be reports to the business of a rolled-up result of sprints; for example, one feature may take 3 sprints to complete, so at the end of those 3 sprints some additional reporting needs to be done. But beware of anyone that says something like “The output of our daily stand-up in the chat room is emailed to all senior managers in the business”; this means that there is a lack of trust in the organisation.

The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.

Inquire about which ceremonies they conduct, Daily stand-up, retrospective, etc.

Are there any they don’t do? Are there any they do in a chat room? Where are the scrum master and product owner located? Does the team have easy access to them, and vice versa?

In some teams that aren’t co-located this is difficult, but let me tell you from experience that video conferencing is the answer if you aren’t co-located.

While I think chat rooms are an important part of a modern team (Go download slack or hipchat now if you aren’t using them already, but don’t ask me which is better), they should NOT be used for your ceremonies. When you do a planning meeting and you are in the same office as someone you see and hear the emotion in their voice when they are talking, this is vital to good communications.

Also talking as opposed to typing lets people go off on tangents more easily, which generally leads to better innovation. The same is true of Stand ups, retrospectives and so on.

Working software is the primary measure of progress.

How do you measure progress of your dev team?

This one is pretty easy, if they say from the deployed running software then they get a tick.

Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.

How often do your staff work overtime? Is any weekend work required? If so, how often?

If you have staff regularly working overtime or weekends, and this is accepted by the organisation, this is a sign that your pace is not sustainable. You will burn your staff out, they will look for new jobs.

Continuous attention to technical excellence and good design enhances agility.

This is hard to put into a single question, I would start with asking them where they are at with practices like:

  • Unit Testing
  • Test Automation
  • Do they use Dependency Injection
  • etc

These types of practices will change over time, but once you have an indication of where they are at, ask how they improve. Good answers will be things like involvement in open source projects, user groups (VMware, etc), partner programs (Microsoft, Red Hat, etc) and so on.

One of the processes we used to use was “Friday Afternoon Channel9” each week the team members would take turns picking a video from MSDN Channel9 (and sometimes other sources) and we would all watch it together.


The best architectures, requirements, and designs emerge from self-organizing teams.

Do you have team leaders or managers?

If you have a manager then you are not self-organised. By the same token, leaders are bad too; a lot of people are astounded by this concept and would argue that leadership is a good thing.

If you promote leadership then you give your team a head that can be cut off; what happens when your leader goes on holiday for 3 weeks?

It also prevents your team from taking ownership if they are working under someone’s direction. You will end up with a product architected from a single person’s view as opposed to the entire team’s. It also allows the team to say things such as “I was just doing what I was told”; your team needs to own their work, which in the long run will give them more job satisfaction as well.

In short, everyone on your team should be a leader, and the scrum master should be there to mediate any conflicts of direction.

Your scrum master should not be a member of the team either; he/she needs to be impartial to the team’s decisions in order to give good guidance. If your scrum master is a team member, he will end up becoming a leader and you will have problems.

At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

What is the most astounding change one of your teams has proposed in a retrospective that you can remember?

A bit of a loaded question again. If you simply ask them “Do you do retrospectives?” this won’t tell you much; what you need to be asking them is what impact their retrospectives have.

The answer to the above will give you an indication of if they are following the Scrum retrospective process and seeing a positive outcome from it.

One of the keys of Scrum is empirical process control. If the team is not in charge of themselves, then they do not own their actions; retrospectives are a key point in the team shaping their own direction.



Compact Framework Builds in Appveyor

I did an issue for NUnit to get their CI builds running in AppVeyor; previously they had a manual process for Compact Framework checking.

The changes are all in this commit 

To go into a bit of detail: I found a similar requirement in an R project someone had done, so I did something similar, in that we used the appveyor.yml “install” step to execute a PowerShell script that downloads and installs an MSI on the box.
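The appveyor.yml side of this is just a one-liner in the install section (the script path here is illustrative):

```yaml
install:
  - ps: .\tools\InstallCompactFramework.ps1
```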

The bulk of the script below is pretty straightforward. All I’ve done is create a repo for the MSI files, then download and run them with msiexec on the box, with some very verbose checking. With isolated build boxes like AppVeyor, if something goes wrong it’s nice to have a lot of log output.

# Progress is a small logging helper defined earlier in the script

$url = ""   # URL of NETCFSetupv35.msi in our msi repo
Progress("Downloading NETCFSetupv35 from: " + $url)
Invoke-WebRequest -Uri $url -OutFile NETCFSetupv35.msi

$url = ""   # URL of NETCFv35PowerToys.msi in our msi repo
Progress("Downloading NETCFv35PowerToys from: " + $url)
Invoke-WebRequest -Uri $url -OutFile NETCFv35PowerToys.msi

Progress("Running NETCFSetupv35 installer")

$msi = @("NETCFSetupv35.msi","NETCFv35PowerToys.msi")
foreach ($msifile in $msi)
{
    if (!(Test-Path $msifile)) {
        throw "MSI files are not present, please check logs."
    }
    Progress("Installing msi " + $msifile)
    Start-Process -FilePath "$env:systemroot\system32\msiexec.exe" -ArgumentList "/i `"$msifile`" /qn /norestart" -Wait -WorkingDirectory $pwd -RedirectStandardOutput stdout.txt -RedirectStandardError stderr.txt
    $OutputText = get-content stdout.txt
    Progress($OutputText)
    $OutputText = get-content stderr.txt
    Progress($OutputText)
}

# the CF build targets should exist once the installs have run
if (!(Test-Path "$env:windir\Microsoft.NET\Framework\v3.5\Microsoft.CompactFramework.CSharp.targets")) {
    throw "Compact framework files not found after install, install may have failed, please check logs."
}

RegistryWorkaround

You’ll note at the end, though, there is a call to “RegistryWorkaround”. I got these errors after setting the above up:

The “AddHighDPIResource” task failed unexpectedly.
System.ArgumentNullException: Value cannot be null.
Parameter name: path1

After a quick google I found this forum post about the error, and it solved my problem. You can see my workaround below.

$registryPaths = @("HKLM:\SOFTWARE\Microsoft\VisualStudio\9.0\Setup\VS","HKLM:\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\9.0\Setup\VS")
$Name = "ProductDir"
$value = "C:\Program Files (x86)\Microsoft Visual Studio 9.0"

foreach ($registryPath in $registryPaths)
{
    If (!(Test-Path $registryPath)) {
        New-Item -Path $registryPath -Force | Out-Null
    }
    If (!(Test-Path ($registryPath + "\" + $Name))) {
        New-ItemProperty -Path $registryPath -Name $Name -Value $value `
            -PropertyType String -Force | Out-Null
    }
    If (!((Get-ItemProperty -Path $registryPath -Name $Name).ProductDir -eq $value)) {
        throw ("Registry path " + $registryPath + " not set to correct value, please check logs")
    }
    Progress("Registry update ok to value " + (Get-ItemProperty -Path $registryPath -Name $Name).ProductDir)
}

After all that, it adds less than 10 seconds on average to the build, which is good.

And now we have a working Compact Framework build in Appveyor!



Private Nuget Servers – VS Team Services Package Management

A while back I set up a Klondike server for hosting our internal NuGet packages. We use it for both internal libraries and Octopus.

Microsoft recently released the Package Management feature for VSTS (formerly known as VSO). The exciting thing about Package Management is that they have hinted they will include support for npm and bower in future, so you will have a single source for all your package management.


After Installing in VSTS you will get a new “Package” option in the top bar.


From here you can create new feeds. In my case I’ve decided to break up my feeds to one per project, but you could easily create more per project if you had, for example, separate responsibilities where you wanted more granular permissions. You can restrict the Publish and Read rights on the feeds to users or groups within VSTS, so it’s very easy to manage, unlike my hack-around for permissions in my previous post about Klondike.


Now, because we use TeamCity, I have considered creating the build services their own account in VSTS, as they need credentials, but in this example I’m just using my own account.

You will need to change the “pick your tool” option to nuget 2.x to get your credentials to use in the TeamCity Steps.


Then click “Generate nuget Credentials” and grab the username and password out.



Next hop over to your TeamCity Server, and edit/add your build configuration.

It’s important to note that you will require at least TeamCity version 9.1.6 to do this, as there is a fix in here for nuget credentials.

First, jump into “Build Features” and add a set of NuGet credentials with the URL of your feed that you got from the VSTS interface.


Then jump over to your Build steps and edit/add your nuget steps. Below is an example of my publish step.


The API key I’ve set to “VSTS” as per the instructions in the web interface of VSTS.

And we are publishing.


You will see the built packages in the VSTS interface when you are done.


Now if you have an Octopus server like us you will need to add the credentials into it as well into the nuget feeds section.



And its that easy.

One of our concerns about the Klondike server we set up was capacity. Because we have more than 12 developers and run CI with auto-deployment to the development environment, we are generating a large number of packages daily as developers check in/commit work, so over a period of months and years the server has become quite bloated, though to give it credit I am surprised at how long it took to get there.

Some queries are taking upwards of 15-20 seconds at times and we have an issue (which I have not confirmed is related) where packages are randomly “not there” after the build log say they have been successfully published.

I am hoping that the VSTS platform will do us for longer, and it has the added advantage of the granular permissions which we will be taking advantage of as we grow.






Exception Logging and Tracking with Application Insights 4.2

After finally getting the latest version of the App Insights extension installed into Visual Studio, it’s been a breath of fresh air to use.

Just a note: to get it installed, I had to go to installed programs, hit Modify on VS2015, and make sure everything else was updated. Then run the installer 3 times; it failed twice, and the third time worked.

Now it’s installed, I get a new option under my config file in each of my projects called “Search”.


This will open the Search window inside a tab in visual studio to allow me to search my application data. The first time you hit it you will need to login and link the project to the correct store though, after that it remembers.


From here you can filter for and find exceptions in your applications and view a whole host of information about them. Including information about the server, client, etc. But my favorite feature is the small blue link at the bottom.


Clicking on this will take you to the faulting function. It doesn’t take you to the faulting line though (which I think it should), but you can mouse over it to see the line.


One of the other nice features, which was also in the web portal, is the +/- 5 minutes search.


You can use this to run a search for all telemetry within 5 minutes either side of the exception. In the web portal there is also an option of “all telemetry of this session”, which is missing from the VS interface, I hope they will introduce this soon as well.

But the big advantage to this is if you have setup up App Insights for all of your tracking you will be able to see all of the following for that session or period:

  • Other Exceptions
  • Page Views
  • Debug logging (Trace)
  • Custom Events (if you are tracking things like feature usage in JavaScript this is very handy)
  • Raw Requests to the web server
  • Dependencies (SQL calls)

Lets take a look at some of the detail we get on the above for my +/- 5 Minute view

Below is a SQL dependency; this is logging all my queries. So I can see what’s called, when, the time the query took to run, from which server, etc. This isn’t any extra code I had to write: App Insights will track all SQL queries that run from your application out of the box, once set up.


And dependencies won’t just be SQL, they will also be SOAP and REST requests to external web services.

For HTTP request monitoring the detail is pretty basic but useful.


And for page views you get some pretty basic info also. Not as good as some systems I have seen, but definitely enough to help out with diagnosing exceptions.


I’ve been using this for a few days now and find it so easy to just open my solution in the morning and do a quick check for exceptions, narrow them down and fix them. It still needs another version or two before it has all the same features as the web interface, but it’s definitely worth a try if you have App Insights set up.