Gulp in Visual Studio and building with TeamCity

Starting with VS 2015, Node is built in, along with the Task Runner Explorer. I still have occasional issues with the Task Runner Explorer, but it is definitely a must-have.

These days in Visual Studio I normally create a separate project for my static content, for a couple of reasons:

  1. The F5 experience serves the static content from a different site, which is closer to live, as I generally send the static content to a CDN
  2. The static content is packaged into a single separate package, so I don’t have to pull files out into a separate package for the CDN (personally I use OctoPack at the moment, which works per VS project)

The gulpfile is easy: just add a JavaScript file named gulpfile.js to the project root and it’ll get picked up.


I find it easier to work with the uncompressed files locally, so I throw in a variable to cater for accessing either the live or the local versions:

Dim compress_string As String = ".min"
#If DEBUG Then
compress_string = ""
#End If

Dim lk4 As New HtmlControls.HtmlGenericControl("script")
lk4.Attributes.Add("src", StaticLocation & "js/default" & compress_string & ".js")
lk4.Attributes.Add("language", "javascript")
lk4.Attributes.Add("type", "text/javascript")

NOTE: Please don’t troll me about the VB code, just use a converter if you don’t understand it or want to be a script kiddy 🙂

You can test it locally by running a build. You’ll want to add some lines to your .tfignore for the “node_modules” folder so it doesn’t pick up the npm files. I also add a line for *.min.js so it doesn’t check in my minified content when it builds locally.
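The .tfignore lines in question might look something like this (a minimal sketch; adjust the patterns to match your output folders):

```
node_modules
*.min.js
*.min.css
```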

Below is an example file from one of my projects. I’m not using concat on this one because I don’t have a lot of JS in this project, but if you’ve got more than two files it’s generally good to concat them as well.

/// <binding AfterBuild='default, clean, scripts, minify' />

// include plug-ins
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var del = require('del');
var rename = require('gulp-rename');
var minifyCss = require('gulp-minify-css');

var config = {
    //Include all js files but exclude any min.js files
    src: ['js/**/*.js', '!js/**/*.min.js']
};

//Delete the output file(s)
gulp.task('clean', function () {
    return del(['js/*.min.js', 'css/*.min.css']);
});

//Process javascript files: uglify and rename to *.min.js
gulp.task('scripts', function () {
    return gulp.src(config.src)
        .pipe(uglify())
        .pipe(rename({ suffix: '.min' }))
        .pipe(gulp.dest('js'));
});

//Process css files: minify and rename to *.min.css
gulp.task('minify', function () {
    return gulp.src(['css/**/*.css', '!css/**/*.min.css'])
        .pipe(minifyCss())
        .pipe(rename({ suffix: '.min' }))
        .pipe(gulp.dest('css'));
});

//Set a default task that cleans first, then runs the others
gulp.task('default', ['clean'], function () {
    gulp.start('minify', 'scripts');
});

This minifies all my CSS files to new files named oldfilename.min.css and uglifies all JS files to oldfilename.min.js.

The Node plugin for TeamCity is available here; this will give you npm steps in your build templates.

You’ll need to add two steps to your project:

1. A Node.js NPM step to install the required npm modules


The install command will simply pick up the dependencies from package.json and get the files. NOTE: this adds a shitload of time to your builds (upwards of 2 minutes); I’m still looking at ways to solve this.
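For reference, those dependencies live in a package.json next to the gulpfile; a sketch along these lines (the package name and version ranges are illustrative only):

```json
{
  "name": "static-content",
  "version": "1.0.0",
  "private": true,
  "devDependencies": {
    "gulp": "^3.9.0",
    "gulp-concat": "^2.6.0",
    "gulp-uglify": "^1.5.0",
    "del": "^2.2.0",
    "gulp-rename": "^1.2.0",
    "gulp-minify-css": "^1.2.0"
  }
}
```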

2. A Gulp step to run the gulpfile


Just set the working directory to the root of the project that has the gulp file.

I haven’t tried doing gulp for multiple projects. So far I’ve been primarily using it for JS and CSS, and I generally create one static content project per solution that all the other projects share, so I can’t comment on this yet.

After that you should have a working solution. As you can see below, the issue I have now is build times: 2 min 34 secs of downloading to run a job that takes 4 seconds.


If you are using OctoPack, you will not get the output files in the NuGet packages, because they aren’t included in the Visual Studio project file.


I’ve worked around this by adding a nuspec file to my solution. The example below, from one of my projects, shows including just the minified CSS and JS content in the NuGet package for Octopus.

<?xml version="1.0"?>
<package xmlns="">
  <metadata>
    <!-- id and version are placeholders; substitute your own -->
    <id>MyProject.StaticContent</id>
    <version>1.0.0</version>
    <authors>Your name</authors>
    <owners>Your name</owners>
    <description>Static content package for Octopus deployment</description>
  </metadata>
  <files>
    <file src="css\*.min.css" target="css" />
    <file src="js\*.min.js" target="js" />
    <file src="img\**\*" target="img" />
    <file src="Deploy.ps1" target="" />
    <file src="Web.config" target="" />
  </files>
</package>

SendGrid Initial Setup in Azure

Looking at a basic email setup for an Azure-hosted app, SendGrid offers a free low-volume (25k emails a month) solution that is well rounded.

You can easily add a free SendGrid account by using the Azure marketplace to add it to your existing Azure account.


Then once added you can click into it to get your username and password using the “Connection Info” link at the bottom.


Once you’ve got these you can install the SendGrid libraries via the NuGet Package Manager:

PM> Install-Package SendGrid

Below is an example of a method that sends an email with SendGrid. It’s pretty similar to standard SMTP mail in the .NET Framework libraries.

Public Sub SendAnEmail(mailId As Integer, FromAddress As String _
    , ToAddress As String, CCAddress As String _
    , BCCAddress As String, Subject As String, Body As String)

    ' Create the email object first, then add the properties.
    Dim myMessage = New SendGridMessage()

    ' Add the message properties.
    myMessage.From = New MailAddress(FromAddress)

    ' Add multiple addresses to the To field.
    Dim recipients As New List(Of [String])() From { _
        ToAddress _
    }
    myMessage.AddTo(recipients)

    If CCAddress.Length <> 0 Then
        myMessage.AddCc(CCAddress)
    End If
    If BCCAddress.Length <> 0 Then
        myMessage.AddBcc(BCCAddress)
    End If

    myMessage.Subject = Subject

    ' Add the HTML body.
    myMessage.Html = Body

    ' Disable click-through tracking (see the note below).
    myMessage.DisableClickTracking()

    ' Create credentials, specifying your user name and password.
    Dim SGUser As String = ConfigurationManager.AppSettings("SGUser")
    Dim SGPass As String = ConfigurationManager.AppSettings("SGPass")
    Dim credentials = New NetworkCredential(SGUser, SGPass)

    ' Create a Web transport for sending email.
    Dim transportWeb = New SendGrid.Web(credentials)

    ' Send the email.
    transportWeb.DeliverAsync(myMessage)

End Sub

There are a few other steps I usually do too. You will note in the above method I’ve set the click-through tracking to disabled. This is because I have had issues with it before, with the links not working in some odd mail clients.

Also, by default SendGrid will “process” your bounces, so you’ll need to log in to their dashboard to find them. Most of my users don’t want another dashboard to log in to, so I normally set up an auto-forward. This can be set up in their interface as per the below shot.


If you need to access the bounce history its under the “Suppression” section.

I also recommend setting up the white label.


I’ll do another post about setting this up as it’s not easy. SPF and DKIM are essential to have set up, but can be a pain in the ass to get going; SendGrid does make the process easier though.

TeamCity, TFVC and Octopus Branching

I’ve recently implemented a TeamCity build on one of my old projects in TFVC and was surprised that TeamCity’s branching support is focused around Git and Mercurial. So, as per usual, I coded my way out of a hole, and here’s how I did it.

This is just using a basic example of dev and main branches, but I normally use feature branching for this project too, and version/release branches in some of my other TFVC projects as well.

In TFVC my branches are basically just sub-folders under the solution root, and I put my .tfignore at the root, as this sits outside of the core code IMO.


For the VCS root I use the root of the solution, so it includes all branches when it syncs. As TeamCity “syncs” files with source control and doesn’t re-download them all the time like TeamBuild, having a lot of branches isn’t an issue unless you need to do a clean build.

In the build parameters, create a parameter called BranchName.


Then create a PowerShell step as the first step in the build configuration. The example code below checks the files that were included in the build change and sets the value of the parameter. If you run a build without a check-in it’s not going to have any changes attached to it, so in that case I default to the dev branch.

$content = [IO.File]::ReadAllText($args[0])
if($content.length -eq 0)
{
    Write-Host "No information, defaulting to Dev"
    $branch = "dev"
    Write-Host "##teamcity[setParameter name='BranchName' value='$branch']"
}
else
{
    $branch = $content.Split("/")[0]
    Write-Host "##teamcity[setParameter name='BranchName' value='$branch']"
}

This code will only look at the first file in the check-in, because it assumes you won’t be checking in to multiple branches at once.


Once you’ve done this, you’ve got the branch name from the file system and can use it in your other steps.

In my case I use it in the “OctopusDeploy: Create Release” step.


The end result in Octopus looks like the below.


In a subsequent post I’ll go into how to set up steps in Octopus to prevent the dev branch from getting released to live.

When your SQL database is stuck in single user mode

A common problem we have is when a restore of a dev database is done (usually on a weekly schedule in our environments) and for some reason it fails, leaving the database stuck in single user mode.

We have a lot of things working on timers that poll the database, so these end up stealing the single connection and we can’t access it.

So here is the script we use to kill all processes and set the database back to multi-user mode; it sometimes needs a few goes to work.

USE master

-- Build a batch of KILL commands for every session connected to the database
DECLARE @kill varchar(8000) = '';
SELECT @kill = @kill + 'kill ' + CONVERT(varchar(5), spid) + ';'
FROM master..sysprocesses
WHERE dbid = db_id('mDB')

EXEC(@kill);

-- Then put the database back into multi-user mode
ALTER DATABASE mDB SET MULTI_USER


Backwards compatibility of Incremental Database Change

When managing large SQL databases, it’s important to keep your updates transactional and as non-blocking as possible to keep uptime high.

When deploying incremental change, you can do it in a way that your changes are backwards compatible with old code and have less of an impact.

A lot of organisations I’ve worked with pre-deploy their DB changes long before their code to make cut-over seamless. Here are a few practical tips on making your DB changes backwards compatible and low impact.


Tables are easy for schema changes:

  • Never change your column order
  • Never remove a column (if you don’t need it then ignore it)
  • Never change a datatype; if you “really” need to, then create a new column and migrate the old data
  • When adding new columns always use nullable types (where possible)

The last point really depends on the size of your table. When you add a new nullable column, even to massive tables, the transaction will complete in milliseconds. If you set NOT NULL you will need to specify a default, which will essentially update every row in the table with the new value; on large tables this can take minutes or even hours depending on your table size and capacity.

Below is an example of a “good”, nullable update:

ALTER TABLE dbo.affiliate ADD addressStreet nvarchar(255) NULL
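For contrast, a sketch of the “bad” variant described above (the addressCountry column and its default are made up for illustration): adding the column as NOT NULL forces a default, and writing that default into every existing row is what makes it slow on large tables.

```sql
-- Avoid on large tables: every existing row gets rewritten with the default
ALTER TABLE dbo.affiliate ADD addressCountry nvarchar(255) NOT NULL DEFAULT ('Unknown')
```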

Applying indexes on the fly is usually OK for small to mid-size tables that aren’t under heavy transactional load, but if you have high OLTP load on large or massive tables you might want to arrange an outage window to apply the indexes, or at the very least schedule them overnight, when the stakeholders won’t notice your applications stop responding for several minutes 🙂

If a web site goes offline and no one is there to see it, is it really offline?

For adding constraints on the fly, or FKs with constraints, you’ll need to consult another source; I don’t use constraints a lot because they put a lot of potential load/risk on the OLTP tables I work with.


Procedures are pretty straightforward too:

  • Never remove a parameter
  • Always assign a default to a new parameter

The last point raises the question though: what should the default be?

90% of the time it’s going to be NULL. Let’s take a look at a few basic examples.

1. Procedures that run an UPDATE

If we look at this code example

CREATE PROCEDURE dbo.UpdateAffiliate
 @affilaiteId int,
 @affiliateName VARCHAR(255)
AS
UPDATE affiliate
SET affiliateName=@affiliateName
WHERE affilaiteId=@affilaiteId

First we need to think about the code that calls the proc we are updating, so that both the old and new code can call the proc and not fail. Here is an example of calling code:

EXEC UpdateAffiliate @affilaiteId=1,@affiliateName='Big Affiliate'

If we want to add a new parameter, we simply do so by giving it a default value:

CREATE PROCEDURE dbo.UpdateAffiliate
 @affilaiteId int,
 @affiliateName VARCHAR(255),
 @addressStreet VARCHAR(255)=null
AS
UPDATE affiliate
SET affiliateName=@affiliateName,
addressStreet=@addressStreet
WHERE affilaiteId=@affilaiteId

This means that when the old code calls the proc it will still run, and simply sets the new column to null.

Where this doesn’t work is when you are going to run both versions in parallel for an extended period.

If that is the case, using the above example, suppose someone updates the affiliate record with the new code:

EXEC UpdateAffiliate @affilaiteId=1,@affiliateName='Big Affiliate', @addressStreet='12 MyStreet Road'

Then a user updates it using the old version:

EXEC UpdateAffiliate @affilaiteId=1,@affiliateName='Big Affiliate'

This will cause the new field (addressStreet in this case) to be set to null.

If you are doing a cut over this is not an issue though.

The solution to this is to add a CASE expression:

CREATE PROCEDURE dbo.UpdateAffiliate
 @affilaiteId int,
 @affiliateName VARCHAR(255),
 @addressStreet VARCHAR(255)=null
AS
UPDATE affiliate
SET affiliateName=@affiliateName,
-- Note: IS NULL is required here; CASE @addressStreet WHEN NULL never matches
addressStreet=CASE WHEN @addressStreet IS NULL THEN addressStreet ELSE @addressStreet END
WHERE affilaiteId=@affilaiteId

This will work, unless you actually need to set the field to NULL for some reason. In that case you will need to use a sentinel value the field will never legitimately be set to as the default, for example a tab character, and test for that in your CASE expression.
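A sketch of that sentinel approach (the sentinel string is made up for illustration; T-SQL parameter defaults must be literal constants, so a visible placeholder is used here where the text suggests a tab):

```sql
CREATE PROCEDURE dbo.UpdateAffiliate
 @affilaiteId int,
 @affiliateName VARCHAR(255),
 -- Sentinel default: a value the field will never legitimately contain
 @addressStreet VARCHAR(255) = '<unset>'
AS
UPDATE affiliate
SET affiliateName = @affiliateName,
 -- Sentinel means the caller omitted the parameter: keep the existing value.
 -- An explicit NULL no longer matches, so it passes through and clears the field.
 addressStreet = CASE WHEN @addressStreet = '<unset>' THEN addressStreet ELSE @addressStreet END
WHERE affilaiteId = @affilaiteId
```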

2. Procedures that return data

For SELECT statements it depends on what your code does on the other end, but most of the time if you stick to these rules you’ll be OK:

  • Don’t change column names (not even case)
  • Don’t reorder columns
  • Only add new columns

Most languages will simply ignore new columns when they handle data, though some are case-sensitive in their column handling, and some developers are mentally unstable and use ordinals instead of column names.

3. Procedures that INSERT data

Inserting is a little easier than updating; again, just use a default value. And if your column has a default value, that should be the default in your proc, not NULL as in the above example.
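A sketch of what that looks like for an INSERT proc (the isActive column and its default of 1 are made up for illustration):

```sql
CREATE PROCEDURE dbo.InsertAffiliate
 @affiliateName VARCHAR(255),
 -- The parameter default mirrors the column default, so rows inserted by
 -- old callers look the same as rows inserted outside the proc
 @isActive bit = 1
AS
INSERT INTO affiliate (affiliateName, isActive)
VALUES (@affiliateName, @isActive)
```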