Your Load Balancer Will Kill You

Let’s start by talking about the traditional way people scale applications.

You have a startup, you have a new idea, so you throw something out there fast; maybe it’s a Rails app with a MongoDB backend if you’re unlucky, or something like that.

Now things are going pretty well. Maybe you have time to rewrite into something sensible at this point, as your business grows and you get more devs, so now you’re on a nice React website with a .NET or Node backend or something. But you’re going slow due to too many users, so you start to scale. The first thing people do is this: put a load balancer in front and horizontally scale the web layer.

Now that doesn’t seem too bad; with a few cache optimizations you’re probably handling a few thousand simultaneous users and feeling happy. But you keep growing, and now you need to handle tens of thousands, so the architecture starts to get broken out vertically.

So let’s imagine something like the below, and if we’ve been good little vegemites we have a good separation of domains, so we are able to scale the database by separating the domains out into microservices on the backend, each with their own independent data.

Our website then ends up looking a bit like a BFF (Backend For Frontend), and we scale nicely, up to tens or into the hundreds of thousands of users. And if you are using AWS especially, you are going to have these lovely Elastic Load Balancers everywhere.

Now when everything is working it’s fine, but let’s look at a failure scenario.

One of the API B servers goes offline: dead, total hardware failure. What happens in the seconds that follow?

To start, let’s look at load balancer redundancy methods. LBs use a health-check endpoint; an aggressive setting would be to ping it every 2 seconds, then after 2 consecutive failures take the node offline.

Let’s also take the example that we are getting 1,000 requests per second from our BFF, spread evenly across 3 API nodes (so each node sees roughly 333 per second).

Second 1
Lose 333 Requests

Second 2
Lose 333 Requests
Health check fails first time

Second 3
Lose 333 Requests

Second 4
Lose 333 Requests
Health check fails second time and LB stops sending traffic to node

So in this scenario we’ve lost about 1300 requests, but we’ve recovered.
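The arithmetic is worth making concrete. Here is a hypothetical helper (the function name and even-spread assumption are mine, matching the 333-per-second figure above):

```typescript
// Estimate requests lost while a dead node is still in rotation.
// rps: total requests per second, nodes: node count,
// detectionSeconds: seconds until the LB pulls the node out.
function lostRequests(rps: number, nodes: number, detectionSeconds: number): number {
  const perNodePerSecond = Math.floor(rps / nodes); // ~333 for 1,000 rps over 3 nodes
  return perNodePerSecond * detectionSeconds;       // losses pile up until detection
}
```

With the numbers above, `lostRequests(1000, 3, 4)` gives 1,332, which is where the "about 1,300" comes from.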

Now you say, but how about we get more aggressive with the health check? This only goes so far.

At scale, the more common outages are not ones where things go totally offline (although this does happen); they are ones where things go “slow”.

Now imagine we have aggressive LB health checks (the ones above are already very aggressive, so you usually can’t get much more aggressive), and things are going “slow” to the point where health checks are randomly timing out. You’ll start to see nodes pop on and offline randomly, and your load will get unevenly distributed, usually to the point where you may even have periods of no nodes online. 503s FTW! I’ve witnessed this first hand; it happens with aggressive health checks 🙂

Next: what happens if your load balancer itself goes offline? While load balancers are generally very reliable, config updates and firmware updates are the times when they most commonly fail, and even then they can still succumb to hardware failure.

If you are running in e-commerce like I have been for the last 15-odd years, then traffic is money; every bit of traffic you lose can potentially be costing you money.

Also, when you start to get to very large scale, the natural entropy of hardware means failure becomes more common. For example, if you have, say, 5,000 physical servers in your cloud, how often will you have a failure that takes applications offline?

And it doesn’t matter if you are running on AWS, Kubernetes, etc.; hardware failure still takes things offline. Your VMs and containers may restart with little to no data loss, but they still go offline for periods.

How do we deal with this then? How about client-side weighted round-robin?

WTF is that? I hear you say. Good Question!

It’s where we move the load balancing mechanism into the client that is calling the backend. There are several advantages to doing this.

This is usually coupled with a service discovery system; we use Consul these days, but there are lots out there.

The basic concept is that the client gets a list of all available nodes for a given service. It will then maintain its own in-memory list of them and round-robin through them, similar to a load balancer.

This removes infrastructure (i.e. cost and complexity)

The big difference, though, is that the client can retry the request on a different node. You can implement retries when you have a load balancer in front of you, but you are in effect rolling the dice; having knowledge of the endpoints on the client side means that retries can be targeted at a different server to the one that errored or timed out.
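As a sketch (the function and names here are hypothetical, not our production client), a targeted retry looks something like this:

```typescript
// Hypothetical sketch: retry a failed call against a *different* node,
// which a client-side balancer can do because it knows every endpoint.
async function callWithRetry<T>(
  nodes: string[],
  request: (node: string) => Promise<T>,
  maxAttempts = 2
): Promise<T> {
  let lastError: unknown;
  const tried = new Set<string>();
  for (let attempt = 0; attempt < maxAttempts && tried.size < nodes.length; attempt++) {
    const node = nodes.find(n => !tried.has(n))!; // skip nodes we already hit
    tried.add(node);
    try {
      return await request(node);
    } catch (err) {
      lastError = err; // try the next node instead of rolling the dice
    }
  }
  throw lastError;
}
```

The key line is the `tried` set: a load balancer cannot promise your retry lands somewhere else, but the client can.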

What’s the Weighting part though?

Each client maintains its own list of failures. So, for example, if a client got a 500 or a timeout from a node, it would weight that node down and start to call it less. This causes a node-specific back-off, which is extremely important in the more common at-scale outage of “it’s slow”: if a particular node has been smashed a bit too much by something and is overloaded, the clients will slowly back off that node.
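A minimal sketch of that weighting follows; the halve-on-failure and creep-back-on-success numbers are assumptions for illustration, the real tuning depends on your traffic:

```typescript
// Hypothetical client-side weighted round-robin: every failure halves a
// node's weight, every success restores it a little, so an overloaded
// ("slow") node is naturally backed off per-client.
class WeightedBalancer {
  private weights = new Map<string, number>();

  constructor(nodes: string[]) {
    nodes.forEach(n => this.weights.set(n, 100)); // full weight to start
  }

  reportFailure(node: string): void {
    const w = this.weights.get(node) ?? 100;
    this.weights.set(node, Math.max(1, Math.floor(w / 2))); // never 0: keep probing
  }

  reportSuccess(node: string): void {
    const w = this.weights.get(node) ?? 100;
    this.weights.set(node, Math.min(100, w + 10)); // recover gradually
  }

  // Pick a node with probability proportional to its weight.
  pick(rand: () => number = Math.random): string {
    const entries = [...this.weights.entries()];
    const total = entries.reduce((sum, [, w]) => sum + w, 0);
    let roll = rand() * total;
    for (const [node, w] of entries) {
      roll -= w;
      if (roll <= 0) return node;
    }
    return entries[entries.length - 1][0];
  }
}
```

Note the floor of 1 on the weight: the client keeps sending the occasional probe request, which is what lets a recovered node earn its traffic back.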

Let’s look at the same scenario as before, with API B and a node going offline. We usually set our timeouts to 500ms to 1 second for our API requests, so let’s say 1 second. As the requests start to fail they will retry on the next node in the list, and weight down the offline server in the local client’s list of servers/weightings. So here’s what it looks like:

Second 1
220 Requests take 1 second longer

Second 2
60 Requests take 1 second longer

Second 3
3 Requests take 1 second longer

Second 4
3 Requests take 1 second longer

Second 5

The round-robin weighting kicks in at the first failure. As we only have 3 web servers in this scenario and they are high volume, the back-off isn’t decremented in periods of seconds, it’s in number of requests.

Eventually we get to the point where we are only trying the API once every few seconds with a request from each web server, until it comes back online, or until the service discovery system kicks in and tells us it’s dead (which usually takes under 10 seconds).

But the end result is 0 lost requests.

And this is why I don’t use load balancers any more 🙂

Importing Custom TypeScript tslint rules into Sonarqube

I’ll be the first to say I am not a fan of SonarQube, but it is the only tool out there that can do the job we need. Getting TypeScript working with it was a royal butt hurt, but we got there in the end, so I wanted to share our journey.

The best way we found to work with it was to store our rules in our tslint config in source control, with our own settings, and use that as the source of truth. This is good because it’ll help keep the SonarQube server rules in sync with the developers.

The problem we ran into is that the rules need to exist on the server. So if, for example, you add the tslint-react rules to your project, they also need to be defined in the SonarQube server.



Once they are there, Sonar understands the rules but will not process them; rather than setting up the processing on the server, we decided to use our build.

So what we do is

  1. Import ALL rules to the Sonar server (one-off)
  2. Run tslint and export the failed rules to a file
  3. Import the failed rules using the Sonar runner (instead of letting the runner do the analysis)

The server is aware of ALL the rules, but it’s our tslint output that tells it which ones have failed. So you can disable rules in your tslint config that the server is aware of, and it won’t report them.

This means that the local developer experience and the SonarQube report should be a lot more in sync than if we had to maintain the server-side processing, and it makes it easier to run multiple projects on the one server with disparate rule sets.

The hard part here, though, is the import of the rules.

For our initial import we did the following rule sets:

  1. tslint-react
  2. tslint-eslint-rules
  3. tslint-consistent-codestyle
  4. tslint-microsoft-contrib

And I have created some PowerShell scripts that generate the format that is needed from the rules’ git repos.

To use this, clone each of the above repos, then run its corresponding script to generate the output file, then copy and paste this into the section in the SonarQube admin page (it’s OK, this is a one-off step).


You should create one record below for each of the four imports, then paste the output from each PowerShell script into the boxes on the right, as seen below.


Once this is done, you need to restart the SonarQube server for the rules to get picked up.

WARNING: check for duplicate rule names. There are some (I forget which ones, sorry) and they prevent the SonarQube server from starting, and you will need to edit the SQL database to fix it.

Then browse to your rule set and activate the rules into it. I recommend just creating a single rule set and putting everything in it; like I said, you can control the rules from your tslint run, and just add all rules to all projects on the SonarQube server side.


After this, run your SonarQube analysis build (see here if you haven’t built it yet) and you are away.


Sonarqube with a MultiLanguage Project, TypeScript and dotnet

SonarQube is a cool tool, but getting multiple languages to work with it can be hard, especially because each language has its own plugin, maintained by different people most of the time, so the implementations differ and for each language you need to learn a new Sonar plugin.

In our example we have a frontend project using React/Typescript and dotnet for the backend.

For C# we use the standard out-of-the-box rules from Microsoft, plus some of our own custom rules.

For TypeScript we follow a lot of the recommendations from Airbnb, but have some of our own tweaks.

In this example I am using an end-to-end build in series, but in reality we use build chains to speed things up, so our actual solution is quite a bit more complex than this.

So the build steps look something like this

  1. dotnet restore
  2. dotnet test, bootstrapped with dotCover
  3. yarn install
  4. tslint
  5. yarn test
  6. SonarQube runner

Note: in this setup we do not get the build test stats in TeamCity, so we cannot block builds on test coverage metrics.

So let’s cover the dotnet side first. I mentioned our custom rules; I’ll do a separate blog post about getting them into Sonar and just cover the build setup in this post.

The dotnet restore setup is pretty simple. We do use a custom nuget.config file for our internal NuGet server; I would recommend always using a custom nuget.config file, as your IDEs will pick it up and use its settings.

dotnet restore MyCompany.MyProject.sln --configfile nuget.config

The dotnet test step is a little tricky; we need to bootstrap it with dotCover.exe, using the analyse command, and output the HTML format that Sonar will consume (yes, Sonar wants the HTML format).

%teamcity.tool.JetBrains.dotCover.CommandLineTools.DEFAULT%\dotcover.exe analyse /TargetExecutable="C:\Program Files\dotnet\dotnet.exe" /TargetArguments="test MyCompany.MyProject.sln" /AttributeFilters="+:MyCompany.MyProject.*" /Output="dotCover.htm" /ReportType="HTML" /TargetWorkingDir=.

echo "this is working"

Lastly, sometimes the exit code on failing tests is non-zero, which causes the build to fail, so putting the second echo line here mitigates this.

For TypeScript we have 3 steps.

yarn install, which just calls that exact command.

Our tslint step is the command-line step below. Again we need a second echo step, because when there are linting errors tslint returns a non-zero exit code and we need the process to still continue.

node ".\node_modules\tslint\bin\tslint" -o issues.json -p "tsconfig.json" -t json -c "tslint.json" -e **/*.spec.tsx -e **/*.spec.ts
echo "this is working"

This will generate an lcov report. Now I need to put a disclaimer here: lcov has a quirk where it only reports coverage on files that were executed during the tests. So if you have code that is never touched by tests, it will not appear in your lcov report; SonarQube will give you the correct numbers. So if you get to the end and find Sonar reporting numbers a lot lower than what you thought you had, this is probably why.

Our test step just runs yarn test, but here is the full command in the package.json for reference.

"scripts": {
"test": "jest –silent –coverage"

Now we have 3 artifacts: two coverage reports and a tslint report.

The final step takes these, runs an analysis on our C# code, then uploads everything

We use the SonarQube runner plugin from SonarSource.


The important thing here is the additional Parameters that are below


You can see our 3 artifacts that we pass in; we also disable the TypeScript analysis and rely on our analysis from tslint. The reason for this is that it allows us to control the analysis from the IDE, and keep the analysis that is done in the IDE easily in sync with the SonarQube server.
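The screenshot of those parameters hasn’t survived, but they amount to properties along these lines. Treat the property names as assumptions to check against your plugin versions (they come from the SonarQube C# and SonarTS plugin documentation), and the paths assume the artifacts from the earlier steps plus jest’s default lcov location:

```properties
# dotCover HTML report from the dotnet test step
sonar.cs.dotcover.reportsPaths=dotCover.htm
# lcov coverage from jest (default output path assumed)
sonar.typescript.lcov.reportPaths=coverage/lcov.info
# tslint issues file, instead of letting the runner re-analyse TypeScript
sonar.typescript.tslint.reportPaths=issues.json
# stop auto-detection from processing stylesheet rules
sonar.exclusions=**/*.scss
```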

Also, if you are using custom tslint rules that aren’t in the SonarQube default list, you will need to import them; I will do another blog post about how we did this in bulk for the 3-4 rule sets we use.

SonarQube without a language parameter will auto-detect the languages, so we exclude files like .scss to prevent it from processing those rules.

This isn’t needed for C# though, because we use the NuGet packages; I will do another blog post about sharing rules around.

And that’s it. Your processing should work and turn out something like the below. You can see in the top right that both C# and TypeScript lines of code are reported, so the reported bugs, code smells, coverage, etc. are the combined values of both languages in the project.


Happy coding!

Comparing Webpack Bundle Size Changes on Pull Requests as a Part of CI

We’ve had some issues where developers haven’t realized it and inadvertently increased the size of our bundles in the work they’ve been doing. So we tried to give them more visibility of the impact of their change on the pull request, by using webpack stats and publishing a compare to the PR for them.

The first part of this is getting webpack-stats-plugin into the solution. I’ve also done a custom version of webpack-compare to output markdown, and to only focus on the files you have changed instead of all of them.

"webpack-compare-markdown": "dicko2/webpack-compare",
"webpack-stats-plugin": "0.1.5"

Then we add yarn commands into the package.json to perform the work of generating and comparing the stats files:

"analyze": "webpack --profile --json > stats.json",
"compare": "webpack-compare-markdown stats.json stats-new.json -o compare"

But what are we comparing against? Here’s where it gets a bit tricky. We need to be able to compare against the latest master. So what I did was: when the build config that runs the compare runs on the master branch, I generate a NuGet package with the stats file and push it up to our local server; this way I can just get the latest version of this package to get the master stats file.


if("" -eq "master")
md pack
copy-item stats.json pack

$nuspec = '<?xml version="1.0" encoding="utf-8"?>
<package xmlns="">
<!-- Required elements-->
<description>Webpack stats file from master builds</description>
<file src="stats.json" target="tools" />

$nuspec >> "pack\ClientSide.WebPackStats.nuspec"
cd pack
%teamcity.tool.NuGet.CommandLine.DEFAULT%\tools\nuget.exe pack -Version %Version%
%teamcity.tool.NuGet.CommandLine.DEFAULT%\tools\nuget.exe push *.nupkg -source -apiKey "%KlondikeApiKey%"

If we are on a non-master branch, we need to download the NuGet package and run the compare to generate the report.

if("" -ne "master")
%teamcity.tool.NuGet.CommandLine.DEFAULT%\tools\nuget.exe install ClientSide.WebPackStats
$dir = (Get-ChildItem . -Directory -Filter "ClientSide.WebPackStats*").Name
move-item stats.json stats-new.json
copy-item "$dir\tools\stats.json" stats.json
yarn compare

Then finally we need to comment back to the GitHub pull request with the report.


$myRepoURL = "%myRepoURL%"
$githubheaders = @{"Authorization"="token $GithubToken"}
# TeamCity's branch parameter looks like "pull/123" on PR builds
$PRNumber = ("%teamcity.build.branch%").Replace("pull/","")

$PathToMD = "compare\index.MD"

if("%teamcity.build.branch%" -ne "master") {

    function GetCommentsFromaPR($CommentsURL) {
        $coms = invoke-webrequest $CommentsURL -Headers $githubheaders -UseBasicParsing
        $coms = $coms | ConvertFrom-Json
        $rtnGetCommentsFromaPR = New-Object System.Collections.ArrayList

        foreach ($comment in $coms) {
            $info1 = New-Object System.Object
            $info1 | Add-Member -type NoteProperty -name ID -Value $comment.id
            $info1 | Add-Member -type NoteProperty -name Created -Value $comment.created_at
            $info1 | Add-Member -type NoteProperty -name Body -Value $comment.Body
            $i = $rtnGetCommentsFromaPR.Add($info1)
        }
        return $rtnGetCommentsFromaPR
    }

    $pr = invoke-webrequest "$myRepoURL/pulls/$PRNumber" -Headers $githubheaders -UseBasicParsing
    $pr = $pr.Content | ConvertFrom-Json

    $commentId = 0
    $CommentsFromaPR = GetCommentsFromaPR($pr.comments_url)
    foreach($comment in $CommentsFromaPR) {
        if($comment.Body.StartsWith("[Webpack Stats]")) {
            Write-Host "Found an existing comment ID " + $comment.ID
            $commentId = $comment.ID
        }
    }

    $Body = [IO.File]::ReadAllText($PathToMD) -replace "`r`n", "`n"
    $Body = "[Webpack Stats] `n" + $Body

    $newComment = New-Object System.Object
    $newComment | Add-Member -type NoteProperty -name body -Value $Body

    if($commentId -eq 0) {
        Write-Host "Create a comment"
        #POST /repos/:owner/:repo/issues/:number/comments
        invoke-webrequest "$myRepoURL/issues/$PRNumber/comments" -Headers $githubheaders -UseBasicParsing -Method POST -Body ($newComment | ConvertTo-Json)
    } else {
        Write-Host "Edit a comment"
        #PATCH /repos/:owner/:repo/issues/comments/:id
        invoke-webrequest "$myRepoURL/issues/comments/$commentId" -Headers $githubheaders -UseBasicParsing -Method PATCH -Body ($newComment | ConvertTo-Json)
    }
}



And we are done! Below is what the output looks like in GitHub.


Happy packing!

Upgrading to Visual Studio 2017 Project file format

The new project file format drops the list of included files, and moves the NuGet references into the csproj; these are the two biggest changes you should be interested in.

These changes will greatly reduce your merge conflicts when you have a lot of developers working on a single project.

There are a couple of pain points though. The first is that VS 2017 won’t update your project files for you, and there is no official tool for this. There is a community one available though; you can download it here.

This tool only does libraries though; if you have a web project you’ll need to edit the file and put in your settings manually, as well as adding “.Web” to the end of the project type:

<Project Sdk="Microsoft.NET.Sdk.Web">

Running this on your project files will convert them. However, we were unlucky enough to have some people who had been excluding files from projects rather than deleting them. So when we converted, a large number of old .cs files came back into the solution and broke it, as the new format includes everything by default and you need to explicitly exclude; the reverse approach from the old format.
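For reference, the new format is tiny. A minimal converted library csproj looks like this (the `Legacy` path is just an illustrative example of an explicit exclude):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net462</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- The old format listed every file; the new one includes **/*.cs by
         default, so stray files must be explicitly removed instead -->
    <Compile Remove="Legacy\**\*.cs" />
  </ItemGroup>
</Project>
```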

So we have some PowerShell we wrote to fix this. First, a PowerShell function to run per project:


param(
    [Parameter(Position=0, Mandatory=$true)]
    [string]$Project,
    [switch]$DeleteFromDisk
)

$ErrorActionPreference = "Stop"
$projectPath = Split-Path $Project
$fileType = "*.cs"   # use "*.vb" for VB projects

$projectFiles = Select-String -Path $Project -Pattern '<compile' | % { $_.Line -split '\t' } | `
% {$_ -replace "(<Compile Include=|\s|/>|["">])", ""} | % { "{0}\{1}" -f $projectPath, $_ }
Write-Host "Project files:" $projectFiles.Count

$diskFiles = gci -Path $projectPath -Recurse -Filter $fileType | % { $_.FullName }
Write-Host "Disk files:" $diskFiles.Count

$diff = (compare-object $diskFiles $projectFiles -PassThru)
Write-Host "Excluded Files:" $diff.Count

#create a text file for log purposes
$diffFilePath = Join-Path $projectPath "DiffFileList.txt"
$diff | Out-File $diffFilePath -Encoding UTF8
notepad $diffFilePath

#just remove the files from disk
if($DeleteFromDisk) {
    $diff | % { Remove-Item -Path $_ -Force -Verbose }
}

Then another script that finds all my csproj files and calls it for each one:

foreach($csproj in (Get-ChildItem . -Recurse -Depth 2 | Where-Object {$_.FullName.EndsWith("csproj")})) {
    .\removeUnused.ps1 -Project $csproj.FullName -DeleteFromDisk
}

You can run it without the DeleteFromDisk flag to just get a text file of what it would potentially delete, so you can test it without deleting any files.


Configurator Pattern

AppSettings and XML config seem to be the staple for ASP.NET developers, but in production they aren’t good for configuration that needs to change on the fly. Modifications to the web.config cause a worker process recycle, and if you use config files external to the web.config, modifying them won’t cause a recycle, but you then need to force a recycle to pick up the changes.

If you are using something like ConfigInjector, with start-up validation of your settings, this might not be a bad thing; however, if you have costly start-up times for your app, for pre-warming caches etc., this may be less than desirable.

Recently we’ve been using Consul to manage our configuration, for both service discovery and the K/V store (replacing a lot of our app settings).

So we’ve started to use a pattern in some of our libraries to manage their settings from code, as opposed to filling our web.config with hordes of XML data.

The way this works is that we store our config in a singleton that is configured programmatically at app startup. This allows us to load in values from whatever source we want, and abstracts away the app settings as a dependency. Then, if at run time you need to update the settings, you can call the same method again.

Then, to make things nice and fluent, we add extension methods to set the configuration on the class, then update the singleton with a Create method at the end.


public class Configurator
{
    public static Configurator Current => _current ?? (_current = new Configurator());
    private static object _locker = new object();
    private static Configurator _current;

    public static Configurator RegisterConfigurationSettings()
    {
        return new Configurator();
    }

    internal bool MySetting1 = true;

    internal int MySetting2 = 0;

    public void Create()
    {
        lock (_locker)
        {
            _current = this;
        }
    }
}

public static class ConfiguratorExt
{
    public static Configurator DisableMySetting1(this Configurator that)
    {
        that.MySetting1 = false;
        return that;
    }

    public static Configurator WithMySetting2Of(this Configurator that, int myVal)
    {
        that.MySetting2 = myVal;
        return that;
    }
}

You could also implement what I have done as extension methods directly in the Configurator class, but I tend to find that when the class gets big it helps to break it up a bit.

This allows us to programmatically configure the library at run time, and pull in the values from wherever we like, for example:

void PopulateFromAppSettings()
{
    // e.g. read values from ConfigurationManager.AppSettings (sketch, values assumed)
    Configurator.RegisterConfigurationSettings()
        .DisableMySetting1()
        .WithMySetting2Of(42)
        .Create();
}

void PopulateFromConsul()
{
    var mySetting2 = 0; // Get value from Consul
    Configurator.RegisterConfigurationSettings()
        .WithMySetting2Of(mySetting2)
        .Create();
}

You’ll also notice the locker object that we use to make the singleton swap thread safe.

After populating the object, we can use the read-only Configurator.Current singleton from anywhere in our app to access the configuration settings.

Creating a Docker Container from a Scala/SBT project in TeamCity for Use with Octopus Deploy

I considered creating a series of blog posts about my journey into Scala titled “How Much I Hate Scala/SBT, Part XX”; however, I decided not to be that bitter. The language isn’t bad, it’s just that the ecosystem around it sucks: I am more likely to find the source code for something from a Google search than a Stack Overflow post or even documentation.

So here’s where I started. I will assume your Scala project is already packaging up with a TeamCity build and an SBT step running compile, and you’re ready to move to a Docker container.

So the key thing here is the version number. I use a custom variable called “Version” that I usually set to something like “1.0.%build.counter%”; for my .NET projects I use the assembly info patcher, and this is then used in the package version, so with your Docker containers you can use the tag for the version. Octopus Deploy needs the version on the tag to work effectively.

If you look internally at how TeamCity’s SBT runner runs the command, you will see something like the following:

[11:49:38][Step 2/2] Starting: /usr/java/jdk1.8.0_121/bin/java -Dagent.home.dir=/opt/buildagent -Dagent.ownPort=9090 -Dbuild.number=1.0.12 -Dbuild.vcs.number=411695cf560acb5b7e4b2eb837738660acf0e287 -Dbuild.vcs.number.1=411695cf560acb5b7e4b2eb837738660acf0e287 -Dbuild.vcs.number.Ycs_SupplyApiService_YcsSuppioScala1=411695cf560acb5b7e4b2eb837738660acf0e287 -Dsbt.ivy.home=/opt/buildagent/system/sbt_ivy -Dteamcity.agent.cpuBenchmark=627 -Dteamcity.agent.dotnet.agent_url=http://localhost:9090/RPC2 -Dteamcity.agent.dotnet.build_id=1022735 -Dteamcity.auth.password=******* -Dteamcity.auth.userId=TeamCityBuildId=1022735 -Dteamcity.buildConfName=DevelopPushDocker -Dteamcity.projectName=APIService -Dteamcity.tests.recentlyFailedTests.file=/opt/buildagent/temp/buildTmp/testsToRunFirst5160519002097758075.txt -Dteamcity.version=2017.1 (build 46533) -classpath /opt/buildagent/temp/agentTmp/agent-sbt/bin/sbt-launch.jar:/opt/buildagent/temp/agentTmp/agent-sbt/bin/classes: xsbt.boot.Boot < /opt/buildagent/temp/agentTmp/commands5523308191041557049.file
The one I am after is -Dbuild.number, but you can see that TeamCity is passing a lot of data to the SBT runner; it uses Java to run it from the command line instead of just running the SBT command itself.

There is something I am missing though: I need to know the branch name, because we have a convention that if it is not built from the master branch we append “-branchname” to the end. So to add this in, you need to edit your SBT runner step in TeamCity and pass the branch in as an extra system property.
From this we can use the variable in our Build.Scala file like so; I also add a value for the team that is used later.
val dockerRegistry = ""
val team = "myteam"
val appName = "ycs-supply-api"
var PROJECT_VERSION = Option(System.getProperty("build.number")).getOrElse("0.0.4")

val BRANCH = Option(System.getProperty("")).getOrElse("SNAPSHOT")
if (BRANCH != "master")
  PROJECT_VERSION = PROJECT_VERSION + "-" + BRANCH.replace("-","").replace("/","")
Now for Docker. In your SBT plugins directory, make sure you have this line to import the plugin:
addSbtPlugin("se.marcuslonnberg" % "sbt-docker" % "1.4.1")
Then here is what our Build.sbt looks like
import scala.xml.{Elem, Node}


name := "MyAppBin"

dockerfile in docker := {
  val appDir: File = stage.value
  val targetDir = s"/opt/$team"

  new Dockerfile {
    runRaw(s"mkdir -p $targetDir")
    copy(appDir, targetDir)
    env("JAVA_OPTS" -> s"-Dappname=$appName -Dconfig.file=conf/application-qa.conf -Dlog.dir=log/ -Dlogback.configurationFile=conf/logback-test.xml -Xms256m -Xmx256m -server")
  }
}

imageNames in docker := Seq(
  // Sets the latest tag (ImageName lines reconstructed per the sbt-docker docs)
  ImageName(s"$dockerRegistry/$team/$appName:latest"),
  // Sets a name with a tag that contains the project version
  ImageName(s"$dockerRegistry/$team/$appName:${version.value}")
)

mainClass in Compile := Some("Boot")

buildOptions in docker := BuildOptions(
  cache = true,
  removeIntermediateContainers = BuildOptions.Remove.Always,
  pullBaseImage = BuildOptions.Pull.IfMissing
)
This will get us building the container and saving it to the local Docker cache. After this we need to push it to our private registry.
There is currently an open issue about sbt-docker here, and I couldn’t get docker login to run from SBT, so I created a separate task in TeamCity to handle this.
To do this I want to keep a lot of my settings in the Build.Scala, so that the experience locally will be similar to the build server, but I don’t want to text-parse. So what we can do is have SBT output some log lines that tell TeamCity what settings to use.
Add these two lines in

println(s"##teamcity[setParameter name='DockerContainerCreated' value='$dockerRegistry/$team/$appName:${version.value}']")
println(s"##teamcity[setParameter name='SbtDockerRegistry' value='$dockerRegistry']")</div>
This will make SBT output the format that TeamCity reads to set parameters, and allows us to create the next step as a command-line step.
Next, add these parameters to the build configuration as empty values.
Then we can create a command line step that does the docker login/push
And we are done! You should see your container in the registry now, and if like us you are using Octopus Deploy, you will see the container appear in searches and the version numbers will be correctly associated with the containers.

Slack Bots – Merge Queue Bot

I recently did a talk about a Slack Bot we created to solve our issue of merging to master. Supaket Wongkampoo helped me out with the Thai translation on this one.

We have over 100 developers working on a single repository, so at any one time we have 20-odd people wanting to merge, and each needs to wait for a build to complete in order to merge; then, once one has merged, the next person must pull those changes and rerun. It’s quite a common scenario, but I haven’t seen many projects doing this with this much frequency.

Slides are available here


Continuous Integration/Continuous Delivery Workshop

I hosted a workshop on CI and CD over the weekend, covering the following topics:

  • Create Build definitions in TeamCity
    • C# net core 1.1 MVC/Web API Project
    • Typescript/Webpack
    • Unit Test NUnit and Mocha
    • Output Packages for Deployment
    • Update GitHub Pull Request status
  • Create deployments in Octopus
    • Deploy to Cluster (C# backend)
    • Deploy to CDN (React SPA)
    • Send an Alert to Slack

Below are the links: