Comparing Webpack Bundle Size Changes on Pull Requests as a Part of CI

We’ve had some issues where developers inadvertently increased the size of our bundles without realizing it. So we decided to give them more visibility into the impact of their changes by using webpack stats and publishing a comparison to the Pull Request for them.

The first part is getting webpack-stats-plugin into the solution. I’ve also made a custom version of webpack-compare that outputs markdown, and only focuses on the files you have changed instead of all of them.


"webpack-compare-markdown": "dicko2/webpack-compare",
"webpack-stats-plugin": "0.1.5"

Then we add yarn commands into the package.json to perform the work of generating and comparing the stats files


"analyze": "webpack --profile --json > stats.json",
"compare": "webpack-compare-markdown stats.json stats-new.json -o compare"

But what are we comparing against? Here’s where it gets a bit tricky. We need to be able to compare against the latest master, so when the build configuration that runs the compare is on the master branch, I generate a NuGet package containing the stats file and push it up to our local server. That way, any branch build can simply fetch the latest version of this package to get the master stats file.


 

if("%teamcity.build.branch%" -eq "master")
{
md pack
copy-item stats.json pack

$nuspec = '<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
<metadata>
<!-- Required elements-->
<id>ClientSide.WebPackStats</id>
<version>$Version$</version>
<description>Webpack stats file from master builds</description>
<authors>Dicko</authors>
</metadata>
<files>
<file src="stats.json" target="tools" />
</files>
</package>'

$nuspec >> "pack\ClientSide.WebPackStats.nuspec"
cd pack
%teamcity.tool.NuGet.CommandLine.DEFAULT%\tools\nuget.exe pack -Version %Version%
%teamcity.tool.NuGet.CommandLine.DEFAULT%\tools\nuget.exe push *.nupkg -source https://lib-mynuget.io/api/odata -apiKey "%KlondikeApiKey%"
}

If we are on a non-master branch, we need to download the NuGet package and run the compare to generate the report.


if("%teamcity.build.branch%" -ne "master")
{
%teamcity.tool.NuGet.CommandLine.DEFAULT%\tools\nuget.exe install ClientSide.WebPackStats
$dir = (Get-ChildItem . -Directory -Filter "ClientSide.WebPackStats*").Name
move-item stats.json stats-new.json
copy-item "$dir\tools\stats.json" stats.json
yarn compare
}

Then finally we need to comment back on the GitHub pull request with the report



#======================================================
$myRepoURL = "%myRepoURL%"
$GithubToken="%GithubToken%"
#======================================================
$githubheaders = @{"Authorization"="token $GithubToken"}
$PRNumber= ("%teamcity.build.branch%").Replace("pull/","")

$PathToMD ="compare\index.MD"

if("%teamcity.build.branch%" -ne "master")
{

function GetCommentsFromaPR()
{
Param([string]$CommentsURL)

$coms=invoke-webrequest $CommentsURL -Headers $githubheaders -UseBasicParsing
$coms=$coms.Content | ConvertFrom-Json
$rtnGetCommentsFromaPR = New-Object System.Collections.ArrayList

foreach ($comment in $coms)
{
$info1 = New-Object System.Object
$info1 | Add-Member -type NoteProperty -name ID -Value $comment.id
$info1 | Add-Member -type NoteProperty -name Created -Value $comment.created_at
$info1 | Add-Member -type NoteProperty -name Body -Value $comment.Body
$i =$rtnGetCommentsFromaPR.Add($info1)
}
return $rtnGetCommentsFromaPR;
}

$pr=invoke-webrequest "$myRepoURL/pulls/$PRNumber" -Headers $githubheaders -UseBasicParsing
$pr=$pr.Content | ConvertFrom-Json

$pr.comments_url
$CommentsFromaPR= GetCommentsFromaPR($pr.comments_url)
$commentId=0
foreach($comment in $CommentsFromaPR)
{
if($comment.Body.StartsWith("[Webpack Stats]"))
{
Write-Host "Found an existing comment ID $($comment.ID)"
$commentId=$comment.ID
}
}
$Body = [IO.File]::ReadAllText($PathToMD) -replace "`r`n", "`n"
$Body ="[Webpack Stats] `n" + $Body
$Body

$newComment = New-Object System.Object
$newComment | Add-Member -type NoteProperty -name body -Value $Body


if($commentId -eq 0)
{
Write-Host "Create a comment"
#POST /repos/:owner/:repo/issues/:number/comments
"$myRepoURL/issues/$PRNumber/comments"
invoke-webrequest "$myRepoURL/issues/$PRNumber/comments" -Headers $githubheaders -UseBasicParsing -Method POST -Body ($newComment | ConvertTo-Json)
}
else
{
Write-Host "Edit a comment"
#PATCH /repos/:owner/:repo/issues/comments/:id
"$myRepoURL/issues/comments/$commentId"
invoke-webrequest "$myRepoURL/issues/comments/$commentId" -Headers $githubheaders -UseBasicParsing -Method PATCH -Body ($newComment | ConvertTo-Json)
}

}


And we are done! Below is what the output looks like in GitHub:

[Screenshot: webpack bundle size change comment on a pull request build]

Happy packing!

Upgrading to Visual Studio 2017 Project file format

Dropping the explicit list of included files and moving the NuGet references into the csproj are the two biggest changes in the new project file format that you should be interested in.

These changes will greatly reduce your merge conflicts when you have a lot of developers working on a single project.
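For reference, a converted library project file can shrink to something like this minimal sketch (package name and version are illustrative):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net461</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- NuGet references now live here instead of packages.config -->
    <PackageReference Include="Newtonsoft.Json" Version="10.0.3" />
  </ItemGroup>
</Project>
```

Note there is no list of .cs files at all; everything under the project directory is included by default.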

There are a couple of pain points though. The first is that VS 2017 won’t update your project files for you, and there is no official tool for this. There is a community tool available though; you can download it here

https://github.com/hvanbakel/CsprojToVs2017

This tool only handles libraries though; if you have a web project you’ll need to edit the file and put in your settings manually, as well as adding “.Web” to the end of the project SDK type


<Project Sdk="Microsoft.NET.Sdk.Web">

Running this on your project files will convert them. However, we were unlucky enough to have some people who had been excluding files from projects without deleting them. So when we converted, a large number of old .cs files came back into the solution and broke it, as the new format includes files by default and you need to explicitly exclude them, the reverse of the old format’s approach.
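If you do want to keep a file on disk but out of the build, the new format’s explicit exclude looks like this (file path hypothetical):

```xml
<ItemGroup>
  <Compile Remove="Legacy\OldHelper.cs" />
</ItemGroup>
```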

So we wrote some PowerShell to fix this. First, a script to run per project:


#removeUnused.ps1

[CmdletBinding()]
param(
[Parameter(Position=0, Mandatory=$true)]
[string]$Project,
[Parameter(Mandatory=$false)]
[switch]$DeleteFromDisk
)

$ErrorActionPreference = "Stop"
$projectPath = Split-Path $project
if($Project.EndsWith("csproj"))
{
$fileType = "*.cs"
}
else
{
$fileType = "*.vb"
}
$fileType


# Extract the <Compile Include="..."/> entries from the project file and expand them to full paths
$projectFiles = Select-String -Path $project -Pattern '<compile' | % { $_.Line -split '\t' } | `
% {$_ -replace "(<Compile Include=|\s|/>|["">])", ""} | % { "{0}\{1}" -f $projectPath, $_ }
Write-Host "Project files:" $projectFiles.Count

$diskFiles = gci -Path $projectPath -Recurse -Filter $fileType | % { $_.FullName}
Write-Host "Disk files:" $diskFiles.Count


# Files that differ between disk and the project file, i.e. the excluded-but-not-deleted ones
$diff = (compare-object $diskFiles $projectFiles -PassThru)
Write-Host "Excluded Files:" $diff.Count

#create a text file for log purposes
$diffFilePath = Join-Path $projectPath "DiffFileList.txt"
$diff | Out-File $diffFilePath -Encoding UTF8
notepad $diffFilePath

#just remove the files from disk
if($DeleteFromDisk)
{
$diff | % { Remove-Item -Path $_ -Force -Verbose}
}

Then another script finds all my csproj files and calls it for each one:


foreach($csproj in (Get-ChildItem . -Recurse -Depth 2 | Where-Object {$_.FullName.EndsWith("csproj")}))
{
.\removeUnused.ps1 -Project $csproj.FullName -DeleteFromDisk
}

You can run it without the -DeleteFromDisk flag to just get a text file listing what it would potentially delete, letting you test it without removing any files.

 

Configurator Pattern

AppSettings and XML config seem to be the staple for ASP.NET developers, but in production they aren’t good for configuration that needs to change on the fly. Modifications to the web.config cause a worker process recycle, and if you use config files external to the web.config, modifying them won’t cause a recycle, but you then need to force a recycle to pick up the changes.

If you are using something like ConfigInjector with startup validation of your settings, this might not be a bad thing. However, if your app has costly start-up times for pre-warming caches etc., this may be less than desirable.

Recently we’ve been using Consul to manage our configuration, both for service discovery and as a K/V store (replacing a lot of our app settings).

So we’ve started to use a pattern in some of our libraries to manage their settings from code, as opposed to filling our web.config with hordes of XML data.

The way this works is we store our config in a singleton that is configured programmatically at app startup. This allows us to load in values from whatever source we want, and abstracts away app settings as a dependency. Then, if you need to update the settings at run time, you can call the same method.

Then, to make things nice and fluent, we add extension methods to add the configuration to the class, and update the singleton with a Create method at the end.

 


public class Configurator
{
public static Configurator Current => _current ?? (_current = new Configurator());
private static object _locker = new object();
private static Configurator _current;

public static Configurator RegisterConfigurationSettings()
{
return new Configurator();
}

internal bool MySetting1 = true;

internal int MySetting2 = 0;

public void Create()
{
lock (_locker)
{
_current = this;
}
}
}


public static class ConfiguratorExt
{
public static Configurator DisableMySetting1(this Configurator that)
{
that.MySetting1 = false;
return that;
}

public static Configurator WithMySetting2Of(this Configurator that, int myVal)
{
that.MySetting2 = myVal;
return that;
}
}

You could also implement what I have done as extension methods inside the Configurator class itself, but I tend to find that when the class gets big it helps to break it up a bit.

This allows us to programmatically configure the library at run time and pull in the values from wherever we like, for example:


void PopulateFromAppSettings()
{
Configurator.RegisterConfigurationSettings()
.WithMySetting2Of(int.Parse(ConfigurationManager.AppSettings["MySetting2"]))
.Create();
}

void PopulateFromConsul()
{
var MySetting2 = // Get value from Consul
Configurator.RegisterConfigurationSettings()
.WithMySetting2Of(MySetting2)
.Create();
}

You’ll also notice the locker object that we use to make the operation thread safe.

After populating the object we can use the read-only Configurator.Current singleton from anywhere in our app to access the configuration settings.

Creating a Docker Container from a Scala/SBT project in TeamCity for Use with Octopus Deploy

I considered creating a series of blog posts about my journey into Scala titled “How Much I Hate Scala/SBT, Part XX”, but I decided not to be that bitter. The language isn’t bad; it’s just that the ecosystem around it sucks. I am more likely to find the source code for something from a Google search than a Stack Overflow post or even documentation.

So here’s where I started. I will assume your Scala project already has a TeamCity build with an SBT step running compile, and you’re ready to move to a Docker container.

The key thing here for me is the version number. I use a custom variable called “Version” that I usually set to something like “1.0.%build.counter%”. For my .NET projects I use the AssemblyInfo patcher and this is then used in the package version; with Docker containers you can use the tag for the version. Octopus Deploy needs the version on the tag to work effectively.

If you look internally at how TeamCity’s SBT runner runs the command, you will see something like the following:

[11:49:38][Step 2/2] Starting: /usr/java/jdk1.8.0_121/bin/java -Dagent.home.dir=/opt/buildagent -Dagent.name=tt-xxx-2004 -Dagent.ownPort=9090 -Dagent.work.dir=/opt/buildagent/work -Dbuild.number=1.0.12 -Dbuild.vcs.number=411695cf560acb5b7e4b2eb837738660acf0e287 -Dbuild.vcs.number.1=411695cf560acb5b7e4b2eb837738660acf0e287 -Dbuild.vcs.number.Ycs_SupplyApiService_YcsSuppioScala1=411695cf560acb5b7e4b2eb837738660acf0e287 -Djava.io.tmpdir=/opt/buildagent/temp/buildTmp -Dsbt.ivy.home=/opt/buildagent/system/sbt_ivy -Dteamcity.agent.cpuBenchmark=627 -Dteamcity.agent.dotnet.agent_url=http://localhost:9090/RPC2 -Dteamcity.agent.dotnet.build_id=1022735 -Dteamcity.auth.password=******* -Dteamcity.auth.userId=TeamCityBuildId=1022735 -Dteamcity.build.changedFiles.file=/opt/buildagent/temp/buildTmp/changedFiles3407499404619574497.txt -Dteamcity.build.checkoutDir=/opt/buildagent/work/36dd69c049b3f712 -Dteamcity.build.id=1022735 -Dteamcity.build.properties.file=/opt/buildagent/temp/buildTmp/teamcity.build2492188537600221102.properties -Dteamcity.build.tempDir=/opt/buildagent/temp/buildTmp -Dteamcity.build.workingDir=/opt/buildagent/work/36dd69c049b3f712 -Dteamcity.buildConfName=DevelopPushDocker -Dteamcity.buildType.id=FFFGGHHH -Dteamcity.configuration.properties.file=/opt/buildagent/temp/buildTmp/teamcity.config6701741486713268575.properties -Dteamcity.projectName=APIService -Dteamcity.runner.properties.file=/opt/buildagent/temp/buildTmp/teamcity.runner3391739853422434247.properties -Dteamcity.tests.recentlyFailedTests.file=/opt/buildagent/temp/buildTmp/testsToRunFirst5160519002097758075.txt -Dteamcity.version=2017.1 (build 46533) -classpath /opt/buildagent/temp/agentTmp/agent-sbt/bin/sbt-launch.jar:/opt/buildagent/temp/agentTmp/agent-sbt/bin/classes: xsbt.boot.Boot < /opt/buildagent/temp/agentTmp/commands5523308191041557049.file
The one I am after is -Dbuild.number=1.0.12, but you can see that TeamCity is passing a lot of data to the SBT runner; it uses java from the command line instead of just running the SBT command itself.
There is something I am missing though: I need to know the branch name, because we have a convention that if it is not built from the master branch we append “-branchname” at the end. To add this in, you need to edit your SBT runner step in TeamCity and add the below:
[Screenshot: SBT runner step JVM command line parameters in TeamCity]
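The idea in the screenshot is to forward TeamCity’s branch (and build number) into JVM system properties the build can read. A sketch of what goes in the step’s JVM command line parameters field (the exact field label varies by TeamCity version; the property names here match the System.getProperty calls in our Build.Scala):

```
-Dbuild.number=%build.number% -Dteamcity.build.branch=%teamcity.build.branch%
```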
From this we can use the variable in our Build.Scala file like so; I also add a value for the team that is used later.
val dockerRegistry = "MyprivateDockerReg.company.com"
val team = "myteam"
val appName = "ycs-supply-api"
var PROJECT_VERSION = Option(System.getProperty("build.number")).getOrElse("0.0.4")

val BRANCH = Option(System.getProperty("teamcity.build.branch")).getOrElse("SNAPSHOT")
if (BRANCH != "master")
{
  PROJECT_VERSION = PROJECT_VERSION + "-" + BRANCH.replace("-","").replace("/","")
}
Now for Docker: in your SBT plugins directory, make sure you have this line to import the plugin
addSbtPlugin("se.marcuslonnberg" % "sbt-docker" % "1.4.1")
Then here is what our Build.sbt looks like:
import scala.xml.{Elem, Node}

enablePlugins(JavaAppPackaging)

name := "MyAppBin"

dockerfile in docker := {
val appDir: File = stage.value
val targetDir = s"/opt/$team"

new Dockerfile {
from("java:8-jre-alpine")
maintainer(s"$team")

runRaw(s"mkdir -p $targetDir")
workDir(s"$targetDir")
copy(appDir, targetDir)
expose(80)
env("JAVA_OPTS" -> s"-Dappname=$appName -Dconfig.file=conf/application-qa.conf -Dlog.dir=log/ -Dlogback.configurationFile=conf/logback-test.xml -Xms256m -Xmx256m -server")
entryPoint(s"bin/${name.value}")
}
}

imageNames in docker := Seq(
// SPAPI Sets the latest tag
ImageName(s"$dockerRegistry/$team/$appName:latest"),
// SPAPI Sets a name with a tag that contains the project version
ImageName(s"$dockerRegistry/$team/$appName:${version.value}")
)

mainClass in Compile := Some("Boot")

buildOptions in docker := BuildOptions(
cache = true,
removeIntermediateContainers = BuildOptions.Remove.Always,
pullBaseImage = BuildOptions.Pull.IfMissing
)
This will get us building the container and saving it to the local Docker cache. After this we need to push it to our private registry.
There is currently an open issue about this in sbt-docker, so I couldn’t get docker login to run from SBT; instead I created a separate step in TeamCity to handle it.
To do this I want to keep as many of my settings as possible in the Build.Scala, so that the experience locally is similar to the build server, but I don’t want to text-parse the file. What we can do instead is have SBT output service messages in its logs to tell TeamCity which settings to use.
Add these two lines in

println(s"##teamcity[setParameter name='DockerContainerCreated' value='$dockerRegistry/$team/$appName:${version.value}']")
println(s"##teamcity[setParameter name='SbtDockerRegistry' value='$dockerRegistry']")
This will make SBT output the format that TeamCity reads to set parameters, allowing us to create the next step as a command line step.
[Screenshot: TeamCity reading set-parameter service messages from the SBT output]
Next, add these parameters in as empty:
[Screenshot: the parameters defined as empty in TeamCity]
Then we can create a command line step that does the docker login/push:
[Screenshot: command line step pushing the container to the Docker registry with the version tag]
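The command line step boils down to a docker login followed by pushing the tagged image, using the two parameters SBT just set. A sketch, assuming the registry credentials are stored in TeamCity parameters named DockerUser and DockerPassword (those names are mine, not from the build):

```
docker login -u "%DockerUser%" -p "%DockerPassword%" %SbtDockerRegistry%
docker push %DockerContainerCreated%
```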
And we are done! You should see your container in the registry now, and if, like us, you are using Octopus Deploy, you will see the container appear in searches with the version numbers correctly associated with the containers.

Slack Bots – Merge Queue Bot

I recently did a talk about a Slack Bot we created to solve our issue of merging to master. Supaket Wongkampoo helped me out with the Thai translation on this one.

We have over 100 developers working on a single repository, so at any one time we have 20-odd people wanting to merge. Each needs to wait for a build to complete in order to merge, and once one has merged, the next person must pull those changes and rerun. It’s quite a common scenario, but I haven’t seen any projects dealing with it at this frequency.

Slides are available here https://github.com/tech-at-agoda/meetup-2017-04-01

 

Continuous Integration/Continuous Delivery Workshop

I hosted a workshop on CI and CD over the weekend with the following overview of topics

  • Create Build definitions in TeamCity
    • C# net core 1.1 MVC/Web API Project
    • Typescript/Webpack
    • Unit Test NUnit and Mocha
    • Output Packages for Deployment
    • Update GitHub Pull Request status
  • Create deployments in Octopus
    • Deploy to Cluster (C# backend)
    • Deploy to CDN (React SPA)
    • Send an Alert to Slack

Below are the links:

Videos:

Slides:

https://github.com/tech-at-agoda/meetup-2017-05-13