SonarQube with a Multi-Language Project: TypeScript and .NET

SonarQube is a cool tool, but getting multiple languages to work with it can be hard. Each language has its own plugin, usually maintained by different people, so the implementations differ and you end up learning a new Sonar plugin for each language.

In our example we have a frontend project using React/TypeScript and .NET for the backend.

For C# we use the standard out-of-the-box rules from Microsoft, plus some of our own custom rules.

For TypeScript we follow many of the Airbnb recommendations, with some tweaks of our own.
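
To give a concrete picture, a minimal tslint.json in this style might extend a shared Airbnb config and override individual rules; the tslint-config-airbnb package and the overrides below are illustrative assumptions, not our actual rule set.

{
  "extends": "tslint-config-airbnb",
  "rules": {
    "max-line-length": [true, 140],
    "no-console": false
  }
}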

In the example I am using an end-to-end build in series, but in reality we use build chains to speed things up, so our actual solution is quite a bit more complex than this.

So the build steps look something like this:

  1. dotnet restore
  2. dotnet test, bootstrapped with dotCover
  3. yarn install
  4. tslint
  5. yarn test
  6. SonarQube Runner

Note: with this setup we do not get the build test stats in TeamCity, so we cannot block builds on test coverage metrics.
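
A possible workaround, which we have not wired up here, is TeamCity's importData service message, which can import a dotCover snapshot into the build's coverage stats; note that it wants the snapshot from dotCover's cover command rather than the HTML report we generate below.

echo ##teamcity[importData type='dotNetCoverage' tool='dotcover' path='dotCover.dcvr']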

Let's cover the .NET side first. I mentioned our custom rules; I'll do a separate blog post about getting them into Sonar and just cover the build setup in this post.

The dotnet restore setup is pretty simple. We use a custom nuget.config file for our internal NuGet server; I would recommend always using a custom NuGet config file, as your IDEs will pick it up and use its settings.


dotnet restore --configfile=%teamcity.build.workingDir%\nuget.config MyCompany.MyProject.sln
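
For reference, a minimal nuget.config pointing at an internal feed might look like the following; the feed name and URLs here are placeholders, not our real ones.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear />
    <add key="InternalFeed" value="https://nuget.mycompany.example/api/odata" />
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
</configuration>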

The dotnet test step is a little tricky: we need to bootstrap it with dotCover.exe, using the analyse command and outputting the HTML format that Sonar will consume (yes, Sonar wants the HTML format).


%teamcity.tool.JetBrains.dotCover.CommandLineTools.DEFAULT%\dotcover.exe analyse /TargetExecutable="C:\Program Files\dotnet\dotnet.exe" /TargetArguments="test MyCompany.MyProject.sln" /AttributeFilters="+:MyCompany.MyProject.*" /Output="dotCover.htm" /ReportType="HTML" /TargetWorkingDir=.

echo "this is working"

Lastly, when tests fail the command returns a non-zero exit code, which would cause the build to fail, so putting the echo line after it mitigates this.
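
If you would rather be explicit than rely on a trailing echo, the same step can be written as a PowerShell build step that captures and resets the exit code; this is a sketch using the same dotCover command as above.

& "%teamcity.tool.JetBrains.dotCover.CommandLineTools.DEFAULT%\dotcover.exe" analyse /TargetExecutable="C:\Program Files\dotnet\dotnet.exe" /TargetArguments="test MyCompany.MyProject.sln" /AttributeFilters="+:MyCompany.MyProject.*" /Output="dotCover.htm" /ReportType="HTML" /TargetWorkingDir=.
if ($LASTEXITCODE -ne 0) { Write-Host "Tests failed; coverage report still written for Sonar" }
exit 0  # always exit cleanly so the SonarQube step still runs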

On the TypeScript side we have three steps.

yarn install, which just calls that exact command.

Our tslint step is the command line step below. Again we need a second echo line, because when there are linting errors tslint returns a non-zero exit code and we need the process to continue.


node ".\node_modules\tslint\bin\tslint" -o issues.json -p "tsconfig.json" -t json -c "tslint.json" -e **/*.spec.tsx -e **/*.spec.ts
echo "this is working"

The test step will generate an lcov report. I need to put a disclaimer here: lcov only reports coverage on files that were executed during the tests, so code that is never touched by a test will not appear in your lcov report, while SonarQube will count those files and give you the correct numbers. So if you get to the end and find Sonar reporting numbers a lot lower than you thought you had, this is probably why.

Our test step just runs yarn test, but here is the full command in the package.json for reference.

"scripts": {
"test": "jest –silent –coverage"
}
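
One way to mitigate the lcov gap described above on the Jest side is the collectCoverageFrom option, which collects coverage for every file matching the globs whether or not a test executes it; this sketch assumes your sources live under src, so adjust the globs to your layout.

"jest": {
  "collectCoverageFrom": [
    "src/**/*.{ts,tsx}",
    "!src/**/*.spec.{ts,tsx}"
  ]
}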

Now we have three artifacts: two coverage reports and a tslint report.

The final step takes these, runs an analysis on our C# code, then uploads everything.

We use the SonarQube Runner plugin from SonarSource.

[Screenshot: SonarQube Runner build step in TeamCity for the TypeScript/.NET project]

The important things here are the additional parameters below:

-Dsonar.cs.dotcover.reportsPaths=dotCover.htm
-Dsonar.exclusions=**/node_modules/**,**/dev/**,**/*.js,**/*.vb,**/*.css,**/*.scss,**/*.spec.tsx,**/*.spec.ts
-Dsonar.ts.coverage.lcovReportPath=coverage/lcov.info
-Dsonar.ts.excludetypedefinitionfiles=true
-Dsonar.ts.tslint.outputPath=issues.json
-Dsonar.verbose=true

You can see the three artifacts that we pass in. We also disable the built-in TypeScript analysis and rely on our analysis from tslint. The reason for this is that it allows us to control the analysis from the IDE, and keep the analysis done in the IDE easily in sync with the SonarQube server.

Also, if you are using custom tslint rules that aren't in the SonarQube default list, you will need to import them; I will do another blog post about how we did this in bulk for the 3-4 rule sets we use.

SonarQube without a language parameter will auto-detect the languages, so we exclude files like .scss to prevent it from processing those rules.

This isn't needed for C# though, because we use the NuGet packages; I will do another blog post about sharing rules around.

And that's it, your processing should work and turn out something like the below. You can see in the top right that both C# and TypeScript lines of code are reported, so the Bugs, Code Smells, Coverage, etc. shown are the combined values of both languages in the project.

[Screenshot: combined C# and TypeScript code coverage and static analysis results in SonarQube]

Happy coding!

Comparing Webpack Bundle Size Changes on Pull Requests as a Part of CI

We've had some issues where developers inadvertently increased the size of our bundles without realizing it. So we tried to give them more visibility of the impact of their change on the pull request, by using webpack stats and publishing a comparison to the PR for them.

The first part is getting webpack-stats-plugin into the solution; I've also made a custom version of webpack-compare that outputs Markdown and focuses only on the files you have changed instead of all of them.


"webpack-compare-markdown": "dicko2/webpack-compare",
"webpack-stats-plugin": "0.1.5"

Then we add yarn commands into the package.json to perform the work of generating and comparing the stats files:


"analyze": "webpack --profile --json > stats.json",
"compare": "webpack-compare-markdown stats.json stats-new.json -o compare"

But what are we comparing? Here's where it gets a bit tricky. We need to be able to compare against the latest master, so what I did was this: when the build config that runs the compare runs on the master branch, I generate a NuGet package and push it up to our local server. This way I can just get the latest version of that package to get the master stats file.


 

if("%teamcity.build.branch%" -eq "master")
{
md pack
copy-item stats.json pack

$nuspec = '<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
<metadata>
<!-- Required elements-->
<id>ClientSide.WebPackStats</id>
<version>$Version$</version>
<description>Webpack stats file from master builds</description>
<authors>Dicko</authors>
</metadata>
<files>
<file src="stats.json" target="tools" />
</files>
</package>'

$nuspec >> "pack\ClientSide.WebPackStats.nuspec"
cd pack
%teamcity.tool.NuGet.CommandLine.DEFAULT%\tools\nuget.exe pack -Version %Version%
%teamcity.tool.NuGet.CommandLine.DEFAULT%\tools\nuget.exe push *.nupkg -source https://lib-mynuget.io/api/odata -apiKey "%KlondikeApiKey%"
}

If we are on a non-master branch, we need to download the NuGet package and run the compare to generate the report.


if("%teamcity.build.branch%" -ne "master")
{
%teamcity.tool.NuGet.CommandLine.DEFAULT%\tools\nuget.exe install ClientSide.WebPackStats
$dir = (Get-ChildItem . -Directory -Filter "ClientSide.WebPackStats*").Name
move-item stats.json stats-new.json
copy-item "$dir\tools\stats.json" stats.json
yarn compare
}

Then finally we need to comment back to the GitHub pull request with the report.


&nbsp;

#======================================================
$myRepoURL = "%myRepoURL%"
$GithubToken = "%GithubToken%"
#======================================================
$githubheaders = @{"Authorization" = "token $GithubToken"}
# TeamCity branch names for pull requests look like "pull/123"
$PRNumber = ("%teamcity.build.branch%").Replace("pull/", "")

$PathToMD = "compare\index.MD"

if ("%teamcity.build.branch%" -ne "master")
{
    function GetCommentsFromaPR()
    {
        Param([string]$CommentsURL)

        $coms = invoke-webrequest $CommentsURL -Headers $githubheaders -UseBasicParsing
        $coms = $coms | ConvertFrom-Json
        $rtnGetCommentsFromaPR = New-Object System.Collections.ArrayList

        foreach ($comment in $coms)
        {
            $info1 = New-Object System.Object
            $info1 | Add-Member -type NoteProperty -name ID -Value $comment.id
            $info1 | Add-Member -type NoteProperty -name Created -Value $comment.created_at
            $info1 | Add-Member -type NoteProperty -name Body -Value $comment.Body
            # assign the index to suppress ArrayList.Add output
            $i = $rtnGetCommentsFromaPR.Add($info1)
        }
        return $rtnGetCommentsFromaPR
    }

    $pr = invoke-webrequest "$myRepoURL/pulls/$PRNumber" -Headers $githubheaders -UseBasicParsing
    $pr = $pr.Content | ConvertFrom-Json

    $pr.comments_url
    $CommentsFromaPR = GetCommentsFromaPR($pr.comments_url)

    # Look for a comment from a previous build so we update it instead of spamming the PR
    $commentId = 0
    foreach ($comment in $CommentsFromaPR)
    {
        if ($comment.Body.StartsWith("[Webpack Stats]"))
        {
            Write-Host "Found an existing comment ID $($comment.ID)"
            $commentId = $comment.ID
        }
    }

    # GitHub wants LF line endings in the comment body
    $Body = [IO.File]::ReadAllText($PathToMD) -replace "`r`n", "`n"
    $Body = "[Webpack Stats] `n" + $Body
    $Body

    $newComment = New-Object System.Object
    $newComment | Add-Member -type NoteProperty -name body -Value $Body

    if ($commentId -eq 0)
    {
        Write-Host "Create a comment"
        # POST /repos/:owner/:repo/issues/:number/comments
        "$myRepoURL/issues/$PRNumber/comments"
        invoke-webrequest "$myRepoURL/issues/$PRNumber/comments" -Headers $githubheaders -UseBasicParsing -Method POST -Body ($newComment | ConvertTo-Json)
    }
    else
    {
        Write-Host "Edit a comment"
        # PATCH /repos/:owner/:repo/issues/comments/:id
        "$myRepoURL/issues/comments/$commentId"
        invoke-webrequest "$myRepoURL/issues/comments/$commentId" -Headers $githubheaders -UseBasicParsing -Method PATCH -Body ($newComment | ConvertTo-Json)
    }
}

&nbsp;

And we are done; below is what the output looks like in GitHub.

[Screenshot: webpack bundle size comparison comment on a GitHub pull request]

Happy packing!

Slack Bots – Merge Queue Bot

I recently did a talk about a Slack Bot we created to solve our issue of merging to master. Supaket Wongkampoo helped me out with the Thai translation on this one.

We have over 100 developers working on a single repository, so at any one time we have 20-odd people wanting to merge. Each needs to wait for a build to complete in order to merge, and once one has merged, the next person must pull those changes and rerun the build. It's quite a common scenario, but I haven't seen any projects dealing with it at this frequency.

Slides are available here https://github.com/tech-at-agoda/meetup-2017-04-01

 

GitHub Pull Request Merge Ref and TeamCity CI fail

GitHub has an awesome feature that allows us to build on the potential merge result of a pull request.

This allows us to run unit and UI tests against the result of a merge, so we know with certainty that it works, before we merge the code.

To get this working with TeamCity is a pain in the ass though.

Let's look at a basic workflow with this:

First we will look at two active pull requests, and we are about to merge.

[Diagram: master and two feature branches in the GitHub flow]

Pull request 2 advertises the /head ref (the actual branch) and the /merge ref (the result "if we merged").
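
You can see both refs directly with git ls-remote; PR number 2 here matches the example, and the hashes are made up:

git ls-remote origin "refs/pull/2/*"
# 1a2b3c...  refs/pull/2/head    <- the actual branch
# 4d5e6f...  refs/pull/2/merge   <- the potential merge result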

TeamCity says you should tie your builds to /merge for CI; this builds the merge result, and I agree.

However, let's look at what happens in GitHub when we merge in Feature 1.

[Diagram: the GitHub flow after Feature 1 is merged into master]

The new code goes into master, which recalculates the merge result on Pull request 2. TeamCity correctly builds the merge reference and validates that the pull request will succeed.

However, if we look in GitHub we will see the below.

[Screenshot: GitHub prompting to update the branch]

It now blocks you and prompts you to update your branch.

After you click this, the /head and /merge refs both update, as GitHub adds a commit to your branch and recalculates the merge result again; then you need to wait for another build to validate the new commit on your branch.

[Screenshot: the /merge and /head refs updating after the branch update]

This now triggers a second build, and when it completes you can merge.

The issue here is that we are double building. There are two solutions as I see it:

  1. GitHub should allow you to merge without updating your branch
  2. TeamCity should allow you to trigger from one ref and build on a different one

I was able to implement the second option using a build configuration that calls the TeamCity API to trigger a build. However, my preference would be number 1, as this is more automated.

[Screenshot: trigger build configuration in TeamCity that builds off a different branch]

Inside, it looks like this:

[Screenshot: inside the trigger build configuration]

Below is the example PowerShell used in the trigger build. We had an issue with the SSL cert (even though it wasn't self-signed), so we had to disable the check for it to work.

add-type @"
using System.Net;
using System.Security.Cryptography.X509Certificates;
public class TrustAllCertsPolicy : ICertificatePolicy {
public bool CheckValidationResult(
ServicePoint srvPoint, X509Certificate certificate,
WebRequest request, int certificateProblem) {
return true;
}
}
"@
[System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy

# Take the /head ref that triggered us and queue a build against the matching /merge ref
$buildBranch = "%teamcity.build.branch%"
Write-Host $buildBranch
$buildBranch = $buildBranch.Replace("/head", "/merge")
$postbody = "<build branchName='$buildBranch'>
<buildType id='%TargetBuildType%'/>
</build>"
Write-Host $postbody

$user = '%TeamCityUser%'
$pass = '%TeamCityPassword%'
$secpasswd = ConvertTo-SecureString $pass -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential($user, $secpasswd)

# Queue the build via TeamCity's REST API
Invoke-RestMethod https://teamcity/httpAuth/app/rest/buildQueue -Method POST -Body $postbody -Credential $credential -Headers @{"accept"="application/xml";"Content-Type"="application/xml"}

You will see that we replace /head in the branch name with /merge; since the trigger watches the /head ref, a new build is only queued when the branch itself changes (for example, when someone clicks the update branch button), not every time the merge result is recalculated.

Also, don't forget to add a VCS trigger rule for file changes, "+:.", so that it will only run builds when there are changes.

[Screenshot: VCS trigger rule configuration]

We are running with this solution this week, and I am going to put a request in to GitHub support about option 1.

This is a really big issue for us, as we have 30-40 open pull requests on our repo, so double building creates a LOT of traffic.

If anyone has a better solution please leave some comments.