Performant .NET APIs

I’m going to conflate two topics here as they kind of go together to make the title. The first isn’t dotnet specific; it’s a general API principle.

Systems Should Own Their Own Data

If you can make this work there are a lot of advantages. What does it mean in practice, though?

It means that the tables in your database should only ever be read and written by a single system, and there are a lot of pros to this. Essentially, what I am recommending you AVOID is multiple systems reading and writing the same tables directly.

How else to proceed, though? There are several options you may or may not be aware of. I’ll mention a few now but won’t go into specifics:

Backend for Frontend

Event Sourcing

Data Materialization

I have other blog posts on these. But what you’re asking is: what’s the benefit, right?

If you control data from within a closed system, it’s easier to manage, and a pattern that becomes easy here is known as a “write-through cache”.

Most of you will be familiar with a “pull-through cache”; this is the most common caching pattern. The logic flows like this:

  1. Inbound request for X
  2. Check cache for key X
  3. If X found in Cache return
  4. Else Get X from Database, Update Cache, Return X

So we update the cache on access, with an expiry time of Y. Our data is stale by up to Y at any given time, unless the request hits the DB, in which case it’s fresh but slow.
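To make the flow concrete, here’s a minimal sketch of a pull-through cache in C#, assuming IMemoryCache from Microsoft.Extensions.Caching.Memory and a made-up ICityRepository for the database access (all the type names here are illustrative):

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public class City { public int Id { get; set; } public string Name { get; set; } }

public interface ICityRepository
{
    Task<City> GetCityAsync(int cityId);
    Task SaveCityAsync(City city);   // used by the write path later on
}

public class CityReader
{
    private readonly IMemoryCache _cache;
    private readonly ICityRepository _repository;
    private static readonly TimeSpan Expiry = TimeSpan.FromMinutes(5); // the "Y" from the text

    public CityReader(IMemoryCache cache, ICityRepository repository)
    {
        _cache = cache;
        _repository = repository;
    }

    public async Task<City> GetCityAsync(int cityId)
    {
        // Steps 2-3: check the cache for key X and return it if found.
        if (_cache.TryGetValue(cityId, out City cached))
            return cached;

        // Step 4: get X from the database, update the cache, return X.
        var city = await _repository.GetCityAsync(cityId);
        _cache.Set(cityId, city, Expiry);
        return city;
    }
}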

A write-through cache is easy to implement when the system reading the data is tightly coupled with the system writing it (or, as I recommend, they are the same system).

In this scenario the logic works the same, with one difference: when writing, we also update the cache with the object we are writing. For example:

  1. Inbound update for key X
  2. Update database for key X
  3. Update Cache for key X

This way every write forces a cache update, so your cache is always fresh. How fresh it is depends on how we implement the cache, though; it can work with either a local or a remote cache.

For small datasets (single-digit or tens of gigabytes in size) I recommend a local cache, but if we have a cluster of, say, three servers, how does this work? I generally recommend using a message bus: step 3 above also publishes a message, every API instance in the cluster subscribes to updates on that bus at startup, and uses those events to know when to re-request the updated data from the DB so its cache stays fresh. In my experience this sort of pattern leads to 1-4 seconds of cache freshness, depending on your scale (slightly more if geographically distributed). A sketch of the write path is below.
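Here is that write path, reusing the City and ICityRepository types from the sketch above; IMessageBus and CityUpdated are made-up stand-ins for whatever bus client and event type you actually use:

using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public class CityUpdated { public int CityId { get; set; } }

public interface IMessageBus
{
    Task PublishAsync(CityUpdated message);
}

public class CityWriter
{
    private readonly IMemoryCache _cache;
    private readonly ICityRepository _repository;
    private readonly IMessageBus _bus;

    public CityWriter(IMemoryCache cache, ICityRepository repository, IMessageBus bus)
    {
        _cache = cache;
        _repository = repository;
        _bus = bus;
    }

    public async Task UpdateCityAsync(City city)
    {
        // Step 2: update the database for key X.
        await _repository.SaveCityAsync(city);

        // Step 3: update the cache with the object we just wrote.
        _cache.Set(city.Id, city);

        // Tell the other nodes in the cluster so they can refresh their local caches.
        await _bus.PublishAsync(new CityUpdated { CityId = city.Id });
    }
}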

None of this is dotnet specific, but it leads to my next point: once you have 10-15 GB of cache in RAM, how do you handle it? How do you query it? Which brings me to the next part.

Working with big-RAM

I’m going to use an example where we used immutable collections to store the data. An update means rebuilding the whole collection in this case; we did this because the data updated infrequently. For more frequently updated data, DON’T do this.

We then used LINQ to query into them. The collections were 80 MB to 1.2 GB in size in RAM, and some of them had multiple keys to look up, which was the tricky bit.

The example data we had was geographic data (cities, states, points of interest, etc.), and the collections contained multiple languages, so lookups generally needed a “key” plus a “language id” to get the correct translation.

So the initial LINQ query for this looked like:

_cityLanguage.FirstOrDefault(x => x.Key.KeyId == cityId && x.Key.LanguageId == languageId).Value;

The results of this are below

Method                  | CityNumber | Mean     | Error    | StdDev
LookupCityWithLanguage  | 60000      | 929.3 ms | 17.79 ms | 17.48 ms

You can see the mean response here is almost a second, which isn’t a nice user experience.

The next method we tried was to create a dictionary keyed on the two fields. To do this with a POCO key you need to override the Equals and GetHashCode methods so that the dictionary can hash and compare the keys, like below.

class LanguageKey
{
    public LanguageKey(int languageId, int keyId)
    {
        LanguageId = languageId;
        KeyId = keyId;
    }

    public int LanguageId { get; }
    public int KeyId { get; }

    public override bool Equals(object obj)
    {
        if (!(obj is LanguageKey)) return false;
        var o = (LanguageKey)obj;
        return o.KeyId == KeyId && o.LanguageId == LanguageId;
    }

    public override int GetHashCode()
    {
        return LanguageId.GetHashCode() ^ KeyId.GetHashCode();
    }

    public override string ToString()
    {
        return $"{LanguageId}:{KeyId}";
    }
}
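The dictionary itself is built once during warmup, roughly like this, assuming cityRows holds the raw translation rows loaded from the database (the row shape here is hypothetical):

// in ctor warmup (hypothetical row shape: LanguageId, CityId, TranslatedName)
_cityLanguage = cityRows.ToDictionary(
    r => new LanguageKey(r.LanguageId, r.CityId),
    r => r.TranslatedName);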

So the code we end up with for the lookup is like this:

_cityLanguage[new LanguageKey(languageId,cityId)];

And the results are

Method                  | CityNumber | Mean     | Error   | StdDev
LookupCityWithLanguage  | 60000      | 332.3 ns | 6.61 ns | 10.86 ns

Now we can see we’ve gone from milliseconds to nanoseconds, a pretty big jump.

The next approach we tried was using a “Lookup” object to store the index. Below is the code to create the lookup and how to access it.

// in ctor warmup
lookup = _cityLanguage.ToLookup(x => new LanguageKey(x.Key.LanguageId, x.Key.KeyId));
// in method
lookup[new LanguageKey(languageId, cityId)].FirstOrDefault().Value;

And the results are similar

Method                  | CityNumber | Mean     | Error    | StdDev
LookupCityWithLanguage  | 60000      | 386.3 ns | 17.79 ns | 51.89 ns

At Agoda we prefer to look at the high percentiles when measuring APIs (usually the P90 or P99); below is a peek at how the API is handling responses.

It is consistently below 4 ms at P90, which is a pretty good experience.

Overall, the write-through cache approach is a winner for microservices, where it’s common for systems to own their own data.

NOTE: this testing was done on an API written in .NET Core 3.1. I’ll post updates on how it does when we upgrade to 6 🙂

From Code Standards to Automation with Analyzers and Code Fixes

We started to talk about standards between teams and system owners at Agoda a couple of years ago. We started with C#; the idea was to come up with a list of recommendations for developers, for a few reasons.

One was that we follow polyglot programming here, and developers more familiar with Java, JavaScript and other languages would sometimes work on the C# projects and misuse or get lost in some of its features (e.g. when JavaScript developers discover dynamic in C#, or Java developers get confused by extension methods).

Beyond this, we wanted to encourage a level of good practice: drawing on the knowledge of the people we have, we could advise on the potential pitfalls and drawbacks of certain paths. In short, the standards should not be prescriptive, as in “you must do it this way”. They should be more “don’t do this”, but teach at the same time, as in “don’t do this, here’s why, and here’s some example code”. They also include some guidance, as in “we recommend this way, or this, or this, or even this, depending on your context”, but we avoid “do this”.

[Image: the code standards repository, covering C#, TypeScript, JavaScript, Scala and polyglot guidelines]

The output was the standards repository, which we’ve now open sourced.

It’s primarily markdown documents, which make it easy to document the standards and also to use pull requests and issues to start conversations around changes and evolve them.

But we had an “if you build it they will come” problem. We had standards, but people either couldn’t find them or didn’t read them, and even if they did, they’d probably forget half of them within a week.

So the question was how do you go about implementing standards amongst 150 core developers and hundreds more casual contributors in the organisation?

We turned to static code analysis first; the Roslyn API in C# is one of the most mature language APIs on the market, in my opinion, and we were able to write rules for most of the standards (especially the “don’t do this” ones).

This gave birth to a cross-department effort that resulted in a code fix library, available here, that we like to call Agoda Analyzers.

[Image: the Agoda Analyzers repository]
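To give a feel for what one of these rules looks like, here is a minimal sketch of a Roslyn analyzer that flags uses of dynamic. The rule id, messages and severity are illustrative only, not taken from the actual Agoda Analyzers.

using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public class AvoidDynamicAnalyzer : DiagnosticAnalyzer
{
    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
        id: "EX0001",                                   // illustrative id
        title: "Avoid using dynamic",
        messageFormat: "Avoid 'dynamic'; use a typed model instead",
        category: "CodeStandards",
        defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
        => ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
        context.EnableConcurrentExecution();
        // 'dynamic' is a contextual keyword, so it shows up as an identifier in the syntax tree.
        context.RegisterSyntaxNodeAction(AnalyzeIdentifier, SyntaxKind.IdentifierName);
    }

    private static void AnalyzeIdentifier(SyntaxNodeAnalysisContext context)
    {
        var identifier = (IdentifierNameSyntax)context.Node;
        if (identifier.Identifier.ValueText == "dynamic")
        {
            context.ReportDiagnostic(Diagnostic.Create(Rule, identifier.GetLocation()));
        }
    }
}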

Initially we were able to add them to the project via the published NuGet package and have them present in the IDE; they are available here.

[Image: an analyzer rule and code fix from the NuGet package shown in the IDE]

But like most linting systems they tend to just “error” at the developer without much information, which we don’t find very helpful, so we quickly moved to SonarQube with its GitHub integration.

This allows a good experience for code reviewers and contributors. The contributor gets inline comments on their pull request from our bot.

[Image: the code review bot commenting on a pull request]

This allows the user time to fix issues before they get to the code reviewers, so most common mistakes are fixed prior to code review.

Also, the link (the three dots at the end of the comment) deep-links into SonarQube’s web UI, to the documentation that we write for each rule.

[Image: the rule documentation in SonarQube explaining how to fix the code smell]

This achieves not just “don’t do this”, but “don’t do this, here’s why, and here’s some example code”.

Static code analysis is not a silver bullet, though; things like design patterns are hard to encompass. But we find you can catch most of the crazy stuff with it, leaving code review to be more about design and less about naming conventions and “wtf” moments, like reviewing the time a Node developer finds dynamic in C# for the first time and decides to have some fun.

We are also trying the same approach internally with other languages; TypeScript and Scala are the two other main languages we work in, so stay tuned for more on this.

The Transitive Dependency Dilemma

It sounds like the title of a Big Bang Theory episode, but it’s not; instead it’s an all-too-common problem I see regularly that breaks the single responsibility principle.

And I’ve seen first-hand how much butthurt this can cause: a friend of mine (Tomz) spent six weeks trying to update a library in a website because the library had a breaking change.

The root cause of this issue comes from the following scenario: MyProject depends on MyLibrary1 and MyLibrary2, and both libraries depend on the same third-party library (it could be an internal one too).

[Image: MyProject depending on MyLibrary1 and MyLibrary2, which both depend on the same library]

Now let’s take the example where each library references a different version of that dependency.

[Image: MyLibrary1 and MyLibrary2 referencing different versions of the dependency]

Now, which version do we use? You can solve this issue in dotnet with assembly binding redirects, and NuGet even generates them for you (other languages have similar approaches too). However, when there is a major version bump with breaking changes, that doesn’t work.

If you then consider having many such libraries (in my friend Tomz’s case there were about 30 libraries that depended on the logging library that had the major version bump) this can get real messy real fast, and a logging library is the most common scenario. So let’s use this as the example.

[Image: many libraries all depending on the same logging library]

So what can we change to handle this better?

I point you to the single responsibility principle. In the above scenario, MyLibrary1 and MyLibrary2 are now technically responsible for logging because they have a direct dependency on the logging library; this is where the problem lies. They should only be responsible for “my thing 1” and “my thing 2”, and leave the logging to something else.

There are two approaches I can recommend to solve this, each with its own flaws.

The first is exposing an interface from the library that you implement in MyProject.

[Image: the library exposing a logging interface that MyProject implements]

This also helps if you want to reuse your library in other projects; you won’t be dictating to the project how to log.

The problem with this approach, though, is that you end up implementing a lot of ILoggers in the parent project. A sketch of the idea is below.
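Here’s a minimal sketch of this first approach. The type names are made up, and Serilog is just an example of what the host project might adapt the interface to:

using System;

// In MyLibrary1: the library defines its own small logging abstraction
// instead of taking a direct dependency on a concrete logging package.
public interface IMyLibrary1Logger
{
    void Error(string message, Exception exception);
}

public class MyThing1Service
{
    private readonly IMyLibrary1Logger _logger;

    public MyThing1Service(IMyLibrary1Logger logger)
    {
        _logger = logger;
    }

    public void DoMyThing1()
    {
        try
        {
            // ... the library's actual work ...
        }
        catch (Exception ex)
        {
            _logger.Error("MyThing1 failed", ex);
            throw;
        }
    }
}

// In MyProject: the host adapts the interface to whatever logging library
// (and version) it actually uses, so only MyProject depends on it.
public class SerilogMyLibrary1Logger : IMyLibrary1Logger
{
    public void Error(string message, Exception exception)
        => Serilog.Log.Error(exception, message);
}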

The other approach is to use a shared common Interfaces library.

[Image: both libraries depending on a shared common interfaces library]

The problem with this approach, however, is that when you need to update the common interfaces library with a breaking change it becomes near impossible, because you end up with most of your company depending on a single library. So I prefer the former approach.

SonarQube with a Multi-Language Project: TypeScript and dotnet

SonarQube is a cool tool, but getting multiple languages to work with it can be hard, mostly because each language has its own plugin maintained by different people, so the implementations differ and for each language you need to learn a new Sonar plugin.

In our example we have a frontend project using React/Typescript and dotnet for the backend.

For C# we use the standard out-of-the-box rules from Microsoft, plus some of our own custom rules.

For TypeScript we follow a lot of the recommendations from Airbnb, with some of our own tweaks.

In this example I am using an end-to-end build run in series, but in reality we use build chains to speed things up, so our actual setup is rather more complex than this.

So the build steps look something like this

  1. dotnet restore
  2. dotnet test, bootstrapped with dotCover
  3. yarn install
  4. tslint
  5. yarn test
  6. SonarQube runner

Note: in this setup we do not get the build test stats in TeamCity, so we cannot block builds on test coverage metrics.

Let’s cover the dotnet side first. I mentioned our custom rules; I’ll do a separate blog post about getting them into Sonar and just cover the build setup in this post.

The dotnet restore setup is pretty simple. We use a custom nuget.config file for our internal NuGet server; I would recommend always using a custom nuget.config file, as your IDE will pick it up and use its settings.


dotnet restore --configfile=%teamcity.build.workingDir%\nuget.config MyCompany.MyProject.sln

The dotnet test step is a little tricky: we need to bootstrap it with dotCover.exe, using the analyse command and outputting the HTML format that Sonar will consume (yes, Sonar wants the HTML format).


%teamcity.tool.JetBrains.dotCover.CommandLineTools.DEFAULT%\dotcover.exe analyse /TargetExecutable="C:\Program Files\dotnet\dotnet.exe" /TargetArguments="test MyCompany.MyProject.sln" /AttributeFilters="+:MyCompany.MyProject.*" /Output="dotCover.htm" /ReportType="HTML" /TargetWorkingDir=.

echo "this is working"

Lastly, the exit code on failing tests is sometimes non-zero, which causes the build to fail, so putting the echo line after it mitigates this (the step’s exit code becomes that of the last command).

For TypeScript we have three steps.

yarn install, which just calls that exact command.

Our tslint step is the command line step below; again we need a second echo line because when there are linting errors tslint returns a non-zero exit code and we need the process to continue.


node ".\node_modules\tslint\bin\tslint" -o issues.json -p "tsconfig.json" -t json -c "tslint.json" -e **/*.spec.tsx -e **/*.spec.ts
echo "this is working"

The test step will generate an lcov report. I need to put a disclaimer here: lcov has a problem where it only reports coverage for the files that were executed during the tests, so code that is never touched by tests will not appear in your lcov report, whereas SonarQube will give you the correct numbers. So if you get to the end and find Sonar reporting numbers a lot lower than you thought you had, this is probably why.

Our test step just runs yarn test, but here is the full command from the package.json for reference:

"scripts": {
  "test": "jest --silent --coverage"
}

Now we have three artifacts: two coverage reports and a tslint report.

The final step takes these, runs an analysis on our C# code, then uploads everything

We use the SonarQube runner plugin from SonarSource.

[Image: the SonarQube runner build step in TeamCity]

The important thing here is the additional Parameters that are below

-Dsonar.cs.dotcover.reportsPaths=dotCover.htm
-Dsonar.exclusions=**/node_modules/**,**/dev/**,**/*.js,**/*.vb,**/*.css,**/*.scss,**/*.spec.tsx,**/*.spec.ts
-Dsonar.ts.coverage.lcovReportPath=coverage/lcov.info
-Dsonar.ts.excludetypedefinitionfiles=true
-Dsonar.ts.tslint.outputPath=issues.json
-Dsonar.verbose=true

You can see the three artifacts that we pass in. We also disable SonarQube’s own TypeScript analysis and rely on the analysis from tslint. The reason is that this lets us control the analysis from the IDE, and keep the analysis done in the IDE easily in sync with the SonarQube server.

Also, if you are using custom tslint rules that aren’t in the SonarQube default list you will need to import them; I will do another blog post about how we did this in bulk for the 3-4 rule sets we use.

SonarQube without a language parameter will auto-detect the languages, so we exclude files like .scss to prevent it from processing those rules.

This isn’t needed for C#, though, because we use the NuGet packages; I will do another blog post about sharing rules around.

And that’s it, your processing should work and turn out something like the below. You can see in the top right that both C# and TypeScript lines of code are reported, so the bugs, code smells, coverage, etc. are the combined values of both languages in the project.

[Image: the SonarQube dashboard showing combined C# and TypeScript results]

Happy coding!

Upgrading to Visual Studio 2017 Project file format

The two biggest changes you should be interested in are that the new project file format drops the list of included files and moves the NuGet references into the csproj.

These changes greatly reduce merge conflicts when you have a lot of developers working on a single project.

There are a couple of pain points, though. The first is that VS 2017 won’t update your project files for you and there is no official tool for this. There is a community one available, though; you can download it here:

https://github.com/hvanbakel/CsprojToVs2017

This tool only does libraries, though; for a web project you’ll need to edit the file and put in your settings manually, as well as adding “.Web” to the end of the project SDK type:


<Project Sdk="Microsoft.NET.Sdk.Web">

Running this on your project files will convert them. However, we were unlucky enough to have some people who had been excluding files from projects without deleting them, so when we converted, a large number of old .cs files came back into the solution and broke it; the new format includes files by default and you need to explicitly exclude them, the reverse of the old format’s approach.

So we wrote some PowerShell to fix this. First, a script to run per project:


#removeUnused.ps1

[CmdletBinding()]
param(
[Parameter(Position=0, Mandatory=$true)]
[string]$Project,
[Parameter(Mandatory=$false)]
[switch]$DeleteFromDisk
)

$ErrorActionPreference = "Stop"
$projectPath = Split-Path $project
if($Project.EndsWith("csproj"))
{
$fileType = "*.cs"
}
else
{
$fileType = "*.vb"
}
$fileType

$projectFiles = Select-String -Path $project -Pattern '<compile' | % { $_.Line -split '\t' } | `
% {$_ -replace "(<Compile Include=|\s|/>|["">])", ""} | % { "{0}\{1}" -f $projectPath, $_ }
Write-Host "Project files:" $projectFiles.Count

$diskFiles = gci -Path $projectPath -Recurse -Filter $fileType | % { $_.FullName }
Write-Host "Disk files:" $diskFiles.Count

$diff = (compare-object $diskFiles $projectFiles -PassThru)
Write-Host "Excluded Files:" $diff.Count

#create a text file for log purposes
$diffFilePath = Join-Path $projectPath "DiffFileList.txt"
$diff | Out-File $diffFilePath -Encoding UTF8
notepad $diffFilePath

#just remove the files from disk
if($DeleteFromDisk)
{
$diff | % { Remove-Item -Path $_ -Force -Verbose}
}

Then another script that finds all the csproj files and calls it for each one:


foreach($csproj in (Get-ChildItem . -Recurse -Depth 2 | Where-Object {$_.FullName.EndsWith("csproj")}))
{
.\removeUnused.ps1 -Project $csproj.FullName -DeleteFromDisk
}

You can run it without the -DeleteFromDisk flag to just get a text file listing what it would potentially delete, so you can test it without removing any files.

Configurator Pattern

AppSettings and XML config seem to be the staple for ASP.NET developers, but in production they aren’t good for configuration that needs to change on the fly. Modifications to the web.config cause a worker process recycle, and if you use config files external to the web.config, modifying them won’t cause a recycle, but you then need to force a recycle to pick up the changes.

If you are using something like ConfigInjector and startup validation of your settings this might not be a bad thing; however, if your app has costly start-up times for pre-warming caches etc., this may be less than desirable.

Recently we’ve been using Consul to manage our configuration, for both service discovery and its K/V store (replacing a lot of our app settings).

So we’ve started to use a pattern in some of our libraries to manage their settings from code, as opposed to filling our web.config with hordes of XML data.

The way this works is that we store our config in a singleton that is configured programmatically at app startup. This allows us to load values from whatever source we want, and abstracts away the app settings as a dependency. Then, if you need to update the settings at run time, you can call the same method again.

To make things nice and fluent, we add extension methods to set the configuration on the class, then update the singleton with a Create method at the end.


public class Configurator
{
    public static Configurator Current => _current ?? (_current = new Configurator());
    private static object _locker = new object();
    private static Configurator _current;

    public static Configurator RegisterConfigurationSettings()
    {
        return new Configurator();
    }

    internal bool MySetting1 = true;

    internal int MySetting2 = 0;

    public void Create()
    {
        lock (_locker)
        {
            _current = this;
        }
    }
}

public static class ConfiguratorExt
{
    public static Configurator DisableMySetting1(this Configurator that)
    {
        that.MySetting1 = false;
        return that;
    }

    public static Configurator WithMySetting2Of(this Configurator that, int myVal)
    {
        that.MySetting2 = myVal;
        return that;
    }
}

You could also implement what I have done as extension methods directly in the Configurator class, but I tend to find that when the class gets big it helps to break it up a bit.

This allows us to programmatically configure the library at run time and pull in the values from wherever we like, for example:


void PopulateFromAppSettings()
{
    Configurator.RegisterConfigurationSettings()
        .WithMySetting2Of(int.Parse(ConfigurationManager.AppSettings["MySetting2"]))
        .Create();
}

void PopulateFromConsul()
{
    var MySetting2 = // Get value from Consul
    Configurator.RegisterConfigurationSettings()
        .WithMySetting2Of(MySetting2)
        .Create();
}

You’ll also notice the locker object that we use to make the operation thread safe.

After populating the object we can use the read-only Configurator.Current singleton from anywhere in our app to access the configuration settings.

Cleaning up Error Logs and Dealing with Vague Exceptions like “Object reference not set to an instance of an object”

In my experience, when tracking down an issue with a .NET app, 90% of issues will have a corresponding exception. The exception might not be the root cause, but it can lead you to the root cause if you follow it, so having clean error logs is essential if you want to get there fast.

The approach I take to getting to the root cause is pretty straightforward and systematic, but I often find that developers don’t grasp troubleshooting concepts like “process of elimination” very well; these are more common on the infrastructure and support side of IT, so I wanted to give a small overview for my devs.

The steps I usually follow are:

  1. Check the line of code from the stack trace
  2. If there is not enough data, add more logging and wait for the exception to occur again
  3. If there is enough data, fix the problem

This is assuming you can’t replicate the exception.

If you take this approach you will eventually get to the root cause and fix an exception; the next step after that is to apply this to every exception and clean up your logs.

I will use an example of a recent issue I fixed.

I recently worked on one of our Selenium testing projects, which was having some intermittent issues we couldn’t troubleshoot, and there was nothing in the error logs. When this happens my initial reaction is to look for code that is swallowing exceptions, and lo and behold I found code like the below, which I fixed up.

[Image: an empty catch block swallowing the exception]
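The image shows the classic empty catch block; it looked something along these lines (illustrative, not the actual code):

try
{
    driver.Quit();
}
catch (Exception)
{
    // swallowed: nothing is logged, so the failure never shows up anywhere
}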

I fixed about five of these in key areas. Now I start getting data into my logs, and I get the age-old NullReferenceException.

[Image: the NullReferenceException as it appears in the error logs]

Looking at the line of code, I can’t tell much. We are using Serilog, so it’s easy to throw some additional objects into the Log.Error call in order to better understand what was going on at run time. In this example the NullReferenceException was on the line of code that accesses the driverPool object, so I pass it in like below.

[Image: the Log.Error call with the driverPool object added]
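Roughly what that change looks like with Serilog; the variable names and surrounding code are assumptions based on the description above:

catch (Exception ex)
{
    // Pass the object we were touching when it threw; Serilog destructures it with {@...}
    // so we can see its state in the logs.
    Log.Error(ex, "Error cleaning up the driver pool {@DriverPool}", driverPool);
}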

Now I wait for the exception to occur again and then check logs.

[Image: the logged driverPool showing a list of null entries]

Now I can see the driver pool is a list of null objects. The reason for this will be earlier in the code, but this particular case is filling up the logs, so we need to clean it up. In this situation what I do is add a null check, log a warning message, and not attempt the code that accesses the object.

The reason is that this method does a “clean up” to dispose the object, but something else has already disposed it. There is no hard and fast rule here; you need to judge these on a case-by-case basis.

[Image: the fix, a null check that logs a warning instead]
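The fix looked roughly like this: skip entries that are already gone and log a warning instead of letting them throw (again, the names are assumptions):

foreach (var driver in driverPool)
{
    if (driver == null)
    {
        // Something else already disposed this entry; warn rather than error.
        Log.Warning("Driver pool contained a null entry, skipping clean up");
        continue;
    }

    driver.Quit();
}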

After waiting again, my error logs are a lot cleaner and I can start to see other exceptions I couldn’t see before. Below you can visibly see the decrease in volume after I fixed this and a few other errors as well: after 10:15, when my code was deployed, I had one exception in the next hour, as opposed to hundreds per minute. My warning logs are a bit noisier, but that’s OK, because I am deliberately pushing the less important errors down to warnings to filter out the noise.

[Image: error log volume dropping off after the deployment]

Fluent C# for Unit Testing

I’ve been playing with different methods of this for a while now, ever since I was introduced to it by Shea when I was at Oztix.

Note: in this example I am using the Shouldly library, which I recommend checking out.

I’ve seen a number of different implementations for unit testing and mocking specifically, but they all seem a bit complex. Also, these days we run NUnit with a massive degree of parallelism, and a lot of implementations use statics, which causes issues there.

So in introducing my new team to it, I revisited it and came up with what I think is a rather simplified version using Moq, which we are using for mocking at Agoda (I miss NSubstitute 😦).

When using a fluent style in unit testing, the goal in my mind is to get your naming to follow the Given-When-Then convention from BDD. I don’t like creating “builder” classes, as they don’t self-document with good context of what you are doing.

So your test’s C# code should read something like this:

GivenAPerson()
.WhenAngry()
.ThenHappinessLevelShouldBeLow();

So let’s look at a more detailed example.

I am going to create a Person class with a happiness level property, and also something else to check, let’s say number of fingers.


public class Person
{
    public Person()
    {
        // Set defaults
        HappinessLevel = 5;
        NumberOfFingers = 10;
    }

    public int HappinessLevel { get; set; }
    public int NumberOfFingers { get; set; }

    public void MakeAngry()
    {
        HappinessLevel = 1;
    }

    public void MakeHappy()
    {
        HappinessLevel = 9;
    }
}

Now I am going to create a test helper class in my unit test project to assist with testing and mocking it.


public class GivenAPerson
{
    private Person _person;

    public GivenAPerson()
    {
        _person = new Person();
    }

    public Person Then()
    {
        return _person;
    }

    public GivenAPerson WhenAngry()
    {
        _person.MakeAngry();
        return this;
    }

    public GivenAPerson ThenHappinessLevelShouldBeLow()
    {
        _person.HappinessLevel.ShouldBeLessThan(3);
        return this;
    }

    public GivenAPerson ThenNumberOfFingersShouldBeDefault()
    {
        _person.NumberOfFingers.ShouldBe(10);
        return this;
    }
}

You can see that the methods all return “this”, the class itself; this is one of the core concepts of fluent C#, and it allows us to achieve the style we are looking for above.

There is one method that doesn’t, though, which is Then(); I use this in conjunction with Shouldly, and I’ll get into why at the end.

So now let’s look at putting it all together in the test fixture:


[TestFixture]
public class TestFixtureExample
{
    [Test]
    public void MyTest()
    {
        new GivenAPerson()
            .WhenAngry()
            .ThenHappinessLevelShouldBeLow()
            .ThenNumberOfFingersShouldBeDefault();
    }
}

The first thing you will notice is that I use the word “new”; in the past I have used statics to avoid this, but when using massive parallelism in NUnit you want to avoid statics, so new it is.

You will also note that I have added two checks to this test; this is to give an example of how you would scale the framework.

Using this method you should be able to easily reuse your checks. In this example we verify that the person’s number of fingers doesn’t change when they are angry, which is very odd, but it’s just to show how you can apply multiple Then or When steps to a single test.

Now, what about the Then() method? I normally don’t write a method like ThenNumberOfFingersShouldBeDefault(), because it checks just a single value. Normally I use ThenBlahBlah() methods for complex checks; if the check is not complex I write something like the following:


[Test]
public void MyTest2()
{
    new GivenAPerson()
        .WhenAngry()
        .ThenHappinessLevelShouldBeLow()
        .Then().NumberOfFingers.ShouldBe(10);
}

This allows me a quick check of a single value at the end if I need it, and avoids writing extra methods.

Dependency Injection Recommendations

I recently started working on a project that has a class called “UnityConfiguration” with 2000 lines of this:

container.RegisterType<ISearchProvider, SearchProvider>();

This fast becomes unmanageable. Wait, I hear you say, not all types are registered in the same way! True, and you won’t get away with a single line to wire up your whole IoC container, but you should be able to get it way under 50 lines of code, even in big projects.

I prefer to go a bit Hungarian and file things into folders/namespaces by their type, then use the IoC framework to load in dependencies based on that. This is because the type is generally where the registration differences lie.

For example, I put all my attributes under a namespace called “Attributes” (with sub-folders if there are too many, of course), and so on.

Below is an example from a WebAPI application I have worked on in the past. This method is called assembly scanning and is covered in the Autofac doco here.

var containerBuilder = new ContainerBuilder();
containerBuilder.RegisterAssemblyModules(typeof(WebApiApplication).Assembly);

containerBuilder.RegisterAssemblyTypes(typeof(WebApiApplication).Assembly)
    .Where(t => t.IsInNamespace("Company.Project.WebAPI.Lib")).AsImplementedInterfaces();
containerBuilder.RegisterAssemblyTypes(typeof(WebApiApplication).Assembly)
    .Where(t => t.IsInNamespace("Company.Project.WebAPI.Attributes")).PropertiesAutowired();
containerBuilder.RegisterAssemblyTypes(typeof(WebApiApplication).Assembly)
    .Where(t => t.IsInNamespace("Company.Project.WebAPI.Filters")).PropertiesAutowired();

containerBuilder.RegisterApiControllers(Assembly.GetExecutingAssembly()).PropertiesAutowired();
_container = containerBuilder.Build();

You can see from the above code that things like the attributes and filters require PropertiesAutowired, as I use property injection rather than constructor injection for them because they require a parameterless constructor. So I basically end up with one line for each sub-folder in my project.

So as long as I keep my filing system correct I don’t have to worry about maintaining a giant “Configuration” class for my IoC container.

You can also make use of modules in Autofac by inheriting from Module; I recommend this for libraries external to your project that you want to load in. You can then use the RegisterAssemblyModules method in Autofac in a similar way. A minimal sketch is below.
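Here’s what such a module might look like; the namespace and type names are illustrative:

using Autofac;

// Ships inside the external library; the host just registers the module.
public class MyLibraryModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder.RegisterAssemblyTypes(ThisAssembly)
            .Where(t => t.IsInNamespace("MyCompany.MyLibrary"))
            .AsImplementedInterfaces();
    }
}

// In the application:
// containerBuilder.RegisterAssemblyModules(typeof(MyLibraryModule).Assembly);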


Swagger/Swashbuckle and WebAPI Notes

If you aren’t using Swagger/Swashbuckle on your WebAPI project, you may have been living under a rock; if so, go and download it now 🙂

It’s a port of a Node.js project that rocks, and MS is really getting behind it in a big way. If you haven’t heard of it before, imagine WSDL for REST, with a snazzy web UI for testing.

Swagger is relatively straightforward to set up with WebAPI; however, there were a few gotchas I ran into that I thought I would blog about.

The first one we ran into is so common that MS have a blog post about it. The issue is an exception that gets logged due to the way Swashbuckle auto-generates the operation ID from the method names.

A common example is when you have methods like the following:

GET /api/Company // Returns all companies

GET /api/Company/{id} // Returns company of given ID

In this case the Swagger operation IDs will both be “Company_Get”; generating the swagger JSON will still work, but if you try to run AutoRest or swagger-codegen against it they will fail.

The solution is to create a custom attribute to apply to the methods, like so:


// Attribute
namespace MyCompany.MyProject.Attributes
{
    [AttributeUsage(AttributeTargets.Method)]
    public sealed class SwaggerOperationAttribute : Attribute
    {
        public SwaggerOperationAttribute(string operationId)
        {
            this.OperationId = operationId;
        }

        public string OperationId { get; private set; }
    }
}

// Filter
namespace MyCompany.MyProject.Filters
{
    public class SwaggerOperationNameFilter : IOperationFilter
    {
        public void Apply(Operation operation, SchemaRegistry schemaRegistry, ApiDescription apiDescription)
        {
            operation.operationId = apiDescription.ActionDescriptor
                .GetCustomAttributes<SwaggerOperationAttribute>()
                .Select(a => a.OperationId)
                .FirstOrDefault();
        }
    }
}

// SwaggerConfig.cs file
namespace MyCompany.MyProject
{
    public class SwaggerConfig
    {
        private static string GetXmlCommentsPath()
        {
            return string.Format(@"{0}\MyCompany.MyProject.XML",
                System.AppDomain.CurrentDomain.BaseDirectory);
        }

        public static void Register()
        {
            var thisAssembly = typeof(SwaggerConfig).Assembly;

            GlobalConfiguration.Configuration
                .EnableSwagger(c =>
                {
                    c.OperationFilter<SwaggerOperationNameFilter>();

                    c.IncludeXmlComments(GetXmlCommentsPath());

                    // the above is for the comments doco that I will talk about next.

                    // there will be a LOT of additional code here that I have omitted
                });
        }
    }
}

Then apply like this:


[Attributes.SwaggerOperation("CompanyGetOne")]
[Route("api/Company/{Id}")]
[HttpGet]
public Company CompanyGet(int id)
{
    // code here
}

[Attributes.SwaggerOperation("CompanyGetAll")]
[Route("api/Company")]
[HttpGet]
public List<Company> CompanyGet()
{
    // code here
}

Also mentioned in the MS article is XML code comments; these are awesome for documentation, but make sure you don’t have any potty-mouthed programmers.

This is pretty straightforward; see the setting below.

[Image: the XML documentation file output setting in the project’s build properties]

The issue we had, though, was packaging the XML file with Octopus, as it’s an output file generated at build time. We use the OctoPack NuGet package to wrap up our web projects, so in order to package build-time output (other than bin folder content) we need to create a nuspec file in the project. OctoPack will default to using this instead of the csproj file if it has the same name.

e.g. if your project is called MyCompany.MyProject.csproj, create a nuspec file in the project called MyCompany.MyProject.nuspec.

Once you add a files tag to the nuspec file, this overrides OctoPack’s behaviour of looking up the csproj file for files, but you can restore that behaviour by using this MSBuild switch:

/p:OctoPackEnforceAddingFiles=true

This makes OctoPack package the files from the csproj first, then add whatever is specified in the files tag of the nuspec file as additional files.

So our files tag just specifies the MyCompany.MyProject.XML file, and we are away and deploying comments as doco!

We used to use Sandcastle, so most of the main code comment doco marries up between the two.

Autofac DI is a bit odd with WebAPI controllers. We generally use DI on the constructor parameters, but WebAPI controllers require a parameterless constructor, so we need to use properties for DI. This is pretty straightforward: you just need to call the PropertiesAutowired method when registering them, and the same goes for filters and attributes. In the example below I put my filters in a “Filters” folder/namespace and my attributes in an “Attributes” folder/namespace.


// this code goes in your Application_Start

var containerBuilder = new ContainerBuilder();

containerBuilder.RegisterAssemblyTypes(typeof(WebApiApplication).Assembly)
    .Where(t => t.IsInNamespace("MyCompany.MyProject.Attributes")).PropertiesAutowired();
containerBuilder.RegisterAssemblyTypes(typeof(WebApiApplication).Assembly)
    .Where(t => t.IsInNamespace("MyCompany.MyProject.Filters")).PropertiesAutowired();

containerBuilder.RegisterApiControllers(Assembly.GetExecutingAssembly()).PropertiesAutowired();

containerBuilder.RegisterWebApiFilterProvider(config);