AutoRest, Swagger-codegen and Swagger

One of the best things about Swagger is being able to generate a client. For me, Swagger is to REST what WSDL was to SOAP. One of my big dislikes about REST from the start was that clients were hard to build because the standard was so loose; with most services, if you got one letter's casing wrong in a large object you would get back a generic 400 response with no clue as to what the actual problem was.

Enter Swagger-codegen, a Java-based command-line app for generating proxy clients from a Swagger definition. Awesomesauce! However, I'm a .NET developer and I try to avoid adding new dependencies to my development environment (like J2SE). That's OK though: they also have a REST API you can use to generate the clients.

While working on this I found that Microsoft is also working on their own version of codegen, called AutoRest. AutoRest only supports three output formats at the moment, Ruby, Node.js (TypeScript) and C#, but comparing the output from both, I am much happier with the code AutoRest produces; it's a lot cleaner.

So in our case we have three client requirements: C#, client-side JavaScript, and client-side TypeScript.

Now, whichever way you go with this, one requirement is that you need to be able to run your Web API service on a web server to generate the swagger JSON file that is used in the client code generation. You could add this to a CI pipeline with your Web API, but you would need build steps like:

  1. Build WebAPI project
  2. Deploy Web API project to Dev server
  3. Download json file from Dev Server
  4. Build client

Or you could make a separate build that you run on demand; I've tried both ways and both work fine. Step 3, for example, can be scripted as shown below.
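A minimal sketch of step 3 in PowerShell, using the same swagger URL the AutoRest step below points at (the output file name is a placeholder):

Invoke-WebRequest -Uri http://MyWebAPI.net/swagger/docs/v1 -OutFile swagger.json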

So we decided to use AutoRest for the C# client. This was pretty straightforward: the AutoRest exe is available in a NuGet package, so for our Web API project we simply added that package, which made the exe available at build time. Then it was simply a matter of adding a PowerShell step into TeamCity for the client library creation. AutoRest outputs a bunch of C# .cs files that you will need to compile, which is simply a matter of using csc.exe; after this I copy over a nuspec file that I have pre-baked for the client library.

[Screenshot: the PowerShell AutoRest step in TeamCity]


.\Packages\autorest.0.13.0\tools\AutoRest.exe -OutputDirectory GeneratedCSharp -Namespace MyWebAPI -Input http://MyWebAPI.net/swagger/docs/v1 -AddCredentials
& "C:\Program Files (x86)\MSBuild\14.0\bin\csc.exe" /out:GeneratedCSharp\MyWebAPI.Client.dll /reference:Packages\Newtonsoft.Json.6.0.4\lib\net45\Newtonsoft.Json.dll /reference:Packages\Microsoft.Rest.ClientRuntime.1.8.2\lib\net45\Microsoft.Rest.ClientRuntime.dll /recurse:GeneratedCSharp\*.cs /reference:System.Net.Http.dll /target:library
xcopy MyWebAPI\ClientNuspecs\CSharp\MyWebAPI.Client.nuspec GeneratedCSharp

You will note from the csc command lines above that I had to add in some references to get it to compile; these need to go into your nuspec file as well, so people installing your client package get the correct dependencies. A snip from my nuspec file is below:


<frameworkAssemblies>
  <frameworkAssembly assemblyName="System.Net.Http" targetFramework="net45" />
</frameworkAssemblies>
<dependencies>
  <dependency id="Microsoft.Rest.ClientRuntime" version="1.8.2" />
  <dependency id="Newtonsoft.Json" version="6.0.8" />
</dependencies>
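For context, a minimal sketch of what the full nuspec might look like around that snip; the id, version, author and description values here are placeholders, not the real file:

<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyWebAPI.Client</id>
    <version>1.0.0</version>
    <authors>MyTeam</authors>
    <description>Generated C# client for MyWebAPI</description>
    <frameworkAssemblies>
      <frameworkAssembly assemblyName="System.Net.Http" targetFramework="net45" />
    </frameworkAssemblies>
    <dependencies>
      <dependency id="Microsoft.Rest.ClientRuntime" version="1.8.2" />
      <dependency id="Newtonsoft.Json" version="6.0.8" />
    </dependencies>
  </metadata>
  <files>
    <file src="MyWebAPI.Client.dll" target="lib\net45" />
  </files>
</package>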

After this, just add a NuGet publish step and you can start pushing your library to nuget.org, or in our case just to our private internal server.

For authentication we use Basic Auth over SSL, so the "-AddCredentials" command-line parameter is needed to generate the extra credential methods and properties for us; you may or may not need this.

Below is an example console app where I have installed the NuGet package that AutoRest created. This uses Basic Auth, which you may not need.

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            var svc = new MyClient();
            svc.BaseUri = new Uri("https://MyWebAPILive.com");
            svc.Credentials = new BasicAuthenticationCredentials { UserName = "MyUser", Password = "MyPassword!" };
            Console.WriteLine(svc.HelloWorld());
            Console.ReadLine();
        }
    }
}

Next we have Swagger-codegen for our other client libraries. As I said before, I don't want to add J2SE into our build environment, to avoid complexity, so we are using the hosted API. I've built a gulp job to do this.

Why gulp? The JavaScript client output from codegen is pretty rubbish, so instead of using it I'm getting the TypeScript library and compiling that, then minifying; I find this easier to do in gulp.

The Swagger UI for the Swagger Codegen API is at https://generator.swagger.io/. When you call the POST /gen/clients method you pass in your swagger JSON, and it returns a URL you can use to download a zip file with the client package. Below is my gulpfile:

var gulp = require('gulp');
var fs = require('fs');
var request = require('request');
var unzip = require('gulp-unzip');
var ts = require('gulp-typescript');
var tempFolder = 'temp';

gulp.task('default', ['ProcessJSONFile'], function () {
    // doco https://generator.swagger.io/#/
});

gulp.task('ProcessJSONFile', function () {
    // Fetch the swagger JSON from our running Web API
    return request('http://MyWebAPI.net/swagger/docs/v1',
        function (error, response, body) {
            if (error != null) {
                console.log(error);
                return;
            }
            ProcessJSONFileSwagOnline(body);
        });
});

function ProcessJSONFileSwagOnline(bodyData) {
    // The Swagger Codegen web API requires the swagger doc to be wrapped in a "spec" object
    bodyData = '{"spec":' + bodyData + '}';
    return request({
            method: 'POST',
            uri: 'http://generator.swagger.io/api/gen/clients/typescript-angular',
            body: bodyData,
            headers: {
                "content-type": "application/json"
            }
        },
        function (error, response, body) {
            if (error) {
                console.log(error);
                return console.error('upload failed:', error);
            }
            var responseData = JSON.parse(body);
            var url = responseData.link;
            console.log(url);
            downloadPackage(url);
        });
}

function downloadPackage(url) {
    request(url)
        .on('error', function (error) { console.log(error); })
        .pipe(fs.createWriteStream('client.zip'));
    setTimeout(extractPackage, 2000); // workaround: give the download time to finish
}

function extractPackage() {
    gulp.src('client.zip')
        .pipe(unzip())
        .pipe(gulp.dest(tempFolder));
    setTimeout(moveFiles, 2000); // workaround: give the unzip time to finish
}

function moveFiles() {
    return gulp.src(tempFolder + '/typescript-angular-client/API/Client/*.ts')
        .pipe(gulp.dest('generatedTS/'));
}

Now, I am no Node.js expert, I'll be the first to admit, so I've added a few workarounds using setTimeout in my script because I couldn't get the async functions to work correctly. If anyone wants to correct me on how these should be done properly, please do 🙂
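For what it's worth, one likely cleaner approach is to chain each step off the previous stream's completion events instead of timers; a minimal sketch using the same request and gulp-unzip plugins as above (untested against the codegen service):

function downloadPackage(url) {
    request(url)
        .on('error', function (error) { console.log(error); })
        .pipe(fs.createWriteStream('client.zip'))
        .on('finish', extractPackage); // fires once the zip has been fully written
}

function extractPackage() {
    gulp.src('client.zip')
        .pipe(unzip())
        .pipe(gulp.dest(tempFolder))
        .on('end', moveFiles); // fires once all files have been extracted
}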

At the end of this you end up with the TypeScript files in a folder, which you can then process into a package. We are still working on a push to GitHub for this so that we can compile a bower package from it; I will make another blog post about that.

In the TypeScript output there will always be an api.d.ts file that you can reference in your TypeScript project to expose the client. I'll do another post about how we set up our dev environment to compile the TypeScript from bower packages.
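For example, assuming the generated files landed in the generatedTS folder from the gulp job above, the reference is a one-liner at the top of your TypeScript file:

/// <reference path="generatedTS/api.d.ts" />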

For our JavaScript library we just need to add one more step.


function compileTypeScriptClientLib() {
    var sourceFiles = [tempFolder + '/**/*.ts'];

    gulp.src(sourceFiles)
        .pipe(ts({
            out: 'clientProxy.js'
        }))
        .pipe(gulp.dest('outputtedJS/'));
}

This compiles our JS client library. We can then also minify it in gulp before packaging; again, bower is the technology we use for distributing client packages, so after this we push to GitHub, but I'll do another blog post about that.
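The minify step is just one more pipe; a minimal sketch using gulp-uglify and gulp-rename (these requires are in addition to the ones shown earlier):

var uglify = require('gulp-uglify');
var rename = require('gulp-rename');

function minifyClientLib() {
    return gulp.src('outputtedJS/clientProxy.js')
        .pipe(uglify())
        .pipe(rename({ suffix: '.min' }))
        .pipe(gulp.dest('outputtedJS/'));
}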

The TypeScript output you get from codegen is AngularJS-based, which is fine as "most" of our apps use Angular already; however, a couple of our legacy ones don't, so the client proxy object that is created needs a bit of work to inject its dependencies.

Below is an example of a JavaScript module that I use to wrap the AngularJS service and return it as a plain JavaScript object with the Angular dependencies injected:


var apiClient = (function (global) {
    var ClientProxyMod = angular.module("ClientProxyMod", []);
    ClientProxyMod.value("basePath", "http://MyWebAPILive.com/"); // normally I'd have a settings.js file where I would store this
    ClientProxyMod.service("MyWebAPIController1", ['$http', '$httpParamSerializer', 'basePath', API.Client.MyWebAPIController1]);
    var prx = angular.injector(['ng', 'ClientProxyMod']).get('MyWebAPIController1');
    return {
        proxy: prx
    };
}());

You would need to do the above once for each controller in your Web API project; codegen outputs one service per controller.

One of the dependencies of the service created by codegen is "basePath", the URL of the live service, so I pass this in as a value. You will need to register this value in your Angular module when using the client in an AngularJS app as well.

Using Basic Auth in AngularJS is pretty straightforward, because you can set it on the $http object, which is exposed as a property on the service.


apiClient.proxy.$http.defaults.headers.common['Authorization'] = "Basic " + btoa(username + ":" + password);

Then you can simply call your methods from this apiClient.proxy object.
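For example, a minimal sketch, assuming the generated service exposes a helloWorld() method mirroring the C# example earlier and returns a standard Angular $http promise:

apiClient.proxy.helloWorld()
    .then(function (response) {
        console.log(response.data); // the payload returned by the Web API
    });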


Packaging Large Scale JS projects with gulp

Large-scale JS projects have become a lot more common with the rise of AngularJS and other JavaScript frameworks.

When laying out your project you want your source in a lot of separate files, to keep things neat and structured, but you don't want your browser making a few hundred connections to the web server on every page load.

I previously blogged about a simple JS and CSS minify-and-compress setup for a C# based project, but this time I'm looking at a small AngularJS project that has over 50 JS files.

I'm using VS 2015 with the Web Essentials add-on, so I can put my npm requirements in the package.json file and they will download automatically. With VS 2013 I previously had to run npm locally each time I set up a machine to fetch them; the new VS is way better for these tasks, so if you haven't upgraded to 2015 you should download it now.

There are three important files, highlighted below.

[Screenshot: package.json, bower.json and gulpfile.js highlighted in Solution Explorer]

First, let's look at my package.json file:

[Screenshot: package.json contents]

In here you can add all your npm dependencies, and VS will run npm in the background to install them as you type.
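As a rough sketch, assuming only the plugins used by the gulpfile further below, the relevant part of package.json looks something like this (version numbers are illustrative):

{
  "devDependencies": {
    "gulp": "^3.9.0",
    "gulp-concat": "^2.6.0",
    "gulp-uglify": "^1.2.0",
    "del": "^1.2.0",
    "gulp-rename": "^1.2.0",
    "gulp-minify-css": "^1.2.0",
    "gulp-sourcemaps": "^1.5.0"
  }
}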

Next is the bower.json file

[Screenshot: bower.json contents]

This is used for managing your 3rd-party libraries. Traditionally, when pulling in 3rd-party JS libraries I would use a CDN; however, with Angular you end up with a bunch of smaller libraries, which means too many CDN references, so you are better off using bower to download them into your project and combining them from there (they will already be uglified in most cases).
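As an illustrative sketch only (the package names and versions here are placeholders, apart from ui-router-tabs, which shows up in the gulpfile below), a bower.json might look like:

{
  "name": "my-angular-app",
  "dependencies": {
    "angular": "~1.4.0",
    "angular-ui-router": "~0.2.0",
    "ui-router-tabs": "~1.1.0"
  }
}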

Now, there is also a ".bowerrc" file that I have added, because I don't like the default path:


{
  "directory": "app/lib"
}

Again, as with package.json, it will start downloading these as you type and press save. Also note that it doesn't clean up if you delete one, so you'll need to go into the target location and delete the folder manually.

Lastly, the gulpfile.js, where we bring it all together.

In the example below I am processing two lots of JavaScript: the 3rd-party libraries into lib.min.js, and our code into scripts.min.js. I could also add a final step to combine those two (a sketch follows the gulpfile below).

The good thing about the below is that as I add new JS files to my /app/scripts folder, they automatically get combined into the main script file, and when I add new 3rd-party libraries they automatically get added to the 3rd-party script file (hence the aforementioned change to the .bowerrc file).

The 3rd-party libraries do sometimes get painful though. You can see below that I am not minifying them, only joining them; the ones I'm using are all minified already, but sometimes you will get ones that aren't. Also, one of the packages I am using contains the jQuery 1.8 libraries, which it appears to use for some sort of unit test, so I had to exclude that file specifically. Be prepared for some troubleshooting here.


/// <binding BeforeBuild='clean' AfterBuild='minify, scripts' />
// include plug-ins
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var del = require('del');
var rename = require('gulp-rename');
var minifyCss = require('gulp-minify-css');
var sourcemaps = require('gulp-sourcemaps');

gulp.task('default', function () {
});

// Define javascript files for processing
var config = {
    AppJSsrc: ['App/Scripts/**/*.js'],
    LibJSsrc: ['App/lib/**/*.min.js', '!App/lib/**/*.min.js.map',
        '!App/lib/**/*.min.js.gzip', 'App/lib/**/ui-router-tabs.js',
        '!App/lib/**/jquery-1.8.2.min.js']
};

// delete the output file(s)
gulp.task('clean', function () {
    del(['App/scripts.min.js']);
    del(['App/lib.min.js']);
    return del(['Content/*.min.css']);
});

gulp.task('scripts', function () {
    // Process js from us
    gulp.src(config.AppJSsrc)
        .pipe(sourcemaps.init())
        .pipe(uglify())
        .pipe(concat('scripts.min.js'))
        .pipe(sourcemaps.write('maps'))
        .pipe(gulp.dest('App'));
    // Process js from 3rd parties
    gulp.src(config.LibJSsrc)
        .pipe(concat('lib.min.js'))
        .pipe(gulp.dest('App'));
});

gulp.task('minify', function () {
    gulp.src('./Content/*.css')
        .pipe(minifyCss())
        .pipe(rename({
            suffix: '.min'
        }))
        .pipe(gulp.dest('Content/'));
});
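If I did want that final combine step, a minimal sketch would be one more task (the 'combine' task name and the all.min.js output are placeholders):

gulp.task('combine', ['scripts'], function () {
    return gulp.src(['App/lib.min.js', 'App/scripts.min.js'])
        .pipe(concat('all.min.js'))
        .pipe(gulp.dest('App'));
});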

So now, without that optional combine step, I end up with two output files:

[Screenshot: lib.min.js and scripts.min.js in the build output]

Oh, and don't forget to add these files to your .gitignore or .tfignore so they don't get checked in. Also make sure you add "node_modules" and your 3rd-party library folder as well; you don't want to be checking in your dependencies. I'll do another blog post about using the above dependencies in the build environment.

Just to note, not having node_modules in my .gitignore killed VS 2015's integration with GitHub, causing an error about the path being longer than 260 characters; it prevented me checking in and refused to acknowledge the project was under source control.

And in my app I only need two script tags:


<script src="App/lib.min.js"></script>
<script src="App/scripts.min.js"></script>

You will also note in the above processing that I output a source map file; this is essential for debugging the scripts. The map files won't get downloaded unless the browser is in debugging mode, and you should use your build/deployment environment to exclude them from production, to stop people decompressing your JS code.

You can see from the screenshot below, debugging on my local machine, that Firebug is able to render the original JavaScript code for me to debug and step through just as if it were uncompressed, original file names and all.

[Screenshot: Firebug stepping through the original source via source map files]

Handling Client-Side Errors in AngularJS and Sending Them Server-Side

I added an example to my AzureDNS app today of how to add an ApiController that logs client-side errors to a server-side store.

Coming from a C# background, I am used to having a central logging store (e.g. a SQL table, or SEQ more recently) that all errors are dumped to. When running code on a server, it's easy for an app to throw its logs into something that sits behind the firewall and catalogs the logs from all your apps nicely.

When your code is running in someone's web browser that's not always as easy, but what I've done in this example is just create a controller at /api/Log/Error within the running app that I can throw my JavaScript exceptions at, then add a call to it from Angular's global exception handler.

The Log Controller can be found here

https://github.com/HostedSolutions/AzureDNSUI/blob/master/src/HostedSol.AzureDNSUI.Web/Controllers/LogController.cs

The log error method is pretty straightforward; the logger object is passed in via DI from Autofac.


[Route("~/api/Log/Error")]
[HttpPost]
// POST: api/Log/Error
public void PostError([FromBody]string value)
{
    _logger.Error(value);
}
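For context, a sketch of how the surrounding controller might look, assuming Serilog's ILogger is what Autofac has registered (the real file is at the GitHub link above):

public class LogController : ApiController
{
    private readonly ILogger _logger;

    // Autofac supplies the logger via constructor injection
    public LogController(ILogger logger)
    {
        _logger = logger;
    }

    [Route("~/api/Log/Error")]
    [HttpPost]
    public void PostError([FromBody]string value)
    {
        _logger.Error(value);
    }
}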

The global error handler in Angular is done as follows:

https://github.com/HostedSolutions/AzureDNSUI/blob/master/src/HostedSol.AzureDNSUI.Web/App/Scripts/fac/loggerSvc.js


'use strict';
angular.module('AzureDNSUI') // This would be changed to your app name if reusing this code
    .factory('$exceptionHandler', function ($injector) {
        return function (exception, cause) {
            var $http = $injector.get("$http");
            var $log = $injector.get("$log");
            exception.message += ' (caused by "' + cause + '")';
            $log.log(exception); // Logs to console
            $http.post('/api/Log/Error', JSON.stringify(exception)); // Where the magic happens
            throw exception;
        };
    });

An example error from Angular in our SEQ server below:

[Screenshot: an example AngularJS error rendered in SEQ]

There are a few fields that I customize by default via the Serilog config; some of these are not relevant for the JavaScript errors, for example the SourceContext will always be the same. I'll do a follow-up post later about collecting client information to pass through.

Azure DNS Web UI, the beginning

I was a bit frustrated to hear that the Azure DNS service didn’t have a UI.

It is however understandable; the main purpose of introducing a DNS service in the cloud environment is primarily around environments that are run up programmatically (i.e. infrastructure as code).

But I've been meaning to dig my hands into a small-sized AngularJS project to try out some stuff, so I decided to write a small web UI for it; this is why I haven't done any posts in a week. Well, that and my cousin's wedding 🙂

It is starting to take shape now in GitHub:

https://github.com/HostedSolutions/AzureDNSUI

It's based on the authentication libraries published in the To-Do example app from MS.

There is still a bit of work to be done; I'll be doing some follow-up posts, and hopefully will be able to get it into the Azure Marketplace.

Currently there is a requirement to configure an Azure Application in Azure AD, then put the AppID and tenant into the config to make it work. As I understand it, if it is a marketplace app this will be configured automatically in your environment when it's installed.

This is configured under the Active Directory section in Azure, and there are a few gotchas I will cover in follow-up posts, but hopefully this won't be needed in the long run.

[Screenshot: Azure Application configuration in Azure AD]


<add key="ida:Tenant" value="hostedsolutionscomau.onmicrosoft.com" />
<add key="ida:Audience" value="ffd940d1-3eed-425b-9ae9-fd0e9955db29" />

The app itself is pretty basic: after a bounce out to Azure/MS Live to log in, you come back with a token that is passed with the API calls from JavaScript. The Azure management endpoints are all CORS-enabled, so the UI can be hosted on a different domain from the APIs and runs fine.
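For example, attaching the token in Angular looks roughly like this; the management endpoint and api-version here are illustrative, the real calls are in the repo above:

$http.get('https://management.azure.com/subscriptions?api-version=2015-01-01', {
    headers: { 'Authorization': 'Bearer ' + token }
}).then(function (response) {
    console.log(response.data.value); // the list of subscriptions
});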

You simply select the subscription, then the resource group, and you get a list of domains in that resource group.

[Screenshot: Azure DNS web UI, domain list]

Then once a domain is selected you can edit the individual records.

I noticed the schema of the JSON objects is designed hierarchically, assuming that each A record, for example, will most likely have multiple IPs (i.e. DNS load balancing), so I have tried to design the UI to suit this.

The exception was the TXT records; if someone can explain to me why you would have multiple values for a single TXT record, I'd be glad to fix it up, but even looking at the RFCs left me saying WTF?

[Screenshot: Azure DNS web UI, record editor]

I think it will be another few weeks before this is completed and ready for the marketplace, but I think it's a tool worth having: while the "run up" of infrastructure-as-code deployments might be all command-line based, sometimes it's handy to have a UI to check your work and make minor "on the spot" fixes while troubleshooting, rather than having to pore through command lines.

More to come in the coming weeks…