Deploying to Telerik Platform from Octopus Server

We have been using Telerik Platform for a while now. While their platform is great, going from TFVC to build to Platform for deployment always involved someone building from their local Visual Studio, which of course carries the risks of:

  1. Manually changing settings from local to dev/test/live and things getting missed
  2. Unchecked-in code going up
  3. Manual Labor

So, since they have a CLI, we decided to try to automate this process. It's a bit weird what we came up with, because we build at deployment time, but it works.

I set up a git repo for this one with an example solution using the Friends app.

Here is a summary of what we are going to set up with this process:

  1. Package the project into a NuGet package on the TeamCity server and push it to a NuGet server
  2. Pick up the NuGet package with an Octopus server and store variables for dev/test/prod in Octopus
  3. Deploy to Telerik Platform from the Octopus Tentacle
  4. Based on the Octopus environment chosen (dev/test/etc.), use different variables and make the app available to different groups


First of all, the project itself. The build server in this instance isn't going to build anything, it's just going to package it, so we simply need to add a nuspec file to the project in Visual Studio; example below.

<?xml version="1.0" encoding="utf-8" ?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
 <metadata>
  <title>platform-friends-hybrid deployment package</title>
  <authors>Hosted Solutions Pty Ltd</authors>
  <owners>Hosted Solutions Pty Ltd</owners>
  <description>platform-friends-hybrid deployment package</description>
  <summary>platform-friends-hybrid deployment package</summary>
  <releaseNotes />
  <copyright>Copyright © Hosted Solutions Pty Ltd 2015</copyright>
 </metadata>
 <files>
  <file src="\**\*.*" />
  <file src=".abproject" />
  <file src=".debug.abproject" />
  <file src=".release.abproject" />
 </files>
</package>

Now, in this example you will note the files beginning with a "." had to be added individually; this is because they aren't picked up by "*.*".

You also need to manually add the packages.config for NuGet with a reference to OctoPack. This will package up the files for you into the NuGet format Octopus needs.

This commit in the GitHub repo has the full details.

Finally, we need to change the file /scripts/app/settings.js so that we can token-replace the variables. They need to be in the format #{VariableName} for Octopus; below is an example.

var appSettings = {
    everlive: {
        apiKey: '#{EVERLIVE_API_KEY}', // Put your Backend Services API key here
        scheme: 'http'
    }
};


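Under the hood, Octopus's variable substitution just swaps each #{...} token for the matching scoped variable value. A minimal sketch of the idea (not Octopus's actual implementation; names are made up):

```typescript
// Replace each #{Name} token with its value; unknown tokens are left untouched,
// which is handy for spotting variables you forgot to define in Octopus.
function substituteVariables(text: string, variables: Record<string, string>): string {
  return text.replace(/#\{(\w+)\}/g, (match: string, name: string) =>
    name in variables ? variables[name] : match
  );
}

const line = "apiKey: '#{EVERLIVE_API_KEY}', scheme: 'http'";
console.log(substituteVariables(line, { EVERLIVE_API_KEY: "abc123" }));
// apiKey: 'abc123', scheme: 'http'
```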
To get this to build on your build server you will need to download and install the AppBuilder Visual Studio extension there as well (on the Telerik Platform site, go to "Getting Started" -> "Downloads" -> "App Builder Hybrid").


In my build in TeamCity to make this work I’ve got 4 steps


These are very similar to what I outlined in a previous post.

The main difference is I’ve had to add an extra build configuration parameter like this


This makes OctoPack pass an additional argument (NuGet's -NoDefaultExcludes switch) to NuGet when it does the packaging; without this, NuGet refuses to pick up the files beginning with ".".

Now this will package up our solution to be built on the deployment server; it won't actually do any building itself.


I've got a few projects that are odd like this, where I end up pushing from Octopus to a remote environment to then deploy onward. It's not unusual, I think, but still not common. In one of our setups we ended up creating a machine specifically for running scripts; in my smaller setup we just drop a Tentacle on our Octopus server.

I’m using an octopus server with a tentacle on it in this example.

First we need to get Node.js on the box.

The tricky bit here is that Node.js with the AppBuilder CLI runs out of the user profile, so what I have done is set the Tentacle on the box to run as a local user account (if you are in a domain I recommend an AD account); make sure it is a local administrator on the box the Tentacle is installed on.


Once this is done, log in as this user to install and set up Node.js and the AppBuilder CLI from the command line:

npm install appbuilder -g


Now that that is ready, you need to set up a project in Octopus.


Make sure you select the Substitute Variables in Files step feature.


And you will need to add your JavaScript settings file to the list once this is enabled.


Then set up the variables in the variables section.


And for each variable you will probably want to set a different scope per environment.


Then add a step template

I've put notes for my original PowerShell script below, and the full step template for Octopus is in the GitHub repo.

Now I'll just walk through a few of the things in the PowerShell and why we are doing things that way.

# Setup Group Command
$GroupCmd = " --group " + $GroupAccessList.Replace(",", " --group ");
# Collapse any double spaces introduced by the replace
$GroupCmd = $GroupCmd.Replace("  ", " ").Replace("  ", " ")

This code above is for giving the option for various groups to be able to access different deployments. For example, we have Dev and Test, the developers have access to both Dev and Test, but only our testers have access to Test, because we allow the developers to “mess around” with dev, which may cause false positive results in testing.
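The transformation those lines aim for is easy to see in isolation; here is the same idea sketched in TypeScript (illustration only, the step template itself is PowerShell):

```typescript
// Turn a comma-separated access list into repeated --group flags,
// e.g. "Dev,Test" -> "--group Dev --group Test".
function buildGroupArgs(groupAccessList: string): string {
  return groupAccessList
    .split(",")
    .map((g) => g.trim())
    .filter((g) => g.length > 0)
    .map((g) => "--group " + g)
    .join(" ");
}

console.log(buildGroupArgs("Dev,Test")); // --group Dev --group Test
```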

$currentstepname = $OctopusParameters["Octopus.Step[0].Package.NuGetPackageId"]

We expect the step before this one to be the Telerik NuGet package step; if it is not, this will fail.

CMD /C C:\"Program Files (x86)"\nodejs\npm update -g appbuilder; $LASTEXITCODE

We run an update command to make sure that we are on the latest version of AppBuilder; if AppBuilder is not the latest version, the upload will fail.

$AppData = $env:APPDATA

AppBuilder runs from the local user profile, so we need to use the AppData folder path

$JSON = (Get-Content "$parentLocation\.abproject" -Raw) | ConvertFrom-Json; 

# Set values in object
$JSON.ProjectName = $ProjectName; 
$JSON.AppIdentifier = $AppIdentifier; 
$JSON.DisplayName = $DisplayName;
$JSON.BundleVersion = $OctopusReleaseNumber; 

$JSON | ConvertTo-Json | Set-Content "$parentLocation\.abproject"

We modify values in the .abproject file to set things like the version number and the app name (we prefix Dev with "Dev" and Test with "Test", so that, in the example above where a developer has both the Dev and Test apps on their phone, they know which one is which).

CMD /C $APPDATA\npm\appbuilder dev-telerik-login $TelerikUserName $TelerikPassword
IF ($LASTEXITCODE -ne 0) { Write-Error "Error" }

Logs in to Telerik Platform.

CMD /C $APPDATA\npm\appbuilder appmanager upload android --certificate $AndriodCertID --publish --send-push $GroupCmd;$LASTEXITCODE;IF ($LASTEXITCODE -ne 0) { Write-Error "error"}

Uploads the Android version.

CMD /C $APPDATA\npm\appbuilder appmanager upload ios     --certificate $iOSCertID --provision $iOSProvitionID --publish $SendPushCmd $GroupCmd;$LASTEXITCODE;IF ($LASTEXITCODE -ne 0) { Write-Error "error"}

Uploads the iOS version
We normally set the Group Access list to a variable, so that it can be varied per environment.

So we then end up with steps like these in Octopus:


Once deployed to Telerik Platform, our version numbers are in sync with Octopus and TeamCity as well as our source control labels. And we end up with separate apps for Dev, Test, etc., and in the event you are accessing services you can token-replace the right scoped variable so that the "Test Mobile App" accesses the "Test Web API" and so on.

And there you have it, TFVC -> TeamCity -> Octopus Deploy -> Telerik Platform

Private Nuget Servers – Klondike

UPDATE: A better solution, I think, is to use VSTS's new Package Management feature if you are using VSTS already.

NuGet is a great way to share packages around, but for a lot of the LOB apps I work on we need a private solution that isn't open to the public. I've set up Klondike a couple of times now and find it to be lightweight and dumb enough not to give any problems.

I even use it for my OctoPack NuGet files (instead of the built-in NuGet server in Octopus) as I find it handy to have all your NuGet packages in the one place.

Installation is easy: just extract it to a folder and point an IIS web root at it. It supports self-hosting in OWIN as well if that is more your flavor.

To get everything set up I generally set this value in the web.config; it treats all requests from the local server as admin. So, using a web browser on the local server, I get everything set up.

<add key="NuGet.Lucene.Web:handleLocalRequestsAsAdmin" value="true" />

I've dropped this on servers on a domain before and used Windows auth for authentication at an IIS level, which works fine. You could also use Basic Auth and create user accounts on the local server to manage access, which I've done as well without any complaints.

In general terms, for most projects in private sources that I work on, if you have access to the source control you should have permission to read from the NuGet server, so I leave the credentials for the NuGet server in my nuget.config at the root of each project in source.

Example configuration below from one of my projects:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <activePackageSource>
    <add key="All" value="(Aggregate source)" />
  </activePackageSource>
  <packageRestore>
    <add key="enabled" value="true" />
  </packageRestore>
  <solution>
    <add key="disableSourceControlIntegration" value="true" />
  </solution>
  <packageSources>
    <add key="MyNugetServerInAzure" value="" />
  </packageSources>
  <packageSourceCredentials>
    <MyNugetServerInAzure>
      <add key="Username" value="MyUserAccount" />
      <add key="ClearTextPassword" value="XXXXX" />
    </MyNugetServerInAzure>
  </packageSourceCredentials>
</configuration>

And then create the user account in Computer Management on the server and set up IIS authentication on the Klondike website like so:


And you are away. I have had odd issues using the above with the role mappings, so I generally leave these settings alone:

<add key="PackageManager" value="" />
<add key="AccountAdministrator" value="" />

The only other thing I generally change in the config for Klondike is:

<add key="NuGet.Lucene.Web:localAdministratorApiKey" value="XXX" />

And drop in an API key to use when updating packages. I don't always give this out to all the developers working on a project, but instead save the API key in the build server to force check-ins to update packages via the build system.

One last issue I have with Klondike is this error: "The process cannot access the file 'C:\inetpub\wwwroot\App_Data\Lucene\write.lock' because it is being used by another process."


This happens if you recycle the worker process; touch the web.config file and it will fix itself up.

Test Manager Workflow, Manual to Automation

We've been using Test Manager a lot lately and I am really happy with the workflow that is built into TFS with this product.

We have full time testers who don’t have a lot of experience with code, so you can’t give them tools like the CUIT recorder in visual studio and expect them to run with it. But they are the best people to use tools like this because they understand testing more, and also in my experience testers tend to be more thorough than developers.

The other thing I like about the workflow is that it's "documentation first", so your documentation is inherently linked into the test platform.

Microsoft Test Manager's recorder tool is a good cut-down version of CUIT that makes things easier for the testers, letting them record manual tests without getting caught up in code.

That being said, it is a pain in the ass to get running. We have a complicated site with a lot of composite controls, so the DOM was a bit ugly, and this made it a painful exercise to get any recorder going. (We also tried Telerik Test Studio and Selenium, with similar issues, but Telerik Test Studio was probably the best of them.)

The basic workflow in Test Manager starts with the Test Case work item; from a test case you outline your test steps.


The important thing here is the use of parameters, you can see in the above example that I am using @URL, @User and @Pass.

In the test case you can input any number of iterations of these; when you create the test recording it will play back one iteration per set. This allows testers to go nuts with different sets and edge cases (e.g. 200 Ws for a name, all the special characters in the ASCII set for someone's last name, Asian character sets, and so on).
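As a concrete illustration, the parameter rows behind the test case above might look something like this (made-up values):

```typescript
// One recorded test, many parameter rows; each row is one playback iteration.
interface TestRow { URL: string; User: string; Pass: string; }

const edgeCaseRows: TestRow[] = [
  { URL: "http://test.example", User: "W".repeat(200), Pass: "pass1" }, // 200-character name
  { URL: "http://test.example", User: "O'Brien!@#$%", Pass: "pass1" }, // ASCII special characters
  { URL: "http://test.example", User: "张伟", Pass: "pass1" },          // Asian character set
];
```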

Once the documentation is done, the test recorder is used (with IE; it doesn't work in other browsers) to save a recorded case, which can then be played back while someone is watching. This is the first level of automation, and I think it's an important first step, because in your development flow, if you are working on a new feature (requiring a new test case), you want a set of eyes looking at it first; you don't want to go straight to full automation.

It's important to note, though, that the test recorder cannot be used to "assert" values like CUIT; validation of success is done by the tester looking at the screen and deciding.

When you have passed all your newly created test cases, your new features are working, and the Product Owner and stakeholders are all happy with the UI, this is the point where you want to go to full automation with your UI tests.

When creating a CodedUI test one of the options is to “Use an existing Recording”


After selecting this you can search through your work items for the test case and Visual Studio will pull out the recording and generate your CodedUI test starting with what your testers have created.


That will go through and generate something like the below code.

[DataSource("Microsoft.VisualStudio.TestTools.DataSource.TestCase", "https://MyTFSServer/tfs/defaultcollection;MyProject", "13934", DataAccessMethod.Sequential)]
[TestMethod]
public void CodedUITestMethod1()
{
    // To generate code for this test, select "Generate Code for Coded UI Test" from the shortcut menu and select one of the menu items.
    this.UIMap.GoToURLParams.UIboxLoadBalanceWindowUrl = TestContext.DataRow["URL"].ToString();
    this.UIMap.EnterUserandPassParams.UITxtEmailEditText = TestContext.DataRow["User"].ToString();
    this.UIMap.EnterUserandPassParams.UITxtPasswordEditPassword = Playback.EncryptText(TestContext.DataRow["Pass"].ToString());
}

The great thing about this is that we can see the attribute for the datasource, it refers to the TFS server and even the work item ID. so this is where the CodedUI test is going to get its data.

So your testers can maintain the data in their Test Plans in TFS and the testing automation will look back into these TFS work items to get their data each time they run.

Now that you are in the CUIT environment you can start to use the CUIT recorder to add Asserts to your tests and validate data. In the above example we might do something like Assert that username text displayed on the page after login is the same as the value we logged in as.

Now we have a working Coded UI test we can add it to a build.

I generally don't run CodedUI tests in our standard builds, and I don't know anyone who does. We use an n-tier environment, so in order to run a single app you would need to start up a couple of service layers and a database behind it. So I find it better to create a separate solution and builds for the CodedUI tests and schedule them to run overnight, after a full deploy of all layers to the test environment.

As in the example above, we put the initial "open this URL in the browser" step's URL in as a parameter, so if we want to point the tests at another environment, or at localhost, it's easy; I definitely recommend doing this.

So putting it all together we have the below work flow:


Now, the thing I like about this is they all feed from the same datasource, which is a documented test plan from your testers, who can then update a single datasource that's in a format they not only understand but have control over.

Take the example where you are implementing Asian language support. In the above example you could simply add a new parameter set with a Chinese-character username to the test case, which could be done by a tester; then, if it wasn't supported, the build would start failing.

Lastly, I mentioned before that recording is IE only; there are some plugins that allow playback in different browsers, which I will be checking out soon and making some additional posts about.

This process also works well into your weekly sprint cycles, which I will go into in another post.

AppSettings in ASP.NET 5

There is a massive amount of change in ASP.NET 5 (for the better), and the way we use AppSettings is just one piece of it. I've been using ConfigInjector for a while now, so I thought it might become redundant with these new changes, but there is still work to be done, in my opinion, before the framework support is as good.

ASP.NET 5 uses a JSON file to store the settings and a C# class to load them into, and you can support multiple files/classes as well. I'm just going to look at an example of one, and where they need to do some improvement.

To get started you need to create a class that will map to the data in the JSON config file.

public class AppSettings
{
    public string AdminEmail { get; set; }
    public Uri AuthHost { get; set; }
}

The following is the corresponding json file


{
  "AppSettings": {
    "AdminEmail": "",
    "AuthHost": "kljkljk"
  }
}

Then in startup.cs add the following to load it in for use in DI.

public void ConfigureServices(IServiceCollection services)
{
    // Add Application settings to the services container.
    services.Configure<AppSettings>(Configuration.GetSection("AppSettings"));
}

Then it can be used in a controller like this following example

private readonly IOptions<AppSettings> _appSettings;

public AccountController(IOptions<AppSettings> appSettings)
{
    _appSettings = appSettings;
}


// POST: /Account/Login
public async Task<IActionResult> Login(LoginViewModel model, string returnUrl = null)
{
    var i = _appSettings.Value.AuthHost;
    // ...
}


Now the issue with the above is that I have added a value into the config file that is not the correct data type.

The property AuthHost is a Uri, but the value in the JSON file, "kljkljk", is not a valid Uri.

If I were using ConfigInjector, it would blow up on application startup; but using this configuration in ASP.NET 5, it only errors when the value is first accessed within the controller, which is the same issue I have with the current use of AppSettings via the ConfigurationManager and web.config. So if someone makes a typo, you don't know until code that actually uses the value runs.
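What ConfigInjector gives you is eager validation: parse every setting once at startup so a bad value fails immediately. The difference is easy to sketch (TypeScript purely to illustrate; the names are made up):

```typescript
interface AppSettings { adminEmail: string; authHost: string; }

// Eager (startup-time) validation: parse every setting up front, so a typo
// like "kljkljk" blows up when the app starts, not when a controller first reads it.
function loadSettings(raw: Record<string, string>): AppSettings {
  const authHost = new URL(raw["AuthHost"]).toString(); // throws on a malformed value
  return { adminEmail: raw["AdminEmail"], authHost };
}
```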

Stopping Private AppSettings Getting into Public Repos

Generally we check in just development AppSettings that refer to things like "localhost:3456", etc., but we have some instances where we can't run the service on our local machines.

Why? One good example is the work we do against our on-premise TFS server. While I have considered an F5 experience that "runs up" a TFS server on a developer's local machine, I just don't think we (or most people) have the budget for workstations that could handle it. Add to that the other services we do integration against (TeamCity, Octopus, etc.).

We use a separate project in TFS, Octopus, etc., and sometimes separate servers, for our development work, so we aren't interfering with live systems; but the credentials and URLs aren't something I want appearing in public GitHub repos.

So how we work around this is a very old setting

<appSettings file="../../../MyRepoName.config">
  <add key="TeamCityPassword" value="XXXXX" />
  <add key="TeamCityUserName" value="ABCDE" />
</appSettings>

The above usage of the file attribute points to a location outside the repo; I have set up a config file like this for each public repo that I use.

The good thing about this is that if the file is not present (i.e. when the app is deployed to production) then it is simply ignored. You could also remove the attribute with a Web.Release.config transform to be clean about it; example below.

<appSettings xdt:Transform="RemoveAttributes(file)"/>
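The behaviour of the file attribute is essentially an "optional override file" merge; here is the same pattern sketched in TypeScript (a hypothetical function, just to show the semantics):

```typescript
import * as fs from "fs";

// Merge overrides on top of defaults, silently ignoring a missing file --
// just like appSettings file="..." ignores an absent config in production.
function loadWithOverrides(
  defaults: Record<string, string>,
  overridePath: string
): Record<string, string> {
  if (!fs.existsSync(overridePath)) {
    return { ...defaults }; // override file not deployed: defaults only
  }
  const overrides = JSON.parse(fs.readFileSync(overridePath, "utf8")) as Record<string, string>;
  return { ...defaults, ...overrides };
}
```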

Amazon.S3.AmazonS3Client PutObject “Cannot close stream until all bytes are written”

The AWS .NET SDK is, in my opinion, horrible and poorly documented. Below is an issue I had that was very hard to troubleshoot, so I thought I would post about it.

The below code was giving me this exception

System.Net.WebException: The request was aborted: The request was canceled. ---> System.IO.IOException: Cannot close stream until all bytes are written.
   at System.Net.ConnectStream.CloseInternal(Boolean internalCall, Boolean aborting)
   --- End of inner exception stack trace ---
   at System.Net.ConnectStream.CloseInternal(Boolean internalCall, Boolean aborting)
   at System.Net.ConnectStream.System.Net.ICloseEx.CloseEx(CloseExState closeState)
   at System.Net.ConnectStream.Dispose(Boolean disposing)
   at System.IO.Stream.Close()
   at Amazon.S3.AmazonS3Client.getRequestStreamCallback[T](IAsyncResult result)

using (var client = Amazon.AWSClientFactory.CreateAmazonS3Client(awsAccessKey, awsSecretKey))
{
    var s3Key = awsEticketFolder + eticketFileName;
    var objReq = new PutObjectRequest();
    objReq.CannedACL = S3CannedACL.PublicRead;
    objReq.StorageClass = S3StorageClass.Standard;
    objReq.BucketName = awsBucketName;
    objReq.Key = s3Key;
    objReq.InputStream = fileStream;
    client.PutObject(objReq); // large uploads die here with no explicit timeout set
}


I was only getting it on large files.

It appears that, under the hood, the library was timing out and closing the connection before the upload was complete, hence causing this error.

I have been told, however, that this might only occur on MemoryStream objects, and that FileStreams might actually get a timeout error instead; I haven't verified this though.

The solution?


Adding a timeout of one hour to the request object (via the PutObjectRequest's Timeout property) allows my big files to upload successfully.

Sync C# classes with TypeScript using TypeLite and the pain of building and packaging

UPDATE: A much better way to approach this is to use Swagger, and generate your client code using AutoRest or Swagger Codegen. You won't be able to use this with WCF, but we moved our WCF services to Web API and it worked fine.

A common requirement we have is using the same objects between TypeScript and C#, writing the UI in TypeScript and the business logic and data access in C# in a Web API or WCF.

What I've got for this is by no means a perfect solution. We work on a team, so we need to share the code around, and build time is important. I'm hoping to do some follow-up posts about how I solved these issues, when I solve them.

We use TypeLite for this; you can install it via the NuGet Package Manager into your C# class library. Once it's in there, all you need to do is add the "TsClass" attribute to the classes that you share.

Make sure they are properties and not fields, though; fields won't be output into the generated TypeScript code.

[TsClass]
public class MyObject
{
    public int ObjectId { get; set; }
    public string ObjectName { get; set; }
}
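For reference, the generated .ts file ends up containing an interface along these lines (the exact namespace/module wrapping depends on your TypeLite template settings):

```typescript
// Roughly what TypeLite generates for the MyObject class above.
interface MyObject {
  ObjectId: number;
  ObjectName: string;
}

// UI code can then share the same shape as the C# side:
const fromApi: MyObject = { ObjectId: 1, ObjectName: "example" };
console.log(fromApi.ObjectName); // example
```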

Now, the imperfect part of this is the build. TypeLite uses a .tt script to execute; it is possible to run these via MSBuild (TeamCity/Team Build), but they are missing some libraries, so if you do actually get one to run (as I spent half a weekend doing) it'll blow up on you.

I'm still working on a way to solve this. For now there is one manual step I tell my guys to do before check-in, which is to right-click on the TypeLite script and hit "Run Custom Tool".


This will regenerate the .ts file, which is then checked in with the project.

Now for the build time pain.

I create a TypeScript subfolder in the project and throw in a nuspec file; example below.

<?xml version="1.0" encoding="utf-8" ?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
 <metadata>
  <title>Types.TypeScript deployment package</title>
  <authors> Pty Ltd</authors>
  <owners> Pty Ltd</owners>
  <description>Types.TypeScript deployment package</description>
  <summary>Types.TypeScript deployment package</summary>
  <releaseNotes />
 </metadata>
 <files>
  <file src="*.ts" target="content\Types\$version$" />
 </files>
</package>

This then gives me a NuGet package that I push up to our NuGet server and that we can install into the UI project via the NuGet Package Manager. I also package the C# library, so I end up with two outputs from the one project that can be used from C# or TypeScript.


Currently I put the version number in the path because I was having issues with Visual Studio version control causing a delete-and-add on the same file when installing the NuGet contents, preventing a check-in when updating the NuGet package, as the file locks when it is flagged for deletion in source control.

I'm yet to try this with VS2015 or with Git to see if that fixes it. Having the version number in the path means I have to update the path in the project each time, so it's by no means perfect, but it works and we can share it around the team.