Deploying to Telerik Platform from Octopus Server

We have been using Telerik Platform for a while now. While their platform is great, going from TFVC to Build to Platform for deploys always involved someone building from their local Visual Studio, which of course carries the risks of:

  1. Manually changing settings from local to dev/test/live and things getting missed
  2. Unchecked-in code going up
  3. Manual Labor

Since they have a CLI, we decided to try to automate this process. It's a bit weird what we came up with, because we build at deployment time, but it works.

I set up a Git repo for this one with an example solution using the Friends app (https://github.com/HostedSolutions/platform-friends-hybrid).

A summary of what we are going to set up with this process:

  1. Package the project into a NuGet package on the TeamCity server and push it to a NuGet server
  2. Pick up the NuGet package with an Octopus Server and store variables for dev/test/prod in Octopus
  3. Deploy to Telerik Platform from the Octopus Tentacle
  4. Based on the Octopus environment (dev/test/etc.), use different variables and make the app available to different groups

THE PROJECT

First of all, the project itself. The build server in this instance isn't going to build anything, it's just going to package it, so we simply need to add a nuspec file to the project in Visual Studio. Example below:


<?xml version="1.0" encoding="utf-8" ?>
<package xmlns="http://schemas.microsoft.com/packaging/2011/08/nuspec.xsd">
 <metadata>
 <id>platform-friends-hybrid</id>
 <version>0.0.0</version>
 <title>platform-friends-hybrid deployment package</title>
 <authors>Hosted Solutions Pty Ltd</authors>
 <owners>Hosted Solutions Pty Ltd</owners>
 <requireLicenseAcceptance>false</requireLicenseAcceptance>
 <description>platform-friends-hybrid deployment package</description>
 <summary>platform-friends-hybrid deployment package</summary>
 <releaseNotes />
 <copyright>Copyright © Hosted Solutions Pty Ltd 2015</copyright>
 <language>en-US</language>
 </metadata>
 <files>
 <file src="\**\*.*" />
 <file src=".abproject" />
 <file src=".debug.abproject" />
 <file src=".release.abproject" />
 </files>
</package>

Now in this example you will note I had to add the files beginning with a "." individually; this is because they aren't picked up by the "*.*" wildcard.

You also need to manually add the packages.config for NuGet with a reference to OctoPack. OctoPack will package up the files for you into the NuGet format Octopus needs.
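For reference, a minimal packages.config is sketched below; the OctoPack version shown is illustrative, so use whatever the package manager pulls down for you.

<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="OctoPack" version="3.0.31" targetFramework="net45" />
</packages>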

This commit on GitHub has the full details (https://github.com/HostedSolutions/platform-friends-hybrid/commit/467c2f06fdb5d123021250902cf0035bf5806790).

Finally we need to change the file /scripts/app/settings.js so that we can token-replace the variables; they need to be in the format #{VariableName} for Octopus. Below is an example:


var appSettings = {
    everlive: {
        apiKey: '#{EVERLIVE_API_KEY}', // Put your Backend Services API key here
        scheme: 'http'
    },
    // ... remaining settings unchanged
};

THE BUILD

To get this to build on your build server you will need to download and install the Visual Studio extension on the build server as well (https://platform.telerik.com/#workspaces, then go to "Getting Started" -> "Downloads" -> "App Builder Hybrid").

[Screenshot: AppBuilder extension for Visual Studio]

To make this work, my TeamCity build has four steps.

[Screenshot: Build steps in TeamCity]

These are very similar to what I outlined in this post (https://beerandserversdontmix.com/2015/09/30/tfs-teamcity-nuget-octopus-somewhere/).

The main difference is I've had to add an extra build configuration parameter like this:

/p:OctoPackNuGetArguments=-NoDefaultExcludes

This makes OctoPack pass an additional argument to NuGet when it does the packaging; without it, NuGet refuses to pick up the files beginning with a ".".
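Putting it together, the MSBuild arguments on the TeamCity build step end up looking something like the sketch below; RunOctoPack is the standard OctoPack switch, and the solution name is just from this example.

msbuild platform-friends-hybrid.sln /t:Build `
    /p:RunOctoPack=true `
    /p:OctoPackNuGetArguments=-NoDefaultExcludes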

Now this will package up our solution to be built on the deployment server; it won't actually do any building.

THE DEPLOY (BUILD)

I've got a few projects that are odd like this, where I end up pushing from Octopus to a remote environment and then deploying onward. It's not unheard of, I think, but still not common. In one of our setups we created a dedicated machine just for running scripts; in my smaller setup we just drop a Tentacle on our Octopus server.

I'm using an Octopus Server with a Tentacle on it in this example.

First we need to get Node.js on the box (https://nodejs.org/download/release/latest-v0.12.x/node-v0.12.7-x86.msi).

The tricky bit here is that Node.js with the AppBuilder CLI runs out of the user profile, so what I have done is set the Tentacle on the box to run as a local user account (if you are in a domain, I recommend an AD account); make sure it is a local administrator on the box the Tentacle is installed on.

[Screenshot: Changing the user the Octopus Deploy Tentacle runs as]

Once this is done, log in as this user to install and set up Node.js and the AppBuilder CLI from the command line:

npm install appbuilder -g

[Screenshot: npm install appbuilder command line]
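You can sanity-check the install from that user's profile the same way the deployment script will call it later; the --version flag is an assumption on my part (it's the usual CLI convention), so adjust if your version differs.

CMD /C $env:APPDATA\npm\appbuilder --version; $LASTEXITCODE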

Now that that is ready, you need to set up a project in Octopus.

[Screenshot: Creating the step in Octopus Deploy for Telerik AppBuilder]

Make sure you select the "Substitute Variables in Files" step feature.

[Screenshot: Octopus Deploy step, Substitute Variables feature]

And you will need to add your JavaScript settings file to the list once this is enabled.

[Screenshot: Octopus Deploy settings.js substitution]

Then set up the variables in the Variables section.

[Screenshot: Adding the Telerik variables in Octopus Deploy]

And for each variable you will probably want to set a different scope per environment.

[Screenshot: Octopus Deploy variable scoping per environment]

Then add a step template.

I've put notes for my original PowerShell script here (https://github.com/HostedSolutions/platform-friends-hybrid/blob/master/NotesOctopusStepTemplate.ps1).

And a full step template for Octopus here (https://github.com/HostedSolutions/platform-friends-hybrid/blob/master/OctopusStepTemplate.json).

Now I'll just walk through a few of the things in the PowerShell and why we are doing things that way.


# Set up the --group arguments from a comma-separated access list
$GroupCmd = " --group " + $GroupAccessList.Replace(","," --group ");
$GroupCmd = $GroupCmd.Replace("  "," ")   # collapse any double spaces

The code above gives the option for various groups to access different deployments. For example, we have Dev and Test: the developers have access to both Dev and Test, but only our testers have access to Test, because we allow the developers to "mess around" with Dev, which may cause false positive results in testing.
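To make the string handling concrete, here is roughly what that produces (a quick illustration, not part of the step template):

$GroupAccessList = "Developers,Testers"
$GroupCmd = " --group " + $GroupAccessList.Replace(","," --group ")
# $GroupCmd is now " --group Developers --group Testers"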


$currentstepname = $OctopusParameters["Octopus.Step[0].Package.NuGetPackageId"]

We expect the step before this one to be the Telerik NuGet package step; if it is not, this will fail.
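If you would rather fail with a clear message than a confusing error further down, a guard along these lines could work (the package id here is just from this example):

if ($currentstepname -ne "platform-friends-hybrid") {
    Write-Error "Expected the previous step to be the Telerik NuGet package step, got '$currentstepname'"
}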


CMD /C C:\"Program Files (x86)"\nodejs\npm update -g appbuilder; $LASTEXITCODE

We run an update command to make sure that we are on the latest version of AppBuilder; if AppBuilder is not the latest version, the upload will fail.


$AppData = $env:APPDATA

AppBuilder runs from the local user profile, so we need to use the AppData folder path

$JSON = (Get-Content "$parentLocation\.abproject" -Raw) | ConvertFrom-Json; 

# Set values in object
$JSON.ProjectName = $ProjectName; 
$JSON.AppIdentifier = $AppIdentifier; 
$JSON.DisplayName = $DisplayName;
$JSON.BundleVersion = $OctopusReleaseNumber; 

$JSON | ConvertTo-Json | Set-Content "$parentLocation\.abproject"

We modify values in the .abproject file to set things like the version number and the app name. We prefix the Dev app name with "Dev" and Test with "Test", so in the example above, where a developer has both Dev and Test on their phone, they know which one is which.
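That prefix just comes from scoped Octopus variables, but you could equally derive it in the script; a sketch, using the built-in Octopus.Environment.Name variable (the Production check is an assumption about how you name your environments):

$envName = $OctopusParameters["Octopus.Environment.Name"]   # e.g. "Dev" or "Test"
if ($envName -ne "Production") { $DisplayName = "$envName $DisplayName" }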

CMD /C $APPDATA\npm\appbuilder dev-telerik-login $TelerikUserName $TelerikPassword;$LASTEXITCODE;IF ($LASTEXITCODE -ne 0) { Write-Error "Error"}

Logs in to Telerik Platform.

CMD /C $APPDATA\npm\appbuilder appmanager upload android --certificate $AndriodCertID --publish --send-push $GroupCmd;$LASTEXITCODE;IF ($LASTEXITCODE -ne 0) { Write-Error "error"}

Uploads the Android version.

CMD /C $APPDATA\npm\appbuilder appmanager upload ios     --certificate $iOSCertID --provision $iOSProvitionID --publish $SendPushCmd $GroupCmd;$LASTEXITCODE;IF ($LASTEXITCODE -ne 0) { Write-Error "error"}

Uploads the iOS version.

We normally set the group access list to a variable, so that it can be varied per environment.
[Screenshot: Telerik group access list variable]

We then end up with steps like this in Octopus:

[Screenshot: Steps in Octopus Deploy for Telerik Platform]

Once deployed to Telerik Platform, our version numbers are in sync with Octopus and TeamCity as well as our source control labels. And we end up with separate apps for Dev, Test, etc., so in the event you are accessing services you can token-replace the correctly scoped variable so that the "Test Mobile App" accesses the "Test Web API", and so on.

And there you have it, TFVC -> TeamCity -> Octopus Deploy -> Telerik Platform

Private Nuget Servers – Klondike

UPDATE: A better solution, I think, is to use VSTS's new Package Management feature if you are using VSTS already.

NuGet is a great way to share packages around, but for a lot of the LOB apps I work on we need a private feed that isn't open to the public. I've set up Klondike (https://github.com/themotleyfool/Klondike) a couple of times now and find it to be lightweight and dumb enough not to give any problems.

I even use it for my OctoPack NuGet files (instead of the built-in NuGet server in Octopus), as I find it handy to have all your NuGet packages in the one place.

Installation is easy: just extract it to a folder and point an IIS web root at it. It supports self-hosting in OWIN as well if that is more your flavor.

To get everything set up I generally set the value below in the web.config; it treats all requests from the local server as admin. Then, using a web browser on the local server, I can get everything configured.


<add key="NuGet.Lucene.Web:handleLocalRequestsAsAdmin" value="true" />

I’ve dropped this on servers on a domain before and used windows auth as well for authentication at an IIS level, which works fine. You could also use Basic Auth and create user accounts on the local server to manage access which I’ve done as well without any complaints.

In general terms, for most of the private projects I work on, if you have access to the source control you should have permission to read from the NuGet server, so I leave the credentials for the NuGet server in the nuget.config at the root of each project in source.

Example configuration below from one of my projects:


<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <activePackageSource>
    <add key="All" value="(Aggregate source)" />
  </activePackageSource>
  <packageRestore>
    <add key="enabled" value="true" />
  </packageRestore>
  <solution>
    <add key="disableSourceControlIntegration" value="true" />
  </solution>
  <packageSources>
    <add key="MyNugetServerInAzure" value="https://MyOctopusServer.cloudapp.net:88/" />
    <add key="nuget.org" value="https://nuget.org/api/v2/" />
  </packageSources>
  <packageSourceCredentials>
    <MyNugetServerInAzure>
      <add key="Username" value="MyUserAccount" />
      <add key="ClearTextPassword" value="XXXXX" />
    </MyNugetServerInAzure>
  </packageSourceCredentials>
</configuration>

Then create the user account in Computer Management on the server and set up IIS authentication on the Klondike website like so:

[Screenshot: IIS authentication settings for Klondike]

And you are away. I have had odd issues using the above with the role mappings, so I generally leave these settings alone:

<roleMappings>
  <add key="PackageManager" value="" />
  <add key="AccountAdministrator" value="" />
</roleMappings>

The only other thing I generally change in the Klondike config is:


<add key="NuGet.Lucene.Web:localAdministratorApiKey" value="XXX" />

Here I drop in an API key to use when updating packages. I don't always give this out to all the developers working on the project; instead I save the API key in the build server, to force packages to be updated via check-in and the build system.
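The build step that pushes packages then just passes that key along, roughly like the line below (nuget.exe syntax; the server URL is the example one from the config above).

.\nuget.exe push .\MyPackage.1.0.0.nupkg -ApiKey XXX -Source https://MyOctopusServer.cloudapp.net:88/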

One last issue I have with Klondike is this error: "The process cannot access the file 'C:\inetpub\wwwroot\App_Data\Lucene\write.lock' because it is being used by another process."

[Screenshot: Klondike write.lock error]

This happens if you recycle the worker process; touch the web.config file and it will fix itself up.
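If you want to script that "touch" (say, after a scheduled app pool recycle), updating the file's last-write time is all it takes; a one-liner sketch, with the path assumed from the error above.

(Get-Item 'C:\inetpub\wwwroot\Web.config').LastWriteTime = Get-Date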

Test Manager Workflow, Manual to Automation

We've been using Test Manager a lot lately and I am really happy with the workflow that is built into TFS with this product.

We have full-time testers who don't have a lot of experience with code, so you can't give them tools like the CUIT recorder in Visual Studio and expect them to run with it. But they are the best people to use tools like this, because they understand testing more, and in my experience testers tend to be more thorough than developers.

The other thing I like about the workflow is that it's "documentation first", so your documentation is inherently linked into the test platform.

Microsoft Test Manager's recorder tool is a good cut-down version of CUIT that makes it easier for testers to record manual tests without getting caught up in code.

That being said, it is a pain in the ass to get running. We have a complicated site with a lot of composite controls, so the DOM was a bit ugly, and this made it a painful exercise to get any recorder going. (We also tried Telerik Test Studio and Selenium with similar issues, but Telerik Test Studio was probably the best of them.)

The basic workflow in Test Manager starts with the Test Case work item; from a Test Case you outline your test steps.

[Screenshot: Creating a Test Case in Test Manager]

The important thing here is the use of parameters, you can see in the above example that I am using @URL, @User and @Pass.

In the test case you can input any number of iterations of these; when you create the test recording it will play back one iteration per set. This allows testers to go nuts with different sets and edge cases (e.g. 200 Ws for a name, all the special characters in the ASCII set for someone's last name, Asian character sets, and so on).

Once the documentation is done, the test recorder is used (with IE; it doesn't work in other browsers) to save a recorded case. This can then be played back while someone is watching, which is the first level of automation. I think this is an important first step, because in your development flow, if you are working on a new feature (requiring a new test case) you want a set of eyes looking at it first; you don't want to go straight to full automation.

It's important to note, though, that the test recorder cannot be used to "assert" values like CUIT can; validation of success is done by the tester looking at the screen and deciding.

When you have passed all your newly created test cases, your new features are working, and the Product Owner and stakeholders are all happy with the UI, this is the point at which you want to go to full automation with your UI tests.

When creating a CodedUI test one of the options is to “Use an existing Recording”

[Screenshot: Importing a test from an existing recording]

After selecting this you can search through your work items for the test case, and Visual Studio will pull out the recording and generate your CodedUI test starting from what your testers have created.

[Screenshot: Searching work items for the test recording]

That will go through and generate something like the below code.


[DataSource("Microsoft.VisualStudio.TestTools.DataSource.TestCase", "https://MyTFSServer/tfs/defaultcollection;MyProject", "13934", DataAccessMethod.Sequential)]
[TestMethod]
public void CodedUITestMethod1()
{
    // To generate code for this test, select "Generate Code for Coded UI Test" from the shortcut menu and select one of the menu items.
    this.UIMap.GoToURLParams.UIboxLoadBalanceWindowUrl = TestContext.DataRow["URL"].ToString();
    this.UIMap.GoToURL();
    this.UIMap.EnterUserandPassParams.UITxtEmailEditText = TestContext.DataRow["User"].ToString();
    this.UIMap.EnterUserandPassParams.UITxtPasswordEditPassword = Playback.EncryptText(TestContext.DataRow["Pass"].ToString());
    this.UIMap.EnterUserandPass();
    this.UIMap.ClickLoginbutton();
}

The great thing about this is that we can see the attribute for the datasource: it refers to the TFS server and even the work item ID, so this is where the CodedUI test is going to get its data.

So your testers can maintain the data in their test plans in TFS, and the test automation will look back into these TFS work items to get its data each time it runs.

Now that you are in the CUIT environment you can start to use the CUIT recorder to add asserts to your tests and validate data. In the above example we might assert that the username text displayed on the page after login is the same as the value we logged in with.

Now that we have a working Coded UI test, we can add it to a build.

I generally don't do Coded UI tests in our standard builds, and I don't know anyone that does. We use an n-tier environment, so in order to run a single app you would need to start up a couple of service layers and a database behind it. So I find it better to create a separate solution and builds for the Coded UI tests, and schedule them to run overnight after a full deploy of all layers to the test environment.

As in the example above, we make the URL in the initial "Open this URL in the Browser" step a parameter, so if we want to change the target to another environment, or point it at localhost, it's easy; I definitely recommend doing this.

So putting it all together, we have the below workflow:

[Screenshot: Test Manager workflow]

Now, the thing I like about this is that they all feed from the same datasource, which is a documented test plan from your testers, who can update a single datasource that's in a format they not only understand but have control over.

Take the example of implementing Asian language support. In the above example you could simply add a new parameter set with a Chinese-character username to the test case, which could be done by a tester; then, if it wasn't supported, the build would start failing.

Lastly, I mentioned above that recording is IE only; there are some plugins that allow playback in different browsers, which I will be checking out soon and writing some additional posts about.

This process also works well with your weekly sprint cycles, which I will go into in another post.

AppSettings in ASP.NET 5

There is a massive amount of change in ASP.NET 5 (for the better), and the way we use AppSettings is just one. I've been using ConfigInjector (https://github.com/uglybugger/ConfigInjector) for a while now, so I thought it might become redundant with these new changes, but in my opinion there is still work to be done before the framework support is as good.

ASP.NET 5 uses a JSON file to store the settings and a C# class to load them into, and you can have multiple files/classes as well. I'm just going to look at an example of one, and where they need to do some improvement.

To get it started you need to create a class that will map to the data in the JSON config file:


public class AppSettings
{
    public string AdminEmail { get; set; }
    public Uri AuthHost { get; set; }
}

The following is the corresponding JSON file:


{
  "AppSettings": {
    "AdminEmail": "admin@hosted-solutions.com.au",
    "AuthHost": "kljkljk"
  }
}

Then in Startup.cs add the following to load it in for use in DI:


public void ConfigureServices(IServiceCollection services)
{
    // Add application settings to the services container.
    services.Configure<AppSettings>(Configuration.GetSection("AppSettings"));
}

Then it can be used in a controller, as in the following example:


private readonly IOptions<AppSettings> _appSettings;

public AccountController(IOptions<AppSettings> appSettings)
{
    _appSettings = appSettings;
}

//
// POST: /Account/Login
[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public async Task<IActionResult> Login(LoginViewModel model, string returnUrl = null)
{
    // The setting is only parsed (and therefore validated) here, on first access
    var authHost = _appSettings.Value.AuthHost;
    // ...
}

Now, the issue with the above is that I have put a value into the config file that is not the correct data type.

The property AuthHost is a Uri, but the data in the JSON file, "kljkljk", is not a valid Uri.

If I was using ConfigInjector, it would blow up on application startup; but with this configuration in ASP.NET 5 it only errors when the value is first accessed within the controller, which is the same issue I have with the current use of AppSettings via ConfigurationManager and the web.config. So if someone makes a typo, you don't know until code that actually uses the value runs.

Stopping Private AppSettings Getting into Public Repos

Generally we check in only development AppSettings that refer to things like "localhost:3456", etc., but we have some instances where we can't run the service on our locals.

Why? One good example is the work we do against our on-premise TFS server. While I have considered an F5 experience that "runs up" a TFS server on a developer's local machine, I just don't think we (or most people) have the budget to spend on a workstation that will handle this. Then add the other services we do integration against as well (TeamCity, Octopus, etc.).

We use a separate project in TFS, Octopus, etc. (and sometimes separate servers) for our development work so we aren't interfering with live systems, but the credentials and URLs aren't something I want appearing in public GitHub repos.

The way we work around this is a very old setting:

<configuration>
  <appSettings file="../../../MyRepoName.config">
    <add key="TeamCityPassword" value="XXXXX" />
    <add key="TeamCityUserName" value="ABCDE" />
  </appSettings>
</configuration>

The file attribute above points to a config file outside the repo, whose settings override the checked-in ones; I have set up a config file like this for each public repo that I use.

The good thing about this as well is that if the file is not present (i.e. when it's deployed to production) it is simply ignored. You could also remove the attribute with a Web.Release.config transform to be clean about it; example below.


<appSettings xdt:Transform="RemoveAttributes(file)"/>

Amazon.S3.AmazonS3Client PutObject “Cannot close stream until all bytes are written”

The AWS .NET SDK is, in my opinion, horrible and poorly documented. Below is an issue I had that was very hard to troubleshoot, so I thought I would post about it.

The below code was giving me this exception:

System.Net.WebException: The request was aborted: The request was canceled. ---> System.IO.IOException: Cannot close stream until all bytes are written. at System.Net.ConnectStream.CloseInternal(Boolean internalCall, Boolean aborting) --- End of inner exception stack trace --- at System.Net.ConnectStream.CloseInternal(Boolean internalCall, Boolean aborting) at System.Net.ConnectStream.System.Net.ICloseEx.CloseEx(CloseExState closeState) at System.Net.ConnectStream.Dispose(Boolean disposing) at System.IO.Stream.Close() at Amazon.S3.AmazonS3Client.getRequestStreamCallback[T](IAsyncResult result)


using (var client = Amazon.AWSClientFactory.CreateAmazonS3Client(awsAccessKey, awsSecretKey))
{
    var s3Key = awsEticketFolder + eticketFileName;
    var objReq = new PutObjectRequest();
    objReq.WithInputStream(eticketFileStream);
    objReq.WithBucketName(awsBucketName);
    objReq.WithKey(s3Key);
    objReq.WithContentType(mimeType);
    objReq.CannedACL = S3CannedACL.PublicRead;
    objReq.StorageClass = S3StorageClass.Standard;

    client.PutObject(objReq);
}

I was only getting it on large files.

It appears that, under the hood, the library is timing out and closing the connection before the upload is complete, hence causing this error.

I have been told, however, that this might only occur on MemoryStream objects, and that FileStreams might actually get a timeout error; I haven't verified this though.

The solution?


objReq.WithTimeout(60*60*1000);

Adding a timeout of one hour to the request allows my big files to upload successfully.

Sync C# classes with TypeScript using TypeLite and the pain of building and packaging

UPDATE: A much better way to approach this is to use Swagger and generate your client code using AutoRest or Swagger Codegen. You won't be able to use this with WCF, but we moved our WCF services to Web API and it worked fine.

A common requirement we have is using the same objects between TypeScript and C#: writing the UI in TypeScript and the business logic and data access in C# in a Web API or WCF service.

What I've got for this is not a perfect solution by far. We work in a team and need to share the code around, so build time is important. I'm hoping to do some follow-up posts about how I solved these issues, when I solve them.

We use TypeLite for this (http://type.litesolutions.net/). You can install it via the NuGet package manager into your C# class library; once it's in there, all you need to do is add the TsClass attribute to the classes that you share.

Make sure they are properties and not fields, though; fields won't be output into the generated TypeScript code.


[TsClass]
public class MyObject
{
    public int ObjectId { get; set; }
    public string ObjectName { get; set; }
}

Now, the imperfect part of this is in the build. TypeLite uses a .tt (T4) script to execute. It is possible to run these via MSBuild (TeamCity/TeamBuild), but they are missing some libraries, so if you do actually get it to run (as I spent half a weekend doing) it'll blow up on you.

I'm still working on a way to solve this. For now there is one manual step I tell my guys to do before check-in, which is to right-click on the TypeLite script and hit "Run Custom Tool".

[Screenshot: TypeLite manual step before check-in]

This will regenerate the .ts file, which is checked in with the project.

Now for the build time pain.

I create a TypeScript sub-folder in the project and throw in a nuspec file; example code below.


<?xml version="1.0" encoding="utf-8" ?>
<package xmlns="http://schemas.microsoft.com/packaging/2011/08/nuspec.xsd">
  <metadata>
    <id>Types.TypeScript</id>
    <version>0.0.0</version>
    <title>Types.TypeScript deployment package</title>
    <authors> Pty Ltd</authors>
    <owners> Pty Ltd</owners>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Types.TypeScript deployment package</description>
    <summary>Types.TypeScript deployment package</summary>
    <releaseNotes />
    <language>en-US</language>
  </metadata>
  <files>
    <file src="*.ts" target="content\Types\$version$" />
  </files>
</package>

This then gives me a NuGet package that I push up to our NuGet server and that we can install into the UI project via the NuGet package manager. I also package the C# library, so I end up with two outputs from the one project that can be used from C# or TypeScript.

[Screenshot: Installing the TypeLite NuGet package]

Currently I put the version number in the path because I was having issues with Visual Studio version control causing a delete and add on the same file when installing the NuGet contents; this prevented a check-in on updating the NuGet package, as the file locks when it is flagged for deletion in source control.

I'm yet to try this with VS 2015 or with Git to see if that fixes it. Having the version number in the path means I have to update the path in the project each time, so it's by no means perfect, but it works and we can share it around the team.

Gulp in Visual Studio and building with TeamCity

Starting from VS 2015, Node is built in, along with the Task Runner Explorer. I still have occasional issues with the Task Runner Explorer, but it is definitely a must-have.

These days in Visual Studio I normally do a separate project for my static content, for a couple of reasons:

  1. In my F5 experience the static content is on a different site, which is closer to live, as I generally send the static content to a CDN
  2. The static content is packaged into a single separate package, so I don't have to pull files out into a separate package for the CDN (personally I use OctoPack at the moment, which works per VS project)

The gulp file is easy: just throw in a JS file with the correct name (gulpfile.js) and it'll get picked up.

[Screenshot: Gulp static content project]

I find it easier to work with the uncompressed files locally, so I throw in a variable to cater for accessing either the live (minified) or local (uncompressed) files.


Dim compress_string As String = ".min"
#If DEBUG Then
 compress_string = ""
#End If

 Dim lk4 As New HtmlControls.HtmlGenericControl("script")
 lk4.Attributes.Add("src", StaticLocation & "js/default" & compress_string & ".js")
 lk4.Attributes.Add("language", "javascript")
 lk4.Attributes.Add("type", "text/javascript")
 ThePage.Header.Controls.Add(lk4)

NOTE: Please don't troll me for the VB code, just use a converter if you don't understand or want to be a script kiddy 🙂

You can test it locally by running a build. You'll want to add some lines to your .tfignore for the "node_modules" folder so it doesn't pick up the npm files; I also add in a line for *.min.js so it doesn't check in my minified content when it builds locally.
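A minimal .tfignore sketch for this (the *.min.css line is my assumption, given the gulp file below also outputs minified CSS):

\node_modules
*.min.js
*.min.css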

Below is an example gulp file from one of my projects. I'm not using concat on this one because I don't have a lot of JS in this project, but if you've got more than two files it's generally good to concat them as well.


/// <binding AfterBuild='default, clean, scripts, minify' />

// include plug-ins
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var del = require('del');
var rename = require('gulp-rename');
var minifyCss = require('gulp-minify-css');

var config = {
 //Include all js files but exclude any min.js files
 src: ['js/**/*.js', '!js/**/*.min.js']
}

//delete the output file(s)
gulp.task('clean', function () {
 del(['js/*.min.js']);
 return del(['css/*.min.css']);
});

//Process javascript files
gulp.task('scripts', function () {
return gulp.src(config.src)
 .pipe(uglify())
 .pipe(rename({
 suffix: '.min'
 }))
 .pipe(gulp.dest('js/'));
});
//process css files
gulp.task('minify', function () {
 gulp.src('./css/*.css')
 .pipe(minifyCss())
 .pipe(rename({
 suffix: '.min'
 }))
 .pipe(gulp.dest('css/'))
 ;
});
//Set a default tasks
gulp.task('default', ['clean'], function () {
 gulp.start('minify', 'scripts');
 // do nothing 
});

This minifies all my css files to a new file named oldfilename.min.css and uglifies all js files to oldfilename.min.js

The Node plugin for TeamCity is available here; it will give you npm steps in your build templates.

https://github.com/jonnyzzz/TeamCity.Node

You need to add two steps to your project:

1. A Node.js NPM step to install the required npm modules

[Screenshot: Node.js NPM build step]

Simply running the install command will pick up the dependencies and get the files. NOTE: this adds a shitload of time to your builds (upwards of 2 minutes); I'm still looking at ways to solve this.
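The install step expects a package.json next to the gulp file listing the plugins the gulp file requires; a minimal sketch is below (the name and version ranges are illustrative).

{
  "name": "static-content",
  "version": "0.0.0",
  "private": true,
  "devDependencies": {
    "gulp": "^3.9.0",
    "gulp-concat": "*",
    "gulp-uglify": "*",
    "del": "*",
    "gulp-rename": "*",
    "gulp-minify-css": "*"
  }
}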

2. A Gulp step to run the gulp file

[Screenshot: Gulp build step]

Just set the working directory to the root of the project that has the gulp file.

I haven't tried doing gulp for multiple projects; so far I've been primarily using it for JS and CSS, and I generally create one static content project per solution that all the other projects share. So I can't comment on this yet.

After that you should have a working solution. As you can see below, the issue I have now is build times: 2 min 34 secs of downloading to run a job that takes 4 seconds.

[Screenshot: TeamCity build with gulp]

If you are using OctoPack, you will not get the output files in the NuGet packages, because they aren't included in the Visual Studio project file.

[Screenshot: Gulp output files on the file system]

I've worked around this by adding a nuspec file to my solution. The example below, from one of my projects, shows including just the minified CSS and JS content in the NuGet package for Octopus.


<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
  <metadata>
    <id>StaticContent</id>
    <title>StaticContent</title>
    <authors>Your name</authors>
    <owners>Your name</owners>
    <version>0.0.0.0</version>
    <licenseUrl>http://yourcompany.com</licenseUrl>
    <projectUrl>http://yourcompany.com</projectUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>StaticContent</description>
  </metadata>
  <files>
    <file src="css\*.min.css" target="css" />
    <file src="js\*.min.js" target="js" />
    <file src="img\**\*" target="img" />
    <file src="Deploy.ps1" target="" />
    <file src="Web.config" target="" />
  </files>
</package>

SendGrid Initial Setup in Azure

If you're looking at a basic email setup for an Azure-hosted app, SendGrid offers a free low-volume (25k emails a month) solution that is well rounded.

You can easily add a free SendGrid account by using the Azure marketplace to add it to your existing Azure account.

[Screenshot: SendGrid setup in the Azure marketplace]

Then once added you can click into it to get your username and password using the “Connection Info” link at the bottom.

[Screenshot: SendGrid credentials via Connection Info]

Once you've got these you can install the SendGrid libraries via the NuGet package manager:


PM> Install-Package SendGrid

Below is an example of a method that sends an email with SendGrid; it's pretty similar to standard SMTP mail in the .NET Framework libraries.


Public Sub SendAnEmail(mailId As Integer, FromAddress As String _
    , ToAddress As String, CCAddress As String _
    , BCCAddress As String, Subject As String, Body As String)

    ' Create the email object first, then add the properties.
    Dim myMessage = New SendGridMessage()

    ' Add the message properties.
    myMessage.From = New MailAddress(FromAddress)

    ' Add multiple addresses to the To field.
    Dim recipients As New List(Of [String])() From { _
        ToAddress
    }

    myMessage.AddTo(recipients)
    If CCAddress.Length <> 0 Then
        myMessage.AddCc(CCAddress)
    End If
    If BCCAddress.Length <> 0 Then
        myMessage.AddBcc(BCCAddress)
    End If

    myMessage.Subject = Subject
    myMessage.DisableClickTracking()

    ' Add the HTML body
    myMessage.Html = Body

    ' Create credentials, specifying your user name and password.
    Dim SGUser As String = ConfigurationManager.AppSettings("SGUser")
    Dim SGPass As String = ConfigurationManager.AppSettings("SGPass")
    Dim credentials = New NetworkCredential(SGUser, SGPass)

    ' Create a Web transport for sending email.
    Dim transportWeb = New SendGrid.Web(credentials)

    ' Send the email.
    transportWeb.DeliverAsync(myMessage)

End Sub

There are a few other steps I usually do too. You will note in the above method I've disabled click-through tracking; this is because I have had issues with it before, with the links not working in some odd mail clients.

Also, by default SendGrid will "process" your bounces, so you'll need to log in to their dashboard to find them. Most of my users don't want another dashboard to log in to, so I normally set up an auto-forward. This can be set up in their interface as per the below shot.

[Screenshot: SendGrid auto-forward setting]

If you need to access the bounce history, it's under the "Suppression" section.

I also recommend setting up the whitelabel:

[Screenshot: SendGrid whitelabel]

I'll do another post about setting this up, as it's not easy. SPF and DKIM are essential to have set up, but they can be a pain in the ass to get going; SendGrid does make the process easier though.

TeamCity, TFVC and Octopus Branching

I've recently implemented a TeamCity build on one of my old projects in TFVC, and was surprised that the branching support in TeamCity is focused around Git and Mercurial. So, as per usual, I coded my way out of a hole; here's how I did it.

This is just a basic example with a dev and a main branch, but I normally use feature branching for this project too, and version/release branches in some of my other TFVC projects as well.

In TFVC my branches are basically just sub-folders under the solution root, and I put my .tfignore at the root, as this is outside of the core code IMO.

[Screenshot: Basic branch layout]

For the VCS root, I use the root of the solution, so it includes all branches when it syncs. As TeamCity "syncs" files with source and doesn't re-download them all the time like TeamBuild, this isn't an issue when you have a lot of branches, unless you need to do a clean build.

In the build parameters, create a parameter called BranchName.

[Screenshot: BranchName build parameter]

Then create a PowerShell step as the first step in the build configuration. The example code below checks the files that were included in the build's changes and sets the value of the parameter. If you run a build without a check-in it's not going to have any changes attached to it, so in that case I default it to the dev branch.


# $args[0] is the path to the file containing this build's changed-file list
$content = [IO.File]::ReadAllText($args[0])
if ($content.Length -eq 0)
{
    Write-Host "No information, defaulting to Dev"
    $branch = "dev"
    Write-Host "##teamcity[setParameter name='BranchName' value='$branch']"
}
else
{
    # Paths are relative to the VCS root, so the first folder is the branch
    $branch = $content.Split("/")[0]
    Write-Host "##teamcity[setParameter name='BranchName' value='$branch']"
}

This code only looks at the first file in the check-in, because it assumes you won't be checking in to multiple branches at once.

[Screenshot: BranchName PowerShell step]

Once you've done this, you've got the branch name from the file system and can use it in your other steps.

In my case I use it in the "OctopusDeploy: Create Release" step.

[Screenshot: Octopus Create Release step using the branch name]
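Under the hood that step is just calling octo.exe, so the equivalent command is roughly the following (a sketch; the server, project name, and version format are illustrative):

octo.exe create-release --server https://myoctopusserver/ --apiKey API-XXXX `
    --project "platform-friends-hybrid" --version "1.0.%build.counter%-%BranchName%"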

The end result in Octopus looks like the below:

[Screenshot: Branch releases in Octopus]

In a subsequent post I'll go into how to set up steps in Octopus to prevent the dev branch from getting released to live.