Azure DNS Web UI, the beginning

I was a bit frustrated to hear that the Azure DNS service didn’t have a UI.

It is, however, understandable: the main reason for introducing a DNS service in a cloud environment is to support environments that are stood up programmatically (i.e. Infrastructure as Code).

But I’ve been meaning to dig my hands into a small AngularJS project to try out some stuff, so I decided to write a small web UI for it. That is why I haven’t done any posts in a week. Well, that and my cousin’s wedding 🙂

It is starting to take shape now on GitHub:

https://github.com/HostedSolutions/AzureDNSUI

It’s based on the authentication libraries published in the TODO example app from MS.

There is still a bit of work to be done; I’ll be doing some follow-up posts, and hopefully I’ll be able to get it into the Azure Marketplace.

Currently you need to configure an Azure Application in Azure AD, then put the AppID and tenant into the config to make it work. As I understand it, a marketplace app will configure this automatically in your environment when it’s installed.

This is configured under the Active Directory section in Azure, and there are a few gotchas I will cover in follow-up posts, but hopefully this step won’t be needed in the long run.

AzureApplicaitonConfiguration


<add key="ida:Tenant" value="hostedsolutionscomau.onmicrosoft.com" />
<add key="ida:Audience" value="ffd940d1-3eed-425b-9ae9-fd0e9955db29" />

The app itself is pretty basic: after a bounce out to Azure/MS Live to log in, you come back with a token that is passed with the API calls from JavaScript. The Azure management endpoints are all CORS enabled, so the UI can be hosted on a different domain from the APIs and runs fine.

You simply select the subscription, then the resource group, and you get a list of domains in that resource group.

AzureDNSUserInterfaceWeb.PNG

Then once a domain is selected you can edit the individual records.

I noticed that the schema of the JSON objects is hierarchical: it assumes that for each A record, for example, there will most likely be multiple IPs (i.e. DNS load balancing), so I have tried to design the UI to suit this.
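For illustration, an A record set coming back from the Azure DNS REST API looks something like the sketch below (the property names here are from memory of the schema, so treat them as indicative rather than exact):

```json
{
  "name": "www",
  "type": "Microsoft.Network/dnszones/A",
  "properties": {
    "TTL": 3600,
    "ARecords": [
      { "ipv4Address": "203.0.113.10" },
      { "ipv4Address": "203.0.113.11" }
    ]
  }
}
```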

The exception was the TXT records. If someone can explain to me why you would have multiple values for a single TXT record, I’d be glad to fix it up, but even looking at the RFCs left me saying WTF.

AzureDNSUserInterfaceWebEditorRecords

I think it will be another few weeks before this is completed and ready for the marketplace. But I think it’s a tool worth having, because while the “run up” of Infrastructure as Code deployments might be all command-line based, sometimes it’s handy to be able to use a UI to check your work and make minor “on the spot” fixes for troubleshooting, rather than having to pore through command-line output.

More to come in the coming weeks…

XML and JSON (De)Serialization Tools

I’ve been working with a lot of external systems lately that have their own objects that they serialize into either JSON or XML, and have found some good tools to rebuild those objects as C# classes that I can add to my apps.

For XML I’ve found this one (http://xmltocsharp.azurewebsites.net/)

Some example XML from one of the apps I’ve been working with (this was a sample on their website):


<reply>
  <contents>...</contents>
  <attachment filename="..." md5="...">
    <!-- base-64 encoded file contents -->
  </attachment>
</reply>

The tool nicely converts this to the C# below:


[XmlRoot(ElementName = "attachment")]
public class Attachment
{
    [XmlAttribute(AttributeName = "filename")]
    public string Filename { get; set; }

    [XmlAttribute(AttributeName = "md5")]
    public string Md5 { get; set; }
}

[XmlRoot(ElementName = "reply")]
public class Reply
{
    [XmlElement(ElementName = "contents")]
    public string Contents { get; set; }

    [XmlElement(ElementName = "attachment")]
    public Attachment Attachment { get; set; }
}

Then I can use the following code to deserialize their response stream from the HTTP response:


var reply = new Reply();
var resp = CallAMethodThatReturnsAStreamResponse();
using (var sr = new StreamReader(resp))
{
    var xs = new XmlSerializer(typeof(Reply));
    reply = (Reply)xs.Deserialize(sr);
}

It’s pretty similar with JSON too, using this site (http://json2csharp.com/).

However, some JSON property names are not valid as C# identifiers, and the tool flags these rather than fixing them, so I am going to use one in this example and show how to work around it.

{".Name":"Jon Smith"}

public class RootObject
{
    public string __invalid_name__.Name { get; set; }
}

JSON property names support dashes and periods, which are not supported in C# identifiers, so there is a “JsonProperty” attribute you can use like the below to fix the issue.

public class RootObject
{
    [JsonProperty(".Name")]
    public string Name { get; set; }
}

Then to deserialize I use the Newtonsoft JSON library; example below:

var del = message.GetBody<string>();
var myObject = JsonConvert.DeserializeObject<RootObject>(del); 

Obviously you would not leave it named as “RootObject” though 🙂

In the example above I was reading a JSON object from an Azure Service Bus queue.
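One nice property of the attribute is that it works in both directions, so you can round-trip the object to check it. A quick sketch using the same RootObject class:

```csharp
using System;
using Newtonsoft.Json;

public class RootObject
{
    [JsonProperty(".Name")]
    public string Name { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        // Serialize: the attribute maps the C# property back to the ".Name" key
        var json = JsonConvert.SerializeObject(new RootObject { Name = "Jon Smith" });
        Console.WriteLine(json); // {".Name":"Jon Smith"}

        // Deserialize: the same attribute maps the key onto the property again
        var obj = JsonConvert.DeserializeObject<RootObject>(json);
        Console.WriteLine(obj.Name); // Jon Smith
    }
}
```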

Lastly, I thought I would add in the class I use for going back to XML. In the case I was working on today, I was talking to a PHP-based system, Kayako, and it has some strange requirements.

I created a class for my XML objects to inherit from, to add a helper method.

public class XmlDataType
{
    public string ToXmlString()
    {
        var sb = new StringBuilder();
        string retval = null;

        using (XmlWriter writer = XmlWriter.Create(sb, new XmlWriterSettings { Encoding = Encoding.UTF8 })) // {OmitXmlDeclaration = true}
        {
            var ns = new XmlSerializerNamespaces();
            ns.Add("", ""); // suppress the xmlns attributes
            new XmlSerializer(this.GetType()).Serialize(writer, this, ns);
            retval = sb.ToString();
        }
        return retval.Replace("utf-16", "UTF-8");
    }
}

Kayako only accepts UTF-8, not UTF-16, and you need to remove the xmlns attributes, which I do with the namespace declaration above.

The way I’ve changed UTF-16 to UTF-8 is a bit dodgy, but it works for Kayako.

Now I can just call MyObject.ToXmlString() and get a string of the XML data that I can pass into an HTTP request.

Noting that some of the Kayako methods don’t like the xml declaration at the start, so if you need to remove it, I’ve left in a bit of commented-out code that you can use in the XmlWriterSettings initialization.
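To show the end result, here’s a sketch of a class inheriting from it. The Note class is made up for the example, and the base class is repeated from above so the sample stands alone:

```csharp
using System;
using System.Text;
using System.Xml;
using System.Xml.Serialization;

// Base class from the post, repeated here so this sample compiles on its own
public class XmlDataType
{
    public string ToXmlString()
    {
        var sb = new StringBuilder();
        using (var writer = XmlWriter.Create(sb, new XmlWriterSettings { Encoding = Encoding.UTF8 }))
        {
            var ns = new XmlSerializerNamespaces();
            ns.Add("", ""); // suppress the xmlns attributes
            new XmlSerializer(GetType()).Serialize(writer, this, ns);
        }
        // writing to a StringBuilder always stamps utf-16 in the declaration, so patch it
        return sb.ToString().Replace("utf-16", "UTF-8");
    }
}

[XmlRoot(ElementName = "note")]
public class Note : XmlDataType // hypothetical payload class
{
    [XmlElement(ElementName = "subject")]
    public string Subject { get; set; }
}

public static class Program
{
    public static void Main()
    {
        var xml = new Note { Subject = "hello" }.ToXmlString();
        // declaration now says UTF-8 and there are no xmlns attributes on <note>
        Console.WriteLine(xml);
    }
}
```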

Converting Between UTC and TimeZone in SQL CLR

I make a habit of storing dates in UTC time and converting when displaying, or when getting input from the user. It makes things a lot easier and more manageable. I have yet to try a large project with the new datetimeoffset data type, but I am worried about how it will handle daylight saving time, which is a constant source of frustration for Australian developers (disclaimer: I come from Queensland, where our cows don’t like it either).

Most of the time we handle the conversion in the application or presentation layer. I don’t like to encourage developers to handle it in the SQL layer, because that is the most common place to do comparisons, and it means more opportunity for developer error when one side of your comparison is in the wrong time zone.

However, there are a few cases where it is needed, so today I whipped up a project that is backwards compatible with SQL 2008 R2 (the earliest version running in the prod systems I work on).

GitHub Link here https://github.com/HostedSolutions/SQLDates

It basically re-exposes the .NET TimeZoneInfo object, wrapped up for the specific thing I need: going back and forth between UTC and whatever time zone my users are in.

Most of my projects are either multi-tenanted, or have users in different states in Australia that need to view data in their own timezone.

Going to UTC below (i.e. from user input in a text box):


[Microsoft.SqlServer.Server.SqlFunction]
public static SqlDateTime ConvertToUtc(SqlString dotNetTimeZone, SqlDateTime theDateTime)
{
    var localDate = DateTime.SpecifyKind((DateTime)theDateTime, DateTimeKind.Unspecified);

    // create TimeZoneInfo from the time zone id string
    var timeZoneInfo = TimeZoneInfo.FindSystemTimeZoneById(dotNetTimeZone.ToString());

    // convert the local date to a UTC date for that time zone
    return TimeZoneInfo.ConvertTimeToUtc(localDate, timeZoneInfo);
}

Coming from UTC below (i.e. from data in a database table):

[Microsoft.SqlServer.Server.SqlFunction]
public static SqlDateTime ConvertToLocalTimeZone(SqlString dotNetTimeZone, SqlDateTime theDateTime)
{
    // create TimeZoneInfo from the time zone id string
    var timeZoneInfo = TimeZoneInfo.FindSystemTimeZoneById(dotNetTimeZone.ToString());

    // convert the UTC date to local time for that time zone
    return TimeZoneInfo.ConvertTimeFromUtc((DateTime)theDateTime, timeZoneInfo);
}
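Since the functions are just thin wrappers over TimeZoneInfo, the behaviour (including daylight saving) can be sanity-checked in plain C#. A small sketch; the zone id “Australia/Sydney” is the IANA form that newer .NET resolves, while on Windows/.NET Framework you’d pass “AUS Eastern Standard Time” instead:

```csharp
using System;

public static class TzDemo
{
    public static void Main()
    {
        var tz = TimeZoneInfo.FindSystemTimeZoneById("Australia/Sydney");

        // January is summer in Sydney, so daylight saving applies: UTC+11
        var jan = TimeZoneInfo.ConvertTimeFromUtc(
            new DateTime(2016, 1, 15, 0, 0, 0, DateTimeKind.Utc), tz);
        Console.WriteLine(jan.Hour); // 11

        // June is winter: UTC+10
        var jun = TimeZoneInfo.ConvertTimeFromUtc(
            new DateTime(2016, 6, 15, 0, 0, 0, DateTimeKind.Utc), tz);
        Console.WriteLine(jun.Hour); // 10
    }
}
```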

The Data Tools SQL projects these days are great: you just add a reference to your C# library from your SQL project, and it’ll be compiled into the SQL script that is generated for deploy. Examples are all in the GitHub project.

Just noting that you can’t compile a C# class library targeting .NET 3.5 inside a SQL Data Tools project; you need to create an external C# class library project, because the 3.5 references in SQL projects have some issue with mscorlib.

SQLDataToolsProjectReference

After adding the reference you also need to set it to “unsafe”.

SetReferenceToCLRProjectUnsafe

Then when you publish, the class is embedded in your deployment script with output like the below:

CREATE ASSEMBLY [SQLDatesFunc]
AUTHORIZATION [dbo]
FROM 0x4D5A90000300000004000000FFFF0000B8000000000000004000...
WITH PERMISSION_SET = UNSAFE;

I’ve also included a post-deploy script in this example code with the other settings you need to get your database going with CLR functions:

sp_configure 'clr enabled', 1
GO
RECONFIGURE
GO
ALTER DATABASE [$(DatabaseName)] SET TRUSTWORTHY ON;
GO
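Once deployed, the CLR methods get surfaced as T-SQL functions. A sketch of what that binding and a call look like; the function name is real but the class path here is illustrative, so check the GitHub project for the actual names:

```sql
-- Bind the CLR method to a T-SQL function (the class path is illustrative)
CREATE FUNCTION [dbo].[ConvertToUtc] (@timeZone NVARCHAR(128), @localTime DATETIME)
RETURNS DATETIME
AS EXTERNAL NAME [SQLDatesFunc].[SqlDates.DateFunctions].[ConvertToUtc];
GO

-- e.g. convert a Brisbane local time to UTC for storage
SELECT [dbo].[ConvertToUtc](N'E. Australia Standard Time', '2016-06-15 10:00:00');
```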


Moving from TFVC to Git

I’ve had to move a few projects now, and it’s pretty straightforward if you don’t want to carry over work item history. If you do, it is possible to pull it over, but you might be better off using git-tfs as opposed to git-tf.

I’m using git-tf in this example (git-tf-2.0.3.20131219)

First, for convenience, I set the PATH:

PATH=%PATH%;D:\3rdParty\git-tf-2.0.3.20131219

Then create a new folder and use git-tf to pull down your TFVC change history into a Git repo like so:

git-tf clone https://MEME.visualstudio.com/DefaultCollection $/MyProject/solOARS D:\SOURCE\Repos\OARS --deep

The “--deep” flag above is what pulls down the whole history; without it you just get one commit with the latest version. You’ll get prompted for credentials; in this example I’m using VSO, so you will need a set of basic auth creds (https://www.visualstudio.com/en-us/integrate/get-started/auth/overview).

BasicAuthUsernamePasswordVSO

Then you’ll get some output like the following, so you know it’s working and done:

Cloning $/MyProject/solOARS into D:\SOURCE\Repos\OARS\solOARS: 19%,
Cloned 13 changesets. Cloned last changeset 1332 as 1c1cf84

Once this is done you can add the existing repo to VS.

AddExistingRepo

Then check that the history is there on your local copy as follows.

Click Branches

ViewBranchHistoryOnLocal

Right-click on master and click View History

ViewBranchHistoryOnLocal2

And you should see a commit for every check-in that is in your history.

ViewBranchHistoryOnLocal3

Once this is confirmed you can go back and then go to Sync.

You’ll get prompted here to enter the address of your repo; just throw it in and away you go.

PublishToRemoteRepo
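If you would rather do that last publish step from the command line instead of the VS UI, it’s just the usual Git remote setup (the repo URL below is a placeholder for your own remote):

```shell
git remote add origin https://MEME.visualstudio.com/DefaultCollection/_git/OARS
git push -u origin master
```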

Once this is done you will be able to see the history come through on your repo; in my case it’s a TFS Git repo.

ResultInTFS

Replacing Windows Services with Web Apps using App Init in IIS 8+ (and 7.5 sort of)

We’ve always moved towards an SOA, so most of our business logic is in the web service layer: traditionally WCF, but moving towards REST these days. So a lot of our Windows services end up as simply a timer that polls a web method, or a timer that polls one web method to find out which other methods to poll.

Disclaimer: if anyone is saying right now “why are you polling, and not using a service bus?”, it’s mostly to do with 3rd-party integration or legacy systems that we need to poll.

In the past we have never contemplated putting something like this into a web app due to IIS limitations around “keep alive” of the app domain.

With Application Initialization as of IIS 8 (and supported in 7.5 with a plugin) we have started to do this. So instead of having a dependency on a Windows service, we just drop timers into the start-up of the web app. This reduces the number of projects we have to deploy/maintain/monitor.

There are a few dependencies on this, though, beyond just the tag in the web config, that need to be set to make an App Domain “immortal”, as I’ve been putting it.

First the tag in your web config


<applicationInitialization doAppInitAfterRestart="true">
  <add initializationPage="/myService.svc" />
</applicationInitialization>

I usually just point it at the base svc file, but anything that executes .NET code is fine. Then you need to make sure you have doAppInitAfterRestart set to “true”, in case someone kills the worker process or something odd like that.

Note though that relative paths (using ~) are not supported, so if you have something in a sub-application you are going to have to code this path in.

Next is your App Pool Settings, you need to set the following values:

  • idleTimeout to 0
  • recyclingPeriodicrestart to 0
  • startMode to AlwaysRunning

Then you also need to set the following values in your Web Site settings as well

  • preloadEnabled to “true”
  • serviceAutoStartEnabled to “true”

I use step templates in Octopus Deploy to check for these values and update them if not set.

Below is an example of the PowerShell I use in the app pool update script:


Import-Module WebAdministration
$enable32BitAppOnWin64Val = [System.Convert]::ToBoolean($enable32BitAppOnWin64)
$idleTimeoutVal = [TimeSpan]::FromMinutes($idleTimeout)
$recyclingPeriodicrestartVal = [TimeSpan]::FromMinutes($recyclingPeriodicrestart)

Write-Host "Checking $AppPoolName processModel.idleTimeout is $idleTimeoutVal"
if ((Get-ItemProperty "IIS:\AppPools\$AppPoolName").processModel.idleTimeout -ne $idleTimeoutVal)
{
    $delPool = Get-Item "IIS:\AppPools\$AppPoolName"
    $delPool.processModel.idleTimeout = $idleTimeoutVal
    $delPool | Set-Item
    Write-Host "Setting $AppPoolName processModel.idleTimeout to $idleTimeoutVal"
}
else
{
    Write-Host "$AppPoolName processModel.idleTimeout is $idleTimeoutVal already"
}

Write-Host "Checking $AppPoolName recycling.periodicrestart.time is $recyclingPeriodicrestartVal"
if ((Get-ItemProperty "IIS:\AppPools\$AppPoolName").recycling.periodicrestart.time -ne $recyclingPeriodicrestartVal)
{
    $delPool = Get-Item "IIS:\AppPools\$AppPoolName"
    $delPool.recycling.periodicrestart.time = $recyclingPeriodicrestartVal
    $delPool | Set-Item
    Write-Host "Setting $AppPoolName recycling.periodicrestart.time to $recyclingPeriodicrestartVal"
}
else
{
    Write-Host "$AppPoolName recycling.periodicrestart.time is $recyclingPeriodicrestartVal already"
}

Write-Host "Checking $AppPoolName enable32BitAppOnWin64 is $enable32BitAppOnWin64Val"
if ((Get-ItemProperty "IIS:\AppPools\$AppPoolName").enable32BitAppOnWin64 -ne $enable32BitAppOnWin64Val)
{
    $delPool = Get-Item "IIS:\AppPools\$AppPoolName"
    $delPool.enable32BitAppOnWin64 = $enable32BitAppOnWin64Val
    $delPool | Set-Item
    Write-Host "Setting $AppPoolName enable32BitAppOnWin64 to $enable32BitAppOnWin64Val"
}
else
{
    Write-Host "$AppPoolName enable32BitAppOnWin64 is $enable32BitAppOnWin64Val already"
}

Write-Host "Checking $AppPoolName startMode is $startMode"
if ((Get-ItemProperty "IIS:\AppPools\$AppPoolName").startMode -ne $startMode)
{
    $delPool = Get-Item "IIS:\AppPools\$AppPoolName"
    $delPool.startMode = $startMode
    $delPool | Set-Item
    Write-Host "Setting $AppPoolName startMode to $startMode"
}
else
{
    Write-Host "$AppPoolName startMode is $startMode already"
}

And I use a similar script for the web site update; pretty basic stuff.

What about the App Domain? The above will secure the app pool but not the App Domain. A common cause of an App Domain restart is a deploy via Octopus: when it changes the IIS path, it restarts the App Domain of the site. To handle this you need to add a snippet like the below to the Application_End event in Global.asax.


protected void Application_End()
{
    // Hit the site again as it shuts down so IIS spins a new App Domain straight back up
    using (var client = new WebClient())
    {
        var hostName = Dns.GetHostName();
        var url = "http://" + hostName + "/service.svc";
        client.DownloadString(url);
    }
}

Once these are all set up, you couldn’t kill that App Domain with a crowbar. Even End Tasking it from Task Manager, it will restart.

So you are safe to run timers and any other services you like in it, just like you would in a Windows service.