Teaching an old dog new tricks

For the last 10+ years I’ve primarily been a C# / .NET developer. The ecosystem is great, with thousands of packages available. The one thing that’s been irritating me of late is the challenge of running .NET in containers such as Docker.

As luck would have it, I recently got an opportunity to work on a new side project, FairDealFx, which I (perhaps stupidly) decided to do using the MEAN stack (MongoDb, ExpressJs, Angular and Node). I’ve done some JavaScript and TypeScript as part of web projects in the past, but never an entire application stack.

This undertaking began as an offer to help a friend out with a project he had been working on in his spare time. His prototype was working well but I didn’t think it would scale too well, so I offered to help refactor it a bit. This is when I got entirely carried away and picked a whole new tech stack.

Before: The original stack

MySQL, PHP, jQuery, Python

The original codebase worked fine, but I could see a couple of key areas where scaling or supportability was going to become challenging.

After: The ambitious new stack

MongoDb, Redis, Auth0, NodeJs, ExpressJs, Angular 2, Docker, Rancher, Drone, Azure

Key lessons

  1. Embrace the container world:
    1. Smaller services and a decoupled architecture are great - there’s much less you need to keep in your head.
    2. The ability to add new data providers or other integration services at a later date without redeploying the whole solution has been great.
    3. The isolation of containers means you can test a lot more on a local development environment.
  2. Be aware of the pricing model of PaaS offerings. We have a BizSpark subscription, so I decided to use Azure DocumentDb with MongoDb support enabled. The problem with this approach is that you pay per collection! This meant that with 4-5 collections, our monthly credit rapidly ran out. I ended up using mLab’s hosted MongoDb for the time being to avoid having to set up a MongoDb cluster.
  3. Adopting new frameworks (Angular 2) is great, but be ready for instability and a constantly moving foundation. In a 2 month period the Angular 2 project changed the way the router worked and switched from SystemJs to Webpack for builds. The changes themselves are probably only 20% of the pain, you spend 80% of your time finding blog posts with answers to obsolete problems.
  4. When working on a startup project, don’t build stuff you don’t need to. There are so many services out there with free or low cost tiers:
    1. Auth0 for authentication, rather than dealing with the hassle of running a password database and all the associated work for resets and verifications - outsource that problem!
    2. mLab for MongoDb hosting. I’m no MongoDb expert, so I would rather not run my own instance just yet. (especially on Docker where I’m unsure of the failure modes)

What’s next?

I’m going to do some follow-up posts on some of the details here, in particular Drone and Rancher.

Live Streaming on a Budget

Capabilities

  • Live Stream Video
  • Include Presentation Audio / Video
  • Include Live Audio

Tools and Products Used

  • Telestream WireCast
  • Wowza Media Server (Local Instance)

Challenges

  • Audio not streaming from Desktop Presenter
  • No hardware control surface

Things Learnt

  • Mixer output is Line Level, Camera requires Mic level
  • Use a DI Box to lower the levels to Mic level

TattsHack

Our first “Hackathon”, TattsHack, is done and dusted, and what an experience. From the original idea through to the event it was only a few short weeks, and of that we were only actively working on it for 15 business days.

In that time we went from a blank canvas to forming teams of volunteers to look after communications, rules, facilities and most importantly food! We designed logos, stickers, posters and had t-shirts printed. Given that it was the first event of this type it was decided to keep it to an MVP (Minimum Viable Product)… I think we failed there - it wasn’t minimal by any stretch of the imagination. Even though all our volunteers and participants had their normal workloads to take care of, they threw themselves at the task with an enthusiasm that is rare to see. It was a clear sign that the tide of cultural change really is happening at Tatts.

My expectations of the event were exceeded many times over, although I was quite nervous right up until the point when teams started pouring into the WOW room for the launch. The atmosphere at that point was amazing; there was a lot of excitement and positive energy in the room and the teams were eager to get started.

One of my personal highlights was being able to get a Live Stream working so that people outside Albion could feel like they were part of the event. This was a technical challenge because we didn’t want to use public streaming services due to the risk of external parties gaining access to our video feed. Challenge accepted, and with a couple of days work I had a solution for a live stream - two of the team then enhanced that to build a video-only live stream between the main room and training rooms. Now teams could see into the other room, and lots of fun was had communicating between the rooms with gestures, cards and dance moves.

Throughout the event the ten teams worked in different ways, some teams were very quietly discussing their approach and then working individually on their piece of the puzzle, while other teams were using all their agility training and doing hourly stand-ups and adjusting their approach constantly.

During the evening on Thursday a couple of advisors came to visit from Finance, Legal and Procurement and they walked around giving the teams advice and encouragement for their pitches. Our agility coaches were also on hand throughout the event to offer guidance to the teams and I saw team members giving a hand to competing teams with specific things like video editing and tech support.

When it came time for pitches, the main room and car park area were full of people who came to watch the pitches and enjoy a beer and a bite to eat. After only 24hrs of work, our ten teams weren’t going to make our judges’ decision an easy one. In the end there could only be one winner though, and that prestigious title went to the lotteries terminal replacement team. Their pitch was a solution for replacing our fleet of handheld terminals with commodity tablet hardware. I hope that the business will be able to see the value in this idea and gain value from the prototype produced during the event.

Speaking with people around the office over the following week there was a really good buzz about TattsHack and people are looking forward to the next innovation event. Hopefully next time around we’ll have a bigger venue available so that more people can participate, and hopefully we can find some more volunteers so that we can all participate too!

Testing Windows Infrastructure with ServerSpec

Environment Validator

I’m using the serverspec framework to perform environment validation of development and test environments. The initial version is set up in a very basic way where you pass a single host IP or Name and it connects over WinRM.

This is specifically designed to be an example of how one might test long-lived environments without taking the leap into configuration management tools such as Chef. Although that would be the logical and ideal situation, there are times where it’s not yet possible to take that step.

The other area I wanted to demonstrate was the use of these tools to test in a Windows environment rather than the more commonly demonstrated Linux-based environments.

I would like to enhance this to be more role based in the future so that rather than specifying a number of hosts and specs, you can define roles and specs and then assign roles to a list of hosts.

Ideally these roles would match Chef or Octopus Deploy roles too. In the future I would like to build some integrations between the various tools.

I’ve shared the code for this example on GitHub here: https://github.com/ShawInnes/environment-validator

Sample Specification

The serverspec format will be very familiar to anyone who has done BDD before. It can be easily read by less technical people, which is a huge selling point for me.

require 'spec_helper'

describe 'SQL Server 2014' do
  describe service('SQL Server (MSSQLSERVER)') do
    it { should be_installed }
    it { should be_enabled }
    it { should be_running }
    it { should have_start_mode('Automatic') }
  end

  describe package('Microsoft SQL Server 2014 (64-bit)') do
    it { should be_installed }
  end

  describe port(1433) do
    it { should be_listening.with('tcp') }
  end
end

Sample Command

To execute the test you just run rake from the command line. This is an
example of what I would run on my Mac from the terminal, but it could equally
be kicked off by running it from a command prompt or PowerShell prompt in
Windows.

TARGET_HOST="10.0.1.3" TARGET_USER="packer" TARGET_PASS="topsecret123" rake

Sample Output

DevOps Tools
  Package "Chef Development Kit v0.6.2"
    should be installed

Developer Tools
  Package "Microsoft Visual Studio Enterprise 2015"
    should be installed
  Package "JetBrains ReSharper Ultimate in Visual Studio 2015"
    should be installed
  Package "JetBrains dotCover 3.1.2"
    should be installed
  Package "JetBrains dotMemory 4.3.2"
    should be installed
  Package "JetBrains dotPeek 1.4.2"
    should be installed
  Package "JetBrains dotTrace 6.1.2"
    should be installed
  Package "LINQPad 4"
    should be installed

Internet Information Server (IIS)
  IIS Application Pool "api.serenityone.com"
    should exist
    should have dotnet version "4.0"
  IIS Website "api.serenityone.com"
    should exist
    should be enabled
    should be running
    should be in app pool "api.serenityone.com"
  Port "80"
    should be listening
  File "c://inetpub//wwwroot"
    should be directory

NodeJs
  File "c:/program files/nodejs/node.exe"
    should be file
    should be version "0.12.4"
    md5sum
      should eq "e05e5562864f2c914259ff562fa51be4"

Developer Tools
  Package "Octopus Deploy Server"
    should be installed
  Package "Octopus Deploy Tentacle"
    should be installed

RabbitMQ Server
  Service "RabbitMQ"
    should be installed
    should be enabled
    should be running
    should have start mode "Automatic"
  Package "RabbitMQ"
    should be installed
  Port "5672"
    should be listening
  Port "15672"
    should be listening

Seq Server
  Package "Seq"
    should be installed
  Service "Seq"
    should be installed
    should be enabled
    should be running
    should have start mode "Automatic"
  Port "5341"
    should be listening

SQL Server 2014
  Service "SQL Server (MSSQLSERVER)"
    should be installed
    should be enabled
    should be running
    should have start mode "Automatic"
  Package "Microsoft SQL Server 2014 (64-bit)"
    should be installed
  Port "1433"
    should be listening

Local User Configuration
  User "shaw.innes"
    should exist
    should belong to group "Administrators"

Finished in 26.07 seconds (files took 1.23 seconds to load)
42 examples, 0 failures

Code and contributions

The source for this sample is available on GitHub here: https://github.com/ShawInnes/environment-validator

Starlight Children's Foundation - Great Adventure Challenge

The 2015 Great Adventure Challenge

I am taking on the Great Adventure Challenge and making a difference in the lives of seriously ill children with the Starlight Children’s Foundation. Through this link you can easily support my efforts by making a secure donation. I would also really appreciate it if you could share my page above or comment below so more people know about it.

Click ‘Donate Now‘ to make a secure online donation.

All donations over $2 are tax deductible and you will be issued with a DGR receipt via email as soon as you make a donation.
Thanks so much for your support!

About Starlight

Every minute of every day a child is admitted to hospital in Australia. Unfortunately, thousands of these children are then faced with a diagnosis that can change their life, and the lives of their family, forever. Starlight’s mission is to transform the experience of these children by replacing pain, fear and boredom with fun, joy and laughter.

Starlight programs are integral to the total care of seriously ill children and young people - while health professionals focus on treating the illness, Starlight is there to focus on the child - lifting their spirits, giving them an opportunity to laugh and play, building resilience and improving their wellbeing.

Starlight is the only children’s charity with a permanent presence in the seven major paediatric hospitals around Australia. We grant once in a lifetime Starlight Wishes that provide the sickest children with something to look forward to and create memories to last forever. In regional Australia our programs have improved attendance at clinics and enhanced the effectiveness of health promotion programs in remote and indigenous communities. For older children, we have Livewire – a safe online & in-hospital community where adolescents can meet other kids their age who are dealing with similar experiences.

Visual Studio Code Behind a Proxy

If you’re having trouble running Visual Studio Code behind a corporate proxy, the following steps might help. Basically, on a Mac you just need to set two environment variables.

This is what will happen if you’ve got proxy problems:

osx:vscode shaw.innes$ dnu restore
Restoring packages for /Users/shaw.innes/Desktop/vscode/AkkaAkka/project.json
GET https://www.nuget.org/api/v2/FindPackagesById()?Id='System.Console'.
Warning: FindPackagesById: System.Console
Error: ConnectFailure (Connection timed out)
GET https://www.nuget.org/api/v2/FindPackagesById()?Id='System.Console'.
osx:vscode shaw.innes$ dnu

Before

After running the following lines in a terminal window, you’ll get much better results.

osx:vscode shaw.innes$ export http_proxy=proxy.mydomain.com:3128
osx:vscode shaw.innes$ export https_proxy=proxy.mydomain.com:3128
osx:vscode shaw.innes$ dnu restore

After

Restoring packages for /Users/shaw.innes/Desktop/vscode/AkkaAkka/project.json
GET https://www.nuget.org/api/v2/FindPackagesById()?Id='System.Console'.
OK https://www.nuget.org/api/v2/FindPackagesById()?Id='System.Console' 1469ms
GET https://www.nuget.org/api/v2/package/System.Console/4.0.0-beta-22816.
OK https://www.nuget.org/api/v2/package/System.Console/4.0.0-beta-22816 1960ms
GET https://www.nuget.org/api/v2/FindPackagesById()?Id='System.IO'.
GET https://www.nuget.org/api/v2/FindPackagesById()?Id='System.Runtime'.
OK https://www.nuget.org/api/v2/FindPackagesById()?Id='System.IO' 268ms
GET https://www.nuget.org/api/v2/package/System.IO/4.0.10-beta-22816.
OK https://www.nuget.org/api/v2/FindPackagesById()?Id='System.Runtime' 1578ms
GET https://www.nuget.org/api/v2/package/System.Runtime/4.0.20-beta-22816.

Building Supportable Systems (Performance & Diagnostics)

Isolating performance issues or tracing web traffic problems can be a challenge. Modern browsers have excellent developer tools and 3rd party tools like Fiddler are also great for this job, but they only give you so much information. Sometimes you want to get more in-depth information around the request processing times, or the configuration variables on the server.

Whilst the browser developer tools (“F12 tools”) are getting better with each browser release, there is still a class of problem which can’t be solved by these client-side tools. Where a change to a web page might only add a few hundred milliseconds to the page load time, but under heavy load has the potential to totally incapacitate your entire website, a tool like Glimpse is an excellent way to surface the performance information for the specific page or area where you are currently working.

Glimpse

Glimpse (http://getglimpse.com/) is an indispensable tool and it fits right into the problem space explained above. It (optionally) displays itself as a browser popup area at the base of every page and shows various statistics about page load times, database accesses, queries and other useful web request pipeline information. There are a number of additional plugins to enhance the detail in specific areas such as specialised databases, content management systems and routing libraries.

As with all the other tools I’ve been describing, Glimpse is installed by way of NuGet packages (the base package plus four optional extensions).

Install-Package Glimpse
Install-Package Glimpse.AspNet
Install-Package Glimpse.Mvc5
Install-Package Glimpse.EF6
Install-Package Glimpse-Knockout

To enable Glimpse, either follow the instructions on the web page that was displayed when you installed the packages, or go to ~/Glimpse.axd and follow the directions presented there.

Glimpse-ing into the mind of the machine…

The initial view of Glimpse in action is a toolbar-style information block, similar to the one below, displayed at the base of your page. As you navigate around your ASP.NET website this info-bar will automatically update to display information about the HTTP request, server activities, MVC controller load times and any AJAX calls and timings.

Default View

Once you’ve found the page you’re interested in, or are working on, you can get more detailed information on the three main areas (HTTP, HOST, AJAX) by hovering your mouse over them. When you do this, a more detailed information pane will pop up and display a breakdown of the page load timings.

Detailed View

Glimpse will break this down into quite detailed sections of information, and if your MVC Controller makes database calls you’ll even get a breakdown of how many calls and how long they took.

In the example above you can see that there were 3 database queries which each took around 50-60ms and that the view took 126ms to render. This isn’t a particularly good example as there isn’t a glaring problem to look at. Often you’ll see situations where there’s a huge time spent in one area (Render, or Query) and you can start digging into that area to try and optimise the page.

Super Detailed View

The final area of Glimpse is the detailed view. This is accessed by clicking on the “g” at the right of the original Glimpse info-bar. As you can see from the example below there is a huge amount of information available at your fingertips ranging from the web.config configuration settings, through to server-side model binding information, loaded modules, MVC routes and even client-side SPA binding information.

Super Detailed View

In Summary

Glimpse is a very easy tool to get started with, and it offers a valuable insight into the inner workings of your ASP.NET website. It’s one of those tools that way more people should know about and be using to help diagnose those tricky performance-related issues when developing MVC websites. I’d strongly urge anyone who’s working on .NET websites to check it out.

Building Supportable Systems (Monitoring)

Once an application is running in a production environment it can become more complicated to access the systems the application is running on. For example, the web servers might be running in a DMZ where the developers don’t have easy access to view logs (though this shouldn’t be the case if you read the previous article about log management). Another example is when you have a website or application which has down-stream system dependencies such as APIs or back-end database systems, and you want to know if those systems are healthy in order to determine the health of your own application.

Without the ability to quickly gauge the health of your application and its dependencies, you can waste a lot of time fault finding outages in your application which might not actually be a result of your code.

More Metrics.Net

Once again, the Metrics.Net NuGet package comes to the rescue here. One of the features it offers is a “health check” implementation which can be monitored and the results aggregated. If you don’t already have Metrics.Net in your project, you can add it in the usual way:

Install-Package Metrics.Net

Once installed it’s simply a matter of using the fluent configuration API to specify an endpoint and counters configuration. In the following example I’m registering two HealthCheck classes, one to check the availability of a database (for example by creating and checking a persistent connection) and the other to ensure there is sufficient disk space on the server. These are pretty basic examples; perhaps your application or website depends on a back-end API or SOAP service, in which case you could perform a regular status check on those.

Metric.Config
    .WithHttpEndpoint("http://localhost:1234/metrics/")
    .WithAllCounters();
HealthChecks.RegisterHealthCheck(new DatabaseHealthCheck());
HealthChecks.RegisterHealthCheck(new DiskHealthCheck());

Implementing HealthChecks

Implementation of the actual HealthChecks is very straightforward and nicely encapsulated by deriving from a HealthCheck base class. For example:

public class DatabaseHealthCheck : HealthCheck
{
    private readonly IDatabase database;
    public DatabaseHealthCheck(IDatabase database)
        : base("DatabaseCheck")
    {
        this.database = database;
        HealthChecks.RegisterHealthCheck(this);
    }
    protected override HealthCheckResult Check()
    {
        // exceptions will be caught and
        // the result will be unhealthy
        this.database.Ping();
        return HealthCheckResult.Healthy();
    }
}
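
The DiskHealthCheck registered earlier can follow exactly the same pattern. Below is a minimal sketch of what it might look like; the drive letter and the 1GB threshold are arbitrary assumptions for illustration.

public class DiskHealthCheck : HealthCheck
{
    public DiskHealthCheck()
        : base("DiskCheck")
    {
    }

    protected override HealthCheckResult Check()
    {
        // assumption: the application runs from the C: drive and anything
        // under 1GB of free space should be flagged as unhealthy
        var drive = new System.IO.DriveInfo("C");
        var freeGigabytes = drive.AvailableFreeSpace / (1024.0 * 1024.0 * 1024.0);

        if (freeGigabytes < 1.0)
        {
            return HealthCheckResult.Unhealthy(
                string.Format("Only {0:F1}GB free on {1}", freeGigabytes, drive.Name));
        }

        return HealthCheckResult.Healthy();
    }
}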

Monitoring HealthChecks

Health Checks

Viewing the state of your application health checks is very simple as well. You can either access them through the Metrics.Net web dashboard by going to ~/metrics (default) in your browser or you can call another endpoint to receive an HTML or JSON encoded version of the health check data.

This image shows an example of what the web console displays for unhealthy (red, at the top) and healthy (green, below) health checks. As you can see it’s really obvious which ones are failing at the time and you can quickly take action to rectify the problem(s).

Applied Health Checks

Another use for health checks is for application monitoring by network infrastructure such as load balancers. Most load balancers will periodically monitor an end-point on your website or application to determine whether the application is in a state capable of accepting traffic. If not, the load balancer will remove that instance of the application from its pool of available servers. This can be particularly useful if you have a farm of web servers and you want to distribute the load evenly across them or provide fault-tolerance.

Using the above health check process you can either create a specific implementation to respond to your load balancer query, or you can simply configure your load balancer to call the standard end-point and react based on the aggregate result of your health checks.
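
As a rough sketch of the first option - a dedicated endpoint for the load balancer to poll - an MVC action could aggregate the registered health checks and translate the result into an HTTP status code. This assumes Metrics.Net exposes the aggregate result via HealthChecks.GetStatus(); check the version you’re using for the exact call, and the controller and route names are just placeholders.

public class StatusController : Controller
{
    // GET /status - polled by the load balancer
    public ActionResult Index()
    {
        // assumption: GetStatus() aggregates all registered health checks
        var status = HealthChecks.GetStatus();

        if (status.IsHealthy)
        {
            return new HttpStatusCodeResult(200, "Healthy");
        }

        // a 503 tells most load balancers to drop this node from the pool
        return new HttpStatusCodeResult(503, "Unhealthy");
    }
}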

Building Supportable Systems (Instrumentation & Metrics)

Gathering useful instrumentation about running applications, such as throughput and performance, can be tricky but is invaluable for understanding bottlenecks or latency problems. There are a number of commercial products that cover this area such as AppDynamics, AppInsights, New Relic, Stackify etc. I’ve had some experience with these tools (especially AppDynamics) and I would say that if you’re going to be supporting an application where there would be financial impacts if it performs badly or fails in production, then spend the money on one of these tools.

Having said that, I don’t think the use of an off-the-shelf product is an excuse to skip adding your own metrics to an application, especially when there are a variety of open-source options. One of the greatest benefits to implementing your own metrics within your application is that you can instrument only the areas you care about. Another benefit is that you don’t need to depend on 3rd party infrastructure (such as data collection agents, or cloud services) which might be difficult to configure or maintain depending on your deployment environment.

Metrics.Net

The Metrics.Net project https://github.com/etishor/Metrics.NET makes it pretty simple to gather these metrics and is a .NET port of the Java Metrics library. Metrics.Net also provides an easy interface for creating health monitoring endpoints, which I’ll cover in a future post.

To get started, just install the Metrics.Net NuGet package in the usual way. There is a base install which provides the core functionality, and there are additional extensions to this which provide tight integration with OWIN and NancyFx.

Install-Package Metrics.Net

Once you’ve installed the base packages you can configure it in your app startup. I’ll demonstrate the functionality through a console app (it works in pretty much any .NET project type). In my main method I will add the following block of code. This will configure Metrics.Net and also expose an HTTP endpoint at “/metrics” where the metrics can be viewed through a web browser. The “WithAllCounters” call will also enable the capture of metrics around .NET resource usage.

Metric.Config
  .WithHttpEndpoint("http://localhost:1234/metrics/")
  .WithAllCounters();

The next thing to do is to add some readonly properties to any classes you wish to instrument. For example, if you have a transaction processing class or an MVC Controller, you can add metrics to count the number of calls being made, or the number of active connections to a SignalR Hub.

private readonly Timer timer = Metric.Timer("Requests", Unit.Requests);
private readonly Counter counter =  Metric.Counter("ConcurrentRequests", Unit.Requests);

Now that everything’s set up, it’s just a matter of calling the appropriate method on the fields. In this case I’m incrementing and decrementing a counter so I can get a count of “in progress” calls, as well as using the timer field to gather metrics on how long a particular task is taking to call. Metrics.Net will then aggregate, slice and dice the data into useful statistics.

public void Process(string inputString)
{
    counter.Increment();
    using (timer.NewContext())
    {
        // do something to time
        System.Threading.Thread.Sleep(1230);
    }
    counter.Decrement();
}

Visualisation

Metrics.Net makes it relatively simple to visualise the data you’re capturing by providing an HTML5 dashboard. Though I wouldn’t suggest using this as your only means of gathering metrics (as it’s stored in volatile memory), it’s a great way to get started. For more permanent storage of metrics data I would suggest looking into the (currently alpha) support for pushing metrics data into another persistent storage system such as InfluxDb, Graphite or ElasticSearch.

Charting Dashboard

The composition of the dashboard can be configured to some extent through the menus across the top. It’s possible to easily turn various metrics on and off, and to modify the polling interval. From what I can tell it’s just polling the internal state of the gathered metrics, so while it’s not ideal to poll every 200ms, it’s not re-calculating everything - just grabbing the stats.

Metrics.Net also includes the ability to tag and categorise metrics for reporting purposes. At the time of writing, the dashboard doesn’t support extensive filtering or grouping based on these tags, but I suspect this will change in the not-too-distant future.

Integration

While it’s very useful to gather metrics for a single instance of an application, the power of Metrics.Net only really becomes apparent once you start to aggregate the data collected. There are a few options here and, as mentioned above, there’s experimental support for live exporting of the instrumented data into a number of databases specifically designed for this type of thing (InfluxDb, Graphite, ElasticSearch).

However, there is another feature of Metrics.Net which is extremely useful for either aggregating the data or integrating it into your own custom web dashboards. By appending “/json” to the end of your metrics dashboard URL you can receive a JSON feed of the raw and aggregated data, as can be seen below.

Metrics JSON Feed
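
If you want to pull that feed into your own dashboard or an aggregation job, it’s just an HTTP GET. The sketch below assumes the endpoint configured earlier (http://localhost:1234/metrics/) with “/json” appended; the exact shape of the returned JSON will vary between Metrics.Net versions.

using System;
using System.Net.Http;

class MetricsScraper
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // assumption: the application was configured with
            // WithHttpEndpoint("http://localhost:1234/metrics/")
            var json = client.GetStringAsync("http://localhost:1234/metrics/json").Result;

            // from here, push the snapshot into your own store or dashboard
            Console.WriteLine(json);
        }
    }
}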

Summary

The use of Metrics.Net (or other similar projects) is a great way to quickly increase the supportability of any application (whether cloud-based or not) and the Metrics.Net project in particular is undergoing constant development and improvement with the addition of integration features which will bring it into a more “enterprise” class of library.

Building Supportable Systems (Deployment)

One of the biggest time-killers in software development is deployments (and environment management). I’ve worked on some big projects recently where people are spending many hours each week creating, maintaining and deploying software packages. These hours could be better spent fixing bugs, or adding value to business by adding features to the software. Instead, they’re spent manually performing and tweaking installations of the software, often with additional overhead due to the inherent human errors along the way.

The other advantage of automated deployment is that you can have a high level of confidence that you can quickly make changes to your software, then build and deploy it to your test and production environments. The sooner you get that line of code into production, the sooner it’s adding value to your business or customers. Recently I spent a whole weekend with my colleague Sam (http://thwaitesy.com/) refactoring one large web project to make it simpler and quicker to package, and we then set up an automated deployment process for the website. The deployment process used to take a developer 1-2 days a fortnight to perform manually; now it takes about 10 minutes from code check-in, through automated build, to deployment to a developer-test environment. Instead of doing a deployment every 14 or so days, now we’re doing well over 14 a day!

With the introduction of Microsoft Azure Websites and source control integration, I can’t handle manually deploying websites any more, and for the vast majority of simple website projects this is a great way to go. For everything else… there’s Octopus Deploy.

Octopus Deploy

In my predominantly Microsoft-centric development career, nothing has been more of a game-changer than Octopus Deploy. It’s a simple (and affordable) product that allows you to configure repeatable and configurable deployment processes for your software. Octopus Deploy is mainly for the deployment of Windows server-based applications such as Windows services and websites, though it has recently acquired support for Linux deployments via SSH (I assume OSX would work too). The heavy lifting is taken care of by an army of agents installed on your target machines and, keeping with the cephalopod theme, these are obviously “Tentacles” - though you can certainly have more than 8 of them.

The following diagram shows the main dashboard of Octopus Deploy where you can get a quick overview of your products (down the left hand side) and your environments (across the top). You can easily see which versions of each product are currently installed in each environment, if any have failed, or if any are in the process of being deployed.

Octopus Deploy Dashboard

Application packages are simply NuGet packages, optionally with some additional PowerShell scripts to help things along. Behind the scenes Octopus Deploy can store and use a variety of information about your applications by way of scoped variables. These can be substituted into the deployment process based on the environment, product, or even specific machine you are installing to.

The way this product works makes it really easy to work in an agile manner, quickly changing the deployment process or the variables and re-deploying to an environment until the process is just right. Once you’re happy with the process on your development or integration environment (for example), you can promote the deployment to the next environment (such as staging or production). The great benefit of this approach is that you’re not doing anything manually, and because it’s repeatable you can have greater confidence, as you progress through your environments, that your production deployment is well tested. Of course the one caveat to this claim is that you need to keep your various environments reasonably similar in architecture to avoid unexpected changes at the end.

Corporate Features

Octopus Deploy Lifecycles

One of the arguments I’ve had around the adoption of Octopus Deploy in a large enterprise was how to control who can set up deployments, set variables, and subsequently deploy packages into various environments. Whilst Octopus Deploy has had an excellent role-based security system for a while, there was still the question of being able to enforce that a particular deployment progresses through the appropriate test and QA environments. In the recently released version of Octopus Deploy (2.6 / Dec 2014) they added a new “lifecycle” feature which addresses this very problem, and it’s brilliant… mostly. The only downside of this new feature is that it works exactly as designed and I can’t sneakily skip steps in the process like I did before, damn! :)

With these flexible security options and full auditing, it’s really easy to give developers and testers access to the system so they can develop and test their own deployment processes without having to chuck the task over the fence to the operations team. At work we’ve given a few teams access to our Octopus Deploy instance and every time I look at the dashboard there are new applications being deployed to development and test environments. It’s kinda great.

Extensibility and Integrations

I use Octopus Deploy in a couple of startups, open source projects and at work. Having used it across a variety of scenarios and scales I’ve found it to almost always work perfectly out of the box. In the few instances where the basic product hasn’t had the ability to perform a deployment by default, I’ve always been able to achieve the goal through the addition of “Step Templates” (http://library.octopusdeploy.com), by adding a bit of custom PowerShell or through the fully-featured REST API.

The ultimate example of the integration process is to use TeamCity from JetBrains to perform automated build of your code, package it into a NuGet package and push it to Octopus Deploy for delivery. Once it’s in Octopus Deploy you can perform automatically or manually triggered deployments and call PowerShell scripts to do things like publish to HipChat or Slack. Today I cloned the Slack notification script to make my own generic WebHook method (I’ll publish this soon).

What’s Next?

As part of a fun project at work I want to integrate a Netduino or Arduino with Octopus Deploy and this button (from Jaycar Electronics) so we can literally do “one button” deployments to production.

Keep Calm - Deploy to Production
Deploy Button

What could possibly be cooler (and geekier) than having this switch on the wall for the business-owner or CEO to (literally) push their new website to production? I’ll work out the technical details and put together a post on this ASAP. Keep calm, and deploy to production.
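
In the meantime, the software side should be little more than a single authenticated call to the Octopus Deploy REST API when the button is pressed. The server URL, API key, release and environment IDs below are placeholders, so treat this as a sketch rather than a recipe.

using System;
using System.Net.Http;
using System.Text;

class DeployButton
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // placeholders: substitute your own Octopus server, API key and IDs
            client.BaseAddress = new Uri("https://octopus.mydomain.com/");
            client.DefaultRequestHeaders.Add("X-Octopus-ApiKey", "API-XXXXXXXXXXXXXXXX");

            // creating a deployment resource queues the deployment
            var body = "{ \"ReleaseId\": \"Releases-123\", \"EnvironmentId\": \"Environments-1\" }";
            var response = client.PostAsync("api/deployments",
                new StringContent(body, Encoding.UTF8, "application/json")).Result;

            Console.WriteLine("Deployment queued: " + response.StatusCode);
        }
    }
}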

(Edit) This post has since been featured on CodeProject as “How We Stopped Wasting Time On Manual Deployments”.