Live Streaming on a Budget

LiveStream Setup v2

Capabilities

  • Live Stream Video
  • Include Presentation Audio / Video
  • Include Live Audio

Challenges

  • Audio not streaming from Desktop Presenter

Things Learnt

  • Mixer output is line level, but the camera requires mic level
    • Use a DI box to attenuate the signal down to mic level

TattsHack TV

LiveStream Connections

WireCast Screen

Testing Windows Infrastructure with ServerSpec

Environment Validator

I’m using the serverspec framework to perform environment validation of development and test environments. The initial version is set up in a very basic way: you pass a single host IP address or name and it connects over WinRM.

This is specifically designed to be an example of how one might test long-lived environments without taking the leap into configuration management tools such as Chef. Although that would be the logical and ideal situation, there are times where it’s not yet possible to take that step.

The other area I wanted to demonstrate was the use of these tools to test in a Windows environment rather than the more commonly demonstrated Linux-based environments.

I would like to enhance this to be more role-based in the future, so that rather than specifying a number of hosts and specs, you can define roles and specs and then assign roles to a list of hosts.

Ideally these roles would match Chef or Octopus Deploy roles too. In the future I would like to build some integrations between the various tools.

I’ve shared the code for this example on GitHub here: https://github.com/ShawInnes/environment-validator

Sample Specification

The serverspec format will be familiar to anyone who has done BDD before. Specs are written in a way that can easily be read by less technical people, which is a huge selling point for me.


require 'spec_helper'

describe 'SQL Server 2014' do
  describe service('SQL Server (MSSQLSERVER)') do
    it { should be_installed }
    it { should be_enabled }
    it { should be_running }
    it { should have_start_mode('Automatic') }
  end

  describe package('Microsoft SQL Server 2014 (64-bit)') do
    it { should be_installed }
  end

  describe port(1433) do
    it { should be_listening.with('tcp') }
  end
end

Sample Command

To execute the tests you just run rake from the command line. The following is
what I would run from the terminal on my Mac, but it could equally be kicked
off from a command prompt or PowerShell prompt on Windows.


TARGET_HOST="10.0.1.3" TARGET_USER="packer" TARGET_PASS="topsecret123" rake

Sample Output


DevOps Tools
  Package "Chef Development Kit v0.6.2"
    should be installed
Developer Tools
  Package "Microsoft Visual Studio Enterprise 2015"
    should be installed
  Package "JetBrains ReSharper Ultimate in Visual Studio 2015"
    should be installed
  Package "JetBrains dotCover 3.1.2"
    should be installed
  Package "JetBrains dotMemory 4.3.2"
    should be installed
  Package "JetBrains dotPeek 1.4.2"
    should be installed
  Package "JetBrains dotTrace 6.1.2"
    should be installed
  Package "LINQPad 4"
    should be installed
Internet Information Server (IIS)
  IIS Application Pool "api.serenityone.com"
    should exist
    should have dotnet version "4.0"
  IIS Website "api.serenityone.com"
    should exist
    should be enabled
    should be running
    should be in app pool "api.serenityone.com"
  Port "80"
    should be listening
  File "c://inetpub//wwwroot"
    should be directory
NodeJs
  File "c:/program files/nodejs/node.exe"
    should be file
    should be version "0.12.4"
    md5sum
      should eq "e05e5562864f2c914259ff562fa51be4"
Developer Tools
  Package "Octopus Deploy Server"
    should be installed
  Package "Octopus Deploy Tentacle"
    should be installed
RabbitMQ Server
  Service "RabbitMQ"
    should be installed
    should be enabled
    should be running
    should have start mode "Automatic"
  Package "RabbitMQ"
    should be installed
  Port "5672"
    should be listening
  Port "15672"
    should be listening
Seq Server
  Package "Seq"
    should be installed
  Service "Seq"
    should be installed
    should be enabled
    should be running
    should have start mode "Automatic"
  Port "5341"
    should be listening
SQL Server 2014
  Service "SQL Server (MSSQLSERVER)"
    should be installed
    should be enabled
    should be running
    should have start mode "Automatic"
  Package "Microsoft SQL Server 2014 (64-bit)"
    should be installed
  Port "1433"
    should be listening
Local User Configuration
  User "shaw.innes"
    should exist
    should belong to group "Administrators"

Finished in 26.07 seconds (files took 1.23 seconds to load)
42 examples, 0 failures

Code and contributions

The source for this sample is available on GitHub here: https://github.com/ShawInnes/environment-validator

Starlight Children's Foundation - Great Adventure Challenge

Great Adventure Challenge

The 2015 Great Adventure Challenge

I am taking on the Great Adventure Challenge and making a difference in the lives of seriously ill children with the Starlight Children’s Foundation. Through this link you can easily support my efforts by making a secure donation. I would also really appreciate it if you could share my page above or comment below so more people know about it.

Click ‘Donate Now‘ to make a secure online donation.

All donations over $2 are tax deductible and you will be issued with a DGR receipt via email as soon as you make a donation.
Thanks so much for your support!

About Starlight

Every minute of every day a child is admitted to hospital in Australia. Unfortunately, thousands of these children are then faced with a diagnosis that can change their life, and the lives of their family, forever. Starlight’s mission is to transform the experience of these children by replacing pain, fear and boredom with fun, joy and laughter.

Starlight programs are integral to the total care of seriously ill children and young people - while health professionals focus on treating the illness, Starlight is there to focus on the child - lifting their spirits, giving them an opportunity to laugh and play, building resilience and improving their wellbeing.

Starlight is the only children’s charity with a permanent presence in the seven major paediatric hospitals around Australia. We grant once in a lifetime Starlight Wishes that provide the sickest children with something to look forward to and create memories to last forever. In regional Australia our programs have improved attendance at clinics and enhanced the effectiveness of health promotion programs in remote and indigenous communities. For older children, we have Livewire – a safe online & in-hospital community where adolescents can meet other kids their age who are dealing with similar experiences.


Visual Studio Code Behind a Proxy

If you’re having trouble running Visual Studio Code behind a corporate proxy, the following
steps might help. Basically, on a Mac you just need to set two environment variables.

This is what will happen if you’ve got proxy problems:


osx:vscode shaw.innes$ dnu restore
Restoring packages for /Users/shaw.innes/Desktop/vscode/AkkaAkka/project.json
GET https://www.nuget.org/api/v2/FindPackagesById()?Id='System.Console'.
Warning: FindPackagesById: System.Console
Error: ConnectFailure (Connection timed out)
GET https://www.nuget.org/api/v2/FindPackagesById()?Id='System.Console'.
osx:vscode shaw.innes$ dnu

Before

After running the following lines in a terminal window, you’ll get much better results.


osx:vscode shaw.innes$ export http_proxy=proxy.mydomain.com:3128
osx:vscode shaw.innes$ export https_proxy=proxy.mydomain.com:3128
osx:vscode shaw.innes$ dnu restore

After


Restoring packages for /Users/shaw.innes/Desktop/vscode/AkkaAkka/project.json
GET https://www.nuget.org/api/v2/FindPackagesById()?Id='System.Console'.
OK https://www.nuget.org/api/v2/FindPackagesById()?Id='System.Console' 1469ms
GET https://www.nuget.org/api/v2/package/System.Console/4.0.0-beta-22816.
OK https://www.nuget.org/api/v2/package/System.Console/4.0.0-beta-22816 1960ms
GET https://www.nuget.org/api/v2/FindPackagesById()?Id='System.IO'.
GET https://www.nuget.org/api/v2/FindPackagesById()?Id='System.Runtime'.
OK https://www.nuget.org/api/v2/FindPackagesById()?Id='System.IO' 268ms
GET https://www.nuget.org/api/v2/package/System.IO/4.0.10-beta-22816.
OK https://www.nuget.org/api/v2/FindPackagesById()?Id='System.Runtime' 1578ms
GET https://www.nuget.org/api/v2/package/System.Runtime/4.0.20-beta-22816.

Building Supportable Systems (Performance & Diagnostics)

Isolating performance issues or tracing web traffic problems can be a challenge. Modern browsers have excellent developer tools and 3rd party tools like Fiddler are also great for this job, but they only give you so much information. Sometimes you want to get more in-depth information around the request processing times, or the configuration variables on the server.

Whilst the browser developer tools (“F12 tools”) are getting better with each browser release, there is still a class of problem that can’t be solved by these client-side tools. For example, a change to a web page might only add a few hundred milliseconds to the page load time, yet under heavy load it has the potential to totally incapacitate your entire website. A tool like Glimpse is an excellent way to get at the performance information for the specific page or area where you are currently working.

Glimpse

Glimpse (http://getglimpse.com/) is an indispensable tool and it fits right into the problem space explained above. It (optionally) displays itself as a browser popup area at the base of every page and shows various statistics about page load times, database accesses, queries and other useful web request pipeline information. There are a number of additional plugins to enhance the detail in specific areas such as specialised databases, content management systems and routing libraries.

As with all the other tools I’ve been describing, Glimpse is installed by way of NuGet packages (the first package below is the core; the other four are optional extensions).


Install-Package Glimpse
Install-Package Glimpse.AspNet
Install-Package Glimpse.Mvc5
Install-Package Glimpse.EF6
Install-Package Glimpse-Knockout

To enable Glimpse, either follow the instructions on the web page that was displayed when you installed the packages, or browse to ~/Glimpse.axd and follow the directions presented there.

Glimpse-ing into the mind of the machine…

The initial view of Glimpse in action will be a toolbar-style information block something similar to the following one displayed at the base of your page. As you navigate around your ASP.NET website this info-bar will automatically update to display information about the HTTP request, Server activities, MVC controller load times and any AJAX calls and timings.

Default View

Once you’ve found the page you’re interested in, or are working on, you can get more detailed information on the three main areas (HTTP, HOST, AJAX) by hovering your mouse over them. When you do this a more detailed information pane will pop up and display a breakdown of the page load timings.

Detailed View


Glimpse will break this down into quite detailed sections of information, and if your MVC Controller makes database calls you’ll even get a breakdown of how many calls and how long they took.

In the example above you can see that there were 3 database queries which each took around 50-60ms and that the view took 126ms to render. This isn’t a particularly good example as there isn’t a glaring problem to look at. Often you’ll see situations where there’s a huge time spent in one area (Render, or Query) and you can start digging into that area to try and optimise the page.

Super Detailed View

The final area of Glimpse is the detailed view. This is accessed by clicking on the “g” at the right of the original Glimpse info-bar. As you can see from the example below there is a huge amount of information available at your fingertips ranging from the web.config configuration settings, through to server-side model binding information, loaded modules, MVC routes and even client-side SPA binding information.

Super Detailed View

In Summary

Glimpse is a very easy tool to get started with, and it offers a valuable insight into the inner workings of your ASP.NET website. It’s one of those tools that way more people should know about and be using to help diagnose those tricky performance-related issues when developing MVC websites. I’d strongly urge anyone who’s working on .NET websites to check it out.

Building Supportable Systems (Monitoring)

Once an application is running in a production environment it can become more complicated to access the systems the application is running on. For example, the web servers might be running in a DMZ where the developers don’t have easy access to view logs (though this shouldn’t be the case if you read the previous article about log management). Another example is when you have a website or application with down-stream system dependencies such as APIs or back-end database systems, and you want to know whether those systems are healthy in order to determine the health of your own application.

Without the ability to quickly gauge the health of your application and its dependencies, you can waste a lot of time fault finding outages in your application which might not actually be a result of your code.

More Metrics.Net

Once again, the Metrics.Net NuGet package comes to the rescue here. One of the features it offers is a “health check” implementation which can be monitored and the results aggregated. If you don’t already have Metrics.Net in your project, you can add it in the usual way:

Install-Package Metrics.Net

Once installed it’s simply a matter of using the fluent configuration API to specify an endpoint and counters configuration. In the following example I’m registering two HealthCheck classes: one to check the availability of a database (for example by creating and checking a persistent connection) and the other to ensure there is sufficient disk space on the server. These are pretty basic examples; perhaps your application or website depends on a back-end API or SOAP service, in which case you could perform a regular status check on those.

Metric.Config
    .WithHttpEndpoint("http://localhost:1234/metrics/")
    .WithAllCounters();
HealthChecks.RegisterHealthCheck(new DatabaseHealthCheck());
HealthChecks.RegisterHealthCheck(new DiskHealthCheck());

Implementing HealthChecks

Implementation of the actual HealthChecks is straightforward and nicely encapsulated by deriving from a HealthCheck base class. For example:

public class DatabaseHealthCheck : HealthCheck
{
    private readonly IDatabase database;
    public DatabaseHealthCheck(IDatabase database)
        : base("DatabaseCheck")
    {
        this.database = database;
        HealthChecks.RegisterHealthCheck(this);
    }
    protected override HealthCheckResult Check()
    {
        // exceptions will be caught and
        // the result will be unhealthy
        this.database.Ping();
        return HealthCheckResult.Healthy();
    }
}
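
The disk-space check registered earlier isn’t shown in the original sample, but a minimal sketch might look like the following. The 10 GB threshold, the use of System.IO.DriveInfo, and passing a message to HealthCheckResult.Unhealthy are assumptions made for illustration.

public class DiskHealthCheck : HealthCheck
{
    // Hypothetical threshold: report unhealthy below 10 GB of free space.
    private const long MinimumFreeBytes = 10L * 1024 * 1024 * 1024;

    public DiskHealthCheck()
        : base("DiskCheck")
    {
    }

    protected override HealthCheckResult Check()
    {
        // Only the system drive is checked here; add other drives as needed.
        var drive = new System.IO.DriveInfo("C");
        if (drive.AvailableFreeSpace < MinimumFreeBytes)
        {
            return HealthCheckResult.Unhealthy("Low disk space on drive C:");
        }
        return HealthCheckResult.Healthy();
    }
}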

Monitoring HealthChecks

Health Checks

Viewing the state of your application health checks is very simple as well. You can either access them through the Metrics.Net web dashboard by going to ~/metrics (default) in your browser or you can call another endpoint to receive an HTML or JSON encoded version of the health check data.

This image shows an example of what the web console displays for unhealthy (red, at the top) and healthy (green, below) health checks. As you can see it’s really obvious which ones are failing at the time and you can quickly take action to rectify the problem(s).

Applied Health Checks

Another use for health checks is for application monitoring by network infrastructure such as load balancers. Most load balancers will periodically monitor an end-point on your website or application to determine whether the application is in a state capable of accepting traffic. If not, the load balancer will remove that instance of the application from its pool of available servers. This can be particularly useful if you have a farm of web servers and you want to distribute the load evenly across them or provide fault-tolerance.

Using the above health check process you can either create a specific implementation to respond to your load balancer query, or you can simply configure your load balancer to call the standard end-point and react based on the aggregate result of your health checks.
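
As a sketch of that second approach, the aggregate result can be surfaced on a simple endpoint for the load balancer to probe. This assumes an ASP.NET MVC controller and that Metrics.Net exposes the combined result through HealthChecks.GetStatus(); the route name and the choice of a 503 response for the unhealthy case are my own.

using System.Web.Mvc;
using Metrics;

public class StatusController : Controller
{
    // Load balancer probe: 200 when all health checks pass, 503 otherwise.
    public ActionResult Index()
    {
        var status = HealthChecks.GetStatus(); // assumed aggregate API
        if (status.IsHealthy)
        {
            return new HttpStatusCodeResult(200, "Healthy");
        }
        return new HttpStatusCodeResult(503, "Unhealthy");
    }
}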

Building Supportable Systems (Instrumentation & Metrics)

Gathering useful instrumentation about running applications, such as throughput and performance, can be tricky, but it is invaluable for understanding bottlenecks or latency problems. There are a number of commercial products that cover this area, such as AppDynamics, AppInsights, New Relic, Stackify and so on. I’ve had some experience with these tools (especially AppDynamics) and I would say that if you’re going to be supporting an application in production where there would be financial impacts if it performs badly or fails, then spend the money on one of these tools.

Having said that, I don’t think the use of an off-the-shelf product is an excuse to skip adding your own metrics to an application, especially when there are a variety of open-source options. One of the greatest benefits to implementing your own metrics within your application is that you can instrument only the areas you care about. Another benefit is that you don’t need to depend on 3rd party infrastructure (such as data collection agents, or cloud services) which might be difficult to configure or maintain depending on your deployment environment.

Metrics.Net

The Metrics.Net project (https://github.com/etishor/Metrics.NET) makes it pretty simple to gather these metrics and is a .NET port of the Java Metrics library. Metrics.Net also provides an easy interface for creating health monitoring endpoints, and I’ll cover that in a future post.

To get started, just install the Metrics.Net NuGet package in the usual way. There is a base install which provides the core functionality, and there are additional extensions to this which provide tight integration with OWIN and NancyFx.

Install-Package Metrics.Net

Once you’ve installed the base packages you can configure it in your app startup. I’ll demonstrate the functionality through a console app (it works in pretty much any .NET project type). In my Main method I add the following block of code. This configures Metrics.Net and exposes an HTTP endpoint at “/metrics” where the metrics can be viewed through a web browser. The “WithAllCounters” call also enables the capture of metrics around .NET resource usage.

Metric.Config
  .WithHttpEndpoint("http://localhost:1234/metrics/")
  .WithAllCounters();

The next thing to do is to add some readonly fields to any classes you wish to instrument. For example, if you have a transaction processing class or an MVC controller, you can add metrics to count the number of calls being made, or the number of active connections to a SignalR Hub.

private readonly Timer timer = Metric.Timer("Requests", Unit.Requests);
private readonly Counter counter = Metric.Counter("ConcurrentRequests", Unit.Requests);

Now that everything’s set up, it’s just a matter of calling the appropriate method on the fields. In this case I’m incrementing and decrementing a counter so I can get a count of “in progress” calls, as well as using the timer field to gather metrics on how long a particular task is taking to call. Metrics.Net will then aggregate, slice and dice the data into useful statistics.

public void Process(string inputString)
{
    counter.Increment();
    using (timer.NewContext())
    {
        // do something to time
        System.Threading.Thread.Sleep(1230);
    }
    counter.Decrement();
}

Visualisation

Metrics.Net makes it relatively simple to visualise the data you’re capturing by providing an HTML5 dashboard. Though I wouldn’t suggest using this as your only means of gathering metrics (as it’s stored in volatile memory) it’s a great way to get started. For more permanent storage of metrics data I would suggest looking into the (currently alpha) support for pushing metrics data into another persistent storage system such as InfluxDb, Graphite or ElasticSearch.

Charting Dashboard

The composition of the dashboard can be configured to some extent through the menus across the top of the dashboard. It’s possible to turn various metrics on and off easily, and to modify the polling interval. From what I can tell it’s just polling the internal state of the gathered metrics, so while it’s not ideal to poll every 200ms, it’s not re-calculating everything - just grabbing the stats.

Metrics.Net also includes the ability to tag and categorise metrics for reporting purposes. At the time of writing, the dashboard doesn’t support extensive filtering or grouping based on these tags, but I suspect this will change in the not-too-distant future.

Integration

While it’s very useful to gather metrics for a single instance of an application, the power of Metrics.Net probably only really becomes apparent once you start to aggregate the data collected. There are a few options here and, as mentioned above, there’s experimental support for live exporting of the instrumented data into a number of databases specifically designed for this type of thing (InfluxDb, Graphite, ElasticSearch).

However there is another feature of Metrics.Net which is extremely useful either for aggregating the data or for integrating it into your own custom web dashboards. By appending “/json” to the end of your metrics dashboard URL you can receive a JSON feed of the raw and aggregated data, as can be seen below.

Metrics JSON Feed
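
As a rough sketch, that feed can be pulled into your own aggregation or dashboard code with a plain HTTP client. The URL below simply appends “/json” to the endpoint configured earlier; polling it on a schedule is my suggestion rather than anything Metrics.Net prescribes.

using System;
using System.Net;

class MetricsFeedReader
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // Same host and port as configured in Metric.Config.WithHttpEndpoint(...)
            string json = client.DownloadString("http://localhost:1234/metrics/json");

            // For this sketch we just print the payload; a real integration would
            // parse it and push the values into persistent storage or a dashboard.
            Console.WriteLine(json);
        }
    }
}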

Summary

The use of Metrics.Net (or other similar projects) is a great way to quickly increase the supportability of any application, whether cloud-based or not. The Metrics.Net project in particular is undergoing constant development and improvement, with the addition of integration features that will bring it into a more “enterprise” class of library.

Building Supportable Systems (Deployment)

One of the biggest time-killers in software development is deployments (and environment management). I’ve worked on some big projects recently where people were spending many hours each week creating, maintaining and deploying software packages. Those hours could be better spent fixing bugs, or adding value to the business by adding features to the software. Instead, they’re spent manually performing and tweaking installations of the software, often with additional overhead due to the inherent human errors along the way.

The other advantage of automated deployment is that you can have a high level of confidence in your ability to quickly make a change to your software, build it, and deploy it to your test and production environments. The sooner you get that line of code into production, the sooner it’s adding value to your business or customers. Recently I spent a whole weekend refactoring one large web project with my colleague Sam (http://thwaitesy.com/) to make it simpler and quicker to package. We then set up an automated deployment process for the website. The deployment used to take a developer 1-2 days a fortnight to do manually; now it takes about 10 minutes from code check-in, through automated build, to deployment in a developer-test environment. Instead of doing a deployment every 14 or so days, now we’re getting well over 14 a day!

With the introduction of Microsoft Azure Websites and source control integration, I can’t handle manually deploying websites any more, and for the vast majority of simple website projects this is a great way to go. For everything else… there’s Octopus Deploy.

Octopus Deploy

In my predominantly Microsoft-centric development career, nothing has been more of a game-changer than Octopus Deploy. It’s a simple (and affordable) product that allows you to configure repeatable, configurable deployment processes for your software. Octopus Deploy is mainly for the deployment of Windows server-based applications like Windows services, websites and so on, though it has recently acquired support for Linux deployments via SSH (I assume OS X would work too). The heavy lifting is taken care of by an army of agents installed on your target machines and, keeping with the cephalopod theme, these are obviously “Tentacles” - though you can certainly have more than 8 of them.

The following diagram shows the main dashboard of Octopus Deploy where you can get a quick overview of your products (down the left hand side) and your environments (across the top). You can easily see which versions of each product are currently installed in each environment, if any have failed, or if any are in the process of being deployed.

Octopus Deploy Dashboard

Application packages are simply NuGet packages, optionally with some additional PowerShell scripts to help things along. Behind the scenes Octopus Deploy can store and use a variety of information about your applications by way of scoped variables. These can be substituted into the deployment process based on the environment, product, or even specific machine you are installing to.

The way this product works makes it really easy to work in an agile manner, quickly making changes to the deployment process or the variables and re-deploying to an environment until the process is just right. Once you’re happy with the process on your development or integration environment (for example) you can promote the deployment to the next environment (such as staging or production). The great benefit of this process is that you’re not doing anything manually, and because it’s repeatable, as you progress through your environments you can have greater confidence that your production deployment is well tested. Of course the one caveat to this claim is that you need to keep your various environments reasonably similar in architecture to avoid unexpected surprises at the end.

Corporate Features

Octopus Deploy Lifecycles

One of the arguments I’ve had around the adoption of Octopus Deploy in a large enterprise was how to control who can set up deployments, set variables, and subsequently deploy packages into various environments. Whilst Octopus Deploy has had an excellent role-based security system for a while, there was still the question of being able to enforce that a particular deployment progresses through the appropriate test and QA environments. In the recently released version of Octopus Deploy (2.6 / Dec 2014) they added a new “lifecycle” feature which addresses this very problem, and it’s brilliant… mostly. The only downside of this new feature is that it works exactly as designed and I can’t sneakily skip steps in the process like I did before, damn! :)

With these flexible security options and full auditing, it’s really easy to give developers and testers access to the system so they can develop and test their own deployment processes without having to chuck the task over the fence to the operations team. At work we’ve given a few teams access to our Octopus Deploy instance and every time I look at the dashboard there are new applications being deployed to development and test environments. It’s kinda great.

Extensibility and Integrations

I use Octopus Deploy in a couple of startups, open source projects and at work. Having used it across a variety of scenarios and scales I’ve found it to almost always work perfectly out of the box. In the few instances where the basic product hasn’t had the ability to perform a deployment by default, I’ve always been able to achieve the goal through the addition of “Step Templates” (http://library.octopusdeploy.com), by adding a bit of custom PowerShell or through the fully-featured REST API.
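
As an illustration of the REST API route, a deployment of an existing release can be triggered with a plain HTTP call. The server URL, API key and the ReleaseId/EnvironmentId values below are placeholders, and the exact endpoint and payload should be checked against the Octopus Deploy API documentation for your version.

using System;
using System.Net;

class OctopusDeployTrigger
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // Placeholder server address and API key.
            client.BaseAddress = "https://octopus.example.com";
            client.Headers.Add("X-Octopus-ApiKey", "API-XXXXXXXXXXXXXXXX");
            client.Headers.Add("Content-Type", "application/json");

            // Ask the server to deploy an existing release to an environment.
            var body = "{ \"ReleaseId\": \"Releases-1\", \"EnvironmentId\": \"Environments-1\" }";
            var response = client.UploadString("/api/deployments", "POST", body);

            Console.WriteLine(response);
        }
    }
}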

The ultimate example of the integration process is to use TeamCity from JetBrains to perform an automated build of your code, package it into a NuGet package and push it to Octopus Deploy for delivery. Once it’s in Octopus Deploy you can perform automatically or manually triggered deployments and call PowerShell scripts to do things like publish notifications to HipChat or Slack. Today I cloned the Slack notification script to make my own generic WebHook method (I’ll publish this soon).

What’s Next?

As part of a fun project at work I want to integrate a Netduino or Arduino with Octopus Deploy and this button (from Jaycar Electronics) so we can literally do “one button” deployments to production.

Keep Calm - Deploy to Production
Deploy Button

What could possibly be cooler (and geekier) than having this switch on the wall for the business owner or CEO to (literally) push their new website to production? I’ll work out the technical details and put together a post on this ASAP. Keep calm, and deploy to production.

(Edit) This post has since been featured on CodeProject: How We Stopped Wasting Time On Manual Deployments.

Building Supportable Systems (Log Management)

Following on from my previous post about logging, this one will go a bit deeper into the logging story. There’s a fine line between too much and too little when it comes to logging. On the one hand you don’t want to skip logging something that might make it easier to diagnose a problem later on, and on the other hand you don’t want to create so many verbose log entries that you just can’t find the information you need.

There are a number of options here, ranging from manually searching for text strings in the log through to expensive log aggregation software or services like Splunk, LogRhythm, etc. My personal favourite is a product called Seq. It’s a commercial product, but for single-user use on your local dev environment it’s free.

Seq

Seq is a Windows service application which listens for log entries and stores them in a high-performance data store. The log entries can then be sorted, filtered and added to a dashboard as the user sees fit. The great advantage of Seq is that you can obtain a commercial license and centralise your logs for easier analysis of aggregated data. It’s not a “big data” log archival system like Splunk, so evaluate what you’re trying to achieve with the tool before throwing all your eggs into the basket.

One super-awesome-great thing about Seq is that there’s a log4net appender available which will take your existing application’s log4net logging output and push it into a Seq server. This is great for those situations where you just want to get the logs into a manageable UI or you don’t have the time to replace your logging framework.

However, to get the greatest benefit out of Seq you need to use it with Serilog. Serilog (covered in one of my previous posts) is a structured logging framework. This means that it can log more than just lines of text, it can log meaningful object data. This data can later be filtered in Seq by using a LINQ-like query syntax.

Log.Logger = new LoggerConfiguration()
    .WriteTo.Seq("http://localhost:5341/")
    .Enrich.WithProperty("ComputerName", System.Net.Dns.GetHostName())
    .Enrich.FromLogContext()
    .CreateLogger();

using (LogContext.PushProperty("CorrelationId", Guid.NewGuid()))
{
    Log.Information("Processed order {@order}", order);
}

In the sample above Serilog will log some additional properties along with each log entry: the computer name and a CorrelationId (for which I’m just using a random Guid for fun). The advantage of this is that any action or logging that occurs within the scoped LogContext will have a traceable CorrelationId attached, and all logs related to that action can be filtered easily.

Seq

The other thing that will happen with the above code is that the “{@order}” format string will be automatically serialised at the time of logging and the properties of the order object will be available to be viewed, queried or filtered.

Seq

There’s plenty of good documentation on the Seq website at http://getseq.net/. It’s well worth taking the time to download it and have a play with it alongside Serilog.

Other Options

There are plenty of other log management and aggregation tools and services. Things like Elastic Search, New Relic, LogEntries and Splunk are worth looking into for log archival, large volumes, or cloud services.

Building Supportable Systems (Logging)

If there were going to be a “silver bullet” that makes your applications more supportable, I would suggest appropriate logging is it. There are obviously a lot of other things you can do to make applications easier to support, monitor, debug and develop - but without logging you’re shooting yourself in the foot before you even start.

Luckily there are plenty of great options for logging in .NET. Console.WriteLine() isn’t one of them, and neither is anything in the System.IO.* namespaces. I mean there are plenty of useful functions throughout these namespaces, but you really don’t want to write your own logging framework when there are so many open source solutions. Libraries like log4net, nlog, SLAB and Serilog are all dedicated logging libraries and they will abstract all of the tricky bits away from you so you don’t need to think too hard and can concentrate on adding business value to your applications.

Serilog

Most logging frameworks are essentially a string.Format(“…”) that goes somewhere useful. They offer a variety of output targets (“sinks” in Serilog) ranging from simple flat files through to databases and cloud-hosted log aggregation services.

The following is an example of a log4net formatted logging command:


Log.InfoFormat("{0}, {1} bytes @ {2}", ...)

It’s not bad, but we can do better… we can make it more readable. This is where Serilog comes in, and as with everything in this series it’s installed via NuGet. So our previous log line now becomes this:


Log.Information("{User}, {Bytes} bytes @ {Time}", ...)

“So what?” you say. That’s exactly the response I got when I was demoing this to some workmates last week. Well, this is where the power of Serilog becomes apparent. Unlike other logging frameworks, Serilog can actually serialise your logged objects into rich, structured data rather than just the “ToString()” representation. So rather than getting an anaemic “Processed Order 123” or something similar from the following line:


Log.Information("Processed order {@order}", order);

We will get something rich and infinitely more useful:


2014-12-07 18:47:43 [Information] Processed order Order { Id: 123, Customer: Customer { Name: "Shaw", Address: "123 Some Street" } }
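
For reference, the output above implies an object graph something like the following. These classes are hypothetical and only exist to show the kind of object Serilog’s @-destructuring is serialising.

// Hypothetical domain classes matching the destructured output shown above.
public class Customer
{
    public string Name { get; set; }
    public string Address { get; set; }
}

public class Order
{
    public int Id { get; set; }
    public Customer Customer { get; set; }
}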

There is one thing to watch out for (as I discovered the naive and hard way). Beware of very deep object graphs or, even worse, circular references. These will be the kiss of death to your rich logging story.
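
One mitigation, assuming a reasonably recent version of Serilog, is to cap how far destructuring will walk an object graph when the logger is configured; the limits below are arbitrary examples.

Log.Logger = new LoggerConfiguration()
    .WriteTo.Seq("http://localhost:5341/")
    // Stop destructuring runaway object graphs at a fixed depth.
    .Destructure.ToMaximumDepth(5)
    // Also cap very large collections so a single log event stays small.
    .Destructure.ToMaximumCollectionCount(20)
    .CreateLogger();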

Summary

With these logging frameworks it’s so easy to get rich logging into your applications, so do it early and make use of it for easier access to usage and debugging information. Mature logging frameworks like log4net and Serilog have integrated log file retention management, so you can set reasonable limits on the size and number of logs to keep without having to worry about your disks filling up.
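
As a quick sketch of the retention point, assuming the Serilog.Sinks.RollingFile package is installed, a size- and count-limited rolling log might be configured something like this; the path and limits are examples only.

Log.Logger = new LoggerConfiguration()
    // Roll to a new file each day, keep at most 31 files,
    // and cap each file at roughly 100 MB.
    .WriteTo.RollingFile("logs/app-{Date}.txt",
        retainedFileCountLimit: 31,
        fileSizeLimitBytes: 100 * 1024 * 1024)
    .CreateLogger();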