By Phil Richards, Lead Architect at Aluma
Running a microservices-based architecture here at Aluma means we’re able to trial new technologies when developing new features, without having to rewrite a monolith. We'd already been using Go for our command-line client, but around a year ago we tried out Go for a new microservice, and we liked it so much we decided to use it for all future backend development work where possible.
This post is about how we made that decision and how we pitted C# on dotnet core against Go.
Technology choices are always contentious within committed teams, especially when it comes to languages, so we needed to be sure we were doing the right thing. Language choice, in particular, is a fundamental decision because it affects almost every aspect of development - from writing and testing code, to runtime behaviour and deployment, and even the recruitment of new hires. It’s not just syntax!
With that in mind, here are the reasons we chose to go with Go.
A clean break
C# has become a very pleasant language to use, from its inception, when it took heavy inspiration from Java, to the present day, with easy-to-use suspendable functions via async/await and functional idioms like pattern matching, lambda expressions and LINQ. The tooling is mature (we’re big fans of JetBrains’ IDEs here, and Rider is exceptional), and Microsoft is doing brilliant work in making dotnet core a genuine option for multi-platform development.
However, dotnet still has to support legacy, error-prone APIs (like the Stream abstraction) and its runtime wasn’t designed from ground-up to be asynchronous like Go’s was. I’ll dive a little deeper into the ramifications of that shortly, but this was a key point in our decision. Go, by contrast, was designed explicitly with non-blocking I/O in mind, and its runtime prevents any of the thread-blocking and thread pool exhaustion issues we’ve seen with dotnet services. Its Reader and Writer types are so simple and elegant compared to dotnet’s Streams, and they’re so fundamental to the language and standard library they shape how third-party packages operate, too.
Streams, Readers and the runtime
Let’s dig into that last sentence a little more. Why are streams in dotnet error-prone? The key is that a single `Stream` can operate in multiple modes. It might be OK to write to a `Stream`, it might be OK to read from a `Stream`, and it might be OK to seek to a certain location in a `Stream`. A `Stream` might also implement asynchronous versions of each of its read/write/seek methods, or it might not. 3rd party libraries might use those asynchronous methods, or they might not.
That’s a lot of ‘might’s, and a lot of ways to:
a) block the calling thread (e.g. if your JSON de/serialization library uses non-async methods) and cause thread pool starvation,
b) make assumptions in unit tests that your code will receive a writable, seekable and readable Stream when, in fact, in production, it has to deal with a Stream that cannot be written to, or where Seek isn’t permitted.
Those are big problems. In the past, we’ve experienced thread starvation in our services, in production; it’s tremendously fiddly to track down and has the potential to cause massive, user-visible slowdowns. It’s even more fiddly to track down when the blocking call comes from a 3rd party library that isn’t using async methods properly.
The new Pipelines API in dotnet goes some way towards alleviating those issues by providing a much saner API, but the damage has been done: Streams aren’t going away.
Go doesn’t have these problems at all. Its equivalents, the `Reader`, `Writer` and `Seeker` interfaces, guarantee just one method per interface - a `Read`, a `Write` or a `Seek`. There’s no ambiguity at compile time (and by extension, at test or runtime) about what a `Reader` can do (‘can it `Seek`?’ etc.). Even more importantly, the Go runtime guarantees that all IO is non-blocking - that is, goroutines cannot block other goroutines, and they cannot block OS threads. I won’t dive into Go’s concurrency model here, but suffice it to say that goroutines and asynchronous IO are, together, among the founding principles of the language. That has an important ripple effect in the Go ecosystem - the guarantee that 3rd party code (and our own!) cannot cause thread pool starvation was a key factor in our decision.
A side effect of the clumsy Streams API in dotnet is that the antipattern of reading a `Stream` into memory (e.g. with a `MemoryStream`) is prevalent - we’re guilty of it ourselves, in places. When using a plain byte array or `MemoryStream`, you have more guarantees than when working with abstract `Stream`s. And again, in our experience, 3rd party libraries commonly accept byte arrays where a `Stream` would be more appropriate, so we’re left with the unfortunate situation of having to read an entire `Stream`’s contents into memory before passing it to the 3rd party library. In Aluma’s case, that `Stream` may be quite large, so reading it into a byte array or `MemoryStream` would bloat our memory usage tremendously.
There’s also the fact that even idle dotnet processes can consume hundreds of MB of memory. Roughly equivalent Go processes tend to consume as little as 3-5MB. That’s a huge difference!
Above: output from running `kubectl top pods` on a real Kubernetes deployment of Aluma (pod names obscured).
Build and test cycle time
Here’s another area where Go really shines. Its build-and-test loop is so fast you could be forgiven for thinking there’s no compilation step at all. Take, for example, the times to run a build (compilation only) for a Go service and a dotnet service of roughly equivalent size:
You can see a clean build took just over 4s on my dev machine, and a subsequent (no-op) incremental build took around 250ms. Here’s the dotnet equivalent:
A clean build took just over 11s, and a no-op build took 3.6s. Now, dotnet’s build times wouldn’t normally be a problem, but once we got used to Go’s near-instant builds, we found even small delays like this hard to swallow.
Running tests is a similar story - `go test` only runs the tests that need to run, based on code changes:
...go test caches successful package test results to avoid unnecessary repeated running of tests. When the result of a test can be recovered from the cache, go test will redisplay the previous output instead of running the test binary again.
That means that the constant dev cycle of running tests and making code changes is cut down considerably, to the point that it’s hard to go back to anything that adds a delay.
Deployment cycle time, single-file binaries
It’s a similar story with deployment times. We like to have the ability to run our microservices locally, on a microk8s kubernetes installation (that’s a post for another time). We use skaffold to deploy our current changes to the local microk8s installation, which involves creating a docker image and pushing it to the local image registry. This is another dev cycle where it’s beneficial to keep each operation fast.
Go’s build output is a single file, whereas dotnet’s is a folder with the built assemblies (including dependencies). Building a docker image and pushing it to microk8s is much faster when all you’re packaging up is a single binary, vs. an entire directory of assemblies.
This is another case where, given it’s something we do in a tight loop, shaving a few seconds off a single iteration makes a big difference to our productivity.
Tools, not frameworks
Go’s elegant and simple standard-library interfaces ripple out to its ecosystem. I’ve already mentioned the `Reader`, `Writer` and `Seeker` interfaces, but another key one is `http.Handler`. This innocuous-looking interface contains a single method definition, `ServeHTTP`, which accepts an `http.ResponseWriter` and an `*http.Request`. This single type definition has formed the basis of all HTTP endpoints, HTTP routing tables and HTTP middleware in Go, and means that all libraries that interoperate with it can, by extension, interoperate with each other. Go provides solid, in-house HTTP server and client code, with no external dependencies (e.g. openssl), and simple extension points.
Dotnet provides aspnet core, which is also a great HTTP service framework. It is a large, complex piece of software that provides de/serialization, routing, dependency injection, authentication and authorization, and more. This is all good stuff, assuming you only need to follow the well-trodden path and do more-or-less what the tutorials lead you to do (and not much more). The problem comes when you try to do something that isn’t in the tutorials and need to dig into aspnet’s internal workings. Aspnet’s approach of providing a framework to cover 90% of use cases works if you’re in that 90%... otherwise it can just get in the way. I’ve personally found its many levels of abstraction confusing when trying to implement a custom authentication middleware, for example.
Go and its ecosystem provide easily pluggable pieces of functionality that allow you to build the application using only the bits you need. This is probably one of the more subjective aspects to the Go vs. C# debate. Aspnet is a proven framework which can offer great performance… but personally, I’d opt for a set of tools over an entire framework any day.
While we’re on the subject of aspnet core, one final thing to mention is that API changes between major versions can cause major headaches when upgrading. That, combined with the fact that a few of our services still use .NET Framework (not dotnet core), means we’re stuck with an older version of aspnet than we’d like, since .NET Framework is no longer supported by aspnet core 3.0 and newer. We can’t upgrade to a newer aspnet version until we’ve transitioned those services away from .NET Framework, and even then we have some serious work to do to upgrade.
Go guarantees that code written against an older 1.x version will work with a newer 1.x version. No breaking changes. This is huge: upgrades are painless, so we can focus on features, bugfixes and improvements without having to put in the effort to migrate between versions.
It’s impossible to compare languages without at least mentioning how they ‘feel’ to use! This time it’s not so clear cut. There are features that C# provide that we miss in Go, and ways that Go approaches problems that feel fresh in comparison to C#.
We miss C#’s generics (though they’re coming to Go soon), useful features like the null-propagation operator, and the ternary operator that even C has. We miss NSubstitute for quickly creating stubs in unit tests, and we miss Rider’s refactoring support, which is superior to what GoLand provides.
But Go’s duck typing, explicit error handling (and explicitness throughout - there’s rarely any magic or implicit behaviour when looking through Go source code) and thoughtful features like the defer statement all make for a pleasant developer experience, and one that provides confidence that the code you’re writing today will be readable tomorrow, or in a month or a year - by yourself or a newcomer. That’s something that can't be said about most languages.
Sorry for the length of this post - I didn’t have time to write a shorter one! Hopefully reading this has given a taste of why we’re betting on Go, and if you haven’t already tried it yet, why we think you should.
We’re by no means done with C# - we still have a decent amount of C# code that we need to maintain and improve over time, but the distinct advantages of using Go mean we’ll be looking to gradually port those codebases over.