The potted version of how we broke our monoliths into dockerised microservices and then totally embraced AWS and serverless computing.
Rewind 3 years, Donald Trump had just been elected president of the United States, the British public had voted to leave the European Union, and as far as we were concerned Amazon was just a great place to buy those paper things with words in (I believe they were called “books”).
A large part of our technology stack at that time comprised a Spring-based application and a MySQL database running on VMs in a data centre in sunny Manchester (it’s not sunny). The application had grown in size and functionality since the birth of the company back in 2008, and whilst it had served a purpose, and served it well, it was now officially legacy.
In computing terms, the word legacy is used to describe outdated or obsolete technology and equipment that is still being used by an individual or organisation. Legacy implies that the system is out of date or in need of replacement, however it may be in good working order so the business or individual owner does not want to upgrade or update the equipment.
The final point in this quote is the most pertinent. The application was working for our thousands of customers, day in, day out, with little to no downtime. But it couldn’t be denied that new features were becoming difficult to build and the underlying infrastructure was beginning to struggle to scale as we continued to grow as a business.
Docker, Kubernetes and Microservices FTW
We had previously made an attempt at breaking down our monolith into more manageable components and were well aware of the idea and benefits of microservices (another post for another day). But all we ended up with in this first pass were smaller, but still essentially monolithic, applications. We had cut too wide and too deep to make any meaningful difference; in fact we had just made life harder for ourselves. We now had multiple teams working on multiple applications with no clear demarcation between them. We were also still in a world of ‘old school’ Spring pain: developers had to jump through hoops to create new features, and deployments were slow and risky.
We needed a drastic rethink of our infrastructure and that came in the shape of Docker containers and Kubernetes.
Containers and K8s
A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.
Kubernetes (K8s) is a portable, extensible open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
Put simply, containers contain all of the software (OS, applications, settings etc.) that they need to run, and Kubernetes is a place to orchestrate and run those containers. Perfect for microservices!
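To make the “everything it needs to run” idea concrete, a single Spring service can be packaged as a container image with a Dockerfile along these lines (a minimal sketch; the base image, jar name and port here are illustrative assumptions, not taken from our actual setup):

```dockerfile
# Minimal sketch: containerising one Spring Boot service.
# Base image, jar name and port are hypothetical examples.
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY target/customer-service.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Building and tagging that image (`docker build -t customer-service .`) gives you the standalone, executable unit described above, which Kubernetes can then schedule and run.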
We took a long, hard look at our codebase and, with the ‘independent loosely coupled services’ mantra at the forefront of our minds, we were quickly able to break off large parts of the monolith into smaller, much more manageable services. New functionality was designed and built in the same way and we were quickly up to a 2 node K8s cluster with over 35 running pods. Not massive by any stretch of the imagination, but it was a significant step change from the way we were used to working. We used CI/CD tools and principles to get code from desktop to production in record time and we were well and truly flying.
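For a sense of what “running pods” means in practice, each service is described declaratively and Kubernetes keeps reality matching that description. A minimal Deployment manifest for one service might look like this (the service name, image and replica count are illustrative, not our real configuration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer-service            # hypothetical service name
spec:
  replicas: 2                       # K8s keeps 2 pods running, rescheduling on failure
  selector:
    matchLabels:
      app: customer-service
  template:
    metadata:
      labels:
        app: customer-service
    spec:
      containers:
        - name: customer-service
          image: registry.example.com/customer-service:1.0.0   # illustrative image tag
          ports:
            - containerPort: 8080
```

Applying this with `kubectl apply -f deployment.yaml` is the sort of step a CI/CD pipeline automates: build the image, push it, then roll the new tag out to the cluster.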
Whilst we had gained a lot of traction from transitioning to Docker and Kubernetes, there was a large overhead involved in managing the cluster, the VMs that it ran on, networking, load balancing, volumes etc. We had a couple of systems engineers who knew it well enough to keep it running, but that represented a significant “bus factor” that we just couldn’t ignore.
Whilst all this was going on, our CTO and a couple of our engineers were exploring Amazon Web Services and the power that could be gained from ‘going serverless’. We had several demonstrations on things like building REST APIs with API Gateway, using Lambda functions to run code and storing data in scalable DynamoDB tables; as a team we quickly saw what could be gained from using AWS.