Serverless is a Doctrine, not a Technology

By Paul Johnston

I have spent the last few years in the serverless community trying to figure out how to help other people understand what it means to “go serverless”. In recent months, talking about serverless has, for me, been an attempt to move away from talking about technologies and towards talking about the business value of the approach. I’ve even written blogs about some of this.

One of the things I’ve noticed is that my own thinking has shifted over the past years in various different ways. I am less interested now in specific technology choices, and more interested in the strategic approach to how the technology is implemented within an organisation, and the transition an organisation needs to make to embrace a serverless mindset.

And then yesterday I stumbled over an article that swardley wrote back in 2016 about universal doctrine for an organisation…

…and I realised something…

Serverless is a doctrine

What is doctrine? For our purposes, a doctrine is a set of principles that you have learned from experience, and codified into some written form e.g. a set of best practices… like my best practices blog post…

Thinking of “serverless as a doctrine” is, I think, why I find it really frustrating when someone calls any technology or service “serverless”. The most common example is when someone says that the word “serverless” means “Function as a Service”, or FaaS. It doesn’t. FaaS is one of the primary enabling technologies of a serverless approach, but FaaS is not serverless in and of itself.

Let me repeat again:

FaaS is not serverless

Let me explain…

If you take “serverless is a doctrine” thinking, then FaaS cannot be serverless on its own, but whatever is built with the principles/doctrine in mind can be: you can hold the principles up against the solution, ask “is this serverless?”, and answer yes or no.

The service itself isn’t serverless

The approach to how something is built is serverless, and produces a serverless application.

What is built using FaaS might be serverless… but only if it applies the principles of the serverless doctrine.

In fact, it must therefore be possible to build something that is not serverless using FaaS, which would then prove the point that simply using the service is not what makes the application serverless.

(It would be hugely presumptuous of me to write “The” serverless doctrine, so this is only “a” serverless doctrine, but it’s my attempt at it…)

I have written about the rise of serverless in various blog posts, but the last one I wrote was around how I see serverless as cloud 2.0. I think this post identifies a large part of the principles, but not all:

https://medium.com/@PaulDJohnston/cloud-2-0-code-is-no-longer-king-serverless-has-dethroned-it-c6dc955db9d5

The principles and therefore the doctrine could be distilled to something like this:

Serverless applications are defined not by code, but by configuration that defines resources, and events that trigger small amounts of code that glue those resources together.

Code becomes far more about unique business logic

Configuration defines the non-business-critical parts of the application

Today’s code is tomorrow’s technical debt

In my world, preferably zero code (all in configuration)

The more we can shift to configuration, the fewer lines of code we need
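As a minimal sketch of what “configuration, not code” can look like in practice, here is a fragment in the style of an AWS SAM template. The bucket, function, and handler names are hypothetical, and this is an illustration of the principle rather than a complete, deployable template:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  # A managed resource: no servers to provision or patch
  UploadsBucket:
    Type: AWS::S3::Bucket

  # A small piece of glue code, triggered by an event on the resource
  ProcessUpload:
    Type: AWS::Serverless::Function
    Properties:
      Handler: process.handler    # hypothetical handler name
      Runtime: python3.12
      Events:
        NewUpload:
          Type: S3
          Properties:
            Bucket: !Ref UploadsBucket
            Events: s3:ObjectCreated:*
```

Almost everything here is configuration describing resources and events; the only code left to write is the unique business logic inside the handler.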

If a service exists that can do what we want, there has to be a very good reason why we would want to build that service.

So don’t build that service unless it’s business critical.

When you are running systems, you should only be concerned with business critical systems. Everything else is unimportant.

So why would you want to run a workload that wasn’t business critical?

Disown it.

Jared Short has a version of the last two…

…which I think is useful too.

But this isn’t everything, because in conversations with others I have realised I have other doctrines that I’ve never written down:

This is not about being “server”-less but about getting people to be creative in their thinking. Without this, developers will often not “think outside the box” and will instead delve into their history for answers. This is sometimes the right thing to do, but it often isn’t.

Why would I want to manage servers any more if I don’t have to? Unless they are business critical, they are one of the biggest wastes of time in a tech budget.

Scaling up and down to zero is key here.

This includes services that are basically provisioning a server to run something on it. Something like running an RDS database on AWS is questionable at this point, because you have to provision instances, and the scaling is manual.

It’s also why an RDBMS is often a poor fit, in my view, for serverless applications: this scaling element is rarely taken into account, except with Aurora Serverless.

This one is a key one for me, and is simply about challenging the assumption that “everything must be an RDBMS”.

There is nothing that says you cannot use an RDBMS, but remember to take into account the scaling and management doctrines too.

However, in almost all scenarios that I have come across in the past few years, there is a better solution than an RDBMS for the application data layer.

Yes, it might take longer now, but if it takes less time to manage over 5 years, that’s a much bigger win.

No, the development budget is not that big a deal in the grand scheme of things.

Yes, I would rather have fewer people manage my application during its lifetime, even if it’s more complex to develop now.

When developers build things, they are usually conscious of not releasing bugs.

What I want my developers to consider is not whether they release bugs, but how quickly they can recover from those bugs.

Can they build a system with a very low Mean Time To Repair? (oh, if you haven’t read the excellent Accelerate book, then this is the point where you go and read it)

This means working backwards, not forwards.

It means thinking “when this function/system goes wrong, how is someone in this team going to know how to fix it as quickly as possible?”.

At present, developer thinking is all about finding bugs before release; this is thinking forwards.

I want everyone to think the other way around. This is actually quite difficult to do.

Note: This doesn’t mean you shouldn’t do TDD or BDD or whatever. It simply means that those elements are much less important than observability of issues in production.

Note 2: This is also where the “functions shouldn’t call other functions” comes from. Finding an error is hard. It’s even harder if you have to look through function 1, then 2, then 3 etc.
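The “functions shouldn’t call other functions” point can be sketched with a toy in-memory example. The deque here stands in for a managed queue or event service (such as SQS or SNS), and the function names are hypothetical; the point is only that the second function is triggered by an event, never invoked directly by the first:

```python
from collections import deque

# Stand-in for a managed queue service; in a real serverless app
# the platform, not your code, delivers events to the next function.
queue = deque()

def resize_image(event):
    """First function: does one job, then emits an event."""
    result = {"image": event["image"], "resized": True}
    queue.append(result)  # hand off via the queue, not a direct call
    return result

def store_metadata(event):
    """Second function: triggered by the queued event, not by resize_image."""
    return {"stored": event["image"], "resized": event["resized"]}

# Simulate the platform delivering events to each function in turn
resize_image({"image": "cat.png"})
outcome = store_metadata(queue.popleft())
print(outcome)  # prints {'stored': 'cat.png', 'resized': True}
```

Because each function only ever sees an event, a failure can be traced, replayed, and fixed at one function in isolation, rather than by digging through a chain of direct calls.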

This one is my favourite.

I have often had developers come to me saying “the only way I can do this is with X service/technology”. This is never true.

My response is to send them away and ask them to write a “business case”. After they have written a business case (which often takes them several goes, because they don’t know what one is), I then tell them to go away and identify whether they can use the current technology stack to achieve what they need. The answer is almost always yes, or if it’s no, it’s often a completely different technology.

For example, I had a developer come to me asking whether we could use Redshift. I sent him away, and after a round of “business case” bingo, he realised that he could use Athena to achieve the same goal. Athena is serverless, and fits within the principles.

Developers need to understand that they have to justify their decisions, and when they realise that they need to understand the business they are in, they often become better developers.

The job of a person in a business is not to provide technology, but to provide business value.

This should be obvious, but to most developers it isn’t.

This is my first attempt at putting this together, and it comes out of multiple conversations that I have with lots of people and companies. Please use it, and adapt it if it’s helpful. Please share if you do, I’d like to see what others add/remove/change.

I think that this supersedes my relatively simple definition:

“A serverless application is one that provides maximum business value over its application lifecycle and one that costs you nothing to run when nobody is using it, excluding data storage costs.”

This is still useful as a basis for an initial conversation, but I think this post is where I will start to point people who are starting their serverless journey.

I constantly find myself saying similar things to many different people when discussing serverless, and I will have missed things, so there will be an update at some point in the future (I’m sure relatively soon).

This doctrine can be applied to all cloud vendors, of course. I think it’s pretty clear that it’s vendor agnostic (at least I’ve tried to make it agnostic; where I’ve used something vendor specific, it is as an example only).

This doctrine provides a simple way for me to look at an application, and say whether it is or isn’t serverless.

It also provides a simple way to direct a team or a whole department so that they can develop serverless applications. That is really helpful for me going forwards.

It’s not about technology.

It’s not about what technologies are better or worse.

Technology is just a tool to help us build applications.

Going serverless is more about following a set of guiding principles than it is about the technologies you use.

That’s why I’ve written this doctrine, and it’s primarily for me: if I’m ever in a situation where I need to get a team to go serverless, this will be my starting point.

If other people find it helpful, then that’s great too.