How Far Out is AWS Fargate?

By Michael Lavers

As serverless technologies continue to gain traction since the introduction of AWS Lambda, we’re seeing more entrants into the serverless space. Even some services that predate Lambda, such as Google’s App Engine, are finding new life. One of these entrants, introduced in November 2017, is AWS Fargate. And while Fargate had some buzz around the time of its announcement, much of it has waned while Lambda continues to grow. I suspect this is largely due to how Fargate was pitched to users.

Here’s how AWS describes it:

AWS Fargate is a compute engine for Amazon ECS that allows you to run containers without having to manage servers or clusters. With AWS Fargate, you no longer have to provision, configure, and scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing. AWS Fargate removes the need for you to interact with or think about servers or clusters. Fargate lets you focus on designing and building your applications instead of managing the infrastructure that runs them.

While this description is accurate, it buries the lede. Coming right out of the gate describing Fargate as a “compute engine for ECS” is fraught with red flags. If you’ve worked with ECS in the past, you’re likely already penalizing Fargate with all the complexity that ECS brings. If you’re not familiar with ECS, the term “compute engine” likely leaves you with more questions than answers.

If you haven’t stopped reading, the description does promise to solve some of that ECS complexity, but it still ends up sounding like a more complicated version of AWS Lambda.

So why would you pick it over Lambda? As with most things, it depends on your use case. And while Fargate remains firmly in the shadow of its sibling, AWS Lambda, it gives Kubernetes a run for its money, both figuratively and literally.

If I had to describe Fargate, I’d describe it as clusterless container orchestration. (Yes, I just made up a word.) Much like “serverless” describes an architecture where the server has been abstracted away, with Fargate the ECS cluster has been abstracted away. There’s still a server/cluster somewhere, which is why many consider these terms misnomers, but you no longer need to manage it. In fact, when you provision a Fargate cluster, unlike with a regular ECS cluster, you don’t need to provision any EC2 instances at all. You can optionally provision a VPC and subnets for your cluster, but aside from that, the cluster on its own is effectively free, and it’s managed for you by AWS. Compare that with AWS EKS pricing, where the Kubernetes cluster alone will cost you $144 a month. A cluster isn’t terribly useful without something running on it, but it’s worth highlighting that Fargate doesn’t come with a cost overhead.

Earlier I referred to AWS Lambda as a sibling of Fargate, and while that should be obvious considering they’re both services provided by AWS, they’re related for another reason: under the hood, they share the same virtualization technology, called Firecracker. Firecracker is a KVM-based virtualization layer that creates and manages minimalistic, container-like virtual machines called “microVMs”. Firecracker is still relatively new and was designed from the ground up to address issues identified with the previous generation of AWS Lambda. Thanks to its minimalist design, Firecracker’s strengths are security, speed, and efficiency for general-purpose workloads when compared with traditional virtual machines or container technologies. Why is this important? It means that AWS can offer more compute with less overhead. For example, as AWS rolled out Firecracker, it was able to cut Fargate prices by up to 50% for most workloads.

While the underlying technology that powers each of these services is the same, the services themselves are quite different. Both use a “pay as you go” pricing model. With Lambda you pay per invocation, and the price is based on the memory you allocate for your function (up to 3GB) and its execution time. The amount of compute available to your Lambda function is based on its memory allocation. This pricing model is ideal for workloads that have spikes and/or long periods of downtime. Fargate, on the other hand, lets you configure how many vCPUs (up to 8) and how much memory (up to 30GB) you want your Fargate tasks to have independently, priced by the second with a one-minute minimum. While you can run one-off tasks with Fargate, similar to Lambda function invocations, the pricing model is better suited for longer-running workloads. Where a Lambda invocation must complete its work in 15 minutes or less, a Fargate task has no such limitation. And if you’re wondering what you’re getting compute-wise, Firecracker’s README says it supports C3 and T2 configurations, so that’s likely a good analog when comparing with equivalent EC2 instances.
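To make the two pricing models concrete, here’s a rough sketch in Python. The per-unit rates below are illustrative assumptions, not current AWS prices (check the AWS pricing pages for real numbers), and the formulas are simplified: Lambda bills per invocation plus GB-seconds of memory-time, while Fargate bills per vCPU-second and GB-second with a one-minute minimum.

```python
# Rough cost comparison sketch. The rates are illustrative assumptions,
# not current AWS prices -- check the AWS pricing pages before relying on them.

LAMBDA_GB_SECOND = 0.0000166667      # assumed $ per GB-second
LAMBDA_REQUEST = 0.0000002           # assumed $ per invocation
FARGATE_VCPU_SECOND = 0.04048 / 3600  # assumed $ per vCPU-hour, converted
FARGATE_GB_SECOND = 0.004445 / 3600   # assumed $ per GB-hour, converted

def lambda_cost(invocations, memory_gb, avg_duration_s):
    """Lambda: pay per invocation, priced by allocated memory x duration."""
    return invocations * (
        LAMBDA_REQUEST + memory_gb * avg_duration_s * LAMBDA_GB_SECOND
    )

def fargate_cost(vcpus, memory_gb, duration_s):
    """Fargate: pay for the task's vCPUs and memory, one-minute minimum."""
    billed = max(duration_s, 60)  # billed by the second, at least one minute
    return billed * (vcpus * FARGATE_VCPU_SECOND + memory_gb * FARGATE_GB_SECOND)

# A sporadic workload (1,000 short invocations) is a natural fit for Lambda...
print(f"Lambda:  ${lambda_cost(1000, 0.512, 0.2):.4f}")
# ...while a single long-running task is a natural fit for Fargate.
print(f"Fargate: ${fargate_cost(1, 2, 3600):.4f}")
```

The key structural difference shows up in `fargate_cost`: below sixty seconds of work, you pay for sixty seconds regardless, while a Lambda invocation is billed only for its actual duration.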

Another difference between the two is that Lambda is a function-as-a-service (or FaaS) offering, and as the name suggests, your Lambda entry point is a function. Your function also needs to run under one of Lambda’s supported runtimes, or you can bring your own. Fargate, on the other hand, is container-based: if it runs under Docker, you’re good to go with Fargate. A Lambda package must also be no larger than 50MB zipped or 250MB unzipped (including layers), whereas a Fargate container image has no such limitation, although it does have other limits. A Lambda function gets 512MB of ephemeral storage, whereas you can mount shared volumes to Fargate tasks. A Lambda function will scale to 1,000 concurrent executions by default, whereas Fargate requires you to define auto scaling rules based on CPU and memory utilization.
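To ground the container-based model, here’s a minimal sketch of a Fargate task definition, expressed as the Python dict you would hand to boto3’s `ecs.register_task_definition`. The family name, image URI, and sizes are hypothetical placeholders; only the structure is the point.

```python
# Minimal Fargate task definition sketch. The family name and image URI are
# hypothetical placeholders; the shape mirrors what you would pass to
# boto3's ecs.register_task_definition().
task_definition = {
    "family": "my-web-service",             # hypothetical task family name
    "requiresCompatibilities": ["FARGATE"],  # run on Fargate, not EC2
    "networkMode": "awsvpc",                 # required for Fargate tasks
    "cpu": "256",                            # 0.25 vCPU, expressed as a string
    "memory": "512",                         # 512 MB, expressed as a string
    "containerDefinitions": [
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-service:latest",
            "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
            "essential": True,
        }
    ],
}
```

You would register this with `boto3.client("ecs").register_task_definition(**task_definition)` and then point a Fargate service at the resulting family and revision.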

There are other differences, but hopefully it’s becoming clear where each service’s strengths lie. Lambda is an additional layer of abstraction: if your workload can be expressed as a function and complete its work in 15 minutes or less, it’s a great choice, especially if your workload leans towards the sporadic. But if you need more control, or the limits imposed by Lambda’s abstractions pose a problem for your workload, then Fargate is worth a close look. You don’t really need to choose one or the other, as they very much complement each other.

Fargate is a simpler, more cost-effective Kubernetes. That isn’t saying much, admittedly, since Kubernetes is complicated. If you know you need Kubernetes, then you’re probably right. Or at least I hope you are, because otherwise you’re in for a world of hurt. But if your use case doesn’t require all the bells and whistles offered by Kubernetes, Fargate may be a great alternative.

There are some obvious similarities between the two; they both do container orchestration, after all. They both have a cluster. In Kubernetes you have worker nodes, which would be equivalent to a traditional ECS cluster’s EC2 instances, but with Fargate those nodes are managed for you by AWS.

In a Kubernetes deployment you’d define services and pods, whereas in Fargate you start with a task definition, which tells Fargate which container your service should run as well as any data volumes that should be mounted. Both Fargate and EKS use AWS’ Elastic Container Registry (or ECR) by default, so your Fargate task definition references a container image in a preexisting container registry. Similar to Kubernetes, in Fargate you would then create a service based on that task definition. A service also configures load balancers, auto scaling rules, network parameters, service discovery, and port mappings. But unlike Kubernetes, which provides built-in load balancing for your pods, in Fargate this is handled by an external AWS service (either ALB, NLB, or ELB). Other services, such as auto scaling and logging, are also handled externally to Fargate itself, but are easily configured during service creation.

Once your Fargate service is running, each individual running container is called a task. A service can run zero or more tasks. If you were to scale your Fargate service to zero running tasks, your costs would be zero as well, whereas with Kubernetes, even with zero running containers, you’re still paying to keep those worker nodes running.
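The flow above, from task definition to service to running tasks, can be sketched as the parameters you would pass to boto3’s `ecs.create_service`. The cluster, service, subnet, and security group names below are hypothetical placeholders.

```python
# Sketch of the parameters for creating a Fargate service from a registered
# task definition. Cluster name, subnet, and security group IDs are
# hypothetical placeholders.
service_params = {
    "cluster": "my-cluster",
    "serviceName": "my-web-service",
    "taskDefinition": "my-web-service:1",  # family:revision
    "launchType": "FARGATE",
    "desiredCount": 2,                     # scale to 0 and the service costs nothing
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",   # a public IP instead of a load balancer
        }
    },
}
```

You would create the service with `boto3.client("ecs").create_service(**service_params)`; scaling it later is a matter of updating `desiredCount`, all the way down to zero.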

Fargate is inherently simpler than Kubernetes because it only does one thing, container orchestration, and it does it very well. Everything else is provided by an external AWS service. It could be argued that this merely masks the complexity Fargate would have if it supported these features out-of-the-box like Kubernetes does, but it also means that if you don’t need a given service, it’s one less service to manage. And considering Fargate offers a low-maintenance container orchestration solution, that’s saying a lot. From a vendor lock-in perspective, Fargate has more similarities to Kubernetes than differences, making a migration from one to the other a project measured in hours or days. And while Fargate integrates with other AWS services by default, you can use third-party options as well. For example, you can use any container registry or load balancer you want; in fact, Fargate services can be configured to expose a public IP in the latter case.

In this post I’ve made the case that Fargate is a very capable (albeit under-appreciated) alternative to Kubernetes and a complement to AWS Lambda. If you’re looking to go serverless, or already have, consider taking the next step and going clusterless. I think you’ll find that Fargate has a lot to offer and deserves more attention than it has received to date.

In part two of this post, I’ll take you on a crash course in Fargate and demonstrate just how easy it can be to get up and running. I’ll also introduce you to a CLI tool that is Fargate’s equivalent of Kubernetes’ kubectl and can take you from a Dockerfile to a running web service in just two commands. Stay tuned.

If you’re currently exploring serverless observability tools, try out IOpipe for free and let us know what you think.