For quite some time, there was a running joke that “serverless” was just for converting images to thumbnails. That’s still a great use case for serverless, of course, but since AWS released Lambda in 2014, serverless has come a long way. Even so, newcomers to the space often don’t realize just how many use cases there are for serverless. I spoke with Gareth McCumskey, a Solutions Architect at Serverless Inc., on a recent two-part episode (part 1 and part 2) of Serverless Chats, and we discussed nine very applicable use cases that I thought I’d share with you here.
By far, the most popular use case for serverless has been using it to build APIs, whether that’s a RESTful API or a GraphQL service. Just running a Lambda by itself can be very powerful, but when you drop something like API Gateway in front of it, you now have a match made in heaven — a highly scalable endpoint processing data in real time with no servers to manage.
AWS has two flavors of API Gateway, the standard REST version and the new HTTP APIs. But a very cool feature with the REST version (and something that will be coming to HTTP APIs as well) is the ability to configure service integrations. Lambda functions are often necessary to perform transformations and other types of processing, but with service integrations, you have a useful and efficient way to transport data from API requests without the need for (or additional costs of) Lambda functions.
There are plenty of great use cases that use HTTP endpoints, and when you start to think about high-volume, write-heavy applications (like webhooks), using service integrations with something like SQS or Kinesis makes perfect sense. When you have a lot of data coming in quickly, you don’t need to respond to the request with a message that says, “I’ve done all the processing.” You just need to indicate that the data has been captured.
As Gareth mentions, store the data by throwing it into SQS, respond to the client immediately, and then worry about processing that data later on down the line. By thinking asynchronously like this, you get message durability, faster response times, and the ability to mitigate downstream pressure, all while adding a layer of resiliency to your application.
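To illustrate the capture-now, process-later idea, here’s a minimal sketch of a webhook handler in Python. The `enqueue` callable is a stand-in for an SQS send (in a real deployment it would be something like boto3’s `send_message`); it’s injected here only so the example stays self-contained.

```python
import json

def handle_webhook(event, enqueue):
    """Accept a webhook POST, durably capture the payload, and return
    immediately. `enqueue` stands in for an SQS send call; the actual
    processing happens later, somewhere downstream."""
    body = event.get("body") or "{}"
    # Capture the raw payload first -- durability before processing.
    enqueue(body)
    # Respond right away: "data captured", not "processing done".
    return {"statusCode": 202, "body": json.dumps({"status": "queued"})}
```

And of course, with the API Gateway service integration described above, even this thin Lambda disappears and API Gateway writes to SQS directly.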
GraphQL has become another incredibly popular use case for serverless. The major benefit of GraphQL is that clients can make tailored requests and get back only the data they need. That way they’re not over-fetching data, and because they can combine data from multiple data sources in a single query, they’re not under-fetching either or forced to make multiple calls. It’s quite possible to use API Gateway and Lambda to build a GraphQL “server”; however, AWS also offers AppSync, which does almost all of the heavy lifting for you.
AppSync, like API Gateway, is massively scalable, and there’s no need to run servers or set up load balancers to handle the traffic. You can attach Data Sources that pull data directly from DynamoDB, Elasticsearch, relational databases, and HTTP endpoints. You can even build completely custom Data Sources using Lambda functions. As with any data-backed solution, you still have to worry about the scalability and availability of your underlying data sources, but overall, the GraphQL use case is solid.
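A custom Lambda Data Source might look something like the sketch below: AppSync tells the function which GraphQL field is being resolved, and the function dispatches accordingly. The event shape loosely follows AppSync’s direct Lambda resolver convention (simplified here), and the in-memory `USERS` table is a hypothetical stand-in for a real backend like DynamoDB.

```python
# Hypothetical in-memory data; a real resolver would query DynamoDB, RDS, etc.
USERS = {"u1": {"id": "u1", "name": "Ada", "email": "ada@example.com"}}

def resolver(event, context=None):
    """Sketch of an AppSync Lambda data source: dispatch on the GraphQL
    field being resolved and return just the data the client asked for."""
    field = event["info"]["fieldName"]
    args = event.get("arguments", {})
    if field == "getUser":
        return USERS.get(args["id"])
    raise ValueError(f"Unknown field: {field}")
```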
Real-time communication in web and mobile apps is becoming more common, and WebSockets are a great way to provide that capability. There are so many amazing things you can do with WebSockets, whether it’s chat functionality, multiplayer online games, or anything else where you want to push data back and forth in real time. Serverless WebSockets, however, can seem a bit perplexing, since Lambda functions are event-driven and stateless.
API Gateway can initialize and maintain WebSocket connections with clients for you, and then trigger a Lambda function only when a message is sent from a client. This allows you to respond to a request serverlessly and let API Gateway do all of that heavy lifting for you. You do need to maintain a list of connections if you want to do broadcasts, for example, but that’s relatively straightforward using something like DynamoDB as a datastore.
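Sketched out, the connection-tracking side looks roughly like this. The in-memory set is only for illustration (and is exactly the state you’d move into DynamoDB, since Lambda containers don’t share memory), and `send` stands in for API Gateway’s post-to-connection call (the `apigatewaymanagementapi` client in boto3).

```python
# In-memory stand-in for the DynamoDB table that would hold connection IDs.
# Real Lambda containers don't share state, so this only works as a sketch.
CONNECTIONS = set()

def on_connect(event):
    """$connect route: remember the connection ID API Gateway assigned."""
    CONNECTIONS.add(event["requestContext"]["connectionId"])
    return {"statusCode": 200}

def on_disconnect(event):
    """$disconnect route: forget the connection."""
    CONNECTIONS.discard(event["requestContext"]["connectionId"])
    return {"statusCode": 200}

def broadcast(message, send):
    """Push `message` to every live connection. `send` stands in for
    API Gateway's post-to-connection call."""
    for conn_id in sorted(CONNECTIONS):
        send(conn_id, message)
```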
At one of the startups I worked at several years ago, we built a real-time commenting interface. We started using simple long polling (which was terribly inefficient) and then ended up building out an entire fleet of load-balanced servers to support WebSockets. It worked really well, but took a huge investment to build, was expensive to run, and the maintenance was a nightmare. With API Gateway and Lambda, this would have taken a few hours to set up, and the ongoing maintenance and cost would be negligible.
Capturing Clickstream Data
Another common use case is capturing clickstream data from your users. API Gateway, with the right service integration, absolutely shines here. Using API Gateway with Kinesis Data Firehose, for example, allows you to reliably capture high-velocity clickstreams, convert the data in batches using Lambda functions, and then dump the data directly into an S3 bucket. You can then use Athena to quickly and cheaply query the data using standard SQL syntax. This whole pipeline is essentially 100% serverless and there’s no need to maintain a large data warehouse running 24/7 just for the occasional ad hoc query.
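To make the “convert the data in batches” step concrete, here’s a minimal sketch of the transformation Lambda that Firehose invokes on each batch. The event/response shape follows Firehose’s standard record-transformation convention; the `processed` field is just a hypothetical enrichment, and appending a newline keeps the S3 objects as newline-delimited JSON that Athena can query directly.

```python
import base64
import json

def transform(event, context=None):
    """Kinesis Data Firehose transformation Lambda: decode each record,
    enrich it, and re-encode it with a trailing newline so the batched
    objects Firehose writes to S3 are newline-delimited JSON."""
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        payload["processed"] = True  # illustrative enrichment step
        data = (json.dumps(payload) + "\n").encode()
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(data).decode(),
        })
    return {"records": output}
```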
On one of my teams, we tried using Elasticsearch to store our clickstream data. After doing some projections and estimating the number of records collected per day, we realized that it would have been unjustifiably expensive. The serverless solution gave us similar flexibility at a fraction of the cost. In today’s world of unlimited options, it’s so important to understand what users are doing on your site. So whether you’re using clickstream data for personalization, A/B testing, or other types of optimization, serverless just handles it so well.
One of the things that I see quite often is people using the power of Lambda to run massive, parallel compute jobs. A great feature of Lambda is its concurrency model: each function instance handles only one request at a time, so if you have a thousand concurrent users, Lambda spins up a thousand separate execution environments. This not only gives you the ability to massively scale requests from things like APIs and WebSockets, but you can actually use it to fan out several processing jobs at once. As Gareth said, it’s the famed “Lambda supercomputer”, and you can get an enormous amount of parallelization with it.
It’s also incredibly cost effective. If you run a thousand concurrent Lambda functions, even at the maximum memory, and it takes five minutes to run your job, that’s maybe a couple of dollars every time it runs. Gareth mentioned the use case of simulating users for the purpose of load testing your application. I also spoke with Lynn Langit a few weeks ago, and she pointed out using this parallelization for genomics and bioinformatics research. Essentially, any job that can be broken down into parallel tasks makes for a great fit here.
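The fan-out side can be sketched as a small, hypothetical batching helper. Here `invoke` stands in for an asynchronous Lambda invocation (e.g. boto3’s `invoke` with `InvocationType='Event'`), injected as a callable so the example stays self-contained; each call would become one concurrently running function instance.

```python
def fan_out(items, batch_size, invoke):
    """Split a large job into batches and 'invoke' one worker per batch.
    With asynchronous invocations, every batch runs in parallel -- the
    "Lambda supercomputer" pattern."""
    batches = [items[i:i + batch_size] for i in range(0, len(items), batch_size)]
    for batch in batches:
        invoke({"items": batch})  # one payload per worker invocation
    return len(batches)
```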
There are a ton of useful triggers for Lambda functions, but a really interesting one is processing email using an S3 or SNS trigger when an email is received by AWS SES (Simple Email Service). Gareth shared the example of an organization that handled requests for medical insurance, and their need for sifting and sorting those emails into a CRM system. Regardless of industry, this idea of receiving emails, dumping them into S3, and reading them in with Lambda has so many possible use cases. As Gareth mentions, this solution worked great for the organization from his example – so great, in fact, that the downstream CRM couldn’t initially handle the load.
If you were building a SaaS company that provided a ticketing component, for example, this use case would be perfect. I actually saw a similar use case a while ago called S3 Email that built an entire email server using S3, SES and Lambda. There are so many cool things you can do with this, like handling and processing attachments, analyzing text, sending data into Comprehend to calculate sentiment, translating the email to a different language, and so much more.
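Once the raw message lands in S3, the Lambda side is mostly standard email parsing. Here’s a sketch using Python’s stdlib `email` module to pull out the fields a downstream CRM or ticketing system would care about; the exact fields extracted are my own illustrative choice.

```python
from email import message_from_bytes

def summarize_email(raw_bytes):
    """Parse the raw RFC 822 message SES dropped into S3 and extract
    the sender, subject, plain-text body, and attachment filenames."""
    msg = message_from_bytes(raw_bytes)
    body = ""
    attachments = []
    if msg.is_multipart():
        for part in msg.walk():
            if part.get_filename():
                attachments.append(part.get_filename())
            elif part.get_content_type() == "text/plain":
                body = part.get_payload(decode=True).decode(errors="replace")
    else:
        body = msg.get_payload(decode=True).decode(errors="replace")
    return {"from": msg["From"], "subject": msg["Subject"],
            "body": body.strip(), "attachments": attachments}
```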
Cron jobs are the Swiss army knife of developers. We use them for everything from cleaning up log files on a server to triggering ETL tasks to scheduling nightly email reports. They’re also a perfect use case for serverless (and probably the second most popular, says Gareth) especially if you want to run something peripheral to your main application.
Gareth explains that Lambda becomes incredibly useful here, because you can just schedule it, and then run a job that can access all of your AWS services, all without needing to set up, secure, and maintain a separate server. Gareth gave the simple example of periodically rebuilding an XML feed for Google Shopping. Lambda would run on a schedule, pre-render the XML, and save it to S3 so that it was available to the Google Shopping spider.
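The scheduled job itself can be tiny. Below is a hypothetical version of the feed-building step (a real Google Shopping feed has more required fields and a namespace, and the handler would run on an EventBridge/CloudWatch schedule and write the result to S3; this just shows the shape of the pre-rendering work).

```python
from xml.etree import ElementTree as ET

def build_feed(products):
    """Pre-render a minimal RSS-style product feed. In a real setup this
    would run on a schedule and the output would be saved to S3 so the
    shopping spider can fetch it as a static file."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    for product in products:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = product["title"]
        ET.SubElement(item, "price").text = product["price"]
    return ET.tostring(rss, encoding="unicode")
```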
Using Lambda to do these types of jobs just makes a ton of sense, because it doesn’t touch the rest of your application. If you’re generating reports every night, or doing some other type of CPU or memory intensive computations, do you really want that running on the same systems that are handling requests from your users? And if you’re already using separate machines to do that processing, wouldn’t it be nice to only pay when the jobs were running and never have to worry about maintaining those servers? I know I would.
This use case ties into several that we’ve already discussed, but it’s such a powerful concept that it stands on its own. We mentioned the API Gateway to Simple Queue Service (SQS) service integration use case, but there are many different ways that Lambda functions can be triggered asynchronously. Whenever you have a request that requires a lot of processing or involves multiple steps, you should avoid handling it synchronously. You don’t want to generate a big PDF or convert a bunch of thumbnail images while the user is waiting on the other end of an API. You want that to happen as a background task – and that’s where the idea of offline, or asynchronous, processing comes in.
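The consuming side of that pattern is just as simple: an SQS-triggered Lambda receives a batch of queued messages and works through them with no user waiting. In this sketch, `process` is an injected stand-in for the slow work (PDF generation, thumbnailing, and so on); the event follows the standard SQS-to-Lambda batch shape.

```python
import json

def worker(event, process):
    """SQS-triggered Lambda sketch: each invocation receives a batch of
    queued messages and hands each decoded body to `process`, which
    stands in for the expensive background work."""
    for record in event["Records"]:
        process(json.loads(record["body"]))
    return {"batchSize": len(event["Records"])}
```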
Machine learning is the proverbial “thorn” in serverless’s side. Anytime you say serverless can do pretty much everything, there’s always that one person who responds with: “It can’t do machine learning.” Gareth and I both agreed that Lambda’s Achilles’ heel when it comes to machine learning is the minimal amount of disk space available to you. But he says that serverless as an architectural concept isn’t necessarily just about Lambda functions, and he offers a couple of angles in response to the “serverless can’t do machine learning” myth. He points to performing image recognition or text-to-speech using the managed services already available within the AWS ecosystem. There’s Polly for text-to-speech, Lex for conversational interfaces, Rekognition for image recognition, and Comprehend for sentiment analysis – that’s machine learning. Those services are all there for you, just an API call away from your Lambda function.
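“Just an API call away” really is about this simple. Here’s a sketch of calling Comprehend’s `DetectSentiment` from a Lambda function; in a real deployment `comprehend` would be `boto3.client("comprehend")`, but it’s injected here so the example stays self-contained (and testable without AWS credentials).

```python
def email_sentiment(text, comprehend):
    """Ask Amazon Comprehend for the sentiment of a piece of text.
    `comprehend` would be boto3.client("comprehend") in a real Lambda;
    injecting it keeps this sketch self-contained."""
    resp = comprehend.detect_sentiment(Text=text, LanguageCode="en")
    return resp["Sentiment"]
```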
And there’s also Fargate. If you’re looking for something that can respond quickly to events like Lambda, then Fargate might not be the tool for you. However, if you’re building machine learning models, that’s less of a concern. In most cases, the models are already built for you, and you have many options for running them. If you have some really unique machine learning model that you need to build, there are still options to do that in a fairly serverless way or close-to-serverless way, just maybe not on Lambda functions.
For those who are skeptical about serverless’s applicability to your team or organization’s needs, I hope this post provides some practical use cases to assuage those doubts. If you need no persuasion and are making the move to serverless, my advice to you is: trust the services in the cloud instead of trying to reinvent the wheel. There are so many great built-in features that add resiliency and relieve you from having to do all of it yourself.
Serverless is still maturing, and yes, there are still some rough edges here and there. But we’ve gotten to a point where most things can be implemented using a serverless architecture. If you want to learn more about serverless use cases, you can watch or listen to our Serverless Chats episodes (Part 1 and Part 2).
Did you like this post? 👍 Do you want more? 🙌 Follow me on Twitter or check out some of the projects I’m working on. You can sign up for my WEEKLY newsletter too. You'll get links to my new posts (like this one), industry happenings, project updates and much more! 📪