Today we're presenting Webiny's top 5 picks from the serverless announcements at AWS re:Invent. Over the past three weeks we've been following the live announcements, focusing on the serverless news that can affect how we develop software solutions from a cost and performance perspective.
Without further ado, let's go through the top 5 picks from our team @ Webiny ⬇️
Lambda now bills your functions per millisecond of execution, instead of rounding up to the nearest 100ms.
"For example, a function that runs in 30ms on average used to be billed for 100ms. Now, it will be billed for 30ms resulting in a 70% drop in its duration spend."
For more details and information about pricing check out the article here.
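The 70% figure from the quote is easy to verify with some back-of-envelope arithmetic. The $/GB-second rate below is illustrative rather than an exact AWS price:

```python
# Back-of-envelope comparison of per-ms billing vs. rounding up to 100 ms.
PRICE_PER_GB_SECOND = 0.0000166667  # illustrative rate, not an exact AWS price

def duration_cost(duration_ms: float, memory_mb: int, round_to_100ms: bool) -> float:
    """Duration charge for a single invocation."""
    if round_to_100ms:
        # old model: round duration up to the nearest 100 ms
        duration_ms = -(-duration_ms // 100) * 100
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND

old = duration_cost(30, 128, round_to_100ms=True)   # billed as 100 ms
new = duration_cost(30, 128, round_to_100ms=False)  # billed as 30 ms
print(f"savings: {1 - new / old:.0%}")  # prints "savings: 70%"
```

A 30 ms function is billed for 30 ms instead of 100 ms, so its duration spend drops by exactly 70%, matching the example in the announcement.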
Lambda has more than tripled its memory limit, from 3,008 MB up to 10,240 MB (10 GB) 🚀
This helps you perform memory-intensive operations at scale.
Starting today you can configure between 128 MB and 10,240 MB of memory for new or existing Lambda functions.
For more details check out the article here.
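Bumping an existing function to the new ceiling is a one-parameter change. A minimal sketch, assuming a hypothetical function name; the payload below is what you'd pass to boto3's `update_function_configuration`:

```python
# Raise an existing function's memory to the new 10,240 MB ceiling.
# "my-etl-function" is a hypothetical name. Lambda allocates CPU in
# proportion to memory, so this also buys more compute.

MAX_MEMORY_MB = 10240  # new limit (the previous cap was 3,008 MB)

update_kwargs = {
    "FunctionName": "my-etl-function",
    "MemorySize": MAX_MEMORY_MB,  # any value from 128 to 10,240 MB
}

# With boto3 installed and credentials configured, apply it with:
#   boto3.client("lambda").update_function_configuration(**update_kwargs)
print(update_kwargs)
```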
You can now package Lambda functions and deploy them as container images of up to 10 GB. This gives you the chance to build Lambda-based applications using familiar container tooling, workflows, and dependencies.
Functions packaged as container images keep Lambda's advantages:
- Operational simplicity
- Automatic scaling
- High availability
- Native integrations with 140 AWS services
"With this launch, AWS provides a set of base images for Lambda that are available on ECR Public and Docker Hub."
For more details check out the article here.
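Registering an image-based function uses the same API as zip-based functions, with a different package type. A minimal sketch; the image URI, role ARN, and function name are placeholders, and the kwargs would be passed to boto3's `create_function`:

```python
# Sketch of registering a container-image-based Lambda function. The ECR
# image URI and IAM role ARN are placeholders; the AWS-provided base images
# mentioned in the announcement are published on ECR Public and Docker Hub.

create_kwargs = {
    "FunctionName": "image-based-fn",  # hypothetical name
    "PackageType": "Image",            # container image instead of a zip archive
    "Code": {
        "ImageUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest"
    },
    "Role": "arn:aws:iam::123456789012:role/lambda-exec",  # placeholder role
    "MemorySize": 1024,
    "Timeout": 30,
}

# Apply with: boto3.client("lambda").create_function(**create_kwargs)
print(create_kwargs["PackageType"])
```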
You can now subscribe to log streams directly from within the Lambda execution environment.
"After receiving the subscription request, the Lambda service streams logs to the extension, and the extension can then process, filter, and send them to any preferred destination."
This gives you an alternative to relying solely on CloudWatch Logs, making it easier to route logs to your preferred diagnostics tools.
For more details on how to get started with the extension, follow the article here.
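The subscription the quote describes is an HTTP request an extension sends to the Lambda service. The sketch below shows the general shape of that request body; the field names, listener URI, and buffering values follow the Logs API documentation as I recall it, so treat them as assumptions to be checked against the article:

```python
import json

# Rough shape of a Logs API subscription request: an extension asks the
# Lambda service to stream logs to an HTTP listener it runs inside the
# execution environment. Field names and values here are assumptions, not
# verified against the current API reference.

subscription = {
    "destination": {
        "protocol": "HTTP",
        "URI": "http://sandbox:8080/logs",  # listener run by the extension
    },
    "types": ["platform", "function"],      # which log streams to receive
    "buffering": {"maxItems": 1000, "maxBytes": 262144, "timeoutMs": 100},
}

body = json.dumps(subscription)
# An extension would send this body to the Logs API endpoint exposed through
# the AWS_LAMBDA_RUNTIME_API environment variable, then process, filter, and
# forward whatever the service streams back to its listener.
print(body[:40])
```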
Amazon Aurora Serverless v2, which extends the features of Aurora Serverless, is one of the biggest announcements from the first week of AWS re:Invent.
One of the key features: "You pay only for the capacity your application consumes, and you can save up to 90% of your database cost compared to the cost of provisioning capacity for peak load."
Amazon Aurora Serverless v2 also provides the full breadth of Amazon Aurora’s capabilities:
- Multi-AZ support,
- Global Database, and
- Read replicas
For more details, check out the in-house AWS article here, or follow Jeremy Daly's preview on "Aurora Serverless v2: The Good, the Better, and the Possibly Amazing"
"Automated management for container and serverless deployments"
AWS Proton is the first fully managed application deployment service for container and serverless applications.
With AWS Proton, engineering teams can connect and coordinate all the different tools needed for:
- Infrastructure Provisioning,
- Code Deployments
- Monitoring, and
- Updates
To learn more about the software-system complexities AWS Proton solves, check out the article here.
You can now replicate data from one source bucket to multiple destination buckets in the same or different AWS Regions.
This is for you if you're interested in maintaining multiple copies of your data across one or more AWS Regions.
"With S3 Replication (multi-destination), you can easily create a shared dataset by replicating data to multiple buckets in the same or different AWS Regions."
For more details on S3 features and the pricing page, check out the article here.
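A multi-destination setup is expressed as several rules in one replication configuration. A minimal sketch, with placeholder bucket names and role ARN; the configuration would be passed to boto3's `put_bucket_replication`:

```python
# One source bucket fanning out to two destination buckets. Bucket names and
# the IAM role ARN are placeholders.

replication_config = {
    "Role": "arn:aws:iam::123456789012:role/s3-replication",  # placeholder
    "Rules": [
        {
            "ID": "to-eu",
            "Priority": 1,                     # priorities order overlapping rules
            "Status": "Enabled",
            "Filter": {},                      # empty filter: replicate everything
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::shared-dataset-eu"},
        },
        {
            "ID": "to-us-west",
            "Priority": 2,
            "Status": "Enabled",
            "Filter": {},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::shared-dataset-usw"},
        },
    ],
}

# Apply with: boto3.client("s3").put_bucket_replication(
#     Bucket="source-bucket", ReplicationConfiguration=replication_config)
print(len(replication_config["Rules"]))
```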
S3 now delivers strong read-after-write consistency for every storage request. With strong consistency, you no longer need to change your applications to work around eventual consistency, and you can cut costs by removing the extra infrastructure that used to provide strong consistency.
This blog post guides you through more details on read-after-write consistency.
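Concretely, the guarantee means a GET issued after a successful PUT always returns the newly written object. The in-memory dict below is only an illustration of the contract your application code can now rely on; the commented boto3 calls show the real equivalent:

```python
# Strong read-after-write consistency: a successful PUT is immediately
# visible to the next GET -- no eventual-consistency window, no retry loop.
# The dict is a stand-in for a bucket, used purely for illustration.

store: dict = {}

def put_object(key: str, body: bytes) -> None:
    store[key] = body  # real call: s3.put_object(Bucket=..., Key=key, Body=body)

def get_object(key: str) -> bytes:
    return store[key]  # real call: s3.get_object(Bucket=..., Key=key)["Body"].read()

put_object("report.json", b'{"v": 2}')
# Visible immediately -- under the old model this read could have returned
# a stale copy or a 404 for a short window.
assert get_object("report.json") == b'{"v": 2}'
```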
If you want to build shared datasets across multiple Regions and keep all object and object-metadata changes in sync, then two-way replication is important.
Learn more about the S3 Replication here.
You can now export your Amazon DynamoDB table data to your data lake in Amazon S3, where you can use services such as Athena to query it.
The DynamoDB data exported to your S3 data lake is easily discoverable and encrypted both at rest and in transit.
With just a few clicks in the AWS Management Console or a simple API call, you can export DynamoDB tables ranging from a few megabytes to hundreds of terabytes of data.
Learn more about this announcement here.
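The "simple API call" is DynamoDB's export-to-point-in-time operation. A minimal sketch with placeholder ARN and bucket names; note the export feature requires point-in-time recovery to be enabled on the table, and the kwargs would be passed to boto3's `export_table_to_point_in_time`:

```python
# Request shape for exporting a DynamoDB table to an S3 data lake. The table
# ARN, bucket, and prefix are placeholders.

export_kwargs = {
    "TableArn": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
    "S3Bucket": "my-data-lake",
    "S3Prefix": "dynamodb/orders/",    # where exported files land in the bucket
    "ExportFormat": "DYNAMODB_JSON",   # "ION" is the other supported format
}

# Start the export with:
#   boto3.client("dynamodb").export_table_to_point_in_time(**export_kwargs)
# The export runs without consuming table read capacity, and Athena can then
# query the files directly in S3.
print(export_kwargs["ExportFormat"])
```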
These were our top 5 picks on Serverless Announcements @ AWS re:Invent. If you're interested in our future blog posts, subscribe to our newsletter and you'll be notified when we have interesting topics to share!
Thanks for reading! My name is Albiona and I work as a developer relations engineer at Webiny. I enjoy learning new tech and building communities around them = ) If you have questions or just want to say hi, reach out to me via Twitter.