As always, TypeScript is the language of choice, so we need to transpile TS to JS that Lambda can understand; this is all handled by Webpack. In my last post I used babel-loader as the transpiler; this time let's use ts-loader.
> ts-loader vs babel-loader
If you need faster deployments or quicker local development feedback, you can add transpileOnly: true to the ts-loader options; this makes ts-loader skip type checking and only do the transpile job.
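Here's a minimal sketch of a webpack config wired up this way. It assumes the serverless-webpack plugin and webpack-node-externals are installed; the output paths are just conventions, not anything from this project:

```js
// webpack.config.js — a sketch, assuming the serverless-webpack plugin
const path = require('path');
const slsw = require('serverless-webpack');
const nodeExternals = require('webpack-node-externals');

module.exports = {
  entry: slsw.lib.entries, // one entry per function, resolved by serverless-webpack
  target: 'node',
  mode: slsw.lib.webpack.isLocal ? 'development' : 'production',
  externals: [nodeExternals()], // keep node_modules out of the bundle; serverless-webpack packages them
  resolve: { extensions: ['.ts', '.js'] },
  module: {
    rules: [
      {
        test: /\.ts$/,
        exclude: /node_modules/,
        loader: 'ts-loader',
        options: {
          // Skip type checking for faster feedback; run `tsc --noEmit` separately if you still want it
          transpileOnly: true,
        },
      },
    ],
  },
  output: {
    libraryTarget: 'commonjs2',
    path: path.join(__dirname, '.webpack'),
    filename: '[name].js',
  },
};
```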
chrome-aws-lambda, which ships the Chromium binary for the Lambda environment, now supports up to nodejs12.x, so we will set the runtime in serverless.yml to nodejs12.x.
For local development, chrome-aws-lambda won't give you the Chromium binary, so make sure you install the full puppeteer as a devDependency. In your code you need to check whether the runtime is offline, i.e. local development, and set executablePath accordingly.
As newer versions of serverless-offline have removed isOffline from the event object, we can only detect offline mode by injecting an environment variable into the serverless-offline CLI, and adding IS_OFFLINE to serverless.yml to reflect that env in the function.
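A sketch of how this could be wired up. In serverless.yml, the default value and exact layout are assumptions:

```yaml
# serverless.yml — excerpt
provider:
  name: aws
  runtime: nodejs12.x
  environment:
    # Reflect the env injected via the CLI, e.g. `IS_OFFLINE=true yarn serverless`
    IS_OFFLINE: ${env:IS_OFFLINE, 'false'}
```

And in the handler code, falling back to the full puppeteer locally. getBrowser is a hypothetical helper name, not something from the repo:

```ts
// browser.ts — a sketch; the helper name is illustrative
import chromium from 'chrome-aws-lambda';

const isOffline = process.env.IS_OFFLINE === 'true';

export const getBrowser = async () => {
  if (isOffline) {
    // chrome-aws-lambda ships no binary locally, so use the full puppeteer
    // installed as a devDependency; it downloads its own Chromium.
    const puppeteer = require('puppeteer');
    return puppeteer.launch();
  }
  return chromium.puppeteer.launch({
    args: chromium.args,
    defaultViewport: chromium.defaultViewport,
    executablePath: await chromium.executablePath,
    headless: chromium.headless,
  });
};
```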
Now if you run `yarn serverless`, it will start the local server. To deploy, simply run `yarn sls deploy`, and remember that the GET request must include the `Accept: application/pdf` header; this tells API Gateway the request expects a PDF.
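For example, a request against the deployed endpoint might look like this. The `/dev/pdf` path is an assumption; substitute your own API Gateway ID, region, and route:

```sh
# <api-id> and <region> are placeholders for your own API Gateway values
curl -H "Accept: application/pdf" \
  "https://<api-id>.execute-api.<region>.amazonaws.com/dev/pdf" \
  --output out.pdf
```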
> Lambda Layer
If you have a look at the deployment package, 42MB is pretty big given it's such a small project. Be aware that 250MB (unzipped, including layers) is the AWS Lambda deployment package limit, which can be a real problem for a bigger project.
We can shake off some weight by moving chrome-aws-lambda to a Lambda layer.
First, we need to make the layer zip (see the how-to) and copy the zip file to your project directory.
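At the time of writing, the chrome-aws-lambda repo ships a Makefile target for this, roughly:

```sh
# Build the layer zip from the chrome-aws-lambda repo, then copy it into the project
git clone --depth=1 https://github.com/alixaxel/chrome-aws-lambda.git
cd chrome-aws-lambda
make chrome_aws_lambda.zip
cp chrome_aws_lambda.zip ../my-project/   # destination path is an assumption
```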
Then include it in serverless.yml, and make sure the layer is attached to the function.
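A sketch of the relevant serverless.yml pieces; the layer name, handler path, and function name are assumptions. Note that the Serverless framework exposes a layer named `chromium` to CloudFormation as `ChromiumLambdaLayer`:

```yaml
# serverless.yml — excerpt
layers:
  chromium:
    package:
      artifact: chrome_aws_lambda.zip

functions:
  pdf:
    handler: src/handler.pdf
    layers:
      # Serverless names the CloudFormation resource <TitleCasedLayerName>LambdaLayer
      - { Ref: ChromiumLambdaLayer }
```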
We also need to exclude chrome-aws-lambda from the deployment package, since the layer now provides it at runtime.
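Assuming serverless-webpack is handling module packaging, one way to keep it out of the bundle is forceExclude:

```yaml
# serverless.yml — excerpt, assuming the serverless-webpack plugin
custom:
  webpack:
    includeModules:
      # The layer provides chrome-aws-lambda at runtime, so keep it out of the package
      forceExclude:
        - chrome-aws-lambda
```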
By moving it to a layer, the deployment package has shrunk from 41MB to 769KB, which means a much faster Lambda startup time!
> Provisioned concurrency
The AWS Lambda cold start issue has been a real headache for many people, but AWS has recently released a couple of key improvements. The first is the VPC networking improvement (see my previous post); the second is provisioned concurrency, which means there will always be a set number of Lambda instances up and running.
Enabling it is quite easy: just add provisionedConcurrency to the function definition. You will see a significant drop in response time for a cold start, from ~4s to ~400ms. But do remember that it doesn't come for free: you will pay for the provisioned Lambda instances even when they sit idle.
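A sketch, where the concurrency value is an arbitrary example to scale to your own traffic:

```yaml
# serverless.yml — excerpt
functions:
  pdf:
    handler: src/handler.pdf
    # Keep one instance initialized at all times
    provisionedConcurrency: 1
```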
The code lives at https://github.com/crespowang/serverless-lambda-chrome.