Create a Serverless Powered API in 10 Minutes Using Cloudflare Workers

August 2018 - Rita Kozlov, Workers Product Manager

Cloudflare Workers makes building a REST API a simple process. In this post we’ll show you how we used Workers to quickly stand up an API for testing if websites would be flagged as not secure with the Chrome 68 update.

The Objective

In preparation for Chrome’s Not Secure flag in July of 2018, which updated the URL indicator to show “Not Secure” for sites accessed over HTTP, we wanted to allow website owners to see if their site would pass.

In order to do this, we created a small serverless-powered fiddle to share on our site that allowed website owners to enter their URL and see if it would be marked as “Not Secure.”

Getting Started

The logic and requirements to get started creating the serverless API were very simple:

  • Make a serverless API endpoint
  • Input: a domain
  • Output: “secure” / “not secure”

One additional requirement was that we needed to follow redirects all the way; sites often redirect to www first, and only then redirect to HTTPS.

While the requirements for the API were straightforward, setting up a server to run this small amount of code wasn’t going to be trivial. Thanks to detailed guides it’s no longer quite as difficult to set up a web server, but it’s still far from trivial. Also, since posts on the Cloudflare blog can see sharp traffic spikes, we would have needed to over-provision any server. Blog traffic spikes are generally unpredictable, so we would’ve been spending extra money without knowing whether the server could actually handle the load until it was too late.

Using Cloudflare Workers, we deployed an endpoint on the web, soup to nuts, within a few minutes. No time was spent researching how to deploy a server, and 99% of the time was spent converting the very simple pseudo code above into real code. The other 1% was to add a DNS record. In 10 minutes, we had a demo-ready cURL to test domains against.

Ready to follow along with the tutorial?

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

/**
 * Fetch a request and follow redirects
 * @param {Request} request
 */
async function handleRequest(request) {
  let headers = new Headers({
    'Content-Type': 'text/html',
    'Access-Control-Allow-Origin': '*'
  })
  const SECURE_RESPONSE = new Response('secure', {status: 200, headers: headers})
  const INSECURE_RESPONSE = new Response('not secure', {status: 200, headers: headers})
  const NO_SUCH_SITE = new Response('website not found', {status: 200, headers: headers})

  let domain = new URL(request.url).searchParams.get('domain')
  if (domain === null) {
    return new Response('Please pass in domain via query string', {status: 404})
  }
  try {
    let resp = await fetch(`http://${domain}`, {headers: {'User-Agent': request.headers.get('User-Agent')}})
    if (resp.redirected == true && resp.url.startsWith('https')) {
      return SECURE_RESPONSE
    } else if (resp.redirected == false && resp.status == 502) {
      return NO_SUCH_SITE
    } else {
      return INSECURE_RESPONSE
    }
  } catch (e) {
    return new Response(`Something went wrong ${e}`, {status: 404})
  }
}

While this is a fairly small Worker, we can break it down into a few distinct parts.

Parsing Input

To keep the API as simple as possible we’re going to pass URLs via the query string instead of using a POST request.

let domain = new URL(request.url).searchParams.get('domain')
if (domain === null) {
  return new Response('Please pass in domain via query string', {status: 404})
}

Here, we instantiate a new URL object, which will handle all the URL parsing for us. Since we’re not using or modifying any other aspects of the URL, we’re able to perform all the operations on the object in one line. However, if we needed to look at other parts of the request, including the hostname, path, body, or headers, it would be best to assign the URL object to a separate variable.

The URL.searchParams property returns a URLSearchParams object, which allows us to get the value of the query string parameter directly. When the parameter is not passed, get() returns null and the Worker returns an error back to the requester.
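The parsing step can be tried out on its own; the request URL below is illustrative, not taken from the post (URL and URLSearchParams behave the same in Workers and in Node 18+):

```javascript
// Parse a query string parameter the same way the Worker does.
const url = new URL('https://example.com/is-site-secure?domain=cloudflare.com')
const domain = url.searchParams.get('domain')
console.log(domain)                       // 'cloudflare.com'
console.log(url.searchParams.get('foo'))  // null: parameter not passed
```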

Making Subrequests

Next, we will need to make a subrequest to the domain, and validate whether or not it redirects us to https.

On the fetch, we will also pass in the User-Agent header of the original request, as we noticed some sites will vary their responses for different User-Agents.

let resp = await fetch(`http://${domain}`, {headers: {'User-Agent': request.headers.get('User-Agent')}})
if (resp.redirected == true && resp.url.startsWith('https')) {
  return SECURE_RESPONSE
} else if (resp.redirected == false && resp.status == 502) {
  return NO_SUCH_SITE
}
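The decision logic above depends only on three values from the subrequest’s response, so it can be factored into a pure function for clarity (the function name and structure here are mine, not from the post):

```javascript
// Classify a response the same way the Worker's branches do.
function verdict(redirected, finalUrl, status) {
  if (redirected && finalUrl.startsWith('https')) return 'secure'
  if (!redirected && status === 502) return 'website not found'
  return 'not secure'
}

console.log(verdict(true, 'https://www.example.com/', 200))  // 'secure'
console.log(verdict(false, 'http://example.com/', 502))      // 'website not found'
console.log(verdict(false, 'http://example.com/', 200))      // 'not secure'
```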

Following Redirects

By default, when you make a new fetch, what actually happens behind the scenes is that the redirect property is set to follow. Thus fetch(url) is the same as fetch(url, {redirect: 'follow'}). So when we make the subrequest within the Worker, the final resp.url property we inspect will give us the final location of the redirect chain.
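This default is visible on any Request object we construct ourselves, which is a quick way to confirm it without making a network call (the URL is illustrative; Request is a global in Workers and in Node 18+):

```javascript
// A freshly constructed Request defaults its redirect mode to 'follow',
// so fetch(url) and fetch(url, {redirect: 'follow'}) are equivalent.
const req = new Request('http://example.com/')
console.log(req.redirect) // 'follow'
```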

While a new fetch defaults redirect to follow, the event.request delivered to the fetch event handler has its redirect property set to manual by default. So, if we had carried over all of the initial request’s properties in our subrequest, the redirect chain would not have been followed unless we explicitly overrode it.
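Overriding it looks like this: the Request constructor accepts an existing Request plus an init object whose properties win (the incoming request here is simulated with an illustrative URL, since outside a Worker there is no event.request):

```javascript
// Simulate an incoming request, which arrives with redirect: 'manual'.
const incoming = new Request('http://example.com/', { redirect: 'manual' })

// Reuse its properties for a subrequest but explicitly follow redirects.
const subrequest = new Request(incoming, { redirect: 'follow' })
console.log(incoming.redirect)   // 'manual'
console.log(subrequest.redirect) // 'follow'
```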

There is a good reason for the default of manual: it allows trivial, pass-through Cloudflare Workers to function correctly in situations where origins assume they are actually redirecting the client itself. One situation is when an HTTP redirect sends browsers to a non-HTTP URL, such as a mailto: link, which Service Workers have no ability to follow. The intended recipient of the redirect is clearly the browser in this case. Another situation arises when the origin needs the browser to update its navigation bar with a new URL (like when redirecting from HTTP to HTTPS!). If redirects were followed in a Cloudflare Service Worker before returning the resulting response to the browser, the browser would have no way of displaying the correct, redirected URL in the navigation bar.

Adding CORS Headers

let headers = new Headers({
  'Content-Type': 'text/html',
  'Access-Control-Allow-Origin': '*'
})

In the static responses we always add a response header called Access-Control-Allow-Origin. CORS is how browsers protect origins from being accessed by scripts on other sites. If we tried to run our app directly on the client (from the browser side), the browser would enforce the CORS policies of the domains we’re trying to test against, and block those requests. Setting Access-Control-Allow-Origin to * allows this endpoint to be accessed from this blog or any other site. If you are looking to embed it into your site, you can! Otherwise, if the JavaScript on the blog post itself was making browser-side calls to various domains, many requests would be blocked by the browser.
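The effect of the header can be checked directly on a constructed response, mirroring the Worker’s static responses (Headers and Response are globals in Workers and in Node 18+):

```javascript
// Build a response carrying the permissive CORS header, as the Worker does.
const headers = new Headers({
  'Content-Type': 'text/html',
  'Access-Control-Allow-Origin': '*'
})
const resp = new Response('secure', { status: 200, headers: headers })
console.log(resp.headers.get('Access-Control-Allow-Origin')) // '*'
```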

Try out the Worker for yourself!

When building a Worker, the preview UI is a great way to validate code at every step along the way. The console output is really useful for simple debugging. For example, to make sure the query string was parsed properly, we could call console.log(domain).

It’s hard to get code right on the first try, and it’s not always clear where things go wrong. While the Preview is not reflective of the end to end experience (there are many variables that may change once a request is going over the web), it’s a great developer tool to help validate progress along the way.

Using the API

Once the Worker is functioning in preview, it’s time for a real test: cURL. For a site that doesn’t redirect to HTTPS, the response comes back as:

not secure 

And it works! You can now test a site by running the same cURL from your machine and adjusting the domain parameter, or you can deploy the Worker above on your own zone and have your own testing endpoint.

Ready to try Workers for yourself?

Looking to do something more complex for your business?