EdgeWorkers

Learn how to execute JavaScript functions at the edge to optimize site performance and customize web experiences.

Best practices for performance

Use these best practices to maximize the performance of your custom JavaScript code.

Although many performance benefits come out of the box with EdgeWorkers, you should still develop your custom code with these tips in mind.

Use the right events

👍

Reduce EdgeWorkers invocations by using onOrigin events.

EdgeWorkers' granular event model lets you control when to execute code. EdgeWorkers are only invoked on the events that you implement. Although the EdgeWorkers overhead is minimal, you avoid any performance cost for events that are not implemented.

Every hit triggers the onClientRequest and onClientResponse events. The onOriginRequest and onOriginResponse events are only triggered when going forward to origin.

For example, suppose you're creating an EdgeWorkers function to remove a set of HTTP response headers from all static assets, and assume there are 100 million requests with 98% offload. You have two options:

  • Option A: Implement onOriginResponse and only trigger the EdgeWorkers function on cache misses.

  • Option B: Implement onClientResponse and trigger the EdgeWorkers function on all edge hits.

| Event handler | Edge hits (100 million) | Origin hits (2 million) |
| --- | --- | --- |
| onClientResponse | 100 M EdgeWorkers invocations | 0 EdgeWorkers invocations |
| onOriginResponse | 0 EdgeWorkers invocations | 2 M EdgeWorkers invocations |

With option A, onOriginResponse invokes your EdgeWorkers function only 2 million times. With option B, onClientResponse triggers it 100 million times.
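Option A can be sketched as a handler that strips headers only on cache misses. The header names below are illustrative assumptions, and the EdgeWorkers `export` keyword is omitted so the logic can run anywhere; in a real code bundle you would write `export function onOriginResponse(...)` in main.js.

```javascript
// Illustrative header names; substitute the ones you actually need to strip
const HEADERS_TO_STRIP = ['x-powered-by', 'server', 'x-debug-id'];

// In a real bundle: export function onOriginResponse(request, response) { ... }
// This handler runs only when the request goes forward to origin, so in the
// example above it is invoked on ~2 M cache misses rather than all 100 M hits.
function onOriginResponse(request, response) {
    for (const name of HEADERS_TO_STRIP) {
        response.removeHeader(name);
    }
}
```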

Avoid empty events

👍

Prevent empty events from adding overhead to requests.

EdgeWorkers are invoked if the event handler is present in your code bundle, even if it is empty. These three examples still add EdgeWorkers overhead to every request, as the code bundle exports the onClientResponse() function.

```javascript
// Commented-out body: still invoked on every request
export function onClientResponse(request, response) {
    //commented out
}
```

```javascript
// Empty body: still invoked on every request
export function onClientResponse(request, response) {} //empty
```

```javascript
// Conditional body: still invoked on every request, even when the condition is false
export function onClientResponse(request, response) {
    if (request.getHeader('debug')[0] == 'secret') {
        //conditional invocation
    }
}
```

Use JSON parsing

👍

Use JSON.parse('string') to instantiate larger JSON objects.

The way you instantiate the JSON object impacts your runtime performance, especially with larger objects. Using the JSON.parse('string') function is significantly faster than using the JavaScript object literal.

```javascript
// Slower: JavaScript object literal
const dataJsonLiteral = { "hello": "world", "foo": "bar" };

// Faster: JSON.parse(string)
const dataJsonParsed = JSON.parse('{"hello": "world", "foo": "bar"}');
```

You can find more information about JSON parsing in this blog.
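To see the difference for yourself, you can time both forms in a loop. This is a micro-benchmark sketch in plain JavaScript; the object shape and iteration count are illustrative, and absolute timings vary by engine.

```javascript
// Micro-benchmark sketch: compare object-literal vs JSON.parse instantiation.
const json = '{"hello": "world", "foo": "bar", "items": [1, 2, 3, 4, 5]}';

console.time('object literal');
let literal;
for (let i = 0; i < 100000; i++) {
    literal = { hello: 'world', foo: 'bar', items: [1, 2, 3, 4, 5] };
}
console.timeEnd('object literal');

console.time('JSON.parse');
let parsed;
for (let i = 0; i < 100000; i++) {
    parsed = JSON.parse(json);
}
console.timeEnd('JSON.parse');
```

Both forms produce identical objects; only the instantiation cost differs.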

Reduce code bundle size

👍

Deploy smaller code bundles to reduce invocation time.

Remove any unnecessary or dead code from your code bundles. Tree shaking can be automated with modern bundlers such as Rollup to create lean ES2015 modules. Read the EdgeWorkers Use Case: Store Locator blog to learn how to use npm and Rollup to add JavaScript libraries to your EdgeWorkers scripts.

You should also use the smallest library available that provides the functionality you need to achieve your use case.
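As a sketch, a minimal Rollup configuration that tree-shakes your bundle down to a single ES module might look like the following; the file names are assumptions.

```javascript
// rollup.config.js — bundles main.js and its imports into one ES module,
// dropping unused exports along the way
export default {
    input: 'main.js',
    output: {
        file: 'dist/main.js',
        format: 'es'  // EdgeWorkers code bundles use ES module syntax
    }
};
```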

Parallelise sub-requests

👍

Invoke multiple HTTP sub-requests in parallel.

Evaluate if you can parallelise your sub-requests before handling their responses. This is faster than triggering them one by one in series. Remember to take into account the limits for concurrent HTTP sub-requests.

```javascript
// Two sub-requests triggered and handled in series
const responseA = await httpRequest($endPointA);
let jsonA = await responseA.json();

const responseB = await httpRequest($endPointB);
let jsonB = await responseB.json();
```

```javascript
// Two sub-requests triggered in parallel, handled in series
const responseA = await httpRequest($endPointA);
const responseB = await httpRequest($endPointB);

let jsonA = await responseA.json();
let jsonB = await responseB.json();
```
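The same pattern is often written with Promise.all, which starts both sub-requests at once and waits for both to settle. In the sketch below, httpRequest is stubbed with a plain async function so it runs anywhere; inside an EdgeWorkers function you would import the real httpRequest from the 'http-request' module instead.

```javascript
// Stub standing in for EdgeWorkers' httpRequest (assumption, for illustration)
async function httpRequest(url) {
    return { json: async () => ({ url }) };
}

async function fetchBoth(endPointA, endPointB) {
    // Both sub-requests start immediately; Promise.all resolves when both complete
    const [responseA, responseB] = await Promise.all([
        httpRequest(endPointA),
        httpRequest(endPointB),
    ]);
    return Promise.all([responseA.json(), responseB.json()]);
}
```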

Cache sub-requests

👍

Cache individual sub-requests.

When possible, use Edge caching to make individual sub-requests fast so the overall request is not slowed down.

You can control these settings outside of your EdgeWorkers functions in Property Manager, PAPI, and origin headers. Learn more about caching and caching features such as cache prefreshing and cache ID modification in our Property Manager documentation.

📘

Caching features set within the delivery property also apply to the associated sub-request hostnames.

Cache responseProvider output

👍

Cache the output of your responseProvider.

When possible, use Edge caching to serve the output of your responseProvider.

You can control these settings outside of your EdgeWorkers function in Property Manager, PAPI, and origin headers. Learn more about caching and caching features such as cache prefreshing and cache ID modification in our Property Manager documentation.

Don't include sub-request headers in the response

👍

As a general rule you should avoid passing sub-request headers back in the response to prevent intermittent data corruption errors that are hard to debug.

If you do decide to include sub-request headers in the response you should use an allow list to choose and explicitly add only the headers that need to be forwarded. This method also lets you avoid sending extraneous headers, which speeds up the response and lowers the load on the network.

```javascript
import {createResponse} from 'create-response';
import {httpRequest} from 'http-request';

export async function responseProvider(request) {
        let response = await httpRequest('https://myhost.com');

        let response_headers = response.getHeaders();

        // Allow list: copy over only the headers that should be forwarded
        let hdrs = {};
        for (let allowed of ['server', 'mime-version', 'content-type']) {
                if (allowed in response_headers) {
                        hdrs[allowed] = response_headers[allowed];
                }
        }

        // Return the origin body with only the allow-listed headers
        return createResponse(response.status, hdrs, response.body);
}
```