Learn about Akamai's caching
This document describes how Akamai caches content on its edge servers and how you can control this caching. Except for the Downstream caching section, it doesn't discuss caching of objects on end-user clients such as browsers, although many of the underlying concepts are similar.
One of the core benefits of Akamai CDN is the ability to retrieve content (that is, HTML pages, images, fonts, and so on — also commonly referred to as “objects”) from your origin server and cache it in Akamai edge servers at the edge of the Internet, near the end-users. This feature improves performance by reducing both the load on your origin server and the time it takes to serve content to end-users.
What is cached and what is cacheable?
You may see discussions about caching requests. In fact, it's the origin server's response to a request that is cached. In practice, request and response have similar meanings here, but in conjunction with the discussion of cache keys below, you can see that it's possible to design your caching strategy so that multiple similar but slightly different requests all return the same object from cache.
By default, only content requested with a GET method is cacheable by edge servers. If an object is cacheable, you can control how long it is cached and for which requests it should be served to the requesting client. Some factors may affect cacheability, for example the presence of the `Vary` header in the origin server's response.
Cache keys
When an object is cached on an edge server, it is identified internally by a cache key. The cache key is unique to this object and contains these elements:
Element | Status
---|---
Hostname (either the incoming `Host` header or the Origin hostname) | Required
Path to the object (for example, `/example/test/index.html`) | Required
Query strings passed in the request | Optional
Headers passed in the request | Optional
Cookies passed in the request | Optional
Variables defined in your configuration | Optional
For optional elements, you can decide whether you want to include all elements of a specific type or just some of them. For instance, you can define a cache key to include only certain query parameters. See the Cache ID Modification behavior for more details.
You can use these attributes to decide whether similar requests should result in the same object being returned from cache, or whether they should result in multiple objects being cached separately.
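Akamai's internal cache key format isn't exposed, but the idea can be illustrated with a short sketch. The Python function below, `build_cache_key`, and its `include_query_params` and `vary_headers` parameters are purely hypothetical; they only show how restricting the key to selected elements lets slightly different requests share one cached object.

```python
from urllib.parse import urlsplit, parse_qsl, urlencode

def build_cache_key(url, include_query_params=None, vary_headers=None, request_headers=None):
    """Illustrative only: compose a cache key from the hostname, the path,
    and an explicitly chosen subset of query parameters and headers."""
    parts = urlsplit(url)
    key = [parts.hostname or "", parts.path]

    # Keep only the query parameters the configuration treats as significant,
    # so /page?a=1&b=2 and /page?b=2&a=1&utm_source=x can share one cache entry.
    if include_query_params:
        params = [(k, v) for k, v in parse_qsl(parts.query) if k in include_query_params]
        key.append(urlencode(sorted(params)))

    # Optionally mix in selected request headers (similar to a Vary-style rule).
    if vary_headers and request_headers:
        key.extend(f"{h}={request_headers.get(h, '')}" for h in sorted(vary_headers))

    return "|".join(key)

# Both requests below map to the same cache key because utm_source is ignored.
print(build_cache_key("https://www.example.com/example/test/index.html?lang=en&utm_source=ad",
                      include_query_params={"lang"}))
print(build_cache_key("https://www.example.com/example/test/index.html?utm_source=mail&lang=en",
                      include_query_params={"lang"}))
```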
Separate caches per server
Akamai’s cache is not monolithic — each individual edge server has its own cache. When a request from an end user is routed to an edge server, content is cached on this specific edge server. Content is only cached as a result of a request from an end user.
For example, in many cases, requests for content come from end users in a particular geographic region, and therefore only the Akamai edge servers near those end users cache a copy of that content.
However, when content is purged from cache, the purge request is made to every edge server to remove the content from cache, whether it exists there or not.
How caching works in Property Manager
Property Manager automatically assigns a cache setting to your content. By default, edge servers cache content for a theoretically infinite time. Objects are removed from cache on a least recently used (LRU) basis. If an edge server’s cache is full, it can remove objects that are infrequently accessed from the cache even if those objects have not yet reached their Time-to-Live (TTL). In addition, you can use the Purge Cache application to completely remove objects from Akamai edge servers’ caches at any time.
To change the default settings, you can modify the TTL value through Property Manager behaviors, the most important of which is Caching, or through cache-related response headers sent by your origin when the object is retrieved.
What is TTL?
A Time-to-Live (TTL) is the amount of time an edge server can hold an object in its cache without checking with the origin server whether a newer version of the object is available. When you send a request, edge servers check whether the object has been in the edge server cache for longer than its defined TTL; if it has, the object is considered stale. If the object is not stale, it is served to you from cache. If the object is stale, the edge server checks with the origin server using an `If-Modified-Since` (IMS) GET request to see whether a newer version of the object is available. If so, it retrieves the newer object from the origin server before caching and serving it to you. If no newer object is available on the origin server, the currently cached object is served to you and its TTL is reset. This is referred to as conditional revalidation.
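To illustrate the exchange, here is a minimal client-side sketch of conditional revalidation using Python's third-party requests library. The URL is hypothetical, and edge servers perform the equivalent exchange with your origin rather than running code like this.

```python
import requests

URL = "https://origin.example.com/example/test/index.html"  # hypothetical origin URL

# First fetch: the origin returns the object plus a Last-Modified timestamp.
first = requests.get(URL)
last_modified = first.headers.get("Last-Modified")

# Later, once the cached copy is stale, revalidate with If-Modified-Since (IMS).
ims_headers = {"If-Modified-Since": last_modified} if last_modified else {}
revalidation = requests.get(URL, headers=ims_headers)

if revalidation.status_code == 304:
    # Not modified: keep serving the cached copy and reset its TTL.
    print("304 Not Modified - cached object is still fresh, TTL reset")
else:
    # 200 OK: a newer version exists; replace the cached object with this body.
    print(f"{revalidation.status_code} - cache updated with {len(revalidation.content)} bytes")
```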
How to set a TTL?
Your property configuration is the primary method for setting options for groups of objects, and it's the main method for setting a TTL. In rules within Property Manager, you use the Caching behavior to set the max-age value, which defines the maximum amount of time objects can be retrieved from the cache before they are considered stale. Typically, you set it to the maximum acceptable value; the remaining TTL is calculated from it. By applying matches to your rules, you can specify a variety of TTL values that best suit your needs. For example, you can match on file directories, file extensions, and other groupings to ensure the best possible caching for different objects.
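As a rough illustration only, the snippet below sketches, as a Python dict, a child rule that matches image file extensions and applies a seven-day TTL through the Caching behavior. The field names approximate the Property Manager API rule-tree format; check the Caching behavior reference for the exact schema and option names available to your product.

```python
# Hypothetical sketch of a Property Manager child rule expressed as a Python dict.
image_caching_rule = {
    "name": "Cache images for 7 days",
    "criteria": [
        {
            # Match requests for common image file extensions.
            "name": "fileExtension",
            "options": {"matchOperator": "IS_ONE_OF",
                        "values": ["jpg", "jpeg", "png", "gif", "webp"]},
        }
    ],
    "behaviors": [
        {
            # Apply a 7-day max-age TTL via the Caching behavior.
            "name": "caching",
            "options": {"behavior": "MAX_AGE", "ttl": "7d", "mustRevalidate": False},
        }
    ],
}
```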
If necessary, you can override configuration settings on selected objects with Edge Side Includes (ESI) attributes and specific headers coming from the origin server, such as `Cache-Control` and `Expires`.
The Edge-Control header
In addition to the normal `Cache-Control` and `Expires` caching response headers, there is an Akamai-specific HTTP response header called `Edge-Control`. `Edge-Control` provides controls and parameters for content served over the edge network. On the Akamai network, the `Edge-Control` settings take precedence over any `Cache-Control` and `Expires` headers, as well as over many caching-related configuration settings. The `Edge-Control` header is generally deprecated and is not described in this document. However, if you have specific complex caching requirements, your Akamai account team may suggest using the `Edge-Control` header and provide you with further details.
Cache-Control and Expires headers
By default, edge servers don't honor these two response headers sent from the origin server. You can change this setting for specific objects by enabling each header individually, or both together, with the Caching behavior. Once Honor origin Cache-Control and Expires is enabled, the edge network honors the `Expires` header value and the following `Cache-Control` header directives:

- `s-maxage` (specific to shared caches)
- `max-age`
- `no-store`
- `no-cache` (behaves like setting a zero-second max-age)
Potential conflicts
In case of conflicts between the `Cache-Control` and `Expires` headers when both are returned from the origin, the `Cache-Control` value takes precedence for caching in Akamai.
If the `Cache-Control` header contains both `max-age` and `s-maxage` directives, Akamai caches the content using the `s-maxage` value and sends both directives downstream. If there are any caching proxies between Akamai and the client, they should also cache the content using the `s-maxage` value. Clients ignore the `s-maxage` directive entirely and cache content using only the `max-age` directive value or the `Expires` value, if passed.
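A small sketch of this precedence follows. The helper `shared_cache_ttl` is hypothetical; it mimics how a shared cache such as an edge server would pick its TTL, and is not Akamai's actual implementation.

```python
def shared_cache_ttl(cache_control, expires_ttl):
    """Illustrative only: pick the TTL a shared cache would use, mirroring the
    precedence described above (s-maxage over max-age, Cache-Control over Expires)."""
    directives = {}
    if cache_control:
        for directive in cache_control.split(","):
            name, _, value = directive.strip().partition("=")
            directives[name.lower()] = value
    if "s-maxage" in directives:
        return int(directives["s-maxage"])
    if "max-age" in directives:
        return int(directives["max-age"])
    return expires_ttl  # fall back to the Expires-derived lifetime, if any

# The shared cache uses 600 seconds; browsers (private caches) would use max-age=60.
print(shared_cache_ttl("public, max-age=60, s-maxage=600", expires_ttl=3600))  # -> 600
```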
Note that you can't override a `no-store` setting defined in your configuration with a `Cache-Control` or `Expires` header.
Other considerations:

- If you set the configuration to honor the `Cache-Control` header, edge servers use this information for the lifetime of the object if the following conditions are met:
  - You set edge servers to accept any of the `s-maxage`, `max-age`, `no-store`, and `no-cache` directives of the `Cache-Control` header.
  - The edge servers receive the `Cache-Control` header in the response from the origin server and it contains any of the above directives.
- If you set the configuration to honor the `Expires` header, edge servers use the 'implied' `max-age` (the `Expires` value minus the current time) as the lifetime for the object if the following conditions are met (see the sketch after this list):
  - Edge servers receive the `Expires` header in the response from the origin server.
  - The expiration is in the future.
- If you set the configuration to honor the `Expires` header and an edge server receives an `Expires` header from the origin, the edge server uses the `Date` header from the origin to calculate the TTL of the object rather than its own internal clock, in case the origin's clock isn't synchronized with the edge server's clock.
- If you set the configuration to honor the `Expires` header and the received `Expires` value is in the past, the edge servers use their own current date stamp instead of the implied `max-age` value.
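The implied max-age calculation can be sketched as follows. `implied_max_age` is a hypothetical helper, not Akamai code; it uses the origin's Date header as the reference clock, as described above.

```python
from email.utils import parsedate_to_datetime

def implied_max_age(origin_headers):
    """Illustrative only: derive the 'implied' max-age in seconds from an
    Expires header, using the origin's own Date header as the reference clock
    so that a skewed origin clock doesn't distort the TTL."""
    expires = origin_headers.get("Expires")
    date = origin_headers.get("Date")
    if not expires or not date:
        return None
    ttl = (parsedate_to_datetime(expires) - parsedate_to_datetime(date)).total_seconds()
    return max(int(ttl), 0)  # an Expires value in the past yields no positive lifetime

print(implied_max_age({
    "Date": "Wed, 21 May 2025 10:00:00 GMT",
    "Expires": "Wed, 21 May 2025 11:30:00 GMT",
}))  # -> 5400
```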
ESI tag attributes
This section may be of interest to ESI developers. When you use an `esi:include` statement to request a fragment for inclusion in an ESI template page, you can use the `ttl` or `no-store` tag attributes to control the caching properties for the requested object.
TTL best practices
The choice of an appropriate TTL depends on the time sensitivity and the nature of the objects. In general, you should choose the longest possible TTL that does not cause the end-user to receive stale content. The longer the TTL associated with an object, the greater the benefit of offloading content.
Prefresh
Akamai edge servers can asynchronously prefresh objects when a request is received after a specified percentage of the TTL has elapsed, by default 90%. This means that after 90% of the TTL has elapsed, a request for the object is immediately served from cache and also triggers an asynchronous IMS request to the origin to check for a newer version of the object.
For example, suppose an object with a TTL of 60 minutes is requested 54 or more minutes after it's added to the cache. The Akamai server immediately serves the object to the client and then sends an `If-Modified-Since` request to the origin server to conditionally revalidate the object and potentially retrieve an updated object to replace the existing one in the cache.
This asynchronous update reduces the waiting time even for popular objects with a short TTL and avoids a delay in serving the object on the next request. It also reduces potential spikes in traffic to the origin server when the TTL of the object expires.
To change the default setting, add the Cache Prefresh/Refresh behavior to your property configuration.
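A minimal sketch of the prefresh threshold logic, assuming the default 90% setting; `should_prefresh` is a hypothetical helper, not Akamai's implementation.

```python
def should_prefresh(age_seconds, ttl_seconds, prefresh_percent=90):
    """Illustrative only: once an object's age crosses the prefresh threshold
    (90% of TTL by default) but is still within its TTL, serve it from cache
    and trigger an asynchronous revalidation against the origin."""
    threshold = ttl_seconds * prefresh_percent / 100
    return threshold <= age_seconds < ttl_seconds

# A 60-minute TTL: requests arriving at or after the 54-minute mark are served
# from cache and also kick off a background If-Modified-Since request.
print(should_prefresh(age_seconds=54 * 60, ttl_seconds=60 * 60))  # -> True
print(should_prefresh(age_seconds=30 * 60, ttl_seconds=60 * 60))  # -> False
```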
Time sensitivity
Consider the following question when selecting the TTL settings:
“How long do I want Akamai to serve this cached version after I've already changed the content on my site?”
The answer to this question will inform the appropriate TTL for your content.
- Extreme time sensitivity. If an object is so time sensitive that it must be revalidated with each delivery, such as a breaking news story, then you should either not serve the object from cache at all and send each request to the origin server, or set TTL=0 (zero TTL) so that the object is cached but revalidated with each request. The decision whether to prevent caching (`no-store`) or assign the zero TTL depends primarily on the size of your object.
- Moderate time sensitivity. If you update the site once a day in the late evening, when request traffic is low, you may not mind stale content being served for some time. You can set the TTL for most objects to an hour or more. You can even assign longer TTLs and use Purge Cache in the publishing process to delete old content after each new publication. However, be aware that a change in content doesn't necessarily warrant purging the old content from the cache. Object sensitivity is the primary issue. For example, if you make superficial changes to an image and the user doesn't necessarily need to see the updated version, you can let the changes take effect gradually as edge servers revalidate stale objects in the cache with the origin server when they receive requests.
- No time sensitivity. If the content isn't time sensitive at all, such as an archival magazine article, you can give it a very long TTL. It's probably most beneficial if the origin server isn't bothered with `If-Modified-Since` requests to confirm that the object is up-to-date. You can maintain the timeliness of the object by either changing the reference in the HTML file when the content changes, or using Purge Cache to delete the old content after a change.
Downstream caching
Downstream caching refers to the caching instructions that edge servers apply to the content they serve to clients (such as browsers, mobile devices, or client proxies). These instructions are sent to clients as response headers.
Downstream `no-store` and `bypass-cache` instructions override other downstream TTL instructions. If the object can't be cached, the TTL is irrelevant.
Default downstream caching behavior
By default, Akamai sets the cacheability in the browser to no longer than the remaining lifetime for the content stored in the edge server cache. If the cacheability headers sent by the origin server suggest a shorter time, that shorter time is used.
When delivering content to clients, Akamai servers by default send the smaller of the `Cache-Control: max-age` and/or `Expires` values received from the origin server and the remaining lifetime of the object in the edge server cache. This means that the client max-age is always equal to or lower than the edge max-age.
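A sketch of this "smaller of the two" rule; `downstream_max_age` is a hypothetical helper illustrating the default calculation, not Akamai code.

```python
def downstream_max_age(origin_max_age, remaining_edge_ttl):
    """Illustrative only: by default the client-facing max-age is the smaller of
    the origin's caching lifetime and the time the object has left in edge cache."""
    candidates = [v for v in (origin_max_age, remaining_edge_ttl) if v is not None]
    return min(candidates) if candidates else None

# The origin said max-age=3600, but the object has only 900 seconds left at the edge,
# so the client receives Cache-Control: max-age=900.
print(downstream_max_age(origin_max_age=3600, remaining_edge_ttl=900))  # -> 900
```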
Exceptions
By default, edge servers update only the values in the `Cache-Control` and `Expires` headers. They don't add these headers if the origin server didn't send them. However, there are certain exceptions to this default behavior:

- If you use the `no-store` or `bypass-cache` settings in your property configuration to indicate that an object shouldn't be cached, edge servers send "cache-busting" headers downstream to the client. Any `Cache-Control: max-age` or `s-maxage` directives received from the origin server are discarded.
- You can set the edge server configuration to compute the downstream caching itself.
- If you use ESI, the downstream cacheability is calculated as the lowest value of all fragments that make up the output page served to the client.
Custom TTL downstream settings
If you want to set the downstream TTL directly, without relying on the default behavior, you can use either:
- The Downstream Cacheability behavior.
- An ESI configuration file or ESI language controls. With ESI Downstream Time-to-Live, you can only set a specific value in the response. There are no controls to select the headers to send or the method used to calculate the value of those headers.
Cache prevention
You must specify at least one default caching rule for your object in the property configuration. If you select `no-store`, the object isn't cached, and any downstream caching is also prevented.
To bust downstream caches, edge servers send these HTTP headers in the response to the client:

- `Expires: <current_time>`
- `Cache-Control: max-age=0`
- `Cache-Control: no-store`
- `Pragma: no-cache`
If the `Cache-Control` header is enabled in your settings, the cache-busting behavior also applies when edge servers receive `Cache-Control: no-cache` or `Cache-Control: no-store` headers in a response from the origin server. This is true even if a prior origin response contained the `Expires` header but no `Cache-Control: max-age`.
If you want to cache in edge servers but prevent only downstream caching, you can:

- Add the Downstream Cacheability behavior with Caching Option set to Allow caching and Cache Lifetime set to Fixed value, and set the max-age value to -1.
- Use the ESI function `$add_cachebusting_header()`, or set the ESI downstream TTL configuration attribute to -1.
TTL alternatives
In some cases, setting a TTL duration may not meet your needs. Imagine you need fresh content the moment you publish it, such as a market quote that needs to reflect changes in real time. Or you need to authenticate each request, so the requests need to be forwarded to the origin. There are some alternatives to TTL that you can apply to your objects.
Zero TTL
If you set TTL to zero, edge servers can cache your object, but they are required to revalidate it with an `If-Modified-Since` (IMS) request to the origin each time the object is requested. This option may be appropriate for large, time-sensitive objects. If the object is large, the benefit of serving it from the edge of the Internet may outweigh the cost of revalidating the object for each client request.
You may also find zero TTL useful if you need real-time logging of client requests; it's a viable alternative to using `no-store` for that purpose. However, to ensure fresh content and minimal latency, specify an appropriate TTL for the object and set the prefresh option to 0%. This way, the edge servers contact the origin server for each request, but only after they have served the content to the client, so real-time logging doesn't add latency.
If the content changes for every request, use no-store instead.
No-store
Use `no-store` if you don't want Akamai to cache the object. In general, if the object is likely to change more often than it's requested (for example, if it's different with each request), it's best to simply disallow caching. Even without caching, the benefits of persistent TCP connections and route optimization between Akamai servers and the origin server are still realized.
If you need to pass requests and responses transparently, `no-store` is sometimes the only solution. You can specify `no-store` using the `Cache-Control: no-store` response header directive from your origin.
For dynamically generated content, it may be essential to use `no-store`. However, Akamai also provides ESI (Edge Side Includes), which allows dynamic content to be served at the edge, close to the end user.
Bypass-cache
`Bypass-cache` is similar to `no-store`, except that the request goes directly through the edge server to the origin and the origin response is returned to the requesting client without removing the underlying object from the cache if it's already there. `Bypass-cache` can be useful when the returned object is an alternative to the normally delivered content. For example, when the requested page is already in the cache but the user must first be redirected to a separate sign-in page, you can keep the requested page in the cache (for use by signed-in users) but use `bypass-cache` for this particular end-user request.
TTL and Purge Cache
To optimize your delivery and have better control over your content on Akamai servers, you can combine the caching settings with the use of Purge Cache. You would typically use TTL control in the Caching behavior as the primary content management solution. However, in some cases where content is time-sensitive but rarely changed, you may want to set long TTLs to reduce the `If-Modified-Since` requests to your origin and purge the old content after it changes. You can also use Purge Cache when an object has changed and you need strict consistency, that is, when you can't afford to serve the stale object for any period of time. For more information, see the Purge Cache documentation.
Caching redirects
If the requested resource has been moved to another location, the response might include various types of redirects. Edge servers can cache redirect responses only if they're sent from the origin server.
Redirects from origin
By default, permanent redirects (301 and 308 status response codes) returned from your origin server are cached the same as other cacheable origin responses. Edge servers then serve these redirects from cache to clients who request the same content.
Temporary redirects (302 and 307 status response codes) returned from your origin server are not cached by default. The exception is Adaptive Media Delivery, Download Delivery, and Object Delivery, where 302 redirects are cached by default. You can use the Cache HTTP Temporary Redirects behavior to cache temporary redirects in the same way as permanent redirects.
Redirects from Akamai
All redirects returned directly from Akamai, whether through the Edge Redirector Cloudlet or through behaviors such as Redirect or Redirect Plus, are not cached. Edge servers evaluate whether to return a redirect response to the client separately for every request.