Strict Header Parsing

This behavior specifies how Akamai servers should handle requests containing improperly formatted or invalid headers that don't comply with RFC 9110.

Why you need it

Some clients may send invalid or incorrectly formatted, non-RFC-compliant request headers. If such requests reach your origin server, a bad actor can exploit them, for example to poison your cache and cause invalid content to be returned to your end users.

Use Strict Header Parsing to tell the edge servers which requests to reject, independently of the Akamai platform's default behavior. This lets you either adopt the protection earlier than the global customer base or defer the changes to a later time, though deferring is not recommended.

Features and options

Field: Valid Mode
What it does: When enabled, Akamai servers reject requests that include non-RFC-compliant headers containing invalid characters in the header name or value. Clients receive a 400 Bad Request response. When disabled, Akamai servers allow such requests and pass the invalid headers to the origin server. In both cases, Akamai servers write a warning to their logs.

Field: Strict Mode
What it does: When enabled, Akamai servers reject requests that include non-RFC-compliant, improperly formatted headers, where the header line starts with a colon, is missing a colon, or doesn't end with CR LF. Clients receive a 400 Bad Request response. When disabled, Akamai servers allow such requests but correct the violation by removing or rewriting the header line before passing the headers to the origin server. In both cases, Akamai servers write a warning to their logs.

Note that the two modes are independent: each of them addresses different issues with request headers. As Akamai strives to be fully RFC-compliant, the best practice is to set both options to On.
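
As an illustration of the formatting violations Strict Mode targets, here is a minimal Python sketch of the three framing checks described above. It is only an illustration of the rules as stated, not Akamai's implementation.

# A minimal sketch of the Strict Mode framing rules described above:
# a header line must end with CR LF, must not start with a colon, and
# must contain a colon separating the name from the value.
def violates_strict_mode(raw_header_line: bytes) -> bool:
    """Return True if the raw header line breaks one of the framing rules."""
    if not raw_header_line.endswith(b"\r\n"):   # line must end with CR LF
        return True
    line = raw_header_line[:-2]
    if line.startswith(b":"):                   # line must not start with a colon
        return True
    if b":" not in line:                        # line must contain a colon
        return True
    return False

# Examples of lines Strict Mode would flag:
assert violates_strict_mode(b": no-name\r\n")            # starts with a colon
assert violates_strict_mode(b"X-Broken no colon\r\n")    # missing the colon
assert violates_strict_mode(b"X-Broken: bad-ending\n")   # doesn't end with CR LF
assert not violates_strict_mode(b"Accept: text/html\r\n")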

Technical details

Valid Mode processing rejects a request if either of these header problems is detected:

  • A request header name includes any characters outside of this list:
abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!#$%&'*+-.^_`|~
  • A request header value includes any characters outside of this list:
abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!#$%&'*+-.^_`|~"(),/:;<=>?@[]{}

and that are also outside of this range:

0x80-FF

📘 Extended ASCII characters

For backwards compatibility with certain clients, Akamai allows extended ASCII characters in the range 0x80-FF in request header values for HTTP/1.1 requests, even though these values are considered obsolete and not recommended by RFC 9110. The request won't be rejected if the request header values include these characters, regardless of the options you select in this behavior.
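
As an illustration of the Valid Mode character rules, here is a minimal Python sketch that applies the two lists above. It isn't Akamai's implementation: it additionally permits SP and HTAB inside values (which RFC 9110 allows but the lists above don't spell out), and it tolerates bytes in the 0x80-FF range in values, per the note on extended ASCII characters.

import string

# Characters allowed in header names and values, per the lists above.
NAME_CHARS = set(string.ascii_letters + string.digits + "!#$%&'*+-.^_`|~")
VALUE_CHARS = set(string.ascii_letters + string.digits
                  + "!#$%&'*+-.^_`|~\"(),/:;<=>?@[]{}") | {" ", "\t"}  # SP and HTAB added

def name_is_valid(name: str) -> bool:
    return all(ch in NAME_CHARS for ch in name)

def value_is_valid(value: bytes) -> bool:
    # Bytes in the range 0x80-FF are tolerated for HTTP/1.1 requests,
    # matching the note on extended ASCII characters above.
    return all(chr(b) in VALUE_CHARS or b >= 0x80 for b in value)

assert name_is_valid("X-Custom-Header")
assert not name_is_valid("X-Bad Header")        # space in a header name
assert value_is_valid(b"text/html, text/plain")
assert value_is_valid(b"h\xe9llo")              # 0x80-FF byte tolerated
assert not value_is_valid(b"line1\r\nline2")    # control characters rejected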

Implementation

Enabling both options ensures that Akamai servers reject requests with invalid headers and don't forward them to your origin. In such cases, the end user receives a 400 Bad Request HTTP response code.
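
To check the effect on your own property, one rough approach is to open a connection yourself, send a deliberately malformed header, and look at the status line that comes back. The Python sketch below does this over TLS; www.example.com is a placeholder for a hostname served by your property, and the response you actually see depends on how that property is configured.

import socket
import ssl

HOST = "www.example.com"  # placeholder: use a hostname served by your property

ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as tcp:
    with ctx.wrap_socket(tcp, server_hostname=HOST) as tls:
        request = (
            b"GET / HTTP/1.1\r\n"
            b"Host: " + HOST.encode() + b"\r\n"
            b"X-Bad Header: space in the name\r\n"  # invalid header name
            b"Connection: close\r\n"
            b"\r\n"
        )
        tls.sendall(request)
        status_line = tls.recv(4096).split(b"\r\n", 1)[0]
        print(status_line.decode("latin-1"))  # expect a 400 when Valid Mode is On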

If you don’t want to block traffic from a known client that sends invalid request headers:

  1. Add the Strict Header Parsing behavior to the Default Rule with all options set to On. This ensures that, by default, Akamai servers reject requests with invalid request headers.
  2. Add a new rule that matches on the attributes identifying traffic from that client (such as User-Agent, hostname, or path). Add the Strict Header Parsing behavior to this rule with all options set to Off. This way, only those specific requests are allowed through to your origin server (a sketch of this rule layout follows below).
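
For orientation only, here is a sketch of that two-rule layout written as Python dicts in the shape of a Property Manager rule tree. The behavior and option names (strictHeaderParsingMode, validMode, strictMode) and the userAgent criterion shown here are assumptions based on typical Property Manager naming; verify them against your own property's rule format before using them.

# Step 1: behavior placed in the Default Rule, rejecting invalid headers by default.
default_rule_behavior = {
    "name": "strictHeaderParsingMode",   # assumed behavior name
    "options": {"validMode": True, "strictMode": True},
}

# Step 2: a narrowly scoped exception rule for the known legacy client.
legacy_client_exception_rule = {
    "name": "Allow known legacy client",
    "criteria": [{
        "name": "userAgent",             # assumed criterion; match whatever identifies the client
        "options": {"matchOperator": "IS_ONE_OF", "values": ["LegacyClient/1.0"]},
    }],
    "behaviors": [{
        "name": "strictHeaderParsingMode",
        "options": {"validMode": False, "strictMode": False},
    }],
}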

In the long term, you should update any client that sends invalid headers so that it becomes fully RFC-compliant. This is the only way to protect your origin server from the cache poisoning vulnerability.