Responsible use of AI in App & API Protector and related solutions

Akamai’s top-rated security solutions have protected your applications and web assets for years. We build our innovative threat response systems with prudence and thoughtfulness—a logical extension of our mission to safeguard your online presence and data. Our use of artificial intelligence (AI) is limited, careful, and shared openly with you.

How our web application security products use AI

To improve protections, Akamai employs AI models for very specific and legitimate purposes, to:

  • detect attacks.
  • enhance the attack detection and analytical capabilities of our solutions.
  • provide you, our customer, with insights on your security posture and attacks.
  • recommend improvements to your security posture.

When AI technology is core to a product (required in order for the product to run), as it is in the WAF protection engine, you consent to AI use as part of your product purchase.

When AI is part of an optional feature created to speed your work, we tell you, and offer the option to use it or not. These features are preset to off and don’t directly affect protections. You can grant consent to use each feature and turn it on.

Models

Akamai uses models that are either:

  1. Developed in-house by Akamai. These are discriminative and generative AI small language models. As part of regular software development and enhancement, we improve these models on an ongoing basis without separate notice, and we notify you of improvements and updates to the related use cases; or
  2. Developed outside of Akamai as commercial or open-source large language models. When we update to a newer model version as part of software development, we update the relevant documentation and publish a notification. We likewise issue notifications for improvements and updates to the use case.

In both scenarios (1) and (2), the model artifacts are not shared or deployed outside of Akamai solutions. You’ll find more detailed information on the models used for specific features in product documentation and consent agreements.

Data protection

Akamai is committed to proper data handling, and that includes areas involving AI. Our solutions operate within the secure framework of our control center platform and adhere to all necessary security and privacy standards to ensure that data remains protected.

Your data is not used to train AI

We don’t use your proprietary content or other customer data to train AI solutions. Akamai does use its Akamai Network data and Threat Data (as those terms are defined in the Terms of Use), generated and logged in the course of Akamai’s delivery and protection of your web properties, to provide, maintain, develop, and improve our solutions. For example, in the course of protecting your assets, we evaluate web requests. When an attack occurs, we collect and process the attack-related security event data, which may be used to improve the AI-based attack detections. When you provide questions or prompts to an AI tool we offer, we may use that data, in a manner that does not identify you as the customer, to improve, monitor, and operate the AI tool.

Data handling

All data practices comply with PCI, ISO, and SOC 2 industry standards, with compliance ensured through data minimization, purpose limitation, automated retention policies, governance by data processing agreements where applicable, and transparent documentation. Here’s a rundown of data handling associated with our AI solutions.

  • Data collection. In the course of delivering and protecting your web assets, our systems gather data about web requests and responses that is pertinent to your security. This includes HTTP transaction metadata such as the query string and request and response headers, TLS and HTTP/2 protocol fingerprints, IP addresses, and snippets of content data containing the detected attack. We also use synthetic data to mimic real-world cases as part of our risk mitigation practices.
  • Data use. Request and response data we use for training, validation, and testing is scrubbed and redacted; for example, hostnames and other identifiers of Akamai customers and end users are removed, and sensitive data such as IP addresses is obfuscated. Data is also balanced to avoid bias.
  • Data retention. Scrubbed web request and response data lives in our centrally governed data storage for 31 days. Derived data lives in that storage for up to 6 months to aid further security investigation and analysis. At the end of those periods, the system purges the data automatically and generates audit logs. You can restrict certain data elements from being logged in Akamai’s system.
  • Data segregation, limits, and encryption. No customer can query another customer’s data. Secure access is determined by user roles and authentication features of Akamai’s control center. AI models cannot access or query data beyond what we explicitly allow for the specific use case. Data in transit and at rest is encrypted.
  • Security coverage. Akamai uses secure coding standards. We run regular vulnerability assessments, code reviews, and DAST and SAST security tests to ensure system robustness. We monitor systems that process traffic and customer data in real time for abnormal behavior and other anomalies. Administrators act immediately on threats and issues. Any generative AI exposed to our customers is protected by our own Firewall for AI product. We continually enhance, test, and validate this service to ensure it’s effective against attacks and data manipulation attempts.
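As a concrete illustration of the scrubbing and redaction step described under “Data use” above, the sketch below removes customer-identifying fields and replaces IP addresses with salted one-way tokens. This is a minimal, hypothetical Python sketch with invented field names; it is not Akamai’s actual pipeline.

```python
import hashlib
import re

IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def scrub_record(record: dict, salt: str) -> dict:
    """Drop customer identifiers and obfuscate IPs in a security-event record.
    Field names here are illustrative, not a real schema."""
    # Remove fields that identify a specific customer or host outright.
    cleaned = {k: v for k, v in record.items()
               if k not in ("hostname", "customer_id")}

    # Replace every IPv4 address (whether a dedicated field or embedded in a
    # payload snippet) with a salted one-way token: records stay correlatable
    # within the dataset, but the original IP is not recoverable.
    def token(ip: str) -> str:
        return "ip-" + hashlib.sha256((salt + ip).encode()).hexdigest()[:12]

    for key, value in cleaned.items():
        if isinstance(value, str):
            cleaned[key] = IPV4.sub(lambda m: token(m.group(0)), value)
    return cleaned

event = {
    "hostname": "shop.example.com",
    "customer_id": "acct-123",
    "client_ip": "203.0.113.7",
    "attack_snippet": "GET /?q=' OR 1=1 -- from 203.0.113.7",
}
clean = scrub_record(event, salt="2024-rotation-1")
```

Because the token is deterministic per salt, the same IP maps to the same token across fields, which preserves analytical value while removing the identifier itself.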

Product-specific model details

Following are model use and data-handling details for specific features and products.

L7 DDoS Protection

L7 DDoS Protection uses discriminative, Akamai-developed AI models for core features. The models are trained on PII-obfuscated attack records and traffic logs. The model output is used internally by Akamai products and systems only for attack detection purposes, doesn't contain customer data, is not displayed to any customers, and is not shared outside Akamai.

App & API Protector - Custom Rule Builder AI Assistant

Custom Rule Builder AI Assistant is an optional product feature using the commercial Llama 3.1-70B model developed by Meta Platforms, Inc. The model is not trained on customer data. The model output is used by customer users to ease the creation of custom security rules, is not displayed to any other customers, and is not shared outside Akamai.

App & API Protector - WAF

WAF uses discriminative Akamai-developed AI models for core features. The models are trained on PII-obfuscated attack records and traffic logs. The model output is used by Akamai to enhance the recognition of attack patterns. The model output is displayed to authorized customer users in the form of recommendations advising that they adjust their security policies to better protect against attacks. The recommendations do not contain customer data, but rather customer security-policy-specific suggestions.

App & API Protector - AI-powered detections

AAP AI detections is an optional product feature using Akamai-developed generative AI Small Language Models to enhance malicious request detection. The models are trained on PII-obfuscated attack records and traffic logs. They use only limited fields of attack records and traffic logs. The model output is used by Akamai systems and employees only, doesn't contain customer data, is not displayed to customers, and is not shared outside Akamai.

App & API Protector - Client Reputation

Client Reputation uses discriminative Akamai-developed AI models for core features to determine the reputation of a client IP address based on its behavior across the Akamai platform. The models are trained on PII-obfuscated attack records and traffic logs. The model output is used internally by Akamai systems and employees only, doesn't contain customer data, is not displayed to customers, and is not shared outside Akamai.
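To make “discriminative model” concrete, here is a toy logistic scorer over hypothetical per-IP behavioral features. The feature names, weights, and threshold are invented purely for illustration; Akamai’s actual Client Reputation features and models are not public and are not shown here.

```python
import math

# Hypothetical learned weights over invented behavioral features.
WEIGHTS = {
    "waf_triggers_per_1k": 2.1,   # rate of WAF rule triggers per 1k requests
    "distinct_hosts_hit": 0.8,    # breadth of targets across the platform
    "request_rate_zscore": 1.3,   # burstiness relative to peer baseline
}
BIAS = -4.0

def reputation_score(features: dict) -> float:
    """Logistic (sigmoid) score in [0, 1]: higher = more likely malicious."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

quiet = reputation_score({"waf_triggers_per_1k": 0.0,
                          "distinct_hosts_hit": 1.0,
                          "request_rate_zscore": 0.0})
noisy = reputation_score({"waf_triggers_per_1k": 3.0,
                          "distinct_hosts_hit": 4.0,
                          "request_rate_zscore": 2.5})
```

The point of the sketch is only the shape of the approach: a discriminative model maps observed client behavior to a score, and low-activity clients score near 0 while clients triggering many detections score near 1.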

App & API Protector - Bot Visibility and Mitigation (bot protections included with App & API Protector)

These bot detections use discriminative Akamai-developed AI models for core features. The models are trained on PII-obfuscated network data including elements pertaining to HTTP requests and client browser. The model output is a signature indicating that a request is made by a bot or a human. The signature is used by Akamai systems only, doesn't contain customer data, is not displayed to customers, and is not shared outside Akamai.

App & API Protector Hybrid

This WAF uses discriminative Akamai-developed AI models for core features. The models are trained on PII-obfuscated attack records and traffic logs. The model output is used by Akamai to continually enhance the recognition of attack patterns. The model output is displayed to authorized customer users in the form of recommendations advising that they adjust their security policies to better protect against attacks. The recommendations do not contain customer data, but rather customer security-policy-specific suggestions.

Firewall for AI

This product uses Akamai-developed generative and discriminative AI models for core features to detect attacks and identify risks to customers' LLM endpoints. The models are not trained on customer data. The model output is used by Akamai systems and employees only, doesn't contain customer data, is not displayed to customers, and is not shared outside Akamai. Firewall for AI inspects transactional HTTP data, including request and response payloads, as well as streaming content. This data is temporarily retained on Akamai’s systems for buffering purposes. During processing, request and response payloads are handled in the clear as needed to analyze malicious activity in them. The full, clear data is erased from Akamai systems immediately after inspection completes. Data identified as an attack, and/or a small sample of the benign data, is stored after PII obfuscation for a longer period, for visibility and research purposes (see below).

Data handling

Only when an authorized customer user explicitly turns this protection on for a customer endpoint does Firewall for AI do the following:

  • Collects and processes unencrypted request and response bodies (payloads) for inference and attack detection.
  • Logs the request, response, and websockets content identified as attacks, after PII obfuscation, to provide customers with better visibility into AI decisions. These logs are accessible to authorized customer users only, through analytics applications and dashboards. This log data is retained for 30 days.

    Note: Users who want stricter control over request-body data at rest can turn off data logging. In this case, the specific payload or payload section that caused the rule to trigger is not retained or displayed.

  • May use the PII-obfuscated payload and content data internally (by Akamai systems and employees only) to maintain, develop, and improve the Firewall for AI models. A sample of the data may be examined by our researchers to verify and improve the accuracy of the model, research new attacks not covered by current models, and generate synthetic data that is equivalent to the original sample in terms of the specific attack but retains no user-specific or customer-specific data. This derived data can be used to train the models to improve their accuracy. Any such research outcome and derived data is not displayed to any customers and is not shared outside Akamai.
    Note: The sample data is retained for up to 90 days. If the user has turned off data logging, no data is retained for these purposes.