Logs (beta)
Logs enables you to capture log data across multiple Akamai Cloud services and deliver it to the destination of your choice. Logs can help you to improve operational efficiency, enhance security, ensure compliance, and reduce the overhead of managing your Akamai Cloud services.
For the beta release, the focus is on audit logs: user and system events are collected, structured into JSON format, and delivered to the configured destination. Audit logs provide a complete and reliable history of changes and other events—an essential foundation for investigations, compliance, and operational insight.
Key concepts and terms
The following concepts apply to log delivery:
- Stream. A flow of logs from Akamai Cloud services to a configured destination. Streams bundle request/response events and deliver them in batches.
- Destination. The target where audit log files are delivered.
- Service. An Akamai Cloud service for which data is collected and logs are delivered. Logs currently supports the following services:
- Identity and Access Management (closed beta)
- Linodes
- Object Storage
- Cloud Firewalls
- VPC
- Linode Kubernetes Engine (LKE)
The following concepts apply to audit logs specifically:
- Audit Log. A time-stamped JSON record of events.
- Type. The type of stream, which reflects the kind of content being delivered: Linode API operations and login audit events (audit_logs) or Kubernetes API audit logs (lke_audit_logs).
Authentication and access
The following are required to create and manage streams and destinations:
- Full account access
- Authentication with a valid Personal Access Token (PAT)
Your access is limited to only the streams and destinations associated with your account.
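For example, you can verify that your PAT authenticates correctly before configuring anything by calling any endpoint you're authorized for, such as the account endpoint:
# Sanity check: confirm the Personal Access Token works.
export LINODE_TOKEN="<your PAT>"
curl -s -H "Authorization: Bearer $LINODE_TOKEN" \
  https://api.linode.com/v4/account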
Streams
A stream is a flow of logs from the platform to a configured destination. During closed beta, use the Linode API to configure and manage streams. See the beta release note for the list of relevant operations, and this workflow to learn how to set up a stream using the Linode API.
Stream attributes
When creating a stream, you’ll specify the following (a request sketch follows the list):
- Label. Every stream needs a unique name. If the name already exists, stream creation fails.
- Type. There are currently two types of audit logs:
  - Linode API operation and login audit event logs (audit_logs)
  - Kubernetes API audit logs (lke_audit_logs)
- Status. Streams are active (default), inactive, or provisioning. You can activate or deactivate a stream at any time by changing its status.
  - Inactive: Your stream configuration is saved, but no data is produced.
  - Active: Data collection begins immediately, but it may take up to 45 minutes for the first audit events to be delivered to your destination.
  - Provisioning: The stream configuration is being set up. Logs are not yet being delivered.
- Additional details. Additional details may be required, depending on the type of stream. For example, to generate Kubernetes API audit logs, you’ll need to provide cluster IDs for the clusters you wish to include and specify whether new clusters should be automatically added to the stream.
- Destination. The target location where the streamed data is delivered. See Destinations to learn more about configuring and managing destinations.
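To make these attributes concrete, here's a minimal request sketch. The endpoint path and exact field names here are illustrative assumptions, not the documented schema; see the beta release note for the actual operations.
# Illustrative sketch only: the endpoint path and field names are assumptions.
# Consult the beta release note for the actual operations and schema.
curl -s -X POST \
  -H "Authorization: Bearer $LINODE_TOKEN" \
  -H "Content-Type: application/json" \
  https://api.linode.com/v4beta/logs/streams \
  -d '{
    "label": "my-audit-stream",
    "type": "audit_logs",
    "status": "active",
    "destinations": [123]
  }'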
Stream versions
Each time you create or update a stream—for example, when you modify settings or change the destination—a new version of the stream configuration is generated. Previous versions of the stream configuration are retained. You can get a stream's history to review past configurations and track how the configuration has changed over time.
Deleted streams
When a stream configuration is deleted, audit events continue to be generated but will stop being delivered to the destination. Settings and credentials aren't retained when a stream configuration is deleted.
Destinations
Audit logs are delivered to your destination, which is the sink for the stream data. During closed beta, use the Linode API to configure and manage destinations. See the beta release note for the list of relevant operations, and this workflow to learn the basics of configuring a destination using the Linode API.
Destination attributes
When creating a destination, you’ll specify the following (a request sketch follows the list):
- Type. Currently, Object Storage (linode_object_storage) is supported.
- Label. The name of the new destination.
- Host. The name of the Object Storage host.
- Bucket. The name of the Object Storage bucket that will be your sink.
- Path. The path prefix used for uploaded objects.
- Access key ID. The unique identifier used with the access key secret to access Object Storage.
- Access key secret. The confidential security credential used with the access key ID to access Object Storage.
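To tie these attributes together, here's a minimal request sketch. As above, the endpoint path and exact field layout are illustrative assumptions; see the beta release note for the actual operations.
# Illustrative sketch only: endpoint path and field layout are assumptions.
curl -s -X POST \
  -H "Authorization: Bearer $LINODE_TOKEN" \
  -H "Content-Type: application/json" \
  https://api.linode.com/v4beta/logs/destinations \
  -d '{
    "label": "audit-log-sink",
    "type": "linode_object_storage",
    "host": "gb-lon-1.linodeobjects.com",
    "bucket": "my-audit-logs-bucket",
    "path": "/audit_logs",
    "access_key_id": "<access key ID>",
    "access_key_secret": "<access key secret>"
  }'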
Destination versions
Each time you update a destination—for example, by modifying settings—a new version is generated. Previous versions are retained. The destination with the highest version number is the active version. Get a destination’s history to review past configurations and track how the configuration has changed over time.
Best practices for the Object Storage logs destination
To ensure audit integrity, authenticated users shouldn’t be able to:
- prevent audit logs from being written
- alter or forge log content
- delete logs before retention criteria are met
Users who have permission to delete Object Storage buckets, create or delete access keys, or modify Object Storage permissions can use that access to interfere with your audit logs.
To guard against that:
1. Create a dedicated account that’s used exclusively for audit log storage.
2. Limit account access to security administrators.
3. Create an Object Storage bucket in that account with Object Lock enabled.
4. Create a read-write access key for that bucket.
5. Apply a LifeCycle policy to the logs in your bucket.
6. In your main account, configure log delivery using the bucket created in step 3 and the key created in step 4.
7. Create read-only access keys for the bucket for any person or system that needs read access to the audit logs.
8. Regularly rotate all access keys associated with the Object Storage bucket.
This section elaborates on the steps above, as well as best practices that will help you to protect your audit logs.
Create the audit logs bucket with Object Lock enabled
To prevent log tampering, configure Object Lock on the Object Storage bucket where the audit logs will be stored. Object Lock must be enabled when the bucket is created, as in the following example:
aws s3api create-bucket \
  --bucket my-audit-logs-bucket \
  --object-lock-enabled-for-bucket \
  --endpoint=https://gb-lon-1.linodeobjects.com
Mode
COMPLIANCE mode is recommended for strict, tamper-proof retention. Compliance mode means that objects in the bucket can’t be deleted by any user or Akamai until after the compliance period for those objects is over.
Billing
You'll be billed for audit logs stored in a bucket in COMPLIANCE mode until the data is deleted or the account is closed. Data can only be deleted after the original compliance period for that data ends.
Use GOVERNANCE mode if you need to retain the ability to delete logs.
Retention period
Select an appropriate retention period, for example 90 or 365 days.
Example
This example shows Object Lock enabled in COMPLIANCE mode with a retention period of 365 days, which is suitable for most compliance programs.
aws s3api put-object-lock-configuration \
  --endpoint=https://gb-lon-1.linodeobjects.com \
  --bucket my-audit-logs-bucket \
  --object-lock-configuration '{ "ObjectLockEnabled": "Enabled", "Rule": { "DefaultRetention": { "Mode": "COMPLIANCE", "Days": 365 }}}'
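You can confirm the configuration took effect with the corresponding get operation:
aws s3api get-object-lock-configuration \
  --bucket my-audit-logs-bucket \
  --endpoint=https://gb-lon-1.linodeobjects.com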
Apply a LifeCycle policy to the logs in your bucket
Apply a LifeCycle policy to your bucket to ensure that audit logs are retained according to your organization's data retention policies.
The LifeCycle policy should be set to expire logs after the end of the Object Lock retention period.
LifeCycle policy example
This example applies a LifeCycle policy that expires audit logs in the bucket after 365 days.
<LifecycleConfiguration>
  <Rule>
    <ID>audit-log-data-retention</ID>
    <Filter><Prefix></Prefix></Filter>
    <Status>Enabled</Status>
    <Expiration>
      <Days>365</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
To apply this policy using s3cmd:
s3cmd setlifecycle lifecycle_policy.xml s3://my-audit-logs-bucket
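To confirm the policy is in place:
s3cmd getlifecycle s3://my-audit-logs-bucket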
Manage access keys
Under the shared responsibility model, access key management is a customer responsibility. We recommend rotating the Object Storage access key every 90 days.
Suggested rotation flow:
1. Log into the account containing the Object Storage bucket used for audit logs.
2. Create a new read-write access key for the bucket.
3. Log into your main account and update the destination configuration to use the new access key.
4. Wait for the configuration to propagate, which can take an hour or more.
5. Log back into the account containing the Object Storage bucket and revoke (delete) the old key.
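As a sketch of step 2, you can create a bucket-scoped read-write key with the Linode API's Object Storage keys operation. The label, cluster, and bucket values below are placeholders:
# Create a read-write access key limited to the audit logs bucket.
# Label, cluster, and bucket values are placeholders.
curl -s -X POST \
  -H "Authorization: Bearer $LINODE_TOKEN" \
  -H "Content-Type: application/json" \
  https://api.linode.com/v4/object-storage/keys \
  -d '{
    "label": "audit-logs-rw-2025-10",
    "bucket_access": [
      {
        "cluster": "gb-lon-1",
        "bucket_name": "my-audit-logs-bucket",
        "permissions": "read_write"
      }
    ]
  }'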
Log files
Logs capture a record of what your Akamai Cloud services are doing over time. The structured event data they provide makes it possible to reconstruct behavior, trace issues, and understand usage patterns. Logs provide contextual information that helps to answer the "who", "what", "when", and "what was the outcome" questions when investigating performance, reliability, or security concerns.
For the beta release, audit logs are supported.
Audit logs
Audit logs are time-stamped records of all Linode APIv4 operations and login events for supported services. Akamai collects and stores audit logs for 90 days, regardless of whether or not a stream is configured. Once a log entry is written, it can’t be altered, deleted, or overwritten.
The audit_logs stream type collects login audit logs and configuration audit logs. The lke_audit_logs stream type collects Kubernetes API audit logs. Support for additional log types is planned for future releases.
Audit logs are delivered to your Object Storage bucket in batches. Each batch is stored as an object. Object naming follows the pattern defined in the stream configuration. If a path isn't specified, the default pattern is:
- Login audit logs: /audit_logs/com.akamai.audit.login/{account_id}/{Y}/{m}/{d}/akamai_log-{random_string}-{timestamp}-{random_string}.gz
  Example: /audit_logs/com.akamai.audit.login/3242234543/2025/08/27/akamai_log-000166-1756015362-319597.gz
- Configuration audit logs: /audit_logs/com.akamai.audit.config/{account_id}/{Y}/{m}/{d}/akamai_log-{random_string}-{timestamp}-{random_string}.gz
  Example: /audit_logs/com.akamai.audit.config/3242234543/2025/08/27/akamai_log-000166-1756015362-319597.gz
- Kubernetes API audit logs: /lke_audit_logs/com.akamai.audit.k8s/{account_id}/{partition}/{Y}/{m}/{d}/akamai_log-{random_string}-{timestamp}-{random_string}.gz
  Example: /lke_audit_logs/com.akamai.audit.k8s/3242234543/234/2025/08/27/akamai_log-000166-1756015362-319597.gz
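Once logs are arriving, you can list delivered objects with any S3-compatible client, for example:
aws s3 ls s3://my-audit-logs-bucket/audit_logs/ \
  --recursive \
  --endpoint=https://gb-lon-1.linodeobjects.com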
Within an object, you’ll find one or more log lines. The format of the log lines depends on the type of log.
Login audit logs
Login audit logs capture successful login event details, such as the timestamp, account, and user metadata. They are delivered by the audit_logs stream type and follow the open standard CloudEvents v1.0.2 format, with the entire object stored in JSON Lines (JSONL) format.
Login audit log example
This example has been formatted for clarity.
{
  "specversion": "1.0",
  "id": "99f77d13-b398-49f4-b747-24c457609c75",
  "source": "/service/login",
  "type": "com.akamai.audit.login",
  "time": "2025-01-28T15:33:11.421Z",
  "account": "33334444-2222-EEEE-0123456789ABCDEF",
  "data": {
    "username": "testuser",
    "sourceip": "12.34.56.78",
    "permissionlevel": "restricted",
    "statuscode": "succeeded",
    "statusmessage": "Successful login",
    "type": "direct",
    "email": "testuser@domain.com",
    "useragent": "Mozilla/5.0 (..."
  }
}
Critical fields:
- id: Uniquely identifies each log line. In rare cases, a single log entry is delivered multiple times. The id field can be used to detect duplicates.
- type: For login audit logs, the type is always "com.akamai.audit.login".
- time: The UTC time when the login occurred.
- account: The external UUID associated with the account where the login occurred. If you have multiple accounts, you can find the external UUID for each using the Linode API.
- data: Information about the client and user that logged in.
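Because every log line carries a unique id, duplicate deliveries are easy to detect after download. For example, using zcat and jq against a batch of downloaded objects:
# Print any id values that appear more than once across downloaded log objects.
zcat akamai_log-*.gz | jq -r '.id' | sort | uniq -d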
Configuration audit logs
Configuration audit logs capture a record of every operation, such as creating, modifying, or deleting resources. Logs contain details like the actor, the event code, the path, and so on. Configuration audit logs also capture all management operations performed through the API or other user interfaces.
Configuration audit logs are delivered by the audit_logs stream type and follow the open standard CloudEvents v1.0.2 format, with the entire object stored in JSON Lines (JSONL) format.
Configuration audit log example
{
  "specversion": "1.0",
  "id": "1b9fd401-ad35-4dd8-88da-802e52d4503a",
  "source": "/service/linodes",
  "type": "com.akamai.audit.config",
  "time": "2025-01-28T15:33:11.123Z",
  "account": "33334444-2222-EEEE-0123456789ABCDEF",
  "data": {
    "actor": {
      "type": "user",
      "username": "testuser",
      "email": "testuser@domain.com",
      "sourceip": "12.34.56.78",
      "useragent": "Mozilla/5.0 (..."
    },
    "eventcode": "post-boot-linode-instance",
    "path": "api.linode.com/v4/linode/instances/123/boot",
    "request": {},
    "responsecode": 200,
    "response": {},
    "requestid": "9097a7cd-86ed-4b7e-a607-613cb6693c41"
  }
}
Critical fields:
- id: Uniquely identifies each log line. In rare cases, a single log entry is delivered multiple times. The id field can be used to detect duplicates.
- type: For configuration audit logs, the type is always "com.akamai.audit.config".
- time: The UTC time when the configuration change occurred.
- account: The external UUID associated with the account where the operation occurred. If you have multiple accounts, you can find the external UUID for each using the Linode API.
- data: Information about the caller, what they attempted to change, and the result.
- data.actor: The identity of the caller.
- data.eventcode: The API event type (operation identifier). You can append the event code to the URL for the Linode API reference to learn more about the operation, for example, https://techdocs.akamai.com/linode-api/reference/post-boot-linode-instance.
- data.path: The path to the resource that was changed.
- data.request: The request parameters that the user submitted, with sensitive data redacted.
- data.responsecode: The success or failure status of the request.
- data.response: The API response, with sensitive data redacted.
- data.requestid: A unique identifier for the request that initiated a configuration change. If a single request produces multiple log entries, you can use this identifier to correlate them.
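For example, to gather every log line produced by a single request (using the request ID from the example above), you can filter on data.requestid with jq:
# Select all log lines that share one request ID.
zcat akamai_log-*.gz | \
  jq -c 'select(.data.requestid == "9097a7cd-86ed-4b7e-a607-613cb6693c41")'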
Kubernetes API audit logs
Kubernetes API audit logs can be enabled for LKE Enterprise clusters.
To generate Kubernetes API audit logs, you also need to provide the following information:
- The cluster IDs of the clusters to include
- Whether or not to automatically add new clusters to the stream
These logs are delivered by the lke_audit_logs stream type in native Kubernetes audit format, which may vary depending on the cluster version. To learn more about the contents of these logs, see the Kubernetes documentation. LKE delivers log contents unaltered, with no redaction or other transformations. This ensures compatibility with standard Kubernetes tooling.
If you’re running multiple LKE clusters, you can differentiate logs by configuring each LKE cluster to send logs to a different destination. You can use the same Object Storage bucket with different paths for multiple destinations.
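For example, two destinations could share a bucket while keeping each cluster's logs separate. This is a sketch using the destination attributes described above; the path values are illustrative:
{ "bucket": "my-audit-logs-bucket", "path": "/lke/prod-cluster" }
{ "bucket": "my-audit-logs-bucket", "path": "/lke/staging-cluster" }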
Under the shared responsibility model, it’s the customer's responsibility to ensure no sensitive data is included in custom resource definitions (CRDs) or other Kubernetes properties. It’s also the customer’s responsibility to limit audit log visibility to only those users who are authorized to see them.
Log redaction and truncation
Audit logs intentionally capture the details needed to reconstruct the changes that users made. As part of this, the properties specified in API request and response bodies are logged. To ensure sensitive information is not logged, logs only include non-sensitive, valid request and response properties.
To preserve security while maintaining an actionable audit trail:
- Sensitive properties such as tokens, passwords, keys, personally identifiable information, and freeform string data are replaced with "[REDACTED]". Resource labels, tags, usernames, and email addresses are not. Avoid putting sensitive information into resource labels or tags, and limit audit log visibility to only those users who are authorized to see them.
- Invalid request bodies and invalid properties are omitted. Invalid properties include malformed JSON, incorrect types (for example, a string where an integer is expected), and fields that aren’t part of an API’s valid input.
- Log line size is limited to a maximum of 64 KB. If this limit is exceeded, the log may be truncated. Log truncation is performed carefully, to maintain valid JSON and retain as much detail as possible (see the sketch after this list):
  - Very long strings may be truncated with a trailing "...".
  - Response bodies may be elided to prioritize request properties. If so, the log will include "responselided": true and the response property will be null.
  - Very long arrays may be trimmed, keeping only a subset of the array items. An auxiliary <arrayname>__tl field records the original array length. For example, items__tl: 30 would indicate that the items array was truncated and its original length was 30.
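For illustration, a truncated configuration log line might carry markers like these. This is a simplified sketch; the exact placement of the markers follows the log's own schema:
{
  "data": {
    "request": { "items": ["first", "second", "third"], "items__tl": 30 },
    "responselided": true,
    "response": null
  }
}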
Shared responsibility model
Audit logs let you see exactly what changes were made in your account, by whom, and what resources were affected—an essential feature for compliance and traceability. At the same time, audit logs can expose sensitive system details if misused. Protecting this information is a shared responsibility between you and Akamai.
This section outlines responsibilities under the shared responsibility model and indicates whether they belong to the customer or Akamai. This list is not exhaustive.
Akamai is responsible for:
- Enforcing authentication and authorization so that every action is tied to the correct user.
- Preventing users from hiding malicious actions by ensuring all changes generate audit logs.
- Excluding sensitive, non-audit information from logs.
- Protecting audit log data in transit and at rest, from creation through delivery to your destination.
- Providing controls to help you secure audit logs and other data stored in Object Storage.
- Managing all aspects of audit log generation, transport, and storage, up to delivery in your Object Storage bucket.
The customer is responsible for:
- Protecting user identities and credentials, for example, by not sharing passwords.
- Limiting full account access privileges to only trusted administrators.
- Restricting access to the audit logs within your Object Storage bucket and any systems where you replicate them.
- Applying recommended security controls to your Object Storage, including regular key rotation for all access keys.
- Avoiding the use of sensitive information in resource names, labels, tags, usernames, and user email addresses.
- Avoiding the use of any sensitive information in Kubernetes CRDs or other audited Kubernetes properties.
- Protecting audit logs after they’ve been delivered to your Object Storage bucket.
- Ensuring Object Storage limits aren’t exceeded, Object Storage remains enabled in your account, and all related bills are paid.