Logs (beta)
Logs is a log delivery service that enables you to collect log data from multiple Akamai Cloud services and deliver it to the destination of your choice. It can help you to improve operational efficiency, strengthen security, and simplify the management of your Akamai Cloud services.
The current release is focused on audit logs that capture control plane operations, recording user and system events across your cloud environment. These events are collected, structured in JSON format, and delivered to the configured destination.
Audit logs provide a comprehensive and authoritative record of changes and activities, forming a critical foundation for security investigations and configuration change tracking.
Getting started
To generate a flow of logs to a storage location, you'll need to:
- Set up storage for the logs (create a bucket and set up access keys)
- Create a destination
- Create a stream
This section covers the requirements to get started and the key concepts you'll encounter along the way.
Key concepts and terms
The following concepts apply to log delivery and audit logs:
- Stream. A flow of logs from Akamai Cloud services to a configured destination. Streams bundle request/response events and deliver them in batches.
- Destination. The target where audit log files are delivered.
- Service. An Akamai Cloud service for which data is collected and logs are delivered. Logs currently supports the following services:
- Identity and Access Management (login events)
- Cloud Firewalls
- VPC
- LKE Enterprise
- Audit Log. A time-stamped JSON record of events.
- Type. The type of stream, which reflects the kind of content being delivered. Currently, audit_logs is the only available type.
Authentication and access
The following are required to create and manage streams and destinations:
- Full account access
- Authentication with a valid Personal Access Token (PAT)
You can create a PAT using the API or Cloud Manager. In Cloud Manager, navigate to the accounts menu, select API tokens, then click Create A Personal Access Token. Select Monitor and Read/Write.
Your access is limited to only the streams and destinations associated with your account.
Pricing
There is no cost for Logs during beta. Participants are limited to creating one stream of each stream type.
Destinations
Audit logs are delivered to a destination, which is the sink for the stream data. During beta, use Cloud Manager or the Linode API to configure and manage destinations. See the beta release note for the list of relevant API operations, and this workflow to learn the basics of configuring a destination using the Linode API.
Destination attributes
When creating a destination, you’ll specify:
- Type. Currently, Object Storage is supported.
- Name. The name of the new destination.
- Host. The name of the Object Storage host.
- Bucket. The name of the Object Storage bucket that will be your sink.
- Path. The path prefix used for uploaded objects.
- Access key ID. The unique identifier used with the secret access key to access Object Storage.
- Secret access key. The confidential security credential used with the access key ID to access Object Storage.
Destination versions
Each time you update a destination—for example, by modifying settings—a new version is generated. Previous versions are retained. The destination with the highest version number is the active version. Get a destination’s history to review past configurations and track how the configuration has changed over time.
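As a sketch of how a version history can be consumed, the following selects the active (highest-numbered) version from a list of version records. The record fields shown here are illustrative, not the exact API schema; consult the Linode API reference for the actual shape of the destination history response.

```python
# Sketch: selecting the active destination version from a version history.
# The record layout below is an assumption for illustration only.

def active_version(history):
    """Return the record with the highest version number (the active one)."""
    return max(history, key=lambda record: record["version"])

history = [
    {"version": 1, "bucket": "audit-logs-old", "path": "/audit_logs"},
    {"version": 2, "bucket": "my-audit-logs-bucket", "path": "/audit_logs"},
]

print(active_version(history)["bucket"])  # my-audit-logs-bucket
```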
Best practices for the Object Storage logs destination
To ensure audit integrity, authenticated users should be prevented from:
- altering or forging log content
- blocking audit logs from being written
- deleting logs before retention criteria are met
Users who have permission to delete Object Storage buckets, create or delete access keys, or modify Object Storage permissions can use that access to interfere with your audit logs.
To guard against that:
1. Create a dedicated account that’s used exclusively for audit log storage.
2. Limit account access to security administrators.
3. Create an Object Storage bucket in that account with Object Lock enabled.
4. Create a read-write access key for that bucket.
5. Apply a lifecycle policy to the logs in your bucket.
6. In your main account, configure log delivery using the bucket created in step 3 and the key created in step 4.
7. Create read-only access keys for the bucket for any person or system that needs read access to the audit logs.
8. Regularly rotate all access keys associated with the Object Storage bucket.
This section elaborates on the steps above, as well as best practices that will help you to protect your audit logs.
Create the audit logs bucket with Object Lock enabled
To prevent log tampering, configure Object Lock on the Object Storage bucket where the audit logs will be stored. Object Lock must be enabled when the bucket is created, as in the following example:
aws s3api create-bucket \
  --bucket my-audit-logs-bucket \
  --object-lock-enabled-for-bucket \
  --endpoint=https://gb-lon-1.linodeobjects.com
Mode
COMPLIANCE mode is recommended for strict, tamper-proof retention. Compliance mode means that objects in the bucket can’t be deleted by any user or Akamai until after the compliance period for those objects is over.
You'll be billed for audit logs stored in a bucket in COMPLIANCE mode until the data is deleted or the account is closed. Data can only be deleted after the original compliance period for that data ends.
Use GOVERNANCE mode if you need to retain the ability to delete logs.
Retention period
Select an appropriate retention period, for example 90 or 365 days.
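To make the effect of the retention period concrete, the following sketch computes the date before which an object can't be deleted under COMPLIANCE mode, given its upload time and the configured number of retention days (the upload date here is arbitrary):

```python
# Sketch: computing an Object Lock retain-until date for a given retention
# period. Under COMPLIANCE mode, objects can't be deleted before this moment.
from datetime import datetime, timedelta, timezone

def retain_until(uploaded_at: datetime, retention_days: int) -> datetime:
    """Earliest moment at which the object becomes deletable."""
    return uploaded_at + timedelta(days=retention_days)

uploaded = datetime(2025, 1, 1, tzinfo=timezone.utc)
print(retain_until(uploaded, 365).date())  # 2026-01-01
```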
Example
This example shows Object Lock enabled in COMPLIANCE mode with a retention period of 365 days, which is suitable for most compliance programs.
aws s3api put-object-lock-configuration \
  --endpoint=https://gb-lon-1.linodeobjects.com \
  --bucket my-audit-logs-bucket \
  --object-lock-configuration '{ "ObjectLockEnabled": "Enabled", "Rule": { "DefaultRetention": { "Mode": "COMPLIANCE", "Days": 365 }}}'
Apply a lifecycle policy to the logs in your bucket
Apply a lifecycle policy to your bucket to ensure that audit logs are retained according to your organization's data retention policies.
The lifecycle policy should be set to expire logs after the end of the Object Lock retention period.
Lifecycle policy example
This example applies a lifecycle policy that expires audit logs in the bucket after 365 days.
<LifecycleConfiguration>
<Rule>
<ID>audit-log-data-retention</ID>
<Filter><Prefix></Prefix></Filter>
<Status>Enabled</Status>
<Expiration>
<Days>365</Days>
</Expiration>
</Rule>
</LifecycleConfiguration>
To apply this policy using s3cmd:
s3cmd setlifecycle lifecycle_policy.xml s3://my-audit-logs-bucket
Manage Access Keys
Under the shared responsibility model, access key management is a customer responsibility. We recommend rotating the Object Storage access key every 90 days.
Suggested rotation flow:
- Log into the account containing the Object Storage bucket used for audit logs.
- Create a new read-write access key for the bucket.
- Log into your main account and update the destination configuration to use the new access key.
- Wait for the configuration to propagate, which can take an hour or more.
- Log back into the account containing the Object Storage bucket and revoke (delete) the old key.
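The rotation flow above can be supported by a simple due-date check. This sketch flags keys older than the recommended 90 days; the key metadata fields are illustrative, since how you record key creation dates is up to you:

```python
# Sketch: flagging access keys that are due for rotation under a 90-day
# policy. Key creation timestamps are assumed to be tracked by you.
from datetime import datetime, timedelta, timezone

ROTATION_PERIOD = timedelta(days=90)

def rotation_due(created_at: datetime, now: datetime) -> bool:
    """True if the key has reached the 90-day rotation threshold."""
    return now - created_at >= ROTATION_PERIOD

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
print(rotation_due(datetime(2025, 1, 1, tzinfo=timezone.utc), now))  # True
print(rotation_due(datetime(2025, 5, 1, tzinfo=timezone.utc), now))  # False
```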
Streams
A stream is a flow of logs from the platform to a configured destination. During beta, use Cloud Manager or the Linode API to configure and manage streams. See the beta release note for the list of relevant API operations, and this workflow to learn how to set up a stream using the Linode API.
Stream attributes
When creating a stream, you’ll specify:
- Name. Every stream needs a unique name. If the name already exists, stream creation will fail.
- Type. Currently, there is only one type of logs: audit_logs.
- Destination. The target location where the streamed data is delivered. See Destinations to learn more about configuring and managing destinations.
Stream activation
Streams are active by default when they're created. You can activate or deactivate a stream at any time by selecting Activate or Deactivate from the stream's options menu.
- Inactive: Your stream configuration is saved, but no data is produced.
- Active: Data is being collected and streamed to your destination.
- Provisioning: The stream configuration is being set up. Logs are not yet being delivered.
When a stream is activated, it may take up to 45 minutes for logs to begin to arrive at your configured destination.
Stream versions
Each time you create or update a stream—for example, when you modify settings or change the destination—a new version of the stream configuration is generated. Previous versions of the stream configuration are retained. You can get a stream's history to review past configurations and track how the configuration has changed over time.
Deleted streams
When a stream configuration is deleted, audit events continue to be generated but will stop being delivered to the destination. Settings and credentials aren't retained when a stream configuration is deleted.
Log files
Logs capture a record of what your Akamai Cloud services are doing over time. The structured event data they provide makes it possible to reconstruct behavior, trace issues, and understand usage patterns. Logs provide contextual information that helps to answer "who", "what", "when", and "what was the outcome" when investigating performance, reliability, or security concerns.
During beta, audit logs are supported.
Audit logs
Audit logs are time-stamped records of all Linode APIv4 operations and login events for supported services. Akamai collects and stores audit logs for 90 days, regardless of whether or not a stream is configured. Once a log entry is written, it can’t be altered, deleted, or overwritten.
Audit logs are delivered to your Object Storage bucket in batches. Each batch is stored as an object. Object naming follows the pattern defined in the stream configuration. If a path isn't specified, the default pattern is:
- Login audit logs: /audit_logs/com.akamai.audit/{account_id}/{Y}/{m}/{d}/akamai_log-{random_string}-{timestamp}-{random_string}-login.gz
  Example: /audit_logs/com.akamai.audit/3242234543/2025/08/27/akamai_log-000166-1756015362-319597-login.gz
- Configuration audit logs: /audit_logs/com.akamai.audit/{account_id}/{Y}/{m}/{d}/akamai_log-{random_string}-{timestamp}-{random_string}-config.gz
  Example: /audit_logs/com.akamai.audit/3242234543/2025/08/27/akamai_log-000166-1756015362-319597-config.gz
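A sketch of extracting fields from the default object naming pattern shown above. The regex assumes the default path and that the random segments are numeric, as in the examples; adjust it if your stream configuration uses a custom path prefix.

```python
# Sketch: parsing the default audit log object key into its components.
# Assumes the default path pattern; a custom path would need a custom regex.
import re

KEY_PATTERN = re.compile(
    r"/audit_logs/com\.akamai\.audit/(?P<account_id>\d+)/"
    r"(?P<year>\d{4})/(?P<month>\d{2})/(?P<day>\d{2})/"
    r"akamai_log-\d+-(?P<timestamp>\d+)-\d+-(?P<kind>login|config)\.gz"
)

key = "/audit_logs/com.akamai.audit/3242234543/2025/08/27/akamai_log-000166-1756015362-319597-login.gz"
fields = KEY_PATTERN.match(key).groupdict()
print(fields["account_id"], fields["kind"])  # 3242234543 login
```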
Within an object, you’ll find one or more log lines. The format of the log lines depends on the type of log.
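Since each delivered object is a gzip-compressed file with one JSON document per line, reading one amounts to decompressing and parsing line by line. In this sketch, a small in-memory object stands in for a file downloaded from your bucket:

```python
# Sketch: reading a delivered log object (gzip-compressed JSON Lines).
# An in-memory gzip stream stands in for a downloaded object here.
import gzip
import io
import json

raw = gzip.compress(
    b'{"id": "aaa", "type": "com.akamai.audit.login"}\n'
    b'{"id": "bbb", "type": "com.akamai.audit.config"}\n'
)

events = []
with gzip.open(io.BytesIO(raw), "rt", encoding="utf-8") as fh:
    for line in fh:
        if line.strip():
            events.append(json.loads(line))

print(len(events))  # 2
```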
Login audit logs
Login audit logs capture successful login event details, such as the timestamp, account, and user metadata. They are delivered by the audit_logs stream type and follow the open standard CloudEvents v1.0.2 format, with the entire object stored in JSON Lines (JSONL) format.
Login audit log example
This example has been formatted for clarity.
{
"specversion": "1.0",
"id": "99f77d13-b398-49f4-b747-24c457609c75",
"source": "/service/login",
"type": "com.akamai.audit.login",
"time": "2025-01-28T15:33:11.421Z",
"account": "33334444-2222-EEEE-0123456789ABCDEF",
"data": {
"username": "testuser",
"sourceip": "12.34.56.78",
"permissionlevel": "restricted",
"statuscode": "succeeded",
"statusmessage": "Successful login",
"type": "direct",
"email": "testuser@domain.com",
"useragent": "Mozilla/5.0 (..."
}
}
Critical fields:
- id: Uniquely identifies each log line. In rare cases, a single log entry is delivered multiple times. The id field can be used to detect duplicates.
- type: For login audit logs, the type is always "com.akamai.audit.login".
- time: UTC time when the login occurred.
- account: The external UUID associated with the account where the login occurred. If you have multiple accounts, you can find the external UUID for each using the Linode API.
- data: Information about the client and user that logged in.
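Because a single log entry may occasionally be delivered more than once, the id field can be used to de-duplicate. A minimal sketch:

```python
# Sketch: de-duplicating log lines by the "id" field, keeping the first
# occurrence of each id.
def dedupe(events):
    seen = set()
    unique = []
    for event in events:
        if event["id"] not in seen:
            seen.add(event["id"])
            unique.append(event)
    return unique

events = [{"id": "99f7"}, {"id": "99f7"}, {"id": "ab12"}]
print(len(dedupe(events)))  # 2
```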
Configuration audit logs
Configuration audit logs capture a record of every operation, such as creating, modifying, or deleting resources. Logs contain details like the actor, the event code, the path, and so on. Configuration audit logs also capture all management operations performed through the API or other user interfaces.
Configuration audit logs are delivered by the audit_logs stream type and follow the open standard CloudEvents v1.0.2 format, with the entire object stored in JSON Lines (JSONL) format.
Configuration audit log example
{
"specversion": "1.0",
"id": "1b9fd401-ad35-4dd8-88da-802e52d4503a",
"source": "/service/linodes",
"type": "com.akamai.audit.config",
"time": "2025-01-28T15:33:11.123Z",
"account": "33334444-2222-EEEE-0123456789ABCDEF",
"data": {
"actor": {
"type": "user",
"username": "testuser",
"email": "testuser@domain.com",
"sourceip": "12.34.56.78",
"useragent": "Mozilla/5.0 (..."
},
"eventcode": "post-boot-linode-instance",
"path": "api.linode.com/v4/linode/instances/123/boot",
"request": {},
"responsecode": 200,
"response": {},
"requestid": "9097a7cd-86ed-4b7e-a607-613cb6693c41"
}
}
Critical fields:
- id: Uniquely identifies each log line. In rare cases, a single log entry is delivered multiple times. The id field can be used to detect duplicates.
- type: For configuration audit logs, the type is always "com.akamai.audit.config".
- time: UTC time when the configuration change occurred.
- account: The external UUID associated with the account where the operation occurred. If you have multiple accounts, you can find the external UUID for each using the Linode API.
- data: Information about the caller, what they attempted to change, and the result.
- data.actor: The identity of the caller.
- data.eventcode: The API event type (operation identifier). You can append the event code to the URL for the Linode API reference to learn more about the operation, for example, https://techdocs.akamai.com/linode-api/reference/post-boot-linode-instance.
- data.path: The path to the resource that was changed.
- data.request: The request parameters that the user submitted, with sensitive data redacted.
- data.responsecode: The success or failure status of the request.
- data.response: The API response with sensitive data redacted. Returned only for successful requests.
- data.errors: Errors returned during the request. Returned only for unsuccessful requests.
- data.requestid: A unique identifier for the request that initiated a configuration change. If there are multiple logs for a single request, you can use this identifier to correlate logs that share the same request id.
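Two of these fields lend themselves to simple tooling: building the API reference URL from an event code, and grouping log lines that share a request id. A sketch of both (the sample event shapes are minimal, not full log lines):

```python
# Sketch: turning an eventcode into its Linode API reference URL, and
# grouping log lines by requestid to correlate related entries.
from collections import defaultdict

def reference_url(eventcode: str) -> str:
    """Append the event code to the Linode API reference base URL."""
    return f"https://techdocs.akamai.com/linode-api/reference/{eventcode}"

def group_by_request(events):
    """Bucket log lines by their data.requestid."""
    groups = defaultdict(list)
    for event in events:
        groups[event["data"]["requestid"]].append(event)
    return groups

events = [
    {"data": {"requestid": "r1"}},
    {"data": {"requestid": "r1"}},
    {"data": {"requestid": "r2"}},
]
groups = group_by_request(events)
print(reference_url("post-boot-linode-instance"))
print(len(groups["r1"]))  # 2
```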
Log redaction and truncation
Audit logs intentionally capture the details needed to reconstruct the changes that users made. As part of this, the properties specified in API request and response bodies are logged. To ensure sensitive information is not captured, logs include only non-sensitive, valid request and response properties.
To preserve security while maintaining an actionable audit trail:
- Sensitive properties such as tokens, passwords, keys, personally identifiable information, and freeform string data are replaced with "[REDACTED]". Resource labels, tags, usernames, and email addresses are not redacted. Avoid putting sensitive information into resource labels or tags, and limit audit log visibility to only those users who are authorized to see them.
- Invalid request bodies and invalid properties are omitted. Invalid properties include malformed JSON, incorrect types (for example, a string where an integer is expected), and fields that aren’t part of an API’s valid input.
- Log line size is limited to a maximum of 64 KB. If this limit is exceeded, the log may be truncated. Log truncation is performed carefully, to maintain valid JSON and retain as much detail as possible:
  - Very long strings may be truncated with a trailing "...".
  - Response bodies may be elided to prioritize request properties. If so, the log will include "responselided": true and the response property will be null.
  - Very long arrays may be trimmed, keeping only a subset of the array items. An auxiliary <arrayname>__tl field records the original array length. For example, items__tl: 30 would indicate that the items array was truncated and its original length was 30.
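The __tl convention described above can be checked mechanically. This sketch scans a log record for truncation markers and maps each trimmed array back to its original length:

```python
# Sketch: detecting truncated arrays in a log line via the "__tl" suffix
# convention, which records each trimmed array's original length.
def truncated_arrays(record):
    """Map each truncated array field name to its original length."""
    return {
        key[: -len("__tl")]: value
        for key, value in record.items()
        if key.endswith("__tl")
    }

record = {"items": [1, 2, 3], "items__tl": 30}
print(truncated_arrays(record))  # {'items': 30}
```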
Shared responsibility
Audit logs let you see exactly what changes were made in your account, by whom, and what resources were affected—an essential feature for traceability. At the same time, audit logs can expose sensitive system details if misused. Protecting this information is a shared responsibility between you and Akamai.
This section highlights important aspects of the shared responsibility model and indicates who is most responsible: the customer or Akamai. This list is not exhaustive.
Akamai is responsible for:
- Enforcing authentication and authorization so that every action is tied to the correct user.
- Preventing users from hiding malicious actions by ensuring all changes generate audit logs.
- Excluding sensitive, non-audit information from logs.
- Protecting audit log data in transit and at rest, from creation through delivery to your destination.
- Providing controls to help you secure audit logs and other data stored in Object Storage.
- Managing all aspects of audit log generation, transport, and storage, up to delivery in your Object Storage bucket.
The customer is responsible for:
- Protecting user identities and credentials, for example, by not sharing passwords.
- Limiting full account access privileges to only trusted administrators.
- Restricting access to the audit logs within your Object Storage bucket and any systems where you replicate them.
- Applying recommended security controls to your Object Storage, including regular key rotation for all access keys.
- Avoiding the use of sensitive information in resource names, labels, tags, usernames and user email addresses.
- Avoiding the use of any sensitive information in Kubernetes CRDs or other audited Kubernetes properties.
- Protecting audit logs after they’ve been delivered to your Object Storage bucket.
- Ensuring Object Storage limits aren’t exceeded, Object Storage remains enabled in your account, and all related bills are paid.