Configure audit log delivery

Beta

Here, we'll set up a destination to store your audit logs and a stream to generate them, for either your Linodes or your Linode Kubernetes Engine (LKE) clusters. The process uses the Monitor logging operations from the Linode API and related resources.

🚧

By using this service, you acknowledge your obligations under the United States Department of Justice Bulk Sensitive Data Transaction Rule ("BSD Rule"). You also agree that you will not use the service to transfer, onward transfer, or otherwise make accessible any United States government-related data or bulk United States sensitive personal data to countries of concern or a covered person, as each of those terms and concepts is defined in the BSD Rule. Anyone using the service is solely responsible for compliance with the BSD Rule.

Get set up with Object Storage

First, you'll need a place to store the logs. We'll set up an audit logs bucket using our Object Storage service for this.

Create the bucket

📘

This assumes you already have Object Storage on your account. Talk to your Akamai account team about getting it added.

Monitor log support via an audit logs bucket requires that you enable Object Lock when you create the bucket. Object Lock is currently only supported through the S3 API; it's not available in the Linode API or Cloud Manager.

See Create the audit logs bucket with Object Lock enabled for full details on its configuration. Below is an example of enabling it in a new bucket:

aws s3api put-object-lock-configuration \
  --bucket my-audit-logs-bucket \
  --endpoint=https://(bucket_name)-1.(S3 hostname) \
  --object-lock-configuration '{ "ObjectLockEnabled": "Enabled", "Rule": { "DefaultRetention": { "Mode": "COMPLIANCE", "Days": 365 }}}'

  • --bucket. A unique name for the bucket. Store this value for future use, as your bucket_name.

  • --endpoint. The endpoint for access to the bucket. The (bucket_name) variable is the unique name you set for the bucket, and (S3 hostname) is the assigned S3 hostname for the region where you want the bucket to live. From that linked topic, store the associated Region value from the table as your region. Finally, store the full --endpoint you set here as the hostname for your bucket.

  • --object-lock-configuration. Enables and configures Object Lock, setting the retention mode and period. This example applies COMPLIANCE mode with a 365-day retention.
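
To confirm the lock took effect, you can read the configuration back. This is a minimal sketch, assuming the same bucket name and endpoint used above:

# Read back the Object Lock configuration to verify it was applied.
aws s3api get-object-lock-configuration \
  --bucket my-audit-logs-bucket \
  --endpoint=https://(bucket_name)-1.(S3 hostname)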

Set up an access key

Now, you need a key to access your audit logs bucket.

  1. Run the Create an Object Storage key operation, including the region you stored from the previous step. (A curl sketch follows these steps.)

    {
      "label": "OBJ Access for logging",
      "regions": [
        "us-iad"
      ]
    }
    
  2. From the response, store these values:

    • The id. This will serve as your access_key_id for later in the process.

    • The secret_key. Used to validate the access key in requests.

    🚧

    The secret_key is only revealed in the response for this operation. Be sure to store it now because you can't view it later.
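
For reference, here's step 1 as a direct API call with curl. This is a minimal sketch, assuming your personal access token is stored in a TOKEN environment variable:

# Create an Object Storage key for the region you stored.
curl -X POST https://api.linode.com/v4/object-storage/keys \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "label": "OBJ Access for logging",
    "regions": ["us-iad"]
  }'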

Enable LKE audit logs

If you're configuring a stream to gather Kubernetes (LKE) audit logs, you need to enable them for each cluster you want to track.

  1. Run the Update a Kubernetes cluster operation for an existing cluster, or the Create a Kubernetes cluster operation to create a new cluster.

  2. In the request, set audit_logs_enabled: true in the control_plane object (a complete curl sketch follows this list):

    "control_plane": {
      "audit_logs_enabled": true
    },
    

    📘

    During this beta release, this object is only available with these operations. Its status is also revealed in the responses for these operations, as well as for the List Kubernetes clusters and Get a Kubernetes cluster operations.
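
As a complete sketch, enabling audit logs on an existing cluster with curl might look like the following. The cluster ID 12345 and the TOKEN variable are placeholders, and because this object is beta-only, the v4beta base path may be required instead of v4:

# Enable control plane audit logs on an existing LKE cluster.
# The cluster ID (12345) is a placeholder; substitute your own cluster's id.
curl -X PUT https://api.linode.com/v4/lke/clusters/12345 \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "control_plane": {
      "audit_logs_enabled": true
    }
  }'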

Create a destination

With your bucket created in Object Storage, let's configure it as the destination that stores your audit logs.

  1. Run the Create a destination operation and target your Object Storage bucket (a curl sketch follows these steps):

    {
      "details": {
        "access_key_id": 123,
        "access_key_secret": "1aB2CD3e4fgHi5JK6lmnop7qR8STU9VxYzabcdefHh",
        "bucket_name": "my-audit-logs-bucket",
        "host": "my-audit-logs-bucket-1.us-iad-1.linodeobjects.com",
        "path": "ds-logs"
      },
      "type": "linode_object_storage"
    }
    
    • access_key_id. This is the id for the Object Storage key you stored.

    • access_key_secret. This is the secret_key you stored.

    • bucket_name. This is the bucket_name you stored.

    • host. This is the bucket's hostname you stored.

    • path (Optional). Enter the name of a directory path where you want the logs stored. This path doesn't need to already exist in the bucket. If you leave this out, it defaults to a specific path, based on the type of stream you want to create:

      • Audit logs. {stream_type}/{log_type}/{account}/{%Y/%m/%d/}.

      • LKE audit logs. {stream_type}/{log_type}/{account}/{partition}/{%Y/%m/%d/}.

    • type. Set this to linode_object_storage.

  2. Store the id from the response, for use as the destination_id.

    {
       "created": "2025-07-20 09:45:13",
       "created_by": "John Q. Linode",
       "details": {
         "access_key_id": 123,
         "bucket_name": "my-audit-logs-bucket",
         "host": "my-audit-logs-bucket-1.us-iad-1.linodeobjects.com",
         "path": "ds-logs",
         "region": "string"
       },
       "id": 12345, <== Store this.
       "label": "audit_logs_destination",
       "type": "linode_object_storage",
       "updated": "2025-07-21 12:41:09",
       "updated_by": "Jane Q. Linode",
       "version": 1
    }
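
For reference, here's step 1 as a direct curl call. This is a sketch only: TOKEN is a placeholder for your personal access token, and the /v4beta/monitor/streams/destinations path is an assumption, so confirm the exact URL in the Create a destination operation's reference:

# Create a destination that targets the audit logs bucket.
# NOTE: the path below is an assumption for the beta; verify it in the API reference.
curl -X POST https://api.linode.com/v4beta/monitor/streams/destinations \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "details": {
      "access_key_id": 123,
      "access_key_secret": "1aB2CD3e4fgHi5JK6lmnop7qR8STU9VxYzabcdefHh",
      "bucket_name": "my-audit-logs-bucket",
      "host": "my-audit-logs-bucket-1.us-iad-1.linodeobjects.com",
      "path": "ds-logs"
    },
    "type": "linode_object_storage"
  }'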
    

Create the stream

Run the Create a stream operation to define how logs are gathered, and include your stored destination_id in the destinations array. You can set up a stream for two different types of audit logs.

👍

You can set up a separate destination, using a different path, for each audit log type.

Audit logs

These let you gather log data for all of the control plane operations for the services in your Linodes.

  • label. Give the stream a unique, easily recognizable name.

  • type. Set this to audit_logs.

  • destinations. Include your stored destination_id in this array.

  • status (Optional). This defaults to active. When active, logs should begin to arrive at your configured destination in about 45 minutes.

{
  "label": "Linode_services_logs",
  "type": "audit_logs",
  "destinations": [
    12345
  ],
  "status": "active"
}
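
Assembled as a curl call, the request might look like this sketch. As before, TOKEN is a placeholder, and the /v4beta/monitor/streams path is an assumption; confirm it in the Create a stream operation's reference:

# Create a stream that delivers account audit logs to the stored destination.
# NOTE: the path below is an assumption for the beta; verify it in the API reference.
curl -X POST https://api.linode.com/v4beta/monitor/streams \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "label": "Linode_services_logs",
    "type": "audit_logs",
    "destinations": [12345],
    "status": "active"
  }'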

LKE audit logs

These let you gather log data for actions performed with your Linode Kubernetes Engine (LKE) enterprise clusters.

📘

You also need to enable LKE audit logs for each cluster you want to track, as covered in Enable LKE audit logs above.

  • label. Give the stream a unique, easily recognizable name.

  • type. Set this to lke_audit_logs.

  • destinations. Include your stored destination_id in this array.

  • status. Set this to active. LKE audit logs should begin to arrive at your configured destination in about 45 minutes.

  • details (object). Include this and its cluster_ids array to list the identifiers of the specific LKE enterprise clusters you want to target. Run the List Kubernetes clusters operation and store the id values for all applicable clusters. (A curl sketch follows the example below.)

{
  "destinations": [
    5678
  ],
  "type": "lke_audit_logs",
  "details": {
    "cluster_ids": [
      1234,
      5678
    ]
  },
  "label": "LKE_enterprise_clusters",
  "status": "active"
}
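
If you still need the cluster identifiers for the details.cluster_ids array, the List Kubernetes clusters operation returns them. A quick sketch, assuming your token is in TOKEN:

# List LKE clusters; store the "id" of each cluster the stream should cover.
curl -H "Authorization: Bearer $TOKEN" \
  https://api.linode.com/v4/lke/clusters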