Configure audit log delivery with Object Storage

Here, we'll set up a destination to store audit logs for an Akamai Cloud service, along with a stream to generate them. The process uses the Monitor logging operations from the Linode API and related resources.

🚧

By using this service, you acknowledge your obligations under the United States Department of Justice Bulk Sensitive Data Transaction Rule ("BSD Rule"). You also agree that you will not use the service to transfer, onward transfer, or otherwise make accessible any United States government-related data or bulk United States sensitive personal data to countries of concern or a covered person, as each of those terms and concepts are defined in the BSD Rule. Anyone using the service is solely responsible for compliance with the BSD Rule.

Get set up with Object Storage

First, you'll need a place to store the logs. For this, we'll set up an audit logs bucket using our Object Storage service.

Create the bucket

📘

This assumes you already have Object Storage on your account. If you don't, talk to your Akamai account team about getting it added.

Monitor log support via an audit logs bucket requires that you enable Object Lock when you create the bucket. Object Lock is currently only supported through the S3 API; there's no support for it in the Linode API or Cloud Manager.

See Create the audit logs bucket with Object Lock enabled for full details on its configuration. Below is an example of enabling it in a new bucket:

aws s3api put-object-lock-configuration \
  --bucket my-audit-logs-bucket \
  --endpoint-url https://(bucket_name)-1.(S3 hostname) \
  --object-lock-configuration '{ "ObjectLockEnabled": "Enabled", "Rule": { "DefaultRetention": { "Mode": "COMPLIANCE", "Days": 365 }}}'

  • --bucket. A unique name for the bucket. Store this value for future use as your bucket_name.

  • --endpoint-url. The endpoint for access to the bucket. The (bucket_name) variable is the unique name you set for the bucket, and (S3 hostname) is the S3 hostname assigned to the region where you want the bucket to live. From that linked topic, also store the associated Region value from the table as your region. Finally, store the full endpoint URL as the hostname for your bucket.

  • --object-lock-configuration. Enables Object Lock and sets the default retention mode and retention period for objects in the bucket.
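
If you also want to enable Object Lock as part of creating the bucket (per the requirement above), one option with the S3 API is create-bucket's --object-lock-enabled-for-bucket flag. Here's a minimal sketch, assuming an example bucket name, the us-iad endpoint, and that your aws CLI is already configured with Object Storage credentials; the second command confirms the resulting configuration:

# Create the bucket with Object Lock enabled at creation time.
aws s3api create-bucket \
  --bucket my-audit-logs-bucket \
  --endpoint-url https://us-iad-1.linodeobjects.com \
  --object-lock-enabled-for-bucket

# Confirm the bucket's Object Lock configuration.
aws s3api get-object-lock-configuration \
  --bucket my-audit-logs-bucket \
  --endpoint-url https://us-iad-1.linodeobjects.com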

Set up an access key

Now, you need a key to access your audit logs bucket.

  1. Run the Create an Object Storage key operation, including the region you stored from the previous step (a curl sketch of this request follows these steps):

    {
      "label": "OBJ Access for logging",
      "regions": [
        "us-iad"
      ]
    }
    
  2. From the response, store these values:

    • The id. This will serve as your access_key_id for later in the process.

    • The secret_key. Used to validate the access key in requests.

🚧

The secret_key is only revealed in the response for this operation. Be sure to store it now because you can't view it later.
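
If you're working from the command line, here's a minimal curl sketch of this request. It assumes a personal access token in the LINODE_TOKEN environment variable and uses jq (optional) to pull out the two values you need to store:

curl -s -X POST https://api.linode.com/v4/object-storage/keys \
  -H "Authorization: Bearer $LINODE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "label": "OBJ Access for logging",
        "regions": ["us-iad"]
      }' \
  | jq '{access_key_id: .id, secret_key: .secret_key}'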

Create a destination

With your bucket ready in Object Storage, let's configure it as a destination for your audit logs.

  1. Run the Create a destination operation and target your Object Storage bucket (a curl sketch of this call follows these steps):

    {
      "details": {
        "access_key_id": 123,
        "bucket_name": "my-audit-logs-bucket",
        "host": "my-audit-logs-bucket-1.us-iad-1.linodeobjects.com",
        "path": "ds-logs",
        "region": "us-iad"
      },
      "type": "linode_object_storage"
    }
    
    • access_key_id. This is the id for the Object Storage key you stored.
    • bucket_name. This is the bucket name you stored.
    • host. This is the bucket's hostname you stored.
    • path. Enter the name of a directory path where you want the logs stored. This path doesn't have to already exist in the bucket.
    • region. This is the region you stored.
    • type. Set this to linode_object_storage.
  2. Store the id from the response, for use as the destination_id.

    {
       "created": "2025-07-20 09:45:13",
       "created_by": "John Q. Linode",
       "details": {
         "access_key_id": 123,
         "bucket_name": "my-audit-logs-bucket",
         "host": "my-audit-logs-bucket-1.us-iad-1.linodeobjects.com",
         "path": "ds-logs",
         "region": "string"
       },
       "id": 12345, <== Store this.
       "label": "audit_logs_destination",
       "type": "linode_object_storage",
       "updated": "2025-07-21 12:41:09",
       "updated_by": "Jane Q. Linode",
       "version": 1
    }
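
If you're calling the API directly, a rough curl sketch follows. The URL path here is a placeholder (substitute the path documented for the Create a destination operation); the token and jq usage follow the same pattern as the access key example above:

# Placeholder path: substitute the path from the Create a destination
# operation in the Linode API reference.
DESTINATION_ID=$(curl -s -X POST "https://api.linode.com/v4/<create-a-destination-path>" \
  -H "Authorization: Bearer $LINODE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "details": {
          "access_key_id": 123,
          "bucket_name": "my-audit-logs-bucket",
          "host": "my-audit-logs-bucket-1.us-iad-1.linodeobjects.com",
          "path": "ds-logs",
          "region": "us-iad"
        },
        "type": "linode_object_storage"
      }' \
  | jq '.id')

echo "$DESTINATION_ID"   # This is your destination_id (12345 in the example above).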
    

Create the stream

Run the Create a stream operation to define how logs will be gathered, and add your destination to it:

{
  "type": "audit_logs",
  "destinations": [
    12345
  ],
  "status": "active",
  "label": "DBaaS-config"
}
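
As with the destination, you can send this body with curl. The path below is again a placeholder for the documented Create a stream operation, and $DESTINATION_ID is the id captured in the earlier sketch:

# Placeholder path: substitute the path from the Create a stream
# operation in the Linode API reference.
curl -s -X POST "https://api.linode.com/v4/<create-a-stream-path>" \
  -H "Authorization: Bearer $LINODE_TOKEN" \
  -H "Content-Type: application/json" \
  -d "{
        \"type\": \"audit_logs\",
        \"destinations\": [$DESTINATION_ID],
        \"status\": \"active\",
        \"label\": \"DBaaS-config\"
      }"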

With status set to active, logs should begin to arrive at your configured destination in about 45 minutes.
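
Once the first batch lands, you can spot-check delivery by listing the configured path in the bucket. Here's a quick sketch with the S3 API, assuming the example bucket, path, and us-iad endpoint used above:

# List delivered log objects under the configured path.
aws s3 ls "s3://my-audit-logs-bucket/ds-logs/" \
  --recursive \
  --endpoint-url https://us-iad-1.linodeobjects.com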