Stream logs to Amazon S3

DataStream 2 supports sending log files to Amazon Simple Storage Service (Amazon S3). Amazon S3 is an object storage service that lets you organize your data and configure fine-grained access controls to meet your specific business, organizational, and compliance requirements.

DataStream 2 uploads logs to Amazon S3 in a gzip-compressed file.

For security reasons, DataStream sends logs over TLS even if Amazon S3 policies allow insecure requests.

Before you begin

Make sure you have an Amazon S3 bucket to store logs in, and an access key ID and secret access key with permission to write to that bucket.
How to

  1. In Destination, select S3.

  2. In Name, enter a human-readable description for the destination.

  3. In Bucket, enter the name of the bucket you created in the S3 account where you want to store logs.

  4. In Folder path, provide the path to the folder within the bucket where you want to store logs, for example logs or logs/diagnostics. If the folders don't exist in the bucket, Amazon creates them. You can use Dynamic variables in folder paths for timestamps, stream ID, and stream version.

📘

Folder paths in Amazon S3

Amazon treats objects that end with / as folders. For example, if you start your path with /, as in /logs, Amazon creates two folders in your bucket. The first one is named /, and it contains the logs folder. See Using folders in AWS and Bucket naming rules in Amazon S3.
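Because a leading / produces an extra folder named /, it can help to normalize the configured folder path before using it. A minimal sketch; the function name is illustrative:

```python
def normalize_folder_path(path: str) -> str:
    """Strip leading slashes so S3 doesn't create a folder literally named "/"."""
    return path.lstrip("/")

print(normalize_folder_path("/logs"))             # logs
print(normalize_folder_path("logs/diagnostics"))  # logs/diagnostics
```

The same rule applies however deep the path is: only the leading slash causes the extra "/" folder, so slashes between path segments are left intact.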

  1. In Region, enter the AWS region code where the bucket resides—for example, ap-south-1. See Region names and codes on the Amazon AWS website.

  2. In Access key ID, enter the access key ID associated with the Amazon S3 bucket.

  3. In Secret access key, enter the secret key associated with the Amazon S3 bucket.

📘

Getting authentication details

You can check your authentication details in the .csv file that you saved when creating your access key. If you didn't download the .csv file, or if you lost it, you may need to delete the existing access key and add a new one. See Managing access keys (console) in AWS.
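If you still have the .csv file, you can read the credentials from it programmatically rather than copying them by hand. A minimal sketch, assuming the standard two-column layout of the downloaded file (Access key ID, Secret access key); the helper name is illustrative:

```python
import csv

def read_aws_credentials(csv_path: str) -> tuple[str, str]:
    """Return (access_key_id, secret_access_key) from a downloaded access-key .csv."""
    with open(csv_path, newline="") as f:
        row = next(csv.DictReader(f))  # the file holds a single data row
    return row["Access key ID"], row["Secret access key"]
```

Keep the file out of version control; the secret access key grants write access to your bucket.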

  1. Click Validate & Save to validate the connection to the destination, and save the details you provided.

    As part of this validation, the system uses the access key ID and secret access key you provided to create a verification file in your S3 folder, named in the Akamai_access_verification_[TimeStamp].txt format. The file appears only if validation succeeds and you have access to the Amazon S3 bucket and folder you're sending logs to.
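If you want to confirm the validation succeeded by scanning the bucket yourself, you can match object names against the verification file pattern. A minimal sketch; the exact timestamp format isn't specified here, so the pattern accepts any non-empty value:

```python
import re

# Matches Akamai_access_verification_[TimeStamp].txt; timestamp format is unspecified.
VERIFICATION_FILE = re.compile(r"^Akamai_access_verification_.+\.txt$")

print(bool(VERIFICATION_FILE.match("Akamai_access_verification_1694000000.txt")))  # True
print(bool(VERIFICATION_FILE.match("readme.txt")))                                 # False
```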

  2. Optionally, in the Delivery options menu, edit the Filename field to change the prefix and suffix for your log files. File name prefixes support Dynamic variables.

    For file name prefixes, don't use the . character, as it may result in errors and data loss. File name suffixes don't support dynamic variables or the ., /, %, and ? characters. See Object naming conventions in Amazon S3.
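The character constraints above can be checked before you save the stream. A minimal sketch, assuming only the rules stated here; the function names are illustrative:

```python
SUFFIX_FORBIDDEN = set("./%?")

def prefix_ok(prefix: str) -> bool:
    """A '.' in the file name prefix may cause errors and data loss."""
    return "." not in prefix

def suffix_ok(suffix: str) -> bool:
    """File name suffixes don't support the '.', '/', '%', and '?' characters."""
    return not (SUFFIX_FORBIDDEN & set(suffix))

print(prefix_ok("ds2-logs"))  # True
print(suffix_ok("gz"))        # True (note: no dot in the suffix itself)
print(suffix_ok("log.gz"))    # False
```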

  3. Optionally, change the Push frequency to receive bundled logs to your destination every 30 or 60 seconds.

  4. Click Next.
