Stream logs to Amazon S3
DataStream 2 supports sending log files to Amazon Simple Storage Service (Amazon S3). Amazon S3 is an object storage service that lets you organize your data and configure finely tuned access controls to meet your specific business, organizational, and compliance requirements.
DataStream 2 uploads logs to Amazon S3 in a gzip-compressed file. For security reasons, DataStream sends log files over TLS even if Amazon S3 policies allow insecure requests.
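Because each upload is a gzip-compressed object, you need to decompress the files after downloading them from your bucket. A minimal sketch in Python, using a locally written file to stand in for a downloaded log object (the file name and log lines are hypothetical):

```python
import gzip

# Hypothetical stand-in for a log object downloaded from the S3 bucket.
path = "ds2-sample.gz"

# Write a small gzip file so the example is self-contained.
with gzip.open(path, "wt", encoding="utf-8") as f:
    f.write('{"statusCode": 200}\n{"statusCode": 404}\n')

# Read the compressed log lines back, as you would after downloading.
with gzip.open(path, "rt", encoding="utf-8") as f:
    lines = f.read().splitlines()

print(len(lines))  # each line is one log record
```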
If you want improved aggregated metrics, you can use the new DataStream 2 SDK available in our GitHub repository. Watch this video to learn how to use our SDK for Amazon S3:
Before you begin
Create an Identity and Access Management (IAM) user. See the Overview of access management: permissions and policies in Amazon S3.
Create a dedicated storage bucket in an AWS region. See Create storage buckets in Amazon S3.
Grant the user or role that accesses the bucket the appropriate permissions to the bucket contents, including write access so that log files can be uploaded.
Make note of the access key ID and secret access key associated with your account. See Understanding and getting your security credentials in Amazon S3.
Set up and manage server-side encryption (SSE) in the bucket's settings. See Server-side encryption for Amazon S3.
In Destination, select S3.
In Name, enter a human-readable description for the destination.
In Bucket, enter the name of the bucket you created in the S3 account where you want to store logs.
In Folder path, provide the path to the folder within the bucket where you want to store logs. If the folders don't exist in the bucket, Amazon creates them, for example, logs/diagnostics. You can use Dynamic variables in folder paths for timestamps, stream ID, and stream version.
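Dynamic variables are expanded at upload time. As an illustration only, a folder path template could resolve like this; the placeholder names {streamId} and {streamVersion} and the strftime-style timestamp tokens are hypothetical, not the exact DataStream 2 syntax:

```python
from datetime import datetime, timezone

def resolve_folder_path(template, stream_id, stream_version, now):
    # Substitute the hypothetical placeholders, then expand timestamp tokens.
    path = template.replace("{streamId}", str(stream_id))
    path = path.replace("{streamVersion}", str(stream_version))
    return now.strftime(path)

template = "logs/{streamId}/{streamVersion}/%Y/%m/%d"
ts = datetime(2024, 5, 1, tzinfo=timezone.utc)
print(resolve_folder_path(template, 1234, 2, ts))
# logs/1234/2/2024/05/01
```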
Folder paths in Amazon S3
Amazon treats objects that end with / as folders. For example, if you start your path with /, as in /logs, Amazon creates two folders in your bucket. The first one is named /, and it contains the logs folder. See Using folders in AWS and Bucket naming rules in Amazon S3.
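The S3 console derives folders from the / separators in an object's key, so a leading slash produces an empty first segment that is rendered as a folder named /. A quick illustration of how keys split into folder segments:

```python
def folder_segments(key):
    # Everything before the last "/" in an S3 key is rendered as nested folders.
    return key.split("/")[:-1]

print(folder_segments("logs/2024/file.gz"))  # ['logs', '2024']
print(folder_segments("/logs/file.gz"))      # ['', 'logs'] -> extra folder named "/"
```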
In Region, enter the AWS region code where the bucket resides, for example, ap-south-1. See Region names and codes on the Amazon AWS website.
In Access key ID, enter the access key associated with the Amazon S3 bucket.
In Secret access key, enter the secret key associated with the Amazon S3 bucket.
Getting authentication details
You can check your authentication details in the .csv file that you saved when creating your access key. If you didn't download the .csv file, or if you lost it, you may need to delete the existing access key and add a new one. See Managing access keys (console) in AWS.
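If you still have the credentials file, you can read the key pair out of it programmatically. This sketch assumes the two-column Access key ID,Secret access key layout that the AWS console typically uses for the downloaded .csv; the values are fake:

```python
import csv
import io

# Stand-in for the downloaded accessKeys.csv (fake values).
csv_text = "Access key ID,Secret access key\nAKIAEXAMPLE123,wJalrXUtnFEMIexamplesecret\n"

row = next(csv.DictReader(io.StringIO(csv_text)))
access_key_id = row["Access key ID"]
secret_access_key = row["Secret access key"]
print(access_key_id)
```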
Click Validate & Save to validate the connection to the destination, and save the details you provided.
As part of this validation process, the system uses the provided access key identifier and secret access key to create a verification file in your S3 folder, with a timestamp in the filename in the Akamai_access_verification_[TimeStamp].txt format. You can only see this file if the validation process is successful, and you have access to the Amazon S3 bucket and folder that you're trying to send logs to.
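If you automate checks against the bucket, you can match the verification file by name. The exact format of the [TimeStamp] part isn't specified here, so this sketch only assumes it's digit-based:

```python
import re

# Match names like Akamai_access_verification_<digits>.txt
pattern = re.compile(r"^Akamai_access_verification_\d+\.txt$")

print(bool(pattern.match("Akamai_access_verification_1714567890.txt")))  # True
print(bool(pattern.match("logs-file.gz")))                               # False
```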
Optionally, in the Delivery options menu, edit the Filename field to change the prefix and suffix for your log files. File name prefixes support Dynamic variables.
For file name prefixes, you shouldn't use the . character, as it may result in errors and data loss. File name suffixes don't support dynamic variables and can't contain the ? character. See the Object naming conventions in Amazon S3.
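Before saving, you could sanity-check a prefix and suffix against these rules. A minimal sketch that mirrors only the constraints mentioned above (the brace check for dynamic variables is an assumption about their syntax):

```python
def check_filename_parts(prefix, suffix):
    # Collect rule violations for a log file name prefix and suffix.
    errors = []
    if "." in prefix:
        errors.append("prefix must not contain '.'")
    if "?" in suffix:
        errors.append("suffix must not contain '?'")
    # Assumed marker for dynamic variables; adjust to the real syntax.
    if "{" in suffix or "}" in suffix:
        errors.append("suffix must not use dynamic variables")
    return errors

print(check_filename_parts("ds2-log", "gz"))    # []
print(check_filename_parts("ds2.log", "gz?"))   # two violations
```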
Optionally, change the Push frequency to receive bundled logs to your destination every 30 or 60 seconds.