Stream logs to Amazon S3
Follow these steps to send DataStream 2 logs to Amazon Simple Storage Service (Amazon S3).
You can send logs to S3 to organize your data, configure access controls, and meet organizational and compliance requirements for your business.
DataStream 2 uploads logs to Amazon S3 as gzip-compressed files. For security reasons, DataStream sends log files over TLS even if Amazon S3 policies allow insecure requests.
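Because the delivered objects are gzip-compressed, you typically decompress them after downloading. A minimal sketch in Python, using only the standard library; the file name and log line are hypothetical examples, not actual DataStream 2 output:

```python
import gzip

# Hypothetical example: read a downloaded DataStream 2 log object.
# The log lines are plain text inside the gzip archive.
def read_log_lines(path):
    with gzip.open(path, mode="rt", encoding="utf-8") as f:
        return [line.rstrip("\n") for line in f]

# Create a sample gzip file to demonstrate the round trip.
with gzip.open("sample-log.gz", mode="wt", encoding="utf-8") as f:
    f.write("2024-01-01T00:00:00Z GET /index.html 200\n")

print(read_log_lines("sample-log.gz"))
```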
To get improved aggregated metrics for your logs, you can use the DataStream 2 SDK available in the Akamai GitHub repository.
Before you begin
- Create an Identity and Access Management (IAM) user. See Overview of access management: permissions and policies in Amazon S3.
- Create a dedicated storage bucket in an AWS region. See Create storage buckets in Amazon S3.
- Grant the user a role with access to the bucket, including the `ListBucket`, `GetObject`, and `PutObject` permissions.
- Make note of the access key ID and secret access key for your account. See Understanding and getting your security credentials in Amazon S3.
- Set up and manage server-side encryption (SSE) in the bucket's settings. See Server-side encryption for Amazon S3.
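The bucket permissions listed above can be expressed as an IAM policy. A minimal sketch, built here as a Python dict for readability; the bucket name `my-datastream-logs` is a placeholder, and your organization's policy may add further statements:

```python
import json

# Placeholder bucket name; replace with your own bucket.
BUCKET = "my-datastream-logs"

# Minimal policy covering the permissions DataStream 2 needs:
# ListBucket applies to the bucket itself, while GetObject and
# PutObject apply to the objects inside it (note the "/*" suffix).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{BUCKET}"],
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": [f"arn:aws:s3:::{BUCKET}/*"],
        },
    ],
}

print(json.dumps(policy, indent=2))
```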
How to
- In Destination, select S3.
- In Name, enter a human-readable description for the destination.
- In Bucket, enter the name of the bucket you created in the S3 account where you want to store logs.
- In Folder path, provide the path to the folder within the bucket where you want to store logs. If the folders don't exist in the bucket, Amazon creates them, for example, `logs` or `logs/diagnostics`. You can use Dynamic variables in folder paths for timestamps, stream ID, and stream version.
Folder paths in Amazon S3
Amazon treats object keys that end with `/` as folders. For example, if you start your path with `/`, as in `/logs`, Amazon creates two folders in your bucket: the first one is named `/`, and it contains the `logs` folder. See Using folders in AWS and Bucket naming rules in Amazon S3.
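The note above can be illustrated with plain string handling: S3 has no real folders, and the console infers folder levels from `/` separators in object keys, so a leading slash produces an extra empty path segment. A small sketch with no AWS calls:

```python
# S3 infers folder levels from "/" separators in object keys.
def key_segments(key):
    # Each segment before the last one is shown as a folder level.
    return key.split("/")

print(key_segments("logs/file.gz"))   # ['logs', 'file.gz']
print(key_segments("/logs/file.gz"))  # ['', 'logs', 'file.gz'], an extra empty level
```

This is why starting the Folder path with `/` creates an unwanted extra folder level in the bucket.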
- In Region, enter the AWS region code where the bucket resides, for example, `ap-south-1`. See Region names and codes on the Amazon AWS website.
- In Access key ID, enter the access key ID of the IAM user with access to your Amazon S3 bucket.
- In Secret access key, enter the secret access key of that user.
Getting authentication details
You can check your authentication details in the `.csv` file that you saved when creating your access key. If you didn't download the `.csv` file, or if you lost it, you may need to delete the existing access key and add a new one. See Managing access keys (console) in AWS.
- Click Validate & Save to validate the connection to the destination and save the details you provided.
As part of this validation process, the system uses the provided access key ID and secret access key to create a verification file in your S3 folder, with a timestamp in the filename, in the `Akamai_access_verification_[TimeStamp].txt` format. You can see this file only if the validation process is successful and you have access to the Amazon S3 bucket and folder that you're trying to send logs to.
- Optionally, in the Delivery options menu, edit the Filename field to change the prefix and suffix for your log files. File name prefixes support Dynamic variables.
For file name prefixes, you shouldn't use the `.` character, as it may result in errors and data loss. File name suffixes don't support dynamic variables or the `.`, `/`, `%`, and `?` characters. See the Object naming conventions in Amazon S3.
- Optionally, change the Push frequency to receive bundled logs at your destination every 30 or 60 seconds.
- Click Next.
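The file name prefix and suffix restrictions above can be sketched as a simple validator. This is an illustration of the documented rules only, not an Akamai API; the function names are assumptions:

```python
# Characters the docs say are unsupported in file name suffixes.
SUFFIX_FORBIDDEN = set("./%?")

def prefix_ok(prefix):
    # Prefixes shouldn't contain ".", which may cause errors and data loss.
    return "." not in prefix

def suffix_ok(suffix):
    # Suffixes don't support the . / % ? characters.
    return not (set(suffix) & SUFFIX_FORBIDDEN)

print(prefix_ok("ds2-logs"))  # True
print(prefix_ok("ds2.logs"))  # False
print(suffix_ok("gz"))        # True
print(suffix_ok("log%"))      # False
```

Checking names before saving the stream avoids a failed validation or, worse, silently dropped log files.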