
Data stream

akamai_datastream

Average processing time: 90 minutes to 3 hours

Create and manage a data stream to capture log entries from edge servers and deliver them to the connector you configure.

To deactivate or delete a data stream, use terraform destroy.
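
For example, to remove only this stream and leave the rest of your configuration in place, you can target the resource directly (assuming the resource address from the example below):

terraform destroy -target=akamai_datastream.my_datastream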

resource "akamai_datastream" "my_datastream" {
  active = true
  delivery_configuration {
    field_delimiter = "SPACE"
    format          = "STRUCTURED"
    frequency {
      interval_in_secs = 30
    }
    upload_file_prefix = "prefix"
    upload_file_suffix = "suffix"
  }
  contract_id = "C-0N7RAC7"
  dataset_fields = [
    999, 1002
  ]
  group_id = "12345"
  properties = [
    "12345", "98765"
  ]
  stream_name = "Datastream_Example1"
  gcs_connector {
    bucket               = "my_bucket"
    display_name         = "my_connector_name"
    path                 = "akamai/logs"
    private_key          = "-----BEGIN PRIVATE KEY-----\nprivate_key\n-----END PRIVATE KEY-----\n"
    project_id           = "my_project_id"
    service_account_name = "my_service_account_name"
  }
  notification_emails = [
    "example1@example.com",
    "example2@example.com",
  ]
  collect_midgress = true
}

Arguments

Pass the required arguments to create or modify a data stream.

Argument | Required | Description
active | ✔️ | Whether to activate the data stream when the resource is applied.
  • true activates the data stream upon terraform apply.
  • false creates or updates the data stream without activating it (see the sketch after this table).
delivery_configuration | ✔️ | A set that provides configuration information for the logs.
  • field_delimiter. Sets a space as a delimiter to separate data set fields in log lines. Value is SPACE. If used, you must also set the format argument to STRUCTURED.
  • format. Required. The format in which you want to receive log files, either STRUCTURED or JSON. If you've used a delimiter, the format must be STRUCTURED.
  • frequency. Required. A set that includes interval_in_secs, the time in seconds after which the system bundles log lines into a file and sends the file to a destination. Possible values are 30 and 60.
  • upload_file_prefix. The log file prefix to send to a destination. Maximum of 200 characters. If unspecified, it defaults to ak.
  • upload_file_suffix. The log file suffix to send to a destination. Maximum of 10 characters. If unspecified, it defaults to ds.
contract_id | ✔️ | Your contract's ID.
dataset_fields | ✔️ | A set of IDs for the data set fields within the product for which you want to receive logs. The order of the IDs defines their order in the log lines. For values, use the dataset_fields data source to get the available fields for your product.
group_id | ✔️ | Your group's ID.
properties | ✔️ | A list of properties the data stream monitors. Data can only be logged on active properties.
stream_name | ✔️ | The name of your stream.
<connector>_connector | ✔️ | Destination details for the data stream. Replace <connector> with the respective type listed in the connector table.
notification_emails | | A list of email addresses to which the data stream's activation and deactivation status are sent.
collect_midgress | | Boolean that sets the collection of midgress data.
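
As an example, here's a minimal sketch that stages a stream without activating it, assuming an S3 destination; all IDs and credentials below are placeholders, and the field IDs would come from the dataset_fields data source.

resource "akamai_datastream" "my_staged_datastream" {
  # false creates the stream without activating it; switch to true
  # when you're ready to start log delivery.
  active      = false
  contract_id = "C-0N7RAC7"
  group_id    = "12345"
  stream_name = "Datastream_Staged"
  properties  = ["12345"]
  # Field IDs and their order come from the dataset_fields data source.
  dataset_fields = [999, 1002]
  delivery_configuration {
    format = "JSON"
    frequency {
      interval_in_secs = 60
    }
  }
  s3_connector {
    access_key        = "my_access_key"
    bucket            = "my_bucket"
    display_name      = "my_s3_connector"
    path              = "akamai/logs"
    region            = "us-east-1"
    secret_access_key = "my_secret_access_key"
    compress_logs     = true
  }
}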

Connectors

For each of the connectors listed, use the connector's heading in the table below, verbatim, as the <connector> value in <connector>_connector, for example, gcs_connector.

Argument | Description
azure
access_key The account access key for authentication.
account_name The Azure Storage account.
display_name The connector's name.
container_name The Azure Storage container name.
path The path to the log storage folder.
compress_logs Boolean that sets the compression of logs.
datadog
auth_token Your account's API key.
display_name The connector's name.
endpoint The storage endpoint for the logs.
tags The Datadog connector tags.
compress_logs Boolean that sets the compression of logs.
service The name of the service in Datadog.
source The name of the source in Datadog.
elasticsearch
display_name The connector's name.
endpoint The storage endpoint for the logs.
user_name The BASIC user name for authentication.
password The BASIC password for authentication.
index_name The index name for where to store log files.
tls_hostname The hostname that verifies the server's certificate and matches the Subject Alternative Names (SANs) in the certificate. If not provided, DataStream fetches the hostname from the endpoint URL.
ca_cert The certification authority (CA) certificate used to verify the origin server's certificate. If the certificate is not signed by a well-known certification authority, enter the CA certificate in PEM format for verification.
client_cert The digital certificate in the PEM format you want to use to authenticate requests to your destination. If you want to use mutual authentication, you need to provide both the client certificate and the client key in PEM format.
client_key The private key in non-encrypted PKCS8 format for back-end authentication. If you want to use mutual authentication, you need to provide both the client certificate and the client key.
m_tls Boolean that sets mTLS enablement.
content_type The content type to pass in the log file header.
custom_header_name A custom header name passed with the request to the destination.
custom_header_value The custom header's value passed with the request to the destination.
gcs
bucket The bucket name.
display_name The connector's name.
private_key A JSON private key for a Google Cloud Storage account.
project_id A Google Cloud project ID.
service_account_name The name of the service account with the storage object create permission or storage object creator role.
compress_logs Boolean that sets the compression of logs.
path The path to the log storage folder.
https
authentication_type Either NONE for no authentication or BASIC for username and password authentication.
display_name The connector's name.
content_type The content type to pass in the log file header.
endpoint The storage endpoint for the logs.
m_tls Boolean that sets mTLS enablement.
compress_logs Boolean that sets the compression of logs.
custom_header_name A custom header name passed with the request to the destination.
custom_header_value The custom header's value passed with the request to the destination.
password The BASIC password for authentication.
user_name The BASIC user name for authentication.
tls_hostname The hostname that verifies the server's certificate and matches the Subject Alternative Names (SANs) in the certificate. If not provided, DataStream fetches the hostname from the endpoint URL.
ca_cert The certification authority (CA) certificate used to verify the origin server's certificate. If the certificate is not signed by a well-known certification authority, enter the CA certificate in PEM format for verification.
client_cert The digital certificate in the PEM format you want to use to authenticate requests to your destination. If you want to use mutual authentication, you need to provide both the client certificate and the client key in PEM format.
client_key The private key in non-encrypted PKCS8 format for back-end authentication. If you want to use mutual authentication, you need to provide both the client certificate and the client key, as in the sketch below.
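
As a sketch, an https_connector block that enables mutual TLS; the endpoint, credentials, and certificate bodies are placeholders, and the remaining arguments follow the table above.

# Inside an akamai_datastream resource:
https_connector {
  authentication_type = "BASIC"
  display_name        = "my_https_connector"
  endpoint            = "https://logs.example.com/ingest"
  content_type        = "application/json"
  user_name           = "my_user"
  password            = "my_password"
  compress_logs       = true
  # For mutual TLS, provide both the client certificate and the
  # client key in PEM format.
  m_tls        = true
  client_cert  = "-----BEGIN CERTIFICATE-----\nclient_cert\n-----END CERTIFICATE-----\n"
  client_key   = "-----BEGIN PRIVATE KEY-----\nclient_key\n-----END PRIVATE KEY-----\n"
  tls_hostname = "logs.example.com"
}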
loggly
display_name The connector's name.
endpoint The storage endpoint for the logs.
auth_token The HTTP code for your Loggly bulk endpoint.
content_type The content type to pass in the log file header.
tags Tags to segment and filter log events in Loggly.
custom_header_name A custom header name passed with the request to the destination.
custom_header_value The custom header's value passed with the request to the destination.
new_relic
display_name The connector's name.
endpoint The storage endpoint for the logs.
auth_token Your account's API key.
content_type The content type to pass in the log file header.
custom_header_name A custom header name passed with the request to the destination.
custom_header_value The custom header's value passed with the request to the destination.
oracle
access_key The account access key for authentication.
bucket The bucket name.
compress_logs Boolean that sets the compression of logs.
display_name The connector's name.
namespace The Oracle Cloud storage account's namespace.
path The path to the log storage folder.
region The region where the bucket resides.
secret_access_key The secret access key used to authenticate requests to the Oracle Cloud account.
s3
access_key The account access key for authentication.
bucket The bucket name.
display_name The connector's name.
path The path to the log storage folder.
region The region where the bucket resides.
secret_access_key The secret access key used to authenticate requests to the Amazon S3 account.
compress_logs Boolean that sets the compression of logs.
splunk
display_name The connector's name.
event_collector_token The Splunk account's event collector token.
endpoint The storage endpoint for the logs.
client_key The private key in non-encrypted PKCS8 format for back-end authentication. If you want to use mutual authentication, you need to provide both the client certificate and the client key.
ca_cert The certification authority (CA) certificate used to verify the origin server's certificate. If the certificate is not signed by a well-known certification authority, enter the CA certificate in PEM format for verification.
client_cert The digital certificate in the PEM format you want to use to authenticate requests to your destination. If you want to use mutual authentication, you need to provide both the client certificate and the client key in PEM format.
m_tls Boolean that sets mTLS enablement.
custom_header_name A custom header name passed with the request to the destination.
custom_header_value The custom header's value passed with the request to the destination.
tls_hostname The hostname that verifies the server's certificate and matches the Subject Alternative Names (SANs) in the certificate. If not provided, DataStream fetches the hostname from the endpoint URL.
compress_logs Boolean that sets the compression of logs.
sumologic
collector_code The Sumo Logic endpoint's HTTP collector code.
content_type The content type to pass in the log file header.
display_name The connector's name.
endpoint The storage endpoint for the logs.
compress_logs Boolean that sets the compression of logs.
custom_header_name A custom header name passed with the request to the destination.
custom_header_value The custom header's value passed with the request to the destination.

Midgress traffic

These data set fields capture midgress traffic:

2051 Is midgress
2084 Prefetch midgress hits The midgress traffic within the Akamai network, such as between two edge servers. To use this, enable the collect_midgress_traffic option in the DataStream behavior for your property in Property Manager. As a result, the second slot in the log line returns processing information about a request:
  • 0, if the request was processed between the client device and edge server (CLIENT_REQ), and isn't logged as midgress traffic.
  • 1, if the request was processed by an edge server within the region (PEER_REQ), and is logged as midgress traffic.
  • 2, if the request was processed by a parent Akamai edge server in the parent-child hierarchy (CHILD_REQ), and is logged as midgress traffic.
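
A sketch of collecting midgress data in the resource, assuming fields 2051 and 2084 are available for your product; the surrounding required arguments are omitted here, see the full example at the top of this page.

# Inside a full akamai_datastream resource:
  collect_midgress = true
  dataset_fields = [
    999, 1002,
    2051, # Is midgress
    2084, # Prefetch midgress hits
  ]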

Attributes

There's no default standard output because the attribute values are sensitive, but you can get your data stream's ID from the last line of the process log.

akamai_datastream.my_datastream: Creation complete after 1h20m16s [id=12345]
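
If you'd rather not scrape the log, a standard Terraform output block can expose the ID; a minimal sketch using the resource name from the example above:

output "datastream_id" {
  # The provider-assigned stream ID for akamai_datastream.my_datastream.
  value = akamai_datastream.my_datastream.id
}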