Data stream

akamai_datastream

 Average processing time: 90 minutes to 3 hours

Create and manage a data stream to capture log entries from edge servers and deliver them to the connector you set as the destination.

To deactivate or delete a data stream, use terraform destroy.
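
For example, to remove only this data stream from your configuration, you can target the resource directly. The address below assumes the my_datastream label from the example that follows.

terraform destroy -target=akamai_datastream.my_datastream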

resource "akamai_datastream" "my_datastream" {
  active = true
  delivery_configuration {
    field_delimiter = "SPACE"
    format          = "STRUCTURED"
    frequency {
      interval_in_secs = 30
    }
    upload_file_prefix = "prefix"
    upload_file_suffix = "suffix"
  }
  contract_id = "C-0N7RAC7"
  dataset_fields = [
    999, 1002
  ]
  group_id = "12345"
  properties = [
    "12345", "98765"
  ]
  stream_name = "Datastream_Example1"
  gcs_connector {
    bucket               = "my_bucket"
    display_name         = "my_connector_name"
    path                 = "akamai/logs"
    private_key          = "-----BEGIN PRIVATE KEY-----\nprivate_key\n-----END PRIVATE KEY-----\n"
    project_id           = "my_project_id"
    service_account_name = "my_service_account_name"
  }
  notification_emails = [
    "example1@example.com",
    "example2@example.com",
  ]
  collect_midgress = true
}

Arguments

Pass the required arguments to create or modify a data stream.

Argument Description
active Whether to activate the data stream when it's created.

Important: Because the data stream creation process can take up to a few hours, set the value to true to avoid a second round of processing to activate the stream.
delivery_configuration A set that provides configuration information for the logs. A JSON-format sketch follows this table.
  • format. Required. The format in which you want to receive log files, STRUCTURED or JSON. If you've used a delimiter, the format must be STRUCTURED.
  • frequency. Required. A set that includes interval_in_secs. The time in seconds after which the system bundles log lines into a file and sends the file to a destination. Possible values are 30 and 60.
  • field_delimiter. Sets a space as the delimiter that separates data set fields in log lines. The only possible value is SPACE. If used, you must also set the format argument to STRUCTURED.
  • upload_file_prefix. The prefix of the log file to send to a destination, 200 characters maximum. If unspecified, it defaults to ak.
  • upload_file_suffix. The suffix of the log file to send to a destination, 10 characters maximum. If unspecified, it defaults to ds.
contract_id Your contract's ID.
dataset_fields A set of IDs for the data set fields within the product for which you want to receive logs. The order of the IDs defines their order in the log lines. For values, use the dataset_fields data source to get the available fields for your product. For details on each data set, see Choose data sets.
group_id Your group's ID.
properties A list of properties the data stream monitors. Data can only be logged on active properties.
stream_name Your stream's name.
{connector}_connector Destination details for the data stream. Replace {connector} with the respective type listed in the connector table.
notification_emails A list of email addresses to which the data stream's activation and deactivation status are sent.
collect_midgress Boolean that sets the collection of midgress data.
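
If you don't use a delimiter, a minimal delivery_configuration sketch with the JSON format might look like this; field_delimiter is omitted because it only applies to STRUCTURED, and the prefix and suffix values shown are the documented defaults.

delivery_configuration {
  format = "JSON"
  frequency {
    # Bundle log lines into a file every 60 seconds; 30 is the other possible value.
    interval_in_secs = 60
  }
  # Optional; these match the documented defaults.
  upload_file_prefix = "ak"
  upload_file_suffix = "ds"
}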

Connectors

For each of the connectors listed, use its heading in the argument column as is for the {connector} part of {connector}_connector, for example, gcs_connector. Minimal sketches of two connector blocks follow the table.

Argument Description
azure
access_key The account access key for authentication.
account_name The Azure Storage account.
display_name The connector's name.
container_name The Azure Storage container name.
path The path to the log storage folder.
datadog
auth_token Your account's API key.
display_name The connector's name.
endpoint The storage endpoint for the logs.
tags The Datadog connector tags.
compress_logs Boolean that sets the compression of logs.
service The Datadog service connector.
source The Datadog source connector.
elasticsearch
display_name The connector's name.
endpoint The storage endpoint for the logs.
user_name The BASIC user name for authentication.
password The BASIC password for authentication.
index_name The index name for where to store log files.
tls_hostname The hostname that verifies the server's certificate and matches the Subject Alternative Names (SANs) in the certificate. If not provided, DataStream fetches the hostname from the endpoint URL.
ca_cert The certification authority (CA) certificate used to verify the origin server's certificate. If the certificate is not signed by a well-known certification authority, enter the CA certificate in PEM format for verification.
client_cert The digital certificate in the PEM format you want to use to authenticate requests to your destination. If you want to use mutual authentication, you need to provide both the client certificate and the client key in PEM format.
client_key The private key in non-encrypted PKCS8 format for backend authentication. If you want to use mutual authentication, you need to provide both the client certificate and the client key.
content_type The content type to pass in the log file header.
custom_header_name A custom header name passed with the request to the destination.
custom_header_value The custom header's value passed with the request to the destination.
gcs
bucket The bucket name.
display_name The connector's name.
private_key A JSON private key for a Google Cloud Storage account.
project_id A Google Cloud project ID.
service_account_name The name of the service account with the storage object create permission or storage object creator role.
path The path to the log storage folder.
https
authentication_type Either NONE for no authentication or BASIC for username and password authentication.
display_name The connector's name.
endpoint The storage endpoint for the logs.
content_type The content type to pass in the log file header.
compress_logs Boolean that sets the compression of logs.
custom_header_name A custom header name passed with the request to the destination.
custom_header_value The custom header's value passed with the request to the destination.
password The BASIC password for authentication.
user_name The BASIC user name for authentication.
tls_hostname The hostname that verifies the server's certificate and matches the Subject Alternative Names (SANs) in the certificate. If not provided, DataStream fetches the hostname from the endpoint URL.
ca_cert The certification authority (CA) certificate used to verify the origin server's certificate. If the certificate is not signed by a well-known certification authority, enter the CA certificate in PEM format for verification.
client_cert The digital certificate in the PEM format you want to use to authenticate requests to your destination. If you want to use mutual authentication, you need to provide both the client certificate and the client key in PEM format.
client_key The private key in non-encrypted PKCS8 format for backend authentication. If you want to use mutual authentication, you need to provide both the client certificate and the client key.
loggly
display_name The connector's name.
endpoint The storage endpoint for the logs.
auth_token The HTTP code for your Loggly bulk endpoint.
content_type The content type to pass in the log file header.
tags Tags to segment and filter log events in Loggly.
custom_header_name A custom header name passed with the request to the destination.
custom_header_value The custom header's value passed with the request to the destination.
new_relic
display_name The connector's name.
endpoint The storage endpoint for the logs.
auth_token Your account's API key.
content_type The content type to pass in the log file header.
custom_header_name A custom header name passed with the request to the destination.
custom_header_value The custom header's value passed with the request to the destination.
oracle
access_key The account access key for authentication.
bucket The bucket name.
display_name The connector's name.
namespace The Oracle Cloud storage account's namespace.
path The path to the log storage folder.
region The region where the bucket resides.
secret_access_key The secret access key used to authenticate requests to the Oracle Cloud account.
s3
access_key The account access key for authentication.
bucket The bucket name.
display_name The connector's name.
path The path to the log storage folder.
region The region where the bucket resides.
secret_access_key The secret access key used to authenticate requests to the Amazon S3 account.
splunk
display_name The connector's name.
event_collector_token The Splunk account's event collector token.
endpoint The storage endpoint for the logs.
client_key The private key in non-encrypted PKCS8 format for backend authentication. If you want to use mutual authentication, you need to provide both the client certificate and the client key.
ca_cert The certification authority (CA) certificate used to verify the origin server's certificate. If the certificate is not signed by a well-known certification authority, enter the CA certificate in PEM format for verification.
client_cert The digital certificate in the PEM format you want to use to authenticate requests to your destination. If you want to use mutual authentication, you need to provide both the client certificate and the client key in PEM format.
custom_header_name A custom header name passed with the request to the destination.
custom_header_value The custom header's value passed with the request to the destination.
tls_hostname The hostname that verifies the server's certificate and matches the Subject Alternative Names (SANs) in the certificate. If not provided, DataStream fetches the hostname from the endpoint URL.
compress_logs Boolean that sets the compression of logs.
sumologic
collector_code The Sumo Logic endpoint's HTTP collector code.
display_name The connector's name.
endpoint The storage endpoint for the logs.
content_type The content type to pass in the log file header.
compress_logs Boolean that sets the compression of logs.
custom_header_name A custom header name passed with the request to the destination.
custom_header_value The custom header's value passed with the request to the destination.
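
As an illustration, here are minimal sketches of a splunk_connector and an s3_connector built only from the arguments above. All values, including the endpoint URL, are placeholders.

splunk_connector {
  display_name          = "my_splunk_connector"
  # Placeholder; use your Splunk HTTP Event Collector endpoint.
  endpoint              = "https://splunk.example.com:8088/services/collector/raw"
  event_collector_token = "my_event_collector_token"
  compress_logs         = true
}

s3_connector {
  access_key        = "my_access_key"
  bucket            = "my_bucket"
  display_name      = "my_s3_connector"
  path              = "akamai/logs"
  region            = "us-east-1"
  secret_access_key = "my_secret_access_key"
}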

Attributes

Because some of the attribute values are sensitive, there is no default standard output. If you haven't set an output method, the response only provides your data stream's ID in the success message.

Setting an output method returns the data stream details you provided on create, along with these computed attributes. A minimal output sketch follows the connector attributes below.

Attribute Description
id The resource's ID.
created_by The user who created the stream.
created_date The ISO 8601 timestamp indicating when the stream was created.
modified_by The user who updated the stream.
modified_date The ISO 8601 timestamp indicating when the stream was updated.
papi_json The configuration in JSON format that you can copy and paste into your property configuration to enable the datastream behavior.
product_id The ID of the product for which the stream was created.
stream_version The stream's version.
latest_version The latest active stream's version.

Connectors

These computed attributes are specific to the connector type you use.

Attribute Description
azure
compress_logs Boolean that sets the compression of logs.
elasticsearch
m_tls Boolean that sets mTLS enablement.
gcs
compress_logs Boolean that sets the compression of logs.
https
m_tls Boolean that sets mTLS enablement.
oracle
compress_logs Boolean that sets the compression of logs.
s3
compress_logs Boolean that sets the compression of logs.
splunk
m_tls Boolean that sets mTLS enablement.
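
For example, this minimal output sketch surfaces the stream's details after terraform apply. Because the resource can carry sensitive values such as keys and tokens, Terraform requires the output to be marked sensitive.

output "my_datastream_details" {
  value     = akamai_datastream.my_datastream
  sensitive = true
}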