Data stream

Average processing time: 90 minutes to 3 hours

Create and manage a data stream to capture log entries from edge servers and deliver them to a connector you configure.

To deactivate or delete a data stream, use terraform destroy.
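
If the stream is one of several resources in your configuration, you can limit the destroy to just this stream with Terraform's -target option. A minimal sketch, using the resource address from the example below:

terraform destroy -target=akamai_datastream.my_datastream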

resource "akamai_datastream" "my_datastream" {
  active = true
  delivery_configuration {
    field_delimiter = "SPACE"
    format          = "STRUCTURED"
    frequency {
      interval_in_secs = 30
    }
    upload_file_prefix = "prefix"
    upload_file_suffix = "suffix"
  }
  contract_id = "C-0N7RAC7"
  dataset_fields = [
    999, 1002
  ]
  group_id = "12345"
  properties = [
    "12345", "98765"
  ]
  stream_name = "Datastream_Example1"
  gcs_connector {
    bucket               = "my_bucket"
    display_name         = "my_connector_name"
    path                 = "akamai/logs"
    private_key          = "-----BEGIN PRIVATE KEY-----\nprivate_key\n-----END PRIVATE KEY-----\n"
    project_id           = "my_project_id"
    service_account_name = "my_service_account_name"
  }
  notification_emails = [
    "example1@example.com",
    "example2@example.com",
  ]
  collect_midgress = true
}

Arguments

Pass the required arguments to create or modify a data stream.

• active. Whether to activate the data stream when it's created.

  Important: Because creating a data stream can take up to three hours, setting this to true avoids a second round of processing to activate the stream later.

• delivery_configuration. A set that provides configuration information for the logs.
  • format. Required. The format in which you want to receive log files, STRUCTURED or JSON. If you use a delimiter, the format must be STRUCTURED.
  • frequency. Required. A set containing interval_in_secs, the time in seconds after which the system bundles log lines into a file and sends it to a destination. Possible values are 30 and 60.
  • field_delimiter. Sets a space as the delimiter between data set fields in log lines. The only value is SPACE. If used, you must also set format to STRUCTURED.
  • upload_file_prefix. The log file prefix to send to a destination, 200 characters maximum. If unspecified, defaults to ak.
  • upload_file_suffix. The log file suffix to send to a destination, 10 characters maximum. If unspecified, defaults to ds.
• contract_id. Your contract's ID.
• dataset_fields. A set of IDs for the data set fields within the product for which you want to receive logs. The order of the IDs defines their order in the log lines. For values, use the dataset_fields data source to get the available fields for your product, as shown in the sketch after this list. For details on each data set, see Choose data sets.
• group_id. Your group's ID.
• properties. A list of properties the data stream monitors. Data can only be logged on active properties.
• stream_name. The name for your stream.
• {connector}_connector. Destination details for the data stream. Replace {connector} with the respective type listed in the connector table.
• notification_emails. A list of email addresses to which the data stream's activation and deactivation statuses are sent.
• collect_midgress. Boolean that sets the collection of midgress data.
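
A minimal sketch of that field lookup, assuming the provider's akamai_datastream_dataset_fields data source; the product ID and attribute names here are illustrative, so check the data source's documentation for exact values:

data "akamai_datastream_dataset_fields" "my_fields" {
  # Illustrative product ID; use the ID of your own product.
  product_id = "EDGE_LOGS"
}

# Prints the available fields and their IDs so you can pick
# values for dataset_fields.
output "available_fields" {
  value = data.akamai_datastream_dataset_fields.my_fields.dataset_fields
}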

Connectors

For each connector listed, use its heading as the {connector} value in {connector}_connector, for example, gcs_connector. A sketch of an alternative connector block follows the table.

azure
• access_key. The account access key for authentication.
• account_name. The Azure Storage account.
• display_name. The connector's name.
• container_name. The Azure Storage container name.
• path. The path to the log storage folder.

datadog
• auth_token. Your account's API key.
• display_name. The connector's name.
• endpoint. The storage endpoint for the logs.
• tags. The Datadog connector tags.
• compress_logs. Boolean that sets the compression of logs.
• service. The Datadog service connector.
• source. The Datadog source connector.

elasticsearch
• display_name. The connector's name.
• endpoint. The storage endpoint for the logs.
• user_name. The BASIC user name for authentication.
• password. The BASIC password for authentication.
• index_name. The index name for where to store log files.
• tls_hostname. The hostname that verifies the server's certificate and matches the Subject Alternative Names (SANs) in the certificate. If not provided, DataStream fetches the hostname from the endpoint URL.
• ca_cert. The certification authority (CA) certificate used to verify the origin server's certificate. If the certificate is not signed by a well-known certification authority, enter the CA certificate in PEM format for verification.
• client_cert. The digital certificate in PEM format used to authenticate requests to your destination. For mutual authentication, provide both the client certificate and the client key in PEM format.
• client_key. The private key in non-encrypted PKCS8 format used for back-end authentication. For mutual authentication, provide both the client certificate and the client key.
• content_type. The content type to pass in the log file header.
• custom_header_name. A custom header name passed with the request to the destination.
• custom_header_value. The custom header's value passed with the request to the destination.

gcs
• bucket. The bucket name.
• display_name. The connector's name.
• private_key. A JSON private key for a Google Cloud Storage account.
• project_id. A Google Cloud project ID.
• service_account_name. The name of the service account with the storage object create permission or storage object creator role.
• path. The path to the log storage folder.

https
• authentication_type. Either NONE for no authentication or BASIC for username and password authentication.
• display_name. The connector's name.
• endpoint. The storage endpoint for the logs.
• content_type. The content type to pass in the log file header.
• compress_logs. Boolean that sets the compression of logs.
• custom_header_name. A custom header name passed with the request to the destination.
• custom_header_value. The custom header's value passed with the request to the destination.
• password. The BASIC password for authentication.
• user_name. The BASIC user name for authentication.
• tls_hostname. The hostname that verifies the server's certificate and matches the Subject Alternative Names (SANs) in the certificate. If not provided, DataStream fetches the hostname from the endpoint URL.
• ca_cert. The certification authority (CA) certificate used to verify the origin server's certificate. If the certificate is not signed by a well-known certification authority, enter the CA certificate in PEM format for verification.
• client_cert. The digital certificate in PEM format used to authenticate requests to your destination. For mutual authentication, provide both the client certificate and the client key in PEM format.
• client_key. The private key in non-encrypted PKCS8 format used for back-end authentication. For mutual authentication, provide both the client certificate and the client key.

loggly
• display_name. The connector's name.
• endpoint. The storage endpoint for the logs.
• auth_token. The HTTP code for your Loggly bulk endpoint.
• content_type. The content type to pass in the log file header.
• tags. Tags to segment and filter log events in Loggly.
• custom_header_name. A custom header name passed with the request to the destination.
• custom_header_value. The custom header's value passed with the request to the destination.

new_relic
• display_name. The connector's name.
• endpoint. The storage endpoint for the logs.
• auth_token. Your account's API key.
• content_type. The content type to pass in the log file header.
• custom_header_name. A custom header name passed with the request to the destination.
• custom_header_value. The custom header's value passed with the request to the destination.

oracle
• access_key. The account access key for authentication.
• bucket. The bucket name.
• display_name. The connector's name.
• namespace. The Oracle Cloud storage account's namespace.
• path. The path to the log storage folder.
• region. The region where the bucket resides.
• secret_access_key. The secret access key used to authenticate requests to the Oracle Cloud account.

s3
• access_key. The account access key for authentication.
• bucket. The bucket name.
• display_name. The connector's name.
• path. The path to the log storage folder.
• region. The region where the bucket resides.
• secret_access_key. The secret access key used to authenticate requests to the Amazon S3 account.

splunk
• display_name. The connector's name.
• event_collector_token. The Splunk account's event collector token.
• endpoint. The storage endpoint for the logs.
• client_key. The private key in non-encrypted PKCS8 format used for back-end authentication. For mutual authentication, provide both the client certificate and the client key.
• ca_cert. The certification authority (CA) certificate used to verify the origin server's certificate. If the certificate is not signed by a well-known certification authority, enter the CA certificate in PEM format for verification.
• client_cert. The digital certificate in PEM format used to authenticate requests to your destination. For mutual authentication, provide both the client certificate and the client key in PEM format.
• custom_header_name. A custom header name passed with the request to the destination.
• custom_header_value. The custom header's value passed with the request to the destination.
• tls_hostname. The hostname that verifies the server's certificate and matches the Subject Alternative Names (SANs) in the certificate. If not provided, DataStream fetches the hostname from the endpoint URL.
• compress_logs. Boolean that sets the compression of logs.

sumologic
• collector_code. The Sumo Logic endpoint's HTTP collector code.
• display_name. The connector's name.
• endpoint. The storage endpoint for the logs.
• content_type. The content type to pass in the log file header.
• compress_logs. Boolean that sets the compression of logs.
• custom_header_name. A custom header name passed with the request to the destination.
• custom_header_value. The custom header's value passed with the request to the destination.
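
For example, to deliver logs to Amazon S3 instead of Google Cloud Storage, replace the gcs_connector block in the example above with an s3_connector block. A minimal sketch with placeholder values, assuming you've declared the two credential variables:

s3_connector {
  access_key        = var.aws_access_key
  secret_access_key = var.aws_secret_access_key
  bucket            = "my-log-bucket"
  display_name      = "my-s3-connector"
  path              = "akamai/logs"
  region            = "us-east-1"
}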

Attributes

There is no default standard output because some of the attribute values are sensitive. If you haven't set an output method, the response only provides your data stream's ID in the success message.

Setting an output method returns the data stream's details you provided on create along with these computed attributes.
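
A minimal sketch of such an output, marked sensitive because the returned details can include connector credentials:

output "datastream_details" {
  value     = akamai_datastream.my_datastream
  sensitive = true
}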

• id. The resource's ID.
• created_by. The user who created the stream.
• created_date. The ISO 8601 timestamp of when the stream was created.
• modified_by. The user who last updated the stream.
• modified_date. The ISO 8601 timestamp of when the stream was last updated.
• papi_json. The configuration in JSON format that you can paste into a property configuration to enable the datastream behavior.
• product_id. The ID of the product for which the stream was created.
• stream_version. The stream's version.
• latest_version. The version of the latest active stream.

Connectors

Each connector also returns these computed attributes.

azure
• compress_logs. Boolean that sets the compression of logs.

elasticsearch
• m_tls. Boolean that sets mTLS enablement.

gcs
• compress_logs. Boolean that sets the compression of logs.

https
• m_tls. Boolean that sets mTLS enablement.

oracle
• compress_logs. Boolean that sets the compression of logs.

s3
• compress_logs. Boolean that sets the compression of logs.

splunk
• m_tls. Boolean that sets mTLS enablement.