Data stream

akamai_datastream

🚧

You are viewing the documentation for version 5.0.

This version introduces backwards incompatible changes. See the previous version or our migration information to upgrade.

Average processing time: 90 minutes to 3 hours

Create and manage a data stream to capture log entries from edge servers and deliver them to a specified connector.

📘

To deactivate or delete a data stream, use terraform destroy.

resource "akamai_datastream" "my_datastream" {
  active = false
  delivery_configuration {
    field_delimiter = "SPACE"
    format          = "STRUCTURED"
    frequency {
      interval_in_secs = 30
    }
    upload_file_prefix = "prefix"
    upload_file_suffix = "suffix"
  }
  contract_id = "C-0N7RAC7"
  dataset_fields = [
    1000
  ]
  group_id = 12345
  properties = [
    12345, 98765
  ]
  stream_name = "Datastream_Example1"
  gcs_connector {
    bucket               = "my_bucket"
    display_name         = "my_connector_name"
    path                 = "akamai/logs"
    private_key          = "-----BEGIN PRIVATE KEY-----\nprivate_key\n-----END PRIVATE KEY-----\n"
    project_id           = "my_project_id"
    service_account_name = "my_service_account_name"
  }
  notification_emails = [
    "example1@example.com",
    "example2@example.com",
  ]
  collect_midgress = true
}

Arguments

Pass the required arguments to create or modify a data stream.

Argument | Required | Description
active | ✔️ | Boolean that sets the activation status upon terraform apply.
delivery_configuration | ✔️ | A set that provides configuration information for the logs.
  • field_delimiter. Sets a space as a delimiter to separate data set fields in log lines. Value is SPACE. If used, you must also set the format argument to STRUCTURED.
  • format. Required. The format in which you want to receive log files, STRUCTURED or JSON. If you've used a delimiter, the format must be STRUCTURED.
  • frequency. Required. A set that includes interval_in_secs, the time in seconds after which the system bundles log lines into a file and sends the file to a destination. Possible values are 30 and 60.
  • upload_file_prefix. The log file prefix to send to a destination. Maximum 200 characters. If unspecified, it defaults to ak.
  • upload_file_suffix. The log file suffix to send to a destination. Maximum 10 characters. If unspecified, it defaults to ds.
contract_id | ✔️ | Your contract's ID.
dataset_fields | ✔️ | A set of IDs for the data set fields within the product for which you want to receive logs. The order of the IDs defines their order in the log lines. For values, use the dataset_fields data source to get the available fields for your product.
group_id | ✔️ | Your group's ID.
properties | ✔️ | A list of properties the data stream monitors. Data can only be logged on active properties.
stream_name | ✔️ | Your stream's name.
<connector>_connector | ✔️ | Destination details for the data stream. Replace <connector> with the respective type listed in the connector table.
notification_emails | | A list of email addresses to which the data stream's activation and deactivation statuses are sent.
collect_midgress | | Boolean that sets the collection of midgress data.

Connectors

For each of the connectors listed, use the section heading as the value of <connector> in <connector>_connector, for example, gcs_connector.

Argument | Required | Description
azure
access_key | ✔️ | The account access key for authentication.
account_name | ✔️ | The Azure Storage account.
display_name | ✔️ | The connector's name.
container_name | ✔️ | The Azure Storage container name.
path | ✔️ | The path to the log storage folder.
compress_logs | | Boolean that sets the compression of logs.
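Based on the arguments above, an azure_connector block might look like this sketch; all values are placeholders:

```hcl
  azure_connector {
    access_key     = "account_access_key"
    account_name   = "my_storage_account"
    display_name   = "my_azure_connector"
    container_name = "my_container"
    path           = "akamai/logs"
    compress_logs  = true
  }
```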
datadog
auth_token | ✔️ | Your account's API key.
display_name | ✔️ | The connector's name.
endpoint | ✔️ | The storage endpoint for the logs.
tags | | The Datadog connector tags.
compress_logs | | Boolean that sets the compression of logs.
service | | The Datadog service connector.
source | | The Datadog source connector.
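A datadog_connector block following the table above might look like this sketch; the endpoint URL and all other values are placeholders, not verified Datadog values:

```hcl
  datadog_connector {
    auth_token    = "datadog_api_key"
    display_name  = "my_datadog_connector"
    endpoint      = "https://datadog.example.com/logs/input"
    tags          = "env:prod"
    service       = "my_service"
    source        = "akamai"
    compress_logs = false
  }
```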
elasticsearch
display_name | ✔️ | The connector's name.
endpoint | ✔️ | The storage endpoint for the logs.
user_name | ✔️ | The BASIC user name for authentication.
password | ✔️ | The BASIC password for authentication.
index_name | ✔️ | The index name for where to store log files.
tls_hostname | | The hostname that verifies the server's certificate and matches the Subject Alternative Names (SANs) in the certificate. If not provided, DataStream fetches the hostname from the endpoint URL.
ca_cert | | The certification authority (CA) certificate used to verify the origin server's certificate. If the certificate is not signed by a well-known certification authority, enter the CA certificate in PEM format for verification.
client_cert | | The digital certificate in PEM format you want to use to authenticate requests to your destination. If you want to use mutual authentication, you need to provide both the client certificate and the client key in PEM format.
client_key | | The private key for back-end authentication in non-encrypted PKCS8 format. If you want to use mutual authentication, you need to provide both the client certificate and the client key.
m_tls | | Boolean that sets mTLS enablement.
content_type | | The content type to pass in the log file header.
custom_header_name | | A custom header name passed with the request to the destination.
custom_header_value | | The custom header's value passed with the request to the destination.
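Using the required arguments above, a minimal elasticsearch_connector block could look like this sketch; the endpoint, credentials, and index name are placeholder values:

```hcl
  elasticsearch_connector {
    display_name = "my_elasticsearch_connector"
    endpoint     = "https://elasticsearch.example.com:9200/_bulk"
    user_name    = "my_user"
    password     = "my_password"
    index_name   = "akamai-logs"
  }
```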
gcs
bucket | ✔️ | The bucket name.
display_name | ✔️ | The connector's name.
private_key | ✔️ | A JSON private key for a Google Cloud Storage account.
project_id | ✔️ | A Google Cloud project ID.
service_account_name | ✔️ | The name of the service account with the storage object create permission or storage object creator role.
compress_logs | | Boolean that sets the compression of logs.
path | | The path to the log storage folder.
https
authentication_type | ✔️ | Either NONE for no authentication or BASIC for username and password authentication.
display_name | ✔️ | The connector's name.
content_type | ✔️ | The content type to pass in the log file header.
endpoint | ✔️ | The storage endpoint for the logs.
m_tls | | Boolean that sets mTLS enablement.
compress_logs | | Boolean that sets the compression of logs.
custom_header_name | | A custom header name passed with the request to the destination.
custom_header_value | | The custom header's value passed with the request to the destination.
password | | The BASIC password for authentication.
user_name | | The BASIC user name for authentication.
tls_hostname | | The hostname that verifies the server's certificate and matches the Subject Alternative Names (SANs) in the certificate. If not provided, DataStream fetches the hostname from the endpoint URL.
ca_cert | | The certification authority (CA) certificate used to verify the origin server's certificate. If the certificate is not signed by a well-known certification authority, enter the CA certificate in PEM format for verification.
client_cert | | The digital certificate in PEM format you want to use to authenticate requests to your destination. If you want to use mutual authentication, you need to provide both the client certificate and the client key in PEM format.
client_key | | The private key for back-end authentication in non-encrypted PKCS8 format. If you want to use mutual authentication, you need to provide both the client certificate and the client key.
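An https_connector block with BASIC authentication might look like this sketch; the endpoint and credentials are placeholders:

```hcl
  https_connector {
    authentication_type = "BASIC"
    display_name        = "my_https_connector"
    content_type        = "application/json"
    endpoint            = "https://logs.example.com/ingest"
    user_name           = "my_user"
    password            = "my_password"
    compress_logs       = true
  }
```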
loggly
display_name | ✔️ | The connector's name.
endpoint | ✔️ | The storage endpoint for the logs.
auth_token | ✔️ | The HTTP code for your Loggly bulk endpoint.
content_type | | The content type to pass in the log file header.
tags | | Tags to segment and filter log events in Loggly.
custom_header_name | | A custom header name passed with the request to the destination.
custom_header_value | | The custom header's value passed with the request to the destination.
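A loggly_connector block covering the required arguments might look like this sketch; the endpoint URL and token are placeholders:

```hcl
  loggly_connector {
    display_name = "my_loggly_connector"
    endpoint     = "https://loggly.example.com/bulk"
    auth_token   = "loggly_token"
    tags         = "akamai,logs"
  }
```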
new_relic
display_name | ✔️ | The connector's name.
endpoint | ✔️ | The storage endpoint for the logs.
auth_token | ✔️ | Your account's API key.
content_type | | The content type to pass in the log file header.
custom_header_name | | A custom header name passed with the request to the destination.
custom_header_value | | The custom header's value passed with the request to the destination.
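A new_relic_connector block might look like this sketch; the endpoint URL and API key are placeholders:

```hcl
  new_relic_connector {
    display_name = "my_newrelic_connector"
    endpoint     = "https://newrelic.example.com/log/v1"
    auth_token   = "new_relic_api_key"
    content_type = "application/json"
  }
```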
oracle
access_key | ✔️ | The account access key for authentication.
bucket | ✔️ | The bucket name.
compress_logs | ✔️ | Boolean that sets the compression of logs.
display_name | ✔️ | The connector's name.
namespace | ✔️ | The Oracle Cloud storage account's namespace.
path | ✔️ | The path to the log storage folder.
region | ✔️ | The region where the bucket resides.
secret_access_key | ✔️ | The secret access key used to authenticate requests to the Oracle Cloud account.
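Since every oracle argument is required, a full oracle_connector block might look like this sketch; all values, including the region, are placeholders:

```hcl
  oracle_connector {
    access_key        = "access_key"
    bucket            = "my_bucket"
    compress_logs     = true
    display_name      = "my_oracle_connector"
    namespace         = "my_namespace"
    path              = "akamai/logs"
    region            = "us-ashburn-1"
    secret_access_key = "secret_access_key"
  }
```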
s3
access_key | ✔️ | The account access key for authentication.
bucket | ✔️ | The bucket name.
display_name | ✔️ | The connector's name.
path | ✔️ | The path to the log storage folder.
region | ✔️ | The region where the bucket resides.
secret_access_key | ✔️ | The secret access key used to authenticate requests to the Amazon S3 account.
compress_logs | | Boolean that sets the compression of logs.
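An s3_connector block mirroring the table above might look like this sketch; the keys, bucket, and region are placeholders:

```hcl
  s3_connector {
    access_key        = "access_key"
    bucket            = "my_bucket"
    display_name      = "my_s3_connector"
    path              = "akamai/logs"
    region            = "us-east-1"
    secret_access_key = "secret_access_key"
    compress_logs     = true
  }
```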
splunk
display_name | ✔️ | The connector's name.
event_collector_token | ✔️ | The Splunk account's event collector token.
endpoint | ✔️ | The storage endpoint for the logs.
client_key | | The private key for back-end authentication in non-encrypted PKCS8 format. If you want to use mutual authentication, you need to provide both the client certificate and the client key.
ca_cert | | The certification authority (CA) certificate used to verify the origin server's certificate. If the certificate is not signed by a well-known certification authority, enter the CA certificate in PEM format for verification.
client_cert | | The digital certificate in PEM format you want to use to authenticate requests to your destination. If you want to use mutual authentication, you need to provide both the client certificate and the client key in PEM format.
m_tls | | Boolean that sets mTLS enablement.
custom_header_name | | A custom header name passed with the request to the destination.
custom_header_value | | The custom header's value passed with the request to the destination.
tls_hostname | | The hostname that verifies the server's certificate and matches the Subject Alternative Names (SANs) in the certificate. If not provided, DataStream fetches the hostname from the endpoint URL.
compress_logs | | Boolean that sets the compression of logs.
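A minimal splunk_connector block covering the required arguments might look like this sketch; the endpoint URL and token are placeholders:

```hcl
  splunk_connector {
    display_name          = "my_splunk_connector"
    event_collector_token = "event_collector_token"
    endpoint              = "https://splunk.example.com:8088/services/collector"
    compress_logs         = false
  }
```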
sumologic
collector_code | ✔️ | The Sumo Logic endpoint's HTTP collector code.
content_type | ✔️ | The content type to pass in the log file header.
display_name | ✔️ | The connector's name.
endpoint | ✔️ | The storage endpoint for the logs.
compress_logs | | Boolean that sets the compression of logs.
custom_header_name | | A custom header name passed with the request to the destination.
custom_header_value | | The custom header's value passed with the request to the destination.
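A sumologic_connector block covering the required arguments might look like this sketch; the collector code and endpoint URL are placeholders:

```hcl
  sumologic_connector {
    collector_code = "collector_code"
    content_type   = "application/json"
    display_name   = "my_sumologic_connector"
    endpoint       = "https://sumologic.example.com/receiver/v1/http"
    compress_logs  = true
  }
```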

Attributes

Because the attribute values are sensitive, there's no default standard output, but you can get your data stream's ID from the last line of the process log.

akamai_datastream.my_datastream: Creation complete after 9s [id=12345]