
DataStream

akamai_datastream

📘

You are viewing the documentation for version 4.1. There is a newer version of this resource. See our migration information to upgrade.

Average processing time: 90 minutes to 3 hours

Akamai constantly gathers log entries from thousands of edge servers around the world. You can use the akamai_datastream resource to capture these logs and deliver them to a connector of your choice at low latency. A connector, also known as a destination, represents the third-party configuration where you want to send your stream's log files. You can set only one connector for each stream.

When creating a stream, you select properties to associate with the stream, data set fields to monitor in logs, and a destination to send these logs to. You can also decide whether to activate the stream on making the request. Only active streams collect and send logs to their destinations.

📘

If you update the connectors' credentials with Control Center, an Akamai API, or an Akamai CLI, Terraform won't implement these changes. All modifications your team makes outside of Terraform get overwritten whenever you run the terraform apply command.

Example

resource "akamai_datastream" "stream" {
    active             = false
    config {
        delimiter          = "SPACE"
        format             = "STRUCTURED"
        frequency {
            time_in_sec = 30
        }
        upload_file_prefix = "pre"
        upload_file_suffix = "suf"
    }
    contract_id        = "C-0N7RAC7"
    dataset_fields_ids = [
        1002, 1005, 1006
    ]
    email_ids          = [
        "example@example.com",
        "example2@example.com"
    ]
    group_id           = 12345
    property_ids       = [
        100011011
    ]
    stream_name        = "Test data stream"
    stream_type        = "RAW_LOGS"
    template_name      = "EDGE_LOGS"

    s3_connector {
        access_key        = "ACC35k3YT2ll1H4dXWx5itGhpc7FlSbvvOvky10"
        bucket            = "example.bucket.com"
        connector_name    = "S3Destination"
        path              = "log/edgelogs"
        region            = "ap-south-1"
        secret_access_key = "SKTACC3K3YAKIA6DK7TD"
    }
}

Argument reference

📘

For security reasons, the arguments marked Secret are not populated when you import this resource. You'll have to add these arguments manually.

The resource supports these arguments:

  • active - (Required) Whether to activate the stream when applying the resource. Set this to true to activate the stream upon sending the request, or to false to leave the stream inactive after the request.

  • config - (Required) Provides information about the log line configuration, log file format, names of log files sent, and file delivery. The argument includes these sub-arguments:

    • delimiter - (Optional) A delimiter that you want to use to separate data set fields in the log lines. Currently, SPACE is the only available delimiter. This field is required for the STRUCTURED log file format.

    • format - (Required) The format in which you want to receive log files, either STRUCTURED or JSON. When delimiter is present in the request, STRUCTURED is the mandatory format.

    • frequency - (Required) How often you want to collect logs from each uploader and send them to a destination.

      • time_in_sec - (Required) The time in seconds after which the system bundles log lines into a file and sends it to a destination. 30 or 60 are the possible values.
    • upload_file_prefix - (Optional) The prefix of the log file that you want to send to a destination. It's a string of at most 200 characters. If unspecified, defaults to ak.

    • upload_file_suffix - (Optional) The suffix of the log file that you want to send to a destination. It's a static string of at most 10 characters. If unspecified, defaults to ds.

  • contract_id - (Required) Identifies the contract that has access to the product.

  • dataset_fields_ids - (Required) Identifiers of the data set fields within the template that you want to receive in logs. The order of the identifiers defines how the values for these fields appear in the log lines. See Data set parameters.

  • email_ids - (Optional) A list of email addresses you want to notify about activations and deactivations of the stream.

  • group_id - (Required) Identifies the group that has access to the product and this stream configuration.

  • property_ids - (Required) Identifies the properties that you want to monitor in the stream. Note that a stream can only log data for active properties.

  • stream_name - (Required) The name of the stream.

  • stream_type - (Required) The type of stream that you want to create. Currently, RAW_LOGS is the only possible stream type.

  • template_name - (Required) The name of the data set template available for the product that you want to use in the stream. Currently, EDGE_LOGS is the only data set template available.

  • s3_connector - (Optional) Specify details about the Amazon S3 connector in a stream. When validating this connector, DataStream uses the provided access_key and secret_access_key values and saves an akamai_write_test_2147483647.txt file in your Amazon S3 folder. You can only see this file if validation succeeds, and you have access to the Amazon S3 bucket and folder that you're trying to send logs to. The argument includes these sub-arguments:

    • access_key - (Required) Secret. The access key identifier that you use to authenticate requests to your Amazon S3 account. See Managing access keys (AWS API).
    • bucket - (Required) The name of the Amazon S3 bucket. See Working with Amazon S3 Buckets.
    • connector_name - (Required) The name of the connector.
    • path - (Required) The path to the folder within your Amazon S3 bucket where you want to store your logs. See Amazon S3 naming conventions.
    • region - (Required) The AWS region where your Amazon S3 bucket resides. See Regions and Zones in AWS.
    • secret_access_key - (Required) Secret. The secret access key identifier that you use to authenticate requests to your Amazon S3 account.
  • azure_connector - (Optional) Specify details about the Azure Storage connector configuration in a data stream. Note that currently DataStream supports only streaming data to block objects. The argument includes these sub-arguments, shown in the example below:

    • access_key - (Required) Secret. Either of the access keys associated with your Azure Storage account. See View account access keys in Azure.
    • account_name - (Required) Specifies the Azure Storage account name.
    • connector_name - (Required) The name of the connector.
    • container_name - (Required) Specifies the Azure Storage container name.
    • path - (Required) The path to the folder within the Azure Storage container where you want to store your logs. See Azure blob naming conventions.
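
For reference, here's a sketch of an azure_connector block as it might appear inside the akamai_datastream resource, in place of the s3_connector block in the example above. All values are placeholders:

azure_connector {
    # Placeholder values; replace with your Azure Storage account details.
    access_key     = "QXp1cmVBY2Nlc3NLZXlQbGFjZWhvbGRlcg=="
    account_name   = "examplestorageaccount"
    connector_name = "AzureDestination"
    container_name = "example-container"
    path           = "log/edgelogs"
}
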
  • datadog_connector - (Optional) Specify details about the Datadog connector in a stream (see the example below), including:

    • auth_token - (Required) Secret. The API key associated with your Datadog account. See View API keys in Datadog.
    • compress_logs - (Optional) Enables GZIP compression for a log file sent to a destination. If unspecified, this defaults to false.
    • connector_name - (Required) The name of the connector.
    • service - (Optional) The service of the Datadog connector. A service groups together endpoints, queries, or jobs for the purposes of scaling instances. See View Datadog reserved attribute list.
    • source - (Optional) The source of the Datadog connector. See View Datadog reserved attribute list.
    • tags - (Optional) The tags of the Datadog connector. See View Datadog tags.
    • url - (Required) The Datadog endpoint where you want to store your logs. See View Datadog logs endpoint.
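
A sketch of a datadog_connector block under the same assumptions; the key, service, source, tags, and url values are placeholders, so use the logs endpoint and API key from your own Datadog account:

datadog_connector {
    # Placeholder API key; use the key from your Datadog account.
    auth_token     = "dd0ap1k3ypl4c3h0ld3r"
    compress_logs  = false
    connector_name = "DatadogDestination"
    service        = "datastream"
    source         = "akamai"
    tags           = "env:example"
    url            = "https://http-intake.logs.datadoghq.com/v1/input"
}
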
  • splunk_connector - (Optional) Specify details about the Splunk connector in your stream. Note that currently DataStream supports only endpoint URLs ending with collector/raw. The argument includes these sub-arguments, shown in the example below:

    • compress_logs - (Optional) Enables GZIP compression for a log file sent to a destination. If unspecified, this defaults to true.
    • connector_name - (Required) The name of the connector.
    • event_collector_token - (Required) Secret. The Event Collector token associated with your Splunk account. See View usage of Event Collector token in Splunk.
    • url - (Required) The raw event Splunk URL where you want to store your logs.
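
A sketch of a splunk_connector block with placeholder values; note that the URL keeps the required collector/raw ending:

splunk_connector {
    compress_logs         = true
    connector_name        = "SplunkDestination"
    # Placeholder token; use the Event Collector token from your Splunk account.
    event_collector_token = "ec-t0k3n-pl4c3h0ld3r"
    url                   = "https://example.splunkcloud.com:8088/services/collector/raw"
}
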
  • gcs_connector - (Optional) Specify details about the Google Cloud Storage connector you can use in a stream. When validating this connector, DataStream uses the private access key to create an Akamai_access_verification_<timestamp>.txt object file in your GCS bucket. You can only see this file if the validation process is successful, and you have access to the Google Cloud Storage bucket where you are trying to send logs. The argument includes these sub-arguments, shown in the example below:

    • bucket - (Required) The name of the storage bucket you created in your Google Cloud account. See Bucket naming conventions.
    • connector_name - (Required) The name of the connector.
    • path - (Optional) The path to the folder within your Google Cloud bucket where you want to store logs. See Object naming guidelines.
    • private_key - (Required) Secret. The contents of the JSON private key you generated and downloaded in your Google Cloud Storage account.
    • project_id - (Required) The unique ID of your Google Cloud project.
    • service_account_name - (Required) The name of the service account with the storage.object.create permission or Storage Object Creator role.
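
A sketch of a gcs_connector block; the bucket, project, service account, and key values are placeholders:

gcs_connector {
    bucket               = "example-logs-bucket"
    connector_name       = "GCSDestination"
    path                 = "log/edgelogs"
    # Placeholder key; paste the contents of the JSON private key you downloaded.
    private_key          = "-----BEGIN PRIVATE KEY-----\nPL4C3H0LD3R\n-----END PRIVATE KEY-----\n"
    project_id           = "example-project-123456"
    service_account_name = "datastream-writer"
}
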
  • https_connector - (Optional) Specify details about the custom HTTPS endpoint you can use as a connector for a stream (see the example below), including:

    • authentication_type - (Required) Either NONE for no authentication, or BASIC. For basic authentication, provide the user_name and password you set in your custom HTTPS endpoint.
    • compress_logs - (Optional) Whether to enable GZIP compression for a log file sent to a destination. If unspecified, this defaults to false.
    • connector_name - (Required) The name of the connector.
    • password - (Optional) Secret. Enter the password you set in your custom HTTPS endpoint for authentication.
    • url - (Required) Enter the secure URL where you want to send and store your logs.
    • user_name - (Optional) Secret. Enter the valid username you set in your custom HTTPS endpoint for authentication.
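
A sketch of an https_connector block using basic authentication; the URL and credentials are placeholders for your own endpoint:

https_connector {
    authentication_type = "BASIC"
    compress_logs       = false
    connector_name      = "HTTPSDestination"
    # Placeholder credentials for the basic authentication you set on your endpoint.
    password            = "pl4c3h0ld3rp4ssw0rd"
    url                 = "https://logs.example.com/datastream/ingest"
    user_name           = "datastream-user"
}
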
  • sumologic_connector - (Optional) Specify details about the Sumo Logic connector in a stream (see the example below), including:

    • collector_code - (Required) Secret. The unique HTTP collector code of your Sumo Logic endpoint.
    • compress_logs - (Optional) Enables GZIP compression for a log file sent to a destination. If unspecified, this defaults to true.
    • connector_name - (Required) The name of the connector.
    • endpoint - (Required) The Sumo Logic collection endpoint where you want to send your logs. You should follow the https://<SumoEndpoint>/receiver/v1/http format and pass the collector code in the collectorCode argument.
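
A sketch of a sumologic_connector block; the collector code is a placeholder and the endpoint host depends on your Sumo Logic deployment:

sumologic_connector {
    # Placeholder collector code from your Sumo Logic HTTP source.
    collector_code = "c0ll3ct0rc0d3pl4c3h0ld3r"
    compress_logs  = true
    connector_name = "SumoLogicDestination"
    endpoint       = "https://collectors.sumologic.com/receiver/v1/http"
}
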
  • oracle_connector - (Optional) Specify details about the Oracle Cloud Storage connector in a stream. When validating this connector, DataStream uses the provided access_key and secret_access_key values and tries to save an Akamai_access_verification_<timestamp>.txt file in your Oracle Cloud Storage folder. You can only see this file if the validation process is successful, and you have access to the Oracle Cloud Storage bucket and folder that you're trying to send logs to. The argument includes these sub-arguments, shown in the example below:

    • access_key - (Required) Secret. The access key identifier that you use to authenticate requests to your Oracle Cloud account. See Managing user credentials in OCS.
    • bucket - (Required) The name of the Oracle Cloud Storage bucket. See Working with Oracle Cloud Storage buckets.
    • connector_name - (Required) The name of the connector.
    • namespace - (Required) The namespace of your Oracle Cloud Storage account. See Understanding Object Storage namespaces.
    • path - (Required) The path to the folder within your Oracle Cloud Storage bucket where you want to store your logs.
    • region - (Required) The Oracle Cloud Storage region where your bucket resides. See Regions and availability domains in OCS.
    • secret_access_key - (Required) Secret. The secret access key identifier that you use to authenticate requests to your Oracle Cloud account.
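
A sketch of an oracle_connector block; the keys, namespace, bucket, and region values are placeholders:

oracle_connector {
    # Placeholder credentials; use the customer secret keys from your Oracle Cloud account.
    access_key        = "OCS4CC355K3YPL4C3H0LD3R"
    bucket            = "example-logs-bucket"
    connector_name    = "OracleDestination"
    namespace         = "example-namespace"
    path              = "log/edgelogs"
    region            = "us-ashburn-1"
    secret_access_key = "OCS53CR3T4CC355K3YPL4C3H0LD3R"
}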

Attributes reference

There is no default standard output because the attribute values are sensitive, but you can get your data stream's ID from the last line of the process log:

akamai_datastream.my_datastream: Creation complete after 9s [id=12345]