Stream logs to Splunk

DataStream 2 supports sending logs to Splunk, a platform that lets you search, monitor, and analyze your data.

Depending on your choice, DataStream 2 can upload either uncompressed or gzip-compressed log files.

Optionally, you can upload a client certificate to enable mTLS authentication, improving stream security and helping prevent data delivery failures. The custom header feature lets you choose the content type passed in the log file and enter the name and value of a header that your destination accepts.

Before you begin

To use Splunk as a destination for your logs, you need to:

  • Set up an HTTP Event Collector (HEC) instance that matches the type of Splunk software you use. Next, create a token and enable it. See Set up and use HTTP Event Collector in Splunk Web.

  • Save the HEC token that you enabled, and the URL for your event connector. The URL structure depends on the type of your Splunk instance. See Send data to HTTP Event Collector in Splunk Cloud.
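Before wiring up the stream, you can verify the saved token and URL yourself by pushing a test event to the raw HEC endpoint. A minimal sketch using only Python's standard library; the host and token below are placeholders, not real values:

```python
import urllib.request

# Placeholder values -- substitute your own HEC host and the token you saved.
HEC_URL = "https://splunk.example.com:8088/services/collector/raw"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def build_hec_request(url: str, token: str, event: bytes) -> urllib.request.Request:
    """Build a POST carrying one raw event; HEC expects the token in an
    'Authorization: Splunk <token>' header."""
    return urllib.request.Request(
        url,
        data=event,
        headers={"Authorization": f"Splunk {token}"},
        method="POST",
    )

req = build_hec_request(HEC_URL, HEC_TOKEN, b"hello from DataStream setup")
# urllib.request.urlopen(req)  # uncomment to actually send the test event
print(req.get_method(), req.full_url)
```

A `{"text":"Success","code":0}` response from your real endpoint confirms the token and URL are good to use in the steps below.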

How to

  1. In Destination, select Splunk.

  2. In Display name, enter a human-readable description for the destination. The name can't be longer than 255 characters.

  3. In Endpoint, enter the HTTP Event Collector URL to a Splunk endpoint, where you want to send your logs in the <protocol>://<host>:<port>/<endpoint> format. The URL can't be longer than 1000 characters. Example:

    https://<splunk-host>:8088/services/collector/raw
    

    DataStream 2 supports only Splunk HEC URLs for raw events. Entering endpoint URLs ending with /collector or /collector/event will result in an error.

  4. In Event collector token, enter the HEC token you created and enabled in Splunk.

  5. If you want to send compressed gzip logs to this destination, check Send compressed data.

  6. Click Validate & Save to validate the connection to the destination and save the details you provided.

    As part of this validation process, the system uses the provided credentials to push a sample request to the provided endpoint and validate write access. If you chose the Structured log format, the sample data appears as 0,access_validation. For JSON logs, it follows the {"access_validation":true} format. You can see this data only if the destination validates successfully and you can access the destination storage.
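The endpoint rule from step 3 and the validation probe above can be sketched locally. The snippet below checks that a URL points at the raw HEC endpoint and builds the sample payloads pushed during validation; it shows the general gzip pattern used when Send compressed data is checked, not DataStream's exact wire format:

```python
import gzip
import json
from urllib.parse import urlparse

def is_raw_hec_endpoint(url: str) -> bool:
    """DataStream 2 accepts only HEC URLs for raw events,
    not /collector or /collector/event."""
    return urlparse(url).path.rstrip("/").endswith("/services/collector/raw")

# Sample data pushed during validation, per log format:
structured_sample = b"0,access_validation"
json_sample = json.dumps({"access_validation": True}, separators=(",", ":")).encode()

# With "Send compressed data" checked, the body is gzip-compressed.
body = gzip.compress(json_sample)

print(is_raw_hec_endpoint("https://splunk-host:8088/services/collector/raw"))    # True
print(is_raw_hec_endpoint("https://splunk-host:8088/services/collector/event"))  # False
print(gzip.decompress(body).decode())  # {"access_validation":true}
```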

Additional options

  1. Optionally, click Additional options to add mTLS certificates for additional authentication. In Client certificate, enter the:
    • TLS hostname matching the Subject Alternative Names (SANs) present in the SSL certificate for the endpoint URL. If not provided, DataStream 2 fetches the hostname from the URL.
    • CA certificate that you want to use to verify the origin server's certificate. DataStream requires a CA certificate if you provide a self-signed certificate or a certificate signed by an unknown authority. Enter the CA certificate in the PEM format for verification.
    • Client certificate in the PEM format that you want to use to authenticate requests to your destination. If you want to use mutual authentication, provide both the client certificate and the client key.
    • Client key you want to use to authenticate to the backend server in the PEM (non-encrypted PKCS8) format. If you want to use mutual authentication, provide both the client certificate and the client key.

📘

When enabling mTLS authentication for this destination, set requireClientCert to true in Splunk if you want the endpoint to require certificate authentication when receiving log data. See Configure indexers to use a signed SSL certificate in the Splunk documentation.
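The four fields above map onto a standard client-side TLS setup. A minimal sketch using Python's standard ssl module; the .pem file names are hypothetical placeholders for your own files:

```python
import ssl

def mtls_context(ca_cert: str, client_cert: str, client_key: str) -> ssl.SSLContext:
    """Client-side TLS context matching the fields above.
    The paths are placeholders for your PEM files."""
    ctx = ssl.create_default_context(cafile=ca_cert)  # CA certificate (PEM)
    # Client certificate plus unencrypted PKCS8 client key -- the client
    # side of mutual TLS; provide both, as the fields above require.
    ctx.load_cert_chain(certfile=client_cert, keyfile=client_key)
    return ctx

# A default client context already verifies the server's certificate;
# load_cert_chain is what adds client authentication on top.
default_ctx = ssl.create_default_context()
print(default_ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```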

  2. Optionally, go to Custom header and provide the details of the custom header for the log file:

    • In Content type, set the content type to pass in the log file header. application/json is the only supported content type at this time.
    • If your destination accepts only requests with certain headers, enter the Custom header name and Custom header value. The custom header name can contain alphanumeric, dash, and underscore characters.

    You can use this feature for Splunk indexer acknowledgements passed as the X-Splunk-Request-Channel header. See Channels and sending data in the Splunk documentation.

🚧

Forbidden custom header values

DataStream 2 doesn't support custom header values containing:

  • Content-Type
  • Encoding
  • Authorization
  • Host
  • Akamai (allowed if using an Akamaized hostname as destination)
  3. Click Validate & Save to validate the connection to the destination and save the details you provided.
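If you generate stream configurations programmatically, you can pre-check a custom header against the rules above before submitting. A local sketch, not DataStream's actual validator, and it ignores the Akamaized-hostname exception for the Akamai value:

```python
import re

# Values DataStream 2 rejects in custom headers; "Akamai" is additionally
# allowed when the destination is an Akamaized hostname (not modeled here).
FORBIDDEN_SUBSTRINGS = ("content-type", "encoding", "authorization", "host", "akamai")

def custom_header_ok(name: str, value: str) -> bool:
    """Pre-check a custom header name and value against the rules above."""
    if not re.fullmatch(r"[A-Za-z0-9_-]+", name):  # alphanumeric, dash, underscore
        return False
    return not any(s in value.lower() for s in FORBIDDEN_SUBSTRINGS)

print(custom_header_ok("X-Splunk-Request-Channel", "my-channel-id"))  # True
print(custom_header_ok("X-Auth", "Authorization token"))              # False
```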

Akamaized hostname as endpoint

This destination supports using Akamaized hostnames as endpoints to send DataStream 2 logs for improved security. When you create a property with a Splunk endpoint URL as hostname, this property acts as a proxy between the destination and DataStream. As a result, you can filter incoming traffic to your destination endpoint by IP addresses using the Origin IP Access List behavior. That means only IP addresses that belong to your Akamaized property hostname can send logs to your custom destination. Using Akamaized hostnames as endpoints also requires enabling the Allow POST behavior in your property.

Once the property hostname works as a destination endpoint, you cannot monitor it as a property in this or another stream. If you already monitor a property in DataStream, you cannot use it as a destination endpoint.

To enable this feature:

  1. Go to Property Manager and create a new property. We recommend choosing API Acceleration as the product. See Create a brand new property.

  2. Set your Splunk HTTP Event Collector URL as the property hostname. See Redirect users to edge servers.

  3. Go to ☰ > CDN > Properties, or just enter Properties in the search box.

    The Property Groups page opens.

  4. Click the Property Name link to go to the property you created.

  5. Activate the property on the production network. Only properties active on the production network can serve as DataStream destinations. See Activate property on production.

  6. On the Property Details page, click the Version of your configuration that you want to access in Manage Versions and Activations.

    The Property Manager Editor appears.

  7. In the default rule, click Add Behavior, and select Origin IP Access List. Click Insert Behavior.

    The Origin IP Access List behavior appears in the default rule.

  8. Set the Enable slider in the Origin IP Access List behavior to On. Click Save.

  9. Click Add behavior, and select Allow POST.

  10. Click Insert Behavior.

    The Allow POST behavior appears in the default rule.

  11. Set the Behavior option in the Allow POST behavior to Allow.

  12. Click Save.

📘

Tip

You might need to additionally configure your property to ensure uninterrupted data flow. See Configuration best practices in the Property Manager guide for other behaviors you can configure in your property.

  13. Configure the firewall settings at your destination endpoint to allow access for IP addresses that belong to CIDR blocks for your Akamaized hostname. See the Origin IP Access List behavior for the list of IP addresses to put on the allow list.
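To sanity-check your firewall rules, you can test source IPs against the allow-listed CIDR blocks locally. A sketch using Python's ipaddress module; the blocks below are documentation-range placeholders, so substitute the actual list from the Origin IP Access List behavior:

```python
import ipaddress

# Placeholder CIDR blocks (TEST-NET documentation ranges) -- replace with the
# blocks listed for the Origin IP Access List behavior.
ALLOWED_BLOCKS = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def allowed(ip: str) -> bool:
    """Would a request from this source IP pass the firewall allow list?"""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in ALLOWED_BLOCKS)

print(allowed("192.0.2.17"))   # True
print(allowed("203.0.113.5"))  # False
```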

After successfully configuring an Akamaized hostname as the destination endpoint, avoid editing an active property’s setup in Property Manager to ensure uninterrupted data flow. Adding, deleting, and editing hostnames and behaviors may cause unexpected behavior in the DataStream application.

We recommend setting up alerts that send email notifications every time DataStream logs can't be uploaded to your destination, so you can immediately troubleshoot issues with your property or destination configuration. See Set up alerts.

