Stream logs to a custom HTTPS endpoint

Follow these steps to send DataStream 2 logs to a custom HTTPS endpoint so that on-premises software can receive and process them.

You can optionally upload a client certificate to enable mTLS authentication for improved stream security and fewer data delivery failures, and configure a custom request header. For steps, see [Additional options](doc:stream-custom-https#Additional options).

DataStream sends logs directly to your custom HTTPS destination as POST requests, rather than as a file of log lines with a pre-configured filename as with other destinations. Configure your destination endpoint to handle the POST body content.

For security reasons, DataStream sends logs over TLS even if the endpoint’s policies allow insecure requests.
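
To illustrate what the destination has to handle, here is a minimal sketch of a receiver built on Python's standard library. The certificate paths and port are placeholders, and the Content-Encoding check is an assumption about how a compressed payload is flagged; note also that DataStream requests can use chunked transfer encoding, which http.server doesn't handle natively, so a production receiver should sit behind a server that does.

```python
import gzip
import ssl
from http.server import BaseHTTPRequestHandler, HTTPServer

class LogHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the POST body that carries the log lines.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # Decompress if the stream sends compressed data (assumed header).
        if self.headers.get("Content-Encoding") == "gzip":
            body = gzip.decompress(body)
        for line in body.decode("utf-8", errors="replace").splitlines():
            print(line)  # hand each log line off to your processing pipeline
        self.send_response(200)
        self.end_headers()

# Serve over TLS only; "server.crt" and "server.key" are placeholder paths.
server = HTTPServer(("0.0.0.0", 8443), LogHandler)
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain("server.crt", "server.key")
server.socket = context.wrap_socket(server.socket, server_side=True)
server.serve_forever()
```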

Before you begin

  • Deploy a dedicated HTTPS endpoint that supports URL token authentication.
  • Enable TLS transport in the endpoint to receive log data.
  • Optionally, configure a username and password in your custom endpoint. This applies if you want to use Basic authentication for log streaming.

📘

Custom endpoints and JSON support

If your custom HTTPS endpoint's setup supports only the JSON format, make sure you choose JSON as the log file format for your stream in the Log format tab.

How to

  1. In Destination, select Custom HTTPS.

  2. In Name, enter a human-readable description for the destination.

  3. In Endpoint URL, enter the secure URL where you want to send and store your logs.

📘

Endpoint URL requirements

Provide an endpoint URL that supports POST requests. If you want to use Basic authentication, make sure your endpoint supports it.

Enter a URL whose hostname is not an IPv4 or IPv6 address.

  4. In Authentication, select:

    • Basic if you want to authenticate log streaming to your custom destination. Provide the Username and Password you set in your custom HTTPS endpoint.
    • None for no authentication.
  5. If you want to send compressed gzip files to your destination, check the Send compressed data box.

  6. Click Validate & Save to validate the connection to the destination and save the details you provided.

    As part of this validation, the system uses the provided credentials to push a sample POST request to the endpoint and confirm write access. If you chose the Structured log format, the sample data appears as 0,access_validation. For JSON logs, it follows the {"access_validation":true} format. You can see the data only if the destination validates and you can access the destination storage.
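
If you select Basic authentication, the receiving side can verify the credentials on each request. A minimal sketch that decodes the Authorization header DataStream sends; EXPECTED_USER and EXPECTED_PASS are hypothetical values standing in for the credentials configured on your endpoint:

```python
import base64
from hmac import compare_digest

# Hypothetical credentials; use the Username and Password set for the stream.
EXPECTED_USER = "datastream"
EXPECTED_PASS = "s3cret"

def authorized(header):
    """Validate an 'Authorization: Basic <base64>' header value."""
    if not header or not header.startswith("Basic "):
        return False
    try:
        user, _, password = base64.b64decode(header[6:]).decode().partition(":")
    except Exception:
        return False
    # Compare in constant time to avoid timing side channels.
    return (compare_digest(user.encode(), EXPECTED_USER.encode())
            and compare_digest(password.encode(), EXPECTED_PASS.encode()))
```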

Additional options

  1. Optionally, click Additional options to add mTLS certificates for additional authentication. In Client certificate, enter the:
    • TLS hostname matching the Subject Alternative Names (SANs) present in the SSL certificate for the endpoint URL. If not provided, DataStream 2 fetches the hostname from the URL.
    • CA certificate that you want to use to verify the origin server's certificate. DataStream requires a CA certificate if you provide a self-signed certificate or a certificate signed by an unknown authority. Enter the CA certificate in the PEM format for verification.
    • Client certificate in the PEM format that you want to use to authenticate requests to your destination. If you want to use mutual authentication, provide both the client certificate and the client key.
    • Client key you want to use to authenticate to the backend server in the PEM (non-encrypted PKCS8) format. If you want to use mutual authentication, provide both the client certificate and the client key.

📘

When enabling mTLS authentication for a custom destination, configure all the settings the endpoint requires to authenticate with a valid client certificate.
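
On the endpoint side, enforcing mTLS means the server requests and verifies a client certificate during every TLS handshake. A minimal sketch using Python's ssl module; the file paths are placeholders, and client-ca.pem stands for whichever CA signed the client certificate you uploaded to DataStream:

```python
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# The endpoint's own certificate and key (placeholder paths).
context.load_cert_chain("server.crt", "server.key")
# Trust anchor for verifying the client certificate DataStream presents.
context.load_verify_locations("client-ca.pem")
# Reject any connection that doesn't present a valid client certificate.
context.verify_mode = ssl.CERT_REQUIRED

# Wrap your listening socket with this context, as in the earlier receiver sketch:
# server.socket = context.wrap_socket(server.socket, server_side=True)
```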

  2. Optionally, go to Custom header and provide the details of the custom header for the log file:

    • In Content type, set the content type to pass in the log file header. You can choose application/json or application/json; charset=utf-8 for destinations that require the charset parameter.

      Note: Some custom HTTPS destination endpoints may not be compatible with DataStream log delivery. Before using any endpoint to stream logs, we recommend creating a mock stream for this endpoint to ensure that the logs are delivered to your destination, and log data is complete.

    • If your destination accepts requests only with certain headers, enter the Custom header name and Custom header value. The custom header name can contain alphanumeric characters, dashes, and underscores. See the validation sketch after these steps.

🚧

Forbidden custom header values

DataStream 2 does not support custom header values containing:

  • Content-Type
  • Encoding
  • Authorization
  • Host
  • Akamai
  3. Click Validate & Save to validate the connection to the destination and save the details you provided.
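
As a rough pre-flight check, you can test a planned custom header against the rules above before saving the stream. This sketch assumes the forbidden strings are matched case-insensitively, which is an assumption rather than documented behavior:

```python
import re

# Strings DataStream 2 rejects in custom header values (see the list above).
FORBIDDEN = ("content-type", "encoding", "authorization", "host", "akamai")

def valid_custom_header(name, value):
    """Return True if the header name and value satisfy the documented rules."""
    # The name may contain only alphanumeric, dash, and underscore characters.
    if not re.fullmatch(r"[A-Za-z0-9_-]+", name):
        return False
    # The value must not contain any forbidden string (case-insensitivity assumed).
    lowered = value.lower()
    return not any(word in lowered for word in FORBIDDEN)

print(valid_custom_header("X-Stream-Id", "logs-42"))   # True
print(valid_custom_header("X-Auth", "Authorization"))  # False
```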

Akamaized hostname as endpoint

You can use Akamaized hostnames as endpoints to send DataStream 2 logs for improved security. When you create a property with a custom HTTPS endpoint URL as hostname, this property acts as a proxy between the destination and DataStream.

As a result, you can filter incoming traffic to your destination endpoint by IP addresses using the Origin IP Access List behavior. That means only IP addresses that belong to your Akamaized property hostname can send logs to your custom destination. Using Akamaized hostnames as endpoints also requires enabling the Allow POST behavior in your property.

Once the property hostname works as a destination endpoint, you cannot monitor it as a property in this or another stream. If you already monitor a property in DataStream, you cannot use it as a destination endpoint.

To enable this feature:

  1. Go to Property Manager and create a new property. We recommend choosing API Acceleration as the product. See Create a brand new property.

  2. Set your custom HTTPS endpoint URL as the property hostname. See Redirect users to edge servers.

  3. Go to ☰ > CDN > Properties, or enter Properties in the search box.

    The Property Groups page opens.

  4. Click the Property Name link to go to the property you created.

  5. Activate the property on the production network. Only properties active on the production network can serve as DataStream destinations. See Activate property on production.

  6. On the Property Details page, click the Version of your configuration that you want to access in Manage Versions and Activations.

    The Property Manager Editor appears.

  7. In the default rule, click Add Behavior, and select Origin IP Access List. Click Insert Behavior.

    The Origin IP Access List behavior appears in the default rule.

  8. Set the Enable slider in the Origin IP Access List behavior to On. Click Save.

  9. Click Add Behavior, and select Allow POST.

  10. Click Insert Behavior.

    The Allow POST behavior appears in the default rule.

  11. Set the Behavior option in the Allow POST behavior to Allow.

  12. Click Save.

📘

Tip

You might need to additionally configure your property to ensure uninterrupted data flow. See Configuration best practices in the Property Manager guide for other behaviors you can configure in your property.

  13. Configure the firewall settings at your destination endpoint to allow access for IP addresses that belong to CIDR blocks for your Akamaized hostname. See the Origin IP Access List behavior for the list of IP addresses to put on the allow list.
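
To illustrate the allow-list logic (this is not a firewall configuration), the following sketch checks a source IP against a set of CIDR blocks. The blocks shown are documentation placeholders, not Akamai ranges; substitute the list from the Origin IP Access List behavior:

```python
import ipaddress

# Placeholder CIDR blocks; replace with the published Akamai ranges.
ALLOWED = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("2001:db8::/32"),
]

def allowed_source(ip):
    """Return True if the client IP falls inside any allowed CIDR block."""
    addr = ipaddress.ip_address(ip)
    return any(addr.version == net.version and addr in net for net in ALLOWED)

print(allowed_source("192.0.2.17"))  # True with the placeholder blocks above
```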

After successfully configuring an Akamaized hostname as the destination endpoint, avoid editing the active property's setup in Property Manager to ensure uninterrupted data flow. Adding, deleting, or editing hostnames and behaviors may cause unexpected behavior in the DataStream application.

We recommend setting up alerts that send email notifications whenever DataStream logs can't be uploaded to your destination, so you can immediately troubleshoot issues with your property or destination configuration. See Set up alerts.

Request examples

Depending on the configuration, including the authentication type you choose, requests to your destination may look different. See the request header examples below:

None authentication:

```
Host: pdxsqalinuxvm.eastus2.cloudapp.azure.com:8102
User-Agent: Go-http-client/1.1
Connection: close
Transfer-Encoding: chunked
Accept-Encoding: gzip
```

Basic authentication:

```
Host: pdxsqalinuxvm.eastus2.cloudapp.azure.com:8002
User-Agent: Go-http-client/1.1
Connection: close
Transfer-Encoding: chunked
Authorization: Basic Og==
Accept-Encoding: gzip
```
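
The Authorization value in the Basic example is the Base64 encoding of username:password; the Og== above decodes to an empty username and password. To derive the value for your own credentials:

```python
import base64

def basic_auth_value(username, password):
    """Build the value DataStream sends in the Authorization header."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_value("", ""))  # -> "Basic Og==", as in the example above
```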