Export SIA logs
You can export event and activity logs from SIA in large volumes and import them into Linode Object Storage or Microsoft Azure Blob Storage. Log data is stored in your Linode S3-compatible object storage bucket or in your Azure blob.
To configure this feature:
- On the Connection Info page (Clients & Connectors > Connection Info), you specify the destination settings for Linode or Azure.
- On the Scheduled Reports & Notifications page (Reports > Scheduled Reports & Notifications), you select the logs that you want to export. You can export data from the Threat Events, Access Control, and DNS Activity reports. If your organization uses SIA Proxy, you can export data from the Proxy Activity report. To learn more about the data that is exported, see Exported log data.
After these settings are configured, Linode or Azure receives logs. The following applies:
- This operation exports data at 10-minute intervals.
- By default, data is compressed into separate CSV files, one for each log type: DNS Activity, Proxy Activity, and All Events.
- For Linode, you generate an encryption key that's used to encrypt log data. You must use the same key to decrypt the log data. For more information, see Decrypt log data stored on Linode.
This feature is currently in limited availability. To use this feature, contact your Akamai account representative.
Log filename convention
Logs are compressed and exported into your Linode S3 bucket or your Azure blob storage based on the export settings you configure in SIA (Clients & Connectors > Connection Info). Logs are organized into folders that indicate the type of log that's exported.
Exported logs have this folder structure and use this filename convention:
<logType>/<timestamp>_<internalKey>_<partNumber>.<format>
where:
<logType>
is the type of logs that were exported. This value can be DNS_ACTIVITY, PROXY_ACTIVITY, or EVENTS.
<timestamp>
is the epoch timestamp in milliseconds when data was exported from SIA.
<internalKey>
is a number used for internal purposes only. You can ignore this value.
<partNumber>
indicates the log file number in case the file is divided into multiple parts. For example, if the part number is 2, logs continued into a second file. At this time, a single log file can have a maximum of 5 million rows.
<format>
is the file format. CSV files are compressed into a gzip (.gz) file.
The following is an example of the folders where you can find logs and the log filename:
DNS_ACTIVITY/1720985285000_15_1.csv.gz
Exported log data
You can export log data from these SIA reports:
DNS activity
This data is exported into the DNS activity log file.
Data Field | Description |
---|---|
Detection time | Time the request occurred. |
Domain | Domain that was requested. |
Action | Policy action that was taken on the request. Indicates whether the request was allowed or blocked. |
Query Type | DNS resource record type associated with the request. |
Reason | Indicates how the activity was detected. For example, it may have been detected based on a policy, a custom list, or another method. |
List | If applicable, this value indicates the custom list that applies to the activity. |
Category | Category assigned to the activity. If the category is a threat, a threat category is listed. |
Observed categories | Additional categories that were observed for the activity. |
Policy | Name of the policy that applies to the activity. |
Location | Name of the location that's associated with the activity. |
Sub-location | If applicable, this is the sub-location that's assigned to the activity. |
Source IP | External IP address for the DNS activity. |
Internal IP | IP address of the internal client device. |
Device Name or ID | Name of the device associated with the activity. |
Internal Client Name | Name of the internal client device. |
Onramp Type | Indicates how a request was directed to SIA Proxy. One of several onramp type values may appear. |
Proxy activity
This data is exported into the proxy activity log file.
Data Field | Description |
---|---|
Detection time | Time the request occurred. |
Domain | Domain that was requested. |
URL | URL or path that was requested. |
HTTP Method | Action that's performed during a request. May indicate a POST or GET. |
Action | Indicates the policy action that was performed on the activity. |
Reason | Informs how the traffic was identified. One of several reason values may appear. |
Lists | Indicates which list matches this activity. Lists may include a custom list or an intelligence list associated with a threat category. |
Categories | Category assigned to the activity. If the category is a threat, a threat category is listed. |
Policy | Name of the policy that applies to the activity. |
Location | Name of the location that applies to the activity. |
Sublocation | If applicable, the name of the sub-location that applies to the activity. |
Application Name | Name of the application that applies to the activity. |
Application Operation | Name of the application operation that applies to this activity. |
Source IP | External IP address associated with the activity. |
Destination IP | IP address that was requested. |
Internal IP | IP address of the internal client device that was detected by Security Connector. |
Internal Client Name | Name of the internal client device that was detected by Security Connector. |
Onramp Type | Indicates how activity was directed to SIA Proxy. One of several onramp type values may appear. |
Threat and access control events
Threat events and access control events are exported into the Events log file.
Data Field | Description |
---|---|
Detection Time | Time the request occurred. |
Origin | Indicates whether the event was discovered as a result of SIA DNS or SIA Proxy. |
Domain | Domain that was requested. |
Action | Policy action that was taken on the request. Indicates whether the request was allowed or blocked. |
Query Type | DNS resource record type associated with the request. |
Event Type | Indicates whether the event was a threat event or an access control event. |
Reason | Informs how an event was identified. One of several reason values may appear. |
Lists | List that identified the threat as an event. This list can be a custom list or a threat category. |
Categories | The overall category of the event. For a threat event, categories can be Malware, Phishing, C&C, DNS Exfiltration, Deny List, or Other (if assigned to a custom list). |
Threats | Name of the threat. If a specific name for a threat does not appear, SIA shows a generic name that classifies the threat. This field appears for threat events only. |
Severity | Indicates the severity level. For more information, see Severity levels. This field appears for threat events only. |
Application Name | Web application that violated access control settings in the SIA policy. For more information, see Application visibility and control. |
Application Risk | Risk level associated with a web application that violated access control settings in the SIA policy. For more information, see Application visibility and control. |
Application Operation | Application operation that violated access control settings in the SIA policy. For more information, see Application visibility and control. |
File Name | Name of the file that was involved in the event. |
File Type | MIME file type that was downloaded or uploaded. An administrator may assign the block or monitor action to this file type in a policy. |
Policy | Security policy or set of rules that's associated with the event. |
Location | Location where the event originated. |
Sublocation | If applicable, the name of the sub-location where the event originated. |
Source IP | External IP address of the traffic. This is likely the IP address that's assigned to a location as a result of NAT. |
Destination IP | IP address that was requested. |
Internal IP | Internal IP address of the user's device. |
Device Name/ID | Name or ID of the user's device. |
Internal Client Name | Internal client name of the machine that's detected by DNS Forwarder. |
Onramp Type | Indicates how a request was directed to SIA Proxy. One of several onramp type values may appear. |
Configure log export settings
Complete this procedure to configure the destination of SIA logs.
To define which types of logs are exported, see Enable or disable the export of specific logs.
Before you begin:
- If you are exporting logs to Linode:
- Make sure you’ve created a storage bucket in the Linode platform. For instructions, see Create and manage buckets in the Akamai cloud computing documentation.
- Make sure you have your access key and secret key. For more information, see Get started with object storage in the Akamai cloud computing documentation.
- Generate a 32-byte string in hexadecimal format that you can use as an encryption key for logs. You can provide any unencoded 32-byte string. You can use OpenSSL to generate this string with the following command, which outputs 16 random bytes as 32 hexadecimal characters (for a Python alternative, see the sketch after this list):
openssl rand -hex 16
Linode destroys encryption keys immediately after your data is encrypted. Object Storage data that is encrypted with SSE-C is unrecoverable without your encryption key. Make sure you save the encryption key.
- If you are exporting logs to Azure:
- Make sure you have an Azure storage account and you've created a container and a blob. For more information, see Introduction to Azure Blob Storage in the Azure documentation.
- Make sure you have your tenant ID. To find this ID, see Find your Microsoft Entra Tenant in the Azure documentation.
- Make sure you have your account access key. For instructions, see View account access keys in the Azure documentation.
To export logs:
- In the Threat Protection menu of Enterprise Center, select Clients & Connectors > Connection Info.
- Go to the Log Export Settings.
- In the Destination menu, select Linode S3 Bucket or Azure Blob Storage.
- If you selected Linode S3 Bucket, complete these steps:
- In the Access Key ID field, enter the access key label.
- In the Secure Access Key field, enter the access key.
- In the Endpoint field, enter the URL for the Linode bucket.
- In the Bucket Name field, enter the name of your Linode bucket.
- In the Encryption token field, enter the encryption key that you generated.
- If you selected Azure Blob Storage, complete these steps:
- In the Account Name field, enter the name of your Azure storage account.
- In the Secret Key field, enter the Azure access key.
- In the Blob field, enter the name of your container where you want to store exported log files.
- In the Tenant ID field, enter the tenant ID that is associated with your organization’s account.
- Click Save.
Next Step:
Select the logs that you want to export. For instructions, see Enable or disable the export of specific logs.
Enable or disable the export of specific logs
Complete this procedure to enable or disable the export of a specific log type. CSV files with log data can be exported for DNS activity, proxy activity, and event data.
The user interface indicates that logs are exported to the destination you configured on the Connection Info page.
Before you begin:
Make sure you configure the log export settings. For more information, see Configure log export settings.
To enable or disable the export of specific logs:
- In the Threat Protection menu of Enterprise Center, select Reports > Scheduled Reports & Notifications.
- Expand the Logs section.
- To enable the export of a specific log, do the following. After you turn on a toggle, wait for the success message that appears before you turn on a toggle for another log:
- To export DNS activity logs, turn on the toggle for DNS Activity.
- To export proxy activity logs, turn on the toggle for Proxy Activity.
- To export logs on all events, turn on the toggle for All Events.
- To disable the export of a specific log, do the following:
- To disable the export of DNS Activity logs, turn off the toggle for DNS Activity.
- To disable the export of Proxy Activity logs, turn off the toggle for Proxy Activity.
- To disable the export of All events logs, turn off the toggle for All Events.
Next Steps:
If you are storing log data on Linode, you need to decrypt the log data before you can read it. For more information, see Decrypt log data stored on Linode.
Decrypt log data stored on Linode
You use the encryption key that you generate as part of Linode object storage setup to encrypt the data that’s in your bucket. After it’s encrypted, you can decrypt data with the same encryption key. To decrypt your log data, use one of these methods:
- Create a Python script that can decrypt log data in your bucket. For instructions, see Create and run a Python script for decryption.
- Use the rclone command line tool to decrypt your data. For instructions, see Use rclone to decrypt log data.
Create and run a Python script for decryption
Complete this procedure to create and run a Python script that decrypts log data that's stored in your Linode bucket.
Before you begin:
- Make sure you are running Python 3.4 or later on your machine. To check your version, enter this command:
python3 --version
- Make sure you’ve generated an object storage key pair.
- Make sure you have the encryption key that was used to encrypt logs.
To create a Python script for decryption:
- In a code editor, open a new file with the name example.py. In the file, enter the following:
#!/usr/bin/env python3
import boto3
import os
import gzip
import shutil
import logging

# Static configuration
ENDPOINT_URL = "<BUCKET_URL>"
AWS_ACCESS_KEY_ID = "<ACCESS_KEY_ID>"
AWS_SECRET_ACCESS_KEY = "<SECRET_ACCESS_KEY>"
ENCRYPTION_KEY = "<ENCRYPTION_KEY>"
BUCKET = "<BUCKET_NAME>"
SOURCE_FOLDER = "<LINODE_FOLDER>"
DESTINATION_DIR = "<LOCAL_PATH>"

# Set up logging
logging.basicConfig(level=logging.INFO)

# Ensure the destination directory exists
os.makedirs(DESTINATION_DIR, exist_ok=True)

# Create an S3 client
client = boto3.client(
    "s3",
    aws_access_key_id=AWS_ACCESS_KEY_ID,
    aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
    endpoint_url=ENDPOINT_URL
)

def list_s3_files():
    # List the log files under the source folder in the bucket
    logging.info(f"Listing files in S3 folder: {SOURCE_FOLDER}.")
    response = client.list_objects_v2(Bucket=BUCKET, Prefix=SOURCE_FOLDER + '/')
    files = []
    if 'Contents' in response:
        for obj in response['Contents']:
            files.append(obj['Key'])
    return files

def process_and_copy_file(s3_key):
    local_filename = os.path.basename(s3_key)
    dest_path = os.path.join(DESTINATION_DIR, local_filename)
    # Download and decrypt the object with the SSE-C encryption key
    logging.info(f"Downloading encrypted file from S3: {s3_key}.")
    with open(dest_path, 'wb') as f:
        client.download_fileobj(
            BUCKET, s3_key, f,
            ExtraArgs={"SSECustomerKey": ENCRYPTION_KEY, "SSECustomerAlgorithm": "AES256"}
        )

    # Check if the file is compressed and decompress if necessary
    if dest_path.endswith('.gz'):
        logging.info(f"Decompressing the file: {dest_path}.")
        with gzip.open(dest_path, 'rb') as f_in:
            with open(dest_path[:-3], 'wb') as f_out:
                shutil.copyfileobj(f_in, f_out)
        os.remove(dest_path)  # Remove the original compressed file
        dest_path = dest_path[:-3]  # Update path to decompressed file

    logging.info(f"File processed and saved to {dest_path}.")

if __name__ == "__main__":
    try:
        s3_files = list_s3_files()
        for s3_key in s3_files:
            process_and_copy_file(s3_key)
    except Exception as e:
        logging.error(f"An error occurred: {e}")
where:
<ACCESS_KEY_ID>
is the Linode access key ID.
<SECRET_ACCESS_KEY>
is the Linode access key.
<BUCKET_URL>
is the URL of the bucket's cluster. All buckets are hosted by a unique cluster. For example, us-east-1.linodeobjects.com.
<ENCRYPTION_KEY>
is your 32-byte encryption key.
<BUCKET_NAME>
is the Linode bucket name.
<LINODE_FOLDER>
is the path to the Linode folder that contains the log files you want to copy.
<LOCAL_PATH>
is the path on your local machine where you want to store decrypted log files.
- Enter this command to make sure that your script file is executable:
chmod +x example.py
- Enter this command to run the script:
./example.py
Decrypted files are available in the provided path on your local machine.
Use rclone to decrypt log data
Rclone is a command line tool for syncing files to and from remote storage services. You can use this tool to download and decrypt log data by copying it to a local directory on your machine. Alternatively, you can configure another destination, such as a different directory or bucket in your Linode storage or a location in another cloud storage service. These steps apply to Linux or macOS.
- Configure rclone with your Linode object storage bucket information. Note that as part of this procedure:
- Choose Other from the list of storage types. Do not select Linode.
- Enter your Linode object storage access key ID and secret access key when prompted for AWS credentials.
- For the Access Control List step, select 1 or private.
- Go to your config file. You can typically find the config file in this location:
/<user_directory>/.config/rclone/rclone.conf
As a result of your configuration, your config file should look like this:
[test]
type = s3
provider = Other
access_key_id = <ACCESS_KEY_ID>
secret_access_key = <SECRET_ACCESS_KEY>
endpoint = <ENDPOINT_URL>
acl = private
where:
<ACCESS_KEY_ID>
is the Linode access key ID.
<SECRET_ACCESS_KEY>
is the Linode access key.
<ENDPOINT_URL>
is the URL for the region where your Linode bucket is located.
- Add these two lines to the config file for the location you just configured:
sse_customer_algorithm = AES256
sse_customer_key = <ENCRYPTION_KEY>
where:
<ENCRYPTION_KEY>
is the 32-byte encryption key you've configured for Linode and provided in SIA for log export.
- If you want to copy log files to a location on your local machine, run this command on your machine. Log files are decrypted as part of this operation.
rclone copy test:<bucket_name>/<path_to_files> <path_on_local_machine> -P
where:
<bucket_name>
is the name of your Linode bucket.
<path_to_files>
is the path in Linode to your log files.
<path_on_local_machine>
is the path on your local machine where you want to copy the files.
- If you want to configure another storage area in rclone for your decrypted files, do the following:
- Configure another remote location with rclone. Repeat step 2 and configure an alternative configuration with these settings:
[alternative]
type = s3
provider = Other
access_key_id = <ACCESS_KEY_ID>
secret_access_key = <SECRET_ACCESS_KEY>
endpoint = <ENDPOINT_URL>
where:
<ACCESS_KEY_ID>
is the Linode access key ID.
<SECRET_ACCESS_KEY>
is the Linode access key.
<ENDPOINT_URL>
is the URL for the region where your Linode bucket is located.
Make sure the alternative configuration does not have acl, sse_customer_algorithm, and sse_customer_key values.
- Run this command to transfer your decrypted data to an alternative location. You can configure another directory or bucket in Linode, or you can specify another cloud location.
rclone copy test:<bucket_name>/<path_to_files> alternative:<bucket_name>/<new_path>
where:
<bucket_name>
is the name of your Linode bucket. For the alternative location, this can be your Linode bucket or another container in a cloud service.
<path_to_files>
is the path to your log files.
<new_path>
is the path where you want to place decrypted log files. If you are placing files in the same Linode bucket as your encrypted log files, this can be a new directory in your bucket.
You can use the sync or move commands instead of the copy command. To learn more about these rclone commands, see rclone copy, rclone sync, or rclone move.