AWS CLI and SDKs support details

Data integrity protections

If you are using a version of the AWS CLI or SDKs released on or after January 15, 2025, you might experience issues uploading to Akamai Cloud Object Storage's Amazon S3-compatible endpoints. PutObject and UploadPart requests fail with SignatureDoesNotMatch, MissingContentLength, or NotImplemented error codes.

The affected CLI and SDK versions now require Data Integrity Protections for Amazon S3, a feature that is not supported by Object Storage. Affected versions may include:

  • AWS CLI v2.23.0 and later
  • AWS SDK for Python (boto3) v1.36.0 and later

Our current recommendation is to downgrade the CLI or SDK to the latest version released prior to January 15, 2025. For example:

  • AWS CLI v2.22.35
  • AWS SDK for Python (boto3) v1.35.99
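As a sketch of the downgrade, boto3 can be pinned with pip, and AWS provides version-specific installer bundles for the CLI v2 (the Linux x86_64 URL is shown; adjust the filename for your platform):

```shell
# Pin boto3 to the last release before the checksum change
# (botocore is pinned transitively).
pip install "boto3==1.35.99"

# The AWS CLI v2 ships as a standalone installer; version-specific
# bundles can be downloaded by embedding the version in the URL.
curl -O "https://awscli.amazonaws.com/awscli-exe-linux-x86_64-2.22.35.zip"
unzip awscli-exe-linux-x86_64-2.22.35.zip
sudo ./aws/install --update
```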

An alternative workaround is to configure the request_checksum_calculation parameter to WHEN_REQUIRED using one of the methods described in the Data Integrity Protections for Amazon S3 document. This workaround may not work in all cases. For example, when using AWS CLI v2.23.5, this method works when uploading with aws s3api put-object but not with aws s3 cp, as described in aws s3 cp does not honor request_checksum_calculation = WHEN_REQUIRED.
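As a sketch of that workaround, the setting can be applied globally through the shared AWS config file or an environment variable (the profile name shown is illustrative; see the Data Integrity Protections document for the full list of methods):

```shell
# Option 1: set the value in the shared config file (~/.aws/config).
cat >> ~/.aws/config <<'EOF'
[default]
request_checksum_calculation = when_required
EOF

# Option 2: set an environment variable for the current shell session.
export AWS_REQUEST_CHECKSUM_CALCULATION=when_required
```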

We are continuing to investigate this issue to better support the latest releases of the AWS SDKs.

Multipart download support

Akamai Object Storage does not currently support multipart downloads. If the partNumber query parameter is added to a GET request, the parameter is ignored and the entire object is returned in a 200 HTTP response. Some SDKs and tools default to using this parameter, which causes part downloads to not work as expected. To verify whether your SDK or tooling is attempting a multipart download, check whether the ?partNumber parameter appears in its GET requests.

To achieve the same effect as the ?partNumber query parameter, that is, to download only a specific part of an object or to parallelise the download of a large object, you can use byte range requests. A byte range request returns an HTTP 206 (Partial Content) response. Below is a code example using boto3, which parallelises the download of a 1 GB object in 100 MB chunks:

import boto3
import math
import os
from concurrent.futures import ThreadPoolExecutor, as_completed

BUCKET = "my-bucket"
KEY = "large-object.bin"
OUTPUT_FILE = "large-object.bin"
CHUNK_SIZE = 100 * 1024 * 1024  # 100 MB per range request
MAX_WORKERS = 8  # number of concurrent range downloads

s3 = boto3.client("s3")


def get_object_size(bucket, key):
    # HEAD the object to learn its total size in bytes.
    response = s3.head_object(Bucket=bucket, Key=key)
    return response["ContentLength"]


def download_range(bucket, key, start, end, output_file):
    # Request an inclusive byte range; the server responds with HTTP 206.
    byte_range = f"bytes={start}-{end}"
    response = s3.get_object(Bucket=bucket, Key=key, Range=byte_range)
    data = response["Body"].read()

    # Write the chunk at its offset in the preallocated output file.
    with open(output_file, "r+b") as f:
        f.seek(start)
        f.write(data)

    return start, end


def main():
    object_size = get_object_size(BUCKET, KEY)
    print(f"Object size: {object_size / (1024**3):.2f} GB")

    # Preallocate file
    with open(OUTPUT_FILE, "wb") as f:
        f.truncate(object_size)

    ranges = []
    for start in range(0, object_size, CHUNK_SIZE):
        end = min(start + CHUNK_SIZE - 1, object_size - 1)
        ranges.append((start, end))

    print(f"Downloading in {len(ranges)} chunks...")

    with ThreadPoolExecutor(max_workers=MAX_WORKERS) as executor:
        futures = [
            executor.submit(download_range, BUCKET, KEY, start, end, OUTPUT_FILE)
            for start, end in ranges
        ]

        for future in as_completed(futures):
            start, end = future.result()
            print(f"Completed: bytes {start}-{end}")

    print("Download complete.")


if __name__ == "__main__":
    main()