Best practices
Best practices for high throughput applications
A general design pattern for high throughput applications is to distribute content over multiple TCP connections. For client applications, using multiple TCP connections across multiple threads can increase overall performance: it minimizes the impact of blocking API calls, improves CPU utilization, and optimizes network throughput by allowing multiple network paths to be used.
Each DNS lookup for an Akamai Object Storage S3 hostname, such as us-sea-9.linodeobjects.com, randomly returns 12 IP addresses from a larger pool of addresses, and each DNS A record has a TTL of 30 seconds. Connecting to Object Storage over a diverse set of IP addresses distributes content across multiple ingress points, which helps ensure optimal throughput rates. Review any local libraries, SDKs, or caches to ensure that connection requests are spread across all of the IP addresses available for the Akamai Object Storage endpoint.
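One way to spread connections across the addresses behind an endpoint is to resolve the hostname yourself and rotate through the results. A minimal sketch using only the Python standard library (the helper names are illustrative, not part of any SDK):

```python
import itertools
import socket

def resolve_ips(hostname, port=443):
    """Return the unique IPv4 addresses currently advertised for hostname.

    Re-resolving periodically (e.g. every 30 seconds, matching the A record
    TTL) picks up a fresh set of addresses from the larger pool.
    """
    infos = socket.getaddrinfo(hostname, port, socket.AF_INET, socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

def ip_round_robin(ips):
    """Cycle through resolved IPs so successive connections land on
    different ingress points rather than reusing one cached address."""
    return itertools.cycle(ips)

if __name__ == "__main__":
    # Example endpoint taken from the text above.
    ips = resolve_ips("us-sea-9.linodeobjects.com")
    picker = ip_round_robin(ips)
    # Each new connection would use next(picker) as its target address.
```

An application would then open each new TCP connection against `next(picker)` (for example via a connection pool keyed by IP) instead of letting one cached DNS answer funnel all traffic to a single address.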
We recommend that applications model for a maximum throughput of 1 Gbps (gigabit per second) per connection. At higher per-connection rates, individual connections may be throttled. For uploads (ingress), throttling results in TCP backpressure on the sender, limiting the amount of data transmitted. To avoid the risk of throttling, spread transfers across multiple connections with a model target of 1 Gbps each. For example, an application may issue 8 simultaneous GET requests to 8 distinct IP addresses to target a download (egress) rate of 8 Gbps.
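The multi-connection download pattern above can be sketched as splitting an object into byte ranges and issuing one ranged GET per connection. The sketch below is transport-agnostic: `fetch_range` stands in for whatever HTTP client issues a `Range: bytes=start-end` request, and the connection count of 8 mirrors the example in the text.

```python
from concurrent.futures import ThreadPoolExecutor

def split_ranges(total_size, connections):
    """Divide [0, total_size) into one contiguous byte range per connection."""
    chunk = total_size // connections
    ranges = []
    for i in range(connections):
        start = i * chunk
        # The last range absorbs any remainder so every byte is covered.
        end = total_size - 1 if i == connections - 1 else start + chunk - 1
        ranges.append((start, end))
    return ranges

def parallel_download(fetch_range, total_size, connections=8):
    """Fetch an object over several connections at once.

    fetch_range(start, end) must return the bytes for that inclusive range,
    e.g. via an HTTP GET with a Range header against a distinct endpoint IP.
    """
    ranges = split_ranges(total_size, connections)
    with ThreadPoolExecutor(max_workers=connections) as pool:
        parts = list(pool.map(lambda r: fetch_range(*r), ranges))
    return b"".join(parts)
```

With each of the 8 workers held to roughly 1 Gbps, the aggregate download rate targets 8 Gbps without pushing any single connection into throttling.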
For high throughput applications, prefer objects larger than 1 MiB over smaller objects. Larger objects minimize the per-request TCP connection and operation overhead of individual S3 API calls, which can meaningfully impact overall application throughput when transferring large numbers of small objects.
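One way to reduce per-object overhead, sketched here with the Python standard library, is to bundle many small objects into a single larger archive before upload, so one PUT replaces many small-object requests. The function name and object names are illustrative:

```python
import io
import tarfile

def pack_objects(objects):
    """Bundle many small (name, bytes) pairs into one in-memory tar archive.

    Uploading the returned archive as a single object larger than 1 MiB
    replaces many per-object requests with one, cutting connection and
    API operation overhead.
    """
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in objects:
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()
```

The trade-off is that individual items can no longer be fetched by key; this pattern suits workloads that read objects back in batches rather than one at a time.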