Distributed Compute Regions (limited availability)

Distributed compute regions bring the power of full-stack computing to underserved and hard-to-reach locations. This lets you build and deploy edge-native applications closer to users and devices around the world. It also lets you spread resiliency across multiple locations rather than relying on the availability of a single region.

Distributed regions offer a subset of the Akamai cloud computing features and services available in core compute regions, and they target different use cases. Core regions are better suited to full-scale workloads, while distributed regions are best suited to specific parts of an application or workload where proximity to end users is paramount.

Deploying a Compute Instance to a distributed region may be a good option if you already have an account, have deployed Compute Instances to one or more core regions, and any of the following apply:

  • You need to offer a high-performing, highly available service to a distributed audience
  • You need to run multiple instances of your application
  • Your application requires very low latency
  • Consistent user experience is of prime importance, but some of your users are far from core regions

Comparison of core and distributed regions

Core compute regions are centrally located with a full feature set, long-term durability, and extensive scalability for enterprise workloads.

Distributed compute regions let you deploy parts of your application or workload to locations closer to your users. You can also spread resiliency across multiple locations rather than relying on the availability of a single region. Applications deployed to distributed regions should be designed in such a way that any single region or server could fail without impacting overall availability.

Distributed regions support features and services you need to build highly available edge-native applications across a growing distributed footprint. These include Dedicated CPU Compute Instances, Cloud Firewalls, Metadata service (cloud-init), VLANs, IP Sharing, and more. 

Block Storage, Object Storage, and Backups are not directly supported in distributed compute regions, but can be leveraged as part of a larger distributed ecosystem. When state or storage is a key component of your application, distributed regions can seamlessly reach back to core region deployments of such services.

For the list of features and services supported by distributed regions, see Distributed Compute Regions features and services.

To learn more about Service Level Objective targets, contact Support.

Availability

Limited availability

Access to distributed compute regions is currently limited. If you have an application or workload that you'd like to deploy to a distributed region, contact us.

Locations

Compute Instances can be deployed to the following distributed compute regions:

  • Auckland, NZ
  • Bogotá, CO
  • Denver, CO, USA
  • Hamburg, DE
  • Houston, TX, USA
  • Johannesburg, ZA
  • Marseille, FR
  • Querétaro, MX
  • Santiago, CL

See these distributed locations on the Akamai Connected Cloud map.
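If you work with the API, you can also discover distributed regions programmatically. The sketch below is a minimal, illustrative example: it assumes the Linode API v4 `GET /v4/regions` endpoint and its `site_type` field (`"core"` or `"distributed"`), and the sample data, including the `us-den-10` region ID, is hypothetical rather than an actual API response.

```python
def distributed_regions(regions):
    """Return the IDs of regions whose site_type is "distributed"."""
    return [r["id"] for r in regions if r.get("site_type") == "distributed"]

# Illustrative response items; in practice, fetch the real list with:
#   curl https://api.linode.com/v4/regions
regions_sample = [
    {"id": "us-ord", "label": "Chicago, IL", "site_type": "core"},
    {"id": "us-den-10", "label": "Denver, CO", "site_type": "distributed"},
]

print(distributed_regions(regions_sample))
```

The same filter can be applied to the full paginated response to build an up-to-date list of deployable distributed locations.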

Plans and pricing

To deploy a Compute Instance to a distributed compute region, you need a Dedicated CPU Compute Instance plan.

Dedicated CPU plans are priced differently for distributed versus core regions. For pricing details, see the Linode Plan section of the Cloud Manager Create form, or use the API.
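As a sketch of how per-region pricing can be read from the API: Linode type objects carry a base `price` plus a `region_prices` list for regions priced differently. The field names below follow the `GET /v4/linode/types` response shape; the dollar amounts and region ID are illustrative placeholders, not actual prices.

```python
def hourly_price(instance_type, region_id):
    """Return the hourly price of a plan in a given region,
    falling back to the base price when no override exists."""
    for override in instance_type.get("region_prices", []):
        if override["id"] == region_id:
            return override["hourly"]
    return instance_type["price"]["hourly"]

# Illustrative type object (placeholder values, not real prices)
g6 = {
    "id": "g6-dedicated-2",
    "price": {"hourly": 0.054, "monthly": 36.0},
    "region_prices": [{"id": "us-den-10", "hourly": 0.065, "monthly": 43.2}],
}

print(hourly_price(g6, "us-den-10"))  # region-specific override
print(hourly_price(g6, "us-ord"))     # base price
```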

Linux distributions

For the list of supported distributions, see the Linux Distribution dropdown menu on the Cloud Manager Create form, or use the API.
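A quick way to enumerate candidate distributions from the API is to filter the `GET /v4/images` response. This is a hedged sketch: `is_public` and `deprecated` follow the Linode image object shape, the sample items are illustrative, and the Cloud Manager dropdown remains the authoritative list of what a given distributed region supports.

```python
def available_distributions(images):
    """Keep public, non-deprecated distribution images, sorted by ID."""
    return sorted(
        img["id"] for img in images
        if img.get("is_public") and not img.get("deprecated")
    )

# Illustrative response items; in practice, fetch the real list with:
#   curl https://api.linode.com/v4/images
images_sample = [
    {"id": "linode/ubuntu24.04", "is_public": True, "deprecated": False},
    {"id": "linode/ubuntu18.04", "is_public": True, "deprecated": True},
    {"id": "private/12345", "is_public": False, "deprecated": False},
]

print(available_distributions(images_sample))
```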

Use cases

Deploying Compute Instances to distributed compute regions is a good approach for a number of different use cases, including but not limited to:

Gaming:

  • Matchmaking. Enable digital experiences like player matchmaking that depend on short wait times, high performance, and adaptive decision-making.
  • Game servers. Enable the real-time responsiveness critical to competitive gaming experiences.

Social Media:

  • User-generated content in live streams. Optimize interactive experiences, such as user reactions to live streams and chats, by minimizing latency.
  • WebRTC. Enable low-latency direct communication between users, especially those in close geographical proximity.

Media Streaming:

  • Manifest Manipulation. Maximize video quality, enable seamless ad insertion, and improve the user experience based on real-time edge and device characteristics.
  • Live Streaming. Optimize streaming performance by minimizing latency for viewers in close proximity to distributed regions.

Data and AI:

  • Distributed Data. Enable global data distribution to power real-time decisioning and scaled computing at the edge.
  • AI Inferencing. Leverage general-purpose compute close to users to deliver LLM-powered services at global scale.

Limits and considerations

  • Available features and services: Distributed compute regions support a subset of Akamai cloud computing features and services that are essential to building and deploying highly available edge-native applications. For more information, see Distributed Compute Regions features and services.
  • Bandwidth: Egress limits for Compute Instances in distributed regions are similar to those in core regions. In the event of abnormal usage, however, Akamai may need to implement traffic management procedures to ensure network stability and fair usage of impacted resources.
  • Workload migration: You can migrate Compute Instances from one distributed region to another through cold or warm migration (live migration is not supported). You cannot migrate Compute Instances between distributed and core regions. To learn more about the types of migrations, see Maintenance and migrations.
  • Disk encryption: Local disk encryption is enabled by default for Compute Instances in distributed regions, and cannot be disabled. For more information, see Disk encryption.