Distributed compute regions (limited availability)

Distributed compute regions bring the power of full-stack computing to underserved and remote locations. They let you build and deploy edge-native applications near users and devices around the world. These distributed regions spread resiliency across multiple locations rather than relying on the availability of a single region.

Distributed regions support a subset of the Akamai cloud computing features and services available in core compute regions, and cater to different use cases. While core regions are built for full-scale workloads, distributed regions are ideal for applications that require proximity to end users.

When to deploy to a distributed compute region

You can deploy to a distributed compute region if you have an account and have already deployed Compute Instances to one or more core regions.

Consider deploying to a distributed region if you:

  • Need to offer a high-performing, highly available service to a distributed audience.
  • Need to run multiple instances of your application.
  • Require very low latency for your application.
  • Want to deliver a consistent user experience, even for users far from core regions.

Core vs. distributed regions

Core compute regions. These regions are centrally located, offering full features, long-term durability, and scalability for enterprise workloads.

Distributed compute regions. These regions let you deploy parts of your application or workload closer to your users. They enable you to spread resiliency across multiple locations instead of relying on the availability of a single region. Applications deployed to distributed regions should be designed so that any single region or server could fail without impacting overall availability.

Distributed regions support essential features and services for building highly available, edge-native applications across a growing distributed footprint. These include Dedicated CPU Compute Instances, Cloud Firewalls, Metadata service (cloud-init), VLANs, IP Sharing, and more. 
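
For example, you can attach a Cloud Firewall and pass cloud-init user data through the Metadata service when creating an instance with the Linode API. The following is a minimal sketch in Python; the token, firewall ID, and root password are placeholders, and the distributed region ID shown is an assumption, not a real value.

```python
import base64
import requests

TOKEN = "YOUR_API_TOKEN"   # personal access token (placeholder)
headers = {"Authorization": f"Bearer {TOKEN}"}

# cloud-init user data consumed by the Metadata service on first boot.
user_data = """#cloud-config
packages:
  - nginx
"""

payload = {
    "label": "edge-app-01",
    "region": "us-den-edge-1",   # example distributed region ID (assumption)
    "type": "g6-dedicated-2",    # a Dedicated CPU plan
    "image": "linode/ubuntu24.04",
    "root_pass": "aSecureRootPassword!",
    "firewall_id": 12345,        # existing Cloud Firewall ID (placeholder)
    "metadata": {"user_data": base64.b64encode(user_data.encode()).decode()},
}

resp = requests.post("https://api.linode.com/v4/linode/instances",
                     json=payload, headers=headers)
resp.raise_for_status()
print(resp.json()["id"], resp.json()["status"])
```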

Block Storage, Object Storage, and Backups are not directly supported in distributed compute regions, but can be leveraged as part of a larger distributed ecosystem. When state or storage is a key component of your application, distributed regions can connect seamlessly to core region deployments of these services.
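
For example, an application running in a distributed region can persist state to an Object Storage bucket hosted in a core region through the S3-compatible API. The following is a minimal sketch in Python using boto3; the endpoint, bucket name, keys, and object paths are placeholders, not values tied to any specific deployment.

```python
import boto3

# Object Storage in a core region, accessed from a distributed-region instance.
# Endpoint, bucket, and credentials are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://us-east-1.linodeobjects.com",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Write state generated at the edge back to the core-region bucket.
s3.upload_file("local-state.json", "my-app-state", "state/local-state.json")

# Pull shared configuration down when the instance starts.
s3.download_file("my-app-state", "config/app-config.json", "app-config.json")
```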

For a complete list of features and services, see Distributed compute region features and services.

To learn more about Service Level Objective targets, contact Support.

Availability

Limited availability

Access to distributed compute regions is currently limited. If you want to deploy an application or workload to a distributed region, contact us.

Locations

You can deploy Compute Instances to these distributed compute regions:

  • Auckland, NZ
  • Bogotá, CO
  • Denver, CO, USA
  • Hamburg, DE
  • Houston, TX, USA
  • Johannesburg, ZA
  • Kuala Lumpur, MY
  • Marseille, FR
  • Querétaro, MX
  • Santiago, CL

View these locations on the Akamai Connected Cloud map.
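
You can also list the available regions programmatically. The following is a minimal sketch in Python using the public regions endpoint; the `site_type` field used to separate distributed regions from core regions is an assumption about the response shape, so verify it against the current API reference.

```python
import requests

resp = requests.get("https://api.linode.com/v4/regions")
resp.raise_for_status()

for region in resp.json()["data"]:
    # "site_type" distinguishing core from distributed regions is assumed here;
    # fall back to inspecting "capabilities" if the field isn't present.
    if region.get("site_type") == "distributed":
        print(region["id"], region["label"], region["capabilities"])
```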

Plans and pricing

To deploy a Compute Instance in a distributed compute region, you need a Dedicated CPU Compute Instance plan.

Pricing differs between core and distributed regions. For details, see the Linode Plan section of the Cloud Manager Create form, or use the API.
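
To compare prices programmatically, you can read the plan list from the API. The following is a minimal sketch in Python; the `region_prices` field used for region-specific pricing is an assumption about the response shape.

```python
import requests

resp = requests.get("https://api.linode.com/v4/linode/types")
resp.raise_for_status()

for plan in resp.json()["data"]:
    if plan["class"] != "dedicated":   # distributed regions require Dedicated CPU plans
        continue
    print(plan["id"], plan["label"], plan["price"])
    # Region-specific overrides (assumed field name) would show distributed-region pricing.
    for region_price in plan.get("region_prices", []):
        print("    ", region_price)
```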

Linux distributions

For the list of supported distributions, see the Linux Distribution dropdown menu on the Cloud Manager Create form, or use the API.
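
The same list is available programmatically. The following is a minimal sketch in Python that prints the public images:

```python
import requests

resp = requests.get("https://api.linode.com/v4/images")
resp.raise_for_status()

# Public images include the supported Linux distributions.
for image in resp.json()["data"]:
    if image["is_public"]:
        print(image["id"], image["label"])
```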

Use cases

Deploying Compute Instances to distributed compute regions works well for several use cases, including:

Gaming:

  • Matchmaking. Support digital experiences like player matchmaking that rely on short wait times, high performance, and adaptive decision-making.
  • Game servers. Deliver real-time responsiveness essential for competitive gaming.

Social Media:

  • User-generated content in live streams. Optimize interactive experiences such as user reactions to live streams and chats by minimizing latency.
  • WebRTC. Facilitate low-latency direct communication between geographically close users.

Media Streaming:

  • Manifest Manipulation. Enhance video quality, enable seamless ad insertion, and improve the user experience based on real-time edge and device data.
  • Live Streaming. Optimize streaming performance for viewers near distributed regions.

Data and AI:

  • Distributed Data. Enable global data distribution to power real-time decisioning and scaled computing at the edge.
  • AI Inferencing. Leverage general-purpose computing near users to deliver large language model (LLM) services at a global scale.

Limits and considerations

  • Available features and services: Distributed compute regions support a subset of Akamai cloud computing features and services that are essential for edge-native applications. To learn more, see Distributed compute region features and services.
  • Bandwidth: Egress limits for Compute Instances in distributed regions are similar to those in core regions. If abnormal usage occurs, Akamai may implement traffic management procedures to maintain network stability and fair usage of impacted resources.
  • Workload migration: You can migrate Compute Instances between distributed regions through cold or warm migration. Live migration, and migration between distributed and core regions, are not supported. To learn more, see Maintenance and migrations. A sketch of starting a migration through the API follows this list.
  • Disk encryption: Local disk encryption is on by default for Compute Instances in distributed regions, and can't be turned off. For more information, see Disk encryption.
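
As noted in the workload migration item above, you can start a migration between distributed regions through the API. The following is a minimal sketch in Python; the token, instance ID, and target region are placeholders, and the `type` field selecting a cold or warm migration is an assumption about the request body, so check the current API reference before relying on it.

```python
import requests

TOKEN = "YOUR_API_TOKEN"    # personal access token (placeholder)
LINODE_ID = 12345678        # instance to migrate (placeholder)

resp = requests.post(
    f"https://api.linode.com/v4/linode/instances/{LINODE_ID}/migrate",
    json={
        "region": "de-ham-edge-1",  # target distributed region ID (assumption)
        "type": "cold",             # "cold" or "warm"; field name is an assumption
    },
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
```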