Choose a Compute Instance plan

There are several Compute Instance types, each of which can be equipped with varying amounts of resources. This lets you create a Compute Instance tailored to the requirements of your application or workload. For example, some applications may need to store a lot of data but require less processing power. Others may need more memory than CPU. Some may be especially CPU-intensive and require more computing power.

This guide provides you with the information needed to select the most appropriate Compute Instance for the job.

📘

You can easily change between Compute Instance types and plans on an existing Compute Instance at any time. Review the Resizing a Compute Instance guide for instructions.
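
If you prefer to script the change, a resize can also be triggered through the Linode API. The sketch below is a minimal illustration, assuming you have a personal access token and the requests package installed; the instance ID and target plan ID are placeholders.

    # A minimal sketch of resizing an instance through the Linode API, assuming
    # the requests package is installed; the token, instance ID, and target plan
    # ID below are placeholders.
    import requests

    TOKEN = "YOUR_API_TOKEN"        # personal access token (placeholder)
    LINODE_ID = 12345678            # ID of the instance to resize (placeholder)
    TARGET_PLAN = "g6-dedicated-4"  # example target plan ID

    resp = requests.post(
        f"https://api.linode.com/v4/linode/instances/{LINODE_ID}/resize",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"type": TARGET_PLAN},
    )
    resp.raise_for_status()
    print("Resize queued; the instance reboots when the migration completes.")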

Compute Instance types

The following Compute Instance types are available in all core compute regions. Dedicated CPU is the only available Compute Instance type in distributed compute regions.

They each have unique characteristics and their resources are optimized for different types of workloads. Learn about each of these Compute Instance types below, along with the resources provided and their suggested use cases.

Most of these plan types are equipped with dedicated CPU cores for peak performance and competition-free resources, though the Shared CPU plan comes with shared CPU cores. To learn more about the differences, see Choosing between shared and dedicated CPUs.

Shared CPU Compute Instances

1 GB - 192 GB Memory, 1 - 32 Shared vCPU Cores, 25 GB - 3840 GB Storage

Starting at $5/mo ($0.0075/hour). See Shared CPU pricing for a full list of plans, resources, and pricing.

Shared CPU Compute Instances offer a balanced array of resources coupled with shared CPU cores. These CPU cores can be used at 100% for short bursts, but should remain below 80% sustained usage on average. This keeps costs down while still supporting a wide variety of cloud applications. Your processes are scheduled on the same CPU cores as processes from other Compute Instances. This shared scheduling is done in a secure and performant manner. While Akamai Technologies, Inc. works to minimize competition for CPU resources between your Compute Instance and other Compute Instances on the same hardware, it's possible that high usage from neighboring Compute Instances can negatively impact the performance of your Compute Instance.
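
If you're unsure whether your workload stays within that guideline, one way to check is to average CPU usage over a window of time on your current server. Below is a rough sketch using the psutil Python package (an assumption, not part of the platform); the sample count and interval are arbitrary illustrative values.

    # A rough sketch of checking sustained CPU usage against the ~80% guideline,
    # assuming the psutil package is installed; the sample count and interval
    # are arbitrary illustrative values.
    import psutil

    SAMPLES = 60          # number of samples to average
    INTERVAL_SECONDS = 5  # seconds between samples

    readings = [psutil.cpu_percent(interval=INTERVAL_SECONDS) for _ in range(SAMPLES)]
    average = sum(readings) / len(readings)
    print(f"Average CPU over {SAMPLES * INTERVAL_SECONDS} seconds: {average:.1f}%")
    if average > 80:
        print("Sustained usage is above 80%; consider a Dedicated CPU plan.")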

Recommended Use Cases:

Best for development servers, staging servers, low traffic websites, personal blogs, and production applications that may not be affected by resource contention.

  • Medium to low traffic websites, such as for marketing content and blogs
  • Forums
  • Development and staging servers
  • Low traffic databases
  • Worker nodes within a container orchestration cluster

Dedicated CPU Compute Instances

4 GB - 512 GB* Memory, 2 - 64 Dedicated vCPUs, 80 GB - 7200 GB Storage

Starting at $36/mo ($0.05/hour). Pricing may vary by region, and differs for distributed compute regions. See Dedicated CPU pricing for a full list of plans, resources, and pricing.

*512 GB plans are in limited availability.

Dedicated CPU Compute Instances reserve physical CPU cores that you can utilize at 100% load, 24/7, for as long as you need. This provides competition-free, guaranteed CPU resources and ensures your software can run at peak speed and efficiency. With Dedicated CPU Compute Instances, you can run your software for prolonged periods of maximum CPU usage, and you can ensure the lowest possible latency for latency-sensitive operations. These Compute Instances offer a balanced set of resources suited to most production applications.
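
Like any plan, a Dedicated CPU Compute Instance can be deployed from the Cloud Manager, the Linode CLI, or the API. Below is a minimal API sketch, assuming the requests package and a personal access token; the label, plan ID, region, and image are placeholder values and should be replaced with options available on your account.

    # A minimal sketch of deploying a Dedicated CPU instance through the Linode
    # API, assuming the requests package is installed; every value below is a
    # placeholder and should be replaced with options available on your account.
    import requests

    TOKEN = "YOUR_API_TOKEN"  # personal access token (placeholder)

    resp = requests.post(
        "https://api.linode.com/v4/linode/instances",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "label": "build-server",        # placeholder label
            "type": "g6-dedicated-2",       # example Dedicated CPU plan ID
            "region": "us-east",            # example region
            "image": "linode/ubuntu24.04",  # example image
            "root_pass": "use-a-strong-password-here",
        },
    )
    resp.raise_for_status()
    print("Created instance with ID:", resp.json()["id"])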

Recommended Use Cases:

Best for production websites, high traffic databases, and any application that requires 100% sustained CPU usage or may be impacted by resource contention.

  • CI/CD toolchains and build servers
  • Game servers (like Minecraft or Team Fortress)
  • Audio and video transcoding
  • Big data (and data analysis)
  • Scientific computing
  • Machine learning and AI
  • High traffic databases (Galera, PostgreSQL with Replication Manager, MongoDB using Replication Sets)
  • Replicated or distributed file systems (GlusterFS, DRBD)

Premium Compute Instances

4 GB - 512 GB* Memory, 2 - 64 Dedicated vCPUs, 80 GB - 7200 GB Storage

Starting at $43/mo ($0.06/hr). See Premium pricing for a full list of plans, resources, and pricing.

*512 GB plans are in limited availability.

Premium Compute Instances build on our Dedicated CPU Compute Instances and guarantee a minimum hardware class that utilizes the latest available AMD EPYC™ CPUs. This provides consistent performance for your workloads and is suitable for running mission-critical applications. Premium Compute Instances are available in select data centers (see Availability).

Recommended Use Cases:

Best for enterprise-grade, business-critical, and latency-sensitive applications.

High Memory Compute Instances

24 GB - 300 GB Memory, 2 - 16 Dedicated vCPUs, 20 GB - 340 GB Storage

Starting at $60/mo ($0.09/hour). See High Memory pricing for a full list of plans, resources, and pricing.

High Memory Compute Instances are optimized for memory-intensive applications and are equipped with dedicated CPUs, which provide competition-free, guaranteed CPU resources. These Compute Instances feature higher RAM allocations with relatively fewer vCPUs and less storage. This keeps your costs down while still powering memory-intensive applications.

Recommended Use Cases:

Best for in-memory databases, in-memory caching systems, big data processing, and any production application that requires a large amount of memory while keeping costs down.

  • Any production application that requires large amounts of memory
  • In-memory database caching systems, such as Redis and Memcached. These applications offer very fast retrieval of data, but they store data in a non-persistent manner (with some caveats), so they are usually used in conjunction with a persistent database server running on a separate Compute Instance (a typical caching pattern is sketched after this list).
  • In-memory databases, such as those offered by NoSQL and other solutions
  • Big data processing (and data analysis)
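
As a loose illustration of the caching pattern mentioned above, the sketch below assumes the redis Python package is installed and a Redis server is reachable at the example address; query_database() is a hypothetical stand-in for your persistent database on a separate Compute Instance.

    # A simplified cache-aside sketch, assuming the redis package is installed
    # and a Redis server is reachable at the example address below.
    # query_database() is a hypothetical stand-in for your persistent database.
    import json

    import redis

    cache = redis.Redis(host="192.0.2.10", port=6379)  # example cache instance address

    def query_database(user_id):
        """Hypothetical lookup against the persistent database server."""
        return {"id": user_id, "name": "example"}

    def get_user(user_id):
        key = f"user:{user_id}"
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)            # cache hit: served from memory
        user = query_database(user_id)           # cache miss: fall back to the database
        cache.setex(key, 300, json.dumps(user))  # keep the result in memory for 5 minutes
        return user

    print(get_user(42))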

GPU Compute Instances

16 GB - 196 GB Memory, 4 - 48 Dedicated vCPUs, 640 GB - 2.56 TB Storage

NVIDIA RTX 4000 Ada GPU plans start at $350/mo ($0.52/hour) with 1 GPU card, 4 vCPU cores, 16 GB of memory, and 500 GB of SSD storage. NVIDIA Quadro RTX 6000 plans start at $1000/mo ($1.50/hr) with 1 GPU card, 8 vCPU cores, 32 GB of memory, and 640 GB of storage. For a full list of plans, resources, and pricing, see GPU pricing.

GPU Compute Instances are the only Compute Instances equipped with NVIDIA RTX 4000 Ada or NVIDIA Quadro RTX 6000 GPU cards for on-demand execution of complex processing workloads. These GPUs have CUDA cores, Tensor cores, and RT (ray tracing) cores. GPUs are designed to process large blocks of data in parallel, making them an excellent choice for any workload requiring thousands of simultaneous threads. With significantly more logical cores than a standard CPU, GPUs can process large amounts of data in parallel more efficiently.
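
Once a GPU Compute Instance is deployed and the NVIDIA drivers are installed (they typically need to be set up after deployment), you can confirm the card is visible to your software. The sketch below assumes PyTorch is installed; any CUDA-aware framework works similarly.

    # A quick sketch of confirming the GPU is visible from inside a GPU Compute
    # Instance, assuming the NVIDIA drivers and PyTorch have already been
    # installed; any CUDA-aware framework works similarly.
    import torch

    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        print("GPU:", props.name)
        print(f"GPU memory: {props.total_memory / 1024**3:.1f} GB")
    else:
        print("No CUDA device detected; check the driver installation.")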

Recommended Use Cases:

Best for applications that require massive amounts of parallel processing power, including machine learning, AI inferencing, graphics processing, and big data analysis.

Compute resources

When selecting a plan, it is important to understand the hardware resources allocated to your Compute Instance. These resources include the amount of vCPU cores, memory, storage space, network transfer, and more. Start by reviewing each resource below and the implications it may have for your application.

Memory (RAM): The working memory available for your server's processes. Your server stores information in memory that is needed to carry out its functions, or it caches data in memory for fast retrieval in the future if that data is likely to be needed again. Data stored in RAM is accessed faster than data stored on your disks, but it is not persistent storage.

vCPU Cores: The number of virtual CPUs (vCPUs) available to your server. Your software is often designed to execute its tasks across multiple CPUs in parallel. The higher your vCPU count, the more work you can perform simultaneously. Plans are also equipped with either shared CPU cores or dedicated CPU cores. Dedicated CPU cores allow your system to utilize 100% of your CPU resources at all times, while shared CPU cores require a lower sustained usage and may be affected by resource contention. See Choose between shared and dedicated CPUs.

Storage: Your server's built-in persistent storage. Large databases, media libraries, and other stores of files require more storage space. Your Compute Instance's storage is maintained on high-performance SSDs for fast access. You can also supplement your disks with extra Block Storage volumes.

Transfer: The total amount of traffic your server can emit over the course of a month. Inbound traffic sent to your Compute Instance does not count against your transfer quota. If you exceed your quota, your service is not shut off; instead, an overage is billed. See Network transfer usage and costs for more information about how transfer works.

Network In: The maximum bandwidth for inbound traffic sent to your Compute Instance. The bandwidth you observe also depends on other factors, like the geographical distance between you and your Compute Instance and the bandwidth of your local ISP. For help choosing a data center with the lowest latency and best bandwidth, review the Choose a data center guide.

Network Out: The maximum bandwidth for outbound traffic emitted by your Compute Instance. The bandwidth you observe also depends on other factors, like the geographical distance between you and your Compute Instance and the bandwidth of your local ISP. For help choosing a data center with the lowest latency and best bandwidth, review the Choose a data center guide.

GPU: GPUs, or Graphics Processing Units, are specialized hardware units only available on our GPU Compute Instances. Originally designed to manipulate computer graphics and handle image processing, GPUs are now commonly used for many compute-intensive tasks that require thousands of simultaneous threads and more logical cores than a CPU alone can provide.
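
These allocations are also exposed programmatically. The sketch below pulls each plan's resources from the public Linode API (the /v4/linode/types endpoint requires no token), which can be handy for comparing plans side by side; it assumes the requests package is installed and only prints the first page of results.

    # A short sketch that pulls each plan's resource allocations from the public
    # Linode API (no token required), assuming the requests package is installed;
    # only the first page of results is printed for brevity.
    import requests

    plans = requests.get("https://api.linode.com/v4/linode/types").json()["data"]
    for plan in plans:
        # memory and disk are reported by the API in MB; transfer in GB
        print(
            f'{plan["id"]:<22} class={plan["class"]:<10} vcpus={plan["vcpus"]:<3} '
            f'memory={plan["memory"] / 1024:.0f} GB  disk={plan["disk"] / 1024:.0f} GB  '
            f'transfer={plan["transfer"]} GB'
        )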

Pricing

Whether or not you run a business, you likely need to think about pricing when considering which plan is right for you. You can view all pricing on our pricing page. Note that pricing and plan options may vary between regions.

Compare cost per month and save with predictable and transparent pricing using our Cloud Estimator. Explore bundled compute, storage, and transfer packages against AWS, GCP, and Azure.

Migrating from on-premises infrastructure or between cloud providers for hosting, cloud storage, or cloud computing? Use our total cost of ownership Cloud Computing Calculator to receive a full cost breakdown and technical recommendations.