Dedicated CPU Linodes
Dedicated CPU Linodes are virtual machines that deliver guaranteed, competition-free CPU resources for consistently high performance. Each vCPU core assigned to your instances is reserved exclusively for your workloads, ensuring predictable compute capacity with no contention from neighboring users. This makes Dedicated CPU plans well-suited to production applications and CPU-intensive tasks that rely on steady performance over time. If your workloads require substantial memory as well, consider High Memory Linodes.
Typical use cases include high-traffic websites, video encoding, machine learning, and data processing.
Dedicated competition-free resources
A Dedicated CPU Linode provides vCPU cores that are reserved exclusively for your instance. Because these cores are not shared with other workloads, your applications never wait for CPU access. This enables the lowest latency possible for CPU-bound operations, and supports sustained full-duty workloads (100% CPU all day, every day) at consistent performance levels.
Upgrading from a Shared CPU Linode
If your workloads experience variable performance or slowdowns caused by CPU scheduling on a Shared CPU Linode, upgrading to a Dedicated CPU plan can provide immediate improvements. Dedicated plans remove CPU contention entirely.
Moving from a Shared CPU Linode to a Dedicated CPU Linode is a seamless process. If you're thinking about upgrading, see Resize a Linode for guidance.
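If you manage instances programmatically, the same resize can be triggered through the Linode API. The following is a minimal sketch, assuming a personal access token in the `LINODE_TOKEN` environment variable and the API v4 resize endpoint; the instance ID and target plan ID (`g6-dedicated-4`) are placeholders you would replace with your own values.

```python
# Minimal sketch: move an existing instance to a Dedicated CPU plan through
# the Linode API v4. The token, instance ID, and plan ID are placeholders.
import os
import requests

API_BASE = "https://api.linode.com/v4"
TOKEN = os.environ["LINODE_TOKEN"]      # personal access token
LINODE_ID = 12345678                    # placeholder: your instance's ID
TARGET_PLAN = "g6-dedicated-4"          # placeholder: a Dedicated CPU plan ID

resp = requests.post(
    f"{API_BASE}/linode/instances/{LINODE_ID}/resize",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"type": TARGET_PLAN},
    timeout=30,
)
resp.raise_for_status()
print("Resize queued; the instance migrates to the new plan and then boots back up.")
```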
Recommended workloads
Dedicated CPU Linodes are well suited to any workload that requires consistently high-performing compute resources.
G6, G7, or G8 Dedicated:
- Production websites and e-commerce sites
- Applications that require sustained 100% CPU usage
- Applications that might be impacted by resource contention
- CI/CD pipelines and build servers
- Replicated or distributed file systems (GlusterFS, DRBD)
G7 or G8 Dedicated:
- Big data and data analysis
- High traffic databases (Galera, PostgreSQL with Replication Manager, MongoDB using Replication Sets)
- Scientific computing
G8 Dedicated:
- Game servers with real-time state synchronization
- Audio and video transcoding
- Machine learning and AI
See the Use cases section for more information.
For audio and video transcoding, you may also want to consider GPU or Accelerated (VPU) plans. For machine learning and AI workloads, you may also want to consider GPU plans.
Availability
Dedicated CPU plan availability varies by plan generation and region. See the product availability table for the most up-to-date list.
Plans and pricing
Dedicated CPU plans are available across multiple hardware generations and plan families, giving you the flexibility to match compute performance with your application needs. All plans provide competition-free CPU resources, with differences in processor architecture, memory-to-vCPU ratios, and pricing models.
Generations
G8 Dedicated (Featured)
G8 Dedicated plans deliver the most advanced compute performance, making them ideal for resource-intensive workloads. They provide high-consistency compute powered by Zen 5 cores, with new 1:4 VM shapes and larger memory options. Consider these plans for enterprise-grade, latency-sensitive, and resource-heavy applications.
G8 Dedicated plans differ from G6 and G7 Dedicated plans in that they use our newest network transfer model, usage-based billing, which allows you to pay for only the bandwidth you use.
G7 Dedicated
G7 Dedicated plans are well-suited to performance workloads. They deliver consistent, enterprise-grade performance on Zen 3 cores. These plans are designed for CPU-intensive, business-critical applications requiring reliability and scale.
G6 Dedicated
G6 Dedicated plans are good for production workloads. They offer CPUs with no resource contention. These plans deliver balanced performance suitable for most production cloud applications.
G6 plans remain available for existing deployments; however, most new workloads benefit from the performance improvements of G7 and G8 Dedicated plans.
Plan types
Dedicated CPU plans are also organized by plan type. These types determine the memory-to-vCPU ratio, helping you choose the right balance of compute and memory for your workload. A short sketch after the lists below shows how to read each plan's ratio from the API.
Compute optimized (1:2)
Compute optimized plans provide approximately 2 GB of RAM per vCPU, offering higher compute density for CPU-intensive and latency-sensitive workloads. This plan type is a strong fit for a range of compute-intensive tasks, including:
- Converting or compressing videos and other media files
- Speeding up software builds and automated testing
- Running calculation heavy scientific or engineering workloads
- Supporting busy websites or apps that receive lots of quick, simple requests
- Managing everyday database transactions for small to medium data sets
General purpose (1:4)
General purpose plans provide approximately 4 GB RAM per vCPU, offering a balanced mix of compute and memory. These plans are suitable for most production applications and work well for tasks such as:
- Hosting websites and apps with steady traffic
- Running relational or NoSQL databases that need more memory
- Powering in-memory caches for faster data access
- Supporting e-commerce sites with variable traffic and product data
- Handling analytics workloads that process larger data sets
- Running common business or enterprise applications
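To see how these ratios map to concrete plans, the sketch below queries the public Linode API v4 types endpoint (no authentication required) and prints the RAM-per-vCPU ratio of each Dedicated plan. It assumes the `requests` library and fetches only the first page of results for brevity.

```python
# Minimal sketch: list Dedicated CPU plans and their memory-to-vCPU ratios
# from the public Linode API v4 types endpoint. Only the first page of
# results is fetched here for brevity.
import requests

resp = requests.get("https://api.linode.com/v4/linode/types", timeout=30)
resp.raise_for_status()

for plan in resp.json()["data"]:
    if plan["class"] != "dedicated":
        continue
    ram_gb = plan["memory"] / 1024        # the API reports memory in MB
    ratio = ram_gb / plan["vcpus"]        # GB of RAM per vCPU core
    print(f"{plan['id']}: {plan['vcpus']} vCPU, {ram_gb:.0f} GB RAM (1:{ratio:.0f})")
```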
Resource ranges
The ranges in the following table represent minimum and maximum values available across all Dedicated CPU plans within each generation.
| Resource | G8 Dedicated | G7 Dedicated | G6 Dedicated |
|---|---|---|---|
| vCPU cores | 2-256 cores | 2-64 cores | 2-64 cores |
| Memory | 4 GB - 512 GB | 4 GB - 512 GB* | 4 GB - 512 GB* |
| Storage | 40 GB - 5,120 GB | 80 GB - 7,200 GB | 80 GB - 7,200 GB |
| Outbound Network Transfer | 0 TB (usage-based) | 4 TB - 12 TB | 4 TB - 12 TB |
| Outbound Network Bandwidth | 4 Gbps - 12 Gbps | 4 Gbps - 12 Gbps | 4 Gbps - 12 Gbps |
| Compute optimized plans (1:2) | ✔ | | |
| General purpose plans (1:4) | ✔ | | |
| Legacy 1:2 plans | | ✔ | ✔ |
* G6 and G7 Dedicated 512 GB plans have limited deployment availability.
Pricing
Pricing varies by plan generation, resources, and in some cases, region. See the pricing page for full details.
Network transfer
G8 Dedicated plans differ from G6 and G7 Dedicated plans in that they use our newest network transfer model, usage-based billing, which allows you to pay only for the bandwidth you use.
G6 and G7 Dedicated plans include bundled network transfer, with allowances that scale based on plan size.
See Network transfer usage and costs for more information.
Use cases
The following use cases highlight workloads where dedicated, competition-free CPU resources deliver meaningful benefits.
CI/CD toolchains and build servers
Recommended generation: G6, G7, or G8 Dedicated
Continuous Integration and Continuous Delivery (CI/CD) workflows often involve frequent builds, automated tests, and rapid iteration cycles. When many commits land in short intervals, or when large codebases must be compiled repeatedly, the build server becomes CPU-bound. A Dedicated CPU plan ensures that your build and test jobs always have access to guaranteed vCPU capacity. This reduces queue times and prevents unpredictable slowdowns caused by contention.
Dedicated CPU plans are recommended for:
- Build servers that compile code frequently or in parallel
- Automated test runners and integration workflows
- Remote build environments expected to stay active throughout the day
To learn more about CI/CD concepts, see our Introduction to CI/CD guide.
Game servers
Recommended generation: G8 Dedicated
Modern multiplayer games maintain constant communication with many clients while tracking world state, physics, player input, and anti-cheat logic. Game servers coordinate a large number of clients and must sync entire game worlds for each session. When CPU resources are unavailable or contended, players may experience stuttering, lag, or reduced tick rates.
Dedicated server software for popular multiplayer games is a good example of a workload that can benefit from a Dedicated CPU plan.
Audio and video transcoding
Recommended generation: G8 Dedicated
Consider also: GPU and Accelerated (VPU) plans
Transcoding—converting audio or video from one format to another—is both CPU-intensive and time-sensitive, making it well-suited to a Dedicated CPU or GPU plan. Popular transcoding tools like FFmpeg can fully saturate a CPU for long periods, and contention can significantly delay encoding tasks.
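As a rough illustration of how such a job is typically driven, the sketch below invokes FFmpeg from Python for a software H.264 transcode. It assumes `ffmpeg` is installed on the instance; the filenames and encoder settings are placeholders to adjust for your media.

```python
# Minimal sketch: a CPU-bound software transcode driven from Python.
# Assumes ffmpeg is installed; filenames and encoder settings are placeholders.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "input.mov",      # source file (placeholder)
        "-c:v", "libx264",      # software H.264 encoding is CPU-bound
        "-preset", "slow",      # slower presets spend more CPU per frame
        "-crf", "20",           # constant-quality target
        "-c:a", "aac",          # re-encode the audio track
        "output.mp4",
    ],
    check=True,                 # raise if ffmpeg exits with an error
)
```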
Big data and data analysis
Recommended generation: G7 or G8 Dedicated
Extracting meaningful insights from very large datasets often requires specialized software and hardware. Big data is most easily recognized using the "three V's":
- Volume: Generally, if you are working with terabytes, petabytes, exabytes, or more of information, you are in the realm of big data.
- Velocity: With big data, you are using data that is being created, called, moved, and interacted with at a high velocity, for example, the real-time data generated on social media platforms.
- Variety: Variety refers to the many different types of data formats with which you may need to interact. Photos, video, audio, and documents can all be written and saved in a number of different formats. It’s important to consider the variety of data that you will collect in order to appropriately categorize it.
Processing big data is often especially hardware-dependent. A Dedicated CPU can give you access to the isolated resources that are often required to complete these tasks.
Common data analysis tools that can be installed on a Dedicated CPU Linode include:
- Hadoop: An Apache project for the creation of parallel processing applications on large data sets, distributed across networked nodes.
- Apache Spark: A unified analytics engine for large-scale data processing designed with speed and ease of use in mind.
- Apache Storm: A distributed computation system that processes streaming data in real time.
Scientific computing
Recommended generation: G7 or G8 Dedicated
Scientific computing is the term used to describe the use of computing power to solve complex scientific problems that are either impossible, dangerous, or otherwise inconvenient to solve via traditional means. Scientific computing involves numerical modeling, simulation, and other CPU-intensive operations that require consistent computational throughput. Many scientific workflows run long-duration or iterative jobs where even minor performance fluctuations can slow results.
Dedicated CPU plans support:
- Mathematical modeling and simulation
- Complex numerical analysis
- Research workloads requiring predictable runtimes
Common tools for scientific computing, such as Jupyter Notebook and NumPy, can be installed on a Dedicated CPU Linode.
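As a simple illustration of the kind of work involved, the sketch below times a BLAS-backed NumPy matrix multiplication, a CPU-bound operation whose runtime stays predictable when cores are not shared; the matrix size is arbitrary.

```python
# Minimal sketch: a CPU-bound numerical workload. Dense linear algebra like
# this uses the available vCPU cores through NumPy's BLAS backend and runs
# with consistent timing on dedicated cores. The matrix size is arbitrary.
import time
import numpy as np

rng = np.random.default_rng(seed=0)
a = rng.standard_normal((4000, 4000))
b = rng.standard_normal((4000, 4000))

start = time.perf_counter()
c = a @ b                                   # dense matrix multiply
print(f"4000x4000 matmul: {time.perf_counter() - start:.2f} s")
```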
Machine learning
Recommended generation: G8 Dedicated
Consider also: GPU plans
Machine learning is a powerful approach to data science that uses large sets of data to build prediction algorithms. These prediction algorithms are commonly used in “recommendation” features on many popular music and video applications, online shops, and search engines. When you receive intelligent recommendations tailored to your own tastes, machine learning is often responsible. Other areas where you might find machine learning being used are in self-driving cars, process automation, security, marketing analytics, and health care.
Many machine learning workflows—pre-processing, feature extraction, CPU-optimized model training—benefit from sustained CPU performance. Batch inference tasks and classical machine learning algorithms rely heavily on predictable compute availability.
Common tools used for machine learning and AI that can be installed on a Dedicated CPU Linode include the following (a short CPU inference sketch follows the list):
- TensorFlow - a free, open-source machine learning framework and deep learning library. TensorFlow was originally developed by Google for internal use and later fully released to the public under the Apache License.
- PyTorch - a machine learning library for Python that uses the popular GPU-optimized Torch framework.
- Apache Mahout - a scalable library of machine learning algorithms and distributed linear algebra framework designed to let mathematicians, statisticians, and data scientists quickly implement their own algorithms.
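For a sense of what CPU-side inference looks like, here is a minimal sketch using PyTorch on the CPU; the model, batch, and thread settings are toy placeholders rather than a recommended configuration.

```python
# Minimal sketch: batch inference on the CPU with PyTorch. The model and
# input batch are toy placeholders; on a Dedicated CPU plan this kind of
# pre-processing or classical inference work runs without contention.
import os
import torch

torch.set_num_threads(os.cpu_count() or 1)   # use every dedicated vCPU core

model = torch.nn.Sequential(
    torch.nn.Linear(512, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).eval()

batch = torch.randn(1024, 512)               # placeholder input batch
with torch.no_grad():
    scores = model(batch)
print(scores.shape)                          # torch.Size([1024, 10])
```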
