NVIDIA RTX PRO 6000 Blackwell GPU Onboarding (limited availability)

This guide is designed to move you from initial access to your first successful deployment. Whether you're deploying a specialized Large Language Model (LLM) or integrating real-time AI features into your application, this will help you select the right resource plan and identify the optimal region for your workload.

📘

NVIDIA RTX PRO 6000™ Blackwell Server Edition and the NVIDIA Quadro RTX 6000™ GPU plans have limited deployment availability.

See the Compute Service Level Agreement for legal terms that apply to Akamai features that are in limited availability or otherwise not yet released into general availability.

Which GPU plan is right for me?

GPU requirements are rarely one-size-fits-all. A workload dominated by AI inference has vastly different architectural needs than one focused on video processing or model serving. Choosing the right hardware is the difference between a bottleneck and a breakthrough. Use the Selection Guide to navigate our GPU fleet and identify the architecture optimized for your performance, memory, and cost requirements.

Selecting your plan

The NVIDIA RTX PRO 6000 Blackwell Server Edition GPU Linodes are available in a range of configurations so you can match the hardware to your workload instead of over- or under-provisioning. For a full comparison of sizes and pricing, see Plans and pricing.

Plans scale along three main dimensions:

  • GPU count (1-4 cards). Start with a single GPU for focused inference workloads, or scale up to multi-GPU configurations when you need higher throughput or more parallel jobs.
  • Video memory (96 GB - 384 GB VRAM). Higher-tier plans provide significantly more VRAM, enabling deployment of larger language and multimodal models.
  • System resources (vCPUs, system RAM, storage). As you move up the plan family, you gain additional CPU, memory, and SSD capacity to keep data pipelines and logging from becoming bottlenecks.

Use the smaller plans for targeted, single-model inference and development environments. Choose larger multi-GPU plans when serving high-concurrency APIs or running workloads that benefit from keeping more models in memory at once.
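If you prefer to compare plans programmatically, the public /linode/types endpoint returns each plan's GPU count, vCPUs, memory, and disk without requiring a token. As a sketch: the "g3-gpu-rtxpro6000-blackwell" ID prefix below is an assumption based on the example plan ID used later in this guide, so adjust the filter to match the IDs you actually see.

```shell
# List GPU plan specs from the public /linode/types endpoint (no token needed).
# NOTE: the "g3-gpu-rtxpro6000-blackwell" prefix is an assumption based on the
# example plan ID used elsewhere in this guide; adjust it to the IDs you see.
curl -s https://api.linode.com/v4/linode/types \
  | jq '.data[]
        | select(.id | startswith("g3-gpu-rtxpro6000-blackwell"))
        | {id, gpus, vcpus, memory, disk}'
```

Comparing the `gpus`, `memory`, and `disk` fields side by side makes it easier to spot the smallest plan that still fits your model.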

Region availability

To get the best latency and throughput from your NVIDIA RTX PRO 6000 Blackwell Server Edition GPU plan, deploy it in a region that is close to your users. While in limited availability, plans are offered in select data centers. For the most current list, see Product Availability.

When selecting a region, consider:

  • Proximity to your users or data. Shorter network paths generally mean lower latency for interactive AI applications.
  • Product availability. Not every region offers NVIDIA RTX PRO 6000 Blackwell Server Edition GPU plans yet, so verify that your preferred data center includes these plans before you deploy.
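One way to script this check: the public /regions endpoint lists each region's capabilities, and GPU-capable regions advertise a "GPU Linodes" capability. Note that a region listing that capability may still not offer this limited-availability plan, so treat this as a first-pass filter and confirm against Product Availability before deploying.

```shell
# Print regions that advertise GPU capacity. The /regions endpoint is public.
# NOTE: "GPU Linodes" covers GPU plans generally; a region listing it may not
# yet offer this limited-availability plan, so confirm against Product
# Availability before deploying.
curl -s https://api.linode.com/v4/regions \
  | jq -r '.data[] | select(.capabilities | index("GPU Linodes")) | .id'
```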

Known limitations

The NVIDIA RTX PRO 6000 Blackwell Server Edition GPU Linodes are currently offered with limited availability, and we're actively iterating on both the platform and operational experience. As we continue to roll out enhancements, you may encounter a few behaviors that differ from other general availability GPU families.

For the latest details and any recommended workarounds, see known limitations.

How to Deploy an NVIDIA RTX PRO 6000 Blackwell Server Edition GPU Linode

Now that you’ve identified your plan and region, it’s time to spin up your GPU using the deployment method that best fits your workflow:

  • Deploy via Cloud Manager UI. Best for a visual, step-by-step guide.
  • Deploy via API/CLI. For automated scripts and CI/CD pipelines.
  • Deploy via LKE. For managed Kubernetes orchestration.

Deploy Using Cloud Manager

Use the Cloud Manager UI for a visual, guided experience that streamlines the configuration and deployment of your NVIDIA RTX PRO 6000 Blackwell Server Edition GPU plan.

Start by selecting Create > Linode in your Cloud Manager dashboard. The interface walks you through selecting a region that offers NVIDIA RTX PRO 6000 Blackwell Server Edition plans and pairing it with your preferred distribution. When you reach plan selection, open the GPU tab to choose an NVIDIA RTX PRO 6000 Blackwell Server Edition plan.

For a comprehensive walkthrough of the entire provisioning process, see Create a Linode.

Deploy Using Linode API

You can use the Linode API to provision your NVIDIA RTX PRO 6000 Blackwell Server Edition GPU plan. Refer to the Linode API workflows for step-by-step instructions.

📘

The values shown in code snippets are for example purposes only. Request body parameters and response output may be different for you when running these operations.

Update the example API and CLI commands with your own values, including your root password, region, and SSH public key:

curl --request POST \
     --url https://api.linode.com/v4/linode/instances \
     --header 'accept: application/json' \
     --header 'content-type: application/json' \
     --data '
{
  "root_pass": "aComplexP@ssword",
  "type": "g3-gpu-rtxpro6000-blackwell-1",
  "region": "<your_region>",
  "image": "linode/ubuntu22.04",
  "label": "gpu-blackwell-1",
  "authorized_keys": [
    "ssh-rsa AAAA_valid_public_ssh_key_123456785== user@their-computer"
  ]
}
'

Specific details for the POST /linode/instances operation can be found in the Linode API reference documentation.
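After the create request returns, the instance provisions asynchronously. A minimal way to wait for it to boot, assuming TOKEN holds your personal access token and LINODE_ID holds the "id" field from the create response:

```shell
# Poll GET /linode/instances/{id} until the instance reports "running".
# TOKEN and LINODE_ID are placeholders: set TOKEN to your personal access
# token and LINODE_ID to the "id" field of the create response.
until [ "$(curl -s \
      -H "Authorization: Bearer $TOKEN" \
      "https://api.linode.com/v4/linode/instances/$LINODE_ID" \
    | jq -r '.status')" = "running" ]; do
  echo "waiting for instance to boot..."
  sleep 10
done
echo "instance $LINODE_ID is running"
```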

Deploy Using linode-cli

The Linode CLI provides easy access to any of the Linode API operations from a terminal.

📘

The values shown in code snippets are for example purposes only. Request body parameters and response output may be different for you when running these operations.

linode-cli linodes create \
--label blackwell-gpu-1 \
--root_pass <your_password_here> \
--booted true \
--region <your_region> \
--disk_encryption enabled \
--type g3-gpu-rtxpro6000-blackwell-1 \
--authorized_keys "<your_key_here>"

Deploy Using LKE

Akamai's GPU Linodes are available for deployment on standard LKE clusters, letting you run your GPU-accelerated workloads on Akamai's managed Kubernetes service. See Using GPUs on LKE for installing the NVIDIA software components you need to configure GPU-enabled workloads on LKE.
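Once the NVIDIA software components from that guide are installed, GPU-enabled workloads request cards through the nvidia.com/gpu resource limit. As a sketch, a minimal smoke-test pod might look like the following (the pod name and CUDA image tag are illustrative assumptions; any image containing nvidia-smi works):

```yaml
# Minimal GPU smoke test: runs nvidia-smi once on a node with a free GPU.
# Assumes the NVIDIA device plugin is installed (see Using GPUs on LKE).
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.4.1-base-ubuntu22.04  # illustrative tag
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1
```

Apply it with kubectl apply -f and inspect the result with kubectl logs gpu-smoke-test; if the driver and device plugin are healthy, the log shows the GPU in the nvidia-smi device table.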

Need Help?

If you run into any issues deploying, need to adjust your access, or want guidance on GPU performance, we're here to help. Open a support ticket.