Known limitations you may encounter with NVIDIA RTX PRO 6000 Blackwell Server Edition GPU Linodes
There are a few known limitations present in the NVIDIA RTX PRO 6000 Blackwell Server Edition GPU Linodes Limited Availability launch. These issues are detailed below, along with any workarounds or information to help you plan around them.
Automated configuration limitation for Blackwell GPU plans via StackScripts and Cloud-init
While we continue to expand compatibility for automated configuration tools, some users may encounter a 403 Forbidden error when attempting to boot these specific plans using StackScripts or Cloud-init. The error message typically indicates that the Linode plan is not available in the selected region, even when the hardware is supported.
This limitation currently affects both StackScripts and Cloud-init provisioning workflows. If you encounter this error, select a different plan type or contact Support for assistance with manual configuration and deployment.
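One way to plan around this is to confirm that the plan is actually offered in your target region before provisioning. The sketch below assumes the Linode API v4 Region Availability endpoint (`GET /v4/regions/{regionId}/availability`) and its response shape; the region and plan IDs shown in the usage comment are placeholders, not the Blackwell plan's real identifiers.

```python
# Minimal sketch: check plan availability in a region before deploying,
# to avoid the 403 "plan is not available in the selected region" error.
# Assumes the /v4/regions/{regionId}/availability endpoint and response shape.
import json
import urllib.request

API = "https://api.linode.com/v4"

def plan_in_region(availability: list[dict], plan_id: str) -> bool:
    """True if the availability entries list plan_id as available."""
    return any(entry.get("plan") == plan_id and entry.get("available")
               for entry in availability)

def fetch_availability(region_id: str, token: str) -> list[dict]:
    """Fetch per-plan availability for one region (live API call)."""
    req = urllib.request.Request(
        f"{API}/regions/{region_id}/availability",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["availability"]

# Usage (live call; placeholder region and plan IDs):
#   data = fetch_availability("us-east", token)
#   print(plan_in_region(data, "g2-gpu-rtx6000b-1"))
```

If the plan is not listed as available, fall back to a supported plan type or contact Support as described above.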
Watchdog may not automatically reboot Blackwell GPU instances
While we continue to optimize the management framework for high-capacity compute plans, the Watchdog service (Lassie) may not automatically reboot RTX PRO 6000 Blackwell instances following a manual shutdown. This behavior is primarily observed on multi-GPU plans. If an instance remains powered off after a shutdown, it can be restarted manually through the Cloud Manager or the Linode API.
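If an instance stays powered off, the restart can be scripted against the Linode API's boot endpoint (`POST /v4/linode/instances/{id}/boot`). This is a hedged sketch; the instance ID in the usage comment is a placeholder.

```python
# Minimal sketch: manually boot an instance that Lassie did not restart.
import json
import urllib.request

API = "https://api.linode.com/v4"

def boot_url(linode_id: int) -> str:
    """Endpoint for POST /v4/linode/instances/{id}/boot."""
    return f"{API}/linode/instances/{linode_id}/boot"

def boot_instance(linode_id: int, token: str) -> dict:
    """Issue the boot request (live API call)."""
    req = urllib.request.Request(
        boot_url(linode_id),
        data=b"{}",  # empty JSON body; the default config profile is used
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (live call; placeholder instance ID):
#   boot_instance(12345, token)
```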
Linode deletions may be slow or hang for GPU plans
Due to the large memory allocation and hardware cleanup required for RTX PRO 6000 Blackwell plans, the deletion process may occasionally take several minutes to complete. During this time, the instance may remain visible in the Cloud Manager or API beyond the standard timeout window. If a deletion does not complete within five minutes, please contact Support for assistance.
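Automation that deletes these instances should tolerate the slower cleanup. The sketch below polls until the instance returns 404 (deleted), giving up after the five-minute window at which the document suggests contacting Support; the probe is passed in as a callable so the timeout logic stands alone.

```python
# Minimal sketch: wait for a slow GPU-instance deletion to complete.
import time
import urllib.error
import urllib.request

API = "https://api.linode.com/v4"

def instance_gone(linode_id: int, token: str) -> bool:
    """True once GET /v4/linode/instances/{id} returns 404 (live call)."""
    req = urllib.request.Request(
        f"{API}/linode/instances/{linode_id}",
        headers={"Authorization": f"Bearer {token}"},
    )
    try:
        urllib.request.urlopen(req).close()
        return False
    except urllib.error.HTTPError as err:
        return err.code == 404

def wait_for_deletion(probe, timeout=300.0, interval=15.0) -> bool:
    """Call probe() until it returns True or timeout seconds elapse."""
    deadline = time.monotonic() + timeout
    while True:
        if probe():
            return True
        if time.monotonic() >= deadline:
            return False  # still visible after the timeout: contact Support
        time.sleep(interval)

# Usage (live call; placeholder instance ID):
#   wait_for_deletion(lambda: instance_gone(12345, token))
```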
Performance optimization for multi-GPU plans
We are continuously refining the performance of our 2x and 4x RTX PRO 6000 Blackwell GPU plans. While these configurations are currently available for deployment, you may see some performance variability on them until this optimization work is complete.
We recommend monitoring workload performance during this period of ongoing refinement as we continue to enhance the capabilities of our high-density GPU offerings.
Migration support for GPU-accelerated instances
Instances utilizing hardware passthrough currently support cold and warm migration workflows. Because these configurations require a direct hardware assignment to the guest, live migrations are not supported at this time.
If you need to move your instance to a different host or region, you must initiate a migration that allows for a controlled restart of the virtual machine.
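Such a migration can be queued through the Linode API's migrate endpoint (`POST /v4/linode/instances/{id}/migrate`), which performs the controlled restart described above. This is a hedged sketch; the region and instance IDs are placeholders, and passing a `region` in the body is assumed to request a cross-region migration per the public API.

```python
# Minimal sketch: initiate a migration that restarts the instance.
import json
import urllib.request

API = "https://api.linode.com/v4"

def migrate_payload(region=None) -> bytes:
    """JSON body for POST /v4/linode/instances/{id}/migrate.

    An empty body requests a same-region host migration; including
    "region" requests a cross-region migration.
    """
    return json.dumps({"region": region} if region else {}).encode()

def migrate_instance(linode_id: int, token: str, region=None) -> None:
    """Queue the migration (live API call)."""
    req = urllib.request.Request(
        f"{API}/linode/instances/{linode_id}/migrate",
        data=migrate_payload(region),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req).close()

# Usage (live call; placeholder IDs):
#   migrate_instance(12345, token, region="us-west")
```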
Request assistance
If you encounter any issues, open a support ticket.
