Known issues you may encounter with LKE Enterprise

🚧

LKE Enterprise clusters provisioned during the beta period (before June 5, 2025) should be deleted and re-created. No upgrade path is supported for these clusters.

❗️

Resource deletion issue resolved in v1.31.8+lke5

We noticed an issue with earlier LKE Kubernetes releases (v1.31.8+lke3 and earlier) where some resources remained after they were deleted from an LKE Enterprise cluster. This has been resolved in v1.31.8+lke5. See the Resource deletion issue section on this page for more details.

There are a few known issues present in the LKE Enterprise Limited Availability launch. These issues are detailed below, along with any workarounds or information to help you plan around these limitations.

Additional configuration required before deploying NodeBalancers to a VPC within an LKE Enterprise cluster

📘

This issue has been resolved in version v1.31.8+lke5. Upgrade your clusters and worker nodes to this latest version. See Upgrade an LKE Enterprise cluster to a newer Kubernetes version.

If you intend to deploy NodeBalancers within an LKE Enterprise cluster (version v1.31.8+lke3 and earlier) and assign those NodeBalancers to a VPC, your account must be enabled to use this feature and the proper service annotation must be set. The required service annotation is service.beta.kubernetes.io/linode-loadbalancer-backend-ipv4-range, and its value should be a free and unique /30 (or larger) subnet from within the 10.254.0.0/22 subnet of the VPC configured for the worker nodes. Here is an example:

metadata:
  annotations:
    # Use a free /30 (or larger) range from the 10.254.0.0/22 VPC subnet configured for the worker nodes.
    service.beta.kubernetes.io/linode-loadbalancer-backend-ipv4-range: "10.254.0.0/30"

Additional details are available in the Configuring NodeBalancers section of the LKE Enterprise page and in the Configuring NodeBalancers directly with VPC section of the Linode CCM documentation on GitHub.

After this service annotation has been applied, the load balancer resources may need to be deleted and recreated. If you still have issues deploying NodeBalancers to VPCs after including the correct service annotation, reach out to your account manager or our sales team to confirm that your account has this feature enabled.
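To illustrate, here is a minimal sketch of a complete Service manifest that carries this annotation. The service name, selector, and ports are hypothetical placeholders, and the /30 range must be a free range within your cluster's VPC subnet.

# Apply a hypothetical LoadBalancer Service that requests a NodeBalancer with a
# VPC backend range. The name, selector, and ports are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: example-lb
  annotations:
    service.beta.kubernetes.io/linode-loadbalancer-backend-ipv4-range: "10.254.0.0/30"
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
    - port: 80
      targetPort: 8080
EOF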

New nodes may not automatically join the cluster

In rare cases, newly created nodes may not properly join your LKE Enterprise cluster. If this occurs, recycle the affected nodes.
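As a sketch, assuming the Linode CLI is installed and configured and that your CLI version includes the lke node-recycle action, an affected node can be recycled by its node ID. The cluster and node IDs below are placeholders.

# List the cluster's node pools to find the ID of the affected node
# (12345 is a placeholder cluster ID).
linode-cli lke pools-list 12345
# Recycle the affected worker node by its node ID (placeholder shown);
# the node is deleted and re-provisioned.
linode-cli lke node-recycle 12345 12345-abc123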

Node pools may not gracefully drain nodes when scaling down

In some cases, excess worker nodes may be terminated immediately instead of being gracefully drained when a node pool scales down. Persistent volumes attached to pods on these nodes might not be properly detached when the nodes are deleted, and these volumes may then fail to attach to pods that are immediately rescheduled onto other nodes. Contact the Support team if this occurs and you need assistance.

Downscaling does not occur as expected

We have noticed some cases in which downscaling does not occur. For example, scaling down from 300 nodes to 200 nodes may result in the cluster continuing to have 300 nodes. This is rare, and improvements have been made to reduce the chances of it happening. If it does occur, please reach out to the Support team.

The firewall_id field should not be used when creating or updating node pools through the Linode API

When you create or update a node pool through the Linode API, do not include the firewall_id field. If this field is included, new worker nodes for this node pool will never enter a ready state and cannot be used.
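For example, a node pool creation request that correctly omits firewall_id might look like the following sketch. The cluster ID, plan type, and count are placeholders, and the request assumes a personal access token in the LINODE_TOKEN environment variable.

# Create a node pool without the firewall_id field (12345 is a placeholder cluster ID).
curl -X POST \
  -H "Authorization: Bearer $LINODE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"type": "g6-standard-4", "count": 3}' \
  https://api.linode.com/v4/lke/clusters/12345/pools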

Worker nodes are not automatically upgraded when cluster is upgraded

Kubernetes version upgrades are managed differently on LKE Enterprise clusters than on non-enterprise LKE clusters. When the cluster is upgraded to the latest version, only the control plane is upgraded. To upgrade existing worker nodes, you must also update the Kubernetes version assigned to each node pool and, when using the on recycle strategy, recycle the worker nodes in that pool. For instructions, see Upgrade an LKE Enterprise cluster to a newer Kubernetes version.
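As a sketch, assuming node pools on LKE Enterprise clusters accept a k8s_version field, a pool's Kubernetes version can be updated through the Linode API and the pool then recycled. The cluster ID, pool ID, and version below are placeholders, and LINODE_TOKEN is assumed to hold a personal access token.

# Update the Kubernetes version assigned to a node pool
# (12345 and 67890 are placeholder cluster and pool IDs).
curl -X PUT \
  -H "Authorization: Bearer $LINODE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"k8s_version": "v1.31.8+lke5"}' \
  https://api.linode.com/v4/lke/clusters/12345/pools/67890
# With the on recycle strategy, recycle the pool so its worker nodes are
# re-created on the new version.
curl -X POST \
  -H "Authorization: Bearer $LINODE_TOKEN" \
  https://api.linode.com/v4/lke/clusters/12345/pools/67890/recycle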

🚧

When upgrading the cluster to a newer Kubernetes version through Cloud Manager, you are prompted to recycle all nodes. Continuing with this process will replace all worker nodes, but they will continue to use the older Kubernetes version. When prompted, click Cancel to prevent recycling these nodes and, instead, follow the instructions in the previously mentioned guide.

Resource deletion issue (resolved)

We have noticed an issue where resources (such as Linodes, NodeBalancers, and VPCs) remained after they were deleted from an LKE Enterprise cluster running Kubernetes version v1.31.8+lke3 or earlier. This issue has been fixed in v1.31.8+lke5. Before deleting resources, please upgrade to this version (see Upgrade an LKE Enterprise cluster to a newer Kubernetes version). If you have already deleted your v1.31.8+lke3 cluster and some resources remain on your account, you can manually remove them using Cloud Manager, the Linode CLI, or the Linode API. Please contact support if assistance is needed.
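As a sketch, leftover resources can be removed with the Linode CLI once you have confirmed they are no longer in use. The resource IDs below are placeholders, and the vpcs command group is assumed to be available in your CLI version.

# Delete an orphaned Linode, NodeBalancer, and VPC by ID (all IDs are placeholders).
linode-cli linodes delete 12345678
linode-cli nodebalancers delete 123456
linode-cli vpcs delete 98765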

LKE Enterprise is not available to restricted users in Cloud Manager

Users are required to have full account access to interact with the new LKE Enterprise service in Cloud Manager. This means that LKE Enterprise clusters cannot be created, viewed, or modified by restricted user accounts within Cloud Manager. As a workaround, restricted users can instead deploy, view, and modify LKE Enterprise clusters through the Linode API or the Linode CLI, since API tokens do not have this same permissions-based limitation.
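For example, a restricted user with a suitably scoped API token could manage clusters from the Linode CLI as sketched below. The label, region, version, and plan values are placeholders, and the tier parameter is assumed to be how the enterprise tier is selected on accounts enabled for LKE Enterprise.

# List the clusters visible to this API token.
linode-cli lke clusters-list
# Create an LKE Enterprise cluster (all values are placeholders; --tier enterprise
# is assumed to select the enterprise tier).
linode-cli lke cluster-create \
  --label example-enterprise-cluster \
  --region us-ord \
  --k8s_version v1.31.8+lke5 \
  --tier enterprise \
  --node_pools.type g6-standard-4 \
  --node_pools.count 3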