Known issues you may encounter with LKE Enterprise
LKE Enterprise clusters provisioned during the beta period (before June 5th, 2025) should be deleted and re-created. No upgrade path is supported for these clusters.
Resource deletion issue (June 6th)
We have noticed an issue where resources (such as Linodes) remain after they are deleted from an LKE Enterprise cluster, and we are working toward a fix. In the meantime, you can manually remove these resources from your account using the Cloud Manager, the Linode CLI, or the Linode API. Please contact Support if you observe this behavior.
There are a few known issues present in the LKE Enterprise Limited Availability launch. These issues are detailed below, along with any workarounds or information to help you plan around these limitations.
Additional configuration required before deploying NodeBalancers to a VPC within an LKE Enterprise cluster
If you intend to deploy NodeBalancers within an LKE Enterprise cluster and assign those NodeBalancers to a VPC, your account must be enabled to use this feature and you must apply the proper service annotation. The required annotation is service.beta.kubernetes.io/linode-loadbalancer-backend-ipv4-range, and its value should be a free and unique /30 (or larger) subnet from within the 10.254.0.0/22 subnet of the VPC configured for the worker nodes. Here is an example:
metadata:
  annotations:
    service.beta.kubernetes.io/linode-loadbalancer-backend-ipv4-range: "10.100.0.0/30"
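For context, here is a sketch of a complete Service manifest that uses this annotation. The annotation and its value match the example above; the service name, selector, and ports are illustrative placeholders rather than values from the LKE Enterprise documentation:

apiVersion: v1
kind: Service
metadata:
  name: example-loadbalancer        # illustrative name
  annotations:
    service.beta.kubernetes.io/linode-loadbalancer-backend-ipv4-range: "10.100.0.0/30"
spec:
  type: LoadBalancer                # provisions a NodeBalancer for this Service
  selector:
    app: example-app                # illustrative; match the labels on your workload's pods
  ports:
    - name: http
      port: 80                      # port exposed by the NodeBalancer
      targetPort: 8080              # port your pods listen on
      protocol: TCP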
Additional details are available in the Configuring NodeBalancers section of the LKE Enterprise page and in Configuring NodeBalancers directly with VPC in the Linode CCM documentation on GitHub.
After this service annotation has been applied, you may need to delete and recreate the load balancer resources. If you still have issues deploying NodeBalancers to VPCs after including the correct service annotation, reach out to your account manager or our sales team to confirm that this feature is enabled on your account.
New nodes may not automatically join the cluster
In rare cases, newly created nodes may not properly join your LKE Enterprise cluster. If this occurs, recycle the affected nodes.
Node pools may not gracefully drain nodes when scaling down
In some cases, excess worker nodes may be terminated immediately instead of being gracefully drained when a node pool scales down. Persistent volumes attached to pods on these nodes might not be properly detached when the nodes are deleted, and these volumes may then fail to attach to pods that are immediately rescheduled onto other nodes. Please contact the Support team if this occurs and you need assistance.
Downscaling does not occur as expected
We have noticed some cases in which downscaling does not occur. For example, scaling down from 300 nodes to 200 nodes may result in the cluster continuing to have 300 nodes. This is a rare case, and improvements have been made to reduce the chances of this happening. If this does occur, please reach out to the Support team.
The firewall_id field should not be used when creating or updating node pools through the Linode API
When you create or update a node pool through the Linode API, do not include the firewall_id field. If this field is included, new worker nodes for this node pool will never enter a ready state and cannot be used.
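For illustration, a minimal JSON request body for the node pool creation endpoint (POST /v4/lke/clusters/{clusterId}/pools) might look like the sketch below; the plan type and count are placeholder values, and the firewall_id field is simply left out:

{
  "type": "g6-standard-4",
  "count": 3
}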