LKE Enterprise
LKE Enterprise limited availability launch
LKE Enterprise has been released as part of a limited availability launch. To start deploying LKE Enterprise clusters, contact your account manager or reach out to our sales team.
LKE Enterprise offers a robust set of enterprise-grade capabilities when compared with our standard LKE offering. It’s more scalable than LKE and supports a larger number of worker nodes and pods. The control plane is also highly available by default and runs on dedicated resources, reducing the possibility of resource contention and increasing overall performance.
Features
Dedicated HA control plane
The control plane is the center of any managed Kubernetes service. The dedicated HA control plane equipped on LKE Enterprise clusters is designed to manage the added complexity of Kubernetes for enterprise workloads, including resource management, networking, cloud integrations, configuration databases, and scaling. Dedicated resources increase overall responsiveness and prevent resource contention, while the high-availability architecture increases uptime.
Increased scalability
LKE Enterprise clusters support up to 500 nodes (double the limit of LKE) and up to 5,000 pods. This enables users to scale their clusters to accommodate much larger workloads.
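As a rough illustration of that pod capacity, the hypothetical Deployment sketched below requests 2,000 replicas, which is well above the 1,000-pod ceiling of a standard LKE cluster but within LKE Enterprise limits. The name, labels, and image are placeholders.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: large-workload            # hypothetical name
spec:
  replicas: 2000                  # exceeds standard LKE's 1,000-pod limit; within LKE Enterprise's 5,000
  selector:
    matchLabels:
      app: large-workload
  template:
    metadata:
      labels:
        app: large-workload
    spec:
      containers:
      - name: web
        image: nginx:stable       # placeholder image
        resources:
          requests:
            cpu: 50m              # small requests so the example schedules broadly
            memory: 64Mi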
Improved high-performance networking
A Container Network Interface (CNI) is responsible for configuring network resources within a Kubernetes cluster. LKE Enterprise clusters use Cilium as their CNI, while LKE clusters use Calico. Calico is a mature, reliable, and performant networking solution that is well suited to a variety of use cases. It leverages the BGP routing protocol and Layer 3 networking to provide straightforward network administration. In comparison, Cilium uses eBPF and direct native routing through a VPC, which allows for more advanced monitoring and increased throughput. An LKE Enterprise cluster using Cilium has double the pod-to-pod throughput of an LKE cluster using Calico.
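Because Cilium implements the standard Kubernetes NetworkPolicy API, existing policies generally carry over without Calico-specific changes. As a minimal sketch (the namespace, labels, and port are hypothetical), the following policy restricts ingress to backend pods so that only pods labeled app: frontend can reach them on TCP port 8080:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only       # hypothetical policy name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend                # hypothetical target pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend           # only pods with this label may connect
    ports:
    - protocol: TCP
      port: 8080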
Premium NodeBalancers
Just like LKE clusters, LKE Enterprise clusters can deploy NodeBalancers to add load balancing functionality. Currently, the only available load balancer is our traditional NodeBalancer solution. Akamai’s Premium NodeBalancer solution is coming soon and will only be available to LKE Enterprise users. These high-capacity Premium NodeBalancers allow for up to 100,000 concurrent connections (compared to 10,000 with traditional NodeBalancers).
Specifications
The following features and technical specifications are specific to LKE Enterprise:
- Supports up to 500 nodes and 5,000 pods for increased scalability
- Dedicated high-availability (HA) control plane
- High-performance pod-to-pod networking through VPCs (using direct/native routing)
- High-performance ingress service networking
- Internal VPC that enables isolated intra-cluster communication and data exchange with the load balancing service
- Support for Premium NodeBalancers (coming soon)
- Cilium CNI with kube-proxy replacement through eBPF
- Longer support periods for Kubernetes versions
- Full control over initiating Kubernetes upgrades
LKE Enterprise also inherits the following benefits from LKE:
- Managed Kubernetes control plane
- Support for Akamai Cloud services, including:
  - Volumes (through the Linode Block Storage CSI Driver); see the example claim after this list
  - NodeBalancers (through the Linode CCM)
  - Metadata (through the Linode CCM)
- Worker node pool management and autoscaling
- Fast and predictable Kubernetes version upgrades
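For example, with the Linode Block Storage CSI Driver installed, a Volume can be provisioned through an ordinary PersistentVolumeClaim. The sketch below assumes the driver's default linode-block-storage-retain StorageClass; the claim name and size are placeholders, and you should verify the StorageClass names available on your cluster.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc                                # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: linode-block-storage-retain    # assumed default from the CSI driver; verify on your cluster
  resources:
    requests:
      storage: 10Gi                                # placeholder size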
Comparison
| | LKE Enterprise | LKE |
|---|---|---|
| Maximum # of worker nodes | 500 | 250 |
| Maximum # of pods | 5,000 | 1,000 |
| Control plane resources | Dedicated | Shared |
| High availability (HA) control plane | Yes | Optional add-on |
| VPC isolated networking | Yes | No |
| Node pool autoscaling | Yes | Yes |
| Node pool data encryption | Yes | Yes |
| Automated Kubernetes version updates | Yes | Yes |
| Container Network Interface (CNI) | Cilium | Calico |
| Load balancing solution | NodeBalancers, Premium NodeBalancers (coming soon) | NodeBalancers |
| Kubernetes Dashboard pre-installed | No | Yes |
| Akamai App Platform | Coming soon | Yes |
| Pricing | Paid service (see Pricing) | No additional cost beyond worker node resources, add-ons, and additional deployed services |
| Availability | Limited availability | All core compute regions |
Availability
LKE Enterprise is only available to approved customers in limited regions. If you are interested in deploying LKE Enterprise clusters, please contact your account manager or reach out to our sales team.
Pricing
LKE Enterprise clusters are available to approved customers at $300/month, in addition to the cost of any resources the cluster consumes. These resources, including Linodes, NodeBalancers, and Block Storage volumes, are billed at their normal rates. Review the Pricing page for more information on those costs.
Deploy an LKE Enterprise Cluster
This section requires you to be enrolled in the LKE Enterprise program. If you are not yet enrolled, review the enrollment details in the Availability section.
Create the cluster
The instructions below detail how to deploy an LKE Enterprise cluster through Cloud Manager. For general information on deploying any LKE cluster, see the Create a cluster page.
- Open the Create Cluster form in the Cloud Manager and enter a label for your new cluster.
- The Cluster Tier section is new and allows you to choose between LKE and LKE Enterprise. Select LKE Enterprise. Once selected, the Akamai App Platform and HA Control Plane options are no longer available. In addition, the Control Plane ACL is required and cannot be disabled.
- Select the region for the new cluster. LKE Enterprise is in limited availability and cannot yet be deployed within all regions.
- In the Version dropdown menu, select the Kubernetes version to use. Version numbers are different for LKE Enterprise. See Kubernetes versions available for LKE Enterprise for all available versions.
- Configure the Control Plane ACL by adding IP addresses or CIDR ranges. Only traffic from these addresses is able to access your cluster’s control plane. You must add at least one IP address before continuing.
- Add node pools to your cluster. To accommodate network-sensitive workloads, especially enterprise scenarios that require low latency or high throughput, we highly recommend using Premium instances with LKE Enterprise.
- To finish and deploy your new cluster, click the Create Cluster button.
VPC details
All LKE Enterprise clusters are automatically configured with an internal VPC. This VPC is used for intra-cluster networking as well as data exchange with any configured load balancer (NodeBalancer) services, meaning all pod-to-pod and node-to-node traffic occurs over the VPC interface. Each worker node is assigned an address from the 10.0.0.0/8 subnet, and each node receives a /24 CIDR range from the 10.248.0.0/14 subnet for pod networking. The 10.255.255.0/24 subnet is reserved for internal cluster use. Any NodeBalancers on your cluster also need to be configured for this VPC. See the Configure NodeBalancers section below.
While this VPC can be viewed and modified just like any other VPC on Akamai Cloud, we do not recommend making any changes unless advised otherwise by our team. Editing your cluster’s VPC may disrupt your cluster’s networking and could result in your workload or application becoming inaccessible.
Firewall details
A Cloud Firewall is created as part of every LKE Enterprise cluster and assigned to all worker node VMs. Rules that allow communication over VPC IP addresses are pre-populated. Changes to these existing rules can impact cluster networking. Additional rules can be added as needed.
Kubernetes version life cycle and upgrades
LKE Enterprise maintains separate Kubernetes versions from standard LKE. These versions are supported for longer periods of time, with full support for 12 months followed by maintenance patches for 2 months. Once the support period ends, your clusters are not automatically force-upgraded. Instead, they can remain on their current Kubernetes version until you initiate an upgrade. If a critical vulnerability is identified for a Kubernetes version still running on your clusters, you receive a notification with the upgrade details and timeline. For additional version details, see LKE versioning and life cycle policy.
Configure NodeBalancers
Premium NodeBalancers are coming soon to LKE Enterprise clusters.
Load balancer services can be added to a cluster by defining them in a YAML configuration file and applying that configuration to your cluster. By default, load balancers are deployed as standard NodeBalancers. You can switch to Premium NodeBalancers and configure additional options through service annotations. Specifically, the service.beta.kubernetes.io/linode-loadbalancer-nodebalancer-type annotation defines the type of NodeBalancer that is deployed, using one of the following values:
- “common”: NodeBalancers
- “premium”: Premium NodeBalancers
Additional service annotations are also required to configure the NodeBalancer with the cluster’s VPC. The service.beta.kubernetes.io/linode-loadbalancer-backend-ipv4-range annotation defines the /30 subnet used for the NodeBalancer’s backend interfaces. This subnet must not overlap with that of any other NodeBalancer in the cluster and must fall within the 10.254.0.0/22 range.
Here are example service annotations that set the load balancer type to a Premium NodeBalancer and define the /30 subnet for the NodeBalancer’s backend interfaces:
metadata:
  annotations:
    service.beta.kubernetes.io/linode-loadbalancer-nodebalancer-type: "premium"
    service.beta.kubernetes.io/linode-loadbalancer-backend-ipv4-range: "10.254.0.0/30"
For more details on using Service annotations to configure NodeBalancers, see Load balancing on LKE.
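Putting these annotations together, here is a minimal sketch of a complete LoadBalancer Service; the service name, selector, and ports are hypothetical. Note that this example uses a different /30 (10.254.0.4/30) to illustrate that each NodeBalancer in the cluster needs its own unique backend range within 10.254.0.0/22.

apiVersion: v1
kind: Service
metadata:
  name: example-lb                # hypothetical service name
  annotations:
    service.beta.kubernetes.io/linode-loadbalancer-nodebalancer-type: "premium"
    service.beta.kubernetes.io/linode-loadbalancer-backend-ipv4-range: "10.254.0.4/30"   # unique /30 within 10.254.0.0/22
spec:
  type: LoadBalancer
  selector:
    app: example-app              # hypothetical pod label
  ports:
  - port: 80                      # external port on the NodeBalancer
    targetPort: 8080              # hypothetical container port
    protocol: TCP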
Upcoming enhancements
Additional LKE Enterprise features will be released over the course of 2025. Some of the planned updates include support for the Akamai App Platform, customer-specified VPCs, dual stack networking (IPv6 support), OIDC authentication, and improved upgrade scheduling.