Getting started with LKE
Install kubectl
macOS:
Install via Homebrew:
brew install kubernetes-cli
Linux:
- Download the latest kubectl release:
  curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
- Make the downloaded file executable:
  chmod +x ./kubectl
- Move the command into your PATH:
  sudo mv ./kubectl /usr/local/bin/kubectl
Windows:
Visit the Kubernetes documentation for a link to the most recent Windows release.
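On any of these platforms, you can confirm that kubectl is installed and available in your PATH by checking the client version:
  kubectl version --client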
Create an LKE cluster
- Log in to your Cloud Manager account.
- Select Kubernetes from the left navigation menu and then click Create Cluster.
- The Create a Kubernetes Cluster page appears. At the top of the page, you are required to select the following options:
  - In the Cluster Label field, provide a name for your cluster. The name must be unique among all of the clusters on your account. This name is how you identify your cluster in Cloud Manager's Dashboard.
  - From the Region dropdown menu, select the region where you would like your cluster to reside.
  - From the Version dropdown menu, select a Kubernetes version to deploy to your cluster.
- In the Add Node Pools section, select the hardware resources for the Linode worker node(s) that make up your LKE cluster. To the right of each plan, select the plus (+) and minus (-) buttons to add or remove a Compute Instance from a node pool one at a time.
- Once you're satisfied with the number of nodes in a node pool, select Add to include it in your configuration. If you decide that you need more or fewer hardware resources after you deploy your cluster, you can always edit your Node Pool.
- Once a pool has been added to your configuration, it is listed in the Cluster Summary on the right-hand side of Cloud Manager, detailing your cluster's hardware resources and monthly cost. Additional pools can be added before finalizing the cluster creation process by repeating the previous step for each additional pool.
- When you are satisfied with the configuration of your cluster, click the Create Cluster button on the right-hand side of the screen. Your cluster's detail page appears, and your Node Pools are listed on this page. From this page, you can edit your existing Node Pools, access your Kubeconfig file, and view an overview of your cluster's resource details.
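If you prefer to provision clusters from the command line, the Linode CLI provides an lke cluster-create command that accepts a label, region, Kubernetes version, and node pool definition. The values below are placeholders rather than recommendations, and available flags can vary between CLI versions, so check linode-cli lke cluster-create --help before running it:
  linode-cli lke cluster-create --label example-cluster --region us-east --k8s_version 1.28 --node_pools.type g6-standard-2 --node_pools.count 3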
Access and download your kubeconfig
- To access your cluster's kubeconfig, log in to your Cloud Manager account and navigate to the Kubernetes section.
- From the Kubernetes listing page, click on your cluster's more options ellipsis and select Download kubeconfig. The file is saved to your computer's Downloads folder.
- Open a terminal shell and save your kubeconfig file's path to the $KUBECONFIG environment variable. In the example command, the kubeconfig file is located in the Downloads folder, but you should alter this path to match the file's location on your computer:
  export KUBECONFIG=~/Downloads/kubeconfig.yaml
- View your cluster's nodes using kubectl:
  kubectl get nodes
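The export command above applies only to your current terminal session. If you want kubectl to find the file in new sessions as well, one option is to append the same line to your shell's startup file; the example below assumes Bash and that the kubeconfig remains in your Downloads folder:
  echo 'export KUBECONFIG=~/Downloads/kubeconfig.yaml' >> ~/.bashrc
You can also confirm which cluster kubectl is currently pointed at:
  kubectl config get-contexts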
General network and firewall information
In an LKE cluster, some entities and services are only accessible from within that cluster while others are publicly accessible (reachable from the internet).
Private (accessible only within the cluster):
- Pod IPs, which use a per-cluster virtual network in the range 10.2.0.0/16
- ClusterIP Services, which use a per-cluster virtual network in the range 10.128.0.0/16
Public (accessible over the internet):
- NodePort Services, which listen on all Nodes with ports in the range 30000-32767
- LoadBalancer Services, which automatically deploy and configure a NodeBalancer
- Any manifest which uses hostNetwork: true and specifies a port
- Most manifests which use hostPort and specify a port
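For example, assuming a Deployment named my-app already exists (the name is purely illustrative), the following commands contrast a private ClusterIP Service with a public LoadBalancer Service, the latter of which provisions a NodeBalancer on LKE:
  kubectl expose deployment my-app --name my-app-internal --type ClusterIP --port 80
  kubectl expose deployment my-app --name my-app-public --type LoadBalancer --port 80
The first Service receives an address in the 10.128.0.0/16 range and is only reachable from within the cluster, while the second is assigned a public IP address on its NodeBalancer.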
Exposing workloads to the public internet through the above methods can be convenient, but it can also carry a security risk. To maintain strong security, consider applying firewall rules to your cluster nodes. You can do this through the cloud-firewall-controller, which creates a Cloud Firewall ruleset with strong default policies and applies this firewall to your LKE cluster. You can also do this manually using your preferred firewall; a partial manual example follows the list below. The following policies are needed to allow communication between the node pools and the control plane and to block unwanted traffic.
- Allow kubelet health checks: TCP ports 10250 and 10256 from 192.168.128.0/17 Accept
- Allow WireGuard tunneling for kubectl proxy: UDP port 51820 from 192.168.128.0/17 Accept
- Allow cluster DNS access: TCP/UDP port 53 from 192.168.128.0/17 Accept
- Allow Calico BGP traffic: TCP port 179 from 192.168.128.0/17 Accept
- Allow NodeBalancer traffic: TCP/UDP port range 30000-32767 from 192.168.255.0/24 Accept
- Block all other TCP traffic: TCP All Ports All IPv4/All IPv6 Drop
- Block all other UDP traffic: UDP All Ports All IPv4/All IPv6 Drop
- Block all ICMP traffic: (Not required) ICMP All Ports All IPv4/All IPv6 Drop
- Allow IPENCAP traffic for internal communication between node pools and the control plane: IPENCAP from 192.168.128.0/17 Accept
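As a rough illustration of applying some of these policies manually, the following ufw commands cover the kubelet, WireGuard, and NodeBalancer rules from the list above. This is a partial sketch that assumes ufw is your preferred firewall on each node; it is not an official LKE configuration:
  sudo ufw default deny incoming
  sudo ufw allow proto tcp from 192.168.128.0/17 to any port 10250,10256
  sudo ufw allow proto udp from 192.168.128.0/17 to any port 51820
  sudo ufw allow proto tcp from 192.168.255.0/24 to any port 30000:32767
  sudo ufw allow proto udp from 192.168.255.0/24 to any port 30000:32767
The remaining rules (DNS, Calico BGP, and IPENCAP) would need to be added in the same fashion, and IPENCAP in particular may require editing ufw's before.rules file, which is one more reason the Cloud Firewall Controller is the simpler option.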
To ensure that all nodes in the cluster (including new or recycled nodes) are added to the same firewall ruleset, use of the Cloud Firewall Controller is recommended.
All new LKE clusters create a service named kubernetes in the default namespace, designed to ease interactions with the control plane. This is a standard service for LKE clusters.
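You can view this service with kubectl:
  kubectl get service kubernetes --namespace default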
Next steps
Now that you have a running LKE cluster, you can start deploying workloads to it. Refer to our other guides to learn more.