Linux - How to Set Up a Kubernetes Cluster
15 Feb 2023
Lab Requirements
- 3 Virtual Machines with Ubuntu 22.04.1 LTS installed, each with a static IP address, all residing on the same subnet.
- A Controller (controller) with 2 vCPUs, 2GB vRAM and IP address 192.168.101.90.
- 2 Nodes (node1 & node2) with 1 vCPU, 2GB vRAM and IP addresses 192.168.101.91 and 192.168.101.92.
Steps
Prerequisites to Set Up the Kubernetes Cluster - Applies to all VMs
- Update the Operating System and packages using:
sudo apt update
sudo apt dist-upgrade
- Install a container runtime so that K8s may run containers:
sudo apt install containerd
systemctl status containerd
- Create a default configuration for containerd:
sudo mkdir /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
- Set the cgroup driver to systemd by editing the configuration file:
sudo vi /etc/containerd/config.toml
Under
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
set
SystemdCgroup = true
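For reference, the relevant section of /etc/containerd/config.toml should end up looking something like this (other options under the same heading are left at their defaults):
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
Restarting containerd afterwards ensures the new configuration is loaded:
sudo systemctl restart containerd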
- Disable swap:
sudo swapoff -a
free -m
- Comment out the swap entry within /etc/fstab so that swap remains disabled after a reboot.
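If you prefer to make this change non-interactively, a sed one-liner along these lines should work with a default Ubuntu 22.04 fstab (verify the result afterwards, as the swap entry on your system may differ):
# comment out any line containing " swap " (on a default install this is the /swap.img entry)
sudo sed -i '/ swap / s/^/#/' /etc/fstab
grep swap /etc/fstab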
- Enable IP forwarding by editing:
sudo vi /etc/sysctl.conf
and setting:
net.ipv4.ip_forward=1
Then run:
sudo sysctl --system
which reloads the kernel runtime parameters from the configuration files and displays them as they are applied.
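As an alternative to editing /etc/sysctl.conf directly, the kubeadm documentation recommends a drop-in file that also enables the bridge netfilter parameters; something like the following covers both (the file name k8s.conf is an arbitrary choice):
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system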
- Enable the required kernel modules by editing:
sudo vi /etc/modules-load.d/k8s.conf
and adding, one per line:
br_netfilter
overlay
lsmod
displays the status of modules currently loaded in the Linux kernel.
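The same file can also be written non-interactively and the modules loaded straight away, so the settings can be confirmed before rebooting; a quick sketch:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
lsmod | grep -E 'overlay|br_netfilter'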
- Reboot the Server.
Install K8s - Applies to all VMs
- Add the signing key for the K8s package repository so that your server trusts it:
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
ls -l /usr/share/keyrings/
- Add the K8s package repository as a source:
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo cat /etc/apt/sources.list.d/kubernetes.list
- Install the K8s packages:
sudo apt update
sudo apt install kubeadm kubectl kubelet
sudo apt-mark hold kubelet kubeadm kubectl
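Before initialising the cluster, it is worth confirming that the same versions are installed and held on all three VMs; for example:
kubeadm version
kubelet --version
kubectl version --client
apt-mark showhold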
Configure the K8s Cluster using the Controller
- Initialise the K8s Cluster:
sudo kubeadm init --control-plane-endpoint=192.168.101.90 --node-name controller --pod-network-cidr=10.244.0.0/16
- control-plane-endpoint is the IP address of the controller server.
- node-name is the hostname of the controller server.
- pod-network-cidr is an internal IP address space used within the K8s Cluster. Note: if you change this value you will also have to change other settings (the flannel manifest used later assumes this range), so there should be no reason to change it.
- The output of
kubeadm init
includes a join command that may be run on each worker node to add it to the K8s Cluster:
sudo kubeadm join 192.168.101.90:6443 --token 1u8i34.gr3ji3jxl7qj0x9m --discovery-token-ca-cert-hash sha256:c8df85b95eb98f2cf21ef0b6382d478d596245850a27a46d92587a70e248d6d3
- You will also receive a set of commands that you may run in the context of your standard user to allow the K8s Cluster to be managed without root or sudo access:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
ls -l ~/.kube
kubectl get pods --all-namespaces
displays the kube-system pods running on the controller. Note: the coredns pods are in a Pending state because an overlay network has not been deployed yet. An overlay network is a network built on top of another network, used for management communication between nodes.
- Deploy a network model using:
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
kubectl get pods --all-namespaces
Note: there are multiple network models, but flannel is fine for a lab.
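Once the flannel pods are up, the coredns pods should move from Pending to Running and the controller node should report as Ready; one way to watch for this (press Ctrl+C to stop watching):
kubectl get pods --all-namespaces -w
kubectl get nodes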
Join the Worker Nodes to the K8s Cluster
- Use the command output by the K8s Cluster initialisation:
sudo kubeadm join 192.168.101.90:6443 --token 1u8i34.gr3ji3jxl7qj0x9m --discovery-token-ca-cert-hash sha256:c8df85b95eb98f2cf21ef0b6382d478d596245850a27a46d92587a70e248d6d3
- The token has a timeout, so if you need to regenerate the join command, run the following on the Controller:
kubeadm token create --print-join-command
- From the Controller, check that the nodes have joined successfully by running:
kubectl get nodes
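Optionally, the worker nodes can be given a role label so that the ROLES column shows something more descriptive than <none>; the label value is arbitrary:
kubectl label node node1 node-role.kubernetes.io/worker=worker
kubectl label node node2 node-role.kubernetes.io/worker=worker
kubectl get nodes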
Troubleshooting Advice for Cluster Initialisation
- Check the containerd and kubelet services are running:
systemctl status containerd kubelet
- Review the logs for a failed service using:
journalctl -xu kubelet
- Check that all of the Prerequisites to Set Up the Kubernetes Cluster are in place.
- I found an issue with the version of kubelet and had to roll back to an earlier version using the following commands:
sudo apt remove --purge kubelet
sudo apt install kubelet=1.25.5-00
sudo apt-mark hold kubelet
sudo apt list --installed | grep kubelet
sudo kubeadm reset
sudo kubeadm init --control-plane-endpoint=192.168.101.90 --node-name controller --pod-network-cidr=10.244.0.0/16
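If you do pin an older release, it is safest to keep kubeadm, kubectl and kubelet at matching versions; the 1.25.5-00 version string below is just an example, and apt-cache madison kubelet lists what is available:
sudo apt-mark unhold kubeadm kubectl kubelet
sudo apt install --allow-downgrades kubeadm=1.25.5-00 kubectl=1.25.5-00 kubelet=1.25.5-00
sudo apt-mark hold kubeadm kubectl kubelet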
Troubleshooting Advice for coredns pods in a pending state
- Describe the pending pods for more information:
kubectl describe pod coredns -n kube-system
- Check whether a node is in a tainted state:
kubectl describe node controller | grep Taints
Note: a taint of
node.kubernetes.io/disk-pressure
would indicate that more disk space is required.
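If disk pressure is reported, start by checking free space on the affected node; once space has been freed the kubelet should clear the taint on its own:
df -h
kubectl describe node controller | grep Taints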