Kubernetes Lab Install with Ubuntu 22.04
Prerequisites
Before getting started, make sure that you have KVM installed and a network bridge created.
Create VMs for the Kubernetes Cluster
The first thing we need to do is create our virtual machines. For this lab, we will create three virtual machines: one for the controller and two for workers.
It will make things easier to assign static IPs during the Ubuntu Server installation. Make sure you choose IPs that are not already in use and fall within your network's subnet.
Name | Role | Memory | vCPUs | IP Address |
---|---|---|---|---|
k8s-controller | controller | 4 GB | 2 | 192.168.1.200 |
k8s-worker-01 | worker | 8 GB | 4 | 192.168.1.201 |
k8s-worker-02 | worker | 8 GB | 4 | 192.168.1.202 |
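The VMs in the table can be created with virt-install. The sketch below only prints the commands so they can be reviewed first; the bridge name (br0), disk size, and ISO path are assumptions to adjust for your host:

```shell
# Sketch: print a virt-install command for each VM in the table above.
# The bridge name (br0), disk size, and ISO path are assumptions - adjust
# them for your host before running the printed commands.
print_vm() {
    name=$1; mem_mb=$2; vcpus=$3
    echo "virt-install --name $name --memory $mem_mb --vcpus $vcpus \\"
    echo "  --disk size=40 --os-variant ubuntu22.04 \\"
    echo "  --network bridge=br0 --cdrom /path/to/ubuntu-22.04-live-server-amd64.iso"
}
print_vm k8s-controller 4096 2
print_vm k8s-worker-01  8192 4
print_vm k8s-worker-02  8192 4
```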
Configure all nodes
We need to run some configuration on each of the nodes. Make sure to run all of the following commands on every node in the cluster, as root (or prefixed with sudo).
Configure /etc/hosts
Open the /etc/hosts file and add the following lines (substituting the IPs you chose earlier):
192.168.1.200 k8s-controller
192.168.1.201 k8s-worker-01
192.168.1.202 k8s-worker-02
Save the file and exit.
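Alternatively, the entries can be appended without opening an editor. This sketch uses a temporary file as a stand-in for /etc/hosts so it can be tried safely; on a real node you would target /etc/hosts with sudo:

```shell
# Append the host entries with tee instead of an editor. A temporary file
# stands in for /etc/hosts here; on a real node use /etc/hosts with sudo tee -a.
hosts_file=$(mktemp)
cat <<'EOF' | tee -a "$hosts_file"
192.168.1.200 k8s-controller
192.168.1.201 k8s-worker-01
192.168.1.202 k8s-worker-02
EOF
hosts_content=$(cat "$hosts_file")
rm -f "$hosts_file"
```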
Disable swap
Run the following commands:
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
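The sed command comments out any fstab line containing " swap " so swap stays off after a reboot. Its effect can be previewed safely on a throwaway copy (the sample fstab lines here are illustrative):

```shell
# Demonstrate the fstab edit on a throwaway copy rather than the real file.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF
sed -i '/ swap / s/^\(.*\)$/#\1/g' "$tmp"
result=$(cat "$tmp")
echo "$result"   # the swap line is commented out; the root line is untouched
rm -f "$tmp"
```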
Load Kernel Modules
Run these commands to load the required kernel modules:
tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
Next, persist the required sysctl settings:
tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
Apply the changes:
sysctl --system
Install containerd
First, install the containerd prerequisites:
apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
Enable Docker Repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Install containerd
apt install -y containerd.io
Generate containerd's default configuration and switch it to the systemd cgroup driver, which is what the kubelet expects on Ubuntu 22.04:
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
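The sed line flips containerd's cgroup driver setting from cgroupfs to systemd. Its effect can be previewed on a minimal config fragment before touching the real /etc/containerd/config.toml:

```shell
# Preview the SystemdCgroup edit on a minimal sample of the containerd config.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF
sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' "$tmp"
result=$(cat "$tmp")
echo "$result"
rm -f "$tmp"
```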
Restart and enable the service:
systemctl restart containerd
systemctl enable containerd
Get ready to install Kubernetes
First, add the apt repository for Kubernetes. The legacy apt.kubernetes.io repository has been shut down, so use the community-owned pkgs.k8s.io repository instead (v1.29 is shown here; substitute the minor version you want):
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
Next, install kubelet, kubeadm, and kubectl, then hold them at their installed versions so an unattended upgrade can't break the cluster:
apt install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
Kubernetes Controller Configuration
This section is only to be completed on the Kubernetes controller node.
Now we need to initialize the Kubernetes cluster.
kubeadm init --control-plane-endpoint=k8s-controller
Note: Calico's default pod CIDR is 192.168.0.0/16, which overlaps the 192.168.1.x lab network used here. If that is true of your network as well, pass a non-overlapping range such as --pod-network-cidr=10.244.0.0/16 and set the matching CALICO_IPV4POOL_CIDR in the Calico manifest.
You will see output with the next commands to run.
Run the following commands to manage the cluster as a regular user:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run the following commands to view the cluster status.
$ kubectl cluster-info
$ kubectl get nodes
Join Worker Nodes to the Cluster
The output of the kubeadm init command will include a line that you can copy and paste to run on your worker nodes. This will join them to the cluster.
For example:
kubeadm join k8s-controller:6443 --token gbgcmh.lkajfslksjadf \
--discovery-token-ca-cert-hash sha256:{hidden}
Run the line shown in your output on each worker node. If you no longer have the output (tokens expire after 24 hours by default), generate a fresh join command on the controller with:
kubeadm token create --print-join-command
Finishing up cluster configuration
Back on our controller node run the following command:
$ kubectl get nodes
Install Calico Pod Network
In order for pods on different nodes to talk to each other, we need to install the Calico pod network add-on:
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml
After a short while, the kube-system pods should show as Running and all of your nodes should report Ready:
$ kubectl get pods -n kube-system
$ kubectl get nodes
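The Ready check can also be scripted with awk. Sample `kubectl get nodes` output is embedded below so the snippet is self-contained; on the controller you would pipe the real command's output in instead:

```shell
# Count Ready nodes from `kubectl get nodes`-style output. The sample output
# below is illustrative; on the controller, pipe in the real command instead:
#   kubectl get nodes | awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }'
nodes_output='NAME            STATUS  ROLES          AGE  VERSION
k8s-controller  Ready   control-plane  10m  v1.29.0
k8s-worker-01   Ready   <none>         5m   v1.29.0
k8s-worker-02   Ready   <none>         5m   v1.29.0'
ready=$(printf '%s\n' "$nodes_output" | awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }')
echo "$ready nodes Ready"
```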
Create Snapshots
Save yourself some time and create snapshots of all three virtual machines.
Since this is a lab environment, our Kubernetes installation will most assuredly get messed up at some point.
Run the following commands on your VM host:
virsh snapshot-create-as k8s-controller controller-snapshot0 --description "k8s installed"
virsh snapshot-create-as k8s-worker-01 worker1-snapshot0 --description "k8s installed"
virsh snapshot-create-as k8s-worker-02 worker2-snapshot0 --description "k8s installed"
When things get hosed, run these commands to restore the snapshots:
virsh snapshot-revert k8s-controller controller-snapshot0
virsh snapshot-revert k8s-worker-01 worker1-snapshot0
virsh snapshot-revert k8s-worker-02 worker2-snapshot0
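The create/revert pairs above can be wrapped in one small loop. This sketch prints the virsh commands rather than running them, so the output can be reviewed (or piped to sh) on the VM host:

```shell
# Print the virsh snapshot commands for each VM instead of running them
# directly; snapshot names match those used above.
snap_cmds() {
    action=$1   # snapshot-create-as or snapshot-revert
    for pair in "k8s-controller controller-snapshot0" \
                "k8s-worker-01 worker1-snapshot0" \
                "k8s-worker-02 worker2-snapshot0"; do
        set -- $pair
        if [ "$action" = "snapshot-create-as" ]; then
            echo "virsh $action $1 $2 --description \"k8s installed\""
        else
            echo "virsh $action $1 $2"
        fi
    done
}
snap_cmds snapshot-create-as
snap_cmds snapshot-revert
```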