Kubernetes on Linux with `kubeadm`
Preface
This document will guide you through the creation of a simple single-node Kubernetes cluster using containerd for your home lab.
This is not a guide on how to create a production-ready/hardened environment.
Creating a Kubernetes cluster manually with `kubeadm` can be difficult for a few reasons:
- The Kubernetes documentation, while complete, is very detailed and assumes a fair bit of knowledge about Kubernetes. Because of this, it can be hard to follow if you don't already know a lot about Kubernetes.
- Kubernetes releases are frequent and features are quickly deprecated, which means existing walk-throughs/guides may quickly become out of date.
Version Information
This guide is targeted at Kubernetes v1.20 - v1.23.
If you are attempting to use this guide for another Kubernetes version, please be aware that Kubernetes changes quickly and this guide may be out of date and incorrect. You have been warned.
Links to additional Documentation
We will delve a little bit into each of the items below. In case you need additional documentation, here are some links to the tools we will be using.
Requirements
We will need the following to successfully create a Kubernetes cluster:

- An active internet connection.
- A Linux host, preferably a VM, with a freshly installed version of Linux. In this document we will be using a virtual machine freshly installed with a minimal version of Red Hat Enterprise Linux v8 or Rocky Linux v8. At a minimum, this system needs:
  - 2 GB RAM
  - 2 CPU cores
Containerd vs Docker
Due to Docker's aggressive cash-grab and licensing model changes, applications have been switching away from the Docker product. The reason is as simple as how each is defined: Docker is a full container platform (CLI, image build tooling, its own API, and more), while containerd is just the runtime component that actually runs containers.
The result of the difference is that to control/run a container, you really only need containerd, as long as your application can communicate with its API, which Kubernetes can now do. So, we won't need to install Docker on our host.
Configuring our Host
- Configure the Host
  - Configure kernel modules
    - Configure the `br_netfilter` and `overlay` kernel modules to load on start-up.

      ```
      printf 'br_netfilter\n' | sudo tee /etc/modules-load.d/br_netfilter.conf
      printf 'overlay\n' | sudo tee /etc/modules-load.d/overlay.conf
      ```

    - Manually start the kernel modules

      ```
      sudo modprobe br_netfilter
      sudo modprobe overlay
      ```

    - Configure the sysctl settings to be applied at start-up

      ```
      printf 'net.bridge.bridge-nf-call-ip6tables = 1\n' | sudo tee /etc/sysctl.d/net.bridge.bridge-nf-call-ip6tables.conf
      printf 'net.bridge.bridge-nf-call-iptables = 1\n' | sudo tee /etc/sysctl.d/net.bridge.bridge-nf-call-iptables.conf
      printf 'net.ipv4.ip_forward = 1\n' | sudo tee /etc/sysctl.d/net.ipv4.ip_forward.conf
      ```

    - Update the running system to use the newly created configurations.

      ```
      sudo sysctl --system
      ```
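    - (Optional) Verify the modules and sysctl settings took effect. This is a quick sanity check rather than part of the original steps:

      ```
      # Confirm the kernel modules are loaded
      lsmod | grep -E 'br_netfilter|overlay'

      # Confirm the sysctl values are active (all three should print "= 1")
      sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
      ```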
  - Disable SELinux. (Optional, seriously, you don't need to do this.)

    Author's Note: I'm including this for those that don't want to mess with it, but I desperately hate this step. In my opinion there is no reason to disable SELinux, but this is a home-lab configuration and it can cause some people a little bit of trouble.

    ```
    sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
    sudo setenforce 0
    ```
  - Disable Firewall

    ```
    sudo firewall-cmd --zone=public --permanent --set-target=ACCEPT
    sudo firewall-cmd --complete-reload
    ```
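    If you would rather leave firewalld running, a rough alternative is to open only the ports a single-node cluster typically needs. Treat this as a sketch rather than a complete rule set; the exact list depends on your Kubernetes version, CNI, and add-ons:

    ```
    # Kubernetes API server, etcd, kubelet, and the NodePort service range
    sudo firewall-cmd --permanent --add-port=6443/tcp
    sudo firewall-cmd --permanent --add-port=2379-2380/tcp
    sudo firewall-cmd --permanent --add-port=10250/tcp
    sudo firewall-cmd --permanent --add-port=30000-32767/tcp
    # HTTP/HTTPS for the ingress controller we install later (it uses the host's network)
    sudo firewall-cmd --permanent --add-service=http
    sudo firewall-cmd --permanent --add-service=https
    sudo firewall-cmd --reload
    ```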
- Install Containerd:
  - Install the yum-utils package

    ```
    sudo dnf install -y yum-utils
    ```

  - Add the Containerd Repository

    ```
    sudo yum-config-manager \
        --add-repo \
        https://download.docker.com/linux/centos/docker-ce.repo
    ```

  - Install Containerd

    ```
    sudo dnf install -y containerd.io
    ```

  - Configure Containerd to restart automatically

    ```
    sudo systemctl enable containerd.service
    ```

  - Start Containerd

    ```
    sudo systemctl start containerd.service
    ```
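  - (Optional) Verify containerd is running. A quick check, not part of the original steps:

    ```
    # Should report "active"
    systemctl is-active containerd.service

    # Query the containerd API directly; prints client and server versions
    sudo ctr version
    ```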
- Install kubeadm
  - Add Kubernetes Repository

    ```
    cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
    enabled=1
    gpgcheck=1
    gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
    exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
    EOF
    ```

  - Install kubelet, kubeadm, kubectl

    ```
    sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
    ```

  - Configure kubelet to restart automatically

    ```
    sudo systemctl enable kubelet.service
    ```

  - Start kubelet

    ```
    sudo systemctl start kubelet.service
    ```
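  - (Optional) Confirm the tools installed correctly. Until the cluster is initialized with `kubeadm init`, it is normal for the kubelet service to restart in a loop while it waits for configuration:

    ```
    kubeadm version
    kubectl version --client
    systemctl status kubelet.service --no-pager
    ```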
- Configuring the systemd cgroup driver
  - Generate the default containerd config file

    ```
    containerd config default | sudo tee /etc/containerd/config.toml
    ```

  - Update the containerd config to use the systemd cgroup driver by modifying `/etc/containerd/config.toml` (around line 95) to include the systemd cgroup option, like this:

    ```
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      ...
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
    ```

  - Restart containerd

    ```
    sudo systemctl restart containerd.service
    ```
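  - (Optional) Verify the cgroup driver change is in place:

    ```
    # Should print a line containing "SystemdCgroup = true"
    grep SystemdCgroup /etc/containerd/config.toml
    ```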
- Installing Helm
  - Go to helm's release page and grab the download link for the latest linux_amd64 stable release. (At the time this article was written, it was `https://get.helm.sh/helm-v3.8.0-linux-amd64.tar.gz`.)
  - Using the link above, run the following to download and install helm:

    ```
    HELM_URL="https://get.helm.sh/helm-v3.8.0-linux-amd64.tar.gz"
    curl -Ls ${HELM_URL} | tar -zxf - linux-amd64/helm
    sudo mv linux-amd64/helm /usr/local/bin/
    rm -rf linux-amd64/
    ```

  - Alternatively you could use the helm install script, but keep in mind that it is never wise to run a script as root directly from the internet:

    ```
    curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | sudo bash
    ```
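  - (Optional) Verify the helm binary works:

    ```
    # Prints the helm client version, e.g. v3.8.0 if you used the link above
    helm version
    ```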
- Disable Linux Swap
  - Kubernetes does not like a swap partition to exist. The following command will disable swap in the RUNNING system. Due to the varying number of ways swap can be configured on a Linux host, I will leave it up to you to permanently disable swap on the system. (Hint: this usually involves modifying `/etc/fstab`; see the sketch after this step.)

    ```
    sudo swapoff -a
    ```
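  - As a sketch of one way to make this permanent, you can comment out the swap entry in `/etc/fstab`. This assumes swap is configured through fstab (and not, for example, through a systemd swap unit or zram), so review the file before and after editing:

    ```
    # Back up fstab, then comment out any line whose filesystem type is "swap"
    sudo cp /etc/fstab /etc/fstab.bak
    sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
    ```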
Create the Cluster
Now that the system is ready, it's finally time to create the cluster. But first, I need to go into a little bit of detail on how Kubernetes operates.
Kubernetes uses two private internal virtual networks. The first is a service network, used to communicate with other nodes and for internal Kubernetes orchestration and services. The second is a pod network, used by the containers when they are created to allow for communication. In order to avoid a collision with my physical network, I want to define both of these networks manually. In this example we will use 10.42.0.0/16 for the pod network and 10.43.0.0/16 for the service network.
With that information, we can now create the cluster. We do that by running the following command:
```
sudo kubeadm init --pod-network-cidr 10.42.0.0/16 --service-cidr 10.43.0.0/16
```
When complete you will see some output that looks like this:
```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.20.18.193:6443 --token y0lqvj.olkaucjqldj841k7 \
    --discovery-token-ca-cert-hash sha256:0417d205e5283e114b5ae2ef79e0fca01edeb5d3a56c84e624024e4249595676
```
We will be using `kubectl` to interact with our cluster. `kubectl` uses a kube-config file to gain access to a cluster. For now, we will use the admin kube-config. We need to tell `kubectl` where that file is. To do that we will run:
```
export KUBECONFIG=/etc/kubernetes/admin.conf
```
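As a quick sanity check, you can now ask the cluster for its node and pod status. Right after `kubeadm init`, expect the node to report `NotReady` and the coredns pods to sit in `Pending`; both will resolve once we install a CNI in the next section:

```
kubectl get nodes
kubectl get pods --all-namespaces
```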
Installing a CNI
Now, just because we have a cluster does not mean it's functional yet. In fact, if you take a look at your cluster now, you will see that there are some coredns pods in a non-running state. Because of the number of ways a cluster can be used, Kubernetes does not come with a CNI (Container Network Interface) by default. There are many to choose from, but for this cluster we are going to use Canal, which is a combination of the Calico and Flannel CNIs.
You can do this with one step:
```
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/canal.yaml
```
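The Canal pods take a minute or two to pull and start. You can watch them come up and confirm the coredns pods move to `Running` with:

```
# -w watches for changes; press Ctrl-C to stop
kubectl get pods --namespace kube-system -w

# The node should eventually report "Ready"
kubectl get nodes
```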
Allow Local Workloads
Normally, non-service workloads are prevented from running on a management node, which is currently what we have. In order to allow workloads to start on a single-node cluster, we need to remove a configuration from the current node. This is done by removing the taint on the current node:

```
kubectl taint nodes --all node-role.kubernetes.io/master-
```
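You can confirm the taint is gone by checking the node description; the Taints line should read `<none>`. Note that newer Kubernetes releases name this taint `node-role.kubernetes.io/control-plane` instead of `master`, so if the command above reports the taint was not found, try removing the control-plane variant as well:

```
# The "Taints:" line should show <none> once workloads are allowed
kubectl describe nodes | grep Taints

# Only needed on newer releases where the taint key differs
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
```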
Install Ingress Controller
Now we have a cluster, but we don't have an entrypoint for traffic, nor do we have a method to route our traffic. This is done with an ingress controller. Let's use `helm` to install, configure, and manage our ingress controller. In this example, we will use the ingress-nginx controller.
First, we need to configure helm and tell it where the ingress-nginx repository is so it can fetch the data it needs to handle our deployment.
```
helm repo add nginx https://kubernetes.github.io/ingress-nginx
```
Normally, we would have a load balancer or other high-availability system in place to route traffic to our cluster, but as this is a home lab, we will configure the ingress controller to use the host's network to allow easy access.
To do that, we deploy ingress-nginx with the following command:
```
helm install \
    --create-namespace \
    --namespace ingress-nginx \
    ingress-nginx \
    nginx/ingress-nginx \
    --set controller.hostNetwork=true
```
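Because we set `controller.hostNetwork=true`, the controller binds ports 80 and 443 directly on the VM. Once its pod is running, a request to the host should come back with an HTTP 404 from nginx, since no ingress rules exist yet:

```
kubectl get pods --namespace ingress-nginx

# Expect a 404 from the ingress controller's default backend
curl -i http://localhost/
```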
Install a Workload
Our cluster should be running and ready for a workload. Let's deploy a simple webserver to our cluster.
- Create the namespace for our workload:

  ```
  kubectl create ns test-workload
  ```

- Deploy a container image as an example workload:

  ```
  kubectl create deployment --namespace test-workload nginx --image nginx
  ```

- Create a Kubernetes service that will route traffic to the container:

  ```
  kubectl expose deployment --namespace test-workload nginx --port 80
  ```

- Create an ingress to route traffic to the Kubernetes service:

  ```
  cat <<EOF | tee workload-ingress.yaml
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: nginx
    namespace: test-workload
    annotations:
      kubernetes.io/ingress.class: "nginx"
  spec:
    rules:
    - http:
        paths:
        - backend:
            service:
              name: nginx
              port:
                number: 80
          pathType: ImplementationSpecific
          path: /
  EOF

  kubectl apply -f workload-ingress.yaml
  ```
Your workload should now be accessible from your computer by visiting the IP address of your VM.
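For example, you can test it from your workstation with curl. The address below is a placeholder; substitute your VM's actual IP:

```
# Replace 192.0.2.10 with the IP address of your VM
curl -i http://192.0.2.10/
# Expect an HTTP 200 and the "Welcome to nginx!" page from the test workload
```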
Closing Thoughts
Keep in mind that this is a VERY simple workload and we are not exploring much in the way of options for Kubernetes, the CNI, or the ingress controller.