Kubernetes on Linux with `kubeadm`

Table of Contents

  1. Preface

  2. Version Information

  3. Links to additional Documentation

  4. Requirements

  5. Containerd vs Docker

  6. Configuring our Host

  7. Create the Cluster

  8. Installing a CNI

  9. Allow Local Workloads

  10. Install Ingress Controller

  11. Install a Workload

  12. Closing Thoughts

Preface

This document will attempt to guide you through the creation of a simple, single-node kubernetes cluster using containerd, suitable for your home-lab.

This is not a guide on how to create a production ready/hardened environment.

Creating a kubernetes cluster manually with kubeadm can be difficult, for a few reasons:

  • The kubernetes documentation, while complete, is very detailed and assumes a fair bit of knowledge about kubernetes. Because of this, it can be hard to follow if you don't already know a lot about kubernetes.
  • Kubernetes releases are frequent and features are quickly deprecated, which means existing walk-throughs/guides may quickly become out of date.

Version Information

This guide is targeted to kubernetes v1.20 - v1.23.

If you are attempting to use this guide for another kubernetes version, please be aware that kubernetes is a fast-moving project and this guide may be out of date or incorrect. You have been warned.

Links to additional Documentation

We will delve a little bit into each of these items below. If you need additional documentation, refer to the official documentation for each of the tools we will be using (kubeadm, kubectl, containerd, helm, and the CNI).

Requirements

We will need the following to successfully create a kubernetes cluster:

  1. An active internet connection.

  2. A linux host, preferably a VM, with a freshly installed version of linux.

    In this document we will be using a virtual machine freshly installed with a minimal version of RedHat Enterprise Linux v8 or Rocky Linux v8. At minimum, this system needs (a quick way to verify is shown after this list):

    • 2 GB RAM
    • 2 cores
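
    If you want to confirm the host meets these minimums before continuing, the following standard commands (all present on a minimal RHEL/Rocky install) will show the CPU count, memory, and OS release:

      nproc                    # number of CPU cores
      free -h                  # total and available memory
      cat /etc/redhat-release  # confirm the OS release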

Containerd vs Docker

Due to Docker's aggressive cash-grab and licensing model changes, applications have been switching away from the Docker product. The reason comes down to how each is defined.

Docker is developer-oriented software with a high-level interface that lets you easily build and run containers from your terminal.

Containerd is a container runtime that abstracts kernel features and provides a relatively high-level container interface.

The result of the difference is that to control/run a container, you really only need Containerd, as long as your application can communicate with its API, which Kubernetes can now do. So, we won't need to install docker on our host. A quick demonstration of this follows below.
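
As a quick, optional demonstration (not a step you need for the cluster), once containerd has been installed later in this guide you can pull and run a container directly with the ctr client that ships with containerd, with no Docker present. The image used here is only an example:

sudo ctr images pull docker.io/library/hello-world:latest
sudo ctr run --rm docker.io/library/hello-world:latest demo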

Configuring our Host

  1. Configure the Host
    1. Configure kernel modules

      1. Configure the br_netfilter and overlay kernel modules to load on start-up.
        printf 'br_netfilter\n' | sudo tee /etc/modules-load.d/br_netfilter.conf
        printf 'overlay\n' | sudo tee /etc/modules-load.d/overlay.conf
        
      2. Manually start the kernel modules
        sudo modprobe br_netfilter
        sudo modprobe overlay
        
      3. Configure the required sysctl parameters to be applied at start-up
        printf 'net.bridge.bridge-nf-call-ip6tables = 1\n' | sudo tee /etc/sysctl.d/net.bridge.bridge-nf-call-ip6tables.conf
        printf 'net.bridge.bridge-nf-call-iptables = 1\n' | sudo tee /etc/sysctl.d/net.bridge.bridge-nf-call-iptables.conf
        printf 'net.ipv4.ip_forward = 1\n' | sudo tee /etc/sysctl.d/net.ipv4.ip_forward.conf
        
      4. Update running system to use newly created configurations.
        sudo sysctl --system
        
    2. Disable SELinux. (Optional, seriously, you don't need to do this.)

      Author's Note: I'm including this for those that don't want to mess with it, but I desperately hate this step. In my opinion there is no reason to disable SELinux, but this is a home-lab configuration and it can cause some people a little bit of trouble.

      sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
      sudo setenforce 0
      
    3. Disable Firewall

      sudo firewall-cmd --zone=public --permanent --set-target=ACCEPT
      sudo firewall-cmd --complete-reload
      
  2. Install Containerd:
    1. Install the yum-utils package
      sudo dnf install -y yum-utils
      
    2. Add the Containerd Repository
      sudo yum-config-manager \
        --add-repo \
        https://download.docker.com/linux/centos/docker-ce.repo
      
    3. Install Containerd
      sudo dnf install -y containerd.io
      
    4. Configure Containerd to restart automatically
      sudo systemctl enable containerd.service
      
    5. Start Containerd
      sudo systemctl start containerd.service
      
  3. Install kubeadm
    1. Add the Kubernetes Repository. (The baseurl below pins a specific minor release of kubernetes; adjust the version in the URL to match the release you intend to install.)
      cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
      [kubernetes]
      name=Kubernetes
      baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
      enabled=1
      gpgcheck=1
      gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
      exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
      EOF
      
    2. Install kubelet, kubeadm, kubectl
      sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
      
    3. Configure kubelet to restart automatically
      sudo systemctl enable kubelet.service
      
    4. Start kubelet
      sudo systemctl start kubelet.service
      
  4. Configuring the systemd cgroup driver
    1. Generate the default containerd config file
      containerd config default | sudo tee /etc/containerd/config.toml
      
    2. Update the containerd config to use the systemd cgroup driver by modifying /etc/containerd/config.toml (Around line 95) to include the systemd cgroup option, like this:
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        ...
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true
      
    3. Restart containerd
      sudo systemctl restart containerd.service
      
  5. Installing Helm
    1. Go to helm's release page and grab the download link for the latest linux_amd64 stable release. (At the time this article was written, it was https://get.helm.sh/helm-v3.8.0-linux-amd64.tar.gz.)
    2. Using the link above run the following to download and install helm:
      HELM_URL="https://get.helm.sh/helm-v3.8.0-linux-amd64.tar.gz"
      curl -Ls ${HELM_URL} | tar -zxf - linux-amd64/helm
      sudo mv linux-amd64/helm /usr/local/bin/
      rm -rf linux-amd64/
      
    3. Alternatively you could use the helm install script, but keep in mind that it is never wise to run a script as root directly from the internet:
      curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | sudo bash
      
  6. Disable Linux Swap
    1. Kubernetes does not like swap to be enabled; by default the kubelet will refuse to start if it is. The following command will disable swap in the RUNNING system. Due to the varying number of ways swap can be configured on a linux host, I will leave it up to you to permanently disable swap on the system. (Hint: this usually involves modifying /etc/fstab.) A quick sanity check of the whole host configuration follows this list.
      sudo swapoff -a
      
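Before creating the cluster, it is worth a quick sanity check that the host configuration above took effect. These commands only read state. Note that it is normal for kubelet to be restarting in a crash loop at this point; it will keep doing so until kubeadm init is run:

lsmod | grep -E 'br_netfilter|overlay'
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
systemctl is-active containerd.service
systemctl is-enabled kubelet.service
helm version
swapon --show    # no output means swap is currently off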

Create the Cluster

Now that the system is ready, it's finally time to create the cluster. But first, I need to go into a little bit of detail on how kubernetes networking operates.

Kubernetes uses two private internal virtual networks. The first is a service network, used to communicate with other nodes and for internal kubernetes orchestration and services. The second is a pod network, used by the containers when they are created to allow them to communicate. In order to avoid a collision with my physical network, I want to define both of these networks manually. In this example we will use 10.42.0.0/16 for the pod network and 10.43.0.0/16 for the service network.

With that information, we can now create the cluster. We do that by running the following command:

sudo kubeadm init --pod-network-cidr 10.42.0.0/16 --service-cidr 10.43.0.0/16

When complete you will see some output that looks like this:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.20.18.193:6443 --token y0lqvj.olkaucjqldj841k7 \
	--discovery-token-ca-cert-hash sha256:0417d205e5283e114b5ae2ef79e0fca01edeb5d3a56c84e624024e4249595676

We will be using kubectl to interact with our cluster. kubectl uses a kube-config file to gain access to a cluster. For now, we will use the admin kube-config. We need to tell kubectl where that file is. To do that we will run:

export KUBECONFIG=/etc/kubernetes/admin.conf
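
With the kube-config in place, kubectl can now reach the cluster. As an optional check, list the node and the system pods; the node will report NotReady and the coredns pods will remain Pending until a CNI is installed in the next section:

kubectl get nodes
kubectl get pods -n kube-system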

Installing a CNI

Now, just because we have a cluster does not mean it's functional yet. In fact, if you take a look at your cluster now, you will see that the coredns pods are in a non-running state. Because of the number of ways a cluster can be used, kubernetes does not come with a CNI (Container Network Interface) by default. There are many to choose from, but for this cluster we are going to use Canal, which is a combination of the Calico and Flannel CNIs.

You can do this with one step:

kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/canal.yaml
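
The canal pods can take a minute or two to pull their images and start. You can watch them come up and confirm the node eventually reports Ready:

kubectl get pods -n kube-system -w
kubectl get nodes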

Allow Local Workloads

Normally, non-service workloads are prevented from running on a control-plane (management) node, which is what we currently have. In order to allow workloads to start on a single-node cluster, we need to remove that restriction from the current node. This is done by removing the taint on the current node. (On kubernetes v1.25 and newer, the taint is named node-role.kubernetes.io/control-plane rather than node-role.kubernetes.io/master.)

kubectl taint nodes --all node-role.kubernetes.io/master-
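
To confirm the change, inspect the node's taints; the NoSchedule taint should no longer be listed:

kubectl describe nodes | grep -i taints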

Install Ingress Controller

Now we have a cluster, but we don't have an entrypoint for traffic nor do we have a method to route our traffic. This is done with an ingress controller. Let's use helm to install, configure, and manage our ingress controller. In this example, we will use the ingress-nginx controller.

First, we need to configure helm and tell it where the ingress-nginx repository is so it can fetch the data it needs to handle our deployment.

helm repo add nginx https://kubernetes.github.io/ingress-nginx

Normally, we would have a load balancer or other high-availability system in place to route traffic to our cluster, but as this is a home-lab, we will configure the ingress controller to use the host's network to allow easy access.

To do that we need to deploy our ingress-nginx deployment with the following command:

helm install \
  --create-namespace \
  --namespace ingress-nginx \
  ingress-nginx \
  nginx/ingress-nginx \
  --set controller.hostNetwork=true
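
Because the controller uses the host's network, a quick way to confirm it is up is to check its pod and make a request against the host itself. An HTTP 404 from nginx is the expected response until an ingress is defined:

kubectl get pods -n ingress-nginx
curl -i http://localhost/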

Install a Workload

Our cluster should be running and ready for a workload. Let's deploy a simple webserver to our cluster.

  1. Create the namespace for our workload:

    kubectl create ns test-workload
    
  2. Deploy a container image as an example workload:

    kubectl create deployment --namespace test-workload nginx --image nginx
    
  3. Create a kubernetes service that will route traffic to the container:

    kubectl expose deployment --namespace test-workload nginx --port 80
    
  4. Create an ingress to route traffic to the kubernetes service:

    cat <<EOF | tee workload-ingress.yaml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: nginx
      namespace: test-workload
      annotations:
        kubernetes.io/ingress.class: "nginx"
    spec:
      rules:
      - http:
          paths:
          - backend:
              service:
                name: nginx
                port:
                  number: 80
            pathType: ImplementationSpecific
            path: /
    EOF
    kubectl apply -f workload-ingress.yaml
    

    Your workload should now be accessible from your computer by visiting the IP address of your VM, as shown below.
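
    For example, from the VM itself or from your workstation (replace the placeholder with your VM's actual IP address):

    curl -i http://<vm-ip-address>/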

Closing Thoughts

Keep in mind that this is a VERY simple workload and we are not exploring much in the way of options for kubernetes, the CNI, or the ingress controller.