Setup and Configure an On-Prem Kubernetes Highly Available Stacked ETCD Cluster


Setup K8 HA - Part1

This is a step-by-step guide on how to set up a Kubernetes cluster with control plane high availability. It's worth noting that the purpose of this guide is expressly to make up for some of the significantly incomplete official documentation.

Based on References:

Official Kubernetes High Availability

Official Kubernetes High Availability Cluster Topology

First off, the guide assumes you meet the following prerequisite skills:

  1. Advanced Linux expertise
  2. Good understanding of load balancing concepts, and of high availability in general
  3. Decent understanding of Kubernetes networking and API server
  4. A lot of patience

Secondly, the minimum server sizing requirements:

1. 3 x CentOS 7 VMs (2 vCPU + 4 GB Mem)
2. 1 x CentOS 7 VM (6 vCPU + 8 GB Mem)
3. 1 x VIP (Virtual Floating IP)

Thirdly, the scope of this document is to help you build a full on-prem Kubernetes cluster with a multi-master control plane and 1, 2, or 3 worker nodes. (It's important to realize that the cluster can in any event be expanded later, at either the master or the worker level, so not having all 3 worker nodes on hand is not considered an issue, although you still have to meet the minimum requirement specs above.)

All things considered, let's get started.

First, let's take a look at a high-level overview of the overall cluster topology.

        High-Level overview of the overall cluster topology

        Control Plane Topology as per the official site

As shown above, the control plane topology of choice is going to be the Stacked ETCD HA type, as:

1. It saves the hassle of building and maintaining a separate external etcd cluster
2. It consequently saves infrastructure costs
3. It is rather easier to maintain
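
For context, here is a minimal sketch of how the stacked topology eventually surfaces in the kubeadm bootstrap config (covered properly later in this series): the control plane endpoint points at the VIP rather than at any single master, so all three stacked-etcd masters sit behind one address. The version and endpoint below simply reuse the values from this guide.

cat > /tmp/kubeadm-config.yaml << EOF
# Sketch only -- the real bootstrap config is built in a later part
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.13
controlPlaneEndpoint: "k8mastervip.peter.loc:6443"
EOF
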
With that said, I'm going to assume you have all the required servers in place. My setup:
        
        Control Plane Node1:  k8master1.peter.loc, 192.168.1.155
        Control Plane Node2:  k8master2.peter.loc, 192.168.1.156
        Control Plane Node3:  k8master3.peter.loc, 192.168.1.157
        Worker Node1: k8node1.peter.loc, 192.168.1.158
        Worker Node2: k8node2.peter.loc, 192.168.1.159 (Not required)
        Worker Node3: k8node3.peter.loc, 192.168.1.160 (Not required)
        VIP : k8mastervip.peter.loc, 192.168.1.161
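
Before making any changes, it may be worth sanity-checking that every machine is reachable from wherever you are working. A quick sketch using the hostnames above:

NODES='k8master1.peter.loc k8master2.peter.loc k8master3.peter.loc k8node1.peter.loc'

for node in ${NODES}; do
  # -c 1 sends a single probe, -W 2 gives it a two-second timeout
  if ping -c 1 -W 2 "${node}" > /dev/null 2>&1; then
    echo "${node} is reachable"
  else
    echo "${node} is NOT reachable" >&2
  fi
done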

        Step 1 - Add Kubernetes Repo

Add the official Kubernetes repo at /etc/yum.repos.d/kubernetes.repo on all nodes:
        
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
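
To confirm the repo file took effect, you can ask yum for the available versions; a quick check (the exact output depends on what the repo currently publishes):

yum repolist
yum list --showduplicates kubeadm | grep 1.18.13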
        

Step 2 - Install Packages

Install the Kubernetes packages on all nodes:
        
        yum install kubelet-1.18.13-0 kubeadm-1.18.13-0 kubectl-1.18.13-0 containerd.io-1.4.3
        
It's worth noting that my explicit choice of package versions here is ultimately the result of extensive testing with this combination; in a word, both Kubernetes and containerd proved stable across multiple benchmark tests. Although this is the recommended versioning, at least for this guide, you can still choose any package combination to your liking. Let me know in the comments what you've used and the results.
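
Once the install finishes, it may be worth confirming that the pinned versions actually landed before moving on, for example:

# Confirm the pinned versions were installed
kubeadm version -o short
kubectl version --client --short
rpm -q kubelet kubeadm kubectl containerd.io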

        Step 3 - Network Setup

Next, enable bridge networking and IP forwarding on all nodes:
        
SYSCTLDIRECTIVES='net.bridge.bridge-nf-call-iptables net.ipv4.conf.all.forwarding net.ipv4.conf.default.forwarding net.ipv4.ip_forward'

# Append each directive to 99-sysctl.conf unless it is already present
for directive in ${SYSCTLDIRECTIVES}; do
  if grep -q "${directive}" /etc/sysctl.d/99-sysctl.conf; then
    echo "Directive ${directive} is already present"
  else
    echo "${directive}=1" >> /etc/sysctl.d/99-sysctl.conf
  fi
done
        
        Then load the new values into sysctl:
        sysctl -p /etc/sysctl.d/99-sysctl.conf
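
If that reload complains that the net.bridge.* keys are unknown, the br_netfilter kernel module is not loaded yet. Load it, persist it across reboots, and re-run the sysctl command:

# br_netfilter provides the net.bridge.bridge-nf-call-* sysctl keys
modprobe br_netfilter
echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf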

        Step 4 - Enable Services

Likewise, enable the kubelet and containerd services on all nodes:
        systemctl enable containerd kubelet
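
A related note: the containerd.io package on CentOS 7 ships a config.toml that disables the CRI plugin, which kubelet needs. If you hit CRI errors later, regenerating the default config is the usual fix; a sketch:

# Regenerate containerd's default config so the CRI plugin is enabled
containerd config default > /etc/containerd/config.toml
systemctl restart containerd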

        Step 5 - Firewalld

Stop, disable, and mask firewalld. (This by all means raises a strong security flag: you should never disable the firewall on a server that is facing the public internet. It is reasonably OK only behind an edge firewall.)
        
        systemctl stop firewalld
        systemctl disable firewalld
        systemctl mask firewalld
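
If you would rather keep firewalld running, the alternative is opening the ports Kubernetes actually needs. For a v1.18 control plane node, that is roughly the following, per the upstream port list:

firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd server client API
firewall-cmd --permanent --add-port=10250-10252/tcp # kubelet, scheduler, controller-manager
firewall-cmd --reload

Worker nodes need 10250/tcp plus the NodePort range 30000-32767/tcp instead.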
        

Step 6 - Create crictl Config for the containerd Socket

Now, onto pointing crictl at the containerd Unix socket so it can access containers. On all nodes, create /etc/crictl.yaml with the following content:
        
        runtime-endpoint: unix:///run/containerd/containerd.sock
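
One way to lay that file down and immediately verify that crictl can talk to containerd (assuming the containerd service is already running):

cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
EOF

# Exits non-zero if crictl cannot reach the socket
crictl info > /dev/null && echo 'crictl can reach containerd'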
        

        Step 7 - Global DNS Config

Make sure all of your 4 machines are aware of each other's hostnames; that, in essence, is how kubelet communicates with the different cluster nodes. I'm using PowerDNS in my lab environment, and having a local DNS server is essentially an asset, as opposed to adding your server hostnames to /etc/hosts. Thus, assuming you don't have a local DNS server, on all nodes edit /etc/hosts (note the VIP hostname is included as well, since the control plane endpoint must resolve on every node):
        
cat >> /etc/hosts << EOF
192.168.1.155 k8master1.peter.loc
192.168.1.156 k8master2.peter.loc
192.168.1.157 k8master3.peter.loc
192.168.1.158 k8node1.peter.loc
192.168.1.161 k8mastervip.peter.loc
EOF
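
A quick way to confirm every entry resolves as expected (getent consults /etc/hosts as well as DNS, so this works for either setup):

for host in k8master1 k8master2 k8master3 k8node1 k8mastervip; do
  getent hosts "${host}.peter.loc" || echo "${host}.peter.loc does NOT resolve" >&2
done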
        

        Step 8 - Prepare peer control planes for bootstrapping

Create the certs and keys directories on the two other control plane nodes; this is where the primary node's API and etcd certs/keys will be transferred to. On node2 and node3 (the peer, i.e. secondary, control planes):
        
# mkdir -p is idempotent: it creates missing parents and is a no-op if the
# directories already exist, so no existence checks are needed
mkdir -p /etc/kubernetes/pki/etcd
mkdir -p /etc/kubernetes/audit-policy
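
For reference, once the first control plane is bootstrapped (in the next part), the transfer into these directories typically looks something like the sketch below; the file list is the standard set kubeadm generates under /etc/kubernetes/pki, and root SSH between the masters is assumed:

# Hypothetical sketch, run from k8master1 after kubeadm init
for peer in k8master2.peter.loc k8master3.peter.loc; do
  scp /etc/kubernetes/pki/ca.{crt,key} root@${peer}:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/sa.{pub,key} root@${peer}:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/front-proxy-ca.{crt,key} root@${peer}:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/etcd/ca.{crt,key} root@${peer}:/etc/kubernetes/pki/etcd/
done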
        

        Step 9 - Audit Policy

Optionally, if you want to go out of your way and create an audit policy, now is potentially a good time to tackle it. Audit policies themselves are certainly out of scope for this document.
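
If you do want a starting point, a minimal catch-all policy that logs request metadata only looks like this (the policy.yaml filename is just my choice; it goes in the audit-policy directory created in Step 8):

cat > /etc/kubernetes/audit-policy/policy.yaml << EOF
# Log metadata (user, verb, resource, timestamp) for every request
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata
EOF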

I prefer to drop ideas like this in various sections, as it helps you explore possibilities you may be interested in.
