Setup K8s HA – Part 3

Full guide to setting up a Kubernetes stacked etcd high-availability control plane cluster, with diagrams and references.

Step 11 - Temporarily Add the VIP to the NIC

Before firing up our first control plane bootstrap, we MUST ensure that the VIP, which is the IP of the API server, is present and correctly set up on the intended NIC. That is typically eth0, but depending on your kernel's interface device names and your network setup it can be different; bear in mind the network architecture discussed at the beginning of this series.

Skipping this step will cause the bootstrap to fail, because kubeadm would be referencing an IP address that does not exist.

Assuming your LAN interface is eth0:

ip addr add 192.168.1.161/32 dev eth0

Step 12 - Bootstrap the Primary Control Plane Node

Time to bootstrap our primary master node (control plane).

On node 1:

kubeadm init --config /root/kube-config.yaml
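For reference, /root/kube-config.yaml was assembled in the earlier parts of this series. As a rough, hypothetical sketch (the version and pod subnet below are assumptions; keep your own values), it is a kubeadm ClusterConfiguration whose controlPlaneEndpoint points at the VIP from Step 11:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0                 # assumption: use your target version
controlPlaneEndpoint: "192.168.1.161:6443" # the VIP added in Step 11
networking:
  podSubnet: "10.244.0.0/16"               # assumption: must match your CNI settings
```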

Step 13 - Copy kubeconfig to the Home Directory

To access the Kubernetes cluster, we need to copy admin.conf, which contains the certs and keys of the API server, into the home directory, like this:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Step 14 - Check That All Containers Are Running

Now that the API server credentials are in place and nothing has errored out so far, it is time to test:

kubectl get pods -n kube-system

If this command returns anything other than output similar to the one below, you will likely need to start over from scratch; this is essentially a checkpoint.

That said, it is important to realize that the CoreDNS pods will be in a Pending state, since no network driver backbone is installed yet. This also keeps the master node out of the Ready state and, by the same token, it will not accept workloads.

Step 15 - Install the Calico Network Driver

In the meantime, we are going to install Calico, one of the most advanced network drivers for Kubernetes. Not only does it provide the level of network sophistication needed in terms of network policies and ACLs, it also performs considerably better than the ordinary traditional Flannel network. It is a production-grade network driver, which is why I included it in this guide.

For the official Calico site:

Calico Official documentation

Install Calico

Alternatively, you can download it straight to your server:

wget https://docs.projectcalico.org/manifests/calico.yaml

Given the previous points about leveraging a local container registry server, you may need to replace docker.io/calico with your own registry address, depending on your setup; in most cases you probably won't need any modifications.

If, however, you decide to do so, it is advisable to open the file, look for every occurrence of "image: ", check the Calico image used, push it onto your local container registry, and update the file before applying it. Otherwise, go ahead and apply it anyway.
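The inspect-and-rewrite step above can be sketched with a couple of small shell helpers; this is a minimal sketch, not from the original guide, and the registry.local:5000 address is a hypothetical example:

```shell
# Sketch: list and rewrite the "image:" lines in calico.yaml so they point
# at a private registry. REGISTRY is a hypothetical example value.
REGISTRY="${REGISTRY:-registry.local:5000}"

# Show every image referenced by the manifest, so you know what to mirror.
list_images() { grep -E '^[[:space:]]*image:' "$1"; }

# Emit a copy of the manifest with docker.io replaced by the local registry.
rewrite_images() {
  sed -E "s|image:[[:space:]]*docker\.io/|image: ${REGISTRY}/|" "$1"
}

# Example: rewrite_images calico.yaml > calico-local.yaml
```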


kubectl apply -f calico.yaml

Now, check the pods to see whether everything is running correctly.

Check whether the primary node status has changed to "Ready".

Awesome! At this stage, you should have a functional control plane with the master in Ready status.

If you wind up with broken or crashing containers, go back and check what you missed; better yet, describe the failing pods (kubectl describe pod -n kube-system <pod>) before simply repeating the previous steps.

You won't be able to continue past this checkpoint until you get similar output (from a status standpoint) to the one shown above.

Step 16 - Generate Join Token

Generate a join token:

On node 1:

kubeadm token create --print-join-command --ttl=0
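Note that --ttl=0 makes the token non-expiring, so you may want to delete it (kubeadm token delete) once all nodes have joined. Also, the printed command joins a node as a worker; the peer control planes prepared in Step 17 additionally need the --control-plane flag. A minimal sketch of that transformation (the helper name is hypothetical):

```shell
# Sketch: turn the worker join command printed by kubeadm into a
# control-plane join. The certs copied in Step 17 must already be in place
# on the joining node. The helper name is hypothetical.
to_control_plane_join() {
  printf '%s --control-plane\n' "$1"
}

# Example (on the new control plane node, using the command from node 1):
#   to_control_plane_join "kubeadm join 192.168.1.161:6443 --token <token> ..."
```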

Step 17 - Prepare the Peer Control Plane Bootstrap

In like manner, we will create a placeholder directory for the control plane files on the local workstation, to fetch both the certs and the keys from the primary node.
Ergo, on the primary node, get the following files into your local environment and classify them uniquely; otherwise you will wind up overwriting files that share a name, such as ca.crt, which exists in both pki and pki/etcd:

/etc/kubernetes/pki/ca.crt
/etc/kubernetes/pki/ca.key
/etc/kubernetes/pki/sa.key
/etc/kubernetes/pki/sa.pub
/etc/kubernetes/pki/front-proxy-ca.crt
/etc/kubernetes/pki/front-proxy-ca.key
/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/etcd/ca.key
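The copy-without-collisions advice above can be sketched as a small helper that preserves the pki/ vs pki/etcd/ layout instead of renaming files. A rough sketch, assuming hypothetical paths and function names; substitute scp or rsync for cp when pulling from the remote primary node:

```shell
# Sketch: copy the eight control plane files while preserving the directory
# layout, so the two ca.crt files (pki/ and pki/etcd/) cannot overwrite
# each other. Replace cp with scp/rsync when the source is a remote node.
FILES="pki/ca.crt pki/ca.key pki/sa.key pki/sa.pub \
pki/front-proxy-ca.crt pki/front-proxy-ca.key \
pki/etcd/ca.crt pki/etcd/ca.key"

fetch_pki() {
  src="$1"; dst="$2"
  for f in $FILES; do
    mkdir -p "$dst/$(dirname "$f")"   # recreate pki/ and pki/etcd/ locally
    cp "$src/$f" "$dst/$f"
  done
}

# Example: fetch_pki /etc/kubernetes ./node1-k8s-pki
```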
