Setup K8s HA – Part 4

Step 18 - Checkpoint: Files from the Primary Control Plane Node

At this point you should end up with the required files in a directory structure that looks something like this:

Step 19 - Push Certs and Keys to Peer Control Nodes

Next, push those certs and keys over to the peer control plane nodes (Node 2 and Node 3), placing them in the exact same locations they occupy on Node 1.

On Node 1 or your workstation (depending on where you placed the files):


ControlPlaneNodes='k8master2.peter.loc k8master3.peter.loc'

# Copy the shared CA certs/keys and the service-account keypair to each peer
for i in ${ControlPlaneNodes}; do
  scp pki/ca.crt $i:/etc/kubernetes/pki/ca.crt
  scp pki/ca.key $i:/etc/kubernetes/pki/ca.key
  scp pki/etcd/ca.crt $i:/etc/kubernetes/pki/etcd/ca.crt
  scp pki/etcd/ca.key $i:/etc/kubernetes/pki/etcd/ca.key
  scp pki/front-proxy-ca.crt $i:/etc/kubernetes/pki/front-proxy-ca.crt
  scp pki/front-proxy-ca.key $i:/etc/kubernetes/pki/front-proxy-ca.key
  scp pki/sa.pub $i:/etc/kubernetes/pki/sa.pub
  scp pki/sa.key $i:/etc/kubernetes/pki/sa.key
done
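Before running the loop, it can help to sanity-check that all eight files are present locally, so a missing key is caught before scp starts failing mid-copy. This is just a sketch; `check_pki` is a hypothetical helper name, not part of kubeadm.

```shell
# Hypothetical helper: verify the expected PKI files exist under a base
# directory before pushing them to the peer control plane nodes.
required_files="pki/ca.crt pki/ca.key pki/etcd/ca.crt pki/etcd/ca.key
pki/front-proxy-ca.crt pki/front-proxy-ca.key pki/sa.pub pki/sa.key"

check_pki() {
  local base=$1 f rc=0
  for f in $required_files; do
    if [ ! -f "$base/$f" ]; then
      echo "missing: $f"
      rc=1
    fi
  done
  return $rc
}

# Example, run from the directory that holds pki/:
#   check_pki . && echo "all files present"
```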

Step 20 - Bootstrap Control Plane Nodes

Having pushed those files to their respective locations on the peer control planes, also make sure they carry the same Unix ownership and permissions as on Node 1.
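As a sketch of what "the same permissions" means in practice: private keys should be readable by root only, while public certs can be world-readable. The block below illustrates this in a scratch directory; on the real nodes the base would be /etc/kubernetes/pki and the chmod commands would need sudo.

```shell
# Illustration only: a scratch copy of the kubeadm pki layout.
# On a real node the base is /etc/kubernetes/pki.
base=$(mktemp -d)/pki
mkdir -p "$base/etcd"
touch "$base/ca.crt" "$base/ca.key" "$base/etcd/ca.crt" "$base/etcd/ca.key" \
      "$base/front-proxy-ca.crt" "$base/front-proxy-ca.key" \
      "$base/sa.pub" "$base/sa.key"

# Private keys: owner read/write only
find "$base" -name '*.key' -exec chmod 600 {} +
# Public certs and the service-account public key: world-readable
find "$base" \( -name '*.crt' -o -name '*.pub' \) -exec chmod 644 {} +

stat -c '%a %n' "$base/ca.key" "$base/ca.crt"
```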

Now we are ready to join them to the primary master node.

Therefore, run the join on each of the control planes, one at a time (preferably, although batching them is also fine).

For details, see the official kubeadm high-availability reference.

On Node 2 (Peer Control Plane Node 2):

kubeadm join 192.168.1.161:6443 --token dlfkiq.aoj13usq20hkg6c2 \
    --discovery-token-ca-cert-hash sha256:e309fb3b4546edbdc0bdb5e81825ae91d6c274ea04a1c525544acb78a10fc03c \
    --control-plane

On Node 3 (Peer Control Plane Node 3):

kubeadm join 192.168.1.161:6443 --token dlfkiq.aoj13usq20hkg6c2 \
    --discovery-token-ca-cert-hash sha256:e309fb3b4546edbdc0bdb5e81825ae91d6c274ea04a1c525544acb78a10fc03c \
    --control-plane
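If you ever lose the hash used above, it can be recomputed from the cluster CA certificate with the standard openssl pipeline that the kubeadm documentation describes (and a fresh token, if needed, comes from `kubeadm token create --print-join-command` on Node 1). `ca_hash` below is just a wrapper name used for illustration.

```shell
# Recompute the sha256 hash expected by --discovery-token-ca-cert-hash
# from a CA certificate (on a control plane node: /etc/kubernetes/pki/ca.crt).
ca_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 \
    | awk '{print $NF}'
}

# On Node 1:
#   ca_hash /etc/kubernetes/pki/ca.crt
```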

Step 21 - Check Peer Control Nodes

Finally, we're one step closer to our end goal, so it's time to check our status (yes, this is another checkpoint).

All three master nodes should be in the "Ready" state with their pods running, and you should see output similar to this for your control plane nodes:

and likewise for the pods:

Step 22 - Create Kubectl Access on Peer Control Nodes

In the same fashion as we enabled kubectl on the primary control plane, we do the same on the peers. On both Node 2 and Node 3:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Step 23 - Check Peer Control Nodes Access to API Server

In like manner, the pods shown on Nodes 2 and 3 must be identical to those on Node 1, since all nodes connect to the same VIP, which currently points at Node 1 (at this stage, still).

Node 2:

Node 3:

Step 24 - Join Worker Node to Cluster

Now it's time to add a worker node. As previously mentioned, joining worker nodes is as simple as a single command; cluster expansion is indeed easy! We'll bootstrap our first worker node the same way. On Node 4 (the first worker node):

kubeadm join 192.168.1.161:6443 --token dlfkiq.aoj13usq20hkg6c2 \
    --discovery-token-ca-cert-hash sha256:e309fb3b4546edbdc0bdb5e81825ae91d6c274ea04a1c525544acb78a10fc03c
Notice that the only difference between joining a control plane node and a worker node is essentially the --control-plane argument.
