Deploy private two-node Kubernetes cluster on GCP VMs

RakeshZingade
8 min read · May 9, 2021


I have been using Kubernetes for a long time, but I had never deployed it with kubeadm on private VMs. So I decided to deploy a private Kubernetes cluster on GCP VMs. In this article I deploy and access a private cluster and a sample guestbook application, using GCP IAP tunneling to reach the private VMs and applications.

One can consider this as a runbook to deploy a private Kubernetes cluster.

I have split the article into two parts. This first part covers infrastructure creation and cluster deployment. In the second part, I discuss hosting a sample application on this cluster and accessing it.

Following is the reference architecture.

Let's spin up the infrastructure. I assume you already have a GCP account and a project in which to launch the infrastructure. Here I have a project named devops-practice and launch everything in that project. Replace the project name with your own before executing these gcloud commands.

# CREATE NETWORK AND SUBNET 📌️

$ gcloud config set project 'devops-practice'
$ gcloud compute networks create custom-k8s --subnet-mode custom
$ gcloud compute networks subnets create custom-k8s-sub --network custom-k8s --range 10.1.0.0/16 --enable-private-ip-google-access --region us-east1

# CONFIGURE NAT 📌

We need internet access on the private VMs. Cloud NAT provides a secure way for them to create outbound connections to the internet without having external IP addresses.

$ gcloud compute routers create nat-router-custom-k8s --network custom-k8s --region us-east1
$ gcloud compute routers nats create nat-config --router-region us-east1 --router nat-router-custom-k8s --nat-all-subnet-ip-ranges --auto-allocate-nat-external-ips
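
Optionally (this check is my addition, not part of the original runbook), you can confirm that the Cloud NAT configuration was picked up by querying the router status:

$ gcloud compute routers get-status nat-router-custom-k8s --region us-east1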

# CREATE SSH BASTION HOST 📌️

First, configure SSH access to instances and then create a bastion VM. Then configure a firewall rule to allow access to the bastion host over the internet. Limit this access to your IP (or your network gateway IP) and to the 35.235.240.0/20 range, which contains all IP addresses that IAP uses for TCP forwarding.

You can either execute the curl ifconfig.co command to get your public IP address or simply search Google for ‘What’s my IP’. Also, create a forwarding firewall rule to allow traffic from the bastion to all other VMs in the network.
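
If you prefer scripting this, a small sketch is to capture your public IP in a shell variable and substitute it into the bastion-ssh rule below; the MY_IP variable is purely illustrative and not part of the original commands:

$ MY_IP=$(curl -s ifconfig.co)
$ echo "${MY_IP}/32"   # use this value in --source-ranges together with 35.235.240.0/20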

$ gcloud compute os-login ssh-keys add --key-file ~/.ssh/gcp.pub
$ gcloud compute instances create bastion --zone=us-east1-b --machine-type=f1-micro --subnet=custom-k8s-sub --no-address --maintenance-policy=MIGRATE --no-service-account --no-scopes --tags=bastion --image-family=debian-9 --image-project=debian-cloud --boot-disk-size=10GB --boot-disk-type=pd-standard --boot-disk-device-name=bastion --metadata enable-oslogin=true,block-project-ssh-keys=TRUE --metadata-from-file ssh-keys=~/.ssh/gcp.pub
$ gcloud compute firewall-rules create bastion-ssh --direction=INGRESS --priority=1000 --network=custom-k8s --action=ALLOW --rules=tcp:22 --source-ranges=106.210.141.149/32,35.235.240.0/20 --target-tags=bastion
$ gcloud compute firewall-rules create bastion-fwd --direction=INGRESS --priority=1000 --network=custom-k8s --action=ALLOW --rules=all --source-tags=bastion
$ gcloud compute firewall-rules create k8s --direction=INGRESS --priority=1000 --network=custom-k8s --action=ALLOW --rules=all --source-ranges=10.1.0.0/16

To connect to the bastion using IAP, first add IAM policy bindings to allow your user to establish the IAP tunnel:

$ gcloud projects add-iam-policy-binding devops-practice --member=user:<user-id/email> --role=roles/iap.tunnelResourceAccessor
$ gcloud projects add-iam-policy-binding devops-practice --member=user:<user-id/email> --role=roles/compute.viewer
$ gcloud compute ssh bastion --ssh-key-file=~/.ssh/gcp

# LAUNCH TWO VMs - K8S MASTER and K8S WORKER 📌️

Here I have selected Ubuntu 18.04 as the OS and a VM spec of 4 vCPUs, 16 GB RAM, and a 100 GB disk. You then have to SSH to the bastion and configure SSH keys to access these two VMs for the Kubernetes master and worker configuration; one way to do this is sketched after the VM listing below.

$ gcloud compute instances create k8s-master k8s-worker --zone=us-east1-b --machine-type=e2-standard-4 --subnet=custom-k8s-sub --no-address --maintenance-policy=MIGRATE --no-service-account --no-scopes --tags=k8s,bastion-fwd --image-family=ubuntu-1804-lts --image-project=ubuntu-os-cloud --boot-disk-size=100GB --boot-disk-type=pd-standard --boot-disk-device-name=k8s-disk --metadata enable-oslogin=true,block-project-ssh-keys=TRUE,ssh-keys=~/.ssh/gcp.pub
NAME        ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP  STATUS
k8s-master  us-east1-b  e2-standard-4               10.1.0.4                  RUNNING
k8s-worker  us-east1-b  e2-standard-4               10.1.0.3                  RUNNING
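
The article does not spell out how the keys are wired up, so here is a minimal sketch, assuming you keep using the ~/.ssh/gcp key already added to OS Login: forward your SSH agent through the bastion and hop to the nodes' internal IPs (the <os-login-user> placeholder is hypothetical; use your own OS Login username):

# on your workstation: load the key and forward the agent while connecting to the bastion
$ ssh-add ~/.ssh/gcp
$ gcloud compute ssh bastion --ssh-key-file=~/.ssh/gcp -- -A
# on the bastion: hop to the nodes via their internal IPs
$ ssh <os-login-user>@10.1.0.4   # k8s-master
$ ssh <os-login-user>@10.1.0.3   # k8s-worker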

# KUBERNETES CLUSTER DEPLOYMENT 📌️

We are deploying a single control-plane Kubernetes cluster with two nodes (k8s-master and k8s-worker). Install the prerequisites and three packages, namely kubeadm, kubelet, and kubectl, on both the k8s-master and k8s-worker nodes.

To know more about Kubernetes components, refer to this link: https://kubernetes.io/docs/concepts/overview/components/

Connect to each node (k8s-master and k8s-worker) from the bastion and provision the following prerequisites, up to and including the steps for installing kubeadm, kubelet, and kubectl.

## Prerequisite setup

### Check network adapter

Make sure br_netfilter is loaded using the following command

$ sudo lsmod | grep br_netfilter

If not, use the following command to load it:

$ sudo modprobe br_netfilter

It is recommended you add IP route(s) so Kubernetes cluster addresses go via the appropriate adapter. As a requirement for your Linux Node’s iptables to correctly see bridged traffic, you should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config.

$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sudo sysctl --system
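
As an optional sanity check (my addition), confirm that the module is loaded and the sysctl value took effect:

$ lsmod | grep br_netfilter
$ sudo sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1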

### Install container runtime

A container runtime is required on each node so that Pods can run there; here I have selected the Docker container runtime.

Installing docker:

Uninstall any old runtime, and then set up the repository for the new installation:

$ sudo apt-get remove docker docker-engine docker.io containerd runc
$ sudo apt-get update
$ sudo apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
$ echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Install the Docker engine and enable the Docker service to start on boot:

$ sudo apt-get update
$ sudo apt-get install -y docker-ce docker-ce-cli containerd.io
$ sudo systemctl enable docker.service
$ sudo systemctl enable containerd.service
$ sudo systemctl daemon-reload
$ sudo systemctl status docker

Configure the Docker daemon, in particular to use systemd for the management of the containers' cgroups, and restart the Docker service:

$ cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
$ sudo systemctl restart docker
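
To verify the cgroup driver switch (an optional check I'm adding), docker info should now report systemd:

$ sudo docker info | grep -i "cgroup driver"
 Cgroup Driver: systemd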

### Installing kubeadm, kubelet, AND kubectl utilities 📌️

  • kubeadm: The command to bootstrap the cluster
  • kubelet: Daemon service to start pods and containers
  • kubectl: Command-line utility to talk to the cluster

Kubeadm will not install or manage kubelet and kubectl for us, so we need to ensure they match the version of the Kubernetes control plane.

Installation steps:

  • Needed packages
$ sudo apt-get update
$ sudo apt-get install -y apt-transport-https ca-certificates curl
  • Download the Google Cloud public signing key
$ sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
  • Add Kubernetes repository
$ echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
  • Install kubeadm, kubelet, and kubectl, and pin (hold) their versions
$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
$ kubelet --version
Kubernetes v1.21.0
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:30:03Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
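
If you need the node components to match a specific control-plane version rather than the latest, you can install pinned package versions instead; the 1.21.0-00 version string below is an assumption based on this repository's package naming and should be adjusted to the release you actually want:

$ sudo apt-get install -y kubelet=1.21.0-00 kubeadm=1.21.0-00 kubectl=1.21.0-00
$ sudo apt-mark hold kubelet kubeadm kubectl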

Note that the kubelet keeps restarting until kubeadm tells it what to do; you can observe this with $ sudo systemctl status kubelet

# INITIALISE CONTROL-PLANE NODE (K8S-master)📌️

On the k8s-master node, run the following command to initialize the cluster with kubeadm, with the pod network set to 192.168.0.0/16. The output contains the information needed later for worker nodes to join the cluster. The command takes one to four minutes to execute depending on the network and the VMs' configuration.

$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.1.0.4:6443 --token ymiblf.cyw118noca0hg1ez \
--discovery-token-ca-cert-hash sha256:5310dd72b24568329aace94d32cc83f8f2199d57b0a50146c0b777bacf720ad8

Keep the above output handy; it is required later to form the cluster. Now configure kubectl with the newly created control plane:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   5m29s   v1.21.0

Now we can check the kubelet status and verify it is active:

$ sudo systemctl status kubelet

# INSTALL ADDONs on CONTROL-PLANE NODE (k8s-master)📌️

As stated in the ‘kubeadm init’ output, we should deploy a pod network. For this, I find Calico to be a good option; Calico is a networking and network policy provider. Find more details here: https://docs.projectcalico.org/about/about-calico

We install Calico with the Kubernetes API datastore, which supports 50 nodes or less. Download the Calico manifest and apply it directly. As we used the 192.168.0.0/16 CIDR for pod networking, there is no need to customize it.

$ curl https://docs.projectcalico.org/manifests/calico.yaml -O
$ kubectl apply -f calico.yaml

For more details refer to this link: https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico-with-etcd-datastore
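
Before checking the node again, you can watch the Calico pods come up in the kube-system namespace; this verification step is my addition:

$ kubectl get pods -n kube-system -w
# wait until the calico-node and calico-kube-controllers pods report Running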

kubectl now also shows the node in the Ready state:

$ kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   24m   v1.21.0

# CONTROL PLANE NODE ISOLATION 📌️

As we are deploying a two-node cluster, with one master and one worker node, there is no need to perform this step. Perform the following only if you also want to schedule Pods on the master node:

$ kubectl taint nodes --all node-role.kubernetes.io/master-
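
To see whether the control-plane taint is present before or after removing it (an optional check, not in the original article):

$ kubectl describe node k8s-master | grep -i taints
# expect node-role.kubernetes.io/master:NoSchedule while the taint is still in place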

# JOIN THE WORKER NODE TO CLUSTER 📌️

Before joining, perform the prerequisite setup on the k8s-worker node and install kubeadm, kubelet, and kubectl on it.

Use the following kubeadm command to join the cluster. It is the same command copied from the kubeadm init output:

$ sudo kubeadm join 10.1.0.4:6443 --token ymiblf.cyw118noca0hg1ez --discovery-token-ca-cert-hash sha256:5310dd72b24568329aace94d32cc83f8f2199d57b0a50146c0b777bacf720ad8
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
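
If the join command was lost or the bootstrap token has expired (tokens are valid for 24 hours by default), you can regenerate it on the master node; this tip is an addition to the original runbook:

$ sudo kubeadm token create --print-join-command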

Verify the kubelet service is up and running with the command $ sudo systemctl status kubelet

Log in to the k8s-master node and check the cluster status. The worker's role is just a label, and you can set it with the kubectl label nodes command:

$ kubectl get nodes
NAME         STATUS   ROLES                  AGE     VERSION
k8s-master   Ready    control-plane,master   60m     v1.21.0
k8s-worker   Ready    <none>                 5m21s   v1.21.0
$ kubectl label node k8s-worker node-role.kubernetes.io/worker=worker
$ kubectl get nodes
NAME         STATUS   ROLES                  AGE     VERSION
k8s-master   Ready    control-plane,master   64m     v1.21.0
k8s-worker   Ready    worker                 9m50s   v1.21.0

I have posted the second article in this series, which discusses application deployment and access on this privately hosted Kubernetes cluster. You can read it here: https://rakeshzingade.medium.com/sample-application-deployment-on-a-privately-deployed-kubernetes-cluster-c68c7e085759

