Sample application deployment on a privately deployed Kubernetes cluster

RakeshZingade
3 min read · Jun 4, 2021

This is a continuation of the story posted with the title “Deploying private two-node Kubernetes cluster on GCP VMs”.

We will deploy the simple, standard 2-tier guestbook application from the Kubernetes documentation: https://kubernetes.io/docs/tutorials/stateless-application/guestbook/. We will also deploy the Kubernetes Dashboard service and access its UI from our workstation.

Use the following commands to deploy this:

A] Launch the MongoDB backend and the guestbook frontend deployment


$ kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-deployment.yaml
$ kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-service.yaml
$ kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml

# Validate that the guestbook frontend replicas are running
$ kubectl get pods -l app.kubernetes.io/name=guestbook -l app.kubernetes.io/component=frontend
NAME                       READY   STATUS    RESTARTS   AGE
frontend-848d88c7c-4tfwx   1/1     Running   0          25s
frontend-848d88c7c-hbpr7   1/1     Running   0          25s
frontend-848d88c7c-nmtp6   1/1     Running   0          25s

B] Launch the frontend service

$ kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml

# Validate the deployment and service status
$ kubectl get svc,pod -o wide
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE     SELECTOR
service/frontend     ClusterIP   10.111.223.126   <none>        80/TCP      41s     app.kubernetes.io/component=frontend,app.kubernetes.io/name=guestbook
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP     81m     <none>
service/mongo        ClusterIP   10.109.55.70     <none>        27017/TCP   6m47s   app.kubernetes.io/component=backend,app.kubernetes.io/name=mongo

NAME                           READY   STATUS    RESTARTS   AGE     IP                NODE         NOMINATED NODE   READINESS GATES
pod/frontend-848d88c7c-4tfwx   1/1     Running   0          2m46s   192.168.254.131   k8s-worker   <none>           <none>
pod/frontend-848d88c7c-hbpr7   1/1     Running   0          2m46s   192.168.254.130   k8s-worker   <none>           <none>
pod/frontend-848d88c7c-nmtp6   1/1     Running   0          2m46s   192.168.254.132   k8s-worker   <none>           <none>
pod/mongo-75f59d57f4-dd7w4     1/1     Running   0          11m     192.168.254.129   k8s-worker   <none>           <none>

By default the guestbook frontend service is of type ClusterIP, which is reachable only from within the cluster. To reach it from outside the cluster while keeping it private, change the service type to NodePort. Run the following command:

$ kubectl patch svc frontend --type='json' -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}]'
$ kubectl get svc frontend
NAME       TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
frontend   NodePort   10.111.223.126   <none>        80:31488/TCP   3d7h
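Kubernetes auto-assigns the node port from its NodePort range (30000–32767 by default); here it chose 31488. A small sketch of reading the assigned port programmatically, parsing the sample PORT(S) value shown above:

```shell
# The assigned NodePort can be read directly from the service spec:
#   kubectl get svc frontend -o jsonpath='{.spec.ports[0].nodePort}'
# Equivalently, parse it out of the PORT(S) column (sample value above):
ports="80:31488/TCP"
node_port=${ports#*:}       # drop the service port ("80:")
node_port=${node_port%%/*}  # drop the protocol ("/TCP")
echo "$node_port"           # -> 31488
```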

This service is now reachable from the bastion at the worker node's IP on port 31488. To access it from your workstation or laptop, start an SSH tunnel between the bastion and your host system (the messages printed after the tunnel starts can be ignored):

$ gcloud compute ssh bastion --ssh-key-file=~/.ssh/gcp -- -N -p 22 -D localhost:3333
External IP address was not found; defaulting to using IAP tunneling.
bind: Cannot assign requested address
channel 8: open failed: administratively prohibited: open failed

Launch Chrome from the command line with a SOCKS5 proxy to access the application URL:

$ google-chrome --user-data-dir=/home/rzingade --proxy-server="socks5://localhost:3333"
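Before launching the browser, it can help to confirm that the tunnel forwards traffic. A minimal sketch, assuming a placeholder internal IP for the worker VM (10.128.0.3 below; find your real one with `gcloud compute instances list`):

```shell
WORKER_IP=10.128.0.3   # placeholder -- substitute your worker node's internal IP
URL="http://${WORKER_IP}:31488/"
# From the workstation, a request routed through the SOCKS tunnel should
# return the guestbook page; --socks5-hostname makes curl resolve and
# connect on the bastion side of the tunnel:
#   curl -sI --socks5-hostname localhost:3333 "$URL"
echo "$URL"   # -> http://10.128.0.3:31488/
```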

# DEPLOY A KUBERNETES DASHBOARD 📌

Log in to the master node and execute the commands below to deploy the dashboard, then patch its service to run on NodePort instead of the default ClusterIP:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
$ kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard --type='json' -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}]'
$ kubectl get pods,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-5594697f48-bgl52   1/1     Running   0          18m
pod/kubernetes-dashboard-57c9bfc8c8-x2ncr        1/1     Running   0          18m

NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.98.66.141   <none>        8000/TCP        18m
service/kubernetes-dashboard        NodePort    10.96.15.192   <none>        443:30429/TCP   18m

The dashboard service runs with SSL enabled, so I had difficulty opening the page in Chrome; Firefox with the same proxy configuration worked. To log in to the dashboard we need a bearer token, which can be found in the secret named “deployment-controller-token-xxx”. Copy the token and use it to log in to the dashboard:

$ kubectl -n kube-system describe secret deployment-controller-token-v4gdc
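The token can also be extracted non-interactively: secret data is stored base64-encoded in the API object, so a jsonpath query combined with `base64 -d` yields the raw token (the secret name suffix differs per cluster). A sketch of the decode step, using a stand-in value in place of a real token:

```shell
# Full command against the cluster (the -v4gdc suffix will differ):
#   kubectl -n kube-system get secret deployment-controller-token-v4gdc \
#     -o jsonpath='{.data.token}' | base64 -d
# Secret data is base64-encoded; decoding a stand-in value:
token=$(echo "c2FtcGxlLWJlYXJlci10b2tlbg==" | base64 -d)
echo "$token"   # -> sample-bearer-token
```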


RakeshZingade

Architect & Evangelist — DevOps at Cognologix Technologies