Kubernetes can't access pod in multi worker nodes - kubernetes

I was following a tutorial on YouTube, and the presenter said that if you deploy your application in a multi-node cluster and your Service is of type NodePort, you don't have to worry about which node your pod gets scheduled on. You can access it through any node's IP address, like
worker1IP:servicePort or worker2IP:servicePort or workerNIP:servicePort
But I just tried it and this is not the case: I can only access the pod on the node where it is scheduled. Is this the correct behavior?
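For reference, a minimal NodePort Service sketch (the name, label, and ports here are hypothetical, not from the question). With the default externalTrafficPolicy: Cluster, kube-proxy on every node forwards nodeIP:nodePort to a matching pod wherever it runs; setting it to Local would produce exactly the behavior described, answering only on nodes that host a pod:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc                   # hypothetical name
spec:
  type: NodePort
  externalTrafficPolicy: Cluster   # default; "Local" answers only on nodes running a pod
  selector:
    app: demo                      # hypothetical label
  ports:
    - port: 80                     # service port inside the cluster
      targetPort: 8080             # container port
      nodePort: 30080              # exposed on every node's IP (default range 30000-32767)
```

If a Service like this only answers on the pod's node, the usual suspects are the CNI overlay or host firewall rules rather than the Service itself.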
kubectl version --short
> Client Version: v1.18.5
> Server Version: v1.18.5
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-66bff467f8-6pt8s 0/1 Running 288 7d22h
coredns-66bff467f8-t26x4 0/1 Running 288 7d22h
etcd-redhat-master 1/1 Running 16 7d22h
kube-apiserver-redhat-master 1/1 Running 17 7d22h
kube-controller-manager-redhat-master 1/1 Running 19 7d22h
kube-flannel-ds-amd64-9mh6k 1/1 Running 16 5d22h
kube-flannel-ds-amd64-g2k5c 1/1 Running 16 5d22h
kube-flannel-ds-amd64-rnvgb 1/1 Running 14 5d22h
kube-proxy-gf8zk 1/1 Running 16 7d22h
kube-proxy-wt7cp 1/1 Running 9 7d22h
kube-proxy-zbw4b 1/1 Running 9 7d22h
kube-scheduler-redhat-master 1/1 Running 18 7d22h
weave-net-6jjd8 2/2 Running 34 7d22h
weave-net-ssqbz 1/2 CrashLoopBackOff 296 7d22h
weave-net-ts2tj 2/2 Running 34 7d22h
[root@redhat-master deployments]# kubectl logs weave-net-ssqbz -c weave -n kube-system
DEBU: 2020/07/05 07:28:04.661866 [kube-peers] Checking peer "b6:01:79:66:7d:d3" against list &{[{e6:c9:b2:5f:82:d1 redhat-master} {b2:29:9a:5b:89:e9 redhat-console-1} {e2:95:07:c8:a0:90 redhat-console-2}]}
Peer not in list; removing persisted data
INFO: 2020/07/05 07:28:04.924399 Command line options: map[conn-limit:200 datapath:datapath db-prefix:/weavedb/weave-net docker-api: expect-npc:true host-root:/host http-addr:127.0.0.1:6784 ipalloc-init:consensus=2 ipalloc-range:10.32.0.0/12 metrics-addr:0.0.0.0:6782 name:b6:01:79:66:7d:d3 nickname:redhat-master no-dns:true port:6783]
INFO: 2020/07/05 07:28:04.924448 weave 2.6.5
FATA: 2020/07/05 07:28:04.938587 Existing bridge type "bridge" is different than requested "bridged_fastdp". Please do 'weave reset' and try again
Update:
So basically the issue is that iptables is deprecated in RHEL 8 (replaced by nftables). But after downgrading my OS to RHEL 7, I can still access the NodePort only on the node where the pod is deployed.
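As a quick check on the iptables point above (a diagnostic suggestion, not something verified on this cluster), you can see which backend the host is using; kube-proxy and some CNI plugins have historically misbehaved with the nftables backend on RHEL 8:

```shell
# Shows the backend in parentheses, e.g. "iptables v1.8.4 (nf_tables)" or "(legacy)"
iptables --version

# Rules programmed by kube-proxy for NodePort services, if any
iptables -t nat -L KUBE-NODEPORTS -n
```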

Related

There are two networking components installed on the master node, Weave and Calico. How can I completely remove Calico from my Kubernetes cluster?

Weave's range overlaps with the host's IP address, and its pod is stuck in the CrashLoopBackOff state. Calico needs to be removed first, as I have no idea how two networking modules can work on the master!
emo@master:~$ sudo kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-64897985d-dw6ch 0/1 ContainerCreating 0
kube-system coredns-64897985d-xr6br 0/1 ContainerCreating 0
kube-system etcd-master 1/1 Running 26 (14m ago)
kube-system kube-apiserver-master 1/1 Running 26 (12m ago)
kube-system kube-controller-manager-master 1/1 Running 4 (20m ago)
kube-system kube-proxy-g98ph 1/1 Running 3 (20m ago)
kube-system kube-scheduler-master 1/1 Running 4 (20m ago)
kube-system weave-net-56n8k 1/2 CrashLoopBackOff 76 (54s ago)
tigera-operator tigera-operator-b876f5799-sqzf9 1/1 Running 6 (5m57s ago)
master:
emo@master:~$ kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master Ready control-plane,master 6d19h v1.23.5 192.168.71.132 <none> Ubuntu 20.04.3 LTS 5.4.0-81-generic containerd://1.5.5
You may need to re-build your cluster after cleaning it up.
First, run kubectl delete for all the manifests you have applied to configure Calico and Weave (e.g. kubectl delete -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml).
Then run kubeadm reset and delete the contents of /etc/cni/net.d/ to remove all of your CNI configurations. After that, you also need to reboot the server to clear some stale ip link records, or remove them manually with ip link delete {name}.
After that, a fresh installation should go smoothly.
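The steps above can be sketched as a single sequence (the manifest URL is the one from this answer; substitute whatever manifests you actually applied, and note the interface name below is just an example):

```shell
# 1. Delete the CNI manifests that were applied
kubectl delete -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml

# 2. Tear down kubeadm state and remove stale CNI configs
sudo kubeadm reset
sudo rm -rf /etc/cni/net.d/*

# 3. Remove leftover network interfaces (or simply reboot)
ip link show                  # look for leftovers such as weave, cali*, vxlan.calico
sudo ip link delete weave     # example name; repeat for each stale interface
```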

Kubernetes API container dies constantly

I just installed from scratch a small Kubernetes test cluster on 4 Armbian/Odroid_MC1 (Debian 10) nodes. The install process is this [1]; nothing fancy or special: add the k8s apt repo and install with apt.
The problem is that the API server dies constantly, like every 5 to 10 minutes, right after the controller-manager and the scheduler die together (they seem to stop simultaneously just before). Evidently, the API becomes unusable for about a minute. All three services then restart, and things run fine for the next four to nine minutes, when the loop repeats. Logs are here [2]. This is an excerpt:
$ kubectl get pods -o wide --all-namespaces
The connection to the server 192.168.1.91:6443 was refused - did you specify the right host or port?
(a minute later)
$ kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-74ff55c5b-8pm9r 1/1 Running 2 88m 10.244.0.7 mc1 <none> <none>
kube-system coredns-74ff55c5b-pxdqz 1/1 Running 2 88m 10.244.0.6 mc1 <none> <none>
kube-system etcd-mc1 1/1 Running 2 88m 192.168.1.91 mc1 <none> <none>
kube-system kube-apiserver-mc1 0/1 Running 12 88m 192.168.1.91 mc1 <none> <none>
kube-system kube-controller-manager-mc1 1/1 Running 5 31m 192.168.1.91 mc1 <none> <none>
kube-system kube-flannel-ds-fxg2s 1/1 Running 5 45m 192.168.1.94 mc4 <none> <none>
kube-system kube-flannel-ds-jvvmp 1/1 Running 5 48m 192.168.1.92 mc2 <none> <none>
kube-system kube-flannel-ds-qlvbc 1/1 Running 6 45m 192.168.1.93 mc3 <none> <none>
kube-system kube-flannel-ds-ssb9t 1/1 Running 3 77m 192.168.1.91 mc1 <none> <none>
kube-system kube-proxy-7t9ff 1/1 Running 2 45m 192.168.1.93 mc3 <none> <none>
kube-system kube-proxy-8jhc7 1/1 Running 2 88m 192.168.1.91 mc1 <none> <none>
kube-system kube-proxy-cg75m 1/1 Running 2 45m 192.168.1.94 mc4 <none> <none>
kube-system kube-proxy-mq8j7 1/1 Running 2 48m 192.168.1.92 mc2 <none> <none>
kube-system kube-scheduler-mc1 1/1 Running 5 31m 192.168.1.91 mc1 <none> <none>
$ docker ps -a # (check the exited and restarted services)
CONTAINER ID NAMES STATUS IMAGE NETWORKS PORTS
0e179c6495db k8s_kube-apiserver_kube-apiserver-mc1_kube-system_c55114bd57b1bf357c8f4c0d749ae105_13 Up About a minute 66eaad223e2c
2ccb014beb73 k8s_kube-scheduler_kube-scheduler-mc1_kube-system_fe362b2b6b08ca576b7416df7f2e7845_6 Up 3 minutes 21e17680ca2d
3322f6ec1546 k8s_kube-controller-manager_kube-controller-manager-mc1_kube-system_17cf17caf36ba27e3d2ec4f113a0cf6f_6 Up 3 minutes a1ab72ce4ba2
583129da455f k8s_kube-apiserver_kube-apiserver-mc1_kube-system_c55114bd57b1bf357c8f4c0d749ae105_12 Exited (137) About a minute ago 66eaad223e2c
72268d8e1503 k8s_install-cni_kube-flannel-ds-ssb9t_kube-system_dbf3513d-dad2-462d-9107-4813acf9c23a_0 Exited (0) 5 minutes ago 263b01b3ca1f
fe013d07f186 k8s_kube-controller-manager_kube-controller-manager-mc1_kube-system_17cf17caf36ba27e3d2ec4f113a0cf6f_5 Exited (255) 3 minutes ago a1ab72ce4ba2
34ef8757b63d k8s_kube-scheduler_kube-scheduler-mc1_kube-system_fe362b2b6b08ca576b7416df7f2e7845_5 Exited (255) 3 minutes ago 21e17680ca2d
fd8e0c0ba27f k8s_coredns_coredns-74ff55c5b-8pm9r_kube-system_3b813dc9-827d-4cf6-88cc-027491b350f1_2 Up 32 minutes 15c1a66b013b
f44e2c45ed87 k8s_coredns_coredns-74ff55c5b-pxdqz_kube-system_c3b7fbf2-2064-4f3f-b1b2-dec5dad904b7_2 Up 32 minutes 15c1a66b013b
04fa4eca1240 k8s_POD_coredns-74ff55c5b-8pm9r_kube-system_3b813dc9-827d-4cf6-88cc-027491b350f1_42 Up 32 minutes k8s.gcr.io/pause:3.2 none
f00c36d6de75 k8s_POD_coredns-74ff55c5b-pxdqz_kube-system_c3b7fbf2-2064-4f3f-b1b2-dec5dad904b7_42 Up 32 minutes k8s.gcr.io/pause:3.2 none
a1d6814e1b04 k8s_kube-flannel_kube-flannel-ds-ssb9t_kube-system_dbf3513d-dad2-462d-9107-4813acf9c23a_3 Up 32 minutes 263b01b3ca1f
94b231456ed7 k8s_kube-proxy_kube-proxy-8jhc7_kube-system_cc637e27-3b14-41bd-9f04-c1779e500a3a_2 Up 33 minutes 377de0f45e5c
df91856450bd k8s_POD_kube-flannel-ds-ssb9t_kube-system_dbf3513d-dad2-462d-9107-4813acf9c23a_2 Up 34 minutes k8s.gcr.io/pause:3.2 host
b480b844671a k8s_POD_kube-proxy-8jhc7_kube-system_cc637e27-3b14-41bd-9f04-c1779e500a3a_2 Up 34 minutes k8s.gcr.io/pause:3.2 host
1d4a7bcaad38 k8s_etcd_etcd-mc1_kube-system_14b7b6d6446e21cc57f0b40571ae3958_2 Up 35 minutes 2e91dde7e952
e5d517a9c29d k8s_POD_kube-controller-manager-mc1_kube-system_17cf17caf36ba27e3d2ec4f113a0cf6f_1 Up 35 minutes k8s.gcr.io/pause:3.2 host
3a3da7dbf3ad k8s_POD_kube-apiserver-mc1_kube-system_c55114bd57b1bf357c8f4c0d749ae105_2 Up 35 minutes k8s.gcr.io/pause:3.2 host
eef29cdebf5f k8s_POD_etcd-mc1_kube-system_14b7b6d6446e21cc57f0b40571ae3958_2 Up 35 minutes k8s.gcr.io/pause:3.2 host
3631d43757bc k8s_POD_kube-scheduler-mc1_kube-system_fe362b2b6b08ca576b7416df7f2e7845_1 Up 35 minutes k8s.gcr.io/pause:3.2 host
I see no obvious issues in the logs (I'm a k8s beginner). This was working until a month ago, when I reinstalled for practice. This is probably my tenth install attempt; I've tried different options and versions and googled a lot, but can't find a solution.
What could be the reason? What else can I try? How can I get to the root of the problem?
UPDATE 2021/02/06
The problem is not occurring anymore. Apparently, the issue was the OS version in this specific case. I didn't file an issue because I couldn't find clues about what specific problem to report.
The installation procedure in all cases was this:
# swapoff -a
# curl -sL get.docker.com|sh
# usermod -aG docker rodolfoap
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
# echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
# apt-get update
# apt-get install -y kubeadm kubectl kubectx # Master
# kubeadm config images pull
# kubeadm init --apiserver-advertise-address=0.0.0.0 --pod-network-cidr=10.244.0.0/16
Armbian-20.08.1 worked fine. My installation procedure has not changed since.
Armbian-20.11.3 had the issue: the API, scheduler and coredns restarted every 5 minutes, blocking access to the API for roughly 5 out of every 8 minutes on average.
Armbian-21.02.1 works fine. Worked at the first install, same procedure.
All versions were updated to the latest kernel at the moment of the install; the current one is 5.10.12-odroidxu4.
As you can see, after around two hours, no API reboots:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE LABELS
kube-system coredns-74ff55c5b-gnvf2 1/1 Running 0 173m 10.244.0.2 mc1 k8s-app=kube-dns,pod-template-hash=74ff55c5b
kube-system coredns-74ff55c5b-wvnnz 1/1 Running 0 173m 10.244.0.3 mc1 k8s-app=kube-dns,pod-template-hash=74ff55c5b
kube-system etcd-mc1 1/1 Running 0 173m 192.168.1.91 mc1 component=etcd,tier=control-plane
kube-system kube-apiserver-mc1 1/1 Running 0 173m 192.168.1.91 mc1 component=kube-apiserver,tier=control-plane
kube-system kube-controller-manager-mc1 1/1 Running 0 173m 192.168.1.91 mc1 component=kube-controller-manager,tier=control-plane
kube-system kube-flannel-ds-c4jgv 1/1 Running 0 123m 192.168.1.93 mc3 app=flannel,controller-revision-hash=64465d999,pod-template-generation=1,tier=node
kube-system kube-flannel-ds-cl6n5 1/1 Running 0 75m 192.168.1.94 mc4 app=flannel,controller-revision-hash=64465d999,pod-template-generation=1,tier=node
kube-system kube-flannel-ds-z2nmw 1/1 Running 0 75m 192.168.1.92 mc2 app=flannel,controller-revision-hash=64465d999,pod-template-generation=1,tier=node
kube-system kube-flannel-ds-zqxh7 1/1 Running 0 150m 192.168.1.91 mc1 app=flannel,controller-revision-hash=64465d999,pod-template-generation=1,tier=node
kube-system kube-proxy-bd596 1/1 Running 0 75m 192.168.1.94 mc4 controller-revision-hash=b89db7f56,k8s-app=kube-proxy,pod-template-generation=1
kube-system kube-proxy-n6djp 1/1 Running 0 75m 192.168.1.92 mc2 controller-revision-hash=b89db7f56,k8s-app=kube-proxy,pod-template-generation=1
kube-system kube-proxy-rf4cr 1/1 Running 0 173m 192.168.1.91 mc1 controller-revision-hash=b89db7f56,k8s-app=kube-proxy,pod-template-generation=1
kube-system kube-proxy-xhl95 1/1 Running 0 123m 192.168.1.93 mc3 controller-revision-hash=b89db7f56,k8s-app=kube-proxy,pod-template-generation=1
kube-system kube-scheduler-mc1 1/1 Running 0 173m 192.168.1.91 mc1 component=kube-scheduler,tier=control-plane
Cluster is fully functional :)
I have the same problem, but with Ubuntu:
PRETTY_NAME="Ubuntu 22.04 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04 (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
The cluster works fine with:
Ubuntu 20.04 LTS
Ubuntu 18.04 LTS
I thought it would help someone else running Ubuntu instead of Armbian.
The solution for Ubuntu (possibly for Armbian too) is here: Issues with "stability" with Kubernetes cluster before adding networking
Apparently it is a problem with the containerd config on those versions.
UPDATE:
The problem is that sudo apt install containerd installs version v1.5.9, which ships with the option SystemdCgroup = false. That worked in my case on Ubuntu 20.04, but on Ubuntu 22.04 it doesn't. If you change it to SystemdCgroup = true, it works (containerd v1.6.2 changed this default to true). This will hopefully fix your problem too.
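A common recipe for this change (assuming containerd's default config path, /etc/containerd/config.toml; regenerating the file is optional if you already have a full one) is:

```shell
# Regenerate the default containerd config, then switch the cgroup driver to systemd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
```

The relevant key lives under [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options].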

Kubernetes: how do you list components running on master?

How do you list components running on the master Kubernetes node?
I assume there should be a kubeadm or kubectl command but can't find anything.
E.g. I'm looking to see if the Scheduler is running and I've used kubeadm config view which lists:
scheduler: {}
but not sure if that means the Scheduler is not running or there's simply no config for it.
Since you installed with kubeadm, the control-plane components run as pods in the kube-system namespace. So you can run the following command to see if the scheduler is running:
# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
calico-node-4x9fp 2/2 Running 0 4d6h
coredns-86c58d9df4-bw2q9 1/1 Running 0 4d6h
coredns-86c58d9df4-gvcl9 1/1 Running 0 4d6h
etcd-k1 1/1 Running 0 4d6h
kube-apiserver-k1 1/1 Running 0 4d6h
kube-controller-manager-k1 1/1 Running 83 4d6h
kube-dash-kubernetes-dashboard-5b7cf769bc-pd2n2 1/1 Running 0 4d6h
kube-proxy-jmrrz 1/1 Running 0 4d6h
kube-scheduler-k1 1/1 Running 82 4d6h
metrics-server-8544b5c78b-k2lwt 1/1 Running 16 4d6h
tiller-deploy-5f4fc5bcc6-gvhlz 1/1 Running 0 4d6h
If you want to know all the pods running on the master node (or any particular node), you can use a field selector to select the node:
kubectl get pod --all-namespaces --field-selector spec.nodeName=<nodeName>
To filter only the pods in the kube-system namespace running on a particular node:
kubectl get pod -n kube-system --field-selector spec.nodeName=<nodeName>
Assuming that you want to check what is running on the master node and you are unable to do that via the Kubernetes API server:
For the kubelet, since it runs as a systemd service, you can check systemctl status kubelet.service.
Other components, such as the scheduler, run as containers managed by the kubelet, so you can check them with standard docker commands such as docker ps.
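As a cross-check (note that kubectl get componentstatuses is deprecated since v1.19, but it still answers on many clusters):

```shell
# Health of the scheduler, controller-manager and etcd as reported by the API server
kubectl get componentstatuses

# For kubeadm clusters, the static pod manifests themselves live on the master:
ls /etc/kubernetes/manifests/
```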

Dashboard UI is not accessible

I'm trying to troubleshoot the Kubernetes Dashboard UI, which is not working so far. I have a cluster with three nodes: 1 master and 2 workers.
Error: 'dial tcp 172.16.1.4:8443: i/o timeout' Trying to reach: 'https://172.16.1.4:8443/'
The issue is that when the proxy is activated, the Dashboard does not display on the worker machine (node1), which is the one where the dashboard is running.
[admin@k8s-node1 ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 4d21h v1.15.2
k8s-node1 Ready <none> 4d20h v1.15.2
k8s-node2 Ready <none> 4d20h v1.15.2
[admin@k8s-node1 ~]$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-5c98db65d4-7fztc 1/1 Running 2 4d20h 172.16.0.5 k8s-master
kube-system coredns-5c98db65d4-wwb4t 1/1 Running 2 4d20h 172.16.0.4 k8s-master
kube-system etcd-k8s-master 1/1 Running 1 4d20h 10.1.99.10 k8s-master
kube-system kube-apiserver-k8s-master 1/1 Running 1 4d20h 10.1.99.10 k8s-master
kube-system kube-controller-manager-k8s-master 1/1 Running 1 4d20h 10.1.99.10 k8s-master
kube-system kube-router-bt2rb 1/1 Running 0 30m 10.1.99.11 k8s-node1
kube-system kube-router-dnft9 1/1 Running 0 30m 10.1.99.10 k8s-master
kube-system kube-router-z98ns 1/1 Running 0 29m 10.1.99.12 k8s-node2
kube-system kube-scheduler-k8s-master 1/1 Running 1 4d20h 10.1.99.10 k8s-master
kubernetes-dashboard kubernetes-dashboard-5c8f9556c4-8skmv 1/1 Running 0 43m 172.16.1.4 k8s-node1
kubernetes-dashboard kubernetes-metrics-scraper-86456cdd8f-htq9t 1/1 Running 0 43m 172.16.2.7 k8s-node2
URL: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
The Log for the Dashboard deployment shows the following message:
Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
I expect the Dashboard UI to load at that URL, but I get the error message instead.
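One way to isolate the problem (a troubleshooting sketch, not a fix): bypass the API-server proxy path entirely with a port-forward. If the Dashboard then loads at https://localhost:8443/, the pod itself is healthy and the i/o timeout points at inter-node pod networking:

```shell
# Forward local port 8443 to the dashboard Service named in the listing above
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard 8443:443
```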

Understanding Kubernetes networking, pods with same ip

I checked the pods in the kube-system namespace and noticed that some pods share the same IP address. The pods that share the same IP address appear to be on the same node.
The Kubernetes documentation says that "Every pod gets its own IP address" (https://kubernetes.io/docs/concepts/cluster-administration/networking/). I'm confused as to how the same IP for some pods came about.
This was reported in issue 51322 and can depend on the network plugin you are using.
The issue was seen when using the basic kubenet network plugin on Linux.
Sometimes a reset/reboot can help.
I suspect the nodes have been configured with overlapping podCIDRs in such cases.
The pod CIDRs can be checked with kubectl get node -o jsonpath='{.items[*].spec.podCIDR}'
Please check the Kubernetes manifests of the pods that have the same IP address as their node. If they have the parameter hostNetwork set to true, then this is not an issue.
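That flag can be checked in one pass with kubectl's standard custom-columns output (the pod names in the result will of course come from your own cluster):

```shell
# Print each kube-system pod with its hostNetwork flag and IP
kubectl get pod -n kube-system \
  -o custom-columns='NAME:.metadata.name,HOSTNET:.spec.hostNetwork,IP:.status.podIP'
```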
(Screenshots: master-node and worker-node01 after logging in using PuTTY.)
It clearly shows a separate CIDR for the Weave network. So it depends on the network plug-in, and some plug-ins will override the pod CIDR provided during initialization.
(Screenshot: after re-deploying across the new node, worker-node02.)
Yes. I have checked my 2-node cluster created using kubeadm on VMs running on AWS.
In the manifest files for static Pods, hostNetwork: true is set.
Pods are:
-rw------- 1 root root 2100 Feb 4 16:48 etcd.yaml
-rw------- 1 root root 3669 Feb 4 16:48 kube-apiserver.yaml
-rw------- 1 root root 3346 Feb 4 16:48 kube-controller-manager.yaml
-rw------- 1 root root 1385 Feb 4 16:48 kube-scheduler.yaml
I have checked with weave and flannel.
All other pods get IPs from the range that was set during cluster initialization by kubeadm:
kubeadm init --pod-network-cidr=10.244.0.0/16
ubuntu@master-node:~$ kubectl get all -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default pod/my-nginx-deployment-5976fbfd94-2n2ff 1/1 Running 0 20m 10.244.1.17 worker-node01
default pod/my-nginx-deployment-5976fbfd94-4sghq 1/1 Running 0 20m 10.244.1.12 worker-node01
default pod/my-nginx-deployment-5976fbfd94-57lfp 1/1 Running 0 20m 10.244.1.14 worker-node01
default pod/my-nginx-deployment-5976fbfd94-77nrr 1/1 Running 0 20m 10.244.1.18 worker-node01
default pod/my-nginx-deployment-5976fbfd94-m7qbn 1/1 Running 0 20m 10.244.1.15 worker-node01
default pod/my-nginx-deployment-5976fbfd94-nsxvm 1/1 Running 0 20m 10.244.1.19 worker-node01
default pod/my-nginx-deployment-5976fbfd94-r5hr6 1/1 Running 0 20m 10.244.1.16 worker-node01
default pod/my-nginx-deployment-5976fbfd94-whtcg 1/1 Running 0 20m 10.244.1.13 worker-node01
kube-system pod/coredns-f9fd979d6-nghhz 1/1 Running 0 63m 10.244.0.3 master-node
kube-system pod/coredns-f9fd979d6-pdbrx 1/1 Running 0 63m 10.244.0.2 master-node
kube-system pod/etcd-master-node 1/1 Running 0 63m 172.31.8.115 master-node
kube-system pod/kube-apiserver-master-node 1/1 Running 0 63m 172.31.8.115 master-node
kube-system pod/kube-controller-manager-master-node 1/1 Running 0 63m 172.31.8.115 master-node
kube-system pod/kube-proxy-8k9s4 1/1 Running 0 63m 172.31.8.115 master-node
kube-system pod/kube-proxy-ln6gb 1/1 Running 0 37m 172.31.3.75 worker-node01
kube-system pod/kube-scheduler-master-node 1/1 Running 0 63m 172.31.8.115 master-node
kube-system pod/weave-net-jc92w 2/2 Running 1 24m 172.31.8.115 master-node
kube-system pod/weave-net-l9rg2 2/2 Running 1 24m 172.31.3.75 worker-node01
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.96.0.1 443/TCP 63m
kube-system service/kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP,9153/TCP 63m k8s-app=kube-dns
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
kube-system daemonset.apps/kube-proxy 2 2 2 2 2 kubernetes.io/os=linux 63m kube-proxy k8s.gcr.io/kube-proxy:v1.19.16 k8s-app=kube-proxy
kube-system daemonset.apps/weave-net 2 2 2 2 2 24m weave,weave-npc ghcr.io/weaveworks/launcher/weave-kube:2.8.1,ghcr.io/weaveworks/launcher/weave-npc:2.8.1 name=weave-net
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
default deployment.apps/my-nginx-deployment 8/8 8 8 20m nginx nginx app=my-nginx-deployment
kube-system deployment.apps/coredns 2/2 2 2 63m coredns k8s.gcr.io/coredns:1.7.0 k8s-app=kube-dns
NAMESPACE NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
default replicaset.apps/my-nginx-deployment-5976fbfd94 8 8 8 20m nginx nginx app=my-nginx-deployment,pod-template-hash=5976fbfd94
kube-system replicaset.apps/coredns-f9fd979d6 2 2 2 63m coredns k8s.gcr.io/coredns:1.7.0 k8s-app=kube-dns,pod-template-hash=f9fd979d6
ubuntu@master-node:~$
I will add another worker node and check.
Note: I was testing with a one-master, three-worker cluster where pods were getting IPs from other CIDRs, 10.38 and 10.39. I am not sure, but the order in which the steps are followed matters. I could not fix that cluster.