kubernetes: unable to join a node

I have created a master and am trying to join a node to create a cluster. When I try the join command, I get the error below. Both nodes are on the same network. The error message indicates that no route exists to the host, and I'm not sure how to establish one. Any help is appreciated.
sudo kubeadm join --token d23afe.14fde99cd03def7e 192.168.178.24:6443 --discovery-token-ca-cert-hash sha256:6a5e2674825e683bbdfe9bab512b03c556bcf89d8648317a64372bb44746bb39
[preflight] Running pre-flight checks.
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.02.0-ce. Max validated version: 17.03
[WARNING FileExisting-crictl]: crictl not found in system path
[discovery] Trying to connect to API Server "192.168.178.24:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.178.24:6443"
[discovery] Failed to request cluster info, will try again: [Get https://192.168.178.24:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 192.168.178.24:6443: getsockopt: no route to host]
Here's the output of sudo route. Unfortunately, I have little knowledge to troubleshoot from this output:
sudo route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 192.168.178.1 0.0.0.0 UG 202 0 0 eth0
10.32.0.0 0.0.0.0 255.240.0.0 U 0 0 0 weave
link-local 0.0.0.0 255.255.0.0 U 205 0 0 datapath
link-local 0.0.0.0 255.255.0.0 U 210 0 0 vethwe-datapath
link-local 0.0.0.0 255.255.0.0 U 211 0 0 vethwe-bridge
link-local 0.0.0.0 255.255.0.0 U 212 0 0 vxlan-6784
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.178.0 0.0.0.0 255.255.255.0 U 202 0 0 eth0

I managed to identify the issue: it was with the Weave Net plugin. I did a teardown and reinstalled the plugin, and was then able to join the node. Thanks all for your suggestions.
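For reference, the teardown and reinstall roughly followed the sequence below. This is only a sketch: the Weave Net manifest URL is the one commonly shown in the Weave docs and may differ for your Kubernetes version.
# On the joining node: undo the failed join attempt
$ sudo kubeadm reset
# On the master: remove and re-apply the Weave Net manifest
$ kubectl delete -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
# Wait for the weave-net pods to become Ready, then retry kubeadm join on the node
$ kubectl get pods -n kube-system -l name=weave-net -w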

Related

kubernetes: can't ping pods

I created a k8s cluster whose pod network configuration is podSubnet: 172.168.0.0/12.
Then I found that I can't ping those pods' IPs.
For example, the metrics-server deployment:
# on k8s-master01 node:
$ kubectl get po -n kube-system -o wide
metrics-server-545b8b99c6-r2ql5 1/1 Running 0 5d1h 172.171.14.193 k8s-node02 <none> <none>
# ping 172.171.14.193 -c 2
PING 172.171.14.193 (172.171.14.193) 56(84) bytes of data.
^C
--- 172.171.14.193 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1016ms
# this is the route table
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.180.104.1 0.0.0.0 UG 0 0 0 eth0
10.180.104.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.161.125.0 10.180.104.110 255.255.255.192 UG 0 0 0 tunl0
172.162.195.0 10.180.104.109 255.255.255.192 UG 0 0 0 tunl0
172.169.92.64 10.180.104.108 255.255.255.192 UG 0 0 0 tunl0
172.169.244.192 0.0.0.0 255.255.255.255 UH 0 0 0 cali06e1673851f
172.169.244.192 0.0.0.0 255.255.255.192 U 0 0 0 *
172.171.14.192 10.180.104.111 255.255.255.192 UG 0 0 0 tunl0
That shows the metrics pod is hosted on k8s-node02. This is k8s-node02's route table:
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.180.104.1 0.0.0.0 UG 0 0 0 eth0
10.180.104.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
169.254.169.254 10.180.104.11 255.255.255.255 UGH 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.161.125.0 10.180.104.110 255.255.255.192 UG 0 0 0 tunl0
172.162.195.0 10.180.104.109 255.255.255.192 UG 0 0 0 tunl0
172.169.92.64 10.180.104.108 255.255.255.192 UG 0 0 0 tunl0
172.169.244.192 10.180.104.107 255.255.255.192 UG 0 0 0 tunl0
172.171.14.192 0.0.0.0 255.255.255.192 U 0 0 0 *
172.171.14.193 0.0.0.0 255.255.255.255 UH 0 0 0 cali872eed170f4
172.171.14.194 0.0.0.0 255.255.255.255 UH 0 0 0 cali7d7625dd37e
172.171.14.203 0.0.0.0 255.255.255.255 UH 0 0 0 calid4e258f95f6
172.171.14.204 0.0.0.0 255.255.255.255 UH 0 0 0 cali5cf96eb1028
In fact, none of the pods can be reached. I created a service based on an example deployment:
# kubectl describe svc my-service
Name: my-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=demo-nginx
Type: ClusterIP
IP Families: <none>
IP: 10.100.75.139
IPs: 10.100.75.139
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: 172.161.125.14:80,172.161.125.15:80,172.171.14.203:80
Session Affinity: None
Events: <none>
# ping 10.100.75.139 -c 1
PING 10.100.75.139 (10.100.75.139) 56(84) bytes of data.
64 bytes from 10.100.75.139: icmp_seq=1 ttl=64 time=0.077 ms
# nc -vz 10.100.75.139 80
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connection timed out.
I suppose the root cause is the route table, but I'm not sure. Would you please help fix this issue?
Please feel free to let me know if you need more information.
Thanks a lot in advance.
In connect mode, Ncat initiates a connection (or sends UDP data) to a service that is listening somewhere. For those familiar with socket programming, connect mode is like using the connect function. In listen mode, Ncat waits for an incoming connection (or data receipt), like using the bind and listen functions.
A connection timed out response indicates that your connection is not working, which could mean your firewall is blocking the port. Test the connection status by adding a rule that accepts connections on the required port.
There might also be issues with traffic rules; try adding outbound traffic rules in your security groups if you face the error again. If you still see the same error after adding the traffic rules, there must be some restrictive rule on the node that is blocking access to the port.
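As an illustration, a rule accepting connections on the target port could look like the following sketch, assuming firewalld on the nodes and TCP port 80 as in the service above (adjust the port and zone to your setup):
# open TCP port 80 permanently and reload the firewall
$ sudo firewall-cmd --permanent --add-port=80/tcp
$ sudo firewall-cmd --reload
# or, with plain iptables, accept incoming traffic on port 80
$ sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT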
Refer to Testing Network services for more information.

How to add IP route(s) so Kubernetes cluster addresses go via the appropriate adapter

I have installed a Kubernetes cluster (one master and one worker node) on separate stand-alone CentOS 8 servers, as per the instructions at the link below.
https://www.tecmint.com/install-a-kubernetes-cluster-on-centos-8/
The Weave Net CNI plugin was installed as per the above link. Now I can see the new network adapter below on our K8s master and worker-node servers.
weave: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1376
inet 10.32.0.1 netmask 255.240.0.0 broadcast 10.47.255.255
inet6 fe80::a07d:21ff:fef1:4656 prefixlen 64 scopeid 0x20<link>
ether a2:7d:21:f1:46:56 txqueuelen 1000 (Ethernet)
RX packets 141 bytes 13322 (13.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 48 bytes 4896 (4.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
But the problem is that from the host server we are unable to ping or access any of our remote site/location IPs (ping response given below), whereas local IPs are pingable and accessible.
ping -c 4 120.121.5.48
PING 120.121.5.48 (120.121.5.48) 56(84) bytes of data.
From 10.32.0.1 icmp_seq=1 Destination Host Unreachable
From 10.32.0.1 icmp_seq=2 Destination Host Unreachable
From 10.32.0.1 icmp_seq=3 Destination Host Unreachable
From 10.32.0.1 icmp_seq=4 Destination Host Unreachable
--- 120.121.5.48 ping statistics ---
4 packets transmitted, 0 received, +4 errors, 100% packet loss, time 2999ms
pipe 4
Also, when trying to connect from the host server to our remote LDAP server through telnet, it shows the error message below.
# telnet 120.121.5.48 389
Trying 120.121.5.48...
telnet: connect to address 120.121.5.48: No route to host
Our K8s master and worker-node servers have 23 network adapters, with network IPs configured statically. Does any additional configuration need to be applied for the K8s CNI to be reachable via the default routing?
The ip route show and route -n output is as follows.
# ip route show
default via 45.46.47.1 dev ens1f0 proto static metric 100
10.32.0.0/12 dev weave proto kernel scope link src 10.32.0.1
45.46.47.0/24 dev ens1f0 proto kernel scope link src 45.46.47.48 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 45.46.47.1 0.0.0.0 UG 100 0 0 ens1f0
10.32.0.0 0.0.0.0 255.255.255.0 U 10 0 0 ens1f0
10.32.0.0 0.0.0.0 255.240.0.0 U 0 0 0 weave
45.46.47.0 0.0.0.0 255.255.255.0 U 100 0 0 ens1f0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0
I tried to change the weave route to the default with the command below. It executed successfully, but the problem remains.
ip route add 10.32.0.0/24 via 45.46.47.1 dev ens1f0 metric 100
If I run ifconfig weave down, everything works fine. But to use the Kubernetes cluster I need the Weave Net network adapter. So please help me add IP route(s) so that my Kubernetes cluster addresses go via the appropriate adapter, and I can access both our local and remote location servers.
I changed the CNI plugin from Weave Net to Flannel, and now it is working as expected.
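For reference, the switch roughly amounts to removing the Weave Net manifest and applying the Flannel one. This is only a sketch: the manifest URLs are the commonly documented ones and may differ by version, and Flannel expects the cluster's pod CIDR to match its default 10.244.0.0/16 unless its ConfigMap is edited.
# On the master: remove Weave Net
$ kubectl delete -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
# Apply Flannel
$ kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# Confirm the flannel pods are Running before testing connectivity again
$ kubectl get pods --all-namespaces -o wide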

Internet connectivity loss in Kubernetes cluster

I have a master and two worker node architecture. I started with a single node and later added two nodes using kubeadm; I can see the worker nodes when I use kubectl get nodes.
But the cluster loses internet connectivity soon after the workers join.
I found this error when I try to deploy an nginx web server to a worker.
I have installed Calico as described here.
But the CoreDNS pod does not show up with kubectl get pods --all-namespaces.
When I use route -n on a worker node:
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.100.1 0.0.0.0 UG 100 0 0 ens32
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.100.0 0.0.0.0 255.255.255.0 U 0 0 0 ens32
192.168.100.1 192.168.100.222 255.255.255.255 UGH 0 0 0 tunl0
192.168.100.1 0.0.0.0 255.255.255.255 UH 100 0 0 ens32
Here 192.168.100.1 is my network gateway server and 192.168.100.222 is my master node.
All 3 nodes run Ubuntu 18.04 LTS. As I described earlier, I had a single-node cluster and then added two worker nodes using kubeadm join.
I am really new to Kubernetes. What am I missing here?

kubernetes master starts only on tcp6, how to join a node?

I have a local Kubernetes master that listens on tcp6:6443 but not on tcp, so how do I run kubeadm join against the right port?
tcp6 0 0 :::10250 :::* LISTEN -
tcp6 0 0 :::6443 :::* LISTEN -
tcp6 0 0 :::10251 :::* LISTEN -
Starting Nmap 7.01 ( https://nmap.org ) at 2019-09-25 15:40 CEST
Nmap scan report for 10.0.2.15
Host is up (0.000081s latency).
PORT STATE SERVICE
6443/tcp closed unknown
You should run the below command (on master host):
$ kubeadm init --apiserver-advertise-address=<private-ip of master host>
The --apiserver-advertise-address parameter is the IP address the API server will advertise it's listening on, if the node should host a new control plane instance. If not set, the default network interface will be used.
Now try to run the join command that was generated in the output of kubeadm init. It should work fine.
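If the original join command is no longer available, it can be regenerated on the master with kubeadm's standard token command, for example:
# prints a fresh "kubeadm join <ip>:6443 --token ... --discovery-token-ca-cert-hash sha256:..." line
$ kubeadm token create --print-join-command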
Also, check whether a firewall is running on your master node and blocking incoming traffic; if so, it should be disabled (or the required ports opened):
systemctl stop firewalld
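Alternatively, instead of disabling the firewall entirely, one could open just the ports kubeadm needs on the control plane. A sketch assuming firewalld; the port list follows the usual kubeadm requirements and may vary by Kubernetes version:
# API server, etcd, kubelet, scheduler and controller-manager ports
$ sudo firewall-cmd --permanent --add-port=6443/tcp
$ sudo firewall-cmd --permanent --add-port=2379-2380/tcp
$ sudo firewall-cmd --permanent --add-port=10250-10252/tcp
$ sudo firewall-cmd --reload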

Ping other pods in the same & different nodes

I would like to ping pod B (node 1) from pod A (node 0), but it's unreachable. Pinging a pod on the same node is not reachable either.
I am setting up a new cluster to try Kubernetes, following Kelsey Hightower's tutorial.
I have tried to use this link as my reference Kubernetes: Can't ping pods across nodes
Node - Private IP - Pod CIDR
worker-0 - 10.240.0.20 - 10.200.0.0/24
worker-1 - 10.240.0.21 - 10.200.1.0/24
worker-2 - 10.240.0.22 - 10.200.2.0/24
route -n
worker-0
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.240.0.1 0.0.0.0 UG 100 0 0 ens4
10.200.0.0 0.0.0.0 255.255.255.0 U 0 0 0 cnio0
10.240.0.1 0.0.0.0 255.255.255.255 UH 100 0 0 ens4
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
worker-1
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.240.0.1 0.0.0.0 UG 100 0 0 ens4
10.200.1.0 0.0.0.0 255.255.255.0 U 0 0 0 cnio0
10.240.0.1 0.0.0.0 255.255.255.255 UH 100 0 0 ens4
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
worker-2
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.240.0.1 0.0.0.0 UG 100 0 0 ens4
10.200.2.0 0.0.0.0 255.255.255.0 U 0 0 0 cnio0
10.240.0.1 0.0.0.0 255.255.255.255 UH 100 0 0 ens4
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
I have set up the VPC Network Routes as in this link.
After that I followed this reference: Kubernetes: Can't ping pods across nodes
route add -net 10.200.1.0 netmask 255.255.255.0 gw 10.240.0.21 on worker-0
The result is
SIOCADDRT: Network is unreachable
I tried it on worker-0, worker-1, and worker-2 and got the same result.
Even though worker-0 can ping worker-1 (10.240.0.21) and it is reachable.
My expectation is that when I am in pod A (worker-0) with pod IP 10.200.0.3, I can ping pod B (worker-1) with pod IP 10.200.1.3. And also that I can ping pod C (worker-0), which is on the same node as pod A.
Should this step use Calico or Flannel? Or should we be able to ping a pod on a different node without Calico or Flannel (only the CNI setting)?
Additional Information
I am using Docker, not runc & containerd.
So I installed Docker manually from this link.
In kubelet.service, --container-runtime=remote became --container-runtime=docker.
Try adding the routes like this:
Worker-0:
$ sudo route add -net 10.200.1.0 netmask 255.255.255.0 gw 10.240.0.21
$ sudo route add -net 10.200.2.0 netmask 255.255.255.0 gw 10.240.0.22
Worker-1:
$ sudo route add -net 10.200.0.0 netmask 255.255.255.0 gw 10.240.0.20
$ sudo route add -net 10.200.2.0 netmask 255.255.255.0 gw 10.240.0.22
Worker-2:
$ sudo route add -net 10.200.0.0 netmask 255.255.255.0 gw 10.240.0.20
$ sudo route add -net 10.200.1.0 netmask 255.255.255.0 gw 10.240.0.21
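If the nodes are cloud VMs behind a VPC (as the VPC Network Routes step above suggests), the equivalent routes can also be added at the VPC level instead of on each host. A sketch with gcloud, assuming a GCE network named kubernetes-the-hard-way (substitute your own network name, route name, and ranges):
# route worker-1's pod CIDR via worker-1's private IP
$ gcloud compute routes create kubernetes-route-10-200-1-0-24 \
    --network kubernetes-the-hard-way \
    --next-hop-address 10.240.0.21 \
    --destination-range 10.200.1.0/24
Routes added on the hosts with route add are not persistent across reboots, so the VPC-level (or a CNI plugin) approach is generally easier to maintain.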