I'm looking for help troubleshooting this basic scenario, which isn't working:
Three nodes installed with kubeadm on VirtualBox VMs running on a MacBook:
sudo kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubernetes-master Ready master 4h v1.10.2
kubernetes-node1 Ready <none> 4h v1.10.2
kubernetes-node2 Ready <none> 34m v1.10.2
The VirtualBox VMs have two adapters: 1) Host-only and 2) NAT. The node IPs, as seen from the guests, are:
kubernetes-master (192.168.56.3)
kubernetes-node1 (192.168.56.4)
kubernetes-node2 (192.168.56.5)
I am using the Flannel pod network (I also tried Calico previously, with the same result).
When initializing the master node I used this command:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.56.3
I deployed an nginx application whose pods are up, one pod per node:
nginx-deployment-64ff85b579-sk5zs 1/1 Running 0 14m 10.244.2.2 kubernetes-node2
nginx-deployment-64ff85b579-sqjgb 1/1 Running 0 14m 10.244.1.2 kubernetes-node1
I exposed them as a ClusterIP service:
sudo kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 22m
nginx-deployment ClusterIP 10.98.206.211 <none> 80/TCP 14m
Now the problem:
I ssh into kubernetes-node1 and curl the service using the cluster IP:
ssh 192.168.56.4
---
curl 10.98.206.211
Sometimes the request goes through fine, returning the nginx welcome page. I can see in the logs that these requests are always answered by the pod on the same node (kubernetes-node1). Other requests hang until they time out; I assume those were sent to the pod on the other node (kubernetes-node2).
The same happens the other way around: when ssh'd into kubernetes-node2, the pod on that node logs the successful requests and the others time out.
It seems there is some kind of networking problem and nodes can't reach pods on other nodes. How can I fix this?
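A quick way to confirm whether this is plain pod-to-pod reachability rather than a Service/kube-proxy problem is to curl the remote pod IP directly (pod IPs taken from the listing above); if that also hangs, the overlay network itself is broken. A minimal check, assuming the pod IPs are still the ones shown:
# from kubernetes-node1, bypass the Service and hit the pod on node2 directly
curl --max-time 5 10.244.2.2
# from kubernetes-node2, hit the pod on node1
curl --max-time 5 10.244.1.2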
UPDATE:
I downscaled the deployment to 1 replica, so now there is only one pod, on kubernetes-node2.
If I ssh into kubernetes-node2, all curls succeed. From kubernetes-node1, all requests time out.
UPDATE 2:
kubernetes-master ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.244.0.1 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::20a0:c7ff:fe6f:8271 prefixlen 64 scopeid 0x20<link>
ether 0a:58:0a:f4:00:01 txqueuelen 1000 (Ethernet)
RX packets 10478 bytes 2415081 (2.4 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 11523 bytes 2630866 (2.6 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:cd:ce:84:a9 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.56.3 netmask 255.255.255.0 broadcast 192.168.56.255
inet6 fe80::a00:27ff:fe2d:298f prefixlen 64 scopeid 0x20<link>
ether 08:00:27:2d:29:8f txqueuelen 1000 (Ethernet)
RX packets 20784 bytes 2149991 (2.1 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 26567 bytes 26397855 (26.3 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.3.15 netmask 255.255.255.0 broadcast 10.0.3.255
inet6 fe80::a00:27ff:fe09:f08a prefixlen 64 scopeid 0x20<link>
ether 08:00:27:09:f0:8a txqueuelen 1000 (Ethernet)
RX packets 12662 bytes 12491693 (12.4 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 4507 bytes 297572 (297.5 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.244.0.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::c078:65ff:feb9:e4ed prefixlen 64 scopeid 0x20<link>
ether c2:78:65:b9:e4:ed txqueuelen 0 (Ethernet)
RX packets 6 bytes 444 (444.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6 bytes 444 (444.0 B)
TX errors 0 dropped 15 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 464615 bytes 130013389 (130.0 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 464615 bytes 130013389 (130.0 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
tunl0: flags=193<UP,RUNNING,NOARP> mtu 1440
tunnel txqueuelen 1000 (IPIP Tunnel)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethb1098eb3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet6 fe80::d8a3:a2ff:fedf:4d1d prefixlen 64 scopeid 0x20<link>
ether da:a3:a2:df:4d:1d txqueuelen 0 (Ethernet)
RX packets 10478 bytes 2561773 (2.5 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 11538 bytes 2631964 (2.6 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
kubernetes-node1 ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.244.1.1 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::5cab:32ff:fe04:5b89 prefixlen 64 scopeid 0x20<link>
ether 0a:58:0a:f4:01:01 txqueuelen 1000 (Ethernet)
RX packets 199 bytes 41004 (41.0 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 331 bytes 56438 (56.4 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:0f:02:bb:ff txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.56.4 netmask 255.255.255.0 broadcast 192.168.56.255
inet6 fe80::a00:27ff:fe36:741a prefixlen 64 scopeid 0x20<link>
ether 08:00:27:36:74:1a txqueuelen 1000 (Ethernet)
RX packets 12834 bytes 9685221 (9.6 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 9114 bytes 1014758 (1.0 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.3.15 netmask 255.255.255.0 broadcast 10.0.3.255
inet6 fe80::a00:27ff:feb2:23a3 prefixlen 64 scopeid 0x20<link>
ether 08:00:27:b2:23:a3 txqueuelen 1000 (Ethernet)
RX packets 13263 bytes 12557808 (12.5 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 5065 bytes 341321 (341.3 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.244.1.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::7815:efff:fed6:1423 prefixlen 64 scopeid 0x20<link>
ether 7a:15:ef:d6:14:23 txqueuelen 0 (Ethernet)
RX packets 483 bytes 37506 (37.5 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 483 bytes 37506 (37.5 KB)
TX errors 0 dropped 15 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 3072 bytes 269588 (269.5 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3072 bytes 269588 (269.5 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth153293ec: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet6 fe80::70b6:beff:fe94:9942 prefixlen 64 scopeid 0x20<link>
ether 72:b6:be:94:99:42 txqueuelen 0 (Ethernet)
RX packets 81 bytes 19066 (19.0 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 129 bytes 10066 (10.0 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
kubernetes-node2 ifconfig
cni0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 10.244.2.1 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::4428:f5ff:fe8b:a76b prefixlen 64 scopeid 0x20<link>
ether 0a:58:0a:f4:02:01 txqueuelen 1000 (Ethernet)
RX packets 184 bytes 36782 (36.7 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 284 bytes 36940 (36.9 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:7f:e9:79:cd txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.56.5 netmask 255.255.255.0 broadcast 192.168.56.255
inet6 fe80::a00:27ff:feb7:ff54 prefixlen 64 scopeid 0x20<link>
ether 08:00:27:b7:ff:54 txqueuelen 1000 (Ethernet)
RX packets 12634 bytes 9466460 (9.4 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8961 bytes 979807 (979.8 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.3.15 netmask 255.255.255.0 broadcast 10.0.3.255
inet6 fe80::a00:27ff:fed8:9210 prefixlen 64 scopeid 0x20<link>
ether 08:00:27:d8:92:10 txqueuelen 1000 (Ethernet)
RX packets 12658 bytes 12491919 (12.4 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 4544 bytes 297215 (297.2 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.244.2.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::c832:e4ff:fe3e:f616 prefixlen 64 scopeid 0x20<link>
ether ca:32:e4:3e:f6:16 txqueuelen 0 (Ethernet)
RX packets 111 bytes 8466 (8.4 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 111 bytes 8466 (8.4 KB)
TX errors 0 dropped 15 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 2940 bytes 258968 (258.9 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2940 bytes 258968 (258.9 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
UPDATE 3:
Kubelet logs:
kubernetes-master kubelet logs
kubernetes-node1 kubelet logs
kubernetes-node2 kubelet logs
IP Routes
Master
kubernetes-master:~$ ip route
default via 10.0.3.2 dev enp0s8 proto dhcp src 10.0.3.15 metric 100
10.0.3.0/24 dev enp0s8 proto kernel scope link src 10.0.3.15
10.0.3.2 dev enp0s8 proto dhcp scope link src 10.0.3.15 metric 100
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.56.0/24 dev enp0s3 proto kernel scope link src 192.168.56.3
Node1
kubernetes-node1:~$ ip route
default via 10.0.3.2 dev enp0s8 proto dhcp src 10.0.3.15 metric 100
10.0.3.0/24 dev enp0s8 proto kernel scope link src 10.0.3.15
10.0.3.2 dev enp0s8 proto dhcp scope link src 10.0.3.15 metric 100
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink
10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.56.0/24 dev enp0s3 proto kernel scope link src 192.168.56.4
Node2
kubernetes-node2:~$ ip route
default via 10.0.3.2 dev enp0s8 proto dhcp src 10.0.3.15 metric 100
10.0.3.0/24 dev enp0s8 proto kernel scope link src 10.0.3.15
10.0.3.2 dev enp0s8 proto dhcp scope link src 10.0.3.15 metric 100
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.56.0/24 dev enp0s3 proto kernel scope link src 192.168.56.5
iptables-save:
kubernetes-master iptables-save
kubernetes-node1 iptables-save
kubernetes-node2 iptables-save
I was running into a similar problem with my K8s cluster with Flannel. I had set up the VMs with a NAT NIC for internet connectivity and a Host-only NIC for node-to-node communication. Flannel was choosing the NAT NIC by default for node-to-node communication, which obviously won't work in this scenario.
I modified the flannel manifest before deploying to set the --iface=enp0s8
argument to the Host-only NIC that should have been chosen (enp0s8 in my case). In your case it looks like enp0s3 would be the correct NIC. Node-to-node communication worked fine after that.
I forgot to note that I also modified the kube-proxy manifest to include --cluster-cidr=10.244.0.0/16 and --proxy-mode=iptables, which appears to be required as well.
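For reference, a sketch of where the --iface argument goes in the kube-flannel DaemonSet (an illustrative excerpt only; the image tag and surrounding fields may differ in your manifest, and enp0s3 is the asker's host-only NIC):
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=enp0s3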
Flushing all firewall rules with iptables --flush and iptables -t nat --flush, then restarting Docker, fixed it for me.
Check this GitHub issue link.
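For completeness, that flush-and-restart sequence is roughly the following (note it wipes all iptables rules; kube-proxy and flannel recreate theirs afterwards):
sudo iptables --flush
sudo iptables -t nat --flush
sudo systemctl restart docker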
Based on your logs and the fact that you had problems only with connections between nodes that use Flannel, I guess you had a problem with the Flannel CNI during the installation.
In the logs from node1 and the master, I see the following messages:
Error adding network: open /run/flannel/subnet.env: no such file or directory
Error while adding to cni network: open /run/flannel/subnet.env: no such file or directory
The root cause may be a network problem between the VMs.
I recommend creating two networks for each instance in your cluster: one with NAT for access to the internet and one Host-only for in-cluster communication.
As an alternative, you can use Bridged mode for the VMs' interfaces if your network allows it.
Finally, the only suggestion I can offer is to remove all cluster components and initialize the cluster again using the configuration mentioned above. That is the fastest way.
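A sketch of that reset-and-reinit path, reusing the CIDR and host-only address from the question (run kubeadm reset on every node first, then re-join the workers with the token printed by init):
sudo kubeadm reset
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.56.3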
I had the same issue after a fresh install of Kubernetes on a Raspberry Pi cluster with Flannel.
The resolution was to disable the ufw firewall.
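On Ubuntu/Raspbian that is simply something like:
sudo ufw status
sudo ufw disable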
Related
I'm trying to set up a VPN to access my cluster's workloads without exposing public endpoints.
The service is deployed using the OpenVPN Helm chart, on Kubernetes managed by Rancher v2.3.2.
I replaced the L4 load balancer with simple service discovery
and edited the ConfigMap to allow TCP to go through the load balancer and reach the VPN.
What does / doesn't work:
OpenVPN client can connect successfully
Cannot ping public servers
Cannot ping Kubernetes services or pods
Can ping openvpn cluster IP "10.42.2.11"
My files
vars.yml
---
replicaCount: 1
nodeSelector:
  openvpn: "true"
openvpn:
  OVPN_K8S_POD_NETWORK: "10.42.0.0"
  OVPN_K8S_POD_SUBNET: "255.255.0.0"
  OVPN_K8S_SVC_NETWORK: "10.43.0.0"
  OVPN_K8S_SVC_SUBNET: "255.255.0.0"
persistence:
  storageClass: "local-path"
service:
  externalPort: 444
The connection works, but I'm not able to hit any IP inside my cluster.
The only IP I'm able to reach is the OpenVPN cluster IP.
openvpn.conf:
server 10.240.0.0 255.255.0.0
verb 3
key /etc/openvpn/certs/pki/private/server.key
ca /etc/openvpn/certs/pki/ca.crt
cert /etc/openvpn/certs/pki/issued/server.crt
dh /etc/openvpn/certs/pki/dh.pem
key-direction 0
keepalive 10 60
persist-key
persist-tun
proto tcp
port 443
dev tun0
status /tmp/openvpn-status.log
user nobody
group nogroup
push "route 10.42.2.11 255.255.255.255"
push "route 10.42.0.0 255.255.0.0"
push "route 10.43.0.0 255.255.0.0"
push "dhcp-option DOMAIN-SEARCH openvpn.svc.cluster.local"
push "dhcp-option DOMAIN-SEARCH svc.cluster.local"
push "dhcp-option DOMAIN-SEARCH cluster.local"
client.ovpn
client
nobind
dev tun
remote xxxx xxx tcp
CERTS CERTS
dhcp-option DOMAIN openvpn.svc.cluster.local
dhcp-option DOMAIN svc.cluster.local
dhcp-option DOMAIN cluster.local
dhcp-option DOMAIN online.net
I don't really know how to debug this.
I'm using Windows as the client.
Output of the route command from the client:
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 livebox.home 255.255.255.255 U 0 0 0 eth0
192.168.1.0 0.0.0.0 255.255.255.0 U 256 0 0 eth0
192.168.1.17 0.0.0.0 255.255.255.255 U 256 0 0 eth0
192.168.1.255 0.0.0.0 255.255.255.255 U 256 0 0 eth0
224.0.0.0 0.0.0.0 240.0.0.0 U 256 0 0 eth0
255.255.255.255 0.0.0.0 255.255.255.255 U 256 0 0 eth0
224.0.0.0 0.0.0.0 240.0.0.0 U 256 0 0 eth1
255.255.255.255 0.0.0.0 255.255.255.255 U 256 0 0 eth1
0.0.0.0 10.240.0.5 255.255.255.255 U 0 0 0 eth1
10.42.2.11 10.240.0.5 255.255.255.255 U 0 0 0 eth1
10.42.0.0 10.240.0.5 255.255.0.0 U 0 0 0 eth1
10.43.0.0 10.240.0.5 255.255.0.0 U 0 0 0 eth1
10.240.0.1 10.240.0.5 255.255.255.255 U 0 0 0 eth1
127.0.0.0 0.0.0.0 255.0.0.0 U 256 0 0 lo
127.0.0.1 0.0.0.0 255.255.255.255 U 256 0 0 lo
127.255.255.255 0.0.0.0 255.255.255.255 U 256 0 0 lo
224.0.0.0 0.0.0.0 240.0.0.0 U 256 0 0 lo
255.255.255.255 0.0.0.0 255.255.255.255 U 256 0 0 lo
And finally ifconfig
inet 192.168.1.17 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 2a01:cb00:90c:5300:603c:f8:703e:a876 prefixlen 64 scopeid 0x0<global>
inet6 2a01:cb00:90c:5300:d84b:668b:85f3:3ba2 prefixlen 128 scopeid 0x0<global>
inet6 fe80::603c:f8:703e:a876 prefixlen 64 scopeid 0xfd<compat,link,site,host>
ether 00:d8:61:31:22:32 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.240.0.6 netmask 255.255.255.252 broadcast 10.240.0.7
inet6 fe80::b9cf:39cc:f60a:9db2 prefixlen 64 scopeid 0xfd<compat,link,site,host>
ether 00:ff:42:04:53:4d (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 1500
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0xfe<compat,link,site,host>
loop (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
For anybody looking for a working sample, this goes into your openvpn deployment alongside your container definition:
initContainers:
- args:
  - -w
  - net.ipv4.ip_forward=1
  command:
  - sysctl
  image: busybox
  name: openvpn-sidecar
  securityContext:
    privileged: true
I don't know if it is the RIGHT answer,
but I got it to work by adding a sidecar to my pods that sets
net.ipv4.ip_forward=1
which solved the issue.
You can set ipForwardInitContainer option to "true" in values.yaml
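i.e. roughly, in the values file passed to the chart (assuming your chart version exposes this flag, as suggested above):
# values.yaml for the openvpn chart
ipForwardInitContainer: true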
ss -tnulp|grep 8443
tcp LISTEN 0 128 172.16.1.4:8443 *:* users:(("kube-apiserver",pid=29513,fd=5))
I have my API server running and I want to expose it to the rest of the network. This is the network config on my cluster:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.16.1.4 netmask 255.255.255.0 broadcast 172.16.1.255
inet6 fe80::f816:3eff:feb5:93a3 prefixlen 64 scopeid 0x20<link>
ether fa:16:3e:b5:93:a3 txqueuelen 1000 (Ethernet)
RX packets 218935 bytes 2518654013 (2.3 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 160281 bytes 33994810 (32.4 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 139.54.130.39 netmask 255.255.254.0 broadcast 139.54.131.255
inet6 3ffe:302:11:2:f816:3eff:fe46:ab28 prefixlen 64 scopeid 0x0<global>
inet6 fd12:1f4b:e0bf:10:f816:3eff:fe46:ab28 prefixlen 64 scopeid 0x0<global>
inet6 fd12:1f4b:e0bf:1:f816:3eff:fe46:ab28 prefixlen 64 scopeid 0x0<global>
inet6 fe80::f816:3eff:fe46:ab28 prefixlen 64 scopeid 0x20<link>
ether fa:16:3e:46:ab:28 txqueuelen 1000 (Ethernet)
RX packets 3227129 bytes 845879874 (806.6 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1072031 bytes 132806957 (126.6 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
The VM has an external IP, 139.54.130.39.
Any leads on how to do that?
Did you try using this option?
- --apiserver-advertise-address=139.54.130.39
kubectl over this network will then be able to handshake with 139.54.130.39.
You can apply one of the following, depending on your installation:
.......
In case you installed the apiserver as a pod,
you can just change the apiserver-advertise-address parameter in
/etc/kubernetes/manifests/kube-apiserver.yaml
or
check/list the kube-system pods to get the actual apiserver pod name and edit it (carefully):
kubectl get pod -n kube-system
kubectl edit pod -n kube-system kube-apiserver
........
In case you installed the apiserver as a service, edit the systemd unit,
e.g.:
vim /etc/systemd/system/kube-apiserver.service
Edit
ExecStart=/usr/local/bin/kube-apiserver
--bind-address=0.0.0.0
--advertise_address=139.54.130.39
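After editing either variant, the change has to be picked up; for the systemd case that is roughly:
sudo systemctl daemon-reload
sudo systemctl restart kube-apiserver
(For the static-pod case, the kubelet restarts kube-apiserver automatically when the manifest under /etc/kubernetes/manifests changes.)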
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0", GitCommit:"87d9d8d7bc5aa35041a8ddfe3d4b367381112f89", GitTreeState:"clean", BuildDate:"2016-12-12T21:10:52Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0", GitCommit:"87d9d8d7bc5aa35041a8ddfe3d4b367381112f89", GitTreeState:"clean", BuildDate:"2016-12-12T21:10:52Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
Environment:
AWS, using VPC, all master and 2 nodes under same subnet
RHEL 7.2
Kernel (e.g. uname -a): Linux master.example.com 3.10.0-514.6.2.el7.x86_64 #1 SMP Fri Feb 17 19:21:31 EST 2017 x86_64 x86_64 x86_64 GNU/Linux
Install tools: installed Kubernetes as per the Red Hat guideline, using the flannel network
flannel-config.json
{
"Network": "10.20.0.0/16",
"SubnetLen": 24,
"Backend": {
"Type": "vxlan",
"VNI": 1
}
}
Kubernetes Cluster Network : 10.254.0.0/16
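With this setup, flannel-config.json is normally loaded into etcd under the key referenced by FLANNEL_ETCD_KEY below; a sketch, run on the master where etcd is listening:
etcdctl set /coreos.com/network/config "$(cat flannel-config.json)"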
Others:
What happened:
We have a Kubernetes cluster set up as follows:
Master: ip-10-52-2-56.ap-northeast-2.compute.internal
Node1: ip-10-52-2-59.ap-northeast-2.compute.internal
Node2: ip-10-52-2-54.ap-northeast-2.compute.internal
Master config details:
[root#master ~]# egrep -v '^#|^$' /etc/etcd/etcd.conf
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
[root#master ~]# egrep -v '^#|^$' /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://ip-10-52-2-56.ap-northeast-2.compute.internal:8080"
[root#master ~]# egrep -v '^#|^$' /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd_servers=http://ip-10-52-2-56.ap-northeast-2.compute.internal:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS="--service_account_key_file=/serviceaccount.key"
[root#master ~]# egrep -v '^#|^$' /etc/sysconfig/flanneld
FLANNEL_ETCD="http://ip-10-52-2-56.ap-northeast-2.compute.internal:2379"
FLANNEL_ETCD_KEY="/coreos.com/network"
FLANNEL_OPTIONS="eth0"
Node1/Node2 config details are same as follows:
[root#ip-10-52-2-59 ec2-user]# egrep -v '^$|^#' /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://ip-10-52-2-56.ap-northeast-2.compute.internal:8080"
[root#ip-10-52-2-59 ec2-user]# egrep -v '^#|^$' /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=ip-10-52-2-59.ap-northeast-2.compute.internal"
KUBELET_API_SERVER="--api-servers=http://ip-10-52-2-56.ap-northeast-2.compute.internal:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS="--cluster-dns=10.254.0.2 --cluster-domain=cluster.local"
[root#ip-10-52-2-59 ec2-user]# grep KUBE_PROXY_ARGS /etc/kubernetes/proxy
KUBE_PROXY_ARGS=""
[root#ip-10-52-2-59 ec2-user]# egrep -v '^#|^$' /etc/sysconfig/flanneld
FLANNEL_ETCD="http://ip-10-52-2-56.ap-northeast-2.compute.internal:2379"
FLANNEL_ETCD_KEY="/coreos.com/network"
FLANNEL_OPTIONS="eth0"
Running kube-dns with the configuration below:
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
apiVersion: v1
kind: ReplicationController
metadata:
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
version: v20
name: kube-dns-v20
namespace: kube-system
spec:
replicas: 1
selector:
k8s-app: kube-dns
version: v20
template:
metadata:
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
version: v20
spec:
containers:
-
args:
- "--domain=cluster.local"
- "--kube-master-url=http://ip-10-52-2-56.ap-northeast-2.compute.internal:8080"
- "--dns-port=10053"
image: "gcr.io/google_containers/kubedns-amd64:1.9"
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 60
successThreshold: 1
timeoutSeconds: 5
name: kubedns
ports:
-
containerPort: 10053
name: dns-local
protocol: UDP
-
containerPort: 10053
name: dns-tcp-local
protocol: TCP
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
resources:
limits:
cpu: 100m
memory: 500Mi
requests:
cpu: 100m
memory: 500Mi
-
args:
- "--cache-size=1000"
- "--no-resolv"
- "--server=127.0.0.1#10053"
image: "gcr.io/google_containers/kube-dnsmasq-amd64:1.4"
name: dnsmasq
ports:
-
containerPort: 53
name: dns
protocol: UDP
-
containerPort: 53
name: dns-tcp
protocol: TCP
-
args:
- "-cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null"
- "-port=8080"
- "-quiet"
image: "gcr.io/google_containers/exechealthz-amd64:1.2"
name: healthz
ports:
-
containerPort: 8080
protocol: TCP
resources:
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
dnsPolicy: Default
What happens:
Kubernetes DNS works on the node where the kube-dns pod is running; if I scale the kube-dns pods, nothing works anywhere (on any node).
In the image below, one DNS pod is running on node1 and responses come back to the busybox pod on node1, but nslookup from the busybox pod on node2 gets no response.
image1
Now, below, two DNS pods are running, one on node1 and one on node2, and you can see NO response coming to the busybox pods on either node.
image2
Below are some other observations...
The DNS pods mostly get 172.17.x IPs; if I scale to more than 4 pods, the DNS pods on node2 get 10.20.x IPs.
The interesting part: Node2 pods start with 10.20.x IPs,
but Node1 pods start with 172.17.x IPs.
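One way to check whether Docker on each node actually picked up flannel's allocated subnet (which would explain why node1 pods get 172.17.x addresses while node2 pods get 10.20.x ones) is to compare flannel's lease with the docker0 bridge on each node, for example:
cat /run/flannel/subnet.env            # FLANNEL_SUBNET should be a 10.20.x.0/24 range
ip addr show docker0 | grep 'inet '    # the bridge address should fall inside that range, not 172.17.0.0/16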
Some iptables-save output from both nodes:
[root#ip-10-52-2-54 ec2-user]# iptables-save | grep DNAT
-A KUBE-SEP-3M72SO5X7J6X6TX6 -p tcp -m comment --comment "default/prometheus:prometheus" -m tcp -j DNAT --to-destination 172.17.0.8:9090
-A KUBE-SEP-7SLC3EUJVX23N2X4 -p tcp -m comment --comment "default/zookeeper:" -m tcp -j DNAT --to-destination 172.17.0.4:2181
-A KUBE-SEP-D4NTKJJ3YXXGJARZ -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.17.0.10:53
-A KUBE-SEP-EN24FH2N7PLAR6AW -p tcp -m comment --comment "default/kafkacluster:" -m tcp -j DNAT --to-destination 172.17.0.2:9092
-A KUBE-SEP-LCDAFU4UXQHVDQT6 -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-LCDAFU4UXQHVDQT6 --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.52.2.56:6443
-A KUBE-SEP-MX63IHIHS5ZB4347 -p tcp -m comment --comment "default/nodejs4promethus-scraping:" -m tcp -j DNAT --to-destination 172.17.0.6:3000
-A KUBE-SEP-NOI5B75N7ZJAIPJR -p tcp -m comment --comment "default/mongodb-prometheus-exporter:" -m tcp -j DNAT --to-destination 172.17.0.12:9001
-A KUBE-SEP-O6UDQQL3MHGYTSH5 -p tcp -m comment --comment "default/producer:" -m tcp -j DNAT --to-destination 172.17.0.3:8125
-A KUBE-SEP-QO4SWWCV7NMMGPBN -p tcp -m comment --comment "default/kafka-prometheus-jmx:" -m tcp -j DNAT --to-destination 172.17.0.2:7071
-A KUBE-SEP-SVCEI2UVU246H7MW -p tcp -m comment --comment "default/mongodb:" -m tcp -j DNAT --to-destination 172.17.0.12:27017
-A KUBE-SEP-Y4XH6F2KQCY7WQBG -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.17.0.10:53
-A KUBE-SEP-ZXXWX3EF7T3W7UNY -p tcp -m comment --comment "default/grafana:" -m tcp -j DNAT --to-destination 172.17.0.9:3000
[root#ip-10-52-2-54 ec2-user]# iptables-save | grep 53
-A KUBE-SEP-D4NTKJJ3YXXGJARZ -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.17.0.10:53
-A KUBE-SEP-Y4XH6F2KQCY7WQBG -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.17.0.10:53
-A KUBE-SERVICES -d 10.254.0.2/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.254.0.2/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
---------
[root#ip-10-52-2-59 ec2-user]# iptables-save | grep DNAT
-A KUBE-SEP-3M72SO5X7J6X6TX6 -p tcp -m comment --comment "default/prometheus:prometheus" -m tcp -j DNAT --to-destination 172.17.0.8:9090
-A KUBE-SEP-7SLC3EUJVX23N2X4 -p tcp -m comment --comment "default/zookeeper:" -m tcp -j DNAT --to-destination 172.17.0.4:2181
-A KUBE-SEP-D4NTKJJ3YXXGJARZ -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.17.0.10:53
-A KUBE-SEP-EN24FH2N7PLAR6AW -p tcp -m comment --comment "default/kafkacluster:" -m tcp -j DNAT --to-destination 172.17.0.2:9092
-A KUBE-SEP-LCDAFU4UXQHVDQT6 -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-LCDAFU4UXQHVDQT6 --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.52.2.56:6443
-A KUBE-SEP-MX63IHIHS5ZB4347 -p tcp -m comment --comment "default/nodejs4promethus-scraping:" -m tcp -j DNAT --to-destination 172.17.0.6:3000
-A KUBE-SEP-NOI5B75N7ZJAIPJR -p tcp -m comment --comment "default/mongodb-prometheus-exporter:" -m tcp -j DNAT --to-destination 172.17.0.12:9001
-A KUBE-SEP-O6UDQQL3MHGYTSH5 -p tcp -m comment --comment "default/producer:" -m tcp -j DNAT --to-destination 172.17.0.3:8125
-A KUBE-SEP-QO4SWWCV7NMMGPBN -p tcp -m comment --comment "default/kafka-prometheus-jmx:" -m tcp -j DNAT --to-destination 172.17.0.2:7071
-A KUBE-SEP-SVCEI2UVU246H7MW -p tcp -m comment --comment "default/mongodb:" -m tcp -j DNAT --to-destination 172.17.0.12:27017
-A KUBE-SEP-Y4XH6F2KQCY7WQBG -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.17.0.10:53
-A KUBE-SEP-ZXXWX3EF7T3W7UNY -p tcp -m comment --comment "default/grafana:" -m tcp -j DNAT --to-destination 172.17.0.9:3000
[root#ip-10-52-2-59 ec2-user]# iptables-save | grep 53
-A KUBE-SEP-D4NTKJJ3YXXGJARZ -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.17.0.10:53
-A KUBE-SEP-Y4XH6F2KQCY7WQBG -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.17.0.10:53
-A KUBE-SERVICES -d 10.254.0.2/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.254.0.2/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
Restarted the services below on both nodes:
for SERVICES in flanneld docker kube-proxy.service kubelet.service; do
systemctl stop $SERVICES
systemctl start $SERVICES
done
Node1: ifconfig
[root#ip-10-52-2-59 ec2-user]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 0.0.0.0
inet6 fe80::42:2dff:fe01:c0b0 prefixlen 64 scopeid 0x20<link>
ether 02:42:2d:01:c0:b0 txqueuelen 0 (Ethernet)
RX packets 1718522 bytes 154898857 (147.7 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1704874 bytes 2186333188 (2.0 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet 10.52.2.59 netmask 255.255.255.224 broadcast 10.52.2.63
inet6 fe80::91:9aff:fe7e:20a7 prefixlen 64 scopeid 0x20<link>
ether 02:91:9a:7e:20:a7 txqueuelen 1000 (Ethernet)
RX packets 2604083 bytes 2208387383 (2.0 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1974861 bytes 593497458 (566.0 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1 (Local Loopback)
RX packets 80 bytes 7140 (6.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 80 bytes 7140 (6.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth01225a6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::1034:a8ff:fe79:aba3 prefixlen 64 scopeid 0x20<link>
ether 12:34:a8:79:ab:a3 txqueuelen 0 (Ethernet)
RX packets 1017 bytes 100422 (98.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1869 bytes 145519 (142.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth3079eb6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::90c2:62ff:fe84:fb53 prefixlen 64 scopeid 0x20<link>
ether 92:c2:62:84:fb:53 txqueuelen 0 (Ethernet)
RX packets 4891 bytes 714845 (698.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 5127 bytes 829516 (810.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth3be8c1f: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::c8a5:64ff:fe15:be95 prefixlen 64 scopeid 0x20<link>
ether ca:a5:64:15:be:95 txqueuelen 0 (Ethernet)
RX packets 210 bytes 27750 (27.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 307 bytes 35118 (34.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth559a1ab: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::100b:23ff:fe60:3752 prefixlen 64 scopeid 0x20<link>
ether 12:0b:23:60:37:52 txqueuelen 0 (Ethernet)
RX packets 14926 bytes 1931413 (1.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 14375 bytes 19695295 (18.7 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth5c05729: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::cca1:4ff:fe5d:14cd prefixlen 64 scopeid 0x20<link>
ether ce:a1:04:5d:14:cd txqueuelen 0 (Ethernet)
RX packets 455 bytes 797963 (779.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 681 bytes 83904 (81.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth85ba9a9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::74ca:90ff:feae:6f4d prefixlen 64 scopeid 0x20<link>
ether 76:ca:90:ae:6f:4d txqueuelen 0 (Ethernet)
RX packets 19 bytes 1404 (1.3 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 66 bytes 4568 (4.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vetha069d16: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::accd:eeff:fe21:6eda prefixlen 64 scopeid 0x20<link>
ether ae:cd:ee:21:6e:da txqueuelen 0 (Ethernet)
RX packets 3566 bytes 7353788 (7.0 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2560 bytes 278400 (271.8 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vetha58e4af: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::6cd2:16ff:fee2:aa59 prefixlen 64 scopeid 0x20<link>
ether 6e:d2:16:e2:aa:59 txqueuelen 0 (Ethernet)
RX packets 779 bytes 62585 (61.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1014 bytes 109417 (106.8 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethb7bbef5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::5ce6:6fff:fe31:c3e prefixlen 64 scopeid 0x20<link>
ether 5e:e6:6f:31:0c:3e txqueuelen 0 (Ethernet)
RX packets 589 bytes 55654 (54.3 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 573 bytes 74014 (72.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethbda3e0a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::9c0a:f2ff:fea5:23a2 prefixlen 64 scopeid 0x20<link>
ether 9e:0a:f2:a5:23:a2 txqueuelen 0 (Ethernet)
RX packets 490 bytes 47064 (45.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 645 bytes 77464 (75.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethfc65cc3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::b854:dcff:feb4:f4ba prefixlen 64 scopeid 0x20<link>
ether ba:54:dc:b4:f4:ba txqueuelen 0 (Ethernet)
RX packets 503 bytes 508251 (496.3 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 565 bytes 73145 (71.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Node2 - ifconfig
[root#ip-10-52-2-54 ec2-user]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 8951
inet 10.20.48.1 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::42:87ff:fe39:2ef0 prefixlen 64 scopeid 0x20<link>
ether 02:42:87:39:2e:f0 txqueuelen 0 (Ethernet)
RX packets 269123 bytes 22165441 (21.1 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 419870 bytes 149980299 (143.0 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet 10.52.2.54 netmask 255.255.255.224 broadcast 10.52.2.63
inet6 fe80::9a:d8ff:fed3:4cf5 prefixlen 64 scopeid 0x20<link>
ether 02:9a:d8:d3:4c:f5 txqueuelen 1000 (Ethernet)
RX packets 1517512 bytes 938147149 (894.6 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1425156 bytes 1265738472 (1.1 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 8951
inet 10.20.48.0 netmask 255.255.0.0 broadcast 0.0.0.0
ether 06:69:bf:c6:8a:12 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 1 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1 (Local Loopback)
RX packets 106 bytes 8792 (8.5 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 106 bytes 8792 (8.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth9f05785: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 8951
inet6 fe80::d81e:d3ff:fe5e:bade prefixlen 64 scopeid 0x20<link>
ether da:1e:d3:5e:ba:de txqueuelen 0 (Ethernet)
RX packets 31 bytes 2458 (2.4 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 37 bytes 4454 (4.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
I'm a little confused by the two ifconfig outputs.
Check the flanneld process on node1; the flannel.1 interface is missing from node1. Check /var/log/messages and also compare the flannel config file on both nodes (/etc/sysconfig/flanneld).
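For example, on node1, something along these lines:
systemctl status flanneld
journalctl -u flanneld --no-pager | tail -50
ip addr show flannel.1                 # errors out if the interface really is missing
cat /etc/sysconfig/flanneld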
It looks like flannel is not running properly on node1 (its docker0 is still on 172.17.0.0/16 and there is no flannel.1 interface). You should check the logs and configs as already pointed out by Pawan.
Also, you seem to be using an older version of Kubernetes. The current version is 1.5 and I'd recommend using it.
The bare-metal setup guides found on the net tend to get outdated pretty fast, even the official Kubernetes guides.
I'd recommend not using any of these guides anymore and instead using a (semi-)automated deployment solution such as kargo (Ansible based) or kops (AWS only, Go based). If you do not want to use these automated solutions, you could try kubeadm, which is currently in alpha but may already work well enough for you.
I'm seeing a weird issue on Kubernetes and I'm not sure how to debug it. The k8s environment was installed by kube-up for vSphere using the 2016-01-08 kube.vmdk.
The symptom is that DNS for a container in a pod is not working correctly. When I log on to the kube-dns pod to check the settings, everything looks correct. Pinging outside the local network works as it should, but pinging hosts inside my local network does not reach any of them.
For the following, my host network is 10.1.1.x and the gateway / DNS server is 10.1.1.1.
Inside the kube-dns container:
(I can ping outside the network by IP and I can ping the gateway just fine. DNS isn't working since the nameserver is unreachable.)
kube#kubernetes-master:~$ kubectl --namespace=kube-system exec -ti kube-dns-v20-in2me -- /bin/sh
/ # cat /etc/resolv.conf
nameserver 10.1.1.1
options ndots:5
/ # ping google.com
^C
/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=54 time=13.542 ms
64 bytes from 8.8.8.8: seq=1 ttl=54 time=13.862 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 13.542/13.702/13.862 ms
/ # ping 10.1.1.1
PING 10.1.1.1 (10.1.1.1): 56 data bytes
^C
--- 10.1.1.1 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
/ # netstat -r
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
default 10.244.2.1 0.0.0.0 UG 0 0 0 eth0
10.244.2.0 * 255.255.255.0 U 0 0 0 eth0
/ # ping 10.244.2.1
PING 10.244.2.1 (10.244.2.1): 56 data bytes
64 bytes from 10.244.2.1: seq=0 ttl=64 time=0.249 ms
64 bytes from 10.244.2.1: seq=1 ttl=64 time=0.091 ms
^C
--- 10.244.2.1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.091/0.170/0.249 ms
on the master:
kube#kubernetes-master:~$ netstat -r
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
default 10.1.1.1 0.0.0.0 UG 0 0 0 eth0
10.1.1.0 * 255.255.255.0 U 0 0 0 eth0
10.244.0.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
10.244.1.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
10.244.2.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
10.244.3.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
10.246.0.0 * 255.255.255.0 U 0 0 0 cbr0
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
kube#kubernetes-master:~$ ping 10.1.1.1
PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.
64 bytes from 10.1.1.1: icmp_seq=1 ttl=64 time=0.409 ms
64 bytes from 10.1.1.1: icmp_seq=2 ttl=64 time=0.481 ms
^C
--- 10.1.1.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.409/0.445/0.481/0.036 ms
version:
kube#kubernetes-master:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.5", GitCommit:"5a0a696437ad35c133c0c8493f7e9d22b0f9b81b", GitTreeState:"clean", BuildDate:"2016-10-29T01:38:40Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.5", GitCommit:"5a0a696437ad35c133c0c8493f7e9d22b0f9b81b", GitTreeState:"clean", BuildDate:"2016-10-29T01:32:42Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
kubernetes-minion-2 (10.244.2.1):
(Per #der's response adding info from 10.244.2.1)
kube#kubernetes-minion-2:~$ ip addr show cbr0
5: cbr0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc htb state UP group default
link/ether 8a:ef:b5:fc:28:f4 brd ff:ff:ff:ff:ff:ff
inet 10.244.2.1/24 scope global cbr0
valid_lft forever preferred_lft forever
inet6 fe80::38b5:44ff:fe8a:6d79/64 scope link
valid_lft forever preferred_lft forever
kube#kubernetes-minion-2:~$ ping google.com
PING google.com (216.58.192.14) 56(84) bytes of data.
64 bytes from nuq04s29-in-f14.1e100.net (216.58.192.14): icmp_seq=1 ttl=52 time=11.8 ms
64 bytes from nuq04s29-in-f14.1e100.net (216.58.192.14): icmp_seq=2 ttl=52 time=11.6 ms
64 bytes from nuq04s29-in-f14.1e100.net (216.58.192.14): icmp_seq=3 ttl=52 time=10.4 ms
^C
--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 10.477/11.343/11.878/0.624 ms
kube#kubernetes-minion-2:~$ ping 10.1.1.1
PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.
64 bytes from 10.1.1.1: icmp_seq=1 ttl=64 time=0.369 ms
64 bytes from 10.1.1.1: icmp_seq=2 ttl=64 time=0.456 ms
64 bytes from 10.1.1.1: icmp_seq=3 ttl=64 time=0.442 ms
^C
--- 10.1.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.369/0.422/0.456/0.041 ms
kube#kubernetes-minion-2:~$ netstat -r
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
default 10.1.1.1 0.0.0.0 UG 0 0 0 eth0
10.1.1.0 * 255.255.255.0 U 0 0 0 eth0
10.244.0.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
10.244.1.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
10.244.2.0 * 255.255.255.0 U 0 0 0 cbr0
10.244.3.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
kube#kubernetes-minion-2:~$ routel
target gateway source proto scope dev tbl
default 10.1.1.1 eth0
10.1.1.0 24 10.1.1.86 kernel link eth0
10.244.0.0 24 10.1.1.88 eth0
10.244.1.0 24 10.1.1.87 eth0
10.244.2.0 24 10.244.2.1 kernel link cbr0
10.244.3.0 24 10.1.1.85 eth0
172.17.0.0 16 172.17.0.1 kernel linkdocker0
10.1.1.0 broadcast 10.1.1.86 kernel link eth0 local
10.1.1.86 local 10.1.1.86 kernel host eth0 local
10.1.1.255 broadcast 10.1.1.86 kernel link eth0 local
10.244.2.0 broadcast 10.244.2.1 kernel link cbr0 local
10.244.2.1 local 10.244.2.1 kernel host cbr0 local
10.244.2.255 broadcast 10.244.2.1 kernel link cbr0 local
127.0.0.0 broadcast 127.0.0.1 kernel link lo local
127.0.0.0 8 local 127.0.0.1 kernel host lo local
127.0.0.1 local 127.0.0.1 kernel host lo local
127.255.255.255 broadcast 127.0.0.1 kernel link lo local
172.17.0.0 broadcast 172.17.0.1 kernel linkdocker0 local
172.17.0.1 local 172.17.0.1 kernel hostdocker0 local
172.17.255.255 broadcast 172.17.0.1 kernel linkdocker0 local
::1 local kernel lo
fe80:: 64 kernel eth0
fe80:: 64 kernel cbr0
fe80:: 64 kernel veth6129284
default unreachable kernel lo unspec
::1 local none lo local
fe80::250:56ff:fe8e:d580 local none lo local
fe80::38b5:44ff:fe8a:6d79 local none lo local
fe80::88ef:b5ff:fefc:28f4 local none lo local
ff00:: 8 eth0 local
ff00:: 8 cbr0 local
ff00:: 8 veth6129284 local
default unreachable kernel lo unspec
How can I diagnose what is going on here?
thanks!
Turns out this is an issue with the default NAT routing rules on the minions:
$ iptables -t nat -vnxL
...
...
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
...
80 4896 MASQUERADE all -- * * 0.0.0.0/0 !10.0.0.0/8 /* kubelet: SNAT outbound cluster traffic */ ADDRTYPE match dst-type !LOCAL
...
...
This shows that traffic destined for the 10.x.x.x range is skipped by the POSTROUTING MASQUERADE rule, so packets from pods to local 10.1.1.x hosts leave with an unroutable 10.244.x.x source address.
If anyone runs across this, fix it with:
$ iptables -t nat -I POSTROUTING 1 -s 10.244.0.0/16 -d 10.1.1.1/32 -j MASQUERADE
where 10.244.0.0/16 is the container network and 10.1.1.1 is the gateway IP.
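To confirm the rule landed ahead of the kubelet MASQUERADE rule, you can list the chain with line numbers, e.g.:
iptables -t nat -vnL POSTROUTING --line-numbers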
First, figure out what's up with kubernetes-mini. Do on it what you've done with the 2 nodes you've shown us.
All traffic between 10.1.1.0 and 10.244.2.0 goes through it. It, however, may have a bad route for the 10.1.1.0 net.
I recently installed a CentOS virtual machine (VMware Player) on my Windows 7 host.
I can ping my VM from the internal network without any problem.
I can also reach the internal network from my VM without issues.
But my VM can't access the internet; I can't ping google.com, for example, or any other external network.
I tried several solutions and spent more than a week trying to figure out what the issue is.
Configuration:
My VM is bridged and working in DHCP mode:
[root#localhost ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:2F:D7:52
inet addr:172.31.44.128 Bcast:172.31.47.255 Mask:255.255.248.0
inet6 addr: fe80::20c:29ff:fe2f:d752/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:15535 errors:0 dropped:0 overruns:0 frame:0
TX packets:503 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1099726 (1.0 MiB) TX bytes:38953 (38.0 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:36 errors:0 dropped:0 overruns:0 frame:0
TX packets:36 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:4098 (4.0 KiB) TX bytes:4098 (4.0 KiB)
[root#localhost ~]# more /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=localhost.localdomain
[root#localhost ~]# more /etc/sysconfig/network-scripts/ifcfg-eth0
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth0
BOOTPROTO=dhcp
DHCPCLASS=
HWADDR=00:0C:29:2F:D7:52
ONBOOT=yes
[root#localhost ~]# more /etc/resolv.conf
; generated by /sbin/dhclient-script
search dhcp.city.country.company
nameserver 172.31.41.2
nameserver 172.17.25.22
nameserver 172.16.25.10
[root#localhost ~]# netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
172.31.40.0 0.0.0.0 255.255.248.0 U 0 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
0.0.0.0 172.31.40.1 0.0.0.0 UG 0 0 0 eth0
I can ping my gateway, can ping my DNS and proxy also:
[root#localhost ~]# ping 172.31.40.1
PING 172.31.40.1 (172.31.40.1) 56(84) bytes of data.
64 bytes from 172.31.40.1: icmp_seq=1 ttl=255 time=11.9 ms
64 bytes from 172.31.40.1: icmp_seq=2 ttl=255 time=1.18 ms
[root#localhost ~]# ping 172.31.41.2
PING 172.31.41.2 (172.31.41.2) 56(84) bytes of data.
64 bytes from 172.31.41.2: icmp_seq=1 ttl=128 time=1.75 ms
64 bytes from 172.31.41.2: icmp_seq=2 ttl=128 time=0.520 ms
64 bytes from 172.31.41.2: icmp_seq=3 ttl=128 time=0.580 ms
[root#localhost ~]# ping ptx.proxy.corp.company
PING lmarcproxy100.ptx.fr.company (10.7.80.40) 56(84) bytes of data.
64 bytes from lmarcproxy100.ptx.fr.company (10.7.80.40): icmp_seq=1 ttl=246 time=40.2 ms
64 bytes from lmarcproxy100.ptx.fr.company (10.7.80.40): icmp_seq=2 ttl=246 time=40.1 ms
64 bytes from lmarcproxy100.ptx.fr.company (10.7.80.40): icmp_seq=3 ttl=246 time=40.2 ms
64 bytes from lmarcproxy100.ptx.fr.company (10.7.80.40): icmp_seq=4 ttl=246 time=40.2 ms
Network interface is up & running:
[root#localhost ~]# service network status
Configured devices:
lo eth0
Currently active devices:
lo eth0
Firewalls are Stopped:
[root#localhost ~]# service iptables status
Firewall is stopped.
[root#localhost ~]# service ip6tables status
Firewall is stopped.
What else? I can even run yum!
But I can't connect to the internet!
Thanks in advance for your help.
Try to ping your nameserver IP addresses and your gateway address. Disable the search... line in your resolv.conf.
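A quick way to separate routing problems from DNS problems, for example:
ping -c 3 8.8.8.8        # if this works, routing to the internet is fine and the problem is DNS/proxy
nslookup google.com      # if this fails while the ping works, look at resolv.conf and the search line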