I recently installed a CentOS virtual machine (VMware Player) on my Windows 7 host.
I can ping my VM from the internal network without any problem, and I can also reach the internal network from my VM without issues.
But my VM can't access the internet; I can't ping Google, for example, or any other external host.
I have tried several solutions and spent more than a week trying to figure out the issue.
Configuration:
My VM is bridged and working in DHCP mode:
[root@localhost ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:2F:D7:52
inet addr:172.31.44.128 Bcast:172.31.47.255 Mask:255.255.248.0
inet6 addr: fe80::20c:29ff:fe2f:d752/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:15535 errors:0 dropped:0 overruns:0 frame:0
TX packets:503 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1099726 (1.0 MiB) TX bytes:38953 (38.0 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:36 errors:0 dropped:0 overruns:0 frame:0
TX packets:36 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:4098 (4.0 KiB) TX bytes:4098 (4.0 KiB)
[root@localhost ~]# more /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=localhost.localdomain
[root@localhost ~]# more /etc/sysconfig/network-scripts/ifcfg-eth0
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth0
BOOTPROTO=dhcp
DHCPCLASS=
HWADDR=00:0C:29:2F:D7:52
ONBOOT=yes
[root@localhost ~]# more /etc/resolv.conf
; generated by /sbin/dhclient-script
search dhcp.city.country.company
nameserver 172.31.41.2
nameserver 172.17.25.22
nameserver 172.16.25.10
[root@localhost ~]# netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
172.31.40.0 0.0.0.0 255.255.248.0 U 0 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
0.0.0.0 172.31.40.1 0.0.0.0 UG 0 0 0 eth0
I can ping my gateway, and I can also ping my DNS server and the proxy:
[root@localhost ~]# ping 172.31.40.1
PING 172.31.40.1 (172.31.40.1) 56(84) bytes of data.
64 bytes from 172.31.40.1: icmp_seq=1 ttl=255 time=11.9 ms
64 bytes from 172.31.40.1: icmp_seq=2 ttl=255 time=1.18 ms
[root@localhost ~]# ping 172.31.41.2
PING 172.31.41.2 (172.31.41.2) 56(84) bytes of data.
64 bytes from 172.31.41.2: icmp_seq=1 ttl=128 time=1.75 ms
64 bytes from 172.31.41.2: icmp_seq=2 ttl=128 time=0.520 ms
64 bytes from 172.31.41.2: icmp_seq=3 ttl=128 time=0.580 ms
[root@localhost ~]# ping ptx.proxy.corp.company
PING lmarcproxy100.ptx.fr.company (10.7.80.40) 56(84) bytes of data.
64 bytes from lmarcproxy100.ptx.fr.company (10.7.80.40): icmp_seq=1 ttl=246 time=40.2 ms
64 bytes from lmarcproxy100.ptx.fr.company (10.7.80.40): icmp_seq=2 ttl=246 time=40.1 ms
64 bytes from lmarcproxy100.ptx.fr.company (10.7.80.40): icmp_seq=3 ttl=246 time=40.2 ms
64 bytes from lmarcproxy100.ptx.fr.company (10.7.80.40): icmp_seq=4 ttl=246 time=40.2 ms
The network interface is up and running:
[root@localhost ~]# service network status
Configured devices:
lo eth0
Currently active devices:
lo eth0
The firewalls are stopped:
[root@localhost ~]# service iptables status
Firewall is stopped.
[root@localhost ~]# service ip6tables status
Firewall is stopped.
What else? yum works as well!
But I still can't connect to the internet.
Thanks in advance for your help.
Try pinging your nameserver IP addresses and your gateway address. Also try disabling the search ... line in your resolv.conf.
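A quick way to separate a routing problem from a DNS problem is to test an external address by IP and to query one of the listed nameservers directly. A minimal sketch (8.8.8.8 is just a well-known public IP used for illustration; dig comes from the bind-utils package):
# If this succeeds while "ping google.com" fails, routing works and DNS is the culprit
ping -c 3 8.8.8.8
# Ask the first nameserver from resolv.conf directly
dig @172.31.41.2 google.com +short
# Temporarily comment out the search line and retest name resolution
sed -i 's/^search /#search /' /etc/resolv.conf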
ss -tnulp|grep 8443
tcp LISTEN 0 128 172.16.1.4:8443 *:* users:(("kube-apiserver",pid=29513,fd=5))
I have my API server running and I want to expose it to the rest of the network. This is the network config on my cluster:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.16.1.4 netmask 255.255.255.0 broadcast 172.16.1.255
inet6 fe80::f816:3eff:feb5:93a3 prefixlen 64 scopeid 0x20<link>
ether fa:16:3e:b5:93:a3 txqueuelen 1000 (Ethernet)
RX packets 218935 bytes 2518654013 (2.3 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 160281 bytes 33994810 (32.4 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 139.54.130.39 netmask 255.255.254.0 broadcast 139.54.131.255
inet6 3ffe:302:11:2:f816:3eff:fe46:ab28 prefixlen 64 scopeid 0x0<global>
inet6 fd12:1f4b:e0bf:10:f816:3eff:fe46:ab28 prefixlen 64 scopeid 0x0<global>
inet6 fd12:1f4b:e0bf:1:f816:3eff:fe46:ab28 prefixlen 64 scopeid 0x0<global>
inet6 fe80::f816:3eff:fe46:ab28 prefixlen 64 scopeid 0x20<link>
ether fa:16:3e:46:ab:28 txqueuelen 1000 (Ethernet)
RX packets 3227129 bytes 845879874 (806.6 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1072031 bytes 132806957 (126.6 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
The VM has an external IP, 139.54.130.39.
Any leads on how to do that?
Did you try using this option?
- --apiserver-advertise-address=139.54.130.39
kubectl over this network will then be able to handshake with 139.54.130.39.
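For a kubeadm-based install (an assumption about how the cluster was created), that flag is passed at init time; a minimal sketch:
# Advertise the API server on the externally reachable address
kubeadm init --apiserver-advertise-address=139.54.130.39
# Afterwards, confirm which address/port the API server is listening on
ss -tnlp | grep kube-apiserver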
You can apply this, depending on your installation:
In case you installed the apiserver as a pod,
you can simply change the apiserver-advertise-address parameter in
/etc/kubernetes/manifests/kube-apiserver.yaml
or
check/list the kube-system pods to get the actual apiserver pod name and edit it (carefully):
kubectl get pod -n kube-system
kubectl edit pod -n kube-system kube-apiserver
In case you installed the apiserver as a service, edit the systemd unit file,
ex:
vim /etc/systemd/system/kube-apiserver.service
Edit
ExecStart=/usr/local/bin/kube-apiserver
--bind-address=0.0.0.0
--advertise-address=139.54.130.39
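Whichever case applies, it is worth confirming that the change took effect; a rough sketch of the follow-up steps (the port is taken from the ss output above):
# Static-pod case: the kubelet restarts kube-apiserver automatically after the manifest changes
kubectl get pod -n kube-system -w
# Systemd-service case: reload units and restart the service
systemctl daemon-reload
systemctl restart kube-apiserver
# Then check that the API server answers on the advertised address
curl -k https://139.54.130.39:8443/version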
I've built a CentOS 7.3 VM using VirtualBox. This VM has the following interface:
[root@localhost ~]# ip addr show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:fa:df:f8 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
valid_lft 86063sec preferred_lft 86063sec
inet6 fe80::68e5:f976:c846:7881/64 scope link
valid_lft forever preferred_lft forever
When I ping this address, I get:
[root@localhost ~]# ping -c 3 10.0.2.15
PING 10.0.2.15 (10.0.2.15) 56(84) bytes of data.
64 bytes from 10.0.2.15: icmp_seq=1 ttl=64 time=0.027 ms
64 bytes from 10.0.2.15: icmp_seq=2 ttl=64 time=0.034 ms
64 bytes from 10.0.2.15: icmp_seq=3 ttl=64 time=0.034 ms
--- 10.0.2.15 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.027/0.031/0.034/0.007 ms
However, the ping fails when I specify the interface name in the ping command:
[root@localhost ~]# ping -c 3 -I enp0s3 10.0.2.15
PING 10.0.2.15 (10.0.2.15) from 10.0.2.15 enp0s3: 56(84) bytes of data.
--- 10.0.2.15 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2001ms
Another thing I noticed is that this problem appears to be specific to CentOS 7.3. The same command works fine on a CentOS 7.2 VM:
[root@localhost ~]# ip addr show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:12:a3:c1 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
valid_lft 86226sec preferred_lft 86226sec
inet6 fe80::a00:27ff:fe12:a3c1/64 scope link
valid_lft forever preferred_lft forever
[root@localhost ~]# ping -c 3 10.0.2.15
PING 10.0.2.15 (10.0.2.15) 56(84) bytes of data.
64 bytes from 10.0.2.15: icmp_seq=1 ttl=64 time=0.021 ms
64 bytes from 10.0.2.15: icmp_seq=2 ttl=64 time=0.031 ms
64 bytes from 10.0.2.15: icmp_seq=3 ttl=64 time=0.032 ms
--- 10.0.2.15 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.021/0.028/0.032/0.005 ms
[root@localhost ~]# ping -c 3 -I enp0s3 10.0.2.15
PING 10.0.2.15 (10.0.2.15) from 10.0.2.15 enp0s3: 56(84) bytes of data.
64 bytes from 10.0.2.15: icmp_seq=1 ttl=64 time=0.026 ms
64 bytes from 10.0.2.15: icmp_seq=2 ttl=64 time=0.031 ms
64 bytes from 10.0.2.15: icmp_seq=3 ttl=64 time=0.030 ms
--- 10.0.2.15 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.026/0.029/0.031/0.002 ms
Does anyone have any ideas why this works in CentOS 7.2 but not CentOS 7.3?
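One way to narrow this down (a diagnostic sketch, not a fix) is to compare the iputils builds on the two systems and watch where the ICMP packets actually go while the failing ping runs:
# Compare the ping (iputils) package version between the 7.2 and 7.3 VMs
rpm -q iputils
# Pinging your own address normally goes via loopback; capture on all interfaces
# in a second terminal while running: ping -c 3 -I enp0s3 10.0.2.15
tcpdump -n -i any icmp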
I am new to Kubernetes, so some of my questions may be basic.
My setup: 2 VMs (running Ubuntu 16.04.2)
Kubernetes version: 1.7.1 on both the Master node (kube4local) and the Slave node (kube5local)
My steps:
1. On both the Master and Slave nodes, I installed the required Kubernetes packages (kubelet, kubeadm, kubectl, kubernetes-cni) and the Docker (docker.io) package.
On the Master node:
vagrant@kube4local:~$ sudo kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 1.12
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [kube4local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.15]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 1051.552012 seconds
[token] Using token: 3c68b6.8c3f8d5a0a29a3ac
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 3c68b6.8c3f8d5a0a29a3ac 10.0.2.15:6443
vagrant@kube4local:~$ mkdir -p $HOME/.kube
vagrant@kube4local:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
vagrant@kube4local:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
vagrant@kube4local:~$ sudo kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount "weave-net" created
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
daemonset "weave-net" created
On the Slave Node:
Note: A basic ping test works, and ssh and scp between the master node running in VM1 and the slave node running in VM2 work fine as well.
I ran the join command.
Output of the join command on the slave node:
vagrant@kube5local:~$ sudo kubeadm join --token 3c68b6.8c3f8d5a0a29a3ac 10.0.2.15:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 1.12
[preflight] WARNING: hostname "" could not be reached
[preflight] WARNING: hostname "" lookup : no such host
[preflight] Some fatal errors occurred:
hostname "" a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`
Why do I get this error? My /etc/hosts looks correct:
[preflight] WARNING: hostname "" could not be reached
[preflight] WARNING: hostname "" lookup : no such host
Output of status commands on the Master node:
vagrant@kube4local:~$ sudo kubectl cluster-info
Kubernetes master is running at https://10.0.2.15:6443
vagrant@kube4local:~$ sudo kubectl get nodes
NAME STATUS AGE VERSION
kube4local Ready 26m v1.7.1
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Output of ifconfig on the Master node (kube4local):
vagrant@kube4local:~$ ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:3a:c4:00:50
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
enp0s3 Link encap:Ethernet HWaddr 08:00:27:19:2c:a4
inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe19:2ca4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:260314 errors:0 dropped:0 overruns:0 frame:0
TX packets:58921 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:334293914 (334.2 MB) TX bytes:3918136 (3.9 MB)
enp0s8 Link encap:Ethernet HWaddr 08:00:27:b8:ef:b6
inet addr:192.168.56.104 Bcast:192.168.56.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:feb8:efb6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:247 errors:0 dropped:0 overruns:0 frame:0
TX packets:154 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:36412 (36.4 KB) TX bytes:25999 (25.9 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:19922 errors:0 dropped:0 overruns:0 frame:0
TX packets:19922 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:1996565 (1.9 MB) TX bytes:1996565 (1.9 MB)
Output of /etc/hosts on the Master node (kube4local):
vagrant@kube4local:~$ cat /etc/hosts
192.168.56.104 kube4local kube4local
192.168.56.105 kube5local kube5local
127.0.1.1 vagrant.vm vagrant
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Output of ifconfig on the Slave node (kube5local):
vagrant@kube5local:~$ ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:bb:37:ab:35
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
enp0s3 Link encap:Ethernet HWaddr 08:00:27:19:2c:a4
inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe19:2ca4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:163514 errors:0 dropped:0 overruns:0 frame:0
TX packets:39792 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:207478954 (207.4 MB) TX bytes:2660902 (2.6 MB)
enp0s8 Link encap:Ethernet HWaddr 08:00:27:6a:f0:51
inet addr:192.168.56.105 Bcast:192.168.56.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe6a:f051/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:195 errors:0 dropped:0 overruns:0 frame:0
TX packets:151 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:30463 (30.4 KB) TX bytes:26737 (26.7 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Output of /etc/hosts on the Slave node (kube5local):
vagrant@kube5local:~$ cat /etc/hosts
192.168.56.104 kube4local kube4local
192.168.56.105 kube5local kube5local
127.0.1.1 vagrant.vm vagrant
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Nat, this is a bug in version v1.7.1. You can use version v1.7.0 or skip the pre-flight checks:
kubeadm join --skip-preflight-checks
You can refer to this thread for more details:
kubernets v1.7.1 kubeadm join hostname "" could not be reached error
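If you would rather pin to v1.7.0 than skip the checks, a sketch for Ubuntu's apt packages (the exact -00 revision suffix is an assumption; check what the repository offers first):
# See which kubeadm versions the Kubernetes apt repository provides
apt-cache madison kubeadm
# Downgrade kubeadm to 1.7.0
sudo apt-get install -y kubeadm=1.7.0-00
# Or simply bypass the buggy pre-flight check, as above
sudo kubeadm join --token 3c68b6.8c3f8d5a0a29a3ac 10.0.2.15:6443 --skip-preflight-checks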
I'm seeing a weird issue on Kubernetes and I'm not sure how to debug it. The k8s environment was installed by kube-up for vSphere using the 2016-01-08 kube.vmdk.
The symptom is that DNS for a container in a pod is not working correctly. When I log on to the kube-dns pod to check the settings, everything looks correct. When I ping outside the local network it works as it should, but when I ping inside my local network I cannot reach any of the hosts.
For the following, my host network is 10.1.1.x and the gateway / DNS server is 10.1.1.1.
Inside the kube-dns container (I can ping outside the network by IP and I can ping the gateway just fine; DNS isn't working since the nameserver is unreachable):
kube@kubernetes-master:~$ kubectl --namespace=kube-system exec -ti kube-dns-v20-in2me -- /bin/sh
/ # cat /etc/resolv.conf
nameserver 10.1.1.1
options ndots:5
/ # ping google.com
^C
/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=54 time=13.542 ms
64 bytes from 8.8.8.8: seq=1 ttl=54 time=13.862 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 13.542/13.702/13.862 ms
/ # ping 10.1.1.1
PING 10.1.1.1 (10.1.1.1): 56 data bytes
^C
--- 10.1.1.1 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
/ # netstat -r
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
default 10.244.2.1 0.0.0.0 UG 0 0 0 eth0
10.244.2.0 * 255.255.255.0 U 0 0 0 eth0
/ # ping 10.244.2.1
PING 10.244.2.1 (10.244.2.1): 56 data bytes
64 bytes from 10.244.2.1: seq=0 ttl=64 time=0.249 ms
64 bytes from 10.244.2.1: seq=1 ttl=64 time=0.091 ms
^C
--- 10.244.2.1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.091/0.170/0.249 ms
On the master:
kube@kubernetes-master:~$ netstat -r
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
default 10.1.1.1 0.0.0.0 UG 0 0 0 eth0
10.1.1.0 * 255.255.255.0 U 0 0 0 eth0
10.244.0.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
10.244.1.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
10.244.2.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
10.244.3.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
10.246.0.0 * 255.255.255.0 U 0 0 0 cbr0
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
kube@kubernetes-master:~$ ping 10.1.1.1
PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.
64 bytes from 10.1.1.1: icmp_seq=1 ttl=64 time=0.409 ms
64 bytes from 10.1.1.1: icmp_seq=2 ttl=64 time=0.481 ms
^C
--- 10.1.1.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.409/0.445/0.481/0.036 ms
Version:
kube@kubernetes-master:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.5", GitCommit:"5a0a696437ad35c133c0c8493f7e9d22b0f9b81b", GitTreeState:"clean", BuildDate:"2016-10-29T01:38:40Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.5", GitCommit:"5a0a696437ad35c133c0c8493f7e9d22b0f9b81b", GitTreeState:"clean", BuildDate:"2016-10-29T01:32:42Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
kubernetes-minion-2 (10.244.2.1):
(Per @der's response, adding info from 10.244.2.1)
kube@kubernetes-minion-2:~$ ip addr show cbr0
5: cbr0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc htb state UP group default
link/ether 8a:ef:b5:fc:28:f4 brd ff:ff:ff:ff:ff:ff
inet 10.244.2.1/24 scope global cbr0
valid_lft forever preferred_lft forever
inet6 fe80::38b5:44ff:fe8a:6d79/64 scope link
valid_lft forever preferred_lft forever
kube@kubernetes-minion-2:~$ ping google.com
PING google.com (216.58.192.14) 56(84) bytes of data.
64 bytes from nuq04s29-in-f14.1e100.net (216.58.192.14): icmp_seq=1 ttl=52 time=11.8 ms
64 bytes from nuq04s29-in-f14.1e100.net (216.58.192.14): icmp_seq=2 ttl=52 time=11.6 ms
64 bytes from nuq04s29-in-f14.1e100.net (216.58.192.14): icmp_seq=3 ttl=52 time=10.4 ms
^C
--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 10.477/11.343/11.878/0.624 ms
kube@kubernetes-minion-2:~$ ping 10.1.1.1
PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.
64 bytes from 10.1.1.1: icmp_seq=1 ttl=64 time=0.369 ms
64 bytes from 10.1.1.1: icmp_seq=2 ttl=64 time=0.456 ms
64 bytes from 10.1.1.1: icmp_seq=3 ttl=64 time=0.442 ms
^C
--- 10.1.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.369/0.422/0.456/0.041 ms
kube@kubernetes-minion-2:~$ netstat -r
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
default 10.1.1.1 0.0.0.0 UG 0 0 0 eth0
10.1.1.0 * 255.255.255.0 U 0 0 0 eth0
10.244.0.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
10.244.1.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
10.244.2.0 * 255.255.255.0 U 0 0 0 cbr0
10.244.3.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
kube@kubernetes-minion-2:~$ routel
target gateway source proto scope dev tbl
default 10.1.1.1 eth0
10.1.1.0 24 10.1.1.86 kernel link eth0
10.244.0.0 24 10.1.1.88 eth0
10.244.1.0 24 10.1.1.87 eth0
10.244.2.0 24 10.244.2.1 kernel link cbr0
10.244.3.0 24 10.1.1.85 eth0
172.17.0.0 16 172.17.0.1 kernel linkdocker0
10.1.1.0 broadcast 10.1.1.86 kernel link eth0 local
10.1.1.86 local 10.1.1.86 kernel host eth0 local
10.1.1.255 broadcast 10.1.1.86 kernel link eth0 local
10.244.2.0 broadcast 10.244.2.1 kernel link cbr0 local
10.244.2.1 local 10.244.2.1 kernel host cbr0 local
10.244.2.255 broadcast 10.244.2.1 kernel link cbr0 local
127.0.0.0 broadcast 127.0.0.1 kernel link lo local
127.0.0.0 8 local 127.0.0.1 kernel host lo local
127.0.0.1 local 127.0.0.1 kernel host lo local
127.255.255.255 broadcast 127.0.0.1 kernel link lo local
172.17.0.0 broadcast 172.17.0.1 kernel linkdocker0 local
172.17.0.1 local 172.17.0.1 kernel hostdocker0 local
172.17.255.255 broadcast 172.17.0.1 kernel linkdocker0 local
::1 local kernel lo
fe80:: 64 kernel eth0
fe80:: 64 kernel cbr0
fe80:: 64 kernel veth6129284
default unreachable kernel lo unspec
::1 local none lo local
fe80::250:56ff:fe8e:d580 local none lo local
fe80::38b5:44ff:fe8a:6d79 local none lo local
fe80::88ef:b5ff:fefc:28f4 local none lo local
ff00:: 8 eth0 local
ff00:: 8 cbr0 local
ff00:: 8 veth6129284 local
default unreachable kernel lo unspec
How can I diagnose what is going on here?
Thanks!
Turns out this is an issue with the default NAT routing rules on the minions:
$ iptables -t nat -vnxL
...
...
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
...
80 4896 MASQUERADE all -- * * 0.0.0.0/0 !10.0.0.0/8 /* kubelet: SNAT outbound cluster traffic */ ADDRTYPE match dst-type !LOCAL
...
...
This shows that traffic destined for the 10.x.x.x range is excluded from the MASQUERADE postrouting rule, so packets from the pod network reach 10.1.1.1 with their pod source addresses and the replies never find their way back.
If anyone runs across this, fix it with:
$ iptables -t nat -I POSTROUTING 1 -s 10.244.0.0/16 -d 10.1.1.1/32 -j MASQUERADE
where 10.244.0.0/16 is the container network and 10.1.1.1 is the gateway IP.
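A rule inserted this way does not survive a reboot; one way to verify it and keep it around (the iptables-persistent path is an assumption about the host distribution):
# Verify the new rule now sits in front of the kubelet SNAT rule
iptables -t nat -vnL POSTROUTING --line-numbers
# One persistence option on Debian-style hosts with iptables-persistent installed
iptables-save > /etc/iptables/rules.v4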
First, figure out what's up with kubernetes-mini: run on it the same checks you've run on the two nodes you've shown us.
All traffic between the 10.1.1.0 and 10.244.2.0 networks goes through it; it may, however, have a bad route for the 10.1.1.0 net.
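Concretely, that means repeating the checks already shown above, but on kubernetes-mini, for example:
# On kubernetes-mini: does it have a correct route back to the host network?
netstat -r
ip route show
# Can it reach the gateway / DNS server directly?
ping -c 3 10.1.1.1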
Problem:
/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
ping: sendto: Network unreachable
Example container ifconfig:
eth0 Link encap:Ethernet HWaddr F2:3D:87:30:39:B8
inet addr:10.2.8.64 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::f03d:87ff:fe30:39b8%32750/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:22 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:4088 (3.9 KiB) TX bytes:648 (648.0 B)
eth1 Link encap:Ethernet HWaddr 6E:1C:69:85:21:96
inet addr:172.16.28.63 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::6c1c:69ff:fe85:2196%32750/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:18 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1418 (1.3 KiB) TX bytes:648 (648.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1%32750/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Routing inside container:
/ # ip route show
10.2.0.0/16 via 10.2.8.1 dev eth0
10.2.8.0/24 dev eth0 src 10.2.8.73
172.16.28.0/24 via 172.16.28.1 dev eth1 src 172.16.28.72
172.16.28.1 dev eth1 src 172.16.28.72
Host iptables: http://pastebin.com/raw/UcLQQa4J
Host ifconfig: http://pastebin.com/raw/uxsM1bx6
Logs from flannel:
main.go:275] Installing signal handlers
main.go:188] Using 104.238.xxx.xxx as external interface
main.go:189] Using 104.238.xxx.xxx as external endpoint
etcd.go:129] Found lease (10.2.8.0/24) for current IP (104.238.xxx.xxx), reusing
etcd.go:84] Subnet lease acquired: 10.2.8.0/24
ipmasq.go:50] Adding iptables rule: FLANNEL -d 10.2.0.0/16 -j ACCEPT
ipmasq.go:50] Adding iptables rule: FLANNEL ! -d 224.0.0.0/4 -j MASQUERADE
ipmasq.go:50] Adding iptables rule: POSTROUTING -s 10.2.0.0/16 -j FLANNEL
ipmasq.go:50] Adding iptables rule: POSTROUTING ! -s 10.2.0.0/16 -d 10.2.0.0/16 -j MASQUERADE
vxlan.go:153] Watching for L3 misses
vxlan.go:159] Watching for new subnet leases
vxlan.go:273] Handling initial subnet events
device.go:159] calling GetL2List() dev.link.Index: 3
vxlan.go:280] fdb already populated with: 104.238.xxx.xxx 82:83:be:17:3e:d6
vxlan.go:280] fdb already populated with: 104.238.xxx.xxx 82:dd:90:b2:42:87
vxlan.go:280] fdb already populated with: 104.238.xxx.xxx de:e8:be:28:cf:7a
systemd[1]: Started Network fabric for containers.
It is possible if you set a ConfigMap with upstreamNameservers.
Example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["8.8.8.8", "8.8.8.4"]
And in your Deployment definition add:
dnsPolicy: "ClusterFirst"
More info here:
https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers
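A sketch of applying and checking the ConfigMap (the file name is arbitrary):
# Save the ConfigMap above as kube-dns-upstream.yaml, then:
kubectl apply -f kube-dns-upstream.yaml
# Confirm kube-dns picked it up
kubectl -n kube-system get configmap kube-dns -o yaml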
It is not possible to make it work because it is not yet implemented... I guess I am switching to Docker...
Edit: ...or not. I switched from flannel to Calico, and it works OK.
rkt #862
k8s #2249
This GitHub issue on the Flannel project may provide a solution - essentially, try disabling IP masquerading (--ip-masq=false) on your Docker daemon, and enabling it (--ip-masq) on your Flannel daemon.
This solution worked for me when I was unable to ping internet IPs (e.g. 8.8.8.8) from inside a container in my Kubernetes cluster.
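Roughly, that means starting the two daemons with flags along these lines (where exactly they go depends on your init scripts or systemd units, so treat this as illustrative):
# Docker daemon: stop masquerading container traffic itself
dockerd --ip-masq=false
# Flannel daemon: let flannel handle the masquerading instead
flanneld --ip-masq=true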
Check the kube-flannel.yml file and also the command used to create the cluster, i.e. kubeadm init --pod-network-cidr=10.244.0.0/16. By default kube-flannel.yml uses the 10.244.0.0/16 network, so if you want to change the pod network CIDR, change it in that file as well. For example, see the sketch below.
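If you wanted a different pod network, the CIDR would have to match in both places (the value below is purely illustrative):
# Cluster created with a custom pod network
kubeadm init --pod-network-cidr=10.245.0.0/16
# ...and the matching value inside kube-flannel.yml (the net-conf.json section):
#   "Network": "10.245.0.0/16"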