I'm seeing a weird issue on Kubernetes and I'm not sure how to debug it. The k8s environment was installed with kube-up for vSphere using the 2016-01-08 kube.vmdk.
The symptom is that DNS for a container in a pod is not working correctly. When I exec into the kube-dns pod to check the settings, everything looks correct. When I ping outside the local network it works as it should, but when I ping inside my local network it cannot reach any of the hosts.
For the following, my host network is 10.1.1.x and the gateway / DNS server is 10.1.1.1.
Inside the kube-dns container:
(I can ping outside the network by IP and I can ping the pod's default gateway just fine. DNS isn't working because the nameserver at 10.1.1.1 is unreachable.)
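A quick way to separate "DNS is misconfigured" from "the nameserver is unreachable" is to query each nameserver directly from inside the pod; a minimal sketch, assuming busybox's nslookup is available in the image:

# if 8.8.8.8 answers but 10.1.1.1 times out, the problem is reachability, not resolv.conf
nslookup google.com 10.1.1.1
nslookup google.com 8.8.8.8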
kube@kubernetes-master:~$ kubectl --namespace=kube-system exec -ti kube-dns-v20-in2me -- /bin/sh
/ # cat /etc/resolv.conf
nameserver 10.1.1.1
options ndots:5
/ # ping google.com
^C
/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=54 time=13.542 ms
64 bytes from 8.8.8.8: seq=1 ttl=54 time=13.862 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 13.542/13.702/13.862 ms
/ # ping 10.1.1.1
PING 10.1.1.1 (10.1.1.1): 56 data bytes
^C
--- 10.1.1.1 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
/ # netstat -r
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
default 10.244.2.1 0.0.0.0 UG 0 0 0 eth0
10.244.2.0 * 255.255.255.0 U 0 0 0 eth0
/ # ping 10.244.2.1
PING 10.244.2.1 (10.244.2.1): 56 data bytes
64 bytes from 10.244.2.1: seq=0 ttl=64 time=0.249 ms
64 bytes from 10.244.2.1: seq=1 ttl=64 time=0.091 ms
^C
--- 10.244.2.1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.091/0.170/0.249 ms
on the master:
kube@kubernetes-master:~$ netstat -r
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
default 10.1.1.1 0.0.0.0 UG 0 0 0 eth0
10.1.1.0 * 255.255.255.0 U 0 0 0 eth0
10.244.0.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
10.244.1.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
10.244.2.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
10.244.3.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
10.246.0.0 * 255.255.255.0 U 0 0 0 cbr0
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
kube@kubernetes-master:~$ ping 10.1.1.1
PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.
64 bytes from 10.1.1.1: icmp_seq=1 ttl=64 time=0.409 ms
64 bytes from 10.1.1.1: icmp_seq=2 ttl=64 time=0.481 ms
^C
--- 10.1.1.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.409/0.445/0.481/0.036 ms
version:
kube@kubernetes-master:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.5", GitCommit:"5a0a696437ad35c133c0c8493f7e9d22b0f9b81b", GitTreeState:"clean", BuildDate:"2016-10-29T01:38:40Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.5", GitCommit:"5a0a696437ad35c133c0c8493f7e9d22b0f9b81b", GitTreeState:"clean", BuildDate:"2016-10-29T01:32:42Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
kubernetes-minion-2 (10.244.2.1):
(Per @der's answer, adding info from 10.244.2.1.)
kube@kubernetes-minion-2:~$ ip addr show cbr0
5: cbr0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc htb state UP group default
link/ether 8a:ef:b5:fc:28:f4 brd ff:ff:ff:ff:ff:ff
inet 10.244.2.1/24 scope global cbr0
valid_lft forever preferred_lft forever
inet6 fe80::38b5:44ff:fe8a:6d79/64 scope link
valid_lft forever preferred_lft forever
kube@kubernetes-minion-2:~$ ping google.com
PING google.com (216.58.192.14) 56(84) bytes of data.
64 bytes from nuq04s29-in-f14.1e100.net (216.58.192.14): icmp_seq=1 ttl=52 time=11.8 ms
64 bytes from nuq04s29-in-f14.1e100.net (216.58.192.14): icmp_seq=2 ttl=52 time=11.6 ms
64 bytes from nuq04s29-in-f14.1e100.net (216.58.192.14): icmp_seq=3 ttl=52 time=10.4 ms
^C
--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 10.477/11.343/11.878/0.624 ms
kube@kubernetes-minion-2:~$ ping 10.1.1.1
PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.
64 bytes from 10.1.1.1: icmp_seq=1 ttl=64 time=0.369 ms
64 bytes from 10.1.1.1: icmp_seq=2 ttl=64 time=0.456 ms
64 bytes from 10.1.1.1: icmp_seq=3 ttl=64 time=0.442 ms
^C
--- 10.1.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.369/0.422/0.456/0.041 ms
kube@kubernetes-minion-2:~$ netstat -r
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
default 10.1.1.1 0.0.0.0 UG 0 0 0 eth0
10.1.1.0 * 255.255.255.0 U 0 0 0 eth0
10.244.0.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
10.244.1.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
10.244.2.0 * 255.255.255.0 U 0 0 0 cbr0
10.244.3.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
kube@kubernetes-minion-2:~$ routel
target gateway source proto scope dev tbl
default 10.1.1.1 eth0
10.1.1.0 24 10.1.1.86 kernel link eth0
10.244.0.0 24 10.1.1.88 eth0
10.244.1.0 24 10.1.1.87 eth0
10.244.2.0 24 10.244.2.1 kernel link cbr0
10.244.3.0 24 10.1.1.85 eth0
172.17.0.0 16 172.17.0.1 kernel link docker0
10.1.1.0 broadcast 10.1.1.86 kernel link eth0 local
10.1.1.86 local 10.1.1.86 kernel host eth0 local
10.1.1.255 broadcast 10.1.1.86 kernel link eth0 local
10.244.2.0 broadcast 10.244.2.1 kernel link cbr0 local
10.244.2.1 local 10.244.2.1 kernel host cbr0 local
10.244.2.255 broadcast 10.244.2.1 kernel link cbr0 local
127.0.0.0 broadcast 127.0.0.1 kernel link lo local
127.0.0.0 8 local 127.0.0.1 kernel host lo local
127.0.0.1 local 127.0.0.1 kernel host lo local
127.255.255.255 broadcast 127.0.0.1 kernel link lo local
172.17.0.0 broadcast 172.17.0.1 kernel link docker0 local
172.17.0.1 local 172.17.0.1 kernel host docker0 local
172.17.255.255 broadcast 172.17.0.1 kernel link docker0 local
::1 local kernel lo
fe80:: 64 kernel eth0
fe80:: 64 kernel cbr0
fe80:: 64 kernel veth6129284
default unreachable kernel lo unspec
::1 local none lo local
fe80::250:56ff:fe8e:d580 local none lo local
fe80::38b5:44ff:fe8a:6d79 local none lo local
fe80::88ef:b5ff:fefc:28f4 local none lo local
ff00:: 8 eth0 local
ff00:: 8 cbr0 local
ff00:: 8 veth6129284 local
default unreachable kernel lo unspec
How can I diagnose what is going on here?
thanks!
It turns out this is an issue with the default NAT rules on the minions:
$ iptables -t nat -vnxL
...
...
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
...
80 4896 MASQUERADE all -- * * 0.0.0.0/0 !10.0.0.0/8 /* kubelet: SNAT outbound cluster traffic */ ADDRTYPE match dst-type !LOCAL
...
...
This shows that traffic destined for the 10.0.0.0/8 network is excluded from the MASQUERADE rule, so packets from the pods to hosts on 10.1.1.x leave with their pod source address and the replies never find their way back.
If anyone runs across this, fix it with:
$ iptables -t nat -I POSTROUTING 1 -s 10.244.0.0/16 -d 10.1.1.1/32 -j MASQUERADE
where 10.244.0.0/16 is the container network and 10.1.1.1 is the gateway IP.
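To confirm the inserted rule is actually matching, and to keep it across reboots, something like the following works (the save location is the Debian/Ubuntu iptables-persistent convention and is an assumption):

# the pkts/bytes counters on rule 1 should increase while pinging 10.1.1.1 from the pod
iptables -t nat -vnL POSTROUTING --line-numbers
# persist the rule so it survives a reboot (adjust the path for your distro)
iptables-save > /etc/iptables/rules.v4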
First, figure out what's up with kubernetes-mini: run the same checks on it that you ran on the two nodes you've shown us. All traffic between 10.1.1.0/24 and 10.244.2.0/24 goes through it, and it may have a bad route for the 10.1.1.0 net.
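Concretely, a short sketch of the checks to repeat on the host that shows up as kubernetes-mini in the routing tables (netstat truncates hostnames, so -rn gives numeric output):

netstat -rn
ip addr show cbr0
ping -c 3 10.1.1.1
traceroute -n 10.1.1.1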
Related
Kubernetes can't access another machine by IP from inside a pod
kubectl exec -it dnsutils -- /bin/bash
root@dnsutils:/# ping 10.116.197.60
PING 10.116.197.60 (10.116.197.60) 56(84) bytes of data.
But it works from the machine:
ping 10.116.197.60
PING 10.116.197.60 (10.116.197.60) 56(84) bytes of data.
64 bytes from 10.116.197.60: icmp_seq=1 ttl=64 time=0.854 ms
64 bytes from 10.116.197.60: icmp_seq=2 ttl=64 time=0.906 ms
...
And it works from a Docker container:
docker exec -it bind /bin/bash
root@0f356bf598c5:/# ping 10.116.197.60
PING 10.116.197.60 (10.116.197.60): 56 data bytes
64 bytes from 10.116.197.60: icmp_seq=0 ttl=63 time=1.172 ms
64 bytes from 10.116.197.60: icmp_seq=1 ttl=63 time=1.007 ms
64 bytes from 10.116.197.60: icmp_seq=2 ttl=63 time=1.260 ms
64 bytes from 10.116.197.60: icmp_seq=3 ttl=63 time=1.307 ms
64 bytes from 10.116.197.60: icmp_seq=4 ttl=63 time=1.118 ms
64 bytes from 10.116.197.60: icmp_seq=5 ttl=63 time=1.023 ms
...
Using traceroute in the pod:
/ # traceroute -n -m 5 -q 4 -w 3 10.116.197.60
traceroute to 10.116.197.60 (10.116.197.60), 5 hops max, 46 byte packets
1 10.233.0.1 0.008 ms 0.005 ms 0.004 ms 0.004 ms
2 * * * *
3 * * * *
4 * * * *
5 * * * *
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
3: eth0@if64: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP
link/ether 82:71:94:c7:fe:90 brd ff:ff:ff:ff:ff:ff
inet 10.233.0.139/24 brd 10.233.0.255 scope global eth0
valid_lft forever preferred_lft forever
It looks like something goes wrong at 10.233.0.1, but I don't know why.
kubernetes version: 1.20
network: flannel
mode: ipvs
After some testing: the pings to the other machine go out without SNAT.
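For reference, a capture like the one below typically comes from tcpdump somewhere along the path; a sketch (the interface name and capture point are assumptions):

tcpdump -ni eth0 icmp and host 10.116.197.60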
09:20:24.997764 IP 10.233.0.156 > 10.116.197.60: ICMP echo request, id 149, seq 187, length 64
09:20:24.997888 IP 10.116.197.60 > 10.233.0.156: ICMP echo reply, id 149, seq 187, length 64
09:20:26.021795 IP 10.233.0.156 > 10.116.197.60: ICMP echo request, id 149, seq 188, length 64
09:20:26.021876 IP 10.116.197.60 > 10.233.0.156: ICMP echo reply, id 149, seq 188, length 64
09:20:27.045738 IP 10.233.0.156 > 10.116.197.60: ICMP echo request, id 149, seq 189, length 64
09:20:27.045825 IP 10.116.197.60 > 10.233.0.156: ICMP echo reply, id 149, seq 189, length 64
The IP 10.233.0.156 is the pod's IP; without SNAT the machine cannot successfully reply to it.
Add an iptables rule to SNAT the pod IPs:
iptables -t nat -A POSTROUTING -s 10.233.0.0/24 -j MASQUERADE
10.233.0.0/24 is the value of --pod-network-cidr. To apply the rule and make it persistent:
yum install iptables-services -y
iptables -F
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -t nat -A POSTROUTING -s 10.233.0.0/24 -j MASQUERADE
service iptables save
systemctl enable iptables.service
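Once the rule is in place, the same kind of capture should show the node's address as the ICMP source instead of 10.233.0.x; a quick check (interface name assumed):

# on the node: the MASQUERADE counters should increase while pinging from the pod
iptables -t nat -vnL POSTROUTING | grep MASQUERADE
# on the target machine: the echo requests should now arrive from the node IP
tcpdump -ni eth0 icmp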
I have installed a Kubernetes cluster (one master and one worker node) on separate stand-alone CentOS 8 servers, as per the instructions at the link below.
https://www.tecmint.com/install-a-kubernetes-cluster-on-centos-8/
The Weave Net CNI plugin was installed as per the above link. Now I can see the new network adapter below on our K8s master and worker-node servers.
weave: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1376
inet 10.32.0.1 netmask 255.240.0.0 broadcast 10.47.255.255
inet6 fe80::a07d:21ff:fef1:4656 prefixlen 64 scopeid 0x20<link>
ether a2:7d:21:f1:46:56 txqueuelen 1000 (Ethernet)
RX packets 141 bytes 13322 (13.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 48 bytes 4896 (4.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
But the problem is that from the host server I am unable to ping or access any of our remote site/location IPs (ping response given below), whereas local IPs ping fine and are accessible.
ping -c 4 120.121.5.48
PING 120.121.5.48 (120.121.5.48) 56(84) bytes of data.
From 10.32.0.1 icmp_seq=1 Destination Host Unreachable
From 10.32.0.1 icmp_seq=2 Destination Host Unreachable
From 10.32.0.1 icmp_seq=3 Destination Host Unreachable
From 10.32.0.1 icmp_seq=4 Destination Host Unreachable
--- 120.121.5.48 ping statistics ---
4 packets transmitted, 0 received, +4 errors, 100% packet loss, time 2999ms
pipe 4
Also, from the host server I tried to connect to our remote LDAP server through telnet, and it shows the error message below.
# telnet 120.121.5.48 389
Trying 120.121.5.48...
telnet: connect to address 120.121.5.48: No route to host
Our K8s master and worker-node servers have 23 network adapters with statically configured IPs. Does any additional configuration need to be done so that the K8s CNI network is reachable via the default routing?
The ip route show and route -n output is as follows.
# ip route show
default via 45.46.47.1 dev ens1f0 proto static metric 100
10.32.0.0/12 dev weave proto kernel scope link src 10.32.0.1
45.46.47.0/24 dev ens1f0 proto kernel scope link src 45.46.47.48 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 45.46.47.1 0.0.0.0 UG 100 0 0 ens1f0
10.32.0.0 0.0.0.0 255.255.255.0 U 10 0 0 ens1f0
10.32.0.0 0.0.0.0 255.240.0.0 U 0 0 0 weave
45.46.47.0 0.0.0.0 255.255.255.0 U 100 0 0 ens1f0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0
I tried to change the weave route to go via the default gateway with the command below. It executed successfully, but the problem remains.
ip route add 10.32.0.0/24 via 45.46.47.1 dev ens1f0 metric 100
If I run ifconfig weave down, everything works fine, but to use the Kubernetes cluster I need the Weave Net network adapter. So please help me add IP route(s) so that my Kubernetes cluster addresses go via the appropriate adapter and I can access both our local and remote location servers.
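A quick way to see which route the kernel actually picks for the remote address (a sketch using the IPs above):

ip route get 120.121.5.48
# if this reports the weave device / src 10.32.0.1 instead of ens1f0, a Weave route is capturing the traffic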
I have changed the CNI plugin from Weave Net to Flannel, and now it is working as expected.
I'm trying to set up a VPN to access my cluster's workloads without exposing public endpoints.
The service is deployed using the OpenVPN Helm chart, on Kubernetes managed by Rancher v2.3.2.
What I changed: replaced the L4 load balancer with simple service discovery, and edited the ConfigMap to allow TCP to go through the load balancer and reach the VPN.
What does / doesn't work:
OpenVPN client can connect successfully
Cannot ping public servers
Cannot ping Kubernetes services or pods
Can ping openvpn cluster IP "10.42.2.11"
My files
vars.yml
---
replicaCount: 1
nodeSelector:
  openvpn: "true"
openvpn:
  OVPN_K8S_POD_NETWORK: "10.42.0.0"
  OVPN_K8S_POD_SUBNET: "255.255.0.0"
  OVPN_K8S_SVC_NETWORK: "10.43.0.0"
  OVPN_K8S_SVC_SUBNET: "255.255.0.0"
persistence:
  storageClass: "local-path"
service:
  externalPort: 444
The connection works, but I'm not able to hit any IP inside my cluster.
The only IP I'm able to reach is the OpenVPN cluster IP.
openvpn.conf:
server 10.240.0.0 255.255.0.0
verb 3
key /etc/openvpn/certs/pki/private/server.key
ca /etc/openvpn/certs/pki/ca.crt
cert /etc/openvpn/certs/pki/issued/server.crt
dh /etc/openvpn/certs/pki/dh.pem
key-direction 0
keepalive 10 60
persist-key
persist-tun
proto tcp
port 443
dev tun0
status /tmp/openvpn-status.log
user nobody
group nogroup
push "route 10.42.2.11 255.255.255.255"
push "route 10.42.0.0 255.255.0.0"
push "route 10.43.0.0 255.255.0.0"
push "dhcp-option DOMAIN-SEARCH openvpn.svc.cluster.local"
push "dhcp-option DOMAIN-SEARCH svc.cluster.local"
push "dhcp-option DOMAIN-SEARCH cluster.local"
client.ovpn
client
nobind
dev tun
remote xxxx xxx tcp
CERTS CERTS
dhcp-option DOMAIN openvpn.svc.cluster.local
dhcp-option DOMAIN svc.cluster.local
dhcp-option DOMAIN cluster.local
dhcp-option DOMAIN online.net
I don't really know how to debug this.
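One concrete thing worth checking from the cluster side is whether the OpenVPN pod is allowed to forward and NAT its clients' traffic; a sketch (the pod name is a placeholder, and it assumes iptables is present in the image):

kubectl exec -it <openvpn-pod> -- cat /proc/sys/net/ipv4/ip_forward
kubectl exec -it <openvpn-pod> -- iptables -t nat -vnL POSTROUTING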
I'm using Windows.
Route table from the client:
Destination Gateway Genmask Flags Metric Ref Use Ifac
0.0.0.0 livebox.home 255.255.255.255 U 0 0 0 eth0
192.168.1.0 0.0.0.0 255.255.255.0 U 256 0 0 eth0
192.168.1.17 0.0.0.0 255.255.255.255 U 256 0 0 eth0
192.168.1.255 0.0.0.0 255.255.255.255 U 256 0 0 eth0
224.0.0.0 0.0.0.0 240.0.0.0 U 256 0 0 eth0
255.255.255.255 0.0.0.0 255.255.255.255 U 256 0 0 eth0
224.0.0.0 0.0.0.0 240.0.0.0 U 256 0 0 eth1
255.255.255.255 0.0.0.0 255.255.255.255 U 256 0 0 eth1
0.0.0.0 10.240.0.5 255.255.255.255 U 0 0 0 eth1
10.42.2.11 10.240.0.5 255.255.255.255 U 0 0 0 eth1
10.42.0.0 10.240.0.5 255.255.0.0 U 0 0 0 eth1
10.43.0.0 10.240.0.5 255.255.0.0 U 0 0 0 eth1
10.240.0.1 10.240.0.5 255.255.255.255 U 0 0 0 eth1
127.0.0.0 0.0.0.0 255.0.0.0 U 256 0 0 lo
127.0.0.1 0.0.0.0 255.255.255.255 U 256 0 0 lo
127.255.255.255 0.0.0.0 255.255.255.255 U 256 0 0 lo
224.0.0.0 0.0.0.0 240.0.0.0 U 256 0 0 lo
255.255.255.255 0.0.0.0 255.255.255.255 U 256 0 0 lo
And finally ifconfig
inet 192.168.1.17 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 2a01:cb00:90c:5300:603c:f8:703e:a876 prefixlen 64 scopeid 0x0<global>
inet6 2a01:cb00:90c:5300:d84b:668b:85f3:3ba2 prefixlen 128 scopeid 0x0<global>
inet6 fe80::603c:f8:703e:a876 prefixlen 64 scopeid 0xfd<compat,link,site,host>
ether 00:d8:61:31:22:32 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.240.0.6 netmask 255.255.255.252 broadcast 10.240.0.7
inet6 fe80::b9cf:39cc:f60a:9db2 prefixlen 64 scopeid 0xfd<compat,link,site,host>
ether 00:ff:42:04:53:4d (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 1500
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0xfe<compat,link,site,host>
loop (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
For anybody looking for a working sample, this goes into your OpenVPN deployment alongside your container definition:
initContainers:
- args:
  - -w
  - net.ipv4.ip_forward=1
  command:
  - sysctl
  image: busybox
  name: openvpn-sidecar
  securityContext:
    privileged: true
I don't know if it is the RIGHT answer, but I got it to work by adding a sidecar to my pods that runs
sysctl -w net.ipv4.ip_forward=1
which solved the issue.
You can also set the ipForwardInitContainer option to "true" in values.yaml.
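If you install via Helm, the same option can be passed on the command line; a sketch, assuming the old stable/openvpn chart name:

helm upgrade --install openvpn stable/openvpn -f vars.yml --set ipForwardInitContainer=true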
On my Ubuntu Server 16.04 machine I have this file: /etc/network/interfaces
# The primary network interface
auto eth0
iface eth0 inet dhcp
#auto eth1
#iface eth1 inet static
# address 10.0.0.41
# netmask 255.255.255.0
# network 10.0.0.0
# broadcast 10.0.0.255
# gateway 10.0.0.1
eth0 is linked to DSL. If I uncomment the eth1 section to enable the second NIC, I can't ping remote servers like yahoo.com:
ping yahoo.com
PING yahoo.com (98.138.253.109) 56(84) bytes of data.
From 10.0.0.41 icmp_seq=1 Destination Host Unreachable
From 10.0.0.41 icmp_seq=2 Destination Host Unreachable
From 10.0.0.41 icmp_seq=3 Destination Host Unreachable
Found a solution: remove the gateway 10.0.0.1 line from the eth1 section, so that only eth0 provides the default route.
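The likely reason is that the second static gateway installs another default route, so Internet traffic gets sent to 10.0.0.1, which cannot reach the outside world (matching the "From 10.0.0.41 Destination Host Unreachable" output above). After removing it, you can confirm only the DSL default route remains:

ip route show default
# expected: a single "default via <dsl-gateway> dev eth0" line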
I've built a CentOS 7.3 VM using VirtualBox. This VM has the following interface:
[root@localhost ~]# ip addr show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:fa:df:f8 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
valid_lft 86063sec preferred_lft 86063sec
inet6 fe80::68e5:f976:c846:7881/64 scope link
valid_lft forever preferred_lft forever
When I try to ping this, I get:
[root@localhost ~]# ping -c 3 10.0.2.15
PING 10.0.2.15 (10.0.2.15) 56(84) bytes of data.
64 bytes from 10.0.2.15: icmp_seq=1 ttl=64 time=0.027 ms
64 bytes from 10.0.2.15: icmp_seq=2 ttl=64 time=0.034 ms
64 bytes from 10.0.2.15: icmp_seq=3 ttl=64 time=0.034 ms
--- 10.0.2.15 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.027/0.031/0.034/0.007 ms
However the ping command fails when I specify the interface name in my ping command:
[root@localhost ~]# ping -c 3 -I enp0s3 10.0.2.15
PING 10.0.2.15 (10.0.2.15) from 10.0.2.15 enp0s3: 56(84) bytes of data.
--- 10.0.2.15 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2001ms
Another thing I noticed is that this problem appears to be specific to CentOS 7.3. The same command works fine on a CentOS 7.2 VM:
[root@localhost ~]# ip addr show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:12:a3:c1 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
valid_lft 86226sec preferred_lft 86226sec
inet6 fe80::a00:27ff:fe12:a3c1/64 scope link
valid_lft forever preferred_lft forever
[root@localhost ~]# ping -c 3 10.0.2.15
PING 10.0.2.15 (10.0.2.15) 56(84) bytes of data.
64 bytes from 10.0.2.15: icmp_seq=1 ttl=64 time=0.021 ms
64 bytes from 10.0.2.15: icmp_seq=2 ttl=64 time=0.031 ms
64 bytes from 10.0.2.15: icmp_seq=3 ttl=64 time=0.032 ms
--- 10.0.2.15 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.021/0.028/0.032/0.005 ms
[root@localhost ~]# ping -c 3 -I enp0s3 10.0.2.15
PING 10.0.2.15 (10.0.2.15) from 10.0.2.15 enp0s3: 56(84) bytes of data.
64 bytes from 10.0.2.15: icmp_seq=1 ttl=64 time=0.026 ms
64 bytes from 10.0.2.15: icmp_seq=2 ttl=64 time=0.031 ms
64 bytes from 10.0.2.15: icmp_seq=3 ttl=64 time=0.030 ms
--- 10.0.2.15 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.026/0.029/0.031/0.002 ms
Does anyone have any ideas why this works in CentOS 7.2 but not CentOS 7.3?
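Not an answer, but one way to narrow it down is to watch where (if anywhere) the echo requests show up while repeating the failing command; pings to your own address are normally handled over the loopback path, so it is worth capturing on both interfaces (run in separate terminals):

tcpdump -ni lo icmp
tcpdump -ni enp0s3 icmp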