OpenVZ Container is dropping packets - openvz

I have an OVH server using network bridge mode. Nothing seems to work within the container, and I cannot SSH into or ping it.
Here is route/ifconfig from the host node:
ifconfig -a
eth0 Link encap:Ethernet HWaddr 0C:C4:7A:4C:C0:06
inet6 addr: fe80::ec4:7aff:fe4c:c006/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:40726 errors:0 dropped:0 overruns:0 frame:0
TX packets:11298 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2943207 (2.8 MiB) TX bytes:1915174 (1.8 MiB)
Memory:fb920000-fb93ffff
eth1 Link encap:Ethernet HWaddr 0C:C4:7A:4C:C0:07
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
Memory:fb900000-fb91ffff
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:104 errors:0 dropped:0 overruns:0 frame:0
TX packets:104 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:9092 (8.8 KiB) TX bytes:9092 (8.8 KiB)
venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet6 addr: fe80::1/128 Scope:Link
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
RX packets:395 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:84297 (82.3 KiB) TX bytes:0 (0.0 b)
veth101.0 Link encap:Ethernet HWaddr 02:00:00:0D:70:EA
inet6 addr: fe80::ff:fe0d:70ea/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:1178 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
vmbr0 Link encap:Ethernet HWaddr 02:00:00:0D:70:EA
inet addr:167.114.174.210 Bcast:167.114.174.255 Mask:255.255.255.0
inet6 addr: fe80::ec4:7aff:fe4c:c006/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:40611 errors:0 dropped:0 overruns:0 frame:0
TX packets:11012 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2335856 (2.2 MiB) TX bytes:1895173 (1.8 MiB)
Routing:
[root@knode-ca2]:()/etc/sysconfig/network-scripts# netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
158.69.135.200 0.0.0.0 255.255.255.255 UH 0 0 0 venet0
167.114.174.0 0.0.0.0 255.255.255.0 U 0 0 0 vmbr0
0.0.0.0 167.114.174.254 0.0.0.0 UG 0 0 0 vmbr0
[root@knode-ca2]:()/etc/sysconfig/network-scripts# ip route show
158.69.135.200 dev venet0 scope link src 167.114.174.210
167.114.174.0/24 dev vmbr0 proto kernel scope link src 167.114.174.210
default via 167.114.174.254 dev vmbr0
[root@knode-ca2]:()/etc/sysconfig/network-scripts#
And here are the same commands run inside the CONTAINER:
Executing command: ifconfig
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:29 errors:0 dropped:0 overruns:0 frame:0
TX packets:29 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1830 (1.7 KiB) TX bytes:1830 (1.7 KiB)
venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:127.0.0.2 P-t-P:127.0.0.2 Bcast:0.0.0.0 Mask:255.255.255.255
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:125 errors:0 dropped:14 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:8868 (8.6 KiB)
venet0:0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:158.69.135.200 P-t-P:158.69.135.200 Bcast:158.69.135.200 Mask:255.255.255.255
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
[root@knode-ca2]:()~# vzctl exec 106 netstat -rn
Executing command: netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 0.0.0.0 0.0.0.0 U 0 0 0 venet0
[root@knode-ca2]:()~# vzctl exec 106 ip route show
Executing command: ip route show
default dev venet0 scope link
[root@knode-ca2]:()~#
I'm not sure what's going on here, and I am using virtual MAC addresses.

Have a look at the configuration inside your container: the broadcast address is the same as the inet address and the P-t-P address.
My container shows this:
venet0:0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:1.2.3.4 P-t-P:1.2.3.4 Bcast:0.0.0.0 Mask:255.255.255.255
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
Just make sure that this is correct
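As a quick sanity check, the comparison described above can be automated; this is a purely illustrative helper (the `broadcast_collisions` function is hypothetical, not part of OpenVZ tooling) that scans ifconfig output for interfaces whose Bcast field equals the inet address:

```python
import re

def broadcast_collisions(ifconfig_text):
    """Return (iface, addr) pairs where the Bcast field equals the inet addr."""
    results = []
    iface = None
    for line in ifconfig_text.splitlines():
        head = re.match(r"^(\S+)\s+Link encap", line)
        if head:
            iface = head.group(1)  # a new interface block starts here
        addr = re.search(r"inet addr:(\S+).*Bcast:(\S+)", line)
        if addr and iface and addr.group(1) == addr.group(2):
            results.append((iface, addr.group(1)))
    return results

# Sample taken from the question's container output
sample = """\
venet0:0  Link encap:UNSPEC
          inet addr:158.69.135.200  P-t-P:158.69.135.200  Bcast:158.69.135.200  Mask:255.255.255.255
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0"""

print(broadcast_collisions(sample))  # [('venet0:0', '158.69.135.200')]
```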


Ubuntu UDP Multicast not received on secondary interface

My test setup looks as following:
Ubuntu 22.04
Kernel 5.15.1025 Realtime
I210 enp1s0 (10.1.180.98)
I225 enp2s0 (10.1.180.97)
Netgear GS108 Switch
enp1s0 and enp2s0 are connected to the switch
sending UDP Packets over enp1s0 to multicast address 224.0.0.22
listening on enp2s0 (-> external loopback)
open62541 UDP pubsub
General:
# ifconfig
enp1s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.1.180.98 netmask 255.255.255.0 broadcast 10.1.180.255
inet6 fe80::36fc:cf83:b6f7:e7eb prefixlen 64 scopeid 0x20<link>
ether 00:07:32:a5:c3:88 txqueuelen 1000 (Ethernet)
RX packets 10823 bytes 3936173 (3.9 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 287226 bytes 29921782 (29.9 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0x7fe00000-7fe1ffff
enp2s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.1.180.97 netmask 255.255.255.0 broadcast 10.1.180.255
inet6 fe80::a22:bab1:5e74:d3ad prefixlen 64 scopeid 0x20<link>
ether 00:07:32:a5:c3:89 txqueuelen 1000 (Ethernet)
RX packets 287442 bytes 29411683 (29.4 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3506 bytes 174754 (174.7 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0x7fc00000-7fcfffff
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 10698 bytes 924534 (924.5 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 10698 bytes 924534 (924.5 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
# netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 10.1.180.10 0.0.0.0 UG 0 0 0 enp1s0
0.0.0.0 10.1.180.10 0.0.0.0 UG 0 0 0 enp2s0
10.1.180.0 0.0.0.0 255.255.255.0 U 0 0 0 enp2s0
10.1.180.0 0.0.0.0 255.255.255.0 U 0 0 0 enp1s0
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 enp2s0
# netstat -g
IPv6/IPv4 Group Memberships
Interface RefCnt Group
--------------- ------ ---------------------
lo 1 224.0.0.251
lo 1 224.0.0.1
enp1s0 1 224.0.0.251
enp1s0 1 224.0.0.1
enp2s0 1 224.0.0.22
enp2s0 1 224.0.0.251
enp2s0 1 224.0.0.1
lo 1 ff02::fb
lo 1 ip6-allnodes
lo 1 ff01::1
enp1s0 1 ff02::fb
enp1s0 1 ff02::1:fff7:e7eb
enp1s0 1 ip6-allnodes
enp1s0 1 ff01::1
enp2s0 1 ff02::fb
enp2s0 1 ff02::1:ff74:d3ad
enp2s0 1 ip6-allnodes
enp2s0 1 ff01::1
# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Before sending:
# netstat -i
Kernel Interface table
Iface MTU RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
enp1s0 1500 12397 0 0 0 572786 0 0 0 BMRU
enp2s0 1500 562000 0 0 0 4015 0 0 0 BMRU
lo 65536 12782 0 0 0 12782 0 0 0 LRU
# netstat -s -u
IcmpMsg:
InType3: 6576
OutType3: 6576
Udp:
5710 packets received
902 packets to unknown port received
0 packet receive errors
576693 packets sent
0 receive buffer errors
0 send buffer errors
IgnoredMulti: 259
UdpLite:
IpExt:
InMcastPkts: 110
OutMcastPkts: 567399
InBcastPkts: 259
InOctets: 54256683
OutOctets: 52916072
InMcastOctets: 10142
OutMcastOctets: 50498445
InBcastOctets: 19383
InNoECTPkts: 574627
MPTcpExt:
# ethtool -S enp2s0 | grep rx
rx_packets: 561920
rx_bytes: 59893407
rx_broadcast: 5508
rx_multicast: 556412
rx_crc_errors: 0
rx_no_buffer_count: 0
rx_missed_errors: 0
rx_long_length_errors: 0
rx_short_length_errors: 0
rx_align_errors: 0
rx_flow_control_xon: 0
rx_flow_control_xoff: 0
rx_long_byte_count: 59893407
rx_smbus: 0
os2bmc_rx_by_bmc: 0
os2bmc_rx_by_host: 0
rx_hwtstamp_cleared: 0
rx_lpi_counter: 0
rx_errors: 0
rx_length_errors: 0
rx_over_errors: 0
rx_frame_errors: 0
rx_fifo_errors: 0
rx_queue_0_packets: 561750
rx_queue_0_bytes: 57629925
rx_queue_0_drops: 0
rx_queue_0_csum_err: 0
rx_queue_0_alloc_failed: 0
rx_queue_1_packets: 0
rx_queue_1_bytes: 0
rx_queue_1_drops: 0
rx_queue_1_csum_err: 0
rx_queue_1_alloc_failed: 0
rx_queue_2_packets: 148
rx_queue_2_bytes: 13290
rx_queue_2_drops: 0
rx_queue_2_csum_err: 0
rx_queue_2_alloc_failed: 0
rx_queue_3_packets: 22
rx_queue_3_bytes: 2512
rx_queue_3_drops: 0
rx_queue_3_csum_err: 0
rx_queue_3_alloc_failed: 0
After sending:
# netstat -i
Kernel Interface table
Iface MTU RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
enp1s0 1500 12465 0 0 0 618087 0 0 0 BMRU
enp2s0 1500 607349 0 0 0 4031 0 0 0 BMRU
lo 65536 12800 0 0 0 12800 0 0 0 LRU
# netstat -s -u
IcmpMsg:
InType3: 6588
OutType3: 6588
Udp:
5715 packets received
902 packets to unknown port received
0 packet receive errors
621972 packets sent
0 receive buffer errors
0 send buffer errors
IgnoredMulti: 263
UdpLite:
IpExt:
InMcastPkts: 112
OutMcastPkts: 612677
InBcastPkts: 263
InOctets: 58289081
OutOctets: 56953872
InMcastOctets: 10222
OutMcastOctets: 54527991
InBcastOctets: 19816
InNoECTPkts: 619936
MPTcpExt:
# ethtool -S enp2s0 | grep rx
rx_packets: 607351
rx_bytes: 64748001
rx_broadcast: 5666
rx_multicast: 601685
rx_crc_errors: 0
rx_no_buffer_count: 0
rx_missed_errors: 0
rx_long_length_errors: 0
rx_short_length_errors: 0
rx_align_errors: 0
rx_flow_control_xon: 0
rx_flow_control_xoff: 0
rx_long_byte_count: 64748001
rx_smbus: 0
os2bmc_rx_by_bmc: 0
os2bmc_rx_by_host: 0
rx_hwtstamp_cleared: 0
rx_lpi_counter: 0
rx_errors: 0
rx_length_errors: 0
rx_over_errors: 0
rx_frame_errors: 0
rx_fifo_errors: 0
rx_queue_0_packets: 607176
rx_queue_0_bytes: 62302224
rx_queue_0_drops: 0
rx_queue_0_csum_err: 0
rx_queue_0_alloc_failed: 0
rx_queue_1_packets: 0
rx_queue_1_bytes: 0
rx_queue_1_drops: 0
rx_queue_1_csum_err: 0
rx_queue_1_alloc_failed: 0
rx_queue_2_packets: 153
rx_queue_2_bytes: 13861
rx_queue_2_drops: 0
rx_queue_2_csum_err: 0
rx_queue_2_alloc_failed: 0
rx_queue_3_packets: 22
rx_queue_3_bytes: 2512
rx_queue_3_drops: 0
rx_queue_3_csum_err: 0
rx_queue_3_alloc_failed: 0
The dropwatch output is as follows:
# sudo dropwatch -l kas
2 drops at igmp_rcv+10c (0xffffffff9dd7202c) [software]
1 drops at unix_stream_connect+36a (0xffffffff9ddbb10a) [software]
2 drops at ip_rcv_finish_core.constprop.0+19c (0xffffffff9dd1930c) [software]
2048 drops at ip_rcv_finish_core.constprop.0+19c (0xffffffff9dd1930c) [software]
2036 drops at ip_rcv_finish_core.constprop.0+19c (0xffffffff9dd1930c) [software]
1 drops at __udp4_lib_mcast_deliver+31f (0xffffffff9dd5d67f) [software]
1 drops at __udp4_lib_mcast_deliver+31f (0xffffffff9dd5d67f) [software]
If I run this setup (exactly the same UDP packets, verified with tcpdump) with a real second Windows device, receiving works. But this "external loopback" doesn't receive anything (I want to build a TSN setup this way, so the Windows machine is not an option).
If I don't specify the interface for receiving, I get the packets (but I don't know whether they come from the loopback).
I tried the following steps without success:
Disabling rp_filter (in every combination across all available interfaces)
Enabling promiscuous mode (though the ethtool output says there is no problem on the NIC side)
What did I miss?
Best regards,
Patrick
My goal is to send UDP multicast packets on the first interface and receive them on the second interface (for performance analysis, and to simulate master hardware that is currently missing).
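The send-on-one-NIC / receive-on-the-other pattern can be sketched with plain sockets. This is a minimal illustration using the standard IP_MULTICAST_IF / IP_ADD_MEMBERSHIP socket options, not the open62541 pubsub code; the group is the one from the question, the port is an arbitrary assumption, and `make_sender`/`make_receiver` are hypothetical names:

```python
import socket

GROUP, PORT = "224.0.0.22", 5007  # group from the question; port is an assumption

def make_sender(src_ip):
    """Send multicast out of the interface that owns src_ip (e.g. enp1s0's 10.1.180.98)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Select the egress interface by its address rather than via the routing table
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF, socket.inet_aton(src_ip))
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
    return s

def make_receiver(listen_ip):
    """Join the group on the interface that owns listen_ip (e.g. enp2s0's 10.1.180.97)."""
    r = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    r.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    r.bind(("", PORT))
    # mreq = group address + local interface address
    mreq = socket.inet_aton(GROUP) + socket.inet_aton(listen_ip)
    r.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return r
```

With the two interface addresses from the question plugged in, this reproduces the same external-loopback path independently of open62541, which can help isolate whether the drop happens at the socket layer or earlier.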

Trying to create a new Postgres connection

I am implementing a Java microservice application deployed with Docker, using Postgres as the database. I am on a Mac. Yesterday I successfully created a connection in DBeaver using my Mac's host IP, 192.168.1.73. Today, I cannot connect with this host.
I tried $ telnet 192.168.1.73 5432 and got the following output:
Trying 192.168.1.73...
telnet: connect to address 192.168.1.73: Connection refused
telnet: Unable to connect to remote host
What can I do?
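The same reachability check can be made without telnet; this is an illustrative probe (the `probe` helper is hypothetical; host and port are the ones from the question) that also distinguishes "connection refused" from a timeout:

```python
import socket

def probe(host, port, timeout=3.0):
    """TCP connect probe, roughly what `telnet host port` does."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "refused"      # the host answered, but nothing listens on that port
    except OSError:
        return "unreachable"  # timeout, filtered, or host down

# e.g. probe("192.168.1.73", 5432)
```

"refused" here means the machine is reachable but no process (or Docker port publish) is bound to 5432 at that address.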
EDIT
I run docker-compose.yml; here is the extract for the database:
database:
  image: postgres:9.5
  ports:
    - "5432:5432"
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=Esprit292948
    - POSTGRES_DB=immo_db_local
Here is the beginning of docker-compose.yml. I voluntarily hid the details.
version: '2'
services:
  eurekaserver:
    image: ...
    ports:
      - ...
  configserver:
    image: ...
    ports:
      - ...
    environment:
      EUREKASERVER_URI: ...
      EUREKASERVER_PORT: ...
      ENCRYPT_KEY: ...
  gateservice:
    image: ...
    ports:
      - ...
    environment:
      PROFILE: ...
      SERVER_PORT: ...
      CONFIGSERVER_URI: ...
      EUREKASERVER_URI: ...
      EUREKASERVER_PORT: ...
      DATABASESERVER_PORT: "5432"
      CONFIGSERVER_PORT: "8888"
      AUDIT_PORT: "8087"
      DB_PORT: "8930"
  database:
    image: postgres:9.5
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=Esprit292948
      - POSTGRES_DB=immo_db_local
  optimisationfiscaleservice:
    image: ...
    ports:
      - ...
etc......
Here is the ifconfig result
$ ifconfig
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
options=1203<RXCSUM,TXCSUM,TXSTATUS,SW_TIMESTAMP>
inet 127.0.0.1 netmask 0xff000000
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
nd6 options=201<PERFORMNUD,DAD>
gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
stf0: flags=0<> mtu 1280
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=50b<RXCSUM,TXCSUM,VLAN_HWTAGGING,AV,CHANNEL_IO>
ether a8:20:66:31:0a:2a
nd6 options=201<PERFORMNUD,DAD>
media: autoselect (none)
status: inactive
en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=400<CHANNEL_IO>
ether 20:c9:d0:e0:5d:a1
inet6 fe80::41:83de:7236:ba7a%en1 prefixlen 64 secured scopeid 0x5
inet 192.168.1.73 netmask 0xffffff00 broadcast 192.168.1.255
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
en2: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
options=460<TSO4,TSO6,CHANNEL_IO>
ether 82:0a:60:f7:bc:80
media: autoselect <full-duplex>
status: inactive
fw0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 4078
lladdr a8:20:66:ff:fe:83:de:f2
nd6 options=201<PERFORMNUD,DAD>
media: autoselect <full-duplex>
status: inactive
bridge0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=63<RXCSUM,TXCSUM,TSO4,TSO6>
ether 82:0a:60:f7:bc:80
Configuration:
id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
ipfilter disabled flags 0x0
member: en2 flags=3<LEARNING,DISCOVER>
ifmaxaddr 0 port 6 priority 0 path cost 0
nd6 options=201<PERFORMNUD,DAD>
media: <unknown type>
status: inactive
p2p0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 2304
options=400<CHANNEL_IO>
ether 02:c9:d0:e0:5d:a1
media: autoselect
status: inactive
awdl0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1484
options=400<CHANNEL_IO>
ether 0e:b6:7e:12:5d:fb
inet6 fe80::cb6:7eff:fe12:5dfb%awdl0 prefixlen 64 scopeid 0xa
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
utun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380
inet6 fe80::3b2:bb0f:18:dcb7%utun0 prefixlen 64 scopeid 0xb
nd6 options=201<PERFORMNUD,DAD>
utun1: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 2000
inet6 fe80::b744:4974:1dd2:5fda%utun1 prefixlen 64 scopeid 0xc
nd6 options=201<PERFORMNUD,DAD>
I hadn't paid attention at first: there was an error message,
Name does not resolve
So I added "apk add bind-tools" to the Dockerfiles to get logs, according to this post: Docker Compose and Postgres : Name does not resolve
After which I got another error:
Docker error : no space left on device
So, according to this post, Docker error : no space left on device, I ran
docker system prune
and then I succeeded in connecting with DBeaver.

Can't join cluster: connection refused during kubeadm join

I am getting the following error when I try to join the cluster
sudo kubeadm join 10.1.1.150:6443 --token ypcdg7.w6pun0nd31c4q5c2 --discovery-token-ca-cert-hash sha256:1ac79447f8dee9d90d592a3ead3d6c54ce7046dcd0b3854917c93cf6bbff7894
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
I0710 13:39:56.080260 2415 kernel_validator.go:81] Validating kernel version
I0710 13:39:56.080423 2415 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "10.1.1.150:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.1.1.150:6443"
[discovery] Failed to request cluster info, will try again: [Get https://10.1.1.150:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 10.1.1.150:6443: connect: connection refused]
I get the same error with curl on the machine that I want to join to the cluster.
But if I run this
curl -k https://10.1.1.150:6443/api/v1/namespaces/kube-public/configmaps/cluster-info
on any other machine on the local net (10.1.1.0/24), I get a good JSON response.
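For completeness, the curl -k check can be reproduced from a script; this is an illustrative sketch (the `cluster_info` helper is hypothetical; the URL is the one from the question), useful for probing from several machines in a loop:

```python
import ssl
import urllib.request

def cluster_info(url, timeout=5):
    """Equivalent of `curl -k URL`: fetch without verifying the server certificate."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # kubeadm's CA is not in the system trust store
    with urllib.request.urlopen(url, timeout=timeout, context=ctx) as resp:
        return resp.status, resp.read()

# e.g. cluster_info("https://10.1.1.150:6443/api/v1/namespaces/kube-public/configmaps/cluster-info")
```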
Some useful info:
I can ping my master node (10.1.1.150)
Port 6443 is open on my master node; this is netstat on my master:
jp@tensor3:~$ sudo netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1110/sshd
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 6783/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 7375/kube-proxy
tcp 0 0 127.0.0.1:10251 0.0.0.0:* LISTEN 7120/kube-scheduler
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 7065/etcd
tcp 0 0 127.0.0.1:10252 0.0.0.0:* LISTEN 7058/kube-controlle
tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 7065/etcd
tcp 0 0 127.0.0.1:37491 0.0.0.0:* LISTEN 6783/kubelet
tcp6 0 0 :::22 :::* LISTEN 1110/sshd
tcp6 0 0 :::10250 :::* LISTEN 6783/kubelet
tcp6 0 0 :::6443 :::* LISTEN 7130/kube-apiserver
tcp6 0 0 :::10256 :::* LISTEN 7375/kube-proxy
udp 0 0 0.0.0.0:68 0.0.0.0:* 980/dhclient
udp 0 0 0.0.0.0:8472 0.0.0.0:* -
I'm using flannel for network communication
ifconfig and route on my master
jp@tensor3:~$ ifconfig -a
cni0 Link encap:Ethernet HWaddr 0a:58:0a:f4:00:01
inet addr:10.244.0.1 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::ec67:aeff:fe90:4fbc/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:31641 errors:0 dropped:0 overruns:0 frame:0
TX packets:31925 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2032440 (2.0 MB) TX bytes:11726575 (11.7 MB)
docker0 Link encap:Ethernet HWaddr 02:42:44:f4:4f:cb
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
ens160 Link encap:Ethernet HWaddr 00:50:56:3d:73:84
inet addr:10.1.1.150 Bcast:10.1.1.255 Mask:255.255.255.0
inet6 addr: fe80::250:56ff:fe3d:7384/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:294081 errors:0 dropped:0 overruns:0 frame:0
TX packets:74724 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:297445831 (297.4 MB) TX bytes:5900453 (5.9 MB)
flannel.1 Link encap:Ethernet HWaddr 06:a7:4c:1d:6f:cd
inet addr:10.244.0.0 Bcast:0.0.0.0 Mask:255.255.255.255
inet6 addr: fe80::4a7:4cff:fe1d:6fcd/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:8 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:1219503 errors:0 dropped:0 overruns:0 frame:0
TX packets:1219503 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:271937625 (271.9 MB) TX bytes:271937625 (271.9 MB)
vethd451ffcc Link encap:Ethernet HWaddr 8a:f1:f0:05:80:f3
inet6 addr: fe80::88f1:f0ff:fe05:80f3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:8799 errors:0 dropped:0 overruns:0 frame:0
TX packets:8860 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:687962 (687.9 KB) TX bytes:3256670 (3.2 MB)
vethed2073fb Link encap:Ethernet HWaddr ea:9e:43:5e:4e:30
inet6 addr: fe80::e89e:43ff:fe5e:4e30/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:8800 errors:0 dropped:0 overruns:0 frame:0
TX packets:8887 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:688228 (688.2 KB) TX bytes:3258477 (3.2 MB)
jp@tensor3:~$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.1.1.29 0.0.0.0 UG 0 0 0 ens160
10.1.1.0 0.0.0.0 255.255.255.0 U 0 0 0 ens160
10.244.0.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
ifconfig and route on my node
jp@tensor2:~$ ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:4e:2f:0e:97
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
ens160 Link encap:Ethernet HWaddr 00:50:56:8c:0a:cd
inet addr:10.1.1.151 Bcast:10.1.1.255 Mask:255.255.255.0
inet6 addr: fe80::250:56ff:fe8c:acd/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:17470 errors:0 dropped:0 overruns:0 frame:0
TX packets:913 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1511461 (1.5 MB) TX bytes:101801 (101.8 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:160 errors:0 dropped:0 overruns:0 frame:0
TX packets:160 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:11840 (11.8 KB) TX bytes:11840 (11.8 KB)
jp@tensor2:~$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.1.1.29 0.0.0.0 UG 0 0 0 ens160
10.1.1.0 0.0.0.0 255.255.255.0 U 0 0 0 ens160
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
and this is the status of my cluster
jp@tensor3:~$ sudo kubectl get all --namespace=kube-system
[sudo] password for jp:
NAME READY STATUS RESTARTS AGE
pod/coredns-78fcdf6894-bblcs 1/1 Running 0 1h
pod/coredns-78fcdf6894-wrmj4 1/1 Running 0 1h
pod/etcd-tensor3 1/1 Running 0 1h
pod/kube-apiserver-tensor3 1/1 Running 0 1h
pod/kube-controller-manager-tensor3 1/1 Running 0 1h
pod/kube-flannel-ds-amd64-p7jmq 1/1 Running 0 1h
pod/kube-proxy-qg7jj 1/1 Running 0 1h
pod/kube-scheduler-tensor3 1/1 Running 0 1h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 1h
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/kube-flannel-ds-amd64 1 1 1 1 1 beta.kubernetes.io/arch=amd64 1h
daemonset.apps/kube-flannel-ds-arm 0 0 0 0 0 beta.kubernetes.io/arch=arm 1h
daemonset.apps/kube-flannel-ds-arm64 0 0 0 0 0 beta.kubernetes.io/arch=arm64 1h
daemonset.apps/kube-flannel-ds-ppc64le 0 0 0 0 0 beta.kubernetes.io/arch=ppc64le 1h
daemonset.apps/kube-flannel-ds-s390x 0 0 0 0 0 beta.kubernetes.io/arch=s390x 1h
daemonset.apps/kube-proxy 1 1 1 1 1 beta.kubernetes.io/arch=amd64 1h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/coredns 2 2 2 2 1h
NAME DESIRED CURRENT READY AGE
replicaset.apps/coredns-78fcdf6894 2 2 2 1h
On the same machine I'm using:
docker version 1.13.1
k8s version 1.11
I met similar behaviour on Ubuntu 16.04 after a reboot.
I ran
sudo kubeadm join ...
and it crashed as described.
Then I did:
sudo su
kubeadm init
exit
I tried again, and that second time it was successful.

Set /etc/hosts for each vagrant vm by ansible

My Vagrantfile
hosts = {
  "host01" => "192.168.11.101",
  "host02" => "192.168.11.102",
}
Vagrant.configure("2") do |config|
  config.ssh.username = "root"
  config.ssh.password = "vagrant"
  config.ssh.insert_key = "true"
  hosts.each_with_index do |(name, ip), index|
    config.vm.define name do |machine|
      machine.vm.box = "centos7"
      machine.vm.box_check_update = false
      machine.vm.hostname = name
      machine.vm.synced_folder "/data", "/data"
      machine.vm.network :private_network, ip: ip
      machine.vm.provider "virtualbox" do |v|
        v.name = name
        v.customize ["modifyvm", :id, "--memory", 2048]
      end
    end
  end
end
Ansible template for generating /etc/hosts:
127.0.0.1 localhost
{% for host in groups['all'] %}
{{ hostvars[host]['ansible_' + iface].ipv4.address }} {{ host }}
{% endfor %}
Ansible task:
- name: Create the hosts file for all machines
  template: src=hosts.j2 dest=/etc/hosts
But I get this result:
[root@host01 ~]# cat /etc/hosts
127.0.0.1 localhost
10.0.2.15 host01
10.0.2.15 host02
ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 0.0.0.0
inet6 fe80::42:49ff:fed1:eebb prefixlen 64 scopeid 0x20<link>
ether 02:42:49:d1:ee:bb txqueuelen 0 (Ethernet)
RX packets 77 bytes 6065 (5.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 99 bytes 8572 (8.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.2.15 netmask 255.255.255.0 broadcast 10.0.2.255
inet6 fe80::a00:27ff:fede:e0e prefixlen 64 scopeid 0x20<link>
ether 08:00:27:de:0e:0e txqueuelen 1000 (Ethernet)
RX packets 785483 bytes 57738892 (55.0 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 777457 bytes 1957320412 (1.8 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.11.101 netmask 255.255.255.0 broadcast 192.168.11.255
ether 08:00:27:15:2c:64 txqueuelen 1000 (Ethernet)
RX packets 41445 bytes 39878552 (38.0 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 18055 bytes 2113498 (2.0 MiB)
In ifconfig I found that only enp0s8 (inet 192.168.11.101 vs. 192.168.11.102) differs between host01 and host02.
Did host01 and host02 get the same IP??
host01 has a Docker registry.
On host01, curl http://host01:5006/v2/_catalog works.
On host02, curl http://host01:5006/v2/_catalog does not work.
host01 and host02 got the same IP??
Yes. That's how Vagrant works, and how it is able to connect, for orchestration purposes, to machines created from a variety of boxes from different publishers.
There's nothing strange about that; they are connected to different virtual switches in VirtualBox.
I just want host01 and host02 to be able to reach each other by hostname.
Use the other interface as the value of iface in your Jinja2 template (you did not show its value in the question).
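For example, if the private network lives on enp0s8 (the interface name taken from the ifconfig output in the question; it may differ per box), the same hosts.j2 template could pin that interface explicitly instead of using an iface variable:

```
127.0.0.1 localhost
{% for host in groups['all'] %}
{{ hostvars[host]['ansible_enp0s8'].ipv4.address }} {{ host }}
{% endfor %}
```

With enp0s8, each host renders its 192.168.11.x address rather than the shared NAT address 10.0.2.15 on enp0s3.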

route not working in kubernetes with calico

I have:
kubernetes v1.6.0, set up by kubeadm v1.6.1
calico, set up by the official YAML
iptables v1.6.0
nodes provided by AliCloud
Problem:
The CNI network is not working. Any deployment can only be reached from the node where it is running. I suspect a route table conflict or a missing route, because I have another cluster on Vultr Cloud working fine with the same setup steps.
Cluster Info:
root@iZ2ze8ctk2q17u029a8wcoZ:~# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system calico-etcd-66gf4 1/1 Running 0 16h 10.27.219.50 iz2ze8ctk2q17u029a8wcoz
kube-system calico-node-4wxsb 2/2 Running 0 16h 10.27.219.50 iz2ze8ctk2q17u029a8wcoz
kube-system calico-node-6n1g1 2/2 Running 0 16h 10.30.248.80 iz2zegw6nmd5t5qxy35lh0z
kube-system calico-policy-controller-2561685917-7bdd4 1/1 Running 0 16h 10.30.248.80 iz2zegw6nmd5t5qxy35lh0z
kube-system etcd-iz2ze8ctk2q17u029a8wcoz 1/1 Running 0 16h 10.27.219.50 iz2ze8ctk2q17u029a8wcoz
kube-system heapster-bx03l 1/1 Running 0 16h 192.168.31.150 iz2zegw6nmd5t5qxy35lh0z
kube-system kube-apiserver-iz2ze8ctk2q17u029a8wcoz 1/1 Running 0 16h 10.27.219.50 iz2ze8ctk2q17u029a8wcoz
kube-system kube-controller-manager-iz2ze8ctk2q17u029a8wcoz 1/1 Running 0 16h 10.27.219.50 iz2ze8ctk2q17u029a8wcoz
kube-system kube-dns-3913472980-kgzln 3/3 Running 0 16h 192.168.31.149 iz2zegw6nmd5t5qxy35lh0z
kube-system kube-proxy-ck83t 1/1 Running 0 16h 10.30.248.80 iz2zegw6nmd5t5qxy35lh0z
kube-system kube-proxy-lssdn 1/1 Running 0 16h 10.27.219.50 iz2ze8ctk2q17u029a8wcoz
kube-system kube-scheduler-iz2ze8ctk2q17u029a8wcoz 1/1 Running 0 16h 10.27.219.50 iz2ze8ctk2q17u029a8wcoz
I checked each pod's logs and cannot find anything wrong.
Master Info:
internal ip: 10.27.219.50
root@iZ2ze8ctk2q17u029a8wcoZ:~# ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:56:84:35:19
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.255.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
eth0 Link encap:Ethernet HWaddr 00:16:3e:30:51:ae
inet addr:10.27.219.50 Bcast:10.27.219.255 Mask:255.255.252.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4400927 errors:0 dropped:0 overruns:0 frame:0
TX packets:3906530 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:564808928 (564.8 MB) TX bytes:792611382 (792.6 MB)
eth1 Link encap:Ethernet HWaddr 00:16:3e:32:07:f8
inet addr:59.110.32.199 Bcast:59.110.35.255 Mask:255.255.252.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1148756 errors:0 dropped:0 overruns:0 frame:0
TX packets:688177 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1570341044 (1.5 GB) TX bytes:58104611 (58.1 MB)
tunl0 Link encap:IPIP Tunnel HWaddr
inet addr:192.168.201.0 Mask:255.255.255.255
UP RUNNING NOARP MTU:1440 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
root@iZ2ze8ctk2q17u029a8wcoZ:~# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 59.110.35.247 0.0.0.0 UG 0 0 0 eth1
10.27.216.0 0.0.0.0 255.255.252.0 U 0 0 0 eth0
10.30.0.0 10.27.219.247 255.255.0.0 UG 0 0 0 eth0
10.32.0.0 0.0.0.0 255.240.0.0 U 0 0 0 weave
59.110.32.0 0.0.0.0 255.255.252.0 U 0 0 0 eth1
100.64.0.0 10.27.219.247 255.192.0.0 UG 0 0 0 eth0
172.16.0.0 10.27.219.247 255.240.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
192.168.201.0 0.0.0.0 255.255.255.192 U 0 0 0 *
root@iZ2ze8ctk2q17u029a8wcoZ:~# ip route list
default via 59.110.35.247 dev eth1
10.27.216.0/22 dev eth0 proto kernel scope link src 10.27.219.50
10.30.0.0/16 via 10.27.219.247 dev eth0
10.32.0.0/12 dev weave proto kernel scope link src 10.32.0.1
59.110.32.0/22 dev eth1 proto kernel scope link src 59.110.32.199
100.64.0.0/10 via 10.27.219.247 dev eth0
172.16.0.0/12 via 10.27.219.247 dev eth0
172.17.0.0/24 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
blackhole 192.168.201.0/26 proto bird
// NOTE: 10.30.0.0/16 via 10.27.219.247 dev eth0
// This rule is important: the worker node's IP is 10.30.xx.xx. If I delete this rule, I cannot ping the worker node.
// By default this rule is 10.0.0.0/8 via 10.27.219.247 dev eth0; I changed it to the above.
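The effect of that rule can be illustrated with a longest-prefix match over an abbreviated copy of the table above. This is a toy model of the kernel's route selection (the `lookup` helper is hypothetical), not the actual FIB code:

```python
import ipaddress

# Abbreviated copy of the master's routing table: (prefix, gateway, device)
ROUTES = [
    ("0.0.0.0/0",      "59.110.35.247", "eth1"),
    ("10.27.216.0/22", None,            "eth0"),
    ("10.30.0.0/16",   "10.27.219.247", "eth0"),
]

def lookup(dst):
    """Pick the most specific matching route, as the kernel does."""
    matches = [r for r in ROUTES
               if ipaddress.ip_address(dst) in ipaddress.ip_network(r[0])]
    return max(matches, key=lambda r: ipaddress.ip_network(r[0]).prefixlen)

print(lookup("10.30.248.80"))  # ('10.30.0.0/16', '10.27.219.247', 'eth0')
```

Without the 10.30.0.0/16 entry, the worker's address only matches the default route and leaves via eth1, which matches the observed "cannot ping the worker node" behaviour.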
root@iZ2ze8ctk2q17u029a8wcoZ:~# iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 3 packets, 180 bytes)
pkts bytes target prot opt in out source destination
20976 1250K cali-PREROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:6gwbT8clXdHdC1b1 */
21016 1252K KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
20034 1193K DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT 3 packets, 180 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 4 packets, 240 bytes)
pkts bytes target prot opt in out source destination
109K 6580K cali-OUTPUT all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:tVnHkvAo15HuiPy0 */
111K 6738K KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
1263 75780 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT 4 packets, 240 bytes)
pkts bytes target prot opt in out source destination
86584 5235K cali-POSTROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:O3lYWMrLQYEMJtB5 */
0 0 MASQUERADE all -- * !docker0 172.17.0.0/24 0.0.0.0/0
3982K 239M KUBE-POSTROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes postrouting rules */
28130 1704K WEAVE all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
Chain KUBE-MARK-DROP (0 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK or 0x8000
Chain KUBE-MARK-MASQ (5 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK or 0x4000
Chain KUBE-NODEPORTS (1 references)
pkts bytes target prot opt in out source destination
Chain KUBE-POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000
Chain KUBE-SEP-2VS52M6CEWASZVOP (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.31.149 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ tcp to:192.168.31.149:53
Chain KUBE-SEP-3XQHSFTDAPNNNDX3 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.31.150 0.0.0.0/0 /* kube-system/heapster: */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/heapster: */ tcp to:192.168.31.150:8082
Chain KUBE-SEP-CH7KJM5XKO5WGA6D (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 10.27.219.50 0.0.0.0/0 /* default/kubernetes:https */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */ recent: SET name: KUBE-SEP-CH7KJM5XKO5WGA6D side: source mask: 255.255.255.255 tcp to:10.27.219.50:6443
Chain KUBE-SEP-X3WTOMIYJNS7APAN (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.31.149 0.0.0.0/0 /* kube-system/kube-dns:dns */
0 0 DNAT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */ udp to:192.168.31.149:53
Chain KUBE-SEP-YDCHDMTZNPMRRKCX (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 10.27.219.50 0.0.0.0/0 /* kube-system/calico-etcd: */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/calico-etcd: */ tcp to:10.27.219.50:6666
Chain KUBE-SERVICES (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- * * 0.0.0.0/0 10.96.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-SVC-TCOU7JCQXEZGVUNU udp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
0 0 KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
0 0 KUBE-SVC-NTYB37XIWATNM25Y tcp -- * * 0.0.0.0/0 10.96.232.136 /* kube-system/calico-etcd: cluster IP */ tcp dpt:6666
0 0 KUBE-SVC-BJM46V3U5RZHCFRZ tcp -- * * 0.0.0.0/0 10.96.181.180 /* kube-system/heapster: cluster IP */ tcp dpt:80
7 420 KUBE-NODEPORTS all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
Chain KUBE-SVC-BJM46V3U5RZHCFRZ (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-3XQHSFTDAPNNNDX3 all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/heapster: */
Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-2VS52M6CEWASZVOP all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-CH7KJM5XKO5WGA6D all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-CH7KJM5XKO5WGA6D side: source mask: 255.255.255.255
0 0 KUBE-SEP-CH7KJM5XKO5WGA6D all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */
Chain KUBE-SVC-NTYB37XIWATNM25Y (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-YDCHDMTZNPMRRKCX all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/calico-etcd: */
Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-X3WTOMIYJNS7APAN all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */
Chain WEAVE (1 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- * * 10.32.0.0/12 224.0.0.0/4
1 93 MASQUERADE all -- * * !10.32.0.0/12 10.32.0.0/12
0 0 MASQUERADE all -- * * 10.32.0.0/12 !10.32.0.0/12
Chain cali-OUTPUT (1 references)
pkts bytes target prot opt in out source destination
109K 6580K cali-fip-dnat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:GBTAv2p5CwevEyJm */
Chain cali-POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
109K 6571K cali-fip-snat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:Z-c7XtVd2Bq7s_hA */
109K 6571K cali-nat-outgoing all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:nYKhEzDlr11Jccal */
0 0 MASQUERADE all -- * tunl0 0.0.0.0/0 0.0.0.0/0 /* cali:JHlpT-eSqR1TvyYm */ ADDRTYPE match src-type !LOCAL limit-out ADDRTYPE match src-type LOCAL
Chain cali-PREROUTING (1 references)
pkts bytes target prot opt in out source destination
20976 1250K cali-fip-dnat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:r6XmIziWUJsdOK6Z */
Chain cali-fip-dnat (2 references)
pkts bytes target prot opt in out source destination
Chain cali-fip-snat (1 references)
pkts bytes target prot opt in out source destination
Chain cali-nat-outgoing (1 references)
pkts bytes target prot opt in out source destination
4 376 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:Wd76s91357Uv7N3v */ match-set cali4-masq-ipam-pools src ! match-set cali4-all-ipam-pools dst
Worker Node Info:
internal ip: 10.30.248.80
ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:58:2b:b5:39
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.255.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
eth0 Link encap:Ethernet HWaddr 00:16:3e:2e:3d:fd
inet addr:10.30.248.80 Bcast:10.30.251.255 Mask:255.255.252.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3856596 errors:0 dropped:0 overruns:0 frame:0
TX packets:4253613 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:827402268 (827.4 MB) TX bytes:510838231 (510.8 MB)
eth1 Link encap:Ethernet HWaddr 00:16:3e:2c:db:d1
inet addr:47.93.161.177 Bcast:47.93.163.255 Mask:255.255.252.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:890451 errors:0 dropped:0 overruns:0 frame:0
TX packets:825607 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1695352720 (1.6 GB) TX bytes:62341312 (62.3 MB)
tunl0 Link encap:IPIP Tunnel HWaddr
inet addr:192.168.31.128 Mask:255.255.255.255
UP RUNNING NOARP MTU:1440 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
root@iZ2zegw6nmd5t5qxy35lh0Z:~# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 47.93.163.247 0.0.0.0 UG 0 0 0 eth1
10.0.0.0 10.30.251.247 255.0.0.0 UG 0 0 0 eth0
10.30.248.0 0.0.0.0 255.255.252.0 U 0 0 0 eth0
47.93.160.0 0.0.0.0 255.255.252.0 U 0 0 0 eth1
100.64.0.0 10.30.251.247 255.192.0.0 UG 0 0 0 eth0
172.16.0.0 10.30.251.247 255.240.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
192.168.31.128 0.0.0.0 255.255.255.192 U 0 0 0 *
192.168.31.149 0.0.0.0 255.255.255.255 UH 0 0 0 cali3567b3362cc
192.168.31.150 0.0.0.0 255.255.255.255 UH 0 0 0 cali9d04015b0e7
root@iZ2zegw6nmd5t5qxy35lh0Z:~# ip route list
default via 47.93.163.247 dev eth1
10.0.0.0/8 via 10.30.251.247 dev eth0
10.30.248.0/22 dev eth0 proto kernel scope link src 10.30.248.80
47.93.160.0/22 dev eth1 proto kernel scope link src 47.93.161.177
100.64.0.0/10 via 10.30.251.247 dev eth0
172.16.0.0/12 via 10.30.251.247 dev eth0
172.17.0.0/24 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
blackhole 192.168.31.128/26 proto bird
192.168.31.149 dev cali3567b3362cc scope link
192.168.31.150 dev cali9d04015b0e7 scope link
// NOTE: 10.0.0.0/8 via 10.30.251.247 dev eth0
// I didn't change this one, so it is still the default.
root@iZ2zegw6nmd5t5qxy35lh0Z:~# iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
3524 263K cali-PREROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:6gwbT8clXdHdC1b1 */
3527 263K KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
1031 53882 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 4 packets, 240 bytes)
pkts bytes target prot opt in out source destination
84174 5099K cali-OUTPUT all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:tVnHkvAo15HuiPy0 */
85201 5163K KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT 7 packets, 420 bytes)
pkts bytes target prot opt in out source destination
76279 4644K cali-POSTROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:O3lYWMrLQYEMJtB5 */
0 0 MASQUERADE all -- * !docker0 172.17.0.0/24 0.0.0.0/0
87179 5342K KUBE-POSTROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes postrouting rules */
43815 2646K WEAVE all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
Chain KUBE-MARK-DROP (0 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK or 0x8000
Chain KUBE-MARK-MASQ (5 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK or 0x4000
Chain KUBE-NODEPORTS (1 references)
pkts bytes target prot opt in out source destination
Chain KUBE-POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000
Chain KUBE-SEP-2VS52M6CEWASZVOP (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.31.149 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ tcp to:192.168.31.149:53
Chain KUBE-SEP-3XQHSFTDAPNNNDX3 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.31.150 0.0.0.0/0 /* kube-system/heapster: */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/heapster: */ tcp to:192.168.31.150:8082
Chain KUBE-SEP-CH7KJM5XKO5WGA6D (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 10.27.219.50 0.0.0.0/0 /* default/kubernetes:https */
3 180 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */ recent: SET name: KUBE-SEP-CH7KJM5XKO5WGA6D side: source mask: 255.255.255.255 tcp to:10.27.219.50:6443
Chain KUBE-SEP-X3WTOMIYJNS7APAN (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.31.149 0.0.0.0/0 /* kube-system/kube-dns:dns */
0 0 DNAT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */ udp to:192.168.31.149:53
Chain KUBE-SEP-YDCHDMTZNPMRRKCX (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 10.27.219.50 0.0.0.0/0 /* kube-system/calico-etcd: */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/calico-etcd: */ tcp to:10.27.219.50:6666
Chain KUBE-SERVICES (2 references)
pkts bytes target prot opt in out source destination
3 180 KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- * * 0.0.0.0/0 10.96.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-SVC-TCOU7JCQXEZGVUNU udp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
0 0 KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
0 0 KUBE-SVC-NTYB37XIWATNM25Y tcp -- * * 0.0.0.0/0 10.96.232.136 /* kube-system/calico-etcd: cluster IP */ tcp dpt:6666
0 0 KUBE-SVC-BJM46V3U5RZHCFRZ tcp -- * * 0.0.0.0/0 10.96.181.180 /* kube-system/heapster: cluster IP */ tcp dpt:80
0 0 KUBE-NODEPORTS all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
Chain KUBE-SVC-BJM46V3U5RZHCFRZ (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-3XQHSFTDAPNNNDX3 all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/heapster: */
Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-2VS52M6CEWASZVOP all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
pkts bytes target prot opt in out source destination
3 180 KUBE-SEP-CH7KJM5XKO5WGA6D all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-CH7KJM5XKO5WGA6D side: source mask: 255.255.255.255
0 0 KUBE-SEP-CH7KJM5XKO5WGA6D all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */
Chain KUBE-SVC-NTYB37XIWATNM25Y (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-YDCHDMTZNPMRRKCX all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/calico-etcd: */
Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-X3WTOMIYJNS7APAN all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */
Chain WEAVE (1 references)
pkts bytes target prot opt in out source destination
Chain cali-OUTPUT (1 references)
pkts bytes target prot opt in out source destination
84174 5099K cali-fip-dnat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:GBTAv2p5CwevEyJm */
Chain cali-POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
86501 5298K cali-fip-snat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:Z-c7XtVd2Bq7s_hA */
86501 5298K cali-nat-outgoing all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:nYKhEzDlr11Jccal */
0 0 MASQUERADE all -- * tunl0 0.0.0.0/0 0.0.0.0/0 /* cali:JHlpT-eSqR1TvyYm */ ADDRTYPE match src-type !LOCAL limit-out ADDRTYPE match src-type LOCAL
Chain cali-PREROUTING (1 references)
pkts bytes target prot opt in out source destination
3524 263K cali-fip-dnat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:r6XmIziWUJsdOK6Z */
Chain cali-fip-dnat (2 references)
pkts bytes target prot opt in out source destination
Chain cali-fip-snat (1 references)
pkts bytes target prot opt in out source destination
Chain cali-nat-outgoing (1 references)
pkts bytes target prot opt in out source destination
29 1726 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:Wd76s91357Uv7N3v */ match-set cali4-masq-ipam-pools src ! match-set cali4-all-ipam-pools dst
The problem was found with calicoctl node status: the calico/node instances were using their public IPs to communicate with each other, but nodes in AliCloud sit behind a firewall, so they cannot reach each other via the public addresses.
As gunjan5 suggested, I used the env var IP_AUTODETECTION_METHOD to make calico/node pick the internal interface. Problem solved.
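One way to apply that fix (a sketch; it assumes the conventional calico-node DaemonSet in the kube-system namespace, so adjust the names to match your manifest — eth0 is the internal interface per the ifconfig output above):

```shell
# Tell calico-node to autodetect its address from the internal NIC
# instead of the public one.
kubectl -n kube-system set env daemonset/calico-node \
  IP_AUTODETECTION_METHOD=interface=eth0

# After the pods restart, verify that BGP peers now show the
# internal 10.x addresses and are in the Established state.
calicoctl node status
```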
I'm not sure what the problem is, but here are a couple of things to consider:
I am not familiar with AliCloud, but some cloud providers need special consideration. For example, on GCE the IP-in-IP traffic must be explicitly allowed: http://docs.projectcalico.org/v2.1/getting-started/kubernetes/installation/gce.
I see the weave interface on your master, so I'm wondering if Weave could have left something behind that is causing a problem.
Also, as was suggested in your issue https://github.com/projectcalico/cni-plugin/issues/314, you should check calicoctl node status on the nodes to see whether BGP is working as expected.