I have:
Kubernetes v1.6.0, set up with kubeadm v1.6.1
Calico, installed from the official YAML manifest
iptables v1.6.0
nodes provided by AliCloud
Problem:
The CNI network is not working. Any deployment can only be reached from the node it is running on. I suspect a route-table conflict or a missing route, because I have another cluster on Vultr Cloud that works fine with the same setup steps.
Cluster Info:
root@iZ2ze8ctk2q17u029a8wcoZ:~# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system calico-etcd-66gf4 1/1 Running 0 16h 10.27.219.50 iz2ze8ctk2q17u029a8wcoz
kube-system calico-node-4wxsb 2/2 Running 0 16h 10.27.219.50 iz2ze8ctk2q17u029a8wcoz
kube-system calico-node-6n1g1 2/2 Running 0 16h 10.30.248.80 iz2zegw6nmd5t5qxy35lh0z
kube-system calico-policy-controller-2561685917-7bdd4 1/1 Running 0 16h 10.30.248.80 iz2zegw6nmd5t5qxy35lh0z
kube-system etcd-iz2ze8ctk2q17u029a8wcoz 1/1 Running 0 16h 10.27.219.50 iz2ze8ctk2q17u029a8wcoz
kube-system heapster-bx03l 1/1 Running 0 16h 192.168.31.150 iz2zegw6nmd5t5qxy35lh0z
kube-system kube-apiserver-iz2ze8ctk2q17u029a8wcoz 1/1 Running 0 16h 10.27.219.50 iz2ze8ctk2q17u029a8wcoz
kube-system kube-controller-manager-iz2ze8ctk2q17u029a8wcoz 1/1 Running 0 16h 10.27.219.50 iz2ze8ctk2q17u029a8wcoz
kube-system kube-dns-3913472980-kgzln 3/3 Running 0 16h 192.168.31.149 iz2zegw6nmd5t5qxy35lh0z
kube-system kube-proxy-ck83t 1/1 Running 0 16h 10.30.248.80 iz2zegw6nmd5t5qxy35lh0z
kube-system kube-proxy-lssdn 1/1 Running 0 16h 10.27.219.50 iz2ze8ctk2q17u029a8wcoz
kube-system kube-scheduler-iz2ze8ctk2q17u029a8wcoz 1/1 Running 0 16h 10.27.219.50 iz2ze8ctk2q17u029a8wcoz
I checked each pod's logs and couldn't find anything wrong.
Master Info:
internal ip: 10.27.219.50
root@iZ2ze8ctk2q17u029a8wcoZ:~# ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:56:84:35:19
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.255.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
eth0 Link encap:Ethernet HWaddr 00:16:3e:30:51:ae
inet addr:10.27.219.50 Bcast:10.27.219.255 Mask:255.255.252.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4400927 errors:0 dropped:0 overruns:0 frame:0
TX packets:3906530 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:564808928 (564.8 MB) TX bytes:792611382 (792.6 MB)
eth1 Link encap:Ethernet HWaddr 00:16:3e:32:07:f8
inet addr:59.110.32.199 Bcast:59.110.35.255 Mask:255.255.252.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1148756 errors:0 dropped:0 overruns:0 frame:0
TX packets:688177 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1570341044 (1.5 GB) TX bytes:58104611 (58.1 MB)
tunl0 Link encap:IPIP Tunnel HWaddr
inet addr:192.168.201.0 Mask:255.255.255.255
UP RUNNING NOARP MTU:1440 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
root@iZ2ze8ctk2q17u029a8wcoZ:~# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 59.110.35.247 0.0.0.0 UG 0 0 0 eth1
10.27.216.0 0.0.0.0 255.255.252.0 U 0 0 0 eth0
10.30.0.0 10.27.219.247 255.255.0.0 UG 0 0 0 eth0
10.32.0.0 0.0.0.0 255.240.0.0 U 0 0 0 weave
59.110.32.0 0.0.0.0 255.255.252.0 U 0 0 0 eth1
100.64.0.0 10.27.219.247 255.192.0.0 UG 0 0 0 eth0
172.16.0.0 10.27.219.247 255.240.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
192.168.201.0 0.0.0.0 255.255.255.192 U 0 0 0 *
root@iZ2ze8ctk2q17u029a8wcoZ:~# ip route list
default via 59.110.35.247 dev eth1
10.27.216.0/22 dev eth0 proto kernel scope link src 10.27.219.50
10.30.0.0/16 via 10.27.219.247 dev eth0
10.32.0.0/12 dev weave proto kernel scope link src 10.32.0.1
59.110.32.0/22 dev eth1 proto kernel scope link src 59.110.32.199
100.64.0.0/10 via 10.27.219.247 dev eth0
172.16.0.0/12 via 10.27.219.247 dev eth0
172.17.0.0/24 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
blackhole 192.168.201.0/26 proto bird
// NOTE: 10.30.0.0/16 via 10.27.219.247 dev eth0
// This route is important: the worker node's IP is 10.30.xx.xx. If I delete it, I cannot ping the worker node.
// By default this route was 10.0.0.0/8 via 10.27.219.247 dev eth0; I changed it to the above.
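The route change described in the note above can be applied with `ip route` (a sketch; the gateway 10.27.219.247 is taken from the routing table above):

```shell
# Narrow AliCloud's default 10.0.0.0/8 VPC route to a /16 that still
# covers the worker subnet (10.30.x.x), as described above.
ip route del 10.0.0.0/8 via 10.27.219.247 dev eth0
ip route add 10.30.0.0/16 via 10.27.219.247 dev eth0
```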
root@iZ2ze8ctk2q17u029a8wcoZ:~# iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 3 packets, 180 bytes)
pkts bytes target prot opt in out source destination
20976 1250K cali-PREROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:6gwbT8clXdHdC1b1 */
21016 1252K KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
20034 1193K DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT 3 packets, 180 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 4 packets, 240 bytes)
pkts bytes target prot opt in out source destination
109K 6580K cali-OUTPUT all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:tVnHkvAo15HuiPy0 */
111K 6738K KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
1263 75780 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT 4 packets, 240 bytes)
pkts bytes target prot opt in out source destination
86584 5235K cali-POSTROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:O3lYWMrLQYEMJtB5 */
0 0 MASQUERADE all -- * !docker0 172.17.0.0/24 0.0.0.0/0
3982K 239M KUBE-POSTROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes postrouting rules */
28130 1704K WEAVE all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
Chain KUBE-MARK-DROP (0 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK or 0x8000
Chain KUBE-MARK-MASQ (5 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK or 0x4000
Chain KUBE-NODEPORTS (1 references)
pkts bytes target prot opt in out source destination
Chain KUBE-POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000
Chain KUBE-SEP-2VS52M6CEWASZVOP (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.31.149 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ tcp to:192.168.31.149:53
Chain KUBE-SEP-3XQHSFTDAPNNNDX3 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.31.150 0.0.0.0/0 /* kube-system/heapster: */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/heapster: */ tcp to:192.168.31.150:8082
Chain KUBE-SEP-CH7KJM5XKO5WGA6D (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 10.27.219.50 0.0.0.0/0 /* default/kubernetes:https */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */ recent: SET name: KUBE-SEP-CH7KJM5XKO5WGA6D side: source mask: 255.255.255.255 tcp to:10.27.219.50:6443
Chain KUBE-SEP-X3WTOMIYJNS7APAN (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.31.149 0.0.0.0/0 /* kube-system/kube-dns:dns */
0 0 DNAT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */ udp to:192.168.31.149:53
Chain KUBE-SEP-YDCHDMTZNPMRRKCX (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 10.27.219.50 0.0.0.0/0 /* kube-system/calico-etcd: */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/calico-etcd: */ tcp to:10.27.219.50:6666
Chain KUBE-SERVICES (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- * * 0.0.0.0/0 10.96.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-SVC-TCOU7JCQXEZGVUNU udp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
0 0 KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
0 0 KUBE-SVC-NTYB37XIWATNM25Y tcp -- * * 0.0.0.0/0 10.96.232.136 /* kube-system/calico-etcd: cluster IP */ tcp dpt:6666
0 0 KUBE-SVC-BJM46V3U5RZHCFRZ tcp -- * * 0.0.0.0/0 10.96.181.180 /* kube-system/heapster: cluster IP */ tcp dpt:80
7 420 KUBE-NODEPORTS all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
Chain KUBE-SVC-BJM46V3U5RZHCFRZ (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-3XQHSFTDAPNNNDX3 all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/heapster: */
Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-2VS52M6CEWASZVOP all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-CH7KJM5XKO5WGA6D all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-CH7KJM5XKO5WGA6D side: source mask: 255.255.255.255
0 0 KUBE-SEP-CH7KJM5XKO5WGA6D all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */
Chain KUBE-SVC-NTYB37XIWATNM25Y (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-YDCHDMTZNPMRRKCX all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/calico-etcd: */
Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-X3WTOMIYJNS7APAN all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */
Chain WEAVE (1 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- * * 10.32.0.0/12 224.0.0.0/4
1 93 MASQUERADE all -- * * !10.32.0.0/12 10.32.0.0/12
0 0 MASQUERADE all -- * * 10.32.0.0/12 !10.32.0.0/12
Chain cali-OUTPUT (1 references)
pkts bytes target prot opt in out source destination
109K 6580K cali-fip-dnat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:GBTAv2p5CwevEyJm */
Chain cali-POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
109K 6571K cali-fip-snat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:Z-c7XtVd2Bq7s_hA */
109K 6571K cali-nat-outgoing all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:nYKhEzDlr11Jccal */
0 0 MASQUERADE all -- * tunl0 0.0.0.0/0 0.0.0.0/0 /* cali:JHlpT-eSqR1TvyYm */ ADDRTYPE match src-type !LOCAL limit-out ADDRTYPE match src-type LOCAL
Chain cali-PREROUTING (1 references)
pkts bytes target prot opt in out source destination
20976 1250K cali-fip-dnat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:r6XmIziWUJsdOK6Z */
Chain cali-fip-dnat (2 references)
pkts bytes target prot opt in out source destination
Chain cali-fip-snat (1 references)
pkts bytes target prot opt in out source destination
Chain cali-nat-outgoing (1 references)
pkts bytes target prot opt in out source destination
4 376 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:Wd76s91357Uv7N3v */ match-set cali4-masq-ipam-pools src ! match-set cali4-all-ipam-pools dst
Worker Node Info:
internal ip: 10.30.248.80
root@iZ2zegw6nmd5t5qxy35lh0Z:~# ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:58:2b:b5:39
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.255.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
eth0 Link encap:Ethernet HWaddr 00:16:3e:2e:3d:fd
inet addr:10.30.248.80 Bcast:10.30.251.255 Mask:255.255.252.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3856596 errors:0 dropped:0 overruns:0 frame:0
TX packets:4253613 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:827402268 (827.4 MB) TX bytes:510838231 (510.8 MB)
eth1 Link encap:Ethernet HWaddr 00:16:3e:2c:db:d1
inet addr:47.93.161.177 Bcast:47.93.163.255 Mask:255.255.252.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:890451 errors:0 dropped:0 overruns:0 frame:0
TX packets:825607 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1695352720 (1.6 GB) TX bytes:62341312 (62.3 MB)
tunl0 Link encap:IPIP Tunnel HWaddr
inet addr:192.168.31.128 Mask:255.255.255.255
UP RUNNING NOARP MTU:1440 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
root@iZ2zegw6nmd5t5qxy35lh0Z:~# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 47.93.163.247 0.0.0.0 UG 0 0 0 eth1
10.0.0.0 10.30.251.247 255.0.0.0 UG 0 0 0 eth0
10.30.248.0 0.0.0.0 255.255.252.0 U 0 0 0 eth0
47.93.160.0 0.0.0.0 255.255.252.0 U 0 0 0 eth1
100.64.0.0 10.30.251.247 255.192.0.0 UG 0 0 0 eth0
172.16.0.0 10.30.251.247 255.240.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
192.168.31.128 0.0.0.0 255.255.255.192 U 0 0 0 *
192.168.31.149 0.0.0.0 255.255.255.255 UH 0 0 0 cali3567b3362cc
192.168.31.150 0.0.0.0 255.255.255.255 UH 0 0 0 cali9d04015b0e7
root@iZ2zegw6nmd5t5qxy35lh0Z:~# ip route list
default via 47.93.163.247 dev eth1
10.0.0.0/8 via 10.30.251.247 dev eth0
10.30.248.0/22 dev eth0 proto kernel scope link src 10.30.248.80
47.93.160.0/22 dev eth1 proto kernel scope link src 47.93.161.177
100.64.0.0/10 via 10.30.251.247 dev eth0
172.16.0.0/12 via 10.30.251.247 dev eth0
172.17.0.0/24 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
blackhole 192.168.31.128/26 proto bird
192.168.31.149 dev cali3567b3362cc scope link
192.168.31.150 dev cali9d04015b0e7 scope link
// NOTE: 10.0.0.0/8 via 10.30.251.247 dev eth0
// I didn't change this one; it is still the default.
root@iZ2zegw6nmd5t5qxy35lh0Z:~# iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
3524 263K cali-PREROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:6gwbT8clXdHdC1b1 */
3527 263K KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
1031 53882 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 4 packets, 240 bytes)
pkts bytes target prot opt in out source destination
84174 5099K cali-OUTPUT all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:tVnHkvAo15HuiPy0 */
85201 5163K KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT 7 packets, 420 bytes)
pkts bytes target prot opt in out source destination
76279 4644K cali-POSTROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:O3lYWMrLQYEMJtB5 */
0 0 MASQUERADE all -- * !docker0 172.17.0.0/24 0.0.0.0/0
87179 5342K KUBE-POSTROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes postrouting rules */
43815 2646K WEAVE all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
Chain KUBE-MARK-DROP (0 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK or 0x8000
Chain KUBE-MARK-MASQ (5 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK or 0x4000
Chain KUBE-NODEPORTS (1 references)
pkts bytes target prot opt in out source destination
Chain KUBE-POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000
Chain KUBE-SEP-2VS52M6CEWASZVOP (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.31.149 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ tcp to:192.168.31.149:53
Chain KUBE-SEP-3XQHSFTDAPNNNDX3 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.31.150 0.0.0.0/0 /* kube-system/heapster: */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/heapster: */ tcp to:192.168.31.150:8082
Chain KUBE-SEP-CH7KJM5XKO5WGA6D (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 10.27.219.50 0.0.0.0/0 /* default/kubernetes:https */
3 180 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */ recent: SET name: KUBE-SEP-CH7KJM5XKO5WGA6D side: source mask: 255.255.255.255 tcp to:10.27.219.50:6443
Chain KUBE-SEP-X3WTOMIYJNS7APAN (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.31.149 0.0.0.0/0 /* kube-system/kube-dns:dns */
0 0 DNAT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */ udp to:192.168.31.149:53
Chain KUBE-SEP-YDCHDMTZNPMRRKCX (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 10.27.219.50 0.0.0.0/0 /* kube-system/calico-etcd: */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/calico-etcd: */ tcp to:10.27.219.50:6666
Chain KUBE-SERVICES (2 references)
pkts bytes target prot opt in out source destination
3 180 KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- * * 0.0.0.0/0 10.96.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-SVC-TCOU7JCQXEZGVUNU udp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
0 0 KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
0 0 KUBE-SVC-NTYB37XIWATNM25Y tcp -- * * 0.0.0.0/0 10.96.232.136 /* kube-system/calico-etcd: cluster IP */ tcp dpt:6666
0 0 KUBE-SVC-BJM46V3U5RZHCFRZ tcp -- * * 0.0.0.0/0 10.96.181.180 /* kube-system/heapster: cluster IP */ tcp dpt:80
0 0 KUBE-NODEPORTS all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
Chain KUBE-SVC-BJM46V3U5RZHCFRZ (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-3XQHSFTDAPNNNDX3 all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/heapster: */
Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-2VS52M6CEWASZVOP all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
pkts bytes target prot opt in out source destination
3 180 KUBE-SEP-CH7KJM5XKO5WGA6D all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-CH7KJM5XKO5WGA6D side: source mask: 255.255.255.255
0 0 KUBE-SEP-CH7KJM5XKO5WGA6D all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */
Chain KUBE-SVC-NTYB37XIWATNM25Y (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-YDCHDMTZNPMRRKCX all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/calico-etcd: */
Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-X3WTOMIYJNS7APAN all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */
Chain WEAVE (1 references)
pkts bytes target prot opt in out source destination
Chain cali-OUTPUT (1 references)
pkts bytes target prot opt in out source destination
84174 5099K cali-fip-dnat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:GBTAv2p5CwevEyJm */
Chain cali-POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
86501 5298K cali-fip-snat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:Z-c7XtVd2Bq7s_hA */
86501 5298K cali-nat-outgoing all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:nYKhEzDlr11Jccal */
0 0 MASQUERADE all -- * tunl0 0.0.0.0/0 0.0.0.0/0 /* cali:JHlpT-eSqR1TvyYm */ ADDRTYPE match src-type !LOCAL limit-out ADDRTYPE match src-type LOCAL
Chain cali-PREROUTING (1 references)
pkts bytes target prot opt in out source destination
3524 263K cali-fip-dnat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:r6XmIziWUJsdOK6Z */
Chain cali-fip-dnat (2 references)
pkts bytes target prot opt in out source destination
Chain cali-fip-snat (1 references)
pkts bytes target prot opt in out source destination
Chain cali-nat-outgoing (1 references)
pkts bytes target prot opt in out source destination
29 1726 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:Wd76s91357Uv7N3v */ match-set cali4-masq-ipam-pools src ! match-set cali4-all-ipam-pools dst
The problem was found with calicoctl node status: the calico/node instances were using public IPs to communicate with each other, but nodes in AliCloud sit behind a firewall, so they cannot reach each other on their public addresses.
As gunjan5 suggested, I used the IP_AUTODETECTION_METHOD environment variable to specify the internal interface. Problem solved.
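For reference, one way to set IP_AUTODETECTION_METHOD is on the calico-node DaemonSet; a sketch, assuming the internal NIC is eth0 as in the ifconfig output above:

```shell
# Tell calico-node to autodetect its address from the internal
# interface (eth0) rather than the public one (eth1).
kubectl -n kube-system set env daemonset/calico-node \
  IP_AUTODETECTION_METHOD=interface=eth0

# Then check that BGP sessions are established over the internal IPs:
calicoctl node status
```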
I'm not sure what the problem is, but here are a couple of things to consider:
I am not familiar with AliCloud, but some cloud providers need special handling. For example, on GCE, IP-in-IP traffic must be explicitly allowed: http://docs.projectcalico.org/v2.1/getting-started/kubernetes/installation/gce.
I see a weave interface on your master, so I'm wondering whether Weave could have left something behind that is causing a problem.
Also, as was suggested in your issue https://github.com/projectcalico/cni-plugin/issues/314, you should check calicoctl node status on the nodes to see whether BGP peering is working as expected.
Related
My test setup is as follows:
Ubuntu 22.04
Kernel 5.15.1025 Realtime
I210 enp1s0 (10.1.180.98)
I225 enp2s0 (10.1.180.97)
Netgear GS108 Switch
enp1s0 and enp2s0 are connected to the switch
sending UDP packets over enp1s0 to multicast address 224.0.0.22
listening on enp2s0 (-> external loopback)
open62541 UDP pubsub
General:
# ifconfig
enp1s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.1.180.98 netmask 255.255.255.0 broadcast 10.1.180.255
inet6 fe80::36fc:cf83:b6f7:e7eb prefixlen 64 scopeid 0x20<link>
ether 00:07:32:a5:c3:88 txqueuelen 1000 (Ethernet)
RX packets 10823 bytes 3936173 (3.9 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 287226 bytes 29921782 (29.9 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0x7fe00000-7fe1ffff
enp2s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.1.180.97 netmask 255.255.255.0 broadcast 10.1.180.255
inet6 fe80::a22:bab1:5e74:d3ad prefixlen 64 scopeid 0x20<link>
ether 00:07:32:a5:c3:89 txqueuelen 1000 (Ethernet)
RX packets 287442 bytes 29411683 (29.4 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3506 bytes 174754 (174.7 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0x7fc00000-7fcfffff
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 10698 bytes 924534 (924.5 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 10698 bytes 924534 (924.5 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
# netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 10.1.180.10 0.0.0.0 UG 0 0 0 enp1s0
0.0.0.0 10.1.180.10 0.0.0.0 UG 0 0 0 enp2s0
10.1.180.0 0.0.0.0 255.255.255.0 U 0 0 0 enp2s0
10.1.180.0 0.0.0.0 255.255.255.0 U 0 0 0 enp1s0
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 enp2s0
# netstat -g
IPv6/IPv4 Group Memberships
Interface RefCnt Group
--------------- ------ ---------------------
lo 1 224.0.0.251
lo 1 224.0.0.1
enp1s0 1 224.0.0.251
enp1s0 1 224.0.0.1
enp2s0 1 224.0.0.22
enp2s0 1 224.0.0.251
enp2s0 1 224.0.0.1
lo 1 ff02::fb
lo 1 ip6-allnodes
lo 1 ff01::1
enp1s0 1 ff02::fb
enp1s0 1 ff02::1:fff7:e7eb
enp1s0 1 ip6-allnodes
enp1s0 1 ff01::1
enp2s0 1 ff02::fb
enp2s0 1 ff02::1:ff74:d3ad
enp2s0 1 ip6-allnodes
enp2s0 1 ff01::1
# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Before sending:
# netstat -i
Kernel Interface table
Iface MTU RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
enp1s0 1500 12397 0 0 0 572786 0 0 0 BMRU
enp2s0 1500 562000 0 0 0 4015 0 0 0 BMRU
lo 65536 12782 0 0 0 12782 0 0 0 LRU
# netstat -s -u
IcmpMsg:
InType3: 6576
OutType3: 6576
Udp:
5710 packets received
902 packets to unknown port received
0 packet receive errors
576693 packets sent
0 receive buffer errors
0 send buffer errors
IgnoredMulti: 259
UdpLite:
IpExt:
InMcastPkts: 110
OutMcastPkts: 567399
InBcastPkts: 259
InOctets: 54256683
OutOctets: 52916072
InMcastOctets: 10142
OutMcastOctets: 50498445
InBcastOctets: 19383
InNoECTPkts: 574627
MPTcpExt:
# ethtool -S enp2s0 | grep rx
rx_packets: 561920
rx_bytes: 59893407
rx_broadcast: 5508
rx_multicast: 556412
rx_crc_errors: 0
rx_no_buffer_count: 0
rx_missed_errors: 0
rx_long_length_errors: 0
rx_short_length_errors: 0
rx_align_errors: 0
rx_flow_control_xon: 0
rx_flow_control_xoff: 0
rx_long_byte_count: 59893407
rx_smbus: 0
os2bmc_rx_by_bmc: 0
os2bmc_rx_by_host: 0
rx_hwtstamp_cleared: 0
rx_lpi_counter: 0
rx_errors: 0
rx_length_errors: 0
rx_over_errors: 0
rx_frame_errors: 0
rx_fifo_errors: 0
rx_queue_0_packets: 561750
rx_queue_0_bytes: 57629925
rx_queue_0_drops: 0
rx_queue_0_csum_err: 0
rx_queue_0_alloc_failed: 0
rx_queue_1_packets: 0
rx_queue_1_bytes: 0
rx_queue_1_drops: 0
rx_queue_1_csum_err: 0
rx_queue_1_alloc_failed: 0
rx_queue_2_packets: 148
rx_queue_2_bytes: 13290
rx_queue_2_drops: 0
rx_queue_2_csum_err: 0
rx_queue_2_alloc_failed: 0
rx_queue_3_packets: 22
rx_queue_3_bytes: 2512
rx_queue_3_drops: 0
rx_queue_3_csum_err: 0
rx_queue_3_alloc_failed: 0
After sending:
# netstat -i
Kernel Interface table
Iface MTU RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
enp1s0 1500 12465 0 0 0 618087 0 0 0 BMRU
enp2s0 1500 607349 0 0 0 4031 0 0 0 BMRU
lo 65536 12800 0 0 0 12800 0 0 0 LRU
# netstat -s -u
IcmpMsg:
InType3: 6588
OutType3: 6588
Udp:
5715 packets received
902 packets to unknown port received
0 packet receive errors
621972 packets sent
0 receive buffer errors
0 send buffer errors
IgnoredMulti: 263
UdpLite:
IpExt:
InMcastPkts: 112
OutMcastPkts: 612677
InBcastPkts: 263
InOctets: 58289081
OutOctets: 56953872
InMcastOctets: 10222
OutMcastOctets: 54527991
InBcastOctets: 19816
InNoECTPkts: 619936
MPTcpExt:
# ethtool -S enp2s0 | grep rx
rx_packets: 607351
rx_bytes: 64748001
rx_broadcast: 5666
rx_multicast: 601685
rx_crc_errors: 0
rx_no_buffer_count: 0
rx_missed_errors: 0
rx_long_length_errors: 0
rx_short_length_errors: 0
rx_align_errors: 0
rx_flow_control_xon: 0
rx_flow_control_xoff: 0
rx_long_byte_count: 64748001
rx_smbus: 0
os2bmc_rx_by_bmc: 0
os2bmc_rx_by_host: 0
rx_hwtstamp_cleared: 0
rx_lpi_counter: 0
rx_errors: 0
rx_length_errors: 0
rx_over_errors: 0
rx_frame_errors: 0
rx_fifo_errors: 0
rx_queue_0_packets: 607176
rx_queue_0_bytes: 62302224
rx_queue_0_drops: 0
rx_queue_0_csum_err: 0
rx_queue_0_alloc_failed: 0
rx_queue_1_packets: 0
rx_queue_1_bytes: 0
rx_queue_1_drops: 0
rx_queue_1_csum_err: 0
rx_queue_1_alloc_failed: 0
rx_queue_2_packets: 153
rx_queue_2_bytes: 13861
rx_queue_2_drops: 0
rx_queue_2_csum_err: 0
rx_queue_2_alloc_failed: 0
rx_queue_3_packets: 22
rx_queue_3_bytes: 2512
rx_queue_3_drops: 0
rx_queue_3_csum_err: 0
rx_queue_3_alloc_failed: 0
Dropwatch output is as follows:
# sudo dropwatch -l ksa
2 drops at igmp_rcv+10c (0xffffffff9dd7202c) [software]
1 drops at unix_stream_connect+36a (0xffffffff9ddbb10a) [software]
2 drops at ip_rcv_finish_core.constprop.0+19c (0xffffffff9dd1930c) [software]
2048 drops at ip_rcv_finish_core.constprop.0+19c (0xffffffff9dd1930c) [software]
2036 drops at ip_rcv_finish_core.constprop.0+19c (0xffffffff9dd1930c) [software]
1 drops at __udp4_lib_mcast_deliver+31f (0xffffffff9dd5d67f) [software]
1 drops at __udp4_lib_mcast_deliver+31f (0xffffffff9dd5d67f) [software]
If I run exactly the same setup (identical UDP packets, verified with tcpdump) against a real second Windows device, receiving works. But this "external loopback" doesn't receive anything (I want to build a TSN setup this way, so the Windows machine is not an option).
If I don't specify the interface for receiving, I do get the packets (but I don't know whether they actually came over the loopback).
I tried the following steps without success:
Disabling rp_filter (in every combination across all available interfaces)
Enabling promiscuous mode (though the ethtool output suggests there is no problem on the NIC side)
What did I miss?
Best regards,
Patrick
My goal is to send UDP multicast packets on the first interface and receive them on the second interface (for performance analysis and to simulate Master hardware that is currently missing).
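For what it's worth, the drops at ip_rcv_finish_core in the dropwatch output suggest the packets are being rejected as martians because their source IP belongs to the receiving host itself; on Linux, receiving traffic whose source address is local usually needs accept_local in addition to disabling rp_filter. A sketch, not verified on this exact setup:

```shell
# Allow enp2s0 to accept packets whose source IP (10.1.180.98) is a
# local address on this host; disabling rp_filter alone is not enough.
sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.enp2s0.rp_filter=0
sysctl -w net.ipv4.conf.enp2s0.accept_local=1
# The sender must also pin the egress NIC explicitly (IP_MULTICAST_IF
# for multicast), and the receiver must join the group on enp2s0.
```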
I need to debug an app running on the TV.
The command I am running:
ares-inspect -d webOS_TV -a myapp --open
The error I am getting:
> Session#forward() failed forwarding client localPort: 0
> (inCnx.remotePort: 55431 )=> devicePort: 9998 ares-inspect WARN
> Session#forward() failed forwarding client localPort: 0 => devicePort:
> 9998 Session#forward() failed forwarding client localPort: 0
> (inCnx.remotePort: 55432 )=> devicePort: 9998 ares-inspect WARN
> Session#forward() failed forwarding client localPort: 0 => devicePort:
> 9998 Application Debugging - http://localhost:55430 Session#forward()
> failed forwarding client localPort: 0 (inCnx.remotePort: 55438 )=>
> devicePort: 9998 ares-inspect WARN Session#forward() failed forwarding
> client localPort: 0 => devicePort: 9998
Please help.
I cannot find anything listening on port 9998 on my TV:
ares-novacom -d webOS_TV -r 'netstat -at'
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:1900 0.0.0.0:* LISTEN
tcp 0 0 localhost:43725 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:1266 0.0.0.0:* LISTEN
tcp 0 0 localhost:domain 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:afs3-fileserver 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:ssg_http 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:ssg_https 0.0.0.0:* LISTEN
tcp 0 0 localhost:43259 0.0.0.0:* LISTEN
tcp 0 0 localhost:39547 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:1979 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:1088 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:9922 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:1602 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:dial_http 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:dial_tvapp 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:1927 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:11111 0.0.0.0:* LISTEN
tcp 0 0 192.168.0.106:dial_http 192.168.0.102:47096 ESTABLISHED
tcp 0 0 localhost:39547 localhost:50092 ESTABLISHED
tcp 0 0 localhost:39726 localhost:43259 ESTABLISHED
tcp 0 0 192.168.0.106:ssg_https 192.168.0.103:40102 ESTABLISHED
tcp 0 0 192.168.0.106:ssg_https 192.168.0.103:40056 ESTABLISHED
tcp 0 0 192.168.0.106:57364 192.168.0.103:38520 TIME_WAIT
tcp 0 0 192.168.0.106:9922 192.168.0.107:49603 ESTABLISHED
tcp 1 0 192.168.0.106:41720 192.168.0.1:60440 CLOSE_WAIT
tcp 0 0 192.168.0.106:57352 192.168.0.103:38520 TIME_WAIT
tcp 0 0 192.168.0.106:ssg_https 192.168.0.103:40046 ESTABLISHED
tcp 0 0 192.168.0.106:11111 192.168.0.107:65512 CLOSE_WAIT
tcp 0 0 localhost:50092 localhost:39547 ESTABLISHED
tcp 0 0 192.168.0.106:57358 192.168.0.103:38520 TIME_WAIT
tcp 0 0 localhost:43259 localhost:39726 ESTABLISHED
tcp 224 0 192.168.0.106:11111 192.168.0.107:65511 CLOSE_WAIT
tcp 0 0 :::36718 :::* LISTEN
tcp 0 0 localhost:domain :::* LISTEN
tcp 0 0 :::9922 :::* LISTEN
I had installed the production app, and my local app has the same app id, so the two conflicted.
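If anyone hits the same conflict: removing the installed production app before launching the inspector fixed it for me. A sketch with a hypothetical app id and package file name:

```shell
# Remove the production copy of the app (same id as the local build),
# reinstall the local build, then attach the inspector.
ares-install -d webOS_TV --remove com.example.myapp
ares-install -d webOS_TV ./com.example.myapp_1.0.0_all.ipk
ares-inspect -d webOS_TV -a com.example.myapp --open
```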
When I try to use the sbt shell in IntelliJ, the following error message appears:
Am I missing any settings?
Update
netstat -planet
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 0.0.0.0:41351 0.0.0.0:* LISTEN 1000 353424 573/java
tcp 0 0 127.0.0.1:63342 0.0.0.0:* LISTEN 1000 355674 573/java
tcp 0 0 0.0.0.0:33295 0.0.0.0:* LISTEN 1000 354289 573/java
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 101 16065 -
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 0 34359 -
tcp 0 0 127.0.0.1:6942 0.0.0.0:* LISTEN 1000 354471 573/java
tcp 0 0 10.0.2.15:42510 151.101.193.69:443 ESTABLISHED 1000 305742 5102/chromium-brows
tcp 0 0 10.0.2.15:44344 104.66.167.140:443 ESTABLISHED 1000 345081 5102/chromium-brows
tcp 0 0 10.0.2.15:37646 198.252.206.25:443 ESTABLISHED 1000 353330 5102/chromium-brows
tcp 0 0 10.0.2.15:41480 104.16.29.34:443 ESTABLISHED 1000 349198 5102/chromium-brows
tcp 564 0 10.0.2.15:43148 172.217.168.42:443 ESTABLISHED 1000 377410 5102/chromium-brows
tcp 0 0 10.0.2.15:59768 54.246.222.93:443 ESTABLISHED 1000 368480 5102/chromium-brows
tcp 0 0 10.0.2.15:40474 216.58.215.226:443 ESTABLISHED 1000 327204 5102/chromium-brows
tcp 0 0 10.0.2.15:37728 198.252.206.25:443 ESTABLISHED 1000 376381 5102/chromium-brows
tcp 0 0 10.0.2.15:40470 216.58.215.226:443 ESTABLISHED 1000 327196 5102/chromium-brows
tcp 0 0 10.0.2.15:56458 34.193.164.107:443 ESTABLISHED 1000 352517 5102/chromium-brows
tcp 0 0 10.0.2.15:52954 91.228.74.181:443 ESTABLISHED 1000 377413 5102/chromium-brows
tcp 0 0 10.0.2.15:52956 91.228.74.181:443 ESTABLISHED 1000 377416 5102/chromium-brows
tcp 0 0 10.0.2.15:39020 172.217.168.66:443 ESTABLISHED 1000 324280 5102/chromium-brows
tcp 0 0 10.0.2.15:39324 216.58.215.230:443 ESTABLISHED 1000 375801 5102/chromium-brows
tcp 0 0 10.0.2.15:38640 172.217.168.38:443 ESTABLISHED 1000 352610 5102/chromium-brows
tcp 0 0 10.0.2.15:50876 172.217.168.46:443 ESTABLISHED 1000 343823 5102/chromium-brows
tcp 0 0 10.0.2.15:36630 198.252.206.25:443 ESTABLISHED 1000 327486 5102/chromium-brows
tcp 0 0 10.0.2.15:40544 216.58.215.227:443 ESTABLISHED 1000 318302 5102/chromium-brows
tcp 0 0 10.0.2.15:47506 151.101.194.49:443 ESTABLISHED 1000 353343 5102/chromium-brows
tcp 311 0 10.0.2.15:38920 13.32.166.42:443 ESTABLISHED 1000 377418 5102/chromium-brows
tcp 0 0 10.0.2.15:54224 192.30.253.125:443 ESTABLISHED 1000 317858 5102/chromium-brows
tcp 0 0 10.0.2.15:49846 172.217.168.46:443 ESTABLISHED 1000 327213 5102/chromium-brows
tcp 0 0 10.0.2.15:56456 34.193.164.107:443 ESTABLISHED 1000 351738 5102/chromium-brows
tcp 32 0 10.0.2.15:42114 52.207.55.4:443 CLOSE_WAIT 1000 374681 5102/chromium-brows
tcp6 0 0 :::43209 :::* LISTEN 1000 355214 772/java
tcp6 0 0 ::1:631 :::* LISTEN 0 34358 -
tcp6 0 0 127.0.0.1:30107 :::* LISTEN 1000 355213 772/java
tcp6 0 0 127.0.0.1:43906 127.0.0.1:33295 TIME_WAIT 0 0 -
tcp6 0 0 127.0.0.1:43904 127.0.0.1:33295 TIME_WAIT 0 0 -
tcp6 0 0 127.0.0.1:43922 127.0.0.1:33295 TIME_WAIT 0 0 -
Update 2
What do I have to set?
This used to (2015) suggest a port already in use (as in ensime/ensime-server issue 921).
You get the same error with JBoss, for instance:
ERROR: transport error 202: bind failed: Cannot assign requested address
ERROR: JDWP Transport dt_socket failed to initialize, TRANSPORT_INIT(510)
JDWP exit error AGENT_ERROR_TRANSPORT_INIT(197): No transports initialized
Make sure there isn't already an sbt shell running elsewhere (netstat -planet).
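A quick way to check this without reading the whole netstat -planet listing (a hedged sketch; 5005 is only a placeholder, substitute the port from your bind error):

```shell
#!/bin/sh
# port_in_use PORT -> exit 0 if some process already listens on that TCP port.
# Tries ss first and falls back to netstat, since either tool may be missing.
port_in_use() {
    { ss -tln 2>/dev/null || netstat -tln 2>/dev/null; } | grep -q ":$1 "
}

if port_in_use 5005; then
    echo "port 5005 is taken -- close the other sbt shell or debugger first"
else
    echo "port 5005 is free"
fi
```

If the port is taken, running the check as root (e.g. the sudo netstat -planet suggested above) also shows which PID holds it.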
I am getting the following error when I try to join the cluster
sudo kubeadm join 10.1.1.150:6443 --token ypcdg7.w6pun0nd31c4q5c2 --discovery-token-ca-cert-hash sha256:1ac79447f8dee9d90d592a3ead3d6c54ce7046dcd0b3854917c93cf6bbff7894
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
I0710 13:39:56.080260 2415 kernel_validator.go:81] Validating kernel version
I0710 13:39:56.080423 2415 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "10.1.1.150:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.1.1.150:6443"
[discovery] Failed to request cluster info, will try again: [Get https://10.1.1.150:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 10.1.1.150:6443: connect: connection refused]
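(As an aside, the IPVS preflight warning above is unrelated to the connection-refused failure; it can be silenced by loading the modules the warning names. A hedged sketch, to be run as root; kube-proxy simply falls back to iptables mode when IPVS is unavailable.)

```shell
# Load the kernel modules listed in the kubeadm preflight warning.
# Failures are ignored: on kernels with builtin IPVS support the modules
# do not exist as loadable objects and the warning is moot.
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
    modprobe "$m" 2>/dev/null || true
done
lsmod 2>/dev/null | grep '^ip_vs' || echo "IPVS modules not loaded"
```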
I get the same error with curl on the machine that I'm trying to join to the cluster.
But if I run this
curl -k https://10.1.1.150:6443/api/v1/namespaces/kube-public/configmaps/cluster-info
on any other machine in the local net (10.1.1.0/24), I get a good JSON response.
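Since curl works from every other host, the cause is likely something local to the joining node. Two quick checks (a hedged sketch; 10.1.1.150 is the master's address from above): a proxy environment variable would intercept both curl and kubeadm, and a stray route would send the traffic out the wrong interface.

```shell
# Any *_proxy variable set here applies to curl and kubeadm alike.
env | grep -i '_proxy=' || echo "no proxy variables set"
# The chosen route should leave via the LAN interface (ens160 here).
ip route get 10.1.1.150 2>/dev/null || echo "could not resolve a route"
```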
Some useful info:
I can ping my master node (10.1.1.150).
Port 6443 is open on my master node; this is netstat on my master:
jp@tensor3:~$ sudo netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1110/sshd
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 6783/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 7375/kube-proxy
tcp 0 0 127.0.0.1:10251 0.0.0.0:* LISTEN 7120/kube-scheduler
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 7065/etcd
tcp 0 0 127.0.0.1:10252 0.0.0.0:* LISTEN 7058/kube-controlle
tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 7065/etcd
tcp 0 0 127.0.0.1:37491 0.0.0.0:* LISTEN 6783/kubelet
tcp6 0 0 :::22 :::* LISTEN 1110/sshd
tcp6 0 0 :::10250 :::* LISTEN 6783/kubelet
tcp6 0 0 :::6443 :::* LISTEN 7130/kube-apiserver
tcp6 0 0 :::10256 :::* LISTEN 7375/kube-proxy
udp 0 0 0.0.0.0:68 0.0.0.0:* 980/dhclient
udp 0 0 0.0.0.0:8472 0.0.0.0:* -
I'm using flannel for network communication
ifconfig and route on my master
jp@tensor3:~$ ifconfig -a
cni0 Link encap:Ethernet HWaddr 0a:58:0a:f4:00:01
inet addr:10.244.0.1 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::ec67:aeff:fe90:4fbc/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:31641 errors:0 dropped:0 overruns:0 frame:0
TX packets:31925 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2032440 (2.0 MB) TX bytes:11726575 (11.7 MB)
docker0 Link encap:Ethernet HWaddr 02:42:44:f4:4f:cb
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
ens160 Link encap:Ethernet HWaddr 00:50:56:3d:73:84
inet addr:10.1.1.150 Bcast:10.1.1.255 Mask:255.255.255.0
inet6 addr: fe80::250:56ff:fe3d:7384/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:294081 errors:0 dropped:0 overruns:0 frame:0
TX packets:74724 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:297445831 (297.4 MB) TX bytes:5900453 (5.9 MB)
flannel.1 Link encap:Ethernet HWaddr 06:a7:4c:1d:6f:cd
inet addr:10.244.0.0 Bcast:0.0.0.0 Mask:255.255.255.255
inet6 addr: fe80::4a7:4cff:fe1d:6fcd/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:8 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:1219503 errors:0 dropped:0 overruns:0 frame:0
TX packets:1219503 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:271937625 (271.9 MB) TX bytes:271937625 (271.9 MB)
vethd451ffcc Link encap:Ethernet HWaddr 8a:f1:f0:05:80:f3
inet6 addr: fe80::88f1:f0ff:fe05:80f3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:8799 errors:0 dropped:0 overruns:0 frame:0
TX packets:8860 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:687962 (687.9 KB) TX bytes:3256670 (3.2 MB)
vethed2073fb Link encap:Ethernet HWaddr ea:9e:43:5e:4e:30
inet6 addr: fe80::e89e:43ff:fe5e:4e30/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:8800 errors:0 dropped:0 overruns:0 frame:0
TX packets:8887 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:688228 (688.2 KB) TX bytes:3258477 (3.2 MB)
jp@tensor3:~$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.1.1.29 0.0.0.0 UG 0 0 0 ens160
10.1.1.0 0.0.0.0 255.255.255.0 U 0 0 0 ens160
10.244.0.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
ifconfig and route on my node
jp@tensor2:~$ ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:4e:2f:0e:97
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
ens160 Link encap:Ethernet HWaddr 00:50:56:8c:0a:cd
inet addr:10.1.1.151 Bcast:10.1.1.255 Mask:255.255.255.0
inet6 addr: fe80::250:56ff:fe8c:acd/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:17470 errors:0 dropped:0 overruns:0 frame:0
TX packets:913 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1511461 (1.5 MB) TX bytes:101801 (101.8 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:160 errors:0 dropped:0 overruns:0 frame:0
TX packets:160 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:11840 (11.8 KB) TX bytes:11840 (11.8 KB)
jp@tensor2:~$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.1.1.29 0.0.0.0 UG 0 0 0 ens160
10.1.1.0 0.0.0.0 255.255.255.0 U 0 0 0 ens160
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
and this is the status of my cluster
jp@tensor3:~$ sudo kubectl get all --namespace=kube-system
[sudo] password for jp:
NAME READY STATUS RESTARTS AGE
pod/coredns-78fcdf6894-bblcs 1/1 Running 0 1h
pod/coredns-78fcdf6894-wrmj4 1/1 Running 0 1h
pod/etcd-tensor3 1/1 Running 0 1h
pod/kube-apiserver-tensor3 1/1 Running 0 1h
pod/kube-controller-manager-tensor3 1/1 Running 0 1h
pod/kube-flannel-ds-amd64-p7jmq 1/1 Running 0 1h
pod/kube-proxy-qg7jj 1/1 Running 0 1h
pod/kube-scheduler-tensor3 1/1 Running 0 1h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 1h
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/kube-flannel-ds-amd64 1 1 1 1 1 beta.kubernetes.io/arch=amd64 1h
daemonset.apps/kube-flannel-ds-arm 0 0 0 0 0 beta.kubernetes.io/arch=arm 1h
daemonset.apps/kube-flannel-ds-arm64 0 0 0 0 0 beta.kubernetes.io/arch=arm64 1h
daemonset.apps/kube-flannel-ds-ppc64le 0 0 0 0 0 beta.kubernetes.io/arch=ppc64le 1h
daemonset.apps/kube-flannel-ds-s390x 0 0 0 0 0 beta.kubernetes.io/arch=s390x 1h
daemonset.apps/kube-proxy 1 1 1 1 1 beta.kubernetes.io/arch=amd64 1h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/coredns 2 2 2 2 1h
NAME DESIRED CURRENT READY AGE
replicaset.apps/coredns-78fcdf6894 2 2 2 1h
On both machines I'm using:
Docker version 1.13.1
Kubernetes version 1.11
I ran into similar behaviour on Ubuntu 16.04 after a reboot.
I did
sudo kubeadm join ...
It failed as described above.
Then I did:
sudo su
kubeadm init
exit
Then I tried the join again, and the second time it succeeded.
My Vagrantfile
hosts = {
"host01" => "192.168.11.101",
"host02" => "192.168.11.102",
}
Vagrant.configure("2") do |config|
config.ssh.username = "root"
config.ssh.password = "vagrant"
config.ssh.insert_key = true
hosts.each_with_index do |(name,ip),index|
config.vm.define name do |machine|
machine.vm.box = "centos7"
machine.vm.box_check_update = false
machine.vm.hostname = name
machine.vm.synced_folder "/data", "/data"
machine.vm.network :private_network, ip: ip
machine.vm.provider "virtualbox" do |v|
v.name = name
v.customize ["modifyvm", :id, "--memory", 2048]
end
end
end
end
Ansible template for generating /etc/hosts:
127.0.0.1 localhost
{% for host in groups['all'] %}
{{ hostvars[host]['ansible_' + iface].ipv4.address }} {{ host }}
{% endfor %}
Ansible task:
- name: Create the hosts file for all machines
template: src=hosts.j2 dest=/etc/hosts
But I get this result:
[root@host01 ~]# cat /etc/hosts
127.0.0.1 localhost
10.0.2.15 host01
10.0.2.15 host02
ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 0.0.0.0
inet6 fe80::42:49ff:fed1:eebb prefixlen 64 scopeid 0x20<link>
ether 02:42:49:d1:ee:bb txqueuelen 0 (Ethernet)
RX packets 77 bytes 6065 (5.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 99 bytes 8572 (8.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.2.15 netmask 255.255.255.0 broadcast 10.0.2.255
inet6 fe80::a00:27ff:fede:e0e prefixlen 64 scopeid 0x20<link>
ether 08:00:27:de:0e:0e txqueuelen 1000 (Ethernet)
RX packets 785483 bytes 57738892 (55.0 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 777457 bytes 1957320412 (1.8 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.11.101 netmask 255.255.255.0 broadcast 192.168.11.255
ether 08:00:27:15:2c:64 txqueuelen 1000 (Ethernet)
RX packets 41445 bytes 39878552 (38.0 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 18055 bytes 2113498 (2.0 MiB)
In ifconfig, only the enp0s8 address differs between host01 and host02 (192.168.11.101 vs 192.168.11.102).
Did host01 and host02 get the same IP??
host01 runs a Docker registry:
on host01, curl http://host01:5006/v2/_catalog works;
on host02, curl http://host01:5006/v2/_catalog does not.
host01 and host02 got the same IP??
Yes. That's how Vagrant works: the first NIC on every box is a NAT adapter that always gets 10.0.2.15, and Vagrant uses it to connect to the machine for orchestration, whatever base box the machine was built from.
There's nothing strange about it: the NAT adapters are attached to separate virtual switches in VirtualBox, so the identical addresses never clash.
I just want host01 and host02 to be able to reach each other by hostname.
Use the other interface (enp0s8) as the value of iface in your Jinja2 template (you did not show where iface is set in the question, anyway).
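A minimal sketch of that fix, assuming (as the ifconfig output above suggests) the private-network NIC is named enp0s8 on every box; adjust the name if yours differs:

```yaml
# Hedged sketch: set iface to the private_network NIC so the template
# picks each host's 192.168.11.x address instead of the shared 10.0.2.15.
- name: Create the hosts file for all machines
  hosts: all
  vars:
    iface: enp0s8
  tasks:
    - name: Render /etc/hosts from each host's private-network address
      template:
        src: hosts.j2
        dest: /etc/hosts
```

With iface set this way, the template line `{{ hostvars[host]['ansible_' + iface].ipv4.address }}` expands to 192.168.11.101 for host01 and 192.168.11.102 for host02.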