Cannot access pod network through master node - kubernetes

Following the tutorial at https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/, I deployed a single-node Kubernetes cluster with the Canal network plugin.
# kubeadm init --pod-network-cidr 10.244.0.0/16 --kubernetes-version stable-1.9
The kube-dns containers are not all running:
# kubectl -n kube-system get pod
NAME READY STATUS RESTARTS AGE
canal-mpzrt 3/3 Running 0 6h
etcd-gavin-k8s 1/1 Running 0 6h
kube-apiserver-gavin-k8s 1/1 Running 0 6h
kube-controller-manager-gavin-k8s 1/1 Running 0 6h
kube-dns-6f4fd4bdf-fc8pd 2/3 Running 0 53s
kube-proxy-vj2r9 1/1 Running 0 2h
kube-scheduler-gavin-k8s 1/1 Running 0 6h
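Since kube-dns shows only 2/3 containers ready, describing the pod shows which container's readiness probe is failing before digging into logs:
# kubectl -n kube-system describe pod kube-dns-6f4fd4bdf-fc8pd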
# kubectl -n kube-system logs kube-dns-6f4fd4bdf-fc8pd kubedns
I0425 08:40:41.303524 1 dns.go:48] version: 1.14.6-3-gc36cb11
I0425 08:40:41.304274 1 server.go:69] Using configuration read from directory: /kube-dns-config with period 10s
I0425 08:40:41.304308 1 server.go:112] FLAG: --alsologtostderr="false"
I0425 08:40:41.304316 1 server.go:112] FLAG: --config-dir="/kube-dns-config"
I0425 08:40:41.304326 1 server.go:112] FLAG: --config-map=""
I0425 08:40:41.304330 1 server.go:112] FLAG: --config-map-namespace="kube-system"
I0425 08:40:41.304334 1 server.go:112] FLAG: --config-period="10s"
I0425 08:40:41.304340 1 server.go:112] FLAG: --dns-bind-address="0.0.0.0"
I0425 08:40:41.304343 1 server.go:112] FLAG: --dns-port="10053"
I0425 08:40:41.304349 1 server.go:112] FLAG: --domain="cluster.local."
I0425 08:40:41.304354 1 server.go:112] FLAG: --federations=""
I0425 08:40:41.304359 1 server.go:112] FLAG: --healthz-port="8081"
I0425 08:40:41.304363 1 server.go:112] FLAG: --initial-sync-timeout="1m0s"
I0425 08:40:41.304367 1 server.go:112] FLAG: --kube-master-url=""
I0425 08:40:41.304372 1 server.go:112] FLAG: --kubecfg-file=""
I0425 08:40:41.304376 1 server.go:112] FLAG: --log-backtrace-at=":0"
I0425 08:40:41.304382 1 server.go:112] FLAG: --log-dir=""
I0425 08:40:41.304386 1 server.go:112] FLAG: --log-flush-frequency="5s"
I0425 08:40:41.304391 1 server.go:112] FLAG: --logtostderr="true"
I0425 08:40:41.304394 1 server.go:112] FLAG: --nameservers=""
I0425 08:40:41.304398 1 server.go:112] FLAG: --stderrthreshold="2"
I0425 08:40:41.304401 1 server.go:112] FLAG: --v="2"
I0425 08:40:41.304405 1 server.go:112] FLAG: --version="false"
I0425 08:40:41.304411 1 server.go:112] FLAG: --vmodule=""
I0425 08:40:41.304482 1 server.go:194] Starting SkyDNS server (0.0.0.0:10053)
I0425 08:40:41.304700 1 server.go:213] Skydns metrics enabled (/metrics:10055)
I0425 08:40:41.304709 1 dns.go:146] Starting endpointsController
I0425 08:40:41.304715 1 dns.go:149] Starting serviceController
I0425 08:40:41.308584 1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0425 08:40:41.308603 1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I0425 08:40:41.804866 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0425 08:40:42.304875 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0425 08:40:42.804873 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0425 08:40:43.304871 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0425 08:40:43.804868 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0425 08:40:44.304880 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0425 08:40:44.804873 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0425 08:40:45.304869 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0425 08:40:45.804863 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0425 08:40:46.304833 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0425 08:40:46.804868 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0425 08:40:47.304876 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0425 08:40:47.804878 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
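The "Waiting for services and endpoints to be initialized from apiserver..." loop means kubedns never finishes its initial list/watch against the apiserver, which it reaches through the kubernetes Service ClusterIP (10.96.0.1:443, DNATed to 192.168.80.167:6443 by the nat rules shown later). A quick hedged check from inside the pod, assuming the image ships nc with -z support:
/ # nc -zv -w 2 10.96.0.1 443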
I found that the root cause of the kube-dns failure is that containers in pods cannot reach my machine's physical IP.
The master node runs at 192.168.80.167:
# kubectl cluster-info
Kubernetes master is running at https://192.168.80.167:6443
KubeDNS is running at https://192.168.80.167:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
192.168.80.167 is the address of the physical network bridge on my machine:
# ifconfig br0
br0 Link encap:Ethernet HWaddr 24:5E:BE:0C:C5:92
inet addr:192.168.80.167 Bcast:192.168.81.255 Mask:255.255.254.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4661901 errors:0 dropped:191628 overruns:0 frame:0
TX packets:317984 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1116345980 (1.0 GiB) TX bytes:56761158 (54.1 MiB)
# brctl show br0
bridge name bridge id STP enabled interfaces
br0 8000.245ebe0cc592 no eth0
The kubedns container cannot reach the physical bridge IP of my machine, so it fails:
# kubectl -n kube-system exec -it kube-dns-6f4fd4bdf-fc8pd --container kubedns -- sh
/ # ping 192.168.80.167
PING 192.168.80.167 (192.168.80.167): 56 data bytes
^C
--- 192.168.80.167 ping statistics ---
16 packets transmitted, 0 packets received, 100% packet loss
The strange thing is that kubedns can reach other machines in the LAN; it only fails to reach the machine running the pod.
/ # ping 192.168.80.107
PING 192.168.80.107 (192.168.80.107): 56 data bytes
64 bytes from 192.168.80.107: seq=0 ttl=63 time=0.361 ms
64 bytes from 192.168.80.107: seq=1 ttl=63 time=0.342 ms
64 bytes from 192.168.80.107: seq=2 ttl=63 time=4.112 ms
^C
--- 192.168.80.107 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.342/1.605/4.112 ms
Analyzing the network traffic with tcpdump, the traffic coming in from caliec0efa8668a is not forwarded into br0, so nothing answers it:
# tcpdump -i caliec0efa8668a -Q inout | grep ICMP
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on caliec0efa8668a, link-type EN10MB (Ethernet), capture size 262144 bytes
09:05:31.950671 IP 10.244.0.3 > Gavin-K8S: ICMP echo request, id 34560, seq 54, length 64
09:05:32.950733 IP 10.244.0.3 > Gavin-K8S: ICMP echo request, id 34560, seq 55, length 64
09:05:33.950794 IP 10.244.0.3 > Gavin-K8S: ICMP echo request, id 34560, seq 56, length 64
# tcpdump -i br0 -Q inout | grep ICMP
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br0, link-type EN10MB (Ethernet), capture size 262144 bytes
P.S.: All user pods are in the same situation as kube-dns: pods cannot reach the node running them, but they can reach other machines.
On the host (master node), checking the routing table:
# ip route show
default via 192.168.80.254 dev br0 proto static metric 100
10.0.3.0/24 dev lxcbr0 proto kernel scope link src 10.0.3.1
10.0.5.0/24 dev docker0 proto kernel scope link src 10.0.5.1 dead linkdown
10.244.0.4 dev calic0b238d4ce2 scope link
10.244.0.6 dev cali45026c409f9 scope link
10.244.0.7 dev caliec0efa8668a scope link
169.254.0.0/16 dev docker_gwbridge proto kernel scope link src 169.254.8.151
192.168.80.0/23 dev br0 proto kernel scope link src 192.168.80.167
# ip route get 192.168.80.167
local 192.168.80.167 dev lo src 192.168.80.167
cache <local>
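The ip route get above simulates a locally generated packet. To simulate the lookup the kernel performs for a packet arriving from the pod, it also accepts a source address and input interface (pod IP and cali interface taken from the routing table above); an error or an unexpected result here points at a custom policy rule, which ip rule show would reveal:
# ip route get 192.168.80.167 from 10.244.0.7 iif caliec0efa8668a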
Inside the container, checking the routing table:
/ # ip route show
default via 169.254.1.1 dev eth0
169.254.1.1 dev eth0
/ # ip route get 192.168.80.167
192.168.80.167 via 169.254.1.1 dev eth0 src 10.244.0.7
Result of iptables-save:
# Generated by iptables-save v1.6.0 on Wed Apr 25 21:25:22 2018
*raw
:PREROUTING ACCEPT [5988958:1384538104]
:OUTPUT ACCEPT [4321136:929267397]
:cali-OUTPUT - [0:0]
:cali-PREROUTING - [0:0]
:cali-failsafe-in - [0:0]
:cali-failsafe-out - [0:0]
:cali-from-host-endpoint - [0:0]
:cali-to-host-endpoint - [0:0]
-A PREROUTING -m comment --comment "cali:6gwbT8clXdHdC1b1" -j cali-PREROUTING
-A OUTPUT -m comment --comment "cali:tVnHkvAo15HuiPy0" -j cali-OUTPUT
-A cali-OUTPUT -m comment --comment "cali:WX1xZBEtmbS0Rhjs" -j MARK --set-xmark 0x0/0xf000000
-A cali-OUTPUT -m comment --comment "cali:iE00ZyllJNXfrlg_" -j cali-to-host-endpoint
-A cali-OUTPUT -m comment --comment "cali:Asois4hxp1rUxwJS" -m mark --mark 0x1000000/0x1000000 -j ACCEPT
-A cali-PREROUTING -m comment --comment "cali:zatSDPVUhhPCk6Iy" -j MARK --set-xmark 0x0/0xf000000
-A cali-PREROUTING -i cali+ -m comment --comment "cali:-ES4EW0vxFmM81t8" -j MARK --set-xmark 0x4000000/0x4000000
-A cali-PREROUTING -m comment --comment "cali:VE1J3S_1t9q8GAsm" -m mark --mark 0x0/0x4000000 -j cali-from-host-endpoint
-A cali-PREROUTING -m comment --comment "cali:VX8l4jKL9w89GXz5" -m mark --mark 0x1000000/0x1000000 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:wWFQM43tJU7wwnFZ" -m multiport --dports 22 -j ACCEPT
-A cali-failsafe-in -p udp -m comment --comment "cali:LwNV--R8MjeUYacw" -m multiport --dports 68 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:73bZKoyDfOpFwC2T" -m multiport --dports 2379 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:QMFuWo6o-d9yOpNm" -m multiport --dports 2380 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:Kup7QkrsdmfGX0uL" -m multiport --dports 4001 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:xYYr5PEqDf_Pqfkv" -m multiport --dports 7001 -j ACCEPT
-A cali-failsafe-out -p udp -m comment --comment "cali:nbWBvu4OtudVY60Q" -m multiport --dports 53 -j ACCEPT
-A cali-failsafe-out -p udp -m comment --comment "cali:UxFu5cDK5En6dT3Y" -m multiport --dports 67 -j ACCEPT
COMMIT
# Completed on Wed Apr 25 21:25:22 2018
# Generated by iptables-save v1.6.0 on Wed Apr 25 21:25:22 2018
*nat
:PREROUTING ACCEPT [16:2103]
:INPUT ACCEPT [14:1981]
:OUTPUT ACCEPT [5:677]
:POSTROUTING ACCEPT [4:617]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-JPEBCQ2YOSKQPXKG - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:SYSDOCKER - [0:0]
:SYSNAT - [0:0]
:VPNNAT - [0:0]
:cali-OUTPUT - [0:0]
:cali-POSTROUTING - [0:0]
:cali-PREROUTING - [0:0]
:cali-fip-dnat - [0:0]
:cali-fip-snat - [0:0]
:cali-nat-outgoing - [0:0]
-A PREROUTING -m comment --comment "cali:6gwbT8clXdHdC1b1" -j cali-PREROUTING
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "cali:tVnHkvAo15HuiPy0" -j cali-OUTPUT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "cali:O3lYWMrLQYEMJtB5" -j cali-POSTROUTING
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
-A POSTROUTING -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/24 -j RETURN
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-JPEBCQ2YOSKQPXKG -s 192.168.80.167/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-JPEBCQ2YOSKQPXKG -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-JPEBCQ2YOSKQPXKG --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 192.168.80.167:6443
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-JPEBCQ2YOSKQPXKG --mask 255.255.255.255 --rsource -j KUBE-SEP-JPEBCQ2YOSKQPXKG
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-JPEBCQ2YOSKQPXKG
-A cali-OUTPUT -m comment --comment "cali:GBTAv2p5CwevEyJm" -j cali-fip-dnat
-A cali-POSTROUTING -m comment --comment "cali:Z-c7XtVd2Bq7s_hA" -j cali-fip-snat
-A cali-POSTROUTING -m comment --comment "cali:nYKhEzDlr11Jccal" -j cali-nat-outgoing
-A cali-PREROUTING -m comment --comment "cali:r6XmIziWUJsdOK6Z" -j cali-fip-dnat
-A cali-nat-outgoing -m comment --comment "cali:Wd76s91357Uv7N3v" -m set --match-set cali4-masq-ipam-pools src -m set ! --match-set cali4-all-ipam-pools dst -j MASQUERADE
COMMIT
# Completed on Wed Apr 25 21:25:23 2018
# Generated by iptables-save v1.6.0 on Wed Apr 25 21:25:23 2018
*mangle
:PREROUTING ACCEPT [1727587:391808161]
:INPUT ACCEPT [5150922:1211808224]
:FORWARD ACCEPT [1062:89161]
:OUTPUT ACCEPT [4321182:929275109]
:POSTROUTING ACCEPT [4331603:931649202]
:VPNCUSSETMARK - [0:0]
:VPNDEFSETMARK - [0:0]
:cali-PREROUTING - [0:0]
:cali-failsafe-in - [0:0]
:cali-from-host-endpoint - [0:0]
-A PREROUTING -m comment --comment "cali:6gwbT8clXdHdC1b1" -j cali-PREROUTING
-A PREROUTING -j VPNCUSSETMARK
-A PREROUTING -m mark --mark 0x0/0xffff -j VPNDEFSETMARK
-A VPNCUSSETMARK -m set --match-set vpnbr0 src -j MARK --set-xmark 0x900/0xff00
-A VPNCUSSETMARK -m set --match-set vpndocker0 src -j MARK --set-xmark 0xa00/0xff00
-A VPNCUSSETMARK -m set --match-set vpnlxcbr0 src -j MARK --set-xmark 0xc00/0xff00
-A cali-PREROUTING -m comment --comment "cali:6BJqBjBC7crtA-7-" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-PREROUTING -m comment --comment "cali:nE3PUa5RSRqBBvwx" -m mark --mark 0x1000000/0x1000000 -j ACCEPT
-A cali-PREROUTING -i cali+ -m comment --comment "cali:qgFofvzQe6yJPouQ" -j ACCEPT
-A cali-PREROUTING -m comment --comment "cali:o178eO5vvpj8e65z" -j cali-from-host-endpoint
-A cali-PREROUTING -m comment --comment "cali:5TQcm-i_T8rVGEEa" -m comment --comment "Host endpoint policy accepted packet." -m mark --mark 0x1000000/0x1000000 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:wWFQM43tJU7wwnFZ" -m multiport --dports 22 -j ACCEPT
-A cali-failsafe-in -p udp -m comment --comment "cali:LwNV--R8MjeUYacw" -m multiport --dports 68 -j ACCEPT
COMMIT
# Completed on Wed Apr 25 21:25:23 2018
# Generated by iptables-save v1.6.0 on Wed Apr 25 21:25:23 2018
*filter
:INPUT ACCEPT [3389:699050]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [2944:635600]
:DOCKER-USER - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-SERVICES - [0:0]
:SYSDOCKER - [0:0]
:SYSDOCKER-ISOLATION - [0:0]
:cali-FORWARD - [0:0]
:cali-INPUT - [0:0]
:cali-OUTPUT - [0:0]
:cali-failsafe-in - [0:0]
:cali-failsafe-out - [0:0]
:cali-from-host-endpoint - [0:0]
:cali-from-wl-dispatch - [0:0]
:cali-fw-cali45026c409f9 - [0:0]
:cali-fw-calic0b238d4ce2 - [0:0]
:cali-fw-caliec0efa8668a - [0:0]
:cali-pri-k8s_ns.default - [0:0]
:cali-pri-k8s_ns.kube-system - [0:0]
:cali-pro-k8s_ns.default - [0:0]
:cali-pro-k8s_ns.kube-system - [0:0]
:cali-to-host-endpoint - [0:0]
:cali-to-wl-dispatch - [0:0]
:cali-tw-cali45026c409f9 - [0:0]
:cali-tw-calic0b238d4ce2 - [0:0]
:cali-tw-caliec0efa8668a - [0:0]
:cali-wl-to-host - [0:0]
-A INPUT -m comment --comment "cali:Cz_u1IQiXIMmKD4c" -j cali-INPUT
-A INPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "cali:wUHhoiAYhphO9Mso" -j cali-FORWARD
-A FORWARD -m comment --comment "kubernetes forward rules" -j KUBE-FORWARD
-A FORWARD -s 10.244.0.0/16 -j ACCEPT
-A FORWARD -d 10.244.0.0/16 -j ACCEPT
-A FORWARD -i br0 -o caliec0efa8668a -j ACCEPT
-A FORWARD -i caliec0efa8668a -o br0 -j ACCEPT
-A OUTPUT -m comment --comment "cali:tVnHkvAo15HuiPy0" -j cali-OUTPUT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns has no endpoints" -m udp --dport 53 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp has no endpoints" -m tcp --dport 53 -j REJECT --reject-with icmp-port-unreachable
-A SYSDOCKER-ISOLATION -j RETURN
-A cali-FORWARD -i cali+ -m comment --comment "cali:X3vB2lGcBrfkYquC" -j cali-from-wl-dispatch
-A cali-FORWARD -o cali+ -m comment --comment "cali:UtJ9FnhBnFbyQMvU" -j cali-to-wl-dispatch
-A cali-FORWARD -i cali+ -m comment --comment "cali:Tt19HcSdA5YIGSsw" -j ACCEPT
-A cali-FORWARD -o cali+ -m comment --comment "cali:9LzfFCvnpC5_MYXm" -j ACCEPT
-A cali-FORWARD -m comment --comment "cali:7AofLLOqCM5j36rM" -j MARK --set-xmark 0x0/0xe000000
-A cali-FORWARD -m comment --comment "cali:QM1_joSl7tL76Az7" -m mark --mark 0x0/0x1000000 -j cali-from-host-endpoint
-A cali-FORWARD -m comment --comment "cali:C1QSog3bk0AykjAO" -j cali-to-host-endpoint
-A cali-FORWARD -m comment --comment "cali:DmFiPAmzcisqZcvo" -m comment --comment "Host endpoint policy accepted packet." -m mark --mark 0x1000000/0x1000000 -j ACCEPT
-A cali-INPUT -m comment --comment "cali:i7okJZpS8VxaJB3n" -m mark --mark 0x1000000/0x1000000 -j ACCEPT
-A cali-INPUT -i cali+ -m comment --comment "cali:JaoDb6CLdcGw8g0Y" -g cali-wl-to-host
-A cali-INPUT -m comment --comment "cali:c5eKVW2VdKQ_LiSM" -j MARK --set-xmark 0x0/0xf000000
-A cali-INPUT -m comment --comment "cali:hwQKYSlSCkpE_9uN" -j cali-from-host-endpoint
-A cali-INPUT -m comment --comment "cali:ttp8-serzKCP-bKZ" -m comment --comment "Host endpoint policy accepted packet." -m mark --mark 0x1000000/0x1000000 -j ACCEPT
-A cali-OUTPUT -m comment --comment "cali:YQSSJIsRcHjFbXaI" -m mark --mark 0x1000000/0x1000000 -j ACCEPT
-A cali-OUTPUT -o cali+ -m comment --comment "cali:KRjBsKsBcFBYKCEw" -j RETURN
-A cali-OUTPUT -m comment --comment "cali:3VKAQBcyUUW5kS_j" -j MARK --set-xmark 0x0/0xf000000
-A cali-OUTPUT -m comment --comment "cali:Z1mBCSH1XHM6qq0k" -j cali-to-host-endpoint
-A cali-OUTPUT -m comment --comment "cali:N0jyWt2RfBedKw3L" -m comment --comment "Host endpoint policy accepted packet." -m mark --mark 0x1000000/0x1000000 -j ACCEPT
-A cali-failsafe-in -p tcp -m comment --comment "cali:wWFQM43tJU7wwnFZ" -m multiport --dports 22 -j ACCEPT
-A cali-failsafe-in -p udp -m comment --comment "cali:LwNV--R8MjeUYacw" -m multiport --dports 68 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:73bZKoyDfOpFwC2T" -m multiport --dports 2379 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:QMFuWo6o-d9yOpNm" -m multiport --dports 2380 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:Kup7QkrsdmfGX0uL" -m multiport --dports 4001 -j ACCEPT
-A cali-failsafe-out -p tcp -m comment --comment "cali:xYYr5PEqDf_Pqfkv" -m multiport --dports 7001 -j ACCEPT
-A cali-failsafe-out -p udp -m comment --comment "cali:nbWBvu4OtudVY60Q" -m multiport --dports 53 -j ACCEPT
-A cali-failsafe-out -p udp -m comment --comment "cali:UxFu5cDK5En6dT3Y" -m multiport --dports 67 -j ACCEPT
-A cali-from-wl-dispatch -i cali45026c409f9 -m comment --comment "cali:QTLwRyKNiscc-kE7" -g cali-fw-cali45026c409f9
-A cali-from-wl-dispatch -i calic0b238d4ce2 -m comment --comment "cali:7mRUmkMzCXKDHDzk" -g cali-fw-calic0b238d4ce2
-A cali-from-wl-dispatch -i caliec0efa8668a -m comment --comment "cali:vI_cBpGlZQpakzSQ" -g cali-fw-caliec0efa8668a
-A cali-from-wl-dispatch -m comment --comment "cali:y5WqyrGI7OWfnqNM" -m comment --comment "Unknown interface" -j DROP
-A cali-fw-cali45026c409f9 -m comment --comment "cali:OTJIDsP3TegJFYqm" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-fw-cali45026c409f9 -m comment --comment "cali:uvhYBVFYvBcMfF1E" -m conntrack --ctstate INVALID -j DROP
-A cali-fw-cali45026c409f9 -m comment --comment "cali:N9Pier8knvEySzpb" -j MARK --set-xmark 0x0/0x1000000
-A cali-fw-cali45026c409f9 -m comment --comment "cali:6ctr2BZXeRQITWs2" -j cali-pro-k8s_ns.kube-system
-A cali-fw-cali45026c409f9 -m comment --comment "cali:Juq9dxqhxLUhudVk" -m comment --comment "Return if profile accepted" -m mark --mark 0x1000000/0x1000000 -j RETURN
-A cali-fw-cali45026c409f9 -m comment --comment "cali:o7CTzqIS9bu5DymV" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-fw-calic0b238d4ce2 -m comment --comment "cali:2dB9gQ0XK7ky-okg" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-fw-calic0b238d4ce2 -m comment --comment "cali:ywcP6SMI-Q-GlUyW" -m conntrack --ctstate INVALID -j DROP
-A cali-fw-calic0b238d4ce2 -m comment --comment "cali:wroMotnj-PmPY-A1" -j MARK --set-xmark 0x0/0x1000000
-A cali-fw-calic0b238d4ce2 -m comment --comment "cali:nOL8WwmNyRPNDCRb" -j cali-pro-k8s_ns.default
-A cali-fw-calic0b238d4ce2 -m comment --comment "cali:r1XYAvTJ5M_XMUux" -m comment --comment "Return if profile accepted" -m mark --mark 0x1000000/0x1000000 -j RETURN
-A cali-fw-calic0b238d4ce2 -m comment --comment "cali:8-iYoFbdlSboxtvI" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-fw-caliec0efa8668a -m comment --comment "cali:NvFOTdFzvt46kQfQ" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-fw-caliec0efa8668a -m comment --comment "cali:jxl0wYR8pO3dsQLg" -m conntrack --ctstate INVALID -j DROP
-A cali-fw-caliec0efa8668a -m comment --comment "cali:VlVHHstfJPnNr3LI" -j MARK --set-xmark 0x0/0x1000000
-A cali-fw-caliec0efa8668a -m comment --comment "cali:DlqVod2qRMSGS4t4" -j cali-pro-k8s_ns.default
-A cali-fw-caliec0efa8668a -m comment --comment "cali:LluPSlt2p5-XuwUs" -m comment --comment "Return if profile accepted" -m mark --mark 0x1000000/0x1000000 -j RETURN
-A cali-fw-caliec0efa8668a -m comment --comment "cali:23YDqnq73LBpscup" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-pri-k8s_ns.default -m comment --comment "cali:6MWuUqsVPzpSgE3L" -j MARK --set-xmark 0x1000000/0x1000000
-A cali-pri-k8s_ns.default -m comment --comment "cali:UGCdoOXoPRcONGv8" -m mark --mark 0x1000000/0x1000000 -j RETURN
-A cali-pri-k8s_ns.kube-system -m comment --comment "cali:plMTf6GGo5FLt-zw" -j MARK --set-xmark 0x1000000/0x1000000
-A cali-pri-k8s_ns.kube-system -m comment --comment "cali:d_ypsHpl3J96oOpx" -m mark --mark 0x1000000/0x1000000 -j RETURN
-A cali-pro-k8s_ns.default -m comment --comment "cali:DTsGE7pFaKbRuEBg" -j MARK --set-xmark 0x1000000/0x1000000
-A cali-pro-k8s_ns.default -m comment --comment "cali:4bIByWXruQ1DMcbo" -m mark --mark 0x1000000/0x1000000 -j RETURN
-A cali-pro-k8s_ns.kube-system -m comment --comment "cali:lDQGDZg5UANF5wIK" -j MARK --set-xmark 0x1000000/0x1000000
-A cali-pro-k8s_ns.kube-system -m comment --comment "cali:wn_dnW-P0COWnhhy" -m mark --mark 0x1000000/0x1000000 -j RETURN
-A cali-to-wl-dispatch -o cali45026c409f9 -m comment --comment "cali:c75T2Dgm3k-jJrbE" -g cali-tw-cali45026c409f9
-A cali-to-wl-dispatch -o calic0b238d4ce2 -m comment --comment "cali:qDV3G3z8-XF7ASpj" -g cali-tw-calic0b238d4ce2
-A cali-to-wl-dispatch -o caliec0efa8668a -m comment --comment "cali:0KGW9LSlkHoj3Pth" -g cali-tw-caliec0efa8668a
-A cali-to-wl-dispatch -m comment --comment "cali:jDu3duVnwTVndWys" -m comment --comment "Unknown interface" -j DROP
-A cali-tw-cali45026c409f9 -m comment --comment "cali:T8ds95eQAxnZl6cA" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-tw-cali45026c409f9 -m comment --comment "cali:sBFjo942EoAZxbwi" -m conntrack --ctstate INVALID -j DROP
-A cali-tw-cali45026c409f9 -m comment --comment "cali:7mrDpuB_JSOiwD-w" -j MARK --set-xmark 0x0/0x1000000
-A cali-tw-cali45026c409f9 -m comment --comment "cali:SZ7jptebHBWtu0ut" -j cali-pri-k8s_ns.kube-system
-A cali-tw-cali45026c409f9 -m comment --comment "cali:XZUosCvhE-CFRBZf" -m comment --comment "Return if profile accepted" -m mark --mark 0x1000000/0x1000000 -j RETURN
-A cali-tw-cali45026c409f9 -m comment --comment "cali:UPdmXt0SUq5GpdCk" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-tw-calic0b238d4ce2 -m comment --comment "cali:k8kHsWO63lPZ_T5S" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-tw-calic0b238d4ce2 -m comment --comment "cali:WcRO5jfNEyBl-P8e" -m conntrack --ctstate INVALID -j DROP
-A cali-tw-calic0b238d4ce2 -m comment --comment "cali:qgZ3s3ojXF7_0v41" -j MARK --set-xmark 0x0/0x1000000
-A cali-tw-calic0b238d4ce2 -m comment --comment "cali:l9FROf8cQyfmubvU" -j cali-pri-k8s_ns.default
-A cali-tw-calic0b238d4ce2 -m comment --comment "cali:i1mW8rmxu9TCd-T4" -m comment --comment "Return if profile accepted" -m mark --mark 0x1000000/0x1000000 -j RETURN
-A cali-tw-calic0b238d4ce2 -m comment --comment "cali:EOs-JJ221Us5p0EP" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-tw-caliec0efa8668a -m comment --comment "cali:_7y3hRmp6EU47Y0s" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A cali-tw-caliec0efa8668a -m comment --comment "cali:lqljOLOQn5ZkCC2p" -m conntrack --ctstate INVALID -j DROP
-A cali-tw-caliec0efa8668a -m comment --comment "cali:AGwqz_dfQJPaIJOa" -j MARK --set-xmark 0x0/0x1000000
-A cali-tw-caliec0efa8668a -m comment --comment "cali:IQNHtVteTcEbbzLF" -j cali-pri-k8s_ns.default
-A cali-tw-caliec0efa8668a -m comment --comment "cali:zFjCvYL15RsUfNaU" -m comment --comment "Return if profile accepted" -m mark --mark 0x1000000/0x1000000 -j RETURN
-A cali-tw-caliec0efa8668a -m comment --comment "cali:-GRpWsx8gV1ZNLvl" -m comment --comment "Drop if no profiles matched" -j DROP
-A cali-wl-to-host -m comment --comment "cali:Ee9Sbo10IpVujdIY" -j cali-from-wl-dispatch
-A cali-wl-to-host -m comment --comment "cali:nSZbcOoG1xPONxb8" -m comment --comment "Configured DefaultEndpointToHostAction" -j ACCEPT
COMMIT
# Completed on Wed Apr 25 21:25:23 2018

This is just a guess, but I think I know what the problem is.
Kubernetes uses iptables to manage traffic between pods and to process requests to services, including some NAT rules.
When you call a service from a node, your request is also processed by iptables, including NAT rules that match on the source IP.
But it looks like when you call the service from the same node, your packets do not match the Service's NAT rule and are not processed correctly.
Calico has a natOutgoing option which enables masquerading for all packets with destinations outside the pool.
With that option, Calico will masquerade the packets (replace the source IP with the IP of the node), so the traffic is routed as if it came from the node itself and is caught by the right Service NAT rule.
Looks like it might help.
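For reference, a minimal sketch of turning on natOutgoing for the pod pool with calicoctl; the pool name, apiVersion, and field names vary between Calico/Canal releases, so treat this as an illustration rather than exact syntax for your version:
cat <<EOF | calicoctl apply -f -
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 10.244.0.0/16
  natOutgoing: true
EOF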

It turned out to be the ip rules on my machine: they were blocking container network traffic destined for my physical IP. After I deleted the offending ip rule, the issue was resolved.
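For anyone hitting the same thing: ip rule show lists the policy rules, and anything beyond the three defaults (local, main, default tables) is a candidate; a rule can be deleted by its preference number. A hedged example with a hypothetical preference of 100, not my actual rule:
# ip rule show
# ip rule del pref 100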

Related

kubernetes coredns can't resolve?

CoreDNS state:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-8c79ffd8b-5ns7r 1/1 Running 0 121m 10.88.0.137 127.0.0.1
Service state:
kube-dns ClusterIP 10.0.0.10 53/UDP,53/TCP,9153/TCP 122m
Here is the information for the kube-dns service:
Name: kube-dns
Namespace: kube-system
Labels: addonmanager.kubernetes.io/mode=Reconcile
k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=CoreDNS
Annotations: prometheus.io/port: 9153
prometheus.io/scrape: true
Selector: k8s-app=kube-dns
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.0.0.10
IPs: 10.0.0.10
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 10.88.0.137:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 10.88.0.137:53
Port: metrics 9153/TCP
TargetPort: 9153/TCP
Endpoints: 10.88.0.137:9153
Session Affinity: None
Events: <none>
I created a "busybox" pod:
busybox 1/1 Running 7 (5m45s ago) 122m 10.88.0.139 127.0.0.1
./kubectl.sh exec -it busybox -- nslookup kube-dns.kube-system.svc.cluster.local
Domain resolution fails when the /etc/resolv.conf content is the following:
nameserver 10.0.0.10
options ndots:5
Error message:
Server: 10.0.0.10
Address 1: 10.0.0.10
nslookup: can't resolve 'kube-dns.kube-system.svc.cluster.local'
command terminated with exit code 1
But I can ping 10.88.0.137:
64 bytes from 10.88.0.137: seq=0 ttl=64 time=0.043 ms
64 bytes from 10.88.0.137: seq=1 ttl=64 time=0.114 ms
64 bytes from 10.88.0.137: seq=2 ttl=64 time=0.086 ms
./kubectl.sh exec -it busybox -- nslookup kube-dns.kube-system.svc.cluster.local
Domain resolution succeeds when the /etc/resolv.conf content is the following:
nameserver 10.88.0.137
options ndots:5
Server: 10.88.0.137
Address 1: 10.88.0.137 10-88-0-137.kube-dns.kube-system.svc.cluster.local
Name: kube-dns.kube-system.svc.cluster.local
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
When I start docker.service, it works.
Is this an iptables problem?
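One quick way to tell is to run the failing nslookup while watching the nat packet counters for the ClusterIP; if the counters stay at zero, the DNS packets never reach the DNAT rules at all. The full iptables-save dump follows:
# iptables -t nat -L KUBE-SERVICES -v -n | grep 10.0.0.10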
# Generated by iptables-save v1.8.4 on Thu Apr 28 11:32:57 2022
*mangle
:PREROUTING ACCEPT [286724:45943921]
:INPUT ACCEPT [286421:45917739]
:FORWARD ACCEPT [303:26182]
:OUTPUT ACCEPT [286333:44357718]
:POSTROUTING ACCEPT [286527:44373508]
:KUBE-IPTABLES-HINT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Thu Apr 28 11:32:57 2022
# Generated by iptables-save v1.8.4 on Thu Apr 28 11:32:57 2022
*nat
:PREROUTING ACCEPT [5:763]
:INPUT ACCEPT [5:763]
:OUTPUT ACCEPT [183:11131]
:POSTROUTING ACCEPT [183:11131]
:CNI-65c6fd99503bff7510c4bb02 - [0:0]
:CNI-8484e8a122ace36d26bd565d - [0:0]
:CNI-a854b6638aef4961af4db12b - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SEP-36TNAFFT2KZBXN2D - [0:0]
:KUBE-SEP-M2U7IOCBO7O5P45B - [0:0]
:KUBE-SEP-VO2PPQ7LAMK34B5H - [0:0]
:KUBE-SEP-WCWCTDIFXHIUSRHD - [0:0]
:KUBE-SEP-ZXXUG7AYMOGMIOPQ - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-V2OKYYMBY3REGZOG - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 10.88.0.137/32 -m comment --comment "name: \"containerd-net\" id: \"b53510e54e70e73ed24423f543149fad70cb974de592a636c99fda46d953dfe7\"" -j CNI-65c6fd99503bff7510c4bb02
-A POSTROUTING -s 10.88.0.138/32 -m comment --comment "name: \"containerd-net\" id: \"ab9a9580ba9c6f12470d6f2fc255bd1a1043818ad15b51b9548b9b38425ee0db\"" -j CNI-8484e8a122ace36d26bd565d
-A POSTROUTING -s 10.88.0.139/32 -m comment --comment "name: \"containerd-net\" id: \"d9aa074f2053417e3cba4387af7c92d3e6fd0b15eb145ee7298137f0aa170f02\"" -j CNI-a854b6638aef4961af4db12b
-A CNI-65c6fd99503bff7510c4bb02 -d 10.88.0.0/16 -m comment --comment "name: \"containerd-net\" id: \"b53510e54e70e73ed24423f543149fad70cb974de592a636c99fda46d953dfe7\"" -j ACCEPT
-A CNI-65c6fd99503bff7510c4bb02 ! -d 224.0.0.0/4 -m comment --comment "name: \"containerd-net\" id: \"b53510e54e70e73ed24423f543149fad70cb974de592a636c99fda46d953dfe7\"" -j MASQUERADE
-A CNI-8484e8a122ace36d26bd565d -d 10.88.0.0/16 -m comment --comment "name: \"containerd-net\" id: \"ab9a9580ba9c6f12470d6f2fc255bd1a1043818ad15b51b9548b9b38425ee0db\"" -j ACCEPT
-A CNI-8484e8a122ace36d26bd565d ! -d 224.0.0.0/4 -m comment --comment "name: \"containerd-net\" id: \"ab9a9580ba9c6f12470d6f2fc255bd1a1043818ad15b51b9548b9b38425ee0db\"" -j MASQUERADE
-A CNI-a854b6638aef4961af4db12b -d 10.88.0.0/16 -m comment --comment "name: \"containerd-net\" id: \"d9aa074f2053417e3cba4387af7c92d3e6fd0b15eb145ee7298137f0aa170f02\"" -j ACCEPT
-A CNI-a854b6638aef4961af4db12b ! -d 224.0.0.0/4 -m comment --comment "name: \"containerd-net\" id: \"d9aa074f2053417e3cba4387af7c92d3e6fd0b15eb145ee7298137f0aa170f02\"" -j MASQUERADE
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SEP-36TNAFFT2KZBXN2D -s 10.88.0.138/32 -m comment --comment "default/nginx-service" -j KUBE-MARK-MASQ
-A KUBE-SEP-36TNAFFT2KZBXN2D -p tcp -m comment --comment "default/nginx-service" -m tcp -j DNAT --to-destination 10.88.0.138:80
-A KUBE-SEP-M2U7IOCBO7O5P45B -s 10.88.0.137/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-M2U7IOCBO7O5P45B -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.88.0.137:53
-A KUBE-SEP-VO2PPQ7LAMK34B5H -s 10.211.55.11/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-VO2PPQ7LAMK34B5H -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 10.211.55.11:6443
-A KUBE-SEP-WCWCTDIFXHIUSRHD -s 10.88.0.137/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-WCWCTDIFXHIUSRHD -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.88.0.137:53
-A KUBE-SEP-ZXXUG7AYMOGMIOPQ -s 10.88.0.137/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZXXUG7AYMOGMIOPQ -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.88.0.137:9153
-A KUBE-SERVICES -d 10.0.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.0.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.0.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -d 10.0.0.179/32 -p tcp -m comment --comment "default/nginx-service cluster IP" -m tcp --dport 80 -j KUBE-SVC-V2OKYYMBY3REGZOG
-A KUBE-SERVICES -d 10.0.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp -> 10.88.0.137:53" -j KUBE-SEP-WCWCTDIFXHIUSRHD
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics -> 10.88.0.137:9153" -j KUBE-SEP-ZXXUG7AYMOGMIOPQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https -> 10.211.55.11:6443" -j KUBE-SEP-VO2PPQ7LAMK34B5H
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns -> 10.88.0.137:53" -j KUBE-SEP-M2U7IOCBO7O5P45B
-A KUBE-SVC-V2OKYYMBY3REGZOG -m comment --comment "default/nginx-service -> 10.88.0.138:80" -j KUBE-SEP-36TNAFFT2KZBXN2D
COMMIT
# Completed on Thu Apr 28 11:32:57 2022
# Generated by iptables-save v1.8.4 on Thu Apr 28 11:32:57 2022
*filter
:INPUT ACCEPT [22414:3436031]
:FORWARD ACCEPT [12:944]
:OUTPUT ACCEPT [22404:3431451]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Thu Apr 28 11:32:57 2022
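A likely explanation for docker.service "fixing" it: resolving via the pod IP works while the ClusterIP does not when bridged pod-to-pod traffic bypasses the nat PREROUTING chain, i.e. when br_netfilter is not loaded, and starting Docker can load that module as a side effect. A hedged check and fix, assuming a bridge-based CNI like the containerd-net above:
# lsmod | grep br_netfilter
# sysctl net.bridge.bridge-nf-call-iptables
# modprobe br_netfilter
# sysctl -w net.bridge.bridge-nf-call-iptables=1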

Unable to reach service using NodePort from k8s master

I have set up a Kubernetes cluster on Ubuntu 16.04 with one master and one worker. I deployed an application and created a NodePort service as below.
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: hello-app-deployment
spec:
  selector:
    matchLabels:
      app: hello-app
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: yeasy/simple-web:latest
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: hello-app-service
spec:
  selector:
    app: hello-app
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 80
      nodePort: 30020
      name: hello-app-port
  type: NodePort
The pods and the service are created:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/hello-app-deployment-6bfdc9c668-smsgq 1/1 Running 0 83m 10.32.0.3 k8s-worker-1 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/hello-app-service NodePort 10.106.91.145 <none> 8000:30020/TCP 83m app=hello-app
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 10h <none>
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/hello-app-deployment 1/1 1 1 83m hello-app yeasy/simple-web:latest app=hello-app
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/hello-app-deployment-6bfdc9c668 1 1 1 83m hello-app yeasy/simple-web:latest app=hello-app,pod-template-hash=6bfdc9c668
I am able to access the application from the host where it is deployed:
kubeuser@kube-worker-1:~$ curl http://kube-worker-1:30020
Hello!
But when I access it from the master node or other worker nodes, it doesn't connect.
kubeuser@k8s-master:~$ curl http://k8s-master:30020
curl: (7) Failed to connect to k8s-master port 30020: Connection refused
kubeuser@k8s-master:~$ curl http://localhost:30020
curl: (7) Failed to connect to localhost port 30020: Connection refused
kubeuser@k8s-master:~$ curl http://k8s-worker-2:30020
Failed to connect to k8s-worker-2 port 30020: No route to host
kubeuser@k8s-worker-2:~$ curl http://localhost:30020
Failed to connect to localhost port 30020: No route to host
The cluster was initialized with the pod network CIDR below:
kubeadm init --pod-network-cidr=192.168.0.0/16
The following is the iptables-save result:
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [30:1891]
:POSTROUTING ACCEPT [30:1891]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-3DU66DE6VORVEQVD - [0:0]
:KUBE-SEP-6UWAUPYDDOV5SU5B - [0:0]
:KUBE-SEP-S4MK5EVI7CLHCCS6 - [0:0]
:KUBE-SEP-SWLOBIBPXYBP7G2Z - [0:0]
:KUBE-SEP-SZZ7MOWKTWUFXIJT - [0:0]
:KUBE-SEP-UJJNLSZU6HL4F5UO - [0:0]
:KUBE-SEP-ZCHNBYOGFZRFKYMA - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:OUTPUT_direct - [0:0]
:POSTROUTING_ZONES - [0:0]
:POSTROUTING_ZONES_SOURCE - [0:0]
:POSTROUTING_direct - [0:0]
:POST_public - [0:0]
:POST_public_allow - [0:0]
:POST_public_deny - [0:0]
:POST_public_log - [0:0]
:PREROUTING_ZONES - [0:0]
:PREROUTING_ZONES_SOURCE - [0:0]
:PREROUTING_direct - [0:0]
:PRE_public - [0:0]
:PRE_public_allow - [0:0]
:PRE_public_deny - [0:0]
:PRE_public_log - [0:0]
:WEAVE - [0:0]
:WEAVE-CANARY - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -j PREROUTING_direct
-A PREROUTING -j PREROUTING_ZONES_SOURCE
-A PREROUTING -j PREROUTING_ZONES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j OUTPUT_direct
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -j POSTROUTING_direct
-A POSTROUTING -j POSTROUTING_ZONES_SOURCE
-A POSTROUTING -j POSTROUTING_ZONES
-A POSTROUTING -j WEAVE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-3DU66DE6VORVEQVD -s 10.32.0.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-3DU66DE6VORVEQVD -p udp -m udp -j DNAT --to-destination 10.32.0.3:53
-A KUBE-SEP-6UWAUPYDDOV5SU5B -s 10.111.1.158/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-6UWAUPYDDOV5SU5B -p tcp -m tcp -j DNAT --to-destination 10.111.1.158:6443
-A KUBE-SEP-S4MK5EVI7CLHCCS6 -s 10.32.0.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-S4MK5EVI7CLHCCS6 -p tcp -m tcp -j DNAT --to-destination 10.32.0.3:53
-A KUBE-SEP-SWLOBIBPXYBP7G2Z -s 10.32.0.2/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-SWLOBIBPXYBP7G2Z -p tcp -m tcp -j DNAT --to-destination 10.32.0.2:9153
-A KUBE-SEP-SZZ7MOWKTWUFXIJT -s 10.32.0.2/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-SZZ7MOWKTWUFXIJT -p udp -m udp -j DNAT --to-destination 10.32.0.2:53
-A KUBE-SEP-UJJNLSZU6HL4F5UO -s 10.32.0.2/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-UJJNLSZU6HL4F5UO -p tcp -m tcp -j DNAT --to-destination 10.32.0.2:53
-A KUBE-SEP-ZCHNBYOGFZRFKYMA -s 10.32.0.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-ZCHNBYOGFZRFKYMA -p tcp -m tcp -j DNAT --to-destination 10.32.0.3:9153
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-UJJNLSZU6HL4F5UO
-A KUBE-SVC-ERIFXISQEP7F7OF4 -j KUBE-SEP-S4MK5EVI7CLHCCS6
-A KUBE-SVC-JD5MR3NA4I4DYORP -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SWLOBIBPXYBP7G2Z
-A KUBE-SVC-JD5MR3NA4I4DYORP -j KUBE-SEP-ZCHNBYOGFZRFKYMA
-A KUBE-SVC-NPX46M4PTMTKRN6Y -j KUBE-SEP-6UWAUPYDDOV5SU5B
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SZZ7MOWKTWUFXIJT
-A KUBE-SVC-TCOU7JCQXEZGVUNU -j KUBE-SEP-3DU66DE6VORVEQVD
-A POSTROUTING_ZONES -g POST_public
-A POST_public -j POST_public_log
-A POST_public -j POST_public_deny
-A POST_public -j POST_public_allow
-A PREROUTING_ZONES -g PRE_public
-A PRE_public -j PRE_public_log
-A PRE_public -j PRE_public_deny
-A PRE_public -j PRE_public_allow
-A WEAVE -m set --match-set weaver-no-masq-local dst -m comment --comment "Prevent SNAT to locally running containers" -j RETURN
-A WEAVE -s 10.32.0.0/12 -d 224.0.0.0/4 -j RETURN
-A WEAVE ! -s 10.32.0.0/12 -d 10.32.0.0/12 -j MASQUERADE
-A WEAVE -s 10.32.0.0/12 ! -d 10.32.0.0/12 -j MASQUERADE
COMMIT
# Completed on Sun Aug 16 17:11:47 2020
# Generated by iptables-save v1.6.0 on Sun Aug 16 17:11:47 2020
*security
:INPUT ACCEPT [1417084:253669465]
:FORWARD ACCEPT [4:488]
:OUTPUT ACCEPT [1414939:285083560]
:FORWARD_direct - [0:0]
:INPUT_direct - [0:0]
:OUTPUT_direct - [0:0]
-A INPUT -j INPUT_direct
-A FORWARD -j FORWARD_direct
-A OUTPUT -j OUTPUT_direct
COMMIT
# Completed on Sun Aug 16 17:11:47 2020
# Generated by iptables-save v1.6.0 on Sun Aug 16 17:11:47 2020
*raw
:PREROUTING ACCEPT [1417204:253747905]
:OUTPUT ACCEPT [1414959:285085300]
:OUTPUT_direct - [0:0]
:PREROUTING_direct - [0:0]
-A PREROUTING -j PREROUTING_direct
-A OUTPUT -j OUTPUT_direct
COMMIT
# Completed on Sun Aug 16 17:11:47 2020
# Generated by iptables-save v1.6.0 on Sun Aug 16 17:11:47 2020
*mangle
:PREROUTING ACCEPT [1401943:246825511]
:INPUT ACCEPT [1401934:246824763]
:FORWARD ACCEPT [4:488]
:OUTPUT ACCEPT [1399691:277923964]
:POSTROUTING ACCEPT [1399681:277923072]
:FORWARD_direct - [0:0]
:INPUT_direct - [0:0]
:OUTPUT_direct - [0:0]
:POSTROUTING_direct - [0:0]
:PREROUTING_ZONES - [0:0]
:PREROUTING_ZONES_SOURCE - [0:0]
:PREROUTING_direct - [0:0]
:PRE_public - [0:0]
:PRE_public_allow - [0:0]
:PRE_public_deny - [0:0]
:PRE_public_log - [0:0]
:WEAVE-CANARY - [0:0]
-A PREROUTING -j PREROUTING_direct
-A PREROUTING -j PREROUTING_ZONES_SOURCE
-A PREROUTING -j PREROUTING_ZONES
-A INPUT -j INPUT_direct
-A FORWARD -j FORWARD_direct
-A OUTPUT -j OUTPUT_direct
-A POSTROUTING -j POSTROUTING_direct
-A PREROUTING_ZONES -g PRE_public
-A PRE_public -j PRE_public_log
-A PRE_public -j PRE_public_deny
-A PRE_public -j PRE_public_allow
COMMIT
# Completed on Sun Aug 16 17:11:47 2020
# Generated by iptables-save v1.6.0 on Sun Aug 16 17:11:47 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [2897:591977]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:FORWARD_IN_ZONES - [0:0]
:FORWARD_IN_ZONES_SOURCE - [0:0]
:FORWARD_OUT_ZONES - [0:0]
:FORWARD_OUT_ZONES_SOURCE - [0:0]
:FORWARD_direct - [0:0]
:FWDI_public - [0:0]
:FWDI_public_allow - [0:0]
:FWDI_public_deny - [0:0]
:FWDI_public_log - [0:0]
:FWDO_public - [0:0]
:FWDO_public_allow - [0:0]
:FWDO_public_deny - [0:0]
:FWDO_public_log - [0:0]
:INPUT_ZONES - [0:0]
:INPUT_ZONES_SOURCE - [0:0]
:INPUT_direct - [0:0]
:IN_public - [0:0]
:IN_public_allow - [0:0]
:IN_public_deny - [0:0]
:IN_public_log - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-SERVICES - [0:0]
:OUTPUT_direct - [0:0]
:WEAVE-CANARY - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-EGRESS - [0:0]
:WEAVE-NPC-EGRESS-ACCEPT - [0:0]
:WEAVE-NPC-EGRESS-CUSTOM - [0:0]
:WEAVE-NPC-EGRESS-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -j INPUT_direct
-A INPUT -j INPUT_ZONES_SOURCE
-A INPUT -j INPUT_ZONES
-A INPUT -p icmp -j ACCEPT
-A INPUT -m conntrack --ctstate INVALID -j DROP
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A INPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 6784 -m addrtype ! --src-type LOCAL -m conntrack ! --ctstate RELATED,ESTABLISHED -m comment --comment "Block non-local access to Weave Net control port" -j DROP
-A INPUT -i weave -j WEAVE-NPC-EGRESS
-A FORWARD -i weave -m comment --comment "NOTE: this must go before \'-j KUBE-FORWARD\'" -j WEAVE-NPC-EGRESS
-A FORWARD -o weave -m comment --comment "NOTE: this must go before \'-j KUBE-FORWARD\'" -j WEAVE-NPC
-A FORWARD -o weave -m state --state NEW -j NFLOG --nflog-group 86
-A FORWARD -o weave -j DROP
-A FORWARD -i weave ! -o weave -j ACCEPT
-A FORWARD -o weave -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i lo -j ACCEPT
-A FORWARD -j FORWARD_direct
-A FORWARD -j FORWARD_IN_ZONES_SOURCE
-A FORWARD -j FORWARD_IN_ZONES
-A FORWARD -j FORWARD_OUT_ZONES_SOURCE
-A FORWARD -j FORWARD_OUT_ZONES
-A FORWARD -p icmp -j ACCEPT
-A FORWARD -m conntrack --ctstate INVALID -j DROP
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A OUTPUT -j OUTPUT_direct
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A FORWARD_IN_ZONES -g FWDI_public
-A FORWARD_OUT_ZONES -g FWDO_public
-A FWDI_public -j FWDI_public_log
-A FWDI_public -j FWDI_public_deny
-A FWDI_public -j FWDI_public_allow
-A FWDO_public -j FWDO_public_log
-A FWDO_public -j FWDO_public_deny
-A FWDO_public -j FWDO_public_allow
-A INPUT_ZONES -g IN_public
-A IN_public -j IN_public_log
-A IN_public -j IN_public_deny
-A IN_public -j IN_public_allow
-A IN_public_allow -p tcp -m tcp --dport 22 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 8080 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 10251 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 6443 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 30000:32767 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 10255 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 10252 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 2379:2380 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 10250 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 6784 -m conntrack --ctstate NEW -j ACCEPT
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 192.168.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 192.168.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -d 224.0.0.0/4 -j ACCEPT
-A WEAVE-NPC -m physdev --physdev-out vethwe-bridge --physdev-is-bridged -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC-DEFAULT -m set --match-set weave-Rzff}h:=]JaaJl/G;(XJpGjZ[ dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-public" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-P.B|!ZhkAr5q=XZ?3}tMBA+0 dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-system" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-;rGqyMIl1HN^cfDki~Z$3]6!N dst -m comment --comment "DefaultAllow ingress isolation for namespace: default" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-]B*(W?)t*z5O17G044[gUo#$l dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-node-lease" -j ACCEPT
-A WEAVE-NPC-EGRESS -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC-EGRESS -m physdev --physdev-in vethwe-bridge --physdev-is-bridged -j RETURN
-A WEAVE-NPC-EGRESS -m addrtype --dst-type LOCAL -j RETURN
-A WEAVE-NPC-EGRESS -d 224.0.0.0/4 -j RETURN
-A WEAVE-NPC-EGRESS -m state --state NEW -j WEAVE-NPC-EGRESS-DEFAULT
-A WEAVE-NPC-EGRESS -m state --state NEW -m mark ! --mark 0x40000/0x40000 -j WEAVE-NPC-EGRESS-CUSTOM
-A WEAVE-NPC-EGRESS -m state --state NEW -m mark ! --mark 0x40000/0x40000 -j NFLOG --nflog-group 86
-A WEAVE-NPC-EGRESS-ACCEPT -j MARK --set-xmark 0x40000/0x40000
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src -m comment --comment "DefaultAllow egress isolation for namespace: kube-public" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src -m comment --comment "DefaultAllow egress isolation for namespace: kube-public" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src -m comment --comment "DefaultAllow egress isolation for namespace: kube-system" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src -m comment --comment "DefaultAllow egress isolation for namespace: kube-system" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src -m comment --comment "DefaultAllow egress isolation for namespace: default" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src -m comment --comment "DefaultAllow egress isolation for namespace: default" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-sui%__gZ}{kX~oZgI_Ttqp=Dp src -m comment --comment "DefaultAllow egress isolation for namespace: kube-node-lease" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-sui%__gZ}{kX~oZgI_Ttqp=Dp src -m comment --comment "DefaultAllow egress isolation for namespace: kube-node-lease" -j RETURN
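Worth noting: curl's "No route to host" usually means an ICMP host-prohibited reject rather than a missing route, and the INPUT and FORWARD chains above both end in REJECT --reject-with icmp-host-prohibited. The packet counters on those rules can confirm whether the NodePort traffic is being rejected by the firewall:
# iptables -L INPUT -v -n | grep icmp-host-prohibited
# iptables -L FORWARD -v -n | grep icmp-host-prohibited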
Output of weave status connections:
-> 10.111.1.156:6783 failed IP allocation was seeded by different peers (received: [2a:21:42:e0:5d:5f(k8s-worker-1)], ours: [12:35:b2:39:cf:7d(k8s-master)]), retry: 2020-08-17 08:15:51.155197759 +0000 UTC m=+68737.225153235
Output of weave status from inside the weave pod:
Version: 2.7.0 (failed to check latest version - see logs; next check at 2020/08/17 13:35:46)
Service: router
Protocol: weave 1..2
Name: 12:35:b2:39:cf:7d(k8s-master)
Encryption: disabled
PeerDiscovery: enabled
Targets: 1
Connections: 1 (1 failed)
Peers: 1
TrustedSubnets: none
Service: ipam
Status: ready
Range: 10.32.0.0/12
DefaultSubnet: 10.32.0.0/12
I tried the solutions in these links, solution1 and solution2, but they didn't work.
Please let me know what could be the possible reason for the master not serving on the published NodePort.
finally, it worked it was with ports for weave wasn't open in firewall as mentioned in this
also deleted weave deployment in kubernetes, removed /var/lib/weave/weave-netdata.db and deployed weave again, it worked.
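For anyone hitting the same thing, a minimal sketch of that fix (firewalld is assumed here, and the daemonset name and manifest path are assumptions to adapt to your deployment):
# Weave Net needs TCP 6783 and UDP 6783-6784 open between the nodes.
firewall-cmd --permanent --add-port=6783/tcp
firewall-cmd --permanent --add-port=6783-6784/udp
firewall-cmd --reload
# Then re-deploy weave with clean IPAM state on every node:
kubectl delete -n kube-system daemonset weave-net   # daemonset name assumed
rm /var/lib/weave/weave-netdata.db
kubectl apply -f <your weave manifest>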

kube-proxy unhappy after setting IPALLOC_RANGE in weave configuration

I’m installing kubernetes v1.5.6 + weave using kubeadm-1.6.0-0.alpha.0.2074.a092d8e0f95f52.x86_64.rpm on CentOS 7.3. Since my host machines' network range is 10.41.30.xx, which overlaps with weave's default internal IP range (10.32.0.0/12), I configured weave to use IPALLOC_RANGE 172.30.0.0/16.
After setup, I’m not able to connect to kubernetes services; kube-proxy complains that it cannot connect to the kubernetes master:
E0728 18:04:47.201682 1 server.go:421] Can't get Node "ctdpc001571.ctd.khi.com", assuming iptables proxy, err: Get https://10.41.30.50:6443/api/v1/nodes/ctdpc001571.ctd.khi.co.jp: dial tcp 10.41.30.50:6443: getsockopt: connection refused
I0728 18:04:47.204522 1 server.go:215] Using iptables Proxier.
W0728 18:04:47.205022 1 server.go:468] Failed to retrieve node info: Get https://10.41.30.50:6443/api/v1/nodes/ctdpc001571.ctd.khi.com: dial tcp 10.41.30.50:6443: getsockopt: connection refused
W0728 18:04:47.205325 1 proxier.go:249] invalid nodeIP, initialize kube-proxy with 127.0.0.1 as nodeIP
W0728 18:04:47.205347 1 proxier.go:254] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0728 18:04:47.205394 1 server.go:227] Tearing down userspace rules.
I0728 18:04:47.238324 1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_max' to 1048576
I0728 18:04:47.239243 1 conntrack.go:66] Setting conntrack hashsize to 262144
I0728 18:04:47.242492 1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0728 18:04:47.242640 1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
E0728 18:04:47.260748 1 reflector.go:188] pkg/proxy/config/api.go:33: Failed to list *api.Endpoints: Get https://10.41.30.50:6443/api/v1/endpoints?resourceVersion=0: dial tcp 10.41.30.50:6443: getsockopt: connection refused
E0728 18:04:47.260931 1 reflector.go:188] pkg/proxy/config/api.go:30: Failed to list *api.Service: Get https://10.41.30.50:6443/api/v1/services?resourceVersion=0: dial tcp 10.41.30.50:6443: getsockopt: connection refused
E0728 18:04:47.265569 1 event.go:208] Unable to write event: 'Post https://10.41.30.50:6443/api/v1/namespaces/default/events: dial tcp 10.41.30.50:6443: getsockopt: connection refused' (may retry after sleeping)
E0728 18:04:48.262006 1 reflector.go:188] pkg/proxy/config/api.go:33: Failed to list *api.Endpoints: Get https://10.41.30.50:6443/api/v1/endpoints?resourceVersion=0: dial tcp 10.41.30.50:6443: getsockopt: connection refused
Steps I followed:
$ yum -y install \
yum-versionlock \
docker-1.12.6-11.el7.centos \
kubectl-1.5.4-0 \
kubelet-1.5.4-0 \
kubernetes-cni-0.3.0.1-0.07a8a2 \
https://storage.googleapis.com/falkonry-k8-installer/kubeadm-1.6.0-0.alpha.0.2074.a092d8e0f95f52.x86_64.rpm
$ yum versionlock add kubectl kubelet kubernetes-cni kubeadm
$ systemctl enable docker && systemctl start docker
$ systemctl enable kubelet && systemctl start kubelet
$ kubeadm init --use-kubernetes-version=v1.5.6
Set `IPALLOC_RANGE` to `172.30.0.0/16` in https://git.io/weave-kube (see the sketch below)
$ kubectl apply -f weave-kube-config
$ kubectl run -i --tty busybox --image=busybox -- sh
$ nslookup kubernetes
After this, I cannot connect to kubernetes or any of the other services.
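For reference, setting `IPALLOC_RANGE` amounts to adding an environment variable to the weave container in the manifest downloaded from https://git.io/weave-kube; a minimal excerpt as a sketch (the container name and surrounding structure are assumed from the weave-kube DaemonSet of that era):
spec:
  template:
    spec:
      containers:
        - name: weave          # container name assumed
          env:
            - name: IPALLOC_RANGE
              value: "172.30.0.0/16"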
iptables-save output
# Generated by iptables-save v1.4.21 on Sat Jul 29 04:19:10 2017
*filter
:INPUT ACCEPT [1350:566634]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1344:579110]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
-A INPUT -j KUBE-FIREWALL
-A INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -i virbr0 -o virbr0 -j ACCEPT
-A FORWARD -o weave -j WEAVE-NPC
-A FORWARD -o weave -m state --state NEW -j NFLOG --nflog-group 86
-A FORWARD -o weave -j DROP
-A FORWARD -i weave ! -o weave -j ACCEPT
-A FORWARD -o weave -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A OUTPUT -o virbr0 -p udp -m udp --dport 68 -j ACCEPT
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -d 224.0.0.0/4 -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC -m set ! --match-set weave-local-pods dst -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-k?Z;25^M}|1s7P3|H9i;*;MhG dst -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-iuZcey(5DeXbzgRFs8Szo]+#p dst -j ACCEPT
COMMIT
# Completed on Sat Jul 29 04:19:10 2017
# Generated by iptables-save v1.4.21 on Sat Jul 29 04:19:10 2017
*nat
:PREROUTING ACCEPT [2:148]
:INPUT ACCEPT [2:148]
:OUTPUT ACCEPT [24:1452]
:POSTROUTING ACCEPT [24:1452]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-DYFWWILLIC32NPM5 - [0:0]
:KUBE-SEP-GX7UKBANGEPIDZWU - [0:0]
:KUBE-SEP-YXLAFMRH4ZX57Y3W - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:WEAVE - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 192.168.122.0/24 -d 224.0.0.0/24 -j RETURN
-A POSTROUTING -s 192.168.122.0/24 -d 255.255.255.255/32 -j RETURN
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p udp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE
-A POSTROUTING -j WEAVE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-DYFWWILLIC32NPM5 -s 172.30.0.5/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-DYFWWILLIC32NPM5 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.30.0.5:53
-A KUBE-SEP-GX7UKBANGEPIDZWU -s 172.30.0.5/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-GX7UKBANGEPIDZWU -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.30.0.5:53
-A KUBE-SEP-YXLAFMRH4ZX57Y3W -s 10.41.30.50/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-YXLAFMRH4ZX57Y3W -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-YXLAFMRH4ZX57Y3W --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.41.30.50:6443
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-DYFWWILLIC32NPM5
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-YXLAFMRH4ZX57Y3W --mask 255.255.255.255 --rsource -j KUBE-SEP-YXLAFMRH4ZX57Y3W
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-YXLAFMRH4ZX57Y3W
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-GX7UKBANGEPIDZWU
-A WEAVE -s 172.30.0.0/16 -d 224.0.0.0/4 -j RETURN
-A WEAVE ! -s 172.30.0.0/16 -d 172.30.0.0/16 -j MASQUERADE
-A WEAVE -s 172.30.0.0/16 ! -d 172.30.0.0/16 -j MASQUERADE
COMMIT
# Completed on Sat Jul 29 04:19:10 2017
# Generated by iptables-save v1.4.21 on Sat Jul 29 04:19:10 2017
*mangle
:PREROUTING ACCEPT [323631:155761830]
:INPUT ACCEPT [323586:155756413]
:FORWARD ACCEPT [26:1880]
:OUTPUT ACCEPT [317539:144236316]
:POSTROUTING ACCEPT [317582:144241373]
-A POSTROUTING -o virbr0 -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill
COMMIT
# Completed on Sat Jul 29 04:19:10 2017
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 100 0 0 eno1d1
10.41.30.0 0.0.0.0 255.255.255.0 U 100 0 0 eno1
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.30.0.0 0.0.0.0 255.255.0.0 U 0 0 0 weave
192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 eno1d1
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0
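Given the repeated "connection refused" on 10.41.30.50:6443 above, a first sanity check (generic commands, not from the original post) is whether the apiserver is actually listening on that address from the node itself:
# Can the node reach the apiserver kube-proxy is dialing?
curl -k https://10.41.30.50:6443/healthz
# Is anything listening on 6443 on the master?
ss -tlnp | grep 6443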

Cannot connect to its own service from inside a pod on kubernetes 1.6

I created a service and a deployment. Now, from inside the pod, I'm trying to connect to its own service, and the request times out after a few minutes.
This works perfectly fine on kubernetes 1.5.x but not on 1.6.x. FYI: the cluster was created using the kubeadm tool and uses weave as the network plugin.
Cluster dump: https://drive.google.com/file/d/0ByZSwkp_d2U-aFREc3E5SjRCVFU/view?usp=sharing
Connecting to the kafka service from another container
root@falkonry-redis-0:/data# curl -v http://falkonry-kafka:9092
* About to connect() to falkonry-kafka port 9092 (#0)
* Trying 10.99.232.10...
* connected
* Connected to falkonry-kafka (10.99.232.10) port 9092 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.26.0
> Host: falkonry-kafka:9092
> Accept: */*
>
* additional stuff not fine transfer.c:1037: 0 0
* Recv failure: Connection reset by peer
* Closing connection #0
curl: (56) Recv failure: Connection reset by peer
Connecting to the kafka service from inside the kafka container
root@falkonry-kafka-56017906-9qlg3:/# curl -v http://falkonry-kafka:9092
* Rebuilt URL to: http://falkonry-kafka:9092/
* Hostname was NOT found in DNS cache
* Trying 10.99.232.10...
^C
The request never finishes.
Service and deployment
Phaguns-MacBook-Pro:falkonryagent phagunbaya$ kubectl describe service falkonry-kafka
Name: falkonry-kafka
Namespace: default
Labels: function=kafka
party=falkonry
Selector: name=falkonry-kafka
Type: ClusterIP
IP: 10.99.232.10
Port: kafka 9092/TCP
Endpoints: 10.32.0.7:9092
Session Affinity: None
No events.
Phaguns-MacBook-Pro:falkonryagent phagunbaya$ kubectl describe deployment falkonry-kafka
Name: falkonry-kafka
Namespace: default
CreationTimestamp: Thu, 06 Apr 2017 16:58:36 -0700
Labels: function=kafka
party=falkonry
Selector: function=kafka,name=falkonry-kafka
Replicas: 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: falkonry-kafka-56017906 (1/1 replicas created)
No events.
iptables-save output
# Generated by iptables-save v1.4.21 on Fri Apr 7 12:16:32 2017
*nat
:PREROUTING ACCEPT [1:60]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [12:720]
:POSTROUTING ACCEPT [16:1038]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-4QD2LE2R2TODS2YV - [0:0]
:KUBE-SEP-6K3WNWFYOAH5UDZ7 - [0:0]
:KUBE-SEP-AR5TRSQMIM2F553H - [0:0]
:KUBE-SEP-BIZOCAOAPTCX4WBC - [0:0]
:KUBE-SEP-F7NTE7AMKDKNWUUF - [0:0]
:KUBE-SEP-FV6ZZ4EMBZMV4DQ5 - [0:0]
:KUBE-SEP-HVHMJPRJS2UA65HH - [0:0]
:KUBE-SEP-IBDVBYXSRD6MIAGE - [0:0]
:KUBE-SEP-KDTJFZVKN4ESIN24 - [0:0]
:KUBE-SEP-KNER6ASWBX763QL7 - [0:0]
:KUBE-SEP-NGQUCFCRE45KSL73 - [0:0]
:KUBE-SEP-NYKTVPUDBMHXGWAX - [0:0]
:KUBE-SEP-QLLLKZOFDP244LAS - [0:0]
:KUBE-SEP-RBQF4CU7COIZTWDJ - [0:0]
:KUBE-SEP-SX34LAYKH37CF5LT - [0:0]
:KUBE-SEP-SZZ7MOWKTWUFXIJT - [0:0]
:KUBE-SEP-TZPDA6OWOVPRIIUZ - [0:0]
:KUBE-SEP-UJJNLSZU6HL4F5UO - [0:0]
:KUBE-SEP-W4RNB3VXXTJ3LGHB - [0:0]
:KUBE-SEP-YYIR7TZA6ZBQSUSF - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-BL55CP3MKKB53NTC - [0:0]
:KUBE-SVC-BV4E552EX2CNKPCU - [0:0]
:KUBE-SVC-BYB5G3MHEBYVN43P - [0:0]
:KUBE-SVC-C64CQIO6Z225CXIH - [0:0]
:KUBE-SVC-CAVFOYOJQPPKKFSK - [0:0]
:KUBE-SVC-DM7TKUYSW7TW345O - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-NTZIAVXWXJCS7DKZ - [0:0]
:KUBE-SVC-PJO6V2NNIUDO2DKL - [0:0]
:KUBE-SVC-QIJ4ARI55YRJ76JG - [0:0]
:KUBE-SVC-QQGUGJWMO5HSN6XL - [0:0]
:KUBE-SVC-RVQUD6RAXHQPQF3I - [0:0]
:KUBE-SVC-SZGELJVIQ5IRMA57 - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-U6PKKNLWPXOUUWIP - [0:0]
:KUBE-SVC-XGPIXF43F4GLZBG7 - [0:0]
:KUBE-SVC-Y4IVC7EWPWRMUFRE - [0:0]
:WEAVE - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.50.0/24 ! -o docker0 -j MASQUERADE
-A POSTROUTING -j WEAVE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/falkonry-merlin:merlin-web" -m tcp --dport 30061 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/falkonry-merlin:merlin-web" -m tcp --dport 30061 -j KUBE-SVC-SZGELJVIQ5IRMA57
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-4QD2LE2R2TODS2YV -s 10.44.0.6/32 -m comment --comment "default/falkonry-spark-master:rest" -j KUBE-MARK-MASQ
-A KUBE-SEP-4QD2LE2R2TODS2YV -p tcp -m comment --comment "default/falkonry-spark-master:rest" -m tcp -j DNAT --to-destination 10.44.0.6:6066
-A KUBE-SEP-6K3WNWFYOAH5UDZ7 -s 10.32.0.4/32 -m comment --comment "default/falkonry-kafka:kafka" -j KUBE-MARK-MASQ
-A KUBE-SEP-6K3WNWFYOAH5UDZ7 -p tcp -m comment --comment "default/falkonry-kafka:kafka" -m tcp -j DNAT --to-destination 10.32.0.4:9092
-A KUBE-SEP-AR5TRSQMIM2F553H -s 10.24.10.4/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-AR5TRSQMIM2F553H -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-AR5TRSQMIM2F553H --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.24.10.4:6443
-A KUBE-SEP-BIZOCAOAPTCX4WBC -s 10.44.0.3/32 -m comment --comment "default/falkonry-merlin:merlin-web" -j KUBE-MARK-MASQ
-A KUBE-SEP-BIZOCAOAPTCX4WBC -p tcp -m comment --comment "default/falkonry-merlin:merlin-web" -m tcp -j DNAT --to-destination 10.44.0.3:8080
-A KUBE-SEP-F7NTE7AMKDKNWUUF -s 10.42.0.3/32 -m comment --comment "default/falkonry-riactor:riactor-http" -j KUBE-MARK-MASQ
-A KUBE-SEP-F7NTE7AMKDKNWUUF -p tcp -m comment --comment "default/falkonry-riactor:riactor-http" -m tcp -j DNAT --to-destination 10.42.0.3:8000
-A KUBE-SEP-FV6ZZ4EMBZMV4DQ5 -s 10.32.0.10/32 -m comment --comment "default/falkonry-redis:redis-cli" -j KUBE-MARK-MASQ
-A KUBE-SEP-FV6ZZ4EMBZMV4DQ5 -p tcp -m comment --comment "default/falkonry-redis:redis-cli" -m tcp -j DNAT --to-destination 10.32.0.10:6379
-A KUBE-SEP-HVHMJPRJS2UA65HH -s 10.32.0.7/32 -m comment --comment "default/falkonry-hadoop:namenode-ui" -j KUBE-MARK-MASQ
-A KUBE-SEP-HVHMJPRJS2UA65HH -p tcp -m comment --comment "default/falkonry-hadoop:namenode-ui" -m tcp -j DNAT --to-destination 10.32.0.7:50070
-A KUBE-SEP-IBDVBYXSRD6MIAGE -s 10.44.0.5/32 -m comment --comment "default/falkonry-riactor:riactor-http" -j KUBE-MARK-MASQ
-A KUBE-SEP-IBDVBYXSRD6MIAGE -p tcp -m comment --comment "default/falkonry-riactor:riactor-http" -m tcp -j DNAT --to-destination 10.44.0.5:8000
-A KUBE-SEP-KDTJFZVKN4ESIN24 -s 10.32.0.7/32 -m comment --comment "default/falkonry-hadoop:datanode" -j KUBE-MARK-MASQ
-A KUBE-SEP-KDTJFZVKN4ESIN24 -p tcp -m comment --comment "default/falkonry-hadoop:datanode" -m tcp -j DNAT --to-destination 10.32.0.7:50010
-A KUBE-SEP-KNER6ASWBX763QL7 -s 10.32.0.7/32 -m comment --comment "default/falkonry-hadoop:datanode-ui" -j KUBE-MARK-MASQ
-A KUBE-SEP-KNER6ASWBX763QL7 -p tcp -m comment --comment "default/falkonry-hadoop:datanode-ui" -m tcp -j DNAT --to-destination 10.32.0.7:50075
-A KUBE-SEP-NGQUCFCRE45KSL73 -s 10.44.0.6/32 -m comment --comment "default/falkonry-spark-master:webui" -j KUBE-MARK-MASQ
-A KUBE-SEP-NGQUCFCRE45KSL73 -p tcp -m comment --comment "default/falkonry-spark-master:webui" -m tcp -j DNAT --to-destination 10.44.0.6:8080
-A KUBE-SEP-NYKTVPUDBMHXGWAX -s 10.44.0.6/32 -m comment --comment "default/falkonry-spark-master:akka" -j KUBE-MARK-MASQ
-A KUBE-SEP-NYKTVPUDBMHXGWAX -p tcp -m comment --comment "default/falkonry-spark-master:akka" -m tcp -j DNAT --to-destination 10.44.0.6:7077
-A KUBE-SEP-QLLLKZOFDP244LAS -s 10.42.0.1/32 -m comment --comment "default/falkonry-connector:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-QLLLKZOFDP244LAS -p tcp -m comment --comment "default/falkonry-connector:http" -m tcp -j DNAT --to-destination 10.42.0.1:8001
-A KUBE-SEP-RBQF4CU7COIZTWDJ -s 10.32.0.6/32 -m comment --comment "default/falkonry-zookeeper:zookeeper" -j KUBE-MARK-MASQ
-A KUBE-SEP-RBQF4CU7COIZTWDJ -p tcp -m comment --comment "default/falkonry-zookeeper:zookeeper" -m tcp -j DNAT --to-destination 10.32.0.6:2181
-A KUBE-SEP-SX34LAYKH37CF5LT -s 10.42.0.2/32 -m comment --comment "default/falkonry-merlin:merlin-web" -j KUBE-MARK-MASQ
-A KUBE-SEP-SX34LAYKH37CF5LT -p tcp -m comment --comment "default/falkonry-merlin:merlin-web" -m tcp -j DNAT --to-destination 10.42.0.2:8080
-A KUBE-SEP-SZZ7MOWKTWUFXIJT -s 10.32.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-SZZ7MOWKTWUFXIJT -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.32.0.2:53
-A KUBE-SEP-TZPDA6OWOVPRIIUZ -s 10.32.0.3/32 -m comment --comment "default/falkonry-riactor:riactor-http" -j KUBE-MARK-MASQ
-A KUBE-SEP-TZPDA6OWOVPRIIUZ -p tcp -m comment --comment "default/falkonry-riactor:riactor-http" -m tcp -j DNAT --to-destination 10.32.0.3:8000
-A KUBE-SEP-UJJNLSZU6HL4F5UO -s 10.32.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-UJJNLSZU6HL4F5UO -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.32.0.2:53
-A KUBE-SEP-W4RNB3VXXTJ3LGHB -s 10.32.0.8/32 -m comment --comment "default/falkonry-mongo:mongo-http" -j KUBE-MARK-MASQ
-A KUBE-SEP-W4RNB3VXXTJ3LGHB -p tcp -m comment --comment "default/falkonry-mongo:mongo-http" -m tcp -j DNAT --to-destination 10.32.0.8:27017
-A KUBE-SEP-YYIR7TZA6ZBQSUSF -s 10.32.0.7/32 -m comment --comment "default/falkonry-hadoop:namenode" -j KUBE-MARK-MASQ
-A KUBE-SEP-YYIR7TZA6ZBQSUSF -p tcp -m comment --comment "default/falkonry-hadoop:namenode" -m tcp -j DNAT --to-destination 10.32.0.7:8020
-A KUBE-SERVICES -d 10.103.204.121/32 -p tcp -m comment --comment "default/falkonry-spark-master:akka cluster IP" -m tcp --dport 7077 -j KUBE-SVC-CAVFOYOJQPPKKFSK
-A KUBE-SERVICES -d 10.111.87.193/32 -p tcp -m comment --comment "default/falkonryagent:agent-web cluster IP" -m tcp --dport 9090 -j KUBE-SVC-QQGUGJWMO5HSN6XL
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.107.140.112/32 -p tcp -m comment --comment "default/falkonry-zookeeper:zookeeper cluster IP" -m tcp --dport 2181 -j KUBE-SVC-BYB5G3MHEBYVN43P
-A KUBE-SERVICES -d 10.106.78.154/32 -p tcp -m comment --comment "default/falkonry-hadoop:datanode cluster IP" -m tcp --dport 50010 -j KUBE-SVC-NTZIAVXWXJCS7DKZ
-A KUBE-SERVICES -d 10.106.78.154/32 -p tcp -m comment --comment "default/falkonry-hadoop:datanode-ui cluster IP" -m tcp --dport 50075 -j KUBE-SVC-BL55CP3MKKB53NTC
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.111.174.212/32 -p tcp -m comment --comment "default/falkonry-merlin:merlin-web cluster IP" -m tcp --dport 8080 -j KUBE-SVC-SZGELJVIQ5IRMA57
-A KUBE-SERVICES -d 10.103.204.121/32 -p tcp -m comment --comment "default/falkonry-spark-master:rest cluster IP" -m tcp --dport 6066 -j KUBE-SVC-DM7TKUYSW7TW345O
-A KUBE-SERVICES -d 10.103.204.121/32 -p tcp -m comment --comment "default/falkonry-spark-master:webui cluster IP" -m tcp --dport 8080 -j KUBE-SVC-QIJ4ARI55YRJ76JG
-A KUBE-SERVICES -d 10.106.78.154/32 -p tcp -m comment --comment "default/falkonry-hadoop:namenode cluster IP" -m tcp --dport 9000 -j KUBE-SVC-BV4E552EX2CNKPCU
-A KUBE-SERVICES -d 10.106.78.154/32 -p tcp -m comment --comment "default/falkonry-hadoop:namenode-ui cluster IP" -m tcp --dport 50070 -j KUBE-SVC-U6PKKNLWPXOUUWIP
-A KUBE-SERVICES -d 10.98.38.82/32 -p tcp -m comment --comment "default/falkonry-mongo:mongo-http cluster IP" -m tcp --dport 27017 -j KUBE-SVC-Y4IVC7EWPWRMUFRE
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.96.90.91/32 -p tcp -m comment --comment "default/falkonry-redis:redis-cli cluster IP" -m tcp --dport 6379 -j KUBE-SVC-PJO6V2NNIUDO2DKL
-A KUBE-SERVICES -d 10.99.232.10/32 -p tcp -m comment --comment "default/falkonry-kafka:kafka cluster IP" -m tcp --dport 9092 -j KUBE-SVC-XGPIXF43F4GLZBG7
-A KUBE-SERVICES -d 10.100.203.65/32 -p tcp -m comment --comment "default/falkonry-riactor:riactor-http cluster IP" -m tcp --dport 8000 -j KUBE-SVC-C64CQIO6Z225CXIH
-A KUBE-SERVICES -d 10.110.120.177/32 -p tcp -m comment --comment "default/falkonry-connector:http cluster IP" -m tcp --dport 8001 -j KUBE-SVC-RVQUD6RAXHQPQF3I
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-BL55CP3MKKB53NTC -m comment --comment "default/falkonry-hadoop:datanode-ui" -j KUBE-SEP-KNER6ASWBX763QL7
-A KUBE-SVC-BV4E552EX2CNKPCU -m comment --comment "default/falkonry-hadoop:namenode" -j KUBE-SEP-YYIR7TZA6ZBQSUSF
-A KUBE-SVC-BYB5G3MHEBYVN43P -m comment --comment "default/falkonry-zookeeper:zookeeper" -j KUBE-SEP-RBQF4CU7COIZTWDJ
-A KUBE-SVC-C64CQIO6Z225CXIH -m comment --comment "default/falkonry-riactor:riactor-http" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-TZPDA6OWOVPRIIUZ
-A KUBE-SVC-C64CQIO6Z225CXIH -m comment --comment "default/falkonry-riactor:riactor-http" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-F7NTE7AMKDKNWUUF
-A KUBE-SVC-C64CQIO6Z225CXIH -m comment --comment "default/falkonry-riactor:riactor-http" -j KUBE-SEP-IBDVBYXSRD6MIAGE
-A KUBE-SVC-CAVFOYOJQPPKKFSK -m comment --comment "default/falkonry-spark-master:akka" -j KUBE-SEP-NYKTVPUDBMHXGWAX
-A KUBE-SVC-DM7TKUYSW7TW345O -m comment --comment "default/falkonry-spark-master:rest" -j KUBE-SEP-4QD2LE2R2TODS2YV
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-UJJNLSZU6HL4F5UO
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-AR5TRSQMIM2F553H --mask 255.255.255.255 --rsource -j KUBE-SEP-AR5TRSQMIM2F553H
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-AR5TRSQMIM2F553H
-A KUBE-SVC-NTZIAVXWXJCS7DKZ -m comment --comment "default/falkonry-hadoop:datanode" -j KUBE-SEP-KDTJFZVKN4ESIN24
-A KUBE-SVC-PJO6V2NNIUDO2DKL -m comment --comment "default/falkonry-redis:redis-cli" -j KUBE-SEP-FV6ZZ4EMBZMV4DQ5
-A KUBE-SVC-QIJ4ARI55YRJ76JG -m comment --comment "default/falkonry-spark-master:webui" -j KUBE-SEP-NGQUCFCRE45KSL73
-A KUBE-SVC-RVQUD6RAXHQPQF3I -m comment --comment "default/falkonry-connector:http" -j KUBE-SEP-QLLLKZOFDP244LAS
-A KUBE-SVC-SZGELJVIQ5IRMA57 -m comment --comment "default/falkonry-merlin:merlin-web" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SX34LAYKH37CF5LT
-A KUBE-SVC-SZGELJVIQ5IRMA57 -m comment --comment "default/falkonry-merlin:merlin-web" -j KUBE-SEP-BIZOCAOAPTCX4WBC
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-SZZ7MOWKTWUFXIJT
-A KUBE-SVC-U6PKKNLWPXOUUWIP -m comment --comment "default/falkonry-hadoop:namenode-ui" -j KUBE-SEP-HVHMJPRJS2UA65HH
-A KUBE-SVC-XGPIXF43F4GLZBG7 -m comment --comment "default/falkonry-kafka:kafka" -j KUBE-SEP-6K3WNWFYOAH5UDZ7
-A KUBE-SVC-Y4IVC7EWPWRMUFRE -m comment --comment "default/falkonry-mongo:mongo-http" -j KUBE-SEP-W4RNB3VXXTJ3LGHB
-A WEAVE -s 10.32.0.0/12 -d 224.0.0.0/4 -j RETURN
-A WEAVE ! -s 10.32.0.0/12 -d 10.32.0.0/12 -j MASQUERADE
-A WEAVE -s 10.32.0.0/12 ! -d 10.32.0.0/12 -j MASQUERADE
COMMIT
# Completed on Fri Apr 7 12:16:32 2017
# Generated by iptables-save v1.4.21 on Fri Apr 7 12:16:32 2017
*filter
:INPUT ACCEPT [741:270665]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [727:337487]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
-A INPUT -j KUBE-FIREWALL
-A INPUT -d 172.17.50.1/32 -i docker0 -p tcp -m tcp --dport 6783 -j DROP
-A INPUT -d 172.17.50.1/32 -i docker0 -p udp -m udp --dport 6783 -j DROP
-A INPUT -d 172.17.50.1/32 -i docker0 -p udp -m udp --dport 6784 -j DROP
-A INPUT -i docker0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i docker0 -p tcp -m tcp --dport 53 -j ACCEPT
-A FORWARD -i docker0 -o weave -j DROP
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o weave -j WEAVE-NPC
-A FORWARD -o weave -m state --state NEW -j NFLOG --nflog-group 86
-A FORWARD -o weave -j DROP
-A FORWARD -i weave ! -o weave -j ACCEPT
-A FORWARD -o weave -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-SERVICES -d 10.111.87.193/32 -p tcp -m comment --comment "default/falkonryagent:agent-web has no endpoints" -m tcp --dport 9090 -j REJECT --reject-with icmp-port-unreachable
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -d 224.0.0.0/4 -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC-DEFAULT -m set --match-set weave-k?Z;25^M}|1s7P3|H9i;*;MhG dst -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-4vtqMI<kx/2]jD%_c0S%thO%V dst -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-iuZcey(5DeXbzgRFs8Szo]<#p dst -j ACCEPT
COMMIT
# Completed on Fri Apr 7 12:16:32 2017
Kube-proxy logs
I0406 19:42:35.453335 1 server.go:225] Using iptables Proxier.
W0406 19:42:35.559100 1 proxier.go:309] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0406 19:42:35.559155 1 server.go:249] Tearing down userspace rules.
I0406 19:42:35.711702 1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_max' to 524288
I0406 19:42:35.712557 1 conntrack.go:66] Setting conntrack hashsize to 131072
I0406 19:42:35.713879 1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0406 19:42:35.713949 1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
How did you set up weave? There is a 1.6-specific configuration [1][2] that sets up a role and a service account for running weave on clusters with RBAC enabled.
[1] https://github.com/weaveworks/weave/blob/master/prog/weave-kube/weave-daemonset-k8s-1.6.yaml
[2] https://www.weave.works/weave-net-kubernetes-integration/
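If weave was deployed with the pre-1.6 manifest, re-applying the 1.6-specific one from [1] should create those objects; a sketch, assuming the raw form of that file:
kubectl apply -f https://raw.githubusercontent.com/weaveworks/weave/master/prog/weave-kube/weave-daemonset-k8s-1.6.yaml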

Can't reach Kubernetes service from outside the node when kube-proxy is in iptables mode

I have a single-node (master+node) Kubernetes deployment running on CoreOS, with kube-proxy in iptables mode and flannel for container networking, without Calico.
kube-proxy.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.5.2_coreos.0
    command:
    - /hyperkube
    - proxy
    - --master=http://127.0.0.1:8080
    - --hostname-override=10.0.0.144
    - --proxy-mode=iptables
    - --bind-address=0.0.0.0
    - --cluster-cidr=10.1.0.0/16
    - --masquerade-all=true
    securityContext:
      privileged: true
I've created a deployment, then exposed that deployment using a Service of type NodePort.
user@node ~ $ kubectl run hostnames --image=gcr.io/google_containers/serve_hostname \
--labels=app=hostnames \
--port=9376 \
--replicas=3
user@node ~ $ kubectl expose deployment hostnames \
--port=80 \
--target-port=9376 \
--type=NodePort
user@node ~ $ kubectl get svc hostnames
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hostnames 10.1.50.64 <nodes> 80:30177/TCP 6m
I can curl successfully from the node (loopback and eth0 IP):
user@node ~ $ curl localhost:30177
hostnames-3799501552-xfq08
user@node ~ $ curl 10.0.0.144:30177
hostnames-3799501552-xfq08
However, I cannot curl from outside the node. I've tried from a client machine outside the node's network (with the correct firewall rules) and from a machine inside the node's private network with the firewall completely open between the two machines, with no luck.
I'm fairly confident that it's an iptables/kube-proxy issue, because if I change the kube-proxy config from --proxy-mode=iptables to --proxy-mode=userspace I can access the service from both external machines. Also, if I bypass kubernetes and run a plain docker container, I have no problems with external access.
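A generic way to confirm where the traffic dies (not from the original post): watch for the NodePort packets on the node while curling from outside; if the SYNs show up here but curl still fails, the drop happens on the node itself:
tcpdump -ni eth0 tcp port 30177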
Here are the current iptables rules:
user@node ~ $ iptables-save
# Generated by iptables-save v1.4.21 on Mon Feb 6 04:46:02 2017
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-4IIYBTTZSUAZV53G - [0:0]
:KUBE-SEP-4TMFMGA4TTORJ5E4 - [0:0]
:KUBE-SEP-DUUUKFKBBSQSAJB2 - [0:0]
:KUBE-SEP-XONOXX2F6J6VHAVB - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-NWV5X2332I4OT4T3 - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 10.1.0.0/16 -d 10.1.0.0/16 -j RETURN
-A POSTROUTING -s 10.1.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
-A POSTROUTING ! -s 10.1.0.0/16 -d 10.1.0.0/16 -j MASQUERADE
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/hostnames:" -m tcp --dport 30177 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/hostnames:" -m tcp --dport 30177 -j KUBE-SVC-NWV5X2332I4OT4T3
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-4IIYBTTZSUAZV53G -s 10.0.0.144/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-4IIYBTTZSUAZV53G -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-4IIYBTTZSUAZV53G --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.0.0.144:6443
-A KUBE-SEP-4TMFMGA4TTORJ5E4 -s 10.1.34.2/32 -m comment --comment "default/hostnames:" -j KUBE-MARK-MASQ
-A KUBE-SEP-4TMFMGA4TTORJ5E4 -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.1.34.2:9376
-A KUBE-SEP-DUUUKFKBBSQSAJB2 -s 10.1.34.3/32 -m comment --comment "default/hostnames:" -j KUBE-MARK-MASQ
-A KUBE-SEP-DUUUKFKBBSQSAJB2 -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.1.34.3:9376
-A KUBE-SEP-XONOXX2F6J6VHAVB -s 10.1.34.4/32 -m comment --comment "default/hostnames:" -j KUBE-MARK-MASQ
-A KUBE-SEP-XONOXX2F6J6VHAVB -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.1.34.4:9376
-A KUBE-SERVICES -d 10.1.50.64/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES ! -s 10.1.0.0/16 -d 10.1.50.64/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.1.50.64/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUBE-SVC-NWV5X2332I4OT4T3
-A KUBE-SERVICES -d 10.1.50.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES ! -s 10.1.0.0/16 -d 10.1.50.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.1.50.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-4IIYBTTZSUAZV53G --mask 255.255.255.255 --rsource -j KUBE-SEP-4IIYBTTZSUAZV53G
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-4IIYBTTZSUAZV53G
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-4TMFMGA4TTORJ5E4
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-DUUUKFKBBSQSAJB2
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -j KUBE-SEP-XONOXX2F6J6VHAVB
COMMIT
# Completed on Mon Feb 6 04:46:02 2017
# Generated by iptables-save v1.4.21 on Mon Feb 6 04:46:02 2017
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [67:14455]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -j KUBE-FIREWALL
-A INPUT -i lo -j ACCEPT
-A INPUT -i eth0 -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 3 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 11 -j ACCEPT
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
COMMIT
# Completed on Mon Feb 6 04:46:02 2017
I'm not sure what to look for in the rules... Can someone with more experience than myself make some suggestions on troubleshooting?
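One generic way to narrow this down (not from the original thread) is to zero the iptables packet counters, reproduce the failing curl from outside, and then see which rules matched:
iptables -Z
iptables -L INPUT -v -n --line-numbers
iptables -t nat -L KUBE-NODEPORTS -v -n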
Fixed it. The problem was that I had some default iptables rules applied on startup, which must have overridden parts of the dynamic rule set created by kube-proxy.
The difference between working and non-working was as follows:
Working
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [67:14455]
...
-A INPUT -i lo -j ACCEPT
-A INPUT -i eth0 -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 3 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 11 -j ACCEPT
...
Not working
:INPUT ACCEPT [30:5876]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [25:5616]
...