Kubernetes NodePort only works on pod host

I just started building my own Kubernetes cluster with a few Raspberry Pi devices, following the guide from Alex Ellis. However, my NodePort only works on the nodes that are actually running one of the pods; nodes that don't host a pod do no forwarding at all.
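(A quick way to confirm that kube-proxy — the component that programs the NodePort rules on every node — is healthy and that the Service has endpoints; a sketch, assuming the kubeadm default k8s-app=kube-proxy label:)
kubectl get pods -n kube-system -l k8s-app=kube-proxy -o wide   # expect one Running pod per node
kubectl get endpoints markdownrender                            # expect both pod IPs on port 8080
kubectl logs -n kube-system <kube-proxy-pod-on-failing-node>    # hypothetical name; look for sync errors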
Service & Deployment
apiVersion: v1
kind: Service
metadata:
  name: markdownrender
  labels:
    app: markdownrender
spec:
  type: NodePort
  externalTrafficPolicy: Cluster
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
      nodePort: 31118
  selector:
    app: markdownrender
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: markdownrender
  labels:
    app: markdownrender
spec:
  replicas: 2
  selector:
    matchLabels:
      app: markdownrender
  template:
    metadata:
      labels:
        app: markdownrender
    spec:
      containers:
        - name: markdownrender
          image: functions/markdownrender:latest-armhf
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
              protocol: TCP
kubectl get services
NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
kubernetes       ClusterIP   10.96.0.1     <none>        443/TCP          111m
markdownrender   NodePort    10.104.5.83   <none>        8080:31118/TCP   102m
kubectl get deployments
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
markdownrender   2/2     2            2           101m
kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
markdownrender-f9744b577-pcplc   1/1     Running   1          90m   10.244.1.2   kube-node233   <none>           <none>
markdownrender-f9744b577-x4j4k   1/1     Running   1          90m   10.244.3.2   kube-node232   <none>           <none>
Running curl http://127.0.0.1:31118 -d "# test" --max-time 1 on any node other than kube-node233 and kube-node232 (the nodes hosting the pods) always times out with "Connection timed out".
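To separate a kube-proxy problem from a pod-network problem, it helps to curl a pod IP directly from a node that does not host it (a sketch, using the pod IPs from the listing above); if this also times out, the overlay network itself is broken:
# run on a node hosting no markdownrender pod, e.g. kube-node231
curl http://10.244.3.2:8080 -d "# test" --max-time 1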
sudo iptables-save (on master node 230)
# Generated by xtables-save v1.8.2 on Sun Jan 19 16:05:19 2020
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-TPAZEM2ZI6GIP4H4 - [0:0]
:KUBE-SVC-QXMBXH4RFEQTDMUZ - [0:0]
:KUBE-SEP-7S77XOJGOAF6ON4P - [0:0]
:KUBE-SEP-GE6BLW5CUF74UDN2 - [0:0]
:KUBE-SEP-IRMT6RY5EEEBXDAY - [0:0]
:KUBE-SEP-232DQYSHL5HNRYWJ - [0:0]
:KUBE-SEP-2Z3537XSN3RJRU3M - [0:0]
:KUBE-SEP-A4UL7OUXQPUR7Y7Q - [0:0]
:KUBE-SEP-275NWNNANOEIGYHG - [0:0]
:KUBE-SEP-CPH3WXMLRJ2BZFXW - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE --random-fully
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.104.5.83/32 -p tcp -m comment --comment "default/markdownrender: cluster IP" -m tcp --dport 8080 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.104.5.83/32 -p tcp -m comment --comment "default/markdownrender: cluster IP" -m tcp --dport 8080 -j KUBE-SVC-QXMBXH4RFEQTDMUZ
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/markdownrender:" -m tcp --dport 31118 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/markdownrender:" -m tcp --dport 31118 -j KUBE-SVC-QXMBXH4RFEQTDMUZ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-IRMT6RY5EEEBXDAY
-A KUBE-SVC-TCOU7JCQXEZGVUNU -j KUBE-SEP-232DQYSHL5HNRYWJ
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-2Z3537XSN3RJRU3M
-A KUBE-SVC-ERIFXISQEP7F7OF4 -j KUBE-SEP-A4UL7OUXQPUR7Y7Q
-A KUBE-SVC-JD5MR3NA4I4DYORP -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-275NWNNANOEIGYHG
-A KUBE-SVC-JD5MR3NA4I4DYORP -j KUBE-SEP-CPH3WXMLRJ2BZFXW
-A KUBE-SVC-NPX46M4PTMTKRN6Y -j KUBE-SEP-TPAZEM2ZI6GIP4H4
-A KUBE-SEP-TPAZEM2ZI6GIP4H4 -s 192.168.2.230/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-TPAZEM2ZI6GIP4H4 -p tcp -m tcp -j DNAT --to-destination 192.168.2.230:6443
-A KUBE-SVC-QXMBXH4RFEQTDMUZ -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-7S77XOJGOAF6ON4P
-A KUBE-SVC-QXMBXH4RFEQTDMUZ -j KUBE-SEP-GE6BLW5CUF74UDN2
-A KUBE-SEP-7S77XOJGOAF6ON4P -s 10.244.1.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-7S77XOJGOAF6ON4P -p tcp -m tcp -j DNAT --to-destination 10.244.1.3:8080
-A KUBE-SEP-GE6BLW5CUF74UDN2 -s 10.244.3.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-GE6BLW5CUF74UDN2 -p tcp -m tcp -j DNAT --to-destination 10.244.3.3:8080
-A KUBE-SEP-IRMT6RY5EEEBXDAY -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-IRMT6RY5EEEBXDAY -p udp -m udp -j DNAT --to-destination 10.244.0.6:53
-A KUBE-SEP-232DQYSHL5HNRYWJ -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-232DQYSHL5HNRYWJ -p udp -m udp -j DNAT --to-destination 10.244.0.7:53
-A KUBE-SEP-2Z3537XSN3RJRU3M -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-2Z3537XSN3RJRU3M -p tcp -m tcp -j DNAT --to-destination 10.244.0.6:53
-A KUBE-SEP-A4UL7OUXQPUR7Y7Q -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-A4UL7OUXQPUR7Y7Q -p tcp -m tcp -j DNAT --to-destination 10.244.0.7:53
-A KUBE-SEP-275NWNNANOEIGYHG -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-275NWNNANOEIGYHG -p tcp -m tcp -j DNAT --to-destination 10.244.0.6:9153
-A KUBE-SEP-CPH3WXMLRJ2BZFXW -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-CPH3WXMLRJ2BZFXW -p tcp -m tcp -j DNAT --to-destination 10.244.0.7:9153
COMMIT
# Completed on Sun Jan 19 16:05:19 2020
# Generated by xtables-save v1.8.2 on Sun Jan 19 16:05:19 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-FORWARD - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m mark --mark 0x8000/0x8000 -m comment --comment "kubernetes firewall for dropping marked packets" -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Sun Jan 19 16:05:20 2020
# Generated by xtables-save v1.8.2 on Sun Jan 19 16:05:20 2020
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Sun Jan 19 16:05:20 2020
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them
sudo iptables-save (on node 231, which runs no markdownrender pod)
# Generated by xtables-save v1.8.2 on Sun Jan 19 16:08:01 2020
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-TPAZEM2ZI6GIP4H4 - [0:0]
:KUBE-SVC-QXMBXH4RFEQTDMUZ - [0:0]
:KUBE-SEP-7S77XOJGOAF6ON4P - [0:0]
:KUBE-SEP-GE6BLW5CUF74UDN2 - [0:0]
:KUBE-SEP-IRMT6RY5EEEBXDAY - [0:0]
:KUBE-SEP-232DQYSHL5HNRYWJ - [0:0]
:KUBE-SEP-2Z3537XSN3RJRU3M - [0:0]
:KUBE-SEP-A4UL7OUXQPUR7Y7Q - [0:0]
:KUBE-SEP-275NWNNANOEIGYHG - [0:0]
:KUBE-SEP-CPH3WXMLRJ2BZFXW - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE --random-fully
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.104.5.83/32 -p tcp -m comment --comment "default/markdownrender: cluster IP" -m tcp --dport 8080 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.104.5.83/32 -p tcp -m comment --comment "default/markdownrender: cluster IP" -m tcp --dport 8080 -j KUBE-SVC-QXMBXH4RFEQTDMUZ
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/markdownrender:" -m tcp --dport 31118 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/markdownrender:" -m tcp --dport 31118 -j KUBE-SVC-QXMBXH4RFEQTDMUZ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-IRMT6RY5EEEBXDAY
-A KUBE-SVC-TCOU7JCQXEZGVUNU -j KUBE-SEP-232DQYSHL5HNRYWJ
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-2Z3537XSN3RJRU3M
-A KUBE-SVC-ERIFXISQEP7F7OF4 -j KUBE-SEP-A4UL7OUXQPUR7Y7Q
-A KUBE-SVC-JD5MR3NA4I4DYORP -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-275NWNNANOEIGYHG
-A KUBE-SVC-JD5MR3NA4I4DYORP -j KUBE-SEP-CPH3WXMLRJ2BZFXW
-A KUBE-SVC-NPX46M4PTMTKRN6Y -j KUBE-SEP-TPAZEM2ZI6GIP4H4
-A KUBE-SEP-TPAZEM2ZI6GIP4H4 -s 192.168.2.230/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-TPAZEM2ZI6GIP4H4 -p tcp -m tcp -j DNAT --to-destination 192.168.2.230:6443
-A KUBE-SVC-QXMBXH4RFEQTDMUZ -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-7S77XOJGOAF6ON4P
-A KUBE-SVC-QXMBXH4RFEQTDMUZ -j KUBE-SEP-GE6BLW5CUF74UDN2
-A KUBE-SEP-7S77XOJGOAF6ON4P -s 10.244.1.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-7S77XOJGOAF6ON4P -p tcp -m tcp -j DNAT --to-destination 10.244.1.3:8080
-A KUBE-SEP-GE6BLW5CUF74UDN2 -s 10.244.3.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-GE6BLW5CUF74UDN2 -p tcp -m tcp -j DNAT --to-destination 10.244.3.3:8080
-A KUBE-SEP-IRMT6RY5EEEBXDAY -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-IRMT6RY5EEEBXDAY -p udp -m udp -j DNAT --to-destination 10.244.0.6:53
-A KUBE-SEP-232DQYSHL5HNRYWJ -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-232DQYSHL5HNRYWJ -p udp -m udp -j DNAT --to-destination 10.244.0.7:53
-A KUBE-SEP-2Z3537XSN3RJRU3M -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-2Z3537XSN3RJRU3M -p tcp -m tcp -j DNAT --to-destination 10.244.0.6:53
-A KUBE-SEP-A4UL7OUXQPUR7Y7Q -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-A4UL7OUXQPUR7Y7Q -p tcp -m tcp -j DNAT --to-destination 10.244.0.7:53
-A KUBE-SEP-275NWNNANOEIGYHG -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-275NWNNANOEIGYHG -p tcp -m tcp -j DNAT --to-destination 10.244.0.6:9153
-A KUBE-SEP-CPH3WXMLRJ2BZFXW -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-CPH3WXMLRJ2BZFXW -p tcp -m tcp -j DNAT --to-destination 10.244.0.7:9153
COMMIT
# Completed on Sun Jan 19 16:08:01 2020
# Generated by xtables-save v1.8.2 on Sun Jan 19 16:08:01 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-FORWARD - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m mark --mark 0x8000/0x8000 -m comment --comment "kubernetes firewall for dropping marked packets" -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Sun Jan 19 16:08:01 2020
# Generated by xtables-save v1.8.2 on Sun Jan 19 16:08:01 2020
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Sun Jan 19 16:08:01 2020
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them
sudo iptables-save (on node 232, which runs one of the markdownrender pods)
# Generated by xtables-save v1.8.2 on Sun Jan 19 16:11:44 2020
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-TPAZEM2ZI6GIP4H4 - [0:0]
:KUBE-SVC-QXMBXH4RFEQTDMUZ - [0:0]
:KUBE-SEP-7S77XOJGOAF6ON4P - [0:0]
:KUBE-SEP-GE6BLW5CUF74UDN2 - [0:0]
:KUBE-SEP-IRMT6RY5EEEBXDAY - [0:0]
:KUBE-SEP-232DQYSHL5HNRYWJ - [0:0]
:KUBE-SEP-2Z3537XSN3RJRU3M - [0:0]
:KUBE-SEP-A4UL7OUXQPUR7Y7Q - [0:0]
:KUBE-SEP-275NWNNANOEIGYHG - [0:0]
:KUBE-SEP-CPH3WXMLRJ2BZFXW - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE --random-fully
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.104.5.83/32 -p tcp -m comment --comment "default/markdownrender: cluster IP" -m tcp --dport 8080 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.104.5.83/32 -p tcp -m comment --comment "default/markdownrender: cluster IP" -m tcp --dport 8080 -j KUBE-SVC-QXMBXH4RFEQTDMUZ
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/markdownrender:" -m tcp --dport 31118 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/markdownrender:" -m tcp --dport 31118 -j KUBE-SVC-QXMBXH4RFEQTDMUZ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-IRMT6RY5EEEBXDAY
-A KUBE-SVC-TCOU7JCQXEZGVUNU -j KUBE-SEP-232DQYSHL5HNRYWJ
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-2Z3537XSN3RJRU3M
-A KUBE-SVC-ERIFXISQEP7F7OF4 -j KUBE-SEP-A4UL7OUXQPUR7Y7Q
-A KUBE-SVC-JD5MR3NA4I4DYORP -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-275NWNNANOEIGYHG
-A KUBE-SVC-JD5MR3NA4I4DYORP -j KUBE-SEP-CPH3WXMLRJ2BZFXW
-A KUBE-SVC-NPX46M4PTMTKRN6Y -j KUBE-SEP-TPAZEM2ZI6GIP4H4
-A KUBE-SEP-TPAZEM2ZI6GIP4H4 -s 192.168.2.230/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-TPAZEM2ZI6GIP4H4 -p tcp -m tcp -j DNAT --to-destination 192.168.2.230:6443
-A KUBE-SVC-QXMBXH4RFEQTDMUZ -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-7S77XOJGOAF6ON4P
-A KUBE-SVC-QXMBXH4RFEQTDMUZ -j KUBE-SEP-GE6BLW5CUF74UDN2
-A KUBE-SEP-7S77XOJGOAF6ON4P -s 10.244.1.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-7S77XOJGOAF6ON4P -p tcp -m tcp -j DNAT --to-destination 10.244.1.3:8080
-A KUBE-SEP-GE6BLW5CUF74UDN2 -s 10.244.3.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-GE6BLW5CUF74UDN2 -p tcp -m tcp -j DNAT --to-destination 10.244.3.3:8080
-A KUBE-SEP-IRMT6RY5EEEBXDAY -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-IRMT6RY5EEEBXDAY -p udp -m udp -j DNAT --to-destination 10.244.0.6:53
-A KUBE-SEP-232DQYSHL5HNRYWJ -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-232DQYSHL5HNRYWJ -p udp -m udp -j DNAT --to-destination 10.244.0.7:53
-A KUBE-SEP-2Z3537XSN3RJRU3M -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-2Z3537XSN3RJRU3M -p tcp -m tcp -j DNAT --to-destination 10.244.0.6:53
-A KUBE-SEP-A4UL7OUXQPUR7Y7Q -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-A4UL7OUXQPUR7Y7Q -p tcp -m tcp -j DNAT --to-destination 10.244.0.7:53
-A KUBE-SEP-275NWNNANOEIGYHG -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-275NWNNANOEIGYHG -p tcp -m tcp -j DNAT --to-destination 10.244.0.6:9153
-A KUBE-SEP-CPH3WXMLRJ2BZFXW -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-CPH3WXMLRJ2BZFXW -p tcp -m tcp -j DNAT --to-destination 10.244.0.7:9153
COMMIT
# Completed on Sun Jan 19 16:11:44 2020
# Generated by xtables-save v1.8.2 on Sun Jan 19 16:11:44 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-FORWARD - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m mark --mark 0x8000/0x8000 -m comment --comment "kubernetes firewall for dropping marked packets" -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Sun Jan 19 16:11:44 2020
# Generated by xtables-save v1.8.2 on Sun Jan 19 16:11:44 2020
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Sun Jan 19 16:11:44 2020
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them
I also went through "Nodeport only works on Pod Host" and "NodePort only responding on node where pod is running", but still had no success.

If you're running on a cloud provider, you may need to open a firewall rule for the nodes' NodePort (31118) listed in your post.
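On a bare-metal cluster like this one, the equivalent is opening the port in each node's host firewall; for example (a sketch, assuming ufw — adapt to whatever firewall the Pis run):
sudo ufw allow 31118/tcp          # the nodePort from the Service
sudo ufw allow 30000:32767/tcp    # or open the whole default NodePort range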
If the problem still remains, there could be an issue with the pod networking. It is difficult to identify the root cause without access to the cluster, but the posts below might be helpful.
https://github.com/kubernetes/kubernetes/issues/58908
https://github.com/kubernetes/kubernetes/issues/70222
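A quick way to exercise the pod network directly (a sketch; it assumes a busybox image is pullable on the nodes) is to start a throwaway pod and fetch from a pod scheduled on a different node:
kubectl run netcheck --rm -it --image=busybox -- sh
# inside the shell, use a pod IP from `kubectl get pods -o wide` that lives on another node:
wget -qO- http://10.244.1.2:8080
If that hangs, the overlay network between nodes is the problem rather than the NodePort rules.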

A better approach would be to use an Ingress rather than relying on the iptables route, primarily because you'll lose that configuration on node restarts/drains.
The best and most easily maintained option is the NGINX ingress controller. When you define it, set the hostPort to the port you want it to listen on physically on the node and map it to the containerPort, which is the port the container actually serves on (8080). Since it runs as a DaemonSet, it will take care of caching requests as well as act as a load balancer between the nodes by default.
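A minimal sketch of that setup (illustrative only — the image tag, names, and namespace are not from the original post, and a real ingress-nginx install also needs its ServiceAccount/RBAC from the upstream manifests):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1  # illustrative tag
          ports:
            - containerPort: 80   # port the controller serves on inside the container
              hostPort: 80        # port bound on every node, since this runs as a DaemonSet
            - containerPort: 443
              hostPort: 443
An Ingress resource then routes host/path rules to the markdownrender Service on port 8080.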

Related

kubernetes coredns can't resolve?

coredns state
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-8c79ffd8b-5ns7r 1/1 Running 0 121m 10.88.0.137 127.0.0.1
service state
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.0.0.10    <none>        53/UDP,53/TCP,9153/TCP   122m
Details for the kube-dns service:
Name:               kube-dns
Namespace:          kube-system
Labels:             addonmanager.kubernetes.io/mode=Reconcile
                    k8s-app=kube-dns
                    kubernetes.io/cluster-service=true
                    kubernetes.io/name=CoreDNS
Annotations:        prometheus.io/port: 9153
                    prometheus.io/scrape: true
Selector:           k8s-app=kube-dns
Type:               ClusterIP
IP Family Policy:   SingleStack
IP Families:        IPv4
IP:                 10.0.0.10
IPs:                10.0.0.10
Port:               dns  53/UDP
TargetPort:         53/UDP
Endpoints:          10.88.0.137:53
Port:               dns-tcp  53/TCP
TargetPort:         53/TCP
Endpoints:          10.88.0.137:53
Port:               metrics  9153/TCP
TargetPort:         9153/TCP
Endpoints:          10.88.0.137:9153
Session Affinity:   None
Events:             <none>
I create "busybox" pod
busybox 1/1 Running 7 (5m45s ago) 122m 10.88.0.139 127.0.0.1
./kubectl.sh exec -it busybox -- nslookup kube-dns.kube-system.svc.cluster.local
Domain resolution fails when /etc/resolv.conf contains the following:
nameserver 10.0.0.10
options ndots:5
Error message:
Server: 10.0.0.10
Address 1: 10.0.0.10
nslookup: can't resolve 'kube-dns.kube-system.svc.cluster.local'
command terminated with exit code 1
but I can ping 10.88.0.137:
64 bytes from 10.88.0.137: seq=0 ttl=64 time=0.043 ms
64 bytes from 10.88.0.137: seq=1 ttl=64 time=0.114 ms
64 bytes from 10.88.0.137: seq=2 ttl=64 time=0.086 ms
./kubectl.sh exec -it busybox -- nslookup kube-dns.kube-system.svc.cluster.local
Domain resolution succeeds when /etc/resolv.conf contains the following:
nameserver 10.88.0.137
options ndots:5
Server: 10.88.0.137
Address 1: 10.88.0.137 10-88-0-137.kube-dns.kube-system.svc.cluster.local
Name: kube-dns.kube-system.svc.cluster.local
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
When I started docker.service, it worked.
Is this an iptables problem?
# Generated by iptables-save v1.8.4 on Thu Apr 28 11:32:57 2022
*mangle
:PREROUTING ACCEPT [286724:45943921]
:INPUT ACCEPT [286421:45917739]
:FORWARD ACCEPT [303:26182]
:OUTPUT ACCEPT [286333:44357718]
:POSTROUTING ACCEPT [286527:44373508]
:KUBE-IPTABLES-HINT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Thu Apr 28 11:32:57 2022
# Generated by iptables-save v1.8.4 on Thu Apr 28 11:32:57 2022
*nat
:PREROUTING ACCEPT [5:763]
:INPUT ACCEPT [5:763]
:OUTPUT ACCEPT [183:11131]
:POSTROUTING ACCEPT [183:11131]
:CNI-65c6fd99503bff7510c4bb02 - [0:0]
:CNI-8484e8a122ace36d26bd565d - [0:0]
:CNI-a854b6638aef4961af4db12b - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SEP-36TNAFFT2KZBXN2D - [0:0]
:KUBE-SEP-M2U7IOCBO7O5P45B - [0:0]
:KUBE-SEP-VO2PPQ7LAMK34B5H - [0:0]
:KUBE-SEP-WCWCTDIFXHIUSRHD - [0:0]
:KUBE-SEP-ZXXUG7AYMOGMIOPQ - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-V2OKYYMBY3REGZOG - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 10.88.0.137/32 -m comment --comment "name: \"containerd-net\" id: \"b53510e54e70e73ed24423f543149fad70cb974de592a636c99fda46d953dfe7\"" -j CNI-65c6fd99503bff7510c4bb02
-A POSTROUTING -s 10.88.0.138/32 -m comment --comment "name: \"containerd-net\" id: \"ab9a9580ba9c6f12470d6f2fc255bd1a1043818ad15b51b9548b9b38425ee0db\"" -j CNI-8484e8a122ace36d26bd565d
-A POSTROUTING -s 10.88.0.139/32 -m comment --comment "name: \"containerd-net\" id: \"d9aa074f2053417e3cba4387af7c92d3e6fd0b15eb145ee7298137f0aa170f02\"" -j CNI-a854b6638aef4961af4db12b
-A CNI-65c6fd99503bff7510c4bb02 -d 10.88.0.0/16 -m comment --comment "name: \"containerd-net\" id: \"b53510e54e70e73ed24423f543149fad70cb974de592a636c99fda46d953dfe7\"" -j ACCEPT
-A CNI-65c6fd99503bff7510c4bb02 ! -d 224.0.0.0/4 -m comment --comment "name: \"containerd-net\" id: \"b53510e54e70e73ed24423f543149fad70cb974de592a636c99fda46d953dfe7\"" -j MASQUERADE
-A CNI-8484e8a122ace36d26bd565d -d 10.88.0.0/16 -m comment --comment "name: \"containerd-net\" id: \"ab9a9580ba9c6f12470d6f2fc255bd1a1043818ad15b51b9548b9b38425ee0db\"" -j ACCEPT
-A CNI-8484e8a122ace36d26bd565d ! -d 224.0.0.0/4 -m comment --comment "name: \"containerd-net\" id: \"ab9a9580ba9c6f12470d6f2fc255bd1a1043818ad15b51b9548b9b38425ee0db\"" -j MASQUERADE
-A CNI-a854b6638aef4961af4db12b -d 10.88.0.0/16 -m comment --comment "name: \"containerd-net\" id: \"d9aa074f2053417e3cba4387af7c92d3e6fd0b15eb145ee7298137f0aa170f02\"" -j ACCEPT
-A CNI-a854b6638aef4961af4db12b ! -d 224.0.0.0/4 -m comment --comment "name: \"containerd-net\" id: \"d9aa074f2053417e3cba4387af7c92d3e6fd0b15eb145ee7298137f0aa170f02\"" -j MASQUERADE
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SEP-36TNAFFT2KZBXN2D -s 10.88.0.138/32 -m comment --comment "default/nginx-service" -j KUBE-MARK-MASQ
-A KUBE-SEP-36TNAFFT2KZBXN2D -p tcp -m comment --comment "default/nginx-service" -m tcp -j DNAT --to-destination 10.88.0.138:80
-A KUBE-SEP-M2U7IOCBO7O5P45B -s 10.88.0.137/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-M2U7IOCBO7O5P45B -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.88.0.137:53
-A KUBE-SEP-VO2PPQ7LAMK34B5H -s 10.211.55.11/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-VO2PPQ7LAMK34B5H -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 10.211.55.11:6443
-A KUBE-SEP-WCWCTDIFXHIUSRHD -s 10.88.0.137/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-WCWCTDIFXHIUSRHD -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.88.0.137:53
-A KUBE-SEP-ZXXUG7AYMOGMIOPQ -s 10.88.0.137/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZXXUG7AYMOGMIOPQ -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.88.0.137:9153
-A KUBE-SERVICES -d 10.0.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.0.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.0.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -d 10.0.0.179/32 -p tcp -m comment --comment "default/nginx-service cluster IP" -m tcp --dport 80 -j KUBE-SVC-V2OKYYMBY3REGZOG
-A KUBE-SERVICES -d 10.0.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp -> 10.88.0.137:53" -j KUBE-SEP-WCWCTDIFXHIUSRHD
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics -> 10.88.0.137:9153" -j KUBE-SEP-ZXXUG7AYMOGMIOPQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https -> 10.211.55.11:6443" -j KUBE-SEP-VO2PPQ7LAMK34B5H
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns -> 10.88.0.137:53" -j KUBE-SEP-M2U7IOCBO7O5P45B
-A KUBE-SVC-V2OKYYMBY3REGZOG -m comment --comment "default/nginx-service -> 10.88.0.138:80" -j KUBE-SEP-36TNAFFT2KZBXN2D
COMMIT
# Completed on Thu Apr 28 11:32:57 2022
# Generated by iptables-save v1.8.4 on Thu Apr 28 11:32:57 2022
*filter
:INPUT ACCEPT [22414:3436031]
:FORWARD ACCEPT [12:944]
:OUTPUT ACCEPT [22404:3431451]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Thu Apr 28 11:32:57 2022
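Given that resolution works against the pod IP but fails against the ClusterIP, one common cause worth ruling out (a sketch, not a confirmed diagnosis for this cluster) is bridged pod traffic bypassing iptables, in which case the KUBE-SERVICES DNAT to 10.0.0.10 never fires:
lsmod | grep br_netfilter                    # the module must be loaded
sysctl net.bridge.bridge-nf-call-iptables    # should print 1
# if not:
sudo modprobe br_netfilter
sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
Starting docker.service also adjusts iptables and bridge state, which could explain why resolution began working after it was started.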

Unable to reach service using NodePort from k8s master

I have set up a Kubernetes cluster on Ubuntu 16.04 with a master and a worker. I deployed an application and created a NodePort service as below.
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: hello-app-deployment
spec:
  selector:
    matchLabels:
      app: hello-app
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app
          image: yeasy/simple-web:latest
          ports:
            - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: hello-app-service
spec:
  selector:
    app: hello-app
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 80
      nodePort: 30020
      name: hello-app-port
  type: NodePort
The pods and service were created:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/hello-app-deployment-6bfdc9c668-smsgq 1/1 Running 0 83m 10.32.0.3 k8s-worker-1 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/hello-app-service NodePort 10.106.91.145 <none> 8000:30020/TCP 83m app=hello-app
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 10h <none>
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/hello-app-deployment 1/1 1 1 83m hello-app yeasy/simple-web:latest app=hello-app
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/hello-app-deployment-6bfdc9c668 1 1 1 83m hello-app yeasy/simple-web:latest app=hello-app,pod-template-hash=6bfdc9c668
I am able to access the application from the host where it is deployed:
kubeuser@kube-worker-1:~$ curl http://kube-worker-1:30020
Hello!
But when I access it from the master node or the other worker nodes, it doesn't connect:
kubeuser@k8s-master:~$ curl http://k8s-master:30020
curl: (7) Failed to connect to k8s-master port 30020: Connection refused
kubeuser@k8s-master:~$ curl http://localhost:30020
curl: (7) Failed to connect to localhost port 30020: Connection refused
kubeuser@k8s-master:~$ curl http://k8s-worker-2:30020
Failed to connect to k8s-worker-2 port 30020: No route to host
kubeuser@k8s-worker-2:~$ curl http://localhost:30020
Failed to connect to localhost port 30020: No route to host
The cluster was initialized with the following pod network CIDR:
kubeadm init --pod-network-cidr=192.168.0.0/16
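Note that the pod IPs above (10.32.0.x) come from Weave's default 10.32.0.0/12 range rather than the 192.168.0.0/16 passed to kubeadm. A quick way to compare the CIDR kube-proxy was configured with against the actual pod IPs (a sketch, assuming a kubeadm-managed kube-proxy ConfigMap):
kubectl -n kube-system get cm kube-proxy -o yaml | grep -i clusterCIDR   # what kube-proxy treats as the pod network
kubectl get pods --all-namespaces -o wide                                # the IPs actually assigned (10.32.0.x here)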
The following is the iptables-save output:
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [30:1891]
:POSTROUTING ACCEPT [30:1891]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-3DU66DE6VORVEQVD - [0:0]
:KUBE-SEP-6UWAUPYDDOV5SU5B - [0:0]
:KUBE-SEP-S4MK5EVI7CLHCCS6 - [0:0]
:KUBE-SEP-SWLOBIBPXYBP7G2Z - [0:0]
:KUBE-SEP-SZZ7MOWKTWUFXIJT - [0:0]
:KUBE-SEP-UJJNLSZU6HL4F5UO - [0:0]
:KUBE-SEP-ZCHNBYOGFZRFKYMA - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:OUTPUT_direct - [0:0]
:POSTROUTING_ZONES - [0:0]
:POSTROUTING_ZONES_SOURCE - [0:0]
:POSTROUTING_direct - [0:0]
:POST_public - [0:0]
:POST_public_allow - [0:0]
:POST_public_deny - [0:0]
:POST_public_log - [0:0]
:PREROUTING_ZONES - [0:0]
:PREROUTING_ZONES_SOURCE - [0:0]
:PREROUTING_direct - [0:0]
:PRE_public - [0:0]
:PRE_public_allow - [0:0]
:PRE_public_deny - [0:0]
:PRE_public_log - [0:0]
:WEAVE - [0:0]
:WEAVE-CANARY - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -j PREROUTING_direct
-A PREROUTING -j PREROUTING_ZONES_SOURCE
-A PREROUTING -j PREROUTING_ZONES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j OUTPUT_direct
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -j POSTROUTING_direct
-A POSTROUTING -j POSTROUTING_ZONES_SOURCE
-A POSTROUTING -j POSTROUTING_ZONES
-A POSTROUTING -j WEAVE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-3DU66DE6VORVEQVD -s 10.32.0.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-3DU66DE6VORVEQVD -p udp -m udp -j DNAT --to-destination 10.32.0.3:53
-A KUBE-SEP-6UWAUPYDDOV5SU5B -s 10.111.1.158/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-6UWAUPYDDOV5SU5B -p tcp -m tcp -j DNAT --to-destination 10.111.1.158:6443
-A KUBE-SEP-S4MK5EVI7CLHCCS6 -s 10.32.0.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-S4MK5EVI7CLHCCS6 -p tcp -m tcp -j DNAT --to-destination 10.32.0.3:53
-A KUBE-SEP-SWLOBIBPXYBP7G2Z -s 10.32.0.2/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-SWLOBIBPXYBP7G2Z -p tcp -m tcp -j DNAT --to-destination 10.32.0.2:9153
-A KUBE-SEP-SZZ7MOWKTWUFXIJT -s 10.32.0.2/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-SZZ7MOWKTWUFXIJT -p udp -m udp -j DNAT --to-destination 10.32.0.2:53
-A KUBE-SEP-UJJNLSZU6HL4F5UO -s 10.32.0.2/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-UJJNLSZU6HL4F5UO -p tcp -m tcp -j DNAT --to-destination 10.32.0.2:53
-A KUBE-SEP-ZCHNBYOGFZRFKYMA -s 10.32.0.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-ZCHNBYOGFZRFKYMA -p tcp -m tcp -j DNAT --to-destination 10.32.0.3:9153
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-UJJNLSZU6HL4F5UO
-A KUBE-SVC-ERIFXISQEP7F7OF4 -j KUBE-SEP-S4MK5EVI7CLHCCS6
-A KUBE-SVC-JD5MR3NA4I4DYORP -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SWLOBIBPXYBP7G2Z
-A KUBE-SVC-JD5MR3NA4I4DYORP -j KUBE-SEP-ZCHNBYOGFZRFKYMA
-A KUBE-SVC-NPX46M4PTMTKRN6Y -j KUBE-SEP-6UWAUPYDDOV5SU5B
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SZZ7MOWKTWUFXIJT
-A KUBE-SVC-TCOU7JCQXEZGVUNU -j KUBE-SEP-3DU66DE6VORVEQVD
-A POSTROUTING_ZONES -g POST_public
-A POST_public -j POST_public_log
-A POST_public -j POST_public_deny
-A POST_public -j POST_public_allow
-A PREROUTING_ZONES -g PRE_public
-A PRE_public -j PRE_public_log
-A PRE_public -j PRE_public_deny
-A PRE_public -j PRE_public_allow
-A WEAVE -m set --match-set weaver-no-masq-local dst -m comment --comment "Prevent SNAT to locally running containers" -j RETURN
-A WEAVE -s 10.32.0.0/12 -d 224.0.0.0/4 -j RETURN
-A WEAVE ! -s 10.32.0.0/12 -d 10.32.0.0/12 -j MASQUERADE
-A WEAVE -s 10.32.0.0/12 ! -d 10.32.0.0/12 -j MASQUERADE
COMMIT
# Completed on Sun Aug 16 17:11:47 2020
# Generated by iptables-save v1.6.0 on Sun Aug 16 17:11:47 2020
*security
:INPUT ACCEPT [1417084:253669465]
:FORWARD ACCEPT [4:488]
:OUTPUT ACCEPT [1414939:285083560]
:FORWARD_direct - [0:0]
:INPUT_direct - [0:0]
:OUTPUT_direct - [0:0]
-A INPUT -j INPUT_direct
-A FORWARD -j FORWARD_direct
-A OUTPUT -j OUTPUT_direct
COMMIT
# Completed on Sun Aug 16 17:11:47 2020
# Generated by iptables-save v1.6.0 on Sun Aug 16 17:11:47 2020
*raw
:PREROUTING ACCEPT [1417204:253747905]
:OUTPUT ACCEPT [1414959:285085300]
:OUTPUT_direct - [0:0]
:PREROUTING_direct - [0:0]
-A PREROUTING -j PREROUTING_direct
-A OUTPUT -j OUTPUT_direct
COMMIT
# Completed on Sun Aug 16 17:11:47 2020
# Generated by iptables-save v1.6.0 on Sun Aug 16 17:11:47 2020
*mangle
:PREROUTING ACCEPT [1401943:246825511]
:INPUT ACCEPT [1401934:246824763]
:FORWARD ACCEPT [4:488]
:OUTPUT ACCEPT [1399691:277923964]
:POSTROUTING ACCEPT [1399681:277923072]
:FORWARD_direct - [0:0]
:INPUT_direct - [0:0]
:OUTPUT_direct - [0:0]
:POSTROUTING_direct - [0:0]
:PREROUTING_ZONES - [0:0]
:PREROUTING_ZONES_SOURCE - [0:0]
:PREROUTING_direct - [0:0]
:PRE_public - [0:0]
:PRE_public_allow - [0:0]
:PRE_public_deny - [0:0]
:PRE_public_log - [0:0]
:WEAVE-CANARY - [0:0]
-A PREROUTING -j PREROUTING_direct
-A PREROUTING -j PREROUTING_ZONES_SOURCE
-A PREROUTING -j PREROUTING_ZONES
-A INPUT -j INPUT_direct
-A FORWARD -j FORWARD_direct
-A OUTPUT -j OUTPUT_direct
-A POSTROUTING -j POSTROUTING_direct
-A PREROUTING_ZONES -g PRE_public
-A PRE_public -j PRE_public_log
-A PRE_public -j PRE_public_deny
-A PRE_public -j PRE_public_allow
COMMIT
# Completed on Sun Aug 16 17:11:47 2020
# Generated by iptables-save v1.6.0 on Sun Aug 16 17:11:47 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [2897:591977]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:FORWARD_IN_ZONES - [0:0]
:FORWARD_IN_ZONES_SOURCE - [0:0]
:FORWARD_OUT_ZONES - [0:0]
:FORWARD_OUT_ZONES_SOURCE - [0:0]
:FORWARD_direct - [0:0]
:FWDI_public - [0:0]
:FWDI_public_allow - [0:0]
:FWDI_public_deny - [0:0]
:FWDI_public_log - [0:0]
:FWDO_public - [0:0]
:FWDO_public_allow - [0:0]
:FWDO_public_deny - [0:0]
:FWDO_public_log - [0:0]
:INPUT_ZONES - [0:0]
:INPUT_ZONES_SOURCE - [0:0]
:INPUT_direct - [0:0]
:IN_public - [0:0]
:IN_public_allow - [0:0]
:IN_public_deny - [0:0]
:IN_public_log - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-SERVICES - [0:0]
:OUTPUT_direct - [0:0]
:WEAVE-CANARY - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-EGRESS - [0:0]
:WEAVE-NPC-EGRESS-ACCEPT - [0:0]
:WEAVE-NPC-EGRESS-CUSTOM - [0:0]
:WEAVE-NPC-EGRESS-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -j INPUT_direct
-A INPUT -j INPUT_ZONES_SOURCE
-A INPUT -j INPUT_ZONES
-A INPUT -p icmp -j ACCEPT
-A INPUT -m conntrack --ctstate INVALID -j DROP
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A INPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 6784 -m addrtype ! --src-type LOCAL -m conntrack ! --ctstate RELATED,ESTABLISHED -m comment --comment "Block non-local access to Weave Net control port" -j DROP
-A INPUT -i weave -j WEAVE-NPC-EGRESS
-A FORWARD -i weave -m comment --comment "NOTE: this must go before \'-j KUBE-FORWARD\'" -j WEAVE-NPC-EGRESS
-A FORWARD -o weave -m comment --comment "NOTE: this must go before \'-j KUBE-FORWARD\'" -j WEAVE-NPC
-A FORWARD -o weave -m state --state NEW -j NFLOG --nflog-group 86
-A FORWARD -o weave -j DROP
-A FORWARD -i weave ! -o weave -j ACCEPT
-A FORWARD -o weave -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i lo -j ACCEPT
-A FORWARD -j FORWARD_direct
-A FORWARD -j FORWARD_IN_ZONES_SOURCE
-A FORWARD -j FORWARD_IN_ZONES
-A FORWARD -j FORWARD_OUT_ZONES_SOURCE
-A FORWARD -j FORWARD_OUT_ZONES
-A FORWARD -p icmp -j ACCEPT
-A FORWARD -m conntrack --ctstate INVALID -j DROP
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A OUTPUT -j OUTPUT_direct
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A FORWARD_IN_ZONES -g FWDI_public
-A FORWARD_OUT_ZONES -g FWDO_public
-A FWDI_public -j FWDI_public_log
-A FWDI_public -j FWDI_public_deny
-A FWDI_public -j FWDI_public_allow
-A FWDO_public -j FWDO_public_log
-A FWDO_public -j FWDO_public_deny
-A FWDO_public -j FWDO_public_allow
-A INPUT_ZONES -g IN_public
-A IN_public -j IN_public_log
-A IN_public -j IN_public_deny
-A IN_public -j IN_public_allow
-A IN_public_allow -p tcp -m tcp --dport 22 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 8080 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 10251 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 6443 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 30000:32767 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 10255 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 10252 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 2379:2380 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 10250 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 6784 -m conntrack --ctstate NEW -j ACCEPT
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 192.168.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 192.168.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -d 224.0.0.0/4 -j ACCEPT
-A WEAVE-NPC -m physdev --physdev-out vethwe-bridge --physdev-is-bridged -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC-DEFAULT -m set --match-set weave-Rzff}h:=]JaaJl/G;(XJpGjZ[ dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-public" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-P.B|!ZhkAr5q=XZ?3}tMBA+0 dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-system" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-;rGqyMIl1HN^cfDki~Z$3]6!N dst -m comment --comment "DefaultAllow ingress isolation for namespace: default" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-]B*(W?)t*z5O17G044[gUo#$l dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-node-lease" -j ACCEPT
-A WEAVE-NPC-EGRESS -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC-EGRESS -m physdev --physdev-in vethwe-bridge --physdev-is-bridged -j RETURN
-A WEAVE-NPC-EGRESS -m addrtype --dst-type LOCAL -j RETURN
-A WEAVE-NPC-EGRESS -d 224.0.0.0/4 -j RETURN
-A WEAVE-NPC-EGRESS -m state --state NEW -j WEAVE-NPC-EGRESS-DEFAULT
-A WEAVE-NPC-EGRESS -m state --state NEW -m mark ! --mark 0x40000/0x40000 -j WEAVE-NPC-EGRESS-CUSTOM
-A WEAVE-NPC-EGRESS -m state --state NEW -m mark ! --mark 0x40000/0x40000 -j NFLOG --nflog-group 86
-A WEAVE-NPC-EGRESS-ACCEPT -j MARK --set-xmark 0x40000/0x40000
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src -m comment --comment "DefaultAllow egress isolation for namespace: kube-public" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src -m comment --comment "DefaultAllow egress isolation for namespace: kube-public" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src -m comment --comment "DefaultAllow egress isolation for namespace: kube-system" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src -m comment --comment "DefaultAllow egress isolation for namespace: kube-system" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src -m comment --comment "DefaultAllow egress isolation for namespace: default" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src -m comment --comment "DefaultAllow egress isolation for namespace: default" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-sui%__gZ}{kX~oZgI_Ttqp=Dp src -m comment --comment "DefaultAllow egress isolation for namespace: kube-node-lease" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-sui%__gZ}{kX~oZgI_Ttqp=Dp src -m comment --comment "DefaultAllow egress isolation for namespace: kube-node-lease" -j RETURN
weave status connections
-> 10.111.1.156:6783 failed IP allocation was seeded by different peers (received: [2a:21:42:e0:5d:5f(k8s-worker-1)], ours: [12:35:b2:39:cf:7d(k8s-master)]), retry: 2020-08-17 08:15:51.155197759 +0000 UTC m=+68737.225153235
weave status (run inside the weave pod)
Version: 2.7.0 (failed to check latest version - see logs; next check at 2020/08/17 13:35:46)
Service: router
Protocol: weave 1..2
Name: 12:35:b2:39:cf:7d(k8s-master)
Encryption: disabled
PeerDiscovery: enabled
Targets: 1
Connections: 1 (1 failed)
Peers: 1
TrustedSubnets: none
Service: ipam
Status: ready
Range: 10.32.0.0/12
DefaultSubnet: 10.32.0.0/12
I tried the solutions in these links, but they didn't work: solution1 and solution2.
Please let me know what could be the possible reason for the master not serving on the published NodePort.
Finally, it worked: the ports for Weave were not open in the firewall, as mentioned in this.
I also deleted the Weave deployment in Kubernetes, removed /var/lib/weave/weave-netdata.db, and deployed Weave again; after that it worked.
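For reference, Weave Net needs TCP 6783 and UDP 6783-6784 open between all hosts; with firewalld (which the zone chains in the dumps suggest is in use) that looks roughly like this (a sketch):
sudo firewall-cmd --permanent --add-port=6783/tcp
sudo firewall-cmd --permanent --add-port=6783-6784/udp
sudo firewall-cmd --reload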

kube-proxy unhappy after setting IPALLOC_RANGE in weave configuration

I'm installing Kubernetes v1.5.6 + Weave using kubeadm-1.6.0-0.alpha.0.2074.a092d8e0f95f52.x86_64.rpm on CentOS 7.3. Since my host machines' network IP range is 10.41.30.xx and it overlaps with Weave's internal IP range, I configured Weave to use an IPALLOC_RANGE of 172.30.0.0/16.
After setup, I'm not able to connect to Kubernetes services. kube-proxy complains about connecting to the Kubernetes master:
E0728 18:04:47.201682 1 server.go:421] Can't get Node "ctdpc001571.ctd.khi.com", assuming iptables proxy, err: Get https://10.41.30.50:6443/api/v1/nodes/ctdpc001571.ctd.khi.co.jp: dial tcp 10.41.30.50:6443: getsockopt: connection refused
I0728 18:04:47.204522 1 server.go:215] Using iptables Proxier.
W0728 18:04:47.205022 1 server.go:468] Failed to retrieve node info: Get https://10.41.30.50:6443/api/v1/nodes/ctdpc001571.ctd.khi.com: dial tcp 10.41.30.50:6443: getsockopt: connection refused
W0728 18:04:47.205325 1 proxier.go:249] invalid nodeIP, initialize kube-proxy with 127.0.0.1 as nodeIP
W0728 18:04:47.205347 1 proxier.go:254] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0728 18:04:47.205394 1 server.go:227] Tearing down userspace rules.
I0728 18:04:47.238324 1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_max' to 1048576
I0728 18:04:47.239243 1 conntrack.go:66] Setting conntrack hashsize to 262144
I0728 18:04:47.242492 1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0728 18:04:47.242640 1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
E0728 18:04:47.260748 1 reflector.go:188] pkg/proxy/config/api.go:33: Failed to list *api.Endpoints: Get https://10.41.30.50:6443/api/v1/endpoints?resourceVersion=0: dial tcp 10.41.30.50:6443: getsockopt: connection refused
E0728 18:04:47.260931 1 reflector.go:188] pkg/proxy/config/api.go:30: Failed to list *api.Service: Get https://10.41.30.50:6443/api/v1/services?resourceVersion=0: dial tcp 10.41.30.50:6443: getsockopt: connection refused
E0728 18:04:47.265569 1 event.go:208] Unable to write event: 'Post https://10.41.30.50:6443/api/v1/namespaces/default/events: dial tcp 10.41.30.50:6443: getsockopt: connection refused' (may retry after sleeping)
E0728 18:04:48.262006 1 reflector.go:188] pkg/proxy/config/api.go:33: Failed to list *api.Endpoints: Get https://10.41.30.50:6443/api/v1/endpoints?resourceVersion=0: dial tcp 10.41.30.50:6443: getsockopt: connection refused
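The clusterCIDR warning above stood out to me: kube-proxy was never told about the new pod range, so it cannot distinguish internal from external traffic. A possible mitigation (my assumption, not something I have verified on this cluster) is to add the Weave range to the kube-proxy command line in its manifest:
--cluster-cidr=172.30.0.0/16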
Steps I followed:
$ yum -y install \
yum-versionlock \
docker-1.12.6-11.el7.centos \
kubectl-1.5.4-0 \
kubelet-1.5.4-0 \
kubernetes-cni-0.3.0.1-0.07a8a2 \
https://storage.googleapis.com/falkonry-k8-installer/kubeadm-1.6.0-0.alpha.0.2074.a092d8e0f95f52.x86_64.rpm
$ yum versionlock add kubectl kubelet kubernetes-cni kubeadm
$ systemctl enable docker && systemctl start docker
$ systemctl enable kubelet && systemctl start kubelet
$ kubeadm init --use-kubernetes-version=v1.5.6
Set `IPALLOC_RANGE` to `172.30.0.0/16` in https://git.io/weave-kube (see the fragment sketched below)
$ kubectl apply -f weave-kube-config
$ kubectl run -i --tty busybox --image=busybox -- sh
$ nslookup kubernetes
After this, I cannot connect to kubernetes or any of the other services.
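For reference, the IPALLOC_RANGE change is made as an environment variable on the weave container in the downloaded manifest; roughly this fragment (a sketch of the relevant part only, not the full DaemonSet):
      containers:
        - name: weave
          env:
            - name: IPALLOC_RANGE
              value: "172.30.0.0/16"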
iptables-save output
# Generated by iptables-save v1.4.21 on Sat Jul 29 04:19:10 2017
*filter
:INPUT ACCEPT [1350:566634]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1344:579110]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
-A INPUT -j KUBE-FIREWALL
-A INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -i virbr0 -o virbr0 -j ACCEPT
-A FORWARD -o weave -j WEAVE-NPC
-A FORWARD -o weave -m state --state NEW -j NFLOG --nflog-group 86
-A FORWARD -o weave -j DROP
-A FORWARD -i weave ! -o weave -j ACCEPT
-A FORWARD -o weave -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A OUTPUT -o virbr0 -p udp -m udp --dport 68 -j ACCEPT
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -d 224.0.0.0/4 -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC -m set ! --match-set weave-local-pods dst -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-k?Z;25^M}|1s7P3|H9i;*;MhG dst -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-iuZcey(5DeXbzgRFs8Szo]+#p dst -j ACCEPT
COMMIT
# Completed on Sat Jul 29 04:19:10 2017
# Generated by iptables-save v1.4.21 on Sat Jul 29 04:19:10 2017
*nat
:PREROUTING ACCEPT [2:148]
:INPUT ACCEPT [2:148]
:OUTPUT ACCEPT [24:1452]
:POSTROUTING ACCEPT [24:1452]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-DYFWWILLIC32NPM5 - [0:0]
:KUBE-SEP-GX7UKBANGEPIDZWU - [0:0]
:KUBE-SEP-YXLAFMRH4ZX57Y3W - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:WEAVE - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 192.168.122.0/24 -d 224.0.0.0/24 -j RETURN
-A POSTROUTING -s 192.168.122.0/24 -d 255.255.255.255/32 -j RETURN
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p udp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE
-A POSTROUTING -j WEAVE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-DYFWWILLIC32NPM5 -s 172.30.0.5/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-DYFWWILLIC32NPM5 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.30.0.5:53
-A KUBE-SEP-GX7UKBANGEPIDZWU -s 172.30.0.5/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-GX7UKBANGEPIDZWU -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.30.0.5:53
-A KUBE-SEP-YXLAFMRH4ZX57Y3W -s 10.41.30.50/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-YXLAFMRH4ZX57Y3W -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-YXLAFMRH4ZX57Y3W --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.41.30.50:6443
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-DYFWWILLIC32NPM5
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-YXLAFMRH4ZX57Y3W --mask 255.255.255.255 --rsource -j KUBE-SEP-YXLAFMRH4ZX57Y3W
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-YXLAFMRH4ZX57Y3W
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-GX7UKBANGEPIDZWU
-A WEAVE -s 172.30.0.0/16 -d 224.0.0.0/4 -j RETURN
-A WEAVE ! -s 172.30.0.0/16 -d 172.30.0.0/16 -j MASQUERADE
-A WEAVE -s 172.30.0.0/16 ! -d 172.30.0.0/16 -j MASQUERADE
COMMIT
# Completed on Sat Jul 29 04:19:10 2017
# Generated by iptables-save v1.4.21 on Sat Jul 29 04:19:10 2017
*mangle
:PREROUTING ACCEPT [323631:155761830]
:INPUT ACCEPT [323586:155756413]
:FORWARD ACCEPT [26:1880]
:OUTPUT ACCEPT [317539:144236316]
:POSTROUTING ACCEPT [317582:144241373]
-A POSTROUTING -o virbr0 -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill
COMMIT
# Completed on Sat Jul 29 04:19:10 2017
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 100 0 0 eno1d1
10.41.30.0 0.0.0.0 255.255.255.0 U 100 0 0 eno1
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.30.0.0 0.0.0.0 255.255.0.0 U 0 0 0 weave
192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 eno1d1
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0

Cannot connect to its own service from inside pod on Kubernetes 1.6

I created a service and a deployment. Now, from inside the pod, I'm trying to connect to its own service. The request times out after a few minutes.
This works perfectly fine on Kubernetes 1.5.x but not on 1.6.x. FYI: I created the Kubernetes cluster using the kubeadm tool and am using Weave as the network plugin.
Cluster dump: https://drive.google.com/file/d/0ByZSwkp_d2U-aFREc3E5SjRCVFU/view?usp=sharing
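One classic cause of exactly this symptom is missing hairpin NAT: when the pod's request to its own Service gets load-balanced back to the same pod, the bridge port it sits on must have hairpin mode enabled. A quick check on the node (a diagnostic sketch; it assumes the weave device is a plain Linux bridge, which is not the case when Weave's fast datapath is in use):
$ for f in /sys/class/net/weave/brif/*/hairpin_mode; do echo "$f: $(cat "$f")"; done
kubelet's --hairpin-mode flag (hairpin-veth or promiscuous-bridge) is what controls this behaviour.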
Connecting to kafka service from other container
root@falkonry-redis-0:/data# curl -v http://falkonry-kafka:9092
* About to connect() to falkonry-kafka port 9092 (#0)
* Trying 10.99.232.10...
* connected
* Connected to falkonry-kafka (10.99.232.10) port 9092 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.26.0
> Host: falkonry-kafka:9092
> Accept: */*
>
* additional stuff not fine transfer.c:1037: 0 0
* Recv failure: Connection reset by peer
* Closing connection #0
curl: (56) Recv failure: Connection reset by peer
Connecting to kafka service from inside kafka container
root@falkonry-kafka-56017906-9qlg3:/# curl -v http://falkonry-kafka:9092
* Rebuilt URL to: http://falkonry-kafka:9092/
* Hostname was NOT found in DNS cache
* Trying 10.99.232.10...
^C
Request never finishes.
Service and deployment
Phaguns-MacBook-Pro:falkonryagent phagunbaya$ kubectl describe service falkonry-kafka
Name: falkonry-kafka
Namespace: default
Labels: function=kafka
party=falkonry
Selector: name=falkonry-kafka
Type: ClusterIP
IP: 10.99.232.10
Port: kafka 9092/TCP
Endpoints: 10.32.0.7:9092
Session Affinity: None
No events.
Phaguns-MacBook-Pro:falkonryagent phagunbaya$ kubectl describe deployment falkonry-kafka
Name: falkonry-kafka
Namespace: default
CreationTimestamp: Thu, 06 Apr 2017 16:58:36 -0700
Labels: function=kafka
party=falkonry
Selector: function=kafka,name=falkonry-kafka
Replicas: 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: falkonry-kafka-56017906 (1/1 replicas created)
No events.
iptables-save output
# Generated by iptables-save v1.4.21 on Fri Apr 7 12:16:32 2017
*nat
:PREROUTING ACCEPT [1:60]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [12:720]
:POSTROUTING ACCEPT [16:1038]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-4QD2LE2R2TODS2YV - [0:0]
:KUBE-SEP-6K3WNWFYOAH5UDZ7 - [0:0]
:KUBE-SEP-AR5TRSQMIM2F553H - [0:0]
:KUBE-SEP-BIZOCAOAPTCX4WBC - [0:0]
:KUBE-SEP-F7NTE7AMKDKNWUUF - [0:0]
:KUBE-SEP-FV6ZZ4EMBZMV4DQ5 - [0:0]
:KUBE-SEP-HVHMJPRJS2UA65HH - [0:0]
:KUBE-SEP-IBDVBYXSRD6MIAGE - [0:0]
:KUBE-SEP-KDTJFZVKN4ESIN24 - [0:0]
:KUBE-SEP-KNER6ASWBX763QL7 - [0:0]
:KUBE-SEP-NGQUCFCRE45KSL73 - [0:0]
:KUBE-SEP-NYKTVPUDBMHXGWAX - [0:0]
:KUBE-SEP-QLLLKZOFDP244LAS - [0:0]
:KUBE-SEP-RBQF4CU7COIZTWDJ - [0:0]
:KUBE-SEP-SX34LAYKH37CF5LT - [0:0]
:KUBE-SEP-SZZ7MOWKTWUFXIJT - [0:0]
:KUBE-SEP-TZPDA6OWOVPRIIUZ - [0:0]
:KUBE-SEP-UJJNLSZU6HL4F5UO - [0:0]
:KUBE-SEP-W4RNB3VXXTJ3LGHB - [0:0]
:KUBE-SEP-YYIR7TZA6ZBQSUSF - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-BL55CP3MKKB53NTC - [0:0]
:KUBE-SVC-BV4E552EX2CNKPCU - [0:0]
:KUBE-SVC-BYB5G3MHEBYVN43P - [0:0]
:KUBE-SVC-C64CQIO6Z225CXIH - [0:0]
:KUBE-SVC-CAVFOYOJQPPKKFSK - [0:0]
:KUBE-SVC-DM7TKUYSW7TW345O - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-NTZIAVXWXJCS7DKZ - [0:0]
:KUBE-SVC-PJO6V2NNIUDO2DKL - [0:0]
:KUBE-SVC-QIJ4ARI55YRJ76JG - [0:0]
:KUBE-SVC-QQGUGJWMO5HSN6XL - [0:0]
:KUBE-SVC-RVQUD6RAXHQPQF3I - [0:0]
:KUBE-SVC-SZGELJVIQ5IRMA57 - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-U6PKKNLWPXOUUWIP - [0:0]
:KUBE-SVC-XGPIXF43F4GLZBG7 - [0:0]
:KUBE-SVC-Y4IVC7EWPWRMUFRE - [0:0]
:WEAVE - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.50.0/24 ! -o docker0 -j MASQUERADE
-A POSTROUTING -j WEAVE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/falkonry-merlin:merlin-web" -m tcp --dport 30061 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/falkonry-merlin:merlin-web" -m tcp --dport 30061 -j KUBE-SVC-SZGELJVIQ5IRMA57
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-4QD2LE2R2TODS2YV -s 10.44.0.6/32 -m comment --comment "default/falkonry-spark-master:rest" -j KUBE-MARK-MASQ
-A KUBE-SEP-4QD2LE2R2TODS2YV -p tcp -m comment --comment "default/falkonry-spark-master:rest" -m tcp -j DNAT --to-destination 10.44.0.6:6066
-A KUBE-SEP-6K3WNWFYOAH5UDZ7 -s 10.32.0.4/32 -m comment --comment "default/falkonry-kafka:kafka" -j KUBE-MARK-MASQ
-A KUBE-SEP-6K3WNWFYOAH5UDZ7 -p tcp -m comment --comment "default/falkonry-kafka:kafka" -m tcp -j DNAT --to-destination 10.32.0.4:9092
-A KUBE-SEP-AR5TRSQMIM2F553H -s 10.24.10.4/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-AR5TRSQMIM2F553H -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-AR5TRSQMIM2F553H --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.24.10.4:6443
-A KUBE-SEP-BIZOCAOAPTCX4WBC -s 10.44.0.3/32 -m comment --comment "default/falkonry-merlin:merlin-web" -j KUBE-MARK-MASQ
-A KUBE-SEP-BIZOCAOAPTCX4WBC -p tcp -m comment --comment "default/falkonry-merlin:merlin-web" -m tcp -j DNAT --to-destination 10.44.0.3:8080
-A KUBE-SEP-F7NTE7AMKDKNWUUF -s 10.42.0.3/32 -m comment --comment "default/falkonry-riactor:riactor-http" -j KUBE-MARK-MASQ
-A KUBE-SEP-F7NTE7AMKDKNWUUF -p tcp -m comment --comment "default/falkonry-riactor:riactor-http" -m tcp -j DNAT --to-destination 10.42.0.3:8000
-A KUBE-SEP-FV6ZZ4EMBZMV4DQ5 -s 10.32.0.10/32 -m comment --comment "default/falkonry-redis:redis-cli" -j KUBE-MARK-MASQ
-A KUBE-SEP-FV6ZZ4EMBZMV4DQ5 -p tcp -m comment --comment "default/falkonry-redis:redis-cli" -m tcp -j DNAT --to-destination 10.32.0.10:6379
-A KUBE-SEP-HVHMJPRJS2UA65HH -s 10.32.0.7/32 -m comment --comment "default/falkonry-hadoop:namenode-ui" -j KUBE-MARK-MASQ
-A KUBE-SEP-HVHMJPRJS2UA65HH -p tcp -m comment --comment "default/falkonry-hadoop:namenode-ui" -m tcp -j DNAT --to-destination 10.32.0.7:50070
-A KUBE-SEP-IBDVBYXSRD6MIAGE -s 10.44.0.5/32 -m comment --comment "default/falkonry-riactor:riactor-http" -j KUBE-MARK-MASQ
-A KUBE-SEP-IBDVBYXSRD6MIAGE -p tcp -m comment --comment "default/falkonry-riactor:riactor-http" -m tcp -j DNAT --to-destination 10.44.0.5:8000
-A KUBE-SEP-KDTJFZVKN4ESIN24 -s 10.32.0.7/32 -m comment --comment "default/falkonry-hadoop:datanode" -j KUBE-MARK-MASQ
-A KUBE-SEP-KDTJFZVKN4ESIN24 -p tcp -m comment --comment "default/falkonry-hadoop:datanode" -m tcp -j DNAT --to-destination 10.32.0.7:50010
-A KUBE-SEP-KNER6ASWBX763QL7 -s 10.32.0.7/32 -m comment --comment "default/falkonry-hadoop:datanode-ui" -j KUBE-MARK-MASQ
-A KUBE-SEP-KNER6ASWBX763QL7 -p tcp -m comment --comment "default/falkonry-hadoop:datanode-ui" -m tcp -j DNAT --to-destination 10.32.0.7:50075
-A KUBE-SEP-NGQUCFCRE45KSL73 -s 10.44.0.6/32 -m comment --comment "default/falkonry-spark-master:webui" -j KUBE-MARK-MASQ
-A KUBE-SEP-NGQUCFCRE45KSL73 -p tcp -m comment --comment "default/falkonry-spark-master:webui" -m tcp -j DNAT --to-destination 10.44.0.6:8080
-A KUBE-SEP-NYKTVPUDBMHXGWAX -s 10.44.0.6/32 -m comment --comment "default/falkonry-spark-master:akka" -j KUBE-MARK-MASQ
-A KUBE-SEP-NYKTVPUDBMHXGWAX -p tcp -m comment --comment "default/falkonry-spark-master:akka" -m tcp -j DNAT --to-destination 10.44.0.6:7077
-A KUBE-SEP-QLLLKZOFDP244LAS -s 10.42.0.1/32 -m comment --comment "default/falkonry-connector:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-QLLLKZOFDP244LAS -p tcp -m comment --comment "default/falkonry-connector:http" -m tcp -j DNAT --to-destination 10.42.0.1:8001
-A KUBE-SEP-RBQF4CU7COIZTWDJ -s 10.32.0.6/32 -m comment --comment "default/falkonry-zookeeper:zookeeper" -j KUBE-MARK-MASQ
-A KUBE-SEP-RBQF4CU7COIZTWDJ -p tcp -m comment --comment "default/falkonry-zookeeper:zookeeper" -m tcp -j DNAT --to-destination 10.32.0.6:2181
-A KUBE-SEP-SX34LAYKH37CF5LT -s 10.42.0.2/32 -m comment --comment "default/falkonry-merlin:merlin-web" -j KUBE-MARK-MASQ
-A KUBE-SEP-SX34LAYKH37CF5LT -p tcp -m comment --comment "default/falkonry-merlin:merlin-web" -m tcp -j DNAT --to-destination 10.42.0.2:8080
-A KUBE-SEP-SZZ7MOWKTWUFXIJT -s 10.32.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-SZZ7MOWKTWUFXIJT -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.32.0.2:53
-A KUBE-SEP-TZPDA6OWOVPRIIUZ -s 10.32.0.3/32 -m comment --comment "default/falkonry-riactor:riactor-http" -j KUBE-MARK-MASQ
-A KUBE-SEP-TZPDA6OWOVPRIIUZ -p tcp -m comment --comment "default/falkonry-riactor:riactor-http" -m tcp -j DNAT --to-destination 10.32.0.3:8000
-A KUBE-SEP-UJJNLSZU6HL4F5UO -s 10.32.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-UJJNLSZU6HL4F5UO -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.32.0.2:53
-A KUBE-SEP-W4RNB3VXXTJ3LGHB -s 10.32.0.8/32 -m comment --comment "default/falkonry-mongo:mongo-http" -j KUBE-MARK-MASQ
-A KUBE-SEP-W4RNB3VXXTJ3LGHB -p tcp -m comment --comment "default/falkonry-mongo:mongo-http" -m tcp -j DNAT --to-destination 10.32.0.8:27017
-A KUBE-SEP-YYIR7TZA6ZBQSUSF -s 10.32.0.7/32 -m comment --comment "default/falkonry-hadoop:namenode" -j KUBE-MARK-MASQ
-A KUBE-SEP-YYIR7TZA6ZBQSUSF -p tcp -m comment --comment "default/falkonry-hadoop:namenode" -m tcp -j DNAT --to-destination 10.32.0.7:8020
-A KUBE-SERVICES -d 10.103.204.121/32 -p tcp -m comment --comment "default/falkonry-spark-master:akka cluster IP" -m tcp --dport 7077 -j KUBE-SVC-CAVFOYOJQPPKKFSK
-A KUBE-SERVICES -d 10.111.87.193/32 -p tcp -m comment --comment "default/falkonryagent:agent-web cluster IP" -m tcp --dport 9090 -j KUBE-SVC-QQGUGJWMO5HSN6XL
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.107.140.112/32 -p tcp -m comment --comment "default/falkonry-zookeeper:zookeeper cluster IP" -m tcp --dport 2181 -j KUBE-SVC-BYB5G3MHEBYVN43P
-A KUBE-SERVICES -d 10.106.78.154/32 -p tcp -m comment --comment "default/falkonry-hadoop:datanode cluster IP" -m tcp --dport 50010 -j KUBE-SVC-NTZIAVXWXJCS7DKZ
-A KUBE-SERVICES -d 10.106.78.154/32 -p tcp -m comment --comment "default/falkonry-hadoop:datanode-ui cluster IP" -m tcp --dport 50075 -j KUBE-SVC-BL55CP3MKKB53NTC
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.111.174.212/32 -p tcp -m comment --comment "default/falkonry-merlin:merlin-web cluster IP" -m tcp --dport 8080 -j KUBE-SVC-SZGELJVIQ5IRMA57
-A KUBE-SERVICES -d 10.103.204.121/32 -p tcp -m comment --comment "default/falkonry-spark-master:rest cluster IP" -m tcp --dport 6066 -j KUBE-SVC-DM7TKUYSW7TW345O
-A KUBE-SERVICES -d 10.103.204.121/32 -p tcp -m comment --comment "default/falkonry-spark-master:webui cluster IP" -m tcp --dport 8080 -j KUBE-SVC-QIJ4ARI55YRJ76JG
-A KUBE-SERVICES -d 10.106.78.154/32 -p tcp -m comment --comment "default/falkonry-hadoop:namenode cluster IP" -m tcp --dport 9000 -j KUBE-SVC-BV4E552EX2CNKPCU
-A KUBE-SERVICES -d 10.106.78.154/32 -p tcp -m comment --comment "default/falkonry-hadoop:namenode-ui cluster IP" -m tcp --dport 50070 -j KUBE-SVC-U6PKKNLWPXOUUWIP
-A KUBE-SERVICES -d 10.98.38.82/32 -p tcp -m comment --comment "default/falkonry-mongo:mongo-http cluster IP" -m tcp --dport 27017 -j KUBE-SVC-Y4IVC7EWPWRMUFRE
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.96.90.91/32 -p tcp -m comment --comment "default/falkonry-redis:redis-cli cluster IP" -m tcp --dport 6379 -j KUBE-SVC-PJO6V2NNIUDO2DKL
-A KUBE-SERVICES -d 10.99.232.10/32 -p tcp -m comment --comment "default/falkonry-kafka:kafka cluster IP" -m tcp --dport 9092 -j KUBE-SVC-XGPIXF43F4GLZBG7
-A KUBE-SERVICES -d 10.100.203.65/32 -p tcp -m comment --comment "default/falkonry-riactor:riactor-http cluster IP" -m tcp --dport 8000 -j KUBE-SVC-C64CQIO6Z225CXIH
-A KUBE-SERVICES -d 10.110.120.177/32 -p tcp -m comment --comment "default/falkonry-connector:http cluster IP" -m tcp --dport 8001 -j KUBE-SVC-RVQUD6RAXHQPQF3I
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-BL55CP3MKKB53NTC -m comment --comment "default/falkonry-hadoop:datanode-ui" -j KUBE-SEP-KNER6ASWBX763QL7
-A KUBE-SVC-BV4E552EX2CNKPCU -m comment --comment "default/falkonry-hadoop:namenode" -j KUBE-SEP-YYIR7TZA6ZBQSUSF
-A KUBE-SVC-BYB5G3MHEBYVN43P -m comment --comment "default/falkonry-zookeeper:zookeeper" -j KUBE-SEP-RBQF4CU7COIZTWDJ
-A KUBE-SVC-C64CQIO6Z225CXIH -m comment --comment "default/falkonry-riactor:riactor-http" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-TZPDA6OWOVPRIIUZ
-A KUBE-SVC-C64CQIO6Z225CXIH -m comment --comment "default/falkonry-riactor:riactor-http" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-F7NTE7AMKDKNWUUF
-A KUBE-SVC-C64CQIO6Z225CXIH -m comment --comment "default/falkonry-riactor:riactor-http" -j KUBE-SEP-IBDVBYXSRD6MIAGE
-A KUBE-SVC-CAVFOYOJQPPKKFSK -m comment --comment "default/falkonry-spark-master:akka" -j KUBE-SEP-NYKTVPUDBMHXGWAX
-A KUBE-SVC-DM7TKUYSW7TW345O -m comment --comment "default/falkonry-spark-master:rest" -j KUBE-SEP-4QD2LE2R2TODS2YV
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-UJJNLSZU6HL4F5UO
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-AR5TRSQMIM2F553H --mask 255.255.255.255 --rsource -j KUBE-SEP-AR5TRSQMIM2F553H
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-AR5TRSQMIM2F553H
-A KUBE-SVC-NTZIAVXWXJCS7DKZ -m comment --comment "default/falkonry-hadoop:datanode" -j KUBE-SEP-KDTJFZVKN4ESIN24
-A KUBE-SVC-PJO6V2NNIUDO2DKL -m comment --comment "default/falkonry-redis:redis-cli" -j KUBE-SEP-FV6ZZ4EMBZMV4DQ5
-A KUBE-SVC-QIJ4ARI55YRJ76JG -m comment --comment "default/falkonry-spark-master:webui" -j KUBE-SEP-NGQUCFCRE45KSL73
-A KUBE-SVC-RVQUD6RAXHQPQF3I -m comment --comment "default/falkonry-connector:http" -j KUBE-SEP-QLLLKZOFDP244LAS
-A KUBE-SVC-SZGELJVIQ5IRMA57 -m comment --comment "default/falkonry-merlin:merlin-web" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SX34LAYKH37CF5LT
-A KUBE-SVC-SZGELJVIQ5IRMA57 -m comment --comment "default/falkonry-merlin:merlin-web" -j KUBE-SEP-BIZOCAOAPTCX4WBC
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-SZZ7MOWKTWUFXIJT
-A KUBE-SVC-U6PKKNLWPXOUUWIP -m comment --comment "default/falkonry-hadoop:namenode-ui" -j KUBE-SEP-HVHMJPRJS2UA65HH
-A KUBE-SVC-XGPIXF43F4GLZBG7 -m comment --comment "default/falkonry-kafka:kafka" -j KUBE-SEP-6K3WNWFYOAH5UDZ7
-A KUBE-SVC-Y4IVC7EWPWRMUFRE -m comment --comment "default/falkonry-mongo:mongo-http" -j KUBE-SEP-W4RNB3VXXTJ3LGHB
-A WEAVE -s 10.32.0.0/12 -d 224.0.0.0/4 -j RETURN
-A WEAVE ! -s 10.32.0.0/12 -d 10.32.0.0/12 -j MASQUERADE
-A WEAVE -s 10.32.0.0/12 ! -d 10.32.0.0/12 -j MASQUERADE
COMMIT
# Completed on Fri Apr 7 12:16:32 2017
# Generated by iptables-save v1.4.21 on Fri Apr 7 12:16:32 2017
*filter
:INPUT ACCEPT [741:270665]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [727:337487]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
-A INPUT -j KUBE-FIREWALL
-A INPUT -d 172.17.50.1/32 -i docker0 -p tcp -m tcp --dport 6783 -j DROP
-A INPUT -d 172.17.50.1/32 -i docker0 -p udp -m udp --dport 6783 -j DROP
-A INPUT -d 172.17.50.1/32 -i docker0 -p udp -m udp --dport 6784 -j DROP
-A INPUT -i docker0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i docker0 -p tcp -m tcp --dport 53 -j ACCEPT
-A FORWARD -i docker0 -o weave -j DROP
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o weave -j WEAVE-NPC
-A FORWARD -o weave -m state --state NEW -j NFLOG --nflog-group 86
-A FORWARD -o weave -j DROP
-A FORWARD -i weave ! -o weave -j ACCEPT
-A FORWARD -o weave -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-SERVICES -d 10.111.87.193/32 -p tcp -m comment --comment "default/falkonryagent:agent-web has no endpoints" -m tcp --dport 9090 -j REJECT --reject-with icmp-port-unreachable
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -d 224.0.0.0/4 -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC-DEFAULT -m set --match-set weave-k?Z;25^M}|1s7P3|H9i;*;MhG dst -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-4vtqMI<kx/2]jD%_c0S%thO%V dst -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-iuZcey(5DeXbzgRFs8Szo]<#p dst -j ACCEPT
COMMIT
# Completed on Fri Apr 7 12:16:32 2017
Kube-proxy logs
I0406 19:42:35.453335 1 server.go:225] Using iptables Proxier.
W0406 19:42:35.559100 1 proxier.go:309] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0406 19:42:35.559155 1 server.go:249] Tearing down userspace rules.
I0406 19:42:35.711702 1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_max' to 524288
I0406 19:42:35.712557 1 conntrack.go:66] Setting conntrack hashsize to 131072
I0406 19:42:35.713879 1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0406 19:42:35.713949 1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
How did you set up Weave? There is a 1.6-specific configuration [1][2] that sets up a role and service account for running Weave on clusters with RBAC enabled.
[1] https://github.com/weaveworks/weave/blob/master/prog/weave-kube/weave-daemonset-k8s-1.6.yaml
[2] https://www.weave.works/weave-net-kubernetes-integration/
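At the time, applying that configuration was a one-liner (assuming kubectl already points at the cluster; the short URL redirects to the YAML in [1]):
$ kubectl apply -f https://git.io/weave-kube-1.6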

Can't reach Kubernetes service from outside of node when kube-proxy in iptables mode

I have a single-node (master+node) Kubernetes deployment running on CoreOS, with kube-proxy in iptables mode and flannel for container networking, without Calico.
kube-proxy.yaml
apiVersion: v1
kind: Pod
metadata:
name: kube-proxy
namespace: kube-system
spec:
hostNetwork: true
containers:
- name: kube-proxy
image: quay.io/coreos/hyperkube:v1.5.2_coreos.0
command:
- /hyperkube
- proxy
- --master=http://127.0.0.1:8080
- --hostname-override=10.0.0.144
- --proxy-mode=iptables
- --bind-address=0.0.0.0
- --cluster-cidr=10.1.0.0/16
- --masquerade-all=true
securityContext:
privileged: true
I've created a deployment, then exposed that deployment using a Service of type NodePort.
user@node ~ $ kubectl run hostnames --image=gcr.io/google_containers/serve_hostname \
--labels=app=hostnames \
--port=9376 \
--replicas=3
user@node ~ $ kubectl expose deployment hostnames \
--port=80 \
--target-port=9376 \
--type=NodePort
user@node ~ $ kubectl get svc hostnames
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hostnames 10.1.50.64 <nodes> 80:30177/TCP 6m
I can curl successfully from the node (loopback and eth0 IP):
user@node ~ $ curl localhost:30177
hostnames-3799501552-xfq08
user@node ~ $ curl 10.0.0.144:30177
hostnames-3799501552-xfq08
However, I cannot curl from outside the node. I've tried from a client machine outside the node's network (with correct firewall rules) and from a machine inside the node's private network with the firewall completely open between the two machines, with no luck.
I'm fairly confident that it's an iptables/kube-proxy issue, because if I modify the kube-proxy config from --proxy-mode=iptables to --proxy-mode=userspace I can access from both external machines. Also, if I bypass kubernetes and run a docker container I have no problems with external access.
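Before digging into the rules themselves, a few quick checks can narrow down where packets die (a diagnostic sketch; the interface and port are taken from the output above):
$ sudo tcpdump -ni eth0 tcp port 30177           # do packets reach the node at all?
$ sudo iptables -t nat -L KUBE-NODEPORTS -v -n   # are they hitting the NodePort DNAT chain?
$ sudo iptables -L INPUT -v -n                   # is the filter table dropping them?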
Here are the current iptables rules:
user@node ~ $ iptables-save
# Generated by iptables-save v1.4.21 on Mon Feb 6 04:46:02 2017
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-4IIYBTTZSUAZV53G - [0:0]
:KUBE-SEP-4TMFMGA4TTORJ5E4 - [0:0]
:KUBE-SEP-DUUUKFKBBSQSAJB2 - [0:0]
:KUBE-SEP-XONOXX2F6J6VHAVB - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-NWV5X2332I4OT4T3 - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 10.1.0.0/16 -d 10.1.0.0/16 -j RETURN
-A POSTROUTING -s 10.1.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
-A POSTROUTING ! -s 10.1.0.0/16 -d 10.1.0.0/16 -j MASQUERADE
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/hostnames:" -m tcp --dport 30177 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/hostnames:" -m tcp --dport 30177 -j KUBE-SVC-NWV5X2332I4OT4T3
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-4IIYBTTZSUAZV53G -s 10.0.0.144/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-4IIYBTTZSUAZV53G -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-4IIYBTTZSUAZV53G --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.0.0.144:6443
-A KUBE-SEP-4TMFMGA4TTORJ5E4 -s 10.1.34.2/32 -m comment --comment "default/hostnames:" -j KUBE-MARK-MASQ
-A KUBE-SEP-4TMFMGA4TTORJ5E4 -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.1.34.2:9376
-A KUBE-SEP-DUUUKFKBBSQSAJB2 -s 10.1.34.3/32 -m comment --comment "default/hostnames:" -j KUBE-MARK-MASQ
-A KUBE-SEP-DUUUKFKBBSQSAJB2 -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.1.34.3:9376
-A KUBE-SEP-XONOXX2F6J6VHAVB -s 10.1.34.4/32 -m comment --comment "default/hostnames:" -j KUBE-MARK-MASQ
-A KUBE-SEP-XONOXX2F6J6VHAVB -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.1.34.4:9376
-A KUBE-SERVICES -d 10.1.50.64/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES ! -s 10.1.0.0/16 -d 10.1.50.64/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.1.50.64/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUBE-SVC-NWV5X2332I4OT4T3
-A KUBE-SERVICES -d 10.1.50.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES ! -s 10.1.0.0/16 -d 10.1.50.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.1.50.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-4IIYBTTZSUAZV53G --mask 255.255.255.255 --rsource -j KUBE-SEP-4IIYBTTZSUAZV53G
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-4IIYBTTZSUAZV53G
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-4TMFMGA4TTORJ5E4
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-DUUUKFKBBSQSAJB2
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -j KUBE-SEP-XONOXX2F6J6VHAVB
COMMIT
# Completed on Mon Feb 6 04:46:02 2017
# Generated by iptables-save v1.4.21 on Mon Feb 6 04:46:02 2017
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [67:14455]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -j KUBE-FIREWALL
-A INPUT -i lo -j ACCEPT
-A INPUT -i eth0 -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 3 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 11 -j ACCEPT
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
COMMIT
# Completed on Mon Feb 6 04:46:02 2017
I'm not sure what to look for in the rules... Can someone with more experience than me suggest some troubleshooting steps?
Fixed it. The problem was that I had some default iptables rules applied on startup, which must have overridden parts of the dynamic rule set created by kube-proxy.
The difference between working and non-working was as follows:
Working
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [67:14455]
...
-A INPUT -i lo -j ACCEPT
-A INPUT -i eth0 -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 3 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 11 -j ACCEPT
...
Not working
:INPUT ACCEPT [30:5876]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [25:5616]
...
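To keep the working ruleset across reboots, I persisted it where Container Linux's iptables-restore unit picks it up (a sketch, assuming stock CoreOS paths; adjust if your image differs):
$ sudo iptables-save | sudo tee /var/lib/iptables/rules-save
$ sudo systemctl enable iptables-restore.service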