How does the Istio Envoy proxy use iptables on macOS?

As far as I know, macOS doesn't have iptables, but it has pf (Packet Filter), which is similar to iptables. Yet when I look at the istio-init container's log, I see that the Istio Envoy proxy is running commands to initialize the nat table of iptables.
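(For comparison, a TCP redirect in pf would look roughly like this hand-written sketch of /etc/pf.conf syntax; the port numbers are illustrative and this is not anything Istio actually generates:)
# redirect inbound TCP on the loopback interface to a local proxy port
rdr pass on lo0 inet proto tcp from any to any port 8080 -> 127.0.0.1 port 15001
# loaded with: sudo pfctl -f /etc/pf.conf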
The log looks like this:
2022-05-26T08:51:19.067180Z info Istio iptables environment:
ENVOY_PORT=
INBOUND_CAPTURE_PORT=
ISTIO_INBOUND_INTERCEPTION_MODE=
ISTIO_INBOUND_TPROXY_ROUTE_TABLE=
ISTIO_INBOUND_PORTS=
ISTIO_OUTBOUND_PORTS=
ISTIO_LOCAL_EXCLUDE_PORTS=
ISTIO_EXCLUDE_INTERFACES=
ISTIO_SERVICE_CIDR=
ISTIO_SERVICE_EXCLUDE_CIDR=
ISTIO_META_DNS_CAPTURE=
INVALID_DROP=
2022-05-26T08:51:19.069632Z info Istio iptables variables:
PROXY_PORT=15001
PROXY_INBOUND_CAPTURE_PORT=15006
PROXY_TUNNEL_PORT=15008
PROXY_UID=1337
PROXY_GID=1337
INBOUND_INTERCEPTION_MODE=REDIRECT
INBOUND_TPROXY_MARK=1337
INBOUND_TPROXY_ROUTE_TABLE=133
INBOUND_PORTS_INCLUDE=*
INBOUND_PORTS_EXCLUDE=15090,15021,15020
OUTBOUND_IP_RANGES_INCLUDE=*
OUTBOUND_IP_RANGES_EXCLUDE=
OUTBOUND_PORTS_INCLUDE=
OUTBOUND_PORTS_EXCLUDE=
KUBE_VIRT_INTERFACES=
ENABLE_INBOUND_IPV6=false
DNS_CAPTURE=false
DROP_INVALID=false
CAPTURE_ALL_DNS=false
DNS_SERVERS=[],[]
OUTPUT_PATH=
NETWORK_NAMESPACE=
CNI_MODE=false
EXCLUDE_INTERFACES=
2022-05-26T08:51:19.071962Z info Writing following contents to rules file: /tmp/iptables-rules-1653555079071333887.txt2006549362
* nat
-N ISTIO_INBOUND
-N ISTIO_REDIRECT
-N ISTIO_IN_REDIRECT
-N ISTIO_OUTPUT
-A ISTIO_INBOUND -p tcp --dport 15008 -j RETURN
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
-A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
-A PREROUTING -p tcp -j ISTIO_INBOUND
-A ISTIO_INBOUND -p tcp --dport 15090 -j RETURN
-A ISTIO_INBOUND -p tcp --dport 15021 -j RETURN
-A ISTIO_INBOUND -p tcp --dport 15020 -j RETURN
-A ISTIO_INBOUND -p tcp -j ISTIO_IN_REDIRECT
-A OUTPUT -p tcp -j ISTIO_OUTPUT
-A ISTIO_OUTPUT -o lo -s 127.0.0.6/32 -j RETURN
-A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -m owner --uid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -m owner --gid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
COMMIT
2022-05-26T08:51:19.072193Z info Running command: iptables-restore --noflush /tmp/iptables-rules-1653555079071333887.txt2006549362
2022-05-26T08:51:19.097324Z error Command error output: xtables parameter problem: iptables-restore: unable to initialize table 'nat'
Error occurred at line: 1
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
2022-05-26T08:51:19.097801Z error Failed to execute: iptables-restore --noflush /tmp/iptables-rules-1653555079071333887.txt2006549362, exit status 2
I have a few questions about this:
What OS is the Envoy proxy running on? I'm running k8s with minikube on an M1 Mac.
If it's running on macOS, how can it run a command like iptables-restore, even though macOS doesn't have iptables? (A quick check is sketched below.)
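(One quick way to see which OS the containers actually run on, assuming a default minikube setup; minikube runs a Linux VM or container even on macOS:)
# ask the minikube node itself
minikube ssh "uname -a"
# ask from inside a throwaway pod
kubectl run ostest --rm -it --restart=Never --image=busybox -- uname -a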
Have a good day.

Related

Service deployed with istio doesn't start (minikube, docker, mac m1)

I am new to Kubernetes, Istio and so on, so please be gentle :)
I have minikube running, I can deploy services and they run fine.
I have installed istio following this guide:
https://istio.io/latest/docs/setup/install/istioctl/
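(For reference, the install from that guide comes down to a single istioctl command; the demo profile here is my assumption:)
istioctl install --set profile=demo -y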
If I label the default namespace with
kubectl label namespace default istio-injection=enabled
the deployment fails. The service is green on the minikube dashboard, but the pod doesn't start up.
Ready: false
Started: false
Reason: PodInitializing
Here are a couple of screenshots from the dashboard (the images are not reproduced here). This is clearly related to Istio.
If I remove the Istio label from the namespace, the deployment works and the pod starts (the commands I use are below).
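(Checking and removing the label uses standard kubectl; the trailing "-" deletes a label:)
# show whether the namespace currently carries the injection label
kubectl get namespace default --show-labels
# remove the label again
kubectl label namespace default istio-injection-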
Any help would be greatly appreciated.
EDIT
Running
kubectl logs mypod-bd48d6bcc-6wcq2 -c istio-init
prints out
2022-08-24T14:07:15.227238Z info Istio iptables environment:
ENVOY_PORT=
INBOUND_CAPTURE_PORT=
ISTIO_INBOUND_INTERCEPTION_MODE=
ISTIO_INBOUND_TPROXY_ROUTE_TABLE=
ISTIO_INBOUND_PORTS=
ISTIO_OUTBOUND_PORTS=
ISTIO_LOCAL_EXCLUDE_PORTS=
ISTIO_EXCLUDE_INTERFACES=
ISTIO_SERVICE_CIDR=
ISTIO_SERVICE_EXCLUDE_CIDR=
ISTIO_META_DNS_CAPTURE=
INVALID_DROP=
2022-08-24T14:07:15.229791Z info Istio iptables variables:
PROXY_PORT=15001
PROXY_INBOUND_CAPTURE_PORT=15006
PROXY_TUNNEL_PORT=15008
PROXY_UID=1337
PROXY_GID=1337
INBOUND_INTERCEPTION_MODE=REDIRECT
INBOUND_TPROXY_MARK=1337
INBOUND_TPROXY_ROUTE_TABLE=133
INBOUND_PORTS_INCLUDE=*
INBOUND_PORTS_EXCLUDE=15090,15021,15020
OUTBOUND_OWNER_GROUPS_INCLUDE=*
OUTBOUND_OWNER_GROUPS_EXCLUDE=
OUTBOUND_IP_RANGES_INCLUDE=*
OUTBOUND_IP_RANGES_EXCLUDE=
OUTBOUND_PORTS_INCLUDE=
OUTBOUND_PORTS_EXCLUDE=
KUBE_VIRT_INTERFACES=
ENABLE_INBOUND_IPV6=false
DNS_CAPTURE=false
DROP_INVALID=false
CAPTURE_ALL_DNS=false
DNS_SERVERS=[],[]
OUTPUT_PATH=
NETWORK_NAMESPACE=
CNI_MODE=false
EXCLUDE_INTERFACES=
2022-08-24T14:07:15.232249Z info Writing following contents to rules file: /tmp/iptables-rules-1661350035231776045.txt1561657352
* nat
-N ISTIO_INBOUND
-N ISTIO_REDIRECT
-N ISTIO_IN_REDIRECT
-N ISTIO_OUTPUT
-A ISTIO_INBOUND -p tcp --dport 15008 -j RETURN
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
-A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
-A PREROUTING -p tcp -j ISTIO_INBOUND
-A ISTIO_INBOUND -p tcp --dport 15090 -j RETURN
-A ISTIO_INBOUND -p tcp --dport 15021 -j RETURN
-A ISTIO_INBOUND -p tcp --dport 15020 -j RETURN
-A ISTIO_INBOUND -p tcp -j ISTIO_IN_REDIRECT
-A OUTPUT -p tcp -j ISTIO_OUTPUT
-A ISTIO_OUTPUT -o lo -s 127.0.0.6/32 -j RETURN
-A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -m owner --uid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -m owner --gid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
COMMIT
2022-08-24T14:07:15.232504Z info Running command: iptables-restore --noflush /tmp/iptables-rules-1661350035231776045.txt1561657352
2022-08-24T14:07:15.256253Z error Command error output: xtables parameter problem: iptables-restore: unable to initialize table 'nat'
Error occurred at line: 1
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
2022-08-24T14:07:15.256845Z error Failed to execute: iptables-restore --noflush /tmp/iptables-rules-1661350035231776045.txt1561657352, exit status 2

Unable to connect to host after iptables configuration on CentOS 6

I had to allow port 9000 so SonarQube would be accessible, so I flushed iptables and added the configuration below, but since then the following has been happening:
no external URL connects
unable to connect via FTP with FileZilla (but NFtp works)
Below is the configuration:
# Generated by iptables-save v1.4.7 on Thu Feb 1 08:11:50 2018
*filter
:INPUT DROP [19:1566]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [9:928]
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 8080 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 9000 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 9090 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 3306 -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m tcp --dport 21 -m conntrack --ctstate NEW,ESTABLISHED -m comment --comment "Allow ftp connections on port 21" -j ACCEPT
-A INPUT -p tcp -m tcp --dport 20 -m conntrack --ctstate RELATED,ESTABLISHED -m comment --comment "Allow ftp connections on port 20" -j ACCEPT
-A INPUT -p tcp -m tcp --sport 1024:65535 --dport 1024:65535 -m conntrack --ctstate ESTABLISHED -m comment --comment "Allow passive inbound connections" -j ACCEPT
-A OUTPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
-A OUTPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 21 -m conntrack --ctstate NEW,ESTABLISHED -m comment --comment "Allow ftp connections on port 21" -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 20 -m conntrack --ctstate ESTABLISHED -m comment --comment "Allow ftp connections on port 20" -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 1024:65535 --dport 1024:65535 -m conntrack --ctstate RELATED,ESTABLISHED -m comment --comment "Allow passive inbound connections" -j ACCEPT
COMMIT
# Completed on Thu Feb 1 08:11:50 2018
Please help. This is CentOS 6.9.
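(One thing that may be worth checking for the FTP breakage: the port-20 and passive rules above rely on RELATED conntrack state, which for FTP requires the kernel FTP helper; a sketch for CentOS 6, with the module name assumed from the stock kernel:)
# load the FTP connection-tracking helper so RELATED data connections match
modprobe nf_conntrack_ftp
# to persist it, set IPTABLES_MODULES="nf_conntrack_ftp" in /etc/sysconfig/iptables-config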
I was finally able to configure it so that git, composer, and jenkins can all communicate with the outside world, and I can SSH via MinGW Git Bash. The configuration script is:
#!/bin/bash
# flush existing rules
iptables -F
# allow the needed service ports
iptables -A INPUT -p tcp --dport 21 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -j ACCEPT
iptables -A INPUT -p tcp --dport 9090 -j ACCEPT
# default-deny inbound and forwarded traffic, allow all outbound
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
# allow loopback and reply traffic
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# persist the rules and restart the services
/sbin/service iptables save
/sbin/service iptables restart
/sbin/service network restart
iptables -L -v

kube-proxy unhappy after setting IPALLOC_RANGE in weave configuration

I'm installing Kubernetes v1.5.6 + Weave using kubeadm-1.6.0-0.alpha.0.2074.a092d8e0f95f52.x86_64.rpm on CentOS 7.3. Since my host machine's network IP range is 10.41.30.xx, which overlaps with Weave's internal IP range, I configured Weave to use IPALLOC_RANGE 172.30.0.0/16.
After setup, I'm not able to connect to Kubernetes services. kube-proxy complains about connecting to the Kubernetes master:
E0728 18:04:47.201682 1 server.go:421] Can't get Node "ctdpc001571.ctd.khi.com", assuming iptables proxy, err: Get https://10.41.30.50:6443/api/v1/nodes/ctdpc001571.ctd.khi.co.jp: dial tcp 10.41.30.50:6443: getsockopt: connection refused
I0728 18:04:47.204522 1 server.go:215] Using iptables Proxier.
W0728 18:04:47.205022 1 server.go:468] Failed to retrieve node info: Get https://10.41.30.50:6443/api/v1/nodes/ctdpc001571.ctd.khi.com: dial tcp 10.41.30.50:6443: getsockopt: connection refused
W0728 18:04:47.205325 1 proxier.go:249] invalid nodeIP, initialize kube-proxy with 127.0.0.1 as nodeIP
W0728 18:04:47.205347 1 proxier.go:254] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0728 18:04:47.205394 1 server.go:227] Tearing down userspace rules.
I0728 18:04:47.238324 1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_max' to 1048576
I0728 18:04:47.239243 1 conntrack.go:66] Setting conntrack hashsize to 262144
I0728 18:04:47.242492 1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0728 18:04:47.242640 1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
E0728 18:04:47.260748 1 reflector.go:188] pkg/proxy/config/api.go:33: Failed to list *api.Endpoints: Get https://10.41.30.50:6443/api/v1/endpoints?resourceVersion=0: dial tcp 10.41.30.50:6443: getsockopt: connection refused
E0728 18:04:47.260931 1 reflector.go:188] pkg/proxy/config/api.go:30: Failed to list *api.Service: Get https://10.41.30.50:6443/api/v1/services?resourceVersion=0: dial tcp 10.41.30.50:6443: getsockopt: connection refused
E0728 18:04:47.265569 1 event.go:208] Unable to write event: 'Post https://10.41.30.50:6443/api/v1/namespaces/default/events: dial tcp 10.41.30.50:6443: getsockopt: connection refused' (may retry after sleeping)
E0728 18:04:48.262006 1 reflector.go:188] pkg/proxy/config/api.go:33: Failed to list *api.Endpoints: Get https://10.41.30.50:6443/api/v1/endpoints?resourceVersion=0: dial tcp 10.41.30.50:6443: getsockopt: connection refused
Steps I followed:
$ yum -y install \
yum-versionlock \
docker-1.12.6-11.el7.centos \
kubectl-1.5.4-0 \
kubelet-1.5.4-0 \
kubernetes-cni-0.3.0.1-0.07a8a2 \
https://storage.googleapis.com/falkonry-k8-installer/kubeadm-1.6.0-0.alpha.0.2074.a092d8e0f95f52.x86_64.rpm
$ yum versionlock add kubectl kubelet kubernetes-cni kubeadm
$ systemctl enable docker && systemctl start docker
$ systemctl enable kubelet && systemctl start kubelet
$ kubeadm init --use-kubernetes-version=v1.5.6
Set `IPALLOC_RANGE` to `172.30.0.0/16` in https://git.io/weave-kube (see the manifest excerpt after these steps)
$ kubectl apply -f weave-kube-config
$ kubectl run -i --tty busybox --image=busybox -- sh
$ nslookup kubernetes
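(For reference, the IPALLOC_RANGE change amounts to an env var on the weave container in that DaemonSet; a sketch of the relevant excerpt, with field placement assumed from the weave-kube manifest:)
containers:
  - name: weave
    env:
      - name: IPALLOC_RANGE
        value: "172.30.0.0/16"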
After this, I cannot connect to kubernetes or any of the other services.
iptables-save output:
# Generated by iptables-save v1.4.21 on Sat Jul 29 04:19:10 2017
*filter
:INPUT ACCEPT [1350:566634]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1344:579110]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
-A INPUT -j KUBE-FIREWALL
-A INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -i virbr0 -o virbr0 -j ACCEPT
-A FORWARD -o weave -j WEAVE-NPC
-A FORWARD -o weave -m state --state NEW -j NFLOG --nflog-group 86
-A FORWARD -o weave -j DROP
-A FORWARD -i weave ! -o weave -j ACCEPT
-A FORWARD -o weave -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A OUTPUT -o virbr0 -p udp -m udp --dport 68 -j ACCEPT
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -d 224.0.0.0/4 -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC -m set ! --match-set weave-local-pods dst -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-k?Z;25^M}|1s7P3|H9i;*;MhG dst -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-iuZcey(5DeXbzgRFs8Szo]+#p dst -j ACCEPT
COMMIT
# Completed on Sat Jul 29 04:19:10 2017
# Generated by iptables-save v1.4.21 on Sat Jul 29 04:19:10 2017
*nat
:PREROUTING ACCEPT [2:148]
:INPUT ACCEPT [2:148]
:OUTPUT ACCEPT [24:1452]
:POSTROUTING ACCEPT [24:1452]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-DYFWWILLIC32NPM5 - [0:0]
:KUBE-SEP-GX7UKBANGEPIDZWU - [0:0]
:KUBE-SEP-YXLAFMRH4ZX57Y3W - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:WEAVE - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 192.168.122.0/24 -d 224.0.0.0/24 -j RETURN
-A POSTROUTING -s 192.168.122.0/24 -d 255.255.255.255/32 -j RETURN
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p udp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE
-A POSTROUTING -j WEAVE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-DYFWWILLIC32NPM5 -s 172.30.0.5/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-DYFWWILLIC32NPM5 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.30.0.5:53
-A KUBE-SEP-GX7UKBANGEPIDZWU -s 172.30.0.5/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-GX7UKBANGEPIDZWU -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.30.0.5:53
-A KUBE-SEP-YXLAFMRH4ZX57Y3W -s 10.41.30.50/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-YXLAFMRH4ZX57Y3W -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-YXLAFMRH4ZX57Y3W --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.41.30.50:6443
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-DYFWWILLIC32NPM5
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-YXLAFMRH4ZX57Y3W --mask 255.255.255.255 --rsource -j KUBE-SEP-YXLAFMRH4ZX57Y3W
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-YXLAFMRH4ZX57Y3W
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-GX7UKBANGEPIDZWU
-A WEAVE -s 172.30.0.0/16 -d 224.0.0.0/4 -j RETURN
-A WEAVE ! -s 172.30.0.0/16 -d 172.30.0.0/16 -j MASQUERADE
-A WEAVE -s 172.30.0.0/16 ! -d 172.30.0.0/16 -j MASQUERADE
COMMIT
# Completed on Sat Jul 29 04:19:10 2017
# Generated by iptables-save v1.4.21 on Sat Jul 29 04:19:10 2017
*mangle
:PREROUTING ACCEPT [323631:155761830]
:INPUT ACCEPT [323586:155756413]
:FORWARD ACCEPT [26:1880]
:OUTPUT ACCEPT [317539:144236316]
:POSTROUTING ACCEPT [317582:144241373]
-A POSTROUTING -o virbr0 -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill
COMMIT
# Completed on Sat Jul 29 04:19:10 2017
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 100 0 0 eno1d1
10.41.30.0 0.0.0.0 255.255.255.0 U 100 0 0 eno1
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.30.0.0 0.0.0.0 255.255.0.0 U 0 0 0 weave
192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 eno1d1
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0

Can't reach Kubernetes service from outside of node when kube-proxy in iptables mode

I have a Single-Node (master+node) Kubernetes deployment running on CoreOS, with kube-proxy running in iptables mode, flannel for container networking, without Calico.
kube-proxy.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.5.2_coreos.0
    command:
    - /hyperkube
    - proxy
    - --master=http://127.0.0.1:8080
    - --hostname-override=10.0.0.144
    - --proxy-mode=iptables
    - --bind-address=0.0.0.0
    - --cluster-cidr=10.1.0.0/16
    - --masquerade-all=true
    securityContext:
      privileged: true
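(A hedged way to confirm which proxy mode actually took effect: the proxier announces itself in the kube-proxy log; the pod name below is a placeholder that depends on the setup:)
kubectl -n kube-system get pods | grep kube-proxy
kubectl -n kube-system logs <kube-proxy-pod> | grep -i proxier
# expect a line like: Using iptables Proxier.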
I've created a deployment, then exposed that deployment using a Service of type NodePort.
user@node ~ $ kubectl run hostnames --image=gcr.io/google_containers/serve_hostname \
--labels=app=hostnames \
--port=9376 \
--replicas=3
user@node ~ $ kubectl expose deployment hostnames \
--port=80 \
--target-port=9376 \
--type=NodePort
user@node ~ $ kubectl get svc hostnames
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hostnames 10.1.50.64 <nodes> 80:30177/TCP 6m
I can curl successfully from the node (loopback and eth0 IP):
user@node ~ $ curl localhost:30177
hostnames-3799501552-xfq08
user@node ~ $ curl 10.0.0.144:30177
hostnames-3799501552-xfq08
However, I cannot curl from outside the node. I've tried from both a client machine outside the node's network (with correct firewall rules), and a machine inside the node's private network, with the network's firewall completely open between the two machines, with no luck.
I'm fairly confident that it's an iptables/kube-proxy issue, because if I modify the kube-proxy config from --proxy-mode=iptables to --proxy-mode=userspace I can access from both external machines. Also, if I bypass kubernetes and run a docker container I have no problems with external access.
Here are the current iptables rules:
user@node ~ $ iptables-save
# Generated by iptables-save v1.4.21 on Mon Feb 6 04:46:02 2017
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-4IIYBTTZSUAZV53G - [0:0]
:KUBE-SEP-4TMFMGA4TTORJ5E4 - [0:0]
:KUBE-SEP-DUUUKFKBBSQSAJB2 - [0:0]
:KUBE-SEP-XONOXX2F6J6VHAVB - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-NWV5X2332I4OT4T3 - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 10.1.0.0/16 -d 10.1.0.0/16 -j RETURN
-A POSTROUTING -s 10.1.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
-A POSTROUTING ! -s 10.1.0.0/16 -d 10.1.0.0/16 -j MASQUERADE
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/hostnames:" -m tcp --dport 30177 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/hostnames:" -m tcp --dport 30177 -j KUBE-SVC-NWV5X2332I4OT4T3
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-4IIYBTTZSUAZV53G -s 10.0.0.144/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-4IIYBTTZSUAZV53G -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-4IIYBTTZSUAZV53G --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.0.0.144:6443
-A KUBE-SEP-4TMFMGA4TTORJ5E4 -s 10.1.34.2/32 -m comment --comment "default/hostnames:" -j KUBE-MARK-MASQ
-A KUBE-SEP-4TMFMGA4TTORJ5E4 -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.1.34.2:9376
-A KUBE-SEP-DUUUKFKBBSQSAJB2 -s 10.1.34.3/32 -m comment --comment "default/hostnames:" -j KUBE-MARK-MASQ
-A KUBE-SEP-DUUUKFKBBSQSAJB2 -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.1.34.3:9376
-A KUBE-SEP-XONOXX2F6J6VHAVB -s 10.1.34.4/32 -m comment --comment "default/hostnames:" -j KUBE-MARK-MASQ
-A KUBE-SEP-XONOXX2F6J6VHAVB -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.1.34.4:9376
-A KUBE-SERVICES -d 10.1.50.64/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES ! -s 10.1.0.0/16 -d 10.1.50.64/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.1.50.64/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUBE-SVC-NWV5X2332I4OT4T3
-A KUBE-SERVICES -d 10.1.50.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES ! -s 10.1.0.0/16 -d 10.1.50.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.1.50.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-4IIYBTTZSUAZV53G --mask 255.255.255.255 --rsource -j KUBE-SEP-4IIYBTTZSUAZV53G
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-4IIYBTTZSUAZV53G
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-4TMFMGA4TTORJ5E4
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-DUUUKFKBBSQSAJB2
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -j KUBE-SEP-XONOXX2F6J6VHAVB
COMMIT
# Completed on Mon Feb 6 04:46:02 2017
# Generated by iptables-save v1.4.21 on Mon Feb 6 04:46:02 2017
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [67:14455]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -j KUBE-FIREWALL
-A INPUT -i lo -j ACCEPT
-A INPUT -i eth0 -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 3 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 11 -j ACCEPT
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
COMMIT
# Completed on Mon Feb 6 04:46:02 2017
I'm not sure what to look for in these rules... Can someone with more experience than me suggest some troubleshooting steps?
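(One generic approach, assuming nothing more specialized is available: zero the packet counters, reproduce the failing curl from outside, and see which rules matched or where the counts stop growing:)
iptables -Z
# ... reproduce the failing request, then:
iptables -L INPUT -v -n --line-numbers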
Fixed it. The problem was that I had some default iptables rules applied on startup, which must have overridden parts of the dynamic rule set created by kube-proxy.
The difference between the working and non-working configurations was as follows:
Working
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [67:14455]
...
-A INPUT -i lo -j ACCEPT
-A INPUT -i eth0 -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 3 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 11 -j ACCEPT
...
Not working
:INPUT ACCEPT [30:5876]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [25:5616]
...

CentOS iptables on a Windows network

I've got a machine running CentOS, and it's connected to a Windows network. When I try to view the network I get the error "unable to connect share list from server". Once I turn iptables off, everything works fine. How can I fix this problem? My current iptables configuration is:
# Generated by iptables-save v1.4.7 on Sat Nov 16 11:06:35 2013
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [6:360]
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -s 192.168.2.0/24 -m state --state NEW -m tcp -p tcp --dport 445 -j ACCEPT
-A INPUT -s 192.168.2.0/24 -m state --state NEW -m udp -p udp --dport 445 -j ACCEPT
-A INPUT -s 192.168.2.0/24 -m state --state NEW -m udp -p udp --dport 137 -j ACCEPT
-A INPUT -s 192.168.2.0/24 -m state --state NEW -m udp -p udp --dport 138 -j ACCEPT
-A INPUT -s 192.168.2.0/24 -m state --state NEW -m tcp -p tcp --dport 139 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
# Completed on Sat Nov 16 11:06:35 2013
You can temporarily add a log rule for rejected traffic:
-A INPUT -j LOG --log-prefix "Rejected: "
before your:
-A INPUT -j REJECT --reject-with icmp-host-prohibited
And you'll see which traffic is rejected.
a] First, log the dropped packets, for example like this:
#----------
# Logs to messages.log
#----------
-A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: INPUT " --log-level 4
-A OUTPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: OUTPUT " --log-level 4
-A FORWARD -m limit --limit 5/min -j LOG --log-prefix "iptables denied: FORWARD " --log-level 4
b] Tail the dropped-packet entries from the messages log:
tomas@raspirarium:~ $ tail -f /var/log/messages | grep "iptables denied"
c] Write iptables rules for the denied entries you see in messages.log on the fly, as in the example below:
#----------
# Windows Samba
#----------
# incoming request
-A INPUT -i eth0 -p tcp -s 192.168.79.0/24 -m multiport --dports 139,445 -m state --state NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -o eth0 -p tcp -d 192.168.79.0/24 -m multiport --sports 1024:65535 -m state --state ESTABLISHED -j ACCEPT
# outgoing also handled
-A OUTPUT -o eth0 -p tcp -s 192.168.79.0/24 -m multiport --dports 139,445 -m state --state NEW,ESTABLISHED -j ACCEPT
-A INPUT -i eth0 -p tcp -s 192.168.79.0/24 -m multiport --sports 1024:65535 -m state --state ESTABLISHED -j ACCEPT
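(After reloading the rules, one way to confirm nothing Samba-related is still being denied, assuming the LOG prefix above; kernel LOG lines include DPT=<port>:)
tail -f /var/log/messages | grep -E 'iptables denied.*DPT=(139|445)'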