Kubernetes service communication issue - kube-dns - kubernetes

I have two pods mapped to two services, up and running on VirtualBox VMs on my laptop, and kube-dns is working. One pod is a web service and the other is MongoDB.
The spec of the webapp pod is below:
spec:
  containers:
  - resources:
      limits:
        cpu: 0.5
    .
    .
    name: wsemp
    ports:
    - containerPort: 8080
      # name: wsemp
    #command: ["java","-Dspring.data.mongodb.uri=mongodb://192.168.6.103:30061/microservices", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
    command: ["java","-Dspring.data.mongodb.uri=mongodb://mongoservice/microservices", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
The spec of the corresponding service:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: webappservice
  name: webappservice
spec:
  ports:
  - port: 8080
    nodePort: 30062
    targetPort: 8080
    protocol: TCP
  type: NodePort
  selector:
    name: webapp
MongoDB pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: mongodb
  labels:
    name: mongodb
spec:
  containers:
  - .
    .
    name: mongodb
    ports:
    - containerPort: 27017
MongoDB service spec:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongodb
  name: mongoservice
spec:
  ports:
  - port: 27017
    nodePort: 30061
    targetPort: 27017
    protocol: TCP
  type: NodePort
  selector:
    name: mongodb
UPDATE: the targetPort values in the services were set after the comment below.
Issue
When the webapp starts, it is not able to connect to the mongoservice port and logs this error:
Exception in monitor thread while connecting to server mongoservice:27017
com.mongodb.MongoSocketOpenException: Exception opening socket
at com.mongodb.connection.SocketStream.open(SocketStream.java:63) ~[mongodb-driver-core-3.2.2.jar!/:na]
at com.mongodb.connection.InternalStreamConnection.open(InternalStreamConnection.java:114) ~[mongodb-driver-core-3.2.2.jar!/:na]
at com.mongodb.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:128) ~[mongodb-driver-core-3.2.2.jar!/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:1.8.0_111]
describe svc
kubectl describe svc mongoservice
Name: mongoservice
Namespace: default
Labels: name=mongodb
Selector: name=mongodb
Type: NodePort
IP: 10.254.146.189
Port: <unset> 27017/TCP
NodePort: <unset> 30061/TCP
Endpoints: 172.17.99.2:27017
Session Affinity: None
No events.
kubectl describe svc webappservice
Name: webappservice
Namespace: default
Labels: name=webappservice
Selector: name=webapp
Type: NodePort
IP: 10.254.112.121
Port: <unset> 8080/TCP
NodePort: <unset> 30062/TCP
Endpoints: 172.17.99.3:8080
Session Affinity: None
No events.
Debugging
root@webapp:/# nslookup mongoservice
Server: 10.254.0.2
Address: 10.254.0.2#53
Non-authoritative answer:
Name: mongoservice.default.svc.cluster.local
Address: 10.254.146.189
root@webapp:/# curl 10.254.146.189:27017
curl: (7) Failed to connect to 10.254.146.189 port 27017: Connection refused
root@webapp:/# curl mongoservice:27017
curl: (7) Failed to connect to mongoservice port 27017: Connection refused
sudo iptables-save | grep webapp
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/webappservice:" -m tcp --dport 30062 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/webappservice:" -m tcp --dport 30062 -j KUBE-SVC-NQBDRRKQULANV7O3
-A KUBE-SEP-IE7EBTQCN7T6HXC4 -s 172.17.99.3/32 -m comment --comment "default/webappservice:" -j KUBE-MARK-MASQ
-A KUBE-SEP-IE7EBTQCN7T6HXC4 -p tcp -m comment --comment "default/webappservice:" -m tcp -j DNAT --to-destination 172.17.99.3:8080
-A KUBE-SERVICES -d 10.254.217.24/32 -p tcp -m comment --comment "default/webappservice: cluster IP" -m tcp --dport 8080 -j KUBE-SVC-NQBDRRKQULANV7O3
-A KUBE-SVC-NQBDRRKQULANV7O3 -m comment --comment "default/webappservice:" -j KUBE-SEP-IE7EBTQCN7T6HXC4
$ curl 10.254.217.24:8080
{"timestamp":1486678423757,"status":404,"error":"Not Found","message":"No message available","path":"/"}[osboxes#kube-node1 ~]$
sudo iptables-save | grep mongodb
[osboxes#osboxes ~]$ sudo iptables-save | grep mongo
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/mongoservice:" -m tcp --dport 30061 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/mongoservice:" -m tcp --dport 30061 -j KUBE-SVC-2HQWGC3WSIBZF7CN
-A KUBE-SEP-FVWOWAWXXVAVIQ5O -s 172.17.99.2/32 -m comment --comment "default/mongoservice:" -j KUBE-MARK-MASQ
-A KUBE-SEP-FVWOWAWXXVAVIQ5O -p tcp -m comment --comment "default/mongoservice:" -m tcp -j DNAT --to-destination 172.17.99.2:27017
-A KUBE-SERVICES -d 10.254.146.189/32 -p tcp -m comment --comment "default/mongoservice: cluster IP" -m tcp --dport 27017 -j KUBE-SVC-2HQWGC3WSIBZF7CN
-A KUBE-SVC-2HQWGC3WSIBZF7CN -m comment --comment "default/mongoservice:" -j KUBE-SEP-FVWOWAWXXVAVIQ5O
[osboxes@osboxes ~]$ sudo curl 10.254.146.189:8080
^C[osboxes@osboxes ~]$ sudo curl 10.254.146.189:27017
It looks like you are trying to access MongoDB over HTTP on the native driver port.
root@mongodb:/# netstat -an
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN
tcp 0 0 172.17.99.2:60724 151.101.128.204:80 TIME_WAIT
tcp 0 0 172.17.99.2:60728 151.101.128.204:80 TIME_WAIT
The mongodb container has no errors on startup.
I am trying to follow the steps in https://kubernetes.io/docs/user-guide/debugging-services/#iptables, but I am stuck at the part that says "try restarting kube-proxy with the -V flag set to 4", since I don't know how to do that.
I'm not a networking person, so I don't know what needs to be analyzed here or how. Any debugging tips would be of great help.
Thanks.

As a side note, keep in mind that curl performs HTTP requests by default, but port 27017 on the host you are trying to reach is not bound to an application that understands that protocol. Typically, what you would do in this scenario is use netcat:
nc -zv mongoservice 27017
This reports whether port 27017 on that host is open or not.
nc = netcat
-z scan for listening daemons without sending data
-v adds verbosity
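For example, from inside the webapp pod you can probe the Service name, the cluster IP and the pod IP separately to narrow down where the connection breaks (IPs taken from the describe output above; a sketch, assuming nc is available in the image):
nc -zv mongoservice 27017      # Service DNS name -> cluster IP -> kube-proxy rules -> pod
nc -zv 10.254.146.189 27017    # mongoservice cluster IP directly
nc -zv 172.17.99.2 27017       # mongodb pod IP directly, bypassing the Service
# If only the pod IP fails, the overlay network between nodes is the likely culprit;
# if the pod IP works but the Service name/IP do not, suspect the Service or kube-proxy.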
Regarding your MongoDB service file, remember to set the targetPort field. As explained in the Kubernetes docs regarding targetPort:
This specification will create a Service which targets TCP port 80 on any Pod with the run: my-nginx label, and expose it on an abstracted Service port (targetPort: is the port the container accepts traffic on, port: is the abstracted Service port, which can be any port other pods use to access the Service). View service API object to see the list of supported fields in service definition.
Therefore, just set it to 27017 for consistency.
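A minimal sketch of that fix against the mongoservice definition from the question (a strategic-merge patch; editing the YAML and re-applying it works just as well):
kubectl patch svc mongoservice -p '{"spec":{"ports":[{"port":27017,"targetPort":27017,"nodePort":30061,"protocol":"TCP"}]}}'
# Confirm the Service still points at the pod after the change:
kubectl describe svc mongoservice | grep -E 'Port|Endpoints'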
You should not run into issues after following this advice. Keep up the good work and learn as much as you can!

The iptables rules look OK, but it is not clear which network solution (flannel/calico) is used in your Kubernetes cluster. You may check whether you can access the kube-dns pod IP from your web pod.
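For example (the pod name and IP below are placeholders; this assumes nc is present in the webapp image):
# Find the kube-dns pod IP, then test reachability from inside the web pod:
kubectl get pods -n kube-system -o wide | grep dns
kubectl exec -it <webapp-pod-name> -- nc -zv <kube-dns-pod-ip> 53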

Thanks. I had got a clue on this: since I was using flannel, there was an issue with the communication between the pods on the flannel network.
In particular this part, FLANNEL_OPTIONS="--iface=eth1", as mentioned in http://jayunit100.blogspot.com/2015/06/flannel-and-vagrant-heads-up.html
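For anyone hitting the same thing on a VirtualBox/Vagrant-style setup, a sketch of the change (the file path and service names are common CentOS defaults, not something from this question): flannel has to bind to the host-only interface the nodes use to reach each other, not the NAT interface.
# /etc/sysconfig/flanneld  (typical location on CentOS installs)
FLANNEL_OPTIONS="--iface=eth1"
# Restart flannel and the components that pick up its subnet configuration:
sudo systemctl restart flanneld docker kubelet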
Thanks.

Related

Accessing via NodeIP:NodePort returns connection refused

I created my cluster using echo 'KUBELET_KUBEADM_ARGS="--network-plugin=kubenet --pod-cidr=10.20.0.0/24 --pod-infra-container-image=k8s.gcr.io/pause:3.6"' > /etc/default/kubelet. The setup runs in an Ubuntu VM using a NAT configuration.
There is one cluster partitioned into two namespaces, each with one deployment of an application instance (think one application per client). I'm trying to access the individual application instances via nodeIP:nodePort. I can access the application via ; however, this way I can't access the applications belonging to client A and client B separately.
If you're interested in the exact steps taken, see Kubernetes deployment not reachable via browser exposed with service
Below is the YAML file for the deployment in the eramba-1 namespace (for the second deployment, I just have namespace: eramba-2):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eramba-web
  namespace: eramba-1
  labels:
    app.kubernetes.io/name: eramba-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: eramba-web
  template:
    metadata:
      labels:
        app: eramba-web
    spec:
      containers:
      - name: eramba-web
        image: markz0r/eramba-app:c281
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_HOSTNAME
          value: eramba-mariadb
        - name: MYSQL_DATABASE
          value: erambadb
        - name: MYSQL_USER
          value: root
        - name: MYSQL_PASSWORD
          value: eramba
        - name: DATABASE_PREFIX
          value: ""
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: eramba-web
  namespace: eramba-1
  labels:
    app.kubernetes.io/name: eramba-web
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: eramba-web
  type: NodePort
...
Service output for eramba-1 namespace
root@osboxes:/home/osboxes/eramba# kubectl describe svc eramba-web -n eramba-1
Name: eramba-web
Namespace: eramba-1
Labels: app.kubernetes.io/name=eramba-web
Annotations: <none>
Selector: app.kubernetes.io/name=eramba-web
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.100.17.120
IPs: 10.100.17.120
Port: http 8080/TCP
TargetPort: 8080/TCP
NodePort: http 32370/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Service for eramba-2 output
root@osboxes:/home/osboxes/eramba# kubectl describe svc eramba-web2 -n eramba-2
Name: eramba-web2
Namespace: eramba-2
Labels: app.kubernetes.io/name=eramba-web2
Annotations: <none>
Selector: app.kubernetes.io/name=eramba-web2
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.98.240.243
IPs: 10.98.240.243
Port: http 8080/TCP
TargetPort: 8080/TCP
NodePort: http 32226/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I've verified that the nodePorts are listening:
root@osboxes:/home/osboxes/eramba# netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:32370 0.0.0.0:* LISTEN 3776/kube-proxy
tcp 0 0 127.0.0.1:10259 0.0.0.0:* LISTEN 3476/kube-scheduler
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 535/systemd-resolve
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 587/cupsd
tcp 0 0 0.0.0.0:32226 0.0.0.0:* LISTEN 3776/kube-proxy
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 2983/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 3776/kube-proxy
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 809/mysqld
tcp 0 0 172.16.42.135:2379 0.0.0.0:* LISTEN 3495/etcd
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 3495/etcd
tcp 0 0 172.16.42.135:2380 0.0.0.0:* LISTEN 3495/etcd
tcp 0 0 127.0.0.1:2381 0.0.0.0:* LISTEN 3495/etcd
tcp 0 0 127.0.0.1:39469 0.0.0.0:* LISTEN 2983/kubelet
tcp 0 0 127.0.0.1:10257 0.0.0.0:* LISTEN 3521/kube-controlle
tcp6 0 0 ::1:631 :::* LISTEN 587/cupsd
tcp6 0 0 :::33060 :::* LISTEN 809/mysqld
tcp6 0 0 :::10250 :::* LISTEN 2983/kubelet
tcp6 0 0 :::6443 :::* LISTEN 3485/kube-apiserver
tcp6 0 0 :::10256 :::* LISTEN 3776/kube-proxy
tcp6 0 0 :::80 :::* LISTEN 729/apache2
udp 0 0 0.0.0.0:35922 0.0.0.0:* 589/avahi-daemon: r
udp 0 0 0.0.0.0:5353 0.0.0.0:* 589/avahi-daemon: r
udp 0 0 127.0.0.53:53 0.0.0.0:* 535/systemd-resolve
udp 0 0 172.16.42.135:68 0.0.0.0:* 586/NetworkManager
udp 0 0 0.0.0.0:631 0.0.0.0:* 654/cups-browsed
udp6 0 0 :::5353 :::* 589/avahi-daemon: r
udp6 0 0 :::37750 :::* 589/avahi-daemon: r
Here's the Iptables output
root@osboxes:/home/osboxes/eramba# iptables --list-rules
-P INPUT ACCEPT
-P FORWARD DROP
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-N KUBE-EXTERNAL-SERVICES
-N KUBE-FIREWALL
-N KUBE-FORWARD
-N KUBE-KUBELET-CANARY
-N KUBE-NODEPORTS
-N KUBE-PROXY-CANARY
-N KUBE-SERVICES
-A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-EXTERNAL-SERVICES -p tcp -m comment --comment "eramba-2/eramba-web2:http has no endpoints" -m addrtype --dst-type LOCAL -m tcp --dport 32226 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-EXTERNAL-SERVICES -p tcp -m comment --comment "eramba-1/eramba-web:http has no endpoints" -m addrtype --dst-type LOCAL -m tcp --dport 32370 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-SERVICES -d 10.98.240.243/32 -p tcp -m comment --comment "eramba-2/eramba-web2:http has no endpoints" -m tcp --dport 8080 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.100.17.120/32 -p tcp -m comment --comment "eramba-1/eramba-web:http has no endpoints" -m tcp --dport 8080 -j REJECT --reject-with icmp-port-unreachable
I'm sure there are other ways to access the individual application instances that I'm unaware of, so please advise if there's a better way.
Endpoints: <none> is an indication that your Service is configured wrong; its selector doesn't match any of the Pods. If you look at the Service, it looks for
spec:
  selector:
    app.kubernetes.io/name: eramba-web
But if you look at the Deployment, it generates Pods with different labels:
spec:
  template:
    metadata:
      labels:
        app: eramba-web # not app.kubernetes.io/name: ...
I'd consistently use the app.kubernetes.io/name format everywhere. You will have to delete and recreate the Deployment to change its selector: value to match.
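A quick way to confirm the mismatch and verify the fix, using the names from the question:
# Compare the labels on the Pods with the Service's selector:
kubectl get pods -n eramba-1 --show-labels
kubectl describe svc eramba-web -n eramba-1 | grep -E 'Selector|Endpoints'
# Once the pod template labels and the Service selector agree (the Deployment has to be
# deleted and recreated, since spec.selector is immutable), the endpoint list fills in:
kubectl get endpoints eramba-web -n eramba-1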

How do GCP load balancers route traffic to GKE services?

I'm relatively new (< 1 year) to GCP, and I'm still in the process of mapping the various services onto my existing networking mental model.
One knowledge gap I'm struggling to fill is how HTTP requests are load balanced to services running in our GKE clusters.
On a test cluster, I created a service in front of pods that serve HTTP:
apiVersion: v1
kind: Service
metadata:
  name: contour
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: 8080
  - port: 443
    name: https
    protocol: TCP
    targetPort: 8443
  selector:
    app: contour
  type: LoadBalancer
The service is listening on node ports 30472 and 30816:
$ kubectl get svc contour
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
contour LoadBalancer 10.63.241.69 35.x.y.z 80:30472/TCP,443:30816/TCP 41m
A GCP network load balancer is automatically created for me. It has its own public IP at 35.x.y.z and is listening on ports 80-443.
Curling the load balancer IP works:
$ curl -q -v 35.x.y.z
* TCP_NODELAY set
* Connected to 35.x.y.z (35.x.y.z) port 80 (#0)
> GET / HTTP/1.1
> Host: 35.x.y.z
> User-Agent: curl/7.62.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< date: Mon, 07 Jan 2019 05:33:44 GMT
< server: envoy
< content-length: 0
<
If I ssh into the GKE node, I can see the kube-proxy is listening on the service nodePorts (30472 and 30816) and nothing has a socket listening on ports 80 or 443:
# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:20256 0.0.0.0:* LISTEN 1022/node-problem-d
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 1221/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 1369/kube-proxy
tcp 0 0 0.0.0.0:5355 0.0.0.0:* LISTEN 297/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 330/sshd
tcp6 0 0 :::30816 :::* LISTEN 1369/kube-proxy
tcp6 0 0 :::4194 :::* LISTEN 1221/kubelet
tcp6 0 0 :::30472 :::* LISTEN 1369/kube-proxy
tcp6 0 0 :::10250 :::* LISTEN 1221/kubelet
tcp6 0 0 :::5355 :::* LISTEN 297/systemd-resolve
tcp6 0 0 :::10255 :::* LISTEN 1221/kubelet
tcp6 0 0 :::10256 :::* LISTEN 1369/kube-proxy
Two questions:
Given nothing on the node is listening on ports 80 or 443, is the load balancer directing traffic to ports 30472 and 30816?
If the load balancer is accepting traffic on 80/443 and forwarding to 30472/30816, where can I see that configuration? Clicking around the load balancer screens I can't see any mention of ports 30472 and 30816.
I think I found the answer to my own question - can anyone confirm I'm on the right track?
The network load balancer redirects the traffic to a node in the cluster without modifying the packet - packets for port 80/443 still have port 80/443 when they reach the node.
There's nothing listening on ports 80/443 on the nodes. However kube-proxy has written iptables rules that match packets to the load balancer IP, and rewrite them with the appropriate ClusterIP and port:
You can see the iptables config on the node:
$ iptables-save | grep KUBE-SERVICES | grep loadbalancer
-A KUBE-SERVICES -d 35.x.y.z/32 -p tcp -m comment --comment "default/contour:http loadbalancer IP" -m tcp --dport 80 -j KUBE-FW-D53V3CDHSZT2BLQV
-A KUBE-SERVICES -d 35.x.y.z/32 -p tcp -m comment --comment "default/contour:https loadbalancer IP" -m tcp --dport 443 -j KUBE-FW-J3VGAQUVMYYL5VK6
$ iptables-save | grep KUBE-SEP-ZAA234GWNBHH7FD4
:KUBE-SEP-ZAA234GWNBHH7FD4 - [0:0]
-A KUBE-SEP-ZAA234GWNBHH7FD4 -s 10.60.0.30/32 -m comment --comment "default/contour:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZAA234GWNBHH7FD4 -p tcp -m comment --comment "default/contour:http" -m tcp -j DNAT --to-destination 10.60.0.30:8080
$ iptables-save | grep KUBE-SEP-CXQOVJCC5AE7U6UC
:KUBE-SEP-CXQOVJCC5AE7U6UC - [0:0]
-A KUBE-SEP-CXQOVJCC5AE7U6UC -s 10.60.0.30/32 -m comment --comment "default/contour:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-CXQOVJCC5AE7U6UC -p tcp -m comment --comment "default/contour:https" -m tcp -j DNAT --to-destination 10.60.0.30:8443
An interesting implication is that the nodePort is created but doesn't appear to be used. That matches this comment in the Kubernetes docs:
Google Compute Engine does not need to allocate a NodePort to make LoadBalancer work
It also explains why GKE creates an automatic firewall rule that allows traffic from 0.0.0.0/0 towards ports 80/443 on the nodes. The load balancer isn't rewriting the packets, so the firewall needs to allow traffic from anywhere to reach iptables on the node, and it's rewritten there.
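If you want to see that firewall rule, something like the following should list it (GKE-created rules are normally prefixed with k8s-, and the rule name below is a placeholder; neither is shown in the question):
gcloud compute firewall-rules list --filter="name~k8s"
# Inspect a specific rule to see the 0.0.0.0/0 source range and the allowed ports:
gcloud compute firewall-rules describe <rule-name>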
To understand LoadBalancer services, you first have to grok NodePort services. The way those work is that there is a proxy (usually actually implemented in iptables or ipvs now for perf, but that's an implementation detail) on every node in your cluster, and when you create a NodePort service it picks a port that is unused and sets every one of those proxies to forward packets to your Kubernetes pod. A LoadBalancer service builds on top of that, so on GCP/GKE it creates a GCLB forwarding rule mapping the requested port to a rotation of all those node-level proxies. So the GCLB listens on port 80, which proxies to some random port on a random node, which proxies to the internal port on your pod.
The process is a bit more customizable than that, but those are the basic defaults.
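To make the "where can I see that configuration" part concrete: the port mapping lives on the GCP side, in the forwarding rule and its target pool, rather than in any Kubernetes object. A sketch of where to look (resource names are placeholders):
# The forwarding rule created for the Service (the 35.x.y.z address, ports 80-443):
gcloud compute forwarding-rules list
# Its target pool is simply the list of cluster nodes that receive the unmodified packets:
gcloud compute target-pools describe <target-pool-name> --region <region>
# On any node, the actual port translation is done by the kube-proxy iptables rules:
sudo iptables-save | grep -E 'contour|KUBE-FW'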

Iptables Add DNAT rules to forward request on an IP:port to a container port

I have a kubernetes cluster which has 2 interfaces:
eth0: 10.10.10.100 (internal)
eth1: 20.20.20.100 (External)
There are a few pods running in the cluster with flannel networking.
POD1: 172.16.54.4 (nginx service)
I want to access 20.20.20.100:80 from another host which is connected to the above k8s cluster, so that I can reach the nginx POD.
I have enabled IP forwarding and also added a DNAT rule as follows:
iptables -t nat -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.16.54.4:80
After this, when I try to curl 20.20.20.100, I get
Failed to connect to 10.10.65.161 port 80: Connection refused
How do I get this working?
You can try
iptables -t nat -A PREROUTING -p tcp -d 20.20.20.100 --dport 80 -j DNAT --to-destination 172.16.54.4:80
But I don't recommend that you manage iptables by yourself; it's painful to maintain the rules...
You can use hostPort in Kubernetes. You can use kubenet as the network plugin, since the cni plugin does not support hostPort.
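A minimal sketch of the hostPort approach (the pod name and image tag are illustrative, not from the question); with hostPort set, the pod becomes reachable on the node's own address, e.g. 20.20.20.100:80, without hand-written DNAT rules:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-hostport        # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx:1.21         # illustrative image tag
    ports:
    - containerPort: 80
      hostPort: 80            # bind the pod to port 80 on the node's IP
EOF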
Why not use the NodePort type? I think it is a better way to access a service via the host IP. Please try iptables -nvL -t nat and show me the details.

How to allow incoming connection on a particular port from specific IP

I am running MongoDB in a Docker container with port 27017 exposed to the host to allow remote incoming connections. I want to block incoming connections on this port except from a particular IP. I tried iptables but it is not working, maybe because of the Docker service, for which the iptables commands need to be modified.
However, I used the following commands:
myserver>iptables -I INPUT -p tcp -s 10.10.4.232 --dport 27017 -j ACCEPT
myserver>iptables -I INPUT -p tcp -s 0.0.0.0/0 --dport 27017 -j DROP
myserver>service iptables save
Then I tried the following to check:
mylocal>telnet myserver 27017
It connects, so iptables is not working.
How do I do it?
I am using CentOS 6.8 and running mongodb 10 in a Docker container.
First, allow the source IP you wish to connect from:
iptables -A INPUT -p tcp --dport 27017 -s 10.10.4.232 -j ACCEPT
Then DROP all the rest:
iptables -A INPUT -p tcp --dport 27017 -j DROP
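One caveat, since the question suspects Docker is interfering: traffic to a Docker-published port is DNAT'ed before it ever reaches the INPUT chain, so on such setups the filtering usually has to happen in the forwarding path instead. A sketch (the DOCKER-USER chain exists on newer Docker releases; on older ones, insert the rules into FORWARD ahead of the DOCKER chain):
# -I prepends, so add the DROP first and the ACCEPT second; the ACCEPT ends up on top.
iptables -I DOCKER-USER -p tcp --dport 27017 -j DROP
iptables -I DOCKER-USER -p tcp --dport 27017 -s 10.10.4.232 -j ACCEPT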

Redirect port 443 (https) to IP using iptables

I tried for some hours to do this simple job, but it is not as simple as you might think.
I wanted to redirect every request for ports 443 and 80 to a webserver, in my example http://127.0.0.1:80.
Port 80 worked without any problems, but port 443 took me a lot of time...
I guess you've already tried to run the following command:
iptables -t nat -A OUTPUT -p tcp -m tcp --dport 443 -j DNAT --to-destination 127.0.0.1:80
But this is wrong, because port 443 cannot be redirected to a port other than 443.
The solution is:
Use the following command:
iptables -t nat -A OUTPUT -p tcp --dport 443 -j DNAT --to-destination 127.0.0.1:443
Then enable HTTPS for Apache.
If you are using CentOS, use this tutorial: http://wiki.centos.org/HowTos/Https
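If you want to confirm the rules are in place and actually matching traffic, the packet counters in the NAT table are a quick check (the rules above were added to the OUTPUT chain, so that is the chain to inspect):
iptables -t nat -vnL OUTPUT --line-numbers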
Good luck.