I'm relatively new (< 1 year) to GCP, and I'm still in the process of mapping the various services onto my existing networking mental model.
One knowledge gap I'm struggling to fill is how HTTP requests are load balanced to services running in our GKE clusters.
On a test cluster, I created a service in front of pods that serve HTTP:
apiVersion: v1
kind: Service
metadata:
  name: contour
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: 8080
  - port: 443
    name: https
    protocol: TCP
    targetPort: 8443
  selector:
    app: contour
  type: LoadBalancer
The service is listening on node ports 30472 and 30816:
$ kubectl get svc contour
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
contour LoadBalancer 10.63.241.69 35.x.y.z 80:30472/TCP,443:30816/TCP 41m
A GCP network load balancer is automatically created for me. It has its own public IP at 35.x.y.z, and is listening on ports 80-443:
Curling the load balancer IP works:
$ curl -q -v 35.x.y.z
* TCP_NODELAY set
* Connected to 35.x.y.z (35.x.y.z) port 80 (#0)
> GET / HTTP/1.1
> Host: 35.x.y.z
> User-Agent: curl/7.62.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< date: Mon, 07 Jan 2019 05:33:44 GMT
< server: envoy
< content-length: 0
<
If I SSH into a GKE node, I can see that kube-proxy is listening on the service nodePorts (30472 and 30816), and nothing has a socket listening on ports 80 or 443:
# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:20256 0.0.0.0:* LISTEN 1022/node-problem-d
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 1221/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 1369/kube-proxy
tcp 0 0 0.0.0.0:5355 0.0.0.0:* LISTEN 297/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 330/sshd
tcp6 0 0 :::30816 :::* LISTEN 1369/kube-proxy
tcp6 0 0 :::4194 :::* LISTEN 1221/kubelet
tcp6 0 0 :::30472 :::* LISTEN 1369/kube-proxy
tcp6 0 0 :::10250 :::* LISTEN 1221/kubelet
tcp6 0 0 :::5355 :::* LISTEN 297/systemd-resolve
tcp6 0 0 :::10255 :::* LISTEN 1221/kubelet
tcp6 0 0 :::10256 :::* LISTEN 1369/kube-proxy
Two questions:
Given nothing on the node is listening on ports 80 or 443, is the load balancer directing traffic to ports 30472 and 30816?
If the load balancer is accepting traffic on 80/443 and forwarding to 30472/30816, where can I see that configuration? Clicking around the load balancer screens I can't see any mention of ports 30472 and 30816.
I think I found the answer to my own question - can anyone confirm I'm on the right track?
The network load balancer redirects the traffic to a node in the cluster without modifying the packet - packets for port 80/443 still have port 80/443 when they reach the node.
There's nothing listening on ports 80/443 on the nodes. However, kube-proxy has written iptables rules that match packets destined for the load balancer IP and rewrite them to the appropriate backend pod IP and port:
You can see the iptables config on the node:
$ iptables-save | grep KUBE-SERVICES | grep loadbalancer
-A KUBE-SERVICES -d 35.x.y.z/32 -p tcp -m comment --comment "default/contour:http loadbalancer IP" -m tcp --dport 80 -j KUBE-FW-D53V3CDHSZT2BLQV
-A KUBE-SERVICES -d 35.x.y.z/32 -p tcp -m comment --comment "default/contour:https loadbalancer IP" -m tcp --dport 443 -j KUBE-FW-J3VGAQUVMYYL5VK6
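To trace the whole path (a sketch; the chain names below are the ones from my output above and will differ on other clusters), the KUBE-FW chain jumps to a KUBE-SVC chain, which load-balances across one KUBE-SEP chain per backend pod:
$ iptables-save | grep KUBE-FW-D53V3CDHSZT2BLQV
$ iptables-save | grep 'KUBE-SVC-.*contour:http'
The KUBE-SEP chains are where the actual DNAT to the pod happens: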
$ iptables-save | grep KUBE-SEP-ZAA234GWNBHH7FD4
:KUBE-SEP-ZAA234GWNBHH7FD4 - [0:0]
-A KUBE-SEP-ZAA234GWNBHH7FD4 -s 10.60.0.30/32 -m comment --comment "default/contour:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZAA234GWNBHH7FD4 -p tcp -m comment --comment "default/contour:http" -m tcp -j DNAT --to-destination 10.60.0.30:8080
$ iptables-save | grep KUBE-SEP-CXQOVJCC5AE7U6UC
:KUBE-SEP-CXQOVJCC5AE7U6UC - [0:0]
-A KUBE-SEP-CXQOVJCC5AE7U6UC -s 10.60.0.30/32 -m comment --comment "default/contour:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-CXQOVJCC5AE7U6UC -p tcp -m comment --comment "default/contour:https" -m tcp -j DNAT --to-destination 10.60.0.30:8443
An interesting implication is that the nodePort is created but doesn't appear to be used. That matches this comment in the Kubernetes docs:
Google Compute Engine does not need to allocate a NodePort to make LoadBalancer work
It also explains why GKE creates an automatic firewall rule that allows traffic from 0.0.0.0/0 towards ports 80/443 on the nodes. The load balancer isn't rewriting the packets, so the firewall needs to allow traffic from anywhere to reach iptables on the node, and it's rewritten there.
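If it helps, the GCP side of this is easier to see with gcloud than in the console screens (a sketch; the generated rule and pool names are whatever GKE created for your service, so list first, then describe):
gcloud compute forwarding-rules list
gcloud compute forwarding-rules describe <rule-name> --region <region>
gcloud compute target-pools describe <pool-name> --region <region>
gcloud compute firewall-rules list --filter="name~k8s"
The forwarding rule only shows the external IP, the port range (80-443) and the target pool of node instances; the nodePorts never appear anywhere, which is consistent with the packets arriving at the nodes unmodified.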
To understand LoadBalancer services, you first have to grok NodePort services. The way those work is that there is a proxy on every node in your cluster (usually implemented in iptables or IPVS these days for performance, but that's an implementation detail), and when you create a NodePort service it picks an unused port and configures every one of those proxies to forward packets to your Kubernetes pod. A LoadBalancer service builds on top of that: on GCP/GKE it creates a GCLB forwarding rule mapping the requested port to a rotation of all those node-level proxies. So the GCLB listens on port 80, which proxies to some random port on a random node, which proxies to the internal port on your pod.
The process is a bit more customizable than that, but that's the basic defaults.
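A quick way to see both halves of that from a node (a sketch reusing the contour service from the question; the exact rules will differ per cluster):
# the node-level ports the per-node proxies listen on
kubectl get svc contour -o jsonpath='{.spec.ports[*].nodePort}'
# the kube-proxy rules behind those ports
iptables-save | grep KUBE-NODEPORTS | grep contour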
Related
I created my cluster using echo 'KUBELET_KUBEADM_ARGS="--network-plugin=kubenet --pod-cidr=10.20.0.0/24 --pod-infra-container-image=k8s.gcr.io/pause:3.6"' > /etc/default/kubelet. The setup runs in an Ubuntu VM using NAT configuration.
There is one cluster partitioned into two namespaces, each with one deployment of an application instance (think one application per client). I'm trying to access the individual application instances via nodeIP:nodePort. I can access the application via ; however, this way I can't access the applications belonging to client A and client B separately.
If you're interested in the exact steps taken, see Kubernetes deployment not reachable via browser exposed with service
Below is the yaml file for deployment in eramba-1 namespace (so for the second deployment, I just have namespace = eramba-2)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eramba-web
  namespace: eramba-1
  labels:
    app.kubernetes.io/name: eramba-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: eramba-web
  template:
    metadata:
      labels:
        app: eramba-web
    spec:
      containers:
      - name: eramba-web
        image: markz0r/eramba-app:c281
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_HOSTNAME
          value: eramba-mariadb
        - name: MYSQL_DATABASE
          value: erambadb
        - name: MYSQL_USER
          value: root
        - name: MYSQL_PASSWORD
          value: eramba
        - name: DATABASE_PREFIX
          value: ""
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: eramba-web
  namespace: eramba-1
  labels:
    app.kubernetes.io/name: eramba-web
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: eramba-web
  type: NodePort
...
Service output for eramba-1 namespace
root@osboxes:/home/osboxes/eramba# kubectl describe svc eramba-web -n eramba-1
Name: eramba-web
Namespace: eramba-1
Labels: app.kubernetes.io/name=eramba-web
Annotations: <none>
Selector: app.kubernetes.io/name=eramba-web
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.100.17.120
IPs: 10.100.17.120
Port: http 8080/TCP
TargetPort: 8080/TCP
NodePort: http 32370/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Service for eramba-2 output
root@osboxes:/home/osboxes/eramba# kubectl describe svc eramba-web2 -n eramba-2
Name: eramba-web2
Namespace: eramba-2
Labels: app.kubernetes.io/name=eramba-web2
Annotations: <none>
Selector: app.kubernetes.io/name=eramba-web2
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.98.240.243
IPs: 10.98.240.243
Port: http 8080/TCP
TargetPort: 8080/TCP
NodePort: http 32226/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I've verified that the nodePorts are listening:
root@osboxes:/home/osboxes/eramba# netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:32370 0.0.0.0:* LISTEN 3776/kube-proxy
tcp 0 0 127.0.0.1:10259 0.0.0.0:* LISTEN 3476/kube-scheduler
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 535/systemd-resolve
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 587/cupsd
tcp 0 0 0.0.0.0:32226 0.0.0.0:* LISTEN 3776/kube-proxy
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 2983/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 3776/kube-proxy
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 809/mysqld
tcp 0 0 172.16.42.135:2379 0.0.0.0:* LISTEN 3495/etcd
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 3495/etcd
tcp 0 0 172.16.42.135:2380 0.0.0.0:* LISTEN 3495/etcd
tcp 0 0 127.0.0.1:2381 0.0.0.0:* LISTEN 3495/etcd
tcp 0 0 127.0.0.1:39469 0.0.0.0:* LISTEN 2983/kubelet
tcp 0 0 127.0.0.1:10257 0.0.0.0:* LISTEN 3521/kube-controlle
tcp6 0 0 ::1:631 :::* LISTEN 587/cupsd
tcp6 0 0 :::33060 :::* LISTEN 809/mysqld
tcp6 0 0 :::10250 :::* LISTEN 2983/kubelet
tcp6 0 0 :::6443 :::* LISTEN 3485/kube-apiserver
tcp6 0 0 :::10256 :::* LISTEN 3776/kube-proxy
tcp6 0 0 :::80 :::* LISTEN 729/apache2
udp 0 0 0.0.0.0:35922 0.0.0.0:* 589/avahi-daemon: r
udp 0 0 0.0.0.0:5353 0.0.0.0:* 589/avahi-daemon: r
udp 0 0 127.0.0.53:53 0.0.0.0:* 535/systemd-resolve
udp 0 0 172.16.42.135:68 0.0.0.0:* 586/NetworkManager
udp 0 0 0.0.0.0:631 0.0.0.0:* 654/cups-browsed
udp6 0 0 :::5353 :::* 589/avahi-daemon: r
udp6 0 0 :::37750 :::* 589/avahi-daemon: r
Here's the Iptables output
root@osboxes:/home/osboxes/eramba# iptables --list-rules
-P INPUT ACCEPT
-P FORWARD DROP
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-N KUBE-EXTERNAL-SERVICES
-N KUBE-FIREWALL
-N KUBE-FORWARD
-N KUBE-KUBELET-CANARY
-N KUBE-NODEPORTS
-N KUBE-PROXY-CANARY
-N KUBE-SERVICES
-A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-EXTERNAL-SERVICES -p tcp -m comment --comment "eramba-2/eramba-web2:http has no endpoints" -m addrtype --dst-type LOCAL -m tcp --dport 32226 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-EXTERNAL-SERVICES -p tcp -m comment --comment "eramba-1/eramba-web:http has no endpoints" -m addrtype --dst-type LOCAL -m tcp --dport 32370 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-SERVICES -d 10.98.240.243/32 -p tcp -m comment --comment "eramba-2/eramba-web2:http has no endpoints" -m tcp --dport 8080 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.100.17.120/32 -p tcp -m comment --comment "eramba-1/eramba-web:http has no endpoints" -m tcp --dport 8080 -j REJECT --reject-with icmp-port-unreachable
I'm sure there are other ways to access the individual application instances that I'm unaware of, so please advise if there's a better way.
Endpoints: <none> is an indication your Service is configured wrong; its selector doesn't match any of the Pods. If you look at the Service, it looks for
spec:
  selector:
    app.kubernetes.io/name: eramba-web
But if you look at the Deployment, it generates Pods with different labels
spec:
  template:
    metadata:
      labels:
        app: eramba-web # not app.kubernetes.io/name: ...
I'd consistently use the app.kubernetes.io/name format everywhere. You will have to delete and recreate the Deployment to change its selector: value to match.
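For example, a minimal sketch of the relabeled Deployment (only the label-related fields shown, assuming you standardize on app.kubernetes.io/name):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eramba-web
  namespace: eramba-1
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: eramba-web
  template:
    metadata:
      labels:
        app.kubernetes.io/name: eramba-web
After recreating the Deployment, kubectl describe svc eramba-web -n eramba-1 should show a real pod IP under Endpoints instead of <none>:
kubectl -n eramba-1 get endpoints eramba-web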
I am trying to expose a service on 200 odd ports. Here is sample service yaml:
apiVersion: v1
kind: Service
metadata:
  name: multiport-server-large-port
spec:
  type: NodePort
  selector:
    app: multiport-server-large-port
  ports:
  - port: 49152
    name: tcp-49152
  - port: 49153
    name: tcp-49153
  - port: 49154
    name: tcp-49154
  - port: 49155
    name: tcp-49155
  - port: 49156
    name: tcp-49156
  - port: 49157
    name: tcp-49157
  - port: 49158
.
.
.
.... 200 more such ports
After I apply this YAML, the service gets created, but the ip:port combination is unreachable (connection refused). On further investigation, I found that there are REJECT entries in the iptables filter chain KUBE-EXTERNAL-SERVICES for the ports I have exposed.
IPTABLES Reject Rules:
Chain KUBE-EXTERNAL-SERVICES (1 references)
pkts bytes target prot opt in out source destination
0 0 REJECT tcp -- any any anywhere anywhere /* default/multiport-server-large-port:tcp-49316 has no endpoints */ ADDRTYPE match dst-type LOCAL tcp dpt:31184 reject-with icmp-port-unreachable
0 0 REJECT tcp -- any any anywhere anywhere /* default/multiport-server-large-port:tcp-49325 has no endpoints */ ADDRTYPE match dst-type LOCAL tcp dpt:31225 reject-with icmp-port-unreachable
0 0 REJECT tcp -- any any anywhere anywhere /* default/multiport-server-large-port:tcp-49383 has no endpoints */ ADDRTYPE match dst-type LOCAL tcp dpt:32620 reject-with icmp-port-unreachable
0 0 REJECT tcp -- any any anywhere anywhere /* default/multiport-server-large-port:tcp-49385 has no endpoints */ ADDRTYPE match dst-type LOCAL tcp dpt:30107 reject-with icmp-port-unreachable
0 0 REJECT tcp -- any any anywhere anywhere /* default/multiport-server-large-port:tcp-49359 has no endpoints */ ADDRTYPE match dst-type LOCAL tcp dpt:31
I want to understand:
Why are these REJECT rules appearing?
Is it not possible to expose a large number of ports via services?
Is there any limit on the number of ports that can be exposed via services?
The REJECT rule is inserted when a particular service has 0 endpoints. The selector in your Service.spec must be wrong, or you don't have any matching pods running.
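A quick way to confirm that, using the names from this question:
# does the service have any endpoints?
kubectl get endpoints multiport-server-large-port
# do any running pods actually carry the label the selector expects?
kubectl get pods -l app=multiport-server-large-port --show-labels
If the endpoints list is empty, fix the pod labels (or the selector) and the REJECT rules disappear on their own.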
I'm running into DNS issues on a GKE 1.10 kubernetes cluster. Occasionally pods start without any network connectivity. Restarting the pod tends to fix the issue.
Here's the result of the same few commands inside a container without network, and one with.
BROKEN:
kc exec -it -n iotest app1-b67598997-p9lqk -c userapp sh
/app $ nslookup www.google.com
nslookup: can't resolve '(null)': Name does not resolve
/app $ cat /etc/resolv.conf
nameserver 10.63.240.10
search iotest.svc.cluster.local svc.cluster.local cluster.local c.myproj.internal google.internal
options ndots:5
/app $ curl -I 10.63.240.10
curl: (7) Failed to connect to 10.63.240.10 port 80: Connection refused
/app $ netstat -antp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:8001 0.0.0.0:* LISTEN 1/python
tcp 0 0 ::1:50051 :::* LISTEN 1/python
tcp 0 0 ::ffff:127.0.0.1:50051 :::* LISTEN 1/python
WORKING:
kc exec -it -n iotest app1-7d985bfd7b-h5dbr -c userapp sh
/app $ nslookup www.google.com
nslookup: can't resolve '(null)': Name does not resolve
Name: www.google.com
Address 1: 74.125.206.147 wk-in-f147.1e100.net
Address 2: 74.125.206.105 wk-in-f105.1e100.net
Address 3: 74.125.206.99 wk-in-f99.1e100.net
Address 4: 74.125.206.104 wk-in-f104.1e100.net
Address 5: 74.125.206.106 wk-in-f106.1e100.net
Address 6: 74.125.206.103 wk-in-f103.1e100.net
Address 7: 2a00:1450:400c:c04::68 wk-in-x68.1e100.net
/app $ cat /etc/resolv.conf
nameserver 10.63.240.10
search iotest.svc.cluster.local svc.cluster.local cluster.local c.myproj.internal google.internal
options ndots:5
/app $ curl -I 10.63.240.10
HTTP/1.1 404 Not Found
date: Sun, 29 Jul 2018 15:13:47 GMT
server: envoy
content-length: 0
/app $ netstat -antp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:15000 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:15001 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:8001 0.0.0.0:* LISTEN 1/python
tcp 0 0 10.60.2.6:56508 10.60.48.22:9091 ESTABLISHED -
tcp 0 0 127.0.0.1:57768 127.0.0.1:50051 ESTABLISHED -
tcp 0 0 10.60.2.6:43334 10.63.255.44:15011 ESTABLISHED -
tcp 0 0 10.60.2.6:15001 10.60.45.26:57160 ESTABLISHED -
tcp 0 0 10.60.2.6:48946 10.60.45.28:9091 ESTABLISHED -
tcp 0 0 127.0.0.1:49804 127.0.0.1:50051 ESTABLISHED -
tcp 0 0 ::1:50051 :::* LISTEN 1/python
tcp 0 0 ::ffff:127.0.0.1:50051 :::* LISTEN 1/python
tcp 0 0 ::ffff:127.0.0.1:50051 ::ffff:127.0.0.1:49804 ESTABLISHED 1/python
tcp 0 0 ::ffff:127.0.0.1:50051 ::ffff:127.0.0.1:57768 ESTABLISHED 1/python
These pods are identical, just one was restarted.
Does anyone have advice about how to analyse and fix this issue?
Some steps to try:
1) ifconfig eth0 or whatever the primary interface is.
Is the interface up? Are the tx and rx packet counts increasing?
2) If the interface is up, you can try tcpdump while running the nslookup command that you posted, to see if the DNS request packets are getting sent out (see the sketch after this list).
3) See which node the pod is scheduled on when network connectivity is broken. Maybe it is on the same node every time? If yes, are other pods on that node running into a similar problem?
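For step 2, a rough sketch of what that could look like (assuming tcpdump is available on the node; the pod name is the broken one from the question, and <pod-ip> is a placeholder you'd substitute):
# find the node and IP of the broken pod
kubectl -n iotest get pod app1-b67598997-p9lqk -o wide
# on that node, watch DNS traffic from the pod while running nslookup inside it
# (eth0 may need to be the bridge/veth interface, depending on the CNI)
tcpdump -ni eth0 host <pod-ip> and udp port 53
If no packets show up on the node while nslookup runs, the problem is inside the pod's network namespace; if they go out but get no replies, look at kube-dns and the node's routing/iptables.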
I also faced the same problem, and I simply worked around it for now by switching to the 1.9.x GKE version (after spending many hours trying to debug why my app wasn't working).
Hope this helps!
I have a kubernetes cluster which has 2 interfaces:
eth0: 10.10.10.100 (internal)
eth1: 20.20.20.100 (External)
There are few pods running in the cluster with flannel networking.
POD1: 172.16.54.4 (nginx service)
I want to access 20.20.20.100:80 from another host which is connected to the above k8s cluster, so that I can reach the nginx POD.
I had enabled ip forwarding and also added DNAT rules as follows:
iptables -t nat -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.16.54.4:80
After this when I try to do a curl on 20.20.20.100, I get
Failed to connect to 10.10.65.161 port 80: Connection refused
How do I get this working?
You can try
iptables -t nat -A PREROUTING -p tcp -d 20.20.20.100 --dport 80 -j DNAT --to-destination 172.16.54.4:80
But I don't recommend that you manage iptables yourself; it's painful to maintain the rules...
You can use hostPort in Kubernetes instead. You can use kubenet as the network plugin, since the CNI plugin does not support hostPort.
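If you go the hostPort route, a minimal sketch of what it looks like in a pod spec (the name and image are illustrative, assuming the nginx container listens on 80):
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 80   # binds port 80 on the node's IPs, e.g. 20.20.20.100:80, to this pod
The network plugin then sets up the forwarding for you instead of hand-written DNAT entries.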
Why not use the NodePort type? I think it is a better way to access a service by host IP. Please try iptables -nvL -t nat and show the detail.
I have two pods mapped to two services, up and running on VirtualBox VMs on my laptop. I have kube-dns working. One pod is a web service and the other is MongoDB.
The spec of webapp pod is below
spec:
  containers:
  - resources:
      limits:
        cpu: 0.5
    .
    .
    name: wsemp
    ports:
    - containerPort: 8080
      # name: wsemp
    #command: ["java","-Dspring.data.mongodb.uri=mongodb://192.168.6.103:30061/microservices", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
    command: ["java","-Dspring.data.mongodb.uri=mongodb://mongoservice/microservices", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
The spec of corresponding service
apiVersion: v1
kind: Service
metadata:
  labels:
    name: webappservice
  name: webappservice
spec:
  ports:
  - port: 8080
    nodePort: 30062
    targetPort: 8080
    protocol: TCP
  type: NodePort
  selector:
    name: webapp
Mongodb pod spec
apiVersion: v1
kind: Pod
metadata:
  name: mongodb
  labels:
    name: mongodb
spec:
  containers:
    .
    .
    name: mongodb
    ports:
    - containerPort: 27017
Mongodb service spec
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongodb
  name: mongoservice
spec:
  ports:
  - port: 27017
    nodePort: 30061
    targetPort: 27017
    protocol: TCP
  type: NodePort
  selector:
    name: mongodb
UPDATED TARGET PORTS IN SERVICE AFTER COMMENT
Issue
When it starts, the webapp is not able to connect to the mongoservice port and gives this error:
Exception in monitor thread while connecting to server mongoservice:27017
com.mongodb.MongoSocketOpenException: Exception opening socket
at com.mongodb.connection.SocketStream.open(SocketStream.java:63) ~[mongodb-driver-core-3.2.2.jar!/:na]
at com.mongodb.connection.InternalStreamConnection.open(InternalStreamConnection.java:114) ~[mongodb-driver-core-3.2.2.jar!/:na]
at com.mongodb.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:128) ~[mongodb-driver-core-3.2.2.jar!/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:1.8.0_111]
describe svc
kubectl describe svc mongoservice
Name: mongoservice
Namespace: default
Labels: name=mongodb
Selector: name=mongodb
Type: NodePort
IP: 10.254.146.189
Port: <unset> 27017/TCP
NodePort: <unset> 30061/TCP
Endpoints: 172.17.99.2:27017
Session Affinity: None
No events.
kubectl describe svc webappservice
Name: webappservice
Namespace: default
Labels: name=webappservice
Selector: name=webapp
Type: NodePort
IP: 10.254.112.121
Port: <unset> 8080/TCP
NodePort: <unset> 30062/TCP
Endpoints: 172.17.99.3:8080
Session Affinity: None
No events.
Debugging
root#webapp:/# nslookup mongoservice
Server: 10.254.0.2
Address: 10.254.0.2#53
Non-authoritative answer:
Name: mongoservice.default.svc.cluster.local
Address: 10.254.146.189
root#webapp:/# curl 10.254.146.189:27017
curl: (7) Failed to connect to 10.254.146.189 port 27017: Connection refused
root#webapp:/# curl mongoservice:27017
curl: (7) Failed to connect to mongoservice port 27017: Connection refused
sudo iptables-save | grep webapp
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/webappservice:" -m tcp --dport 30062 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/webappservice:" -m tcp --dport 30062 -j KUBE-SVC-NQBDRRKQULANV7O3
-A KUBE-SEP-IE7EBTQCN7T6HXC4 -s 172.17.99.3/32 -m comment --comment "default/webappservice:" -j KUBE-MARK-MASQ
-A KUBE-SEP-IE7EBTQCN7T6HXC4 -p tcp -m comment --comment "default/webappservice:" -m tcp -j DNAT --to-destination 172.17.99.3:8080
-A KUBE-SERVICES -d 10.254.217.24/32 -p tcp -m comment --comment "default/webappservice: cluster IP" -m tcp --dport 8080 -j KUBE-SVC-NQBDRRKQULANV7O3
-A KUBE-SVC-NQBDRRKQULANV7O3 -m comment --comment "default/webappservice:" -j KUBE-SEP-IE7EBTQCN7T6HXC4
$ curl 10.254.217.24:8080
{"timestamp":1486678423757,"status":404,"error":"Not Found","message":"No message available","path":"/"}[osboxes#kube-node1 ~]$
sudo iptables-save | grep mongodb
[osboxes@osboxes ~]$ sudo iptables-save | grep mongo
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/mongoservice:" -m tcp --dport 30061 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/mongoservice:" -m tcp --dport 30061 -j KUBE-SVC-2HQWGC3WSIBZF7CN
-A KUBE-SEP-FVWOWAWXXVAVIQ5O -s 172.17.99.2/32 -m comment --comment "default/mongoservice:" -j KUBE-MARK-MASQ
-A KUBE-SEP-FVWOWAWXXVAVIQ5O -p tcp -m comment --comment "default/mongoservice:" -m tcp -j DNAT --to-destination 172.17.99.2:27017
-A KUBE-SERVICES -d 10.254.146.189/32 -p tcp -m comment --comment "default/mongoservice: cluster IP" -m tcp --dport 27017 -j KUBE-SVC-2HQWGC3WSIBZF7CN
-A KUBE-SVC-2HQWGC3WSIBZF7CN -m comment --comment "default/mongoservice:" -j KUBE-SEP-FVWOWAWXXVAVIQ5O
[osboxes@osboxes ~]$ sudo curl 10.254.146.189:8080
^C[osboxes@osboxes ~]$ sudo curl 10.254.146.189:27017
It looks like you are trying to access MongoDB over HTTP on the native driver port.
root#mongodb:/# netstat -an
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN
tcp 0 0 172.17.99.2:60724 151.101.128.204:80 TIME_WAIT
tcp 0 0 172.17.99.2:60728 151.101.128.204:80 TIME_WAIT
mongodb container has no errors on startup.
I'm trying to follow the steps in https://kubernetes.io/docs/user-guide/debugging-services/#iptables, but I'm stuck at the part where it says "try restarting kube-proxy with the -V flag set to 4", since I don't know how to do that.
I'm not a networking person, so I don't know how or what needs to be analyzed here. Any tips for debugging would be a great help.
Thanks.
:)
As a side note, keep in mind that curl performs HTTP requests by default, but port 27017 on the host you are trying to reach is not bound to an application that understands that protocol. Typically, what you would do in this scenario is use netcat:
nc -zv mongoservice 27017
This reports whether port 27017 on that host is open or not.
nc = netcat
-z scan for listening daemons without sending data
-v adds verbosity
Regarding your MongoDB Service file, remember to set the targetPort field. As explained in the Kubernetes docs regarding targetPort:
This specification will create a Service which targets TCP port 80 on any Pod with the run: my-nginx label, and expose it on an abstracted Service port (targetPort: is the port the container accepts traffic on, port: is the abstracted Service port, which can be any port other pods use to access the Service). View service API object to see the list of supported fields in service definition.
Therefore, just set it to 27017 for consistency.
You should not run into issues after following this advice. Keep up the good work and learn as much as you can!
The iptables rules look OK, but it is not clear which network solution (flannel/calico) is used in your Kubernetes cluster. You may check whether you can access the kube-dns pod IP from your web pod.
Thanks. I got a clue on this: since I was using the flannel network, there was an issue with communication between the pods on the flannel network.
In particular this part, FLANNEL_OPTIONS="--iface=eth1", as mentioned in http://jayunit100.blogspot.com/2015/06/flannel-and-vagrant-heads-up.html
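For reference, a sketch of where that option ends up (assuming a systemd-managed flanneld that reads its options from /etc/sysconfig/flanneld; the file path differs per distro):
# /etc/sysconfig/flanneld
FLANNEL_OPTIONS="--iface=eth1"
Then restart flanneld on each node, so flannel builds its overlay over the correct interface instead of the NAT one.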
Thanks.