I am trying to access a Flask server running in one OpenShift pod from another pod.
For that, I created a service as below.
$ oc get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-web-app ClusterIP 172.30.216.112 <none> 8080/TCP 8m
$ oc describe svc my-web-app
Name: my-web-app
Namespace: neo4j-sys
Labels: app=my-web-app
Annotations: openshift.io/generated-by=OpenShiftNewApp
Selector: app=my-web-app,deploymentconfig=my-web-app
Type: ClusterIP
IP: 172.30.216.112
Port: 8080-tcp 8080/TCP
TargetPort: 8080/TCP
Endpoints: 172.20.203.104:5000,172.20.49.150:5000
Session Affinity: None
Events: <none>
1)
First, I pinged from one pod to the other pod and got a response.
(app-root) sh-4.2$ ping 172.20.203.104
PING 172.20.203.104 (172.20.203.104) 56(84) bytes of data.
64 bytes from 172.20.203.104: icmp_seq=1 ttl=64 time=5.53 ms
64 bytes from 172.20.203.104: icmp_seq=2 ttl=64 time=0.527 ms
64 bytes from 172.20.203.104: icmp_seq=3 ttl=64 time=3.10 ms
64 bytes from 172.20.203.104: icmp_seq=4 ttl=64 time=2.12 ms
64 bytes from 172.20.203.104: icmp_seq=5 ttl=64 time=0.784 ms
64 bytes from 172.20.203.104: icmp_seq=6 ttl=64 time=6.81 ms
64 bytes from 172.20.203.104: icmp_seq=7 ttl=64 time=18.2 ms
^C
--- 172.20.203.104 ping statistics ---
7 packets transmitted, 7 received, 0% packet loss, time 6012ms
rtt min/avg/max/mdev = 0.527/5.303/18.235/5.704 ms
But when I tried curl, there was no response.
(app-root) sh-4.2$ curl 172.20.203.104
curl: (7) Failed connect to 172.20.203.104:80; Connection refused
(app-root) sh-4.2$ curl 172.20.203.104:8080
curl: (7) Failed connect to 172.20.203.104:8080; Connection refused
2)
After that, I tried to reach the cluster IP from one pod. In this case, neither ping nor curl could reach it.
(app-root) sh-4.2$ ping 172.30.216.112
PING 172.30.216.112 (172.30.216.112) 56(84) bytes of data.
From 172.20.49.1 icmp_seq=1 Destination Host Unreachable
From 172.20.49.1 icmp_seq=4 Destination Host Unreachable
From 172.20.49.1 icmp_seq=2 Destination Host Unreachable
From 172.20.49.1 icmp_seq=3 Destination Host Unreachable
^C
--- 172.30.216.112 ping statistics ---
7 packets transmitted, 0 received, +4 errors, 100% packet loss, time 6002ms
pipe 4
(app-root) sh-4.2$ curl 172.30.216.112
curl: (7) Failed connect to 172.30.216.112:80; No route to host
Please let me know where I am going wrong here. Why are the above cases #1 and #2 failing? How do I access ClusterIP services?
I am completely new to services and accessing them, so I might be missing some basics.
I have gone through other answers, such as How can I access the Kubernetes service through ClusterIP, but that one is about NodePort, which does not help me.
Update based on the comment below from Graham Dumpleton; here are my observations.
For information, this is the log of the Flask server I am running in the pods.
* Serving Flask app "wsgi" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
127.0.0.1 - - [14/Nov/2019 04:54:53] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [14/Nov/2019 04:55:05] "GET / HTTP/1.1" 200 -
Is your pod listening on external interfaces on port 8080?
If I understand the question correctly: my intention here is just to communicate between pods via the ClusterIP service. I am not looking to access pods from external interfaces (other projects, or through web URLs as a load-balancer service).
If you get into the pod, can you do curl $HOSTNAME:8080?
Yes, if I use localhost or 127.0.0.1, I get a response from the same pod where I run this, as expected.
(app-root) sh-4.2$ curl http://127.0.0.1:5000/
Hello World!
(app-root) sh-4.2$ curl http://localhost:5000/
Hello World!
But if I try with my-web-app or the service IP (ClusterIP), I do not get a response.
(app-root) sh-4.2$ curl http://172.30.216.112:5000/
curl: (7) Failed connect to 172.30.216.112:5000; No route to host
(app-root) sh-4.2$ curl my-web-app:8080
curl: (7) Failed connect to my-web-app:8080; Connection refused
(app-root) sh-4.2$ curl http://my-web-app:8080/
curl: (7) Failed connect to my-web-app:8080; Connection refused
With the pod IP, I am also not getting a response.
(app-root) sh-4.2$ curl http://172.20.49.150:5000/
curl: (7) Failed connect to 172.20.49.150:5000; Connection refused
(app-root) sh-4.2$ curl 172.20.49.150
curl: (7) Failed connect to 172.20.49.150:80; Connection refused
I am answering my own question. Here is how my issue got resolved, based on input from Graham Dumpleton.
Initially, I started the Flask server as below.
from flask import Flask
application = Flask(__name__)
@application.route("/")   # route serving the "Hello World!" response
def hello():
    return "Hello World!"

if __name__ == "__main__":
    application.run()
This binds the server to http://127.0.0.1:5000/ by default, so it only accepts connections from inside the same pod.
As part of the resolution, I changed the bind address to 0.0.0.0 as below.
from flask import Flask
application = Flask(__name__)
@application.route("/")   # route serving the "Hello World!" response
def hello():
    return "Hello World!"

if __name__ == "__main__":
    application.run(host='0.0.0.0')   # listen on all interfaces
And the log after that is as below.
* Serving Flask app "wsgi" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
After that, the pods successfully communicated via the ClusterIP. Below are the service details (I scaled up by one more pod).
$ oc describe svc my-web-app
Name: my-web-app
Namespace: neo4j-sys
Labels: app=my-web-app
Annotations: openshift.io/generated-by=OpenShiftNewApp
Selector: app=my-web-app,deploymentconfig=my-web-app
Type: ClusterIP
IP: 172.30.4.250
Port: 8080-tcp 8080/TCP
TargetPort: 5000/TCP
Endpoints: 172.20.106.184:5000,172.20.182.118:5000,172.20.83.40:5000
Session Affinity: None
Events: <none>
Below is the successful response.
(app-root) sh-4.2$ curl http://172.30.4.250:8080 // with the ClusterIP, which is my expectation
Hello World!
(app-root) sh-4.2$ curl http://172.20.106.184:5000 // with pod IP
Hello World!
(app-root) sh-4.2$ curl $HOSTNAME:5000 // with $HOSTNAME
Hello World!
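As a side note, the log above warns that the Flask development server should not be used in production. A common alternative is a WSGI server such as gunicorn, which binds the address itself. A minimal sketch, assuming the module is named wsgi as in the log above (adjust to your app):
$ pip install gunicorn
$ gunicorn --bind 0.0.0.0:5000 wsgi:application   # wsgi.py must expose 'application'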
Related
Recently I created a private AKS cluster via Terraform and everything went OK. How is it possible that two pods within the same namespace are unable to communicate with each other?
AKS version= 1.19.11
coredns:1.6.6
# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 5d18h
The cluster was created with the resources below:
Network type (plugin)=Kubenet
Pod CIDR=10.x.x.x/16
Service CIDR=10.x.x.0/16
DNS service IP=10.x.x.10
Docker bridge CIDR=172.x.x.1/16
Network Policy=Calico
Ping response:
/ # ping 10.x.x.89
PING 10.x.x.89 (10.x.x.89): 56 data bytes
^C
--- 10.x.x.89 ping statistics ---
25 packets transmitted, 0 packets received, 100% packet loss
/ # ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1): 56 data bytes
64 bytes from 10.0.0.1: seq=0 ttl=241 time=27.840 ms
64 bytes from 10.0.0.1: seq=1 ttl=241 time=28.790 ms
64 bytes from 10.0.0.1: seq=2 ttl=241 time=28.725 ms
^C
--- 10.0.0.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 27.840/28.451/28.790 ms
/ # ping kubernetes
ping: bad address 'kubernetes'
/ # nslookup kubernetes
nslookup: can't resolve '(null)': Name does not resolve
nslookup: can't resolve 'kubernetes': Name does not resolve
/ #
A network policy was the issue. You can list the policies in a namespace with:
kubectl get netpol -n <namespace>
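For context, with Calico as the network policy engine, a default-deny policy in the namespace produces exactly this symptom: pod-to-pod pings time out while the API server ClusterIP still answers. A minimal sketch of a policy that re-allows traffic between pods in one namespace (name and namespace are illustrative, not from the question):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace   # hypothetical name
  namespace: my-namespace      # assumption: replace with your namespace
spec:
  podSelector: {}              # selects every pod in the namespace
  ingress:
  - from:
    - podSelector: {}          # allow ingress from any pod in the same namespace
Apply it with kubectl apply -f allow-same-namespace.yaml and retry the ping.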
Sorry if it's a naive question. Please correct me if my understanding is wrong.
I created a pod using this command:
kubectl run nginx --image=nginx --port=8888
My understanding of this command is that the nginx (application) container will be exposed/available at port 8888.
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 10m 10.244.1.2 node01 <none> <none>
curl -v 10.244.1.2:8888 ===> I am wondering why this failed?
* Trying 10.244.1.2:8888...
* TCP_NODELAY set
* connect to 10.244.1.2 port 8888 failed: Connection refused
* Failed to connect to 10.244.1.2 port 8888: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 10.244.1.2 port 8888: Connection refused
curl -v 10.244.1.2 ===> to my surprise, this returned a 200 success response
* Trying 10.244.1.2:80...
* TCP_NODELAY set
* Connected to 10.244.1.2 (10.244.1.2) port 80 (#0)
> GET / HTTP/1.1
> Host: 10.244.1.2
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
If the application still listens on the default port 80, I am wondering about the significance of the container port 8888?
OK, so maybe it is used to expose the pod to the outside world.
Let's see. I went ahead and created a service for the pod:
kubectl expose pod nginx --port=80 --target-port=8888
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx ClusterIP 10.96.214.161 <none> 80/TCP 13m
$ curl -v 10.96.214.161 ==> here default port (80) didn't work
* Trying 10.96.214.161:80...
* TCP_NODELAY set
* connect to 10.96.214.161 port 80 failed: Connection refused
* Failed to connect to 10.96.214.161 port 80: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 10.96.214.161 port 80: Connection refused
$ curl -v 10.96.214.161:8888 ==> target port didn't work either
* Trying 10.96.214.161:8888...
* TCP_NODELAY set
....waiting forever
Which port do I need to use to make it work? Am I missing anything?
By default, the nginx server listens on port 80. You can see it in their Docker image reference.
With kubectl run nginx --image=nginx --port=8888, what you have done is declare an additional container port alongside 80. But the server is still listening on port 80.
So, try with target port 80. That is why it did not work when you tried any port other than 80. Change --target-port=8888 to --target-port=80.
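For example (a sketch; the service will get a new ClusterIP, so check it with kubectl get svc first):
$ kubectl delete svc nginx
$ kubectl expose pod nginx --port=80 --target-port=80
$ kubectl get svc nginx        # note the new CLUSTER-IP
$ curl <new-cluster-ip>        # should now return the nginx welcome page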
Or, if you want to change the server port, you need to use a ConfigMap along with the pod to pass custom configuration to the server.
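A minimal sketch of that ConfigMap approach, in case you really want nginx itself to listen on 8888 (all names here are illustrative, not from the question):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf                   # hypothetical name
data:
  default.conf: |
    server {
      listen 8888;                   # nginx now listens on 8888 instead of 80
      location / {
        root /usr/share/nginx/html;
      }
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 8888
    volumeMounts:
    - name: conf
      mountPath: /etc/nginx/conf.d   # replaces the image's default server config
  volumes:
  - name: conf
    configMap:
      name: nginx-conf
With this in place, curl <pod-ip>:8888 should respond, and a service with --target-port=8888 would work.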
I would like to access my application via localhost with the kubectl port-forward command. But when I run kubectl port-forward road-dashboard-dev-5cdc465475-jwwgz 8082:8080, I receive the error below.
Forwarding from 127.0.0.1:8082 -> 8080
Forwarding from [::1]:8082 -> 8080
Handling connection for 8082
Handling connection for 8082
E0124 14:15:27.173395 4376 portforward.go:400] an error occurred forwarding 8082 -> 8080: error forwarding port 8080 to pod 09a76f6936b313e438bbf5a84bd886b3b3db8f499b5081b66cddc390021556d5, uid : exit status 1: 2020/01/24 11:15:27 socat[9064] E connect(6, AF=2 127.0.0.1:8080, 16): Connection refused
I also tried to connect to the pod in the cluster via exec -it, but that did not work either. What might be the point that I am missing?
node#road-dashboard-dev-5cdc465475-jwwgz:/usr/src/app$ curl -v localhost:8080
* Rebuilt URL to: localhost:8080/
* Trying ::1...
* TCP_NODELAY set
* connect to ::1 port 8080 failed: Connection refused
* Trying 127.0.0.1...
* TCP_NODELAY set
* connect to 127.0.0.1 port 8080 failed: Connection refused
* Failed to connect to localhost port 8080: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 8080: Connection refused
The kubectl get all output is below. I am sure that the container port value is set to 8080.
NAME READY STATUS RESTARTS AGE
pod/road-dashboard-dev-5cdc465475-jwwgz 1/1 Running 0 34m
pod/road-dashboard-dev-5cdc465475-rdk7g 1/1 Running 0 34m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/road-dashboard-dev NodePort 10.254.61.225 <none> 80:41599/TCP 18h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/road-dashboard-dev 2/2 2 2 18h
NAME DESIRED CURRENT READY AGE
replicaset.apps/road-dashboard-dev-5cdc465475 2 2 2 34m
Name: road-dashboard-dev-5cdc465475-jwwgz
Namespace: dev
Priority: 0
PriorityClassName: <none>
Node: c123
Containers:
road-dashboard:
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 24 Jan 2020 13:42:39 +0300
Ready: True
Restart Count: 0
Environment: <none>
To debug your issue, you should leave the port-forward command running in the foreground, curl from a second terminal, and see what output you get on the port-forward prompt.
$ kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx 1/1 Running 2 112m 10.244.3.43 k8s-node-3 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11d <none>
service/nginx NodePort 10.96.130.207 <none> 80:31316/TCP 20m run=nginx
Example:
$ kubectl port-forward nginx 31000:80
Forwarding from 127.0.0.1:31000 -> 80
Forwarding from [::1]:31000 -> 80
From a second terminal window, curl the port-forward you have:
$ curl localhost:31000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
On the first terminal, you should see that the port-forward prompt reports that it is handling a connection, like below (note the new line Handling connection for 31000):
$ kubectl port-forward nginx 31000:80
Forwarding from 127.0.0.1:31000 -> 80
Forwarding from [::1]:31000 -> 80
Handling connection for 31000
So if I have the wrong port forwarding, as below (note I have made it port 8080 for an nginx container exposing port 80):
$ kubectl port-forward nginx 31000:8080
Forwarding from 127.0.0.1:31000 -> 8080
Forwarding from [::1]:31000 -> 8080
The curl will produce a clear error on the port-forward prompt, indicating the connection to port 8080 was refused by the container, as it is not the correct port, and we get an empty reply back.
$ curl -v localhost:31000
* Rebuilt URL to: localhost:31000/
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 31000 (#0)
> GET / HTTP/1.1
> Host: localhost:31000
> User-Agent: curl/7.47.0
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact
curl: (52) Empty reply from server
$ kubectl port-forward nginx 31000:8080
Forwarding from 127.0.0.1:31000 -> 8080
Forwarding from [::1]:31000 -> 8080
Handling connection for 31000
E0124 11:35:53.390711 10791 portforward.go:400] an error occurred forwarding 31000 -> 8080: error forwarding port 8080 to pod 88e4de4aba522b0beff95c3b632eca654a5c34b0216320a29247bb8574ef0f6b, uid : exit status 1: 2020/01/24 11:35:57 socat[15334] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
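As a final check in a case like this, you can confirm which port the application actually listens on inside the pod. A sketch, assuming the image ships ss or netstat (many minimal images do not):
$ kubectl exec -it road-dashboard-dev-5cdc465475-jwwgz -- ss -lntp      # list listening TCP sockets
$ kubectl exec -it road-dashboard-dev-5cdc465475-jwwgz -- netstat -lntp # fallback if ss is absent
If nothing is listening on 8080, the containerPort declaration is wrong rather than the port-forward itself.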
I started a Kubernetes cluster using kubeadm on two servers rented from DigitalOcean. I use Flannel as the CNI. After launching the cluster and following this tutorial, I created a deployment and a service.
$ kubectl describe svc example-service
Name: example-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: run=load-balancer-example
Type: NodePort
IP: 10.99.217.181
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31570/TCP
Endpoints: 10.244.1.2:8080,10.244.1.3:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Trying to access the pods from the master node (server1):
$ curl 10.244.1.2:8080
curl: (7) Failed to connect to 10.244.1.2 port 8080: Connection timed out
$ curl 10.244.1.3:8080
curl: (7) Failed to connect to 10.244.1.3 port 8080: Connection timed out
$ curl 10.99.217.181:8080
curl: (7) Failed to connect to 10.99.217.181 port 8080: Connection timed out
$ curl [server1-ip]:31570
curl: (7) Failed to connect to [server1-ip] port 31570: Connection timed out
$ curl [server2-ip]:31570
curl: (7) Failed to connect to [server2-ip] port 31570: Connection timed out
Trying to access the pods from the worker node (server2):
$ curl 10.244.1.2:8080
Hello Kubernetes!
$ curl 10.244.1.3:8080
Hello Kubernetes!
$ curl 10.99.217.181:8080
Hello Kubernetes!
$ curl [server1-ip]:31570
curl: (7) Failed to connect to [server1-ip] port 31570: Connection timed out
$ curl [server2-ip]:31570
Hello Kubernetes!
I created a K8s cluster of 5 VMs (1 master and 4 slaves running Ubuntu 16.04.3 LTS) using kubeadm. I used Flannel to set up networking in the cluster. I was able to successfully deploy an application and then exposed it via a NodePort service. From here, things got complicated for me.
Before I started, I disabled the default firewalld service on master and the nodes.
As I understand from the K8s Services doc, the type NodePort exposes the service on all nodes in the cluster. However, when I created it, the service was exposed only on 2 nodes out of 4 in the cluster. I am guessing that's not the expected behavior (right?)
For troubleshooting, here are some resource specs:
root@vm-vivekse-003:~# kubectl get nodes
NAME STATUS AGE VERSION
vm-deepejai-00b Ready 5m v1.7.3
vm-plashkar-006 Ready 4d v1.7.3
vm-rosnthom-00f Ready 4d v1.7.3
vm-vivekse-003 Ready 4d v1.7.3 //the master
vm-vivekse-004 Ready 16h v1.7.3
root@vm-vivekse-003:~# kubectl get pods -o wide -n playground
NAME READY STATUS RESTARTS AGE IP NODE
kubernetes-bootcamp-2457653786-9qk80 1/1 Running 0 2d 10.244.3.6 vm-rosnthom-00f
springboot-helloworld-2842952983-rw0gc 1/1 Running 0 1d 10.244.3.7 vm-rosnthom-00f
root@vm-vivekse-003:~# kubectl get svc -o wide -n playground
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
sb-hw-svc 10.101.180.19 <nodes> 9000:30847/TCP 5h run=springboot-helloworld
root@vm-vivekse-003:~# kubectl describe svc sb-hw-svc -n playground
Name: sb-hw-svc
Namespace: playground
Labels: <none>
Annotations: <none>
Selector: run=springboot-helloworld
Type: NodePort
IP: 10.101.180.19
Port: <unset> 9000/TCP
NodePort: <unset> 30847/TCP
Endpoints: 10.244.3.7:9000
Session Affinity: None
Events: <none>
root@vm-vivekse-003:~# kubectl get endpoints sb-hw-svc -n playground -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  creationTimestamp: 2017-08-09T06:28:06Z
  name: sb-hw-svc
  namespace: playground
  resourceVersion: "588958"
  selfLink: /api/v1/namespaces/playground/endpoints/sb-hw-svc
  uid: e76d9cc1-7ccb-11e7-bc6a-fa163efaba6b
subsets:
- addresses:
  - ip: 10.244.3.7
    nodeName: vm-rosnthom-00f
    targetRef:
      kind: Pod
      name: springboot-helloworld-2842952983-rw0gc
      namespace: playground
      resourceVersion: "473859"
      uid: 16d9db68-7c1a-11e7-bc6a-fa163efaba6b
  ports:
  - port: 9000
    protocol: TCP
After some tinkering, I realized that on those two "faulty" nodes, the services were not available from within the hosts themselves.
Node01 (working):
root@vm-vivekse-004:~# curl 127.0.0.1:30847 //<localhost>:<nodeport>
Hello Docker World!!
root@vm-vivekse-004:~# curl 10.101.180.19:9000 //<cluster-ip>:<port>
Hello Docker World!!
root@vm-vivekse-004:~# curl 10.244.3.7:9000 //<pod-ip>:<port>
Hello Docker World!!
Node02 (working):
root@vm-rosnthom-00f:~# curl 127.0.0.1:30847
Hello Docker World!!
root@vm-rosnthom-00f:~# curl 10.101.180.19:9000
Hello Docker World!!
root@vm-rosnthom-00f:~# curl 10.244.3.7:9000
Hello Docker World!!
Node03 (not working):
root@vm-plashkar-006:~# curl 127.0.0.1:30847
curl: (7) Failed to connect to 127.0.0.1 port 30847: Connection timed out
root@vm-plashkar-006:~# curl 10.101.180.19:9000
curl: (7) Failed to connect to 10.101.180.19 port 9000: Connection timed out
root@vm-plashkar-006:~# curl 10.244.3.7:9000
curl: (7) Failed to connect to 10.244.3.7 port 9000: Connection timed out
Node04 (not working):
root@vm-deepejai-00b:/# curl 127.0.0.1:30847
curl: (7) Failed to connect to 127.0.0.1 port 30847: Connection timed out
root@vm-deepejai-00b:/# curl 10.101.180.19:9000
curl: (7) Failed to connect to 10.101.180.19 port 9000: Connection timed out
root@vm-deepejai-00b:/# curl 10.244.3.7:9000
curl: (7) Failed to connect to 10.244.3.7 port 9000: Connection timed out
I tried netstat and telnet on all 4 slaves. Here's the output:
Node01 (the working host):
root@vm-vivekse-004:~# netstat -tulpn | grep 30847
tcp6 0 0 :::30847 :::* LISTEN 27808/kube-proxy
root@vm-vivekse-004:~# telnet 127.0.0.1 30847
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Node02 (the working host):
root@vm-rosnthom-00f:~# netstat -tulpn | grep 30847
tcp6 0 0 :::30847 :::* LISTEN 11842/kube-proxy
root@vm-rosnthom-00f:~# telnet 127.0.0.1 30847
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Node03 (the not-working host):
root@vm-plashkar-006:~# netstat -tulpn | grep 30847
tcp6 0 0 :::30847 :::* LISTEN 7791/kube-proxy
root@vm-plashkar-006:~# telnet 127.0.0.1 30847
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection timed out
Node04 (the not-working host):
root@vm-deepejai-00b:/# netstat -tulpn | grep 30847
tcp6 0 0 :::30847 :::* LISTEN 689/kube-proxy
root@vm-deepejai-00b:/# telnet 127.0.0.1 30847
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection timed out
Additional info:
From the kubectl get pods output, I can see that the pod is actually deployed on slave vm-rosnthom-00f. I am able to ping this host from all 5 VMs, and curl vm-rosnthom-00f:30847 also works from all the VMs.
I can clearly see that the internal cluster networking is messed up, but I am unsure how to resolve it! The iptables -L output for all the slaves is identical, and even the local loopback (ifconfig lo) is up and running on all the slaves. I'm completely clueless as to how to fix it!
Use a service of type NodePort and access the NodePort at the IP address of your master node.
The service obviously knows on which node a pod is running and redirects the traffic to one of the pods if you have several instances.
Label your pods and use the corresponding selectors in the service.
If you still run into issues, please post your service and deployment.
To check connectivity, I would suggest using netcat.
nc -zv ip/service port
If the network is OK, it responds: open
Inside the cluster, access the containers like so:
nc -zv servicename.namespace.svc.cluster.local port
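For example, against the service described earlier in this thread (name and namespace taken from the question; the exact success message differs between netcat flavors):
nc -zv sb-hw-svc.playground.svc.cluster.local 9000
# on success, something like: ... port [tcp/*] succeeded!  (or simply: open)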
Always consider that you have 3 kinds of ports:
The port on which your software is running inside your container.
The port on which you expose that port on the pod (a pod has one IP address, the cluster IP address, which is used to reach a container on a specific port).
The NodePort, which allows you to access the pod's IP address and ports from outside the cluster's network.
Either your firewall blocks some connections between nodes, or your kube-proxy is not working properly. I guess your services only work on the nodes where the pods are running.
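To narrow down which of the two it is, a sketch of what I would run on one of the "faulty" nodes (the kube-proxy pod name is a placeholder for whatever runs on that node):
iptables-save | grep 30847                         # are the NodePort NAT rules present?
kubectl -n kube-system get pods -o wide | grep kube-proxy
kubectl -n kube-system logs <kube-proxy-pod>       # hypothetical name; look for sync errors
If the NAT rules are present but connections still time out, a host firewall between the nodes is the more likely culprit.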
If you want to reach the service from any node in the cluster, you need to define the service type as ClusterIP. Since you defined the service type as NodePort, you can connect from the node where the service is running.
My answer above was not correct: based on the documentation, we should be able to connect from any NodeIP:NodePort, but it is not working in my cluster either.
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types
NodePort: Exposes the service on each Node's IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You'll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
One of my nodes did not have IP forwarding set. After enabling it, I was able to connect to my service using NodeIP:NodePort:
sysctl -w net.ipv4.ip_forward=1
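To make that setting survive a reboot (a sketch using the standard sysctl configuration file):
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p                                  # reload; verify with: sysctl net.ipv4.ip_forward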