Cannot access exposed deployment/pod in Kubernetes

I want to start out by saying I do not know the exact architecture of the servers involved. All I know is that they are Ubuntu machines on the cloud.
I have set up a 1 master/1 worker k8s cluster using two servers.
kubectl cluster-info gives me:
Kubernetes master is running at https://10.62.194.4:6443
KubeDNS is running at https://10.62.194.4:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
I have created a simple deployment as such:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
This spins up two nginx pods exposed on container port 80.
I have exposed this deployment using:
kubectl expose deployment nginx-deploy --type NodePort
When I run kubectl get svc, I get:
nginx-deploy NodePort 10.99.103.239 <none> 80:30682/TCP 29m
kubectl get pods -o wide gives me:
nginx-deploy-7c45b84548-ckqzb 1/1 Running 0 33m 192.168.1.5 myserver1 <none> <none>
nginx-deploy-7c45b84548-vl4kh 1/1 Running 0 33m 192.168.1.4 myserver1 <none> <none>
Since I exposed the deployment using NodePort, I was under the impression I could access the deployment using <NodeIP>:<NodePort>.
The Node IP of the worker node is 10.62.194.5 and when I try to access http://10.62.194.5:30682 I do not get the nginx landing page.
One part I do not understand is that when I do kubectl describe node myserver1, in the long output I receive I can see:
Addresses:
InternalIP: 10.62.194.5
Hostname: myserver1
Why does it say InternalIP? I can ping this IP.
EDIT:
Output of sudo lsof -i -P -n | grep LISTEN
systemd-r 846 systemd-resolve 13u IPv4 24990 0t0 TCP 127.0.0.53:53 (LISTEN)
sshd 1157 root 3u IPv4 30168 0t0 TCP *:22 (LISTEN)
sshd 1157 root 4u IPv6 30170 0t0 TCP *:22 (LISTEN)
xrdp-sesm 9840 root 7u IPv6 116948 0t0 TCP [::1]:3350 (LISTEN)
xrdp 9862 xrdp 11u IPv6 117849 0t0 TCP *:3389 (LISTEN)
kubelet 51562 root 9u IPv4 560219 0t0 TCP 127.0.0.1:42735 (LISTEN)
kubelet 51562 root 24u IPv4 554677 0t0 TCP 127.0.0.1:10248 (LISTEN)
kubelet 51562 root 35u IPv6 558616 0t0 TCP *:10250 (LISTEN)
kube-prox 52427 root 10u IPv4 563401 0t0 TCP 127.0.0.1:10249 (LISTEN)
kube-prox 52427 root 11u IPv6 564298 0t0 TCP *:10256 (LISTEN)
kube-prox 52427 root 12u IPv6 618851 0t0 TCP *:30682 (LISTEN)
bird 52925 root 7u IPv4 562993 0t0 TCP *:179 (LISTEN)
calico-fe 52927 root 3u IPv6 562998 0t0 TCP *:9099 (LISTEN)
Output of ss -ntlp | grep 30682
LISTEN 0 128 *:30682 *:*

As far as I understand, you are trying to access 10.62.194.5 from a host in a different subnet, for example your terminal. In Azure you typically have a public IP and a private IP for each node, and 10.62.194.5 is the private (internal) one. So, if you are trying to reach the Kubernetes Service from your terminal, you should use the public IP of the host together with the NodePort, and also make sure that port is open in your Azure firewall/network security group.
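For a concrete check, assuming these really are Azure VMs, something like the following sketch (the resource group, VM and NSG names are placeholders, not taken from the question):

# look up the worker's public IP (resource group / VM name are assumptions)
az vm list-ip-addresses -g my-rg -n myserver1 -o table
# allow the NodePort through the network security group (NSG name/priority are assumptions)
az network nsg rule create -g my-rg --nsg-name my-nsg -n allow-nodeport --priority 1001 --direction Inbound --access Allow --protocol Tcp --destination-port-ranges 30682
# then, from your terminal
curl http://<worker-public-ip>:30682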

Related

Not able to access Nginx from an external IP even after k8s nodeport service exposed

I am not able to access the nginx server using http://:30602 and also http://:30602
OS: Ubuntu 22
I also checked if any firewall is blocking it.
Using ufw
admin@tst-server:~$ sudo ufw status verbose
Status: inactive
Using netstat
admin@tst-server:~$ netstat -an | grep 22 | grep -i listen
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp6 0 0 :::22 :::* LISTEN
unix 2 [ ACC ] STREAM LISTENING 354787 /run/containerd/s/9a866c6ea3a4fe1976aaed0884400cd59228d43776774cc3fad2d0b9a7c2ed7b
unix 2 [ ACC ] STREAM LISTENING 21722 /run/systemd/private
admin@tst-server:~$ netstat -an | grep 30602 | grep -i listen
Commands used for nginx deployment
Create Deployment
kubectl create deployment nginx --image=nginx
kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
myapp 2/2 2 2 8d
nginx 1/1 1 1 9m50s
Create Service
kubectl create service nodeport nginx --tcp=80:80
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
nginx NodePort 10.109.112.116 <none> 80:30602/TCP 10m
Test it out
admin@tst-server:~$ hostname
tst-server.com
admin@tst-server:~$ curl tst-server.com:30602
curl: (7) Failed to connect to tst-server.com port 30602 after 10 ms: Connection refused
Got it working by getting the Node IP address for Minikube using the following command
$ kubectl cluster-info
and then
curl http://<node_ip>:30008
When I curl tst-server.com:30602, why does it redirect to tst-server.kanaaritech.com?
To check whether the NodePort is working or not, try it once against the node's IP address with port 30602 instead of the hostname.
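A minimal sketch of that check, run on the server itself (the IP placeholder is whatever kubectl reports as the node's InternalIP):

# find the node's InternalIP
kubectl get nodes -o wide
# test the NodePort against that IP instead of localhost or the hostname
curl http://<node-internal-ip>:30602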

HAProxy creates thousands of connections with itself

I'm not an expert in HAProxy. What I see is that over time HAProxy seems to accumulate (tens of) thousands of TCP sessions, and the source seems to be the same as the server...?
Why is it creating sessions with ports other than 8123 it's bound to?
frontend tcp_front
    bind *:8123
    mode tcp
    default_backend host_sub5

backend host_sub5
    mode tcp
    server node2 0.0.0.0:8123 check
show sess - this is a tiny fraction, but the ports seem to grow sequentially, like it's a netscan
haproxy 772263 haproxy *263u IPv4 626477013 0t0 TCP 127.215.21.22:38423->127.215.21.22:8123 (ESTABLISHED)
haproxy 772263 haproxy *264u IPv4 626477014 0t0 TCP 127.215.21.22:8123->127.215.21.22:38423 (ESTABLISHED)
haproxy 772263 haproxy *265u IPv4 626477016 0t0 TCP 127.215.21.22:38435->127.215.21.22:8123 (ESTABLISHED)
haproxy 772263 haproxy *266u IPv4 626477035 0t0 TCP 127.215.21.22:8123->127.215.21.22:38435 (ESTABLISHED)
haproxy 772263 haproxy *267u IPv4 626477037 0t0 TCP 127.215.21.22:38437->127.215.21.22:8123 (ESTABLISHED)
haproxy 772263 haproxy *268u IPv4 626477041 0t0 TCP 127.215.21.22:8123->127.215.21.22:38437 (ESTABLISHED)
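One likely explanation, offered as an assumption based on the config above rather than a confirmed answer: the backend line server node2 0.0.0.0:8123 check points at 0.0.0.0:8123, which is the local machine and the very port the frontend is bound to. Every forwarded connection and every health check therefore loops back into HAProxy itself, so each session shows up twice (ephemeral port -> 8123 and 8123 -> ephemeral port), and the ephemeral ports climb sequentially as new checks are opened. A sketch of the intended config, with the backend pointing at the real upstream host (10.0.0.5 is a placeholder):

frontend tcp_front
    bind *:8123
    mode tcp
    default_backend host_sub5

backend host_sub5
    mode tcp
    # point at the actual upstream, not at HAProxy's own bind address
    server node2 10.0.0.5:8123 check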

netcat listening pod in kubernetes namespace unable to connect

I am running Kubernetes v1.19.4 with weave-net (image: weaveworks/weave-npc:2.7.0)
There are no network policies active in the default namespace
I want to run a netcat listener on pod1 port 8080, and connect to pod1 port 8080 from pod2
[root@node01 ~]# kubectl run pod1 -i -t --image=ubuntu -- /bin/bash
If you don't see a command prompt, try pressing enter.
root@pod1:/# apt update ; apt install netcat-openbsd -y
........
root@pod1:/# nc -l -p 8080
I verify the port is listening on pod1 by:
[root@node01 ~]# kubectl exec -i -t pod1 -- /bin/bash
root@pod1:/# apt install net-tools -y
...........
root@pod1:/# netstat -tulpen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 0 213960 263/nc
root@pod1:/# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1376
inet 10.32.0.3 netmask 255.240.0.0 broadcast 10.47.255.255
ether a2:b9:3e:bc:6e:25 txqueuelen 0 (Ethernet)
RX packets 8429 bytes 17438639 (17.4 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 4217 bytes 284639 (284.6 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
I install pod2 with netcat on it:
[root@node01 ~]# kubectl run pod2 -i -t --image=ubuntu -- /bin/bash
If you don't see a command prompt, try pressing enter.
root@pod2:/# apt update ; apt install netcat-openbsd -y
I test my netcat listener on pod1 from pod2:
root@pod2:/# nc 10.32.0.3 8080
....times out
So I decided to create a service for port 8080 on pod1:
kubectl expose pod pod1 --port=8080 ; kubectl get svc ; kubectl get netpol
[root@node01 ~]# kubectl expose pod pod1 --port=8080 ; kubectl get svc
service/pod1 exposed
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
apache ClusterIP 10.104.218.123 <none> 80/TCP 20d
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21d
nginx ClusterIP 10.98.221.196 <none> 80/TCP 13d
pod1 ClusterIP 10.105.194.196 <none> 8080/TCP 2s
No resources found in default namespace.
Retry from pod2, now via the service:
ping pod1
PING pod1.default.svc.cluster.local (10.105.194.196) 56(84) bytes of data.
root@pod2:/# nc pod1 8080
....times out
I also tried this with the regular netcat package.
For good measure I try to expose port 8080 on the pod as a NodePort:
[root@node01 ~]# kubectl delete svc pod1 ; kubectl expose pod pod1 --port=8080 --type=NodePort ; kubectl get svc
When I try to access that port from outside Kubernetes I am unable to connect; for good measure I also test the SSH port to verify my base connectivity is OK:
user@DESKTOP-7TIH9:~$ nc -zv 10.10.70.112 30743
nc: connect to 10.10.70.112 port 30743 (tcp) failed: Connection refused
user@DESKTOP-7TIH9:~$ nc -zv 10.10.70.112 22
Connection to 10.10.70.112 22 port [tcp/ssh] succeeded!
Can anybody tell me if I am doing something wrong, have the wrong expectation, or advise me how to proceed?
Thank you in advance.
Trying to solve this, I somehow decided to enable the firewall on the k8s hosts.
This led me to a broken cluster. I decided to re-init the cluster and make sure all the firewall ports were opened, including the ones listed here: https://www.weave.works/docs/net/latest/faq#ports
All is working now!
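For reference, a sketch of the openings that FAQ describes, on Ubuntu with ufw (the kubeadm control-plane ports listed are the usual defaults; adjust to your setup):

# Weave Net control and data plane
sudo ufw allow 6783/tcp
sudo ufw allow 6783:6784/udp
# kubelet and API server (kubeadm defaults)
sudo ufw allow 10250/tcp
sudo ufw allow 6443/tcp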

How do GCP load balancers route traffic to GKE services?

I'm relatively new (< 1 year) to GCP, and I'm still in the process of mapping the various services onto my existing networking mental model.
One knowledge gap I'm struggling to fill is how HTTP requests are load balanced to services running in our GKE clusters.
On a test cluster, I created a service in front of pods that serve HTTP:
apiVersion: v1
kind: Service
metadata:
  name: contour
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: 8080
  - port: 443
    name: https
    protocol: TCP
    targetPort: 8443
  selector:
    app: contour
  type: LoadBalancer
The service is listening on node ports 30472 and 30816:
$ kubectl get svc contour
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
contour LoadBalancer 10.63.241.69 35.x.y.z 80:30472/TCP,443:30816/TCP 41m
A GCP network load balancer is automatically created for me. It has its own public IP at 35.x.y.z, and is listening on ports 80-443:
Curling the load balancer IP works:
$ curl -q -v 35.x.y.z
* TCP_NODELAY set
* Connected to 35.x.y.z (35.x.y.z) port 80 (#0)
> GET / HTTP/1.1
> Host: 35.x.y.z
> User-Agent: curl/7.62.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< date: Mon, 07 Jan 2019 05:33:44 GMT
< server: envoy
< content-length: 0
<
If I ssh into the GKE node, I can see the kube-proxy is listening on the service nodePorts (30472 and 30816) and nothing has a socket listening on ports 80 or 443:
# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:20256 0.0.0.0:* LISTEN 1022/node-problem-d
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 1221/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 1369/kube-proxy
tcp 0 0 0.0.0.0:5355 0.0.0.0:* LISTEN 297/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 330/sshd
tcp6 0 0 :::30816 :::* LISTEN 1369/kube-proxy
tcp6 0 0 :::4194 :::* LISTEN 1221/kubelet
tcp6 0 0 :::30472 :::* LISTEN 1369/kube-proxy
tcp6 0 0 :::10250 :::* LISTEN 1221/kubelet
tcp6 0 0 :::5355 :::* LISTEN 297/systemd-resolve
tcp6 0 0 :::10255 :::* LISTEN 1221/kubelet
tcp6 0 0 :::10256 :::* LISTEN 1369/kube-proxy
Two questions:
Given nothing on the node is listening on ports 80 or 443, is the load balancer directing traffic to ports 30472 and 30816?
If the load balancer is accepting traffic on 80/443 and forwarding to 30472/30816, where can I see that configuration? Clicking around the load balancer screens I can't see any mention of ports 30472 and 30816.
I think I found the answer to my own question - can anyone confirm I'm on the right track?
The network load balancer redirects the traffic to a node in the cluster without modifying the packet - packets for port 80/443 still have port 80/443 when they reach the node.
There's nothing listening on ports 80/443 on the nodes. However, kube-proxy has written iptables rules that match packets destined for the load balancer IP and rewrite them with the appropriate ClusterIP and port:
You can see the iptables config on the node:
$ iptables-save | grep KUBE-SERVICES | grep loadbalancer
-A KUBE-SERVICES -d 35.x.y.z/32 -p tcp -m comment --comment "default/contour:http loadbalancer IP" -m tcp --dport 80 -j KUBE-FW-D53V3CDHSZT2BLQV
-A KUBE-SERVICES -d 35.x.y.z/32 -p tcp -m comment --comment "default/contour:https loadbalancer IP" -m tcp --dport 443 -j KUBE-FW-J3VGAQUVMYYL5VK6
$ iptables-save | grep KUBE-SEP-ZAA234GWNBHH7FD4
:KUBE-SEP-ZAA234GWNBHH7FD4 - [0:0]
-A KUBE-SEP-ZAA234GWNBHH7FD4 -s 10.60.0.30/32 -m comment --comment "default/contour:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZAA234GWNBHH7FD4 -p tcp -m comment --comment "default/contour:http" -m tcp -j DNAT --to-destination 10.60.0.30:8080
$ iptables-save | grep KUBE-SEP-CXQOVJCC5AE7U6UC
:KUBE-SEP-CXQOVJCC5AE7U6UC - [0:0]
-A KUBE-SEP-CXQOVJCC5AE7U6UC -s 10.60.0.30/32 -m comment --comment "default/contour:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-CXQOVJCC5AE7U6UC -p tcp -m comment --comment "default/contour:https" -m tcp -j DNAT --to-destination 10.60.0.30:8443
An interesting implication is that the nodePort is created but doesn't appear to be used. That matches this comment in the kube docs:
Google Compute Engine does not need to allocate a NodePort to make LoadBalancer work
It also explains why GKE creates an automatic firewall rule that allows traffic from 0.0.0.0/0 towards ports 80/443 on the nodes. The load balancer isn't rewriting the packets, so the firewall needs to allow traffic from anywhere to reach iptables on the node, and it's rewritten there.
To understand LoadBalancer services, you first have to grok NodePort services. The way those work is that there is a proxy on every node in your cluster (usually implemented in iptables or IPVS these days for performance, but that's an implementation detail), and when you create a NodePort service it picks an unused port and sets every one of those proxies to forward packets to your Kubernetes pod. A LoadBalancer service builds on top of that, so on GCP/GKE it creates a GCLB forwarding rule mapping the requested port to a rotation of all those node-level proxies. So the GCLB listens on port 80, which proxies to some random port on a random node, which proxies to the internal port on your pod.
The process is a bit more customizable than that, but that's the basic defaults.
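If you want to see that mapping from the GCP side, a sketch with standard gcloud commands (filtering with grep here; substitute your real external IP):

# the forwarding rule created for the Service's external IP
gcloud compute forwarding-rules list | grep 35.x.y.z
# the firewall rules GKE created to let that traffic reach the nodes
gcloud compute firewall-rules list | grep k8s

For a network (L4) load balancer, the forwarding rule covers the 80-443 port range and points at a target pool containing your nodes; the iptables rules shown above then do the per-port rewriting on each node.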

K8s NodePort service is “unreachable by IP” only on 2/4 slaves in the cluster

I created a K8s cluster of 5 VMs (1 master and 4 slaves running Ubuntu 16.04.3 LTS) using kubeadm. I used flannel to set up networking in the cluster. I was able to successfully deploy an application. I, then, exposed it via NodePort service. From here things got complicated for me.
Before I started, I disabled the default firewalld service on master and the nodes.
As I understand from the K8s Services doc, the type NodePort exposes the service on all nodes in the cluster. However, when I created it, the service was exposed only on 2 nodes out of 4 in the cluster. I am guessing that's not the expected behavior (right?)
For troubleshooting, here are some resource specs:
root@vm-vivekse-003:~# kubectl get nodes
NAME STATUS AGE VERSION
vm-deepejai-00b Ready 5m v1.7.3
vm-plashkar-006 Ready 4d v1.7.3
vm-rosnthom-00f Ready 4d v1.7.3
vm-vivekse-003 Ready 4d v1.7.3 //the master
vm-vivekse-004 Ready 16h v1.7.3
root@vm-vivekse-003:~# kubectl get pods -o wide -n playground
NAME READY STATUS RESTARTS AGE IP NODE
kubernetes-bootcamp-2457653786-9qk80 1/1 Running 0 2d 10.244.3.6 vm-rosnthom-00f
springboot-helloworld-2842952983-rw0gc 1/1 Running 0 1d 10.244.3.7 vm-rosnthom-00f
root@vm-vivekse-003:~# kubectl get svc -o wide -n playground
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
sb-hw-svc 10.101.180.19 <nodes> 9000:30847/TCP 5h run=springboot-helloworld
root@vm-vivekse-003:~# kubectl describe svc sb-hw-svc -n playground
Name: sb-hw-svc
Namespace: playground
Labels: <none>
Annotations: <none>
Selector: run=springboot-helloworld
Type: NodePort
IP: 10.101.180.19
Port: <unset> 9000/TCP
NodePort: <unset> 30847/TCP
Endpoints: 10.244.3.7:9000
Session Affinity: None
Events: <none>
root@vm-vivekse-003:~# kubectl get endpoints sb-hw-svc -n playground -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  creationTimestamp: 2017-08-09T06:28:06Z
  name: sb-hw-svc
  namespace: playground
  resourceVersion: "588958"
  selfLink: /api/v1/namespaces/playground/endpoints/sb-hw-svc
  uid: e76d9cc1-7ccb-11e7-bc6a-fa163efaba6b
subsets:
- addresses:
  - ip: 10.244.3.7
    nodeName: vm-rosnthom-00f
    targetRef:
      kind: Pod
      name: springboot-helloworld-2842952983-rw0gc
      namespace: playground
      resourceVersion: "473859"
      uid: 16d9db68-7c1a-11e7-bc6a-fa163efaba6b
  ports:
  - port: 9000
    protocol: TCP
After some tinkering I realized that on those 2 "faulty" nodes, the service was not available even from within the hosts themselves.
Node01 (working):
root@vm-vivekse-004:~# curl 127.0.0.1:30847 //<localhost>:<nodeport>
Hello Docker World!!
root@vm-vivekse-004:~# curl 10.101.180.19:9000 //<cluster-ip>:<port>
Hello Docker World!!
root@vm-vivekse-004:~# curl 10.244.3.7:9000 //<pod-ip>:<port>
Hello Docker World!!
Node02 (working):
root@vm-rosnthom-00f:~# curl 127.0.0.1:30847
Hello Docker World!!
root@vm-rosnthom-00f:~# curl 10.101.180.19:9000
Hello Docker World!!
root@vm-rosnthom-00f:~# curl 10.244.3.7:9000
Hello Docker World!!
Node03 (not working):
root@vm-plashkar-006:~# curl 127.0.0.1:30847
curl: (7) Failed to connect to 127.0.0.1 port 30847: Connection timed out
root@vm-plashkar-006:~# curl 10.101.180.19:9000
curl: (7) Failed to connect to 10.101.180.19 port 9000: Connection timed out
root@vm-plashkar-006:~# curl 10.244.3.7:9000
curl: (7) Failed to connect to 10.244.3.7 port 9000: Connection timed out
Node04 (not working):
root@vm-deepejai-00b:/# curl 127.0.0.1:30847
curl: (7) Failed to connect to 127.0.0.1 port 30847: Connection timed out
root@vm-deepejai-00b:/# curl 10.101.180.19:9000
curl: (7) Failed to connect to 10.101.180.19 port 9000: Connection timed out
root@vm-deepejai-00b:/# curl 10.244.3.7:9000
curl: (7) Failed to connect to 10.244.3.7 port 9000: Connection timed out
Tried netstat and telnet on all 4 slaves. Here's the output:
Node01 (the working host):
root@vm-vivekse-004:~# netstat -tulpn | grep 30847
tcp6 0 0 :::30847 :::* LISTEN 27808/kube-proxy
root@vm-vivekse-004:~# telnet 127.0.0.1 30847
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Node02 (the working host):
root@vm-rosnthom-00f:~# netstat -tulpn | grep 30847
tcp6 0 0 :::30847 :::* LISTEN 11842/kube-proxy
root@vm-rosnthom-00f:~# telnet 127.0.0.1 30847
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Node03 (the not-working host):
root@vm-plashkar-006:~# netstat -tulpn | grep 30847
tcp6 0 0 :::30847 :::* LISTEN 7791/kube-proxy
root@vm-plashkar-006:~# telnet 127.0.0.1 30847
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection timed out
Node04 (the not-working host):
root@vm-deepejai-00b:/# netstat -tulpn | grep 30847
tcp6 0 0 :::30847 :::* LISTEN 689/kube-proxy
root@vm-deepejai-00b:/# telnet 127.0.0.1 30847
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection timed out
Additional info:
From the kubectl get pods output, I can see that the pod is actually deployed on slave vm-rosnthom-00f. I am able to ping this host from all 5 VMs, and curl vm-rosnthom-00f:30847 also works from all the VMs.
I can clearly see that the internal cluster networking is messed up, but I am unsure how to resolve it! iptables -L is identical on all the slaves, and even the local loopback (ifconfig lo) is up and running on all of them. I'm completely clueless as to how to fix it!
Use a service of type NodePort and access the NodePort on the IP address of your master node.
The Service knows on which node a Pod is running and redirects the traffic to one of the pods if you have several instances.
Label your pods and use the corresponding selectors in the service.
If you still run into issues, please post your service and deployment.
To check the connectivity, I would suggest using netcat.
nc -zv ip/service port
If the network is OK, it responds: open
Inside the cluster, access the containers like so:
nc -zv servicename.namespace.svc.cluster.local port
Always consider that you have 3 kinds of ports (see the sketch after this list):
The port on which your software is running inside your container.
The port on which you expose that port to the pod (a pod has one IP address; the ClusterIP address is used to reach a container on a specific port).
The NodePort, which allows you to access the pod's ports from outside the cluster's network.
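A minimal sketch of how those three ports line up, mirroring the sb-hw-svc Service from the question (the values come from the kubectl describe output above; treat this as illustrative rather than the exact manifest that was applied):

apiVersion: v1
kind: Service
metadata:
  name: sb-hw-svc
  namespace: playground
spec:
  type: NodePort
  selector:
    run: springboot-helloworld   # must match the pod's labels
  ports:
  - port: 9000        # Service (ClusterIP) port
    targetPort: 9000  # port the app listens on inside the container
    nodePort: 30847   # port kube-proxy opens on every node (30000-32767)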
Either your firewall blocks some connections between nodes, or your kube-proxy is not working properly. I guess your services only work on the nodes where the pods are running.
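A quick way to check the second possibility, as a sketch (this assumes a kubeadm-style cluster where kube-proxy runs as a DaemonSet labelled k8s-app=kube-proxy):

# list the kube-proxy pods and the nodes they run on
kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide
# inspect the instance on one of the not-working nodes
kubectl -n kube-system logs <kube-proxy-pod-on-vm-plashkar-006>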
If you want to reach the service from any node in the cluster, you would need to define the service type as ClusterIP. Since you defined the service type as NodePort, you can connect from the node where the service is running.
My above answer was not correct: based on the documentation we should be able to connect from any NodeIP:NodePort, but it's not working in my cluster either.
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types
NodePort: Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
On one of my nodes, IP forwarding was not set. After enabling it, I was able to connect to my service using NodeIP:NodePort:
sysctl -w net.ipv4.ip_forward=1
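To verify and persist that setting on every node, a short sketch (the sysctl.d file name is an assumption):

# check the current value
sysctl net.ipv4.ip_forward
# persist it across reboots
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-kubernetes.conf
sudo sysctl --system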