Not able to access Nginx from an external IP even after k8s nodeport service exposed - kubernetes

I am not able to access the nginx server using http://<server-ip>:30602 or http://<hostname>:30602.
OS: Ubuntu 22
I also checked if any firewall is blocking it.
Using ufw
admin@tst-server:~$ sudo ufw status verbose
Status: inactive
Using netstat
admin@tst-server:~$ netstat -an | grep 22 | grep -i listen
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp6 0 0 :::22 :::* LISTEN
unix 2 [ ACC ] STREAM LISTENING 354787 /run/containerd/s/9a866c6ea3a4fe1976aaed0884400cd59228d43776774cc3fad2d0b9a7c2ed7b
unix 2 [ ACC ] STREAM LISTENING 21722 /run/systemd/private
admin@tst-server:~$ netstat -an | grep 30602 | grep -i listen
Commands used for nginx deployment
Create Deployment
kubectl create deployment nginx --image=nginx
kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
myapp 2/2 2 2 8d
nginx 1/1 1 1 9m50s
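(As a rough sketch, kubectl create deployment generates something like the manifest below; the app=nginx label it applies is what the NodePort service created next will select.)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx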
Create Service
kubectl create service nodeport nginx --tcp=80:80
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
nginx NodePort 10.109.112.116 <none> 80:30602/TCP 10m
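(For reference, the imperative command above corresponds roughly to this manifest; kubectl create service nodeport picks the selector app=nginx, and the nodePort is normally auto-assigned from 30000-32767, shown here as 30602 only to match the output above.)
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80          # ClusterIP port
    targetPort: 80    # container port
    nodePort: 30602   # auto-assigned unless set explicitly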
Test it out
admin@tst-server:~$ hostname
tst-server.com
admin@tst-server:~$ curl tst-server.com:30602
curl: (7) Failed to connect to tst-server.com port 30602 after 10 ms: Connection refused

Got it working by getting the node IP address for Minikube using the following command:
$ kubectl cluster-info
and then
curl http://<node_ip>:30008

When I curl tst-server.com:30602, why does it redirect to tst-server.kanaaritech.com?
To check whether the NodePort is working or not, try the node's IP address with port 30602.
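For example (a minimal sketch; the node IP below is a placeholder):
kubectl get nodes -o wide             # the INTERNAL-IP column is the node IP to use
kubectl get svc nginx                 # confirm the NodePort mapping (80:30602/TCP here)
curl http://<node-internal-ip>:30602  # should return the nginx welcome page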

Related

Enable endpoints for kube-controller-manager & kube-scheduler

I am new to the Kubernetes world and I am currently stuck figuring out how to enable endpoints for kube-controller-manager & kube-scheduler. At some point in the future I'll be using the Helm kube-prometheus-stack to scrape those endpoints for metrics. However, for now, what would be the right approach to set up those endpoints?
$ kubectl get ep -n kube-system
NAME ENDPOINTS AGE
kube-controller-manager <none> 105d
kube-scheduler <none> 105d
There is no need to create endpoints for kube-controller-manager and kube-scheduler, because they use hostNetwork and listen on ports 10257 and 10259 respectively.
You can verify this by checking the manifests under "/etc/kubernetes/manifests/" and running netstat -nltp or ss -nltp on the master node:
ss -nltp | grep kube
LISTEN 0 128 127.0.0.1:10257 0.0.0.0:* users:(("kube-controller",pid=50301,fd=7))
LISTEN 0 128 127.0.0.1:10259 0.0.0.0:* users:(("kube-scheduler",pid=50400,fd=7))
So they should be accessible at <master-node-ip>:10257 and <master-node-ip>:10259.
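A quick way to sanity-check this from the master node itself (a sketch; note that in the ss output above both components are bound to 127.0.0.1, so reaching them on the node IP may additionally require changing --bind-address in the static pod manifests):
# Both components serve HTTPS with a self-signed certificate, hence -k.
# /healthz is anonymously accessible by default and should print "ok".
curl -k https://127.0.0.1:10257/healthz   # kube-controller-manager
curl -k https://127.0.0.1:10259/healthz   # kube-scheduler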

how does a tcp connection work with a kubernetes loadbalancer

Hi, I am running 5 replicas of a tcp-client (it can be scaled up) and have exposed 3 services as LoadBalancer to the external network to receive incoming connections. The client listens internally on port 7777, which is mapped to the external ports 17777, 27777 and 37777.
Pods
[root@pwconfig-k8s-master0 tcp_poc]# kubectl get pods -l app=tcp-client
NAME READY STATUS RESTARTS AGE
tcp-client-7dd545dcc9-54bdl 1/1 Running 0 4m47s
tcp-client-7dd545dcc9-628jn 1/1 Running 0 4m47s
tcp-client-7dd545dcc9-7pm44 1/1 Running 0 2m30s
tcp-client-7dd545dcc9-b287n 1/1 Running 0 4m47s
tcp-client-7dd545dcc9-mrmnm 1/1 Running 0 2m30s
Service
[root@pwconfig-k8s-master0 tcp_poc]# kubectl get svc | grep tcp-client
tcp-client ClusterIP y.y.y.y <none> 7777/TCP 4m36s
tcp-client-0 LoadBalancer y.y.y.y x.x.x.x 17777:30859/TCP 2m55s
tcp-client-1 LoadBalancer y.y.y.y x.x.x.x 27777:30089/TCP 2m55s
tcp-client-2 LoadBalancer y.y.y.y x.x.x.x 37777:31031/TCP 2m55s
We have seen the behaviour that once an external client makes a TCP connection, the connection gets pinned to a particular pod and stays alive until the external client closes it. I wanted to know how the routing and the TCP connection work, since the load balancing appears to be per external client TCP connection, not per packet.
So if there are 100 external clients, it will load-balance over the clients, route each TCP connection, and keep it fixed to one pod for the lifetime of that connection.
Thanks for the help in advance.
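For reference, a minimal sketch of what one of the LoadBalancer services described above might look like (name, selector and ports assumed from the listing); kube-proxy applies its DNAT per TCP connection and tracks it in conntrack, which matches the observed behaviour of a connection staying pinned to one pod:
apiVersion: v1
kind: Service
metadata:
  name: tcp-client-0
spec:
  type: LoadBalancer
  selector:
    app: tcp-client
  ports:
  - port: 17777        # external port on the load balancer
    targetPort: 7777   # port the client listens on inside the pod
    protocol: TCP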

netcat listening pod in kubernetes namespace unable to connect

I am running Kubernetes v1.19.4 with weave-net (image: weaveworks/weave-npc:2.7.0).
There are no network policies active in the default namespace
I want to run a netcat listener on pod1 port 8080, and connect to pod1 port 8080 from pod2.
[root@node01 ~]# kubectl run pod1 -i -t --image=ubuntu -- /bin/bash
If you don't see a command prompt, try pressing enter.
root@pod1:/# apt update ; apt install netcat-openbsd -y
........
root@pod1:/# nc -l -p 8080
I verify the port is listening on pod1:
[root@node01 ~]# kubectl exec -i -t pod1 -- /bin/bash
root@pod1:/# apt install net-tools -y
...........
root@pod1:/# netstat -tulpen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 0 213960 263/nc
root@pod1:/# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1376
inet 10.32.0.3 netmask 255.240.0.0 broadcast 10.47.255.255
ether a2:b9:3e:bc:6e:25 txqueuelen 0 (Ethernet)
RX packets 8429 bytes 17438639 (17.4 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 4217 bytes 284639 (284.6 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
I install pod2 with netcat on it:
[root@node01 ~]# kubectl run pod2 -i -t --image=ubuntu -- /bin/bash
If you don't see a command prompt, try pressing enter.
root@pod2:/# apt update ; apt install netcat-openbsd -y
I test my netcat listener on pod1 from pod2:
root@pod2:/# nc 10.32.0.3 8080
....times out
So I decided to create a service for port 8080 on pod1:
kubectl expose pod pod1 --port=8080 ; kubectl get svc ; kubectl get netpol
[root@node01 ~]# kubectl expose pod pod1 --port=8080 ; kubectl get svc
service/pod1 exposed
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
apache ClusterIP 10.104.218.123 <none> 80/TCP 20d
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21d
nginx ClusterIP 10.98.221.196 <none> 80/TCP 13d
pod1 ClusterIP 10.105.194.196 <none> 8080/TCP 2s
No resources found in default namespace.
Retry from pod2, now via the service:
ping pod1
PING pod1.default.svc.cluster.local (10.105.194.196) 56(84) bytes of data.
root@pod2:/# nc pod1 8080
....times out
I also tried this with the regular netcat package.
For good measure I try to expose port 8080 on the pod as a NodePort:
[root@node01 ~]# kubectl delete svc pod1 ; kubectl expose pod pod1 --port=8080 --type=NodePort ; kubectl get svc
When I try to access that port from outside Kubernetes I am unable to connect; for good measure I also test the SSH port to verify that my base connectivity is OK:
user@DESKTOP-7TIH9:~$ nc -zv 10.10.70.112 30743
nc: connect to 10.10.70.112 port 30743 (tcp) failed: Connection refused
user@DESKTOP-7TIH9:~$ nc -zv 10.10.70.112 22
Connection to 10.10.70.112 22 port [tcp/ssh] succeeded!
Can anybody tell me if I am doing something wrong, whether I have the wrong expectation, or advise me how to proceed?
Thank you in advance.
While trying to solve this I somehow decided to enable the firewall on the k8s hosts.
This led me to a broken cluster. I decided to re-init the cluster and make sure all the required firewall ports were opened, including the ones listed here: https://www.weave.works/docs/net/latest/faq#ports
All is working now!
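(As a rough guide, the Weave Net FAQ linked above lists TCP 6783 and UDP 6783-6784 between the nodes; a sketch of opening those plus the usual kubeadm ports, assuming firewalld is the firewall in use:)
firewall-cmd --permanent --add-port=6443/tcp          # kube-apiserver (control-plane node)
firewall-cmd --permanent --add-port=10250/tcp         # kubelet API (all nodes)
firewall-cmd --permanent --add-port=30000-32767/tcp   # NodePort range (all nodes)
firewall-cmd --permanent --add-port=6783/tcp          # Weave Net control
firewall-cmd --permanent --add-port=6783-6784/udp     # Weave Net data
firewall-cmd --reload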

Unable to access nginx pod across nodes using ClusterIP

I have created an nginx deployment and an nginx service (ClusterIP) to access the nginx pod, but I am not able to access the pod through the cluster IP from nodes other than the node where the pod is scheduled.
I also looked at the iptables rules, but I do not see a DNAT entry there.
root@kdm-master-1:~# k get all -A -o wide |grep nginx
default pod/nginx-6db489d4b7-pfkm9 1/1 Running 0 3h16m 10.244.1.3 kdm-worker-1 <none> <none>
default service/nginx ClusterIP 10.102.239.131 <none> 80/TCP 3h20m run=nginx
default deployment.apps/nginx 1/1 1 1 3h32m nginx nginx run=nginx
default replicaset.apps/nginx-6db489d4b7 1 1 1 3h32m nginx nginx pod-template-hash=6db489d4b7,run=nginx
IP table:
root@kdm-master-1:~# iptables -L -t nat|grep nginx
KUBE-MARK-MASQ tcp -- !10.244.0.0/16 10.102.239.131 /* default/nginx:80-80 cluster IP */ tcp dpt:http
KUBE-SVC-OVTWZ4GROBJZO4C5 tcp -- anywhere 10.102.239.131 /* default/nginx:80-80 cluster IP */ tcp dpt:http
# Warning: iptables-legacy tables present, use iptables-legacy to see them
Please advise how I can resolve this.
set net.ipv4.ip_forward=1 in /etc/sysctl.conf
run sysctl --system
This will resolve the issue and one will be able to access the pod from any node.
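A minimal sketch of those two steps, to run on every node (the last command just verifies the flag):
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf   # persist the setting across reboots
sysctl --system                                      # reload all sysctl configuration files
sysctl net.ipv4.ip_forward                           # should now print: net.ipv4.ip_forward = 1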

K8s NodePort service is “unreachable by IP” only on 2/4 slaves in the cluster

I created a K8s cluster of 5 VMs (1 master and 4 slaves running Ubuntu 16.04.3 LTS) using kubeadm. I used flannel to set up networking in the cluster. I was able to successfully deploy an application, and then exposed it via a NodePort service. From here things got complicated for me.
Before I started, I disabled the default firewalld service on master and the nodes.
As I understand from the K8s Services doc, the type NodePort exposes the service on all nodes in the cluster. However, when I created it, the service was exposed only on 2 nodes out of 4 in the cluster. I am guessing that's not the expected behavior (right?)
For troubleshooting, here are some resource specs:
root@vm-vivekse-003:~# kubectl get nodes
NAME STATUS AGE VERSION
vm-deepejai-00b Ready 5m v1.7.3
vm-plashkar-006 Ready 4d v1.7.3
vm-rosnthom-00f Ready 4d v1.7.3
vm-vivekse-003 Ready 4d v1.7.3 //the master
vm-vivekse-004 Ready 16h v1.7.3
root@vm-vivekse-003:~# kubectl get pods -o wide -n playground
NAME READY STATUS RESTARTS AGE IP NODE
kubernetes-bootcamp-2457653786-9qk80 1/1 Running 0 2d 10.244.3.6 vm-rosnthom-00f
springboot-helloworld-2842952983-rw0gc 1/1 Running 0 1d 10.244.3.7 vm-rosnthom-00f
root@vm-vivekse-003:~# kubectl get svc -o wide -n playground
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
sb-hw-svc 10.101.180.19 <nodes> 9000:30847/TCP 5h run=springboot-helloworld
root@vm-vivekse-003:~# kubectl describe svc sb-hw-svc -n playground
Name: sb-hw-svc
Namespace: playground
Labels: <none>
Annotations: <none>
Selector: run=springboot-helloworld
Type: NodePort
IP: 10.101.180.19
Port: <unset> 9000/TCP
NodePort: <unset> 30847/TCP
Endpoints: 10.244.3.7:9000
Session Affinity: None
Events: <none>
root@vm-vivekse-003:~# kubectl get endpoints sb-hw-svc -n playground -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  creationTimestamp: 2017-08-09T06:28:06Z
  name: sb-hw-svc
  namespace: playground
  resourceVersion: "588958"
  selfLink: /api/v1/namespaces/playground/endpoints/sb-hw-svc
  uid: e76d9cc1-7ccb-11e7-bc6a-fa163efaba6b
subsets:
- addresses:
  - ip: 10.244.3.7
    nodeName: vm-rosnthom-00f
    targetRef:
      kind: Pod
      name: springboot-helloworld-2842952983-rw0gc
      namespace: playground
      resourceVersion: "473859"
      uid: 16d9db68-7c1a-11e7-bc6a-fa163efaba6b
  ports:
  - port: 9000
    protocol: TCP
After some tinkering I realized that on those 2 "faulty" nodes, those services were not available from within the hosts themselves.
Node01 (working):
root@vm-vivekse-004:~# curl 127.0.0.1:30847 //<localhost>:<nodeport>
Hello Docker World!!
root@vm-vivekse-004:~# curl 10.101.180.19:9000 //<cluster-ip>:<port>
Hello Docker World!!
root@vm-vivekse-004:~# curl 10.244.3.7:9000 //<pod-ip>:<port>
Hello Docker World!!
Node02 (working):
root@vm-rosnthom-00f:~# curl 127.0.0.1:30847
Hello Docker World!!
root@vm-rosnthom-00f:~# curl 10.101.180.19:9000
Hello Docker World!!
root@vm-rosnthom-00f:~# curl 10.244.3.7:9000
Hello Docker World!!
Node03 (not working):
root@vm-plashkar-006:~# curl 127.0.0.1:30847
curl: (7) Failed to connect to 127.0.0.1 port 30847: Connection timed out
root@vm-plashkar-006:~# curl 10.101.180.19:9000
curl: (7) Failed to connect to 10.101.180.19 port 9000: Connection timed out
root@vm-plashkar-006:~# curl 10.244.3.7:9000
curl: (7) Failed to connect to 10.244.3.7 port 9000: Connection timed out
Node04 (not working):
root@vm-deepejai-00b:/# curl 127.0.0.1:30847
curl: (7) Failed to connect to 127.0.0.1 port 30847: Connection timed out
root@vm-deepejai-00b:/# curl 10.101.180.19:9000
curl: (7) Failed to connect to 10.101.180.19 port 9000: Connection timed out
root@vm-deepejai-00b:/# curl 10.244.3.7:9000
curl: (7) Failed to connect to 10.244.3.7 port 9000: Connection timed out
Tried netstat and telnet on all 4 slaves. Here's the output:
Node01 (the working host):
root@vm-vivekse-004:~# netstat -tulpn | grep 30847
tcp6 0 0 :::30847 :::* LISTEN 27808/kube-proxy
root@vm-vivekse-004:~# telnet 127.0.0.1 30847
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Node02 (the working host):
root@vm-rosnthom-00f:~# netstat -tulpn | grep 30847
tcp6 0 0 :::30847 :::* LISTEN 11842/kube-proxy
root@vm-rosnthom-00f:~# telnet 127.0.0.1 30847
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Node03 (the not-working host):
root@vm-plashkar-006:~# netstat -tulpn | grep 30847
tcp6 0 0 :::30847 :::* LISTEN 7791/kube-proxy
root@vm-plashkar-006:~# telnet 127.0.0.1 30847
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection timed out
Node04 (the not-working host):
root@vm-deepejai-00b:/# netstat -tulpn | grep 30847
tcp6 0 0 :::30847 :::* LISTEN 689/kube-proxy
root@vm-deepejai-00b:/# telnet 127.0.0.1 30847
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection timed out
Addition info:
From the kubectl get pods output, I can see that the pod is actually deployed on slave vm-rosnthom-00f. I am able to ping this host from all the 5 VMs and curl vm-rosnthom-00f:30847 also works from all the VMs.
I can clearly see that the internal cluster networking is messed up, but I am unsure how to resolve it! The iptables -L output is identical on all the slaves, and even the local loopback (ifconfig lo) is up and running on all of them. I'm completely clueless as to how to fix it!
Use a service of type NodePort and access the NodePort via the IP address of your master node.
The Service knows on which node a Pod is running and redirects the traffic to one of the pods if you have several instances.
Label your pods and use the corresponding selectors in the service.
If you still run into issues, please post your service and deployment.
To check the connectivity I would suggest using netcat:
nc -zv ip/service port
If the network is OK it responds: open
Inside the cluster, access the containers like so:
nc -zv servicename.namespace.svc.cluster.local port
Always keep in mind that there are 3 kinds of ports (see the sketch below):
The port on which your software is running inside your container.
The port on which you expose that port on the pod (a pod has one IP address, and the ClusterIP address is used to reach a container on a specific port).
The NodePort, which allows you to access the pod's ports from outside the cluster's network.
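(A hedged sketch tying those three ports together; the name, selector and numbers below are made up for illustration:)
apiVersion: v1
kind: Service
metadata:
  name: example-svc        # hypothetical name
spec:
  type: NodePort
  selector:
    app: example           # must match the pod labels
  ports:
  - port: 9000             # ClusterIP port, reachable inside the cluster
    targetPort: 8080       # containerPort the software listens on inside the pod
    nodePort: 30847        # port opened on every node's IP, reachable from outside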
Either your firewall blocks some connections between nodes, or kube-proxy is not working properly. I guess your services only work on the nodes where pods are running.
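(One way to narrow that down, assuming a kubeadm-style cluster where kube-proxy runs as a DaemonSet labelled k8s-app=kube-proxy; adjust the label if your cluster differs:)
kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide   # one pod per node, all should be Running
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=50     # look for iptables/conntrack errors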
If you want to reach the service from any node in the cluster you need to define the service type as ClusterIP. Since you defined the service type as NodePort, you can connect from the node where the service is running.
My answer above was not correct: based on the documentation we should be able to connect from any NodeIP:NodePort, but it is not working in my cluster either.
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types
NodePort: Exposes the service on each Node's IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You'll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
On one of my nodes IP forwarding was not set. I was able to connect to my service using NodeIP:NodePort after running:
sysctl -w net.ipv4.ip_forward=1