Unable to curl pod IP using containerPort - kubernetes

Sorry if it's a naive question. Please correct me if my understanding is wrong.
I created the pod using this command:
kubectl run nginx --image=nginx --port=8888
My understanding of this command is that the nginx (application) container will be exposed/available at port 8888.
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 10m 10.244.1.2 node01 <none> <none>
curl -v 10.244.1.2:8888 ===> I am wondering why this failed?
* Trying 10.244.1.2:8888...
* TCP_NODELAY set
* connect to 10.244.1.2 port 8888 failed: Connection refused
* Failed to connect to 10.244.1.2 port 8888: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 10.244.1.2 port 8888: Connection refused
curl -v 10.244.1.2 ===> to my surprise, this returned a 200 success response
* Trying 10.244.1.2:80...
* TCP_NODELAY set
* Connected to 10.244.1.2 (10.244.1.2) port 80 (#0)
> GET / HTTP/1.1
> Host: 10.244.1.2
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
If the application is still listening on the default port 80, I am wondering about the significance of the container port 8888?
OK, so maybe it is used to expose the pod to the outside world.
Let's test that. I went ahead and created a Service for the pod:
kubectl expose pod nginx --port=80 --target-port=8888
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx ClusterIP 10.96.214.161 <none> 80/TCP 13m
$ curl -v 10.96.214.161 ==> here default port (80) didn't work
* Trying 10.96.214.161:80...
* TCP_NODELAY set
* connect to 10.96.214.161 port 80 failed: Connection refused
* Failed to connect to 10.96.214.161 port 80: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 10.96.214.161 port 80: Connection refused
$ curl -v 10.96.214.161:8888 ==> target port didn't work either
* Trying 10.96.214.161:8888...
* TCP_NODELAY set
....waiting forever
Which port do I need to use to make it work? Am I missing anything?

By default, the nginx server listens on port 80. You can see this in their Docker image reference.
With kubectl run nginx --image=nginx --port=8888, all you have done is declare a container port (8888) in the pod spec. That declaration does not change what the application listens on, so the nginx server is still listening on port 80.
So, use target port 80; that is why anything other than port 80 was not working. Change --target-port=8888 to --target-port=80.
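For example, a sketch of the corrected commands (the ClusterIP changes when the Service is recreated, so look it up again; <new-cluster-ip> is a placeholder):
$ kubectl delete svc nginx
$ kubectl expose pod nginx --port=80 --target-port=80
$ kubectl get svc nginx          # note the new ClusterIP
$ curl <new-cluster-ip>          # should now return the nginx welcome page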
Or, if you want to change the server port, you need to use a ConfigMap along with the pod to pass a custom config to the server.
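If you really want nginx itself to serve on 8888, a minimal sketch of that ConfigMap approach (the ConfigMap name and the server block below are illustrative, not taken from the question):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  default.conf: |
    server {
      listen 8888;                      # make nginx listen on 8888 instead of 80
      location / {
        root  /usr/share/nginx/html;
        index index.html;
      }
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 8888
    volumeMounts:
    - name: conf
      mountPath: /etc/nginx/conf.d      # replaces the default server block that listens on 80
  volumes:
  - name: conf
    configMap:
      name: nginx-conf
With that in place, curl <pod-ip>:8888 should answer and port 80 should refuse connections.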

Related

client applications connection to kubernetes serviced pods are timing out if no container is listening

I'm facing an issue where my own program gets stuck connecting to a serviced pod in my Kubernetes cluster.
Let me explain: take a curl command trying to connect to one of the containers in a serviced pod from outside the cluster.
curl -X GET http://192.168.1.105:31003/ready
{ "ready": true }
No error, the service is doing fine :)
When my deployment is deleted, the curl command reports a network error, as expected:
curl -v -X GET http://192.168.1.105:31003/ready
curl: (7) Failed connect to 192.168.1.105:31003; Connection refused
Now, if I replace the webserver container in the pod with a sleep 3600 container and start the deployment, the curl command times out:
curl -v -X GET http://192.168.1.105:31003/ready
* About to connect() to 192.168.1.105 port 31003 (#0)
* Trying 192.168.1.105...
* Connection timed out
* Failed connect to 192.168.1.105:31003; Connection timed out
* Closing connection 0
curl: (7) Failed connect to 192.168.1.105:31003; Connection timed out
I don't understand why the curl client doesn't get an error when it tries to connect to a container running sleep with no port opened!
My pod has no liveness or readiness probe set, so all containers are declared as 'running'.
kubectl get pod
NAME READY STATUS RESTARTS AGE
alis-green-core-2hlgx 3/3 Running 0 104s
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service-alis-green-core NodePort 172.30.28.252 <none> 11003:31003/TCP,11903:31229/TCP,11904:32364/TCP,11007:31183/TCP,14281:31419/TCP 2m11s
kubectl get endpoints
NAME ENDPOINTS AGE
service-alis-green-core 10.129.0.44:14281,10.129.0.44:11903,10.129.0.44:11007 + 2 more... 2m52s
I guess the issue is related to some kube-proxy configuration I may have missed.
Thanks

Kubernetes Access Application via localhost with port-forward

I would like to access my application via localhost with the kubectl port-forward command. But when I run kubectl port-forward road-dashboard-dev-5cdc465475-jwwgz 8082:8080 I receive the error below:
Forwarding from 127.0.0.1:8082 -> 8080
Forwarding from [::1]:8082 -> 8080
Handling connection for 8082
Handling connection for 8082
E0124 14:15:27.173395 4376 portforward.go:400] an error occurred forwarding 8082 -> 8080: error forwarding port 8080 to pod 09a76f6936b313e438bbf5a84bd886b3b3db8f499b5081b66cddc390021556d5, uid : exit status 1: 2020/01/24 11:15:27 socat[9064] E connect(6, AF=2 127.0.0.1:8080, 16): Connection refused
I also tried to connect to the pod in the cluster via exec -it, but it did not work either. What might be the point I am missing?
node@road-dashboard-dev-5cdc465475-jwwgz:/usr/src/app$ curl -v localhost:8080
* Rebuilt URL to: localhost:8080/
* Trying ::1...
* TCP_NODELAY set
* connect to ::1 port 8080 failed: Connection refused
* Trying 127.0.0.1...
* TCP_NODELAY set
* connect to 127.0.0.1 port 8080 failed: Connection refused
* Failed to connect to localhost port 8080: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 8080: Connection refused
The kubectl get all output is below. I am sure that the container port value is set to 8080.
NAME READY STATUS RESTARTS AGE
pod/road-dashboard-dev-5cdc465475-jwwgz 1/1 Running 0 34m
pod/road-dashboard-dev-5cdc465475-rdk7g 1/1 Running 0 34m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/road-dashboard-dev NodePort 10.254.61.225 <none> 80:41599/TCP 18h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/road-dashboard-dev 2/2 2 2 18h
NAME DESIRED CURRENT READY AGE
replicaset.apps/road-dashboard-dev-5cdc465475 2 2 2 34m
Name: road-dashboard-dev-5cdc465475-jwwgz
Namespace: dev
Priority: 0
PriorityClassName: <none>
Node: c123
Containers:
road-dashboard:
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 24 Jan 2020 13:42:39 +0300
Ready: True
Restart Count: 0
Environment: <none>
To debug your issue, you should let the port-forward command run in the foreground, curl from a second terminal, and see what output you get on the port-forward prompt.
$ kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx 1/1 Running 2 112m 10.244.3.43 k8s-node-3 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11d <none>
service/nginx NodePort 10.96.130.207 <none> 80:31316/TCP 20m run=nginx
Example :
$ kubectl port-forward nginx 31000:80
Forwarding from 127.0.0.1:31000 -> 80
Forwarding from [::1]:31000 -> 80
From a second terminal window, curl the port-forward you have set up:
$ curl localhost:31000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
nginx.org.<br/>
Commercial support is available at
nginx.com.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
You should see on the first terminal that the port-forward prompt shows it is handling a connection, like below (note the new line Handling connection for 31000):
$ kubectl port-forward nginx 31000:80
Forwarding from 127.0.0.1:31000 -> 80
Forwarding from [::1]:31000 -> 80
Handling connection for 31000
So if I have the wrong port forwarding, as below (note I have used port 8080 for the nginx container, which exposes port 80):
$ kubectl port-forward nginx 31000:8080
Forwarding from 127.0.0.1:31000 -> 8080
Forwarding from [::1]:31000 -> 8080
The curl will produce a clear error on the port-forward prompt, indicating that the connection was refused by the container when forwarding to port 8080, as it is not the correct port, and we get an empty reply back.
$ curl -v localhost:31000
* Rebuilt URL to: localhost:31000/
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 31000 (#0)
> GET / HTTP/1.1
> Host: localhost:31000
> User-Agent: curl/7.47.0
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact
curl: (52) Empty reply from server
$ kubectl port-forward nginx 31000:8080
Forwarding from 127.0.0.1:31000 -> 8080
Forwarding from [::1]:31000 -> 8080
Handling connection for 31000
E0124 11:35:53.390711 10791 portforward.go:400] an error occurred forwarding 31000 -> 8080: error forwarding port 8080 to pod 88e4de4aba522b0beff95c3b632eca654a5c34b0216320a29247bb8574ef0f6b, uid : exit status 1: 2020/01/24 11:35:57 socat[15334] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
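As a side note, you can read the declared container port straight off the pod spec instead of guessing, e.g. for the pod from the question (the 8080 below matches the Port: 8080/TCP line in its describe output; the jsonpath expression is a sketch):
$ kubectl get pod road-dashboard-dev-5cdc465475-jwwgz -o jsonpath='{.spec.containers[*].ports[*].containerPort}'
8080
Keep in mind, though, that a declared containerPort does not guarantee the application is actually listening on it, which is exactly what the Connection refused in the question suggests.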

Kubernetes can telnet into POD but can't curl web content

In my Kubernetes environment I have the following two pods running:
NAME READY STATUS RESTARTS AGE IP NODE
httpd-6cc5cff4f6-5j2p2 1/1 Running 0 1h 172.16.44.12 node01
tomcat-68ccbb7d9d-c2n5m 1/1 Running 0 45m 172.16.44.13 node02
One is a Tomcat instance and the other one is an Apache instance.
From node01 and node02 I can curl the httpd, which is using port 80. But if I curl the Tomcat server running on node02 from node01, it fails. I get the output below.
[root@node1 ~]# curl -v 172.16.44.13:8080
* About to connect() to 172.16.44.13 port 8080 (#0)
* Trying 172.16.44.13...
* Connected to 172.16.44.13 (172.16.44.13) port 8080 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 172.16.44.13:8080
> Accept: */*
>
^C
[root@node1 ~]# wget -v 172.16.44.13:8080
--2019-01-16 12:00:21-- http://172.16.44.13:8080/
Connecting to 172.16.44.13:8080... connected.
HTTP request sent, awaiting response...
But I am able to telnet to port 8080 on 172.16.44.13 from node01:
[root@node1 ~]# telnet 172.16.44.13 8080
Trying 172.16.44.13...
Connected to 172.16.44.13.
Escape character is '^]'.
^]
telnet>
Any reason for this behavior? Why am I able to telnet but unable to get the web content? I have also tried different ports, but curl works only for port 80.
I was able to get this fixed by disabling SELinux on my nodes.
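For anyone hitting the same thing, roughly what that looks like on each node (setenforce only affects the running system; the config edit persists it across reboots; paths are the standard RHEL/CentOS ones):
[root@node1 ~]# getenforce
Enforcing
[root@node1 ~]# setenforce 0                                              # permissive for the current boot
[root@node1 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config   # persist the change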

Can't Connect to Kubernetes Service from Inside Service Pod?

I created a one-replica ZooKeeper + Kafka cluster with the Kafka chart from the official incubator repo:
helm install --name mykafka -f kafka.yaml incubator/kafka
This gives me two pods:
kubectl get pods
NAME READY STATUS
mykafka-kafka-0 1/1 Running
mykafka-zookeeper-0 1/1 Running
And four services (in addition to the default kubernetes service)
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP
mykafka-kafka ClusterIP 10.108.143.59 <none> 9092/TCP
mykafka-kafka-headless ClusterIP None <none> 9092/TCP
mykafka-zookeeper ClusterIP 10.109.43.48 <none> 2181/TCP
mykafka-zookeeper-headless ClusterIP None <none> 2888/TCP,3888/TCP
If I shell into the zookeeper pod:
> kubectl exec -it mykafka-zookeeper-0 -- /bin/bash
I use the curl tool to test TCP connectivity. I expect a communications error as the server isn't using HTTP, but if curl can't even connect and I have to ctrl-C out, then the TCP connection isn't working.
I can access the local pod through curl localhost:2181:
root@mykafka-zookeeper-0:/# curl localhost:2181
curl: (52) Empty reply from server
I can access the other pod through curl mykafka-kafka:9092:
root@mykafka-zookeeper-0:/# curl mykafka-kafka:9092
curl: (56) Recv failure: Connection reset by peer
But I can't access mykafka-zookeeper:2181. That name resolves to the cluster IP, but the attempt to TCP connect hangs until I ctrl-C:
root@mykafka-zookeeper-0:/# curl -v mykafka-zookeeper:2181
* Rebuilt URL to: mykafka-zookeeper:2181/
* Trying 10.109.43.48...
^C
Similarly, I can shell into the kafka pod:
> kubectl exec -it mykafka-kafka-0 -- /bin/bash
Connecting to the Zookeeper pod by the service name works fine:
root@mykafka-kafka-0:/# curl mykafka-zookeeper:2181
curl: (52) Empty reply from server
Connecting to localhost kafka works fine:
root@mykafka-kafka-0:/# curl localhost:9092
curl: (56) Recv failure: Connection reset by peer
But connecting to the Kafka pod by the service name doesn't work and I must ctrl-C the curl attempt:
curl -v mykafka-kafka:9092
* Rebuilt URL to: mykafka-kafka:9092/
* Hostname was NOT found in DNS cache
* Trying 10.108.143.59...
^C
Can anyone explain why I can only connect to a Kubernetes service from outside the service and not from within the service?
I believe what you're experiencing can be resolved by looking at how your kubelet is set up to run. There is a setting you can toggle when starting up the kubelet called --hairpin-mode. By default this setting is promiscuous-bridge, in which case a pod can't connect to its own service, but you can change it to hairpin-veth, which allows a pod to connect to its own service.
There are a few issues on the topic, but this seems to be referenced the most:
https://github.com/kubernetes/kubernetes/issues/45790
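If you want to check what your own nodes are doing before changing anything, a rough sketch (the bridge name cbr0 and the kubelet drop-in path are assumptions; adjust them to your setup):
$ ps aux | grep kubelet | grep -o 'hairpin-mode=[^ ]*'     # mode the kubelet was started with, if set explicitly
$ cat /sys/devices/virtual/net/cbr0/brif/*/hairpin_mode    # 1 = hairpin enabled on that veth, 0 = disabled
# if needed, add --hairpin-mode=hairpin-veth to the kubelet flags
# (e.g. in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf) and restart it:
$ systemctl daemon-reload && systemctl restart kubelet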

K8s NodePort service is “unreachable by IP” only on 2/4 slaves in the cluster

I created a K8s cluster of 5 VMs (1 master and 4 slaves running Ubuntu 16.04.3 LTS) using kubeadm. I used flannel to set up networking in the cluster. I was able to successfully deploy an application. I, then, exposed it via NodePort service. From here things got complicated for me.
Before I started, I disabled the default firewalld service on master and the nodes.
As I understand from the K8s Services doc, the type NodePort exposes the service on all nodes in the cluster. However, when I created it, the service was exposed only on 2 nodes out of 4 in the cluster. I am guessing that's not the expected behavior (right?)
For troubleshooting, here are some resource specs:
root@vm-vivekse-003:~# kubectl get nodes
NAME STATUS AGE VERSION
vm-deepejai-00b Ready 5m v1.7.3
vm-plashkar-006 Ready 4d v1.7.3
vm-rosnthom-00f Ready 4d v1.7.3
vm-vivekse-003 Ready 4d v1.7.3 //the master
vm-vivekse-004 Ready 16h v1.7.3
root@vm-vivekse-003:~# kubectl get pods -o wide -n playground
NAME READY STATUS RESTARTS AGE IP NODE
kubernetes-bootcamp-2457653786-9qk80 1/1 Running 0 2d 10.244.3.6 vm-rosnthom-00f
springboot-helloworld-2842952983-rw0gc 1/1 Running 0 1d 10.244.3.7 vm-rosnthom-00f
root@vm-vivekse-003:~# kubectl get svc -o wide -n playground
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
sb-hw-svc 10.101.180.19 <nodes> 9000:30847/TCP 5h run=springboot-helloworld
root@vm-vivekse-003:~# kubectl describe svc sb-hw-svc -n playground
Name: sb-hw-svc
Namespace: playground
Labels: <none>
Annotations: <none>
Selector: run=springboot-helloworld
Type: NodePort
IP: 10.101.180.19
Port: <unset> 9000/TCP
NodePort: <unset> 30847/TCP
Endpoints: 10.244.3.7:9000
Session Affinity: None
Events: <none>
root@vm-vivekse-003:~# kubectl get endpoints sb-hw-svc -n playground -o yaml
apiVersion: v1
kind: Endpoints
metadata:
creationTimestamp: 2017-08-09T06:28:06Z
name: sb-hw-svc
namespace: playground
resourceVersion: "588958"
selfLink: /api/v1/namespaces/playground/endpoints/sb-hw-svc
uid: e76d9cc1-7ccb-11e7-bc6a-fa163efaba6b
subsets:
- addresses:
- ip: 10.244.3.7
nodeName: vm-rosnthom-00f
targetRef:
kind: Pod
name: springboot-helloworld-2842952983-rw0gc
namespace: playground
resourceVersion: "473859"
uid: 16d9db68-7c1a-11e7-bc6a-fa163efaba6b
ports:
- port: 9000
protocol: TCP
After some tinkering I realized that on those 2 "faulty" nodes, those services were not available from within those hosts themselves.
Node01 (working):
root@vm-vivekse-004:~# curl 127.0.0.1:30847 //<localhost>:<nodeport>
Hello Docker World!!
root@vm-vivekse-004:~# curl 10.101.180.19:9000 //<cluster-ip>:<port>
Hello Docker World!!
root@vm-vivekse-004:~# curl 10.244.3.7:9000 //<pod-ip>:<port>
Hello Docker World!!
Node02 (working):
root@vm-rosnthom-00f:~# curl 127.0.0.1:30847
Hello Docker World!!
root@vm-rosnthom-00f:~# curl 10.101.180.19:9000
Hello Docker World!!
root@vm-rosnthom-00f:~# curl 10.244.3.7:9000
Hello Docker World!!
Node03 (not working):
root@vm-plashkar-006:~# curl 127.0.0.1:30847
curl: (7) Failed to connect to 127.0.0.1 port 30847: Connection timed out
root@vm-plashkar-006:~# curl 10.101.180.19:9000
curl: (7) Failed to connect to 10.101.180.19 port 9000: Connection timed out
root@vm-plashkar-006:~# curl 10.244.3.7:9000
curl: (7) Failed to connect to 10.244.3.7 port 9000: Connection timed out
Node04 (not working):
root@vm-deepejai-00b:/# curl 127.0.0.1:30847
curl: (7) Failed to connect to 127.0.0.1 port 30847: Connection timed out
root@vm-deepejai-00b:/# curl 10.101.180.19:9000
curl: (7) Failed to connect to 10.101.180.19 port 9000: Connection timed out
root@vm-deepejai-00b:/# curl 10.244.3.7:9000
curl: (7) Failed to connect to 10.244.3.7 port 9000: Connection timed out
Tried netstat and telnet on all 4 slaves. Here's the output:
Node01 (the working host):
root@vm-vivekse-004:~# netstat -tulpn | grep 30847
tcp6 0 0 :::30847 :::* LISTEN 27808/kube-proxy
root@vm-vivekse-004:~# telnet 127.0.0.1 30847
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Node02 (the working host):
root@vm-rosnthom-00f:~# netstat -tulpn | grep 30847
tcp6 0 0 :::30847 :::* LISTEN 11842/kube-proxy
root@vm-rosnthom-00f:~# telnet 127.0.0.1 30847
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Node03 (the not-working host):
root@vm-plashkar-006:~# netstat -tulpn | grep 30847
tcp6 0 0 :::30847 :::* LISTEN 7791/kube-proxy
root@vm-plashkar-006:~# telnet 127.0.0.1 30847
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection timed out
Node04 (the not-working host):
root@vm-deepejai-00b:/# netstat -tulpn | grep 30847
tcp6 0 0 :::30847 :::* LISTEN 689/kube-proxy
root@vm-deepejai-00b:/# telnet 127.0.0.1 30847
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection timed out
Additional info:
From the kubectl get pods output, I can see that the pod is actually deployed on slave vm-rosnthom-00f. I am able to ping this host from all the 5 VMs and curl vm-rosnthom-00f:30847 also works from all the VMs.
I can clearly see that the internal cluster networking is messed up, but I am unsure how to resolve it! iptables -L for all the slaves are identical, and even the Local Loopback (ifconfig lo) is up and running for all the slaves. I'm completely clueless as to how to fix it!
Use a service of type NodePort and access the NodePort on the IP address of your master node.
The Service knows on which node a pod is running and redirects the traffic to one of the pods if you have several instances.
Label your pods and use the corresponding selectors in the service.
If you still run into issues, please post your service and deployment.
To check the connectivity I would suggest using netcat:
nc -zv ip/service port
If the network is OK, it responds with: open.
Inside the cluster, access the containers like so:
nc -zv servicename.namespace.svc.cluster.local port
Always consider that you have 3 kinds of ports (see the Service sketch after this list):
The port on which your software is listening inside your container (the containerPort).
The port on which the Service exposes that container port (a Service has one IP address, the ClusterIP, which is reachable inside the cluster on that specific port).
The NodePort, which allows you to reach the pods from outside the cluster's network via a port opened on each node's IP address.
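Purely as an illustration (the names and numbers below are made up, not taken from the question), those three ports line up in a NodePort Service like this:
apiVersion: v1
kind: Service
metadata:
  name: example-svc
spec:
  type: NodePort
  selector:
    run: example-app        # must match the labels on your pods
  ports:
  - port: 80                # Service port: reachable on the ClusterIP inside the cluster
    targetPort: 8080        # containerPort: where the software inside the container listens
    nodePort: 30080         # opened on every node's IP for access from outside the cluster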
Either your firewall blocks some connections between nodes, or your kube-proxy is not working properly. I guess your services work only on the nodes where the pods are running.
If you want to reach the service from any node in the cluster, you would need to define the service type as ClusterIP. Since you defined the service type as NodePort, you can connect from the node where the service's pod is running.
My above answer was not correct; based on the documentation we should be able to connect from any NodeIP:NodePort, but it's not working in my cluster either.
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types
NodePort: Exposes the service on each Node's IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You'll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
One of my nodes did not have IP forwarding set. After enabling it, I was able to connect to my service using NodeIP:NodePort:
sysctl -w net.ipv4.ip_forward=1
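To find out which slaves are affected and to make the setting survive a reboot, something along these lines on each node, run as root (the sysctl.d file name is just an example):
$ sysctl net.ipv4.ip_forward                 # 0 means traffic will not be forwarded into the pod network
net.ipv4.ip_forward = 0
$ sysctl -w net.ipv4.ip_forward=1            # enable for the running system
$ echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-ipforward.conf   # persist across reboots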