Connection Refused between Kubernetes pods in the same cluster - kubernetes

I am new to Kubernetes and I'm working on deploying an application within a new Kubernetes cluster.
Currently, the service has multiple pods that need to communicate with each other. I'm looking for a general approach to debugging the issue, rather than getting into the specifics of the service, as the question would become much too specific.
The pods within the cluster are throwing an error:
err="Get \"http://testpod.mynamespace.svc.cluster.local:8080/": dial tcp 10.10.80.100:8080: connect: connection refused"
Both pods are in the same cluster.
What are the best steps to take to debug this?
I have tried running:
kubectl exec -it testpod --namespace mynamespace -- cat /etc/resolv.conf
And this returns:
search mynamespace.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
Which I found here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/

First of all, the following pattern:
my-svc.my-namespace.svc.cluster-domain.example
is applicable only to FQDNs of Services, not Pods, which have the following form:
pod-ip-address.my-namespace.pod.cluster-domain.example
e.g.:
172-17-0-3.default.pod.cluster.local
So in fact you're querying cluster dns about FQDN of the Service named testpod and not about FQDN of the Pod. Judging by the fact that it's being resolved successfully, such Service already exists in your cluster but most probably is misconfigured. The fact that you're getting the error message connection refused can mean the following:
your Service FQDN testpod.mynamespace.svc.cluster.local has been successfully resolved
(otherwise you would receive something like curl: (6) Could not resolve host: testpod.default.svc.cluster.local)
you've reached successfully your testpod Service
(otherwise, i.e. if the Service existed but nothing behind it was listening on the 8080 port you're trying to connect to, you would receive a timeout, e.g. curl: (7) Failed to connect to testpod.default.svc.cluster.local port 8080: Connection timed out)
you've reached the Pod exposed by the testpod Service (you've been successfully redirected to it by the testpod Service)
but once you reached the Pod, you're trying to connect to an incorrect port, and that's why the connection is being refused by the server.
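If you want to reproduce this check from inside the cluster, a quick way (a sketch, assuming the curlimages/curl image is acceptable in your environment) is to run a throwaway curl Pod against the Service FQDN from the error message:

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -n mynamespace -- \
  curl -v http://testpod.mynamespace.svc.cluster.local:8080/

The verbose output makes it easy to distinguish a DNS failure from a refused or timed-out connection.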
My best guess is that your Pod in fact listens on a different port, like 80, but you exposed it via the ClusterIP Service by specifying only the --port value, e.g. by:
kubectl expose pod testpod --port=8080
In such a case both --port (the port of the Service) and --target-port (the port of the Pod) will have the same value. In other words, you've created a Service like the one below:
apiVersion: v1
kind: Service
metadata:
  name: testpod
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
And you probably should've exposed it either this way:
kubectl expose pod testpod --port=8080 --target-port=80
or with the following yaml manifest:
apiVersion: v1
kind: Service
metadata:
  name: testpod
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
Of course your targetPort may be different from 80, but connection refused in such a case can mean only one thing: the target HTTP server (running in a Pod) refuses the connection on port 8080 (most probably because it isn't listening on it). You didn't specify what image you're using, whether it's a standard nginx webserver or something based on your custom image, but if it's nginx and wasn't configured differently, it listens on port 80.
For further debugging, you can attach to your Pod:
kubectl exec -it testpod --namespace mynamespace -- /bin/sh
and if the netstat command is not present (the most likely scenario) run:
apt update && apt install net-tools
and then check with netstat -ntlp which port your container listens on.
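You can also compare that with what the Service itself targets, without attaching to the Pod (a sketch using standard kubectl commands; the Service and namespace names are taken from the question):

# Shows port/targetPort and the endpoints the Service currently selects
kubectl describe service testpod -n mynamespace
kubectl get endpoints testpod -n mynamespace

If the targetPort doesn't match the port reported by netstat inside the container, that mismatch is the source of the connection refused error.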
I hope this helps you solve your issue. In case of any doubts, don't hesitate to ask.

Related

How to use an ExternalName service to access an internal service that is exposed with ingress

I am trying out a possible Kubernetes scenario in a local minikube cluster: accessing an internal service that is exposed with an ingress in one cluster from another cluster, using an ExternalName service. I understand that with an ingress the service will already be accessible within the cluster. As I am trying this out locally using minikube, I am unable to run multiple clusters simultaneously; I just want to verify whether it is possible to access an ingress-exposed service using an ExternalName service.
I started the minikube tunnel using minikube tunnel.
I can access the service using http://k8s-yaml-hello.info.
But when I try curl k8s-yaml-hello-internal within a running pod, the error that I get is curl: (7) Failed to connect to k8s-yaml-hello-internal port 80 after 1161 ms: Connection refused
Can anyone point out the issue here? Thanks in advance.
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-yaml-hello
spec:
  selector:
    app: k8s-yaml-hello
  ports:
    - port: 3000
      targetPort: 3000
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-yaml-hello-ingress
  labels:
    name: k8s-yaml-hello-ingress
spec:
  rules:
    - host: k8s-yaml-hello.info
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: k8s-yaml-hello
                port:
                  number: 3000
externalName.yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-yaml-hello-internal
spec:
  ports:
    - name: ''
      appProtocol: http
      protocol: TCP
      port: 3000
  type: ExternalName
  externalName: k8s-yaml-hello.info
etc/hosts
127.0.0.1 k8s-yaml-hello.info
As you are getting the error curl: (7) Failed to connect:
This error message means that no web server is responding on the specified (or implied) IP and port.
Check /etc/hosts (e.g. with nano /etc/hosts) to verify that the domain points to the correct IP; if it doesn't, provide the correct entry.
Refer to this SO for more information.
In ingress.yaml use port 80, and in service.yaml the port should also be 80. The service port and targetPort should be different; as per your yaml they are the same. Change it to 80 and try again; if you get any errors, post them here.
The problem is that minikube tunnel by default binds to the localhost address 127.0.0.1. Every node, machine, VM, container etc. has its own (and the same) localhost address; it exists so that local services can be reached without knowing the IP address of a network interface (the service is running on "myself"). So when k8s-yaml-hello.info resolves to 127.0.0.1, it points to a different service depending on which container you are in (it always points back to "myself").
To make it work like you want, you first have to find out the IP address of your host's network interface, e.g. with ifconfig. Its name is something like eth0 or en0, depending on your system.
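For example (a sketch; the interface names below are common defaults and may differ on your machine):

# Linux
ip addr show eth0 | grep 'inet '
# macOS
ifconfig en0 | grep 'inet '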
Then you can use the bind-address option of minikube tunnel to bind to that address instead:
minikube tunnel --bind-address=192.168.1.10
With this your service should be reachable from within the container. Please check first with the ip address:
curl http://192.168.1.10
Then make sure name resolution via /etc/hosts works in your container, using dig, nslookup, getent hosts or something similar that is available there.
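For example (a sketch; the pod name is a placeholder for one of your running pods):

kubectl exec -it your-pod -- getent hosts k8s-yaml-hello.info
kubectl exec -it your-pod -- nslookup k8s-yaml-hello.info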

Kubernetes API Gateway for Microservice deployment

I am trying to understand the Kubernetes API Gateway pattern for my microservices. I have multiple microservices, each deployed as a Kubernetes Deployment along with its own Service.
I also have a front-end application that basically tries to communicate with the above APIs to complete the requests.
Overall, below is what I would like to achieve, and I would like your opinions.
Is my understanding correct with the below diagram? (i.e., should we have an API Gateway on top of all my microservices, and should the web application use this API Gateway to reach any of those services?)
If yes, how can I make that possible? I mean, I tried the Istio Gateway and it's somehow not working.
Here are my Istio Gateway and VirtualService (screenshot):
On the other side, below is my service (catalog service) configuration:
apiVersion: v1
kind: Service
metadata:
  name: catalog-api-service
  namespace: local-shoppingcart-v1
  labels:
    version: "1.0.0"
spec:
  type: NodePort
  selector:
    app: catalog-api
  ports:
    - nodePort: 30001
      port: 30001
      targetPort: http
      protocol: TCP
      name: http-catalogapi
Also, in the hosts file (Windows: drivers\etc\hosts) I have entries for the local DNS:
127.0.0.1 kubernetes.docker.internal
127.0.0.1 localshoppingcart.com
On the Istio service side, see the following screenshot:
I am not sure what is going wrong, but when I try localhost:30139/catalog or localhost/catalog it always gives me a connection refused or connection not found error.
If you are on minikube you have to get the minikube IP and the ingress port using these commands, as mentioned in the documentation:
Get IP of minikube
export INGRESS_HOST=$(minikube ip)
Document
You can get the port and HTTPS port details:
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
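You can then use those variables to reach the gateway through the minikube node (a sketch; the /catalog path is assumed from your description):

curl -v "http://$INGRESS_HOST:$INGRESS_PORT/catalog"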
If you are on Docker Desktop try forwarding traffic using the kubectl
kubectl port-forward svc/istio-ingressgateway 8080:80 -n istio-system
Open localhost:8080 in a browser.
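If your routing is restricted to a specific host, curl may also need to send that host name explicitly (a sketch; localshoppingcart.com is taken from your hosts file and must match the hosts configured in your Gateway and VirtualService):

curl -v -H "Host: localshoppingcart.com" http://localhost:8080/catalog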

redis-cluster on kubernetes: connection timed out

I have combined/followed the following manuals to create a redis cluster on kubernetes (GCP):
https://github.com/sanderploegsma/redis-cluster
https://rancher.com/blog/2019/deploying-redis-cluster
I have created 3 nodes, each with 2 pods on it. The problem is: I get a connection timeout when I connect from outside of the Kubernetes cluster (through a load balancer external IP) to the redis cluster.
$ redis-cli -h external_ip_lb -p 6379 -c
external_ip_lb:6379> set foo bar
-> Redirected to slot [12182] located at interal_ip_node:6379
Could not connect to Redis at interal_ip_node:6379: Operation timed out
When I get into the shell of a running container and do the redis-cli commands there, it works.
$ kubectl exec -it redis-cluster-0 -- redis-cli -c
127.0.0.1:6379> set foo bar
-> Redirected to slot [12182] located at internal_ip_node:6379
OK
internal_ip_node:6379> get foo
"bar"
I also tried to set up a ClusterIP Service and port-forward to my local machine's port 7000; this gives me the same error as with the external IP method.
$ kubectl port-forward pods/redis-cluster-0 7000:6379
Does anyone have an idea what could be wrong? Clearly it has something to do with my local machine not being part of the Kubernetes cluster, so the connection with the internal IPs of the other nodes fails.
Edit: output of kubectl describe svc redis-cluster-lb
Name: redis-cluster-lb
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"redis-cluster-lb","namespace":"default"},"spec":{"ports":[{"port"...
Selector: app=redis-cluster
Type: LoadBalancer
IP: internal_ip_lb
LoadBalancer Ingress: external_ip_lb
Port: <unset> 6379/TCP
TargetPort: 6379/TCP
NodePort: <unset> 30631/TCP
Endpoints: internal_ip_node_1:6379,internal_ip_node_2:6379,internal_ip_node_3:6379 + 3 more...
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I'm able to ping the external load balancer's IP.
I am not a Redis expert, but in the Redis documentation you can read:
Since cluster nodes are not able to proxy requests, clients may be redirected to other nodes using redirection errors
This is why you are having these issues with a Redis cluster behind a LB, and this is also the reason why it is (most probably) not going to work.
You will probably need to use some proxy (e.g. the official redis-cluster-proxy) that runs inside the k8s cluster, can reach all internal IPs of the Redis cluster, and handles the redirects.
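A rough sketch of that idea (not a tested deployment: the image name, the proxy arguments and the default port 7777 are assumptions about redis-cluster-proxy, so adjust them to the proxy you actually use):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cluster-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-cluster-proxy
  template:
    metadata:
      labels:
        app: redis-cluster-proxy
    spec:
      containers:
        - name: proxy
          image: your-registry/redis-cluster-proxy:latest   # hypothetical image
          # entry point into the cluster; the service/pod name is assumed from the linked manuals
          args: ["redis-cluster-0.redis-cluster:6379"]
          ports:
            - containerPort: 7777   # redis-cluster-proxy's default listening port (assumed)
---
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster-proxy-lb
spec:
  type: LoadBalancer
  selector:
    app: redis-cluster-proxy
  ports:
    - port: 6379
      targetPort: 7777

External clients then talk only to the proxy, which follows the cluster's MOVED/ASK redirects internally over the pod network.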

dns entries for pods in not ready state

I'm trying to build a simple mongo replica set cluster in kubernetes.
I have a StatefulSet of mongod instances, with
livenessProbe:
  initialDelaySeconds: 60
  exec:
    command:
      - mongo
      - --eval
      - "db.adminCommand('ping')"
readinessProbe:
  initialDelaySeconds: 60
  exec:
    command:
      - /usr/bin/mongo --quiet --eval 'rs.status()' | grep ok | cut -d ':' -f 2 | tr -dc '0-9' | awk '{ if($0=="0"){ exit 127 }else{ exit 0 } }'
As you can see, my readinessProbe is checking whether the mongo replica set is working correctly.
However, I get a circular dependency, with an existing cluster reporting:
"lastHeartbeatMessage" : "Error connecting to mongo-2.mongo:27017 :: caused by :: Could not find address for mongo-2.mongo:27017: SocketException: Host not found (authoritative)",
(where mongo-2 was undergoing a rolling update).
Looking further:
$ kubectl run --generator=run-pod/v1 tmp-shell --rm -i --tty --image nicolaka/netshoot -- /bin/bash
bash-5.0# nslookup mongo-2.mongo
Server: 10.96.0.10
Address: 10.96.0.10#53
** server can't find mongo-2.mongo: NXDOMAIN
bash-5.0# nslookup mongo-0.mongo
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: mongo-0.mongo.cryoem-logbook-dev.svc.cluster.local
Address: 10.27.137.6
So the question is whether there is a way to get Kubernetes to always keep the DNS entries for the mongo pods present. It appears that I have a chicken-and-egg situation: if the pod hasn't passed its readiness and liveness checks, a DNS entry is not created, and hence the other mongod instances will not be able to access it.
I ended up just putting in a ClusterIP Service for each of the StatefulSet instances, with a selector for the specific instance, i.e.:
apiVersion: v1
kind: Service
metadata:
  name: mongo-0
spec:
  clusterIP: 10.101.41.87
  ports:
    - port: 27017
      protocol: TCP
      targetPort: 27017
  selector:
    role: mongo
    statefulset.kubernetes.io/pod-name: mongo-0
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
And repeat for the other StatefulSet instances. The key here is the selector:
statefulset.kubernetes.io/pod-name: mongo-0
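To check that the per-pod Service resolves as expected, something like this should work (a sketch; the namespace is taken from the nslookup output earlier in the question):

kubectl run tmp-shell --rm -it --restart=Never --image=nicolaka/netshoot -- \
  nslookup mongo-0.cryoem-logbook-dev.svc.cluster.local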
I believe you are misinterpreting the error.
Could not find address for mongo-2.mongo:27017: SocketException: Host not found (authoritative)"
The pod is created with an IP attached. Then it's registered into DNS:
Pod-0 has the IP 10.0.0.10 and now its FQDN is Pod-0.servicename.namespace.svc.cluster.local
Pod-1 has the IP 10.0.0.11 and now its FQDN is Pod-1.servicename.namespace.svc.cluster.local
Pod-2 has the IP 10.0.0.12 and now its FQDN is Pod-2.servicename.namespace.svc.cluster.local
But DNS is a live service, IPs are dynamically assigned and can't be duplicated.
So whenever it receives a request:
"Connect me with Pod-A.servicename.namespace.svc.cluster.local"
It tries to reach the registered IP, and if the Pod is offline due to a rolling update, it will think the pod is unavailable and will return "Could not find the address (IP) for Pod-0.servicename" until the pod is online again, or until the IP reservation expires; only then will the DNS record be recycled.
The DNS is not discarding the registered DNS name; it's only answering that it's currently offline.
You can either ignore the errors during the rolling update, or rethink your script and try using the internal js environment, as mentioned in the comments, for continuous monitoring of the mongo status.
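A side note on the probe itself: an exec probe runs its command directly, without a shell, so a pipeline like the one in the question generally needs to be wrapped in sh -c. A minimal sketch reusing the question's command:

readinessProbe:
  initialDelaySeconds: 60
  exec:
    command:
      - /bin/sh
      - -c
      - >-
        /usr/bin/mongo --quiet --eval 'rs.status()' | grep ok | cut -d ':' -f 2 |
        tr -dc '0-9' | awk '{ if($0=="0"){ exit 127 }else{ exit 0 } }'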
EDIT:
When Pods from a StatefulSet with N replicas are being deployed, they are created sequentially, in order from {0..N-1}.
When Pods are being deleted, they are terminated in reverse order, from {N-1..0}.
This is the expected/desired default behavior.
So the error is expected, since the rollingUpdate makes the pod temporarily unavailable.

Kubernetes: How to map service to a local port inside pod

Is it possible to map a Kubernetes service to a specific port for a group of pods (deployment)?
E.g. I have a service (just as an example):
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
And I want this service to be available as http://localhost:8081/ in my pods from some specific deployment.
It seems to me that I saw this in the K8s docs several days ago, but I cannot find it right now.
It may be beneficial to review your usage of K8s services. If you have exposed a deployment of pods as a service, then your service will define the port mappings, and you will be able to access your service at its cluster DNS name on the service port.
If you must access your service via localhost, I am assuming your use case is some tightly coupled containers in your pod. In that case, you can define a containerPort in your deployment yaml and put the containers that need to communicate with each other over localhost in the same pod.
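For example, a minimal sketch (the names and images are illustrative, not taken from your setup): two containers in the same Pod share a network namespace, so the sidecar can reach the app at http://localhost:8081.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: MyApp
  template:
    metadata:
      labels:
        app: MyApp
    spec:
      containers:
        - name: app
          image: my-app:latest       # hypothetical application image listening on 8081
          ports:
            - containerPort: 8081
        - name: sidecar
          image: busybox             # illustrative second container
          command: ["sleep", "3600"]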
If by localhost you are referring to your own local development computer, you can do a port-forward. As long as the port-forwarding process is running, you can access the pods' ports from your localhost. Find more on port-forwarding. Simple example:
kubectl port-forward redis-master-765d459796-258hz 6379:6379
# or
kubectl port-forward service/redis 6379:6379
Hope this helps!