I am using DNS-based service discovery to discover services in a K8s cluster.
From this link it is clear that to discover a service named my-service we can do a name lookup for "my-service.my-ns" and a pod should be able to find the service.
However, for port discovery the documented solution is to look up
"_http._tcp.my-service.my-ns"
where
_http refers to the port named http in my-service.
But even after using _http._tcp.my-service it doesn't resolve the port number. Below are the details.
The my-service Service that needs to be discovered:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service
  ports:
    - name: http
      protocol: TCP
      port: 5000
      targetPort: 5000
A snippet of the client-service YAML that tries to discover my-service and its port:
spec:
  containers:
    - name: client-service
      image: client-service
      imagePullPolicy: Always
      ports:
        - containerPort: 7799
      resources:
        limits:
          cpu: "100m"
          memory: "500Mi"
      env:
        - name: HOST
          value: my-service
        - name: PORT
          value: _http._tcp.my-service
Now when I make a request it fails and logs the following URL, which is clearly incorrect, as the port number was not discovered:
http://my-service:_http._tcp.my-service
I am not sure what I am doing wrong here; I am following the instructions in the documentation.
Can somebody suggest what is wrong and how the port can be discovered using DNS-based service discovery? Is my understanding wrong that the lookup returns the literal port value?
Cluster details
K8s cluster version is 1.11.5-gke.5
and Kube-dns is running
Additional details: trying to discover the service from busybox, it is not able to discover the port value 5000.
kubectl exec busybox -- nslookup my-service
Server: 10.51.240.10
Address: 10.51.240.10:53
Name: my-service.default.svc.cluster.local
Address: 10.51.253.236
*** Can't find my-service.svc.cluster.local: No answer
*** Can't find my-service.cluster.local: No answer
*** Can't find my-service.us-east4-a.c.gdic-infinity-dev.internal: No answer
*** Can't find my-service.c.gdic-infinity-dev.internal: No answer
*** Can't find my-service.google.internal: No answer
*** Can't find my-service.default.svc.cluster.local: No answer
*** Can't find my-service.svc.cluster.local: No answer
*** Can't find my-service.cluster.local: No answer
*** Can't find my-service.us-east4-a.c.gdic-infinity-dev.internal: No answer
*** Can't find my-service.c.gdic-infinity-dev.internal: No answer
*** Can't find my-service.google.internal: No answer
kubectl exec busybox -- nslookup _http._tcp.my-service
Server: 10.51.240.10
Address: 10.51.240.10:53
** server can't find _http._tcp.my-service: NXDOMAIN
*** Can't find _http._tcp.my-service: No answer
Since Services come with their own (Kubernetes-internal) IP addresses, the easy answer here is to not pick arbitrary ports for Services. Change to port: 80 in your Service definition, and clients will be able to reach it using the default HTTP port. When you set the environment variable, set
- name: PORT
  value: "80"
DNS supports several different record types; for example, an A record translates a host name to its IPv4 address, and AAAA to an IPv6 address. The Kubernetes Service documentation you cite notes (emphasis mine)
you can do a DNS SRV query ... to discover the port number for "http".
While SRV records seem like they solve both halves of this problem (they provide both a port and a host name for a service), in practice they get fairly little use. The linked Wikipedia page has a list of protocols that use them, but "connect to the thing this SRV record points at" isn't an option in mainstream TCP clients that I know of.
You should be able to verify this with a command like (running this debugging image)
kubectl run debug --rm -it --image giantswarm/tiny-tools sh
# dig -t srv _http._tcp.my-service
(But notice the -t srv argument; it is not the default record type.)
Most things that expect a PORT environment variable or similar expect a number, or failing that, a name they can find in /etc/services. Providing a DNS SRV name instead, as you're trying to do here, probably just won't work unless you know the specific software supports it.
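If you do control the client code, it can perform the SRV lookup itself. A minimal Go sketch (the default namespace and cluster domain are assumptions based on the Service above):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Resolves _http._tcp.my-service.default.svc.cluster.local
	_, addrs, err := net.LookupSRV("http", "tcp", "my-service.default.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	for _, srv := range addrs {
		// Each SRV record carries both a target host and a port (5000 here).
		fmt.Printf("target=%s port=%d\n", srv.Target, srv.Port)
	}
}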
Related
I am trying out a Kubernetes scenario in a local minikube cluster: accessing an internal service that is exposed with an Ingress in one cluster from another cluster using an ExternalName service. I understand that with an Ingress the service is already accessible within its own cluster. As I am trying this out locally with minikube, I cannot run two clusters simultaneously; I just wanted to verify whether it is possible to reach an Ingress-exposed service through an ExternalName service.
I started the minikube tunnel using minikube tunnel.
I can access the service using http://k8s-yaml-hello.info.
But when I try curl k8s-yaml-hello-internal from within a running pod, the error I get is: curl: (7) Failed to connect to k8s-yaml-hello-internal port 80 after 1161 ms: Connection refused
Can anyone point out the issue here? Thanks in advance.
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-yaml-hello
spec:
  selector:
    app: k8s-yaml-hello
  ports:
    - port: 3000
      targetPort: 3000
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-yaml-hello-ingress
  labels:
    name: k8s-yaml-hello-ingress
spec:
  rules:
    - host: k8s-yaml-hello.info
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: k8s-yaml-hello
                port:
                  number: 3000
externalName.yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-yaml-hello-internal
spec:
  ports:
    - name: ''
      appProtocol: http
      protocol: TCP
      port: 3000
  type: ExternalName
  externalName: k8s-yaml-hello.info
/etc/hosts
127.0.0.1 k8s-yaml-hello.info
You are getting the error curl: (7) Failed to connect.
This error message means that no web server is running on the specified IP and the specified (or implied) port.
Check /etc/hosts (e.g. with nano) to confirm the domain points to the correct IP; if it does not, correct the entry.
Refer to this SO for more information.
In ingress.yaml use port 80, and in service.yaml the port should also be 80. The service port and targetPort should be different; as per your YAML they are the same. Change it to 80 and give it a try. If you get any errors, post them here.
The problem is that minikube tunnel binds to the localhost address 127.0.0.1 by default. Every node, machine, VM, container, etc. has its own localhost address; it exists so that local services can be reached without knowing the IP address of any network interface (the service runs on "myself"). So when k8s-yaml-hello.info resolves to 127.0.0.1, it points to a different service depending on which machine or container you are on (always just "myself").
To make it work the way you want, you first have to find out the IP address of your host's network interface, e.g. with ifconfig. Its name is something like eth0 or en0, depending on your system.
Then you can use the bind-address option of minikube tunnel to bind to that address instead:
minikube tunnel --bind-address=192.168.1.10
With this, your service should be reachable from within the container. Please check with the IP address first:
curl http://192.168.1.10
Then make sure name resolution works in your container, using dig, nslookup, getent hosts, or whatever similar tool is available in the container.
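If those tools are not in the image, a tiny Go program run inside the pod can show the same thing (names are the ones from the question; this is only a verification sketch):

package main

import (
	"fmt"
	"net"
)

func main() {
	// The ExternalName Service should resolve to a CNAME for k8s-yaml-hello.info.
	cname, err := net.LookupCNAME("k8s-yaml-hello-internal")
	fmt.Println("CNAME:", cname, err)

	// If this prints 127.0.0.1 (or fails entirely), that explains the
	// "Connection refused" seen from inside the pod.
	addrs, err := net.LookupHost("k8s-yaml-hello-internal")
	fmt.Println("addresses:", addrs, err)
}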
I have the following service:
apiVersion: v1
kind: Service
metadata:
  name: hedgehog
  labels:
    run: hedgehog
spec:
  ports:
    - port: 3000
      protocol: TCP
      name: restful
    - port: 8982
      protocol: TCP
      name: websocket
  selector:
    run: hedgehog
  externalIPs:
    - 1.2.4.120
In which I have specified an externalIP.
I'm also seeing this IP under EXTERNAL-IP when running kubectl get services.
However, when I do curl http://1.2.4.120:3000 I get a timeout. The app should give me a response, because the jar running inside the container in the deployment does respond to localhost:3000 requests when run locally.
If you look at the type of your service, it is probably ClusterIP; try changing the type to LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: http-service
spec:
  clusterIP: 172.30.163.110
  externalIPs:
    - 192.168.132.253
  externalTrafficPolicy: Cluster
  ports:
    - name: highport
      nodePort: 31903
      port: 30102
      protocol: TCP
      targetPort: 30102
  selector:
    app: web
  sessionAffinity: None
  type: LoadBalancer
Something like this, where type: LoadBalancer.
First of all, you have to understand that you cannot place just any random address in your externalIPs field. Those addresses are not managed by Kubernetes; they are the responsibility of the cluster administrator, or of you. External IP addresses specified with externalIPs are different from the external IP address assigned to a Service of type LoadBalancer by a cloud provider.
I checked the address you mentioned in the question and it does not look like it belongs to you, which is why I suspect you placed a random one there.
The same address appears in this article about externalIPs. As you can see there, the addresses in that case are the IP addresses of the nodes that Kubernetes runs on.
This is a potential issue in your case.
Another thing to verify is whether your application is listening on localhost or on 0.0.0.0. If it really is localhost, that might be another problem for you. You can change where the server process listens by binding to 0.0.0.0, which means "listen on all interfaces".
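To illustrate the difference, here is a minimal sketch of a server bound to all interfaces (hypothetical Go code; your app is a jar, but the idea is the same, using port 3000 as in your Service):

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	// "127.0.0.1:3000" would accept connections only from inside the pod itself;
	// ":3000" binds to all interfaces, so the Service can reach the container.
	log.Fatal(http.ListenAndServe(":3000", nil))
}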
Lastly, please verify that the selector and ports of the Service are correct and that at least one endpoint backs your Service.
I am able to get the Eclipse Mosquitto broker up and running, and the MQTT clients can talk to the broker using the broker's IP address. However, since I am running this on Kubernetes, the broker IP changes on every restart. I would like to enable DNS for the broker so the clients can use a broker name instead of the IP. CoreDNS is running by default in Kubernetes.
Any suggestions on what can be done?
$ nslookup kubernetes.default
Server: 10.43.0.10
Address: 10.43.0.10:53
** server can't find kubernetes.default: NXDOMAIN
** server can't find kubernetes.default: NXDOMAIN
You can achieve that with a headless Service. You create one by setting the clusterIP field in the Service spec to None. Once you do that, instead of returning a single DNS A record for the Service, the DNS server returns multiple A records, each pointing to the IP of an individual pod backing the Service at that moment.
With this, your client can perform a single DNS A-record lookup to fetch the IPs of all the pods that are part of the Service. Headless Services are also often used as a service discovery mechanism.
apiVersion: v1
kind: Service
metadata:
  name: your-headless-service
spec:
  clusterIP: None # <-- This makes the service headless!
  selector:
    app: your-mosquito-broker-pod
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
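With the headless Service in place, a client can list all backing pod IPs with an ordinary A-record lookup. A minimal Go sketch (the Service name and the default namespace are the assumptions used above):

package main

import (
	"fmt"
	"net"
)

func main() {
	// For a headless Service this returns one address per ready pod,
	// instead of a single cluster IP.
	ips, err := net.LookupHost("your-headless-service.default.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	for _, ip := range ips {
		fmt.Println("pod IP:", ip)
	}
}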
You can also resolve the DNS name of a regular Service; the difference is that with a headless Service you talk to the pods directly, instead of having the Service act as a load balancer or proxy.
Resolving the Service through DNS is easy, and you do it with the following pattern:
backend-broker.default.svc.cluster.local
Here backend-broker corresponds to the Service name, default is the namespace the Service is defined in, and svc.cluster.local is the configurable cluster domain suffix used in all cluster-local service names.
Note that if your client and broker are in the same namespace, you can omit the namespace and the svc.cluster.local suffix. You then refer to the Service simply as:
backend-broker
I highly encourage you to read more about DNS in Kubernetes.
All,
Thanks for answering the query, especially Thomas for the code pointers. With your suggestions, once I created a Service for the pod I was able to get DNS working, since CoreDNS was already running. After this I was also able to use the hostname for the MQTT broker:
opts.AddBroker(fmt.Sprintf("tcp://mqtt-broker:1883"))
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2021-02-01T19:08:46Z"
  labels:
    app: ipc
  name: mqtt-broker
  namespace: default
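For completeness, the surrounding client setup looks roughly like this, assuming the Eclipse Paho Go client (github.com/eclipse/paho.mqtt.golang) and the mqtt-broker Service above; a sketch, not the original code:

package main

import (
	"fmt"
	"log"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	// Connect to the broker by its Service name instead of a pod IP.
	opts := mqtt.NewClientOptions()
	opts.AddBroker(fmt.Sprintf("tcp://%s:%d", "mqtt-broker", 1883))
	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		log.Fatal(token.Error())
	}
	log.Println("connected to mqtt-broker:1883")
}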
BTW, I wasn't able to get the headless Service working; I was getting the error below, so I continued with a ClusterIP Service plus the exposed MQTT port 1883. Any suggestions, please?
services "mqtt-broker" was not valid:
spec.clusterIPs[0]: Invalid value: []string{"None"}: may not change once set
I am new to Kubernetes and I'm working on deploying an application within a new Kubernetes cluster.
Currently, the service has multiple pods that need to communicate with each other. I'm looking for a general approach to debugging the issue rather than getting into the specifics of the service, as the question would become much too specific.
The pods within the cluster are throwing an error:
err="Get \"http://testpod.mynamespace.svc.cluster.local:8080/": dial tcp 10.10.80.100:8080: connect: connection refused"
Both pods are in the same cluster.
What are the best steps to take to debug this?
I have tried running:
kubectl exec -it testpod --namespace mynamespace -- cat /etc/resolv.conf
And this returns:
search mynamespace.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
Which I found here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
First of all, the following pattern:
my-svc.my-namespace.svc.cluster-domain.example
is applicable only to FQDNs of Services, not Pods, which have the following form:
pod-ip-address.my-namespace.pod.cluster-domain.example
e.g.:
172-17-0-3.default.pod.cluster.local
So in fact you're querying the cluster DNS about the FQDN of a Service named testpod, not the FQDN of a Pod. Judging by the fact that it resolves successfully, such a Service already exists in your cluster, but it is most probably misconfigured. The fact that you're getting the error message connection refused means the following:
your Service FQDN testpod.mynamespace.svc.cluster.local has been successfully resolved
(otherwise you would receive something like curl: (6) Could not resolve host: testpod.default.svc.cluster.local)
you've successfully reached your testpod Service
(otherwise, i.e. if the Service existed but nothing was listening on the 8080 port you're trying to connect to, you would receive a timeout, e.g. curl: (7) Failed to connect to testpod.default.svc.cluster.local port 8080: Connection timed out)
you've reached the Pod exposed by the testpod Service (you've been successfully redirected to it by the Service)
but once the Pod is reached, you're trying to connect to an incorrect port, and that's why the connection is refused by the server
My best guess is that your Pod in fact listens on a different port, like 80, but you exposed it via a ClusterIP Service by specifying only the --port value, e.g. with:
kubectl expose pod testpod --port=8080
In such a case both --port (the port of the Service) and --target-port (the port of the Pod) get the same value. In other words, you've created a Service like the one below:
apiVersion: v1
kind: Service
metadata:
  name: testpod
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
And you probably should've exposed it either this way:
kubectl expose pod testpod --port=8080 --target-port=80
or with the following yaml manifest:
apiVersion: v1
kind: Service
metadata:
  name: testpod
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
Of course your targetPort may be different from 80, but connection refused in such a case can mean only one thing: the target HTTP server (running in the Pod) refuses the connection on port 8080 (most probably because it isn't listening on it). You didn't specify what image you're using, whether it's a standard nginx web server or something based on a custom image, but if it's nginx and wasn't configured differently, it listens on port 80.
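You can also tell a refusal apart from a timeout programmatically, e.g. with a small Go probe like this (a sketch, using the FQDN and port from your error message):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" -> the pod answered, but nothing listens on the target port.
	// "i/o timeout"        -> the traffic never reached anything that answers.
	conn, err := net.DialTimeout("tcp", "testpod.mynamespace.svc.cluster.local:8080", 3*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("port is open")
}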
For further debugging, you can attach to your Pod:
kubectl exec -it testpod --namespace mynamespace -- /bin/sh
and if the netstat command is not present (the most likely scenario), run:
apt update && apt install net-tools
and then check with netstat -ntlp which port your container listens on.
I hope this helps you solve your issue. In case of any doubts, don't hesitate to ask.
I'm trying to build a simple MongoDB replica set cluster in Kubernetes.
I have a StatefulSet of mongod instances, with:
livenessProbe:
  initialDelaySeconds: 60
  exec:
    command:
      - mongo
      - --eval
      - "db.adminCommand('ping')"
readinessProbe:
  initialDelaySeconds: 60
  exec:
    command:
      - /usr/bin/mongo --quiet --eval 'rs.status()' | grep ok | cut -d ':' -f 2 | tr -dc '0-9' | awk '{ if($0=="0"){ exit 127 }else{ exit 0 } }'
As you can see, my readinessProbe checks whether the mongo replica set is working correctly.
However, I get a circular dependency, with the existing cluster reporting:
"lastHeartbeatMessage" : "Error connecting to mongo-2.mongo:27017 :: caused by :: Could not find address for mongo-2.mongo:27017: SocketException: Host not found (authoritative)",
(where mongo-2 was undergoing a rolling update).
Looking further:
$ kubectl run --generator=run-pod/v1 tmp-shell --rm -i --tty --image nicolaka/netshoot -- /bin/bash
bash-5.0# nslookup mongo-2.mongo
Server: 10.96.0.10
Address: 10.96.0.10#53
** server can't find mongo-2.mongo: NXDOMAIN
bash-5.0# nslookup mongo-0.mongo
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: mongo-0.mongo.cryoem-logbook-dev.svc.cluster.local
Address: 10.27.137.6
So the question is whether there is a way to get Kubernetes to always keep the DNS entries for the mongo pods present. It appears I have a chicken-and-egg situation: if a pod hasn't passed its readiness and liveness checks, its DNS entry is not created, and hence the other mongod instances cannot reach it.
I ended up just putting a ClusterIP Service in front of each of the StatefulSet instances, with a selector for the specific instance,
i.e.
apiVersion: v1
kind: Service
metadata:
  name: mongo-0
spec:
  clusterIP: 10.101.41.87
  ports:
    - port: 27017
      protocol: TCP
      targetPort: 27017
  selector:
    role: mongo
    statefulset.kubernetes.io/pod-name: mongo-0
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
and repeat for the other StatefulSet pods. The key here is the selector:
statefulset.kubernetes.io/pod-name: mongo-0
which works because the StatefulSet controller automatically adds this label, with the pod's name as its value, to every pod it creates.
I believe you are misinterpreting the error.
Could not find address for mongo-2.mongo:27017: SocketException: Host not found (authoritative)"
The pod is created with an IP attached. Then it is registered in DNS:
Pod-0 has the IP 10.0.0.10, and now its FQDN is Pod-0.servicename.namespace.svc.cluster.local
Pod-1 has the IP 10.0.0.11, and now its FQDN is Pod-1.servicename.namespace.svc.cluster.local
Pod-2 has the IP 10.0.0.12, and now its FQDN is Pod-2.servicename.namespace.svc.cluster.local
But DNS is a live service; IPs are dynamically assigned and can't be duplicated.
So whenever it receives a request:
"Connect me with Pod-A.servicename.namespace.svc.cluster.local"
It tries to reach the registered IP, and if the Pod is offline due to a rolling update it will consider the pod unavailable and return "Could not find the address (IP) for Pod-0.servicename" until the pod is online again, or until the IP reservation expires and the DNS record is recycled.
DNS is not discarding the registered name; it is only answering that the target is currently offline.
You can either ignore the errors during the rolling update, or rethink your script and use the internal JS environment, as mentioned in the comments, for continuous monitoring of the mongo status.
EDIT:
When Pods from a StatefulSet with N replicas are being deployed, they are created sequentially, in order from {0..N-1}.
When Pods are being deleted, they are terminated in reverse order, from {N-1..0}.
This is the expected/desired default behavior.
So the error is expected, since the rollingUpdate makes the pod temporarily unavailable.