kubernetes-cli command to get the DNS name of a Kubernetes resource

I have gone through the docs mentioned here: gitlink, as well as doclink.
But my job would be a whole lot easier if I could get the DNS name of a resource type by using a kubernetes command.
I also tried this: commands-link
For example, I would like to get the DNS name of a service db-service running in the dev namespace inside svc.cluster.local:
db-service.dev.svc.cluster.local
Any pointers?

If you need to, you can query that from within a pod (related: "How do I run a container from the command line in Kubernetes (like docker run)?" and "Retrieve the full name of a service in Kubernetes").
Using a pod which has some DNS utils:
kubectl run tmp-shell --rm -i --tty --image tutum/dnsutils -- /bin/bash
you can then run
root@tmp-shell:/# nslookup db-service
Server: 10.2.32.10
Address: 10.2.32.10#53
Name: db-service.dev.svc.cluster.local
for a one-liner, see: https://serverfault.com/questions/929211/kubernetes-pod-dns-resolution
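Combining the two steps, a minimal one-liner sketch (assuming the db-service/dev example above and the tutum/dnsutils image):
kubectl run tmp-shell --rm -i --restart=Never --image=tutum/dnsutils -- nslookup db-service.dev.svc.cluster.local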

Try the command:
kubectl get svc
The first column (NAME) is the internal DNS name.
If the type is LoadBalancer, then the EXTERNAL-IP column will display the external DNS name.
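For the db-service example above, the output might look something like this (the ClusterIP, port, and age are illustrative, not from the question):
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
db-service   ClusterIP   10.2.45.12   <none>        5432/TCP   3d
The NAME column plus the namespace gives db-service.dev.svc.cluster.local.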

Related

Check working of a service in kubernetes

I created a pod to test my service in kubernetes, but I didn't get anything. Here are my commands:
kubectl run --generator=run-pod/v1 nginx-resolver --image=nginx
kubectl expose pod nginx-resolver --name=nginx-resolver-service --port=80 --target-port=80 --type=ClusterIP
kubectl run --generator=run-pod/v1 test-nslookup --image=busybox:1.28 --rm -it -- nslookup nginx-resolver-service
Please help me understand why. Thanks.
Run the following command to see what this cmd actually does:
$ kubectl run --help
Create and run a particular image, possibly replicated.
Creates a deployment or job to manage the created container(s).
Examples:
# Start a single instance of nginx.
kubectl run nginx --image=nginx
# Start a single instance of hazelcast and let the container expose port 5701.
kubectl run hazelcast --image=hazelcast --port=5701
...
So, the kubectl run cmd creates a deployment or a job:
If it is a deployment, it creates (a) first a ReplicaSet, (b) then Pod(s).
If it is a job, it creates a Pod.
But you are trying to expose a Pod whose name is not the correct one. You can see the name(s) of the Pod(s) created by the cmd kubectl run:
$ kubectl get pods --namespace=<namespace> | grep "nginx-resolver"
$ kubectl get pods --namespace=<namespace> | grep "test-nslookup"
Then use those names to expose the Pods.
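For example, if kubectl run created a deployment, the pod name carries generated suffixes, so it won't be exactly nginx-resolver (the hashes below are illustrative):
$ kubectl get pods | grep "nginx-resolver"
nginx-resolver-5f8b7d4c6d-x2vqp   1/1   Running   0   1m
It is this full name that kubectl expose pod expects.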
You can optionally expose your Deployment instead. To do so, see the help:
$ kubectl expose deployment --help
Expose a resource as a new Kubernetes service.
Looks up a deployment, service, replica set, replication controller or pod by name and uses the selector for that
resource as the selector for a new service on the specified port. A deployment or replica set will be exposed as a
service only if its selector is convertible to a selector that service supports, i.e. when the selector contains only
the matchLabels component. Note that if no port is specified via --port and the exposed resource has multiple ports, all
will be re-used by the new service. Also if no labels are specified, the new service will re-use the labels from the
resource it exposes.
Possible resources include (case insensitive):
pod (po), service (svc), replicationcontroller (rc), deployment (deploy), replicaset (rs)
Examples:
...
# Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000.
kubectl expose deployment nginx --port=80 --target-port=8000
...
If you want to see the log interactively, you need to set the --restart option of your test-nslookup pod to Never or OnFailure. Otherwise, kubernetes will just restart your pod indefinitely and you won't see anything.
So your last command should be:
kubectl run --generator=run-pod/v1 test-nslookup --image=busybox:1.28 -it --restart=OnFailure -- nslookup nginx-resolver-service
Why?
Probably because of this issue.
There seems to be a delay of about 5s before kubectl run actually prints something.
So, in order to do it without changing the restart option, you'll need to change your command like this (note the sleep 7: you'll have to wait 7 seconds before seeing the logs):
kubectl run --generator=run-pod/v1 test-nslookup --image=busybox:1.28 -it --rm -- sh -c 'sleep 7; nslookup nginx-resolver-service'

How to find the URL of a service in Kubernetes?

I have a local Kubernetes cluster on my local Docker Desktop.
This is what my Kubernetes service looks like when I run kubectl describe service:
Name: helloworldsvc
Namespace: test
Labels: app=helloworldsvc
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"helloworldsvc"},"name":"helloworldsvc","namespace":"test...
Selector: app=helloworldapp
Type: ClusterIP
IP: 10.108.182.240
Port: http 9111/TCP
TargetPort: 80/TCP
Endpoints: 10.1.0.28:80
Session Affinity: None
Events: <none>
This service is pointing to a deployment with a web app.
My question is: how do I find the URL for this service?
I already tried http://localhost:9111/ and that did not work.
I verified that the pod that this service points to is up and running.
The URL of a service has the following format:
<service-name>.<namespace>.svc.cluster.local:<service-port>
In your case it is:
helloworldsvc.test.svc.cluster.local:9111
Get the service name: kubectl get service -n test
The URL to a Kubernetes service is service-name.namespace.svc.cluster.local:service-port, where cluster.local is the cluster domain (cluster.local is the default).
To get the cluster name: kubectl config get-contexts | awk {'print $2'}
The URL to the service in your case will be helloworldsvc.test.svc.cluster.local:9111
The way you are trying won't work: to make the service available on your localhost you need to expose it via a NodePort, use port-forward, or use kubectl proxy.
However, if you don't want a NodePort and just want to check that everything works fine inside the container, follow these steps to get inside the container (if it has a shell):
kubectl exec -it pod-name -n its-namespace-name -- sh
then do a
curl localhost:80 or curl helloworldsvc.test.svc.cluster.local:9111 or curl 10.1.0.28:80
but these curl commands will work only inside a Kubernetes pod, not on your localhost machine.
To access it on your host machine, run: kubectl port-forward svc/helloworldsvc 80:9111 -n test
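With that port-forward running, local port 80 is forwarded to the service's port 9111, so a sketch of the check on your host machine would be:
curl http://localhost:80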
The service you have created is of type ClusterIP, which is only accessible from inside the cluster. You have two ways to access it from your desktop:
Create a NodePort type service and then access it via nodeip:nodeport (see the sketch after this list)
Use kubectl port-forward and then access it via localhost:forwardedport
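A minimal NodePort sketch for the first option, assuming the deployment behind helloworldsvc is named helloworldapp (hypothetical, taken from the selector in the question):
kubectl expose deployment helloworldapp --type=NodePort --name=helloworldsvc-np --port=9111 --target-port=80 -n test
kubectl get svc helloworldsvc-np -n test
Then access it via http://<node-ip>:<assigned-nodeport>.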
The following URL variations worked for me when in the same cluster and on the same namespace (namespace: default; though all but the first should still work when services are on different namespaces):
http://helloworldsvc
http://helloworldsvc.default
http://helloworldsvc.default.svc
http://helloworldsvc.default.svc.cluster.local
http://helloworldsvc.default.svc.cluster.local:80
// calling the service from an ASP.NET 6 app:
using HttpClient client = new();
string result = await client.GetStringAsync(url);
Notes:
I happen to be calling to and from an ASP.NET 6 application using HttpClient.
That client, I think, just defaults the port to 80, so no explicit :80 is needed for this to work. But I did verify that for all of these it can be added to or removed from the URL.
http only (not https, unless you configured it specially)
The namespace can only be omitted in the first case (i.e. when the domain / 'authority' is just the service name alone). So helloworldsvc.svc.cluster.local:80 fails with the exception "Name or service not known (helloworldsvc.svc.cluster.local:80)".
If you are working with minikube, you can run the commands below:
minikube service --all
For a specific service:
minikube service service-name --url
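With --url, minikube prints the locally reachable address instead of opening a browser; the output might look something like this (the IP and port are illustrative):
$ minikube service service-name --url
http://192.168.49.2:30080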
Here is another way to get the URL of a service.
Enter a pod through kubectl exec:
kubectl exec -it podName -n namespace -- /bin/sh
Then run nslookup against the IP of the service (such as 172.20.2.213) in the pod:
/ # nslookup 172.20.2.213
nslookup: can't resolve '(null)': Name does not resolve
Name: 172.20.2.213
Address 1: 172.20.2.213 172-20-2-213.servicename.namespace.svc.cluster.local
Or run nslookup against the service name in the pod:
/ # nslookup servicename
nslookup: can't resolve '(null)': Name does not resolve
Name: 172.20.2.213
Address 1: 172.20.2.213 172-20-2-213.servicename.namespace.svc.cluster.local
Now the service URL is servicename.namespace.svc.cluster.local (the name from the nslookup output with the IP prefix removed), with the service port attached.

How do I use minikube's DNS?

How do I use minikube's (cluster's) DNS? I want to receive all IP addresses associated with all pods for a selected headless service. I don't want to expose it outside the cluster. I am currently building the back-end layer.
As stated in the following answer:
What exactly is a headless service, what does it do/accomplish, and what are some legitimate use cases for it?
"Instead of returning a single DNS A record, the DNS server will return multiple A records for the service, each pointing to the IP of an individual pod backing the service at that moment."
Thus the pods in back-end layer can communicate to each other.
I can't use the dig command; it is not installed in minikube. How would I install it, anyway? There is no apt available.
I hope this explains more accurately what I want to achieve.
You mentioned that you want to receive IP addresses associated with pods for selected service name for testing how does headless service work.
For testing purposes only, you can use port-forwarding. You can forward traffic from your local machine to the DNS service in your cluster. To do this, you need to run:
kubectl port-forward svc/kube-dns -n kube-system 5353:53
and it will expose the kube-dns service on your host. Then all you need is to use the dig command (or an alternative) to query the DNS server:
dig @127.0.0.1 -p 5353 +tcp +short <service>.<namespace>.svc.cluster.local
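With +short, the answer is just the record data: for a regular ClusterIP service you get one IP, while for a headless service you get one A record per backing pod, which is exactly what the question asks for. A sketch (the service name and IPs are hypothetical):
dig @127.0.0.1 -p 5353 +tcp +short my-headless-svc.default.svc.cluster.local
10.1.0.21
10.1.0.22
10.1.0.23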
You can also test your DNS from inside the cluster, e.g. by running a pod with an interactive shell:
kubectl run --image tutum/dnsutils dns -it --rm -- bash
root@dns:/# dig +search <service>
Let me know if it helped.
As illustrated in kubernetes/minikube issue 4397
Containers don't have an IP address by default.
You'll want to use minikube service or minikube tunnel to get endpoint information.
See "hello-minikube/ Create a service":
By default, the Pod is only accessible by its internal IP address within the Kubernetes cluster.
To make the hello-node Container accessible from outside the Kubernetes virtual network, you have to expose the Pod as a Kubernetes Service.
Expose the Pod to the public internet using the kubectl expose command:
kubectl expose deployment hello-node --type=LoadBalancer --port=8080
The --type=LoadBalancer flag indicates that you want to expose your Service outside of the cluster.
View the Service you just created:
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node LoadBalancer 10.108.144.78 <pending> 8080:30369/TCP 21s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP
On Minikube, the LoadBalancer type makes the Service accessible through the minikube service command.
Run the following command:
minikube service hello-node
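minikube service normally opens the service in a browser; with --url it just prints the address. Using the NodePort 30369 from the output above (the node IP is illustrative):
minikube service hello-node --url
http://192.168.99.100:30369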

Communication Between Different Services in Kubernetes Deploy

I'm new to using Kubernetes (v1.11.2-gke.18), and am trying to get Sentry running as a Kubernetes deployment. This is all being run in Google Cloud. Here are the YAML files for Postgres, Redis, and Sentry:
The Redis and Postgres parts seem to be up and running just fine, but Sentry fails because: could not translate host name "postgres-sentry:5432" to address: Name or service not known.
My question is this: How do I get these services to communicate properly? How do I get the Sentry container to communicate with Postgres and Redis?
It turns out that in this case, what I needed was to change the SENTRY_REDIS_HOST and SENTRY_POSTGRES_HOST values so that they do not include the port numbers in the env declaration. Taking those out made everything talk together as expected.
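A sketch of what the corrected env declaration might look like; the question's YAML isn't shown here, so the Redis service name and the overall shape are assumptions:
env:
  - name: SENTRY_POSTGRES_HOST
    value: "postgres-sentry"   # host name only, no :5432
  - name: SENTRY_REDIS_HOST
    value: "redis-sentry"      # hypothetical service name, no port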
In Kubernetes, pods can generally find other pods through Services using DNS.
Your configuration looks correct if postgres-sentry is in the same namespace as your Sentry app pod/deployment (which looks like the default namespace).
So it points to a possible problem with DNS. You can check by shelling into the Sentry app container/pod and trying to ping postgres-sentry:
$ kubectl exec -it <pod-id-of-your-sentry-app> -- sh
# ping postgres-sentry
Also, check whether your /etc/resolv.conf looks something like this:
# cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
Finally, check if your DNS pods are running:
$ kubectl -n kube-system get pods | grep dns
coredns-xxxxxxxxxxx-xxxxx 1/1 Running 15 116d
coredns-xxxxxxxxxxx-xxxxx 1/1 Running 15 116d

Service discovery on Kubernetes

I have kubeDNS set up on a bare metal kubernetes cluster. I thought that would allow me to access services as described here, but when I run
curl https://monitoring-influxdb:8083
I get the error
curl: (6) Could not resolve host: monitoring-influxdb
This is true when I run curl on a service name in any namespace. Is this an error with my kubeDNS setup, or are there different steps I need to take in order to achieve this? I get the expected output when I run the test at the end of this article.
For reference:
kubeDNS controller yaml files
kubeDNS service yaml file
kubelet flags
output of kubectl get svc in default and kube-system namespaces
The service discovery that you're trying to use is documented at https://kubernetes.io/docs/concepts/services-networking/dns-pod-service, and is for communications within one pod talking to an existing service, not from nodes (or the master) to speak to Kubernetes services.
You will want to leverage the DNS for the service in form of <servicename>.<namespace> or <servicename>.<namespace>.svc.cluster.local. To see this in operation, kick up an interactive pod with busybox (or use an existing pod of your own) with something like:
kubectl run -i --tty alpine-interactive --image=alpine --restart=Never
and within that shell, run an nslookup command. From your example, I'm guessing you're trying to access influxDB from https://github.com/kubernetes/heapster/tree/master/deploy/kube-config/influxdb; in that case it will be installed into the kube-system namespace, and the service name you'd use from another Pod internally to the cluster would be:
monitoring-influxdb.kube-system.svc.cluster.local
For example:
kubectl run -i --tty alpine --image=alpine --restart=Never
If you don't see a command prompt, try pressing enter.
/ # nslookup monitoring-influxdb.kube-system.svc.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: monitoring-influxdb.kube-system.svc.cluster.local
Address 1: 10.102.27.233 monitoring-influxdb.kube-system.svc.cluster.local
As @Michael Hausenblas pointed out in the comments, curl http://monitoring-influxdb:8086 needs to be run from within a pod. Doing that provided the expected results.
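For example, a throwaway pod can make the request directly (a sketch: busybox's wget stands in for curl, and the port comes from the comment above):
kubectl run -i curl-test --image=busybox:1.28 --restart=Never --rm -- wget -qO- http://monitoring-influxdb.kube-system.svc.cluster.local:8086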