DNS lookup not working properly in Kubernetes cluster

I spin up a cluster with minikube, then apply this dummy deployment/service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      run: nginx-label
  replicas: 2
  template:
    metadata:
      labels:
        run: nginx-label
    spec:
      containers:
      - name: nginx-container
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    run: nginx-label
spec:
  ports:
  - port: 1234
    targetPort: 80
    protocol: TCP
  selector:
    run: nginx-label
Then I create a dummy curl pod to test the internal network:
kubectl run curl --image=radial/busyboxplus:curl -i --tty
Inside that curl pod, I'm able to access nginx at $NGINX_SERVICE_SERVICE_HOST:$NGINX_SERVICE_SERVICE_PORT or at nginx-service.default:1234, but not at nginx-service:1234, even though the pods belong to the same namespace.
ubuntu:~$ kubectl get pods --namespace=default
NAME READY STATUS RESTARTS AGE
curl-69c656fd45-d7w8t 1/1 Running 1 29m
nginx-deployment-58595d65fc-9ln25 1/1 Running 0 29m
nginx-deployment-58595d65fc-znkqp 1/1 Running 0 29m
Any idea what could cause this? Here is the nslookup result:
[ root@curl-69c656fd45-d7w8t:/ ]$ nslookup nginx-service
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: nginx-service
Address 1: 23.202.231.169 a23-202-231-169.deploy.static.akamaitechnologies.com
Address 2: 23.217.138.110 a23-217-138-110.deploy.static.akamaitechnologies.com
[ root@curl-69c656fd45-d7w8t:/ ]$ nslookup nginx-service.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: nginx-service.default
Address 1: 10.103.69.73 nginx-service.default.svc.cluster.local
[ root@curl-69c656fd45-d7w8t:/ ]$
Update: here's the content of /etc/resolv.conf
[ root@curl-69c656fd45-d7w8t:/ ]$ cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local attlocal.net
options ndots:5
[ root@curl-69c656fd45-d7w8t:/ ]$

Answering your question from the comment.
Check the lines below the Name field in the nslookup output. For the host nginx-service they are:
Name: nginx-service
Address 1: 23.202.231.169 a23-202-231-169.deploy.static.akamaitechnologies.com
Address 2: 23.217.138.110 a23-217-138-110.deploy.static.akamaitechnologies.com
for nginx-service.default:
Name: nginx-service.default
Address 1: 10.103.69.73 nginx-service.default.svc.cluster.local
The flow is:
Run the nslookup command.
Check the entries (the lines below the name, containing an address and a hostname).
Compare those entries and their domains to the domains listed in /etc/resolv.conf; if they don't match, it means you cannot reach that specific host.
Your nginx-service "hit" a23-202-231-169.deploy.static.akamaitechnologies.com, NOT nginx-service.default.svc.cluster.local.
nginx-service.default "hit" nginx-service.default.svc.cluster.local, which is why, as you said, curl works for it.
10.96.0.10 is the address of the cluster's DNS server, kube-dns.kube-system.svc.cluster.local. This is the server your pods are configured to use to translate domain names into IP addresses.
Speaking about the domain deploy.static.akamaitechnologies.com:
Akamai is a Content Delivery Network used by Symantec (and many other companies) to serve internet content. CDNs like this provide content when you visit certain websites and deliver things like product updates, so it is normal to see connections to these hosts when you visit certain sites or have certain products, like Norton, installed. They essentially provide the servers needed to propagate large amounts of data to various regions of the world quickly while balancing traffic so that individual server locations are not overloaded.
Hmm, sorry, not really... My question is: based on the search directive in /etc/resolv.conf, if nginx-service.default is resolved to nginx-service.default.svc.cluster.local, then nginx-service should be too. Did I miss anything?
Answer:
Keep in mind the order:
First look at the nslookup output.
Then find a match in /etc/resolv.conf.
Note: if you do these steps the other way around, it won't work.
There are many possible reasons why a domain resolves incorrectly in nslookup - see: dns-debugging. Try executing dig nginx-service and interpret the output to find the real problem. It is clear that you cannot curl nginx-service (I have explained why above), but why nslookup shows different records is a completely different question.
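For example (a sketch, assuming dig is available in the pod image, or run from a throwaway debugging pod), you can compare how the name resolves with and without the search path from /etc/resolv.conf:
$ dig +search nginx-service                                # applies the search domains (dig skips them by default)
$ dig nginx-service.default.svc.cluster.local              # queries the FQDN directly
$ dig @10.96.0.10 nginx-service.default.svc.cluster.local  # asks kube-dns explicitly
If the FQDN resolves to the ClusterIP while the short name returns the Akamai addresses, one of the other search domains (attlocal.net is a real public domain) is winning the lookup, which points at search-path handling rather than at kube-dns itself.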
More information can be found here: nslookup, akamai.

Related

Can you expose your local minikube cluster to be accessible from a browser without editing etc/hosts?

I am following this tutorial for how to expose your local cluster for external access.
I only need to be able to check my application from browser, without exposing the app to the Internet.
> kubectl get service web
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
web NodePort 10.98.217.114 <none> 8080:32718/TCP 10m
> minikube service web --url
http://192.168.49.2:32718
Followed the guide until the etc/hosts part. I set up the ingress:
> kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
example-ingress nginx hello-world.info 192.168.49.2 80 96s
For various reasons I cannot edit the etc/hosts file on my Windows machine; it says another process is using it. However, neither 192.168.49.2 nor http://192.168.49.2:32718 returns anything in the browser, and neither does curl 192.168.49.2 (with or without :32718). I don't think that is expected, since the hosts file merely maps hello-world.info to the IP; I should be able to access my app with just the IP. What am I missing here?
Kubectl v1.24.1 (kustomize v4.5.4, server v1.23.3), Minikube v1.25.2, Windows 10, Minikube with the Docker driver.
Okay, my solution to the problem was this: port-forward to the ingress-controller pod (not to the Ingress object itself, because that doesn't seem to be possible).
Sample ingress file for a service named "web":
# example-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080
Run it with
kubectl apply -f example-ingress.yaml
Check that it's running
> kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
example-ingress nginx * 192.168.49.2 80 23h
If you ssh into minikube (minikube ssh), you can curl 192.168.49.2:80 and it returns the proper output.
Output of the ingress-nginx controller pods:
> kubectl get pod -n ingress-nginx
NAME                                       READY   STATUS      RESTARTS      AGE
ingress-nginx-admission-create-56gbc       0/1     Completed   0             46h
ingress-nginx-admission-patch-fqf92        0/1     Completed   0             46h
ingress-nginx-controller-cc8496874-7znt5   1/1     Running     4 (39m ago)   46h
Forward port to it:
> kubectl port-forward ingress-nginx-controller-cc8496874-7znt5 8080:80 -n ingress-nginx
Then check localhost:8080. If it returns an nginx 404, your Ingress setup is probably wrong. Otherwise, this works for me.
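As a variation, you can port-forward to the controller's Service instead of a specific pod, so the command keeps working after the pod restarts (a sketch, assuming the standard minikube ingress addon service name ingress-nginx-controller):
> kubectl port-forward -n ingress-nginx svc/ingress-nginx-controller 8080:80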

Kubernetes Service get Connection Refused

I am trying to create an application in Kubernetes (Minikube) and expose its service to other applications in the same cluster, but I get connection refused if I try to access this service from the Kubernetes node.
This application just listens on HTTP address 127.0.0.1:9897 and sends a response.
Below is my yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: exporter-test
  namespace: datenlord-monitoring
  labels:
    app: exporter-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: exporter-test
  template:
    metadata:
      labels:
        app: exporter-test
    spec:
      containers:
      - name: prometheus
        image: 34342/hello_world
        ports:
        - containerPort: 9897
---
apiVersion: v1
kind: Service
metadata:
  name: exporter-test-service
  namespace: datenlord-monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9897'
spec:
  selector:
    app: exporter-test
  type: NodePort
  ports:
  - port: 8080
    targetPort: 9897
    nodePort: 30001
After I apply this yaml file, the pod and the service are deployed correctly, and I am sure the pod works, since when I log in to the pod with
kubectl exec -it exporter-test-* -- sh and run curl 127.0.0.1:9897, I get the correct response.
Also, if I run kubectl port-forward exporter-test-* -n datenlord-monitoring 8080:9897, I get the correct response from localhost:8080. So the application itself should work well.
However, when I try to access this service from another application in the same K8s cluster via exporter-test-service.datenlord-monitoring.svc:30001, or run curl nodeIp:30001 on the k8s node, or run curl clusterIp:8080 on the k8s node, I get Connection refused.
Has anyone had the same issue before? I'd appreciate any help. Thanks!
You are mixing two things here. NodePort is the port on which the application is available from outside your cluster. Inside your cluster you need to access your service via the service port, not the NodePort.
Try changing exporter-test-service.datenlord-monitoring.svc:30001 to exporter-test-service.datenlord-monitoring.svc:8080.
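For example, from a shell inside any other pod in the cluster (the namespace of that pod doesn't matter, since the fully qualified service name is used):
curl -I http://exporter-test-service.datenlord-monitoring.svc:8080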
Welcome to the community!
There are no issues with the behaviour you observed.
In short, a Kubernetes cluster (minikube in this case) has its own isolated network with its own internal DNS.
One way to access your service on the node: you specified a nodePort for your service, which makes the service accessible on localhost:30001. You can check it by running on your host:
$ kubectl get svc -n datenlord-monitoring
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
exporter-test-service NodePort 10.111.191.159 <none> 8080:30001/TCP 2m45s
# Test:
curl -I localhost:30001
HTTP/1.1 200 OK
Another way to expose the service to the host network is to use minikube tunnel (run it in another console). You'll need to change the service type from NodePort to LoadBalancer:
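A minimal sketch of making that change in place (you could equally edit the manifest and re-apply it):
$ kubectl patch svc exporter-test-service -n datenlord-monitoring -p '{"spec": {"type": "LoadBalancer"}}'
$ minikube tunnel   # keep this running in a separate console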
$ kubectl get svc -n datenlord-monitoring
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
exporter-test-service LoadBalancer 10.111.191.159 10.111.191.159 8080:30001/TCP 18m
# Test:
$ curl -I 10.111.191.159:8080
HTTP/1.1 200 OK
Why some of the options don't work:
Connecting to the service by its DNS name + NodePort: the NodePort links the host IP and NodePort to the service port inside the Kubernetes cluster, and the internal DNS is not accessible outside the cluster (unless you add the IPs to /etc/hosts on your host machine).
Inside the cluster you should use the internal DNS name with the internal service port, which is 8080 in your case. You can check how this works with a separate container in the same namespace (e.g. image curlimages/curl) and get the following:
$ kubectl exec -it curl -n datenlord-monitoring -- curl -I exporter-test-service:8080
HTTP/1.1 200 OK
Or from the pod in a different namespace:
$ kubectl exec -it curl-default-ns -- curl -I exporter-test-service.datenlord-monitoring.svc:8080
HTTP/1.1 200 OK
I've attached useful links below which will help you understand this difference.
Edit: DNS inside the deployed pod:
$ kubectl exec -it exporter-test-xxxxxxxx-yyyyy -n datenlord-monitoring -- bash
root@exporter-test-74cf9f94ff-fmcqp:/# cat /etc/resolv.conf
nameserver 10.96.0.10
search datenlord-monitoring.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
Useful links:
DNS for pods and services
Service types
Accessing apps in Minikube
You need to change 127.0.0.1:9897 to 0.0.0.0:9897 so that the application listens on all interfaces and accepts requests coming from outside the pod.
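To confirm what address the process is actually bound to, you can check the listening sockets from inside the pod (a sketch, assuming the image ships netstat or ss, which many minimal images do not):
kubectl exec -it exporter-test-xxxxxxxx-yyyyy -n datenlord-monitoring -- netstat -tln
# 127.0.0.1:9897 -> only reachable from inside the pod's own network namespace
# 0.0.0.0:9897   -> reachable from other pods and via the Service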

Kubernetes service is reachable from node but not from my machine

I have a timeout problem with my site hosted on a Kubernetes cluster provided by DigitalOcean.
u@macbook$ curl -L fork.example.com
curl: (7) Failed to connect to fork.example.com port 80: Operation timed out
I have tried everything listed on the Debug Services page. I use a k8s service named df-stats-site.
u@pod$ nslookup df-stats-site
Server: 10.245.0.10
Address: 10.245.0.10#53
Name: df-stats-site.deepfork.svc.cluster.local
Address: 10.245.16.96
It gives the same output when I do it from the node:
u@node$ nslookup df-stats-site.deepfork.svc.cluster.local 10.245.0.10
Server: 10.245.0.10
Address: 10.245.0.10#53
Name: df-stats-site.deepfork.svc.cluster.local
Address: 10.245.16.96
With the help of the "Does the Service work by IP?" part of the page, I tried the following command and got the expected output.
u@node$ curl 10.245.16.96
*correct response*
That should mean everything is fine with DNS and the service. I confirmed that kube-proxy is running with the following command:
u@node$ ps auxw | grep kube-proxy
root 4194 0.4 0.1 101864 17696 ? Sl Jul04 13:56 /hyperkube proxy --config=...
But something seems to be wrong with the iptables rules:
u@node$ iptables-save | grep df-stats-site
(unfortunately, I was not able to copy the output from the node; see the screenshot below)
It is recommended to restart kube-proxy with the -v flag set to 4, but I don't know how to do that on a DigitalOcean-provided cluster.
That's the configuration I use:
apiVersion: v1
kind: Service
metadata:
  name: df-stats-site
spec:
  ports:
  - port: 80
    targetPort: 8002
  selector:
    app: df-stats-site
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: df-stats-site
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - fork.example.com
    secretName: letsencrypt-prod
  rules:
  - host: fork.example.com
    http:
      paths:
      - backend:
          serviceName: df-stats-site
          servicePort: 80
Also, I have an NGINX Ingress Controller set up with the help of this answer.
I must note that it worked fine before. I'm not sure what caused this, but restarting the cluster would be great, though I don't know how to do it without removing all the resources.
The solution for me was to add HTTP and HTTPS inbound rules in the Firewall (these are missing by default).
For DigitalOcean provided Kubernetes cluster, you can open it at https://cloud.digitalocean.com/networking/firewalls/.
UPDATE: Make sure to create a new firewall record rather than editing an existing one. Otherwise, your rules will be automatically removed in a couple of hours/days, because DigitalOcean Kubernetes manages the existing firewall's rule set and keeps overwriting manual changes.
ClusterIP services are only accessible from within the cluster. If you want to access a service from outside the cluster, it needs to be configured as NodePort or LoadBalancer.
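For instance, to switch an existing ClusterIP service such as df-stats-site to NodePort in place (a sketch; on DigitalOcean the LoadBalancer type would instead provision an external load balancer):
$ kubectl patch svc df-stats-site -p '{"spec": {"type": "NodePort"}}'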
If you are just trying to test something locally, you can use kubectl port-forward to forward a port on your local machine to a ClusterIP service on a remote cluster. Here's an example of creating a deployment from an image, exposing it as a ClusterIP service, then accessing it via kubectl port-forward:
$ kubectl run --image=rancher/hello-world hello-world --replicas 2
$ kubectl expose deployment hello-world --type=ClusterIP --port=8080 --target-port=80
$ kubectl port-forward svc/hello-world 8080:8080
This service is now accessible from my local computer at http://127.0.0.1:8080

Istio (1.0) intra ReplicaSet routing - support traffic between pods in a Kubernetes Deployment

How does Istio support IP based routing between pods in the same Service (or ReplicaSet to be more specific)?
We would like to deploy a Tomcat application with replicas > 1 within an Istio mesh. The app runs Infinispan, which uses JGroups to sort out communication and clustering. JGroups needs to identify its cluster members, and for that purpose there is KUBE_PING (the Kubernetes discovery protocol for JGroups). It consults the K8S API with a lookup comparable to kubectl get pods. The cluster members can be both pods in other services and pods within the same Service/Deployment.
Despite our issue being driven by rather specific needs, the topic is generic: how do we enable pods to communicate with each other within a ReplicaSet?
Example: as a showcase we deploy the demo application https://github.com/jgroups-extras/jgroups-kubernetes. The relevant stuff is:
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: ispn-perf-test
    namespace: my-non-istio-namespace
  spec:
    replicas: 3
< -- edited for brevity -- >
Running without Istio, the three pods will find each other and form the cluster. Deploying the same with Istio in my-istio-namespace and adding a basic Service definition:
kind: Service
apiVersion: v1
metadata:
  name: ispn-perf-test-service
  namespace: my-istio-namespace
spec:
  selector:
    run: ispn-perf-test
  ports:
  - protocol: TCP
    port: 7800
    targetPort: 7800
    name: "one"
  - protocol: TCP
    port: 7900
    targetPort: 7900
    name: "two"
  - protocol: TCP
    port: 9000
    targetPort: 9000
    name: "three"
Note that the output below is wide - you might need to scroll right to see the IPs.
kubectl get pods -n my-istio-namespace -o wide
NAME READY STATUS RESTARTS AGE IP NODE
ispn-perf-test-558666c5c6-g9jb5 2/2 Running 0 1d 10.44.4.63 gke-main-pool-4cpu-15gb-98b104f4-v9bl
ispn-perf-test-558666c5c6-lbvqf 2/2 Running 0 1d 10.44.4.64 gke-main-pool-4cpu-15gb-98b104f4-v9bl
ispn-perf-test-558666c5c6-lhrpb 2/2 Running 0 1d 10.44.3.22 gke-main-pool-4cpu-15gb-98b104f4-x8ln
kubectl get service ispn-perf-test-service -n my-istio-namespace
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ispn-perf-test-service ClusterIP 10.41.13.74 <none> 7800/TCP,7900/TCP,9000/TCP 1d
Guided by https://istio.io/help/ops/traffic-management/proxy-cmd/#deep-dive-into-envoy-configuration, let's peek into the resulting Envoy conf of one of the pods:
istioctl proxy-config listeners ispn-perf-test-558666c5c6-g9jb5 -n my-istio-namespace
ADDRESS PORT TYPE
10.44.4.63 7900 TCP
10.44.4.63 7800 TCP
10.44.4.63 9000 TCP
10.41.13.74 7900 TCP
10.41.13.74 9000 TCP
10.41.13.74 7800 TCP
< -- edited for brevity -- >
The Istio doc describes the listeners above as
Receives outbound non-HTTP traffic for relevant IP:PORT pair from listener 0.0.0.0_15001
and this all makes sense. The pod ispn-perf-test-558666c5c6-g9jb5 can reach itself on 10.44.4.63 and the service via 10.41.13.74. But... what if the pod sends packets to 10.44.4.64 or 10.44.3.22? Those IPs are not present among the listeners, so as far as I can tell the two "sibling" pods are unreachable for ispn-perf-test-558666c5c6-g9jb5.
Can Istio support this today - then how?
You are right that HTTP routing only supports local access or remote access by service name or service VIP.
That said, for your particular example, above, where the service ports are named "one", "two", "three", the routing will be plain TCP as described here. Therefore, your example should work. The pod ispn-perf-test-558666c5c6-g9jb5 can reach itself on 10.44.4.63 and the other pods at 10.44.4.64 and 10.44.3.22.
If you rename the ports to "http-one", "http-two", and "http-three" then HTTP routing will kick in and the RDS config will restrict the remote calls to ones using recognized service domains.
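For reference, a sketch of the same Service with the ports renamed (only the name fields change):
  ports:
  - protocol: TCP
    port: 7800
    targetPort: 7800
    name: "http-one"
  - protocol: TCP
    port: 7900
    targetPort: 7900
    name: "http-two"
  - protocol: TCP
    port: 9000
    targetPort: 9000
    name: "http-three"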
To see the difference in the RDS config, look at the output of the following command when the port is named "one", and when it is changed to "http-one".
istioctl proxy-config routes ispn-perf-test-558666c5c6-g9jb5 -n my-istio-namespace --name 7800 -o json
With the port named "one" it will return no routes, so TCP routing will apply, but in the "http-one" case, the routes will be restricted.
I don't know if there is a way to add additional remote pod IP addresses to the RDS domains in the HTTP case. I would suggest opening an Istio issue, to see if it's possible.

Does the Google Container Engine support DNS based service discovery?

From the Kubernetes docs I see that there is a DNS-based service discovery mechanism. Does Google Container Engine support this? If so, what's the format of the DNS name to discover a service running inside Container Engine? I couldn't find the relevant information in the Container Engine docs.
The DNS name for services is as follows: {service-name}.{namespace}.svc.cluster.local.
Assuming you configured kubectl to work with your cluster, you should be able to get your service and namespace details by following the steps below.
Get your namespace
$ kubectl get namespaces
NAME LABELS STATUS
default <none> Active
kube-system <none> Active
You should ignore the kube-system entry, because that is for the cluster itself. All other entries are your namespaces. By default there will be one extra namespace called default.
Get your services
$ kubectl get services
NAME LABELS SELECTOR IP(S) PORT(S)
broker-partition0 name=broker-partition0,type=broker name=broker-partition0 10.203.248.95 5050/TCP
broker-partition1 name=broker-partition1,type=broker name=broker-partition1 10.203.249.91 5050/TCP
kubernetes component=apiserver,provider=kubernetes <none> 10.203.240.1 443/TCP
service-frontend name=service-frontend,service=frontend name=service-frontend 10.203.246.16 80/TCP
104.155.61.198
service-membership0 name=service-membership0,partition=0,service=membership name=service-membership0 10.203.246.242 80/TCP
service-membership1 name=service-membership1,partition=1,service=membership name=service-membership1 10.203.248.211 80/TCP
This command lists all the services available in your cluster. So for example, if I want to get the IP address of the service-frontend I can use the following DNS: service-frontend.default.svc.cluster.local.
Verify DNS with busybox pod
You can create a busybox pod and use it to execute the nslookup command to query the DNS server.
$ kubectl create -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
EOF
Now you can do an nslookup from the pod in your cluster.
$ kubectl exec busybox -- nslookup service-frontend.default.svc.cluster.local
Server: 10.203.240.10
Address 1: 10.203.240.10
Name: service-frontend.default.svc.cluster.local
Address 1: 10.203.246.16
Here you see that the Address 1 entry is the IP of the service-frontend service, the same as the IP address listed by kubectl get services.
It should work the same way as mentioned in the doc you linked to. Have you tried that? (i.e. "my-service.my-ns")