I am having an issue with Kubernetes on GKE.
I am unable to resolve services by their name using internal DNS.
This is my configuration:
Google GKE v1.15
kubectl get namespaces
NAME STATUS AGE
custom-metrics Active 183d
default Active 245d
dev Active 245d
kube-node-lease Active 65d
kube-public Active 245d
kube-system Active 245d
stackdriver Active 198d
I've deployed a couple of simple services to the dev namespace, based on the openjdk 11 Docker image and built with Spring Boot + Actuator, so that each exposes an /actuator/health endpoint for testing.
kubectl get pods --namespace=dev
NAME READY STATUS RESTARTS AGE
test1-5d86946c49-h9t9l 1/1 Running 0 3h1m
test2-5bb5f4ff8d-7mzc8 1/1 Running 0 3h10m
If I exec into the pod:
kubectl --namespace=dev exec -it test1-5d86946c49-h9t9l -- /bin/bash
root@test1-5d86946c49-h9t9l:/app# cat /etc/resolv.conf
nameserver 10.40.0.10
search dev.svc.cluster.local svc.cluster.local cluster.local europe-west1-b.c.back-office-236415.internal c.back-office-236415.internal google.internal
options ndots:5
root@test1-5d86946c49-h9t9l:/app# nslookup test2
Server: 10.40.0.10
Address: 10.40.0.10#53
** server can't find test2: NXDOMAIN
The same issue occurs if I exec into test2's pod and try to resolve test1. Is there some special namespace configuration required to enable DNS resolution? Shouldn't this work automatically?
I have reproduced this using master version 1.15 and a Service of type ClusterIP, and I am able to do a lookup from the Pod of one service to another. For creating Kubernetes Services in a Google Kubernetes Engine cluster, [1] might be helpful.
To see the services:
$ kubectl get svc --namespace=default
To access the deployment:
$ kubectl exec -it [Pod Name] sh
To lookup:
$ nslookup [Service Name]
Every Service defined in the cluster (including the DNS server itself) is assigned a DNS name. By default, a client Pod’s DNS search list will include the Pod’s own namespace and the cluster’s default domain.
“Normal” (not headless) Services are assigned a DNS A record for a name of the form my-svc.my-namespace.svc.cluster-domain.example. This resolves to the cluster IP of the Service.
"Headless" Services (without a cluster IP) are also assigned a DNS A record for a name of the same form, though it resolves to the set of IPs of the Pods selected by the Service.
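Applied to the question above, a fully qualified lookup from the test1 pod would look like this (assuming a Service named test2 actually exists in the dev namespace):
nslookup test2.dev.svc.cluster.local
nslookup test2.dev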
However, DNS policies can be set on a per-pod basis. Currently Kubernetes supports the following pod-specific DNS policies. These policies are specified in the dnsPolicy field of a Pod spec [2]:
"Default": The Pod inherits the name resolution configuration from the node that the Pods run on.
"ClusterFirst": Any DNS query that does not match the configured cluster domain suffix, such as "www.kubernetes.io", is forwarded to the upstream nameserver inherited from the node. Cluster administrators may have extra stub-domain and upstream DNS servers configured.
"ClusterFirstWithHostNet": Pods running with hostNetwork need their DNS policy set explicitly to "ClusterFirstWithHostNet".
"None": Allows a Pod to ignore DNS settings from the Kubernetes environment. All DNS settings are expected to be provided via the dnsConfig field of the Pod spec.
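For illustration, a minimal Pod spec sketch showing where dnsPolicy goes (the Pod name is a placeholder; the image simply matches the openjdk 11 setup from the question):
apiVersion: v1
kind: Pod
metadata:
  name: dns-example          # placeholder name
spec:
  dnsPolicy: ClusterFirst    # also the effective default when dnsPolicy is omitted
  containers:
  - name: app
    image: openjdk:11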
[1] https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps
[2] https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config
I'm running colima with Kubernetes like:
colima start --kubernetes
I created a few pods, and I want to access them through the browser.
But I don't know what the colima IP (or Kubernetes node IP) is.
Help appreciated.
You can get the node IP like this:
kubectl get node
NAME STATUS ROLES AGE VERSION
nodeName Ready <none> 15h v1.26.0
Then, with that node name:
kubectl describe node nodeName
That gives you a description of the node, and you should look for this section:
Addresses:
InternalIP: 10.165.39.165
Hostname: master
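Alternatively, the wide output shows the InternalIP column directly:
kubectl get node -o wide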
Ping it to verify the network.
Find your hosts file on the Mac and add an entry like:
10.165.39.165 test.local
This lets you access the cluster with a domain name.
Ping it to verify.
You cannot access a ClusterIP Service from outside the cluster.
To access your Pod you have several possibilities:
If your Service is of type ClusterIP, you can create a temporary connection from your host with a port forward:
kubectl port-forward svc/yourservicename localport:podport
(I would recommend this) Create a Service of type NodePort (a sketch follows the note below).
Then
kubectl get svc -o wide
shows you the NodePort (in the default 30000-32767 range).
You can now access the Pod via test.local:NodePort or IPaddress:NodePort.
Note: If you deployed in a namespace other than default, add -n yournamespace in the kubectl commands.
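A minimal sketch of such a NodePort Service, assuming your Pods carry the label app: myapp and the container listens on port 8080 (names, labels and ports are placeholders to adapt):
apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport       # placeholder name
spec:
  type: NodePort
  selector:
    app: myapp               # must match your Pod labels
  ports:
  - port: 80                 # Service port inside the cluster
    targetPort: 8080         # container port
    # nodePort: 30080        # optional; otherwise Kubernetes picks one from 30000-32767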
Update:
If you want to start colima with an IP address, first find an available address on your local network.
You can check your network settings with:
ifconfig
Find your network; it should be the same as that of your Internet router.
Look for the subnet mask, most likely 255.255.255.0.
The value to pass is then:
--network-address xxx.xxx.xxx.xxx/24
If the subnet mask is 255.255.0.0, use /16 instead. That is unlikely if you are connecting from home, but inside a company network it is possible.
Again, check with ping and follow the steps from the beginning to verify the Kubernetes node configuration.
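For example, using the flag form shown above and assuming 192.168.1.50 is an unused address on your /24 network:
colima start --kubernetes --network-address 192.168.1.50/24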
I am trying Kubernetes and seem to have hit a bit of a hurdle. The problem is that from within my pod I can't curl local hostnames such as wrkr1 or wrkr2 (machine hostnames on my network), but I can successfully resolve hostnames such as google.com or stackoverflow.com.
My cluster is a basic setup with one master and 2 worker nodes.
What works from within the pod:
curl to google.com from pod -- works
curl to another service(kubernetes) from pod -- works
curl to another machine on same LAN via its IP address such as 192.168.x.x -- works
curl to another machine on same LAN via its hostname such as wrkr1 -- does not work
What works from the node hosting the pod:
curl to google.com -- works
curl to another machine on same LAN via its IP address such as 192.168.x.x -- works
curl to another machine on same LAN via its hostname such as wrkr1 -- works
Note: the pod CIDR is completely different from the IP range used on the LAN.
The node has a hosts file entry for wrkr1's IP address (I've checked that the node can resolve the hostname without it too, but I read somewhere that a pod inherits its node's DNS resolution, so I've kept the entry).
Kubernetes Version: 1.19.14
Ubuntu Version: 18.04 LTS
Is this normal behavior, and what can be done if I want the pod to be able to resolve hostnames on the local LAN as well?
What happens
Is this normal behavior?
This is normal behaviour. There is no DNS server in the network where your virtual machines are hosted, and Kubernetes has its own DNS server inside the cluster; it simply doesn't know what happens on your host, especially in /etc/hosts, because pods don't have access to that file.
I read somewhere that a pod inherits its node's DNS resolution, so I've kept the entry
This is where the tricky part happens. There are four available DNS policies, which are applied per pod. We will take a look at the two that are usually used:
"Default": The Pod inherits the name resolution configuration from the node that the pods run on. See related discussion for more details.
"ClusterFirst": Any DNS query that does not match the configured cluster domain suffix, such as "www.kubernetes.io", is forwarded to the upstream nameserver inherited from the node. Cluster administrators may have extra stub-domain and upstream DNS servers configured
The trickiest part is this (from the same link above):
Note: "Default" is not the default DNS policy. If dnsPolicy is not
explicitly specified, then "ClusterFirst" is used.
That means that all pods that do not have a DNS policy set will run with ClusterFirst, and they won't be able to see /etc/resolv.conf on the host. I tried changing this to Default and indeed the pod could resolve everything the host can, however internal resolution stopped working, so it's not an option.
For example, the coredns deployment runs with the Default dnsPolicy, which allows coredns to resolve hosts.
How this can be resolved
1. Add local domain to coreDNS
This requires adding an A record per host. Here's a part of the edited coredns ConfigMap.
This should be within the .:53 { block:
file /etc/coredns/local.record local
This part goes right after the block above ends (the SOA information was taken from the example; it doesn't make any difference here):
local.record: |
  local. IN SOA sns.dns.icann.org. noc.dns.icann.org. 2015082541 7200 3600 1209600 3600
  wrkr1. IN A 172.10.10.10
  wrkr2. IN A 172.11.11.11
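Putting the two fragments together, the data section of the edited ConfigMap might look roughly like this (a sketch only; the plugin list inside the .:53 block is the usual default and may differ in your cluster):
data:
  Corefile: |
    .:53 {
        errors
        health
        file /etc/coredns/local.record local   # the added line
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
  local.record: |
    local. IN SOA sns.dns.icann.org. noc.dns.icann.org. 2015082541 7200 3600 1209600 3600
    wrkr1. IN A 172.10.10.10
    wrkr2. IN A 172.11.11.11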
Then the coreDNS deployment should be edited to mount this file:
$ kubectl edit deploy coredns -n kube-system
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: Corefile
            path: Corefile
          - key: local.record  # 1st line to add
            path: local.record # 2nd line to add
          name: coredns
And restart coreDNS deployment:
$ kubectl rollout restart deploy coredns -n kube-system
Just in case, check that the coredns pods are running and ready:
$ kubectl get pods -A | grep coredns
kube-system coredns-6ddbbfd76-mk2wv 1/1 Running 0 4h46m
kube-system coredns-6ddbbfd76-ngrmq 1/1 Running 0 4h46m
If everything's done correctly, newly created pods will now be able to resolve hosts by their names. Please find an example in the coredns documentation.
2. Set up a DNS server in the network
While avahi looks similar to a DNS server, it does not act like one. It's not possible to set up request forwarding from coredns to avahi, but it is possible to run a proper DNS server in the network and this way have everything resolved.
3. Deploy avahi to kubernetes cluster
There's a ready-made avahi image here. If it's deployed into the cluster with dnsPolicy set to ClusterFirstWithHostNet and, most importantly, hostNetwork: true, it will be able to use the host's network adapter to discover all available hosts within the network.
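A sketch of the relevant spec fields for such a deployment (the image name is a placeholder for the one linked above):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: avahi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: avahi
  template:
    metadata:
      labels:
        app: avahi
    spec:
      hostNetwork: true                     # use the node's network interfaces
      dnsPolicy: ClusterFirstWithHostNet    # required together with hostNetwork
      containers:
      - name: avahi
        image: <avahi-image>                # placeholder for the image linked above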
Useful links:
Pods DNS policy
Custom DNS entries for kubernetes
I've deployed the Rancher Helm chart to my Kubernetes cluster and want to access the Rancher API/UI from another pod (i.e. a pod running an ingress-controller).
When I list the services and the endpoints, the IP addresses differ:
$ kubectl get ep | grep rancher
release-name-rancher 10.200.23.13:80 18h
and
$ kubectl get services | grep rancher
release-name-rancher ClusterIP 10.100.200.253 <none> 80/TCP 18h
Within the container of the client (i.e. the ingress controller), I see the service being represented by the service's ClusterIP:
$ env | grep RELEASE_NAME_RANCHER_SERVICE_HOST
RELEASE_NAME_RANCHER_SERVICE_HOST=10.100.200.253
Trying to reach the backend via the IP address in the Env does not work (curl 10.100.200.253 just delivers no response and blocks forever).
Trying to reach the backend via the endpoint address works:
$ curl 10.200.23.13
Found.
I'm quite confused why the endpoint IP address and the ClusterIP address differ, and why it is not possible to connect to the ClusterIP address. Any hints to polish my understanding?
In Kubernetes, every Pod and Service gets its own IP address. The kubectl get services IP address is the Kubernetes-internal address of the Service; the kubectl get ep address is the address of the Pod behind it. The Service actually acts like a load balancer, and there can be multiple Pods attached to it. The Kubernetes Service documentation goes into a lot of detail about what exactly is happening here.
Kubernetes also provides an internal DNS service that can resolve Service names. You generally shouldn't use any of these IP addresses directly; instead, use the host name release-name-rancher.default.svc.cluster.local (or replace "default" if you're running in some other Kubernetes namespace).
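For example, from the ingress-controller pod, this should reach the Rancher Service (assuming everything runs in the default namespace, as above):
curl http://release-name-rancher.default.svc.cluster.local/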
While the ..._SERVICE_HOST environment variable you reference is supported and documented, I'd avoid using it. Of particular note, if you helm install or kubectl apply a large set of resources at once and the Pod gets created before the Service, you'll be in a consistent state except that the Pod won't actually have this environment variable. In a Helm land where Services don't have fixed names, the environment variable name won't be fixed either. Prefer the DNS name.
I'm new to Kubernetes (v1.11.2-gke.18) and am trying to get Sentry running as a Kubernetes deployment. This is all being run in Google Cloud. Here are the YAML files for Postgres, Redis, and Sentry:
The Redis and Postgres parts seem to be up and running just fine, but Sentry fails because: could not translate host name "postgres-sentry:5432" to address: Name or service not known.
My question is this: How do I get these services to communicate properly? How do I get the Sentry container to communicate with Postgres and Redis?
It turns out that in this case, what I needed was to change the SENTRY_REDIS_HOST and SENTRY_POSTGRES_HOST values to not include the port numbers in the env declaration. Taking those out made everything talk to each other as expected.
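A sketch of what the corrected env entries might look like (only postgres-sentry appears in the question; the Redis Service name and the separate *_PORT variables are assumptions):
env:
- name: SENTRY_POSTGRES_HOST
  value: postgres-sentry     # hostname only, no :5432
- name: SENTRY_POSTGRES_PORT
  value: "5432"              # port goes in its own variable, if needed
- name: SENTRY_REDIS_HOST
  value: redis-sentry        # assumed Service name
- name: SENTRY_REDIS_PORT
  value: "6379"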
In Kubernetes pods generally can find other pods through services using DNS.
Your configuration looks correct if postgres-sentry is in the same namespace as your sentry app pod/deployment (which looks like the default namespace).
So that points to a possible problem with DNS. You can check by shelling into the sentry app container/pod and trying to ping postgres-sentry:
$ kubectl exec -it <pod-id-of-your-sentry-app> sh
# ping postgres-sentry
Also, check that your /etc/resolv.conf looks something like this:
# cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
Finally, check if your DNS pods are running:
$ kubectl -n kube-system get pods | grep dns
coredns-xxxxxxxxxxx-xxxxx 1/1 Running 15 116d
coredns-xxxxxxxxxxx-xxxxx 1/1 Running 15 116d
I'm running Kubernetes with a Minikube node on my machine. The pods access each other by their .metadata.name, and I would like to have a custom domain for that name.
i.e. one pod would access Elastic's machine via elasticsearch.blahblah.com
Thanks for any suggestions
You should have DNS records for pods by default, because the kube-dns addon is enabled by default in Minikube.
To check the kube-dns addon status, use the command below:
kubectl get pod -n kube-system
Here is how the cluster add-on DNS server works:
An optional (though strongly recommended) cluster add-on is a DNS server. The DNS server watches the Kubernetes API for new Services and creates a set of DNS records for each. If DNS has been enabled throughout the cluster then all Pods should be able to do name resolution of Services automatically.
For example, if you have a Service called "my-service" in Kubernetes Namespace "my-ns" a DNS record for "my-service.my-ns" is created. Pods which exist in the "my-ns" Namespace should be able to find it by simply doing a name lookup for "my-service". Pods which exist in other Namespaces must qualify the name as "my-service.my-ns". The result of these name lookups is the cluster IP.
Kubernetes also supports DNS SRV (service) records for named ports. If the "my-service.my-ns" Service has a port named "http" with protocol TCP, you can do a DNS SRV query for "_http._tcp.my-service.my-ns" to discover the port number for "http".
The Kubernetes DNS server is the only way to access services of type ExternalName.
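For example, following the my-service / my-ns naming above, you could check resolution from inside a pod like this (the names are the documentation's placeholders):
# from a pod in the my-ns namespace
nslookup my-service
# from a pod in any other namespace
nslookup my-service.my-ns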
You can follow Configure DNS Service document for configuration instructions.
Also, you can check DNS for Services and Pods for additional information.