CoreDNS only works on one host in a Kubernetes cluster

I have a Kubernetes cluster with 3 nodes:
[root@ops001 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
azshara-k8s01 Ready <none> 143d v1.15.2
azshara-k8s02 Ready <none> 143d v1.15.2
azshara-k8s03 Ready <none> 143d v1.15.2
After I deployed some pods, I found that only one node, azshara-k8s03, could resolve DNS; the other two nodes could not. This is /etc/resolv.conf on the azshara-k8s03 host:
options timeout:2 attempts:3 rotate single-request-reopen
; generated by /usr/sbin/dhclient-script
nameserver 100.100.2.136
nameserver 100.100.2.138
This is /etc/resolv.conf on the other 2 nodes:
nameserver 114.114.114.114
Should I keep them the same? What should I do to make DNS work fine on all 3 nodes?

Did you try whether 114.114.114.114 is actually reachable from your nodes? If not, change it to something that actually is ;-]
Also check which resolv.conf your kubelets actually use: it is often something other than /etc/resolv.conf. Do ps ax | grep kubelet, check the value of the --resolv-conf flag, and see if the DNS servers in that file work correctly.
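For example, a rough check (the exact flag and path depend on how your kubelet is started):
ps ax | grep [k]ubelet | tr ' ' '\n' | grep resolv-conf
# if the flag is not set, kubelet falls back to /etc/resolv.conf
cat /etc/resolv.conf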
Update:
What names are failing to resolve on the 2 problematic nodes? Are they public names or internal-only? If they are internal-only, then 114.114.114.114 will not know about them. 100.100.2.136 and 100.100.2.138 are not reachable for me: are they your internal DNS servers? If so, try changing /etc/resolv.conf on the 2 nodes that don't work to match the one that works.
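If 100.100.2.136 and 100.100.2.138 turn out to be your internal DNS servers, a minimal sketch of aligning the two broken nodes (note that dhclient may overwrite this file again, and already-running pods keep their old resolv.conf, so restart kubelet and recreate affected pods afterwards):
cat > /etc/resolv.conf <<EOF
options timeout:2 attempts:3 rotate single-request-reopen
nameserver 100.100.2.136
nameserver 100.100.2.138
EOF
systemctl restart kubelet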

First step: make sure CoreDNS is listening on the port you specified. You can log in to a pod and use telnet to check that the exposed DNS port is reachable (I am currently using Alpine; on CentOS use yum, on Ubuntu or Debian use apt-get):
apk add busybox-extras
telnet <your coredns server ip> <your coredns listening port>
Second step: log in to pods on each host machine and make sure the port is reachable from every pod. If telnet cannot reach the port, you should fix your cluster network first.
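A minimal sketch of both steps, assuming the default kube-dns Service name and port (adjust the IP/port to what kubectl reports for your cluster):
kubectl -n kube-system get svc kube-dns        # note the ClusterIP, often 10.96.0.10, port 53
kubectl run dns-test --rm -it --restart=Never --image=alpine -- sh
# inside the pod:
apk add busybox-extras
telnet 10.96.0.10 53
nslookup kubernetes.default.svc.cluster.local 10.96.0.10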

How to access pods from the host?

I'm running colima with Kubernetes like:
colima start --kubernetes
I created a few running pods, and I want to access them through the browser.
But I don't know what the colima IP (or Kubernetes node IP) is.
Help appreciated.
You can get the node IP like this:
kubectl get node
NAME STATUS ROLES AGE VERSION
nodeName Ready <none> 15h v1.26.0
Then with the nodeName:
kubectl describe node nodeName
That gives you a description of the node; look for this section:
Addresses:
InternalIP: 10.165.39.165
Hostname: master
Ping it to verify the network.
Find your hosts file on the Mac (/etc/hosts) and make an entry like:
10.165.39.165 test.local
This lets you access the cluster with a domain name.
Ping test.local to verify that it resolves.
You cannot access a ClusterIP service from outside the cluster.
To access your pod, you have several possibilities:
If your service is of type ClusterIP, you can create a temporary connection from your host with a port-forward:
kubectl port-forward svc/yourservicename localport:podport
(I would recommend this) Create a service of type NodePort; a minimal example manifest follows below.
Then
kubectl get svc -o wide
This shows you the NodePort (in the default 30000-32767 range).
You can now reach the pod via test.local:nodePort or ipAddress:nodePort.
Note: If you deployed in a namespace other than default, add -n yournamespace to the kubectl commands.
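A minimal NodePort Service manifest, as a sketch only: the name, the app: my-app selector and the ports are assumptions, so adjust them to your own deployment:
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80          # service port inside the cluster
      targetPort: 8080  # container port
      nodePort: 30080   # optional; must be within 30000-32767
Apply it with kubectl apply -f service.yaml, then look up the assigned port with kubectl get svc -o wide as above.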
Update:
If you want to start colima with an IP address, first find one on your local network that is available.
You can get your network settings with:
ifconfig
Find your network; it should be the same one your Internet router is on.
Look for the subnet mask, most likely 255.255.255.0.
The value to pass is then:
--network-address xxx.xxx.xxx.xxx/24
If the subnet mask is 255.255.0.0, use /16 instead. That is unlikely if you are connecting from home; inside a company network, however, it is possible.
Again, check with ping and follow the steps from the beginning to verify the Kubernetes node configuration.

Kubernetes DNS names for static Pods

I have dug everywhere to see why we don't have DNS resolution for static pods and couldn't find a proper answer. It is really basic stuff, yet I couldn't find a convincing explanation.
For example, you create a static pod, exec into it, and do "nslookup pod-name" or "nslookup 10-44-0-7.default.pod.cluster.local". I know the second form is the one that gets an A record for Deployments and DaemonSets; why not for static pods? If the argument is that they are ephemeral, a Deployment's pods are too. Please tell me if it is possible and how to enable it.
For my testing of the failing queries, all are static pods created with "kubectl run busybox-2 --image=busybox --command sleep 1d".
I used this as the syntax:
In general a pod has the following DNS resolution: pod-ip-address.my-namespace.pod.cluster-domain.example.
vagrant#kubemaster:~$ kubectl get pods -n default -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox-1 1/1 Running 0 135m 10.36.0.6 kubenode02 <none> <none>
busybox-2 1/1 Running 0 134m 10.44.0.7 kubenode01 <none> <none>
busybox-sleep 1/1 Running 19 (24h ago) 23d 10.44.0.3 kubenode01 <none> <none>
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local Dlink
options ndots:5
/ # nslookup 10-44-0-7.default.pod.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10:53
Name: 10-44-0-7.default.pod.cluster.local
Address: 10.44.0.7
*** Can't find 10-44-0-7.default.pod.cluster.local: No answer
/ # nslookup 10-44-0-6.default.pod.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10:53
*** Can't find 10-44-0-6.default.pod.cluster.local: No answer
Appreciate the help.
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds:
Katacoda
Play with Kubernetes
Your cluster must be running the CoreDNS add-on. Migrating to CoreDNS explains how to use kubeadm to migrate from kube-dns.
Your Kubernetes server must be at or later than version v1.12. To check the version, enter kubectl version.
DNS lookups & reverse lookups do not work for pods/pod IPs (by design!).
Why?
I also had a similar question. After spending a lot of time exploring, the following are the reasons that convinced me:
Pods and their IPs are ephemeral. Even static pods may end up with a different IP address when they are restarted (recreated).
It would be a huge overhead for CoreDNS (or any DNS server) to keep track of ever-changing pod IP addresses.
For the above reasons, it is recommended to access a pod through a Service, because a Service keeps a constant IP no matter how its endpoint IPs change. For Services, DNS lookups and reverse lookups work fine.
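As a quick illustration with the pods from the question (a hedged sketch; kubectl run adds the label run=busybox-2, which kubectl expose pod reuses as the selector, and the port here is arbitrary since busybox isn't serving anything):
kubectl expose pod busybox-2 --port=80 --name=busybox-2-svc
kubectl exec -it busybox-1 -- nslookup busybox-2-svc.default.svc.cluster.local
The Service name keeps resolving to a stable ClusterIP even if busybox-2 is recreated with a new pod IP.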

I cannot load the node information on Kubernetes

When I ran the command below, I got the following message:
bistel@BISTelResearchDev-DN03:~$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
While on the master node, I get the information as expected:
bistel@BISTelResearchDev-NN:/etc/kubernetes$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
bistelresearchdev-dn03 NotReady <none> 62s v1.19.3
bistelresearchdev-nn Ready master 57m v1.19.3
bistel@BISTelResearchDev-NN:/etc/kubernetes$
bistelresearchdev-dn03 is the worker node, and the message The connection to the server localhost:8080 was refused - did you specify the right host or port? appears whenever I run any kubectl command on it.
I googled a lot, but none of the suggestions worked for me.
Thanks,
Out of the box, kubectl is only configured on the master node of the cluster, so getting this error on the worker node is not by itself an issue.
The real issue I can see here is that the node is in NotReady status; for that you can check the following things.
Check that kubelet is running on node bistelresearchdev-dn03 with systemctl status kubelet.
Check that a network plugin (CNI) is installed on your cluster (see the sketch below).
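A hedged way to check the network plugin (run kubectl from the master; the pod names differ per CNI, e.g. flannel, calico, weave):
kubectl -n kube-system get pods -o wide    # a CNI pod should be Running on bistelresearchdev-dn03
ls /etc/cni/net.d                          # on the worker itself; should contain a CNI config file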
The first machine you ran kubectl on is missing the kube config file.
Normally kubectl expects to find it at
~/.kube/config
If you copy the one from the master node onto your machine, kubectl will find it and be able to use it.
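For a kubeadm-based cluster, a minimal sketch (the admin.conf path is the kubeadm default; you may need root on the master to read it, and the user/host names below are just the ones from the question):
mkdir -p ~/.kube
scp bistel@BISTelResearchDev-NN:/etc/kubernetes/admin.conf ~/.kube/config
kubectl get nodes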

Kubernetes kube-dns restarts on master node

I have a Kubernetes cluster on Ubuntu 16.04 servers, deployed using Kubespray.
The kube-dns pod is restarting continuously on the master node. It has restarted 3454 times.
Can anyone let me know how to troubleshoot and solve this issue?
Starting logs of kube-dns:
# 1, # 2
k8s-cluster.yml
#1 #2
SkyDNS by default forwards queries to the nameservers listed in /etc/resolv.conf. Since SkyDNS runs inside the kube-dns pod as a cluster add-on, it inherits the /etc/resolv.conf configuration from its host, as described in the kube-dns documentation.
From your error, it looks like your host's /etc/resolv.conf is configured to use 10.233.100.1 as its nameserver, and that becomes the forwarding server in your SkyDNS config. It seems 10.233.100.1 is not routable from your Kubernetes cluster, which is why you are getting the error:
skydns: failure to forward request "read udp 10.233.100.1:40155->ourdnsserverIP:53: i/o timeout"
The solution would be to change the --nameservers flag in the SkyDNS configuration. I say change because currently you have it set to "", and you could set it to --nameservers=8.8.8.8:53,8.8.4.4:53.
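A hedged sketch of where to make the change by hand (the object name and container layout vary between kube-dns and Kubespray versions, so check your actual manifest):
kubectl -n kube-system edit deployment kube-dns
# in the SkyDNS container args, set for example:
#   - --nameservers=8.8.8.8:53,8.8.4.4:53
With Kubespray, the cleaner route is usually to set the upstream DNS servers in your inventory variables and re-run the playbook rather than editing the deployment directly.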

unable to access dns from a kubernetes pod

I have a Kubernetes master and node set up in two CentOS VMs on my Windows 10 machine.
I used Flannel for the CNI and deployed Ambassador as an API gateway.
As the Ambassador routes did not work, I analysed further and found that the DNS (IP 10.96.0.10) is not accessible from a busybox pod, which means that none of the service names can be resolved. Could I get any suggestions, please?
1. You should use the newest version of Flannel.
Flannel does not set up service IPs, kube-proxy does, so you should look at kube-proxy on your nodes and ensure it is not reporting errors (a quick check sketch is at the end of this answer).
I'd suggest taking a look at https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#tabs-pod-install-4 and ensuring you have met the requirements stated there.
Similar issue but with Calico plugin you can find here: https://github.com/projectcalico/calico/issues/1798
2. Check that port 8285 is open; Flannel uses UDP port 8285 for sending encapsulated IP packets. Make sure this traffic can pass between the hosts.
3. Ambassador includes an integrated diagnostics service to help with troubleshooting; this may be useful for you. By default, it is not exposed to the Internet. To view it, we'll need to get the name of one of the Ambassador pods:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
ambassador-3655608000-43x86 1/1 Running 0 2m
ambassador-3655608000-w63zf 1/1 Running 0 2m
Forwarding local port 8877 to one of the pods:
kubectl port-forward ambassador-3655608000-43x86 8877
will then let us view the diagnostics at http://localhost:8877/ambassador/v0/diag/.
The first point should solve your problem; if not, try the remaining ones.
I hope this helps.
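A rough check sketch for points 1 and 2 (the kube-proxy label below matches kubeadm defaults, and the firewall commands assume CentOS with firewalld; adjust to your setup):
kubectl -n kube-system get pods -l k8s-app=kube-proxy
kubectl -n kube-system logs <one-of-the-kube-proxy-pods>
firewall-cmd --permanent --add-port=8285/udp && firewall-cmd --reload   # run on each CentOS host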