Kubernetes kube-dns restarts on master node

I have a Kubernetes cluster on Ubuntu 16.04 servers, deployed using Kubespray.
The kube-dns pod is restarting continuously on the master node; it has restarted 3454 times.
Can anyone let me know how to troubleshoot and solve this issue?
Starting logs of kube-dns: (attachments #1, #2)
k8s-cluster.yml: (attachments #1, #2)

SkyDNS by default forwards queries to the nameservers listed in /etc/resolv.conf. Since SkyDNS runs inside the kube-dns pod as a cluster add-on, it inherits the /etc/resolv.conf configuration from its host, as described in the kube-dns documentation.
From your error it looks like your host's /etc/resolv.conf is configured to use 10.233.100.1 as its nameserver, and that becomes the forwarding server in your SkyDNS config. It appears 10.233.100.1 is not routable from your Kubernetes cluster, which is why you are getting the error:
skydns: failure to forward request "read udp 10.233.100.1:40155->ourdnsserverIP:53: i/o timeout"
The solution is to change the --nameservers flag in the SkyDNS configuration. I say change because you currently have it set to "", and you might change it to --nameservers=8.8.8.8:53,8.8.4.4:53.
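Exactly where this flag lives depends on your kube-dns manifest, but as a rough sketch (the container name kubedns and the other args shown are typical of kube-dns deployments of that era and may differ in your Kubespray templates):
$ kubectl -n kube-system edit deployment kube-dns
  containers:
  - name: kubedns
    args:
    - --domain=cluster.local.
    - --dns-port=10053
    - --nameservers=8.8.8.8:53,8.8.4.4:53   # forward non-cluster queries to these upstreams
After saving, the kube-dns pods are recreated with the new upstream servers.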

Related

kubernetes pod cannot resolve local hostnames but can resolve external ones like google.com

I am trying out Kubernetes and seem to have hit a bit of a hurdle. The problem is that from within my pod I can't curl local hostnames such as wrkr1 or wrkr2 (machine hostnames on my network), but I can successfully resolve hostnames such as google.com or stackoverflow.com.
My cluster is a basic setup with one master and 2 worker nodes.
What works from within the pod:
curl to google.com from pod -- works
curl to another service (kubernetes) from pod -- works
curl to another machine on same LAN via its IP address such as 192.168.x.x -- works
curl to another machine on same LAN via its hostname such as wrkr1 -- does not work
What works from the node hosting the pod:
curl to google.com -- works
curl to another machine on same LAN via its IP address such as 192.168.x.x -- works
curl to another machine on same LAN via its hostname such as wrkr1 -- works
Note: the pod CIDR is completely different from the IP range used on the LAN.
The node contains a hosts file entry corresponding to wrkr1's IP address (I've checked that the node can resolve the hostname without it as well, but I read somewhere that a pod inherits its node's DNS resolution, so I've kept the entry).
Kubernetes Version: 1.19.14
Ubuntu Version: 18.04 LTS
Need help as to whether this is normal behavior and what can be done if I want pod to be able to resolve hostnames on local LAN as well?
What happens
Need help as to whether this is normal behavior
This is normal behaviour. There is no DNS server in the network where your virtual machines are hosted, and Kubernetes has its own DNS server inside the cluster; it simply doesn't know what happens on your host, and in particular what's in /etc/hosts, because pods don't have access to that file.
I read somewhere that a pod inherits its node's DNS resolution so I've kept the entry
This is where things get tricky. There are four available DNS policies, which are applied per pod. We will look at the two that are most commonly used:
"Default": The Pod inherits the name resolution configuration from the node that the pod runs on. See the related discussion for more details.
"ClusterFirst": Any DNS query that does not match the configured cluster domain suffix, such as "www.kubernetes.io", is forwarded to the upstream nameserver inherited from the node. Cluster administrators may have extra stub-domain and upstream DNS servers configured.
The trickiest part of all is this (from the same link above):
Note: "Default" is not the default DNS policy. If dnsPolicy is not explicitly specified, then "ClusterFirst" is used.
That means all pods that do not have a DNS policy set will run with ClusterFirst and won't be able to see /etc/resolv.conf on the host. I tried changing this to Default and indeed the pod could then resolve everything the host can, however internal (cluster) resolution stopped working, so it's not an option.
For example, the coredns deployment runs with the Default dnsPolicy, which allows coredns itself to resolve hosts.
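For reference, this is roughly how dnsPolicy is set on a pod (a minimal sketch, not taken from the question's manifests; the pod name and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: dns-default-example
spec:
  dnsPolicy: Default   # inherit the node's /etc/resolv.conf; omit this line (or use ClusterFirst) for cluster DNS
  containers:
  - name: test
    image: busybox:1.28
    command: ["sleep", "3600"]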
How this can be resolved
1. Add local domain to coreDNS
This requires adding an A record per host. Here's part of the edited coredns configmap.
This line should be within the .:53 { } block:
file /etc/coredns/local.record local
This part goes right after the block above ends (the SOA information was taken from the example; it doesn't make any difference here):
local.record: |
  local. IN SOA sns.dns.icann.org. noc.dns.icann.org. 2015082541 7200 3600 1209600 3600
  wrkr1. IN A 172.10.10.10
  wrkr2. IN A 172.11.11.11
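Put together, the .:53 { } block of the Corefile might look roughly like this (a sketch; the surrounding plugins are the usual kubeadm defaults and your Corefile may differ, the only addition is the file line):
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    file /etc/coredns/local.record local
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}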
Then the coreDNS deployment should be edited so that this key is mounted into the pod:
$ kubectl edit deploy coredns -n kube-system
volumes:
- configMap:
    defaultMode: 420
    items:
    - key: Corefile
      path: Corefile
    - key: local.record   # 1st line to add
      path: local.record  # 2nd line to add
    name: coredns
And restart coreDNS deployment:
$ kubectl rollout restart deploy coredns -n kube-system
Just in case check if coredns pods are running and ready:
$ kubectl get pods -A | grep coredns
kube-system coredns-6ddbbfd76-mk2wv 1/1 Running 0 4h46m
kube-system coredns-6ddbbfd76-ngrmq 1/1 Running 0 4h46m
If everything's done correctly, newly created pods will now be able to resolve hosts by their names. You can find an example in the coredns documentation.
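A quick way to verify from a throwaway pod (a hedged example; busybox:1.28 and the hostname wrkr1 are just placeholders matching the question):
$ kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup wrkr1
If the configmap and volume changes are in place, the lookup should return the A record configured above.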
2. Set up DNS server in the network
While avahi looks similar to a DNS server, it does not act like one. It's not possible to set up request forwarding from coredns to avahi, while it is possible to run a proper DNS server in the network and this way have everything resolved.
3. Deploy avahi to kubernetes cluster
There's a ready image with avahi here. If it's deployed into the cluster with dnsPolicy set to ClusterFirstWithHostNet and, most importantly, hostNetwork: true, it will be able to use the host adapter to discover all available hosts within the network.
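The relevant part of such a pod spec would look something like this (a sketch; the image name is a placeholder, since the linked image is not named in this excerpt):
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  containers:
  - name: avahi
    image: <avahi-image>   # placeholder for the linked avahi image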
Useful links:
Pods DNS policy
Custom DNS entries for kubernetes

Can't access pod of another Node

I installed a 3 node Kubernetes cluster with the RKE tool. The installation was successful with no errors, but I'm unable to ping from one pod to another pod.
If I ping a pod running on the worker2 node (node IP 10.222.22.47) I get a response, but no responses from pods running on worker1 (node IP 10.222.22.46).
My pods are as follows (see the attached screenshot).
Also I noticed that some pods have been given node IP addresses. The node IP addresses are:
Master1=10.222.22.45
Worker1=10.222.22.46
Worker2=10.222.22.47
cluster_cidr: 10.42.0.0/16
service_cluster_ip_range: 10.43.0.0/16
cluster_dns_server: 10.43.0.10
Overlay network - canal
OS- CentOS Linux release 7.8.2003
Kubernetes - v1.20.8 installed with rke tool
Docker - 20.10.7
Sysctl entries on all nodes:
The firewall was disabled on all nodes before the install.
Check - sysctl net.bridge.bridge-nf-call-ip6tables
net.bridge.bridge-nf-call-ip6tables = 1
Check - sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1
From the description and screenshots provided, some addons and controllers on master01 do not seem ready, and might need to be reconfigured.
You can also use routes to connect to pods on worker1. Check this for detailed instructions.
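As an illustration of the routing approach (a hedged sketch: the per-node pod subnet 10.42.1.0/24 is hypothetical, so check which subnet canal actually assigned to worker1 before adding anything):
# on a node that cannot reach worker1's pods
$ ip route add 10.42.1.0/24 via 10.222.22.46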
The reason was that the UDP ports were blocked.
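For reference, canal's flannel VXLAN backend typically uses UDP port 8472; on CentOS with firewalld, opening it would look something like this (a sketch, so verify the port your backend actually uses):
$ firewall-cmd --permanent --add-port=8472/udp
$ firewall-cmd --reload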

CoreDNS only works on one host in the kubernetes cluster

I have a kubernetes cluster with 3 nodes:
[root@ops001 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
azshara-k8s01 Ready <none> 143d v1.15.2
azshara-k8s02 Ready <none> 143d v1.15.2
azshara-k8s03 Ready <none> 143d v1.15.2
When I deployed some pods, I found that only one node, azshara-k8s03, could resolve DNS; the other two nodes could not. This is the /etc/resolv.conf of the azshara-k8s03 host node:
options timeout:2 attempts:3 rotate single-request-reopen
; generated by /usr/sbin/dhclient-script
nameserver 100.100.2.136
nameserver 100.100.2.138
This is the /etc/resolv.conf of the other 2 nodes:
nameserver 114.114.114.114
Should I keep them the same? What should I do to make DNS work fine on all 3 nodes?
Did you try whether 114.114.114.114 is actually reachable from your nodes? If not, change it to something that actually is ;-]
Also check which resolv.conf your kubelets actually use: it is often something other than /etc/resolv.conf. Do ps ax | grep kubelet, check the value of the --resolv-conf flag, and see if the DNSes in that file work correctly.
Update:
What names are failing to resolve on the 2 problematic nodes? Are these public names or internal only? If they are internal only, then 114.114.114.114 will not know about them. 100.100.2.136 and 100.100.2.138 are not reachable for me: are they your internal DNSes? If so, try changing /etc/resolv.conf on the 2 nodes that don't work to be the same as on the one that works.
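If the kubelets do point at a different file, one way to adjust this (a hedged sketch; /var/lib/kubelet/config.yaml is the kubeadm default location and may differ in your setup) is the resolvConf field of the kubelet configuration, followed by a kubelet restart:
# /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
resolvConf: /etc/resolv.conf
$ systemctl restart kubelet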
First step: check that CoreDNS is listening on the port you specified. You can log in to a pod and use the telnet command to make sure the exposed DNS port is accessible (here I am using alpine; on centos use yum, on ubuntu or debian use apt-get):
apk add busybox-extras
telnet <your coredns server ip> <your coredns listening port>
Second step: log in to pods on each host machine and make sure the port is accessible from each pod. If the telnet connection does not succeed, you should fix your cluster network first.
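A quick end-to-end check of the cluster DNS itself (a hedged example; substitute the ClusterIP reported for your kube-dns/coredns service, it is not always the same across clusters):
$ kubectl get svc -n kube-system kube-dns
$ kubectl run dns-check --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default <cluster-dns-ip>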

unable to access dns from a kubernetes pod

I have a Kubernetes master and node set up in two CentOS VMs on my Windows 10 machine.
I used flannel for the CNI and deployed ambassador as an API gateway.
As the ambassador routes did not work, I analysed further and found that the DNS (IP 10.96.0.10) is not accessible from a busybox pod, which means that none of the service names can be accessed. Could I get any suggestions please?
1. You should use the newest version of Flannel.
Flannel does not set up service IPs, kube-proxy does; you should look at kube-proxy on your nodes and ensure it is not reporting errors (see the sketch at the end of this answer).
I'd suggest taking a look at https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#tabs-pod-install-4 and ensuring you have met the requirements stated there.
A similar issue, but with the Calico plugin, can be found here: https://github.com/projectcalico/calico/issues/1798
2. Check if you have port 8285 open; flannel uses UDP port 8285 for sending encapsulated IP packets. Make sure to allow this traffic to pass between the hosts.
3. Ambassador includes an integrated diagnostics service to help with troubleshooting, which may be useful for you. By default, this is not exposed to the Internet. To view it, we'll need to get the name of one of the Ambassador pods:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
ambassador-3655608000-43x86 1/1 Running 0 2m
ambassador-3655608000-w63zf 1/1 Running 0 2m
Forwarding local port 8877 to one of the pods:
kubectl port-forward ambassador-3655608000-43x86 8877
will then let us view the diagnostics at http://localhost:8877/ambassador/v0/diag/.
The first point should solve your problem; if not, try the remaining ones.
I hope this helps.
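To check kube-proxy as suggested in point 1 (a hedged sketch using standard kubectl commands; the label and pod name follow the usual kubeadm conventions):
$ kubectl -n kube-system get pods -l k8s-app=kube-proxy
$ kubectl -n kube-system logs <kube-proxy-pod-name>   # substitute a name from the previous command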

How does DNS resolution work on Kubernetes with multiple networks?

I have a 4 node Kubernetes cluster, 1 x controller and 3 x workers. The following shows how they are configured with the versions.
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-ctrl-1 Ready master 1h v1.11.2 192.168.191.100 <none> Ubuntu 18.04.1 LTS 4.15.0-1021-aws docker://18.6.1
turtle-host-01 Ready <none> 1h v1.11.2 192.168.191.53 <none> Ubuntu 18.04.1 LTS 4.15.0-29-generic docker://18.6.1
turtle-host-02 Ready <none> 1h v1.11.2 192.168.191.2 <none> Ubuntu 18.04.1 LTS 4.15.0-34-generic docker://18.6.1
turtle-host-03 Ready <none> 1h v1.11.2 192.168.191.3 <none> Ubuntu 18.04.1 LTS 4.15.0-33-generic docker://18.6.1
Each of the nodes has two network interfaces, for argument's sake eth0 and eth1. eth1 is the network that I want the cluster to work on. I set up the controller using kubeadm init and passed --api-advertise-address 192.168.191.100. The worker nodes were then joined using this address.
Finally on each node I modified the kubelet service to have --node-ip set so that the layout looks as above.
The cluster appears to be working correctly and I can create pods, deployments etc. However, the issue I have is that none of the pods are able to use the kube-dns service for DNS resolution.
This is not a problem with resolution as such; rather, the machines cannot connect to the DNS service to perform the resolution. For example, if I run a busybox container and access it to perform nslookup, I get the following:
/ # nslookup www.google.co.uk
nslookup: read: Connection refused
nslookup: write to '10.96.0.10': Connection refused
I have a feeling that this is down to not using the default network, and because of that I suspect some iptables rules are not correct; that being said, these are just guesses.
I have tried both the Flannel overlay and now Weave Net. The pod CIDR range is 10.32.0.0/16 and the service CIDR is the default.
I have noticed that with Kubernetes 1.11 there are now pods called coredns rather than one kube-dns.
I hope that this is a good place to ask this question. I am sure I am missing something small but vital, so if anyone has any ideas that would be most welcome.
Update #1:
I should have said that the nodes are not all in the same place. I have a VPN running between them all and this is the network I want things to communicate over. It is an idea I had to try and have distributed nodes.
Update #2:
I saw another answer on SO (DNS in Kubernetes not working) that suggested kubelet needed to have --cluster-dns and --cluster-domain set. This is indeed the case on my DEV K8s cluster that I have running at home (on one network).
However it is not the case on this cluster and I suspect this is down to a later version. I did add the two settings to all nodes in the cluster, but it did not make things work.
Update #3
The topology of the cluster is as follows.
1 x Controller is in AWS
1 x Worker is in Azure
2 x Worker are physical machines in a colo Data Centre
All machines are connected to each other using ZeroTier VPN on the 192.168.191.0/24 network.
I have not configured any special routing. I agree that this is probably where the issue is, but I am not 100% sure what this routing should be.
With regard to kube-dns and nginx, I have not tainted my controller, so nginx is not on the master, nor is busybox. nginx and busybox are on workers 1 and 2 respectively.
I have used netcat to test connection to kube-dns and I get the following:
/ # nc -vv 10.96.0.10 53
nc: 10.96.0.10 (10.96.0.10:53): Connection refused
sent 0, rcvd 0
/ # nc -uvv 10.96.0.10 53
10.96.0.10 (10.96.0.10:53) open
The UDP connection does not complete.
I modified my setup so that I could run containers on the controller, so kube-dns, nginx and busybox are all on the controller, and I am able to connect and resolve DNS queries against 10.96.0.10.
So all this does point to routing or IPTables IMHO, I just need to work out what that should be.
Update #4
In response to comments I can confirm the following ping test results.
Master -> Azure Worker (Internet) : SUCCESS : Traceroute SUCCESS
Master -> Azure Worker (VPN) : SUCCESS : Traceroute SUCCESS
Azure Worker -> Master (Internet) : SUCCESS : Traceroute FAIL (too many hops)
Azure Worker -> Master (VPN) : SUCCESS : Traceroute SUCCESS
Master -> Colo Worker 1 (Internet) : SUCCESS : Traceroute SUCCESS
Master -> Colo Worker 1 (VPN) : SUCCESS : Traceroute SUCCESS
Colo Worker 1 -> Master (Internet) : SUCCESS : Traceroute FAIL (too many hops)
Colo Worker 1 -> Master (VPN) : SUCCESS : Traceroute SUCCESS
Update #5
After running the tests above, it got me thinking about routing and I wondered if it was as simple as providing a route to the controller over the VPN for the service CIDR range (10.96.0.0/12).
So on a host, not included in the cluster, I added a route thus:
route add -net 10.96.0.0/12 gw 192.168.191.100
And I could then resolve DNS using the kube-dns server address:
nslookup www.google.co.uk 10.96.0.10
So I then added a route, as above, to one of the worker nodes and tried the same. But it is blocked and I do not get a response.
Given that I can resolve DNS over the VPN with the appropriate route from a non-kubernetes machine, I can only think that there is an IPTables rule that needs updating or adding.
I think this is almost there, just one last bit to fix.
I realise this is wrong, as kube-proxy should handle the DNS resolution on each host. I am leaving it here for information.
Following the instructions at this page, try running this:
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: dns-example
spec:
  containers:
  - name: test
    image: nginx
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - 1.2.3.4
    searches:
    - ns1.svc.cluster.local
    - my.dns.search.suffix
    options:
    - name: ndots
      value: "2"
    - name: edns0
and see whether a manual configuration works, or whether you have some networking/DNS problem.
Sounds like you are running on AWS. I suspect that your AWS security group is not allowing DNS traffic through. You can try allowing all traffic to the security group(s) where all your masters and nodes are, to see if that's the problem.
You can also check that all your masters and nodes are allowing routing:
cat /proc/sys/net/ipv4/ip_forward
If not
echo 1 > /proc/sys/net/ipv4/ip_forward
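To make that setting survive a reboot (a standard sysctl approach, not specific to this answer):
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p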
Hope it helps.