I have been trying to set up Kubernetes on a single node. Everything installed fine, but when I check the status of my kube-system pods:
the CNI (flannel) pod has crashed, with the reason: "Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: x.x.x.x x.x.x.x x.x.x.x"
the CoreDNS pods are stuck in ContainerCreating.
In my office, the server has been configured with a static IP, and when I checked /etc/resolv.conf,
this is the output:
# Generated by NetworkManager
search ORGDOMAIN.BIZ
nameserver 192.168.1.12
nameserver 192.168.2.137
nameserver 192.168.2.136
# NOTE: the libc resolver may not support more than 3 nameservers.
# The nameservers listed below may not be recognized.
nameserver 192.168.1.10
nameserver 192.168.1.11
I'm unable to find the root cause. What should I be looking at?
In short, you have too many entries in /etc/resolv.conf.
This is a known issue:
Some Linux distributions (e.g. Ubuntu), use a local DNS resolver by default (systemd-resolved). Systemd-resolved moves and replaces /etc/resolv.conf with a stub file that can cause a fatal forwarding loop when resolving names in upstream servers. This can be fixed manually by using kubelet’s --resolv-conf flag to point to the correct resolv.conf (With systemd-resolved, this is /run/systemd/resolve/resolv.conf). kubeadm (>= 1.11) automatically detects systemd-resolved, and adjusts the kubelet flags accordingly.
Also
Linux’s libc is impossibly stuck (see this bug from 2005) with limits of just 3 DNS nameserver records and 6 DNS search records. Kubernetes needs to consume 1 nameserver record and 3 search records. This means that if a local installation already uses 3 nameservers or uses more than 3 searches, some of those settings will be lost. As a partial workaround, the node can run dnsmasq which will provide more nameserver entries, but not more search entries. You can also use kubelet’s --resolv-conf flag.
If you are using Alpine version 3.3 or earlier as your base image, DNS may not work properly owing to a known issue with Alpine. Check here for more information.
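In practice, for a setup like the one above, the fix is usually to point the kubelet at a trimmed copy of resolv.conf that keeps at most three nameservers. A minimal sketch, assuming a kubeadm-style kubelet config file at /var/lib/kubelet/config.yaml and a hand-maintained /etc/kubernetes/resolv.conf (both paths are assumptions, adjust for your install):

# /var/lib/kubelet/config.yaml (kubeadm default location; assumption)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Point the kubelet at a resolv.conf that lists no more than three
# nameservers, e.g. a copy of the NetworkManager file with the extra
# 192.168.1.10 / 192.168.1.11 entries removed.
resolvConf: /etc/kubernetes/resolv.conf

Restart the kubelet after changing the file; the equivalent command-line flag is --resolv-conf.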
You could possibly change that limit in the Kubernetes code, but I'm not sure about the consequences, as it is set to that value on purpose.
The code can be found here:
const (
    // Limits on various DNS parameters. These are derived from
    // restrictions in Linux libc name resolution handling.
    // Max number of DNS name servers.
    MaxDNSNameservers = 3
    // Max number of domains in search path.
    MaxDNSSearchPaths = 6
    // Max number of characters in search path.
    MaxDNSSearchListChars = 256
)
Related
We have to use two name servers in a pod deployed in a cluster managed by another team:
CoreDNS, for service name resolution of other pods inside the cluster
A custom DNS server, for some external DNS queries made by the application inside the pod
We use dnsPolicy: ClusterFirst and also specify the custom DNS server's IP as a nameserver in dnsConfig, with options ndots = 1, roughly as in the sketch below.
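(A sketch only; the pod name, image, and the 10.0.0.53 nameserver IP are placeholders.)

apiVersion: v1
kind: Pod
metadata:
  name: app-with-custom-dns        # placeholder name
spec:
  dnsPolicy: ClusterFirst          # cluster DNS (CoreDNS) stays the first resolver
  dnsConfig:
    nameservers:
      - 10.0.0.53                  # placeholder for the custom DNS server IP
    options:
      - name: ndots
        value: "1"                 # query the absolute name first for anything containing a dot
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image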
After deploying the pod we get the correct entries in /etc/resolv.conf, with the CoreDNS entry first. But when the application tries to resolve some domain, it first queries the absolute name (because of ndots:1 in /etc/resolv.conf) against the first name server only. After that fails, it appends the search string that was automatically inserted by dnsPolicy: ClusterFirst, queries the first name server again, and then tries the second name server.
Why does it not try the absolute-name query against the second name server after the first one fails, while for the search-suffixed query it does try both name servers in sequential order?
Is there any way we can insert the custom DNS entry at the top?
Note: we cannot use the forward functionality of CoreDNS, as this DNS server is used by other pods/services in the cluster.
I am relatively new to Kubernetes and have a project for my university class: building a Kubernetes cluster on bare metal.
For this I have set up a PoC environment of 6 machines (3 of which are KVM machines on one node). All administration is done by MAAS, meaning DHCP and DNS are handled by that one machine. A DNS zone, k8s.example.com, is delegated to the MAAS DNS server, and all the machines live inside it. The whole network is in its own VLAN, 10.0.10.0/24, with the MetalLB IP range reserved from DHCP.
(A diagram illustrating the simple cluster setup was included here.)
Software-wise, all hosts are running Ubuntu 20.04 and I use Kubespray to deploy everything, meaning Kubernetes, MetalLB, and the nginx ingress controller. My corresponding Kubespray values are:
dashboard_enabled: false
ingress_nginx_enabled: true
ingress_nginx_host_network: true
kube_proxy_strict_arp: true
metallb_enabled: true
metallb_speaker_enabled: true
metallb_ip_range:
- "10.0.10.100-10.0.10.120"
kubeconfig_localhost: true
My problem is that I cannot get DNS from inside the cluster out to the Internet to work.
I had a wildcard A record set for *.k8s.example.com pointing to the nginx ingress external IP, which worked fine for making every pod accessible from outside.
The problem was that no container inside the cluster could reach the Internet anymore; every request was routed via the ingress. If I tried to reach www.google.net it would actually try to reach www.google.net.k8s.example.com, which makes some sense. Only .com domains could be reached without problems (e.g. www.google.com). After removing the wildcard A record it worked fine again. Pods inside the cluster have no problem reaching each other.
There are several configuration possibilities I could tweak, but after 2 weeks I would really prefer a solution that is based on best practice and done right.
I would really love to be able to work with a wildcard A record, but I fear that might not be possible.
I hope I have supplied all the information needed to understand my problem.
EDIT:
I used the standard Kubespray DNS config, as I was told it would suffice:
# DNS configuration.
# Kubernetes cluster name, also will be used as DNS domain
cluster_name: cluster.local
# Subdomains of DNS domain to be resolved via /etc/resolv.conf for hostnet pods
ndots: 2
# Can be coredns, coredns_dual, manual or none
dns_mode: coredns
# Set manual server if using a custom cluster DNS server
# manual_dns_server: 10.x.x.x
# Enable nodelocal dns cache
enable_nodelocaldns: true
nodelocaldns_ip: 169.254.25.10
nodelocaldns_health_port: 9254
# nodelocaldns_external_zones:
# - zones:
# - example.com
# - example.io:1053
# nameservers:
# - 1.1.1.1
# - 2.2.2.2
# cache: 5
# - zones:
# - https://mycompany.local:4453
# nameservers:
# - 192.168.0.53
# cache: 0
# Enable k8s_external plugin for CoreDNS
enable_coredns_k8s_external: false
coredns_k8s_external_zone: k8s_external.local
# Enable endpoint_pod_names option for kubernetes plugin
enable_coredns_k8s_endpoint_pod_names: false
# Can be docker_dns, host_resolvconf or none
resolvconf_mode: docker_dns
# Deploy netchecker app to verify DNS resolve as an HTTP service
deploy_netchecker: false
# Ip address of the kubernetes skydns service
skydns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(3)|ipaddr('address') }}"
skydns_server_secondary: "{{ kube_service_addresses|ipaddr('net')|ipaddr(4)|ipaddr('address') }}"
dns_domain: "{{ cluster_name }}"
What I noticed is that the /etc/resolv.conf of pods looks like this:
/ $ cat /etc/resolv.conf
nameserver 169.254.25.10
search flux-system.svc.cluster.local svc.cluster.local cluster.local k8s.example.com maas
options ndots:5
For example, on a node (which is managed by MAAS) it is:
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 127.0.0.53
options edns0 trust-ad
search k8s.example.com maas
As discussed in the comments, the issue is with the resolv.conf on your Kubernetes nodes and the fact that you are using a wildcard record that matches one of the domains in that resolv.conf's search entries.
Any name you look up, from a node or a pod, is first searched as ${input}.${search-entry}; the bare ${input} is only queried if the concatenation with your search domains did not already return a record. Having a wildcard record for one of the domains in the search list therefore results in just about any name resolving to that record.
Given that in this case the k8s.example.com search domain is pushed by MAAS, and that we can't really remove it persistently, the next best solution would be to use another name for serving your Ingresses - either a subdomain, or something unrelated. Usually, changing an option in your DHCP server should be enough - or, arguably better, don't use DHCP for your Kubernetes nodes.
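For illustration, one way the subdomain approach could look (a sketch only - the apps subdomain and the hello names are made up): point a wildcard record *.apps.k8s.example.com at the ingress IP instead of *.k8s.example.com, and give your Ingress rules hosts underneath that subdomain:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello                            # hypothetical Ingress/Service name
spec:
  ingressClassName: nginx                # assumes the kubespray-deployed ingress class
  rules:
    - host: hello.apps.k8s.example.com   # under the dedicated wildcard subdomain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello
                port:
                  number: 80

Because the nodes' search domain is k8s.example.com rather than apps.k8s.example.com, lookups such as www.google.net.k8s.example.com no longer hit the wildcard.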
I have observed that pods in my cluster are not able to install packages when I exec into them. While debugging I realized that it is due to the /etc/resolv.conf entries.
The /etc/resolv.conf from one of the pods is:
nameserver 192.168.27.116
search ui-container.svc.cluster.local svc.cluster.local cluster.local 192.168.27.116.nip.io
options ndots:5
If I remove the entry 192.168.27.116.nip.io from the resolv.conf of all the master and worker nodes, then the pods are able to connect to the Internet, and apt-get update and apt-get install work. This is only a temporary workaround, because editing resolv.conf by hand is not recommended, and I have observed that its contents get reset to the original upon reboot of the nodes.
Is it due to options ndots:5 in the /etc/resolv.conf?
How can I fix this?
As a quick fix, you could leverage dnsConfig in the pod spec to override the default DNS configuration; more details: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config
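A minimal sketch of that approach for this case (pod name and image are placeholders): since dnsConfig entries are only appended when dnsPolicy is ClusterFirst, dropping the nip.io search entry requires dnsPolicy: "None" together with a full dnsConfig, e.g.:

apiVersion: v1
kind: Pod
metadata:
  name: ui-debug                    # placeholder name
spec:
  dnsPolicy: "None"                 # ignore node/cluster defaults, use dnsConfig only
  dnsConfig:
    nameservers:
      - 192.168.27.116              # the cluster DNS IP seen in the pod's resolv.conf
    searches:
      - ui-container.svc.cluster.local
      - svc.cluster.local
      - cluster.local               # note: the 192.168.27.116.nip.io entry is left out
    options:
      - name: ndots
        value: "5"
  containers:
    - name: ui
      image: ubuntu:20.04           # placeholder image
      command: ["sleep", "infinity"]

With the nip.io domain gone from the search list, apt-get can resolve external hosts normally.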
No, it's because of the nip.io address in your resolv.conf. nip.io is a special wildcard DNS domain that resolves any <ip>.nip.io name back to that IP, typically used for reflection and mocking. It's not the ndots:5.
Adding to the above answers: resolv.conf usually follows this pattern:
nameserver <local domain server provided by docker>
search <pod namespace>.svc.cluster.local svc.cluster.local cluster.local <host/node's actual DNS>
options ndots:5
The last part of the search line, <host/node's actual DNS>, is usually your lab's or system-wide DNS domain that is part of your network. So you can either override it with a pod DNS policy or update it in your lab. nip.io is just an FQDN maker for when one is needed, so you could come up with a local DNS name that resolves within your lab instead of relying on it.
When I exec into a container I see an /etc/resolv.conf file that looks like this:
$ cat /etc/resolv.conf
search namespace.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.43.0.10
options ndots:5
How can I append to the search domains for all containers that get deployed, so that the search domains include extra domains? E.g. if I wanted to add foo.com and bar.com by default for any pod, how can I update the search line to look like below?
search namespace.svc.cluster.local svc.cluster.local cluster.local foo.com bar.com
Notes:
This is a self-managed k8s cluster. I am able to update the DNS/cluster configuration however I need to. I have already updated the CoreDNS component to resolve my nameservers correctly, but this setting would need to be applied to each pod, I would imagine.
I have looked at the pod spec, but that wouldn't be a great solution, as it would need to be added to every pod (or deployment/job/replicaset/etc.) manifest in the system. I need this to be applied to all pods by default.
Due to the way hostnames are returned in numerous existing services, I cannot reasonably expect hostnames to be fully qualified domain names. This is an effort to maintain backwards compatibility with many services we already have (e.g. an LDAP request might return host fizz, but the lookup will need to fully resolve to fizz.foo.com). This is the way bare metal machines and VMs are normally configured here.
I found a possible solution, but I won't mark this as correct myself, because it is not directly specific to k8s but rather to k3s. I might come back later and provide more detail.
In my case my test cluster was a k3s install, which I assumed would act mostly the same as k8s. The way my environment was set up, my normal /etc/resolv.conf was being replaced by a new file on the node. I was able to circumvent this issue by supplying --resolv-conf, where the file looks like this:
$ cat /somedir/resolv.conf
search foo.com bar.com
nameserver 8.8.8.8
Then start the server with /bin/k3s server --resolv-conf=/somedir/resolv.conf
Now when pods are spawned, k3s will parse this file for the search line and automatically append the search domains to whatever pod is created.
I'm not sure if I'm going to run into this issue again when I try this on actual k8s, but at least this gets me back up and running!
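For reference, stock kubelet exposes the same knob: the --resolv-conf flag, or the resolvConf field of the kubelet config file, can point at such a file, and pods will inherit its search domains. A sketch, assuming the kubeadm default config path:

# /var/lib/kubelet/config.yaml (kubeadm default; assumption)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Pods will inherit the search domains (e.g. foo.com bar.com) from this file.
resolvConf: /somedir/resolv.conf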
I have
an OpenStack deployment (Queens release) with Octavia for LBaaS
a small (test) k8s cluster on top of it (3 nodes, 1 master), version 9.1.2
a deployment called hello which serves a simple web page saying 'hello world'; it works when accessed from within the cluster
I want to expose my deployment as a load balanced service with a floating IP.
I did kubectl expose deployment hello --type=LoadBalancer --name=my-service
It says (in kubectl describe service my-service):
Error creating load balancer (will retry): failed to ensure load balancer for service default/my-service: error getting floating ip for port 9cc6442b-2b2f-4b6a-8f91-65dbc2ff13d0: Resource not found
If I manually do: openstack floating ip set --port 9cc6442b-2b2f-4b6a-8f91-65dbc2ff13d0 356c8ffa-7bc2-43a9-a8d3-29147ae01727
where:
| ID | Floating IP Address | Port | Floating Network |
| 356c8ffa-7bc2-43a9-a8d3-29147ae01727 | 172.27.81.241 | None | eb31cc74-96ba-4394-aef4-0e94bec46d85 |
and /etc/kubernetes/cloud_config has:
[LoadBalancer]
subnet-id=6a6cdc35-8dda-4982-850e-53c6ee5a5085
floating-network-id=eb31cc74-96ba-4394-aef4-0e94bec46d85
use-octavia=True
(so it is looking for floating IPs on the correct network, and that subnet is the k8s internal subnet)
It all works.
So everything except "associate an IP" has worked. Why does this step fail? Where has k8s logged what it did and how it failed? I can only find docs for pod-level logging (and my pod is fine, serving its test webpage just great).
(I have lots of quota remaining for 'make more floating ips', and several unused ones hanging around)
I was able to find this: "No Ports Available when trying to associate a floating IP", and this: "Failed to Associate Floating IP". Maybe those will point you in the right direction.
I would recommend that you check the OpenStack community page and look for more answers there, as I'm not an expert in OpenStack.
As for your question:
Where has k8s logged what it did and how it failed?
You can use kubectl describe service <service_name>
Show details of a specific resource or group of resources
Print a detailed description of the selected resources, including related resources such as events or controllers. You may select a single object by name, all objects of that type, provide a name prefix, or label selector. For example:
$ kubectl describe TYPE NAME_PREFIX
For more debugging guidance, please check Debug Services.