Update default search domains in pods - kubernetes

When I exec into a container I see an /etc/resolv.conf file that looks like this:
$ cat /etc/resolv.conf
search namespace.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.43.0.10
options ndots:5
How can I append to the search domains for all containers that get deployed so that the search domains include extra entries? For example, if I wanted to add foo.com and bar.com by default for any pod, how can I update the search line to look like below?
search namespace.svc.cluster.local svc.cluster.local cluster.local foo.com bar.com
Notes:
This is a self-managed k8s cluster. I am able to update the DNS/cluster configuration however I need to. I have already updated the CoreDNS component to resolve against my nameservers correctly, but this setting needs to be applied to each pod, I would imagine.
I have looked at the pod spec, but this wouldn't be a great solution, as it would need to be added to every pod's (or deployment/job/replicaset/etc.) manifest in the system. I need this to be applied to all pods by default.
Due to the way hostnames are returned in numerous existing services, I cannot reasonably expect hostnames to be fully qualified domain names. This is an effort to maintain backwards compatibility with many services we already have (e.g. an LDAP request might return host fizz, but the lookup will need to fully resolve to fizz.foo.com). This is the way bare metal machines and VMs are normally configured here.

I found a possible solution, but I won't mark it as correct myself, because it is not specific to k8s but rather to k3s. I might come back later and provide more detail.
In my case my test cluster was running k3s, which I assumed would act mostly the same as k8s. The way my environment was set up, my normal /etc/resolv.conf was being replaced by a new file on the node. I was able to work around this by supplying --resolv-conf with a file that looks like this:
$ cat /somedir/resolv.conf
search foo.com bar.com
nameserver 8.8.8.8
Then start the server with /bin/k3s server --resolv-conf=/somedir/resolv.conf
Now when pods are spawned, k3s will parse this file for the search line and automatically append the search domains to whatever pod is created.
I'm not sure if I'm going to run into this issue again when I try this on actual k8s, but at least this gets me back up and running!
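For a stock kubelet-based cluster, the equivalent knob should be the kubelet's own --resolv-conf flag (or the resolvConf field in the kubelet config file), which tells kubelet which host file to derive pod DNS settings from. A minimal sketch, assuming the same /somedir/resolv.conf as above and a kubelet config file at /var/lib/kubelet/config.yaml (the path varies by installer):
# /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# kubelet appends the search domains from this file (instead of the node's
# /etc/resolv.conf) to the cluster search domains of every ClusterFirst pod
resolvConf: /somedir/resolv.conf
After restarting kubelet on each node, newly created pods should get the extra search domains appended, much like in the k3s case.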

Related

How can I disable / ignore proxy settings from inside a kubernetes pod only for requests directed to kubernetes services?

I have set these environment variables inside my pod named main_pod.
$ env
HTTP_PROXY=http://myproxy.com
http_proxy=http://myproxy.com
I also have another dynamic pod named in the pattern sub_pod-{number}, which has a service attached to it, also called sub_pod-{number}.
So, if I add a NO_PROXY=sub_pod-1 environment variable in main_pod, a request to http://sub_pod-1:5000/health_check will run successfully, as it won't be directed through the proxy, which is fine.
But I want this process to be dynamic. sub_pod-45 might spawn at runtime and sub_pod-1 might get destroyed. Is there any better way to handle this than updating NO_PROXY on every pod creation/destruction?
Is there any resource / network policy / egress rule with which I can tell the pod: if the domain name belongs to a Kubernetes service, do not route it through the proxy server?
Or can I simply use regex or glob patterns in the NO_PROXY env variable, like NO_PROXY=sub_pod-*?
Edited
Result of nslookup
root@tmp-shell:/# nslookup sub_pod-1
Server: 10.43.0.10
Address: 10.43.0.10#53
Name: sub_pod-1.default.svc.cluster.local
Address: 10.43.22.139
When no_proxy=cluster.local,
The proxy was bypassed when requesting with the FQDN:
res = requests.get('http://sub_pod-1.default.svc.cluster.local:5000')
The proxy was not bypassed when requesting with only the service name:
res = requests.get('http://sub_pod-1:5000') # I want this to work
I would not want to ask my developers to change the applications to use FQDNs.
Is there any way the cluster can identify whether a URL resolves to a service inside the cluster network and, if so, not route the request through the proxy?
Libraries that support the http_proxy environment variable generally also support a matching no_proxy that names things that shouldn't be proxied. The exact syntax seems to vary across languages and libraries but it does seem to be universal that setting no_proxy=example.com causes anything.example.com to not be proxied either.
This is relevant because the Kubernetes DNS system creates its names in a domain based on the cluster name, by default cluster.local. The canonical form of a Service DNS name, for example, is service-name.namespace-name.svc.cluster.local., where service-name and namespace-name are the names of the corresponding Kubernetes objects.
I suspect this means it would work to do two things:
Set an environment variable no_proxy=cluster.local; and
Make sure to use the FQDN form when calling other services, service.namespace.svc.cluster.local.
Pods have similar naming, but are in a pod.cluster.local subdomain. The cluster.local value is configurable at a cluster level and it may be different in your environment.
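If the no_proxy=cluster.local route works for your HTTP libraries, a minimal sketch of wiring it into a pod spec (pod name and image are placeholders, not taken from the question):
apiVersion: v1
kind: Pod
metadata:
  name: main-pod              # hypothetical name
spec:
  containers:
    - name: app
      image: python:3.11      # placeholder image
      env:
        - name: http_proxy
          value: http://myproxy.com
        - name: HTTP_PROXY
          value: http://myproxy.com
        # any name under cluster.local (i.e. every Service FQDN) bypasses the proxy
        - name: no_proxy
          value: cluster.local
        - name: NO_PROXY
          value: cluster.local
The calls themselves then need the FQDN form, e.g. http://sub_pod-1.default.svc.cluster.local:5000, as described above.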

how to configure kubespray DNS for bare-metal

I am relatively new to Kubernetes and have a project for my university class: building a Kubernetes cluster on bare metal.
For this I have set up a PoC environment of 6 machines (of which 3 are KVM machines on one node). All the administration is done by MAAS, meaning DHCP and DNS are handled by that one machine. I have a DNS zone, k8s.example.com, delegated to the MAAS DNS server, which contains all the machines. The whole network is in its own VLAN, 10.0.10.0/24, with the MetalLB IP range excluded from DHCP.
Software-wise, all hosts are running Ubuntu 20.04, and I use Kubespray to deploy everything, meaning Kubernetes, MetalLB and the nginx ingress controller. My corresponding Kubespray values are:
dashboard_enabled: false
ingress_nginx_enabled: true
ingress_nginx_host_network: true
kube_proxy_strict_arp: true
metallb_enabled: true
metallb_speaker_enabled: true
metallb_ip_range:
- "10.0.10.100-10.0.10.120"
kubeconfig_localhost: true
My problem is that I cannot get DNS resolution from inside the cluster out to the Internet to work.
I had a wildcard A record for *.k8s.example.com pointing to the nginx ingress external IP, which worked fine for making every pod accessible from outside.
The problem was that containers inside the cluster could no longer reach the Internet; every request was routed via the ingress. If I tried to reach www.google.net, the lookup would actually resolve www.google.net.k8s.example.com, which makes kind of sense. Only .com domains could still be reached without problems (e.g. www.google.com). After removing the wildcard A record it worked fine again. All pods inside the cluster have no problem reaching each other.
There are several configuration possibilities I could tweak, yet after 2 weeks I would really prefer a solution that follows best practice and is done right.
I would really love to be able to work with a wildcard A record, but I fear that might not be possible.
I hope I have supplied all the information needed to understand my problem.
EDIT:
I used the standard Kubespray DNS config, as I was told it would suffice:
DNS configuration.
# Kubernetes cluster name, also will be used as DNS domain
cluster_name: cluster.local
# Subdomains of DNS domain to be resolved via /etc/resolv.conf for hostnet pods
ndots: 2
# Can be coredns, coredns_dual, manual or none
dns_mode: coredns
# Set manual server if using a custom cluster DNS server
# manual_dns_server: 10.x.x.x
# Enable nodelocal dns cache
enable_nodelocaldns: true
nodelocaldns_ip: 169.254.25.10
nodelocaldns_health_port: 9254
# nodelocaldns_external_zones:
# - zones:
# - example.com
# - example.io:1053
# nameservers:
# - 1.1.1.1
# - 2.2.2.2
# cache: 5
# - zones:
# - https://mycompany.local:4453
# nameservers:
# - 192.168.0.53
# cache: 0
# Enable k8s_external plugin for CoreDNS
enable_coredns_k8s_external: false
coredns_k8s_external_zone: k8s_external.local
# Enable endpoint_pod_names option for kubernetes plugin
enable_coredns_k8s_endpoint_pod_names: false
# Can be docker_dns, host_resolvconf or none
resolvconf_mode: docker_dns
# Deploy netchecker app to verify DNS resolve as an HTTP service
deploy_netchecker: false
# Ip address of the kubernetes skydns service
skydns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(3)|ipaddr('address') }}"
skydns_server_secondary: "{{ kube_service_addresses|ipaddr('net')|ipaddr(4)|ipaddr('address') }}"
dns_domain: "{{ cluster_name }}"
What I noticed is that the /etc/resolv.conf of pods looks like this:
/ $ cat /etc/resolv.conf
nameserver 169.254.25.10
search flux-system.svc.cluster.local svc.cluster.local cluster.local k8s.example.com maas
options ndots:5
For example, on the node (which is managed by MAAS) it is:
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 127.0.0.53
options edns0 trust-ad
search k8s.example.com maas
As discussed in the comments, the issue is with the resolv.conf on your Kubernetes nodes, and the fact that you are using a wildcard record that matches one of the domains in that resolv.conf search list.
Any name you query, from a node or a pod, is first looked up as ${input}.${search-entry}; ${input} on its own is only queried if none of the concatenations with the search entries returned a record. With a wildcard record covering one of the search domains, pretty much any name ends up resolving to that record.
Given that in this case the k8s.example.com search entry is pushed by MAAS and can't really be removed persistently, the next best solution would be to use another name for serving your Ingresses - either a subdomain, or something unrelated. Usually, changing an option in your DHCP server should be enough - or, arguably better: don't use DHCP for the hosts running your Kubernetes nodes.
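For example, if the wildcard record were moved to a subdomain that is not in the nodes' search list - say *.apps.k8s.example.com, an assumed name - an Ingress could look roughly like this without shadowing ordinary lookups:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami                            # hypothetical application
spec:
  ingressClassName: nginx
  rules:
    - host: whoami.apps.k8s.example.com   # under the assumed *.apps.k8s.example.com wildcard
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80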

Pod unable to install packages (apt-get update or apt-get install)

I have observed that the pods in my cluster are not able to install packages when I exec into them. While debugging I realized that it is due to the /etc/resolv.conf entries.
The /etc/resolv.conf from one of the pods is:
nameserver 192.168.27.116
search ui-container.svc.cluster.local svc.cluster.local cluster.local 192.168.27.116.nip.io
options ndots:5
If I remove the 192.168.27.116.nip.io entry from the resolv.conf of all the master and worker nodes, then the pods are able to connect to the Internet and apt-get update and apt-get install work. This is only a temporary workaround, because editing resolv.conf directly is not recommended, and I have observed that its contents get reset to the original when the nodes reboot.
Is it due to options ndots:5 in /etc/resolv.conf?
How can I fix this?
As a quick fix, you could leverage dnsConfig in the pod spec to override the default DNS configuration; more details: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config
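A minimal sketch of that quick fix, assuming you simply want to reuse the values from the resolv.conf above minus the nip.io search entry (pod name and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: ui-pod                  # hypothetical name
spec:
  dnsPolicy: "None"             # ignore the node/cluster defaults entirely
  dnsConfig:
    nameservers:
      - 192.168.27.116          # the DNS server from the question
    searches:                   # same list as before, without 192.168.27.116.nip.io
      - ui-container.svc.cluster.local
      - svc.cluster.local
      - cluster.local
    options:
      - name: ndots
        value: "5"
  containers:
    - name: app
      image: ubuntu:22.04       # placeholder image
      command: ["sleep", "infinity"]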
No, it's because of the nip.io address in your resolv.conf. nip.io is a special wildcard DNS domain used for reflection and mocking (any <ip>.nip.io name resolves back to that IP). It's not the ndots:5.
Adding to the above answers: resolv.conf usually follows this pattern:
nameserver <local domain server provided by docker>
search <pod namespace>.svc.cluster.local svc.cluster.local cluster.local <host/node's actual DNS>
options ndots:5
The last part, <host/node's actual DNS>, is usually your lab's or system-wide DNS that is part of your network. So you can either override it with a pod DNS policy/dnsConfig, or update it in your lab. nip.io is just an FQDN generator for where one is needed, so you could come up with a local DNS name that resolves within your lab instead of relying on it.

K8s coredns and flannel nameserver limit exceeded

I have been trying to set up k8s on a single node; everything installed fine. But when I check the status of my kube-system pods,
the CNI (flannel) pod has crashed with the reason: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: x.x.x.x x.x.x.x x.x.x.x
and the CoreDNS pods are stuck in ContainerCreating.
In my office, the current server has been configured with a static IP, and when I checked /etc/resolv.conf,
This is the output
# Generated by NetworkManager
search ORGDOMAIN.BIZ
nameserver 192.168.1.12
nameserver 192.168.2.137
nameserver 192.168.2.136
# NOTE: the libc resolver may not support more than 3 nameservers.
# The nameservers listed below may not be recognized.
nameserver 192.168.1.10
nameserver 192.168.1.11
I'm unable to find the root cause; what should I be looking at?
In short, you have too many entries in /etc/resolv.conf.
This is a known issue:
Some Linux distributions (e.g. Ubuntu), use a local DNS resolver by default (systemd-resolved). Systemd-resolved moves and replaces /etc/resolv.conf with a stub file that can cause a fatal forwarding loop when resolving names in upstream servers. This can be fixed manually by using kubelet’s --resolv-conf flag to point to the correct resolv.conf (With systemd-resolved, this is /run/systemd/resolve/resolv.conf). kubeadm (>= 1.11) automatically detects systemd-resolved, and adjusts the kubelet flags accordingly.
Also
Linux’s libc is impossibly stuck (see this bug from 2005) with limits of just 3 DNS nameserver records and 6 DNS search records. Kubernetes needs to consume 1 nameserver record and 3 search records. This means that if a local installation already uses 3 nameservers or uses more than 3 searches, some of those settings will be lost. As a partial workaround, the node can run dnsmasq which will provide more nameserver entries, but not more search entries. You can also use kubelet’s --resolv-conf flag.
If you are using Alpine version 3.3 or earlier as your base image, DNS may not work properly owing to a known issue with Alpine. Check here for more information.
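Either way, the --resolv-conf workaround mentioned above boils down to pointing kubelet at a file that stays within those limits. A sketch, assuming a kubelet config file at /var/lib/kubelet/config.yaml (the path varies by installer) and a trimmed resolv.conf that you maintain yourself:
# /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# hand kubelet a file that keeps within libc's limits
# (at most 3 nameservers and 6 search domains)
resolvConf: /etc/kubernetes/trimmed-resolv.conf   # hypothetical file with <= 3 nameservers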
You could possibly change that in the Kubernetes code, but I'm not sure about the consequences, as it's set to that value on purpose.
The code can be found here:
const (
    // Limits on various DNS parameters. These are derived from
    // restrictions in Linux libc name resolution handling.
    // Max number of DNS name servers.
    MaxDNSNameservers = 3
    // Max number of domains in search path.
    MaxDNSSearchPaths = 6
    // Max number of characters in search path.
    MaxDNSSearchListChars = 256
)

Kubernetes NFS: Using service name instead of hardcoded server IP address

I was able to get it working by following the NFS example in Kubernetes:
https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/nfs
However, when I want to automate all the steps, I need to find the IP and update the nfs-pv.yaml PV file with the hard-coded IP address, as mentioned on the linked example page:
Replace the invalid IP in the nfs PV. (In the future, we'll be able to
tie these together using the service names, but for now, you have to
hardcode the IP.)
Now, I wonder how we can tie these together using the service names.
Or is it not possible in the latest version of Kubernetes (as of today, the latest stable version is v1.6.2)?
I got it working after adding the kube-dns address to each minion/node where Kubernetes is running. After logging in to each minion, update the resolv.conf file as follows:
cat /etc/resolv.conf
# Generated by NetworkManager
search openstacklocal localdomai
nameserver 10.0.0.10 # I added this line
nameserver 159.107.164.10
nameserver 153.88.112.200
....
I am not sure whether it is the best way, but it works.
Any better solution is welcome.
You can do this with the help of kube-dns.
Check whether its service is running:
kubectl get svc --namespace=kube-system
and the kube-dns pod as well:
kubectl get pods --namespace=kube-system
You have to add the corresponding nameserver (the kube-dns service IP) on each node in the cluster.
For more troubleshooting, follow this document:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
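For reference, once the nodes can resolve cluster DNS as described above, the PV would only need the Service DNS name instead of a hard-coded IP. A sketch, assuming a Service named nfs-server in the default namespace (as in the linked example):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # this name only resolves if the node performing the mount can reach kube-dns
    server: nfs-server.default.svc.cluster.local
    path: "/"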