We have to use two name servers in a pod deployed in a cluster managed by another team:
CoreDNS, for service name resolution of other pods inside the cluster.
A custom DNS server, for some external DNS queries made by the application inside the pod.
We have set dnsPolicy to ClusterFirst and also specified a nameserver in dnsConfig with the IP of the custom DNS server, plus options with ndots: 1.
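For reference, the relevant part of the pod spec looks roughly like this (the nameserver IP and names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  dnsPolicy: ClusterFirst
  dnsConfig:
    nameservers:
      - 10.0.0.50          # placeholder: IP of the custom DNS server
    options:
      - name: ndots
        value: "1"
  containers:
    - name: app
      image: app-image     # placeholder image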
After deploying the pod we got the correct entries in /etc/resolv.conf, with the CoreDNS entry first. But when the application tries to resolve a domain, it first queries the absolute name (because ndots:1 is specified in /etc/resolv.conf) against the first nameserver listed in /etc/resolv.conf. After that fails, it appends the search suffix that was automatically inserted because of dnsPolicy: ClusterFirst, queries the first nameserver again, and only then tries the second nameserver.
Why does it not try the absolute-name query against the second nameserver after the failure from the first nameserver, when in the case where it appends the search suffix it does query both nameservers in sequential order?
Is there any way we can insert the custom DNS entry at the top?
Note: we cannot use the forward functionality of CoreDNS, as this DNS server is used by other pods/services in the cluster.
I have set these environment variables inside my pod named main_pod.
$ env
HTTP_PROXY=http://myproxy.com
http_proxy=http://myproxy.com
I also have other dynamic pods named in the pattern sub_pod-{number}, each with a service attached to it called sub_pod-{number}.
So, if I add a NO_PROXY=sub_pod-1 environment variable in main_pod, a request to http://sub_pod-1:5000/health_check runs successfully, as it won't be directed through the proxy, which is fine.
But I want this process to be dynamic. sub_pod-45 might spawn at runtime and sub_pod-1 might get destroyed. Is there any better way to handle this than updating NO_PROXY on every pod creation / destruction?
Is there any resource / network policy / egress rule with which I can tell the pod: if the domain name belongs to a Kubernetes service, do not route it through the proxy server?
Or can I simply use regex or glob patterns in the NO_PROXY env variable, like NO_PROXY=sub_pod-*?
Edited
Result of nslookup
root@tmp-shell:/# nslookup sub_pod-1
Server: 10.43.0.10
Address: 10.43.0.10#53
Name: sub_pod-1.default.svc.cluster.local
Address: 10.43.22.139
With no_proxy=cluster.local set:
The proxy is bypassed when requesting with the FQDN:
res = requests.get('http://sub_pod-1.default.svc.cluster.local:5000')
The proxy is not bypassed when requesting with the service name only:
res = requests.get('http://sub_pod-1:5000') # I want this to work
I would rather not ask my developers to change the application to use FQDNs.
Is there any way the cluster can identify whether a URL resolves to a service within the network and, if so, not route the request through the proxy?
Libraries that support the http_proxy environment variable generally also support a matching no_proxy that names things that shouldn't be proxied. The exact syntax seems to vary across languages and libraries but it does seem to be universal that setting no_proxy=example.com causes anything.example.com to not be proxied either.
This is relevant because the Kubernetes DNS system creates its names in a domain based on the cluster name, by default cluster.local. The canonical form of a Service DNS name, for example, is service-name.namespace-name.svc.cluster.local., where service-name and namespace-name are the names of the corresponding Kubernetes objects.
I suspect this means it would work to do two things:
Set an environment variable no_proxy=cluster.local; and
Make sure to use the FQDN form when calling other services, service.namespace.svc.cluster.local.
Pods have similar naming, but are in a pod.cluster.local subdomain. The cluster.local value is configurable at a cluster level and it may be different in your environment.
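As a sketch, wiring this into the pod spec could look like the following (the image name is hypothetical; both casings are set since libraries differ on which one they read):

  containers:
    - name: main
      image: main-image            # hypothetical image
      env:
        - name: http_proxy
          value: http://myproxy.com
        - name: HTTP_PROXY
          value: http://myproxy.com
        - name: no_proxy
          value: cluster.local
        - name: NO_PROXY
          value: cluster.local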
When I exec into a container I see an /etc/resolv.conf file that looks like this:
$ cat /etc/resolv.conf
search namespace.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.43.0.10
options ndots:5
How can I append to the search domains for all containers that get deployed, such that the search domains will include extra domains? E.g., if I wanted to add foo.com and bar.com by default for any pod, how can I update the search line to look like the one below?
search namespace.svc.cluster.local svc.cluster.local cluster.local foo.com bar.com
Notes:
This is a self-managed k8s cluster. I am able to update the DNS/cluster configuration however I need to. I have already updated the coredns component to resolve my nameservers correctly, but this setting needs to be applied to each pod, I would imagine.
I have looked at the pod spec, but this wouldn't be a great solution, as it would need to be added to every pod's (or deployment/job/replicaset/etc.) manifest in the system. I need this to be applied to all pods by default.
Due to the way hostnames are returned in numerous existing services, I cannot reasonably expect hostnames to be fully qualified domain names. This is an effort to maintain backwards compatibility with many services we already have (e.g. an LDAP request might return host fizz, but the lookup will need to fully resolve to fizz.foo.com). This is the way bare metal machines and VMs are normally configured here.
I found a possible solution, but I won't mark this as correct myself, because this was not directly specific to k8s, but rather k3s. I might come back later and provide more details.
In my case my test cluster was k3s, which I was assuming would act mostly the same as k8s. The way my environment was set up, my normal /etc/resolv.conf was being replaced by a new file on the node. I was able to circumvent this issue by supplying --resolv-conf, where the file looks like this:
$ cat /somedir/resolv.conf
search foo.com bar.com
nameserver 8.8.8.8
Then start the server with /bin/k3s server --resolv-conf=/somedir/resolv.conf
Now when pods are spawned, k3s will parse this file for the search line and automatically append the search domains to whatever pod is created.
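One way to check the result, assuming a disposable test pod named tmp-shell, is to read the generated file inside it; the extra domains should show up after the cluster suffixes:

$ kubectl exec tmp-shell -- cat /etc/resolv.conf
search namespace.svc.cluster.local svc.cluster.local cluster.local foo.com bar.com
nameserver 10.43.0.10
options ndots:5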
I'm not sure if I'm going to run into this issue again when I try this on actual k8s, but at least this gets me back up and running!
When I created a given k8s cluster I didn't specify anything for service-cluster-ip-range. Now when I create new LoadBalancer services, k8s is assigning IPs that collide with existing IP addresses within the network.
Checking the allowed range via kubectl cluster-info dump | grep service-cluster-ip-range gives me:
"--service-cluster-ip-range=10.96.0.0/12"
which (oddly enough) isn't where the assigned values are coming from. New values seem to have started at 10.95.96.235 and incremented from there.
Attempts to preset a valid IP in a service descriptor via spec.loadBalancerIP give me errors from kubelet:
Failed to allocate IP for "newservice": "10.95.96.233" is not allowed in config
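For reference, the descriptor I'm trying looks roughly like this (selector and port are illustrative; the IP is the one from the error):

apiVersion: v1
kind: Service
metadata:
  name: newservice
spec:
  type: LoadBalancer
  loadBalancerIP: 10.95.96.233   # rejected with "not allowed in config"
  selector:
    app: newservice              # illustrative selector
  ports:
    - port: 8080                 # illustrative port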
My questions are:
is it possible to change service-cluster-ip-range without rebuilding the entire cluster?
if not, do I have any other options for (pre)setting loadBalancerIP?
I was able to get it working by following the NFS example in Kubernetes:
https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/nfs
However, when I want to automate all the steps, I need to find the IP and update the nfs-pv.yaml PV file with the hardcoded IP address, as mentioned on the example page:
Replace the invalid IP in the nfs PV. (In the future, we'll be able to
tie these together using the service names, but for now, you have to
hardcode the IP.)
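For context, the PV from that example boils down to something like this, with the server field holding the IP that has to be hardcoded (capacity and path values are illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi               # illustrative size
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.244.1.4         # the hardcoded IP that must be looked up and replaced
    path: "/"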
Now, I wonder: how can we tie these together using service names?
Or is that not possible in the latest version of Kubernetes (as of today, the latest stable version is v1.6.2)?
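What tying them together would presumably look like, assuming the NFS server sits behind a Service named nfs-server in the default namespace, is replacing the IP with the service's DNS name:

  nfs:
    server: nfs-server.default.svc.cluster.local   # assumed service name; the node must be able to resolve it
    path: "/"

The catch is that the NFS mount is performed on the node by kubelet, so the node's own resolver has to know about cluster DNS, which is what the answers below deal with.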
I got it working after I added the kube-dns address to each minion/node where Kubernetes is running. After logging in to each minion, update the resolv.conf file as follows:
cat /etc/resolv.conf
# Generated by NetworkManager
search openstacklocal localdomain
nameserver 10.0.0.10 # I added this line
nameserver 159.107.164.10
nameserver 153.88.112.200
....
I am not sure if it is the best way, but this works.
Any better solution is welcome.
You can do this with the help of kube-dns.
Check whether its service is running:
kubectl get svc --namespace=kube-system
and check the kube-dns pods as well:
kubectl get pods --namespace=kube-system
You have to add the respective nameserver, according to kube-dns, on each node in the cluster.
For more troubleshooting, follow this document:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
Background:
Let's say I have a replication controller with some pods. When these pods were first deployed they were configured to expose port 8080. A service (of type LoadBalancer) was also created to expose port 8080 publicly. Later we decide that we want to expose an additional port from the pods (port 8081). We change the pod definition and do a rolling-update with no downtime, great! But we want this port to be publicly accessible as well.
Question:
Is there a good way to update a service without downtime (for example by adding an additional port to expose)? If I just do:
kubectl replace -f my-service-with-an-additional-port.json
I get the following error message:
Replace failed: spec.clusterIP: invalid value '': field is immutable
If you name the ports in the pods, you can specify the target ports by name in the service rather than by number, and then the same service can target pods using different port numbers.
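A sketch of that approach, with illustrative names and numbers: name the port in the pod template, then reference it by name from the service:

# in the pod template:
      ports:
        - name: web              # illustrative port name
          containerPort: 8080
# in the service:
  ports:
    - port: 8080
      targetPort: web            # resolves to whatever containerPort is named "web"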
Or, as Yu-Ju suggested, you can do a read-modify-write of the live state of the service, such as via kubectl edit. The error message was due to not specifying the clusterIP that had already been set.
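In practice that can be as simple as (service name illustrative):

kubectl edit service my-service    # opens the live object; add the new port entry and save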
In such case you can create a second service to expose the second port, it won't conflict with the other one and you'll have no downtime.
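A minimal sketch of that second service, assuming the original keeps port 8080 and the pods are matched by an app: my-app selector:

apiVersion: v1
kind: Service
metadata:
  name: my-service-8081    # hypothetical name for the additional service
spec:
  type: LoadBalancer
  selector:
    app: my-app            # same selector as the existing service (assumed)
  ports:
    - port: 8081
      targetPort: 8081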
If you have more than one pod running for the same service, you may use the Kubernetes Engine within the Google Cloud Console as follows:
Under "Workloads", select your Replication Controller. Within that screen, click "EDIT" then update and save your replication controller details.
Under "Discover & Load Balancing", select your Service. Within that screen, click "EDIT" then update and save your service details. If you changed ports you should see those reflecting under the column "Endpoints" when you've finished editing the details.
Assuming you have at least two pods running on a machine (and a restart policy of Always), if you wanted to update the pods with the new configuration or container image:
Under "Workloads", select your Replication Controller. Within that screen, scroll down to "Managed pods". Select a pod, then in that screen click "KUBECTL" -> "Delete". Note, you can do the same with the command line: kubectl delete pod <podname>. This would delete and restart it with the newly downloaded configuration and container image. Delete each pod one at a time, making sure to wait until a pod has fully restarted and working (i.e. check logs, debug) etc, before deleting the next.