We currently have pods in a Kubernetes cluster (AKS) that need to resolve two different domains.
The first domain is the cluster domain default.svc.cluster.local and the second one is mydns.local.
How can this be achieved?
I found the solution myself.
There are two ways to achieve the desired name resolution:
1. If your AKS cluster is within an Azure VNET, you can point the VNET's DNS settings at the custom DNS server that is able to resolve your custom domain. If your pods have no explicit DNS settings, resolution then works like this:
The pods first try to resolve the DNS request within CoreDNS; if that fails, the request falls back to the DNS settings of the host and is sent to the DNS server configured there. Since in Azure the DNS settings of the VNET are applied to the virtual machines, the correct DNS server gets asked.
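If you go this route, the VNET's DNS servers can be set with the Azure CLI, for example (a sketch; the resource group, VNET name and DNS server IP are assumptions):

    az network vnet update \
      --resource-group myResourceGroup \
      --name myVNet \
      --dns-servers 10.1.0.40

Note that running nodes only pick up the new DNS setting after a DHCP lease renewal or a reboot.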
2. Modify the CoreDNS settings in your AKS cluster with the following YAML:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns-custom
      namespace: kube-system
    data:
      yourdns.server: |
        yourdns.com:53 {
          errors
          cache 1
          proxy . 10.1.0.40
        }
Important to know: in AKS you can't overwrite the coredns ConfigMap itself; the Kubernetes master resets it to the default after a couple of seconds. Custom configuration has to go into a ConfigMap named "coredns-custom".
Also note that yourdns.server is not the DNS server: the key follows the pattern <domain>.server, and the DNS server IP is the one behind the proxy setting.
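To apply the ConfigMap and have CoreDNS pick up the change, something like the following should work (the file name is an assumption; k8s-app=kube-dns is the label AKS uses for its CoreDNS pods):

    kubectl apply -f coredns-custom.yaml
    # force a reload of the custom config by recreating the CoreDNS pods
    kubectl -n kube-system delete pod -l k8s-app=kube-dns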
I think you can use an Ingress and an ingress controller to manage the domains and paths. With an Ingress you can manage multiple domains and attach a service to a particular domain, as sketched below the links.
https://kubernetes.github.io/ingress-nginx/
Here is also a DigitalOcean tutorial on setting up ingress that you can follow:
https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
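For illustration, a single Ingress can route two domains to different services; a minimal sketch (the hostnames, service names and the nginx ingress class are placeholders):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: multi-domain
    spec:
      ingressClassName: nginx
      rules:
      - host: app1.example.com        # first domain -> app1-svc
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-svc
                port:
                  number: 80
      - host: app2.example.com        # second domain -> app2-svc
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2-svc
                port:
                  number: 80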
Regarding your second point, "2. Modify the CoreDNS settings in your AKS cluster with the following YAML:"
Note that the "forward" plugin should be used in place of "proxy", as noted here:
https://github.com/Azure/AKS/issues/1304
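With that correction applied, the coredns-custom ConfigMap from above would look like this (same placeholder domain and server IP as before):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns-custom
      namespace: kube-system
    data:
      yourdns.server: |
        yourdns.com:53 {
          errors
          cache 1
          forward . 10.1.0.40
        }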
I'm using minikube to have a local Kubernetes cluster.
I enabled the ingress addon with the following command :
    minikube addons enable ingress
I have created an application that is currently running in my cluster, a service tied to it, plus an ingress resource where mydomain.name forwards traffic to that service.
The issue I have: this does not work unless I add an entry like the following one to the /etc/hosts file:

    <kubernetes_cluster_ip_address> mydomain.name

I thought that having the ingress controller with an ingress resource would take care of the DNS resolution, and that I wouldn't need to add any entry to the /etc/hosts file, but maybe I am wrong.
Thank you for your help
We have a Docker image repository on GitLab which is hosted on the internal network (repo.mycomapanydomain.io).
My K8s deployment is failing with a "name not resolved" error for repo.mycomapanydomain.io.
I tried updating the kube-dns config as below, but I still get the same error.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kube-dns
      namespace: kube-system
    data:
      stubDomains: |
        {"mycomapanydomain": ["10.131.0.4"]}
      upstreamNameservers: |
        ["10.131.0.4"]
How can I make my resolv.conf contain the internal nameservers by default, or make K8s resolve against my internal DNS IPs?
Editing /etc/resolv.conf, either manually or automatically, is discouraged; see:
Internal DNS and resolv.conf
By default, most Linux distributions store DHCP information in resolv.conf. Compute Engine instances are configured to renew DHCP leases every 24 hours. For instances that are enabled for zonal DNS, the DHCP lease expires every hour. DHCP renewal overwrites this file, undoing any changes that you might have made. Instances using zonal DNS have both zonal and global entries in the resolv.conf file.
-- Cloud.google.com: Compute: Docs: Internal DNS: resolv.conf
Also:
Modifications on the boot disk of a node VM do not persist across node re-creations. Nodes are re-created during manual upgrade, auto-upgrade, auto-repair, and auto-scaling. In addition, nodes are re-created when you enable a feature that requires node re-creation, such as GKE sandbox, intranode visibility, and shielded nodes.
-- Cloud.google.com: Kubernetes Engine: Docs: Concepts: Node images: Modifications
As for:
How can I make my resolv.conf contain the internal nameservers by default, or make K8s resolve against my internal DNS IPs?
From the GCP and GKE perspective, you can use Cloud DNS to configure your DNS resolution in either of two ways:
- your whole DOMAIN resides in GCP infrastructure (and you specify all the records), or
- your DOMAIN queries are forwarded to a DNS server of your choosing.
You can create your DNS zone by following:
GCP Cloud Console (Web UI) -> Network Services -> Cloud DNS -> Create zone:
Assuming that you want to forward your DNS queries to your internal DNS server residing in GCP, your configuration should look similar to the one below:

[screenshot: Cloud DNS "Create zone" form configured as a private forwarding zone]

A side note!
Remember to follow the "Destination DNS servers" steps to allow the DNS queries to reach your DNS server.
Enter the internal IP address of your DNS server in the forwarding destination field.
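The same forwarding zone can also be created from the CLI; a sketch (the zone name, domain, network and forwarding IP are assumptions):

    gcloud dns managed-zones create internal-forwarding-zone \
        --description="Forward internal domain queries to the internal DNS server" \
        --dns-name="mycomapanydomain.io." \
        --visibility=private \
        --networks=my-vpc \
        --forwarding-targets=10.131.0.4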
After that your GKE cluster should be able to resolve the DNS queries of your DOMAIN.NAME.
Additional resources:
I found an article that shows how you can create a DNS forwarding zone for your GCP instances:
Medium.com: Faun: DNS forwarding zone and dns policy in GCP
I'm currently setting up a Kubernetes cluster where both private and public services are run. While public services should be accessible via the internet (and FQDNs), private services should not (the idea is to run a VPN inside the cluster where private services should be accessible via simple FQDNs).
At the moment, I'm using nginx-ingress and configure Ingress resources where I set the hostname for public resources. external-dns then adds the corresponding DNS records (in Google CloudDNS) - this already works.
The problem I'm facing now: I'm unsure about how I can add DNS records in the same way (i.e. simply specifying a host in Ingress definitions and using some ingress-class private), yet have these DNS records only be accessible from within the cluster.
I was under the impression that I can add these records to the Corefile that CoreDNS is using. However, I fail to figure out how this can be automated.
Thank you for any help!
If you don't want them to be accessed publicly, you don't want to add ingress rules for them. Ingress is only to route external traffic into your cluster.
All your services are already registered in CoreDNS and accessible with their local name, no need to add anything else.
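You can verify that from inside the cluster with a throwaway pod, e.g. (the image and service name are placeholders):

    kubectl run dnstest --rm -it --restart=Never --image=busybox:1.36 -- \
        nslookup my-private-service.default.svc.cluster.local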
I managed to resolve the problem myself... wrote a little Go application which watches Ingress resources and adds rewrite rules to the Corefile read by CoreDNS accordingly... works like a charm :)
PS: If anyone wants to use the tool, let me know. I'm happy to make it open-source if there is any demand.
Kubernetes has built-in DNS, and each service receives an internal FQDN.
These services are not available from the outside unless:
- the service type is 'LoadBalancer', or
- you define an Ingress for that service (assuming you have an ingress controller like nginx already deployed).
So a sample service deployed in the 'default' namespace is accessible inside the cluster out of the box via

    service1.default.svc.cluster.local
You can also map an in-cluster service name to an external DNS name by specifying a Service of type ExternalName:
    apiVersion: v1
    kind: Service
    metadata:
      name: service1
      namespace: prod
    spec:
      type: ExternalName
      externalName: service1.database.example.com
Note that no proxying is done for this to work; you need to make sure the given external name is routable from within your cluster (outbound connections are allowed, etc.).
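For example, from any pod in the cluster the Service name should now resolve as a CNAME to the external name (a sketch):

    # run inside any pod in the cluster
    nslookup service1.prod.svc.cluster.local
    # expected answer: a CNAME pointing at service1.database.example.com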
As your K8s cluster is hosted with gcloud, you can try to use Cloud DNS. There you can add a private zone with your DNS name.
Then you can push this DNS server to your clients in your VPN configuration with:

    push "dhcp-option DOMAIN gitlab.internal.example.com"
    push "dhcp-option DNS 169.254.169.254"

169.254.169.254 is Google's internal DNS (the metadata server address), only accessible from inside a Google private network.
If you have an internal DNS server that can resolve the FQDNs, then you can configure the Corefile to forward internal service domain resolution to that DNS server.
For example, if the internal domains/FQDNs match *.mycompany.local, the Corefile could have a section for that:

    mycompany.local {
        log
        errors
        ready
        cache 10
        forward . <internal DNS server IP>
    }

All requests to app.mycompany.local, or frontend.middleware.backend.mycompany.local, will then be forwarded to your internal DNS for resolution.
Documentation of the forward plugin is available here: https://coredns.io/plugins/forward/
Is it possible for an external DNS server to resolve against the K8s cluster DNS? I want applications residing outside of the cluster to be able to resolve the container DNS names.
It's possible, there's a good article proving the concept: https://blog.heptio.com/configuring-your-linux-host-to-resolve-a-local-kubernetes-clusters-service-urls-a8c7bdb212a7
However, I agree with Dan that exposing via service + ingress/ELB + external-dns is a common way to solve this. And for dev purposes I use https://github.com/txn2/kubefwd which also hacks name resolution.
Although it may be possible to expose CoreDNS and thus forward requests to Kubernetes, the typical approach I've taken, in AWS, is to use the external-dns controller.
This will sync Services and Ingresses with providers like AWS. It comes with some caveats, but I've used it successfully in prod environments.
coredns will return cluster-internal IP addresses that are normally unreachable from outside the cluster. The correct answer is the deleted one by MichaelK suggesting the coredns plugin k8s_external: https://coredns.io/plugins/k8s_external/.
k8s_external is already part of coredns. Just edit with kubectl -n kube-system edit configmap coredns and add k8s_external after the kubernetes directive, per the docs:

    kubernetes cluster.local
    k8s_external example.org
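For context, a sketch of where the directive sits in a typical Corefile (the surrounding plugins vary by distribution; example.org stands in for the zone you want to expose):

    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        k8s_external example.org
        forward . /etc/resolv.conf
        cache 30
    }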
k8s_gateway also handles DNS for Ingress resources:
https://coredns.io/explugins/k8s_gateway/
https://github.com/ori-edge/k8s_gateway (includes helm chart)
You'll also want something like metallb or rancher/klipper-lb handling services with type: LoadBalancer as k8s_gateway won't resolve NodePort services.
MichaelK is the author of k8s_gateway; not sure why his reply was deleted by a moderator.
I've never done that, but technically it should be possible by exposing the kube-dns service as a NodePort. Then you configure your external DNS server to forward queries for the Kube DNS zone "cluster.local" (or any other you have in Kube) to that kube-dns address and port.
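A sketch of such a Service (the name and the nodePort value are assumptions; the selector matches the stock kube-dns/CoreDNS label):

    apiVersion: v1
    kind: Service
    metadata:
      name: kube-dns-nodeport
      namespace: kube-system
    spec:
      type: NodePort
      selector:
        k8s-app: kube-dns
      ports:
      # same nodePort for UDP and TCP is allowed within one Service
      - name: dns-udp
        protocol: UDP
        port: 53
        targetPort: 53
        nodePort: 30053
      - name: dns-tcp
        protocol: TCP
        port: 53
        targetPort: 53
        nodePort: 30053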
In BIND that can be done like this:

    zone "cluster.local" {
        type forward;
        forward only;
        forwarders { ANY_NODE_IP port NODEPORT_PORT; };
    };
I'm new to Ingress (K8s) and studying via the documentation. Here is the official documentation: Ingress Minikube. Under "Create an Ingress resource", I've already done steps 1-3 with no problem, but I can't seem to do what step 4 asks. I have located the file inside the ingress-nginx-controller pod (/etc/hosts), but I have no idea how to edit it. I can't install vim-tiny or any other editing tool because permission is denied, and sudo does not work. I just want to edit the /etc/hosts file.
This particular step (#4) should be done on your localhost, not inside the ingress-controller pod. It's just a mapping of hostname to IP address, so that you can verify that you can reach your application, exposed by the Ingress resource, from outside the cluster.
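For example, on your own machine (hello-world.info is the hostname used in that tutorial; the IP comes from minikube ip and will differ per setup):

    $ minikube ip
    192.168.49.2
    $ echo "192.168.49.2 hello-world.info" | sudo tee -a /etc/hosts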
That entry effectively stands in for a DNS A record, which is what would expose your application to the network outside the Kubernetes cluster:
ingress > service > pod