How to expose Kubernetes DNS externally

Is it possible for an external DNS server to resolve names against the K8s cluster DNS? I want applications residing outside of the cluster to be able to resolve the container DNS names.

It's possible; there's a good article proving the concept: https://blog.heptio.com/configuring-your-linux-host-to-resolve-a-local-kubernetes-clusters-service-urls-a8c7bdb212a7
However, I agree with Dan that exposing via a Service + Ingress/ELB + external-dns is a common way to solve this. And for dev purposes I use https://github.com/txn2/kubefwd, which also hacks name resolution.

Although it may be possible to expose CoreDNS and thus forward requests to Kubernetes, the typical approach I've taken, in AWS, is to use the external-dns controller.
It syncs Services and Ingresses with providers like AWS. It comes with some caveats, but I've used it successfully in prod environments.
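For reference, external-dns picks up the desired hostname from an annotation on the Service (or from an Ingress's host rules). A minimal sketch, assuming a provider such as Route 53 is already configured; the names and hostname below are placeholders:
# Sketch only: service name, selector and hostname are placeholders.
# external-dns reads the annotation and creates a DNS record pointing at
# the load balancer provisioned for this LoadBalancer Service.
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    external-dns.alpha.kubernetes.io/hostname: my-app.example.org
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080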

CoreDNS will return cluster-internal IP addresses that are normally unreachable from outside the cluster. The correct answer is the deleted one by MichaelK, which suggests using the CoreDNS plugin k8s_external: https://coredns.io/plugins/k8s_external/
k8s_external is already part of CoreDNS. Just edit the config with
kubectl -n kube-system edit configmap coredns and add the k8s_external directive after the kubernetes directive, per the docs:
kubernetes cluster.local
k8s_external example.org
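For illustration, a trimmed sketch of the Corefile inside that ConfigMap after the edit (the surrounding plugins depend on your defaults; example.org stands in for whatever external zone you want CoreDNS to answer for):
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    k8s_external example.org
    forward . /etc/resolv.conf
    cache 30
}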
k8s_gateway also handles DNS for Ingress resources:
https://coredns.io/explugins/k8s_gateway/
https://github.com/ori-edge/k8s_gateway (includes a Helm chart)
You'll also want something like MetalLB or rancher/klipper-lb handling Services with type: LoadBalancer, as k8s_gateway won't resolve NodePort services.
MichaelK is the author of k8s_gateway; not sure why his reply was deleted by a moderator.

I've never done that, but technically this should be possible by exposing the kube-dns service as a NodePort. Then you would configure your external DNS server to forward queries for the Kube DNS zone "cluster.local" (or any other zone you have in Kube) to the kube-dns node address and port.
In BIND that can be done like this:
zone "cluster.local" {
type forward;
forward only;
forwarders{ ANY_NODE_IP port NODEPORT_PORT; };
};
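A rough sketch of such a NodePort Service (the name, nodePort and the k8s-app label are assumptions based on a typical kube-dns/CoreDNS deployment; adjust to your cluster):
apiVersion: v1
kind: Service
metadata:
  name: kube-dns-external   # hypothetical name
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: kube-dns       # assumed label on the cluster DNS pods
  ports:
  - name: dns-udp
    port: 53
    protocol: UDP
    targetPort: 53
    nodePort: 30053         # arbitrary port from the NodePort range
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
    nodePort: 30053
ANY_NODE_IP and NODEPORT_PORT in the BIND config above would then be one of your node IPs and 30053.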

Related

Kubernetes: DNS server, Ingress controller and MetalLB communications

I'm unable to wrap my head around the concepts of interconnectivity among DNS, the Ingress controller, MetalLB and kube-proxy.
I know what these resources/tools/services are for and get the concepts of them individually, but I'm unable to form a picture of them working in tandem.
For example, in a bare-metal setup, a client accesses my site, https://mytestsite.com, and I still have doubts about how the request effectively lands on the right pod, where the above-mentioned services/resources/tools come into the picture, and at what stage.
E.g. how does DNS talk to my MetalLB if the client accesses my MetalLB hosting my application, how does the LB in turn speak to the Ingress controller, and finally where does kube-proxy come into play here?
I went through the K8s official documentation and a few others as well, but I'm still kind of stumped. The following article is really good but I'm unable to stitch the pieces together.
https://www.disasterproject.com/kubernetes-with-external-dns/
Kindly redirect me to the correct forum if this is not the right place, thanks.
The ingress controller creates a Service of type LoadBalancer that serves as the entry point into the cluster. It is like a Service of type NodePort, but it also has an external IP. In a public cloud environment, a load balancer such as ELB on AWS would create the counterpart and set the external IP of that Service to its own IP.
In a bare-metal environment, no external load balancer is created, so the external IP would stay in <Pending> state forever. Here, for example, is the service of the Istio ingress controller:
$ kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)
istio-ingressgateway   LoadBalancer   192.12.129.119   <Pending>     [...],80:32123/TCP,443:30994/TCP,[...]
In that state you would need to call http://<node-ip>:32123 to reach the HTTP port 80 of the ingress controller service, which would then be forwarded to your pod (more on that in a bit).
When you're using MetalLB, it will update the service with an external IP, so you can call http://<ip> instead. MetalLB will also announce that IP, e.g. via BGP, so others know where to send the traffic when someone calls that IP.
I haven't used external-dns and only scanned the article, but I guess you can use it to also have a DNS record created, so someone can reach your service by its domain, not only by its IP. So you can call http://example.com instead.
This is basically why you run MetalLB and how it interacts with your ingress controller: the ingress controller creates an entry point into the cluster, and MetalLB configures it and attracts traffic.
So far the call to http://example.com can reach your cluster, but it still needs to reach the actual application running in a pod inside the cluster. That's kube-proxy's job.
You read a lot about Services of different types and all this kind of stuff, but in the end it all boils down to iptables rules. kube-proxy will create a bunch of those rules, which form a chain.
SSH into any Kubernetes worker, run iptables-save | less and search for the external IP that MetalLB configured on your ingress controller's service. You'll find a chain with the destination of your external IP that basically leads from the external IP over the service IP, with a load-balancing configuration, to a pod IP.
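Heavily simplified, the relevant rules look something like this (chain names carry hashes and vary by kube-proxy version, so treat this only as an illustration of the shape of the chain; IPs are placeholders):
# excerpt-style sketch of iptables-save output, not literal output
-A KUBE-SERVICES -d <external-ip>/32 -p tcp --dport 80 -j KUBE-SVC-INGRESS
-A KUBE-SVC-INGRESS -m statistic --mode random --probability 0.5 -j KUBE-SEP-POD-A
-A KUBE-SVC-INGRESS -j KUBE-SEP-POD-B
-A KUBE-SEP-POD-A -p tcp -j DNAT --to-destination <pod-ip>:8080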
In the end the whole chain would look something like this:
http://example.com
-> http://<some-ip> (domain translated to ip)
-> http://<node-ip>:<node-port> (ingress-controller service)
---
-> http://<cluster-internal-ip>:<some-port> (service of your application)
-> http://<other-cluster-internal-ip>:<some-port> (ip of one of n pods)
where the --- line marks the switch from cluster-external to cluster-internal traffic. The cluster-internal-ip will be from the configured service-cidr and the other-cluster-internal-ip will be from the configured pod-cidr.
Note that there are different ways to configure cluster internal traffic routing, how to run kube-proxy and some parts might even be a bit simplified, but this should give you a good enough understanding of the overall concept.
Also see this answer on the question 'What is a Kubernetes LoadBalancer On-Prem', which might provide additional input.

How to set DNS entries & network configuration for a Kubernetes cluster at home (noob here)

I am currently running a Kubernetes cluster on my own home server (in Proxmox CTs; it was kind of difficult to get working because I am using ZFS too, but it runs now), and the setup is as follows:
lb01: haproxy & keepalived
lb02: haproxy & keepalived
etcd01: etcd node 1
etcd02: etcd node 2
etcd03: etcd node 3
master-01: k3s in server mode with a taint for not accepting any jobs
master-02: same as above, just joining with the token from master-01
master-03: same as master-02
worker-01 - worker-03: k3s agents
If I understand it correctly, k3s ships with Flannel as a pre-installed CNI, as well as Traefik as an Ingress controller.
I've set up Rancher on my cluster as well as Longhorn; the volumes are just ZFS volumes mounted inside the agents though, and as they aren't on different HDDs I've set the replicas to 1. I have a friend running the same setup (we set them up together, just yesterday) and we are planning on joining our networks through VPN tunnels and then providing storage nodes for each other as an offsite backup.
So far I've hopefully got everything correct.
Now to my question: I've got both a static IP at home and a domain, and I've pointed that domain at my static IP.
Something like this (I don't know how DNS entries are actually written, this is just off the top of my head for your reference; the entries are working fine):
A example.com. [[my-ip]]
CNAME *.example.com. example.com
I've currently set up a port forward to one of my master nodes for ports 80 & 443, but I am not quite sure how you would actually configure that with HA in mind, and my Rancher is throwing a 503 after visiting global settings, even though I have not changed anything.
So now my question: how would one actually configure the port forward? As far as I know k3s has a load balancer pre-installed, but how would one configure those port forwards for HA? The one master node they point to could, theoretically, just stop working, and then all services would no longer be reachable from outside.
Assuming your apps are running on port 80 and port 443, your ingress should give you a service with an external IP, and you would point your DNS at that. Read below for more info.
Seems like you are not a noob! You've got a lot going on with your cluster setup. What you are asking is a bit complicated to answer, and I will have to make some assumptions about your setup, but I'll do my best to give you at least some initial info.
This tutorial has a ton of great info and may help you with what you are doing. They use kubeadm instead of k3s, but you can skip that section if you want and still use k3s.
https://www.debontonline.com/p/kubernetes.html
If you are setting up and installing etcd on your own, you don't need to do that: k3s will create an etcd cluster for you that runs inside your cluster.
Load Balancing your master nodes
The haproxy + keepalived nodes would be configured to point to the IPs of your master nodes at port 6443 (TCP). keepalived will give you a virtual IP, and you would configure your kubeconfig (the one you get from k3s) to talk to that IP. On your router you will want to reserve an IP for it (make sure not to assign it to any computers).
This is a good video that explains how to do it with a Node.js server, but the concepts are the same for your master nodes:
https://www.youtube.com/watch?v=NizRDkTvxZo
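A rough haproxy.cfg sketch for the API server frontend (the master IPs are placeholders; keepalived then floats the virtual IP between lb01 and lb02):
# forward the Kubernetes API port to all three k3s servers
frontend k3s-apiserver
    bind *:6443
    mode tcp
    default_backend k3s-masters

backend k3s-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server master-01 192.168.1.11:6443 check
    server master-02 192.168.1.12:6443 check
    server master-03 192.168.1.13:6443 check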
Load Balancing your applications running in the cluster
Use a K8s Service; read more about it here: https://kubernetes.io/docs/concepts/services-networking/service/
Essentially you need an external IP; I prefer to do this with MetalLB.
MetalLB gives you a Service of type LoadBalancer with an external IP.
Add this flag to k3s when creating the initial master node:
https://metallb.universe.tf/configuration/k3s/
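The point of that flag is to disable k3s's bundled Klipper service load balancer so MetalLB can hand out the external IPs instead; roughly (a sketch, adapt to however you install k3s):
# install the first k3s server with the built-in servicelb disabled
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable servicelb" sh -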
Configure MetalLB:
https://metallb.universe.tf/configuration/#layer-2-configuration
You will want to reserve more IPs on your router and put them under the addresses section in the YAML below. In this example you have 11 IPs in the range 192.168.1.240 to 192.168.1.250.
Create this as a file, for example metallb-cm.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
kubectl apply -f metallb-cm.yaml
Install MetalLB with these YAML files (the metallb-system namespace has to exist before the ConfigMap above can be applied, so in practice run these first):
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
source - https://metallb.universe.tf/installation/#installation-by-manifest
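A quick sanity check before moving on (sketch; pod names will differ):
kubectl get pods -n metallb-system
# expect one controller pod plus a speaker pod per node, all Running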
Ingress
Your ingress controller will need a Service of type LoadBalancer; use its external IP as the address you point your DNS at.
Run kubectl get service -A, look for your ingress service, and check that it has an external IP and does not say pending.
I will do my best to answer any of your follow up questions. Good Luck!

Route traffic in kubernetes based on IP address

We need to test a pod on our production Kubernetes cluster after a data migration before we expose it to our users. What we'd like to do is route traffic from our internal IP addresses to the correct pods, and all other traffic to a maintenance pod. Is there a way we can achieve this, or do we need to briefly expose an IP address on the cluster so we can access the pods directly?
What ingress controller are you using?
Generally, ingress controllers tend to support header-based routing rather than source-IP-based routing.
Say, ingress-nginx supports header-based canary deployments out of the box:
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary
There's a good example here:
https://medium.com/#domi.stoehr/canary-deployments-on-kubernetes-without-service-mesh-425b7e4cc862
Note that the ingress-nginx canary implementation actually requires your services to be in different namespaces.
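If you do go the header route, a sketch of the second (canary) Ingress could look like this; the host, service name and header are placeholders, while the annotations are the documented ingress-nginx canary ones:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-canary   # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "X-Internal-Test"
    nginx.ingress.kubernetes.io/canary-by-header-value: "1"
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-migrated   # the service under test
            port:
              number: 80
Requests carrying X-Internal-Test: 1 (which your internal clients would set) hit the migrated pod; everything else falls through to the main Ingress, which could point at the maintenance pod.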
You could also try to configure your ingress-nginx with use-forwarded-headers and employ the X-Forwarded-For header for routing:
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-forwarded-headers
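That setting lives in the controller's ConfigMap; a sketch, assuming the ConfigMap name and namespace of a typical ingress-nginx install (yours may differ):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name depends on how the chart installed it
  namespace: ingress-nginx
data:
  use-forwarded-headers: "true"    # only safe behind a trusted proxy that sets the header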

coredns forward plugin to use a k8s service name

For configuring a multicluster Istio with replicated control planes, one of the requirements is to configure the k8s CoreDNS service in the kube-system namespace to forward the zone "global" to the IP of the "istiocoredns" service deployed in the istio-system namespace. Like this:
global:53 {
    errors
    cache 30
    forward . $(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP}):53
}
In the example they use that command expansion to get the IP of the istiocoredns ClusterIP-type Service.
As that is not a static IP and could change, I am looking for a way to use something more dynamic and change-aware. Using the istiocoredns service FQDN would be great, but the CoreDNS documentation does not mention anything about it.
Is there any CoreDNS plugin or workaround for this?
Thank you.
Is there any CoreDNS plugin or workaround for this?
There is the istio coredns plugin, but as mentioned in its usage section, you still set the IP of that coredns service there anyway:
Update the kube-dns config map to point to this coredns service as the upstream DNS service for the *.global domain. You will have to find out the cluster IP of coredns service and update the config map (or write a controller for this purpose!).
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"global": ["10.2.3.4"]}
But here's some interesting information
UPDATE: This plugin is no longer necessary as of Istio 1.8. DNS is built into the istio agent in the sidecar. Sidecar DNS is enabled by default in the preview profile. You can also enable it manually by setting the following config in the istio operator
meshConfig:
  defaultConfig:
    proxyMetadata:
      ISTIO_META_DNS_CAPTURE: "true"
      ISTIO_META_PROXY_XDS_VIA_AGENT: "true"
You can find more information about it here.
There are a few efforts in progress that will help simplify the DNS story:
Istio will soon support DNS interception for all workloads with a sidecar proxy. This will allow Istio to perform DNS lookup on behalf of the application.
Admiral is an Istio community project that provides a number of multicluster capabilities, including automatic creation of service DNS entries.
Kubernetes Multi-Cluster Services is a Kubernetes Enhancement Proposal (KEP) that defines an API for exporting services to multiple clusters. This effectively pushes the responsibility of service visibility and DNS resolution for the entire clusterset onto Kubernetes. There is also work in progress to build layers of MCS support into Istio, which would allow Istio to work with any cloud vendor MCS controller or even act as the MCS controller for the entire mesh.
While Admiral is available today, the Istio and Kubernetes communities are actively building more general solutions into their platforms. Stay tuned!
There is an article about that in the Istio 1.8 preliminary docs.

Nginx-ingress - Wrong src client ip (X-Real-Ip)

When using nginx-ingress in Kubernetes (installed via Helm), the X-Real-Ip header is not my real IP (the original client IP is not preserved).
I've tried externalTrafficPolicy: "Local" and use-proxy-protocol: "true" as suggested, but it didn't help...
Can you provide us with more info, like the service you're using?
From a quick guess, it looks like you're applying externalTrafficPolicy: "Local" to the wrong service.
I also previously applied it to my NodePort service instead of the Nginx service and it didn't work.
Please check the service with the LoadBalancer type; it's usually named nginx-nginx-ingress-controller. A quick kubectl get services --all-namespaces can show you a list of all the services running.
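For reference, a sketch of setting it on that LoadBalancer Service directly (service name is the usual Helm default and may differ in your install; the namespace is a placeholder):
kubectl patch svc nginx-nginx-ingress-controller -n <namespace> \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'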
TL;DR
Local means that when a packet arrives at a node, kube-proxy will only distribute the load among pods on that same node, even if other pods in the cluster are less loaded.
With the Cluster value, on the other hand, the balancing takes into account not only the nodes but also the number of pods, and to avoid imbalance Kubernetes performs the balancing across the whole cluster. The relevant side effect here: with Local the original client source IP is preserved, while with Cluster the traffic may be SNATed to a node IP, which is why X-Real-Ip ends up showing the wrong address.
https://medium.com/pablo-perez/k8s-externaltrafficpolicy-local-or-cluster-40b259a19404
https://github.com/jetstack/kube-lego/issues/57#issuecomment-277777686