Kubernetes custom domain name

I'm running Kubernetes with a Minikube node on my machine. The pods access each other by their .metadata.name, and I would like to have a custom domain for that name.
For example, one pod would access the Elasticsearch machine as elasticsearch.blahblah.com.
Thanks for any suggestions.

You should have DNS records for Services by default, because the kube-dns addon is enabled by default in Minikube.
To check the status of the kube-dns addon, use the command below:
kubectl get pod -n kube-system
Here is how the cluster add-on DNS server works:
An optional (though strongly recommended) cluster add-on is a DNS server. The DNS server watches the Kubernetes API for new Services and creates a set of DNS records for each. If DNS has been enabled throughout the cluster then all Pods should be able to do name resolution of Services automatically.
For example, if you have a Service called "my-service" in Kubernetes Namespace "my-ns" a DNS record for "my-service.my-ns" is created. Pods which exist in the "my-ns" Namespace should be able to find it by simply doing a name lookup for "my-service". Pods which exist in other Namespaces must qualify the name as "my-service.my-ns". The result of these name lookups is the cluster IP.
Kubernetes also supports DNS SRV (service) records for named ports. If the "my-service.my-ns" Service has a port named "http" with protocol TCP, you can do a DNS SRV query for "_http._tcp.my-service.my-ns" to discover the port number for "http".
The Kubernetes DNS server is the only way to access services of type ExternalName.
You can follow the Configure DNS Service document for configuration instructions.
Also, you can check DNS for Services and Pods for additional information.
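As a quick sanity check, you can do such a lookup from a throwaway pod (the busybox image is the one commonly used in the Kubernetes DNS debugging docs; my-service.my-ns is the placeholder name from the quote above):
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup my-service.my-ns
If kube-dns is healthy, this prints the Service's cluster IP.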

Clean way to connect to services running on the same host as the Kubernetes cluster

I have a single node Kubernetes cluster, installed using k3s on bare metal. I also run some services on the host itself, outside the Kubernetes cluster. Currently I use the external IP address of the machine (192.168.200.4) to connect to these services from inside the Kubernetes network.
Is there a cleaner way of doing this? What I want to avoid is having to reconfigure my Kubernetes pods if I decide to change the IP address of my host.
Possible magic I wish existed: a Kubernetes Service or IP that automagically points to my external IP (192.168.200.4), or a DNS name that resolves to the node's external IP address.
That's what ExternalName services are for (https://kubernetes.io/docs/concepts/services-networking/service/#externalname):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: ${my-hostname}
  ports:
  - port: 80
Then you can access the service from within Kubernetes as my-service.${namespace}.svc.cluster.local.
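If it worked, the Service should show up with type ExternalName and no cluster IP, something like:
kubectl get service my-service
NAME         TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)   AGE
my-service   ExternalName   <none>       ${my-hostname}   80/TCP    10s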
See: https://livebook.manning.com/concept/kubernetes/external-service
After the service is created, pods can connect to the external service through the external-service.default.svc.cluster.local domain name (or even external-service) instead of using the service's actual FQDN. This hides the actual service name and its location from pods consuming the service, allowing you to modify the service definition and point it to a different service any time later, by only changing the externalName attribute or by changing the type back to ClusterIP and creating an Endpoints object for the service—either manually or by specifying a label selector on the service and having it created automatically.
ExternalName services are implemented solely at the DNS level—a simple CNAME DNS record is created for the service. Therefore, clients connecting to the service will connect to the external service directly, bypassing the service proxy completely. For this reason, these types of services don't even get a cluster IP.
This relies on using a resolvable hostname for your machine. On Minikube there's a DNS alias host.minikube.internal that is set up to resolve to an IP address that routes to your host machine; I don't know if k3s supports something similar.
Thanks @GeertPt,
With minikube's host.minikube.internal in mind I searched around and found that CoreDNS has a DNS entry for each host it's running on. This only seems to be the case for k3s.
Checking
kubectl -n kube-system get configmap coredns -o yaml
reveals the following entry:
NodeHosts: |
  192.168.200.4 my-hostname
So if the hostname doesn't change, I can use this instead of the IP.
Also, if you're running plain docker you can use host.docker.internal to access the host.
So to sum up:
from minikube: host.minikube.internal
from docker: host.docker.internal
from k3s: <hostname>

Kubernetes: How to allow two pods running in the same/different namespaces to communicate, irrespective of the protocol, using a service name?

I want two pods (say pod A and pod B) running in the same or different namespaces to communicate irrespective of the protocol (say HTTP, HTTPS, akka.tcp), with a valid NetworkPolicy applied.
Solutions tried:
Applying a NetworkPolicy to both pods and using the service name "my-svc.my-namespace.svc.cluster.local" to make pod B communicate with pod A, which runs the service "my-svc"; both failed to communicate.
Adding pod A's IP address and host mapping to pod B during its deployment; pod B was then able to communicate with pod A, but the inverse communication fails.
Kindly suggest a way to fix this.
By default, pods can communicate with each other by their IP address, regardless of the namespace they're in.
You can see the IP address of each pod with:
kubectl get pods -o wide --all-namespaces
However, the normal way to communicate within a cluster is through Service resources.
A Service also has an IP address and additionally a DNS name. A Service is backed by a set of pods. The Service forwards requests made to it to one of the backing pods.
The fully qualified DNS name of a Service is:
<service-name>.<service-namespace>.svc.cluster.local
This can be resolved to the IP address of the Service from anywhere in the cluster (regardless of namespace).
For example, if you have:
Namespace ns-a: Service svc-a → set of pods A
Namespace ns-b: Service svc-b → set of pods B
Then a pod of set A can reach a pod of set B by making a request to:
svc-b.ns-b.svc.cluster.local
You can put the Pods behind Services and use Service DNS for communication. Calls to service-name allow Pods in the same namespace to communicate. Calls to service-name.namespace allow Pods in different namespaces to communicate.
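For illustration, a minimal manifest for svc-b (the selector label and the ports are assumptions about how the B pods are labelled and what they listen on):
apiVersion: v1
kind: Service
metadata:
  name: svc-b
  namespace: ns-b
spec:
  selector:
    app: pod-b         # assumed label on the B pods
  ports:
  - port: 80           # port the Service exposes
    targetPort: 8080   # assumed container port of the B pods
A pod in ns-a can then reach the B pods at svc-b.ns-b.svc.cluster.local:80, while a pod inside ns-b can use just svc-b.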

Kubernetes IP service IP and ports

I've deployed a hello-world application on my Kubernetes cluster. When I access the app via <cluster ip>:<port> in my browser, I get the hello-kuleuven app webpage.
I understand that from outside the cluster you have to access the app via the cluster IP and the port specified in the deployment file (which in my case is 30001). From inside the cluster you have to contact the master node with its local IP and another port number, in my case 10.111.152.164:8080.
My question is about the last line of the webpage:
Kubernetes listening in 443 available at tcp://10.96.0.1:443
Since the service is already accessible from inside and outside the cluster via other ports and IPs, I'm not sure what this line means.
The IP 10.96.0.1 is the ClusterIP of the kubernetes Service in the default namespace, which fronts the API server (the kube-dns Service normally gets a different ClusterIP, typically 10.96.0.10). You can see it using
kubectl get svc kubernetes -n default
Every pod gets this address injected through the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT environment variables, which is exactly what the webpage's last line is printing.
So that line just tells you where the Kubernetes API server is reachable from inside the cluster; it is unrelated to how your app itself is exposed.
You can read more about accessing the API from within a pod in the official Kubernetes documentation.
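You can confirm this from inside any of your pods (the pod name below is a placeholder); on a default cluster it typically prints:
kubectl exec <pod-name> -- env | grep KUBERNETES_SERVICE
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443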

Resolving a URL as a different URL inside a Kubernetes pod

My pod (pod1) internally can connect to another pod using its service like the following:
pod2-service.namespace.svc.cluster.local
However, I want pod1 to connect to pod2 using a URL like abc.com which is not registered in a DNS. Basically, I want pod1 to resolve abc.com as pod2-service.namespace.svc.cluster.local.
I was looking at hostAliases here:
https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/.
However, it needs an IP. How can I do this in Kubernetes?
I think you can assign a fixed IP as the Service IP of your pod2 (the spec.clusterIP field), then use this IP in your hostAliases definition.
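A minimal sketch of that idea, assuming 10.96.100.100 is a free address inside your cluster's Service CIDR (the IP, the namespace, and the selector label are all assumptions):
apiVersion: v1
kind: Service
metadata:
  name: pod2-service
  namespace: namespace
spec:
  clusterIP: 10.96.100.100   # pinned ClusterIP; must be free within the service CIDR
  selector:
    app: pod2                # assumed label on pod2
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: namespace
spec:
  hostAliases:
  - ip: "10.96.100.100"      # the same pinned ClusterIP
    hostnames:
    - "abc.com"
  containers:
  - name: app
    image: busybox:1.28      # placeholder image
    command: ["sleep", "3600"]
pod1 then resolves abc.com to the pinned Service IP, which in turn load-balances to pod2.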
There are a couple of things:
StatefulSets where you will always know the pod name and you can find it based on its ordinal index.
Using Pod hostname and subdomain spec field (Only works for standalone pods, afaik)
However, pod-to-pod resolution doesn't seem to be natively supported by Kubernetes for Deployments; my guess is that the rationale here is that the pods can constantly change IP addresses and names. You could use the Pods' default DNS entries, but again those entries vary depending on the IP addresses assigned to the pods.
The other solution that I can think of for Deployments is to use something like Consul with stub domains; then on each pod you will have to add an initContainer or a Consul agent sidecar to register its IP with the Consul service. Every time a pod restarts, it will need to overwrite its DNS registration in Consul.
If you don't want to use stub domain there's also the option of using Pod DNS Configs.
You can get the Service IP and append it to /etc/hosts in pod1 before your application code runs:
echo "$(getent hosts pod2-service.namespace.svc.cluster.local | awk '{ print $1 }') abc.com" >> /etc/hosts
Note: this is pretty hacky, because you have to guarantee that the Service IP of pod2 stays fixed. If the Service IP changes, pod1 will fail to resolve the host.
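One way to wire this in is a small wrapper command in pod1's spec, so the /etc/hosts entry is written before the app starts (the image name and entrypoint are hypothetical):
containers:
- name: app
  image: my-app:1.0              # hypothetical image
  command: ["sh", "-c"]
  args:
  - |
    # pin abc.com to the current Service IP (hacky, see the note above)
    echo "$(getent hosts pod2-service.namespace.svc.cluster.local | awk '{ print $1 }') abc.com" >> /etc/hosts
    exec /usr/local/bin/my-app   # hypothetical entrypoint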

How to discover headless service endpoints

Is there a way to discover all the endpoints of a headless service from outside the cluster?
Preferably using DNS or Static IPs
By watching changes to a list of Endpoints:
GET /api/v1/watch/namespaces/{namespace}/endpoints
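The kubectl equivalent, which is often more convenient (the service and namespace names are placeholders):
kubectl get endpoints my-service -n my-ns --watch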
Headless Services are a group of Pod IPs. Pod IPs are not (generally) available outside the cluster/cloud-provider.
Are you trying to get external IPs for a headless service or are you within the same network (e.g. in the GCE project) but not in the cluster?
The DNS addon is exactly what you're after. From the docs:
For example, if you have a Service called "my-service" in Kubernetes Namespace "my-ns" a DNS record for "my-service.my-ns" is created. Pods which exist in the "my-ns" Namespace should be able to find it by simply doing a name lookup for "my-service". Pods which exist in other Namespaces must qualify the name as "my-service.my-ns". The result of these name lookups is the cluster IP.
And in the case of a headless service:
DNS is configured to return multiple A records (addresses) for the Service name, which point directly to the Pods backing the Service.
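For reference, a headless Service is just a Service with clusterIP set to None; a minimal sketch (all names and the port are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-ns
spec:
  clusterIP: None     # headless: DNS returns the backing Pod IPs directly
  selector:
    app: my-app
  ports:
  - port: 80
An in-cluster lookup of my-service.my-ns then returns one A record per ready pod.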
However, this service is only available inside the cluster. But KubeDNS is just another pod:
kubectl get po --namespace=kube-system
kubectl describe po kube-dns-pod-name --namespace=kube-system
Which means you can create a service with an externally accessible address to expose this service. Just use a selector matching your kube-dns pod label.
http://kubernetes.io/v1.1/docs/user-guide/services.html#dns
https://github.com/kubernetes/kubernetes/blob/release-1.1/cluster/addons/dns/README.md