My pod (pod1) can internally connect to another pod through its Service, like the following:
pod2-service.namespace.svc.cluster.local
However, I want pod1 to connect to pod2 using a URL like abc.com which is not registered in a DNS. Basically, I want pod1 to resolve abc.com as pod2-service.namespace.svc.cluster.local.
I was looking at hostAliases here:
https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/.
However, it needs an IP. How can I do this in Kubernetes?
I think you can assign a fixed IP as the Service IP (clusterIP) of pod2's Service, then use that IP in your hostAliases definition.
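A minimal sketch of that approach; the namespace "namespace", the app: pod2 selector, and the address 10.96.0.50 (which must be a free address inside your cluster's service CIDR) are all assumptions to adapt:

apiVersion: v1
kind: Service
metadata:
  name: pod2-service
  namespace: namespace
spec:
  clusterIP: 10.96.0.50    # fixed Service IP; assumed free and inside the service CIDR
  selector:
    app: pod2              # assumed label on pod2
  ports:
  - port: 80

And in pod1's pod spec:

spec:
  hostAliases:
  - ip: "10.96.0.50"       # the fixed clusterIP from the Service above
    hostnames:
    - "abc.com"

Since hostAliases entries are written into the pod's /etc/hosts, pod1 will then resolve abc.com to the fixed Service IP.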
There are a couple of things:
StatefulSets, where you always know the pod name and can find it based on its ordinal index.
Using the Pod hostname and subdomain spec fields (only works for standalone pods, afaik).
However, pod-to-pod discovery doesn't seem to be natively supported by Kubernetes for Deployments. My guess is that the rationale here is that pods can constantly change IP addresses and names. You could use the pods' default DNS entries, but again those entries vary depending on the IP addresses assigned to the pods.
The other solution I can think of for Deployments is to use something like Consul with stub domains; then on each pod you would add an initContainer or a Consul agent sidecar to register its IP with the Consul service. Every time a pod restarts, it needs to overwrite its DNS registration in Consul.
If you don't want to use a stub domain, there's also the option of using Pod DNS Configs (a sketch follows below).
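For illustration, here is a minimal Pod DNS Config sketch; the nameserver address 10.0.0.10 and the search domain service.consul are assumptions for an external (e.g. Consul) resolver:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-custom-dns
spec:
  dnsPolicy: "None"             # ignore the cluster's DNS settings entirely
  dnsConfig:
    nameservers:
    - 10.0.0.10                 # assumed address of the custom DNS server
    searches:
    - service.consul            # assumed search domain
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]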
You can get the Service IP and append it to /etc/hosts in pod1 before your application code runs:
echo "$(getent hosts pod2-service.namespace.svc.cluster.local | awk '{ print $1 }') abc.com" >> /etc/hosts
Notice: this is pretty hacky, because you have to guarantee that the Service IP of pod2 is fixed. When the Service IP changes, pod1 will fail to resolve the host.
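One way to wire this in is to run it as part of the container's entrypoint in pod1's spec; my-app and my-app-image below are placeholders for your actual binary and image:

containers:
- name: app
  image: my-app-image           # placeholder image
  command: ["sh", "-c"]
  args:
  - >
    echo "$(getent hosts pod2-service.namespace.svc.cluster.local | awk '{ print $1 }') abc.com" >> /etc/hosts
    && exec my-app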
Allow two pods (say pod A and pod B) running in the same or different namespaces to communicate irrespective of the protocol (say http, https, akka.tcp), with a valid NetworkPolicy applied.
Solutions tried:
Tried applying a network policy to both pods, and also used the service name "my-svc.my-namespace.svc.cluster.local" to make pod B communicate with pod A, which runs the service "my-svc", but both attempts failed.
Also tried adding the IP address and host mapping of pod A in pod B during its deployment; then pod B was able to communicate with pod A, but the inverse communication is failing.
Kindly suggest a way to fix this.
By default, pods can communicate with each other by their IP address, regardless of the namespace they're in.
You can see the IP address of each pod with:
kubectl get pods -o wide --all-namespaces
However, the normal way to communicate within a cluster is through Service resources.
A Service also has an IP address and, additionally, a DNS name. A Service is backed by a set of pods. The Service forwards requests addressed to it to one of the backing pods.
The fully qualified DNS name of a Service is:
<service-name>.<service-namespace>.svc.cluster.local
This can be resolved to the IP address of the Service from anywhere in the cluster (regardless of namespace).
For example, if you have:
Namespace ns-a: Service svc-a → set of pods A
Namespace ns-b: Service svc-b → set of pods B
Then a pod of set A can reach a pod of set B by making a request to:
svc-b.ns-b.svc.cluster.local
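For instance, a minimal Service making the pods of set B reachable this way might look as follows; the app: b selector and the target port 8080 are assumptions for illustration:

apiVersion: v1
kind: Service
metadata:
  name: svc-b
  namespace: ns-b
spec:
  selector:
    app: b                # assumed label on the pods of set B
  ports:
  - port: 80
    targetPort: 8080      # assumed port the pods listen on

A pod of set A can then run curl http://svc-b.ns-b.svc.cluster.local. Note that if a NetworkPolicy is applied, it must allow ingress to the pods of set B from the pods of set A; otherwise the connection is blocked even though DNS resolution succeeds.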
You can put the Pods behind Services and use Service DNS for communication. Calls to service-name allow Pods in the same namespace to communicate. Calls to service-name.namespace allow Pods in different namespaces to communicate.
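For example (my-svc and other-ns being placeholder names):

curl http://my-svc              # from a Pod in the same namespace
curl http://my-svc.other-ns     # from a Pod in a different namespace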
I am new to Kubernetes, and I am trying to make inter-pod communication over DNS to work.
Pods in my k8s cluster are spawned using Deployments. My problem is that all the pods report their hostnames to ZooKeeper, and the pods use the hostnames found in ZooKeeper to ping their peers. This always fails because the peers' hostnames are unresolvable between pods.
The only solution now is to manually add each pod's hostname to the peers' /etc/hosts files. But this method does not scale to large clusters.
A DNS solution for inter-pod communication that keeps a record of newly generated pods and deletes dead pods would be great.
Thanks in advance.
One solution I found was to add hostname and subdomain under spec->template->spec; then communication over hostnames between the pods succeeds.
However, this solution is fairly dumb, because I cannot set the replicas of a Deployment to more than 1, or I will get more than one pod with the same hostname in the cluster. If I have 10 slave nodes with the same function in a cluster, I will need to create 10 Deployments.
Any better solutions?
You need to use a Service definition pointing to your pods:
https://kubernetes.io/docs/concepts/services-networking/service/
With that you have a load-balanced proxy controlling the inter-pod communication, and Kubernetes' internal DNS tracks that Service instead of each pod, no matter the state of the pods.
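If you need DNS records for the individual pods rather than one balanced address (as in your ZooKeeper peer-discovery case), a headless Service is one option: setting clusterIP: None makes a DNS lookup of the Service name return the A records of all backing pods. A sketch, with app: zk as an assumed pod label:

apiVersion: v1
kind: Service
metadata:
  name: zk-peers
spec:
  clusterIP: None         # headless: DNS returns the pod IPs directly
  selector:
    app: zk               # assumed label on the ZooKeeper pods
  ports:
  - port: 2888

Resolving zk-peers.<namespace>.svc.cluster.local then yields one A record per ready pod, and combining this with a StatefulSet additionally gives each pod a stable name of the form <pod-name>.zk-peers.<namespace>.svc.cluster.local.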
If that simple solution doesn't fit your needs, you can replace kube-dns, the default internal DNS, with CoreDNS.
https://coredns.io/
I have a deployment pod that needs to grab the IP address of another deployment pod and use it as an environment variable. The closest I could find was this: how-to-know-a-pods-own-ip-address-from-inside-a-container-in-the-pod
I know I can grab the IP address of a Service using the environment variable $<SVC NAME>_SERVICE_HOST, injected into a pod that is created after the Service. Is there a similar way to inject a Deployment pod's IP address into another Deployment pod after the first is created?
You should consider exposing your target pod through a ClusterIP Service and accessing it using the Service's cluster DNS FQDN. With this method you don't have to worry about exactly which IP your target pod is at, because cluster DNS and kube-proxy take care of the name resolution and routing for you. You then only need to know the ClusterIP Service endpoint and access your target pod through that.
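A minimal sketch, assuming the target Deployment's pods carry the label app: target and listen on port 8080 (both assumptions):

apiVersion: v1
kind: Service
metadata:
  name: target-svc
spec:
  type: ClusterIP
  selector:
    app: target           # assumed pod label
  ports:
  - port: 80
    targetPort: 8080      # assumed container port

The client pod can then reach the target at http://target-svc.default.svc.cluster.local (or just target-svc from within the same namespace), for example as the value of an environment variable in its own spec.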
The official docs contain a great case study and an interactive tutorial on this subject.
Hope this helps!
There is currently no way to find another pod's IP through DNS or environment variables. For that you need to query the Kubernetes API. You may create a ServiceAccount with pod and deployment list permissions and then use a Kubernetes API client library or kubectl.
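For example, with those permissions in place, a container could look up the peer's IP like this (pod2 being a placeholder pod name; the jsonpath flag is standard kubectl):

PEER_IP=$(kubectl get pod pod2 -o jsonpath='{.status.podIP}')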
I've deployed the Rancher Helm chart to my Kubernetes cluster and want to access the Rancher API/UI from another pod (i.e. a pod running an ingress-controller).
When I list the services and the endpoints, the IP addresses differ:
$ kubectl get ep | grep rancher
release-name-rancher 10.200.23.13:80 18h
and
$ kubectl get services | grep rancher
release-name-rancher ClusterIP 10.100.200.253 <none> 80/TCP 18h
Within the container of the client (i.e. the ingress controller), I see the Service being represented by the Service's ClusterIP:
$ env | grep RELEASE_NAME_RANCHER_SERVICE_HOST
RELEASE_NAME_RANCHER_SERVICE_HOST=10.100.200.253
Trying to reach the backend via the IP address in the Env does not work (curl 10.100.200.253 just delivers no response and blocks forever).
Trying to reach the backend via the endpoint address works:
$ curl 10.200.23.13
Found.
I'm quite confused why the endpoint IP address and the ClusterIP address differ and why is it not possible to connect to the ClusterIP address. Any hints to polish my understanding?
In Kubernetes, every Pod and Service gets its own IP address. The kubectl get services IP address is the Kubernetes-internal address of the Service; the kubectl get ep address is the address of the Pod behind it. The Service effectively acts as a load balancer, and there can be multiple Pods attached to it. The Kubernetes Service documentation goes into a lot of detail about what exactly is happening here.
Kubernetes also provides an internal DNS service that can resolve Service names. You generally shouldn't use any of these IP addresses directly; instead, use the host name release-name-rancher.default.svc.cluster.local (or replace "default" if you're running in some other Kubernetes namespace).
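For example, from the client pod (assuming the chart was installed into the default namespace):

curl http://release-name-rancher.default.svc.cluster.local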
While the ..._SERVICE_HOST environment variable you reference is supported and documented, I'd avoid using it. Of particular note, if you helm install or kubectl apply a large set of resources at once and the Pod gets created before the Service, you'll be in a consistent state except that the Pod won't actually have this environment variable. In a Helm land where Services don't have fixed names, the environment variable name won't be fixed either. Prefer the DNS name.
I'm running Kubernetes with a Minikube node on my machine. The pods access each other by their .metadata.name, and I would like to have a custom domain for that name.
i.e. one pod accesses Elastic's machine by elasticsearch.blahblah.com
Thanks for any suggestions
You should have DNS records by default thanks to the kube-dns addon, which is enabled by default in Minikube.
To check the kube-dns addon status, use the command below:
kubectl get pod -n kube-system
Here is how the cluster add-on DNS server works:
An optional (though strongly recommended) cluster add-on is a DNS server. The DNS server watches the Kubernetes API for new Services and creates a set of DNS records for each. If DNS has been enabled throughout the cluster then all Pods should be able to do name resolution of Services automatically.
For example, if you have a Service called "my-service" in Kubernetes Namespace "my-ns" a DNS record for "my-service.my-ns" is created. Pods which exist in the "my-ns" Namespace should be able to find it by simply doing a name lookup for "my-service". Pods which exist in other Namespaces must qualify the name as "my-service.my-ns". The result of these name lookups is the cluster IP.
Kubernetes also supports DNS SRV (service) records for named ports. If the "my-service.my-ns" Service has a port named "http" with protocol TCP, you can do a DNS SRV query for "_http._tcp.my-service.my-ns" to discover the port number for "http".
The Kubernetes DNS server is the only way to access services of type ExternalName.
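Such an SRV lookup can be tried from inside a pod, e.g. with nslookup (my-service and my-ns as in the quoted example):

nslookup -type=SRV _http._tcp.my-service.my-ns.svc.cluster.local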
You can follow Configure DNS Service document for configuration instructions.
Also, you can check DNS for Services and Pods for additional information.