I have an application pod running in a namespace named frontend and a database pod running in a different namespace named backend. I need the two pods to communicate across namespaces. The database container is up and running, but the application container is in CrashLoopBackOff.
The application pod's logs show an error while resolving the database hostname, which was provided through the environment variable PGHOST and set to the name of the database container. It seems the application container is unable to resolve the database host.
I suppose the problem is due to the different namespaces. How do I connect the two pods and make them communicate?
error:
> The Gemfile's dependencies are satisfied
> rake aborted!
> PG::ConnectionBad: could not translate host name "postgres" to address: Name or service not known
I am assuming you have a ClusterIP-type Service named postgres in the xyz namespace. You can then reach it from another namespace by specifying postgres.xyz.
Kubernetes ships a cluster DNS (CoreDNS) that resolves the hostname postgres.xyz to the Service's cluster IP.
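For the setup described in the question, a minimal sketch of such a Service might look like this (the namespace backend comes from the question; the app: postgres selector label is an assumption about how the database pod is labelled):
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: backend
spec:
  type: ClusterIP
  selector:
    app: postgres        # assumed label; use whatever labels the database pod actually carries
  ports:
  - port: 5432           # standard PostgreSQL port
    targetPort: 5432
The application pod in the frontend namespace would then set PGHOST to postgres.backend (or the fully qualified postgres.backend.svc.cluster.local) rather than just postgres.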
Related
I have a PostgreSQL server running in Azure on port 5432, which is publicly accessible. There is a Kubernetes cluster running several pods. I am trying to access the PostgreSQL server from a pod, but it says the port is not open, although it is accessible from my personal machine.
Does anyone have any idea what is going on?
So Postgres is accessible from your machine but not from a pod? Does the pod have internet access? Do you get an error?
You can access Postgres from inside the Kubernetes cluster:
If the pod is in the same namespace, you can use psql -h <svc-name>, where <svc-name> is the Service in front of the Postgres pod.
If the pod is in a different namespace, you need to use the full DNS name.
In this example the Service named psql is in the prod namespace:
psql.prod.svc.cluster.local
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
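For example (the user and database names below are purely illustrative):
# same namespace: the bare Service name is enough
psql -h psql -U postgres -d mydb

# different namespace: use the full DNS name <service>.<namespace>.svc.cluster.local
psql -h psql.prod.svc.cluster.local -U postgres -d mydb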
I have a single node Kubernetes cluster, installed using k3s on bare metal. I also run some services on the host itself, outside the Kubernetes cluster. Currently I use the external IP address of the machine (192.168.200.4) to connect to these services from inside the Kubernetes network.
Is there a cleaner way of doing this? What I want to avoid is having to reconfigure my Kubernetes pods if I decide to change the IP address of my host.
The magic I wish existed: a Kubernetes Service or IP that automagically points to my external IP (192.168.200.4), or a DNS name that points to the node's external IP address.
That's what ExternalName services are for (https://kubernetes.io/docs/concepts/services-networking/service/#externalname):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: ${my-hostname}
  ports:
  - port: 80
Then you can access the service from within Kubernetes as my-service.${namespace}.svc.cluster.local.
See: https://livebook.manning.com/concept/kubernetes/external-service
After the service is created, pods can connect to the external service
through the external-service.default.svc.cluster.local domain name (or
even external-service) instead of using the service’s actual FQDN.
This hides the actual service name and its location from pods
consuming the service, allowing you to modify the service definition
and point it to a different service any time later, by only changing
the externalName attribute or by changing the type back to ClusterIP
and creating an Endpoints object for the service—either manually or by
specifying a label selector on the service and having it created
automatically.
ExternalName services are implemented solely at the DNS level—a simple
CNAME DNS record is created for the service. Therefore, clients
connecting to the service will connect to the external service
directly, bypassing the service proxy completely. For this reason,
these types of services don’t even get a cluster IP.
This relies on having a resolvable hostname for your machine. On minikube there's a DNS alias host.minikube.internal that is set up to resolve to an IP address that routes to your host machine; I don't know if k3s supports something similar.
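If you don't have a resolvable hostname, the ClusterIP-plus-manual-Endpoints variant mentioned in the quote above is another option. A rough sketch, using the host IP from the question and port 80 purely as an example:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:                 # no selector, so Kubernetes creates no Endpoints automatically
  - port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service       # must match the Service name
subsets:
- addresses:
  - ip: 192.168.200.4    # the host's external IP
  ports:
  - port: 80
Note that this still pins the IP address in a manifest, so it mostly relocates the problem, but pods keep pointing at the stable name my-service.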
Thanks #GeertPt,
With minikube's host.minikube.internal in mind I searched around and found that CoreDNS has a DNS entry for each host it's running on. This only seems to be the case for k3s.
Checking
kubectl -n kube-system get configmap coredns -o yaml
reveals there is the following entry:
NodeHosts: |
  192.168.200.4 my-hostname
So if the hostname doesn't change, I can use this instead of the IP.
Also, if you're running plain docker you can use host.docker.internal to access the host.
So to sum up:
from minikube: host.minikube.internal
from docker: host.docker.internal
from k3s: <hostname>
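A quick way to verify that the k3s NodeHosts entry actually resolves in-cluster (the pod name and image here are arbitrary):
kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- nslookup my-hostname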
I have a Kubernetes cluster with 1 master and 1 worker. I have a DB service postgres running in a namespace "PG" and another service config-server running in the default namespace, and I am unable to access postgres from the config-server service in the default namespace.
Kubernetes version 1.13
Overlay network: Calico
As per the articles I read, if no network policy is defined, a pod can reach pods in any other namespace without restriction. I need help figuring out how to achieve this.
You should be able to reach any pod from any other pod in the same cluster.
One quick way to check is to ping the service DNS name from another pod.
Get into the config-server pod and try running the command below:
ping <postgres-service-name>.<namespace>.svc.cluster.local
You should get a ping response.
I was using a Kubernetes cluster with Calico as the overlay network. If no network policy is created, Kubernetes CoreDNS will resolve the service by default, but you have to use the <service>.<namespace> form in the application or in the environment variable wherever you call a service in another namespace. That allows cross-namespace communication.
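For example, a sketch of how the config-server Deployment might reference the Postgres Service across namespaces, assuming the Service is named postgres, lives in the pg namespace, and listens on 5432 (image and labels are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: config-server
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: config-server
  template:
    metadata:
      labels:
        app: config-server
    spec:
      containers:
      - name: config-server
        image: example/config-server:latest        # illustrative image
        env:
        - name: PGHOST
          value: postgres.pg.svc.cluster.local     # <service>.<namespace> form for cross-namespace access
        - name: PGPORT
          value: "5432"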
I am setting up VerneMQ (an MQTT broker) in a cluster configuration, so I am launching 4 replicas in a StatefulSet. Apparently VerneMQ wants to communicate with the other brokers in the cluster via DNS, like this:
echo "Will join an existing Kubernetes cluster with discovery node at
${kube_pod_name}.${VERNEMQ_KUBERNETES_SUBDOMAIN}.${DOCKER_VERNEMQ_KUBERNETES_NAMESPACE}.svc.cluster.local"
Unfortunately the logs indicate that this doesn't work:
14:05:56.741 [info] Application vmq_server started on node
'VerneMQ#broker-vernemq-0.broker-vernemq.messaging.svc.cluster.local'
broker-vernemq-0 is the pod's name and broker-vernemq is the name of the StatefulSet. The Service is configured as LoadBalancer.
The problem:
I connected to the pod broker-vernemq-1 via terminal, ran ping broker-vernemq-0, and was surprised that it could not resolve this hostname:
ping: unknown host broker-vernemq-0
I was under the impression that this is supposed to work?
The Service must be headless for kube-dns to serve domain names like that. See https://stackoverflow.com/a/46638059
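A minimal sketch of such a headless governing Service, assuming the StatefulSet's serviceName is broker-vernemq, the namespace is messaging (as in the log line above), and the app: vernemq label is hypothetical:
apiVersion: v1
kind: Service
metadata:
  name: broker-vernemq          # must match the StatefulSet's serviceName
  namespace: messaging
spec:
  clusterIP: None               # headless: DNS publishes per-pod records like broker-vernemq-0.broker-vernemq.messaging.svc.cluster.local
  selector:
    app: vernemq                # hypothetical label; use whatever the StatefulSet pods actually carry
  ports:
  - name: mqtt
    port: 1883                  # assumption: standard MQTT port
The existing LoadBalancer Service can stay for external client traffic; the headless Service exists only so the brokers can discover each other via DNS.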
I'm running Kubernetes with a Minikube node on my machine. The pods access each other by their .metadata.name, and I would like to have a custom domain for that name.
e.g. one pod accesses Elastic's machine via elasticsearch.blahblah.com
Thanks for any suggestions
You should have DNS records for pods by default, thanks to the kube-dns addon that is enabled by default in Minikube.
To check the kube-dns addon status, use the command below:
kubectl get pod -n kube-system
Here is how the cluster add-on DNS server works:
An optional (though strongly recommended) cluster add-on is a DNS server. The DNS server watches the Kubernetes API for new Services and creates a set of DNS records for each. If DNS has been enabled throughout the cluster then all Pods should be able to do name resolution of Services automatically.
For example, if you have a Service called "my-service" in Kubernetes Namespace "my-ns" a DNS record for "my-service.my-ns" is created. Pods which exist in the "my-ns" Namespace should be able to find it by simply doing a name lookup for "my-service". Pods which exist in other Namespaces must qualify the name as "my-service.my-ns". The result of these name lookups is the cluster IP.
Kubernetes also supports DNS SRV (service) records for named ports. If the "my-service.my-ns" Service has a port named "http" with protocol TCP, you can do a DNS SRV query for "_http._tcp.my-service.my-ns" to discover the port number for "http".
The Kubernetes DNS server is the only way to access services of type ExternalName.
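For example, from any pod whose image includes nslookup and dig, you could verify these records (my-service and my-ns are the placeholder names from the quote above):
nslookup my-service.my-ns                                      # returns the Service's cluster IP
dig +short SRV _http._tcp.my-service.my-ns.svc.cluster.local   # port number for the port named "http"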
You can follow the Configure DNS Service document for configuration instructions.
Also, you can check DNS for Services and Pods for additional information.