Resolve ip addresses of a headless service - kubernetes

I created a service, which has 3 pods assigned.
I would like to access the service through its hostname from another service in the same project. How can I do that?
Tried:
alxtbk@dns-test:~$ ping elassandra-0.elassandra
ping: elassandra-0.elassandra: Name or service not known
alxtbk@dns-test:~$ ping elassandra-0.default.svc.cluster.local
ping: elassandra-0.default.svc.cluster.local: Name or service not known
alxtbk@dns-test:~$ ping elassandra.default.svc.cluster.local
ping: elassandra.default.svc.cluster.local: Name or service not known
What is the correct way to resolve the IP addresses of the headless service?

For such Services, a cluster IP is not allocated, kube-proxy does not
handle these services, and there is no load balancing or proxying done
by the platform for them. How DNS is automatically configured depends
on whether the service has selectors defined.
With selectors
For headless services that define selectors, the endpoints controller
creates Endpoints records in the API, and modifies the DNS
configuration to return A records (addresses) that point directly to
the Pods backing the Service.
Without selectors
For headless services that do not define selectors, the endpoints
controller does not create Endpoints records. However, the DNS system
looks for and configures either:
CNAME records for ExternalName-type services.
A records for any Endpoints that share a name with the service, for all other types.
So you may be able to run:
kubectl get ep
to list the endpoints and then use them from another Kubernetes service.
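For reference, a headless Service is simply one with clusterIP: None. A minimal sketch matching the question (the name elassandra, the label, and the port are assumptions based on the question) might look like:

```yaml
# Hypothetical headless Service for the elassandra StatefulSet.
# clusterIP: None makes it headless: DNS returns Pod IPs directly.
apiVersion: v1
kind: Service
metadata:
  name: elassandra
spec:
  clusterIP: None        # headless: no virtual IP is allocated
  selector:
    app: elassandra      # assumed label on the Pods
  ports:
    - port: 9042
      name: cql
```

With a Service like this in place, names such as elassandra-0.elassandra.default.svc.cluster.local resolve to the Pod's IP, provided the Pods are created by a StatefulSet whose serviceName is elassandra.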

Related

How to keep IP address of a pod static after pod dies

I am new to Kubernetes, and I understand that Pods have dynamic IPs and require some other "service" resource attached to them to provide a fixed IP address. Which service do I need, what is the configuration process, and how does AWS ECR fit into all this?
So if I have to communicate from a container in a pod to google.com, can I assume my source is the IP address of the "service" when I establish a connection?
Well, for example on Azure, this feature ([Feature Request] Pod Static IP) is still an open request:
See https://github.com/Azure/AKS/issues/2189
Also, as far as I know, you can currently assign an existing IP address to a load balancer service or an ingress controller.
See https://learn.microsoft.com/en-us/azure/aks/static-ip
By default, the public IP address assigned to a load balancer resource
created by an AKS cluster is only valid for the lifespan of that
resource. If you delete the Kubernetes service, the associated load
balancer and IP address are also deleted. If you want to assign a
specific IP address or retain an IP address for redeployed Kubernetes
services, you can create and use a static public IP address
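Under that approach, once a static public IP exists, it can be requested through the Service's loadBalancerIP field. A sketch, where the name, labels, address, and resource group are all placeholders:

```yaml
# Sketch of a LoadBalancer Service pinned to a pre-created static public IP.
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # Needed when the static IP lives in a different resource group
    # than the cluster's node resource group
    service.beta.kubernetes.io/azure-load-balancer-resource-group: my-ip-resource-group
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # placeholder: the static public IP created beforehand
  selector:
    app: my-app
  ports:
    - port: 80
```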
As you said, you need to define a service that selects all the required pods; you then send requests to this service instead of to the pods directly.
I would suggest going through https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types.
The type of service you need basically depends on the use-case.
I will give a small overview so you get an idea.
Usually, when pods only receive internal requests, ClusterIP is used.
NodePort allows external requests but is basically used for testing rather than production.
If you also have requests coming from outside the cluster, you would usually use a LoadBalancer.
Then there is another option: Ingress.
As for AWS ECR, it's basically a container registry where you store your Docker images and pull them from.

Kubernetes internal wildcard DNS record

I'd like to create a wildcard DNS record that maps to a virtual IP inside my k8s cluster. This is because I want requests from my pods to any subdomain of a given name to map to a specific set of endpoints. I.e. requests from:
something.my-service.my-namespace.svc.cluster.local
something-else.my-service.my-namespace.svc.cluster.local
any-old-thing-my-pod-came-up-with.my-service.my-namespace.svc.cluster.local
to all resolve to the same virtual IP, and therefore to the same cluster (i.e. I would like these requests to be routed to endpoints in the same way a service does).
I've seen some other solutions that involve creating and modifying the cluster DNS service (i.e. kube-dns or CoreDNS) config. This doesn't work for me; the main reason I'm asking this question is to achieve declarative config.
What I've tried:
Service .metadata.name: '*.my-service'. Failed because '*.my-service' is not a valid service name.
Service .spec.ports.name: '*'. Not a valid port name.
Not an option:
Ingress. I cannot expose these services to the wider internet.
Pod hostname/subdomain. AFAIK DNS entries created by pod hostname/subdomain will not have a virtual IP that may resolve to any of a number of pods. (Quoting from https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-hostname-and-subdomain-fields) "DNS serves an A record at that name, pointing to the Pod’s IP."
Wildcard DNS is not supported for Kubernetes services. What you can do is front the service with an ingress controller; with Ingress you can use wildcard DNS. Refer to the PR below:
https://github.com/kubernetes/kubernetes/pull/29204
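For illustration, the networking.k8s.io/v1 Ingress API does accept a wildcard host, so the suggestion above could be sketched like this (the host, service name, and port are assumptions; note the asker ruled out internet exposure, though an ingress controller can also serve cluster-internal traffic):

```yaml
# Hypothetical Ingress routing any subdomain of my-service.example.com
# to a single backing Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wildcard-ingress
spec:
  rules:
    - host: "*.my-service.example.com"   # wildcard host, supported in Ingress v1
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
```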

Kubernetes StatefulSets: External DNS

Kubernetes StatefulSets create internal DNS entries with stable network IDs. The docs describe this here:
Each Pod in a StatefulSet derives its hostname from the name of the
StatefulSet and the ordinal of the Pod. The pattern for the
constructed hostname is $(statefulset name)-$(ordinal). The example
above will create three Pods named web-0,web-1,web-2. A StatefulSet
can use a Headless Service to control the domain of its Pods. The
domain managed by this Service takes the form: $(service
name).$(namespace).svc.cluster.local, where “cluster.local” is the
cluster domain. As each Pod is created, it gets a matching DNS
subdomain, taking the form: $(podname).$(governing service domain),
where the governing service is defined by the serviceName field on the
StatefulSet.
I am experimenting with headless services, and this works great for communication between individual pods, i.e. web-0.web.default.svc.cluster.local can connect and communicate with web-1.web.default.svc.cluster.local just fine.
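For context, a minimal manifest pair that produces exactly these per-pod names (web-0.web.default.svc.cluster.local, etc.) looks like:

```yaml
# Headless governing Service plus StatefulSet, matching the names above.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  clusterIP: None        # headless: required for stable per-pod DNS
  selector:
    app: nginx
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web       # governing service: yields web-0.web.default.svc.cluster.local
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```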
Is there any way that I can configure this to work outside of the cluster network as well, where "cluster.local" is replaced with something like "clustera.com"?
I would like to give another kubernetes cluster, lets call it clusterb.com, access to the individual services of the original cluster (clustera.com); I'm hoping it would look something like clusterb simply hitting endpoints like web-1.web.default.svc.clustera.com and web-0.web.default.svc.clustera.com.
Is this possible? I would like access to the individual services, not a load balanced endpoint.
I would suggest testing the following solutions and checking whether they help you achieve your goal in your particular scenario:
The first one is surely the easiest, and I assume you didn't implement it for some reason that you did not report in the question.
I am talking about headless services without selectors, which use CNAME records for ExternalName-type services.
ExternalName: Maps the service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up. This requires version 1.7 or higher of kube-dns
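A sketch of that approach, where the service name and external domain are placeholders for whatever name you register for clusterb:

```yaml
# Hypothetical ExternalName Service: DNS lookups in clustera return a
# CNAME pointing at clusterb's externally registered name.
apiVersion: v1
kind: Service
metadata:
  name: web-clusterb
spec:
  type: ExternalName
  externalName: web-0.web.clusterb.example.com   # placeholder external DNS name
```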
Therefore, if you need to point to a service in another cluster, you will need to register a domain name pointing to the relevant IP of clusterb.
The second solution, which I have never tested but believe can apply to your case, is to use a Federated Cluster; according to the documentation, a reason to use it is:
Cross cluster discovery: Federation provides the ability to auto-configure DNS servers and load balancers with backends from all clusters. For example, you can ensure that a global VIP or DNS record can be used to access backends from multiple clusters.

Frontend communication with API in Kubernetes cluster

Inside of a Kubernetes Cluster I am running 1 node with 2 deployments. React front-end and a .NET Core app. I also have a Load Balancer service for the front end app. (All working: I can port-forward to see the backend deployment working.)
Question: I'm trying to get the front end and API to communicate. I know I can do that with an external facing load balancer but is there a way to do that using the clusterIPs and not have an external IP for the back end?
The reason we are interested in this, it simply adds one more layer of security. Keeping the API to vnet only, we are removing one more entry point.
If it helps, we are deploying in Azure with AKS. I know they have some weird deployment things sometimes.
Pods running on the cluster can talk to each other using a ClusterIP service, which is the default service type. You don't need a LoadBalancer service to make two pods talk to each other. According to the docs on this topic
ClusterIP exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.
As explained in the Discovery documentation, if both Pods (frontend and API) are running on the same namespace, the frontend just needs to send requests to the name of the backend service.
If they are running in different namespaces, the frontend needs to use the fully qualified domain name to be able to talk to the backend.
For example, if you have a Service called "my-service" in Kubernetes Namespace "my-ns" a DNS record for "my-service.my-ns" is created. Pods which exist in the "my-ns" Namespace should be able to find it by simply doing a name lookup for "my-service". Pods which exist in other Namespaces must qualify the name as "my-service.my-ns". The result of these name lookups is the cluster IP.
You can find more info about how DNS works on kubernetes in the docs.
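As a concrete sketch, the frontend Deployment can be handed the backend Service's DNS name via an environment variable; the names backend-api, frontend, and the image below are assumptions:

```yaml
# Hypothetical frontend Deployment configured with the backend's
# in-cluster DNS name.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: my-frontend:latest          # placeholder image
          env:
            - name: API_URL
              # same namespace: "http://backend-api" alone would also resolve
              value: "http://backend-api.default.svc.cluster.local"
```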
The problem with this configuration is the idea that the frontend app will be trying to reach the API via the internal cluster network. But it will not: the app, running in the client's browser, cannot reach services and pods in my cluster.
My cluster will need something like nginx or another external Load Balancer to allow my client side api calls to reach my API.
You could alternatively use your frontend app as a proxy, but that is highly inadvisable!
I'm trying to get the front end and api to communicate
By API, if you mean the Kubernetes API server, first set up a service account and token for the frontend pod to communicate with the Kubernetes API server by following the steps here, here and here.
is there a way to do that using the clusterIPs and not have an external IP for the back end
Yes, this is possible and more secure if external access is not needed for the service. Service type ClusterIP will not have an ExternalIP and the pods can talk to each other using ClusterIP:Port within the cluster.
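A minimal sketch of such a ClusterIP Service (the name, label, and ports are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-api
spec:
  type: ClusterIP       # the default; no external IP is allocated
  selector:
    app: backend        # assumed label on the API pods
  ports:
    - port: 80          # port other pods use: backend-api:80
      targetPort: 8080  # container port the API listens on
```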

How does a Kubernetes service work?

Of all the concepts in Kubernetes, I find the service working mechanism the most difficult to understand.
Here is what I imagine right now:
kube-proxy in each node listens for any new service/endpoint in the master API controller
If there is any new service/endpoint, it adds a rule to that node's iptables
For a NodePort service, an external client has to access the new service through one of the node's IPs and the NodePort. The node will forward the request to the new service IP
Is it correct? There are still a few things I'm still not clear:
Are services lying within nodes? If so, can we ssh into nodes and inspect how services work?
Are service IPs virtual IPs and only accessible within nodes?
Most of the diagrams I see online draw services as spanning all nodes, which makes it even more difficult to imagine
kube-proxy in each node listens for any new service/endpoint in the master API controller
Kubernetes uses etcd to share the current cluster configuration information across all nodes (including pods, services, deployments, etc.).
If there is any new service/endpoint, it adds a rule to that node's iptables
Internally, Kubernetes has a so-called Endpoint Controller that keeps the Endpoints records up to date; the cluster DNS addon and kubelet use these records to make services reachable via DNS names (and environment variables).
For a NodePort service, an external client has to access the new service through one of the node's IPs and the NodePort. The node will forward the request to the new service IP
Depending on the service type, additional action is taken: for type NodePort, a port is opened on every node on top of an automatically created ClusterIP service; for type LoadBalancer, an external load balancer is additionally created with the cloud provider; etc.
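As an illustration of the NodePort case (the name, label, and port numbers are assumptions):

```yaml
# Sketch of a NodePort Service: reachable at <any-node-IP>:30080 from
# outside the cluster, and at my-app:80 from inside it.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80          # ClusterIP port inside the cluster
      targetPort: 8080  # container port
      nodePort: 30080   # must fall in the default 30000-32767 range
```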
Are services lying within nodes? If so, can we ssh into nodes and inspect how services work?
As explained, services are manifested in the cluster configuration and realized by the endpoint controller plus additional machinery such as ClusterIP allocation, load balancers, etc. I see no need to SSH into nodes to inspect services; interacting with the cluster API should typically be sufficient to investigate or update the service configuration.
Are service IPs virtual IPs and only accessible within nodes?
Service IPs, like Pod IPs, are virtual and accessible from within the cluster network. There is a global allocation map in etcd that maintains the complete list and allows allocating unique new ones. For more information on the networking model, read this blog.
For more detailed information see the docs for kubernetes components and services.