Kubernetes internal wildcard DNS record

I'd like to create a wildcard DNS record that maps to a virtual IP inside my k8s cluster. This is because I want requests from my pods to any subdomain of a given name to map to a specific set of endpoints, i.e. I want requests to:
something.my-service.my-namespace.svc.cluster.local
something-else.my-service.my-namespace.svc.cluster.local
any-old-thing-my-pod-came-up-with.my-service.my-namespace.svc.cluster.local
to all resolve to the same virtual IP, and therefore to the same set of endpoints (i.e. I would like these requests to be routed to endpoints in the same way a Service does).
I've seen some other solutions that involve creating and modifying the cluster DNS service (i.e. kube-dns or CoreDNS) config. This doesn't work for me: the main reason I'm asking this question is to achieve declarative config.
What I've tried:
Service .metadata.name: '*.my-service'. Failed because '*.my-service' is not a valid service name.
Service .spec.ports.name: '*'. Not a valid port name.
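For reference, the first attempt above would look roughly like this (a sketch; app: my-app is a placeholder selector), and the API server rejects it because Service names must be valid DNS labels:

apiVersion: v1
kind: Service
metadata:
  name: '*.my-service'   # rejected: not a valid DNS label
spec:
  selector:
    app: my-app          # placeholder
  ports:
    - port: 80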
Not an option:
Ingress. I cannot expose these services to the wider internet.
Pod hostname/subdomain. AFAIK, DNS entries created via a pod's hostname/subdomain do not get a virtual IP that resolves to any of a number of pods. (Quoting from https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-hostname-and-subdomain-fields) "DNS serves an A record at that name, pointing to the Pod’s IP."

Wildcard DNS is not supported for Kubernetes Services. What you can do is front the service with an ingress controller; with Ingress you can use wildcard DNS. Refer to the PR below:
https://github.com/kubernetes/kubernetes/pull/29204
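As a rough sketch of that approach (the host, service name, and port are placeholders; wildcard hosts require networking.k8s.io/v1 and an ingress controller that supports them):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wildcard-ingress
spec:
  rules:
    - host: '*.my-service.example.com'   # wildcard matches a single DNS label
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80

Note this routes external hostnames; it does not create wildcard records under svc.cluster.local.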

Related

How to keep IP address of a pod static after pod dies

I am new to learning Kubernetes, and I understand that pods have dynamic IPs and require some other "service" resource attached to them to provide a fixed IP address. Which service do I require, what is the configuration process, and how does AWS-ECR fit into all this?
So if I have to communicate from a container of a pod to google.com, can I assume my source is the IP address of the "service" when establishing a connection?
Well, for example on Azure, this feature ([Feature Request] Pod Static IP) is still an open request:
See https://github.com/Azure/AKS/issues/2189
Also, as far as I know, you can currently assign an existing IP address to a load balancer service or an ingress controller.
See https://learn.microsoft.com/en-us/azure/aks/static-ip
By default, the public IP address assigned to a load balancer resource created by an AKS cluster is only valid for the lifespan of that resource. If you delete the Kubernetes service, the associated load balancer and IP address are also deleted. If you want to assign a specific IP address or retain an IP address for redeployed Kubernetes services, you can create and use a static public IP address.
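Following that doc, a sketch of a Service that uses a pre-created static public IP on AKS (the IP, resource group, and labels are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # needed if the static IP lives outside the node resource group
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # placeholder: the pre-created static public IP
  selector:
    app: my-app
  ports:
    - port: 80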
As you said, we need to define a Service that selects all the required pods; you would then send requests to this Service instead of to the pods.
I would suggest going through https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types.
The type of service you need basically depends on the use-case.
I will give a small overview so you get an idea.
Usually, when pods only receive internal requests, ClusterIP is used.
NodePort allows external requests, but is basically used for testing rather than production.
If you also have requests coming from outside the cluster, you would usually use LoadBalancer.
Then there is the further option of Ingress.
A minimal ClusterIP example is sketched below.
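A minimal sketch, assuming the pods carry a label app: my-app and listen on port 8080 (both placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP          # the default; omitting type gives the same result
  selector:
    app: my-app            # must match the pod labels
  ports:
    - port: 80             # port the Service exposes
      targetPort: 8080     # port the pods listen on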
As for AWS-ECR, it's basically a container registry where you store your Docker images and pull them from.

Resolve ip addresses of a headless service

I created a service, which has 3 pods assigned.
I would like to access the service through its hostname from another service in the same project. How can I do that?
Tried:
alxtbk@dns-test:~$ ping elassandra-0.elassandra
ping: elassandra-0.elassandra: Name or service not known
alxtbk@dns-test:~$ ping elassandra-0.default.svc.cluster.local
ping: elassandra-0.default.svc.cluster.local: Name or service not known
alxtbk@dns-test:~$ ping elassandra.default.svc.cluster.local
ping: elassandra.default.svc.cluster.local: Name or service not known
What is the correct way to resolve the IP addresses of the headless service?
For such Services, a cluster IP is not allocated, kube-proxy does not handle these services, and there is no load balancing or proxying done by the platform for them. How DNS is automatically configured depends on whether the service has selectors defined.
With selectors
For headless services that define selectors, the endpoints controller creates Endpoints records in the API, and modifies the DNS configuration to return A records (addresses) that point directly to the Pods backing the Service.
Without selectors
For headless services that do not define selectors, the endpoints controller does not create Endpoints records. However, the DNS system looks for and configures either:
CNAME records for ExternalName-type services.
A records for any Endpoints that share a name with the service, for all other types.
so you may be able to run:
kubectl get ep
to get the endpoints and then use them inside another Kubernetes service.
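For completeness, a headless Service is simply a Service with clusterIP set to None; a minimal sketch based on the names in the question (the app: elassandra selector and the port are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: elassandra
spec:
  clusterIP: None          # headless: DNS returns the pod IPs directly
  selector:
    app: elassandra        # assumed pod label
  ports:
    - port: 9042           # placeholder port

From another pod, nslookup elassandra.default.svc.cluster.local should then return one A record per ready pod; the per-pod name elassandra-0.elassandra.default.svc.cluster.local only resolves when the pods come from a StatefulSet whose serviceName is elassandra.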

Kubernetes StatefulSets: External DNS

Kubernetes StatefulSets create internal DNS entries with stable network IDs. The docs describe this here:
Each Pod in a StatefulSet derives its hostname from the name of the StatefulSet and the ordinal of the Pod. The pattern for the constructed hostname is $(statefulset name)-$(ordinal). The example above will create three Pods named web-0, web-1, web-2. A StatefulSet can use a Headless Service to control the domain of its Pods. The domain managed by this Service takes the form: $(service name).$(namespace).svc.cluster.local, where “cluster.local” is the cluster domain. As each Pod is created, it gets a matching DNS subdomain, taking the form: $(podname).$(governing service domain), where the governing service is defined by the serviceName field on the StatefulSet.
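For reference, a minimal sketch of the pattern the docs describe, using the docs' example names (web, app: nginx):

apiVersion: v1
kind: Service
metadata:
  name: web                # governing service: pods get web-0.web.default.svc.cluster.local, etc.
spec:
  clusterIP: None          # headless
  selector:
    app: nginx
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web         # must match the headless Service above
  replicas: 3              # creates web-0, web-1, web-2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80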
I am experimenting with headless services, and this works great for communication between the individual pods, i.e. web-0.web.default.svc.cluster.local can connect and communicate with web-1.web.default.svc.cluster.local just fine.
Is there any way that I can configure this to work outside of the cluster network as well, where "cluster.local" is replaced with something like "clustera.com"?
I would like to give another Kubernetes cluster, let's call it clusterb.com, access to the individual services of the original cluster (clustera.com); I'm hoping it would look something like clusterb simply hitting endpoints like web-1.web.default.svc.clustera.com and web-0.web.default.svc.clustera.com.
Is this possible? I would like access to the individual services, not a load balanced endpoint.
I would suggest testing the following solutions to check whether they can help you achieve your goal in your particular scenario:
The first one is surely the easiest; I believe you didn't implement it for some reason, and you did not report why in the question.
I am talking about headless Services without selectors, i.e. CNAME records for ExternalName-type services:
ExternalName: Maps the service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up. This requires version 1.7 or higher of kube-dns
Therefore, if you need to point to a service of another cluster, you will need to register a domain name pointing to the relevant IP of clusterb, along the lines of the sketch below.
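A minimal sketch of such an ExternalName Service (the remote hostname is a placeholder that would need to resolve to the pod in clusterb):

apiVersion: v1
kind: Service
metadata:
  name: web-1-remote
spec:
  type: ExternalName
  externalName: web-1.web.clusterb.example.com   # placeholder registered DNS name

Pods in clustera can then reach the remote pod as web-1-remote.default.svc.cluster.local, which DNS answers with a CNAME to the external name.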
The second solution, which I have never tested but believe can apply to your case, is to make use of a Federated Cluster, whose rationale is, according to the documentation:
Cross cluster discovery: Federation provides the ability to auto-configure DNS servers and load balancers with backends from all clusters. For example, you can ensure that a global VIP or DNS record can be used to access backends from multiple clusters.

Public IP fronting a k8s Service has no DNS name in ACS

I've created a k8s Service to publicly front my WebApi pod in my ACS Windows cluster. It works great, but there is no DNS name associated with the Public IP resource that is created. This prevents me from adding it as an endpoint for a Traffic Manager profile. Roadblock!
I can manually assign a DNS name to the Public IP, but then I'd be touching an ACS-created resource, which makes me uncomfortable. But I REALLY want a static DNS name and the features of Traffic Manager in front of it.
This feels like a significant deficiency. Any advice?
There is a feature request upstream:
https://github.com/kubernetes/kubernetes/issues/50062
When you create a Service, Kubernetes automatically creates a DNS record for it as long as kube-dns is running. The service name becomes the DNS name for accessing the pods within the cluster and resolves to the cluster IP, so you can use the service name from other pods in the cluster.
https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#dns
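For example (my-service and my-namespace are placeholder names), any pod in the cluster can reach the Service as:
curl http://my-service.my-namespace.svc.cluster.local
or simply as http://my-service when the client pod is in the same namespace.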

How to access Kubernetes pod in local cluster?

I have set up an experimental local Kubernetes cluster with one master and three slave nodes. I have created a deployment for a custom service that listens on port 10001. The goal is to access an example endpoint /hello with a stable IP/hostname, e.g. http://<master>:10001/hello.
After deploying the deployment, the pods are created fine and are accessible through their cluster IPs.
I understand the solution for cloud providers is to create a load balancer service for the deployment, so that you can just expose a service. However, this is apparently not supported for a local cluster. Setting up Ingress seems overkill for this purpose. Is it not?
It seems more like kubectl proxy is the way to go. However, when I run kubectl proxy --port <port> on the master node, I can access http://<master>:<port>/api/..., but not the actual pod.
There are many related questions (e.g. How to access services through kubernetes cluster ip?), but no (accepted) answers. The Kubernetes documentation on the topic is rather sparse as well, so I am not even sure about what is the right approach conceptually.
I am hence looking for a straightforward solution and/or a good tutorial. It seems to be a very typical use case that nonetheless lacks a clear path.
If an Ingress Controller is overkill for your scenario, you may want to try using a service of type NodePort. You can specify the port, or let the system auto-assign one for you.
A NodePort service exposes your service at the same port on all Nodes in your cluster. If you have network access to your Nodes, you can access your service at the node IP and port specified in the configuration.
Obviously, this does not load balance between nodes. You can add an external service to help you do this if you want to emulate what a real load balancer would do. One simple option is to run something like rocky-cli.
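A NodePort sketch matching the question's setup (the label is a placeholder; nodePort must fall within the cluster's NodePort range, 30000-32767 by default, or can be omitted for auto-assignment):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app            # placeholder; must match the pod labels
  ports:
    - port: 10001          # cluster-internal port
      targetPort: 10001    # the container port from the question
      nodePort: 30001      # placeholder within the default range

The endpoint is then reachable at http://<any-node-ip>:30001/hello.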
An Ingress is probably your simplest bet.
You can schedule the creation of an Nginx IngressController quite simply; here's a guide for that. Note that this setup uses a DaemonSet, so there is an IngressController on each node. It also uses the hostPort config option, so the IngressController will listen on the node's IP, instead of a virtual service IP that will not be stable.
Now you just need to get your HTTP traffic to any one of your nodes. You'll probably want to define an external DNS entry for each Service, each pointing to the IPs of your nodes (i.e. multiple A/AAAA records). The ingress will disambiguate and route inside the cluster based on the HTTP hostname, using name-based virtual hosting.
If you need to expose non-HTTP services, this gets a bit more involved, but you can look in the nginx ingress docs for more examples (e.g. UDP).
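A sketch of that name-based routing (hostnames and backend names are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-based-routing
spec:
  rules:
    - host: app1.example.com          # external DNS entry pointing at the node IPs
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-service    # placeholder backend Service
                port:
                  number: 10001
    - host: app2.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2-service
                port:
                  number: 10001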