Kubernetes: expose an app via a DNS name in Minikube

I have a Minikube installation in which I created a simple hello-world deployment like this:
kubectl create deployment hello-node \
--image=gcr.io/hello-minikube-zero-install/hello-node
I exposed the deployment via a service in the following way:
kubectl expose deployment hello-node --type=LoadBalancer --port=8080
Now if I call http://<local cluster ip>:8080, it prints "Hello World!" as expected.
What I want to achieve:
I want to expose different deployments in the same cluster to different sub-domains of the cluster. For instance, deployment hello1 to hello1.my-k8-cluster.com, hello2 to hello2.my-k8-cluster.com.
I want to test this locally because later I will do the same on a real cluster.
Question: How to test DNS configurations of services locally? How to define sub-domains in services?
What I tried so far:
I went through the how-to guides here and the documentation, but they didn't give me a clear picture of how to configure what I want.

You cannot define subdomains on Services. Services always get DNS names of the form service-name.namespace.svc.cluster-domain (typically cluster.local).
If you want to manipulate the DNS names, you should look at Ingress.
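As a rough sketch of what that could look like for the hostnames in the question, assuming an NGINX ingress controller is installed (on Minikube, e.g. via minikube addons enable ingress) and that each deployment is fronted by a ClusterIP service named hello1/hello2 on port 8080 (the Ingress name is made up):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress              # hypothetical name
spec:
  rules:
  - host: hello1.my-k8-cluster.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello1           # ClusterIP service in front of deployment hello1
            port:
              number: 8080
  - host: hello2.my-k8-cluster.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello2           # ClusterIP service in front of deployment hello2
            port:
              number: 8080

For local testing you can point both hostnames at the Minikube IP (minikube ip) in /etc/hosts; on a real cluster the same hostnames would be DNS records pointing at the ingress controller's external address.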
In order to test DNS configurations, you can use normal DNS testing tools like dig from inside a container. You can use public images like dnsutils or create your own testing images for this purpose.
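For example, a quick in-cluster DNS check could look like this (the pod name is arbitrary, and the image is one publicly available dnsutils build; any image containing dig/nslookup works):

kubectl run dnsutils --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 --restart=Never -- sleep 3600
kubectl exec dnsutils -- nslookup hello-node.default.svc.cluster.local
kubectl exec dnsutils -- dig +search hello-node

Both lookups should resolve to the service's ClusterIP as long as cluster DNS is healthy.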

Related

bitnami/external-dns with Kubernetes on Docker Desktop does not work

What I'm trying to do
I have deployed an ASP.NET Core gRPC service on Docker Desktop (Kubernetes enabled). To do client-side load balancing, I want to expose it via a headless service. The deployment and service definition YAML files are as provided by the link, viz. Deployment.yaml, service.yaml, and the PV and PVC .yaml. When the deployment runs, two replicas are created. Now I want to expose them via a headless service, do a DNS lookup of the pods' IP addresses, and do client-side load balancing. For this, I installed the bitnami external-dns Helm chart. I did not make any modifications to the default chart values. Now when I do an nslookup of my service, it does not work.
My expectation
Deploy the bitnami external-dns on Docker Desktop with Kubernetes enabled, with the service configured to be exposed as DNS on the load balancer. I was expecting the nslookup to succeed and return the pod IPs.
Can someone help me get this working?
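For reference, a headless service is just a Service with clusterIP: None; a minimal sketch for this kind of setup could look like the following (the service name, label selector, and gRPC port are assumptions, since the linked YAML is not reproduced here):

apiVersion: v1
kind: Service
metadata:
  name: grpc-headless            # hypothetical name
spec:
  clusterIP: None                # headless: DNS returns the pod IPs directly
  selector:
    app: grpc-server             # must match the pod labels of the deployment
  ports:
  - port: 8080                   # assumed gRPC port
    targetPort: 8080

With a service like this, nslookup grpc-headless.<namespace>.svc.cluster.local from inside the cluster returns one A record per ready pod; external-dns is not involved in that in-cluster lookup, since its job is to publish records to an external DNS provider.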

What are the best solutions to replace kubectl port-forward?

I am working with Rancher server and k3s to improve my knowledge of these solutions.
I want to expose container services on the LAN, which is why I used kubectl port-forward.
kubectl port-forward --namespace=ns-name --address LAN-IP service/hello 30104:8080
But several web resources say this is not a reliable solution and is only meant for local testing.
I tried to replace it with an Ingress, but I am a bit lost between Ingress, DNS, and nginx-ingress, in addition to the Rancher components.
I understand that a LoadBalancer service needs a cloud provider, to obtain a public IP for instance, and to get out of the <pending> state.
Can you point me to how to replace port-forward on a LAN without a cloud provider?
[edit #Rajesh Dutta]
I already use NodePort, but without port-forward the service is exposed as NODE_IP:PORT, not LAN_IP:PORT. I need to reach it from outside the cluster.
So this is what I did:
1 - create deployment
kubectl create deployment hello --image=gcr.io/google-samples/node-hello:1.0 --port=8080 --replicas=2
2 - expose deployment (create service)
kubectl expose deployment hello --type=NodePort
3 - forward service
kubectl port-forward --namespace=ns-name --address local-ip service/hello 30104:8080
(IP schema diagram)
Now, considering that I will have several services, I would like to find the best way to replace port-forward.
To start with, I would recommend using a NodePort service. This will expose your application on a NodePort (30000-32767). Later, if you want, you can switch to an Ingress.
Assuming that you are working with a Deployment-type object,
command:
kubectl expose deployment deployment-name --type=NodePort --port=8080 --target-port=<rancher server port>
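If the node port should be predictable on the LAN, the equivalent manifest can pin nodePort explicitly. A sketch, with the service name, labels, and ports taken from the commands above but otherwise assumed:

apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: NodePort
  selector:
    app: hello                   # must match the deployment's pod labels
  ports:
  - port: 8080                   # service port inside the cluster
    targetPort: 8080             # container port
    nodePort: 30104              # fixed port in the 30000-32767 range

The service is then reachable at http://<node-LAN-IP>:30104 from other machines on the LAN, provided the node's IP is routable from them; on a single-node k3s install on bare metal, the node IP usually is the LAN IP.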

Is it possible for a pod running in a statefulset to get the hostnames of all the pods running in a different statefulset?

I have a pod running in a statefulset, but it needs to know the hostnames or addresses of all pods running in another statefulset to communicate with them. The second statefulset is created by a separate Helm chart. Can the pod work this out dynamically? Can I inject this information into the pod through an environment variable, similar to exposing status.podIP?
Edit: Each statefulSet has its own headless service
As discussed in the comments, the way to go here is to use a Service resource, as this gives you a stable DNS name within the cluster for reaching all the pods targeted by that service.
The DNS name for the service is:
the service's name, if you access it from within the same namespace
<my-service-name>.<namespace-name>.svc.cluster.local if you access it from another namespace, where cluster.local is the cluster domain, which may differ from cluster to cluster depending on the cluster's configuration
If you need further configuration options, e.g. when you want to deploy your chart into different cloud environments where the cluster domain might actually differ, you can use Kustomize (kustomize.io) to adjust your configuration at apply time.
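Since each StatefulSet here has its own headless service, the pods additionally get stable per-pod DNS names of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local. For example, with hypothetical names:

# StatefulSet "db" with 3 replicas behind headless service "db-headless" in namespace "data"
db-0.db-headless.data.svc.cluster.local
db-1.db-headless.data.svc.cluster.local
db-2.db-headless.data.svc.cluster.local

An nslookup of the headless service name itself (db-headless.data.svc.cluster.local) returns the IPs of all ready pods, which is usually enough to discover the peers in the other StatefulSet.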

How do I un-expose (undo expose) a service?

In Kubernetes, it is possible to make a service running in the cluster externally accessible by running kubectl expose deployment. Why deployment, as opposed to service, is beyond my simpleton's comprehension. That aside, I would also like to be able to undo this operation afterwards. Think of a scenario where I need to get access to a service that is normally only accessible inside the cluster, for debugging purposes, and then restore the original situation.
Is there any way of doing this short of deleting the deployment and creating it afresh?
PS. Actually, deleting the service and deployment doesn't help: re-creating a service and deployment with the same names will result in the service being exposed again.
Assuming you have a deployment called hello-world, and do a kubectl expose as follows:
kubectl expose deployment hello-world --type=ClusterIP --name=my-service
this will create a service called my-service, which makes your deployment accessible for debugging, as you described.
To display information about the Service:
kubectl get services my-service
To delete this service when you are done debugging:
kubectl delete service my-service
Now your deployment is un-exposed.

How to make a service accessible via the service proxy running at the master in Kubernetes

How can I make a service accessible via the service proxy running at the master in Kubernetes?
Like the kube-ui or fluentd-elasticsearch services in the examples, which can be accessed at the URL http://[masterIP:port]/api/v1/proxy/namespaces/kube-system/services/kube-ui/.
I cannot access http://[masterIP:port]/api/v1/proxy/namespaces/test/services/myweb when I create a service named myweb in the test namespace.
So how do I do this?
If you're trying to access it from a pod running in the cluster, you're best off just accessing the service directly. Services are made available using DNS within the cluster. If your pod is in the same namespace as the service, you should be able to access it simply using its name, e.g. at myweb in this case. If your pod is in a different namespace, you can reach it at service-name.namespace, e.g. myweb.test in this case.
If you're trying to access it from outside the cluster, then you shouldn't need to do anything different than you do for the default services. If you're unable to access it in the same way, it's likely that your service doesn't have any pods backing it, or that those pods aren't working. You can check which pods are backing your service using kubectl get endpoints myweb --namespace=test. If that's empty, then you should make sure you've scheduled the pods that are meant to implement the service, and if so, that their labels are correct.
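For example, to run that check and then compare the service's selector with the pod labels:

kubectl get endpoints myweb --namespace=test
kubectl describe service myweb --namespace=test     # shows the Selector field
kubectl get pods --namespace=test --show-labels     # pod labels must match that selector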
You might find the documentation on services useful.