How to discover headless service endpoints - Kubernetes

Is there a way to discover all the endpoints of a headless service from outside the cluster?
Preferably using DNS or static IPs.

By watching changes to a list of Endpoints:
GET /api/v1/watch/namespaces/{namespace}/endpoints
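For example (the service and namespace names here are just placeholders), you can watch the Endpoints of a specific service with kubectl, or hit the watch path above directly from inside a pod:
# Watch the Endpoints object for one service
kubectl get endpoints my-service --namespace=my-ns --watch -o wide
# Or call the API server's watch path directly; this sketch assumes you are
# inside a pod with a mounted service account token
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -k -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/watch/namespaces/my-ns/endpoints/my-service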

Headless Services are a group of Pod IPs. Pod IPs are not (generally) available outside the cluster/cloud-provider.
Are you trying to get external IPs for a headless service or are you within the same network (e.g. in the GCE project) but not in the cluster?

The DNS addon is exactly what you're after. From the docs:
For example, if you have a Service called "my-service" in Kubernetes
Namespace "my-ns" a DNS record for "my-service.my-ns" is created. Pods
which exist in the "my-ns" Namespace should be able to find it by
simply doing a name lookup for "my-service". Pods which exist in other
Namespaces must qualify the name as "my-service.my-ns". The result of
these name lookups is the cluster IP.
And in the case of a headless service:
DNS is configured to return multiple A records (addresses) for the
Service name, which point directly to the Pods backing the Service.
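For illustration, a headless Service is simply one with clusterIP set to None; a minimal sketch, assuming your pods carry an app: my-app label:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-ns
spec:
  clusterIP: None        # headless: no virtual IP, DNS returns the pod IPs
  selector:
    app: my-app          # assumed pod label
  ports:
  - port: 80
    targetPort: 8080
From inside the cluster, a lookup of my-service.my-ns should then return one A record per ready pod.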
However, this DNS service is only available inside the cluster. But kube-dns is just another pod:
kubectl get po --namespace=kube-system
kubectl describe po kube-dns-pod-name --namespace=kube-system
Which means you can create a Service with an externally accessible address to expose it. Just use a selector matching your kube-dns pod labels.
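A rough sketch of such a Service, assuming the DNS pods carry the usual k8s-app: kube-dns label (check with the describe command above); exposing cluster DNS this way is mostly useful for experimentation:
apiVersion: v1
kind: Service
metadata:
  name: kube-dns-external
  namespace: kube-system
spec:
  type: NodePort           # or LoadBalancer on a supported cloud provider
  selector:
    k8s-app: kube-dns      # assumed label; verify it on your kube-dns pods
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53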
http://kubernetes.io/v1.1/docs/user-guide/services.html#dns
https://github.com/kubernetes/kubernetes/blob/release-1.1/cluster/addons/dns/README.md

Related

ExternalIP for Kubernetes service

I wanted some help in understanding the ExternalIP field in Kubernetes services.
Based on the information https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
I understood that I can define a Kubernetes service and assign an IP address to it using the externalIPs field.
To test this, I created a Kubernetes environment on AWS using Rancher. In the cluster, I deployed an image, created a NodePort service for it, and assigned an Elastic IP to that NodePort service.
My expectation was that I would be able to access the service using ElasticIP:NodePort.
However, there is no response; I am getting the error "destination not available".
If I run the command
kubectl get svc -n namespace
I can see the Elastic IP listed in the EXTERNAL-IP column.
I then deployed the same service without the externalIPs field, as a plain NodePort service, and this time I could access the deployment using NodeIPAddress:NodePort.
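For reference, a NodePort Service with externalIPs set looks roughly like this (names and addresses are illustrative); note that the externalIPs address has to actually route to one of the cluster nodes for traffic to arrive:
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app            # assumed pod label
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
  externalIPs:
  - 203.0.113.10           # placeholder; must be an IP that routes to a node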

Kubernetes: How to allow two pods running in the same/different namespaces to communicate, irrespective of the protocol, using a service name?

Allow two pods (say pod A and pod B) running in the same or different namespaces to communicate, irrespective of the protocol (say HTTP, HTTPS, akka.tcp), with a valid network policy applied.
Solutions tried:
I tried applying a network policy to both pods and also used the service name "my-svc.my-namespace.svc.cluster.local" to make pod B communicate with pod A, which runs the service "my-svc", but they still failed to communicate.
I also tried adding the IP address and host mapping of pod A to pod B during its deployment; then pod B was able to communicate with pod A, but the inverse communication still fails.
Kindly suggest a way to fix this.
By default, pods can communicate with each other by their IP address, regardless of the namespace they're in.
You can see the IP address of each pod with:
kubectl get pods -o wide --all-namespaces
However, the normal way to communicate within a cluster is through Service resources.
A Service also has an IP address and additionally a DNS name. A Service is backed by a set of pods. The Service forwards requests made to it to one of the backing pods.
The fully qualified DNS name of a Service is:
<service-name>.<service-namespace>.svc.cluster.local
This can be resolved to the IP address of the Service from anywhere in the cluster (regardless of namespace).
For example, if you have:
Namespace ns-a: Service svc-a → set of pods A
Namespace ns-b: Service svc-b → set of pods B
Then a pod of set A can reach a pod of set B by making a request to:
svc-b.ns-b.svc.cluster.local
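A minimal sketch of what svc-b might look like, assuming the pods of set B carry an app: b label:
apiVersion: v1
kind: Service
metadata:
  name: svc-b
  namespace: ns-b
spec:
  selector:
    app: b                 # assumed label on the pods of set B
  ports:
  - port: 80
    targetPort: 8080
A pod in ns-a could then reach it with, for example, curl http://svc-b.ns-b.svc.cluster.local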
You can put the Pods behind Services and use Service DNS for communication. Calls to service-name allow Pods in the same namespace to communicate. Calls to service-name.namespace allow Pods in different namespaces to communicate.

What is the use of a cluster service in Kubernetes?

Everywhere it's mentioned that "a cluster type of service makes a pod accessible within a Kubernetes cluster".
Does it mean that after adding a cluster service for a pod, that pod can only be connected to using the service's cluster IP, and we will no longer be able to connect to the pod using the pod IP it had before adding the service?
Please help me understand; I am still learning Kubernetes.
When a service is created with type ClusterIP, that service is accessible only inside the cluster, because service IPs are virtual IPs.
If you want to access the pod from outside the cluster, you can use a NodePort or LoadBalancer type service, which allows you to reach the pod through the node's IP or the load balancer's IP.
The main reason to access pods through services is that a service gives you a fixed location (cluster IP or service name) to reach. Pods can come and go, but the service IP remains the same.
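As a quick illustration (deployment and service names are made up), the same deployment can be exposed both ways:
# Stable virtual IP / DNS name, reachable only from inside the cluster
kubectl expose deployment my-app --port=80 --target-port=8080 --name=my-app-internal
# Also reachable from outside via <any-node-ip>:<allocated-node-port>
kubectl expose deployment my-app --port=80 --target-port=8080 --type=NodePort --name=my-app-external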

How to access pods without services in Kubernetes

I was wondering how pods are accessed when no service is defined for that specific pod. If it's through the environment variables, how does the cluster retrieve these?
Also, when services are defined, where on the master node are they stored?
Kind regards,
Charles
If you define a service for your app, you can access it outside the cluster using that service.
Services are of several types, including NodePort, where you can access that port on any cluster node and reach the service regardless of the actual location of the pod.
You can access the endpoints or the actual pod ports inside the cluster as well, but not outside.
All of the above uses Kubernetes service discovery.
There are two types of service discovery, though:
Internal service discovery
External service discovery
You cannot "access" a pod's container port(s) without a service. Services are objects that define the desired state of an ultimate set of iptables rules.
Also, services, like all other objects, are stored in etcd and maintained through your master(s).
You could, however, manually create an iptables rule forwarding traffic to the local container port that Docker has exposed.
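Only as a rough sketch of the kind of rule meant here, assuming the container's Docker IP is 172.17.0.2 and it listens on port 80 (fragile, since that IP changes whenever the pod is rescheduled):
# DNAT traffic hitting the node on port 30080 straight to the container
iptables -t nat -A PREROUTING -p tcp --dport 30080 -j DNAT --to-destination 172.17.0.2:80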
Hope this helps! If you still have any questions, drop them here.
Just for debugging purposes, you can forward a port from your machine to one in the pod:
kubectl port-forward POD_NAME HOST_PORT:POD_PORT
If you need to access it from anywhere, you should use a service, but you have to have a deployment created first.
Create the deployment:
kubectl create -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/service/networking/run-my-nginx.yaml
Expose the deployment with a NodePort service
kubectl expose deployment my-nginx --type=NodePort --name=nginx-service
Then list the services and get the port of the service
kubectl get services | grep nginx-service
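With the node port from that output, the service should be reachable from outside the cluster at any node's address (placeholders below):
kubectl get nodes -o wide          # shows each node's internal/external IPs
curl http://NODE_IP:NODE_PORT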
All cluster data is stored in etcd, which is a distributed key-value store. If etcd goes down, the cluster becomes unstable and no new pods can come up.
Kubernetes has a way to access any pod within the cluster. A Service is a logical way to access a set of pods bound by a selector. An individual pod can still be accessed irrespective of the service. Furthermore, a service can be created to access the pods from outside the cluster (a NodePort service).
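For example, to reach a single pod directly (pod name, namespace, and port are placeholders):
# Find the pod's IP
kubectl get pod my-pod -n my-ns -o wide
# From another pod inside the cluster, connect straight to that IP
curl http://POD_IP:8080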

Kubernetes custom domain name

I'm running Kubernetes with a Minikube node on my machine. The pods access each other by their .metadata.name, and I would like to have a custom domain for that name.
i.e. one pod accesses the Elasticsearch machine by elasticsearch.blahblah.com
Thanks for any suggestions
You should have DNS records for pods by default, because the kube-dns addon is enabled by default in Minikube.
To check the kube-dns addon status, use the command below:
kubectl get pod -n kube-system
Here is how the cluster add-on DNS server works:
An optional (though strongly recommended) cluster add-on is a DNS server. The DNS server watches the Kubernetes API for new Services and creates a set of DNS records for each. If DNS has been enabled throughout the cluster then all Pods should be able to do name resolution of Services automatically.
For example, if you have a Service called "my-service" in Kubernetes Namespace "my-ns" a DNS record for "my-service.my-ns" is created. Pods which exist in the "my-ns" Namespace should be able to find it by simply doing a name lookup for "my-service". Pods which exist in other Namespaces must qualify the name as "my-service.my-ns". The result of these name lookups is the cluster IP.
Kubernetes also supports DNS SRV (service) records for named ports. If the "my-service.my-ns" Service has a port named "http" with protocol TCP, you can do a DNS SRV query for "_http._tcp.my-service.my-ns" to discover the port number for "http".
The Kubernetes DNS server is the only way to access services of type ExternalName.
You can follow Configure DNS Service document for configuration instructions.
Also, you can check DNS for Services and Pods for additional information.
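To see what the DNS addon actually returns, you can run a throwaway pod and query it directly (service and namespace names are placeholders; busybox:1.28 is used because nslookup in newer busybox images can give unreliable output):
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup my-service.my-ns
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup my-service.my-ns.svc.cluster.local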