This is a follow-up to an earlier SO question about the stability of service IPs. I understand that service IPs are generally stable, but in my case the Service gets recreated often, for example when its ports change.
Using DNS is not a perfect solution here, because client pods can cache the DNS entry. So I wanted to know the best practice around this.
The service IP can be defined as a fixed IP by setting the spec.clusterIP field.
The IP address that a user chooses must be a valid IP address and within the service-cluster-ip-range.
It is recommended to either use automatically assigned IP addresses for all Services or to manage the Service IPs of all Services manually.
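A minimal sketch of such a pinned Service (the name, selector, and the address 10.96.0.100 are assumptions; the address must be a free one inside your cluster's service-cluster-ip-range):

```yaml
# Hypothetical Service pinned to a specific cluster IP via spec.clusterIP.
apiVersion: v1
kind: Service
metadata:
  name: my-backend
spec:
  clusterIP: 10.96.0.100   # must be a valid, unallocated address in service-cluster-ip-range
  selector:
    app: my-backend
  ports:
  - port: 80
    targetPort: 8080
```

With the IP pinned like this, deleting and recreating the Service (for example to change its ports) yields the same cluster IP each time.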
For more info please see the official docs.
I have a statefulset that I need to run using the host network, purely for performance reasons. But I also want to be able to reference service-name endpoints. Is it possible to do this? ClusterFirstWithHostNet does not work because it doesn't prioritize using the host's network. The dnsConfig configuration might be promising, but I don't know how I would configure it to do what I'm asking about.
This is a community wiki answer. Feel free to expand it.
It might be possible if the app can select a random port to listen on at startup and pick another one if that port is busy. However, Kubernetes is not involved in selecting the port for the application.
A StatefulSet requires a headless Service, so there is no Service IP; instead the set is exposed as DNS records in CoreDNS. With the host network, the A records would probably contain the same IP for replicas on the same node, but the SRV records may actually provide a proper endpoint (hostname plus port).
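A minimal sketch of that combination (all names and the port are assumptions for illustration): a headless Service plus a StatefulSet on the host network, with dnsPolicy: ClusterFirstWithHostNet so that cluster DNS names still resolve from inside the pods.

```yaml
# Hypothetical headless Service + StatefulSet running on the host network.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  clusterIP: None                        # headless: per-pod DNS records, no Service IP
  selector:
    app: myapp
  ports:
  - name: data
    port: 7000
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet # keep resolving service names via cluster DNS
      containers:
      - name: app
        image: myapp:latest              # hypothetical image
        ports:
        - containerPort: 7000
```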
For further reference, please take a look at the below sources:
How do I get individual pod hostnames in a Deployment registered and looked up in Kubernetes?
SRV records
I took the IP of a pod and assigned it to the externalIP of a Service. I also tried assigning an IP that was not assigned to anything. It works either way, and I am not able to find any side effects. Do you see any possible issue with such a solution?
The external IP field of a Service is only used for tracking purposes; it is descriptive rather than prescriptive. You could put whatever you want there. The only thing I know of that uses that field is external-dns; beyond that it's only there for humans, so the system can report back what the IP or hostname is with type LoadBalancer.
As mentioned in the Kubernetes Service documentation:
externalIPs are not managed by Kubernetes and are the responsibility of the cluster administrator.
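For reference, a minimal sketch of where that field lives (the names and the address 203.0.113.10 are assumptions; routing traffic for that address to a cluster node is up to the administrator, not Kubernetes):

```yaml
# Hypothetical Service with a manually managed external IP.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 9376
  externalIPs:
  - 203.0.113.10   # the administrator is responsible for routing this address to a node
```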
ExternalIP is good when you want to have control over your Service IPs. On the other hand, high availability is compromised, since if the node that the IP routes to dies you will lose the route to the Service.
This blog post has a good explanation of ExternalIP.
I'm curious what's the benefit of using Cluster IP in kubernetes.
I know that if one app needs to access another app's service, it can use that service's FQDN directly, without the trouble of dealing with the cluster IP.
But I still see the cluster IP being used in lots of places.
In my experience the FQDN is used much more often, although there are plenty of examples that use the cluster IP directly. Is there a benefit? I don't think so. I think this is more of a philosophical question, and you can do it whichever way you prefer.
The Connect a Front End to a Back End Using a Service tutorial on the Kubernetes site shows a good way to do this kind of thing.
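As a minimal sketch of the FQDN approach (the Service name backend, namespace default, and port 8080 are assumptions for illustration), a client pod can reach the backend by its DNS name instead of its cluster IP:

```yaml
# Hypothetical client pod that reaches the "backend" Service by DNS name;
# cluster DNS resolves backend.default.svc.cluster.local to the Service's cluster IP.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  restartPolicy: Never
  containers:
  - name: frontend
    image: curlimages/curl
    command: ["sh", "-c", "curl http://backend.default.svc.cluster.local:8080/"]
```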
I have an HPC cluster application where I am looking to replace MPI and our internal cluster management software with a combination of Kubernetes and some middleware, most likely ZMQ or RabbitMQ.
I'm trying to design how best to do peer discovery on this system using Kubernetes' service discovery.
I know Kubernetes can provide a DNS name for a given service, and that's great, but is there a way to also dynamically discover ports?
For example, assuming I replaced the MPI middleware with ZeroMQ, I would need a way for ranks (processes on the cluster) to find each other. I know I could simply have the ranks issue service creation messages to the Kubernetes discovery mechanism and get a hostname like myapp_mypid_rank_42 fairly easily, but how would I handle the port?
If possible, it would be great if I could just do:
zmqSocket.connect("tcp://myapp_mypid_rank_42");
but I don't think that would work since I have no port number information from DNS.
How can I have Kubernetes service discovery also provide a port in as simple a manner as possible to allow ranks in the cluster to discover each other?
Note: the registering process knows its port and can register it with the K8s service discovery mechanism. The problem is finding a quick and easy way to get that port number back for the processes that want it. What I'm asking is whether there is a mechanism as simple as a DNS hostname, or whether I will need to explicitly query both the hostname and the port number from the k8s daemon, rather than simply building a hostname from some agreed-upon rule (like building a string from myapp_mypid_myrank).
Turns out the best way to do this is with a DNS SRV record:
https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services
https://en.wikipedia.org/wiki/SRV_record
A DNS SRV record provides both a target hostname (which resolves to an IP) and a port for a given service.
Luckily, Kubernetes service discovery supports SRV records and provides them on the cluster's DNS.
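As a minimal sketch (the Service name rank-42, namespace hpc, port name zmq, and port 5555 are assumptions for illustration), a Service with a named port gets an SRV record of the form _<port-name>._<protocol>.<service>.<namespace>.svc.cluster.local:

```yaml
# Hypothetical headless Service for one rank; the named port is what gets
# published as an SRV record by the cluster DNS.
apiVersion: v1
kind: Service
metadata:
  name: rank-42
  namespace: hpc
spec:
  clusterIP: None          # headless: SRV answers point at the backing pods
  selector:
    app: myapp
    rank: "42"
  ports:
  - name: zmq              # SRV name: _zmq._tcp.rank-42.hpc.svc.cluster.local
    protocol: TCP
    port: 5555
```

An SRV query for _zmq._tcp.rank-42.hpc.svc.cluster.local then returns both the target hostname and the port number, so a rank only needs the agreed-upon service name (note that Kubernetes object names use dashes rather than underscores).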
I think that in the most common case you are expected to know the port number used to access your services.
But if it helps, Kubernetes adds some environment variables to every pod to ease discovery of the Services that existed when the pod was created, for example {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT. Docs here
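For example, a minimal sketch (assuming a Service named backend already exists when this pod starts, so the variables get injected):

```yaml
# Hypothetical pod that prints the variables Kubernetes injects for a Service
# named "backend" (BACKEND_SERVICE_HOST / BACKEND_SERVICE_PORT).
apiVersion: v1
kind: Pod
metadata:
  name: env-check
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox
    command: ["sh", "-c", "echo $BACKEND_SERVICE_HOST:$BACKEND_SERVICE_PORT"]
```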
I need to get the real client IP from the request in my application. Right now I get 10.2.100.1 every time in my test environment. Is there any way to do this?
This is the same question as GCE + K8S - Accessing referral IP address and How to read client IP addresses from HTTP requests behind Kubernetes services?.
The answer, copied from them, is that this isn't yet possible in the released versions of Kubernetes.
Services go through kube-proxy, which answers the client connection and proxies through to the backend (your web server). The address that you'd see would be the IP of whichever kube-proxy the connection went through.
Work is actively being done on a solution that uses iptables as the proxy, which will let your server see the real client IP.
Try getting the IP of the Service that those pods are associated with.
One very roundabout way right now is to set up an HTTP liveness probe and watch the IP it originates from. Just be sure to also respond to it appropriately or it'll assume your pod is down.
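A minimal sketch of that workaround (the path /healthz, port 8080, and image are assumptions; the app would log the source IP of the probe requests while still answering 200 so the kubelet doesn't restart the pod):

```yaml
# Hypothetical pod with an HTTP liveness probe; the probe's source IP can be
# observed in the application's access logs.
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: app
    image: my-web-server:latest   # hypothetical image
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
```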