I have built two services in a k8s cluster. How can they interact with each other if I want to make an HTTP request from one service to another? I know I can't use localhost, but how can I know the host when I am coding?
Service objects are automatically exposed in DNS as <servicename>.<namespace>.svc.<clusterdomain>, where the cluster domain is usually cluster.local. The default resolv.conf allows for relative lookups, so if the service is in the same namespace you can use just its name; otherwise, use <servicename>.<namespace>.
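To make that concrete, here is a minimal sketch of a Service definition; the name backend, the namespace demo, and the ports are made-up examples:

```yaml
# Hypothetical Service named "backend" in namespace "demo"
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: demo
spec:
  selector:
    app: backend        # assumed label on the backing pods
  ports:
    - port: 80          # port the Service listens on
      targetPort: 8080  # port the pods actually serve on
```

A pod in the demo namespace can then call http://backend/, while a pod in any other namespace would use http://backend.demo/ or the fully qualified http://backend.demo.svc.cluster.local/.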
Related
I am learning the headless service of kubernetes.
I understand the following without question (please correct me if I am wrong):
A headless service doesn't have a cluster IP,
It is used for communicating with stateful apps,
When a client app container/pod communicates with a database pod via a headless service, the pod's IP address is returned instead of the service's.
What I'm not quite sure about:
Many articles on the internet explaining headless services are vague, in my opinion, because all I found only state something like:
If you don't need load balancing but want to directly connect to the
pod (e.g. database) you can use headless service
But what does it mean exactly?
So, the following are my thoughts on headless services in k8s and two questions, with an example.
Let's say I have 3 replicas of a PostgreSQL database instance behind a service. If it is a regular service, I know that by default requests to the database would be routed in round-robin fashion to one of the three database pods. That's indeed load balancing.
Question 1:
If using a headless service instead, does the above quoted statement mean the headless service will stick with one of the three database pods and never change until that pod dies? I ask this because otherwise it would still be doing load balancing if it didn't stick with one of the three pods. Could someone please clarify this?
Question 2:
I feel that no matter whether it is a regular service or a headless service, the client application just needs to know the DNS name of the service to communicate with the database in the k8s cluster. Isn't that so? I mean, what's the point of using a headless service then? To me a headless service only makes sense if the client application code really needs to know the IP address of the pod it connects to. So, as long as the client application doesn't need to know the IP address, it can always communicate with the database via either a regular service or a headless service using the service's DNS name in the cluster. Am I right here?
A normal Service comes with a load balancer (even if it's a ClusterIP-type Service). That load balancer has an IP address. The in-cluster DNS name of the Service resolves to the load balancer's IP address, which then forwards to the selected Pods.
A headless Service doesn't have a load balancer. The DNS name of the Service resolves to the IP addresses of the Pods themselves.
This means that, with a headless Service, basically everything is up to the caller. If the caller does a DNS lookup, picks the first address it's given, and uses that address for the lifetime of the process, then it won't round-robin requests between backing Pods, and it will not notice if that Pod disappears. With a normal Service, so long as the caller gets the Service's (cluster-internal load balancer's) IP address, these concerns are handled automatically.
A headless Service isn't specifically tied to stateful workloads, except that StatefulSets require a headless Service as part of their configuration. An individual StatefulSet Pod will actually be given a unique hostname connected to that headless Service. You can have both normal and headless Services pointing at the same Pods, though, and it might make sense to use a normal Service for cases where you don't care which replica is (initially) contacted.
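As a sketch, the only thing that makes a Service headless is clusterIP: None; the name postgres and the label app: postgres below are illustrative, matching the three-replica PostgreSQL example above:

```yaml
# Hypothetical headless Service for the PostgreSQL replicas
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  clusterIP: None   # this line makes the Service headless
  selector:
    app: postgres   # assumed label on the database pods
  ports:
    - port: 5432
```

A DNS lookup of postgres then returns the IP addresses of all matching pods directly, rather than a single virtual IP.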
A headless service will return all Pod IPs that are associated through the selector. The order is not stable, so if a client is making repeated DNS queries and uses only the first returned IP, this will result in some kind of load balancing as well.
Regarding your second question: That is correct. In general, if a client does not need to know all instances - and handle the unstable IPs - a regular service provides more benefits.
I have a Kubernetes cluster v1.19.4 set up with several nodes and services. Some of these services need to access an external resource residing outside the cluster. I have understood that this external resource can be defined by a service without a label selector, together with a manually created Endpoints object that sets the resource IP (sketched below).
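For reference, a minimal sketch of that pattern; the name external-db, the IP, and the port are placeholders, and the Endpoints object must have the same name as the Service:

```yaml
# Hypothetical Service with no selector...
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  ports:
    - port: 5432
---
# ...plus a manually managed Endpoints object with the same name
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db
subsets:
  - addresses:
      - ip: 192.0.2.10   # placeholder external IP
    ports:
      - port: 5432
```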
My question is one of convenience: the external service needs to be configured to allow connections from distinct IPs, and because of that I'd like all requests to this service to be handled by a specific node, so I know the cluster's services will only communicate with this external resource via a single IP, regardless of which node the request originates from.
Is this possible by assigning nodeSelectors to the service definition or the Endpoints object in Kubernetes, or by some other means?
The requirement could be described with a networking analogy: I'd like to achieve what a router does for its clients: present a single IP regardless of which client makes an outbound request to the WAN.
thanks
Services are a virtual thing; there is no pod backing them per se, so no node either. However, you can run a pod pinned to a specific node using nodeSelectors (exposing the node's host IP if needed), and then have the service communicate with that pod, and therefore that specific node, using labels.
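A rough sketch of that approach, assuming a node labeled egress: "true" and a generic TCP proxy container that forwards to the external IP (all names, labels, and the image are placeholders, and the proxy's own configuration is omitted):

```yaml
# Hypothetical proxy pinned to one node
apiVersion: apps/v1
kind: Deployment
metadata:
  name: egress-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: egress-proxy
  template:
    metadata:
      labels:
        app: egress-proxy
    spec:
      nodeSelector:
        egress: "true"       # pin the pod to the designated node
      containers:
        - name: proxy
          image: haproxy:2.8 # any TCP proxy forwarding to the external resource
          ports:
            - containerPort: 5432
---
# In-cluster Service that routes to the pinned proxy pod
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  selector:
    app: egress-proxy
  ports:
    - port: 5432
```

The external resource then only ever sees connections coming from the designated node's IP.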
Inside a Kubernetes cluster I am running 1 node with 2 deployments: a React front end and a .NET Core app. I also have a LoadBalancer service for the front-end app. (All working: I can port-forward to see the backend deployment working.)
Question: I'm trying to get the front end and the API to communicate. I know I can do that with an external-facing load balancer, but is there a way to do that using the cluster IPs and not have an external IP for the back end?
The reason we are interested in this is that it simply adds one more layer of security. By keeping the API vnet-only, we remove one more entry point.
If it helps, we are deploying in Azure with AKS. I know they have some weird deployment things sometimes.
Pods running on the cluster can talk to each other using a ClusterIP service, which is the default service type. You don't need a LoadBalancer service to make two pods talk to each other. According to the docs on this topic:
ClusterIP exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.
As explained in the Discovery documentation, if both pods (frontend and API) are running in the same namespace, the frontend just needs to send requests to the name of the backend service.
If they are running in different namespaces, the frontend needs to use a fully qualified domain name to be able to talk to the backend.
For example, if you have a Service called "my-service" in Kubernetes Namespace "my-ns" a DNS record for "my-service.my-ns" is created. Pods which exist in the "my-ns" Namespace should be able to find it by simply doing a name lookup for "my-service". Pods which exist in other Namespaces must qualify the name as "my-service.my-ns". The result of these name lookups is the cluster IP.
You can find more info about how DNS works on kubernetes in the docs.
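As a sketch, a ClusterIP Service for the API might look like this; the names api, my-ns, and dotnet-api and the ports are illustrative:

```yaml
# Hypothetical internal-only Service for the .NET Core API
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: my-ns
spec:
  type: ClusterIP      # the default; no external IP is allocated
  selector:
    app: dotnet-api    # assumed label on the API pods
  ports:
    - port: 80
      targetPort: 5000 # assumed port the API listens on
```

A pod in my-ns would then call http://api/, and a pod in another namespace http://api.my-ns/ (or the fully qualified api.my-ns.svc.cluster.local).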
The problem with this configuration is the idea that the frontend app will be reaching the API via the internal cluster network. It will not: the app runs in the client's browser, which cannot reach services and pods inside my cluster.
My cluster will need something like nginx or another external load balancer to allow client-side API calls to reach my API.
Alternatively, you could use your front-end app as a proxy, but that is strongly discouraged!
I'm trying to get the front end and api to communicate
By API, if you mean the Kubernetes API server, first set up a service account and token for the front-end pod to communicate with the Kubernetes API server by following the steps here, here and here.
is there a way to do that using the clusterIPs and not have an external IP for the back end
Yes, this is possible, and it is more secure if external access is not needed for the service. A Service of type ClusterIP will not have an external IP, and the pods can talk to each other using ClusterIP:Port within the cluster.
I have set up an experimental local Kubernetes cluster with one master and three worker nodes. I have created a deployment for a custom service that listens on port 10001. The goal is to access an exemplary endpoint /hello with a stable IP/hostname, e.g. http://<master>:10001/hello.
After deploying the deployment, the pods are created fine and are accessible through their pod IPs.
I understand the solution for cloud providers is to create a LoadBalancer service for the deployment, so that you can just expose a service. However, this is apparently not supported for a local cluster. Setting up Ingress seems like overkill for this purpose. Is it not?
It seems more like kubectl proxy is the way to go. However, when I run kubectl proxy --port <port> on the master node, I can access http://<master>:<port>/api/..., but not the actual pod.
There are many related questions (e.g. How to access services through kubernetes cluster ip?), but no (accepted) answers. The Kubernetes documentation on the topic is rather sparse as well, so I am not even sure about what is the right approach conceptually.
I am hence looking for a straightforward solution and/or a good tutorial. It seems to be a very typical use case that lacks a clear path, though.
If an Ingress Controller is overkill for your scenario, you may want to try using a service of type NodePort. You can specify the port, or let the system auto-assign one for you.
A NodePort service exposes your service at the same port on all Nodes in your cluster. If you have network access to your Nodes, you can access your service at the node IP and port specified in the configuration.
Obviously, this does not load balance between nodes. You can add an external service to help you do this if you want to emulate what a real load balancer would do. One simple option is to run something like rocky-cli.
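A minimal NodePort sketch for the service on port 10001 described above; the name hello, the label, and the explicit nodePort value are illustrative (nodePort must fall in the default 30000-32767 range, or it can be omitted for auto-assignment):

```yaml
# Hypothetical NodePort Service for the /hello deployment
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: NodePort
  selector:
    app: hello          # assumed label on the deployment's pods
  ports:
    - port: 10001
      targetPort: 10001
      nodePort: 30001   # omit to let Kubernetes pick one
```

The endpoint would then be reachable at http://<any-node-ip>:30001/hello.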
An Ingress is probably your simplest bet.
You can schedule the creation of an Nginx IngressController quite simply; here's a guide for that. Note that this setup uses a DaemonSet, so there is an IngressController on each node. It also uses the hostPort config option, so the IngressController will listen on the node's IP, instead of a virtual service IP that will not be stable.
Now you just need to get your HTTP traffic to any one of your nodes. You'll probably want to define an external DNS entry for each Service, each pointing to the IPs of your nodes (i.e. multiple A/AAAA records). The ingress will disambiguate and route inside the cluster based on the HTTP hostname, using name-based virtual hosting.
If you need to expose non-HTTP services, this gets a bit more involved, but you can look in the nginx ingress docs for more examples (e.g. UDP).
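As a hedged sketch, an Ingress for the /hello example might look like this, assuming an ingress controller is already running; the hostname and service name are placeholders:

```yaml
# Hypothetical Ingress routing by hostname to the hello Service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
spec:
  rules:
    - host: hello.example.com   # external DNS entry pointing at your node IPs
      http:
        paths:
          - path: /hello
            pathType: Prefix
            backend:
              service:
                name: hello     # assumed Service in front of the pods
                port:
                  number: 10001
```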
I have a service created as a headless service that is intended to map to a range of external IP addresses provided by a separate k8s endpoints object. If one of the external nodes were to die, is there any way for me to remove the specific endpoint from the service automatically?
You can use kubectl patch to edit whatever object you want.
Since these are external IPs and Kubernetes is therefore not aware of their health, you will need to provide the mechanism to automate the deletion, e.g. a job you run periodically or some sort of callback.
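As a sketch, such a job could apply a merge patch that rewrites the subsets list with only the surviving addresses (a merge patch replaces the whole list, so every remaining IP must be included; the object name, IPs, and port are placeholders):

```yaml
# patch.yaml - desired endpoint state after removing a dead 192.0.2.11
# apply with: kubectl patch endpoints external-db --type merge --patch-file patch.yaml
subsets:
  - addresses:
      - ip: 192.0.2.10
      - ip: 192.0.2.12
    ports:
      - port: 5432
```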
I'm thinking of deploying simple haproxy pods with configuration taken either from a ConfigMap (a list of IPs) or directly from the other external service, to be able to add health checks. Config changes might also be automated by confd inside the haproxy container. These haproxy pods would then be exposed as a Service in Kubernetes to the other apps.
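A rough sketch of that idea with the backend list held in a ConfigMap; all names, IPs, and ports are placeholders, and the confd automation is omitted for brevity:

```yaml
# Hypothetical haproxy config with health checks on the external nodes
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-config
data:
  haproxy.cfg: |
    defaults
      mode tcp
      timeout connect 5s
      timeout client  30s
      timeout server  30s
    frontend fe
      bind *:5432
      default_backend external_nodes
    backend external_nodes
      server node1 192.0.2.10:5432 check
      server node2 192.0.2.11:5432 check
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: haproxy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: haproxy
  template:
    metadata:
      labels:
        app: haproxy
    spec:
      containers:
        - name: haproxy
          image: haproxy:2.8
          volumeMounts:
            - name: config
              mountPath: /usr/local/etc/haproxy  # where the image expects haproxy.cfg
      volumes:
        - name: config
          configMap:
            name: haproxy-config
---
# The Service other apps use to reach the external resource
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  selector:
    app: haproxy
  ports:
    - port: 5432
```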