I have read up on StatefulSets, which I want to use for some stateful applications, and was wondering about two things:
1) Do I need to put a service in front? Or do I just DNS-query the individual instances, and do they have a static IP by default, like with a Service, regardless of which Pod runs behind it?
2) How does a StatefulSet behave when a given Pod X is down? Going by my theory that it has a kind of "internal" service, I suppose it will just hold back any requests made while the Pod behind the IP is down, until a new one is there?
1) Do I need to put a service in front? Or do I just DNS-query the individual instances, and do they have a static IP by default, like with a Service, regardless of which Pod runs behind it?
StatefulSets currently require a Headless Service to be responsible for the network identity of the Pods. You are responsible for creating this Service.
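As a minimal sketch of that pairing (the names web and nginx are assumptions, not from the question), the headless Service and the StatefulSet that references it could look like this:

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  clusterIP: None        # headless: no virtual IP, DNS returns one record per Pod
  selector:
    app: nginx
  ports:
    - port: 80
      name: web
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx     # must reference the headless Service above
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx   # example image
          ports:
            - containerPort: 80
              name: web

Each replica is then addressable individually via a stable DNS name such as web-0.nginx.default.svc.cluster.local, independent of the Pod's current IP.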
2) How does a StatefulSet behave when a given Pod X is down? Going by my theory that it has a kind of "internal" service, I suppose it will just hold back any requests made while the Pod behind the IP is down, until a new one is there?
StatefulSet ensures that, at any time, there is at most one Pod with a given identity running in a cluster. This is referred to as the "at most one" semantics provided by a StatefulSet. The StatefulSet also recreates Pods if they are deleted, similar to what a ReplicaSet does for stateless Pods.
Related
I know why one would use a StatefulSet for stateful applications (e.g. a DB or similar).
In most cases I see advice like "You want to deploy a stateful app to k8s? Use a StatefulSet!"
However, I have never seen anything like "You want to deploy a stateless app to k8s? Then DO NOT use a StatefulSet."
Even though nobody says "I don't recommend using a StatefulSet for a stateless app", most stateless apps are deployed through a Deployment, as if that were the standard.
A StatefulSet has clear pros for a stateful app, but I don't think a Deployment has equivalent pros for a stateless app.
Are there any pros to a Deployment for stateless apps? Or any clear cons to a StatefulSet for stateless apps?
I assumed that a StatefulSet could not use a LoadBalancer Service, or that a StatefulSet had a penalty when using HPA, but both of these turned out to be wrong.
I'm really curious about this question.
P.S. The precondition is that the stateless app also uses a PV, but not to persist stateful data; for example, logs.
I googled "When not to use StatefulSet", "When Deployment is better than StatefulSet", "Why Deployment is used for stateless apps", and more.
I also looked at the k8s docs about StatefulSet.
Different Priorities
What happens when a Node becomes unreachable in a cluster?
Deployment - Stateless apps
You want to maximize availability. As soon as Kubernetes detects that there are fewer than the desired number of replicas running in your cluster, the controllers spawn new ones. Since these apps are stateless, this is very easy for the Kubernetes controllers to do.
StatefulSet - Stateful apps
You want to maximize availability too, but you must also ensure data consistency (the state). To ensure data consistency, each replica has its own unique ID, and there are never multiple replicas with the same ID; in other words, it is unique. This means that you cannot spawn a new replica unless you are sure that the replica on the unreachable Node has terminated (e.g. has stopped using the Persistent Volume).
Conclusion
Both Deployment and StatefulSet try to maximize availability, but a StatefulSet cannot sacrifice data consistency (e.g. your state), so it cannot react as fast as a Deployment of (stateless) apps can.
These priorities do not only come into play when a Node becomes unreachable; they apply at all times, e.g. also during upgrades and deployments.
In contrast to a Kubernetes Deployment, where pods are easily replaceable, each pod in a StatefulSet is given a name and treated individually. Pods with distinct identities are necessary for stateful applications.
This implies that if any pod perishes, it will be apparent right away. StatefulSets act as controllers but do not generate ReplicaSets; rather, they generate pods with distinctive names that follow a predefined pattern. The ordinal index appears in the DNS name of a pod. A distinct persistent volume claim (PVC) is created for each pod, and each replica in a StatefulSet has its own state.
For instance, a StatefulSet with four replicas generates four pods, each of which has its own volume, i.e. four PVCs. StatefulSets require a headless service to return the IPs of the associated pods and enable direct interaction with them. The headless service has a DNS name but no cluster IP address, and it has to be created separately. The major components of a StatefulSet are the set itself, the persistent volume and the headless service.
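As a rough illustration of those per-replica PVCs (all names below are made up), a volumeClaimTemplates entry on the StatefulSet makes Kubernetes create one claim per ordinal, e.g. data-db-0 through data-db-3 for four replicas:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db                  # the headless Service, created separately
  replicas: 4
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: example/db:1.0    # hypothetical image
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:            # one PVC per replica: data-db-0 ... data-db-3
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi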
That all being said, people do deploy stateful applications with Deployments; usually they mount a RWX (ReadWriteMany) PV into the Pods so all "frontends" share the same backend. This is quite common in CNCF projects.
A StatefulSet manages each Pod with a unique hostname based on an ordinal index. With that index it is easy to identify the individual Pods, and easy for the application to rely on their unique network identities. Also, as you might have read, StatefulSet Pods are deleted in a specified order to maintain consistency.
When you use a StatefulSet for a stateless application, this becomes a burden: it adds the complexity of unique network identities and ordering guarantees you do not need.
For example, when you scale a StatefulSet down to zero, it happens in a controlled, ordered way, which is not the case with a Deployment or ReplicaSet. (There is, however, no ordering guarantee when the StatefulSet resource itself is deleted.)
Also, before a scaling operation is applied to a StatefulSet Pod, all of its predecessors must be Running and Ready. So if you deploy an application with three Pods, they are created in the order app-0, app-1, app-2: app-1 won't be deployed before app-0 is Running and Ready, and app-2 won't be deployed until app-1 is Ready.
With a Deployment you can manage the rollout percentages and handle the RollingUpdate scenario, whereas a StatefulSet deletes and recreates Pods one by one, as shown below.
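To make the contrast concrete (field values and image names below are only illustrative), a Deployment exposes maxSurge/maxUnavailable to tune the rollout, while a StatefulSet's RollingUpdate strategy replaces Pods one at a time, highest ordinal first:

# Deployment: rollout speed is tunable via maxSurge / maxUnavailable.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%           # extra Pods allowed above the desired count
      maxUnavailable: 25%     # Pods that may be unavailable during the update
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: example/app:2.0     # hypothetical image
---
# StatefulSet: Pods are replaced one by one, highest ordinal first.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app-stateful
spec:
  serviceName: app-stateful          # assumes a matching headless Service exists
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: app-stateful
  template:
    metadata:
      labels:
        app: app-stateful
    spec:
      containers:
        - name: app
          image: example/app:2.0     # hypothetical image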
I'm a beginner in Kubernetes and I have a situation as follows: I have two different Pods, PodA and PodB. First, I want to expose PodA to the outside world, so I create a Service (type NodePort or LoadBalancer) for PodA, which is not difficult for me to understand.
Then I want PodA to communicate with PodB, and after several hours of googling, I found the answer: I also need to create a Service (type ClusterIP if I want to keep PodB visible only inside the cluster) for PodB, and if I do so, PodA and PodB can communicate with each other. But the problem is that I also found this article. According to this webpage, the communication between pods on the same node is done via cbr0, a network bridge, and the communication between pods on different nodes is done via the cluster's route table, and they don't mention anything about the Service object (which means we don't need the Service object ???).
In fact, I also read the K8s documentation and found this in Cluster Networking:
Cluster Networking
...
2. Pod-to-Pod communications: this is the primary focus of this document.
...
where they also focus on Pod-to-Pod communication, but there is nothing about the Service object.
So, I'm really confused right now, and my question is: could you please explain the connection between the things in that article and the Service object? Is the Service object a high-level abstraction of cbr0 and the route table? And in the end, how can the Pods communicate with each other?
If I misunderstand something, please, point it out for me, I really appreciate that.
Thank you guys !!!
Motivation behind using a service in a Kubernetes cluster.
Kubernetes Pods are mortal. They are born and when they die, they are not resurrected. If you use a Deployment to run your app, it can create and destroy Pods dynamically.
Each Pod gets its own IP address, however in a Deployment, the set of Pods running in one moment in time could be different from the set of Pods running that application a moment later.
This leads to a problem: if some set of Pods (call them “backends”) provides functionality to other Pods (call them “frontends”) inside your cluster, how do the frontends find out and keep track of which IP address to connect to, so that the frontend can use the backend part of the workload?
That being said, a service is handy when your deployments (podA and podB) are dynamically managed.
Your PodA can always communicate with PodB if it knows the address or the DNS name of PodB. In a cluster environment, there may be multiple replicas of PodB, or an instance of PodB may die and be replaced by another instance with a different address and a different name. A Service is an abstraction to deal with this situation. If you use a Service to expose your PodB, then all pods in the cluster can talk to an instance of PodB using that service, which has a fixed name and fixed address no matter how many instances of PodB exist and what their addresses are.
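A sketch of what that could look like (the name podb and port 8080 are assumptions): a ClusterIP Service selects the PodB replicas by label, and PodA talks to the stable name podb instead of any Pod IP.

apiVersion: v1
kind: Service
metadata:
  name: podb               # stable DNS name: podb.<namespace>.svc.cluster.local
spec:
  type: ClusterIP          # only reachable from inside the cluster
  selector:
    app: podb              # matches the labels on the PodB replicas
  ports:
    - port: 8080           # port clients use
      targetPort: 8080     # port the PodB container listens on

PodA can then reach PodB at http://podb:8080 no matter which replica answers or how often the Pods are replaced.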
First, I read it as you are dealing with two applications, e.g. ApplicationA and ApplicationB. Don't use the Pod abstraction when you reason about your architecture. On Kubernetes, you are dealing with a distributed system, and it is designed so that you should have multiple instances of your Application, e.g. for High Availability. Each instance of your application is a Pod.
Deploy your applications ApplicationA and ApplicationB as Deployment resources. Then it is easy to do rolling upgrades without downtime, and Kubernetes will restart any instance of your application if it crashes.
For every Deployment, i.e. for each of your applications, create one Service resource (e.g. ServiceA and ServiceB). When you communicate from ApplicationA to the other application, use the Service, e.g. ServiceB. The Service will load-balance your requests to the instances of the other application, and you can upgrade your Deployment without downtime.
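For example (the names serviceb, BACKEND_URL and the image are made up), ApplicationA's Deployment can carry the backend address as configuration, pointing at the Service name rather than at any Pod:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: application-a
spec:
  replicas: 2
  selector:
    matchLabels:
      app: application-a
  template:
    metadata:
      labels:
        app: application-a
    spec:
      containers:
        - name: application-a
          image: example/application-a:1.0   # hypothetical image
          env:
            - name: BACKEND_URL
              # ServiceB load-balances this name across ApplicationB's Pods
              value: "http://serviceb:8080"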
1. Cluster networking: As the name suggests, all the pods deployed in the cluster are connected by implementing a Kubernetes network model such as DANM or Flannel.
Check this link to see how to create a cluster network.
Creating cluster network
With the CNI installed (i.e. the cluster network implemented), every pod gets an IP.
2. Service objects created with type ClusterIP point to these Pod IPs (via Endpoints) to enable internal communication.
Answering your question, Yes, The Service object is a high-level abstract of the cbr0 and route table.
You can use service object to communicate between pods.
You can also implement a service mesh like Envoy or Istio if the network is complex.
After reading through Kubernetes documents like this, deployment, service and this, I still do not have a clear idea of what the purpose of a Service is.
It seems that the service is used for 2 purposes:
expose the deployment to the outside world (e.g using LoadBalancer),
expose one deployment to another deployment (e.g. using ClusterIP services).
Is this the case? And what about the Ingress?
------ update ------
Connect a Front End to a Back End Using a Service is a good example of the service working with the deployment.
Service
A deployment consists of one or more pods and replicas of pods. Let's say we have 3 replicas of pods running in a deployment. Now let's assume there is no service. How do other pods in the cluster access these pods? Through the IP addresses of those pods. What happens if, say, one of the pods goes down? Kubernetes brings up another pod. Now the IP address list of these pods changes, and all the other pods need to keep track of it. The same is the case when autoscaling is enabled: the number of pods increases or decreases based on demand. To avoid this problem, services come into play. Services are basically objects that manage the list of pod IPs for a deployment.
And yes, the two uses you listed in the question also apply.
Ingress
Ingress is used to provide a single point of entry for the various services in your cluster. Take a simple scenario: in your cluster there are two services, one for the web app and another for a documentation service. If you are using Services alone and no Ingress, you need to maintain two load balancers, which may also cost more. To avoid this, an Ingress, once defined, sits on top of the services and routes to them based on the rules and paths defined in it.
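A rough sketch of that setup (hostname, service names and ports are illustrative): one Ingress routes /web to the web-app Service and /docs to the documentation Service, so a single entry point serves both.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /web                # routed to the web app Service
            pathType: Prefix
            backend:
              service:
                name: web-app
                port:
                  number: 80
          - path: /docs               # routed to the documentation Service
            pathType: Prefix
            backend:
              service:
                name: documentation
                port:
                  number: 80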
I'm new to K8s, so still trying to get my head around things. I've been looking at deployments and can appreciate how useful they will be. However, I don't understand why they don't support services (only replica sets and pods).
Why is this? Does this mean that services would typically be deployed outside of a deployment?
To answer your question: Kubernetes Deployments are used for managing stateless services running in the cluster, as opposed to StatefulSets, which are built for stateful application runtimes. With Deployments you can describe the update strategy and desired state of all underlying objects that have to be created; there are separate specification fields for determining those objects, like the desired number of Pod replicas, or the Pod template describing the list of containers that should be in the Pod, etc.
However, as @P Ekambaram already mentioned in his answer, Services represent the abstraction layer of the network communication model inside a Kubernetes cluster, and they declare a way to access Pods within a cluster via corresponding Endpoints. Services are kept separate from the Deployment manifest because their mission is to dynamically provide specific network behavior for the underlying Pods, without affecting or restarting them when the communication setup (e.g. the Service type) changes.
Yes, Services should be deployed as separate objects. Note that a Deployment is used to upgrade or roll back the image and works on top of a ReplicaSet.
Kubernetes Pods are mortal. They are born and when they die, they are not resurrected. ReplicaSets in particular create and destroy Pods dynamically (e.g. when scaling out or in). While each Pod gets its own IP address, even those IP addresses cannot be relied upon to be stable over time. This leads to a problem: if some set of Pods (let’s call them backends) provides functionality to other Pods (let’s call them frontends) inside the Kubernetes cluster, how do those frontends find out and keep track of which backends are in that set?
Services come to the rescue.
A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them. The set of Pods targeted by a Service is (usually) determined by a Label Selector
Something I've just learnt that is somewhat related to my question: multiple K8s objects can be included in the same yaml file, separated by ---. Something like:
apiVersion: apps/v1
kind: Deployment
# other stuff here
---
apiVersion: v1
kind: Service
# other stuff here
I think the intent is to keep objects decoupled and fine-grained.
I am trying to implement something like the etcd service, which uses a consensus algorithm (https://raft.github.io/). In this case, multiple instances of the etcd service need to be aware of each other. For this to happen, if we have 3 pods of the etcd instance in a replication controller, the pods need to be able to talk to each other (at least be able to know the IP of themselves and all the other pods).
Is there a way of achieving this in the replication controller or pod specs without having to use the kubernetes API in the pod container?
You can put a service in front of those pods by giving each pod some label (for example etcd-service=true) and making a Kubernetes headless Service (clusterIP: None) with a selector that matches that label. With the DNS add-on, you will then get a DNS A record for each endpoint in the service. You can read more in the docs here.
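A minimal sketch of that pattern, reusing the etcd-service=true label from above (the other names are assumptions): a headless Service with that selector makes the DNS add-on return one A record per Pod, so each instance can discover its peers by resolving the service name.

apiVersion: v1
kind: Service
metadata:
  name: etcd-peers
spec:
  clusterIP: None            # headless: DNS returns the individual Pod IPs
  selector:
    etcd-service: "true"     # the label carried by each etcd Pod
  ports:
    - port: 2380             # etcd peer-communication port
      name: peer

Resolving etcd-peers.<namespace>.svc.cluster.local then yields the addresses of all ready Pods, so each instance can learn about itself and its peers without calling the Kubernetes API.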