Allow only one pod of a type on a node in Kubernetes

How can I allow only one pod of a given type on a node in Kubernetes? DaemonSets don't fit this use case.
For example: restricting scheduling so that only one Elasticsearch pod runs on each node, to prevent data loss in case the node goes down.
It can be achieved by carefully planning the pod's CPU/memory resources and the cluster's machine type.
Is there any other way to do so?

Kubernetes 1.4 introduced inter-pod affinity and anti-affinity. From the documentation: "Inter-pod affinity and anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on labels on pods that are already running on the node."
The soft (preferred) form won't strictly prevent a pod from being scheduled on a given node, but the pod will only be co-located with another one if the scheduler has no other choice; the hard (required) form rules it out entirely.
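For the Elasticsearch example above, a minimal sketch of the hard form might look like this, assuming the pods carry a label such as app: elasticsearch (a placeholder for whatever label you actually use):
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app            # placeholder label key
            operator: In
            values:
            - elasticsearch
        topologyKey: kubernetes.io/hostname   # never co-locate two such pods on one node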

If you assign a constraint to your pod that can only be met at most once per node, then the scheduler will only be able to place one pod per node. A good example of such a constraint is a host port (the scheduler won't try to put two pods that both require the same host port onto the same node because the second one will never be able to run).
Also see How to require one pod per minion/kublet when configuring a replication controller?
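For illustration, a sketch of such a host-port constraint (the container name, image, and port are placeholders):
spec:
  containers:
  - name: elasticsearch
    image: elasticsearch:7.17.0
    ports:
    - containerPort: 9200
      hostPort: 9200   # only one pod claiming this host port fits on a node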

I'm running MC jobs on a Kubernetes/GCE cluster, and for me M:N scheduling is important in the sense that out of M jobs, I want one job/pod per node across the N running nodes (M >> N).
For me the solution was to set an explicit CPU limit in the pod's JSON file:
"resources": {
"limits": {
"cpu": "700m"
}
}
And I have no replication controller, just a pure batch-style cluster.
The number of nodes N is typically 100-300; M is about 10K-20K.

Is creating one Elasticsearch deployment per node an option? If so, I found it easiest to use a nodeSelector combined with the Always restart policy. You can match on different labels; here I simply use the zone of the Azure availability set, e.g. like this:
spec:
  containers:
  ...
  nodeSelector:
    failure-domain.beta.kubernetes.io/zone: "2"
  ...
  restartPolicy: Always

Related

Match Deployment to specific nodepool

I am looking to find out if there is a way I can assign a specific Deployment to a specific node pool.
I am planning to deploy a big-size application using kubernetes. I am wondering if there is a way we can assign deployments to specific node pools. In other words, we have 3 types of services:
General services: low performance, low replica count.
Monitor services: high I/O, needing high-performance servers.
Module services: the most demanding; we aim to allocate the biggest part of our budget here.
Obviously we would like to allocate nodes to specific deployments so that no resources are wasted: the low-tier node pool X would be used only by the General service deployments, the high-tier node pool Y only by the Monitor services, and the highest-tier pool only by the Module services.
I understand that there are plenty of articles about pod affinity and related topics, but I can't seem to find anything that answers the following:
How do I assign a Deployment to a specific node pool?
Thanks in advance!
Another way (in addition to what Yayotrón proposed) would be to work with node affinity and anti-affinity. For more information check the official documentation here: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
Taints and tolerations are very strict: scheduling on other nodes would not be possible at all.
With affinity and anti-affinity you can specify whether you want a strict requirement (RequiredDuringSchedulingIgnoredDuringExecution) or a soft preference (PreferredDuring....).
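A minimal sketch of the strict node-affinity form, assuming the nodes in the target pool carry a label such as pool: monitor (a placeholder for whatever label your node pool actually carries):
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: pool           # placeholder label key
            operator: In
            values:
            - monitor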
This can be achieved using Taints and Tolerations. A quick summary of what they are (from their documentation):
Node affinity is a property of Pods that attracts them to a set of nodes (either as a preference or a hard requirement). Taints are the opposite -- they allow a node to repel a set of pods.
Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.
Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints.
Or simply by using a nodeSelector.
When you register a node to join the Kubernetes cluster, you can specify taints and labels using kubelet --register-with-taints=label=value:NoSchedule --node-labels=label2=value2.
Or you can use kubectl taint for already registered nodes.
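For example (the node name is a placeholder):
kubectl taint nodes <node-name> label=value:NoSchedule
kubectl label nodes <node-name> label2=value2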
Then when you're going to deploy a pod/deployment/statefulset, you can specify its nodeSelector and tolerations:
spec:
  nodeSelector:
    label2: value2
  tolerations:
  - key: "label"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"

Kubernetes: Evenly distribute the replicas across the cluster

We can use a DaemonSet to deploy one replica on each node. How can we deploy, say, 2 or 3 replicas per node? How can we achieve that? Please let us know.
There is no way to force x pods per node the way a DaemonSet does. However, with some planning, you can force a fairly even pod distribution across your nodes using pod anti-affinity.
Let's say we have 10 nodes. First, we need a ReplicaSet (Deployment) with 30 pods (3 per node). Next, we set the pod anti-affinity to preferredDuringSchedulingIgnoredDuringExecution with a relatively high weight, matching the deployment's own labels. This causes the scheduler to prefer not to schedule a pod onto a node where a pod with the same labels is already running. Once there is 1 pod per node, the cycle starts over: a node with 2 pods is weighted lower than one with 1 pod, so the next pod should try to go there.
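A sketch of such a soft anti-affinity rule, assuming the deployment's pods are labeled app: my-app (a placeholder):
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100                     # high weight: strongly prefer spreading
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: my-app               # placeholder label
          topologyKey: kubernetes.io/hostname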
Note this is not as precise as a DaemonSet and may run into some limitations when it comes time to scale up or down the cluster.
A more reliable way when scaling the cluster is to simply create multiple DaemonSets with different names, but identical in every other way. Since the DaemonSets will have the same labels, they can all be exposed through the same service.
By default, the kubernetes scheduler will prefer to schedule pods on different nodes.
The kubernetes scheduler will first determine all possible nodes where a pod can be deployed based on your affinity/anti-affinity/resource limits/etc.
Afterwards, the scheduler will pick the best node where the pod can be deployed, spreading the pods across separate availability zones and separate nodes where possible.
You can try this on your own. For example, if you have 3 nodes, try deploying 9 replicas of a pod. You will see that each node ends up with 3 pods running.
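For example (the deployment name and image are arbitrary):
kubectl create deployment spread-test --image=nginx
kubectl scale deployment spread-test --replicas=9
kubectl get pods -o wide   # shows which node each pod landed on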

How to convert Daemonsets to kind Deployment

I have already deployed pods using a DaemonSet with a nodeSelector. My requirement is to use kind Deployment while retaining the DaemonSet-like behaviour.
I have a nodeSelector defined so that the pod is installed on the labelled nodes.
How can I achieve this? Your help is appreciated.
My requirement is that pods should be placed automatically based on the nodeSelector, but with kind Deployment.
In other words:
Using a replication controller, when I schedule 2 (two) replicas of a pod I expect 1 (one) replica on each node (VM). Instead, I find that both replicas are created on the same node. This makes one node a single point of failure, which I need to avoid.
I have labelled both nodes properly, and I can see both pods spawned on a single node. How can I ensure that the two pods are always scheduled onto different nodes?
Look into affinity and anti-affinity, specifically, inter-pod affinity and anti-affinity.
From official documentation:
Inter-pod affinity and anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on, based on labels on pods that are already running on the node rather than based on labels on nodes. The rules are of the form "this pod should (or, in the case of anti-affinity, should not) run in an X if that X is already running one or more pods that meet rule Y".
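A minimal sketch of a Deployment combining a nodeSelector with hard inter-pod anti-affinity; the name, labels, and image (my-app, mylabel: myvalue, nginx) are placeholders:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      nodeSelector:
        mylabel: myvalue                # only schedule on the labelled nodes
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: my-app
            topologyKey: kubernetes.io/hostname   # at most one replica per node
      containers:
      - name: my-app
        image: nginx                    # placeholder image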

What's the purpose of Kubernetes DaemonSet when replication controllers have node anti-affinity

DaemonSet is a Kubernetes beta resource that can ensure that exactly one pod is scheduled to each node in a group of nodes. The group of nodes is all nodes by default, but can be limited to a subset using nodeSelector or the alpha feature of node affinity/anti-affinity.
It seems that DaemonSet functionality can be achieved with replication controllers/replica sets with proper node affinity and anti-affinity.
Am I missing something? If that's correct should DaemonSet be deprecated before it even leaves Beta?
As you said, DaemonSet guarantees one pod per node for a subset of the nodes in the cluster. If you use a ReplicaSet instead, you need to:
1. use node affinity/anti-affinity and/or a node selector to control the set of nodes to run on (similar to how DaemonSet does it);
2. use inter-pod anti-affinity to spread the pods across the nodes;
3. make sure the number of pods > the number of nodes in the set, so that every node has one pod scheduled.
However, ensuring (3) is a chore as the set of nodes can change over time. With DaemonSet, you don't have to worry about that, nor would you need to create extra, unschedulable pods. On top of that, DaemonSet does not rely on the scheduler to assign its pods, which makes it useful for cluster bootstrap (see How Daemon Pods are scheduled).
See the "Alternative to DaemonSet" section in the DaemonSet doc for more comparisons. DaemonSet is still the easiest way to run a per-node daemon without external tools.

Does HorizontalPodAutoscaler make sense when there is only one Deployment on GKE (Google Container Engine) Kubernetes cluster?

I have a "homogeneous" Kubernetes setup. By this I mean that I am only running instances of a single type of pod (an http server) with a load balancer service distributing traffic to them.
By my reasoning, to get the most out of my cluster (edit: to be concrete -- getting the best average response times to http requests) I should have:
At least one pod running on every node: not having a pod running on a node means that I am paying for the node without having it ready to serve a request.
At most one pod running on every node: the pods are threaded HTTP servers, so a single pod can maximize the utilization of a node; running multiple pods on a node does not gain me anything.
This means that I should have exactly one pod per node. I achieve this using a DaemonSet.
The alternative way is to configure a Deployment and apply a HorizontalPodAutoscaler to it and have Kubernetes handle the number of pods and pod to node mapping. Is there any disadvantage of my approach in comparison to this?
My evaluation is that the HorizontalPodAutoscaler is relevant mainly in heterogeneous situations, where one HorizontalPodAutoscaler can scale up a Deployment at the expense of another Deployment. But since I have only one type of pod, I would have only one Deployment and I would be scaling up that deployment at the expense of itself, which does not make sense.
HorizontalPodAutoscaler is actually a valid solution for your needs. To address your two concerns:
1. At least one pod running on every node
This isn't your real concern. The concern is underutilizing your cluster. However, you can be underutilizing your cluster even if you have a pod running on every node. Consider a three-node cluster:
Scenario A: pod running on each node, 10% CPU usage per node
Scenario B: pod running on only one node, 70% CPU usage
Even though Scenario A has a pod on each node, the cluster is actually less utilized than in Scenario B, where only one node has a pod.
2. At most one pod running on every node
The Kubernetes scheduler tries to spread pods around so that you don't end up with multiple pods of the same type on a single node. Since in your case the other nodes should be empty, the scheduler should have no problems starting the pods on the other nodes. Additionally, if you have the pod request resources equivalent to the node's resources, that will prevent the scheduler from scheduling a new pod on a node that already has one.
Now, you can achieve the same effect whether you go with a DaemonSet or an HPA, but I personally would go with the HPA, since I think it fits your semantics better and would also work much better if you eventually decide to add other types of pods to your cluster.
Using a DaemonSet means that the pod has to run on every node (or some subset). This is a great fit for something like a logger or a metrics collector which is per-node. But you really just want to use available cluster resources to power your pod as needed, which matches up better with the intent of the HPA.
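As a sketch, an HPA for such a setup could look like this (the names are placeholders for your actual Deployment):
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: http-server
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: http-server       # placeholder deployment name
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70   # scale out when average CPU exceeds 70%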
As an aside, I believe GKE supports cluster autoscaling, so you should never be paying for nodes that aren't needed.