Prefer to spread replicas among nodes - Kubernetes

I'm running a Kubernetes cluster of 3 nodes with GKE. I ask Kubernetes for 3 replicas of backend pods. The 3 pods are not spread well among the nodes to provide a high-availability service; they usually all end up on 2 nodes.
I would like Kubernetes to spread the pods across as many nodes as possible, ideally one pod per node, but without failing the deployment/scale-up if there are more backend pods than nodes.
Is it possible to do that with preferredDuringSchedulingIgnoredDuringExecution?

Try setting up a preferred podAntiAffinity rule like so:
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: "app"
            operator: In
            values:
            - "my_app_name"
        topologyKey: "kubernetes.io/hostname"
This will try to schedule pods onto nodes which do not already have a pod with the same label running on them. After that it's a free-for-all (it won't keep spreading them evenly once at least 1 pod is running on each node). This means that after scaling up you might end up with one node running 5 pods and the other nodes running 1 pod each.

Related

When scaling a pod, will Kubernetes start new pods on more available nodes?

I tried to find a clear answer to this, and I'm sure it's in the kubernetes documentation somewhere but I couldn't find it.
If I have 4 nodes in my cluster, and am running a compute intensive pod which presumably uses up all/most of the resources on a single node, and I scale that pod to 4 replicas- will Kubernetes put those new pods on the unused nodes automatically?
Or, in the case where that compute intensive pod is not actually crunching numbers at the time but I scale the pod to 4 replicas (i.e. the node running the original pod has plenty of 'available' resources), will Kubernetes still see that there are 3 other totally free nodes and start the pods on them?
Kubernetes will schedule the new pods on any node with sufficient resources. So one of the newly created pods could end up on a node where one is already running, if that node has enough resources left.
You can use anti-affinity to prevent pods from being scheduled on the same node as a pod from the same deployment, e.g. by using the deployment's labels:
podAntiAffinity:
  preferredDuringSchedulingIgnoredDuringExecution:
  - weight: 100
    podAffinityTerm:
      labelSelector:
        matchExpressions:
        - key: resources-group
          operator: In
          values:
          - high
      topologyKey: topology.kubernetes.io/zone
Read the docs for more on that topic.
As far as I remember, it depends on the scheduler configuration.
If you are running an on-premises Kubernetes cluster and you have access to the scheduler process, you can tune the policy as you prefer (see the documentation).
Otherwise, you can only play with the pods' resource requests/limits and anti-affinity (see here).
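As a sketch of the requests/limits approach: sizing the resource requests so that a node can only hold a single replica indirectly forces the scheduler to spread replicas across nodes. The values below are illustrative, not from the original question:

```yaml
# Illustrative values only - adjust to your node capacity.
# If each node has ~2 CPUs, a request of 1500m means at most
# one replica fits per node, so replicas spread out naturally.
resources:
  requests:
    cpu: "1500m"
    memory: "4Gi"
  limits:
    cpu: "2"
    memory: "6Gi"
```

The trade-off is that this couples scheduling behavior to node size, so it breaks down if the node pool is resized.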

Kubernetes Restrict Node to run labeled pods only

We would like to merge 2 Kubernetes clusters because we need to establish communication between the pods, and it should also be cheaper.
Cluster 1 should stay intact and cluster 2 will be deleted. The pods in cluster 2 have very high resource requirements, and we would like to create a node pool dedicated to these pods.
So the idea is to label the new nodes, and also label the pods that were previously part of cluster 2, to ensure that they run on these nodes.
What I cannot find an answer for is the following question: how can I ensure that no other pod is scheduled onto the new node pool without having to redeploy all pods and assign labels to them?
There are 2 problems you have to solve:
Stop cluster 1 pods from running on cluster 2 nodes
Stop cluster 2 pods from running on cluster 1 nodes
Given your question, it looks like you can make changes to cluster 2 deployments, but don't want to update existing cluster 1 deployments.
The solution to problem 1 is to use taints and tolerations. You can taint your cluster 2 nodes to stop all pods from being scheduled there then add tolerations to your cluster 2 deployments to allow them to ignore this taint. This means that cluster 1 pods cannot be deployed to cluster 2 nodes and problem 1 is solved.
You add a taint like this:
kubectl taint nodes node1 key1=value1:NoSchedule
and tolerate it in your cluster 2 pod/deployment spec like this:
tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoSchedule"
Problem 2 cannot be solved the same way because you don't want to change deployments for cluster 1 pods. This is a shame because taints are the easiest solution to this. If you could make that change, then you'd simply add a taint to cluster 1 nodes and tolerate it only in cluster 1 deployments.
Given these constraints, the solution is to use node affinity. You'd need to use the requiredDuringSchedulingIgnoredDuringExecution form to ensure that the rules are always followed. The rules themselves can be as simple as a node selector based on labels. A shorter version of the example from the linked docs:
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: a-node-label-key
            operator: In
            values:
            - a-node-label-value
  containers:
  - name: with-node-affinity
    image: k8s.gcr.io/pause:2.0
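For that affinity to select the new node pool, the nodes need the matching label first. Assuming the placeholder key/value from the example above, that would be something like:

```shell
# Label each node in the dedicated pool so the nodeAffinity rule matches it
kubectl label nodes node1 a-node-label-key=a-node-label-value
```

On GKE you can instead set the label on the node pool itself so that new nodes are labeled automatically.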

Kubernetes Pod anti-affinity - evenly spread pods based on a label?

We are finding that our Kubernetes cluster tends to have hot-spots where certain nodes get far more instances of our apps than other nodes.
In this case, we are deploying lots of instances of Apache Airflow, and some nodes have 3x more web or scheduler components than others.
Is it possible to use anti-affinity rules to force a more even spread of pods across the cluster?
E.g. "prefer the node with the least pods of label component=airflow-web?"
If anti-affinity does not work, are there other mechanisms we should be looking into as well?
Try adding this to the Deployment/StatefulSet .spec.template:
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: "component"
            operator: In
            values:
            - airflow-web
        topologyKey: "kubernetes.io/hostname"
Have you tried configuring the kube-scheduler?
kube-scheduler selects a node for the pod in a 2-step operation:
Filtering: finds the set of nodes where it is feasible to schedule the Pod.
Scoring: ranks the remaining nodes to choose the most suitable placement for the Pod.
Scheduling Policies can be used to specify the predicates and priorities that kube-scheduler runs to filter and score nodes:
kube-scheduler --policy-config-file <filename>
Sample config file
One of the priorities for your scenario is:
BalancedResourceAllocation: Favors nodes with balanced resource usage.
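A minimal policy file that weights that priority might look like the sketch below (field names from the old v1 Policy API; note that this policy-file mechanism has since been deprecated in favor of scheduler configuration profiles):

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "priorities": [
    {
      "name": "BalancedResourceAllocation",
      "weight": 1
    }
  ]
}
```

This only biases scoring toward nodes with balanced resource usage; it does not guarantee an even pod count per node.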
The right solution here is pod topology spread constraints: https://kubernetes.io/blog/2020/05/introducing-podtopologyspread/
Anti-affinity only works until each node has at least 1 pod. Spread constraints actually balance based on the pod count per node.
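As a sketch, a per-node spread constraint for the airflow-web pods from the question could look like this (it goes in the pod template spec; topology spread constraints are available by default from Kubernetes 1.18):

```yaml
topologySpreadConstraints:
- maxSkew: 1                          # per-node pod counts may differ by at most 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: ScheduleAnyway   # soft constraint; use DoNotSchedule to make it hard
  labelSelector:
    matchLabels:
      component: airflow-web
```

Unlike anti-affinity, this keeps balancing as you scale past one pod per node, which addresses the 3x hot-spot problem directly.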

Schedule few statefulset pods on one node and rest on other node in a kubernetes cluster

I have a kubernetes cluster of 3 worker nodes where I need to deploy a statefulset app having 6 replicas.
My requirement is to make sure in every case, each node should get exactly 2 pods out of 6 replicas. Basically,
node1 - 2 pods of app
node2 - 2 pods of app
node3 - 2 pods of app
========================
Total 6 pods of app
Any help would be appreciated!
You should use pod anti-affinity to make sure that the pods are spread across different nodes.
Since you will have more than one pod on each node, use preferredDuringSchedulingIgnoredDuringExecution.
example when the app has the label app: mydb (use what fits your case):
podAntiAffinity:
  preferredDuringSchedulingIgnoredDuringExecution:
  - weight: 100
    podAffinityTerm:
      labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - mydb
      topologyKey: "kubernetes.io/hostname"
each node should get exactly 2 pods out of 6 replicas
Try not to think of the pods as pinned to certain nodes. The idea with Kubernetes workloads is that the workload is independent of the underlying infrastructure, such as nodes. What you really want - I assume - is to spread the pods to increase availability, e.g. if one node goes down, your system should still be available.
If you are running at a cloud provider, you should probably design the anti-affinity such that the pods are scheduled to different Availability Zones and not only to different Nodes - but it requires that your cluster is deployed in a Region (consisting of multiple Availability Zones).
Spread pods across Availability Zones
After even distribution, all 3 nodes (scattered over three zones) will have 2 pods each. That is OK. The hard requirement is that if one node (say node-1) goes down, its 2 pods need not be rescheduled onto the other nodes. When node-1 is restored, those 2 pods should then be scheduled back onto it. So we can say all 3 pairs of pods have a different node/zone affinity. Any idea around this?
This can be done with pod affinity, but it is more likely done using topologySpreadConstraints, probably with topologyKey: topology.kubernetes.io/zone - but this depends on what labels your nodes have.
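For example, for a StatefulSet labeled app: mydb as above, a zone-level spread constraint could be sketched like this (assuming the nodes carry the standard topology.kubernetes.io/zone label):

```yaml
topologySpreadConstraints:
- maxSkew: 1                        # zones may differ by at most 1 pod
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: DoNotSchedule  # hard: 6 replicas over 3 zones -> 2 per zone
  labelSelector:
    matchLabels:
      app: mydb
```

With DoNotSchedule, a pod stays Pending rather than violating the spread, which matches the "exactly 2 per zone" requirement but also means the pods of a downed zone are not rescheduled elsewhere.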

Kubernetes pod affinity - Scheduling pods on different nodes

We are running our application with 3 pods on a 3-node Kubernetes cluster. When we deploy our application, the pods sometimes get scheduled onto the same Kubernetes node.
We want our pods to be scheduled in such a way that they are spread across the nodes (no 2 pods of the same application should be on the same node). In fact, as per the documentation (https://kubernetes.io/docs/concepts/configuration/assign-pod-node/), Kubernetes already does a good job at this. However, if it doesn't find resources, it schedules pods onto the same node. How do I make it a hard constraint?
Requirement:
We want the deployment to fail or stay in a pending state if the pods don't obey the constraint (no 2 pods of the same application on the same node).
I think this one will work:
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - <VALUE>
      topologyKey: "kubernetes.io/hostname"
For more details, see: https://thenewstack.io/implement-node-and-pod-affinity-anti-affinity-in-kubernetes-a-practical-example/