Use of Labels in Kubernetes deployments

I am interested in knowing how pervasively labels / selectors are used in Kubernetes. Is it a widely used feature in the field for segregating container workloads?
If not, what other ways are used to segregate workloads in Kubernetes?

I've been running Kubernetes in production for some months and use labels on some pods, together with podAntiAffinity rules, to spread them out over the nodes so that these pods aren't all located on a single node. Mind you, I'm running a small cluster of three nodes.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - your-deployment-name
      topologyKey: "kubernetes.io/hostname"
I've found this a useful way to use labels.
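For completeness: the labelSelector above matches labels on pods, so the Deployment's pod template has to carry the corresponding app label. A minimal sketch, with the name carried over from the snippet above as a placeholder:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-deployment-name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: your-deployment-name
  template:
    metadata:
      labels:
        app: your-deployment-name   # the label the podAntiAffinity rule selects on
    spec:
      containers:
      - name: app
        image: your-image:latest    # placeholder image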

Related

Kubernetes spread pods along nodepools

I'm running a managed Kubernetes cluster in GCP with 2 node pools: one on regular VMs, one on spot VMs. Autoscaling is configured for both of them.
Currently I'm running batch jobs and async tasks on the spot VMs and web apps on the regular VMs, but to reduce costs I'd like to move the web app pods mostly to spot VMs. I usually have 3-5 pods of each app running, so I'd like to leave 1 on regular VMs and move 2-4 to spot.
I found the nodeAffinity and podAffinity settings and set a preferred pod placement with preferredDuringSchedulingIgnoredDuringExecution and a spot-VM node selector, but now all my pods have moved to spot VMs.
Try something like
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/type
            operator: In
            values:
            - regular
            - spot/preemptible
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - app-label
          topologyKey: spot-node-label
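Note that topology.kubernetes.io/type and spot-node-label above are placeholders that must actually exist as labels on your nodes. On GKE, nodes typically already carry the standard cloud.google.com/gke-nodepool=<pool-name> label, so a sketch using that instead (the pool names here are hypothetical) could look like:
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: cloud.google.com/gke-nodepool
            operator: In
            values:
            - regular-pool   # hypothetical regular-VM pool name
            - spot-pool      # hypothetical spot-VM pool name
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - app-label    # your app's pod label
          # each node pool forms one topology domain, so the scheduler prefers
          # not to place all replicas of the app into the same pool
          topologyKey: cloud.google.com/gke-nodepool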

How to distribute K8 Deployments evenly across nodes with Kubernetes?

I have 12 K8s deployments that should be distributed somewhat evenly across 3 K8s nodes based on resource utilization (like the uptime command). I expected Kubernetes to automatically choose the node that is using the least resources at the time of pod creation, which I would think should result in a somewhat even distribution, but to my surprise Kubernetes is creating the majority of the pods on the same single node, which is barely handling the load, while the other nodes are hardly utilized at all.
I heard about using topologySpreadConstraints, like so:
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: ScheduleAnyway
  labelSelector:
    matchLabels:
      type: wordpress-3
But I can't get it to work properly. What is the correct way to achieve the even distribution of deployments that I am looking for? Thanks!
Are you using Bitnami's WordPress chart?
If so, you can update the values.yaml you pass into the chart and set anti-affinity like this:
# Affinity
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - topologyKey: kubernetes.io/hostname
      labelSelector:
        matchLabels:
          app.kubernetes.io/instance: {name of your Wordpress release}
This will force Kubernetes to allow only one WordPress pod per host (i.e. node). I use this setup on my own WordPress installations, and it means that if one node goes down it doesn't take out the site, as the other replicas will still be running on separate nodes.
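As an aside, the topologySpreadConstraints approach from the question can also work. Two common pitfalls: whenUnsatisfiable: ScheduleAnyway only expresses a soft preference, and the labelSelector must match labels that are actually set on the pod template. A sketch with a hard constraint instead, assuming the pods really carry type: wordpress-3:
# Inside the Deployment's pod template spec; type: wordpress-3 must also appear
# under template.metadata.labels, or the selector matches nothing.
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: DoNotSchedule   # hard requirement instead of best-effort
  labelSelector:
    matchLabels:
      type: wordpress-3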

Kubernetes only run specific deployments on nodes by specific label

Using Kubernetes, I have a set of nodes that are high-CPU, and I am using an affinity policy for a given deployment to specifically target these high-CPU nodes:
# deployment.yaml
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: high-cpu-node
            operator: In
            values:
            - "true"
That works; however, it does not prevent all the other deployments from scheduling pods on these high-CPU nodes. How do I specify that these high-CPU nodes should ONLY run pods where high-cpu-node=true? Is it possible to do this without modifying all the other deployment configurations? (I have dozens of deployments.)
To get this behaviour you should taint nodes and use tolerations on deployments: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
But, unfortunately, you would have to modify deployments. It's not possible to achieve this simply via labels.
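A minimal sketch of that combination, keeping the high-cpu-node key from the question (node and label names are illustrative):
# Taint the high-CPU nodes once, so that no pod without a matching toleration
# can be scheduled there (the node name is a placeholder):
#   kubectl taint nodes <high-cpu-node-name> high-cpu-node=true:NoSchedule
#
# Then add a toleration to the high-CPU deployment's pod spec, alongside the
# nodeAffinity from the question:
spec:
  tolerations:
  - key: "high-cpu-node"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"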

Assign affinity for distributing the kubernetes pods across all nodes?

What rules should be used to assign affinity to Kubernetes pods for distributing the pods across all Availability Zones?
I have a region with 3 Availability Zones and nodes in each of them. I want to make sure that the 3 pods are spread across all 3 Availability Zones.
You should be able to use the label topology.kubernetes.io/zone (e.g. as the topologyKey) and add anti-affinity rules.
This is part of the anti-affinity example:
podAntiAffinity:
  preferredDuringSchedulingIgnoredDuringExecution:
  - weight: 100
    podAffinityTerm:
      labelSelector:
        matchExpressions:
        - key: security
          operator: In
          values:
          - S2
      topologyKey: failure-domain.beta.kubernetes.io/zone
The result of the example is documented as:
The pod anti-affinity rule says that the pod cannot be scheduled onto a node if that node is in the same zone as a pod with label having key "security" and value "S2".
Instead of the label security from the example, you can use e.g. app-name: <your-app-name> as the label and use that in your matchExpressions.
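Putting that together, a sketch of the adapted rule (app-name / <your-app-name> is a placeholder label; topology.kubernetes.io/zone is the current replacement for the deprecated failure-domain.beta.kubernetes.io/zone label used in the docs example):
podAntiAffinity:
  preferredDuringSchedulingIgnoredDuringExecution:
  - weight: 100
    podAffinityTerm:
      labelSelector:
        matchExpressions:
        - key: app-name
          operator: In
          values:
          - <your-app-name>   # placeholder: your app's label value
      topologyKey: topology.kubernetes.io/zone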

What is the recommended way to deploy kafka to make it deploy in all the available nodes?

I have 3 nodes in K8s and I'm running Kafka (a 3-node cluster).
While deploying ZooKeeper/broker/REST proxy, the pods are not getting spread across all the available nodes. How can I make sure that all pods are deployed on different nodes? Do I need to use nodeAffinity or podAffinity?
If you want all pods to run on different nodes, you must use podAntiAffinity. If this is a hard requirement, use a requiredDuringSchedulingIgnoredDuringExecution rule; if it's not, use preferredDuringSchedulingIgnoredDuringExecution.
The topologyKey should be kubernetes.io/hostname, and in the labelSelector you put your pod's labels.
I recommend using soft anti-affinity, which will look like this:
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - <your app label>
        topologyKey: kubernetes.io/hostname
      weight: 100
Here I explained the difference between the anti-affinity types, with examples applied to a live cluster:
https://blog.verygoodsecurity.com/posts/kubernetes-multi-az-deployments-using-pod-anti-affinity/