Can podAffinity schedule two pods to run on the same node?

Both pods are scheduled on the same node with podAffinity when each pod is in a different namespace. Once I try to deploy both of them in the same namespace, podAffinity fails: one pod is running while the other remains Pending with a podAffinity error.
Thanks!

From your comment, I suspect that you have a label collision that is only apparent when you try to run the pods in the same namespace.
Take a look at your nodeSelectorTerms and matchExpressions.
From the docs:
If you specify multiple matchExpressions associated with nodeSelectorTerms, then the pod can be scheduled onto a node only if all matchExpressions can be satisfied.
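For reference, here is a minimal sketch of what such a rule can look like; the app: my-app label, the pod name, and the image are placeholders, not taken from the question:

    # Hypothetical pod spec using required pod affinity. Note that the
    # labelSelector only matches pods in the same namespace unless
    # .namespaces / .namespaceSelector is set, which is one reason
    # behaviour can change when both pods move into one namespace.
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-b
      labels:
        app: my-app                # assumed label
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values: ["my-app"]
            topologyKey: kubernetes.io/hostname   # co-locate on one node
      containers:
      - name: app
        image: nginx:1.25          # placeholder image

If the matchExpressions and the labels your pods actually carry don't line up, the second pod stays Pending, which matches the symptom you describe.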

Related

Kubernetes StatefulSets - run pod on every worker node

What is the easiest way to run a single Pod on every available worker node as part of a StatefulSet, i.e. a one-to-one mapping?
Am I right to say that, by default, every Pod of a StatefulSet will run on a different Node? If so, is it sufficient to create x Pods in the StatefulSet when x worker nodes exist in the cluster?
Thanks.
Use a DaemonSet instead.
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
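A minimal DaemonSet sketch; the names and the image are placeholders:

    # One pod per node (or per matching node, if a nodeSelector is added
    # to the pod template). Pods follow nodes as they join and leave.
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: my-daemon              # assumed name
    spec:
      selector:
        matchLabels:
          app: my-daemon
      template:
        metadata:
          labels:
            app: my-daemon
        spec:
          containers:
          - name: daemon
            image: nginx:1.25      # placeholder image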
If you really want to use a StatefulSet, you can take a look at features like nodeSelector or affinity and anti-affinity.

Evaluate affinity rules

I updated a StatefulSet, and the replacement pods for the ones that were deleted stay Pending forever. Describing the pods shows that they cannot be scheduled because no node matched the pod affinity/anti-affinity rules. This StatefulSet, however, has no affinity rules at all.
My question
How can I see the affinity rules that are actually in effect for my StatefulSet's pods, so I can tell which rules are preventing them from starting?
I believe a different deployment must be hindering these pods from starting, but I am clueless as to which deployment it might be.
Check the following to narrow down the root cause (a sketch of inspecting the live pod spec follows this list):
Check whether your nodes have taints (kubectl describe node {Node_Name} | grep Taint); if they do, add matching tolerations so the workload can be scheduled onto those nodes.
Check whether the pod definition sets the nodeName field and points to a node that no longer exists.
As Prateek Jain recommended above, check your pod with kubectl describe to see what exactly is being overridden in your definition.
StatefulSet pods might also be blocked from deletion by pv-protection; the best way to troubleshoot that situation is to run kubectl get events -n ${yournamespace}, which lists every event in your namespace.
Look for any warning or error messages.
NOTE: If you get too many events, try to filter using --field-selector=type!=Normal,reason!=Unhealthy
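To see the affinity rules actually in effect (for example, ones injected by a mutating admission webhook rather than by your manifest), dump the live pod with kubectl get pod <pod-name> -o yaml and look under .spec.affinity. A hypothetical excerpt, with an assumed label, of what an injected rule might look like:

    # Excerpt of a live pod's spec; if this block is not in your
    # StatefulSet manifest, something else added it at admission time.
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: some-other-app   # assumed label, for illustration
            topologyKey: kubernetes.io/hostname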

Can you tell kubernetes to start one pod before another?

Can I add some config so that my daemon pods start before other pods can be scheduled or nodes are designated as ready?
Edit:
These are two different pods altogether; the DaemonSet is a dependency of any pods that might get scheduled on the host.
There is no such thing as a Pod hierarchy in Kubernetes between separate kinds of pods, meaning pods belonging to different Deployments, StatefulSets, DaemonSets, etc. In other words, there is no notion of a master pod and child pods. If you'd like such a hierarchy, you have to build your own tooling around it, for example waiting for all pods of a DaemonSet to be Running before creating a new Pod or other Kubernetes workload resource (see the initContainer sketch below).
The closest thing to pod ordering in Kubernetes is StatefulSets.
As per the docs:
For a StatefulSet with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1}.
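A common do-it-yourself approach, sketched here under assumed names and an assumed port, is an initContainer in the dependent pod that blocks until the node-local daemon answers:

    # Hypothetical sketch: wait-for-daemon and port 9100 are assumptions,
    # not part of the question. The initContainer polls the daemon via
    # the node IP (assuming the DaemonSet pod uses hostPort or
    # hostNetwork) and only then lets the main container start.
    apiVersion: v1
    kind: Pod
    metadata:
      name: app-pod
    spec:
      initContainers:
      - name: wait-for-daemon
        image: busybox:1.36
        env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP   # the node's IP
        command: ['sh', '-c', 'until nc -z "$HOST_IP" 9100; do echo waiting; sleep 2; done']
      containers:
      - name: app
        image: nginx:1.25                # placeholder image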

How do I debug kubernetes scheduling?

I have added podAntiAffinity to my DeploymentConfig template.
However, pods are being scheduled on nodes that I expected would be excluded by the rules.
How can I view logs of the kubernetes scheduler to understand why it chose the node it did for a given pod?
podAntiAffinity has more to do with other pods than with nodes specifically: it tells the scheduler which nodes to exclude based on which pods are already running on them, and even here you can make the rule a hard requirement rather than just a preference. To directly pick the node on which a pod is or is not scheduled, you want nodeAffinity instead. See the "Assigning Pods to Nodes" guide in the Kubernetes docs.
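A minimal fragment contrasting the two mechanisms; the disktype node label and the app: my-app pod label are assumptions for illustration:

    # nodeAffinity selects nodes by node labels; podAntiAffinity steers
    # away from nodes that already run pods with a given label.
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: disktype            # assumed node label
                operator: In
                values: ["ssd"]
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:   # soft rule
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: my-app            # assumed pod label
              topologyKey: kubernetes.io/hostname

Note that a preferred term can be outweighed by other scheduler priorities, which is one common reason pods still land on nodes you expected to be excluded; a requiredDuringScheduling term is enforced strictly.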

How to convert DaemonSets to kind Deployment

I have already deployed pods using a DaemonSet with a nodeSelector. My requirement is to use kind Deployment while retaining the DaemonSet's behaviour: I have a nodeSelector defined so that the pod is placed on the labelled nodes.
How can I achieve this? Any help is appreciated.
My requirement is that pods should be placed automatically based on the nodeSelector, but with kind Deployment.
In other words:
With a ReplicationController, when I schedule 2 (two) replicas of a pod I expect 1 (one) replica on each Node (VM). Instead, I find both replicas created on the same node. This makes that one node a single point of failure, which I need to avoid.
I have labelled both nodes properly, and I still see both pods spawned on a single node. How can I make both pods always schedule onto both nodes?
Look into affinity and anti-affinity; specifically, inter-pod affinity and anti-affinity.
From official documentation:
Inter-pod affinity and anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled based on labels on pods that are already running on the node rather than based on labels on nodes. The rules are of the form “this pod should (or, in the case of anti-affinity, should not) run in an X if that X is already running one or more pods that meet rule Y”.
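A minimal sketch of that for a two-replica Deployment, assuming its pods carry the label app: my-app (the name, label, and image are placeholders):

    # Required anti-affinity on the pods' own label keeps two such pods
    # off the same node, so the 2 replicas land on 2 different nodes.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchLabels:
                    app: my-app
                topologyKey: kubernetes.io/hostname
          containers:
          - name: app
            image: nginx:1.25      # placeholder image

With requiredDuringSchedulingIgnoredDuringExecution, a second pod carrying the app: my-app label can never be placed on a node that already runs one; on a two-node cluster a third replica would stay Pending, so use preferredDuringScheduling if that is not acceptable.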