We are having problems with several deployments in our cluster that do not seem to be working. However, I am a bit apprehensive about touching them, since they are part of the kube-system namespace. I am also unsure what the correct approach to getting them into an OK state is.
I currently have two DaemonSets that show warnings with the message:
DaemonSet has no nodes selected
See images below. Does anyone have any idea what the correct approach is?
A DaemonSet creates a pod on each node of your Kubernetes cluster.
If the Kubernetes scheduler cannot schedule any pod, there are several possibilities:
The pod spec may request more memory than any node can provide; look at the value of spec.containers[].resources.requests.memory
The nodes may have taints, in which case the DaemonSet declaration must include matching tolerations (Kubernetes documentation about taints and tolerations); a sketch follows this list
The pod spec may have a nodeSelector field that matches no node (Kubernetes documentation about node selectors)
The pod spec may enforce node affinity or anti-affinity rules (Kubernetes documentation about node affinity)
If Pod Security Policies are enabled on the cluster, a security policy may be blocking access to a resource that the pod needs to run
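As a minimal sketch of the first few points (all names, labels, images and values here are illustrative, not taken from your cluster), a DaemonSet whose pods tolerate a control-plane taint and keep their memory request small might look like this:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-agent                       # hypothetical name, for illustration only
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: example-agent
  template:
    metadata:
      labels:
        app: example-agent
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master # tolerate the control-plane taint, if that is what blocks scheduling
        operator: Exists
        effect: NoSchedule
      containers:
      - name: agent
        image: example/agent:latest         # placeholder image
        resources:
          requests:
            memory: 64Mi                    # keep the request below node capacity

If the DaemonSet should run only on some nodes, a nodeSelector or node affinity block goes in the same pod spec; the important thing is that it matches labels that actually exist on the nodes.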
These are not the only possible causes. More generally, a good start would be to look at the events associated with the DaemonSet:
> kubectl describe daemonsets NAME_OF_YOUR_DAEMON_SET
Related
I updated a StatefulSet deployment, and the deleted pods of that StatefulSet are pending forever. I therefore described the pods and saw that they cannot be scheduled on nodes because the nodes don't match the pod affinity/anti-affinity rules. This StatefulSet, however, has no affinity rules at all.
My question
How can I evaluate the affinity rules of my StatefulSet, so that I can see which affinity rules are preventing these pods from starting?
I believe it must be a different deployment that is hindering these pods from starting up, but I am clueless as to which deployment it might be.
Check the following in order to determine the possible root cause (a short command sketch follows the list):
Check whether your nodes have taints (kubectl describe node {Node_Name} | grep Taint); if they do, add the matching tolerations in order to schedule the workload on those nodes.
The definition may have a nodeName field pointing to a node that does not exist.
As Prateek Jain recommended above, check your pod with kubectl describe in order to see what exactly is being overridden in your definition.
StatefulSet pods might be prevented from deletion by pv-protection; the best way to troubleshoot that situation is to run kubectl get events -n ${yournamespace}, which lists every event in your namespace.
Try to see if any warning or error message is displayed.
NOTE: If you get too many events, try to filter using --field-selector=type!=Normal,reason!=Unhealthy
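A sequence along these lines (node, namespace and resource names below are placeholders) walks through the checks above:

# list taints on a node
kubectl describe node my-node | grep Taint

# dump the effective affinity/nodeSelector section of the StatefulSet
kubectl get statefulset my-statefulset -n my-namespace -o yaml | grep -A 20 affinity

# show the scheduler's reason for a pending pod
kubectl describe pod my-pod -n my-namespace

# recent non-routine events in the namespace
kubectl get events -n my-namespace --field-selector=type!=Normal,reason!=Unhealthy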
I'm using DigitalOcean's managed Kubernetes service and have deployed 9 nodes in the cluster, but when I try to deploy the Kafka/ZooKeeper pods, a few of them get deployed and the others remain in the Pending state. I've tried running
kubectl describe pods podname -n namespace
it shows
that the pod is not getting assigned to any node.
Check whether your Deployment/StatefulSet has node selectors and/or node/pod affinity rules that might prevent it from running.
It would also help to see more of the pod describe output, since it might give more details.
There is a message in your screenshot about PersistentVolumeClaims, so I would also check the status of the PVC objects to see whether they are bound or not.
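For example (the namespace and claim names below are placeholders), the claims and their events can be inspected with:

# list PVCs and their binding status (Bound / Pending)
kubectl get pvc -n my-namespace

# show the events for one claim, e.g. why it stays Pending
kubectl describe pvc my-claim -n my-namespace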
good luck
In Kelsey Hightower's Kubernetes Up and Running, he gives two commands:
kubectl get daemonSets --namespace=kube-system kube-proxy
and
kubectl get deployments --namespace=kube-system kube-dns
Why does one use daemonSets and the other deployments?
And what's the difference?
Kubernetes Deployments manage stateless services running on your cluster (as opposed to, for example, StatefulSets, which manage stateful services). Their purpose is to keep a set of identical pods running and to upgrade them in a controlled way. For example, you define in the deployment definition how many replicas (pods) of your app you want to run, and Kubernetes will spread that many replicas of your application over the nodes. If you ask for 5 replicas over 3 nodes, some nodes will run more than one replica of your app.
DaemonSets also manage groups of replicated pods. However, a DaemonSet adheres to a one-pod-per-node model, either across the entire cluster or across a subset of nodes; it will never run more than one replica per node. Another advantage of a DaemonSet is that if you add a node to the cluster, the DaemonSet will automatically spawn a pod on that node, which a Deployment will not do.
DaemonSets are useful for deploying ongoing background tasks that you need to run on all or certain nodes and that do not require user intervention. Examples of such tasks include storage daemons like Ceph, log collection daemons like Fluentd, and node monitoring daemons like collectd.
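As a rough side-by-side sketch (names and images below are made up for illustration), the spec difference boils down to a Deployment carrying a replica count while a DaemonSet carries none and simply follows the nodes:

# Deployment: you choose the replica count; the scheduler spreads the pods over nodes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                        # hypothetical name
spec:
  replicas: 5                         # 5 pods total, regardless of how many nodes exist
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: example/my-app:1.0     # placeholder image
---
# DaemonSet: no replica count; one pod per (selected) node, added and removed with the nodes
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent                    # hypothetical name
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: node-agent
        image: example/node-agent:1.0 # placeholder image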
Let's take the example you mentioned in your question: why is kube-dns a Deployment and kube-proxy a DaemonSet?
The reason is that kube-proxy is needed on every node in the cluster to program the iptables rules, so that every node can reach every pod no matter which node it resides on. Hence, when we make kube-proxy a DaemonSet and another node is added to the cluster at a later time, kube-proxy is automatically spawned on that node.
kube-dns's responsibility is to resolve a service name to its IP, and a single replica of kube-dns is enough to do that. Hence we make kube-dns a Deployment, because we don't need kube-dns on every node.
I have added podAntiAffinity to my DeploymentConfig template.
However, pods are being scheduled on nodes that I expected would be excluded by the rules.
How can I view logs of the kubernetes scheduler to understand why it chose the node it did for a given pod?
PodAntiAffinity has more to do with other pods than with nodes specifically: it tells the scheduler which nodes to exclude based on which pods are already running on them, and even then you can make it a hard requirement or just a preference. To directly pick the nodes on which a pod is (or is not) scheduled, you want to use NodeAffinity. See the guide.
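As a minimal sketch (the names, labels, image and the plain Deployment kind are assumptions for brevity; the same affinity block has the same shape inside a DeploymentConfig template), a hard NodeAffinity requirement looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pinned-app                                          # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: pinned-app
  template:
    metadata:
      labels:
        app: pinned-app
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:   # a hard requirement, not a preference
            nodeSelectorTerms:
            - matchExpressions:
              - key: disktype                               # hypothetical node label
                operator: In
                values:
                - ssd
      containers:
      - name: app
        image: example/app:1.0                              # placeholder image

With the required variant, a pod whose rule matches no node stays Pending, and the reason shows up in kubectl describe pod, which also makes a misbehaving rule easier to spot.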
DaemonSet is a Kubernetes beta resource that ensures exactly one pod is scheduled onto each node in a group of nodes. The group is all nodes by default, but it can be limited to a subset using nodeSelector or the alpha node affinity/anti-affinity feature.
It seems that DaemonSet functionality could be achieved with replication controllers/replica sets plus the proper node affinity and anti-affinity rules.
Am I missing something? If that's correct, should DaemonSet be deprecated before it even leaves beta?
As you said, a DaemonSet guarantees one pod per node for a subset of the nodes in the cluster. If you use a ReplicaSet instead, you need to (a sketch follows at the end of this answer):
1. use node affinity/anti-affinity and/or a node selector to control the set of nodes to run on (similar to how DaemonSet does it);
2. use inter-pod anti-affinity to spread the pods across the nodes;
3. make sure the number of pods is greater than the number of nodes in the set, so that every node gets one pod scheduled.
However, ensuring (3) is a chore as the set of nodes can change over time. With DaemonSet, you don't have to worry about that, nor would you need to create extra, unschedulable pods. On top of that, DaemonSet does not rely on the scheduler to assign its pods, which makes it useful for cluster bootstrap (see How Daemon Pods are scheduled).
See the "Alternative to DaemonSet" section in the DaemonSet doc for more comparisons. DaemonSet is still the easiest way to run a per-node daemon without external tools.