Kubernetes Pods Not Being Evicted

I have multiple pods on Kubernetes (v1.23.5) that are not being evicted and rescheduled in case of node failure.
According to the Kubernetes documentation, this process should begin after 300s:
Kubernetes automatically adds a Toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300 unless you, or a controller, set those tolerations explicitly.
These automatically-added tolerations mean that Pods remain bound to Nodes for 5 minutes after detecting one of these problems.
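For reference, the automatically added tolerations look like this in the Pod spec:
tolerations:
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300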
Unfortunately, the pods get stuck in the Terminating status and are not evicted. However, in one test, a pod without any PVC attached was evicted and started running on another node.
I'm trying to understand how I can make the other pods get evicted after the default 300s.
I don't know why this doesn't happen automatically, and why I must manually remove the pod stuck in the Terminating state to make things work properly.
Update
I have seen the kvaps/kube-fencing project. It provides a fencing procedure that runs in case of a node failure, but I couldn't make it solve my problem. I don't know whether that is because of my lack of comprehension of the project, or whether it is only meant to handle the failed node itself and not to evict the pods stuck in the Terminating state.

There are two ways to handle this problem.
The first one is to use kvaps/kube-fencing. You need to configure a PodTemplate in which you can set the node to be deleted from the cluster when it becomes NotReady, or to be flushed. If you have volumes attached to the pod, the pods will remain in the ContainerCreating state.
These are the corresponding annotations in the PodTemplate:
annotations:
  fencing/mode: 'delete'
or
annotations:
  fencing/mode: 'flush'
The second way is to use the Kubernetes Non-Graceful Node Shutdown feature. This is not available in Kubernetes v1.23.5 (it was introduced as an alpha feature in v1.24), so you have to upgrade.
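With that feature enabled, recovery is triggered by manually tainting the failed node as out-of-service, after which the control plane force-deletes its pods and detaches their volumes. A minimal sketch (assuming the failed node is named node-1):
kubectl taint nodes node-1 node.kubernetes.io/out-of-service=nodeshutdown:NoExecute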

Related

Kubernetes does not evict a pod even if it is in a "failed" state, e.g. CrashLoopBackOff

I have a pod with a PodDisruptionBudget that says at least one replica has to be running. While this generally works very well, it leads to a peculiar problem.
Sometimes this pod is in a failed state (due to some development work), so I end up with two pods, scheduled on two different nodes, both in a CrashLoopBackOff state.
Now if I want to run a drain or a k8s version upgrade, the pod can never be evicted, since Kubernetes knows that at least one should be running, which will never happen.
So k8s does not evict a pod due to the Pod Disruption Budget even if the pod is not running. Is there a way to do something about this? I think ideally k8s should treat failed pods as candidates for eviction regardless of the budget (as deleting a failing pod cannot "break" anything anyway).
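For context, a minimal sketch of the kind of PodDisruptionBudget described above (names are illustrative):
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb        # illustrative name
spec:
  minAvailable: 1          # at least one matching pod must stay running
  selector:
    matchLabels:
      app: example         # illustrative label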
...if I want to run a drain or a k8s version upgrade, the pod can never be evicted, since Kubernetes knows that at least one should be running...
kubectl drain --disable-eviction <node> will delete pods that are protected by a PDB, bypassing the eviction API. Since you are fully aware of the downtime, you can also first delete the PDB in question before draining the node.
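A sketch of that second approach, with placeholder names:
kubectl get pdb -n my-namespace                # find the PDB protecting the pods
kubectl delete pdb my-pdb -n my-namespace      # remove it temporarily
kubectl drain my-node --ignore-daemonsets      # the drain can now evict the pods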
I hit this issue too during a k8s upgrade. FYI, as mentioned in the other answer, kubectl drain --disable-eviction <node> may cause service downtime, and deleting pods does not always work, because the deleted pods may be immediately recreated by the Deployment managing them. Also, even if the pods are deleted successfully, it may cause service downtime depending on the PodDisruptionBudget.
Instead, I increased the number of replicas in the Deployment so that PodDisruptionBudget.minAvailable or PodDisruptionBudget.maxUnavailable could be honored, and was able to successfully upgrade k8s while honoring the PodDisruptionBudget.
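A sketch of that workaround (placeholder names): scale the Deployment up so that evicting one pod still leaves enough replicas to satisfy the budget, then drain.
kubectl scale deployment my-deployment -n my-namespace --replicas=3
kubectl drain my-node --ignore-daemonsets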

How to automatically force delete pods stuck in 'Terminating' after node failure?

I have a deployment that deploys a single pod with a persistent volume claim. If I switch off the node it is running on, after a while k8s terminates the pod and tries to spin it up elsewhere. However, the new pod cannot attach the volume (Multi-Attach error for volume "pvc-...").
I can manually delete the old 'Terminating' pod with kubectl delete pod <PODNAME> --grace-period=0 --force and then things recover.
Is there a way to get Kubernetes to force delete the 'Terminating' pods after a timeout or something? Tx.
According to the docs:
A Pod is not deleted automatically when a node is unreachable. The Pods running on an unreachable Node enter the 'Terminating' or 'Unknown' state after a timeout. Pods may also enter these states when the user attempts graceful deletion of a Pod on an unreachable Node. The only ways in which a Pod in such a state can be removed from the apiserver are as follows:
The Node object is deleted (either by you, or by the Node Controller).
The kubelet on the unresponsive Node starts responding, kills the Pod and removes the entry from the apiserver.
Force deletion of the Pod by the user.
So I assume you are neither deleting nor draining the node that is being shut down.
In general, I'd advise making sure any broken nodes are deleted from the node list; that should cause the Terminating pods to be deleted by the controller manager.
Node deletion normally happens automatically, at least on Kubernetes clusters running on the main cloud providers, but if that's not happening for you then you need a way to remove nodes that are not healthy.
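A sketch of the manual version of that cleanup (placeholder node name):
kubectl get nodes                      # identify the NotReady node
kubectl delete node my-failed-node     # its Terminating pods are then removed from the apiserver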
Use Recreate in .spec.strategy.type of your Deployment. This tells Kubernetes to delete the old pods before creating new ones.
Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
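In the Deployment manifest that looks like this (only the relevant part shown):
spec:
  strategy:
    type: Recreate    # terminate existing pods before new ones are created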

What's the difference between pod deletion and pod eviction?

From PodInterface, the two operations Delete and Evict seem to have the same effect: deleting the old Pod and creating a new Pod.
If the two operations have the same effect, why do we need two APIs to delete a Pod and create a new one?
Deletion of a pod is done by an end user and is a normal activity. It means the pod will be deleted from etcd and the Kubernetes control plane. Unless there is a higher-level controller such as a Deployment, DaemonSet, or StatefulSet, the pod will not be created again and scheduled onto a worker node.
Eviction happens when a pod's resource consumption exceeds the limit and the kubelet triggers eviction of the pod, or when a user performs kubectl drain or manually invokes the Eviction API. It's generally not a normal activity. Sometimes evicted pods are not automatically deleted from etcd and the Kubernetes control plane. Unless there is a higher-level controller such as a Deployment, DaemonSet, or StatefulSet, the evicted pod will not be created again and scheduled onto a worker node.
It's preferable to use delete instead of evict because evict comes with more risk: in some cases eviction may leave an application in a broken state, for example if the replacement pod created by the application's controller (Deployment, etc.) does not become ready, or if the last pod evicted has a very long termination grace period.
Pod evict operation (assuming you're referring to the Eviction API) is a sort of smarter delete operation, which respects PodDisruptionBudget and thus it does respect the high-availability requirements of your application (as long as PodDisruptionBudget is configured correctly). Normally you don't manually evict a pod, however the pod eviction can be initiated as a part of a node drain operation (which can be manually invoked by kubectl drain command or automatically by the Cluster Autoscaler component).
Pod delete operation, on the other hand, doesn't respect PodDisruptionBudget and thus can affect the availability of your application. As opposed to the evict operation, this operation is normally invoked manually (e.g. by the kubectl delete command).
Also, besides the Eviction API, pods can be evicted by the kubelet due to node-pressure conditions, in which case PodDisruptionBudget is not respected by Kubernetes (see Node-pressure Eviction).
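For illustration, a minimal sketch of calling the Eviction API directly (placeholder pod and namespace), which is essentially what kubectl drain does for each pod. First an eviction.json:
{
  "apiVersion": "policy/v1",
  "kind": "Eviction",
  "metadata": {
    "name": "my-pod",
    "namespace": "default"
  }
}
Then POST it to the pod's eviction subresource, e.g. via kubectl proxy:
kubectl proxy &
curl -X POST -H 'Content-Type: application/json' \
  -d @eviction.json \
  http://127.0.0.1:8001/api/v1/namespaces/default/pods/my-pod/eviction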

Does a Kubernetes POD with restart policy always have to be under the auspice of a controller to work?

If I create a Pod manifest (pod-definition.yaml) and set restartPolicy: Always, does that Pod also need to be associated with a controller (i.e., a ReplicaSet or Deployment)? The end goal here is to auto-start the container in the Pod should it die. Without the Pod being associated with a controller, will that container automatically restart? What happens if the Pod has only one container?
The documentation is not clear here, but it led me to believe that the Pod must be under a controller for this to work, i.e., if you implicitly create a K8s object and specify a restart policy of Never you'll get a Pod, and if you specify Always (the default) you'll get a Deployment.
A Pod without a controller (Deployment, replication controller, etc.) and only a restartPolicy will not be restarted/rescheduled if the node (to be exact, the kubelet on that node) where it is running dies, is drained, is rebooted, or the pod is evicted from the node for some other reason. If the node is in a good state and the pod crashes for some reason, it will be restarted on the same node without the need for a controller.
The reason is that the pod restartPolicy is handled by the kubelet, i.e. the pod is restarted by the kubelet of the node. Now if the node dies, the kubelet is also dead and cannot restart the pod. Hence you need a controller, which will restart it on another node.
From the docs:
restartPolicy only refers to restarts of the Containers by the kubelet on the same node
In short, if you want pods to survive a node failure or a kubelet failure, you should use a higher-level controller.
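For reference, a minimal sketch of such a standalone Pod (illustrative names); with restartPolicy: Always the kubelet restarts the container after a crash, but nothing reschedules the Pod if the node itself dies:
apiVersion: v1
kind: Pod
metadata:
  name: standalone-pod       # illustrative name
spec:
  restartPolicy: Always      # kubelet restarts crashed containers on the same node only
  containers:
  - name: app
    image: nginx             # illustrative image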

Would Kubernetes bring up the down-ed Pod if only Pod definition file exists?

I have only a Pod definition file. Kubernetes will bring up the pod. What happens if it goes down? Would Kubernetes bring it back up automatically? Or, if we want a certain number of pods up at all times, must we take the help of a ReplicationController (or ReplicaSet in newer versions)?
Your question is not entirely clear, but yes, if you have deployed the pod through a Deployment or ReplicaSet, then Kubernetes will create another one if you or someone else deletes that pod.
If you have just the pod without any controller like a ReplicaSet, then it is gone forever, as there is no one to take care of it.
In case the app crashes inside the pod, then:
A CrashLoopBackOff means that you have a pod starting, crashing, starting again, and then crashing again.
A PodSpec has a restartPolicy field with possible values Always, OnFailure, and Never which applies to all containers in a pod. The default value is Always and the restartPolicy only refers to restarts of the containers by the kubelet on the same node (so the restart count will reset if the pod is rescheduled in a different node). Failed containers that are restarted by the kubelet are restarted with an exponential back-off delay (10s, 20s, 40s …) capped at five minutes, and is reset after ten minutes of successful execution.
https://sysdig.com/blog/debug-kubernetes-crashloopbackoff/
A pod's restartPolicy only refers to restarts of the containers by the kubelet on the same node. If there is no ReplicationController or Deployment, then if a node goes down Kubernetes will not reschedule or restart the pods of that node on any other node. This is the reason pods are not recommended to be used directly in production.
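For illustration, a minimal Deployment sketch (illustrative names) in which the ReplicaSet controller will recreate the pod on another node if the original node goes down:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment    # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: app
        image: nginx          # illustrative image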