kubectl wait until pod is gone (Terminating)

I know I can use kubectl wait to check if a pod is Ready but is there an easy way to check whether the pod is gone or in Terminating state? I'm running some tests and I only want to continue when the pod (or the namespace for that matter) is completely gone.
Also a timeout option would come in handy.

It's actually part of the wait command.
kubectl wait --for=delete pod/busybox1 --timeout=60s
You can check kubectl wait --help to see this example and some more. For example:
--for='': The condition to wait on: [delete|condition=condition-name|jsonpath='{JSONPath expression}'=JSONPath Condition]. The default status value of condition-name is true, you can set false with condition=condition-name=false.
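The same delete condition also works for the namespace mentioned in the question (a quick sketch; test-ns is a placeholder name):
kubectl wait --for=delete namespace/test-ns --timeout=120s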

If you execute kubectl delete po <pod name>, the command will automatically wait until the pod is deleted. This is thanks to the finalizers feature, which keeps the resource (the Pod in this case) from being deleted until the dependent resources (the containers of the pod, for example) are cleaned up by the kubelet.
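If you don't want the delete call to block, you can skip its built-in wait and poll separately (a small sketch, reusing the busybox1 example from above):
kubectl delete pod/busybox1 --wait=false
kubectl wait --for=delete pod/busybox1 --timeout=60s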

Related

Kubernetes: limit number of retries

For some context, I'm creating an API in Python that creates K8s Jobs with user input in env variables.
Sometimes it happens that the selected image does not exist or has been deleted, Secrets do not exist, or a Volume isn't created. So the Job ends up in a CrashLoopBackOff or ImagePullBackOff state.
First, I'm wondering whether the resources are still allocated to the Job while it is in this state?
If yes, I don't want the Job to loop forever and lock resources on a Job that will never start.
I've set backoffLimit to 0, but that only applies when the Job detects a failed Pod and tries to relaunch another Pod to retry. In my case, I know that if a Pod fails for a Job, it's mostly due to OOM or code that fails because of the user input, and it will always fail. So retrying will always fail.
But it doesn't limit the number of tries while in CrashLoopBackOff or ImagePullBackOff. Is there a way to terminate or fail the Job in that case? I don't want to kill it, just free the resources and keep the events in (status.container.state.waiting.reason + status.container.state.waiting.message) or (status.container.state.terminated.reason + status.container.state.terminated.exit_code).
Could there be an option to set at creation time that limits the number of retries, so that I can free the resources but keep the Job around for its logs?
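For reference, those fields live under .status.containerStatuses and can be read with something like this (a sketch assuming a single container; the pod name is a placeholder):
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'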
I have tested your first question and YES, even if a pod is in CrashLoopBackOff state, the resources are still allocated to it! Here is my test: Are the Kubernetes requested resources by a pod still allocated to it when it is in CrashLoopBackOff state?
Thanks for your question!
Long story short, unfortunately there is no such option in Kubernetes.
However, you can do this manually by checking whether the pod is in CrashLoopBackOff and then freeing its resources, or simply deleting the pod itself.
The following script deletes any pod in the CrashLoopBackOff state from a specified namespace:
#!/bin/bash
# This script checks the given namespace and deletes pods in the 'CrashLoopBackOff' state
NAMESPACE="test"
# Collect the names of all pods currently in CrashLoopBackOff
delpods=$(sudo kubectl get pods -n ${NAMESPACE} |
  grep -i 'CrashLoopBackOff' |
  awk '{print $1}')
for i in ${delpods}; do
  sudo kubectl delete pod $i --force=true --wait=false \
    --grace-period=0 -n ${NAMESPACE}
done
Since we have passed the option --grace-period=0, the pod won't automatically restart again.
But if, after using this script or assigning it to a job, you notice that the pod keeps restarting and falling into the CrashLoopBackOff state again for some reason, there is a workaround for this: changing the restart policy of the pod (a sketch follows the quote below).
A PodSpec has a restartPolicy field with possible values Always, OnFailure, and Never. The default value is Always. restartPolicy applies to all Containers in the Pod. restartPolicy only refers to restarts of the Containers by the kubelet on the same node. Exited Containers that are restarted by the kubelet are restarted with an exponential back-off delay (10s, 20s, 40s …) capped at five minutes, and is reset after ten minutes of successful execution. As discussed in the Pods document, once bound to a node, a Pod will never be rebound to another node.
See more details in the documentation or from here.
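As a minimal sketch of that workaround for a standalone Pod (the name and image below are placeholders; note that restartPolicy cannot be changed on an existing Pod, so it has to be set when the Pod is created):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: my-app                # placeholder name
spec:
  restartPolicy: Never        # do not restart the container when it exits
  containers:
  - name: my-app
    image: my-registry/my-app:latest   # placeholder image
EOF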
And that is it! Happy hacking.
Regarding the first question, it is already answered by bguess here.

Display logs of an initContainer running inside GitHub Actions

I have a pod which embeds an initContainer named initdb. Is there a kubectl command which returns true if initdb has started, and false otherwise? I need it to display the logs of initdb in the GitHub Actions CI (kubectl logs <pod> -c initdb crashes if initdb has not started yet).
If you have a single init container in the Pod, you could do something like the following:
k get pod pod-name --output="jsonpath={.status.initContainerStatuses[0].ready}"
This will return true if the init container is in status Ready, but that only means the init container is ready; it could already be terminated (because it completed its execution) or still running. I'm not completely sure, but if an init container is ready, requesting its logs should work without errors.
You can use jsonpath to select specific sections of the Pod definition, exactly for the purpose of automating certain checks.
To see the full definition of your Pod, just use:
k get pod pod-name -oyaml
and select what you are interested in from there. If you want to wait until the init container has started or terminated, you can check its state section, which describes the current state in detail, and build a finer-grained check on what you expect to see (a rough sketch below).
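For example, a rough sketch of such a check (assuming the init container is the first one in the Pod and the pod is called my-pod):
# Poll until the first init container has left the 'waiting' state,
# i.e. it is either running or already terminated, then fetch its logs.
while true; do
  state=$(kubectl get pod my-pod -o jsonpath='{.status.initContainerStatuses[0].state}')
  if echo "$state" | grep -Eq 'running|terminated'; then break; fi
  sleep 2
done
kubectl logs my-pod -c initdb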

kubectl wait sometimes timed out unexpectedly

I just added kubectl wait --for=condition=ready pod -l app=appname --timeout=30s as the last step of a Bitbucket Pipeline, to report any deployment failure if the new pod is somehow producing an error.
I realized that the wait isn't really consistent. Sometimes it times out even though the new pod from the new image isn't producing any error and turns to the ready state.
I try to always change deployment.yaml or push a newer image every time to test this; the result is inconsistent.
BTW, I believe kubectl rollout status isn't suitable, I think because it just returns after the deployment is done, without waiting for the pod to be ready.
Note that there is not much difference if I change the timeout from 30s to 5m, since apply or rollout restart is quite instant.
kubectl version: 1.17
AWS EKS: latest 1.16
I'm placing this answer for better visibility; as noted in the comments, this indeed solves some problems with kubectl wait behavior.
I managed to replicate the issue and got some timeouts when my client version was older than the server version. You have to match your client version with the server version in order for kubectl wait to work properly.
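A quick way to compare the two:
kubectl version
The output shows both the client and the server version; the official skew policy is to keep kubectl within one minor version (older or newer) of the API server.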

How to delay creating the new pod in a deployment when deleting one pod?

If a deployment sets replicas to 2 and we delete one pod using the "kubectl delete pods" command, the old pod will be in "Terminating" status and a new pod will come up.
I would like the new pod to start being created only after the old pod has terminated successfully. How should I go about it?
You can use the wait command:
Wait for a specific condition on one or many resources.
The command takes multiple resources and waits until the specified condition is seen in the Status field of every given resource.
Alternatively, the command can wait for the given set of resources to be deleted by providing the "delete" keyword as the value to the --for flag.
Here is an example:
Wait for the pod "busybox1" to be deleted, with a timeout of 60s, after having issued the "delete" command:
kubectl delete pod/busybox1
kubectl wait --for=delete pod/busybox1 --timeout=60s
---EDIT---
Depending on your use case, the additional options would be to:
use StatefulSet with "Ordered, graceful deployment and scaling."
set replicas to 1, wait till the 2nd pod is terminated, and then set it back to 2
use additional deployment configuration like maxSurge and maxUnavailable (see the sketch after this list)
(less recommended) use --grace-period=0 with the --force parameter, but it may result in inconsistency or data loss and requires confirmation. More details here and here.
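As a rough sketch of the maxSurge/maxUnavailable option (my-deployment is a placeholder; with maxSurge set to 0 the Deployment cannot create a replacement pod before an old one is removed during a rollout):
kubectl patch deployment my-deployment -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":0,"maxUnavailable":1}}}}'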
Please let me know if that helped.

How to clear CrashLoopBackOff

When a Kubernetes pod goes into CrashLoopBackOff state and you have fixed the underlying issue, how do you force it to be rescheduled?
To apply a new configuration, a new pod should be created (the old one will be removed).
If your pod was created automatically by a Deployment or DaemonSet resource, this action will run automatically each time you update the resource's yaml.
It is not going to happen if your resource has spec.updateStrategy.type=OnDelete.
If the problem was connected with an error inside the docker image that you have since fixed, you should update the pods manually; you can use the rolling-update feature for this purpose. In case the new image has the same tag, you can just remove the broken pod (see below).
In case of node failure, the pod will be recreated on a new node after some time; the old pod will be removed after the full recovery of the broken node. Worth noting, this is not going to happen if your pod was created by a DaemonSet or StatefulSet.
Anyway, you can manually remove the crashed pod:
kubectl delete pod <pod_name>
Or all pods with CrashLoopBackOff state:
kubectl delete pod `kubectl get pods | awk '$3 == "CrashLoopBackOff" {print $1}'`
If you have a completely dead node, you can add the --grace-period=0 --force options to remove just the information about this pod from kubernetes.
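For example (the pod name is a placeholder):
kubectl delete pod <pod_name> --grace-period=0 --force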
Generally a fix requires you to change something about the configuration of the pod (the docker image, an environment variable, a command line flag, etc), in which case you should remove the old pod and start a new pod. If your pod is running under a replication controller (which it should be), then you can do a rolling update to the new version.
Five years later, unfortunately, this scenario still seems to be the case.
@kvaps' answer above suggested an alternative (rolling updates) that essentially updates (overwrites) the pod instead of deleting it; here is the current working link to rolling updates.
The alternative to deleting the pod directly was NOT to create a pod but instead to create a deployment, and then delete the deployment that contains the pod subject to deletion.
$ kubectl get deployments -A
$ kubectl delete -n <NAMESPACE> deployment <DEPLOYMENT>
# When on minikube or using docker for development + testing
$ docker system prune -a
The first command displays all deployments, alongside their respective namespaces. This helped me avoid the mistake of deleting deployments that share the same name (name collision) but live in two different namespaces.
The second command deletes the deployment located under the given namespace.
The last command helps when working in development mode; it essentially removes all unused images, which is not required but helps clean up and save some disk space.
Another great tip is to try to understand the reasons why a Pod is failing. The problem may lie completely somewhere else, and k8s does a good deal of documenting. For that, one of the following may help:
$ kubectl logs -f <POD NAME>
$ kubectl get events
Another reference here on StackOverflow:
https://stackoverflow.com/a/55647634/132610
For anyone interested, I wrote a simple helm chart and python script which watches the current namespace and deletes any pod that enters CrashLoopBackOff.
The chart is at https://github.com/timothyclarke/helm-charts/tree/master/charts/dr-abc.
This is a sticking plaster. Fixing the problem is always the best option. In my specific case, getting the historic apps into K8s so the development teams have a common place to work and can strangle the old applications with new ones is preferable to fixing all the bugs in the old apps. Having this in the namespace to keep the illusion that everything is running buys that time.
This command will delete all pods that are in any of the (CrashLoopBackOff, Init:CrashLoopBackOff, etc.) states. You can use grep -i <keyword> to match different states and then delete the pods that match the state. In your case it should be:
kubectl get pod -n <namespace> --no-headers | grep -i crash | awk '{print $1}' | while read line; do kubectl delete pod -n <namespace> $line; done