To wait for a certain pod to be completed, the command is:
kubectl wait --for=condition=Ready pod/pod-name
Similarly, I want to wait for any one pod in the StatefulSet to be ready. I tried the command below, which did not work:
kubectl wait --for=condition=Ready statefulset/statefulset-name
What should the command options look like?
I used the following and it works for me:
kubectl wait -l statefulset.kubernetes.io/pod-name=activemq-0 --for=condition=ready pod --timeout=-1s
kubectl rollout status --watch --timeout=600s statefulset/name-of-statefulset
from https://github.com/kubernetes/kubernetes/issues/79606#issuecomment-779779928
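If you want to wait for every replica of the StatefulSet rather than a single named pod, a label selector over the pod template usually works. A sketch, assuming your StatefulSet's pod template carries an `app=activemq` label (adjust to whatever labels your template actually sets):

```shell
# Wait until all pods matching the (assumed) app=activemq label are Ready
kubectl wait --for=condition=Ready pod -l app=activemq --timeout=300s
```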
I know how to delete a specific pod:
kubectl -n <namespace> delete pod <pod-name>
Is there a way to delete all the Terminated pods at once?
What does the Terminated pod status mean? If you wish to delete the finished pods of any Jobs in the namespace, you can remove them with a single command:
kubectl -n <namespace> delete pods --field-selector=status.phase==Succeeded
Another approach, available from Kubernetes 1.23 onwards, is to use the Job TTL controller feature:
spec:
  ttlSecondsAfterFinished: 100
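For completeness, a minimal sketch of a Job using this field (the Job name and image are made up for illustration):

```shell
# Hypothetical example: a Job that the TTL controller cleans up 100s after it finishes
cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup-demo
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "echo done"]
      restartPolicy: Never
EOF
```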
In your case, the Terminated status means your pods are in a failed state. To remove them, just change the field selector to status.phase==Failed (https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodStatus)
You can pipe two commands:
kubectl -n <namespace> get pods --field-selector=status.phase==Succeeded -o name | xargs kubectl -n <namespace> delete
I don't think kubectl get has an 'exec'-style option (like the find CLI tool has, for instance).
If the command fits your needs, you can always turn it into an alias or shell function.
I have a deployment that fails within one second, and the logs are destroyed as the deployment does a rollback.
Is there anything similar to logs -f that works before a deployment has started and waits until it starts?
Check the previous container's logs with kubectl logs -p <pod-name> to spot application issues.
Also, check the exit code of your container with:
kubectl describe pod <pod-name> | grep "Exit Code"
Finally, if it is a scheduling problem, check the event log of the corresponding ReplicaSet:
kubectl describe replicaset <name-of-replicaset>
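If you prefer to read the exit code directly rather than grepping the describe output, a jsonpath query should also work. A sketch, assuming a single container that has terminated at least once:

```shell
# Exit code of the last termination of the first container (empty if it never terminated)
kubectl get pod <pod-name> \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'
```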
I have a pod spec which runs a command like rm -rf /some/path
I create the pod using kubectl apply -f ...
Now I want to wait until the pod completes.
I can see that the pod is done; kubectl get pod/<mypod> shows STATUS Completed.
How do I wait for this condition?
I have looked at kubectl wait ..., but that doesn't seem to help me.
kubectl wait --for=condition=complete pod/<my-pod> seems to just block.
I haven't deleted the pod; it is still there in the Completed status.
The command that you use, kubectl wait --for=condition=complete pod/<my-pod>, will not work because a pod doesn't have such a condition. Pod conditions are as follows:
PodScheduled: the Pod has been scheduled to a node.
ContainersReady: all containers in the Pod are ready.
Initialized: all init containers have started successfully.
Ready: the Pod is able to serve requests and should be added to the load balancing pools of all matching Services.
The phase for a successfully completed pod is called Succeeded:
All containers in the Pod have terminated in success, and will not be restarted.
It would be better, however, to use kubectl wait with Jobs instead of bare Pods, and then execute kubectl wait --for=condition=complete job/myjob.
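A sketch of that Job-based approach, assuming a manifest job.yml that defines job/myjob (both names are placeholders):

```shell
kubectl apply -f job.yml   # hypothetical manifest defining job/myjob
# Block until the Job reports the Complete condition, or give up after 120s
kubectl wait --for=condition=complete job/myjob --timeout=120s
```

If switching to a Job isn't an option, newer kubectl releases (v1.23+) can also wait on a bare Pod's phase with a jsonpath condition, e.g. kubectl wait --for=jsonpath='{.status.phase}'=Succeeded pod/<my-pod>.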
I found my coredns pod throws this error: Readiness probe failed: Get http://172.30.224.7:8080/health: net/http: request canceled (Client.Timeout exceeded while awaiting headers). I deleted the pod using this command:
kubectl delete pod coredns-89764d78c-mbcbz -n kube-system
but the command keeps waiting with no response. How can I know the progress of the deletion? This is the output:
[root@ops001 ~]# kubectl delete pod coredns-89764d78c-mbcbz -n kube-system
pod "coredns-89764d78c-mbcbz" deleted
and the terminal hangs or stays blocked, yet when I use the Kubernetes dashboard in the browser the pod still exists. How can I force delete it, or fix it the right way?
You are deleting a pod that is monitored by a deployment controller. That's why, when you delete one of the pods, the controller creates another to keep the number of pods equal to the replica count. If you really want to delete coredns (not recommended), delete the deployment instead of the pods:
$ kubectl delete deployment coredns -n kube-system
Answering another part of your question:
but the command keep waiting and nothing response,how to know the
progress of deleting? this is output:
[root@ops001 ~]# kubectl delete pod coredns-89764d78c-mbcbz -n kube-system
pod "coredns-89764d78c-mbcbz" deleted
and the terminal blocked...
When you're deleting a Pod and want to see what's going on under the hood, you can additionally provide the -v flag and specify the desired verbosity level, e.g.:
kubectl delete pod coredns-89764d78c-mbcbz -n kube-system -v 8
If there is some issue with the deletion of a specific Pod, it should tell you the details.
I totally agree with @P Ekambaram's comment:
if coredns is not started. you need to check logs and find out why it
is not getting started – P Ekambaram
You can always delete the whole coredns Deployment and re-deploy it, but generally you shouldn't do that. Looking at the Pod logs:
kubectl logs coredns-89764d78c-mbcbz -n kube-system
should also tell you some details explaining why it doesn't work properly. I would say that deleting the whole coredns Deployment is a last-resort command.
As part of our CI pipeline, we have a deployment script for a number of web services that looks something like this:
kubectl apply -f deployment1.yml
kubectl apply -f deployment2.yml
The problem is that the next stage of the pipeline sometimes fails because the services are not ready by the time it starts.
I would like to add a line in the script that says something like:
Wait until all deployments are in the Ready state, or fail if more than 30 seconds has elapsed.
I thought that the following would work but unfortunately it seems that the timeout flag is not available:
kubectl rollout status deployment deployment1 --timeout=30s
kubectl rollout status deployment deployment2 --timeout=30s
I don't want to run "kubectl rollout status" without a timeout as that will cause our build to hang if there is a failure in one of the deployments.
Recent versions of kubectl do support a timeout option:
$ kubectl create -f ds-overlaytest.yml
daemonset.apps/overlaytest created
$ kubectl rollout status ds/overlaytest --timeout=10s
Waiting for daemon set spec update to be observed...
error: timed out waiting for the condition
$
Check out the kubectl reference on how to use this option:
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-status-em-
I found a solution that works well. Set the property .spec.progressDeadlineSeconds to a value such as 30 (the default is 600, i.e. ten minutes), and kubectl rollout status deployment will wait for this amount of time before displaying an error message and exiting with a non-zero exit code:
$ kubectl rollout status deploy/nginx
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
error: deployment "nginx" exceeded its progress deadline
Documentation is here: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#failed-deployment
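A sketch of setting that deadline without editing the manifest, via kubectl patch (the deployment name nginx is taken from the example above):

```shell
# Lower the progress deadline to 30s so a stuck rollout fails fast
kubectl patch deployment nginx \
  -p '{"spec":{"progressDeadlineSeconds":30}}'
# Exits non-zero once the deadline is exceeded
kubectl rollout status deployment nginx
```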
You could possibly handle this with pure bash, i.e.:
ATTEMPTS=0
ROLLOUT_STATUS_CMD="kubectl rollout status deployment/myapp -n namespace"
until $ROLLOUT_STATUS_CMD || [ $ATTEMPTS -eq 60 ]; do
  ATTEMPTS=$((ATTEMPTS + 1))
  sleep 10
done
This approach is described in this blog.
However, I do not believe there is a Kubernetes-native way to wait for a deployment's rollout; you could possibly achieve this with hooks in Helm, or with webhooks if you want to get really fancy.
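As another option, recent kubectl versions can wait directly on a Deployment's Available condition, which may be enough for a CI gate. A sketch using the deployment names from the question:

```shell
# Fail the pipeline if either deployment isn't Available within 30s
kubectl wait --for=condition=available --timeout=30s deployment/deployment1
kubectl wait --for=condition=available --timeout=30s deployment/deployment2
```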