I have a pod spec which runs a command like rm -rf /some/path
I create the pod using kubectl apply -f ...
Now I want to wait until the pod completes.
I can see that the pod is done; kubectl get pod/<mypod> shows STATUS Completed.
How do I wait for this condition?
I have looked at kubectl wait ... but that doesn't seem to help me.
kubectl wait --for=condition=complete pod/<my-pod> seems to just block.
I haven't deleted the pod; it is still there in the Completed status.
The command that you use, kubectl wait --for=condition=complete pod/<my-pod>, will not work because a Pod doesn't have such a condition. Pod Conditions are as follows:
PodScheduled: the Pod has been scheduled to a node.
ContainersReady: all containers in the Pod are ready.
Initialized: all init containers have started successfully.
Ready: the Pod is able to serve requests and should be added to the load balancing pools of all matching Services.
The phase for a successfully completed pod is called Succeeded:
All containers in the Pod have terminated in success, and will not be restarted.
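If you do want to stick with a bare Pod, newer kubectl versions (roughly v1.23 and later) also let you wait on an arbitrary field rather than a condition; a minimal sketch waiting on the pod's phase:
kubectl wait --for=jsonpath='{.status.phase}'=Succeeded pod/<my-pod> --timeout=120s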
It would be better, however, to use kubectl wait with Jobs instead of bare Pods, and then execute kubectl wait --for=condition=complete job/myjob.
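For illustration, a minimal sketch of such a Job (the name myjob, the busybox image and the container name are placeholder assumptions; the command is taken from the question):
cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: cleanup
        image: busybox
        command: ["rm", "-rf", "/some/path"]
EOF
kubectl wait --for=condition=complete job/myjob --timeout=120s
The wait command then returns as soon as the Job reports the Complete condition.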
Related
I have a deployment that fails within one second, and the logs are destroyed as the deployment does a rollback.
Is there anything similar to logs -f that works before a deployment has started and waits until it starts?
Check previous logs with kubectl logs -p <pod-name> to spot application issues.
Also, check the exit code of your container with:
kubectl describe pod <pod-name> | grep "Exit Code"
Finally, if it is a scheduling problem, check out the event log of the corresponding ReplicaSet:
kubectl describe replicaset <name-of-replicaset>
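To actually catch the logs of a pod that dies almost immediately, one rough sketch (assuming your pods carry a label such as app=myapp, which is a placeholder for your own selector) is to poll until a pod exists and then follow it:
until kubectl get pod -l app=myapp -o name | grep -q .; do sleep 1; done
kubectl logs -f "$(kubectl get pod -l app=myapp -o name | head -n 1)"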
To wait for a certain pod to be completed, the command is:
kubectl wait --for=condition=Ready pod/pod-name
Similarly, I want to wait for any one pod in the StatefulSet to be ready. I tried the command below, which did not work:
kubectl wait --for=condition=Ready statefulset/statefulset-name
What should the command options look like?
I used the following and it works for me:
kubectl wait -l statefulset.kubernetes.io/pod-name=activemq-0 --for=condition=ready pod --timeout=-1s
kubectl rollout status --watch --timeout=600s statefulset/name-of-statefulset
from https://github.com/kubernetes/kubernetes/issues/79606#issuecomment-779779928
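Another rough option, since StatefulSet pods get predictable ordinal names, is to wait on the first pod directly (web is a placeholder for your StatefulSet's name):
kubectl wait --for=condition=Ready pod/web-0 --timeout=300s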
Kubernetes version 1.12.3. Does kubectl drain remove the pod first or create the pod first?
You can use kubectl drain to safely evict all of your pods from a node before you perform maintenance on the node (e.g. kernel upgrade, hardware maintenance, etc.)
When kubectl drain returns successfully, it means it has removed all the pods from that node and it is safe to bring that node down (physically shut it off, or start maintenance).
Now if you turn the machine on again and want to schedule pods on that node, you need to run:
kubectl uncordon <node name>
So kubectl drain removes pods from the node, and no new pods are scheduled on it until you uncordon that node.
kubectl drain will ignore certain system pods on the node that cannot be killed.
The given node will be marked unschedulable to prevent new pods from arriving.
When you are ready to put the node back into service, use kubectl uncordon, which will make the node schedulable again.
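Put together, a typical maintenance cycle might look like this sketch (node-1 is a placeholder node name; --ignore-daemonsets is usually required because DaemonSet pods cannot be evicted):
kubectl drain node-1 --ignore-daemonsets
# ... perform the maintenance (kernel upgrade, reboot, hardware work) ...
kubectl uncordon node-1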
For more details, use the command:
kubectl drain --help
I hope this gives you the information you are looking for.
If there is an update to the Docker image, the rolling update strategy will update all the pods in a DaemonSet one by one. Similarly, is it possible to restart the pods gracefully without any change to the DaemonSet config, or can it be triggered explicitly?
Currently, I am doing it manually by
kubectl delete pod <pod-name>
one by one, until each pod gets into the Running state.
You could try and use Node maintenance operations:
Use kubectl drain to gracefully terminate all pods on the node while marking the node as unschedulable (with --ignore-daemonsets, from Konstantin Vustin's comment):
kubectl drain $NODENAME --ignore-daemonsets
This keeps new pods from landing on the node while you are trying to get them off.
Then:
Make the node schedulable again:
kubectl uncordon $NODENAME
To trigger a restart of all pods managed by DaemonSets in namespace [namespace_name]:
kubectl rollout restart daemonset -n [namespace_name]
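If you want to restart a single DaemonSet and wait for the restart to finish, a minimal sketch (node-exporter and monitoring are placeholder names):
kubectl rollout restart daemonset/node-exporter -n monitoring
kubectl rollout status daemonset/node-exporter -n monitoring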
I can delete a deployment with the kubectl CLI, but is there a way to make my deployment auto-destroy itself once it has finished? For my situation, we are kicking off a long-running process in a Docker container on AWS EKS. When I check the status, it is 'Running', and then some time later the status is 'Completed'. So is there any way to get the Kubernetes pod to auto-destroy once it has finished?
kubectl run some_deployment_name --image=path_to_image
kubectl get pods
//the above command returns...
some_deployment_name1212-75bfdbb99b-vt622 0/1 Running 2 23s
//and then some time later...
some_deployment_name1212-75bfdbb99b-vt622 0/1 Completed 2 15m
Once it is complete, I would like for it to be destroyed, without me having to call another command.
So the question is really about running Jobs rather than Deployments: not the Kubernetes Deployments abstraction that creates a ReplicaSet, but Kubernetes Jobs.
A Job is created with kubectl run when you specify the --restart=OnFailure option. These Jobs are not cleaned up by the cluster unless you delete them manually with kubectl delete job <job-name>. More info here.
If you are using Kubernetes 1.12 or later, a new Job spec field was introduced: ttlSecondsAfterFinished. You can also use that to clean up your Jobs. Another, more time-consuming option would be to write your own Kubernetes controller that cleans up regular Jobs.
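A minimal sketch of a Job using ttlSecondsAfterFinished (the name one-shot, the busybox image and the 60-second TTL are placeholder assumptions; the TTL controller deletes the Job and its pods that long after it finishes):
cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: one-shot
spec:
  ttlSecondsAfterFinished: 60
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "echo done"]
EOF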
A CronJob is created if you specify both the --restart=OnFailure and --schedule options. These pods get deleted automatically because they run on a regular schedule.
More info on kubectl run here.