Wait for the pod to be ready - Kubernetes

I'm trying to add a step to the pipeline that checks whether the pods are ready before the pipeline moves forward.
I've read on forums that the kubectl wait command can work; however, every time the pipeline is executed the pods are created again, so I can't put a fixed name in the kubectl wait command.
Does anyone have any tips, please?

If your pods are part of a deployment:
kubectl wait deployment <deployment-name> -n <deployment-namespace> --for condition=Available=True --timeout=120s
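If the deployment name is stable across pipeline runs (only the pod names change), a minimal sketch of such a pipeline step could look like the following; the myapp deployment and my-namespace names are placeholders for whatever your pipeline deploys:
#!/bin/sh
# Sketch of a pipeline step: wait on the Deployment (its name is stable) instead of
# on individual pods, whose names change on every run. Names below are placeholders.
set -e
DEPLOYMENT=myapp
NAMESPACE=my-namespace
kubectl wait deployment "$DEPLOYMENT" -n "$NAMESPACE" \
  --for=condition=Available=True --timeout=120s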

Related

Debugging a bad k8s deployment

I have a deployment that fails within one second, and the logs are destroyed as the deployment does a rollback.
Is there anything similar to logs -f that works before a deployment has started and waits until it starts?
Check previous logs with kubectl logs -p <pod-name> to spot application issues.
Also, check the exit code of your container with:
kubectl describe pod <pod-name> | grep "Exit Code"
Finally, if it is a scheduling problem, check out the event log of the corresponding ReplicaSet:
kubectl describe replicaset <name-of-deployment>
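If you need to catch the logs before the rollback removes the pod, one possible sketch (assuming your pods carry an app=myapp label; adjust the selector to your deployment) is to poll for the new pod and attach to its logs as soon as it exists:
#!/bin/sh
# Sketch: wait for a pod matching the label to appear, then follow its logs.
# The app=myapp selector is an assumption; use your deployment's labels.
POD=""
until [ -n "$POD" ]; do
  POD=$(kubectl get pods -l app=myapp \
    -o jsonpath='{.items[0].metadata.name}' 2>/dev/null || true)
  sleep 1
done
kubectl logs -f "$POD"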

How to wait for a rollout to complete in Kubernetes

As part of our CI pipeline, we have a deployment script for a number of web services that looks something like this:
kubectl apply -f deployment1.yml
kubectl apply -f deployment2.yml
The problem is that the next stage of the pipeline sometimes fails because the services are not ready by the time it starts.
I would like to add a line in the script that says something like:
Wait until all deployments are in the Ready state, or fail if more than 30 seconds has elapsed.
I thought that the following would work but unfortunately it seems that the timeout flag is not available:
kubectl rollout status deployment deployment1 --timeout=30s
kubectl rollout status deployment deployment2 --timeout=30s
I don't want to run "kubectl rollout status" without a timeout as that will cause our build to hang if there is a failure in one of the deployments.
Recent versions of kubectl do support a timeout option:
$ kubectl create -f ds-overlaytest.yml
daemonset.apps/overlaytest created
$ kubectl rollout status ds/overlaytest --timeout=10s
Waiting for daemon set spec update to be observed...
error: timed out waiting for the condition
$
Check out the kubectl reference on how to use this option:
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-status-em-
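For completeness, here is a minimal sketch of how the timeout fits into the deployment script from the question; kubectl rollout status exits non-zero when it times out, so with set -e the stage fails instead of hanging:
#!/bin/sh
set -e  # stop the stage on the first failing command
kubectl apply -f deployment1.yml
kubectl apply -f deployment2.yml
kubectl rollout status deployment/deployment1 --timeout=30s
kubectl rollout status deployment/deployment2 --timeout=30s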
I found a solution that works well. Set the property .spec.progressDeadlineSeconds to a value such as 30 (the default is 600, i.e. ten minutes), and kubectl rollout status deployment will wait at most this long before displaying an error message and exiting with a non-zero exit code:
$ kubectl rollout status deploy/nginx
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
error: deployment "nginx" exceeded its progress deadline
Documentation is here: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#failed-deployment
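As a sketch, you could shorten the deadline on an existing deployment with a strategic merge patch (the nginx name matches the output above; pick a value that suits your pipeline) and then let rollout status report the result:
# Shorten the progress deadline to 30 seconds, then wait on the rollout.
kubectl patch deployment nginx -p '{"spec":{"progressDeadlineSeconds":30}}'
kubectl rollout status deployment/nginx
echo "rollout status exit code: $?"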
You could possibly handle this with pure bash i.e.:
ATTEMPTS=0
ROLLOUT_STATUS_CMD="kubectl rollout status deployment/myapp -n namespace"
until $ROLLOUT_STATUS_CMD || [ $ATTEMPTS -eq 60 ]; do
  $ROLLOUT_STATUS_CMD
  ATTEMPTS=$((ATTEMPTS + 1))
  sleep 10
done
This approach is described in a blog post.
However, I do not believe there is a Kubernetes-native way to wait for a deployment's rollout; you could possibly achieve this with hooks in Helm, or with webhooks if you want to get really fancy.
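If the pipeline should fail outright once the attempts are exhausted, one possible variant of the loop above (same assumed deployment and namespace) is:
#!/bin/sh
# Sketch: retry rollout status up to 60 times, then fail the build explicitly.
ATTEMPTS=0
until kubectl rollout status deployment/myapp -n namespace; do
  ATTEMPTS=$((ATTEMPTS + 1))
  if [ "$ATTEMPTS" -ge 60 ]; then
    echo "rollout of deployment/myapp did not finish in time" >&2
    exit 1
  fi
  sleep 10
done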

Can you auto destroy a kubernetes pod deployment?

I can delete a deployment with the kubectl CLI, but is there a way to make my deployment auto-destroy itself once it has finished? For my situation, we are kicking off a long-running process in a Docker container on AWS EKS. When I check the status, it is 'Running', and then some time later the status is 'Completed'. So is there any way to get the Kubernetes pod to auto-destroy once it has finished?
kubectl run some_deployment_name --image=path_to_image
kubectl get pods
//the above command returns...
some_deployment_name1212-75bfdbb99b-vt622 0/1 Running 2 23s
//and then some time later...
some_deployment_name1212-75bfdbb99b-vt622 0/1 Completed 2 15m
Once it is complete, I would like for it to be destroyed, without me having to call another command.
So the question is really about running Jobs rather than Deployments: the Kubernetes Deployment abstraction creates a ReplicaSet that keeps pods running, whereas what you describe is a run-to-completion workload, i.e. a Kubernetes Job.
A Job is created with kubectl run when you specify the --restart=OnFailure option. These Jobs are not cleaned up by the cluster unless you delete them manually with kubectl delete job <job-name>. More info here.
If you are using Kubernetes 1.12 or later, a new Job spec field was introduced: ttlSecondsAfterFinished. You can also use that to clean up your Jobs. Another, more time-consuming option would be to write your own Kubernetes controller that cleans up regular Jobs.
A CronJob is created if you specify both the --restart=OnFailure and --schedule="<cron schedule>" options. These pods get deleted automatically because they run on a regular schedule.
More info on kubectl run here.
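Building on the ttlSecondsAfterFinished mention above, a minimal sketch of such a Job manifest (the name and image are placeholders, and older clusters may need the TTLAfterFinished feature gate enabled) could be applied via a heredoc:
# Sketch: a Job that Kubernetes deletes 100 seconds after it finishes.
kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: long-running-process
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
      - name: worker
        image: path_to_image
      restartPolicy: OnFailure
EOF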

Right way to update deployments on Kubernetes

Currently, I'm updating the version of an image to be deployed using the set image command:
$ kubectl set image deployments myapp myapp=caarlos0/myapp:v2
And then I watch the changes with rollout status:
$ kubectl rollout status deployments myapp
The problems I found while doing it this way are:
sometimes, it seems that a deployment is not triggered at all, and when I call rollout status, I get errors like this:
$ kubectl rollout status deployments myapp
Waiting for deployment spec update to be observed...
error: timed out waiting for the condition
The rollout history command shows the CHANGE-CAUSE as <none>, and I can't find a way of making it show anything useful there.
So, am I doing something wrong (or not in the best way)? How can I improve this workflow?
You're doing the right thing. Within the Updating a deployment documentation you'll find this:
Note: a Deployment’s rollout is triggered if and only if the Deployment’s pod template (i.e. .spec.template) is changed, e.g. updating labels or container images of the template. Other updates, such as scaling the Deployment, will not trigger a rollout.
So running $ kubectl set image deployments/app <image> will only trigger a rollout if <image> is not already configured for your containers.
The change cause can be used to record the command which was used to trigger the rollout by appending the --record flag to your commands (see Checking rollout history).
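Putting the two together, a short sketch of the workflow with a recorded change cause (the image tag is just the example from the question) might be:
# Update the image, record the command as the change cause, then watch and inspect the rollout.
kubectl set image deployments/myapp myapp=caarlos0/myapp:v2 --record
kubectl rollout status deployments/myapp
kubectl rollout history deployments/myapp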
Deployments are reconciled by the Deployment controller, a control-plane component, so make sure kube-controller-manager is running; if the controller-manager is not running, deployments will not roll out.
Once the controller is up and running again, the rollout will start.

How to list Kubernetes recently deleted pods?

Is there a way to get some details about a Kubernetes pod that was deleted (stopped, replaced by a new version)?
I am investigating a bug. I have logs with my pod name, but that pod does not exist anymore; it was replaced by another one (with a different configuration). The new pod resides in the same namespace, replication controller and service as the old one.
Commands like
kubectl get pods
kubectl get pod <pod-name>
work only with current pods (live or stopped).
How could I get more details about old pods? I would like to see:
when they were created
which environment variables they had when created
why and when they were stopped
As of today, kubectl get pods -a is deprecated, and as a result you cannot get deleted pods.
What you can do though, is to get a list of recently deleted pod names - up to 1 hour in the past unless you changed the ttl for kubernetes events - by running:
kubectl get event -o custom-columns=NAME:.metadata.name | cut -d "." -f1
You can then investigate further issues within your logging pipeline if you have one in place.
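If you already know the old pod's name from your logs, you can narrow the events down with a field selector while they are still within the event TTL (a sketch; substitute your pod name and namespace):
kubectl get events -n <namespace> \
  --field-selector involvedObject.name=<pod-name> \
  --sort-by=.metadata.creationTimestamp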
As far as I know you cannot get the Pod details once the Pod is deleted. May I know what the use case is?
Example:
if a Pod is created using kubectl run busybox-test-pod-status --image=busybox --restart=Never -- /bin/false
you will have a Pod with status terminated:error
if a Pod is created using kubectl run busybox-test-pod-status --image=busybox --restart=Never -- /bin/true
you will have a Pod with status terminated:Completed
if a container in a Pod restarts, the Pod stays alive and you can get the logs of the previous container (only the immediately previous one) using
kubectl logs --container <container name> --previous=true <pod name>
if you are doing an upgrade of your app and you are creating Pods using Deployments, then when you update the Deployment (say with a new image), the old Pod will be terminated and a new Pod will be created. You can get the Pod details from the Deployment's YAML; if you want the details of the previous Pod, you have to look at the "spec" section of the previous Deployment's YAML.
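One possible way to inspect the pod template of an earlier revision without digging up old YAML files (a sketch; the revision number is only an example) is:
# List the deployment's revisions, then print the pod template of one revision.
kubectl rollout history deployment/<deployment-name>
kubectl rollout history deployment/<deployment-name> --revision=2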
You can try kubectl logs --previous to list the logs of a previously stopped pod
http://kubernetes.io/docs/user-guide/kubectl/kubectl_logs/
You may also want to check out these debugging tips
http://kubernetes.io/docs/user-guide/debugging-pods-and-replication-controllers/
There is a way to find out why pods were deleted and who deleted them.
The only way to find out anything is to increase the TTL for Kubernetes events beyond the default 1h and then search through the events:
kubectl get event -o custom-columns=NAME:.metadata.name | cut -d "." -f1
If your container has previously crashed, you can access the previous container’s crash log with:
kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
There is this flag:
-a, --show-all=false: When printing, show all resources (default hide terminated pods.)
But this may not help in all cases of old pods.
kubectl get pods -a
With this you will get the list of running pods as well as terminated pods, in case that is what you are searching for.
If you want to see all the previously deleted pods and you are trying to fetch the previous pods, start from the command line:
kubectl get pods
This gives you the details of all current pods; every service has one or more pods, and each pod has a unique IP address.
Here you can check the lifecycle of pods and the phases a pod goes through:
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle
And you can see the previous pod's logs with:
kubectl logs <pod-name> --previous