Can you auto destroy a kubernetes pod deployment? - kubernetes

I can delete a deployment with the kubectl CLI, but is there a way to make my deployment auto-destroy itself once it has finished? For my situation, we are kicking off a long-running process in a Docker container on AWS EKS. When I check the status, it is 'Running', and then some time later the status is 'Completed'. So is there any way to get the Kubernetes pod to auto-destroy once it has finished?
kubectl run some_deployment_name --image=path_to_image
kubectl get pods
//the above command returns...
some_deployment_name1212-75bfdbb99b-vt622 0/1 Running 2 23s
//and then some time later...
some_deployment_name1212-75bfdbb99b-vt622 0/1 Completed 2 15m
Once it is complete, I would like for it to be destroyed, without me having to call another command.

So the question is really about running Jobs rather than Deployments: not the Kubernetes Deployment abstraction that creates a ReplicaSet, but Kubernetes Jobs.
A Job is created with kubectl run when you specify the --restart=OnFailure option. These Jobs are not cleaned up by the cluster unless you delete them manually with kubectl delete job <job-name>.
If you are using Kubernetes 1.12 or later, a new Job spec field was introduced: ttlSecondsAfterFinished. You can use it to have finished Jobs cleaned up automatically. Another, more time-consuming option would be to write your own Kubernetes controller that cleans up regular Jobs.
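As a rough sketch (the name is a placeholder and the image is the one from the question; on clusters around 1.12 the TTLAfterFinished feature gate may still need to be enabled), a Job using ttlSecondsAfterFinished could look like this:

apiVersion: batch/v1
kind: Job
metadata:
  name: long-running-task          # placeholder name
spec:
  ttlSecondsAfterFinished: 100     # the Job and its pods are removed ~100s after it finishes
  template:
    spec:
      containers:
      - name: task
        image: path_to_image       # the image from the question
      restartPolicy: Never         # Job pods must use Never or OnFailure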
A CronJob is created if you specify both the --restart=OnFailure and --schedule="" options. These pods get deleted automatically because they run on a regular schedule.

Related

Does "kubectl rollout restart deploy" cause downtime?

I'm trying to get all the deployments of a namespace to be restarted for implementation reasons.
I'm using "kubectl rollout restart deploy -n <namespace>" and it works perfectly, but I'm not sure whether that command causes downtime or if it works like a rolling update, applying the restart one by one and keeping my services up.
Does anyone know?
In the documentation I can only find this:
Operation: rollout
Syntax: kubectl rollout SUBCOMMAND [options]
Description: Manage the rollout of a resource. Valid resource types include: deployments, daemonsets and statefulsets.
But I can't find details about the specific "rollout restart deploy".
I need to make sure it doesn't cause downtime. Right now it is very hard to tell, because the restart process is very quick.
Update: I know that for one specific deployment (kubectl rollout restart deployment/name) it works as expected and doesn't cause downtime, but I need to apply it to the whole namespace (without specifying a deployment), and that's the case I'm not sure about.
kubectl rollout restart deploy -n namespace1 will restart all deployments in the specified namespace with zero downtime.
The restart command works as follows:
After the restart it will create new pods for each deployment
Once new pods are up (running and ready) it will terminate old pods
Add readiness probes to your deployments to configure initial delays.
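For example, a minimal readiness probe sketch to add under spec.template.spec.containers[*] in the Deployment (the path, port and delays are assumptions you would adapt to your service):

readinessProbe:
  httpGet:
    path: /healthz            # assumed health endpoint
    port: 8080                # assumed container port
  initialDelaySeconds: 10     # wait before the first check
  periodSeconds: 5

The rollout only terminates an old pod once the new pod's readiness probe passes.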
@pcsutar's answer is almost correct. kubectl rollout restart $resourcetype $resourcename restarts your deployment, daemonset or stateful set according to its update strategy, so if it is set to RollingUpdate it will behave exactly as described in the answer above:
After the restart it will create new pods for each deployment
Once new pods are up (running and ready) it will terminate old pods
Add readiness probes to your deployments to configure initial delays.
However, if the strategy is, for example, type: Recreate, all the currently running pods belonging to the deployment will be terminated before new pods are spun up!
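You can check which strategy a deployment uses with kubectl get deploy <name> -o jsonpath='{.spec.strategy.type}'. As a sketch, the relevant part of a Deployment spec (the values are illustrative, not required):

spec:
  strategy:
    type: RollingUpdate      # the default: old pods are replaced gradually
    rollingUpdate:
      maxUnavailable: 0      # never remove an old pod before its replacement is Ready
      maxSurge: 1            # allow one extra pod during the rollout

With type: Recreate instead, every existing pod is terminated before the new ones start, so expect downtime.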

Airflow Kubernetes Executor pods go into "NotReady" state instead of being deleted

I installed Airflow in Kubernetes using the repo https://airflow-helm.github.io/charts and the airflow-stable/airflow chart at version 8.1.3, so I have Airflow v2.0.1 installed. I have it set up with an external Postgres database and the Kubernetes executor.
What I have noticed is that when Airflow-related pods are done, they go into a "NotReady" status. This happens with the update-db pod at startup and also with pods launched by the Kubernetes executor. When I go into Airflow and look at the tasks, some are successful and some are failures, but either way the related pods are in "NotReady" status. In the values file I set the options below, thinking they would delete the pods when they are done. I've gone through the logs and made sure one of the DAGs ran as intended and the related task was successful, and of course the related pod still went into "NotReady" status once it was done.
The values below are located in Values.airflow.config.
AIRFLOW__KUBERNETES__DELETE_WORKER_PODS: "true"
AIRFLOW__KUBERNETES__DELETE_WORKER_PODS_ON_FAILURE: "true"
So I'm not really sure what I'm missing; has anyone seen this behavior? It's also really strange that the upgrade-db pod is doing this too.
Screenshot of kubectl get pods for the namespace airflow is deployed in with the "NotReady" pods
Figured it out. The K8s namespace had auto-injection of a linkerd sidecar container for each pod. You would have to either use the Celery executor or set up some sort of Kubernetes job to clean up completed pods and jobs that don't get cleaned up because the linkerd container keeps running forever in those pods.
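If someone hits the same problem, another option (assuming the sidecar really is the linkerd proxy) is to opt the worker pods out of injection with linkerd's standard annotation; where exactly it goes in the Helm chart values depends on the chart version:

metadata:
  annotations:
    linkerd.io/inject: disabled    # skip sidecar injection for these pods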

kubectl wait for a pod to complete

I have a pod spec which runs a command like rm -rf /some/path
I create the pod using kubectl apply -f ...
Now I want to wait until the pod completes.
I can see that the pod is done; kubectl get pod/<mypod> shows STATUS Completed.
How do I wait for this condition?
I have looked at kubectl wait ... but that doesn't seem to help me.
kubectl wait --for=condition=complete pod/<my-pod> seems to just block.
I haven't deleted the pod; it is still there in the Completed status.
The command that you use, kubectl wait --for=condition=complete pod/<my-pod>, will not work because a pod doesn't have such a condition. Pod Conditions are as follows:
PodScheduled: the Pod has been scheduled to a node.
ContainersReady: all containers in the Pod are ready.
Initialized: all init containers have started successfully.
Ready: the Pod is able to serve requests and should be added to the load balancing pools of all matching Services.
The phase for a successfully completed pod is called Succeeded: "All containers in the Pod have terminated in success, and will not be restarted."
It would be better, however, to use kubectl wait on a Job instead of a bare Pod, and then execute kubectl wait --for=condition=complete job/myjob.
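A minimal sketch of that approach (the Job name and image are placeholders; the command is the one from the question):

apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  template:
    spec:
      containers:
      - name: cleanup
        image: busybox                        # assumed image that provides rm
        command: ["rm", "-rf", "/some/path"]  # the command from the question
      restartPolicy: Never                    # a Job pod may not use Always

kubectl apply -f myjob.yaml
kubectl wait --for=condition=complete job/myjob --timeout=300s

On a recent kubectl (roughly v1.23 or later) you may also be able to wait on the pod phase directly with kubectl wait --for=jsonpath='{.status.phase}'=Succeeded pod/<my-pod>, but the Job-based wait is the more conventional route.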

Busybox - How to delete looping creations of busybox containers

I created a namespace inside Kubernetes and tried to create a container using the following command:
kubectl run busybox -it --image=busybox -- sh
But now, every time I delete the pod using kubectl delete pods --all, it deletes the pod that was just created and automatically recreates a new pod. I looked through the documentation but am unable to figure out what flag will stop this incessant creation of these containers.
The reason it does this is because kubectl run implicitly creates a deployment for the pod. Deployments are tasked with ensuring a certain number of pods are always running, so when Kubernetes detects a misalignment in the number of pods the deployment should be running vs the number that are actually running, it'll spin up a new one. You can remedy this by deleting the deployment: kubectl delete deployment busybox
Alternatively, you can temporarily kill the pods (but keep the deployment) by scaling down the deployment to run 0 pods: kubectl scale deployment busybox --replicas=0.
Documentation:
https://kubernetes-v1-4.github.io/docs/user-guide/kubectl/kubectl_run/
Create and run a particular image, possibly replicated. Creates a deployment or job to manage the created container(s).
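To see the above in action, a quick sequence (assuming the pod lives in the default namespace) is:

kubectl get deployments                 # the implicit busybox deployment shows up here
kubectl delete deployment busybox       # removes the deployment and its pod
kubectl get pods                        # the busybox pod should now be gone for good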

How to kill pods on Kubernetes local setup

I am starting to explore running Docker containers with Kubernetes. I did the following:
Docker run etcd
docker run master
docker run service proxy
kubectl run web --image=nginx
To clean up the state, I first stopped all the containers and cleared the downloaded images. However, I still see pods running.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
web-3476088249-w66jr 1/1 Running 0 16m
How can I remove this?
To delete the pod:
kubectl delete pods web-3476088249-w66jr
If this pod was started via some ReplicaSet or Deployment or anything else that creates replicas, then find that and delete it first.
kubectl get all
This will list all the resources that have been created in your k8s cluster. To get information about the resources created in a specific namespace, use kubectl get all --namespace=<your_namespace>
To get info about the resource that is controlling this pod, you can do
kubectl describe pod web-3476088249-w66jr
There will be a "Controlled By" field, or some similar owner field, from which you can identify which resource created it.
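If you prefer a one-liner, the owner is also recorded in the pod's ownerReferences (the pod name is the one from the question):

kubectl get pod web-3476088249-w66jr -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'

For a pod created by kubectl run this will typically print a ReplicaSet, which is in turn owned by the deployment.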
When you do kubectl run ..., that's a deployment you create, not a pod directly. You can check this with kubectl get deploy. If you want to delete the pod, you need to delete the deployment with kubectl delete deploy DEPLOYMENT.
I would recommend creating a namespace for testing when doing this kind of thing. You just do kubectl create ns test, then you do all your tests in that namespace (by adding -n test). Once you have finished, you just do kubectl delete ns test, and you are done.
If you defined your object as a Pod, then
kubectl delete pod <--all | pod name>
will remove all of the generated Pods. But if you wrapped your Pod in a Deployment object, then running the command above will only trigger a re-creation of them.
In that case, you need to run
kubectl delete deployment <--all | deployment name>
Note that this will not remove a Service object related to the deleted Deployment; a Service is a separate resource and has to be deleted on its own.
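For example, if the deployment had been exposed with a Service of the same name (an assumption; e.g. via kubectl expose), the cleanup would be:

kubectl delete deployment web
kubectl delete service web    # only needed if such a Service actually exists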