Busybox - How to delete looping creations of busybox containers - kubernetes

I created a namespace inside Kubernetes and tried to create a container using the following command:
kubectl run busybox -it --image=busybox -- sh
But now, every time I delete the pod using kubectl delete pods --all, it deletes the pod that was just created and a new pod is automatically created in its place. I looked through the documentation but cannot figure out which flag will stop this incessant re-creation of containers.

The reason it does this is that kubectl run implicitly creates a deployment for the pod. Deployments are tasked with ensuring a certain number of pods is always running, so when Kubernetes detects a mismatch between the number of pods the deployment should be running and the number actually running, it spins up a new one. You can remedy this by deleting the deployment: kubectl delete deployment busybox
Alternatively, you can temporarily kill the pods (but keep the deployment) by scaling down the deployment to run 0 pods: kubectl scale deployment busybox --replicas=0.
Documentation:
https://kubernetes-v1-4.github.io/docs/user-guide/kubectl/kubectl_run/
Create and run a particular image, possibly replicated. Creates a deployment or job to manage the created container(s).
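Before deleting anything, it can help to confirm what is recreating the pods; a quick sketch, using the busybox name from the question (add -n <your-namespace> if you created the pod in a separate namespace):
kubectl get deployments,replicasets,pods   # the implicit busybox deployment and its pods should show up here
kubectl delete deployment busybox          # removing the deployment also removes its replicaset and pods
kubectl get pods                           # the busybox pod should now stay gone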

Related

Kubernetes pod crashLoopBackOff, need to remove a pod

I have installed Prometheus using the helm chart, which gave me 4 deployments:
prometheus-alertmanager
prometheus-server
prometheus-pushgateway
prometheus-kube-state-metrics
All pods of these deployments were running as expected.
By mistake I restarted one deployment using this command:
kubectl rollout restart deployment prometheus-alertmanager
Now a new pod is being created and keeps crashing; if I delete the deployment, the previous pod will also be deleted. So what can I do about that CrashLoopBackOff pod?
Screenshot of kubectl output
You can simply delete that pod with the kubectl delete pod <pod_name> command, or attempt to delete all pods in CrashLoopBackOff status with:
kubectl delete pod `kubectl get pods | awk '$3 == "CrashLoopBackOff" {print $1}'`
Make sure that the corresponding deployment is set to 1 replica (or any other chosen number). If you delete a pod(s) of that deployment, it will create a new one while keeping the desired replica count.
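To check or adjust that desired replica count, a small sketch (the deployment name is taken from the question):
kubectl get deployment prometheus-alertmanager                  # shows desired vs. ready replicas
kubectl scale deployment prometheus-alertmanager --replicas=1   # set the desired count back to 1 if needed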
These two pods (one running and the other in CrashLoopBackOff) belong to different deployments, as you can tell from their name suffixes: pod1-abc-123 and pod1-abc-456 would come from the same deployment (same ReplicaSet hash), whereas pod1-abc-123 and pod2-def-566 belong to different deployments.
A deployment creates a ReplicaSet, so make sure you also delete the corresponding old ReplicaSet: kubectl get rs | grep 99dd, then delete that one, similar to the prometheus-server one.
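If grepping for the hash feels fragile, another way to confirm which ReplicaSet owns the crashing pod is to read its ownerReferences; a sketch with placeholder names:
kubectl get pod <crashing-pod-name> -o jsonpath='{.metadata.ownerReferences[0].name}{"\n"}'   # prints the owning ReplicaSet
kubectl delete rs <replicaset-name-printed-above>                                             # remove the stale ReplicaSet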

How to delete Pods or restart containers which are targets of kubectl attach/exec?

I'd like to automatically restart any container which is the target of kubectl exec/kubectl attach after the session is closed. Is this currently possible?
In K8s, the Pod (not the container) is the smallest unit of operation.
So the workaround is to restart the entire pod after the session.
Simple command concatenation with a logical AND (&&) will do the job, i.e.
kubectl exec -it webserver-1 bash && kubectl delete pod webserver-1
Once you exit the pod session, the second part of the command is executed: the pod is removed, and the scheduler will spin up a new pod for you (if the pod was part of a ReplicaSet).
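Note that with && the delete only runs if the exec session exits with status 0; a hypothetical wrapper that removes the pod no matter how the session ended could look like this:
# exec_and_recreate is an illustrative helper, not a kubectl feature
exec_and_recreate() {
  local pod="$1"; shift
  kubectl exec -it "$pod" -- "$@"   # interactive session in the pod
  kubectl delete pod "$pod"         # delete afterwards regardless of the session's exit code
}
# usage
exec_and_recreate webserver-1 bash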

How to restart a failed pod in kubernetes deployment

I have 3 nodes in my kubernetes cluster. I created a DaemonSet and deployed it to all 3 nodes. The DaemonSet created 3 pods, and they were running successfully. But for some reason, one of the pods failed.
I need to know how I can restart this pod without affecting the other pods in the DaemonSet, and without creating another DaemonSet deployment.
Thanks
kubectl delete pod <podname> will delete this one pod, and the Deployment/StatefulSet/ReplicaSet/DaemonSet will reschedule a new one in its place.
There are other possibilities to achieve what you want:
Just use the rollout command:
kubectl rollout restart deployment mydeploy
You can set an environment variable, which will force your deployment's pods to restart:
kubectl set env deployment mydeploy DEPLOY_DATE="$(date)"
You can scale your deployment to zero and then back to some positive value:
kubectl scale deployment mydeploy --replicas=0
kubectl scale deployment mydeploy --replicas=1
Just for others reading this...
A better solution (IMHO) is to implement a liveness probe that will force the pod to restart the container if it fails the probe check.
This is a great feature K8s offers out of the box: auto healing.
Also look into the pod lifecycle docs.
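A minimal sketch of such a probe, following the common exec-probe pattern (the pod name, image, and the /tmp/healthy path are illustrative):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo   # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 3600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]   # a failing check makes the kubelet restart the container
      initialDelaySeconds: 5
      periodSeconds: 10
EOF
Delete /tmp/healthy inside the container and the kubelet will restart that container after a few failed checks.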
kubectl -n <namespace> delete pods --field-selector=status.phase=Failed
I think the above command is quite useful when you want to restart one or more failed pods :D
And you don't need to care about the names of the failed pods.

Restart container within pod

I have a pod test-1495806908-xn5jn with 2 containers. I'd like to restart one of them called container-test. Is it possible to restart a single container within a pod and how? If not, how do I restart the pod?
The pod was created using a deployment.yaml with:
kubectl create -f deployment.yaml
Is it possible to restart a single container
Not through kubectl, although depending on the setup of your cluster you can "cheat" and docker kill the-sha-goes-here, which will cause kubelet to restart the "failed" container (assuming, of course, the restart policy for the Pod says that is what it should do)
how do I restart the pod
That depends on how the Pod was created, but based on the Pod name you provided, it appears to be under the oversight of a ReplicaSet, so you can just kubectl delete pod test-1495806908-xn5jn and kubernetes will create a new one in its place (the new Pod will have a different name, so do not expect kubectl get pods to return test-1495806908-xn5jn ever again)
There are cases when you want to restart a specific container instead of deleting the pod and letting Kubernetes recreate it.
Doing kubectl exec POD_NAME -c CONTAINER_NAME -- /sbin/killall5 worked for me.
(I changed the command from reboot to /sbin/killall5 based on the recommendations below.)
Both pods and containers are ephemeral. Try the following command to stop the specific container, and the k8s cluster will start a new container in its place.
kubectl exec -it [POD_NAME] -c [CONTAINER_NAME] -- /bin/sh -c "kill 1"
This will send a SIGTERM signal to process 1, which is the main process running in the container. All other processes will be children of process 1, and will be terminated after process 1 exits. See the kill manpage for other signals you can send.
I'm using
kubectl rollout restart deployment [deployment_name]
or
kubectl delete pod [pod_name]
The whole reason for having Kubernetes is so it manages the containers for you, so you don't have to care so much about the lifecycle of the containers in the pod.
Since you have a deployment set up that uses a ReplicaSet, you can delete the pod using kubectl delete pod test-1495806908-xn5jn and Kubernetes will manage the creation of a new pod with the 2 containers without any downtime. Trying to manually restart single containers in pods negates the whole benefit of Kubernetes.
All the above answers mention deleting the pod... but if you have many pods of the same service, it would be tedious to delete each one of them...
Therefore, I propose the following solution: restart by scaling.
1) Set the scale to zero:
kubectl scale deployment <<name>> --replicas=0 -n service
The above command will terminate all pods of the deployment named <<name>>.
2) To start the pods again, set the replicas to more than 0:
kubectl scale deployment <<name>> --replicas=2 -n service
The above command will start your pods again, with 2 replicas.
We use a pretty convenient command line to force re-deployment of fresh images on our integration pods.
We noticed that our alpine containers all run their "sustaining" command as PID 5. Therefore, sending it a SIGTERM signal takes the container down. With imagePullPolicy set to Always, the kubelet re-pulls the latest image when it brings the container back.
kubectl exec -i [pod name] -c [container-name] -- kill -15 5
There was an issue with the coredns pod, so I deleted that pod with
kubectl delete pod -n=kube-system coredns-fb8b8dccf-8ggcf
and a replacement pod started automatically.
kubectl exec -it POD_NAME -c CONTAINER_NAME bash, then kill 1.
This assumes the container is run as root, which is not recommended.
In my case, when I changed the application config, I had to reboot the container, which was used in a sidecar pattern; I would kill the PID of the Spring Boot application, which is owned by the docker user.
I realize this question is old and already answered, but I thought I'd chip in with my method.
Whenever I want to do this, I just make a minor change to the pod's container's image field, which causes kubernetes to restart just the container.
If you can't switch between 2 different but equivalent tags (like :latest / :1.2.3, where latest is actually version 1.2.3), then you can always switch it quickly to an invalid tag (I put an X at the end, like :latestX or something) and then re-edit it and remove the X straight away afterwards. This does cause the container to fail to start with an image pull error for a few seconds, though.
So for example:
kubectl edit po my-pod-name
Find the spec.containers[].name you want to kill, then find its image:
apiVersion: v1
kind: Pod
metadata:
  #...
spec:
  containers:
  - name: main-container
    #...
  - name: container-to-restart
    image: container/image:tag
#...
You would search for your container-to-restart and then update its image to something different, which will force Kubernetes to do a controlled restart for you.
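If you prefer not to go through the interactive editor, the same image swap can be done with a JSON patch; a sketch that assumes container-to-restart is the second entry (index 1) in the pod's containers list and uses a made-up replacement tag:
# index 1 and the :other-tag value are assumptions for illustration
kubectl patch pod my-pod-name --type='json' \
  -p='[{"op":"replace","path":"/spec/containers/1/image","value":"container/image:other-tag"}]'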
Killing the process specified in the Dockerfile's CMD / ENTRYPOINT works for me. (The container restarts automatically)
Rebooting was not allowed in my container, so I had to use this workaround.
The correct, but likely less popular, answer is that if you need to restart one container in a pod, then it shouldn't be in the same pod. You can't restart single containers in a pod by design. Just move the container out into its own pod. From the docs:
Pods that run a single container. The "one-container-per-Pod" model is
the most common Kubernetes use case; in this case, you can think of a
Pod as a wrapper around a single container; Kubernetes manages Pods
rather than managing the containers directly.
Note: Grouping multiple co-located and co-managed containers in a
single Pod is a relatively advanced use case. You should use this
pattern only in specific instances in which your containers are
tightly coupled.
https://kubernetes.io/docs/concepts/workloads/pods/
I was playing around with ways to restart a container. What I found for me was this solution:
Dockerfile:
...
ENTRYPOINT [ "/app/bootstrap.sh" ]
/app/bootstrap.sh:
#!/bin/bash
/app/startWhatEverYouActuallyWantToStart.sh &
tail -f /dev/null
Whenever I want to restart the container, I kill the tail -f /dev/null process, which I find with
kill -TERM `ps --ppid 1 | grep tail | grep -v -e grep | awk '{print $1}'`
After that command, all processes except the one with PID 1 are killed, and the entrypoint, in my case bootstrap.sh, is executed (again).
That's the "restart" part, which is not really a restart but does what you want in the end. As for restricting the restart to the container named container-test: you could pass the container name into the container in question (as the container name is otherwise not available inside the container) and then decide there whether to run the kill above.
That would be something like this in your deployment.yaml:
env:
- name: YOUR_CONTAINER_NAME
  value: container-test
/app/startWhatEverYouActuallyWantToStart.sh:
#!/bin/bash
...
CONDITION_TO_RESTART=0
...
if [ "$YOUR_CONTAINER_NAME" == "container-test" -a $CONDITION_TO_RESTART -eq 1 ]; then
kill -TERM `ps --ppid 1 | grep tail | grep -v -e grep | awk '{print $1}'`
fi
Sometimes no one knows which OS the pod has, and the pod might not have sudo or reboot at all.
The safer option is to take a snapshot and recreate the pod:
kubectl get pod <pod-name> -o yaml > pod-to-be-restarted.yaml;
kubectl delete po <pod-name>;
kubectl create -f pod-to-be-restarted.yaml
kubectl delete pods POD_NAME
This command will delete the pod, and another will be started automatically (provided the pod is managed by a controller such as a Deployment or ReplicaSet).

How to kill pods on Kubernetes local setup

I am starting to explore running docker containers with Kubernetes. I did the following:
Docker run etcd
docker run master
docker run service proxy
kubectl run web --image=nginx
To clean up the state, I first stopped all the containers and cleared the downloaded images. However, I still see pods running.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
web-3476088249-w66jr 1/1 Running 0 16m
How can I remove this?
To delete the pod:
kubectl delete pods web-3476088249-w66jr
If this pod was started via some ReplicaSet or Deployment or anything else that creates replicas, then find that and delete it first.
kubectl get all
This will list all the resources that have been created in your k8s cluster. To get information about the resources created in your namespace, run kubectl get all --namespace=<your_namespace>
To get info about the resource that is controlling this pod, you can do
kubectl describe pod web-3476088249-w66jr
There will be a field "Controlled By", or some owner field, with which you can identify which resource created it.
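A quick way to pull just that owner information, using the pod name from the question:
kubectl describe pod web-3476088249-w66jr | grep -i 'Controlled By'   # prints the owning controller, e.g. a ReplicaSet
# or read it straight from the pod's ownerReferences
kubectl get pod web-3476088249-w66jr -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'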
When you do kubectl run ..., that creates a deployment, not a pod directly. You can check this with kubectl get deploy. If you want to delete the pod, you need to delete the deployment with kubectl delete deploy DEPLOYMENT.
I would recommend creating a namespace for testing when doing this kind of thing. Just run kubectl create ns test, then do all your tests in this namespace (by adding -n test to your commands). Once you have finished, just run kubectl delete ns test, and you are done.
If you defined your object as a Pod, then
kubectl delete pod <--all | pod name>
will remove all of the generated Pods. But if you wrapped your Pod in a Deployment object, then running the command above will only trigger a re-creation of them.
In that case, you need to run
kubectl delete deployment <--all | deployment name>
Note that this will not remove a Service related to the deleted Deployment; if you created one (for example with kubectl expose), delete it separately.
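A cleanup sketch for this question's setup, assuming the web name from kubectl run web above (the last command only applies if a Service was created separately, for example with kubectl expose):
kubectl delete deployment web   # removes the deployment, its replicaset, and its pods
kubectl get services            # check whether a separately created service is still around
kubectl delete service web      # if so, delete it explicitly; it is not removed with the deployment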