I am working on an application running on a Kubernetes cluster. I want to manually restart n pods in sequence. Can we do that? Would kubectl scale <options> work here?
The answer is yes, you can restart 5 out of 10 pods of a particular deployment, though it won't be a single command.
As you correctly assumed, kubectl scale will help you here.
Restarting 5 pods out of 10 consists of 2 operations:
Scaling down the deployment from 10 to 5 pods
kubectl scale deployment deployment-name --replicas=5
Scaling up the deployment from 5 to 10 pods back:
kubectl scale deployment deployment-name --replicas=10
You can also delete specific pods; the kube-controller-manager (via the Deployment/ReplicaSet controllers) will make sure the actual state matches the desired state, so the missing pods will be automatically recreated.
However, following best practice (thanks to @DavidMaze), the ideal scenario is to restart the whole deployment. This can be done with the following command:
kubectl rollout restart deployment deployment-name
This is a safer option, and it allows you to roll back easily in case of any mistakes/errors.
It is also possible to restart pods one by one within the deployment when a rollout restart is requested.
For that, .spec.strategy.rollingUpdate.maxUnavailable should be set to 1, which means at most 1 pod will be unavailable during the restart - see the max unavailable reference.
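A minimal sketch of setting this with a strategic merge patch, reusing the deployment-name placeholder from above:
kubectl patch deployment deployment-name -p '{"spec":{"strategy":{"rollingUpdate":{"maxUnavailable":1}}}}'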
Kubernetes Deployments
With a ReplicaSet in place you can always scale 'N' pods up/down, which will restart them, and if you need to restart a specific one, simply delete it and the ReplicaSet will spin up a new one for you.
We can also use a bash script for this case. The script below asks for the ReplicaSet ID and the number of pods to be restarted, and then deletes/restarts those pods in sequence.
#!/bin/bash
# Delete (and thereby restart) the first N pods that match the given ReplicaSet name.
read -p "Replicaset-id: " p
pods=$(kubectl get pods | grep "$p" | awk '{print $1}')
read -p "No of pods to be restarted: " n
m=0
for pod in $pods
do
  ((m++))
  echo "$m"
  kubectl delete pod "$pod"
  if [[ "$m" -eq "$n" ]]; then
    break
  fi
done
I created a Job and reran it several times.
When I delete this Job, only the latest pod is deleted.
How can I delete all of these pods?
For CronJob
You can use successfulJobsHistoryLimit to manage the pod count; if you set it to 0, the pod will be removed as soon as it completes its execution successfully.
successfulJobsHistoryLimit: 0
failedJobsHistoryLimit: 0
Read more at: https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#jobs-history-limits
GCP ref: https://cloud.google.com/kubernetes-engine/docs/how-to/cronjobs#history-limit
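These fields can also be set on an existing CronJob with a patch; a rough sketch (the CronJob name my-cron is hypothetical):
kubectl patch cronjob my-cron -p '{"spec":{"successfulJobsHistoryLimit":0,"failedJobsHistoryLimit":0}}'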
For Job
If you are using a Job rather than a CronJob, you can use ttlSecondsAfterFinished, which automatically deletes the Job's pods after the set number of seconds; set it accordingly, keeping some buffer.
ttlSecondsAfterFinished: 100
will solve your issue.
Example: https://kubernetes.io/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically
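For illustration, a minimal sketch of a Job manifest with this field set, applied from the shell (name and image are hypothetical):
cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: ttl-demo                  # hypothetical name
spec:
  ttlSecondsAfterFinished: 100    # Job and its pods are cleaned up ~100s after the Job finishes
  template:
    spec:
      containers:
      - name: demo
        image: busybox            # hypothetical image
        command: ["sh", "-c", "echo done"]
      restartPolicy: Never
EOF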
Extra:
You can also delete those pods with a simple one-time command, using a label on the pods or on the Job:
kubectl delete pods -l <labels> -n <namespace>
You can create a label, or you may already have one that matches the targeted group of pods, so you can delete them all based on this label as follows:
kubectl delete pods -l app=my-app
I assume you have a number of pods from the same image, and you want to clean them up, and then have only one pod running? If so, you need to delete the deploy:
kubectl -n <namespace> get deploy
kubectl -n <namespace> delete deploy <deployname>
Or you can scale to 0 replicas:
kubectl scale deploy <deploy-name> --replicas=0
which will kill all these pods. Then apply the manifest anew, so it creates 1 pod (assuming you are not scaling to more than 1 active pod):
kubectl -n <namespace> apply -f <manifest-for-that-deploy.yaml>
I have installed Prometheus using helm chart, so I got 4 deployment files listed:
prometheus-alertmanager
prometheus-server
prometheus-pushgateway
prometheus-kube-state-metrics
All pods of deployment files are running accordingly.
By mistake I restarted one deployment file using this command:
kubectl rollout restart deployment prometheus-alertmanager
Now a new pod is getting created and crashing; if I delete the deployment then the previous pod is also deleted. So what can I do about that CrashLoopBackOff pod?
Screenshot of kubectl output
You can simply delete that pod with the kubectl delete pod <pod_name> command, or attempt to delete all pods in CrashLoopBackOff status with:
kubectl delete pod `kubectl get pods | awk '$3 == "CrashLoopBackOff" {print $1}'`
Make sure that the corresponding deployment is set to 1 replica (or any other chosen number). If you delete a pod(s) of that deployment, it will create a new one while keeping the desired replica count.
These two pods (one running and the other in CrashLoopBackOff) belong to different deployments, as they're suffixed by different tags, e.g. pod1-abc-123 and pod2-abc-456 belong to the same deployment template, while pod1-abc-123 and pod2-def-566 belong to different deployments.
A deployment creates a ReplicaSet; make sure you delete the corresponding old ReplicaSet: kubectl get rs | grep 99dd and delete that one, similar to the prometheus-server one.
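For example (the old ReplicaSet name will differ in your cluster; take it from the kubectl get rs output):
kubectl get rs -n <namespace>
kubectl delete rs <old-replicaset-name> -n <namespace>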
I have 3 nodes in a Kubernetes cluster. I created a DaemonSet and deployed it to all 3 nodes. The DaemonSet created 3 pods and they were running successfully. But for some reason, one of the pods failed.
I need to know how we can restart this pod without affecting the other pods in the DaemonSet, and without creating another DaemonSet deployment.
Thanks
kubectl delete pod <podname> will delete this one pod, and the Deployment/StatefulSet/ReplicaSet/DaemonSet will reschedule a new one in its place.
There are other possibilities to achieve what you want:
Just use rollout command
kubectl rollout restart deployment mydeploy
You can set some environment variable which will force your deployment pods to restart:
kubectl set env deployment mydeploy DEPLOY_DATE="$(date)"
You can scale your deployment to zero, and then back to some positive value
kubectl scale deployment mydeploy --replicas=0
kubectl scale deployment mydeploy --replicas=1
Just for others reading this...
A better solution (IMHO) is to implement a liveness probe that will force the pod to restart the container if it fails the probe test.
This is a great feature K8s offers out of the box: auto-healing.
Also look into the pod lifecycle docs.
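For anyone who wants to try this, a minimal sketch of an exec liveness probe (pod name, image and probe command are placeholders):
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo             # hypothetical name
spec:
  containers:
  - name: app
    image: busybox                # hypothetical image
    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 3600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]   # fails once /tmp/healthy is removed
      initialDelaySeconds: 5
      periodSeconds: 10
EOF
If the probe fails, the kubelet restarts just that container (per the pod's restartPolicy), which is the auto-healing mentioned above.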
kubectl -n <namespace> delete pods --field-selector=status.phase=Failed
I think the above command is quite useful when you want to restart 1 or more failed pods :D
And we don't need to care about the name of the failed pod.
I have a pod test-1495806908-xn5jn with 2 containers. I'd like to restart one of them called container-test. Is it possible to restart a single container within a pod and how? If not, how do I restart the pod?
The pod was created using a deployment.yaml with:
kubectl create -f deployment.yaml
Is it possible to restart a single container
Not through kubectl, although depending on the setup of your cluster you can "cheat" and docker kill the-sha-goes-here, which will cause kubelet to restart the "failed" container (assuming, of course, the restart policy for the Pod says that is what it should do)
how do I restart the pod
That depends on how the Pod was created, but based on the Pod name you provided, it appears to be under the oversight of a ReplicaSet, so you can just kubectl delete pod test-1495806908-xn5jn and kubernetes will create a new one in its place (the new Pod will have a different name, so do not expect kubectl get pods to return test-1495806908-xn5jn ever again)
There are cases when you want to restart a specific container instead of deleting the pod and letting Kubernetes recreate it.
Doing a kubectl exec POD_NAME -c CONTAINER_NAME /sbin/killall5 worked for me.
(I changed the command from reboot to /sbin/killall5 based on the below recommendations.)
Both pods and containers are ephemeral. Try the following command to stop the specific container, and the k8s cluster will start a new container:
kubectl exec -it [POD_NAME] -c [CONTAINER_NAME] -- /bin/sh -c "kill 1"
This will send a SIGTERM signal to process 1, which is the main process running in the container. All other processes will be children of process 1, and will be terminated after process 1 exits. See the kill manpage for other signals you can send.
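To confirm which process is PID 1 before sending the signal, a quick check (assuming ps is available in the container image):
kubectl exec [POD_NAME] -c [CONTAINER_NAME] -- ps aux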
I'm using
kubectl rollout restart deployment [deployment_name]
or
kubectl delete pod [pod_name]
The whole reason for having Kubernetes is so it manages the containers for you, so you don't have to care so much about the lifecycle of the containers in the pod.
Since you have a deployment set up that uses a ReplicaSet, you can delete the pod using kubectl delete pod test-1495806908-xn5jn and Kubernetes will manage the creation of a new pod with the 2 containers without any downtime. Trying to manually restart single containers in pods negates the whole benefit of Kubernetes.
All the above answers have mentioned deleting the pod... but if you have many pods of the same service then it would be tedious to delete each one of them.
Therefore, I propose the following solution: restart by scaling.
1) Set scale to zero:
kubectl scale deployment <<name>> --replicas=0 -n service
The above command will terminate all your pods with the name <<name>>
2) To start the pod again, set the replicas to more than 0
kubectl scale deployment <<name>> --replicas=2 -n service
The above command will start your pods again with 2 replicas.
We use a pretty convenient command line to force re-deployment of fresh images on integration pod.
We noticed that our alpine containers all run their "sustaining" command on PID 5. Therefore, sending it a SIGTERM signal takes the container down. imagePullPolicy being set to Always has the kubelet re-pull the latest image when it brings the container back.
kubectl exec -i [pod name] -c [container-name] -- kill -15 5
There was an issue in the coredns pod, so I deleted that pod with:
kubectl delete pod -n=kube-system coredns-fb8b8dccf-8ggcf
The pod will be recreated automatically.
kubectl exec -it POD_NAME -c CONTAINER_NAME -- bash, then run kill 1.
Assuming the container is run as root which is not recommended.
In my case, when I changed the application config, I had to reboot the container, which was used in a sidecar pattern; I would kill the PID of the Spring Boot application, which is owned by the docker user.
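As a rough sketch of that approach (assuming pkill exists in the container, the process runs as a user named docker, and the app is a Java process; adjust the pattern to your case):
kubectl exec -it POD_NAME -c CONTAINER_NAME -- pkill -u docker -f java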
I realize this question is old and already answered, but I thought I'd chip in with my method.
Whenever I want to do this, I just make a minor change to the pod's container's image field, which causes kubernetes to restart just the container.
If you can't switch between 2 different but equivalent tags (like :latest / :1.2.3, where latest is actually version 1.2.3), then you can always switch it quickly to an invalid tag (I put an X at the end, like :latestX), and then re-edit it and remove the X straight away afterwards. This does cause the container to fail to start with an image pull error for a few seconds, though.
So for example:
kubectl edit po my-pod-name
Find the spec.containers[].name you want to kill, then find its image:
apiVersion: v1
kind: Pod
metadata:
  #...
spec:
  containers:
  - name: main-container
    #...
  - name: container-to-restart
    image: container/image:tag
    #...
You would search for your container-to-restart and then update its image to something different, which will force Kubernetes to do a controlled restart for you.
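If you prefer not to edit interactively, roughly the same effect can be achieved with kubectl set image, reusing the pod and container names from the example above (the replacement tag is up to you):
kubectl set image pod/my-pod-name container-to-restart=container/image:other-tag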
Killing the process specified in the Dockerfile's CMD / ENTRYPOINT works for me. (The container restarts automatically)
Rebooting was not allowed in my container, so I had to use this workaround.
The correct, but likely less popular, answer is that if you need to restart one container in a pod then it shouldn't be in the same pod. You can't restart single containers in a pod by design. Just move the container out into its own pod. From the docs:
Pods that run a single container. The "one-container-per-Pod" model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single container; Kubernetes manages Pods rather than managing the containers directly.

Note: Grouping multiple co-located and co-managed containers in a single Pod is a relatively advanced use case. You should use this pattern only in specific instances in which your containers are tightly coupled.
https://kubernetes.io/docs/concepts/workloads/pods/
I was playing around with ways to restart a container. What I found for me was this solution:
Dockerfile:
...
ENTRYPOINT [ "/app/bootstrap.sh" ]
/app/bootstrap.sh:
#!/bin/bash
/app/startWhatEverYouActuallyWantToStart.sh &
tail -f /dev/null
Whenever I want to restart the container, I kill the tail -f /dev/null process, which I find with:
kill -TERM `ps --ppid 1 | grep tail | grep -v -e grep | awk '{print $1}'`
Following that command, the tail process is killed, which lets bootstrap.sh (PID 1) exit; the container then stops and is restarted, so the entrypoint, in my case bootstrap.sh, is executed again.
That covers the "restart" part, which is not really a restart, but it does what you want in the end. As for limiting the restart to the container named container-test, you could pass the container name into the container in question (as the container name would otherwise not be available inside the container) and then decide whether to do the above kill.
That would be something like this in your deployment.yaml:
env:
- name: YOUR_CONTAINER_NAME
value: container-test
/app/startWhatEverYouActuallyWantToStart.sh:
#!/bin/bash
...
CONDITION_TO_RESTART=0
...
if [ "$YOUR_CONTAINER_NAME" == "container-test" -a $CONDITION_TO_RESTART -eq 1 ]; then
kill -TERM `ps --ppid 1 | grep tail | grep -v -e grep | awk '{print $1}'`
fi
Sometimes no one knows which OS the pod has, and the pod might not have sudo or reboot at all.
A safer option is to take a snapshot of the pod and recreate it:
kubectl get pod <pod-name> -o yaml > pod-to-be-restarted.yaml;
kubectl delete po <pod-name>;
kubectl create -f pod-to-be-restarted.yaml
kubectl delete pods POD_NAME
This command will delete the pod, and another one will be started automatically (assuming the pod is managed by a controller).
I want to scale up/down the number of machines to increase/decrease the number of nodes in my Kubernetes cluster. When I add one machine, I’m able to successfully register it with Kubernetes; therefore, a new node is created as expected. However, it is not clear to me how to smoothly shut down the machine later. A good workflow would be:
Mark the node related to the machine that I am going to shut down as unschedulable;
Start the pod(s) that is running in the node in other node(s);
Gracefully delete the pod(s) that is running in the node;
Delete the node.
If I understood correctly, even kubectl drain (discussion) doesn't do what I expect since it doesn’t start the pods before deleting them (it relies on a replication controller to start the pods afterwards which may cause downtime). Am I missing something?
How should I properly shutdown a machine?
List the nodes and get the <node-name> you want to drain (or remove from the cluster):
kubectl get nodes
1) First drain the node
kubectl drain <node-name>
You might have to ignore daemonsets and local-data in the machine
kubectl drain <node-name> --ignore-daemonsets --delete-local-data
2) Edit instance group for nodes (Only if you are using kops)
kops edit ig nodes
Set the MIN and MAX size to one less than their current values
Just save the file (nothing extra to be done)
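To double-check the sizes before and after editing, a rough sketch (state/cluster flags omitted, as in the kops commands above):
kops get ig nodes -o yaml | grep -E 'minSize|maxSize'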
You still might see some pods on the drained node that belong to daemonsets, like the networking plugin, fluentd for logs, kubedns/coredns, etc.
3) Finally delete the node
kubectl delete node <node-name>
4) Commit the state for KOPS in s3: (Only if you are using kops)
kops update cluster --yes
OR (if you are using kubeadm)
If you are using kubeadm and would like to reset the machine to the state it was in before running kubeadm join, then run:
kubeadm reset
Find the node with kubectl get nodes. We'll assume the name of the node to be removed is "mynode"; replace that with the actual node name in the steps below.
Drain it with kubectl drain mynode
Delete it with kubectl delete node mynode
If using kubeadm, run on “mynode” itself kubeadm reset
Rafael, kubectl drain does work as you describe. There is some downtime, just as if the machine crashed.
Can you describe your setup? How many replicas do you have, and are you provisioned such that you can't handle any downtime of a single replica?
If the cluster is created by kops:
1. kubectl drain <node-name>
Now all the pods will be evicted.
To ignore daemonsets:
2. kubectl drain <node-name> --ignore-daemonsets --delete-local-data
3. kops edit ig nodes-3 --state=s3://bucketname
Set the max and min values of the instance group to 0.
4. kubectl delete node <node-name>
5. kops update cluster --state=s3://bucketname --yes
Rolling update if required:
6. kops rolling-update cluster --state=s3://bucketname --yes
Validate the cluster:
7. kops validate cluster --state=s3://bucketname
Now the instance will be terminated.
The below command only works if you have a lot of replicas, disruption budgets, etc. - but helps a lot with improving cluster utilization. In our cluster we have integration tests kicked off throughout the day (pods run for an hour and then spin down) as well as some dev-workload (runs for a few days until a dev spins it down manually). I am running this every night and get from ~100 nodes in the cluster down to ~20 - which adds up to a fair amount of savings:
for node in $(kubectl get nodes -o name | cut -d "/" -f2); do
  kubectl drain --ignore-daemonsets --delete-emptydir-data $node;
  kubectl delete node $node;
done
Remove worker node from Kubernetes
kubectl get nodes
kubectl drain <node-name> --ignore-daemonsets
kubectl delete node <node-name>
When draining a node, there is a risk that the remaining nodes become unbalanced and that some processes suffer downtime. The purpose of this method is to maintain the load balance between nodes as much as possible, in addition to avoiding downtime.
# Mark the node as unschedulable.
echo Mark the node as unschedulable $NODENAME
kubectl cordon $NODENAME
# Get the list of namespaces running on the node.
NAMESPACES=$(kubectl get pods --all-namespaces -o custom-columns=:metadata.namespace --field-selector spec.nodeName=$NODENAME | sort -u | sed -e "/^ *$/d")
# Force a rollout on each of the deployments in those namespaces.
# Since the node is unschedulable, Kubernetes schedules
# the replacement pods on other nodes automatically.
for NAMESPACE in $NAMESPACES
do
  echo "deployment restart for $NAMESPACE"
  # No deployment name given: restart every deployment in the namespace.
  kubectl rollout restart deployment -n "$NAMESPACE"
done
# Wait for the deployment rollouts to finish.
for NAMESPACE in $NAMESPACES
do
  echo "deployment status for $NAMESPACE"
  for DEPLOYMENT in $(kubectl get deployments -n "$NAMESPACE" -o name)
  do
    kubectl rollout status "$DEPLOYMENT" -n "$NAMESPACE"
  done
done
# Drain node to be removed
kubectl drain $NODENAME
I ran into some strange behavior with kubectl drain. Here are my extra steps, otherwise DATA WILL BE LOST in my case!
Short answer: CHECK THAT no PersistentVolume is mounted to this node. If there are some PVs, see the following description to remove them safely.
When executing kubectl drain, I noticed that some Pods were not evicted (they just did not appear in the logs, like evicting pod xxx).
In my case, some are pods with soft anti-affinity (so they do not like to go to the remaining nodes), and some are pods of a StatefulSet of size 1 that wants to keep at least 1 pod.
If I directly delete that node (using the commands mentioned in other answers), data will get lost because those pods have some PersistentVolumes, and deleting a Node will also delete PersistentVolumes (if using some cloud providers).
Thus, please manually delete those pods one by one. After they are deleted, Kubernetes will re-schedule the pods to other nodes (because this node is SchedulingDisabled).
After deleting all pods (excluding DaemonSets), please CHECK THAT no PersistentVolume is mounted to this node.
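One way to check for volumes still attached to the node (assuming CSI-attached volumes; the column paths refer to the VolumeAttachment API):
kubectl get volumeattachments -o custom-columns=PV:.spec.source.persistentVolumeName,NODE:.spec.nodeName | grep <node-name>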
Then you can safely delete the node itself :)