I can see that both kubectl delete pod and kubectl set env will restart the pod.
I would like to know the best practice to follow: is there any advantage to using set env?
Other than these two options, is there any other, better option to restart a pod?
Although your question is not entirely clear, I believe you are using a Deployment or another kind of replica set to spawn the pods.
Neither kubectl delete pod nor kubectl set env is the correct way to restart all the replicas in Kubernetes. The proper way to restart all the pods under a replica set is kubectl rollout restart <type-of-replica-set>/<name>.
That said, kubectl delete pod and kubectl set env do end up restarting pods, even though that is not their purpose. Here are the reasons.
kubectl delete pod removes one running pod, dropping the replica set below its desired count. The replica set controller reconciles and spawns a new pod to restore the desired number you defined.
kubectl set env updates an environment variable in the pod template specification. Once you run the command, the pod template changes, the replica set revision is bumped, and the controller reconciles by spawning a new set of pods for you.
Eventually, both commands can lead to the pods being restarted, but neither is designed for that.
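As a minimal sketch, assuming a Deployment named mydeploy (the pod name below is a made-up example):
# Recommended: trigger a rolling restart of all pods managed by the Deployment
kubectl rollout restart deployment/mydeploy
# Also works, but only replaces the single pod you delete
kubectl delete pod mydeploy-5b7c9d8f4-x2k7q
# Also works, as a side effect of changing the pod template
kubectl set env deployment/mydeploy RESTARTED_AT="$(date)"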
I'm trying to get all the deployments of a namespace to be restarted for implementation reasons.
I'm using "kubectl rollout restart deploy -n <namespace>" and it works perfectly, but I'm not sure whether that command causes downtime or whether it works like a rolling update, applying the restart one by one and keeping my services up.
Does anyone know?
In the documentation I can only find this:
Operation: rollout
Syntax: kubectl rollout SUBCOMMAND [options]
Description: Manage the rollout of a resource. Valid resource types include: deployments, daemonsets and statefulsets.
But I can't find details about the specific "rollout restart deploy".
I need to make sure it doesn't cause downtime. Right now it is very hard to tell, because the restart process is very quick.
Update: I know that for one specific deployment (kubectl rollout restart deployment/name) it works as expected and doesn't cause downtime, but I need to apply it to the whole namespace (without specifying the deployment), and that's the case I'm not sure about.
kubectl rollout restart deploy -n namespace1 will restart all deployments in the specified namespace with zero downtime.
The restart command works as follows:
After the restart it creates new pods for each deployment
Once the new pods are up (running and ready) it terminates the old pods
Add readiness probes to your deployments to configure initial delays and ensure traffic only reaches pods that are ready.
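As a rough sketch (namespace1 as in the command above; the deployment name is a placeholder), you can confirm the rolling behaviour by watching the rollout status:
# Restart every Deployment in the namespace
kubectl rollout restart deploy -n namespace1
# Watch one rollout; old pods are only terminated once the new ones report Ready
kubectl rollout status deployment/<deployment-name> -n namespace1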
@pcsutar's answer is almost correct. kubectl rollout restart $resourcetype $resourcename restarts your deployment, daemon set or stateful set according to its update strategy. So if the strategy is set to RollingUpdate it will behave exactly as the above answer describes:
After the restart it creates new pods for each deployment
Once the new pods are up (running and ready) it terminates the old pods
Add readiness probes to your deployments to configure initial delays.
However, if the strategy is, for example, type: Recreate, all the currently running pods belonging to the deployment will be terminated before new pods are spun up!
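If you are unsure which strategy a deployment uses, a quick way to check (mydeploy is a placeholder name) is:
# Prints RollingUpdate or Recreate
kubectl get deployment mydeploy -o jsonpath='{.spec.strategy.type}'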
I ran kubectl rollout restart deployment.
It created a new pod which is now stuck in Pending state because there are not enough resources to schedule it.
I can't increase the resources.
How do I delete the new pod?
Please check whether that pod has a Deployment controller (which would be recreating the pod):
kubectl get deployments
Then try to delete the Deployment with
kubectl delete deployment DEPLOYMENT_NAME
Also, I would suggest checking resource allocation and usage on your GKE nodes with the following command:
kubectl describe nodes | grep -A10 "Allocated resources"
And if you need more resources, try to enable the GKE cluster autoscaler (CA), or, in case you already have it enabled, increase its maximum number of nodes. You can also add a new node manually by resizing the node pool you are using.
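For example, one possible way to resize a GKE node pool manually (all names and values here are placeholders) would be something like:
gcloud container clusters resize my-cluster --node-pool my-pool --num-nodes 4 --zone us-central1-a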
I have 3 nodes in a Kubernetes cluster. I created a daemonset and deployed it to all 3 nodes. This daemonset created 3 pods and they were running successfully, but for some reason one of the pods failed.
I need to know how we can restart this pod without affecting the other pods in the daemon set, and without creating any other daemon set deployment.
Thanks
kubectl delete pod <podname> will delete this one pod, and the Deployment/StatefulSet/ReplicaSet/DaemonSet will reschedule a new one in its place.
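For example, assuming the daemonset's pods carry a label such as app=my-daemonset (adjust to your own labels), you could locate and delete just the failed pod:
# List the daemonset's pods together with the nodes they run on
kubectl get pods -l app=my-daemonset -o wide
# Delete only the failed pod; the DaemonSet controller recreates it on the same node
kubectl delete pod <failed-pod-name>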
There are other possibilities to achieve what you want:
Just use rollout command
kubectl rollout restart deployment mydeploy
You can set some environment variable which will force your deployment pods to restart:
kubectl set env deployment mydeploy DEPLOY_DATE="$(date)"
You can scale your deployment to zero, and then back to some positive value
kubectl scale deployment mydeploy --replicas=0
kubectl scale deployment mydeploy --replicas=1
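If you go with the scale-down/scale-up option, one possible way to restore the original replica count afterwards (a sketch, assuming a bash shell and a deployment named mydeploy) is:
# Remember the current replica count, scale to zero, then scale back
REPLICAS=$(kubectl get deployment mydeploy -o jsonpath='{.spec.replicas}')
kubectl scale deployment mydeploy --replicas=0
kubectl scale deployment mydeploy --replicas="$REPLICAS"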
Just for others reading this...
A better solution (IMHO) is to implement a liveness probe that will force the container to be restarted if it fails the probe check.
This is a great feature Kubernetes offers out of the box: self-healing.
Also look into the pod lifecycle docs.
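As a sketch of adding such a liveness probe without editing the manifest (the deployment name, the /healthz path and port 8080 are assumptions for illustration):
# Add an HTTP liveness probe to the first container of the deployment
kubectl patch deployment mydeploy --type=json -p='[
  {"op": "add",
   "path": "/spec/template/spec/containers/0/livenessProbe",
   "value": {"httpGet": {"path": "/healthz", "port": 8080},
             "initialDelaySeconds": 10, "periodSeconds": 5}}
]'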
kubectl -n <namespace> delete pods --field-selector=status.phase=Failed
I think the above command is quite useful when you want to restart 1 or more failed pods :D
And we don't need to care about the name of the failed pod.
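If you want to double-check before deleting anything, a client-side dry run previews which pods would be removed:
kubectl -n <namespace> delete pods --field-selector=status.phase=Failed --dry-run=client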
How do I delete all the contents from a Kubernetes node? Contents include deployments, replica sets, etc. I tried to delete deployments separately, but Kubernetes recreates all the pods again. Is there any way to delete all the replica sets present on a node?
If you are testing things, the easiest way would be
kubectl delete deployment --all
Although if you are using minikube, the easiest would probably be to delete the machine and start again with a fresh node:
minikube delete
minikube start
If we are talking about a production cluster, Kubernetes has a built-in feature to drain a node of the cluster, removing all the objects from that node safely.
You can use kubectl drain to safely evict all of your pods from a node before you perform maintenance on the node. Safe evictions allow the pod’s containers to gracefully terminate and will respect the PodDisruptionBudgets you have specified.
Note: By default kubectl drain will ignore certain system pods on the node that cannot be killed; see the kubectl drain documentation for more details.
When kubectl drain returns successfully, that indicates that all of the pods (except the ones excluded as described in the previous paragraph) have been safely evicted (respecting the desired graceful termination period, and without violating any application-level disruption SLOs). It is then safe to bring down the node by powering down its physical machine or, if running on a cloud platform, deleting its virtual machine.
First, identify the name of the node you wish to drain. You can list all of the nodes in your cluster with
kubectl get nodes
Next, tell Kubernetes to drain the node:
kubectl drain <node name>
Once it returns (without giving an error), you can power down the node (or equivalently, if on a cloud platform, delete the virtual machine backing the node). drain waits for graceful termination. You should not operate on the machine until the command completes.
If you leave the node in the cluster during the maintenance operation, you need to run
kubectl uncordon <node name>
afterwards to tell Kubernetes that it can resume scheduling new pods onto the node.
Please note that if there are any pods that are not managed by a ReplicationController, ReplicaSet, DaemonSet, StatefulSet or Job, then drain will not delete any pods unless you use --force, as mentioned in the docs.
kubectl drain <node name> --force
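In practice, on reasonably recent kubectl versions, a drain often also needs flags to handle DaemonSet pods and pods using emptyDir volumes, for example:
kubectl drain <node name> --ignore-daemonsets --delete-emptydir-data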
In case you are using minikube:
minikube delete --all
This will let you start a new, clean cluster.
In case you run on a full Kubernetes cluster:
kubectl delete pods,deployments -A --all
This will remove them from all namespaces; you can add more object types to the same command.
Kubernetes provides the Namespace object for isolation and separation of concerns. Therefore, it is recommended to apply all of the k8s resource objects (Deployment, ReplicaSet, Pods, Services and others) in a custom namespace.
Now, if you want to remove all of the relevant and related k8s resources, you just need to delete the namespace, which will remove all of these resources:
kubectl create namespace custom-namespace
kubectl create -f deployment.yaml --namespace=custom-namespace
kubectl delete namespaces custom-namespace
I have attached a link for further research.
Namespaces
I tried so many variations from tutorials to delete old pods, including everything here.
What finally worked for me was:
kubectl delete replicaset --all
Deleting them one at a time didn't seem to work; it was only with the --all flag that all pods were deleted without being recreated.
I have by mistake added a pod in the system namespace "kube-system", and now I am unable to remove this pod. It also seems to have created a replica set. Every time I delete these items, they are recreated.
Can't seem to find a way to delete pods or replica sets belonging to the system namespace "kube-system"
If you created the pod using kubectl run, then you will need to delete the deployment (which created the replica set, which created the pod). Otherwise, the higher level controllers will continue to ensure that the objects they are responsible for keeping running stay around in the system, even if you try to delete them manually. Try kubectl get deployment --namespace=kube-system to see if you have a deployment in the kube-system namespace. If so, deleting it should also delete the replica set and the pods that you created.
If a pod is recreated even after kubectl delete pod-name, it means that the pod is controlled by a higher-level Kubernetes object such as a Deployment, ReplicaSet or ReplicationController.
You can use kubectl describe pod pod-name | grep Controllers (on newer kubectl versions the field is shown as Controlled By) to find which controller your pod belongs to. You need to delete this higher-level object to delete the pod.
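Another way to see the owning controller directly, as a sketch using the pod's ownerReferences (replace the pod name accordingly):
# Prints the kind and name of the controller that owns the pod, e.g. ReplicaSet/my-rs
kubectl get pod <pod-name> -n kube-system -o jsonpath='{.metadata.ownerReferences[*].kind}{"/"}{.metadata.ownerReferences[*].name}{"\n"}'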