Restart all the pods in a deployment in Kubernetes 1.14 [duplicate]

In Kubernetes there is a rolling update (automatic, without downtime), but there is no rolling restart; at least I could not find one. We would have to change the deployment YAML. Is there a way to do a rolling "restart", preferably without changing the deployment YAML?

Before Kubernetes 1.15 the answer is no, but there is a workaround: patch the deployment spec with a dummy annotation:
kubectl patch deployment web -p \
"{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
As of Kubernetes 1.15 you can use:
kubectl rollout restart deployment your_deployment_name
From the Kubernetes 1.15 release notes, under CLI Improvements:
Created a new kubectl rollout restart command that does a rolling restart of a deployment.
kubectl rollout restart now works for DaemonSets and StatefulSets
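For example (a sketch; my-daemonset and my-statefulset are placeholder names):
kubectl rollout restart daemonset my-daemonset
kubectl rollout restart statefulset my-statefulset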

If you use k9s, a restart command is available when you select deployments, statefulsets, or daemonsets.

Related

Does "kubectl rollout restart deploy" cause downtime?

I'm trying to get all the deployments in a namespace restarted for implementation reasons.
I'm using "kubectl rollout -n <namespace> restart deploy" and it works perfectly, but I'm not sure if that command causes downtime or if it works like a rolling update, replacing pods one by one and keeping my services up.
Does anyone know?
In the documentation I can only find this:
Operation:   rollout
Syntax:      kubectl rollout SUBCOMMAND [options]
Description: Manage the rollout of a resource. Valid resource types include: deployments, daemonsets and statefulsets.
But I can't find details about the specific "rollout restart deploy".
I need to make sure it doesn't cause downtime. Right now it is very hard to tell, because the restart process is very quick.
Update: I know that for one specific deployment (kubectl rollout restart deployment/name) it works as expected and doesn't cause downtime, but I need to apply it to the whole namespace (without specifying a deployment), and that's the case I'm not sure about.
kubectl rollout restart deploy -n namespace1 will restart all deployments in the specified namespace with zero downtime.
The restart command works as follows:
After the restart it creates new pods for each deployment.
Once the new pods are up (running and ready), it terminates the old pods.
Add readiness probes to your deployments to configure initial delays, as sketched below.
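A minimal readiness probe sketch; the path, port, and timings are assumptions to adapt to your application:
# in the container spec of your deployment
readinessProbe:
  httpGet:
    path: /healthz   # assumed health endpoint
    port: 8080       # assumed container port
  initialDelaySeconds: 5
  periodSeconds: 10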
@pcsutar's answer is almost correct. kubectl rollout restart $resourcetype $resourcename restarts your deployment, daemonset, or stateful set according to its update strategy. So if it is set to rollingUpdate, it will behave exactly as the above answer describes:
After the restart it creates new pods for each deployment.
Once the new pods are up (running and ready), it terminates the old pods.
Add readiness probes to your deployments to configure initial delays.
However, if the strategy is, for example, type: Recreate, all of the currently running pods belonging to the deployment will be terminated before new pods are spun up!
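You can check or set the strategy in the deployment spec. A sketch (these RollingUpdate values happen to be the defaults):
spec:
  strategy:
    type: RollingUpdate   # or Recreate, which terminates all old pods first
    rollingUpdate:
      maxSurge: 25%         # extra pods allowed above the desired count
      maxUnavailable: 25%   # pods that may be unavailable during the update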

Correct way to scale/restart an application down/up in kubernetes (replicaset, deployments and pod deletion)?

I usually restart my applications by:
kubectl scale deployment my-app --replicas=0
Followed by:
kubectl scale deployment my-app --replicas=1
which works fine. I also have another running application, but when I look at its replicaset I see:
$ kubectl get rs
NAME          DESIRED   CURRENT   READY   AGE
another-app   2         2         2       2d
So to restart that correctly I would of course need to:
kubectl scale deployment another-app --replicas=0
kubectl scale deployment another-app --replicas=2
But is there a better way to do this so I don't have to manually look at the replicasets before scaling/restarting my application (which might have replicas > 1)?
You can restart pods by using a label:
kubectl delete pods -l name=myLabel
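If you're unsure which labels your pods actually carry, check first:
kubectl get pods --show-labels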
You can do a rolling restart of all pods for a deployment, so that you don't take the service down:
kubectl patch deployment your_deployment_name -p \
"{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
And as of Kubernetes 1.15 you can use:
kubectl rollout restart deployment your_deployment_name
To make changes in your current deployment you can use kubectl rollout pause deployment/YOUR_DEPLOYMENT. This way the deployment will be marked as paused and won't be reconciled by the controller. Once it's paused, you can make the necessary changes to your configuration and then resume it with kubectl rollout resume deployment/YOUR_DEPLOYMENT. This will create a new replicaset with the updated configuration.
Pods with the new configuration will be started, and once they are running, the pods with the old configuration will be terminated.
Using this method you can also roll the deployment back to a previous version. Use:
kubectl rollout history deployment/YOUR_DEPLOYMENT
to check the history of the rollouts, and then execute the following command to roll back:
kubectl rollout undo deployment/YOUR_DEPLOYMENT --to-revision=REVISION_NO
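Putting the pause/resume flow together (a sketch; my-app, the container name, and the image tag are placeholders):
kubectl rollout pause deployment/my-app
kubectl set image deployment/my-app my-container=myrepo/my-app:v2   # example change
kubectl rollout resume deployment/my-app
kubectl rollout status deployment/my-app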

Redeployment in Kubernetes does not scale down the old Replica set

When we try to redeploy a deployment that is already running, the Deployment does not do a rolling update on the replicasets, which means the old and new replica sets are both running.
I have tried setting revisionHistoryLimit to 1 but it does not help either. Every time, I have to delete or scale down the old replicaset with
kubectl delete replicaset/<old_replicaset_name> -n <namespace>
or
kubectl scale replicaset/<old_replicaset_name> --replicas=0
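To see which ReplicaSets belong to the deployment before scaling one down (a sketch; the label selector is an assumption based on your deployment's labels):
kubectl get rs -n <namespace> -l app=<app_label> -o wide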

Using kubectl to roll out an OpenShift DeploymentConfig

I'm using an OpenShift container cluster to run my project.
In my CI I'm using helm and kubectl to upgrade and roll out the deployments.
Following this guide, I have created this simple DeploymentConfig:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
...
When I run helm upgrade --install I can see the new deployment in my OpenShift cluster.
But when I try to roll out the deployment using kubectl it fails:
helm upgrade --install --wait --namespace myapp nginx chart/
kubectl rollout status -n myapp -w "dc/nginx"
I'm getting this error: error: no kind "DeploymentConfig" is registered for version "apps.openshift.io/v1" in scheme "k8s.io/kubernetes/pkg/kubectl/scheme/scheme.go:28"
Running kubectl api-versions does display "apps.openshift.io/v1" though.
Why can't I rollout the deployment using kubectl?
Kubernetes' command line interface (CLI), kubectl, is used to run commands against a Kubernetes cluster, while DeploymentConfig is specific to OpenShift distributions and is not available in standard Kubernetes.
That said, since oc is built on top of kubectl, converting a kubectl binary to oc is as simple as changing the binary's name from kubectl to oc.
See the documentation for more information on using kubectl and oc.
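In practice that means running the same rollout command through oc instead (reusing the namespace and name from the question):
oc rollout status -n myapp -w "dc/nginx"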

How to restart a failed pod in kubernetes deployment

I have 3 nodes in my Kubernetes cluster. I created a daemonset and deployed it to all 3 nodes. The daemonset created 3 pods and they were running successfully. But for some reason one of the pods failed.
I need to know how to restart this pod without affecting the other pods in the daemonset, and without creating another daemonset deployment.
Thanks
kubectl delete pod <podname> will delete this one pod, and the Deployment/StatefulSet/ReplicaSet/DaemonSet will reschedule a new one in its place.
There are other possibilities to achieve what you want:
Just use the rollout command:
kubectl rollout restart deployment mydeploy
Set an environment variable, which will force your deployment's pods to restart:
kubectl set env deployment mydeploy DEPLOY_DATE="$(date)"
Scale your deployment to zero, and then back to some positive value:
kubectl scale deployment mydeploy --replicas=0
kubectl scale deployment mydeploy --replicas=1
Just for others reading this...
A better solution (IMHO) is to implement a liveness probe that will force the container to restart if it fails the probe check.
This is a great feature K8s offers out of the box: auto-healing.
Also look into the pod lifecycle docs.
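A minimal liveness probe sketch; the path, port, and timings are assumptions to adapt:
# in the container spec of your deployment
livenessProbe:
  httpGet:
    path: /healthz   # assumed health endpoint
    port: 8080       # assumed container port
  initialDelaySeconds: 15
  periodSeconds: 20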
kubectl -n <namespace> delete pods --field-selector=status.phase=Failed
I think the above command is quite useful when you want to restart one or more failed pods :D
And you don't need to care about the names of the failed pods.