Correct way to scale/restart an application down/up in Kubernetes (ReplicaSet, Deployments and pod deletion)?

I usually restart my applications by:
kubectl scale deployment my-app --replicas=0
Followed by:
kubectl scale deployment my-app --replicas=1
which works fine. I also have another running application but when I look at its replicaset I see:
$ kubectl get rs
NAME          DESIRED   CURRENT   READY   AGE
another-app   2         2         2       2d
So to restart that correctly I would of course need to:
kubectl scale deployment another-app --replicas=0
kubectl scale deployment another-app --replicas=2
But is there a better way to do this so I don't have to manually look at the ReplicaSets before scaling/restarting my application (which might have replicas > 1)?
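One workaround would be to capture the current replica count first and reuse it, roughly like this (assuming a bash-compatible shell), but I'm hoping there is something cleaner:
# Save the desired replica count, scale to zero, then restore it
REPLICAS=$(kubectl get deployment another-app -o jsonpath='{.spec.replicas}')
kubectl scale deployment another-app --replicas=0
kubectl scale deployment another-app --replicas="$REPLICAS"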

You can restart pods by deleting them by label:
kubectl delete pods -l name=myLabel
You can do a rolling restart of all pods for a deployment, so that you don't take the service down:
kubectl patch deployment your_deployment_name -p \
"{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
And as of Kubernetes version 1.15 you can run:
kubectl rollout restart deployment your_deployment_name
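If you want to wait for the rolling restart to finish before moving on, you can follow it with kubectl rollout status, for example:
kubectl rollout status deployment your_deployment_name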

To make changes in your current deployment you can use kubectl rollout pause deployment/YOUR_DEPLOYMENT. This marks the deployment as paused, so it won't be reconciled by the controller. Once it's paused you can make the necessary changes to your configuration and then resume it with kubectl rollout resume deployment/YOUR_DEPLOYMENT. This creates a single new ReplicaSet with the updated configuration.
Pods with the new configuration are started, and once they are in Running status, the pods with the old configuration are terminated.
Using this method you are also able to roll the deployment back to a previous version. Use:
kubectl rollout history deployment/YOUR_DEPLOYMENT
to check the rollout history, and then execute the following command to roll back:
kubectl rollout undo deployment/YOUR_DEPLOYMENT --to-revision=REVISION_NO
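A rough sketch of the full pause/edit/resume flow described above, using a hypothetical my-app deployment (the image tag and resource limits are only illustrative):
# Pause reconciliation so several changes roll out together
kubectl rollout pause deployment/my-app
# Make whatever changes you need (examples only)
kubectl set image deployment/my-app my-app=nginx:1.25
kubectl set resources deployment/my-app -c my-app --limits=memory=256Mi
# Resume: a single new ReplicaSet is created with all the changes
kubectl rollout resume deployment/my-app
kubectl rollout status deployment/my-app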

Related

Restart all the pods in deployment in Kubernetes 1.14 [duplicate]

In Kubernetes there is a rolling update (automatic, without downtime), but there is no rolling restart, at least I could not find one; we have to change the deployment YAML. Is there a way to do a rolling "restart", preferably without changing the deployment YAML?
Before Kubernetes 1.15 the answer is no, but there is a workaround: patch the deployment spec with a dummy annotation:
kubectl patch deployment web -p \
"{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
As of Kubernetes 1.15 you can use:
kubectl rollout restart deployment your_deployment_name
CLI Improvements
Created a new kubectl rollout restart command that does a rolling restart of a deployment.
kubectl rollout restart now works for DaemonSets and StatefulSets
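So the same one-liner applies to those controllers as well (the names below are placeholders):
kubectl rollout restart daemonset your_daemonset_name
kubectl rollout restart statefulset your_statefulset_name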
If you use k9s, the restart command can be found when you select deployments, statefulsets or daemonsets.

Redeployment in Kubernetes does not scale down the old Replica set

When we try to redeploy a deployment that is already running, the Deployment does not do a rolling update on the ReplicaSets, which means the old and the new ReplicaSet keep running.
I have tried setting revisionHistoryLimit to 1, but it does not help either. Every time, I end up deleting or scaling down the old ReplicaSet with
kubectl delete replicaset/<old_replicaset_name> -n <namespace>
or
kubectl scale replicaset/<old_replicaset_name> --replicas=0

Not able to update the pod images of a ReplicationController in K8S

I created a ReplicationController using the below command.
kubectl run nginx --image=nginx -r=2 --generator=run/v1
Now I tried upgrading the image to version 1.7.1.
kubectl set image rc/nginx nginx=nginx:1.7.1
But, the image doesn't seem to update.
watch -n1 'kubectl describe pods | grep "Image:"'
Also tried kubectl edit .... and the kubectl apply -f .... command, but the image is not getting updated.
How do I update an image in K8S ReplicationController?
The documentation describes how to do a rolling upgrade of replication controllers: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/#rolling-updates
You need to know that your image actually is updated in the replication controller, but the replication controller won't kill the existing pods and spawn new ones with the new image. To achieve that, you need to do one of two things:
Manually kill the pods, or
Scale your RC to 0 to kill the pods and then back to the desired number of replicas, using kubectl scale --replicas=3 rc/nginx
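Concretely, for the nginx RC from the question (created with 2 replicas), the scale-down/scale-up would look roughly like this:
# All old pods are removed, then new ones are created from the updated template
kubectl scale --replicas=0 rc/nginx
kubectl scale --replicas=2 rc/nginx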
A replication controller is only able to scale the number of replicas of a given pod and can't do any updates on its own.
There is a way to "update" your ReplicationController using kubectl rolling-update, but it does not update it in place.
This is what happens when you run kubectl rolling-update (link1):
Creating a new replication controller with the updated configuration.
Increasing/decreasing the replica count on the new and old controllers until the correct number of replicas is reached.
Deleting the original replication controller.
Rolling updates are initiated with the kubectl rolling-update command:
$ kubectl rolling-update NAME \
([NEW_NAME] --image=IMAGE | -f FILE)
Assume that we have a current replication controller named foo and it is running image image:v1 (link2)
kubectl rolling-update foo [foo-v2] --image=myimage:v2
If the user doesn't specify a name for the 'next' replication
controller, then the 'next' replication controller is renamed to the
name of the original replication controller.
Here are some more examples from the kubectl reference:
Update pods of frontend-v1 using new replication controller data in
frontend-v2.json.
kubectl rolling-update frontend-v1 -f frontend-v2.json
Update pods of frontend-v1 using JSON data passed into stdin.
cat frontend-v2.json | kubectl rolling-update frontend-v1 -f -
Update the pods of frontend-v1 to frontend-v2 by just changing the
image, and switching the name of the replication controller.
kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2
Update the pods of frontend by just changing the image, and keeping
the old name.
kubectl rolling-update frontend --image=image:v2
Abort and reverse an existing rollout in progress (from frontend-v1 to
frontend-v2).
kubectl rolling-update frontend-v1 frontend-v2 --rollback
There are alternatives to the ReplicationController (link3)
ReplicaSet (it still does not support updating the Pod's image)
ReplicaSet is the next-generation ReplicationController that supports
the new set-based label selector. It’s mainly used by Deployment as a
mechanism to orchestrate pod creation, deletion and updates. Note that
we recommend using Deployments instead of directly using Replica Sets,
unless you require custom update orchestration or don’t require
updates at all.
Deployment (Recommended) (It works as an orchestrator for ReplicaSets, so it supports updates by design)
Deployment is a higher-level API object that updates its underlying
Replica Sets and their Pods in a similar fashion as kubectl rolling-update.
Deployments are recommended if you want this rolling
update functionality, because unlike kubectl rolling-update, they are
declarative, server-side, and have additional features.
kubectl run nginx1 --image nginx --replicas=3
kubectl get deployment nginx1 --export -o yaml
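To tie this back to the question: with a Deployment, the image update becomes a single declarative change. A rough sketch mirroring the question's nginx example (kubectl create deployment may need a reasonably recent kubectl):
# Create a Deployment instead of a ReplicationController
kubectl create deployment nginx --image=nginx
kubectl scale deployment nginx --replicas=2
# Changing the image now triggers a rolling update automatically
kubectl set image deployment/nginx nginx=nginx:1.7.1
kubectl rollout status deployment/nginx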

How to restart a failed pod in kubernetes deployment

I have 3 nodes in a Kubernetes cluster. I created a DaemonSet and deployed it to all 3 nodes. The DaemonSet created 3 pods and they were running successfully, but for some reason one of the pods failed.
I need to know how we can restart this pod without affecting the other pods in the DaemonSet, and without creating another DaemonSet deployment.
Thanks
kubectl delete pod <podname> will delete this one pod, and the Deployment/StatefulSet/ReplicaSet/DaemonSet will reschedule a new one in its place.
There are other possibilities to achieve what you want:
Just use the rollout command
kubectl rollout restart deployment mydeploy
You can set some environment variable which will force your deployment pods to restart:
kubectl set env deployment mydeploy DEPLOY_DATE="$(date)"
You can scale your deployment to zero, and then back to some positive value
kubectl scale deployment mydeploy --replicas=0
kubectl scale deployment mydeploy --replicas=1
Just for others reading this...
A better solution (IMHO) is to implement a liveness probe that will force the pod to restart the container if it fails the probe test.
This is a great feature K8s offers out of the box: auto-healing.
Also look into the pod lifecycle docs.
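For reference, a rough sketch of a Deployment with an HTTP liveness probe, applied via kubectl (the name, image and probe settings are illustrative, not taken from the question):
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mydeploy
  template:
    metadata:
      labels:
        app: mydeploy
    spec:
      containers:
      - name: app
        image: nginx
        # The kubelet restarts the container if this probe keeps failing
        livenessProbe:
          httpGet:
            path: /        # point this at a real health endpoint in your app
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 5
EOF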
kubectl -n <namespace> delete pods --field-selector=status.phase=Failed
I think the above command is quite useful when you want to restart one or more failed pods :D
And you don't need to care about the names of the failed pods.

How to kill pods on Kubernetes local setup

I am starting to explore running Docker containers with Kubernetes. I did the following:
Docker run etcd
docker run master
docker run service proxy
kubectl run web --image=nginx
To clean up the state, I first stopped all the containers and cleared the downloaded images. However, I still see pods running.
$ kubectl get pods
NAME                   READY     STATUS    RESTARTS   AGE
web-3476088249-w66jr   1/1       Running   0          16m
How can I remove this?
To delete the pod:
kubectl delete pods web-3476088249-w66jr
If this pod was started via a ReplicaSet, Deployment, or anything else that creates replicas, then find that resource and delete it first.
kubectl get all
This will list all the resources that have been created in your k8s cluster. To get information about the resources created in a specific namespace, run kubectl get all --namespace=<your_namespace>.
To get info about the resource that is controlling this pod, you can do
kubectl describe pod web-3476088249-w66jr
There will be a "Controlled By" field, or some other owner field, which you can use to identify the resource that created it.
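If you prefer to query the owner directly instead of reading the describe output, the ownerReferences metadata carries the same information (pod name taken from the question):
kubectl get pod web-3476088249-w66jr -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'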
When you do kubectl run ..., you create a deployment, not a pod directly. You can check this with kubectl get deploy. If you want to delete the pod, you need to delete the deployment with kubectl delete deploy DEPLOYMENT.
I would recommend creating a namespace for testing when doing this kind of thing. Just do kubectl create ns test, then do all your tests in this namespace (by adding -n test). Once you have finished, do kubectl delete ns test, and you are done.
If you defined your object as a Pod, then
kubectl delete pod <--all | pod name>
will remove all of the generated Pods. But if you wrapped your Pod in a Deployment object, then running the command above will only trigger a re-creation of them.
In that case, you need to run
kubectl delete deployment <--all | deployment name>
Note that any Service pointing at the deleted Deployment is a separate object and is not removed automatically; delete it with kubectl delete service <service name> if you no longer need it.