Redeployment in Kubernetes does not scale down the old ReplicaSet

When we redeploy a deployment that is already running, the Deployment does not perform a rolling update on the ReplicaSets, which means the old and new ReplicaSets keep running side by side.
I have tried setting revisionHistoryLimit to 1, but it does not help either. Every time, I end up deleting or scaling down the old ReplicaSet with
kubectl delete replicaset/<old_replicaset_name> -n <namespace>
or
kubectl scale replicaset/<old_replicaset_name> --replicas=0
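For reference, revisionHistoryLimit only controls how many old, scaled-down ReplicaSets are retained for rollback; it sits directly under the Deployment spec. A minimal sketch of where it goes (names and image are illustrative, not from the question):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # illustrative name
spec:
  replicas: 2
  revisionHistoryLimit: 1      # keep only one old (scaled-down) ReplicaSet
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.25      # illustrative image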

Related

How to delete pod created with rolling restart?

I ran kubectl rollout restart deployment.
It created a new pod which is now stuck in Pending state because there are not enough resources to schedule it.
I can't increase the resources.
How do I delete the new pod?
Please check whether that pod is managed by a Deployment controller (which would keep recreating it):
kubectl get deployments
Then try to delete the Deployment with
kubectl delete deployment DEPLOYMENT_NAME
Also, I would suggest checking resource allocation and usage on your GKE nodes with the following command:
kubectl describe nodes | grep -A10 "Allocated resources"
If you need more resources, try enabling the GKE cluster autoscaler (CA), or, if it is already enabled, increase its maximum number of nodes. You can also add a new node manually by resizing the node pool you are using.
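If you go the manual route, the gcloud commands look roughly like this; the cluster name, node pool name, node counts, and any required --zone/--region flag are placeholders you would adapt to your setup:
gcloud container clusters resize CLUSTER_NAME --node-pool NODE_POOL_NAME --num-nodes 4
gcloud container clusters update CLUSTER_NAME --node-pool NODE_POOL_NAME --enable-autoscaling --min-nodes 1 --max-nodes 5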

Correct way to scale/restart an application down/up in kubernetes (replicaset, deployments and pod deletion)?

I usually restart my applications by:
kubectl scale deployment my-app --replicas=0
Followed by:
kubectl scale deployment my-app --replicas=1
which works fine. I also have another running application but when I look at its replicaset I see:
$ kubectl get rs
NAME          DESIRED   CURRENT   READY   AGE
another-app   2         2         2       2d
So to restart that correctly I would of course need to:
kubectl scale deployment another-app --replicas=0
kubectl scale deployment another-app --replicas=2
But is there a better way to do this so I don't have to manually look at the replicasets before scaling/restarting my application (which might have replicas > 1)?
You can restart pods by using a label selector:
kubectl delete pods -l name=myLabel
You can do a rolling restart of all pods for a deployment, so that you don't take the service down:
kubectl patch deployment your_deployment_name -p \
"{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
And since Kubernetes version 1.15 you can run
kubectl rollout restart deployment your_deployment_name
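To verify that the restart finished and the old ReplicaSet was scaled down, something like the following should do (the deployment name is a placeholder):
kubectl rollout status deployment your_deployment_name
kubectl get rs   # the previous ReplicaSet should now show DESIRED 0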
To make changes in your current deployment you can use kubectl rollout pause deployment/YOUR_DEPLOYMENT. The deployment will then be marked as paused and won't be reconciled by the controller. While it is paused you can make the necessary changes to your configuration and then resume it with kubectl rollout resume deployment/YOUR_DEPLOYMENT. This will create a new ReplicaSet with the updated configuration (a short end-to-end sketch follows below).
Pods with the new configuration will be started, and once they are running, the pods with the old configuration will be terminated.
Using this method you will also be able to roll the deployment back to a previous version. Use:
kubectl rollout history deployment/YOUR_DEPLOYMENT
to check the rollout history, and then execute the following command to roll back:
kubectl rollout undo deployment/YOUR_DEPLOYMENT --to-revision=REVISION_NO
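Put together, a minimal end-to-end sketch of the pause/resume flow might look like this; the deployment, container and image names are placeholders:
kubectl rollout pause deployment/YOUR_DEPLOYMENT
kubectl set image deployment/YOUR_DEPLOYMENT YOUR_CONTAINER=YOUR_IMAGE:v2   # or kubectl edit / kubectl apply -f
kubectl rollout resume deployment/YOUR_DEPLOYMENT
kubectl rollout status deployment/YOUR_DEPLOYMENT
kubectl rollout history deployment/YOUR_DEPLOYMENT
kubectl rollout undo deployment/YOUR_DEPLOYMENT --to-revision=REVISION_NO   # only if you need to roll back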

How to update max replicas in a running HPA

I'm looking to manually update the maximum number of replicas my Horizontal Pod Autoscaler allows, using the kubectl autoscale command.
However, each time I run the command it creates a new HPA that fails to launch the pod, and I don't know why at all :(
Do you have an idea how I can update my HPA manually with kubectl?
https://gist.github.com/zyriuse75/e75a75dc447eeef9e8530f974b19c28a
I think you are mixing two topics here. One is manually scaling a pod, which you can do through a Deployment by applying kubectl scale deploy {mydeploy} --replicas={#repl}. On the other hand you have the HPA (Horizontal Pod Autoscaler); to use it you need an app metrics provider configured in the cluster,
e.g.:
metrics server
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metrics-server
heapster (deprecated) https://github.com/kubernetes-retired/heapster
Then you can create an HPA to handle your autoscaling; you can get more info at https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
Once created, you can patch your HPA, or delete it and create it again:
kubectl delete hpa hpa-pod -n ns-svc-cas
kubectl autoscale deployment <your_deployment> --min={#number} --max={#number} -n ns-svc-cas
This is usually the easiest way.
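Alternatively, if you only need to change the maximum, a patch along these lines should work (assuming the HPA is named hpa-pod as above; 10 is an arbitrary example value):
kubectl patch hpa hpa-pod -n ns-svc-cas --patch '{"spec":{"maxReplicas":10}}'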

Not able to update the pod images of a ReplicationController in K8S

I created a ReplicationController using the below command.
kubectl run nginx --image=nginx -r=2 --generator=run/v1
Now I tried upgrading the image to version 1.7.1.
kubectl set image rc/nginx nginx=nginx:1.7.1
But, the image doesn't seem to update.
watch -n1 'kubectl describe pods | grep "Image:"'
Also tried kubectl edit .... and the kubectl apply -f .... command, but the image is not getting updated.
How do I update an image in K8S ReplicationController?
The documentation describes how to do a rolling upgrade on ReplicationControllers: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/#rolling-updates
You need to know that your image actually is updated in the ReplicationController, but the ReplicationController will not kill the existing pods and spawn new ones with the new image. To achieve that you need to do one of two things:
Manually kill the pods
Scale your RC to 0 to kill the pods and then back to the desired number of replicas, using the following command: kubectl scale --replicas=3 rc/nginx
A ReplicationController is only able to scale the number of replicas of a given pod and can't do any updates by itself.
There is a way to "update" your ReplicationController using kubectl rolling-update, but it does not literally update it in place.
This is what happens when you run kubectl rolling-update (link1):
Creating a new replication controller with the updated configuration.
Increasing/decreasing the replica count on the new and old controllers until the correct number of replicas is reached.
Deleting the original replication controller.
Rolling updates are initiated with the kubectl rolling-update command:
$ kubectl rolling-update NAME \
([NEW_NAME] --image=IMAGE | -f FILE)
Assume that we have a current replication controller named foo and it is running image image:v1 (link2)
kubectl rolling-update foo [foo-v2] --image=myimage:v2
If the user doesn't specify a name for the 'next' replication controller, then the 'next' replication controller is renamed to the name of the original replication controller.
Here are some more examples from the kubectl reference:
Update pods of frontend-v1 using new replication controller data in frontend-v2.json.
kubectl rolling-update frontend-v1 -f frontend-v2.json
Update pods of frontend-v1 using JSON data passed into stdin.
cat frontend-v2.json | kubectl rolling-update frontend-v1 -f -
Update the pods of frontend-v1 to frontend-v2 by just changing the image, and switching the name of the replication controller.
kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2
Update the pods of frontend by just changing the image, and keeping the old name.
kubectl rolling-update frontend --image=image:v2
Abort and reverse an existing rollout in progress (from frontend-v1 to frontend-v2).
kubectl rolling-update frontend-v1 frontend-v2 --rollback
There are alternatives to the ReplicationController (link3)
ReplicaSet (it still does not support updating a Pod's image)
ReplicaSet is the next-generation ReplicationController that supports the new set-based label selector. It's mainly used by Deployment as a mechanism to orchestrate pod creation, deletion and updates. Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don't require updates at all.
Deployment (Recommended) (It works as an orchestrator for ReplicaSets, so it supports updates by design)
Deployment is a higher-level API object that updates its underlying Replica Sets and their Pods in a similar fashion as kubectl rolling-update. Deployments are recommended if you want this rolling update functionality, because unlike kubectl rolling-update, they are declarative, server-side, and have additional features.
kubectl run nginx1 --image nginx --replicas=3
kubectl get deployment nginx1 --export -o yaml
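For comparison, a rough sketch of the same image update done through a Deployment; the names are illustrative, and kubectl create deployment names the container after the image by default:
kubectl create deployment nginx1 --image=nginx
kubectl set image deployment/nginx1 nginx=nginx:1.7.1
kubectl rollout status deployment/nginx1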

How to restart a failed pod in kubernetes deployment

I have 3 nodes in a Kubernetes cluster. I created a DaemonSet and deployed it on all 3 nodes. The DaemonSet created 3 pods and they were running successfully. But for some reason, one of the pods failed.
How can we restart this pod without affecting the other pods in the DaemonSet, and without creating another DaemonSet deployment?
Thanks
kubectl delete pod <podname> will delete this one pod, and the Deployment/StatefulSet/ReplicaSet/DaemonSet will reschedule a new one in its place.
There are other possibilities to achieve what you want:
Just use the rollout command
kubectl rollout restart deployment mydeploy
You can set an environment variable, which will force your deployment pods to restart:
kubectl set env deployment mydeploy DEPLOY_DATE="$(date)"
You can scale your deployment to zero, and then back to some positive value
kubectl scale deployment mydeploy --replicas=0
kubectl scale deployment mydeploy --replicas=1
Just for others reading this...
A better solution (IMHO) is to implement a liveness probe that will force the pod to restart the container if it fails the probe test.
This is a great feature K8s offers out of the box: auto-healing.
Also look into the pod lifecycle docs.
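A minimal liveness probe sketch, assuming your container exposes an HTTP health endpoint at /healthz on port 8080 (path, port and names are placeholders):
containers:
- name: my-app
  image: my-app:1.0            # illustrative
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 15
    failureThreshold: 3        # restart the container after 3 consecutive failed probes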
kubectl -n <namespace> delete pods --field-selector=status.phase=Failed
I think the above command is quite useful when you want to restart one or more failed pods :D
And you don't need to care about the names of the failed pods.