Apply changed limits in Kubernetes?

I changed the limits (the default requested amount of CPU) on my Kubernetes cluster. Of course, the new limits don't affect already running Pods. So how can I apply the new (lower) limits to already running Pods?
Is there any way to update the limits in the running Pods without restarting them?
If I have to restart the Pods, how can this be done without deleting and recreating them? (I am really using plain Pods, no Deployments or the like.)

You need to restart the Pods:
You can't update the resources field of a running Pod. The update would be rejected.
You need to create new Pods and delete the old ones. If it helps you avoid downtime, you can create the new ones first and delete the old ones once the new ones are running.
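For plain Pods, a minimal sketch of that workflow could look like the following (the pod names and the my-pod-new.yaml manifest are assumptions, not taken from the question):

# Create a replacement Pod (with the new limits) under a new name
kubectl apply -f my-pod-new.yaml
# Wait until the replacement is Ready
kubectl wait --for=condition=Ready pod/my-pod-new --timeout=120s
# Then remove the old Pod
kubectl delete pod my-pod-old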

You can do that only when you run it as a Deployment, or at least run a Pod with a restartPolicy of Always, so that you can always scale down to zero and back up to one for a safe restart. In your case, given that you just ran your Pod with kubectl run without any restartPolicy (or with a restartPolicy of Never), I would run another Pod, test it, and then kill the one that is already running.

You can't change the resource properties of a running Pod; the API server would reject the changes. Instead, you can create a Deployment whose rolling update feature ensures that one Pod keeps running while the limits are updated, so there is no downtime for your workload.
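As an illustration (the names, image, and values below are assumptions, not from the answer above), a Deployment like this lets you change the limits in the Pod template and roll them out gradually:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.25     # placeholder image
        resources:
          requests:
            cpu: 100m         # example values; set these to your new (lower) limits
          limits:
            cpu: 250m

Editing the resources block and re-applying the manifest triggers a rolling update instead of an in-place change.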

Related

Is it possible to kill previous replicaset without waiting for new replicaset to be rolled out?

Context
Say we have d.yaml in which a deployment, whose strategy is RollingUpdate, is defined.
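For concreteness, d.yaml might look something like this (a sketch; the name, image, and maxSurge/maxUnavailable values are assumptions, not taken from the question):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment         # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-deployment
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1             # extra Pods allowed above the desired count during a rollout
      maxUnavailable: 0       # old Pods are removed only once new ones are Ready
  template:
    metadata:
      labels:
        app: my-deployment
    spec:
      containers:
      - name: app
        image: nginx:1.25     # placeholder image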
We first create a deployment:
kubectl apply -f d.yaml
After some time, we modify d.yaml and re-apply it to update the deployment.
vi d.yaml
kubectl apply -f d.yaml
This starts rolling out a new replicaset R_new.
Normally, the old (previous) replicaset R_old is killed only after R_new has successfully been rolled out.
Question (tl;dr)
Is it possible to kill R_old without waiting for rolling out R_new to complete?
By "kill", I mean completely stopping a replicaset; it should never restart. (So kubectl delete replicaset didn't help.)
Question (long)
In my specific situation, my containers connect to an external database. This single database is also connected from many containers managed by other teams.
If the maximum number of connections allowed is already reached, new containers associated with R_new fail to start (i.e. CrashLoopBackOff).
If I could forcefully kill R_old, the number of connections would be lowered by N where N is the number of replicas, and thus R_new's containers would successfully start.
FAQ:
Q. Why not temporarily stop using RollingUpdate strategy?
A. Actually I have no permission to edit d.yaml. It is edited by CI/CD.
Q. Why not just make the maximum number of connections larger?
A. I have no permission for the database either...
1. Make the changes to the deployment.
2. Scale the deployment down to replicas=0, which is as good as stopping the old replicaset.
3. Scale the deployment back up to the desired number of replicas; a new replicaset will be created with the new configuration changes from the deployment.
Steps 1 and 2 can be interchanged, depending on the requirement; a sketch of the commands follows.
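A minimal sketch with kubectl (the deployment name and replica count are placeholders):

kubectl scale deployment my-deployment --replicas=0
# apply the configuration change to the deployment here (or before scaling down), then:
kubectl scale deployment my-deployment --replicas=3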
Technically, you can delete the old replicaset by running kubectl delete replicaset R_old and this would terminate the old pod. I just verified it in my cluster (kubernetes version 1.21.8).
However, terminating a pod doesn't necessarily mean it is killed immediately.
The actual termination of the pod depends on whether or not a preStop lifecycle hook is defined, and on the value of terminationGracePeriodSeconds (see the description by typing kubectl explain pod.spec.terminationGracePeriodSeconds).
By default, the kubelet deletes the pod only after all of the processes in its containers have stopped or the grace period has expired.
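For reference, this is where those two knobs live in a Pod spec (a sketch with assumed names and values, not taken from the answer):

apiVersion: v1
kind: Pod
metadata:
  name: my-pod                       # hypothetical name
spec:
  terminationGracePeriodSeconds: 30  # how long the kubelet waits before force-killing; 30s is the default
  containers:
  - name: app
    image: nginx:1.25                # placeholder image
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 5"]  # example hook; runs before the container receives SIGTERM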

How to delete a pod from Kubernetes master node?

Does anyone know how to delete a pod from the Kubernetes master node? I have this one master node on a bare-metal Ubuntu server. When I try to delete it with "kubectl delete pod .." or force-delete it as described here: https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/ it doesn't work; the pod is created again and again...
The Pods in a StatefulSet are managed by the StatefulSet controller and will be recreated if the current and the desired replicas defined in the spec do not match.
The document you linked provides instructions on how to kill the pods forcefully, bypassing the graceful shutdown behaviour, which can have unexpected consequences depending on the application.
The link clearly states that the pods will be recreated, in this section:
Force deletions do not wait for confirmation from the kubelet that the Pod has been terminated. Irrespective of whether a force deletion is successful in killing a Pod, it will immediately free up the name from the apiserver. This would let the StatefulSet controller create a replacement Pod with that same identity; this can lead to the duplication of a still-running Pod, and if said Pod can still communicate with the other members of the StatefulSet, will violate the at most one semantics that StatefulSet is designed to guarantee.
If you want the pods to be stopped and no new pods to be created for the StatefulSet, you need to scale the StatefulSet down by setting replicas to 0.
You can read the official docs on how to scale the StatefulSet replicas.
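A minimal sketch (the StatefulSet name and replica count are placeholders):

kubectl scale statefulset my-statefulset --replicas=0
# later, to bring it back:
kubectl scale statefulset my-statefulset --replicas=3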
The key to figuring out how to kill the pod is to understand how it was created. For example, if the pod is part of a deployment with a declared replica count of 1, then once you kill or force-kill it, Kubernetes detects a mismatch between the desired state (the number of replicas defined in the deployment configuration) and the current state, and creates a new pod to replace the one that was deleted. Therefore, in this example you will need to either scale the deployment to 0 or delete the deployment.
If you need to kill a pod, you can just scale down the deployment that owns it:
kubectl scale deploy <deployment_name> --replicas=<expected_no_of_replicas>
The way of deleting pods depends on how they were created. If you created a pod individually (not as part of a ReplicaSet, ReplicationController, or Deployment), then you can delete the pod directly; otherwise the only option is to scale the owning resource down. In a production setup, I believe everyone uses the Deployment option out of ReplicaSet/ReplicationController/Deployment (please refer to the documents and understand the difference between those three options).
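If you are unsure how a pod was created, one way to check is to look at its ownerReferences (a sketch; <pod-name> is a placeholder):

# Prints the kind and name of whatever owns the Pod; empty output means it was created individually
kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'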

Kubernetes rolling deploy: terminate a pod only when there are no containers running

I am trying to deploy updates to pods. However I want the current pods to terminate only when all the containers inside the pod have terminated and their process is complete.
The new pods can keep waiting to start until all containers in the old pods have completed. We have a mechanism to stop old pods from picking up new tasks, so they should eventually terminate.
It's okay if twice the number of pods exist at some instant in time. I tried finding a solution for this in the Kubernetes docs but wasn't successful. Pointers on how / whether this is possible would be helpful.
Well, I guess you may have to create a duplicate deployment with the new image as required, and change the selector in the Service to point to the new deployment. This prevents external traffic from reaching the pre-existing pods, while new calls go to the new pods. Then later you can check something like
kubectl top pods --containers
and if the load appears to be static and low, you can preferably delete the deployment behind the old pods.
But with this approach the Service selectors have to be updated every time, and to keep track of things you can append the git commit hash to the selector labels to keep them unique each time.
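For illustration only (the names my-app, track, and the commit-hash value are hypothetical), re-pointing the Service at the new deployment's pods would look something like this:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    track: 3f2a91c      # switch this label (e.g. the git commit hash) to match the new deployment's pod labels
  ports:
  - port: 80
    targetPort: 8080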
But rolling back to a previous version from inside the Kubernetes cluster will be difficult, so preferably you can trigger the desired build again.
I hope this makes some sense!

Duplicate pods / Pods being created without an existing deploy

I'm running into an issue managing my Kubernetes pods.
I had a deploy instance which I removed and created a new one. The pod tied to that deploy instance shut down as expected and a new one came up when I created a new deploy, as expected.
However, once I changed the deploy, a second pod began running. I tried "kubectl delete pod pod-id", but it would just recreate itself again.
I went through the same process again and now I'm stuck with 3 pods and no deploy. I removed the deploy completely, and when I try to delete the pods they keep recreating themselves. This is an issue because it is exhausting the resources available on my Kubernetes cluster.
Does anyone know how to force remove these pods? I do not know how they are recreating themselves if there's no deploy to go by.
The root cause could be an existing deployment, replicaset, daemonset, statefulset, or a static pod. Check whether any of these exist in the affected namespace using kubectl get <RESOURCE-TYPE>.
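For example (a sketch; <namespace> is a placeholder):

kubectl get deployments,replicasets,daemonsets,statefulsets -n <namespace>
# Static pods are managed directly by the kubelet from manifests on the node, typically under /etc/kubernetes/manifests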
I've had this happen after issuing a rollout restart deployment while a pod was already in an error or creating state, and explicitly deleting the second pod only resulted in a new one getting scheduled (trick birthday candle situation).
I find that almost any time I have an issue like this, it can be fixed by simply zeroing out replicas in the deployment, applying, then restoring replicas to the original value.

Why does scaling down a deployment seem to always remove the newest pods?

(Before I start, I'm using minikube v27 on Windows 10.)
I have created a deployment with the nginx 'hello world' container with a desired count of 2.
I actually went into the '2 hours' old pod and edited the index.html file, changing the welcome message to "broken" - I want to play with k8s to see what it would look like if one pod was 'faulty'.
If I scale this deployment up to more instances and then scale down again, I almost expected k8s to remove the oldest pods, but it consistently removes the newest.
How do I make it remove the oldest pods first?
(Ideally, I'd like to be able to just say "redeploy everything as the exact same version/image/desired count in a rolling deployment" if that is possible)
Pod deletion preference is based on an ordered series of checks, defined in code here:
https://github.com/kubernetes/kubernetes/blob/release-1.11/pkg/controller/controller_utils.go#L737
Summarizing, precedence for deletion is given to pods:
that are unassigned to a node, vs assigned to a node
that are in pending or not running state, vs running
that are in not-ready, vs ready
that have been in ready state for fewer seconds
that have higher restart counts
that have newer vs older creation times
These checks are not directly configurable.
Given the rules, if you can make an old pod to be not ready, or cause an old pod to restart, it will be removed at scale down time before a newer pod that is ready and has not restarted.
There is discussion around use cases for the ability to control deletion priority, which mostly involve workloads that are a mix of job and service, here:
https://github.com/kubernetes/kubernetes/issues/45509
What about this:
kubectl scale deployment ingress-nginx-controller --replicas=2
Wait until 2 replicas are up.
kubectl delete pod ingress-nginx-controller-oldest-replica
kubectl scale deployment ingress-nginx-controller --replicas=1
I experienced zero downtime doing this while removing the oldest pod. One way to find the oldest pod is sketched below.
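A sketch of identifying the oldest pod before the delete step (assuming the controller pods carry the conventional app.kubernetes.io/name=ingress-nginx label; adjust the selector to your labels):

kubectl get pods -l app.kubernetes.io/name=ingress-nginx --sort-by=.metadata.creationTimestamp
# The first pod listed is the oldest one; that is the one to delete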