With GKE Autopilot banning the cluster-autoscaler.kubernetes.io/safe-to-evict=false annotation, is there a way to ensure job pods do not get evicted?

Our GKE Autopilot cluster was recently upgraded to version 1.21.6-gke.1503, which apparently causes the cluster-autoscaler.kubernetes.io/safe-to-evict=false annotation to be banned.
I totally get this for deployments, as Google doesn't want a deployment preventing scale-down, but for jobs I'd argue this annotation makes perfect sense in certain cases. We start complex jobs that start and monitor other jobs themselves, which makes it hard to make them restart-resistant given the sheer number of moving parts.
Is there any way to make it as unlikely as possible for job pods to be restarted/moved around when using Autopilot? Prior to switching to Autopilot, we used to make sure our jobs filled a single node by requesting all of its available resources; combined with a Guaranteed QoS class, this made sure the only way for a pod to be evicted was if the node somehow failed, which almost never happened. Now all we seem to have left is the Guaranteed QoS class, but that doesn't prevent pods from being evicted.
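For reference, a rough sketch of the pattern we relied on before Autopilot: a Job whose pod sets requests equal to limits (Guaranteed QoS) and is sized to fill the node. Names and numbers below are purely illustrative.

# Illustrative only: requests == limits on every container gives the pod
# the Guaranteed QoS class; sizing them to the node kept it alone there.
apiVersion: batch/v1
kind: Job
metadata:
  name: orchestrator-job            # hypothetical name
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: orchestrator
          image: example.com/orchestrator:latest   # placeholder image
          resources:
            requests:
              cpu: "7"              # roughly a full node's allocatable CPU
              memory: 26Gi
            limits:
              cpu: "7"              # equal to requests -> Guaranteed QoS
              memory: 26Gi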

At this point the only thing left is to ask for this feature to be brought back on IssueTracker: raise a new feature request and hope for the best.
Link to this thread as well, since it contains quite a lot of troubleshooting and may be useful.

Related

GKE Autoscaling: How do I tell the autoscaler to remove older pods first? (FILO instead of FIFO)

There is a small memory leak in our application. For certain business reasons we do not have the resources to fix this memory leak. Instead, it would be better if our pods were deleted or scaled out after a certain period.
Rather than debugging this memory leak, would it be possible to change the Google Kubernetes Engine autoscaling profile to scale down by removing older pods first instead of newer pods? Essentially, I am looking for a "First In Last Out" method of scaling down pods instead of a "First In First Out" method, which is what GKE currently uses (from my understanding) when autoscaling.
Is this possible? I'm not finding anything about this in the documentation. Thank you!
Scale-down in cluster-autoscaler isn't really either of those. It looks for nodes with low utilization and simulates whether the cluster would still have enough capacity if their pods were evicted. In practice the result is FIFO, or close to it, because newer pods tend to end up on newer nodes, and those have less utilization. But you can use tools like Descheduler to help balance things out a bit.
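If the real goal is to recycle long-lived pods on a schedule (e.g. to paper over the memory leak), Descheduler's PodLifeTime strategy is probably closer to what you want than autoscaler scale-down. A rough sketch, assuming the v1alpha1 policy format and a 24-hour threshold chosen arbitrarily:

# Sketch of a descheduler policy that evicts pods older than 24 hours,
# letting their Deployment recreate them with a fresh memory footprint.
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "PodLifeTime":
    enabled: true
    params:
      podLifeTime:
        maxPodLifeTimeSeconds: 86400   # 24h; tune to your leak rate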

Kubernetes dynamic pod provisioning

I have an app I'm building on Kubernetes which needs to dynamically add and remove worker pods (which can't be known at initial deployment time). These pods are not interchangeable (so increasing the replica count wouldn't make sense). My question is: what is the right way to do this?
One possible solution would be to call the Kubernetes API to dynamically start and stop these worker pods as needed. However, I've heard that this might be a bad way to go since, if those dynamically-created pods are not in a replica set or deployment, then if they die, nothing is around to restart them (I have not yet verified for certain if this is true or not).
Alternatively, I could use the Kubernetes API to dynamically spin up a higher-level abstraction (like a replica set or deployment). Is this a better solution? Or is there some other more preferable alternative?
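To make the second option concrete, I imagine it would mean creating something like the following through the API, once per worker, each with a unique name (everything here is just a placeholder sketch):

# Placeholder sketch: a single-replica Deployment per worker, so the
# controller restarts the pod if it dies. One of these would be created
# via the API for each dynamically added worker.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker-42                   # unique per worker, hypothetical
  labels:
    app: worker
    worker-id: "42"
spec:
  replicas: 1
  selector:
    matchLabels:
      worker-id: "42"
  template:
    metadata:
      labels:
        app: worker
        worker-id: "42"
    spec:
      containers:
        - name: worker
          image: example.com/worker:latest   # placeholder image
          args: ["--worker-id=42"]           # hypothetical per-worker config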
If I understand you correctly, you need ConfigMaps.
From the official documentation:
The ConfigMap API resource stores configuration data as key-value pairs. The data can be consumed in pods or provide the configurations for system components such as controllers. ConfigMap is similar to Secrets, but provides a means of working with strings that don’t contain sensitive information. Users and system components alike can store configuration data in ConfigMap.
Here you can find some examples of how to set it up.
Please try it and let me know if that helped.
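For illustration, a minimal ConfigMap plus a pod that consumes it as environment variables (names and values are placeholders):

# Minimal illustration: a ConfigMap consumed by a pod as environment variables.
apiVersion: v1
kind: ConfigMap
metadata:
  name: worker-config               # placeholder name
data:
  WORKER_MODE: "batch"
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  restartPolicy: Never
  containers:
    - name: worker
      image: busybox
      command: ["sh", "-c", "env | grep -E 'WORKER_MODE|LOG_LEVEL'"]
      envFrom:
        - configMapRef:
            name: worker-config     # pulls every key in as an env var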

GCP Kubernetes spreading pods across nodes instead of filling available resources

I have a few kubefiles defining Kubernetes services and deployments. When I create a cluster of 4 nodes on GCP (never changes), all the small kube-system pods are spread across the nodes instead of filling one at a time. Same with the pods created when I apply my kubefiles.
The problem is sometimes I have plenty of available total CPU for a deployment, but its pods can't be provisioned because no single node has that much free. It's fragmented, and it would obviously fit if the kube-system pods all went into one node instead of being spread out.
I can avoid problems by using bigger/fewer nodes, but I feel like I shouldn't have to do that. I'd also rather not deal with pod affinity settings for such a basic testing setup. Is there a solution to this, maybe a setting to have it prefer filling nodes in order? Like using an already opened carton of milk instead of opening a fresh one each time.
Haven't tested this, but the order I apply files in probably matters, meaning applying the biggest CPU users first could help. But that seems like a hack.
I know there's some discussion on rescheduling that gets complicated because they're dealing with a dynamic node pool, and it seems like they don't have it ready, so I'm guessing there's no way to have it rearrange my pods dynamically.
You can write your own scheduler. Almost all components in k8s are replaceable.
I know you won't. If you don't want to deal with affinity, you def won't write your own scheduler. But know that you have that option.
If you stay with what GCP provides natively, try to make sure all your pods have resource requests and limits set.
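As a minimal sketch of that last point, every container spec in your kubefiles would carry explicit requests and limits; the numbers below are placeholders:

# Sketch: explicit requests/limits per container so the scheduler's
# bin-packing decisions are based on real numbers, not defaults.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                         # placeholder
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.21         # placeholder image
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi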

Why should I store kubernetes deployment configuration into source control if kubernetes already keeps track of it?

One of the documented best practices for Kubernetes is to store the configuration in version control. It is mentioned in the official best practices and also summed up in this Stack Overflow question. The reason is that this is supposed to speed-up rollbacks if necessary.
My question is: why do we need to store this configuration if it is already stored by Kubernetes, and there are ways to easily go back to a previous version of it using, for example, kubectl? An example is a command like:
kubectl rollout history deployment/nginx-deployment
Isn't storing the configuration an unnecessary duplication of a piece of information that we will then have to keep synchronized?
The reason I am asking is that we are building a configuration service on top of Kubernetes. The user will interact with it to configure multiple deployments, and I was wondering whether we should keep a history of the Kubernetes configuration and the contents of ConfigMaps in a database for possible rollbacks, or whether we should just rely on Kubernetes to retrieve the current configuration and roll back to previous versions of it.
You can use Kubernetes as your store of configuration, to your point; it's just that you probably shouldn't want to. By storing configuration as code, you get several benefits:
Configuration changes get regular code reviews.
They get versioned, are diffable, etc.
They can be tested, linted, and whatever else you desire.
They can be refactored, share code, and be documented.
And all this happens before actually being pushed to Kubernetes.
That may seem bad ("but then my configuration is out of date!"), but keep in mind that configuration is actually never in date - just because you told Kubernetes you want 3 replicas running doesn't mean there are, or if there were that 1 isn't temporarily down right now, and so on.
Configuration expresses intent. It takes a different process to actually notice when your intent changes or doesn't match reality, and make it so. For Kubernetes, that storage is etcd and it's up to the master to, in a loop forever, ensure the stored intent matches reality. For you, the storage is source control and whatever process you want, automated or not, can, in a loop forever, ensure your code eventually becomes reflected in Kubernetes.
The rollback command, then, is just a very fast shortcut to "please do this right now!". It's for when your configuration intent was wrong and you don't have time to fix it. As soon as you roll back, you should chase your configuration and update it there as well. In a sense, this is indeed duplication, but it's a rare event compared to the normal flow, and the overall benefits outweigh this downside.
A Kubernetes cluster doesn't store your configuration, it runs it, just as your server runs your application code.

Auto scale kubernetes pods based on downstream api results

I have seen HPA can be scaled based on CPU usage. That is super cool.
However, the scenario I have is that the number of pods for the stateful app (one container per pod) maps one-to-one to the results of a downstream API. For example, the downstream API returns the maximum and expected capacity, like {response: 10}. I would like a ReplicaSet, StatefulSet, or some other Kubernetes controller to obtain this value and auto-scale the pods to 10. Unfortunately, the pod replica count is hardcoded in the YAML file.
If I were doing it manually, I think I could do it by running a scheduler whose job is to watch the API and run the kubectl scale command based on the downstream API results. But this is error prone, and it is another system I need to maintain. I guess this logic should belong in a Kubernetes controller?
May I ask whether someone has done this before, and what is the way to configure it?
Thanks in advance.
Unfortunately, it is not possible to use an HPA in that mode, but your idea of how to scale is right.
HPA is designed to analyze metrics and decide how many pods are needed based on them. It applies its scaling rules and adjusts the replica count step by step based on the result of that decision.
Moreover, it uses the standard Kubernetes API to scale pods.
Because the scaling logic is effectively already in your application, you can use the same API to scale your pods. By the way, kubectl scale interacts with the cluster in the same way.
So you can use, for example, a CronJob with a small application that calls your application's API every 5 minutes and then runs kubectl scale with the proper deployment name to scale your app.
But please keep in mind that you need to somehow control the frequency of scaling pods up and down, as that will make your application more stable. That's why I think scaling no more often than once every 5 minutes is OK, while trying to do it every minute is generally not the best idea.
And of course, you could create a daemon and run it as a Deployment, but I think the CronJob solution is easier and faster to implement.
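A rough sketch of that CronJob approach is below. The image, the API URL, and the deployment name are assumptions, and the pod's service account would need RBAC permission to scale the deployment:

# Sketch: every 5 minutes, read the desired replica count from the
# downstream API and scale the deployment to match.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: capacity-scaler             # hypothetical name
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: capacity-scaler   # needs RBAC to scale deployments
          restartPolicy: OnFailure
          containers:
            - name: scaler
              image: example.com/kubectl-curl:latest   # any image with kubectl, curl and jq
              command:
                - /bin/sh
                - -c
                - |
                  REPLICAS=$(curl -s http://downstream-api/capacity | jq -r '.response')
                  kubectl scale deployment/my-stateful-app --replicas="$REPLICAS"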