Scheduled scaling for Pods in Kubernetes

I have a scaled deployment whose load changes predictably over time. How can I prepare my deployment for that load in advance (for example, I want to double the number of pods every evening from 16:00 to 23:00)? Does Kubernetes provide such a tool?
I know Kubernetes pods can be scaled with the Horizontal Pod Autoscaler, which scales the number of pods based on CPU utilisation or a custom metric, but that is a reactive approach; I'm looking for a proactive one.

A quick google search would direct you here: https://github.com/kubernetes/kubernetes/issues/49931
In essence, the best solution as of now is either to run a sidecar container next to your pod's main container, which uses the Kubernetes API to scale the deployment up during a given time window via a simple script, or to use a controller/CRD that reacts to time-based events (e.g. "it is 6 pm"), something like this one:
https://github.com/amelbakry/kube-schedule-scaler
which watches annotations with cron-like specs on deployments and reacts accordingly.
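For illustration, with kube-schedule-scaler the schedule lives in an annotation on the Deployment itself. A minimal sketch (the deployment name and image are placeholders, and the annotation key and JSON format follow the project's README, so verify them against the version you install):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                        # placeholder name
  annotations:
    # Scale to 4 replicas at 16:00 and back down to 2 at 23:00 every day.
    zalando.org/schedule-actions: '[{"schedule": "0 16 * * *", "replicas": "4"}, {"schedule": "0 23 * * *", "replicas": "2"}]'
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest        # placeholder image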

If you are looking for a more advanced autoscaler, you can give KEDA (keda.sh) a try. It supports cron-based scale up and down.
It also supports other event-driven autoscaling; for example, I have scaled based on a consumer group's lag for a particular Apache Kafka topic.
There are multiple event sources supported; check them out here.
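As a sketch of the cron-based scaling (the deployment name and timezone are placeholders; check keda.sh for the current ScaledObject schema), a cron trigger that doubles a baseline of 2 replicas between 16:00 and 23:00 could look roughly like this:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-cron                 # placeholder name
spec:
  scaleTargetRef:
    name: my-app                    # the Deployment to scale
  minReplicaCount: 2                # baseline outside the window
  maxReplicaCount: 4
  triggers:
    - type: cron
      metadata:
        timezone: Europe/London     # adjust to your timezone
        start: 0 16 * * *           # every day at 16:00
        end: 0 23 * * *             # every day at 23:00
        desiredReplicas: "4"        # double the baseline during the window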

The Horizontal Pod Autoscaler in Kubernetes is not a reactive approach; in fact it is a proactive scaling approach. Let me explain its algorithm using its default settings:
The cooldown period is 5 minutes.
Resource utilization is sampled every 15 seconds.
This means the system records resource utilization (for whichever metrics the user configures, e.g. CPU, storage, etc.) every 15 seconds.
After every 5-minute cooldown window (with no scaling actions), the controller calculates the resource utilization over the past 5 minutes (using the 15-second samples collected above). It then estimates the number of resources (i.e. the number of replicas) required for the next 5-minute window with the equation:
desiredReplicas = ceil[currentReplicas * (currentMetricValue / desiredMetricValue)]
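For example (with illustrative numbers): 3 current replicas, a measured CPU utilization of 90% and a target of 60% give desiredReplicas = ceil[3 * (90 / 60)] = ceil[4.5] = 5.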
Other proactive autoscalers work in a similar manner. The difference is that they may apply other techniques (queueing theory, machine learning, or time-series models) to estimate desiredReplicas instead of the equation above.

Related

Set cpu requests in K8s for fluctuating load

I have a service deployed in Kubernetes and I am trying to optimize the requested cpu resources.
For now, I have deployed 10 instances and set spec.containers[].resources.limits.cpu to 0.1, based on the "average" use. However, it became obvious that this average is rather useless in practice: under constant load, CPU usage increases significantly (to 0.3-0.4 as far as I can tell).
The consequence, when multiple instances are deployed on the same node, is that the node becomes heavily overloaded; pods are no longer responsive, get killed and restarted, etc.
What is the best practice to find a good value? My current best guess is to increase the requested cpu to 0.3 or 0.4; I'm looking at Grafana visualizations and see that the pods on the heavily loaded node(s) converge there under continuous load.
However, how can I know whether they would use more CPU if they could, given that they become unresponsive once the node is overloaded?
I'm actually trying to understand how to approach this in general. I would expect an "ideal" service (presuming it is CPU-focused) to use close to 0.0 when there is no load, and close to 1.0 when requests are constantly coming in. With that assumption, should I set the cpu.requests to 1.0, taking a perspective where actual constant usage is assumed?
I have read some Kubernetes best practice guides, but none of them seem to address how to set the actual value for cpu requests in practice in more depth than "find an average".
Basically come up with a number that is your lower acceptable bound for how much the process runs. Setting a request of 100m means that you are okay with a lower limit of your process running 0.1 seconds for every 1 second of wall time (roughly). Normally that should be some kind of average utilization, usually something like a P99 or P95 value over several days or weeks. Personally I usually look at a chart of P99, P80, and P50 (median) over 30 days and use that to decide on a value.
Limits are a different beast, they are setting your CPU timeslice quota. This subsystem in Linux has some persistent bugs so unless you've specifically vetted your kernel as correct, I don't recommend using it for anything but the most hostile of programs.
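As a sketch of how this ends up looking in the pod template's containers section (names and values are illustrative; the request would come from your observed P95/P99 usage):

containers:
  - name: my-service                # placeholder container name
    image: my-service:latest        # placeholder image
    resources:
      requests:
        cpu: 300m                   # e.g. observed P95 CPU usage over 30 days
        memory: 256Mi
      # no cpu limit, per the advice above; memory limits are a separate topic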
In a nutshell: the main goal is to understand how much traffic a pod can handle and how much resource it consumes to do so.
CPU limits are hard to understand and can be harmful; you might want to avoid them, see the static policy documentation and the relevant GitHub issue.
To dimension your CPU requests you will first want to understand how much a pod can consume during high load. In order to do this you can (see the sketch after this list):
disable all kind of autoscaling (HPA, vertical pod autoscaler, ...)
set the number of replicas to one
lift the CPU limits
request the highest amount of CPU you can on a node (3.2 usually on 4cpu nodes)
send as much traffic as you can to the application (you can run simple load-test scenarios with Locust, for example)
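A sketch of such a temporary measurement Deployment (all names are placeholders): one replica, a large CPU request to reserve most of a 4-CPU node, and no CPU limit, so the pod can show its true maximum consumption under load:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-loadtest             # placeholder name
spec:
  replicas: 1                       # a single replica, as described above
  selector:
    matchLabels:
      app: my-app-loadtest
  template:
    metadata:
      labels:
        app: my-app-loadtest
    spec:
      containers:
        - name: my-app
          image: my-app:latest      # placeholder image
          resources:
            requests:
              cpu: "3.2"            # most of a 4-CPU node
            # no cpu limit while measuring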
You will eventually end up with a ratio of clients-or-requests-per-second to CPU consumed. You can assume the relation is linear (this might not be true if your workload complexity is O(n^2) in the number of connected clients, but that is not the nominal case).
You can then choose the pod resource requests based on the ratio you measured. For example if you consume 1.2 cpu for 1000 requests per second you know that you can give each pod 1 cpu and it will handle up to 800 requests per second.
Once you know how much a pod consumes under its maximal load, you can start setting up CPU-based autoscaling; 70% utilization is a good first target, which can be refined if you encounter issues like latency or pods not autoscaling fast enough. This will keep your nodes from running out of CPU if the load increases.
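A minimal sketch of that CPU-based autoscaling, assuming a Deployment named my-app (the name and replica bounds are placeholders):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70    # 70% of the CPU request, as suggested above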
There are a few gotchas; for example, single-threaded applications cannot consume more than one CPU. So if you give such a pod 1.5 CPU it will run out of CPU, but you won't see that from the metrics, because they suggest it could still consume another 0.5 CPU.

How Databricks does autoscaling for a cluster

I have a Databricks cluster set up with autoscaling up to 12 nodes.
I have often observed Databricks scaling the cluster from 6 to 8, then 8 to 11, and then 11 to 14 nodes.
So my queries -
1. Why is it adding 2-3 nodes at one go?
2. Why is autoscaling triggered when not many jobs are active and there is no heavy processing on the cluster? CPU usage is pretty low.
3. While autoscaling, why does it leave the notebook in a waiting state?
4. Why does it take up to 8-10 minutes to autoscale?
Thanks
I am trying to investigate why Databricks is autoscaling the cluster when it's not needed.
When you create a cluster, you can either provide a fixed number of workers for the cluster or provide a minimum and maximum number of workers for the cluster.
When you provide a fixed size cluster, Databricks ensures that your cluster has the specified number of workers. When you provide a range for the number of workers, Databricks chooses the appropriate number of workers required to run your job. This is referred to as autoscaling.
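For illustration, in the cluster definition this is the difference between a fixed num_workers value and an autoscale block with min_workers/max_workers (shown here as YAML for readability; the Clusters API itself takes JSON, and the rest of the cluster settings are omitted):

# Fixed-size cluster: Databricks keeps exactly this many workers.
# num_workers: 8

# Autoscaling cluster: Databricks picks a worker count within this range.
autoscale:
  min_workers: 2
  max_workers: 12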
With autoscaling, Databricks dynamically reallocates workers to account for the characteristics of your job. Certain parts of your pipeline may be more computationally demanding than others, and Databricks automatically adds additional workers during these phases of your job (and removes them when they’re no longer needed).
Autoscaling makes it easier to achieve high cluster utilization, because you don’t need to provision the cluster to match a workload. This applies especially to workloads whose requirements change over time (like exploring a dataset during the course of a day), but it can also apply to a one-time shorter workload whose provisioning requirements are unknown. Autoscaling thus offers two advantages:
Workloads can run faster compared to a constant-sized under-provisioned cluster.
Autoscaling clusters can reduce overall costs compared to a statically-sized cluster.
Databricks offers two types of cluster node autoscaling: standard and optimized.
How autoscaling behaves
Autoscaling behaves differently depending on whether it is optimized or standard and whether applied to an interactive or a job cluster.
Optimized
Scales up from min to max in 2 steps.
Can scale down even if the cluster is not idle by looking at shuffle file state.
Scales down based on a percentage of current nodes.
On job clusters, scales down if the cluster is underutilized over the last 40 seconds.
On interactive clusters, scales down if the cluster is underutilized over the last 150 seconds.
Standard
Starts with adding 4 nodes. Thereafter, scales up exponentially, but can take many steps to reach the max.
Scales down only when the cluster is completely idle and it has been underutilized for the last 10 minutes.
Scales down exponentially, starting with 1 node.

How does Kubernetes HPA with 2 or more metrics behave, especially the calculation of the number of replicas?

We have configured the HPA to use 2 metrics:
CPU Utilization
App specific custom metrics
When testing, we observed the scaling happening, but the calculation of the number of replicas is not very clear. I am not able to locate any documentation on this.
Questions:
Can someone point to documentation or code on the calculation part?
Is it a good practice to use multiple metrics for scaling?
Thanks in Advance!
From https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#how-does-the-horizontal-pod-autoscaler-work
If multiple metrics are specified in a HorizontalPodAutoscaler, this calculation is done for each metric, and then the largest of the desired replica counts is chosen. If any of those metrics cannot be converted into a desired replica count (e.g. due to an error fetching the metrics from the metrics APIs), scaling is skipped.
Finally, just before HPA scales the target, the scale recommendation is recorded. The controller considers all recommendations within a configurable window, choosing the highest recommendation from within that window. This value can be configured using the --horizontal-pod-autoscaler-downscale-stabilization flag, which defaults to 5 minutes. This means that scaledowns will occur gradually, smoothing out the impact of rapidly fluctuating metric values.
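To illustrate, a two-metric HPA might look like the sketch below (the custom metric name and all values are placeholders). The controller computes one desired replica count from the CPU metric and one from the custom metric, then uses the larger of the two:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app                      # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60    # yields one desired replica count
    - type: Pods
      pods:
        metric:
          name: requests_per_second # hypothetical custom metric
        target:
          type: AverageValue
          averageValue: "100"       # yields another desired replica count
# The HPA scales to the larger of the two desired replica counts.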

How does open-faas deployed on kubernetes determine when to scale a function up or down?

In Kubernetes, I am a little unclear on what criteria need to be met for open-faas to scale a function's replicas up or down.
According to the documentation:
Auto-scaling in OpenFaaS allows a function to scale up or down depending on demand represented by different metrics.
It sounds like, by default, a reason for scaling would be requests/second increasing/decreasing.
OpenFaaS ships with a single auto-scaling rule defined in the mounted configuration file for AlertManager. AlertManager reads usage (requests per second) metrics from Prometheus in order to know when to fire an alert to the API Gateway.
And this "alert" sent to the API Gateway would cause a function's replica count to scale up.
I don't see, in the documentation or in the AlertManager configuration, where the threshold for requests/second that triggers scaling up/down is set.
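For reference, the scaling alert is defined as a Prometheus rule shipped in the faas-netes Prometheus configuration; it has historically looked roughly like the sketch below (the exact expression, threshold, and labels vary between versions, so treat this as an illustration and check the prometheus-cfg ConfigMap in your deployment):

groups:
  - name: openfaas
    rules:
      - alert: APIHighInvocationRate
        # Fires when a function receives more than ~5 successful requests per
        # second over a 10s window; the gateway scales the function up when
        # AlertManager delivers this alert.
        expr: sum(rate(gateway_function_invocation_total{code="200"}[10s])) BY (function_name) > 5
        for: 5s
        labels:
          service: gateway
          severity: major
        annotations:
          description: "High invocation rate on {{ $labels.function_name }}"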
My overall questions:
What is the default threshold of requests/second that would cause a scale up?
Is this threshold configurable? If so, how?

Spiky Kubernetes HPA with metric number of Pub/Sub unacked messages

Currently we have a data streaming pipeline: API call -> Google Pub/Sub -> BigQuery. The number of API calls depends on the traffic on the website.
We created a Kubernetes deployment (in GKE) for ingesting data from Pub/Sub to BigQuery. This deployment has a Horizontal Pod Autoscaler (HPA) with metricName: pubsub.googleapis.com|subscription|num_undelivered_messages and targetValue: "5000". This setup is able to autoscale when the traffic has a sudden increase; however, it causes spiky scaling.
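The HPA described above would look roughly like this sketch (it assumes the Custom Metrics Stackdriver Adapter is installed; the deployment name, subscription, and replica bounds are placeholders):

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: pubsub-ingest               # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pubsub-ingest
  minReplicas: 1
  maxReplicas: 20
  metrics:
    - type: External
      external:
        metricName: pubsub.googleapis.com|subscription|num_undelivered_messages
        metricSelector:
          matchLabels:
            resource.labels.subscription_id: my-subscription   # placeholder subscription
        targetValue: "5000"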
What I meant by spiky is as follows:
The number of unacked messages will go up more than the target value
The autoscaler will increase the number of pods
The number of unacked messages will slowly decrease, but since it is still above the target value, the autoscaler will keep increasing the number of pods --> this happens until we hit the max number of pods in the autoscaler
The number of unacked will decrease until it goes below target and it will stay very low
The autoscaler will reduce the number of pods to the minimum number of pods
The number of unacked messages will increase again and a situation similar to (1) occurs, so it goes into a loop/cycle of spikes
Here is the chart when it goes spiky (the traffic is going up, but it is stable and non-spiky):
[chart: the spiky number of unacknowledged messages in Pub/Sub]
We set an alarm in Stackdriver for when the number of unacknowledged messages is more than 20k, and in this situation it is triggered frequently.
Is there a way so that the HPA become more stable (non-spiky) in this case?
Any comment, suggestion, or answer is well appreciated.
Thanks!
I've been dealing with the same behavior. What I ended up doing is smoothing the num_undelivered_messages using a moving average. I set up a k8s cron that publishes the average of the last 20 mins of time series data to a custom metric every minute. Then configured the HPA to respond to the custom metric.
This worked pretty well, but not perfectly. I observed that as soon as the average converges on the actual value, the HPA will scale the service down too low. So I ended up just adding a constant, so the custom metric is just average + constant. I found that for my specific case a value of 25,000 worked well.
With this, and after dialing in the targetAverageValue, the autoscaling has been very stable.
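A sketch of the resulting HPA metrics section, pointing at the smoothed custom metric instead of the raw one (the metric name and target value are placeholders for whatever you publish and tune):

metrics:
  - type: External
    external:
      # Hypothetical custom metric: the 20-minute moving average of
      # num_undelivered_messages plus a constant, published every minute.
      metricName: custom.googleapis.com|smoothed_undelivered_messages
      targetAverageValue: "5000"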
I'm not sure if this is due to a defect or just the nature of the num_undelivered_messages metric at very high loads.
Edit:
I used the stackdriver/monitoring golang packages. There is a straightforward way to aggregate the time series data; see here under 'Aggregating data' https://cloud.google.com/monitoring/custom-metrics/reading-metrics
https://cloud.google.com/monitoring/custom-metrics/creating-metrics