Scaling a Kafka Streams application using Kubernetes horizontal pod autoscaling

We have a Kafka Streams application with 3 pods. Scaling the application is a heavy operation for us (because of its large state), so I would like to add a pod only if it is absolutely necessary, for example if the application's utilization stays above a given level for, let's say, 10 minutes.
In other words, I don't want to scale my application up or down for a sudden burst (a few seconds) of messages.
I'm looking for a configuration along the lines of:
window: 15 mins
average cpu: 1000 milli
That is, I would like to scale the application if the average CPU over a 15-minute window is greater than 1000 milli.

You can take a look at HPA scaling policies.
There is stabilizationWindowSeconds:
StabilizationWindowSeconds is the number of seconds for which past recommendations should be considered while scaling up or scaling down. StabilizationWindowSeconds must be greater than or equal to zero and less than or equal to 3600 (one hour). If not set, use the default values: for scale up, 0 (i.e. no stabilization is done); for scale down, 300 (i.e. the stabilization window is 300 seconds long).
The target average CPU utilization can be set in the metric target object under averageUtilization.
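For the original question, a minimal sketch of what that could look like in an autoscaling/v2 manifest (the resource name kafka-streams-app, the replica bounds and the 15-minute window are assumptions for illustration, not values taken from the question):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: kafka-streams-app            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment                 # or StatefulSet, depending on how the app is deployed
    name: kafka-streams-app
  minReplicas: 3
  maxReplicas: 6
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: AverageValue
        averageValue: 1000m          # average CPU per pod above which scaling is considered
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 900   # look at the last 15 minutes of recommendations
      policies:
      - type: Pods
        value: 1
        periodSeconds: 60               # add at most one pod per minute once scaling starts
    scaleDown:
      stabilizationWindowSeconds: 900   # be equally conservative about removing pods

With a scale-up stabilization window the HPA acts on the lowest recommendation seen during the window, so a burst that lasts only a few seconds should not add a pod; the load has to stay high for the whole window. The question is phrased in milli-CPU, so the sketch uses an AverageValue target; if you prefer a percentage of the pods' CPU requests, use type: Utilization with averageUtilization instead.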

Related

Why is the speed of k8s HPA downscale related to the number of pods?

I'm performing a stress test on the server. After I put pressure on it by sending tons of requests, the k8s HPA starts working and the pod count starts to grow. What confuses me is that when I stop sending requests, the cluster's load returns to nearly zero, yet the speed of the downscale seems to be related to the number of pods.
I tried two test cases, A and B, sending different amounts of requests to the server. The upscale mechanism works fine, but the downscale speed is significantly different. After case A finished, the pod count was 165 and it took about 10 minutes to scale back down to 60, which was the minimum pod count. After test case B the pod count rose to 390, and it took about 30 minutes to perform its first downscale after I stopped sending requests to the server.
But according to the documents, HPA's algorithm is based on
desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )]
And there should be a stabilizationWindowSeconds controlling the downscale rate, but I haven't found any connection with the number of pods.
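For what it's worth, the downscale rate can become dependent on the replica count when a percent-based scaleDown policy is in effect, because a Percent policy removes a fraction of the current replicas per period. This is only a hedged illustration of that mechanism, not a diagnosis of the cluster described above:

behavior:
  scaleDown:
    stabilizationWindowSeconds: 300
    policies:
    - type: Percent
      value: 10              # remove at most 10% of the current replicas ...
      periodSeconds: 60      # ... per minute, so the step size grows with the pod count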

Kubernetes HPA (with custom metrics) scaling policies

Starting from Kubernetes v1.18, the v2beta2 API allows scaling behavior to be configured through the Horizontal Pod Autoscaler (HPA) behavior field. I'm planning to apply HPA with custom metrics to a StatefulSet.
The use case I'm looking at is scaling out using a custom metric (e.g. number of user sessions on my application), but the HPA will not scale down at all. This use case is also described by K8s SIG-Autoscaling enhancements - "Configurable scale velocity for HPA >> Story 4: Scale Up As Usual, Do Not Scale Down".
behavior:
  scaleDown:
    policies:
    - type: pods
      value: 0
The user sessions could stay active for minutes to hours. Starting with 1 replica of the StatefulSet, as the number of user sessions hits an upper limit (exposed using a Prometheus collector and later configured using the HPA custom-metric option), the application pods will scale out. The new pods will start serving new users.
Since this is a StatefulSet, which cannot just abruptly scale down, I'm seeking help on ways to scale down when the user sessions on the new replicas drop to 0. The above link says that the scale-down can be controlled by a separate process. Not sure how to do this? Looking for some pointers.
Thanks.
You can use the periodSeconds and stabilizationWindowSeconds values to manage how much time passes between the termination of pods, for example:
behavior:
  scaleDown:
    stabilizationWindowSeconds: 10
    policies:
    - type: Pods
      value: 1
      periodSeconds: 20
This way it will scale down 1 pod roughly every ~30 seconds (or whatever values are used for periodSeconds and stabilizationWindowSeconds). The exact timing may vary depending on how the stabilization window plays out over time.
periodSeconds describes how much time must pass between the termination of each pod; the maximum value is 1800 seconds (30 minutes).
stabilizationWindowSeconds: when metrics indicate that the target should be scaled down, the algorithm looks at previously calculated desired states and uses the highest value from the specified interval. For scale down, the default value is 300 and the maximum value is 3600 (one hour).
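Put together with a custom metric, an HPA for the StatefulSet could look roughly like the sketch below; the metric name active_user_sessions, the resource names and the target value are assumptions that depend on what your Prometheus Adapter actually exposes:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: sessions-hpa                 # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: my-statefulset             # hypothetical name
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: active_user_sessions   # assumed custom metric exposed via the Prometheus Adapter
      target:
        type: AverageValue
        averageValue: "100"          # target sessions per pod; pick to match your upper limit
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Pods
        value: 1
        periodSeconds: 60            # remove at most one replica per minute

Keep in mind that when a StatefulSet scales down, its controller removes the highest-ordinal pods first, so draining or redirecting the sessions on those particular replicas still has to be handled by the application or by a separate process, as the KEP story suggests.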

Horizontal Pod Autoscaler: scale only if CPU load remains high for a given (5 min) duration

I have a k8s cluster deployed in AWS EKS, using Kubernetes version 1.14.
I want the Horizontal Pod Autoscaler to scale up only if the CPU load remains high for a given (5 min) duration, i.e. we want to take the decision after 4-5 minutes only if the load stayed high during that whole time.
If the load drops after 3-4 minutes, it should not scale up, but currently we cannot find any way to do that.
horizontal-pod-autoscaler-upscale-delay is deprecated.
So we are looking for a parameter with which we can set the CPU-usage duration for the HPA.
horizontal-pod-autoscaler-upscale-delay has been removed in newer versions, but it might still work on yours. You can add it to the kube-controller-manager arguments and check.
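As a hedged sketch only (the flag is deprecated and may be ignored by newer controller versions, and on a managed control plane such as EKS you normally cannot change these flags at all), it would be set on the kube-controller-manager command line:

# kube-controller-manager flags (deprecated option; verify it is still honored on your version)
kube-controller-manager \
  --horizontal-pod-autoscaler-upscale-delay=5m0s \
  ...existing flags...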

Kubernetes HPA Auto Scaling Velocity

We have defined an HPA for an application with min 1 and max 4 replicas, with 80% CPU as the threshold.
What we wanted was: if the pod CPU goes beyond 80%, the app should be scaled up one replica at a time.
Instead, what is happening is that the application gets scaled up to the maximum number of replicas.
How can we define the scaling velocity to scale one pod at a time? And again, if one of the pods consumes more than 80% CPU, scale one more pod up, but not straight to the maximum replicas.
Let me know how we can achieve this.
First of all, the 80% CPU utilisation is not a threshold but a target value.
The HPA algorithm for calculating the desired number of replicas is based on the following formula:
X = N * (C/T)
Where:
X: desired number of replicas
N: current number of replicas
C: current value of the metric
T: target value for the metric
In other words, the algorithm aims at calculating a replica count that keeps the observed metric value as close as possible to the target value.
In your case, this means if the average CPU utilisation across the pods of your app is below 80%, the HPA tends to decrease the number of replicas (to make the CPU utilisation of the remaining pods go up). On the other hand, if the average CPU utilisation across the pods is above 80%, the HPA tends to increase the number of replicas, so that the CPU utilisation of the individual pods decreases.
The number of replicas that are added or removed in a single step depends on how far apart the current metric value is from the target value and on the current number of replicas. This decision is internal to the HPA algorithm and you can't directly influence it. The only contract that the HPA has with its users is to keep the metric value as close as possible to the target value.
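For illustration: with 1 current replica, a target of 80%, and an observed average utilisation of, say, 320%, the formula gives 1 * (320 / 80) = 4, so the HPA jumps straight to your maximum of 4 replicas in a single step.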
If you need a very specific autoscaling behaviour, you can write a custom controller (or operator) to autoscale your application instead of using the HPA.
This - https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details - explains the algorithm the HPA uses, including the formula to calculate the number of "desired replicas".
If I recall, there were some (positive) changes to the HPA algo with v1.12.
As of today, the HPA has full control over scale-up. You can only fine-tune the scale-down operation with the following parameter:
--horizontal-pod-autoscaler-downscale-stabilization
The good news is that there is a proposal for Configurable scale up/down velocity for HPA
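That proposal eventually landed as the behavior field in autoscaling/v2beta2 (Kubernetes v1.18+); assuming a recent enough cluster, a sketch of "one pod at a time" scale-up would be:

behavior:
  scaleUp:
    policies:
    - type: Pods
      value: 1             # add at most one replica ...
      periodSeconds: 60    # ... per minute, then re-evaluate against the 80% target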

How to make Horizontal Pod Autoscaler scale down pod replicas on a percentage decrease threshold?

I am looking for a syntax/condition for a percentage-decrease threshold to be inserted in the HPA YAML file, which would allow the Horizontal Pod Autoscaler to start decreasing the pod replicas when the CPU utilization falls by that particular percentage.
Consider this scenario:
I set the option targetCPUUtilizationPercentage to 50, minReplicas to 1 and maxReplicas to 5.
Now let's assume the CPU utilization goes above 50% and reaches 100%, making the HPA create 2 replicas. Even if the utilization then decreases to 51%, the HPA will not terminate a pod replica.
Is there any way to condition the scale-down on the basis of a percentage decrease in CPU utilization?
Just like targetCPUUtilizationPercentage, I would like to be able to specify something like targetCPUUtilizationPercentageDecrease and assign it the value 30, so that when the CPU utilization falls from 100% to 70%, the HPA terminates a pod replica, and on a further 30% decrease, when it reaches 40%, the other remaining pod replica gets terminated.
As per online resources, this topic is still a work in progress in the community; see "Configurable HorizontalPodAutoscaler options".
I didn't try it, but as a workaround you can try to create custom metrics, e.g. using the Prometheus Adapter (see "Horizontal pod autoscaling by using custom metrics"), in order to have more control over the provided limits.
At the moment you can use the --horizontal-pod-autoscaler-downscale-stabilization option:
The value for this option is a duration that specifies how long the autoscaler has to wait before another downscale operation can be performed after the current one has completed. The default value is 5 minutes (5m0s).
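A hedged sketch of what setting that flag on the kube-controller-manager looks like (the 10-minute value is only an example):

kube-controller-manager \
  --horizontal-pod-autoscaler-downscale-stabilization=10m0s   # wait 10 minutes between downscale operations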
From another point of view, this behaviour is expected given the design goals of the HPA:
Applications that process very important data events. These should scale up as fast as possible (to reduce the data processing time), and scale down as soon as possible (to reduce cost).
Hope this helps.