I set an HPA for my deployment/app, for example CPU 80%.
My app deployment has two containers: one is the app serving traffic, the other is the automatically injected istio-proxy.
When I checked the HPA while running traffic, I found something unexpected in the result.
The CPU request of istio-proxy is 2 cores.
The CPU request of the app is 4 cores.
The CPU consumed by istio-proxy is 1 core.
The CPU consumed by the app is 2 cores.
So I expected the HPA utilization of this pod (including both containers) to be (1+2)/(2+4) = 50%.
But the actual result is close to (1+2)/4 = 75%.
It seems the istio-proxy CPU request is excluded when calculating the HPA's CPU utilization.
As far as I know, Kubernetes gets the CPU requests from the deployment, but in this sidecar auto-injection case the deployment YAML doesn't contain any istio-proxy container information.
I guess that's why the istio-proxy CPU request is excluded.
But is that the expected behavior or a bug?
I think as of 1.19 the HPA works on an average value over all containers in the pod. The exact logic is here: https://github.com/kubernetes/kubernetes/blob/v1.9.0/pkg/controller/podautoscaler/metrics/utilization.go#L49
currentUtilization = int32((metricsTotal * 100) / requestsTotal)
As per the above logic, the HPA calculates pod CPU utilization as the total CPU usage of all containers in the pod divided by the total requests.
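If you want the utilization computed against the app container alone, ignoring the sidecar, a per-container resource metric can be used instead of the pod-level one. A minimal sketch, assuming the autoscaling/v2 API with the ContainerResource metric type available in your cluster version, and that the traffic-serving container is literally named app:

metrics:
- type: ContainerResource
  containerResource:
    name: cpu
    container: app        # only this container's usage and request are considered
    target:
      type: Utilization
      averageUtilization: 80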
Having an HPA configuration of 50% average CPU:
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
I found the problem that I have only one pod receiving traffic, so its CPU usage is higher than 50% of the requested CPU.
The HPA then starts scaling up new pods, but sometimes those are not yet receiving any traffic, so their CPU consumption is very low.
My expectation was to see the pods that don't use any CPU scaled down at some point (how long should that take?), but it isn't happening, and I believe the reason is that the one pod whose CPU usage stays above 50% keeps the replica count up.
What I need is to scale those pods up/down until they can start receiving traffic, which depends on which node they are deployed to.
Any suggestion on how to accomplish this?
I have an EKS cluster running with cluster-autoscaler version 1.21.2 deployed. When I ran kubectl top nodes, I found a node using 5% CPU and 21% memory. But in the cluster-autoscaler pod log, I see the message below for the same node:
Node XXXX is not suitable for removal - cpu utilization too big (0.663130)
I'm now confused about how the cluster autoscaler calculates this value and why the node is not scaled down. BTW, I used the default --scale-down-utilization-threshold=0.5.
We stumbled upon the same issue and realized that the CPU utilization value (in your case 66.31%) roughly matches the total CPU requested by the pods/containers running on the node.
Remember: the CPU (and other resources) a pod/container requests is guaranteed to it.
That's why it makes sense that the node can look idle in terms of actual CPU usage while, from the Kubernetes autoscaling perspective, 66% of its CPU is in use.
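In other words, the utilization the cluster autoscaler logs is the sum of the CPU requests of the pods on the node divided by the node's allocatable CPU, not the live usage that kubectl top nodes shows. If nodes with that much requested-but-idle CPU should still be scale-down candidates, the threshold can be raised; a hedged sketch of the relevant container args in the cluster-autoscaler Deployment (the surrounding manifest depends on how it was installed):

containers:
- name: cluster-autoscaler
  command:
  - ./cluster-autoscaler
  - --cloud-provider=aws
  - --scale-down-utilization-threshold=0.7   # default is 0.5; nodes below this requested-resource share become removal candidates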
The docs say:
For per-pod resource metrics (like CPU), the controller fetches the metrics from the resource metrics API for each Pod targeted by the HorizontalPodAutoscaler. Then, if a target utilization value is set, the controller calculates the utilization value as a percentage of the equivalent resource request on the containers in each Pod. If a target raw value is set, the raw metric values are used directly. The controller then takes the mean of the utilization or the raw value (depending on the type of target specified) across all targeted Pods, and produces a ratio used to scale the number of desired replicas.
Assume I have a Pod with:
resources:
  limits:
    cpu: "0.3"
    memory: 500M
  requests:
    cpu: "0.01"
    memory: 40M
and now I have an autoscaling definition as:
type: Resource
resource:
  name: cpu
  target:
    type: Utilization
    averageUtilization: 60
Which according to the docs:
With this metric the HPA controller will keep the average utilization of the pods in the scaling target at 60%. Utilization is the ratio between the current usage of resource to the requested resources of the pod
So, I'm not understanding something here. If request is the minimum resources required to run the app, how would scaling be based on this value? 60% of 0.01 is nothing, and the service would be constantly scaling.
Your misunderstanding might be that the value of request is not necessarily the minimum your app needs to run.
It is what you (the developer, admin, DevOps) request from the Kubernetes cluster for a pod in your application to run, and it helps the scheduler pick the right node for your workload (say, one that has sufficient resources available). So don't pick this value too small or too high.
Apart from that, autoscaling works as you described. In this case, the cluster calculates how much of your requested CPU is in use and scales out when more than 60% of it is used. Keep in mind that Kubernetes does not look at every single pod but at the average across all pods in that group.
For example, with two pods running, one pod could be at 100% of its requests and the other at (almost) 0%. The average would be around 50%, so no autoscaling happens in the case of the Horizontal Pod Autoscaler.
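To make the numbers concrete, a hedged sketch with a less extreme request than 0.01 (all names and values here are made up for illustration): with a request of 200m and a 60% target, the HPA scales out once average usage across the pods exceeds roughly 120m.

# in the Deployment's container spec
resources:
  requests:
    cpu: "200m"       # the HPA treats this as 100%
    memory: 128Mi

# in the HorizontalPodAutoscaler spec
metrics:
- type: Resource
  resource:
    name: cpu
    target:
      type: Utilization
      averageUtilization: 60   # scale out once average usage exceeds ~120m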
In production, I personally make an initial guess at the right values, then look at the metrics and adjust them to my real-world workload. Prometheus is your friend, or at least the metrics server:
https://github.com/prometheus-operator/kube-prometheus
https://github.com/kubernetes-sigs/metrics-server
I set up my cluster and I want my deployments to scale up when the first pod uses 75% of one CPU (core). I did this with an HPA and everything works, but I noticed that the HPA percentage is strange.
Based on what I know, 1 CPU = 1000 millicores. What I see in kubectl top pods is pod-A using 9m, but what I see in kubectl get hpa is pod-A 9%/75%, which doesn't make sense: 9% of 1000 is 90, not 9.
I want to know how the HPA calculates the percentage and how I should configure it so that it scales up when I reach 75% of one CPU.
To the Horizontal Pod Autoscaler, 100% of a metric (CPU or memory) is the amount set in the resource requests. So if your pod requests 100m CPU, 9m is 9%, and it would scale out at 75m.
Double-check whether you really requested 1 (or 1000m) CPU by issuing kubectl describe pod <pod-name>.
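If the intention is to scale out at 75% of one core, the container has to actually request one core; a minimal sketch with hypothetical values, assuming the autoscaling/v2 API:

# in the Deployment's container spec
resources:
  requests:
    cpu: "1"          # 1000m, so 75% utilization corresponds to 750m

# in the HorizontalPodAutoscaler spec
metrics:
- type: Resource
  resource:
    name: cpu
    target:
      type: Utilization
      averageUtilization: 75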
I'm defining this autoscaler with Kubernetes and GCE and I'm wondering what exactly I should specify for targetCPUUtilizationPercentage. What exactly does that target refer to? Is it the total CPU in my cluster? And what happens when the pods referenced by this autoscaler consume more than targetCPUUtilizationPercentage?
The CPU utilization is the average CPU usage of all pods in a deployment over the last minute, divided by the requested CPU of this deployment. If the mean of the pods' CPU utilization is higher than the target you defined, your replicas will be adjusted.
You can read more about this topic here.
This is the average CPU utilization of all the pods: if you have set the CPU request to 200m and targetCPUUtilizationPercentage to 80%, the threshold is 160m; above that the HPA scales out the deployment and creates a new replica.
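For reference, a minimal autoscaling/v1 HorizontalPodAutoscaler using that field; the names are hypothetical and should be replaced with your own deployment:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # average across the deployment's pods, relative to each pod's CPU request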