apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: testingHPA
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: my_app
  minReplicas: 3
  maxReplicas: 5
  targetCPUUtilizationPercentage: 85
Above is the normal hpa.yaml structure. Is it possible to set the kind to Pod and autoscale it?
As already pointed out by others, it is not possible to set Pod as the kind of the target resource for an HPA.
The documentation describes the HPA as:
The Horizontal Pod Autoscaler automatically scales the number of Pods
in a replication controller, deployment, replica set or stateful set
based on observed CPU utilization (or, with custom metrics support, on
some other application-provided metrics). Note that Horizontal Pod
Autoscaling does not apply to objects that can't be scaled, for
example, DaemonSets.
The documentation also describes how the algorithm is implemented on the backend:
desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )]
Since the Pod resource does not have a replicas field as part of its spec, we can conclude that a bare Pod is not supported as a scale target for the HPA.
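For example, purely for illustration: with 3 current replicas, a measured CPU utilization of 90% and a target of 85%, desiredReplicas = ceil[3 * (90 / 85)] = ceil[3.18] = 4.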
A single Pod is only ever one Pod. It has no mechanism for horizontal scaling, because the Pod is the unit that everything else scales.
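As a rough sketch of the usual workaround (names and image here are placeholders, not taken from the question): wrap the pod template in a Deployment and point the HPA's scaleTargetRef at that Deployment rather than at a bare Pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest    # placeholder image
        resources:
          requests:
            cpu: 200m           # CPU requests are needed for CPU-based autoscaling
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 3
  maxReplicas: 5
  targetCPUUtilizationPercentage: 85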
Related
We are considering using HPA to scale the number of pods in our cluster. This is what a typical HPA object looks like:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-demo
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-deployment
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 20
My question is - can we have multiple targets (scaleTargetRef) for an HPA? Or does each Deployment/RS/SS/etc. have to have its own HPA?
I tried looking in the K8s docs but could not find any info on this. Any help appreciated, thanks.
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-metrics-apis
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
Can we have multiple targets (scaleTargetRef) for HPA?
One HorizontalPodAutoscaler has only one scaleTargetRef, which references a single resource.
A HorizontalPodAutoscaler controls the scale of a single resource (Deployment/StatefulSet/ReplicaSet). This is actually stated in the documentation, though not that directly:
Here is a reference to it as well: a single target resource is defined by the scaleTargetRef; the horizontal pod autoscaler learns the current resource consumption for it and sets the desired number of pods by using its Scale subresource.
From practical experience, referencing multiple workload resources in a single HorizontalPodAutoscaler definition will only work for one of them. In addition, when running the kubectl autoscale command against several resources, a separate HPA object is created for each of them.
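So in practice each workload resource gets its own HPA. A minimal sketch (deployment and HPA names are placeholders) with one HPA object per Deployment:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-frontend            # placeholder
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend              # placeholder
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 20
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-backend             # placeholder
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend               # placeholder
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 20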
I'm testing out HPA with custom metrics from an application, exposed to K8s using prometheus-adapter.
My app exposes a "jobs_executing" custom metric, a numeric gauge (golang prometheus-client) reporting the number of jobs currently being executed by the app (pod).
To cover this in the HPA, here is what my HPA configuration looks like:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: jobs_executing
      target:
        type: AverageValue
        averageValue: 5
I want the autoscaler to scale my pods when the average number of jobs executed across all pods equals 5. This works, but sometimes the HPA status shows values like this:
NAME            REFERENCE                          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
my-autoscaler   Deployment/my-scaling-sample-app   7700m/5   1         10        10         38m
Here the target shows up as "7700m/5" even though the average number of jobs executed across the pods was 7.7. This makes the HPA scale out very aggressively. I don't understand why it is putting "7700m" in the current value.
My question is: is there a way to define a floating-point value here in the HPA, so that a plain number like 7.7 is not rendered as 7700m (which looks like a CPU unit)?
Or what am I missing? Thank you
From the docs:
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#appendix-quantities
All metrics in the HorizontalPodAutoscaler and metrics APIs are specified using a special whole-number notation known in Kubernetes as a quantity. For example, the quantity 10500m would be written as 10.5 in decimal notation. The metrics APIs will return whole numbers without a suffix when possible, and will generally return quantities in milli-units otherwise. This means you might see your metric value fluctuate between 1 and 1500m, or 1 and 1.5 when written in decimal notation.
So it does not seem like you can change the notation the HPA uses; all values are expressed as the generic Kubernetes Quantity type.
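Just for illustration (same fields as your spec, no behavior change): the quantity notation applies to what you write as well, so averageValue: 5 and averageValue: "5000m" mean the same thing, and a reported 7700m simply means 7.7 in decimal notation:
metrics:
- type: Pods
  pods:
    metric:
      name: jobs_executing
    target:
      type: AverageValue
      averageValue: "5000m"   # identical to 5; a current value of 7700m means 7.7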
I have a cluster with Cluster Autoscaler activated and HPA for one of my deployments.
This is the HPA definition:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-resource-metrics-cpu
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: ReplicationController
    name: hello-hpa-cpu
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50
Now, when my cluster is being used very lightly, this deployment will have only 1 available replica.
And since the cluster is not under high usage, it could be the case that the node containing that replica is scheduled for deletion (downscaling).
In that case my deployment would have downtime: when the node is deleted, the only replica of the deployment is deleted as well and has to be recreated as a new pod on another node. I don't want that downtime to happen.
From this issue: https://github.com/kubernetes/kubernetes/issues/48307, it seems that Pod Disruption Budgets are not applicable to deployments with only 1 replica.
So the only solution to my problem would be to have minReplicas set to 2?
Or is there something else I could do to prevent this downtime while still keeping minReplicas at 1?
Kubernetes has the notion of a disruption. The cluster autoscaler (or an administrator) taking a node offline is a "voluntary" disruption (as distinct from, say, the node losing power) and so you have some control over it. If you create a pod disruption budget:
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: hello-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: hello
then you have specified that there should never be fewer than one pod with the label app: hello when the cluster tries to perform a voluntary disruption.
Doing this can prevent the cluster autoscaler from actually deleting the node. The examples in the PDB documentation generally have multiple replicas and can tolerate some of them being offline, so it's possible to delete 1 replica of 3 and recreate it on a different node. There is an extended example where there's not capacity in the cluster to start a rescheduled pod, and this blocks destroying a node. You might set the HPA to minReplicas: 3 to avoid this case, even if it means your system will be overprovisioned at the quietest times.
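For the PDB above to select anything, the deployment's pod template has to carry the matching label. A hedged sketch of the relevant part of such a deployment (name, label, and image are assumptions, not taken from your cluster):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello             # must match the PDB's selector.matchLabels
    spec:
      containers:
      - name: hello
        image: hello:latest    # placeholder image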
Is it possible to have the HPA scale based on the number of available running pods?
I have set up a readiness probe that cuts out a pod based on its internal state (idle, working, busy). When a pod is 'busy', it no longer receives new requests. But the CPU and memory demands are low.
I don't want to scale based on cpu, mem, or other metrics.
Seeing as the readiness probe removes it from active service, can I scale based on the average number of active (not busy) pods? When that number drops below a certain point, more pods would be added.
TIA for any suggestions.
You can create a custom metric, e.g. the number of busy pods, for the HPA.
That is, the application should emit a metric value when it is busy, and you use that metric in the HorizontalPodAutoscaler.
Something like this:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: custom-metric-sd
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: custom-metric-sd
  minReplicas: 1
  maxReplicas: 20
  metrics:
  - type: Pods
    pods:
      metricName: busy-pods
      targetAverageValue: 4
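If the metric is collected with Prometheus, as in an earlier question here, a prometheus-adapter rule would also be needed to expose it through the custom metrics API. A rough sketch, assuming the application exposes a busy_pods gauge labelled with namespace and pod (the series and metric names are assumptions):
rules:
- seriesQuery: 'busy_pods{namespace!="",pod!=""}'
  resources:
    overrides:
      namespace: {resource: "namespace"}
      pod: {resource: "pod"}
  name:
    matches: "busy_pods"
    as: "busy-pods"
  metricsQuery: 'sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)'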
Here is another reference for HPA with custom metrics.
I have created a horizontal autoscaler based on CPU usage and it works fine. I want to know how I can configure the autoscaler so that it only scales up, without scaling down. The reason I want this is that when I have a high load of requests I create some operators, and I want to keep them alive even if they do nothing for some amount of time; but the autoscaler kills the pods and scales down to the minimum replicas after a while when there is no load.
My autoscaler:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: gateway
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: gateway
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 20
Edit:
By operator, I mean small applications/programs that are running in a pod.
You can add the --horizontal-pod-autoscaler-downscale-stabilization flag to kube-controller-manager, as described in the docs. The default delay is 5 minutes.
To add the flag to kube-controller-manager, edit /etc/kubernetes/manifests/kube-controller-manager.yaml on the master node; the pod will then be recreated automatically.
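A rough sketch of the relevant part of that manifest (other fields omitted; the 30-minute value is only an example, not a recommendation):
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    # ...existing flags stay as they are...
    - --horizontal-pod-autoscaler-downscale-stabilization=30m0s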