Autoscaling Deployments with Cloud Monitoring metrics - kubernetes

I am trying to auto-scale my pods based on CloudSQL instance response time. We are using cloudsql-proxy for secure connection.
Deployed the Custom Metrics Adapter.
https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-stackdriver/master/custom-metrics-stackdriver-adapter/deploy/production/adapter_new_resource_model.yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: application_name
spec:
  minReplicas: 1
  maxReplicas: 5
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: application_name
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metric:
        name: custom-metric-stackdriver-adapter
      target:
        type: AverageValue
        averageValue: 20
I deployed the application and created an HPA for it, but I am seeing this error:
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetPodsMetric the HPA was unable to compute the replica count: unable to get metric custom-metric: unable to fetch metrics from custom metrics API: the server could not find the descriptor for metric custom.googleapis.com/custom-metric: googleapi: Error 404: Could not find descriptor for metric 'custom.googleapis.com/custom-metric'., notFound
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedGetPodsMetric 4m22s (x10852 over 47h) horizontal-pod-autoscaler unable to get metric custom-metric: unable to fetch metrics from custom metrics API: the server could not find the descriptor for metric custom.googleapis.com/custom-metric: googleapi: Error 404: Could not find descriptor for metric 'custom.googleapis.com/custom-metric'., notFound

Please refer to the link below to deploy a HorizontalPodAutoscaler (HPA) resource to scale your application based on Cloud Monitoring metrics.
https://cloud.google.com/kubernetes-engine/docs/tutorials/autoscaling-metrics#custom-metric_4
It looks like the custom metric name differs between the app and HPA deployment configuration files (YAML). The metric and application names should match in both files.
In the HPA deployment YAML file:
a. Replace custom-metric-stackdriver-adapter with custom-metric (or change the metric name to custom-metric-stackdriver-adapter in the app deployment YAML file).
b. Add "namespace: default" next to the application name under metadata. Also ensure you add the namespace in the app deployment configuration file.
c. Delete the duplicate lines 6 & 7 (minReplicas: 1, maxReplicas: 5).
d. Go to Cloud Console -> Kubernetes Engine -> Workloads. Delete the workloads (application-name & custom-metrics-stackdriver-adapter) created by the app deployment YAML and adapter_new_resource_model.yaml files.
e. Now apply the configurations for the resource model, app, and HPA (YAML files).
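Putting steps a–c together, a sketch of what the corrected HPA manifest could look like, assuming the app exports the metric under the name custom-metric and runs in the default namespace (both are assumptions for illustration):

```yaml
# Sketch only: metric name and namespace are assumed, adjust to your app.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: application_name
  namespace: default
spec:
  minReplicas: 1          # duplicates removed; declared once at spec level
  maxReplicas: 5
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: application_name
  metrics:
  - type: Pods
    pods:
      metric:
        name: custom-metric   # must match the name the app exports to Cloud Monitoring
      target:
        type: AverageValue
        averageValue: 20
```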

Related

How does the Kubernetes HPA (HorizontalPodAutoscaler) determine which pod's metrics should be used if multiple pods have the same metrics

Suppose we have the HPA below deployed in the demo namespace, and multiple pods (POD-A, POD-B) in this demo namespace expose the same metric "istio_requests_per_second". How does the HPA determine which pod's "istio_requests_per_second" metric should be used? Or will every pod with this metric be evaluated against the HPA target?
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: httpbin
spec:
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metric:
        name: istio_requests_per_second
      target:
        type: AverageValue
        averageValue: "10"
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: httpbin
If you're using Prometheus, then it's the adapter that correlates between the Kubernetes pod name and the metric value to return. Essentially, the HPA asks the Prometheus adapter for the metric istio_requests_per_second by calling /apis/custom.metrics.k8s.io/v1beta1/namespaces/myNamespace/pods/mypod; the adapter takes that request and consults its configured rules to decide what to query for.
https://github.com/kubernetes-sigs/prometheus-adapter/blob/master/docs/config-walkthrough.md
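For illustration, a hypothetical prometheus-adapter rule in the format that walkthrough describes, assuming the underlying Prometheus counter is named istio_requests_total (that name is an assumption; Istio deployments may differ):

```yaml
# Sketch of one adapter rule: discover the series, map its labels to
# Kubernetes resources, rename it, and define the query the adapter runs.
rules:
- seriesQuery: 'istio_requests_total{namespace!="",pod!=""}'
  resources:
    overrides:
      namespace: {resource: "namespace"}
      pod: {resource: "pod"}
  name:
    matches: "^istio_requests_total$"
    as: "istio_requests_per_second"
  metricsQuery: 'rate(<<.Series>>{<<.LabelMatchers>>}[2m])'
```

When the HPA asks for istio_requests_per_second for the pods behind its scaleTargetRef, the adapter substitutes those pods into the label matchers of metricsQuery, so only the target pods' series are returned.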
Based on my test, I think the HPA uses the scaleTargetRef to determine which pods' metrics should be used, then pulls those metrics from the metrics server and evaluates them against the target config.
As per Kubernetes documentation:
For object metrics and external metrics, a single metric is fetched, which describes the object in question. This metric is compared to the target value, to produce a ratio as above. In the autoscaling/v2 API version, this value can optionally be divided by the number of Pods before the comparison is made.
It will calculate the ratio based on the mean across the target pods.
References:
1. https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#how-does-a-horizontalpodautoscaler-work
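The averaging behavior described above follows the scaling rule in the referenced Kubernetes documentation; a minimal Python sketch of that rule for Pods-type metrics (the function name and shape are mine, not part of Kubernetes):

```python
import math

def desired_replicas(current_replicas, pod_metric_values, target_average):
    """Documented HPA rule for Pods-type metrics: average the metric
    across the pods selected via scaleTargetRef, then
    desired = ceil(currentReplicas * currentAverage / targetAverage)."""
    current_average = sum(pod_metric_values) / len(pod_metric_values)
    return math.ceil(current_replicas * current_average / target_average)

# Example: 2 pods serving 15 and 25 requests/s against a target averageValue of 10
print(desired_replicas(2, [15, 25], 10))  # -> 4
```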

FailedGetPodsMetric: for HPA autoscaling

I am trying to autoscale using custom metrics, with the metric type "http_requests". The following command shows the correct output:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/" | jq
Below is my hpa.yaml file:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: http_requests
      targetAverageValue: 1
but my scaling is failing due to:
the HPA was unable to compute the replica count: unable to get metric http_requests: unable to fetch metrics from custom metrics API: an error on the server ("Internal Server Error: \"/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%!A(MISSING)/http_requests?labelSelector=app%!D(MISSING)podinfo\": the server could not find the requested resource") has prevented the request from succeeding (get pods.custom.metrics.k8s.io *)
Please help me out in this :)
It seems you are missing pods in your cluster that match the provided deployment specification. Can you check whether your podinfo deployment is running, and whether it has healthy pods in it?
The command works because you're only checking the availability of the metrics endpoint. It shows that the endpoint is live and ready to provide metrics, but it doesn't guarantee that you will receive metrics without any backing resources.

HPA cannot get metrics due to 403 errors

I used the following metrics inside hpa
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: app-svc-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: app-svc
  minReplicas: 1
  maxReplicas: 1000
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  - type: Pods
    pods:
      metric:
        name: packets-per-second
      target:
        type: AverageValue
        averageValue: 1k
But the hpa is unable to get the metrics
Warning FailedGetPodsMetric 14s (x6 over 1m) horizontal-pod-autoscaler unable to get metric packets-per-second: unable to fetch metrics from custom metrics API: the server could not find the descriptor for metric custom.googleapis.com/packets-per-second: googleapi: Error 403: Permission monitoring.metricDescriptors.get denied (or the resource may not exist)., forbidden
I am running the pods on a dedicated node pool, and each node runs under a service account.
The service account does have these IAM roles:
Monitoring Viewer,
Monitoring Metrics Writer
I am unsure how to fix this error. Any pointers are greatly appreciated. Thanks.
I had a cluster with Workload Identity enabled. Apparently, when a cluster has Workload Identity enabled, the metrics fetch fails.
1) I had to install the Custom Metrics Stackdriver Adapter and create the custom metric, as pointed out by David Kruk in his comments.
2) I had to add hostNetwork: true to the custom Stackdriver adapter deployment's pod spec. The issue is mentioned in the GitHub repository for the adapter.
With these two updates, the autoscaler works as expected.
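The hostNetwork change from step 2 could be sketched as a strategic-merge patch against the adapter Deployment; the resource name and namespace below follow the upstream adapter manifest, but treat them as assumptions and check your own deployment:

```yaml
# Hypothetical patch: run the adapter pods on the host network so they
# can reach the GCE metadata server when Workload Identity is enabled.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-metrics-stackdriver-adapter
  namespace: custom-metrics
spec:
  template:
    spec:
      hostNetwork: true
```

This can be applied with kubectl patch, or by editing the adapter's deployment manifest directly before applying it.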

How to make HPA scale a deployment based on metrics produced by another deployment

What I am trying to achieve is creating a Horizontal Pod Autoscaler able to scale worker pods according to a custom metric produced by a controller pod.
I already have Prometheus scraping, the Prometheus Adapter, and the Custom Metrics Server fully operational, and scaling the worker deployment with a custom metric my_controller_metric produced by the worker pods already works.
Now my worker pods don't produce this metric anymore, but the controller does.
It seems that the API autoscaling/v1 does not support this feature. I am able to specify the HPA with the autoscaling/v2beta1 API if necessary though.
Here is my spec for this HPA:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: my-worker-hpa
  namespace: work
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: my-worker-deployment
  metrics:
  - type: Object
    object:
      target:
        kind: Deployment
        name: my-controller-deployment
      metricName: my_controller_metric
      targetValue: 1
When the configuration is applied with kubectl apply -f my-worker-hpa.yml, I get the message:
horizontalpodautoscaler "my-worker-hpa" configured
Though this message seems OK, the HPA does not work. Is this spec malformed?
As I said, the metric is available in the Custom Metric Server with a kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . | grep my_controller_metric.
This is the error message from the HPA:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetObjectMetric the HPA was unable to compute the replica count: unable to get metric my_controller_metric: Deployment on work my-controller-deployment/unable to fetch metrics from custom metrics API: the server could not find the metric my_controller_metric for deployments
Thanks!
In your case, the problem is the HPA configuration: spec.metrics.object.target should also specify the API version.
Putting apiVersion: extensions/v1beta1 under spec.metrics.object.target should fix it.
In addition, there is an open issue about better config validation in HPA: https://github.com/kubernetes/kubernetes/issues/60511
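A sketch of the corrected metrics block with that apiVersion added under the target (the other fields are copied from the question's spec):

```yaml
metrics:
- type: Object
  object:
    target:
      apiVersion: extensions/v1beta1   # the missing field
      kind: Deployment
      name: my-controller-deployment
    metricName: my_controller_metric
    targetValue: 1
```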

Difficulty configuring Horizontal Pod Autoscaler with external metric

I'm attempting to configure a Horizontal Pod Autoscaler to scale a deployment based on the duty cycle of attached GPUs.
I'm using GKE, and my Kubernetes master version is 1.10.7-gke.6.
I'm working off the tutorial at https://cloud.google.com/kubernetes-engine/docs/tutorials/external-metrics-autoscaling . In particular, I ran the following command to set up custom metrics:
kubectl create -f https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-stackdriver/master/custom-metrics-stackdriver-adapter/deploy/production/adapter.yaml
This appears to have worked, or at least I can access a list of metrics at /apis/custom.metrics.k8s.io/v1beta1.
This is my YAML:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: images-srv-hpa
spec:
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metricName: container.googleapis.com|container|accelerator|duty_cycle
      targetAverageValue: 50
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: images-srv-deployment
I believe that the metricName exists because it's listed in /apis/custom.metrics.k8s.io/v1beta1, and because it's described on https://cloud.google.com/monitoring/api/metrics_gcp .
This is the error I get when describing the HPA:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedGetExternalMetric 18s (x3 over 1m) horizontal-pod-autoscaler unable to get external metric prod/container.googleapis.com|container|accelerator|duty_cycle/nil: no metrics returned from external metrics API
Warning FailedComputeMetricsReplicas 18s (x3 over 1m) horizontal-pod-autoscaler failed to get container.googleapis.com|container|accelerator|duty_cycle external metric: unable to get external metric prod/container.googleapis.com|container|accelerator|duty_cycle/nil: no metrics returned from external metrics API
I don't really know how to go about debugging this. Does anyone know what might be wrong, or what I could do next?
You are using type: External. For metrics in the External Metrics list, you need to use kubernetes.io instead of container.googleapis.com [1].
Replace
metricName: container.googleapis.com|container|accelerator|duty_cycle
with
metricName: kubernetes.io|container|accelerator|duty_cycle
[1] https://cloud.google.com/monitoring/api/metrics_other#other-kubernetes.io
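Applying that substitution, the external metric block would look like this (a sketch; the rest of the HPA spec is unchanged from the question):

```yaml
metrics:
- type: External
  external:
    metricName: kubernetes.io|container|accelerator|duty_cycle
    targetAverageValue: 50
```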
This problem went away on its own once I placed the system under load. It's working fine now with the same configuration.
I'm not sure why. My best guess is that Stackdriver wasn't reporting a duty_cycle value until it went above 1%.