Difficulty configuring Horizontal Pod Autoscaler with external metric - kubernetes

I'm attempting to configure a Horizontal Pod Autoscaler to scale a deployment based on the duty cycle of attached GPUs.
I'm using GKE, and my Kubernetes master version is 1.10.7-gke.6 .
I'm working off the tutorial at https://cloud.google.com/kubernetes-engine/docs/tutorials/external-metrics-autoscaling . In particular, I ran the following command to set up custom metrics:
kubectl create -f https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-stackdriver/master/custom-metrics-stackdriver-adapter/deploy/production/adapter.yaml
This appears to have worked, or at least I can access a list of metrics at /apis/custom.metrics.k8s.io/v1beta1 .
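For example, that list can be retrieved directly from the API server (jq is only used to pretty-print the output):
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .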
This is my YAML:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: images-srv-hpa
spec:
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metricName: container.googleapis.com|container|accelerator|duty_cycle
      targetAverageValue: 50
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: images-srv-deployment
I believe that the metricName exists because it's listed in /apis/custom.metrics.k8s.io/v1beta1 , and because it's described on https://cloud.google.com/monitoring/api/metrics_gcp .
This is the error I get when describing the HPA:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedGetExternalMetric 18s (x3 over 1m) horizontal-pod-autoscaler unable to get external metric prod/container.googleapis.com|container|accelerator|duty_cycle/nil: no metrics returned from external metrics API
Warning FailedComputeMetricsReplicas 18s (x3 over 1m) horizontal-pod-autoscaler failed to get container.googleapis.com|container|accelerator|duty_cycle external metric: unable to get external metric prod/container.googleapis.com|container|accelerator|duty_cycle/nil: no metrics returned from external metrics API
I don't really know how to go about debugging this. Does anyone know what might be wrong, or what I could do next?

You are using type: External. For metrics in the External Metrics list, you need to use the kubernetes.io prefix instead of container.googleapis.com [1].
Replace
metricName: container.googleapis.com|container|accelerator|duty_cycle
with
metricName: kubernetes.io|container|accelerator|duty_cycle
[1] https://cloud.google.com/monitoring/api/metrics_other#other-kubernetes.io
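As a sketch, the corrected metrics section of the HPA would then look like this, with the rest of the spec unchanged:
  metrics:
  - type: External
    external:
      metricName: kubernetes.io|container|accelerator|duty_cycle
      targetAverageValue: 50
You can also check whether the metric shows up by querying the external metrics API directly (assuming the adapter registers it): kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1" | jq .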

This problem went away on its own once I placed the system under load. It's working fine now with the same configuration.
I'm not sure why. My best guess is that Stackdriver wasn't reporting a duty cycle value until it went above 1%.

Related

FailedGetPodsMetric for HPA autoscaling

I am trying to autoscale using custom metrics, with the metric type "http_requests". The following command shows correct output:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/" | jq
Below is my hpa.yaml file:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: http_requests
      targetAverageValue: 1
but my scaling is failing due to:
the HPA was unable to compute the replica count:
unable to get metric http_requests: unable to fetch metrics from custom metrics API: an error on the server
("Internal Server Error: \"/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%!A(MISSING)/http_requests?labelSelector=app%!D(MISSING)podinfo\": the server could not find the requested resource")
has prevented the request from succeeding (get pods.custom.metrics.k8s.io *)
Please help me out in this :)
Seems like you are missing pods in your cluster that match the provided deployment specification. Can you check whether your podinfo deployment is running, and whether it has healthy pods in it?
The command works because you're only checking the availability of the metrics endpoint. It merely shows that the endpoint is live and ready to provide metrics; it doesn't guarantee that you will receive metrics when there are no matching resources.
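A quick way to verify, assuming the podinfo deployment lives in the default namespace and its pods carry the label app=podinfo:
kubectl get deployment podinfo
kubectl get pods -l app=podinfo
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests" | jq .
If the last command returns an empty item list, the adapter is reachable but no running pod is reporting the metric yet.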

Autoscaling Deployments with Cloud Monitoring metrics

I am trying to auto-scale my pods based on Cloud SQL instance response time. We are using cloudsql-proxy for a secure connection.
I deployed the Custom Metrics Adapter:
https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-stackdriver/master/custom-metrics-stackdriver-adapter/deploy/production/adapter_new_resource_model.yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: application_name
spec:
  minReplicas: 1
  maxReplicas: 5
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: application_name
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metric:
        name: custom-metric-stackdriver-adapter
      target:
        type: AverageValue
        averageValue: 20
I deployed the application and created the HPA for it, but I am seeing this error:
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetPodsMetric the HPA was unable to compute the replica count: unable to get metric custom-metric: unable to fetch metrics from custom metrics API: the server could not find the descriptor for metric custom.googleapis.com/custom-metric: googleapi: Error 404: Could not find descriptor for metric 'custom.googleapis.com/custom-metric'., notFound
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedGetPodsMetric 4m22s (x10852 over 47h) horizontal-pod-autoscaler unable to get metric custom-metric: unable to fetch metrics from custom metrics API: the server could not find the descriptor for metric custom.googleapis.com/custom-metric: googleapi: Error 404: Could not find descriptor for metric 'custom.googleapis.com/custom-metric'., notFound
Please refer to the link below to deploy a HorizontalPodAutoscaler (HPA) resource to scale your application based on Cloud Monitoring metrics.
https://cloud.google.com/kubernetes-engine/docs/tutorials/autoscaling-metrics#custom-metric_4
Looks like the custom metric name differs between the app and HPA deployment configuration files (YAML). Metric and application names should be the same in both files.
In the HPA deployment YAML file (a corrected manifest is sketched after this list):
a. Replace custom-metric-stackdriver-adapter with custom-metric (or change the metric name to custom-metric-stackdriver-adapter in the app deployment YAML file).
b. Add "namespace: default" next to the application name under metadata. Also ensure you add the same namespace in the app deployment configuration file.
c. Delete the duplicate lines 6 & 7 (minReplicas: 1, maxReplicas: 5).
d. Go to Cloud Console -> Kubernetes Engine -> Workloads and delete the workloads (application-name and custom-metrics-stackdriver-adapter) created by the app deployment YAML and adapter_new_resource_model.yaml files.
e. Now apply the configurations again: resource model, app, and HPA (YAML files).
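As a sketch, the corrected HPA manifest with those points applied would look like this, assuming the application deployment is named application_name and exports the metric as custom-metric:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: application_name
  namespace: default
spec:
  minReplicas: 1
  maxReplicas: 5
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: application_name
  metrics:
  - type: Pods
    pods:
      metric:
        name: custom-metric
      target:
        type: AverageValue
        averageValue: 20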

HPA cannot get metrics due to 403 errors

I used the following metrics inside the HPA:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: app-svc-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: app-svc
  minReplicas: 1
  maxReplicas: 1000
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  - type: Pods
    pods:
      metric:
        name: packets-per-second
      target:
        type: AverageValue
        averageValue: 1k
But the HPA is unable to get the metrics:
Warning FailedGetPodsMetric 14s (x6 over 1m) horizontal-pod-autoscaler unable to get metric packets-per-second: unable to fetch metrics from custom metrics API: the server could not find the descriptor for metric custom.googleapis.com/packets-per-second: googleapi: Error 403: Permission monitoring.metricDescriptors.get denied (or the resource may not exist)., forbidden
I am running the pods on a dedicated node pool, and each node runs under a service account.
The service account does have these IAM roles:
Monitoring Viewer,
Monitoring Metrics Writer
Unsure how to fix this error. Any pointers are greatly appreciated. Thanks.
I had a cluster with Workload Identity enabled. Apparently the metrics fetch fails when a cluster has Workload Identity enabled.
1) I had to install the custom Stackdriver adapter and create the custom metric, as pointed out by David Kruk in his comments.
2) I had to add hostNetwork: true in the custom Stackdriver adapter deployment pod spec. The issue is mentioned in the GitHub repository for the adapter.
With these two updates, the autoscaler works as expected.
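As a sketch of step 2, assuming the adapter is deployed as custom-metrics-stackdriver-adapter in the custom-metrics namespace (the names used by the stock adapter manifest), the setting can be patched in with:
kubectl patch deployment custom-metrics-stackdriver-adapter \
  --namespace custom-metrics \
  --type merge \
  --patch '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'
Alternatively, add hostNetwork: true under spec.template.spec in the adapter's deployment YAML and re-apply it.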

How to make HPA scale a deployment based on metrics produced by another deployment

What I am trying to achieve is a Horizontal Pod Autoscaler able to scale worker pods according to a custom metric produced by a controller pod.
I already have Prometheus scraping, the Prometheus Adapter, and the Custom Metrics Server fully operational, and scaling the worker deployment with a custom metric my_controller_metric produced by the worker pods already works.
Now my worker pods don't produce this metric anymore, but the controller does.
It seems that the autoscaling/v1 API does not support this feature. I am able to specify the HPA with the autoscaling/v2beta1 API if necessary, though.
Here is my spec for this HPA:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: my-worker-hpa
  namespace: work
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: my-worker-deployment
  metrics:
  - type: Object
    object:
      target:
        kind: Deployment
        name: my-controller-deployment
      metricName: my_controller_metric
      targetValue: 1
When the configuration is applied with kubectl apply -f my-worker-hpa.yml I get the message:
horizontalpodautoscaler "my-worker-hpa" configured
Though this message seems to be OK, the HPA does not work. Is this spec malformed?
As I said, the metric is available in the Custom Metrics Server via kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . | grep my_controller_metric.
This is the error message from the HPA:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetObjectMetric the HPA was unable to compute the replica count: unable to get metric my_controller_metric: Deployment on work my-controller-deployment/unable to fetch metrics from custom metrics API: the server could not find the metric my_controller_metric for deployments
Thanks!
In your case the problem is the HPA configuration: spec.metrics.object.target should also specify the API version.
Putting apiVersion: extensions/v1beta1 under spec.metrics.object.target should fix it.
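As a sketch, the corrected metrics section would look like this, with the rest of the spec unchanged:
  metrics:
  - type: Object
    object:
      target:
        apiVersion: extensions/v1beta1
        kind: Deployment
        name: my-controller-deployment
      metricName: my_controller_metric
      targetValue: 1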
In addition, there is an open issue about better config validation in HPA: https://github.com/kubernetes/kubernetes/issues/60511

Resource limit breaching with higher numbers of concurrent pods in Kubernetes

One of our microservices (a worker component, short-lived by nature) is deployed on K8s pods in an autoscaling fashion, and this number sometimes goes up to a few thousand depending on load. The worker has to make connections to various persistent services, and since these services come with resource limits, we are getting bottlenecked at the access level. So my question is: does Kubernetes have something (similar to a gateway/proxy) that can multiplex requests so they stay under the resource limits? Let's say every pod makes a connection to a MySQL server with an active connection limit of 50; if each pod requires 1 MySQL connection, then we cannot spin up more than 50 pods concurrently.
You can set up a pod quota for a namespace.
If you can run those pods in a separate namespace, you can limit the number of running pods by creating a ResourceQuota object; let's call it quota-pod.yaml:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-demo
spec:
  hard:
    pods: "2"
kubectl create -f quota-pod.yaml --namespace=quota-pod-example
If you check kubectl get resourcequota pod-demo --namespace=quota-pod-example --output=yaml, you would get something like:
spec:
  hard:
    pods: "2"
status:
  hard:
    pods: "2"
  used:
    pods: "0"
In the description of, for example, a 3-replica nginx deployment you would see:
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 2m deployment-controller Scaled up replica set nginx-1-7cb5b65464 to 3
Normal ScalingReplicaSet 16s deployment-controller Scaled down replica set nginx-1-7cb5b65464 to 1
And kubectl get deployment nginx -o yaml would show:
...
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-12-05T10:42:45Z
    lastUpdateTime: 2018-12-05T10:42:45Z
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: 2018-12-05T10:42:45Z
    lastUpdateTime: 2018-12-05T10:42:45Z
    message: 'pods "nginx-6bd764c757-4gkfq" is forbidden: exceeded quota: pod-demo,
      requested: pods=1, used: pods=2, limited: pods=2'
I recommend checking K8s docs Create a ResourceQuota for more information.
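To see the quota blocking new pods in practice, assuming the deployment is named nginx and runs in the quota-pod-example namespace, scale it past the quota and check the events:
kubectl scale deployment nginx --replicas=3 --namespace=quota-pod-example
kubectl get events --namespace=quota-pod-example | grep forbidden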