Kubernetes HPA using metrics from another deployment - kubernetes

I'm currently trying to run an autoscaling demo using Prometheus and the Prometheus adapter, and I was wondering whether there is a way to autoscale one of my deployments based on metrics that Prometheus scrapes from another deployment.
What I have right now are two different deployments: kafka-consumer-application (which I want to scale) and kafka-exporter (which exposes the Kafka metrics I'll be using for scaling). I know that the autoscaling works if I run both of them as containers in the same deployment, but then kafka-exporter gets autoscaled as well, which is not ideal, so I want to separate them. I tried the following HPA but could not get it to work:
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: consumer-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kafka-consumer-application
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: object
    object:
      target: kafka-exporter
      metricName: "kafka_consumergroup_lag"
      targetValue: 5
I'm not sure if I'm doing something wrong or if this is just not something I can do, so any advice is appreciated.
Thanks!
Note: I'm running the adapter with this config:
rules:
  default: false
  resource: {}
  custom:
  - seriesQuery: 'kafka_consumergroup_lag'
    resources:
      overrides:
        kubernetes_namespace: {resource: "namespace"}
        kubernetes_pod_name: {resource: "pod"}
    name:
      matches: "kafka_consumergroup_lag"
      as: "kafka_consumergroup_lag"
    metricsQuery: 'avg_over_time(kafka_consumergroup_lag{topic="my-topic",consumergroup="we-consume"}[1m])'

In the Kubernetes documentation you can read:
Autoscaling on metrics not related to Kubernetes objects
Applications running on Kubernetes may need to autoscale based on metrics that don’t have an obvious relationship to any object in the Kubernetes cluster, such as metrics describing a hosted service with no direct correlation to Kubernetes namespaces. In Kubernetes 1.10 and later, you can address this use case with external metrics
So using external metrics, your HPA YAML could look like the following:
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta2
metadata:
  name: consumer-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kafka-consumer-application
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: kafka_consumergroup_lag
        # selector:
        #   matchLabels:
        #     topic: "my-topic"
      target:
        type: AverageValue
        averageValue: 5
If you have more than one kafka-exporter, you can use selector to filter it (source):
selector is the string-encoded form of a standard Kubernetes label selector for the given metric. When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics.
Also have a look at this Stack question.
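For intuition on how the averageValue target above drives scaling, the core of the HPA controller's calculation for an AverageValue target can be sketched in a few lines of Python (a simplification of the real controller, which also applies a tolerance and stabilization windows):

```python
import math

def desired_replicas(total_metric: float, target_average: float,
                     min_replicas: int, max_replicas: int) -> int:
    """Sketch of the HPA calculation for an External metric with an
    AverageValue target: divide the metric total by the target average,
    round up, then clamp to the min/max replica bounds."""
    raw = math.ceil(total_metric / target_average)
    return max(min_replicas, min(max_replicas, raw))

# With a total consumer-group lag of 42 and averageValue: 5,
# the HPA would ask for ceil(42 / 5) = 9 replicas.
print(desired_replicas(42, 5, 1, 10))  # 9
```

This is why the metric does not need to come from the scaled pods themselves: for External metrics, only the metric total and the target matter.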

Related

Auto scale MongoDB Community Operator Replicaset

I have a three-node MongoDB cluster in GCP, deployed using the MongoDB Community Operator. It is working fine, but I need to set up autoscaling. I tried it with the HPA Kubernetes object:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: mongodb-hpa
spec:
  maxReplicas: 5
  minReplicas: 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: mongodb-dev
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
The HPA collects stats and tries to scale up/down, but the pod created during scale-up is suddenly deleted and the replica count goes back to 3.
Is this done by the operator?
How do I achieve this autoscaling feature?
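One likely explanation (an assumption, not verified against this cluster): the MongoDB Community Operator owns the StatefulSet and reconciles it back to the member count declared in the MongoDBCommunity resource, so replicas added by the HPA are removed on the next reconcile. If so, the replica count has to be changed through the operator's own field rather than the StatefulSet:

```yaml
# Sketch: the operator reconciles the StatefulSet to spec.members,
# so this field controls the replica count, not the StatefulSet's
# spec.replicas that the HPA modifies.
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: mongodb-dev
spec:
  members: 3   # the operator enforces this value
```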

Unable to fetch metrics from custom metrics API

I am using the Prometheus Kubernetes adapter to scale up an application.
I have a service nginx running at port 8080 exposing the metric requests.
My hpa.yaml is as follows:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: requests
      targetAverageValue: 10
This is working. But I want to scale up a different deployment using requests as the metric, so I changed my HPA to:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx2
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      target:
        kind: Service
        name: nginx
      metricName: requests
      targetValue: 10
But it gives the errors: "invalid metrics (1 invalid out of 1), first error is: failed to get pods metric value: unable to get metric requests: no metrics returned from custom metrics API" and
"the HPA was unable to compute the replica count: unable to get metric requests: no metrics returned from custom metrics API"
So I cannot scale up the other application based on nginx. I want to use the nginx service to scale up other deployments.
Any suggestions? Where am I going wrong?
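One common cause (an assumption, since the adapter config isn't shown here): an Object metric targeting a Service only returns data if the adapter associates the requests series with the Service resource. If the adapter rule only maps namespace and pod labels, a service-scoped query comes back empty. A sketch of a rule that also maps a service label (the kubernetes_service label name is hypothetical and must match whatever label Prometheus actually attaches to the series):

```yaml
rules:
  custom:
  - seriesQuery: 'requests'
    resources:
      overrides:
        kubernetes_namespace: {resource: "namespace"}
        kubernetes_service: {resource: "service"}  # hypothetical label name
    name:
      matches: "requests"
      as: "requests"
    metricsQuery: 'sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)'
```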

HorizontalPodAutoscaler scaling based on custom metrics - node-pool level metric

I am currently trying to set up a GKE cluster and configure a HorizontalPodAutoscaler based on a custom metric (GPU consumption).
I have two node pools and I want to horizontally scale them based on the average GPU consumption of each node pool. I have configured two identical HPAs like this:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: ner
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ner
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: kubernetes.io|container|accelerator|duty_cycle
      target:
        type: AverageValue
        averageValue: 60
where I only replace the scaleTargetRef, but it turns out this metric seems to be aggregated at the cluster level. I have double-checked that the scaleTargetRefs are properly defined.
Is there a way to filter the metrics by container_name or node_pool? Any other suggestion would be awesome!
So I think you are looking for metrics for your k8s cluster, especially by container_name or node_pool.
There are five types of metrics you can use in an HPA object (autoscaling/v2beta2):
kubectl explain HorizontalPodAutoscaler.spec.metrics.type --api-version=autoscaling/v2beta2
ContainerResource
External   # use this if the metric is not related to Kubernetes objects
Object
Pods
Resource
For example, using ContainerResource:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: ner
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ner
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: ContainerResource
    containerResource:
      name: gpu
      container: your-application-container
      target:
        type: Utilization
        averageUtilization: 60
Edit: for GKE, see Autoscaling Deployments with Cloud Monitoring metrics.

Not able to use the advanced behavior config in a GKE cluster, even with the latest Kubernetes version

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: test
spec:
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 0
      policies:
      - type: Percent
        value: 10
        periodSeconds: 15
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
      # - type: Percent
      #   value: 100
      #   periodSeconds: 15
      - type: Pods
        value: 5
        periodSeconds: 15
  maxReplicas: 30
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
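As an aside on what the behavior block above expresses, the per-period limits can be sketched like this (a simplification of the assumed semantics: a Percent policy caps the change relative to the current replica count, a Pods policy caps it in absolute pods, and by default the most permissive policy wins):

```python
import math

def max_scale_up(current: int, policies) -> int:
    """Max replicas that may be added in one period, given a list of
    (type, value) policies; the most permissive policy wins."""
    allowed = []
    for ptype, value in policies:
        if ptype == "Pods":
            allowed.append(value)
        elif ptype == "Percent":
            allowed.append(math.ceil(current * value / 100))
    return max(allowed)

# scaleUp policy from the manifest above: at most 5 pods per 15s period
print(max_scale_up(10, [("Pods", 5)]))                    # 5
# the commented-out Percent policy would allow doubling: 100% of 10 = 10
print(max_scale_up(10, [("Pods", 5), ("Percent", 100)]))  # 10
```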
As per the official Kubernetes documentation, the HPA behavior field is available from Kubernetes v1.18, but GKE has its own versioning. It also serves API version autoscaling/v2beta2, yet behavior is not supported.
GKE VERSION: 1.16.13-gke.1
Am I the only one facing this issue?
Yes, you are right. GKE has its own versioning. You can find more details here.
Note: The Kubernetes API is versioned separately from Kubernetes itself. Refer to the Kubernetes API documentation for information about Kubernetes API versioning.
Unfortunately, GKE does not support the behavior parameter in apiVersion: autoscaling/v2beta2.
error: error validating "hpa.yaml": error validating data: ValidationError(HorizontalPodAutoscaler.spec): unknown field "behavior" in io.k8s.api.autoscaling.v2beta2.HorizontalPodAutoscalerSpec; if you choose to ignore these errors, turn validation off with --validate=false
However, it can be freely used with kubeadm and Minikube with Kubernetes 1.18+.
There is already a Public Issue Tracker related to this issue. You can add yourself to CC in this PIT to get updates on it.
If you are on GKE and the enabled APIs are
autoscaling/v1
autoscaling/v2beta1
(GKE versions around 1.12 to 1.14), you won't be able to apply a manifest of autoscaling/v2beta2; however, you can achieve the same with something like:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: core-deployment
  namespace: default
spec:
  maxReplicas: 9
  minReplicas: 5
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: core-deployment
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageValue: 500m
If you want to scale based on utilization:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: core-deployment
  namespace: default
spec:
  maxReplicas: 9
  minReplicas: 5
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: core-deployment
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 80

How to autoscale Kubernetes Pods based on average memory usage in EKS?

I am running an EKS cluster with a HorizontalPodAutoscaler that scales the number of pods based on average CPU utilisation.
How do I do the same for average memory utilisation?
Suppose all of the pods running in the EKS cluster have used on average 70% of the memory they are allocated (via resources); then the deployment should be autoscaled.
How do I do this? Is creating a custom metric in CloudWatch the only way?
Even if CloudWatch is the only way, how do I do that? Is there specific documentation, a tutorial, or a blog that covers this?
Please try the below HPA configuration object.
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: nginx
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: memory
      targetAverageUtilization: 70
and apply the object using kubectl apply.
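One thing worth noting: targetAverageUtilization for memory is computed against the memory request of the pod's containers, so the target Deployment must declare memory requests or the HPA cannot compute a percentage. A minimal sketch of the relevant part of the Deployment spec (names assumed to match the HPA above):

```yaml
# Utilization is (actual usage / requests.memory) * 100, so this
# request is what the 70% target in the HPA is measured against.
spec:
  template:
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          requests:
            memory: "256Mi"
          limits:
            memory: "512Mi"
```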