Empty metricLabels map for Kubernetes external metrics

I am currently trying to set up a horizontal pod autoscaler for my application running inside Kubernetes. The HPA relies on external metrics that are fetched from Prometheus by a Prometheus adapter (https://github.com/kubernetes-sigs/prometheus-adapter).
The metrics are fetched by the adapter and made available to the Kubernetes metrics API successfully, but the metricLabels map is empty, making it impossible for the HPA to associate the correct metrics with the correct pod.
E.g., a query to the metrics API:
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/<namespace>/batchCommandsActive_totalCount/"
{"kind":"ExternalMetricValueList","apiVersion":"external.metrics.k8s.io/v1beta1","metadata":{},"items":[{"metricName":"batchCommandsActive_totalCount",**"metricLabels":{}**,"timestamp":"2023-02-10T11:38:48Z","value":"0"}]}
Those metrics should have three labels associated with them (hostname, localnode and path) in order for the correct pod to retrieve them.
Here is an extract of the Prometheus adapter ConfigMap that defines the queries the adapter makes to Prometheus:
- seriesQuery: '{__name__="batchCommandsActive_totalCount",hostname!="",localnode!="",path!=""}'
  metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (name)
  resources:
    namespaced: false
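Note that sum(<<.Series>>{<<.LabelMatchers>>}) by (name) aggregates away every label except name, which would leave the result without the hostname, localnode and path labels. A sketch of a rule that instead groups by the three expected labels (unverified against this setup):

- seriesQuery: '{__name__="batchCommandsActive_totalCount",hostname!="",localnode!="",path!=""}'
  metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (hostname, localnode, path)
  resources:
    namespaced: false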
Thanks for your help!
So far, no answer on Stack Overflow or tutorial (e.g. https://github.com/kubernetes-sigs/prometheus-adapter/blob/master/docs/walkthrough.md) has helped with my problem.

Related

Horizontal Pod Autoscaling using REST API exposed by the application in container

I am using minikube on Windows; there is only one node, "master".
The deployed Spring Boot application has a REST endpoint that reports the number of clients it is currently serving. I would like to scale out horizontally, i.e. automatically spin up a pod, when the number of requests reaches some limit.
Let's say:
There is 1 pod in the cluster.
If the request count reaches 50 for Pod 1, spin up a new pod (Pod 2).
If the request count reaches 50 for both Pod 1 and Pod 2, spin up a new pod (Pod 3).
I tried researching how to achieve this but was not able to figure it out.
All I could find was scaling out based on CPU usage with the HorizontalPodAutoscaler (HPA).
It would be helpful to receive guidance on how to achieve this using the Kubernetes HPA.
I believe you can start from the autoscaling on custom metrics article. As far as I can see, this is achievable using custom metrics in conjunction with the Prometheus Adapter for Kubernetes Metrics APIs (an implementation of the custom.metrics.k8s.io API using Prometheus).
Prometheus Adapter for Kubernetes Metrics APIs repo contains an implementation of the Kubernetes resource metrics API and custom metrics API.
This adapter is therefore suitable for use with the autoscaling/v2 Horizontal Pod Autoscaler in Kubernetes 1.6+.
Info from autoscaling on custom metrics:
Notice that you can specify other resource metrics besides CPU. By default, the only other supported resource metric is memory. These resources do not change names from cluster to cluster, and should always be available, as long as the metrics.k8s.io API is available.
The first of these alternative metric types is pod metrics. These metrics describe pods, and are averaged together across pods and compared with a target value to determine the replica count. They work much like resource metrics, except that they only support a target type of AverageValue.
Pod metrics are specified using a metric block like this:
type: Pods
pods:
  metric:
    name: packets-per-second
  target:
    type: AverageValue
    averageValue: 1k
The second alternative metric type is object metrics. These metrics describe a different object in the same namespace, instead of describing pods. The metrics are not necessarily fetched from the object; they only describe it. Object metrics support target types of both Value and AverageValue. With Value, the target is compared directly to the returned metric from the API. With AverageValue, the value returned from the custom metrics API is divided by the number of pods before being compared to the target. The following example is the YAML representation of the requests-per-second metric.
type: Object
object:
  metric:
    name: requests-per-second
  describedObject:
    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    name: main-route
  target:
    type: Value
    value: 2k
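For the request-count scenario in the question, a minimal sketch of an autoscaling/v2 HPA scaling on a per-pod custom metric; the names myapp and active_clients are placeholders, and the metric is assumed to be exposed through the Prometheus adapter:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp                # placeholder Deployment name
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metric:
        name: active_clients   # hypothetical per-pod metric served by the adapter
      target:
        type: AverageValue
        averageValue: "50"     # add a replica once the per-pod average reaches 50

With this spec, the HPA averages active_clients across the current pods and adds replicas whenever that average exceeds 50, which matches the "spin up a new pod at 50 requests" behaviour described above.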
The following may also be helpful for future investigation:
Autoscaling on more specific metrics
Autoscaling on metrics not related to Kubernetes objects
Hope it helps

HPA using Kafka Exporter in on premise Kubernetes cluster

I have been trying to implement Kubernetes HPA using metrics from kafka-exporter. The HPA supports Prometheus, so we tried writing the metrics to a Prometheus instance. From there, we are unclear on the steps to take. Is there an article that explains this in detail?
I followed https://medium.com/google-cloud/kubernetes-hpa-autoscaling-with-kafka-metrics-88a671497f07 for the same setup in GCP, where we used Stackdriver, and the implementation worked like a charm. But we are struggling with the on-premise setup, as Stackdriver needs to be replaced by Prometheus.
In order to scale based on custom metrics, Kubernetes needs an API it can query for those metrics, and that API needs to implement the custom metrics interface.
So for Prometheus, you need to set up an API that exposes Prometheus metrics through the custom metrics API. Luckily, there already is an adapter.
When I implemented Kubernetes HPA using metrics from kafka-exporter I had a few setbacks, which I solved as follows:
I deployed the kafka-exporter container as a sidecar to the pods I wanted to scale. I found that the HPA scales the pod it gets the metrics from.
I used annotations to make Prometheus scrape the metrics from the pods running the exporter (see the sketch after this list).
Then I verified that the kafka-exporter metrics are reaching Prometheus. If they are not there, you can't advance further.
I deployed the Prometheus adapter using its Helm chart. The adapter "translates" Prometheus's metrics into the custom metrics API, which makes them visible to the HPA.
I made sure that the metrics are visible in Kubernetes by executing kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 from one of the master nodes.
I created an HPA with the matching metric name.
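A sketch of the scrape annotations from the second step, set on the pod template; this assumes the Prometheus scrape config honours the common prometheus.io/* annotations, and 9308 is kafka-exporter's default metrics port:

spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9308"     # kafka-exporter's default metrics port
        prometheus.io/path: "/metrics"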
Here is a complete guide explaining how to implement Kubernetes HPA using Metrics from Kafka-exporter
Please comment if you have more questions

Prometheus is not collecting pod metrics

I deployed Prometheus and Grafana into my cluster.
When I open the dashboards I don't get data for pod CPU usage.
When I check Prometheus UI, it shows pods 0/0 up, however I have many pods running in my cluster.
What could be the reason? I have node-exporter running on all of the nodes.
I am getting this for kube-state-metrics:
I0218 14:52:42.595711 1 builder.go:112] Active collectors: configmaps,cronjobs,daemonsets,deployments,endpoints,horizontalpodautoscalers,jobs,limitranges,namespaces,nodes,persistentvolumeclaims,persistentvolumes,poddisruptionbudgets,pods,replicasets,replicationcontrollers,resourcequotas,secrets,services,statefulsets
I0218 14:52:42.595735 1 main.go:208] Starting metrics server: 0.0.0.0:8080
Here is my Prometheus config file:
https://gist.github.com/karthikeayan/41ab3dc4ed0c344bbab89ebcb1d33d16
I'm able to hit and get data for:
http://localhost:8080/api/v1/nodes/<my_worker_node>/proxy/metrics/cadvisor
As mentioned by karthikeayan in the comments:
ok, I found something interesting in the values.yaml comments: prometheus.io/scrape: Only scrape pods that have a value of true. When I removed this relabel_config from the k8s ConfigMap, I got the data in the Prometheus UI. Unfortunately, the k8s ConfigMap doesn't have the comments; I believe Helm removes them before deploying.
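For reference, the relabel rule being discussed is usually the standard "keep" rule from the example Kubernetes scrape configs; a sketch of its common form (not necessarily the exact contents of this ConfigMap):

relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
  action: keep
  regex: true   # drop every pod that lacks the prometheus.io/scrape: "true" annotation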
And just for clarification:
kube-state-metrics vs. metrics-server
The metrics-server is a project that has been inspired by Heapster and is implemented to serve the goals of the Kubernetes Monitoring Pipeline. It is a cluster-level component which periodically scrapes metrics from all Kubernetes nodes served by the Kubelet through the Summary API. The metrics are aggregated, stored in memory and served in the Metrics API format. The metrics-server stores the latest values only and is not responsible for forwarding metrics to third-party destinations.
kube-state-metrics is focused on generating completely new metrics from Kubernetes' object state (e.g. metrics based on deployments, replica sets, etc.). It holds an entire snapshot of Kubernetes state in memory and continuously generates new metrics based off of it. And just like the metrics-server, it too is not responsible for exporting its metrics anywhere.
Having kube-state-metrics as a separate project also enables access to these metrics from monitoring systems such as Prometheus.

Prometheus Metrics - Use for autoscaling

I've setup prometheus to collect metrics from my pods and nodes.
I've also setup the prometheus custom metrics adapter.
How can I use the metrics provided by Prometheus to autoscale my pods? I tried to google it, but I only found examples of custom pods that expose their own metrics on a /metrics URL. I would like to be able to autoscale any of my pods that already have a Prometheus metric, based on CPU or memory usage.
I can visualize all the metrics in Grafana for all my pods and nodes, but I can't find a way to use them for autoscaling.
You need to create an HPA (Horizontal Pod Autoscaler)
More info here
This is a good tool showing you how to use an HPA with custom metrics, either using the K8s metrics server or Prometheus.
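For the resource-metrics side of the question (CPU or memory), a minimal HPA sketch; the Deployment name web is a placeholder:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                  # placeholder Deployment name
  minReplicas: 2
  maxReplicas: 8
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70 # scale out above 70% of requested CPU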

Kubernetes pod metrics

There are three levels of metrics collection to consider in Kubernetes - Node, Pod and the Application that runs in the pod.
For Node and Application metrics I have solutions that work wonderfully, but I am stuck on pod metrics.
I have tried cAdvisor and Kube state metrics but none of them give me what I want. Kube state metrics only gives information that is already known like pod CPU limits and requests. cAdvisor doesn't insert pod labels to container names so I have no means of knowing which pod is misbehaving.
Given a pod, I'd like to know its CPU, memory and storage usage, both with respect to the pod itself and with respect to the node it is scheduled on.
I am using prometheus to collect metrics via the prometheus operator CRD.
Can anyone help suggest an open source metrics exporter that would do the job I mentioned above?
The standard metrics collector is Heapster. It comes preinstalled with many vendors, such as GKE. With Heapster installed, you can just run kubectl top pods to see CPU/memory metrics on the client side. You can plug it into a sink to store the results for archival.
https://github.com/kubernetes/heapster
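On the cAdvisor label point in the question: the kubelet's cAdvisor endpoint does attach pod labels to its container metrics (pod_name/container_name on older clusters, pod/container on newer ones), so per-pod usage can be queried directly in Prometheus. A sketch, assuming the older label names:

# CPU cores consumed per pod over the last 5 minutes
sum(rate(container_cpu_usage_seconds_total{container_name!="POD"}[5m])) by (namespace, pod_name)
# working-set memory per pod
sum(container_memory_working_set_bytes{container_name!="POD"}) by (namespace, pod_name)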