Custom Metrics API service install for Kubernetes cluster

We are planning to use the Kubernetes Horizontal Pod Autoscaler, and for that we need to install the Custom Metrics API.
Can someone please explain the different ways to install the Custom Metrics API on a Kubernetes cluster?

As you are using EKS with Prometheus, the best source of knowledge is the AWS documentation.
Do I need the Prometheus Adapter to register the custom metrics API?
Yes, you need at least Prometheus and Prometheus Adapter.
Prometheus: scrapes pods and stores metrics
Prometheus metrics adapter: queries Prometheus and exposes metrics for the Kubernetes custom metrics API
Metrics server: collects pod CPU and memory usage and exposes metrics for the Kubernetes resource metrics API
Without Custom Metrics or External Metrics, you can only use metrics based on CPU or Memory.
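For illustration, here is a minimal sketch of what an HPA backed by the custom metrics API could look like once Prometheus and the Prometheus Adapter are registered; the metric name http_requests_per_second and the Deployment name my-app are placeholders for whatever your adapter actually exposes:

apiVersion: autoscaling/v2beta2   # autoscaling/v2 on Kubernetes 1.23+
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                  # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods                    # served by the custom metrics API (Prometheus Adapter)
    pods:
      metric:
        name: http_requests_per_second   # example metric name exposed by the adapter
      target:
        type: AverageValue
        averageValue: "100"

You can list the metric names your adapter actually serves with kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1.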
In the Autoscaling Amazon EKS services based on custom Prometheus metrics using CloudWatch Container Insights article, it is stated:
The custom metrics gathered by Prometheus can be exposed to the autoscaler using a Prometheus Adapter as outlined in the blog post titled Autoscaling EKS on Fargate with custom metrics.
In the Autoscaling EKS on Fargate with custom metrics blog post you will also find examples of autoscaling based on CPU usage, App Mesh traffic, or HTTP traffic.
Additional documentation
Control plane metrics with Prometheus
Why can't I collect metrics from containers, pods, or nodes using Metrics Server in Amazon EKS?
Install the CloudWatch agent with Prometheus metrics collection on Amazon EKS and Kubernetes clusters

Related

Using multiple custom metrics adapters in Kubernetes

I am using GKE.
I have a cluster which is using stackdriver-adapter to get GCP metrics inside the cluster. I am using these metrics to create HPAs. This is working fine.
But now I need to create HPAs on metrics which are provided by Prometheus. I am trying to launch prometheus-adapter, but it is failing because the API service has already been created by stackdriver-adapter. But if I delete the stackdriver-adapter, my existing HPAs will fail.
Can we have both prometheus-adapter and stackdriver-adapter running in the same cluster?
If not, I guess we need to send the Prometheus metrics to Stackdriver? But wouldn't that be slow?
As said in the comments:
Have a look at the documentation Using Prometheus, you'll find there how to install Prometheus and get external metrics. After that, follow the documentation Custom and external metrics for autoscaling workloads to configure HPA.
You can configure a sidecar to the Prometheus server that will send the data from the Prometheus to the Stackdriver. From this point you will be able to use the Prometheus metrics as External metrics when configuring the HPA.
You will need to check the following requirements before "installing" the collector:
You must be running a compatible Prometheus server and have configured it to monitor the applications in your cluster. To learn how to install Prometheus on your cluster, refer to the Prometheus Getting Started guide.
You must have configured your cluster to use Cloud Operations for GKE. For instructions, see Installing Cloud Operations for GKE.
You must have the Kubernetes Engine Cluster Admin role for your cluster. For more information, see GKE roles.
You must ensure that your service account has the proper permissions. For more information, see Use Least Privilege Service Accounts for your Nodes.
-- Cloud.google.com: Stackdriver: Solutions: GKE: Prometheus: Before you begin
For testing purposes of installing Prometheus and configuring the data transfer to the Stackdriver, I used the script from:
Github.com: Stackdriver: Stackdriver-prometheus-sidecar
Steps:
download the repository:
$ git clone https://github.com/Stackdriver/stackdriver-prometheus-sidecar.git
set the following environment variables (values are examples):
export KUBE_NAMESPACE="prometheus"
export KUBE_CLUSTER="gke-prometheus"
export GCP_REGION="europe-west3-c"
export GCP_PROJECT="awesome-project-12345"
export SIDECAR_IMAGE_TAG="0.8.0"
SIDECAR_IMAGE_TAG can be found here:
Gcr.io: Stackdriver-prometheus: Stackdriver prometheus sidecar
run the script:
kube/full/deploy.sh
After successfully spawning Prometheus with a Stackdriver sidecar you should be able to see the metrics in the Cloud Console:
GCP Cloud Console (Web UI) -> Monitoring -> Metrics Explorer
From this point you can follow the guide for configuring HPA and set your External metric as the source for autoscaling your Deployment/Statefulset:
Cloud.google.com: Kubernetes Engine: Tutorials: Autoscaling metrics
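As a rough sketch, assuming the sidecar has already shipped a Prometheus metric to Stackdriver, the HPA then references it as an External metric. The metric name below is only a placeholder; copy the exact name shown in Metrics Explorer, replacing "/" with "|" as the external metrics API requires:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                  # hypothetical workload to scale
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: External
    external:
      metric:
        name: external.googleapis.com|prometheus|http_requests_total   # placeholder name
      target:
        type: AverageValue
        averageValue: "30"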
Additional resources:
Kubernetes.io: Horizontal Pod Autoscaler
Cloud.google.com: Custom and external metrics for autoscaling workloads

How to add metrics-server inside a helm-couchdb application

When trying to do Horizontal Pod Autoscaling I'm getting failed to get memory utilization: unable to get metrics for resource memory: no metrics returned from resource metrics API. How can I solve this problem?
As far as I understand, before using HPA you have to install metrics-server. More in the docs and links below.
Before you begin
This example requires a running Kubernetes cluster and kubectl, version 1.2 or later. metrics-server monitoring needs to be deployed in the cluster to provide metrics via the resource metrics API, as Horizontal Pod Autoscaler uses this API to collect metrics. The instructions for deploying this are on the GitHub repository of metrics-server, if you followed getting started on GCE guide, metrics-server monitoring will be turned-on by default.
Kubernetes metrics server
Metrics Server collects resource metrics from Kubelets and exposes them in Kubernetes apiserver through Metrics API for use by Horizontal Pod Autoscaler and Vertical Pod Autoscaler.
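Once metrics-server is running, a memory-based HPA like the one behind the error above only needs the plain resource metrics API. A minimal sketch, assuming the helm-couchdb release deploys a StatefulSet named couchdb (adjust names to your release):

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: couchdb-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet             # adjust to whatever the chart actually deploys
    name: couchdb                 # placeholder workload name
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 70

Note that a Utilization target only works if the pods declare memory requests.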
Additional links:
https://www.cloudtechnologyexperts.com/autoscaling-microservices-in-kubernetes-with-horizontal-pod-autoscaler/
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
https://www.youtube.com/watch?v=dALta9zQkfs

How to integrate Prometheus with Kubernetes where both are running on different hosts?

My Prometheus server is running on a different server. I also have a Kubernetes cluster. So I need to monitor Kubernetes pod metrics using Prometheus running on a different server.
To monitor an external cluster I would take advantage of a Prometheus federation topology.
In your Kubernetes cluster, install node-exporter pods and configure Prometheus with short-term storage.
Expose the Prometheus service (you can follow this guide) outside of the Kubernetes cluster; this can be done either by an LB or a node port.
Configure the external Prometheus server to scrape metrics from the Kubernetes endpoints, configuring them with the correct tags and proper authentication.
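For the federation part, here is a sketch of the scrape job on the external Prometheus server; the target address is a placeholder for whatever LB or NodePort endpoint you exposed for the in-cluster Prometheus:

# prometheus.yml on the external (long-term) Prometheus server
scrape_configs:
  - job_name: 'federate'
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="kubernetes-pods"}'            # example selector; adjust to the jobs you want to federate
    static_configs:
      - targets:
          - 'k8s-prometheus.example.com:9090'  # placeholder for the exposed in-cluster Prometheus
    # add basic_auth / tls_config here if the endpoint is protected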

HPA using Kafka Exporter in an on-premise Kubernetes cluster

I have been trying to implement Kubernetes HPA using metrics from Kafka-exporter. HPA supports Prometheus, so we tried writing the metrics to a Prometheus instance. From there, we are unclear on the steps to take. Is there an article that explains this in detail?
I followed https://medium.com/google-cloud/kubernetes-hpa-autoscaling-with-kafka-metrics-88a671497f07
for the same in GCP, where we used Stackdriver, and the implementation worked like a charm. But we are struggling with the on-premise setup, as Stackdriver needs to be replaced by Prometheus.
In order to scale based on custom metrics, Kubernetes needs an API it can query for those metrics. That API needs to implement the custom metrics interface.
So for Prometheus, you need to set up an API that exposes Prometheus metrics through the custom metrics API. Luckily, there already is an adapter.
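A rough sketch of what the adapter's Helm values could look like for a kafka-exporter metric; the Prometheus address and the rule below are only examples and depend on your setup:

# values.yaml for the prometheus-adapter Helm chart (example only)
prometheus:
  url: http://prometheus-server.monitoring.svc   # placeholder Prometheus service
  port: 80
rules:
  custom:
    - seriesQuery: 'kafka_consumergroup_lag{namespace!="",pod!=""}'
      resources:
        overrides:
          namespace: {resource: "namespace"}
          pod: {resource: "pod"}
      name:
        matches: "kafka_consumergroup_lag"
        as: "kafka_consumergroup_lag"
      metricsQuery: 'avg(kafka_consumergroup_lag{<<.LabelMatchers>>}) by (<<.GroupBy>>)'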
When I implemented Kubernetes HPA using metrics from Kafka-exporter I had a few setbacks, which I solved by doing the following:
I deployed the kafka-exporter container as a sidecar to the pods I wanted to scale. I found that the HPA scales the pod it gets the metrics from.
I used annotations to make Prometheus scrape the metrics from the pods with the exporter.
Then I verified that the kafka-exporter metrics are getting to Prometheus. If they are not there, you can't advance further.
I deployed prometheus-adapter using its Helm chart. The adapter will "translate" Prometheus's metrics into the custom metrics API, which makes them visible to the HPA.
I made sure that the metrics are visible in k8s by executing kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 from one of the master nodes.
I created an HPA with the matching metric name (a sketch of the annotations and the HPA follows these steps).
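To make those steps concrete, here is a rough sketch of the scrape annotations on the pod template and an HPA referencing the metric name the adapter serves; the names are placeholders, and the annotations only take effect if your Prometheus uses the common kubernetes-pods scrape config:

# pod template snippet: let Prometheus discover the kafka-exporter sidecar
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9308"          # default kafka-exporter port
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: consumer-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kafka-consumer                # hypothetical consumer Deployment carrying the sidecar
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: kafka_consumergroup_lag   # must match the name the adapter exposes
      target:
        type: AverageValue
        averageValue: "1000"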
Here is a complete guide explaining how to implement Kubernetes HPA using Metrics from Kafka-exporter
Please comment if you have more questions

Prometheus Metrics - Use for autoscaling

I've set up Prometheus to collect metrics from my pods and nodes.
I've also set up the Prometheus custom metrics adapter.
How can I use the metrics provided by Prometheus to autoscale my pods? I tried to google it, but I only found custom pods that provide their metrics on their /metrics URL. I would like to be able to autoscale any of my pods that already have a Prometheus metric, based on CPU or memory usage.
I can visualize all the metrics in Grafana for all my pods and nodes, but I can't find a way to use them for autoscaling.
You need to create an HPA (Horizontal Pod Autoscaler)
More info here
This is a good tool showing you how to use an HPA with custom metrics, using either the K8s metrics server or Prometheus.
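If the resource metrics API (metrics-server) is available alongside the Prometheus adapter, a single HPA can even combine both kinds of metrics; the HPA then scales to the highest replica count suggested by any of them. A minimal sketch with placeholder names:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 8
  metrics:
  - type: Resource                      # from the resource metrics API (metrics-server)
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60
  - type: Pods                          # from the custom metrics API (Prometheus adapter)
    pods:
      metric:
        name: http_requests_per_second  # placeholder custom metric name
      target:
        type: AverageValue
        averageValue: "50"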