I am referring to this doc.
I want to send data from my device and visualize it in Grafana. How can I connect Prometheus (deployed in a cluster on GCP) to GCP Pub/Sub?
Prometheus is pull-based rather than push-based. So, whatever the metrics source is, it must expose its metrics in the Prometheus format, and Prometheus will periodically scrape them with HTTP requests.
If directly exposing the metrics is not possible, the metrics source can push its metrics to an intermediate component that exposes them in Prometheus format, so that Prometheus can scrape them.
It seems this is the approach taken by the document you're referring to: the metrics are submitted from the source via Pub/Sub to a Metrics Telemetry Converter pod running in the Kubernetes cluster, which exposes them in Prometheus format.
You then have to configure Prometheus to scrape the metrics from this pod, as you would configure it for any other job.
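For illustration only, a scrape job for such a converter could look like the sketch below; the job name, service address, and port are placeholders, not values from the referenced document.

```yaml
# prometheus.yml (fragment) - hypothetical scrape job for the converter pod.
# Job name, service name, namespace and port are assumptions for illustration.
scrape_configs:
  - job_name: "telemetry-converter"
    metrics_path: /metrics
    static_configs:
      - targets:
          - "metrics-telemetry-converter.monitoring.svc:9090"
```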
We are planning to use the Kubernetes Horizontal Pod Autoscaler, and for that we need to install the Custom Metrics API.
Can someone please explain the different ways to install the Custom Metrics API on a Kubernetes cluster?
As you are using EKS with Prometheus, the best source of knowledge is the AWS documentation.
Do I need the Prometheus Adapter for registering the Custom Metrics API?
Yes, you need at least Prometheus and the Prometheus Adapter.
Prometheus: scrapes pods and stores metrics
Prometheus metrics adapter: queries Prometheus and exposes metrics for the Kubernetes custom metrics API
Metrics server: collects pods CPU and memory usage and exposes metrics for the Kubernetes resource metrics API
Without Custom Metrics or External Metrics, you can only use metrics based on CPU or Memory.
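For illustration, the Prometheus Adapter is driven by rules that map Prometheus series to custom metrics. A minimal rules fragment could look like the sketch below; the series name http_requests_total is only an example, not something from your setup.

```yaml
# prometheus-adapter rules (e.g. supplied via its Helm chart values) - a sketch.
# The series http_requests_total is a placeholder; substitute your own metric.
rules:
  - seriesQuery: 'http_requests_total{namespace!="",pod!=""}'
    resources:
      overrides:
        namespace: {resource: "namespace"}
        pod: {resource: "pod"}
    name:
      matches: "^(.*)_total$"
      as: "${1}_per_second"
    metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'
```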
In the Autoscaling Amazon EKS services based on custom Prometheus metrics using CloudWatch Container Insights article, it's stated:
The custom metrics gathered by Prometheus can be exposed to the autoscaler using a Prometheus Adapter as outlined in the blog post titled Autoscaling EKS on Fargate with custom metrics.
In the Autoscaling EKS on Fargate with custom metrics blog post you will also find examples of autoscaling based on CPU usage, App Mesh traffic, or HTTP traffic.
Additional documentation
Control plane metrics with Prometheus
Why can't I collect metrics from containers, pods, or nodes using Metrics Server in Amazon EKS?
Install the CloudWatch agent with Prometheus metrics collection on Amazon EKS and Kubernetes clusters
We have components which use the Go library to write status to Prometheus, and we are able to see the data in the Prometheus UI. We have components outside the K8s cluster which need to pull the data from Prometheus. How can I expose these metrics? Are there any components I should use?
You may want to check the Federation section of the Prometheus documentation.
Federation allows a Prometheus server to scrape selected time series
from another Prometheus server. Commonly, it is used to either achieve scalable Prometheus monitoring setups or to pull related metrics from one service's Prometheus into another.
It would require exposing the Prometheus service outside of the cluster with an Ingress or NodePort and configuring the central Prometheus to scrape metrics from the exposed service endpoint. You will also have to set up proper authentication. Here's an example of it.
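For example, the central Prometheus's federation scrape job could look roughly like this; the target address and the match[] selector are placeholders for your own setup.

```yaml
# Central Prometheus: federate selected series from the in-cluster Prometheus.
# The target hostname and the match[] selector are placeholders.
scrape_configs:
  - job_name: "federate"
    scrape_interval: 15s
    honor_labels: true
    metrics_path: /federate
    params:
      "match[]":
        - '{job="kubernetes-pods"}'
    static_configs:
      - targets:
          - "prometheus.example.com:9090"   # Ingress/NodePort of the cluster Prometheus
```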
A second way that comes to my mind is to use kube-state-metrics.
kube-state-metrics is a simple service that listens to the Kubernetes
API server and generates metrics about the state of the objects.
Metrics are exported on an HTTP endpoint and designed to be consumed either by Prometheus itself or by a scraper that is compatible with Prometheus client endpoints. However, this differs from the Metrics Server: it generates metrics about the state of Kubernetes objects, such as node status, node capacity, number of desired replicas, pod status, etc.
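If you take this route, the external consumer would scrape the exposed kube-state-metrics endpoint directly. A minimal sketch follows; the address is a placeholder for whatever Ingress or NodePort you expose it on.

```yaml
# External scraper: pull cluster-state metrics (e.g. kube_pod_status_phase,
# kube_deployment_status_replicas) straight from the exposed kube-state-metrics
# endpoint. The target address is a placeholder.
scrape_configs:
  - job_name: "kube-state-metrics"
    metrics_path: /metrics
    static_configs:
      - targets:
          - "kube-state-metrics.example.com:8080"
```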
We have a cluster running on VMs in our private cloud, using MetalLB as the ingress controller, and we need to see the network traffic and the HTTP codes returned from our applications, so that in Grafana we can see HTTP requests and traffic load the way you see it on AWS Load Balancers, for example.
We have deployed Prometheus through its Helm chart on all nodes so we can gather metrics from the whole cluster, but didn't find any metric containing the needed information. We tried looking through the Prometheus metrics related to ingresses, proxy, and HTTP, but there is nothing matching our need. We also tried some Grafana dashboards from the repository, but nothing shows the metrics.
Thanks.
I have been trying to implement Kubernetes HPA using metrics from kafka-exporter. HPA supports Prometheus, so we tried writing the metrics to a Prometheus instance. From there, we are unclear on the next steps. Is there an article that explains this in detail?
I followed https://medium.com/google-cloud/kubernetes-hpa-autoscaling-with-kafka-metrics-88a671497f07
for the same in GCP, where we used Stackdriver, and the implementation worked like a charm. But we are struggling with the on-premise setup, as Stackdriver needs to be replaced by Prometheus.
In order to scale based on custom metrics, Kubernetes needs to query an API that exposes those metrics. That API needs to implement the custom metrics interface.
So for Prometheus, you need to set up an API that exposes Prometheus metrics through the Custom Metrics API. Luckily, there already is an adapter.
When I implemented Kubernetes HPA using metrics from kafka-exporter, I had a few setbacks, which I solved by doing the following:
I deployed the kafka-exporter container as a sidecar to the pods I wanted to scale. I found that the HPA scales the pod it gets the metrics from.
I used annotations to make Prometheus scrape the metrics from the pods with the exporter.
Then I verified that the kafka-exporter metrics are getting to Prometheus. If they're not there, you can't advance further.
I deployed the Prometheus Adapter using its Helm chart. The adapter will "translate" Prometheus's metrics into the Custom Metrics API, which makes them visible to the HPA.
I made sure that the metrics are visible in k8s by executing kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 from one of the master nodes.
I created an HPA with the matching metric name; a sketch of the scrape annotations and the HPA manifest is shown after this list.
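Putting the annotations and the HPA together, a rough sketch is shown below. The deployment name, images, port, the metric name kafka_consumergroup_lag, and the target value are assumptions for illustration; depending on your adapter rules, the metric may be exposed under a different name.

```yaml
# Deployment with a kafka-exporter sidecar and annotations so Prometheus
# scrapes the pod (names, images, port and threshold are assumptions).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-consumer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-consumer
  template:
    metadata:
      labels:
        app: kafka-consumer
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9308"        # kafka-exporter's default metrics port
    spec:
      containers:
        - name: consumer
          image: my-consumer:latest       # placeholder application image
        - name: kafka-exporter            # sidecar exposing Kafka metrics
          image: danielqsj/kafka-exporter:latest
          args:
            - "--kafka.server=kafka:9092" # placeholder broker address
---
# HPA reading the custom metric exposed through the Prometheus Adapter
# (autoscaling/v2 on newer clusters).
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: kafka-consumer-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kafka-consumer
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: kafka_consumergroup_lag   # assumed metric name after adapter rules
        target:
          type: AverageValue
          averageValue: "100"
```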
Here is a complete guide explaining how to implement Kubernetes HPA using metrics from kafka-exporter.
Please comment if you have more questions.
I am running ServiceMonitors to gather metrics from pods. Then, with the help of the Prometheus Operator, I am using a serviceMonitorSelector to pick up those metrics in Prometheus. I can see those metrics being collected in Prometheus.
Now, I am trying to export those custom metrics from Prometheus to AWS CloudWatch. Does anyone have any idea how to do that? The end goal is to set up an alerting system with the help of Zenoss on CloudWatch.
You have to set up something like prometheus-to-cloudwatch. You can run it in Kubernetes or on any server, then make it scrape the same targets that Prometheus is scraping. (prometheus-to-cloudwatch scrapes metrics from exporters, i.e. as a Prometheus client, and not from the Prometheus server.)
Then whatever you scrape will show up as metrics in CloudWatch, and you can set alerts on those. For Zenoss, you can use the AWS ZenPack and read the metrics from CloudWatch.
The Kubernetes Prometheus Operator automatically discovers the services in your Kubernetes cluster and scrapes them dynamically as they get created, so you will probably have to check which targets are currently being scraped by Prometheus in order to configure what to scrape with prometheus-to-cloudwatch. (Or you could build another operator, a prometheus-to-cloudwatch operator, but that would take time/work.)
(There isn't such a thing as a scraper from the Prometheus server to CloudWatch either.)
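For illustration, a deployment of prometheus-to-cloudwatch might look roughly like the sketch below. The image tag, the environment variable names, and the scrape URL are assumptions about how such exporters are typically configured, not verbatim from the project's documentation, so check them against its README before using.

```yaml
# Hypothetical Deployment for prometheus-to-cloudwatch.
# Image, env var names and the scrape URL are assumptions; verify against the README.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-to-cloudwatch
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-to-cloudwatch
  template:
    metadata:
      labels:
        app: prometheus-to-cloudwatch
    spec:
      containers:
        - name: prometheus-to-cloudwatch
          image: cloudposse/prometheus-to-cloudwatch:latest   # assumed image
          env:
            - name: CLOUDWATCH_NAMESPACE          # assumed variable name
              value: "Kubernetes/CustomMetrics"
            - name: CLOUDWATCH_REGION             # assumed variable name
              value: "us-east-1"
            - name: PROMETHEUS_SCRAPE_URL         # assumed: an exporter endpoint,
              value: "http://my-exporter.monitoring.svc:9100/metrics"  # not the Prometheus server
```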