How to integrate Prometheus with Kubernetes when they run on different hosts? - kubernetes

My Prometheus server is running on a separate server, and I also have a Kubernetes cluster on another host. I need to monitor Kubernetes pod metrics with the Prometheus instance running on the other server.

To monitor an external cluster I would take advantage of a Prometheus federation topology.
In your Kubernetes cluster, install node-exporter pods and configure a Prometheus instance with short-term storage.
Expose the Prometheus service outside of the Kubernetes cluster (you can follow this guide); this can be done either with a load balancer or a node port.
Configure the external Prometheus server to scrape metrics from the exposed Kubernetes endpoint, setting the correct labels and proper authentication; a minimal scrape job is sketched below.
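
For illustration, a federation scrape job on the external Prometheus server could look like the sketch below. The target address and the match[] selectors are assumptions; substitute the endpoint you exposed and the series you actually want to pull:

scrape_configs:
  - job_name: 'federate-k8s'        # pulls selected series from the in-cluster Prometheus
    scrape_interval: 15s
    honor_labels: true              # keep the labels set by the in-cluster Prometheus
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="node-exporter"}'   # assumption: the node-exporter job name used in-cluster
        - '{__name__=~"kube_.*"}'   # assumption: also pull all kube_* series
    static_configs:
      - targets:
          - '203.0.113.10:30090'    # assumption: LB or node port address of the exposed service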

Related

What is the Best Way to Scale an external (non EKS) EC2 Auto Scaling Group from Inside a Kubernetes Cluster Based on Prometheus Metrics?

I am currently autoscaling via an HPA driven by internal Prometheus metrics, which then filters down to scale the cluster via the AWS Cluster Autoscaler. That HPA is tied to an external service running on bare EC2 instances. I would like to use the same metrics that drive that HPA to also scale the ASG behind that external service.
What is the best way to do this? Preferably, the external EC2 cluster should not have network access to the EKS cluster.
I was thinking about just writing a small service that does this via the AWS API, polling Prometheus intermittently, but I figured there must be a better way.

Using multiple custom metrics adapters in Kubernetes

I am using GKE.
I have a cluster which is using stackdriver-adapter to get GCP metrics inside the cluster. I am using these metrics to create HPAs. This is working fine.
But now I need to create HPAs on metrics which are provided by Prometheus. I am trying to launch prometheus-adapter, but it fails because the API service has already been created by stackdriver-adapter. And if I delete the stackdriver-adapter, my present HPAs will fail.
Can we have both prometheus-adapter and stackdriver-adapter running in the same cluster?
If not, I guess we need to send the Prometheus metrics to Stackdriver? But wouldn't that be slow?
As said in the comments:
Have a look at the documentation Using Prometheus; you'll find there how to install Prometheus and get external metrics. After that, follow the documentation Custom and external metrics for autoscaling workloads to configure the HPA.
You can configure a sidecar on the Prometheus server that sends the data from Prometheus to Stackdriver. From this point you will be able to use the Prometheus metrics as External metrics when configuring the HPA.
You will need to check the following requirements before "installing" the collector:
You must be running a compatible Prometheus server and have configured it to monitor the applications in your cluster. To learn how to install Prometheus on your cluster, refer to the Prometheus Getting Started guide.
You must have configured your cluster to use Cloud Operations for GKE. For instructions, see Installing Cloud Operations for GKE.
You must have the Kubernetes Engine Cluster Admin role for your cluster. For more information, see GKE roles.
You must ensure that your service account has the proper permissions. For more information, see Use Least Privilege Service Accounts for your Nodes.
-- Cloud.google.com: Stackdriver: Solutions: GKE: Prometheus: Before you begin
For testing the installation of Prometheus and the configuration of the data transfer to Stackdriver, I used the script from:
Github.com: Stackdriver: Stackdriver-prometheus-sidecar
Steps:
download the repository:
$ git clone https://github.com/Stackdriver/stackdriver-prometheus-sidecar.git
set the following environment variables (values are examples):
export KUBE_NAMESPACE="prometheus"
export KUBE_CLUSTER="gke-prometheus"
export GCP_REGION="europe-west3-c"
export GCP_PROJECT="awesome-project-12345"
export SIDECAR_IMAGE_TAG="0.8.0"
SIDECAR_IMAGE_TAG can be found here:
Gcr.io: Stackdriver-prometheus: Stackdriver prometheus sidecar
run the script:
kube/full/deploy.sh
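
The script deploys Prometheus together with the sidecar; the container it injects into the Prometheus pod looks roughly like the sketch below. The image path, WAL path, and volume name are assumptions based on the repository's defaults, while the flag values follow the environment variables set above:

# appended to the containers: list of the Prometheus pod spec
- name: sidecar
  image: gcr.io/stackdriver-prometheus/stackdriver-prometheus-sidecar:0.8.0  # SIDECAR_IMAGE_TAG
  args:
  - --stackdriver.project-id=awesome-project-12345        # GCP_PROJECT
  - --stackdriver.kubernetes.location=europe-west3-c      # GCP_REGION
  - --stackdriver.kubernetes.cluster-name=gke-prometheus  # KUBE_CLUSTER
  - --prometheus.wal-directory=/data/wal                  # assumption: WAL path shared with Prometheus
  ports:
  - name: sidecar
    containerPort: 9091
  volumeMounts:
  - name: data-volume                                     # assumption: the Prometheus data volume
    mountPath: /data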
After successfully spawning Prometheus with a Stackdriver sidecar you should be able to see the metrics in the Cloud Console:
GCP Cloud Console (Web UI) -> Monitoring -> Metrics Explorer
From this point you can follow the guide for configuring HPA and set your External metric as the source for autoscaling your Deployment/Statefulset:
Cloud.google.com: Kubernetes Engine: Tutorials: Autoscaling metrics
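
As an illustration, an HPA using such an exported metric could look like the sketch below. The Deployment name and target value are assumptions, and the metric name follows GKE's pipe-separated naming for External metrics written by the sidecar (external.googleapis.com/prometheus/<name>):

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa                 # assumption: any free name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app               # assumption: the workload to scale
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: External
    external:
      metric:
        name: external.googleapis.com|prometheus|http_requests_total  # assumption: your exported metric
      target:
        type: AverageValue
        averageValue: "100"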
Additional resources:
Kubernetes.io: Horizontal Pod Autoscaler
Cloud.google.com: Custom and external metrics for autoscaling workloads

Expose Prometheus data outside the cluster

We have components which use the Go client library to write their status to Prometheus,
and we are able to see the data in the Prometheus UI.
We also have components outside the K8s cluster which need to pull the data from
Prometheus. How can I expose these metrics? Is there any component I should use?
You may want to check the Federation section of the Prometheus documentation.
Federation allows a Prometheus server to scrape selected time series
from another Prometheus server. Commonly, it is used to either achieve scalable Prometheus monitoring setups or to pull related metrics from one service's Prometheus into another.
This requires exposing the Prometheus service outside of the cluster with an Ingress or a NodePort service and configuring the central Prometheus to scrape metrics from the exposed service endpoint; a minimal NodePort sketch is shown below. You will also have to set up proper authentication. Here's an example of it.
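
A NodePort service for this purpose could look like the following sketch; the name, namespace, and pod selector are assumptions that must match your Prometheus deployment:

apiVersion: v1
kind: Service
metadata:
  name: prometheus-external   # assumption: any free name
  namespace: monitoring       # assumption: the namespace Prometheus runs in
spec:
  type: NodePort
  selector:
    app: prometheus           # assumption: must match your Prometheus pod labels
  ports:
  - port: 9090                # default Prometheus HTTP port
    targetPort: 9090
    nodePort: 30090           # reachable on <node-ip>:30090 from outside the cluster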
The second way that comes to my mind is to use kube-state-metrics.
kube-state-metrics is a simple service that listens to the Kubernetes
API server and generates metrics about the state of the objects.
Metrics are exported on an HTTP endpoint and are designed to be consumed either by Prometheus itself or by a scraper that is compatible with Prometheus client endpoints. Note that this differs from the Metrics Server: it generates metrics about the state of Kubernetes objects, such as node status, node capacity, number of desired replicas, and pod status.
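
If kube-state-metrics is exposed outside the cluster the same way (for example via a NodePort), an external Prometheus can scrape it directly; the target address below is an assumption:

scrape_configs:
  - job_name: 'kube-state-metrics'
    static_configs:
      - targets:
          - '203.0.113.10:30080'   # assumption: externally exposed kube-state-metrics endpoint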

HTTP codes monitoring for Kubernetes cluster using MetalLB ingress controller

We have a cluster running on VMs in our private cloud, using MetalLB with our ingress controller. We need to see the network traffic and the HTTP codes returned by our applications, so that in Grafana we can see HTTP requests and traffic load the way you see it on AWS Load Balancers, for example.
We have deployed Prometheus through its Helm chart on all nodes so we can gather metrics from the whole cluster, but we didn't find any metric containing the needed information. We tried looking through the Prometheus metrics for ingresses, proxies, and HTTP, but nothing matches our need. We also tried some Grafana dashboards from the repository, but none of them shows the metrics.
Thanks.

How to install the Prometheus Operator and collect the metrics from a remote Thanos installed on a different Kubernetes cluster

Is there any way to install Thanos on a Kubernetes cluster and collect metrics from a remote Prometheus Operator on a different Kubernetes cluster? How can I configure Thanos to collect the data from the remote Prometheus Operator pod?
I am using Kubernetes 1.12.8 on AWS.
You need to install the Thanos sidecar in your clusters. The sidecar exposes a Store API, which can then be exposed to other clusters, i.e. to a centralised cluster running the Thanos querier; a sketch of the sidecar container is shown below. There are some deployment models in the links Architecture and Getting started.
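
For illustration, the Thanos sidecar is typically added as an extra container in the Prometheus pod; the image tag, volume name, and data path below are assumptions that must match your Prometheus setup:

# appended to the containers: list of the Prometheus pod spec
- name: thanos-sidecar
  image: quay.io/thanos/thanos:v0.17.2        # assumption: pick a current Thanos release
  args:
  - sidecar
  - --prometheus.url=http://localhost:9090    # Prometheus runs in the same pod
  - --tsdb.path=/prometheus                   # assumption: the Prometheus data directory
  - --grpc-address=0.0.0.0:10901              # Store API; expose this to the central querier
  ports:
  - name: grpc
    containerPort: 10901
  volumeMounts:
  - name: prometheus-data                     # assumption: the Prometheus data volume
    mountPath: /prometheus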