How to install the Prometheus Operator and collect metrics from a remote Thanos installed on a different Kubernetes cluster - kubernetes

Is there any way to install Thanos on a Kubernetes cluster and collect metrics from a remote Prometheus Operator on a different Kubernetes cluster? How can I configure Thanos to collect the data from the remote Prometheus Operator pod?
I am using Kubernetes 1.12.8 on AWS.

You need to install the Thanos sidecar alongside Prometheus in each of your clusters. The sidecar exposes a Store API, which can then be exposed to other clusters and queried from a centralised cluster. There are some deployment models described in the Thanos Architecture and Getting Started documentation.
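As a minimal sketch of how this fits together with the Prometheus Operator (the names, namespace, labels, and image tag below are illustrative assumptions, not values from the question): the Prometheus custom resource can enable the sidecar, and a Service then exposes its gRPC Store API so the central cluster can reach it.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: monitoring
spec:
  replicas: 1
  thanos:
    image: quay.io/thanos/thanos:v0.31.0   # assumed image; the sidecar serves its Store API on gRPC port 10901
---
apiVersion: v1
kind: Service
metadata:
  name: thanos-sidecar-grpc
  namespace: monitoring
spec:
  type: LoadBalancer        # assumption: expose the sidecar outside the cluster
  selector:
    prometheus: k8s         # assumption: label the operator applies to pods of the Prometheus CR named "k8s"
  ports:
    - name: grpc
      port: 10901
      targetPort: 10901

In the centralised cluster, the Thanos Querier would then be started with --store=<sidecar-endpoint>:10901 (or the equivalent endpoint flag) for each remote sidecar it should aggregate.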

Related

Custom Metrics API service install for kubernetes cluster

We are planning to use the Kubernetes Horizontal Pod Autoscaler, and for that we need to install the Custom Metrics API.
Can someone please tell me the different ways to install the Custom Metrics API on a Kubernetes cluster?
As you are using EKS with Prometheus, the best source of knowledge is the AWS documentation.
Do I need the Prometheus Adapter for registering the custom metrics API?
Yes, you need at least Prometheus and Prometheus Adapter.
Prometheus: scrapes pods and stores metrics
Prometheus metrics adapter: queries Prometheus and exposes metrics for the Kubernetes custom metrics API
Metrics Server: collects pod CPU and memory usage and exposes metrics for the Kubernetes resource metrics API
Without Custom Metrics or External Metrics, you can only use metrics based on CPU or Memory.
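To make the division of roles concrete, here is a minimal sketch of an HPA that consumes a pod metric served by the Prometheus Adapter through the custom metrics API; the Deployment name, metric name, and target value are assumptions for illustration.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                          # assumed Deployment to scale
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Pods                          # served through custom.metrics.k8s.io (Prometheus Adapter)
      pods:
        metric:
          name: http_requests_per_second  # assumed metric exposed by the adapter
        target:
          type: AverageValue
          averageValue: "100"

Without an adapter registered against custom.metrics.k8s.io, only type: Resource metrics (CPU and memory from Metrics Server) are available to the HPA.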
In the article Autoscaling Amazon EKS services based on custom Prometheus metrics using CloudWatch Container Insights, it is stated:
The custom metrics gathered by Prometheus can be exposed to the autoscaler using a Prometheus Adapter as outlined in the blog post titled Autoscaling EKS on Fargate with custom metrics.
In the Autoscaling EKS on Fargate with custom metrics blog post you will also find examples of autoscaling based on CPU usage, App Mesh traffic, or HTTP traffic.
Additional documentation
Control plane metrics with Prometheus
Why can't I collect metrics from containers, pods, or nodes using Metrics Server in Amazon EKS?
Install the CloudWatch agent with Prometheus metrics collection on Amazon EKS and Kubernetes clusters

Using multiple custom metrics adapters in Kubernetes

I am using GKE.
I have a cluster which is using the stackdriver-adapter to get GCP metrics inside the cluster. I am using these metrics to create HPAs, and this is working fine.
But now I need to create HPAs on metrics which are provided by Prometheus. I am trying to launch the prometheus-adapter, but it is failing because the APIService has already been created by the stackdriver-adapter. However, if I delete the stackdriver-adapter, my existing HPAs will fail.
Can we have both the prometheus-adapter and the stackdriver-adapter running in the same cluster?
If not, I guess we need to send the Prometheus metrics to Stackdriver? But wouldn't that be slow?
As said in the comments:
Have a look at the documentation Using Prometheus; there you'll find how to install Prometheus and get external metrics. After that, follow the documentation Custom and external metrics for autoscaling workloads to configure the HPA.
You can configure a sidecar for the Prometheus server that will send the data from Prometheus to Stackdriver. From that point you will be able to use the Prometheus metrics as External metrics when configuring the HPA.
You will need to check the following requirements before "installing" the collector:
You must be running a compatible Prometheus server and have configured it to monitor the applications in your cluster. To learn how to install Prometheus on your cluster, refer to the Prometheus Getting Started guide.
You must have configured your cluster to use Cloud Operations for GKE. For instructions, see Installing Cloud Operations for GKE.
You must have the Kubernetes Engine Cluster Admin role for your cluster. For more information, see GKE roles.
You must ensure that your service account has the proper permissions. For more information, see Use Least Privilege Service Accounts for your Nodes.
-- Cloud.google.com: Stackdriver: Solutions: GKE: Prometheus: Before you begin
For testing the installation of Prometheus and the configuration of data transfer to Stackdriver, I used the script from:
Github.com: Stackdriver: Stackdriver-prometheus-sidecar
Steps:
download the repository:
$ git clone https://github.com/Stackdriver/stackdriver-prometheus-sidecar.git
set the following environment variables (values are examples):
export KUBE_NAMESPACE="prometheus"
export KUBE_CLUSTER="gke-prometheus"
export GCP_REGION="europe-west3-c"
export GCP_PROJECT="awesome-project-12345"
export SIDECAR_IMAGE_TAG="0.8.0"
SIDECAR_IMAGE_TAG can be found here:
Gcr.io: Stackdriver-prometheus: Stackdriver prometheus sidecar
run the script:
kube/full/deploy.sh
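For orientation, the effect of the script is roughly to patch the Prometheus deployment with a sidecar container along these lines (a sketch only, reusing the example values above; the exact manifest the script generates may differ):

- name: sidecar
  image: gcr.io/stackdriver-prometheus/stackdriver-prometheus-sidecar:0.8.0
  args:
    - --stackdriver.project-id=awesome-project-12345
    - --stackdriver.kubernetes.location=europe-west3-c
    - --stackdriver.kubernetes.cluster-name=gke-prometheus
    - --prometheus.wal-directory=/data/wal   # must point at the Prometheus write-ahead log directory
  volumeMounts:
    - name: data-volume                      # assumed name of the Prometheus data volume
      mountPath: /data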
After successfully spawning Prometheus with a Stackdriver sidecar you should be able to see the metrics in the Cloud Console:
GCP Cloud Console (Web UI) -> Monitoring -> Metrics Explorer
From this point you can follow the guide for configuring HPA and set your External metric as the source for autoscaling your Deployment/Statefulset:
Cloud.google.com: Kubernetes Engine: Tutorials: Autoscaling metrics
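Once the sidecar is shipping data, the Prometheus series appear in Stackdriver as external metrics, and (with the Stackdriver custom metrics adapter from the guide above installed) an HPA can reference them roughly like this; the metric name, target value, and Deployment are placeholders:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: External
      external:
        metric:
          name: "external.googleapis.com|prometheus|http_requests_total"  # '|' replaces '/' in HPA external metric names
        target:
          type: AverageValue
          averageValue: "50"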
Additional resources:
Kubernetes.io: Horizontal Pod Autoscaler
Cloud.google.com: Custom and external metrics for autoscaling workloads

DNS for Pod in another Kubernetes cluster on GKE

I am trying to connect to the MongoDB replica set that is hosted in another Kubernetes cluster of the same GCP project. I want to use DNS names in the connection string.
I was able to connect to MongoDB hosted in the same cluster using this connection string:
mongodb://<pod-name>.<service-name>.<namespace>.svc.cluster.local:27017,<pod-name>.<service-name>.<namespace>.svc.cluster.local:27017/?replicaSet=<rs-name>
So my question is:
Is it possible to use the DNS name to reference the pod in another cluster? I looked through this document and it states:
Any pods created by a Deployment or DaemonSet have the following DNS
resolution available:
pod-ip-address.deployment-name.my-namespace.svc.cluster-domain.example.
But I am not sure what the format of the cluster-domain.example part is.
You cannot use Kubernetes Service DNS (CoreDNS) to access a pod from outside the Kubernetes cluster, even from another Kubernetes cluster. You need to expose the MongoDB pod via a LoadBalancer (recommended) or NodePort type Service and access it using the LoadBalancer endpoint or NodeIP:NodePort from the other cluster.
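A minimal sketch of such a Service (the name, namespace, and selector labels are assumptions for illustration):

apiVersion: v1
kind: Service
metadata:
  name: mongodb-external
  namespace: databases
spec:
  type: LoadBalancer       # alternatively NodePort, reached as NodeIP:NodePort
  selector:
    app: mongodb           # assumed label on the MongoDB pods
  ports:
    - name: mongodb
      port: 27017
      targetPort: 27017

The connection string from the other cluster would then use the load balancer address instead of the in-cluster DNS name; note that for a replica set you would typically need one such Service (and one address) per member so each replica is individually reachable.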

How to integrate Prometheus with Kubernetes where both are running on different hosts?

My Prometheus server is running on a different server, and I also have a separate Kubernetes cluster. So I need to monitor Kubernetes pod metrics using the Prometheus server running on the other host.
To monitor an external cluster I would take advantage of a Prometheus federation topology.
In your Kubernetes cluster, install node-exporter pods and configure a Prometheus instance with short-term storage.
Expose the Prometheus service (you can follow this guide) outside of the Kubernetes cluster; this can be done either with a load balancer or a node port.
Configure the external Prometheus server to scrape metrics from the exposed Kubernetes endpoint, setting the correct labels and proper authentication, as sketched below.
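A minimal sketch of the federation scrape job on the external Prometheus server could look like this (the target address and the match[] selector are assumptions):

scrape_configs:
  - job_name: 'federate-k8s-cluster'
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job=~".+"}'                       # pull all series from the in-cluster Prometheus
    static_configs:
      - targets:
          - 'prometheus.example.com:9090'     # assumed external address of the exposed Prometheus service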

K8s cluster working with Openshift?

I know that OpenShift uses some Kubernetes components to orchestrate pods. Is there any way Kubernetes and OpenShift can be integrated together? That is, should I be able to see the pods deployed with Kubernetes in the OpenShift UI, and vice versa?
I followed the OpenShift as a pod in Kubernetes documentation, but I got stuck at step 4, unable to find the Kubernetes account key in the GCE cluster (/srv/kubernetes/server.key).
Or is there any way for Kubernetes nodes to join under an OpenShift cluster?