Prometheus: Better Option to monitor external K8s Cluster - kubernetes

I have two Kubernetes clusters that do not talk to one another in any way. The idea is to maintain one Prometheus instance (in a third cluster) that can scrape endpoints from both clusters.
I created a service account in each cluster, gave it a ClusterRole and ClusterRoleBinding, and exported the service account's secret as a YAML file. I then imported that secret into the third cluster where Prometheus is running. Using these mounted secrets, I was able to pull data from all pods in clusters 1 and 2.
Are there any better options to achieve this usecase?
I am in a way transferring secrets from one cluster to another to get the same ca.crt and token.
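For reference, the setup described above corresponds to a scrape config along these lines (the job name, API server address, and mount paths are illustrative, not taken from the question):

```yaml
# Scrape pods in a remote cluster using a mounted ServiceAccount secret.
# Prometheus needs the credentials twice: once for service discovery
# against the remote API server, and once for the scrape itself.
scrape_configs:
  - job_name: 'cluster-1-pods'
    kubernetes_sd_configs:
      - role: pod
        api_server: https://cluster1-apiserver.example.com:6443  # placeholder
        tls_config:
          ca_file: /etc/secrets/cluster1/ca.crt
        bearer_token_file: /etc/secrets/cluster1/token
    tls_config:
      ca_file: /etc/secrets/cluster1/ca.crt
    bearer_token_file: /etc/secrets/cluster1/token
```

Note that scraping pods directly this way also requires network reachability from the central Prometheus to the remote pod IPs.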

I think it is not safe to share secrets between clusters.
What about Prometheus federation? One Prometheus instance can export some data, which can then be consumed by an external Prometheus instance.
For example, a cluster scheduler running multiple services might expose resource usage information (like memory and CPU usage) about service instances running on the cluster. On the other hand, a service running on that cluster will only expose application-specific service metrics. Often, these two sets of metrics are scraped by separate Prometheus servers.
Alternatively, deploy an exporter that can be consumed by the external Prometheus, e.g. https://github.com/kubernetes/kube-state-metrics (though it does not provide CPU/memory usage of pods).
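A federation setup like the one described could look roughly like this on the central Prometheus (hostnames and the `match[]` selectors are placeholders; you would pick selectors matching the series you actually want to pull):

```yaml
# The central Prometheus scrapes /federate on each cluster-local
# Prometheus instead of scraping pods directly.
scrape_configs:
  - job_name: 'federate'
    honor_labels: true        # keep the original job/instance labels
    metrics_path: /federate
    params:
      'match[]':
        - '{job="kubernetes-pods"}'   # illustrative selector
        - '{__name__=~"node_.*"}'     # illustrative selector
    static_configs:
      - targets:
          - 'prometheus-cluster1.example.com:9090'
          - 'prometheus-cluster2.example.com:9090'
```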

Related

What is the Best Way to Scale an external (non EKS) EC2 Auto Scaling Group from Inside a Kubernetes Cluster Based on Prometheus Metrics?

I am currently autoscaling an HPA via internal Prometheus metrics which then filters down to scale the cluster via the AWS Cluster Autoscaler. That HPA is tied to an external service run on bare EC2 instances. I would like to use the same metrics that I use to scale that HPA to also scale the ASG behind that service that is external to the Kubernetes cluster.
What is the best way to do this? It is preferable that the external EC2 cluster does not have network access to the EKS cluster.
I was thinking about just writing a small service that does it via the AWS API, polling Prometheus intermittently, but I figured there must be a better way.
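If you do go the small-service route, the core of it is an HPA-style calculation plus two API calls. A minimal sketch in Python (the metric query, ASG name, and endpoint are hypothetical placeholders):

```python
import math


def desired_capacity(current: int, metric_value: float, target_value: float,
                     min_size: int, max_size: int) -> int:
    """HPA-style formula: ceil(current * currentMetric / targetMetric),
    clamped to the ASG's min/max size."""
    desired = math.ceil(current * metric_value / target_value)
    return max(min_size, min(max_size, desired))


# The polling loop would query Prometheus' HTTP API and update the ASG,
# e.g. (sketch only, names are placeholders):
#
#   import requests, boto3
#   value = float(requests.get(
#       "http://prometheus:9090/api/v1/query",
#       params={"query": "avg(service_queue_depth)"},
#   ).json()["data"]["result"][0]["value"][1])
#   boto3.client("autoscaling").set_desired_capacity(
#       AutoScalingGroupName="my-service-asg",
#       DesiredCapacity=desired_capacity(current, value, 10.0, 1, 20),
#   )
```

This keeps all network access one-directional (the service reaches out to Prometheus and AWS), so the external EC2 fleet never needs access into the EKS cluster.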

how to distribute load across multiple ingress pods

In the cluster, two ingress pods are running, but when load is applied to the cluster, requests go through only one pod. What tools can I use to balance the load between the ingress pods?
You can do this with a load balancer. It is also possible to create one that is internal to the cluster, so that none gets created by your cloud provider: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
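A sketch of such an internal LoadBalancer Service in front of the ingress pods (the annotation shown is the GKE one; AWS and Azure use different annotation keys, and the selector must match your actual ingress pod labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-internal
  annotations:
    networking.gke.io/load-balancer-type: "Internal"  # cloud-specific
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx  # placeholder selector
  ports:
    - port: 80
      targetPort: 80
```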

Accessing k8s pods using ingress externally (outside k8s cluster) is better than accessing k8s pods using service within the cluster?

I ran a performance test doing writes/reads into/from a DB (Cassandra pods).
I performed read/write operations on the Cassandra pods through an Ingress from an external machine (outside the k8s cluster). I also tried accessing the same Cassandra pods from another k8s pod using a Service within the same cluster.
Surprisingly, the performance (in terms of latency) was better when I accessed them via the Ingress!
Why does accessing the pods within the cluster via a Service take more time than via the Ingress?
Thanks

Expose prometheus data outside the cluster

We have components which use the Go client library to write status to Prometheus, and we can see the data in the Prometheus UI. We have components outside the K8s cluster which need to pull data from Prometheus. How can I expose these metrics? Are there any components I should use?
You may want to check the Federation section of the Prometheus documents.
Federation allows a Prometheus server to scrape selected time series
from another Prometheus server. Commonly, it is used to either achieve scalable Prometheus monitoring setups or to pull related metrics from one service's Prometheus into another.
This would require exposing the Prometheus service outside the cluster with an Ingress or a NodePort, and configuring the central Prometheus to scrape metrics from the exposed service endpoint. You will also have to set up proper authentication. Here's an example of it.
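On the central Prometheus side, the scrape job against the exposed federation endpoint could look roughly like this (hostname, scheme, and credentials file are placeholders; Prometheus supports `basic_auth`, `bearer_token_file`, and TLS settings in scrape configs):

```yaml
scrape_configs:
  - job_name: 'federate-cluster'
    metrics_path: /federate
    scheme: https
    params:
      'match[]':
        - '{job=~".+"}'   # pull everything; narrow this in practice
    basic_auth:
      username: prometheus
      password_file: /etc/prometheus/federate-password
    static_configs:
      - targets: ['prometheus.cluster.example.com']
```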
The second way that comes to mind is to use kube-state-metrics.
kube-state-metrics is a simple service that listens to the Kubernetes
API server and generates metrics about the state of the objects.
Metrics are exported on an HTTP endpoint and designed to be consumed either by Prometheus itself or by a scraper compatible with Prometheus client endpoints. Note that this differs from the Metrics Server: it generates metrics about the state of Kubernetes objects, such as node status, node capacity, number of desired replicas, and pod status.
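Once kube-state-metrics is deployed, scraping it is a plain static job; the service name and namespace below assume a default installation:

```yaml
scrape_configs:
  - job_name: 'kube-state-metrics'
    static_configs:
      - targets: ['kube-state-metrics.kube-system.svc:8080']
```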

How to integrate Prometheus with Kubernetes where both are running on different host?

My Prometheus server is running on a different server, and I have a separate Kubernetes cluster. I need to monitor Kubernetes pod metrics using the Prometheus instance running outside the cluster.
To monitor an external cluster I would take advantage of a Prometheus federation topology.
In your Kubernetes cluster, install node-exporter pods and configure a Prometheus instance with short-term storage.
Expose the Prometheus service (you can follow this guide) outside the Kubernetes cluster; this can be done either by a LoadBalancer or a NodePort.
Configure the central Prometheus server to scrape metrics from the exposed endpoints, configuring them with correct labels and proper authentication.
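The steps above can be sketched as a single federation job on the central server (the NodePort address and the `match[]` selector are placeholders matching a typical node-exporter deployment):

```yaml
# Central Prometheus pulling node-exporter series from the in-cluster
# Prometheus exposed on a NodePort.
scrape_configs:
  - job_name: 'k8s-cluster-federate'
    honor_labels: true
    metrics_path: /federate
    params:
      'match[]':
        - '{job="node-exporter"}'
    static_configs:
      - targets: ['k8s-node.example.com:30090']  # placeholder NodePort
```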