Does Prometheus scrape only 1 pod on Kubernetes?

I deployed my Spring Boot application with 2 or 3 pods on a Kubernetes cluster running on a Linux server, and I installed Prometheus to monitor it. Currently, the metrics flow from the application to Prometheus just fine.
But I suspect that Prometheus takes metrics from only one pod. With a job like the one below in the Prometheus config file, does Prometheus take metrics from only one pod? How can I make Prometheus scrape all the pods at the same time?
- job_name: 'SpringBootPrometheusDemoProject'
  metrics_path: '/SpringBootPrometheusDemoProject/actuator/prometheus'
  scrape_interval: 5s
  static_configs:
    - targets: ['127.0.0.1:8080']

Yes. In this case, you have to add a few annotations to your pods (if they do not exist already) and use kubernetes_sd_configs instead of static_configs.
You will find an example here: https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/README.md#kubernetes-pod
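As a minimal sketch of such a job (assuming the pods carry the common prometheus.io/scrape, prometheus.io/path and prometheus.io/port annotations; adjust names to your setup):
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # scrape only pods annotated with prometheus.io/scrape: "true"
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # take the metrics path from the prometheus.io/path annotation
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    # point the target address at the port given in prometheus.io/port
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
with matching annotations in the pod template of your Deployment, for example:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/SpringBootPrometheusDemoProject/actuator/prometheus"
    prometheus.io/port: "8080"
With this, Prometheus discovers every pod through the Kubernetes API, so scaling to 2 or 3 replicas automatically gives you 2 or 3 scrape targets.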

Related

Prometheus Federated server metrics not appearing

I have multiple Kubernetes clusters and I wish to use the Prometheus federation.
Each cluster has its own Prometheus installed using helm.
Only one cluster's Prometheus is connected to Grafana.
This Prometheus server is going to get metrics from all the other clusters' Prometheus servers using federation.
Since I am using helm, I added this to the values.yaml of the main central Prometheus server:
extraScrapeConfigs: |
  - job_name: 'federate'
    scrape_interval: 10s
    scheme: https
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{__name__=~"container_.*|kube_.*"}'
    static_configs:
      - targets:
          - 'prometheus.prod-1.xx.xx'
          - 'prometheus.prod-2.xx.xx'
          - 'prometheus.prod-3.xx.xx'
Then I apply the change using
helm upgrade prometheus prometheus-community/kube-prometheus-stack -f values.yaml
It should work, but it is not working.
The metrics of the other servers are not appearing in the main central server.
I verified this using query filters for the other servers.
I also verified whether the federation itself is working by using the following link:
https://prometheus.prod-1.xx.xx/federate?match[]={__name__=~"container_.*|kube.*|"}
and this link returned all the metrics.
I accessed my main central Prometheus server using kubectl port-forward and then visited http://localhost:9090/config, but in the config I don't see the federate config which I added.
Also, nothing wrong is appearing in the logs; there are no error/warning messages.
What am I doing wrong? Is there something more that is required?

Monitor Kubernetes Cluster from other Kubernetes cluster with Prometheus

I am running several Kubernetes clusters on Azure AKS, so I intend to create a centralized monitoring cluster, running on a separate Kubernetes cluster, with Prometheus and Grafana.
My idea is:
Isolate and centralize the monitoring cluster.
In case any cluster goes down, the monitoring cluster is still alive to inspect the downed cluster.
Run across cloud providers (if available).
I'm confused about connecting the clusters, networking, ingress, and how Prometheus discovers and pulls metrics from outside its own cluster.
Is there any best practice or instruction for my use case? Thank you!
Yes. Prometheus is a very flexible monitoring solution, in which each Prometheus server is able to act as a scrape target for another Prometheus server. Using Prometheus federation, a Prometheus server can scrape selected time series data from other Prometheus servers.
A typical Prometheus federation example configuration looks like this:
- job_name: 'federate'
  scrape_interval: 15s
  honor_labels: true
  metrics_path: '/federate'
  params:
    'match[]':
      - '{job="prometheus"}'
      - '{__name__=~"job:.*"}'
  static_configs:
    - targets:
        - 'source-prometheus-1:9090'
        - 'source-prometheus-2:9090'
        - 'source-prometheus-3:9090'
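The central Prometheus only needs network reachability to each source server's /federate endpoint. One common option (a sketch, not the only way) is to expose each source cluster's Prometheus through an Ingress; the hostname, the prometheus-server Service name, the monitoring namespace and the nginx ingress class below are all assumptions to adapt to your setup:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus-federate
  namespace: monitoring
spec:
  ingressClassName: nginx
  rules:
    - host: source-prometheus-1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: prometheus-server
                port:
                  number: 9090
The central server's static_configs targets then point at those hostnames.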

Prometheus does not show metrics of all pods

When we use Kubernetes in production and we have a scaled application with many pods published as a Service, every metrics-fetching request from Prometheus is routed to a randomly selected pod behind the Service.
In this situation, the results are not accurate for monitoring.
At any given moment we need the metrics of all pods (for example, 10 pods), and that is not possible by calling a single Kubernetes Service endpoint!
Is there any solution for this problem?
You can configure kubernetes_sd_configs so that Prometheus scrapes the pods individually and not just the service.
To do that, set the role to pod, like this:
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
    - role: pod
See this blog post for a full config example.
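Note that role: pod on its own discovers every pod in the cluster, so you will usually narrow it down. A small sketch, assuming the application runs in the default namespace and its pods carry an app: my-scaled-app label (both the namespace and the label are assumptions):
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
    - role: pod
      namespaces:
        names:
          - default
  relabel_configs:
    # keep only the pods of the scaled application (label name/value assumed)
    - source_labels: [__meta_kubernetes_pod_label_app]
      action: keep
      regex: my-scaled-app
Each matching pod then becomes its own scrape target, so all 10 replicas are collected instead of whichever pod the Service happens to route to.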

How to update Prometheus config in k8s cluster

I have Prometheus running in k8s. Could you please advise how I can change the running prometheus.yaml config in the cluster? I simply want to change:
scrape_configs:
  - job_name: my-exporter
    scrape_interval: 15s
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: http
How can I do this?
Thanks.
The recommended way is to provide prometheus.yml via a ConfigMap. That way, changes to the ConfigMap are propagated into the pod that consumes it. However, that alone is not enough for Prometheus to pick up the new config.
Prometheus supports reloading its config at runtime, so you don't need to stop Prometheus in order to pick up the new config. You can either do that manually by sending a POST request, as described in the link above, or automate the process with a sidecar container inside the same Prometheus pod that watches for updates to the config file and issues the reload POST request.
The following is an example of the second approach: prometheus-configmaps-continuous-deployment
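As a rough sketch of the manual variant (the ConfigMap name, namespace and pod name below are assumptions, and the reload endpoint only works if Prometheus was started with the --web.enable-lifecycle flag):
# edit the ConfigMap that holds prometheus.yml
kubectl edit configmap prometheus-config -n monitoring

# give the kubelet a moment to sync the mounted file into the pod,
# then expose the Prometheus API locally and hit the reload endpoint
kubectl port-forward -n monitoring prometheus-0 9090:9090 &
curl -X POST http://localhost:9090/-/reload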

unable to get the system service memory and cpu metrics of kubernetes cluster in grafana dashboard using prometheus

I am trying to monitor my Kubernetes cluster metrics using Prometheus and Grafana. Here is the link which I followed. I am facing an issue with kubernetes-service-endpoints (2/3 up) in my Prometheus dashboard.
Below is the Grafana dashboard used in this task.
I checked my Prometheus pod logs, and they show errors.
Could anybody suggest how to get the system service metrics into the above dashboard?
(or)
Suggest a Grafana dashboard for monitoring the Kubernetes cluster using Prometheus?
Check your prometheus.yaml; it should have static configs for Prometheus itself, such as:
static_configs:
  - targets:
      - localhost:9090
Run curl localhost:9090/metrics and make sure you're receiving metrics as output.
For the Grafana dashboard, create an org, add Prometheus as a data source, configure the Prometheus IP:PORT, and click on Test to confirm that connectivity is available.
Open the .json file, change the configs according to your requirements, then import and check.
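If you prefer to configure the data source as code instead of clicking through the UI, Grafana can also provision it from a YAML file; a minimal sketch, assuming an in-cluster Prometheus reachable at prometheus-server.monitoring.svc:9090 (that URL is an assumption):
# e.g. /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus-server.monitoring.svc:9090
    isDefault: true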