Prometheus for k8s multi-clusters - kubernetes

I have 3 Kubernetes clusters (prod, test, monitoring). I am new to Prometheus, so I have tested it by installing it in my test environment with the Helm chart:
# https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
helm install [RELEASE_NAME] prometheus-community/kube-prometheus-stack
But if I want metrics from the prod and test clusters, I have to repeat the same Helm installation, and each "kube-prometheus-stack" would be standalone in its own cluster. That is not ideal at all. I am trying to find a way to have a single Prometheus/Grafana which would federate/aggregate the metrics from each cluster's Prometheus server.
I found this link about Prometheus federation:
https://prometheus.io/docs/prometheus/latest/federation/
If I install the Helm chart "kube-prometheus-stack" and get rid of Grafana on the 2 other clusters, how can I make the 3rd "kube-prometheus-stack", on the 3rd cluster, scrape metrics from the 2 other ones?
thanks

You have to modify the Prometheus federation configuration so it can scrape metrics from the other clusters, as described in the documentation:
scrape_configs:
  - job_name: 'federate'
    scrape_interval: 15s
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="prometheus"}'
        - '{__name__=~"job:.*"}'
    static_configs:
      - targets:
          - 'source-prometheus-1:9090'
          - 'source-prometheus-2:9090'
          - 'source-prometheus-3:9090'
The params field selects which series to pull. In this particular example it will scrape any series with the label job="prometheus", or any metric whose name starts with job:, from the Prometheus servers at source-prometheus-{1,2,3}:9090.
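As a quick sanity check (a hedged example; the hostname is just the placeholder from the config above), you can query a source Prometheus's /federate endpoint directly and confirm it returns the expected series:
curl -G 'http://source-prometheus-1:9090/federate' \
  --data-urlencode 'match[]={job="prometheus"}'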
You can check the following articles to get more insight into Prometheus federation:
Monitoring Kubernetes with Prometheus - outside the cluster!
Prometheus federation in Kubernetes
Monitoring multiple federated clusters with Prometheus - the secure way
Monitoring a Multi-Cluster Environment Using Prometheus Federation and Grafana

You have a few options here:
Option 1:
You can achieve this by having vmagent or grafana-agent in the prod and test clusters and configuring remote write on them to your monitoring cluster.
But in this case you will need to install kube-state-metrics and node-exporter separately into the prod and test clusters.
It's also important to add an extra label with the cluster name (or any unique identifier) before sending metrics via remote write, to make sure the recording rules from "kube-prometheus-stack" keep working correctly.
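For illustration, a minimal remote-write sketch for grafana-agent (static mode); the remote-write URL and the cluster label value are placeholders, and the exact top-level key name varies between agent versions:
metrics:
  global:
    external_labels:
      cluster: prod                      # unique identifier for this cluster
  configs:
    - name: default
      remote_write:
        - url: https://vmsingle.monitoring.example.com/api/v1/write
      scrape_configs: []                 # kube-state-metrics / node-exporter jobs go here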
Option 2:
You can install the victoria-metrics-k8s-stack chart. It has functionality similar to kube-prometheus-stack - it also installs a bunch of components, recording rules and dashboards.
In this case you install victoria-metrics-k8s-stack in every cluster, but with different values.
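A minimal install sketch (the release name and values file are placeholders; the repo URL is the official VictoriaMetrics Helm charts repository):
helm repo add vm https://victoriametrics.github.io/helm-charts/
helm repo update
helm install vmks vm/victoria-metrics-k8s-stack -f values.yaml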
For the monitoring cluster you can use the default values, with
grafana:
  sidecar:
    dashboards:
      multicluster: true
and a properly configured ingress for vmsingle.
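As a rough, unverified sketch, that ingress section in the values could look something like the following (the host is a placeholder; check the chart's values.yaml for the exact keys):
vmsingle:
  ingress:
    enabled: true
    hosts:
      - vmsingle.monitoring.example.com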
For the prod and test clusters you need to disable a bunch of components:
defaultRules:
  create: false
vmsingle:
  enabled: false
alertmanager:
  enabled: false
vmalert:
  enabled: false
vmagent:
  spec:
    remoteWrite:
      - url: "<vmsingle-ingress>/api/v1/write"
    externalLabels:
      cluster: <cluster-name>
grafana:
  enabled: false
  defaultDashboardsEnabled: false
In this case the chart will deploy vmagent, kube-state-metrics, node-exporter and the scrape configuration for vmagent.

You could try looking at Wavefront. It's a commercial tool now, but you can get a free 30-day trial - also, it understands PromQL. So essentially, you could use the same Prometheus rules and config across all clusters, and then use Wavefront to just connect to all of those Prometheus instances.
Another option may be Thanos, but I've never used it personally.

Related

Enable metrics for bitnami/redis with prometheus-community/kube-prometheus-stack

I have already set up prometheus-community/kube-prometheus-stack in my cluster using Helm.
I also need to deploy a Redis cluster in the same cluster.
How can I provide options so that the metrics of this Redis cluster go to Prometheus and get fed to Grafana?
On the GitHub page some options are listed.
Will it work with the configuration below?
$ helm install my-release \
    --set metrics.enabled=true \
    bitnami/redis
Do I need to do anything else?
I would assume that you asking this question in the first place means the Redis metrics didn't show up in Prometheus for you.
Setting up Prometheus using the "prometheus-community/kube-prometheus-stack" Helm chart could look very different for you than for me, as it has a lot of configurable options.
As the Helm chart comes with the Prometheus Operator, we have used the PodMonitor and/or ServiceMonitor CRDs, as they provide far more configuration options. Here are some docs around that:
https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md#include-servicemonitors
So basically, deploy Prometheus with "prometheus.prometheusSpec.serviceMonitorSelector.matchLabels" set to a label of your choice:
serviceMonitorSelector:
  matchLabels:
    monitoring-platform: core-prometheus
and thereafter deploy Redis with "metrics.enabled=true", "metrics.serviceMonitor.enabled=true" and "metrics.serviceMonitor.selector" set to the same label as defined in the Prometheus serviceMonitorSelector (monitoring-platform: core-prometheus in this case). Something like this:
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    selector:
      monitoring-platform: core-prometheus
This setup works for us.
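For reference, a hedged sketch of passing the same values on the command line when installing the Bitnami chart (the release name is a placeholder; the keys mirror the YAML above):
$ helm install my-release bitnami/redis \
    --set metrics.enabled=true \
    --set metrics.serviceMonitor.enabled=true \
    --set metrics.serviceMonitor.selector.monitoring-platform=core-prometheus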

Dynamically update prometheus scrape config based on pod labels

I'm trying to enhance my monitoring and want to expand the amount of metrics pulled into Prometheus from our Kube estate. We already have a stand-alone Prom implementation which has a hard-coded config file monitoring some bare-metal servers, and hooks into cAdvisor for generic Pod metrics.
What I would like to do is configure Kube to monitor the apache_exporter metrics from a webserver deployed in the cluster, but also dynamically add a 2nd, 3rd, etc. webserver as the instances are scaled up.
I've looked at the kube-prometheus project, but this seems to be more geared to instances where there is no established Prometheus deployed. Is there a simple way to get Prometheus to scrape the Kube API or etcd to pull in the current list of pods which match certain criteria (i.e. a tag like deploymentType=webserver) and scrape the apache_exporter metrics for those pods, and the mysqld_exporter metrics where deploymentType=mysql?
There's a project called kube-prometheus-stack (formerly prometheus-operator): https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
It has concepts called ServiceMonitor and PodMonitor:
https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/design.md#servicemonitor
https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/design.md#podmonitor
Basically, these are selectors that point your Prometheus instance to scrape targets. In the case of a service monitor, it discovers all the pods behind the service. In the case of a pod monitor, it discovers pods directly. The Prometheus scrape config is updated and reloaded automatically in both cases.
Example PodMonitor:
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example
  namespace: monitoring
spec:
  podMetricsEndpoints:
    - interval: 30s
      path: /metrics
      port: http
  namespaceSelector:
    matchNames:
      - app
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app
Note that this PodMonitor object itself must be discovered by the controller. To achieve this you write a podMonitorSelector in the Prometheus resource. This additional explicit linkage is done intentionally - this way, if you have 2 Prometheus instances on your cluster (say Infra and Product) you can separate which Prometheus gets which Pods in its scraping config.
The same applies to a ServiceMonitor.
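For illustration, a hedged sketch of that linkage in the Prometheus custom resource (the names and the team label are placeholders; the PodMonitor above would need a matching label in its metadata for this selector to pick it up):
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: infra
  namespace: monitoring
spec:
  podMonitorSelector:
    matchLabels:
      team: infra
  serviceMonitorSelector:
    matchLabels:
      team: infra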

Prometheus Alert Manager for Federation

We have several clusters where our applications are running. We would like to set up a Central Monitoring cluster which can scrape metrics from the rest of the clusters using Prometheus federation.
So to do that, I need to install a Prometheus server in each cluster and a federating Prometheus server in the central cluster. I will install Grafana as well in the central cluster to visualise the metrics that we gather from the rest of the Prometheus servers.
So the questions are:
Where should I setup the Alert Manager? Only for Central Cluster or each cluster has to be also alert manager?
What is the best practice alerting while using Federation?
I thought I could use an ingress controller to expose each Prometheus server? What is the best practice for communication between the Prometheus servers and the federating one in k8s?
Based on this blog
Where should I setup the Alert Manager? Only for Central Cluster or each cluster has to be also alert manager?
What is the best practice alerting while using Federation?
The answer here would be to do that on each cluster.
If the data you need to do alerting is moved from one Prometheus to another then you've added an additional point of failure. This is particularly risky when WAN links such as the internet are involved. As far as is possible, you should try and push alerting as deep down the federation hierarchy as possible. For example, an alert about a target being down should be set up on the Prometheus scraping that target, not a global Prometheus which could be several steps removed.
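To make that concrete, a minimal sketch of such a "target down" rule, defined on the leaf Prometheus that actually scrapes the target rather than on the central one (the duration and severity are placeholders):
groups:
  - name: availability
    rules:
      - alert: TargetDown
        expr: up == 0
        for: 5m
        labels:
          severity: warning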
I though ı can use ingress controller to expose each prometheus server? What is the best practice to provide communication between prometheus server and federation in k8s?
I think that depends on the use case; in each doc I checked they just use targets in scrape_configs.static_configs in prometheus.yml,
like here
scrape_configs:
  - job_name: 'federate'
    scrape_interval: 15s
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="prometheus"}'
        - '{__name__=~"job:.*"}'
    static_configs:
      - targets:
          - 'source-prometheus-1:9090'
          - 'source-prometheus-2:9090'
          - 'source-prometheus-3:9090'
OR
like here
prometheus.yml:
  rule_files:
    - /etc/config/rules
    - /etc/config/alerts
  scrape_configs:
    - job_name: 'federate'
      scrape_interval: 15s
      honor_labels: true
      metrics_path: '/federate'
      params:
        'match[]':
          - '{job="prometheus"}'
          - '{__name__=~"job:.*"}'
      static_configs:
        - targets:
            - 'prometheus-server:80'
Additionally, it's worth checking how they did this in this tutorial, where they used Helm to build a central monitoring cluster with two Prometheus servers on two clusters.

Rancher Cluster Monitoring + Prometheus Operator?

I'm managing several k8s clusters with Rancher. I've set up most of them with cluster monitoring apps from Rancher (so graphs and Grafana links show up in Rancher under workload monitoring, etc.).
Question: Is there a way to configure Rancher to pull metrics/graphs from prometheus-operator instead?
I've asked this in Slack, and have not gotten an answer or response at all.
Reason: it seems I can configure/add additional configurations (ConfigMaps) to prometheus-operator that I cannot add to the Prometheus installed through Rancher's cluster monitoring app.
Rancher installed prometheus-operator, but the app says not to touch it.
Edit:
This is what I was after all along:
additionalScrapeConfigs: []
https://github.com/rancher/system-charts/blob/dev/charts/rancher-monitoring/v0.0.3/charts/prometheus/values.yaml#L61
and
storageSpec: {}
https://github.com/rancher/system-charts/blob/dev/charts/rancher-monitoring/v0.0.3/charts/prometheus/values.yaml#L35
Unlike in the coreos/prometheus-operator chart, the answer for the rancher-monitoring app should be:
prometheus:
  additionalScrapeConfigs: []
  # - job_name: "prometheus"
  #   static_configs:
  #     - targets:
  #       - "localhost:9090"
  remoteWrite: []
  # - url: http://remote1/push

Prometheus Adapter Custom Metrics for Libvirt in a K8S Cluster

I have a K8S cluster which is also managing VMs via virtlet. This K8S cluster is running K8S v1.13.2, with prometheus and the prometheus-adapter, and a custom-metrics server. I have written a custom metrics exporter for libvirtd which pulls in VM metrics and have configured prometheus to scrape that exporter for those VM metrics -- this is working and working well.
What I need to do next, is to have the prometheus-adapter push those metrics into K8S. Nothing I have done is working. Funny thing is, I can see the metrics in prometheus, but I am unable to present them to the custom metrics API.
Example metric visible in prometheus:
libvirt_cpu_stats_cpu_time_nanosecs{app="prometheus-lex",domain="virtlet-c91822c8-5e82-beta-deflect",instance="192.168.2.32:9177",job="kubernetes-pods",kubernetes_namespace="default",kubernetes_pod_name="prometheus-lex-866694b884-9z8v6",name="prometheus-lex",pod_template_hash="866694b884"}
Prometheus Adapter configuration for this metric:
- seriesQuery: 'libvirt_cpu_stats_cpu_time_nanosecs{job="kubernetes-pods", app="prometheus-lex"}'
  seriesFilters: []
  resource:
    overrides:
      kubernetes_pod_name:
        resource: pod
      kubernetes_namespace:
        resource: namespace
  name:
    matches: libvirt_cpu_stats_cpu_time_nanosecs
    as: libvirt_cpu_stats_cpu_time_rate
  metricsQuery: rate(libvirt_cpu_stats_cpu_time_nanosecs{job="kubernetes-pods", app="prometheus-lex", <<.LabelMatchers>>}[5m])
When I query the custom metrics API, I do not see what I am looking for:
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1|grep libvirt
returns nothing
Additionally, I can see the prometheus-adapter is able to query the series from prometheus. So I know that side of the adapter is working. I am just trying to figure out why it's not presenting them to the custom metrics server.
From the prometheus-adapter logs:
I0220 19:12:58.442937 1 api.go:74] GET http://prometheus-server.default.svc.cluster.local:80/api/v1/series?match%5B%5D=libvirt_cpu_stats_cpu_time_nanosecs%7Bkubernetes_namespace%21%3D%22%22%2Ckubernetes_pod_name%21%3D%22%22%7D&start=1550689948.392 200 OK
Any ideas what I am missing here?
Update:
I have also tried the following new configuration, and it's still not working.
- seriesQuery: 'libvirt_cpu_stats_cpu_time_nanosecs{kubernetes_namespace!="",kubernetes_pod_name!=""}'
  seriesFilters: []
  resource:
    overrides:
      kubernetes_namespace: {resource: "namespace"}
      kubernetes_pod_name: {resource: "pod"}
  name:
    matches: 'libvirt_cpu_stats_cpu_time_nanosecs'
    as: 'libvirt_cpu_stats_cpu_time_rate'
  metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'
It actually depends on how you install the Prometheus Adapter. If you install it via Helm and use YAML to configure the rules, you need to follow this README https://github.com/helm/charts/blob/master/stable/prometheus-adapter/README.md and declare the rules like:
rules:
  custom:
    - seriesQuery: '{__name__=~"^some_metric_count$"}'
      resources:
        template: <<.Resource>>
      name:
        matches: ""
        as: "my_custom_metric"
      metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)
Pay attention to the custom keyword. If you miss it, the metric won't be available via the custom metrics API.
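Applied to the question above, a hedged sketch of the libvirt rule nested under rules.custom in the adapter's Helm values (the rule body is copied from the question; note that the adapter's rule schema expects the plural resources key):
rules:
  custom:
    - seriesQuery: 'libvirt_cpu_stats_cpu_time_nanosecs{kubernetes_namespace!="",kubernetes_pod_name!=""}'
      resources:
        overrides:
          kubernetes_namespace: {resource: "namespace"}
          kubernetes_pod_name: {resource: "pod"}
      name:
        matches: 'libvirt_cpu_stats_cpu_time_nanosecs'
        as: 'libvirt_cpu_stats_cpu_time_rate'
      metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'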