Setting up Prometheus / Grafana in Kubernetes for RabbitMQ alerts - kubernetes

I have set up Prometheus / Grafana in Kubernetes using the stable/prometheus-operator Helm chart, and I have installed the RabbitMQ exporter Helm chart as well. I need to set up alerts for RabbitMQ, which is running on k8s, but RabbitMQ is not showing up as a target in Prometheus, so I can't monitor its metrics. It's critical.

The target in the RabbitMQ exporter can be set by passing arguments to the Helm chart. You just have to set the URL and password of RabbitMQ in the Helm chart using --set.
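A minimal sketch of such an install, assuming the stable/prometheus-rabbitmq-exporter chart (value names may differ for other exporter charts or versions). Note that the exporter only shows up as a Prometheus target if its ServiceMonitor carries a label matching your Prometheus serviceMonitorSelector, which for the operator chart is usually the Helm release label:

helm install rabbitmq-exporter stable/prometheus-rabbitmq-exporter \
  --namespace monitoring \
  --set rabbitmq.url=http://rabbitmq.default.svc:15672 \
  --set rabbitmq.user=guest \
  --set rabbitmq.password=<RABBITMQ_PASSWORD> \
  --set prometheus.monitor.enabled=true \
  --set prometheus.monitor.additionalLabels.release=prom-operator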

Related

How to configure istio helm chart to use external kube-prometheus-stack?

I have deployed the Istio service mesh on a GKE cluster in the istio-system namespace using the base & istiod Helm charts, following these documents.
I have deployed Prometheus, Grafana & Alertmanager using the kube-prometheus-stack Helm chart.
Every pod of these workloads is running fine and I don't see any errors, yet I don't get any metrics related to the Istio workload in the Prometheus UI. Because of that, I don't see any network graph in the Kiali dashboard.
Can anyone help me resolve this issue?
Istio expects Prometheus to discover which pods are exposing metrics through the use of the Kubernetes annotations prometheus.io/scrape, prometheus.io/port, and prometheus.io/path.
The Prometheus community has decided that those annotations, while popular, are insufficiently useful to be enabled by default. Because of this the kube-prometheus-stack helm chart does not discover pods using those annotations.
To get your installation of Prometheus to scrape your Istio metrics you need to either configure Istio to expose metrics in a way that your installation of Prometheus expects (you'll have to check the Prometheus configuration for that, I do not know what it does by default) or add a Prometheus scrape job which will do discovery using the above annotations.
Details about how to integrate Prometheus with Istio are available here and an example Prometheus configuration file is available here.
You need to add additionalScrapeConfigs for Istio in the kube-prometheus-stack Helm chart's values.yaml:
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - {{ add your scrape config for istio }}
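For reference, a pod-annotation-based scrape job of the kind described above looks roughly like this. This is a sketch following the common prometheus.io/* annotation convention (the job name and relabeling details are assumptions; you may want to restrict it to the namespaces where Istio workloads run):

prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: kubernetes-pods
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          # keep only pods annotated with prometheus.io/scrape: "true"
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: "true"
          # honour a custom metrics path from prometheus.io/path
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          # rewrite the scrape address to the port from prometheus.io/port
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
            target_label: __address__
          # carry namespace and pod name over as labels
          - source_labels: [__meta_kubernetes_namespace]
            action: replace
            target_label: namespace
          - source_labels: [__meta_kubernetes_pod_name]
            action: replace
            target_label: pod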

Integrate opensearch prometheus exporter to Kubernetes/helm

My Opensearch app runs in Kubernetes.
I found this OpenSearch exporter plugin:
https://github.com/aparo/opensearch-prometheus-exporter
to export metrics to prometheus.
Is there any option to integrate this solution into Kubernetes?
Yes. After you install the plugin, you need Prometheus running on Kubernetes.
I recommend you install the Prometheus Operator, which creates the Prometheus service for you.
https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/getting-started.md
It includes the "ServiceMonitor" kind, which defines which "pod" under a "service" you would like to monitor.
After you create a "ServiceMonitor" that points to your OpenSearch service correctly, Prometheus will scrape your metrics.
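A minimal ServiceMonitor sketch for this case; the names, labels, namespaces, and port are assumptions you will need to adapt, and the metrics path follows the plugin's usual /_prometheus/metrics endpoint (check the plugin's README for the exact path):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: opensearch-metrics
  namespace: monitoring
  labels:
    release: prom-operator          # must match your Prometheus serviceMonitorSelector
spec:
  namespaceSelector:
    matchNames:
      - opensearch                  # namespace where the OpenSearch service lives
  selector:
    matchLabels:
      app: opensearch               # labels on your OpenSearch service
  endpoints:
    - port: http                    # named port on the service (usually 9200)
      path: /_prometheus/metrics    # metrics path exposed by the exporter plugin
      interval: 30s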

Stackdriver Prometheus sidecar with Prometheus Operator helm chart

We have set up Prometheus + Grafana on our GKE cluster using the stable/prometheus-operator Helm chart. Now we want to export some metrics to Stackdriver because we have installed the Custom Metrics Stackdriver Adapter. We are already using some Pub/Sub metrics from Stackdriver for autoscaling a few deployments. Now we also want to use some Prometheus metrics (mainly the nginx request rate) in the autoscaling of other deployments.
So, my first question: Can we use Prometheus adapter in parallel with Stackdriver adapter for autoscaling in the same cluster?
If not, we will need to install Stackdriver Prometheus Sidecar for exporting the Prometheus metrics to Stackdriver and then use them for autoscaling via Stackdriver adapter.
From the instructions here, it looks like we need to install the Stackdriver sidecar in the same pod in which Prometheus is running. I gave it a try. When I ran the patch.sh script, I got the message statefulset.apps/prometheus-prom-operator-prometheus-o-prometheus patched, but when I inspected the StatefulSet again, it didn't have the Stackdriver sidecar container in it. Since this StatefulSet is created by a Helm chart, we probably can't modify it directly. Is there a recommended way of doing this in Helm?
Thanks to this comment on GitHub, I figured it out. There are so many configuration options accepted by this Helm chart that I missed it while reading the docs.
So, it turns out that this Helm chart accepts a configuration option prometheus.prometheusSpec.containers. Its description in the docs says: "Containers allows injecting additional containers. This is meant to allow adding an authentication proxy to a Prometheus pod". But it is obviously not limited to an authentication proxy; you can pass any container spec here and it will be added to the Prometheus StatefulSet created by this Helm chart.
Here is the sample configuration I used. Some key points:
Please replace the values in angle brackets with your actual values.
Feel free to remove the arg --include. I added it because nginx_http_requests_total is the only Prometheus metric I want to send to Stackdriver for now. Check Managing costs for Prometheus-derived metrics for more details about it.
To figure out the name of the volume to use in volumeMounts:
List the StatefulSets in the Prometheus Operator namespace. Assuming that you installed it in the monitoring namespace: kubectl get statefulsets -n monitoring
Describe the Prometheus StatefulSet, assuming that its name is prometheus-prom-operator-prometheus-o-prometheus: kubectl describe statefulset prometheus-prom-operator-prometheus-o-prometheus -n monitoring
In the details of this StatefulSet, find the container named prometheus and note the value passed to it in the arg --storage.tsdb.path.
Find the volume that is mounted on this container at the same path. In my case it was prometheus-prom-operator-prometheus-o-prometheus-db, so I mounted the same volume on my Stackdriver sidecar container as well.
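As a shortcut, the volume names and the prometheus container's args can also be pulled out with jsonpath; a sketch, assuming the StatefulSet name and namespace used above:

kubectl get statefulset prometheus-prom-operator-prometheus-o-prometheus -n monitoring \
  -o jsonpath='{.spec.template.spec.volumes[*].name}'

kubectl get statefulset prometheus-prom-operator-prometheus-o-prometheus -n monitoring \
  -o jsonpath='{.spec.template.spec.containers[?(@.name=="prometheus")].args}'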
prometheus:
  prometheusSpec:
    containers:
      - name: stackdriver-sidecar
        image: gcr.io/stackdriver-prometheus/stackdriver-prometheus-sidecar:0.7.5
        imagePullPolicy: Always
        args:
          - --stackdriver.project-id=<GCP PROJECT ID>
          - --prometheus.wal-directory=/prometheus/wal
          - --stackdriver.kubernetes.location=<GCP PROJECT REGION>
          - --stackdriver.kubernetes.cluster-name=<GKE CLUSTER NAME>
          - --include=nginx_http_requests_total
        ports:
          - name: stackdriver
            containerPort: 9091
        volumeMounts:
          - name: prometheus-prom-operator-prometheus-o-prometheus-db
            mountPath: /prometheus
Save this yaml to a file. Let's assume you saved it to prom-config.yaml
Now, find the release name you used to install the Prometheus Operator Helm chart on your cluster:
helm list
Assuming that the release name is prom-operator, you can update the release with the config composed above by running this command:
helm upgrade -f prom-config.yaml prom-operator stable/prometheus-operator
I hope you found this helpful.

K8S monitoring stack configuration with alerts

I am trying to set up a k8s monitoring stack for my on-premises cluster. What I want to set up is:
Prometheus
Grafana
Kube-state-metrics
Alertmanager
Loki
I can find a lot of resources to do that, like:
This configures the monitoring stack (except Loki) using its own CRD files:
https://medium.com/faun/production-grade-kubernetes-monitoring-using-prometheus-78144b835b60
Configure Prometheus and Grafana in different namespaces using separate helm charts:
https://github.com/helm/charts/tree/master/stable/prometheus
https://github.com/helm/charts/tree/master/stable/grafana
Configure the prometheus-operator Helm chart in a single namespace:
https://github.com/helm/charts/tree/master/stable/prometheus-operator
I have doubts regarding the configuration of the alert notifications.
All three setups mentioned above include the Grafana UI, so there is an option to configure alert rules and notification channels via that UI.
But in the first option, Prometheus rules are configured as part of the Prometheus setup and notification channels as part of the Alertmanager setup, using ConfigMaps/CRDs.
Which is the better configuration option?
What is the difference in setting up alerts via Grafana UI & Prometheus rules and channels via such configMap CRDs?
What are the advantages and disadvantages of both methods?
I chose the third option and set up prometheus-operator in a single namespace, because this chart configures Prometheus, Grafana, and Alertmanager together. Prometheus is added as a datasource in Grafana by default, and the chart's values file lets you add extra alert rules for Prometheus as well as datasources and dashboards for Grafana.
I then configured Loki in the same namespace and added it as a datasource in Grafana.
I also configured a webhook to forward notifications from Alertmanager to MS Teams.
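A rough sketch of the kinds of values overrides involved, assuming the stable/prometheus-operator chart (key names can vary between chart versions, and the alert rule, Loki URL, and Teams bridge URL are illustrative assumptions; Alertmanager cannot post to Teams directly, so a bridge such as prometheus-msteams is typically used):

additionalPrometheusRules:
  - name: custom-rules
    groups:
      - name: custom.rules
        rules:
          - alert: TargetDown            # illustrative alert rule
            expr: up == 0
            for: 5m
            labels:
              severity: warning

grafana:
  additionalDataSources:
    - name: Loki
      type: loki
      url: http://loki.monitoring.svc:3100    # Loki deployed in the same namespace

alertmanager:
  config:
    route:
      receiver: ms-teams
    receivers:
      - name: ms-teams
        webhook_configs:
          # hypothetical in-cluster prometheus-msteams bridge endpoint
          - url: http://prometheus-msteams.monitoring.svc:2000/alertmanager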

unable to get the system service memory and cpu metrics of kubernetes cluster in grafana dashboard using prometheus

I am trying to monitor my Kubernetes cluster metrics using Prometheus and Grafana. Here is the link which I followed. I am facing an issue with kubernetes-service-endpoints (2/3 up) in my Prometheus dashboard.
Below is the Grafana dashboard which is used in this task.
I checked my Prometheus pod logs; they show errors.
Could anybody suggest how to get system service metrics into the above dashboard?
(or)
Can you suggest any Grafana dashboard for monitoring the Kubernetes cluster using Prometheus?
Check your prometheus.yaml; it should have a static config for Prometheus itself, such as:
static_configs:
  - targets:
      - localhost:9090
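For context, that block sits under scrape_configs in prometheus.yml. A minimal config looks roughly like this (the node-exporter job and its target are illustrative assumptions for collecting the system metrics the dashboard expects):

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: prometheus            # Prometheus scraping itself
    static_configs:
      - targets:
          - localhost:9090
  - job_name: node-exporter         # system CPU/memory metrics per node
    static_configs:
      - targets:
          - <node-ip>:9100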
Run curl localhost:9090/metrics and make sure you're receiving metrics as output.
For the Grafana dashboard, create an org, add Prometheus as a data source configured with the Prometheus IP:PORT, and click Test to confirm connectivity.
Open the dashboard's .json file, change the configs according to your requirements, then import it and check.