Alertmanager is disabled in Helm chart - kubernetes-helm

I have recently installed the kube-prometheus-stack Helm chart, but Alertmanager's cluster status shows as disabled by default.
I am using the default configuration; Alertmanager and Prometheus are integrated and I can see alerts in Alertmanager.
I tried to fix it with --cluster.listen-address, as suggested elsewhere, but I could not find that option anywhere in the Helm chart files.
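For what it's worth, the flag is not exposed directly in the chart: as far as I know, the Prometheus Operator passes an empty --cluster.listen-address to Alertmanager whenever only a single replica is configured, which is what makes the cluster status show as disabled. A minimal sketch of the values change that usually enables clustering, assuming the kube-prometheus-stack values layout (verify the keys against your chart version):
alertmanager:
  alertmanagerSpec:
    replicas: 3   # with more than one replica the Operator enables clustering
                  # (for a single replica it passes an empty --cluster.listen-address)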

Related

How to configure istio helm chart to use external kube-prometheus-stack?

I have deployed the Istio service mesh on a GKE cluster in the istio-system namespace using the base and istiod Helm charts, following these documents.
I have deployed Prometheus, Grafana, and Alertmanager using the kube-prometheus-stack Helm chart.
Every pod in these workloads is running fine and I don't see any errors, yet I don't get any Istio-related metrics in the Prometheus UI, and because of that I don't see any network graph in the Kiali dashboard.
Can anyone help me resolve this issue?
Istio expects Prometheus to discover which pods are exposing metrics through the use of the Kubernetes annotations prometheus.io/scrape, prometheus.io/port, and prometheus.io/path.
The Prometheus community has decided that those annotations, while popular, are insufficiently useful to be enabled by default. Because of this, the kube-prometheus-stack Helm chart does not discover pods using those annotations.
To get your installation of Prometheus to scrape your Istio metrics, you need to either configure Istio to expose metrics in a way your Prometheus installation already expects (you'll have to check the Prometheus configuration for that; I do not know what it does by default) or add a Prometheus scrape job that does discovery using the annotations above.
Details about how to integrate Prometheus with Istio are available here and an example Prometheus configuration file is available here.
You need to add additionalScrapeConfigs for Istio in the kube-prometheus-stack Helm chart's values.yaml:
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - {{ add your scrape config for istio }}
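For reference, here is a sketch of the same values block with the placeholder filled in with the common annotation-based pod-discovery job (adapted from the upstream Prometheus example configuration; the job name and kept labels are illustrative, so compare it with the Istio integration docs and example configuration linked above):
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: kubernetes-pods
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          # Only keep pods annotated with prometheus.io/scrape: "true"
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          # Honour a custom metrics path from prometheus.io/path
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          # Use the port given in prometheus.io/port
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
            target_label: __address__
          # Keep namespace and pod name as labels
          - source_labels: [__meta_kubernetes_namespace]
            action: replace
            target_label: namespace
          - source_labels: [__meta_kubernetes_pod_name]
            action: replace
            target_label: pod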

Installed prometheus-community/helm-charts but I can't get metrics in the "default" namespace

I recently learned about helm and how easy it is to deploy the whole prometheus stack for monitoring a Kubernetes cluster, so I decided to try it out on a staging cluster at my work.
I started by creating a dedicated namespace on the cluster for monitoring with:
kubectl create namespace monitoring
Then, with helm, I added the prometheus-community repo with:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
Next, I installed the chart with a prometheus release name:
helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring
At this time I didn't pass any custom configuration because I'm still trying it out.
After the install is finished, it all looks good. I can access the prometheus dashboard with:
kubectl port-forward prometheus-prometheus-kube-prometheus-prometheus-0 9090 -n monitoring
There, I see a bunch of pre-defined alerts and rules, but the problem is that I don't quite understand how to create new rules to check the pods in the default namespace, where I actually have my services deployed.
I am looking at http://localhost:9090/graph to play around with queries, but I can't seem to find any that give me metrics for my pods in the default namespace.
I am a bit overwhelmed by the amount of information, so I would like to know what I missed or what I am doing wrong here.
The Prometheus Operator includes several Custom Resource Definitions (CRDs), including ServiceMonitor (and PodMonitor). ServiceMonitors tell the Operator which services should be monitored.
I'm familiar with the Operator, though not with the Helm deployment, but I suspect you'll want to create ServiceMonitors to generate metrics for your apps in any namespace (including default).
See: https://github.com/prometheus-operator/prometheus-operator#customresourcedefinitions
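As a rough illustration, a ServiceMonitor for a hypothetical Service in the default namespace might look like this (the app name, port name, and labels are assumptions; the release: prometheus label must match the serviceMonitorSelector your release configures, so check your installation):
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app                 # hypothetical name
  namespace: monitoring
  labels:
    release: prometheus        # must match the chart's serviceMonitorSelector
spec:
  namespaceSelector:
    matchNames:
      - default                # look for Services in the default namespace
  selector:
    matchLabels:
      app: my-app              # assumed label on your Service
  endpoints:
    - port: http-metrics       # assumed name of the Service port exposing metrics
      path: /metrics
      interval: 30s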
ServiceMonitors and PodMonitors are CRDs for the Prometheus Operator. When working directly with the plain Prometheus Helm chart (without the Operator), you have to configure your targets directly in values.yaml by editing the scrape_configs section.
It is more complex to do it that way, so take a deep breath and start by reading this: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config
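As a minimal sketch, assuming the plain prometheus-community/prometheus chart, which accepts extra scrape jobs through an extraScrapeConfigs value (check your chart version's values.yaml for the exact key), a static target could be added like this and applied with helm upgrade ... -f values.yaml:
extraScrapeConfigs: |
  - job_name: my-app                      # hypothetical job
    metrics_path: /metrics
    static_configs:
      - targets:
          - my-app.default.svc:8080       # assumed Service address and port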

how to connect Loki helm chart to my app in kubernetes

I am new to Loki, but all I want to do is use it as simply as possible with Helm.
I want to get the logs of my app, which runs in Kubernetes, but there seem to be no instructions on how to do that. All I find is how to install Loki and add it to Grafana as a datasource, and I don't think that's what Loki is made for.
I simply want to track my app's logs in Kubernetes, so I am using the Loki Helm chart, and all I can find about a custom config is this line:
Deploy with custom config
helm upgrade --install loki grafana/loki-stack --set "key1=val1,key2=val2,..."
After installing Loki you can set it as a data source for Grafana.
For more details you can follow this example: Logging in Kubernetes with Loki and the PLG Stack.
I hope this helps you resolve your issue.
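If it helps, here is a sketch of custom values for the loki-stack chart, which (as far as I know) bundles Promtail to collect pod logs from every node and push them to Loki; verify the exact keys with helm show values grafana/loki-stack before relying on them:
loki:
  enabled: true
promtail:
  enabled: true          # Promtail tails the pod logs on each node and ships them to Loki
grafana:
  enabled: false         # set to true only if you don't already run Grafana elsewhere
Save this as loki-values.yaml and pass it with -f loki-values.yaml instead of the --set flags shown above; once Promtail is running, your app's logs should be queryable in Grafana through the Loki datasource.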

Setting up Prometheus / Grafana in Kubernetes for RabbitMQ alerts

I have set up Prometheus / Grafana in Kubernetes using the stable/prometheus-operator Helm chart, and I have set up the RabbitMQ exporter Helm chart as well. I have to set up alerts for RabbitMQ, which is running on k8s, but I can't see any target for RabbitMQ in Prometheus. RabbitMQ is not showing up in the targets, so I can't monitor its metrics. It's critical.
The target in the RabbitMQ exporter can be set by passing arguments to the Helm chart: we just have to set the URL and password of RabbitMQ using --set.
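A rough sketch of the values involved, assuming the prometheus-rabbitmq-exporter chart (the key names should be verified against that chart's values.yaml, and the ServiceMonitor option only applies when the Prometheus Operator is installed):
rabbitmq:
  url: http://rabbitmq.default.svc:15672   # assumed address of the RabbitMQ management API
  user: guest
  password: guest
prometheus:
  monitor:
    enabled: true              # create a ServiceMonitor so the Operator picks up the exporter as a target
    additionalLabels:
      release: prometheus      # must match your prometheus-operator release's serviceMonitorSelector
The same settings can be passed on the command line with --set rabbitmq.url=...,rabbitmq.password=... as described above.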

K8S monitoring stack configuration with alerts

I am trying to set up a k8s monitoring stack for my on-premises cluster. What I want to set up is:
Prometheus
Grafana
Kube-state-metrics
Alertmanager
Loki
I can find a lot of resources for doing that, like:
This one configures the monitoring stack (except Loki) using its own CRD files:
https://medium.com/faun/production-grade-kubernetes-monitoring-using-prometheus-78144b835b60
Configure Prometheus and Grafana in different namespaces using separate Helm charts:
https://github.com/helm/charts/tree/master/stable/prometheus
https://github.com/helm/charts/tree/master/stable/grafana
Configure the prometheus-operator Helm chart in a single namespace:
https://github.com/helm/charts/tree/master/stable/prometheus-operator
I have doubts regarding the configuration of the alert notifications.
All three setups mentioned above include the Grafana UI, so there is an option to configure alert rules and notification channels via that UI.
But in the first option, Prometheus rules are configured with the Prometheus setup and notification channels with the Alertmanager setup, using ConfigMaps and CRDs.
Which is the better configuration option?
What is the difference in setting up alerts via Grafana UI & Prometheus rules and channels via such configMap CRDs?
What are the advantages and disadvantages of both methods?
I chose the third option, setting up the prometheus-operator chart in a single namespace, because that chart configures Prometheus, Grafana, and Alertmanager together. Prometheus is added as a datasource in Grafana by default, and the chart's values file allows adding additional alert rules for Prometheus as well as datasources and dashboards for Grafana.
I then configured Loki in the same namespace and added it as a datasource in Grafana.
I also configured a webhook to forward notifications from Alertmanager to MS Teams.
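For illustration, here is a sketch of the values fragments this amounts to, assuming the prometheus-operator / kube-prometheus-stack chart layout and an in-cluster bridge such as prometheus-msteams to translate Alertmanager webhooks into MS Teams messages (service names and URLs here are assumptions; verify the keys against your chart version):
grafana:
  additionalDataSources:
    - name: Loki
      type: loki
      url: http://loki:3100              # assumed in-cluster Loki service address
      access: proxy
alertmanager:
  config:
    route:
      receiver: msteams                  # route everything to the Teams webhook by default
    receivers:
      - name: msteams
        webhook_configs:
          # hypothetical in-cluster bridge (e.g. prometheus-msteams) that forwards
          # Alertmanager notifications to an MS Teams incoming webhook
          - url: http://prometheus-msteams:2000/alertmanager
            send_resolved: true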