I am trying to set up a k8s monitoring stack for my on-premises cluster. What I want to set up is:
Prometheus
Grafana
Kube-state-metrics
Alertmanager
Loki
I can find a lot of resources to do that, for example:
This one configures the monitoring stack, except Loki, using its own CRD files:
https://medium.com/faun/production-grade-kubernetes-monitoring-using-prometheus-78144b835b60
Configure Prometheus and Grafana in different namespaces using separate helm charts:
https://github.com/helm/charts/tree/master/stable/prometheus
https://github.com/helm/charts/tree/master/stable/grafana
Configure Prometheus-operator helm chart into a single namespace:
https://github.com/helm/charts/tree/master/stable/prometheus-operator
I have doubts regarding the configuration of the alert notifications.
All three setups mentioned above have Grafana UI. So, there is an option to configure alert rules and notification channels via that UI.
But in the first option, Prometheus rules are configured with Prometheus setup and notification channels are configured with the alert-manager setup using configMap CRDs.
Which is the better configuration option?
What is the difference in setting up alerts via Grafana UI & Prometheus rules and channels via such configMap CRDs?
What are the advantages and disadvantages of both methods?
I chose the third option and set up prometheus-operator in a single namespace, because this chart configures Prometheus, Grafana, and Alertmanager together. Prometheus is added as a datasource in Grafana by default, and the chart's values file allows adding extra alert rules for Prometheus, as well as datasources and dashboards for Grafana.
Then I configured Loki in the same namespace and added it as a datasource in Grafana.
I also configured a webhook to redirect notifications from Alertmanager to MS Teams.
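For reference, both the extra rules and the Teams webhook can be expressed in the chart's values file. The snippet below is a minimal sketch, assuming the prometheus-operator chart's additionalPrometheusRules and alertmanager.config keys; the receiver URL assumes a prometheus-msteams proxy service (hypothetical name and port) that converts Alertmanager webhooks into Teams messages:

```yaml
# Sketch of a prometheus-operator values.yaml fragment.
# The webhook URL points at an assumed prometheus-msteams proxy in the
# same namespace; adjust the service name and port to your deployment.
additionalPrometheusRules:
  - name: custom-alert-rules
    groups:
      - name: availability.rules
        rules:
          - alert: TargetDown
            expr: up == 0          # fires when any scrape target is down
            for: 5m
            labels:
              severity: warning
alertmanager:
  config:
    route:
      receiver: ms-teams
    receivers:
      - name: ms-teams
        webhook_configs:
          - url: http://prometheus-msteams:2000/alertmanager
```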
I have deployed the Istio service mesh on a GKE cluster in the istio-system namespace, using the base and istiod helm charts and following these documents.
I have deployed Prometheus, Grafana & Alertmanager using the kube-prometheus-stack helm chart.
Every pod of these workloads is running fine; I don't see any errors. However, I am not getting any Istio-related metrics in the Prometheus UI, and because of that I don't see any network graph in the Kiali dashboard.
Can anyone help me resolve this issue?
Istio expects Prometheus to discover which pods are exposing metrics through the use of the Kubernetes annotations prometheus.io/scrape, prometheus.io/port, and prometheus.io/path.
The Prometheus community has decided that those annotations, while popular, are insufficiently useful to be enabled by default. Because of this the kube-prometheus-stack helm chart does not discover pods using those annotations.
To get your installation of Prometheus to scrape your Istio metrics you need to either configure Istio to expose metrics in a way that your installation of Prometheus expects (you'll have to check the Prometheus configuration for that, I do not know what it does by default) or add a Prometheus scrape job which will do discovery using the above annotations.
Details about how to integrate Prometheus with Istio are available here and an example Prometheus configuration file is available here.
You need to add additionalScrapeConfigs for Istio in the kube-prometheus-stack helm chart's values.yaml:
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - {{ add your scrape config for istio }}
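As a sketch, a scrape job that discovers pods through those prometheus.io/* annotations (along the lines of the example configuration Istio publishes) could look like the following; the job name is illustrative:

```yaml
- job_name: kubernetes-pods
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # keep only pods annotated with prometheus.io/scrape: "true"
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: "true"
    # honour a custom metrics path from prometheus.io/path
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    # honour a custom port from prometheus.io/port
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
```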
I have set up Prometheus and Grafana in Kubernetes using the stable/prometheus-operator helm chart, and I have set up the RabbitMQ exporter helm chart as well. I have to set up alerts for RabbitMQ, which is running on k8s, but I can't see any target for RabbitMQ in Prometheus. RabbitMQ is not showing up in the targets, so I can't monitor its metrics. It's critical.
The target in the RabbitMQ exporter can be set by passing arguments to the helm chart. We just have to set the URL and password of RabbitMQ in the helm chart using --set.
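For illustration, assuming the stable/prometheus-rabbitmq-exporter chart (the exact key names may differ between chart versions), the values could be set in a values file like the one below, or equivalently via --set rabbitmq.url=... on the command line:

```yaml
# Hypothetical values for the RabbitMQ exporter chart; verify the key
# names against your chart version before use.
rabbitmq:
  url: http://rabbitmq.default.svc:15672   # RabbitMQ management API endpoint
  user: guest
  password: guest
```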
I want to configure Istio in such a way that it does not use the Prometheus or Grafana which come with it by default. I want to use my existing Prometheus and Grafana, which are already deployed in the cluster. Any help will be appreciated.
You need to configure your existing Prometheus with a scrape configuration. For the Prometheus config you can use this ConfigMap. For Grafana you need to configure your Prometheus as a datasource, and you can use this ConfigMap for that. You can generate the ConfigMaps using helm template and use them.
Check this guide
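As a sketch of the Grafana side, a datasource ConfigMap pointing at an existing Prometheus might look like the following (the names, namespace, and service URL are assumptions for your cluster):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasource-prometheus
  namespace: monitoring
  labels:
    grafana_datasource: "1"   # label watched by Grafana's sidecar, if enabled
data:
  prometheus.yaml: |
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        url: http://prometheus-server.monitoring.svc:9090
        isDefault: true
```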
I am trying to monitor my Kubernetes cluster metrics using Prometheus and Grafana. Here is the link which I followed. I am facing an issue with kubernetes-service-endpoints (2/3 up) in my Prometheus dashboard.
Below is the Grafana dashboard which is used in this task.
I checked my Prometheus pod logs, and they show errors.
Could anybody suggest how to get system service metrics into the above dashboard?
(or)
Could anybody suggest a Grafana dashboard for monitoring a Kubernetes cluster using Prometheus?
Check your prometheus.yaml; it should have a static config for Prometheus itself, for example:
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets:
          - localhost:9090
curl localhost:9090/metrics and make sure you're receiving metrics as output
For the Grafana dashboard: create an org, add Prometheus as a data source (configure the Prometheus IP:PORT), then click Test and confirm that connectivity is available.
Open the dashboard's .json file, change the configs according to your requirements, then import it and check.
I already have some services in my k8s cluster and want to maintain them separately. Examples:
grafana with custom dashboards and custom dockerfile
prometheus-operator instead of basic prometheus
jaeger pointing to elasticsearch as internal storage
certmanager in my own namespace (also I use it for nginx-ingress legacy routing)
Is it possible to use existing instances instead of creating istio-specific ones? Can istio communicate with them or it's hardcoded?
Yes - it is possible to use external services with Istio. You can disable Grafana and Prometheus just by setting the proper flags in the values.yaml of the Istio helm chart (grafana.enabled=false, etc.).
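A minimal sketch of those flags in the Istio chart's values.yaml; only grafana.enabled is confirmed above, so treat the other key names as assumptions to verify against your chart version:

```yaml
grafana:
  enabled: false      # use the existing Grafana instead
prometheus:
  enabled: false      # use the existing prometheus-operator instead
tracing:
  enabled: false      # assumed key for the bundled Jaeger deployment
```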
You can check kyma-project project to see how istio is integrated with prometheus-operator, grafana deployment with custom dashboards, and custom jaeger deployment. From your list only certmanager is missing.
Kubernetes provides quite a wide variety of networking and load-balancing features out of the box. However, Istio sidecars are a good way to extend that functionality, since they are injected into Pods in order to proxy the traffic between internal Kubernetes services.
You can inject sidecars manually or automatically. If you choose the manual way, make sure to add the appropriate parameter under the Pod's annotations field:
annotations:
  sidecar.istio.io/inject: "true"
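In context, that annotation sits under the Pod's metadata; a minimal sketch (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    sidecar.istio.io/inject: "true"   # request sidecar injection for this Pod
spec:
  containers:
    - name: app
      image: my-app:latest
```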
Automatic sidecar injection requires the MutatingAdmissionWebhook admission controller, available since Kubernetes 1.9, so sidecars can also be injected as part of the Pod creation process.
Familiarize yourself with this article to shed light on using the different monitoring and traffic management tools in Istio.