I have already set up prometheus-community/kube-prometheus-stack in my cluster using Helm.
I also need to deploy a Redis cluster in the same Kubernetes cluster.
How can I provide options so that the metrics of this Redis cluster go to Prometheus and are fed to Grafana?
Some options are listed on the chart's GitHub page.
Will it work with the configuration below?
$ helm install my-release \
  --set metrics.enabled=true \
  bitnami/redis
Do I need to do anything else?
I would assume that the fact you are asking this question means the Redis metrics didn't show up in Prometheus for you.
Setting up Prometheus using the "prometheus-community/kube-prometheus-stack" Helm chart could look very different for you than for me, as it has a lot of configurable options.
As the Helm chart comes with the Prometheus Operator, we have used the PodMonitor and/or ServiceMonitor CRDs, as they provide far more configuration options. Here are some docs around that:
https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md#include-servicemonitors
So basically, deploy Prometheus with "prometheus.prometheusSpec.serviceMonitorSelector.matchLabels" set to a label of your choice:
serviceMonitorSelector:
  matchLabels:
    monitoring-platform: core-prometheus
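In the kube-prometheus-stack values.yaml this block sits under prometheus.prometheusSpec, so as a sketch the full path looks roughly like this (the label is just the one chosen above):

prometheus:
  prometheusSpec:
    serviceMonitorSelector:
      matchLabels:
        monitoring-platform: core-prometheus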
and thereafter deploy Redis with "metrics.enabled=true", "metrics.serviceMonitor.enabled=true" and "metrics.serviceMonitor.selector" set to a value matching the label defined in the Prometheus serviceMonitorSelector (monitoring-platform: core-prometheus in this case). Something like this:
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    selector:
      monitoring-platform: core-prometheus
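For reference, a rough sketch of the equivalent install commands; the release names and the values file name are illustrative, not something taken from the charts themselves:

# Prometheus stack, with the serviceMonitorSelector values shown above
helm install monitoring prometheus-community/kube-prometheus-stack -f prometheus-values.yaml

# Redis, exposing metrics plus a ServiceMonitor carrying the matching label
helm install my-release bitnami/redis \
  --set metrics.enabled=true \
  --set metrics.serviceMonitor.enabled=true \
  --set metrics.serviceMonitor.selector.monitoring-platform=core-prometheus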
This setup works for us.
[screenshot]
I'd like to do all k8s installation, configuration, and maintenance using Helm v3 (v3.7.2).
Thus I have set up YAML templates for:
deployment
configmap
service
ingress
Yet I can't find any information in the Helm v3 docs on setting up an HPA (HorizontalPodAutoscaler). Can this be done using an hpa.yaml that pulls from values.yaml?
Yes. For example, running helm create nginx will create a template project called "nginx", and inside the "nginx" directory you will find a templates/hpa.yaml example. Inside values.yaml, the autoscaling section is what controls the HPA resource:
autoscaling:
  enabled: false # <-- change to true to create HPA
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80
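For reference, here is a trimmed-down sketch of the kind of templates/hpa.yaml that helm create scaffolds and that reads these values; the exact generated file and the HPA API version differ between Helm versions, and the nginx.fullname helper comes from the generated _helpers.tpl:

{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "nginx.fullname" . }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "nginx.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    {{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
    {{- end }}
{{- end }}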
Can this be done using an hpa.yaml that pulls from values.yaml?
Yes. HPA is a native Kubernetes resource that you can template out just like you have done for your other resources. Helm is both a package management system and a templating tool, but it is unlikely its docs contain specific examples for all Kubernetes API objects.
You can see many examples of HPA templates in the Bitnami Helm Charts. For example, apache has an hpa.yaml that is templated-out if .Values.autoscaling.enabled.
I have a working bitnami/rabbitmq server deployed via Helm on my Kubernetes cluster, but the graphs displayed in Grafana are not enough. I found the RabbitMQ-Overview dashboard, which has details that would be very useful for my bitnami/rabbitmq server, but unfortunately no data shows up on that dashboard. The issue is that I cannot see my metrics on this dashboard; can someone please suggest a workaround for my case? Please note I am using the kube-prometheus-stack Helm chart for the Prometheus and Grafana services.
Steps I have taken to try to solve this issue:
I enabled the rabbitmq_prometheus plugin for all my RabbitMQ nodes by entering the pods (see the kubectl exec sketch after the output below):
rabbitmq-plugins enable rabbitmq_prometheus
Output:
:/$ rabbitmq-plugins list
Listing plugins with pattern ".*" ...
Configured: E = explicitly enabled; e = implicitly enabled
| Status: * = running on rabbit@rabbitmq-0.broker-server
|/
[E] rabbitmq_management 3.8.9
[e*] rabbitmq_management_agent 3.8.9
[ ] rabbitmq_mqtt 3.8.9
[e*] rabbitmq_peer_discovery_common 3.8.9
[E*] rabbitmq_peer_discovery_k8s 3.8.9
[E*] rabbitmq_prometheus 3.8.9
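Since the plugin has to be enabled inside each broker pod, here is a hedged sketch of doing it with kubectl; the pod name and namespace are illustrative and depend on your release:

# repeat for every RabbitMQ pod in the StatefulSet
kubectl exec -n default rabbitmq-0 -- rabbitmq-plugins enable rabbitmq_prometheus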
Made sure I was using the same Prometheus data source that is configured in my Grafana.
I tried creating a prometheus-rabbitmq-exporter to get the metrics from RabbitMQ and send them to the RabbitMQ-Overview dashboard, but no data was displayed.
[screenshot: my dashboard showing no data]
Please note that while reading this documentation to solve my issue, I already had metrics from RabbitMQ displayed in Grafana, but I need the RabbitMQ-Overview dashboard, which contains more details about my server.
To build on @amin's answer, the label can be added via the Helm chart values like so:
Before helm chart version 9.0.0
metrics:
  serviceMonitor:
    enabled: true
    additionalLabels:
      release: prometheus
After helm chart version 9.0.0
metrics:
  serviceMonitor:
    enabled: true
    labels:
      release: prometheus
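If you prefer the command line over a values file, here is a sketch for chart versions >= 9.0.0 (use additionalLabels instead of labels for older versions; the release name is illustrative):

helm upgrade --install broker bitnami/rabbitmq \
  --set metrics.enabled=true \
  --set metrics.serviceMonitor.enabled=true \
  --set metrics.serviceMonitor.labels.release=prometheus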
Fixed the issue after including two options from this table, and I also needed to edit the ServiceMonitor to include the label release: prometheus, since that label is required for the target to show up in the Prometheus targets when using the kube-prometheus-stack Helm chart.
helm install -f values.yml broker bitnami/rabbitmq --namespace default --set nodeSelector.test=rabbit --set volumePermissions.enabled=true --set replicaCount=1 --set metrics.enabled=true --set metrics.serviceMonitor.enabled=true
I hope it helps someone in the future!
I have 3 Kubernetes clusters (prod, test, monitoring). I am new to Prometheus, so I have tested it by installing it in my test environment with the Helm chart:
# https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
helm install [RELEASE_NAME] prometheus-community/kube-prometheus-stack
But if I want to have metrics from the prod and test clusters, I have to repeat the same Helm installation, and each "kube-prometheus-stack" would be standalone in its own cluster. That is not ideal at all. I am trying to find a way to have a single Prometheus/Grafana which would federate/aggregate the metrics from each cluster's Prometheus server.
I found this link about Prometheus federation:
https://prometheus.io/docs/prometheus/latest/federation/
If I install the "kube-prometheus-stack" Helm chart and get rid of Grafana on the 2 other clusters, how can I make the 3rd "kube-prometheus-stack", on the 3rd cluster, scrape metrics from the 2 other ones?
thanks
You have to modify the Prometheus configuration to add a federation scrape job so it can scrape metrics from the other clusters, as described in the documentation:
scrape_configs:
  - job_name: 'federate'
    scrape_interval: 15s
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="prometheus"}'
        - '{__name__=~"job:.*"}'
    static_configs:
      - targets:
        - 'source-prometheus-1:9090'
        - 'source-prometheus-2:9090'
        - 'source-prometheus-3:9090'
The params field selects which series to pull. In this particular example, it will scrape any series with the label job="prometheus" or a metric name starting with job: from the Prometheus servers at source-prometheus-{1,2,3}:9090.
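With kube-prometheus-stack you don't edit prometheus.yml by hand; one way to inject such a job is the chart's prometheus.prometheusSpec.additionalScrapeConfigs value. A hedged sketch, with illustrative target addresses for the prod and test clusters:

prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: 'federate'
        scrape_interval: 15s
        honor_labels: true
        metrics_path: '/federate'
        params:
          'match[]':
            - '{job="prometheus"}'
            - '{__name__=~"job:.*"}'
        static_configs:
          - targets:
            - 'prod-prometheus.example.com:9090'
            - 'test-prometheus.example.com:9090'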
You can check the following articles to get more insight into Prometheus federation:
Monitoring Kubernetes with Prometheus - outside the cluster!
Prometheus federation in Kubernetes
Monitoring multiple federated clusters with Prometheus - the secure way
Monitoring a Multi-Cluster Environment Using Prometheus Federation and Grafana
You have a few options here:
Option 1:
You can achieve this by running vmagent or grafana-agent in the prod and test clusters and configuring remote write on them to point to your monitoring cluster.
But in this case you will need to install kube-state-metrics and node-exporter separately into the prod and test clusters.
Also, it's important to add an extra label with the cluster name (or any other unique identifier) before sending metrics via remote write, to make sure that the recording rules from "kube-prometheus-stack" keep working correctly. A sketch of such a configuration follows the diagram below.
[architecture diagram]
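As an illustration, this is roughly what the remote-write side looks like in a Prometheus-compatible agent configuration; vmagent and grafana-agent accept equivalent settings, and the URL and cluster name are placeholders:

global:
  external_labels:
    cluster: prod   # unique identifier per source cluster
remote_write:
  - url: "https://monitoring.example.com/api/v1/write"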
Option 2:
You can install the victoria-metrics-k8s-stack chart. It has similar functionality to kube-prometheus-stack: it also installs a bunch of components, recording rules, and dashboards.
In this case you install victoria-metrics-k8s-stack in every cluster, but with different values.
For the monitoring cluster you can use the default values, with
grafana:
  sidecar:
    dashboards:
      multicluster: true
and a properly configured ingress for vmsingle.
For the prod and test clusters you need to disable a bunch of components:
defaultRules:
  create: false
vmsingle:
  enabled: false
alertmanager:
  enabled: false
vmalert:
  enabled: false
vmagent:
  spec:
    remoteWrite:
      - url: "<vmsingle-ingress>/api/v1/write"
    externalLabels:
      cluster: <cluster-name>
grafana:
  enabled: false
  defaultDashboardsEnabled: false
In this case the chart will deploy vmagent, kube-state-metrics, node-exporter, and the scrape configurations for vmagent.
[architecture diagram]
You could try looking at Wavefront. It's a commercial tool now, but you can get a free 30-day trial; also, it understands PromQL. So essentially, you could use the same Prometheus rules and config across all clusters, and then use Wavefront to just connect to all of those Prometheus instances.
Another option may be Thanos, but I've never used it personally.
I'm using istioctl 1.6.8, and with the help of the command istioctl install --set profile=demo --file istio-config.yaml I was able to deploy Istio to my cluster with Grafana and Prometheus enabled. My istio-config.yaml file looks like this:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        k8s:
          serviceAnnotations:
            service.beta.kubernetes.io/aws-load-balancer-internal: true
  values:
    grafana:
      security:
        enabled: true
I have some Grafana dashboard JSON files which I need to import into the newly installed Grafana, and for these dashboards to work I have to add some exporter details to my Prometheus scrape config.
My question:
Apart from importing the dashboards via the Grafana UI, is there any way I could do this by passing the relevant details in my istio-config.yaml? If not, can anyone suggest another approach?
(One approach I have in mind is to overwrite the existing resources with custom YAML using kubectl apply -f -.)
Thanks In Advance
You shouldn't spend any more time on this. With Istio 1.7, the Prometheus/Kiali/Grafana installation via istioctl was deprecated, and it will be removed in Istio 1.8.
See: https://istio.io/latest/blog/2020/addon-rework/
In the future you will have to set up your own Prometheus/Grafana anyway, e.g. with Helm, so I would recommend working in that direction.
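If you go the Helm route (e.g. kube-prometheus-stack), one common way to load dashboard JSON files without the UI is Grafana's dashboard sidecar, which picks up labelled ConfigMaps. A hedged sketch; the names are illustrative and the label/namespace conventions depend on your chart values:

apiVersion: v1
kind: ConfigMap
metadata:
  name: istio-custom-dashboard
  namespace: monitoring
  labels:
    grafana_dashboard: "1"   # label the sidecar watches for by default
data:
  istio-custom-dashboard.json: |
    { "title": "Istio custom dashboard", "panels": [] }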
I want to monitor a couple of applications running on a Kubernetes cluster, in namespaces named development and production, through prometheus-operator.
The installation command used (as per GitHub) is:
helm install prometheus-operator stable/prometheus-operator -n production --set prometheusOperator.enabled=true,prometheus.service.type=NodePort,prometheusOperator.service.type=NodePort,alertmanager.service.type=NodePort,grafana.service.type=NodePort,grafana.service.nodePort=30906
What parameters do I need to add to the above command to have prometheus-operator discover and monitor all apps/services/pods running in all namespaces?
With this, Service Discovery only shows some prometheus-operator-related services, but not the app that I am running in the 'production' namespace, even though prometheus-operator is installed in the same namespace.
Anything I am missing?
Note: I am performing all actions using the same user (which uses the $HOME/.kube/config file), so I assume permissions are not an issue.
kubectl version - v1.17.3
helm version - 3.1.2
P.S. There are numerous articles on this on different forums, but I am still not finding a simple and direct answer.
I had the same problem. After some investigation, I am answering with more details.
I've installed the Prometheus stack via the Helm charts, which include the Prometheus Operator chart directly as a sub-chart. The Prometheus Operator monitors the namespaces specified by the following Helm values:
prometheusOperator:
  namespaces: ''
  denyNamespaces: ''
  prometheusInstanceNamespaces: ''
  alertmanagerInstanceNamespaces: ''
  thanosRulerInstanceNamespaces: ''
The namespaces value specifies the monitored namespaces for the ServiceMonitor and PodMonitor CRDs. The other CRDs have their own settings, which, if not set, default to namespaces. The Helm values are passed as command-line arguments to the operator. See here and here.
Prometheus CRDs are picked up by the operator from the mentioned namespaces, by default everywhere. However, as the operator is designed with multiple simultaneous Prometheus releases in mind, what a particular Prometheus instance picks up is controlled by the corresponding Prometheus CRD. The CRD selectors and the corresponding namespace selectors are controlled via the following Helm values:
prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: true
    serviceMonitorSelector: {}
    serviceMonitorNamespaceSelector: {}
Similar values are present for the other CRDs: alertmanagerConfigXXX, ruleNamespaceXXX, podMonitorXXX, probeXXX. XXXSelectorNilUsesHelmValues set to true means looking only for CRDs with a particular release label, e.g. release=myrelease. See here.
An empty selector (for a namespace, CRD, or any other object) means no filtering. So for the Prometheus object to pick up a ServiceMonitor from the other namespaces there are a few options:
Set serviceMonitorSelectorNilUsesHelmValues: false. This leaves serviceMonitorSelector empty.
Apply the release label, e.g. release=myrelease, to your ServiceMonitor CRD (see the sketch after this list).
Set a non-empty serviceMonitorSelector that matches your ServiceMonitor.
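For example, option 2 would look roughly like this; the app label, port name and interval are illustrative, and the release value must match your Helm release (prometheus-operator for the install command in the question, myrelease in the example above):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: production
  labels:
    release: prometheus-operator   # the release label the chart-managed Prometheus selects on
spec:
  selector:
    matchLabels:
      app: my-app
  namespaceSelector:
    matchNames:
      - production
      - development
  endpoints:
    - port: http-metrics
      interval: 30s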
For the curious, here are links to the operator sources:
Enqueue of Prometheus CRD processing
Processing of Prometheus CRD
I used the values.yaml from https://github.com/helm/charts/blob/master/stable/prometheus-operator/values.yaml, modified the *NilUsesHelmValues parameters to false, and it seems to work fine with that.
helm install prometheus-operator stable/prometheus-operator -n monitoring -f values.yaml
Also, as https://stackoverflow.com/users/7889479/anish-kumar-mourya stated, the services do show up in the Grafana dashboard even though they don't appear in the Prometheus UI under Service Discovery or Targets.
Hope this helps other newbies like me.
No, it's fine, but you could create a new namespace for monitoring and install Prometheus there; it would be good for managing everything related to monitoring.
helm install prometheus-operator stable/prometheus-operator -n monitoring
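If the monitoring namespace does not exist yet, create it first, for example:

kubectl create namespace monitoring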
You need to create a Service for the pod and a ServiceMonitor custom resource to configure which services in which namespaces need to be discovered by Prometheus.
kube-state-metrics Service example
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kube-state-metrics
    k8s-app: kube-state-metrics
  annotations:
    alpha.monitoring.coreos.com/non-namespaced: "true"
  name: kube-state-metrics
spec:
  ports:
    - name: http-metrics
      port: 8080
      targetPort: metrics
      protocol: TCP
  selector:
    app: kube-state-metrics
This Service targets all Pods with the label k8s-app: kube-state-metrics.
Generic ServiceMonitor example
This ServiceMonitor targets all Services that have the label k8s-app (spec.selector), with any value, in the namespaces kube-system and monitoring (spec.namespaceSelector).
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: k8s-apps-http
  labels:
    k8s-apps: http
spec:
  jobLabel: k8s-app
  selector:
    matchExpressions:
      - {key: k8s-app, operator: Exists}
  namespaceSelector:
    matchNames:
      - kube-system
      - monitoring
  endpoints:
    - port: http-metrics
      interval: 15s
https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/running-exporters.md