I have a working bitnami/rabbitmq server Helm chart running on my Kubernetes cluster, but the graphs displayed in Grafana are not enough. I found the RabbitMQ-Overview dashboard, which has sufficient detail to be very useful for my bitnami/rabbitmq server, but unfortunately no data shows up on it. The issue is that I cannot see my metrics on this dashboard; can someone please suggest a workaround for my case? Please note that I am using the kube-prometheus-stack Helm chart for the Prometheus and Grafana services.
Steps I have taken to try to solve this issue:
I enabled the rabbitmq_prometheus plugin on all my RabbitMQ nodes by exec-ing into the pods and running:
rabbitmq-plugins enable rabbitmq_prometheus
Output:
:/$ rabbitmq-plugins list
Listing plugins with pattern ".*" ...
 Configured: E = explicitly enabled; e = implicitly enabled
 | Status: * = running on rabbit@rabbitmq-0.broker-server
 |/
[E ] rabbitmq_management              3.8.9
[e*] rabbitmq_management_agent        3.8.9
[  ] rabbitmq_mqtt                    3.8.9
[e*] rabbitmq_peer_discovery_common   3.8.9
[E*] rabbitmq_peer_discovery_k8s      3.8.9
[E*] rabbitmq_prometheus              3.8.9
I made sure the dashboard uses the same Prometheus data source that is configured in my Grafana.
I also tried creating a prometheus-rabbitmq-exporter to scrape the metrics from RabbitMQ and feed them into the RabbitMQ-Overview dashboard, but no data was displayed.
(screenshot: my dashboard showing no data)
Please note, while reading this documentation to solve my issue: I already have metrics from RabbitMQ displayed in Grafana, but I need the RabbitMQ-Overview dashboard, which contains more details about my server.
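A quick way to confirm that the plugin itself is serving metrics is to query its endpoint directly. This is only a rough sketch: the pod name is taken from the node name in the plugin listing above and may differ in your release; 15692 is the rabbitmq_prometheus plugin's default port.
# pod name and namespace are assumptions based on the listing above
kubectl port-forward pod/rabbitmq-0 15692:15692 &
curl -s http://localhost:15692/metrics | head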
To build on #amin's answer, the label can be added through the Helm values like so:
Before helm chart version 9.0.0
metrics:
  serviceMonitor:
    enabled: true
    additionalLabels:
      release: prometheus
After helm chart version 9.0.0
metrics:
  serviceMonitor:
    enabled: true
    labels:
      release: prometheus
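These values can also be applied to an existing release with helm upgrade. A sketch only: substitute your own release name, and the file name metrics-values.yaml is just an example for the snippet above.
helm upgrade <release-name> bitnami/rabbitmq --reuse-values -f metrics-values.yaml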
Fixed the issue after including two options from this table; I also needed to edit the ServiceMonitor to include the label release: prometheus, which is required for the target to show up in Prometheus because I am using the kube-prometheus-stack Helm chart.
helm install -f values.yml broker bitnami/rabbitmq --namespace default \
  --set nodeSelector.test=rabbit \
  --set volumePermissions.enabled=true \
  --set replicaCount=1 \
  --set metrics.enabled=true \
  --set metrics.serviceMonitor.enabled=true
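To confirm the label landed on the ServiceMonitor and that it is selected by Prometheus, something like the following can be used. A sketch only: the ServiceMonitor name broker-rabbitmq assumes the chart's default <release>-rabbitmq naming.
kubectl get servicemonitor -n default -l release=prometheus
# name is an assumption based on the release name used above
kubectl get servicemonitor broker-rabbitmq -n default -o yaml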
I hope it helps someone in the future!
I'm attempting to use a Bitnami Helm chart for Postgresql to spin up a custom Docker image that I create (I take the Bitnami Postgres Docker image and pre-populate it with my data, and then build/tag it and push it to my own Docker registry).
When I attempt to start up the chart with my own image coordinates, the service spins up, but no pods are present. I'm trying to figure out why.
I've tried running helm install with the --debug option and I notice that if I run my configuration below, only 4 resources get created (client.go:128: [debug] creating 4 resource(s)), vs 5 resources if I try to spin up the default Docker image specified in the Postgres Helm chart. Presumably the missing resource is my pod. But I'm not sure why or how to fix this.
My Chart.yaml:
apiVersion: v2
name: test-db
description: A Helm chart for Kubernetes
type: application
version: "1.0-SNAPSHOT"
appVersion: "1.0-SNAPSHOT"
dependencies:
  - name: postgresql
    version: 11.9.13
    repository: https://charts.bitnami.com/bitnami
    condition: postgresql.enabled
My values.yaml:
postgresql:
  enabled: true
  image:
    registry: myregistry.com
    repository: test-db
    tag: "1.0-SNAPSHOT"
    pullPolicy: always
    pullSecrets:
      - my-reg-secret
  service:
    type: ClusterIP
  nameOverride: test-db
I'm starting this all up by running
helm dep up
helm install mydb .
When I start up a Helm chart (helm install mychart .), is there a way to see what Helm/Kubectl is doing, beyond just passing the --debug flag? I'm trying to figure out why it's not spinning up the pod.
The issue was on this line in my values.yaml:
pullPolicy: always
The pullPolicy is case sensitive. Changing the value to Always (note capital A) fixed the issue.
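For reference, the corrected image section of the values.yaml from the question, with only the pull policy changed:
postgresql:
  image:
    registry: myregistry.com
    repository: test-db
    tag: "1.0-SNAPSHOT"
    pullPolicy: Always   # note the capital A
    pullSecrets:
      - my-reg-secret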
I'll add that this error wasn't immediately obvious, and there was no indication in the output from running the Helm command that this was the issue.
I was able to discover the problem by looking at how the statefulset got deployed (I use k9s to navigate Kubernetes resources): it showed 0/0 for the number of pods that were deployed. Describing this statefulset, I was able to see the error in capitalization in the startup logs.
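If you don't use k9s, roughly the same information can be surfaced with kubectl. A sketch only: the statefulset name assumes the default <release>-postgresql naming from the install command above.
kubectl get statefulsets
# describe shows the events where the invalid pull policy is rejected
kubectl describe statefulset mydb-postgresql
kubectl get events --sort-by=.lastTimestamp | tail -n 20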
Got an issue deploying the argo-cd Helm chart; it seems to fail the Kubernetes version check: argo-cd requires >= 1.22.0-0, but it got Kubernetes 1.20.0.
Pulumi is not using the Helm installed on my Mac and seems to have its kube-version set to 1.20.0!
Pulumi Chart ressource:
new k8s.helm.v3.Chart(
  'argo-cd',
  {
    chart: 'argo-cd',
    fetchOpts: {
      repo: 'https://argoproj.github.io/argo-helm'
    },
    namespace: 'argo',
    values: {}
  },
  {
    providers: {
      kubernetes: cluster.provider
    }
  }
);
Result:
pulumi:pulumi:Stack (my-project-prod):
error: Error: invocation of kubernetes:helm:template returned an error: failed to generate YAML for specified Helm chart: failed to create chart from template: chart requires kubeVersion: >=1.22.0-0 which is incompatible with Kubernetes v1.20.0
The chart is working as intended. >=1.22.0-0 means the chart must be rendered against a Kubernetes version of at least 1.22.0.
Check which Kubernetes version your Helm client was compiled against.
If you want to render your chart against your cluster's capabilities, use helm template --validate. That will tell Helm to pull the Kubernetes version from your cluster. Otherwise it uses the version of Kubernetes the Helm client was compiled against.
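As a concrete check with the plain Helm CLI (outside Pulumi; the chart coordinates below follow the repo used in the question and are only illustrative):
# shows the client version; its compiled-in Kubernetes version is what rendering uses by default
helm version --short
# renders the chart against the live cluster's version and capabilities instead
helm template argo-cd argo-cd --repo https://argoproj.github.io/argo-helm --namespace argo --validate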
You can also refer to this github link for more troubleshooting steps.
This is probably a very basic question. I am looking at Install Istio with Helm and Enable Envoy’s access logging.
How do I enable envoy access logging if I install istio via its helm charts?
The easiest, and probably the only, way to do this is to install Istio with the IstioOperator using Helm.
The steps are almost the same, but instead of the base chart you need to use the istio-operator chart.
First, create the istio-operator namespace:
kubectl create namespace istio-operator
then deploy the IstioOperator using Helm (assuming you have downloaded Istio and changed your current working directory to the Istio root):
helm install istio-operator manifests/charts/istio-operator -n istio-operator
Having installed the IstioOperator, you can now install Istio. This is the step where you can enable Envoy’s access logging:
kubectl apply -f - <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
namespace: istio-system
name: istiocontrolplane
spec:
profile: default
meshConfig:
accessLogFile: /dev/stdout
EOF
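Once the control plane is up and a workload with a sidecar is receiving traffic, the access log should appear on the proxy's stdout. A sketch only; substitute your own pod and namespace:
kubectl logs <your-app-pod> -n <your-namespace> -c istio-proxy | tail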
I tried enabling Envoy’s access logging with the base chart, but I could not get it to work, no matter what I did.
I have already setup prometheus-community/kube-prometheus-stack in my cluster using helm.
I need to also deploy a redis cluster in the same cluster.
How can I provide options so that the metrics of this Redis cluster go to Prometheus and can be fed into Grafana?
Some options are listed on the GitHub page.
Will it work with the configuration below?
$ helm install my-release \
  --set metrics.enabled=true \
  bitnami/redis
Do I need to do anything else?
I assume that you asking this question in the first place means the Redis metrics didn't show up in Prometheus for you.
Setting up Prometheus with the prometheus-community/kube-prometheus-stack Helm chart could look very different for you than for me, as it has a lot of configurable options.
Since the Helm chart comes with the Prometheus Operator, we have used the PodMonitor and/or ServiceMonitor CRDs, as they provide far more configuration options. Here are some docs around that:
https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md#include-servicemonitors
So basically, deploy Prometheus with prometheus.prometheusSpec.serviceMonitorSelector.matchLabels set to a label of your choice:
serviceMonitorSelector:
  matchLabels:
    monitoring-platform: core-prometheus
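For completeness, in the kube-prometheus-stack values file that fragment nests under the path mentioned above, i.e. roughly:
prometheus:
  prometheusSpec:
    serviceMonitorSelector:
      matchLabels:
        monitoring-platform: core-prometheus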
Thereafter, deploy Redis with metrics.enabled=true, metrics.serviceMonitor.enabled=true, and metrics.serviceMonitor.selector set to the same label defined in the Prometheus serviceMonitorSelector (monitoring-platform: core-prometheus in this case). Something like this:
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    selector:
      monitoring-platform: core-prometheus
This setup works for us.
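Equivalently on the command line, a sketch based on the values above (release name taken from the question):
helm install my-release bitnami/redis \
  --set metrics.enabled=true \
  --set metrics.serviceMonitor.enabled=true \
  --set metrics.serviceMonitor.selector.monitoring-platform=core-prometheus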
I'm using istioctl 1.6.8, and with the command istioctl install --set profile=demo --file istio-config.yaml I was able to deploy Istio to my cluster with Grafana and Prometheus enabled. My istio-config.yaml file looks like this:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        k8s:
          serviceAnnotations:
            service.beta.kubernetes.io/aws-load-balancer-internal: true
  values:
    grafana:
      security:
        enabled: true
I have some Grafana dashboard JSON files which I need to import into the newly installed Grafana, and for these dashboards to work I have to add some exporter details to my Prometheus scrape config.
My question:
Apart from importing the dashboards via the Grafana UI, is there any way I could do this by passing the relevant details in my istio-config.yaml? If not, can anyone suggest another approach?
(One approach that I have in my mind is to overwrite the existing resources with custom yaml using kubectl apply -f -)
Thanks In Advance
You shouldn't investigate this any further. With Istio 1.7, installing Prometheus/Kiali/Grafana with istioctl was deprecated, and it will be removed with Istio 1.8.
See: https://istio.io/latest/blog/2020/addon-rework/
In the future you will have to set up your own Prometheus/Grafana, e.g. with Helm, so I would recommend working in that direction.
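If you go the Helm route for Prometheus/Grafana, dashboard JSON files can also be provisioned without the UI by putting them in a ConfigMap that Grafana's dashboard sidecar picks up. A sketch only: it assumes Grafana was installed (e.g. via kube-prometheus-stack) with the dashboard sidecar enabled and its default grafana_dashboard label key; the names, namespace, and file below are placeholders.
# configmap name, namespace and dashboard file are examples
kubectl create configmap my-istio-dashboard -n monitoring --from-file=my-dashboard.json
kubectl label configmap my-istio-dashboard -n monitoring grafana_dashboard="1"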