stable/prometheus-operator - adding persistent grafana dashboards - grafana

I am trying to add a new dashboard to the below helm chart
https://github.com/helm/charts/tree/master/stable/prometheus-operator
The documentation is not very clear.
I have added a ConfigMap to the namespace, like the below -
apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-grafana-dashboard
  namespace: monitoring
  labels:
    grafana_dashboard: "1"
data:
  etcd-dashboard.json: |-
    {JSON}
According to the documentation, this should just be "picked" up and added, but it's not.
https://github.com/helm/charts/tree/master/stable/grafana#configuration
The sidecar option in my values.yaml looks like -
grafana:
  enabled: true
  ## Deploy default dashboards.
  ##
  defaultDashboardsEnabled: true
  adminPassword: password
  ingress:
    ## If true, Grafana Ingress will be created
    ##
    enabled: false
    ## Annotations for Grafana Ingress
    ##
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    ## Labels to be added to the Ingress
    ##
    labels: {}
    ## Hostnames.
    ## Must be provided if Ingress is enabled.
    ##
    # hosts:
    #   - grafana.domain.com
    hosts: []
    ## Path for grafana ingress
    path: /
    ## TLS configuration for grafana Ingress
    ## Secret must be manually created in the namespace
    ##
    tls: []
    # - secretName: grafana-general-tls
    #   hosts:
    #     - grafana.example.com
  #dashboardsConfigMaps:
  #sidecarProvider: sample-grafana-dashboard
  sidecar:
    dashboards:
      enabled: true
      label: grafana_dashboard
I have also tried adding this to the values.yaml:
dashboardsConfigMaps:
  - sample-grafana-dashboard
Which doesn't work either.
Does anyone have any experience with adding your own dashboards to this helm chart? I really am at my wits' end.

To sum up:
For the sidecar you need only one option set to true: grafana.sidecar.dashboards.enabled.
Install prometheus-operator with the sidecar enabled:
helm install stable/prometheus-operator --name prometheus-operator --set grafana.sidecar.dashboards.enabled=true --namespace monitoring
Add a new dashboard, for example MongoDB_Overview:
wget https://raw.githubusercontent.com/percona/grafana-dashboards/master/dashboards/MongoDB_Overview.json
kubectl -n monitoring create cm grafana-mongodb-overview --from-file=MongoDB_Overview.json
Now the tricky part: you have to set the correct label on your ConfigMap. By default grafana.sidecar.dashboards.label is set to grafana_dashboard, so:
kubectl -n monitoring label cm grafana-mongodb-overview grafana_dashboard=mongodb-overview
Now you should find your newly added dashboard in Grafana. Moreover, every ConfigMap with the label grafana_dashboard will be processed as a dashboard.
The dashboard is persisted and safe, since it is stored in a ConfigMap.
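To verify the sidecar actually picked a ConfigMap up, you can check whether the JSON file landed inside the Grafana container. A minimal sketch, assuming the chart's default sidecar folder /tmp/dashboards (controlled by grafana.sidecar.dashboards.folder) and a container named grafana; pod names vary by release:
kubectl -n monitoring get pods | grep grafana
kubectl -n monitoring exec <grafana-pod> -c grafana -- ls /tmp/dashboards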
UPDATE:
January 2021:
The prometheus-operator chart was migrated from the stable repo to the Prometheus Community Kubernetes Helm Charts, and Helm v3 was released, so:
Create namespace:
kubectl create namespace monitoring
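If the prometheus-community repo is not yet configured locally, add it first with the standard Helm repo commands:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update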
Install prometheus-operator from helm chart:
helm install prometheus-operator prometheus-community/kube-prometheus-stack --namespace monitoring
Add the MongoDB dashboard as an example:
wget https://raw.githubusercontent.com/percona/grafana-dashboards/master/dashboards/MongoDB_Overview.json
kubectl -n monitoring create cm grafana-mongodb-overview --from-file=MongoDB_Overview.json
Lastly, label the dashboard:
kubectl -n monitoring label cm grafana-mongodb-overview grafana_dashboard=mongodb-overview

You have to:
- define your dashboard JSON as a ConfigMap (as you have done, but see below for an easier way)
- define a provider, to tell Grafana where to load the dashboards from
- map the two together
From values.yaml:
dashboardsConfigMaps:
  application: application
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: application
        orgId: 1
        folder: "Application Metrics"
        type: file
        disableDeletion: true
        editable: false
        options:
          path: /var/lib/grafana/dashboards/application
Now the application ConfigMap should create files in this directory in the pod, and, as discussed, the sidecar should load them into an "Application Metrics" folder visible in the GUI.
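For reference, a minimal sketch of the ConfigMap that dashboardsConfigMaps: application points at; the name and file key are illustrative and the JSON body is elided:
apiVersion: v1
kind: ConfigMap
metadata:
  name: application
  namespace: monitoring
data:
  api01.json: |-
    {JSON}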
That probably answers your issue as written, but as long as your dashboards aren't too big, using kustomize means you can keep the JSON on disk without needing to embed it in another file, like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# May choose to enable this if need to refer to configmaps outside of kustomize
generatorOptions:
  disableNameSuffixHash: true
namespace: monitoring
configMapGenerator:
  - name: application
    files:
      - grafana-dashboards/application/api01.json
      - grafana-dashboards/application/api02.json
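The generator above can then be built and applied with kubectl's built-in kustomize support, for example:
kubectl apply -k .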
For completeness' sake, you can also load dashboards from a URL or from the Grafana site, although I don't believe mixing methods in the same folder works.
So:
dashboards:
  kafka:
    kafka01:
      url: https://raw.githubusercontent.com/kudobuilder/operators/master/repository/kafka/docs/latest/resources/grafana-dashboard.json
      folder: "KUDO Kafka"
      datasource: Prometheus
  nginx:
    nginx1:
      gnetId: 9614
      datasource: Prometheus
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: kafka
        orgId: 1
        folder: "KUDO Kafka"
        type: file
        disableDeletion: true
        editable: false
        options:
          path: /var/lib/grafana/dashboards/kafka
      - name: nginx
        orgId: 1
        folder: Nginx
        type: file
        disableDeletion: true
        editable: false
        options:
          path: /var/lib/grafana/dashboards/nginx
This creates two new folders, each containing a dashboard loaded from an external source; and if you point this at your own git repo, you de-couple your dashboard commits from your deployment.

If you do not change the settings in the helm chart, the default user/password for Grafana is:
user: admin
password: prom-operator

Related

How to add additional scrape config to Prometheus

I want to add an additional scrape config into Prometheus. I have followed the below method.
https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/additional-scrape-config.md
First, I created a file prometheus-additional.yaml and added the new config:
- job_name: "prometheus"
  static_configs:
    - targets: ["localhost:9090"]
Secondly, I generated a secret manifest out of it:
kubectl create secret generic additional-scrape-configs --from-file=prometheus-additional.yaml --dry-run -oyaml > additional-scrape-configs.yaml
Then I created the secret using the below command:
kubectl apply -f additional-scrape-configs.yaml -n monitoring
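You can confirm the secret exists and contains the expected key with:
kubectl -n monitoring get secret additional-scrape-configs -o yaml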
Then in the above link it says
"Finally, reference this additional configuration in your prometheus.yaml CRD."
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  labels:
    prometheus: prometheus
spec:
  replicas: 2
  serviceAccountName: prometheus
  serviceMonitorSelector:
    matchLabels:
      team: frontend
  additionalScrapeConfigs:
    name: additional-scrape-configs
    key: prometheus-additional.yaml
Where can I find the above? Do I need to create a new CRD? Can't I update the existing running deployment?
This part of the documentation is misleading; with the Helm chart you have to use additionalScrapeConfigsSecret:
additionalScrapeConfigsSecret:
  enabled: true
  name: additional-scrape-configs
  key: prometheus-additional.yaml
Otherwise you get the error: cannot unmarshal !!map into []yaml.MapSlice
Here is better documentation:
https://github.com/prometheus-community/helm-charts/blob/8b45bdbdabd9b54766c4beb3c562b766b268a034/charts/kube-prometheus-stack/values.yaml#L2691
According to this, you can add scrape configs without packaging them into a secret, like this:
additionalScrapeConfigs: |
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]
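As a sketch, assuming this snippet sits under prometheus.prometheusSpec in a local values file (custom-values.yaml here is an illustrative name), it can be rolled out with:
helm upgrade prometheus-operator prometheus-community/kube-prometheus-stack -n monitoring -f custom-values.yaml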
That document applies to prometheus-operator. If you have deployed it, you should have your Prometheus CRD:
kubectl get prometheus -n monitoring
Then you can edit the Prometheus resource exactly as stated above, adding the additionalScrapeConfigs key in the spec (after adding the secret).
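For example, where <prometheus-name> is a placeholder for whatever the get command above returned:
kubectl -n monitoring edit prometheus <prometheus-name>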
After editing, new scrape configs will be reloaded and applied automatically (conf-reloaders).

Kubectl rollout restart gets error: Unable to decode

When I try to restart a deployment with the following command: kubectl rollout restart -n ind-iv -f mattermost-installation.yml, it returns an error: unable to decode "mattermost-installation.yml": no kind "Mattermost" is registered for version "installation.mattermost.com/v1beta1" in scheme "k8s.io/kubectl/pkg/scheme/scheme.go:28"
The yml file looks like this:
apiVersion: installation.mattermost.com/v1beta1
kind: Mattermost
metadata:
  name: mattermost-iv # Choose the desired name
spec:
  size: 1000users # Adjust to your requirements
  useServiceLoadBalancer: true # Set to true to use AWS or Azure load balancers instead of an NGINX controller.
  ingressName: ************* # Hostname used for Ingress, e.g. example.mattermost-example.com. Required when using an Ingress controller. Ignored if useServiceLoadBalancer is true.
  ingressAnnotations:
    kubernetes.io/ingress.class: nginx
  version: 6.0.0
  licenseSecret: "" # Name of a Kubernetes secret that contains Mattermost license. Required only for enterprise installation.
  database:
    external:
      secret: db-credentials # Name of a Kubernetes secret that contains connection string to external database.
  fileStore:
    external:
      url: ********** # External File Storage URL.
      bucket: ********** # File Storage bucket name to use.
      secret: file-store-credentials
  mattermostEnv:
    - name: MM_FILESETTINGS_AMAZONS3SSE
      value: "false"
    - name: MM_FILESETTINGS_AMAZONS3SSL
      value: "false"
Does anybody have an idea?

Azure Kubernetes - prometheus is deployed as a part of ISTIO not showing the deployments?

I have used the following configuration to set up Istio:
cat << EOF | kubectl apply -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istio-control-plane
spec:
  # Use the default profile as the base
  # More details at: https://istio.io/docs/setup/additional-setup/config-profiles/
  profile: default
  # Enable the addons that we will want to use
  addonComponents:
    grafana:
      enabled: true
    prometheus:
      enabled: true
    tracing:
      enabled: true
    kiali:
      enabled: true
  values:
    global:
      # Ensure that the Istio pods are only scheduled to run on Linux nodes
      defaultNodeSelector:
        beta.kubernetes.io/os: linux
    kiali:
      dashboard:
        auth:
          strategy: anonymous
  components:
    egressGateways:
      - name: istio-egressgateway
        enabled: true
EOF
and exposed the Prometheus service as shown below:
kubectl expose service prometheus --type=LoadBalancer --name=prometheus-svc --namespace istio-system
kubectl get svc prometheus-svc -n istio-system -o json
export PROMETHEUS_URL=$(kubectl get svc prometheus-svc -n istio-system -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}"):$(kubectl get svc prometheus-svc -n istio-system -o 'jsonpath={.spec.ports[0].port}')
echo http://${PROMETHEUS_URL}
curl http://${PROMETHEUS_URL}
I have deployed an application, however I couldn't see my deployments in Prometheus.
The standard Prometheus installation by Istio does not configure your pods to send metrics to Prometheus. It just collects data from the Istio resources.
To have your pods scraped, add the following annotations to your application's deployment.yml:
apiVersion: apps/v1
kind: Deployment
[...]
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"  # determines if a pod should be scraped. Set to "true" to enable scraping (annotation values must be strings).
        prometheus.io/path: /metrics  # determines the path to scrape metrics at. Defaults to /metrics.
        prometheus.io/port: "80"      # determines the port to scrape metrics at. Defaults to 80.
[...]
By the way: The prometheus instance installed with istioctl should not be used for production. From docs:
[...] pass --set values.prometheus.enabled=true during installation. This built-in deployment of Prometheus is intended for new users to help them quickly getting started. However, it does not offer advanced customization, like persistence or authentication and as such should not be considered production ready.
You should set up your own Prometheus and configure Istio to report to it. See:
Reference: https://istio.io/latest/docs/ops/integrations/prometheus/#option-1-metrics-merging
The following yaml provided by istio can be used as reference for setup of prometheus:
https://raw.githubusercontent.com/istio/istio/release-1.7/samples/addons/prometheus.yaml
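It can be applied directly with kubectl, assuming the cluster can reach that URL:
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.7/samples/addons/prometheus.yaml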
Furthermore, if I remember correctly, installation of addons like Kiali, Prometheus, etc. with istioctl will be removed with Istio 1.8 (release date December 2020). So you might want to set up your own instances with Helm anyway.

Grafana dashboard not working with Ingress

I have installed the kube-prometheus-stack below and am getting an error when trying to access the Grafana dashboard using its own Ingress URL. I believe I am missing something silly here but am unable to find any clues. I have looked at a similar post here and others as well.
Chart: kube-prometheus-stack-9.4.5
App Version: 0.38.1
When I navigate to the https://myorg.grafanatest.com URL, I get redirected to https://myorg.grafanatest.com/login with the following message.
Changes made to grafana/values.yaml:
grafana.ini:
  server:
    # The full public facing url you use in browser, used for redirects and emails
    root_url: https://myorg.grafanatest.com
Helm command used to install the Prometheus-Grafana operator after making the above changes:
helm install pg kube-prometheus-stack/ -n monitoring
I see the below settings in the grafana.ini file inside the Grafana pod:
[analytics]
check_for_updates = true
[grafana_net]
url = https://grafana.net
[log]
mode = console
[paths]
data = /var/lib/grafana/data
logs = /var/log/grafana
plugins = /var/lib/grafana/plugins
provisioning = /etc/grafana/provisioning
[server]
root_url = https://myorg.grafanatest.com/
Posting a solution here as it's working now. I followed the steps gumelaragum mentioned above to create values.yaml, updated the values below in it, and passed that values.yaml to the helm install step. Not sure why it didn't work without enabling serve_from_sub_path, but it's OK as it's working now. Note that I didn't enable the Ingress section, since I had already created the Ingress route outside the installation process.
helm show values prometheus-com/kube-prometheus-stack > custom-values.yaml
Then install after changing the below values in custom-values.yaml. Change the namespace as needed:
helm -n monitoring install -f ./custom-values.yaml pg prometheus-com/kube-prometheus-stack
grafana:
  enabled: true
  namespaceOverride: ""
  # set pspUseAppArmor to false to fix Grafana pod Init errors
  rbac:
    pspUseAppArmor: false
  grafana.ini:
    server:
      domain: mysb.grafanasite.com
      #root_url: "%(protocol)s://%(domain)s/"
      root_url: https://mysb.grafanasite.com/grafana/
      serve_from_sub_path: true
  ## Deploy default dashboards.
  ##
  defaultDashboardsEnabled: true
  adminPassword: prom-operator
  ingress:
    ## If true, Grafana Ingress will be created
    ##
    enabled: false
    ## Annotations for Grafana Ingress
    ##
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    ## Labels to be added to the Ingress
    ##
    labels: {}
    ## Hostnames.
    ## Must be provided if Ingress is enabled.
    ##
    # hosts:
    #   - grafana.domain.com
    hosts:
      - mysb.grafanasite.com
    ## Path for grafana ingress
    path: /grafana/
I see the same values reflected in the grafana.ini file inside the Grafana container mount path (/etc/grafana/grafana.ini):
[analytics]
check_for_updates = true
[grafana_net]
url = https://grafana.net
[log]
mode = console
[paths]
data = /var/lib/grafana/data
logs = /var/log/grafana
plugins = /var/lib/grafana/plugins
provisioning = /etc/grafana/provisioning
[server]
domain = mysb.grafanasite.com
root_url = https://mysb.grafanasite.com/grafana/
serve_from_sub_path = true
You need to edit the parent chart's values.yaml.
Get the default values.yaml from the kube-prometheus-stack chart and save it to a file:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm repo update
helm show values prometheus-community/kube-prometheus-stack > values.yaml
In the values.yaml file, edit it like this:
## Using default values from https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml
##
#### The grafana section below starts around line 509 of the default values.yaml
grafana:
  enabled: true
  namespaceOverride: ""
  ## Deploy default dashboards.
  ##
  defaultDashboardsEnabled: true
  adminPassword: prom-operator
  ingress:
    ## If true, Grafana Ingress will be created
    ##
    enabled: true
    ## Annotations for Grafana Ingress
    ##
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    ## Labels to be added to the Ingress
    ##
    labels: {}
    ## Hostnames.
    ## Must be provided if Ingress is enabled.
    ##
    # hosts:
    #   - grafana.domain.com
    hosts:
      - myorg.grafanatest.com
    ## Path for grafana ingress
    path: /
Set grafana.ingress.enabled to true, and add myorg.grafanatest.com under grafana.ingress.hosts.
Apply it with:
helm -n monitoring install -f ./values.yaml kube-prometheus prometheus-community/kube-prometheus-stack
Hope this helps.
Update your grafana.ini config as shown below.
The grafana.ini can usually be found in the Grafana ConfigMap:
kubectl get cm
kubectl edit cm map_name
data:
  grafana.ini: |
    [server]
    serve_from_sub_path = true
    domain = ingress-gateway.yourdomain.com
    root_url = http://ingress-gateway.yourdomain.com/grafana/
This grafana.ini is usually stored in the ConfigMap or in YAML files, which can be edited.
Reapply or edit the rules and create a mapping in the Ingress; that should work.
Don't forget to restart your pod so that the ConfigMap changes are applied!
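For example, as a sketch, where <grafana-deployment> is a placeholder for whatever kubectl get deployments shows in your namespace:
kubectl -n monitoring rollout restart deployment <grafana-deployment>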

Unable to import grafana json file via the grafana helm chart

I am deploying the helm chart stable/grafana 4.3.0 onto a k8s cluster. I am using Helm 3. From a previous Grafana installation, I have exported the JSON of a dashboard and saved it as my-dashboard.json. I want to have Helm take care of uploading this file, so in my values.yaml I have:
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: 'default'
        orgId: 1
        folder: ''
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards
dashboards:
  default:
    my-dashboard:
      file: my-dashboard.json
    prometheus-stats:
      gnetId: 2
      revision: 2
      datasource: Prometheus
I already have my Prometheus datasource (from the prometheus helm chart) defined as
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        url: http://my-prometheus-release-server.default.svc.cluster.local
        access: proxy
        isDefault: true
And I've verified that the datasource works correctly.
If I run helm upgrade my-grafana-release stable/grafana --values values.yaml however, in the logs on the pod it repeats:
t=2020-01-17T21:33:35+0000 lvl=eror msg="failed to load dashboard from " logger=provisioning.dashboard type=file name=default file=/var/lib/grafana/dashboards/default/my-dashboard.json error=EOF
Seeing EOF makes me think the file isn't uploading. I have my-dashboard.json saved in the same folder as values.yaml, and I'm running the helm command from that folder. Do I need to store it somewhere else? I have searched all the documentation and it's not clear to me how it gets uploaded.
For anyone stumbling on this:
In case you installed Grafana with the grafana helm chart or the prometheus-operator helm chart, an easy way to add Grafana dashboards is to set sidecar.dashboards.enabled: true in your values.yaml (I recommend checking the docs for more info on this).
Then you can add dashboards with a simple ConfigMap like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-grafana-dashboard
  labels:
    grafana_dashboard: "1"
data:
  my-dashboard.json: |
    {
      "annotations": {
        "list": [
          {
    .....JSON.....
    }
I found another way to handle this.
I am using Terraform to accomplish this, and I made values.yaml a template file. This is the relevant section in values.yaml now:
dashboards:
  default:
    dashboard1:
      json: |
        ${my-dashboard-1}
    dashboard2:
      json: |
        ${my-dashboard-2}
And the templatefile block looks like this:
resource "helm_release" "grafana" {
  name       = "grafana-release"
  repository = data.helm_repository.stable.metadata[0].name
  chart      = "grafana"
  version    = "4.3.0"
  values = [
    "${templatefile(
      "${path.module}/values.yaml.tpl",
      {
        my-dashboard-1 = "${indent(8, data.template_file.my-dashboard-1.rendered)}",
        my-dashboard-2 = "${indent(8, data.template_file.my-dashboard-2.rendered)}"
      }
    )}"
  ]
}
The indent is very important!
Where did you put the my-dashboard.json file?
It should be on the same level as values.yaml.
Also check the grafana-dashboards-default ConfigMap; it should contain the dashboard.
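For example (the exact ConfigMap name depends on your release name, typically <release>-dashboards-default):
kubectl get configmap my-grafana-release-dashboards-default -o yaml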