Configure Grafana dashboard via values - Kubernetes

As the title indicates, I'm trying to set up Grafana using helmfile with a default dashboard via values.
The relevant part of my helmfile is here:
releases:
...
- name: grafana
  namespace: grafana
  chart: stable/grafana
  values:
    - datasources:
        datasources.yaml:
          apiVersion: 1
          datasources:
            - name: Prometheus
              type: prometheus
              access: proxy
              url: http://prometheus-server.prometheus.svc.cluster.local
              isDefault: true
    - dashboardProviders:
        dashboardproviders.yaml:
          apiVersion: 1
          providers:
            - name: 'default'
              orgId: 1
              folder: ''
              type: file
              disableDeletion: false
              editable: true
              options:
                path: /var/lib/grafana/dashboards
    - dashboards:
        default:
          k8s:
            url: https://grafana.com/api/dashboards/8588/revisions/1/download
As far as I can understand from reading here, I need a provider and then I can refer to a dashboard by URL. However, when I do as shown above no dashboard is installed, and when I do as below:
- dashboards:
    default:
      url: https://grafana.com/api/dashboards/8588/revisions/1/download
I get the following error message:
Error: render error in "grafana/templates/deployment.yaml": template: grafana/templates/deployment.yaml:148:20: executing "grafana/templates/deployment.yaml" at <$value>: wrong type for value; expected map[string]interface {}; got string
Any clues about what I'm doing wrong?

I think the problem is that you're defining the datasources, dashboardProviders and dashboards as lists rather than maps, so you need to remove the hyphens, meaning the values section becomes:
values:
  datasources:
    datasources.yaml:
      apiVersion: 1
      datasources:
        - name: Prometheus
          type: prometheus
          url: http://prometheus-prometheus-server
          access: proxy
          isDefault: true
  dashboardProviders:
    dashboardproviders.yaml:
      apiVersion: 1
      providers:
        - name: 'default'
          orgId: 1
          folder: ''
          type: file
          disableDeletion: false
          editable: true
          options:
            path: /var/lib/grafana/dashboards
  dashboards:
    default:
      k8s:
        url: https://grafana.com/api/dashboards/8588/revisions/1/download
The Grafana chart defines them as maps, and using helmfile doesn't change that.
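If in doubt, you can verify this yourself by comparing the chart's default values with what your helmfile actually renders. A rough sketch, assuming helm and helmfile are on your PATH and the stable repo is configured (exact commands may vary slightly by version):

# the chart's defaults define datasources, dashboardProviders and dashboards as maps
helm show values stable/grafana

# render the release locally and confirm those keys are still maps, not lists
helmfile template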

Related

How To Import Multiple Grafana Dashboards via A Helm Chart

Is there a way to install multiple Grafana dashboards into the same folder via Helm?
I have created a ConfigMap:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-dashboards
  labels:
    grafana_dashboard: "1"
data:
  kubernetes.json: |
{{ .Files.Get "dashboards/kubernetes-cluster.json" | indent 4 }}
And also created a dashboardProvider and a dashboardsConfigMaps entry for it:
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: 'monitoring'
        orgId: 1
        folder: "monitoring"
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards/monitoring
dashboardsConfigMaps:
  monitoring: "grafana-dashboards"
However, I want to add an additional dashboard into the same monitoring folder.
I've tried importing the JSON via the Grafana UI, and it works just fine, but I would like to persist it in code.
So I've created a new ConfigMap:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: persistent-volumes
  labels:
    grafana_dashboard: "1"
data:
  kubernetes.json: |
{{ .Files.Get "dashboards/persistent-volumes.json" | indent 4 }}
And also created a new dashboardProviders section and a dashboardsConfigMaps entry:
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: 'monitoring'
        orgId: 1
        folder: "monitoring"
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards/monitoring
      - name: 'pvc'
        orgId: 1
        folder: "monitoring"
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards/monitoring
dashboardsConfigMaps:
  monitoring: "grafana-dashboards"
  pvc: "persistent-volumes"
But when I log into Grafana, I see a pvc folder but no dashboard in it.
What I want to do is create this new dashboard inside the monitoring folder, the same way I'm able to in the UI.
Your config looks about right.
Have you tried changing the path for the pvc provider to path: /var/lib/grafana/dashboards/pvc, so it looks like this:
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: 'monitoring'
        orgId: 1
        folder: "monitoring"
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards/monitoring
      - name: 'pvc'
        orgId: 1
        folder: "monitoring"
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards/pvc
Instead of what you have at the moment.
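If the dashboard still doesn't show up, it can help to check what actually lands on disk inside the Grafana container. A hedged sketch, assuming the release runs as a Deployment named grafana in the grafana namespace (adjust names to your setup):

# each provider's path should contain the JSON files from the ConfigMap mapped to it
kubectl exec -n grafana deploy/grafana -- ls -R /var/lib/grafana/dashboards

Note that the provider's folder field (not its path) controls the folder name shown in the Grafana UI, so keeping folder: "monitoring" on both providers is what puts both dashboards into the same UI folder.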

Invalid configmap generation for helm chart grafana

I'm trying to create a ConfigMap using this values file. Example:
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        uid: prometheus
        url: http://prometheus-kube-prometheus-prometheus.monitoring:9090
      - name: Thanos
        type: prometheus
        uid: thanos
        url: http://thanos-query.monitoring:9090
      - name: Tempo
        type: tempo
        uid: tempo
        url: http://tempo-tempo-distributed-query-frontend.monitoring:3100
        jsonData:
          httpMethod: GET
          datasourceUid: loki
          lokiSearch:
            datasourceUid: loki
          serviceMap:
            datasourceUid: prometheus
          search:
            hide: false
          nodeGraph:
            enabled: true
      - name: Loki
        type: loki
        uid: loki
        url: http://loki.loki:3100
        jsonData:
          maxLine: 1000
          derivedFields:
            - datasourceUid: tempo
              matcherRegex: "traceID=(\\w+)"
              name: TraceID
              url: "$${__value.raw}"
...
But when the ConfigMap is created, I get the wrong contents. Example:
datasources.yaml: |
  apiVersion: 1
  datasources:
  - name: Prometheus
    type: prometheus
    uid: prometheus
    url: http://prometheus-kube-prometheus-prometheus.monitoring:9090
  - name: Thanos
    type: prometheus
    uid: thanos
    url: http://thanos-query.monitoring:9090
  - jsonData:
      datasourceUid: loki
      httpMethod: GET
      lokiSearch:
        datasourceUid: loki
      nodeGraph:
        enabled: true
      search:
        hide: false
      serviceMap:
        datasourceUid: prometheus
    name: Tempo
    type: tempo
    uid: tempo
    url: http://tempo-tempo-distributed-query-frontend.monitoring:3100
  - jsonData:
      derivedFields:
      - datasourceUid: tempo
        matcherRegex: traceID=(\w+)
        name: TraceID
        url: $${__value.raw}
      maxLine: 1000
    name: Loki
    type: loki
    uid: loki
    url: http://loki.loki:3100
I did everything according to the documentation, but after two days I still can't understand the reason for the incorrect creation.
Ultimately, what I'm trying to achieve is getting Loki search to work with Tempo.
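For what it's worth, the reordering itself is expected: Helm renders these values with toYaml, and Go's YAML marshalling sorts map keys alphabetically, which is why jsonData jumps ahead of name. YAML mappings are unordered, so Grafana parses both forms identically; if Loki search in Tempo misbehaves, the cause is more likely inside the jsonData contents than in the key order. A hedged way to inspect the rendered ConfigMap without installing, assuming the upstream grafana chart whose ConfigMap template lives at templates/configmap.yaml:

helm template grafana grafana/grafana -f values.yaml --show-only templates/configmap.yaml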

Error on Telegraf Helm Chart update: Error parsing data

I'm trying to deploy the Telegraf Helm chart on Kubernetes:
helm upgrade --install telegraf-instance -f values.yaml influxdata/telegraf
When I add the modbus input plugin with holding_registers, I get this error:
[telegraf] Error running agent: Error loading config file /etc/telegraf/telegraf.conf: Error parsing data: line 49: key `name’ is in conflict with line 2fd
My values.yaml is like below:
## Default values.yaml for Telegraf
## This is a YAML-formatted file.
## ref: https://hub.docker.com/r/library/telegraf/tags/
replicaCount: 1
image:
  repo: "telegraf"
  tag: "1.21.4"
  pullPolicy: IfNotPresent
podAnnotations: {}
podLabels: {}
imagePullSecrets: []
args: []
env:
  - name: HOSTNAME
    value: "telegraf-polling-service"
resources: {}
nodeSelector: {}
affinity: {}
tolerations: []
service:
  enabled: true
  type: ClusterIP
  annotations: {}
rbac:
  create: true
  clusterWide: false
  rules: []
serviceAccount:
  create: false
  name:
  annotations: {}
config:
  agent:
    interval: 60s
    round_interval: true
    metric_batch_size: 1000000
    metric_buffer_limit: 100000000
    collection_jitter: 0s
    flush_interval: 60s
    flush_jitter: 0s
    precision: ''
    hostname: '9825128'
    omit_hostname: false
  processors:
    - enum:
        mapping:
          field: "status"
          dest: "status_code"
          value_mappings:
            healthy: 1
            problem: 2
            critical: 3
  inputs:
    - modbus:
        name: "PS MAIN ENGINE"
        controller: 'tcp://192.168.0.101:502'
        slave_id: 1
        holding_registers:
          - name: "Coolant Level"
            byte_order: CDAB
            data_type: FLOAT32
            scale: 0.001
            address: [51410, 51411]
    - modbus:
        name: "SB MAIN ENGINE"
        controller: 'tcp://192.168.0.102:502'
        slave_id: 1
        holding_registers:
          - name: "Coolant Level"
            byte_order: CDAB
            data_type: FLOAT32
            scale: 0.001
            address: [51410, 51411]
  outputs:
    - influxdb_v2:
        token: token
        organization: organisation
        bucket: bucket
        urls:
          - "url"
metrics:
  health:
    enabled: true
    service_address: "http://:8888"
    threshold: 5000.0
  internal:
    enabled: true
    collect_memstats: false
pdb:
  create: true
  minAvailable: 1
Problem resolved by doing the following steps:
1. Deleted the config section of my values.yaml.
2. Added my telegraf.conf to the /additional_config path.
3. Added a ConfigMap to Kubernetes with the following command:
kubectl create configmap external-config --from-file=/additional_config
4. Added the following to values.yaml:
volumes:
  - name: my-config
    configMap:
      name: external-config
volumeMounts:
  - name: my-config
    mountPath: /additional_config
args:
  - "--config=/etc/telegraf/telegraf.conf"
  - "--config-directory=/additional_config"

KOWL Kafka Connect yaml configuration - has anyone managed to get it right?

I'm getting this error: {"level":"fatal","msg":"failed to load config","error":"failed to unmarshal YAML config into config struct: 1 error(s) decoding:\n\n* '' has invalid keys: connect"}
with the following YAML:
kafka:
  brokers:
    - 192.168.12.12:9092
  schemaRegistry:
    enabled: true
    urls:
      - "http://192.168.12.12:8081"
connect:
  enabled: true
  clusters:
    name: xy
    url: "http://192.168.12.12:8091"
    tls:
      enabled: false
    username: 1
    password: 1
    name: xya
    url: http://192.168.12.12:8092
Try downgrading your image back to v1.5.0; it seems a mistake landed in master recently.
You can find all the images here.
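Independent of the image version, note that clusters is plural: on builds that do support the connect block, it is expected to be a list of cluster entries, while the YAML in the question defines a single map with duplicate name and url keys. A hedged sketch of the list form, reusing the same names and URLs as above (verify against the config reference for your image tag):

connect:
  enabled: true
  clusters:
    - name: xy
      url: http://192.168.12.12:8091
      tls:
        enabled: false
      username: 1
      password: 1
    - name: xya
      url: http://192.168.12.12:8092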

Import dashboard with Helm using Sidecar for dashboards

I've exported a Grafana dashboard (the output is a JSON file) and now I would like to import it when I install Grafana (all automatic, with Helm and Kubernetes).
I just read this post about how to add a datasource, which uses the sidecar setup. In short, you need to create a values.yaml with:
sidecar:
  image: xuxinkun/k8s-sidecar:0.0.7
  imagePullPolicy: IfNotPresent
  datasources:
    enabled: true
    label: grafana_datasource
And a ConfigMap which matches that label:
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-grafana-datasource
  labels:
    grafana_datasource: '1'
data:
  datasource.yaml: |-
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        orgId: 1
        url: http://source-prometheus-server
OK, this works, so I tried to do something similar for dashboards and updated the values.yaml:
sidecar:
  image: xuxinkun/k8s-sidecar:0.0.7
  imagePullPolicy: IfNotPresent
  dashboards:
    enabled: false
    # label that the configmaps with dashboards are marked with
    label: grafana_dashboard
  datasources:
    enabled: true
    label: grafana_datasource
And the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-grafana-dashboards
  labels:
    grafana_dashboard: '1'
data:
  custom-dashboards.json: |-
    {
      "annotations": {
        "list": [
          {
...
However, when I install Grafana this time and log in, there are no dashboards.
Any suggestions as to what I'm doing wrong here?
sidecar:
  image: xuxinkun/k8s-sidecar:0.0.7
  imagePullPolicy: IfNotPresent
  dashboards:
    enabled: false
    # label that the configmaps with dashboards are marked with
    label: grafana_dashboard
  datasources:
    enabled: true
    label: grafana_datasource
In the above code, dashboards.enabled should be set to true to get the dashboards sidecar enabled.
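In other words, the dashboards sidecar was never running, so the ConfigMap was never picked up. A corrected sketch of the values block from the question:

sidecar:
  image: xuxinkun/k8s-sidecar:0.0.7
  imagePullPolicy: IfNotPresent
  dashboards:
    enabled: true
    # label that the configmaps with dashboards are marked with
    label: grafana_dashboard
  datasources:
    enabled: true
    label: grafana_datasource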