How To Import Multiple Grafana Dashboards via a Helm Chart - kubernetes-helm

Is there a way to install multiple Grafana dashboards into the same folder via Helm?
I have created a ConfigMap:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-dashboards
  labels:
    grafana_dashboard: "1"
data:
  kubernetes.json: |
{{ .Files.Get "dashboards/kubernetes-cluster.json" | indent 4 }}
And I also created a dashboardProvider and dashboardConfigMap for it.
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
    - name: 'monitoring'
      orgId: 1
      folder: "monitoring"
      type: file
      disableDeletion: false
      editable: true
      options:
        path: /var/lib/grafana/dashboards/monitoring
dashboardsConfigMaps:
  monitoring: "grafana-dashboards"
However, I want to add an additional dashboard into the same monitoring folder.
I've tried importing the json via the Grafana UI, and it works just fine, but I would like to persist it in code.
So I've created a new configmap.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: persistent-volumes
  labels:
    grafana_dashboard: "1"
data:
  kubernetes.json: |
{{ .Files.Get "dashboards/persistent-volumes.json" | indent 4 }}
And I also created a new dashboardProviders section and dashboardConfigMap:
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
    - name: 'monitoring'
      orgId: 1
      folder: "monitoring"
      type: file
      disableDeletion: false
      editable: true
      options:
        path: /var/lib/grafana/dashboards/monitoring
    - name: 'pvc'
      orgId: 1
      folder: "monitoring"
      type: file
      disableDeletion: false
      editable: true
      options:
        path: /var/lib/grafana/dashboards/monitoring
dashboardsConfigMaps:
  monitoring: "grafana-dashboards"
  pvc: "persistent-volumes"
But when I log into Grafana, I see a pvc folder but no dashboard in it.
What I want is to create this new dashboard inside the monitoring folder, the same way I'm able to do in the UI.

Your config looks about right.
Have you tried changing the path for the pvc provider to path: /var/lib/grafana/dashboards/pvc, so it looks like this?
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
    - name: 'monitoring'
      orgId: 1
      folder: "monitoring"
      type: file
      disableDeletion: false
      editable: true
      options:
        path: /var/lib/grafana/dashboards/monitoring
    - name: 'pvc'
      orgId: 1
      folder: "monitoring"
      type: file
      disableDeletion: false
      editable: true
      options:
        path: /var/lib/grafana/dashboards/pvc
Instead of what you have at the moment. Each provider needs to point at its own path on disk, while the folder field is what controls which Grafana folder the dashboards end up in, so both providers can keep folder: "monitoring".

Related

Patching list in kubernetes manifest with Kustomize

I want to patch (overwrite) a list in a Kubernetes manifest with Kustomize.
I am using the patchesStrategicMerge method.
When I patch parameters which are not in a list, the patching works as expected - only the parameters addressed in patch.yaml are replaced, the rest is untouched.
When I patch a list, the whole list is replaced.
How can I replace only specific items in the list and leave the rest of the items untouched?
I found these two resources:
https://github.com/kubernetes-sigs/kustomize/issues/581
https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md
but wasn't able to derive the desired solution from them.
Example code:
orig-file.yaml
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: alertmanager-slack-config
  namespace: system-namespace
spec:
  test: test
  other: other-stuff
  receivers:
  - name: default
    slackConfigs:
    - name: slack
      username: test-user
      channel: "#alerts"
      sendResolved: true
      apiURL:
        name: slack-webhook-url
        key: address
patch.yaml:
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: alertmanager-slack-config
  namespace: system-namespace
spec:
  test: brase-yourself
  receivers:
  - name: default
    slackConfigs:
    - name: slack
      username: Karl
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- orig-file.yaml
patchesStrategicMerge:
- patch.yaml
What I get:
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: alertmanager-slack-config
  namespace: system-namespace
spec:
  other: other-stuff
  receivers:
  - name: default
    slackConfigs:
    - name: slack
      username: Karl
  test: brase-yourself
What I want:
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: alertmanager-slack-config
  namespace: system-namespace
spec:
  other: other-stuff
  receivers:
  - name: default
    slackConfigs:
    - name: slack
      username: Karl
      channel: "#alerts"
      sendResolved: true
      apiURL:
        name: slack-webhook-url
        key: address
  test: brase-yourself
What you can do is use a JSON patch instead of patchesStrategicMerge, so in your case:
cat <<EOF >./kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- orig-file.yaml
patches:
- path: patch.yaml
  target:
    group: monitoring.coreos.com
    version: v1alpha1
    kind: AlertmanagerConfig
    name: alertmanager-slack-config
EOF
And the patch:
cat <<EOF >./patch.yaml
- op: replace
  path: /spec/receivers/0/slackConfigs/0/username
  value: Karl
EOF
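If you want to check the merged result without applying it, rendering the kustomization locally is a quick sanity check (assuming a reasonably recent kubectl or the standalone kustomize binary):
$ kubectl kustomize .
# or, with the standalone binary
$ kustomize build .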

Kubernetes ConfigMap to write Node details to file

How can I use a ConfigMap to write cluster node information to a JSON file?
The command below gives me the node information:
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}'
How can I use a ConfigMap to write the above output to a text file?
You can save the output of the command to a file, then use that file (or the data inside it) to create a ConfigMap.
After creating the ConfigMap you can mount it as a file in your Deployment/Pod.
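A minimal sketch of the save-and-create part (the ConfigMap name node-info and the file name node-info.txt are only examples; the jsonpath query is the one from the question):
$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}' > node-info.txt
$ kubectl create configmap node-info --from-file=node-info.txt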
For example, here is a Deployment that mounts such a ConfigMap:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: appname
  name: appname
  namespace: development
spec:
  selector:
    matchLabels:
      app: appname
      tier: sometier
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: appname
        tier: sometier
    spec:
      containers:
      - env:
        - name: NODE_ENV
          value: development
        - name: PORT
          value: "3000"
        - name: SOME_VAR
          value: xxx
        image: someimage
        imagePullPolicy: Always
        name: appname
        volumeMounts:
        - name: your-volume-name
          mountPath: "your/path/to/store/the/file"
          readOnly: true
      volumes:
      - name: your-volume-name
        configMap:
          name: your-configmap-name
          items:
          - key: your-filename-inside-pod
            path: your-filename-inside-pod
I added the following configuration in the Deployment:
volumeMounts:
- name: your-volume-name
  mountPath: "your/path/to/store/the/file"
  readOnly: true
volumes:
- name: your-volume-name
  configMap:
    name: your-configmap-name
    items:
    - key: your-filename-inside-pod
      path: your-filename-inside-pod
To create the ConfigMap from a file:
kubectl create configmap your-configmap-name --from-file=your-file-path
Or just create ConfigMap with the output of your command:
apiVersion: v1
kind: ConfigMap
metadata:
  name: your-configmap-name
  namespace: your-namespace
data:
  your-filename-inside-pod: |
    output of command
First, save the output of the kubectl get nodes command into a JSON file:
$ exampleCommand > node-info.json
Then create a proper ConfigMap.
Here is an example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  node-info.json: |
    {
      "array": [
        1,
        2
      ],
      "boolean": true,
      "number": 123,
      "object": {
        "a": "egg",
        "b": "egg1"
      },
      "string": "Welcome"
    }
Then remember to add the following lines under the spec section of the Pod configuration file:
env:
- name: NODE_CONFIG_JSON
  valueFrom:
    configMapKeyRef:
      name: example-config
      key: node-info.json
You can also use a PodPreset.
PodPreset is an object that lets you inject information, e.g. environment variables, into Pods at creation time.
Look at the example below:
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: example
spec:
  selector:
    matchLabels:
      app: your-pod
  env:
  - name: DB_PORT
    value: "6379"
  envFrom:
  - configMapRef:
      name: etcd-env-config
But remember that you also have to add the following
env:
- name: NODE_CONFIG_JSON
  valueFrom:
    configMapKeyRef:
      name: example-config
      key: node-info.json
section to your Pod definition, matching your PodPreset and ConfigMap configuration.
You can find more information here: podpreset, pod-preset-configuration.

Import dashboard with Helm using Sidecar for dashboards

I've exported a Grafana dashboard (the output is a JSON file) and now I would like to import it when I install Grafana (all automatic, with Helm and Kubernetes).
I just read this post about how to add a datasource, which uses the sidecar setup. In short, you need to create a values.yaml with
sidecar:
  image: xuxinkun/k8s-sidecar:0.0.7
  imagePullPolicy: IfNotPresent
  datasources:
    enabled: true
    label: grafana_datasource
And a ConfigMap which matches that label
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-grafana-datasource
  labels:
    grafana_datasource: '1'
data:
  datasource.yaml: |-
    apiVersion: 1
    datasources:
    - name: Prometheus
      type: prometheus
      access: proxy
      orgId: 1
      url: http://source-prometheus-server
OK, this works, so I tried to do something similar for dashboards and updated the values.yaml
sidecar:
  image: xuxinkun/k8s-sidecar:0.0.7
  imagePullPolicy: IfNotPresent
  dashboards:
    enabled: false
    # label that the configmaps with dashboards are marked with
    label: grafana_dashboard
  datasources:
    enabled: true
    label: grafana_datasource
And the ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-grafana-dashboards
  labels:
    grafana_dashboard: '1'
data:
  custom-dashboards.json: |-
    {
      "annotations": {
        "list": [
          {
            ...
However, when I install Grafana this time and log in, there are no dashboards.
Any suggestions as to what I'm doing wrong here?
sidecar:
  image: xuxinkun/k8s-sidecar:0.0.7
  imagePullPolicy: IfNotPresent
  dashboards:
    enabled: false
    # label that the configmaps with dashboards are marked with
    label: grafana_dashboard
  datasources:
    enabled: true
    label: grafana_datasource
In the above code, dashboards.enabled should be set to true for the dashboard sidecar to pick up your dashboard ConfigMaps, as shown below.
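So the values would look something like this (only the enabled flag changes; everything else stays as you had it):
sidecar:
  image: xuxinkun/k8s-sidecar:0.0.7
  imagePullPolicy: IfNotPresent
  dashboards:
    enabled: true
    # label that the configmaps with dashboards are marked with
    label: grafana_dashboard
  datasources:
    enabled: true
    label: grafana_datasource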

Configure dashboard via values

As the title indicates, I'm trying to set up Grafana using helmfile with a default dashboard via values.
The relevant part of my helmfile is here:
releases:
...
- name: grafana
  namespace: grafana
  chart: stable/grafana
  values:
  - datasources:
      datasources.yaml:
        apiVersion: 1
        datasources:
        - name: Prometheus
          type: prometheus
          access: proxy
          url: http://prometheus-server.prometheus.svc.cluster.local
          isDefault: true
  - dashboardProviders:
      dashboardproviders.yaml:
        apiVersion: 1
        providers:
        - name: 'default'
          orgId: 1
          folder: ''
          type: file
          disableDeletion: false
          editable: true
          options:
            path: /var/lib/grafana/dashboards
  - dashboards:
      default:
        k8s:
          url: https://grafana.com/api/dashboards/8588/revisions/1/download
As far as I can understand from reading here, I need a provider and then I can refer to a dashboard by URL. However, when I do as shown above no dashboard is installed, and when I do as below
- dashboards:
    default:
      url: https://grafana.com/api/dashboards/8588/revisions/1/download
I get the following error message
Error: render error in "grafana/templates/deployment.yaml": template: grafana/templates/deployment.yaml:148:20: executing "grafana/templates/deployment.yaml" at <$value>: wrong type for value; expected map[string]interface {}; got string
Any clues about what I'm doing wrong?
I think the problem is that you're defining the datasources, dashboardProviders and dashboards as lists rather than maps, so you need to remove the hyphens, meaning that the values section becomes:
values:
  datasources:
    datasources.yaml:
      apiVersion: 1
      datasources:
      - name: Prometheus
        type: prometheus
        url: http://prometheus-prometheus-server
        access: proxy
        isDefault: true
  dashboardProviders:
    dashboardproviders.yaml:
      apiVersion: 1
      providers:
      - name: 'default'
        orgId: 1
        folder: ''
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards
  dashboards:
    default:
      k8s:
        url: https://grafana.com/api/dashboards/8588/revisions/1/download
The Grafana chart has them as maps, and using helmfile doesn't change that.
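If you want to double-check how the release renders before applying it, something along these lines should work (assuming a reasonably recent helmfile; the label selector assumes the release is named grafana as above):
$ helmfile -l name=grafana template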

Kubernetes - How to define ConfigMap built using a file in a yaml?

At present I am creating a configmap from the file config.json by executing:
kubectl create configmap jksconfig --from-file=config.json
I would like the ConfigMap to be created as part of the deployment, so I tried to do this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: jksconfig
data:
  config.json: |-
{{ .Files.Get "config.json" | indent 4 }}
But it doesn't seem to work. What should go into configmap.yaml so that the same ConfigMap is created?
---UPDATE---
When I do a helm install dry run:
# Source: mychartv2/templates/jks-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: jksconfig
data:
  config.json: |
Note: I am using minikube as my kubernetes cluster
Your config.json file should be inside your mychart/ directory, not inside mychart/templates/.
Chart Template Guide
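For reference, a rough sketch of the expected chart layout (file names other than config.json and configmap.yaml are the usual Helm defaults and are only illustrative); .Files.Get resolves paths relative to the chart root and cannot see files under templates/:
mychart/
  Chart.yaml
  values.yaml
  config.json        # read via .Files.Get "config.json"
  templates/
    configmap.yaml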
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  config.json: |-
{{ .Files.Get "config.json" | indent 4 }}
config.json
{
  "val": "key"
}
helm install --dry-run --debug mychart
[debug] Created tunnel using local port: '52091'
[debug] SERVER: "127.0.0.1:52091"
...
NAME: dining-saola
REVISION: 1
RELEASED: Fri Nov 23 15:06:17 2018
CHART: mychart-0.1.0
USER-SUPPLIED VALUES:
{}
...
---
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dining-saola-configmap
data:
  config.json: |-
    {
      "val": "key"
    }
EDIT:
But I want the values in the config.json file to be taken from values.yaml. Is that possible?
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  config.json: |-
    {
    {{- range $key, $val := .Values.json }}
    {{ $key | quote | indent 6 }}: {{ $val | quote }}
    {{- end }}
    }
values.yaml
json:
  key1: val1
  key2: val2
  key3: val3
helm install --dry-run --debug mychart
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mangy-hare-configmap
data:
  config.json: |-
    {
      "key1": "val1"
      "key2": "val2"
      "key3": "val3"
    }
Here is an example of a ConfigMap that is attached to a Deployment:
ConfigMap:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: jksconfig
data:
  config.json: |-
{{ .Files.Get "config.json" | indent 4 }}
Deployment:
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: jksapp
  labels:
    app: jksapp
spec:
  selector:
    matchLabels:
      app: jksapp
  template:
    metadata:
      labels:
        app: jksapp
    spec:
      containers:
      - name: jksapp
        image: jksapp:1.0.0
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: config # The name(key) value must match pod volumes name(key) value
          mountPath: /path/to/config.json
      volumes:
      - name: config
        configMap:
          name: jksconfig
Soln 01:
Insert your config.json file content into a named template, then use this template for the config.json key in data, then run the $ helm install command. Finally:
{{define "config"}}
{
  "a": "A",
  "b": {
    "b1": 1
  }
}
{{end}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: jksconfig
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    app: "my-app"
    heritage: "{{ .Release.Service }}"
    release: "{{ .Release.Name }}"
data:
  config.json: {{ (include "config" .) | trim | quote }}
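To check the rendered ConfigMap, a dry run along the same lines as earlier in this thread should work (the chart directory name mychart is an assumption):
helm install --dry-run --debug ./mychart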