I've exported a Grafana Dashboard (output is a json file) and now I would like to import it when I install Grafana (all automatic, with Helm and Kubernetes)
I just read this post about how to add a datasource using the sidecar setup. In short, you need to create a values.yaml with
sidecar:
  image: xuxinkun/k8s-sidecar:0.0.7
  imagePullPolicy: IfNotPresent
  datasources:
    enabled: true
    label: grafana_datasource
And a ConfigMap which matches that label
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-grafana-datasource
  labels:
    grafana_datasource: '1'
data:
  datasource.yaml: |-
    apiVersion: 1
    datasources:
    - name: Prometheus
      type: prometheus
      access: proxy
      orgId: 1
      url: http://source-prometheus-server
OK, this works, so I tried to do something similar for dashboards and updated the values.yaml:
sidecar:
  image: xuxinkun/k8s-sidecar:0.0.7
  imagePullPolicy: IfNotPresent
  dashboards:
    enabled: false
    # label that the configmaps with dashboards are marked with
    label: grafana_dashboard
  datasources:
    enabled: true
    label: grafana_datasource
And the ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-grafana-dashboards
  labels:
    grafana_dashboard: '1'
data:
  custom-dashboards.json: |-
    {
      "annotations": {
        "list": [
          {
          ...
However, when I install Grafana this time and log in, there are no dashboards.
Any suggestions as to what I'm doing wrong here?
sidecar:
  image: xuxinkun/k8s-sidecar:0.0.7
  imagePullPolicy: IfNotPresent
  dashboards:
    enabled: false
    # label that the configmaps with dashboards are marked with
    label: grafana_dashboard
  datasources:
    enabled: true
    label: grafana_datasource
In the above code, dashboards.enabled should be set to true for dashboards to be picked up.
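For completeness, a corrected values.yaml with the dashboards sidecar switched on (same image and labels as in the question) would look like this:

```yaml
sidecar:
  image: xuxinkun/k8s-sidecar:0.0.7
  imagePullPolicy: IfNotPresent
  dashboards:
    enabled: true   # was false; the sidecar only watches for dashboard ConfigMaps when enabled
    label: grafana_dashboard
  datasources:
    enabled: true
    label: grafana_datasource
```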
Related
Is there a way to install multiple Grafana dashboards into the same folder via Helm?
I have created a configMap
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-dashboards
  labels:
    grafana_dashboard: "1"
data:
  kubernetes.json: |
{{ .Files.Get "dashboards/kubernetes-cluster.json" | indent 4 }}
And also created a dashboardProvider and dashboardConfigMap for it.
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
    - name: 'monitoring'
      orgId: 1
      folder: "monitoring"
      type: file
      disableDeletion: false
      editable: true
      options:
        path: /var/lib/grafana/dashboards/monitoring
dashboardsConfigMaps:
  monitoring: "grafana-dashboards"
However, I want to add an additional dashboard into the same monitoring folder.
I've tried importing the json via the Grafana UI, and it works just fine, but I would like to persist it in code.
So I've created a new configmap.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: persistent-volumes
  labels:
    grafana_dashboard: "1"
data:
  kubernetes.json: |
{{ .Files.Get "dashboards/persistent-volumes.json" | indent 4 }}
And also created a new dashboardProviders section and dashboardConfigMap.
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
    - name: 'monitoring'
      orgId: 1
      folder: "monitoring"
      type: file
      disableDeletion: false
      editable: true
      options:
        path: /var/lib/grafana/dashboards/monitoring
    - name: 'pvc'
      orgId: 1
      folder: "monitoring"
      type: file
      disableDeletion: false
      editable: true
      options:
        path: /var/lib/grafana/dashboards/monitoring
dashboardsConfigMaps:
  monitoring: "grafana-dashboards"
  pvc: "persistent-volumes"
But when I log into Grafana, I see a pvc folder but no dashboard in it.
What I want is to create this new dashboard inside the monitoring folder, the same way I'm able to in the UI.
Your config looks about right.
Have you tried changing the path for the pvc provider to path: /var/lib/grafana/dashboards/pvc?
So it looks like this:
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
    - name: 'monitoring'
      orgId: 1
      folder: "monitoring"
      type: file
      disableDeletion: false
      editable: true
      options:
        path: /var/lib/grafana/dashboards/monitoring
    - name: 'pvc'
      orgId: 1
      folder: "monitoring"
      type: file
      disableDeletion: false
      editable: true
      options:
        path: /var/lib/grafana/dashboards/pvc
Instead of what you have at the moment.
How can I use ConfigMap to write cluster node information to a JSON file?
The below gives me Node information :
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}'
How can I use Configmap to write the above output to a text file?
You can save the output of the command in any file, then use the file (or the data inside it) to create a ConfigMap.
After creating the ConfigMap you can mount it as a file in your deployment/pod.
For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: appname
  name: appname
  namespace: development
spec:
  selector:
    matchLabels:
      app: appname
      tier: sometier
  template:
    metadata:
      labels:
        app: appname
        tier: sometier
    spec:
      containers:
      - env:
        - name: NODE_ENV
          value: development
        - name: PORT
          value: "3000"
        - name: SOME_VAR
          value: xxx
        image: someimage
        imagePullPolicy: Always
        name: appname
        volumeMounts:
        - name: your-volume-name
          mountPath: "your/path/to/store/the/file"
          readOnly: true
      volumes:
      - name: your-volume-name
        configMap:
          name: your-configmap-name
          items:
          - key: your-filename-inside-pod
            path: your-filename-inside-pod
I added the following configuration in deployment:
volumeMounts:
- name: your-volume-name
  mountPath: "your/path/to/store/the/file"
  readOnly: true
volumes:
- name: your-volume-name
  configMap:
    name: your-configmap-name
    items:
    - key: your-filename-inside-pod
      path: your-filename-inside-pod
To create ConfigMap from file:
kubectl create configmap your-configmap-name --from-file=your-file-path
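Concretely, with the jsonpath query from the question, the two steps might look like this (the ConfigMap name node-info is a hypothetical choice, and the commands assume a reachable cluster):

```shell
# Save the node hostnames to a file, then build a ConfigMap from it.
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}' > node-info.json
kubectl create configmap node-info --from-file=node-info.json
```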
Or just create ConfigMap with the output of your command:
apiVersion: v1
kind: ConfigMap
metadata:
  name: your-configmap-name
  namespace: your-namespace
data:
  your-filename-inside-pod: |
    output of command
First, save the output of the kubectl get nodes command into a JSON file:
$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}' > node-info.json
Then create a proper ConfigMap.
Here is an example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  node-info.json: |
    {
      "array": [
        1,
        2
      ],
      "boolean": true,
      "number": 123,
      "object": {
        "a": "egg",
        "b": "egg1"
      },
      "string": "Welcome"
    }
Then remember to add the following lines below the spec section in the pod configuration file:
env:
- name: NODE_CONFIG_JSON
  valueFrom:
    configMapKeyRef:
      name: example-config
      key: node-info.json
You can also use a PodPreset.
PodPreset is an object that enables you to inject information, e.g. environment variables, into pods at creation time.
Look at the example below:
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: example
spec:
  selector:
    matchLabels:
      app: your-pod
  env:
  - name: DB_PORT
    value: "6379"
  envFrom:
  - configMapRef:
      name: etcd-env-config
but remember that you have to also add:
env:
- name: NODE_CONFIG_JSON
  valueFrom:
    configMapKeyRef:
      name: example-config
      key: node-info.json
section to your pod definition, matching your PodPreset and ConfigMap configuration.
You can find more information here: podpreset, pod-preset-configuration.
As the title indicates, I'm trying to set up grafana using helmfile with a default dashboard via values.
The relevant part of my helmfile is here
releases:
...
- name: grafana
  namespace: grafana
  chart: stable/grafana
  values:
  - datasources:
      datasources.yaml:
        apiVersion: 1
        datasources:
        - name: Prometheus
          type: prometheus
          access: proxy
          url: http://prometheus-server.prometheus.svc.cluster.local
          isDefault: true
  - dashboardProviders:
      dashboardproviders.yaml:
        apiVersion: 1
        providers:
        - name: 'default'
          orgId: 1
          folder: ''
          type: file
          disableDeletion: false
          editable: true
          options:
            path: /var/lib/grafana/dashboards
  - dashboards:
      default:
        k8s:
          url: https://grafana.com/api/dashboards/8588/revisions/1/download
As far as I can understand from reading here, I need a provider and then I can refer to a dashboard by URL. However, when I do as shown above no dashboard is installed, and when I do as below
- dashboards:
    default:
      url: https://grafana.com/api/dashboards/8588/revisions/1/download
I get the following error message
Error: render error in "grafana/templates/deployment.yaml": template: grafana/templates/deployment.yaml:148:20: executing "grafana/templates/deployment.yaml" at <$value>: wrong type for value; expected map[string]interface {}; got string
Any clues about what I'm doing wrong?
I think the problem is that you're defining datasources, dashboardProviders, and dashboards as lists rather than maps, so you need to remove the hyphens. The values section then becomes:
values:
  datasources:
    datasources.yaml:
      apiVersion: 1
      datasources:
      - name: Prometheus
        type: prometheus
        url: http://prometheus-prometheus-server
        access: proxy
        isDefault: true
  dashboardProviders:
    dashboardproviders.yaml:
      apiVersion: 1
      providers:
      - name: 'default'
        orgId: 1
        folder: ''
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards
  dashboards:
    default:
      k8s:
        url: https://grafana.com/api/dashboards/8588/revisions/1/download
The grafana chart has them as maps, and using helmfile doesn't change that.
I have created a Docker registry as a pod with a service, and login, push, and pull all work. But when I create a pod that uses an image from this registry, the kubelet can't pull the image.
My registry pod:
apiVersion: v1
kind: Pod
metadata:
  name: registry-docker
  labels:
    registry: docker
spec:
  containers:
  - name: registry-docker
    image: registry:2
    volumeMounts:
    - mountPath: /opt/registry/data
      name: data
    - mountPath: /opt/registry/auth
      name: auth
    ports:
    - containerPort: 5000
    env:
    - name: REGISTRY_AUTH
      value: htpasswd
    - name: REGISTRY_AUTH_HTPASSWD_PATH
      value: /opt/registry/auth/htpasswd
    - name: REGISTRY_AUTH_HTPASSWD_REALM
      value: Registry Realm
  volumes:
  - name: data
    hostPath:
      path: /opt/registry/data
  - name: auth
    hostPath:
      path: /opt/registry/auth
The pod I would like to create from the registry:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: 10.96.81.252:5000/nginx:latest
  imagePullSecrets:
  - name: registrypullsecret
The error I get in my registry logs:
time="2018-08-09T07:17:21Z" level=warning msg="error authorizing
context: basic authentication challenge for realm \"Registry Realm\":
invalid authorization credential" go.version=go1.7.6
http.request.host="10.96.81.252:5000"
http.request.id=655f76a6-ef05-4cdc-a677-d10f70ed557e
http.request.method=GET http.request.remoteaddr="10.40.0.0:59088"
http.request.uri="/v2/" http.request.useragent="docker/18.06.0-ce
go/go1.10.3 git-commit/0ffa825 kernel/4.4.0-130-generic os/linux
arch/amd64 UpstreamClient(Go-http-client/1.1)"
instance.id=ec01566d-5397-4c90-aaac-f56d857d9ae4 version=v2.6.2
10.40.0.0 - - [09/Aug/2018:07:17:21 +0000] "GET /v2/ HTTP/1.1" 401 87 "" "docker/18.06.0-ce go/go1.10.3 git-commit/0ffa825
kernel/4.4.0-130-generic os/linux arch/amd64
UpstreamClient(Go-http-client/1.1)"
The secret I use, created from cat ~/.docker/config.json | base64:
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
data:
  .dockerconfigjson: ewoJImF1dGhzIjogewoJCSJsb2NhbGhvc3Q6NTAwMCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZaRzlqYTJWeU1USXoiCgkJfQoJfSwKCSJIdHRwSGVhZGVycyI6IHsKCQkiVXNlci1BZ2VudCI6ICJEb2NrZXItQ2xpZW50LzE4LjA2$
type: kubernetes.io/dockerconfigjson
The modification I have made to my default serviceaccount:
cat ./sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2018-08-03T09:49:47Z
  name: default
  namespace: default
  # resourceVersion: "51625"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: 8eecb592-9702-11e8-af15-02f6928eb0b4
secrets:
- name: default-token-rfqfp
imagePullSecrets:
- name: registrypullsecret
The file ~/.docker/config.json:
{
  "auths": {
    "localhost:5000": {
      "auth": "YWRtaW46ZG9ja2VyMTIz"
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/18.06.0-ce (linux)"
  }
}
The auths data has login credentials for "localhost:5000", but your image is at "10.96.81.252:5000/nginx:latest" — the config must contain an entry for the exact registry address the image is pulled from.
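You can confirm what the secret actually carries by decoding the auth value from the question's config (it is plain base64, not encryption):

```shell
# Decode the credentials embedded in the question's config.json entry.
printf 'YWRtaW46ZG9ja2VyMTIz' | base64 -d
# prints: admin:docker123
```

One way to fix the mismatch (a sketch, using the credentials decoded above) is to recreate the pull secret keyed to the registry's cluster address instead of localhost, e.g. kubectl create secret docker-registry registrypullsecret --docker-server=10.96.81.252:5000 --docker-username=admin --docker-password=docker123, so the entry matches the host in the image name.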
I am trying to deploy one pod per node. It works fine with the DaemonSet kind when the cluster is created with kube-up, but we migrated cluster creation to kops, and with kops the master node is part of the cluster.
I noticed the master node is defined with a specific label: kubernetes.io/role=master
and with a taint: scheduler.alpha.kubernetes.io/taints: [{"key":"dedicated","value":"master","effect":"NoSchedule"}]
But that does not stop the DaemonSet from deploying a pod on it.
So i tried to add scheduler.alpha.kubernetes.io/affinity:
- apiVersion: extensions/v1beta1
  kind: DaemonSet
  metadata:
    name: elasticsearch-data
    namespace: ess
    annotations:
      scheduler.alpha.kubernetes.io/affinity: >
        {
          "nodeAffinity": {
            "requiredDuringSchedulingRequiredDuringExecution": {
              "nodeSelectorTerms": [
                {
                  "matchExpressions": [
                    {
                      "key": "kubernetes.io/role",
                      "operator": "NotIn",
                      "values": ["master"]
                    }
                  ]
                }
              ]
            }
          }
        }
  spec:
    selector:
      matchLabels:
        component: elasticsearch
        type: data
        provider: fabric8
    template:
      metadata:
        labels:
          component: elasticsearch
          type: data
          provider: fabric8
      spec:
        serviceAccount: elasticsearch
        serviceAccountName: elasticsearch
        containers:
        - env:
          - name: "SERVICE_DNS"
            value: "elasticsearch-cluster"
          - name: "NODE_MASTER"
            value: "false"
          image: "essearch/ess-elasticsearch:1.7.6"
          name: elasticsearch
          imagePullPolicy: Always
          ports:
          - containerPort: 9300
            name: transport
          volumeMounts:
          - mountPath: "/usr/share/elasticsearch/data"
            name: task-pv-storage
        volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: task-pv-claim
        nodeSelector:
          minion: true
But it does not work. Does anyone know why?
The workaround I have for now is to use nodeSelector and add a label to the minion-only nodes, but I would rather avoid adding a label during cluster creation because it's an extra step; if I could avoid it, that would be best. :)
EDIT:
I changed to the following (given the answer), and I think it's right, but it does not help; I still get a pod deployed on the master:
- apiVersion: extensions/v1beta1
  kind: DaemonSet
  metadata:
    name: elasticsearch-data
    namespace: ess
  spec:
    selector:
      matchLabels:
        component: elasticsearch
        type: data
        provider: fabric8
    template:
      metadata:
        labels:
          component: elasticsearch
          type: data
          provider: fabric8
        annotations:
          scheduler.alpha.kubernetes.io/affinity: >
            {
              "nodeAffinity": {
                "requiredDuringSchedulingRequiredDuringExecution": {
                  "nodeSelectorTerms": [
                    {
                      "matchExpressions": [
                        {
                          "key": "kubernetes.io/role",
                          "operator": "NotIn",
                          "values": ["master"]
                        }
                      ]
                    }
                  ]
                }
              }
            }
      spec:
        serviceAccount: elasticsearch
        serviceAccountName: elasticsearch
        containers:
        - env:
          - name: "SERVICE_DNS"
            value: "elasticsearch-cluster"
          - name: "NODE_MASTER"
            value: "false"
          image: "essearch/ess-elasticsearch:1.7.6"
          name: elasticsearch
          imagePullPolicy: Always
          ports:
          - containerPort: 9300
            name: transport
          volumeMounts:
          - mountPath: "/usr/share/elasticsearch/data"
            name: task-pv-storage
        volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: task-pv-claim
Just move the annotation into the pod template: section (under metadata:).
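On clusters where the alpha annotation is no longer honored, the same constraint can be expressed with the first-class affinity field in the pod template spec. A sketch (note the stable API only supports the IgnoredDuringExecution variant):

```yaml
# Equivalent node affinity as a first-class field under template.spec.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/role
          operator: NotIn
          values: ["master"]
```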
Alternatively taint the master node (and you can remove the annotation):
kubectl taint nodes nameofmaster dedicated=master:NoSchedule
I suggest you read up on taints and tolerations.
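Once the master is tainted, it only accepts pods that explicitly tolerate the taint, so the DaemonSet above is kept off it with no further changes. A pod that *should* still run on the master would carry a toleration like this sketch (key, value, and effect matching the taint command above):

```yaml
# Toleration matching the dedicated=master:NoSchedule taint; add under
# the pod spec of workloads that should run on the master.
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "master"
  effect: "NoSchedule"
```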