Invalid ConfigMap generation for the Grafana Helm chart (kubernetes-helm)

I'm trying to create a ConfigMap using this values file. Example:
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        uid: prometheus
        url: http://prometheus-kube-prometheus-prometheus.monitoring:9090
      - name: Thanos
        type: prometheus
        uid: thanos
        url: http://thanos-query.monitoring:9090
      - name: Tempo
        type: tempo
        uid: tempo
        url: http://tempo-tempo-distributed-query-frontend.monitoring:3100
        jsonData:
          httpMethod: GET
          datasourceUid: loki
          lokiSearch:
            datasourceUid: loki
          serviceMap:
            datasourceUid: prometheus
          search:
            hide: false
          nodeGraph:
            enabled: true
      - name: Loki
        type: loki
        uid: loki
        url: http://loki.loki:3100
        jsonData:
          maxLine: 1000
          derivedFields:
            - datasourceUid: tempo
              matcherRegex: "traceID=(\\w+)"
              name: TraceID
              url: "$${__value.raw}"
...
But when the ConfigMap is created, I get the wrong contents. Example:
datasources.yaml: |
  apiVersion: 1
  datasources:
  - name: Prometheus
    type: prometheus
    uid: prometheus
    url: http://prometheus-kube-prometheus-prometheus.monitoring:9090
  - name: Thanos
    type: prometheus
    uid: thanos
    url: http://thanos-query.monitoring:9090
  - jsonData:
      datasourceUid: loki
      httpMethod: GET
      lokiSearch:
        datasourceUid: loki
      nodeGraph:
        enabled: true
      search:
        hide: false
      serviceMap:
        datasourceUid: prometheus
    name: Tempo
    type: tempo
    uid: tempo
    url: http://tempo-tempo-distributed-query-frontend.monitoring:3100
  - jsonData:
      derivedFields:
      - datasourceUid: tempo
        matcherRegex: traceID=(\w+)
        name: TraceID
        url: $${__value.raw}
      maxLine: 1000
    name: Loki
    type: loki
    uid: loki
    url: http://loki.loki:3100
I did everything according to the documentation, but after two days I still cannot understand why the ConfigMap is generated incorrectly.
Ultimately, what I'm trying to achieve is to get Loki search working with Tempo.
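For reference, a quick way to see exactly what the chart renders from a values file like this is to template it locally and show only the datasource ConfigMap (a sketch, assuming the upstream grafana/grafana chart, a values file saved as values.yaml, and that the datasource block is rendered by templates/configmap.yaml in the chart version in use):
$ helm template my-grafana grafana/grafana -f values.yaml --show-only templates/configmap.yaml
The rendered output is exactly what ends up in the ConfigMap, so it can be compared against the values block above without deploying anything.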

Related

In Grafana I am getting a "400 Bad Request Client sent an HTTP request to an HTTPS server" when trying to update datasource configmaps

In Grafana I notice that when I deploy a ConfigMap that should add a datasource, nothing changes and the new datasource does not appear - note that the ConfigMap is in the cluster and in the correct namespace.
If I make a change to the ConfigMap, I see the following error in the logs of the grafana-sc-datasources container:
POST request sent to http://localhost:3000/api/admin/provisioning/datasources/reload. Response: 400 Bad Request Client sent an HTTP request to an HTTPS server.
I assume I do not see any changes because the sidecar cannot complete the POST request.
I played around a bit, and at one point I did see changes being made/updated in the datasources:
I changed the protocol to http under grafana.ini / server / protocol and I was NOT able to open the Grafana website, but I did notice that if I then made a change to a datasource ConfigMap in the cluster, I would see a successful 200 message in the logs of the grafana-sc-datasources container: POST request sent to http://localhost:3000/api/admin/provisioning/datasources/reload. Response: 200 OK {"message":"Datasources config reloaded"}.
So I assume I just need to know how to get the sidecar to send the POST request to Grafana over HTTPS instead of HTTP.
Can someone point me to what might be wrong and how to fix it?
Note that I am pretty new to K8s, Grafana, and Helm charts.
Here is a configmap that I am trying to get to work:
apiVersion: v1
kind: ConfigMap
metadata:
  name: jaeger-${NACKLE_ENV}-grafana-datasource
  labels:
    grafana_datasource: '1'
data:
  jaeger-datasource.yaml: |-
    apiVersion: 1
    datasources:
      - name: Jaeger-${NACKLE_ENV}
        type: jaeger
        access: browser
        url: http://jaeger-${NACKLE_ENV}-query.${NACKLE_ENV}.svc.cluster.local:16690
        version: 1
        basicAuth: false
Here is the current Grafana values file:
# use 1 replica when using a StatefulSet
# If we need more than 1 replica, then we'll have to:
#   - remove the `persistence` section below
#   - use an external database for all replicas to connect to (refer to Grafana Helm chart docs)
replicas: 1
image:
  pullSecrets:
    - docker-hub
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
            - key: eks.amazonaws.com/capacityType
              operator: In
              values:
                - ON_DEMAND
persistence:
  enabled: true
  type: statefulset
  storageClassName: biw-durable-gp2
podDisruptionBudget:
  maxUnavailable: 1
admin:
  existingSecret: grafana
sidecar:
  datasources:
    enabled: true
    label: grafana_datasource
  dashboards:
    enabled: true
    label: grafana_dashboard
    labelValue: 1
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: 'default'
        orgId: 1
        folder: ''
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards/default
dashboards:
  default:
    node-exporter:
      gnetId: 1860
      revision: 23
      datasource: Prometheus
    core-dns:
      gnetId: 12539
      revision: 5
      datasource: Prometheus
    fluentd:
      gnetId: 7752
      revision: 6
      datasource: Prometheus
ingress:
  apiVersion: networking.k8s.io/v1
  enabled: true
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: '/api/health'
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    # Redirect to HTTPS at the ALB
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
  spec:
    rules:
      - http:
          paths:
            - path: /*
              pathType: ImplementationSpecific
              backend:
                service:
                  name: ssl-redirect
                  port:
                    name: use-annotation
    defaultBackend:
      service:
        name: grafana
        port:
          number: 80
livenessProbe: { "httpGet": { "path": "/api/health", "port": 3000, "scheme": "HTTPS" }, "initialDelaySeconds": 60, "timeoutSeconds": 30, "failureThreshold": 10 }
readinessProbe: { "httpGet": { "path": "/api/health", "port": 3000, "scheme": "HTTPS" } }
service:
  type: NodePort
  name: grafana
rolePrefix: app-role
env: eks-test
serviceAccount:
  name: grafana
  annotations:
    eks.amazonaws.com/role-arn: ""
pod:
  spec:
    serviceAccountName: grafana
grafana.ini:
  server:
    # don't use enforce_domain - it causes an infinite redirect in our setup
    # enforce_domain: true
    enable_gzip: true
    # NOTE - if I set the protocol to http I do see it make changes to datasources but I can not see the website
    protocol: https
    cert_file: /biw-cert/domain.crt
    cert_key: /biw-cert/domain.key
  users:
    auto_assign_org_role: Editor
  # https://grafana.com/docs/grafana/v6.5/auth/gitlab/
  auth.gitlab:
    enabled: true
    allow_sign_up: true
    org_role: Editor
    scopes: read_api
    auth_url: https://gitlab.biw-services.com/oauth/authorize
    token_url: https://gitlab.biw-services.com/oauth/token
    api_url: https://gitlab.biw-services.com/api/v4
    allowed_groups: nackle-teams/devops
securityContext:
  fsGroup: 472
  runAsUser: 472
  runAsGroup: 472
extraConfigmapMounts:
  - name: "cert-configmap"
    mountPath: "/biw-cert"
    subPath: ""
    configMap: biw-grafana-cert
    readOnly: true
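One thing worth checking, as an assumption about the chart rather than something confirmed above: newer releases of the grafana chart appear to expose values for the URL the provisioning sidecar posts to and for skipping TLS verification, which would let the reload call target HTTPS instead of plain HTTP. A sketch of what that might look like, to be verified against the chart's own values.yaml before relying on it:
sidecar:
  skipTlsVerify: true   # assumption: forwarded to the sidecar's TLS-verification setting
  datasources:
    enabled: true
    label: grafana_datasource
    reloadURL: "https://localhost:3000/api/admin/provisioning/datasources/reload"   # assumption: key may only exist in newer chart releases
If the chart version in use does not have these keys they will simply be ignored, so check the shipped values.yaml first.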

How To Import Multiple Grafana Dashboards via A Helm Chart

Is there a way to install multiple Grafana dashboards into the same folder via Helm?
I have created a configMap
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-dashboards
  labels:
    grafana_dashboard: "1"
data:
  kubernetes.json: |
{{ .Files.Get "dashboards/kubernetes-cluster.json" | indent 4 }}
And I also created a dashboardProvider and a dashboardsConfigMaps entry for it.
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: 'monitoring'
        orgId: 1
        folder: "monitoring"
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards/monitoring
dashboardsConfigMaps:
  monitoring: "grafana-dashboards"
However, I want to add an additional dashboard into the same monitoring folder.
I've tried importing the json via the Grafana UI, and it works just fine, but I would like to persist it in code.
So I've created a new configmap.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: persistent-volumes
  labels:
    grafana_dashboard: "1"
data:
  kubernetes.json: |
{{ .Files.Get "dashboards/persistent-volumes.json" | indent 4 }}
And I also created a new dashboardProviders section and dashboardsConfigMaps entry.
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: 'monitoring'
        orgId: 1
        folder: "monitoring"
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards/monitoring
      - name: 'pvc'
        orgId: 1
        folder: "monitoring"
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards/monitoring
dashboardsConfigMaps:
  monitoring: "grafana-dashboards"
  pvc: "persistent-volumes"
But when I log into Grafana, I see a pvc folder but no dashboard in it.
What I want to do is create this new dashboard inside the monitoring folder, the same way I'm able to in the UI.
Your config looks about right.
Have you tried changing the path for the pvc provider to path: /var/lib/grafana/dashboards/pvc?
So it looks like this:
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: 'monitoring'
        orgId: 1
        folder: "monitoring"
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards/monitoring
      - name: 'pvc'
        orgId: 1
        folder: "monitoring"
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards/pvc
Instead of what you have at the moment.
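If the end goal is simply to have every dashboard land in the one monitoring folder, another option is to pack all the JSON files into the single ConfigMap that the existing monitoring provider already reads, instead of adding a second provider. A sketch, assuming the dashboard JSON files live under dashboards/ in the chart:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-dashboards
  labels:
    grafana_dashboard: "1"
data:
{{ (.Files.Glob "dashboards/*.json").AsConfig | indent 2 }}
With that, dashboardsConfigMaps only needs the single monitoring: "grafana-dashboards" entry, and each file becomes its own dashboard in the folder.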

Kong-ingress-controller CrashLoopBackOff : "kong-proxy" is forbidden

I've installed kong-ingress-controller using a YAML file on a 3-node k8s cluster,
but I'm getting this (the pod status is CrashLoopBackOff):
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kong ingress-kong-74d8d78f57-57fqr 1/2 CrashLoopBackOff 12 (3m23s ago) 40m
[...]
There are two container declarations in the Kong YAML file: proxy and ingress-controller.
The first one is up and running, but the ingress-controller container is not:
$ kubectl describe pod ingress-kong-74d8d78f57-57fqr -n kong | less
[...]
ingress-controller:
Container ID: docker://8e9a3370f78b3057208b943048c9ecd51054d0b276ef6c93ccf049093261d8de
Image: kong/kubernetes-ingress-controller:1.3
Image ID: docker-pullable://kong/kubernetes-ingress-controller@sha256:cff0df9371d5ad07fef406c356839736ce9eeb0d33f918f56b1b232cd7289207
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 07 Sep 2021 17:15:54 +0430
Finished: Tue, 07 Sep 2021 17:15:54 +0430
Ready: False
Restart Count: 13
Liveness: http-get http://:10254/healthz delay=5s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:10254/healthz delay=5s timeout=1s period=10s #success=1 #failure=3
Environment:
CONTROLLER_KONG_ADMIN_URL: https://127.0.0.1:8444
CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY: true
CONTROLLER_PUBLISH_SERVICE: kong/kong-proxy
POD_NAME: ingress-kong-74d8d78f57-57fqr (v1:metadata.name)
POD_NAMESPACE: kong (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ft7gg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-ft7gg:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 46m default-scheduler Successfully assigned kong/ingress-kong-74d8d78f57-57fqr to kung-node-2
Normal Pulled 46m kubelet Container image "kong:2.5" already present on machine
Normal Created 46m kubelet Created container proxy
Normal Started 46m kubelet Started container proxy
Normal Pulled 45m (x4 over 46m) kubelet Container image "kong/kubernetes-ingress-controller:1.3" already present on machine
Normal Created 45m (x4 over 46m) kubelet Created container ingress-controller
Normal Started 45m (x4 over 46m) kubelet Started container ingress-controller
Warning BackOff 87s (x228 over 46m) kubelet Back-off restarting failed container
And here is the log of ingress-controller container:
-------------------------------------------------------------------------------
Kong Ingress controller
Release:
Build:
Repository:
Go: go1.16.7
-------------------------------------------------------------------------------
W0907 12:56:12.940106 1 client_config.go:614] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
time="2021-09-07T12:56:12Z" level=info msg="version of kubernetes api-server: 1.22" api-server-host="https://10.*.*.1:443" git_commit=632ed300f2c34f6d6d15ca4cef3d3c7073412212 git_tree_state=clean git_version=v1.22.1 major=1 minor=22 platform=linux/amd64
time="2021-09-07T12:56:12Z" level=fatal msg="failed to fetch publish-service: services \"kong-proxy\" is forbidden: User \"system:serviceaccount:kong:kong-serviceaccount\" cannot get resource \"services\" in API group \"\" in the namespace \"kong\"" service_name=kong-proxy service_namespace=kong
If someone could help me to get a solution, that would be awesome.
============================================================
UPDATE:
The kong-ingress-controller's yaml file:
apiVersion: v1
kind: Namespace
metadata:
name: kong
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongclusterplugins.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .plugin
description: Name of the plugin
name: Plugin-Type
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
- JSONPath: .disabled
description: Indicates if the plugin is disabled
name: Disabled
priority: 1
type: boolean
- JSONPath: .config
description: Configuration of the plugin
name: Config
priority: 1
type: string
group: configuration.konghq.com
names:
kind: KongClusterPlugin
plural: kongclusterplugins
shortNames:
- kcp
scope: Cluster
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
config:
type: object
configFrom:
properties:
secretKeyRef:
properties:
key:
type: string
name:
type: string
namespace:
type: string
required:
- name
- namespace
- key
type: object
type: object
disabled:
type: boolean
plugin:
type: string
protocols:
items:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
type: array
run_on:
enum:
- first
- second
- all
type: string
required:
- plugin
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongconsumers.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .username
description: Username of a Kong Consumer
name: Username
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
group: configuration.konghq.com
names:
kind: KongConsumer
plural: kongconsumers
shortNames:
- kc
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
credentials:
items:
type: string
type: array
custom_id:
type: string
username:
type: string
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongingresses.configuration.konghq.com
spec:
group: configuration.konghq.com
names:
kind: KongIngress
plural: kongingresses
shortNames:
- ki
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
proxy:
properties:
connect_timeout:
minimum: 0
type: integer
path:
pattern: ^/.*$
type: string
protocol:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
read_timeout:
minimum: 0
type: integer
retries:
minimum: 0
type: integer
write_timeout:
minimum: 0
type: integer
type: object
route:
properties:
headers:
additionalProperties:
items:
type: string
type: array
type: object
https_redirect_status_code:
type: integer
methods:
items:
type: string
type: array
path_handling:
enum:
- v0
- v1
type: string
preserve_host:
type: boolean
protocols:
items:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
type: array
regex_priority:
type: integer
request_buffering:
type: boolean
response_buffering:
type: boolean
snis:
items:
type: string
type: array
strip_path:
type: boolean
upstream:
properties:
algorithm:
enum:
- round-robin
- consistent-hashing
- least-connections
type: string
hash_fallback:
type: string
hash_fallback_header:
type: string
hash_on:
type: string
hash_on_cookie:
type: string
hash_on_cookie_path:
type: string
hash_on_header:
type: string
healthchecks:
properties:
active:
properties:
concurrency:
minimum: 1
type: integer
healthy:
properties:
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
successes:
minimum: 0
type: integer
type: object
http_path:
pattern: ^/.*$
type: string
timeout:
minimum: 0
type: integer
unhealthy:
properties:
http_failures:
minimum: 0
type: integer
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
tcp_failures:
minimum: 0
type: integer
timeout:
minimum: 0
type: integer
type: object
type: object
passive:
properties:
healthy:
properties:
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
successes:
minimum: 0
type: integer
type: object
unhealthy:
properties:
http_failures:
minimum: 0
type: integer
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
tcp_failures:
minimum: 0
type: integer
timeout:
minimum: 0
type: integer
type: object
type: object
threshold:
type: integer
type: object
host_header:
type: string
slots:
minimum: 10
type: integer
type: object
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongplugins.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .plugin
description: Name of the plugin
name: Plugin-Type
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
- JSONPath: .disabled
description: Indicates if the plugin is disabled
name: Disabled
priority: 1
type: boolean
- JSONPath: .config
description: Configuration of the plugin
name: Config
priority: 1
type: string
group: configuration.konghq.com
names:
kind: KongPlugin
plural: kongplugins
shortNames:
- kp
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
config:
type: object
configFrom:
properties:
secretKeyRef:
properties:
key:
type: string
name:
type: string
required:
- name
- key
type: object
type: object
disabled:
type: boolean
plugin:
type: string
protocols:
items:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
type: array
run_on:
enum:
- first
- second
- all
type: string
required:
- plugin
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: tcpingresses.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .status.loadBalancer.ingress[*].ip
description: Address of the load balancer
name: Address
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
group: configuration.konghq.com
names:
kind: TCPIngress
plural: tcpingresses
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
properties:
rules:
items:
properties:
backend:
properties:
serviceName:
type: string
servicePort:
format: int32
type: integer
type: object
host:
type: string
port:
format: int32
type: integer
type: object
type: array
tls:
items:
properties:
hosts:
items:
type: string
type: array
secretName:
type: string
type: object
type: array
type: object
status:
type: object
version: v1beta1
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kong-serviceaccount
namespace: kong
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: kong-ingress-clusterrole
rules:
- apiGroups:
- ""
resources:
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
- extensions
- networking.internal.knative.dev
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- networking.k8s.io
- extensions
- networking.internal.knative.dev
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- configuration.konghq.com
resources:
- tcpingresses/status
verbs:
- update
- apiGroups:
- configuration.konghq.com
resources:
- kongplugins
- kongclusterplugins
- kongcredentials
- kongconsumers
- kongingresses
- tcpingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- get
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kong-ingress-clusterrole-nisa-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kong-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: kong-serviceaccount
namespace: kong
---
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-type: nlb
name: kong-proxy
namespace: kong
spec:
ports:
- name: proxy
port: 80
protocol: TCP
targetPort: 8000
- name: proxy-ssl
port: 443
protocol: TCP
targetPort: 8443
selector:
app: ingress-kong
type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
name: kong-validation-webhook
namespace: kong
spec:
ports:
- name: webhook
port: 443
protocol: TCP
targetPort: 8080
selector:
app: ingress-kong
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: ingress-kong
name: ingress-kong
namespace: kong
spec:
replicas: 1
selector:
matchLabels:
app: ingress-kong
template:
metadata:
annotations:
kuma.io/gateway: enabled
prometheus.io/port: "8100"
prometheus.io/scrape: "true"
traffic.sidecar.istio.io/includeInboundPorts: ""
labels:
app: ingress-kong
spec:
containers:
- env:
- name: KONG_PROXY_LISTEN
value: 0.0.0.0:8000, 0.0.0.0:8443 ssl http2
- name: KONG_PORT_MAPS
value: 80:8000, 443:8443
- name: KONG_ADMIN_LISTEN
value: 127.0.0.1:8444 ssl
- name: KONG_STATUS_LISTEN
value: 0.0.0.0:8100
- name: KONG_DATABASE
value: "off"
- name: KONG_NGINX_WORKER_PROCESSES
value: "2"
- name: KONG_ADMIN_ACCESS_LOG
value: /dev/stdout
- name: KONG_ADMIN_ERROR_LOG
value: /dev/stderr
- name: KONG_PROXY_ERROR_LOG
value: /dev/stderr
image: kong:2.5
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- kong quit
livenessProbe:
failureThreshold: 3
httpGet:
path: /status
port: 8100
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: proxy
ports:
- containerPort: 8000
name: proxy
protocol: TCP
- containerPort: 8443
name: proxy-ssl
protocol: TCP
- containerPort: 8100
name: metrics
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /status
port: 8100
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
- env:
- name: CONTROLLER_KONG_ADMIN_URL
value: https://127.0.0.1:8444
- name: CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY
value: "true"
- name: CONTROLLER_PUBLISH_SERVICE
value: kong/kong-proxy
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
image: kong/kubernetes-ingress-controller:1.3
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: ingress-controller
ports:
- containerPort: 8080
name: webhook
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
serviceAccountName: kong-serviceaccount
Having analysed the comments, it looks like changing the apiVersion from rbac.authorization.k8s.io/v1beta1 to rbac.authorization.k8s.io/v1 solved the problem temporarily; an alternative to this is to downgrade the cluster.
I had installed Kubernetes 1.22 and tried to use kong/kubernetes-ingress-controller:1.3.
As mdaniel said in the comment:
Upon further investigation into that repo, 1.x only works up to k8s 1.21 so I'll delete my answer and you'll have to downgrade your cluster(!) or find an alternative Ingress controller
Based on the documentation (see the KIC version compatibility table): kong/kubernetes-ingress-controller supports a maximum Kubernetes version of 1.21 (at the time of writing this answer). So I decided to downgrade my cluster to version 1.20, and this solved my problem.
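For completeness, the temporary RBAC fix mentioned above amounts to bumping the API group of the two RBAC objects in the manifest, since rbac.authorization.k8s.io/v1beta1 was removed in Kubernetes 1.22 (only the changed headers are sketched here):
apiVersion: rbac.authorization.k8s.io/v1   # was rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kong-ingress-clusterrole
# rules unchanged
---
apiVersion: rbac.authorization.k8s.io/v1   # was rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kong-ingress-clusterrole-nisa-binding
# roleRef and subjects unchanged
Note that the apiextensions.k8s.io/v1beta1 CRDs in the same file are removed in 1.22 as well, which is why staying on an older cluster or moving to a newer controller release is the more complete fix.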

Import dashboard with Helm using Sidecar for dashboards

I've exported a Grafana dashboard (the output is a JSON file) and now I would like to import it when I install Grafana (all automatic, with Helm and Kubernetes).
I just read this post about how to add a datasource, which uses the sidecar setup. In short, you need to create a values.yaml with
sidecar:
  image: xuxinkun/k8s-sidecar:0.0.7
  imagePullPolicy: IfNotPresent
  datasources:
    enabled: true
    label: grafana_datasource
And a ConfigMap which matches that label
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-grafana-datasource
  labels:
    grafana_datasource: '1'
data:
  datasource.yaml: |-
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        orgId: 1
        url: http://source-prometheus-server
OK, this works, so I tried to do something similar for dashboards and updated the values.yaml:
sidecar:
  image: xuxinkun/k8s-sidecar:0.0.7
  imagePullPolicy: IfNotPresent
  dashboards:
    enabled: false
    # label that the configmaps with dashboards are marked with
    label: grafana_dashboard
  datasources:
    enabled: true
    label: grafana_datasource
And the ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-grafana-dashboards
  labels:
    grafana_dashboard: '1'
data:
  custom-dashboards.json: |-
    {
      "annotations": {
        "list": [
          {
...
However, when I install Grafana this time and log in, there are no dashboards.
Any suggestions as to what I'm doing wrong here?
sidecar:
  image: xuxinkun/k8s-sidecar:0.0.7
  imagePullPolicy: IfNotPresent
  dashboards:
    enabled: false
    # label that the configmaps with dashboards are marked with
    label: grafana_dashboard
  datasources:
    enabled: true
    label: grafana_datasource
In the values above, sidecar.dashboards.enabled needs to be true for the dashboard sidecar to run and pick up the labelled ConfigMaps; it is currently set to false.
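A minimal corrected version of that sidecar section (everything else left as it was):
sidecar:
  image: xuxinkun/k8s-sidecar:0.0.7
  imagePullPolicy: IfNotPresent
  dashboards:
    enabled: true   # was false, so the dashboard ConfigMaps were never collected
    # label that the configmaps with dashboards are marked with
    label: grafana_dashboard
  datasources:
    enabled: true
    label: grafana_datasource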

Configure dashboard via values

As the title indicates, I'm trying to set up Grafana using helmfile, with a default dashboard configured via values.
The relevant part of my helmfile is here
releases:
  ...
  - name: grafana
    namespace: grafana
    chart: stable/grafana
    values:
      - datasources:
          datasources.yaml:
            apiVersion: 1
            datasources:
              - name: Prometheus
                type: prometheus
                access: proxy
                url: http://prometheus-server.prometheus.svc.cluster.local
                isDefault: true
      - dashboardProviders:
          dashboardproviders.yaml:
            apiVersion: 1
            providers:
              - name: 'default'
                orgId: 1
                folder: ''
                type: file
                disableDeletion: false
                editable: true
                options:
                  path: /var/lib/grafana/dashboards
      - dashboards:
          default:
            k8s:
              url: https://grafana.com/api/dashboards/8588/revisions/1/download
As far as I can understand by reading here, I need a provider and then I can refer to a dashboard by URL. However, when I do as shown above, no dashboard is installed, and when I do as below
- dashboards:
    default:
      url: https://grafana.com/api/dashboards/8588/revisions/1/download
I get the following error message
Error: render error in "grafana/templates/deployment.yaml": template: grafana/templates/deployment.yaml:148:20: executing "grafana/templates/deployment.yaml" at <$value>: wrong type for value; expected map[string]interface {}; got string
Any clues about what I'm doing wrong?
I think the problem is that you're defining the datasources, dashboardProviders and dashboards as lists rather than maps so you need to remove the hyphens, meaning that the values section becomes:
values:
  datasources:
    datasources.yaml:
      apiVersion: 1
      datasources:
        - name: Prometheus
          type: prometheus
          url: http://prometheus-prometheus-server
          access: proxy
          isDefault: true
  dashboardProviders:
    dashboardproviders.yaml:
      apiVersion: 1
      providers:
        - name: 'default'
          orgId: 1
          folder: ''
          type: file
          disableDeletion: false
          editable: true
          options:
            path: /var/lib/grafana/dashboards
  dashboards:
    default:
      k8s:
        url: https://grafana.com/api/dashboards/8588/revisions/1/download
The grafana chart has them as maps, and using helmfile doesn't change that.
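A quick way to confirm the fix without deploying is to render just that release locally and check that the dashboard URL makes it into the generated resources (the name=grafana selector assumes the release is named grafana, as in the helmfile above):
$ helmfile --selector name=grafana template | grep -n "dashboards/8588"
If the URL shows up in the rendered output, the chart's dashboard download step should fetch it into /var/lib/grafana/dashboards at startup.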