Helm (jinja2): parse error "did not find expected key" at a non-existent further line - visual-studio-code

This is my first question ever on the internet. I've been helped much by reading other people's fixes but now it's time to humbly ask for help myself.
I get the following error from Helm (helm3 install ist-gw-t1 --dry-run):
Error: INSTALLATION FAILED: YAML parse error on istio-gateways/templates/app-name.yaml: error converting YAML to JSON: yaml: line 40: did not find expected key
However, the file is only 27 lines long! It used to be longer, but I removed the other Kubernetes resources to narrow down the search for the issue.
Template file
{{- range .Values.ingressConfiguration.app-name }}
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: "{{ .name }}-tcp-8080"
  namespace: {{ .namespace }}
  labels:
    app: {{ .name }}
    chart: {{ $.Chart.Name }}-{{ $.Chart.Version | replace "+" "_" }}
    release: {{ $.Release.Name }}
    heritage: {{ $.Release.Service }}
spec:
  hosts: # This was line 40 before shortening the file.
    - {{ .fqdn }}
  gateways:
    - {{ .name }}
  {{ (.routing).type }}:
    - route:
        - destination:
            port:
              number: {{ (.services).servicePort2 }}
            host: {{ (.services).serviceUrl2 }}
      match:
        port: "8080"
        uri:
          exact: {{ .fqdn }}
{{- end }}
values.yaml
networkPolicies:
ingressConfiguration:
  app-name:
    - namespace: 'namespace'
      name: 'app-name'
      ingressController: 'internal-istio-ingress-gateway' # ?
      fqdn: '48.characters.long'
      tls:
        credentialName: 'name-of-the-secret' # ?
        mode: 'SIMPLE' # ?
      serviceUrl1: 'foo' # ?
      servicePort1: '8080'
      routing:
        # Available routing types: http or tls
        # In case the tls routing type is selected, the matchingSniHost (resp. rewriteURI/matchingURIs) has (resp. have) to be filled (resp. empty)
        type: http
        rewriteURI: ''
        matchingURIs: ['foo']
        matchingSniHost: []
    - services:
        serviceUrl2: "foo"
        servicePort2: "8080"
    - externalServices:
        mysql: 'bar'
Where does the error come from?
Why does Helm still report line 40 as problematic, even after shortening the file? (See the debugging sketch below.)
Can you please recommend a Visual Studio Code extension that could have helped me? I have the following slightly relevant ones, but they do not have linting (or I do not know how to use it): YAML by Red Hat; and Kubernetes by Microsoft.
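One way to see what Helm is actually parsing is to dump the rendered manifest: the line number in the error refers to the rendered output of the template, not to the template source file, and a range loop that emits several resources (each iteration normally needs a leading --- separator) easily makes that output longer than the 27-line source. A sketch, assuming the chart directory is ./istio-gateways:
# render only this template and number the output lines
helm template ist-gw-t1 ./istio-gateways --show-only templates/app-name.yaml --debug | cat -n
With --debug, Helm prints the rendered template even when the YAML-to-JSON conversion fails, so you can inspect what its "line 40" actually contains.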

Related

Failing to set range loop of helm in Istio virtual Service HTTP Retry

Bug Description
Adding multiple destinations for a canary deployment works smoothly, but when I try to add retries the custom-built Helm chart fails, because I can't iterate over them.
This is a problem because the retries are tied to each destination, so the whole block should be iterated.
Please find the Helm chart template below.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: {{ .Values.virtualservice.name }}
  namespace: {{ .Values.namespace }}
spec:
  hosts:
    - {{ .Values.virtualservice.hosts }}
  gateways:
    - {{ .Values.virtualservice.gateways }}
  http:
    - route:
      {{- range $key, $value := .Values.destination }}
        - destination:
            host: {{ $value.host }}
            subset: {{ $value.subset }}
          weight: {{ $value.weight }}
          retries:
            attempts: {{ $value.retries.attempts }}
            perTryTimeout: {{ $value.retries.perTryTimeout }}
            retryOn: {{ $value.retries.retryOn }}
          timeout: {{ $value.retries.timeout }}
      {{- end }}
Error log
$ helm install asm-helm ./asm-svc-helm-chart -f values.yaml --dry-run
Error: INSTALLATION FAILED: YAML parse error on asmvrtsvc/templates/retry-svc.yaml: error converting YAML to JSON: yaml: line 21: did not find expected key
Version
$ kubectl version --short
Client Version: v1.24.0
Kustomize Version: v4.5.4
Server Version: v1.22.12-gke.300
$ helm version
v3.9.4+gdbc6d8e
Added example for reference
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 75
          retries:
            attempts: 3
            perTryTimeout: 2s
        - destination:
            host: reviews
            subset: v2
          weight: 25
          retries:
            attempts: 3
            perTryTimeout: 2s
According to the schema of VirtualService, an HTTP route in a VirtualService can have only one retries field.
So the loop should only build the destination array; see the sketch below.
*: https://istio.io/latest/docs/reference/config/networking/virtual-service/
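A minimal sketch of how the template could be restructured along those lines, iterating only the destinations and keeping retries and timeout once per HTTP route (the .Values.retries and .Values.timeout keys are assumptions, not part of the original values file):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: {{ .Values.virtualservice.name }}
  namespace: {{ .Values.namespace }}
spec:
  hosts:
    - {{ .Values.virtualservice.hosts }}
  gateways:
    - {{ .Values.virtualservice.gateways }}
  http:
    - route:
      {{- range $key, $value := .Values.destination }}
        - destination:
            host: {{ $value.host }}
            subset: {{ $value.subset }}
          weight: {{ $value.weight }}
      {{- end }}
      retries:
        attempts: {{ .Values.retries.attempts }}
        perTryTimeout: {{ .Values.retries.perTryTimeout }}
        retryOn: {{ .Values.retries.retryOn }}
      timeout: {{ .Values.timeout }}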

acumos AI clio installation fails with "error converting YAML to JSON"

I have been trying to install the Clio release.
VM:
ubuntu 18.04
16 Cores
32 GB RAM
500 GB Storage.
Command:
bash /home/ubuntu/system-integration/tools/aio_k8s_deployer/aio_k8s_deployer.sh all acai-server ubuntu generic
Almost all steps of the installation completed successfully, but during "setup-lum" I got the error below.
Error:
YAML parse error on lum-helm/templates/deployment.yaml:
error converting YAML to JSON: yaml: line 36: mapping values are not allowed in this context
Workaround:
I was able to get around this error (tested via helm install --dry-run) by
a. removing the "resources", "affinity" and "tolerations" blocks
b. replacing "Release.Name" with the actual release value (e.g. license-clio-configmap)
but when I run the full installation command, those Helm charts are overwritten again.
Full error:
...
helm install -f kubernetes/values.yaml --name license-clio --namespace default --debug ./kubernetes/license-usage-manager/lum-helm
[debug] Created tunnel using local port: '46109'
[debug] SERVER: "127.0.0.1:46109"
[debug] Original chart version: ""
[debug] CHART PATH: /deploy/system-integration/AIO/lum/kubernetes/license-usage-manager/lum-helm
YAML parse error on lum-helm/templates/deployment.yaml: error converting YAML to JSON: yaml: line 36: mapping values are not allowed in this context
YAML of deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "lum-helm.fullname" . }}
  labels:
    app: {{ template "lum-helm.name" . }}
    chart: {{ template "lum-helm.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "lum-helm.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "lum-helm.name" . }}
        release: {{ .Release.Name }}
    spec:
      initContainers:
        - name: wait-for-db
          image: busybox:1.28
          command:
            - 'sh'
            - '-c'
            - >
              until nc -z -w 2 {{ .Release.Name }}-postgresql {{ .Values.postgresql.servicePort }} && echo postgresql ok;
                do sleep 2;
              done
      containers:
        - name: {{ .Chart.Name }}
          image: nexus3.acumos.org:10002/acumos/lum-server:default
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: {{ .Release.Name }}-postgresql
                  key: postgresql-password
            - name: NODE
          volumeMounts:
            - name: config-volume
              mountPath: /opt/app/lum/etc/config.json
              subPath: lum-config.json
          ports:
            - name: http
              containerPort: 2080
              protocol: TCP
          livenessProbe:
            httpGet:
              path: '/api/healthcheck'
              port: http
            initialDelaySeconds: 60
            periodSeconds: 10
            failureThreshold: 10
          readinessProbe:
            httpGet:
              path: '/api/healthcheck'
              port: http
            initialDelaySeconds: 60
            periodSeconds: 10
            failureThreshold: 10
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
    {{- end }}
      volumes:
        - name: config-volume
          configMap:
            name: {{ .Release.Name }}-configmap
This error was resolved as per Error trying to install Acumos Clio using AIO.
I provided imagetag: 1.3.2 in my actual value.yaml and the lum deployment was successful.
In the Acumos setup there are two copies of setup-lum.sh and values.yaml:
the actual one:
~/system-integration/AIO/lum/kubernetes/value.yaml
and the run-time copy:
~/aio_k8s_deployer/deploy/system-integration/AIO/lum/kubernetes/value.yaml
I found this workaround:
Uncommented the IMAGE-TAG line in the values.yaml file.
Commented the following lines in the setup-lum.sh file (they had already been executed in the first run, so this way I skipped the overwriting problem):
rm -frd kubernetes/license-usage-manager
git clone "https://gerrit.acumos.org/r/license-usage-manager" \
kubernetes/license-usage-manager

Knative service with Keycloak gatekeeper sidecar

I am trying to deploy the following service:
{{- if .Values.knativeDeploy }}
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
{{- if .Values.service.name }}
  name: {{ .Values.service.name }}
{{- else }}
  name: {{ template "fullname" . }}
{{- end }}
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  template:
    spec:
      containers:
      - image: quay.io/keycloak/keycloak-gatekeeper:9.0.3
        name: gatekeeper-sidecar
        ports:
        - containerPort: {{ .Values.keycloak.proxyPort }}
        env:
        - name: KEYCLOAK_CLIENT_SECRET
          valueFrom:
            secretKeyRef:
              name: {{ template "keycloakclient" . }}
              key: secret
        args:
        - --resources=uri=/*
        - --discovery-url={{ .Values.keycloak.url }}/auth/realms/{{ .Values.keycloak.realm }}
        - --client-id={{ template "keycloakclient" . }}
        - --client-secret=$(KEYCLOAK_CLIENT_SECRET)
        - --listen=0.0.0.0:{{ .Values.keycloak.proxyPort }} # listen on all interfaces
        - --enable-logging=true
        - --enable-json-logging=true
        - --upstream-url=http://127.0.0.1:{{ .Values.service.internalPort }} # To connect with the main container's port
        resources:
{{ toYaml .Values.gatekeeper.resources | indent 12 }}
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        env:
{{- range $pkey, $pval := .Values.env }}
        - name: {{ $pkey }}
          value: {{ quote $pval }}
{{- end }}
        envFrom:
{{ toYaml .Values.envFrom | indent 10 }}
        ports:
        - containerPort: {{ .Values.service.internalPort }}
        livenessProbe:
          httpGet:
            path: {{ .Values.probePath }}
            port: {{ .Values.service.internalPort }}
          initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
          periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
          successThreshold: {{ .Values.livenessProbe.successThreshold }}
          timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
        readinessProbe:
          httpGet:
            path: {{ .Values.probePath }}
            port: {{ .Values.service.internalPort }}
          periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
          successThreshold: {{ .Values.readinessProbe.successThreshold }}
          timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
        resources:
{{ toYaml .Values.resources | indent 12 }}
      terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
{{- end }}
Which fails with the following error:
Error from server (BadRequest): error when creating "/tmp/helm-template-workdir-290082188/jx/output/namespaces/jx-staging/env/charts/docs/templates/part0-ksvc.yaml": admission webhook "webhook.serving.knative.dev" denied the request: mutation failed: expected exactly one, got both: spec.template.spec.containers'
Now, if I read the specs (https://knative.dev/v0.15-docs/serving/getting-started-knative-app/), I can see this example:
apiVersion: serving.knative.dev/v1 # Current version of Knative
kind: Service
metadata:
  name: helloworld-go # The name of the app
  namespace: default # The namespace the app will use
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go # The URL to the image of the app
          env:
            - name: TARGET # The environment variable printed out by the sample app
              value: "Go Sample v1"
Which has exactly the same structure. Now, my questions are:
How can I validate my YAML without waiting for a deployment? (See the sketch after these questions.) IntelliJ has a k8s plugin, but I can't find a machine-consumable CRD schema for serving.knative.dev/v1. (https://knative.dev/docs/serving/spec/knative-api-specification-1.0/)
Is it allowed with Knative to have multiple containers? (That configuration works perfectly with apiVersion: apps/v1, kind: Deployment.)
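Regarding validation without deploying: one option worth trying is a server-side dry run, which sends the manifest to the API server (so CRD schema checks and, where the webhook allows dry-run, admission webhooks such as webhook.serving.knative.dev are exercised) without persisting anything. A sketch, with the file name taken from the error message above:
kubectl apply --dry-run=server -f part0-ksvc.yaml
This only works if the validating/mutating webhooks declare themselves dry-run safe (sideEffects: None); otherwise the API server rejects the dry-run request.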
Multi-container is an alpha feature in Knative version 0.16.
This feature needs to be enabled by setting multi-container to enabled in the config-features ConfigMap. So edit the ConfigMap using
kubectl edit cm config-features and enable that feature.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-features
  namespace: knative-serving
  labels:
    serving.knative.dev/release: devel
  annotations:
    knative.dev/example-checksum: "983ddf13"
data:
  _example: |
    ...
    # Indicates whether multi container support is enabled
    multi-container: "enabled"
    ...
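If you prefer not to edit the ConfigMap interactively, the same change can be applied non-interactively; a sketch (note the flag has to be set as a real key under data, not only inside the commented _example block):
kubectl patch configmap config-features -n knative-serving --type merge -p '{"data":{"multi-container":"enabled"}}'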
What version of Knative are you using?
Support for multiple containers was added as an alpha feature in 0.16. If you're not using 0.16 or later or don't have the alpha flag enabled, the request will probably be blocked.
There were a number of edge cases to define for multi-container support in Knative, so the default was to be conservative and only allow one container until the constraints had been explored.

Helm Error converting YAML to JSON: yaml: line 20: did not find expected key

I don't really know what the error is here; it's a simple Helm deploy with a _helpers.tpl. It doesn't make sense and is probably a stupid mistake. The code:
deploy.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
{{ include "metadata.name" . }}-deploy
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          vars: {{- include "envs.var" .Values.secret.data }}
_helpers.tpl
{{- define "envs.var"}}
{{- range $key := . }}
- name: {{ $key | upper | quote}}
  valueFrom:
    secretKeyRef:
      key: {{ $key | lower }}
      name: {{ $key }}-auth
{{- end }}
{{- end }}
values.yaml
secret:
  data:
    username: root
    password: test
The error:
Error: YAML parse error on mychart/templates/deploy.yaml: error converting YAML to JSON: yaml: line 21: did not find expected key
Here the problem happens because of indentation. You can resolve it by updating the line to:
env: {{- include "envs.var" .Values.secret.data | nindent 12 }}
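For context, a sketch of where that corrected line sits in deploy.yaml, with the surrounding indentation chosen so that nindent 12 places the generated list entries one level under env: (the exact column depends on how your container spec is indented):
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          env: {{- include "envs.var" .Values.secret.data | nindent 12 }}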
The simplest way to resolve this kind of issue is to use tooling; these are mostly indentation problems and can be caught very easily with the right tool. yaml-lint is one such tool:
npm install -g yaml-lint
PS E:\vsc-workspaces\grafana-1> yamllint .\grafana.yaml
× YAML Lint failed for C:/Users/mnadeem6/vsc-workspaces/grafana-1/grafana.yaml
× bad indentation of a mapping entry at line 137, column 11:
          restartPolicy: Always
          ^
After fixing the indentation:
PS E:\vsc-workspaces\grafana-1> yamllint .\grafana.yaml
√ YAML Lint successful.

How do I load multiple templated config files into a helm chart?

So I am trying to build a Helm chart.
In my templates directory I've got a file like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
data:
{{ Do something here to load up a set of files | indent 2 }}
I have another directory in my chart, configmaps, containing a set of JSON files that themselves have templated variables in them:
a.json
b.json
c.json
Ultimately, I'd like to be sure that in my chart I can reference:
volumes:
  - name: config-a
    configMap:
      name: config-map
      items:
        - key: a.json
          path: a.json
I had the same problem a few weeks ago with adding files and templates directly to a container.
Here is the sample syntax:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configmap-{{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
data:
  nginx_conf: {{ tpl (.Files.Get "files/nginx.conf") . | quote }}
  ssl_conf: {{ tpl (.Files.Get "files/ssl.conf") . | quote }}
  dhparam_pem: {{ .Files.Get "files/dhparam.pem" | quote }}
  fastcgi_conf: {{ .Files.Get "files/fastcgi.conf" | quote }}
  mime_types: {{ .Files.Get "files/mime.types" | quote }}
  proxy_params_conf: {{ .Files.Get "files/proxy_params.conf" | quote }}
The second step is to reference it from the deployment:
volumes:
  - name: {{ $.Release.Name }}-configmap-volume
    configMap:
      name: nginx-configmap-{{ $.Release.Name }}
      items:
        - key: dhparam_pem
          path: dhparam.pem
        - key: fastcgi_conf
          path: fastcgi.conf
        - key: mime_types
          path: mime.types
        - key: nginx_conf
          path: nginx.conf
        - key: proxy_params_conf
          path: proxy_params.conf
        - key: ssl_conf
          path: ssl.conf
This is still current. Here you can find two types of importing:
regular files without templating
configuration files with dynamic variables inside
Please do not forget to read the official docs:
https://helm.sh/docs/chart_template_guide/accessing_files/
Good luck!
To include all files from the directory config-dir/, use {{ range }}:
my-configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  {{- $files := .Files }}
  {{- range $key, $value := .Files }}
  {{- if hasPrefix "config-dir/" $key }} {{/* only when in config-dir/ */}}
  {{ $key | trimPrefix "config-dir/" }}: {{ $files.Get $key | quote }} {{/* adapt $key as desired */}}
  {{- end }}
  {{- end }}
my-deployment.yaml
apiVersion: apps/v1
kind: Deployment
...
spec:
  template:
    ...
    spec:
      containers:
        - name: my-pod-container
          ...
          volumeMounts:
            - name: my-volume
              mountPath: /config
              readOnly: true # is RO anyway for configMap
      volumes:
        - name: my-volume
          configMap:
            name: my-configmap
            # defaultMode: 0555 # mode rx for all
I assume that a.json, b.json, c.json, etc. are a defined list and you know all of their contents (apart from the bits you want to set as values through templated variables). I'm also assuming you only want to expose parts of the files' content to users, not let them configure the whole file content. (But if I'm assuming wrong and you do want to let users set the whole file content, then the suggestion from @hypnoglow of following the datadog chart seems a good one to me.) If so, I'd suggest the simplest way to do it is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
data:
  a.json: |
    # content of a.json in here, including any templated stuff with {{ }}
  b.json: |
    # content of b.json in here, including any templated stuff with {{ }}
  c.json: |
    # content of c.json in here, including any templated stuff with {{ }}
I guess you'd like to mount them to the same directory. It would be tempting, for cleanliness, to use different ConfigMaps, but that would then be a problem for mounting to the same directory. It would also be nice to be able to load the files independently using .Files.Glob, so you could reference the files without putting the whole content in the ConfigMap, but I don't think you can do that and still use templated variables in them... However, you can do it with .Files.Get to read the file content as a string and then pass that into tpl to put it through the templating engine, as @Oleg Mykolaichenko suggests in https://stackoverflow.com/a/52009992/9705485. I suggest everyone votes for his answer as it is the better solution. I'm only leaving my answer here because it explains why his suggestion is so good, and some people may prefer the less abstract approach.
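For reference, a minimal sketch of that .Files.Get + tpl approach applied to the files from the question (this assumes the JSON files live in a configmaps/ directory inside the chart, as described above; adjust the paths to match your layout):
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
data:
  a.json: {{ tpl (.Files.Get "configmaps/a.json") . | quote }}
  b.json: {{ tpl (.Files.Get "configmaps/b.json") . | quote }}
  c.json: {{ tpl (.Files.Get "configmaps/c.json") . | quote }}
Each file is read as a string, run through the templating engine by tpl (so {{ }} expressions inside the JSON are resolved), and quoted so the result is a valid YAML scalar.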