Helm Error converting YAML to JSON: yaml: line 20: did not find expected key

I don't really know what the error is here. It's a simple Helm deploy with a _helpers.tpl; it doesn't make sense and is probably a silly mistake. The code:
deploy.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
{{ include "metadata.name" . }}-deploy
spec:
selector:
matchLabels:
app: nginx
replicas: 2 # tells deployment to run 2 pods matching the template
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
vars: {{- include "envs.var" .Values.secret.data }}
_helpers.tpl
{{- define "envs.var"}}
{{- range $key := . }}
- name: {{ $key | upper | quote}}
  valueFrom:
    secretKeyRef:
      key: {{ $key | lower }}
      name: {{ $key }}-auth
{{- end }}
{{- end }}
values.yaml
secret:
  data:
    username: root
    password: test
the error
Error: YAML parse error on mychart/templates/deploy.yaml: error converting YAML to JSON: yaml: line 21: did not find expected key

Here the problem happens because of indentation: the output of the include is not indented to the right level for a container spec, and vars is not a valid container field anyway (env is). You can resolve it by updating the line to:
env: {{- include "envs.var" .Values.secret.data | nindent 12 }}
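Here nindent 12 starts the included output on a new line and indents every line by 12 spaces, so the rendered entries parse as a proper YAML list under env:. As a sketch of where the fixed line sits (assuming the container block is indented as in the file above, so the list entries land at column 12):

      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          env: {{- include "envs.var" .Values.secret.data | nindent 12 }}

If your file is nested differently, adjust the number passed to nindent so the entries line up under env:.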

The simplest way to resolve this kind of issue is to use tooling; these are mostly indentation problems, and the right tool makes them easy to spot. yaml-lint is one such tool:
npm install -g yaml-lint
PS E:\vsc-workspaces\grafana-1> yamllint .\grafana.yaml
× YAML Lint failed for C:/Users/mnadeem6/vsc-workspaces/grafana-1/grafana.yaml
× bad indentation of a mapping entry at line 137, column 11:
restartPolicy: Always
^
After fixing the indentation and running it again:
PS E:\vsc-workspaces\grafana-1> yamllint .\grafana.yaml
√ YAML Lint successful.
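Note that yaml-lint checks plain YAML, so for a Helm chart you would normally lint the rendered output rather than the templates themselves. A sketch, assuming the chart from the error message lives in ./mychart:

helm lint ./mychart
helm template ./mychart > rendered.yaml
yamllint rendered.yaml

helm lint catches chart-level problems, while linting the rendered manifest surfaces the same indentation errors the YAML parser complains about.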

Related

Can I use ne and eq in deployment.yaml like below?

I am trying to use ne and eq in deployment.yaml, but while templating with Helm I get the error below:
Error:YAML parse error on cdp/templates/cdp-deployment.yaml: error converting YAML to JSON: yaml: line 50: did not find expected key
{{- if (or (ne .Values.metadata.name "application-A") (eq .Values.metadata.name "application-B") )}}
ports:
  - containerPort: {{ .Values.service.port }}
envFrom:
  - configMapRef:
      name: {{ .Values.metadata.name }}
  - secretRef:
      name: {{ .Values.metadata.name }}
{{- end }}
Thank you in advance
There is no problem with this if statement. I wrote a demo to test it, and this block renders without any issue.
values.yaml
metadata:
  name: application-B
templates/cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  cfg: |-
    {{- if (or (ne .Values.metadata.name "application-A") (eq .Values.metadata.name "application-B") )}}
    ok
    {{- else }}
    notok
    {{- end }}
output
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-v32
  labels:
    helm.sh/chart: test-0.1.0
    app.kubernetes.io/name: test
    app.kubernetes.io/instance: test
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
data:
  cfg: |-
    ok
The line numbers in the error refer to the rendered output, not to your source file, so you should run the helm template --debug test . command to debug and see what the problem really is.
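If rendering the whole chart is too noisy, you can also render just the template the error points at; --debug prints the rendered YAML even when it fails to parse. A sketch, assuming the chart directory is ./cdp and using test as the release name:

helm template test ./cdp --debug --show-only templates/cdp-deployment.yaml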

Helm (jinja2): parse error "did not find expected key" at a non-existent further line

This is my first question ever on the internet. I've been helped much by reading other people's fixes but now it's time to humbly ask for help myself.
I get the following error from Helm (helm3 install ist-gw-t1 --dry-run):
Error: INSTALLATION FAILED: YAML parse error on istio-gateways/templates/app-name.yaml: error converting YAML to JSON: yaml: line 40: did not find expected key
However, the file is only 27 lines long! It used to be longer, but I removed the other Kubernetes resources to narrow down where the issue is.
Template file
{{- range .Values.ingressConfiguration.app-name }}
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: "{{ .name }}-tcp-8080"
namespace: {{ .namespace }}
labels:
app: {{ .name }}
chart: {{ $.Chart.Name }}-{{ $.Chart.Version | replace "+" "_" }}
release: {{ $.Release.Name }}
heritage: {{ $.Release.Service }}
spec:
hosts: # Is was line 40 before shortening the file.
- {{ .fqdn }}
gateways:
- {{ .name }}
{{ (.routing).type }}:
- route:
- destination:
port:
number: {{ (.services).servicePort2 }}
host: {{ (.services).serviceUrl2 }}
match:
port: "8080"
uri:
exact: {{ .fqdn }}
{{- end }}
values.yaml
networkPolicies:
ingressConfiguration:
  app-name:
    - namespace: 'namespace'
      name: 'app-name'
      ingressController: 'internal-istio-ingress-gateway' # ?
      fqdn: '48.characters.long'
      tls:
        credentialName: 'name-of-the-secret' # ?
        mode: 'SIMPLE' # ?
      serviceUrl1: 'foo' # ?
      servicePort1: '8080'
      routing:
        # Available routing types: http or tls
        # If the tls routing type is selected, the matchingSniHost (resp. rewriteURI/matchingURIs) has (resp. have) to be filled (resp. empty)
        type: http
        rewriteURI: ''
        matchingURIs: ['foo']
        matchingSniHost: []
    - services:
        serviceUrl2: "foo"
        servicePort2: "8080"
    - externalServices:
        mysql: 'bar'
Where does the error come from?
Why does Helm still report line 40 as problematic, even after shortening the file?
Can you recommend a Visual Studio Code extension that could have helped me? I have the following slightly relevant ones, but they do not have linting (or I do not know how to use it): YAML by Red Hat, and Kubernetes by Microsoft.
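The same debugging approach as in the previous answer applies here: the line numbers in the error refer to the rendered output, which you can inspect directly. A sketch, assuming the chart lives in ./istio-gateways and using the release name from the question:

helm template ist-gw-t1 ./istio-gateways --debug --show-only templates/app-name.yaml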

Range issue in go template in vault configuration in k8s

I don't know Go at all, but I need to use Go template syntax in my Kubernetes config (where HashiCorp Vault is configured). What I'm trying to do is modify a file in order to change its format. The source looks like this:
data: map[key1:value1]
metadata: map[created_time:2021-10-06T21:02:18.41643371Z deletion_time: destroyed:false version:1]
The part of the Kubernetes config where the Go template is used to format the file is here:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: ${REPLICAS}
  selector:
    matchLabels:
      component: test
  template:
    metadata:
      labels:
        component: test
      annotations:
        vault.hashicorp.com/agent-inject: 'true'
        vault.hashicorp.com/agent-inject-status: 'update'
        vault.hashicorp.com/role: 'test'
        vault.hashicorp.com/agent-inject-secret-config: 'secret/data/test/config'
        vault.hashicorp.com/agent-inject-template-config: |
          {{- with secret "secret/data/test/config" -}}
          {{ range $k, $v := .Data.data }}
          export {{ $k }}={{ $v | quote }}
          {{ end }}
          {{- end}}
    spec:
      serviceAccountName: test
      containers:
        - name: test
          image: ${IMAGE}
          ports:
            - containerPort: 3000
But the error I'm getting is:
runtime error encountered: error="template server: (dynamic): parse: template: :2: unexpected "," in range"
EDIT:
To deploy Vault on k8s I'm using the Vault Helm chart.
From what I can see, you have environment variables in your YAML file (${REPLICAS}, ${IMAGE}), which makes me think that you are using something like cat file.yml | envsubst | kubectl apply --wait=true -f - in order to replace those env vars with their real values.
The issue with this is that $k and $v are also being replaced with '' (since you do not have those env vars in your system), which breaks the Go template before Vault ever renders it.
One ugly but effective solution is to export k='$k' and export v='$v' (single quotes, so each variable expands to its own literal name), which will generate your YAML file correctly.
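A less fragile alternative, assuming GNU gettext's envsubst, is to restrict substitution to the variables you actually intend to replace, so the $k and $v inside the Vault template are left untouched:

# Only ${REPLICAS} and ${IMAGE} are substituted; every other $... stays as-is
cat file.yml | envsubst '${REPLICAS} ${IMAGE}' | kubectl apply --wait=true -f -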

In a Helm Chart, how can I block the upgrade until a deployment in it is complete?

I am using Rancher Pipelines and catalogs to run Helm Charts like this:
.rancher-pipeline.yml
stages:
  - name: Deploy app-web
    steps:
      - applyAppConfig:
          catalogTemplate: cattle-global-data:chart-web-server
          version: 0.4.0
          name: ${CICD_GIT_REPO_NAME}-${CICD_GIT_BRANCH}-serv
          targetNamespace: ${CICD_GIT_REPO_NAME}
          answers:
            pipeline.sequence: ${CICD_EXECUTION_SEQUENCE}
          ...
  - name: Another chart needs to wait until the previous one succeeds
    ...
And in the chart-web-server app, it has a deployment:
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-dpy
  labels:
    {{- include "labels" . | nindent 4 }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Release.Name }}
      {{- include "labels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
        {{- include "labels" . | nindent 8 }}
    spec:
      containers:
        - name: "web-server-{{ include "numericSafe" .Values.git.commitID }}"
          image: "{{ .Values.harbor.host }}/{{ .Values.web.imageTag }}"
          imagePullPolicy: Always
          env:
            ...
          ports:
            - containerPort: {{ .Values.web.port }}
              protocol: TCP
          resources:
            {{- .Values.resources | toYaml | nindent 12 }}
Now, I need the pipeline to be blocked until the deployment is upgraded since I want to do some server testing in the following stages.
My idea is to use a Helm hook: if I can create a Job that hooks post-install and post-upgrade and waits for the deployment to complete, I can then block the whole pipeline until the deployment (a web server) is updated.
Does this idea work? If so, how can I write such a blocking and detecting Job?
This does not appear to be supported, from what I can find of Rancher's code. It appears they just shell out to helm upgrade; it would need to use the --wait mode.
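If you do go the hook route described in the question, a minimal sketch of such a Job could look like the following. It is only a sketch: the bitnami/kubectl image, the 300s timeout, and the {{ .Release.Name }}-waiter ServiceAccount (which needs RBAC permission to get and watch Deployments) are assumptions, not part of the chart above.

# templates/wait-for-rollout-job.yaml (hypothetical)
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-wait-for-rollout
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-weight": "5"
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  backoffLimit: 0
  template:
    spec:
      serviceAccountName: {{ .Release.Name }}-waiter  # assumed; must be allowed to get/watch Deployments
      restartPolicy: Never
      containers:
        - name: wait
          image: bitnami/kubectl:latest
          command:
            - kubectl
            - rollout
            - status
            - deployment/{{ .Release.Name }}-dpy
            - --timeout=300s

Because Helm waits for hook Jobs to complete, helm install/upgrade (and therefore a pipeline step that shells out to it) blocks until kubectl rollout status reports the Deployment as rolled out.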

Conftest Exception Rule Fails with Kustomization & Helm

In one of my projects I have several k8s resources which are built, composed and packaged using Helm & Kustomize. I wrote a few OPA tests using Conftest, where one of the checks is to avoid running containers as root. Here is my deployment.yaml in my base folder:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: {{ .Values.app.namespace }}
  labels:
    name: {{ .Values.app.name }}
    #app: {{ .Values.app.name }}
    component: {{ .Values.plantSimulatorService.component }}
    part-of: {{ .Values.app.name }}
    managed-by: helm
    instance: {{ .Values.app.name }}
    version: {{ .Values.app.version }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.app.name }}
  replicas: 1
  template:
    metadata:
      labels:
        app: {{ .Values.app.name }}
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
        runAsNonRoot: true
      containers:
        - name: {{ .Values.app.name }}
          image: {{ .Values.plantSimulatorService.image.repository }}:{{ .Values.plantSimulatorService.image.tag }}
          ports:
            - containerPort: {{ .Values.plantSimulatorService.ports.containerPort }} # Get this value from ConfigMap
I then have a patch file (flux-patch-prod.yaml) in my overlays folder that looks like this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    fluxPatchFile: prodPatchFile
  annotations:
    flux.weave.works/locked: "true"
    flux.weave.works/locked_msg: Lock deployment in production
    flux.weave.works/locked_user: Joesan <github.com/joesan>
  name: plant-simulator-prod
  namespace: {{ .Values.app.namespace }}
I have now written the Conftest policy in my base.rego file, which looks like this:
# Check container is not run as root
deny_run_as_root[msg] {
  kubernetes.is_deployment
  not input.spec.template.spec.securityContext.runAsNonRoot
  msg = sprintf("Containers must not run as root in Deployment %s", [name])
}

exception[rules] {
  kubernetes.is_deployment
  input.metadata.name == "plant-simulator-prod"
  rules := ["run_as_root"]
}
But when I ran them (I have the helm-conftest plugin installed), I got the following error:
FAIL - Containers must not run as root in Deployment plant-simulator-prod
90 tests, 80 passed, 3 warnings, 7 failures
Error: plugin "conftest" exited with error
I have no idea how to get this working. I do not want to end up copying the contents of deployment.yaml back into flux-patch-prod.yaml, as that would defeat the whole purpose of using Kustomize in the first place. Any idea how to get this fixed? I have been stuck on this issue since yesterday!
I managed to get this fixed, but the error messages being thrown were not that helpful. Maybe it gets better with the next release versions of Conftest.
So here is what I had to do:
I went to the https://play.openpolicyagent.org/ playground to test out my files. As I copied my rego rules over there, I noticed the following error message:
policy.rego:25: rego_unsafe_var_error: var msg is unsafe
That made me suspicious, and as I paid closer attention to the line to see what the actual problem was, I had to change:
msg = sprintf("Containers must not run as root in Deployment %s", [name])
To:
msg = sprintf("Containers must not run as root in Deployment %s", [input.name])
And it worked as expected! A silly mistake, but the error messages from Conftest were not that useful for figuring it out at first glance.
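Separately from the helm-conftest plugin, it can help to run the policies directly against the rendered chart to see exactly what Conftest is evaluating. A sketch, assuming the policies live in ./policy and the chart is in the current directory:

helm template . | conftest test --policy ./policy -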