I am trying to use ne and eq in deployment.yaml, but while templating the Helm chart I get the error below:
Error: YAML parse error on cdp/templates/cdp-deployment.yaml: error converting YAML to JSON: yaml: line 50: did not find expected key
{{- if (or (ne .Values.metadata.name "application-A") (eq .Values.metadata.name "application-B") )}}
ports:
  - containerPort: {{ .Values.service.port }}
envFrom:
  - configMapRef:
      name: {{ .Values.metadata.name }}
  - secretRef:
      name: {{ .Values.metadata.name }}
{{- end }}
Thank you in advance
There is no problem with this if statement. I wrote a small demo to test it, and this block renders fine.
values.yaml
metadata:
  name: application-B
templates/cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  cfg: |-
    {{- if (or (ne .Values.metadata.name "application-A") (eq .Values.metadata.name "application-B") )}}
    ok
    {{- else }}
    notok
    {{- end }}
output
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-v32
  labels:
    helm.sh/chart: test-0.1.0
    app.kubernetes.io/name: test
    app.kubernetes.io/instance: test
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
data:
  cfg: |-
    ok
The line numbers in the error refer to the rendered output, not to your template source, so you should run the helm template --debug test . command to see what the actual problem is.
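For example (assuming your chart directory is named cdp, matching the error message; adjust the release name and path to your own chart), you can render the templates locally and number the output to see what actually sits at line 50:
helm template cdp ./cdp --debug
helm template cdp ./cdp | cat -n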
Related
I need the installation of this resource file to be conditionally applied based on the istioEnabled flag from values.yaml.
virtualService.yaml
  {{ if .Values.istioEnabled }}
  apiVersion: networking.istio.io/v1alpha3
  kind: DestinationRule
  metadata:
    name: {{ .Values.istio.destinationRule.name }}
    namespace: {{ .Values.namespace }}
  spec:
    host:
      - {{ .Values.service.name }}
    trafficPolicy:
      connectionPool:
        tcp:
          maxConnections: {{ .Values.istio.destinationRule.maxConnections }}
      loadBalancer:
        simple: {{ .Values.istio.destinationRule.loadBalancer }}
    subsets:
      - name: v1
        labels: {{ toYaml .Values.app.labels | indent 10 }}
  {{ end }}
Values.yaml
namespace: default
app:
  name: backend-api
  labels: |+
    app: backend-api
service:
  name: backend-service
istioEnabled: "true"
istio:
  destinationRule:
    name: backend-destination-rule
    loadBalancer: RANDOM
  virtualService:
    name: backend-virtual
    timeout: 5s
    retries: 3
    perTryTimeout: 3s
This file causes an error while installing the Helm chart. The error is:
Error: INSTALLATION FAILED: YAML parse error on backend-chart/templates/virtualservice.yaml: error converting YAML to JSON: yaml: line 2: mapping values are not allowed in this context
It's caused by the leading two spaces in your file:
  {{ if eq 1 1 }}
  apiVersion: v1
  kind: Kaboom
  {{ end }}
produces the same error:
Error: YAML parse error on helm0/templates/debug.yaml: error converting YAML to JSON: yaml: line 2: mapping values are not allowed in this context
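Removing the leading indentation so the rendered manifest starts at the left margin fixes it. A minimal sketch of the corrected template header, using the same values as above ({{- also trims the blank line a plain {{ if }} would leave):
{{- if .Values.istioEnabled }}
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: {{ .Values.istio.destinationRule.name }}
  namespace: {{ .Values.namespace }}
spec:
  # ... rest of the spec, unchanged apart from the removed leading spaces ...
{{- end }}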
While deploying a Kubernetes application, I want to check whether a resource is already present; if so, it should not be rendered. To achieve this behaviour, Helm's lookup function is used. But the lookup seems to always come back empty while deploying (not a dry run). Any ideas what I am doing wrong?
---
{{- if not (lookup "v1" "ServiceAccount" "my-namespace" "my-sa") }}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Chart.Name }}-{{ .Values.environment }}
  namespace: {{ .Values.namespace }}
  labels:
    app: {{ $.Chart.Name }}
    environment: {{ .Values.environment }}
  annotations:
    "helm.sh/resource-policy": keep
    iam.gke.io/gcp-service-account: "{{ .Chart.Name }}-{{ .Values.environment }}#{{ .Values.gcpProjectId }}.iam.gserviceaccount.com"
{{- end }}
Running the corresponding kubectl command lists the expected service account:
kubectl get ServiceAccount my-sa -n my-namespace
helm version: 3.5.4
I think you cannot use this if-statement to validate what you want.
The lookup function returns the object(s) it finds, or an empty value when nothing matches. So if you want to validate that no ServiceAccount with the properties you specified exists, you should check that the returned result is empty.
Try something like:
---
{{ if eq (len (lookup "v1" "ServiceAccount" "my-namespace" "my-sa")) 0 }}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Chart.Name }}-{{ .Values.environment }}
  namespace: {{ .Values.namespace }}
  labels:
    app: {{ $.Chart.Name }}
    environment: {{ .Values.environment }}
  annotations:
    "helm.sh/resource-policy": keep
    iam.gke.io/gcp-service-account: "{{ .Chart.Name }}-{{ .Values.environment }}#{{ .Values.gcpProjectId }}.iam.gserviceaccount.com"
{{- end }}
see: https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function
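Equivalently, you could test the result with Sprig's empty function (a sketch, wrapping the same ServiceAccount manifest as above):
{{- if empty (lookup "v1" "ServiceAccount" "my-namespace" "my-sa") }}
# ... ServiceAccount manifest as above ...
{{- end }}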
I'm running a migration Job as a pre-install hook, so I also created a Secret with DB values as a pre-install hook with a lower weight (it should run before the migration), and everything works fine: both the secret and the migration succeed. The problem is that the secret is deleted afterwards, which causes the regular pods to fail because they can't find it, and I can't figure out why.
apiVersion: v1
kind: Secret
metadata:
  namespace: {{ .Release.Namespace }}
  labels:
    app: {{ .Values.secrets.name }}
    chart: {{ .Values.secrets.name }}
  name: {{ .Values.secrets.name }}
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-5"
type: Opaque
data:
{{- range $key, $val := .Values.secrets.values }}
  {{ $key }}: {{ $val }}
{{- end}}
This is what the migration job looks like:
kind: Job
metadata:
  namespace: {{ .Release.Namespace }}
  labels:
    app: {{ .Values.migration.name }}
    chart: {{ .Values.migration.name }}
  name: {{ .Values.migration.name }}
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-1"
    "helm.sh/hook-delete-policy": hook-succeeded,hook-failed
spec:
  backoffLimit: 4
  template:
    metadata:
      labels:
        app: {{ .Values.migration.name }}
        release: {{ .Values.migration.name }}
    spec:
      containers:
        #other config container values
          env:
            - name: APP_ROLE
              value: {{ .Values.migration.role | quote }}
          envFrom:
            - secretRef:
                name: {{ .Values.secrets.name }}
      restartPolicy: Never
You're using chart hooks in a way that's not really intended.
Have a look at the official Helm docs for chart hooks: https://helm.sh/docs/topics/charts_hooks/
Scroll to the very bottom, to "Hook Deletion Policies", and you'll read:
If no hook deletion policy annotation is specified, the before-hook-creation behavior applies by default.
What happens is: Helm runs the hook that creates the secret, the hook succeeds, and when it goes on to run the next hook (your migration) the secret is deleted again before that hook executes.
Hooks are not intended to create resources that are meant to stay around. You could try to hack your way around it by setting a hook-delete-policy of hook-failed on the secret, but I'm not really sure what the outcome would be.
Ideally, you'd run your app's migration job in an init container of your app instead. That way you create the secret normally, without a hook, and both the init container and the app can reuse it; a minimal sketch of that pattern follows.
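For illustration, a minimal sketch of that pattern; the Deployment name, labels, image, and migration command are placeholders (not from the question), and the secret is assumed to be created as a regular, non-hook resource:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                                   # placeholder name
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      initContainers:
        - name: migrate
          image: example/my-app:latest           # placeholder image
          command: ["sh", "-c", "./run-migrations"]  # placeholder migration command
          envFrom:
            - secretRef:
                name: {{ .Values.secrets.name }}
      containers:
        - name: app
          image: example/my-app:latest           # placeholder image
          envFrom:
            - secretRef:
                name: {{ .Values.secrets.name }}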
I don't really know what the error is here; it's a simple Helm deploy with a _helpers.tpl. It doesn't make sense and is probably a silly mistake. The code:
deploy.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: {{ include "metadata.name" . }}-deploy
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          vars: {{- include "envs.var" .Values.secret.data }}
_helpers.tpl
{{- define "envs.var"}}
{{- range $key := . }}
- name: {{ $key | upper | quote}}
valueFrom:
secretKeyRef:
key: {{ $key | lower }}
name: {{ $key }}-auth
{{- end }}
{{- end }}
values.yaml
secret:
  data:
    username: root
    password: test
The error:
Error: YAML parse error on mychart/templates/deploy.yaml: error converting YAML to JSON: yaml: line 21: did not find expected key
This problem happens because of indentation. You can resolve it by updating the line to:
env: {{- include "envs.var" .Values.secret.data | nindent 12 }}
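For reference, a sketch of how the corrected container section could look with the deployment indented as above (the nindent value has to match your own indentation):
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          env: {{- include "envs.var" .Values.secret.data | nindent 12 }}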
The simplest way to resolve this kind of issue is to use tooling. These are mostly indentation problems, and they can be caught very easily with the right tool; yaml-lint is one such tool:
npm install -g yaml-lint
PS E:\vsc-workspaces\grafana-1> yamllint .\grafana.yaml
× YAML Lint failed for C:/Users/mnadeem6/vsc-workspaces/grafana-1/grafana.yaml
× bad indentation of a mapping entry at line 137, column 11:
          restartPolicy: Always
          ^
After fixing the indentation, the same command succeeds:
PS E:\vsc-workspaces\grafana-1> yamllint .\grafana.yaml
√ YAML Lint successful.
I am trying to set the Pod name from name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod in the deployment below.
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ template "project1234.name" . }}
    chart: {{ template "project1234.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  name: {{ template "project1234.module5678.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "project1234.name" . }}
  template:
    metadata:
      labels:
        app: {{ template "project1234.name" . }}
    spec:
      containers:
        - image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
          name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod
          ports:
            - containerPort: 1234
      imagePullSecrets:
        - name: {{ .Values.image.pullSecret }}
I am expecting the pod name to be:
pod/project1234-module5678-pod
Instead, the resulting Pod name is:
pod/chartname-project1234-module5678-dc7db787-skqvv
...where (in my understanding):
chartname is from: helm install --name chartname -f values.yaml .
project1234 is from:
# Chart.yaml
apiVersion: v1
appVersion: "1.0"
description: project1234 Helm chart for Kubernetes
name: project1234
version: 0.1.0
module5678 is from:
# values.yaml
rbac:
  create: true
serviceAccounts:
  module5678:
    create: true
    name:
image:
  name: <image location>
  tag: 1.5
  pullSecret: <pull secret>
gitlab:
  secretName: <secret name>
  username: foo
  password: bar
module5678:
  enabled: true
  name: module5678
ingress:
  enabled: true
replicaCount: 1
resources: {}
I've tried changing name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod to a plain string value like "podname1234" and it isn't used. I even tried removing the name setting entirely and the resulting Pod name remains the same.
Pods created from a Deployment always have a generated name based on the Deployment's name (and also the name of the intermediate ReplicaSet, if you go off and look for it). You can't override it.
Given the YAML you've shown, I'd expect that this fragment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "project1234.module5678.fullname" . }}
expands out to a Deployment name of chartname-project1234-module5678; the remaining bits are added in by the ReplicaSet and then the Pod itself.
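The chart's helper isn't shown in the question, but a typical fullname helper looks something like this (hypothetical; your _helpers.tpl may differ), which is where the release-name prefix comes from:
{{- define "project1234.module5678.fullname" -}}
{{- printf "%s-%s-%s" .Release.Name .Chart.Name .Values.module5678.name | trunc 63 | trimSuffix "-" -}}
{{- end -}}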
If you do look up the Pod and kubectl describe pod chartname-project1234-module5678-dc7db787-skqvv, you will probably see that it has a single container with the expected name project1234-module5678-pod. Pretty much the only time you need the container name is when you run kubectl logs (or, more rarely, kubectl exec) against a multi-container pod; in that case you'll appreciate having a shorter name, and since container names are always scoped to the specific pod they appear in, there's nothing wrong with using a short fixed name here:
spec:
  containers:
    - name: container
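For example, to read that container's logs from the generated pod name shown above:
kubectl logs chartname-project1234-module5678-dc7db787-skqvv -c container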