Slice in helm no matches found - kubernetes

I have the following define definition in helm:
{{- define "svc.envVars" -}}
{{- range .Values.envVars.withSecret }}
- name: {{ .key }}
  valueFrom:
    secretKeyRef:
      name: {{ .secretName }}
      key: {{ .secretKey | quote }}
{{- end }}
{{- range .Values.envVars.withoutSecret }}
- name: {{ .name }}
  value: {{ .value | quote }}
{{- end }}
{{- end }}
and I am going to use it in deployment.yaml
      containers:
        - name: {{ .Release.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          {{- if .Values.envVars.enabled }}
          env:
            {{- include "svc.envVars" . | indent 10 }}
          {{- end }}
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
in values.yaml it is defined as follows:
envVars:
  enabled: false
  withSecret: []
  withoutSecret: []
then I tried to render:
helm template --debug user-svc \
--set image.tag=0.1.0 \
--set image.repository=user-svc \
--set envVars.enabled=true \
--set envVars.withSecret[0].key=POSTGRES_URL,envVars.withSecret[0].secretName=postgres_name,envVars.withSecret[0].secretKey=postgres_pw \
--set envVars.withSecret[1].key=MYSQL_URL,envVars.withSecret[1].secretName=mysql_name,envVars.withSecret[1].secretKey=mysql_pw \
./svc
it shows me:
zsh: no matches found: envVars.withSecret[0].key=POSTGRES_URL,envVars.withSecret[0].secretName=postgres_name,envVars.withSecret[0].secretKey=postgres_pw
When I set the variables manually in values.yaml:
envVars:
  enabled: false
  withSecret:
    - key: POSTGRES_URL
      secretName: postgres_name
      secretKey: postgres_pw
    - key: MYSQL_URL
      secretName: mysql_name
      secretKey: mysql_pw
  withoutSecret:
    - name: NOT_SECRET
      value: "Value of not secret"
then render it with:
helm template --debug user-svc \
--set image.tag=0.1.0 \
--set image.repository=user-svc \
--set envVars.enabled=true \
./svc
then it works as expected.
What am I doing wrong?

I had the same issue. It happens because '[' and ']' are interpreted by zsh as glob characters.
You can use noglob to disable globbing, so '[' and ']' are not interpreted:
noglob helm template --set lorem[0].ipsum=1337
In your example:
noglob helm template --debug user-svc \
--set image.tag=0.1.0 \
--set image.repository=user-svc \
--set envVars.enabled=true \
--set envVars.withSecret[0].key=POSTGRES_URL,envVars.withSecret[0].secretName=postgres_name,envVars.withSecret[0].secretKey=postgres_pw \
--set envVars.withSecret[1].key=MYSQL_URL,envVars.withSecret[1].secretName=mysql_name,envVars.withSecret[1].secretKey=mysql_pw \
./svc
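Alternatively (not part of the original answer), quoting the --set values should also stop zsh from glob-expanding the brackets, with the same effect as noglob:
helm template --debug user-svc \
--set image.tag=0.1.0 \
--set image.repository=user-svc \
--set envVars.enabled=true \
--set 'envVars.withSecret[0].key=POSTGRES_URL,envVars.withSecret[0].secretName=postgres_name,envVars.withSecret[0].secretKey=postgres_pw' \
--set 'envVars.withSecret[1].key=MYSQL_URL,envVars.withSecret[1].secretName=mysql_name,envVars.withSecret[1].secretKey=mysql_pw' \
./svc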
Sources:
Helm - zsh: no matches found: imagePullSecrets[0].name=regcred

Related

How to add a PersistentVolumeClaim to a deployment running GitLab AutoDevops?

What am I trying to achieve?
We are using a self-hosted GitLab instance and use GitLab AutoDevops to deploy our projects to a Kubernetes cluster. At the time of writing, we are only using one node within the cluster. For one of our projects it is important that the built application (i.e. the pod(s)) is able to access (read only) files stored on the Kubernetes cluster's node itself.
What have I tried?
Created a (hostPath) PersistentVolume (PV) on our cluster
Created a PersistentVolumeClaim (PVC) on our cluster (named "test-api-claim")
Now GitLab AutoDevops uses a default helm chart to deploy the applications. In order to modify its behavior, I've added this chart to the project's repository (GitLab AutoDevops automatically uses the chart in a project's ./chart directory if found). So my line of thinking was to modify the chart so that the deployed pods use the PV and PVC which I created manually on the cluster.
Therefore I modified the deployment.yaml file that can be found here. As you can see in the following code-snippet, I have added the volumeMounts & volumes keys (not present in the default/original file). Scroll to the end of the snippet to see the added keys.
{{- if not .Values.application.initializeCommand -}}
apiVersion: {{ default "extensions/v1beta1" .Values.deploymentApiVersion }}
kind: Deployment
metadata:
  name: {{ template "trackableappname" . }}
  annotations:
    {{ if .Values.gitlab.app }}app.gitlab.com/app: {{ .Values.gitlab.app | quote }}{{ end }}
    {{ if .Values.gitlab.env }}app.gitlab.com/env: {{ .Values.gitlab.env | quote }}{{ end }}
  labels:
    app: {{ template "appname" . }}
    track: "{{ .Values.application.track }}"
    tier: "{{ .Values.application.tier }}"
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
{{- if or .Values.enableSelector (eq (default "extensions/v1beta1" .Values.deploymentApiVersion) "apps/v1") }}
  selector:
    matchLabels:
      app: {{ template "appname" . }}
      track: "{{ .Values.application.track }}"
      tier: "{{ .Values.application.tier }}"
      release: {{ .Release.Name }}
{{- end }}
  replicas: {{ .Values.replicaCount }}
{{- if .Values.strategyType }}
  strategy:
    type: {{ .Values.strategyType | quote }}
{{- end }}
  template:
    metadata:
      annotations:
        checksum/application-secrets: "{{ .Values.application.secretChecksum }}"
        {{ if .Values.gitlab.app }}app.gitlab.com/app: {{ .Values.gitlab.app | quote }}{{ end }}
        {{ if .Values.gitlab.env }}app.gitlab.com/env: {{ .Values.gitlab.env | quote }}{{ end }}
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
      labels:
        app: {{ template "appname" . }}
        track: "{{ .Values.application.track }}"
        tier: "{{ .Values.application.tier }}"
        release: {{ .Release.Name }}
    spec:
      imagePullSecrets:
{{ toYaml .Values.image.secrets | indent 10 }}
      containers:
      - name: {{ .Chart.Name }}
        image: {{ template "imagename" . }}
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        {{- if .Values.application.secretName }}
        envFrom:
        - secretRef:
            name: {{ .Values.application.secretName }}
        {{- end }}
        env:
{{- if .Values.postgresql.managed }}
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: app-postgres
              key: username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-postgres
              key: password
        - name: POSTGRES_HOST
          valueFrom:
            secretKeyRef:
              name: app-postgres
              key: privateIP
{{- end }}
        - name: DATABASE_URL
          value: {{ .Values.application.database_url | quote }}
        - name: GITLAB_ENVIRONMENT_NAME
          value: {{ .Values.gitlab.envName | quote }}
        - name: GITLAB_ENVIRONMENT_URL
          value: {{ .Values.gitlab.envURL | quote }}
        ports:
        - name: "{{ .Values.service.name }}"
          containerPort: {{ .Values.service.internalPort }}
        livenessProbe:
          {{- if eq .Values.livenessProbe.probeType "httpGet" }}
          httpGet:
            path: {{ .Values.livenessProbe.path }}
            scheme: {{ .Values.livenessProbe.scheme }}
            port: {{ .Values.service.internalPort }}
          {{- else if eq .Values.livenessProbe.probeType "tcpSocket" }}
          tcpSocket:
            port: {{ .Values.service.internalPort }}
          {{- else if eq .Values.livenessProbe.probeType "exec" }}
          exec:
            command:
{{ toYaml .Values.livenessProbe.command | indent 14 }}
          {{- end }}
          initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
          timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
        readinessProbe:
          {{- if eq .Values.readinessProbe.probeType "httpGet" }}
          httpGet:
            path: {{ .Values.readinessProbe.path }}
            scheme: {{ .Values.readinessProbe.scheme }}
            port: {{ .Values.service.internalPort }}
          {{- else if eq .Values.readinessProbe.probeType "tcpSocket" }}
          tcpSocket:
            port: {{ .Values.service.internalPort }}
          {{- else if eq .Values.readinessProbe.probeType "exec" }}
          exec:
            command:
{{ toYaml .Values.readinessProbe.command | indent 14 }}
          {{- end }}
          initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
          timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
        resources:
{{ toYaml .Values.resources | indent 12 }}
{{- end -}}
        volumeMounts:
          - mountPath: /data
            name: test-pvc
      volumes:
        - name: test-pvc
          persistentVolumeClaim:
            claimName: test-api-claim
What is the problem?
Now when I trigger the Pipeline to deploy the application (using AutoDevops with my modified helm chart), I am getting this error:
Error: YAML parse error on auto-deploy-app/templates/deployment.yaml: error converting YAML to JSON: yaml: line 71: did not find expected key
Line 71 in the script refers to the valueFrom.secretKeyRef.name in the yaml:
- name: POSTGRES_HOST
  valueFrom:
    secretKeyRef:
      name: app-postgres
      key: privateIP
The weird thing is that when I delete the volumes and volumeMounts keys, it works as expected (and the valueFrom.secretKeyRef.name is still present and causes no trouble).
I am not using tabs in the yaml file and I double checked the indentation.
Two questions
Could there be something wrong with my yaml?
Does anyone know of another solution to achieve my desired behavior? (adding PVC to the deployment so that pods actually use it?)
General information
We use GitLab EE 13.12.11
For auto-deploy-image (which provides the helm chart I am referring to) we use version 1.0.7
Thanks in advance and have a nice day!
It seems that adding persistence is now supported in the default helm chart.
Check the pvc.yaml and deployment.yaml.
Given that, it should be enough to edit the values in .gitlab/auto-deploy-values.yaml to meet your needs. Check the defaults in values.yaml for more context.
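As a rough sketch of what that could look like, note that the key names below are assumptions based on the chart's values.yaml rather than something stated above, so verify them against the chart version you actually use:
# .gitlab/auto-deploy-values.yaml (illustrative only; key names may differ per chart version)
persistence:
  enabled: true
  volumes:
    - name: data
      mount:
        path: /data              # where the container should see the files
      claim:
        accessMode: ReadOnlyMany # read-only, per the requirement described above
        size: 1Gi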

Helm Chart configmap templating with toYaml

I have a values.yaml file that takes in a list of mountPaths in this format:
global:
  mountPath:
    hello:
      config: /etc/hello/hello.conf
      node: /opt/hello/node.jks
      key: /opt/hello/key.jks
      cert: /opt/hello/cert.jks
I want the resulting rendered template to be
volumeMounts:
  - name: config
    mountPath: /etc/hello/hello.conf
    subPath: config
  - name: node
    mountPath: /opt/hello/node.jks
    subPath: node
  - name: key
    mountPath: /opt/hello/key.jks
    subPath: key
  - name: cert
    mountPath: /opt/hello/cert.jks
    subPath: cert
How would I accomplish this? I tried the following in my deployment.yaml template file:
volumeMounts:
  {{- range $key, $value := pluck .Values.service_name .Values.global.mountPath.serviceName | first }}
  - name: {{ $key }}
    mountPath: $value
    subPath: {{ $key }}
  {{- end }}
This is the helm command I ran, but it doesn't work for me. How do I get to the format I want above based on this input?
helm upgrade --install \
--namespace ${NAMESPACE} \
--set service_name=hello \
--set namespace=${NAMESPACE} \
hello . \
-f values.yaml
Here is what I did:
volumeMounts:
  {{- range $key, $value := pluck .Values.service_name .Values.global.mountPath | first }}
  - name: {{ $key }}
    mountPath: {{ $value }}
    subPath: {{ $key }}
  {{- end }}
helm template --set service_name=hello [...] seems to render exactly what you want.
Notice I changed the line with mountPath field: $value -> {{ $value }},
and the line with range: .Values.global.mountPath.serviceName -> .Values.global.mountPath
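If you prefer not to go through pluck | first, a lookup with the built-in index function should behave the same way; this is a sketch, not something tested against this particular chart:
volumeMounts:
  {{- range $key, $value := index .Values.global.mountPath .Values.service_name }}
  - name: {{ $key }}
    mountPath: {{ $value }}
    subPath: {{ $key }}
  {{- end }}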

Error: UPGRADE FAILED: error validating "": error validating data: ValidationError(Ingress.spec.rules[0].http): missing required field "paths"

I am very new to using helm charts and not sure why I get this error when I try to install my helm chart. I am using --set with the helm install command to set the hostname at ingress.hosts[0].host. I do not understand why it says "paths" is missing, whereas "paths" is already present.
ingress.yaml
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "project.fullname" . -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    app.kubernetes.io/name: {{ include "project.name" . }}
    helm.sh/chart: {{ include "project.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
{{- if .Values.ingress.tls }}
  tls:
  {{- range .Values.ingress.tls }}
    - hosts:
      {{- range .hosts }}
        - {{ . | quote }}
      {{- end }}
      secretName: {{ .secretName }}
  {{- end }}
{{- end }}
  rules:
  {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
        {{- range .paths }}
          - path: {{ . }}
            backend:
              serviceName: {{ $fullName }}
              servicePort: http
        {{- end }}
  {{- end }}
{{- end }}
values.yaml
...
ingress:
  enabled: true
  hostname: some_hostname
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "180"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "180"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "180"
  hosts:
    - host: some_hostname
      paths: [/]
  tls:
    - secretName: some_secretname
      hosts:
        - some_hostname
resources: {}
...
Command used to install the chart:
helm upgrade --install $(PROJECT_NAME) --set ingress.hosts[0].host="${HOST_NAME} --set ingress.tls[0].hosts="{${HOST_NAME}}""
error:
Error: UPGRADE FAILED: error validating "": error validating data: ValidationError(Ingress.spec.rules[0].http): missing required field "paths" in io.k8s.api.extensions.v1beta1.HTTPIngressRuleValue
I had the same issue. If you define the host with --set, you also have to define the path with --set, even though it already matches the yaml. The reason is that --set on a list index (ingress.hosts[0]) replaces that whole list entry from values.yaml instead of merging with it, so the paths defined there get dropped. Like so,
helm upgrade --install $(PROJECT_NAME) --set ingress.hosts[0].host=${HOST_NAME} --set ingress.hosts[0].paths[0]=/
I haven't done tls yet so I am not sure if that has the same issue.
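The same list-replacement behavior would apply to ingress.tls, so if you set ingress.tls[0].hosts on the command line you would need to re-set its secretName too. A hedged sketch of the full command, where the chart path and the secret name are placeholders:
helm upgrade --install $(PROJECT_NAME) ./chart \
  --set ingress.hosts[0].host="${HOST_NAME}" \
  --set ingress.hosts[0].paths[0]=/ \
  --set ingress.tls[0].hosts[0]="${HOST_NAME}" \
  --set ingress.tls[0].secretName=some_secretname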

How to set environment variables in helm?

I have the following deployment definition:
...
      containers:
        - name: {{ .Release.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          {{ if .Values.env.enabled }}
          env:
            {{- range .Values.env.vars }}
            ?????What comes here?????
            {{- end }}
          {{ end }}
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
...
in the values.yaml, I have defined:
env:
  enabled: false
  vars: []
What I would like to do is set the environment variables dynamically via --set, for instance:
helm template user-svc \
--set image.tag=0.1.0 \
--set image.repository=user-svc \
--set env.enabled=true \
--set env.vars.POSTGRES_URL="jdbc:postgresql://localhost:5432/users" \
--set env.vars.POSTGRES_USER="dbuser" \
./svc
after rendering, it should show:
...
containers:
  - name: demo
    image: game.example/demo-game
    env:
      - name: POSTGRES_URL
        value: jdbc:postgresql://localhost:5432/users
...
and how to set the following option via --set:
- name: UI_PROPERTIES_FILE_NAME
  valueFrom:
    configMapKeyRef:
      name: game-demo
      key: ui_properties_file_name
You can access the --set option using .Values.
{{- if .Values.env.enabled }}
env:
  - name: {{ .Values.env.vars.POSTGRES_USER }}
    value: {{ .Values.env.vars.POSTGRES_URL }}
{{- end }}
Try the above.
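To render one entry per variable without hard-coding the names, a range over the map should also work; this is a sketch, assuming the variables are passed as a map under env.vars exactly as in the --set example above:
{{- if .Values.env.enabled }}
env:
  {{- range $name, $value := .Values.env.vars }}
  - name: {{ $name }}
    value: {{ $value | quote }}
  {{- end }}
{{- end }}
For the configMapKeyRef case, the usual approach is to describe those entries as a list in values.yaml (as in the first question above) and render them with valueFrom; --set can still populate such a list with the bracket syntax.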

How to pass dynamic arguments to a helm chart that runs a job

I'd like to allow our developers to pass dynamic arguments to a helm template (Kubernetes job). Currently my arguments in the helm template are somewhat static (apart from certain values) and look like this
Args:
  --arg1
  value1
  --arg2
  value2
  --sql-cmd
  select * from db
If I were to run a task using the docker container without Kubernetes, I would pass parameters like so:
docker run my-image --arg1 value1 --arg2 value2 --sql-cmd "select * from db"
Is there any way to templatize arguments in a helm chart in such a way that any number of arguments can be passed to the template?
For example:
cat values.yaml
...
arguments: --arg1 value1 --arg2 value2 --sql-cmd "select * from db"
...
or
cat values.yaml
...
arguments: --arg3 value3
...
I've tried a few approaches but was not successful. Here is one example:
Args:
  {{ range .Values.arguments }}
  {{ . }}
  {{ end }}
Yes. In values.yaml you need to give it an array instead of a space delimited string.
cat values.yaml
...
arguments: ['--arg3', 'value3', '--arg2', 'value2']
...
or
cat values.yaml
...
arguments:
  - --arg3
  - value3
  - --arg2
  - value2
...
and then, like you mentioned, in the template this should do it:
args:
  {{ range .Values.arguments }}
  - {{ . }}
  {{ end }}
If you want to override the arguments on the command line you can pass an array with --set like this:
--set arguments={--arg1, value1, --arg2, value2, --arg3, value3, ....}
In your values file define arguments as:
extraArgs:
  argument1: value1
  argument2: value2
  booleanArg1:
In your template do:
args:
  {{- range $key, $value := .Values.extraArgs }}
  {{- if $value }}
  - --{{ $key }}={{ $value }}
  {{- else }}
  - --{{ $key }}
  {{- end }}
  {{- end }}
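With the extraArgs values shown above, that loop should render roughly this (keys iterate in sorted order):
args:
  - --argument1=value1
  - --argument2=value2
  - --booleanArg1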
Rico's answer needed to be improved.
Using the previous example I've received errors like:
templates/deployment.yaml: error converting YAML to JSON: yaml
or
failed to get versionedObject: unable to convert unstructured object to apps/v1beta2, Kind=Deployment: cannot restore slice from string
This is my working setup, with commas in the elements
(the vertical format for the list is more readable):
cat values.yaml
...
arguments: [
  "--arg3,",
  "value3,",
  "--arg2,",
  "value2,",
]
...
and in the template this should do it:
args: [
  {{ range .Values.arguments }}
  {{ . }}
  {{ end }}
]
Because of some limitations, I had to work with split and use a delimiter. So in my case:
deployment.yaml:
{{- if .Values.deployment.args }}
args:
  {{- range (split " " .Values.deployment.args) }}
  - {{ . }}
  {{- end }}
{{- end }}
When using --set:
helm install --set deployment.args="--inspect server.js" ...
it renders as:
- args:
  - --inspect
  - server.js
The arguments format needs to be kept consistent in such cases.
Here is my case and it works fine.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.app.name }}
  labels:
    app: {{ .Values.app.name }}
    instance: test
spec:
  replicas: {{ .Values.master.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.app.name }}
      instance: test
  template:
    metadata:
      labels:
        app: {{ .Values.app.name }}
        instance: test
    spec:
      imagePullSecrets:
        - name: gcr-pull-secret
      containers:
        - name: {{ .Values.app.name }}
          image: {{ .Values.app.image }}
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          args:
            [
              "--users={{int .Values.cmd.users}}",
              "--spawn-rate={{int .Values.cmd.rate}}",
              "--host={{.Values.cmd.host}}",
              "--logfile={{.Values.cmd.logfile}}",
              "--{{.Values.cmd.role}}"
            ]
          ports:
            - containerPort: {{ .Values.container.port }}
          resources:
            requests:
              memory: {{ .Values.container.requests.memory }}
              cpu: {{ .Values.container.requests.cpu }}
            limits:
              memory: {{ .Values.container.limits.memory }}
              cpu: {{ .Values.container.limits.cpu }}
Unfortunately, the following mixed args format does not work within the container construct:
mycommand -ArgA valA --ArgB valB --ArgBool1 -ArgBool2 --ArgC=valC
The expected correct format for the above command is:
mycommand --ArgA=valA --ArgB=valB --ArgC=valC --ArgBool1 --ArgBool2
This can be achieved with the following constructs:
# Dockerfile last line
ENTRYPOINT ["mycommand"]
# deployment.yaml
containers:
  - name: {{ .Values.app.name }}
    image: {{ .Values.app.image }}
    args: [
      "--ArgA={{ .Values.cmd.ArgA }}",
      "--ArgB={{ .Values.cmd.ArgB }}",
      "--ArgC={{ .Values.cmd.ArgC }}",
      "--{{ .Values.cmd.ArgBool1 }}",
      "--{{ .Values.cmd.ArgBool2 }}" ]
# values.yaml
cmd:
  ArgA: valA
  ArgB: valB
  ArgC: valC
  ArgBool1: "ArgBool1"
  ArgBool2: "ArgBool2"
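With those values, the args in the rendered manifest should come out roughly as:
args: ["--ArgA=valA", "--ArgB=valB", "--ArgC=valC", "--ArgBool1", "--ArgBool2"]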
helm install --name "airflow" stable/airflow --set secrets.database=mydatabase,secrets.password=mypassword
So this is the helm chart in question: https://github.com/helm/charts/tree/master/stable/airflow
Now I want to override the default values secrets.database and secrets.password in the helm chart, so I use the --set argument, which takes key=value pairs separated by commas:
helm install --name "<name for your chart>" <chart> --set key0=value0,key1=value1,key2=value2,key3=value3
Did you try this?
{{ range .Values.arguments }}
{{ . | quote }}
{{ end }}
Acid R's key/value solution was the only thing that worked for me.
I ended up with this:
values.yaml
arguments:
  url1: 'http://something1.example.com'
  url2: 'http://something2.example.com'
  url3: 'http://something3.example.com'
  url4: 'http://something3.example.com'
And in my template:
args:
  {{- range $key, $value := .Values.arguments }}
  - --url={{ $value }}
  {{- end }}
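Given the four URLs above, that renders along these lines (map keys iterate in sorted order):
args:
  - --url=http://something1.example.com
  - --url=http://something2.example.com
  - --url=http://something3.example.com
  - --url=http://something3.example.com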