Helm not using values override? - kubernetes-helm

I'm using sub-charts. Here's my directory structure
/path/microservice-base-chart
/path/myApp
I have this values.yaml for my "base" (generic) chart
# Default region and repository
aws_region: us-east-1
repository: 012234567890.dkr.ecr.us-east-1.amazonaws.com

repositories:
  us-east-1: 01234567890.dkr.ecr.us-east-1.amazonaws.com
  eu-north-1: 98765432109.dkr.ecr.eu-north-1.amazonaws.com

image:
  name: ""
  version: ""
...and this in the base chart's templates/_helpers.yaml file
{{/*
Get the repository from the AWS region
*/}}
{{- define "microservice-base-chart.reponame" -}}
{{- $repo := index .Values.repositories .Values.aws_region | default .Values.repository }}
{{- printf "%s" $repo }}
{{- end }}
...and this in the base chart's templates/deployment.yaml file
apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      ...
      containers:
        - name: {{ .Values.image.name }}
          image: {{ include "microservice-base-chart.reponame" . }}/{{ .Values.image.name }}:{{ .Values.image.version }}
I have this in the Chart.yaml of a sub chart that uses the base chart.
dependencies:
  - alias: microservice-0
    name: microservice-base-chart
    version: "0.1.0"
    repository: file://../microservice-base-chart
...and this in the values.yaml of a sub chart
microservice-0:
  image:
    name: myApp
    version: 1.2.3
However, when I run this, where I set aws_region
$ helm install marcom-stats-svc microservice-chart/ \
    --set image.aws_region=eu-north-1 \
    --set microservice-0.image.version=2.0.0 \
    --dry-run --debug
I get this for the image name of the above deployment.yaml template
image: 01234567890.dkr.ecr.us-east-1.amazonaws.com/myApp:2.0.0
instead of the expected
image: 98765432109.dkr.ecr.eu-north-1.amazonaws.com/myApp:2.0.0
What am I missing? TIA
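For reference, a hedged guess at the override that was probably intended: with an aliased dependency, values meant for the subchart have to be scoped under the alias (microservice-0), and aws_region is a top-level key in the base chart's values rather than part of image. An untested sketch of what the --set flags would then look like:

$ helm install marcom-stats-svc microservice-chart/ \
    --set microservice-0.aws_region=eu-north-1 \
    --set microservice-0.image.version=2.0.0 \
    --dry-run --debug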


In a Helm template function, how can I iterate through a dictionary, and return the value of a key?

I have the following values.yaml file
# Default region and repository
aws_region: us-east-1
repository: 01234567890.dkr.ecr.us-east-1.amazonaws.com

repositories:
  us-east-1: 01234567890.dkr.ecr.us-east-1.amazonaws.com
  eu-north-1: 98765432109.dkr.ecr.eu-north-1.amazonaws.com

image:
  name: "ms-0"
...
I wrote a function to return the value from the repositories dictionary based on the key, which is an AWS region.
{{/*
Get the repository from the AWS region
*/}}
{{- define "microservice-base-chart.reponame" -}}
{{- $repo := default .Values.repository }}
{{- range $key, $value := .Values.repositories }}
{{- if .eq $key .Values.aws_region }}
{{- $repo = $value }}
{{- end }}
{{- end }}
{{- printf "%s" $repo }}
{{- end }}
And then I want to use the function in, say, my deployment.yaml template:
apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      ...
      containers:
        - name: {{ .Values.image.name }}
          image: {{ include "microservice-base-chart.reponame" . }}/{{ .Values.image.name }}:{{ .Values.image.version }}
But when I do
$ helm install ms-0 ./microservice-chart/ --dry-run --debug
I get
install.go:178: [debug] Original chart version: ""
install.go:195: [debug] CHART PATH: /path/helm/microservice-chart
Error: INSTALLATION FAILED: template: microservice-chart/charts/microservice-0/templates/deployment.yaml:36:20: executing "microservice-chart/charts/microservice-0/templates/deployment.yaml" at <include "microservice-base-chart.reponame" .>: error calling include: template: microservice-chart/charts/microservice-0/templates/_helpers.tpl:70:7: executing "microservice-base-chart.reponame" at <.eq>: can't evaluate field eq in type interface {}
helm.go:84: [debug] template: microservice-chart/charts/microservice-0/templates/deployment.yaml:36:20: executing "microservice-chart/charts/microservice-0/templates/deployment.yaml" at <include "microservice-base-chart.reponame" .>: error calling include: template: microservice-chart/charts/microservice-0/templates/_helpers.tpl:70:7: executing "microservice-base-chart.reponame" at <.eq>: can't evaluate field eq in type interface {}
INSTALLATION FAILED
main.newInstallCmd.func2
        helm.sh/helm/v3/cmd/helm/install.go:127
github.com/spf13/cobra.(*Command).execute
        github.com/spf13/cobra@v1.4.0/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/cobra@v1.4.0/command.go:974
github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/cobra@v1.4.0/command.go:902
main.main
        helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
        runtime/proc.go:250
runtime.goexit
        runtime/asm_amd64.s:1594
What am I doing wrong? TIA!
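For reference, a minimal sketch of the helper with the two template problems addressed: eq is a function, not a field (which is why <.eq> fails), and inside range the dot is rebound to the current element, so the root context has to be reached via $. The index-based helper from the first question above is an equivalent, shorter form.

{{- define "microservice-base-chart.reponame" -}}
{{- $repo := .Values.repository }}
{{- range $key, $value := .Values.repositories }}
{{- if eq $key $.Values.aws_region }}
{{- $repo = $value }}
{{- end }}
{{- end }}
{{- printf "%s" $repo }}
{{- end }}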

Helm - deep merge in containers field in chart with two deployments

I have a library chart:
# only part from it
containers:
  - name: {{ .Chart.Name }}
    {{- if .Values.config.command }}
    command: {{ .Values.config.command }}
    {{- end }}
    resources:
      {{- toYaml .Values.config.resources | nindent 10 }}
    {{- if .Values.config.containerPort }}
    ports:
      - containerPort: {{ .Values.config.containerPort }}
    {{- end }}
    envFrom:
      {{- if .Values.config.envFrom }}
      {{- toYaml .Values.config.envFrom | nindent 10 }}
      {{- end }}
...

# from the Common Helm Helper Chart
{{- define "common-chartlib.deployment" -}}
{{- include "common-chartlib.util.merge" (append . "common-chartlib.deployment.tpl") -}}
{{- end -}}
There is an application chart that contains two deployments that differ only in the command field value:
# values
command1: ["123"]
command2: ["456"]

# deployment1
spec:
  containers:
    - name: deployment1
      command: {{ .Values.config.command1 }}

# deployment2
spec:
  containers:
    - name: deployment1
      command: {{ .Values.config.command2 }}
If I run helm template I will get:
containers:
  - command:
      - 123
    name: backend
    # other fields like ports, envFrom, resources were removed
volumes:
  - name: backend-private-key
    secret:
      secretName: backend-private-key
As you can see, all fields except name and command were removed after merging.
Expected result:
containers:
  - command:
      - 123
    name: backend
    # other fields taken from the library chart like ports, envFrom, resources must NOT be removed
    ports:
      - containerPort: 8000
    envFrom:
      - configMapRef:
          name: backend
volumes:
  - name: backend-private-key
    secret:
      secretName: backend-private-key
Output of helm version:
version.BuildInfo{Version:"v3.8.0", GitCommit:"d14138609b01886f544b2025f5000351c9eb092e", GitTreeState:"clean", GoVersion:"go1.17.6"}
Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:31:32Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"darwin/amd64"}
Please help.
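One detail that may explain the behaviour: merge helpers of this kind are typically built on Sprig's merge/mergeOverwrite, and those functions replace lists wholesale instead of merging list elements, which is exactly what happens to containers here. A minimal sketch with hypothetical dict values (not the actual charts) that demonstrates it:

{{- $base := dict "containers" (list (dict "name" "backend" "ports" (list (dict "containerPort" 8000)))) }}
{{- $override := dict "containers" (list (dict "name" "backend" "command" (list "123"))) }}
{{/* the containers list is replaced wholesale, so ports from $base disappears */}}
{{ toYaml (mergeOverwrite $base $override) }}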

Helm Construct dynamic configmap from multiple configuration YAML file

I have 2 files as follows:
_config-dev.yaml

frontend:
  NODE_ENV: dev
  REACT_APP_API_URL: 'https://my-dev-apiurl/'
database:
  DB_USER: admin-dev
  DB_PASSWORD: password-dev

_config-stag.yaml

frontend:
  NODE_ENV: stag
  REACT_APP_API_URL: 'https://my-stag-api-url/'
database:
  DB_USER: admin-stag
  DB_PASSWORD: password-stag
myConfigMap.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: {{ .Release.Name }}-frontend
  namespace: {{ .Values.global.namespace }}
data:
  # Here I want to insert only the frontend data from the _config-dev.yaml file if {{ eq .Values.global.environment "dev" }}, like below
  NODE_ENV: dev
  REACT_APP_API_URL: 'https://my-dev-apiurl/'
  # and if {{ eq .Values.global.environment "stag" }}, I want to get the frontend values from _config-stag.yaml, like below
  NODE_ENV: stag
  REACT_APP_API_URL: 'https://my-stag-api-url/'
Can anyone figure out how to insert the data as described in the comments under data: in the myConfigMap.yaml file above?
My test project:
test
├── Chart.yaml
├── cfg
│   ├── _config-dev.yaml
│   └── _config-stag.yaml
├── templates
│   └── configmap.yaml
└── values.yaml
values.yaml
global:
  environment: dev
test/cfg/_config-dev.yaml
frontend:
  NODE_ENV: dev
  REACT_APP_API_URL: 'https://my-dev-apiurl/'
database:
  DB_USER: admin-dev
  DB_PASSWORD: password-dev
test/cfg/_config-stag.yaml
frontend:
  NODE_ENV: stag
  REACT_APP_API_URL: 'https://my-stag-api-url/'
database:
  DB_USER: admin-stag
  DB_PASSWORD: password-stag
test/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "test.fullname" . }}
data:
  {{- $data := .Files.Get "cfg/_config-stag.yaml" }}
  {{- if eq .Values.global.environment "dev" }}
  {{- $data = .Files.Get "cfg/_config-dev.yaml" }}
  {{- end }}
  {{- $cfg := fromYaml $data }}
  {{- range $k, $v := $cfg.frontend }}
  {{ $k }}: {{ $v }}
  {{- end }}
output
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  NODE_ENV: dev
  REACT_APP_API_URL: https://my-dev-api-url/
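For completeness, the staging variant should render the same way simply by overriding the environment value on the command line (release name assumed):

$ helm template test ./test --set global.environment=stag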

Helm3: Create .properties files recursively in Configmap

Below are files that I have:
users-values.yaml file:
users:
  - foo
  - baz
other-values.yaml file:
foo_engine=postgres
foo_url=some_url
foo_username=foofoo
baz_engine=postgres
baz_url=some_url
baz_username=bazbaz
config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-catalog
data:
  {{- range $user := .Values.users }}
  {{ . }}: |
    engine.name={{ printf ".Values.%s_engine" ($user) }}
    url={{ printf ".Values.%s_url" ($user) }}
    username={{ printf".Values.%s_username" ($user) }}
  {{- end }}
deployment-coordinator.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "{{ .Release.Name }}-coordinator"
  labels:
    app.kubernetes.io/name: "{{ .Release.Name }}-coordinator"
spec:
  replicas: 1
  ...
  template:
    metadata:
      labels:
        app.kubernetes.io/name: "{{ .Release.Name }}-coordinator"
    spec:
      volumes:
        - name: config
          configMap:
            name: test-catalog
      ...
      volumeMounts:
        - name: config
          mountPath: "/etc/config"
Then, I do a helm install test mychart.
When I exec into the pod and cd to /etc/config, I expect to see foo.properties and baz.properties files in there, with each file looking like:
foo.properties: |
  engine.name=postgres
  url=some_url
  username=foofoo

baz.properties: |
  engine.name=postgres
  url=some_url
  username=bazbaz
The answer from Pawel below solved the error I got previously:
unexpected bad character U+0022 '"' in command
But the files are still not created in the /etc/config directory.
So I was wondering if it's even possible to create the .properties files using a Helm range loop as in my config.yaml file above.
The reason I wanted to do it the way shown below is that I have more than 10 users to create .properties files for, not just foo and baz, so I thought it would be easier if I could loop over them (a rough sketch follows the block below).
data:
  {{- range $user := .Values.users }}
  {{ . }}: |
    engine.name={{ printf ".Values.%s_engine" ($user) }}
    url={{ printf ".Values.%s_url" ($user) }}
    username={{ printf".Values.%s_username" ($user) }}
  {{- end }}
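A rough sketch of what that loop might look like, assuming the per-user settings actually end up as top-level values (e.g. foo_engine: postgres, foo_url: some_url, and so on; the key=value lines shown above are not valid YAML, so that layout is an assumption). The lookup needs index with a computed key rather than a printf of a literal ".Values..." string, and the map key needs the .properties suffix so the mounted file names match:

data:
  {{- range $user := .Values.users }}
  {{ $user }}.properties: |
    engine.name={{ index $.Values (printf "%s_engine" $user) }}
    url={{ index $.Values (printf "%s_url" $user) }}
    username={{ index $.Values (printf "%s_username" $user) }}
  {{- end }}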

How to pass dynamic arguments to a helm chart that runs a job

I'd like to allow our developers to pass dynamic arguments to a Helm template (Kubernetes job). Currently my arguments in the Helm template are somewhat static (apart from certain values) and look like this:

Args:
  --arg1
  value1
  --arg2
  value2
  --sql-cmd
  select * from db
If I were running a task using the Docker container without Kubernetes, I would pass parameters like so:
docker run my-image --arg1 value1 --arg2 value2 --sql-cmd "select * from db"
Is there any way to templatize arguments in a Helm chart in such a way that any number of arguments could be passed to a template?
For example:
cat values.yaml
...
arguments: --arg1 value1 --arg2 value2 --sql-cmd "select * from db"
...
or
cat values.yaml
...
arguments: --arg3 value3
...
I've tried a few approaches but was not successful. Here is one example:
Args:
{{ range .Values.arguments }}
{{ . }}
{{ end }}
Yes. In values.yaml you need to give it an array instead of a space-delimited string.
cat values.yaml
...
arguments: ['--arg3', 'value3', '--arg2', 'value2']
...
or
cat values.yaml
...
arguments:
  - --arg3
  - value3
  - --arg2
  - value2
...
and then, like you mentioned, this in the template should do it:
args:
{{ range .Values.arguments }}
  - {{ . }}
{{ end }}
If you want to override the arguments on the command line you can pass an array with --set like this:
--set arguments={--arg1, value1, --arg2, value2, --arg3, value3, ....}
In your values file define arguments as:
extraArgs:
  argument1: value1
  argument2: value2
  booleanArg1:
In your template do:
args:
{{- range $key, $value := .Values.extraArgs }}
{{- if $value }}
  - --{{ $key }}={{ $value }}
{{- else }}
  - --{{ $key }}
{{- end }}
{{- end }}
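With the example extraArgs above (and booleanArg1 left empty), this should render roughly as:

args:
  - --argument1=value1
  - --argument2=value2
  - --booleanArg1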
Rico's answer needed to be improved.
Using the previous example, I received errors like:
templates/deployment.yaml: error converting YAML to JSON: yaml or
failed to get versionedObject: unable to convert unstructured object to apps/v1beta2, Kind=Deployment: cannot restore slice from string
This is my working setup, with a comma inside each element (the vertical format for the list is more readable):
cat values.yaml
...
arguments: [
  "--arg3,",
  "value3,",
  "--arg2,",
  "value2,",
]
...
...
and this in the template should do it:
args: [
  {{ range .Values.arguments }}
    {{ . }}
  {{ end }}
]
Because of some limitations, I had to work with split and use a delimiter, so in my case:
deployment.yaml:
{{- if .Values.deployment.args }}
args:
  {{- range (split " " .Values.deployment.args) }}
  - {{ . }}
  {{- end }}
{{- end }}
When using --set:
helm install --set deployment.args="--inspect server.js" ...
this results in:
- args:
    - --inspect
    - server.js
The arguments format needs to be kept consistent in such cases.
Here is my case, and it works fine.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.app.name }}
  labels:
    app: {{ .Values.app.name }}
    instance: test
spec:
  replicas: {{ .Values.master.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.app.name }}
      instance: test
  template:
    metadata:
      labels:
        app: {{ .Values.app.name }}
        instance: test
    spec:
      imagePullSecrets:
        - name: gcr-pull-secret
      containers:
        - name: {{ .Values.app.name }}
          image: {{ .Values.app.image }}
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          args:
            [
              "--users={{int .Values.cmd.users}}",
              "--spawn-rate={{int .Values.cmd.rate}}",
              "--host={{.Values.cmd.host}}",
              "--logfile={{.Values.cmd.logfile}}",
              "--{{.Values.cmd.role}}"
            ]
          ports:
            - containerPort: {{ .Values.container.port }}
          resources:
            requests:
              memory: {{ .Values.container.requests.memory }}
              cpu: {{ .Values.container.requests.cpu }}
            limits:
              memory: {{ .Values.container.limits.memory }}
              cpu: {{ .Values.container.limits.cpu }}
Unfortunately, the following mixed args format does not work within the container construct:
mycommand -ArgA valA --ArgB valB --ArgBool1 -ArgBool2 --ArgC=valC
The expected correct format of the above command is:
mycommand --ArgA=valA --ArgB=valB --ArgC=valC --ArgBool1 --ArgBool2
This can be achieved with the following constructs:
# Dockerfile last line
ENTRYPOINT ["mycommand"]

# deployment.yaml
containers:
  - name: {{ .Values.app.name }}
    image: {{ .Values.app.image }}
    args: [
      "--ArgA={{ .Values.cmd.ArgA }}",
      "--ArgB={{ .Values.cmd.ArgB }}",
      "--ArgC={{ .Values.cmd.ArgC }}",
      "--{{ .Values.cmd.ArgBool1 }}",
      "--{{ .Values.cmd.ArgBool2 }}" ]

# values.yaml
cmd:
  ArgA: valA
  ArgB: valB
  ArgC: valC
  ArgBool1: "ArgBool1"
  ArgBool2: "ArgBool2"
helm install --name "airflow" stable/airflow --set secrets.database=mydatabase,secrets.password=mypassword
So this is the Helm chart in question: https://github.com/helm/charts/tree/master/stable/airflow
Now I want to overwrite the default values secrets.database and secrets.password in the chart, so I use the --set argument, which takes key=value pairs separated by commas:
helm install --name "<name for your chart>" <chart> --set key0=value0,key1=value1,key2=value2,key3=value3
Did you try this?
{{ range .Values.arguments }}
{{ . | quote }}
{{ end }}
Acid R's key/value solution was the only thing that worked for me.
I ended up with this:
values.yaml

arguments:
  url1: 'http://something1.example.com'
  url2: 'http://something2.example.com'
  url3: 'http://something3.example.com'
  url4: 'http://something3.example.com'
And in my template:
args:
  {{- range $key, $value := .Values.arguments }}
  - --url={{ $value }}
  {{- end }}
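With those values, the rendered block should come out roughly like this (range walks map keys in sorted order, and url3 and url4 point at the same host in the example values):

args:
  - --url=http://something1.example.com
  - --url=http://something2.example.com
  - --url=http://something3.example.com
  - --url=http://something3.example.com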