I have a values.yaml where I need to mention multiple ports like the following:
kafkaClientPort:
- 32000
- 32001
- 32002
In the StatefulSet YAML, I need to pick the value by the pod's ordinal number.
So for kf-0, I need to put the first element of kafkaClientPort; for kf-1, the second element; and so on.
I am trying like the following:
args:
- "KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://$(MY_NODE_NAME):{{ index .Values.kafkaClientPort ${HOSTNAME##*-} }}"
But it is showing an error.
Please advise on the best way to access values.yaml values dynamically.
The trick here is that the Helm template doesn't know anything about the ordinal in your StatefulSet. If you look at the Kafka Helm chart, you see that they use a base port 31090 and then add the ordinal number, but that substitution happens at runtime, 'after' the template is rendered. Something like this in your values:
"advertised.listener": |-
PLAINTEXT://kafka.cluster.local:$((31090 + ${KAFKA_BROKER_ID}))
and then in the template file, they use a bash export under command with a printf, which is an alias for fmt.Sprintf; because the values key contains a literal dot, it has to be looked up with index. Something like this in your case:
command:
- sh
- -exc
- |
  unset KAFKA_PORT && \
  export KAFKA_BROKER_ID=${HOSTNAME##*-} && \
  export KAFKA_ADVERTISED_LISTENERS="{{ printf "%s" (index .Values "advertised.listener") }}" && \
  ...
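Applied to the question: since the listed ports are just 32000 plus the ordinal, a minimal sketch (assuming MY_NODE_NAME is injected via a fieldRef env var, and the entrypoint path is only illustrative) can compute the port at runtime instead of indexing into values:
command:
- sh
- -exc
- |
  # Derive the ordinal from the pod hostname, e.g. kf-2 -> 2
  ORDINAL=${HOSTNAME##*-}
  # Assumes the ports stay sequential: 32000, 32001, 32002, ...
  export KAFKA_ADVERTISED_LISTENERS="PLAINTEXT://${MY_NODE_NAME}:$((32000 + ORDINAL))"
  # Hand off to the image's entrypoint (illustrative path)
  exec /opt/kafka/entrypoint.sh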
In a shell script I want to assign a variable to use as a value in a deployment. For the life of me I can not figure out how to get it to work.
My helm deploy script has the following in order to set the value to use my variable:
--set AuthConfValue=$AUTH_CONF_VALUE
And I have this in the deployment.yaml file in order to use the variable:
- name: KONG_SETTING
  value: "{ {{ .Values.AuthConfValue }} }"
If I assign the variable in my shell script like the following:
AUTH_CONF_VALUE="ernie"
It will work and the value in the deployment will show up like so:
value: '{ ernie }'
Now if I try to assign the variable like this:
AUTH_CONF_VALUE="\\\"ernie\\\":\\\"123\\\""
I will then get the error "error converting YAML to JSON: yaml: line 118: did not find expected key" when the helm deploy runs.
I was hoping that this would give me the following value in the deployment:
value: "{ "ernie":"123" }"
If I hardcode the value into the deployment.yaml with this:
- name: KONG_SETTING
  value: "{ \"ernie\": \"123\" }"
and then run the helm deploy, it will work and populate the value in the deployment with this:
value: "{ "ernie":"123" }"
Can someone show me if/how I might be able to do this?
The Helm --set option also uses backslash escaping. So in your example, the $AUTH_CONF_VALUE variable in the host shell contains a single backslash before each quote, which is consumed by --set, so .Values.AuthConfValue contains no backslashes at all, and you get invalid YAML.
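Traced layer by layer, a sketch of that escaping chain:
# in the script source:      AUTH_CONF_VALUE="\\\"ernie\\\":\\\"123\\\""
# after the shell parses:    \"ernie\":\"123\"
# after --set consumes \:    "ernie":"123"
# rendered into deployment:  value: "{ "ernie":"123" }"   <- not valid YAML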
If you want to keep this as close to the existing form as you can, let's construct a string with no backslashes at all (and hopefully no commas or brackets either, since those also have special meaning to --set):
AUTH_CONF_VALUE='"ernie":"123"'
helm install --set AuthConfValue="$AUTH_CONF_VALUE" .
When Helm expands a template it doesn't know anything about the context where it might be used. In your case, you know:
.Values.AuthConfValue is the body of a JSON object;
if you surround it in curly braces { ... } then it should be a valid JSON object; and
you need to turn that into a correctly-escaped YAML string.
Helm contains a lightly-documented toJson function that takes an arbitrary object and converts it to JSON; any valid JSON is also valid YAML. So the closest-to-what-you-have approach might look like
- name: KONG_SETTING
  value: {{ printf "{%s}" .Values.AuthConfValue | toJson }}
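With AUTH_CONF_VALUE set as in the helm install line above, this should render to something like:
- name: KONG_SETTING
  value: "{\"ernie\":\"123\"}"
toJson supplies the surrounding quotes and the backslash escapes, so the line is a valid YAML string.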
If you're willing to modify your deploy process a little more, you can have less escaping and more certainty. In the sequence above, we have a string that happens to be a JSON object; what if we had an actual object? Imagine settings like
# kong-auth.yaml
authConf:
  ernie: "123"
You could provide this file at install time with a helm install -f option. Since valid JSON is valid YAML, again, you could also provide a JSON file here without changing anything.
helm install -f kong-auth.yaml .
Now with this setup .Values.authConf is an object; the only escaping you need to do is standard YAML/JSON escaping (for example quoting "123" so it's a string and not a number). Now we can use toJson twice, once to get the {"ernie":"123"} JSON object string, and a second time to escape that string as a value "{\"ernie\":\"123\"}".
- name: KONG_SETTING
  value: {{ .Values.authConf | toJson | toJson }}
Setting this up would require modifying your deployment script, but it would be much safer against quoting and escaping concerns.
I need to pass a private RSA key as ENV var to my deployment file, and I can't do it at the moment.
containers:
  env:
    - name: MY_PRIVATE_KEY
      value: |+
        {{ .Values.fpm.dot_env.MY_PRIVATE_KEY }}
I've tried with indent, without indent, and using toYaml (there is no error with that, but my env var starts with |-)...
Any idea?
This is the error I get from that code:
Error: UPGRADE FAILED: YAML parse error on broker-api/templates/deployment.yaml: error converting YAML to JSON: yaml: line 59: could not find expected ':'
If you're trying to embed a multi-line string in a Kubernetes artifact in a Helm chart, the easiest recipe is
Use the YAML | block scalar form to preserve newlines;
Start the Go template {{ ... }} macro at the first column; and
Use the sprig indent function to indent every line of the block, including the first one.
(You will frequently see |-, which trims the final newline; for a private key I can imagine wanting to keep the final newline with |+ or just plain |; the difference between these last two is whether extra empty lines at the end are kept or not.)
    containers:
      env:
        - name: MY_PRIVATE_KEY
          value: |+
{{ .Values.fpm.dot_env.MY_PRIVATE_KEY | indent 12 }}
(Usually for actual secrets it's considered preferable to store them in Kubernetes Secret objects. Those values are base64 encoded in the Kubernetes API, so when you declare the Secret object in Helm you'd use ... | b64enc instead of this indent recipe.)
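For completeness, a minimal sketch of that Secret approach (the Secret name here is made up; the value path matches the question):
# templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-fpm-keys
data:
  MY_PRIVATE_KEY: {{ .Values.fpm.dot_env.MY_PRIVATE_KEY | b64enc }}
The container then reads it back without any indent gymnastics:
env:
  - name: MY_PRIVATE_KEY
    valueFrom:
      secretKeyRef:
        name: {{ .Release.Name }}-fpm-keys
        key: MY_PRIVATE_KEY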
Finally, I solved my problem by b64-encoding my key and b64-decoding it in my backend.
Thanks.
I'm building a helm chart for my application, and I'm using stable/nginx-ingress as a subchart. I have a single overrides.yml file that contains (among other overrides):
nginx-ingress:
  controller:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: "*.{{ .Release.Name }}.mydomain.com"
So, I'm trying to use the release name in the overrides file, and my command looks something like: helm install mychart --values overrides.yml, but the resulting annotation does not do the variable interpolation, and instead results in something like
Annotations: external-dns.alpha.kubernetes.io/hostname=*.{{ .Release.Name }}.mydomain.com
I installed the subchart by using helm fetch, and I'm under the (misguided?) impression that it would be best to leave the fetched thing as-is, and override values in it - however, if variable interpolation isn't available with that method, I will have to put my values in the subchart's values.yaml.
Is there a best practice for this? Is it ok to put my own values in the fetched subchart's values.yaml? If I someday helm fetch this subchart again, I'll have to put those values back in by hand, instead of leaving them in an untouched overrides file...
Thanks in advance for any feedback!
I found the issue on github -- it is not supported yet:
https://github.com/kubernetes/helm/issues/2133
Helm 3.x (Q4 2019) now documents more about this, but for the chart itself only, not for subcharts (see TBBle's comment below).
Milan Masek adds as a comment:
Thankfully, the latest Helm manual says how to achieve this.
The trick is:
enclosing the variable in quotes or in a YAML block scalar |-, and
then referencing it in a template as {{ tpl .Values.variable . }}
This seems to make Helm happy.
Example:
$ cat Chart.yaml | grep appVersion
appVersion: 0.0.1-SNAPSHOT-d2e2f42
$ cat platform/shared/t/values.yaml | grep -A2 image:
image:
  tag: |-
    {{ .Chart.AppVersion }}
$ cat templates/deployment.yaml | grep image:
image: "{{ .Values.image.repository }}:{{ tpl .Values.image.tag . }}"
$ helm template . --values platform/shared/t/values.yaml | grep image
image: "docker-registry.default.svc:5000/namespace/service:0.0.1-SNAPSHOT-d2e2f42"
imagePullPolicy: Always
image: busybox
Otherwise an error is thrown:
$ cat platform/shared/t/values.yaml | grep -A1 image:
image:
  tag: {{ .Chart.AppVersion }}
$ helm template . --values platform/shared/t/values.yaml | grep image
Error: failed to parse platform/shared/t/values.yaml: error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{".Chart.AppVersion":interface {}(nil)}
For Helm subchart, TBBle adds to issue 2133
@MilanMasek's solution won't work in general for subcharts, because the context . passed into tpl will have the subchart's values, not the parent chart's values.
It happens to work in the specific example this ticket was opened for, because .Release.Name should be the same in all the subcharts.
It won't work for .Chart.AppVersion as in the tpl example.
There was a proposal to support tval in #3252 for interpolating templates in values files, but that was dropped in favour of a lua-based Hook system which has been proposed for Helm v3: #2492 (comment)
That last issue 2492 includes workarounds like this one:
You can put a placeholder in the text that you want to template and then replace that placeholder with the template that you would like to use in yaml files in the template.
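To make that concrete, a hedged sketch of the placeholder idea (the key and placeholder names here are made up): values.yaml carries the placeholder,
hostname: "*.RELEASE-NAME.mydomain.com"
and the template swaps it for the real release name:
annotations:
  external-dns.alpha.kubernetes.io/hostname: {{ .Values.hostname | replace "RELEASE-NAME" .Release.Name | quote }}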
For now, what I've done in the CI job is run helm template on the values.yaml file.
It works pretty well atm.
cp values.yaml templates/
helm template $CI_BUILD_REF_NAME ./ | sed -ne '/^# Source: templates\/values.yaml/,/^---/p' > values.yaml
rm templates/values.yaml
helm upgrade --install ...
This breaks if you have multiple -f values.yml files, but I'm thinking of writing a small helm wrapper that essentially runs that bash script for each values.yaml file.
fsniper illustrates the issue again:
There is a use case where you would need to pass deployment name to dependency charts where you have no control.
For example, I am trying to set podAffinity for zookeeper, and I have an application Helm chart which sets zookeeper as a dependency.
In this case, I am passing pod anti-affinity to zookeeper via values. So in my app's values.yaml file I have a zookeeper.affinity section.
If I had the ability to get the release name inside the values.yaml, I would just set this as the default and be done with it.
But now for every deployment I have to override this value, which is a big problem.
Update Oct. 2022, from issue 2133:
lazychanger proposes
I submitted a plugin to override values.yaml with additional templates.
See lazychanger/helm-viv: "Helm-variable-in-values" and its example.
As the title says, I'd like to be able to pass a variable that is registered under one host group to another, but I'm not sure how to do that and I couldn't find anything relevant in the variable documentation: http://docs.ansible.com/ansible/playbooks_variables.html
This is a simplified example of what I am trying to do. I have a playbook that calls many different groups and checks where a symlink points. I'd like to be able to report all of the symlink targets to the console at the end of the play.
The problem is the registered value is only valid under the host group that it was defined in. Is there a proper way of exporting these variables?
---
- hosts: max_logger
  tasks:
    - shell: ls -la /home/ubuntu/apps/max-logger/active | awk -F':' '{print $NF}'
      register: max_logger_old_active
- hosts: max_data
  tasks:
    - shell: ls -la /home/ubuntu/apps/max-data/active | awk -F':' '{print $NF}'
      register: max_data_old_active
- hosts: "localhost"
  tasks:
    - debug: >
        msg="The old max_logger build is {{ max_logger_old_active.stdout }}
        The old max_data build is {{ max_data_old_active.stdout }}"
You don't need to pass anything here (you just need to access it). Registered variables are stored as host facts and they stay in memory for as long as the whole playbook runs, so you can access them from all subsequent plays.
This can be achieved using the magic variable hostvars.
You need, however, to refer to a host name, which doesn't necessarily match the host group name (e.g. max_logger) that you posted in the question:
- hosts: "localhost"
tasks:
- debug: >
msg="The old max_logger build is {{ hostvars['max_logger_host'].max_logger_old_active.stdout }}
The old max_data build is {{ hostvars['max_data_host'].max_data_old_active.stdout }}"
You can also write hostvars['max_data_host']['max_data_old_active']['stdout'].
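If you only know the group name and not a concrete host name, a small sketch (reusing the question's max_logger group) is to index into the magic groups variable and take the group's first host:
- hosts: "localhost"
  tasks:
    - debug:
        msg: "The old max_logger build is {{ hostvars[groups['max_logger'][0]].max_logger_old_active.stdout }}"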
I'm working on an Ansible playbook which should help to generate build agents for a continuous delivery pipeline. Among other things, I'll need to install an Oracle client on such an agent. I want to do something like
- name: "Provide response file"
copy: src=/custom.rsp dest=/opt/oracle
Within the custom.rsp file I've got some variables to be substituted. Normally, one could do it with a separate shell command like this:
- name: "Substitute Vars"
shell: "sed 's|<PARAMETER>|<VALUE>|g' -i /opt/oracle/custom.rsp"
I don't like it, though. There should be a more convenient way to do this. Can anybody give me a hint?
You want to be using a template rather than copying a static file.
Also, when using the copy or template modules, the dest parameter is a full path AND filename, not just a path. So if you want to end up with a copy of custom.rsp in the directory /opt/oracle then you need to do this:
- name: "Provide response file"
template: src=/custom.rsp dest=/opt/oracle/custom.rsp
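Inside custom.rsp you then write Jinja2 expressions instead of sed placeholders; a hypothetical fragment (these keys are made up, not real Oracle installer parameters), with oracle_home and oracle_base defined in your inventory or vars:
# custom.rsp -- Ansible substitutes the expressions while templating
ORACLE_HOME={{ oracle_home }}
ORACLE_BASE={{ oracle_base }}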
I'm going to extend Bruce's answer with an example:
This is part of my inventory.yaml:
kafka_stage:
  children:
    kafka_with_zookeeper_stage:
    kafka_only_stage:
  vars:
    zookeeper_hosts: "kafka-stage01:2181,kafka-stage02:2181,kafka-stage03:2181"
kafka_with_zookeeper_stage:
  hosts:
    kafka-stage01:
      broker_id: 0
    kafka-stage02:
      broker_id: 1
  vars:
    services:
      kafka:
      zookeeper:
This is part of a configuration file:
# The id of the broker. This must be set to a unique integer for each broker.
broker.id={{ broker_id }}
# {{ zookeeper_hosts }}
advertised.listeners=PLAINTEXT://{{ ansible_host }}:9092
# {{ services }}
This task in a playbook:
- name: Copy to Host
  ansible.builtin.template:
    src: my_configfile.properties
    dest: /tmp/hejsan.properties
Gave me this on the remote host kafka-stage02:
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1
# kafka-stage01:2181,kafka-stage02:2181,kafka-stage03:2181
advertised.listeners=PLAINTEXT://kafka-stage02:9092
# {'kafka': None, 'zookeeper': None}