Airflow: pass config from a manual UI run to KubernetesPodOperator

Is there any way to pass JSON config from a manual DAG run (the values accessed via dag_run.conf['attribute']) to a KubernetesPodOperator?
I tried to use a Jinja template in the templated field, but got an error: dag_run is not defined.
task_parse_raw_data = KubernetesPodOperator(
    namespace=NAMESPACE,
    image='artifactory/image:tag',
    service_account_name='airflow',
    cmds=["sh", "/current.sh"],
    arguments=[ {{ dag_run.conf['date'] }} ],
    ...)

You need to wrap the Jinja expression in quotes so it is a plain Python string; arguments is a templated field, so Airflow renders it at runtime:
arguments=[ "{{ dag_run.conf['date'] }}" ]

Related

Problem passing in a variable from a shell script into a deployment yaml

In a shell script I want to assign a variable to use as a value in a deployment. For the life of me I cannot figure out how to get it to work.
My Helm deploy script sets the value from my variable like this:
--set AuthConfValue=$AUTH_CONF_VALUE
And I have this in the deployment.yaml file in order to use the variable:
- name: KONG_SETTING
  value: "{ {{ .Values.AuthConfValue }} }"
If I assign the variable in my shell script like the following:
AUTH_CONF_VALUE="ernie"
It will work and the value in the deployment will show up like so:
value: '{ ernie }'
Now if I try to assign the variable like this:
AUTH_CONF_VALUE="\\\"ernie\\\":\\\"123\\\""
I will then get the error "error converting YAML to JSON: yaml: line 118: did not find expected key" when the Helm deploy runs.
I was hoping that this would give me the following value in the deployment :
value: "{ "ernie":"123" }"
If I hardcode the value into the deployment.yaml with this:
- name: KONG_SETTING
  value: "{ \"ernie\": \"123\" }"
and then run the Helm deploy, it works and populates the value in the deployment with this:
value: "{ "ernie":"123" }"
Can someone show me if/how I might be able to do this?
The Helm --set option also uses backslash escaping. So in your example, the $AUTH_CONF_VALUE variable in the host shell contains a single backslash before each quote, which is consumed by --set, so .Values.AuthConfValue contains no backslashes at all, and you get invalid YAML.
If you want to keep this as close to the existing form as you can, let's construct a string with no backslashes at all (and hopefully no commas or brackets either, since those also have special meaning to --set):
AUTH_CONF_VALUE='"ernie":"123"'
helm install --set AuthConfValue="$AUTH_CONF_VALUE" .
When Helm expands a template it doesn't know anything about the context where it might be used. In your case, you know:
- .Values.AuthConfValue is the body of a JSON object
- if you surround it in curly braces { ... } then it should be a valid JSON object
- you need to turn that into a correctly-escaped YAML string
Helm contains a lightly-documented toJson function that takes an arbitrary object and converts it to JSON; any valid JSON is also valid YAML. So the closest-to-what-you-have approach might look like
- name: KONG_SETTING
  value: {{ printf "{%s}" .Values.AuthConfValue | toJson }}
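
With the backslash-free AUTH_CONF_VALUE from above, printf builds the string {"ernie":"123"}, and toJson then emits it as a correctly escaped YAML/JSON string, so the rendered manifest should come out as:

- name: KONG_SETTING
  value: "{\"ernie\":\"123\"}"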
If you're willing to modify your deploy process a little more, you can have less escaping and more certainty. In the sequence above, we have a string that happens to be a JSON object; what if we had an actual object? Imagine settings like
# kong-auth.yaml
authConf:
  ernie: "123"
You could provide this file at install time with a helm install -f option. Since valid JSON is valid YAML, again, you could also provide a JSON file here without changing anything.
helm install -f kong-auth.yaml .
Now with this setup .Values.authConf is an object; the only escaping you need to do is standard YAML/JSON escaping (for example quoting "123" so it's a string and not a number). Now we can use toJson twice, once to get the {"ernie":"123"} JSON object string, and a second time to escape that string as a value "{\"ernie\":\"123\"}".
- name: KONG_SETTING
  value: {{ .Values.authConf | toJson | toJson }}
Setting this up would require modifying your deployment script, but it would be much safer against quoting and escaping concerns.

can't access helm .Values from a named template with a non-global context passed in

I'm trying to use a Helm named template that I plan to include with several different contexts, and the template has many values that are the same for all contexts.
Whenever I pass a context to template or include to invoke the named template, the references to .Values do not work, which is understandable because I'm explicitly setting a lower context.
In the Helm documentation for with, it claims there is a "global" variable $ that will allow reference to the global .Values, e.g., {{ $.Values... }}. This does not work (the example below shows the error).
I've also tried defining variables (using :=) and "enclosing" the include inside that variable definition (via indentation - I don't know if it matters) to make that variable available within the named template, but this doesn't work either.
I've also tried putting these in "globals" as described here, which is more of a subchart thing, and this doesn't work either.
So, I'm out of Helm tricks to make this work and will sadly have to re-define the same values many times - which makes the entire named-template solution a bit less elegant - or just go back to largely duplicate, partially-parameterized templates.
What am I missing?
$ helm version
Client: &version.Version{SemVer:"v2.9+unreleased", GitCommit:"", GitTreeState:"clean"}
Values.yaml:
---
commonSetting1: "common1"
commonSetting2: "common2"
context1:
  setting1: "c1s1"
  setting2: "c1s2"
context2:
  setting1: "c2s1"
  setting2: "c2s2"
deployment.yaml:
---
{{- define "myNamedTemplate" }}
- name: {{ .setting1 }}
  image: {{ $.Values.commonSetting1 }}
{{- end }}
{{- include "myNamedTemplate" .Values.context1 }}
{{- include "myNamedTemplate" .Values.context2 }}
$ helm template test-0.1.0.tgz
Error: render error in "test/templates/deployment.yaml": template: test/templates/deployment.yaml:7:4: executing "test/templates/deployment.yaml" at <include "myNamedTemp...>: error calling include: template: test/templates/deployment.yaml:4:19: executing "myNamedTemplate" at <$.Values.commonSetti...>: can't evaluate field commonSetting1 in type interface {}
When I do this, I tend to explicitly pass in the top-level context object as a parameter. This gets a little tricky because the Go text/template templates only take a single parameter, so you need to use the (Helm/Sprig) list function to package multiple parameters together, and then the (standard text/template) index function to unpack them.
The template definition would look like:
{{- define "myNamedTemplate" }}
{{- $top := index . 0 }}
{{- $context := index . 1 }}
- name: {{ $context.setting1 }}
  image: {{ $top.Values.commonSetting1 }}
{{- end }}
When you invoke it, you would then need to explicitly pass the current context as a parameter:
{{ include "myNamedTemplate" (list . .Values.context1) }}

Helm Charts - How do I use `default` on undefined object property values?

Using Helm, I was under the impression default would be the fallback if a variable is not defined. However, it doesn't appear Helm can get to values in sub-object hashes:
type: {{ default "NodePort" .Values.fpm.service.type }}
If .Values.fpm.service or service.type is not defined, it should fall back to "NodePort".
However, attempting to template this throws a nil pointer error:
<.Values.fpm.service.type>: nil pointer evaluating interface {}.type
Is there a way to simply perform this level of variable testing? Or am I stuck with an if/else test?
The intent of this is to optionally define .fpm.service (and [..].type) within your values.yaml file.
(I'm building a Helm Library chart to handle optional definitions by main charts)
According to the official Helm doc (Using Default Function), the syntax is different and you should use it this way:
type: {{ .Values.fpm.service.type | default "NodePort" | quote }}
It doesn't look like there's really a good way to stop Helm from trying to dive into non-existent objects. I moved to a single-line if condition, and it worked:
type: {{ if .Values.fpm.service -}} {{ default "NodePort" .Values.fpm.service.type | quote }} {{- else -}} "NodePort" {{- end }}
This way, I check that fpm.service exists before testing .type. It works whether .service and .service.type are defined or not.
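As an aside (not from the answers above, assuming the same values layout): you can also make each level of the lookup safe by substituting an empty dict before descending, which avoids the nil pointer error no matter which level is missing:

type: {{ ((.Values.fpm | default dict).service | default dict).type | default "NodePort" | quote }}

Each | default dict replaces a missing level with an empty map, so the final lookup yields nil and falls through to "NodePort". Newer Sprig releases also ship a dig function that can express this kind of deep lookup more compactly.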

Using an include statement as the default value

I am creating a helm chart in which I want to specify a default for a value using a template function. Specifically I want to either use the override value image.name, or default to the template function chart.name:
{{ .Values.image.name | default include chart.name . }}
But when linting the chart I have the following error:
[ERROR] templates/: render error in "chart/templates/deployment.yaml": template: chart/templates/deployment.yaml:22:81: executing "chart/templates/deployment.yaml" at <include>: wrong number of args for include: want 2 got 0
Is it possible to use an included template function as the default value? Or can I only use literals?
You can. Just enclose your include statement in parentheses:
{{ .Values.image.name | default (include "chart.name" .)}}
Please see "Using the default function" in the Helm docs.
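
For context, chart.name is assumed here to be a named template defined by the chart itself (typically in _helpers.tpl); the conventional definition scaffolded by helm create looks roughly like:

{{- define "chart.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

With that in place, .Values.image.name wins when it is set; otherwise the rendered chart name is used.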

overriding values in kubernetes helm subcharts

I'm building a helm chart for my application, and I'm using stable/nginx-ingress as a subchart. I have a single overrides.yml file that contains (among other overrides):
nginx-ingress:
  controller:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: "*.{{ .Release.Name }}.mydomain.com"
So, I'm trying to use the release name in the overrides file, and my command looks something like: helm install mychart --values overrides.yml, but the resulting annotation does not do the variable interpolation, and instead results in something like
Annotations: external-dns.alpha.kubernetes.io/hostname=*.{{ .Release.Name }}.mydomain.com
I installed the subchart by using helm fetch, and I'm under the (misguided?) impression that it would be best to leave the fetched thing as-is, and override values in it - however, if variable interpolation isn't available with that method, I will have to put my values in the subchart's values.yaml.
Is there a best practice for this? Is it ok to put my own values in the fetched subchart's values.yaml? If I someday helm fetch this subchart again, I'll have to put those values back in by hand, instead of leaving them in an untouched overrides file...
Thanks in advance for any feedback!
I found the issue on github -- it is not supported yet:
https://github.com/kubernetes/helm/issues/2133
Helm 3.x (Q4 2019) now offers a way to do this, but for the chart only, not for subcharts (see TBBle's comment below).
Milan Masek adds as a comment:
Thankfully, the latest Helm manual says how to achieve this. The trick is:
- enclosing the variable in quotes or in a YAML block scalar |-, and
- then referencing it in the template as {{ tpl .Values.variable . }}
This seems to make Helm happy.
Example:
$ cat Chart.yaml | grep appVersion
appVersion: 0.0.1-SNAPSHOT-d2e2f42
$ cat platform/shared/t/values.yaml | grep -A2 image:
image:
  tag: |-
    {{ .Chart.AppVersion }}
$ cat templates/deployment.yaml | grep image:
image: "{{ .Values.image.repository }}:{{ tpl .Values.image.tag . }}"
$ helm template . --values platform/shared/t/values.betradar.yaml | grep image
image: "docker-registry.default.svc:5000/namespace/service:0.0.1-SNAPSHOT-d2e2f42"
imagePullPolicy: Always
image: busybox
Otherwise an error is thrown:
$ cat platform/shared/t/values.yaml | grep -A1 image:
image:
  tag: {{ .Chart.AppVersion }}
$ helm template . --values platform/shared/t/values.yaml | grep image
Error: failed to parse platform/shared/t/values.yaml: error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{".Chart.AppVersion":interface {}(nil)}
For Helm subcharts, TBBle adds in issue 2133:
#MilanMasek 's solution won't work in general for subcharts, because the context . passed into tpl will have the subchart's values, not the parent chart's values.
It happens to work in the specific example this ticket was opened for, because .Release.Name should be the same in all the subcharts.
It won't work for .Chart.AppVersion as in the tpl example.
There was a proposal to support tval in #3252 for interpolating templates in values files, but that was dropped in favour of a lua-based Hook system which has been proposed for Helm v3: #2492 (comment)
That last issue 2492 includes workarounds like this one:
You can put a placeholder in the text that you want to template and then replace that placeholder with the template that you would like to use in yaml files in the template.
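For instance (a sketch of that placeholder idea, with made-up names): the values file carries a literal token, and the template swaps it out using Sprig's replace function:

# values.yaml
hostname: "*.RELEASE_NAME.mydomain.com"

# in a template
annotations:
  external-dns.alpha.kubernetes.io/hostname: {{ .Values.hostname | replace "RELEASE_NAME" .Release.Name | quote }}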
For now, what I've done in the CI job is run helm template on the values.yaml file.
It works pretty well atm.
cp values.yaml templates/
helm template $CI_BUILD_REF_NAME ./ | sed -ne '/^# Source: templates\/values.yaml/,/^---/p' > values.yaml
rm templates/values.yaml
helm upgrade --install ...
This breaks if you have multiple -f values.yml files, but I'm thinking of writing a small helm wrapper that essentially runs that bash script for each values.yaml file.
fsniper illustrates the issue again:
There is a use case where you need to pass the deployment name to dependency charts over which you have no control.
For example I am trying to set podAffinity for zookeeper. And I have an application helm chart which sets zookeeper as a dependency.
In this case, I am passing pod anti-affinity to zookeeper via values. So in my app's values.yaml file I have a zookeeper.affinity section.
If I had the ability to get the release name inside the values yaml I would just set this as default and be done with it.
But now for every deployment I have to override this value, which is a big problem.
Update Oct. 2022, from issue 2133:
lazychanger proposes
I submitted a plugin to override values.yaml with additional templates.
See lazychanger/helm-viv: "Helm-variable-in-values" and its example.