Is there some way I can pass Chart.AppVersion to a subchart? - kubernetes-helm

I have a chart that uses a subchart to create some resources.
I want to pass the parent chart's appVersion to the subchart for labels and similar uses.
I tried to use tpl, but when the value is evaluated it picks up the subchart's appVersion instead:
# values.yaml of parent
parent-chart:
  blah:
    version: "{{ $.Chart.AppVersion }}"
In subchart:
...
version: {{ tpl .Values.blah.version . }}
Is there any way to do something like this? I'd also like to do it without using a global variable, assuming that's possible.
Edit
It looks like I might be waiting on this PR? https://github.com/helm/helm/pull/10059

Related

helm chart - value file variables

I am using a helm chart (with sub-charts) to deploy my application, and a values file for setting values.
I am looking for a way to set variables in my values file (or any other place) that can be referenced elsewhere in that same values file.
Several sections (services) in my values file need to use the same value,
so I am looking for something like a variable in the values file.
Is there any way I can use variables in my values file?
Thanks
Helm on its own can't do this.
If you control all of the charts and subcharts, you can allow specific values to have embedded Go templating. Helm includes a tpl extension function that will let you render an arbitrary string as a template. So if you have values
global:
  commonKey: some value
otherKey: '{{ .Values.global.commonKey }}'
then you can render
- name: OTHER_KEY
  value: '{{ tpl .Values.otherKey . }}'
But you have to use tpl every place you access such values; if you don't control the subcharts, you may not be able to do this.
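Putting the pieces together, a minimal sketch of how this looks inside a container spec (the values are the ones above; the COMMON_KEY/OTHER_KEY env names are just for illustration):

# templates/deployment.yaml (container spec excerpt)
env:
  - name: COMMON_KEY
    # plain access: no embedded templating to resolve
    value: {{ .Values.global.commonKey | quote }}
  - name: OTHER_KEY
    # tpl renders the string as a template against the root context ".",
    # resolving the embedded reference to .Values.global.commonKey
    value: {{ tpl .Values.otherKey . | quote }}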
Higher-level tools may also let you do this. I'm familiar in particular with Helmfile which lets you declare multiple Helm charts and their settings, but also lets you use almost-Helm templating in many places. So your helmfile.yaml could specify:
environments:
  default:
    # These values are available when rendering templates in this file
    values:
      - commonKey: some value

releases:
  - name: my-service
    namespace: my-service
    chart: ./charts/my-service
    values:
      # List items can be file names or YAML dictionaries.
      # If it's a dictionary, it is arbitrary nested values.yaml content.
      # If it's a *.yaml.gotmpl file name, templating is applied to the file.
      - otherKey: '{{ .Values.commonKey }}'
        yetAnotherKey: '{{ .Values.commonKey }}'
      - ./my-service.yaml.gotmpl
Helmsman is simpler and can only set chart values from environment variables, but I believe you can reference the same environment variable in different setString: options. You could also do something similar with the Terraform Helm provider, using Terraform's native expression syntax, particularly if you're already familiar with Terraform.
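For illustration only, a rough sketch of what that might look like in a Helmsman desired state file, assuming an exported COMMON_VALUE environment variable (the app name and chart path are hypothetical, and the exact field names should be checked against the Helmsman docs):

# helmsman.yaml (hypothetical)
apps:
  my-service:
    namespace: my-service
    enabled: true
    chart: ./charts/my-service
    version: "1.0.0"
    setString:
      # the same environment variable referenced from two keys
      otherKey: "$COMMON_VALUE"
      yetAnotherKey: "$COMMON_VALUE"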

helm template output showing values not being resolved

I'm new to helm charts and K8s, so forgive me. I'm working on a project that deploys an application with several apps as part of it. The previous dev who put the charts together was using a "find-and-replace" technique to fill in values for things like the image repository, tags, etc. This is making our CI/CD pipeline development tricky and not scalable. I'm trying to update the charts to use variables and values.yml files. Most of it seems to be working; values are getting passed down to the templates, except for one part, and I can't figure out why. It's a large project, so I won't copy all the chart files. I'll try to lay out the important parts:
Folder structure:
helm
  project1
    dev
      charts
        app1
          templates
            *template files
          Chart.yaml
          values.yaml
        app2
          *same subfolders
        app3
          *same subfolders
      Chart.yml
      values.yml
Base Values.yml file
artifactory_base_url: company.repo.io/repo_folder
imageversions:
  app1_tag: 6.1.2-alpine-edge
  app2_tag: 8.1.0.0-edge
  app3_tag: 8.1.0.0-alpine-edge
  app4_tag: 10.1.1-alpine-edge
  initcontainer: latest
App Values.yml file
app:
  image:
    repository: "{{ .Values.artifactory_base_url }}/pingaccess"
    tag: "{{ .Values.pa_tag }}"
deployment.yml template file
containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.app.image }}"
I'm running the following helm template command to confirm that I'm getting the proper output for at least the app1 part before actually trying to deploy to the k8s cluster.
helm template app1 --set date="$EPOCHSECONDS" --set namespace='porject_namespace' --values helm/project1/dev/values.yaml helm/project1/dev/charts/app1
Most of the resulting yaml looks great, and it looks like the values I have defined in the base values.yml file are getting passed through in other areas like this example:
initContainers:
  - name: appinitcontainer
    image: "company.repo.io/repo_folder/initcontainer:latest"
But there is one portion, populated from the deployment.yml template file, that is still showing the curly braces for the variables:
containers:
  - name: app1
    image: "map[repository:{{ .Values.image_repo_base_url }}/app1 tag:{{ .Values.app1_tag }}]"
    imagePullPolicy: Always
I've tried making variations in all the files mentioned above to remove quotes, use single quotes, etc. In those attempts I usually get a variation of the following errors:
"error converting yaml to json: did not find expected key"
"error mapping values"
I haven't been able to find a solution. I'm assuming the output of the helm template command should not contain any braces like that; all variables and values should be resolved. I'm hoping somebody can provide some tips on what I might be missing.
You're hitting two issues here. First, .Values.app.image is a map containing the two keys repository and tag; that's why you get the weird map[repository:... tag:...] syntax in the output. Second, string values in values.yaml aren't reinterpreted for Helm template syntax; that's why the {{ ... }} markup gets passed through to the output.
This in turn means you need to do two things. To resolve the map, construct the string from the contents of the dictionary; and to resolve the templating markup inside the string values, use Helm's tpl function.
{{- $repository := tpl .Values.app.image.repository . }}
{{- $tag := tpl .Values.app.image.tag . }}
image: "{{ $repository }}:{{ $tag }}"
(You may find it useful to separate "repository", "registry" or "image", and "tag" into three separate parts, since probably all of your images are coming from the same repository; that would let you configure the repository in one place and customize the image name per component. The bitnami/postgresql chart is one example of this setup.)
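As a sketch of that split, with the registry/name/tag values borrowed from the question (the key names here are illustrative, not the chart's actual schema):

# values.yaml
image:
  registry: company.repo.io/repo_folder
  name: app1
  tag: 6.1.2-alpine-edge

# templates/deployment.yaml
image: "{{ .Values.image.registry }}/{{ .Values.image.name }}:{{ .Values.image.tag }}"

This lets you override the registry once for all components while still customizing the image name and tag per app.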

how to pass conditional argument as command in kubernetes with helm charts

I have to run a command when my pod starts in Kubernetes, and it takes some arguments, but those arguments are conditional. How do I set those values? My configuration looks like:
#file: values.yaml
arguments:
  debug: false
  values: 16 # this is not necessarily set
#deployment command section looks like
command: [ "/bin/bash", "-ce", "./my_app.sh" ]
args:
  - {{ -f .Values.arguments.debug }}
  - {{ -v .Values.arguments.values }}
But it doesn't seem to accept the arguments. Is this the wrong way to do it? How can I pass multiple arguments?
Helm uses the Go text/template language with a number of extensions; the Helm Chart Template Guide has quite a few examples.
In particular the templating language includes an if...else...end construct. You can use this like:
args:
  - -f
  - {{ quote .Values.arguments.debug }}
{{- if .Values.arguments.values }}
  - -v
  - {{ quote .Values.arguments.values }}
{{- end }}
Note that the -f and -v text are outside the template curly braces, and I've split them out into separate items in the argument list. The last part tests whether the values option is set, and the -v option isn't emitted if it's not.
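For reference, with the values.yaml from the question (debug: false, values: 16), this renders as the list below; if arguments.values were unset, the -v pair would disappear:

args:
  - -f
  - "false"
  - -v
  - "16"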

Hierarchy of values for dynamic helm chart configuration

What I'm trying to do
I want to have some default settings and options in values.yaml, and then a hash map of different instances, which will be converted into services and deployments, whose individual settings override the defaults.
values.yaml
someSetting: TheDefault
deployments:
  one:
    role: XYZ
  two:
    role: ABC
    someSetting: Overridden
In the above case, there would be two deployments and services, one and two. The value for someSetting for one would be TheDefault and for two would be Overridden.
actual template yaml
I'm trying this - to build a dictionary, $p, which has the root-scope Chart and Release objects in it, then has the root scope values merged in, then the current deployment value merged in.
{{- range $deploymentKey, $deploymentVal := .Values.deployments }}
{{- $p := dict "deploymentKey" $deploymentKey }}
{{- $_ := set $p "Chart" $.Chart }}
{{- $_ := set $p "Release" $.Release }}
{{- $_ := merge $p $.Values }}
{{- $_ := merge $p . }}
...
{{- end }}
The reason for including Chart and Release is that, despite what the documentation says, $.Chart is not always available: it's literally empty when I pass a scope into a template and that template tries to use $. to refer to the root scope.
So I'm doing things like:
name: {{ template "my-app.fullname" $p }}
and
image: {{ $p.image.name }}
The error
The problem is that while helm lint returns no errors, helm template . (or a dry-run) yields:
Error: rendering template failed: runtime error: invalid memory address or nil pointer dereference
What I've tried
- Removing each merge to try to narrow down the crash; they don't seem to cause it
- Plain-old merging $ into the dictionary
- Weeping
- Asking on Helm Slack
- Asking on GitHub issues
My question:
How can I fix this crash?
Or, how can I achieve what I'm trying to do?
Try this. Note the merge order compared to your version: sprig's merge gives precedence to keys already present in the destination dict, so merging the per-deployment values (.) into $p before the chart-wide defaults ($.Values) lets the per-deployment settings override the defaults:
deployment.yaml:
{{- range $deploymentKey, $deploymentVal := .Values.deployments }}
{{- $p := dict "deploymentKey" $deploymentKey }}
{{- $_ := set $p "Chart" $.Chart }}
{{- $_ := set $p "Release" $.Release }}
{{- $_ := merge $p . }}
{{- $_ := merge $p $.Values }}
...
{{ end }}
_helpers.tpl:
{{- define "repro.fullname" -}}
{{- printf "%s" .Chart.Name }}
{{- end -}}
You could instead create a common base chart with the defaults and a vanilla service and deployment and then create an umbrella chart that includes the base chart twice under the aliases 'one' and 'two'. Then the values file of the umbrella chart is where you override the defaults and you don't need any dictionary.
As an example here is a base chart - https://github.com/ryandawsonuk/configmaps-transformers/tree/master/helm/transformers/charts/transformer and the umbrella chart includes it multiple times under different aliases - https://github.com/ryandawsonuk/configmaps-transformers/blob/master/helm/transformers/requirements.yaml. The umbrella chart's values file plugs in different values for each instance of the base chart that is included. In the umbrella values each instance is referenced by its alias - https://github.com/ryandawsonuk/configmaps-transformers/blob/master/helm/transformers/values.yaml#L14
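Applied to the one/two example from the question, the umbrella chart's dependency and values files might look roughly like this (the base-chart name and paths are hypothetical; in Helm 3 the dependencies block lives in Chart.yaml rather than requirements.yaml):

# requirements.yaml (umbrella chart)
dependencies:
  - name: base-chart
    version: 0.1.0
    repository: file://../base-chart
    alias: one
  - name: base-chart
    version: 0.1.0
    repository: file://../base-chart
    alias: two

# values.yaml (umbrella chart)
# each alias gets its own overrides of the base chart's defaults
one:
  role: XYZ
two:
  role: ABC
  someSetting: Overridden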

Ansible - need to output all the hosts in the playbook into a configuration file

I'll try to make this brief... I'm setting up Ansible to write a PostgreSQL pg_hba.conf file, and what I want to do is permit any db server to replicate to any other db server. This way I don't have to reconfigure in the event of a failure. I want Ansible to insert a line for each host listed in the group "db". These entries must be CIDR types. So far I've only succeeded in getting each system to show its own CIDR in the file. I've looked extensively with no joy, but here's what I'm trying to use:
- name: Update the pg_hba.conf file
  lineinfile:
    path: '{{ pg_data }}/{{ pg_cluster_name }}/pg_hba.conf'
    regexp: 'hostssl replication'
    insertafter: 'hostssl replication'
    line: "hostssl replication rplctn_usr {{ hostvars[ '{{ item }}' ]['ansible_default_ipv4']['address'] }}/32 md5"
  with_items: groups['db']
  tags:
    - "pg_hba.conf"
Nothing I've done gets the {{ item }} variable to expand properly. Anyone?
Firstly, you need to reference the var to iterate through with braces:
with_items: "{{ groups['db'] }}"
Second, item is the var representing the value of each iteration. Inside {{ }} you can reference any vars directly without extra braces:
{{ hostvars[item]['ansible_default_ipv4']['address'] }}
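Putting both fixes into the task from the question gives roughly this sketch (all other parts kept as the question had them):

- name: Update the pg_hba.conf file
  lineinfile:
    path: '{{ pg_data }}/{{ pg_cluster_name }}/pg_hba.conf'
    regexp: 'hostssl replication'
    insertafter: 'hostssl replication'
    # item expands directly inside the outer {{ }}; no nested braces needed
    line: "hostssl replication rplctn_usr {{ hostvars[item]['ansible_default_ipv4']['address'] }}/32 md5"
  with_items: "{{ groups['db'] }}"
  tags:
    - "pg_hba.conf"

One design note: because regexp matches the same text on every iteration, lineinfile will keep replacing that one line, so for one line per host you may need to drop regexp or make it unique per item.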