Helm tpl function in ConfigMap to read index.js - kubernetes-helm

How can we read an index.js file into a ConfigMap with the tpl function in Helm?
That is, how do we read an index.js file like the one below?
exports.CDN_URL = 'http://100.470.255.255/';
exports.CDN_NAME = 'staticFilesNew';
exports.REDIS = [
  {
    host: 'redis-{{.Release.Name}}.{{.Release.Namespace}}.svc.cluster.local',
    port: '26379',
  },
];
The file structure is as follows:
.
├── Chart.yaml
├── templates
│ ├── NOTES.txt
│ ├── _helpers.tpl
│ ├── configmap.yaml
│ ├── deployment.yaml
│ └── service.yaml
└── values
    ├── values.yaml
    └── index.js
I tried the solution below in configmap.yaml but am getting an error:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test_config
data:
{{- tpl (.Files.Get (printf "values/index.js" .)) . | quote 12 }}
I'm getting this error:
Error: YAML parse error on app-name/templates/configmap.yaml: error converting YAML to JSON: yaml: line 6: did not find expected key
helm.go:88: [debug] error converting YAML to JSON: yaml: line 6: did not find expected key

So what you are trying does work, but I think you have issues with the indentation.
Use nindent to ensure all lines are indented, and don't quote the value.
kind: ConfigMap
apiVersion: v1
metadata:
  name: test
  namespace: test
data:
  index.js: | {{- tpl (.Files.Get "values/index.js") $ | nindent 4 }}
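For illustration, if the chart were installed as, say, helm install demo . into the default namespace (the release name and namespace here are assumptions), tpl would resolve the template expressions inside index.js and nindent 4 would indent every line by four spaces, so the rendered ConfigMap would come out roughly like this:
kind: ConfigMap
apiVersion: v1
metadata:
  name: test
  namespace: test
data:
  index.js: |
    exports.CDN_URL = 'http://100.470.255.255/';
    exports.CDN_NAME = 'staticFilesNew';
    exports.REDIS = [
      {
        host: 'redis-demo.default.svc.cluster.local',
        port: '26379',
      },
    ];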

Related

Kustomize overlays when using a shared ConfigMap

I have an environment made of pods that address their target environment based on an environment variable called CONF_ENV that could be test, stage or prod.
The application running inside the Pod has the same source code across environments, the configuration file is picked according to the CONF_ENV environment variable.
I've encapsulated this CONF_ENV in *.properties files just because I may have to add more environment variables later, but I make sure that each properties file contains the expected CONF_ENV, e.g.:
test.properties has CONF_ENV=test,
prod.properties has CONF_ENV=prod, and so on...
I struggle to make this work with Kustomize overlays, because I want to define a ConfigMap as a shared resource across all the pods within the same overlay, e.g. test (each pod in its own directory, along with other stuff when needed).
So the idea is:
base/ (shared) with the definition of the Namespace, the ConfigMap (and potentially other shared resources)
base/pod1/ with the definition of pod1 picking from the shared ConfigMap (this defaults to test, but in principle it could be different)
Then the overlays:
overlay/test that patches the base with CONF_ENV=test (e.g. for overlay/test/pod1/ and so on)
overlay/prod/ that patches the base with CONF_ENV=prod (e.g. for overlay/prod/pod1/ and so on)
Each directory has its own kustomization.yaml.
The above doesn't work: when I go into e.g. overlay/test/pod1/ and invoke kubectl kustomize . to check the output YAML, I get all sorts of errors depending on how I define the lists for the YAML keys bases: or resources:.
I am trying to share the ConfigMap across the entire CONF_ENV environment in an attempt to minimize the boilerplate YAML by leveraging the patching-pattern with Kustomize.
The Kubernetes / Kustomize YAML directory structure works like this:
├── base
│ ├── configuration.yaml # I am trying to share this!
│ ├── kustomization.yaml
│ ├── my_namespace.yaml # I am trying to share this!
│ ├── my-scheduleset-etl-misc
│ │ ├── kustomization.yaml
│ │ └── my_scheduleset_etl_misc.yaml
│ ├── my-scheduleset-etl-reporting
│ │ ├── kustomization.yaml
│ │ └── my_scheduleset_etl_reporting.yaml
│ └── test.properties # I am trying to share this!
└── overlay
    └── test
        ├── kustomization.yaml         # here I want to tell it to "go and pick up the shared resources in the base dir"
        ├── my-scheduleset-etl-misc
        │   ├── kustomization.yaml
        │   └── test.properties        # I've tried to share this one level above, but also to add it inside the "leaf" level for a given pod
        └── my-scheduleset-etl-reporting
            └── kustomization.yaml
The kubectl command with Kustomize:
sometimes complains that the shared namespace does not exist:
error: merging from generator &{0xc001d99530 { map[] map[]} {{ my-schedule-set-props merge {[CONF_ENV=test] [] [] } <nil>}}}:
id resid.ResId{Gvk:resid.Gvk{Group:"", Version:"v1", Kind:"ConfigMap", isClusterScoped:false}, Name:"my-schedule-set-props", Namespace:""}
does not exist; cannot merge or replace
sometimes doesn't allow shared resources inside an overlay:
error: loading KV pairs: env source files: [../test.properties]:
security; file '/my/path/to/yaml/overlay/test/test.properties'
is not in or below '/my/path/to/yaml/overlay/test/my-scheduleset-etl-misc'
sometimes doesn't allow cycles when I am trying to have multiple bases (the shared resources and the original pod definition):
error: accumulating resources: accumulation err='accumulating resources from '../':
'/my/path/to/yaml/overlay/test' must resolve to a file':
cycle detected: candidate root '/my/path/to/yaml/overlay/test'
contains visited root '/my/path/to/yaml/overlay/test/my-scheduleset-etl-misc'
The overlay kustomization.yaml files inside the pod dirs have:
bases:
- ../ # tried with/without this to share the ConfigMap
- ../../../base/my-scheduleset-etl-misc/
The kustomization.yaml at the root of the overlay has:
bases:
- ../../base
The kustomization.yaml at the base dir contains this configuration for the ConfigMap:
# https://gist.github.com/hermanbanken/3d0f232ffd86236c9f1f198c9452aad9
configMapGenerator:
- name: my-schedule-set-props
  namespace: my-ss-schedules
  envs:
  - test.properties
vars:
- name: CONF_ENV
  objref:
    kind: ConfigMap
    name: my-schedule-set-props
    apiVersion: v1
  fieldref:
    fieldpath: data.CONF_ENV
configurations:
- configuration.yaml
With configuration.yaml containing:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
varReference:
- path: spec/confEnv/value
  kind: Pod
How do I do this?
How do I make sure that I minimise the amount of YAML by sharing the ConfigMap definitions and the Pod definitions as much as I can?
If I understand your goal correctly, I think you may be grossly over-complicating things. I think you want a common properties file defined in your base, but you want to override specific properties in your overlays. Here's one way of doing that.
In base, I have:
$ cd base
$ tree
.
├── example.properties
├── kustomization.yaml
└── pod1
    ├── kustomization.yaml
    └── pod.yaml
Where example.properties contains:
SOME_OTHER_VAR=somevalue
CONF_ENV=test
And kustomization.yaml contains:
resources:
- pod1
configMapGenerator:
- name: example-props
  envs:
  - example.properties
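The pod1 directory isn't shown in full in this answer; reconstructing it from the rendered output shown further down, a minimal version would look roughly like this (a sketch: the Pod refers to the generated ConfigMap by its plain name, example-props, and the configMapGenerator rewrites that reference to the hash-suffixed name).
pod1/kustomization.yaml:
resources:
- pod.yaml
pod1/pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: alpine
    image: docker.io/alpine
    command: ["sleep", "1800"]
    envFrom:
    - configMapRef:
        name: example-props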
I have two overlays defined, test and prod:
$ cd ../overlays
$ tree
.
├── prod
│   ├── example.properties
│   └── kustomization.yaml
└── test
    └── kustomization.yaml
test/kustomization.yaml looks like this:
resources:
- ../../base
It's just importing the base without any changes, since the value of CONF_ENV from the base directory is test.
prod/kustomization.yaml looks like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
configMapGenerator:
- name: example-props
  behavior: merge
  envs:
  - example.properties
And prod/example.properties looks like:
CONF_ENV=prod
If I run kustomize build overlays/test, I get as output:
apiVersion: v1
data:
  CONF_ENV: test
  SOME_OTHER_VAR: somevalue
kind: ConfigMap
metadata:
  name: example-props-7245222b9b
---
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - command:
    - sleep
    - 1800
    envFrom:
    - configMapRef:
        name: example-props-7245222b9b
    image: docker.io/alpine
    name: alpine
If I run kustomize build overlays/prod, I get:
apiVersion: v1
data:
  CONF_ENV: prod
  SOME_OTHER_VAR: somevalue
kind: ConfigMap
metadata:
  name: example-props-h4b5tc869g
---
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - command:
    - sleep
    - 1800
    envFrom:
    - configMapRef:
        name: example-props-h4b5tc869g
    image: docker.io/alpine
    name: alpine
That is, everything looks as you would expect given the configuration in base, but we have provided a new value for CONF_ENV.
You can find all these files here.

helm trigger pod restart on parent-chart configmap change

I'm facing the problem that we use an umbrella Helm chart to deploy our services, but the services themselves are contained in subcharts. There are some global ConfigMaps, deployed by the umbrella chart, that multiple services use. A simplified structure looks like this:
├── Chart.yaml
├── templates
│ ├── global-config.yaml
├── charts
│ ├── subchart1
│ │ ├── Chart.yaml
│ │ ├── templates
│ │ │ ├── deployment1.yaml
│ ├── subchart2
│ │ ├── Chart.yaml
│ │ ├── templates
│ │ │ ├── deployment2.yaml
...
What I need is to put a checksum of global-config.yaml into deployment1.yaml and deployment2.yaml as an annotation, as in https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments
The problem is that I can define a template like this in the umbrella chart:
{{- define "configmap.checksum" }}{{ include (print $.Template.BasePath "/global-config.yaml") . | sha256sum }}{{ end }}
and then use {{ include "configmap.checksum" . }} in the deployment YAMLs, since templates are global. But $.Template.BasePath is evaluated when the include happens, so it actually points to the template directory of the subchart where it is included. I have played around with '..' and with .Files, and with passing another context into the template and into the include (such as passing the global context $), but none of these were successful, as I was always locked into the specific subchart at rendering time.
Is there a solution to this, or a different approach? Or is this something that cannot be done in Helm? I'm grateful for any suggestions.
I found a workaround that requires you to refactor the contents of your global ConfigMap into a template in the umbrella chart. You won't be able to use regular values from the umbrella chart in your ConfigMap, otherwise you'll get an error when trying to render the template in a subchart; you'll have to use global values instead.
{{/*
Create the contents of the global-config ConfigMap
*/}}
{{- define "my-umbrella-chart.global-config-contents" -}}
spring.activemq.user={{ .Values.global.activeMqUsername | quote }}
spring.activemq.password={{ .Values.global.activeMqPassword | quote }}
jms.destinations.itemCreationQueue={{ .Release.Namespace }}-itemCreationQueue
{{- end }}
kind: ConfigMap
apiVersion: v1
metadata:
  name: global-config
data:
  {{- include "my-umbrella-chart.global-config-contents" . | nindent 2 }}
Then you add a checksum of my-umbrella-chart.global-config-contents in deployment1.yaml and deployment2.yaml as an annotation like so:
annotations:
  # Automatically roll deployment when the configmap contents changes
  checksum/global-config: {{ include "my-umbrella-chart.global-config-contents" . | sha256sum }}
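To make the placement concrete, the annotation goes on the pod template inside each subchart's Deployment, roughly like this (a sketch; the service1 name, labels and image are placeholders, not taken from the original charts):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service1
spec:
  selector:
    matchLabels:
      app: service1
  template:
    metadata:
      labels:
        app: service1
      annotations:
        # Re-rendering the included template here produces a new checksum
        # whenever the global config contents change, which triggers a rollout
        checksum/global-config: {{ include "my-umbrella-chart.global-config-contents" . | sha256sum }}
    spec:
      containers:
      - name: service1
        image: example.org/service1:latest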

kustomize edit set image doesn't work with kustomize multibases and common base

I am using this example:
├── base
│   ├── kustomization.yaml
│   └── pod.yaml
├── dev
│   └── kustomization.yaml
├── kustomization.yaml
├── production
│   └── kustomization.yaml
└── staging
    └── kustomization.yaml
and in the kustomization.yaml file in the root:
resources:
- ./dev
- ./staging
- ./production
I also have the image transformer code in the dev, staging, and production kustomization.yaml files:
images:
- name: my-app
  newName: gcr.io/my-platform/my-app
To build a single deployment manifest, I use:
(cd dev && kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.2 && kustomize build .)
which simply works!
To build the deployment manifests for all overlays (dev, staging, production), I use:
(kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.2 && kustomize build .)
which uses the kustomization.yaml in the root, which contains all the resources (dev, staging, production).
It does work, and the final build is printed to the console, but without the image tag.
It seems like kustomize edit set image only updates the kustomization.yaml of the current dir.
Is there anything that can be done to handle this scenario in an easy and efficient way, so that the final output contains the image tag for all deployments as well?
To test please use this repo
It took some time to realise what happens here. I'll explain step by step what happens and how it should work.
What happens
Firstly I re-created the same structure:
$ tree
.
├── base
│   ├── kustomization.yaml
│   └── pod.yaml
├── dev
│   └── kustomization.yaml
├── kustomization.yaml
└── staging
    └── kustomization.yaml
When you run this command for a single deployment:
(cd dev && kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.2 && kustomize build .)
you change the working directory to dev, manually override the image my-app with the new name gcr.io/my-platform/my-app and tag 0.0.2, and then render the deployment.
The thing is, the previously added transformer code gets overridden by the command above. You can remove the transformer code, run the command above, and get the same result. After running the command you will find that your dev/kustomization.yaml looks like this:
resources:
- ./../base
namePrefix: dev-
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: my-app
  newName: gcr.io/my-platform/my-app
  newTag: 0.0.2
Then what happens when you run this command from the main directory:
(kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.2 && kustomize build .)
kustomize first goes into the overlays and applies the transformer code located in their kustomization.yaml files. When this part is finished, the image name is no longer my-app but gcr.io/my-platform/my-app.
At this point the kustomize edit set image command looks for an image named my-app, can't find one, and therefore does NOT apply the tag.
What to do
You need to use the transformed image name if you run kustomize edit in the main working directory:
$ kustomize edit set image gcr.io/my-platform/my-app=*:0.0.4 && kustomize build .
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: my-app
  name: dev-myapp-pod
spec:
  containers:
  - image: gcr.io/my-platform/my-app:0.0.4
    name: my-app
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: my-app
  name: stag-myapp-pod
spec:
  containers:
  - image: gcr.io/my-platform/my-app:0.0.4
    name: my-app
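For reference, after that edit the root kustomization.yaml should end up with an images entry roughly like the following (a sketch; because * is used as the new name, only the tag is overridden and the already-transformed name is matched as-is):
images:
- name: gcr.io/my-platform/my-app
  newTag: 0.0.4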

How to access values from a custom values.yaml file in templates (Helm)

I am trying to access key-value pairs from a custom values file, secretvalues.yaml, passed to Helm with the -f parameter. The key-value pairs in this file are used in the template postgres.configmap.yml.
Here is my folder structure (there are a few other charts, but I have removed them for simplicity):
├── k8shelmcharts
│ ├── Chart.yaml
│ ├── charts
│ │ ├── postgres-service
│ │ │ ├── Chart.yaml
│ │ │ ├── templates
│ │ │ │ ├── postgres.configmap.yml
│ │ │ │ ├── postgres.deployment.yml
│ │ │ │ └── postgres.service.yml
│ │ │ └── values.yaml
│ └── secretvalues.yaml
The contents of the values.yaml file in the postgres-service/ folder are:
config:
  postgres_admin_name: "postgres"
The contents of the secretvalues.yaml file in the k8shelmcharts/ folder are:
secrets:
  postgres_admin_password: "password"
and the contents of the postgres.configmap.yml file in the postgres-service/templates/ folder are:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
data:
  # property-like keys; each key maps to a simple value
  POSTGRES_USER: {{ .Values.config.postgres_admin_name }}
  POSTGRES_PASSWORD: {{ .secretvalues.secrets.postgres_admin_password }}
I have tried several combinations here, like .secretvalues.secrets.postgres_admin_password and .Secretvalues.secrets.postgres_admin_password, and tried removing the secrets key, but to no avail.
When I run the command to install the charts, helm install -f k8shelmcharts/secretvalues.yaml testapp k8shelmcharts/ --dry-run --debug, I get the error:
Error: template: flask-k8s-app/charts/postgresdb/templates/postgres.configmap.yml:8:37: executing "flask-k8s-app/charts/postgresdb/templates/postgres.configmap.yml" at <.secretvalues.parameters.postgres_admin_password>: nil pointer evaluating interface {}.parameters
helm.go:94: [debug] template: flask-k8s-app/charts/postgresdb/templates/postgres.configmap.yml:8:37: executing "flask-k8s-app/charts/postgresdb/templates/postgres.configmap.yml" at <.secretvalues.parameters.postgres_admin_password>: nil pointer evaluating interface {}.parameters
My question is: how do I access the secrets.postgres_admin_password value?
I am using Helm 3.
Thanks!
Trying to access the values from the secretvalues.yaml file by using POSTGRES_PASSWORD: {{ .Values.config.postgres_admin_password }} in postgres.configmap.yml seems to pull null/nil values.
I get this error when I run helm install testapp k8shelmcharts/ --dry-run --debug -f k8shelmcharts/secretvalues.yaml:
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: unknown object type "nil" in ConfigMap.data.POSTGRES_PASSWORD
helm.go:94: [debug] error validating "": error validating data: unknown object type "nil" in ConfigMap.data.POSTGRES_PASSWORD
When I try to debug the template using helm template k8shelmcharts/ --debug, I see:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
data:
  # property-like keys; each key maps to a simple value
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD:
indicating helm is not able to pull values from the secretvalues.yaml file. NOTE: I have updated the key from secrets to config in the secretvalues.yaml file.
Values from all files are merged into one Values object, so you should access variables from secretvalues.yaml the same way you access the others, i.e.:
.Values.secrets.postgres_admin_password
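Applied to the ConfigMap above, the data section would then look roughly like this (a sketch based on that answer, assuming the secrets: block is visible to the postgres-service subchart; | quote is added because ConfigMap values must be strings):
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
data:
  POSTGRES_USER: {{ .Values.config.postgres_admin_name | quote }}
  POSTGRES_PASSWORD: {{ .Values.secrets.postgres_admin_password | quote }}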

Unable to read files using common library

I have an issue when trying to use a file from a library chart. Helm fails when I try to access a file from the library.
I have followed the example from the library_charts documentation.
Everything is the same as in the documentation except for two parts:
I have added the file mylibchart/files/foo.conf and this file is referenced in mylibchart/templates/_configmap.yaml's data key (in the documentation, data is an empty object):
├── mychart
│   ├── Chart.yaml
│   └── templates
│       └── configmap.yaml
└── mylibchart
    ├── Chart.yaml
    ├── files
    │   └── foo.conf
    └── templates
        ├── _configmap.yaml
        └── _util.yaml
cat mylibchart/templates/_configmap.yaml
{{- define "mylibchart.configmap.tpl" -}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name | printf "%s-%s" .Chart.Name }}
data:
  fromlib: yes
{{ (.Files.Glob "files/foo.conf").AsConfig | nindent 2 }}
{{- end -}}
{{- define "mylibchart.configmap" -}}
{{- include "mylibchart.util.merge" (append . "mylibchart.configmap.tpl") -}}
{{- end -}}
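For context, mychart/templates/configmap.yaml calls the library template following the pattern from the library charts documentation, roughly like this (reproduced from that documentation example, so the override contents are illustrative):
{{- include "mylibchart.configmap" (list . "mychart.configmap") -}}
{{- define "mychart.configmap" -}}
data:
  myvalue: "Hello World"
{{- end -}}
The context passed into the include here is mychart's own rendering context, which is consistent with the observation below that .Files.Glob ends up looking under mychart/files/ rather than mylibchart/files/.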
The failure is caused by the fact that mychart/files/foo.conf does not exist. If I create it, the rendering no longer crashes, but the ConfigMap contains mychart/files/foo.conf's content instead of mylibchart/files/foo.conf's content.
The file foo.conf does exist inside the archive generated by helm dependency update (mychart/charts/mylibchart-0.1.0.tgz).
How can I use that file from the .tgz file?
You can easily reproduce the issue by cloning this project: https://github.com/florentvaldelievre/helm-issue
Helm version:
version.BuildInfo{Version:"v3.2.3", GitCommit:"8f832046e258e2cb800894579b1b3b50c2d83492", GitTreeState:"clean", GoVersion:"go1.13.12"}