How to create a ConfigMap from a directory using Helm

I have three JSON files in a postman folder. How can I create a ConfigMap from them using a Helm YAML template?
kubectl create configmap test-config --from-file=clusterfitusecaseapihelm/data/postman/
The above command works, but I need this as a YAML file since I am using Helm.

Inside a Helm template, you can use the Files.Glob and AsConfig helper functions to achieve this:
{{- (.Files.Glob "postman/**.json").AsConfig | nindent 2 }}
Or, to give a full example of a Helm template:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-config
data:
{{- (.Files.Glob "postman/**.json").AsConfig | nindent 2 }}
See the documentation (especially the section on "ConfigMap and secrets utility functions") for more information.
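Note that .Files.Glob paths are relative to the chart root, so this assumes the JSON files are shipped inside the chart itself (and not excluded by .helmignore). A hypothetical layout, with made-up chart and file names, would be:
mychart/
  Chart.yaml
  values.yaml
  postman/
    collection-a.json
    collection-b.json
    collection-c.json
  templates/
    configmap.yaml    # contains the template shown above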

Related

Configmap data is blank

I am creating a ConfigMap using this YAML file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: "{{ .Values.configmapName }}-config"
  labels:
    app: "{{ .Values.configmapName }}"
data:
{{ (.Files.Glob "configmap/dev/*").AsConfig | indent 2 }}
The ConfigMap gets created, but there is no data in it; it is blank. I have multiple files under a configmap/dev/ directory located at the root level of my Helm chart, and I'm expecting to see the data from the files under this directory.
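No answer is quoted here, but two things are worth checking with a setup like this: .Files can only read files that are packaged inside the chart directory, and anything matched by .helmignore is silently skipped, which makes the glob come back empty. As a rough sketch (chart and file names hypothetical), the structure the template expects is:
mychart/
  Chart.yaml
  .helmignore          # must not exclude configmap/
  configmap/
    dev/
      app.properties   # example files, any names work
      logging.conf
  templates/
    configmap.yaml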

Helm Variables inside ConfigMap File

So I had a ConfigMap with a JSON configuration file in it, like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config-map
data:
  config.json: |+
    {
      "some-url": "{{ .Values.myApp.someUrl }}"
    }
But I've moved to having my config files outside the ConfigMap's yaml, and just referencing them there, like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config-map
data:
  config.json: |-
{{ .Files.Get .Values.myApp.configFile | indent 4 }}
But now I want my JSON to look like the following:
{
  "some-url": "{{ .Values.myApp.someUrl }}"
}
The only thing I tried is what I just showed. I'm not even sure how to search for this answer.
Is it even possible?
When the file is read, its content is just a string. It is not evaluated as a template, and therefore you cannot use variables inside it the way you do.
However, Helm has a function specifically for this purpose, called tpl:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config-map
data:
  config.json: |-
{{ tpl (.Files.Get .Values.myApp.configFile) $ | indent 4 }}
The tpl function takes a template string and renders it with a given context. This is useful when you have template snippets in your values file or, as in your case, in the content of some files.
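To make that concrete, here is a minimal sketch of how the pieces might fit together (file paths and values are hypothetical, not taken from the question):
# values.yaml
myApp:
  configFile: config/config.json
  someUrl: https://example.com/api

# config/config.json - contains a template expression that tpl will render
{
  "some-url": "{{ .Values.myApp.someUrl }}"
}

# resulting ConfigMap data after helm template
data:
  config.json: |-
    {
      "some-url": "https://example.com/api"
    }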

data node empty in Configmap on helm install

These are my Helm chart files:
# helm/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  mycfgmap: |-
{{ .Files.Get (printf "environments/configmap.%s\n.yaml" .Values.namespace) | indent 4 }}

# helm/environments/values.dev.yaml
namespace: dev

# helm/environments/configmap.dev.yaml
MY_ENV_VARIABLE_1: "true"
This is how I am installing my helm charts:
helm install --dry-run --debug --create-namespace -n dev -f helm/environments/values.dev.yaml my-test-release helm
This is the output I am getting:
# Source: helm/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  mycfgmap: |-
---
I am expecting this to be in the data node (as defined in my configmap.dev.yaml):
MY_ENV_VARIABLE_1: "true"
Please note that I have already tried these for helm/templates/configmap.yaml:
{{ .Files.Get (printf "../environments/configmap.%s\n.yaml" .Values.namespace) | indent 4 }}
# and
{{ .Files.Get (printf "environments/configmap.%s.yaml" .Values.namespace) | indent 4 }}
and it doesn't work. I have also looked at similar questions answered on SO, but my problem seems to be something else.
I can see from your message that the file you have used for the ConfigMap data is helm/environments/config.dev.yaml. Please rename it to configmap.dev.yaml so it matches the pattern in your printf, and change your printf statement as shown below (the stray \n is removed). Once you make these changes, things should work fine.
{{ .Files.Get (printf "environments/configmap.%s.yaml" .Values.namespace) | indent 4 }}
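As a quick check, you can render only this template and confirm the data node is populated, for example (using the same release name and paths as in the question):
helm template --debug -f helm/environments/values.dev.yaml --show-only templates/configmap.yaml my-test-release helm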

Reuse uuid in helm charts

I am writing Helm charts, and they create one Deployment and one StatefulSet component.
Now I want to generate a UUID and pass the value to both Kubernetes components.
I am using the uuid function to generate the UUID, but I need help with how to pass this value to both components.
Here is my chart folder structure:
projectdir
  chart1
    templates
      statefulset.yaml
  chart2
    templates
      deployment.yaml
  helperchart
    templates
      _helpers.tpl
I have to write the logic to generate the UUID in _helpers.tpl.
Edit: It seems that defining it in _helpers.tpl does not work - thank you for pointing it out.
I have looked it up a bit, and it seems that currently the only way to achieve this is to put both manifests, separated by ---, into the same file under templates/. See the following example, where the UUID is defined on the first line and then used in both the Deployment and the StatefulSet:
{{- $mySharedUuid := uuidv4 -}}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "uuid-test.fullname" . }}-1
  labels:
    {{- include "uuid-test.labels" . | nindent 4 }}
  annotations:
    my-uuid: {{ $mySharedUuid }}
spec:
  ...
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ include "uuid-test.fullname" . }}-2
  labels:
    {{- include "uuid-test.labels" . | nindent 4 }}
  annotations:
    my-uuid: {{ $mySharedUuid }}
spec:
  ...
After templating, the output is:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uuid-test-app-1
  labels:
    helm.sh/chart: uuid-test-0.1.0
    app.kubernetes.io/name: uuid-test
    app.kubernetes.io/instance: uuid-test-app
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    my-uuid: fe0346f5-a963-4ca1-ada0-af17405f3155
spec:
  ...
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: uuid-test-app-2
  labels:
    helm.sh/chart: uuid-test-0.1.0
    app.kubernetes.io/name: uuid-test
    app.kubernetes.io/instance: uuid-test-app
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    my-uuid: fe0346f5-a963-4ca1-ada0-af17405f3155
spec:
  ...
See the same issue: https://github.com/helm/helm/issues/6456
Note that this approach will still cause the UUID to be regenerated when you do a helm upgrade. To circumvent that, you would need to use another workaround along with this one.
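One possible workaround (not part of the original answer, just a sketch) is Helm's lookup function: reuse the value from a Secret that already exists in the cluster and only generate a fresh UUID on the first install. Be aware that lookup returns an empty result during helm template and --dry-run, so this only helps on a real install or upgrade. The Secret name and key below are hypothetical:
{{- /* reuse the UUID from an existing Secret if present, otherwise generate one */}}
{{- $existing := lookup "v1" "Secret" .Release.Namespace "my-uuid-secret" }}
{{- $uuid := uuidv4 }}
{{- if $existing }}
{{- $uuid = index $existing.data "the-uuid" | b64dec }}
{{- end }}
apiVersion: v1
kind: Secret
metadata:
  name: my-uuid-secret
data:
  the-uuid: {{ $uuid | b64enc }}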
You should explicitly pass the value in as a Helm value; don't try to generate it in the chart.
The other answers to this question highlight a couple of the issues you'll run into. @Utku Özdemir notes that every time you call the Helm uuidv4 function it will create a new random UUID, so you can only call that function once in the chart ever; and @srr further notes that there's no way to persist a generated value like this, so if you helm upgrade the chart the UUID value will be regenerated, which will cause all of the involved Kubernetes objects to be redeployed.
The Bitnami RabbitMQ chart has an interesting middle road here. One of its configuration options is an "Erlang cookie", also a random string that needs to be consistent across all replicas and upgrades. On an initial install it generates a random value if one isn't provided, and tells you how to retrieve it from a Secret; but if .Release.IsUpgrade then you must provide the value directly, and the error message explains how to get it from your existing deployment.
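A rough sketch of that install-versus-upgrade guard (not the actual Bitnami template; the value name is hypothetical):
{{- /* fall back to a generated value on first install, require it explicitly on upgrade */}}
{{- $cookie := .Values.erlangCookie }}
{{- if not $cookie }}
{{- if .Release.IsUpgrade }}
{{- fail "erlangCookie is required on upgrade; read it from the existing Secret and pass it with --set erlangCookie=..." }}
{{- end }}
{{- $cookie = randAlphaNum 32 }}
{{- end }}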
You may be able to get around the "only call uuidv4 once ever" problem by putting the value into a ConfigMap or Secret, and then referencing it from elsewhere. This works only if the only place you use the UUID value is in an environment variable, or something else that can have a value injected from a secret; it won't help if you need it in an annotation or label.
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "chart.name" . }}
data:
  the-uuid: {{ .Values.theUuid | default uuidv4 | b64enc }}
  {{- /* this is the only place uuidv4 ^^^^^^ is called at all */}}

env:
  - name: THE_UUID
    valueFrom:
      secretKeyRef:
        name: {{ template "chart.name" . }}
        key: the-uuid
As suggested in the Helm issue tracker (https://github.com/helm/helm/issues/6456), we have to put both components in the same file, and it looks like that's the only solution right now.
It's a surprise that Helm does not support caching a value so it can be shared across charts/components. I wish Helm would support this feature in the future.

Files.Get concatenated string value appears empty after helm template in Kubernetes Configmap

I'm using a ConfigMap with a dynamic filename, defined as below. However, after I run helm template, the value for the file is empty:
apiVersion: v1
kind: ConfigMap
metadata:
  name: krb5-configmap
data:
  krb5.conf: |-
    {{ .Files.Get (printf "%s//krb5-%s.conf" .Values.kerberosConfigDirectory .Values.environment) | indent 4 }}
kerberosConfigDirectory: kerberos-configs (set in values.yaml)
Folder structure:
k8s
  templates
    configmap.yaml
  kerberos-configs
    krb5-dev.conf
After helm template the data value looks like this:
data:
  krb5.conf: |-
    --
I can't figure out why the value for the filename is empty. Note that I'm able to run the helm template command successfully.
You have an extra / and extra indentation in your file. Working example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: krb5-configmap
data:
  krb5.conf: |-
{{ .Files.Get (printf "%s/krb5-%s.conf" .Values.kerberosConfigDirectory .Values.environment) | indent 4 }}
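For completeness, a values.yaml that satisfies this template would look something like the following (the environment value is an assumption based on the krb5-dev.conf file name), which makes the printf evaluate to kerberos-configs/krb5-dev.conf:
# values.yaml
kerberosConfigDirectory: kerberos-configs
environment: dev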