Helm templating seems to only render alphabetically - kubernetes-helm

I'm trying to template volumes and volumeMounts in a deployment.
The output I want is:
volumeMounts:
  - name: appsettings
    mountPath: /usr/share/nginx/html/appsettings.json
    subPath: appsettings.json
volumes:
  - name: appsettings
    configMap:
      name: appsettings-file
Here is my values.yml:
volumemounts:
  - name: appsettings
    mountPath: /usr/share/nginx/html/appsettings.json
    subPath: appsettings.json
volumes:
  - name: appsettings
    configMap:
      name: appsettings-file
Here is my template:
{{- with .Values.volumemounts }}
volumeMounts:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.volumes }}
volumes:
{{- toYaml . | nindent 4 }}
{{- end }}
Here is the output:
volumeMounts:
  - mountPath: /usr/share/nginx/html/appsettings.json
    name: appsettings
    subPath: appsettings.json
volumes:
  - configMap:
      name: appsettings-file
    name: appsettings
If I change lines 2 and 6 in values.yml to be - aname: appsettings
The output is:
volumeMounts:
  - aname: appsettings
    mountPath: /usr/share/nginx/html/appsettings.json
    subPath: appsettings.json
volumes:
  - aname: appsettings
    configMap:
      name: appsettings-file
So it seems Helm always renders the array's keys in alphabetical order.
(I also tried print instead of toYaml; of course it gave an error, but I could see that it still fetched the array sorted alphabetically: [map[configMap:map[name:appsettings-file] name:appsettings]])
What am I missing? How should I do this?

toYaml in fact alphabetizes dictionary keys. This doesn't usually matter; Kubernetes will treat both versions of the pod spec the same. If you want the dictionary to be serialized in a specific order, you'll need to write out the keys by hand.
In this particular case, I probably wouldn't try to make this low-level detail of the pod spec be configurable. I'd write it out in the template file instead. If this entire ConfigMap mount is optional, put it behind a single shared {{ if .Values.mountConfig }} option. That would also remove the toYaml call.
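For example, here is a minimal sketch of that approach, assuming a hypothetical .Values.mountConfig flag and reusing the names from the question. Since the keys are written out by hand, they render in exactly the order shown:
{{- if .Values.mountConfig }}
volumeMounts:
  - name: appsettings
    mountPath: /usr/share/nginx/html/appsettings.json
    subPath: appsettings.json
{{- end }}
...
{{- if .Values.mountConfig }}
volumes:
  - name: appsettings
    configMap:
      name: appsettings-file
{{- end }}
A single boolean in values.yaml then switches the whole mount on or off, and there is nothing left for toYaml to re-sort.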

Related

How to set the value in envFrom as the value of env?

I have a configMap that stores a json file:
apiVersion: v1
data:
  config.json: |
    {{- toPrettyJson $.Values.serviceConfig | nindent 4 }}
kind: ConfigMap
metadata:
  name: service-config
  namespace: {{ .Release.Namespace }}
In my deployment.yaml, I use a volume and envFrom:
spec:
  ... ...
  volumes:
    - name: config-vol
      configMap:
        name: service-config
  containers:
    - name: {{ .Chart.Name }}
      envFrom:
        - configMapRef:
            name: service-config
      ... ...
      volumeMounts:
        - mountPath: /src/config
          name: config-vol
After I deployed the Helm chart and ran kubectl describe pods, I got this:
Environment Variables from:
  service-config  ConfigMap  Optional: false
I wonder how I can get/use this service-config in my code. My values.yaml lets me set values under env, but I don't know how to extract the value of the ConfigMap. Is there a way I can expose this service-config, or the JSON file stored in it, as an env variable in values.yaml? Thank you in advance!

How to overwrite Table/Mapping in Helm Chart

I have a values.yaml that has
ingress:
  enabled: false
volume:
  hostPath:
    path: /tmp
    type: DirectoryOrCreate
I have an overlay.yaml that overrides values from values.yaml.
ingress:
  enabled: true
volume:
  persistentVolumeClaim:
    claimName: test
For the ingress, it works as I expected: the value of enabled changes to true. However, for the volume, it appears that tables merge into each other rather than being overwritten. For instance, I get something like:
volume:
  persistentVolumeClaim:
    claimName: test
  hostPath:
    path: /tmp
    type: DirectoryOrCreate
I would like to specify a default volume type and its configuration (e.g. path) in values.yaml, but give others the freedom to change it through an overlay. However, what I have now "adds" a volume type rather than overwriting it. Is there a way to accomplish this?
null is a specific valid YAML value (the same as JSON null). If you set a Helm value to null, then the Go text/template logic will unmarshal it to Go nil and it will appear as "false" in if statements and similar conditionals.
volume:
  persistentVolumeClaim:
    claimName: test
  hostPath: null
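As a sketch (the release and chart names are placeholders), the null override can be supplied from an override file or directly on the command line:
# overlay.yaml contains the volume: block above
helm upgrade --install my-release ./my-chart -f overlay.yaml
# or, equivalently, null out the key with --set
helm upgrade --install my-release ./my-chart \
  --set volume.persistentVolumeClaim.claimName=test \
  --set volume.hostPath=null
Either way the hostPath key is removed during the merge, so a template conditional on .Values.volume.hostPath no longer sees it.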
I might avoid this problem in chart logic, though. One approach would be to use a separate type field to say which sub-field you're looking for:
# the chart's defaults in values.yaml
volume:
  type: HostPath
  hostPath: { ... }

# your `helm install -f` overrides
volume:
  type: PersistentVolumeClaim
  persistentVolumeClaim: { ... }
# (the default hostPath: will be present but unused)
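A template-side sketch of how that type field might be consumed (this dispatch is illustrative, not taken from an existing chart):
      volumes:
        - name: storage
          {{- if eq .Values.volume.type "PersistentVolumeClaim" }}
          persistentVolumeClaim:
            claimName: {{ .Values.volume.persistentVolumeClaim.claimName }}
          {{- else if eq .Values.volume.type "HostPath" }}
          hostPath:
            path: {{ .Values.volume.hostPath.path }}
          {{- end }}
The unused sub-block can stay in the merged values without affecting the rendered manifest.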
A second option is to make the default "absent", and either disable the feature entirely or construct sensible defaults in the chart code if it's not present.
# values.yaml
# volume specifies where the application will keep persistent data.
# If unset, each replica will have independent data and this data will
# be lost on restart.
#
# volume:
#
#   # persistentVolumeClaim stores data in an existing PVC.
#
#   persistentVolumeClaim:
#     name: ???
# deep in templates/deployment.yaml
      volumes:
        {{- if .Values.volume.persistentVolumeClaim }}
        - name: storage
          persistentVolumeClaim:
            claimName: {{ .Values.volume.persistentVolumeClaim.name }}
        {{- else if .Values.volume.hostPath }}
        - name: storage
          hostPath:
            path: {{ .Values.volume.hostPath.path }}
        {{- end }}
        {{- /* (just don't create a volume if not set) */}}
Or, to always provide some kind of storage, even if it's not that useful:
      volumes:
        - name: storage
          {{- if .Values.volume.persistentVolumeClaim }}
          persistentVolumeClaim:
            claimName: {{ .Values.volume.persistentVolumeClaim.name }}
          {{- else if .Values.volume.hostPath }}
          hostPath:
            path: {{ .Values.volume.hostPath.path }}
          {{- else }}
          emptyDir: {}
          {{- end }}

Kubernetes copying jars into a pod and restart

I have a Kubernetes problem where I need to copy 2 jars (each jar > 1 MB) into a pod after it is deployed. Since a ConfigMap cannot hold more than 1 MB, the idea is to use wget in an initContainer to download the jars.
Below is my Kubernetes template, which I have modified. The original is available at https://github.com/dremio/dremio-cloud-tools/blob/master/charts/dremio/templates/dremio-executor.yaml
{{ if not .Values.DremioAdmin }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dremio-executor
spec:
  serviceName: "dremio-cluster-pod"
  replicas: {{.Values.executor.count}}
  podManagementPolicy: "Parallel"
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: dremio-executor
  template:
    metadata:
      labels:
        app: dremio-executor
        role: dremio-cluster-pod
      annotations:
        dremio-configmap/checksum: {{ (.Files.Glob "config/*").AsConfig | sha256sum }}
    spec:
      terminationGracePeriodSeconds: 5
      {{- if .Values.nodeSelector }}
      nodeSelector:
        {{- range $key, $value := .Values.nodeSelector }}
        {{ $key }}: {{ $value }}
        {{- end }}
      {{- end }}
      containers:
      - name: dremio-executor
        image: {{.Values.image}}:{{.Values.imageTag}}
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 0
        resources:
          requests:
            memory: {{.Values.executor.memory}}M
            cpu: {{.Values.executor.cpu}}
        volumeMounts:
        - name: dremio-executor-volume
          mountPath: /opt/dremio/data
        ##################### START added this section #####################
        - name: dremio-connector
          mountPath: /opt/dremio/jars
        #################### END added this section ##########################
        - name: dremio-config
          mountPath: /opt/dremio/conf
        env:
        - name: DREMIO_MAX_HEAP_MEMORY_SIZE_MB
          value: "{{ template "HeapMemory" .Values.executor.memory }}"
        - name: DREMIO_MAX_DIRECT_MEMORY_SIZE_MB
          value: "{{ template "DirectMemory" .Values.executor.memory }}"
        - name: DREMIO_JAVA_EXTRA_OPTS
          value: >-
            -Dzookeeper=zk-hs:2181
            -Dservices.coordinator.enabled=false
            {{- if .Values.extraStartParams }}
            {{ .Values.extraStartParams }}
            {{- end }}
        command: ["/opt/dremio/bin/dremio"]
        args:
        - "start-fg"
        ports:
        - containerPort: 45678
          name: server
      initContainers:
      ################ START added this section ######################
      - name: installjars
        image: {{.Values.image}}:{{.Values.imageTag}}
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: dremio-connector
          mountPath: /opt/dremio/jars
        command: ["/bin/sh","-c"]
        args: ["wget --no-check-certificate -O /dir/connector.jar https://<some nexus repo URL>/connector.jar; sleep 10;"]
      ################ END added this section ###############
      - name: wait-for-zk
        image: busybox
        command: ["sh", "-c", "until ping -c 1 -W 1 zk-hs > /dev/null; do echo waiting for zookeeper host; sleep 2; done;"]
      # since we're mounting a separate volume, reset permission to
      # dremio uid/gid
      - name: chown-data-directory
        image: {{.Values.image}}:{{.Values.imageTag}}
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: dremio-executor-volume
          mountPath: /opt/dremio/data
        command: ["chown"]
        args:
        - "dremio:dremio"
        - "/opt/dremio/data"
      volumes:
      - name: dremio-config
        configMap:
          name: dremio-config
      {{- if .Values.imagePullSecrets }}
      imagePullSecrets:
      - name: {{ .Values.imagePullSecrets }}
      {{- end}}
      #################### START added this section ########################
      - name: dremio-connector
        emptyDir: {}
      #################### END added this section ########################
  volumeClaimTemplates:
  - metadata:
      name: dremio-executor-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      {{- if .Values.storageClass }}
      storageClassName: {{ .Values.storageClass }}
      {{- end }}
      resources:
        requests:
          storage: {{.Values.executor.volumeSize}}
{{ end }}
The above is NOT working: I don't see any jars being downloaded when I exec into the pod, and I don't understand what is wrong. Note, however, that if I run the same wget command inside the pod it does download the jar, which baffles me. So the URL works and reading/writing the directory is no problem, but the jar still isn't downloaded. Why?
If you can remove the need for Wget altogether it would make life easier...
Option 1
Using your own Docker image will save some pain, if that's an option.
Dockerfile
# docker build -f Dockerfile -t ghcr.io/yourOrg/projectId/dockerImageName:0.0.1 .
# docker push ghcr.io/yourOrg/projectId/dockerImageName:0.0.1
FROM nginx:1.19.10-alpine
# Use local copies of config
COPY files/some1.jar /dir/
COPY files/some2.jar /dir/
The files will already be in the container, so there is no need for cryptic commands in your pod definition. Alternatively, if you need to download the files, you could instead copy a script that does that work into the Docker image and run it on startup via the Docker CMD directive.
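For instance, a rough sketch of that variant; the script name and the way it chains into the nginx entrypoint are illustrative, not part of the original chart:
# docker build -f Dockerfile -t ghcr.io/yourOrg/projectId/dockerImageName:0.0.2 .
FROM nginx:1.19.10-alpine
# helper script that downloads the jars into /dir at container start
COPY files/fetch-jars.sh /usr/local/bin/fetch-jars.sh
RUN chmod +x /usr/local/bin/fetch-jars.sh
# download the jars, then start nginx as usual
CMD ["/bin/sh", "-c", "/usr/local/bin/fetch-jars.sh && exec nginx -g 'daemon off;'"]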
Option 2
Alternatively, you could do a two-stage deployment (sketched after this list)...
Create a persistent volume
mount the volume to a pod (use busybox as a base?) that will run for enough time for the files to copy across from your local machine (or for them to be downloaded if you continue to use Wget)
kubectl cp the files you need to the (Retained) PersistentVolume
Now mount the PV to your pod's container(s) so the files are readily available when the pod fires up.
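A minimal sketch of that flow; the PVC name, pod name, and file paths are placeholders:
# pvc-loader.yaml: throwaway pod that mounts the (Retain) PVC
apiVersion: v1
kind: Pod
metadata:
  name: jar-loader
spec:
  containers:
    - name: loader
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: jars
          mountPath: /jars
  volumes:
    - name: jars
      persistentVolumeClaim:
        claimName: dremio-jars-pvc
# then, from your machine:
#   kubectl apply -f pvc-loader.yaml
#   kubectl cp ./connector.jar jar-loader:/jars/connector.jar
#   kubectl delete pod jar-loader
# finally, mount the same PVC in the dremio-executor pod spec so the jars are there at startup.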
Your approach seems right.
Another solution could be to include the jar in the Docker image, but I think that's not possible, right?
You could just use an emptyDir instead of a VolumeClaim.
Lastly, I would download the jar before waiting for ZooKeeper, to save some time.

ConfigMap mounted on Persistent Volume Claims

In my deployment, I would like to use a Persistent Volume Claim in combination with a config map mount. For example, I'd like the following:
volumeMounts:
  - name: py-js-storage
    mountPath: /home/python
  - name: my-config
    mountPath: /home/python/my-config.properties
    subPath: my-config.properties
    readOnly: true
...
volumes:
  - name: py-storage
    {{- if .Values.py.persistence.enabled }}
    persistentVolumeClaim:
      claimName: python-storage
    {{- else }}
    emptyDir: {}
    {{- end }}
Is this a possible and viable way to go? Is there any better way to approach such situation?
Since you didn't give your use case, my answer will focus on whether it is possible or not. In fact: yes, it is.
I'm supposing you wish to mount a file from a ConfigMap into a mount point that already contains other files, and your approach of using subPath is correct!
When you need to mount different volumes on the same path, you need to specify subPath, or the content of the original dir will be hidden.
In other words, if you want to keep both files (the ones from the mount point and the one from the ConfigMap), you must use subPath.
To illustrate this, I've tested with the deployment code below. There I mount the hostPath /mnt, which contains a file called filesystem-file.txt, into my pod, along with the file /mnt/configmap-file.txt from my ConfigMap test-pd-plus-cfgmap:
Note: I'm using Kubernetes 1.18.1
Configmap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-pd-plus-cfgmap
data:
  file-from-cfgmap: file data
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-pv
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-pv
  template:
    metadata:
      labels:
        app: test-pv
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - mountPath: /mnt
          name: task-pv-storage
        - mountPath: /mnt/configmap-file.txt
          subPath: configmap-file.txt
          name: task-cm-file
      volumes:
      - name: task-pv-storage
        persistentVolumeClaim:
          claimName: task-pv-claim
      - name: task-cm-file
        configMap:
          name: test-pd-plus-cfgmap
As a result of the deployment, you can see the following content in /mnt of the pod:
$ kubectl exec test-pv-5bcb54bd46-q2xwm -- ls /mnt
configmap-file.txt
filesystem-file.txt
You can check this GitHub issue, which has the same discussion.
Here you can read a little more about volume subPath.
You can go with the following approach.
In your deployment.yaml template file you can configure:
...
{{- if .Values.volumeMounts }}
        volumeMounts:
        {{- range .Values.volumeMounts }}
        - name: {{ .name }}
          mountPath: {{ .mountPath }}
        {{- end }}
{{- end }}
...
{{- if .Values.volumeMounts }}
      volumes:
      {{- range .Values.volumeMounts }}
      - name: {{ .name }}
{{ toYaml .volumeSource | indent 8 }}
      {{- end }}
{{- end }}
And in your values.yaml file you can define any volume sources:
volumeMounts:
  - name: volume-mount-1
    mountPath: /var/data
    volumeSource:
      persistentVolumeClaim:
        claimName: pvc-name
  - name: volume-mount-2
    mountPath: /var/config
    volumeSource:
      configMap:
        name: config-map-name
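For these values, the rendered result would look roughly like this (the exact indentation depends on where the snippet sits in your pod spec):
volumeMounts:
  - name: volume-mount-1
    mountPath: /var/data
  - name: volume-mount-2
    mountPath: /var/config
volumes:
  - name: volume-mount-1
    persistentVolumeClaim:
      claimName: pvc-name
  - name: volume-mount-2
    configMap:
      name: config-map-name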
In this way, you don't have to worry about the source of the volume. You can add any kind of volume source in your values.yaml file without having to update the deployment.yaml template.
Hope this helps!

How can I reuse common configuration across different kubernetes manifests?

Assume I have this manifest:
apiVersion: batch/v1
kind: Job
metadata:
  name: initialize-assets-fixtures
spec:
  template:
    spec:
      initContainers:
      - name: wait-for-minio
        image: bitnami/minio-client
        env:
        - name: MINIO_SERVER_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: minio
              key: access-key
        - name: MINIO_SERVER_SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: minio
              key: secret-key
        - name: MINIO_SERVER_HOST
          value: minio
        - name: MINIO_SERVER_PORT_NUMBER
          value: "9000"
        - name: MINIO_ALIAS
          value: minio
        command:
        - /bin/sh
        - -c
        - |
          mc config host add ${MINIO_ALIAS} http://${MINIO_SERVER_HOST}:${MINIO_SERVER_PORT_NUMBER} ${MINIO_SERVER_ACCESS_KEY} ${MINIO_SERVER_SECRET_KEY}
      containers:
      - name: initialize-assets-fixtures
        image: bitnami/minio
        env:
        - name: MINIO_SERVER_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: minio
              key: access-key
        - name: MINIO_SERVER_SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: minio
              key: secret-key
        - name: MINIO_SERVER_HOST
          value: minio
        - name: MINIO_SERVER_PORT_NUMBER
          value: "9000"
        - name: MINIO_ALIAS
          value: minio
        command:
        - /bin/sh
        - -c
        - |
          mc config host add ${MINIO_ALIAS} http://${MINIO_SERVER_HOST}:${MINIO_SERVER_PORT_NUMBER} ${MINIO_SERVER_ACCESS_KEY} ${MINIO_SERVER_SECRET_KEY}
          for category in `ls`; do
            for f in `ls $category/*` ; do
              mc cp $f ${MINIO_ALIAS}/$category/$(basename $f)
            done
          done
      restartPolicy: Never
You see I have here one initContainer and one container. In both containers, I have the same configuration, i.e. the same env section.
Assume I have yet another Job manifest where I use the very same env section again.
It's a lot of duplicated configuration that I bet I can simplify drastically, but I don't know how to do it. Any hint? Any link to some documentation? After some googling, I was not able to come up with anything useful. Maybe with kustomize, but I'm not sure. Or maybe I'm doing it the wrong way with all those environment variables, but I don't think I have a choice, depending on the service I'm using (here it's minio, but I want to do the same kind of stuff with other services which might not be as flexible as minio).
Based on my knowledge, you have these 3 options:
Kustomize
Helm
ConfigMap
ConfigMap
You can use either kubectl create configmap or a ConfigMap generator in kustomization.yaml to create a ConfigMap.
The data source corresponds to a key-value pair in the ConfigMap, where
key = the file name or the key you provided on the command line
value = the file contents or the literal value you provided on the command line.
More about how to use it in a Pod here.
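As a sketch of how that could trim the repetition in this case (the ConfigMap name and keys are made up, and the access/secret keys would still have to come from the Secret):
apiVersion: v1
kind: ConfigMap
metadata:
  name: minio-connection
data:
  MINIO_SERVER_HOST: minio
  MINIO_SERVER_PORT_NUMBER: "9000"
  MINIO_ALIAS: minio
Both containers could then replace the repeated plain-value env entries with:
  envFrom:
    - configMapRef:
        name: minio-connection
while keeping the two secretKeyRef entries as they are.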
Helm
As @Matt mentioned in the comments, you can use Helm:
helm lets you template the yaml with values. Also once you get into it there are ways to create and include partial templates – Matt
By the way, Helm has its own minio chart; you might take a look at how it is done there.
Kustomize
It's well described here and here how you could do that in Kustomize.
Let me know if you have any more questions.
So, long story short: to solve my problem, I first created a new chart for my service and transformed the k8s manifests I had into helm templates. Then, I completed the _helpers.tpl with the following code:
{{/*
Common minio environment variables setup
*/}}
{{- define "minio.envvarsblock" -}}
- name: MINIO_SERVER_ACCESS_KEY
  valueFrom:
    secretKeyRef:
      name: {{ .Values.minio.fullname }}
      key: access-key
- name: MINIO_SERVER_SECRET_KEY
  valueFrom:
    secretKeyRef:
      name: {{ .Values.minio.fullname }}
      key: secret-key
- name: MINIO_SERVER_HOST
  value: {{ .Values.minio.fullname }}
- name: MINIO_SERVER_PORT_NUMBER
  value: {{ .Values.minio.server.port | quote }}
- name: MINIO_ALIAS
  value: {{ .Values.minio.client.alias }}
{{- end -}}
{{/*
Wait for minio init container definition
*/}}
{{- define "wait-for-minio" -}}
- name: wait-for-minio
  image: {{ .Values.minio.client.image }}
  env: {{- include "minio.envvarsblock" . | nindent 4 }}
  command:
    - /bin/sh
    - -c
    - |
      mc config host add ${MINIO_ALIAS} http://${MINIO_SERVER_HOST}:${MINIO_SERVER_PORT_NUMBER} ${MINIO_SERVER_ACCESS_KEY} ${MINIO_SERVER_SECRET_KEY}
{{- end -}}
The first section above lets me reuse the env section throughout all my templates, and the second lets me reuse an initContainer that I use all over the place too. I was then able to inject those partial templates into my Helm templates like so (taking the example from my original post):
{{- if .Values.fixtures.enabled -}}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "chart.fullname" . }}-init-fixtures
  labels:
{{ include "chart.labels" . | indent 4 }}
spec:
  template:
    spec:
      initContainers:
      {{- include "wait-for-minio" . | nindent 6 }}
      containers:
        - name: {{ .Chart.Name }}-init-fixtures
          image: {{ .Values.image }}
          env: {{- include "minio.envvarsblock" . | nindent 10 }}
          command:
            - /bin/sh
            - -c
            - |
              mc config host add ${MINIO_ALIAS} http://${MINIO_SERVER_HOST}:${MINIO_SERVER_PORT_NUMBER} ${MINIO_SERVER_ACCESS_KEY} ${MINIO_SERVER_SECRET_KEY}
              for category in `ls`; do
                for f in `ls $category/*` ; do
                  mc cp $f ${MINIO_ALIAS}/$category/$(basename $f)
                done
              done
      restartPolicy: OnFailure
{{- end -}}