I have a values.yaml that has
ingress:
  enabled: false
volume:
  hostPath:
    path: /tmp
    type: DirectoryOrCreate
I have an overlay.yaml that overrides values from values.yaml:
ingress:
  enabled: true
volume:
  persistentVolumeClaim:
    claimName: test
For the ingress, it works as I expected: the value of enabled changes to true. However, for the volume, the maps appear to be merged together rather than overwritten. For instance, I get something like:
volume:
  persistentVolumeClaim:
    claimName: test
  hostPath:
    path: /tmp
    type: DirectoryOrCreate
I would like to specify a default volume type and its configuration (e.g. path) in values.yaml, but leave others free to change it through an overlay. However, what I have now "adds" a volume type rather than overwriting it. Is there a way to accomplish this?
null is a specific, valid YAML value (the same as JSON null). If you set a Helm value to null, it is unmarshaled to Go nil, which reads as false in if statements and similar conditionals:
volume:
  persistentVolumeClaim:
    claimName: test
  hostPath: null
I might avoid this problem in chart logic, though. One approach would be to use a separate type field to say which sub-field you're looking for:
# the chart's defaults in values.yaml
volume:
  type: HostPath
  hostPath: { ... }

# your `helm install -f` overrides
volume:
  type: PersistentVolumeClaim
  persistentVolumeClaim: { ... }
  # (the default hostPath: will be present but unused)
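A minimal sketch of how the chart's template could then switch on that type field (the eq comparison and the claimName/path/type sub-fields are assumptions based on the values shown above, not any chart's actual code):
volumes:
  - name: storage
    {{- if eq .Values.volume.type "PersistentVolumeClaim" }}
    persistentVolumeClaim:
      claimName: {{ .Values.volume.persistentVolumeClaim.claimName }}
    {{- else if eq .Values.volume.type "HostPath" }}
    hostPath:
      path: {{ .Values.volume.hostPath.path }}
      type: {{ .Values.volume.hostPath.type }}
    {{- end }}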
A second option is to make the default "absent", and either disable the feature entirely or construct sensible defaults in the chart code if it's not present.
# values.yaml
# volume specifies where the application will keep persistent data.
# If unset, each replica will have independent data and this data will
# be lost on restart.
#
# volume:
#
#   persistentVolumeClaim stores data in an existing PVC.
#
#   persistentVolumeClaim:
#     name: ???
# deep in templates/deployment.yaml
volumes:
  {{- if .Values.volume.persistentVolumeClaim }}
  - name: storage
    persistentVolumeClaim:
      claimName: {{ .Values.volume.persistentVolumeClaim.name }}
  {{- else if .Values.volume.hostPath }}
  - name: storage
    hostPath:
      path: {{ .Values.volume.hostPath.path }}
  {{- end }}
{{- /* (just don't create a volume if not set) */}}
Or, to always provide some kind of storage, even if it's not that useful:
volumes:
  - name: storage
    {{- if .Values.volume.persistentVolumeClaim }}
    persistentVolumeClaim:
      claimName: {{ .Values.volume.persistentVolumeClaim.name }}
    {{- else if .Values.volume.hostPath }}
    hostPath:
      path: {{ .Values.volume.hostPath.path }}
    {{- else }}
    emptyDir: {}
    {{- end }}
I'm trying to template volumes and volumeMounts in a deployment.
The output I want is:
volumeMounts:
  - name: appsettings
    mountPath: /usr/share/nginx/html/appsettings.json
    subPath: appsettings.json
volumes:
  - name: appsettings
    configMap:
      name: appsettings-file
Here is my values.yml:
volumemounts:
  - name: appsettings
    mountPath: /usr/share/nginx/html/appsettings.json
    subPath: appsettings.json
volumes:
  - name: appsettings
    configMap:
      name: appsettings-file
Here is my template:
{{- with .Values.volumemounts }}
volumeMounts:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.volumes }}
volumes:
{{- toYaml . | nindent 4 }}
{{- end }}
Here is the output:
volumeMounts:
    - mountPath: /usr/share/nginx/html/appsettings.json
      name: appsettings
      subPath: appsettings.json
volumes:
    - configMap:
        name: appsettings-file
      name: appsettings
If I change lines 2 and 6 in values.yml to - aname: appsettings, the output is:
volumeMounts:
    - aname: appsettings
      mountPath: /usr/share/nginx/html/appsettings.json
      subPath: appsettings.json
volumes:
    - aname: appsettings
      configMap:
        name: appsettings-file
So it seems Helm always renders the keys of each map in alphabetical order.
(I also tried print instead of toYaml; of course it gave an error, but I saw that it still fetched the map sorted alphabetically: [map[configMap:map[name:appsettings-file] name:appsettings]])
What am I missing? How should I do this?
toYaml in fact alphabetizes dictionary keys. This doesn't usually matter; Kubernetes will treat both versions of the pod spec the same. If you want the dictionary to be serialized in a specific order, you'll need to write out the keys by hand.
In this particular case, I probably wouldn't try to make this low-level detail of the pod spec be configurable. I'd write it out in the template file instead. If this entire ConfigMap mount is optional, put it behind a single shared {{ if .Values.mountConfig }} option. That would also remove the toYaml call.
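For example, a sketch of writing the mount out directly in the template behind that flag (the mountConfig value name comes from the suggestion above, the appsettings names come from the question, and the indentation assumes a typical Deployment pod spec):
      containers:
        - name: app
          # ... image, ports, etc. ...
          {{- if .Values.mountConfig }}
          volumeMounts:
            - name: appsettings
              mountPath: /usr/share/nginx/html/appsettings.json
              subPath: appsettings.json
          {{- end }}
      {{- if .Values.mountConfig }}
      volumes:
        - name: appsettings
          configMap:
            name: appsettings-file
      {{- end }}
Since the keys are written out literally, they are rendered in exactly the order you typed them.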
I have a ConfigMap created, with data added as a key-value pair:
data:
  EXTERNAL_CONFIG_FILE: /user/app/api/config
I need to use this variable to set a mount path in the deployment:
- name: config-properties-volume-mnt
  mountPath: {{ $EXTERNAL_CONFIG_FILE }}
I am getting undefined variable "$EXTERNAL_CONFIG_FILE" while deploying. I do not want to define this variable in values.yaml. Is there a way I can use this variable, defined in the ConfigMap, in the deployment?
It's not possible to dynamically set a manifest field this way; you have to use Helm or Kustomize.
Alternatively, you can simply use sed to replace the text in the manifest.
You cannot use this approach; ConfigMaps and Secrets inject variables or files into the Pod at runtime, not at the manifest-declaration step.
- name: config-properties-volume-mnt
  mountPath: {{ $EXTERNAL_CONFIG_FILE }}
If you have a Helm chart, keeping these details in values.yaml is the only option.
subPath method:
You can achieve this with subPath; with subPathExpr you can expand an environment variable: https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath-expanded-environment
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: container1
    env:
    - name: EXTERNAL_CONFIG_FILE
      value: /user/app/api/config
    image: busybox:1.28
    command: [ "sh", "-c", "while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt" ]
    volumeMounts:
    - name: workdir1
      mountPath: /logs
      # The variable expansion uses round brackets (not curly brackets).
      subPathExpr: $(EXTERNAL_CONFIG_FILE)
  restartPolicy: Never
  volumes:
  - name: workdir1
    hostPath:
      path: /var/log/pods
Instead of the literal env value used in the above example, you can populate the variable from your ConfigMap.
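For instance, a sketch of filling the variable from the ConfigMap rather than hard-coding it (the ConfigMap name here is a placeholder; the key is the one from the question):
    env:
      - name: EXTERNAL_CONFIG_FILE
        valueFrom:
          configMapKeyRef:
            name: my-configmap        # placeholder: use your ConfigMap's actual name
            key: EXTERNAL_CONFIG_FILE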
I have a Kubernetes problem where I need to copy 2 jars (each jar > 1 MB) into a pod after it is deployed. A ConfigMap won't work (> 1 MB), so the idea is to use wget in an initContainer to download the jars.
Below is my Kubernetes template configuration, which I have modified. The original one is available at https://github.com/dremio/dremio-cloud-tools/blob/master/charts/dremio/templates/dremio-executor.yaml
{{ if not .Values.DremioAdmin }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dremio-executor
spec:
  serviceName: "dremio-cluster-pod"
  replicas: {{.Values.executor.count}}
  podManagementPolicy: "Parallel"
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: dremio-executor
  template:
    metadata:
      labels:
        app: dremio-executor
        role: dremio-cluster-pod
      annotations:
        dremio-configmap/checksum: {{ (.Files.Glob "config/*").AsConfig | sha256sum }}
    spec:
      terminationGracePeriodSeconds: 5
      {{- if .Values.nodeSelector }}
      nodeSelector:
        {{- range $key, $value := .Values.nodeSelector }}
        {{ $key }}: {{ $value }}
        {{- end }}
      {{- end }}
      containers:
      - name: dremio-executor
        image: {{.Values.image}}:{{.Values.imageTag}}
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 0
        resources:
          requests:
            memory: {{.Values.executor.memory}}M
            cpu: {{.Values.executor.cpu}}
        volumeMounts:
        - name: dremio-executor-volume
          mountPath: /opt/dremio/data
        ##################### START added this section #####################
        - name: dremio-connector
          mountPath: /opt/dremio/jars
        #################### END added this section ##########################
        - name: dremio-config
          mountPath: /opt/dremio/conf
        env:
        - name: DREMIO_MAX_HEAP_MEMORY_SIZE_MB
          value: "{{ template "HeapMemory" .Values.executor.memory }}"
        - name: DREMIO_MAX_DIRECT_MEMORY_SIZE_MB
          value: "{{ template "DirectMemory" .Values.executor.memory }}"
        - name: DREMIO_JAVA_EXTRA_OPTS
          value: >-
            -Dzookeeper=zk-hs:2181
            -Dservices.coordinator.enabled=false
            {{- if .Values.extraStartParams }}
            {{ .Values.extraStartParams }}
            {{- end }}
        command: ["/opt/dremio/bin/dremio"]
        args:
        - "start-fg"
        ports:
        - containerPort: 45678
          name: server
      initContainers:
      ################ START added this section ######################
      - name: installjars
        image: {{.Values.image}}:{{.Values.imageTag}}
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: dremio-connector
          mountPath: /opt/dremio/jars
        command: ["/bin/sh","-c"]
        args: ["wget --no-check-certificate -O /dir/connector.jar https://<some nexus repo URL>/connector.jar; sleep 10;"]
      ################ END added this section ###############
      - name: wait-for-zk
        image: busybox
        command: ["sh", "-c", "until ping -c 1 -W 1 zk-hs > /dev/null; do echo waiting for zookeeper host; sleep 2; done;"]
      # since we're mounting a separate volume, reset permission to
      # dremio uid/gid
      - name: chown-data-directory
        image: {{.Values.image}}:{{.Values.imageTag}}
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: dremio-executor-volume
          mountPath: /opt/dremio/data
        command: ["chown"]
        args:
        - "dremio:dremio"
        - "/opt/dremio/data"
      volumes:
      - name: dremio-config
        configMap:
          name: dremio-config
      {{- if .Values.imagePullSecrets }}
      imagePullSecrets:
      - name: {{ .Values.imagePullSecrets }}
      {{- end}}
      #################### START added this section ########################
      - name: dremio-connector
        emptyDir: {}
      #################### END added this section ########################
  volumeClaimTemplates:
  - metadata:
      name: dremio-executor-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      {{- if .Values.storageClass }}
      storageClassName: {{ .Values.storageClass }}
      {{- end }}
      resources:
        requests:
          storage: {{.Values.executor.volumeSize}}
{{ end }}
The above is NOT working, and I don't see any jars being downloaded once I "exec" into the pod. I don't understand what is wrong with it. Note, however, that if I run the same wget command inside the pod it does download the jar, which baffles me. So the URL works and the directory is readable and writable, but the jar is still not downloaded?
If you can remove the need for wget altogether, it will make life easier...
Option 1
Using your own Docker image will save some pain, if that's an option.
Dockerfile
# docker build -f Dockerfile -t ghcr.io/yourOrg/projectId/dockerImageName:0.0.1 .
# docker push ghcr.io/yourOrg/projectId/dockerImageName:0.0.1
FROM nginx:1.19.10-alpine
# Use local copies of config
COPY files/some1.jar /dir/
COPY files/some2.jar /dir/
The files will be ready in the container, with no need for cryptic commands in your pod definition. Alternatively, if you need to download the files, you could copy a script that does that work into the Docker image and run it on startup via the Docker CMD directive.
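A sketch of that variant, reusing the base image from the example above (fetch-jars.sh is a hypothetical script that wgets the jars into /dir/):
FROM nginx:1.19.10-alpine
# copy in a hypothetical script that downloads the jars into /dir/
COPY files/fetch-jars.sh /usr/local/bin/fetch-jars.sh
RUN chmod +x /usr/local/bin/fetch-jars.sh
# download the jars at startup, then start nginx as usual
CMD ["/bin/sh", "-c", "/usr/local/bin/fetch-jars.sh && exec nginx -g 'daemon off;'"]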
Option 2
Alternatively, you could do a two-stage deployment...
Create a persistent volume.
Mount the volume to a pod (use busybox as a base?) that will run long enough for the files to copy across from your local machine (or to be downloaded, if you continue to use wget); see the sketch after this list.
kubectl cp the files you need onto the (Retained) PersistentVolume.
Now mount the PV in your pod's container(s) so the files are readily available when the pod fires up.
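A rough sketch of the claim and the temporary copy pod (the names and size are placeholders, and the volume is requested through a PersistentVolumeClaim; you would kubectl cp the jars into /jars while the pod is running, then delete the pod):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dremio-jars            # placeholder name
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 100Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: jar-loader             # temporary pod, removed once the copy is done
spec:
  containers:
    - name: loader
      image: busybox
      # keep the pod alive long enough to run:
      #   kubectl cp connector.jar jar-loader:/jars/connector.jar
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: jars
          mountPath: /jars
  volumes:
    - name: jars
      persistentVolumeClaim:
        claimName: dremio-jars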
Your approach seems right.
Another solution could be to include the jar in the Docker image, but I think that's not possible, right?
You could just use an emptyDir instead of a VolumeClaim.
Lastly, I would download the jar before waiting for ZooKeeper, to save some time.
In my deployment, I would like to use a Persistent Volume Claim in combination with a config map mount. For example, I'd like the following:
volumeMounts:
  - name: py-js-storage
    mountPath: /home/python
  - name: my-config
    mountPath: /home/python/my-config.properties
    subPath: my-config.properties
    readOnly: true
...
volumes:
  - name: py-storage
    {{- if .Values.py.persistence.enabled }}
    persistentVolumeClaim:
      claimName: python-storage
    {{- else }}
    emptyDir: {}
    {{- end }}
Is this a possible and viable way to go? Is there any better way to approach such a situation?
Since you didn't give your use case, my answer will be based on whether it is possible or not. In fact: yes, it is.
I'm supposing you wish to mount a file from a ConfigMap into a mount point that already contains other files, and your approach of using subPath is correct!
When you need to mount different volumes on the same path, you need to specify subPath, or the content of the original directory will be hidden.
In other words, if you want to keep both files (from the mount point and from the ConfigMap) you must use subPath.
To illustrate this, I've tested with the deployment code below. There I mount in my pod the volume at /mnt, which contains a file called filesystem-file.txt, together with the file /mnt/configmap-file.txt from my ConfigMap test-pd-plus-cfgmap:
Note: I'm using Kubernetes 1.18.1
Configmap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-pd-plus-cfgmap
data:
  file-from-cfgmap: file data
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-pv
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-pv
  template:
    metadata:
      labels:
        app: test-pv
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - mountPath: /mnt
          name: task-pv-storage
        - mountPath: /mnt/configmap-file.txt
          subPath: configmap-file.txt
          name: task-cm-file
      volumes:
      - name: task-pv-storage
        persistentVolumeClaim:
          claimName: task-pv-claim
      - name: task-cm-file
        configMap:
          name: test-pd-plus-cfgmap
As a result of the deployment, you can see the following content in /mnt of the pod:
$ kubectl exec test-pv-5bcb54bd46-q2xwm -- ls /mnt
configmap-file.txt
filesystem-file.txt
You could check this github issue with the same discussion.
Here you could read a little more about volumes subPath.
You can go with the following approach.
In your deployment.yaml template file you can configure:
...
{{- if .Values.volumeMounts }}
      volumeMounts:
      {{- range .Values.volumeMounts }}
      - name: {{ .name }}
        mountPath: {{ .mountPath }}
      {{- end }}
{{- end }}
...
{{- if .Values.volumeMounts }}
      volumes:
      {{- range .Values.volumeMounts }}
      - name: {{ .name }}
{{ toYaml .volumeSource | indent 8 }}
      {{- end }}
{{- end }}
And in your values.yaml file you can define any volume sources:
volumeMounts:
  - name: volume-mount-1
    mountPath: /var/data
    volumeSource:
      persistentVolumeClaim:
        claimName: pvc-name
  - name: volume-mount-2
    mountPath: /var/config
    volumeSource:
      configMap:
        name: config-map-name
In this way, you don't have to worry about the source of the volume. You can add any kind of sources in your values.yaml file and you don't have to update the deployment.yaml template.
Hope this helps!
I have created a Kubernetes ConfigMap which contains multiple key-value pairs. I want to mount each value at a different path. I'm using Helm to create the charts.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.name }}-configmap
  namespace: {{ .Values.namespace }}
  labels:
    name: {{ .Values.name }}-configmap
data:
  test1.yml: |-
    {{ .Files.Get .Values.test1_filename }}
  test2.yml: |-
    {{ .Files.Get .Values.test2_filename }}
I want test1.yml and test2.yml to be mounted in different directories. How can I do it?
You can use the subPath field to pick a specific file out of the ConfigMap:
volumeMounts:
  - mountPath: /my/first/path/test.yml
    name: configmap
    subPath: test1.yml
  - mountPath: /my/second/path/test.yml
    name: configmap
    subPath: test2.yml
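For this to work, the pod spec also needs a volume entry that points at the ConfigMap; a minimal sketch (the volume name configmap matches the volumeMounts above, and the ConfigMap name follows the chart's {{ .Values.name }}-configmap pattern):
volumes:
  - name: configmap
    configMap:
      name: {{ .Values.name }}-configmap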