ConfigMap mounted on Persistent Volume Claims - kubernetes

In my deployment, I would like to use a Persistent Volume Claim in combination with a config map mount. For example, I'd like the following:
volumeMounts:
  - name: py-js-storage
    mountPath: /home/python
  - name: my-config
    mountPath: /home/python/my-config.properties
    subPath: my-config.properties
    readOnly: true
...
volumes:
  - name: py-storage
{{- if .Values.py.persistence.enabled }}
    persistentVolumeClaim:
      claimName: python-storage
{{- else }}
    emptyDir: {}
{{- end }}
Is this a possible and viable way to go? Is there a better way to approach such a situation?

Since you didn't give your use case, my answer will address whether it is possible or not. In fact: yes, it is.
I suppose you want to mount a file from a ConfigMap into a mount point that already contains other files, and your approach of using subPath is correct!
When you need to mount different volumes on the same path, you need to specify subPath, or the content of the original directory will be hidden.
In other words, if you want to keep both files (the one from the mount point and the one from the ConfigMap), you must use subPath.
To illustrate this, I've tested with the deployment code below. There I mount at /mnt a hostPath-backed volume that already contains a file called filesystem-file.txt, plus the file /mnt/configmap-file.txt from my ConfigMap test-pd-plus-cfgmap:
Note: I'm using Kubernetes 1.18.1
Configmap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-pd-plus-cfgmap
data:
  configmap-file.txt: file data
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-pv
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-pv
  template:
    metadata:
      labels:
        app: test-pv
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - mountPath: /mnt
          name: task-pv-storage
        - mountPath: /mnt/configmap-file.txt
          subPath: configmap-file.txt
          name: task-cm-file
      volumes:
      - name: task-pv-storage
        persistentVolumeClaim:
          claimName: task-pv-claim
      - name: task-cm-file
        configMap:
          name: test-pd-plus-cfgmap
As a result of the deployment, you can see the following content in /mnt of the pod:
$ kubectl exec test-pv-5bcb54bd46-q2xwm -- ls /mnt
configmap-file.txt
filesystem-file.txt
You can check this GitHub issue, which contains the same discussion.
Here you can read a little more about volume subPath.
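For contrast, a minimal sketch (not part of the test above; the pod name is a placeholder) of what happens without subPath: mounting the ConfigMap volume directly on /mnt shadows everything already in that directory, so only the ConfigMap keys remain visible:
        volumeMounts:
        - mountPath: /mnt   # no subPath: this mount hides the existing content of /mnt
          name: task-cm-file
$ kubectl exec <pod-name> -- ls /mnt
configmap-file.txt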

You can go with the following approach.
In your deployment.yaml template file you can configure:
...
{{- if .Values.volumeMounts }}
        volumeMounts:
        {{- range .Values.volumeMounts }}
        - name: {{ .name }}
          mountPath: {{ .mountPath }}
        {{- end }}
{{- end }}
...
{{- if .Values.volumeMounts }}
      volumes:
      {{- range .Values.volumeMounts }}
      - name: {{ .name }}
{{ toYaml .volumeSource | indent 8 }}
      {{- end }}
{{- end }}
And in your values.yaml file you can define volume sources of any kind:
volumeMounts:
  - name: volume-mount-1
    mountPath: /var/data
    volumeSource:
      persistentVolumeClaim:
        claimName: pvc-name
  - name: volume-mount-2
    mountPath: /var/config
    volumeSource:
      configMap:
        name: config-map-name
In this way, you don't have to worry about the source of the volume. You can add any kind of sources in your values.yaml file and you don't have to update the deployment.yaml template.
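For illustration, with the values above the template would render to roughly the following (the exact indentation depends on where the blocks sit in the deployment spec):
volumeMounts:
- name: volume-mount-1
  mountPath: /var/data
- name: volume-mount-2
  mountPath: /var/config
...
volumes:
- name: volume-mount-1
  persistentVolumeClaim:
    claimName: pvc-name
- name: volume-mount-2
  configMap:
    name: config-map-name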
Hope this helps!

Related

How to set the value in envFrom as the value of env?

I have a ConfigMap that stores a JSON file:
apiVersion: v1
data:
  config.json: |
    {{- toPrettyJson $.Values.serviceConfig | nindent 4 }}
kind: ConfigMap
metadata:
  name: service-config
  namespace: {{ .Release.Namespace }}
In my deployment.yaml, I use a volume and envFrom:
spec:
  ... ...
  volumes:
    - name: config-vol
      configMap:
        name: service-config
  containers:
    - name: {{ .Chart.Name }}
      envFrom:
        - configMapRef:
            name: service-config
      ... ...
      volumeMounts:
        - mountPath: /src/config
          name: config-vol
After I deployed the chart and ran kubectl describe pods, I got this:
Environment Variables from:
service-config ConfigMap Optional: false
I wonder how I can get/use this service-config in my code. My values.yaml can set values under env, but I don't know how to extract the value of the ConfigMap. Is there a way I can expose this service-config, or the JSON file stored in it, as an env variable in values.yaml? Thank you in advance!
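One possible sketch (this assumes the whole config.json should land in a single environment variable rather than one variable per key, which is what envFrom produces; the variable name SERVICE_CONFIG_JSON is illustrative): valueFrom with configMapKeyRef can reference that specific key:
env:
  - name: SERVICE_CONFIG_JSON
    valueFrom:
      configMapKeyRef:
        name: service-config
        key: config.json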

Kubernetes copying jars into a pod and restart

I have a Kubernetes problem where I need to copy 2 jars (each jar > 1 MB) into a pod after it is deployed. Since each jar is larger than 1 MB, a ConfigMap is not an option, so the idea is to use wget in an initContainer to download the jars.
Below is my Kubernetes template configuration, which I have modified. The original is available at https://github.com/dremio/dremio-cloud-tools/blob/master/charts/dremio/templates/dremio-executor.yaml
{{ if not .Values.DremioAdmin }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dremio-executor
spec:
  serviceName: "dremio-cluster-pod"
  replicas: {{.Values.executor.count}}
  podManagementPolicy: "Parallel"
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: dremio-executor
  template:
    metadata:
      labels:
        app: dremio-executor
        role: dremio-cluster-pod
      annotations:
        dremio-configmap/checksum: {{ (.Files.Glob "config/*").AsConfig | sha256sum }}
    spec:
      terminationGracePeriodSeconds: 5
      {{- if .Values.nodeSelector }}
      nodeSelector:
        {{- range $key, $value := .Values.nodeSelector }}
        {{ $key }}: {{ $value }}
        {{- end }}
      {{- end }}
      containers:
      - name: dremio-executor
        image: {{.Values.image}}:{{.Values.imageTag}}
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 0
        resources:
          requests:
            memory: {{.Values.executor.memory}}M
            cpu: {{.Values.executor.cpu}}
        volumeMounts:
        - name: dremio-executor-volume
          mountPath: /opt/dremio/data
        ##################### START added this section #####################
        - name: dremio-connector
          mountPath: /opt/dremio/jars
        #################### END added this section ##########################
        - name: dremio-config
          mountPath: /opt/dremio/conf
        env:
        - name: DREMIO_MAX_HEAP_MEMORY_SIZE_MB
          value: "{{ template "HeapMemory" .Values.executor.memory }}"
        - name: DREMIO_MAX_DIRECT_MEMORY_SIZE_MB
          value: "{{ template "DirectMemory" .Values.executor.memory }}"
        - name: DREMIO_JAVA_EXTRA_OPTS
          value: >-
            -Dzookeeper=zk-hs:2181
            -Dservices.coordinator.enabled=false
            {{- if .Values.extraStartParams }}
            {{ .Values.extraStartParams }}
            {{- end }}
        command: ["/opt/dremio/bin/dremio"]
        args:
        - "start-fg"
        ports:
        - containerPort: 45678
          name: server
      initContainers:
      ################ START added this section ######################
      - name: installjars
        image: {{.Values.image}}:{{.Values.imageTag}}
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: dremio-connector
          mountPath: /opt/dremio/jars
        command: ["/bin/sh","-c"]
        args: ["wget --no-check-certificate -O /dir/connector.jar https://<some nexus repo URL>/connector.jar; sleep 10;"]
      ################ END added this section ###############
      - name: wait-for-zk
        image: busybox
        command: ["sh", "-c", "until ping -c 1 -W 1 zk-hs > /dev/null; do echo waiting for zookeeper host; sleep 2; done;"]
      # since we're mounting a separate volume, reset permission to
      # dremio uid/gid
      - name: chown-data-directory
        image: {{.Values.image}}:{{.Values.imageTag}}
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: dremio-executor-volume
          mountPath: /opt/dremio/data
        command: ["chown"]
        args:
        - "dremio:dremio"
        - "/opt/dremio/data"
      volumes:
      - name: dremio-config
        configMap:
          name: dremio-config
      {{- if .Values.imagePullSecrets }}
      imagePullSecrets:
      - name: {{ .Values.imagePullSecrets }}
      {{- end}}
      #################### START added this section ########################
      - name: dremio-connector
        emptyDir: {}
      #################### END added this section ########################
  volumeClaimTemplates:
  - metadata:
      name: dremio-executor-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      {{- if .Values.storageClass }}
      storageClassName: {{ .Values.storageClass }}
      {{- end }}
      resources:
        requests:
          storage: {{.Values.executor.volumeSize}}
{{ end }}
The above is NOT working: I don't see any jars downloaded when I exec into the pod, and I don't understand what is wrong. However, note that if I run the same wget command inside the pod, it does download the jar, which baffles me. So the URL works and the directory is read/write, but the jar is still not downloaded. Why?
If you can remove the need for wget altogether, it will make life easier...
Option 1
Using your own Docker image will save some pain, if that's an option.
Dockerfile
# docker build -f Dockerfile -t ghcr.io/yourOrg/projectId/dockerImageName:0.0.1 .
# docker push ghcr.io/yourOrg/projectId/dockerImageName:0.0.1
FROM nginx:1.19.10-alpine
# Use local copies of config
COPY files/some1.jar /dir/
COPY files/some2.jar /dir/
Files will be ready in the container, with no need for cryptic commands in your pod definition that will make little sense later. Alternatively, if you need to download the files, you could copy a script into the Docker image that does that work and run it on startup via the Docker CMD directive, as sketched below.
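A hedged sketch of that variant (the script name fetch-jars.sh and its contents are placeholders; the base image follows the example above):
# docker build -f Dockerfile -t ghcr.io/yourOrg/projectId/dockerImageName:0.0.2 .
FROM nginx:1.19.10-alpine
# fetch-jars.sh is a hypothetical script that downloads the jars on container start
COPY files/fetch-jars.sh /usr/local/bin/fetch-jars.sh
RUN chmod +x /usr/local/bin/fetch-jars.sh
CMD ["/bin/sh", "-c", "/usr/local/bin/fetch-jars.sh && exec nginx -g 'daemon off;'"]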
Option 2
Alternatively, you could do a two-stage deployment...
Create a persistent volume
Mount the volume to a pod (using busybox as a base, perhaps) that will run long enough for the files to copy across from your local machine (or to be downloaded, if you continue to use wget)
kubectl cp the files you need to the (Retained) PersistentVolume, as sketched after this list
Now mount the PV to your pod's container(s) so the files are readily available when the pod fires up.
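A minimal sketch of the copy step, assuming a retained PVC; the pod and claim names (jar-loader, dremio-jars-pvc) are illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: jar-loader
spec:
  containers:
  - name: loader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]   # keep the pod alive long enough to copy the files in
    volumeMounts:
    - name: jars
      mountPath: /jars
  volumes:
  - name: jars
    persistentVolumeClaim:
      claimName: dremio-jars-pvc
Then copy the jars in and remove the helper pod:
$ kubectl cp ./connector.jar jar-loader:/jars/connector.jar
$ kubectl delete pod jar-loader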
Your approach seems right.
Another solution could be to include the jars in the Docker image, but I suppose that is not possible, right?
You could just use an emptyDir instead of a VolumeClaim.
Lastly, I would download the jar before waiting for ZooKeeper, to save some time.
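A sketch of that ordering, assuming (as in the question) an emptyDir volume named dremio-connector mounted at /opt/dremio/jars; note that the wget output path here points into the mounted directory so the main container can actually see the jar:
initContainers:
- name: installjars
  image: {{.Values.image}}:{{.Values.imageTag}}
  imagePullPolicy: IfNotPresent
  volumeMounts:
  - name: dremio-connector
    mountPath: /opt/dremio/jars
  command: ["/bin/sh", "-c"]
  # download straight into the mounted emptyDir
  args: ["wget --no-check-certificate -O /opt/dremio/jars/connector.jar https://<some nexus repo URL>/connector.jar"]
- name: wait-for-zk
  image: busybox
  command: ["sh", "-c", "until ping -c 1 -W 1 zk-hs > /dev/null; do echo waiting for zookeeper host; sleep 2; done;"]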

How can I start a job automatically after a successful deployment in kubernetes?

I have a deployment .yaml file that basically creates a pod with MariaDB, as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: {{ .Release.Name }}-pod
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        pod: {{ .Release.Name }}-pod
    spec:
      containers:
      - env:
        - name: MYSQL_ROOT_PASSWORD
          value: {{ .Values.db.password }}
        image: {{ .Values.image.repository }}
        name: {{ .Release.Name }}
        ports:
        - containerPort: 3306
        resources:
          requests:
            memory: 2048Mi
            cpu: 0.5
          limits:
            memory: 4096Mi
            cpu: 1
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: dbsvr-claim
        - mountPath: /etc/mysql/conf.d/my.cnf
          name: conf
          subPath: my.cnf
        - mountPath: /docker-entrypoint-initdb.d/init.sql
          name: conf
          subPath: init.sql
      restartPolicy: Always
      volumes:
      - name: dbsvr-claim
        persistentVolumeClaim:
          claimName: {{ .Release.Name }}-claim
      - name: conf
        configMap:
          name: {{ .Release.Name }}-configmap
status: {}
Upon successful completion of
helm install abc ./abc/ -f values.yaml
I have a job that generates a mysqldump backup file and it completes successfully (just showing the relevant code)
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-job
spec:
  template:
    metadata:
      name: {{ .Release.Name }}-job
    spec:
      containers:
      - name: {{ .Release.Name }}-dbload
        image: {{ .Values.image.repositoryRoot }}/{{.Values.image.imageName}}
        command: ["/bin/sh", "-c"]
        args:
        - mysqldump -p$(PWD) -h{{.Values.db.source}} -u$(USER) --databases xyz > $(FILE);
          echo "done!";
        imagePullPolicy: Always
      # Do not restart containers after they exit
      restartPolicy: Never
So, here's my question. Is there a way to automatically start the job after helm install abc ./ -f values.yaml finishes successfully?
You can use the kubectl wait command (see kubectl wait -h) to run the job once the deployment reaches the desired condition.
The article wait-for-condition demonstrates quite a similar situation.
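A minimal sketch of that idea (the deployment name abc-pod follows the chart's {{ .Release.Name }}-pod naming for a release called abc, and job.yaml is a placeholder for however the Job manifest is rendered):
$ kubectl wait --for=condition=available --timeout=300s deployment/abc-pod
$ kubectl apply -f job.yaml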

How to use key value pair in kubernetes configmaps to mount volume

I have created a Kubernetes ConfigMap which contains multiple key-value pairs. I want to mount each value at a different path. I'm using Helm to create the charts.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.name }}-configmap
  namespace: {{ .Values.namespace }}
  labels:
    name: {{ .Values.name }}-configmap
data:
  test1.yml: |-
    {{ .Files.Get .Values.test1_filename }}
  test2.yml: |-
    {{ .Files.Get .Values.test2_filename }}
I want test1.yml and test2.yml to be mounted in different directories. How can I do it?
You can use the subPath field to pick a specific file out of the ConfigMap:
volumeMounts:
  - mountPath: /my/first/path/test.yaml
    name: configmap
    subPath: test1.yml
  - mountPath: /my/second/path/test.yaml
    name: configmap
    subPath: test2.yml
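The mounts above assume a pod-level volume named configmap backed by that ConfigMap; a minimal sketch of that volume (the name follows the chart's template):
volumes:
  - name: configmap
    configMap:
      name: {{ .Values.name }}-configmap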

Host specific volumes in Kubernetes manifests

I am fairly sure this isn't possible, but I wanted to check.
I am using Kubernetes StatefulSets, so my pods get predictable hostnames.
I'd like them to provision a hostPath mount that is mapped to their hostname.
An example helm chart that I'm using might look like this:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: app
  namespace: '{{ .Values.name }}'
  labels:
    chart: '{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}'
spec:
  serviceName: "app"
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: app
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}/{{ .Values.image.version}}"
        imagePullPolicy: '{{ .Values.image.pullPolicy }}'
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        ports:
        - containerPort: {{ .Values.baseport | add 80 }}
          name: app
        volumeMounts:
        - mountPath: /NAS/$(POD_NAME)
          name: store
          readOnly: true
      volumes:
      - name: store
        hostPath:
          path: /NAS/$(POD_NAME)
Essentially, instead of hardcoding a volume path, I'd like to have some kind of dynamic variable as the path. I don't mind using Helm or the downward API for this, but ideally it would work as I scale the StatefulSet out.
Is there any way of doing this? Everything I've read in the docs suggests there isn't... :(
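One hedged sketch that may get close on newer clusters (subPathExpr is available in Kubernetes 1.17+): mount the host directory /NAS and select the per-pod subdirectory by expanding the downward-API variable, rather than templating the hostPath itself:
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: store
          mountPath: /NAS            # container path; the host source becomes /NAS/<pod name>
          subPathExpr: $(POD_NAME)   # expands the POD_NAME env var defined above
          readOnly: true
      volumes:
      - name: store
        hostPath:
          path: /NAS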