How to use key-value pairs in Kubernetes ConfigMaps to mount volumes

I have created a Kubernetes ConfigMap which contains multiple key-value pairs. I want to mount each value at a different path. I'm using Helm to create the charts.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.name }}-configmap
  namespace: {{ .Values.namespace }}
  labels:
    name: {{ .Values.name }}-configmap
data:
  test1.yml: |-
{{ .Files.Get .Values.test1_filename | indent 4 }}
  test2.yml: |-
{{ .Files.Get .Values.test2_filename | indent 4 }}
I want test1.yml and test2.yml to be mounted in different directories. How can I do it?

You can use the subPath field to pick a specific file out of the ConfigMap:
volumeMounts:
  - mountPath: /my/first/path/test.yaml
    name: configmap
    subPath: test1.yml
  - mountPath: /my/second/path/test.yaml
    name: configmap
    subPath: test2.yml
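Note that each subPath must match a key name in the ConfigMap's data section (test1.yml and test2.yml above), and the volume itself still needs to be declared once in the pod spec. A minimal sketch, assuming the template above renders the ConfigMap name as myapp-configmap:
volumes:
  - name: configmap
    configMap:
      name: myapp-configmap  # hypothetical rendered value of {{ .Values.name }}-configmap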

Related

How to set the value in envFrom as the value of env?

I have a ConfigMap that stores a JSON file:
apiVersion: v1
data:
  config.json: |
{{- toPrettyJson $.Values.serviceConfig | nindent 4 }}
kind: ConfigMap
metadata:
  name: service-config
  namespace: {{ .Release.Namespace }}
In my deployment.yaml, I use a volume and envFrom:
spec:
  ... ...
  volumes:
    - name: config-vol
      configMap:
        name: service-config
  containers:
    - name: {{ .Chart.Name }}
      envFrom:
        - configMapRef:
            name: service-config
      ... ...
      volumeMounts:
        - mountPath: /src/config
          name: config-vol
After I deployed the Helm chart and ran kubectl describe pods, I got this:
Environment Variables from:
  service-config  ConfigMap  Optional: false
I wonder how I can get/use this service-config in my code. My values.yaml can expose values under env, but I don't know how to extract the value of the ConfigMap. Is there a way I can move this service-config, or the JSON file stored in it, into an env variable in values.yaml? Thank you in advance!
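As a hedged aside: the mounted file is already available to application code at /src/config/config.json (the mountPath plus the ConfigMap key), so the code can simply read it from disk. If a single environment variable holding the JSON is preferred instead, configMapKeyRef can surface one specific key; SERVICE_CONFIG_JSON below is a hypothetical name, not from the original:
env:
  - name: SERVICE_CONFIG_JSON
    valueFrom:
      configMapKeyRef:
        name: service-config
        key: config.json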

ConfigMap mounted on Persistent Volume Claims

In my deployment, I would like to use a Persistent Volume Claim in combination with a config map mount. For example, I'd like the following:
volumeMounts:
  - name: py-storage
    mountPath: /home/python
  - name: my-config
    mountPath: /home/python/my-config.properties
    subPath: my-config.properties
    readOnly: true
...
volumes:
  - name: py-storage
    {{- if .Values.py.persistence.enabled }}
    persistentVolumeClaim:
      claimName: python-storage
    {{- else }}
    emptyDir: {}
    {{- end }}
Is this a possible and viable way to go? Is there any better way to approach such a situation?
Since you didn't give your use case, my answer will address whether it is possible or not. In fact: yes, it is.
I'm supposing you wish to mount a file from a ConfigMap at a mount point that already contains other files, and your approach of using subPath is correct!
When you need to mount different volumes on the same path, you need to specify subPath, or the content of the original directory will be hidden.
In other words, if you want to keep both files (the one from the mount point and the one from the ConfigMap), you must use subPath.
To illustrate this, I've tested it with the Deployment code below. There I mount the hostPath-backed volume /mnt, which contains a file called filesystem-file.txt, into my pod, together with the file /mnt/configmap-file.txt from my ConfigMap test-pd-plus-cfgmap:
Note: I'm using Kubernetes 1.18.1
Configmap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-pd-plus-cfgmap
data:
  configmap-file.txt: file data
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-pv
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-pv
  template:
    metadata:
      labels:
        app: test-pv
    spec:
      containers:
        - image: nginx
          name: nginx
          volumeMounts:
            - mountPath: /mnt
              name: task-pv-storage
            - mountPath: /mnt/configmap-file.txt
              subPath: configmap-file.txt
              name: task-cm-file
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: task-pv-claim
        - name: task-cm-file
          configMap:
            name: test-pd-plus-cfgmap
As a result of the deployment, you can see the following content in /mnt of the pod:
$ kubectl exec test-pv-5bcb54bd46-q2xwm -- ls /mnt
configmap-file.txt
filesystem-file.txt
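Continuing the same test, reading the file back should print the value stored under the ConfigMap key (the pod name is taken from the listing above):
$ kubectl exec test-pv-5bcb54bd46-q2xwm -- cat /mnt/configmap-file.txt
file data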
There is a GitHub issue with the same discussion, and the Kubernetes documentation covers volume subPath in more detail.
You can go with the following approach.
In your deployment.yaml template file you can configure:
...
    {{- if .Values.volumeMounts }}
      volumeMounts:
      {{- range .Values.volumeMounts }}
      - name: {{ .name }}
        mountPath: {{ .mountPath }}
      {{- end }}
    {{- end }}
...
    {{- if .Values.volumeMounts }}
      volumes:
      {{- range .Values.volumeMounts }}
      - name: {{ .name }}
{{ toYaml .volumeSource | indent 8 }}
      {{- end }}
    {{- end }}
And in your values.yaml file you can define any volume sources:
volumeMounts:
  - name: volume-mount-1
    mountPath: /var/data
    volumeSource:
      persistentVolumeClaim:
        claimName: pvc-name
  - name: volume-mount-2
    mountPath: /var/config
    volumeSource:
      configMap:
        name: config-map-name
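With those values, the rendered manifest fragments would come out roughly like this (a sketch of the expected output, not verbatim from the original answer):
volumeMounts:
  - name: volume-mount-1
    mountPath: /var/data
  - name: volume-mount-2
    mountPath: /var/config
...
volumes:
  - name: volume-mount-1
    persistentVolumeClaim:
      claimName: pvc-name
  - name: volume-mount-2
    configMap:
      name: config-map-name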
In this way, you don't have to worry about the source of the volume. You can add any kind of source in your values.yaml file and you don't have to update the deployment.yaml template.
Hope this helps!

helm values.yaml - use value from another node

So, for example, I have:
database:
  name: x-a2d9f4
  replicaCount: 1
  repository: mysql
  tag: 5.7
  pullPolicy: IfNotPresent
  tier: database
app:
  name: x-576a77
  replicaCount: 1
  repository: wordpress
  tag: 5.2-php7.3
  pullPolicy: IfNotPresent
  tier: frontend
global:
  namespace: x-c0ecdb9f
env:
  name: WORDPRESS_DB_HOST
  value:
and I want to do something like this:
env:
  name: WORDPRESS_DB_HOST
  value: {{ .Values.database.name | lower }}
All of these are examples from the same values.yaml. Is this possible in Helm?
Yes, you can achieve this using the tpl function.
The tpl function allows developers to evaluate strings as templates inside a template. This is useful to pass a template string as a value to a chart or render external configuration files. Syntax: {{ tpl TEMPLATE_STRING VALUES }}
values.yaml
database:
  name: x-a2d9f4
env:
  name: WORDPRESS_DB_HOST
  value: "{{ .Values.database.name | upper }}"
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  some: {{ tpl .Values.env.value . }}
output:
> helm template .
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-configmap
data:
  some: X-A2D9F4
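The same pattern applies to the Deployment env block the question started from. A sketch, assuming the values.yaml above; the | quote filter is an assumption added here for YAML safety:
env:
  - name: WORDPRESS_DB_HOST
    value: {{ tpl .Values.env.value . | quote }}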

Why doesn't helm use the name defined in the deployment template?

i.e. from name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod below
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ template "project1234.name" . }}
    chart: {{ template "project1234.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  name: {{ template "project1234.module5678.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "project1234.name" . }}
  template:
    metadata:
      labels:
        app: {{ template "project1234.name" . }}
    spec:
      containers:
        - image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
          name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod
          ports:
            - containerPort: 1234
      imagePullSecrets:
        - name: {{ .Values.image.pullSecret }}
I am expecting the pod name to be:
pod/project1234-module5678-pod
Instead, the resulting Pod name is:
pod/chartname-project1234-module5678-dc7db787-skqvv
...where (in my understanding):
chartname is from: helm install --name chartname -f values.yaml .
project1234 is from:
# Chart.yaml
apiVersion: v1
appVersion: "1.0"
description: project1234 Helm chart for Kubernetes
name: project1234
version: 0.1.0
module5678 is from:
# values.yaml
rbac:
  create: true
serviceAccounts:
  module5678:
    create: true
    name:
image:
  name: <image location>
  tag: 1.5
  pullSecret: <pull secret>
gitlab:
  secretName: <secret name>
  username: foo
  password: bar
module5678:
  enabled: true
  name: module5678
  ingress:
    enabled: true
replicaCount: 1
resources: {}
I've tried changing name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod to a plain string value like "podname1234", and it isn't used. I even tried removing the name setting entirely, and the resulting pod name remains the same.
Pods created from a Deployment always have a generated name based on the Deployment's name (and also the name of the intermediate ReplicaSet, if you go off and look for it). You can't override it.
Given the YAML you've shown, I'd expect that this fragment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "project1234.module5678.fullname" . }}
expands out to a Deployment name of chartname-project1234-module5678; the remaining bits are added in by the ReplicaSet and then the Pod itself.
If you do look up the Pod and run kubectl describe pod chartname-project1234-module5678-dc7db787-skqvv, you will probably see that it has a single container with the expected name project1234-module5678-pod. Pretty much the only time you need to use the container name is when you need kubectl logs (or, more rarely, kubectl exec) in a multi-container pod. If you are in this case, you'll appreciate having a shorter name, and since container names are always scoped to the specific pod in which they appear, there's nothing wrong with using a short fixed name here:
spec:
  containers:
    - name: container
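For example, with that short fixed name, targeting the container in a multi-container pod would look like this (using the generated pod name from above):
kubectl logs chartname-project1234-module5678-dc7db787-skqvv -c container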

Host specific volumes in Kubernetes manifests

I am fairly sure this isn't possible, but I wanted to check.
I am using Kubernetes stateful sets, so my hosts get obvious hostnames.
I'd like them to provision a hostPath mount that is mapped to their hostname.
An example helm chart that I'm using might look like this:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: app
  namespace: '{{ .Values.name }}'
  labels:
    chart: '{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}'
spec:
  serviceName: "app"
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: app
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}/{{ .Values.image.version }}"
          imagePullPolicy: '{{ .Values.image.pullPolicy }}'
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          ports:
            - containerPort: {{ .Values.baseport | add 80 }}
              name: app
          volumeMounts:
            - mountPath: /NAS/$(POD_NAME)
              name: store
              readOnly: true
      volumes:
        - name: store
          hostPath:
            path: /NAS/$(POD_NAME)
Essentially, instead of hardcoding a volume, I'd like to have some kind of dynamic variable as the path. I don't mind using Helm or the downward API for this, but ideally it would work when I scale the StatefulSet outwards.
Is there any way of doing this? All my reading of the docs seems to suggest there isn't... :(
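As a hedged pointer: Kubernetes can expand container environment variables inside a volume mount via subPathExpr (GA since 1.17), which covers the per-pod directory part of this, assuming the POD_NAME downward-API variable from the manifest above. A minimal sketch:
volumeMounts:
  - name: store
    mountPath: /NAS
    subPathExpr: $(POD_NAME)  # expands to the pod's own name at mount time
    readOnly: true
volumes:
  - name: store
    hostPath:
      path: /NAS
Each replica then sees the host directory /NAS/<its-own-pod-name> mounted at /NAS, and since the hostPath itself stays fixed, this keeps working as the StatefulSet scales out.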