Mount (add) files to existing directory using configmap volume mount - kubernetes

I have a ConfigMap with multiple files and want to add these files to an already existing directory. The tricky part is that the filenames (keys) can change, so I can't mount them individually using subPath.
Is there any way this can be achieved from Deployment manifest?
Configmap:
config-files-configmap
└── newFile1.yml
└── newFile2.yml
Existing directory after adding files from configmap:
config/
└── existingFile1.yml
└── existingFile2.yml
└── newFile1.yml
└── newFile2.yml
PS: I have tried mounting the ConfigMap as a directory, but that overrides the existing contents of the directory.
Thanks

You can use an init container with the ConfigMap mounted as a volume. I'm not sure about your actual deployment architecture, but I would suggest mounting the ConfigMap files into a separate directory and copying them into the target directory when the main container starts, either from an init container or via the pod's lifecycle hooks. Since we can't go with subPath, this is the option I see for now.
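For the config/ layout in the question, a minimal sketch of that idea could look like the following (the image name and paths are placeholders, not from the original answer): an init container running the application image copies both the image's existing files and the ConfigMap files into an emptyDir, and the main container then mounts that emptyDir over the config directory.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      initContainers:
      - name: merge-config
        image: myapp:latest                # hypothetical image that already contains /app/config
        command: ["/bin/sh", "-c"]
        # Copy the files baked into the image first, then the ConfigMap files on top.
        args:
        - cp /app/config/* /merged/ && cp /from-configmap/* /merged/
        volumeMounts:
        - name: merged-config
          mountPath: /merged
        - name: configmap-files
          mountPath: /from-configmap
      containers:
      - name: myapp
        image: myapp:latest
        volumeMounts:
        - name: merged-config
          mountPath: /app/config           # shadows the image's config dir with the merged copy
      volumes:
      - name: configmap-files
        configMap:
          name: config-files-configmap
      - name: merged-config
        emptyDir: {}
Because the whole ConfigMap is copied with a wildcard, this keeps working even when the keys (filenames) change.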
Example helm template from RabbitMQ
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Release.Name }}-rabbitmq
  labels: &RabbitMQDeploymentLabels
    app.kubernetes.io/name: {{ .Release.Name }}
    app.kubernetes.io/component: rabbitmq-server
spec:
  selector:
    matchLabels: *RabbitMQDeploymentLabels
  serviceName: {{ .Release.Name }}-rabbitmq-discovery
  replicas: {{ .Values.rabbitmq.replicas }}
  updateStrategy:
    # https://www.rabbitmq.com/upgrade.html
    # https://cloud.google.com/kubernetes-engine/docs/how-to/updating-apps
    type: RollingUpdate
  template:
    metadata:
      labels: *RabbitMQDeploymentLabels
    spec:
      serviceAccountName: {{ .Values.rabbitmq.serviceAccount }}
      terminationGracePeriodSeconds: 180
      initContainers:
      # This init container copies the config files from read-only ConfigMap to writable location.
      - name: copy-rabbitmq-config
        image: {{ .Values.rabbitmq.initImage }}
        imagePullPolicy: Always
        command:
        - /bin/bash
        - -euc
        - |
          # Remove cached erlang cookie since we are always providing it,
          # that opens the way to recreate the application and access to existing data
          # as a new erlang will be regenerated again.
          echo ${RABBITMQ_ERLANG_COOKIE} > /var/lib/rabbitmq/.erlang.cookie
          chmod 600 /var/lib/rabbitmq/.erlang.cookie
          # Copy the mounted configuration to both places.
          cp /rabbitmqconfig/rabbitmq.conf /etc/rabbitmq/rabbitmq.conf
          # Change permission to allow to add more configurations via variables
          chown :999 /etc/rabbitmq/rabbitmq.conf
          chmod 660 /etc/rabbitmq/rabbitmq.conf
          cp /rabbitmqconfig/enabled_plugins /etc/rabbitmq/enabled_plugins
        volumeMounts:
        - name: configmap
          mountPath: /rabbitmqconfig
        - name: config
          mountPath: /etc/rabbitmq
        - name: {{ .Release.Name }}-rabbitmq-pvc
          mountPath: /var/lib/rabbitmq
        env:
        - name: RABBITMQ_ERLANG_COOKIE
          valueFrom:
            secretKeyRef:
              name: {{ .Release.Name }}-rabbitmq-secret
              key: rabbitmq-erlang-cookie
      containers:
      - name: rabbitmq
        image: "{{ .Values.rabbitmq.image.repo }}:{{ .Values.rabbitmq.image.tag }}"
        imagePullPolicy: Always
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: RABBITMQ_USE_LONGNAME
          value: 'true'
        - name: RABBITMQ_NODENAME
          value: 'rabbit@$(MY_POD_NAME).{{ .Release.Name }}-rabbitmq-discovery.{{ .Release.Namespace }}.svc.cluster.local'
        - name: K8S_SERVICE_NAME
          value: '{{ .Release.Name }}-rabbitmq-discovery'
        - name: K8S_HOSTNAME_SUFFIX
          value: '.{{ .Release.Name }}-rabbitmq-discovery.{{ .Release.Namespace }}.svc.cluster.local'
        # User name to create when RabbitMQ creates a new database from scratch.
        - name: RABBITMQ_DEFAULT_USER
          value: '{{ .Values.rabbitmq.user }}'
        # Password for the default user.
        - name: RABBITMQ_DEFAULT_PASS
          valueFrom:
            secretKeyRef:
              name: {{ .Release.Name }}-rabbitmq-secret
              key: rabbitmq-pass
        ports:
        - name: clustering
          containerPort: 25672
        - name: amqp
          containerPort: 5672
        - name: amqp-ssl
          containerPort: 5671
        - name: prometheus
          containerPort: 15692
        - name: http
          containerPort: 15672
        volumeMounts:
        - name: config
          mountPath: /etc/rabbitmq
        - name: {{ .Release.Name }}-rabbitmq-pvc
          mountPath: /var/lib/rabbitmq
        livenessProbe:
          exec:
            command:
            - rabbitmqctl
            - status
          initialDelaySeconds: 60
          timeoutSeconds: 30
        readinessProbe:
          exec:
            command:
            - rabbitmqctl
            - status
          initialDelaySeconds: 20
          timeoutSeconds: 30
        lifecycle:
          postStart:
            exec:
              command:
              - /bin/bash
              - -c
              - |
                # Wait for the RabbitMQ to be ready.
                until rabbitmqctl node_health_check; do
                  sleep 5
                done
                # By default, RabbitMQ does not have Highly Available policies enabled,
                # using the following command to enable it.
                rabbitmqctl set_policy ha-all "." '{"ha-mode":"all", "ha-sync-mode":"automatic"}' --apply-to all --priority 0
      {{ if .Values.metrics.exporter.enabled }}
      - name: prometheus-to-sd
        image: {{ .Values.metrics.image }}
        ports:
        - name: profiler
          containerPort: 6060
        command:
        - /monitor
        - --stackdriver-prefix=custom.googleapis.com
        - --source=rabbitmq:http://localhost:15692/metrics
        - --pod-id=$(POD_NAME)
        - --namespace-id=$(POD_NAMESPACE)
        - --monitored-resource-type-prefix=k8s_
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      {{ end }}
      volumes:
      - name: configmap
        configMap:
          name: {{ .Release.Name }}-rabbitmq-config
          items:
          - key: rabbitmq.conf
            path: rabbitmq.conf
          - key: enabled_plugins
            path: enabled_plugins
      - name: config
        emptyDir: {}
  volumeClaimTemplates:
  - metadata:
      name: {{ .Release.Name }}-rabbitmq-pvc
      labels: *RabbitMQDeploymentLabels
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: {{ .Values.rabbitmq.persistence.storageClass }}
      resources:
        requests:
          storage: {{ .Values.rabbitmq.persistence.size }}
Example reference: https://github.com/GoogleCloudPlatform/click-to-deploy/blob/master/k8s/rabbitmq/chart/rabbitmq/templates/statefulset.yaml

Related

Kubernetes copying jars into a pod and restart

I have a Kubernetes problem where I need to copy 2 jars (each jar > 1 MB) into a pod after it is deployed. Ideally the solution should not use a ConfigMap (the jars exceed the ~1 MB limit), so we need to use wget in an initContainer to download the jars.
Below is my Kubernetes template configuration, which I have modified. The original is available at https://github.com/dremio/dremio-cloud-tools/blob/master/charts/dremio/templates/dremio-executor.yaml
{{ if not .Values.DremioAdmin }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dremio-executor
spec:
  serviceName: "dremio-cluster-pod"
  replicas: {{.Values.executor.count}}
  podManagementPolicy: "Parallel"
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: dremio-executor
  template:
    metadata:
      labels:
        app: dremio-executor
        role: dremio-cluster-pod
      annotations:
        dremio-configmap/checksum: {{ (.Files.Glob "config/*").AsConfig | sha256sum }}
    spec:
      terminationGracePeriodSeconds: 5
      {{- if .Values.nodeSelector }}
      nodeSelector:
        {{- range $key, $value := .Values.nodeSelector }}
        {{ $key }}: {{ $value }}
        {{- end }}
      {{- end }}
      containers:
      - name: dremio-executor
        image: {{.Values.image}}:{{.Values.imageTag}}
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 0
        resources:
          requests:
            memory: {{.Values.executor.memory}}M
            cpu: {{.Values.executor.cpu}}
        volumeMounts:
        - name: dremio-executor-volume
          mountPath: /opt/dremio/data
        ##################### START added this section #####################
        - name: dremio-connector
          mountPath: /opt/dremio/jars
        #################### END added this section ##########################
        - name: dremio-config
          mountPath: /opt/dremio/conf
        env:
        - name: DREMIO_MAX_HEAP_MEMORY_SIZE_MB
          value: "{{ template "HeapMemory" .Values.executor.memory }}"
        - name: DREMIO_MAX_DIRECT_MEMORY_SIZE_MB
          value: "{{ template "DirectMemory" .Values.executor.memory }}"
        - name: DREMIO_JAVA_EXTRA_OPTS
          value: >-
            -Dzookeeper=zk-hs:2181
            -Dservices.coordinator.enabled=false
            {{- if .Values.extraStartParams }}
            {{ .Values.extraStartParams }}
            {{- end }}
        command: ["/opt/dremio/bin/dremio"]
        args:
        - "start-fg"
        ports:
        - containerPort: 45678
          name: server
      initContainers:
      ################ START added this section ######################
      - name: installjars
        image: {{.Values.image}}:{{.Values.imageTag}}
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: dremio-connector
          mountPath: /opt/dremio/jars
        command: ["/bin/sh","-c"]
        args: ["wget --no-check-certificate -O /dir/connector.jar https://<some nexus repo URL>/connector.jar; sleep 10;"]
      ################ END added this section ###############
      - name: wait-for-zk
        image: busybox
        command: ["sh", "-c", "until ping -c 1 -W 1 zk-hs > /dev/null; do echo waiting for zookeeper host; sleep 2; done;"]
      # since we're mounting a separate volume, reset permission to
      # dremio uid/gid
      - name: chown-data-directory
        image: {{.Values.image}}:{{.Values.imageTag}}
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: dremio-executor-volume
          mountPath: /opt/dremio/data
        command: ["chown"]
        args:
        - "dremio:dremio"
        - "/opt/dremio/data"
      volumes:
      - name: dremio-config
        configMap:
          name: dremio-config
      {{- if .Values.imagePullSecrets }}
      imagePullSecrets:
      - name: {{ .Values.imagePullSecrets }}
      {{- end}}
      #################### START added this section ########################
      - name: dremio-connector
        emptyDir: {}
      #################### END added this section ########################
  volumeClaimTemplates:
  - metadata:
      name: dremio-executor-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      {{- if .Values.storageClass }}
      storageClassName: {{ .Values.storageClass }}
      {{- end }}
      resources:
        requests:
          storage: {{.Values.executor.volumeSize}}
{{ end }}
The above is NOT working, and I don't see any jars being downloaded once I exec into the pod. I don't understand what is wrong with it. Note, however, that if I run the same wget command inside the pod it does download the jar, which baffles me. So the URL is reachable and the directory is writable, yet the jar is still not downloaded?
If you can remove the need for Wget altogether it would make life easier...
Option 1
Using your own Docker image will save some pain, if that's an option.
Dockerfile
# docker build -f Dockerfile -t ghcr.io/yourOrg/projectId/dockerImageName:0.0.1 .
# docker push ghcr.io/yourOrg/projectId/dockerImageName:0.0.1
FROM nginx:1.19.10-alpine
# Use local copies of config
COPY files/some1.jar /dir/
COPY files/some2.jar /dir/
The files will be ready in the container, with no need for cryptic commands in your pod definition. Alternatively, if you need to download the files, you could instead copy a script that does that work into the Docker image and run it on startup via the Docker CMD directive (a sketch follows).
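A rough sketch of that download-on-startup variant (the URL, jar name, and /dir path are placeholders):
FROM nginx:1.19.10-alpine
RUN mkdir -p /dir
# Fetch the jar when the container starts, then run nginx in the foreground as the base image does.
CMD ["/bin/sh", "-c", "wget -O /dir/some1.jar https://example.com/some1.jar && nginx -g 'daemon off;'"]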
Option 2
Alternatively, you could do a two-stage deployment (see the sketch after this list):
1. Create a PersistentVolume.
2. Mount the volume to a pod (busybox as a base?) that runs long enough for the files to be copied across from your local machine (or to be downloaded, if you continue to use wget).
3. kubectl cp the files you need onto the (Retained) PersistentVolume.
4. Now mount the PV into your pod's container(s) so the files are readily available when the pod fires up.
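A rough sketch of steps 2 and 3 (the pod name and the PVC name jars-pvc are placeholders):
# Helper pod that keeps the (Retained) PVC mounted long enough to copy the files in.
apiVersion: v1
kind: Pod
metadata:
  name: jar-loader
spec:
  containers:
  - name: loader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: jars
      mountPath: /jars
  volumes:
  - name: jars
    persistentVolumeClaim:
      claimName: jars-pvc
Then copy the jars from your local machine into the volume and delete the helper pod:
kubectl cp ./connector.jar jar-loader:/jars/connector.jar
Afterwards, mount the same PVC into the Dremio executor at /opt/dremio/jars.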
Your approach seems right.
Another solution could be to include the jar in the Docker image, but I think that's not possible, right?
You could just use an emptyDir instead of a VolumeClaim.
Lastly, I would download the jar before waiting for ZooKeeper, to gain some time.

Grafana is generating links with Base URL : http://localhost:3000 instead of using my url

I deployed Grafana 7 with Kubernetes; here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-core
  namespace: monitoring
  labels:
    app: grafana
    component: core
spec:
  selector:
    matchLabels:
      app: grafana
  replicas: 1
  template:
    metadata:
      labels:
        app: grafana
        component: core
    spec:
      initContainers:
      - name: init-chown-data
        image: grafana/grafana:7.0.3
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 0
        command: ["chown", "-R", "472:472", "/var/lib/grafana"]
        volumeMounts:
        - name: grafana-persistent-storage
          mountPath: /var/lib/grafana
      containers:
      - image: grafana/grafana:7.0.3
        name: grafana-core
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 472
        # env:
        envFrom:
        - secretRef:
            name: grafana-env
        env:
        # The following env variables set up basic auth with the default admin user and admin password.
        - name: GF_INSTALL_PLUGINS
          value: grafana-clock-panel,grafana-simple-json-datasource,camptocamp-prometheus-alertmanager-datasource
        - name: GF_AUTH_BASIC_ENABLED
          value: "true"
        - name: GF_SECURITY_ADMIN_USER
          valueFrom:
            secretKeyRef:
              name: grafana
              key: admin-username
        - name: GF_SECURITY_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              name: grafana
              key: admin-password
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "false"
        readinessProbe:
          httpGet:
            path: /login
            port: 3000
          initialDelaySeconds: 30
          timeoutSeconds: 1
        volumeMounts:
        - name: grafana-persistent-storage
          mountPath: /var/lib/grafana
        - name: grafana-datasources
          mountPath: /etc/grafana/provisioning/datasources
      volumes:
      - name: grafana-persistent-storage
        persistentVolumeClaim:
          claimName: grafana-storage
      - name: grafana-datasources
        configMap:
          name: grafana-datasources
      nodeSelector:
        kops.k8s.io/instancegroup: monitoring-nodes
It is working well, but each time it generates a URL, it uses the base URL http://localhost:3000 instead of https://grafana.company.com.
Where can I configure that? I couldn't find an env var that handles it.
Configure the root_url option of the [server] section in your Grafana config file, or the env variable GF_SERVER_ROOT_URL, to https://grafana.company.com/.
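In the Deployment above, that could be added next to the other GF_* variables of the grafana-core container, for example:
        env:
        # ...existing GF_* entries...
        - name: GF_SERVER_ROOT_URL
          value: "https://grafana.company.com/"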
I have found it can be done using the env variable inside the Grafana pod. This setup is a tricky one: misusing the URL format of GF_SERVER_ROOT_URL (your.url with no quotes, "your.url" without https:// or http://, or even "http://your.url" with no trailing /) may cause problems.
grafana:
  env:
    GF_SERVER_ROOT_URL: "http://your.url/"
  notifiers:
    notifiers.yaml:
      notifiers:
      - name: telegram
        type: telegram
        uid: telegram
        is_default: true
        settings:
          bottoken: "yourbottoken"
          chatid: "-yourchatid"
Then use uid: "telegram" in the provisioned dashboards.

How can I start a job automatically after a successful deployment in kubernetes?

I have a deployment .yaml file that basically creates a pod with MariaDB, as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: {{ .Release.Name }}-pod
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        pod: {{ .Release.Name }}-pod
    spec:
      containers:
      - env:
        - name: MYSQL_ROOT_PASSWORD
          value: {{ .Values.db.password }}
        image: {{ .Values.image.repository }}
        name: {{ .Release.Name }}
        ports:
        - containerPort: 3306
        resources:
          requests:
            memory: 2048Mi
            cpu: 0.5
          limits:
            memory: 4096Mi
            cpu: 1
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: dbsvr-claim
        - mountPath: /etc/mysql/conf.d/my.cnf
          name: conf
          subPath: my.cnf
        - mountPath: /docker-entrypoint-initdb.d/init.sql
          name: conf
          subPath: init.sql
      restartPolicy: Always
      volumes:
      - name: dbsvr-claim
        persistentVolumeClaim:
          claimName: {{ .Release.Name }}-claim
      - name: conf
        configMap:
          name: {{ .Release.Name }}-configmap
status: {}
Upon success on
helm install abc ./abc/ -f values.yaml
I have a job that generates a mysqldump backup file, and it completes successfully (showing just the relevant code):
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-job
spec:
  template:
    metadata:
      name: {{ .Release.Name }}-job
    spec:
      containers:
      - name: {{ .Release.Name }}-dbload
        image: {{ .Values.image.repositoryRoot }}/{{.Values.image.imageName}}
        command: ["/bin/sh", "-c"]
        args:
        - mysqldump -p$(PWD) -h{{.Values.db.source}} -u$(USER) --databases xyz > $(FILE);
          echo "done!";
        imagePullPolicy: Always
      # Do not restart containers after they exit
      restartPolicy: Never
So, here's my question. Is there a way to automatically start the job after the helm install abc ./ -f values.yaml finishes with success?
You can use the kubectl wait command (see kubectl wait -h) to start the Job once the Deployment reaches the desired condition (for a Deployment, condition=Available).
The article wait-for-condition demonstrates a quite similar situation.
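For example, assuming the release is named abc so the Deployment is called abc-pod (matching the manifest above) and the Job is kept in its own manifest:
# Block until the Deployment reports Available, then create the Job.
kubectl wait --for=condition=Available --timeout=300s deployment/abc-pod
kubectl apply -f job.yaml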

How do I get Kubernetes Persistent Volumes to deploy in the proper zone?

I'm running a kubernetes 1.6.2 cluster across three nodes in different zones in GKE and I'm trying to deploy my statefulset where each pod in the statefulset gets a PV attached to it. The problem is that kubernetes is creating the PVs in the one zone where I don't have a node!
$ kubectl describe node gke-multi-consul-default-pool-747c9378-zls3|grep 'zone=us-central1'
failure-domain.beta.kubernetes.io/zone=us-central1-a
$ kubectl describe node gke-multi-consul-default-pool-7e987593-qjtt|grep 'zone=us-central1'
failure-domain.beta.kubernetes.io/zone=us-central1-f
$ kubectl describe node gke-multi-consul-default-pool-8e9199ea-91pj|grep 'zone=us-central1'
failure-domain.beta.kubernetes.io/zone=us-central1-c
$ kubectl describe pv pvc-3f668058-2c2a-11e7-a7cd-42010a8001e2|grep 'zone=us-central1'
failure-domain.beta.kubernetes.io/zone=us-central1-b
I'm using the standard storageclass which has no default zone set:
$ kubectl describe storageclass standard
Name: standard
IsDefaultClass: Yes
Annotations: storageclass.beta.kubernetes.io/is-default-class=true
Provisioner: kubernetes.io/gce-pd
Parameters: type=pd-standard
Events: <none>
So I thought that the scheduler would automatically provision the volumes in a zone where a cluster node existed, but it doesn't seem to be doing that.
For reference, here is the yaml for my statefulset:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: "{{ template "fullname" . }}"
  labels:
    heritage: {{.Release.Service | quote }}
    release: {{.Release.Name | quote }}
    chart: "{{.Chart.Name}}-{{.Chart.Version}}"
    component: "{{.Release.Name}}-{{.Values.Component}}"
spec:
  serviceName: "{{ template "fullname" . }}"
  replicas: {{default 3 .Values.Replicas}}
  template:
    metadata:
      name: "{{ template "fullname" . }}"
      labels:
        heritage: {{.Release.Service | quote }}
        release: {{.Release.Name | quote }}
        chart: "{{.Chart.Name}}-{{.Chart.Version}}"
        component: "{{.Release.Name}}-{{.Values.Component}}"
        app: "consul"
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      securityContext:
        fsGroup: 1000
      containers:
      - name: "{{ template "fullname" . }}"
        image: "{{.Values.Image}}:{{.Values.ImageTag}}"
        imagePullPolicy: "{{.Values.ImagePullPolicy}}"
        ports:
        - name: http
          containerPort: {{.Values.HttpPort}}
        - name: rpc
          containerPort: {{.Values.RpcPort}}
        - name: serflan-tcp
          protocol: "TCP"
          containerPort: {{.Values.SerflanPort}}
        - name: serflan-udp
          protocol: "UDP"
          containerPort: {{.Values.SerflanUdpPort}}
        - name: serfwan-tcp
          protocol: "TCP"
          containerPort: {{.Values.SerfwanPort}}
        - name: serfwan-udp
          protocol: "UDP"
          containerPort: {{.Values.SerfwanUdpPort}}
        - name: server
          containerPort: {{.Values.ServerPort}}
        - name: consuldns
          containerPort: {{.Values.ConsulDnsPort}}
        resources:
          requests:
            cpu: "{{.Values.Cpu}}"
            memory: "{{.Values.Memory}}"
        env:
        - name: INITIAL_CLUSTER_SIZE
          value: {{ default 3 .Values.Replicas | quote }}
        - name: STATEFULSET_NAME
          value: "{{ template "fullname" . }}"
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: STATEFULSET_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/consul
        - name: gossip-key
          mountPath: /etc/secrets
          readOnly: true
        - name: config
          mountPath: /etc/consul
        - name: tls
          mountPath: /etc/tls
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - consul leave
        livenessProbe:
          exec:
            command:
            - consul
            - members
          initialDelaySeconds: 300
          timeoutSeconds: 5
        command:
        - "/bin/sh"
        - "-ec"
        - "/tmp/consul-start.sh"
      volumes:
      - name: config
        configMap:
          name: consul
      - name: gossip-key
        secret:
          secretName: {{ template "fullname" . }}-gossip-key
      - name: tls
        secret:
          secretName: consul
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
      {{- if .Values.StorageClass }}
        volume.beta.kubernetes.io/storage-class: {{.Values.StorageClass | quote}}
      {{- else }}
        volume.alpha.kubernetes.io/storage-class: default
      {{- end }}
    spec:
      accessModes:
      - "ReadWriteOnce"
      resources:
        requests:
          # upstream recommended max is 700M
          storage: "{{.Values.Storage}}"
There is a bug open for this issue here.
The workaround in the meantime is to set the zones parameter in your StorageClass to specify the exact zones where your Kubernetes cluster has nodes. Here is an example.
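A sketch of that workaround, using the zones from the node listing in the question (the class name is arbitrary):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard-zoned
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zones: us-central1-a,us-central1-c,us-central1-f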
Answer from the Kubernetes documentation about Persistent Volumes: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#gce
zone: GCE zone. If not specified, a random zone in the same region as controller-manager will be chosen.
I guess your controller manager is in region us-central1, so any zone can be chosen from that region. In your case I guess the only zone that is not covered is us-central1-b, so you either have to start a node there as well, or set the zone in the StorageClass resource.
You could create a StorageClass for each zone; a PV/PVC can then specify that storage class. Your StatefulSets/Deployments could be set up to target a specific node via nodeSelector so they always get scheduled on a node in a specific zone (see the built-in node labels).
storage_class.yml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: us-central-1a
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-central1-a
persistent_volume.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: some-volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: us-central-1a
Note that you can use storageClassName in Kubernetes 1.6; otherwise the annotation volume.beta.kubernetes.io/storage-class should work too (although it will be deprecated in the future).
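For instance, a claim bound to the class above could look like this (the claim name is a placeholder; the annotation form is shown commented out):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: some-claim
  # annotations:
  #   volume.beta.kubernetes.io/storage-class: us-central-1a
spec:
  storageClassName: us-central-1a
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi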

Host specific volumes in Kubernetes manifests

I am fairly sure this isn't possible, but I wanted to check.
I am using Kubernetes stateful sets, so my hosts get obvious hostnames.
I'd like them to provision a hostPath mount that is mapped to their hostname.
An example helm chart that I'm using might look like this:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: app
  namespace: '{{ .Values.name }}'
  labels:
    chart: '{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}'
spec:
  serviceName: "app"
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: app
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}/{{ .Values.image.version}}"
        imagePullPolicy: '{{ .Values.image.pullPolicy }}'
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        ports:
        - containerPort: {{ .Values.baseport | add 80 }}
          name: app
        volumeMounts:
        - mountPath: /NAS/$(POD_NAME)
          name: store
          readOnly: true
      volumes:
      - name: store
        hostPath:
          path: /NAS/$(POD_NAME)
Essentially, instead of hardcoding a volume, I'd like to have some kind of dynamic variable as the path. I don't mind using helm or the downward API for this, but ideally it would work when I scale the stateful set outwards.
Is there any way of doing this? Everything I've read in the docs suggests there isn't... :(