Use Pod name or uid in Volume mountPath - kubernetes

I have an NFS persistent volume that my pods can all access via a PVC; files are kept after the pods are destroyed.
I want each pod to be able to put its files under a unique subdirectory.
Is there any way that I can dynamically use, say, metadata.uid or metadata.name in the mountPath for the container? I.e., conceptually this:
volumeMounts:
- name: persistent-nfs-storage
  mountPath: /metadata.name/files
I think I can see how to handle creating the directory first, by using an init container and putting the value into the environment via the downward API. But I don't see any way to use it in the PVC mountPath.
Thanks for any help.

I don't know if it is possible to use the Pod name in a volume mountPath directly. But if the intention is to write files into a separate folder (named after the pod) on the same PVC, there are workarounds.
One way to achieve it is to get the base path and the pod name from environment variables and append them, then write into that directory.
In detail:
volumeMounts:
- name: persistent-nfs-storage
  mountPath: /nfs/directory
ENVs:
env:
# NFS_DIR and POD_NAME must be declared before WRITE_PATH so that the $(...) references expand
- name: NFS_DIR
  value: /nfs/directory
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: WRITE_PATH
  value: "$(NFS_DIR)/$(POD_NAME)"
In the application, use the $WRITE_PATH directory to write your files. Also, if necessary, create this directory from an init container, as sketched below.
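A minimal sketch of such an init container, assuming the persistent-nfs-storage volume from above; the busybox image and the exact command are illustrative:
initContainers:
- name: make-pod-dir
  image: busybox              # illustrative; any image that has mkdir works
  command: ['sh', '-c', 'mkdir -p "$NFS_DIR/$POD_NAME"']
  env:
  - name: NFS_DIR
    value: /nfs/directory
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  volumeMounts:
  - name: persistent-nfs-storage
    mountPath: /nfs/directory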


Kubernetes copy image data to volume mounts

I need to share a directory between two containers, myapp and monitoring. To achieve this I created an emptyDir: {} volume and a volumeMount for it in both containers.
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: myapp
    volumeMounts:
    - name: shared-data
      mountPath: /etc/myapp/
  - name: monitoring
    volumeMounts:
    - name: shared-data
      mountPath: /var/read
This works fine, as data written to the shared-data volume is visible in both containers. However, the config file that the image creates at /etc/myapp/myapp.config is hidden, because the shared-data volume is mounted over the /etc/myapp path (overlap).
How can I have the container mount the volume at /etc/myapp first and then have the image place myapp.config under that same (now mounted) default path, so that the config file is accessible to the monitoring container under /var/read?
Summary: let the monitoring container read the /etc/myapp/myapp.config file sitting on myapp container.
Can anyone advise, please?
Can you mount shared-data at /var/read in an init container and copy the config file from /etc/myapp/myapp.config to /var/read?
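A minimal sketch of that idea, assuming the init container can run the same image as myapp so that the baked-in config file exists; the image name myapp-image is a placeholder:
initContainers:
- name: copy-config
  image: myapp-image          # assumption: the same image used by the myapp container
  command: ['sh', '-c', 'cp /etc/myapp/myapp.config /var/read/']
  volumeMounts:
  - name: shared-data
    mountPath: /var/read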
Consider using ConfigMaps with SubPaths.
A ConfigMap is an API object used to store non-confidential data in
key-value pairs. Pods can consume ConfigMaps as environment variables,
command-line arguments, or as configuration files in a volume.
Sometimes, it is useful to share one volume for multiple uses in a
single pod. The volumeMounts.subPath property specifies a sub-path
inside the referenced volume instead of its root.
ConfigMaps can be used as volumes. The volumeMounts inside the template.spec look the same as for any other volume. However, the volumes section is different: instead of specifying a persistentVolumeClaim or another volume type, you reference the configMap by name. Then you can add the subPath property, which would look something like this:
volumeMounts:
- name: shared-data
  mountPath: /etc/myapp/myapp.config
  subPath: myapp.config
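And the corresponding volumes section, referencing the ConfigMap by name (the ConfigMap name myapp-config is an assumption; use whatever ConfigMap holds the myapp.config key):
volumes:
- name: shared-data
  configMap:
    name: myapp-config
The monitoring container can mount the same volume (for example at /var/read) to read the file.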
Here are the resources that would show you how to set it up:
Configure a Pod to Use a ConfigMap: official docs
Using ConfigMap SubPaths to Mount Files: step by step guide
Mount a file in your Pod using a ConfigMap: supplement

Using a variable within a path in Kubernetes

I have a simple StatefulSet with two containers. I just want to share a path by an emptyDir volume:
volumes:
- name: shared-folder
  emptyDir: {}
The first container is a busybox:
- image: busybox
  name: test
  command:
  - sleep
  - "3600"
  volumeMounts:
  - mountPath: /cache
    name: shared-folder
The second container creates a file on /cache/<POD_NAME>. I want to mount both paths within the emptyDir volume to be able to share files between containers.
volumeMounts:
- name: shared-folder
  mountPath: /cache/$(HOSTNAME)
Problem: the second container doesn't resolve /cache/$(HOSTNAME), so instead of mounting /cache/pod-0 it mounts the literal path /cache/$(HOSTNAME). I have also tried getting the POD_NAME and setting it as an env variable, but it doesn't resolve either.
Does anybody know if it is possible to use a path like this (with env variables) in the mountPath attribute?
To use an environment variable in the mount path you can use subPathExpr, which expands environment variables in the sub-path (Kubernetes v1.17+).
In your case it would look like the following:
containers:
- env:
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  volumeMounts:
  - mountPath: /cache
    name: shared-folder
    subPathExpr: $(MY_POD_NAME)
I tested it here, and just using Kubernetes (k8s < 1.16) with env variables it isn't possible to achieve what you want; basically, the variable only becomes available after the pod is deployed, and you are referencing it in the manifest before that happens.
You can use Helm to define your mountPath and StatefulSet with the same value: put the value in the values.yaml file, then use it both for the mountPath field and for the StatefulSet name. See the Helm documentation for details.
Edit:
Follow Matt's answer if you are using k8s 1.17 or higher.
The problem is that YAML configuration files are POSTed to Kubernetes exactly as they are written. This means that you need to create a templated YAML file, in which the referenced placeholders are replaced with values bound to environment variables (or other inputs) before the file is submitted.
As this is a known "quirk" of Kubernetes, there already exist tools to work around the problem. Helm is one of those tools, and it is very pleasant to use; a sketch of the idea follows.
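A minimal, hypothetical sketch of that templating idea with Helm (the value names appName and cacheDir and the container name worker are illustrative, not from the question):
# values.yaml (hypothetical)
appName: my-app
cacheDir: /cache

# templates/statefulset.yaml (excerpt)
metadata:
  name: {{ .Values.appName }}
spec:
  template:
    spec:
      containers:
      - name: worker
        volumeMounts:
        - name: shared-folder
          mountPath: {{ .Values.cacheDir }}/{{ .Values.appName }}
At render time Helm substitutes the values, so the StatefulSet name and the mount path stay in sync; note this gives one value per release, not a per-pod value.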

Kubernetes Volumes - Dynamic path

I want my applications to write log files at a host location, so I'm mounting a hostPath volume. But all applications try to write logs using the same file name.
I'd like to separate the files into folders named after the Pod names, but I see nowhere in the documentation how to implement it:
volumes:
- name: logs-volume
  hostPath:
    path: /var/logs/apps/${POD_NAME}
    type: DirectoryOrCreate
In the (not working) example above, apps should write files to the POD_NAME folder.
Is it possible?
As of kubernetes 1.17 this is supported using subPathExpr. See https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath-expanded-environment for details.
An alpha version of this feature appeared in Kubernetes 1.11 and has since stabilized as the subPathExpr field, which expands environment variables populated from the downward API. I haven't tested the alpha, but the stable form looks something like this:
volumeMounts:
- mountPath: /var/log
  name: logs
  subPathExpr: $(POD_NAME)
env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
volumes:
- name: logs
  hostPath:
    path: /var/logs/apps/

Can I share a single file between containers in a pod?

My pod has two containers - a primary container, and a sidecar container that monitors the /var/run/utmp file in the primary container and takes action when it changes. I'm trying to figure out how to make this file visible in the sidecar container.
This page describes how to use an emptyDir volume to share directories between containers in a pod. However, this only seems to work for directories, not single files. I also can't use this strategy to share the entire /var/run/ directory in the primary container, since mounting a volume there erases the contents of the directory, which the container needs to run.
I tried to work around this by creating a symlink to utmp in another directory and mounting that directory, but it doesn't look like symlinks in volumes are resolved in the way they would need to be for this to work.
Is there any way I can make one file in a container visible to other containers in the same pod? The manifest I'm experimenting with looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: utmp-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: debian
    command: ["/bin/bash"]
    args: ["-c", "sleep infinity"]
    volumeMounts:
    - name: main-run
      mountPath: /var/run    # or /var/run/utmp, which crashes
  - name: helper
    image: debian
    command: ["/bin/bash"]
    args: ["-c", "sleep infinity"]
    volumeMounts:
    - name: main-run
      mountPath: /tmp/main-run
  volumes:
  - name: main-run
    emptyDir: {}
If you can move the file to be shared into an empty subfolder, this could be a simple solution.
For example, move your file to /var/run/utmp/utmp and share the /var/run/utmp folder with an emptyDir, roughly as sketched below.
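A minimal sketch of the changed mounts under that approach, assuming the application in the main container can be configured to write utmp to /var/run/utmp/utmp instead of /var/run/utmp (that configuration is an assumption and happens outside Kubernetes):
containers:
- name: main
  image: debian
  volumeMounts:
  - name: main-run
    mountPath: /var/run/utmp      # the app writes /var/run/utmp/utmp into the shared volume
- name: helper
  image: debian
  volumeMounts:
  - name: main-run
    mountPath: /tmp/main-run      # the helper then sees the file at /tmp/main-run/utmp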

kubernetes transfer Physical IP to dubbo

I want to pass the physical (host) IP to a dubbo pod via YAML, but the parameter is a fixed value. For example:
dubbo.yaml
spec:
  replicas: 2
  ...
  env:
  - name: PhysicalIP
    value: 192.168.1.1
In the pod, before starting dubbo, I can rewrite the container's entry in /etc/hosts, for example:
echo "replace /etc/hosts"
cp /etc/hosts /etc/hosts.tmp
sed -i "s/.*$(hostname)/${PhysicalIP} $(hostname)/" /etc/hosts.tmp
cat /etc/hosts.tmp > /etc/hosts
The problem: when the pods are deployed to host 192.168.1.1 and host 192.168.1.2, the pod on host 192.168.1.2 still gets ${PhysicalIP} = 192.168.1.1. I want ${PhysicalIP} to be 192.168.1.2 on host 192.168.1.2. Is there any way to do this?
You should be able to get information about the pod through Environment Variables or using a DownwardAPIVolumeFile.
Using environment variables:
You should add to your yaml something like this:
env:
- name: MY_NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
As far as I know, the node name is, right now, the closest thing to what you need that you can get from inside the container.
Using a DownwardAPIVolumeFile:
You should add to your yaml something like this:
volumes:
- name: podinfo
  downwardAPI:
    items:
    - path: "nodename"
      fieldRef:
        fieldPath: spec.nodeName
Mount this volume into the container and the node name will be available in a file called nodename under the mount path, as sketched below.
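A minimal sketch of the corresponding mount; the mount path /etc/podinfo is an assumption, and the node name then appears in /etc/podinfo/nodename:
volumeMounts:
- name: podinfo
  mountPath: /etc/podinfo
  readOnly: true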
Issue #24657 and pull request #42717 on the Kubernetes GitHub are related to this. (Sorry, I need more reputation here to be able to post more links!)
As you can see there, access to the node IP through the downward API should be available soon (using status.hostIP, probably).
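For reference, once that lands, exposing the host IP through the downward API should look roughly like this (a short sketch using the status.hostIP field, reusing the question's PhysicalIP name):
env:
- name: PhysicalIP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP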