Kubernetes Volumes - Dynamic path

I want my applications to write log files to a host location, so I'm mounting a hostPath volume. But all applications try to write logs using the same file name.
I'd like to separate the files into folders named after the Pod names, but I can't find anywhere in the documentation how to implement this:
volumes:
  - name: logs-volume
    hostPath:
      path: /var/logs/apps/${POD_NAME}
      type: DirectoryOrCreate
In the (not working) example above, apps should write files to the POD_NAME folder.
Is it possible?

As of Kubernetes 1.17 this is supported using subPathExpr. See https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath-expanded-environment for details.
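For the original question, a minimal sketch of what that can look like (the Pod name, container name, image, and command below are illustrative, not taken from the question): the Pod name is exposed through the downward API and expanded by subPathExpr, so each Pod writes into its own folder under /var/logs/apps on the host.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-per-pod-logs            # illustrative name
spec:
  containers:
    - name: app
      image: busybox                     # placeholder image
      command: ["sh", "-c", "echo hello > /var/log/app.log && sleep 3600"]
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      volumeMounts:
        - name: logs-volume
          mountPath: /var/log
          # expands to the Pod name, so files end up in /var/logs/apps/<pod-name> on the host
          subPathExpr: $(POD_NAME)
  volumes:
    - name: logs-volume
      hostPath:
        path: /var/logs/apps
        type: DirectoryOrCreate

Note that subPathExpr can only expand environment variables defined on that container, which is why POD_NAME is declared explicitly via fieldRef.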

An alpha feature that might help is available in Kubernetes 1.11. I haven't tested it, but it apparently allows something like:
volumeMounts:
  - mountPath: /var/log
    name: logs
    subPathFrom:
      fieldRef:
        fieldPath: metadata.name
volumes:
  - name: logs
    hostPath:
      path: /var/logs/apps/

Related

Kubernetes: creating permanent folders in pod

I have a pod in which I'm running an image.
The pod is not mine but belongs to the company I work for.
Each time I mount a new image in the pod, it has access to some predefined "Permanent" folders.
When I use the edit deployment command I see this:
volumeMounts:
  - mountPath: /Data/logs
    name: ba-server-api-dh-pvc
    subPath: logs
  - mountPath: /Data/ErrorAndAbortedBlobs
    name: ba-server-api-dh-pvc
    subPath: ErrorAndAbortedBlobs
  - mountPath: /Data/SuccessfullyTransferredBlobs
    name: ba-server-api-dh-pvc
    subPath: SuccessfullyTransferredBlobs
  - mountPath: /Data/BlobsToBeTransferred
    name: ba-server-api-dh-pvc
    subPath: BlobsToBeTransferred
Now I want to manually add another such mountPath so I get another folder in the pod. But when I add it to the deployment config (the one above) and try to save it, I get the following error:
"error: deployments.extensions "ba-server-api-dh-deployment" is invalid"
What can I do to add another permanent folder to the pod?
Kind regards
It looks like you haven't specified the volume. It should look something like this:
...
volumeMounts:
  - mountPath: /Data/BlobsToBeTransferred
    name: ba-server-api-dh-pvc
    subPath: BlobsToBeTransferred
...
volumes:
  - name: ba-server-api-dh-pvc
    persistentVolumeClaim:
      claimName: ba-server-api-dh-pvc
Note that this assumes you already have a PersistentVolumeClaim named ba-server-api-dh-pvc; otherwise you will have to create one.
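As a sketch of how the extra folder could be added (NewFolder and the container name are hypothetical), the new mount is appended to the container's volumeMounts and reuses the volume already declared at the pod spec level:

containers:
  - name: ba-server-api                    # assumed container name
    volumeMounts:
      # ...existing mounts stay unchanged...
      - mountPath: /Data/NewFolder         # hypothetical new folder inside the pod
        name: ba-server-api-dh-pvc
        subPath: NewFolder                 # subdirectory inside the PVC
volumes:
  - name: ba-server-api-dh-pvc
    persistentVolumeClaim:
      claimName: ba-server-api-dh-pvc

If the edit is still rejected, the usual cause is indentation: volumeMounts belongs under the container entry, while volumes is a sibling of containers in the pod spec.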

Mounting /etc/default directory problem with SOLR image

I'm deploying a basic "solr:8.9.0" image to a local Kubernetes env.
If I mount the pod's "/var/solr" directory, it works well.
I can see the files inside /var/solr in the mounted directory.
spec:
  containers:
    - image: solr:8.6.0
      imagePullPolicy: IfNotPresent
      name: solr
      ports:
        - name: solrport
          containerPort: 8983
      volumeMounts:
        - mountPath: /var/solr/
          name: solr-volume
  volumes:
    - name: solr-volume
      persistentVolumeClaim:
        claimName: solr-pvc
But somehow I can't mount the "/etc/default/" directory; that doesn't work.
I know there are files inside that directory, but they disappear.
Any idea why?
Thanks!
This is because of how volumeMounts work.
A standard volumeMount mounts the volume at the supplied directory, overwriting everything that is inside that directory.
You want to specify a subPath for the data you actually want to mount. By doing this the original contents of the directory won't get overridden.
See https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath for more information regarding the usage of subPath.
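A minimal sketch of that idea for the Solr pod, assuming the data you need lives in a solr-defaults subdirectory of the PVC (a hypothetical name): only that single path inside /etc/default is mounted over, so the files shipped in the image remain visible.

spec:
  containers:
    - name: solr
      image: solr:8.6.0
      volumeMounts:
        - mountPath: /var/solr/
          name: solr-volume
        # mounts only one subdirectory of the PVC into /etc/default,
        # so the files already present in /etc/default are not hidden
        - mountPath: /etc/default/solr-defaults   # hypothetical target path
          name: solr-volume
          subPath: solr-defaults                  # hypothetical directory inside the PVC
  volumes:
    - name: solr-volume
      persistentVolumeClaim:
        claimName: solr-pvc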

Using a variable within a path in Kubernetes

I have a simple StatefulSet with two containers. I just want to share a path between them via an emptyDir volume:
volumes:
  - name: shared-folder
    emptyDir: {}
The first container is a busybox:
- image: busybox
  name: test
  command:
    - sleep
    - "3600"
  volumeMounts:
    - mountPath: /cache
      name: shared-folder
The second container creates a file on /cache/<POD_NAME>. I want to mount both paths within the emptyDir volume to be able to share files between containers.
volumeMounts:
  - name: shared-folder
    mountPath: /cache/$(HOSTNAME)
Problem: the second container doesn't resolve /cache/$(HOSTNAME), so instead of mounting /cache/pod-0 it mounts /cache/$(HOSTNAME). I have also tried getting the POD_NAME and setting it as an env variable, but it doesn't resolve it either.
Does anybody know if it is possible to use a path like this (with env variables) in the mountPath attribute?
To use mountPath with an env variable you can use subPath with expanded environment variables, i.e. subPathExpr (k8s v1.17+).
In your case it would look like the following:
containers:
  - env:
      - name: MY_POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
    volumeMounts:
      - mountPath: /cache
        name: shared-folder
        subPathExpr: $(MY_POD_NAME)
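For completeness, a rough sketch of how the two containers might fit together in the StatefulSet pod template (the second container's name, image, and command are illustrative): the first container sees the whole emptyDir under /cache, while the second writes to its own /cache, which subPathExpr maps to a per-pod subdirectory of the shared volume.

spec:
  template:
    spec:
      containers:
        # sees the whole shared volume, including every pod's subdirectory
        - name: test
          image: busybox
          command: ["sleep", "3600"]
          volumeMounts:
            - name: shared-folder
              mountPath: /cache
        # writes to /cache, which maps to the <pod-name>/ subdirectory of the volume
        - name: writer                      # illustrative name for the second container
          image: busybox                    # placeholder image
          command: ["sh", "-c", "touch /cache/hello && sleep 3600"]
          env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          volumeMounts:
            - name: shared-folder
              mountPath: /cache
              subPathExpr: $(MY_POD_NAME)
      volumes:
        - name: shared-folder
          emptyDir: {}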
I tested this, and with plain Kubernetes (k8s < 1.16) it isn't possible to achieve what you want using env variables alone: the variable only becomes available after the pod gets deployed, and you are referencing it before that happens.
You can use Helm to define your mountPath and StatefulSet with the same value in the values.yaml file, then take that same value and set it as the value of the mountPath field and the StatefulSet name. You can read about this here.
Edit:
Follow Matt's answer if you are using k8s 1.17 or higher.
The problem is that YAML configuration files are POSTed to Kubernetes exactly as they are written. This means that you need to create a templated YAML file, in which you can replace the referenced values with values bound to environment variables.
As this is a known "quirk" of Kubernetes, there already exist tools to circumvent this problem. Helm is one of those tools and is very pleasant to use.
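A rough sketch of that Helm approach, with all chart names and values being illustrative: a single value from values.yaml is reused for both the StatefulSet name and the mount path, so the two always stay in sync.

# values.yaml
appName: my-app                            # illustrative value shared by the name and the path

# templates/statefulset.yaml (fragment)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Values.appName }}
spec:
  serviceName: {{ .Values.appName }}
  template:
    spec:
      containers:
        - name: app
          image: busybox                   # placeholder image
          volumeMounts:
            - name: shared-folder
              mountPath: /cache/{{ .Values.appName }}
      volumes:
        - name: shared-folder
          emptyDir: {}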

Use Pod name or uid in Volume mountPath

I have an NFS physical volume that my pods can all access via a PVC, files are kept after pods are destroyed.
I want each pod to be able to put its files under a unique subdirectory.
Is there any way I can dynamically use, say, metadata.uid or metadata.name in the mountPath for the container? I.e., conceptually this:
volumeMounts:
- name: persistent-nfs-storage
mountPath: /metadata.name/files
I think I can see how to handle creating the directory first, by using an init container and putting the value into the environment using the downward API. But I don't see any way to use it in a PVC mountPath.
Thanks for any help.
I don't know if it is possible to use the Pod name in a volume mountPath. But if the intention is to write files to a separate folder (named after the pod) of the same PVC, there are workarounds.
One way to achieve it is by getting the base path and pod name from env variables and appending them, then writing the logs to that directory.
In detail:
volumeMounts:
  - name: persistent-nfs-storage
    mountPath: /nfs/directory
And the env variables:
env:
  - name: NFS_DIR
    value: /nfs/directory
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  # a dependent variable must be declared after the variables it references
  - name: WRITE_PATH
    value: "$(NFS_DIR)/$(POD_NAME)"
In the application, use the $WRITE_PATH directory to write your files. Also, if necessary, create this directory from an init container.
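A minimal sketch of such an init container (the name and image are illustrative): it mounts the same PVC and creates the per-pod directory before the main container starts writing to it.

spec:
  initContainers:
    - name: make-pod-dir                   # illustrative name
      image: busybox                       # placeholder image that provides mkdir
      command: ["sh", "-c", "mkdir -p /nfs/directory/$(POD_NAME)"]
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      volumeMounts:
        - name: persistent-nfs-storage
          mountPath: /nfs/directory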

mountPath overrides the rest of the files in that same folder

I have a volume with a secret called config-volume. I want to have that file in the /home/code/config folder, which is where the rest of the configuration files are. For that, I mount it like this:
volumeMounts:
  - name: config-volumes
    mountPath: /home/code/config
The issue is that, after deploying, /home/code/config only contains the secret file and the rest of the files are gone.
So /home/code/config is an existing (non-empty) folder, and I suspect that the volumeMount overwrites the folder.
Is there a way that this can be done without overwriting everything?
You can do the following, taken from this GitHub issue
containers:
  - volumeMounts:
      - name: config-volumes
        mountPath: /home/code/config
        subPath: config
volumes:
  - name: config-volumes
    configMap:
      name: my-config
This assumes that your ConfigMap is called my-config and that you have a key config in it.
Kubernetes Secrets are mounted as a directory, with each key as a file in that directory. So in your case, the config-volumes secret is mounted to /home/code/config, shadowing whatever that directory was before.
You could specify your volume mount as:
volumeMounts:
  - name: config-volumes
    mountPath: /home/code/config/config-volumes
which would provide a config-volumes directory inside the config directory with files for your secret's keys inside.
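Another workable pattern, sketched below with hypothetical names (a Secret called my-secret containing a key app-secret.yaml), is to mount individual keys as single files inside /home/code/config via subPath, so only those paths are overlaid and the existing configuration files remain visible:

containers:
  - name: code                             # illustrative container name
    volumeMounts:
      # overlays a single file at /home/code/config/app-secret.yaml,
      # leaving the other files in /home/code/config untouched
      - name: config-volumes
        mountPath: /home/code/config/app-secret.yaml
        subPath: app-secret.yaml           # key name inside the Secret
volumes:
  - name: config-volumes
    secret:
      secretName: my-secret                # hypothetical Secret name

One caveat: files mounted with subPath are not updated automatically when the Secret changes; the pod has to be restarted to pick up new values.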