Kubernetes Persistent Volume overwrites image data

I have a pod that reads from an image that contains data within /var/www/html. I want this data to be stored in a persistent volume. This is my deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      container: app
  template:
    metadata:
      labels:
        container: app
    spec:
      containers:
      - name: app
        image: my/toolkit-app:working
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /var/www/html
          name: toolkit-volume
          subPath: html
      volumes:
      - name: toolkit-volume
        persistentVolumeClaim:
          claimName: azurefile
      imagePullSecrets:
      - name: my-cred
However, when I look into the pod, I can see that the directory is empty.
If I comment out the persistent volume:
#volumeMounts:
#- mountPath: /var/www/html
#  name: toolkit-volume
#  subPath: html
I can see that the image data is there.
So it seems like the persistent volume is overwriting the existing directory - is there a way around this? Ideally I want /var/www/html to be stored in a separate volume, and for any existing files within the image to be stored there too.

This is more a problem of visibility: if you mount an empty volume at a specific path, you won't be able to see what was placed there by the container image.
From your question I assume that you want to be able to roll out updates by means of a new container image, but at the same time retain variable data that your application created in the same directory.
You could achieve this with the following method (a sketch follows the steps):
Use an init container with the same image and mount your persistent directory to a different path, for example /data.
As the command for the init container, copy the contents of /var/www/html to /data.
In the regular container use the mount you already have; it will contain your variable data plus the updated data copied in by the init container.
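A minimal sketch of this approach, reusing the names from your manifest. The init container name and the /data staging path are illustrative assumptions, and the copy command assumes the image ships a shell and cp:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      container: app
  template:
    metadata:
      labels:
        container: app
    spec:
      initContainers:
      - name: copy-html                 # hypothetical name
        image: my/toolkit-app:working   # same image as the main container
        # copy the image's content into the persistent volume
        command: ['sh', '-c', 'cp -a /var/www/html/. /data/']
        volumeMounts:
        - mountPath: /data              # the persistent volume, staged at /data
          name: toolkit-volume
          subPath: html
      containers:
      - name: app
        image: my/toolkit-app:working
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /var/www/html      # now pre-populated by the init container
          name: toolkit-volume
          subPath: html
      volumes:
      - name: toolkit-volume
        persistentVolumeClaim:
          claimName: azurefile
      imagePullSecrets:
      - name: my-cred
Since cp -a runs on every Pod start, files that also exist in the image are refreshed from it on each rollout, while files created only by your application are left untouched.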

Related

Creating a file in Kubernetes pod

I have an application which saves a user-uploaded file to disk, does some processing, and creates a new file on disk with the processed data, which it returns to the user.
I am migrating this application to Kubernetes, and when I deploy it, it errors out when trying to save the file to the local disk.
Any suggestions?
Save your file to an emptyDir volume for this kind of temporary storage.
See the configuration example in the documentation, which uses this kind of volume as a "cache":
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}

Sharing non-persistent volume between containers in a pod

I am trying to put two Node.js applications into the same pod, because normally they should sit on the same machine, and they are unfortunately heavily coupled together in such a way that each of them looks for the folder of the other (pos/app.js needs /pos-service, and pos-service/app.js needs /pos).
In the end, the folder is supposed to contain:
/pos
/pos-service
Their volume doesn't need to be persistent, so I tried sharing their volumes with an emptyDir like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pos-deployment
  labels:
    app: pos
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pos
  template:
    metadata:
      labels:
        app: pos
    spec:
      volumes:
      - name: shared-data
        emptyDir: {}
      containers:
      - name: pos-service
        image: pos-service:0.0.1
        volumeMounts:
        - name: shared-data
          mountPath: /pos-service
      - name: pos
        image: pos:0.0.3
        volumeMounts:
        - name: shared-data
          mountPath: /pos
However, when the pod is launched and I exec into each of the containers, they still seem to be isolated and each other's folders can't be seen.
I would appreciate any help, thanks.
This is a Community Wiki answer so feel free to edit it and add any additional details you consider important.
Since this issue has already been solved, or rather clarified (in fact there is nothing to be solved here), let's post a Community Wiki answer, as it's partially based on comments from a few different users.
As Matt and David Maze have already mentioned, it works as expected and in your example there is nothing that copies any content to your emptyDir volume:
With just the YAML you've shown, nothing ever copies anything into the emptyDir volume, unless the images' startup knows to do that. – David Maze Dec 28 '20 at 12:45
And as the name itself may suggest, an emptyDir comes totally empty, so it's your task to pre-populate it with the desired data. This can be done with an init container, by temporarily mounting your emptyDir at a different mount point, e.g. /mnt/empty-dir-content, copying into it the content of the directory or directories already present in your container, e.g. /pos and /pos-service as in your example, and then mounting it again at the desired location. Take a look at this example, presented in one of my older answers, as it can be done in the very same way. Your Deployment may look as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pos-deployment
  labels:
    app: pos
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pos
  template:
    metadata:
      labels:
        app: pos
    spec:
      volumes:
      - name: shared-data
        emptyDir: {}
      initContainers:
      - name: pre-populate-empty-dir-1
        image: pos-service:0.0.1
        command: ['sh', '-c', 'cp -a /pos-service/* /mnt/empty-dir-content/']
        volumeMounts:
        - name: shared-data
          mountPath: "/mnt/empty-dir-content/"
      - name: pre-populate-empty-dir-2
        image: pos:0.0.3
        command: ['sh', '-c', 'cp -a /pos/* /mnt/empty-dir-content/']
        volumeMounts:
        - name: shared-data
          mountPath: "/mnt/empty-dir-content/"
      containers:
      - name: pos-service
        image: pos-service:0.0.1
        volumeMounts:
        - name: shared-data
          mountPath: /pos-service
      - name: pos
        image: pos:0.0.3
        volumeMounts:
        - name: shared-data
          mountPath: /pos
It's worth mentioning that there is nothing surprising here, as this is exactly how mount works on Linux and other *nix-based operating systems.
If you have e.g. /var/log/your-app on your main disk, populated with logs, and you then mount a new, empty disk with /var/log/your-app as its mountpoint, you won't see any content there. It isn't deleted from its original location on your main disk; it simply becomes unavailable, because at this location you've now mounted a completely different volume (which happens to be empty, or may have totally different content). When you unmount it and visit /var/log/your-app again, you'll see its original content. The minimal demo below illustrates this.
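A hypothetical one-shot Pod that demonstrates the shadowing, assuming the paulbouwer/hello-kubernetes:1.8 image from the question further below (which ships content in /usr/src/app and provides a shell):
apiVersion: v1
kind: Pod
metadata:
  name: mount-shadow-demo      # hypothetical demo pod
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: paulbouwer/hello-kubernetes:1.8
    # /usr/src/app is populated in the image, yet this listing comes back
    # empty because an empty volume is mounted on top of it
    command: ['sh', '-c', 'ls -la /usr/src/app']
    volumeMounts:
    - name: shadow
      mountPath: /usr/src/app
  volumes:
  - name: shadow
    emptyDir: {}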

K8s 1.16: Mounting an existing directory in an image to a pv

tl;dr: How do we mount an existing directory in a pod to a PV, so that the data generated there persists?
We are running K8s 1.16.7 at the moment, with Azure Disk and Azure File integration. We have an image that contains some directories we would like to have stored on a PV for persistence. In Docker, this could be easily handled since the container would write the data to a host mount. Does anyone know how to solve this issue in Kubernetes? When we do this now, the container boots, but the directory (for example /etc/nginx/conf.d/ as a mount into the PV) is empty and therefore the pod crashes.
Example:
In the container below, /usr/src/app is filled with the hello-world application. After deploying the file below, the container crashes because it cannot find anything in /usr/src/app (the directory is empty due to the PV mount).
---
apiVersion: v1
kind: Namespace
metadata:
  name: testwebsite
  labels:
    environment: development
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: normal
  namespace: testwebsite
provisioner: disk.csi.azure.com
parameters:
  storageaccounttype: Standard_LRS
  kind: Managed
  resourceGroup: resourcegroup
  cachingmode: None
mountOptions:
- dir_mode=0777
- file_mode=0777
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azurefile
  namespace: testwebsite
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: normal
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
  namespace: testwebsite
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: azurefile01
          mountPath: "/usr/src/app"
      volumes:
      - name: azurefile01
        persistentVolumeClaim:
          claimName: pvc-azurefile
Goal: have the data that's in /usr/src/app within the container written to the PV.
Thanks in advance!
As far as I understand your requirement, each time your Pod is created you want its /usr/src/app to contain both the data generated so far by your app and stored permanently in the PersistentVolume, and the original content of /usr/src/app that is an integral part of your paulbouwer/hello-kubernetes:1.8 image.
You can achieve this in Kubernetes by using an init container, which copies the original content of the /usr/src/app directory during the Pod startup process to the PersistentVolume, which may already contain data previously generated by your app. After such volume initialization, the main container will mount the PersistentVolume containing both the data previously generated by your app (if any) and the original content of the /usr/src/app directory from your image.
Your Deployment may look as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
  namespace: testwebsite
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      initContainers:
      - name: init-hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.8
        command: ['sh', '-c', 'cp -a /usr/src/app/* /mnt/pv-content/']
        volumeMounts:
        - name: azurefile01
          mountPath: "/mnt/pv-content"
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: azurefile01
          mountPath: "/usr/src/app"
      volumes:
      - name: azurefile01
        persistentVolumeClaim:
          claimName: pvc-azurefile
In order to get the original data from /usr/src/app/ of the paulbouwer/hello-kubernetes:1.8 image, your init container must also be based on that image.
One caveat: the paulbouwer/hello-kubernetes:1.8 image must contain a cp binary to be able to perform the operation.
As you can see, it's not a very "elegant" solution, and that's why it is not recommended to mount your PersistentVolume under a directory which already contains important files required by your app to run properly. There is simply no way to mount a volume under a certain mount point and preserve its original content at the same time. It doesn't work this way on Linux or other *nix-based systems. You either mount the whole volume or you don't mount it at all and preserve the original content of the specific directory. The original content isn't even overwritten. It's still there. It simply remains unavailable while this specific path is used as a mount point for a different volume.

How to mount data file in kubernetes via pvc?

I want to persist a data file via a PVC with GlusterFS in Kubernetes. Mounting a directory works, but when I try to mount a single file it fails, because the file gets mounted as a directory. How can I mount a data file in k8s?
How can I mount the data file in k8s?
This is often application specific and there are several ways to do so, but mainly you want to read about subPath.
Generally, you can choose to:
Use subPath to separate config files.
Mount the volume/path as a directory at some other location and then link the file to the specific place within the pod (for rare cases where mixing with other config files or directory permissions in the same dir presents an issue, or where the boot/start policy of the application prevents files from being mounted at pod start but requires them to be present after some initialization is performed; really edge cases).
Use ConfigMaps (or even Secrets) to hold configuration files. Note that when using subPath with a ConfigMap or Secret, the pod won't get updates there automatically, but this is the more common way of handling configuration files, and your conf/interpreter.json looks like a fine example...
Notes to keep in mind:
Mounting "overlaps" the underlying path, so you have to mount down to the file itself in order to share its folder with other files. Mounting only up to the folder would get you a folder with a single file in it, which is usually not what is required.
If you use ConfigMaps, then you have to reference the individual file with subPath in order to mount it, even if the ConfigMap contains only a single file. Something like this:
containers:
- volumeMounts:
  - name: my-config
    mountPath: /my-app/my-config.json
    subPath: config.json
volumes:
- name: my-config
  configMap:
    name: cm-my-config-map-example
Edit:
Full example of mounting a single example-script.sh script file to the /bin directory of a container using a ConfigMap.
You can adjust this example to place any file, with any permissions, in any desired folder. Replace my-namespace with any desired namespace (or remove it completely for the default one).
Config map:
kind: ConfigMap
apiVersion: v1
metadata:
  namespace: my-namespace
  name: cm-example-script
data:
  example-script.sh: |
    #!/bin/bash
    echo "Yaaaay! It's an example!"
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: my-namespace
  name: example-deployment
  labels:
    app: example-app
spec:
  selector:
    matchLabels:
      app: example-app
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - image: ubuntu:16.04
        name: example-app-container
        stdin: true
        tty: true
        volumeMounts:
        - mountPath: /bin/example-script.sh
          subPath: example-script.sh
          name: example-script
      volumes:
      - name: example-script
        configMap:
          name: cm-example-script
          defaultMode: 0744
Full example of mounting a single test.txt file to the /bin directory of a container using a persistent volume (the file already exists in the root of the volume).
If you wish to mount from a persistent volume instead of a ConfigMap, here is another example of mounting in much the same way (test.txt is mounted at /bin/test.txt). Note two things: test.txt must already exist on the PV, and I'm using a StatefulSet only to get an automatically provisioned PVC; adjust accordingly...
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: my-namespace
  name: ss-example-file-mount
spec:
  serviceName: svc-example-file-mount
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - image: ubuntu:16.04
        name: example-app-container
        stdin: true
        tty: true
        volumeMounts:
        - name: persistent-storage-example
          mountPath: /bin/test.txt
          subPath: test.txt
  volumeClaimTemplates:
  - metadata:
      name: persistent-storage-example
    spec:
      storageClassName: sc-my-storage-class-for-provisioning-pv
      accessModes: [ ReadWriteOnce ]
      resources:
        requests:
          storage: 2Gi

Directories creation inside the Kubernetes Persistent Volume

How would we create a directory inside a Kubernetes persistent volume, to mount and use in a container as a subPath? E.g. a mysql directory should be created while claiming the persistent volume.
I would probably put an init container into my pod spec that simply mounts the volume, runs mkdir -p to create the directory, and then exits. You could also do this in the target container itself with some kind of script.
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
This is how I implemented the wise solution of #brett-wagner with an initContainer and mkdir -p. I create two subdirectories, my-app-data and my-app-media, in my NFS server volume /exports:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nfs-server-deploy
  labels:
    app: my-nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-nfs-server
  template:
    metadata:
      labels:
        app: my-nfs-server   # template labels must match the selector
    spec:
      containers:
      - name: my-nfs-server-cntr
        image: k8s.gcr.io/volume-nfs:0.8
        volumeMounts:
        - name: my-nfs-server-exports
          mountPath: "/exports"
      initContainers:
      - name: volume-dirs-init-cntr
        image: busybox:1.35
        command:
        - "/bin/mkdir"
        args:
        - "-p"
        - "/exports/my-app-data"
        - "/exports/my-app-media"
        volumeMounts:
        - name: my-nfs-server-exports
          mountPath: "/exports"
      volumes:
      - name: my-nfs-server-exports
        persistentVolumeClaim:
          claimName: my-nfs-server-pvc
I think you could use a readinessProbe with an exec action to create the subfolder. It will make sure the folder is ready before the container starts accepting requests; see the sketch below.
Alternatively, you could use the container's command to create it, but that will only be executed after the container starts.
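A minimal sketch of the readinessProbe idea; the container name, image, and paths are placeholders, not taken from the question:
containers:
- name: my-app               # placeholder name and image
  image: my/app:latest
  volumeMounts:
  - name: data               # the mounted persistent volume
    mountPath: /data
  readinessProbe:
    exec:
      # mkdir -p is idempotent: it creates /data/mysql on the first probe
      # and succeeds immediately on every later one
      command: ['sh', '-c', 'mkdir -p /data/mysql']
    initialDelaySeconds: 2
    periodSeconds: 10
The init container approach above is usually cleaner, though, since the directory is then guaranteed to exist before the main container starts.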