K8s 1.16: Mounting an existing directory in an image to a pv - kubernetes

tl;dr: How do we mount an existing directory in a pod onto a PV so that the data generated there is persisted?
We are running K8s 1.16.7 at the moment, with Azure Disk and Azure File integration. We have an image that contains some directories we would like to have stored on a PV for persistence. In Docker, this could be handled easily, since the container would write the data to a host mount. Does anyone know how to solve this issue in Kubernetes? When we do this now, the container boots, but the directory (for example /etc/nginx/conf.d/ mounted onto a PV) is empty and therefore the pod crashes.
Example:
In the container below, /usr/src/app is filled with the hello-world application. After deploying the file below, the container crashes because it cannot find anything in /usr/src/app (the directory is empty due to the PV mount).
---
apiVersion: v1
kind: Namespace
metadata:
  name: testwebsite
  labels:
    environment: development
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: normal
  namespace: testwebsite
provisioner: disk.csi.azure.com
parameters:
  storageaccounttype: Standard_LRS
  kind: Managed
  resourceGroup: resourcegroup
  cachingmode: None
mountOptions:
  - dir_mode=0777
  - file_mode=0777
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azurefile
  namespace: testwebsite
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: normal
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
  namespace: testwebsite
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: azurefile01
          mountPath: "/usr/src/app"
      volumes:
      - name: azurefile01
        persistentVolumeClaim:
          claimName: pvc-azurefile
Goal: Have the data that's in /usr/src/app within the container written to the PV.
Thx in advance!

As far as I understand your requirement, each time your Pod is created you want its /usr/src/app directory to contain both the data generated so far by your app and stored permanently in the PersistentVolume, and the original content of /usr/src/app that is an integral part of your paulbouwer/hello-kubernetes:1.8 image.
You can achieve this in Kubernetes by using an init container, which copies the original content of the /usr/src/app directory during Pod startup to the PersistentVolume, which may already contain some data previously generated by your app. After this volume initialization, the main container mounts the PersistentVolume containing both the data previously generated by your app (if any) and the original content of the /usr/src/app directory from your image.
Your Deployment may look as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
  namespace: testwebsite
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      initContainers:
      - name: init-hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.8
        command: ['sh', '-c', 'cp -a /usr/src/app/* /mnt/pv-content/']
        volumeMounts:
        - name: azurefile01
          mountPath: "/mnt/pv-content"
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: azurefile01
          mountPath: "/usr/src/app"
      volumes:
      - name: azurefile01
        persistentVolumeClaim:
          claimName: pvc-azurefile
In order to get the original data from /usr/src/app/ of the paulbouwer/hello-kubernetes:1.8 image, your init container must also be based on that image.
One caveat: the paulbouwer/hello-kubernetes:1.8 image must contain a cp binary to be able to perform the copy.
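A quick way to check this locally (a sketch; it assumes Docker is available and that the image provides a shell, which the init container command above already relies on):
docker run --rm --entrypoint sh paulbouwer/hello-kubernetes:1.8 -c 'command -v cp'
# prints the path to cp (e.g. /bin/cp) if the binary is present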
As you can see, it's not a very "elegant" solution. Well, in fact it isn't, and that's why it is not recommended to mount your PersistentVolume under a directory which already contains important files required by your app to run properly. There is no way to mount a volume under a certain mount point and preserve the original content of that mount point at the same time; it simply doesn't work this way in Linux or other *nix-based systems. You either mount the whole volume, or you don't mount it at all and keep the original content of that specific directory. The original content isn't even overwritten - it's still there; it simply remains unavailable while this specific path is used as a mount point for a different volume.
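To illustrate that shadowing behaviour on a plain Linux host (a minimal sketch; the paths are made up for the example):
# mounting over a non-empty directory hides its content but does not delete it
mkdir -p /tmp/original /tmp/other
echo "created before the mount" > /tmp/original/file.txt
sudo mount --bind /tmp/other /tmp/original
ls /tmp/original    # empty - the original file is shadowed by the mount
sudo umount /tmp/original
ls /tmp/original    # file.txt is back, untouched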

Related

Kubernetes Persistent Volume Claim doesn't save the data

I made a persistent volume claim on Kubernetes to save MongoDB data. After restarting the deployment, I found that the data no longer exists, yet my PVC is still in the Bound state.
Here is my yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      volumes:
        - name: auth-mongo-data
          persistentVolumeClaim:
            claimName: auth-mongo-pvc
      containers:
        - name: auth-mongo
          image: mongo
          ports:
            - containerPort: 27017
              name: 'auth-mongo-port'
          volumeMounts:
            - name: auth-mongo-data
              mountPath: '/data/db'
---
# Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auth-mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
and I made a ClusterIP service for the deployment.
First off, if the PVC status is still Bound and the desired pod happens to start on another node, it will fail, as the PV can't be mounted into the pod. This happens because of the reclaimPolicy: Retain of the StorageClass (it can also be set on the PV directly with persistentVolumeReclaimPolicy: Retain). In order to fix this, you have to manually overwrite/delete the claimRef of the PV. Use kubectl patch pv PV_NAME -p '{"spec":{"claimRef": null}}' to do this; after doing so, the PV's status should be Available.
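For reference, a minimal sketch of that sequence (PV_NAME is a placeholder for the name of your PersistentVolume):
kubectl get pv                                             # find the affected PV
kubectl patch pv PV_NAME -p '{"spec":{"claimRef": null}}'  # clear the stale claimRef
kubectl get pv PV_NAME                                     # STATUS should now be Available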
In order to see if your application writes any data to the desired path, run your application and exec into it (kubectl -n NAMESPACE exec -it POD_NAME -- /bin/sh) and check your /data/db. You could also create a file with some random text, restart your application and check again.
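A minimal sketch of that restart test (NAMESPACE, POD_NAME and NEW_POD_NAME are placeholders):
kubectl -n NAMESPACE exec -it POD_NAME -- sh -c 'echo test > /data/db/marker.txt'
kubectl -n NAMESPACE delete pod POD_NAME                        # the Deployment recreates the pod
kubectl -n NAMESPACE exec -it NEW_POD_NAME -- ls -l /data/db    # marker.txt should still be there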
I'm fairly certain that if your PV isn't being recreated every time your application starts (which shouldn't be the case, because of Retain), then it's highly likely that your application isn't writing to the path specified. But you could also share your PersistentVolume config with us, as there might be some misconfiguration there as well.

Mounting Windows local folder into pod

I'm running an Ubuntu container with SQL Server in my local Kubernetes environment with Docker Desktop on a Windows laptop.
Now I'm trying to mount a local folder (C:\data\sql) that contains database files into the pod.
For this, I configured a persistent volume and persistent volume claim in Kubernetes, but it doesn't seem to mount correctly. I don't see errors or anything, but when I go into the container using docker exec -it and inspect the data folder, it's empty. I expect the files from the local folder to appear in the mounted folder 'data', but that's not the case.
Is something wrongly configured in the PV, PVC or pod?
Here are my yaml files:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dev-customer-db-pv
  labels:
    type: local
    app: customer-db
    chart: customer-db-0.1.0
    release: dev
    heritage: Helm
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /C/data/sql
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dev-customer-db-pvc
  labels:
    app: customer-db
    chart: customer-db-0.1.0
    release: dev
    heritage: Helm
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-customer-db
  labels:
    ufo: dev-customer-db-config
    app: customer-db
    chart: customer-db-0.1.0
    release: dev
    heritage: Helm
spec:
  selector:
    matchLabels:
      app: customer-db
      release: dev
  replicas: 1
  template:
    metadata:
      labels:
        app: customer-db
        release: dev
    spec:
      volumes:
        - name: dev-customer-db-pv
          persistentVolumeClaim:
            claimName: dev-customer-db-pvc
      containers:
        - name: customer-db
          image: "mcr.microsoft.com/mssql/server:2019-latest"
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: dev-customer-db-pv
              mountPath: /data
          envFrom:
            - configMapRef:
                name: dev-customer-db-config
            - secretRef:
                name: dev-customer-db-secrets
At first, I was trying to define a volume in the pod without PV and PVC, but then I got access denied errors when I tried to read files from the mounted data folder.
spec:
  volumes:
    - name: dev-customer-db-data
      hostPath:
        path: C/data/sql
  containers:
    ...
    volumeMounts:
      - name: dev-customer-db-data
        mountPath: data
I've also tried to Helm install with --set volumePermissions.enabled=true but this didn't solve the access denied errors.
Based on this info from GitHub, there is no support for hostPath volumes in WSL 2 with Docker Desktop.
Thus, the following workaround can be used.
We just need to prefix the original host path /c/data/sql with /run/desktop/mnt/host. There is no need for a PersistentVolume and PersistentVolumeClaim in this case - just remove them.
I changed spec.volumes for the Deployment according to the hostPath configuration documentation on the Kubernetes site:
volumes:
  - name: dev-customer-db-pv
    hostPath:
      path: /run/desktop/mnt/host/c/data/sql
      type: Directory
After applying these changes, the files can be found in the /data folder in the pod, since the volume is mounted with mountPath: /data.
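A quick way to verify this (POD_NAME is a placeholder): the files from C:\data\sql should now be listed under /data inside the container:
kubectl exec -it POD_NAME -- ls -l /data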

How to have data persist in GKE kubernetes StatefulSet with postgres?

So I'm just trying to get a web app running on GKE experimentally to familiarize myself with Kubernetes and GKE.
I have a StatefulSet (Postgres) with a persistent volume / persistent volume claim which is mounted to the Postgres pod as expected. The problem I'm having is getting the Postgres data to endure. If I mount the PV at /var/lib/postgresql, the data gets overwritten with each pod update. If I mount at /var/lib/postgresql/data, I get the warning:
initdb: directory "/var/lib/postgresql/data" exists but is not empty
It contains a lost+found directory, perhaps due to it being a mount point.
Using a mount point directly as the data directory is not recommended.
Create a subdirectory under the mount point.
Using Docker alone, having the volume mount point at /var/lib/postgresql/data works as expected and the data endures, but I don't know what to do now in GKE. How does one set this up properly?
Setup file:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sm-pd-volume-claim
spec:
  storageClassName: "standard"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1G
---
apiVersion: "apps/v1"
kind: "StatefulSet"
metadata:
  name: "postgis-db"
  namespace: "default"
  labels:
    app: "postgis-db"
spec:
  serviceName: "postgis-db"
  replicas: 1
  selector:
    matchLabels:
      app: "postgis-db"
  template:
    metadata:
      labels:
        app: "postgis-db"
    spec:
      terminationGracePeriodSeconds: 25
      containers:
        - name: "postgis"
          image: "mdillon/postgis"
          ports:
            - containerPort: 5432
              name: postgis-port
          volumeMounts:
            - name: sm-pd-volume
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: sm-pd-volume
          persistentVolumeClaim:
            claimName: sm-pd-volume-claim
You are getting this error because the postgres pod is trying to use the root of the mounted volume directly as its data directory, which is not recommended (a freshly formatted volume contains a lost+found directory there, as the warning says).
You have to create a subdirectory on the volume, using subPath in the StatefulSet manifest, to resolve this issue:
volumeMounts:
  - name: sm-pd-volume
    mountPath: /var/lib/postgresql/data
    subPath: data
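To verify the fix, you can list the mounted directory inside the pod (the pod name follows the StatefulSet naming convention, so with one replica it should be postgis-db-0):
kubectl exec -it postgis-db-0 -- ls /var/lib/postgresql/data
# the Postgres data files now live in the "data" subdirectory of the volume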

Minio data does not persist through reboot

I deployed Minio on Kubernetes on an Ubuntu desktop. It works fine, except that whenever I reboot the machine, everything that was stored in Minio mysteriously disappears (if I create several buckets with files in them, I come back to a completely blank slate after the reboot - the buckets, and all their files, are completely gone).
When I set up Minio, I created a persistent volume in Kubernetes which mounts to a folder (/mnt/minio/minio - I have a 4 TB HDD mounted at /mnt/minio with a folder named minio inside it). I noticed that this folder seems to be empty even when I store stuff in Minio, so perhaps Minio is ignoring the persistent volume and using the container storage? However, I don't know why this would be happening; I have both a PV and a PVC, and kubectl shows that they are bound to each other.
Below are the yaml files I applied to deploy my minio installation:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: minio-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/minio/minio"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim
  labels:
    app: minio-storage-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 99Gi
apiVersion: apps/v1 # for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio-deployment
spec:
  selector:
    matchLabels:
      app: minio
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # Label is used as selector in the service.
        app: minio
    spec:
      # Refer to the PVC created earlier
      volumes:
      - name: storage
        persistentVolumeClaim:
          # Name of the PVC created earlier
          claimName: minio-pv-claim
      containers:
      - name: minio
        # Pulls the default Minio image from Docker Hub
        image: minio/minio:latest
        args:
        - server
        - /storage
        env:
        # Minio access key and secret key
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
          hostPort: 9000
        # Mount the volume into the pod
        volumeMounts:
        - name: storage # must match the volume name, above
          mountPath: "/mnt/minio/minio"
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: LoadBalancer
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app: minio
Minio writes its data to the directory passed to the server argument (/storage in your case), but your persistent volume is mounted at /mnt/minio/minio inside the container, so nothing Minio stores ever reaches it. Point the server argument at a path under the mount instead:
args:
- server
- /mnt/minio/minio/storage
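Alternatively (an equivalent sketch based on your Deployment above), you could keep the server argument as /storage and change the volumeMount so the claim is mounted at that path instead:
volumeMounts:
- name: storage
  mountPath: "/storage"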
But consider deploying with a StatefulSet instead, so that when your pod restarts it retains the storage of the previous pod.
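For illustration, a minimal sketch of such a StatefulSet with a volumeClaimTemplate (the object name, service name, storage class and size are assumptions loosely based on your manifests, not a tested setup):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: minio
spec:
  serviceName: minio-service   # a headless Service is normally used here for stable network identity
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
      - name: minio
        image: minio/minio:latest
        args:
        - server
        - /storage
        env:
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: storage
          mountPath: /storage   # matches the path passed to "server"
  volumeClaimTemplates:
  - metadata:
      name: storage
    spec:
      storageClassName: manual
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 99Gi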

How to mount data file in kubernetes via pvc?

I want to persist a data file via a PVC with GlusterFS in Kubernetes. When I mount the directory it works, but when I try to mount the file it fails, because the file gets mounted as a directory. How can I mount a single data file in k8s?
how can I mount the data file in k8s ?
This is often application specific and there are several ways to do so, but mainly you want to read about subPath.
Generally, you can choose to:
Use subPath to separate config files.
Mount the volume/path as a directory at some other location and then link the file to the specific place within the pod (for rare cases where mixing with other config files or directory permissions in the same dir presents an issue, or where the boot/start policy of the application prevents files from being mounted at pod start but requires them to be present after some initialization is performed - really edge cases).
Use ConfigMaps (or even Secrets) to hold configuration files. Note that when using subPath with a ConfigMap or Secret, the pod won't get updates to them automatically, but this is the more common way of handling configuration files, and your conf/interpreter.json looks like a fine example...
Notes to keep in mind:
Mounting is "overlaping" underlying path, so you have to mount file up to the point of file in order to share its folder with other files. Sharing up to a folder would get you folder with single file in it which is usually not what is required.
If you use ConfigMaps then you have to reference individual file with subPath in order to mount it, even if you have a single file in ConfigMap. Something like this:
containers:
  - volumeMounts:
      - name: my-config
        mountPath: /my-app/my-config.json
        subPath: config.json
volumes:
  - name: my-config
    configMap:
      name: cm-my-config-map-example
Edit:
Full example of mounting a single example-script.sh script file into the /bin directory of a container using a ConfigMap.
You can adjust this example to place any file, with any permissions, in any desired folder. Replace my-namespace with any namespace you like (or remove it completely to use the default one).
Config map:
kind: ConfigMap
apiVersion: v1
metadata:
  namespace: my-namespace
  name: cm-example-script
data:
  example-script.sh: |
    #!/bin/bash
    echo "Yaaaay! It's an example!"
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: my-namespace
  name: example-deployment
  labels:
    app: example-app
spec:
  selector:
    matchLabels:
      app: example-app
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - image: ubuntu:16.04
          name: example-app-container
          stdin: true
          tty: true
          volumeMounts:
            - mountPath: /bin/example-script.sh
              subPath: example-script.sh
              name: example-script
      volumes:
        - name: example-script
          configMap:
            name: cm-example-script
            defaultMode: 0744
Full example of mounting a single test.txt file into the /bin directory of a container using a persistent volume (the file already exists in the root of the volume).
If you wish to mount from a persistent volume instead of a ConfigMap, here is another example that mounts it in much the same way (test.txt is mounted at /bin/test.txt)... Note two things: test.txt must already exist on the PV, and I'm using a StatefulSet just to get an automatically provisioned PVC; you can adjust accordingly...
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: my-namespace
  name: ss-example-file-mount
spec:
  serviceName: svc-example-file-mount
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - image: ubuntu:16.04
          name: example-app-container
          stdin: true
          tty: true
          volumeMounts:
            - name: persistent-storage-example
              mountPath: /bin/test.txt
              subPath: test.txt
  volumeClaimTemplates:
    - metadata:
        name: persistent-storage-example
      spec:
        storageClassName: sc-my-storage-class-for-provisioning-pv
        accessModes: [ ReadWriteOnce ]
        resources:
          requests:
            storage: 2Gi