I am a newbie here, but I have a use case where I need to mount the same path to two different PVs. Whenever I try to use the same path, my pod doesn't come up. Check the mount paths below:
- name: xxx
  mountPath: "/home/{username}"
  readOnly: false
static:
  pvcName:
  subPath: '{username}'
capacity: 10Gi
homeMountPath: '/home/{username}'
dynamic:
  storageClass: nfs-client
  pvcNameTemplate: claim-{username}{servername}
  volumeNameTemplate: volume-{username}{servername}
  storageAccessModes: [ReadWriteOnce]
But after changing the mount path, the pod comes up without any issue. For example:
mountPath: "/home/test/{username}"
Is there something I am missing?
If you want to mount a single pod with multiple PVCs, it won't be possible unless you use ReadWriteMany.
But if you want to mount multiple paths into a pod, that's possible using a single PVC or multiple PVCs, depending on the use case.
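For reference, a minimal sketch of a PVC requesting ReadWriteMany (the claim name is a placeholder; whether your nfs-client storage class supports ReadWriteMany is an assumption, though NFS-backed provisioners usually do):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-claim        # placeholder name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 10Gi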
Single PVC, single pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-test-pod
spec:
  containers:
    - name: mysql
      image: mysql
      env:
        - name: MYSQL_HOST
          value: "IP"
      volumeMounts:
        - mountPath: /var/lib/mysql
          name: data
          subPath: path1
    - name: nginx
      image: nginx
      volumeMounts:
        - mountPath: /var/www/html
          name: data
          subPath: path2
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-site-data
Multiple PVCs and volume paths in a single pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  volumes:
    # List of volumes
    - name: volume1
      < volume details, see below >
    - name: volume2
      < volume details, see below >
  containers:
    - name: mycontainer
      volumeMounts:
        # Path
        # will mount 'volume1' into /var/www/html
        - name: volume1
          mountPath: /var/www/html
        # will mount 'volume2' into /var/log
        - name: volume2
          mountPath: /var/log/
Reference: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/getting_started_with_kubernetes/get_started_provisioning_storage_in_kubernetes#example
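To make the volume placeholders in the second example concrete, here is a minimal sketch backing each volume with its own PVC (claim1 and claim2 are placeholder claim names):
  volumes:
    - name: volume1
      persistentVolumeClaim:
        claimName: claim1
    - name: volume2
      persistentVolumeClaim:
        claimName: claim2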
I'm running multiple containers in a pod. I have a persistent volume and I'm mounting the same directories into the containers.
My requirement is:
mount /opt/app/logs/app.log to container A, where the application writes data to app.log
mount /opt/app/logs/app.log to container B, to read data back from app.log
- container-A
  image: nginx
  volumeMounts:
    - mountPath: /opt/app/logs/   => container A writes data here to the **app.log** file
      name: data
- container-B
  image: busybox
  volumeMounts:
    - mountPath: /opt/app/logs/   => container B reads data from **app.log**
      name: data
The issue I'm facing is that when I mount the same directory /opt/app/logs/ into container-B, I'm not seeing the app.log file.
Can someone help me with this, please? This should be achievable, but I'm not sure what I'm missing here.
According to your requirements, you need something like below:
- container-A
  image: nginx
  volumeMounts:
    - mountPath: /opt/app/logs
      name: data
- container-B
  image: busybox
  volumeMounts:
    - mountPath: /opt/app/logs
      name: data
Your application running in container-A will create or write files at the given path (/opt/app/logs), say the app.log file. Then from container-B you'll find the app.log file at the same path (/opt/app/logs). You can use any path here.
In your given spec you actually tried to mount a directory onto a file (app.log). I think that's what is creating the issue.
Update-1:
Here I give a full YAML file from a working example. You can try it yourself to see how things work:
kubectl exec -ti test-pd -c test-container sh
go to /test-path1
create a file with the touch command, say "touch a.txt"
exit from test-container
kubectl exec -ti test-pd -c test sh
go to /test-path2
you will find the a.txt file there
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pv-claim
spec:
  storageClassName:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - image: nginx
      name: test-container
      volumeMounts:
        - mountPath: /test-path1
          name: test-volume
    - image: pkbhowmick/go-rest-api:2.0.1 # my-rest-api-server
      name: test
      volumeMounts:
        - mountPath: /test-path2
          name: test-volume
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: test-pv-claim
From your post it seems you're using two separate paths.
Container B is mounted to /opt/app/logs/logs.
Have different file names for each of your containers and also fix the mount path in the container config. Please use this as an example:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
        - mountPath: /test-pd
          name: test-volume
  volumes:
    - name: test-volume
      hostPath:
        # directory location on host
        path: /data
        # this field is optional
        type: Directory
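Applied to your two containers, a minimal sketch, assuming both mount the same data volume at one path and write to different file names (app-a.log and app-b.log are made-up names):
  containers:
    - name: container-A
      image: nginx
      volumeMounts:
        - mountPath: /opt/app/logs
          name: data        # container A writes /opt/app/logs/app-a.log
    - name: container-B
      image: busybox
      volumeMounts:
        - mountPath: /opt/app/logs
          name: data        # container B reads app-a.log and writes its own app-b.log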
I am trying to configure MongoDB Ops Manager on Kubernetes. I have a PersistentVolumeClaim based on dynamic provisioning (backed by Ceph) and configured it successfully. What I am trying to do is define the volume mounts and volumes in the MongoDBOpsManager YAML file. I have tried different things but couldn't define them.
Here is my MongoDBOpsManager YAML file:
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager
  namespace: mongodb
# podSpec:
#   podTemplate:
#     spec:
#       containers:
#         - name: mongodb-enterprise-database
#           volumeMounts:
#             - name: mongo-persistent-storage
#               mountPath: /data/db
#       volumes:
#         - name: mongo-persistent-storage
#           persistentVolumeClaim:
#             claimName: mongo-pvc
spec:
  # the version of Ops Manager distro to use
  version: 4.2.4
  containers:
    - name: mongodb-ops-manager
      volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
  volumes:
    - name: mongo-persistent-storage
      persistentVolumeClaim:
        claimName: mongo-pvc
  # the name of the secret containing admin user credentials.
  adminCredentials: ops-manager-admin-secret
  externalConnectivity:
    type: NodePort
  # the Replica Set backing Ops Manager.
  # appDB has the SCRAM-SHA authentication mode always enabled
  applicationDatabase:
    members: 3
    statefulSet:
      spec:
        # volumeClaimTemplates:letsChangeTheWorld1
        template:
          spec:
            containers:
              - name: mongodb-ops-manager
                volumeMounts:
                  - name: mongo-persistent-storage
                    mountPath: /data/db
            volumes:
              - name: mongo-persistent-storage
                persistentVolumeClaim:
                  claimName: mongo-pvc
I don't know where I should put the volume mounts and volume definitions.
The Ops Manager (om) is created successfully, but when I check the pod created for it, I find this error:
running "VolumeBinding" filter plugin for pod "ops-manager-db-0": pod has unbound immediate PersistentVolumeClaims
spec:
  containers:
    - image:
      ....
      volumeMounts:
      .....
    - image:
      ....
      volumeMounts:
      ......
  volumes:
    - name:
The volumes tag should come parallel to containers.
Volumes are defined globally for all containers, and mounts are specific to containers.
Example: https://kubernetes.io/docs/concepts/storage/volumes/
Check with this once
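Filling the skeleton above with the names from your file gives a sketch like this (the image value is a placeholder, and whether the Ops Manager operator accepts a plain pod spec at this level is an assumption):
spec:
  containers:
    - name: mongodb-ops-manager
      image: <ops-manager-image>   # placeholder
      volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
  volumes:
    - name: mongo-persistent-storage
      persistentVolumeClaim:
        claimName: mongo-pvc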
Below is the deployment YAML. After deployment, I can access the pod
and I can see the mountPath "/usr/share/nginx/html", but I cannot find
"/work-dir", which should have been created by the initContainer.
Could someone explain the reason to me?
Thanks and regards
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: workdir
          mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
    - name: install
      image: busybox
      command:
        - wget
        - "-O"
        - "/work-dir/index.html"
        - http://kubernetes.io
      volumeMounts:
        - name: workdir
          mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
    - name: workdir
      emptyDir: {}
The volume at "/work-dir" is mounted by the init container, and the "/work-dir" location only exists in the init container. When the init container completes, its filesystem is gone, so the "/work-dir" directory in that init container is gone with it. The application (nginx) container mounts the same volume, too (albeit at a different location), providing a mechanism for the two containers to share its content.
Per the docs:
Init containers can run with a different view of the filesystem than
app containers in the same Pod.
The volume mount with a PVC allows you to share the contents of /work-dir/ and /usr/share/nginx/html/, but it does not mean the nginx container will have the /work-dir folder. Given this, you might think you could just mount the path /, which would allow you to share all folders underneath. However, mountPath does not work for /.
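A quick way to see the sharing from your pod spec (pod init-demo, container nginx):
kubectl exec init-demo -c nginx -- ls /usr/share/nginx/html
# expect index.html here: the init container wrote it to /work-dir, and the nginx
# container sees it through the shared emptyDir volume mounted at /usr/share/nginx/html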
So, how do you solve your problem? You could have another pod mount /work-dir/ in case you actually need the folder. Here is an example (a PVC and a Deployment with mounts):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-fs-pvc
  namespace: default
  labels:
    mojix.service: default-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: shared-fs
  labels:
    mojix.service: shared-fs
spec:
  replicas: 1
  selector:
    matchLabels:
      mojix.service: shared-fs
  template:
    metadata:
      creationTimestamp: null
      labels:
        mojix.service: shared-fs
    spec:
      terminationGracePeriodSeconds: 3
      containers:
        - name: nginx-c
          image: nginx:latest
          volumeMounts:
            - name: shared-fs-volume
              mountPath: /var/www/static/
        - name: alpine-c
          image: alpine:latest
          command: ["/bin/sleep", "10000s"]
          lifecycle:
            postStart:
              exec:
                command: ["/bin/mkdir", "-p", "/work-dir"]
          volumeMounts:
            - name: shared-fs-volume
              mountPath: /work-dir/
      volumes:
        - name: shared-fs-volume
          persistentVolumeClaim:
            claimName: shared-fs-pvc
I have got local persistent volumes to work, using local directories as mount points, a storage class, PVCs, etc., all following the standard documentation.
However, when I use this PVC in a Pod, all the files are created at the base of the mount point, i.e. if /data is my mount point, all my application files are stored in the /data folder. I can see this creating conflicts in the future, with more than one application writing to the same folder.
I'm looking for suggestions or advice on making each PVC, or even the application files of a Pod, go into separate directories in the PV.
If you store your data in different directories on your volume, you can use subPath to separate the data, with one mount point per directory.
E.g.:
apiVersion: v1
kind: Pod
metadata:
  name: podname
spec:
  containers:
    - name: containername
      image: imagename
      volumeMounts:
        - mountPath: /path/to/mount/point
          name: volumename
          subPath: volume_subpath
        - mountPath: /path/to/mount/point2
          name: volumename
          subPath: volume_subpath2
  volumes:
    - name: volumename
      persistentVolumeClaim:
        claimName: pvcname
Another approach is using subPathExpr.
Note:
The subPath and subPathExpr properties are mutually exclusive
apiVersion: v1
kind: Pod
metadata:
  name: pod3
spec:
  containers:
    - name: pod3
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
      image: busybox
      command: [ "sh", "-c", "while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt" ]
      volumeMounts:
        - name: workdir1
          mountPath: /logs
          subPathExpr: $(POD_NAME)
  restartPolicy: Never
  volumes:
    - name: workdir1
      persistentVolumeClaim:
        claimName: pvc1
As described here.
In addition, please see Fixing the Subpath Volume Vulnerability in Kubernetes here and here.
You can simply change the mount path and separate each application's mount path so that each Pod's files go into separate directories.
I'm having a hard time configuring mountPath as a relative path.
Let's say I'm running the deployment from the /user/app folder and I want to create the secret file under /user/app/secret/secret-volume, as follows:
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      volumeMounts:
        # name must match the volume name below
        - name: secret-volume
          mountPath: secret/secret-volume
  # The secret data is exposed to Containers in the Pod through a Volume.
  volumes:
    - name: secret-volume
      secret:
        secretName: test-secret
For some reason the file secret-volume is created in the root directory /secret/secret-volume.
It's because you have mountPath: secret/secret-volume. Change it to mountPath: /user/app/secret/secret-volume.
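A minimal sketch of the corrected mount, reusing the names from your spec:
      volumeMounts:
        - name: secret-volume
          mountPath: /user/app/secret/secret-volume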
Check the documentation here.