How do I mount a persistent volume? How do I reference it?
Here is a pod config:
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
Taken from: https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-pod. To actually use the volume inside the container, add a volumeMounts entry that references the volume by name:
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
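Note that claimName must refer to a PersistentVolumeClaim that already exists in the same namespace as the pod. A minimal claim to back the pod above might look like this (the name task-pv-claim matches the config; the access mode and requested size are assumptions to adapt to your cluster):

```yaml
# Hypothetical PVC to back the pod above; adjust accessModes,
# storage size, and storageClassName to what your cluster provides.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
```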
Related
I have a StatefulSet with a volume that uses subPath: $(POD_NAME). I've also tried $HOSTNAME, which doesn't work either. How does one set the subPath of a volumeMount to the name of the pod, or to $HOSTNAME?
Here's what I have:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ravendb
  namespace: pltfrmd
  labels:
    app: ravendb
spec:
  serviceName: ravendb
  template:
    metadata:
      labels:
        app: ravendb
    spec:
      containers:
        - command:
            # ["/bin/sh", "-ec", "while :; do echo '.'; sleep 6 ; done"]
            - /bin/sh
            - -c
            - /opt/RavenDB/Server/Raven.Server --log-to-console --config-path /configuration/settings.json
          image: ravendb/ravendb:latest
          imagePullPolicy: Always
          name: ravendb
          env:
            - name: POD_HOST_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: RAVEN_Logs_Mode
              value: Information
          ports:
            - containerPort: 8080
              name: http-api
              protocol: TCP
            - containerPort: 38888
              name: tcp-server
              protocol: TCP
            - containerPort: 161
              name: snmp
              protocol: TCP
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /data
              name: data
              subPath: $(POD_NAME)
            - mountPath: /configuration
              name: configuration
              subPath: ravendb
            - mountPath: /certificates
              name: certificates
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 120
      volumes:
        - name: certificates
          secret:
            secretName: ravendb-certificate
        - name: configuration
          persistentVolumeClaim:
            claimName: configuration
        - name: data
          persistentVolumeClaim:
            claimName: ravendb
And the Persistent Volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: pltfrmd
  name: ravendb
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /volumes/ravendb
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: pltfrmd
  name: ravendb
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
$HOSTNAME used to work, but doesn't anymore for some reason. I'm wondering if it's a bug in the hostPath storage provider?
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.10
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx
              mountPath: /usr/share/nginx/html
              subPath: $(POD_NAME)
      volumes:
        - name: nginx
          configMap:
            name: nginx
OK, so after much experimentation I found a way that still works.
Step one, map an environment variable:
env:
  - name: POD_HOST_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
This exposes the pod's metadata.name field as the environment variable POD_HOST_NAME.
Then in your mount you do this:
volumeMounts:
  - mountPath: /data
    name: data
    subPathExpr: $(POD_HOST_NAME)
It's important to use subPathExpr, as subPath (which worked before) no longer expands variables. With subPathExpr, Kubernetes expands the environment variable you created and uses it as the subdirectory for the mount. Note that subPathExpr requires a reasonably recent Kubernetes version; it graduated to stable in 1.17.
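Putting the two fragments together, a container spec using this pattern looks roughly like the following sketch (the container name and image are placeholders, and the data volume is assumed to be declared elsewhere in the pod spec):

```yaml
containers:
  - name: app                          # placeholder name
    image: ravendb/ravendb:latest
    env:
      - name: POD_HOST_NAME            # expose the pod name as an env var
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
    volumeMounts:
      - mountPath: /data
        name: data
        subPathExpr: $(POD_HOST_NAME)  # expanded per pod at mount time
```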
I have a scenario where I mount two dynamically provisioned PVCs in Minikube. When I tried the manifest below, I can see only one volume mounted in the pod, even though kubectl describe pod shows both volumes bound. I'm not sure what the cause is.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: niranjan-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: praveen-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
kind: Pod
apiVersion: v1
metadata:
  name: praveen-pod
spec:
  volumes:
    - name: niranjan-pv
      persistentVolumeClaim:
        claimName: niranjan-pvc
    - name: praveen-pv
      persistentVolumeClaim:
        claimName: praveen-pvc
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/tmp/niranjan-mount"
          name: niranjan-pv
      volumeMounts:
        - mountPath: "/tmp/praveen-mount"
          name: praveen-pv
Screenshots: https://i.stack.imgur.com/uyNKg.png, https://i.stack.imgur.com/3nDPl.png, https://i.stack.imgur.com/oOrq2.png
For the volumeMounts, you should specify the key only once, with all mounts listed in a single array:
kind: Pod
apiVersion: v1
metadata:
  name: praveen-pod
spec:
  volumes:
    - name: niranjan-pv
      persistentVolumeClaim:
        claimName: niranjan-pvc
    - name: praveen-pv
      persistentVolumeClaim:
        claimName: praveen-pvc
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/tmp/niranjan-mount"
          name: niranjan-pv
        - mountPath: "/tmp/praveen-mount"
          name: praveen-pv
Otherwise, the second volumeMounts key overrides the first one, and you end up with only one mounted volume.
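The underlying mechanism is ordinary mapping semantics: when the same key appears twice in a mapping, the later value silently replaces the earlier one. A small illustration (this is not the Kubernetes parser itself, just the same last-key-wins behavior shown with a Python dict):

```python
# A repeated key in a mapping: the last value silently replaces
# the earlier one, just like the duplicated volumeMounts key.
container = {
    "volumeMounts": [{"mountPath": "/tmp/niranjan-mount", "name": "niranjan-pv"}],
    "volumeMounts": [{"mountPath": "/tmp/praveen-mount", "name": "praveen-pv"}],
}

# Only the second list survives; the niranjan mount is gone.
print(container["volumeMounts"])
# [{'mountPath': '/tmp/praveen-mount', 'name': 'praveen-pv'}]
```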
I'm just curious what the differences are between these two ways of defining a ConfigMap in the volumes section.
P.S. test-config contains a config.json file.
newpod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: configmappod
spec:
  containers:
    - name: configmapcontainer
      image: blue
      volumeMounts:
        - name: config-vol
          mountPath: "/config/newConfig.json"
          subPath: "config.json"
          readOnly: true
  volumes:
    - name: config-vol
      projected:
        sources:
          - configMap:
              name: test-config
              items:
                - key: config.json
                  path: config.json
newpod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmappod
spec:
  containers:
    - name: configmapcontainer
      image: blue
      volumeMounts:
        - name: config-vol
          mountPath: "/config/newConfig.json"
          subPath: "config.json"
          readOnly: true
  volumes:
    - name: config-vol
      configMap:
        name: test-config
No difference; they yield the same result. By the way, the readOnly attribute is redundant; it has no effect in either case.
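Where projected does make a difference is when you want several sources combined into a single volume; with only one configMap source, as above, it is equivalent to a plain configMap volume. A hedged sketch of the multi-source case (the secret name test-secret is hypothetical):

```yaml
# Sketch only: a projected volume merging a ConfigMap and a Secret
# into one mount. "test-secret" is a hypothetical name.
volumes:
  - name: config-vol
    projected:
      sources:
        - configMap:
            name: test-config
        - secret:
            name: test-secret
```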
I'm trying to deploy a container, but unfortunately I get an error when I run kubectl apply -f *.yaml.
The error is:
error validating data: ValidationError(Pod.spec.containers[1]):
unknown field "persistentVolumeClaim" in io.k8s.api.core.v1.Container;
I don't understand why I get the error, because I wrote claimName: under persistentVolumeClaim: in my pod.yaml config :(
Pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: karafpod
spec:
  containers:
    - name: karaf
      image: xxx/karaf:ids-1.1.0
      volumeMounts:
        - name: karaf-conf-storage
          mountPath: /apps/karaf/etc
    - name: karaf-conf-storage
      persistentVolumeClaim:
        claimName: karaf-conf-claim
PersistentVolumeClaimKaraf.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: karaf-conf-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
Deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: karaf
  namespace: poc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: karaf
    spec:
      containers:
        - name: karaf
          image: "xxx/karaf:ids-1.1.0"
          imagePullPolicy: Always
          ports:
            - containerPort: 6443
            - containerPort: 6100
            - containerPort: 6101
          resources:
          volumeMounts:
            - mountPath: /apps/karaf/etc
              name: karaf-conf
      volumes:
        - name: karaf-conf
          persistentVolumeClaim:
            claimName: karaf-conf
The reason you're seeing that error is that you specified a persistentVolumeClaim inside your pod spec's container list. As you can see from the auto-generated docs here: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#container-v1-core
persistentVolumeClaim isn't a supported field on the Container API object, which is what produces the error you're seeing.
You should modify the pod.yaml to specify it as a volume instead.
e.g.:
apiVersion: v1
kind: Pod
metadata:
  name: karafpod
spec:
  containers:
    - name: karaf
      image: xxx/karaf:ids-1.1.0
      volumeMounts:
        - name: karaf-conf-storage
          mountPath: /apps/karaf/etc
  volumes:
    - name: karaf-conf-storage
      persistentVolumeClaim:
        claimName: karaf-conf-claim
According to the Kubernetes documentation, persistentVolumeClaim belongs at the .spec.volumes level of a pod object, not inside .spec.containers.
The correct pod.yaml is:
apiVersion: v1
kind: Pod
metadata:
  name: karafpod
spec:
  volumes:
    - name: efgkaraf-conf-storage
      persistentVolumeClaim:
        claimName: efgkaraf-conf-claim
  containers:
    - name: karaf
      image: docker-all.attanea.net/library/efgkaraf:ids-1.1.0
      volumeMounts:
        - name: efgkaraf-conf-storage
          mountPath: /apps/karaf/etc
The following is the k8s definition used:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pv-provisioning-demo
  labels:
    demo: nfs-pv-provisioning
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 200Gi
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-server
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  replicas: 1
  selector:
    role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
        - name: nfs-server
          image: k8s.gcr.io/volume-nfs:0.8
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /exports
              name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: nfs-pv-provisioning-demo
---
kind: Service
apiVersion: v1
metadata:
  name: nfs-server
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: nfs-server
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
---
# This mounts the nfs volume claim into /mnt and continuously
# overwrites /mnt/index.html with the time and hostname of the pod.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-busybox
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  replicas: 2
  selector:
    name: nfs-busybox
  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      containers:
        - image: busybox
          command:
            - sh
            - -c
            - 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done'
          imagePullPolicy: IfNotPresent
          name: busybox
          volumeMounts:
            # name must match the volume name below
            - name: nfs
              mountPath: "/mnt"
      volumes:
        - name: nfs
          persistentVolumeClaim:
            claimName: nfs
Now the /mnt directory in nfs-busybox should have 2000 as its gid (as per the docs), but it still has root as both user and group. Since the application runs as 1000/2000, it isn't able to create any logs or data in the /mnt directory.
chmod might solve the issue, but it looks like a workaround. Is there a permanent solution for this?
Observation: if I replace nfs with some other PVC, it works fine as described in the docs.
Have you tried the initContainers method? It fixes the permissions on the exported directory:
initContainers:
  - name: volume-mount-hack
    image: busybox
    command: ["sh", "-c", "chmod -R 777 /exports"]
    volumeMounts:
      - name: nfs
        mountPath: /exports
If you use a standalone NFS server on a Linux box, I suggest using the no_root_squash option:
/exports *(rw,no_root_squash,no_subtree_check)
To manage the directory permissions on the nfs-server, you need to change the security context and raise it to privileged mode:
apiVersion: v1
kind: Pod
metadata:
  name: nfs-server
  labels:
    role: nfs-server
spec:
  containers:
    - name: nfs-server
      image: nfs-server
      ports:
        - name: nfs
          containerPort: 2049
      securityContext:
        privileged: true