I'm trying to share a CephFS volume between namespaces within a Kubernetes cluster. I'm using ceph-csi with CephFS.
I followed https://github.com/ceph/ceph-csi/blob/devel/docs/static-pvc.md#cephfs-static-pvc to create a static PV+PVC in both namespaces.
This works as long as I don't launch both pods on the same node. If both pods land on the same node, the second pod gets stuck with this error:
MountVolume.SetUp failed for volume "team-test-vol-pv" : rpc error: code = Internal desc = failed to bind-mount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/team-test-vol-pv/globalmount to /var/lib/kubelet/pods/007fc605-7fa4-4dc6-890f-fc0dabe5740b/volumes/kubernetes.io~csi/team-test-vol-pv/mount: an error (exit status 32) occurred while running mount args: [-o bind,_netdev /var/lib/kubelet/plugins/kubernetes.io/csi/pv/team-test-vol-pv/globalmount /var/lib/kubelet/pods/007fc605-7fa4-4dc6-890f-fc0dabe5740b/volumes/kubernetes.io~csi/team-test-vol-pv/mount
Any ideas how to resolve this, or how to use a single RWX volume in different namespaces?
PV+PVC for team-x:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-vol
  namespace: team-x
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeMode: Filesystem
  # volumeName should be same as PV name
  volumeName: team-x-test-vol-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: team-x-test-vol-pv
spec:
  claimRef:
    namespace: team-x
    name: test-vol
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  csi:
    driver: cephfs.csi.ceph.com
    nodeStageSecretRef:
      name: csi-cephfs-secret-hd
      namespace: ceph-csi
    volumeAttributes:
      "clusterID": "cd79ae11-1804-4c06-a97e-aeeb961b84b0"
      "fsName": "cephfs"
      "staticVolume": "true"
      "rootPath": /volumes/team/share/8b73d3bb-282e-4c32-b13a-97459419bd5b
    # volumeHandle can be anything, need not to be same
    # as PV name or volume name. keeping same for brevity
    volumeHandle: team-share
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
PV+PVC for team-y:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-vol
  namespace: team-y
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeMode: Filesystem
  # volumeName should be same as PV name
  volumeName: team-y-test-vol-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: team-y-test-vol-pv
spec:
  claimRef:
    namespace: team-y
    name: test-vol
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  csi:
    driver: cephfs.csi.ceph.com
    nodeStageSecretRef:
      name: csi-cephfs-secret-hd
      namespace: ceph-csi
    volumeAttributes:
      "clusterID": "cd79ae11-1804-4c06-a97e-aeeb961b84b0"
      "fsName": "cephfs"
      "staticVolume": "true"
      "rootPath": /volumes/team-y/share/8b73d3bb-282e-4c32-b13a-97459419bd5b
    # volumeHandle can be anything, need not to be same
    # as PV name or volume name. keeping same for brevity
    volumeHandle: team-share
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
Making volumeHandle unique for each PV did the trick. Tested by deploying 3 DaemonSets in 3 different namespaces.
You might need to provide the ReadWriteMany access mode.
Reference link: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
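For illustration, a minimal sketch of the only field that had to change; the handle values below are arbitrary examples, anything works as long as the two PVs don't share the same volumeHandle:
# csi block of team-x-test-vol-pv
  csi:
    driver: cephfs.csi.ceph.com
    volumeHandle: team-x-share   # unique per PV
# csi block of team-y-test-vol-pv
  csi:
    driver: cephfs.csi.ceph.com
    volumeHandle: team-y-share   # unique per PV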
Related
I'm trying to deploy a Helm chart for Postgres on AWS Fargate, but I receive this error:
Error: INSTALLATION FAILED: PersistentVolume "postgres-pv-volume" is invalid: spec.csi: Forbidden: may not specify more than 1 volume type
PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv-volume
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: {id}
  hostPath:
    path: '/mnt/data'
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
spec:
  storageClassName: efs-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
storageClass
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
In Deployment:
spec:
  volumes:
    - name: postgresdb
      persistentVolumeClaim:
        claimName: postgres-pv-claim
  ...
  volumeMounts:
    - mountPath: /var/lib/postgresql/data
      name: postgresdb
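The Forbidden: may not specify more than 1 volume type message is triggered because the PV declares both a csi source and a hostPath source; a PV can carry only one volume source. A hedged sketch of the same PV with only the csi block kept ({id} is still your EFS filesystem ID placeholder):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv-volume
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: {id}
  # hostPath removed: a PersistentVolume may reference only one volume source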
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ****-pv-public
  namespace: ****
spec:
  storageClassName: efs-sc
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-***
    volumeAttributes:
      path: /***/public
Mounting arguments: -t efs -o tls fs-2f974c54:/****/public /var/lib/kubelet/pods/9784d80e-4678-4b0b-96ae-a5cccf7db7a0/volumes/kubernetes.io~csi/******/mount
Output: Could not start amazon-efs-mount-watchdog, unrecognized init system "aws-efs-csi-dri"
b'mount.nfs4: mounting 127.0.0.1:/****/public failed, reason given by server: No such file or directory'
Here is how I fixed it:
First, create an access point, then reference it in the PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: **-pv-public
  namespace: laravel-test
spec:
  storageClassName: efs-sc
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-**::fsap-***
Note that the volumeHandle is fs-**::fsap-*** with a double colon (::), not a single colon (:).
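For reference, as I understand the aws-efs-csi-driver's static provisioning, the volumeHandle is composed as FileSystemId:Subpath:AccessPointId, which is why the subpath slot stays empty between the two colons when only an access point is used. A sketch with made-up placeholder IDs:
  csi:
    driver: efs.csi.aws.com
    # filesystem only:           volumeHandle: fs-12345678
    # filesystem + subpath:      volumeHandle: fs-12345678:/some/path
    # filesystem + access point: volumeHandle: fs-12345678::fsap-0123456789abcdef0
    volumeHandle: fs-12345678::fsap-0123456789abcdef0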
Does the subdirectory you're mounting exist? Also, can you try adding it like this?
As per the example, the path should exist.
Replace FileSystemId with the ID of the EFS filesystem that needs to be mounted, and replace Path with an existing path on the filesystem.
You can refer to this link:
https://github.com/kubernetes-sigs/aws-efs-csi-driver/tree/master/examples/kubernetes/volume_path
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-****-pv
spec:
  capacity:
    storage: 15Gi
  volumeMode: Filesystem
  mountOptions:
    - tls
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-*****:/****
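To use this PV you still need a claim that binds to it; a minimal sketch, assuming a hypothetical claim name efs-claim and the efs-sc StorageClass from the example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  storageClassName: efs-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 15Gi
  # optionally pin the claim to the PV above with volumeName: efs-****-pv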
I've created a GCP disk from a snapshot and now I'm trying to resize it using a PVC in Kubernetes: 100GB -> 400GB. I've applied:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: restored-resize
parameters:
  type: pd-standard
provisioner: kubernetes.io/gce-pd
allowVolumeExpansion: true
reclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: restored-graphite
spec:
  storageClassName: restored-resize
  capacity:
    storage: 400G
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: dev-restored-graphite
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-graphite
spec:
  # It's necessary to specify "" as the storageClassName
  # so that the default storage class won't be used, see
  # https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1
  storageClassName: restored-resize
  volumeName: restored-graphite
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 400G
Status in PVC shows 400G:
(...)
status:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 400G
  phase: Bound
However, the pod still mounts the previous disk size:
/dev/sdc 98.4G 72.8G 25.6G 74% /opt/graphite/storage
What am I doing wrong?
To me it seems that you set 400G directly on the PV manifest, but as the documentation says, you should have edited only the PVC's
resources:
  requests:
    storage: 400G
https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/
which triggers the new condition FileSystemResizePending.
As of Kubernetes v1.11, such PVCs auto-resize after some time in this state, so you shouldn't even have to restart the pod bound to the PVC.
But, back to your problem: I would edit the manifest this way:
spec:
  storageClassName: restored-resize
  capacity:
    storage: 100G
so that the system reloads the old config and notices that the situation is not what it thinks.
Or at least, that is what I would try (in another environment, certainly not production).
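For example, a hedged sketch of changing only the claim's request from the command line (assuming the PVC is named restored-graphite and sits in the current namespace):
kubectl patch pvc restored-graphite \
  -p '{"spec":{"resources":{"requests":{"storage":"400G"}}}}'
# then watch for the FileSystemResizePending condition
kubectl describe pvc restored-graphite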
I'm trying to create a PersistentVolumeClaim and give it a specific volumeName to use.
I use this code:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: zipkin
  name: pvc-ciro
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-provisioner
  resources:
    requests:
      storage: 0.1Gi
  volumeName: "demo"
If I remove volumeName, the PVC is correctly bound; otherwise it remains in Pending status.
Why?
The volumeName is the name of the PersistentVolume you want to use.
On GKE, a PVC can automatically provision a PV to bind to, or you can specify the name of an existing PV using volumeName.
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ciro
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 0.1Gi
  volumeName: demo
pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: standard
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.17.0.2
And the output will be:
$ kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
demo   5Gi        RWO            Recycle          Bound    default/pvc-ciro   standard                13s

$ kubectl get pvc
NAME       STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-ciro   Bound    demo     5Gi        RWO            standard       8s
You can read more details in Kubernetes documentation regarding Persistent Volumes.
Is it required to create the directory manually on the nodes, or will it be auto-created by the PV?
Here are my PV & PVC files. I'm seeing this error:
no persistent volumes available for this claim and no storage class is set
How do I resolve this?
kind: PersistentVolume
apiVersion: v1
metadata:
  name: zk1-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mr/zk"
cat zk1-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zk1-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
kubectl describe pvc zk1-pvc
Name:          zk1-pvc
Namespace:     instavote
StorageClass:
Status:        Pending
Volume:
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"zk1-pvc","namespace":"instavote"},"spec":{"accessMo...
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
  Type    Reason         Age                  From                         Message
  ----    ------         ----                 ----                         -------
  Normal  FailedBinding  12s (x14 over 3m7s)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
Mounted By:    zk1-745b7cbf46-lg7v9
Back to your main question:
Is it required to create the directory manually in nodes or will it be auto created by pv?
First of all, the error in your output is not related to your question. As an answer to your question: yes, it is created by the PV automatically.
In order to achieve this, you first have to create a StorageClass with no-provisioner, as in the example below:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Then you have to create a PersistentVolume that defines this storageClassName and a hostPath parameter, like below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zk1-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mr/zk
Then you have to create a PVC and a Pod/Deployment, as in the example below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: containerName
      image: gcr.io/google-containers/nginx:1.7.9
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
NOTE:
Don't forget to put the storageClassName: manual parameter in both the PVC and PV manifests. Otherwise they will not be able to bind to each other.
Hope this clears things up.
You forgot to specify storageClassName: manual in the PersistentVolumeClaim.
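For illustration, a minimal sketch of the corrected claim, reusing the zk1-pvc name and the 10Mi request from the question:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zk1-pvc
spec:
  storageClassName: manual   # must match the PV's storageClassName
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi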