kubernetes volume claim pending with specific volume name - kubernetes

I'm trying to create a PersistentVolumeClaim, giving it a specific volumeName to use.
I use this code:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: zipkin
  name: pvc-ciro
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-provisioner
  resources:
    requests:
      storage: 0.1Gi
  volumeName: "demo"
If I remove volumeName the PVC is correctly bound; otherwise it remains in Pending status.
Why?

The volumeName is the name of the PersistentVolume you want to use.
On GKE a PVC can automatically create a PV that it will bind to, or you can specify the name of the PV to use with volumeName.
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ciro
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 0.1Gi
  volumeName: demo
pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: standard
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.17.0.2
And the output will be:
$ kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
demo   5Gi        RWO            Recycle          Bound    default/pvc-ciro   standard                13s
$ kubectl get pvc
NAME       STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-ciro   Bound    demo     5Gi        RWO            standard       8s
You can read more details in Kubernetes documentation regarding Persistent Volumes.
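If a claim stays Pending, kubectl describe usually shows the reason in the Events section. For example, for the claim from the question:
$ kubectl describe pvc pvc-ciro -n zipkin
The events typically indicate whether the named PV does not exist or whether its capacity, access modes, or storage class do not match the claim.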

Related

How to share a cephfs volume between pods in different k8s namespaces

I'm trying to share a cephfs volume between namespaces within a k8s cluster.
I'm using ceph-csi with cephfs.
Followed https://github.com/ceph/ceph-csi/blob/devel/docs/static-pvc.md#cephfs-static-pvc to create static pv+pvc in both namespaces.
It works if I don't launch both pods on the same node.
If both pods are on the same node, the second pod gets stuck with this error:
MountVolume.SetUp failed for volume "team-test-vol-pv" : rpc error: code = Internal desc = failed to bind-mount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/team-test-vol-pv/globalmount to /var/lib/kubelet/pods/007fc605-7fa4-4dc6-890f-fc0dabe5740b/volumes/kubernetes.io~csi/team-test-vol-pv/mount: an error (exit status 32) occurred while running mount args: [-o bind,_netdev /var/lib/kubelet/plugins/kubernetes.io/csi/pv/team-test-vol-pv/globalmount /var/lib/kubelet/pods/007fc605-7fa4-4dc6-890f-fc0dabe5740b/volumes/kubernetes.io~csi/team-test-vol-pv/moun
Any ideas how to resolve this, or how to use a single RWX volume in different namespaces?
PV+PVC for team-x:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-vol
  namespace: team-x
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeMode: Filesystem
  # volumeName should be same as PV name
  volumeName: team-x-test-vol-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: team-x-test-vol-pv
spec:
  claimRef:
    namespace: team-x
    name: test-vol
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  csi:
    driver: cephfs.csi.ceph.com
    nodeStageSecretRef:
      name: csi-cephfs-secret-hd
      namespace: ceph-csi
    volumeAttributes:
      "clusterID": "cd79ae11-1804-4c06-a97e-aeeb961b84b0"
      "fsName": "cephfs"
      "staticVolume": "true"
      "rootPath": /volumes/team/share/8b73d3bb-282e-4c32-b13a-97459419bd5b
    # volumeHandle can be anything, need not to be same
    # as PV name or volume name. keeping same for brevity
    volumeHandle: team-share
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
PV+PVC for team-y
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-vol
  namespace: team-y
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeMode: Filesystem
  # volumeName should be same as PV name
  volumeName: team-y-test-vol-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: team-y-test-vol-pv
spec:
  claimRef:
    namespace: team-y
    name: test-vol
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  csi:
    driver: cephfs.csi.ceph.com
    nodeStageSecretRef:
      name: csi-cephfs-secret-hd
      namespace: ceph-csi
    volumeAttributes:
      "clusterID": "cd79ae11-1804-4c06-a97e-aeeb961b84b0"
      "fsName": "cephfs"
      "staticVolume": "true"
      "rootPath": /volumes/team-y/share/8b73d3bb-282e-4c32-b13a-97459419bd5b
    # volumeHandle can be anything, need not to be same
    # as PV name or volume name. keeping same for brevity
    volumeHandle: team-share
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
Making volumeHandle unique for each PV did the trick. I tested this by deploying 3 DaemonSets in 3 different namespaces, for example:
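A minimal sketch of the change, assuming illustrative handle names team-x-share and team-y-share while everything else in the two PVs above stays the same:
# team-x PV (fragment)
csi:
  driver: cephfs.csi.ceph.com
  volumeHandle: team-x-share   # unique per PV
# team-y PV (fragment)
csi:
  driver: cephfs.csi.ceph.com
  volumeHandle: team-y-share   # unique per PV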
You might also need to provide the ReadWriteMany access mode.
Reference link: https://kubernetes.io/docs/concepts/storage/persistent-volumes/

How does a PVC decide which PV to bind to in Kubernetes

I created a PV in a Kubernetes v1.16.0 cluster like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-flink-pv1
  namespace: middleware
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: "192.168.64.251"
    path: "/mnt/data/flink"
  persistentVolumeReclaimPolicy: Retain
and created a PVC like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: flink-pv-claim
  namespace: middleware
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
What determines which PV the PVC binds to? The storage size? How do I make the PVC bind to a specific PV?
From the docs here, a PVC is bound to a PV that has enough capacity to satisfy the claim.
Also, claims can specify a label selector to further filter the set of volumes. Only the volumes whose labels match the selector can be bound to the claim. This is documented here.
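As a sketch of the label-selector approach, reusing the manifests from the question (the app: flink label is illustrative; you can also pin the claim to one PV with spec.volumeName instead):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-flink-pv1
  labels:
    app: flink
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: "192.168.64.251"
    path: "/mnt/data/flink"
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: flink-pv-claim
  namespace: middleware
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      app: flink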

Creating multiple PV and PVC in same kubernetes namespace

I am trying to create multiple PVs, and a PVC for each one of them, in a single namespace, and it is not allowing me to do so. Is this expected behavior? I am using NFS.
NAME                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                                  STORAGECLASS   REASON   AGE
nfs-office-tools-service-pv   70Gi       RWX            Retain           Bound       office-tools-service-ns/nfs-office-tools-service-pv   manual                  4d
nfs-perfqa-jenkins-pv         20Gi       RWX            Retain           Available                                                          manual                  8m
nfs-perfqa-pv                 2Gi        RWX            Retain           Bound       perfqa/nfs-perfqa-pvc                                  manual                  17d
When I create a new PVC for the newly created PV, it gives the error below:
Below are the yaml for PV and PVC:
PV.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-perfqa-jenkins-pv
  namespace: perfqa
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/nfs_share/docker/test/jenkins"
PVC.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-perfqa-jenkins-pvc
  namespace: default
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
Your cluster has a ResourceQuota or LimitRange with requests.storage set to 2Gi, so you cannot create a PVC requesting 20Gi.
First of all, note that a PersistentVolume is defined at the cluster level, not at the namespace level.
The correct PV definition is below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-perfqa-jenkins-pv
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/nfs_share/docker/test/jenkins"
There is no issue with the PV; it is created and is Available:
nfs-perfqa-jenkins-pv 20Gi RWX Retain Available
Also check for a ResourceQuota in the default namespace. You might have set the maximum storage limit to 2Gi.
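A quick way to check (a sketch; adjust the namespace to wherever the PVC is created, here default):
$ kubectl get resourcequota -n default
$ kubectl describe resourcequota -n default
$ kubectl get limitrange -n default
kubectl describe shows the Used and Hard values for requests.storage, so you can see whether a 20Gi claim would exceed the quota.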

PersistentVolumeClaim in a namespace does not connect to a PersistentVolume

My PersistentVolumeClaim will not use the PersistentVolume I have prepared for it.
I have this PersistentVolume in monitoring-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitoring-volume
  labels:
    usage: monitoring
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/k8data/monitoring
After I have done
kubectl apply -f monitoring-pv.yaml
I can check that it exists with kubectl get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
monitoring-volume   50Gi       RWO            Retain           Available                                   5m
My PersistentVolumeClaim in monitoring-pvc.yaml looks like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: monitoring-claim
  namespace: monitoring
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  selector:
    matchLabels:
      usage: monitoring
When I do kubectl apply -f monitoring-pvc.yaml it gets created.
I can look at my new PersistentVolumeClaim with kubectl get pvc -n monitoring and I see
NAME               STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
monitoring-claim   Pending                                      manual         31s
When I look at my PersistentVolume with kubectl get pv I can see that it's still available:
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
monitoring-volume   50Gi       RWO            Retain           Available                                   16m
I had expected the PersistentVolume to be Bound, but it isn't. When I use the PersistentVolumeClaim, a new volume is created that is written to /tmp and is therefore not very persistent.
When I do the same operations without a namespace for my PersistentVolumeClaim everything seems to work.
I'm on minikube on Ubuntu 18.04.
What do I need to change to be able to connect the volume with the claim?
When I reviewed my question and compared it to a working solution, I noticed that I had missed the storageClassName, which was set to manual in the working example (without a namespace) that I had used.
My updated PersistentVolume now looks like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitoring-volume
  labels:
    usage: monitoring
spec:
  storageClassName: manual
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/k8data/monitoring
The only difference is
storageClassName: manual
My preliminary finding is that this was the silly mistake I had made.
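As a quick sanity check (commands only, assuming the names from the question), both objects should now report the manual storage class and end up Bound:
$ kubectl get pv monitoring-volume
$ kubectl get pvc monitoring-claim -n monitoring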
The Persistent Volume and the Volume Claim should be in the same namespace, so you need to add namespace: monitoring. Now you can try the code below.
for Persistent Volume
monitoring-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitoring-volume
  labels:
    usage: monitoring
  namespace: monitoring
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/k8data/monitoring
for Persistent volume claim
monitoring-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: monitoring-claim
  namespace: monitoring
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  selector:
    matchLabels:
      usage: monitoring

Kubernetes - PVC not binding the NFS PV

I created a PersistentVolume using NFS and a PVC for the same volume. However, the PVC always creates an EBS disk instead of binding to the PV. Please see the log below:
> kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
mynfspv   100Gi      RWX           Retain          Available                                   7s
create PVC now
> kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
mynfspvc   Bound    pvc-a081c470-3f23-11e7-9d30-024e42ef6b60   100Gi      RWX           default        4s
> kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM              STORAGECLASS   REASON   AGE
mynfspv                                    100Gi      RWX           Retain          Available                                              50s
pvc-a081c470-3f23-11e7-9d30-024e42ef6b60   100Gi      RWX           Delete          Bound       default/mynfspvc   default                 17s
nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mynfspv
  labels:
    name: nfs2
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: dbc56.efs.us-west-2.amazonaws.com
    path: /
nfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mynfspvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
It looks like you have dynamic provisioning and the default storage class feature enabled, and the default class is AWS EBS. You can check your default class with the following command:
$ kubectl get storageclasses
NAME                 TYPE
standard (default)   kubernetes.io/aws-ebs
If this is correct, then I think you'll need to specify a storage class to solve your problem.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs-class
provisioner: kubernetes.io/fake-nfs
Add storageClassName to both your PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mynfspvc
spec:
  storageClassName: nfs-class
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
and your PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mynfspv
  labels:
    name: nfs2
spec:
  storageClassName: nfs-class
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: dbc56.efs.us-west-2.amazonaws.com
    path: /
You can check out https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1 for details.
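Alternatively, if you do not want dynamic EBS provisioning at all, you can unset the default class (a sketch; the class name standard comes from the output above, and on older clusters the annotation key may be storageclass.beta.kubernetes.io/is-default-class):
$ kubectl patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'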
Which version of Kubernetes is this? The root cause is as mentioned by @ddysher: in your setup the default storage class is EBS, as you can see in the get pv/pvc outputs. Depending on the Kubernetes version you can also make use of a claim selector in the PVC spec. Refer to https://github.com/kubernetes/community/blob/master/contributors/design-proposals/volume-selectors.md
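A sketch of the claim-selector approach, reusing the name: nfs2 label that is already on the PV (the storageClassName line assumes the nfs-class from the previous answer; drop it if you are not creating that class):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mynfspvc
spec:
  storageClassName: nfs-class
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  selector:
    matchLabels:
      name: nfs2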