Creating multiple PVs and PVCs in the same Kubernetes namespace - kubernetes

I am trying to create multiple PVs and PVCs (one PVC for each PV) in a single namespace, and it is not allowing me to do so. Is this expected behavior? I am using NFS.
NAME                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                                 STORAGECLASS   REASON   AGE
nfs-office-tools-service-pv   70Gi       RWX            Retain           Bound       office-tools-service-ns/nfs-office-tools-service-pv   manual                  4d
nfs-perfqa-jenkins-pv         20Gi       RWX            Retain           Available                                                         manual                  8m
nfs-perfqa-pv                 2Gi        RWX            Retain           Bound       perfqa/nfs-perfqa-pvc                                 manual                  17d
When I create a new PVC for the newly created PV, it gives an error as below:
Below are the YAML files for the PV and PVC:
PV.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-perfqa-jenkins-pv
  namespace: perfqa
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/nfs_share/docker/test/jenkins"
PVC.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-perfqa-jenkins-pvc
  namespace: default
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi

Your cluster has a ResourceQuota or LimitRange with requests.storage set to 2Gi, so you cannot create a PVC requesting 20Gi.
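You can confirm whether a quota is the culprit before changing anything; a quick check (assuming the claim goes to the default namespace, as in your PVC.yaml):

kubectl describe resourcequota -n default
kubectl describe limitrange -n default

If requests.storage shows a 2Gi hard limit, either raise the quota or create the PVC in a namespace without one.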

First of all, note that a PersistentVolume is defined at the cluster level; it is not namespaced. The correct PV definition is below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-perfqa-jenkins-pv
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/nfs_share/docker/test/jenkins"
There is no issue with the PV; it is created and is Available:
nfs-perfqa-jenkins-pv   20Gi   RWX   Retain   Available
Also check for a ResourceQuota in the default namespace. You might have set the max storage limit to 2Gi.

Related

How does a PVC decide which PV to bind to in Kubernetes

I created a PV in a Kubernetes v1.16.0 cluster like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-flink-pv1
  namespace: middleware
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: "192.168.64.251"
    path: "/mnt/data/flink"
  persistentVolumeReclaimPolicy: Retain
and created the PVC like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: flink-pv-claim
  namespace: middleware
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
What determines which PV the PVC binds to? The storage size? How do I make a PVC bind to a specific PV?
From the docs here, a PVC is bound to a PV that has enough capacity to satisfy the claim.
Also, claims can specify a label selector to further filter the set of volumes; only the volumes whose labels match the selector can be bound to the claim. This is documented here.
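For example, a minimal sketch of a claim pinned by labels; the app: flink label is illustrative (not from the question), and the PV must carry the same label for the claim to bind:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: flink-pv-claim
  namespace: middleware
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      app: flink   # hypothetical label; add the same label to nfs-flink-pv1

Alternatively, setting spec.volumeName: nfs-flink-pv1 on the claim binds it to that exact PV.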

kubernetes volume claim pending with specific volume name

I'm trying to create a PersistentVolumeClaim, giving it a specific volumeName to use.
I use this code:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: zipkin
  name: pvc-ciro
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-provisioner
  resources:
    requests:
      storage: 0.1Gi
  volumeName: "demo"
If I remove volumeName, the PVC is correctly bound; otherwise it remains in Pending status.
Why?
The volumeName is the name of the PersistentVolume you want to use.
On GKE, a PVC can automatically provision a PV to bind to, or you can specify the name of an existing PV using volumeName.
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ciro
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 0.1Gi
  volumeName: demo
pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: standard
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.17.0.2
And the output will be:
$ kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
demo   5Gi        RWO            Recycle          Bound    default/pvc-ciro   standard                13s
$ kubectl get pvc
NAME       STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-ciro   Bound    demo     5Gi        RWO            standard       8s
You can read more details in the Kubernetes documentation on Persistent Volumes.

PersistentVolumeClaim in a namespace does not connect to a PersistentVolume

My PersistentVolumeClaim will not use the PersistentVolume I have prepared for it.
I have this PersistentVolume in monitoring-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitoring-volume
  labels:
    usage: monitoring
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/k8data/monitoring
After I have done
kubectl apply -f monitoring-pv.yaml
I can check that it exists with kubectl get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
monitoring-volume   50Gi       RWO            Retain           Available                                   5m
My PersistentVolumeClaim in monitoring-pvc.yaml looks like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: monitoring-claim
  namespace: monitoring
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  selector:
    matchLabels:
      usage: monitoring
When I do kubectl apply -f monitoring-pvc.yaml it gets created.
I can look at my new PersistentVolumeClaim with kubectl get pvc -n monitoring and I see:
NAME               STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
monitoring-claim   Pending                                      manual         31s
When I look at my PersistentVolume with kubectl get pv I can see that it's still available:
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
monitoring-volume   50Gi       RWO            Retain           Available                                   16m
I had expected the PersistentVolume to be Bound, but it isn't. When I use a PersistentVolumeClaim with the same name as this one, a new volume is created that is written to /tmp and is therefore not very persistent.
When I do the same operations without a namespace for my PersistentVolumeClaim everything seems to work.
I'm on Minikube on Ubuntu 18.04.
What do I need to change to be able to connect the volume with the claim?
When I reviewed my question and compared it to a working solution, I noticed that I had missed the storageClassName, which was set to manual in the namespace-less example I was able to use.
My updated PersistentVolume now looks like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitoring-volume
  labels:
    usage: monitoring
spec:
  storageClassName: manual
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/k8data/monitoring
The only difference is:
storageClassName: manual
My preliminary finding is that this was the silly mistake I had made.
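After re-applying both manifests, the pair should bind; a quick way to verify (resource names as in the question):

kubectl get pv monitoring-volume
kubectl get pvc monitoring-claim -n monitoring

Both should report STATUS Bound once the PV's storageClassName matches the claim's.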
Persistent Volume and Volume Claim should be in the same namespace. You need to add namespace: monitoring. Now you can try the code below.
For the Persistent Volume:
monitoring-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitoring-volume
  labels:
    usage: monitoring
  namespace: monitoring
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/k8data/monitoring
For the Persistent Volume Claim:
monitoring-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: monitoring-claim
  namespace: monitoring
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  selector:
    matchLabels:
      usage: monitoring

VolumeClaim in Kubernetes Google Cloud

I am trying to create both a PersistentVolume and a PersistentVolumeClaim on Google Kubernetes Engine.
The way to link them is via labelSelector.
I am creating the objects with this definition:
volume.yml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test
  namespace: mynamespace
  labels:
    pv-owner: owner
    pv-usage: pv-test
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /data/test/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-test
  namespace: mynamespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      pv-usage: pv-test
and running:
kubectl apply -f volume.yml
Both objects are created successfully; however, the PersistentVolumeClaim apparently stays Pending forever, awaiting a volume that matches its requirements.
Could you please help me?
Thanks!
First of all, PersistentVolume resources don’t belong to any namespace. They’re cluster-level resources like nodes, but PersistentVolumeClaim objects can only be created in a specific namespace.
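You can see this scoping directly from kubectl; for instance:

kubectl get pv                     # cluster-scoped: no namespace flag needed
kubectl get pvc -n mynamespace     # namespaced: the claim lives in mynamespace
kubectl api-resources --namespaced=false | grep persistentvolume   # PVs appear among cluster-scoped resources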
It seems like when you created the claim earlier, it was immediately bound to the PersistentVolume. Can you show the output of these commands:
$ kubectl get pv
$ kubectl get pvc
Highly likely your persistentVolumeReclaimPolicy was set to Retain, so your PersistentVolume is in Released status now. Since no other PersistentVolume resource matches your claim's requirements, your PersistentVolumeClaim is in Pending status.
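If the PV is indeed Released, one common way to make it Available again is to clear its claimRef. This is a manual workaround, not something from the answer, so apply it deliberately (pv-test is the PV name from the question):

kubectl patch pv pv-test -p '{"spec":{"claimRef":null}}'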
Thanks for your help @konstantin-vustin.
I found the solution. I had to specify storageClassName: manual attribute in the spec of both objects.
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class
According to the doc:
A PV with no storageClassName has no class and can only be bound to PVCs that request no particular class.
So IMO it should have worked before; I am not sure I understood it correctly.
This was the status before:
kubectl get pv pv-test-vol && kubectl get pvc pv-test --namespace openwhisk
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-test-vol   2Gi        RWO            Retain           Available           manual                  26s
NAME      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pv-test   Pending                                      standard       26s
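Note that the Pending claim shows STORAGECLASS standard: when a cluster has a default StorageClass (standard on GKE), a claim that omits storageClassName is assigned that default, so it can never bind a PV with a different class or no class. That likely explains why the classless setup did not work. To target a classless PV explicitly, the claim has to set the empty string; a minimal sketch:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-test
  namespace: mynamespace
spec:
  storageClassName: ""   # explicitly request a PV that has no class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi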
The updated definitions:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test
  namespace: mynamespace
  labels:
    pv-owner: owner
    pv-usage: pv-test
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /data/test/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-test
  namespace: mynamespace
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      pv-usage: pv-test
This is the status after:
kubectl get pv pv-test-vol && kubectl get pvc pv-test --namespace openwhisk
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pv-test-vol   2Gi        RWO            Retain           Bound    openwhisk/pv-test   manual                  4s
NAME      STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pv-test   Bound    pv-test-vol   2Gi        RWO            manual         4s

Kubernetes - PVC not binding the NFS PV

I created a PersistentVolume using NFS and a PVC for the same volume. However, the PVC always creates EBS disk storage instead of binding to the PV. Please see the log below:
> kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
mynfspv   100Gi      RWX           Retain          Available                                   7s
create PVC now
> kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
mynfspvc   Bound    pvc-a081c470-3f23-11e7-9d30-024e42ef6b60   100Gi      RWX           default        4s
> kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM              STORAGECLASS   REASON   AGE
mynfspv                                    100Gi      RWX           Retain          Available                                              50s
pvc-a081c470-3f23-11e7-9d30-024e42ef6b60   100Gi      RWX           Delete          Bound       default/mynfspvc   default                 17s
nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mynfspv
  labels:
    name: nfs2
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: dbc56.efs.us-west-2.amazonaws.com
    path: /
nfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mynfspvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
It looks like you have dynamic provisioning and the default StorageClass feature enabled, and the default class is AWS EBS. You can check your default class with the following command:
$ kubectl get storageclasses
NAME                 TYPE
standard (default)   kubernetes.io/aws-ebs
If this is correct, then I think you'll need to specify a storage class to solve your problem.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs-class
provisioner: kubernetes.io/fake-nfs
Add storageClassName to both your PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mynfspvc
spec:
  storageClassName: nfs-class
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
and your PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mynfspv
  labels:
    name: nfs2
spec:
  storageClassName: nfs-class
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: dbc56.efs.us-west-2.amazonaws.com
    path: /
You can check out https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1 for details.
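If you never want EBS volumes to be provisioned dynamically here, another option is to stop standard from being the default class. This is an alternative to the answer above and uses the documented default-class annotation, so apply it to your own cluster with care:

kubectl patch storageclass standard -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'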
Which version of Kubernetes is this? The root cause is as mentioned by @ddysher: in your setup the default storage class is EBS, as you can see in the get pv/pvc outputs. Depending on the Kubernetes version, you can also make use of a claim selector in the PVC spec. Refer to https://github.com/kubernetes/community/blob/master/contributors/design-proposals/volume-selectors.md