My PersistentVolumeClaim will not use the PersistentVolume I have prepared for it.
I have this PersistentVolume in monitoring-pv.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitoring-volume
  labels:
    usage: monitoring
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/k8data/monitoring
After running
kubectl apply -f monitoring-pv.yaml
I can check that it exists with kubectl get pv:
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
monitoring-volume   50Gi       RWO            Retain           Available                                   5m
My PersistentVolumeClaim in monitoring-pvc.yaml looks like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: monitoring-claim
  namespace: monitoring
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  selector:
    matchLabels:
      usage: monitoring
When I do kubectl apply -f monitoring-pvc.yaml it gets created.
I can look at my new PersistentVolumeClaim with kubectl get pvc -n monitoring and I see:
NAME               STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
monitoring-claim   Pending                                      manual         31s
When I look at my PersistentVolume with kubectl get pv I can see that it's still available:
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
monitoring-volume   50Gi       RWO            Retain           Available                                   16m
I had expected the PersistentVolume to be Bound, but it isn't. When I use a PersistentVolumeClaim with the same name as this, a new volume is provisioned for it instead, one that is written to /tmp and is therefore not very persistent.
When I do the same operations without a namespace for my PersistentVolumeClaim, everything seems to work.
I'm on minikube on Ubuntu 18.04.
What do I need to change to be able to connect the volume with the claim?
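One way to see why a claim stays Pending is to inspect its events with kubectl describe (a generic debugging step, not from the original post; output omitted):
kubectl describe pvc monitoring-claim -n monitoring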
When I reviewed my question and compared it to a working solution, I noticed that I had missed storageClassName, which was set to manual in an example without a namespace that I had been able to use.
My updated PersistentVolume now looks like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitoring-volume
  labels:
    usage: monitoring
spec:
  storageClassName: manual
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/k8data/monitoring
The only difference is
storageClassName: manual
My preliminary finding is that this was the silly mistake I had made.
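To verify a fix like this, re-apply both manifests and check that both sides report Bound (my assumed verification step; output omitted):
kubectl apply -f monitoring-pv.yaml -f monitoring-pvc.yaml
kubectl get pv monitoring-volume
kubectl get pvc monitoring-claim -n monitoring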
PersistentVolume and PersistentVolumeClaim should be in the same namespace. You need to add namespace: monitoring. Now you can try the code below.
For the PersistentVolume:
monitoring-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitoring-volume
  labels:
    usage: monitoring
  namespace: monitoring
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/k8data/monitoring
For the PersistentVolumeClaim:
monitoring-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: monitoring-claim
  namespace: monitoring
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  selector:
    matchLabels:
      usage: monitoring
Related
I am trying to create multiple PVs and PVCs (one for each PV) in a single namespace and it is not allowing me to do so. Is this expected behavior? I am using NFS.
NAME                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                                 STORAGECLASS   REASON   AGE
nfs-office-tools-service-pv   70Gi       RWX            Retain           Bound       office-tools-service-ns/nfs-office-tools-service-pv   manual                  4d
nfs-perfqa-jenkins-pv         20Gi       RWX            Retain           Available                                                         manual                  8m
nfs-perfqa-pv                 2Gi        RWX            Retain           Bound       perfqa/nfs-perfqa-pvc                                 manual                  17d
When I create a new PVC for the newly created PV, it gives an error.
Below are the YAMLs for the PV and PVC:
PV.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-perfqa-jenkins-pv
  namespace: perfqa
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/nfs_share/docker/test/jenkins"
PVC.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-perfqa-jenkins-pvc
  namespace: default
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
Your cluster has a ResourceQuota or LimitRange with requests.storage set to 2Gi, so you cannot create a PVC requesting 20Gi.
First of all, note that a PersistentVolume is defined at the cluster level; it is not defined at the namespace level.
The correct PV definition is below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-perfqa-jenkins-pv
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/nfs_share/docker/test/jenkins"
There is no issue with the PV; it is created and is Available:
nfs-perfqa-jenkins-pv 20Gi RWX Retain Available
Also check for a ResourceQuota in the default namespace. You might have set the max storage limit to 2Gi.
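A quick way to check for such a quota (standard kubectl commands; the quota name below is illustrative):
kubectl get resourcequota -n default
kubectl describe resourcequota -n default
A quota that would produce this symptom looks roughly like this:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
  namespace: default
spec:
  hard:
    requests.storage: 2Gi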
Is it required to create the directory manually on the nodes, or will it be auto-created by the PV?
Here is my PV & PVC file, and I'm seeing this error:
no persistent volumes available for this claim and no storage class is set
How do I resolve this?
kind: PersistentVolume
apiVersion: v1
metadata:
  name: zk1-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mr/zk"
cat zk1-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zk1-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
kubectl describe pvc zk1-pvc
Name:          zk1-pvc
Namespace:     instavote
StorageClass:
Status:        Pending
Volume:
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"zk1-pvc","namespace":"instavote"},"spec":{"accessMo...
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Mounted By:    zk1-745b7cbf46-lg7v9
Events:
  Type    Reason         Age                  From                         Message
  ----    ------         ----                 ----                         -------
  Normal  FailedBinding  12s (x14 over 3m7s)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
Back to your main question
Is it required to create the directory manually on the nodes, or will it be
auto-created by the PV?
First of all, the error in your output is not related to your question. As an answer to your question: yes, the directory is created automatically.
In order to achieve this, first you have to create a StorageClass with no-provisioner, as in the example below:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Then you have to create a PersistentVolume that sets this storageClassName and a hostPath, like below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zk1-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mr/zk
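As a side note (not part of the original answer): if you prefer the directory creation to be explicit, hostPath accepts an optional type field, and DirectoryOrCreate creates the path on the node when it is missing:
hostPath:
  path: /mr/zk
  type: DirectoryOrCreate  # create the directory if it does not already exist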
Then you have to create the PVC and a Pod/Deployment, as in the example below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: nginx  # container names must be lowercase DNS-1123 labels
      image: gcr.io/google-containers/nginx:1.7.9
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
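Note that because the StorageClass above sets volumeBindingMode: WaitForFirstConsumer, the claim will stay Pending until the Pod that uses it is scheduled. You can watch the transition like this (the filename is a placeholder):
kubectl apply -f pvc-and-pod.yaml
kubectl get pvc myclaim --watch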
NOTE:
Don't forget to put the storageClassName: manual parameter on both the PVC and PV manifests. Otherwise they will not be able to bind to each other.
Hope that clears it up.
You forgot to specify storageClassName: manual in the PersistentVolumeClaim.
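For reference, a corrected zk1-pvc.yaml would look like this; only the storageClassName line is added compared to the original:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zk1-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi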
I am trying to create both a PersistentVolume and a PersistentVolumeClaim on Google Kubernetes Engine.
The way to link them is via a label selector.
I am creating the objects with this definition:
volume.yml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test
  namespace: mynamespace
  labels:
    pv-owner: owner
    pv-usage: pv-test
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /data/test/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-test
  namespace: mynamespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      pv-usage: pv-test
and running:
kubectl apply -f volume.yml
Both objects are successfully created; however, the PersistentVolumeClaim apparently stays Pending forever, awaiting a PersistentVolume that matches its requirements.
Could you please help me?
Thanks!
First of all, PersistentVolume resources don’t belong to any namespace. They’re cluster-level resources like nodes, but PersistentVolumeClaim objects can only be created in a specific namespace.
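You can confirm the scoping yourself with a reasonably recent kubectl (api-resources was added in kubectl 1.11):
kubectl api-resources --namespaced=false | grep persistentvolume       # PVs are cluster-scoped
kubectl api-resources --namespaced=true | grep persistentvolumeclaims  # PVCs are namespaced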
Seems like when you created the claim earlier, it was immediately bound to the PersistentVolume. Can you show output of the commands:
$ kubectl get pv
$ kubectl get pvc
Highly likely your persistentVolumeReclaimPolicy was set to Retain, so your PersistentVolume is in the Released status now. Since no other PersistentVolume resource matches your claim's requirements, your PersistentVolumeClaim is in the Pending status.
Thanks for your help @konstantin-vustin
I found the solution. I had to specify the storageClassName: manual attribute in the spec of both objects.
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class
According to the doc:
A PV with no storageClassName has no class and can only be bound to PVCs that request no particular class.
So IMO it should have worked before, and I am not sure I understood it correctly. (The status below hints at why it did not: the claim was assigned the cluster's default standard class, so it was no longer a claim that requests no particular class.)
This was the status before
kubectl get pv pv-test-vol && kubectl get pvc pv-test --namespace openwhisk
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-test-vol   2Gi        RWO            Retain           Available           manual                  26s
NAME      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pv-test   Pending                                      standard       26s
The updated definitions
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test
  namespace: mynamespace
  labels:
    pv-owner: owner
    pv-usage: pv-test
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /data/test/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-test
  namespace: mynamespace
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      pv-usage: pv-test
This is the status after
kubectl get pv pv-test-vol && kubectl get pvc pv-test --namespace openwhisk
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pv-test-vol   2Gi        RWO            Retain           Bound    openwhisk/pv-test   manual                  4s
NAME      STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pv-test   Bound    pv-test-vol   2Gi        RWO            manual         4s
StorageClasses are the new method of specifying dynamic persistent volume claim (PVC) dependencies within Kubernetes. This avoids the need to explicitly provision one directly with the cloud provider (in my case Google Container Engine (GKE)).
Definition for the StorageClass (GKE already has a default standard class):
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  zone: europe-west1-b
Definition for the actual PVC
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-server-pvc
  namespace: staging
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 100Gi
  storageClassName: "standard"
Here is the result of kubectl get storageclass:
NAME                 TYPE
fast                 kubernetes.io/gce-pd
standard (default)   kubernetes.io/gce-pd
Here is the result of kubectl get pvc:
NAME             STATUS   VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
nfs-pvc          Bound    nfs                                        1Mi        RWX                          119d
nfs-server-pvc   Bound    pvc-905a810b-3f13-11e7-82f9-42010a840072   100Gi      RWO           standard       81d
I would like to continue taking snapshots of the volumes, but the dynamically generated volume names (in this case pvc-905a810b-3f13-11e7-82f9-42010a840072) mean I cannot keep using the following command via cron (note that the "nfs" name is now incorrect):
gcloud compute --project "XXX-XXX" disks snapshot "nfs" --zone "europe-west1-b" --snapshot-names "nfs-${DATE}"
I guess this boils down to whether Kubernetes allows explicit naming through a StorageClass-based PVC. The docs don't seem to allow this. Any ideas?
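One workaround (a sketch of mine, not from the answers below) is to resolve the dynamically generated disk name at snapshot time instead of hard-coding it: the claim records its bound volume in .spec.volumeName, and a GCE-backed PV records the underlying disk in .spec.gcePersistentDisk.pdName:
PV_NAME=$(kubectl get pvc nfs-server-pvc -n staging -o jsonpath='{.spec.volumeName}')
PD_NAME=$(kubectl get pv "$PV_NAME" -o jsonpath='{.spec.gcePersistentDisk.pdName}')
gcloud compute --project "XXX-XXX" disks snapshot "$PD_NAME" --zone "europe-west1-b" --snapshot-names "nfs-${DATE}"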
One approach is to manually create the PV and give it a stable name that you can use in your scripts. You can use gcloud commands to create the underlying PD disks. When you create the PV, give it a label:
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
name: my-pv-0
labels:
pdName: my-pv-0
spec:
capacity:
storage: "10Gi"
accessModes:
- "ReadWriteOnce"
storageClassName: fast
gcePersistentDisk:
fsType: "ext4"
pdName: "my-pd-0"
Then attach it to the PVC using a selector:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pvc-0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: fast
  selector:
    matchLabels:
      pdName: my-pv-0
I created a PersistentVolume using NFS and a PVC for the same volume. However, the PVC always creates an EBS disk instead of binding to the PV. Please see the log below:
> kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
mynfspv   100Gi      RWX           Retain          Available                                   7s
Create the PVC now:
> kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
mynfspvc   Bound    pvc-a081c470-3f23-11e7-9d30-024e42ef6b60   100Gi      RWX           default        4s
> kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM              STORAGECLASS   REASON   AGE
mynfspv                                    100Gi      RWX           Retain          Available                                              50s
pvc-a081c470-3f23-11e7-9d30-024e42ef6b60   100Gi      RWX           Delete          Bound       default/mynfspvc   default                 17s
nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mynfspv
  labels:
    name: nfs2
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: dbc56.efs.us-west-2.amazonaws.com
    path: /
nfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mynfspvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
It looks like you have dynamic provisioning and the default-storage-class feature enabled, and the default class is AWS EBS. You can check your default class with the following command:
$ kubectl get storageclasses
NAME                 TYPE
standard (default)   kubernetes.io/aws-ebs
If this is correct, then I think you'll need to specify a storage class to solve your problem.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs-class
provisioner: kubernetes.io/fake-nfs
Add storageClassName to both your PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mynfspvc
spec:
  storageClassName: nfs-class
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
and your PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mynfspv
  labels:
    name: nfs2
spec:
  storageClassName: nfs-class
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: dbc56.efs.us-west-2.amazonaws.com
    path: /
You can check out https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1 for details.
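Alternatively (my suggestion, not part of the answer above), you can stop the EBS class from being the default, so that claims without an explicit storageClassName are no longer dynamically provisioned. On recent clusters the annotation is storageclass.kubernetes.io/is-default-class (older clusters use a beta-prefixed name):
kubectl patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'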
Which version of Kubernetes is this? The root cause is as mentioned by @ddysher. In your setup the default storage class is EBS, as you can see in the get pv/pvc outputs. Depending on your Kubernetes version, you can also make use of a claim selector in the PVC spec. Refer to https://github.com/kubernetes/community/blob/master/contributors/design-proposals/volume-selectors.md