Given the following PVC and PV:
PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: packages-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: packages-volume
PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: packages-volume
  namespace: test
spec:
  claimRef:
    name: packages-pvc
    namespace: test
  accessModes:
    - ReadWriteMany
  nfs:
    path: {{NFS_PATH}}
    server: {{NFS_SERVER}}
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain
If I create the PV and then the PVC, they bind together. However, if I delete the PVC and then re-create it, they do not bind (the PVC stays Pending). Why?
Note that after deleting the PVC, the PV remains in Released status:
$ kubectl get pv packages-volume
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
packages-volume 1007Gi RWX Retain Released default/packages-pvc 10m
It should have status Available so it can be reused by another PersistentVolumeClaim instance.
Why isn't it Available?
If you display the current yaml definition of the PV, which you can easily do by executing:
kubectl get pv packages-volume -o yaml
you may notice that the claimRef section still contains the uid of the recently deleted PersistentVolumeClaim:
claimRef:
  apiVersion: v1
  kind: PersistentVolumeClaim
  name: packages-pvc
  namespace: default
  resourceVersion: "10218121"
  uid: 1aede3e6-eaa1-11e9-a594-42010a9c0005
You can easily verify it by issuing:
kubectl get pvc packages-pvc -o yaml | grep uid
just before you delete your PVC and compare it with what the PV definition contains. You'll see that this is exactly the same uid that is still referenced by your PV after the PVC is deleted. This remaining reference is the actual reason the PV remains in Released status.
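For instance, here is one way to compare the two uids directly with jsonpath instead of grep (resource names are the ones from the manifests above):
kubectl get pvc packages-pvc -o jsonpath='{.metadata.uid}'        # run this before deleting the PVC
kubectl get pv packages-volume -o jsonpath='{.spec.claimRef.uid}' # still shows the old uid after the PVC is gone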
Why does the newly created PVC remain in a Pending state?
Although your newly created PVC may seem to be exactly the same PVC you've just deleted, since you're creating it from the very same yaml file, from the perspective of Kubernetes it's a completely new instance of the PersistentVolumeClaim object with a completely different uid. This is why it remains in Pending status and is unable to bind to the PV.
Solution:
To make the PV Available again, you just need to remove the mentioned uid reference, e.g. by issuing:
kubectl patch pv packages-volume --type json -p '[{"op": "remove", "path": "/spec/claimRef/uid"}]'
or alternatively by removing the whole claimRef section which can be done as follows:
kubectl patch pv packages-volume --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'
The above answer solves the problem. However, there is a reason the PV is kept in the Released state: the k8s controller could easily remove the PVC reference after the user deletes the PVC and bring the PV back to the Available state, but it deliberately does not.
The problem is that once a PV has been in the Bound state, it is likely to contain data. If you forcefully bind it again, you may lose or corrupt that data.
So you must back up and/or clean the PV before you try to make it Available again.
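As a rough sketch of that backup/clean step, assuming the PV is backed by the NFS export from the manifests above (the mount point and archive path below are hypothetical):
# mount the export on an admin host (placeholders taken from the PV definition)
sudo mount -t nfs {{NFS_SERVER}}:{{NFS_PATH}} /mnt/packages
# back up whatever the previous claim left behind
sudo tar czf /tmp/packages-backup.tar.gz -C /mnt/packages .
# optionally clean it so the next claim starts empty
sudo rm -rf /mnt/packages/*
sudo umount /mnt/packages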
thanks
Related
I deployed a PVC, which dynamically created a PV.
After that I deleted the PVC and now my PV looks like below:
PS Kubernetes> kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-1b59942c-eb26-4603-b78e-7054d9418da6 2G RWX Retain Released default/db-pvc hostpath 26h
When I recreate my PVC, that creates a new PV.
Is there a way to reattach the existing PV to my PVC ?
Is there a way to do it automatically ?
I tried to attach the PV to my PVC using the "volumeName" option, but it did not work.
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
db-pvc Pending pvc-1b59942c-eb26-4603-b78e-7054d9418da6 0 hostpath 77s
When a PVC is deleted, the PV stays in the "Released" state with the claimRef uid of the deleted PVC.
To reuse a PV, you need to delete the claimRef to make it go to the "Available" state.
You may either edit the PV and manually delete the claimRef section, or run the patch command as follows:
kubectl patch pv pvc-1b59942c-eb26-4603-b78e-7054d9418da6 --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'
Subsequently, you recreate the PVC.
If you are on GKE and your PV is still running, you can create the PVC using:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-1b59942c-eb26-4603-b78e-7054d9418da6
spec:
  storageClassName: default
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2G
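If the recreated claim still stays Pending, you can additionally pin it to the released volume by adding spec.volumeName to the same claim (a sketch only, and it assumes the claimRef on the PV has already been cleared with the patch shown above):
spec:
  volumeName: pvc-1b59942c-eb26-4603-b78e-7054d9418da6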
I am working on deploying a Hyperledger Fabric test network on a Kubernetes minikube cluster. I intend to use a PersistentVolume to share crypto-config and channel artifacts among the various peers and orderers. Following are my PersistentVolume.yaml and PersistentVolumeClaim.yaml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: persistent-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: "/nfsroot"
    server: "3.128.203.245"
    readOnly: false
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Following is the pod where the above claim is mounted on /data:
kind: Pod
apiVersion: v1
metadata:
  name: test-shell
  labels:
    name: test-shell
spec:
  containers:
    - name: shell
      image: ubuntu
      command: ["/bin/bash", "-c", "while true ; do sleep 10 ; done"]
      volumeMounts:
        - mountPath: "/data"
          name: pv
  volumes:
    - name: pv
      persistentVolumeClaim:
        claimName: persistent-volume-claim
NFS is set up on my EC2 instance. I have verified the NFS server is working fine, and I was able to mount it inside minikube. I do not understand what I am doing wrong, but any file present inside 3.128.203.245:/nfsroot is not present in test-shell:/data.
What am I missing? I even tried a hostPath mount, but to no avail. Please help me out.
I think you should check the following things to verify whether NFS is mounted successfully.
Run this command on the node where you want to mount it:
$ showmount -e nfs-server-ip
In my case: $ showmount -e 172.16.10.161
Export list for 172.16.10.161:
/opt/share *
Use the $ df -hT command to see whether NFS is mounted or not. In my case it gives the output: 172.16.10.161:/opt/share nfs4 91G 32G 55G 37% /opt/share
If it is not mounted, then use the following command:
$ sudo mount -t nfs 172.16.10.161:/opt/share /opt/share
If the above commands show an error, then check whether the firewall is allowing NFS:
$ sudo ufw status
If not, then allow it using the command:
$ sudo ufw allow from nfs-server-ip to any port nfs
I made the same setup and don't face any issues. My Fabric k8s cluster is running successfully. The Hyperledger Fabric k8s yaml files can be found in my GitHub repo. There I have deployed a consortium of banks on Hyperledger Fabric as a dynamic multi-host blockchain network, which means you can add orgs and peers, join peers, create channels, and install and instantiate chaincode on the fly in an existing running blockchain network.
By default in minikube you should have a default StorageClass:
Each StorageClass contains the fields provisioner, parameters, and reclaimPolicy, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned.
For example, NFS doesn't provide an internal provisioner, but an external provisioner can be used. There are also cases when 3rd party storage vendors provide their own external provisioner.
Change the default StorageClass
In your example, this default StorageClass can lead to problems.
In order to list enabled addons in minikube please use:
minikube addons list
To list all StorageClasses in your cluster use:
kubectl get sc
NAME PROVISIONER
standard (default) k8s.io/minikube-hostpath
Please note that at most one StorageClass can be marked as default. If two or more of them are marked as default, a PersistentVolumeClaim without storageClassName explicitly specified cannot be created.
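If the default class is getting in your way, you can un-mark it with the documented is-default-class annotation (here standard is the minikube class listed above; adjust the name to your cluster):
kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'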
In your example, the most probable scenario is that you already have a default StorageClass. Applying those resources caused: new PV creation (without a StorageClass) and new PVC creation (with a reference to the existing default StorageClass). In this situation there is no reference between your custom PV and PVC, so they will not bind. As an example, please take a look:
kubectl get pv,pvc,sc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/nfs 3Gi RWX Retain Available 50m
persistentvolume/pvc-8aeb802f-cd95-4933-9224-eb467aaa9871 1Gi RWX Delete Bound default/pvc-nfs standard 50m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pvc-nfs Bound pvc-8aeb802f-cd95-4933-9224-eb467aaa9871 1Gi RWX standard 50m
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/standard (default) k8s.io/minikube-hostpath Delete Immediate false 103m
This example will not work due to:
a new persistentvolume/nfs has been created (without a reference to a pvc)
a new persistentvolume/pvc-8aeb802f-cd95-4933-9224-eb467aaa9871 has been created using the default StorageClass. In the Claim column we can notice that this pv was created by dynamic pv provisioning using the default StorageClass, with a reference to the default/pvc-nfs claim (persistentvolumeclaim/pvc-nfs).
Solution 1.
According to the information from the comments:
Also I am able to connect to it within my minikube and also my actual ubuntu system.
If you are able to mount this nfs share from inside the minikube host, i.e. you mounted the nfs share into your minikube node, please try to use this example with a hostPath volume directly from your pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-shell
  namespace: default
spec:
  volumes:
    - name: pv
      hostPath:
        path: /path/shares # path to nfs mount point on minikube node
  containers:
    - name: shell
      image: ubuntu
      command: ["/bin/bash", "-c", "sleep 1000 "]
      volumeMounts:
        - name: pv
          mountPath: /data
Solution 2.
If you are using PV/PVC approach:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: persistent-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: "" # Empty string must be set explicitly, otherwise the default StorageClass will be used / or use a custom storageClassName
  nfs:
    path: "/nfsroot"
    server: "3.128.203.245"
    readOnly: false
  claimRef:
    name: persistent-volume-claim
    namespace: default
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persistent-volume-claim
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: "" # Empty string must be set explicitly, otherwise the default StorageClass will be used / or use a custom storageClassName
  volumeName: persistent-volume
Note:
If you are not referencing any provisioner associated with your StorageClass, keep in mind that:
Helper programs relating to the volume type may be required for consumption of a PersistentVolume within a cluster. In this example, the PersistentVolume is of type NFS and the helper program /sbin/mount.nfs is required to support the mounting of NFS filesystems.
Please keep in mind that when you create a pvc, the kubernetes persistent volume controller tries to bind the pvc to a proper pv. During this process, different factors are taken into account, such as: storageClassName (default/custom), accessModes, claimRef, and volumeName.
In this case you can use:
PersistentVolume.spec.claimRef.name: persistent-volume-claim
PersistentVolumeClaim.spec.volumeName: persistent-volume
Note:
The control plane can bind PersistentVolumeClaims to matching PersistentVolumes in the cluster. However, if you want a PVC to bind to a specific PV, you need to pre-bind them.
By specifying a PersistentVolume in a PersistentVolumeClaim, you declare a binding between that specific PV and PVC. If the PersistentVolume exists and has not reserved PersistentVolumeClaims through its claimRef field, then the PersistentVolume and PersistentVolumeClaim will be bound.
The binding happens regardless of some volume matching criteria, including node affinity. The control plane still checks that storage class, access modes, and requested storage size are valid.
Once the PV/PVC have been created, or in case of any problem with PV/PVC binding, please use the following commands to inspect the current state:
kubectl get pv,pvc,sc
kubectl describe pv
kubectl describe pvc
kubectl describe pod
kubectl get events
I'm trying to create a Persistent Volume on top of/based off of an existing Storage Class name. Then I want to attach the PVC to it so that they are bound. Running the code below will give me the "sftp-pv-claim" I want, but it is not bound to my PV ("sftp-pv-storage"). Its status is "Pending".
The error message I receive is: "The PersistentVolume "sftp-pv-storage" is invalid: spec: Required value: must specify a volume type". If anyone can point me in the right direction as to why I'm getting the error message, it'd be much appreciated.
Specs:
I'm creating the PV and PVC using a helm chart.
I'm using the Rancher UI to see if they are bound or not and if the PV is generated.
The storage I'm using is Ceph with Rook (to allow for dynamic provisioning of PVs).
Error:
The error message I receive is: "The PersistentVolume "sftp-pv-storage" is invalid: spec: Required value: must specify a volume type".
Attempts:
I've tried using claimRef and matchLabels to no avail.
I've added "volumetype: none" to my PV specs.
If I add "hostPath: path: "/mnt/data"" as a spec to the PV, it will show up as an Available PV (with a local node path), but my PVC is not bonded to it. (Also, for deployment purposes I don't want to use hostPath.
## Create Persistent Storage for SFTP
## Ref: https://www.cloudtechnologyexperts.com/kubernetes-persistent-volume-with-rook/
kind: PersistentVolume
apiVersion: v1
metadata:
  name: sftp-pv-storage
  labels:
    type: local
    name: sftp-pv-storage
spec:
  storageClassName: rook-ceph-block
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  allowVolumeExpansion: true
  volumetype: none
---
## Create Claim (links user to PV)
## ==> If pod is created, need to automatically create PVC for user (without their input)
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: sftp-pv-claim
spec:
  storageClassName: sftp-pv-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
The PersistentVolume "sftp-pv-storage" is invalid: spec: Requiredvalue: must specify a volume type.
In a PV manifest you must provide the type of the volume. A list of all supported types is described here.
As you are using Ceph, I assume you will use CephFS.
A cephfs volume allows an existing CephFS volume to be mounted into your Pod. Unlike emptyDir, which is erased when a Pod is removed, the contents of a cephfs volume are preserved and the volume is merely unmounted. This means that a CephFS volume can be pre-populated with data, and that data can be “handed off” between Pods. CephFS can be mounted by multiple writers simultaneously.
An example of CephFS can be found on GitHub.
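As an illustration only, a minimal CephFS-backed PV could look like the sketch below; the monitor address and secret name are hypothetical placeholders, not values from your cluster:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sftp-pv-storage
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  cephfs:
    monitors:
      - 10.16.154.78:6789   # hypothetical Ceph monitor address
    user: admin
    secretRef:
      name: ceph-secret     # hypothetical Secret holding the Ceph client key
    readOnly: false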
If I add "hostPath: path: "/mnt/data"" as a spec to the PV, it will show up as an Available PV (with a local node path), but my PVC is not bonded to it.
If you check the official Kubernetes docs about storageClassName:
A claim can request a particular class by specifying the name of a StorageClass using the attribute storageClassName. Only PVs of the requested class, ones with the same storageClassName as the PVC, can be bound to the PVC.
The storageClassName of your PV and PVC are different.
PV:
spec:
  storageClassName: rook-ceph-block
PVC:
spec:
  storageClassName: sftp-pv-storage
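So one straightforward fix is to give both objects the same class name, for example keeping rook-ceph-block from your PV (sketch only):
PVC:
spec:
  storageClassName: rook-ceph-block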
Hope it will help.
You did not specify "hostPath:" in your PersistentVolume.
Add it and the error should be resolved. See the sample below.
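A minimal sketch of what that could look like, based on the PV from the question with hostPath added and the invalid volumetype field dropped (the path is purely illustrative):
kind: PersistentVolume
apiVersion: v1
metadata:
  name: sftp-pv-storage
  labels:
    type: local
spec:
  storageClassName: rook-ceph-block
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/data"   # illustrative path only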
I know there are lots of discussions around this topic, but somehow I cannot get it working.
I am trying to install an Elasticsearch cluster with a StatefulSet and an NFS persistent volume on bare metal. My pv, pvc, and sc configs are as below:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-storage-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  nfs:
    server: 172.23.240.85
    path: /servers/scratch50g/vishalg/kube
The StatefulSet has the following pvc section defined:
volumeClaimTemplates:
  - metadata:
      name: beehive-pv-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: manual
      resources:
        requests:
          storage: 1Gi
Now, when I try to deploy it, I get the following error on statefulset:
pod has unbound immediate PersistentVolumeClaims
When I get the pvc events, it shows:
Warning ProvisioningFailed 3s (x2 over 12s) persistentvolume-controller no volume plugin matched
I tried not giving any storageclass (did not create it) and removed it from both the pv and the pvc altogether. This time, I get the error below:
no persistent volumes available for this claim and no storage class is set
I also tried setting storageclass to "" in the pvc and not mentioning it in the pv, but that did not work either.
Please help here. What more can I check to get it working?
Could it be related to the nfs server and path (in case they are specified incorrectly), even though I see the pv created successfully?
EDIT1:
One issue was that the accessModes of the pvc was different from that of the pv. I corrected it, and now my pvc is shown as bound.
But even now, I get the following error:
pod has unbound immediate PersistentVolumeClaims
I also tried using a local volume, but got the same error again. The PV and PVC are bound correctly, but the statefulset shows the above error.
When using a hostPath volume, everything works fine.
Is there anything fundamental that I am doing wrong here?
EDIT2
I got the local volume working. It takes some time for the pod to bind to the pvc. After waiting a couple of minutes, my pod got bound to the pvc.
I think the nfs binding issue may be more permission related. But still, k8s should give out some error for that.
Could you try matching the accessModes as well?
The PVC is targeting a ReadWriteOnce volume right now.
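For example, the claim template could request the same mode as the PV from the question (ReadWriteMany, matching nfs-storage-pv; everything else is unchanged):
volumeClaimTemplates:
  - metadata:
      name: beehive-pv-claim
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: manual
      resources:
        requests:
          storage: 1Gi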
And if you mount the nfs volume on the node manually, any access/security issue can be debugged.
I'm in the process of converting a stack to k8s. The database requires persistent storage.
I have used kubectl create -f pv.yaml
pv.yaml (with edits based on #whites11's answer):
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/nfs"
  claimRef:
    kind: PersistentVolumeClaim
    namespace: default
    name: mongo-persisted-storage
I then create an example mongo replica set.
When I look at my k8s dashboard I see the error:
PersistentVolumeClaim is not bound: "mongo-persistent-storage-mongo-0"
(repeated 2 times)
In the persistent volume tab I see the volume which looks ok:
I'm having trouble figuring out the next step to make the volume claim happen successfully.
Edit #2
I went into the PVC page on the GUI and added a volume to the claim manually (based on feedback from #whites11). I can see that the PVC has been updated with the volume but it is still pending.
Edit #3
I realized that since making the change suggested by #whites11, the original error message in the pod has changed. It is now "persistentvolume "pvvolume" not found (repeated 2 times)". I think I just need to figure out where I wrote pvvolume instead of pv-volume (or the - could have been auto-parsed out somewhere).
You need to manually bind your PV to your PVC, by adding the appropriate claimRef section to the PV spec.
In practice, edit your PV with the method you prefer, and add a section similar to this:
claimRef:
  name: mongo-persisted-storage
  namespace: <your PVC namespace>
Then, you need to edit your PVC to bind the correct volume, by adding the following in its spec section:
volumeName: "<your volume name>"
Here is an explanation of how this process works: https://docs.openshift.org/latest/dev_guide/persistent_volumes.html#persistent-volumes-volumes-and-claim-prebinding