It seems I messed something up while deleting the pv and pvc. I can delete the pvc without issues, but I cannot delete the pv, which has pv-protection on it. While deleting the pv/pvc earlier, I pressed CTRL+C because it was taking a long time, and I also deleted the storageclass before deleting the pvc. I don't remember which storage class was used for creating the pvc.
In this post, it says that updating the pvc protection (finalizer) to null will help remove the pvc, but I need to delete the pv, which has pv-protection. Below is the describe output of the pv.
~/github/vault-operator# kubectl describe pv pv-hostpath
Name: pv-hostpath
Labels: type=local
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: manual
Status: Terminating (lasts <invalid>)
Claim: poc-namespace/pvc-hostpath
Reclaim Policy: Retain
Access Modes: RWO
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /kube
HostPathType:
Events: <none>
~/github/vault-operator# kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pv-hostpath 1Gi RWO Retain Terminating poc-namespace/pvc-hostpath manual 11d
The question is: how can I delete the pv that was not deleted properly, and what could my issue be?
In this scenario, you have a PVC (poc-namespace/pvc-hostpath) that is preventing your PV from being deleted. Delete the PVC, and then you can delete the PV.
Generally speaking, the reclaim policy of a dynamically provisioned PV is Delete by default, so when you delete a PVC, the PV bound to it is automatically deleted as well.
Your storageClass was probably this one (or similar) from Rancher. It is hostPath based, meaning it maps your container volume onto the host machine.
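If the PV still hangs in Terminating after its PVC is gone (your describe output shows the kubernetes.io/pv-protection finalizer), a common last-resort workaround is to clear the finalizers on the PV yourself. This bypasses a safety mechanism, so only do it once you are sure nothing is using the volume; a rough sketch:
kubectl patch pv pv-hostpath -p '{"metadata":{"finalizers":null}}'
# or edit the PV and remove the finalizers entries by hand
kubectl edit pv pv-hostpath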
Related
I deployed a PVC, which dynamically created a PV.
After that I deleted the PVC and now my PV looks like below:
PS Kubernetes> kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-1b59942c-eb26-4603-b78e-7054d9418da6 2G RWX Retain Released default/db-pvc hostpath 26h
When I recreate my PVC, that creates a new PV.
Is there a way to reattach the existing PV to my PVC ?
Is there a way to do it automatically ?
I tried to attach the PV with my PVC using "volumeName" option, but it did not work.
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
db-pvc Pending pvc-1b59942c-eb26-4603-b78e-7054d9418da6 0 hostpath 77s
When a PVC is deleted, the PV stays in the "Released" state, with the claimRef still referencing the uid of the deleted PVC.
To reuse a PV, you need to delete the claimRef so that it goes back to the "Available" state.
You may either edit the PV and manually delete the claimRef section, or run the patch command below:
kubectl patch pv pvc-1b59942c-eb26-4603-b78e-7054d9418da6 --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'
Subsequently, you recreate the PVC.
If you are on GKE and your PV still exists, you can recreate the PVC using:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-1b59942c-eb26-4603-b78e-7054d9418da6
spec:
  storageClassName: default
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2G
I have what seems like a straightforward PV and PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: www-pvc
spec:
  storageClassName: ""
  volumeName: www-pv
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: www-pv
spec:
  storageClassName: ""
  claimRef:
    name: www-pvc
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  nfs:
    server: 192.168.1.100
    path: "/www"
For some reason these do not bind to each other and the PVC stays "pending" forever:
$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/www-pv 1Mi ROX Retain Available /www-pvc 107m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/www-pvc Pending www-pv 0 107m
How can I debug the matching? Which service does the matching in k3s? Would I be looking in the log of the k3s binary (running as a service under Debian)?
In the Kubernetes documentation about Persistent Volumes you can find the following information:
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources.
In the Binding section you can find this information:
Claims will remain unbound indefinitely if a matching volume does not exist. Claims will be bound as matching volumes become available. For example, a cluster provisioned with many 50Gi PVs would not match a PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the cluster.
In the OpenShift documentation on Volume and Claim Pre-binding you can find that when you use pre-binding you skip some of the matching.
If you know exactly what PersistentVolume you want your PersistentVolumeClaim to bind to, you can specify the PV in your PVC using the volumeName field. This method skips the normal matching and binding process. The PVC will only be able to bind to a PV that has the same name specified in volumeName. If such a PV with that name exists and is Available, the PV and PVC will be bound regardless of whether the PV satisfies the PVC’s label selector, access modes, and resource requests.
Issue 1
In your PV configuration you set
capacity:
storage: 1Mi
which means that you have a volume with 1Mi of storage, which is ~1.05 MB.
Your PVC was configured to request 1Gi, which is ~1.07 GB.
resources:
requests:
storage: 1Gi
Your PV didn't fulfill your PVC request.
You can have many PVs with, for example, 5Gi of storage, but none of them will be bound if the PVC requests more than 5Gi, such as 6Gi. Conversely, if the PV's storage is larger (6Gi) and the PVC's request is smaller (5Gi), it will be bound, but 1Gi will be wasted.
Issue 2
If you describe your PVC, you will find the warning below:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedBinding 2s (x2 over 17s) persistentvolume-controller volume "www-pv" already bound to a different claim.
In your configuration you are using something called pre-binding, as you have specified volumeName in the PVC and claimRef in the PV.
This example is well described in OpenShift Documentation - Using Persistent Volumes. In your current setup you've used claimRef.name but you didn't specify claimRef.namespace.
$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/www-pv 1Gi ROX Retain Available /www-pvc 4m28s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/www-pvc Pending www-pv 0 4m28s
But when you add claimRef.namespace it will work.
$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/www-pv 1Gi ROX Retain Bound default/www-pvc 7m3s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/www-pvc Bound www-pv 1Gi ROX 7m3s
You should specify the PVC's namespace in your PV's spec.claimRef.namespace, as a PVC is a namespaced resource:
$ kubectl api-resources | grep pv
persistentvolumeclaims pvc true PersistentVolumeClaim
persistentvolumes pv false PersistentVolume
Solution
In your PV change spec.capacity.storage to 1Gi.
In your PV add spec.claimRef.namespace: default, as in the example below:
spec:
  storageClassName: ""
  claimRef:
    name: www-pvc
    namespace: default # added the namespace
  capacity:
    storage: 1Gi # changed the storage size
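For completeness, the full corrected PV (your original manifest with both fixes applied) should look roughly like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: www-pv
spec:
  storageClassName: ""
  claimRef:
    name: www-pvc
    namespace: default
  capacity:
    storage: 1Gi
  accessModes:
    - ReadOnlyMany
  nfs:
    server: 192.168.1.100
    path: "/www"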
Please let me know if you were able to bind PV and PVC.
I think the problem is that the PVC is trying to get a PV of size 1Gi, but your PV is of size 1Mi.
So, the bind is failing. You can fix this by either increasing the PV size or reducing the PVC size.
Use kubectl describe pvc to get more info about events and the reason for the failures.
To further clarify, a PVC is a request for storage, so if you say you need 1Gi of storage in the claim but only provision 1Mi of actual storage, the PVC is going to stay in the Pending state. Based on this, the size defined in the PVC should always be less than or equal to the PV size.
This is an addition to the answers provided above (pv/pvc size correction).
You should make sure you have the nfs-common package installed and that you can mount that nfs export on the node itself.
Since storageClassName is empty in your definition, I can advise looking into
https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
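As a quick sanity check (assuming a Debian-based node and the NFS export from your manifest), you can try mounting the export manually on the node; if this fails, the pod mount will fail too:
sudo apt-get install -y nfs-common
sudo mkdir -p /mnt/nfs-test                         # temporary mount point, any path works
sudo mount -t nfs 192.168.1.100:/www /mnt/nfs-test
ls /mnt/nfs-test                                    # should list the contents of the export
sudo umount /mnt/nfs-test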
The PV size cannot be smaller than the PVC size. In other words, a PVC requesting 1Gi cannot be bound to a PV of only 1Mi. Please update the PV and PVC sizes accordingly.
I am trying to delete a persistent volume in order to start a used Kafka cluster in Kubernetes from scratch. I changed the reclaim policy from Retain to Delete.
But I am not able to delete two of the three volumes:
[yo#machine kafka_k8]$ kubectl describe pv kafka-zk-pv-0
Name: kafka-zk-pv-0
Labels: type=local
StorageClass:
Status: Failed
Claim: kafka-ns/datadir-0-poc-cp-kafka-0
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 500Gi
Message: host_path deleter only supports /tmp/.+ but received provided /mnt/disk/kafka
Source:
Type: HostPath (bare host directory volume)
Path: /mnt/disk/kafka
Events:
{persistentvolume-controller } Warning
VolumeFailedDelete host_path deleter only supports /tmp/.+ but received provided /mnt/disk/kafka
I changed the policy from "Retain" to "Recycle", and the volume can now be reclaimed and recreated:
kubectl patch pv kafka-zk-pv-0 -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}'
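If the end goal is to remove the PV entirely and start from scratch, once the volume is no longer stuck you can delete the claim and the PV; since the host_path deleter refuses paths outside /tmp, you may also need to clean up /mnt/disk/kafka on the node yourself:
kubectl delete pvc datadir-0-poc-cp-kafka-0 -n kafka-ns   # if the claim still exists
kubectl delete pv kafka-zk-pv-0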
I am trying to deploy a persistentvolume for 3 pods to work on, and I want to use the cluster node's storage, i.e. not external storage like an EBS volume that gets spun up.
To achieve the above, I did the following experiments:
1) I applied only the PVC resource defined below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: pv1
  name: pv1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
status: {}
This spins up storage provisioned by the default storageclass, which in my case was a DigitalOcean volume, so it created a 1Gi volume.
2) Created a PV resource and a PVC resource like below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: pv1
  name: pv1
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
status: {}
After this, I see my claim is bound.
pavan#p1:~$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pv1 Bound task-pv-volume 10Gi RWO manual 2m5s
pavan#p1:~$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
task-pv-volume 10Gi RWO Retain Bound default/pv1 manual 118m
pavan#p1:~$ kubectl describe pvc
Name: pv1
Namespace: default
StorageClass: manual
Status: Bound
Volume: task-pv-volume
Labels: io.kompose.service=pv1
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"io.kompose.service":"mo...
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 10Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 28s (x8 over 2m2s) persistentvolume-controller storageclass.storage.k8s.io "manual" not found
Below are my questions that I am hoping to get answers/pointers to:
The above warning says the storage class could not be found; do I need to create one? If so, can you tell me why and how, or give me a pointer? (Somehow this link misses stating that: https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/)
Notice the PV has a storage capacity of 10Gi and the PVC requests 1Gi, but the PVC was still bound with 10Gi capacity? Can't I share the same PV capacity with other PVCs?
For question 2): if I have to create different PVs for different PVCs with the required capacity, do I have to create a storageclass as well? Or use the same storage class and selectors to select the corresponding PV?
I was trying to reproduce all behavior to answer all your questions. However, I don't have access to DigitalOcean, so I tested it on GKE.
The above warning says the storage class could not be found; do I need to create one?
According to the documentation and best practices, it is highly recommended to create a storageclass and later create PVs/PVCs based on it. However, there is something called manual provisioning, which is what you did in this case.
Manual provisioning is when you manually create a PV first, and then a PVC with a matching spec.storageClassName field. Examples:
If you create a PVC without a default storageclass, without a PV, and without the storageClassName parameter (afaik kubeadm does not provide a default storageclass), the PVC will be stuck in Pending with the event: no persistent volumes available for this claim and no storage class is set.
If you create a PVC with a default storageclass set up on the cluster but without the storageClassName parameter, it will be created based on the default storageclass.
If you create a PVC with a storageClassName parameter that has no matching storageclass (somewhere in the cloud, on Minikube, or with kubeadm), the PVC will also be Pending with the warning: storageclass.storage.k8s.io "manual" not found.
However, if you then create a PV with the same storageClassName parameter, it will be bound in a while.
Example:
$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/task-pv-volume 10Gi RWO Retain Available manual 4s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pv1 Pending manual 4m12s
...
kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/task-pv-volume 10Gi RWO Retain Bound default/pv1 manual 9s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pv1 Bound task-pv-volume 10Gi RWO manual 4m17s
The disadvantage of manual provisioning is that you have to create a PV for each PVC. If you use a storageclass, you can just create the PVC.
If so, can you tell me why and how, or give me a pointer?
You can use the documentation examples or check here. As you are using a cloud provider with a default storageclass, you can export it to YAML with:
$ kubectl get sc -o yaml >> storageclass.yaml
If you have more than one, you have to specify which one; the storageclass names can be obtained with $ kubectl get sc.
Later you can refer to the K8s API to customize your storageclass.
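For illustration only, a minimal storageclass named manual for manually provisioned hostPath/local PVs could look like the sketch below; kubernetes.io/no-provisioner tells Kubernetes not to provision volumes dynamically for this class, so it mainly makes the "manual" name resolvable instead of triggering the "not found" warning:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner   # no dynamic provisioning, PVs are created by hand
volumeBindingMode: WaitForFirstConsumer     # optional: delays binding until a pod uses the claim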
Notice the PV has a storage capacity of 10Gi and the PVC requests 1Gi, but the PVC was still bound with 10Gi capacity?
You manually created a PV with 10Gi and the PVC requested 1Gi. As PVCs and PVs bind 1:1, the PVC searched for a PV that meets all of its conditions and bound to it. The PVC (pv1) requested 1Gi and the PV (task-pv-volume) met those requirements, so Kubernetes bound them. Unfortunately, much of the space is wasted in this case.
Can't I share the same PV capacity with other PVCs?
Unfortunately, you cannot bind 2 PVCs to 1 PV, as the relationship between a PVC and a PV is 1:1, but you can configure many pods/deployments to use the same PVC; a minimal example is sketched below.
I can advise you to look at this Stack Overflow case, as it explains the AccessMode specifics very well.
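As a rough sketch (the pod name and image are placeholders), any pod can reference the existing claim like this, and several pods can do so as long as the access mode allows it:
apiVersion: v1
kind: Pod
metadata:
  name: app-using-pv1            # hypothetical pod name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pv1           # the PVC from your question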
If I have to create different PVs for different PVCs with the required capacity, do I have to create a storageclass as well? Or use the same storage class and selectors to select the corresponding PV?
As I mentioned before, if you manually create a PV with a specific size and bind a PVC to it that requests less, the extra space will be wasted. So you have to create the PV and PVC with the same resource request, or let a storageclass adjust the storage based on the PVC request.
Yes, you have to create a storage class (check), but I guess DigitalOcean provides a default storage class; you can check it with:
kubectl get storageclasses
You can share one PV, but only with read-only access; if you need write access for all pods, you have to create multiple PVs (check).
I'm trying to set up a volume to use with Mongo on k8s.
I use kubectl create -f pv.yaml to create the volume.
pv.yaml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pvvolume
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/nfs"
  claimRef:
    kind: PersistentVolumeClaim
    namespace: default
    name: pvvolume
I then deploy this StatefulSet that has pods making PVCs to this volume.
My volume seems to have been created without problem, I'm expecting it to just use the storage of the host node.
When I try to deploy I get the following error:
Unable to mount volumes for pod
"mongo-0_default(2735bc71-5201-11e8-804f-02dffec55fd2)": timeout
expired waiting for volumes to attach/mount for pod
"default"/"mongo-0". list of unattached/unmounted
volumes=[mongo-persistent-storage]
Have I missed a step in setting up my persistent volume?
A persistent volume is just the declaration that some storage is available inside your Kubernetes cluster. There is no binding to your pod at this stage.
Since your pod is deployed through a StatefulSet, there should be in your cluster one or more PersistentVolumeClaims which are the objects that connect a pod with a PersistentVolume.
In order to manually bind a PV to a PVC, you need to edit your PVC by adding the following to its spec section:
volumeName: "<your persistent volume name>"
Here is an explanation of how this process works: https://docs.openshift.org/latest/dev_guide/persistent_volumes.html#persistent-volumes-volumes-and-claim-prebinding
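For illustration, a claim that pre-binds to the pvvolume PV above might look like the sketch below (the size and access mode are assumptions; note that the PV's claimRef only accepts a PVC with this exact name and namespace, so if your StatefulSet generates claims via a volumeClaimTemplate, the generated claim name has to match instead):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvvolume                 # must match claimRef.name in the PV
  namespace: default             # must match claimRef.namespace in the PV
spec:
  storageClassName: standard
  volumeName: pvvolume           # pre-binds this claim to the existing PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi              # assumed to match the PV's capacity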
My case is an edge case, and I doubt that you will reach it. However, I will describe it because it cost me a lot of grey hairs, and maybe it will save yours.
The same error occurred for me despite the PV and PVC being bound. The pod was constantly in the ContainerCreating state, yet kubectl get events threw the error asked about in this question.
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
sewage-db 5Ti RWO Retain Bound global-sewage/sewage-db nfs 3h40m
$ kubectl get pvc -n global-sewage
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
sewage-db Bound sewage-db 5Ti RWO nfs 3h39m
After rebooting the server, it turned out that one of the 32GiB RAM modules was physically corrupted. Removing the faulty memory fixed the issue.