I'm trying to set up a volume to use with MongoDB on Kubernetes.
I use kubectl create -f pv.yaml to create the volume.
pv.yaml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pvvolume
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/nfs"
  claimRef:
    kind: PersistentVolumeClaim
    namespace: default
    name: pvvolume
I then deploy this StatefulSet that has pods making PVCs to this volume.
My volume seems to have been created without a problem; I'm expecting it to simply use the storage of the host node.
When I try to deploy I get the following error:
Unable to mount volumes for pod
"mongo-0_default(2735bc71-5201-11e8-804f-02dffec55fd2)": timeout
expired waiting for volumes to attach/mount for pod
"default"/"mongo-0". list of unattached/unmounted
volumes=[mongo-persistent-storage]
Have I missed a step in setting up my persistent volume?
A persistent volume is just the declaration of availability of some storage inside your kubernetes cluster. There is no binding with your pod at this stage.
Since your pod is deployed through a StatefulSet, there should be in your cluster one or more PersistentVolumeClaims which are the objects that connect a pod with a PersistentVolume.
In order to manually bind a PV with a PVC you need to edit your PVC by adding the following in its spec section:
volumeName: "<your persistent volume name>"
Here is an explanation of how this process works: https://docs.openshift.org/latest/dev_guide/persistent_volumes.html#persistent-volumes-volumes-and-claim-prebinding
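For illustration, a standalone PVC that pre-binds to the PV from the question might look like the sketch below; the name, namespace, storage class, size and access mode are taken from your pv.yaml, and if the claim actually comes from a StatefulSet's volumeClaimTemplates the generated claim name will differ, so adjust accordingly:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvvolume
  namespace: default
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: "pvvolume"   # pre-binds this claim to the PV named pvvolume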
Mine is an edge case, and I doubt you will hit it. However, I will describe it because it cost me a lot of grey hairs, and maybe it will save yours.
This same error occurred for me despite the PV and PVC being bound. The pod was constantly in the ContainerCreating state, yet kubectl get events threw the error asked about in this question.
$ kubectl get pv
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS   REASON   AGE
sewage-db   5Ti        RWO            Retain           Bound    global-sewage/sewage-db   nfs                     3h40m

$ kubectl get pvc -n global-sewage
NAME        STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
sewage-db   Bound    sewage-db   5Ti        RWO            nfs            3h39m
After rebooting the server it turned out that one of the 32 GiB RAM modules was corrupted. Removing the faulty memory fixed the issue.
Related
I have a Kubernetes cron job in AWS EKS that requires a persistent volume, so this is roughly what I have:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-{{$.Release.Name}}-tmp
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi
Then it's mounted to a CronJob (the mount part is correct, as described below).
Everything is deployed with Helm, and a fresh deployment times out because the PVC remains in the Pending state with the message waiting for the first consumer to be created before binding. If, during the deployment, I create a new Job based on the CronJob, the PVC is immediately bound and this and all subsequent deployments work as expected.
Is it possible either to make a PVC bind "eagerly", without a pod that requires it, or, preferably, not to wait for it to get bound during the chart installation?
What storage class are you using? A storage class has a volumeBindingMode attribute that controls when a PV is dynamically created and bound.
volumeBindingMode can be either Immediate or WaitForFirstConsumer.
To check the storage class you can run kubectl get storageclass or kubectl describe storageclass. The default storage class is used if none is specified in the PVC definition.
References:
https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode
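As a sketch, a StorageClass that binds claims immediately rather than waiting for the first consumer could look like this; the class name is made up, and the provisioner assumes the EBS CSI driver is installed on your EKS cluster (swap in whatever provisioner your cluster actually uses):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-immediate            # hypothetical name
provisioner: ebs.csi.aws.com     # assumes the EBS CSI driver; use your cluster's provisioner
volumeBindingMode: Immediate     # bind the PVC as soon as it is created
reclaimPolicy: Delete

Your PVC would then reference storageClassName: ebs-immediate instead of relying on the default class. Keep in mind that with Immediate binding the volume's availability zone is chosen before the pod is scheduled, which is the main reason WaitForFirstConsumer is the usual default for EBS-backed classes.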
I have a Kubernetes cluster in which I am facing issues while mounting an existing volume to the pod in a new deployment. I have existing deployments where I mount the same existing PV and PVCs, but I am facing issues only with the new deployment.
What could be the reason? How can I mount the (NFS) volume to the new deployment, given that the PV and PVC statuses are Bound and claimed, respectively?
Ideally you cannot, if your access mode is set to ReadWriteOnce.
If you are planning to use NFS and want to attach multiple pods to a single mount, you have to use ReadWriteMany.
Example:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: nfs
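For completeness, a statically provisioned NFS PV that such a claim could bind to might look like the sketch below; the server address and export path are placeholders, not values from your cluster:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-data-pv              # illustrative name
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany              # lets multiple pods mount the volume
  storageClassName: nfs
  nfs:
    server: 10.0.0.10            # placeholder NFS server
    path: /exports/data          # placeholder export path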
A PersistentVolumeClaim (PVC) is a request for storage by a user. It
is similar to a Pod. Pods consume node resources and PVCs consume PV
resources. Pods can request specific levels of resources (CPU and
Memory). Claims can request specific size and access modes (e.g., they
can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see
AccessModes).
Access modes: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
GKE example: https://medium.com/platformer-blog/nfs-persistent-volumes-with-kubernetes-a-case-study-ce1ed6e2c266
I have what seems like a straightforward PV and PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: www-pvc
spec:
  storageClassName: ""
  volumeName: www-pv
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: www-pv
spec:
  storageClassName: ""
  claimRef:
    name: www-pvc
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  nfs:
    server: 192.168.1.100
    path: "/www"
For some reason these do not bind to each other and the PVC stays "pending" forever:
$ kubectl get pv,pvc
NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM      STORAGECLASS   REASON   AGE
persistentvolume/www-pv   1Mi        ROX            Retain           Available   /www-pvc                           107m

NAME                            STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/www-pvc   Pending   www-pv   0                                        107m
How can I debug the matching? Which service does the matching in k3s? Would I be looking in the log of the k3s binary (running as a service under Debian)?
In the Kubernetes documentation about Persistent Volumes you can find the following information:
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources.
In the Binding section you will find this information:
Claims will remain unbound indefinitely if a matching volume does not exist. Claims will be bound as matching volumes become available. For example, a cluster provisioned with many 50Gi PVs would not match a PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the cluster.
In the OpenShift documentation - Volume and Claim Pre-binding you can find information that when you use pre-binding you skip some of the matching checks.
If you know exactly what PersistentVolume you want your PersistentVolumeClaim to bind to, you can specify the PV in your PVC using the volumeName field. This method skips the normal matching and binding process. The PVC will only be able to bind to a PV that has the same name specified in volumeName. If such a PV with that name exists and is Available, the PV and PVC will be bound regardless of whether the PV satisfies the PVC’s label selector, access modes, and resource requests.
Issue 1
In your PV configuration you set
capacity:
  storage: 1Mi
which means that your PV offers 1Mi of storage, which is roughly 1.05 MB.
Your PVC was configured to request 1Gi, which is roughly 1.07 GB:
resources:
  requests:
    storage: 1Gi
Your PV didn't fulfill your PVC request.
You can have many PVs with, for example, 5Gi of storage, but none of them will be bound if the PVC requests more than 5Gi, such as 6Gi. Conversely, if the PV offers 6Gi and the PVC requests less, such as 5Gi, it will be bound, although 1Gi is effectively wasted.
Issue 2
If you describe your PVC you will find the warning below:
Events:
  Type     Reason         Age               From                         Message
  ----     ------         ----              ----                         -------
  Warning  FailedBinding  2s (x2 over 17s)  persistentvolume-controller  volume "www-pv" already bound to a different claim.
In your configuration you are using something called Pre-Binding as you have specified volumeName in PVC and claimRef in PV.
This example is well described in OpenShift Documentation - Using Persistent Volumes. In your current setup you've used claimRef.name but you didn't specify claimRef.namespace.
$ kubectl get pv,pvc
NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM      STORAGECLASS   REASON   AGE
persistentvolume/www-pv   1Gi        ROX            Retain           Available   /www-pvc                           4m28s

NAME                            STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/www-pvc   Pending   www-pv   0                                        4m28s
But when you add claimRef.namespace it will work.
$ kubectl get pv,pvc
NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
persistentvolume/www-pv   1Gi        ROX            Retain           Bound    default/www-pvc                           7m3s

NAME                            STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/www-pvc   Bound    www-pv   1Gi        ROX                           7m3s
You should specify the PVC's namespace in your PV's spec.claimRef.namespace, as a PVC is a namespaced resource.
$ kubectl api-resources | grep pv
persistentvolumeclaims   pvc   true    PersistentVolumeClaim
persistentvolumes        pv    false   PersistentVolume
Solution
In your PV change spec.capacity.storage to 1Gi.
In your PV add spec.claimRef.namespace: default like on the example below:
spec:
  storageClassName: ""
  claimRef:
    name: www-pvc
    namespace: default   # added namespace: default
  capacity:
    storage: 1Gi         # changed storage size
Please let me know if you were able to bind PV and PVC.
I think the problem is that the PVC is requesting a volume of size 1Gi but your PV is only 1Mi.
So, the bind is failing. You can fix this by either increasing the PV size or reducing the PVC size.
Use kubectl describe pvc to get more info about events and the reason for the failures.
To further clarify, a PVC is a request for storage, so if you say you need 1Gi in the claim but you only provision 1Mi of actual storage, the PVC is going to stay in the Pending state. Based on this, the size defined in the PVC should always be less than or equal to the PV size.
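For example, to inspect the claim from this question (a minimal usage sketch; add -n <namespace> if the claim is not in default):

kubectl describe pvc www-pvc

The Events section at the bottom of the output shows why the claim is not binding.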
This is an addition to the answers provided above (PV/PVC size correction).
You should make sure you have the nfs-common package installed and that you can mount that NFS export on the node itself.
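For instance, on a Debian-based node this is roughly how you would verify it (server and path taken from the PV above; the mount point is arbitrary):

# install the NFS client utilities (Debian/Ubuntu)
sudo apt-get install -y nfs-common

# try mounting the export manually to rule out network or permission problems
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs 192.168.1.100:/www /mnt/nfs-test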
Since storageClassName is empty in your definition, I can also advise looking into
https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
The PV size cannot be smaller than the PVC size.
In other words, a PVC requesting 1 GB cannot bind to a PV that offers only 1 MB.
Please update the PV and PVC sizes accordingly.
I know there are lots of discussions around this topic but somehow I cannot get it working.
I am trying to install an Elasticsearch cluster with a StatefulSet and an NFS persistent volume on bare metal. My PV, PVC and SC configs are as below:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-storage-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  nfs:
    server: 172.23.240.85
    path: /servers/scratch50g/vishalg/kube
The StatefulSet has the following PVC section defined:
volumeClaimTemplates:
  - metadata:
      name: beehive-pv-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: manual
      resources:
        requests:
          storage: 1Gi
Now, when I try to deploy it, I get the following error on statefulset:
pod has unbound immediate PersistentVolumeClaims
When I get the pvc events, it shows:
Warning ProvisioningFailed 3s (x2 over 12s) persistentvolume-controller no volume plugin matched
I tried not specifying any storage class (I did not create it) and removed it from both the PV and PVC altogether. This time, I get the error below:
no persistent volumes available for this claim and no storage class is set
I also tried setting the storage class as "" in the PVC and not mentioning it in the PV, but that did not work either.
Please help here. What more can I check to get it working?
Could it be related to the NFS server and path (in case they are specified incorrectly), even though I see the PV created successfully?
EDIT1:
One issue was that the access mode of the PVC was different from the access mode of the PV. I corrected it and now my PVC is shown as Bound.
But even now, I get the following error:
pod has unbound immediate PersistentVolumeClaims
I tried using a local volume as well, but got the same error again. The PV and PVC are bound correctly, but the StatefulSet shows the above error.
When using a hostPath volume, everything works fine.
Is there anything fundamental that I am doing wrong here?
EDIT2
I got the local volume working. It takes some time for the pod to bind to the PVC. After waiting for a couple of minutes, my pod got bound to the PVC.
I think the NFS binding issue may be more permission related, but k8s should still give out some error for it.
Could you try matching the accessModes as well?
The PVC is targeting a ReadWriteOnce volume right now.
And if you mount the NFS volume on the node manually, any access/security issue can be debugged.
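For illustration, the claim template with the access mode changed to match the PV would look roughly like this (only the accessModes value differs from the original):

volumeClaimTemplates:
  - metadata:
      name: beehive-pv-claim
    spec:
      accessModes: [ "ReadWriteMany" ]   # now matches the ReadWriteMany PV
      storageClassName: manual
      resources:
        requests:
          storage: 1Gi

To debug NFS access directly, you could also try something like sudo mount -t nfs 172.23.240.85:/servers/scratch50g/vishalg/kube /mnt/test on the node (server and path taken from the PV above).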
Use case:
I have an NFS directory available and I want to use it to persist data for multiple deployments and pods.
I have created a PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: http://mynfs.com
    path: /server/mount/point
I want multiple deployments to be able to use this PersistentVolume, so my understanding of what is needed is that I need to create multiple PersistentVolumeClaims which will all point at this PersistentVolume.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pvc-1
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Mi
I believe this creates a 50Mi claim on the PersistentVolume. When I run kubectl get pvc, I see:
NAME        STATUS   VOLUME   CAPACITY   ACCESSMODES   AGE
nfs-pvc-1   Bound    nfs-pv   10Gi       RWX           35s
I don't understand why I see 10Gi capacity, not 50Mi.
When I then change the PersistentVolumeClaim deployment yaml to create a PVC named nfs-pvc-2 I get this:
NAME        STATUS    VOLUME   CAPACITY   ACCESSMODES   AGE
nfs-pvc-1   Bound     nfs-pv   10Gi       RWX           35s
nfs-pvc-2   Pending                                     10s
PVC2 never binds to the PV. Is this expected behaviour? Can I have multiple PVCs pointing at the same PV?
When I delete nfs-pvc-1, I see the same thing:
NAME        STATUS    VOLUME   CAPACITY   ACCESSMODES   AGE
nfs-pvc-2   Pending                                     10s
Again, is this normal?
What is the appropriate way to use/re-use a shared NFS resource between multiple deployments / pods?
Basically you can't do what you want, as the relationship between PVC and PV is one-to-one.
If NFS is the only storage you have available and would like multiple PV/PVC on one nfs export, use Dynamic Provisioning and a default storage class.
It's not in official K8s yet, but this one is in the incubator and I've tried it and it works well: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client
This will enormously simplify your volume provisioning as you only need to take care of the PVC, and the PV will be created as a directory on the nfs export / server that you have defined.
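With such a provisioner in place, each consumer only needs a PVC against the provisioner's storage class; a rough sketch, assuming the provisioner registered a class named nfs-client (check your deployment for the actual name):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pvc-1
spec:
  storageClassName: nfs-client   # whatever class your NFS provisioner created
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Mi

The provisioner then carves out a dedicated directory on the NFS export and creates a matching PV for each such claim.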
From: https://docs.openshift.org/latest/install_config/storage_examples/shared_storage.html
As Baroudi Safwen mentioned, you cannot bind two pvc to the same pv, but you can use the same pvc in two different pods.
volumes:
  - name: nfsvol-2
    persistentVolumeClaim:
      claimName: nfs-pvc-1   # use this same claim in both pods
A persistent volume claim is exclusively bound to a persistent volume.
You cannot bind two PVCs to the same PV. I guess you are interested in dynamic provisioning. I faced this issue when I was deploying StatefulSets, which require dynamic provisioning for their pods. You need to deploy an NFS provisioner in your cluster; the NFS provisioner (pod) will have access to the NFS folder (hostPath), and each time a pod requests a volume, the NFS provisioner will mount it in the NFS directory on behalf of the pod. Here is the GitHub repository to deploy it:
https://github.com/kubernetes-incubator/external-storage/tree/master/nfs/deploy/kubernetes
You have to be careful, though: you must ensure the NFS provisioner always runs on the same machine where you have the NFS folder, by making use of a node selector, since the volume is of type hostPath.
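A minimal sketch of that pinning, assuming you label the node that hosts the NFS folder with something like nfs-host=true (the label name is illustrative):

# excerpt from the NFS provisioner Deployment's pod template
spec:
  template:
    spec:
      nodeSelector:
        nfs-host: "true"   # hypothetical label on the node that holds the NFS folder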
For my future-self and everyone else looking for the official documentation:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#binding
Once bound, PersistentVolumeClaim binds are exclusive, regardless of
how they were bound. A PVC to PV binding is a one-to-one mapping,
using a ClaimRef which is a bi-directional binding between the
PersistentVolume and the PersistentVolumeClaim.
A few points on dynamic provisioning:
Using dynamic provisioning of NFS prevents you from changing any of the default NFS mount options. On my platform this uses an rsize/wsize of 1M, which can cause huge problems in some applications that use small files or block reading (I've just hit this issue in a big way).
Dynamic provisioning is a great option if it suits your needs. I'm now stuck creating 250 PV/PVC pairs for my application, which used to be handled by dynamic provisioning, because of the one-to-one relationship.