Local PersistentVolumeClaim says "no volume plugin matched" - kubernetes

I recently started exploring Kubernetes and decided to try to deploy Kafka to k8s. However, I have a problem with creating the persistent volume. I create a storage class and a persistent volume, but the persistent volume claims stay in status Pending, saying "no volume plugin matched". These are the yaml files I used, with the dashed lines denoting a new file. Does anybody have an idea why this is happening?
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
reclaimPolicy: Retain
------------------------
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - docker-desktop
---------------------------
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-storage

As MaggieO said, changing ReadWriteMany to ReadWriteOnce was part of the problem. The other part was that I had to create the /mnt/disks/ssd1 folder on my C: drive manually and write "path: /c/mnt/disks/ssd1" instead. Something that is not present in my example, but that I was trying to do anyway and might be helpful to others: I was trying to have two PVCs for one PV, which is impossible. The PV-to-PVC relationship is 1 to 1.
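For reference, a minimal sketch of the adjusted PV (same names as above, with only the path changed as described; the folder has to exist on the host first):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    # directory created manually on the C: drive, exposed to the node as /c/mnt/disks/ssd1
    path: /c/mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - docker-desktop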

Your Persistent Volume Claim configuration file should look like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-storage
Just change the access mode from ReadWriteMany to ReadWriteOnce.
Let me know if it helped.

You have to bind the PersistentVolumeClaim to your PersistentVolume.
As you have mentioned, the PVC has storageClassName: local-storage.
Try it with storageClassName: kafka-pv so that your PVC gets bound to the PV.

In my case, it was caused by the PV's storage size; just increase the capacity of the PV.
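The relevant rule, to my knowledge: a claim can only bind to a PV whose capacity is at least as large as the request, so the two fields below have to be consistent. A minimal sketch of the two sides:

# on the PersistentVolume
spec:
  capacity:
    storage: 10Gi   # must be >= the claim's request

# on the PersistentVolumeClaim
spec:
  resources:
    requests:
      storage: 5Gi  # must be <= the PV's capacity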

From our experience (on a security-hardened Kubernetes distribution called OpenShift 3.11), this provisioning error happens most often when creating PVCs is not possible because the cluster has run out of available PVs.
So the cluster admin needs to add some PVs, and/or we need to release some other PVCs (unused but still bound to PVs).
Having requested an incorrect PVC capacity is another possible reason (when the cluster is poorly configured and accepts only a single "magic number" for capacity).
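A quick way to check for this situation with plain kubectl (nothing distribution-specific assumed):

# list all PVs and their status; only Available PVs can satisfy new claims
kubectl get pv
# show why a particular claim is stuck in Pending (see the Events section)
kubectl describe pvc <claim-name>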

Related

kubernetes ignoring persistentvolume

I have created a persistent volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "C:/Users/xxx/Desktop/pv"
And I want my MySQL StatefulSet pods to save their data on it.
So I wrote the volumeClaimTemplates:
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 1Gi
I thought this would request the persistent storage from the only persistent volume I have, but instead the persistent volume is ignored.
A StatefulSet requires you to use storage classes in order to bind the correct PVs with the correct PVCs.
The correct way to make a StatefulSet mount local storage is by using the local type of volume; take a look at the procedure below.
First, you create a storage class for the local volumes. Something like the following:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
It uses no-provisioner, so it will not be able to automatically provision PVs; you'll need to create them manually, but that's exactly what you want for local storage.
Second, you create your local PV, something as the following:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: "C:/Users/xxx/Desktop/pv"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - the-node-hostname-on-which-the-storage-is-located
This definition specifies the local path on the node, but it also forces the PV to be used on a specific node (the one matching the nodeSelectorTerms).
It also links this PV to the storage class created earlier. This means that if a StatefulSet requests storage with that storage class, it will receive this disk (provided the requested space is less than or equal to its capacity, of course).
Third, you can now link the StatefulSet:
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: "local-storage"
    resources:
      requests:
        storage: 5Gi
When the StatefulSet Pod needs to be scheduled for the first time, the following will happen:
A PVC will be created and bound to the PV you just created.
The Pod will be scheduled to run on the node to which the bound PV is restricted.
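You can verify both effects with plain kubectl once the Pod is up (a hedged check: a claim created from volumeClaimTemplates normally gets a name of the form data-<statefulset-name>-0):

# the generated PVC should show STATUS Bound and the pv-volume it is bound to
kubectl get pvc
# the Pod should be placed on the node named in the PV's nodeAffinity
kubectl get pod -o wide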
UPDATE:
In case you want to use hostPath storage instead of local storage (for example because you are on minikube, where hostPath is supported out of the box, so it's easier), you need to change the PV declaration a bit, to something like the following:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 5Gi
  hostPath:
    path: /data/pv0001/
Now, the /data directory and all of its contents are persisted on the host (so if minikube gets restarted, they are still there), but if you want to mount specific directories of your host, you need to use minikube mount, for example:
minikube mount <source directory>:<target directory>
For example, you could do:
minikube mount C:/Users/xxx/Desktop/pv:/host/my-special-pv
and then you could use /host/my-special-pv as the hostPath inside the PV declaration.
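For instance, a sketch of the PV using that mounted directory (assuming the minikube mount command above is left running, since the mount only exists while it runs):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 5Gi
  hostPath:
    # directory exposed inside the minikube VM by `minikube mount`
    path: /host/my-special-pv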
More info can be read in the docs.

Unable to setup couchbase operator 1.2 with persistent volume on local storage class

I am trying to set up Couchbase Operator 1.2 on my local system.
I followed these steps:
Install the Couchbase Admission Controller.
Deploy the Couchbase Autonomous Operator.
Deploy the Couchbase Cluster.
Access CouchBase from UI.
But the problem with this is that as soon as the system or Docker restarts, or the pod restarts, the cluster's data is lost.
So I tried adding a persistent volume with a local storage class as mentioned in the docs, but the result was still the same: the pod still resets, and I am unable to find the reason why.
So, can anyone advise on how to do this with a persistent volume on a local storage class? I have successfully created a storage class; I'm just having a problem getting the cluster up and keeping its data consistent.
Here are the yamls that I used to create the storage class, the PV, and the PV claim:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: myssd
provisioner: local
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: couchbase-data-2
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: myssd
  hostPath:
    path: "/home/<user>/cb-storage/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-test-claim-2
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: myssd
  resources:
    requests:
      storage: 1Gi
Thanks in advance
Persistent volume using hostPath is not durable. Use a local volume. Compared to hostPath volumes, local volumes can be used in a durable and portable manner without manually scheduling Pods to nodes, as the system is aware of the volume's node constraints by looking at the node affinity on the PersistentVolume.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: couchbase-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/<User>/cb-storage/
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1
          - node2
          - node3
          - node4
You don't need to create a PersistentVolume manually because the storage class will do that internally.
Also you need to configure the local volume provisioner as discussed here so that dynamic provisioning using the local storage class happens.
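For illustration, once matching PVs exist for that class (whether created by the local volume provisioner or by hand), a claim such as the sketch below would bind to one of them; the claim name here is hypothetical, and because the class uses volumeBindingMode: WaitForFirstConsumer the claim stays Pending until a Pod that uses it is scheduled:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: couchbase-data-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 10Gi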

Can I combine StorageClass with PersistentVolume in GKE?

I'm fairly new to Kubernetes and find it difficult to get this working from the documentation. The Kubernetes docs say that a StorageClass contains the fields provisioner, parameters, and reclaimPolicy, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned. However, can I use a StorageClass with a PV (not dynamic allocation) to specify high-performance disk allocation such as SSD?
Without a StorageClass it worked fine for me.
The following is my manifest:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gke-pv
  labels:
    app: test
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: gce-disk
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gke-pvc
  labels:
    app: test
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ssd-sc
  resources:
    requests:
      storage: 2Gi
  selector:
    matchLabels:
      app: test
You need a storage class if the storage needs to be provisioned dynamically.
If you are provisioning persistent volumes yourself, it is called static storage provisioning; you don't need a storage class in that scenario.
The problem here is that statically provisioned PersistentVolumes don't have a StorageClass. However, GKE clusters are created with a standard StorageClass which is the default, so the PVC falls back to it and tries to dynamically allocate a disk.
The solution is to have the PVC request an empty storage class, which forces it to look at the statically provisioned PVs.
So you'd use a sequence like this to create a PV and then get it bound to a PVC:
Manually provision the ssd:
gcloud compute disks create --size=10Gi --zone=[YOUR ZONE] --type=pd-ssd already-created-ssd-disk
Then apply a PV object that uses the statically provisioned disk, like so:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ssd-for-k8s-volume
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: already-created-ssd-disk
    fsType: ext4
Then, you can claim it with a PVC like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ssd-demo
spec:
  storageClassName: ""
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
You could also use labels to refine which PVs are selected, of course, for example if you have some that are SSD and others that are regular spinning metal.
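A hedged sketch of that label-based refinement, reusing the objects above (the disktype label key is arbitrary and only for illustration):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ssd-for-k8s-volume
  labels:
    disktype: ssd
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: already-created-ssd-disk
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ssd-demo
spec:
  storageClassName: ""
  accessModes:
  - ReadWriteOnce
  selector:
    matchLabels:
      disktype: ssd
  resources:
    requests:
      storage: 10Gi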
Note that the idea of using a StorageClass for static provisioning isn't really the right thing, since StorageClass is tied to how you describe storage for dynamic provisioning.

Getting an error when trying to create a persistent volume

I am trying to create a persistent volume on my kubernetes cluster running on an Amazon AWS EC2 instance (Ubuntu 18.04). I'm getting an error from kubectl when trying to create it.
I've tried looking up the error but I'm not getting any satisfactory search results.
Here is the pv.yaml file that I'm using.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
    storageClassName: manual
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /home/ubuntu/data/pv001
Here's the error that I am getting:
Error from server (BadRequest): error when creating "./mysql-pv.yaml":
PersistentVolume in version "v1" cannot be handled as a
PersistentVolume: v1.PersistentVolume.Spec:
v1.PersistentVolumeSpec.PersistentVolumeSource: HostPath: Capacity:
unmarshalerDecoder: quantities must match the regular expression
'^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$', error found in #10 byte
of ...|":"manual"},"hostPat|..., bigger context ...|city":
{"storage":"1Gi","storageClassName":"manual"},"hostPath":
{"path":"/home/ubuntu/data/pv001"},"p|...
I cannot figure out from the message what the actual error is.
Any help appreciated.
Remove the storage class from the PV definition. A storage class is needed for dynamic provisioning of PVs.
In your case, you are using hostPath volumes; it should work without a storage class.
If you are on k8s 1.14, then look at local volumes; refer to the link below:
https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/
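For illustration, a sketch of the PV from the question with the storage class simply removed, per the suggestion above:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /home/ubuntu/data/pv001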
I don't think it's related to having quotes in the path. It's more about using the right indentation for storageClassName. It should be this instead:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /home/ubuntu/data/pv001
You can remove it too, and it will use the default StorageClass
Try this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  storageClassName: manual
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /home/ubuntu/data/pv001
storageClassName is under spec, at the same level as capacity (you put storageClassName under capacity, which is wrong).
Read more: https://kubernetes.io/docs/concepts/storage/persistent-volumes/

PersistentVolumeClaim Pending for NFS Volume

What specific changes need to be made to the yaml below in order to get the PersistentVolumeClaim to bind to the PersistentVolume?
An EC2 instance in the same VPC subnet as the Kubernetes worker nodes has an IP of 10.0.0.112 and has been configured to act as an NFS server on the /nfsfileshare path.
Creating the PersistentVolume
We created a PersistentVolume pv01 with pv-volume-network.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    path: "/nfsfileshare"
    server: "10.0.0.112"
and by typing:
kubectl create -f pv-volume-network.yaml
Then when we type kubectl get pv pv01, the pv01 PersistentVolume shows a STATUS of "Available".
Creating the PersistentVolumeClaim
Then we created a PersistentVolumeClaim named my-pv-claim with pv-claim.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
And by typing:
kubectl create -f pv-claim.yaml
STATUS is Pending
But then when we type kubectl get pvc my-pv-claim, we see that the STATUS is Pending. The STATUS remains as Pending for as long as we continued to check back.
Note this OP is different than this other question, because this problem persists even with quotes around the NFS IP and the path.
Why is this PVC not binding to the PV? What specific changes need to be made to resolve this?
I diagnosed the problem by typing kubectl describe pvc my-pv-claim and looking in the Events section of the results.
Then, based on the reported Events, I was able to fix this by changing storageClassName: manual to storageClassName: slow.
The problem was that the PVC's storageClassName did not match the storage class specified in the PV, which is required for the claim to bind.
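For completeness, a sketch of pv-claim.yaml with only that one line changed, as described above:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pv-claim
spec:
  storageClassName: slow
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi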