Getting an error when trying to create a persistent volume - kubernetes

I am trying to create a persistent volume on my kubernetes cluster running on an Amazon AWS EC2 instance (Ubuntu 18.04). I'm getting an error from kubectl when trying to create it.
I've tried looking up the error but I'm not getting any satisfactory search results.
Here is the pv.yaml file that I'm using.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
    storageClassName: manual
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /home/ubuntu/data/pv001
Here's the error that I am getting:
Error from server (BadRequest): error when creating "./mysql-pv.yaml":
PersistentVolume in version "v1" cannot be handled as a
PersistentVolume: v1.PersistentVolume.Spec:
v1.PersistentVolumeSpec.PersistentVolumeSource: HostPath: Capacity:
unmarshalerDecoder: quantities must match the regular expression
'^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$', error found in #10 byte
of ...|":"manual"},"hostPat|..., bigger context ...|city":
{"storage":"1Gi","storageClassName":"manual"},"hostPath":
{"path":"/home/ubuntu/data/pv001"},"p|...
I cannot figure out from the message what the actual error is.
Any help appreciated.

Remove the storage class from the PV definition. A storage class is needed for dynamic provisioning of PVs; in your case you are using hostPath volumes, so it should work without a storage class.
If you are on k8s 1.14, look at local volumes. Refer to the link below:
https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/
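For reference, a minimal sketch of the PV from the question with the storage class removed and the indentation fixed (the name and path are taken from the question):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /home/ubuntu/data/pv001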

I don't think it's related to having quotes in the path. It's more about using the right indentation for storageClassName. It should be this instead:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /home/ubuntu/data/pv001
You can also remove it entirely, and the default StorageClass will be used.
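If you go that route, a quick way to see which StorageClass is the default (assuming kubectl is pointed at the cluster):
kubectl get storageclass
# The default class is marked "(default)" next to its name.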

Try this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  storageClassName: manual
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /home/ubuntu/data/pv001
storageClassName goes under spec, at the same level as capacity (you had placed storageClassName under capacity, which is wrong).
Read more: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
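Once the indentation is corrected, applying and checking the PV could look like this (a sketch; the file name is assumed to match the one in the error message):
kubectl apply -f mysql-pv.yaml
kubectl get pv pv001
# STATUS should show Available until a claim binds it.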

Related

Cloned drive setup as one pv (or two mirroring pv's)

I'm running a k3s cluster in two different locations. I'm currently running a PV in one of these locations and I'm trying to develop a configuration that can be read as one PV but clone/mirror that drive to another location, all of this through k3s PV and PVC. Any clever ideas on how to achieve this?
My PV and PVC looks like this:
# pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-data-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 32Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/srv/dev-disk-by-uuid-********-****-****-****-************/kubernetes"
# pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 12Gi

kubernetes ignoring persistentvolume

I have created a persistent volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "C:/Users/xxx/Desktop/pv"
I want the MySQL StatefulSet pods to save their data on it, so I wrote this volumeClaimTemplate:
volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
I thought this would request the persistent storage from the only persistent volume I have, but that is not what happens.
StatefulSets require you to use storage classes in order to bind the correct PVs to the correct PVCs.
The correct way to make a StatefulSet mount local storage is to use local volumes; take a look at the procedure below.
First, you create a storage class for the local volumes, something like the following:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
It uses no-provisioner, so it will not be able to provision PVs automatically; you'll need to create them manually, but that's exactly what you want for local storage.
Second, you create your local PV, something like the following:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: "C:/Users/xxx/Desktop/pv"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - the-node-hostname-on-which-the-storage-is-located
This definition specifies the local path on the node, but it also forces the PV to be used only on a specific node (the one matching the nodeSelectorTerms).
It also links this PV to the storage class created earlier. This means that if a StatefulSet requires storage with that storage class, it will receive this disk (provided the requested space is less than or equal to its capacity, of course).
Third, you can now link the StatefulSet:
volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "local-storage"
      resources:
        requests:
          storage: 5Gi
When the StatefulSet Pod needs to be scheduled for the first time, the following happens:
A PVC is created and becomes Bound to the PV you just created
The Pod is scheduled to run on the node to which the bound PV is restricted
You can verify both steps as sketched below.
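A quick check of both (a sketch; the PVC name generated by a StatefulSet follows the pattern <template-name>-<statefulset-name>-<ordinal>):
kubectl get pvc            # the generated claim should show STATUS=Bound and VOLUME=pv-volume
kubectl get pod -o wide    # the NODE column should match the hostname from the PV's nodeAffinity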
UPDATE:
In case you want to use hostPath storage instead of local storage (for example because you are on minikube and hostPath is supported out of the box, so it's easier), you need to change the PV declaration a bit, something like the following:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  hostPath:
    path: /data/pv0001/
Now the /data directory and all of its contents are persisted on the host (so if minikube is restarted, they are still there), but if you want to mount specific directories of your host, you need to use minikube mount, for example:
minikube mount <source directory>:<target directory>
For example, you could do:
minikube mount C:/Users/xxx/Desktop/pv:/host/my-special-pv
and then you could use /host/my-special-pv as the hostPath inside the PV declaration.
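In that case, the hostPath in the PV would simply point at the mount target; a sketch reusing the names from above:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  hostPath:
    path: /host/my-special-pv   # the target of the minikube mount above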
More info can be read in the docs.

Kubernetes persistent volume not allowing read or write

I'm following this tutorial to create a jenkins server on my Kubernetes server.
I've got a volume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1000Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/t1/kubernetes/vol/"
and a volume claim
---
# PersistentVolumeClaim for Jenkins
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
  namespace: ns-jenkins # PVs are not scoped to any namespace, but a PVC is associated with a namespace
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
If I navigate to the mounted location inside Jenkins and run touch test, I get touch: cannot touch 'test': Permission denied
User looks right.
$ id
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins)
I've even gone so far as to give the host folder 777 permissions, and still no luck. What's going on?
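Not from the original thread, but two quick checks that usually narrow this kind of problem down (the pod name and the in-container mount point are placeholders/assumptions):
# Confirm the claim really bound to jenkins-pv and not to some other volume:
kubectl -n ns-jenkins get pvc jenkins-pv-claim
# Inspect ownership and permissions of the mounted path from inside the pod
# (/var/jenkins_home is an assumed mount point, adjust to your deployment):
kubectl -n ns-jenkins exec -it <jenkins-pod> -- ls -ld /var/jenkins_home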

MongoDB Community Kubernetes Operator and Custom Persistent Volumes

I'm trying to deploy a MongoDB replica set by using the MongoDB Community Kubernetes Operator in Minikube.
I followed the instructions on the official GitHub, so:
Install the CRD
Install the necessary roles and role-bindings
Install the Operator
Deploy the Replicaset
By default, the operator creates three pods, each of them automatically linked to a new persistent volume claim bound to a new persistent volume also created by the operator (so far so good).
However, I would like the data to be saved in a specific volume, mounted on a specific host path. So I would need to create three persistent volumes, each mounted on a specific host path, and then configure the replica set so that each pod connects to its respective persistent volume (perhaps using the matchLabels selector).
So I created three volumes by applying the following file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv-00
  namespace: $NAMESPACE
  labels:
    type: local
    service: mongo
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/mongodata/00"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv-01
  namespace: $NAMESPACE
  labels:
    type: local
    service: mongo
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/mongodata/01"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv-02
  namespace: $NAMESPACE
  labels:
    type: local
    service: mongo
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/mongodata/02"
and then I set up the replica set configuration file in the following way, but it still fails to connect the pods to the volumes:
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: mongo-rs
  namespace: $NAMESPACE
spec:
  members: 3
  type: ReplicaSet
  version: "4.4.0"
  persistent: true
  podSpec:
    persistence:
      single:
        labelSelector:
          matchLabels:
            type: local
            service: mongo
        storage: 5Gi
        storageClass: manual
  statefulSet:
    spec:
      volumeClaimTemplates:
        - metadata:
            name: data-volume
          spec:
            accessModes: [ "ReadWriteOnce", "ReadWriteMany" ]
            resources:
              requests:
                storage: 5Gi
            selector:
              matchLabels:
                type: local
                service: mongo
            storageClassName: manual
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - ...
  additionalMongodConfig:
    storage.wiredTiger.engineConfig.journalCompressor: zlib
I can't find any documentation online, except for the mongodb.com_v1_custom_volume_cr.yaml example. Has anyone faced this problem before? How could I make it work?
I think you could be interested in using local volumes. It works like this:
First, you create a storage class for the local volumes. Something like the following:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Since it has no-provisioner, it will be usable only if you manually create local PVs. WaitForFirstConsumer, in turn, prevents binding a PV to the PVC of a Pod that cannot be scheduled on the host where the PV is available.
Second, you create the local PVs, similarly to how you created them in your example, something like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /path/on/the/host
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - the-node-hostname-on-which-the-storage-is-located
Notice the definition: it specifies the path on the host and the capacity, and then it declares on which node of the cluster that PV can be used (via nodeAffinity). It also links the PV to the storage class we created earlier, so that if someone (a claim template) requires storage with that class, it will now find this PV.
You can create 3 PVs on 3 different nodes, or 3 PVs on the same node at different paths; you can organize things as you desire.
Third, you can now use the local-storage class in the claim template. The claim template could be something similar to this:
volumeClaimTemplates:
  - metadata:
      name: the-name-of-the-pvc
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "local-storage"
      resources:
        requests:
          storage: 5Gi
And each Pod of the StatefulSet will try to be scheduled on a node with a local-storage PV available.
Remember that with local storage or, in general, with volumes that use host paths, you may want to spread the various Pods of your app across different nodes, so that the app can survive the failure of a single node.
In case you want to decide which Pod links to which volume, the easiest way is to create one PV at a time and wait for the Pod to become Bound to it before creating the next one. It's not optimal, but it's the easiest way; you can watch the binding happen as sketched below.
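A simple way to watch the volumes bind one at a time (a sketch; the $NAMESPACE placeholder is the same one used in the manifests above):
# PVs are cluster-scoped; the -n flag only filters the PVCs:
kubectl -n $NAMESPACE get pv,pvc -w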

Local PersistentVolumeClaim says "no volume plugin matched"

I recently started exploring Kubernetes and decided to try to deploy Kafka to k8s. However, I have a problem with creating the persistent volume. I create a storage class and a persistent volume, but the persistent volume claims stay in status Pending saying "no volume plugin matched". These are the yaml files I used, with the dashed lines denoting a new file. Does anybody have an idea why this is happening?
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
reclaimPolicy: Retain
------------------------
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
---------------------------
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-storage
As MaggieO said, changing ReadWriteMany to ReadWriteOnce was part of the problem. The other part was that I had to create the /mnt/disks/ssd1 folder on my C: drive manually and write "path: /c/mnt/disks/ssd1" instead. Something that is not present in my example, but that I was trying to do anyway and might be helpful to others: I was trying to have two PVCs for one PV, which is impossible. The PV to PVC relationship is 1 to 1.
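For clarity, the corrected PV described above would look roughly like this (a sketch: it is the kafka-pv from the question with only the path adjusted for Docker Desktop on Windows):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /c/mnt/disks/ssd1   # the manually created folder, addressed via /c/ on Docker Desktop
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop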
Your Persistent Volume Claim configuration file should look like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-storage
Just change access mode from ReadWriteMany to ReadWriteOnce.
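A quick check after applying the corrected claim (a sketch):
kubectl get pvc zookeeper-pvc
# STATUS should move from Pending to Bound once it matches kafka-pv.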
Let me know if it helped.
You have to bind the persistent volume claim to your persistent volume.
You have mentioned storageClassName: local-storage in the PVC.
Try it with storageClassName: kafka-pv instead, so that your PVC gets bound to the PV.
In my case it was caused by the PV's storage size; just increase the size of the PV storage.
From our experience (under a security-hardened Kubernetes distribution called OpenShift 3.11), this provisioning error happens most often when creating PVCs is not possible because the cluster has run out of PVs.
So the cluster admin needs to add some PVs and/or we need to release some other PVCs (unused but still bound to PVs).
Having requested an incorrect PVC capacity is another possible reason (when the cluster is poorly configured and accepts only a single "magic number" for capacity).
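A quick way to see whether any suitable PVs are still Available, and which ones are stuck in Released (assuming you have permission to list cluster-scoped resources):
kubectl get pv
# Available volumes can satisfy new claims; Released ones keep their old claimRef
# until it is cleared (or the PV is recreated), so they cannot be reused as-is.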