Persistent volume claims vs subPath - kubernetes

I would like to use a single mount point on a node (ie /data) and have a different sub folder for each PersistentVolumeClaim that I am going to use in my cluster.
At the moment I have a StorageClass and a PersistentVolume for each sub folder, for example:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: prometheus
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus
  labels:
    type: local
spec:
  storageClassName: prometheus
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  local:
    path: "/data/prometheus"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: disk
              operator: In
              values:
                - local
As you can imagine, having a StorageClass and a PersistentVolume for each PersistentVolumeClaim looks like a bit of an overkill.
I have tried using a single StorageClass and PersistentVolume (just pointing to /data) together with the subPath option (e.g. prometheus) and multiple PersistentVolumeClaims.
But I have noticed that if the securityContext.fsGroupChangePolicy option is enabled, the user/group changes are applied to the root of the volume (i.e. /data), not to the subPath (i.e. /data/prometheus).
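Roughly the shape of what I tried, as a sketch (the image, claim name and group id are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: prometheus
spec:
  securityContext:
    fsGroup: 65534                       # illustrative group id
    fsGroupChangePolicy: "OnRootMismatch"
  containers:
    - name: prometheus
      image: prom/prometheus
      volumeMounts:
        - name: data
          mountPath: /prometheus
          subPath: prometheus            # ownership changes are still applied to /data, not /data/prometheus
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: shared-data           # single PVC backed by the PV that points at /data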
Is there a better solution?
Thanks

As you can imagine, having a StorageClass and a PersistentVolume for each
PersistentVolumeClaim looks like a bit of an overkill.
That's exactly how dynamic storage provisioning works: a single StorageClass specified in the PVC used by a Pod will provision a single PV for that PVC. There's nothing wrong with it. I'd suggest using it if you are OK with its default volume reclaim policy of Delete.

local-path-provisioner seems to be a good solution.
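For example, a minimal sketch of a StorageClass for it with a Retain reclaim policy (this assumes the local-path-provisioner is already installed in the cluster; the class name is illustrative):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path-retain            # illustrative name
provisioner: rancher.io/local-path   # provisioner shipped by local-path-provisioner
reclaimPolicy: Retain                # keep the data on the node when the PVC is deleted
volumeBindingMode: WaitForFirstConsumer
Each PVC that references such a class gets its own sub-directory under the provisioner's configured data path (by default something like /opt/local-path-provisioner, configurable in its ConfigMap), which gives you the per-claim sub-folder layout without a StorageClass/PV pair per claim.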

Related

Expand PV from Kubernetes no-provision storageclass

I'm trying to deploy Confluent for Kafka (the ZooKeeper StatefulSet), and part of the documentation mentions that I should be able to resize it, meaning that my StorageClass should have allowVolumeExpansion: true set.
While the listed supported on-prem storage solutions are only Ceph RBD and Portworx, the example given for when you are not using dynamic provisioning uses the no-provisioner provisioner.
I would like to know whether using a StorageClass with the no-provisioner provisioner actually prevents me from resizing the persistent volumes.
For reference:
My SC manifest is as below:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-storage-class
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Retain
I am able to create it.
As for the PV manifest, it is as below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data1
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: my-storage-class
  local:
    path: /mnt/app/data1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1
Then, I follow the steps below:
Add a new disk to my VM
Expand the LVM VG with the new disk
Expand the LVM LV with the free space
Change the PV/PVC storage capacity and request
Rollout restart the StatefulSet
Once I exec into the Pod and check the filesystem size, it still shows 10Gi.
The issue was on the Linux side: I had forgotten to run resize2fs.
That's correct: you cannot use allowVolumeExpansion: true with the no-provisioner type of StorageClass (you can set it, but it won't work). The supported types are listed here.
Expanding PersistentVolumeClaims is also not supported with this type of storage.
The workaround you can try is to:
Scale the current StatefulSet down to 0 replicas.
Delete the existing PV/PVC.
Update the desired capacity in the PV/PVC manifests (see the sketch after this list).
Apply the PV/PVC.
Scale the StatefulSet back up to the desired number of Pods.
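A sketch of the updated manifests, assuming the underlying LVM volume has already been grown and the new desired size is 20Gi (the size and the claim name are illustrative; with persistentVolumeReclaimPolicy: Retain the data under /mnt/app/data1 survives the delete/re-create):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data1
spec:
  capacity:
    storage: 20Gi                    # new size after growing the LV
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: my-storage-class
  local:
    path: /mnt/app/data1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-zookeeper-0             # illustrative; StatefulSet claims are named <template>-<statefulset>-<ordinal>
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: my-storage-class
  resources:
    requests:
      storage: 20Gi                  # must not exceed the PV capacity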

kubernetes storage class node selector

I'm trying to leverage a local volume dynamic provisioner for k8s (Rancher's one) with multiple instances, each with its own storage class, so that I can provide multiple types of local volumes based on their performance (e.g. SSD, HDD, etc.).
The underlying infrastructure is not symmetric; some nodes only have ssds, some only hdds, some of them both.
I know that I can hint the scheduler to select the proper nodes by providing node affinity rules for pods.
But, is there a better way to address this problem at the level of provisioner / storage class only ? E.g., make a storage class only available for a subset of the cluster nodes.
I know that I can hint the scheduler to select the proper nodes by
providing node affinity rules for pods.
There is no need to define node affinity rules at the Pod level when using local persistent volumes. Node affinity can be specified in the PersistentVolume definition.
But, is there a better way to address this problem at the level of
provisioner / storage class only ? E.g., make a storage class only
available for a subset of the cluster nodes.
No, node constraints cannot be specified at the StorageClass level, nor can you make a StorageClass available only to a subset of nodes.
But when it comes to a provisioner, I would say yes, it should be feasible, as one of the major tasks of a storage provisioner is creating matching PersistentVolume objects in response to the PersistentVolumeClaims created by the user. You can read about it here:
Dynamic volume provisioning allows storage volumes to be created
on-demand. Without dynamic provisioning, cluster administrators have
to manually make calls to their cloud or storage provider to create
new storage volumes, and then create PersistentVolume objects to
represent them in Kubernetes. The dynamic provisioning feature
eliminates the need for cluster administrators to pre-provision
storage. Instead, it automatically provisions storage when it is
requested by users.
So, looking at the whole volume provisioning process from the very beginning, it looks as follows:
The user creates only a PersistentVolumeClaim object, in which they specify a StorageClass:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-storage ### 👈
and it can be used in a Pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim ### 👈
So in practice, in a Pod definition you only need to specify the proper PVC; there is no need to define any node-affinity rules here.
A Pod references a PVC, the PVC references a StorageClass, and the StorageClass references the provisioner that should be used:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/my-fancy-provisioner ### 👈
volumeBindingMode: WaitForFirstConsumer
So in the end it is the task of the provisioner to create the matching PersistentVolume object. It can look as follows:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /var/tmp/test
  nodeAffinity: ### 👈
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - ssd-node ### 👈
So a Pod which uses the myclaim PVC -> which references the local-storage StorageClass -> which selects the proper storage provisioner, will be automatically scheduled on the node selected in the PV definition created by that provisioner.
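As a sketch of how the SSD/HDD split could look at the StorageClass level, assuming one provisioner instance is deployed per disk type (the provisioner names below are illustrative placeholders, not real registered provisioners):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-ssd
provisioner: example.com/local-ssd-provisioner   # illustrative: instance configured with SSD paths only
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hdd
provisioner: example.com/local-hdd-provisioner   # illustrative: instance configured with HDD paths only
volumeBindingMode: WaitForFirstConsumer
Each provisioner instance then only creates PVs with node affinity for the nodes that actually have that disk type, so a PVC asking for local-ssd lands on an SSD node without any affinity rules on the Pod itself.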

Unable to setup couchbase operator 1.2 with persistent volume on local storage class

I am trying to set up Couchbase Operator 1.2 on my local system.
I followed these steps:
Install the Couchbase Admission Controller.
Deploy the Couchbase Autonomous Operator.
Deploy the Couchbase Cluster.
Access Couchbase from the UI.
But the problem with this is that as soon as the system or Docker resets, or the pod restarts, the cluster's data is lost.
So I tried the same thing with a persistent volume on a local storage class, as mentioned in the docs, but the result was still the same: the pod still gets reset, and I am unable to find the reason for it.
So, can anyone advise on how to do this with a persistent volume on a local storage class? I have successfully created the storage class; I'm just having problems getting the cluster up while keeping its data.
Here are the YAMLs that I used to create the storage class, the PV and the PV claim:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: myssd
provisioner: local
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: couchbase-data-2
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: myssd
  hostPath:
    path: "/home/<user>/cb-storage/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-test-claim-2
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: myssd
  resources:
    requests:
      storage: 1Gi
Thanks in advance
A persistent volume using hostPath is not durable. Use a local volume instead. Compared to hostPath volumes, local volumes can be used in a durable and portable manner without manually scheduling Pods to nodes, as the system is aware of the volume's node constraints by looking at the node affinity on the PersistentVolume.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: couchbase-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/<User>/cb-storage/
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1
                - node2
                - node3
                - node4
You don't need to create the PersistentVolume manually, because the local volume static provisioner will do that for you. For that, you also need to configure the local volume provisioner as discussed here, so that provisioning through the local storage class happens.
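For reference, a sketch of a claim that consumes the class above (the claim name is illustrative; with the Couchbase operator the claims are typically generated from the volume claim templates in the cluster spec):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: couchbase-data-claim         # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage    # matches the StorageClass above
  resources:
    requests:
      storage: 10Gi                  # fits within the 10Gi PV above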

pod has unbound PersistentVolumeClaims

When I push my deployments, for some reason, I'm getting the error on my pods:
pod has unbound PersistentVolumeClaims
Here are my YAML below:
This is running locally, not on any cloud solution.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: ckan
  name: ckan
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: ckan
    spec:
      containers:
        - image: slckan/docker_ckan
          name: ckan
          ports:
            - containerPort: 5000
          resources: {}
          volumeMounts:
            - name: ckan-home
              mountPath: /usr/lib/ckan/
              subPath: ckan
      volumes:
        - name: ckan-home
          persistentVolumeClaim:
            claimName: ckan-pv-home-claim
      restartPolicy: Always
status: {}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ckan-pv-home-claim
  labels:
    io.kompose.service: ckan
spec:
  storageClassName: ckan-home-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  volumeMode: Filesystem
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ckan-home-sc
provisioner: kubernetes.io/no-provisioner
mountOptions:
  - dir_mode=0755
  - file_mode=0755
  - uid=1000
  - gid=1000
You have to define a PersistentVolume providing disk space to be consumed by the PersistentVolumeClaim.
When a storageClassName is set, Kubernetes is going to use "Dynamic Volume Provisioning", which does not work with the local file system.
To solve your issue:
Provide a PersistentVolume fulfilling the constraints of the claim (a size >= 100Mi)
Remove the storageClass from the PersistentVolumeClaim or provide it with an empty value ("")
Remove the StorageClass from your cluster
How do these pieces play together?
When the deployment's state description is created, it is usually known which kind of storage (amount, speed, ...) that application will need.
To make a deployment versatile you'd like to avoid a hard dependency on storage. Kubernetes' volume-abstraction allows you to provide and consume storage in a standardized way.
The PersistentVolumeClaim is used to provide a storage-constraint alongside the deployment of an application.
The PersistentVolume offers cluster-wide volume-instances ready to be consumed ("bound"). One PersistentVolume will be bound to one claim. But since multiple instances of that claim may be run on multiple nodes, that volume may be accessed by multiple nodes.
A PersistentVolume without StorageClass is considered to be static.
"Dynamic Volume Provisioning" alongside with a StorageClass allows the cluster to provision PersistentVolumes on demand.
In order to make that work, the given storage provider must support provisioning - this allows the cluster to request the provisioning of a "new" PersistentVolume when an unsatisfied PersistentVolumeClaim pops up.
Example PersistentVolume
In order to find out how to specify things, you are best advised to take a look at the API for your Kubernetes version, so the following example is built from the API reference of K8s 1.17:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ckan-pv-home
  labels:
    type: local
spec:
  capacity:
    storage: 100Mi
  hostPath:
    path: "/mnt/data/ckan"
The PersistentVolumeSpec allows us to define multiple attributes.
I chose a hostPath volume which maps a local directory as content for the volume. The capacity allows the resource scheduler to recognize this volume as applicable in terms of resource needs.
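To go with it, a sketch of the claim from the question with the storage class removed (an empty value, per the steps above), so that it binds statically to a PV like the one shown:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ckan-pv-home-claim
  labels:
    io.kompose.service: ckan
spec:
  storageClassName: ""        # empty value disables dynamic provisioning for this claim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi          # satisfied by the 100Mi PV above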
Additional Resources:
Configure PersistentVolume Guide
If you're using the Rancher k3s Kubernetes distribution, set storageClassName to local-path as described in the docs:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 2Gi
To use it on other distributions use https://github.com/rancher/local-path-provisioner
I ran into this issue and realized that I was creating my PVs with the "manual" StorageClass while the rest of the chain expected a different one. The whole chain has to agree on the storage class (see the sketch below):
Your Pod expects whatever storage class its claim references.
Your PVC definition: volumeClaimTemplates --> storageClassName: "standard"
Your PV: spec --> storageClassName: "standard"
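A sketch of the two places that must agree, assuming a StatefulSet (all names are illustrative):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: nginx
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: "standard"   # must match the PV's storageClassName
        resources:
          requests:
            storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-app-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: "standard"         # must match the claim template
  hostPath:
    path: /mnt/data/my-app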
In my case the problem was the wrong PersistentVolume name specified in the PersistentVolumeClaim declaration.
But there might be more reasons for it. Make sure that:
The volumeName specified in the PVC matches the PV name (see the sketch below)
The storageClassName specified in the PVC matches that of the PV
Sufficient capacity is allocated to your resource
The access modes of your PV and PVC are consistent
The number of PVs matches the number of PVCs
For a detailed explanation, read this article.
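For the first two points, a minimal sketch of a claim pre-bound to a specific PV (names and sizes are illustrative):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  volumeName: my-pv           # must match metadata.name of the intended PV
  storageClassName: ""        # must match the PV's storageClassName (empty here for a static PV without one)
  accessModes:
    - ReadWriteOnce           # must be compatible with the PV's access modes
  resources:
    requests:
      storage: 1Gi            # must not exceed the PV's capacity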
We faced a very similar issue today. For us the problem was that there was no CSI driver installed on the nodes. To check the drivers installed, you can use this command:
kubectl get csidriver
Our managed Kubernetes clusters (v1.25) run in Google Cloud, so for us the solution was to just enable the 'Compute Engine persistent disk CSI Driver' feature.
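Once the driver is installed, a StorageClass references it via its provisioner name. A minimal sketch (the class name and disk type are illustrative):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: balanced-csi                 # illustrative name
provisioner: pd.csi.storage.gke.io   # the Compute Engine persistent disk CSI driver
parameters:
  type: pd-balanced                  # illustrative disk type
volumeBindingMode: WaitForFirstConsumer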

Kubernetes - PersistentVolumeClaim failed

I have a GKE-based Kubernetes setup and a Pod that requires a storage volume. I attempt to use the config below:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-scratch-space
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2000Gi
  storageClassName: standard
This PVC is not provisioned. I get the below error:
Failed to provision volume with StorageClass "standard": googleapi: Error 503: The zone 'projects/p01/zones/europe-west2-b' does not have enough resources available to fulfill the request. Try a different zone, or try again later.
Looking at the GKE quotas page, I don't see any issues. Deleting other PVCs also does not solve the issue. Can anyone help? Thanks.
There is no configuration problem on your side; there are actually not enough resources in the europe-west2-b zone to create a 2 TB persistent disk. Either try a smaller volume or use a different zone.
There is an example for GCE in the docs. Create a new StorageClass specifying, say, the europe-west1-b zone (which is actually cheaper than europe-west2-b) like this:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gce-pd-europe-west1-b
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zones: europe-west1-b
And modify your PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-scratch-space
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2000Gi
  storageClassName: gce-pd-europe-west1-b