PersistentVolume and PersistentVolumeClaim for multiple deployments - kubernetes

I have 3 deployments: a-depl, b-depl, c-depl. Each of these 3 deployments has a corresponding db deployment: a-db-depl, b-db-depl, c-db-depl.
Now I want to persist each of these dbs. Do I need to create a single PV for all of them, or one PV per deployment?
I know that PV <-> PVC is a 1-to-1 relation, but I don't know about Depl <-> PV.
Can someone please help?
As of now I have no clue, so I am using a single PV for all of the db deployments:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/data/mongo"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi

A PV can be bound to only one PVC at a time, so for each of your PVCs you need to create a corresponding PV. To automate PV creation you can create a StorageClass and reference it in your PVCs; the StorageClass will then dynamically provision a PV for each PVC (a sketch follows the list below).
Whether multiple deployments can use the same PVC or PV depends on the accessModes of the PVC or PV:
ReadOnlyMany - the volume can be mounted read-only by many nodes
ReadWriteMany - the volume can be mounted as read-write by many nodes
ReadWriteOnce - the volume can be mounted as read-write by a single node
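
As a minimal sketch of the dynamic-provisioning approach (the StorageClass name, provisioner and sizes here are illustrative assumptions, not taken from the question): define the StorageClass once, give each database deployment its own PVC referencing it, and a PV is provisioned automatically for each claim.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: db-storage                   # illustrative name
provisioner: kubernetes.io/aws-ebs   # pick the provisioner that matches your cluster
reclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: a-db-pvc                     # repeat for b-db-pvc, c-db-pvc
spec:
  storageClassName: db-storage
  accessModes:
    - ReadWriteOnce                  # a single db pod mounts it read-write
  resources:
    requests:
      storage: 2Gi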

How does one run multiple replicas of a pod and have each pod use its own storage volume?
Use a StatefulSet, a resource specifically tailored to applications whose instances must be treated as non-fungible individuals, each with a stable name and state. A sketch follows.
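For illustration, a minimal StatefulSet sketch (names, image and sizes are assumptions): the volumeClaimTemplates section makes Kubernetes create one PVC per replica, named data-mongo-0, data-mongo-1, data-mongo-2, so each pod gets its own volume.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:6
          volumeMounts:
            - name: data
              mountPath: /data/db
  volumeClaimTemplates:        # one PVC is created per replica
    - metadata:
        name: data
      spec:
        accessModes: [ReadWriteOnce]
        resources:
          requests:
            storage: 2Gi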

Related

NFS persistent volume is not deleted even with "Delete" persistentVolumeReclaimPolicy

I am using an NFS Persistent Volume to create the PV.
The reclaim policy being used is persistentVolumeReclaimPolicy: Delete.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 3Gi
  volumeMode: Filesystem
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Delete
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /
    server: fs-0bb.efs.us-east-2.amazonaws.com
However, when I delete my deployment-controller and also delete the PersistentVolumeClaim, the NFS volume is not getting deleted.
Expected behaviour: The NFS PV should be deleted after the PVC is deleted.
There are two types of PV configuration: static and dynamic. With a static PV you need to delete both the PVC and the PV; with a dynamic PV you only need to delete the PVC and the PV will then be released. From the manifest file you have provided you are using a static PV, so you need to delete both the PVC and the PV, for example:
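kubectl delete pvc <your-pvc-name>   # the PVC name is a placeholder, it is not shown in the question
kubectl delete pv pv001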

How can I mount a PV on one node and use that same PV for pods on another node

I have attached an EBS volume to one of the nodes in my cluster, and I want any pods that come up, irrespective of the node they are scheduled onto, to use that EBS volume. Is this possible?
My approach was to create a PV/PVC that mounts to that volume and then use that PVC in my pods, but I am not sure whether it mounts on the same host the pod comes up on or on a different host.
YAML for the StorageClass:
apiVersion: storage.k8s.io/v1   # required; missing from the original snippet
kind: StorageClass
metadata:
  name: local-path
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
allowVolumeExpansion: true
reclaimPolicy: Delete
PV.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv
  labels:
    type: local
spec:
  capacity:
    storage: 200Mi
  storageClassName: local-path
  claimRef:
    namespace: redis
    name: data-redis-0
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt2/data/redis"
PVC.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-redis-0
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: local-path
No, when I am trying to schedule a pod, the storage is also getting mounted on the same node instead.
You are using a local path, so you cannot do that.
There are different access modes for a PVC: ReadWriteMany, ReadWriteOnce, and ReadOnlyMany.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see AccessModes).
Read more at: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
Yes, you can mount multiple pods to a single PVC, but in that case you have to use ReadWriteMany. Most people use NFS or EFS for this type of use case.
EBS is ReadWriteOnce, so it won't be possible to use EBS in your case; you have to use either NFS or EFS (a minimal sketch follows below).
You can also use GlusterFS with EBS volumes provisioned behind it; GlusterFS supports ReadWriteMany and will be faster than EFS since it is backed by block storage (SSD).
For ReadWriteMany you can also check out: https://min.io/
Find access mode details here: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
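For illustration, a minimal ReadWriteMany setup backed by NFS might look like the following sketch (the server address, names and size are placeholder assumptions; the empty storageClassName forces binding to this statically created PV rather than a default StorageClass).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-pv
spec:
  storageClassName: ""
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany            # many nodes can mount it read-write
  nfs:
    server: nfs.example.com    # placeholder NFS server
    path: /exports/shared
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-pvc             # mount this same claim from several pods
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi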
I have attached an EBS volume to one of the nodes in my cluster, and I want any pods that come up, irrespective of the node they are scheduled onto, to use that EBS volume. Is this possible?
No. An EBS volume can only be attached to at most one EC2 instance, and correspondingly, one Kubernetes node. In Kubernetes terminology, it only allows the ReadWriteOnce access mode.
It looks like the volume you're trying to create is the backing store for a Redis instance. If the volume will only be attached to one pod at a time, then this isn't a problem on its own, but you need to let Kubernetes manage the volume for you. Then the cluster will know to detach the EBS volume from the node it's currently on and reattach it to the node with the new pod. Setting this up is a cluster-administration problem and not something you as a programmer can do, but it should be set up for you in environments like Amazon's EKS managed Kubernetes.
In this environment:
Don't create a StorageClass; this is cluster-level configuration.
Don't manually create a PersistentVolume; the cluster will create it for you.
You should be able to use the default StorageClass in your PersistentVolumeClaim (that is, omit storageClassName:).
You probably should use a StatefulSet to create the PersistentVolumeClaim for you.
So for example:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis                  # headless Service name, required for a StatefulSet
  selector:
    matchLabels:
      app: redis
  volumeClaimTemplates:               # automatically creates PersistentVolumeClaims
    - metadata:
        name: data-redis
      spec:
        accessModes: [ReadWriteOnce]  # data won't be shared between pods
        resources:
          requests:
            storage: 200Mi
        # default storageClassName:
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7
          volumeMounts:
            - name: data-redis
              mountPath: /data

How to migrate from PVC to VolumeClaimTemplate

There is a StatefulSet which uses a dynamic provisioner with a StorageClass named 'some-class' and the following simple PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  storageClassName: some-class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
The reclaimPolicy of the StorageClass is Retain.
When I delete the application, the PV is retained.
But when reinstalling the app, a new PVC and a new PV are created and used, so I can't use the data from the old PV.
For this behaviour I need to use volumeClaimTemplates instead of a standalone PVC.
The question is how to retain the data, either by reusing the old PV or by using the new one with the data migrated from the old.
One solution would be to use a label selector for the PV from inside the volumeClaimTemplates, but unfortunately that doesn't work because of the dynamic provisioner (see the sketch below).
Is there any Kubernetes-level solution, or is this outside of Kubernetes' capabilities and should it be done by hand?
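For reference, the selector-based attempt mentioned above would look roughly like this inside the StatefulSet (the label is an illustrative assumption); as the question notes, it does not help here, because dynamic provisioning does not honour a selector, so such a claim can only bind to a pre-existing PV.
volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      storageClassName: some-class
      accessModes:
        - ReadWriteOnce
      selector:
        matchLabels:          # only matched against statically created PVs
          app: example        # illustrative label on the retained PV
      resources:
        requests:
          storage: 3Gi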

PersistentVolumeClaim Pending for NFS Volume

What specific changes need to be made to the yaml below in order to get the PersistentVolumeClaim to bind to the PersistentVolume?
An EC2 instance in the same VPC subnet as the Kubernetes worker nodes has an IP of 10.0.0.112 and has been configured to act as an NFS server exporting the /nfsfileshare path.
Creating the PersistentVolume
We created a PersistentVolume pv01 with pv-volume-network.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: "/nfsfileshare"
    server: "10.0.0.112"
and by typing:
kubectl create -f pv-volume-network.yaml
Then when we type kubectl get pv pv01, the pv01 PersistentVolume shows a STATUS of "Available".
Creating the PersistentVolumeClaim
Then we created a PersistentVolumeClaim named my-pv-claim with pv-claim.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
And by typing:
kubectl create -f pv-claim.yaml
STATUS is Pending
But then when we type kubectl get pvc my-pv-claim, we see that the STATUS is Pending, and it remains Pending for as long as we continue to check back.
Note this OP is different from this other question, because this problem persists even with quotes around the NFS IP and the path.
Why is this PVC not binding to the PV? What specific changes need to be made to resolve this?
I diagnosed the problem by typing kubectl describe pvc my-pv-claim and looking in the Events section of the output.
Based on the reported Events, I was able to fix this by changing storageClassName: manual to storageClassName: slow.
The problem was that the PVC's storageClassName did not match the storageClassName specified in the PV, which it must in order to bind. The corrected claim is sketched below.
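For completeness, a sketch of the corrected claim; besides the storageClassName fix, the requested access mode is aligned with the PV's ReadWriteMany here, since a claim can only bind to a PV that offers every access mode the claim requests.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pv-claim
spec:
  storageClassName: slow      # must match the PV's storageClassName
  accessModes:
    - ReadWriteMany           # the PV only offers ReadWriteMany
  resources:
    requests:
      storage: 3Gi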

Can one persistent volume be consumed by several persistent volume claims?

Is it correct to assume that one PV can be consumed by several PVCs, with each pod instance needing its own PVC binding? I'm asking because I created a PV and then a PVC with different size requirements, such as:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8sdisk
  labels:
    type: amazonEBS
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-xxxxxx
    fsType: ext4
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: couchbase-pvc
  labels:
    type: amazonEBS
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
But when I use the PVC with the pod, it shows as 200GB available instead of the 5GB.
I'm sure I'm mixing things, but could not find a reasonable explanation.
When you create a PVC it will look for a PV that satisfies its requirements, but unless the volume and claim use a multi-access mode (and only a limited set of backends support that, e.g. NFS; see http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes for details), the PV will not be shared by multiple PVCs. Furthermore, the size in the PVC is not a quota on the amount of data saved to the volume during the pod's life; it is only used to match a PV that is big enough, and that's it. That is why your pod sees the full 200Gi of the bound PV rather than the 5Gi you requested. A sketch of the one-PV-per-claim alternative follows.
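If the intent is to give the claim only 5Gi, the usual pattern is one dedicated PV per claim, with the PV capacity (and the underlying EBS volume) sized to match what the claim actually needs; a sketch, where the volume ID is a placeholder for a separate 5Gi EBS volume.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: couchbase-disk
spec:
  capacity:
    storage: 5Gi               # sized to match the claim below
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-yyyyyy       # placeholder: a separate 5Gi EBS volume
    fsType: ext4
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: couchbase-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi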