NFS persistent volume is not deleted even with "Delete" persistentVolumeReclaimPolicy - kubernetes

I am using an NFS persistent volume to create a PV.
The reclaim policy being used is persistentVolumeReclaimPolicy: Delete.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 3Gi
  volumeMode: Filesystem
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Delete
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /
    server: fs-0bb.efs.us-east-2.amazonaws.com
However, when I delete my deployment controller and also delete the PersistentVolumeClaim, the NFS volume is not getting deleted.
Expected behaviour: the NFS PV should be deleted after the PVC is deleted.

There are two types of PV configuration: static PV and dynamic PV. If you have configured a static PV, you need to delete both the PVC and the PV. For a dynamic PV you only need to delete the PVC, after which the PV is released. From the manifest you have provided, you are using a static PV, so you need to delete both the PVC and the PV.
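With a statically created PV like the one above, cleanup would be along these lines (the PVC name is a placeholder for whatever claim was bound to pv001):
kubectl delete pvc <your-pvc-name>
kubectl delete pv pv001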


How can I mount a PV on one node and use that same PV for pods on another node

I have attached an EBS volume to one of the nodes in my cluster and I want that whatever pods come up, irrespective of the node they are scheduled onto, they should use that EBS volume. Is this possible?
My approach was to create a PV/PVC that mounts to that volume and then use that PVC in my pod, but I am not sure whether it mounts on the same host the pod comes up on or on a different host.
YAML for Storage Class
kind: StorageClass
metadata:
name: local-path
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
allowVolumeExpansion: true
reclaimPolicy: Delete
PV.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv
  labels:
    type: local
spec:
  capacity:
    storage: 200Mi
  storageClassName: local-path
  claimRef:
    namespace: redis
    name: data-redis-0
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt2/data/redis"
PVC.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-redis-0
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: local-path
Now when I try to schedule a pod, the storage is also getting mounted on the same node instead.
You are using a local path, so you cannot do this.
There are different access modes for a PVC: ReadWriteMany, ReadWriteOnce, and ReadOnlyMany.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see AccessModes).
Read More at : https://kubernetes.io/docs/concepts/storage/persistent-volumes/
Yes, you can mount multiple pods to a single PVC, but in that case you have to use ReadWriteMany. Most people use NFS or EFS for this type of use case.
EBS is ReadWriteOnce, so it won't be possible to use EBS in your case; you have to use either NFS or EFS.
You can also use GlusterFS, with EBS volumes provisioned behind it. GlusterFS supports ReadWriteMany and will be faster than EFS, as it is backed by block storage (SSD).
For ReadWriteMany you can also check out: https://min.io/
Find access mode details here : https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
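As a rough sketch of the EFS route (this assumes the AWS EFS CSI driver is installed in the cluster; the StorageClass name, claim name, and fileSystemId below are placeholders):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap      # creates one EFS access point per claim
  fileSystemId: fs-12345678     # placeholder: your EFS file system ID
  directoryPerms: "700"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  storageClassName: efs-sc
  accessModes:
    - ReadWriteMany             # pods on any node can mount this claim
  resources:
    requests:
      storage: 1Gi              # required by the API; EFS itself is elastic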
I have attached an EBS volume to one of the nodes in my cluster and I want that whatever pods come up, irrespective of the node they are scheduled onto, they should use that EBS volume. Is this possible?
No. An EBS volume can only be attached to at most one EC2 instance, and correspondingly, one Kubernetes node. In Kubernetes terminology, it only allows the ReadWriteOnce access mode.
It looks like the volume you're trying to create is the backing store for a Redis instance. If the volume will only be attached to one pod at a time, then this isn't a problem on its own, but you need to let Kubernetes manage the volume for you. Then the cluster will know to detach the EBS volume from the node it's currently on and reattach it to the node with the new pod. Setting this up is a cluster-administration problem and not something you as a programmer can do, but it should be set up for you in environments like Amazon's EKS managed Kubernetes.
In this environment:
Don't create a StorageClass; this is cluster-level configuration.
Don't manually create a PersistentVolume; the cluster will create it for you.
You should be able to use the default storageClass: in your PersistentVolumeClaim.
You probably should use a StatefulSet to create the PersistentVolumeClaim for you.
So for example:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis            # headless Service that governs the StatefulSet
  selector:
    matchLabels:
      app: redis
  volumeClaimTemplates:         # automatically creates PersistentVolumeClaims
    - metadata:
        name: data-redis
      spec:
        accessModes: [ReadWriteOnce]  # data won't be shared between pods
        resources:
          requests:
            storage: 200Mi
        # default storageClassName:
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis
          volumeMounts:
            - name: data-redis
              mountPath: /data
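Once this is applied, the cluster creates one claim per replica, named after the template and the pod (so the first pod's claim is data-redis-redis-0), which you can verify with:
kubectl get pvc data-redis-redis-0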

PersistentVolume and PersistentVolumeClaim for multiple deployments

I have 3 deployments, a-depl, b-depl, c-depl. Now each of these 3 deployments has a db deployment: a-db-depl, b-db-depl, c-db-depl.
Now I want to persist each of these dbs. Do I need to create a single PV for all or a PV for each of the deployments?
I know that PV <-> PVC is a 1-to-1 relation, but I don't know about Deployment <-> PV.
Can someone please help?
As of now I have no clue, so I am using a single PV for all of the db deployments:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/data/mongo"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
At a time, one PV can be bound to only one PVC. So for each of your PVCs you need to create a corresponding PV. To automate PV creation you can create a StorageClass and refer to that StorageClass in your PVC; the StorageClass can then dynamically provision a PV for each PVC (see the sketch after the access-mode list below).
Whether multiple deployments can use the same PVC or PV depends on the accessModes of the PVC or PV:
ReadOnlyMany - the volume can be mounted read-only by many nodes
ReadWriteMany- the volume can be mounted as read-write by many nodes
ReadWriteOnce - the volume can be mounted as read-write by a single node
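As a sketch of the dynamic approach (assuming the AWS EBS CSI driver is installed; swap the provisioner for whatever your cluster provides, and the db-storage / a-db-pvc names are placeholders), one StorageClass can back a separate PVC per database deployment:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: db-storage
provisioner: ebs.csi.aws.com    # use the provisioner available in your cluster
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: a-db-pvc                # repeat for b-db-pvc and c-db-pvc
spec:
  storageClassName: db-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi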
How does one run multiple replicas of a pod and have each pod use its own storage volume?
A StatefulSet resource, which is specifically tailored to applications where instances of the application must be treated as non-fungible individuals, with each one having a stable name and state.

Kubernetes Delete Persistent Volumes Created by hostPath

I created a PV and a PVC on docker-desktop, and even after removing the PV and PVC the data still remains. When I re-create them, the same mysql database gets attached to the new pods. How do you manually delete the files created by the hostPath? I suppose one way is to just reset Kubernetes in the preferences, but there has to be a less nuclear option.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim2
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
According to the docs, "...Recycle reclaim policy performs a basic scrub (rm -rf /thevolume/*) on the volume and makes it available again for a new claim". Also, "...Currently, only NFS and HostPath support recycling". So, try changing
persistentVolumeReclaimPolicy: Delete
to
persistentVolumeReclaimPolicy: Recycle
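If you would rather switch the policy on the PV object that already exists instead of re-creating it, patching it should work (the PV name comes from the manifest above):
kubectl patch pv mysql-pv-volume -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}'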
hostPath volumes are simply folders on one of your node's filesystem (in this case /mnt/data). All you need to do is delete that folder from the node that hosted the volume.
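In command form (assuming you can open a shell on that node; the path comes from the manifest above):
sudo rm -rf /mnt/data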
If you defined any node affinity for the pod, check that first to find out which node the pod was scheduled on. Then delete the PVC and PV, and finally delete the data from the /mnt/data directory.
kubectl get pod -o wide | grep <pod_name>
This shows which node the pod is scheduled on.
kubectl delete deploy <deploy_name>    # or: kubectl delete statefulset <statefulset_name>
kubectl get pv,pvc
kubectl delete pv <pv_name>
kubectl delete pvc <pvc_name>
Now go to that node and delete the data from /mnt/data.
One more option is to set persistentVolumeReclaimPolicy to Retain or Delete.

K8S PersistentVolume and PersistentVolumeClaim

I am practicing k8s on the storage topic. I don't understand why the PersistentVolumes in step 2 have storage sizes that differ from the PersistentVolumeClaims the tutorial configures in step 3.
For example nfs-0001.yaml and nfs-0002.yaml, whose storage sizes are 2Gi and 5Gi:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-0001
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: 172.17.0.7
    path: /exports/data-0001

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-0002
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: 172.17.0.7
    path: /exports/data-0002
Examples in step 3: pvc-mysql.yaml and pvc-http.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim-mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim-http
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
And when I check the pv and pvc
master $ kubectl get pvc
NAME          STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
claim-http    Bound    nfs-0001   2Gi        RWO,RWX                       17m
claim-mysql   Bound    nfs-0002   5Gi        RWO,RWX                       17m
master $ kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
nfs-0001   2Gi        RWO,RWX        Recycle          Bound    default/claim-http                            19m
nfs-0002   5Gi        RWO,RWX        Recycle          Bound    default/claim-mysql                           19m
Neither 1Gi nor 3Gi shows up in the terminal.
Question:
Where are the 1Gi and 3Gi?
If they are not used, is it safe to put an arbitrary number for storage in the PersistentVolumeClaim YAML?
You need to understand the difference between PVs and PVCs. A PVC is a declaration of storage that should at some point become available for an application to use; it is not the actual size of the volume allocated.
PVs are the actual volumes allocated on disk and ready to use. In order to use these PVs, a user needs to create PersistentVolumeClaims, which are nothing but requests for PVs. A claim must specify the access mode and storage capacity; once a claim is created, a PV is automatically bound to it.
In your case, you have PVs of 2Gi and 5Gi, and you have created two PVCs requesting 3Gi and 1Gi with access mode ReadWriteOnce; a PV can be bound to only one PVC.
The capacity of each available PV is larger than what was requested, and hence the larger PVs were bound to the claims.
The storage request in a PVC (spec.resources.requests.storage) is the user's request for storage: "I want a 10 GiB volume". PV.spec.capacity is the actual size of the PV. A PVC can bind to a bigger PV when no smaller PV is available, so the user may actually get more than they asked for.
Similarly, dynamic provisioning typically works in bigger chunks: if a user asks for 0.5GiB in a PVC, they will get a 1GiB PV, because that is the smallest volume AWS can provision.
There is nothing wrong with that. That said, you should not put an arbitrary number in the PVC size; it should be calculated according to your application's needs and scaling.
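If you want to see both numbers for a given claim, the requested size and the capacity actually granted can be compared with a jsonpath query like this (using claim-http from above):
kubectl get pvc claim-http -o jsonpath='{.spec.resources.requests.storage} requested, {.status.capacity.storage} granted{"\n"}'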

PersistentVolumeClaim Pending for NFS Volume

What specific changes need to be made to the yaml below in order to get the PersistentVolumeClaim to bind to the PersistentVolume?
An EC2 instance in the same VPC subnet as the Kubernetes worker nodes has an IP of 10.0.0.112 and has been configured to act as an NFS server on the /nfsfileshare path.
Creating the PersistentVolume
We created a PersistentVolume pv01 with pv-volume-network.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: "/nfsfileshare"
    server: "10.0.0.112"
and by typing:
kubectl create -f pv-volume-network.yaml
Then when we type kubectl get pv pv01, the pv01 PersistentVolume shows a STATUS of "Available".
Creating the PersistentVolumeClaim
Then we created a PersistentVolumeClaim named my-pv-claim with pv-claim.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
And by typing:
kubectl create -f pv-claim.yaml
STATUS is Pending
But then when we type kubectl get pvc my-pv-claim, we see that the STATUS is Pending, and it remains Pending for as long as we continue to check back.
Note this OP is different from this other question, because this problem persists even with quotes around the NFS IP and the path.
Why is this PVC not binding to the PV? What specific changes need to be made to resolve this?
I diagnosed the problem by typing kubectl describe pvc my-pv-claim and looking in the Events section of the results.
Then, based on the reported Events, I was able to fix this by changing storageClassName: manual to storageClassName: slow.
The problem was that the PVC's StorageClassName did not meet the requirement that it match the class specified in the PV.
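For reference, here is a sketch of the claim with the class fixed; the access mode is also aligned with the one pv01 advertises, which avoids another potential mismatch:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pv-claim
spec:
  storageClassName: slow        # must match the PV's storageClassName
  accessModes:
    - ReadWriteMany             # pv01 only lists ReadWriteMany
  resources:
    requests:
      storage: 3Gi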