I created a PV and a PVC on docker-desktop, and even after removing the PV and PVC the files still remain. When I re-create them, the same MySQL database gets attached to the new pods. How do you manually delete the files created by the hostPath volume? I suppose one way is to just reset Kubernetes in the preferences, but there has to be a less nuclear option.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim2
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
According to the docs, "...Recycle reclaim policy performs a basic scrub (rm -rf /thevolume/*) on the volume and makes it available again for a new claim". Also, "...Currently, only NFS and HostPath support recycling". So, try changing
persistentVolumeReclaimPolicy: Delete
to
persistentVolumeReclaimPolicy: Recycle
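If the PV already exists, the policy can also be switched in place with kubectl patch; a minimal sketch, using the PV name from the manifest above:

kubectl patch pv mysql-pv-volume -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}'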
hostPath volumes are simply folders on one of your node's filesystem (in this case /mnt/data). All you need to do is delete that folder from the node that hosted the volume.
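On docker-desktop the node is a VM, so /mnt/data is not directly visible on your machine. A minimal sketch of one way in, assuming a kubectl version that supports node debugging and the default node name docker-desktop:

kubectl debug node/docker-desktop -it --image=busybox
# inside the debug pod the node's root filesystem is mounted under /host
rm -rf /host/mnt/data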
If you defined any node affinity for the pod, check that first and find out which node the pod is scheduled on. Then delete the PVC and PV, and finally delete the data from the /mnt/data directory.
kubectl get pod -o wide | grep <pod_name>
This shows which node the pod is scheduled on.
kubectl delete deploy <deploy_name>    # or: kubectl delete statefulset <statefulset_name>
kubectl get pv,pvc
kubectl delete pv <pv_name>
kubectl delete pvc <pvc_name>
Now go to that node and delete the data under /mnt/data.
Another option is to set persistentVolumeReclaimPolicy on the PV to Retain or Delete, depending on whether you want the data kept or removed when the claim is released.
I am using an NFS Persistent Volume to create a PV.
The reclaim policy being used is persistentVolumeReclaimPolicy: Delete.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 3Gi
  volumeMode: Filesystem
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Delete
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /
    server: fs-0bb.efs.us-east-2.amazonaws.com
However, when I delete my deployment and also delete the PersistentVolumeClaim, the NFS volume does not get deleted.
Expected Behaviour: The NFS PV volume should be deleted after PVC is deleted.
There are two types of PV configuration: static PV and dynamic PV. If you have configured a static PV, you need to delete both the PVC and the PV; for a dynamic PV you only need to delete the PVC and the PV will then be released. From the manifest you have provided it seems that you are using a static PV, so you need to delete both the PVC and the PV, as sketched below.
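A minimal sketch of that cleanup, assuming the claim is named nfs-claim (the PVC name is not shown in the question):

kubectl delete pvc nfs-claim     # releases the claim; for a static NFS PV the PV object stays behind
kubectl get pv pv001             # confirm the PV's status
kubectl delete pv pv001          # removes the PV object; data on the NFS export itself is untouched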
I have 3 deployments, a-depl, b-depl, c-depl. Now each of these 3 deployments has a db deployment: a-db-depl, b-db-depl, c-db-depl.
Now I want to persist each of these dbs. Do I need to create a single PV for all or a PV for each of the deployments?
I know that PV <-> PVC is a 1-to-1 relation. But I don't know about Deployment <-> PV.
Can someone please help?
As of now, I have no clue, so I am using a single PV for all of the db deployments.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/data/mongo"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
A PV can be bound to only one PVC at a time, so for each of your PVCs you need to create a corresponding PV. To automate PV creation you can create a StorageClass and reference it in your PVCs; the StorageClass can then dynamically provision a PV for each PVC (a sketch follows after the list below).
Whether multiple deployments can use the same PVC or PV depends on the accessModes of the PVC or PV:
ReadOnlyMany - the volume can be mounted read-only by many nodes
ReadWriteMany- the volume can be mounted as read-write by many nodes
ReadWriteOnce - the volume can be mounted as read-write by a single node
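A rough sketch of dynamic provisioning, assuming your cluster has a StorageClass named standard (many clusters ship one as the default; adjust the name to whatever kubectl get storageclass shows):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: a-db-claim              # repeat per database: b-db-claim, c-db-claim
spec:
  storageClassName: standard    # assumption: replace with a StorageClass that exists in your cluster
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi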
How does one run multiple replicas of a pod and have each pod use its own storage volume?
Use a StatefulSet resource, which is specifically tailored to applications whose instances must be treated as non-fungible individuals, each with a stable name and state; its volumeClaimTemplates give every replica its own PVC, as sketched below.
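A minimal sketch of such a StatefulSet; the names (mongo, data) and the headless Service it references are assumptions:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo            # assumes a matching headless Service exists
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo
          volumeMounts:
            - name: data
              mountPath: /data/db
  volumeClaimTemplates:         # one PVC per replica: data-mongo-0, data-mongo-1, ...
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: standard   # assumption: replace with a StorageClass in your cluster
        resources:
          requests:
            storage: 2Gi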
I am using kubectl apply -f pv.yaml on this basic setup:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  storageClassName: "normal"
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Delete
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /home/demo/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  storageClassName: "normal"
  resources:
    requests:
      storage: 200Mi
  accessModes:
    - ReadWriteOnce
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    name: nginx-demo
spec:
  containers:
    - image: nginx
      name: nginx
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: pv-demo
  volumes:
    - name: pv-demo
      persistentVolumeClaim:
        claimName: pvc-demo
Now I wanted to delete everything so I used: kubectl delete -f pv.yaml
However, the volume still persists on the node at /home/demo and has to be removed manually.
So I tried to patch and remove protection before deletion:
kubectl patch pv pv-demo -p '{"metadata":{"finalizers":null}}'
But the mount still persists on the node.
I tried to edit the PV and null the finalizers manually; although it said 'edited', kubectl get pv shows the finalizers unmodified.
I don't understand why none of the above is working. When I delete the resources, I want the mounted folder /home/demo on the node to be deleted as well.
This is expected behavior when using hostPath, since it does not support deletion the way other volume types do. I tested this with kubeadm and GKE clusters, and the mounted directory and files remain intact after removal of the PV and PVC.
Taken from the manual about reclaim policies:
Currently, only NFS and HostPath support recycling. AWS EBS, GCE PD,
Azure Disk, and Cinder volumes support deletion.
While Recycle has been marked as deprecated in the documentation since version 1.5, it still works and can clean up your files, but it won't delete the mounted directory itself. It is not ideal, but it is the closest workaround.
IMPORTANT:
To successfully use Recycle you must not delete the PV itself. If you delete only the PVC, the controller manager creates a recycler pod that cleans up the volume, and the volume then becomes available for binding to the next PVC.
Looking at the controller-manager logs, you can see that the host_path deleter rejects deletion of the /home/demo/ directory and only supports deleting paths under /tmp/. However, after testing, /tmp is not deleted either:
'Warning' reason: 'VolumeFailedDelete' host_path deleter only supports /tmp/.+ but received provided /home/demo/
Maybe you can try using a hostPath under /tmp/.
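A minimal sketch of that workaround, assuming a /tmp/pv-demo directory on the node; Recycle scrubs the contents when the PVC is deleted, but the directory itself remains:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  storageClassName: "normal"
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Recycle   # a recycler pod runs a basic scrub (rm -rf) on the contents
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/pv-demo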
I have a PersistentVolumeClaim in a Kubernetes cluster. I would like to delete and recreate it in my development environment to, in effect, reset some services that use it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kafka-disk1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 1Gi
What is the best way to accomplish this?
Sorry for this noob question!
the imperative way:
$ kubectl delete pvc kafka-disk1
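and then re-create it from its manifest (the filename kafka-disk1.yaml is an assumption):

$ kubectl apply -f kafka-disk1.yaml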
the declarative way:
you can label your resources and then run kubectl apply -f with the --prune option and a label selector; when you delete the YAML from the manifest directory, kubectl will contact the API server, compare the resources in the files with those in the cluster, and delete the resources that are missing from the files.
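A rough sketch of that workflow, assuming the manifests live in a manifests/ directory and the resources carry a label such as app=kafka:

$ kubectl apply -f manifests/ --prune -l app=kafka
# labelled resources that are no longer present in manifests/ are deleted from the cluster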
What specific changes need to be made to the yaml below in order to get the PersistentVolumeClaim to bind to the PersistentVolume?
An EC2 instance in the same VPC subnet as the Kubernetes worker nodes has an IP of 10.0.0.112 and has been configured to act as an NFS server exporting the /nfsfileshare path.
Creating the PersistentVolume
We created a PersistentVolume pv01 with pv-volume-network.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: "/nfsfileshare"
    server: "10.0.0.112"
and by typing:
kubectl create -f pv-volume-network.yaml
Then when we type kubectl get pv pv01, the pv01 PersistentVolume shows a STATUS of "Available".
Creating the PersistentVolumeClaim
Then we created a PersistentVolumeClaim named my-pv-claim with pv-claim.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
And by typing:
kubectl create -f pv-claim.yaml
STATUS is Pending
But then when we type kubectl get pvc my-pv-claim, we see that the STATUS is Pending. The STATUS remains as Pending for as long as we continued to check back.
Note this question is different from the other linked question, because this problem persists even with quotes around the NFS IP and the path.
Why is this PVC not binding to the PV? What specific changes need to be made to resolve this?
I diagnosed the problem by typing kubectl describe pvc my-pv-claim and looking in the Events section of the results.
Then, based on the reported Events, I was able to fix this by changing storageClassName: manual to storageClassName: slow.
The problem was that the PVC's storageClassName did not match the class specified in the PV, which it must in order to bind.
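A short sketch of that diagnosis and fix, using the names from the manifests above:

kubectl describe pvc my-pv-claim      # the Events section explains why binding fails
# then, in pv-claim.yaml, change:
#   storageClassName: manual
# to:
#   storageClassName: slow            # must match the storageClassName on the PV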