Delete PersistentVolumeClaim in Kubernetes cluster

I have a PersistentVolumeClaim in a Kubernetes cluster. In my development environment I would like to delete and recreate it, as a way of resetting some services that use it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kafka-disk1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 1Gi
What is the best way to accomplish this?
Sorry for this noob question!

the imperative way:
$ kubectl delete pvc kafka-disk1
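To recreate it afterwards, reapply the manifest (assuming it is saved as kafka-pvc.yaml; substitute your actual filename):
$ kubectl apply -f kafka-pvc.yaml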
the declarative way:
you can label your resources and then run kubectl apply -f with the --prune option and a label selector. When you delete a resource's YAML from the manifest directory, kubectl contacts the API server, compares the resources in the files with those in the cluster, and deletes any resource that is missing from the files (see the sketch below).
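A minimal sketch, assuming all manifests live in a ./manifests directory and every resource carries a hypothetical app=kafka label:
$ kubectl apply -f ./manifests/ --prune -l app=kafka
# remove the PVC's yaml file from ./manifests, then rerun the same command:
$ kubectl apply -f ./manifests/ --prune -l app=kafka
# kubectl compares the files with the cluster and deletes the now-missing PVC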


How to remove mounted volumes? PV/PVC won't delete/edit/patch

I am using kubectl apply -f pv.yaml on this basic setup:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  storageClassName: "normal"
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Delete
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /home/demo/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  storageClassName: "normal"
  resources:
    requests:
      storage: 200Mi
  accessModes:
    - ReadWriteOnce
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    name: nginx-demo
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: pv-demo
  volumes:
  - name: pv-demo
    persistentVolumeClaim:
      claimName: pvc-demo
Now I wanted to delete everything so I used: kubectl delete -f pv.yaml
However, the volume still persists on the node at /home/demo and has to be removed manually.
So I tried to patch and remove protection before deletion:
kubectl patch pv pv-demo -p '{"metadata":{"finalizers":null}}'
But the mount still persists on the node.
I also tried to edit the PV and null the finalizers manually; although kubectl reported 'edited', kubectl get pv still shows the finalizers unmodified.
I don't understand what's going on. Why is none of the above working? I want the mount folder /home/demo on the node to be deleted as well when I delete the PV.
This is expected behavior when using hostPath, as it does not support deletion the way other volume types do. I tested this on kubeadm and GKE clusters, and the mounted directory and files remain intact after removing the PV and PVC.
Taken from the manual about reclaim policies:
Currently, only NFS and HostPath support recycling. AWS EBS, GCE PD, Azure Disk, and Cinder volumes support deletion.
While Recycle has been documented as deprecated since version 1.5, it still works and can clean up your files, but it won't delete your mounted directory. It is not ideal, but it is the closest workaround.
IMPORTANT:
To successfully use Recycle, you must not delete the PV itself. If you delete only the PVC, the controller manager creates a recycler pod that scrubs the volume, and the volume becomes available for binding to the next PVC (example below).
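For example, with the names from the question and the PV recreated with persistentVolumeReclaimPolicy: Recycle:
kubectl delete pvc pvc-demo
kubectl get pv pv-demo
# once the recycler pod finishes the scrub, the PV's STATUS returns to Available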
When looking at the controller-manager logs you can see that the host_path deleter rejects deletion of the /home/demo/ directory; it only supports deleting directories matching /tmp/.+. However, after testing this, /tmp is also not being deleted.
'Warning' reason: 'VolumeFailedDelete' host_path deleter only supports /tmp/.+ but received provided /home/demo/
Maybe you can try with a hostPath under /tmp/, as sketched below.
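A sketch of that suggestion, reusing the names from the question; the /tmp/pv-demo path is an assumption:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  storageClassName: "normal"
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Recycle
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/pv-demo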

Kubernetes Delete Persistent Volumes Created by hostPath

I created a PV and a PVC on docker-desktop, and even after removing the PV and PVC the files still remain. When I re-create them, the same mysql database gets attached to new pods. How do you manually delete the files created by the hostPath? I suppose one way is to just reset Kubernetes in the preferences, but there has to be a less nuclear option.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim2
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
According to the docs, "...Recycle reclaim policy performs a basic scrub (rm -rf /thevolume/*) on the volume and makes it available again for a new claim". Also, "...Currently, only NFS and HostPath support recycling". So, try changing
persistentVolumeReclaimPolicy: Delete
to
persistentVolumeReclaimPolicy: Recycle
hostPath volumes are simply folders on one of your nodes' filesystems (in this case /mnt/data). All you need to do is delete that folder from the node that hosted the volume.
If you defined any node affinity for the pod, check that first. Then find out which node the pod is scheduled on, delete the PVC and PV, and finally delete the data from the /mnt/data directory.
kubectl get pod -o wide | grep <pod_name>
This shows which node the pod is scheduled on.
kubectl delete deploy <deploy_name> (or kubectl delete statefulset <statefulset_name>)
kubectl get pv,pvc
kubectl delete pv <pv_name>
kubectl delete pvc <pvc_name>
Now go to that node and delete the data from /mnt/data, for example as below.
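For example, assuming you have SSH access to the node (name taken from the kubectl get pod -o wide output above):
ssh <node_name>
sudo rm -rf /mnt/data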
One more way to handle this: you can set persistentVolumeReclaimPolicy to Retain or Delete, as shown below.
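For example, to change the policy on an existing PV, the standard kubectl patch syntax is:
kubectl patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'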

Kubernetes upgrade of pod with PersistentVolumeClaim in ReadWriteOnce accessMode

I have a postgres pod using a PersistentVolumeClaim for the database storage, in ReadWriteOnce mode.
Upgrading the pod using Helm is tricky because the new pod is blocked until the old pod releases the claim, while Helm won't remove the old pod until the new pod is ready.
How does one normally handle this problem? I can't seem to find documentation on this anywhere, and I would think that this is a common problem.
This is my pvc:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgdata-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 20Gi
  selector:
    matchLabels:
      app: postgres
When you are using ReadWriteOnce mode, my proposal is to use a StatefulSet with volumeClaimTemplates; I tested this successfully (although without Helm).
As an example, please take a look at this:
https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/
Please share your results and findings; a minimal sketch follows.
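A minimal sketch of that approach (not tested with Helm; the image tag, mount path, and pgdata name are illustrative assumptions, with the size and storage class taken from your PVC):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:11        # illustrative image tag
        volumeMounts:
        - name: pgdata
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: pgdata
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: standard
      resources:
        requests:
          storage: 20Gi
Because a StatefulSet deletes the old pod before starting its replacement, and the replacement reuses the same claim, the new pod is not blocked waiting for a ReadWriteOnce volume held by its predecessor.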

PersistentVolumeClaim Pending for NFS Volume

What specific changes need to be made to the yaml below in order to get the PersistentVolumeClaim to bind to the PersistentVolume?
An EC2 instance in the same VPC subnet as the Kubernetes worker nodes has an IP of 10.0.0.112 and has been configured to act as an NFS server exporting the /nfsfileshare path.
Creating the PersistentVolume
We created a PersistentVolume pv01 with pv-volume-network.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: "/nfsfileshare"
    server: "10.0.0.112"
and by typing:
kubectl create -f pv-volume-network.yaml
Then when we type kubectl get pv pv01, the pv01 PersistentVolume shows a STATUS of "Available".
Creating the PersistentVolumeClaim
Then we created a PersistentVolumeClaim named my-pv-claim with pv-claim.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
And by typing:
kubectl create -f pv-claim.yaml
STATUS is Pending
But then when we type kubectl get pvc my-pv-claim, we see that the STATUS is Pending, and it remains Pending for as long as we continue to check back.
Note this question is different from this other question, because the problem persists even with quotes around the NFS IP and the path.
Why is this PVC not binding to the PV? What specific changes need to be made to resolve this?
I diagnosed the problem by typing kubectl describe pvc my-pv-claim and looking in the Events section of the results.
Then, based on the reported Events, I was able to fix this by changing storageClassName: manual to storageClassName: slow.
The problem was that the PVC's storageClassName did not match the class specified in the PV, which binding requires. The claim with the fix applied is below.
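The corrected claim, with only the storageClassName changed:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pv-claim
spec:
  storageClassName: slow
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi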

Kubernetes: Dynamic Storage Provisioning using host-path

My question is about PersistentVolumeClaim
I have a one-node cluster set up on AWS EC2.
I am trying to create a storage class using kubernetes.io/host-path as the provisioner.
The YAML file content for the storage class is as follows:
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  namespace: kube-system
  name: my-storage
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "false"
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: kubernetes.io/host-path
The YAML file content for the PersistentVolumeClaim is as follows:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: my-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
When I try to create the storage class and PVC on minikube, it works: the volume is created on minikube under /tmp/hostpath_volume/.
But when I try the same thing on the one-node cluster on AWS EC2, I get the following error:
Failed to create provisioner: Provisioning in volume plugin "kubernetes.io/host-path" is disabled
I can see this error when I run kubectl describe pvc task-pv-claim. Also, since no PV is created, the claim stays in the Pending state.
I found that kube-controller-manager has --enable-dynamic-provisioning and --enable-hostpath-provisioner among its options, but I don't know how to use them.
It seems you might not be running the provisioner itself, so there's nothing to actually do the work of creating the hostpath directory.
Take a look here
The way this works is that the hostpath provisioner reads from the kubernetes API, and watches for you to create a storage class (which you've done) and a persistentvolumeclaim (also done).
When those exist, the provisioner (which is running as a pod) will go and execute a mkdir to create the hostPath directory.
Run the following:
kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/docs/demo/hostpath-provisioner/pod.yaml
And then recreate your StorageClass and PVC, for example:
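Assuming the manifests above are saved as my-storage.yaml and pvc.yaml (the filenames are assumptions):
kubectl delete -f my-storage.yaml -f pvc.yaml
kubectl apply -f my-storage.yaml -f pvc.yaml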