How to remove mounted volumes? PV/PVC won't delete|edit|patch - kubernetes

I am using kubectl apply -f pv.yaml on this basic setup:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  storageClassName: "normal"
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Delete
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /home/demo/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  storageClassName: "normal"
  resources:
    requests:
      storage: 200Mi
  accessModes:
    - ReadWriteOnce
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    name: nginx-demo
spec:
  containers:
    - image: nginx
      name: nginx
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: pv-demo
  volumes:
    - name: pv-demo
      persistentVolumeClaim:
        claimName: pvc-demo
Now I wanted to delete everything so I used: kubectl delete -f pv.yaml
However, the volume still persists on the node at /home/demo and has to be removed manually.
So I tried to patch and remove protection before deletion:
kubectl patch pv pv-demo -p '{"metadata":{"finalizers":null}}'
But the mount still persists on the node.
I also tried to edit the PV and null the finalizers manually; although kubectl reported 'edited', kubectl get pv still shows the finalizers unmodified.
I don't understand what's going on. Why is none of the above working? When I delete, I want the mount folder /home/demo on the node to be deleted as well.

This is expected behavior when using hostPath, as it does not support deletion the way other volume types do. I tested this on kubeadm and GKE clusters, and the mounted directory and files remain intact after removing the PV and PVC.
Taken from the manual about reclaim policies:
Currently, only NFS and HostPath support recycling. AWS EBS, GCE PD,
Azure Disk, and Cinder volumes support deletion.
While Recycle has been marked as deprecated in the documentation since version 1.5, it still works and can clean up your files, but it won't delete the mounted directory itself. It is not ideal, but it is the closest workaround.
IMPORTANT:
To successfully use Recycle you must not delete the PV itself. If you delete the PVC, the controller manager creates a recycler pod that cleans up the volume, and the volume becomes available for binding to the next PVC.
When looking at the controller-manager logs you can see that the host_path deleter rejects deletion of the /home/demo/ directory; it only supports deleting directories matching /tmp/.+. However, after testing, the /tmp directory is also not deleted.
'Warning' reason: 'VolumeFailedDelete' host_path deleter only supports /tmp/.+ but received provided /home/demo/

Maybe you can try with a hostPath under /tmp/, for example:
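A minimal sketch of such a PV (the pv-demo-tmp name and /tmp/demo path are just illustrative; per the log above, the host_path deleter only accepts paths matching /tmp/.+):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo-tmp
spec:
  storageClassName: "normal"
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Delete   # or Recycle, which only scrubs the contents
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/demo/   # under /tmp so the host_path deleter will accept it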

Related

Running multiple pods of go-ethereum with common EFS storage on AWS EKS/kubernetes

I am trying to run a go-ethereum node on AWS EKS. For that I have used StatefulSets with the configuration below.
statefulset.yaml file
Running kubectl apply -f statefulset.yaml creates 2 pods, out of which 1 is running and 1 is in CrashLoopBackOff state.
Pods status
After checking the logs for the second pod, the error I am getting is Fatal: Failed to create the protocol stack: datadir already used by another process.
Error logs I am getting
The problem is mainly due to the pods using the same directory to write geth data on the persistent volume (i.e. the pods are writing to '/data'). If I use a subPath expression and mount each pod's directory to a sub-directory named after the pod (for example '/data/geth-0'), it works fine.
statefulset.yaml with volume mounting to a sub-directory with pod name
But my requirement is that all three pods' data is written to the '/data' directory.
Below is my volume config file.
volume configuration
You need to dynamically provision an access point for each of your stateful pods. First create an EFS storage class that supports dynamic provisioning:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-dyn-sc
provisioner: efs.csi.aws.com
reclaimPolicy: Retain
parameters:
  provisioningMode: efs-ap
  directoryPerms: "700"
  fileSystemId: <get the ID from the EFS console>
Update your spec to use a volume claim template:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: geth
  ...
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: geth
          ...
          volumeMounts:
            - name: geth
              mountPath: /data
          ...
  volumeClaimTemplates:
    - metadata:
        name: geth
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: efs-dyn-sc
        resources:
          requests:
            storage: 5Gi
All pods now write to their own /data.
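As a quick check (assuming the usual volumeClaimTemplates naming of <template>-<statefulset>-<ordinal>, e.g. geth-geth-0), you can confirm that each replica received its own claim:
kubectl get pvc   # expect one Bound claim per replica, e.g. geth-geth-0, geth-geth-1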
The same directory cannot be reused by multiple instances of go-ethereum, so you have the following options:
Use the same persistent volume for all pods and a subdirectory per pod (a sketch of this approach follows below)
Use a separate persistent volume for each pod, so that each can use the same /data path
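A minimal sketch of the per-pod subdirectory option, assuming the container and volume names from the question; it uses the downward API together with subPathExpr so each pod writes to its own directory on the shared volume while still seeing /data inside the container:
containers:
  - name: geth
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
    volumeMounts:
      - name: geth
        mountPath: /data
        subPathExpr: $(POD_NAME)   # e.g. data lands under geth-0/ on the volume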

Mounting a NFS volume by a OpenShift 3.11 PersistentVolume: mount.nfs: mounting failed, reason given by server: No such file or directory

In our OpenShift 3.11 cluster, we are trying to use NFS through a PersistentVolume and an NFS volume previously created on an external NFS storage (an Isilon storage).
We successfully created and applied the PersistentVolume and the PersistentVolumeClaim on the Kubernetes/OpenShift layer. The PVC binds the PV correctly, but when checking the Deployment events we see an error in the NFS mounting phase.
PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tool1pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /tool1shareenv1
    server: tommytheserver.companydomain.priv
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: tool1claimenv1
    namespace: ocpnamespace1
PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tool1claimenv1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  volumeName: tool1pvenv1
When checking the Deployment events, we see a "No such file" error:
MountVolume.SetUp failed for volume "tool1pvenv1" : mount failed:
exit status 32 Mounting command: systemd-run Mounting arguments: --description=Kubernetes transient mount for /var/lib/origin/openshift.local.volumes/pods/f1cb1291-fe12-01ea-bb92-0050123aa39be/volumes/kubernetes.io~nfs/tool1pvenv1 --scope -- mount -t nfs tommytheserver.companydomain.priv:/tool1shareenv1
/var/lib/origin/openshift.local.volumes/pods/f1cb9191-fe73-11ea-bb92-005056ba12be/volumes/kubernetes.io~nfs/tool1pvenv1d Output: Running scope as unit run-74039.scope. mount.nfs: mounting tommytheserver.companydomain.priv:/tool1env1 failed, reason given by server: No such file or directory
We investigated the server and path fields and tried different variations, such as:
PersistentVolumeVersion2:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tool1pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /tool1shareenv1
    server: tommytheserver.companydomain.priv/tool1shareenv1
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: tool1claimenv1
    namespace: ocpnamespace1
but we're still facing the same "No such file" error.
How can we troubleshoot it?
Normally to troubleshoot something like this I would...
Double check that your share path actually exists
Get the IP address of the node where your pod ran and SSH onto it. You can get the IP like this:
kubectl get pod <podname> -o wide -n namespace
Then I would make sure I can connect to the nfs server where the share exists:
telnet <nfs server> port
Following that I would run dmesg to see mounting related errors
I would try to mount the volume myself using the same arguments your error shows, i.e.:
mount -t nfs tommytheserver.companydomain.priv:/tool1shareenv1
It is difficult to provide a specific answer without seeing the results of these troubleshooting steps. But, that is the approach I would take.
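As a rough sketch of those checks run from the node (the /mnt/nfstest mount point is just an illustrative scratch directory; showmount requires the NFS client utilities):
telnet tommytheserver.companydomain.priv 2049       # 2049 is the standard NFS port
showmount -e tommytheserver.companydomain.priv      # list the exports the server actually offers
mkdir -p /mnt/nfstest
mount -t nfs tommytheserver.companydomain.priv:/tool1shareenv1 /mnt/nfstest
dmesg | tail                                        # look for NFS/mount errors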

How can I mount the same persistent volume on multiple pods?

I have a three node GCE cluster and a single-pod GKE deployment with three replicas. I created the PV and PVC like so:
# Create a persistent volume for web content
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginx-content
  labels:
    type: local
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/usr/share/nginx/html"
---
# Request a persistent volume for web content
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-content-claim
  annotations:
    volume.alpha.kubernetes.io/storage-class: default
spec:
  accessModes: [ReadOnlyMany]
  resources:
    requests:
      storage: 5Gi
They are referenced in the container spec like so:
spec:
  containers:
    - image: launcher.gcr.io/google/nginx1
      name: nginx-container
      volumeMounts:
        - name: nginx-content
          mountPath: /usr/share/nginx/html
      ports:
        - containerPort: 80
  volumes:
    - name: nginx-content
      persistentVolumeClaim:
        claimName: nginx-content-claim
Even though I created the volumes as ReadOnlyMany, only one pod can mount the volume at any given time. The rest give "Error 400: RESOURCE_IN_USE_BY_ANOTHER_RESOURCE". How can I make it so all three replicas read the same web content from the same volume?
First I'd like to point out one fundamental discrepancy in your configuration. Note that when you use your PersistentVolumeClaim defined as in your example, you don't use your nginx-content PersistentVolume at all. You can easily verify it by running:
kubectl get pv
on your GKE cluster. You'll notice that apart from your manually created nginx-content PV, there is another one, which was automatically provisioned based on the PVC that you applied.
Note that in your PersistentVolumeClaim definition you're explicitly referring to the default storage class, which has nothing to do with your manually created PV. Actually, even if you completely omit the annotation:
annotations:
  volume.alpha.kubernetes.io/storage-class: default
it will work exactly the same way, namely the default storage class will be used anyway. Using the default storage class on GKE means that GCE Persistent Disk will be used as your volume provisioner. You can read more about it here:
Volume implementations such as gcePersistentDisk are configured
through StorageClass resources. GKE creates a default StorageClass for
you which uses the standard persistent disk type (ext4). The default
StorageClass is used when a PersistentVolumeClaim doesn't specify a
StorageClassName. You can replace the provided default StorageClass
with your own.
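If you want to check which StorageClass is the default on your cluster and which provisioner backs it, a quick look (assuming the GKE default class is named standard, which is the usual case):
kubectl get storageclass
kubectl describe storageclass standard   # the Provisioner field shows e.g. kubernetes.io/gce-pd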
But let's move on to the solution of the problem you're facing.
Solution:
First, I'd like to emphasize that you don't have to use any NFS-like filesystems to achieve your goal.
If you need your PersistentVolume to be available in ReadOnlyMany mode, GCE Persistent Disk is a perfect solution that entirely meets your requirements.
It can be mounted in ro mode by many Pods at the same time and, what is even more important, by Pods scheduled on different GKE nodes. Furthermore, it's really simple to configure and it works on GKE out of the box.
In case you want to use your storage in ReadWriteMany mode, I agree that something like NFS may be the only solution as GCE Persistent Disk doesn't provide such capability.
Let's take a closer look how we can configure it.
We need to start from defining our PVC. This step was actually already done by yourself but you got lost a bit in further steps. Let me explain how it works.
The following configuration is correct (as I mentioned annotations section can be omitted):
# Request a persistent volume for web content
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-content-claim
spec:
  accessModes: [ReadOnlyMany]
  resources:
    requests:
      storage: 5Gi
However I'd like to add one important comment to this. You said:
Even though I created the volumes as ReadOnlyMany, only one pod can
mount the volume at any given time.
Well, actually you didn't. I know it may seem a bit tricky and somewhat surprising, but this is not how defining accessModes really works. In fact it's a widely misunderstood concept. First of all, you cannot define access modes in the PVC in the sense of putting there the constraints you want. Supported access modes are an inherent feature of a particular storage type; they are already defined by the storage provider.
What you actually do in the PVC definition is request a PV that supports the particular access mode or access modes. Note that it's in the form of a list, which means you may provide many different access modes that you want your PV to support.
Basically it's like saying: "Hey! Storage provider! Give me a volume that supports ReadOnlyMany mode." You're asking this way for storage that will satisfy your requirements. Keep in mind, however, that you can be given more than you asked for. And this is also our scenario when asking for a PV that supports ReadOnlyMany mode in GCP. It creates for us a PersistentVolume which meets the requirements we listed in the accessModes section, but it also supports ReadWriteOnce mode. Although we didn't ask for something that also supports ReadWriteOnce, you will probably agree with me that storage which has built-in support for those two modes fully satisfies our request for something that supports ReadOnlyMany. So basically this is the way it works.
The PV that was automatically provisioned by GCP in response to your PVC supports those two accessModes, and if you don't specify explicitly in the Pod or Deployment definition that you want to mount it in read-only mode, by default it is mounted in read-write mode.
You can easily verify it by attaching to the Pod that was able to successfully mount the PersistentVolume:
kubectl exec -ti pod-name -- /bin/bash
and trying to write something on the mounted filesystem.
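For instance, assuming the mount path from the question, a simple write test inside the Pod:
touch /usr/share/nginx/html/test-file   # succeeds only if the volume is mounted read-write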
The error message you get:
"Error 400: RESOURCE_IN_USE_BY_ANOTHER_RESOURCE"
concerns specifically GCE Persistent Disk that is already mounted by one GKE node in ReadWriteOnce mode and it cannot be mounted by another node on which the rest of your Pods were scheduled.
If you want it to be mounted in ReadOnlyMany mode, you need to specify it explicitly in your Deployment definition by adding a readOnly: true statement in the volumes section under the Pod's template specification, like below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: nginx-content
      volumes:
        - name: nginx-content
          persistentVolumeClaim:
            claimName: nginx-content-claim
            readOnly: true
Keep in mind however that to be able to mount it in readOnly mode, first we need to pre-populate such volume with data. Otherwise you'll see another error message, saying that unformatted volume cannot be mounted in read only mode.
The easiest way to do it is by creating a single Pod which will serve only for copying data which was already uploaded to one of our GKE nodes to our destination PV.
Note that pre-populating PersistentVolume with data can be done in many different ways. You can mount in such Pod only your PersistentVolume that you will be using in your Deployment and get your data using curl or wget from some external location saving it directly on your destination PV. It's up to you.
In my example I'm showing how to do it using an additional local volume, which allows us to mount into our Pod a directory, partition or disk available on one of our Kubernetes nodes (in my example I use the directory /var/tmp/test located on one of my GKE nodes). It's a much more flexible solution than hostPath, as we don't have to care about scheduling such a Pod to the particular node that contains the data. The specific node affinity rule is already defined in the PersistentVolume, and the Pod is automatically scheduled on the right node.
To create it we need 3 things:
StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
PersistentVolume definition:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /var/tmp/test
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <gke-node-name>
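To fill in <gke-node-name>, you can list your node names; the kubernetes.io/hostname label normally matches the names shown:
kubectl get nodes
kubectl get nodes --show-labels   # confirm the kubernetes.io/hostname label values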
and finally PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-storage
Then we can create our temporary Pod which will serve only for copying data from our GKE node to our GCE Persistent Disk.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/mnt/source"
          name: mypd
        - mountPath: "/mnt/destination"
          name: nginx-content
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
    - name: nginx-content
      persistentVolumeClaim:
        claimName: nginx-content-claim
The paths you can see above are not really important. The task of this Pod is only to allow us to copy our data to the destination PV. Eventually our PV will be mounted in a completely different path.
Once the Pod is created and both volumes are successfully mounted, we can attach to it by running:
kubectl exec -ti mypod -- /bin/bash
Within the Pod simply run:
cp /mnt/source/* /mnt/destination/
That's all. Now we can exit and delete our temporary Pod:
kubectl delete pod mypod
Once it is gone, we can apply our Deployment and our PersistentVolume finally can be mounted in readOnly mode by all the Pods located on various GKE nodes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: nginx-content
      volumes:
        - name: nginx-content
          persistentVolumeClaim:
            claimName: nginx-content-claim
            readOnly: true
Btw. if you are ok with the fact that your Pods will be scheduled only on one particular node, you can give up on using GCE Persistent Disk at all and switch to the above mentioned local volume. This way all your Pods will be able not only to read from it but also to write to it at the same time. The only caveat is that all those Pods will be running on a single node.
You can achieve this with an NFS-like file system. On Google Cloud, Filestore is the right product for this (managed NFS). You have a tutorial here for achieving your configuration.
You will need to use a shared volume claim of ReadWriteMany (RWX) type if you want to share the volume across different nodes and provide a highly scalable solution, such as using an NFS server.
You can find out how to deploy an NFS server here:
https://www.shebanglabs.io/run-nfs-server-on-ubuntu-20-04/
And then you can mount volumes (directories from NFS server) as follows:
https://www.shebanglabs.io/how-to-set-up-read-write-many-rwx-persistent-volumes-with-nfs-on-kubernetes/
I've used this approach to deliver shared static content between 8+ k8s deployments (200+ pods) serving 1 billion requests a month over Nginx, and it worked perfectly with that NFS setup :)
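As a rough sketch of what the NFS-backed RWX objects look like (the server address, export path, and names below are placeholders, not values from the linked guides):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-shared-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: <nfs-server-address>
    path: /exports/shared
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-shared-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""        # bind to the pre-created PV instead of dynamic provisioning
  resources:
    requests:
      storage: 5Gi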
Google provides an NFS-like filesystem called Google Cloud Filestore. You can mount it on multiple pods.

how to find my persistent volume location

I tried creating a persistent volume using hostPath. I could bind it to a specific node using node affinity, but I didn't provide that. My persistent volume YAML looks like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
  labels:
    type: fast
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /mnt/data
After this I created PVC
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
And finally attached it to the pod.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: thinkingmonster/nettools
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
Now the describe command for the PV or PVC does not tell on which node the volume /mnt/data is actually kept, and I had to SSH to all nodes to locate it.
The pod, however, is smart enough to be created only on the node where Kubernetes mapped the host directory to the PV.
How can I know on which node Kubernetes has created the persistent volume, without having to SSH to the nodes or check where the pod is running?
It's only when a volume is bound to a claim that it's associated with a particular node. HostPath volumes are a bit different than the regular sort, making it a little less clear. When you get the volume claim, the annotations on it should give you a bunch of information, including what you're looking for. In particular, look for the:
volume.kubernetes.io/selected-node: ${NODE_NAME}
annotation on the PVC. You can see the annotations, along with the other computed configuration, by asking the Kubernetes api server for that info:
kubectl get pvc -o yaml -n ${NAMESPACE} ${PVC_NAME}
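If you just want the node name, a quick way to pull it out of the PVC (a sketch; this annotation is set by the scheduler, typically for volumes using delayed binding, so it may be absent in some setups):
kubectl get pvc ${PVC_NAME} -n ${NAMESPACE} -o yaml | grep selected-node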

Kubernetes Delete Persistent Voulmes Created by hostPath

I created a PV and a PVC on docker-desktop, and even after removing the PV and PVC the files still remain. When I re-create them, the same MySQL database gets attached to new pods. How do you manually delete the files created by the hostPath volume? I suppose one way is to just reset Kubernetes in the preferences, but there has to be a less nuclear option.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim2
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
According to the docs, "...Recycle reclaim policy performs a basic scrub (rm -rf /thevolume/*) on the volume and makes it available again for a new claim". Also, "...Currently, only NFS and HostPath support recycling". So, try changing
persistentVolumeReclaimPolicy: Delete
to
persistentVolumeReclaimPolicy: Recycle
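If the PV already exists, you can also switch the policy in place instead of re-applying the manifest; for example:
kubectl patch pv mysql-pv-volume -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}'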
hostPath volumes are simply folders on one of your node's filesystem (in this case /mnt/data). All you need to do is delete that folder from the node that hosted the volume.
If you defined any node affinity on the pod, you need to check that first. Then find out which node the pod is scheduled on, delete the PVC and PV, and then delete the data from the /mnt/data directory.
kubectl get pod -o wide | grep <pod_name>
Here you will get on which node it is scheduled.
kubectl delete deploy or statefulset <deploy_name>
kubectl get pv,pvc
kubectl delete pv <pv_name>
kubectl delete pvc <pvc_name>
Now go to that node and delete the data from /mnt/data.
One more option: you can set persistentVolumeReclaimPolicy to Retain or Delete.