We are trying to configure local storage in Rancher, and the storage provisioner was configured successfully.
But when I create a PVC using the local-storage StorageClass, it stays in Pending state with the error below.
Normal ExternalProvisioning 4m31s (x62 over 19m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator
Normal Provisioning 3m47s (x7 over 19m) rancher.io/local-path_local-path-provisioner-5f8f96cb66-8s9dj_f1bdad61-eb48-4a7a-918c-6827e75d6a27 External provisioner is provisioning volume for claim "local-path-storage/test-pod-pvc-local"
Warning ProvisioningFailed 3m47s (x7 over 19m) rancher.io/local-path_local-path-provisioner-5f8f96cb66-8s9dj_f1bdad61-eb48-4a7a-918c-6827e75d6a27 failed to provision volume with StorageClass "local-path": configuration error, no node was specified
[root@n01-deployer local]#
StorageClass configuration:
[root@n01-deployer local]# kubectl edit sc local-path
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"local-path"},"provisioner":"rancher.io/local-path","reclaimPolicy":"Delete","volumeBindingMode":"Immediate"}
  creationTimestamp: "2022-02-07T16:12:58Z"
  name: local-path
  resourceVersion: "1501275"
  uid: e8060018-e4a8-47f9-8dd4-c63f28eef3f2
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: Immediate
PVC configuration:
[root@n01-deployer local]# cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: local-path-storage
  name: test-pod-pvc-local-1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi
  volumeMode: Filesystem
I have mounted the local volume on all the worker nodes, but my PVC is still not getting created. Can someone please help me solve this issue?
The key to your problem was updating the PSP (PodSecurityPolicy).
I would like to add something about PSP:
According to this documentation and this blog:
As of Kubernetes version 1.21, PodSecurityPolicy (beta) is deprecated. The Kubernetes project aims to shut the feature down in version 1.25.
However, I haven't found any deprecation information for Rancher specifically (its documentation is up to date).
Rancher ships with two default Pod Security Policies (PSPs): the restricted and unrestricted policies.
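I am not certain which exact PSP change was needed in your setup, but as a rough, hypothetical sketch, a policy that allows the hostPath volumes used by the local-path provisioner could look like this (the policy name and the allowed path prefix are assumptions, not taken from your cluster):
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: local-path-psp                          # assumed name
spec:
  privileged: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - hostPath
    - secret
    - configMap
  allowedHostPaths:
    - pathPrefix: /opt/local-path-provisioner   # assumed data path of the provisioner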
See also:
The benefits of Pod Security Policy
Secure Kubernetes cluster PSP
Pod Security Policies
I am working on deploying a Hyperledger Fabric test network on a Kubernetes minikube cluster. I intend to use a PersistentVolume to share crypto-config and channel artifacts among the various peers and orderers. Following are my PersistentVolume.yaml and PersistentVolumeClaim.yaml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: persistent-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: "/nfsroot"
    server: "3.128.203.245"
    readOnly: false
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Following is the pod where the above claim is mounted at /data:
kind: Pod
apiVersion: v1
metadata:
  name: test-shell
  labels:
    name: test-shell
spec:
  containers:
    - name: shell
      image: ubuntu
      command: ["/bin/bash", "-c", "while true ; do sleep 10 ; done"]
      volumeMounts:
        - mountPath: "/data"
          name: pv
  volumes:
    - name: pv
      persistentVolumeClaim:
        claimName: persistent-volume-claim
NFS is set up on my EC2 instance. I have verified the NFS server is working fine and I was able to mount it inside minikube. I don't understand what I am doing wrong, but any file present inside 3.128.203.245:/nfsroot is not present in test-shell:/data.
What point am I missing? I even tried a hostPath mount, but to no avail. Please help me out.
I think you should check the following things to verify whether NFS is mounted successfully or not.
Run this command on the node where you want to mount:
$ showmount -e nfs-server-ip
In my case: $ showmount -e 172.16.10.161
Export list for 172.16.10.161:
/opt/share *
Use the $ df -hT command to see whether NFS is mounted or not. In my case it gives the output: 172.16.10.161:/opt/share nfs4 91G 32G 55G 37% /opt/share
If it is not mounted, use the following command:
$ sudo mount -t nfs 172.16.10.161:/opt/share /opt/share
If the above commands show an error, check whether the firewall is allowing NFS or not:
$ sudo ufw status
If not, allow it using the command:
$ sudo ufw allow from nfs-server-ip to any port nfs
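To make the mount persist across reboots, you could also add a line to /etc/fstab on the node; a minimal sketch, assuming the same server and mount point as in the example above:
# /etc/fstab
172.16.10.161:/opt/share  /opt/share  nfs  defaults  0  0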
I made the same setup and I don't face any issues; my Fabric cluster on k8s is running successfully. The Hyperledger Fabric k8s YAML files can be found in my GitHub repo. There I have deployed a consortium of banks on Hyperledger Fabric, which is a dynamic multi-host blockchain network, meaning you can add orgs and peers, join peers, create channels, and install and instantiate chaincode on the fly in an existing running blockchain network.
By default in minikube you should have a default StorageClass:
Each StorageClass contains the fields provisioner, parameters, and reclaimPolicy, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned.
For example, NFS doesn't provide an internal provisioner, but an external provisioner can be used. There are also cases when 3rd party storage vendors provide their own external provisioner.
Change the default StorageClass
In your example this default StorageClass can lead to problems.
In order to list enabled addons in minikube please use:
minikube addons list
To list all StorageClasses in your cluster use:
kubectl get sc
NAME PROVISIONER
standard (default) k8s.io/minikube-hostpath
Please note that at most one StorageClass can be marked as default. If two or more of them are marked as default, a PersistentVolumeClaim without storageClassName explicitly specified cannot be created.
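For reference, this is how a StorageClass can be marked (or unmarked) as default, following the Kubernetes docs; standard is the minikube class shown above:
kubectl patch storageclass standard \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'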
In your example the most probable scenario is that you already have a default StorageClass. Applying those resources caused: new PV creation (without a StorageClass) and new PVC creation (with a reference to the existing default StorageClass). In this situation there is no reference between your custom PV and PVC, so no binding happens. As an example, please take a look:
kubectl get pv,pvc,sc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/nfs 3Gi RWX Retain Available 50m
persistentvolume/pvc-8aeb802f-cd95-4933-9224-eb467aaa9871 1Gi RWX Delete Bound default/pvc-nfs standard 50m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pvc-nfs Bound pvc-8aeb802f-cd95-4933-9224-eb467aaa9871 1Gi RWX standard 50m
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/standard (default) k8s.io/minikube-hostpath Delete Immediate false 103m
This example will not work due to:
new persistentvolume/nfs has been created (without reference to pvc)
new persistentvolume/pvc-8aeb802f-cd95-4933-9224-eb467aaa9871 has been created using default StorageClass. In the Claim section we can notice that this pv has been created due to dynamic pv provisioning using default StorageClass with reference to default/pvc-nfs claim (persistentvolumeclaim/pvc-nfs ).
Solution 1.
According to the information from the comments:
Also I am able to connect to it within my minikube and also my actual ubuntu system.
If you are able to mount this NFS share from inside the minikube host, i.e. the NFS share is mounted on your minikube node, please try this example with a hostPath volume used directly from your pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-shell
  namespace: default
spec:
  volumes:
    - name: pv
      hostPath:
        path: /path/shares # path to nfs mount point on minikube node
  containers:
    - name: shell
      image: ubuntu
      command: ["/bin/bash", "-c", "sleep 1000"]
      volumeMounts:
        - name: pv
          mountPath: /data
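Once the pod is Running, a quick way to verify the data is visible (assuming the pod name from the example above):
kubectl exec -it test-shell -- ls -la /data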
Solution 2.
If you are using PV/PVC approach:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: persistent-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: "" # Empty string must be explicitly set, otherwise the default StorageClass will be used (or set a custom storageClassName)
  nfs:
    path: "/nfsroot"
    server: "3.128.203.245"
    readOnly: false
  claimRef:
    name: persistent-volume-claim
    namespace: default
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persistent-volume-claim
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: "" # Empty string must be explicitly set, otherwise the default StorageClass will be used (or set a custom storageClassName)
  volumeName: persistent-volume
Note:
If you are not referencing any provisioner associated with your StorageClass:
Helper programs relating to the volume type may be required for consumption of a PersistentVolume within a cluster. In this example, the PersistentVolume is of type NFS and the helper program /sbin/mount.nfs is required to support the mounting of NFS filesystems.
Please keep in mind that when you create a PVC, the Kubernetes persistent-volume controller tries to bind the PVC to a matching PV. During this process different factors are taken into account, such as storageClassName (default/custom), accessModes, claimRef, and volumeName.
In this case you can use:
PersistentVolume.spec.claimRef.name: persistent-volume-claim
PersistentVolumeClaim.spec.volumeName: persistent-volume
Note:
The control plane can bind PersistentVolumeClaims to matching PersistentVolumes in the cluster. However, if you want a PVC to bind to a specific PV, you need to pre-bind them.
By specifying a PersistentVolume in a PersistentVolumeClaim, you declare a binding between that specific PV and PVC. If the PersistentVolume exists and has not reserved PersistentVolumeClaims through its claimRef field, then the PersistentVolume and PersistentVolumeClaim will be bound.
The binding happens regardless of some volume matching criteria, including node affinity. The control plane still checks that storage class, access modes, and requested storage size are valid.
Once the PV/PVC have been created, or in case of any problem with PV/PVC binding, please use the following commands to figure out the current state:
kubectl get pv,pvc,sc
kubectl describe pv
kubectl describe pvc
kubectl describe pod
kubectl get events
I have configured a Kubernetes cluster on bare metal using kubeadm. Everything works well and I can deploy an example nginx app. The problem comes when I want to deploy a StatefulSet with volumeClaimTemplates as shown below:
volumeClaimTemplates:
  - metadata:
      name: jackrabbit-volume
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
      storageClassName: jackrabbit
and the StorageClass:
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    meta.helm.sh/release-name: chart-1591185140
    meta.helm.sh/release-namespace: gluu
    storageclass.beta.kubernetes.io/is-default-class: "false"
  labels:
    app.kubernetes.io/managed-by: Helm
    storage: jackrabbit
  managedFields:
    - apiVersion: storage.k8s.io/v1
mountOptions:
  - debug
parameters:
  fsType: ext4
  pool: default
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Retain
I have also tried adding a PersistentVolume with a hostPath spec, but it is still not working.
---- ------ ---- ---- -------
Warning ProvisioningFailed 82s (x3 over 98s) persistentvolume-controller no volume plugin matched
In your StorageClass you are using kubernetes.io/no-provisioner and this means you are trying to use Local Volume Plugin.
Your cluster doesn't know kubernetes.io/no-provisioner yet, and that's why no volume plugin matched is presented.
According to documentation, this Plugin is not included in the kubernetes.io as an Internal Provisioner. Here you can see a chart listing all Provisioners, their Plugin Names, if they are included in the Internal Provisioner and a link to a config example.
In the documentation we can read:
You are not restricted to specifying the “internal” provisioners listed here (whose names are prefixed with kubernetes.io and shipped alongside Kubernetes). You can also run and specify external provisioners, which are independent programs that follow a specification defined by Kubernetes. Authors of external provisioners have full discretion over where their code lives, how the provisioner is shipped, how it needs to be run, what volume plugin it uses (including Flex), etc. The repository kubernetes-sigs/sig-storage-lib-external-provisioner houses a library for writing external provisioners that implements the bulk of the specification. Some external provisioners are listed under the repository kubernetes-incubator/external-storage.
For example, NFS doesn’t provide an internal provisioner, but an external provisioner can be used. There are also cases when 3rd party storage vendors provide their own external provisioner.
The local external provisioner is maintained in this GitHub repository, and there you can find the Getting Started guide that will lead you through how to use it.
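As a minimal sketch of what the StorageClass could look like for statically provisioned local volumes (the name jackrabbit matches your claim template; volumeBindingMode: WaitForFirstConsumer follows the local-volume documentation, and I left out the fsType/pool parameters since there is no provisioner to consume them):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: jackrabbit
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer   # delay binding until a pod is scheduled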
The no volume plugin matched error from the persistentvolume-controller usually appears (when describing the PVC) when there was a problem with the allocation of a PV to the PVC.
To verify this - run kubectl describe on the pods / statefulsets - if you see the error of:
... pod has unbound immediate PersistentVolumeClaims
This can be caused by multiple reasons when working with a local volume - a few examples:
1 ) The PVC is requesting too much storage and there is no PV that can satisfy this value (the PV storage value needs to be equal to or higher than the PVC storage request).
2 ) Not every PVC has a corresponding PV - in that case we can see that some resources will be successful and some will be stuck:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mongo-0 2/2 Running 0 3m38s
mongo-1 0/2 Pending 0 3m23s
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mongo-persistent-storage-mongo-0 Bound mongo-local-pv 50Gi RWO local-storage 80m
mongo-persistent-storage-mongo-1 Pending
3 ) The scheduler failed to match a node to the PV.
When using local volumes, the nodeAffinity of the PV is required and should reference an existing node in the cluster.
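For illustration, a hedged sketch of such a local PV; the capacity must cover the 2Gi request from the claim template, while the path and node name are placeholders you would need to adjust:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jackrabbit-pv-0              # placeholder name
spec:
  capacity:
    storage: 2Gi                     # >= the PVC request
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: jackrabbit
  local:
    path: /mnt/disks/vol1            # placeholder: existing directory or disk on the node
  nodeAffinity:                      # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1           # placeholder: an existing node name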
I am trying to deploy a PersistentVolume for 3 pods to work on, and I want to use the cluster's node storage, i.e. not external storage like an EBS volume spun up on demand.
To achieve the above I did the following experiments:
1) I applied only the PVC resource defined below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: pv1
  name: pv1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
status: {}
This spins up storage via the default StorageClass, which in my case was a DigitalOcean volume. So it created a 1Gi volume.
2) Created a PV resource and a PVC resource like below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: pv1
  name: pv1
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
status: {}
After this I see my claim is bound.
pavan@p1:~$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pv1 Bound task-pv-volume 10Gi RWO manual 2m5s
pavan@p1:~$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
task-pv-volume 10Gi RWO Retain Bound default/pv1 manual 118m
pavan@p1:~$ kubectl describe pvc
Name: pv1
Namespace: default
StorageClass: manual
Status: Bound
Volume: task-pv-volume
Labels: io.kompose.service=pv1
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"io.kompose.service":"mo...
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 10Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 28s (x8 over 2m2s) persistentvolume-controller storageclass.storage.k8s.io "manual" not found
Below are my questions that I am hoping to get answers/pointers to:
Regarding the above warning that the storage class could not be found: do I need to create one? If so, can you tell me why and how, or give me a pointer? (Somehow this link misses stating that: https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/)
Notice the PV has a storage capacity of 10Gi and the PVC requests 1Gi, yet the PVC was bound with 10Gi capacity. Can't I share the same PV capacity with other PVCs?
For question 2): if I have to create different PVs for different PVCs with the required capacity, do I have to create a StorageClass as well? Or can I use the same storage class and use selectors to select the corresponding PV?
I was trying to reproduce all behavior to answer all your questions. However, I don't have access to DigitalOcean, so I tested it on GKE.
Regarding the above warning that the storage class could not be found: do I need to create one?
According to the documentation and best practices, it is highly recommended to create a StorageClass and later create PVs / PVCs based on it. However, there is something called "manual provisioning", which is what you did in this case.
Manual provisioning is when you need to manually create a PV first, and then a PVC with matching spec.storageClassName: field. Examples:
If you create a PVC without default storageclass, PV and storageClassName parameter (afaik kubeadm is not providing default storageclass) - PVC will be stuck on Pending with event: no persistent volumes available for this claim and no storage class is set.
If you create a PVC with default storageclass setup on cluster but without storageClassName parameter it will create it based on default storageclass.
If you create a PVC with a storageClassName parameter that has no matching StorageClass (whether in the cloud, Minikube or kubeadm), the PVC will also be Pending, with the warning: storageclass.storage.k8s.io "manual" not found.
However, if you create PV with the same storageClassName parameter, it will be bound in a while.
Example:
$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/task-pv-volume 10Gi RWO Retain Available manual 4s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pv1 Pending manual 4m12s
...
kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/task-pv-volume 10Gi RWO Retain Bound default/pv1 manual 9s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pv1 Bound task-pv-volume 10Gi RWO manual 4m17s
The disadvantage of manual provisioning is that you have to create a PV for each PVC. If you use a StorageClass, you can just create the PVC, as in the sketch below.
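For contrast, a minimal sketch of a PVC that relies on dynamic provisioning via the cluster's default StorageClass (the claim name is hypothetical; storageClassName is simply omitted):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-claim        # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi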
If so, can you tell me why and how, or give me a pointer?
You can use the documentation examples or check here. As you are using a cloud provider with a default StorageClass, you can export it to YAML with:
$ kubectl get sc -o yaml >> storageclass.yaml
Or if you have more than one, you have to specify which one. The names of the StorageClasses can be obtained with $ kubectl get sc.
Later you can refer to K8s API to customize your storageclass.
Notice the PV has a storage capacity of 10Gi and the PVC requests 1Gi, yet the PVC was bound with 10Gi capacity.
You manually created a PV with 10Gi and the PVC requested 1Gi. As PVC and PV bind 1:1, the PVC searched for a PV that meets all its conditions and bound to it. The PVC (pv1) requested 1Gi and the PV (task-pv-volume) met those requirements, so Kubernetes bound them. Unfortunately, much of the space is wasted in this case.
Can't I share the same PV capacity with other PVCs?
Unfortunately, you cannot bind 2 PVCs to 1 PV, as the relationship between PVC and PV is 1:1, but you can configure many pods/deployments to use the same PVC (see the sketch below).
I can advise you to look at this stackoverflow case as it explains very well AccessMode specifics.
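A minimal sketch of a second pod reusing the same claim (the pod and container names are hypothetical); note that with ReadWriteOnce both pods must end up on the same node:
apiVersion: v1
kind: Pod
metadata:
  name: app-b                    # hypothetical name
spec:
  containers:
    - name: app
      image: ubuntu
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: shared
          mountPath: /data
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: pv1           # same PVC as in your example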
If I have to create different PVs for different PVCs with the required capacity, do I have to create a StorageClass as well? Or can I use the same storage class and use selectors to select the corresponding PV?
As I mentioned before, if you manually create a PV with a specific size and bind a PVC to it that requests less, the extra space will be wasted. So you have to create the PV and PVC with the same resource request, or let the StorageClass provision storage based on the PVC request.
Yes, you have to create a storage class (check), but I guess DigitalOcean provides a default storage class; you can verify it with:
kubectl get storageclasses
You can share one PV, but only with read-only access; if you need write access for all pods you have to create multiple PVs (check).
I know there are lots of discussions around this topic, but somehow I cannot get it working.
I am trying to install an Elasticsearch cluster with a StatefulSet and an NFS persistent volume on bare metal. My PV, PVC and SC configs are as below:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-storage-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  nfs:
    server: 172.23.240.85
    path: /servers/scratch50g/vishalg/kube
The StatefulSet has the following PVC section defined:
volumeClaimTemplates:
  - metadata:
      name: beehive-pv-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: manual
      resources:
        requests:
          storage: 1Gi
Now, when I try to deploy it, I get the following error on the StatefulSet:
pod has unbound immediate PersistentVolumeClaims
When I get the pvc events, it shows:
Warning ProvisioningFailed 3s (x2 over 12s) persistentvolume-controller no volume plugin matched
I tried not giving any storage class (did not create it) and removed it from both the PV and the PVC altogether. This time, I get the below error:
no persistent volumes available for this claim and no storage class is set
I also tried setting storageClassName to "" in the PVC and not mentioning it in the PV, but that did not work either.
Please help here. What more can I check to get it working?
Could it be related to the NFS server and path (in case, by chance, it is specified incorrectly), even though I see the PV created successfully?
EDIT1:
One issue was that the accessModes of the PVC differed from the accessModes of the PV. I corrected that, and now my PVC shows as Bound.
But even now, I get the following error:
pod has unbound immediate PersistentVolumeClaims
I also tried using a local volume, but got the same error again. The PV and PVC are bound correctly, but the StatefulSet shows the above error.
When using a hostPath volume, everything works fine.
Is there anything fundamental that I am doing wrong here?
EDIT2
I got the local volume working. It takes some time for the pod to bind to the PVC. After waiting a couple of minutes, my pod got bound to the PVC.
I think the NFS binding issue may be more permission related. But still, k8s should give out some error for that.
Could you try matching the accessModes as well?
The PVC is targeting a ReadWriteOnce volume right now.
And if you mount the NFS volume on the node manually, any access/security issues can be debugged.
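For example, a quick manual check on the node, using the server and export path from your PV (the mount point /mnt/nfs-test is just a placeholder):
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs 172.23.240.85:/servers/scratch50g/vishalg/kube /mnt/nfs-test
touch /mnt/nfs-test/write-test   # also verifies write permissions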
I'm trying to set up a volume to use with Mongo on k8s.
I use kubectl create -f pv.yaml to create the volume.
pv.yaml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pvvolume
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/nfs"
  claimRef:
    kind: PersistentVolumeClaim
    namespace: default
    name: pvvolume
I then deploy this StatefulSet that has pods making PVCs to this volume.
My volume seems to have been created without problem, I'm expecting it to just use the storage of the host node.
When I try to deploy I get the following error:
Unable to mount volumes for pod "mongo-0_default(2735bc71-5201-11e8-804f-02dffec55fd2)": timeout expired waiting for volumes to attach/mount for pod "default"/"mongo-0". list of unattached/unmounted volumes=[mongo-persistent-storage]
Have I missed a step in setting up my persistent volume?
A persistent volume is just the declaration of availability of some storage inside your kubernetes cluster. There is no binding with your pod at this stage.
Since your pod is deployed through a StatefulSet, there should be in your cluster one or more PersistentVolumeClaims which are the objects that connect a pod with a PersistentVolume.
In order to manually bind a PV with a PVC you need to edit your PVC by adding the following in its spec section:
volumeName: "<your persistent volume name>"
Here is an explanation of how this process works: https://docs.openshift.org/latest/dev_guide/persistent_volumes.html#persistent-volumes-volumes-and-claim-prebinding
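A minimal sketch of such a PVC for the PV above; the claim name is assumed here, since with a StatefulSet the actual claim names are generated from the volumeClaimTemplates:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-persistent-storage-mongo-0   # assumed: the claim created by the StatefulSet
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi
  volumeName: pvvolume                     # pre-binds this claim to the PV above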
My case is an edge case, and I doubt that you will reach it. However, I will describe it because it cost me a lot of grey hairs - and maybe it will save yours.
This same error occurred for me despite the PV and PVC being bound. The pod was constantly in ContainerCreating state, yet kubectl get events threw the error asked about in this question.
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
sewage-db 5Ti RWO Retain Bound global-sewage/sewage-db nfs 3h40m
$ kubectl get pvc -n global-sewage
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
sewage-db Bound sewage-db 5Ti RWO nfs 3h39m
After rebooting the server, it turned out that part of the 32GiB of physical RAM was corrupted. Removing the faulty memory fixed the issue.