NFS-based mount fails in Kubernetes

I'm using Kubernetes v1.24.7 on Ubuntu 18.04.6 LTS and I'm facing a problem with an NFS persistent volume mount. When I try to deploy my Jenkins deployment file, it always fails with the errors below.
$ kubectl describe pod jenkins-6786789d5d-m26zw -n jenkins
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 25m default-scheduler Successfully assigned jenkins/jenkins-6786789d5d-m26zw to worker-3
Warning FailedMount 5m31s (x2 over 14m) kubelet Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[kube-api-access-65npd data]: timed out waiting for the condition
Warning FailedMount 3m17s (x8 over 23m) kubelet Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data kube-api-access-65npd]: timed out waiting for the condition
Warning FailedMount 3m6s (x19 over 25m) kubelet MountVolume.SetUp failed for volume "pv-nfs" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs -o nfsvers=4.1 192.168.72.136:/mnt/nfs/stg/jenkins /var/lib/kubelet/pods/853c44ed-bf2b-4e6a-b666-c1adab7f7f4b/volumes/kubernetes.io~nfs/pv-nfs
Output: mount.nfs: mounting 192.168.72.136:/mnt/nfs/stg/jenkins failed, reason given by server: No such file or directory
The external NFS mount path below was provided by our IT storage administrator.
192.168.72.136:/nfs-volume
The following packages have already been installed on the master and worker nodes.
apt install nfs-common
apt install cifs-utils
apt install nfs-kernel-server
On the master and worker host machines I added the line below to /etc/fstab, and I could mount the NFS volume manually.
192.168.72.136:/nfs-volume /mnt/nfs/stg/ nfs defaults 0 0
However, the same problem persists when deploying the application in Kubernetes. I also tried the following option in /etc/fstab, with the same result.
192.168.72.136:/nfs-volume /mnt/nfs/stg/ nfs rw,hard,intr 0 0
My PV and PVC status:
$ kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
pv-nfs   100Gi      RWX            Retain           Bound    jenkins/pvc-nfs   nfs                     11s
$ kubectl get pvc -n jenkins
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs   Bound    pv-nfs   100Gi      RWX            nfs            17s
My PersistentVolume and Deployment YAML are as follows.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
  labels:
    type: pv-nfs
spec:
  storageClassName: nfs
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - nfsvers=4.0
  nfs:
    server: 192.168.72.136
    path: "/mnt/nfs/stg/jenkins"
    readOnly: false
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: jenkins
  labels:
    app: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      securityContext:
        fsGroup: 0
        runAsUser: 0
      serviceAccountName: admin
      containers:
        - name: jenkins
          image: jenkins/jenkins:latest
          securityContext:
            privileged: true
            runAsUser: 0
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: data
              mountPath: /var/jenkins_home
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: pvc-nfs
The directory /mnt/nfs/stg/jenkins exists on the NFS mount. Please let me know what I'm missing here.
Thanks for your help.

Since the storage administrator has exported the NFS share /nfs-volume from 192.168.72.136, the path in the PersistentVolume spec should be /nfs-volume, not the local mount point (/mnt/nfs/stg/jenkins) used on the hosts.
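In other words, the PV has to point at the path the server actually exports, not at the directory where the hosts happen to mount it. A minimal sketch of the corrected PersistentVolume, reusing the names from the question:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  storageClassName: nfs
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.72.136
    path: "/nfs-volume"   # the exported share, not /mnt/nfs/stg/jenkins
    readOnly: false
If the Jenkins data should live in a subdirectory, create it inside the share first and, if the server allows mounting subpaths of the export, point the PV at something like /nfs-volume/jenkins instead.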

Related

AKS: Kubernetes Pod volume mounting failing even after PV is bound and attached to pod

I am trying to deploy a pod with a file mount to an Azure file share. The claim seems to bind just fine, but the share does not mount in the container. Could I get some help to make it mount, please?
I create the secret:
kubectl create secret generic azstorage-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME --from-literal=azurestorageaccountkey=$STORAGE_KEY -n large-agents
My containers:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: linuxbuildagent-large
  namespace: agents-large
spec:
  replicas: 3
  serviceName: "azdevops-agent-sv"
  selector:
    matchLabels:
      app: azdevops-agent
  template:
    metadata:
      labels:
        app: azdevops-agent
    spec:
      containers:
        - name: selfhosted-agents
          ...
          volumeMounts:
            - name: azurefileshare
              mountPath: /angular-cache
          securityContext:
            privileged: true
      volumes:
        - name: azurefileshare
          persistentVolumeClaim:
            claimName: pvc-azurefile
my persistent volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-azurefile
  namespace: agents-large
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain   # if set as "Delete" file share would be removed in pvc deletion
  storageClassName: azurefile-csi
  csi:
    driver: file.csi.azure.com
    readOnly: false
    volumeHandle: azureBuildClient   # make sure this volumeid is unique in the cluster
    volumeAttributes:
      storageAccount: EXISTING_STORAGE_ACCOUNT_NAME
      shareName: longtestshare
    nodeStageSecretRef:
      name: azstorage-secret
      namespace: agents-large
my pv claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azurefile
  namespace: agents-large
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-csi
  volumeName: pv-azurefile
  resources:
    requests:
      storage: 10Gi
pv bound in aks cluster:
pv claims bound in aks cluster:
Events in describe pod:
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedMount 38m (x19 over 88m) kubelet Unable to attach or mount volumes: unmounted volumes=[azurefileshare], unattached volumes=[docker-graph-storage azurefileshare kube-api-access-cx6tn]: timed out waiting for the condition
Warning FailedMount 24m (x6 over 66m) kubelet MountVolume.MountDevice failed for volume "pv-azurefile" : rpc error: code = Internal desc = volume(azureBuildClient) mount //crbbroakspersistentstg.file.core.windows.net/longtestshare on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pv-azurefile/globalmount failed with mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,actimeo=30,mfsymlinks,file_mode=0777,<masked> //crbbroakspersistentstg.file.core.windows.net/longtestshare /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pv-azurefile/globalmount
Output: mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg)
Warning FailedMount 20m (x5 over 86m) kubelet Unable to attach or mount volumes: unmounted volumes=[azurefileshare], unattached volumes=[azurefileshare kube-api-access-cx6tn docker-graph-storage]: timed out waiting for the condition
Warning FailedMount 5m3s (x31 over 90m) kubelet MountVolume.MountDevice failed for volume "pv-azurefile" : rpc error: code = Internal desc = volume(azureBuildClient) mount //crbbroakspersistentstg.file.core.windows.net/longtestshare on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pv-azurefile/globalmount failed with mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t cifs -o file_mode=0777,dir_mode=0777,actimeo=30,mfsymlinks,<masked> //crbbroakspersistentstg.file.core.windows.net/longtestshare /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pv-azurefile/globalmount
Output: mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg)
kubectl get events in namespace:
28m Warning FailedMount pod/linuxbuildagent-large-0 MountVolume.MountDevice failed for volume "pv-azurefile" : rpc error: code = Internal desc = volume(azureBuildClient) mount //crbbroakspersistentstg.file.core.windows.net/longtestshare on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pv-azurefile/globalmount failed with mount failed: exit status 32...
2m46s Warning FailedMount pod/linuxbuildagent-large-0 Unable to attach or mount volumes: unmounted volumes=[azurefileshare], unattached volumes=[docker-graph-storage azurefileshare kube-api-access-cx6tn]: timed out waiting for the condition
13m Warning FailedMount pod/linuxbuildagent-large-0 MountVolume.MountDevice failed for volume "pv-azurefile" : rpc error: code = Internal desc = volume(azureBuildClient) mount //crbbroakspersistentstg.file.core.windows.net/longtestshare on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pv-azurefile/globalmount failed with mount failed: exit status 32...
7m19s Warning FailedMount pod/linuxbuildagent-large-0 Unable to attach or mount volumes: unmounted volumes=[azurefileshare], unattached volumes=[azurefileshare kube-api-access-cx6tn docker-graph-storage]: timed out waiting for the condition

microk8s-hostpath does not create PV for a claim

I am trying to use the MicroK8s storage addon, but my PVC and pod are stuck in Pending and I don't know what is wrong. I am also using the "registry" addon, which uses the same storage, and that one works without a problem.
FYI:
I have already restarted MicroK8s multiple times and even completely removed and reinstalled it, but the problem remains.
Yaml files:
# =================== pvc.yaml
apiVersion: v1
kind: List
items:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: wws-registry-claim
    spec:
      volumeName: registry-pvc
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: microk8s-hostpath
# =================== deployment.yaml (just spec section)
spec:
  serviceName: registry
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: registry
  template:
    metadata:
      labels:
        io.kompose.service: registry
    spec:
      containers:
        - image: {{ .Values.image }}
          name: registry-master
          ports:
            - containerPort: 28015
            - containerPort: 29015
            - containerPort: 8080
          resources:
            requests:
              cpu: {{ .Values.request_cpu }}
              memory: {{ .Values.request_memory }}
            limits:
              cpu: {{ .Values.limit_cpu }}
              memory: {{ .Values.limit_memory }}
          volumeMounts:
            - mountPath: /data
              name: rdb-local-data
          env:
            - name: RUN_ENV
              value: 'kubernetes'
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
      volumes:
        - name: rdb-local-data
          persistentVolumeClaim:
            claimName: wws-registry-claim
Cluster info:
$ kubectl get pvc -A
NAMESPACE            NAME                 STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
container-registry   registry-claim       Bound     pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca   20Gi       RWX            microk8s-hostpath   56m
default              wws-registry-claim   Pending   registry-pvc                               0                         microk8s-hostpath   23m
$ kubectl get pv -A
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                               STORAGECLASS        REASON   AGE
pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca   20Gi       RWX            Delete           Bound    container-registry/registry-claim   microk8s-hostpath            56m
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-9b8997588-vk5vt 1/1 Running 0 57m
hostpath-provisioner-7b9cb5cdb4-wxcp6 1/1 Running 0 57m
metrics-server-v0.2.1-598c8978c-74krr 2/2 Running 0 57m
tiller-deploy-77855d9dcf-4cvsv 1/1 Running 0 46m
$ kubectl -n kube-system logs hostpath-provisioner-7b9cb5cdb4-wxcp6
I0322 12:31:31.231110 1 controller.go:293] Starting provisioner controller 87fc12df-8b0a-11eb-b910-ee8a00c41384!
I0322 12:31:31.231963 1 controller.go:893] scheduleOperation[lock-provision-container-registry/registry-claim[dfef8e65-0618-4980-8b3c-e6e9efc5b0ca]]
I0322 12:31:31.235618 1 leaderelection.go:154] attempting to acquire leader lease...
I0322 12:31:31.237785 1 leaderelection.go:176] successfully acquired lease to provision for pvc container-registry/registry-claim
I0322 12:31:31.237841 1 controller.go:893] scheduleOperation[provision-container-registry/registry-claim[dfef8e65-0618-4980-8b3c-e6e9efc5b0ca]]
I0322 12:31:31.239011 1 hostpath-provisioner.go:86] creating backing directory: /var/snap/microk8s/common/default-storage/container-registry-registry-claim-pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca
I0322 12:31:31.239102 1 controller.go:627] volume "pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca" for claim "container-registry/registry-claim" created
I0322 12:31:31.244798 1 controller.go:644] volume "pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca" for claim "container-registry/registry-claim" saved
I0322 12:31:31.244813 1 controller.go:680] volume "pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca" provisioned for claim "container-registry/registry-claim"
I0322 12:31:33.243345 1 leaderelection.go:196] stopped trying to renew lease to provision for pvc container-registry/registry-claim, task succeeded
$ kubectl get sc
NAME PROVISIONER AGE
microk8s-hostpath microk8s.io/hostpath 169m
$ kubectl get sc -o yaml
apiVersion: v1
items:
  - apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"microk8s-hostpath"},"provisioner":"microk8s.io/hostpath"}
      creationTimestamp: "2021-03-22T12:31:25Z"
      name: microk8s-hostpath
      resourceVersion: "2845"
      selfLink: /apis/storage.k8s.io/v1/storageclasses/microk8s-hostpath
      uid: e94b5653-e261-4e1f-b646-e272e0c8c493
    provisioner: microk8s.io/hostpath
    reclaimPolicy: Delete
    volumeBindingMode: Immediate
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Microk8s inspect:
$ microk8s.inspect
Inspecting services
Service snap.microk8s.daemon-cluster-agent is running
Service snap.microk8s.daemon-flanneld is running
Service snap.microk8s.daemon-containerd is running
Service snap.microk8s.daemon-apiserver is running
Service snap.microk8s.daemon-apiserver-kicker is running
Service snap.microk8s.daemon-proxy is running
Service snap.microk8s.daemon-kubelet is running
Service snap.microk8s.daemon-scheduler is running
Service snap.microk8s.daemon-controller-manager is running
Service snap.microk8s.daemon-etcd is running
Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
Copy processes list to the final report tarball
Copy snap list to the final report tarball
Copy VM name (or none) to the final report tarball
Copy disk usage information to the final report tarball
Copy memory usage information to the final report tarball
Copy server uptime to the final report tarball
Copy current linux distribution to the final report tarball
Copy openSSL information to the final report tarball
Copy network configuration to the final report tarball
Inspecting kubernetes cluster
Inspect kubernetes cluster
WARNING: Docker is installed.
Add the following lines to /etc/docker/daemon.json:
{
"insecure-registries" : ["localhost:32000"]
}
and then restart docker with: sudo systemctl restart docker
Building the report tarball
Report tarball is at /var/snap/microk8s/1671/inspection-report-20210322_143034.tar.gz
I found the problem. Since the hostpath provisioner takes care of creating the PV, we should not set volumeName in the PVC YAML. When I removed that field, the provisioner created a PV and bound my PVC to it, and my pod started.
Now my PVC is:
apiVersion: v1
kind: List
items:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: wws-registry-claim
    spec:
      # volumeName: registry-pvc
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: microk8s-hostpath
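As a quick verification after applying the updated file (standard kubectl checks, using the claim name from above), the claim should move to Bound and a dynamically provisioned volume should appear:
$ kubectl get pvc wws-registry-claim
$ kubectl get pv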

How to mount a persistent volume on a Deployment/Pod using PersistentVolumeClaim?

I am trying to mount a persistent volume on pods (via a deployment).
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - image: ...
          volumeMounts:
            - mountPath: /app/folder
              name: volume
      volumes:
        - name: volume
          persistentVolumeClaim:
            claimName: volume-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
However, the pod stays in "ContainerCreating" status and the events show the following error message.
Unable to mount volumes for pod "podname": timeout expired waiting for volumes to attach or mount for pod "namespace"/"podname". list of unmounted volumes=[volume]. list of unattached volumes=[volume]
I verified that the persistent volume claim is ok and bound to a persistent volume.
What am I missing here?
When you create a PVC without specifying a PV or a StorageClass in a GKE cluster, it falls back to the default option:
StorageClass: standard
Provisioner: kubernetes.io/gce-pd
Type: pd-standard
Please take a look at the official documentation: Cloud.google.com: Kubernetes engine persistent volumes
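If you want to confirm which StorageClass your cluster falls back to, you can list the storage classes, look for the one marked (default), and inspect it; for example:
$ kubectl get storageclass
$ kubectl describe storageclass standard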
There are many circumstances that can produce the error message you encountered.
Since it's unknown how many replicas your deployment has, how many nodes you have, and how the pods were scheduled on those nodes, I tried to reproduce your issue and hit the same error with the following steps (the GKE cluster was freshly created to rule out any other dependencies that might affect the behavior).
Steps:
Create a PVC
Create a Deployment with replicas > 1
Check the state of pods
Additional links
Create a PVC
Below is an example YAML definition of a PVC, the same as yours:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
After applying the above definition, please check that it was created successfully. You can do so with the commands below:
$ kubectl get pvc volume-claim
$ kubectl get pv
$ kubectl describe pvc volume-claim
$ kubectl get pvc volume-claim -o yaml
Create a Deployment with replicas > 1
Below is an example YAML definition of a Deployment with volumeMounts and replicas > 1:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-deployment
spec:
  selector:
    matchLabels:
      app: ubuntu
  replicas: 10   # amount of pods must be > 1
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
        - name: ubuntu
          image: ubuntu
          command:
            - sleep
            - "infinity"
          volumeMounts:
            - mountPath: /app/folder
              name: volume
      volumes:
        - name: volume
          persistentVolumeClaim:
            claimName: volume-claim
Apply it and wait for a while.
Check the state of pods
You can check the state of the pods with the command below:
$ kubectl get pods -o wide
Output of the above command:
NAME                      READY   STATUS              RESTARTS   AGE     IP           NODE
ubuntu-deployment-2q64z   0/1     ContainerCreating   0          4m27s   <none>       gke-node-1
ubuntu-deployment-4tjp2   1/1     Running             0          4m27s   10.56.1.14   gke-node-2
ubuntu-deployment-5tn8x   0/1     ContainerCreating   0          4m27s   <none>       gke-node-1
ubuntu-deployment-5tn9m   0/1     ContainerCreating   0          4m27s   <none>       gke-node-3
ubuntu-deployment-6vkwf   0/1     ContainerCreating   0          4m27s   <none>       gke-node-1
ubuntu-deployment-9p45q   1/1     Running             0          4m27s   10.56.1.12   gke-node-2
ubuntu-deployment-lfh7g   0/1     ContainerCreating   0          4m27s   <none>       gke-node-3
ubuntu-deployment-qxwmq   1/1     Running             0          4m27s   10.56.1.13   gke-node-2
ubuntu-deployment-r7k2k   0/1     ContainerCreating   0          4m27s   <none>       gke-node-3
ubuntu-deployment-rnr72   0/1     ContainerCreating   0          4m27s   <none>       gke-node-3
Take a look at the above output:
3 pods are in the Running state
7 pods are in the ContainerCreating state
All of the Running pods are located on the same node, gke-node-2
You can get more detailed information about why the pods are in the ContainerCreating state with:
$ kubectl describe pod NAME_OF_POD_WITH_CC_STATE
The Events section of the above output shows:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 14m default-scheduler Successfully assigned default/ubuntu-deployment-2q64z to gke-node-1
Warning FailedAttachVolume 14m attachdetach-controller Multi-Attach error for volume "pvc-7d756147-6434-11ea-a666-42010a9c0058" Volume is already used by pod(s) ubuntu-deployment-qxwmq, ubuntu-deployment-9p45q, ubuntu-deployment-4tjp2
Warning FailedMount 92s (x6 over 12m) kubelet, gke-node-1 Unable to mount volumes for pod "ubuntu-deployment-2q64z_default(9dc28e95-6434-11ea-a666-42010a9c0058)": timeout expired waiting for volumes to attach or mount for pod "default"/"ubuntu-deployment-2q64z". list of unmounted volumes=[volume]. list of unattached volumes=[volume default-token-dnvnj]
The pod cannot get past the ContainerCreating state because a volume failed to mount: the volume is already in use by pods on a different node, and the access mode does not allow that.
ReadWriteOnce: the volume can be mounted as read-write by a single node.
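Given that access mode, one quick way to confirm this is the cause (a workaround rather than a general fix) is to keep all replicas on the node that already has the volume attached, for example by scaling the example Deployment down to a single replica:
$ kubectl scale deployment ubuntu-deployment --replicas=1
After that, the remaining pod should leave the ContainerCreating state once the volume is mounted.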
Additional links
Please take a look at: Cloud.google.com: Access modes of persistent volumes.
There is a detailed answer on the topic of access modes: Stackoverflow.com: Why can you set multiple access modes on a persistent volume.
As it's unknown what you are trying to achieve, please also take a look at the comparison between Deployments and StatefulSets: Cloud.google.com: Persistent Volume: Deployments vs StatefulSets.
If you are doing this on a cloud provider, the StorageClass object will create the corresponding volume for your persistent volume claim.
If you are trying to do this locally on minikube or in a self-managed Kubernetes cluster, you need to create a StorageClass that will provision the volumes for you, or create the PersistentVolume manually, as in this example:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
The hostPath field serves the data from a directory on the node where the pod is running.
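For the claim from the question to bind to that manually created volume, it also needs to request the same storage class; a minimal sketch, reusing the claim name from the question:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volume-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi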

Setting up GCP Filestore with Kubernetes

How do you mount Filestore storage into a Kubernetes pod in GCP?
I followed the documentation, but the pods are still pending.
I did:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <some name>
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /
    server: <filestorage_ip with this format xx.xxx.xxx.xx>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <some name>
  namespace: <some name>
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 50Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: <some name>
  name: <some name>
  labels:
    app: <some name>
spec:
  replicas: 2
  selector:
    matchLabels:
      app: <some name>
  template:
    metadata:
      labels:
        app: <some name>
    spec:
      containers:
        - name: <some name>
          image: gcr.io/somepath/<some name>#sha256:<some hash>
          ports:
            - containerPort: 80
          volumeMounts:
            - name: <some name>
              mountPath: /var/www/html
          imagePullPolicy: Always
      restartPolicy: Always
      volumes:
        - name: <some name>
          persistentVolumeClaim:
            claimName: <some name>
            readOnly: false
Running kubectl -n <some name> describe pods returns:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedMount 23m (x52 over 3h21m) kubelet, gke-<some name>-default-pool-<some hash> Unable to mount volumes for pod "<some name>-<some hash>_<some name>(<some hash>)": timeout expired waiting for volumes to attach or mount for pod "<some name>"/"<some name>-<some hash>". list of unmounted volumes=[<some name>-persistent-storage]. list of unattached volumes=[<some name>-persistent-storage default-token-<some hash>]
Warning FailedMount 3m5s (x127 over 3h21m) kubelet, gke-<some name>-default-pool-<some hash> (combined from similar events): MountVolume.SetUp failed for volume "<some name>-storage" : mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/<some path>/volumes/kubernetes.io~nfs/<some name>-storage --scope -- /home/kubernetes/containerized_mounter/mounter mount -t nfs <filestorage_ip with this format xx.xxx.xxx.xx>:/ /var/lib/kubelet/pods/<some hash>/volumes/kubernetes.io~nfs/<some name>-storage
Output: Running scope as unit: run-<some hash>.scope
Mount failed: mount failed: exit status 32
Mounting command: chroot
Mounting arguments: [/home/kubernetes/containerized_mounter/rootfs mount -t nfs <filestorage_ip with this format xx.xxx.xxx.xx>:/ /var/lib/kubelet/pods/<some hash>/volumes/kubernetes.io~nfs/<some name>-storage]
Output: mount.nfs: access denied by server while mounting <filestorage_ip with this format xx.xxx.xxx.xx>:/
It seems that the pod can't access the IP of the Filestore instance.
The documentation says it needs to be on the same VPC:
"Authorized network *
Filestore instances can only be accessed from machines on an authorized VPC network. Select the network from which you need access."
But I don't know how to add the Kubernetes cluster to that VPC.
Any suggestions?
I found the problem.
The PersistentVolume can't be mounted with path: /.
It needs the file share name from the "Fileshare properties" field that you fill in when creating the Filestore instance.
Now it works with multiple pods!
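So, assuming the instance was created with a file share named, for example, myshare (a placeholder; use the name shown under "Fileshare properties"), the PersistentVolume would look roughly like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <some name>
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /myshare   # the file share name, not /
    server: <filestorage_ip with this format xx.xxx.xxx.xx>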

Why is my persistent disk failing to mount to my pod?

I've been trying to find out why my pod won't start up. When I describe it, I get this:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m default-scheduler Successfully assigned my-namespace/nfs to gke-default-pool
Normal SuccessfulAttachVolume 2m attachdetach-controller AttachVolume.Attach succeeded for volume "nfspvc"
Warning FailedMount 50s kubelet, default-pool Unable to mount volumes for pod "nfs-8496cc5fd5-wjkm2_sxdb-branch161666(28ff323d-8839-11e9-a080-42010a8400bb)": timeout expired waiting for volumes to attach or mount for pod "my-namespace"/"nfs-8496cc5fd5-wjkm2". list of unmounted volumes=[nfspvc]. list of unattached volumes=[nfspvc default-token-ntmfv]
Warning FailedMount 28s (x9 over 2m) kubelet, default-pool MountVolume.MountDevice failed for volume "nfspvc" : executable file not found in $PATH
When I describe the compute disk, all looks good:
creationTimestamp: '2019-06-06T01:56:31.079-07:00'
id: '5701286856735681489'
kind: compute#disk
labelFingerprint: 42WmSpB8rSM=
lastAttachTimestamp: '2019-06-06T01:57:51.852-07:00'
name: nfs-pd
physicalBlockSizeBytes: '4096'
selfLink: <omitted>
sizeGb: '10'
status: READY
type: <omitted>
users:
- <omitted>
zone: <omitted>
Updates are available for some Cloud SDK components. To install them,
please run:
$ gcloud components update
And here is my pod's manifest:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    role: nfs
  name: nfs
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs
  template:
    metadata:
      labels:
        role: nfs
    spec:
      containers:
        - image: gcr.io/google_containers/volume-nfs:0.8
          name: nfs
          ports:
            - containerPort: 2049
              name: nfs
              protocol: TCP
            - containerPort: 20048
              name: mountd
              protocol: TCP
            - containerPort: 111
              name: rpcbind
              protocol: TCP
          volumeMounts:
            - mountPath: /exports
              name: nfspvc
      restartPolicy: Always
      volumes:
        - gcePersistentDisk:
            fsType: ext4i
            pdName: nfs-pd
          name: nfspvc
I'm not really sure what 'MountVolume.MountDevice failed for volume "nfspvc" : executable file not found in $PATH' means, or what else I should look into to investigate the source of the issue.
If it makes any difference, this is created by a script, the ordering of which is:
Create the compute disk
Run helm, which in turn creates the above deployment, service, persistent volume, and persistent volume claim.
fsType: ext4i
Try removing that trailing 'i'; the filesystem type should be ext4.
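That is, the volume definition should end up with a valid filesystem type; a minimal sketch of the corrected part of the manifest:
volumes:
  - name: nfspvc
    gcePersistentDisk:
      pdName: nfs-pd
      fsType: ext4   # was "ext4i", which is not a valid filesystem type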