IBM File Storage on Kubernetes stuck "Pending"

I am trying to use the following, based on https://cloud.ibm.com/docs/containers?topic=containers-file_storage#add_file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ibmc-file
  labels:
    billingType: 'monthly'
    region: us-south
    zone: dal10
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 12Gi
  storageClassName: ibmc-file-silver
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:11
          imagePullPolicy: Always
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: ibmc-file
But the PVC is never "Bound" and gets stuck as "Pending".
➜ postgres-kubernetes kubectl describe pvc ibmc-file
Name: ibmc-file
Namespace: default
StorageClass: ibmc-file-silver
Status: Pending
Volume:
Labels: billingType=monthly
region=us-south
zone=dal10
Annotations: ibm.io/provisioning-status=failed: Storage creation failed with error: {Code:E0013, Description:User doesn't have permissions to create or manage Storage [Backend Error:Validation failed due to missin...
kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"labels":{"billingType":"monthly","region":"us-south","zone":"dal10"},"n...
volume.beta.kubernetes.io/storage-provisioner=ibm.io/ibmc-file
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Provisioning 10m (x3 over 10m) ibm.io/ibmc-file_ibm-file-plugin-5d7684d8c5-xlvks_db50c480-500f-11e9-ba08-cae91657b92d External provisioner is provisioning volume for claim "default/ibmc-file"
Warning ProvisioningFailed 10m (x3 over 10m) ibm.io/ibmc-file_ibm-file-plugin-5d7684d8c5-xlvks_db50c480-500f-11e9-ba08-cae91657b92d failed to provision volume with StorageClass "ibmc-file-silver": Storage creation failed with error: {Code:E0013, Description:User doesn't have permissions to create or manage Storage [Backend Error:Validation failed due to missing permissions[NAS_MANAGE] for User[id:xxx, name:xxxm_2018-11-20-07.35.49, email:xxx, account:xxx]], Type:MissingStoragePermissions, RC:401, Recommended Action(s):Run `ibmcloud ks api-key-info` to see the owner of the API key that is used to order storage. Then, contact the account administrator to add the missing storage permissions. If infrastructure credentials were manually set via `ibmcloud ks credentials-set`, check the permissions of that user. Delete the PVC and re-create it. If the problem persists, open an IBM Cloud support case.}
Normal ExternalProvisioning 7m (x22 over 10m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "ibm.io/ibmc-file" or manually created by system administrator
Normal ExternalProvisioning 11s (x26 over 6m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "ibm.io/ibmc-file" or manually created by system administrator

@atkayla Could you try running kubectl get secret storage-secret-store -n kube-system -o yaml | grep slclient.toml: | awk '{print $2}' | base64 --decode to see what API key is used in the storage secret store? If this also shows your name and email address, then the file storage plug-in uses the permissions that are assigned to you.
You might have the permissions to create the cluster, but you might lack some of the storage permissions needed to create the storage. Are you the account owner, or can you check the permissions? You should have Add/Upgrade Storage (StorageLayer) and Storage Manage.
If you do not have these permissions, add them and then run ibmcloud ks api-key-set to update the API key. The storage secret store is automatically refreshed after 5-15 minutes. Then, you can try again.
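Putting those checks together, a rough sketch of the sequence (only commands already referenced above; exact flags may vary by CLI version):
# see which API key/user the file storage plug-in is using
kubectl get secret storage-secret-store -n kube-system -o yaml | grep slclient.toml: | awk '{print $2}' | base64 --decode
# see who owns the API key that is used to order storage
ibmcloud ks api-key-info
# after the missing storage permissions have been added, refresh the API key
ibmcloud ks api-key-set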

Related

Unable to mount NFS on Kubernetes Pod

I am working on deploying a Hyperledger Fabric test network on a Kubernetes minikube cluster. I intend to use a PersistentVolume to share crypto-config and channel artifacts among the various peers and orderers. Following are my PersistentVolume.yaml and PersistentVolumeClaim.yaml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: persistent-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: "/nfsroot"
    server: "3.128.203.245"
    readOnly: false
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Following is the pod where the above claim is mounted on /data:
kind: Pod
apiVersion: v1
metadata:
  name: test-shell
  labels:
    name: test-shell
spec:
  containers:
    - name: shell
      image: ubuntu
      command: ["/bin/bash", "-c", "while true ; do sleep 10 ; done"]
      volumeMounts:
        - mountPath: "/data"
          name: pv
  volumes:
    - name: pv
      persistentVolumeClaim:
        claimName: persistent-volume-claim
NFS is set up on my EC2 instance. I have verified the NFS server is working fine, and I was able to mount it inside minikube. I do not understand what I am doing wrong, but any file present inside 3.128.203.245:/nfsroot is not present in test-shell:/data.
What am I missing? I even tried a hostPath mount, but to no avail. Please help me out.
I think you should check the following things to verify whether NFS is mounted successfully.
Run this command on the node where you want to mount:
$ showmount -e nfs-server-ip
Like in my case, $ showmount -e 172.16.10.161
Export list for 172.16.10.161:
/opt/share *
Use the $ df -hT command to see whether NFS is mounted or not; in my case it gives the output 172.16.10.161:/opt/share nfs4 91G 32G 55G 37% /opt/share
If it is not mounted, use the following command:
$ sudo mount -t nfs 172.16.10.161:/opt/share /opt/share
If the above commands show an error, check whether the firewall is allowing NFS:
$ sudo ufw status
If not, allow it using this command:
$ sudo ufw allow from nfs-server-ip to any port nfs
I made the same setup and don't face any issues. My Fabric cluster on Kubernetes is running successfully. The Hyperledger Fabric Kubernetes YAML files can be found in my GitHub repo, where I have deployed a consortium of banks on Hyperledger Fabric as a dynamic multi-host blockchain network: you can add orgs and peers, join peers, create channels, and install and instantiate chaincode on the fly in an existing running network.
By default in minikube you should have a default StorageClass:
Each StorageClass contains the fields provisioner, parameters, and reclaimPolicy, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned.
For example, NFS doesn't provide an internal provisioner, but an external provisioner can be used. There are also cases when 3rd party storage vendors provide their own external provisioner.
Change the default StorageClass
In your example this property can lead to problems.
In order to list enabled addons in minikube please use:
minikube addons list
To list all StorageClasses in your cluster use:
kubectl get sc
NAME PROVISIONER
standard (default) k8s.io/minikube-hostpath
Please note that at most one StorageClass can be marked as default. If two or more of them are marked as default, a PersistentVolumeClaim without storageClassName explicitly specified cannot be created.
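If you ever need to change which class is the default, a rough sketch using the standard minikube class name and the same annotation that appears later in this thread (adjust the class name to whatever kubectl get sc reports):
kubectl get sc
# unmark "standard" as the default class (assumes that is your current default)
kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'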
In your example the most probable scenario is that you already have a default StorageClass. Applying those resources caused: new PV creation (without a StorageClass), new PVC creation (with a reference to the existing default StorageClass). In this situation there is no binding between your custom PV and PVC. As an example, please take a look:
kubectl get pv,pvc,sc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/nfs 3Gi RWX Retain Available 50m
persistentvolume/pvc-8aeb802f-cd95-4933-9224-eb467aaa9871 1Gi RWX Delete Bound default/pvc-nfs standard 50m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pvc-nfs Bound pvc-8aeb802f-cd95-4933-9224-eb467aaa9871 1Gi RWX standard 50m
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/standard (default) k8s.io/minikube-hostpath Delete Immediate false 103m
This example will not work because:
a new persistentvolume/nfs has been created (without a reference to the PVC)
a new persistentvolume/pvc-8aeb802f-cd95-4933-9224-eb467aaa9871 has been created using the default StorageClass. In the Claim column we can see that this PV was created by dynamic PV provisioning using the default StorageClass, with a reference to the default/pvc-nfs claim (persistentvolumeclaim/pvc-nfs).
Solution 1.
According to the information from the comments:
Also I am able to connect to it within my minikube and also my actual ubuntu system.
If you are able to mount this NFS share from inside the minikube host, i.e. the NFS share is mounted on your minikube node, please try this example with a hostPath volume used directly from your pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-shell
  namespace: default
spec:
  volumes:
    - name: pv
      hostPath:
        path: /path/shares # path to nfs mount point on minikube node
  containers:
    - name: shell
      image: ubuntu
      command: ["/bin/bash", "-c", "sleep 1000 "]
      volumeMounts:
        - name: pv
          mountPath: /data
Solution 2.
If you are using PV/PVC approach:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: persistent-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: "" # Empty string must be explicitly set otherwise default StorageClass will be set / or custom storageClassName name
  nfs:
    path: "/nfsroot"
    server: "3.128.203.245"
    readOnly: false
  claimRef:
    name: persistent-volume-claim
    namespace: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persistent-volume-claim
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: "" # Empty string must be explicitly set otherwise default StorageClass will be set / or custom storageClassName name
  volumeName: persistent-volume
Note:
If you are not referencing any provisioner associated with your StorageClass:
Helper programs relating to the volume type may be required for consumption of a PersistentVolume within a cluster. In this example, the PersistentVolume is of type NFS and the helper program /sbin/mount.nfs is required to support the mounting of NFS filesystems.
Please keep in mind that when you create a PVC, the Kubernetes persistent volume controller tries to bind the PVC to a matching PV. During this process, different factors are taken into account, such as storageClassName (default/custom), accessModes, claimRef, and volumeName.
In this case you can use:
PersistentVolume.spec.claimRef.name: persistent-volume-claim
PersistentVolumeClaim.spec.volumeName: persistent-volume
Note:
The control plane can bind PersistentVolumeClaims to matching PersistentVolumes in the cluster. However, if you want a PVC to bind to a specific PV, you need to pre-bind them.
By specifying a PersistentVolume in a PersistentVolumeClaim, you declare a binding between that specific PV and PVC. If the PersistentVolume exists and has not reserved PersistentVolumeClaims through its claimRef field, then the PersistentVolume and PersistentVolumeClaim will be bound.
The binding happens regardless of some volume matching criteria, including node affinity. The control plane still checks that storage class, access modes, and requested storage size are valid.
Once the PV/PVC have been created, or in case of any problem with PV/PVC binding, please use the following commands to figure out the current state:
kubectl get pv,pvc,sc
kubectl describe pv
kubectl describe pvc
kubectl describe pod
kubectl get events

Creating a link to an NFS share in K3s Kubernetes

I'm very new to Kubernetes, and trying to get Node-RED running on a small cluster of Raspberry Pis.
I happily managed that, but noticed that once the cluster is powered down, the next time I bring it up the flows in Node-RED have vanished.
So, I've created an NFS share on a FreeNAS box on my local network and can mount it from another RPi, so I know the permissions work.
However I cannot get my mount to work in a kubernetes deployment.
Any help as to where I have gone wrong please?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-red
  labels:
    app: node-red
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-red
  template:
    metadata:
      labels:
        app: node-red
    spec:
      containers:
        - name: node-red
          image: nodered/node-red:latest
          ports:
            - containerPort: 1880
              name: node-red-ui
          securityContext:
            privileged: true
          volumeMounts:
            - name: node-red-data
              mountPath: /data
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: TZ
              value: Europe/London
      volumes:
        - name: node-red-data
      nfs:
        server: 192.168.1.96
        path: /mnt/Pool1/ClusterStore/nodered
The error I am getting is
error: error validating "node-red-deploy.yml": error validating data:
ValidationError(Deployment.spec.template.spec): unknown field "nfs" in io.k8s.api.core.v1.PodSpec; if
you choose to ignore these errors, turn validation off with --validate=false
New Information
I now have the following
apiVersion: v1
kind: PersistentVolume
metadata:
  name: clusterstore-nodered
  labels:
    type: nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /mnt/Pool1/ClusterStore/nodered
    server: 192.168.1.96
  persistentVolumeReclaimPolicy: Recycle
claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clusterstore-nodered-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
Now when I start the deployment it waits at Pending forever, and I see the following in the events for the PVC:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 5m47s (x7 over 7m3s) persistentvolume-controller waiting for first consumer to be created before binding
Normal Provisioning 119s (x5 over 5m44s) rancher.io/local-path_local-path-provisioner-58fb86bdfd-rtcls_506528ac-afd0-11ea-930d-52d0b85bb2c2 External provisioner is provisioning volume for claim "default/clusterstore-nodered-claim"
Warning ProvisioningFailed 119s (x5 over 5m44s) rancher.io/local-path_local-path-provisioner-58fb86bdfd-rtcls_506528ac-afd0-11ea-930d-52d0b85bb2c2 failed to provision volume with StorageClass "local-path": Only support ReadWriteOnce access mode
Normal ExternalProvisioning 92s (x19 over 5m44s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator
I assume that this is because I don't have an NFS provisioner; in fact, if I do kubectl get storageclass I only see local-path.
New question: how do I add a StorageClass for NFS? A little googling around has left me without a clue.
Ok, solved the issue. Kubernetes tutorials are really esoteric and missing lots of assumed steps.
My problem was down to k3s on the Pi only shipping with a local-path storage provisioner.
I finally found a tutorial that installed an nfs client storage provider, and now my cluster works!
This was the tutorial I found the information in.
In the stated tutorial there are basically these steps to follow:
1.
showmount -e 192.168.1.XY
To check whether the share is reachable from outside the NAS.
2.
helm install nfs-provisioner stable/nfs-client-provisioner --set nfs.server=192.168.1.**XY** --set nfs.path=/samplevolume/k3s --set image.repository=quay.io/external_storage/nfs-client-provisioner-arm
Here you replace the IP with your NFS server and the NFS path with your specific path on your Synology (both should be visible in the output of your showmount -e IP command).
Update 23.02.2021
It seems that you have to use another Chart and Image too:
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=192.168.1.**XY** --set nfs.path=/samplevolume/k3s --set image.repository=gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner
3.
kubectl get storageclass
To check whether the StorageClass now exists.
4.
kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' && kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
To configure the new StorageClass as "default". Replace nfs-client and local-path with what kubectl get storageclass tells you.
5.
kubectl get storageclass
Final check that it's marked as "default".
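Once the NFS class is installed and marked default, a rough sketch of the claim from the question using it explicitly, assuming the class name reported by kubectl get storageclass is nfs-client:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clusterstore-nodered-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi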
This is a validation error pointing at the very last part of your Deployment yaml, therefore making it an invalid object. It looks like you've made a mistake with indentations. It should look more like this:
volumes:
  - name: node-red-data
    nfs:
      server: 192.168.1.96
      path: /mnt/Pool1/ClusterStore/nodered
Also, as you are new to Kubernetes, I strongly recommend getting familiar with the concepts of PersistentVolumes and its claims. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV.
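For example, a rough sketch of what the Deployment's volumes section would look like once it references a claim (using the clusterstore-nodered-claim name from your later update) instead of an inline nfs volume:
volumes:
  - name: node-red-data
    persistentVolumeClaim:
      claimName: clusterstore-nodered-claim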
Please let me know if that helped.

NFS volume mount times out on Kubernetes with incorrect IP?

Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/06b9ae42-8e99-11e9-b888-a44c24184b19/volumes/kubernetes.io~nfs/nfs-data --scope -- mount -t nfs 10.100.155.82:/exports/www /var/lib/kubelet/pods/06b9ae42-8e99-11e9-b888-a44c24184b19/volumes/kubernetes.io~nfs/nfs-data
Output: Running scope as unit: run-r26f9da6c287846589bec8d059c33441d.scope
mount.nfs: Connection timed out
FailedMount Warning 2019-06-14T11:48:46Z typo3-app-67b58d7657-cvqdg Pod
Unable to mount volumes for pod "typo3-app-67b58d7657-cvqdg_default(1fb4c719-8e9a-11e9-b888-a44c24184b19)": timeout expired waiting for volumes to attach or mount for pod "default"/"typo3-app-67b58d7657-cvqdg". list of unmounted volumes=[nfs-data nfs-data-src]. list of unattached volumes=[nfs-data nfs-data-src default-token-lmtl4]
FailedMount Warning 2019-06-14T11:49:04Z
Weirdly, I have no idea where it's getting 10.100.155.82 from. This was the previous ClusterIP of the NFS service...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: typo3-app
  labels:
    app: typo3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: typo3
  template:
    metadata:
      labels:
        app: typo3
    spec:
      containers:
        - name: app
          image: us.gcr.io/objit-chris/chrisjitit-typo3:v11
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /var/www/html-chrisjitit
              name: nfs-data
            - mountPath: /var/www/typo3_src-6.2.6
              name: nfs-data-src
      volumes:
        - name: nfs-data
          nfs:
            # https://github.com/kubernetes/minikube/issues/3417
            # server is not resolved using kube dns (so can't resolve to a service name - hence we need the IP)
            #server: 10.11.250.37
            server: 10.97.78.206
            path: /exports/www
        - name: nfs-data-src
          nfs:
            # https://github.com/kubernetes/minikube/issues/3417
            # server is not resolved using kube dns (so can't resolve to a service name - hence we need the IP)
            #server: 10.11.250.37
            server: 10.97.78.206
            path: /exports/www/typo3_src
What may be the cause of this timeout / wrong IP being used?
I tried deleting the deployment and changing the name; that still didn't seem to work, but a few minutes later it worked... Really strange behavior.
Ran into this issue again...:
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-data 10Gi RWO Retain Available 67m
pvc-f1353542-a8b1-11e9-bdf7-38ffa66115bc 10Gi RWO Delete Bound default/nfs-data standard 67m
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-data Bound pvc-f1353542-a8b1-11e9-bdf7-38ffa66115bc 10Gi RWO standard 67m
It keeps picking up an old NFS IP. It's a bug...:
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/9470ac17-a8b9-11e9-bdf7-38ffa66115bc/volumes/kubernetes.io~nfs/nfs-data-src --scope -- mount -t nfs 10.11.250.37:/exports/www/typo3_src /var/lib/kubelet/pods/9470ac17-a8b9-11e9-bdf7-38ffa66115bc/volumes/kubernetes.io~nfs/nfs-data-src
Output: Running scope as unit: run-ref2095cb52c94d0c87de5458c3b16733.scope
mount.nfs: Connection timed out

k8s - how to project service account token into pod

I am trying to project the serviceAccount token into my pod as described in this k8s doc - https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection.
I create a service account using below command
kubectl create sa acct
Then I create the pod
kind: Pod
apiVersion: v1
metadata:
  name: nginx
spec:
  containers:
    - image: nginx
      name: nginx
      volumeMounts:
        - mountPath: /var/run/secrets/tokens
          name: vault-token
  serviceAccountName: acct
  volumes:
    - name: vault-token
      projected:
        sources:
          - serviceAccountToken:
              path: vault-token
              expirationSeconds: 7200
It fails due to - MountVolume.SetUp failed for volume "vault-token" : failed to fetch token: the server could not find the requested resource
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m15s default-scheduler Successfully assigned default/nginx to minikube
Warning FailedMount 65s (x10 over 5m15s) kubelet, minikube MountVolume.SetUp failed for volume "vault-token" : failed to fetch token: the server could not find the requested resource
My minikube version: v0.33.1
kubectl version : 1.13
Question:
What am I doing wrong here?
I tried this on kubeadm and was able to succeed.
@Aman Juneja was right, you have to add the API server flags as described in the documentation.
You can do that by creating the service account and then adding these flags to the kube-apiserver:
sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
- --service-account-issuer=api
- --service-account-signing-key-file=/etc/kubernetes/pki/apiserver.key
- --service-account-api-audiences=api
After that apply your pod.yaml and it will work. As you will see in describe pod:
Volumes:
vault-token:
Type: Projected (a volume that contains injected data from multiple sources)
[removed as not working solution]
Unfortunately, in my case minikube did not want to start with these flags; it gets stuck on "waiting for pods: apiserver". Soon I will try to debug again.
UPDATE
Turns out you just have to pass the arguments to minikube with paths from inside the minikube VM, not from outside as I did in the previous example (the .minikube directory), so it will look like this:
minikube start \
--extra-config=apiserver.service-account-signing-key-file=/var/lib/minikube/certs/apiserver.key \
--extra-config=apiserver.service-account-issuer=api \
--extra-config=apiserver.service-account-api-audiences=api
After that creating ServiceAccount and applying pod.yaml works.
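With those flags in place, a minimal sketch of verifying the projected token end to end (pod name, service account, and mount path taken from the question; the kubectl exec check is only illustrative):
# create the service account and the pod from the question
kubectl create sa acct
kubectl apply -f pod.yaml
# once the pod is Running, the projected token should be readable at the mounted path
kubectl exec nginx -- cat /var/run/secrets/tokens/vault-token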
You should use a Deployment, since when you use a Deployment the token is automatically mounted into the pods.

Kubernetes mount.nfs: access denied by server while mounting

I have a Kubernetes cluster that is running in our network, and I have set up an NFS server on another machine in the same network. I am able to ssh to any of the nodes in the cluster and mount from the server by running sudo mount -t nfs 10.17.10.190:/export/test /mnt, but whenever my test pod tries to use an NFS persistent volume that points at that server, it fails with this message:
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
19s 19s 1 default-scheduler Normal Scheduled Successfully assigned nfs-web-58z83 to wal-vm-newt02
19s 3s 6 kubelet, wal-vm-newt02 Warning
FailedMount MountVolume.SetUp failed for volume "kubernetes.io/nfs/bad55e9c-7303-11e7-9c2f-005056b40350-test-nfs" (spec.Name: "test-nfs") pod "bad55e9c-7303-11e7-9c2f-005056b40350" (UID: "bad55e9c-7303-11e7-9c2f-005056b40350") with: mount failed: exit status 32
Mounting command: mount
Mounting arguments: 10.17.10.190:/exports/test /var/lib/kubelet/pods/bad55e9c-7303-11e7-9c2f-005056b40350/volumes/kubernetes.io~nfs/test-nfs nfs []
Output: mount.nfs: access denied by server while mounting 10.17.10.190:/exports/test
Does anyone know how I can fix this and make it so that I can mount from the external NFS server?
The nodes of the cluster are running on 10.17.10.185 - 10.17.10.189, and all of the pods run with IPs that start with 10.0.x.x. All of the nodes in the cluster and the NFS server are running Ubuntu. The NFS server is running on 10.17.10.190 with this /etc/exports:
/export 10.17.10.185/255.0.0.0(rw,sync,no_subtree_check)
I set up a persistent volume and persistent volume claim, and they are both created successfully, showing this output from running kubectl get pv,pvc:
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
pv/test-nfs 1Mi RWX Retain Bound staging/test-nfs 15m
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
pvc/test-nfs Bound test-nfs 1Mi RWX 15m
They were created like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: 10.17.10.190
    path: "/exports/test"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-nfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
My test pod is using this configuration:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web
spec:
  replicas: 1
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
        - name: web
          image: nginx
          ports:
            - name: web
              containerPort: 80
          volumeMounts:
            # name must match the volume name below
            - name: test-nfs
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: test-nfs
          persistentVolumeClaim:
            claimName: test-nfs
It's probably because the UID used in your pod/container does not have enough rights on the NFS server.
You can use runAsUser, as mentioned by @Giorgio, or try to edit the uid-range annotations of your namespace and fix a value (e.g. 666). That way every pod in your namespace will run with UID 666.
Don't forget to chown 666 your NFS directory accordingly.
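For instance, a rough sketch of pinning the UID via a pod-level securityContext (666 here only to match the chown above; adjust to your environment):
spec:
  securityContext:
    runAsUser: 666   # must match the ownership of the exported NFS directory
  containers:
  ...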
You have to set a securityContext with privileged: true. Take a look at this link.
The complete solution for a Kubernetes cluster to prepare NFS folders for provisioning is to apply the following:
# set folder permission
sudo chmod 666 /your/folder/ # maybe 777
# append new line on exports file to allow network access to folder
sudo bash -c "echo '/your/folder/ <network ip/range>(rw,sync,no_root_squash,subtree_check)' >> /etc/exports"
# set folder export
sudo exportfs -ra
In my case I was trying to mount the wrong directory...
volumes:
  - name: nfs-data
    nfs:
      # https://github.com/kubernetes/minikube/issues/3417
      # server is not resolved using kube dns (so can't resolve to a service name - hence we need the IP)
      server: 10.100.155.82
      path: /tmp
I did not have /tmp in /etc/exports on the server...
Another option is to add the uid/gid information to the nfs container itself. You can do this by creating a script to add the entries to /etc/passwd and then launch the provisioner:
groupadd -g 102 postfix
adduser -u 101 -g 102 -M postfix
groupadd -g 5000 vmail
adduser -u 5000 -g 5000 -M vmail
adduser -u 33 -g 33 -M www-data
groupadd -g 8983 solr
adduser -u 8983 -g 8983 -M solr
...
/nfs-server-provisioner -provisioner=cluster.local/nfs-provisioner-nfs-server-provisioner
This allows the user/group information to be preserved over the NFS boundary using NFSv3 (which is what nfs-provisioner uses). My understanding is that NFSv4 doesn't have this problem, but I've been unable to get NFSv4 working with the provisioner.
In my case, it was the way I mounted NFS. The configuration that worked was the following:
/media/drive *(rw,sync,no_root_squash,insecure,no_subtree_check)
Note that this is insecure, you might want to tweak it to make it secure, while still making it work!
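After editing /etc/exports, re-export and re-check the share; a rough sketch using commands that appear earlier in this thread (replace the placeholder with your NFS server's IP):
sudo exportfs -ra
showmount -e <nfs-server-ip>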