I created a persistent volume using the following YAML
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dq-tools-volume
  labels:
    name: dq-tools-volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: volume-class
  nfs:
    server: 192.168.215.83
    path: "/var/nfsshare"
After creating this I created two PersistentVolumeClaims using the following YAMLs:
PVC1:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-volume-1
  labels:
    name: jenkins-volume-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  storageClassName: volume-class
  selector:
    matchLabels:
      name: dq-tools-volume
PVC2:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-volume-2
  labels:
    name: jenkins-volume-2
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  storageClassName: volume-class
  selector:
    matchLabels:
      name: dq-tools-volume
But I noticed that both of these persistent volume claims are writing to the same backend volume.
How can I isolate the data of one PersistentVolumeClaim from the other? I am using this for multiple installations of Jenkins, and I want the workspace of each Jenkins to be isolated.
As @D.T. explained, a persistent volume claim is exclusively bound to a persistent volume.
You cannot bind two PVCs to the same PV.
Here you can find another case where it was discussed.
There is a better solution for your scenario, and it involves using nfs-client-provisioner. To achieve that, first you have to install helm in your cluster and then follow these steps, which I created for a previous answer on ServerFault.
I've tested it, and with this solution you can isolate one PVC from the other.
1 - Install and configure the NFS server on my master node (Debian Linux; this might change depending on your Linux distribution):
Before installing the NFS Kernel server, we need to update our system’s repository index:
$ sudo apt-get update
Now, run the following command in order to install the NFS Kernel Server on your system:
$ sudo apt install nfs-kernel-server
Create the Export Directory
$ sudo mkdir -p /mnt/nfs_server_files
As we want all clients to access the directory, we will remove restrictive permissions of the export folder through the following commands (this may vary on your set-up according to your security policy):
$ sudo chown nobody:nogroup /mnt/nfs_server_files
$ sudo chmod 777 /mnt/nfs_server_files
Assign server access to client(s) through NFS export file
$ sudo nano /etc/exports
Inside this file, add a new line to allow access from other servers to your share.
/mnt/nfs_server_files 10.128.0.0/24(rw,sync,no_subtree_check)
You may want to use different options in your share. 10.128.0.0/24 is my k8s internal network.
Export the shared directory and restart the service to make sure all configuration files are correct.
$ sudo exportfs -a
$ sudo systemctl restart nfs-kernel-server
Check all active shares:
$ sudo exportfs
/mnt/nfs_server_files
10.128.0.0/24
2 - Install NFS Client on all my Worker Nodes:
$ sudo apt-get update
$ sudo apt-get install nfs-common
At this point you can run a test to check whether you have access to your share from your worker nodes:
$ sudo mkdir -p /mnt/sharedfolder_client
$ sudo mount kubemaster:/mnt/nfs_server_files /mnt/sharedfolder_client
Notice that at this point you can use the name of your master node. K8s is taking care of the DNS here.
Check that the volume mounted as expected and create some folders and files to make sure everything is working fine.
$ cd /mnt/sharedfolder_client
$ mkdir test
$ touch file
Go back to your master node and check if these files are at /mnt/nfs_server_files folder.
3 - Install NFS Client Provisioner.
Install the provisioner using helm:
$ helm install --name ext --namespace nfs --set nfs.server=kubemaster --set nfs.path=/mnt/nfs_server_files stable/nfs-client-provisioner
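Note that this is Helm 2 syntax; on Helm 3 the --name flag was removed, so the equivalent command (same chart and values) should be roughly:
$ helm install ext stable/nfs-client-provisioner --namespace nfs --set nfs.server=kubemaster --set nfs.path=/mnt/nfs_server_files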
Notice that I've specified a namespace for it.
Check if they are running:
$ kubectl get pods -n nfs
NAME READY STATUS RESTARTS AGE
ext-nfs-client-provisioner-f8964b44c-2876n 1/1 Running 0 84s
At this point we have a storageclass called nfs-client:
$ kubectl get storageclass -n nfs
NAME PROVISIONER AGE
nfs-client cluster.local/ext-nfs-client-provisioner 5m30s
We need to create a PersistentVolumeClaim:
$ more nfs-client-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: nfs
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs-client"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
$ kubectl apply -f nfs-client-pvc.yaml
Check the status (Bound is expected):
$ kubectl get persistentvolumeclaim/test-claim -n nfs
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-claim Bound pvc-e1cd4c78-7c7c-4280-b1e0-41c0473652d5 1Mi RWX nfs-client 24s
4 - Create a simple pod to test if we can read/write our NFS share:
Create a pod using this yaml:
apiVersion: v1
kind: Pod
metadata:
  name: pod0
  labels:
    env: test
  namespace: nfs
spec:
  containers:
    - name: nginx
      image: nginx
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
$ kubectl apply -f pod.yaml
Let's list all mounted volumes on our pod:
$ kubectl exec -ti -n nfs pod0 -- df -h /mnt
Filesystem Size Used Avail Use% Mounted on
kubemaster:/mnt/nfs_server_files/nfs-test-claim-pvc-a2e53b0e-f9bb-4723-ad62-860030fb93b1 99G 11G 84G 11% /mnt
As we can see, we have an NFS volume mounted on /mnt. (It's important to notice the path kubemaster:/mnt/nfs_server_files/nfs-test-claim-pvc-a2e53b0e-f9bb-4723-ad62-860030fb93b1.)
Let's check it:
root@pod0:/# cd /mnt
root@pod0:/mnt# ls -la
total 8
drwxrwxrwx 2 nobody nogroup 4096 Nov 5 08:33 .
drwxr-xr-x 1 root root 4096 Nov 5 08:38 ..
It's empty. Let's create some files:
$ for i in 1 2; do touch file$i; done;
$ ls -l
total 8
drwxrwxrwx 2 nobody nogroup 4096 Nov 5 08:58 .
drwxr-xr-x 1 root root 4096 Nov 5 08:38 ..
-rw-r--r-- 1 nobody nogroup 0 Nov 5 08:58 file1
-rw-r--r-- 1 nobody nogroup 0 Nov 5 08:58 file2
Now let's see where these files are on our NFS server (master node):
$ cd /mnt/nfs_server_files
$ ls -l
total 4
drwxrwxrwx 2 nobody nogroup 4096 Nov 5 09:11 nfs-test-claim-pvc-4550f9f0-694d-46c9-9e4c-7172a3a64b12
$ cd nfs-test-claim-pvc-4550f9f0-694d-46c9-9e4c-7172a3a64b12/
$ ls -l
total 0
-rw-r--r-- 1 nobody nogroup 0 Nov 5 09:11 file1
-rw-r--r-- 1 nobody nogroup 0 Nov 5 09:11 file2
And here are the files we just created inside our pod!
As I understand it, it is not possible to bind two PVCs to the same PV.
Refer to this link: a PVC-to-PV binding is a one-to-one mapping.
You will possibly need to look into the dynamic provisioning option for your setup; see the sketch at the end of this answer.
I tested this by creating one PV of 10Gi and two PVCs with claim requests of 8Gi and 2Gi.
PVC-2 goes into the Pending state.
master $ kubectl get persistentvolume
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv 10Gi RWX Retain Bound default/pv1 7m
master $ kubectl get persistentvolumeclaims
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc1 Bound pv 10Gi RWX 3m
pvc2 Pending 8s
The files used for creating the PV and PVCs are below.
master $ cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /var/tmp/
master $ cat pvc1.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
master $ cat pvc2.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
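For comparison, with a provisioner-backed StorageClass in place (for example the nfs-client class from the answer above), each claim just references the class and gets its own freshly provisioned volume, with no selector and no pre-created PV. A sketch, where the storageClassName is an assumption based on that setup:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  accessModes:
    - ReadWriteMany
  # assumption: a dynamic provisioner backs this class (e.g. nfs-client)
  storageClassName: nfs-client
  resources:
    requests:
      storage: 8Gi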
I'm using the bitnami/etcd chart, which has the ability to create snapshots via an EFS-mounted PVC.
However, I get a permission error after the aws-efs-csi-driver is provisioned and the PVC is mounted to any non-root pod (uid/gid is 1001).
I'm using the helm chart https://kubernetes-sigs.github.io/aws-efs-csi-driver/ version 2.2.0.
Values of the chart:
# you can obtain the fileSystemId with
# aws efs describe-file-systems --query "FileSystems[*].FileSystemId"
storageClasses:
  - name: efs
    parameters:
      fileSystemId: fs-exxxxxxx
      directoryPerms: "777"
      gidRangeStart: "1000"
      gidRangeEnd: "2000"
      basePath: "/snapshots"
# enable it after the following issue is resolved
# https://github.com/bitnami/charts/issues/7769
# node:
#   nodeSelector:
#     etcd: "true"
I then manually created the PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: etcd-snapshotter-pv
  annotations:
    argocd.argoproj.io/sync-wave: "60"
spec:
  capacity:
    storage: 32Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-exxxxxxx
Then if I mount that EFS PVC in a non-root pod, I get the following error:
➜ klo etcd-snapshotter-001-ph8w9
etcd 23:18:38.76 DEBUG ==> Using endpoint etcd-snapshotter-001-ph8w9:2379
{"level":"warn","ts":1633994320.7789018,"logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0005ea380/#initially=[etcd-snapshotter-001-ph8w9:2379]","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 10.120.2.206:2379: connect: connection refused\""}
etcd-snapshotter-001-ph8w9:2379 is unhealthy: failed to commit proposal: context deadline exceeded
Error: unhealthy cluster
etcd 23:18:40.78 WARN ==> etcd endpoint etcd-snapshotter-001-ph8w9:2379 not healthy. Trying a different endpoint
etcd 23:18:40.78 DEBUG ==> Using endpoint etcd-2.etcd-headless.etcd.svc.cluster.local:2379
etcd-2.etcd-headless.etcd.svc.cluster.local:2379 is healthy: successfully committed proposal: took = 1.6312ms
etcd 23:18:40.87 INFO ==> Snapshotting the keyspace
Error: could not open /snapshots/db-2021-10-11_23-18.part (open /snapshots/db-2021-10-11_23-18.part: permission denied)
As a result I have to spawn a new "root" pod, get inside the pod, and manually adjust the permissions:
apiVersion: v1
kind: Pod
metadata:
  name: perm
spec:
  securityContext:
    runAsUser: 0
    runAsGroup: 0
    fsGroup: 0
  containers:
    - name: app1
      image: busybox
      command: ["/bin/sh"]
      args: ["-c", "sleep 3000"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /snapshots
      securityContext:
        runAsUser: 0
        runAsGroup: 0
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: etcd-snapshotter
  nodeSelector:
    etcd: "true"
k apply -f setup.yaml
k exec -ti perm -- ash
cd /snapshots
/snapshots # chown -R 1001.1001 .
/snapshots # chmod -R 777 .
/snapshots # exit
➜ k create job --from=cronjob/etcd-snapshotter etcd-snapshotter-001
job.batch/etcd-snapshotter-001 created
➜ klo etcd-snapshotter-001-bmv79
etcd 23:31:10.22 DEBUG ==> Using endpoint etcd-1.etcd-headless.etcd.svc.cluster.local:2379
etcd-1.etcd-headless.etcd.svc.cluster.local:2379 is healthy: successfully committed proposal: took = 2.258532ms
etcd 23:31:10.32 INFO ==> Snapshotting the keyspace
{"level":"info","ts":1633995070.4244702,"caller":"snapshot/v3_snapshot.go:68","msg":"created temporary db file","path":"/snapshots/db-2021-10-11_23-31.part"}
{"level":"info","ts":1633995070.4907935,"logger":"client","caller":"v3/maintenance.go:211","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":1633995070.4908395,"caller":"snapshot/v3_snapshot.go:76","msg":"fetching snapshot","endpoint":"etcd-1.etcd-headless.etcd.svc.cluster.local:2379"}
{"level":"info","ts":1633995070.4965465,"logger":"client","caller":"v3/maintenance.go:219","msg":"completed snapshot read; closing"}
{"level":"info","ts":1633995070.544217,"caller":"snapshot/v3_snapshot.go:91","msg":"fetched snapshot","endpoint":"etcd-1.etcd-headless.etcd.svc.cluster.local:2379","size":"320 kB","took":"now"}
{"level":"info","ts":1633995070.5507936,"caller":"snapshot/v3_snapshot.go:100","msg":"saved","path":"/snapshots/db-2021-10-11_23-31"}
Snapshot saved at /snapshots/db-2021-10-11_23-31
➜ k exec -ti perm -- ls -la /snapshots
total 924
drwxrwxrwx 2 1001 1001 6144 Oct 11 23:31 .
drwxr-xr-x 1 root root 46 Oct 11 23:25 ..
-rw------- 1 1001 root 319520 Oct 11 23:31 db-2021-10-11_23-31
Is there a way to automate this?
I have this setting in the storage class:
gidRangeStart: "1000"
gidRangeEnd: "2000"
but it has no effect.
PVC is defined as:
➜ kg pvc etcd-snapshotter -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: efs.csi.aws.com
  name: etcd-snapshotter
  namespace: etcd
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 32Gi
  storageClassName: efs
  volumeMode: Filesystem
  volumeName: etcd-snapshotter-pv
By default the StorageClass field provisioningMode is unset; please set it to provisioningMode: "efs-ap" to enable dynamic provisioning with access points.
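For reference, the resulting StorageClass manifest would look roughly like this (values taken from the question; fileSystemId is the same placeholder):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs
provisioner: efs.csi.aws.com
parameters:
  # enables dynamic provisioning backed by EFS access points
  provisioningMode: efs-ap
  fileSystemId: fs-exxxxxxx
  directoryPerms: "777"
  gidRangeStart: "1000"
  gidRangeEnd: "2000"
  basePath: "/snapshots"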
I'm trying to set up RabbitMQ on Minikube using the RabbitMQ Cluster Operator.
When I try to attach a persistent volume, I get the following error:
$ kubectl logs -f rabbitmq-rabbitmq-server-0
Configuring logger redirection
20:04:40.081 [warning] Failed to write PID file "/var/lib/rabbitmq/mnesia/rabbit@rabbitmq-rabbitmq-server-0.rabbitmq-rabbitmq-headless.default.pid": permission denied
20:04:40.264 [error] Failed to create Ra data directory at '/var/lib/rabbitmq/mnesia/rabbit@rabbitmq-rabbitmq-server-0.rabbitmq-rabbitmq-headless.default/quorum/rabbit@rabbitmq-rabbitmq-server-0.rabbitmq-rabbitmq-headless.default', file system operation error: enoent
20:04:40.265 [error] Supervisor ra_sup had child ra_system_sup started with ra_system_sup:start_link() at undefined exit with reason {error,"Ra could not create its data directory. See the log for details."} in context start_error
20:04:40.266 [error] CRASH REPORT Process <0.247.0> with 0 neighbours exited with reason: {error,"Ra could not create its data directory. See the log for details."} in ra_system_sup:init/1 line 43
20:04:40.267 [error] CRASH REPORT Process <0.241.0> with 0 neighbours exited with reason: {{shutdown,{failed_to_start_child,ra_system_sup,{error,"Ra could not create its data directory. See the log for details."}}},{ra_app,start,[normal,[]]}} in application_master:init/4 line 138
{"Kernel pid terminated",application_controller,"{application_start_failure,ra,{{shutdown,{failed_to_start_child,ra_system_sup,{error,\"Ra could not create its data directory. See the log for details.\"}}},{ra_app,start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,ra,{{shutdown,{failed_to_start_child,ra_system_sup,{error,"Ra could not create its data directory. See the log for details."}
Crash dump is being written to: erl_crash.dump...
The issue is that RabbitMQ is not able to set up its data files in the data directory /var/lib/rabbitmq/mnesia due to a lack of permission.
My initial guess was that I needed to specify the data directory as a volumeMount, but that doesn't seem to be configurable according to the documentation.
RabbitMQ's troubleshooting documentation on persistence results in a 404.
I tried to find other resources online with the same problem but none of them were using the RabbitMQ Cluster Operator. I plan on following that route if I'm not able to find a solution but I really would like to solve this issue somehow.
Does anyone have any ideas?
The setup that I have is as follows:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq
spec:
  replicas: 1
  service:
    type: NodePort
  persistence:
    storageClassName: local-storage
    storage: 20Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rabbitmq-pvc
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: rabbitmq-pv
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 20Gi
  hostPath:
    path: /mnt/app/rabbitmq
    type: DirectoryOrCreate
To reproduce this issue on minikube:
Install the rabbitmq operator:
kubectl apply -f "https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml"
Apply the manifest file above
kubectl apply -f rabbitmq.yml
Running kubectl get po displays a pod named rabbitmq-rabbitmq-server-0.
Running kubectl logs -f rabbitmq-rabbitmq-server-0 to view the logs displays the above error.
As I already suggested in the comments, you can solve it by running:
minikube ssh -- sudo chmod g+w /mnt/app/rabbitmq/
Answering your question:
Is there a way I can add that to my manifest file rather than having to do it manually?
You can override the rabbitmq statefulset manifest fields to change the last line of the initContainer command script from chgrp 999 /var/lib/rabbitmq/mnesia/ to chown 999:999 /var/lib/rabbitmq/mnesia/.
The complete RabbitmqCluster yaml manifest looks like the following:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq
spec:
  replicas: 1
  service:
    type: NodePort
  persistence:
    storageClassName: local-storage
    storage: 20Gi
  override:
    statefulSet:
      spec:
        template:
          spec:
            containers: []
            initContainers:
              - name: setup-container
                command:
                  - sh
                  - -c
                  - cp /tmp/rabbitmq/rabbitmq.conf /etc/rabbitmq/rabbitmq.conf && chown 999:999
                    /etc/rabbitmq/rabbitmq.conf && echo '' >> /etc/rabbitmq/rabbitmq.conf ; cp /tmp/rabbitmq/advanced.config
                    /etc/rabbitmq/advanced.config && chown 999:999 /etc/rabbitmq/advanced.config
                    ; cp /tmp/rabbitmq/rabbitmq-env.conf /etc/rabbitmq/rabbitmq-env.conf && chown
                    999:999 /etc/rabbitmq/rabbitmq-env.conf ; cp /tmp/erlang-cookie-secret/.erlang.cookie
                    /var/lib/rabbitmq/.erlang.cookie && chown 999:999 /var/lib/rabbitmq/.erlang.cookie
                    && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins
                    /etc/rabbitmq/enabled_plugins && chown 999:999 /etc/rabbitmq/enabled_plugins
                    ; chown 999:999 /var/lib/rabbitmq/mnesia/ # <- CHANGED THIS LINE
I had the same issue when deploying RabbitMQ in kubernetes inside Vagrant (not minikube though). I was using this setup.
I tried running sudo chmod g+w /mnt/app/rabbitmq/ but had no luck...
Eventually I gave up and ended up running minikube inside vagrant using this box, and everything worked perfectly fine out of the box! I didn't have to do anything special, not even manually creating the persistent volume inside my nodes.
I had this issue in a live cluster, where minikube's SSH command was not an option. So what I did was run chmod on my hostpath provisioner directory and recreate my rabbitmq cluster:
chmod 777 /tmp/hostpath-provisioner/default/*
I found the answer to this issue. It happens when there are several nodes in the cluster.
The solution is to add securityContext: {}, as described here:
https://github.com/rabbitmq/rabbitmq-website/blob/3ee8e72a7c4fe52e323ba1039eecbf3a67c554f7/site/kubernetes/operator/using-on-openshift.md#arbitrary-user-ids
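Translated into the RabbitmqCluster manifest, the override would look something like this sketch (the override path mirrors the statefulSet override from the answer above; per the linked doc, the empty security context lets the platform assign arbitrary user ids):
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq
spec:
  override:
    statefulSet:
      spec:
        template:
          spec:
            # run with an empty security context as suggested above
            securityContext: {}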
I'm setting up my application with Kubernetes. I have 2 Docker images (Oracle and WebLogic). I have 2 Kubernetes nodes, Node1 (20 GB storage) and Node2 (60 GB storage).
When I run kubectl apply -f oracle.yaml it tries to create the oracle pod on Node1, and after a few minutes it fails due to lack of storage. How can I force Kubernetes to check a node's free storage before creating the pod there?
Thanks
First of all, you probably want to give Node1 more storage.
But if you don't want the pod to start at all, you can run a check with an initContainer that measures how much space you are using with something like du or df. It could be a script that checks a threshold and exits unsuccessfully if there is not enough space. Something like this:
#!/bin/bash
# Fail if more than 10000 1K-blocks are already used in <dir>
if [ `du <dir> | tail -1 | awk '{print $1}'` -gt "10000" ]; then exit 1; fi
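A minimal sketch of wiring that check into the pod as an initContainer; the image name, directory, and threshold are assumptions, and the main container only starts if the init check exits 0:
apiVersion: v1
kind: Pod
metadata:
  name: oracle
spec:
  initContainers:
    - name: check-space
      image: busybox
      # exit non-zero (and block the pod) if more than 10000 1K-blocks are used
      command: ["sh", "-c", "[ $(du -s /data | awk '{print $1}') -le 10000 ]"]
      volumeMounts:
        - name: data
          mountPath: /data
  containers:
    - name: oracle
      image: my-oracle-image   # assumption: your Oracle image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      hostPath:
        path: /var/oracle-data   # assumption: the host directory being checked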
Another alternative is to use a persistent volume (PV) with a persistent volume claim (PVC) that has enough space, together with the default StorageClass admission controller, and allocate the appropriate space in your volume definition:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 40Gi
  storageClassName: mytype
Then on your Pod:
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: mycontainer
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
The pod will not start if your claim cannot be allocated (i.e. there isn't enough space).
You may try specifying an ephemeral storage requirement for the pod:
resources:
  requests:
    ephemeral-storage: "40Gi"
  limits:
    ephemeral-storage: "40Gi"
Then it would be scheduled only on nodes with sufficient ephemeral storage available.
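In a full pod manifest the block sits under the container, e.g. (a minimal sketch; the image name is an assumption):
apiVersion: v1
kind: Pod
metadata:
  name: oracle
spec:
  containers:
    - name: oracle
      image: my-oracle-image   # assumption: your Oracle image
      resources:
        requests:
          ephemeral-storage: "40Gi"
        limits:
          ephemeral-storage: "40Gi"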
You can verify the amount of ephemeral storage available on each node in the output of "kubectl describe node".
$ kubectl describe node somenode | grep -A 6 Allocatable
Allocatable:
attachable-volumes-gce-pd: 64
cpu: 3920m
ephemeral-storage: 26807024751
hugepages-2Mi: 0
memory: 12700032Ki
pods: 110
I'm trying to create a dynamic Azure Disk volume to use in a pod that has specific permissions requirements.
The container runs under the user id 472, so I need to find a way to mount the volume with rw permissions for (at least) that user.
With the following StorageClass defined
apiVersion: storage.k8s.io/v1
kind: StorageClass
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Delete
volumeBindingMode: Immediate
metadata:
  name: foo-storage
mountOptions:
  - rw
parameters:
  cachingmode: None
  kind: Managed
  storageaccounttype: Standard_LRS
and this PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: foo-storage
  namespace: foo
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: foo-storage
  resources:
    requests:
      storage: 1Gi
I can run the following in a pod:
containers:
  - image: ubuntu
    name: foo
    imagePullPolicy: IfNotPresent
    command:
      - ls
      - -l
      - /var/lib/foo
    volumeMounts:
      - name: foo-persistent-storage
        mountPath: /var/lib/foo
volumes:
  - name: foo-persistent-storage
    persistentVolumeClaim:
      claimName: foo-storage
The pod will mount and start correctly, but kubectl logs <the-pod> will show
total 24
drwxr-xr-x 3 root root 4096 Nov 23 11:42 .
drwxr-xr-x 1 root root 4096 Nov 13 12:32 ..
drwx------ 2 root root 16384 Nov 23 11:42 lost+found
i.e. the current directory is mounted as owned by root and read-only for all other users.
I've tried adding a mountOptions section to the StorageClass, but whatever I try (uid=472, user=472 etc) I get mount errors on startup, e.g.
mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/m1019941199 --scope -- mount -t ext4 -o group=472,rw,user=472,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/m1019941199
Output: Running scope as unit run-r7165038756bf43e49db934e8968cca8b.scope.
mount: wrong fs type, bad option, bad superblock on /dev/sdc,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
I've also tried to get some info from man mount, but I haven't found anything that worked.
How can I configure this storage class, persistent volume claim and volume mount so that the non-root user running the container process gets access to write (and create subdirectories) in the mounted path?
You need to define the securityContext of your pod spec like the following, so it matches the new running user and group id:
securityContext:
  runAsUser: 472
  fsGroup: 472
The stable Grafana Helm Chart also does it in the same way. See securityContext under Configuration here: https://github.com/helm/charts/tree/master/stable/grafana#configuration
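Applied to the pod from the question, the spec would look roughly like this (only the securityContext block is new):
spec:
  securityContext:
    runAsUser: 472
    fsGroup: 472
  containers:
    - image: ubuntu
      name: foo
      volumeMounts:
        - name: foo-persistent-storage
          mountPath: /var/lib/foo
  volumes:
    - name: foo-persistent-storage
      persistentVolumeClaim:
        claimName: foo-storage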
I have a kubernetes cluster that is running in our network, and I have set up an NFS server on another machine in the same network. I am able to ssh to any of the nodes in the cluster and mount from the server by running sudo mount -t nfs 10.17.10.190:/export/test /mnt, but whenever my test pod tries to use an NFS persistent volume that points at that server it fails with this message:
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
19s 19s 1 default-scheduler Normal Scheduled Successfully assigned nfs-web-58z83 to wal-vm-newt02
19s 3s 6 kubelet, wal-vm-newt02 Warning
FailedMount MountVolume.SetUp failed for volume "kubernetes.io/nfs/bad55e9c-7303-11e7-9c2f-005056b40350-test-nfs" (spec.Name: "test-nfs") pod "bad55e9c-7303-11e7-9c2f-005056b40350" (UID: "bad55e9c-7303-11e7-9c2f-005056b40350") with: mount failed: exit status 32
Mounting command: mount
Mounting arguments: 10.17.10.190:/exports/test /var/lib/kubelet/pods/bad55e9c-7303-11e7-9c2f-005056b40350/volumes/kubernetes.io~nfs/test-nfs nfs []
Output: mount.nfs: access denied by server while mounting 10.17.10.190:/exports/test
Does anyone know how I can fix this and make it so that I can mount from the external NFS server?
The nodes of the cluster are running on 10.17.10.185 - 10.17.10.189 and all of the pods run with ips that start with 10.0.x.x. All of the nodes on the cluster and the NFS server are running Ubuntu. The NFS server is running on 10.17.10.190 with this /etc/exports:
/export 10.17.10.185/255.0.0.0(rw,sync,no_subtree_check)
I set up a persistent volume and persistent volume claim, and they are both created successfully, showing this output from running kubectl get pv,pvc:
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
pv/test-nfs 1Mi RWX Retain Bound staging/test-nfs 15m
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
pvc/test-nfs Bound test-nfs 1Mi RWX 15m
They were created like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: 10.17.10.190
    path: "/exports/test"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-nfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
My test pod is using this configuration:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web
spec:
  replicas: 1
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
        - name: web
          image: nginx
          ports:
            - name: web
              containerPort: 80
          volumeMounts:
            # name must match the volume name below
            - name: test-nfs
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: test-nfs
          persistentVolumeClaim:
            claimName: test-nfs
It's probably because the uid used in your pod/container does not have enough rights on the NFS server.
You can use runAsUser as mentioned by @Giorgio, or try to edit the uid-range annotations of your namespace and pin a value (e.g. 666). That way every pod in your namespace will run with uid 666.
Don't forget to chown your NFS directory to uid 666 accordingly.
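The runAsUser route is a small addition to the pod spec plus a one-time chown on the export; a sketch (uid 666 and the export path are assumptions to adjust to your setup):
securityContext:
  runAsUser: 666
  fsGroup: 666
And on the NFS server:
$ sudo chown -R 666:666 /exports/test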
You have to set a securityContext with privileged: true. Take a look at this link.
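That is set per container, something like this sketch (container name and image taken from the question):
containers:
  - name: web
    image: nginx
    securityContext:
      privileged: true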
The complete solution for preparing NFS folder provisioning in a kubernetes cluster is to apply the following:
# set folder permission
sudo chmod 666 /your/folder/ # maybe 777
# append new line on exports file to allow network access to folder
sudo bash -c "echo '/your/folder/ <network ip/range>(rw,sync,no_root_squash,subtree_check)' >> /etc/exports"
# set folder export
sudo exportfs -ra
In my case I was trying to mount the wrong directory...
volumes:
  - name: nfs-data
    nfs:
      # https://github.com/kubernetes/minikube/issues/3417
      # server is not resolved using kube dns (so can't resolve to a service name - hence we need the IP)
      server: 10.100.155.82
      path: /tmp
I did not have /tmp in /etc/exports on the server...
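For completeness, the matching export line on the server would be something along these lines (the client network range is an assumption):
/tmp <network ip/range>(rw,sync,no_subtree_check)
followed by sudo exportfs -ra to re-export.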
Another option is to add the uid/gid information to the nfs container itself. You can do this by creating a script to add the entries to /etc/passwd and then launch the provisioner:
groupadd -g 102 postfix
adduser -u 101 -g 102 -M postfix
groupadd -g 5000 vmail
adduser -u 5000 -g 5000 -M vmail
adduser -u 33 -g 33 -M www-data
groupadd -g 8983 solr
adduser -u 8983 -g 8983 -M solr
...
/nfs-server-provisioner -provisioner=cluster.local/nfs-provisioner-nfs-server-provisioner
This allows the user/group information to be preserved over the NFS boundary using NFSv3 (which is what nfs-provisioner uses). My understanding is that NFSv4 doesn't have this problem, but I've been unable to get NFSv4 working with the provisioner.
In my case, it was the way I mounted NFS. The configuration that worked was the following:
/media/drive *(rw,sync,no_root_squash,insecure,no_subtree_check)
Note that this is insecure, you might want to tweak it to make it secure, while still making it work!