How to mount Kubernetes NFS share from external computer? - kubernetes

I'm able to create an NFS server using Docker, then mount it from an external computer on the network. My laptop for example.
Docker example:
docker run -itd --privileged \
--restart unless-stopped \
-e SHARED_DIRECTORY=/data \
-v /home/user/nfstest/data:/data \
-p 2049:2049 \
itsthenetwork/nfs-server-alpine
I am able to mount the above server onto a local machine using:
sudo mount -v 192.168.60.80:/ nfs
From my local laptop, however, I always get permission errors when trying to mount an NFS share that is hosted by Kubernetes. Can someone explain why mounting fails with a permission error when the NFS server is created through Kubernetes?
As an example, I create my persistent volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 80Gi
  accessModes:
    - ReadWriteMany
  storageClassName: local-nfs-storage
  local:
    path: "/home/rancher/nexus-data"
  persistentVolumeReclaimPolicy: Retain
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - kubenode-02
Then create the persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: local-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 80Gi
Last, I create the NFS server deployment. I use a Service with externalIPs so that I can connect to the pod from anywhere outside of my Kubernetes network. I can nmap the ports from my laptop and see that they are open (the exact check is shown after the manifests below).
---
kind: Service
apiVersion: v1
metadata:
  name: nfs-service
spec:
  selector:
    role: nfs
  ports:
    # Open the ports required by the NFS server
    # Port 2049 for TCP
    - name: tcp-2049
      targetPort: 2049
      port: 2049
      protocol: TCP
    # Port 111 for UDP
    - name: udp-111
      targetPort: 111
      port: 111
      protocol: UDP
  # Use host network with external IP service
  externalIPs:
    - 192.168.60.42
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs
  template:
    metadata:
      name: nfs-server
      labels:
        role: nfs
    spec:
      volumes:
        - name: nfs-pv-storage
          persistentVolumeClaim:
            claimName: nfs-pvc
      securityContext:
        fsGroup: 1000
      containers:
        - name: nfs-server-container
          image: itsthenetwork/nfs-server-alpine
          securityContext:
            privileged: true
          env:
            # Pass the path to share data
            - name: SHARED_DIRECTORY
              value: "/exports"
          volumeMounts:
            - mountPath: "/exports"
              name: nfs-pv-storage
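The port check from my laptop looks roughly like this (192.168.60.42 is the externalIP from the Service above; the UDP scan needs root):
nmap -p 2049 192.168.60.42
sudo nmap -sU -p 111 192.168.60.42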
I have tried playing with pod securityContext settings such as fsGroup, runAsUser, and runAsGroup. I have played with different permissions for the local folder I am mounting the NFS share onto. I have also tried different options for /etc/exports within the NFS pod (an example is shown below). No matter what, I get permission denied when trying to mount the NFS share from a local computer.
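For reference, one of the /etc/exports lines I experimented with inside the NFS pod looked roughly like this (the exact options varied between attempts; insecure permits clients connecting from non-privileged source ports, which NATed traffic often uses):
/exports *(rw,sync,insecure,no_subtree_check,no_root_squash)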
I AM able to mount the Kubernetes NFS share from within a separate Kubernetes Pod.
Is there something I do not understand about how the Kubernetes network handles external traffic?

Related

Kubernetes mount volume keeps timing out even though volume can be mounted from sudo mount

I have a read-only persistent volume that I'm trying to mount onto a StatefulSet, but after making some changes to the program and re-creating the pods, the pod can no longer mount the volume.
PV yaml file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: foo-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadOnlyMany
  nfs:
    server: <ip>
    path: "/var/foo"
  claimRef:
    name: foo-pvc
    namespace: foo
PVC yaml file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: foo-pvc
  namespace: foo
spec:
  accessModes:
    - ReadOnlyMany
  storageClassName: ""
  volumeName: foo-pv
  resources:
    requests:
      storage: 2Gi
Statefulset yaml:
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
  selector:
    app: foo-app
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foo-statefulset
  namespace: foo
spec:
  selector:
    matchLabels:
      app: foo-app
  serviceName: foo-app
  replicas: 1
  template:
    metadata:
      labels:
        app: foo-app
    spec:
      serviceAccountName: foo-service-account
      containers:
        - name: fooContainer
          image: <image>
          imagePullPolicy: Always
          volumeMounts:
            - name: writer-data
              mountPath: <path>
            - name: nfs-objectd
              mountPath: <path>
      volumes:
        - name: nfs-foo
          persistentVolumeClaim:
            claimName: foo-pvc
  volumeClaimTemplates:
    - metadata:
        name: writer-data
      spec:
        accessModes: [ "ReadWriteMany" ]
        storageClassName: "foo-sc"
        resources:
          requests:
            storage: 2Gi
k describe pod reports "Unable to attach or mount volumes: unmounted volumes=[nfs-foo]: timed out waiting for the condition". There is a firewall between the machine running Kubernetes and the NFS server, but the port has been unblocked and the folder has been exported for mounting on the NFS side. Running sudo mount -t nfs :/var/foo /var/foo successfully mounts the NFS share, so I don't understand why Kubernetes isn't able to mount it anymore. It has been stuck failing to mount for several days now. Is there any other way to debug this?
Thanks!
Based on the error "unable to attach or mount volumes … timed out waiting for condition": similar issues have been reported to the product team, and it is a known issue. However, this error is mostly observed on preemptible/spot nodes when a node is preempted. In similar occurrences for other users, upgrading the control plane version temporarily resolved the issue on preemptible/spot nodes.
Also, if you are not using any preemptible/spot nodes in your cluster, the issue may have happened when an old node was replaced by a new node. If you are still facing it, try upgrading the control plane to the same version, i.e. execute the following command:
$ gcloud container clusters upgrade CLUSTER_NAME --master --zone ZONE --cluster-version VERSION
Another workaround to fix this issue is to remove the stale VolumeAttachment with the following command:
$ kubectl delete volumeattachment [volumeattachment_name]
After running the command and thus removing the VolumeAttachment, the pod should eventually pick up and retry. You can read more about this issue and its cause here.
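If you are not sure which VolumeAttachment is stale, you can list them first and match the PV and node names, for example:
$ kubectl get volumeattachments
$ kubectl describe volumeattachment [volumeattachment_name]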

Permission denied when mounting in a Kubernetes pod with the root user

When I use this command in a Kubernetes v1.18 Jenkins master pod to mount an NFS file system:
root@jenkins-67fff76bb6-q77xf:/# mount -t nfs -o v4 192.168.31.2:/infrastructure/jenkins-share-workspaces /opt/jenkins
mount: permission denied
root@jenkins-67fff76bb6-q77xf:/#
Why does it show permission denied although I am using the root user? When I use this command on another machine (not in Docker), it works fine, which shows the server side works fine. This is my Kubernetes Jenkins master pod securityContext config in YAML:
securityContext:
  runAsUser: 0
  fsGroup: 0
Today I tried another Kubernetes pod, mounted the NFS file system, and got the same error. It seems that mounting NFS from the host works fine, while mounting from a Kubernetes pod has a permission problem. Why would this happen? The NFS share works fine when bound through a PVC/PV in this Kubernetes pod, so why does mounting it manually from inside the container fail? I am confused.
There are two ways to mount an NFS volume in a pod.
First (directly in pod spec):
kind: Pod
apiVersion: v1
metadata:
  name: pod-using-nfs
spec:
  volumes:
    - name: nfs-volume
      nfs:
        server: 192.168.31.2
        path: /infrastructure/jenkins-share-workspaces
  containers:
    - name: app
      image: example
      volumeMounts:
        - name: nfs-volume
          mountPath: /var/nfs
Second (creating a persistent NFS volume and volume claim):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.31.2
    path: "/infrastructure/jenkins-share-workspaces"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Mi
  volumeName: nfs
---
kind: Pod
apiVersion: v1
metadata:
  name: pod-using-nfs
spec:
  containers:
    - name: app
      image: example
      volumeMounts:
        - name: nfs
          mountPath: /opt/jenkins
  volumes:
    - name: nfs
      persistentVolumeClaim:
        claimName: nfs
EDIT:
The solution above is the preferred one, but if you really need to run mount inside the container, you need to add capabilities to the pod:
spec:
  containers:
    - securityContext:
        capabilities:
          add: ["SYS_ADMIN"]
Try using
securityContext:
  privileged: true
This is needed if you are using DinD (Docker-in-Docker) for Jenkins.

K8S: Is it possible to use the disk of one node as cluster shared storage?

I have two PCs available at home and want to build a test lab for K8s.
One PC has a big drive, so I am thinking about making that storage available to both nodes.
Most of the info I found is about local storage or fully external storage.
Ideally I want a full k8s solution that can be autoscaled via a Deployment (just add one more node with the needed affinity and it will be scaled there as well).
So, is it possible? Any guides on how to do that?
As already mentioned by @David Maze, there is no native way in Kubernetes to share a node's storage.
What you could do is set up NFS storage on the node that has the largest disk and share it across the pods.
You could set up the NFS server inside the k8s cluster as a pod using a Docker NFS Server image.
The NFS pod might look like this:
kind: Pod
apiVersion: v1
metadata:
  name: nfs-server-pod
  labels:
    role: nfs
spec:
  containers:
    - name: nfs-server-container
      image: cpuguy83/nfs-server
      securityContext:
        privileged: true
      args:
        # Pass the paths to share to the Docker image
        - /exports
You will also have to expose the pod using a service:
kind: Service
apiVersion: v1
metadata:
  name: nfs-service
spec:
  selector:
    role: nfs
  ports:
    # Open the ports required by the NFS server
    # Port 2049 for TCP
    - name: tcp-2049
      port: 2049
      protocol: TCP
    # Port 111 for UDP
    - name: udp-111
      port: 111
      protocol: UDP
Once done, you will be able to use it from any pod in the cluster:
kind: Pod
apiVersion: v1
metadata:
  name: pod-using-nfs
spec:
  # Add the server as an NFS volume for the pod
  volumes:
    - name: nfs-volume
      nfs:
        # URL for the NFS server
        server: 10.108.211.244 # Change this!
        path: /
  # In this container, we'll mount the NFS volume
  # and write the date to a file inside it.
  containers:
    - name: app
      image: alpine
      # Mount the NFS volume in the container
      volumeMounts:
        - name: nfs-volume
          mountPath: /var/nfs
      # Write to a file inside our NFS
      command: ["/bin/sh"]
      args: ["-c", "while true; do date >> /var/nfs/dates.txt; sleep 5; done"]
This is fully described in the Kubernetes Volumes Guide by Matthew Palmer; you should also read Deploying Dynamic NFS Provisioning in Kubernetes.

Minio data does not persist through reboot

I deployed Minio on Kubernetes on an Ubuntu desktop. It works fine, except that whenever I reboot the machine, everything that was stored in Minio mysteriously disappears (if I create several buckets with files in them, I come back to a completely blank slate after the reboot - the buckets, and all their files, are completely gone).
When I set up Minio, I created a persistent volume in Kubernetes which mounts to a folder (/mnt/minio/minio - I have a 4 TB HDD mounted at /mnt/minio with a folder named minio inside that). I noticed that this folder seems to be empty even when I store stuff in Minio, so perhaps Minio is ignoring the persistent volume and using the container storage? However, I don't know why this would be happening; I have both a PV and a PV claim, and kubectl shows that they are bound to each other.
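For reference, this is roughly how I checked the binding:
kubectl get pv minio-pv-volume
kubectl get pvc minio-pv-claim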
Below are the yaml files I applied to deploy my minio installation:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: minio-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/minio/minio"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim
  labels:
    app: minio-storage-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 99Gi
---
apiVersion: apps/v1 # for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio-deployment
spec:
  selector:
    matchLabels:
      app: minio
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # Label is used as selector in the service.
        app: minio
    spec:
      # Refer to the PVC created earlier
      volumes:
        - name: storage
          persistentVolumeClaim:
            # Name of the PVC created earlier
            claimName: minio-pv-claim
      containers:
        - name: minio
          # Pulls the default Minio image from Docker Hub
          image: minio/minio:latest
          args:
            - server
            - /storage
          env:
            # Minio access key and secret key
            - name: MINIO_ACCESS_KEY
              value: "minio"
            - name: MINIO_SECRET_KEY
              value: "minio123"
          ports:
            - containerPort: 9000
              hostPort: 9000
          # Mount the volume into the pod
          volumeMounts:
            - name: storage # must match the volume name, above
              mountPath: "/mnt/minio/minio"
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: LoadBalancer
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app: minio
You need to point Minio's storage directory at the directory where the volume is mounted in the container (/mnt/minio/minio/). Right now the server writes to /storage, which lives only in the container filesystem and is lost when the pod is recreated:
args:
- server
- /mnt/minio/minio/storage
But consider deploying with a StatefulSet, so that when your pod restarts it retains the storage of the previous pod.
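Equivalently (a sketch, not part of the original answer), you could keep the server /storage args and instead mount the persistent volume at /storage:
volumeMounts:
  - name: storage # must match the volume name, above
    mountPath: "/storage" # matches the path passed to "minio server"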

NFS volumes are not persistent in kubernetes

I'm trying to mount Mongo's /data directory onto an NFS volume on my Kubernetes master machine to persist Mongo data. I see the volume is mounted successfully, but I can only see the configdb and db dirs, not their subdirectories, and the data is not persisting in the volume. When I kubectl describe <my_pv>, it shows NFS (an NFS mount that lasts the lifetime of a pod).
Why is that so?
The Kubernetes docs state that:
An nfs volume allows an existing NFS (Network File System) share to be
mounted into your pod. Unlike emptyDir, which is erased when a Pod is
removed, the contents of an nfs volume are preserved and the volume is
merely unmounted. This means that an NFS volume can be pre-populated
with data, and that data can be “handed off” between pods. NFS can be
mounted by multiple writers simultaneously.
I'm using kubernetes version 1.8.3.
mongo-deployment.yml:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: mongo
  labels:
    name: mongo
    app: mongo
spec:
  replicas: 3
  selector:
    matchLabels:
      name: mongo
      app: mongo
  template:
    metadata:
      name: mongo
      labels:
        name: mongo
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:3.4.9
          ports:
            - name: mongo
              containerPort: 27017
              protocol: TCP
          volumeMounts:
            - name: mongovol
              mountPath: "/data"
      volumes:
        - name: mongovol
          persistentVolumeClaim:
            claimName: mongo-pvc
mongo-pv.yml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
  labels:
    type: NFS
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: "/mongodata"
    server: 172.20.33.81
mongo-pvc.yml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
  storageClassName: slow
  selector:
    matchLabels:
      type: NFS
This is how I set up the NFS export on my Kubernetes master machine:
1) apt-get install nfs-kernel-server
2) mkdir /mongodata
3) chown nobody:nogroup -R /mongodata
4) vi /etc/exports
5) added the line "/mongodata *(rw,sync,all_squash,no_subtree_check)"
6) exportfs -ra
7) service nfs-kernel-server restart
8) showmount -e ----> shows the share
I logged into a bash shell in my pod and see the directory is mounted correctly, but the data is not persisting on my NFS server (the Kubernetes master machine).
Please help me see what I am doing wrong here.
It's possible that the pods don't have permission to create files and directories. You can exec into your pod and try to touch a file in the NFS share; if you get a permission error, you can loosen the permissions on the file system and in the exports file to allow write access.
It's also possible to specify a GID on the PV object to avoid permission denied issues:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#access-control
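For example (a sketch based on the linked docs; the GID value here is illustrative), the GID is attached to the PV as an annotation, and pods using that PV get it as a supplemental group:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
  annotations:
    pv.beta.kubernetes.io/gid: "1000" # illustrative GID; must match a group that can write to the export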
I see you did a chown nobody:nogroup -R /mongodata.
Make sure that the application in your pod runs as nobody:nogroup.
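A sketch of how that could look in the pod spec, assuming the common nobody/nogroup IDs of 65534 (check /etc/passwd in your image):
spec:
  securityContext:
    runAsUser: 65534 # nobody
    runAsGroup: 65534 # nogroup
    fsGroup: 65534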
Add the parameter mountOptions: "vers=4.1" to your StorageClass config; this should fix your issue.
See this Github comment for more info:
https://github.com/kubernetes-incubator/external-storage/issues/223#issuecomment-344972640
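For reference, mountOptions on a StorageClass is a list. A sketch (the provisioner here is only a placeholder for a statically provisioned setup):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/no-provisioner # placeholder; substitute your NFS provisioner if you use one
mountOptions:
  - vers=4.1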