We have a setup in GKE with two different clusters. One cluster runs an NFS server, and on that cluster we have a PersistentVolume that points to the server. This PV is then mounted in a pod running on the same cluster. The second cluster also has a PV and a pod that should mount the same NFS volume, and this is where the problem occurs: pointing the PV at the server via the nfs-server Service's ClusterIP does not work. This is understandable, but I wonder how best to achieve this.
The setup is basically this:
Persistent Volume and Persistent Volume Claim used by the NFS server
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 20Gi
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  gcePersistentDisk:
    pdName: files
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
NFS server deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
        - name: nfs-server
          image: gcr.io/google_containers/volume-nfs:0.8
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /exports
              name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: nfs-pvc
NFS-server service
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server
Persistent volume and Persistent Volume Claim used by the pods:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 20Gi
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.4.0.20
    path: "/"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
Part of the deployment file for the pod mounting the NFS volume:
volumes:
  - name: files
    persistentVolumeClaim:
      claimName: nfs
Output of kubectl get pv and kubectl get pvc
user@HP-EliteBook:~/Downloads$ kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
nfs      100Gi      RWX            Retain           Bound    default/nfs       manual                  286d
nfs-pv   100Gi      RWO            Retain           Bound    default/nfs-pvc   manual                  286d
user@HP-EliteBook:~/Downloads$ kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs       Bound    nfs      100Gi      RWX            manual         286d
nfs-pvc   Bound    nfs-pv   100Gi      RWO            manual         286d
The IP in the PV used by the pods is the problem. The pod on the same cluster can connect to it, but not the pod on the other cluster. I can use the actual pod IP from the other cluster, but the pod IP changes with every deploy, so that is not a working solution. What is the best way to get around this problem? I only want this second cluster to have access to the NFS server, without, for example, opening it up to the world.
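One way to get a stable, VPC-only endpoint (rather than the ClusterIP or a pod IP) would be to expose the nfs-server Deployment through an internal load balancer, assuming both clusters sit in the same VPC. The following is only a sketch: the Service name is made up, and the annotation shown is the GKE-specific internal-load-balancer annotation (older GKE versions use cloud.google.com/load-balancer-type instead).
apiVersion: v1
kind: Service
metadata:
  name: nfs-server-internal
  annotations:
    # Ask GKE for an internal (VPC-only) load balancer, not an external one
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server
The nfs.server field of the PV in the second cluster could then point at the load balancer's address, which stays stable across redeployments of the NFS server pod.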
Related
I have an NFS server running on a VM in GCE. The NFS server's /etc/exports file is already configured to allow mounting by the K8s cluster. I attempted to create a Persistent Volume (PV) and Persistent Volume Claim (PVC) and added spec.containers.volumeMounts and spec.volumes entries, but instead of connecting to the NFS server, Google provisions a new disk.
In the deployment file I have added:
volumeMounts:
  - name: nfs-pvc-data
    mountPath: "/mnt/nfs"
volumes:
  - name: nfs-pvc-data
    persistentVolumeClaim:
      claimName: nfs-pvc
nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
  namespace: comms
  labels:
    volume: nfs-pv
spec:
  capacity:
    storage: 10G
  #volumeMode: Filesystem
  accessModes:
    - ReadOnlyMany
  mountOptions:
    - hard
    - nfsvers=4.2
  nfs:
    path: /
    server: 10.xxx.xx.xx
    readOnly: false
nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
  namespace: comms
spec:
  # The PVC gets stuck (never binds) as long as spec.selector exists in the YAML
  #selector:
  #  matchLabels:
  #    volume: nfs-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1G
I was able to mount the NFS server external to the Kubernetes cluster using the following PV and PVC YAML. Note the empty storageClassName on the PVC, which keeps the default StorageClass from dynamically provisioning a new disk and lets the claim bind to the existing NFS-backed PV.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gce-nfs
  namespace: comms
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.xxx.xxx.xxx
    path: "/"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gce-nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Mi
Then in the deployment YAML I added the following (volumeMounts under the container, volumes under the pod spec):
volumeMounts:
  - name: nfs-pvc-data
    mountPath: "/mnt/nfs"
volumes:
  - name: nfs-pvc-data
    persistentVolumeClaim:
      claimName: gce-nfs
The pod/container will boot connected to the external NFS server.
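To double-check that the pod really mounted the external share rather than a freshly provisioned disk, something like the following can be run; the pod name is a placeholder for whatever the Deployment actually created:
# Find the pod created by the deployment, then inspect the NFS mount inside it
kubectl get pods
kubectl exec <your-pod-name> -- df -h /mnt/nfs
kubectl exec <your-pod-name> -- ls -la /mnt/nfs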
I have a 3-node cluster running in VirtualBox and I'm trying to create NFS storage using a PV and PVC, but it seems that I'm doing something wrong.
I have the following:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv
  labels:
    type: nfs
spec:
  capacity:
    storage: 100Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /redis/data
    server: 192.168.56.2 # ip of my master-node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 100Mi
  storageClassName: slow
  selector:
    matchLabels:
      type: nfs
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
        - name: master
          image: redis
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: data
              mountPath: "/redis/data"
          ports:
            - containerPort: 6379
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: redis-pvc
I've already installed nfs-common on all my nodes.
Whenever I create the PV, PVC and Pod, the pod does not start and I get the following:
Warning FailedMount 30s kubelet, kubenode02 MountVolume.SetUp failed for volume "redis-pv" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/9326d818-b78a-42cc-bcff-c487fc8840a4/volumes/kubernetes.io~nfs/redis-pv --scope -- mount -t nfs -o hard,nfsvers=4.1 192.168.56.2:/redis/data /var/lib/kubelet/pods/9326d818-b78a-42cc-bcff-c487fc8840a4/volumes/kubernetes.io~nfs/redis-pv
Output: Running scope as unit run-rc316990c37b14a3ba24d5aedf66a3f6a.scope.
mount.nfs: Connection timed out
Here is the output of kubectl get pv,pvc:
NAME                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
persistentvolume/redis-pv   100Mi      RWO            Retain           Bound    default/redis-pvc   slow                    8s

NAME                              STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/redis-pvc   Bound    redis-pv   100Mi      RWO            slow           8s
Any idea what I am missing?
1 - You need to install your NFS server. Follow the instructions in this link:
https://vitux.com/install-nfs-server-and-client-on-ubuntu/
2 - Create the shared folder where you want to persist your data, then mount it on the client (a minimal /etc/exports sketch follows step 3):
mount 192.168.56.2:/mnt/sharedfolder /mnt/shared/folder_client
3 - Change the following in PV.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv
  labels:
    type: nfs
spec:
  capacity:
    storage: 100Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /mnt/sharedfolder
    server: 192.168.56.2 # ip of my master-node
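For steps 1 and 2, a minimal sketch of the export on the NFS server (192.168.56.2); the network range and export options are assumptions to adapt to your VirtualBox network:
# On the master node (NFS server): export the shared folder to the node network
echo "/mnt/sharedfolder 192.168.56.0/24(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -ra

# From a worker node: verify the export is reachable before pointing the PV at it
showmount -e 192.168.56.2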
I am trying to create an NFS sidecar for Kubernetes. The goal is to be able to mount an NFS volume into an existing pod without affecting performance. At the same time, I want to be able to mount the same NFS volume onto another pod or server (read-only, perhaps) in order to view the content there. Has anyone tried this? Does anyone have the procedure?
Rather than using a sidecar, I would suggest using a PersistentVolume that uses the NFS driver, together with a PersistentVolumeClaim. If you use the RWX/ReadWriteMany access mode, you'll be able to mount the share into multiple pods.
For example, the PV:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mypv
spec:
  capacity:
    storage: 2Gi
  nfs:
    server: my.nfs.server
    path: /myshare
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
the pvc:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
and mounted in a pod:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
Kubernetes Docs on Persistent Volumes
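To cover the "read-only perhaps" part of the question, a second pod can mount the same claim with readOnly set on its volumeMount. This is just a sketch reusing the names above; the image and command are arbitrary:
apiVersion: v1
kind: Pod
metadata:
  name: mypod-reader
spec:
  containers:
    - name: viewer
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - mountPath: "/data"
          name: mypd
          readOnly: true   # same NFS-backed claim, mounted read-only here
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim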
I want to set up an in-cluster NFS server in Kubernetes to provide shares for my pods (nginx webroot etc.).
In theory there should be a persistent volume, a volume claim and the NFS server, which, as I understand it, is a Deployment.
To use the PV and PVC I need to specify the NFS server's IP address, which I don't know, because it is generated automatically when I expose the NFS server with a Service.
The same problem appears if I want to deploy the nfs-server Deployment itself, because I am using the PVCs as volumes. But I can't deploy the PV and PVCs without giving them the NFS server's IP.
I think I am lost, maybe you can help me.
PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-pv1
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: "/exports/www"
    server: SERVER_NAME:PORT
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-nfs-pv1
  labels:
    type: local
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
NFS-Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
        - name: nfs-server
          image: gcr.io/google_containers/volume-nfs:0.8
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /exports/www
              name: pv-nfs-pv1
      volumes:
        - name: pv-nfs-pv1
          gcePersistentDisk:
            pdName: pv-nfs-pv1
            # fsType: ext4
1) You create the NFS-server Deployment.
2) You expose the NFS-server Deployment by creating a Service, say "nfs-server", exposing TCP port 2049 (assuming you use NFSv4); see the sketch after step 4.
3) You create the PV with the following information:
   nfs:
     path: /exports/www
     server: nfs-server
4) You create the PVC and mount it wherever you need it.
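A minimal sketch of the Service from step 2, assuming the volume-nfs image used in the question:
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  ports:
    - name: nfs
      port: 2049    # NFSv4; add mountd (20048) and rpcbind (111) if NFSv3 is needed
  selector:
    role: nfs-server
Whether the kubelet can resolve the nfs-server Service name when it performs the mount depends on the cluster's DNS setup; if it cannot, the Service's ClusterIP can be used in the PV instead.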
By following the Kubernetes guide I have created a PV, PVC and Pod. I have claimed only 10Mi out of a 20Mi PV. I have copied 23Mi, which is more than my PV. But my pod is still running. Can anyone explain?
pv-volume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
pv-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
pv-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
You can probably keep copying data into the shared storage /mnt/data (on your active node) through any of the pod's mounted paths, such as /usr/share/nginx/html, which is shared between the node and the pods, until your node stops responding. The capacity declared on the PV and requested by the PVC is not enforced for hostPath volumes; it is only used for matching the claim to the volume.
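One way to see this, using the pod and paths from the question (the commands assume kubectl access to the cluster and shell access to the node):
# Inside the pod: what the mount reports vs. what has actually been written
kubectl exec task-pv-pod -- df -h /usr/share/nginx/html
kubectl exec task-pv-pod -- du -sh /usr/share/nginx/html

# On the node: the same data lives under the hostPath directory
du -sh /mnt/data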
If you need to test this scenario under more realistic conditions, consider creating NFS persistent storage using GlusterFS or nfs-utils, or mounting a raw partition file made with dd.
In Minikube, nodes use ephemeral storage. Detailed information about node/pod resources can be found here:
https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable
Hope this helps.