I am trying to create an NFS sidecar for Kubernetes. The goal is to mount an NFS volume into an existing pod without affecting performance. At the same time, I want to be able to mount the same NFS volume onto another pod or server (read-only, perhaps) in order to view the content there. Has anyone tried this? Does anyone have the procedure?
Rather than use a sidecar, I would suggest using a PersistentVolume backed by the NFS driver together with a PersistentVolumeClaim. If you use the RWX/ReadWriteMany access mode, you'll be able to mount the share into multiple pods.
For example, the PV:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mypv
spec:
  capacity:
    storage: 2Gi
  nfs:
    server: my.nfs.server
    path: /myshare
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
The PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
and mounted in a pod:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
Kubernetes Docs on Persistent Volumes
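To cover the read-only part of the question, a second pod can reference the same claim and mark its mount read-only. A minimal sketch built on the manifests above (the pod name myviewer and its image are made up for illustration):
apiVersion: v1
kind: Pod
metadata:
  name: myviewer
spec:
  containers:
    - name: viewer
      image: busybox
      command: ["sh", "-c", "sleep 500000"]
      volumeMounts:
        - mountPath: /data
          name: mypd
          readOnly: true   # this pod only reads the share
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
Because the PV is ReadWriteMany, both pods can be scheduled at the same time and see the same NFS content.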
Related
I have an NFS server running on a VM in GCE. The NFS server's /etc/exports file is already configured to allow mounting by the K8s cluster. I attempted to create a PersistentVolume (PV) and PersistentVolumeClaim (PVC), and I added spec.containers.volumeMounts and spec.volumes entries. However, instead of connecting to the NFS server, Google basically provisions a new disk.
In the deployment file I have added:
volumeMounts:
  - name: nfs-pvc-data
    mountPath: "/mnt/nfs"
volumes:
  - name: nfs-pvc-data
    persistentVolumeClaim:
      claimName: nfs-pvc
nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
  namespace: comms
  labels:
    volume: nfs-pv
spec:
  capacity:
    storage: 10G
  #volumeMode: Filesystem
  accessModes:
    - ReadOnlyMany
  mountOptions:
    - hard
    - nfsvers=4.2
  nfs:
    path: /
    server: 10.xxx.xx.xx
    readOnly: false
nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
  namespace: comms
spec: # The PVC is stuck on Pending as long as spec.selector exists in the YAML
  #selector:
  #  matchLabels:
  #    volume: nfs-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1G
I was able to mount the NFS server, which is external to the Kubernetes cluster, using the following PV and PVC YAML.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gce-nfs
  namespace: comms
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.xxx.xxx.xxx
    path: "/"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gce-nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Mi
Then in the deployment YAML I added the following (volumeMounts under the container, volumes under the pod template spec):
volumeMounts:
  - name: nfs-pvc-data
    mountPath: "/mnt/nfs"
volumes:
  - name: nfs-pvc-data
    persistentVolumeClaim:
      claimName: gce-nfs
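For context, here is a sketch of where those fragments sit in a complete Deployment; the Deployment name my-app and its labels are placeholders, not taken from the original manifests:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: comms
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx
          volumeMounts:
            - name: nfs-pvc-data
              mountPath: "/mnt/nfs"
      volumes:
        - name: nfs-pvc-data
          persistentVolumeClaim:
            claimName: gce-nfs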
The pod/container will boot connected to the external NFS server. The empty storageClassName: "" on the claim is what prevents GKE's default StorageClass from dynamically provisioning a new disk, so the claim binds to the static NFS PV instead.
Hi all, a quick question on host paths for persistent volumes.
I created a PV and PVC here:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
and I ran a sample pod:
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
I exec'd into the pod and created a file:
root@task-pv-pod:/# cd /usr/share/nginx/html
root@task-pv-pod:/usr/share/nginx/html# ls
tst.txt
However, when I go back to my host and try to ls the file, it's not appearing. Any idea why? My PV and PVC seem correct, as I can see that the claim has been bound.
ubuntu@ip-172-31-24-21:/home$ cd /mnt/data
ubuntu@ip-172-31-24-21:/mnt/data$ ls -lrt
total 0
A PersistentVolume (PV) is a Kubernetes resource with its own lifecycle, independent of the pod (see the PV documentation). When a PVC consumes storage from a PV, that storage typically lives in some other system, for example Azure Files, EBS, or a server exporting NFS. My point here is that there is no reason why the PV's data should live on the node.
If you want the data to be persisted on the node itself, use the hostPath option for PVs (check this link), though this is not a good practice for production.
First of all, you don't need to create a PV yourself if you are creating a PVC: with the right StorageClass, the PVC will dynamically provision a PV for you.
Second, hostPath is a peculiar volume type in the Kubernetes world: it is the one kind of volume that doesn't need a PV at all to be mounted in a Pod. So you could have created neither the PV nor the PVC, and a hostPath volume would still work just fine.
To make a test, delete your PV and PVC, and create your Pod like this:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-volume
  labels:
    app: nginx
spec:
  containers:
    - image: nginx
      name: nginx
      securityContext:
        privileged: true
      ports:
        - containerPort: 80
          name: nginx-http
      volumeMounts:
        - name: nginx
          mountPath: /root/nginx-volume # path in the pod
  volumes:
    - name: nginx
      hostPath:
        path: /var/test # path in the host machine
I know this is a confusing concept, but that's how it is.
I have a folder of TFRecords on a network that I want to expose to multiple pods. The folder has been exported via NFS.
I have tried creating a PersistentVolume, followed by a PersistentVolumeClaim. However, that just creates a folder inside the NFS mount, which I don't want. Instead, I want the Pod to access the folder with the TFRecords.
I have listed the manifests for the PV and PVC.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-tfrecord-pv
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /media/veracrypt1/
    server: 1.2.3.4
    readOnly: false
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-tfrecord-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-tfrecord
  resources:
    requests:
      storage: 1Gi
I figured it out. The issue was that I was looking at the problem the wrong way. I didn't need any provisioning at all; what was needed was to simply mount the NFS volume within the container:
kind: Pod
apiVersion: v1
metadata:
  name: pod-using-nfs
spec:
  containers:
    - name: app
      image: alpine
      volumeMounts:
        - name: data
          mountPath: /mnt/data
      command: ["/bin/sh"]
      args: ["-c", "sleep 500000"]
  volumes:
    - name: data
      nfs:
        server: 1.2.3.4
        path: /media/foo/DATA
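Since the goal was to expose the TFRecords to multiple pods, the same inline nfs: block can simply be repeated in each pod spec; for pods that only read the records, readOnly can be set as well. A sketch under that assumption (the pod name reader is made up):
kind: Pod
apiVersion: v1
metadata:
  name: reader
spec:
  containers:
    - name: app
      image: alpine
      command: ["/bin/sh"]
      args: ["-c", "sleep 500000"]
      volumeMounts:
        - name: data
          mountPath: /mnt/data
          readOnly: true  # this pod only reads the TFRecords
  volumes:
    - name: data
      nfs:
        server: 1.2.3.4
        path: /media/foo/DATA
        readOnly: true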
By following the Kubernetes guide I created a PV, PVC, and pod. I claimed only 10Mi out of a 20Mi PV, then copied 23Mi into the volume, which is more than the PV's capacity, but my pod is still running. Can anyone explain?
pv-volume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
pv-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
pv-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
You can probably keep copying data into the shared storage /mnt/data on your active node through any pod mount backed by it (such as /usr/share/nginx/html, which is shared between the node and the pod) until the node stops responding: the capacity declared on a hostPath PV and the storage request on the PVC are used only for matching claims to volumes, not enforced as quotas.
If you need to test this scenario under more realistic conditions, consider creating NFS persistent storage using GlusterFS or nfs-utils, or mounting a raw partition file made with dd.
In Minikube, nodes use ephemeral storage. Detailed information about node and pod resources can be found here:
https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable
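If you want the kubelet to actually enforce a cap on node-local scratch space, ephemeral-storage requests and limits on the container are the supported mechanism; note they cover emptyDir volumes, container logs and the writable layer, not hostPath mounts. A minimal sketch (the pod name and sizes are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: storage-limited-pod
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests:
          ephemeral-storage: "1Gi"
        limits:
          ephemeral-storage: "2Gi"   # kubelet evicts the pod if local ephemeral usage exceeds this
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      emptyDir:
        sizeLimit: "2Gi"   # emptyDir also supports its own size limit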
Hope this helps.
I have a 3-node Kubernetes cluster running on vagrant using the oracle Kubernetes vagrant boxes from http://github.com/oracle/vagrant-boxes.git.
I want to add a pod including an Oracle database and persist the data so that in case all nodes go down, I don't lose my data.
From how I read the Kubernetes documentation, persistent volumes cannot be created on a local filesystem, only on cloud-backed storage. I want to configure the persistent volume and persistent volume claim on my Vagrant boxes as a proof of concept and a training exercise for my Kubernetes learning.
Are there any examples of how I might go about creating the PV and PVC in this configuration?
As a complete Kubernetes newbie, any code samples would be greatly appreciated.
Use hostPath:
create PV:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
create PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
Use it in a pod:
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
documentation
This is just an example, for testing only.
For production use cases, you will need dynamic provisioning using a StorageClass for the PVC, so that the volume and its data remain available when the pod moves across the cluster.
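As a sketch of what that dynamic provisioning could look like, here is a StorageClass and PVC pair assuming the NFS CSI driver (provisioner nfs.csi.k8s.io) has been installed in the cluster; the server address, share path, and claim name are placeholders:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io   # assumes the csi-driver-nfs addon is installed
parameters:
  server: my.nfs.server       # placeholder NFS server
  share: /exported/path       # placeholder export path
reclaimPolicy: Retain
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oracle-db-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-csi
  resources:
    requests:
      storage: 10Gi
With this in place, creating the PVC provisions a matching PV automatically, and any pod that references the claim gets the same data regardless of which node it lands on.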