I want to set up an in-cluster NFS server in Kubernetes to provide shares for my pods (nginx webroot etc.).
In theory there should be a PersistentVolume, a PersistentVolumeClaim, and the NFS server, which, as I understand it, is a Deployment.
To use the PV and PVC I need to specify the NFS server's IP address, which I don't know, because it is generated automatically when I expose the NFS server with a Service.
The same problem appears when I deploy the nfs-server Deployment itself, because I am using the PVC as its volume. But I can't deploy the PV and PVC without giving them the NFS server's IP.
I think I am lost, maybe you can help me.
PV
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-nfs-pv1
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 1Gi
accessModes:
- ReadWriteMany
nfs:
path: "/exports/www"
server: SERVER_NAME:PORT
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv-nfs-pv1
labels:
type: local
spec:
storageClassName: manual
accessModes:
- ReadWriteMany
resources:
requests:
storage: 500Mi
NFS-Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nfs-server
spec:
replicas: 1
selector:
matchLabels:
role: nfs-server
template:
metadata:
labels:
role: nfs-server
spec:
containers:
- name: nfs-server
image: gcr.io/google_containers/volume-nfs:0.8
ports:
- name: nfs
containerPort: 2049
- name: mountd
containerPort: 20048
- name: rpcbind
containerPort: 111
securityContext:
privileged: true
volumeMounts:
- mountPath: /exports/www
name: pv-nfs-pv1
volumes:
- name: pv-nfs-pv1
gcePersistentDisk:
pdName: pv-nfs-pv1
# fsType: ext4
1) You create the NFS-server deployment.
2) You expose the NFS-server deployment by creating a Service, say "nfs-server", exposing TCP port 2049 (assuming you use NFSv4).
3) You create the PV with the following information, so you never need the generated IP (see the sketch below):
nfs:
  path: /exports/www
  server: nfs-server
4) You create the PVC and mount it wherever you need it.
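A minimal sketch of steps 2 and 3 (names are illustrative; with NFSv4 a single TCP port is enough, and the PV can reference the Service instead of a generated IP):
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  selector:
    role: nfs-server       # must match the labels on the NFS server pods
  ports:
    - name: nfs
      port: 2049
      protocol: TCP
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-pv1
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /exports/www
    # Service DNS name (assuming the default namespace); if the kubelet on
    # your nodes cannot resolve cluster DNS, put the Service's ClusterIP here.
    server: nfs-server.default.svc.cluster.local
Note there is no port in the server field: the nfs volume source only takes a host, which is why SERVER_NAME:PORT from the question cannot work and why exposing the standard port 2049 matters.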
Related
I want to mount a folder that is located on my local hard disk (e.g. /home/logon/volumes/algovens/ids/app_data/) into a pod:
/home/logon/volumes/algovens/ids/app_data/ should be mounted at /app/app_data/ in the pod.
I create a persistent volume and a persistent volume claim, then reference the PVC in my pod .yaml file.
I apply all three YAML files for the PV, PVC, and pod to the Kubernetes cluster.
When I get access to the container with an interactive bash session, I can see the folder I configured to mount from my local hard disk (/app/app_data/ in the pod), but it's still empty.
However, I do have files and folders in that directory on my local hard disk (/home/logon/volumes/algovens/ids/app_data/).
My configuration files:
identityserver-pv.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
name: identityserver-pv
labels:
name: identityserver-pv
spec:
storageClassName: local-storage
capacity:
storage: 1Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
local:
path: /home/logon/volumes/algovens/ids/app_data/
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minikube
identityserver-pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: identityserver-pvc
spec:
storageClassName: local-storage
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
identityserver-deployment.yaml
# Deployment for identityserver
apiVersion: apps/v1
kind: Deployment
metadata:
name: identityserver-deployment
labels:
app: identityserver-deployment
spec:
replicas: 1
selector:
matchLabels:
app: identityserver
template:
metadata:
labels:
app: identityserver # pod label
spec:
volumes:
- name: identityserver-pvc
persistentVolumeClaim:
claimName: identityserver-pvc
containers:
- name: identityserver-container
image: localhost:5000/algovens-identityserver:v1.0
ports:
- containerPort: 80
volumeMounts:
- name: identityserver-pvc
mountPath: "/app/app_data/"
env:
- name: ALGOVENS_CORS_ORIGINS
valueFrom:
configMapKeyRef:
name: identityserver-config
key: ALGOVENS_CORS_ORIGINS
---
# Service for identityserver
apiVersion: v1
kind: Service
metadata:
name: identityserver-service
spec:
type: NodePort # External service. default value is ClusterIP
selector:
app: identityserver
ports:
- protocol: TCP
port: 80
targetPort: 80
nodePort: 30100
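A note on why the mounted directory can come up empty with this setup: a local PV path refers to the filesystem of the node selected by nodeAffinity, and with minikube that node is a VM or container, not the host machine, so /home/logon/... on the host is invisible to it unless it is mounted into the node first. A sketch, assuming the default minikube profile:
minikube mount /home/logon/volumes/algovens/ids/app_data/:/home/logon/volumes/algovens/ids/app_data/
Keep that command running while the pod is up (or start the cluster with minikube start --mount --mount-string="<host-path>:<node-path>"); only then does the local PV path resolve to the populated host directory.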
Hi, I'm trying to create a persistent volume for my MongoDB on Kubernetes on Google Cloud Platform, and it's stuck in Pending.
Here is my manifest:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: account-mongo-depl
labels:
app: account-mongo
spec:
replicas: 1
serviceName: 'account-mongo'
selector:
matchLabels:
app: account-mongo
template:
metadata:
labels:
app: account-mongo
spec:
volumes:
- name: account-mongo-storage
persistentVolumeClaim:
claimName: account-db-bs-claim
containers:
- name: account-mongo
image: mongo
volumeMounts:
- mountPath: '/data/db'
name: account-mongo-storage
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: account-db-bs-claim
spec:
storageClassName: do-block-storage
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: account-mongo-srv
spec:
type: ClusterIP
selector:
app: account-mongo
ports:
- name: account-db
protocol: TCP
port: 27017
targetPort: 27017
Here is my list of pods.
My pod is stuck in Pending until it fails.
If you're on GKE, you should already have dynamic provisioning set up.
storageClassName: do-block-storage
accessModes:
- ReadWriteOnce
"do-block-storage"? Were you running on Digital Ocean previously? You should be able remove the storageClassName line and use the default provisioner Google provides.
For example, here's a snippet from one of my own statefulsets
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
## Specify storage class to use nfs (requires nfs provisioner. Leave unset for dynamic provisioning)
#storageClassName: "nfs"
resources:
requests:
storage: 10G
I do not specify a storage class here, so it uses the default one, which provisions a persistent disk on GCP:
$ kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
standard (default) kubernetes.io/gce-pd Delete Immediate false 2y245d
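Applied to the claim above, that means something like this (a sketch; with storageClassName omitted, GKE's default standard class provisions a GCE persistent disk automatically):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: account-db-bs-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi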
Could someone please help me and point out what configuration I should be using for my use case?
I'm building a development k8s cluster, and one of the steps is to generate security files (private keys) in a number of pods during deployment (let's say for a simple setup I have 6 pods that each build their own security keys). I need access to all these files, and they must persist after the pod goes down.
I'm now trying to figure out how to set this up locally for internal testing. From what I understand, local PersistentVolumes only allow a 1:1 binding with PersistentVolumeClaims, so I would have to create a separate PersistentVolume and PersistentVolumeClaim for each pod that gets configured. I would prefer to avoid this and use one PersistentVolume for all.
Could someone be so kind as to help me or point me to the right setup that should be used?
-- Update: 26/11/2020
So this is my setup:
apiVersion: apps/v1
kind: Deployment
metadata:
name: hlf-nfs--server
spec:
replicas: 1
selector:
matchLabels:
app: hlf-nfs--server
template:
metadata:
labels:
app: hlf-nfs--server
spec:
containers:
- name: hlf-nfs--server
image: itsthenetwork/nfs-server-alpine:12
ports:
- containerPort: 2049
name: tcp
- containerPort: 111
name: udp
securityContext:
privileged: true
env:
- name: SHARED_DIRECTORY
value: "/opt/k8s-pods/data"
volumeMounts:
- name: pvc
mountPath: /opt/k8s-pods/data
volumes:
- name: pvc
persistentVolumeClaim:
claimName: shared-nfs-pvc
apiVersion: v1
kind: Service
metadata:
name: hlf-nfs--server
labels:
name: hlf-nfs--server
spec:
type: ClusterIP
selector:
app: hlf-nfs--server
ports:
- name: tcp-2049
port: 2049
protocol: TCP
- name: udp-111
port: 111
protocol: UDP
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: shared-nfs-pvc
spec:
accessModes:
- ReadWriteMany
storageClassName: nfs
resources:
requests:
storage: 1Gi
These three are created at once; after that, I read the IP of the Service and add it to the last one:
apiVersion: v1
kind: PersistentVolume
metadata:
name: shared-nfs-pv
spec:
capacity:
storage: 100Gi
accessModes:
- ReadWriteMany
nfs:
path: /opt/k8s-pods/data
server: <<-- IP from `kubectl get svc -l name=hlf-nfs--server`
The problem I'm hitting and trying to resolve is that the PVC does not get bound to the PV, and the deployment stays stuck waiting to become ready.
Did I miss anything?
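Worth checking in the manifests above: the PVC asks for storageClassName: nfs, while the hand-written PV declares no storageClassName at all, and Kubernetes only binds a claim to a volume whose class matches, so as written these two can never bind. A minimal sketch of the fix is to give the PV the same class:
spec:
  storageClassName: nfs
  capacity:
    storage: 100Gi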
You can create an NFS server and have the pods use NFS volumes. Here is the manifest to create such an in-cluster NFS server (make sure you modify STORAGE_CLASS and the other variables below):
export NFS_NAME="nfs-share"
export NFS_SIZE="10Gi"
export NFS_IMAGE="itsthenetwork/nfs-server-alpine:12"
export STORAGE_CLASS="thin-disk"
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
name: ${NFS_NAME}
labels:
app.kubernetes.io/name: nfs-server
app.kubernetes.io/instance: ${NFS_NAME}
spec:
ports:
- name: tcp-2049
port: 2049
protocol: TCP
- name: udp-111
port: 111
protocol: UDP
selector:
app.kubernetes.io/name: nfs-server
app.kubernetes.io/instance: ${NFS_NAME}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
app.kubernetes.io/name: nfs-server
app.kubernetes.io/instance: ${NFS_NAME}
name: ${NFS_NAME}
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
      storage: ${NFS_SIZE}
storageClassName: $STORAGE_CLASS
volumeMode: Filesystem
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ${NFS_NAME}
labels:
app.kubernetes.io/name: nfs-server
app.kubernetes.io/instance: ${NFS_NAME}
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: nfs-server
app.kubernetes.io/instance: ${NFS_NAME}
template:
metadata:
labels:
app.kubernetes.io/name: nfs-server
app.kubernetes.io/instance: ${NFS_NAME}
spec:
containers:
- name: nfs-server
image: ${NFS_IMAGE}
ports:
- containerPort: 2049
name: tcp
- containerPort: 111
name: udp
securityContext:
privileged: true
env:
- name: SHARED_DIRECTORY
value: /nfsshare
volumeMounts:
- name: pvc
mountPath: /nfsshare
volumes:
- name: pvc
persistentVolumeClaim:
claimName: ${NFS_NAME}
EOF
Below is an example of how to point the other pods at this NFS server. In particular, refer to the volumes section at the end of the YAML:
export NFS_NAME="nfs-share"
export NFS_IP=$(kubectl get --template={{.spec.clusterIP}} service/$NFS_NAME)
kubectl apply -f - <<EOF
kind: Deployment
apiVersion: apps/v1
metadata:
name: apache
labels:
app: apache
spec:
replicas: 2
selector:
matchLabels:
app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
        - name: apache
          image: httpd   # the official Apache image on Docker Hub is "httpd"
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /var/www/html/
              name: nfs-vol
              subPath: html
      volumes:
        - name: nfs-vol
          nfs:
            server: $NFS_IP
            path: /
EOF
It is correct that there is a 1:1 relation between a PersistentVolumeClaim and a PersistentVolume.
However, Pods running on the same Node can concurrently mount the same volume, i.e. use the same PersistentVolumeClaim.
If you use Minikube for local development, you only have one node, so you can use the same PersistentVolumeClaim for all pods. Since you want to use different files for each app, you could use a unique directory for each app in that shared volume, as sketched below.
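For example, a sketch of that layout (shared-pvc and the directory names are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: app-a
spec:
  containers:
    - name: app-a
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared
          mountPath: /keys        # where this app reads/writes its keys
          subPath: app-a          # unique directory per app inside the shared volume
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-pvc     # the single claim every app reuses
Each pod sees only its own subdirectory at /keys, while everything still lives in one PersistentVolume.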
So finally, I did it by using a dynamic provisioner.
I installed stable/nfs-server-provisioner with Helm. With proper configuration, it managed to create PVs under a StorageClass named nfs, to which my PVCs are able to bind :)
helm install stable/nfs-server-provisioner --name nfs-provisioner -f nfs_provisioner.yaml
The nfs_provisioner.yaml is as follows:
persistence:
enabled: true
storageClass: "standard"
size: 20Gi
storageClass:
# Name of the storage class that will be managed by the provisioner
name: nfs
defaultClass: true
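With the provisioner in place, no hand-made PV is needed anymore; a claim against the nfs class (a sketch) gets its volume created automatically:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc
spec:
  storageClassName: nfs      # the class managed by nfs-server-provisioner above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi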
I want to integrate a MinIO object store into my minikube cluster.
I use the Dockerfile from the MinIO Git repo.
I also added the persistent volume with the claim:
kind: PersistentVolume
apiVersion: v1
metadata:
name: minio-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
hostPath:
path: "/mnt/data/minio"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: minio-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
For the MinIO deployment I have:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: minio
spec:
selector:
matchLabels:
app: minio
role: master
tier: backend
replicas: 1
template:
metadata:
labels:
app: minio
role: master
tier: backend
spec:
imagePullSecrets:
- name: regcred
containers:
- name: minio
image: <secret Registry >
env:
- name: MINIO_ACCESS_KEY
value: akey
- name: MINIO_SECRET_KEY
value: skey
ports:
- containerPort: 9000
volumeMounts:
- name: data
mountPath: /data/ob
volumes:
- name: data
persistentVolumeClaim:
claimName: minio-pv-claim
For the Service, I opened up an external IP just for debugging:
apiVersion: v1
kind: Service
metadata:
name: minio
labels:
app: minio
role: master
tier: backend
spec:
ports:
- port: 9000
targetPort: 9000
externalIPs:
- 192.168.99.101
selector:
app: minio
role: master
tier: backend
But when I start the deployment I get the error message: ERROR Unable to initialize backend: The disk size is less than the minimum threshold.
I assumed that 3 GB should be enough. How can I solve this issue? Moreover, now that I have tried to delete my persistent volume, it is stuck in Terminating status.
How can I run MinIO in a minikube cluster?
I don't think there is enough storage in /mnt/data inside minikube. Try /mnt/sda1 or /data. Better yet, go inside minikube and check the storage available. To get into minikube you can do minikube ssh.
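For example (a sketch):
$ minikube ssh
$ df -h /mnt/data /mnt/sda1 /data   # compare free space under the candidate paths
$ exit
Whichever path shows enough free space is the one to put in the hostPath of the PV.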
We have a setup in GKE with two different clusters. One cluster runs an nfs-server, and on that cluster we have a persistent volume which points to the server. This PV is then mounted in a pod running on that cluster. The second cluster also has a PV and a pod that should mount the same NFS volume. Here is where the problem occurs: when we point to the server, it does not work using the nfs-server ClusterIP address. This is understandable, but I wonder how best to achieve this.
The setup is basically this:
Persistent Volume and Persistent Volume Claim used by NFS
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs-pv
spec:
capacity:
storage: 20Gi
storageClassName: manual
accessModes:
- ReadWriteMany
gcePersistentDisk:
pdName: files
fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs-pvc
spec:
storageClassName: manual
accessModes:
- ReadWriteMany
resources:
requests:
storage: 20Gi
NFS server deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nfs-server
spec:
replicas: 1
selector:
matchLabels:
role: nfs-server
template:
metadata:
labels:
role: nfs-server
spec:
containers:
- name: nfs-server
image: gcr.io/google_containers/volume-nfs:0.8
ports:
- name: nfs
containerPort: 2049
- name: mountd
containerPort: 20048
- name: rpcbind
containerPort: 111
securityContext:
privileged: true
volumeMounts:
- mountPath: /exports
name: mypvc
volumes:
- name: mypvc
persistentVolumeClaim:
claimName: nfs-pvc
NFS-server service
apiVersion: v1
kind: Service
metadata:
name: nfs-server
spec:
ports:
- name: nfs
port: 2049
- name: mountd
port: 20048
- name: rpcbind
port: 111
selector:
role: nfs-server
Persistent volume and Persistent Volume Claim used by the pods:
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs
spec:
capacity:
storage: 20Gi
storageClassName: manual
accessModes:
- ReadWriteMany
nfs:
server: 10.4.0.20
path: "/"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: nfs
spec:
storageClassName: manual
accessModes:
- ReadWriteMany
resources:
requests:
storage: 20Gi
Part of deployment file for pod mounting the nfs
volumes:
- name: files
persistentVolumeClaim:
claimName: nfs
Output of kubectl get pv and kubectl get pvc
user@HP-EliteBook:~/Downloads$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs 100Gi RWX Retain Bound default/nfs manual 286d
nfs-pv 100Gi RWO Retain Bound default/nfs-pvc manual 286d
user@HP-EliteBook:~/Downloads$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs Bound nfs 100Gi RWX manual 286d
nfs-pvc Bound nfs-pv 100Gi RWO manual 286d
The IP in the PV used by the pods is the problem. The pod on the same cluster can connect to it, but the pod on the other cluster cannot. I could use the actual pod IP from the other cluster, but the pod IP changes with every deploy, so that is not a workable solution. What is the best way to get around this problem? I only want the second cluster to have access to the NFS server, without, for example, opening it to the world.
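One common approach on GKE (a sketch, not something from this thread) is to put an internal load balancer in front of the nfs-server Service. It gets a stable private IP that is reachable from the second cluster as long as both clusters live in the same VPC (or in peered VPCs), without exposing anything to the internet:
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
  annotations:
    # GKE-specific; on older clusters the annotation is
    # cloud.google.com/load-balancer-type: "Internal"
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server
The address shown under EXTERNAL-IP in kubectl get svc nfs-server is then a stable, private value to put in the second cluster's PV as nfs.server.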