I'm trying to mount the Mongo /data directory onto an NFS volume on my Kubernetes master machine so that the Mongo data persists. The volume mounts successfully, but I can see only the configdb and db directories, not their subdirectories, and the data is not persisting in the volume. When I run kubectl describe <my_pv> it shows NFS (an NFS mount that lasts the lifetime of a pod).
Why is that so?
The Kubernetes docs state:
An nfs volume allows an existing NFS (Network File System) share to be
mounted into your pod. Unlike emptyDir, which is erased when a Pod is
removed, the contents of an nfs volume are preserved and the volume is
merely unmounted. This means that an NFS volume can be pre-populated
with data, and that data can be “handed off” between pods. NFS can be
mounted by multiple writers simultaneously.
I'm using kubernetes version 1.8.3.
mongo-deployment.yml:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: mongo
  labels:
    name: mongo
    app: mongo
spec:
  replicas: 3
  selector:
    matchLabels:
      name: mongo
      app: mongo
  template:
    metadata:
      name: mongo
      labels:
        name: mongo
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:3.4.9
          ports:
            - name: mongo
              containerPort: 27017
              protocol: TCP
          volumeMounts:
            - name: mongovol
              mountPath: "/data"
      volumes:
        - name: mongovol
          persistentVolumeClaim:
            claimName: mongo-pvc
mongo-pv.yml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
  labels:
    type: NFS
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: "/mongodata"
    server: 172.20.33.81
mongo-pvc.yml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
  storageClassName: slow
  selector:
    matchLabels:
      type: NFS
This is how I set up the NFS share on my Kubernetes master machine (a client-side sanity check follows the steps):
1) apt-get install nfs-kernel-server
2) mkdir /mongodata
3) chown nobody:nogroup -R /mongodata
4) vi /etc/exports
5) added the line "/mongodata *(rw,sync,all_squash,no_subtree_check)"
6) exportfs -ra
7) service nfs-kernel-server restart
8) showmount -e ----> shows the share
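(For reference, the export itself can be sanity-checked from any client machine like this; /mnt/nfstest is just a scratch mount point:)
$ sudo mkdir -p /mnt/nfstest
$ sudo mount -t nfs -o nfsvers=4.1 172.20.33.81:/mongodata /mnt/nfstest
$ touch /mnt/nfstest/write-test && ls -l /mnt/nfstest
$ sudo umount /mnt/nfstest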
I logged into a bash shell in my pod and can see the directory is mounted correctly, but the data is not persisting on my NFS server (the Kubernetes master machine).
Please help me see what I am doing wrong here.
It's possible that the pods don't have permission to create files and directories. You can exec into your pod and try to touch a file in the NFS share; if you get a permission error, you can loosen the permissions on the filesystem and in the exports file to allow write access.
It's also possible to specify a GID in the PV object to avoid permission-denied issues.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#access-control
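For example, the write test could be:
$ kubectl exec -it <mongo-pod> -- sh -c 'touch /data/db/write-test && ls -l /data/db'
And a sketch of the GID annotation described on that page (the value 1000 is just an example, not taken from the question):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
  annotations:
    # pods mounting this PV get this GID added to their supplemental groups
    pv.beta.kubernetes.io/gid: "1000"
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: "/mongodata"
    server: 172.20.33.81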
I see you did a chown nobody:nogroup -R /mongodata.
Make sure that the application in your pod runs as nobody:nogroup.
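One way to line those up (a sketch of the pod template spec only; 65534 is the usual nobody/nogroup ID, and whether the mongo image's entrypoint still works as a non-root user needs to be verified for your image) is a pod-level securityContext:
    spec:
      securityContext:
        runAsUser: 65534   # nobody
        runAsGroup: 65534  # nogroup
        fsGroup: 65534     # supplemental group applied to the mounted volume
      containers:
        - name: mongo
          image: mongo:3.4.9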
Add the parameter mountOptions: "vers=4.1" to your StorageClass config; this should fix your issue.
See this Github comment for more info:
https://github.com/kubernetes-incubator/external-storage/issues/223#issuecomment-344972640
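If you are using an NFS provisioner driven by a StorageClass, that could look roughly like this (the provisioner name is an assumption; with a statically created PV like the one above, the same option goes into the PV's own mountOptions list instead):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: example.com/nfs   # assumption: an external NFS provisioner is installed
mountOptions:
  - vers=4.1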
I have a read-only persistent volume that I'm trying to mount into a StatefulSet, but after making some changes to the program and re-creating the pods, the pods can no longer mount the volume.
PV yaml file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: foo-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadOnlyMany
  nfs:
    server: <ip>
    path: "/var/foo"
  claimRef:
    name: foo-pvc
    namespace: foo
PVC yaml file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: foo-pvc
  namespace: foo
spec:
  accessModes:
    - ReadOnlyMany
  storageClassName: ""
  volumeName: foo-pv
  resources:
    requests:
      storage: 2Gi
Statefulset yaml:
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
  selector:
    app: foo-app
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foo-statefulset
  namespace: foo
spec:
  selector:
    matchLabels:
      app: foo-app
  serviceName: foo-app
  replicas: 1
  template:
    metadata:
      labels:
        app: foo-app
    spec:
      serviceAccountName: foo-service-account
      containers:
        - name: fooContainer
          image: <image>
          imagePullPolicy: Always
          volumeMounts:
            - name: writer-data
              mountPath: <path>
            - name: nfs-objectd
              mountPath: <path>
      volumes:
        - name: nfs-foo
          persistentVolumeClaim:
            claimName: foo-pvc
  volumeClaimTemplates:
    - metadata:
        name: writer-data
      spec:
        accessModes: [ "ReadWriteMany" ]
        storageClassName: "foo-sc"
        resources:
          requests:
            storage: 2Gi
kubectl describe pod reports "Unable to attach or mount volumes: unmounted volumes=[nfs-foo]: timed out waiting for the condition". There is a firewall between the machine running Kubernetes and the NFS server, but the port has been unblocked and the folder has been exported for mounting on the NFS side. Running sudo mount -t nfs :/var/foo /var/foo successfully mounts the NFS share, so I don't understand why Kubernetes isn't able to mount it anymore. It has been stuck failing to mount for several days now. Is there any other way to debug this?
Thanks!
Based on the error “unable to attach or mount volumes …….timed out waiting for condition”, similar issues have been reported to the Product Team and this is a known issue. However, this error is mostly observed on preemptible/spot nodes when the node is preempted. In similar occurrences for other users, upgrading the control plane version temporarily resolved the issue on preemptible/spot nodes.
Also, if you are not using any preemptible/spot nodes in your cluster, this issue might have happened when an old node was replaced by a new node. If you are still facing the issue, try upgrading the control plane to the same version, i.e. you can execute the following command:
$ gcloud container clusters upgrade CLUSTER_NAME --master --zone ZONE --cluster-version VERSION
Another workaround is to remove the stale VolumeAttachment with the following command:
$ kubectl delete volumeattachment [volumeattachment_name]
After running the command and thus removing the stale VolumeAttachment, the pod should eventually retry the mount and come up. You can read more about this issue and its cause here.
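To find the name of the stale attachment in the first place, something like this should work:
$ kubectl get volumeattachments
$ kubectl describe volumeattachment [volumeattachment_name]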
This is the mongodb yaml file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      containers:
        - name: auth-mongo
          image: mongo
          volumeMounts:
            - mountPath: "/data/db/auth"
              name: auth-db-storage
      volumes:
        - name: auth-db-storage
          persistentVolumeClaim:
            claimName: mongo-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-srv
spec:
  selector:
    app: auth-mongo
  ports:
    - name: db
      protocol: TCP
      port: 27017
      targetPort: 27017
And this is the persistent volume file.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
  labels:
    type: local
spec:
  storageClassName: mongo
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/db"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  storageClassName: mongo
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
I'm running this on Ubuntu using kubectl and minikube v1.25.1.
When I run describe pod, I see this on the mongodb pod.
Volumes:
  auth-db-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mongo-pvc
    ReadOnly:   false
I have a similar setup for other pods to store files, and it's working fine. But with mongodb, every time I restart the pods, the data is lost. Can someone help me?
EDIT: I noticed that if I change the mongodb mountPath to /data/db, it works fine. But if I have multiple mongodb pods running on /data/db, they don't work. So do I need one persistent volume claim for EACH mongodb pod?
With these yaml files, you are mounting the /data/db directory on the minikube node to /data/db/auth in the auth-mongo pods.
First, you should change /data/db/auth to /data/db in your k8s deployment so that mongodb can read the database from its default location.
Even if you delete the deployment, the database will stay in the /data/db directory on the minikube node, and when a new pod from this deployment starts, mongodb will open this existing database (all data preserved).
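A minimal sketch of the corrected container section (everything else in your deployment stays the same):
      containers:
        - name: auth-mongo
          image: mongo
          volumeMounts:
            - mountPath: "/data/db"   # MongoDB's default data path, instead of /data/db/auth
              name: auth-db-storage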
Second, you can't run multiple mongodb pods like this just by scaling up the replicas in the deployment, because the mongodb in the second Pod can't use a database directory that is already in use by the first Pod. Mongodb will throw this error:
Unable to lock the lock file: /data/db/mongod.lock (Resource temporarily unavailable). Another mongod instance is already running on the /data/db directory
So the solution is either to keep only 1 replica in your deployment or, for example, to use the MongoDB packaged by Bitnami Helm chart.
https://github.com/bitnami/charts/tree/master/bitnami/mongodb
This chart bootstraps a MongoDB(®) deployment on a Kubernetes cluster using the Helm package manager.
$ helm install my-release bitnami/mongodb --set architecture=replicaset --set replicaCount=2
Understand MongoDB Architecture Options.
Also, check this link MongoDB Community Kubernetes Operator.
This is a Kubernetes Operator which deploys MongoDB Community into Kubernetes clusters.
On the host, everything in the mounted directory (/opt/testpod) is owned by uid=0 gid=0. I need those files to be owned by whatever the container decides, i.e. a different gid, to be able to write there. Resources I'm testing with:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
  labels:
    name: pv
spec:
  storageClassName: manual
  capacity:
    storage: 10Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/testpod"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  storageClassName: manual
  selector:
    matchLabels:
      name: pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: testpod
spec:
  nodeSelector:
    foo: bar
  securityContext:
    runAsUser: 500
    runAsGroup: 500
    fsGroup: 500
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: pvc
  containers:
    - name: testpod
      image: busybox
      command: [ "sh", "-c", "sleep 1h" ]
      volumeMounts:
        - name: vol
          mountPath: /data
After the pod is running, I kubectl exec into it, and ls -la /data shows everything still owned by gid=0. According to the Kubernetes docs, fsGroup is supposed to chown everything on pod start, but that doesn't happen. What am I doing wrong?
The hostPath type of PV doesn't support the security context; you have to be root for the volume to be writable. This is described well in this GitHub issue and in the docs about hostPath:
The directories created on the underlying hosts are only writable by root. You either need to run your process as root in a privileged
container or modify the file permissions on the host to be able to write to a
hostPath volume
You may also want to check this GitHub request describing why changing the permissions of a host directory is dangerous.
The workaround people describe as working is to grant your user sudo privileges, but that defeats the purpose of running the container as a non-root user.
The security context does appear to work well with an emptyDir volume (described in the k8s docs here).
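For comparison, a minimal sketch of the same security context against an emptyDir volume, where the group ownership does get applied:
apiVersion: v1
kind: Pod
metadata:
  name: testpod-emptydir
spec:
  securityContext:
    runAsUser: 500
    runAsGroup: 500
    fsGroup: 500          # emptyDir contents get this group and group-write permission
  volumes:
    - name: vol
      emptyDir: {}
  containers:
    - name: testpod
      image: busybox
      command: [ "sh", "-c", "sleep 1h" ]
      volumeMounts:
        - name: vol
          mountPath: /data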
I have created a deployment in the xyz-namespace namespace; it has a PVC. I can create the deployment and access it, and it works properly, but when I scale the deployment from the Kubernetes console, the new pod stays in the Pending state.
persistent_claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 5Gi
  namespace: xyz-namespace
and the deployment object is like below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db-service
  labels:
    k8s-app: db-service
    Name: db-service
    ServiceName: db-service
spec:
  selector:
    matchLabels:
      tier: data
      Name: db-service
      ServiceName: db-service
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: jenkins
        tier: data
        Name: db-service
        ServiceName: db-service
    spec:
      hostname: jenkins
      initContainers:
        - command:
            - "/bin/sh"
            - "-c"
            - chown -R 1000:1000 /var/jenkins_home
          image: busybox
          imagePullPolicy: Always
          name: jenkins-init
          volumeMounts:
            - name: jenkinsvol
              mountPath: "/var/jenkins_home"
      containers:
        - image: jenkins/jenkins:lts
          name: jenkins
          ports:
            - containerPort: 8080
              name: jenkins1
            - containerPort: 8080
              name: jenkins2
          volumeMounts:
            - name: jenkinsvol
              mountPath: "/var/jenkins_home"
      volumes:
        - name: jenkinsvol
          persistentVolumeClaim:
            claimName: jenkins
      nodeSelector:
        nodegroup: xyz-testing
  namespace: xyz-namespace
  replicas: 1
The deployment is created fine and works as well, but
when I try to scale the deployment from the console, the new pod gets stuck in the Pending state.
If I remove the persistent volume and then scale, it works fine, but with the persistent volume it does not.
When using the standard storage class, I assume you are using the default GCEPersistentDisk volume plugin. In this case you cannot choose the access modes at all, as they are already determined by the storage provider (GCP in your case, since you are using GCE persistent disks); these disks only support the ReadWriteOnce (RWO) and ReadOnlyMany (ROX) access modes. If you try to create a ReadWriteMany (RWX) PV, it will never reach a successful state (your case, when the PVC is set with accessModes: ReadWriteMany).
Also, if any pod tries to attach a ReadWriteOnce volume on some other node, you'll get the following error:
FailedMount Failed to attach volume "pv0001" on node "xyz" with: googleapi: Error 400: The disk resource 'abc' is already being used by 'xyz'
The references above are from this article.
As mentioned here and here, NFS is the easiest way to get ReadWriteMany, since all nodes need to be able to read and write simultaneously to the storage device you are using for your pods.
So I would suggest you use an NFS storage option. In case you want to test it, here is a good guide by Google using its Filestore solution, which provides fully managed NFS file servers.
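If you go that route, a ReadWriteMany PV/PVC pair backed by NFS could look roughly like this (the server address, export path, and PV name are placeholders):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: <nfs-server-ip>      # e.g. the Filestore instance IP
    path: "/exported/jenkins"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins
  namespace: xyz-namespace
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""           # bind to the statically created PV above
  volumeName: jenkins-nfs-pv
  resources:
    requests:
      storage: 5Gi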
Your PersistentVolumeClaim is set to:
accessModes:
- ReadWriteOnce
But it should be set to:
accessModes:
- ReadWriteMany
The ReadWriteOnce access mode means that
the volume can be mounted as read-write by a single node [1].
When you scale your deployment, the new replicas are most likely scheduled onto different nodes, therefore you need ReadWriteMany.
[1] https://kubernetes.io/docs/concepts/storage/persistent-volumes/
I am trying to run a deployment for mongo using minikube. I have created persistent storage using the following configuration:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mongo-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  claimRef:
    namespace: default
    name: mongo-claim
  hostPath:
    path: "/test"
The "/test" folder is being mounted using minikube mount <local_path>:/test
Then I created a PV Claim using the following configuration:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
Finally, I am trying to create a Service and Deployment with the following configuration:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        tier: backend
        app: mongo
    spec:
      containers:
        - name: mongo
          image: "mongo"
          envFrom:
            - configMapRef:
                name: mongo-config
          ports:
            - name: mongo-port
              containerPort: 27017
          volumeMounts:
            - name: mongo-storage
              mountPath: "/data/db"
      volumes:
        - name: mongo-storage
          persistentVolumeClaim:
            claimName: mongo-claim
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  selector:
    app: mongo
  ports:
    - protocol: TCP
      port: 27017
      targetPort: mongo-port
The container quits with the error Changing ownership of '/data/db', Input/Output error.
Question 1) Who is trying to change the ownership of the internal directory of the container? Is it the PV Claim?
Question 2) Why is the above culprit trying to mess with the permissions of the MongoDB container's default storage path?
It looks like this is more about the VirtualBox driver for host-mapped folders than about k8s itself.
In my scenario:
I created a folder on my OS X machine,
mapped that folder into minikube with minikube mount data-storage/:/data-storage,
created a PersistentVolume pointing to the folder inside minikube,
created a PersistentVolumeClaim pointing to the PV above,
and tried to start a single simple mongodb instance using the PVC above,
and got constantly restarting pods with these logs:
Fatal Assertion and fsync: Invalid Argument
I fought with this for a few hours and finally found this:
https://github.com/mvertes/docker-alpine-mongo/issues/1
which basically reports issues with the VirtualBox driver when the folder is mapped to the host.
Once I mapped the PersistentVolume to /data inside minikube, my pod came up like a charm.
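For reference, a minimal sketch of what that working PV looked like (the exact path under /data and the size are just examples):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/mongo"   # a directory inside the minikube VM itself, not a host-mapped folder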
Since minikube is a development environment, in my case I decided there was no reason to stay stuck on this.
UPDATE:
I wish I had found this out earlier; it would have saved me some time!
Docker CE Desktop has built-in Kubernetes!
All you need to do is go to the preferences and turn it on; that's it, no need for VirtualBox or minikube at all.
And the best thing is that shared folders (on the File Sharing tab) are available to Kubernetes; I checked this with mongodb inside k8s.
And it is way faster than minikube (which, by the way, was failing all the time on my OS X).
Hope this saves someone some time.