UPDATE: In the end this has nothing to do with Azure File Share; the same thing happens with Azure Disk, NFS, and HostPath.
I have mounted an Azure File Share volume to a MongoDB pod with the mountPath /data. Everything seems to work as expected. When I exec into the pod, I can see the mongo data in /data/db. But on the Azure File Share I can only see the folders /db and /dbconfig, not the files. Any idea? I have granted permission 0777 on the volume.
These are my YAML files.
StorageClass
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=999
  - gid=999
parameters:
  storageAccount: ACCOUNT_NAME
  skuName: Standard_LRS
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 20Gi
Mongo deployment file
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: mongo
  labels:
    app: mongo
  namespace: development
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: "mongo"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 27017
              protocol: TCP
          volumeMounts:
            - mountPath: /data
              name: mongovolume
              subPath: mongo
      imagePullSecrets:
        - name: secret-acr
      volumes:
        - name: mongovolume
          persistentVolumeClaim:
            claimName: azurefile
Kubernetes version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.6", GitCommit:"a21fdbd78dde8f5447f5f6c331f7eb6f80bd684e", GitTreeState:"clean", BuildDate:"2018-07-26T10:04:08Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Solved the problem by changing the mongo image to docker.io/bitnami/mongodb:4.0.2-debian-9. With this image, the mongo data is written to the file share and the data is now persistent.
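For reference, a minimal sketch of that change in the Deployment above; only the image line differs from the original manifest. Note that the Bitnami image stores its data under /bitnami/mongodb by default, so the mountPath may need adjusting to match.
    spec:
      containers:
        - name: mongo
          # Only the image was swapped, per the fix described above.
          image: docker.io/bitnami/mongodb:4.0.2-debian-9
          imagePullPolicy: IfNotPresent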
This setup works with neither Azure Files nor Azure Disks.
I worked on a project where I faced a similar issue and contacted Azure support, but they did not have a specific resolution for it.
Root cause provided by Azure Support: the data/files that do not remain persistent are the ones owned by the mongodb user.
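Given that root cause, one hedged workaround to try with the official mongo image is to align the pod's user with the uid/gid mount options already set in the azurefile StorageClass above. This is only a sketch based on the assumption that the mongodb user in the official image is uid/gid 999; it is not something Azure support provided.
    spec:
      securityContext:
        # Assumption: 999/999 matches the mongodb user in the official image and
        # the uid=999/gid=999 mount options of the azurefile StorageClass above.
        runAsUser: 999
        fsGroup: 999
      containers:
        - name: mongo
          image: "mongo"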
This is the MongoDB YAML file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      containers:
        - name: auth-mongo
          image: mongo
          volumeMounts:
            - mountPath: "/data/db/auth"
              name: auth-db-storage
      volumes:
        - name: auth-db-storage
          persistentVolumeClaim:
            claimName: mongo-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-srv
spec:
  selector:
    app: auth-mongo
  ports:
    - name: db
      protocol: TCP
      port: 27017
      targetPort: 27017
And this is the persistent volume file.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
  labels:
    type: local
spec:
  storageClassName: mongo
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/db"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  storageClassName: mongo
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
I'm running this on Ubuntu using kubectl and minikube v1.25.1.
When I run kubectl describe pod on the mongodb pod, I see this:
Volumes:
  auth-db-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mongo-pvc
    ReadOnly:   false
I have a similar setup for other pods to store files, and it's working fine. But with mongodb, every time I restart the pods, the data is lost. Can someone help me?
EDIT: I noticed that if I change the mongodb mountPath to /data/db, it works fine. But if I have multiple mongodb pods running on /data/db, they don't work. Do I need one persistent volume claim for EACH mongodb pod?
When using these YAML files, you are mounting the /data/db directory on the minikube node into /data/db/auth in the auth-mongo pod.
First, change /data/db/auth to /data/db in your deployment so that mongodb reads the database from its default db location (see the sketch below).
Even if you delete the deployment, the db will stay in the /data/db directory on the minikube node, and after running a new pod from this deployment, mongodb will open the existing db (all data preserved).
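A minimal sketch of that change in the deployment above; only the mountPath differs from the original:
      containers:
        - name: auth-mongo
          image: mongo
          volumeMounts:
            - mountPath: "/data/db"   # default mongod dbPath instead of /data/db/auth
              name: auth-db-storage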
Second, you can't run multiple mongodb pods like this just by scaling the replicas in the deployment, because the second mongodb instance in another Pod cannot open a db directory that is already in use by the first Pod. Mongodb will throw this error:
Unable to lock the lock file: /data/db/mongod.lock (Resource temporarily unavailable). Another mongod instance is already running on the /data/db directory
So the solution is either to use only 1 replica in your deployment or, for example, to use the MongoDB packaged by Bitnami Helm chart.
https://github.com/bitnami/charts/tree/master/bitnami/mongodb
This chart bootstraps a MongoDB(®) deployment on a Kubernetes cluster using the Helm package manager.
$ helm install my-release bitnami/mongodb --set architecture=replicaset --set replicaCount=2
Understand MongoDB Architecture Options.
Also, check this link MongoDB Community Kubernetes Operator.
This is a Kubernetes Operator which deploys MongoDB Community into Kubernetes clusters.
I am trying to use the VolumeSnapshot backup mechanism, promoted to beta in Kubernetes 1.17.
Here is my scenario:
Create the nginx deployment and the PVC used by it
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          volumeMounts:
            - name: my-pvc
              mountPath: /root/test
      volumes:
        - name: my-pvc
          persistentVolumeClaim:
            claimName: nginx-pvc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  finalizers: null
  labels:
    name: nginx-pvc
  name: nginx-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: premium-rwo
Exec into the running nginx container, cd into the PVC mounted path and create some files
▶ k exec -it nginx-deployment-84765795c-7hz5n bash
root@nginx-deployment-84765795c-7hz5n:/# cd /root/test
root@nginx-deployment-84765795c-7hz5n:~/test# touch {1..10}.txt
root@nginx-deployment-84765795c-7hz5n:~/test# ls
1.txt 10.txt 2.txt 3.txt 4.txt 5.txt 6.txt 7.txt 8.txt 9.txt lost+found
root@nginx-deployment-84765795c-7hz5n:~/test#
Create the following VolumeSnapshot using as source the nginx-pvc
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  namespace: default
  name: nginx-volume-snapshot
spec:
  volumeSnapshotClassName: pd-retain-vsc
  source:
    persistentVolumeClaimName: nginx-pvc
The VolumeSnapshotClass used is the following
apiVersion: snapshot.storage.k8s.io/v1beta1
deletionPolicy: Retain
driver: pd.csi.storage.gke.io
kind: VolumeSnapshotClass
metadata:
  creationTimestamp: "2020-09-25T09:10:16Z"
  generation: 1
  name: pd-retain-vsc
and wait until it becomes readyToUse: true
apiVersion: v1
items:
- apiVersion: snapshot.storage.k8s.io/v1beta1
  kind: VolumeSnapshot
  metadata:
    creationTimestamp: "2020-11-04T09:38:00Z"
    finalizers:
    - snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection
    generation: 1
    name: nginx-volume-snapshot
    namespace: default
    resourceVersion: "34170857"
    selfLink: /apis/snapshot.storage.k8s.io/v1beta1/namespaces/default/volumesnapshots/nginx-volume-snapshot
    uid: ce1991f8-a44c-456f-8b2a-2e12f8df28fc
  spec:
    source:
      persistentVolumeClaimName: nginx-pvc
    volumeSnapshotClassName: pd-retain-vsc
  status:
    boundVolumeSnapshotContentName: snapcontent-ce1991f8-a44c-456f-8b2a-2e12f8df28fc
    creationTime: "2020-11-04T09:38:02Z"
    readyToUse: true
    restoreSize: 8Gi
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Delete the nginx deployment and the initial PVC
▶ k delete pvc,deploy --all
persistentvolumeclaim "nginx-pvc" deleted
deployment.apps "nginx-deployment" deleted
Create a new PVC, using the previously created VolumeSnapshot as its dataSource
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  finalizers: null
  labels:
    name: nginx-pvc-restored
  name: nginx-pvc-restored
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  dataSource:
    name: nginx-volume-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
▶ k create -f nginx-pvc-restored.yaml
persistentvolumeclaim/nginx-pvc-restored created
▶ k get pvc
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-pvc-restored   Bound    pvc-56d0a898-9f65-464f-8abf-90fa0a58a048   8Gi        RWO            standard       39s
Set the name of the new (restored) PVC to the nginx deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          volumeMounts:
            - name: my-pvc
              mountPath: /root/test
      volumes:
        - name: my-pvc
          persistentVolumeClaim:
            claimName: nginx-pvc-restored
and create the Deployment again
▶ k create -f nginx-deployment-restored.yaml
deployment.apps/nginx-deployment created
cd into the PVC mounted directory. It should contain the previously created files, but it's empty:
▶ k exec -it nginx-deployment-67c7584d4b-l7qrq bash
root@nginx-deployment-67c7584d4b-l7qrq:/# cd /root/test
root@nginx-deployment-67c7584d4b-l7qrq:~/test# ls
lost+found
root@nginx-deployment-67c7584d4b-l7qrq:~/test#
▶ k version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.12", GitCommit:"5ec472285121eb6c451e515bc0a7201413872fa3", GitTreeState:"clean", BuildDate:"2020-09-16T13:39:51Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.12-gke.1504", GitCommit:"17061f5bd4ee34f72c9281d49f94b4f3ac31ac25", GitTreeState:"clean", BuildDate:"2020-10-19T17:00:22Z", GoVersion:"go1.13.15b4", Compiler:"gc", Platform:"linux/amd64"}
This is a community wiki answer posted to add clarity to the current problem. Feel free to expand on it.
As mentioned by @pkaramol, this is an ongoing issue registered under the following thread:
Creating an intree PVC with datasource should fail #96225
What happened: In clusters that have intree drivers as the default
storageclass, if you try to create a PVC with snapshot data source and
forget to put the csi storageclass in it, then an empty PVC will be
provisioned using the default storageclass.
What you expected to happen: PVC creation should not proceed and
instead have an event with an incompatible error, similar to how we
check proper csi driver in the csi provisioner.
This issue has not yet been resolved at the moment of writing this answer.
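In practice, until that issue is fixed, the workaround is to set the CSI storage class explicitly on the restored PVC so that the snapshot dataSource is actually honored. A hedged sketch based on the manifests above, reusing the premium-rwo class from the original PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc-restored
  namespace: default
spec:
  # Without an explicit CSI class, the default in-tree class ("standard" above)
  # provisions an empty volume and the dataSource is silently ignored.
  storageClassName: premium-rwo
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  dataSource:
    name: nginx-volume-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io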
I deployed Minio on Kubernetes on an Ubuntu desktop. It works fine, except that whenever I reboot the machine, everything that was stored in Minio mysteriously disappears: if I create several buckets with files in them, I come back to a completely blank slate after the reboot, with the buckets and all their files gone.
When I set up Minio, I created a persistent volume in Kubernetes which mounts to a folder (/mnt/minio/minio - I have a 4 TB HDD mounted at /mnt/minio with a folder named minio inside that). I noticed that this folder seems to be empty even when I store stuff in Minio, so perhaps Minio is ignoring the persistent volume and using the container storage? However, I don't know why this would be happening; I have both a PV and a PV claim, and kubectl shows that they are bound to each other.
Below are the yaml files I applied to deploy my minio installation:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: minio-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/minio/minio"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim
  labels:
    app: minio-storage-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 99Gi
apiVersion: apps/v1 # for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio-deployment
spec:
  selector:
    matchLabels:
      app: minio
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # Label is used as selector in the service.
        app: minio
    spec:
      # Refer to the PVC created earlier
      volumes:
        - name: storage
          persistentVolumeClaim:
            # Name of the PVC created earlier
            claimName: minio-pv-claim
      containers:
        - name: minio
          # Pulls the default Minio image from Docker Hub
          image: minio/minio:latest
          args:
            - server
            - /storage
          env:
            # Minio access key and secret key
            - name: MINIO_ACCESS_KEY
              value: "minio"
            - name: MINIO_SECRET_KEY
              value: "minio123"
          ports:
            - containerPort: 9000
              hostPort: 9000
          # Mount the volume into the pod
          volumeMounts:
            - name: storage # must match the volume name, above
              mountPath: "/mnt/minio/minio"
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: LoadBalancer
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app: minio
Minio is serving data from the container's /storage directory (the server argument), while your persistent volume is mounted at /mnt/minio/minio, so nothing Minio writes ever lands on the volume. Point the server at a path inside the mount:
args:
  - server
  - /mnt/minio/minio/storage
But consider deploying it with a StatefulSet, so that when your pod restarts it reattaches the same volume and retains everything from the previous pod.
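For example, a minimal StatefulSet sketch for the same Minio container; the names and the manual storage class are carried over from the manifests above, the access-key environment variables are omitted, and this is only a starting point rather than a drop-in replacement:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: minio
spec:
  serviceName: minio-service
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
        - name: minio
          image: minio/minio:latest
          args:
            - server
            - /storage
          ports:
            - containerPort: 9000
          volumeMounts:
            - name: storage
              mountPath: /storage   # mount the claim where the server actually writes
  volumeClaimTemplates:
    - metadata:
        name: storage
      spec:
        storageClassName: manual
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 99Gi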
We have successfully created the pods, services and replication controllers according to our project requirements. Now we are planning to set up persistent storage in AWS using Kubernetes. I have created the YAML file to create an EBS volume in AWS; it's working fine as expected. I am able to claim the volume and successfully mount it to my pod (this is for a single replica only).
I am able to create the file successfully. The volume is also created, but my Pod goes into Pending state while the volume still shows as available in AWS. I am not able to see any error logs there.
Storage file:
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: mongo-ssd
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
Main file:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web2
spec:
  selector:
    matchLabels:
      app: mongodb
  serviceName: "mongodb"
  replicas: 2
  template:
    metadata:
      labels:
        app: mongodb
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
        - image: mongo
          name: mongodb
          ports:
            - name: web2
              containerPort: 27017
              hostPort: 27017
          volumeMounts:
            - mountPath: "/opt/couchbase/var"
              name: mypd1
  volumeClaimTemplates:
    - metadata:
        name: mypd1
        annotations:
          volume.alpha.kubernetes.io/storage-class: mongo-ssd
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Gi
Kubectl version:
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T10:09:24Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:23:29Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
I can see you have used hostPort in your container. In this case, if you do not have more than one node in your cluster, one Pod will remain Pending because it will not fit on any node.
containers:
  - image: mongo
    name: mongodb
    ports:
      - name: web2
        containerPort: 27017
        hostPort: 27017
I am getting this error when I describe my pending Pod:
Type     Reason            Age                From               Message
----     ------            ----               ----               -------
Warning  FailedScheduling  27s (x7 over 58s)  default-scheduler  No nodes are available that match all of the predicates: PodFitsHostPorts (1).
The hostPort in your container binds to a port on the node. Suppose you use hostPort 10733 but another pod on that node is already using the port; your pod cannot use it and stays Pending. And if you have 2 replicas and both pods are scheduled on the same node, the second cannot start either.
So you need to pick a hostPort that you can be sure no one else is using.
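If you do not actually need to reach mongod directly on the node, a simpler option is to drop hostPort entirely and expose the StatefulSet through a Service instead; a hedged sketch of the container ports without the hostPort:
      containers:
        - image: mongo
          name: mongodb
          ports:
            - name: web2
              containerPort: 27017   # no hostPort, so two replicas can schedule onto the same node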
In this repository https://github.com/mappedinn/kubernetes-nfs-volume-on-gke I am trying to share a volume through an NFS service on GKE. The NFS file sharing works if a hard-coded IP address is used.
But, in my opinion, it would be better to use a DNS name instead of a hard-coded IP address.
Below is the declaration of the NFS service being used for sharing a volume in Google Cloud Platform:
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server
Below is the definition of the PersistentVolume with hard coded IP address:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wp01-pv-data
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.247.248.43 # with hard coded IP, it works
    path: "/"
Below is the definition of the PersistentVolume with DNS name:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wp01-pv-data
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-service.default.svc.cluster.local # with DNS, it does not work
    path: "/"
I am using https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ to get the DNS name of the service. Is there anything I missed?
Thanks
The problem is DNS resolution on the node itself. Mounting the NFS share into the pod is the job of the kubelet running on the node, so DNS resolution happens according to /etc/resolv.conf on the node itself as well. Adding a nameserver <your_kubedns_service_ip> entry to the node's /etc/resolv.conf could suffice, but it can become a chicken-and-egg problem in some corner cases.
I solved the problem by upgrading my GKE cluster (and kubectl) from version 1.7.11-gke.1 to 1.8.6-gke.0.
kubectl version
# Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:34:11Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
# Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.6-gke.0", GitCommit:"ee9a97661f14ee0b1ca31d6edd30480c89347c79", GitTreeState:"clean", BuildDate:"2018-01-05T03:36:42Z", GoVersion:"go1.8.3b4", Compiler:"gc", Platform:"linux/amd64"}
Actually, this is the final version of the YAML files:
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  # clusterIP: 10.3.240.20
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server
  # type: "LoadBalancer"
and
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXED: Use internal DNS name
    server: nfs-server.default.svc.cluster.local
    path: "/"