K8s pod has unbound immediate PersistentVolumeClaims - MongoDB

I'm trying to deploy MongoDB in my Kubernetes cluster (Google Cloud Platform). Right now I'm focused on a local minikube version of the deployment. The problem is that I'm getting an error like this:
pod has unbound immediate PersistentVolumeClaims
These are the config files that set up MongoDB inside minikube:
Service file:
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
Stateful set file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      role: mongo
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--smallfiles"
        - "--noprealloc"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi
Storage class file:
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
What am I missing here?

I think the problem is that you are trying to apply
provisioner: kubernetes.io/gce-pd
This won't work locally, as it is intended for GCE Persistent Disks.
For minikube, you can create a hostPath PV/PVC instead.
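As a minimal sketch (assuming a default minikube setup where the built-in storage-provisioner addon is enabled; its provisioner name, k8s.io/minikube-hostpath, is what minikube's own standard StorageClass uses), you could redefine the fast StorageClass locally so the PVC from the volumeClaimTemplates can bind:
# Hypothetical local variant of the "fast" StorageClass for minikube.
# Assumes minikube's built-in hostPath dynamic provisioner is running.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: k8s.io/minikube-hostpath
Alternatively, keep the gce-pd StorageClass for the GKE deployment and create a hostPath PersistentVolume manually, as the next answer shows.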

I work with minikube during feature development and run MongoDB as well. I would recommend using hostPath volumes when working with minikube. Here is my volume definition:
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
name: "blog-repoflow-resources-data-volume"
namespace: repoflow-blog-namespace
labels:
service: "resources-data-service"
fsOwner: "1001"
fsGroup: "0"
fsMode: "775"
spec:
capacity:
storage: "500Mi"
accessModes:
- "ReadWriteOnce"
storageClassName: local-storage
hostPath:
path: /home/docker/production/blog.repoflow.com/volumes/blog-repoflow-resources-data-volume
The complete cluster is up and running and open source (see the linked repository).
Also check the StatefulSet, volume claim, and service definitions; a claim sketch follows below.
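For reference, a PersistentVolumeClaim that binds to a hostPath volume like the one above could look as follows (a minimal sketch; the claim name is hypothetical, and the storageClassName, access mode, and size must match the PV for static binding to happen):
# Hypothetical claim that binds to the hostPath PV above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: resources-data-claim
  namespace: repoflow-blog-namespace
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi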

Related

Persistent Storage - Pi K8s Cluster - NFS version transport protocol not supported

I have a Raspberry Pi cluster consisting of 1 master and 20 nodes:
192.168.0.92 (Master)
192.168.0.112 (Node w/ USB Drive)
I mounted a USB drive at /media/hdd and set the label purpose=volume on that node.
Using the following I was able to set up an NFS server:
apiVersion: v1
kind: Namespace
metadata:
  name: storage
  labels:
    app: storage
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
  namespace: storage
spec:
  capacity:
    storage: 3.5Ti
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /media/hdd
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: purpose
          operator: In
          values:
          - volume
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-claim
  namespace: storage
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 3Ti
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
  namespace: storage
  labels:
    app: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-server
  template:
    metadata:
      labels:
        app: nfs-server
      name: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: itsthenetwork/nfs-server-alpine:11-arm
        env:
        - name: SHARED_DIRECTORY
          value: /exports
        ports:
        - name: nfs
          containerPort: 2049
        - name: mountd
          containerPort: 20048
        - name: rpcbind
          containerPort: 111
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /exports
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: local-claim
      nodeSelector:
        purpose: volume
---
kind: Service
apiVersion: v1
metadata:
  name: nfs-server
  namespace: storage
spec:
  ports:
  - name: nfs
    port: 2049
  - name: mountd
    port: 20048
  - name: rpcbind
    port: 111
  clusterIP: 10.96.0.11
  selector:
    app: nfs-server
And I was even able to make a persistent volume with this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-nfs-volume
  labels:
    directory: mysql
spec:
  capacity:
    storage: 200Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  nfs:
    path: /mysql
    server: 10.244.19.5
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-nfs-claim
spec:
  storageClassName: slow
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  selector:
    matchLabels:
      directory: mysql
But when I try to use the volume like so:
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
  - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-nfs-claim
I get a requested NFS version or transport protocol is not supported error.
When you see the mount.nfs: requested NFS version or transport protocol is not supported error, there are three main reasons:
NFS services are not running on NFS server
NFS utils not installed on the client
NFS service hung on NFS server
According to this article, there are three solutions to resolve the error.
First one:
Log in to the NFS server and check the NFS service status. If service nfs status reports that the NFS services are stopped on the server, start them with service nfs start, then retry mounting the NFS share on the client.
Second one:
If the first solution doesn't resolve your problem, try installing the nfs-utils package on your server (and make sure it is also installed on the client, per the second reason above).
Third one:
Open the file /etc/sysconfig/nfs and check the parameters below:
# Turn off v4 protocol support
#RPCNFSDARGS="-N 4"
# Turn off v2 and v3 protocol support
#RPCNFSDARGS="-N 2 -N 3"
Removing the hash from an RPCNFSDARGS line turns off support for the listed NFS versions, so clients requesting those versions won't be able to mount the share. Make sure the version your client needs has not been disabled this way; adjust the file if necessary, restart the NFS server service, and then retry the mount from the client.
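On the Kubernetes side, you can also pin the NFS version the client requests via mountOptions on the PersistentVolume, which often sidesteps version negotiation problems. A sketch of the mysql NFS PV with that field added (the nfsvers value is an assumption; set it to whatever version the server actually exports):
# Hypothetical variant of the mysql NFS PV with an explicit NFS version.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-nfs-volume
  labels:
    directory: mysql
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  mountOptions:
    - nfsvers=4.1   # assumption: the server exports NFSv4.1
  nfs:
    path: /mysql
    server: 10.244.19.5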

Kubernetes stateful MongoDB: What is the right connection string, and how do I connect to a MongoDB instance running in a StatefulSet with a Service attached?

Here is my YAML config (the PV, StatefulSet, and Service all get created fine, no issues there). I tried a bunch of connection strings for Kubernetes Mongo, but none of them worked.
Kubernetes version (minikube): 1.20.1
Storage for config: NFS (working fine, tested)
OS: Linux Mint 20
YAML config:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: auth-pv
spec:
  capacity:
    storage: 250Mi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  nfs:
    path: /nfs/auth
    server: 192.168.10.104
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    role: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  serviceName: mongo
  replicas: 1
  template:
    metadata:
      labels:
        app: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
    spec:
      storageClassName: manual
      accessModes: ["ReadWriteMany"]
      resources:
        requests:
          storage: 250Mi
I have found a few issues in your configuration. In your Service manifest you use
selector:
  role: mongo
but in your StatefulSet pod template you are using
labels:
  app: mongo
so the Service never selects the pod. One more thing: set clusterIP: None to make the Service headless, which is recommended for StatefulSets if you want to access the database by DNS name.
To all viewers: the working YAML is below, and the correct connection string is mongodb://mongo:27017/<dbname> (a client sketch follows after the YAML).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: auth-pv
spec:
  capacity:
    storage: 250Mi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  nfs:
    path: /nfs/auth
    server: 192.168.10.104
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    app: mongo
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  serviceName: mongo
  replicas: 1
  template:
    metadata:
      labels:
        app: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
    spec:
      storageClassName: manual
      accessModes: ["ReadWriteMany"]
      resources:
        requests:
          storage: 250Mi
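To illustrate how a client inside the cluster would use that connection string, here is a minimal sketch of a hypothetical application Deployment (the image, app name, and MONGO_URI variable name are placeholders; with the headless Service, the single pod is also reachable directly as mongo-0.mongo):
# Hypothetical client Deployment; image and env var name are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-api
  template:
    metadata:
      labels:
        app: auth-api
    spec:
      containers:
      - name: auth-api
        image: example/auth-api:latest   # placeholder image
        env:
        - name: MONGO_URI
          value: "mongodb://mongo:27017/auth"   # DNS name of the headless Service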

Kubernetes: Mongo container not creating

I am unable to create a Mongo container using a Deployment. The PersistentVolume and PersistentVolumeClaim are working fine.
> kubectl logs -f mongo-depl-dc764fb6d-qqdxh
Error from server (BadRequest): container "mongo" in pod "mongo-depl-dc764fb6d-qqdxh" is waiting to start: CreateContainerError
Persistent Volume :
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "E:\\Linux\\mongo"
Persistent Volume Claim :
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 0.5Gi
Mongodb Deployment :
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      volumes:
      - name: mongo-volume
        persistentVolumeClaim:
          claimName: mongo-pvc
      containers:
      - name: mongo
        image: mongo:3.6.5-jessie
        ports:
        - name: mongo
          containerPort: 27017
        volumeMounts:
        - name: mongo-volume
          mountPath: /data/db
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-srv
spec:
  type: NodePort
  selector:
    app: mongo
  ports:
  - name: mongo
    protocol: TCP
    port: 27017
    targetPort: 27017
I would really appreciate the help. I am running Kubernetes in a dev environment on a Windows machine.

How to define a PVC for a specific path on a single node in Kubernetes

I am running a local k8s cluster and defining the PV as hostPath for the mysql pods.
All the configuration details are below.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.7
        name: mysql
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
The problem I am facing: since the mysql pod runs in the k8s cluster, when it is deleted and recreated it may be scheduled on any node, but the mysql hostPath is only mounted on one specific node. Is it a good idea to pin mysql to that node, or are there other options? Please share any ideas.
You have the following choices (see the local PersistentVolume sketch after this list):
Use a node selector or node affinity to ensure the pod gets scheduled on the node where the mount exists, OR
Use local persistent volumes; they are supported on Kubernetes 1.14 and above.
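As a minimal sketch of the second option, a local PersistentVolume pinned to the node that actually holds /mnt/data (the node name kube-node-1 is a placeholder); when the claim bound to this PV is used, the scheduler places the mysql pod on that node automatically:
# Hypothetical local PV bound to the node that has /mnt/data.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-local-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  local:
    path: /mnt/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kube-node-1   # placeholder: the node where /mnt/data exists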
Why are you using a PVC and a PV? Actually, for hostPath you don't even need to create the PV object yourself; the claim just gets one.
You should use a StatefulSet if you want a re-created pod to get the same storage it was using before (its state).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: "mysql"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: mysql-persistent-storage
    spec:
      accessModes: ["ReadWriteOnce"]
      # storageClassName: "standard"
      resources:
        requests:
          storage: 2Gi
This StatefulSet fails for me, but that is a MySQL-specific issue; as a reference it should serve.

How do I run MinIO object storage in a minikube cluster?

I want to integrate MinIO object storage into my minikube cluster.
I use the Dockerfile from the MinIO Git repo.
I also added a persistent volume with a claim:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: minio-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/data/minio"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: minio-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
For the MinIO deployment I have:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: minio
spec:
  selector:
    matchLabels:
      app: minio
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: minio
        role: master
        tier: backend
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: minio
        image: <secret Registry >
        env:
        - name: MINIO_ACCESS_KEY
          value: akey
        - name: MINIO_SECRET_KEY
          value: skey
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: data
          mountPath: /data/ob
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: minio-pv-claim
For the service I opened up an external IP just for debugging:
apiVersion: v1
kind: Service
metadata:
  name: minio
  labels:
    app: minio
    role: master
    tier: backend
spec:
  ports:
  - port: 9000
    targetPort: 9000
  externalIPs:
  - 192.168.99.101
  selector:
    app: minio
    role: master
    tier: backend
But when I start the deployment I get the error message ERROR Unable to initialize backend: The disk size is less than the minimum threshold.
I assumed 3Gi should be enough. How can I solve this issue? Moreover, now that I have tried to delete my persistent volume, it is stuck in the Terminating status.
How can I run MinIO in a minikube cluster?
I don't think there is enough storage in /mnt/data inside minikube. Try /mnt/sda1 or /data instead. Better yet, go inside minikube and check the available storage; to get into minikube you can run minikube ssh.
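A minimal sketch of the same PV pointed at /data, as suggested above (whether that location actually has enough free space is an assumption you should verify, e.g. with df -h from inside minikube ssh):
# Same hostPath PV, re-pointed at /data inside the minikube VM.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: minio-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/data/minio"   # assumption: /data has more free space than /mnt/data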