Setting up MongoDB with Kubernetes on Google Cloud - mongodb

I'm trying to deploy MongoDB on Google Cloud using Kubernetes, but I'm running into some difficulties. It says:
error converting YAML to JSON: yaml: line 29: found character that cannot start any token
However, line 29 only has a space indenting it, as shown below.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
          volumes:
            - name: mongo-persistent-storage
            persistentVolumeClaim:
              claimName: mongo-pv-claim
Could somebody point me to where I'm going wrong, or to more resources on the matter? I've only recently started using Kubernetes, so I'd appreciate any information.

You had two issues with formatting your YAML file:
volumes should be at the same level as containers.
persistentVolumeClaim should be nested under - name: mongo-persistent-storage in the volumes section.
The corrected YAML file is:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
      volumes:
        - name: mongo-persistent-storage
          persistentVolumeClaim:
            claimName: mongo-pv-claim
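As a general tip for catching indentation problems like this before they hit the cluster, a client-side dry run reports the same YAML-to-JSON conversion errors. A minimal example, assuming the manifest is saved as mongo.yaml (older kubectl versions use --dry-run instead of --dry-run=client):

# validates the manifest locally without creating anything
kubectl apply --dry-run=client -f mongo.yaml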

Related

Can't authenticate MongoDb on Kubernetes cluster

When I try to connect to MongoDB running on a Kubernetes cluster with mongo -u admin -p password -authenticationDatabase admin, I get this error:
{"mechanism":"SCRAM-SHA-1","speculative":false,"principalName":"admin","authenticationDatabase":"admin","remote":"192.168.65.3:47486","extraInfo":{},"error":"UserNotFound: Could not find user \"admin\" for db \"admin\""}}
Below is the YAML file I'm using to create the MongoDB service.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb-statefulset
spec:
  serviceName: "mongodb-service"
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: admin
            - name: MONGO_INITDB_ROOT_PASSWORD
              value: password
          volumeMounts:
            - mountPath: /data/db
              name: data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  type: LoadBalancer
  selector:
    app: mongodb
  ports:
    - port: 27017
      targetPort: 27017
I've tried everything and it still doesn't work. I appreciate any help.
Try something like the following, using mongod as the command:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mongo
  name: mongodb-service
spec:
  type: NodePort
  ports:
    - name: "http"
      port: 27017
      protocol: TCP
      targetPort: 27017
  selector:
    service: mongo
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        service: mongo
      name: mongodb-service
    spec:
      containers:
        - args:
            - mongod
            - --smallfiles
          image: mongo:latest
          name: mongo
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: admin
            - name: MONGO_INITDB_ROOT_PASSWORD
              value: admin
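Also note that the official mongo image only creates the MONGO_INITDB_ROOT_USERNAME / MONGO_INITDB_ROOT_PASSWORD user when the container starts with an empty /data/db, so a volume that already contains data from an earlier run keeps its old (or empty) user list. A quick way to test the credentials from inside the cluster (the pod name below is just an example based on the StatefulSet above; on mongo 6.0+ images the shell is mongosh instead of mongo):

# run the auth check from inside the pod itself
kubectl exec -it mongodb-statefulset-0 -- \
  mongo -u admin -p password --authenticationDatabase admin \
  --eval "db.runCommand({ connectionStatus: 1 })"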

error: error parsing yaml, converting YAML to JSON: yaml: line 20: did not find expected '-' indicator - Kubernetes

Hello folks, I'm trying to deploy a Node.js application that uses MongoDB in Kubernetes. To accomplish this, I've created a folder named k8s containing two files, deployment_nodejs.yaml and deployment_mongo.yaml, and then run the command kubectl apply -f k8s.
The deployment_nodejs.yaml is created successfully, but the other one fails. The error says the following: 'error: error parsing k8s/deployment_mongo.yaml: error converting YAML to JSON: yaml: line 20: did not find expected '-' indicator'.
I suspect the error might be in the ports section, although I can't see why, because as far as I can tell I haven't done anything wrong. I hope you can help me fix this bug.
Contents of the deployment_mongo.yaml file:
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  selector:
    app: mongo
  ports:
    - port: 27017
      targetPort: 27017
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    specs:
      containers:
        - name: mongo
          image: 3.6.23-xenial
          ports:
            - containerPort: 27017
        volumeMounts:
          - name: storage
            mountPath: /data/db
        volumes:
          - name: storage
        PersistentVolumeClaim:
        claimName: mongo-pvc
I tried generating the resources from the YAML given; below are the issues found:
apiVersion is missing on the first line.
volumes and volumeMounts are not properly indented in the Deployment YAML.
PersistentVolumeClaim is capitalized; it should be persistentVolumeClaim.
claimName is not indented under it.
(Also note the image should probably be mongo:3.6.23-xenial rather than 3.6.23-xenial, or the image pull will fail.)
Fix these, or use the manifest below and you should be good to go.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  selector:
    app: mongo
  ports:
    - port: 27017
      targetPort: 27017
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:3.6.23-xenial
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: storage
              mountPath: /data/db
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: mongo-pvc
On line 20 you have specs: instead of spec:
In the deployment, change specs: to spec:
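A handy way to double-check field names and nesting (for example spec vs specs, or persistentVolumeClaim vs PersistentVolumeClaim) is kubectl explain, which prints the schema for any path in a resource:

# shows the valid children of the pod template spec, including "volumes"
kubectl explain deployment.spec.template.spec
# shows the fields of a pod volume, including "persistentVolumeClaim"
kubectl explain deployment.spec.template.spec.volumes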

Kubernetes stateful MongoDB: what is the right connection string, or how do I connect to MongoDB in a running StatefulSet instance that also has a Service attached?

Here is my YAML config (the PV, StatefulSet, and Service all get created fine, no issues there). I tried a bunch of solutions for the Kubernetes Mongo connection string, but none of them worked.
Kubernetes version (minikube):
1.20.1
Storage for config:
NFS (working fine, tested)
OS:
Linux Mint 20
YAML CONFIG:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: auth-pv
spec:
  capacity:
    storage: 250Mi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  nfs:
    path: /nfs/auth
    server: 192.168.10.104
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    role: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  serviceName: mongo
  replicas: 1
  template:
    metadata:
      labels:
        app: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: mongo-persistent-storage
      spec:
        storageClassName: manual
        accessModes: ["ReadWriteMany"]
        resources:
          requests:
            storage: 250Mi
I have found a few issues in your configuration file. In your Service manifest you use:
selector:
  role: mongo
But in your StatefulSet pod template you are using:
labels:
  app: mongo
One more thing: you should set clusterIP: None to make it a headless service, which is recommended for StatefulSets if you want to access the database using a DNS name.
To all viewers: the working YAML is below.
The right connection string is: mongodb://mongo:27017/<dbname>
apiVersion: v1
kind: PersistentVolume
metadata:
  name: auth-pv
spec:
  capacity:
    storage: 250Mi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  nfs:
    path: /nfs/auth
    server: 192.168.10.104
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    app: mongo
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  serviceName: mongo
  replicas: 1
  template:
    metadata:
      labels:
        app: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: mongo-persistent-storage
      spec:
        storageClassName: manual
        accessModes: ["ReadWriteMany"]
        resources:
          requests:
            storage: 250Mi
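Side note: with a headless service, each StatefulSet pod also gets a stable per-pod DNS name of the form <pod>.<service>.<namespace>.svc.cluster.local. A quick way to test the connection from inside the cluster (assuming the default namespace and using test as an example database name):

# spin up a throwaway client pod and connect to the first StatefulSet replica
kubectl run mongo-client --rm -it --restart=Never --image=mongo -- \
  mongo "mongodb://mongo-0.mongo.default.svc.cluster.local:27017/test"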

Kubernetes: Mongo container not creating

I am unable to create a mongo container using a Deployment. The persistent volume and persistent volume claim are working fine.
> kubectl logs -f mongo-depl-dc764fb6d-qqdxh
Error from server (BadRequest): container "mongo" in pod "mongo-depl-dc764fb6d-qqdxh" is waiting to start: CreateContainerError
Persistent Volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "E:\\Linux\\mongo"
Persistent Volume Claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 0.5Gi
MongoDB Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      volumes:
        - name: mongo-volume
          persistentVolumeClaim:
            claimName: mongo-pvc
      containers:
        - name: mongo
          image: mongo:3.6.5-jessie
          ports:
            - name: mongo
              containerPort: 27017
          volumeMounts:
            - name: mongo-volume
              mountPath: /data/db
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-srv
spec:
  type: NodePort
  selector:
    app: mongo
  ports:
    - name: mongo
      protocol: TCP
      port: 27017
      targetPort: 27017
I would really appreciate any help. I am running Kubernetes in a dev environment on a Windows machine.
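A first step for a CreateContainerError like this is to look at the pod events, which usually show the underlying reason:

# the Events section at the bottom of the output explains why the container cannot be created
kubectl describe pod mongo-depl-dc764fb6d-qqdxh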

K8s pod has unbound immediate PersistentVolumeClaims - mongoDB

I'm trying to deploy MongoDB in my Kubernetes cluster (Google Cloud Platform). Right now I'm focused on a local minikube version of the deployment. The thing is that I'm getting an error like this:
pod has unbound immediate PersistentVolumeClaims
These are my config files which set up MongoDB inside minikube:
Service file:
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
StatefulSet file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      role: mongo
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo,environment=test"
  volumeClaimTemplates:
    - metadata:
        name: mongo-persistent-storage
        annotations:
          volume.beta.kubernetes.io/storage-class: "fast"
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 2Gi
Storage class file:
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
What am I missing here?
I think the problem is that you are trying to apply
provisioner: kubernetes.io/gce-pd
This will not work locally, as it is intended for GCE Persistent Disks.
For minikube, you can create a hostPath PV/PVC instead.
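On minikube specifically, another option is to rely on its built-in default StorageClass (named standard, backed by a hostPath provisioner) instead of defining a gce-pd class; a minimal sketch of that change to the volume claim template above:

  volumeClaimTemplates:
    - metadata:
        name: mongo-persistent-storage
      spec:
        storageClassName: standard   # minikube's built-in default StorageClass
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 2Gi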
I work with minikube during feature development and I am running MongoDB as well. I would recommend using hostPath when working with minikube; here is my volume definition:
volume
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
name: "blog-repoflow-resources-data-volume"
namespace: repoflow-blog-namespace
labels:
service: "resources-data-service"
fsOwner: "1001"
fsGroup: "0"
fsMode: "775"
spec:
capacity:
storage: "500Mi"
accessModes:
- "ReadWriteOnce"
storageClassName: local-storage
hostPath:
path: /home/docker/production/blog.repoflow.com/volumes/blog-repoflow-resources-data-volume
The complete cluster is up and running and open source: repository.
Check also the StatefulSet, volume claim, and Service definitions.
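For completeness, a claim that binds to a PV like the one above might look roughly like this (the claim name here is made up; the real definitions live in the repository mentioned above):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: blog-repoflow-resources-data-claim
  namespace: repoflow-blog-namespace
spec:
  storageClassName: local-storage   # must match the PV's storageClassName for a static bind
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi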