Error when creating mongodb user in kubernetes environment - mongodb

I am trying to create a MongoDB user along with a StatefulSet. Here is my .yaml file:
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  type: NodePort
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    name: mongo
---
apiVersion: v1
kind: Secret
metadata:
  name: admin-secret
  # corresponds to user.spec.passwordSecretKeyRef.name
type: Opaque
stringData:
  password: pass1
  # corresponds to user.spec.passwordSecretKeyRef.key
---
apiVersion: mongodb.com/v1
kind: MongoDBUser
metadata:
  name: admin
spec:
  passwordSecretKeyRef:
    name: admin-secret
    # Match to metadata.name of the User Secret
    key: password
  username: admin
  db: "admin"
  mongodbResourceRef:
    name: mongo
    # Match to MongoDB resource using authentication
  roles:
    - db: "admin"
      name: "clusterAdmin"
    - db: "admin"
      name: "userAdminAnyDatabase"
    - db: "admin"
      name: "readWrite"
    - db: "admin"
      name: "userAdminAnyDatabase"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 2
  selector:
    matchLabels:
      name: mongo
  template:
    metadata:
      labels:
        name: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        # - envFrom:
        #     - secretRef:
        #         name: mongo-secret
        - image: mongo
          name: mongodb
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--bind_ip"
            - 0.0.0.0
          ports:
            - containerPort: 27017
Earlier I used the secret to create a mongo user:
...
spec:
  containers:
    - envFrom:
        - secretRef:
            name: mongo-secret
...
but once I added spec.template.spec.containers.command to the StatefulSet, this approach no longer works. I then added the Secret and MongoDBUser, but I started getting this error:
unable to recognize "mongo.yaml": no matches for kind "MongoDBUser" in version "mongodb.com/v1"
How can I automatically create a MongoDB user when creating a StatefulSet with a few replicas in Kubernetes?

One of the resources in your yaml file refers to a kind that doesn't exist in your cluster.
You can check this by running the command kubectl api-resources | grep mongo -i
Specifically, it's the resource of kind MongoDBUser. This API resource type is part of the MongoDB Enterprise Kubernetes Operator.
You haven't indicated whether you are using this operator in your cluster, but the error you're getting implies the CRDs for the operator are not installed and so cannot be used.
The MongoDB Enterprise Kubernetes Operator is a paid enterprise package. If you don't have access to this enterprise package from MongoDB, you can also install the community edition yourself, either by setting up all the resources yourself or by using Helm to install it as a package. Using Helm makes managing the resources significantly easier, especially with regards to configuration, upgrades, re-installation or uninstalling. The existing Helm charts are open source and also allow for running MongoDB as a standalone instance, a replica set or a sharded cluster.
For reference, Bitnami provides a MongoDB standalone or replica set Helm chart which seems to be on the latest MongoDB version and is maintained regularly. There is also this one, but it's on an older version of MongoDB and doesn't seem to be getting much attention.
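As a rough sketch of the Helm route (the repo URL, chart name and value names below come from the Bitnami chart's documentation and may differ between chart versions, so treat them as assumptions to check against the chart's README; the release name "my-mongo" and the password are purely illustrative):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
# Install community MongoDB as a 2-member replica set with a root password
helm install my-mongo bitnami/mongodb \
  --set architecture=replicaset \
  --set replicaCount=2 \
  --set auth.rootPassword=pass1
Users can then be created with normal MongoDB tooling or the chart's auth values, rather than via the MongoDBUser CRD.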

Related

How do I get the value of MONGODB_URI in a kubernetes deploy?

I have a working MongoDB deployment on minikube and I have managed to create a database, a collection, as well as a user (the same user referenced in the yaml) to do backups on that database.
In the yaml file for my backup cron job I need to specify a MONGODB_URI parameter, and quite frankly I am at a loss as to the exact convention for getting this (where exactly do you get the value).
As a check, I have done a kubectl exec -it <pod_name> so that I can verify beforehand whether I am going to put in the correct URI. At the prompt that follows, I tried the following:
1.
mongosh mongodb://aaa:abc123@mongodb-service.default.svc.cluster.local:27017/plaformdb/?directConnection=true
Not working, I get this error:
Current Mongosh Log ID: 62938b50880f139dad4b19c4
Connecting to: mongodb://mongodb-service.default.svc.cluster.local:27017/platformdb/?directConnection=true&appName=mongosh+1.4.2
MongoServerSelectionError: Server selection timed out after 30000 ms
2.
mongosh mongodb://aaa:abc123@mongodb-service.svc.cluster.local:27017/platformdb?directConnection=true
Not working either, I get this error:
MongoNetworkError: getaddrinfo ENOTFOUND mongodb-service.svc.cluster.local
3.
mongosh mongodb://aaa:abc123@mongodb-deployment-757ffdfdr5-thuiy.mongodb-service.default.svc.cluster.local:27017/platformdb
Not working, I get this error:
Current Mongosh Log ID: 62938c93829ft678h88990
Connecting to: mongodb://mongodb-deployment-757ccdd6y8-thhhh.mongodb-service.default.svc.cluster.local:27017/platformdb?directConnection=true&appName=mongosh+1.4.2
MongoNetworkError: getaddrinfo ENOTFOUND mongodb-deployment-757ffdd5f5-tpzll.mongodb-service.default.svc.cluster.local
The last form, however, is the recommended way according to the docs.
Expected:
I should be able to log in to the database once I run that command.
This is how my deployment is defined:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret-amended
                  key: mongo-root-username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret-amended
                  key: mongo-root-password
          volumeMounts:
            - mountPath: /data/db
              name: mongodb-vol
      volumes:
        - name: mongodb-vol
          persistentVolumeClaim:
            claimName: mongodb-claim
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
And I need to specify MONGODB_URI in this cron job:
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: mongodump-backup
spec:
  schedule: "0 */6 * * *" # Cron job every 6 hours
  startingDeadlineSeconds: 60
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 2
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: mongodump-backup
              image: golide/backupsdemo
              imagePullPolicy: "IfNotPresent"
              env:
                - name: DB_NAME
                  value: "microfunctions"
                - name: MONGODB_URI
                  value: mongodb://aaa:abc123@host-mongodb:27017/dbname
              volumeMounts:
                - mountPath: "/mongodump"
                  name: mongodump-volume
              command: ['sh', '-c', "./dump.sh"]
          restartPolicy: OnFailure
          volumes:
            - name: mongodump-volume
              persistentVolumeClaim:
                claimName: mongodb-backup
UPDATE
I have tried the suggested solutions on my local minikube, but I am still getting errors:
mongo mongodb://aaa:abc123@mongodb-service:27017/platformdb?authSource=admin
MongoDB shell version v5.0.8
connecting to: mongodb://mongodb-service:27017/platformdb?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Error: couldn't connect to server mongodb-service:27017, connection attempt failed: SocketException: Error connecting to mongodb-service:27017 (10.102.216.34:27017) :: caused by :: Connection timed out :
connect@src/mongo/shell/mongo.js:372:17
@(connect):2:6
exception: connect failed
exiting with code 1
This gives the same error even when I remove the port and use mongodb://aaa:abc123@mongodb-service/platformdb?authSource=admin. I have also tried putting quotes "" around the URL, but I get the same error.
As a check I tried replicating the exact same scenario on another mongodb deployment with the same structure (it also has a headless service). This deployment is on a remote k8s cluster, however.
This is what I found out:
I cannot connect using a user other than the root user. I created a custom user to do the backups:
db.createUser( {user: "aaa", pwd: "abc123", roles: ["userAdminAnyDatabase", "dbAdminAnyDatabase", "readWriteAnyDatabase","backup"], mechanisms:["SCRAM-SHA-1"]})
NB: I have the same user created in the minikube context as well.
For this custom user I am getting an Authentication failed error every time I try to connect:
mongo mongodb://aaa:abc123@mongodb-headless-service:27017/TestDb?authSource=admin
MongoDB shell version v4.4.7
connecting to: mongodb://mongodb-headless-service:27017/TestDb?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Error: Authentication failed. :
connect@src/mongo/shell/mongo.js:374:17
@(connect):2:6
exception: connect failed
exiting with code 1
I can connect using the root user, but the connection attempt is intermittent. I sometimes have to exit the pod and re-run the command in order to connect.
This seems to be a bug, unless I'm missing something else obvious.
The screen below shows a successful connection, then on the subsequent attempt the exact same connection fails:
On the 1st attempt I managed to log in and run a show collections command, but once I log out and try to connect again I get Authentication Failed. The feature seems unstable at best.
Given the structure of your Service, you'll need to use the hostname mongodb-service (or mongodb-service.<namespace>.svc.cluster.local, if you like fully qualified names). The connection URI -- as far as I can tell from the documentation -- would be:
mongodb://<username>:<password>@mongodb-service/dbname?authSource=admin
You can also connect successfully like this:
mongodb://<username>:<password>@mongodb-service/
Because:
If [the username and password are] specified, the client will attempt to authenticate the user to the authSource. If authSource is unspecified, the client will attempt to authenticate the user to the defaultauthdb. And if the defaultauthdb is unspecified, to the admin database.
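Applied to the CronJob from the question, the MONGODB_URI entry could then look something like this sketch (it assumes the backup user aaa was created with authSource admin, and hard-codes the credentials only for brevity; pulling them from a Secret would be preferable):
env:
  - name: MONGODB_URI
    # Service DNS name from the question plus authSource, per the rule quoted above
    value: "mongodb://aaa:abc123@mongodb-service.default.svc.cluster.local:27017/platformdb?authSource=admin"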
I tested this using a slightly modified version of your Deployment (mostly, I dropped the volumeMounts, because I don't need persistent storage for testing, and I used envFrom because I find that easier in general).
I deployed this using kustomize (kubectl apply -k .) with the following kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: mongo
commonLabels:
  app: mongodb
resources:
  - deployment.yaml
  - service.yaml
secretGenerator:
  - name: mongo-credentials
    envs:
      - mongo.env
This deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: mongodb
          image: docker.io/mongo:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 27017
          envFrom:
            - secretRef:
                name: mongo-credentials
This service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
And this mongo.env:
MONGO_INITDB_ROOT_USERNAME=admin
MONGO_INITDB_ROOT_PASSWORD=secret
Once everything was up and running, I started a client pod:
kubectl run --image docker.io/mongo:latest mongoc -- sleep inf
And I was able to start a shell in that pod and connect to the database:
$ kubectl exec -it mongoc -- bash
Current Mongosh Log ID: 6293a4bc534ff40ec737c383
Connecting to: mongodb://<credentials>#mongodb-service.mongo.svc.cluster.local/?directConnection=true&appName=mongosh+1.4.2
Using MongoDB: 5.0.8
Using Mongosh: 1.4.2
[...]
test>

How to apply configuration to a down Mongo pod in Kubernetes cluster?

I have a running cluster in Kubernetes (Google Cloud), with 2 pods for 2 frontend apps (Angular), 1 pod for the backend app (NodeJS) and 1 pod for MongoDB (currently down).
In the last code update from git the Mongo version was updated unintentionally; the image tag was not specified in the Replication Controller, so it pulled the latest. That version does not seem to work.
I get a CrashLoopBackOff error in Kubernetes, and the details are:
Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade."}
I have updated the Replication Controller to specify a Mongo version, but when I commit/push the change, the workload in Google Cloud is not updating because it's down. The date of the last "Created on" is some days ago, not now (see the attached images if it's not clear what I'm trying to explain).
My 2 biggest doubts are:
How do I force the (re)start of the Mongo Pod (with the added tag specifying the version), in order to fix the down pod issue?
How can I recover the data of the Mongo database in GKE, in order to migrate it quickly?
New Mongo.yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: sgw-production
  labels:
    name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    name: mongo
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: mongo-controller
  namespace: sgw-production
  labels:
    name: mongo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo
    spec:
      containers:
        - image: mongo:4.2.10
          name: mongo
          ports:
            - name: mongo
              containerPort: 27017
              hostPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
      volumes:
        - name: mongo-persistent-storage
          gcePersistentDisk:
            pdName: mongo-disk-$CI_ENVIRONMENT_SLUG
            fsType: ext4
I answered my own question here with the solution to this issue (copy/paste below):
CrashLoopBackOff (Mongo in Docker/Kubernetes) - Failed to start up WiredTiger under any compatibility version
I solved this issue by editing the Replication Controller online from the Google Cloud Console.
Go to: "Kubernetes Engine" > "Workload" > "mongo-controller" > "Managed pods" > "mongo-controller-XXXXX"
...and press the EDIT button (in the top navbar). You can edit the configuration online in real time. I simply specified the Mongo version (4.2.10) in the image, and everything worked as expected.
spec:
  replicas: 1
  selector:
    name: mongo
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: mongo
    spec:
      containers:
        - image: mongo:4.2.10
          (...)
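If you prefer the command line to the console, roughly the same fix can be applied with kubectl (a sketch; the resource and namespace names are taken from the manifest above). Note that a ReplicationController does not roll out template changes to pods that already exist, so the crashing pod also has to be deleted for the new image to take effect:
kubectl -n sgw-production set image rc/mongo-controller mongo=mongo:4.2.10
# delete the crashing pod so the controller recreates it from the updated template
kubectl -n sgw-production delete pod -l name=mongo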

How to Install MongoDb Exporter for Prometheus Monitoring in Kubernetes

I want to monitor my MongoDB with Prometheus. I currently have MongoDB deployed like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          ports:
            - containerPort: 27017
          env:
            # These variables, used in conjunction, create a new user and set that user's password (from the Mongo Docker image)
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-password
and this service
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
And I installed prometheus via helm
helm install stable/prometheus-operator
I know about this MongoDB Helm chart https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-mongodb-exporter. But to my understanding, this installs MongoDB and the MongoDB exporter, so because I already have MongoDB set up, this is of no use to me, right?
What are the steps to install only the Mongo exporter and connect it to my Mongo database? All I know is that I need to create a ServiceMonitor that has a label of release: prometheus-operator-1602753506.
Sorry for this beginner question, I'm still very new to Kubernetes and Helm, so I'm quite confused at this point.
The Helm chart does not install MongoDB; it only requires some configuration to be able to connect to MongoDB and pull metrics from it. It's also stated that the Helm chart comes with its own ServiceMonitor, so you don't need to create a new one:
https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-mongodb-exporter#service-monitor
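A rough sketch of what that configuration could look like (the mongodb.uri and serviceMonitor value names are taken from the chart's documented defaults and may differ between chart versions, so verify them against the chart's values.yaml; the credential placeholders and the release label are the ones from the question):
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install mongodb-exporter prometheus-community/prometheus-mongodb-exporter \
  --set mongodb.uri="mongodb://<user>:<password>@mongodb-service:27017" \
  --set serviceMonitor.additionalLabels.release=prometheus-operator-1602753506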

How can I fix MongoError: no mongos proxy available on GKE

I am trying to deploy an Express API on GKE, with a Mongo StatefulSet.
googlecloud_ssd.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
mongo-statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 2
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo,environment=test"
  volumeClaimTemplates:
    - metadata:
        name: mongo-persistent-storage
        annotations:
          volume.beta.kubernetes.io/storage-class: "fast"
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Gi
I deployed my Express app and it works perfectly. I then deployed Mongo using the above yaml config.
Having set the connection string in Express as:
"mongodb://mongo-0.mongo,mongo-1.mongo:27017/"
I can see the updated pod(s) not starting.
Looking at the logs for that container I see
{
  insertId: "a9tu83g211w2a6"
  labels: {…}
  logName: "projects/<my-project-id>/logs/express"
  receiveTimestamp: "2019-06-03T14:19:14.142238836Z"
  resource: {…}
  severity: "ERROR"
  textPayload: "[ ERROR ] MongoError: no mongos proxy available
"
  timestamp: "2019-06-03T14:18:56.132989616Z"
}
I am unsure how to debug / fix MongoError: no mongos proxy available
Edit
So I scaled down my replicas to 1 on each and it's now working.
I'm confused as to why this won't work with more than 1 replica.
The connection to your MongoDB database doesn't work for two reasons:
1. You cannot connect to a highly-available MongoDB deployment running inside your Kubernetes cluster using Pod DNS names. These unique Pod names, mongo-0.mongo and mongo-1.mongo, with corresponding FQDNs mongo-0.mongo.default.svc.cluster.local and mongo-1.mongo.default.svc.cluster.local, can only be reached from within the K8S cluster. You have an Express web application that runs on the client side (web browser) and needs to connect to your MongoDB from outside of the cluster.
2. Connection string: you should connect to the primary node via a Kubernetes Service name, which abstracts access to the Pods behind the replica set.
Solution:
Create a separate Kubernetes Service of LoadBalancer or NodePort type for your primary replica set member, and use <ExternalIP_of_LoadBalancer>:27017 in your connection string (see the sketch below).
I would encourage you to take a look at the official mongodb Helm chart to see what kind of manifest files are required to satisfy your case.
Hint: use '--set service.type=LoadBalancer' with this Helm chart.
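A minimal sketch of such an external Service, reusing the role: mongo label from the StatefulSet in the question (the Service name is illustrative, and keep in mind this publishes MongoDB on a public IP, so firewall rules and authentication are essential):
apiVersion: v1
kind: Service
metadata:
  name: mongo-external
spec:
  type: LoadBalancer
  selector:
    role: mongo      # matches the StatefulSet pods above
  ports:
    - port: 27017
      targetPort: 27017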

How can I use a MongoDB GUI tool like mongo-express or RockMongo on a Kubernetes cluster

I have MongoDB running on a Kubernetes cluster and I am looking for a MongoDB GUI tool, like phpMyAdmin, to run as a pod on the cluster. I have RockMongo running as a pod, but it doesn't connect to MongoDB and I also couldn't expose it. I need any microservice I can run on the Kubernetes cluster that can administer the MongoDB pod running in the default namespace.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rockmongo
spec:
  selector:
    matchLabels:
      app: rockmongo
  replicas: 1
  template:
    metadata:
      labels:
        app: rockmongo
    spec:
      containers:
        - name: rockmongo
          image: webts/rockmongo
          ports:
            - containerPort: 8050
          env:
            - name: MONGO_HOSTS
              value: '27017'
            - name: ROCKMONGO_PORT
              value: '8050'
            - name: MONGO_HIDE_SYSTEM_COLLECTIONS
              value: 'false'
            - name: MONGO_AUTH
              value: 'false'
            - name: ROCKMONGO_USER
              value: 'admin'
            - name: ROCKMONGO_PASSWORD
              value: 'admin'
Services running on the cluster
rockmongo ClusterIP 10.107.52.82 <none> 8050/TCP 13s
As Vishal Biyani suggested, you may consider using Kubernetes Ingress (with an ingress controller) to access internal resources of MongoDB or the GUI for PHP operations.
Distributed databases such as MongoDB require a little extra attention when being deployed with orchestration frameworks such as Kubernetes.
I found interesting documentation regarding your needs of MongoDB as a microservice with Docker and Kubernetes.
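As a rough sketch of the Ingress approach (assuming an ingress controller is already installed, that the rockmongo ClusterIP Service shown above fronts the pod on port 8050, and that the hostname is purely illustrative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rockmongo
spec:
  rules:
    - host: rockmongo.example.com   # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rockmongo      # the existing ClusterIP Service
                port:
                  number: 8050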