Can't authenticate MongoDB on Kubernetes cluster

When I try to connect to MongoDB running on a Kubernetes cluster with mongo -u admin -p password --authenticationDatabase admin, I get this error:
{"mechanism":"SCRAM-SHA-1","speculative":false,"principalName":"admin","authenticationDatabase":"admin","remote":"192.168.65.3:47486","extraInfo":{},"error":"UserNotFound: Could not find user \"admin\" for db \"admin\""}}
Below is the YAML file I'm using to create the MongoDB service.
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongodb-statefulset
spec:
serviceName: "mongodb-service"
selector:
matchLabels:
app: mongodb
template:
metadata:
labels:
app: mongodb
spec:
containers:
- name: mongodb
image: mongo
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: admin
- name: MONGO_INITDB_ROOT_PASSWORD
value: password
volumeMounts:
- mountPath: /data/db
name: data
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: mongodb-service
spec:
type: LoadBalancer
selector:
app: mongodb
ports:
- port: 27017
targetPort: 27017
I've tried everything and it still doesn't work. I appreciate any help.
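One quick sanity check (with the manifest above, the first pod created by the StatefulSet is named mongodb-statefulset-0) is to confirm that the init variables actually reach the running container:

kubectl exec mongodb-statefulset-0 -- printenv | grep MONGO_INITDB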

Try something like the following, using mongod as the command:
apiVersion: v1
kind: Service
metadata:
labels:
app: mongo
name: mongodb-service
spec:
type: NodePort
ports:
- name: "http"
port: 27017
protocol: TCP
targetPort: 27017
selector:
service: mongo
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: mongo
spec:
replicas: 1
template:
metadata:
labels:
service: mongo
name: mongodb-service
spec:
containers:
- args:
- mongod
- --smallfiles
image: mongo:latest
name: mongo
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: admin
- name: MONGO_INITDB_ROOT_PASSWORD
value: admin
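Two caveats with the manifest above: the extensions/v1beta1 Deployment API has been removed from current Kubernetes releases, and --smallfiles was removed in MongoDB 4.2, so mongod will refuse to start with mongo:latest. A rough apps/v1 equivalent (an untested sketch, reusing the same labels and credentials) would be:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      service: mongo
  template:
    metadata:
      labels:
        service: mongo
    spec:
      containers:
      - name: mongo
        image: mongo:latest
        # passing mongod as args keeps the image entrypoint, so the MONGO_INITDB_* handling still runs
        args:
        - mongod
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: admin
        - name: MONGO_INITDB_ROOT_PASSWORD
          value: admin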

Related

Authentication Failed when connecting to MongoDB Kubernetes Statefulset

I am deploying a MongoDB StatefulSet in Kubernetes and setting up my own username and password, but I can't authenticate to the cluster. Please find my YAML manifest below:
Create the StatefulSet along with the headless Service
...
apiVersion: v1
kind: Service
metadata:
name: mongo
namespace: ng-mongo
labels:
app: mongo
spec:
ports:
- port: 27017
targetPort: 27017
name: database
clusterIP: None
selector:
role: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongo
namespace: ng-mongo
spec:
serviceName: "mongo"
replicas: 1
selector:
matchLabels:
role: mongo
template:
metadata:
labels:
role: mongo
spec:
containers:
- name: mongo
image: mongo
ports:
- containerPort: 27017
volumeMounts:
- name: mongo-persistent-storage
mountPath: /data/db
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: test
- name: MONGO_INITDB_ROOT_PASSWORD
value: test
- name: MONGO_INITDB_DATABASE
value: ng-db
- name: mongo-sidecar
image: cvallance/mongo-k8s-sidecar
env:
# https://github.com/cvallance/mongo-k8s-sidecar#settings
- name: MONGO_SIDECAR_POD_LABELS
value: "role=mongo"
volumeClaimTemplates:
- metadata:
name: mongo-persistent-storage
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-storage"
resources:
requests:
storage: 10Gi
Expose the headless service for testing purposes
kubectl port-forward mongo-0 27017:27017
Try to access MongoDB through the mongo shell
mongo -u test -ptest --authenticationDatabase ng-db
And I get the following error:
MongoDB shell version v5.0.11
connecting to: mongodb://127.0.0.1:27017/?authSource=ng-db&compressors=disabled&gssapiServiceName=mongodb
Error: Authentication failed. :
connect#src/mongo/shell/mongo.js:372:17
#(connect):2:6
exception: connect failed
exiting with code 1
Can someone help me on this please? Thank you
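One detail that often matters here: the mongo image always creates the MONGO_INITDB_ROOT_USERNAME user in the admin database; MONGO_INITDB_DATABASE only selects the database used by init scripts. A useful cross-check against the manifest above is therefore to authenticate against admin instead of ng-db:

kubectl -n ng-mongo port-forward mongo-0 27017:27017
mongo -u test -p test --authenticationDatabase admin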

Unable to authenticate MongoDB deployed in Kubernetes

Hi, I have deployed MongoDB v4.2.6 on Kubernetes, and below is the YAML file.
# Service to expose MongoDB
apiVersion: v1
kind: Service
metadata:
name: mongo
namespace: namespace-name
labels:
app: mongo
spec:
ports:
- name: mongo
port: 27017
targetPort: 27017
clusterIP: None
selector:
app: mongo
tier: db
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo
namespace: namespace-name
labels:
tier: "db"
app: "mongo"
spec:
replicas: 1
selector:
matchLabels:
app: mongo
tier: "db"
template:
metadata:
labels:
app: mongo
tier: "db"
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongo
image: mongo:4.2.6
resources:
limits:
memory: "2Gi"
requests:
memory: "512Mi"
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: "admin"
- name: MONGO_INITDB_ROOT_PASSWORD
value: "password"
- name: MONGO_INITDB_DATABASE
value: admin
command:
- mongod
- --auth
ports:
- containerPort: 27017
volumeMounts:
- name: nfs1
mountPath: /data/db
volumes:
- name: nfs1
nfs:
server: 0.0.0.0
path: "/path/to/volumes"
If I try to log in with the credentials specified in the above YAML file (db.auth("admin", "password")), I get an authentication failed message in MongoDB. I have found related issues on Stack Overflow but couldn't find a solution for this. Can anyone help me create the admin user from the YAML file?
Remove command from your deployment file and it'll work: when you set the MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD environment variables in your manifest, the mongo container enables --auth by itself, so you don't need to specify it explicitly. Take a look at the environment-variable notes in the official mongo image documentation.
Updated YAML
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: "admin"
- name: MONGO_INITDB_ROOT_PASSWORD
value: "password"
- name: MONGO_INITDB_DATABASE
value: admin
ports:
- containerPort: 27017
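Also keep in mind that the MONGO_INITDB_* variables are only applied when the container starts with an empty /data/db, so if the NFS volume already holds data from an earlier run, the admin user will not be created until that data is cleared. Once the pod has come up on a fresh volume, the credentials can be verified from inside the pod, for example (the pod name is a placeholder):

kubectl -n namespace-name exec -it <mongo-pod> -- mongo -u admin -p password --authenticationDatabase admin --eval "db.runCommand({ connectionStatus: 1 })"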

MongoDB in Kubernetes within GCP

I'm trying to deploy MongoDB on my k8s cluster, as MongoDB is my db of choice. To do that I have config files (very similar to what I did with Postgres a few weeks ago).
Here's Mongo's Deployment k8s object:
apiVersion: apps/v1
kind: Deployment
metadata:
name: panel-admin-mongo-deployment
spec:
replicas: 1
selector:
matchLabels:
component: panel-admin-mongo
template:
metadata:
labels:
component: panel-admin-mongo
spec:
volumes:
- name: panel-admin-mongo-storage
persistentVolumeClaim:
claimName: database-persistent-volume-claim
containers:
- name: panel-admin-mongo
image: mongo
ports:
- containerPort: 27017
volumeMounts:
- name: panel-admin-mongo-storage
mountPath: /data/db
In order to get into the pod I made a service:
apiVersion: v1
kind: Service
metadata:
name: panel-admin-mongo-cluster-ip-service
spec:
type: ClusterIP
selector:
component: panel-admin-mongo
ports:
- port: 27017
targetPort: 27017
And of course I need a PVC as well:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: database-persistent-volume-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
In order to get to the db from my server I used this server Deployment object:
apiVersion: apps/v1
kind: Deployment
metadata:
name: panel-admin-api-deployment
spec:
replicas: 1
selector:
matchLabels:
component: panel-admin-api
template:
metadata:
labels:
component: panel-admin-api
spec:
containers:
- name: panel-admin-api
image: my-image
ports:
- containerPort: 3001
env:
- name: MONGO_URL
value: panel-admin-mongo-cluster-ip-service // This is important
imagePullSecrets:
- name: gcr-json-key
But for some reason, when I boot up all the containers with the kubectl apply command, my server says:
MongoDB :: connection error: MongoParseError: Invalid connection string
Can I deploy it like that (as it was possible with Postgres)? Or what am I missing here?
Use mongodb:// in front of your panel-admin-mongo-cluster-ip-service
So it should look like this:
mongodb://panel-admin-mongo-cluster-ip-service
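Applied to the server Deployment above, the env entry would look roughly like this (port 27017 matches the Service; the database name at the end is just an example):

env:
  - name: MONGO_URL
    value: mongodb://panel-admin-mongo-cluster-ip-service:27017/paneldb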

Deploying 2 Namespaces With 1 MongoDB Instance In Each

I am attempting to deploy two MongoDB instances: one to the staging namespace and one to the production namespace. My staging deployment is running fine; however, the production data-retriever deployment crashes with the error MongoNetworkError: getaddrinfo ENOTFOUND.
I am sure this has been done before, but I cannot find any documentation on achieving the desired outcome above.
The following YAML files are being used and the key differences are in the serviceName change in statefulset-prod.yml and MONGODB_URI in the environment variables of deployment-prod.yml:
service-staging.yml
apiVersion: v1
kind: Service
metadata:
name: mongodb
namespace: staging
labels:
app: my-app
environment: staging
spec:
ports:
- name: http
protocol: TCP
port: 27017
targetPort: 27017
clusterIP: None
selector:
role: mongodb
environment: staging
service-prod.yml
apiVersion: v1
kind: Service
metadata:
name: mongodb-production
namespace: production
labels:
app: my-app
environment: production
spec:
ports:
- name: http
protocol: TCP
port: 27017
targetPort: 27017
clusterIP: None
selector:
role: mongodb-production
environment: production
statefulset-staging.yml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: mongodb-staging
namespace: staging
labels:
app: my-app
environment: staging
annotations:
prometheus.io.scrape: "true"
spec:
serviceName: "mongodb"
replicas: 1
template:
metadata:
labels:
role: mongodb
environment: staging
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongo
image: mongo
command:
- mongod
- "--replSet"
- rs0
- "--smallfiles"
- "--noprealloc"
- "--bind_ip_all"
- "--wiredTigerCacheSizeGB=0.5"
ports:
- containerPort: 27017
volumeMounts:
- name: mongo-persistent-storage
mountPath: /data/db
- name: mongo-sidecar
image: cvallance/mongo-k8s-sidecar
env:
- name: MONGO_SIDECAR_POD_LABELS
value: "role=mongodb,environment=staging"
- name: KUBERNETES_MONGO_SERVICE_NAME
value: "mongodb"
volumeClaimTemplates:
- metadata:
name: mongo-persistent-storage
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: fast-storage
resources:
requests:
storage: 25Gi
statefulset-prod.yml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: mongodb-production
namespace: production
labels:
app: my-app
environment: production
annotations:
prometheus.io.scrape: "true"
spec:
serviceName: "mongodb-production"
replicas: 1
template:
metadata:
labels:
role: mongodb-production
environment: production
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongo
image: mongo
command:
- mongod
- "--replSet"
- rs0
- "--smallfiles"
- "--noprealloc"
- "--bind_ip_all"
- "--wiredTigerCacheSizeGB=0.5"
ports:
- containerPort: 27017
volumeMounts:
- name: mongo-persistent-storage
mountPath: /data/db
- name: mongo-sidecar
image: cvallance/mongo-k8s-sidecar
env:
- name: MONGO_SIDECAR_POD_LABELS
value: "role=mongodb-production,environment=production"
- name: KUBERNETES_MONGO_SERVICE_NAME
value: "mongodb-production"
volumeClaimTemplates:
- metadata:
name: mongo-persistent-storage
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: fast-storage
resources:
requests:
storage: 25Gi
deployment-staging.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: data-retriever-deployment
namespace: staging
labels:
app: data-retriever
environment: staging
spec:
replicas: 1
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
template:
metadata:
labels:
app: data-retriever
environment: staging
tier: backend
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- data-retriever
topologyKey: "kubernetes.io/hostname"
imagePullSecrets:
- name: gitlab-reg
containers:
- name: data-retriever
image: registry.gitlab.com/xxx/xxx:latest
env:
- name: MONGODB_URI
value: "mongodb://mongodb:27017/appdb_staging?replicaSet=rs0"
- name: NODE_ENV
value: "staging"
deployment-prod.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: data-retriever-deployment
namespace: production
labels:
app: data-retriever
environment: production
spec:
replicas: 1
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
template:
metadata:
labels:
app: data-retriever
environment: production
tier: backend
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- data-retriever
topologyKey: "kubernetes.io/hostname"
imagePullSecrets:
- name: gitlab-reg
containers:
- name: data-retriever
image: registry.gitlab.com/xxx/xxx:latest
env:
- name: MONGODB_URI
value: "mongodb://mongodb-production:27017/appdb_production?replicaSet=rs0"
- name: NODE_ENV
value: "production"
Logs
Unable to connect to database: MongoNetworkError: failed to connect to server [mongodb:27017] on first connect [MongoNetworkError: getaddrinfo ENOTFOUND mongodb mongodb:27017]
Unable to connect to database: MongoNetworkError: failed to connect to server [mongodb:27017] on first connect [MongoNetworkError: getaddrinfo ENOTFOUND mongodb mongodb:27017]
at NativeConnection.db.on (/app/services/mongoose.service.js:25:19)
at emitOne (events.js:116:13)
at NativeConnection.emit (events.js:211:7)
at process.nextTick (/node_modules/mongoose/lib/connection.js:575:37)
at _combinedTickCallback (internal/process/next_tick.js:132:7)
at process._tickCallback (internal/process/next_tick.js:181:9)
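A quick way to check whether the Service name actually resolves from inside the production namespace (using any throwaway image that provides nslookup):

kubectl -n production run -it --rm dns-test --image=busybox:1.36 --restart=Never -- nslookup mongodb-production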

Setting up MongoDB with Kubernetes on Google Cloud

I'm trying to deploy MongoDB on Google Cloud using Kubernetes. However, I'm having some difficulties. It says:
error converting YAML to JSON: yaml: line 29: found character that cannot start any token
However, on line 29 there's only a space indenting the line, as shown below.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mongo-pv-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
name: mongo
labels:
name: mongo
spec:
ports:
- port: 27017
targetPort: 27017
clusterIP: None
selector:
role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: mongo
spec:
serviceName: "mongo"
replicas: 1
template:
metadata:
labels:
role: mongo
environment: test
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongo
image: mongo
command:
- mongod
- "--replSet"
- rs0
- "--smallfiles"
- "--noprealloc"
ports:
- containerPort: 27017
volumeMounts:
- name: mongo-persistent-storage
mountPath: /data/db
volumes:
- name: mongo-persistent-storage
persistentVolumeClaim:
claimName: mongo-pv-claim
Could somebody point me to where I'm doing something wrong, or more resources on the matter? I've only recently started using Kubernetes, so would appreciate any information.
You had two issues with the formatting of your YAML file: volumes should be at the same level as containers, and persistentVolumeClaim should be nested under - name: mongo-persistent-storage inside the volumes block.
The correct YAML file is:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mongo-pv-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
name: mongo
labels:
name: mongo
spec:
ports:
- port: 27017
targetPort: 27017
clusterIP: None
selector:
role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: mongo
spec:
serviceName: "mongo"
replicas: 1
template:
metadata:
labels:
role: mongo
environment: test
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongo
image: mongo
command:
- mongod
- "--replSet"
- rs0
- "--smallfiles"
- "--noprealloc"
ports:
- containerPort: 27017
volumeMounts:
- name: mongo-persistent-storage
mountPath: /data/db
volumes:
- name: mongo-persistent-storage
persistentVolumeClaim:
claimName: mongo-pv-claim
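As a general tip, indentation problems like this can be caught before touching the cluster with a client-side dry run (the file name here is just an example):

kubectl apply --dry-run=client -f mongo-statefulset.yaml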