Authentication Failed when connecting to MongoDB Kubernetes Statefulset - mongodb

I am deploying a mongodb statefulset in Kubernetes and I am setting up my own username and password, but I can't authenticate to the cluster. Please find my YAML manifest below:
Create the StatefulSet along with the headless Service:
...
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: ng-mongo
  labels:
    app: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
    name: database
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
  namespace: ng-mongo
spec:
  serviceName: "mongo"
  replicas: 1
  selector:
    matchLabels:
      role: mongo
  template:
    metadata:
      labels:
        role: mongo
    spec:
      containers:
      - name: mongo
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: test
        - name: MONGO_INITDB_ROOT_PASSWORD
          value: test
        - name: MONGO_INITDB_DATABASE
          value: ng-db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        # https://github.com/cvallance/mongo-k8s-sidecar#settings
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "local-storage"
      resources:
        requests:
          storage: 10Gi
Expose the headless service for testing purposes
kubectl port-forward mongo-0 27017:27017
Try to access MongoDB through the mongo shell:
mongo -u test -ptest --authenticationDatabase ng-db
And I get the following error:
MongoDB shell version v5.0.11
connecting to: mongodb://127.0.0.1:27017/?authSource=ng-db&compressors=disabled&gssapiServiceName=mongodb
Error: Authentication failed. :
connect#src/mongo/shell/mongo.js:372:17
#(connect):2:6
exception: connect failed
exiting with code 1
Can someone help me with this, please? Thank you.
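For what it's worth, the official mongo image creates the MONGO_INITDB_ROOT_* user in the admin database, while MONGO_INITDB_DATABASE only selects the database used by /docker-entrypoint-initdb.d scripts. A quick way to check where the user actually landed, as a sketch (assuming the pod is mongo-0; use mongo instead of mongosh on older image versions):
# List the users the entrypoint created, authenticating against the admin database
kubectl exec -it mongo-0 -n ng-mongo -- mongosh -u test -p test --authenticationDatabase admin --eval 'db.getSiblingDB("admin").getUsers()'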

Related

Keycloak with postgres on minikube: cannot connect to database

I have one pod with postgres and one pod with keycloak on minikube.
The Keycloak pod, deployed via a Helm chart from codecentric (chart version 17.0.1, application version 16.1.1), is failing to initialize.
I have inspected the logs and it is failing to connect to the database:
FATAL [org.keycloak.services] (ServerService Thread Pool -- 64) Error during startup: java.lang.RuntimeException: Failed to connect to database
...
Caused by: org.postgresql.util.PSQLException: Connection to 127.0.0.1:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
values.yml used to deploy the helm chart
postgresql:
  enabled: false
extraEnv: |
  - name: DB_VENDOR
    value: postgres
  - name: DB_ADDR
    value: "127.0.0.1"
  - name: DB_PORT
    value: "5432"
  - name: DB_DATABASE
    value: keycloak
  - name: DB_USER
    value: keycloak
  - name: DB_PASSWORD
    value: keycloak
  - name: KEYCLOAK_USER
    value: "admin"
  - name: KEYCLOAK_PASSWORD
    value: "admin"
  - name: PROXY_ADDRESS_FORWARDING
    value: "true"
  - name: JDBC_PARAMS
    value: "connectTimeout=30000"
Files used to deploy the postgresql pod:
postgres-configmap.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config-old
  labels:
    app: postgres-old
data:
  POSTGRES_DB: keycloak
  POSTGRES_USER: keycloak
  POSTGRES_PASSWORD: keycloak
postgres-deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-old
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-old
  template:
    metadata:
      labels:
        app: postgres-old
    spec:
      containers:
      - name: postgres-old
        image: docker.io/library/postgres:14.4
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 5432
        envFrom:
        - configMapRef:
            name: postgres-config-old
        volumeMounts:
        - mountPath: /var/lib/postgresql/14/data
          name: postgredb-old
      volumes:
      - name: postgredb-old
        persistentVolumeClaim:
          claimName: postgres-pv-claim-old
postgres-service.yml:
apiVersion: v1
kind: Service
metadata:
  name: postgres-old
  labels:
    app: postgres-old
spec:
  type: NodePort
  ports:
  - port: 5432
  selector:
    app: postgres-old
postgres-storage.yml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume-old
  labels:
    type: local
    app: postgres-old
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim-old
  labels:
    app: postgres-old
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
I've also tried to expose an external port with:
minikube service postgres-old --url
and adding that port in values.yml instead of 5432, but with no luck.
I am running minikube on WSL2.
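For reference, inside the Keycloak pod 127.0.0.1 refers to the Keycloak container itself, so DB_ADDR has to point at the Postgres Service's DNS name (postgres-old, or postgres-old.<namespace>.svc.cluster.local) rather than at localhost. A minimal in-cluster connectivity check, as a sketch (run it in the namespace where the postgres-old Service lives):
# Spin up a throwaway psql client and connect to the Service by name
kubectl run pg-check --rm -it --image=postgres:14.4 --restart=Never -- psql "host=postgres-old port=5432 dbname=keycloak user=keycloak password=keycloak" -c '\conninfo'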

Cannot connect to MongoDB: server selection error: server selection timeout

I have set up the Percona MongoDB Exporter, and when I attempt to scrape metrics from my MongoDB instance I get an error:
An error has occurred while connecting to MongoDB:
cannot connect to MongoDB: server selection error: server selection timeout, current topology: { Type: Unknown, Servers: [{ Addr: mongodb-headless-service.labs.svc.cluster.local:27017, Type: Unknown, Last error: connection() error occured during connection handshake: dial tcp: lookup mongodb-headless-service.svc.cluster.local on 159.XX.XX.XX:XX: no such host }, ] }
Both MongoDB and the Percona MongoDB Exporter are deployed in the same namespace on Kubernetes, and I have created a user (at admin level) to extract the metrics:
db.createUser({ user: "promt", pwd: "abc123", roles: [ { role: "clusterMonitor", db: "admin" },{ role: "read", db: "local" } ], mechanisms:["SCRAM-SHA-1"]})
To verify if the exporter is able to scrape any metrics from the MongoDB instance I am using:
kubectl -n labs port-forward service/mongodb-exporter 9216
and I can access the Exporter at http://localhost:9216
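With the port-forward active, a minimal sanity check (a sketch) is to pull the metrics page directly; the exporter answers even while its MongoDB connection is failing:
# The endpoint should return Prometheus-format metrics regardless of the MongoDB target's state
curl -s http://localhost:9216/metrics | head -n 20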
In the Percona MongoDB Exporter deployment I have specified the MongoDB URI as below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-exporter
  namespace: labs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb-exporter
  template:
    metadata:
      labels:
        app: mongodb-exporter
    spec:
      containers:
      - name: mongodb-exporter
        image: percona/mongodb_exporter:0.30
        imagePullPolicy: "IfNotPresent"
        args:
        - "--mongodb.direct-connect=false"
        - "--mongodb.uri=mongodb://promt:abc123#mongo-headless-service.svc.cluster.local/admin"
        ports:
        - name: metrics
          containerPort: 9216
        resources:
          requests:
            memory: 128Mi
            cpu: 250m
I have tried a lot of options but they all result in the same error:
Adding k8s namespace to the mongo-uri
--mongodb.uri=mongodb://promt:abc123#mongodb-headless-service.labs.svc.cluster.local/admin
Adding option for SSL
--mongodb.uri=mongodb://promt:abc123#mongodb-headless-service.svc.cluster.local/admin?ssl=false
Switching from using a k8s headless service to a ClusterIP
--mongodb.uri=mongodb://promt:abc123#mongo-single-clusterip.labs.svc.cluster.local/admin
Enabling direct-connect option
"--mongodb.direct-connect=true"
- "--mongodb.uri=mongodb://promt:abc123#mongodb-headless-service.svc.cluster.local/admin"
For all these options I also tried variations of adding the ssl option and the namespace, but they fail with the same error in all cases.
My MongoDB instance is a standalone deployment with the following definition:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  namespace: labs
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: docker.io/bitnami/mongodb:4.4.6-debian-10-r0
        imagePullPolicy: "IfNotPresent"
        securityContext:
          runAsNonRoot: true
          runAsUser: 1001
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret-amended
              key: mongo-root-username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret-amended
              key: mongo-root-password
        volumeMounts:
        - mountPath: /data/db
          name: mongodb-vol
      volumes:
      - name: mongodb-vol
        persistentVolumeClaim:
          claimName: mongodb-claim
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  namespace: labs
spec:
  type: NodePort
  selector:
    app: mongodb
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-headless-service
  namespace: labs
spec:
  selector:
    app: mongodb
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-single-clusterip
  namespace: labs
spec:
  type: ClusterIP
  selector:
    app: mongodb
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
For context I am following the example given here and here.
What am I missing?
The reason is stated in the error message you got back:
error occured during connection handshake: dial tcp: lookup mongodb-headless-service.svc.cluster.local on 159.XX.XX.XX:XX: no such host
The DNS lookup for the FQDN mongodb-headless-service.svc.cluster.local fails because the namespace segment (labs) is missing.
You can go through the steps here to confirm your DNS is set up correctly: https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
You need to change the host in your MongoDB URI to mongodb-headless-service.labs.svc.cluster.local in the configuration, and it should start working.
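A minimal sketch of that DNS check, run from inside the cluster (the pod name and image are illustrative):
# Resolve the headless Service FQDN from a throwaway pod in the labs namespace
kubectl run -n labs dns-test --rm -it --image=busybox:1.36 --restart=Never -- nslookup mongodb-headless-service.labs.svc.cluster.local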

Can't authenticate MongoDb on Kubernetes cluster

When I try to connect to MongoDB running on a Kubernetes cluster with mongo -u admin -p password --authenticationDatabase admin, I get this error:
{"mechanism":"SCRAM-SHA-1","speculative":false,"principalName":"admin","authenticationDatabase":"admin","remote":"192.168.65.3:47486","extraInfo":{},"error":"UserNotFound: Could not find user \"admin\" for db \"admin\""}}
Below is the YAML file I'm using to create the MongoDB service.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb-statefulset
spec:
  serviceName: "mongodb-service"
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: admin
        - name: MONGO_INITDB_ROOT_PASSWORD
          value: password
        volumeMounts:
        - mountPath: /data/db
          name: data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  type: LoadBalancer
  selector:
    app: mongodb
  ports:
  - port: 27017
    targetPort: 27017
I've tried everything and it still doesn't work. I appreciate any help.
Try something like the following, using mongod as the command:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mongo
  name: mongodb-service
spec:
  type: NodePort
  ports:
  - name: "http"
    port: 27017
    protocol: TCP
    targetPort: 27017
  selector:
    service: mongo
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        service: mongo
      name: mongodb-service
    spec:
      containers:
      - args:
        - mongod
        - --smallfiles
        image: mongo:latest
        name: mongo
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: admin
        - name: MONGO_INITDB_ROOT_PASSWORD
          value: admin
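One more thing worth checking with a UserNotFound error: the MONGO_INITDB_ROOT_* variables only create the admin user when /data/db is empty on first start, so if the StatefulSet's volume still holds data from an earlier run without credentials, the user never gets created. A sketch of forcing a fresh initialization (the PVC name is assumed here from the <claimTemplate>-<pod> naming convention, and deleting it wipes that volume's data):
kubectl scale statefulset mongodb-statefulset --replicas=0
kubectl delete pvc data-mongodb-statefulset-0
kubectl scale statefulset mongodb-statefulset --replicas=1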

Unable to authenticate Mongodb deployed in kubernetes

Hi, I have deployed MongoDB v4.2.6 on Kubernetes; the YAML file is below.
# Service to expose MongoDB
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: namespace-name
  labels:
    app: mongo
spec:
  ports:
  - name: mongo
    port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    app: mongo
    tier: db
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
  namespace: namespace-name
  labels:
    tier: "db"
    app: "mongo"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
      tier: "db"
  template:
    metadata:
      labels:
        app: mongo
        tier: "db"
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo:4.2.6
        resources:
          limits:
            memory: "2Gi"
          requests:
            memory: "512Mi"
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: "admin"
        - name: MONGO_INITDB_ROOT_PASSWORD
          value: "password"
        - name: MONGO_INITDB_DATABASE
          value: admin
        command:
        - mongod
        - --auth
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: nfs1
          mountPath: /data/db
      volumes:
      - name: nfs1
        nfs:
          server: 0.0.0.0
          path: "/path/to/volumes"
If I try to log in with the credentials specified in the above YAML file (db.auth("admin", "password")), I get an authentication failed message in MongoDB. I have found related issues on Stack Overflow but couldn't find a solution. Can anyone help me create the admin user from the YAML file?
Remove command from your deployment file and it'll work. When you set the MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD env variables in your manifest, the mongo image enables --auth by itself, so you don't need to specify it explicitly. Overriding command with mongod also bypasses the image's entrypoint script, which is what creates the root user in the first place. Take a look here.
Updated YAML
env:
- name: MONGO_INITDB_ROOT_USERNAME
  value: "admin"
- name: MONGO_INITDB_ROOT_PASSWORD
  value: "password"
- name: MONGO_INITDB_DATABASE
  value: admin
ports:
- containerPort: 27017
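After redeploying without the command override (and with an empty /data/db, since the root user is only created on first initialization), a quick way to confirm authentication works, as a sketch with a placeholder pod name:
# Authenticate as the root user against the admin database and print the connection status
kubectl exec -it <mongo-pod> -n namespace-name -- mongo -u admin -p password --authenticationDatabase admin --eval 'db.runCommand({ connectionStatus: 1 })'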

Setting up MongoDB with Kubernetes on Google Cloud

I'm trying to deploy MongoDB on Google Cloud using Kubernetes. However, I'm having some difficulties. It says:
error converting YAML to JSON: yaml: line 29: found character that cannot start any token
However, on line 29 there's only a space indenting the line, as shown below.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--smallfiles"
        - "--noprealloc"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
        volumes:
        - name: mongo-persistent-storage
        persistentVolumeClaim:
          claimName: mongo-pv-claim
Could somebody point me to where I'm going wrong, or to more resources on the matter? I've only recently started using Kubernetes, so I would appreciate any information.
You had two issues with the formatting of your YAML file:
volumes should be on the same level as containers.
persistentVolumeClaim should be under - name: mongo-persistent-storage in the volumes section.
The correct YAML file is:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--smallfiles"
        - "--noprealloc"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      volumes:
      - name: mongo-persistent-storage
        persistentVolumeClaim:
          claimName: mongo-pv-claim
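As a general habit, client-side validation surfaces indentation mistakes like these before anything reaches the cluster; a sketch, with an illustrative file name:
# Parse the manifest locally without creating any resources
kubectl apply --dry-run=client -f mongo-statefulset.yaml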