MongoDB authentication failed in Kubernetes (minikube)

I created MongoDB from the minikube deployment below, and I am not able to authenticate from the command line.
Basically I just want to list all databases, because I suspect there is a connectivity issue with mongo-express.
I exec into the pod using kubectl exec -it mongodb-deployment-6b46455744-gfkzw -- /bin/bash and run mongo to start the CLI.
db.auth("username", "password") gives MongoServerError: Authentication failed,
even though printenv gives
MONGO_INITDB_ROOT_PASSWORD=password
MONGO_INITDB_ROOT_USERNAME=username
Any help?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-password
            - name: ME_CONFIG_MONGODB_ENABLE_ADMIN
              value: "true"
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017

I managed to connect using
mongosh --port 27017 --authenticationDatabase "admin" -u "myUserAdmin" -p
following https://docs.mongodb.com/manual/tutorial/authenticate-a-user/#std-label-authentication-auth-as-user and the Docker Hub page.
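For completeness, here is the same check sketched from inside the pod, using the pod name and credentials from the question. The key point is that the mongo image creates the root user in the admin database, so that is the database to authenticate against rather than the default test database:
# shell into the MongoDB pod (pod name taken from the question; yours will differ)
kubectl exec -it mongodb-deployment-6b46455744-gfkzw -- /bin/bash
# authenticate against "admin", where the mongo image creates the root user,
# instead of calling db.auth() against the default "test" database
mongosh --port 27017 -u "username" -p "password" --authenticationDatabase "admin"
# inside mongosh, "show dbs" should now list all databases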

Related

Minikube Kubernetes, Postgres, Spring Boot Cluster - Postgres connection refused

I have a basic minikube configuration for a K8s cluster with only two pods, one for the Postgres DB and one for my Spring app. However, I can't get my app to connect to my DB. I know that in Docker such an issue could be solved with networking, but after a lot of research I can't find the problem or a solution.
Currently, with this configuration, I get a connection refused error from Postgres whenever my Spring app tries to start:
Caused by: org.postgresql.util.PSQLException: Connection to postgres-service:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
My spring-app is a basic REST API with some open endpoints where I query for data. The app itself works fine, and here is my application.properties:
spring.datasource.driverClassName=org.postgresql.Driver
spring.datasource.url=jdbc:postgresql://${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
spring.datasource.username=${POSTGRES_USER}
spring.datasource.password=${POSTGRES_PASSWORD}
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect
spring.jpa.hibernate.ddl-auto=update
The way I create my Postgres component is by creating a ConfigMap, a Secret, and finally a Deployment with its Service inside. They look like this:
postgres-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
data:
  postgres-url: postgres-service
  postgres-port: "5432"
  postgres-db: "test"
postgres-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
type: Opaque
data:
  postgres_user: cm9vdA== # already encoded in base64
  postgres_password: cm9vdA== # already encoded in base64
postgres.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgresdb
          image: postgres
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres_user
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres_password
            - name: POSTGRES_DB
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: postgres-db
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app.kubernetes.io/name: postgres
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
And finally, here is my Deployment with its Service for my Spring app.
spring-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-app-deployment
  labels:
    app: spring-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-app
  template:
    metadata:
      labels:
        app: spring-app
    spec:
      containers:
        - name: spring-app
          image: app # image is pulled from my Docker Hub
          ports:
            - containerPort: 8080
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres_user
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres_password
            - name: POSTGRES_HOST
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: postgres-url
            - name: POSTGRES_PORT
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: postgres-port
            - name: POSTGRES_DB
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: postgres-db
---
apiVersion: v1
kind: Service
metadata:
  name: spring-app-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: spring-app
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 30001
A connection refused error means that the host you are connecting to does not have the port you mentioned open.
This suggests that either the Postgres pod isn't running correctly, or the Service is not pointing to those pods correctly.
Checking the YAMLs, I can see that the Service's pod selector isn't configured correctly:
The Service selects pods with the label app.kubernetes.io/name: postgres.
The Deployment creates pods with the label app: postgres.
The correct Service manifest should look like:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app: postgres
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
You can double-check this by describing the Service with kubectl describe service postgres-service.
The Endpoints field in the output should contain the Postgres pod IPs.
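A minimal sketch of that check (the pod IP in the comment is made up and will differ in your cluster):
# with the corrected selector, the Service should list at least one pod IP
kubectl get endpoints postgres-service
# NAME               ENDPOINTS          AGE
# postgres-service   10.244.0.12:5432   1m    <- hypothetical pod IP
# if ENDPOINTS is <none>, compare the Service selector with the actual pod labels
kubectl get pods -l app=postgres --show-labels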

Authentication failed when connecting to a MongoDB Kubernetes StatefulSet

I am deploying a MongoDB StatefulSet in Kubernetes and setting up my own username and password, but I can't authenticate to the cluster. Please find my YAML manifest below.
Create the StatefulSet along with the headless Service:
...
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: ng-mongo
  labels:
    app: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
      name: database
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
  namespace: ng-mongo
spec:
  serviceName: "mongo"
  replicas: 1
  selector:
    matchLabels:
      role: mongo
  template:
    metadata:
      labels:
        role: mongo
    spec:
      containers:
        - name: mongo
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: test
            - name: MONGO_INITDB_ROOT_PASSWORD
              value: test
            - name: MONGO_INITDB_DATABASE
              value: ng-db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            # https://github.com/cvallance/mongo-k8s-sidecar#settings
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo"
  volumeClaimTemplates:
    - metadata:
        name: mongo-persistent-storage
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "local-storage"
        resources:
          requests:
            storage: 10Gi
Expose the headless service for testing purposes:
kubectl port-forward mongo-0 27017:27017
Try to access MongoDB through the mongo shell:
mongo -u test -ptest --authenticationDatabase ng-db
And I get the following error:
MongoDB shell version v5.0.11
connecting to: mongodb://127.0.0.1:27017/?authSource=ng-db&compressors=disabled&gssapiServiceName=mongodb
Error: Authentication failed. :
connect#src/mongo/shell/mongo.js:372:17
#(connect):2:6
exception: connect failed
exiting with code 1
Can someone help me on this please? Thank you

Getting a MongoCommandException: Command failed with error 18 (AuthenticationFailed): 'Authentication failed.' deployed on Kubernetes

I am running MongoDB on Kubernetes. I have added a username and password, and I am getting the following exception:
com.mongodb.MongoCommandException: Command failed with error 18 (AuthenticationFailed): 'Authentication failed.' on server mongodb-service:27017. The full response is {"ok": 0.0, "errmsg": "Authentication failed.", "code": 18, "codeName": "AuthenticationFailed"}
at com.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:198) ~[mongodb-driver-core-4.6.1.jar!/:na]
Now, I am able to connect to the database using MongoClients:
MongoClient client = MongoClients.create("mongodb://mongodb-service:27017");
ListDatabasesIterable<org.bson.Document> databases = client.listDatabases();
And I am able to connect to it via mongo-express.
I created the following yml files:
mongo-configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-configmap
data:
  database_url: mongodb-service
mongo-secret.yml
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:
  mongo-root-username: dXNlcm5hbWU=
  mongo-root-password: cGFzc3dvcmQ=
mongo-deploy-service.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
And mongo-express.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express
  labels:
    app: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      containers:
        - name: mongo-express
          image: mongo-express
          ports:
            - containerPort: 8081
          env:
            - name: ME_CONFIG_MONGODB_ADMINUSERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-username
            - name: ME_CONFIG_MONGODB_ADMINPASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-password
            - name: ME_CONFIG_MONGODB_SERVER
              valueFrom:
                configMapKeyRef:
                  name: mongodb-configmap
                  key: database_url
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
      nodePort: 30000
And here is my Spring Boot application.yml:
spring:
  application:
    name: kubernetes-mongodb
  data:
    mongodb:
      database: Transaction
      host: mongodb-service
      port: 27017
      username: username
      password: password
server:
  port: '8080'
I am connecting to MongoDB and trying to add data, and it falls over with the authentication error. I have researched this, and some examples talk about creating a user and assigning roles, but not all of them do; it really depends on your MongoDB setup. I just want to try the simplest setup on minikube, using the scripts I have created.
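For reference, the same credentials can be exercised outside Spring with a throwaway client pod. This is only a sketch: the pod name mongosh-client is made up, and it assumes the root user was created in the admin database (the mongo image default), which is why authSource=admin is added to the URI:
# run a one-off client pod and ping the server with explicit credentials and authSource
kubectl run mongosh-client --rm -it --restart=Never --image=mongo -- \
  mongosh "mongodb://username:password@mongodb-service:27017/Transaction?authSource=admin" \
  --eval 'db.runCommand({ ping: 1 })'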
The first thing I need to know is whether I have to set up another yml file:
apiVersion: mongodb.com/v1
kind: MongoDBUser
metadata:
  name: user-mongodb
spec:
  passwordSecretKeyRef:
    name: mongodb-secret
    # Match to metadata.name of the User Secret
    key: password
  username: user-mongodb
  db: "admin"
  mongodbResourceRef:
    name: mongodb
    # Match to MongoDB resource using authentication
  roles:
    - db: "admin"
      name: "clusterAdmin"
    - db: "admin"
      name: "userAdminAnyDatabase"
    - db: "admin"
      name: "readWrite"
    - db: "admin"
      name: "userAdminAnyDatabase"
And then I got the error
error: resource mapping not found for name: "scram-mongodb" namespace: "" from "mongo-user-roles.yml": no matches for kind "MongoDBUser" in version "mongodb.com/v1"
ensure CRDs are installed first
Before I go down the road of trying to fix user creation, I just want to check whether I have to add this step at all if I want to add data to MongoDB with a username and password.

Unable to authenticate MongoDB deployed in Kubernetes

Hi, I have deployed MongoDB 4.2.6 on Kubernetes, and below is the YAML file.
# Service to expose MongoDB
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: namespace-name
  labels:
    app: mongo
spec:
  ports:
    - name: mongo
      port: 27017
      targetPort: 27017
  clusterIP: None
  selector:
    app: mongo
    tier: db
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
  namespace: namespace-name
  labels:
    tier: "db"
    app: "mongo"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
      tier: "db"
  template:
    metadata:
      labels:
        app: mongo
        tier: "db"
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo:4.2.6
          resources:
            limits:
              memory: "2Gi"
            requests:
              memory: "512Mi"
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: "admin"
            - name: MONGO_INITDB_ROOT_PASSWORD
              value: "password"
            - name: MONGO_INITDB_DATABASE
              value: admin
          command:
            - mongod
            - --auth
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: nfs1
              mountPath: /data/db
      volumes:
        - name: nfs1
          nfs:
            server: 0.0.0.0
            path: "/path/to/volumes"
If I try to log in with the credentials specified in the above YAML file (db.auth("admin", "password")), I get an authentication failed message in MongoDB. I have found related issues on Stack Overflow but couldn't find a solution. Can anyone help me create the admin user from the YAML file?
Remove command from your deployment file and it will work. When you set the MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD env variables in your manifest, the mongo container enables --auth by itself, so you don't need to specify it explicitly. Take a look here.
Updated YAML
env:
  - name: MONGO_INITDB_ROOT_USERNAME
    value: "admin"
  - name: MONGO_INITDB_ROOT_PASSWORD
    value: "password"
  - name: MONGO_INITDB_DATABASE
    value: admin
ports:
  - containerPort: 27017
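A quick way to sanity-check the result, sketched with the credentials from this manifest (the pod name below is a placeholder; use the one kubectl prints):
# find the mongo pod created by the deployment (the name suffix is generated)
kubectl -n namespace-name get pods -l app=mongo
# authenticate against the admin database with the root credentials from the manifest
kubectl -n namespace-name exec -it mongo-xxxxxxxxxx-yyyyy -- \
  mongo -u admin -p password --authenticationDatabase admin \
  --eval 'db.adminCommand({ listDatabases: 1 })'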

Expose database to deployment on GKE

I have a Deployment running a pod that needs access to a Postgres database running alongside the Kubernetes cluster. How do I create a Service so that the Deployment has access? My pods keep restarting because the connection times out. I have created firewall rules in the VPC subnet to allow internal communication and have modified pg_hba.conf and postgresql.conf.
My deployment definition is given below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  labels:
    name: server
    app: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: gcr.io/api:v1
          ports:
            - containerPort: 80
          env:
            - name: DB_HOSTNAME
              valueFrom:
                secretKeyRef:
                  name: api-config
                  key: hostname
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: api-config
                  key: username
            - name: DB_NAME
              valueFrom:
                secretKeyRef:
                  name: api-config
                  key: name
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: api-config
                  key: password
This is my Service definition to expose the database, but I don't think it is selecting the deployment. I have followed the example here.
kind: Service
apiVersion: v1
metadata:
  name: postgres
  label:
spec:
  type: ClusterIP
  ports:
    - port: 5432
      targetPort: 5432
---
kind: Endpoints
apiVersion: v1
metadata:
  name: postgres
subsets:
  - addresses:
      - ip: 10.0.0.50
    ports:
      - port: 5432
You can use the following to expose a database to a deployment on GKE:
$ kubectl expose deployment name-of-db --type=LoadBalancer --port 80 --target-port 8080
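If you instead keep the selector-less Service plus Endpoints from the question, here is a minimal sketch for verifying that mapping (the test pod name pg-check is hypothetical; the IP is the one from the question):
# the Service has no selector, so the Endpoints object must carry the database address
kubectl describe service postgres        # Endpoints should show 10.0.0.50:5432
kubectl get endpoints postgres -o yaml   # full view of the manually created Endpoints
# from inside the cluster the database is then reachable as "postgres:5432"
kubectl run pg-check --rm -it --restart=Never --image=postgres -- \
  pg_isready -h postgres -p 5432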