Mongod in RECOVERING state after enabling security in mongod.conf - mongodb

Created 2 shards, each a 3-node replica set.
After enabling security in the mongod.conf file, every node went into the RECOVERING state. If we disable/comment out the security section in mongod.conf, the nodes return to normal.
All nodes were running on AWS.
OS: CentOS 7
Mongodb Version: 3.6.2
Background: Before enabling security, I created a user as shown below.
db.createUser(
  { user: "cxiroot",
    pwd: "root",
    roles: [ { role: "root", db: "admin" } ] },
  { w: "majority", wtimeout: 5000 }
);
I am able to authenticate with db.auth("cxiroot", "root").
After enabling security and restarting the service, all nodes were in the RECOVERING state.
Ultimately, I am enabling security so that I can run db.shutdownServer().
The log reports:
2019-10-17T11:40:23.138-0400 I REPL_HB [replexec-1] Error in heartbeat (requestId: 1934) to 109.99.16.36:27018, response status: Unauthorized: not authorized on admin to execute command { replSetHeartbeat: "shard1rs", configVersion: 4, hbv: 1, from: "109.99.16.112:27018", fromId: 1, term: 128, $replData: 1, $clusterTime: { clusterTime: Timestamp(1571326818, 1), signature: { hash: BinData(0, 466B43AC8CDFBE9B5CBEA8AC4860925560B63296), keyId: 6745513220909301779 } }, $db: "admin" }
2019-10-17T11:40:23.150-0400 I ACCESS [conn2] Unauthorized: not authorized on admin to execute command { replSetHeartbeat: "shard1rs", configVersion: 4, hbv: 1, from: "109.99.16.137:27018", fromId: 2, term: 128, $replData: 1, $clusterTime: { clusterTime: Timestamp(1571326818, 1), signature: { hash: BinData(0, 466B43AC8CDFBE9B5CBEA8AC4860925560B63296), keyId: 6745513220909301779 } }, $db: "admin" }
mongod.conf
systemLog:
  destination: file
  logAppend: true
  path: /mongodb/data/logs/mongod.log
storage:
  dbPath: /mongodb/data/db
  journal:
    enabled: true
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 27018
  bindIp: x.x.x.x
security:
  authorization: 'enabled'
replication:
  replSetName: shard1rs
sharding:
  clusterRole: shardsvr
What's causing this problem?

Edit 2019-10-22:
OP asked me to change the ports to reflect MongoDB recommendations. It took a bit of digging to find any recommendations from MongoDB. For all modern versions (i.e., 3.6 or later) there is no specific port recommended. Going back to the 3.0 docs, however, the default port for config servers is documented as 27019: https://docs.mongodb.com/v3.0/tutorial/deploy-shard-cluster/ . Looking at Ops Manager default deployments, we see shard replica sets default to port 27018.
I assume these ports were chosen to help protect developers who write connection strings and consume the database from accidentally connecting to a replica set directly and bypassing the MongoDB router (MONGOS). So, below I have changed the ports accordingly...
Here are instructions to set up 10 hosts running a 2-shard sharded cluster on AWS. These instructions cover the infrastructure only; they do not describe how to select a proper shard key, and no data is loaded. The breakdown of servers is as follows...
1 host for MONGOS
3 hosts for shard config server replica set - each host has a MONGOD installed.
3 hosts for shard0 replica set - each host has a MONGOD installed.
3 hosts for shard1 replica set - each host has a MONGOD installed.
This setup assumes a keyfile will be used for internal authentication. The keyfile is a shared secret stored in a file on all 10 hosts. When using keyfile authentication, treat the keyfile like a password; it should be protected.
Generate a keyfile:
openssl rand -base64 741 > mykeyfile
For the sake of this tutorial, assume the generated keyfile is ...
bgi+xXyBAHtNXmQnTjDNrSyTa+I9SGQXbBZONHRxHxKw2y/M3kGtpiJCVCyI+bDk
bXKTHnejIGXcyl7Ykc812DHEEngmqw63HfPxsUHFiDZ8FiwU/5X7W/T9lgKk9SoV
ybIL8+EBjSPvWDa9JWgVKrJFYqG0IejSyrO+js9os6n9kq5kneNOYjnovJwS9MgM
euENHfzTJ2XItcMWtcMilMoXd4Pm9VQgkW8i+Cb9hhQcwm1yA/wT7Tr04l6Pgq74
wQgp5MlYXLmlOGMhsTFGgBv4eVfKVR/r3zr2nshLowpBR6CiX098TO/+mZIGIM54
CoULxBlLBxngSXOWH86tvG05YvrjtAOaiDnHFm59fYKmT1+jfecx/NHOT4Cn1bfO
2q4cpGma3Cy2iRHuEdrm8zV1wvx94x0bLIEttiO4qelb32HLZM9MaL+lKodwhfko
A/3Bcx6+c1tTFtCE2sd5xpAngw6oMKau3nfynQgxgvnLrymCW4Hxqj3ew7F1ShYp
OAskY5/qu+ruad0VO9gqM2PHtsPrVHgkO8zBn+twtpYMOvdTE4M4vYMxA14vxkLA
FrELT/TqYmCPSO8pVS8tu12nADEkRUYRK+LqYKXsogl2FolnvYPLsiw4g9psQw4x
2CmJsEVYel1sl3cxq21Sgd+uO9nyWuNEaKBkYOOgLw67xg6xkGWBrLkg3gC840eI
JE0eOJfDLl+EkF1CUubKv8JB7bxK6kwnoTkfd2OGEHqLGQbV/hP6Gpni4gnqoNlp
65meHn2djSUGWu3wS7m5NRjCqICRTbOQs4K/ugM2hVu4e4dZV0RDt/FOF3u+6Anv
G8X5/GqWDvoIJ4WCvPyqVQoAyDG6S5DiSMmhlwCJUaXu0gFn7NuDPEtC8KAzHK75
qOmsTddIhSSs3fjmPm3wKAxyf2r5/6oBIjMq3vN8ahi+NLa+Vz+8VMRa+ajvE4ws
HPAh7P5iar4u9Uu2WE/J0WvGM88p
I generated this using the command above. It is random and only used for this tutorial.
Install MongoDB:
This example uses MongoDB Enterprise, which requires a license purchased from MongoDB. To use the Community edition, change the repo definition.
This example assumes the AWS hosts are running Amazon Linux 2.
Run this command on all 10 hosts.
echo '[mongodb-enterprise]
name=MongoDB Enterprise Repository
baseurl=https://repo.mongodb.com/yum/amazon/2/mongodb-enterprise/4.0/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.0.asc' | sudo tee /etc/yum.repos.d/mongodb-enterprise.repo
sudo yum -y install mongodb-enterprise
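Optionally, confirm the packages installed correctly before continuing (a quick sanity check, not part of the original steps):
mongod --version
mongos --version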
Distribute the keyfile to all hosts:
On all 10 AWS hosts issue the following...
echo "bgi+xXyBAHtNXmQnTjDNrSyTa+I9SGQXbBZONHRxHxKw2y/M3kGtpiJCVCyI+bDk
bXKTHnejIGXcyl7Ykc812DHEEngmqw63HfPxsUHFiDZ8FiwU/5X7W/T9lgKk9SoV
ybIL8+EBjSPvWDa9JWgVKrJFYqG0IejSyrO+js9os6n9kq5kneNOYjnovJwS9MgM
euENHfzTJ2XItcMWtcMilMoXd4Pm9VQgkW8i+Cb9hhQcwm1yA/wT7Tr04l6Pgq74
wQgp5MlYXLmlOGMhsTFGgBv4eVfKVR/r3zr2nshLowpBR6CiX098TO/+mZIGIM54
CoULxBlLBxngSXOWH86tvG05YvrjtAOaiDnHFm59fYKmT1+jfecx/NHOT4Cn1bfO
2q4cpGma3Cy2iRHuEdrm8zV1wvx94x0bLIEttiO4qelb32HLZM9MaL+lKodwhfko
A/3Bcx6+c1tTFtCE2sd5xpAngw6oMKau3nfynQgxgvnLrymCW4Hxqj3ew7F1ShYp
OAskY5/qu+ruad0VO9gqM2PHtsPrVHgkO8zBn+twtpYMOvdTE4M4vYMxA14vxkLA
FrELT/TqYmCPSO8pVS8tu12nADEkRUYRK+LqYKXsogl2FolnvYPLsiw4g9psQw4x
2CmJsEVYel1sl3cxq21Sgd+uO9nyWuNEaKBkYOOgLw67xg6xkGWBrLkg3gC840eI
JE0eOJfDLl+EkF1CUubKv8JB7bxK6kwnoTkfd2OGEHqLGQbV/hP6Gpni4gnqoNlp
65meHn2djSUGWu3wS7m5NRjCqICRTbOQs4K/ugM2hVu4e4dZV0RDt/FOF3u+6Anv
G8X5/GqWDvoIJ4WCvPyqVQoAyDG6S5DiSMmhlwCJUaXu0gFn7NuDPEtC8KAzHK75
qOmsTddIhSSs3fjmPm3wKAxyf2r5/6oBIjMq3vN8ahi+NLa+Vz+8VMRa+ajvE4ws
HPAh7P5iar4u9Uu2WE/J0WvGM88p" | sudo tee /var/run/mongodb/mykeyfile
sudo chown mongod.mongod /var/run/mongodb/mykeyfile
sudo chmod 400 /var/run/mongodb/mykeyfile
Setup the Config Servers:
Select 3 AWS instances as the config servers. On these three hosts run the following...
sudo rm /etc/mongod.conf
echo "systemLog:
destination: file
logAppend: true
path: /var/log/mongodb/mongod.log
storage:
dbPath: /var/lib/mongo
journal:
enabled: true
processManagement:
fork: true # fork and run in background
pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile
timeZoneInfo: /usr/share/zoneinfo
net:
port: 27019
bindIp: 0.0.0.0
security:
keyFile: /var/run/mongodb/mykeyfile
replication:
replSetName: configrs
sharding:
clusterRole: configsvr
" | sudo tee /etc/mongod.conf
sudo systemctl start mongod
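Optionally, verify that each config server came up cleanly before continuing; the service status and the log are the quickest places to look (the paths match the config above):
sudo systemctl status mongod
tail -n 20 /var/log/mongodb/mongod.log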
On the final config server log into the mongo shell...
mongo
and initiate the replica set. This example uses host names from my AWS instances. Change the host names to match yours. Notice the name of the replica set is configrs.
rs.initiate(
  {
    _id: "configrs",
    version: 1,
    members: [
      { _id: 0, host: "ip-172-31-27-98.us-west-2.compute.internal:27019" },
      { _id: 1, host: "ip-172-31-17-202.us-west-2.compute.internal:27019" },
      { _id: 2, host: "ip-172-31-19-63.us-west-2.compute.internal:27019" }
    ]
  }
)
Add credentials to allow root-level access for administration. If the prompt says SECONDARY, wait about a minute. Test by issuing the command use admin, and continue to wait until the prompt says PRIMARY. If it never says PRIMARY, you have a problem and cannot proceed.
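If you prefer an explicit check over watching the prompt, something like the following in the mongo shell prints each member's state (a minimal sketch using the standard fields of rs.status() output):
rs.status().members.forEach(function(m) { print(m.name + " : " + m.stateStr) })
Once a member reports PRIMARY, create the root user: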
use admin
db.createUser({user: "barry", pwd: "mypassword", roles: [{role: "root", db: "admin"}]})
db.auth("barry", "mypassword")
Configure Shard0:
Identify another 3 hosts for shard0.
Run the following on all 3 hosts...
sudo rm /etc/mongod.conf
echo "systemLog:
destination: file
logAppend: true
path: /var/log/mongodb/mongod.log
storage:
dbPath: /var/lib/mongo
journal:
enabled: true
processManagement:
fork: true # fork and run in background
pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile
timeZoneInfo: /usr/share/zoneinfo
net:
port: 27018
bindIp: 0.0.0.0
security:
keyFile: /var/run/mongodb/mykeyfile
replication:
replSetName: shard0
sharding:
clusterRole: shardsvr
" | sudo tee /etc/mongod.conf
sudo systemctl start mongod
On the final host log into the mongo shell...
mongo
Initiate the replica set. Notice replica set name shard0. Replace host names with yours...
rs.initiate(
  {
    _id: "shard0",
    version: 1,
    members: [
      { _id: 0, host: "ip-172-31-21-228.us-west-2.compute.internal:27018" },
      { _id: 1, host: "ip-172-31-17-221.us-west-2.compute.internal:27018" },
      { _id: 2, host: "ip-172-31-17-145.us-west-2.compute.internal:27018" }
    ]
  }
)
... and create root user...
use admin
db.createUser({user: "barry", pwd: "mypassword", roles: [{role: "root", db: "admin"}]})
db.auth("barry", "mypassword")
Configure shard1:
Select 3 remaining hosts for shard1 and apply these settings to all 3...
sudo rm /etc/mongod.conf
echo "systemLog:
destination: file
logAppend: true
path: /var/log/mongodb/mongod.log
storage:
dbPath: /var/lib/mongo
journal:
enabled: true
processManagement:
fork: true # fork and run in background
pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile
timeZoneInfo: /usr/share/zoneinfo
net:
port: 27018
bindIp: 0.0.0.0
security:
keyFile: /var/run/mongodb/mykeyfile
replication:
replSetName: shard1
sharding:
clusterRole: shardsvr
" | sudo tee /etc/mongod.conf
sudo systemctl start mongod
On the last of these 3 hosts, start the mongo shell ...
mongo
... and initialize the replica set. Notice replica set name shard1. Replace host names with yours...
rs.initiate(
  {
    _id: "shard1",
    version: 1,
    members: [
      { _id: 0, host: "ip-172-31-30-65.us-west-2.compute.internal:27018" },
      { _id: 1, host: "ip-172-31-17-88.us-west-2.compute.internal:27018" },
      { _id: 2, host: "ip-172-31-23-140.us-west-2.compute.internal:27018" }
    ]
  }
)
On the last of the 3, create a root user...
use admin
db.createUser({user: "barry", pwd: "mypassword", roles: [{role: "root", db: "admin"}]})
db.auth("barry", "mypassword")
Setup MONGOS:
On the final host reserved for the MONGOS router create a MONGOS config file. Change host names on references to config servers to match your implementation.
echo "systemLog:
destination: file
logAppend: true
path: /var/log/mongodb/mongod.log
processManagement:
fork: true
pidFilePath: /var/run/mongodb/mongos.pid # location of pidfile
timeZoneInfo: /usr/share/zoneinfo
net:
bindIp: 0.0.0.0
port: 27017
sharding:
configDB: configrs/ip-172-31-27-98.us-west-2.compute.internal:27019,ip-172-31-17-202.us-west-2.compute.internal:27019,ip-172-31-19-63.us-west-2.compute.internal:27019
security:
keyFile: /var/run/mongodb/mykeyfile
" | sudo tee /etc/mongos.conf
Create a systemd unit file to start the MONGOS as the user 'mongod'.
echo '[Unit]
Description=MongoDB Database Server Router
Documentation=https://docs.mongodb.org/manual
After=network.target
[Service]
User=mongod
Group=mongod
Environment="OPTIONS=-f /etc/mongos.conf"
EnvironmentFile=-/etc/sysconfig/mongos
ExecStart=/usr/bin/mongos $OPTIONS
ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb
ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb
ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb
PermissionsStartOnly=true
PIDFile=/var/run/mongodb/mongos.pid
Type=forking
# file size
LimitFSIZE=infinity
# cpu time
LimitCPU=infinity
# virtual memory size
LimitAS=infinity
# open files
LimitNOFILE=64000
# processes/threads
LimitNPROC=64000
# locked memory
LimitMEMLOCK=infinity
# total threads (user+kernel)
TasksMax=infinity
TasksAccounting=false
# Recommended limits for mongod as specified in
# http://docs.mongodb.org/manual/reference/ulimit/#recommended-settings
[Install]
WantedBy=multi-user.target
' | sudo tee /usr/lib/systemd/system/mongos.service
Reload the systemctl daemon and start the MONGOS.
sudo systemctl daemon-reload
sudo systemctl start mongos
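If you also want the router and the mongod services to come back after a reboot, you can optionally enable them (not required for this walkthrough):
sudo systemctl enable mongos     # on the MONGOS host
sudo systemctl enable mongod     # on each config server and shard host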
If you prefer to start the MONGOS manually instead of using the systemd script, use the following command. This is optional.
sudo -u mongod mongos -f /etc/mongos.conf
On the host running the MONGOS, log into the mongo shell and authenticate...
mongo
use admin
db.auth("barry", "mypassword")
Initialize sharding. Only one host per shard needs to be listed; MongoDB will discover the other hosts in each shard's replica set. Change the host names to match yours...
sh.addShard("shard0/ip-172-31-21-228.us-west-2.compute.internal:27018")
sh.addShard("shard1/ip-172-31-30-65.us-west-2.compute.internal:27018")
View the status of the sharding...
sh.status()
At this point sharding is prepared. There is no data on these hosts, no databases or collections have been enabled for sharding, and no shard key has been established. These instructions merely set up the infrastructure using keyfile internal authentication. See the MongoDB documentation for sharding databases.
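As a rough illustration of what comes next once you have data and have chosen a shard key, the commands follow this pattern (the database, collection, and key names below are placeholders, not part of the setup above):
sh.enableSharding("mydb")
sh.shardCollection("mydb.mycollection", { myShardKey: 1 })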
Conclusion:
If we use keyfile internal authentication, the entire cluster requires user authorization to perform tasks. System administrators can choose to implement SCRAM-SHA username/password authentication for users, x.509 certificate-based authentication, or other mechanisms. If your cluster was not using any security, applying keyfile internal authentication may come as a surprise, because client authentication is now required. My testing showed that applying authorization: 'enabled' while a keyfile was defined made no change in behavior. Example:
security:
  keyFile: /var/run/mongodb/mykeyfile
... behaved the same as ...
security:
  keyFile: /var/run/mongodb/mykeyfile
  authorization: 'enabled'
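For completeness: once the keyfile is in place, clients must authenticate, so a connection through the MONGOS with the root user created earlier would look roughly like this (a sketch; <mongos-host> is a placeholder for your MONGOS host name):
mongo --host <mongos-host> --port 27017 -u barry -p mypassword --authenticationDatabase admin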

Related

Connection attempt failed : Socket Exception - MongoDB TLS encryption on Ubuntu 16.04

I am facing this error on my Ubuntu 16.04 machine while trying to encrypt data between the client and the server using TLS/SSL on MongoDB:
As requested, here is my command in text format:
mongo --tls --tlsCAFile rootCA.pem --tlsCertificateKeyFile mongodb.pem --host 127.0.0.1:27017
I have created a self-signed CA certificate, and also created the mongodb.pem file, as required for TLS/SSL encryption.
Does anybody know how to fix it? If you need more info, I will gladly provide it.
This is my mongodb.conf file :
mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/
# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:
# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1
  tls:
    mode: requireTLS
    certificateKeyFile: /home/youssef/mongodb.pem
# how the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo
#security:
#operationProfiling:
#replication:
#sharding:
## Enterprise-Only Options:
#auditLog:
#snmp:
And I used this method to create a user:
db.createUser( { user: "accountAdmin01",
  pwd: "password", // or "<cleartext password>"
  roles: [ { role: "clusterAdmin", db: "admin" } ] } )
This is the error I get from the logs :
"msg":"Error receiving request from client. Ending connection from remote","attr":{"error":{"code":141,"codeName":"SSLHandshakeFailed","errmsg":"SSL handshake received but server is started without SSL support"},"remote":"127.0.0.1:34766","connectionId":4}}
And just in case you are wondering where I got the rootCA.pem and mongodb.pem files, I just went through this tutorial : https://rajanmaharjan.medium.com/secure-your-mongodb-connections-ssl-tls-92e2addb3c89
According to your config file and createUser, you use the TLS/SSL certificate only to encrypt the connection. In this case, skip the --tlsCertificateKeyFile mongodb.pem option.
The MongoDB server provides the certificate (mongodb.pem); the client only has to verify this certificate using the CA certificate rootCA.pem.
If you want to use --tlsCertificateKeyFile, then you must specify CAFile in the mongod.conf; otherwise the MongoDB server cannot verify the certificate provided by the client:
net:
  port: 27017
  bindIp: 127.0.0.1
  tls:
    mode: requireTLS
    certificateKeyFile: /home/youssef/mongodb.pem
    CAFile: /etc/ssl/rootCA.pem
    allowConnectionsWithoutCertificates: true  # if you like to permit connections with and without certificates
Note: try openssl verify -CAfile rootCA.pem mongodb.pem to check whether your certificate is valid and working.
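To make the two options concrete, the client invocations would look roughly like this (a sketch based on the file names above; client.pem stands for a hypothetical client certificate, which is not created anywhere in this question):
# Option 1: client only verifies the server certificate via the CA
mongo --tls --tlsCAFile rootCA.pem --host 127.0.0.1:27017
# Option 2: server configured with CAFile, client also presents a certificate
mongo --tls --tlsCAFile rootCA.pem --tlsCertificateKeyFile client.pem --host 127.0.0.1:27017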

Mongo DB Version 5 Access Control With Two Read Replicas

I have three Ubuntu 20.04 servers:
db-master
db-node1
db-node2
I would like to know how to enable access control.
The servers work fine except when the following configuration is added to /etc/mongod.conf. Thanks in advance:
$ mongo --version
MongoDB shell version v5.0.1
$ sudo vim /etc/mongod.conf
security:
  keyFile: "/home/ubuntu/security.keyFile"
  authorization: enabled
$ sudo systemctl status mongod
● mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)
Active: active (running)
$ sudo systemctl restart mongod
ubuntu@db-node2:~$ sudo systemctl status mongod
● mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)
Active: failed
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/
# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
#  engine:
#  wiredTiger:
# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0
# how the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo
#security:
#operationProfiling:
replication:
  replSetName: "rs0"
#sharding:
## Enterprise-Only Options:
#auditLog:
security:
  keyFile: "/home/ubuntu/security.keyFile"
  authorization: enabled
#snmp:
Solution
ubuntu#:~$ sudo chown mongodb [PATH TO KEY FILE]
ubuntu#:~$ sudo chgrp root [PATH TO KEY FILE]
ubuntu#:~$ sudo chmod 400 [PATH TO KEY FILE]
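After fixing the ownership and permissions, restarting and authenticating is a quick way to confirm the change took effect (a sketch; substitute whatever admin user you created on the replica set):
sudo systemctl restart mongod
sudo systemctl status mongod
mongo -u <admin user> -p --authenticationDatabase admin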

MongoDB Config server replication setup failing - Pinging failed for distributed lock pinger

I have to set up a 2-node MongoDB config server replica set (for testing), and adding the second node as a secondary fails.
repset_init.js file details
rs.add( { host: "10.0.1.222:27017" } )
rs.add( { host: "10.0.2.41:27017" } )
printjson(rs.status())
Command executed for the replica set addition:
mongo -u xxxxx -p yyyy --authenticationDatabase admin --port 27017 repset_init.js
My initialization command
mongo --host 127.0.0.1 --port {{mongod_port}} --eval 'printjson(rs.initiate())'
My config file
net:
  bindIp: 0.0.0.0
  port: 27017
  ssl: {}
processManagement:
  fork: "true"
  pidFilePath: /var/run/mongodb/mongod.pid
replication:
  replSetName: configRS
security:
  authorization: enabled
  keyFile: /etc/zzzzzkey.key
setParameter:
  authenticationMechanisms: SCRAM-SHA-256
sharding:
  clusterRole: configsvr
storage:
  dbPath: /data/dbdata
  engine: wiredTiger
systemLog:
  destination: file
  path: /data/log/mongodb.log
Below are the errors in the log file
{"t":{"$date":"2021-06-09T14:37:36.071+00:00"},"s":"I", "c":"REPL", "id":4508702, "ctx":"conn16","msg":"Waiting for the current config to propagate to a majority of nodes"}
{"t":{"$date":"2021-06-09T14:37:53.143+00:00"},"s":"I", "c":"COMMAND", "id":51803, "ctx":"replSetDistLockPinger","msg":"Slow query","attr":{"type":"command","ns":"config.lockpings","command":{"findAndModify":"lockpings","query":{"_id":"ConfigServer"},"update":{"$set":{"ping":{"$date":"2021-06-09T14:37:38.125Z"}}},"upsert":true,"writeConcern":{"w":"majority","wtimeout":15000},"$db":"config"},"planSummary":"IDHACK","keysExamined":1,"docsExamined":1,"nMatched":1,"nModified":1,"keysInserted":1,"keysDeleted":1,"numYields":0,"reslen":585,"locks":{"ParallelBatchWriterMode":{"acquireCount":{"r":1}},"ReplicationStateTransition":{"acquireCount":{"w":1}},"Global":{"acquireCount":{"w":1}},"Database":{"acquireCount":{"w":1}},"Collection":{"acquireCount":{"w":1}},"Mutex":{"acquireCount":{"r":1}}},"flowControl":{"acquireCount":1,"timeAcquiringMicros":1},"writeConcern":{"w":"majority","wtimeout":15000,"provenance":"clientSupplied"},"storage":{},"protocol":"op_msg","durationMillis":15017}}
{"t":{"$date":"2021-06-09T14:37:53.143+00:00"},"s":"W", "c":"SHARDING", "id":22668, "ctx":"replSetDistLockPinger","msg":"Pinging failed for distributed lock pinger","attr":{"error":{"code":64,"codeName":"WriteConcernFailed","errmsg":"waiting for replication timed out; Error details: { wtimeout: true, writeConcern: { w: \"majority\", wtimeout: 15000, provenance: \"clientSupplied\" } }"}}}
{"t":{"$date":"2021-06-09T14:38:08.065+00:00"},"s":"I", "c":"CONNPOOL", "id":22572, "ctx":"MirrorMaestro","msg":"Dropping all pooled connections","attr":{"hostAndPort":"10.0.2.41:27017","error":"ShutdownInProgress: Pool for 10.0.2.41:27017 has expired."}}
Please guide me

MongoDB Cluster Setup

I am trying to setup a 3 node MongoDB cluster.
1) Started mongod on all 3 nodes with the config file below.
net:
  bindIp: 0.0.0.0
  port: 10901
setParameter:
  enableLocalhostAuthBypass: false
systemLog:
  destination: file
  path: "<LOG_PATH>"
  logAppend: true
processManagement:
  fork: true
storage:
  dbPath: "<DB_PATH>/data"
  journal:
    enabled: true
security:
  keyFile: "<KEY_FILE_PATH>"
sharding:
  clusterRole: "configsvr"
replication:
  replSetName: "configReplSet"
2) Created an admin user on one of the config nodes and am able to log in with it.
mongo --port 10901 -u "admin" -p "adminpwd" --authenticationDatabase "admin" --host <IP>
now the console says, user:PRIMARY>
3) Created replica set using the below command.
rs.initiate(
  {
    _id: "configReplSet",
    configsvr: true,
    members: [
      { _id : 0, host : "<IP1>:10901" },
      { _id : 1, host : "<IP2>:10901" },
      { _id : 2, host : "<IP3>:10901" }
    ]
  }
)
4) Executed rs.status() and got the proper output.
5) Started Mongo shards with the below config in all 3 instances.
net:
  bindIp: 0.0.0.0
  port: 10903
setParameter:
  enableLocalhostAuthBypass: false
systemLog:
  destination: file
  path: "<LOG_PATH>"
  logAppend: true
processManagement:
  fork: true
storage:
  dbPath: "<DB_PATH>/shard_data/"
  journal:
    enabled: true
security:
  keyFile: "<KEY_FILE>"
sharding:
  clusterRole: "shardsvr"
replication:
  replSetName: "shardReplSet"
6) Created an admin user on one of the shard nodes as well and am able to log in with it.
mongo --port 10903 -u "admin" -p "adminpwd" --authenticationDatabase "admin" --host <IP>
7) Created shard replica set using the below command.
rs.initiate(
  {
    _id: "shardReplSet",
    members: [
      { _id : 0, host : "<IP1>:10903" },
      { _id : 1, host : "<IP2>:10903" },
      { _id : 2, host : "<IP3>:10903" }
    ]
  }
)
8) Started the router with the below config
# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: <LOG_PATH_FOR_MONGOS>
# network interfaces
net:
  port: 10902
security:
  keyFile: <KEY_FILE>
processManagement:
  fork: true
sharding:
  configDB: configReplSet/<IP1>:10901,<IP2>:10901,<IP3>:10901
9) Connected to mongos using the mongo shell.
mongo --port 10902 -u "admin" -p "adminpwd" --authenticationDatabase "admin" --host <IP>
Now I see the following at the prompt:
MongoDB server version: 3.4.2
mongos>
10) Added each shard via the mongos interface.
Since I have configured replica sets:
sh.addShard("shardReplSet/:10903, :10903, :10903")
Issues:
1) Unable to connect to MongoDB from a remote machine.
I am able to connect to the other nodes from within these 3 nodes.
From Node1,
mongo --port 10902 -u "user" -p "password" --authenticationDatabase "admin" --host <IP1>
mongo --port 10902 -u "user" -p "password" --authenticationDatabase "admin" --host <IP2>
mongo --port 10902 -u "user" -p "password" --authenticationDatabase "admin" --host <IP3>
All of the above 3 connections work from Node1, Node2, and Node3.
But if I try to connect to these instances from my local machine, I get a timeout error.
I am able to SSH to these servers.
2) I am running the config server on port 10901, the shard on port 10903, and the router on port 10902, with config, shard, and router running on each node. Is this OK?
The DB paths for config and shard are different. I had to create an admin user on each service (config, shard, router). Is this correct?
I created a replica set for the config and shard servers, but not for the router. Is this OK?
4) Unable to connect to these instances from a remote MongoChef tool. I use the router port to connect to these instances? Is this correct? If so, do I need to run a router on each node?
5) Do we need to connect to port 10903, 10902, or 10901 to create new databases and new users for the DBs?
6) Is there anything else important to add here?
Thanks

When creating first admin user on mongodb cluster getting error "couldn't add user: not authorized on admin to execute command"

I am using a MongoDB cluster, version 3.4, on Google Cloud Compute Engine. Last week my database was attacked by hackers, which is why I want to add authorization to prevent this type of attack. To add authorization I followed the article how-to-create-mongodb-replication-clusters and added a keyfile with chmod 0600 on each of my cluster nodes, but now when I try to add my first admin user I get the error below.
use admin
switched to db admin
rs0:PRIMARY> db.createUser({user: "RootAdmin", pwd: "password123", roles: [ { role: "root", db: "admin" } ]});
2017-01-21T18:19:09.814+0000 E QUERY [main] Error: couldn't add user: not authorized on admin to execute command { createUser: "RootAdmin", pwd: "xxx", roles: [ { role: "root", db: "admin" } ], digestPassword: false, writeConcern: { w: "majority", wtimeout: 300000.0 } } :
_getErrorWithCode#src/mongo/shell/utils.js:25:13
DB.prototype.createUser#src/mongo/shell/db.js:1290:15
#(shell):1:1
I have searched everywhere but haven't found anything on why I am getting this error.
Can anyone please help me solve it?
UPDATE
My config file is given below for each of the instances
Secondary Server Config
#!/bin/bash
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/
# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: false
  #engine:
  mmapv1:
    smallFiles: true
  # wiredTiger:
# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0
replication:
  replSetName: rs0
#processManagement:
security:
  authorization: disabled
  keyFile: /opt/mongodb/keyfile
#operationProfiling:
#replication:
#sharding:
## Enterprise-Only Options:
#auditLog:
#snmp:
Arbiter Server Config
#!/bin/bash
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/
# Where and how to store data.
storage:
  dbPath: /mnt/mongodb/db
  journal:
    enabled: true
  #engine:
  #mmapv1:
  #smallFiles: true
  # wiredTiger:
# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /mnt/mongodb/log/mongodb.log
# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0
replication:
  replSetName: rs0
#processManagement:
security:
  authorization: disabled
  keyFile: /opt/mongodb/keyfile
#operationProfiling:
#replication:
#sharding:
## Enterprise-Only Options:
#auditLog:
#snmp:
Primary Server Config
#!/bin/bash
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/
# Where and how to store data.
storage:
  dbPath: /mnt/mongodb/db
  journal:
    enabled: true
  #engine:
  #mmapv1:
  #smallFiles: true
  # wiredTiger:
# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /mnt/mongodb/log/mongodb.log
# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0
replication:
  replSetName: rs0
#processManagement:
security:
  authorization: disabled
  keyFile: /opt/mongodb/keyfile
#operationProfiling:
#replication:
#sharding:
## Enterprise-Only Options:
#auditLog:
#snmp:
You have to change your mongod.conf file to disable authorization before creating such an admin user:
security:
  authorization: disabled
After that, restart the mongod service and open the mongo shell to create the admin user:
use admin
db.createUser({user:"RootAdmin",pwd:"blahblah",roles:["root"]})
Remember to turn authorization back on after creating the user.
johnlowvale's answer is correct, but
keyFile implies security.authorization.
source: https://docs.mongodb.com/manual/reference/configuration-options/#security.keyFile
You have to disable authorization AND the keyFile.
security:
  authorization: disabled
  # keyFile: /opt/mongodb/keyfile
(insufficient rep or I'd have just commented this on johnlowvale's answer)
Once you are connected to this first node, you can initiate the replica set with rs.initiate(). Again, this command must be run from the same host as the mongod to use the localhost exception.
We can create our admin user with the following commands:
rs.initiate()
use admin
db.createUser({
  user: "admin",
  pwd: "pass",
  roles: [
    { role: "root", db: "admin" }
  ]
})
Edit the systemd unit (vim /lib/systemd/system/mongod.service), remove --auth, and restart:
#ExecStart=/usr/bin/mongod --quiet --auth --config /etc/mongod.conf
ExecStart=/usr/bin/mongod --quiet --config /etc/mongod.conf
use admin
db.createUser({user:"RootAdmin",pwd:"blahblah",roles:["root"]})
To be able to create a new user, you need to first disable security in /etc/mongod.conf
# security:
#   authorization: enabled
Then restart the MongoDB server:
sudo service mongod restart
After this you can add the user and role that you want from the shell.
db.createUser({
  user: 'test_user',
  pwd: 'test',
  roles: [
    { role: "userAdmin", db: "test" },
    { role: "dbAdmin", db: "test" },
    { role: "readWrite", db: "test" }
  ]
})
To enable authenticated connections, uncomment the lines again in /etc/mongod.conf:
security:
  authorization: enabled
and restart the server again
When a new database is set up with authorization/security enabled but no users configured, you can only connect to it from localhost. In your config file you should have bindIp set to 127.0.0.1, I think, in order to make sure you connect with the correct authorization to create new users.
This is what it says in the MongoDB course M103:
By default, a mongod that enforces authentication but has no configured users only allows connections through the localhost.
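In practice that means the very first user is created by connecting from the mongod host itself, with no credentials, and then running the createUser shown in the answers above (a brief sketch assuming the default port):
mongo --host 127.0.0.1 --port 27017
# then: use admin, followed by one of the db.createUser(...) calls shown above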