MongoDB Cluster Setup

I am trying to set up a 3-node MongoDB cluster.
1) Started mongod on all 3 nodes with the below config file.
net:
  bindIp: 0.0.0.0
  port: 10901
setParameter:
  enableLocalhostAuthBypass: false
systemLog:
  destination: file
  path: "<LOG_PATH>"
  logAppend: true
processManagement:
  fork: true
storage:
  dbPath: "<DB_PATH>/data"
  journal:
    enabled: true
security:
  keyFile: "<KEY_FILE_PATH>"
sharding:
  clusterRole: "configsvr"
replication:
  replSetName: "configReplSet"
2) Created an admin user on one of the config nodes and I am able to log in with it.
mongo --port 10901 -u "admin" -p "adminpwd" --authenticationDatabase "admin" --host <IP>
Now the console prompt shows: user:PRIMARY>
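For reference, the admin user from step 2 would have been created along these lines; a minimal sketch, since the original post does not show the command, the root role is an assumption, and the first user is typically created through the localhost exception:
use admin
db.createUser({
  user: "admin",
  pwd: "adminpwd",
  roles: [ { role: "root", db: "admin" } ]  // assumed role; the post only shows the login
})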
3) Created replica set using the below command.
rs.initiate(
  {
    _id: "configReplSet",
    configsvr: true,
    members: [
      { _id: 0, host: "<IP1>:10901" },
      { _id: 1, host: "<IP2>:10901" },
      { _id: 2, host: "<IP3>:10901" }
    ]
  }
)
4) Executed rs.status() and got the proper output.
5) Started the shard mongod processes with the below config on all 3 instances.
net:
  bindIp: 0.0.0.0
  port: 10903
setParameter:
  enableLocalhostAuthBypass: false
systemLog:
  destination: file
  path: "<LOG_PATH>"
  logAppend: true
processManagement:
  fork: true
storage:
  dbPath: "<DB_PATH>/shard_data/"
  journal:
    enabled: true
security:
  keyFile: "<KEY_FILE>"
sharding:
  clusterRole: "shardsvr"
replication:
  replSetName: "shardReplSet"
6) Created an admin user on one of the shard nodes as well, and I am able to log in with it.
mongo --port 10903 -u "admin" -p "adminpwd" --authenticationDatabase "admin" --host <IP>
7) Created shard replica set using the below command.
rs.initiate(
  {
    _id: "shardReplSet",
    members: [
      { _id: 0, host: "<IP1>:10903" },
      { _id: 1, host: "<IP2>:10903" },
      { _id: 2, host: "<IP3>:10903" }
    ]
  }
)
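As in step 4 for the config replica set, rs.status() on any shard member can confirm the members came up; a quick check, not shown in the original post:
rs.status()  // expect one PRIMARY and two SECONDARY members for shardReplSet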
8) Started the router with the below config
# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: <LOG_PATH_FOR_MONGOS>
# network interfaces
net:
  port: 10902
security:
  keyFile: <KEY_FILE>
processManagement:
  fork: true
sharding:
  configDB: configReplSet/<IP1>:10901,<IP2>:10901,<IP3>:10901
9) Connected to mongos using the mongo shell.
mongo --port 10902 -u "admin" -p "adminpwd" --authenticationDatabase "admin" --host <IP>
Now I see the below in my shell:
MongoDB server version: 3.4.2
mongos>
10) Added the shard from the mongos interface. Since I have configured a replica set:
sh.addShard("shardReplSet/<IP1>:10903,<IP2>:10903,<IP3>:10903")
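To confirm the shard registered, a quick check from the same mongos session may help; a minimal sketch, not in the original post:
sh.status()                          // the shards section should list shardReplSet
db.adminCommand({ listShards: 1 })   // the same information as a raw admin command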
Issues:
1) Unable to connect to MongoDB from a remote machine.
I am able to connect to the other nodes from within these 3 nodes. From Node1:
mongo --port 10902 -u "user" -p "password" --authenticationDatabase "admin" --host <IP1>
mongo --port 10902 -u "user" -p "password" --authenticationDatabase "admin" --host <IP2>
mongo --port 10902 -u "user" -p "password" --authenticationDatabase "admin" --host <IP3>
All three of the above connections work from Node1, Node2, and Node3.
But if I try to connect to these instances from my local machine, I get a timeout error.
I am able to SSH to these servers.
2) I am running the config server on port 10901, the shard on port 10903, and the router on port 10902, with config, shard, and router running on each node. Is this OK?
3) The DB paths for config and shard are different. I had to create an admin user on each service (config, shard, router). Is this correct? I created a replica set for the config and shard servers, but not for the router. Is this OK?
4) Unable to connect to these instances from a remote MongoChef client tool. Do I use the router port to connect to these instances? Is this correct? If so, do I need to run a router on each node?
5) Do we need to connect to port 10903, 10902, or 10901 to create new databases and to create new users for those databases?
6) Is there anything else important to be added here?
Thanks

Related

MongoDB Config server replication setup failing - Pinging failed for distributed lock pinger

I have to set up a 2-node MongoDB config server (for testing), and adding the second node as a secondary is failing.
repset_init.js file details:
rs.add( { host: "10.0.1.222:27017" } )
rs.add( { host: "10.0.2.41:27017" } )
printjson(rs.status())
Command executed for the replica set addition:
mongo -u xxxxx -p yyyy --authenticationDatabase admin --port 27017 repset_init.js
My initialization command
mongo --host 127.0.0.1 --port {{mongod_port}} --eval 'printjson(rs.initiate())'
My config file
net:
  bindIp: 0.0.0.0
  port: 27017
  ssl: {}
processManagement:
  fork: "true"
  pidFilePath: /var/run/mongodb/mongod.pid
replication:
  replSetName: configRS
security:
  authorization: enabled
  keyFile: /etc/zzzzzkey.key
setParameter:
  authenticationMechanisms: SCRAM-SHA-256
sharding:
  clusterRole: configsvr
storage:
  dbPath: /data/dbdata
  engine: wiredTiger
systemLog:
  destination: file
  path: /data/log/mongodb.log
Below are the errors in the log file
{"t":{"$date":"2021-06-09T14:37:36.071+00:00"},"s":"I", "c":"REPL", "id":4508702, "ctx":"conn16","msg":"Waiting for the current config to propagate to a majority of nodes"}
{"t":{"$date":"2021-06-09T14:37:53.143+00:00"},"s":"I", "c":"COMMAND", "id":51803, "ctx":"replSetDistLockPinger","msg":"Slow query","attr":{"type":"command","ns":"config.lockpings","command":{"findAndModify":"lockpings","query":{"_id":"ConfigServer"},"update":{"$set":{"ping":{"$date":"2021-06-09T14:37:38.125Z"}}},"upsert":true,"writeConcern":{"w":"majority","wtimeout":15000},"$db":"config"},"planSummary":"IDHACK","keysExamined":1,"docsExamined":1,"nMatched":1,"nModified":1,"keysInserted":1,"keysDeleted":1,"numYields":0,"reslen":585,"locks":{"ParallelBatchWriterMode":{"acquireCount":{"r":1}},"ReplicationStateTransition":{"acquireCount":{"w":1}},"Global":{"acquireCount":{"w":1}},"Database":{"acquireCount":{"w":1}},"Collection":{"acquireCount":{"w":1}},"Mutex":{"acquireCount":{"r":1}}},"flowControl":{"acquireCount":1,"timeAcquiringMicros":1},"writeConcern":{"w":"majority","wtimeout":15000,"provenance":"clientSupplied"},"storage":{},"protocol":"op_msg","durationMillis":15017}}
{"t":{"$date":"2021-06-09T14:37:53.143+00:00"},"s":"W", "c":"SHARDING", "id":22668, "ctx":"replSetDistLockPinger","msg":"Pinging failed for distributed lock pinger","attr":{"error":{"code":64,"codeName":"WriteConcernFailed","errmsg":"waiting for replication timed out; Error details: { wtimeout: true, writeConcern: { w: \"majority\", wtimeout: 15000, provenance: \"clientSupplied\" } }"}}}
{"t":{"$date":"2021-06-09T14:38:08.065+00:00"},"s":"I", "c":"CONNPOOL", "id":22572, "ctx":"MirrorMaestro","msg":"Dropping all pooled connections","attr":{"hostAndPort":"10.0.2.41:27017","error":"ShutdownInProgress: Pool for 10.0.2.41:27017 has expired."}}
Please guide me
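A hedged diagnostic sketch, not part of the original question: the WriteConcernFailed ping suggests the added member never became a syncing secondary, so printing member states from the primary is a reasonable first step. If the second node stays unreachable or stuck in STARTUP, check network reachability on port 27017 and confirm both hosts hold identical keyFile contents.
// Run in the mongo shell on the primary.
rs.status().members.forEach(function (m) {
  print(m.name + " -> " + m.stateStr + " (health: " + m.health + ")");
});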

How to prevent anonymous access to MongoDB?

I still can't prevent anonymous access to MongoDB after doing the steps below.
1- Start mongod with this command:
mongod --port 27017 --logpath D:\Files\Sessions\log\mongo.log --dbpath D:\Files\Sessions\data\db
2- Create a mongod.conf file with this config:
systemLog:
  destination: file
  path: "D:/Files/Sessions/log/mongo.log"
storage:
  dbPath: "D:/Files/Sessions/data/db"
net:
  bindIp: 127.0.0.1, localhost
  port: 27017
3- Execute mongod --config "D:\Files\Sessions\mongod.conf"
4- Create the admin user with:
use admin
db.createUser(
  {
    user: "myUserAdmin",
    pwd: passwordPrompt(), // or cleartext password
    roles: [ { role: "userAdminAnyDatabase", db: "admin" }, "readWriteAnyDatabase" ]
  }
)
5- Update mongod.conf by adding this config:
security:
  authorization: enabled
6- Execute mongod --config "D:\Files\Sessions\mongod.conf"
In between steps, I also executed this command many times:
mongod --port 27017 --logpath D:\Files\Sessions\log\mongo.log --dbpath D:\Files\Sessions\data\db --auth
In the end, I can create a new admin user and authenticate with it, but I can still also connect as an anonymous user without credentials!
Just in case: mongo version 4.2, Windows 10.
What's wrong?
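One distinction worth illustrating, as a hedged note rather than a definitive answer: with authorization: enabled the shell still lets you connect without credentials, and that is expected; enforcement only shows up when the unauthenticated session tries to run a command:
// In a mongo shell opened with no -u/-p:
db.adminCommand({ listDatabases: 1 })
// expected: an error such as "command listDatabases requires authentication"
show dbs  // should likewise fail instead of listing databases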

Mongod in recovering after enabling security in mongod.conf

Created 2 shards, each a 3-node replica set.
After enabling security in the mongod.conf file, every node was in the RECOVERING state. But if we disable/comment out the security section in mongod.conf, the nodes are normal.
All nodes were running on AWS.
OS: CentOS 7
Mongodb Version: 3.6.2
Background: Before enabling security, I created a user as below.
db.createUser(
  { user: "cxiroot",
    pwd: "root",
    roles: [ { role: "root", db: "admin" } ] },
  { w: "majority", wtimeout: 5000 }
);
I am able to authenticate with db.auth("cxiroot","root").
After enabling security and restarting the service, all nodes were in the RECOVERING state.
Basically I am trying to be able to run db.shutdownServer(), which is why I am enabling security.
Log reporting:
2019-10-17T11:40:23.138-0400 I REPL_HB [replexec-1] Error in heartbeat (requestId: 1934) to 109.99.16.36:27018, response status: Unauthorized: not authorized on admin to execute command { replSetHeartbeat: "shard1rs", configVersion: 4, hbv: 1, from: "109.99.16.112:27018", fromId: 1, term: 128, $replData: 1, $clusterTime: { clusterTime: Timestamp(1571326818, 1), signature: { hash: BinData(0, 466B43AC8CDFBE9B5CBEA8AC4860925560B63296), keyId: 6745513220909301779 } }, $db: "admin" }
2019-10-17T11:40:23.150-0400 I ACCESS [conn2] Unauthorized: not authorized on admin to execute command { replSetHeartbeat: "shard1rs", configVersion: 4, hbv: 1, from: "109.99.16.137:27018", fromId: 2, term: 128, $replData: 1, $clusterTime: { clusterTime: Timestamp(1571326818, 1), signature: { hash: BinData(0, 466B43AC8CDFBE9B5CBEA8AC4860925560B63296), keyId: 6745513220909301779 } }, $db: "admin" }
mongod.conf
systemLog:
  destination: file
  logAppend: true
  path: /mongodb/data/logs/mongod.log
storage:
  dbPath: /mongodb/data/db
  journal:
    enabled: true
processManagement:
  fork: true # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 27018
  bindIp: x.x.x.x
security:
  authorization: 'enabled'
replication:
  replSetName: shard1rs
sharding:
  clusterRole: shardsvr
What's causing this problem?
Edit 2019-10-22:
OP asked to change ports to reflect MongoDB recommendations. It took a bit of digging to find any recommendation from MongoDB. For all modern versions (i.e., 3.6 or later) there is no specific port recommended. Going back to the 3.0 docs, however, the default port for config servers is documented as 27019 (https://docs.mongodb.com/v3.0/tutorial/deploy-shard-cluster/). Looking at Ops Manager default deployments, shard replica sets default to port 27018.
I assume these ports are selected to help protect developers who are writing connection strings and consuming the database from accidentally connecting to a replica set directly, bypassing the MongoDB router MONGOS. OK, so below I have changed the ports respectively...
Here are instructions to set up 10 hosts to run a 2-shard sharded cluster on AWS. These instructions describe infrastructure setup only and do not cover how to select a proper shard key for your data; they set up the cluster but load no data. The breakdown of servers is as follows...
1 host for MONGOS
3 hosts for the config server replica set - each host has a MONGOD installed.
3 hosts for shard0 replica set - each host has a MONGOD installed.
3 hosts for shard1 replica set - each host has a MONGOD installed.
This setup assumes a keyfile will be used for internal authentication. The keyfile is a shared secret stored in a file on all 10 hosts. When using keyfile authentication treat the keyfile like a password. It should be protected.
Generate a keyfile:
openssl rand -base64 741 > mykeyfile
For the sake of this tutorial, assume the keyfile generated is ...
bgi+xXyBAHtNXmQnTjDNrSyTa+I9SGQXbBZONHRxHxKw2y/M3kGtpiJCVCyI+bDk
bXKTHnejIGXcyl7Ykc812DHEEngmqw63HfPxsUHFiDZ8FiwU/5X7W/T9lgKk9SoV
ybIL8+EBjSPvWDa9JWgVKrJFYqG0IejSyrO+js9os6n9kq5kneNOYjnovJwS9MgM
euENHfzTJ2XItcMWtcMilMoXd4Pm9VQgkW8i+Cb9hhQcwm1yA/wT7Tr04l6Pgq74
wQgp5MlYXLmlOGMhsTFGgBv4eVfKVR/r3zr2nshLowpBR6CiX098TO/+mZIGIM54
CoULxBlLBxngSXOWH86tvG05YvrjtAOaiDnHFm59fYKmT1+jfecx/NHOT4Cn1bfO
2q4cpGma3Cy2iRHuEdrm8zV1wvx94x0bLIEttiO4qelb32HLZM9MaL+lKodwhfko
A/3Bcx6+c1tTFtCE2sd5xpAngw6oMKau3nfynQgxgvnLrymCW4Hxqj3ew7F1ShYp
OAskY5/qu+ruad0VO9gqM2PHtsPrVHgkO8zBn+twtpYMOvdTE4M4vYMxA14vxkLA
FrELT/TqYmCPSO8pVS8tu12nADEkRUYRK+LqYKXsogl2FolnvYPLsiw4g9psQw4x
2CmJsEVYel1sl3cxq21Sgd+uO9nyWuNEaKBkYOOgLw67xg6xkGWBrLkg3gC840eI
JE0eOJfDLl+EkF1CUubKv8JB7bxK6kwnoTkfd2OGEHqLGQbV/hP6Gpni4gnqoNlp
65meHn2djSUGWu3wS7m5NRjCqICRTbOQs4K/ugM2hVu4e4dZV0RDt/FOF3u+6Anv
G8X5/GqWDvoIJ4WCvPyqVQoAyDG6S5DiSMmhlwCJUaXu0gFn7NuDPEtC8KAzHK75
qOmsTddIhSSs3fjmPm3wKAxyf2r5/6oBIjMq3vN8ahi+NLa+Vz+8VMRa+ajvE4ws
HPAh7P5iar4u9Uu2WE/J0WvGM88p
I generated this using the command above. It is random and only used for this tutorial.
Install MongoDB:
This example uses MongoDB Enterprise, which requires a license purchased from MongoDB. To use the Community edition, change the repo definition.
This example assumes the AWS hosts are Amazon Linux 2 AMIs.
Run this command on all 10 hosts.
echo '[mongodb-enterprise]
name=MongoDB Enterprise Repository
baseurl=https://repo.mongodb.com/yum/amazon/2/mongodb-enterprise/4.0/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.0.asc' | sudo tee /etc/yum.repos.d/mongodb-enterprise.repo
sudo yum -y install mongodb-enterprise
Distribute the keyfile to all hosts:
On all 10 AWS hosts issue the following...
echo "bgi+xXyBAHtNXmQnTjDNrSyTa+I9SGQXbBZONHRxHxKw2y/M3kGtpiJCVCyI+bDk
bXKTHnejIGXcyl7Ykc812DHEEngmqw63HfPxsUHFiDZ8FiwU/5X7W/T9lgKk9SoV
ybIL8+EBjSPvWDa9JWgVKrJFYqG0IejSyrO+js9os6n9kq5kneNOYjnovJwS9MgM
euENHfzTJ2XItcMWtcMilMoXd4Pm9VQgkW8i+Cb9hhQcwm1yA/wT7Tr04l6Pgq74
wQgp5MlYXLmlOGMhsTFGgBv4eVfKVR/r3zr2nshLowpBR6CiX098TO/+mZIGIM54
CoULxBlLBxngSXOWH86tvG05YvrjtAOaiDnHFm59fYKmT1+jfecx/NHOT4Cn1bfO
2q4cpGma3Cy2iRHuEdrm8zV1wvx94x0bLIEttiO4qelb32HLZM9MaL+lKodwhfko
A/3Bcx6+c1tTFtCE2sd5xpAngw6oMKau3nfynQgxgvnLrymCW4Hxqj3ew7F1ShYp
OAskY5/qu+ruad0VO9gqM2PHtsPrVHgkO8zBn+twtpYMOvdTE4M4vYMxA14vxkLA
FrELT/TqYmCPSO8pVS8tu12nADEkRUYRK+LqYKXsogl2FolnvYPLsiw4g9psQw4x
2CmJsEVYel1sl3cxq21Sgd+uO9nyWuNEaKBkYOOgLw67xg6xkGWBrLkg3gC840eI
JE0eOJfDLl+EkF1CUubKv8JB7bxK6kwnoTkfd2OGEHqLGQbV/hP6Gpni4gnqoNlp
65meHn2djSUGWu3wS7m5NRjCqICRTbOQs4K/ugM2hVu4e4dZV0RDt/FOF3u+6Anv
G8X5/GqWDvoIJ4WCvPyqVQoAyDG6S5DiSMmhlwCJUaXu0gFn7NuDPEtC8KAzHK75
qOmsTddIhSSs3fjmPm3wKAxyf2r5/6oBIjMq3vN8ahi+NLa+Vz+8VMRa+ajvE4ws
HPAh7P5iar4u9Uu2WE/J0WvGM88p" | sudo tee /var/run/mongodb/mykeyfile
sudo chown mongod.mongod /var/run/mongodb/mykeyfile
sudo chmod 400 /var/run/mongodb/mykeyfile
Setup the Config Servers:
Select 3 AWS instances as the config servers. On these three hosts run the following...
sudo rm /etc/mongod.conf
echo "systemLog:
destination: file
logAppend: true
path: /var/log/mongodb/mongod.log
storage:
dbPath: /var/lib/mongo
journal:
enabled: true
processManagement:
fork: true # fork and run in background
pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile
timeZoneInfo: /usr/share/zoneinfo
net:
port: 27019
bindIp: 0.0.0.0
security:
keyFile: /var/run/mongodb/mykeyfile
replication:
replSetName: configrs
sharding:
clusterRole: configsvr
" | sudo tee /etc/mongod.conf
sudo systemctl start mongod
On the final config server log into the mongo shell...
mongo
and initiate the replica set. This example uses host names from my AWS instances. Change the host names to match yours. Notice the name of the replica set is configrs.
rs.initiate(
  {
    _id: "configrs",
    version: 1,
    members: [
      { _id: 0, host: "ip-172-31-27-98.us-west-2.compute.internal:27019" },
      { _id: 1, host: "ip-172-31-17-202.us-west-2.compute.internal:27019" },
      { _id: 2, host: "ip-172-31-19-63.us-west-2.compute.internal:27019" }
    ]
  }
)
Add credentials to allow root-level access for administration. If the prompt says 'SECONDARY', wait about a minute. Test by issuing the command use admin. Continue to wait until it says 'PRIMARY' (a scripted version of this wait is sketched after the commands below). If it never says 'PRIMARY' you have problems and cannot proceed.
use admin
db.createUser({user: "barry", pwd: "mypassword", roles: [{role: "root", db: "admin"}]})
db.auth("barry", "mypassword")
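If you would rather script the wait than watch the prompt, a minimal sketch follows; the polling loop and 5-second interval are illustrative, not part of the original answer:
// Poll until this node reports PRIMARY (myState 1), then proceed.
while (rs.status().myState !== 1) {
  print("waiting for PRIMARY...");
  sleep(5000);  // mongo shell sleep() takes milliseconds
}
print("node is PRIMARY; safe to create users");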
Configure Shard0:
Identify another 3 hosts for shard0.
Run the following on all 3 hosts...
sudo rm /etc/mongod.conf
echo "systemLog:
destination: file
logAppend: true
path: /var/log/mongodb/mongod.log
storage:
dbPath: /var/lib/mongo
journal:
enabled: true
processManagement:
fork: true # fork and run in background
pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile
timeZoneInfo: /usr/share/zoneinfo
net:
port: 27018
bindIp: 0.0.0.0
security:
keyFile: /var/run/mongodb/mykeyfile
replication:
replSetName: shard0
sharding:
clusterRole: shardsvr
" | sudo tee /etc/mongod.conf
sudo systemctl start mongod
On the final host log into the mongo shell...
mongo
Initiate the replica set. Notice replica set name shard0. Replace host names with yours...
rs.initiate(
  {
    _id: "shard0",
    version: 1,
    members: [
      { _id: 0, host: "ip-172-31-21-228.us-west-2.compute.internal:27018" },
      { _id: 1, host: "ip-172-31-17-221.us-west-2.compute.internal:27018" },
      { _id: 2, host: "ip-172-31-17-145.us-west-2.compute.internal:27018" }
    ]
  }
)
... and create root user...
use admin
db.createUser({user: "barry", pwd: "mypassword", roles: [{role: "root", db: "admin"}]})
db.auth("barry", "mypassword")
Configure shard1:
Select 3 remaining hosts for shard1 and apply these settings to all 3...
sudo rm /etc/mongod.conf
echo "systemLog:
destination: file
logAppend: true
path: /var/log/mongodb/mongod.log
storage:
dbPath: /var/lib/mongo
journal:
enabled: true
processManagement:
fork: true # fork and run in background
pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile
timeZoneInfo: /usr/share/zoneinfo
net:
port: 27018
bindIp: 0.0.0.0
security:
keyFile: /var/run/mongodb/mykeyfile
replication:
replSetName: shard1
sharding:
clusterRole: shardsvr
" | sudo tee /etc/mongod.conf
sudo systemctl start mongod
On the last of these 3 hosts start the mongo shell ...
mongo
... and initialize the replica set. Notice replica set name shard1. Replace host names with yours...
rs.initiate(
  {
    _id: "shard1",
    version: 1,
    members: [
      { _id: 0, host: "ip-172-31-30-65.us-west-2.compute.internal:27018" },
      { _id: 1, host: "ip-172-31-17-88.us-west-2.compute.internal:27018" },
      { _id: 2, host: "ip-172-31-23-140.us-west-2.compute.internal:27018" }
    ]
  }
)
On the last of the 3 create a root user...
use admin
db.createUser({user: "barry", pwd: "mypassword", roles: [{role: "root", db: "admin"}]})
db.auth("barry", "mypassword")
Setup MONGOS:
On the final host reserved for the MONGOS router create a MONGOS config file. Change host names on references to config servers to match your implementation.
echo "systemLog:
destination: file
logAppend: true
path: /var/log/mongodb/mongod.log
processManagement:
fork: true
pidFilePath: /var/run/mongodb/mongos.pid # location of pidfile
timeZoneInfo: /usr/share/zoneinfo
net:
bindIp: 0.0.0.0
port: 27017
sharding:
configDB: configrs/ip-172-31-27-98.us-west-2.compute.internal:27019,ip-172-31-17-202.us-west-2.compute.internal:27019,ip-172-31-19-63.us-west-2.compute.internal:27019
security:
keyFile: /var/run/mongodb/mykeyfile
" | sudo tee /etc/mongos.conf
Create a systemd unit to start the MONGOS as the user 'mongod'.
echo '[Unit]
Description=MongoDB Database Server Router
Documentation=https://docs.mongodb.org/manual
After=network.target
[Service]
User=mongod
Group=mongod
Environment="OPTIONS=-f /etc/mongos.conf"
EnvironmentFile=-/etc/sysconfig/mongos
ExecStart=/usr/bin/mongos $OPTIONS
ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb
ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb
ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb
PermissionsStartOnly=true
PIDFile=/var/run/mongodb/mongos.pid
Type=forking
# file size
LimitFSIZE=infinity
# cpu time
LimitCPU=infinity
# virtual memory size
LimitAS=infinity
# open files
LimitNOFILE=64000
# processes/threads
LimitNPROC=64000
# locked memory
LimitMEMLOCK=infinity
# total threads (user+kernel)
TasksMax=infinity
TasksAccounting=false
# Recommended limits for mongod as specified in
# http://docs.mongodb.org/manual/reference/ulimit/#recommended-settings
[Install]
WantedBy=multi-user.target
' | sudo tee /usr/lib/systemd/system/mongos.service
Reload the systemctl daemon and start the MONGOS.
sudo systemctl daemon-reload
sudo systemctl start mongos
If you prefer to start MONGOS manually instead of using systemd scripts use the following command... This is optional.
sudo -u mongod mongos -f /etc/mongos.conf
On the host having the MONGOS log into the mongo shell and authenticate...
mongo
use admin
db.auth("barry", "mypassword")
Initialize the sharding. Only one host is needed. Mongo will discover all the other hosts in the shard. Change host names to match yours...
sh.addShard("shard0/ip-172-31-21-228.us-west-2.compute.internal:27018")
sh.addShard("shard1/ip-172-31-30-65.us-west-2.compute.internal:27018")
View the status of the sharding...
sh.status()
At this point sharding is prepared. No data is on these hosts; no databases or collections have been told to shard, and no shard key has been established. These instructions are merely for setting up the cluster with keyfile internal authentication. See the MongoDB instructions for sharding databases; a brief sketch of that next step follows.
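As a pointer toward that next step, a minimal sketch of sharding one collection; the database name, collection name, and hashed key below are placeholders, not part of this setup:
sh.enableSharding("mydb")                                // mark the database as sharded
sh.shardCollection("mydb.users", { userId: "hashed" })   // a hashed key spreads inserts across shards
sh.status()                                              // chunks should distribute across shard0 and shard1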
Conclusion:
If we utilize keyfile internal authentication, the entire cluster requires user authorization to perform tasks. System administrators can choose to implement SCRAM-SHA username/password authentication for users, or can use x.509 certificate-based authentication, or others. If your cluster was not using any security, applying keyfile internal authentication may come as a surprise because client authentication is now required. My testing showed that if I applied authorization: 'enabled' while a keyfile was defined, there was no change in behavior. Example:
security:
  keyFile: /var/run/mongodb/mykeyfile
... behaved the same as ...
security:
  keyFile: /var/run/mongodb/mykeyfile
  authorization: 'enabled'

Can't connect to mongodb running on EC2, Ubuntu 16

This is my mongod.conf:
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
net:
  port: 27017
  bindIp: 0.0.0.0
security:
  authorization: enabled
And I'm starting it via systemctl which does this:
/usr/bin/mongod --quiet --auth --config /etc/mongod.conf
Created an admin user like so:
mongo
> use admin
> db.createUser({ user:"user001", pwd:"pwd001", roles:[{role:"root", db:"admin"}]})
I can access mongo on EC2 via:
mongo -u user001 -p pwd001 --authenticationDatabase admin
However, when I try to access it from my local machine via:
mongo -u user001 -p pwd001 --authenticationDatabase admin --host 58.17.53.6
I get this error:
MongoDB shell version: 3.0.4
connecting to: 58.17.53.6:27017/test
2017-10-30T19:42:33.302-0700 W NETWORK Failed to connect to 58.17.53.6:27017 after 5000 milliseconds, giving up.
2017-10-30T19:42:33.304-0700 E QUERY Error: couldn't connect to server 58.17.53.6:27017 (58.17.53.6), connection attempt failed
at connect (src/mongo/shell/mongo.js:181:14)
at (connect):1:6 at src/mongo/shell/mongo.js:181
exception: connect failed
Also, my security group for the instance looks like this (screenshot not included):
Any help would be appreciated!

Mongo: provide default user

I'd like to know how to provide a default user on Mongo.
Up to now, we've been able to provision a Mongo instance using Chef. It's working with this configuration file (mongod.conf):
---
systemLog:
  path: "/var/log/mongodb/mongod.log"
  logAppend: true
  destination: file
processManagement:
  fork: true
  pidFilePath: "/var/run/mongodb/mongod.pid"
net:
  port: 30158
  bindIp: localhost
security:
  authorization: enabled
storage:
  dbPath: "/var/lib/mongo"
  journal:
    enabled: true
As you can see, Mongo is running with authorization enabled. So, I'd like to provide default user/password values in order to allow access to the Mongo instance.
I want to do this without having to interact with the mongo command-line client. Is there any way to do it with a script?
I don't know if I've explained this well.
You can create an administrator user in the following manner:
use admin
db.createUser(
  {
    user: "myUserAdmin",
    pwd: "abc123",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
  }
)
You can then login using the credentials:
mongo --port 27017 -u "myUserAdmin" -p "abc123" \
--authenticationDatabase "admin"
You can create a user and assign a role to it:
use myDb;
db.createUser({ user: 'myUser', pwd: 'myPassword', roles: [{ role: 'readWrite', db: 'myDb' }] });
Your DB connection string:
'mongodb://myUser:myPassword@localhost:27017/myDb'
From the terminal:
mongo --port 27017 -u "myUser" -p "myPassword" --authenticationDatabase "myDb"