Sharded Cluster authentication in MongoDB 3.2.4 - mongodb

I have 6 mongod servers:
2 shards, each a replica set of two servers (4 mongod servers in total)
2 config servers
2 mongos instances, which run on the shard servers themselves
I would like to enable authentication on the sharded cluster. I tried enabling --auth while starting the 6 mongod instances, but it throws the error below.
SHARDING [mongosMain] Error initializing sharding state, sleeping for 2 seconds and trying again :: caused by :: Unauthorized: could not get updated shard list from config server due to not authorized for query on config.shards
How do I enable authentication in a sharded cluster? I'm using MongoDB version 3.2.4.
How will the config servers communicate internally with the other mongod servers?
Do I need to create a user on each mongod separately in the admin db?
Please help me understand this.
Thanks in advance.

For a sharded cluster, you have to use keyfile or x.509 certificate authentication for internal cluster communication.
Please refer to this link:
https://docs.mongodb.com/manual/core/security-internal-authentication/
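A minimal sketch of the keyfile approach, assuming an example path of /etc/mongodb-keyfile and illustrative set names and hostnames; the same keyfile content must be copied to every mongod and every mongos in the cluster:

# generate the keyfile once and copy it to every member
openssl rand -base64 756 > /etc/mongodb-keyfile
chmod 400 /etc/mongodb-keyfile

# each shard/config mongod is started with the keyfile plus --auth
mongod --auth --keyFile /etc/mongodb-keyfile --replSet shard1 --port 27018 --dbpath /data/shard1

# mongos only needs the keyfile (config server hostnames are placeholders)
mongos --keyFile /etc/mongodb-keyfile --configdb cfg1.example.net:27019,cfg2.example.net:27019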
To create users, connect to the mongos and add the users there. Since version 2.6, MongoDB stores user credentials in the admin database on the config servers, so you don't have to create users on each mongod separately.
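For example, once connected to the mongos (e.g. via the localhost exception, before any users exist) you could create an administrative user roughly like this; the user name, password, and role list are placeholders:

mongo --host <mongos-host> --port 27017
use admin
db.createUser({
  user: "clusterAdmin",
  pwd: "changeMe",
  roles: [
    { role: "userAdminAnyDatabase", db: "admin" },
    { role: "clusterAdmin", db: "admin" }
  ]
})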
Also you can refer to these links:
http://pe-kay.blogspot.in/2016/02/update-existing-mongodb-replica-set-to.html
http://pe-kay.blogspot.in/2016/02/securing-mongodb-using-x509-certificate.html

Related

Authentication not working on MongoDB cluster

So I have a MongoDB cluster deployed on 8 VMs (2 config servers, 2 shards with 2 replicas each and 2 mongos instances).
I configured it all from the command line and decided to configure authentication afterwards.
For that, I edited /etc/mongod.conf on the config servers and the mongos instances to enable auth like so:
setParameter:
  enableLocalhostAuthBypass: false
security:
  authorization: enabled
Then I restarted both mongod instances as well as the mongos instances; however, when I connect to a mongos I get the following warning: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
And when I create a user with only permission to read, it can still write to a db...
Is there an error in the configuration file, or am I missing something?
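For comparison, a sketch of a security section that also enables internal authentication with a keyfile, which a sharded cluster additionally requires (the keyfile path is illustrative):

security:
  authorization: enabled
  keyFile: /etc/mongodb-keyfile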

MongoDB Sharding - Unable to reach primary for set c1

I have a huge amount of data, therefore I am going to use sharding on MongoDB. I have exactly followed the same steps mentioned here, but getting the warning Unable to reach primary for set c1 when I execute the command mongos --configdb c1/mongodb-test-config on the front-end machine.
Also, when I execute the next command mentioned in the linked tutorial mongo localhost:27017 on the front-end machine, getting the error couldn't connect to server localhost:27017.
Notes in advance:
I have already configured iptables to allow connections on port 27017
I have set bindIp to 0.0.0.0 in the mongod.conf file
MongoDB Version: 4.0.2
Ubuntu Version: 16.04.5
All the machines (2 shards, 1 config server, and 1 front-end) are on the same network and can successfully connect to each other.
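One hedged thing to check, since the linked steps are not shown here: with MongoDB 4.0 the config servers must form a replica set that has been initiated before mongos can find a primary. For example, on the config server host (27019 is the default config server port and an assumption here, as are the exact member details):

mongo mongodb-test-config:27019
rs.initiate({ _id: "c1", configsvr: true, members: [ { _id: 0, host: "mongodb-test-config:27019" } ] })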

Trouble Connect Query Router to Config Server in MongoDB Sharding

I'm working on my project to create a sharded cluster in MongoDB using Microsoft Azure Ubuntu 14.01 virtual machines.
I'm using 3 config servers, 2 query routers, and 4 shard servers.
I have already installed MongoDB version 2.6.9 on all those servers. Then I set up all of the config servers and they worked; they're listening. But when I started to configure the query router instances, they wouldn't connect to the config servers. Here is the command that I use:
mongos --configdb config0.example.com:27019,config1.example.com:27019,config2.example.com:27019
Here is one of the error messages that appears after I run the command.
Couldn't check dbhash on config server [hostname]:27019 :: caused by :: 11002 socket exception [CONNECT_ERROR] server [[hostname]:27019] connection pool error: couldn't connect to server [hostname]:27019 (23.97.57.219), connection attempt failed.
Is there any solution to this problem? I really appreciate your help.
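A quick, hedged way to narrow this down (hostnames taken from the command above) is to test whether each config server is reachable on port 27019 directly from the query router host:

mongo --host config0.example.com --port 27019 --eval 'db.runCommand({ ping: 1 })'

If that fails in the same way, the problem is at the network level (for example, the Azure endpoint or firewall rules for port 27019) rather than in mongos itself.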

mongodb replication with admin authentication error

I have a three-node MongoDB replica set:
2 servers hold the data and the third is an arbiter
54.83.20.44:27017 (primary)
54.197.243.55:40000 (secondary)
23.21.148.73:27017 (arbiter)
Everything has been configured well, with automatic failover.
BUT, I ignored everything about authentication.
I can connect to the replset using "Robomongo" (a desktop MongoDB management tool) without a username/password :(
So I connected to the admin database of the primary member and ran these commands:
mongo
use admin
db.addUser("username", "password");
Then I restarted the mongod process with the --auth option.
This is my log after the restart:
[rsBackgroundSync] replSet not trying to sync from 54.197.243.55:40000, it is vetoed for 8 more seconds
[rsHealthPoll] could not authenticate against 54.197.243.55:40000, { code: 18, ok: 0.0, errmsg: "auth fails" }
[rsHealthPoll] replset info 54.197.243.55:40000 thinks that we are down
What can I do?
Should I add the username/password to the admin database on all servers,
or on the primary server only?
I think you should restart all the mongod instances with the --auth option.
If you are using authentication with replication, you must also enable keyfile authentication; otherwise the members cannot authenticate to each other and will go out of sync.
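A sketch of that restart on each of the three members, including the arbiter; the keyfile path and replica set name are placeholders, and the keyfile is generated once (e.g. with openssl rand -base64 756) with the same file content present on every host:

mongod --replSet <rsName> --auth --keyFile /etc/mongodb-keyfile --port 27017 --dbpath /data/db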

Setting up MongoDB replica set

I have a fast Windows 7 PC with 8 GB of RAM. I want to test this MongoDB replica set: http://www.mongodb.org/display/DOCS/Replica+Sets for my development. I don't want to buy 3 PCs though, as that's kind of expensive. Is there a way to use some kind of technology, like Hyper-V, to set it up? If not, how many PCs, and what kind, should I buy?
You can run multiple mongod processes on the same machine, on different ports and pointing to different data directories, and make them part of the same replica set.
http://www.mongodb.org/display/DOCS/Starting+and+Stopping+Mongo
mongod --dbpath c:/data1 --port 12345 --replSet foo
mongod --dbpath c:/data2 --port 12346 --replSet foo
Then connect to one of the mongod processes using the mongo console and initiate the replica set using the instructions outlined here:
http://www.mongodb.org/display/DOCS/Replica+Sets+-+Basics
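For example, after starting the two processes above you could initiate the set from the mongo shell roughly like this (the set name and ports come from the commands above):

mongo --port 12345
rs.initiate({
  _id: "foo",
  members: [
    { _id: 0, host: "localhost:12345" },
    { _id: 1, host: "localhost:12346" }
  ]
})
rs.status()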
On Ubuntu 18.04, MongoDB shell version v4.2.6:
In terminal (1) (secure the port using a firewall, since we are binding to 0.0.0.0):
sudo systemctl stop mongod
sudo systemctl status mongod
sudo mongod --auth --port 27017 --dbpath /var/lib/mongodb --replSet rs0 --bind_ip 0.0.0.0
Then open another terminal instance (2) (keep the previous one open):
mongo -u yourUserName -p (it will prompt for the password)
rs.initiate()
Then open yet another terminal instance (3).
Here you will run server.js with your connection URL like this:
// connect using the MongoDB Node.js driver; user and password hold the credentials created earlier
const { MongoClient } = require('mongodb');

const url = 'mongodb://' + user + ':' + password +
  '@localhost:27017/?replicaSet=rs0';

MongoClient.connect(url, { useUnifiedTopology: true, authSource: 'admin' },
  function (err, client) {
    if (err) {
      throw err;
    }
    // connected to the replica set
  });
You can create multiple mongod instances running on the same server on different ports.
For the configuration and the way a replica set works, refer to the blog below; it sets up a replica set on the same box as per its instructions.
http://pareshbhav.blogspot.com/2014/12/mongdb-replicaset-and-streaming.html
One super easy way is to set up a MongoDB replica set using Docker.
Within our Docker host, we can create a Docker network, which gives us isolated DNS resolution across containers. Then we can create the MongoDB Docker containers. They would initially be unaware of each other; however, we can initialise replication by connecting to one of the containers and running the replica set initialisation command. Finally, we can deploy our application container on the same Docker network.
Check out the MongoDB replica set by using Docker post on how to make this work.
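A rough sketch of that approach, assuming the official mongo image and the legacy mongo shell; the network, container, and set names are just examples:

docker network create mongo-cluster
docker run -d --name mongo1 --network mongo-cluster mongo --replSet rs0
docker run -d --name mongo2 --network mongo-cluster mongo --replSet rs0
docker run -d --name mongo3 --network mongo-cluster mongo --replSet rs0

# initialise the set from inside one of the containers
docker exec -it mongo1 mongo --eval 'rs.initiate({ _id: "rs0", members: [
  { _id: 0, host: "mongo1:27017" },
  { _id: 1, host: "mongo2:27017" },
  { _id: 2, host: "mongo3:27017" } ] })'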
There is little point in having a replica set on a single host, since this contradicts the goals of redundancy and high availability. If your PC goes down or slows down, your replica set will be ruined or degraded. But it's certainly not cost-efficient to buy several PCs just to evaluate a replica set, so you can consider a possible scenario described in MongoDB Replica Set with Master-Slave Replication.
With respect to the number of replica set members, you're right that the most common topology comprises 3 members, but I would also suggest adding an Arbiter on a separate host. It is a lightweight process and doesn't require a lot of resources, but it plays an important role in maintaining a quorum in the election of a new PRIMARY in case you have an even number of members once the PRIMARY fails.
We will configure a MongoDB replica set with 3 nodes.
Suppose we have 3 nodes:
hostname Mongodb01, IP address 192.168.1.11, MongoDB installed on port 27017
hostname Mongodb02, IP address 192.168.1.22, MongoDB installed on port 27017
hostname Mongodb03, IP address 192.168.1.33, MongoDB installed on port 27017
Before starting with this configuration, ensure that the points below are in place:
The MongoDB service is installed and running on all 3 nodes
All 3 nodes can reach each other via IP address or hostname
The default ports 27017 and 28017 (or any other ports you are planning to use) are not blocked by any firewall or antivirus
Now, let's begin with the configuration.
Step 1: Modify the mongodb.conf file of each node to include replica set information.
replSet = myCluster
rest = true
replSet is the unique name of the replica set, and all the nodes must have the same value for the replSet parameter. rest is optional; it enables the REST interface for the admin web page, which listens on the mongod port + 1000 (28017 here).
Step 2: Restart the MongoDB service on all 3 nodes.
Step 3: Configure the replica set on the node you plan to use as the primary. In our case we will execute the commands below in Mongodb01's mongo shell.
rs.initiate()
Initiates replica set
rs.add("<hostname or ip-address>:<port-no>")
Adds secondary node in replica set.
e.g.; rs.add("Mongodb02:27017") or rs.add("192.168.1.22:27017")
rs.addArb("<hostname or ip-address>:<port-no>")
Adds arbiter node in replica set.
e.g.; rs.addArb("Mongodb03:27017") or rs.addArb("192.168.1.33:27017")
rs.status()
Checks whether all the nodes have been added to the replica set. Another way of checking the nodes in the replica set is to open the following URL in your browser's address bar: http://<hostname or ip-address>:<mongod port + 1000>/_replSet
e.g.; http://localhost:28017/_replSet or http://Mongodb01:28017/_replSet or http://192.168.1.11:28017/_replSet.
This URL is accessible only when you set rest = true in the mongodb.conf file.
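Putting the shell steps together on Mongodb01 (hostnames and ports from the example above):

mongo --host Mongodb01 --port 27017
rs.initiate()
rs.add("Mongodb02:27017")
rs.addArb("Mongodb03:27017")
rs.status()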