So I have a MongoDB cluster deployed on 8 VMs (2 config servers, 2 shards with 2 replicas each, and 2 mongos instances).
I configured it all from the command line and decided to configure authentication afterwards.
For that I edited the /etc/mongod.conf of the config servers and mongos to enable auth like so:
setParameter:
  enableLocalhostAuthBypass: false
security:
  authorization: enabled
Then I restarted both mongod instances as well as the mongos instances. However, when I connect to a mongos I get the following warning: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
And when I create a user with read-only permissions, it can still write to a db...
Is there an error on the configuration file or am I missing something?
I have forgotten my MongoDB root user password for a sharded cluster of 3 nodes. I have gone through Stack Overflow for the same issue but was unable to apply the solutions because of my different configuration. Below is my configuration:
MongoDB version 4.4.
Replication on 3 servers (nodes) using keyfile authentication.
All nodes are running in Docker containers.
In case it is useful: I have other credentials that were created through the root user for backup and read/write permissions, but they don't have access to the admin database.
Please guide me if you have any solution. Thanks.
I was unable to find anything else to try.
The official way of doing this is:
Restart MongoDB without authorization, i.e. mongod --noauth ... or via the configuration file:
security:
  authorization: disabled
Then you can logon without password and change credentials of the root user.
Attention: while MongoDB is running without authorization, every user connects with root privileges, so you should restart MongoDB in maintenance mode, i.e.
net:
  bindIp: localhost
  port: 55555
#replication:
#  replSetName: shardA
#sharding:
#  clusterRole: shardsvr
setParameter:
  skipShardingConfigurationChecks: true
  disableLogicalSessionCacheRefresh: true
Then you can connect only from localhost using port 55555 (which is not used by the other cluster members nor known to other users).
You need to do this only on the config servers, because user accounts are stored there, not on the shard or mongos members.
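As a rough sketch of that last step, assuming the maintenance configuration above and that the root user is literally named root (the new password is a placeholder):
# connect locally to the config server running in maintenance mode and reset the password
mongosh --port 55555 --eval 'db.getSiblingDB("admin").changeUserPassword("root", "newStrongPassword")'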
However, there is a much simpler way to achieve the same, use the keyfile for authentication:
mongosh --authenticationDatabase local -u __system -p "$(tr -d '\011-\015\040' < /path/to/keyfile)"
I have a huge amount of data, therefore I am going to use sharding in MongoDB. I have followed exactly the steps mentioned here, but I get the warning Unable to reach primary for set c1 when I execute the command mongos --configdb c1/mongodb-test-config on the front-end machine.
Also, when I execute the next command mentioned in the linked tutorial, mongo localhost:27017, on the front-end machine, I get the error couldn't connect to server localhost:27017.
Notes in advance:
I have already configured iptables to allow connections on port 27017
I have set bindIp to 0.0.0.0 in the mongod.conf file
MongoDB Version: 4.0.2
Ubuntu Version: 16.04.5
All the machines (2 shards, 1 config server, and 1 front-end) are located in the same network, and successfully connect with each other.
I have created a MongoDB instance on an AWS EC2 Ubuntu instance.
MongoDB is running, and when I SSH into the machine and open the mongo console, I am able to create databases, so I am confident it is running successfully.
However, I am not able to gain access to the database from my local machine in a browser.
I have changed the bindIp in /etc/mongod.conf to 0.0.0.0, and I have opened port 27017 by executing sudo ufw allow 27017, but my browser still times out trying to connect.
When I try to configure the instance using mongod --config /etc/mongod.conf, I get the error:
CONTROL [main] Failed global initialization: FileNotOpen: Failed to open "/var/log/mongodb/mongod.log"
This file exists! The relevant portion of the config file looks like this:
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
I have read that you have to enable the REST interface by specifying --rest, but I don't know how to do this since I am starting the database service with the sudo service mongod start command.
My AWS security settings look like this:
What am I missing?
You haven't specified which MongoDB version you're using, but since you're showing YAML configuration, I'll assume it's 2.6 or later.
You can enable the REST interface via mongodb.conf with the following (per https://docs.mongodb.com/v3.0/reference/configuration-options/):
net:
  http:
    enabled: true
    RESTInterfaceEnabled: true
According to the MongoDB docs, the REST interface (which is deprecated in 3.2) listens on port 28017 (1000 + the mongod port), so you will have to open the firewall for that port.
Also, I strongly recommend NOT opening up any DB ports to the world (0.0.0.0). Find your laptop's IP (or probably your router's IP as assigned by your ISP) and add that instead.
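One way to do that with ufw, which you are already using (the same restriction should also be mirrored in the AWS security group; the IP below is a placeholder for your own public address):
# replace the open-to-everyone rule with one restricted to a single client IP
sudo ufw delete allow 27017
sudo ufw allow from 203.0.113.10 to any port 27017
sudo ufw allow from 203.0.113.10 to any port 28017   # REST interface, if you enable it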
Your browser likely cannot connect (you didn't specify the exact error) because MongoDB doesn't use HTTP or any other browser protocol, and your browser doesn't know how to talk to it. You won't be able to do much with your browser even with the REST interface enabled anyway. Try getting the mongo shell (make sure you get the same version as MongoDB on your server) on your laptop and seeing if you can connect to port 27017 with that.
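A quick connectivity check from your laptop might look like this (the hostname is a placeholder for your instance's public DNS name or IP):
mongo --host ec2-203-0-113-10.compute-1.amazonaws.com --port 27017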
I have 6 mongod servers:
2 shards with a replica set of two servers each (4 mongod servers in total)
2 config servers
2 mongos instances which will run on the shard servers themselves
I would like to enable authentication on the sharded cluster. I tried enabling --auth while starting the 6 mongod instances, but it throws the error below:
SHARDING [mongosMain] Error initializing sharding state, sleeping for 2 seconds and trying again :: caused by :: Unauthorized: could not get updated shard list from config server due to not authorized for query on config.shards
How do I enable authentication in a sharded cluster? I'm using MongoDB version 3.2.4.
How will the config servers communicate internally with the other mongod servers?
Do I need to create a user separately in the admin db on each mongod?
Please help me to understand this.
-Thanks in advance.
For a sharded cluster, you have to use keyfile or x.509 certificate authentication for intra-cluster communication.
Please refer to this link:
https://docs.mongodb.com/manual/core/security-internal-authentication/
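A minimal keyfile setup could look roughly like this (the path is a placeholder; the same keyfile must be copied to every mongod and mongos in the cluster):
# generate a keyfile and lock down its permissions
openssl rand -base64 756 > /etc/mongodb-keyfile
chmod 400 /etc/mongodb-keyfile
# start every mongod and mongos with it, in addition to your existing options;
# on mongod, --keyFile also implies --auth
mongod --keyFile /etc/mongodb-keyfile <your existing options>
mongos --keyFile /etc/mongodb-keyfile <your existing options>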
To create users, connect to the mongos and add the users. Since version 2.6+, MongoDB stores user login data in the admin database of the config servers, so you don't have to create user on each mongod separately.
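For example, a first administrative user could be created through a mongos roughly like this (host, user name and password are placeholders):
# connect to a mongos and create a root user in the admin database
mongo --host mongos-host --port 27017
use admin
db.createUser({ user: "admin", pwd: "changeMe", roles: [ { role: "root", db: "admin" } ] })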
Also you can refer to these links:
http://pe-kay.blogspot.in/2016/02/update-existing-mongodb-replica-set-to.html
http://pe-kay.blogspot.in/2016/02/securing-mongodb-using-x509-certificate.html
I have a fast Windows 7 PC with 8 GB RAM. I want to test this MongoDB replica set: http://www.mongodb.org/display/DOCS/Replica+Sets for my development. I don't want to buy 3 PCs, though, as that's kind of expensive. Is there a way to use some kind of technology, like Hyper-V, to be able to set it up? If not, how many PCs, and of what kind, should I buy?
You can run multiple mongod processes on the same machine, on different ports and pointing to different data directories, and make them part of the same replica set.
http://www.mongodb.org/display/DOCS/Starting+and+Stopping+Mongo
mongod --dbpath c:/data1 --port 12345 --replSet foo
mongod --dbpath c:/data2 --port 12346 --replSet foo
and then connect to one of the mongod processes using the mongo console and initiate the replica set using the instructions outlined here:
http://www.mongodb.org/display/DOCS/Replica+Sets+-+Basics
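As a sketch, the initiation from the mongo console could look like this (ports and set name match the two instances started above):
mongo --port 12345
rs.initiate({
  _id: "foo",
  members: [
    { _id: 0, host: "localhost:12345" },
    { _id: 1, host: "localhost:12346" }
  ]
})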
On Ubuntu 18.04, MongoDB shell version v4.2.6.
In terminal (1) (secure the port using a firewall, since we are binding to 0.0.0.0):
sudo systemctl stop mongod
sudo systemctl status mongod
sudo mongod --auth --port 27017 --dbpath /var/lib/mongodb --replSet rs0 --bind_ip 0.0.0.0
Then open another terminal instance (2) (keep the previous one open):
mongo -u yourUserName -p (it will ask for password - follow on)
rs.initiate()
Then open yet another terminal instance (3).
Here you will run server.js with your connection URL, like this:
// the mongodb driver is required; user and password are the credentials created above
const { MongoClient } = require('mongodb');

const user = 'yourUserName';
const password = 'yourPassword';
const url = 'mongodb://' + user + ':' + password +
  '@localhost:27017/?replicaSet=rs0';

MongoClient.connect(url, { useUnifiedTopology: true, authSource: 'admin' },
  function (err, client) {
    if (err) {
      throw err;
    }
    // connected successfully; use client.db(...) from here
  });
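Then, assuming the Node.js driver has been installed in that directory (for example via npm install mongodb), run it from terminal (3) with node server.js.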
You can create multiple mongod instances running on the same server on different ports.
For the configuration and the way a replica set works, refer to the blog below. It will set up a replica set on the same box as per the instructions.
http://pareshbhav.blogspot.com/2014/12/mongdb-replicaset-and-streaming.html
One super easy way is to set up a MongoDB replica set by using Docker.
Within our Docker host, we can create a Docker network, which gives us isolated DNS resolution across containers. Then we can start creating the MongoDB Docker containers. They will initially be unaware of each other; however, we can initialise the replication by connecting to one of the containers and running the replica set initialisation command. Finally, we can deploy our application container on the same Docker network.
Check out the MongoDB replica set by using Docker post on how to make this work.
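A rough sketch of that flow (container names, image tag and replica set name are illustrative, not taken from the linked post):
# create an isolated network and three MongoDB containers on it
docker network create mongo-cluster
docker run -d --name mongo1 --network mongo-cluster mongo:4.4 --replSet rs0
docker run -d --name mongo2 --network mongo-cluster mongo:4.4 --replSet rs0
docker run -d --name mongo3 --network mongo-cluster mongo:4.4 --replSet rs0
# initialise the replica set from inside one of the containers
docker exec -it mongo1 mongo --eval 'rs.initiate({ _id: "rs0", members: [ { _id: 0, host: "mongo1:27017" }, { _id: 1, host: "mongo2:27017" }, { _id: 2, host: "mongo3:27017" } ] })'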
There is no use in having a replica set on a single host, since this contradicts the purpose of redundancy and high availability. If your PC goes down or slows down, your replica set will be ruined or degraded. But it's certainly not cost-efficient to buy several PCs just to evaluate a replica set, so you can consider a possible scenario described in MongoDB Replica Set with Master-Slave Replication.
With respect to the number of replica set members, you're right, the most common topology comprises 3 members, but I would also suggest adding an Arbiter on a separate host. It is a lightweight process and doesn't require a lot of resources, but it plays an important role in maintaining a quorum in the election of a new PRIMARY in case you have an even number of members once the PRIMARY fails.
We will configure MongoDB replica set with 3 nodes.
Suppose we have 3 nodes:
hostname Mongodb01, IP address 192.168.1.11, and MongoDB installed on port 27017
hostname Mongodb02, IP address 192.168.1.22, and MongoDB installed on port 27017
hostname Mongodb03, IP address 192.168.1.33, and MongoDB installed on port 27017
Before proceeding with this configuration, ensure that you have the points below in place:
The MongoDB service is installed and running on all 3 nodes
All 3 nodes can reach each other via IP address or hostname
The default ports 27017 and 28017 (or any other ports you are planning to use) are not blocked by any firewall or antivirus
Now, let's begin with the configuration.
Step 1: Modify the mongodb.conf file of each node to include replica set information.
replSet = myCluster
rest = true
replSet is the unique name of the replica set, and all the nodes must have the same value for the replSet parameter. rest is optional but is used to enable the REST interface for the admin web page.
Step 2: Restart MongoDB service on all the 3 nodes
Step 3: Configure the replica set on the node you plan to use as primary. In our case we will execute the commands below in Mongodb01's mongo shell:
rs.initiate()
Initiates the replica set.
rs.add("<hostname or ip-address>:<port-no>")
Adds a secondary node to the replica set.
e.g.; rs.add("Mongodb02:27017") or rs.add("192.168.1.22:27017")
rs.addArb("<hostname or ip-address>:<port-no>")
Adds an arbiter node to the replica set.
e.g.; rs.addArb("Mongodb03:27017") or rs.addArb("192.168.1.33:27017")
rs.status()
Checks whether all the nodes have been added to the replica set. Another way of checking the nodes in the replica set is to use the following URL in your browser address bar: http://<hostname or ip-address>:<port + 1000>/_replSet
e.g.; http://localhost:28017/_replSet or http://Mongodb01:28017/_replSet or http://192.168.1.11:28017/_replSet.
This URL is accessible only when you set rest = true in the mongodb.conf file; the admin web interface listens on the mongod port + 1000 (28017 by default).