What are best practices for syncing users and roles between Mongo instances?
On the same Windows machine, I am trying to copy MongoDB users and roles in the admin database from one Mongo instance to another. Authentication is 'on' for each instance. No combination of mongodump/mongorestore or mongoexport/mongoimport I have tried works. With mongodump/mongorestore, the restore step displays:
assuming users in the dump directory are from <= 2.4 (auth version 1)
Failed: the users and roles collections in the dump have an incompatible auth version with target server: cannot restore users of auth version 1 to a server of auth version 5
I found no command line option to tell it not to do this silly thing. I have only MongoDB version 4 installed.
You would think --dumpDbUsersAndRoles and --restoreDbUsersAndRoles would be symmetrical, but they are not.
I was able to run this:
mongoexport --port 27017 -u admin --password please -d admin --collection system.roles --out myRoles.json
However, when trying mongoimport
mongoimport --port 26017 -u admin --password please -d admin --collection "system.roles" --file myRoles.json
the output displays
error validating settings: invalid collection name: collection name 'system.roles' is not allowed to begin with 'system.'
Primer
Users are attached to databases. Ideally, you have your database specific users stored in the respective database. All “global” users should go into admin. The good part: replica sets take care of syncing those users to each member of the replica set.
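For illustration, a minimal sketch of that split (the database, user names, and passwords here are hypothetical):
// Database-specific user, stored in the database it belongs to
db = db.getSiblingDB("myAppDb");
db.createUser({ user: "appUser", pwd: "changeMe", roles: ["readWrite"] });
// "Global" user, stored in admin
db = db.getSiblingDB("admin");
db.createUser({ user: "opsUser", pwd: "changeMe", roles: ["readWriteAnyDatabase"] });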
Solution
That being said, the way to deal with this becomes fairly obvious.
For a worst-case scenario, it is much easier to have a .js file ready which simply recreates the 3-4 global roles than to fiddle with the system.* collections in the admin database. This has the advantage that you can also do other setup automatically, like sharding setup, if TSHTF and you need to rebuild your cluster from scratch.
db = db.getSiblingDB("admin"); // "use admin" only works interactively, not in a .js file
db.createRole({...})
db.createRole({...})
// do other stuff, like sharding setup
Run it against the primary of your replica set or a mongos instance (if you have a sharded cluster) using
mongo daHost:27017/admin myjsfile.js
after you set up your machines but before you enable authentication.
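As a sketch, such a .js file might look like this (the role name and privileges are made up for illustration):
db = db.getSiblingDB("admin");
db.createRole({
    role: "appReadWrite",   // hypothetical role name
    privileges: [
        { resource: { db: "myAppDb", collection: "" }, actions: [ "find", "insert", "update", "remove" ] }
    ],
    roles: []
});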
Another option would be to use Ansible for user creation.
As for dump and restore, you might want to leave out the collection name and dump the whole admin database instead.
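A minimal sketch, reusing the ports and credentials from the question:
# Dump the whole admin database, including system.users and system.roles
mongodump --port 27017 -u admin -p please --authenticationDatabase admin -d admin -o adminDump
# Restore it into the target instance
mongorestore --port 26017 -u admin -p please --authenticationDatabase admin -d admin adminDump/admin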
Related
I want to migrate a collection from one Mongo Atlas Cluster to another. How do I go about in doing that?
There are two possible approaches here:
Migration with downtime: stop the service, export the data from the collection to a third location, import the data into the new collection on the new cluster, and then resume the service.
But there's a better way: using the mongomirror utility. With this utility, you can sync collections across clusters without any downtime. The utility first syncs the db (or selected collections from it) and then ensures subsequent writes to the source are synced to the destination.
The following is the syntax I used to get it to run:
./mongomirror --host atlas-something-shard-0/prod-mysourcedb--shard-00-02-pri.abcd.gcp.mongodb.net:27017 \
--username myUserName \
--password PASSWORD \
--authenticationDatabase admin \
--destination prod-somethingelse-shard-0/prod-mydestdb-shard-00-02-pri.abcd.gcp.mongodb.net:27017 \
--destinationUsername myUserName \
--destinationPassword PASSWORD \
--includeNamespace dbname.collection1 \
--includeNamespace dbname.collection2 \
--ssl \
--forceDump
Unfortunately, there are MANY pitfalls here:
Ensure your user has the correct role. This is actually covered in the docs, so read the relevant section closely.
To correctly specify the host and destination fields, you'll need to obtain both the replica set name and the primary instance name. One way to get these is to use the mongosh tool and run rs.conf() on both the source and destination clusters. The replica set name is specified as "_id" in the command's output, and the instances are listed as "members". You'll want to take the primary instance's "host" field. The end result should look like RS-name/primary-instance-host:port (see the sketch after this list).
IF you specify a replica set, you MUST specify the PRIMARY instance. Failing to do so will result in an obscure error (something about EOF).
I recommend adding the --forceDump flag (at least until you manage to get it to run for the first time).
If you specify non-existent collections, the utility will only give one indication that they don't exist and will then go on to "sync" them, rather than failing.
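A minimal mongosh sketch for building that RS-name/primary-instance-host:port string:
// Run against the source and destination clusters
const conf = rs.conf();                 // "_id" holds the replica set name
const primary = rs.status().members.find(m => m.stateStr === "PRIMARY");
print(`${conf._id}/${primary.name}`);   // e.g. atlas-something-shard-0/host:27017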
I have a MongoDB replica set of 3 members (version 2.4) in which the administrator user for the 'admin' db does not have the 'userAdminAnyDatabase' role.
This role is required for managing the users on all databases.
The roles I currently have are: [ "readWriteAnyDatabase", "dbAdminAnyDatabase", "clusterAdmin" ]
I tried updating my own roles and creating a new user; however, I have no permission to access db.system.users in the admin db.
I tried setting noauth=true, but that did not help. When removing the keyFile as well, the db was not able to sync with the other members (obviously) and got stuck in the RECOVERING state.
I found a similar question that refers to a standalone db (no replica set), so it doesn't really help in this case.
What would be the best way to add this role while having minimal system downtime?
I would use mongodump and mongorestore to back up the data, then rebuild the node with the right permissions and restore the data.
However, this approach should work:
If you have locked yourself out then you need to do the following:
Stop your MongoDB instance
Remove the --auth and/or --keyfile options from your MongoDB config to disable authentication
Start the instance without authentication
Edit the users as needed (see the sketch after this list)
Restart the instance with authentication enabled
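For the user-editing step on 2.4, a minimal sketch, assuming the 2.4 auth schema where roles is a plain array on the documents in admin.system.users:
// Run against the instance started without --auth / --keyfile
db = db.getSiblingDB("admin");
db.system.users.update(
    { user: "admin" },                               // the locked-out admin user
    { $addToSet: { roles: "userAdminAnyDatabase" } }
);
db.system.users.findOne({ user: "admin" });          // verify the role was added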
As you are using MongoDB 2.4, that means you have MMAPv1 as the storage engine.
My proposal would be:
Create a similar replica set on each host on a different port, with the database directory on the same media as the current one
Configure all auth stuff the same as on the running ones
Stop the old replica set members
MOVE the database files to the new directory, excluding local (see the sketch below)
Change the port on the new replica set
Start it
As moving files to another directory on the same filesystem is just a pointer change, this will take only a few seconds.
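A rough sketch of the move step (paths are hypothetical; run it with the old member stopped):
# Move the MMAP data files, skipping the local database (it holds the old oplog)
for f in /data/olddb/*; do
  case "$(basename "$f")" in
    local|local.*) ;;          # leave the old replica set's local files behind
    *) mv "$f" /data/newdb/ ;;
  esac
done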
Please test before implementation.
Any comments welcome!
I am migrating from server OLD at the old hosting company to server NEW at the new hosting company.
I want to run the clone command so I can clone the MongoDB database from OLD to NEW.
For OLD:
The public IP address is 44.55.66.77.
The machine login user name is admin, and the password is password.
What is the right way to do this?
So far I can't even log into server OLD.
So far I have tried the following commands on NEW:
mongo -u admin -p password 44.55.66.77
mongo remote-ip:44.55.77.66 -u admin -p password
Those don't work.
I also tried this from the mongo shell:
db.copyDatabase('OldDb', 'NewDb', '44.55.66.77', 'admin', 'password')
and I get the "could not connect to server" error message.
Aside from firewall considerations when copying data between MongoDB servers, db.copyDatabase() (aka the copydb command) has a number of important usage caveats, including:
copydb does not produce point-in-time snapshots of the source database; writing data to the source or destination database during the copy process will result in divergent data sets
copydb does not lock the destination server during its operation, so the copy will occasionally yield to allow other operations to complete.
There is also a known issue that copydb may not work with the role-based privileges in MongoDB 2.4 if you have authentication enabled (see SERVER-8213, which was recently fixed in the 2.5.x development releases).
A much better approach to migrating your data would be to restore from a normal backup using mongodump/mongorestore or file system snapshots. The Backup & Recovery section of the MongoDB manual has tutorials covering procedures for different deployment types.
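As a sketch, using the host and credentials from the question (and assuming port 27017 is open between the servers):
# On NEW: dump the database from OLD over the network
mongodump --host 44.55.66.77 --port 27017 -u admin -p password --db OldDb --out /tmp/olddb-dump
# Restore the dump into the local server under the new database name
mongorestore --db NewDb /tmp/olddb-dump/OldDb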
I'm using MongoDB in a pretty simple setup and need a consistent backup strategy. I found out the hard way that wrapping a mongodump in a lock/unlock is a bad idea. Then I read that the --oplog option should be able to provide consistency without lock/unlock. However, when I tried that, it said that I could only use the --oplog option on a "full dump." I've poked around the docs and lots of articles but it still seems unclear on how to dump a mongo database from a single point in time.
For now I'm just going with a normal dump but I'm assuming that if there are writes during the dump it would make the backup not from a single point in time, correct?
mongodump -h $MONGO_HOST:$MONGO_PORT -d $MONGO_DATABASE -o ./${EXPORT_FILE} -u backup -p password --authenticationDatabase admin
In production environment, MongoDB is typically deployed as replica set(s) to ensure redundancy and high availability. There are a few options available for point in time backup if you are running a standalone mongod instance.
One option, as you have mentioned, is to do a mongodump with the --oplog option. However, this option is only available if you are running a replica set. You can easily convert a standalone mongod instance to a single-node replica set without adding any new members. Please check the following document for details.
http://docs.mongodb.org/manual/tutorial/convert-standalone-to-replica-set/
This way, if there are writes while mongodump is running, they will be part of your backup. Please see Point in Time Operation Using Oplogs section from the following link.
http://docs.mongodb.org/manual/tutorial/backup-databases-with-binary-database-dumps/#point-in-time-operation-using-oplogs
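A minimal sketch adapting the command from the question (--oplog requires a full-instance dump of a replica set member, so the -d flag is dropped):
# --oplog captures writes that happen while the dump is running
mongodump -h $MONGO_HOST:$MONGO_PORT -u backup -p password --authenticationDatabase admin --oplog -o ./${EXPORT_FILE}
# --oplogReplay applies those captured writes for a consistent point in time
mongorestore --oplogReplay ./${EXPORT_FILE}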
Be aware that using mongodump and mongorestore to back up and restore MongoDB can be slow.
Filesystem snapshots are another option. The following link details two snapshot options for performing a hot backup of MongoDB.
http://docs.mongodb.org/manual/tutorial/backup-databases-with-filesystem-snapshots/
You can also look into MongoDB backup service.
http://www.10gen.com/products/mongodb-backup-service
In addition, mongodump with the --oplog option does not work with a single db/collection at this moment. There are plans to implement the feature; you can follow the ticket and vote for it under the More Actions button.
https://jira.mongodb.org/browse/SERVER-4273
I'm trying to copy all the data from my localhost default Meteor Mongo database to the production server, to use it in "app.meteor.com".
I tried to use mongorestore using the information provided by "meteor mongo --url app.meteor.com", but it does not modify any document.
Moreover, when I connect to the mongo database of the server, I can only read (find) documents. When I use update or insert functions, it says "not master".
Run
~/meteor/meteor mongo -U yoapp
You will get something like this
mongodb://client:387shff-fe52-07d4-69a4-ba321f3665fe7#c0.meteor.m0.mongolayer.com:27017/yoapp_meteor_com
Take the values and put them into mongorestore like this:
mongorestore -u client -p 387shff-fe52-07d4-69a4-ba321f3665fe7 -h c0.meteor.m0.mongolayer.com:27017 --db yoapp_meteor_com /home/user/dump/yoapp
I just dumped a prod app to my local dev machine. Testing some new changes. Pushed code to a staging install on meteor.com and then used mongorestore to populate the staging database.
Moreover, when I connect to the mongo database of the server, I can only read (find) documents. When I use update or insert functions, it says "not master".
It's probably because the server you are connecting to is not a MASTER but a SLAVE in a replica set. Slaves are read-only, and all writes need to be directed to the master. You can get the master's hostname:port by querying rs.conf() and looking at the entries in members. See http://docs.mongodb.org/manual/reference/replica-configuration/
Get the master and then try mongoimport/mongorestore on it.
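As a quick sketch, you can find the master from the mongo shell on any member:
db.isMaster().primary   // returns "host:port" of the current primary (master)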
You should also tail the mongod logs on your production server to see if there are any errors on import (provided you have access).