MongoDB - grantRolesToUser is rolled back?

Replica set, MongoDB version 3.2.4. I use a bash script to update MongoDB permissions in some situations, something like:
mongo <<EOF
db["grantRolesToUser"]("someone", ["readWriteAnyDatabase"]);
print(JSON.stringify(db.getUsers()));
EOF
Basically, adding readWriteAnyDatabase role to "someone".
It works, and the print shows the user with the new role.
However, 2-3 seconds later, it's gone!
Any thoughts on what could be causing this?

The issue is related to MMS/Ops Manager automatically syncing users across the replica set.
From the Ops Manager manual:
If you want Ops Manager to ensure that all deployments in a group have the same database users, use only the Ops Manager interface to manage users.
If you want certain deployments in a group to possess users not set at the group level, you can add them through direct connection to the MongoDB instances.
So you can either manage users in Ops Manager or from the mongo shell itself, but not both.
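For completeness, a small sketch of how the behaviour can be observed from the mongo shell (the user and role names come from the question; running against the admin database and the 5-second wait are assumptions):
var admin = db.getSiblingDB("admin");
admin.grantRolesToUser("someone", [ { role: "readWriteAnyDatabase", db: "admin" } ]);
printjson(admin.getUser("someone").roles);   // the new role shows up immediately
sleep(5000);                                 // wait for the automation sync window
printjson(admin.getUser("someone").roles);   // gone again if Ops Manager reverted it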

Related

MongoDB replication when access control is enabled

I have 2 servers where the MongoDB database is installed.
Both servers have enabled database access control by creating users.
Now I need to set up replication for these servers: one as primary and the other as secondary.
I followed https://docs.mongodb.com/manual/tutorial/deploy-replica-set/
But the steps at the above URL are for when access control is disabled.
I need the MongoDB replication steps for when access control is already enabled.
Actually nothing changes. You only have to provide a username/password when you connect to the database; the rest is identical.
However, you may follow Deploy Replica Set With Keyfile Authentication; just follow items 1 through 5 and you should be done.
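Not part of the original answer, but a rough sketch of the initiation step, assuming both mongods are already started with the same --keyFile and --replSet and you connect with your existing admin credentials (hostnames, port, and the set name rs0 are placeholders):
// e.g. connect with: mongo --host server1.example.com -u admin -p --authenticationDatabase admin
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "server1.example.com:27017" },   // intended primary
    { _id: 1, host: "server2.example.com:27017" }    // intended secondary
  ]
});
rs.status();   // one member should report PRIMARY and the other SECONDARY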
Anyway, it is not clear to me what you are trying to do. You write that you have 2 existing MongoDB servers. Are they different?
Do you want to put these two different databases into one new replica set? In a replica set the SECONDARY is an exact copy of the PRIMARY, so you cannot push data from two different sources into the PRIMARY and the SECONDARY.

AWS DMS "Load complete, replication ongoing" not working MongoDB to DocDB

I am trying to make a PoC for MongoDB to DocDB migration with DMS.
I've set up a MongoDB instance with some dummy data and an empty DocDB. Source and Target endpoints are also set in DMS and both of them are connecting successfully to my databases.
When I create a migration task in DMS everything seems to be working fine. All existing data is successfully replicated from the MongoDB instance to DocDB and the migration task state is at "Load complete, replication ongoing".
At this point I tried creating new entries in existing collections as well as creating new empty collections in MongoDB, but nothing happens in DocDB. If I understand correctly, the replication should be real time and anything I create should be replicated instantly?
Also, there is no indication of errors or warnings whatsoever... I don't suppose it's a connectivity issue to the databases, since the initial data is being replicated.
Also the users I am using for the migration have admin privileges in both databases.
Does anyone have any suggestions?
#PPetkov - could you check the following?
1. Check whether the right privileges were assigned to the user in the MongoDB endpoint, according to https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html.
2. Check whether a replica set was appropriately configured in the MongoDB instance so that changes can be captured (see the sketch after this list).
3. Once done, try to search for "]E:" or "]W:" in the CloudWatch logs to understand if there are any noticeable failures/warnings.
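Not part of the original answer, but to illustrate point 2: DMS change data capture relies on the MongoDB oplog, which is only kept when the instance runs as a replica set; even a single-node replica set is enough. A minimal sketch, assuming mongod was restarted with --replSet rs0 (hostname, port, and the set name are placeholders):
// Run once in the mongo shell of the source instance:
rs.initiate({ _id: "rs0", members: [ { _id: 0, host: "mongodb-source.example.com:27017" } ] });
rs.status().ok;                              // should print 1 once the set is initiated
db.getSiblingDB("local").oplog.rs.count();   // the oplog now exists and records changes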

MongoDB - not authorized in shared cluster despite atlasAdmin role

I have a problem with a shared MongoDB cluster: I try to get data via the Node.js driver for MongoDB. A few days ago it worked just fine, but now every getMore command I send to the cluster is very, very slow. So I thought: maybe I just have to turn it off and on again.
So I tried to connect to the cluster with the mongo shell. Everything works fine and my user has the atlasAdmin role (which can be seen via db.getUser("admin")), but when I try to execute commands like db.shutdownServer() or show users, the server tells me that I'm not authorized. Even the command db.auth("admin", ...pw...) returns 1.
After some research, I found out that I have to shut down the server to have a chance of fixing this problem. But without permission, how should I do it? Is there any other way to do this, like a button in the Atlas web app or something?
Atlas is a hosted service, so the privileges are different from those on a bare metal MongoDB server. From MongoDB Database User Privileges, this is the list of privileges granted by atlasAdmin:
readWriteAnyDatabase
readAnyDatabase
dbAdminAnyDatabase
clusterMonitor
cleanupOrphaned
enableSharding
flushRouterConfig
moveChunk
splitChunk
viewUser
The shutdown privilege is part of the hostManager role, which is not included in the list above.
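As a side check (not part of the original answer), you can ask the server which roles and privileges your current session actually has; under atlasAdmin the shutdown action will not be listed. A minimal sketch:
// Run against the cluster you are connected to:
db.getSiblingDB("admin").runCommand({ connectionStatus: 1, showPrivileges: true });
// Inspect authInfo.authenticatedUserRoles and authInfo.authenticatedUserPrivileges
// in the output to see exactly which actions your user is granted.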
Depending on your type of Atlas deployment, here are the list of restricted commands/privileges:
Unsupported Commands in M0/M2/M5 Clusters
Unsupported Commands in M10+ Clusters
If you need to "turn on and off" your deployment, you might be able to use the Test Failover button if your type of deployment supports it. That button will step down the primary node and elect a new primary, which for most cases is almost equivalent to "turn off and on again".

deploying mongodb on google cloud platform?

Hello all. For my startup I am using Google Cloud Platform. I am running App Engine with Node.js and that part is working fine, but now I need a database. Since I am using MongoDB, I found this Cloud Launcher deployment: https://console.cloud.google.com/launcher/details/click-to-deploy-images/mongodb?q=mongo. When I launched it, it created three instances in Compute Engine, but now I don't know which is the primary instance and which is secondary. I also read that the primary instance should be used for writing data and the secondary for reading. So when I query my database, should I provide the secondary instance URL, and for updating/inserting data should I provide the primary instance URL? Otherwise, which URL should I use for CRUD operations on my MongoDB database? Also, after launching this, do I have to make any changes in any conf file manually, or is that already done for me? And do I have to make instance groups of all three instances or not?
Please, if any of you think I have not done any research on this or that it's not a valid Stack Overflow question, I am sorry; Google Cloud Platform is very new to me and there is not much documentation on it, and this is my first time deploying my code on servers, so I am a complete noob in this field. Thanks anyway, please help me out here, guys.
but now I don't know which is the primary instance and which is secondary,
Generally the Cloud Launcher will name the primary with the suffix -1 (dash one). For example, by default it would create the instance mongodb-1-server-1 as the primary.
You can also discover which one is the primary by running rs.status() on any of the instances via the mongo shell. As an example:
mongo --host <External instance IP> --port <Port Number>
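Once connected, a small sketch (not from the Cloud Launcher docs) to print each member's role:
// Prints every replica set member together with its current state (PRIMARY/SECONDARY):
rs.status().members.forEach(function (m) {
  print(m.name + " -> " + m.stateStr);
});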
You can get the list of external IPs of the instances using gcloud. For example:
gcloud compute instances list
By default you won't be able to connect straight away; you need to create a firewall rule for the Compute Engine instances to open the port(s). For example:
gcloud compute firewall-rules create default-allow-mongo --allow tcp:<PORT NUMBER> --source-ranges 0.0.0.0/0 --target-tags mongodb --description "Allow mongodb access to all IPs"
Insert a sensible port number and avoid using the default value. You may also want to limit the source IP ranges, e.g. to your office IP. See also Cloud Platform: Networking.
I read that the primary instance should be used for writing data and the secondary for reading,
Generally, replication provides redundancy and high availability: the primary instance is used to read and write, while secondaries act as replicas to provide a level of fault tolerance, i.e. against the loss of the primary server.
See also:
MongoDB Replication.
Replication Read Preference.
MongoDB Sharding.
when I query my database should I provide the secondary instance URL, and for updating/inserting data in my MongoDB database should I provide the primary instance URL; otherwise, which URL should I use for CRUD operations on my MongoDB database?
You can provide both in the MongoDB URI and the driver will figure out where to read and write. For example, in your Node.js application you could have:
mongodb://<instance 1>:<port 1>,<instance 2>:<port 2>/<database name>?replicaSet=<replica set name>
The default replica set name set by Cloud Launcher is rs0. Also see:
Node Driver: URI.
Node Driver: Read Preference.
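Not part of the original answer, but a minimal sketch of such a connection with the Node.js driver (3.x-style API); the hostnames, port, database name and read preference are assumptions, and rs0 is the Cloud Launcher default set name:
const { MongoClient } = require("mongodb");

// Both members are listed; the driver discovers the topology and routes operations.
const uri =
  "mongodb://mongodb-1-server-1:27017,mongodb-1-server-2:27017/mydb" +
  "?replicaSet=rs0&readPreference=secondaryPreferred";

async function main() {
  const client = await MongoClient.connect(uri);
  const db = client.db("mydb");
  await db.collection("test").insertOne({ ping: 1 });             // writes always go to the primary
  console.log(await db.collection("test").findOne({ ping: 1 }));  // reads follow the readPreference
  await client.close();
}

main().catch(console.error);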
also after launching this, do I have to make any changes in any conf file manually, or is that already done for me? Also do I have to make instance groups of all three instances or not?
This depends on your application use case, but if you are launching through Click to Deploy, the MongoDB config should all be taken care of.
For a complete guide, please follow the tutorial Deploy MongoDB with Node.js. I would also recommend checking out the MongoDB security checklist.
Hope that helps.

Local MongoDB instance with index in remote server

One of our clients has a server running a MongoDB instance, and we have to build an analytical application using the data stored in their MongoDB database, which changes frequently.
The client's requirements are:
That we do not connect to their MongoDB instance directly or run another instance of MongoDB on their server, but instead somehow run our own MongoDB instance on a machine in our office, using their MongoDB database directory remotely with read-only access.
We've suggested deploying a REST application or getting a copy of their database dump, but they did not want that. They just want us to run our own MongoDB instance hooked up to their MongoDB data directory. Is this even possible?
I've been searching for a solution for the past two days and we have to submit a solution by Monday. I really need some help.
I think this is a normal request, because analytical queries could cause too much load on the production server. It is pretty normal to separate production and analytical databases.
The easiest option is to use MongoDB replication. Set up a MongoDB replica set with the production database instance as primary and the analytical database instance as secondary, and configure the analytical instance to never become primary.
If it is not possible to use replication (for example, the client doesn't want this, or the servers cannot connect directly to each other), there is another option. You can read the oplog from the remote database and apply the operations to your own database instance. This is exactly the low-level mechanism a replica set uses, but you can do it manually too. For example, MMS (MongoDB Monitoring Service) Backup reads the oplog for online backups of MongoDB.
Update: mongooplog could be the right tool to apply, in near real time, the replication oplog pulled from the remote server to your local server.
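For illustration only (not from the original answer), a rough sketch of what reading the remote oplog looks like from the mongo shell; the hostname is a placeholder and credentials are omitted. A real sync tool would tail the cursor, apply every entry to the local instance, and remember the last timestamp it processed:
// The oplog lives in the "local" database of the remote replica set member.
var remote = connect("client-server.example.com:27017/local");
remote.oplog.rs.find().sort({ $natural: -1 }).limit(5).forEach(function (op) {
  printjson(op);   // each document describes one insert/update/delete that could be replayed locally
});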
I don't think that running two databases that point to the same database files is possible, or even recommended.
You could use mongorestore to restore from their data files directly, but this will only work if their mongod instance is not running (because mongorestore will need to lock the directory).
Another solution will be to do file system snapshots and then restore to your local database.
The downside to this backup/restore solutions is that your data will not be synced all the time.
Probably the best solution will be to use replica sets with hidden members.
You can create a replica set with just two members:
Primary - this will be the client server.
Secondary - hidden, with votes and priority set to 0. This will be your local instance.
Their server will always be primary (because hidden members cannot become primaries). Clients cannot see hidden members so for all intents and purposes your server will be read only.
Another upside to this is that the MongoDB replication will do all the "heavy" work of syncing the data between servers and your instance will always have the latest data.
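A minimal sketch of that hidden-member configuration, run on the primary via the mongo shell (the member index is an assumption, and the replica set is assumed to exist already):
var cfg = rs.conf();
cfg.members[1].priority = 0;   // never eligible to become primary
cfg.members[1].hidden = true;  // invisible to clients, but still replicates all data
cfg.members[1].votes = 0;      // matches the "votes set to 0" suggestion above
rs.reconfig(cfg);
rs.conf().members[1];          // verify the new member configuration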