I have a problem with a shared MongoDB cluster: I am trying to get data via the Node.js driver for MongoDB. A few days ago it worked just fine, but now every getMore command I send to the cluster is very, very slow. So I thought: maybe I just have to turn it off and on again.
So I tried to connect to the cluster with the mongo shell. Everything works fine and my user has the atlasAdmin role (which can be seen via db.getUser("admin")), but when I try to execute commands like db.shutdownServer() or show users, the server tells me that I'm not authorized. The command db.auth("admin", ...pw...) even returns 1.
After some research, I found out that I have to shut down the server to have a chance of fixing this problem. But without the permission, how should I do it? Is there any other way to do this, like a button in the Atlas web app or something?
Atlas is a hosted service, so the privileges are different from those on a bare-metal MongoDB server. From MongoDB Database User Privileges, this is the list of privileges granted by atlasAdmin:
readWriteAnyDatabase
readAnyDatabase
dbAdminAnyDatabase
clusterMonitor
cleanupOrphaned
enableSharding
flushRouterConfig
moveChunk
splitChunk
viewUser
The shutdown privilege is part of the hostManager role, which is not included in the list above.
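If you want to double-check what the connected user is actually allowed to do, one way (sketched here with a hypothetical Atlas hostname and credentials) is the connectionStatus command with showPrivileges, which lists the roles and the exact privileges they resolve to:
mongo "mongodb+srv://cluster0.example.mongodb.net/admin" --username admin --password 'YOUR_PASSWORD' \
  --eval 'printjson(db.runCommand({ connectionStatus: 1, showPrivileges: true }))'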
Depending on your type of Atlas deployment, here are the lists of restricted commands/privileges:
Unsupported Commands in M0/M2/M5 Clusters
Unsupported Commands in M10+ Clusters
If you need to "turn off and on" your deployment, you might be able to use the Test Failover button if your type of deployment supports it. That button steps down the primary node and elects a new primary, which in most cases is almost equivalent to "turning it off and on again".
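To confirm that the failover actually promoted a different member, a rough check from the mongo shell (again with a hypothetical Atlas hostname and credentials) is to compare the reported primary before and after pressing the button:
mongo "mongodb+srv://cluster0.example.mongodb.net/admin" --username admin --password 'YOUR_PASSWORD' \
  --eval 'print("current primary: " + db.isMaster().primary)'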
I am using HashiCorp Vault to generate dynamic creds in multiple database clusters. We have one database cluster that is somewhat ephemeral, so on occasion it will be refreshed from another database cluster. This cluster will be connected to via Vault dynamic creds just like the other clusters.
I have a process to clean up the database users brought over by the backup from the source system when this cluster is refreshed, but I don't know how I should handle the Vault cleanup. The database config will be the same (same host/user), but all the database user accounts recently created by Vault will be gone after the refresh, so I don't know what I need to do to reset or clean up Vault for that database. The database system I'm using (Redshift) doesn't seem to have DROP USER ... IF EXISTS type syntax, otherwise I would simply use that in the dynamic role's revocation_statements and let the leases cycle out naturally that way.
So my main question is: how do I reset or delete all the dynamic creds that were created in Vault for a specific database cluster if that cluster is refreshed or no longer exists?
I figured out the answer to this and wanted to share it here in case anyone else encounters the same problem.
The "lease revoke" documentation explains that you can use the -prefix switch to revoke leases based on a partial or full prefix match.
Using this information you can run a command similar to the following in order to force revoke existing leases for a specific role:
vault lease revoke -force -prefix database/creds/ROLE_NAME
Using the -force switch removes the lease even if the revocation_statements fail to run (as is the case when the database user no longer exists).
As an aside, the following command can be used to list leases and is useful to check before and after that all the leases are, in fact, revoked:
vault list sys/leases/lookup/database/creds/ROLE_NAME
This solves my problem of "how do I remove leases for orphaned Vault dynamic credentials" in cases where the target database is refreshed from a backup, which is exactly the case I am using this for.
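For reference, the whole cleanup for one role is just those two commands run back to back (database/creds/ROLE_NAME is a placeholder; adjust the mount path and role name to your setup):
# list the leases that currently exist for the role
vault list sys/leases/lookup/database/creds/ROLE_NAME
# force-revoke everything under that prefix, even if the revocation_statements fail
vault lease revoke -force -prefix database/creds/ROLE_NAME
# list again to confirm nothing is left
vault list sys/leases/lookup/database/creds/ROLE_NAME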
We are trying to replicate from AWS RDS pg11 (pglogical 2.2.1) to pg12.
AWS RDS pg12 only has pglogical 2.3.0, which is not compatible with 2.2.1, and there is no way to downgrade (already tried). The replication starts and creates the schemas in the target, but then stops due to some errors (no need to cover them here).
As a workaround we want to replicate to an EC2 instance with pg12 and pglogical 2.3.1 (which is compatible with 2.2.1 and should work well).
Both users are set up the same way in both databases, and the nodes are OK. The replication fails with
ERROR: only rds_superusers can query or manipulate replication origins.
I have no idea how to debug this issue.
As already mentioned by gsteiner: the user was not explicitly granted the rds_superuser role. Even though I was using a role that had initially been assigned by the AWS engine, it looks like it "dropped out" of rds_superuser at some point and I had to reassign it.
When checking the roles, you don't see whether you belong to rds_superuser or not. So if something like this happens, you might grant rds_superuser (again) just to be sure that this part is resolved.
The best way to ensure that this works as intended is to create a new role that is a member of rds_superuser right away.
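As a rough sketch of both options (the endpoint, role names and password below are made up; run this as the RDS master user on the instance that raises the error):
psql -h my-rds-instance.example.us-east-1.rds.amazonaws.com -U master -d postgres <<'EOF'
-- option 1: re-grant rds_superuser to the existing replication role
GRANT rds_superuser TO pglogical_user;
-- option 2: create a fresh role that is a member of rds_superuser from the start
CREATE ROLE pglogical_user2 WITH LOGIN PASSWORD 'change-me' IN ROLE rds_superuser;
-- confirm which roles are members of rds_superuser
SELECT r.rolname
FROM pg_roles r
JOIN pg_auth_members m ON m.member = r.oid
JOIN pg_roles g ON g.oid = m.roleid
WHERE g.rolname = 'rds_superuser';
EOF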
I am trying to make a PoC for MongoDB to DocDB migration with DMS.
I've set up a MongoDB instance with some dummy data and an empty DocDB. Source and Target endpoints are also set in DMS and both of them are connecting successfully to my databases.
When I create a migration task in DMS everything seems to be working fine. All existing data is successfully replicated from the MongoDB instance to DocDB and the migration task state is at "Load complete, replication ongoing".
At this point I tried creating new entries in existing collections, as well as creating new empty collections in MongoDB, but nothing happens in DocDB. If I understand correctly, the replication should be real time and anything I create should be replicated instantly?
Also, there is no indication of errors or warnings whatsoever. I don't suppose it's a connectivity issue to the databases, since the initial data is replicated fine.
Also the users I am using for the migration have admin privileges in both databases.
Does anyone have any suggestions?
@PPetkov - could you check the following?
1. Check that the right privileges were assigned to the user in the MongoDB source endpoint, according to https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html.
2. Check that the MongoDB instance is configured as a replica set so that changes can be captured (see the sketch after this list).
3. Once done, try to search for "]E:" or "]W:" in the CloudWatch logs to understand if there are any noticeable failures/warnings.
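On point 2: DMS captures ongoing changes from the replica set oplog, so the source has to run as a replica set rather than a standalone mongod. A minimal sketch of converting a standalone instance into a single-node replica set (the rs0 name, port and paths are assumptions, adjust to your setup):
# start (or restart) mongod with a replica set name so it keeps an oplog
mongod --replSet rs0 --port 27017 --dbpath /data/db --fork --logpath /var/log/mongod.log
# initiate the replica set once and check that the node promotes itself to PRIMARY
mongo --host localhost --port 27017 <<'EOF'
rs.initiate();
sleep(5000);  // give the node a moment to become PRIMARY
rs.status().members.forEach(function (m) { print(m.name + " -> " + m.stateStr); });
EOF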
Replica set, Mongo version 3.2.4. I use a bash script to update mongo permissions in some situations, something like:
mongo <<EOF
db["grantRolesToUser"]("someone", ["readWriteAnyDatabase"]);
print(JSON.stringify(db.getUsers()));
EOF
Basically, this adds the readWriteAnyDatabase role to "someone".
It works, and the print shows the user with the new role.
However, 2-3 seconds later, it's gone!
Any thoughts on what could be causing this?
The issue is related to the MMS/Ops Manager automatic sync of users across the replica set.
From the Ops Manager manual:
If you want Ops Manager to ensure that all deployments in a group have the same database users, use only the Ops Manager interface to manage users.
If you want certain deployments in a group to possess users not set at the group level, you can add them through direct connection to the MongoDB instances.
So you can either manage users in Ops Manager or from the mongo shell itself, but not both.
Hello all. For my startup I am using Google Cloud Platform. I am using App Engine with Node.js and that part is working fine, but now I need a database. Since I am using MongoDB, I found this for MongoDB: https://console.cloud.google.com/launcher/details/click-to-deploy-images/mongodb?q=mongo. When I launched it, it created three instances in my Compute Engine, but now I don't know which instance is primary and which are secondary. Also, I read that the primary instance should be used for writing data and the secondaries for reading. So when I query my database, should I provide the secondary instance URL, and for updating/inserting data into my MongoDB database should I provide the primary instance URL? Or which URL should I use for CRUD operations on my MongoDB database? Also, after launching this, do I have to make any changes in any conf file or in any other file manually, or is that already done for me? And do I have to make an instance group of all three instances or not?
If any of you think I have not done any research on this, or that it is not a valid Stack Overflow question, then I am sorry; Google Cloud Platform is very new, which is why there is not much documentation on it, and this is also my first time deploying my code on servers, which is why I am a complete noob in this field. Thanks anyway, please help me out of here, guys.
but now I don't know which instance is primary and which are secondary
Generally, Cloud Launcher names the primary with the suffix -1 (dash one). For example, by default it would create the mongodb-1-server-1 instance as the primary.
You can also discover which one is the primary by running rs.status() on any of the instances via the mongo shell. For example:
mongo --host <External instance IP> --port <Port Number>
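Once connected, a small snippet like this (just a sketch) shows which member is currently the primary:
mongo --host <External instance IP> --port <Port Number> <<'EOF'
// pick out the member that is currently in the PRIMARY state
rs.status().members
  .filter(function (m) { return m.stateStr === "PRIMARY"; })
  .forEach(function (m) { print("primary is: " + m.name); });
EOF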
You can get the list of external IPs of the instances using gcloud. For example:
gcloud compute instances list
By default you won't be able to connect straight away; you need to create a firewall rule for the Compute Engine instances to open the port(s). For example:
gcloud compute firewall-rules create default-allow-mongo --allow tcp:<PORT NUMBER> --source-ranges 0.0.0.0/0 --target-tags mongodb --description "Allow mongodb access to all IPs"
Insert a sensible port number and please avoid using the default value. You may also want to limit the source IP ranges, e.g. to your office IP. See also Cloud Platform: Networking.
I read that the primary instance should be used for writing data and the secondaries for reading
Generally, replication is there to provide redundancy and high availability. The primary instance is used for both reads and writes, and the secondaries act as replicas to provide a level of fault tolerance, i.e. against the loss of the primary server.
See also:
MongoDB Replication.
Replication Read Preference.
MongoDB Sharding.
now when I query my database should I provide the secondary instance URL, and for updating/inserting data should I provide the primary instance URL, or which URL should I use for CRUD operations on my MongoDB database
You can provide both in the MongoDB connection URI and the driver will figure out where to read and write (there is a short connection sketch after the links below). For example, in your Node.js application you could have:
mongodb://<instance 1>:<port 1>,<instance 2>:<port 2>/<database name>?replicaSet=<replica set name>
The default replica set name set by Cloud Launcher is rs0. Also see:
Node Driver: URI.
Node Driver: Read Preference.
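As a sketch of how that URI is used (the hostnames, database and collection names below are placeholders, and it assumes a recent version of the official mongodb driver installed via npm install mongodb):
node <<'EOF'
// minimal connection sketch; hostnames, database and collection names are placeholders
const { MongoClient } = require('mongodb');

const uri = 'mongodb://mongodb-1-server-1:27017,mongodb-1-server-2:27017/mydb?replicaSet=rs0';

async function main() {
  const client = new MongoClient(uri);
  await client.connect();                               // the driver discovers the current primary itself
  const coll = client.db('mydb').collection('demo');
  await coll.insertOne({ hello: 'world' });             // writes always go to the primary
  console.log(await coll.findOne({ hello: 'world' }));  // reads follow the read preference (primary by default)
  await client.close();
}

main().catch(console.error);
EOF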
also after launching this do I have to make any changes in any conf file manually, or is that already done for me? Also do I have to make an instance group of all three instances or not?
This depends on your application's use case, but if you are launching through click-to-deploy, the MongoDB config should all be taken care of.
For a complete guide, please follow the tutorial Deploy MongoDB with Node.js. I would also recommend checking out the MongoDB security checklist.
Hope that helps.