Executing some queries on mLab generates a "not authorized" error - mongodb

I would like to move my local implementation (a MongoDB GridFS content repository) to the cloud on Heroku using mLab. But I have a problem, because some of the functionality generates the error below:
"errmsg" : "not authorized on admin to execute command {
listDatabases: 1.0 }", "code" : 13
Does anyone have an idea how to fix it?
Thanks for any help
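On mLab's shared plans, database users are scoped to a single database and are not authorized to run server-wide commands such as listDatabases (which is what show dbs issues under the hood). A hedged sketch of the usual workaround, assuming a standard mLab connection string (the host, port, database name, and credentials below are placeholders):

```shell
# Connect directly to your own database rather than admin;
# host, port, db name, and credentials are illustrative placeholders.
mongo ds012345.mlab.com:12345/mydb -u my_user -p my_password

# Inside the shell, inspect the current database instead of running
# the server-wide listDatabases command:
#   > db.getCollectionNames()
#   > db.fs.files.find()    // GridFS file metadata lives here
```

The point is to avoid any command that enumerates databases across the deployment; everything scoped to the database named in the connection string should work with a database-level user.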

Related

Why is there a database named "test" when connecting to MongoDB using Studio 3T?

I'm learning MongoDB and trying to use Atlas. I created a remote cluster and tried to connect to it using both MongoDB Compass and Studio 3T. However, I noticed that after connecting with Studio 3T, an empty database named "test" appeared in the left panel, below the "admin" and "local" databases. Where did it come from? And how can I drop it? When I tried to drop this database, I got this error:
Mongo Server error (MongoCommandException): Command failed with error 8000 (AtlasError): 'user is not allowed to do action [dropDatabase] on [test.]' on server ac-bkmhuxm-shard-00-02.w2nutv2.mongodb.net:27017.
The full response is:
{
"ok" : 0.0,
"errmsg" : "user is not allowed to do action [dropDatabase] on [test.]",
"code" : 8000.0,
"codeName" : "AtlasError"
}
After changing the roles in Atlas, I can now delete the database. However it keeps appearing when I make a new connection to MongoDB. Why is that?
test is the default database the shell uses when you don't specify one in the connection string.
The local, admin and config databases are MongoDB's internal system databases; you should not touch them unless advised to by MongoDB support or for special admin tasks.
See also: 3 default databases in MongoDB
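As an illustration, whether the session lands in "test" depends on whether the connection string names a database (the cluster host below is a placeholder):

```shell
# No database in the URI: the shell session defaults to "test",
# which is why it keeps reappearing on every new connection.
mongosh "mongodb+srv://cluster0.example.mongodb.net/" --username my_user

# Database named in the URI: the session starts in "myAppDb" instead.
mongosh "mongodb+srv://cluster0.example.mongodb.net/myAppDb" --username my_user
```

Note that "test" only materializes on disk once something is written to it; an empty "test" shown by a GUI client is just the session's current database.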

Does DocumentDB support mongo shell?

I am using AWS DocumentDB v4.0.0. It works fine with my application connecting via the MongoDB driver. But when I try to use the mongo shell, it gives me an error on a basic query:
db.myCollection.find({})
Error: error: {
"ok" : 0,
"code" : 303,
"errmsg" : "Feature not supported: 'causal consistency'",
"operationTime" : Timestamp(1645150705, 1)
}
Does this mean DocumentDB is not compatible with mongo shell?
Your question is missing the actual connection part, but make sure you have the TLS CA bundle placed on the machine.
Your connection command should look something like this:
mongo --ssl --host docdb-2020-02-08-14-15-11.cluster.region.docdb.amazonaws.com:27017 --sslCAFile rds-combined-ca-bundle.pem --username demoUser --password
There is actually a really good example/explanation in the official AWS documentation.
Causal consistency isn't supported in DocumentDB. From your mongo shell you can run
db.getMongo().setCausalConsistency(false)
and then re-run your find operation.
https://docs.mongodb.com/manual/reference/method/Mongo.setCausalConsistency/

MongoDB on Google Cloud -- confirm content via show dbs?

I've installed a MongoDB on Google Cloud Compute Engine via a Bitnami script. I can see the vm instance in the Google Cloud dashboard. I can connect to the database using my deployed node.js app. My app works just fine.
What I can't figure out is how to independently verify the content in the Mongo database.
On the Compute Engine dashboard for the MongoDB VM, there is an SSH pulldown button at the top of the screen. Clicking this button opens a browser frame. The frame connects to the VM instance via HTTPS and confirms login info. I've seen this related Stack Overflow posting, and I've met all of its suggestions; the settings are confirmed in the Google Cloud VM instance interface. When I type mongo I can see the mongo shell. When I try show dbs I get back unexpected results:
show dbs
2017-02-19T05:51:45.161+0000 E QUERY [thread1] Error: listDatabases failed:{
"ok" : 0,
"errmsg" : "not authorized on admin to execute command { listDatabases: 1.0 }",
"code" : 13,
"codeName" : "Unauthorized"
} :
How can I do a simple show dbs and then show collections and finally db.foo.find() to confirm data content?
Ouch. So it turns out I missed something. When I created the instance, the system sent me a confirmation email. In the email were a couple of key links.
There are two very different methods to "connect" to the database.
The first is to open the mongo shell interface via an admin / login.
$ mongo admin --username root -p
MongoDB shell version v3.4.2
Enter password: (password entered here)
connecting to: mongodb:///opt/bitnami/mongodb/tmp/mongodb-27017.sock/admin
MongoDB server version: 3.4.2
This worked quite well, without having to transfer SSH keys at all. Reference link here. At this point I could:
> show dbs
admin 0.000GB
local 0.000GB
> use admin
switched to db admin
> show collections
books
system.users
system.version
> db.books.find()
{ "_id" : ObjectId("58a900452c972b0010def8a7"), "title" : "Mr. Tickle", "author" : "Roger Hargreaves", "publishedDate" : "1971", "description" : "" }
{ "_id" : ObjectId("58a900612c972b0010def8a8"), "title" : "Mr. Sneeze", "author" : "Roger Hargreaves", "publishedDate" : "1982", "description" : "" }
{ "_id" : ObjectId("58a93a192c972b0010def8a9"), "title" : "Mr. Happy", "author" : "Roger Hargreaves", "publishedDate" : "1971", "description" : "" }
>
The second method is to register SSH keys, via these instructions over at Bitnami.com
In this method, you have to first add your public SSH key via the Google Cloud interface to the instance.
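The general shape of that second method is roughly as follows (the key path and the instance's external IP are placeholders; the exact console steps are in the Bitnami instructions linked above):

```shell
# Generate a key pair locally (skip this if you already have one).
ssh-keygen -t rsa -f ~/.ssh/bitnami-mongo -C bitnami

# Add the contents of ~/.ssh/bitnami-mongo.pub to the instance via the
# Google Cloud console (Compute Engine > VM instance > Edit > SSH Keys),
# then connect with the matching private key:
ssh -i ~/.ssh/bitnami-mongo bitnami@<instance-external-ip>
```

Once the SSH session is open, the mongo shell login works the same way as in the first method.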

MongoDB: how do I authenticate when executing the `copydb` command

I have a replica set on a remote host which requires authentication in order to connect. The original (root) user was created in the admin database, which I have used to connect remotely. I'm building a sort of "backup" script which copies a db into the replica set; at a later time I should be able to copy a db from the remote location into other MongoDB instances.
So I wrote the script to copy a database by connecting TO the remote location, authenticating, and then running db.runCommand with copydb: 1. It works great, no problems here.
When I try to copy a db back onto my local machine, that's when things go wrong, mainly because I have to authenticate as part of the copydb command. I originally tried to use the same technique (db.runCommand), but since the nonce and key authentication is messy by itself, I first tried to solve the problem by writing the commands manually into mongo's shell using db.copyDatabase, which according to the documentation should handle this process for me.
This is the command:
db.copyDatabase('from_db', 'to_db', 'remote.host.example.com', 'my_user', 'my_password')
Which responds with:
{ "ok" : 0, "errmsg" : "Authentication failed.", "code" : 18 }
I tried switching roles (root, userAdmin, readWrite, ...) but nothing works. I tried creating another user inside the db I am trying to copy, but that didn't seem to do much other than change the response a little, into:
{
"ok" : 0,
"errmsg" : "unable to login { ok: 0.0, code: 18, errmsg: \"Authentication failed.\" }"
}
I searched everywhere, went over anything in the manual which seemed remotely relevant and I still can't figure it out.
How am I supposed to copy a db from a remote location which requires authentication?
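For reference, the "messy" nonce-and-key handshake the question mentions boils down to two MD5 digests. A sketch of the computation, assuming the legacy (pre-4.0) MONGODB-CR scheme that copydb used; the nonce and credentials below are illustrative placeholders:

```shell
# Illustrative values; in practice the nonce comes from first running
#   db.runCommand({ copydbgetnonce: 1, fromhost: "remote.host.example.com" })
USER_NAME="my_user"
PASSWORD="my_password"
NONCE="1234567890abcdef"

# MONGODB-CR password digest: md5(<user>:mongo:<password>)
pwd_hash=$(printf '%s' "${USER_NAME}:mongo:${PASSWORD}" | md5sum | awk '{print $1}')

# copydb key: md5(<nonce><user><password digest>)
key=$(printf '%s' "${NONCE}${USER_NAME}${pwd_hash}" | md5sum | awk '{print $1}')

# The key, together with username and nonce, is then passed to
# db.runCommand along with copydb: 1, fromhost, fromdb and todb.
echo "$key"
```

This is exactly what db.copyDatabase does under the hood, which is why the error changes depending on which database the remote user was created in: the key must be built from a user defined on the source database.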

Using cloneCollection in MongoDB: how to authenticate?

I'm trying to clone a remotely hosted collection to my local Mongo database. I tried opening up the mongo console in the local environment and issued:
db.runCommand({cloneCollection: "<dbname.colname>", from: "<remotehost:port>"})
It fails with
"errmsg" : "exception: nextSafe(): { $err: \"not authorized for query on <dbname>.system.namespaces\", code: 16550 }",
"code" : 13106,
How do I properly authorize with the remote server to clone the collection?
Unfortunately that's currently not possible. There is a Jira ticket open for this feature. As a workaround you could consider using mongodump --collection and mongorestore.
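The workaround looks roughly like this (host, database, collection names, and credentials are placeholders):

```shell
# Dump a single collection from the remote, authenticated host.
mongodump --host remotehost:27017 \
  --username my_user --password my_password --authenticationDatabase admin \
  --db mydb --collection mycol --out ./dump

# Restore the dumped data into the local instance.
mongorestore --db mydb ./dump/mydb
```

Unlike cloneCollection, mongodump authenticates against the remote host with ordinary credentials, so no special cross-host authorization is needed.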