MongoDB Error: "not authorized for query on admin.system.users"

How do I set up authentication on MongoDB with the following configuration?
2 mongod instances, sharded collections.
1 mongos instance on another server.
1 mongod as a config server.
Whenever I turn on auth on the mongod instances, I can no longer log in on any server; the users are created, but I still can't log in. The following error appears when trying to log in on the mongos instance:
$err: "not authorized for query on admin.system.users"

If you have a problem like I had, it is likely that you could not query anything (with similar messages saying you are not authorized).
By accident I noticed that if I run two different versions of the mongo daemon (Windows 7 / Enterprise), the second daemon starts without caring that another mongod is already listening, so it is unclear which daemon takes requests.
When I stopped all the daemons and started just one, everything started working (for me).
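A quick way to check for a duplicate daemon on Windows is to see what is actually bound to the MongoDB port; a sketch, where port 27017 is an assumption and the PID value is a placeholder:

```shell
REM List every process bound to the default MongoDB port
netstat -ano | findstr :27017

REM Map the PID from the last column back to a process name
tasklist /FI "PID eq 1234"
```

If more than one mongod.exe shows up, stop them all and start a single instance.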

Related

Mongo DB service getting stopped intermittently

We have installed MongoDB (v4.4.8) on our Windows Server 2016.
But the MongoDB service stops very frequently; we then have to start it again, it runs for a few minutes (anywhere from under a minute to about 10 minutes), and then stops again. Sometimes the service stops immediately.
We are able to access the DB and create collections while it is running.
The MongoDB logs reference a folder that does not exist on our server:
{"t":{"$date":"2021-09-14T16:55:22.194+02:00"},"s":"I", "c":"CONTROL", "id":31445, "ctx":"ftdc","msg":" Frame: {frame}","attr":{"frame":{"a":"7FF6F29F14EC","module":"mongod.exe","file":"C:/Program Files (x86)/Microsoft Visual Studio/2019/Professional/VC/Tools/MSVC/14.26.28801/include/thread","line":44,"s":"std::thread::_Invokestd::tuple<<lambda_726633d7e71a0bdccc2c30a401264d8c >,0>","s+":"2C"}}}
The account under which we run the MongoDB service has been added as an Administrator on the server.
Can someone help me with this?
Security tools were blocking MongoDB's files, which is why the service kept stopping intermittently: the files were not accessible to the MongoDB service. In the end we had to exclude the MongoDB folders from security scanning to make it work, at least temporarily.
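If the security tool in question is Windows Defender, the exclusions can be added from an elevated PowerShell prompt. A sketch; the paths are assumptions and must be adjusted to the actual install locations:

```powershell
# Exclude the MongoDB program and data folders from real-time scanning
Add-MpPreference -ExclusionPath "C:\Program Files\MongoDB"
Add-MpPreference -ExclusionPath "C:\data\db"

# Optionally exclude the service process itself
Add-MpPreference -ExclusionProcess "mongod.exe"
```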

MongoDB: Too many connections created from mongos to shards while building index

We've set up a sharded cluster with a replica set for each shard in our production environment. Last night we encountered an issue while connecting to the cluster and issuing an index build through the mongo shell, connected to the mongos instance rather than directly to a specific shard.
The issue: once the index build starts, the number of connections from mongos to that shard increases rapidly, and 'too many connections' errors show up in the primary shard's log file very soon.
(Primary shard's log summary, originally attached as images: at the very beginning of the index build, and then, very soon afterwards, the connection count reaching 10,000 with "connection limit exceeded" errors.)
From the three mongos' logs, all the connections are initiated by mongos. We googled and found a related issue, https://jira.mongodb.org/browse/SERVER-28822, but no trigger conditions are listed there. At the same time, I tried to reproduce the problem in our test environment, but it did not occur again. So, please help.
(The mongos configuration and the primary shard's configuration were attached as images.)
Found the answer.
The index creation issued by the mongorestore command was a foreground build, not a background one. I had misunderstood how mongorestore works and had not checked the metadata file for the collection's schema.
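For MongoDB versions before 4.2 (which replaced foreground/background index builds with a single hybrid build), the non-blocking variant has to be requested explicitly. A sketch, with a made-up collection and field name:

```javascript
// Build the index in the background so it does not block
// other operations on the shard while it runs
db.orders.createIndex({ customerId: 1 }, { background: true })
```

Alternatively, mongorestore can be run with --noIndexRestore so that no indexes are built during the restore and they can be created separately afterwards.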

Where is the mongos config database string being stored?

I made a mistake in my mongo sharding setup: I had an error in my config database string. I tried to clean this up by deleting all the data on the config database servers and restarting all the mongod services. However, even after restarting mongos I still get an error like this.
When I run:
sh.status()
I get:
mongos specified a different config database string : stored : <old string here>
Where is this string actually being stored? I tried looking for it in the config databases themselves and also on the members of the shard, but I can't seem to find it.
As of MongoDB 2.4, the --configdb string specified for the mongos servers is also cached in memory on the shard mongod servers. For mongos servers to join a sharded cluster, they must have an identical config string (including the order of hostnames).
There is a tutorial in the MongoDB manual to Migrate Config Servers with Different Hostnames which covers the full process, including migrating a config server to a new host (which isn't applicable in your case).
If you are still seeing the "different config database string" error after restarting everything, it's likely that you had at least one rogue mongod or mongos running during the process.
To resolve this error:
shut down all the mongod processes (for the shards)
shut down the mongos processes
restart the mongod processes (for the shards)
restart the mongos processes with the correct --configdb string
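The last two steps might look roughly like this, assuming three config servers on placeholder hostnames and a placeholder data path:

```shell
# 3. Restart each shard mongod
mongod --dbpath /data/shard1

# 4. Restart every mongos with an identical --configdb string:
#    same hostnames, same order, on every mongos instance
mongos --configdb cfg1.example.net:27019,cfg2.example.net:27019,cfg3.example.net:27019
```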

User field missing in system.profile collection when connecting with Mongos

We have a MongoDB cluster with clients connecting to it through a mongos instance. The individual mongod instances in the cluster are all running with --auth, and the mongos uses a --keyFile when communicating with them. We are profiling slow queries but are not getting the user names on queries that go through mongos.
To make it clearer:
If I connect directly to one of the mongod instances, authenticate, and run a query, then I can look in the system.profile collection afterwards, and the user field is populated with my username.
If I connect through mongos, authenticate, and run a query, then the system.profile collection contains profiling info about the query, but the user field is blank.
Authentication is required; I can't run a query through mongos without authenticating first. The user name just doesn't seem to be included in the profiling info, and we'd really like to be able to see it.
Any ideas? Any alterations I can make to our configuration?
Just to actually add an answer:
As Ren stated in his comment, this is caused by a bug, for which he filed a ticket.
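For reference, profiling is enabled per database, and the recorded entries can be inspected from the shell; the 100 ms threshold below is just an example:

```javascript
// Record operations slower than 100 ms in the system.profile collection
db.setProfilingLevel(1, 100)

// Inspect the most recent entries; the user field is the one
// that stays blank when the query arrived via mongos
db.system.profile.find().sort({ ts: -1 }).limit(5).pretty()
```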

Mongos authentication

We have 9 mongo nodes in our environment with:
1 mongos
3 config servers (mongod --configsvr)
9 mongod servers (shards, i.e. members of sharded replica sets)
and we are trying to implement authentication on them.
I have done this in the past with a single server and it was really easy:
just add the admin user to the admin database
add a user on each database
I had to restart mongod with the --auth option, but here that doesn't seem to work.
I've added the admin account to our mongos and for our sharded databases; I tried to authenticate as the user I had just created, but it didn't work.
I've tried creating an admin user on each database, and the other user accounts that we need, but it still didn't work.
I also tried making sure all of our mongo servers were running with the --keyFile option specified either on the command-line or in their /etc/mongodb.conf files, but that didn't seem to help.
When I try to authenticate as a given user, like so:
db.auth("user","passwd")
it fails and returns 0 (i.e. false, not non-zero).
I seriously need all the help I can get, so please at least leave some suggestions on things I could try. I can't overstress this: any help is more than welcome, since I don't seem to be getting anywhere just from following the official docs on managing and administering mongo sharded clusters.
In a sharded cluster you should use --keyFile to allow all the members of the cluster to authenticate to each other. When you use this option, --auth is implied. Since there have been several version changes since you asked this question, the roles assigned to users are more granular now; you would need roles such as 'clusterAdmin', 'userAdmin', 'dbAdmin', etc.
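A minimal keyfile setup looks roughly like this; the file paths, data path, and config server hostnames are placeholders, and the same key file must be deployed to every mongod and mongos in the cluster:

```shell
# Generate a shared key and lock down its permissions
openssl rand -base64 741 > /etc/mongodb-keyfile
chmod 600 /etc/mongodb-keyfile

# Start every cluster member with the same key (--auth is implied)
mongod --keyFile /etc/mongodb-keyfile --dbpath /data/shard1
mongos --keyFile /etc/mongodb-keyfile --configdb cfg1:27019,cfg2:27019,cfg3:27019
```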
This page has more details about how to configure security in MongoDB for a sharded cluster.