Not authorized for command: addShard on database admin - mongodb

I'm building a MongoDB cluster using shards of replica sets. The first replica set is set up, three config servers are running (all on Linux servers), and a mongos instance is running that points to the three config servers. However, when I connect to the mongos instance on the application server (Windows Server 2012 Standard x64) via the mongo shell and issue the sh.addShard() command as per the docs, I get the following response:
> sh.addShard("rs1/xxx:xxx")
{
    "note" : "not authorized for command: addShard on database admin",
    "ok" : 0,
    "errmsg" : "unauthorized"
}
Does anyone know what I'm doing wrong? I'm running all Mongo instances using a keyfile for security. The keyfile is a Windows-compatible one as per these docs.

My results:
If your data nodes use keyfile-based authentication, all the mongod and mongos instances (data, config, etc.) need to use --keyFile as well and point to an exact copy of the same keyfile.
Secondly, make sure you run "use admin" after connecting to the config servers through mongos. If that doesn't get you there, then add an admin user at the mongos prompt, authenticate with those credentials, and try again.
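A minimal sketch of that flow at the mongos prompt (the user name, password, and host are placeholders; on 2.6+ shells the command is db.createUser, while 2.4-era shells used db.addUser instead):

```javascript
// Run against mongos; "admin"/"secret" are placeholder credentials.
use admin
db.createUser({
  user: "admin",
  pwd: "secret",
  roles: [ { role: "root", db: "admin" } ]
})
db.auth("admin", "secret")              // authenticate before sharding commands
sh.addShard("rs1/host1.example.net:27017")  // placeholder replica set member
```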

I've since solved this. It was because authentication was enabled by virtue of the keyfile and using the localhost connection wasn't enough to authenticate. After disabling keyfile usage across the cluster, creating an admin account and using that to connect, it worked.

In addition to bisharkha's answer, here is one more clue when using a keyfile:
after the use admin command, also make sure you have authenticated with:
db.auth("user", "passwd")

It can also happen when you specify the wrong name for your collection.

Related

authentication in MongoDB and Ubuntu

We installed MongoDB version 3.4 on Windows (development) and enabled authentication; after running the command mongod --auth, authentication worked as expected.
Now on the production server, Ubuntu 16 with MongoDB version 4.0, we made the changes below to the mongod.conf file and then restarted the mongod service with the command sudo service mongod start, but now we are not able to connect to our MongoDB Ubuntu server.
security:
  authorization: "enabled"
Where did we go wrong in implementing authentication for MongoDB on the Ubuntu server?
security:
  authorization: "enabled"
Two possible issues here: reading the docs, I'm not sure you need to quote the word enabled.
Moreover, YAML requires sub-sections of the config to be indented, so your config file has to look like:
security:
  authorization: enabled
But I cannot really test this, since I don't have a running local instance.
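For reference, a minimal security section with the two-space indentation YAML expects (the file path assumes a stock Ubuntu package install):

```yaml
# /etc/mongod.conf — security section only; restart mongod after editing
security:
  authorization: enabled
```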

What's the proper way of running a mongos client for a sharded cluster?

I have a mongodb cluster up and running. I want to setup a client (mongos) to connect to the config servers from ubuntu. Most instructions just say to run this command:
mongos --configdb cfg0.example.net:27019,cfg1.example.net:27019,cfg2.example.net:27019
Is this command running as a service? Will the process still be running when I exit the shell? What happens if the process goes down? What is the proper way of running this client as a service?
You would use --fork or an init script to make this keep running as a service after your terminal session shuts down.
If the mongos process goes down, then your application cannot connect to the sharded cluster; it will be unable to reach your DB at all. This is one reason (though not the only one) why you should have good redundancy in your mongos instances.
I tend to have one mongos per app server personally; however, it is all down to preference. Another option is to have a load-balanced set of mongos instances.
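As a sketch, the command from the question can be backgrounded with --fork (the hostnames match the question; the log path is a placeholder, and on modern systems a systemd or init unit is the more robust option):

```shell
# Fork mongos into the background so it survives the terminal session.
mongos --configdb cfg0.example.net:27019,cfg1.example.net:27019,cfg2.example.net:27019 \
       --fork --logpath /var/log/mongodb/mongos.log
```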

How to get MONGO_URL from command line Meteor Up deployment?

I am currently deploying to Digital Ocean using Meteor Up. If I don't specify a MONGO_URL in the mup.json, can I get the value from the command line while the website is running, i.e. I don't want to shutdown the site?
If I go to the app directory and run meteor mongo --url, I get the following error:
mongo: Meteor isn't running a local MongoDB server.
This command only works while Meteor is running your application
locally. Start your application first. (This error will also occur if
you asked Meteor to use a different MongoDB server with $MONGO_URL when
you ran your application.)
If you're trying to connect to the database of an app you deployed
with 'meteor deploy', specify your site's name with this command.
Even if I run the app from the app directory, it will only give the localhost MONGO_URL. I need the MONGO_URL for the deployed app.
I have also taken a look at a similar question as suggested by some of the answers. I disagree that it is "impossible" to get the MONGO_URL without some other program running on the server. It's not as if we are defying the laws of physics here, folks. Fundamentally, there should be a way to access it. Just because no one has yet figured it out doesn't mean it is impossible.
meteor mongo --url should return the URL.
Try opening another shell in the app directory and running that command.
Meteor Up packages your app in production mode with meteor build so that it runs via node rather than the meteor command line interface. Among other things, this means meteor foo won't work on the remote server (at least not by default). So what you're really looking for is a way to access mongo itself remotely.
I recently set up mongo on an AWS EC2 instance and listed some lessons learned here: https://stackoverflow.com/a/28846703/2669596. Some details of how you do it are going to be different on Digital Ocean, but these are the main things you have to take care of once mongo itself is installed:
Public IP/DNS Address: This is probably fine already since you can deploy to the server.
Port Security Rules: You need to make sure port 27017 is open for TCP access, at least from your IP address. MongoDB also has an http interface you can set up; if you want to use that you'll need to open 28017 as well.
/etc/mongod.conf (file location may differ depending on Linux flavor):
Uncomment port=27017 to make sure you have the default port (I don't think this is actually necessary, but it made me feel better and it's good to know where to change the default port...).
Comment out bind_ip=127.0.0.1 in order to listen to external interfaces (e.g. remote connections).
Uncomment httpinterface=true if you want to use the http interface.
You may have to restart the mongod host via sudo service mongod restart. That's a problem if you can't have downtime, but I don't know of a way around that if you change the config file.
Create User: You need to create an admin and/or user to access the database remotely.
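The /etc/mongod.conf edits described above would look roughly like this in the older INI-style config format (exact layout depends on your distribution and MongoDB version; newer versions use YAML):

```ini
# /etc/mongod.conf (pre-2.6 INI style)
port = 27017           # default port, left uncommented
# bind_ip = 127.0.0.1  # commented out so mongod listens on external interfaces
httpinterface = true   # optional HTTP status interface on port 28017
```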
Once you've done all of that, you should be able to access the database from your local machine (assuming you have the mongo client installed locally) by running
mongo server.url.com:27017/mup-app-name -u username -p
where server.url.com is the URL or IP address of your remote server, mup-app-name is the appName parameter from your mup.json file, username is the user you created to access the database, and you'll be prompted for that user's password after you run the command (or you could put it after -p on the same line, depending on the password).
There may also be a way to do this by setting up nginx to reverse-proxy 127.0.0.1:27017 on your remote server, but I've never done it and that's just me speculating.

mongodb cluster:after use keyFile for authentication ,it report socket exception for shard

I used the mongod --keyFile parameter to configure an authenticated MongoDB cluster. First I added an admin user to the admin db without authentication, then I restarted mongod with --keyFile, after which authentication took effect. But when I try addUser(), show dbs, show collections, or db.collection.find(), it reports an error:
mongos> db.system.users.find()
error: {
    "$err" : "socket exception [CONNECT_ERROR] for shard4/192.168.10.10:10004,192.168.10.12:10004",
    "code" : 11002,
    "shard" : "config"
}
Sometimes it is shard1, sometimes shard2 or shard3...
I checked every shard's health (I have 3 shards, each shard is a 3-member replica set); every member's health status is 1, i.e. ok.
So, can anyone help me?
I checked the log at /var/data/master/log.log, and it said the permissions on the keyfile were too open. That means I shouldn't have given the file such broad permissions. So I ran the command below on every mongodb cluster member:
sudo chmod 700 /var/data/keyfile
When I checked the log again, the permission problem was solved, but the socket exception error remained.
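As a sketch of verifying keyfile permissions (paths are placeholders; mongod requires that group and other have no access, so 600 is the usual choice, though 700 also satisfies the check):

```shell
# Create a throwaway keyfile and verify its mode; paths are placeholders.
keyfile=$(mktemp)
head -c 48 /dev/urandom | base64 > "$keyfile"  # generate key material
chmod 600 "$keyfile"                           # no group/other access
stat -c %a "$keyfile"                          # prints 600
```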
But I could ping these config servers' ports successfully, which proves the config servers themselves were working properly.
So why couldn't the config servers be reached? Finally I found the reason: I had started the mongos process first and only then the 3 config servers' mongod instances. When mongos starts, it tries to connect to the config servers, but since they were started afterwards, it could not reach them. Once I started the 3 config servers first and then the mongos instance, the problem was solved!
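The startup order described above can be sketched like this (hostnames, ports, and paths are placeholders):

```shell
# Start the three config servers first...
mongod --configsvr --dbpath /var/data/cfg0 --port 10000 --keyFile /var/data/keyfile --fork --logpath /var/data/cfg0.log
mongod --configsvr --dbpath /var/data/cfg1 --port 10001 --keyFile /var/data/keyfile --fork --logpath /var/data/cfg1.log
mongod --configsvr --dbpath /var/data/cfg2 --port 10002 --keyFile /var/data/keyfile --fork --logpath /var/data/cfg2.log

# ...and only then mongos, pointing at them.
mongos --configdb host0:10000,host1:10001,host2:10002 --keyFile /var/data/keyfile --fork --logpath /var/data/mongos.log
```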

mongodb replicaset auth not working

I have a problem with replica sets
After I added the keyFile path to mongodb.conf I can't connect. This is my mongo.conf:
logpath=/path/to/log
logappend=true
replSet = rsname
fork = true
keyFile = /path/to/key
And this is what is showed in the command line:
XXXX@XXXX:/etc$ sudo service mongodb restart
stop: Unknown instance:
mongodb start/running, process 10540
XXXX@XXXX:/etc$ mongo
MongoDB shell version: 2.4.6
connecting to: test
Mon Sep 30 18:44:20.984 Error: couldn't connect to server 127.0.0.1:27017 at src/mongo/shell/mongo.js:145
exception: connect failed
XXXX@XXXX:/etc$
If I comment out the keyFile line in mongo.conf, it works fine.
I solved the problem.
It was related to the key file permissions. I fixed the permissions and ownership and it worked like a charm.
As the root user I did:
$ chmod 700 keyfile
$ chown mongodb:mongodb keyfile
If authentication were the problem you would get a different message (and you would still be able to start the shell; an unauthenticated session just prevents you from running most commands).
This one reads more like a socket exception: where you are trying to connect, there is no service listening. You can check with netstat whether any process is listening on the ip:port shown in the message. I assume the mongod process has not started, which can happen for several reasons; check the logs for the actual one. One possibility is that the keyfile does not exist at the specified path, or that the appropriate permissions have not been set on it.
Adding a keyfile automatically turns on the auth option too. This means you have to authenticate as a user, but you can bypass this authentication with the localhost exception. Read the documentation.
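As a sketch of using the localhost exception to create that first user (user name and password are placeholders; this uses the 2.4-era db.addUser syntax to match the shell version shown above, while 2.6+ shells use db.createUser):

```javascript
// Connect from the same host first: mongo --port 27017
use admin
db.addUser({ user: "admin", pwd: "secret", roles: [ "userAdminAnyDatabase" ] })
db.auth("admin", "secret")  // subsequent sessions must authenticate
```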