I'm trying to initialize a config server replica set on a single member. If I run
mongo --host localhost:27017
it connects fine, but as soon as I run the command to initiate the replica set,
rs.initiate({_id : "c_rs1", configsvr: true, members: [{ _id: 1, host : "10.0.0.2:27017"}]})
I get this error:
{
"ok" : 0,
"errmsg" : "not authorized on admin to execute command { replSetInitiate: { _id: \"c_rs1\", configsvr: true, members: [ { _id: 1.0, host: \"10.0.0.2:27017\" } ] } }",
"code" : 13,
"codeName" : "Unauthorized"
}
However, if I run this instead,
mongo --port 27017
I get this when I run the exact same command:
{ ok : 1 }
This is a problem when I have to run the script that initializes a replica set against a different server, because I cannot simply drop the --host option and assume there's a config mongod instance running on the right port on the same server. Any ideas why the --host option isn't working here?
FYI: security.authorization is set to disabled in the config file for the mongod.
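For context, a minimal sketch of the config-server setup and remote initiation this describes (the config file path is an assumption; the address, port, and set name come from the question):

# config server mongod settings (assumed file: /etc/mongod-configsvr.conf)
sharding:
  clusterRole: configsvr
replication:
  replSetName: c_rs1
net:
  bindIp: 0.0.0.0
  port: 27017

# initiating the set from another machine via --host
mongo --host 10.0.0.2 --port 27017 --eval \
  'rs.initiate({_id: "c_rs1", configsvr: true, members: [{_id: 1, host: "10.0.0.2:27017"}]})'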
Related
Registering shards with our mongos query router also gives an error. We start
mongo --port 27011 --host localhost
and then running sh.addShard("localhost:27012") gives this error:
{
"ok" : 0,
"errmsg" : "no such command: 'addShard', bad cmd: '{ addShard: \"localhost:27012\" }'",
"code" : 59,
"codeName" : "CommandNotFound"
}
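For reference, the addShard/sh.addShard command only exists on a mongos, so a CommandNotFound result like this usually means the shell is connected to a mongod rather than the query router. A minimal sketch of the intended flow (the ports are the ones from the snippet; the replica set name in the last line is an assumption):

# connect to the mongos query router (assuming it is the process listening on 27011)
mongo --host localhost --port 27011

# from that mongos shell, register the shard running on 27012
sh.addShard("localhost:27012")

# if the shard is a replica set member, include the set name
sh.addShard("shardRs1/localhost:27012")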
My planned steps (my goal):
Create 3x Mongo Hosts
Initiate Replication (rs.initiate())
Add hosts to replication set
Enable auth on primary (which should sync user/role/auth settings across cluster??)
Auth is disabled during the replica set configuration; however, I'm getting "not authorized on local to execute command" when attempting to add a member.
Here are the steps I'm running on a fresh, brand-new Mongo install without auth enabled yet:
Check that --auth is not enabled:
root@3106f5453c95:/# ps -ef | grep mongod
mongodb 1 0 1 16:34 ? 00:00:01 mongod --bind_ip_all --config /etc/mongod.conf
mongod.conf
systemLog:
  destination: file
  path: "/var/log/mongodb/mongod.log"
  logAppend: true
replication:
  replSetName: rsprod
security:
  keyFile: /data/db/replicaSetKey.key
The following commands are run on one of the hosts in the 3-node set:
Initiate replica set without issue:
root@b902fd176bdd:/# mongo --eval 'rs.initiate()'
MongoDB shell version v3.6.3
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.3
{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "b902fd176bdd:27017",
"ok" : 1,
"operationTime" : Timestamp(1526490381, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1526490381, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
Then adding a host (it's denied):
root@b902fd176bdd:/# mongo --eval 'rs.add("myhost.example.com:27017")'
MongoDB shell version v3.6.3
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.3
2018-05-16T17:07:20.860+0000 E QUERY [thread1] Error: count failed: {
"operationTime" : Timestamp(1526490432, 1),
"ok" : 0,
"errmsg" : "not authorized on local to execute command { count: \"system.replset\", query: {}, fields: {}, $clusterTime: { clusterTime: Timestamp(1526490432, 1), signature: { hash: BinData(0, 5DC7E35270B286518353EDADEBF474074AD1140A), keyId: 6556226268348547073 } }, $db: \"local\" }",
"code" : 13,
"codeName" : "Unauthorized",
"$clusterTime" : {
"clusterTime" : Timestamp(1526490432, 1),
"signature" : {
"hash" : BinData(0,"XcfjUnCyhlGDU+2t6/R0B0rRFAo="),
"keyId" : NumberLong("6556226268348547073")
}
}
} :
_getErrorWithCode#src/mongo/shell/utils.js:25:13
DBQuery.prototype.count#src/mongo/shell/query.js:383:11
DBCollection.prototype.count#src/mongo/shell/collection.js:1584:12
rs.add#src/mongo/shell/utils.js:1274:1
#(shell eval):1:1
My understanding is that auth roles are not a requirement for replica sets, so what am I doing wrong here?
I searched around and finally found the problem.
Here is the cause of the whole issue:
keyFile implies security.authorization.
https://docs.mongodb.com/manual/reference/configuration-options/#security.keyFile
So, if you use a keyFile for the cluster, you MUST use authorization as well. This is fairly hidden in the documentation, but clearly it's very important.
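In config-file terms, that means these two security sections behave the same with respect to client access control (a small sketch; the key path is the one from the question above):

security:
  keyFile: /data/db/replicaSetKey.key

# is effectively the same, for client authorization, as:

security:
  keyFile: /data/db/replicaSetKey.key
  authorization: enabled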
So the order of operations should be:
Initialize databases
Create Users/Roles
Restart DB with auth enabled
Configure replication
It's either that or don't use a keyFile for the cluster.
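One way to read that order of operations, as a rough sketch (the user name and password are placeholders; myhost.example.com is the host from the rs.add() call above):

# 1+2. start mongod WITHOUT replSetName and WITHOUT the keyFile, then create an admin user
mongo --eval 'db.getSiblingDB("admin").createUser({
    user: "admin", pwd: "changeme",
    roles: [{role: "root", db: "admin"}]})'

# 3. restart mongod with replication.replSetName AND security.keyFile
#    (the keyFile switches authorization on)

# 4. authenticate with the new user and configure replication
mongo -u admin -p changeme --authenticationDatabase admin --eval 'rs.initiate()'
mongo -u admin -p changeme --authenticationDatabase admin --eval 'rs.add("myhost.example.com:27017")'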
I have an AWS Ubuntu instance where I upgraded a) the Ubuntu version (from 16.04 to 16.04.2) and b) the MongoDB version (from 3.2 to 3.4.3). However, when I did so, MongoDB lost track of all my databases.
I was able to copy the database files to /data/db/, run sudo mongod --repair, and access all the databases in the mongo shell.
However, the service instance of MongoDB, started when I execute sudo service mongod start, is still using the original /var/lib/mongodb/ (where the data still exists). When I try to repair from within the shell using db.repairDatabase(), I get only { "ok" : 1 } and no additional databases appear. I need to have the service mongod running so clients can access it.
Here is a readout of what the service mongod outputs:
> db.adminCommand("getCmdLineOpts")
{
"argv" : [
"/usr/bin/mongod",
"--quiet",
"--config",
"/etc/mongod.conf"
],
"parsed" : {
"config" : "/etc/mongod.conf",
"net" : {
"bindIp" : "127.0.0.1",
"port" : 27017
},
"storage" : {
"dbPath" : "/var/lib/mongodb",
"journal" : {
"enabled" : true
}
},
"systemLog" : {
"destination" : "file",
"logAppend" : true,
"path" : "/var/log/mongodb/mongod.log",
"quiet" : true
}
},
"ok" : 1
}
> db.repairDatabase()
{ "ok" : 1 }
> show databases
admin 0.000GB
local 0.000GB
How can I repair the service mongod databases?
I realized that I could use mongodump --archive={path} to dump the working local instance to an archive. Then I started the service and used mongorestore --archive={path}, which loaded everything into the correct databases. It is functional again.
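Roughly, the recovery looked like this (the temporary port, log path, and archive path are placeholders, not from the original post):

# run a temporary mongod on the repaired data directory, on a non-default port
mongod --dbpath /data/db --port 27018 --logpath /tmp/mongod-repair.log --fork

# dump everything from it into an archive
mongodump --port 27018 --archive=/tmp/alldbs.archive

# restore the archive into the service mongod (port 27017, dbPath /var/lib/mongodb)
mongorestore --port 27017 --archive=/tmp/alldbs.archive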
In reference to the MongoDB DBA course, I'm trying to create a replica set on El Capitan (single machine only) as shown by the instructor, and I get the following error. I have three members:
(MongoDB was installed using Homebrew)
Step I: Setting up config
cfg ={ _id :"abc", members:[{_id:0, host:"localhost:27001"}, {_id:1, host:"localhost:27002"}, {_id:2, host:"localhost:27003"}] }
{
"_id" : "abc",
"members" : [
{
"_id" : 0,
"host" : "localhost:27001"
},
{
"_id" : 1,
"host" : "localhost:27002"
},
{
"_id" : 2,
"host" : "localhost:27003"
}
]
}
Step II: Initialize the config.
rs.reconfig(cfg)
2015-10-05T11:34:27.082-0400 E QUERY Error: Could not retrieve replica set config: { "ok" : 0, "errmsg" : "not running with --replSet", "code" : 76 }
at Function.rs.conf (src/mongo/shell/utils.js:1017:11)
at Function.rs.reconfig (src/mongo/shell/utils.js:969:22)
at (shell):1:4 at src/mongo/shell/utils.js:1017
Make sure you have the replSetName configured in /etc/mongod.conf:
replication:
replSetName: "somename"
Then restart your mongod:
sudo service mongod stop
sudo service mongod start
or, in one step:
sudo service mongod restart
You are not running the mongod with a replica set name. The solution is to set a replica set name, either via replication.replSetName in the mongod config file or with the --replSet command-line parameter,
e.g. --replSet=test_replica
Once the changes are done, restart the server.
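For the single-machine, three-member setup from the question, a rough sketch of the whole sequence (data and log paths are placeholders):

# start three mongod instances, all with the same --replSet name
mongod --replSet abc --port 27001 --dbpath /data/rs1 --logpath /data/rs1/mongod.log --fork
mongod --replSet abc --port 27002 --dbpath /data/rs2 --logpath /data/rs2/mongod.log --fork
mongod --replSet abc --port 27003 --dbpath /data/rs3 --logpath /data/rs3/mongod.log --fork

# then initiate the brand-new set from a shell connected to one of the members
mongo --port 27001 --eval 'rs.initiate({_id: "abc", members: [
    {_id: 0, host: "localhost:27001"},
    {_id: 1, host: "localhost:27002"},
    {_id: 2, host: "localhost:27003"}]})'

rs.initiate(cfg) is used here because the set does not exist yet; rs.reconfig(cfg) only applies to a set that has already been initiated.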
After installing mongodb, I ran mongod with
mongod --dbpath <pathtodb> --logpath <pathtolog> --replSet rs0
I then connected with the mongo shell and ran
rs.initiate()
I then tried to insert a document into a collection, but received an error:
> db.blah.insert({a:1})
WriteResult({ "writeError" : { "code" : undefined, "errmsg" : "not master" } })
Looking at rs.status(), I see the status is REMOVED:
> rs.status()
{
"state" : 10,
"stateStr" : "REMOVED",
"uptime" : 1041,
"optime" : Timestamp(1429037007, 1),
"optimeDate" : ISODate("2015-04-14T18:43:27Z"),
"ok" : 0,
"errmsg" : "Our replica set config is invalid or we are not a member of it",
"code" : 93
}
I have no idea what I could have done to mess this up. This should have worked I think. How do I get past this?
As the answers above said, the config is not set correctly.
I tried to re-initiate the replica set, but got this error message:
singleNodeRepl:OTHER> rs.initiate({ _id: "rs0", members: [ { _id: 0, host : "localhost:27017" } ] } )
{
"info" : "try querying local.system.replset to see current configuration",
"ok" : 0,
"errmsg" : "already initialized",
"code" : 23,
"codeName" : "AlreadyInitialized"
}
The solution is to reconfigure the replica set:
singleNodeRepl:OTHER> rsconf = rs.conf()
singleNodeRepl:OTHER> rsconf.members = [{_id: 0, host: "localhost:27017"}]
[ { "_id" : 0, "host" : "localhost:27017" } ]
singleNodeRepl:OTHER> rs.reconfig(rsconf, {force: true})
{ "ok" : 1 }
singleNodeRepl:OTHER>
singleNodeRepl:SECONDARY>
singleNodeRepl:PRIMARY>
The problem here is that you ran rs.initiate() empty: you didn't tell it which machines belong to that replica set.
So:
rs.initiate({
_id: "rs0",
version: 1,
members: [
{ _id: 0, host : "address.to.this.machine:27017" }
]
}
)
Short Answer:
I needed to do:
rs.initiate({_id:'rs0', version: 1, members: [{_id: 0, host:'localhost:27017'}]})
rather than rs.initiate().
Long Answer:
My case is almost the same as @Madbreaks' and @Yihe's comments, but my background was different, so I'm adding my comment here.
Background
I used a Docker container of MongoDB and initiated the replica set with rs.initiate(). (The data volume is mounted to the host, but that is off-topic here.)
What Happened
When I restarted the MongoDB container, the error "MongoError: not master and slaveOk=false" occurred. Yes, the error message is different from @d0c_s4vage's, but the workaround is the same as @Yihe's.
Root Cause
The root cause was the dynamically assigned hostname, as shown below:
rs0:PRIMARY> rs.conf()
{
"_id" : "rs0",
...
"members" : [
{
...
"host" : "ee4ed99b555e:27017", # <----- !!!!
The host "ee4..." above comes from the Docker container's internal hostname; it was set by my first rs.initiate() and would change whenever the container is recreated. In my case, localhost is fine because this is a single server with a single replica set, used to evaluate the 'rocketchat' app.
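For illustration, a minimal sketch of initiating the set with an explicit localhost member instead of the container's auto-generated hostname (the container name and image tag are assumptions, not from the original post):

# assumed container name and image tag; extra arguments are passed through to mongod
docker run -d --name mongo-rs0 -p 27017:27017 mongo:3.6 --replSet rs0

# wait a moment for mongod to accept connections, then initiate with an explicit host
docker exec mongo-rs0 mongo --eval \
    'rs.initiate({_id: "rs0", members: [{_id: 0, host: "localhost:27017"}]})'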
I was also facing the same issue; I tried the steps below.
NOTE: If you already have a cluster set up, follow these steps:
Stop the particular server (host: "address.to.this.machine:27017")
Remove the mongod.lock file
Create one more data directory
- (default: /data/db, new data directory: /data/db_rs0)
Update the configuration file:
- change dbPath to "/data/db_rs0"
- check bindIp (change the default 127.0.0.1 to 0.0.0.0)
Check Hostname & Hosts
hostname
sudo vi /etc/hosts
Add to /etc/hosts:
127.0.0.1 hostname
127.0.1.1 hostname
(your public/private IP) hostname
Start the MongoDB server:
sudo /usr/bin/mongod -f /etc/mongod.conf &
rs.initiate({ _id: "rs0", members: [ { _id: 0, host : "hostname:27017" } ] } )
rs.status()
{
....
"members": [
{
"_id": 0,
"name": "hostname:27017",
"health": 1,
"state": 1,
"stateStr": "PRIMARY",
"uptime": 98231,
"optime": {
"ts": Timestamp(1474963880, 46),
"t": NumberLong(339)
},
"optimeDate": ISODate("2016-09-27T08:11:20Z"),
"electionTime": Timestamp(1474956813, 1),
"electionDate": ISODate("2016-09-27T06:13:33Z"),
"configVersion": 12,
"self": true
},
...........
]
"ok": 1
}