Single Instance MongoDB Replica Set - cannot perform query/insert operations - mongodb

After installing mongodb, I ran mongod with
mongod --dbpath <pathtodb> --logpath <pathtolog> --replSet rs0
I then connected with the mongo shell and ran
rs.initiate()
I then tried to insert a document into a collection, but received an error:
> db.blah.insert({a:1})
WriteResult({ "writeError" : { "code" : undefined, "errmsg" : "not master" } })
Looking at rs.status(), I see the status is REMOVED:
> rs.status()
{
    "state" : 10,
    "stateStr" : "REMOVED",
    "uptime" : 1041,
    "optime" : Timestamp(1429037007, 1),
    "optimeDate" : ISODate("2015-04-14T18:43:27Z"),
    "ok" : 0,
    "errmsg" : "Our replica set config is invalid or we are not a member of it",
    "code" : 93
}
I have no idea what I could have done to mess this up. This should have worked, I think. How do I get past this?

As the other answers said, the config is not set correctly.
I tried to re-initiate the replica set, but got this error message:
singleNodeRepl:OTHER> rs.initiate({ _id: "rs0", members: [ { _id: 0, host : "localhost:27017" } ] } )
{
    "info" : "try querying local.system.replset to see current configuration",
    "ok" : 0,
    "errmsg" : "already initialized",
    "code" : 23,
    "codeName" : "AlreadyInitialized"
}
The solution is to reconfigure the replica set:
singleNodeRepl:OTHER> rsconf = rs.conf()
singleNodeRepl:OTHER> rsconf.members = [{_id: 0, host: "localhost:27017"}]
[ { "_id" : 0, "host" : "localhost:27017" } ]
singleNodeRepl:OTHER> rs.reconfig(rsconf, {force: true})
{ "ok" : 1 }
singleNodeRepl:OTHER>
singleNodeRepl:SECONDARY>
singleNodeRepl:PRIMARY>
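Once the prompt shows PRIMARY, the insert from the question should go through (expected result shown for reference):
singleNodeRepl:PRIMARY> db.blah.insert({a:1})
WriteResult({ "nInserted" : 1 })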

The problem here is that you ran rs.initiate() empty: you didn't tell it which machines belong to that replica set.
So:
rs.initiate({
    _id: "rs0",
    version: 1,
    members: [
        { _id: 0, host : "address.to.this.machine:27017" }
    ]
})

Short Answer:
I needed to do:
rs.initiate({_id:'rs0', version: 1, members: [{_id: 0, host:'localhost:27017'}]})
rather than rs.initiate().
Long Answer:
My situation is almost the same as in @Madbreaks' and @Yihe's comments, but my background was different, so I'm adding my comment here.
Background
I used a Docker container for MongoDB and initiated the replica set with rs.initiate(). (The data volume is mounted from the host, but that is off-topic here.)
What Happened
When I restarted the MongoDB container, the error "MongoError: not master and slaveOk=false" happened. Yes, the error message is different from @d0c_s4vage's, but the workaround is the same as @Yihe's.
Root Cause
The root cause was the dynamically assigned hostname, as shown below:
rs0:PRIMARY> rs.conf()
{
    "_id" : "rs0",
    ...
    "members" : [
        {
            ...
            "host" : "ee4ed99b555e:27017",    # <----- !!!!
The host "ee4..." above comes from the Docker container's internal hostname, which was set by my first rs.initiate(). It changes whenever the container is recreated. In my case, localhost is fine because I run a single server with a single replica set, just to evaluate the 'rocketchat' app.
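A minimal sketch of that workaround (the same force-reconfig approach as in the answer above, with the member pointed back at localhost; the default port is an assumption):
cfg = rs.conf()
cfg.members[0].host = "localhost:27017"   // replace the container-generated hostname
rs.reconfig(cfg, { force: true })         // force is needed because the node is not a writable primary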

I'm also facing the same issue. Here are the steps I tried.
NOTE: follow these steps if you already have a cluster set up.
1. Stop the particular server (host: "address.to.this.machine:27017").
2. Remove the mongod.lock file.
3. Create one more data directory (default: /data/db, new data directory: /data/db_rs0).
4. Update the configuration file (see the config sketch after the status output below):
   - change dbpath to "/data/db_rs0"
   - check bindIp (default 127.0.0.1; change to 0.0.0.0 if other hosts must reach this member)
5. Check the hostname and /etc/hosts:
   hostname
   sudo vi /etc/hosts
   and add to hosts:
   127.0.0.1 hostname
   127.0.1.1 hostname
   (your public/private IP) hostname
6. Start the MongoDB server:
   sudo /usr/bin/mongod -f /etc/mongod.conf &
7. Initiate the replica set and check its status:
   rs.initiate({ _id: "rs0", members: [ { _id: 0, host : "hostname:27017" } ] })
   rs.status()
{
    ....
    "members": [
        {
            "_id": 0,
            "name": "hostname:27017",
            "health": 1,
            "state": 1,
            "stateStr": "PRIMARY",
            "uptime": 98231,
            "optime": {
                "ts": Timestamp(1474963880, 46),
                "t": NumberLong(339)
            },
            "optimeDate": ISODate("2016-09-27T08:11:20Z"),
            "electionTime": Timestamp(1474956813, 1),
            "electionDate": ISODate("2016-09-27T06:13:33Z"),
            "configVersion": 12,
            "self": true
        },
        ...
    ],
    "ok": 1
}
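For reference, the parts of /etc/mongod.conf touched in steps 4 and 7 might look like this in the YAML config format (a sketch; the exact paths are assumptions):
storage:
  dbPath: /data/db_rs0        # the new data directory from step 3
net:
  port: 27017
  bindIp: 0.0.0.0             # instead of the default 127.0.0.1, so other hosts can reach this member
replication:
  replSetName: rs0            # must match the _id passed to rs.initiate()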

Related

"No host described in new configuration 1 for replica set rs0 maps to this node " : Mongodb

I have installed Docker on 3 Ubuntu 16.04 VMs, and in each VM's Docker container I run the mongo image using the command:
docker run -d -p 27017:27017 -v data:/data/db mongo --replSet "rs0"
After that I changed the /etc/hosts file of each VM, adding the IP addresses of the 3 VMs to each file:
cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 mongodb0
aa.bb.cc.ee mongodb0
aa.bb.cc.ff mongodb1
aa.bb.cc.gg mongodb2
When I use the command rs.initiate() to deploy the replica set:
rs.initiate({
    _id : "rs0",
    members: [
        { _id: 0, host: "mongodb0:27017" },
        { _id: 1, host: "mongodb1:27017" },
        { _id: 2, host: "mongodb2:27017" }
    ]
})
I get this error:
"operationTime" : Timestamp(0, 0),
"ok" : 0,
"errmsg" : "No host described in new configuration 1 for replica set rs0 maps to this node",
"code" : 93,
"codeName" : "InvalidReplicaSetConfig",
"$clusterTime" : {
"clusterTime" : Timestamp(0, 0),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
How can I fix this issue, please?

Error initialising MongoDB replica set in Docker container

I have created a MongoDB docker container, with a replica set, using this command:
docker run -d --name mongo -v /data/db:/data/db mongo --replSet name
The container starts running.
I then try to initiate the replica set, using this command:
rs.initiate()
{
    "info2" : "no configuration specified. Using a default configuration for the set",
    "me" : "fa07bcdd8591:27017",
    "info" : "try querying local.system.replset to see current configuration",
    "ok" : 0,
    "errmsg" : "already initialized",
    "code" : 23
}
But it gives the error message "already initialized".
When I check the health of the replica set with the rs.status() command
rs.status()
{
    "state" : 10,
    "stateStr" : "REMOVED",
    "uptime" : 102,
    "optime" : {
        "ts" : Timestamp(1472680081, 1),
        "t" : NumberLong(77)
    },
    "optimeDate" : ISODate("2016-08-31T21:48:01Z"),
    "ok" : 0,
    "errmsg" : "Our replica set config is invalid or we are not a member of it",
    "code" : 93
}
It says the replica set config is invalid.
It looks like you didn't specify any replica set configuration.
Here is an example of initiating a replica set:
rs.initiate(
    {
        _id: "myReplSet",
        version: 1,
        members: [
            { _id: 0, host : "mongodb0.example.net:27017" },
            { _id: 1, host : "mongodb1.example.net:27017" },
            { _id: 2, host : "mongodb2.example.net:27017" }
        ]
    }
)
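Note that the set in the question is already initialized, so re-running rs.initiate() with a config document will keep returning "already initialized". In that case a forced reconfig (the same approach as the workaround near the top of this page) is one way out. A sketch, assuming the single member should address itself as localhost:27017:
// the error output itself suggests querying local.system.replset; reuse that document as the base config
cfg = db.getSiblingDB("local").system.replset.findOne()
cfg.members = [ { _id: 0, host: "localhost:27017" } ]   // host is an assumption; use whatever clients connect to
cfg.version = cfg.version + 1
db.adminCommand({ replSetReconfig: cfg, force: true })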

replica set in MongoDB using docker, primary has error and stops being primary when another member is added to the set

I have two Docker containers running a mongo instance each. They were initialized like this:
docker run --name mongodb-shard-1-node-1 -d -v mongodb-shard-1-node-1:/data/db -p 27031:27017 mongo --replSet rs0 --smallfiles --oplogSize 128
When I do docker inspect mongodb-shard-1-node-1 it shows the IP 172.17.0.2.
docker run --name mongodb-shard-1-node-2 -d -v mongodb-shard-1-node-2:/data/db -p 27020:27017 mongo --replSet rs0 --smallfiles --oplogSize 128
When I do docker inspect mongodb-shard-1-node-2 it shows the IP 172.17.0.4.
So I proceed to access mongodb-shard-1-node-1 using docker exec -it mongodb-shard-1-node-1 mongo and initialize it as the primary member like this:
rs.initiate()
{
    "info2" : "no configuration specified. Using a default configuration for the set",
    "me" : "ee3c41ef76b2:27017",
    "ok" : 1
}
Then I proceed to add mongodb-shard-1-node-2 as a secondary member. At first it looks like it worked:
rs0:PRIMARY> rs.add("172.17.0.4:27017")
{ "ok" : 1 }
rs0:PRIMARY> rs.status()
{
    "set" : "rs0",
    "date" : ISODate("2016-05-20T01:04:02.095Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "members" : [
        {
            "_id" : 0,
            "name" : "ee3c41ef76b2:27017",
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 27,
            "optime" : {
                "ts" : Timestamp(1463706237, 1),
                "t" : NumberLong(1)
            },
            "infoMessage" : "could not find member to sync from",
            "electionDate" : ISODate("2016-05-20T01:03:43Z"),
            "configVersion" : 2,
            "self" : true
        },
        {
            "name" : "172.17.0.4:27017",
            "state" : 0,
            "stateStr" : "STARTUP",
            "uptime" : 4,
            "optime" : {
                "ts" : Timestamp(0, 0),
                "t" : NumberLong(-1)
            },
            "lastHeartbeat" : ISODate("2016-05-20T01:04:01.187Z"),
            "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
            "pingMs" : NumberLong(0),
            "configVersion" : -2
        }
    ],
    "ok" : 1
}
But right away it fails for some reason, and I have no idea why. Here's what I get:
rs0:PRIMARY> rs.status()
2016-05-20T01:04:18.007+0000 E QUERY [thread1] Error: error doing query:
failed: network error while attempting to run command 'replSetGetStatus' on host '127.0.0.1:27017' :
DB.prototype.runCommand#src/mongo/shell/db.js:135:1
DB.prototype.adminCommand#src/mongo/shell/db.js:153:16
rs.status#src/mongo/shell/utils.js:1090:12
#(shell):1:1
2016-05-20T01:04:18.012+0000 I NETWORK [thread1] trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2016-05-20T01:04:18.018+0000 I NETWORK [thread1] reconnect 127.0.0.1:27017 (127.0.0.1) ok
rs0:SECONDARY
What is wrong? How do I fix it?
Edit: just to clarify, I had already tested the connections between the containers by following this part of the documentation: Test Connections Between all Members.
My question was answered here:
https://dba.stackexchange.com/a/139145/91866
I'm going to quote the whole answer:
Your primary is trying to auto-configure itself as ee3c41ef76b2:27017, and that then resolves to the loopback (127.0.0.1), which is then not responding on the container as it expects. Depending on what the second container does to resolve ee3c41ef76b2, and especially if it does not resolve to 172.17.0.2, it will probably not be able to talk to the primary either.
Assuming you are correct about the connectivity (and you have verified that the instances are listening on the IP and not just the loopback), then you need to override the automatic detection and be explicit when you are calling rs.initiate(), something like this:
rs.initiate(
    {
        _id: "rs0",
        version: 1,
        members: [
            { _id: 0, host : "172.17.0.2:27017" },
            { _id: 1, host : "172.17.0.4:27017" }
        ]
    }
)
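To double-check the connectivity assumption in the quoted answer (the "Test Connections Between all Members" step mentioned in the edit above), something like the following can be run from the Docker host; the container names and bridge IPs are the ones from the question:
# from node 1, ping node 2 on its bridge IP, and vice versa
docker exec -it mongodb-shard-1-node-1 mongo --host 172.17.0.4 --port 27017 --eval "db.adminCommand({ ping: 1 })"
docker exec -it mongodb-shard-1-node-2 mongo --host 172.17.0.2 --port 27017 --eval "db.adminCommand({ ping: 1 })"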

Replica Set Error Code 76

In reference to the MongoDB DBA course, trying to create a replica set as shown by the instructor on El Capitan (single machine only), I get the following error. I have three members:
(MongoDB was installed using Homebrew)
Step I: Setting up config
cfg ={ _id :"abc", members:[{_id:0, host:"localhost:27001"}, {_id:1, host:"localhost:27002"}, {_id:2, host:"localhost:27003"}] }
{
    "_id" : "abc",
    "members" : [
        {
            "_id" : 0,
            "host" : "localhost:27001"
        },
        {
            "_id" : 1,
            "host" : "localhost:27002"
        },
        {
            "_id" : 2,
            "host" : "localhost:27003"
        }
    ]
}
Step II: Initialize the config.
rs.reconfig(cfg)
2015-10-05T11:34:27.082-0400 E QUERY Error: Could not retrieve replica set config: { "ok" : 0, "errmsg" : "not running with --replSet", "code" : 76 }
at Function.rs.conf (src/mongo/shell/utils.js:1017:11)
at Function.rs.reconfig (src/mongo/shell/utils.js:969:22)
at (shell):1:4 at src/mongo/shell/utils.js:1017
Make sure you have the replSetName configured in /etc/mongod.conf:
replication:
replSetName: "somename"
Then restart your mongod.
sudo service mongod stop
sudo service mongod start
sudo service mongod restart
You are not running the replica set with a replica set name. The solution is to set a replica set name, either in the mongod config file (replSetName, as above) or with the --replSet parameter,
e.g. --replSet=test_replica
Once the change is made, restart the server.
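For the three-member, single-machine setup from the question, the command line for each member might look roughly like this (a sketch; the data and log paths are assumptions):
mongod --replSet abc --port 27001 --dbpath /data/rs1 --logpath /data/rs1/mongod.log --fork
mongod --replSet abc --port 27002 --dbpath /data/rs2 --logpath /data/rs2/mongod.log --fork
mongod --replSet abc --port 27003 --dbpath /data/rs3 --logpath /data/rs3/mongod.log --fork
Once all three are running with the same --replSet name, the set is created with rs.initiate(cfg) on one of them; rs.reconfig(cfg), as used in Step II of the question, only applies to a set that is already initialized.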

exception: hosts cannot switch between localhost and hostname

I created a replica set.
I added localhost to the set in the beginning, but when I try to edit the member to use the actual hostname, I get the error "exception: hosts cannot switch between localhost and hostname".
I need to get rid of localhost:27017 because otherwise it doesn't let me add any other member by hostname (i.e. a non-localhost address).
my-rs0:PRIMARY> cfg=rs.conf();
{
    "_id" : "my-rs0",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "localhost:27017"
        }
    ]
}
my-rs0:PRIMARY> cfg.members[0].host="my-server04:27017"
my-rs0:PRIMARY> cfg
{
    "_id" : "my-rs0",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "my-server04:27017"
        }
    ]
}
using rs.reconfig(cfg);
my-rs0:PRIMARY> rs.reconfig(cfg);
{
    "errmsg" : "exception: hosts cannot switch between localhost and hostname",
    "code" : 13645,
    "ok" : 0
}
No luck with rs.add("my-server04:27017") or rs.remove("localhost:27017") either.
my-rs0:PRIMARY> rs.add("my-server04:27017");
{
    "errmsg" : "exception: can't use localhost in repl set member names except when using it for all members",
    "code" : 13393,
    "ok" : 0
}
I have tried all the reconfiguration methods mentioned here: Replica Set Reconfig steps.
But none of them fixes the above issue. I've already spent hours on this and I am really frustrated.
I had the same problem and I fixed it without dropping any database. I just edited the host field of the member in the local.system.replset collection to match the local IP and then restarted mongod. Everything worked perfectly.
It looks like you'll need to scrap your replica set and start over.
I believe that when you initiated your replica set, you explicitly passed it a config document that references your MongoDB instance using localhost.
As I was investigating this, I brought up a replica set. When I initiated it using rs.initiate() (without passing a config document), it used the hostname by default.
rs.initiate()
rs.conf()
{
    "_id" : "stack1",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "MY-HOSTNAME:28001"
        }
    ]
}
This post describes the need to completely clear out your database files to create a fresh replica set.
Once I did this, I initiated a new replica set by passing a configuration document:
cfg = {
    "_id" : "stack1",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "localhost:28001"
        }
    ]
}
rs.initiate(cfg)
rs.conf()
{
    "_id" : "stack1",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "localhost:28001"
        }
    ]
}
Long story short, you'll need to delete all of the files in your --dbpath directory and re-create the replica set, without explicitly specifying "localhost" as your hostname.
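If you do go the scrap-and-recreate route, the sequence might look roughly like this (a sketch; the dbpath, log path, and set name are assumptions, and the rm step destroys the existing data files):
# stop the member first (e.g. db.shutdownServer() from the mongo shell, or your service manager)
rm -rf /data/db/*                      # wipes the old replica set data - dev/test only
mongod --replSet my-rs0 --dbpath /data/db --logpath /data/db/mongod.log --fork
After it restarts with empty data files, rs.initiate() with an explicit members list that uses the machine's hostname (not localhost) recreates the set, as described above.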
I did it according to the docs:
Restarted MongoDB on another port (e.g. 37017) to prevent user connections to it.
Then started a shell on it:
$ mongo --port 37017
Then updated the configuration:
use local
cfg = db.system.replset.findOne( { "_id": "my-rs0" } )
cfg.members[0].host = "my-server04:27017"
db.system.replset.update( { "_id": "my-rs0" } , cfg )
Then restarted MongoDB on the original port.