"No host described in new configuration 1 for replica set rs0 maps to this node " : Mongodb - mongodb

I have installed Docker on 3 Ubuntu 16.04 VMs. In a container on each VM I run the MongoDB image using the command:
docker run -d -p 27017:27017 -v data:/data/db mongo --replSet "rs0"
After that I edited /etc/hosts on each of the 3 VMs, adding the IP addresses of all the VMs:
cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 mongodb0
aa.bb.cc.ee mongodb0
aa.bb.cc.ff mongodb1
aa.bb.cc.gg mongodb2
When I run rs.initiate() to deploy the replica set:
rs.initiate( {
    _id : "rs0", members: [
        { _id: 0, host: "mongodb0:27017" },
        { _id: 1, host: "mongodb1:27017" },
        { _id: 2, host: "mongodb2:27017" }
] })
I get this error:
"operationTime" : Timestamp(0, 0),
"ok" : 0,
"errmsg" : "No host described in new configuration 1 for replica set rs0 maps to this node",
"code" : 93,
"codeName" : "InvalidReplicaSetConfig",
"$clusterTime" : {
"clusterTime" : Timestamp(0, 0),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
How can I fix this issue, please?
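One likely cause, offered here as an assumption rather than something the question confirms: inside each container the hostname is the Docker-generated container ID, not mongodb0/mongodb1/mongodb2, and the /etc/hosts edits made on the VMs are not visible inside the containers, so mongod cannot map any host in the new config to itself. A minimal sketch of a possible fix for VM 1 (--hostname and --add-host are standard Docker flags; the names and addresses are the ones from the question, and the other VMs would get the matching hostname):
docker run -d -p 27017:27017 -v data:/data/db \
    --hostname mongodb0 \
    --add-host mongodb1:aa.bb.cc.ff \
    --add-host mongodb2:aa.bb.cc.gg \
    mongo --replSet "rs0"
With each container able to resolve all three member names, the same rs.initiate() document should let every node recognise itself.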

Related

mongodb localhost replica set setup

I want to set up a replica set on my local machine. I am using two instances of mongodb (mongod1.conf, mongod2.conf). I initiated the replica set on the mongo instance on port 27018, and when I try to add a member with rs.add('ThinkPad-X230:27019') it throws an error.
Commands:
mongod --replSet Replicaset1 --dbpath home/data --port 27018
mongo --port 27018
>> rs.initiate()
>> rs.add("ThinkPad-X230:27019")
mongod --dbpath home/data2 --port 27019
mongo --port 27019
I have checked db.serverStatus().host on the 27019 instance, and when I add the host name "ThinkPad-X230:27019" via rs.add() it throws the error:
{
"ok" : 0,
"errmsg" : "Either all host names in a replica set configuration must be localhost references, or none must be; found 1 out of 2",
"code" : 103,
"codeName" : "NewReplicaSetConfigurationIncompatible",
"operationTime" : Timestamp(1568943205, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1568943205, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
As you started your first instance of mongod with the --replSet Replicaset1 option, it is configured to be a part of the Replicaset1 replica set.
When you initialised the replica set, this instance was added to it as a member. Below is an output snippet of rs.status():
{
"_id" : 0,
"name" : "localhost:27018",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 228,
"optime" : {
"ts" : Timestamp(1569751005, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2019-09-29T09:56:45Z"),
"electionTime" : Timestamp(1569750830, 2),
"electionDate" : ISODate("2019-09-29T09:53:50Z"),
"configVersion" : 3,
"self" : true
}
As you can see, the name of the member is "localhost:27018".
So when you try to add another member with rs.add('ThinkPad-X230:27019'), the following error is thrown, and it is a valid one: one member is "localhost:27018" and the one you are trying to add is "ThinkPad-X230:27019", but host names must either all be localhost references or all be real hostnames.
"errmsg" : "Either all host names in a replica set configuration must be localhost references, or none must be; found 1 out of 2"
Try adding the member using the following command:
rs.add("localhost:27019")
And it will be added successfully.
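If you want real hostnames rather than localhost references, a sketch of the alternative, using the same reconfig pattern that a later answer on this page uses (run against the 27018 instance; { force: true } may be unnecessary while the node is still primary):
cfg = rs.conf()
cfg.members[0].host = "ThinkPad-X230:27018"  // replace the localhost reference
rs.reconfig(cfg, { force: true })
rs.add("ThinkPad-X230:27019")                // both members now use the same hostname style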

How a MongoDB replica set works in Azure

I have deployed the Mongo DB replica set using this template mongodb-replica-set-centos.
Mongo DB VM 1 (primary):
ps aux | grep mongo
root 10161 0.7 0.5 797140 40900 ? SLl 05:18 0:05 mongod --dbpath /var/lib/mongo/ --replSet repset --logpath /var/log/mongodb/mongod.log --fork --config /etc/mongod.conf
sshuser 10347 0.0 0.0 112640 960 pts/0 S+ 05:29 0:00 grep --color=auto mongo
Mongo DB database:-
mongo -u mongoadmin -p mongoadmin admin
MongoDB shell version: 3.2.19
connecting to: admin
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
Server has startup warnings:
2018-03-23T05:18:21.137+0000 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2018-03-23T05:18:21.137+0000 I CONTROL [initandlisten]
repset:PRIMARY> rs.status()
{
"set" : "repset",
"date" : ISODate("2018-03-23T07:38:45.694Z"),
"myState" : 1,
"term" : NumberLong(1),
"heartbeatIntervalMillis" : NumberLong(2000),
"members" : [
{
"_id" : 0,
"name" : "52.170.83.3:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 8426,
"optime" : {
"ts" : Timestamp(1521782318, 3),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-03-23T05:18:38Z"),
"electionTime" : Timestamp(1521782318, 1),
"electionDate" : ISODate("2018-03-23T05:18:38Z"),
"configVersion" : 2,
"self" : true
},
{
"_id" : 1,
"name" : "10.0.1.5:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 8407,
"optime" : {
"ts" : Timestamp(1521782318, 3),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-03-23T05:18:38Z"),
"lastHeartbeat" : ISODate("2018-03-23T07:38:45.538Z"),
"lastHeartbeatRecv" : ISODate("2018-03-23T07:38:42.546Z"),
"pingMs" : NumberLong(1),
"configVersion" : 2
}
],
"ok" : 1
}
Mongo DB VM 2 (secondary):
ps aux | grep mongo
root 10115 0.4 0.5 447908 37892 ? SLl 05:11 0:17 mongod --dbpath /var/lib/mongo/ --config /etc/mongod.conf --replSet repset --logpath /var/log/mongodb/mongod.log --fork
sshuser 10269 0.0 0.0 112640 960 pts/0 S+ 06:21 0:00 grep --color=auto mongo
Mongo DB database:-
mongo -u mongoadmin -p mongoadmin admin
MongoDB shell version: 3.2.19
connecting to: admin
2018-03-23T07:38:54.311+0000 E QUERY [thread1] Error: Authentication failed. :
DB.prototype._authOrThrow#src/mongo/shell/db.js:1441:20
#(auth):6:1
#(auth):1:2
exception: login failed
Mongo DB VM 3 (secondary):
ps aux | grep mongo
root 10122 0.6 0.5 795472 40420 ? SLl 05:12 0:26 mongod --dbpath /var/lib/mongo/ --config /etc/mongod.conf --replSet repset --logpath /var/log/mongodb/mongod.log --fork
sshuser 10381 0.0 0.0 112640 960 pts/0 S+ 06:21 0:00 grep --color=auto mongo
Mongo DB database:-
mongo -u mongoadmin -p mongoadmin admin
MongoDB shell version: 3.2.19
connecting to: admin
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
Server has startup warnings:
2018-03-23T05:12:19.613+0000 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2018-03-23T05:12:19.613+0000 I CONTROL [initandlisten]
repset:SECONDARY> rs.status()
{
"set" : "repset",
"date" : ISODate("2018-03-23T07:39:04.009Z"),
"myState" : 2,
"term" : NumberLong(1),
"heartbeatIntervalMillis" : NumberLong(2000),
"members" : [
{
"_id" : 0,
"name" : "52.170.83.3:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 8425,
"optime" : {
"ts" : Timestamp(1521782318, 3),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-03-23T05:18:38Z"),
"lastHeartbeat" : ISODate("2018-03-23T07:39:02.571Z"),
"lastHeartbeatRecv" : ISODate("2018-03-23T07:39:03.573Z"),
"pingMs" : NumberLong(1),
"electionTime" : Timestamp(1521782318, 1),
"electionDate" : ISODate("2018-03-23T05:18:38Z"),
"configVersion" : 2
},
{
"_id" : 1,
"name" : "10.0.1.5:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 8806,
"optime" : {
"ts" : Timestamp(1521782318, 3),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-03-23T05:18:38Z"),
"infoMessage" : "could not find member to sync from",
"configVersion" : 2,
"self" : true
}
],
"ok" : 1
}
Questions:
Why am I unable to log in to Mongo DB VM 2?
After I shut down Mongo DB VM 3, will Mongo DB VM 2 act as a secondary node?
If I shut down Mongo DB VM 1, will one of the secondary nodes act as a primary node?
All three questions are answered by the same fact: DB VM2 is not part of the replica set. It's clear from the rs.status() information that only two nodes are registered as part of the replica set, VM1 and VM3.
The implications are that:
DB VM2 is not part of the replica set, so it does not have the authentication credentials you are trying to log in with
No, DB VM2 will not act as a secondary node, because it is not part of the replica set
In the current setup, with only 2 nodes in the replica set, if you shut down either node (VM1 or VM3) then the other node will not elect itself primary, because it cannot command a majority in an election.
Take a look at the docs on Replica Set Elections to understand what the majority is and why it matters; and take a look at DB VM2 to understand why it is not part of your replica set. Did you ever actually add it?
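If VM2 was indeed never added, a minimal sketch of the fix, run from the primary; the address below is a hypothetical placeholder, since the question never shows VM2's IP:
rs.add("10.0.1.6:27017")   // hypothetical private address of DB VM2
rs.status()                // confirm three members; a 3-node set keeps a majority when any one node is down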

Error initialising MongoDB replica set in Docker container

I have created a MongoDB docker container, with a replica set, using this command:
docker run -d --name mongo -v /data/db:/data/db mongo --replSet name
The container starts running.
I then try to initiate the replica set, using this command:
rs.initiate()
{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "fa07bcdd8591:27017",
"info" : "try querying local.system.replset to see current configuration",
"ok" : 0,
"errmsg" : "already initialized",
"code" : 23
}
But it gives the error message "already initialized".
When I check the health of the replica set with the rs.status() command
rs.status()
{
"state" : 10,
"stateStr" : "REMOVED",
"uptime" : 102,
"optime" : {
"ts" : Timestamp(1472680081, 1),
"t" : NumberLong(77)
},
"optimeDate" : ISODate("2016-08-31T21:48:01Z"),
"ok" : 0,
"errmsg" : "Our replica set config is invalid or we are not a member of it",
"code" : 93
}
It says the replica set config is invalid.
It looks like you didn't specify any replica set configuration. Here is a replica set initiation example:
rs.initiate(
{
_id: "myReplSet",
version: 1,
members: [
{ _id: 0, host : "mongodb0.example.net:27017" },
{ _id: 1, host : "mongodb1.example.net:27017" },
{ _id: 2, host : "mongodb2.example.net:27017" }
]
}
)
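Since this node reports "already initialized" but sits in state REMOVED, repeating rs.initiate() will keep failing; a forced reconfig may recover it. A sketch, assuming the shell can still read the stored config, and reusing the hostname from the question's "me" field:
cfg = rs.conf()  // if this fails in REMOVED state, try: cfg = db.getSiblingDB("local").system.replset.findOne()
cfg.members = [ { _id: 0, host: "fa07bcdd8591:27017" } ]
rs.reconfig(cfg, { force: true })
A later answer on this page applies the same pattern to the single-instance variant of this error.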

replica set in MongoDB using docker, primary has error and stops being primary when another member is added to the set

I have two Docker containers, each running a mongo instance; they were started like this:
docker run --name mongodb-shard-1-node-1 -d -v mongodb-shard-1-node-1:/data/db -p 27031:27017 mongo --replSet rs0 --smallfiles --oplogSize 128
When I run docker inspect mongodb-shard-1-node-1 it shows the IP 172.17.0.2.
docker run --name mongodb-shard-1-node-2 -d -v mongodb-shard-1-node-2:/data/db -p 27020:27017 mongo --replSet rs0 --smallfiles --oplogSize 128
When I run docker inspect mongodb-shard-1-node-2 it shows the IP 172.17.0.4.
I then access mongodb-shard-1-node-1 using docker exec -it mongodb-shard-1-node-1 mongo and initialize it as the primary member like this:
rs.initiate()
{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "ee3c41ef76b2:27017",
"ok" : 1
}
Then I add mongodb-shard-1-node-2 so it can work as a secondary member; at first it looks like it worked:
rs0:PRIMARY> rs.add("172.17.0.4:27017")
{ "ok" : 1 }
rs0:PRIMARY> rs.status()
{
    "set" : "rs0",
    "date" : ISODate("2016-05-20T01:04:02.095Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "members" : [
        {
            "_id" : 0,
            "name" : "ee3c41ef76b2:27017",
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 27,
            "optime" : {
                "ts" : Timestamp(1463706237, 1),
                "t" : NumberLong(1)
            },
            "infoMessage" : "could not find member to sync from",
            "electionDate" : ISODate("2016-05-20T01:03:43Z"),
            "configVersion" : 2,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "172.17.0.4:27017",
            "state" : 0,
            "stateStr" : "STARTUP",
            "uptime" : 4,
            "optime" : {
                "ts" : Timestamp(0, 0),
                "t" : NumberLong(-1)
            },
            "lastHeartbeat" : ISODate("2016-05-20T01:04:01.187Z"),
            "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
            "pingMs" : NumberLong(0),
            "configVersion" : -2
        }
    ],
    "ok" : 1
}
But right away it fails for some reason, and I have no idea why. Here's what I get:
rs0:PRIMARY> rs.status()
2016-05-20T01:04:18.007+0000 E QUERY [thread1] Error: error doing query:
failed: network error while attempting to run command 'replSetGetStatus' on host '127.0.0.1:27017' :
DB.prototype.runCommand#src/mongo/shell/db.js:135:1
DB.prototype.adminCommand#src/mongo/shell/db.js:153:16
rs.status#src/mongo/shell/utils.js:1090:12
#(shell):1:1
2016-05-20T01:04:18.012+0000 I NETWORK [thread1] trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2016-05-20T01:04:18.018+0000 I NETWORK [thread1] reconnect 127.0.0.1:27017 (127.0.0.1) ok
rs0:SECONDARY
What is wrong? How do I fix it?
Edit: just to clarify, I had already tested the connections between the containers by following this part of the documentation: Test Connections Between all Members.
My question was answered here:
https://dba.stackexchange.com/a/139145/91866
I'm going to quote the whole answer:
Your primary is trying to auto-configure itself as ee3c41ef76b2:27017, and that then resolves to the loopback (127.0.0.1), which is not responding on the container as it expects. Depending on what the second container does to resolve ee3c41ef76b2, and especially if it does not resolve to 172.17.0.2, it will probably not be able to talk to the primary either.
Assuming you are correct about the connectivity (and you have verified that the instances are listening on the IP and not just the loopback), then you need to override the automatic detection and be explicit when you are calling rs.initiate(), something like this:
rs.initiate(
{
_id: "rs0",
version: 1,
members: [
{ _id: 0, host : "172.17.0.2:27017" },
{ _id: 1, host : "172.17.0.4:27017" }
]
}
)
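As a quick check of the connectivity assumption in the quoted answer, pinging each node from the other container should return { "ok" : 1 } before you initiate. A sketch reusing the container names and IPs from the question:
docker exec -it mongodb-shard-1-node-1 mongo --host 172.17.0.4 --eval 'db.runCommand({ ping: 1 })'
docker exec -it mongodb-shard-1-node-2 mongo --host 172.17.0.2 --eval 'db.runCommand({ ping: 1 })'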

Single Instance Mongodb Replica Set - cannot perform query/insert operations

After installing mongodb, I ran mongod with
mongod --dbpath <pathtodb> --logpath <pathtolog> --replSet rs0
I then connected with the mongo shell and ran
rs.initiate()
I then tried to insert a document into a collection, but received an error:
> db.blah.insert({a:1})
WriteResult({ "writeError" : { "code" : undefined, "errmsg" : "not master" } })
Looking at rs.status(), I see the status is REMOVED:
> rs.status()
{
"state" : 10,
"stateStr" : "REMOVED",
"uptime" : 1041,
"optime" : Timestamp(1429037007, 1),
"optimeDate" : ISODate("2015-04-14T18:43:27Z"),
"ok" : 0,
"errmsg" : "Our replica set config is invalid or we are not a member of it",
"code" : 93
}
I have no idea what I could have done to mess this up. This should have worked I think. How do I get past this?
As the answers above said, the config was not set correctly.
I tried to reinitialize the replica set, but got this error message:
singleNodeRepl:OTHER> rs.initiate({ _id: "rs0", members: [ { _id: 0, host : "localhost:27017" } ] } )
{
"info" : "try querying local.system.replset to see current configuration",
"ok" : 0,
"errmsg" : "already initialized",
"code" : 23,
"codeName" : "AlreadyInitialized"
}
The solution is to reconfigure the replica set:
singleNodeRepl:OTHER> rsconf = rs.conf()
singleNodeRepl:OTHER> rsconf.members = [{_id: 0, host: "localhost:27017"}]
[ { "_id" : 0, "host" : "localhost:27017" } ]
singleNodeRepl:OTHER> rs.reconfig(rsconf, {force: true})
{ "ok" : 1 }
singleNodeRepl:OTHER>
singleNodeRepl:SECONDARY>
singleNodeRepl:PRIMARY>
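A note on the { force: true } option used above: a forced reconfig bypasses the usual safety checks and, on a set with real secondaries, can roll back committed writes, so it is best reserved for recovery situations like this single-node case.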
The problem here is that you ran rs.initiate() empty! You didn't tell it which machines belong to the replica set. So:
rs.initiate({
_id: "rs0",
version: 1,
members: [
{ _id: 0, host : "address.to.this.machine:27017" }
]
}
)
Short Answer:
I needed to do:
rs.initiate({_id:'rs0', version: 1, members: [{_id: 0, host:'localhost:27017'}]})
rather than rs.initiate().
Long Answer:
My case is almost the same as #Madbreaks's and #Yihe's comments, but the background is different, so I'm adding my comment here.
Background
I used a Docker container of MongoDB and initiated the replica set with rs.initiate(). (The data volume is mounted from the host, but that is off-topic here.)
What Happened
When I restarted the MongoDB container, the error "MongoError: not master and slaveOk=false" occurred. Yes, the error message was different from #d0c_s4vage's, but the workaround is the same as #Yihe's.
Root Cause
The root cause was the dynamically assigned hostname, as follows:
rs0:PRIMARY> rs.conf()
{
"_id" : "rs0",
...
"members" : [
{
...
"host" : "ee4ed99b555e:27017", # <----- !!!!
The host "ee4..." above is the Docker container's internal hostname; it was set by my first rs.initiate() and would change whenever the container is recreated. In my case localhost is fine, because it's a single server with a single replica set, used to evaluate the 'rocketchat' app.
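A sketch of one way to avoid the moving hostname altogether in a setup like this one (--hostname is a standard Docker flag that pins the container's internal hostname across recreations; the container name and volume here are illustrative):
docker run -d --name mongo --hostname mongo-rs0 -v /data/db:/data/db mongo --replSet rs0
and then initiate with the pinned name instead of relying on auto-detection:
rs.initiate({ _id: "rs0", members: [ { _id: 0, host: "mongo-rs0:27017" } ] })
Recreating the container with the same --hostname keeps the stored replica set config valid.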
I also faced the same issue; I tried the steps below.
NOTE: if you already have a cluster set up, follow my steps:
1. Stop the particular server (host: "address.to.this.machine:27017")
2. Remove the mongod.lock file
3. Create one more data directory (default: /data/db, new data directory: /data/db_rs0)
4. Update the configuration file:
- change dbpath to "/data/db_rs0"
- change bindIp (default 127.0.0.1) to 0.0.0.0
5. Check the hostname and hosts:
hostname
sudo vi /etc/hosts
and add to /etc/hosts:
127.0.0.1 hostname
127.0.1.1 hostname
(your public/private IP) hostname
6. Start the MongoDB server:
sudo /usr/bin/mongod -f /etc/mongod.conf &
rs.initiate({ _id: "rs0", members: [ { _id: 0, host : "hostname:27017" } ] } )
rs.status()
{
....
"members": [
{
"_id": 0,
"name": "hostname:27017",
"health": 1,
"state": 1,
"stateStr": "PRIMARY",
"uptime": 98231,
"optime": {
"ts": Timestamp(1474963880, 46),
"t": NumberLong(339)
},
"optimeDate": ISODate("2016-09-27T08:11:20Z"),
"electionTime": Timestamp(1474956813, 1),
"electionDate": ISODate("2016-09-27T06:13:33Z"),
"configVersion": 12,
"self": true
},
...........
]
"ok": 1
}