exception: hosts cannot switch between localhost and hostname - mongodb

I created a replica set.
I added localhost to the set in the beginning, but when I try to change the member to the actual hostname, I get the error "exception: hosts cannot switch between localhost and hostname".
I need to get rid of localhost:27017 because otherwise it doesn't let me add any other member by hostname (i.e. a non-localhost address).
my-rs0:PRIMARY> cfg=rs.conf();
{
    "_id" : "my-rs0",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "localhost:27017"
        }
    ]
}
my-rs0:PRIMARY> cfg.members[0].host="my-server04:27017"
my-rs0:PRIMARY> cfg
{
    "_id" : "my-rs0",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "my-server04:27017"
        }
    ]
}
Then I ran rs.reconfig(cfg):
my-rs0:PRIMARY> rs.reconfig(cfg);
{
    "errmsg" : "exception: hosts cannot switch between localhost and hostname",
    "code" : 13645,
    "ok" : 0
}
I had no luck with rs.add("my-server04:27017") or rs.remove("localhost:27017") either.
my-rs0:PRIMARY> rs.add("my-server04:27017");
{
    "errmsg" : "exception: can't use localhost in repl set member names except when using it for all members",
    "code" : 13393,
    "ok" : 0
}
I have tried all of the reconfiguration methods mentioned here: Replica Set Reconfig steps.
None of them fix the issue above. I have already spent hours on this and I am really frustrated.

I had the same problem and fixed it without dropping any database. I just edited the host field of the member in the local.system.replset collection to match the local IP and then restarted mongod. Everything worked perfectly.
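A minimal sketch of that edit from the mongo shell, assuming the set is called my-rs0 and the target address is my-server04:27017 as in the question (substitute your own names):
use local
cfg = db.system.replset.findOne({ "_id" : "my-rs0" })
cfg.members[0].host = "my-server04:27017"   // point the member at the real address
db.system.replset.update({ "_id" : "my-rs0" }, cfg)
Then restart mongod so it picks up the rewritten configuration.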

It looks like you'll need to scrap your replica set and start over.
I believe that when you initiated your replica set, you explicitly passed it a config document that references your MongoDB instance using localhost.
As I was investigating this, I brought up a replica set. When I initiated the replica set using rs.initiate() (without passing a config document), it used the hostname by default.
rs.initiate()
rs.conf()
{
    "_id" : "stack1",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "MY-HOSTNAME:28001"
        }
    ]
}
This post describes the need to completely clear out your database files to create a fresh replica set.
Once I did this, I initiated a new replica set by passing a configuration document:
cfg = {
    "_id" : "stack1",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "localhost:28001"
        }
    ]
}
rs.initiate(cfg)
rs.conf()
{
    "_id" : "stack1",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "localhost:28001"
        }
    ]
}
Long story short, you'll need to delete all of the files in your --dbpath directory and re-create the replica set, without explicitly specifying "localhost" as your hostname.
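If you do go that route, here is a sketch of the re-initialisation step once the --dbpath directory has been emptied and mongod restarted; the set name and host are the placeholders used above, not values from your deployment:
rs.initiate({
    "_id" : "stack1",
    "members" : [
        { "_id" : 0, "host" : "MY-HOSTNAME:28001" }   // real hostname, not localhost
    ]
})
rs.conf()   // verify the member is now listed by hostname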

I did it according to the docs:
Restarted MongoDB on another port (e.g. 37017) to prevent user connections to it.
Then started a shell on it:
$ mongo --port 37017
Then updated the configuration:
use local
cfg = db.system.replset.findOne( { "_id": "my-rs0" } )
cfg.members[0].host = "my-server04:27017"
db.system.replset.update( { "_id": "my-rs0" } , cfg )
Then restarted MongoDB on the original port.
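After the restart on the original port, it is worth confirming the change took effect (a generic check, not part of the documented steps):
rs.conf().members[0].host   // should now report "my-server04:27017"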

Related

Error when try to connect to Replica Set in Mongo

When I try to connect to a mongo replica set in AWS I get this error:
slavenode:27017: [Errno -2] Name or service not
known,ip-XXX-XX-XX-XX:27017: [Errno -2] Name or service not known
(where XXX-XX.. corresponds to my actual IP address)
The code to connect is shown below:
client = MongoClient("mongodb://Master-PublicIP:27017,Slave-PublicIP:27017/myFirstDB?replicaSet=rs0")
db = client.myFirstDB
try:
    db.command("serverStatus")
except Exception as e:
    print(e)
else:
    print("You are connected!")
client.close()
(where Master-PublicIP and Slave-PublicIP are the actual public IPv4 addresses from the AWS console)
I already have a replica set, and its configuration is:
rs0:PRIMARY> rs.conf()
{
    "_id" : "rs0",
    "version" : 2,
    "members" : [
        {
            "_id" : 0,
            "host" : "ip-XXX-XX-XX-XXX:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : { },
            "slaveDelay" : 0,
            "votes" : 1
        },
        {
            "_id" : 1,
            "host" : "SlaveNode:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : { },
            "slaveDelay" : 0,
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatTimeoutSecs" : 10,
        "getLastErrorModes" : { },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        }
    }
}
I have created /data/db on the PRIMARY and /data/db1 on the SECONDARY, and I have given them the proper permissions with sudo chmod -R 755 /data/db.
My MongoDB version is 3.0.15. Does anyone know what is going wrong?
Thanks in advance.
Have you tried removing myFirstDB from within the MongoClient() connection string?
MongoClient("mongodb://Master-PublicIP:27017,Slave-PublicIP:27017/?replicaSet=rs0")
Because your next line then specifies which db you want to use:
db = client.myFirstDB
Or I think you can specify the db by putting a dot after the closing parenthesis of the MongoClient() call:
MongoClient("mongodb://Master-PublicIP:27017,Slave-PublicIP:27017/?replicaSet=rs0").myFirstDB
I managed to solve the problem. As @N3i1 suggests in the comments, I used the Public DNS (IPv4). There was an issue with the hosts that I had declared in /etc/hosts.
In this file I had defined the IPs of the master/slaves with some names. For some reason this didn't work. I deleted them and then reconfigured the replica set.
On the PRIMARY, in the mongo shell, I did:
cfg = {"_id" : "rs0", "members" : [{"_id" : 0,"host" : "Public DNS (IPv4):27017"},{"_id" : 1,"host" : "Public DNS (IPv4):27017"}]}
rs.reconfig(cfg,{force: true});
Then I connected to the replica set from Python with:
MongoClient("mongodb://Public DNS (IPv4):27017,Public DNS (IPv4):27017/?replicaSet=rs0")
Of course, replace the Public DNS (IPv4) addresses with your own.

No common protocol found when adding shard in local network

I'm trying to build a small cluster using MongoDB sharding. I tried with everything on localhost and it works perfectly. But when I try it on my local network, where there are two nodes, node1 and node2, it does not work. On both nodes, mongod is started to serve as a shard. On node1, the config server and mongos are also started. All of them listen on 0.0.0.0 with exclusively allocated ports.
I can connect to and work with both nodes. When I use mongo to log in to mongos on node1, I can add node1's mongod as a shard, but when I try to add node2, an error occurs:
mongos> sh.addShard("<ip of node2 in local network>")
{ "ok" : 0, "errmsg" : "No common protocol found.", "code" : 126 }
I did some searching, but there is little documentation about this error.
mongo addShard "No common protocol found" errmsg 126 shows the same error but it does not seem helpful.
A couple of things to check:
a) Are you using the same version of mongod on all machines?
b) Are you using the same kind of storageEngine on all machines?
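A quick way to check both from the mongo shell on each node (generic commands, nothing specific to this cluster):
db.version()                            // mongod version, should match on every node
db.serverStatus().storageEngine.name    // e.g. "wiredTiger" or "mmapv1", should also match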
Our problem should have been obvious, but wasn't.
We simply forgot to configure ports here, so the :27000 was missing:
db.shards.updateOne({ "_id" : "shard0000" }, { $set : { "host" : "oururl.foo:27000" } })
db.shards.updateOne({ "_id" : "shard0001" }, { $set : { "host" : "oururl.foo:27000" } })
db.shards.updateOne({ "_id" : "shard0002" }, { $set : { "host" : "oururl.foo:27000" } })
db.shards.updateOne({ "_id" : "shard0003" }, { $set : { "host" : "oururl.foo:27000" } })

How to update _id in Mongodb Replica Set configuration?

I had 5 mongo members in a replica set. Afterwards, I deleted 3 of them.
How can I change the "_id" of the other members to the values "0", "1" and "2"?
rs.conf()
{
    "_id" : "rs0",
    "version" : 151261,
    "members" : [
        {
            "_id" : 3,
            "host" : "mongodb3:27017"
        },
        {
            "_id" : 4,
            "host" : "mongodb4:27017"
        },
        {
            "_id" : 5,
            "host" : "ok:27017",
            "arbiterOnly" : true
        }
    ]
}
Directly editing the replica set configuration may not be an elegant way. Instead, use the rs.remove(hostname) command to remove a member from the replica set; that way you do not have to bring down the primary during reconfiguration, and ascending values are assigned to the "_id" field automatically.
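For reference, rs.remove() takes the member's host string exactly as it appears in rs.conf(), e.g. using one of the hosts from the configuration above:
rs.remove("mongodb4:27017")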
Try dropping the slaves collection as described here: http://docs.mongodb.org/manual/tutorial/troubleshoot-replica-sets/#duplicate-key-error-on-local-slaves
The master will recreate the collection the next time it is required.
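If you try that, a minimal sketch of the drop from the mongo shell on the master (per the linked troubleshooting page, the collection lives in the local database):
use local
db.slaves.drop()   // the master rebuilds local.slaves the next time it is needed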
You could try this in the Mongo console:
conf = rs.conf()
conf.members[0]._id = 0
conf.members[1]._id = 1
conf.members[2]._id = 2
rs.reconfig(conf)

Replication set for MongoDB

I am trying to follow the instructions for setting up a replication set for a MongoDB database with Azure. The original instructions are at http://www.mongodb.org/display/DOCS/MongoDB+on+Azure+VM+-+Linux+Tutorial. I connect the shell with 'mongo --host bsicentos.cloudapp.net --port 27018' (something the instructions didn't tell me). The final step instructs me to enter:
> conf = {
id = “mongors”,
members : [
\{id:0, host:”mongodbrs.cloudapp.net:27018\},
\{id:0, host:”mongodbrs.cloudapp.net:27019\},
\{id:0, host:”mongodbrs.cloudapp.net:27020\}]}
>rs.initiate(conf)
If I don't type this exactly as specified and instead modify it slightly to fit my host (adding the missing closing quotes, dropping the escapes, and fixing the id numbers that are all zero), I finally get an accepted command:
mongors:PRIMARY> conf = {
... _id:"mongors",
... members:[
... {_id:0,host:"bsicentos.cloudapp.net:27018"},
... {_id:1,host:"bsicentos.cloudapp.net:27019"},
... {_id:2,host:"bsicentos.cloudapp.net:27020"}]}
{
    "_id" : "mongors",
    "members" : [
        {
            "_id" : 0,
            "host" : "bsicentos.cloudapp.net:27018"
        },
        {
            "_id" : 1,
            "host" : "bsicentos.cloudapp.net:27019"
        },
        {
            "_id" : 2,
            "host" : "bsicentos.cloudapp.net:27020"
        }
    ]
}
But I get an error:
mongors:PRIMARY> rs.initiate(conf)
{
    "info" : "try querying local.system.replset to see current configuration",
    "errmsg" : "already initialized",
    "ok" : 0
}
Ideas?
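As the error message itself suggests, you can see the configuration that is already stored by querying local.system.replset from the shell; a minimal, generic sketch:
use local
db.system.replset.findOne()   // shows the replica set config that "already initialized" refers to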

mongodb replicaset host name change error

I have a MongoDB replica set on Ubuntu. In the replica set, the hosts are defined as localhost, as you can see:
{
    "_id" : "myrep",
    "version" : 4,
    "members" : [
        {
            "_id" : 0,
            "host" : "localhost:27017"
        },
        {
            "_id" : 2,
            "host" : "localhost:27018"
        },
        {
            "_id" : 1,
            "host" : "localhost:27019",
            "priority" : 0
        }
    ]
}
I want to change the host addresses to the real IP of the server, but when I run rs.reconfig, I get this error:
{
    "assertion" : "hosts cannot switch between localhost and hostname",
    "assertionCode" : 13645,
    "errmsg" : "db assertion failure",
    "ok" : 0
}
How can I solve it?
Thank you.
There is a cleaner way to do this:
use local
cfg = db.system.replset.findOne({_id:"replicaSetName"})
cfg.members[0].host="newHost:27017"
db.system.replset.update({_id:"replicaSetName"},cfg)
Then restart mongod.
The only way I found to change the host names was recreating the replica set. To do it right, the db directories need to be cleaned. Then, starting all the servers in replication mode and creating a new replica set with the new host names fixed it.