I have a MongoDB replica set on Ubuntu. In the replica set, the hosts are defined as localhost, as you can see:
{
"_id" : "myrep",
"version" : 4,
"members" : [
{
"_id" : 0,
"host" : "localhost:27017"
},
{
"_id" : 2,
"host" : "localhost:27018"
},
{
"_id" : 1,
"host" : "localhost:27019",
"priority" : 0
}
]
}
I want to change the host addresses to the server's real IP, but when I run rs.reconfig I get this error:
{
"assertion" : "hosts cannot switch between localhost and hostname",
"assertionCode" : 13645,
"errmsg" : "db assertion failure",
"ok" : 0
}
How can I solve it?
Thank you.
There is a cleaner way to do this:
use local
cfg = db.system.replset.findOne({_id:"replicaSetName"})
cfg.members[0].host="newHost:27017"
db.system.replset.update({_id:"replicaSetName"},cfg)
Then restart mongod.
The only way I found to change the host names was to recreate the replica set. To do it properly, the db directories need to be cleaned out first. Then starting all servers in replication mode and creating a new replica set with the new host names fixed it, as sketched below.
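For reference, here is a minimal sketch of that approach in the mongo shell, using the set name and ports from the question; 203.0.113.10 is a placeholder for your server's real IP. It assumes you have already stopped all members, emptied each member's db directory, and restarted every mongod with --replSet myrep:
rs.initiate({
    "_id" : "myrep",
    "members" : [
        { "_id" : 0, "host" : "203.0.113.10:27017" },
        { "_id" : 2, "host" : "203.0.113.10:27018" },
        { "_id" : 1, "host" : "203.0.113.10:27019", "priority" : 0 }
    ]
})
rs.conf()   // verify the members now show the real IP instead of localhost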
So I have a sharded cluster with 2 config servers, 2 shards each with 2 replicas and 2 mongos instances, everything running on different VMs.
However, after configuring all of it, I finally tried to interact with the (still empty) database using a simple show dbs query from a mongos instance, but it threw the following error (after hanging for about a minute):
uncaught exception: Error: listDatabases failed:{
"ok" : 0,
"errmsg" : "Could not find host matching read preference { mode: \"primary\" } for set rep",
"code" : 133,
"codeName" : "FailedToSatisfyReadPreference",
"operationTime" : Timestamp(1648722327, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1648722327, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
Everything seems to be configured correctly, and when I run sh.status() from the mongos instance it identifies the shards and replicas as follows:
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("62421dd6b5f9640f309faca0")
}
shards:
{ "_id" : "rep", "host" : "rep/192.168.86.136:26000,192.168.86.141:26001", "state" : 1 }
{ "_id" : "repb", "host" : "repb/192.168.86.142:26002,192.168.86.143:26003", "state" : 1 }
active mongoses:
"4.4.8" : 2
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 5
Last reported error: Empty host component parsing HostAndPort from ""
Time of Reported error: Thu Mar 31 2022 11:06:39 GMT+0100 (WEST)
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
rep 919
repb 105
too many chunks to print, use verbose if you want to force print
{ "_id" : "testdb", "primary" : "rep", "partitioned" : false, "version" : { "uuid" : UUID("2e584dcd-25ea-4ba4-805c-b40928e26511"), "lastMod" : 1 } }
This may be a firewall issue.
Every node in your cluster must be able to reach every other node on the corresponding port. See
Simple HTTP/TCP health check for MongoDB
Try this script to check each member of each replica set:
const MONGO_PASSWORD = '*******'
const AUTH_SOURCE = 'admin'
const user = db.runCommand({ connectionStatus: 1 }).authInfo.authenticatedUsers.shift().user;
const map = db.adminCommand("getShardMap").map;
for (let rs of Object.keys(map)) {
let uri = map[rs].split("/");
let connectionString = `mongodb://${user}:${MONGO_PASSWORD}@${uri[1]}/admin?replicaSet=${uri[0]}&authSource=${AUTH_SOURCE}`;
let replicaSet = Mongo(connectionString).getDB("admin");
for (let member of replicaSet.adminCommand({ replSetGetStatus: 1 }).members) {
if (!replicaSet.hello().hosts.includes(member.name)) continue;
printjsononeline({ replicaSet: rs, host: member.name, stateStr: member.stateStr, health: member.health });
if (member.health != 1 || !Array("PRIMARY", "SECONDARY").includes(member.stateStr))
print(`Member state of ${member.name} is '${member.stateStr}'`);
}
}
Turns out I configured the replica set wrongly, so all I had to do was recreate the volumes of all VMs and configure it all again from scratch. Now it works as it should.
When I try to connect to a MongoDB replica set in AWS, I get this error:
slavenode:27017: [Errno -2] Name or service not
known,ip-XXX-XX-XX-XX:27017: [Errno -2] Name or service not known
(where XXX-XX.. corresponds to my actual IP address)
The code to connect is shown below:
client = MongoClient("mongodb://Master-PublicIP:27017,Slave-PublicIP:27017/myFirstDB?replicaSet=rs0")
db = client.myFirstDB
try:
db.command("serverStatus")
except Exception as e:
print(e)
else:
print("You are connected!")
client.close()
(where Master-PublicIP and Slave-PublicIP are the actual IPv4 public IPs from the AWS console)
I already have a replica set, and the configuration is:
rs0:PRIMARY> rs.conf()
{
"_id" : "rs0",
"version" : 2,
"members" : [
{
"_id" : 0,
"host" : "ip-XXX-XX-XX-XXX:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : 0,
"votes" : 1
},
{
"_id" : 1,
"host" : "SlaveNode:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : 0,
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatTimeoutSecs" : 10,
"getLastErrorModes" : {
},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
}
}
}
I have created /data/db on the PRIMARY and /data/db1 on the SECONDARY, and I have set the proper permissions with sudo chmod -R 755 /data/db.
My MongoDB version is 3.0.15. Does anyone know what is going wrong?
Thanks in advance.
Have you tried removing myFirstDB from within the MongoClient() call?
MongoClient("mongodb://Master-PublicIP:27017,Slave-PublicIP:27017/?replicaSet=rs0")
Your next line then specifies which db you want to use:
db = client.myFirstDB
Or I think you can specify the db by putting a dot after the closing parenthesis of MongoClient():
MongoClient("mongodb://Master-PublicIP:27017,Slave-PublicIP:27017/?replicaSet=rs0").myFirstDB
I managed to solve the problem. As @N3i1 suggested in the comments, I used the Public DNS (IPv4) names. There was an issue with the hosts that I had declared in /etc/hosts.
In that file I had mapped the IPs of the master/slaves to some names. For some reason this didn't work, so I deleted those entries and then reconfigured the replica set.
On the PRIMARY, in the mongo shell, I did:
cfg = {"_id" : "rs0", "members" : [{"_id" : 0,"host" : "Public DNS (IPv4):27017"},{"_id" : 1,"host" : "Public DNS (IPv4):27017"}]}
rs.reconfig(cfg,{force: true});
Then I connected to the replica set from Python with:
MongoClient("mongodb://Public DNS (IPv4):27017,Public DNS (IPv4):27017/?replicaSet=rs0")
Of course, replace the Public DNS (IPv4) addresses with your own.
I am using MongoDB replication.
Here is the output of rs.conf():
firstset:PRIMARY> rs.conf();
{
"_id" : "firstset",
"version" : 43,
"members" : [
{
"_id" : 7,
"host" : "primaryip:10002"
},
{
"_id" : 10,
"host" : "arbiterip:10009",
"votes" : 2,
"arbiterOnly" : true
},
{
"_id" : 12,
"host" : "secondaryip:10006"
}
]
}
Now I want to add another secondary instance, so I just started another mongod process on port 10004 and ran the command:
rs.add("secondaryip:10004");
I got the output
{ "ok" : 1 }
and the state of the newly added instance was
"stateStr" : "STARTUP2",
but at the same time my application was not able to connect to the primary instance. Why?
Please help me solve this issue.
This was a bug in MongoDB. It was resolved by the MongoDB team in version 2.6.2.
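If you want to confirm whether your deployment is affected, you can check the server version from the mongo shell; this is just a generic version check, nothing specific to that bug:
db.version()   // anything older than 2.6.2 still carries the bug described above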
I created a replica set.
I added localhost to the set in the beginning, but when I try to update the member with the actual hostname, I get the error "exception: hosts cannot switch between localhost and hostname".
I need to get rid of localhost:27017 because otherwise it doesn't let me add any other member by hostname (i.e. a non-localhost address).
my-rs0:PRIMARY> cfg=rs.conf();
{
"_id" : "my-rs0",
"version" : 1,
"members" : [
{
"_id" : 0,
"host" : "localhost:27017"
}
]
}
my-rs0:PRIMARY> cfg.members[0].host="my-server04:27017"
my-rs0:PRIMARY> cfg
{
"_id" : "my-rs0",
"version" : 1,
"members" : [
{
"_id" : 0,
"host" : "my-server04:27017"
}
]
}
using rs.reconfig(cfg);
my-rs0:PRIMARY> rs.reconfig(cfg);
{
"errmsg" : "exception: hosts cannot switch between localhost and hostname",
"code" : 13645,
"ok" : 0
}
No luck with rs.add("my-server04:27017") or rs.remove("localhost:27017") either.
my-rs0:PRIMARY> rs.add("my-server04:27017");
{
"errmsg" : "exception: can't use localhost in repl set member names except when using it for all members",
"code" : 13393,
"ok" : 0
}
I have tried all the reconfiguration methods mentioned here: Replica Set Reconfig steps.
But none of them fixes the above issue. I've already spent hours on this and I am really frustrated.
I had the same problem and fixed it without dropping any database. I just edited the host field of the member in the local.system.replset collection to match the local IP and then restarted mongod. Everything worked perfectly.
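A minimal sketch of that edit in the mongo shell; the set name my-rs0 matches this question, while 10.0.0.5 is a hypothetical local IP that you would replace with your own:
use local
cfg = db.system.replset.findOne({ "_id" : "my-rs0" })
cfg.members[0].host = "10.0.0.5:27017"
db.system.replset.update({ "_id" : "my-rs0" }, cfg)
// restart mongod afterwards so the edited configuration takes effect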
It looks like you'll need to scrap your replica set and start over.
I believe that when you initiated your replica set, you explicitly passed it a config document that references your MongoDB instance using localhost.
As I was investigating this, I brought up a replica set. When I initiated it using rs.initiate() (without passing a config document), it used the hostname by default.
rs.initiate()
rs.conf()
{
"_id" : "stack1",
"version" : 1,
"members" : [
{
"_id" : 0,
"host" : "MY-HOSTNAME:28001"
}
]
}
This post describes the need to completely clear out your database files in order to create a fresh replica set.
Once I did this, I initiated a new replica set by passing a configuration document:
cfg = {
"_id" : "stack1",
"version" : 1,
"members" : [
{
"_id" : 0,
"host" : "localhost:28001"
}
]
}
rs.initiate(cfg)
rs.conf()
{
"_id" : "stack1",
"version" : 1,
"members" : [
{
"_id" : 0,
"host" : "localhost:28001"
}
]
}
Long story short, you'll need to delete all of the files in your --dbpath directory and re-create the replica set, without explicitly specifying "localhost" as your hostname.
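A rough sketch of that fresh start, reusing the stack1 example from above (MY-HOSTNAME stands in for your machine's real hostname); it assumes you have already emptied the --dbpath directory and restarted mongod with --replSet stack1:
rs.initiate({
    "_id" : "stack1",
    "members" : [
        { "_id" : 0, "host" : "MY-HOSTNAME:28001" }
    ]
})
rs.conf()   // should now report MY-HOSTNAME:28001 rather than localhost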
I did it according to the docs:
Restarted MongoDB on another port (e.g. 37017) to prevent user connections to it.
Then started a shell on it:
$ mongo --port 37017
Then updated the configuration:
use local
cfg = db.system.replset.findOne( { "_id": "my-rs0" } )
cfg.members[0].host = "my-server04:27017"
db.system.replset.update( { "_id": "my-rs0" } , cfg )
Then restarted MongoDB on the original port.
I am trying to follow the instructions for setting up a replica set for a MongoDB database with Azure. The original instructions are at http://www.mongodb.org/display/DOCS/MongoDB+on+Azure+VM+-+Linux+Tutorial. I connect the shell with 'mongo --host bsicentos.cloudapp.net --port 27018' (something the instructions didn't tell me). The final step instructs me to enter:
> conf = {
id = “mongors”,
members : [
\{id:0, host:”mongodbrs.cloudapp.net:27018\},
\{id:0, host:”mongodbrs.cloudapp.net:27019\},
\{id:0, host:”mongodbrs.cloudapp.net:27020\}]}
>rs.initiate(conf)
If I don't type this exactly as specified but modify it slightly to fit my host (adding the missing closing quotes, dropping the escapes, and not leaving all the id numbers as zero), I finally get an accepted command:
mongors:PRIMARY> conf = {
... _id:"mongors",
... members:[
... {_id:0,host:"bsicentos.cloudapp.net:27018"},
... {_id:1,host:"bsicentos.cloudapp.net:27019"},
... {_id:2,host:"bsicentos.cloudapp.net:27020"}]}
{
"_id" : "mongors",
"members" : [
{
"_id" : 0,
"host" : "bsicentos.cloudapp.net:27018"
},
{
"_id" : 1,
"host" : "bsicentos.cloudapp.net:27019"
},
{
"_id" : 2,
"host" : "bsicentos.cloudapp.net:27020"
}
]
}
But I get an error:
mongors:PRIMARY> rs.initiate(conf)
{
"info" : "try querying local.system.replset to see current configuration",
"errmsg" : "already initialized",
"ok" : 0
}
Ideas?