Replica configuration - MongoDB

On mongo 3.2.17 I have the following output when running rs.initiate(). I need "ok" equal to 1. I don't know how to modify the configuration. Any suggestions?
{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "vpsxxxxxx:27017",
"info" : "try querying local.system.replset to see current configuration",
"ok" : 0,
"errmsg" : "already initialized",
"code" : 23
}

You are getting this error because replication has already been initialized on this machine; rs.initiate() only works on a fresh instance. In your case, use rs.reconfig() instead of rs.initiate():
rs.reconfig(config, {force: true})
You can use the force option when reconfiguring a replica set. Make sure you have at least 3 nodes: 2 data-bearing nodes and 1 arbiter (the minimum supported configuration) or 3 data-bearing nodes (the minimum recommended configuration) so that a primary can be elected.
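For example, a minimal sketch of a forced reconfiguration from the mongo shell, assuming you are connected to the member whose configuration you want to keep (the host name is just the one from your output):
// load whatever configuration this member currently has
cfg = rs.conf()
// adjust it as needed, e.g. make sure the host name is reachable by the other members
cfg.members[0].host = "vpsxxxxxx:27017"
// apply it even if this node is not currently PRIMARY
rs.reconfig(cfg, {force: true})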

Form a new replica set with removed members

How to configure removed members of a replica set to form a new replica set?
I have a replica set with 4 mongod instances.
Output of rs.config():
{
"_id" : "rs0",
"members" : [
{
"_id" : 0,
"host" : "localhost:27031"
},
{
"_id" : 1,
"host" : "localhost:27032"
},
{
"_id" : 2,
"host" : "localhost:27033"
},
{
"_id" : 3,
"host" : "localhost:27034"
}
],
"settings" : {
"replicaSetId" : ObjectId("5cf22332f5b9d21b01b9b6b2")
}
}
I removed 2 instances from the replica set
rs.remove("localhost:27033")
rs.remove("localhost:27034")
Now my requirement is to form a new replica set from these 2 removed members. What is the best way to do that?
My current solution
Connect to a removed member:
mongo --port 27033
and execute:
conf = {
"_id" : "rs0",
"members" : [
{
"_id" : 2,
"host" : "localhost:27033"
},
{
"_id" : 3,
"host" : "localhost:27034"
}
],
"settings" : {
"replicaSetId" : ObjectId("5cf22332f5b9d21b01b9b6b2")
}
}
and then
rs.reconfig(conf, {force:true})
Outcome
This solution worked fine in practice.
The removed members formed a replica set; one of them became primary and the other became secondary, and data was replicated between them.
This new replica set appears to be isolated from the initial replica set from which the members were removed.
Concerns
1) I had to use a forced reconfiguration, and I'm not sure about the consequences:
"errmsg" : "replSetReconfig should only be run on PRIMARY, but my state is REMOVED; use the \"force\" argument to override",
2) Is the new replica set actually a new one? In rs.config() the replicaSetId is the same as the old one:
"replicaSetId" : ObjectId("5cf22332f5b9d21b01b9b6b2")
I also had to use the same _id values for the members as in the old replica set's config:
"errmsg" : "New and old configurations both have members with host of localhost:27034 but in the new configuration the _id field is 1 and in the old configuration it is 3 for replica set rs0",
Is this solution good? Is there a better one?
Note: I need to retain the data from the old replica set (the data present at the time of removal) in the new replica set.
As you suspected, the procedure did not create a new replica set. Rather, it is a continuation of the old replica set, even though the two superficially look different.
There is actually a procedure in the MongoDB documentation for what you want to do: Restore a Replica Set from MongoDB Backups. The difference is that you're not restoring from a backup; rather, you're using one of the removed secondaries to seed a new replica set.
Hence you need to modify the first step of the procedure linked above. The rest of the procedure stays the same:
Restart the removed secondary as a standalone (without the --replSet parameter) and connect to it using the mongo shell.
Drop the local database in the standalone node:
use local
db.dropDatabase()
Restart the ex-secondary, this time with the --replSet parameter (with a new replica set name)
Connect to it using the mongo shell.
rs.initiate() the new set.
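Concretely, the commands for these steps look roughly like the sketch below; the dbpath and the new set name are placeholders to replace with your own values:
# step 1: restart the removed secondary as a standalone (no --replSet)
mongod --port 27033 --dbpath /data/db-27033
# step 2: drop the local database, which holds the old replica set metadata
mongo --port 27033
use local
db.dropDatabase()
# step 3: restart with a new replica set name
mongod --port 27033 --dbpath /data/db-27033 --replSet rsNew
# steps 4 and 5: connect and initiate the new set
mongo --port 27033
rs.initiate()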
After this, the new set should have a different replicaSetId compared to the old set. In my quick test of the procedure above, this is the result I see:
Old set:
> rs.conf()
...
"replicaSetId": ObjectId("5cf45d72a1c6c4de948ff5d8")
...
New set
> rs.conf()
...
"replicaSetId": ObjectId("5cf45d000dda9e1025d6c65e")
...
As with any major deployment changes like this, please ensure that you have a backup, and thoroughly test the procedures before doing it on a production system.

failed with Received heartbeat from member with the same member ID as ourself:0

I'm trying to set up a replica set with 1 primary, 1 secondary, and 1 arbiter.
I have set this up in /etc/mongodb.conf:
replication:
replSetName: ProductionReplicaSet
but I accidentally ran rs.initiate() on all three servers. Now when I run rs.add("mongo02....") I get:
{
"ok" : 0,
"errmsg" : "Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: chef-production2-mongo01:27017; the following nodes did not respond affirmatively: mongo02...:27017 failed with Received heartbeat from member with the same member ID as ourself: 0",
"code" : 74
}
I read here that rs.initiate() can be reversed, but the instructions are not clear.
I tried on mongo02:
config = rs.config() // config with _id: 0
config["members"][0]["_id"] = 1
rs.reconfig(config)
{
"ok" : 0,
"errmsg" : "New and old configurations both have members with host of chef-production2-mongo02:27017 but in the new configuration the _id field is 1 and in the old configuration it is 0 for replica set ProductionReplicaSet",
"code" : 103
}
Any help? I'm open to other solutions.
I ran into this as well. It turns out it can be caused by two things:
1. The node you are adding already has its own database files.
2. You ran rs.initiate() on both nodes; it should only be run on the node that will become the primary.
To resolve this, check /etc/mongod.conf on the node you are adding for the dbPath setting and move or remove all the files in that folder.
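For example, a rough sketch on the node being added, assuming an Ubuntu/Debian-style install where dbPath is /var/lib/mongodb and the node holds no data you need to keep:
sudo service mongod stop
# move the old data files aside rather than deleting them outright
sudo mv /var/lib/mongodb /var/lib/mongodb.bak
sudo mkdir /var/lib/mongodb
sudo chown mongodb:mongodb /var/lib/mongodb
sudo service mongod start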

Mongodb created replica set string showing exception

I got this issue while working on replica sets. The server starts successfully, but after executing rs.initiate() and rs.status() I get the following errors:
"info2" : "no configuration explicitly specified -- making one",
"errmsg" : "exception: bad --replSet config string format is: <setname>[host1>,<seedhost2>,...]",
"code" : 13093,
"ok" : 0
I ran into this problem as well. What happened was: I configured the replica set in /etc/mongo.conf, went into the mongo shell, and executed rs.initiate(). What I forgot to do was restart mongod! A simple sudo service mongod restart fixed it.
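So the order of operations is roughly: put the replica set name in the config file, restart mongod, and only then run rs.initiate(). A sketch, assuming an Ubuntu-style service name and a placeholder set name:
# in the mongod config file (e.g. /etc/mongod.conf; the path may differ)
replication:
  replSetName: rs0
# then restart and initiate
sudo service mongod restart
mongo
rs.initiate()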

How to know the existence of replica set in sharded environment from JAVA client

I want to set
mongoClient.setWriteConcern(WriteConcern.REPLICAS_SAFE);
only if a replica set is present.
But in a sharded environment, when I do:
mongoClient.getReplicaSetStatus();
it returns null even though I have a replica set.
I am passing the mongos IP to the mongo client.
Most MongoDB drivers, and in particular the Java driver you are using, will throw an exception if you try to set the REPLICA_ACKNOWLEDGED write concern when it's not possible to get an acknowledgement from two or more nodes.
From the docs:
WriteConcern.REPLICA_ACKNOWLEDGED Tries to write to two separate nodes. [...] will
throw an exception if two writes are not possible.
See the following for more details:
http://docs.mongodb.org/manual/reference/write-concern/
http://docs.mongodb.org/ecosystem/drivers/java-replica-set-semantics/
In my testing with the mongo shell, if you pass the REPLICA_ACKNOWLEDGED (formerly REPLICA_SAFE) concern to the getlasterror command, you get an error when you are not communicating with a replica set. When talking to a mongos process, the error is:
{
"singleShard" : "localhost:30001",
"n" : 0,
"connectionId" : 3,
"wnote" : "no replication has been enabled, so w=2.0 won't work",
"err" : "norepl",
"ok" : 1
}
It is not the case that the client will hang forever when wtimeout is not specified; that would only happen if there is a replica set but two nodes remain unavailable for writes indefinitely.
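For example, pairing w with a wtimeout bounds how long getlasterror will wait for replication (the values here are only illustrative):
mongos> db.coll.insert({}); db.runCommand({getlasterror: 1, w: 2, wtimeout: 5000})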
Note that using "majority" as the w value for the write concern does work correctly through mongos; note the difference in the writeConcern responses:
mongos> db.coll.insert({}); db.runCommand({getlasterror:1,w:"majority"})
{
"singleShard" : "localhost:30001",
"n" : 0,
"connectionId" : 3,
"err" : null,
"ok" : 1
}
First verify that your replica set has a PRIMARY using the mongo shell command rs.status().
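For example, a small mongo shell sketch that prints which member currently reports PRIMARY:
rs.status().members.forEach(function (m) {
    if (m.stateStr === "PRIMARY") {
        print("PRIMARY is " + m.name);
    }
});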
Then if that worked, verify that you are connecting to the database correctly:
MongoClient mongoClient = new MongoClient( "hostname" , 27017 );
If both of those are true, there should be no reason for mongoClient.getReplicaSetStatus() to return null; it should return a ReplicaSetStatus object.

Can't add member into MongoDB replica-set

I am using MongoDB 2.4.3 and following the tutorial:
http://docs.mongodb.org/manual/tutorial/deploy-replica-set/
But when adding the other members into replica-set, get the following error:
root@vm3:~# mongo
MongoDB shell version: 2.4.3
connecting to: test
rs1:PRIMARY> rs.add("vm1")
{
"errmsg" : "exception: set name does not match the set name host vm1:27017 expects",
"code" : 13145,
"ok" : 0
}
rs1:PRIMARY> rs.add("vm4")
{
"errmsg" : "exception: set name does not match the set name host vm4:27017 expects",
"code" : 13145,
"ok" : 0
}
vm1, vm3, and vm4 know each other because I configured their /etc/hosts files correctly.
Any ideas? I don't understand what this error message means!
After restarting all VMs, it works now:
root@vm3:~# mongo
MongoDB shell version: 2.4.3
connecting to: test
rs1:PRIMARY> rs.add("vm4")
{ "ok" : 1 }
rs1:PRIMARY> rs.add("vm1")
{ "ok" : 1 }
In my case, just restarting the virtual machines fixed everything.
If you are re-installing a MongoDB instance, the old replSet configuration may still be living in the data files on disk. I had the same problem setting up a new replica set; it came from changing the replica set name after bringing up instances with an older replSet name. I deleted the data files, ran my install scripts again, and it worked just fine.
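If you want to confirm that an old replSet configuration is still sitting in the data files before wiping anything, you can start the node as a standalone (without --replSet), connect with the mongo shell, and inspect the local database; the stored document, if any, will show the old set name:
use local
db.system.replset.find().pretty()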