Mongo Replica Sets - mongodb

I just set up a replica set according to the instructions online.
I have a config like this:
{
    "_id" : "rs0",
    "members" : [
        {
            "_id" : 0,
            "host" : "10.0.8.10:27017"
        }
    ]
}
I ran the rs.initiate() command. No issues.
OK, so when I went to set up the second node, I did the same thing (with a different IP, of course).
Again I ran the rs.initiate() command without issues.
I ran the rs.add command, providing the IP/port of my first instance. No problem.
Finally, from the other node, I ran the rs.add command, referencing the former node.
Now I got this:
rs0:PRIMARY> rs.add("10.0.0.10:27017");
{
    "errmsg" : "exception: member 10.0.0.10:27017 has a config version >= to the new cfg version; cannot change config",
    "code" : 13341,
    "ok" : 0
}
What does that mean? And how do I fix it? Both nodes were created from identical Mongo releases.

The rs.initiate() command should only be used once for any given replica set. You have the option of specifying all members of the set as options to the command, like so:
rs.initiate({_id:'rs0', members:[{_id:0, host:'10.0.8.10:27017'},{_id:1, host:'10.0.0.10:27017'}]})
Alternatively, once the replica set is configured, you can add new members using rs.add(). If you choose this method, you initialize the set with a single member, then execute rs.add() from the PRIMARY:
rs.initiate({_id:'rs0', members:[{_id:0, host:'10.0.8.10:27017'}]})
rs.add('10.0.0.10:27017')

You don't need to run rs.initiate() on the other node. Once the node is added to the config, running rs.initiate() on the first node will initiate the entire replica set.
The error is caused by running rs.initiate() a second time: the second node had already initiated its own configuration, so its config version is greater than or equal to the version of the config that rs.add() tried to push to it.
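One way to recover, as a sketch: assuming the second node holds no data you need and its data path is /data/db (both assumptions, adjust to your setup), clear its stale replica set state and re-add it from the first node's PRIMARY.
# on the second node: stop mongod and wipe its stale local state
mongod --shutdown --dbpath /data/db
rm -rf /data/db/*                # assumes this node holds no data you need
mongod --replSet rs0 --dbpath /data/db --port 27017
Then, from a mongo shell connected to the first node:
rs0:PRIMARY> rs.add("10.0.0.10:27017")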

Related

Mongodb : error - New and old configurations differ in replica set ID

Scenario
I am working on restoring a backup taken from one replica set onto a different replica set; let's call them replica set A and replica set B.
The backup is an AWS EBS snapshot.
The available backup is of set A and has to be restored onto set B.
I had initially saved the original configuration of a set B node with cfg = rs.config().
After mounting an EBS volume created from the set A snapshot onto a node of set B, I am able to connect to the db. The configuration is that of set A, since the volume was created from the set A backup, which means all hostnames in the existing configuration after restoration belong to set A.
Issue :
While trying to force the existing configuration, I am now running into the issue below.
rs.reconfig(cfg,{force:true})
{
    "ok" : 0,
    "errmsg" : "New and old configurations differ in replica set ID; old was 5c4a6ab3b5306ee3ec95dae4, and new is 59dc23bfa547d208144dd564",
    "code" : 103,
    "codeName" : "NewReplicaSetConfigurationIncompatible",
    "operationTime" : Timestamp(1616525693, 4976),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1616573470, 22),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
Question
What is the significance of the replica set ID?
If I make the replica set ID in the configuration I am trying to force the same as the old one, what side effects, if any, will that have?
How do replica set configs get synced across nodes? (I am not looking for a command, but the underlying details.)
Let me know if more details are needed to add clarity to the question.
Note: the hosts are different in set A and set B, and both follow a replication model with an arbiter node.
MongoDB makes some sanity checks when dealing with replica sets:
the replica set name is specified in the configuration file or startup command line
that name is used as the replica set ID when creating the replica set
the replica set configuration document is copied to each other node when they are added
each node stores a copy of the replica set configuration document in its local database
When starting up, and when a new replica set configuration document is received, mongod will check that the replica set ID matches what it already has, and that its own host name appears in the members list of the new configuration. If anything doesn't match, it transitions to a state that will not accept writes.
This helps to ensure the consistency of the data across replica set members.
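You can inspect both pieces of state from the shell. A sketch, assuming MongoDB 3.2 or newer, where the ID is stored under settings.replicaSetId:
rs.conf().settings.replicaSetId    // e.g. ObjectId("5c4a6ab3b5306ee3ec95dae4")
use local
db.system.replset.findOne()        // the stored copy of the config document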
Basic steps to restore a replica set from backups, taken from the docs (a shell sketch of these steps follows the list); for more detail, see
https://docs.mongodb.com/manual/tutorial/restore-replica-set-from-backup/index.html
Obtain backup MongoDB Database files.
Drop the local database if it exists in the backup.
Start a new single-node replica set.
Connect a mongo shell to the mongod instance.
Initiate the new replica set.
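A minimal sketch of those steps, assuming the restored data files live at /data/db and the new set is named rs0 (both placeholders, adjust to your setup):
# start mongod standalone on the restored files, drop the old replica set state
mongod --dbpath /data/db --port 27017
mongo --port 27017 --eval 'db.getSiblingDB("local").dropDatabase()'
# restart as a new single-node replica set and initiate it
mongod --shutdown --dbpath /data/db
mongod --dbpath /data/db --port 27017 --replSet rs0
mongo --port 27017 --eval 'rs.initiate()'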

Error in configuration of Mongodb sharded cluster

I have Error in configuration of Mongodb sharded cluster.
I tried all the possibilities of rs.add("127.0.0.1:27002"), rs.add("localhost:27002") and rs.add("hostname:27002") for sharding.
But I am getting error:
{
    "ok" : 0,
    "errmsg" : "Either all host names in a replica set configuration must be localhost references, or none must be; found 1 out of 2",
    "code" : 103
}
I assume that you are connected to your primary and are trying to add the secondary nodes. Start a mongo shell against the primary:
mongo localhost:30001
Assuming this is the primary, type this command in its shell:
rs.status()
You'll get to know the name of your primary. The names of your secondaries will be the same, with just the difference of the port number. The error means the configuration currently mixes localhost and non-localhost references, so every member must be added using the same host-name form.
Once you get the name, just do rs.add("name:port_number") and you'll be able to add them.
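For example, a sketch with a made-up host name (myhost is whatever rs.status() reports for the primary):
rs.status().members[0].name       // e.g. "myhost:30001"
rs.add("myhost:27002")            // same host-name form, different port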
rs.add() is used to build a replica set, not a sharded cluster.
If you want to add a shard to a sharded cluster, use sh.addShard("host:port").
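For instance, a sketch to be run from a mongos (the set name and hosts are placeholders):
sh.addShard("shardhost:27002")                            // standalone shard
sh.addShard("rs0/shardhost1:27002,shardhost2:27002")      // replica set shard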

The `cleanupOrphaned` command doesn't exist in MongoDB 3.0.2?

I deployed a sharded cluster with MongoDB version 3.0.2.
I checked the MongoDB 3.0 manual and found the command cleanupOrphaned.
http://docs.mongodb.org/manual/reference/command/cleanupOrphaned/#log-files
When I type this command from a mongos' admin database in the following format:
db.runCommand({cleanupOrphaned:"mydb.mycol"})
it returns:
{ "ok" : 0, "errmsg" : "no such cmd: cleanupOrphaned", "code" : 59 }
Does anybody know why this happens?
Run cleanupOrphaned in the admin database directly on the mongod instance that is the primary replica set member of the shard. Do not run cleanupOrphaned on a mongos instance; that is what produces the "no such cmd" error. This is stated in the same link.
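In other words, a sketch (shard-primary:27018 is a placeholder for your shard's primary mongod):
mongo shard-primary:27018         // connect to the shard's primary, not mongos
> use admin
> db.runCommand({ cleanupOrphaned: "mydb.mycol" })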

Connecting to Mongo in Replica set mode

I have a single Mongo instance started in replica set mode. I can't seem to connect and run any queries in the Mongo shell, however. I get the following:
error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
I set SlaveOk like so:
db.getMongo().setSlaveOk()
..but I still get an error:
error: {
    "$err" : "not master or secondary; cannot currently read from this replSet member",
    "code" : 13436
}
I can't seem to find a straight answer on Google: how do I connect to my replica set using the mongo shell?
I got the same problem and solved it using
rs.initiate()
If you are connecting to a node in a replica set that is not the master, you need to explicitly tell the client that it is OK that you are not connected to a master.
You can do this by calling
rs.slaveOk()
You can then perform your query.
Note that you will only be able to perform reads, not writes, when connected to a slave node.
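A short sketch of the sequence (the prompt and collection name are illustrative):
rs0:SECONDARY> db.test.find()          // fails: not master and slaveOk=false
rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> db.test.find()          // reads now succeed
rs0:SECONDARY> db.test.insert({x: 1})  // writes still fail on a secondary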
You are connected to a node that is in neither the PRIMARY nor the SECONDARY state. This node could be an arbiter, or possibly a secondary in recovery mode. For example, in a replica set of 3 nodes (one primary, one secondary, and one arbiter), I would get the same error if I connected to the arbiter and issued a query, even after setting slaveOk to true. The shell's command-line prompt should indicate what state the node you are connected to is in:
foo:ARBITER> db.test.find()
error: {
    "$err" : "not master or secondary; cannot currently read from this replSet member",
    "code" : 13436
}
Have you tried db.getMongo().setSlaveOk(true)?
I also got the error, but when I tried connecting to the secondary node using the machine name instead of 'localhost' or 127.0.0.1, the error went away.
This error is only displayed when you are running an instance that is part of a replica set in standalone mode without completely removing it from the replica set.
E.g. you restart your instance on a different port but don't remove the --replSet option when starting it.
This starts it neither as a primary nor as a secondary, hence the error
not master or secondary;
Depending on what you intended to do initially, either restart the instance on the correct port and with the correct --replSet option; this adds it back to the replica set and gets rid of this error.
If you intended to run the instance standalone for some time (say, to create an index), then start it on a different port WITHOUT the --replSet option, as in the sketch below.
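A minimal sketch of both restarts (port numbers, dbpath, and set name are placeholders):
# member mode: rejoin the replica set on its usual port
mongod --port 27017 --dbpath /data/db --replSet rs0
# maintenance mode: standalone on another port, no --replSet
mongod --port 37017 --dbpath /data/db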
I got the same error while running aggregate() on a staging server with two replica set members. I think you need to change the read preference to 'secondaryPreferred'.
Just put .read('secondaryPreferred') after the query function.
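.read() is driver-side syntax; the mongo shell equivalent would be something like this sketch (mycol is a placeholder collection):
db.getMongo().setReadPref("secondaryPreferred")    // for the whole connection
db.mycol.find().readPref("secondaryPreferred")     // for a single query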

Setup Shards: Should I install MongoDB on the following servers

Following the O'Reilly Scaling MongoDB book (i.e. page 27), I saw the following command:
Once you’re connected, you can add a shard. There are two ways to add
a shard, depending on whether the shard is a single server or a
replica set. Let’s say we have a single server, sf-02, that we’ve been
using for data. We can make it the first shard by running the addShard
command:
> db.runCommand({"addShard" : "sf-02:27017"})
{ "shardAdded" : "shard0000", "ok" : 1 }
Question 1>: What should be done on the sf-02 server?
Should I also install MongoDB on it? If so, which package?
For example, if we had a replica set creatively named replica set “rs”
with members rs1-a, rs1-b, and rs1-c, we could say:
> db.runCommand({"addShard" : "rs/rs1-a,rs1-c"})
{ "shardAdded" : "rs", "ok" : 1 }
Question 2>: Where is "rs" located?
Question 3>: Do rs1-a and rs1-c share the same machine?
Reply 1: You should run mongod with the --shardsvr option to start it as a shard server. Each shard server has to know that it will receive connections from a mongos (the shard router). See the sketch below.
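For example (a sketch; the port and dbpath are placeholders):
mongod --shardsvr --port 27017 --dbpath /data/shard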
Reply 2: 'rs' is the name of a replica set, and a set is just a group of machines (usually 3). So it is not located on a single machine; it is an abstract entity that represents the group of machines in the set.
Reply 3: No. For testing purposes you can run a replica set on a single machine, but the purpose of a replica set is failover; in production you should use a different machine for every member of the set.
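For a local test only, a sketch (ports and data paths are made up):
mongod --replSet rs --port 27018 --dbpath /data/rs1-a
mongod --replSet rs --port 27019 --dbpath /data/rs1-b
mongod --replSet rs --port 27020 --dbpath /data/rs1-c
Then, from a mongo shell connected to one of them:
rs.initiate({_id:'rs', members:[{_id:0, host:'localhost:27018'},{_id:1, host:'localhost:27019'},{_id:2, host:'localhost:27020'}]})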