Our config version of 1 is no larger than the version on pqr.xyz.com:27019, which is 1 - mongodb

I was trying out SSL-enabled MongoDB 3.7.9 replica sets; the commands are below.
I ran this command on abc.xyz.com:27019:
> rs.initiate({ _id: "rs0", configsvr: true, members: [{ _id : 0, host : "pqr.xyz.com:27019" }, { _id : 1, host : "abc.xyz.com:27019" }]});
{
    "ok" : 0,
    "errmsg" : "Our config version of 1 is no larger than the version on pqr.xyz.com:27019, which is 1",
    "code" : 103,
    "codeName" : "NewReplicaSetConfigurationIncompatible",
    "$gleStats" : {
        "lastOpTime" : Timestamp(0, 0),
        "electionId" : ObjectId("000000000000000000000000")
    },
    "$clusterTime" : {
        "clusterTime" : Timestamp(1536753816, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    },
    "lastCommittedOpTime" : Timestamp(0, 0)
}
I did not find any hint on the internet. Can someone guide me through this? I am not able to see the replica set created; ideally it should have been created.

This error says that you are trying to initiate a new replica set, but the node at pqr.xyz.com:27019 already holds a replica set configuration with the same _id. MongoDB compares the configuration version values precisely to avoid applying mismatched replica set configs. See the server source for more details.
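One way to recover, assuming the config on pqr.xyz.com:27019 is left over from an earlier rs.initiate() attempt and its data can be discarded (a sketch, not a verified procedure for your deployment):

// Connect to the conflicting node and inspect its stale config:
//   mongo --host pqr.xyz.com:27019
rs.conf()
// If it can be discarded: shut the node down, restart it WITHOUT --replSet, then:
use local
db.dropDatabase()   // removes the old replica set configuration
// Finally, restart with --replSet rs0 and run rs.initiate() from ONE node only.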


How do I increase WT allocation_size - MongoDB

On this [link](https://source.wiredtiger.com/3.1.0/tune_page_size_and_comp.html)
there is a note that allocation_size can be tuned between 512B and 128MB.
How do we modify that variable and start the mongod process so that it has an allocation_size of 16KB, for example? The default is 4KB.
This does not work:
replica1:PRIMARY> db.adminCommand( { "setParameter": 1, "wiredTigerEngineRuntimeConfig": "allocation_size=64KB" } )
{ "ok" : 0, "errmsg" : "WiredTiger reconfiguration failed with error code (22): Invalid argument", "code" : 2, "codeName" : "BadValue" }
replica1:PRIMARY> db.createCollection( "users", { storageEngine: { wiredTiger: { configString: "allocation_size=64KB" } } } )
{ "ok" : 0, "errmsg" : "22: Invalid argument", "code" : 2, "codeName" : "BadValue" }
The way the parameter is being modified is correct; the error message is about a different issue. The parameters internal_page_max and leaf_page_max, whose defaults are 4KB and 32KB respectively, must be multiples of allocation_size, and this has to be ensured when setting allocation_size. So, in your case, you have to raise the other two to at least 64KB if you are setting allocation_size to 64KB, as shown below:
rs0:PRIMARY> db.createCollection( "users", { storageEngine: { wiredTiger: { configString: "allocation_size=64KB,internal_page_max=64KB,leaf_page_max=64KB" } } } )
{
    "ok" : 1,
    "$clusterTime" : {
        "clusterTime" : Timestamp(1595909936, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    },
    "operationTime" : Timestamp(1595909936, 1)
}
Source and example from: WT-6510
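To confirm the options actually took effect, the collection stats expose the WiredTiger creation string (a quick sanity check against the users collection created above):

// The creationString echoes the configString passed at collection creation:
rs0:PRIMARY> db.users.stats().wiredTiger.creationString
// ...should contain allocation_size=64KB,internal_page_max=64KB,leaf_page_max=64KB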

Is it possible to make multiple mongo routers connect to just one shard on a MongoDB sharded cluster?

I'd like to make a mongo cluster with multiple mongo routers (mongos) and just one shard (mongod).
So I made two mongo routers named 'mongorouter-1' and 'mongorouter-2', and also made one shard named 'mongod'.
On mongorouter-1 I added 'mongod' successfully with this command:
sh.addShard("mongod:27017")
It works well, but on mongorouter-2 the same command produced an error:
mongos> sh.addShard("mongod:27017")
{
    "ok" : 0,
    "errmsg" : "E11000 duplicate key error collection: admin.system.version index: _id_ dup key: { : \"shardIdentity\" }",
    "code" : 11000,
    "codeName" : "DuplicateKey",
    "operationTime" : Timestamp(1558591937, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1558591937, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
On mongorouter-1, the sh.status() output is this:
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("5ce6683cc490bfc9325389cb")
  }
  shards:
        { "_id" : "shard0000", "host" : "mongod:27017", "state" : 1 }
  active mongoses:
        "4.0.6" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled: yes
        Currently running: no
        Failed balancer rounds in last 5 attempts: 0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        { "_id" : "config", "primary" : "config", "partitioned" : true }
and on mongorouter-2, the sh.status() output is this:
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("5ce668176a4dcc52fd230ac9")
  }
  shards:
  active mongoses:
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled: yes
        Currently running: no
        Failed balancer rounds in last 5 attempts: 0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        { "_id" : "config", "primary" : "config", "partitioned" : true }
I don't know how to make multiple mongo routers connect to just one shard.
If you know the solution, please help me. Thanks in advance.
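One observation on the output above (a hint, not a confirmed diagnosis): the clusterId values differ between the two routers, which suggests each mongos was started against a different config server replica set, and a shard can belong to only one cluster. You can compare the IDs directly on each router:

// Run on each mongos and compare; the clusterId fields must match:
mongos> db.getSiblingDB("config").version.findOne()
// Both routers need the SAME --configdb string at startup, e.g.
// (hostnames hypothetical): mongos --configdb cfgrs/cfg1:27019,cfg2:27019,cfg3:27019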

Uncaught exception 'MongoCursorException' with message 'Couldn't get connection: No candidate servers found'

Problem Description
I have a three-member replica set, and a PHP web front end that a) writes a record, and then b) does a .find() on the collection and returns all documents in the database.
To better understand how replica sets work, I did the following:
stopped the mongo service on the primary server (mongohost1). The web page kept working.
stopped the mongo service on the server that got promoted to primary (mongohost2). At this point, even though I have another mongo host (mongohost3) with the same database, the PHP web app fails with the above error message.
I was expecting that the system would let me at least read the records from the database, even if the write failed.
What I've checked / tried so far:
All of the hosts are reachable. I've tried pinging by hostname from each box and it all works.
Here's how the replica set has been configured as per mongohost3:
jlrs0:SECONDARY> cfg = rs.config()
{
    "_id" : "jlrs0",
    "version" : 5,
    "members" : [
        {
            "_id" : 0,
            "host" : "mongohost1.test.mm.org:27017",
            "priority" : 3
        },
        {
            "_id" : 1,
            "host" : "mongohost2.test.mm.org:27017",
            "priority" : 2
        },
        {
            "_id" : 2,
            "host" : "mongohost3.test.mm.org:27017",
            "priority" : 2
        }
    ]
}
jlrs0:SECONDARY>
and the status of each member in the replica set per mongohost3:
jlrs0:SECONDARY> rs.status()
{
    "set" : "jlrs0",
    "date" : ISODate("2014-11-19T15:16:21Z"),
    "myState" : 2,
    "members" : [
        {
            "_id" : 0,
            "name" : "mongohost1.test.mm.org:27017",
            "health" : 0,
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "optime" : Timestamp(1416419914, 1),
            "optimeDate" : ISODate("2014-11-19T17:58:34Z"),
            "lastHeartbeat" : ISODate("2014-11-19T15:16:20Z"),
            "lastHeartbeatRecv" : ISODate("2014-11-19T14:06:49Z"),
            "pingMs" : 0
        },
        {
            "_id" : 1,
            "name" : "mongohost2.test.mm.org:27017",
            "health" : 0,
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "optime" : Timestamp(1416419914, 5),
            "optimeDate" : ISODate("2014-11-19T17:58:34Z"),
            "lastHeartbeat" : ISODate("2014-11-19T15:16:17Z"),
            "lastHeartbeatRecv" : ISODate("2014-11-19T14:10:58Z"),
            "pingMs" : 0
        },
        {
            "_id" : 2,
            "name" : "mongohost3.test.mm.org:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 451417,
            "optime" : Timestamp(1416419914, 5),
            "optimeDate" : ISODate("2014-11-19T17:58:34Z"),
            "self" : true
        }
    ],
    "ok" : 1
}
Here's the PHP code to connect:
$m = new MongoClient("mongodb://mongohost1.test.mm.org:27017,mongohost2.test.mm.org:27017,mongohost3.test.mm.org:27017/?replicaSet=jlrs0");
I'm still reading up on replica sets etc., so I'm sure it's something that I've missed / neglected to set up.
For example, I haven't set up an arbiter...
Not sure if it's related or not, but just in case, I thought I'd mention it. I'm not sure what else to check.
Thanks.
You need to set your read preference to primaryPreferred.
You need to specify that it is OK to read from a secondary when the primary is not available.
By default, it is not.
Link to documentation
Please also check your PHP mongo PECL library version.
Before 1.5.6 there were two bugs related to the PHP driver not selecting a new primary after a failover in a replica set.
Be sure to have pecl mongo at least 1.5.6.
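Concretely, the read preference can go straight into the connection string from the question (a sketch; primaryPreferred reads from the primary when one is available and falls back to a secondary otherwise):

// Same connection string as above, with the read preference appended:
$m = new MongoClient("mongodb://mongohost1.test.mm.org:27017,mongohost2.test.mm.org:27017,mongohost3.test.mm.org:27017/?replicaSet=jlrs0&readPreference=primaryPreferred");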

MongoDB replication issue - when adding one more node, the primary stops responding

I am using MongoDB replication.
Here is the output of rs.conf():
firstset:PRIMARY> rs.conf();
{
    "_id" : "firstset",
    "version" : 43,
    "members" : [
        {
            "_id" : 7,
            "host" : "primaryip:10002"
        },
        {
            "_id" : 10,
            "host" : "arbiterip:10009",
            "votes" : 2,
            "arbiterOnly" : true
        },
        {
            "_id" : 12,
            "host" : "secondaryip:10006"
        }
    ]
}
Now I want to add another secondary instance. So I just started another mongod process on port 10004 and ran the command
rs.add("secondaryip:10004");
I got the output
{ "ok" : 1 }
and the state of the newly attached instance was
"stateStr" : "STARTUP2",
but at the same time my application was not able to connect to the primary instance. Why?
Please help me to solve this issue.
This was a bug in MongoDB, resolved by the MongoDB team in version 2.6.2.
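Independent of that bug, a common way to soften the impact of adding a member (a sketch, using the hostname from the question; the votes/priority fields in rs.add() are supported on newer versions) is to add it as a non-voting, priority-0 node so it cannot influence elections while it runs its initial sync, then restore the defaults with rs.reconfig() once it reaches SECONDARY:

// Add the new member without votes or priority first:
rs.add({ host: "secondaryip:10004", priority: 0, votes: 0 })
// After it reaches SECONDARY (check rs.status()), restore the defaults:
cfg = rs.conf()
cfg.members[3].priority = 1   // index 3 is assumed; locate the member by host in practice
cfg.members[3].votes = 1
rs.reconfig(cfg)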

mongodb - All nodes in replica set are primary

I am trying to configure a replica set with two nodes, but when I execute rs.add("node2") and then rs.status(), both nodes are set to PRIMARY. Also, when I run rs.status() on the other node, the only node that appears is the local one.
Edit1:
rs.status() output:
{
    "set" : "rs0",
    "date" : ISODate("2012-09-22T01:01:12Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "node1:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 70968,
            "optime" : Timestamp(1348207012000, 1),
            "optimeDate" : ISODate("2012-09-21T05:56:52Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "node2:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 68660,
            "optime" : Timestamp(1348205568000, 1),
            "optimeDate" : ISODate("2012-09-21T05:32:48Z"),
            "lastHeartbeat" : ISODate("2012-09-22T01:01:11Z"),
            "pingMs" : 0
        }
    ],
    "ok" : 1
}
Edit2: I tried doing the same thing with 3 different nodes and I got the same result (rs.status() says I have a replica set with three primary nodes). Is it possible that this problem is caused by some specific configuration of the network?
If you issue rs.initiate() on both members of the replica set before rs.add(), then both will come up as primary: each node initiates its own independent one-member replica set.
You should only use rs.initiate() on one of the members of the replica set, the one that you intend to be primary initially. Then you can rs.add() the other member to the replica set, as shown below.
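The correct sequence looks like this (a sketch, run from node1 only):

// On node1 only:
rs.initiate()            // node1 becomes PRIMARY of a one-member set
rs.add("node2:27017")    // node2 joins and syncs as a SECONDARY
rs.status()              // node2 should move from STARTUP2 to SECONDARY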
The answer above does not explain how to fix it; I kind of got it working through trial and error.
I cleaned up the data directory (as in rm -rf *) and restarted these PRIMARY nodes, except one, then added them back. It seems to work.
Edit1
The nice little trick below did not seem to work for me,
so I logged into the mongod console using mongo <hostname>:27018.
Here is how the shell looks:
rs2:PRIMARY> rs.conf()
{
    "_id" : "rs2",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "ip-10-159-42-911:27018"
        }
    ]
}
I decided to change it to secondary. So,
rs2:PRIMARY> var c = {
... "_id" : "rs2",
... "version" : 1,
... "members" : [
... {
... "_id" : 1,
... "host" : "ip-10-159-42-911:27018",
... "priority": 0.5
... }
... ]
... }
rs2:PRIMARY> rs.reconfig(c, { "force": true})
Mon Nov 11 19:46:39.244 DBClientCursor::init call() failed
Mon Nov 11 19:46:39.245 trying reconnect to ip-10-159-42-911:27018
Mon Nov 11 19:46:39.245 reconnect ip-10-159-42-911:27018 ok
reconnected to server after rs command (which is normal)
rs2:SECONDARY>
Now it is secondary. I do not know if there is a better way, but this seems to work.
HTH