MongoDB secondaries not catching up

I have a replica set whose primary I am trying to replace with a server that has more memory and more disk space. So I RAIDed a couple of disks together on the new machine, rsync'd the data over from a secondary, and added it to the replica set. After checking rs.status(), I noticed that all the secondaries are about 12 hours behind the primary. So when I try to force the new server into the primary spot it won't work, because it is not up to date.
This seems like a big issue, because if the primary fails, we are at least 12 hours behind, and in some cases almost 48 hours behind.
The oplogs all overlap and the oplog size is fairly large. The only thing I can figure is that I am performing a lot of writes and reads on the primary, which could be keeping the server locked and preventing proper catch-up.
Is there a way to possibly force a secondary to catch up to the primary?
There are currently five servers; the last two are intended to replace two of the other nodes.
The node with _id 6 is the one meant to replace the primary. The node furthest behind the primary's optime is a little over 48 hours behind.
{
    "set" : "gryffindor",
    "date" : ISODate("2011-05-12T19:34:57Z"),
    "myState" : 2,
    "members" : [
        {
            "_id" : 1,
            "name" : "10******:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 20231,
            "optime" : {
                "t" : 1305057514000,
                "i" : 31
            },
            "optimeDate" : ISODate("2011-05-10T19:58:34Z"),
            "lastHeartbeat" : ISODate("2011-05-12T19:34:56Z")
        },
        {
            "_id" : 2,
            "name" : "10******:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 20231,
            "optime" : {
                "t" : 1305056009000,
                "i" : 400
            },
            "optimeDate" : ISODate("2011-05-10T19:33:29Z"),
            "lastHeartbeat" : ISODate("2011-05-12T19:34:56Z")
        },
        {
            "_id" : 3,
            "name" : "10******:27018",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 20229,
            "optime" : {
                "t" : 1305228858000,
                "i" : 422
            },
            "optimeDate" : ISODate("2011-05-12T19:34:18Z"),
            "lastHeartbeat" : ISODate("2011-05-12T19:34:56Z")
        },
        {
            "_id" : 5,
            "name" : "10*******:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 20231,
            "optime" : {
                "t" : 1305058009000,
                "i" : 226
            },
            "optimeDate" : ISODate("2011-05-10T20:06:49Z"),
            "lastHeartbeat" : ISODate("2011-05-12T19:34:56Z")
        },
        {
            "_id" : 6,
            "name" : "10*******:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "optime" : {
                "t" : 1305050495000,
                "i" : 384
            },
            "optimeDate" : ISODate("2011-05-10T18:01:35Z"),
            "self" : true
        }
    ],
    "ok" : 1
}
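
To quantify the lag and the remaining oplog headroom, the shell's replication helpers can be used (a sketch; later shells rename db.printSlaveReplicationInfo() to db.printSecondaryReplicationInfo()):

// On the primary: configured oplog size and the time span between
// its first and last entries, i.e. the window available for catch-up
db.printReplicationInfo()
// On the primary: each secondary's optime and how far it lags behind
db.printSlaveReplicationInfo()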

I'm not sure why the syncing has failed in your case, but one way to brute-force a resync is to remove the data files on the secondary and restart the mongod. It will initiate a resync. See http://www.mongodb.org/display/DOCS/Halted+Replication. It is likely to take quite some time, depending on the size of your database.
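
A minimal sketch of that brute-force resync (back up the data first; the dbpath is whatever your deployment uses, not a prescribed location):

// 1. Cleanly shut down the lagging secondary from a mongo shell:
db.getSiblingDB("admin").shutdownServer()
// 2. On that host, delete everything under the dbpath (including
//    subdirectories), ideally after moving a copy aside as a backup.
// 3. Restart mongod with its usual options; on startup the node
//    rejoins the set and performs a full initial sync from scratch.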

After looking through everything, I found a single error, which led me back to a map/reduce job that was run on the primary and hit this issue: https://jira.mongodb.org/browse/SERVER-2861. So when replication was attempted, it failed to sync because of a faulty/corrupt operation in the oplog.
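
As for steering the replacement node into the primary slot once it has caught up, replica set priorities can do that. A sketch, assuming the member with _id 6 sits at index 4 of the members array (as in the rs.status() above) and that the server version supports priorities above 1 (MongoDB 2.0+):

cfg = rs.conf()
// favour the replacement node in elections
cfg.members[4].priority = 2
rs.reconfig(cfg)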

Related

Resync a Mongo Replica Set

I have a replica set, and to free some disk space I want to resync my replica set members.
Thus, on the SECONDARY member of the replica set, I emptied the /var/lib/mongodb/ directory, which holds the data for the database.
When I open a shell to the replica set and execute rs.status(), the following is shown:
{
    "set" : "rs1",
    "date" : ISODate("2016-12-13T08:28:00.414Z"),
    "myState" : 5,
    "term" : NumberLong(29),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "members" : [
        {
            "_id" : 0,
            "name" : "10.20.2.87:27017",
            "health" : 1.0,
            "state" : 5,
            "stateStr" : "SECONDARY",
            "uptime" : 148,
            "optime" : {
                "ts" : Timestamp(6363490787761586, 1),
                "t" : NumberLong(29)
            },
            "optimeDate" : ISODate("2016-12-13T07:54:16.000Z"),
            "infoMessage" : "could not find member to sync from",
            "configVersion" : 3,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "10.20.2.95:27017",
            "health" : 1.0,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 146,
            "optime" : {
                "ts" : Timestamp(6363490787761586, 1),
                "t" : NumberLong(29)
            },
            "optimeDate" : ISODate("2016-12-13T07:54:16.000Z"),
            "lastHeartbeat" : ISODate("2016-12-13T08:27:58.435Z"),
            "lastHeartbeatRecv" : ISODate("2016-12-13T08:27:59.447Z"),
            "pingMs" : NumberLong(0),
            "electionTime" : Timestamp(6363486827801739, 1),
            "electionDate" : ISODate("2016-12-13T07:38:54.000Z"),
            "configVersion" : 3
        },
        {
            "_id" : 2,
            "name" : "10.20.2.93:30001",
            "health" : 1.0,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 146,
            "lastHeartbeat" : ISODate("2016-12-13T08:27:58.437Z"),
            "lastHeartbeatRecv" : ISODate("2016-12-13T08:27:59.394Z"),
            "pingMs" : NumberLong(0),
            "configVersion" : 3
        }
    ],
    "ok" : 1.0
}
Why does my secondary member show "could not find member to sync from" even though my primary is up and running?
My collection is sharded over 6 servers, and I get this message on 2 replica set members: the ones whose members array lists the SECONDARY member first when requesting the replica set status.
I really would like to get rid of this error message.
It scares me :-)
Kind regards
I had a similar problem, and it turned out the heartbeat timeout was too short; increasing it solved the problem.
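
A sketch of raising that timeout from the shell: heartbeatTimeoutSecs is the relevant replica set setting (default 10 seconds), while the value 30 below is only an illustrative choice:

// Run on the primary: fetch the current config, raise the
// heartbeat timeout, and push the new config to the set.
cfg = rs.conf()
cfg.settings = cfg.settings || {}  // older configs may lack a settings subdocument
cfg.settings.heartbeatTimeoutSecs = 30
rs.reconfig(cfg)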

Initialise mongodb replica set never completing

I am trying to initialise a mongodb replica set, but whenever I add the new node it never makes it past state 3 (RECOVERING). Here is a snapshot from rs.status():
rs0:OTHER> rs.status()
{
    "set" : "rs0",
    "date" : ISODate("2015-04-27T14:09:21.973Z"),
    "myState" : 3,
    "members" : [
        {
            "_id" : 0,
            "name" : "10.0.1.184:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 6899,
            "optime" : Timestamp(1430143759, 9),
            "optimeDate" : ISODate("2015-04-27T14:09:19Z"),
            "lastHeartbeat" : ISODate("2015-04-27T14:09:20.133Z"),
            "lastHeartbeatRecv" : ISODate("2015-04-27T14:09:20.160Z"),
            "pingMs" : 0,
            "electionTime" : Timestamp(1430127299, 1),
            "electionDate" : ISODate("2015-04-27T09:34:59Z"),
            "configVersion" : 109483
        },
        {
            "_id" : 1,
            "name" : "10.0.1.119:27017",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 6899,
            "lastHeartbeat" : ISODate("2015-04-27T14:09:20.133Z"),
            "lastHeartbeatRecv" : ISODate("2015-04-27T14:09:20.166Z"),
            "pingMs" : 0,
            "configVersion" : 109483
        },
        {
            "_id" : 2,
            "name" : "10.0.1.179:27017",
            "health" : 1,
            "state" : 3,
            "stateStr" : "RECOVERING",
            "uptime" : 15651,
            "optime" : Timestamp(1430136863, 2),
            "optimeDate" : ISODate("2015-04-27T12:14:23Z"),
            "infoMessage" : "could not find member to sync from",
            "configVersion" : 109483,
            "self" : true
        }
    ],
    "ok" : 1
}
Occasionally the infoMessage "could not find member to sync from" is visible on the new node. I note that the oplog on the current primary spans only 0.12 hours (1.7 GB) and that it takes approximately 2 hours to copy over the majority of the dataset (judging by network usage).
Is it correct to assume that the oplog must cover more than this 2-hour period for the initial sync to complete successfully?
It was indeed necessary for the oplog to be LARGER (in time) than the expected time to synchronise the data between two replicas. Disk is cheap, so we increased our oplog to 50 GB and restarted the sync; it worked the first time.
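
For reference, the oplog window can be inspected from the shell, and on MongoDB 3.6+ the oplog can even be resized in place; the 51200 MB below simply mirrors the 50 GB mentioned above, and older versions instead need a restart with a larger --oplogSize:

// On the primary: oplog size plus the time span between its first
// and last entries, i.e. the window the initial sync must fit into
db.printReplicationInfo()
// MongoDB 3.6+ only: resize the oplog without a restart (size in MB)
db.adminCommand({ replSetResizeOplog: 1, size: 51200 })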

mongo secondary has no queries after recovery

I have a test case: a sharded cluster with 1 shard.
The shard is a replica set with 1 primary and 2 secondaries.
My application uses the secondaryPreferred read preference; at first the queries were balanced over the two secondaries. Then I stopped one secondary, 10.160.243.22, to simulate a fault, and then rebooted it; the status is OK:
rs10032:PRIMARY> rs.status()
{
    "set" : "rs10032",
    "date" : ISODate("2014-12-05T09:21:07Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "10.160.243.22:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 2211,
            "optime" : Timestamp(1417771218, 3),
            "optimeDate" : ISODate("2014-12-05T09:20:18Z"),
            "lastHeartbeat" : ISODate("2014-12-05T09:21:05Z"),
            "lastHeartbeatRecv" : ISODate("2014-12-05T09:21:07Z"),
            "pingMs" : 0,
            "lastHeartbeatMessage" : "syncing to: 10.160.188.52:27017",
            "syncingTo" : "10.160.188.52:27017"
        },
        {
            "_id" : 1,
            "name" : "10.160.188.52:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 2211,
            "optime" : Timestamp(1417771218, 3),
            "optimeDate" : ISODate("2014-12-05T09:20:18Z"),
            "electionTime" : Timestamp(1417770837, 1),
            "electionDate" : ISODate("2014-12-05T09:13:57Z"),
            "self" : true
        },
        {
            "_id" : 2,
            "name" : "10.160.189.52:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 2209,
            "optime" : Timestamp(1417771218, 3),
            "optimeDate" : ISODate("2014-12-05T09:20:18Z"),
            "lastHeartbeat" : ISODate("2014-12-05T09:21:07Z"),
            "lastHeartbeatRecv" : ISODate("2014-12-05T09:21:06Z"),
            "pingMs" : 0,
            "syncingTo" : "10.160.188.52:27017"
        }
    ],
    "ok" : 1
}
but all queries go to the other secondary, 10.160.188.52, while 10.160.243.22 sits idle.
Why are the queries not balanced across the two secondaries after recovery, and how can I fix it?
Your application uses some kind of driver (I don't know the exact technology stack you are using) to connect to MongoDB. Your driver could remember (cache) the replica set status or its connections for some period of time, so there is no guarantee that a secondary node will receive traffic immediately after a recovery.
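
One way to confirm whether reads have started flowing to the recovered secondary again is to sample its query counter over an interval; a sketch, run from a mongo shell connected directly to that secondary:

// Sample the query opcounter twice, 10 seconds apart; if the delta
// stays at zero, no reads are being routed to this node.
var before = db.serverStatus().opcounters.query
sleep(10000)  // shell helper: pause for 10,000 ms
var after = db.serverStatus().opcounters.query
print("queries handled in the last 10s: " + (after - before))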

Recognising a stale member of a mongo cluster

If a mongo node is offline for too long and the oplog wraps before it comes back up then it can get stuck in a stale state and require manual intervention. How can I recognise that state from the replica set status document? Will it stick in state 3, which is also used by nodes in maintenance mode and presumably by nodes that can catch up? If so, how can I tell the difference?
From http://docs.mongodb.org/manual/reference/replica-status/:
Number  State
0       Starting up, phase 1 (parsing configuration)
1       Primary
2       Secondary
3       Recovering (initial syncing, post-rollback, stale members)
4       Fatal error
5       Starting up, phase 2 (forking threads)
6       Unknown state (the set has never connected to the member)
7       Arbiter
8       Down
9       Rollback
10      Removed
It will be in state 3, Recovering. To recognize the stale state specifically you need to look for the errmsg field. When stale, the secondary in question will have an errmsg like this:
"errmsg" : "error RS102 too stale to catch up"
In terms of a full output, it would look something like this:
rs.status()
{
    "set" : "testReplSet",
    "date" : ISODate("2013-01-29T01:39:38Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "hostname:31000",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 507,
            "optime" : Timestamp(1359423456000, 893),
            "optimeDate" : ISODate("2013-01-29T01:37:36Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "hostname:31001",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 483,
            "optime" : Timestamp(1359423456000, 893),
            "optimeDate" : ISODate("2013-01-29T01:37:36Z"),
            "lastHeartbeat" : ISODate("2013-01-29T01:39:37Z"),
            "pingMs" : 0
        },
        {
            "_id" : 2,
            "name" : "hostname:31002",
            "health" : 1,
            "state" : 3,
            "stateStr" : "RECOVERING",
            "uptime" : 4,
            "optime" : Timestamp(1359423087000, 1),
            "optimeDate" : ISODate("2013-01-29T01:31:27Z"),
            "lastHeartbeat" : ISODate("2013-01-29T01:39:38Z"),
            "pingMs" : 0,
            "errmsg" : "error RS102 too stale to catch up"
        }
    ],
    "ok" : 1
}
And finally, a code snippet to print out the error only, if it exists, from the shell:
rs.status().members.forEach(function printError(rsmember){if (rsmember.errmsg){print(rsmember.errmsg)}})

mongodb replicaset failed syncing

Ever since I added a new database to MongoDB, it has stopped syncing to the replicaSet secondary instances: the database name appears when running show dbs, yet it shows as (empty).
There is a repeating error in the log file on the secondary, which also appears in the status output as
"errmsg" : "syncTail: ...
Below is the output of rs.status() on the primary:
PRIMARY> rs.status()
{
    "set" : "contoso_db_set",
    "date" : ISODate("2012-11-01T13:05:22Z"),
    "myState" : 1,
    "syncingTo" : "dbuse1d.int.contoso.com:27017",
    "members" : [
        {
            "_id" : 0,
            "name" : "dbuse1a.int.contoso.com:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "optime" : {
                "t" : 1351775119000,
                "i" : 2
            },
            "optimeDate" : ISODate("2012-11-01T13:05:19Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "dbuse1d.int.contoso.com:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 4108139,
            "optime" : {
                "t" : 1351405977000,
                "i" : 12
            },
            "optimeDate" : ISODate("2012-10-28T06:32:57Z"),
            "lastHeartbeat" : ISODate("2012-11-01T13:05:21Z"),
            "pingMs" : 1,
            "errmsg" : "syncTail: 10068 invalid operator: $oid, syncing: { ts: Timestamp 1351576230000|1, h: -2878874165043062831, op: \"i\", ns: \"new_contoso_db.accounts\", o: { _id: { $oid: \"4f79a1d1d4941d3755000000\" }, delegation: [ \"nE/UhsnmZ1BCCB+tiiS8fjjNwkxbND5PwESsaXeuaJw=\""
        },
        {
            "_id" : 2,
            "name" : "dbuse1a.int.contoso.com:8083",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 10671267,
            "optime" : {
                "t" : 0,
                "i" : 0
            },
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2012-11-01T13:05:21Z"),
            "pingMs" : 0
        }
    ],
    "ok" : 1
}
PRIMARY>
The solution I found was to delete the entire database from the secondary:
# rm -rf /data/db
# mkdir -p /data/db
and then restarting mongod and setting up the replicaSet again.
See more in MongoDB's docs:
What to do on a RS102 sync error
If one of your members has been offline and is now too far behind to catch up, you will need to resync. There are a number of ways to do this.

Perform a full resync. If you stop the failed mongod, delete all data in the dbpath (including subdirectories), and restart it, it will automatically resynchronize itself. Obviously it would be better/safer to back up the data first. If disk space is adequate, simply move it to a backup location on the machine if appropriate. Resyncing may take a long time if the database is huge or the network slow – even idealized, one terabyte of data would require three hours to transmit over gigabit ethernet.