How to measure time of replicating mongo database? - mongodb

I perform a write operation with a large amount of data on the primary server.
How can I measure the time from when the data is available on the primary server until it is available on a secondary?

From https://docs.mongodb.com/manual/tutorial/troubleshoot-replica-sets/#check-the-replication-lag:
To check the current length of replication lag:
In a mongo shell connected to the primary, call the rs.printSlaveReplicationInfo() method.
Returns the syncedTo value for each member, which shows the time when the last oplog entry was written to the secondary, as shown in the following example:
source: m1.example.net:27017
syncedTo: Thu Apr 10 2014 10:27:47 GMT-0400 (EDT)
0 secs (0 hrs) behind the primary
source: m2.example.net:27017
syncedTo: Thu Apr 10 2014 10:27:47 GMT-0400 (EDT)
0 secs (0 hrs) behind the primary
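If you need the lag as a number rather than a printed report, one option (a rough sketch, using the optimeDate field that rs.status() exposes for each member) is to compare every secondary's last applied optime with the primary's:
// Sketch: approximate replication lag per secondary, in seconds.
var status = rs.status();
var primary = status.members.filter(function (m) { return m.stateStr === "PRIMARY"; })[0];
status.members.forEach(function (m) {
    if (m.stateStr === "SECONDARY") {
        // optimeDate is the wall-clock time of the last oplog entry applied on that member
        var lagSecs = (primary.optimeDate - m.optimeDate) / 1000;
        print(m.name + " is ~" + lagSecs + " seconds behind the primary");
    }
});
Note that this measures oplog lag, not the end-to-end time for one specific write; for that you would have to timestamp the write yourself and poll the secondary until the document appears.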

Related

Can't view databases on secondary node on mongodb

I have just deployed MongoDB and inserted some data into a new database. I can see the database when connecting to the primary with MongoDB Compass, but I can't view it on the secondary node.
There seems to be no replication lag on the secondary nodes.
rs.printSecondaryReplicationInfo()
source: node2:27017
{ syncedTo: 'Sun Oct 23 2022 12:20:07 GMT-0400 (Eastern Daylight Time)',
replLag: '0 secs (0 hrs) behind the primary ' }
source: node3:27017
{ syncedTo: 'Sun Oct 23 2022 12:20:07 GMT-0400 (Eastern Daylight Time)',
replLag: '0 secs (0 hrs) behind the primary ' }
(Screenshots of the Compass database list on the primary node and on the secondary node were attached here.)
Edited:
I just spotted the problem. This only happens when readPreference is set to primary; it doesn't happen with primaryPreferred. Do you know what could cause that?
mongodb://user:password@localhost:27017/?authSource=admin&readPreference=primaryPreferred&directConnection=true&ssl=false
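For what it's worth, the same behaviour can be reproduced from the shell. A hypothetical check (the collection name test is just an example), connected directly to the secondary:
// With the default read preference (primary), a direct connection to a secondary rejects reads,
// typically with a "not primary and secondaryOk=false" / NotPrimaryNoSecondaryOk error:
db.getMongo().setReadPref("primary")
db.test.find().limit(1)
// Any non-primary read preference (e.g. primaryPreferred) allows secondary reads, so the data shows up:
db.getMongo().setReadPref("primaryPreferred")
db.test.find().limit(1)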

MongoDB data corruption on a replica set

I am working with a MongoDB database running in a replica set.
Unfortunately, I noticed that the data appears to be corrupted.
There should be over 10,000 documents in the database. However, there are several thousand records that are not being returned in queries.
The total count DOES show the correct total.
db.records.find().count()
10793
And some records are returned when querying by RecordID (a custom sequence integer).
db.records.find({"RecordID": 10049})
{ "_id" : ObjectId("5dfbdb35c1c2a400104edece")
However, when querying for records that I know for a fact should exist, nothing is returned.
db.records.find({"RecordID": 10048})
db.records.find({"RecordID": 10047})
db.records.find({"RecordID": 10046})
The issue appears to be very sporadic, and in some cases entire ranges of records are missing. The entire range from RecordIDs 1500 to 8000 is missing.
Questions: What could be the cause of the issue? What can I do to troubleshoot this issue further and recover the corrupted data? I looked into running repairDatabase but that is for standalone instances only.
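To see exactly which RecordIDs are missing (a quick sketch using the collection and field names from the question), you can pull the IDs that are present and diff them against the expected sequence:
// Report every RecordID between 1 and the current maximum that has no matching document.
var present = {};
db.records.find({}, { RecordID: 1, _id: 0 }).forEach(function (d) { present[d.RecordID] = true; });
var max = db.records.find().sort({ RecordID: -1 }).limit(1).next().RecordID;
var missing = [];
for (var i = 1; i <= max; i++) {
    if (!present[i]) missing.push(i);
}
print("missing " + missing.length + " RecordIDs, e.g. " + missing.slice(0, 20));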
UPDATE:
More info on replication:
rs.printReplicationInfo()
configured oplog size: 5100.880859375MB
log length start to end: 14641107secs (4066.97hrs)
oplog first event time: Wed Mar 03 2021 05:21:25 GMT-0500 (EST)
oplog last event time: Thu Aug 19 2021 17:19:52 GMT-0400 (EDT)
now: Thu Aug 19 2021 17:20:01 GMT-0400 (EDT)
rs.printSecondaryReplicationInfo()
source: node2-examplehost.com:27017
syncedTo: Thu Aug 19 2021 17:16:42 GMT-0400 (EDT)
0 secs (0 hrs) behind the primary
source: node3-examplehost.com:27017
syncedTo: Thu Aug 19 2021 17:16:42 GMT-0400 (EDT)
0 secs (0 hrs) behind the primary
UPDATE 2:
We did a restore from a backup and somehow it looks like it fixed the issue.

Ensure data is not lost during MongoDB replication

Currently I have a replica set for the production data.
I am adding new members to the replica set. The state of the new members becomes SECONDARY (after STARTUP, STARTUP2, etc.).
Does that guarantee that all data in primary member has been replicated to the new members?
Is there any way to make sure that no data is lost after replication?
(Is there anything specified in the official docs of MongoDB - any guarantee for data being not lost or something. I am using MongoDB 3.2)
When the initial sync is completed (it clones data from the source and then applies oplog entries to keep up with changes to the data set), you can call rs.printSlaveReplicationInfo() from the primary's mongo shell.
rs.printSlaveReplicationInfo()
This will return the last oplog entry applied on the secondaries, which is copied from the primary's oplog.rs collection.
The response is returned as:
source: m1.example.net:27017
syncedTo: Thu Apr 10 2014 10:27:47 GMT-0400 (EDT)
0 secs (0 hrs) behind the primary
source: m2.example.net:27017
syncedTo: Thu Apr 10 2014 10:27:47 GMT-0400 (EDT)
0 secs (0 hrs) behind the primary
Notice that both secondary members are 0 seconds behind the primary, which indicates no replication lag.
That is essentially the difference between the last operation recorded on the primary and the time it was applied on the secondary.
As an additional precaution, you can note db.stats() on the primary right before starting the sync and collect the same stats from the newly synced secondaries.
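A minimal sketch of that comparison (the database name mydb is a placeholder; run it on the primary, then on the new secondary after allowing secondary reads, and compare the output):
// On a 3.2-era secondary, allow reads first:
rs.slaveOk();
var stats = db.getSiblingDB("mydb").stats();
print("objects: " + stats.objects + ", dataSize: " + stats.dataSize);
// Per-collection document counts give a finer-grained comparison:
db.getSiblingDB("mydb").getCollectionNames().forEach(function (c) {
    print(c + ": " + db.getSiblingDB("mydb").getCollection(c).count());
});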
Read more about initial sync in the MongoDB documentation.

MongoDB primary stepDown does not succeed

Setup: replica set with 5 nodes, version 3.4.5.
Trying to switch PRIMARY with rs.stepDown(60, 30) but consistently getting the error:
rs0:PRIMARY> rs.stepDown(60, 30)
{
"ok" : 0,
"errmsg" : "No electable secondaries caught up as of 2017-07-11T00:21:11.205+0000. Please use {force: true} to force node to step down.",
"code" : 50,
"codeName" : "ExceededTimeLimit"
}
However, rs.printSlaveReplicationInfo() running in a parallel terminal confirms that all replicas are fully caught up:
rs0:PRIMARY> rs.printSlaveReplicationInfo()
source: X.X.X.X:27017
syncedTo: Tue Jul 11 2017 00:21:11 GMT+0000 (UTC)
0 secs (0 hrs) behind the primary
source: X.X.X.X:27017
syncedTo: Tue Jul 11 2017 00:21:11 GMT+0000 (UTC)
0 secs (0 hrs) behind the primary
source: X.X.X.X:27017
syncedTo: Tue Jul 11 2017 00:21:11 GMT+0000 (UTC)
0 secs (0 hrs) behind the primary
source: X.X.X.X:27017
syncedTo: Tue Jul 11 2017 00:21:11 GMT+0000 (UTC)
0 secs (0 hrs) behind the primary
Am I doing something wrong?
UPD: I've checked long running operations before and during rs.stepDown as was suggested below and it looks like this:
# Before rs.stepDown
$ watch "mongo --quiet --eval 'JSON.stringify(db.currentOp())' | jq -r '.inprog[] | \"\(.secs_running) \(.desc) \(.op)\"' | sort -rnk1"
984287 rsSync none
984287 ReplBatcher none
67 WT RecordStoreThread: local.oplog.rs none
null SyncSourceFeedback none
null NoopWriter none
0 conn615153 command
0 conn614948 update
0 conn614748 getmore
...
# During rs.stepDown
984329 rsSync none
984329 ReplBatcher none
108 WT RecordStoreThread: local.oplog.rs none
16 conn615138 command
16 conn615136 command
16 conn615085 update
16 conn615079 insert
...
Basically, long running user operations seem to happen as a result of rs.stepDown() as secs_running becomes nonzero once PRIMARY attempts to switch over and keeps growing all the way up until stepDown fails. Then everything gets back to normal.
Any ideas on why this happens and whether that's normal at all?
I have used the command below to step down the primary:
db.adminCommand( { replSetStepDown: 120, secondaryCatchUpPeriodSecs: 15, force: true } )
You can find this in the official MongoDB documentation:
https://docs.mongodb.com/manual/reference/command/replSetStepDown/
To close the loop on this question, it was determined that the failed stepdown was due to time going backward on the host.
MongoDB 3.4.6 is more resilient to time issues on the host, and upgrading the deployment fixes the stalling issues.
Before stepping down, rs.stepDown() will attempt to terminate long running user operations that would block the primary from stepping down, such as an index build, a write operation or a map-reduce job.
Do you have some long-running jobs ongoing? Check the result of db.currentOp().
You can also try giving the secondaries more time to catch up, e.g. rs.stepDown(60, 360).
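For example, something along these lines (a sketch; adjust the threshold to taste) lists the user operations that have been running long enough to block the step-down:
// Print non-idle operations that have been running for more than 5 seconds.
db.currentOp().inprog.forEach(function (op) {
    if (op.op !== "none" && op.secs_running > 5) {
        print(op.opid + "  " + op.op + "  " + op.secs_running + "s  " + (op.ns || ""));
    }
});
// A specific operation can then be terminated with db.killOp(<opid>) if it is safe to do so.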
Quoting an answer from https://jira.mongodb.org/browse/SERVER-27015:
This is most likely due to the fact that by default the shutdown command will only succeed on a primary if the secondaries are fully caught up at the exact moment that the shutdown command is executed.
I faced a similar issue and tried the db.shutdownServer() command several times, however it worked exactly when the secondary was 0 seconds behind the primary.

Couchbase: 20k items stuck in Tap Queue

We are currently evaluating Couchbase, primarily as a memcached replacement. Our setup looks like this:
php -> localhost moxi -> couchbase bucket (Total bucket size = 10240 MB (2048 MB x 5 nodes with replica count 1))
The Servers have 16GB RAM and are SSD backed.
We were inserting at about 400 ops/s and had no problems for a few days, until we reached about 13 million items. We then found out that we had forgotten to implement the delete function in our test setup, and a lot of keys had no expiration set.
To start over again we flushed the bucket through the web interface. This is where our problems began.
We started to see temp OOMs and back-offs, and the TAP queue was filled with 20k items; the drain and fill rates were nearly the same. See the attached screenshot.
What also caught our eye was that node 4 had only 220k items, while the others had around 1.39M.
Somehow it looks like the replication messed something up, but I'm relatively new to Couchbase. Any hints or suggestions?
The problem was solved for a short time after removing the failing node from the cluster.
Now, with the four nodes left in the cluster, after some hours the same thing happened again with another node. We tried setting the now-failing node into FailOver state. That fixed the problem again, but after re-adding the node, the same phenomenon happened again on that node.
Other things we realized are:
* Three out of four nodes have thousands of items in their TAP replication queue, but one ("the failing one") has 0.
* Also three out of four nodes have a back-off rate of around 400, but one ("the failing one") has 0.
* Only the failing one has a massive amount of "Temp OOMs per second", but the other three have 0.
The phenomenon seems to disappear if we lower the load on the servers by disabling the Couchbase writes for one of the two software projects writing to Couchbase.
But if we enable the writes again, after around 10 minutes we can see this in the memcached.log on the failing node:
Tue Dec 17 12:29:05.010547 CET 3: (CENSORED) Received error[86] from mccouch for unknown
Tue Dec 17 12:29:05.010576 CET 3: (CENSORED) Retry notify CouchDB of update, vbucket=277 rev=522
Tue Dec 17 12:29:08.748103 CET 3: (CENSORED) Received error[86] from mccouch for unknown
Tue Dec 17 12:29:08.748257 CET 3: (CENSORED) Retry notify CouchDB of update, vbucket=321 rev=948
Tue Dec 17 12:40:17.354448 CET 3: (CENSORED) Received error[86] from mccouch for unknown
Tue Dec 17 12:40:17.354476 CET 3: (CENSORED) Retry notify CouchDB of update, vbucket=303 rev=491
This error then happens around 5 times within four hours:
Tue Dec 17 14:19:32.145071 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
And after these four hours it starts spamming this constantly (maybe because the load increased heavily; in the evening our page generates much more load than in the morning or at noon), together with the "error from mccouch":
Tue Dec 17 16:42:30.875343 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
Tue Dec 17 16:42:36.493317 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
Tue Dec 17 16:43:25.239876 CET 3: (CENSORED) Received error[86] from mccouch for unknown
Tue Dec 17 16:43:25.240052 CET 3: (CENSORED) Retry notify CouchDB of update, vbucket=296 rev=483
Tue Dec 17 16:43:25.903997 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
Tue Dec 17 16:43:31.906178 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
Tue Dec 17 16:43:36.913045 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
Tue Dec 17 16:43:42.919114 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
Tue Dec 17 16:43:48.920354 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
Tue Dec 17 16:43:54.924017 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
Tue Dec 17 16:44:00.928572 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
We have no clue what is happening here, or why this failing node seems to reject every replication and throws this error.
Do you have any idea?
Thanks for all your help and greetings from Cologne,
Andy!
Seeing as you just want to delete all items in the bucket, have you tried just deleting and re-creating the bucket?
This will be much faster than flush, as flush actually needs to send a delete request for every document in the bucket.
I can't find it in the docs at the moment, but I think Flush is not really recommended with the latest versions.
You don't mention which operating system you are running. If it's Linux, try checking the maximum number of open sockets (file descriptors) allowed for the user running Couchbase. Check the file /etc/security/limits.conf.
The command to check this on Linux is: ulimit -Hn.
Hope that helps.
Daniel
I think you should try these settings:
http://docs.couchbase.com/couchbase-manual-2.1/#specifying-backoff-for-replication