Mongo sync: slow down or stop sync?

All,
I have a replica set setup where I run two mongo processes, M_pri on port 28001 and M_sec on port 28002, on the same machine with the following config:
"_id" : "myReplSet",
"version" : 1,
"protocolVersion" : NumberLong(1),
"members" : [
{
"_id" : 0,
"host" : "localhost:28001",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 1,
"host" : "localhost:28002",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : true,
"priority" : 0,
"slaveDelay" : NumberLong(0),
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatIntervalMillis" : 2000,
"heartbeatTimeoutSecs" : 10,
"electionTimeoutMillis" : 10000,
"catchUpTimeoutMillis" : 2000,
"getLastErrorModes" : {
},
"replicaSetId" : ObjectId("593565b0ebd8ca36a07c6576")
}
The intention for this setup is to take a daily (gzipped) mongodump off M_sec. During the mongodump the whole of my system slows down, as I have processes that make writes and reads against M_pri.
Is there a way to stop the sync from the primary to the secondary while I am doing a mongodump off M_sec?
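For reference, a daily gzipped dump taken directly off M_sec might look like the following (a sketch; the archive path is a placeholder):
mongodump --host localhost:28002 --gzip --archive=/backups/daily.gz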
Thanks

You can set the M_sec to be a delayed member (https://docs.mongodb.com/manual/core/replica-set-delayed-member/#replica-set-delayed-members), so it will replicate from M_pri with a delay.
Things to keep in mind:
Requirements
Delayed members:
Must be priority 0 members. Set the priority to 0 to prevent a delayed member from becoming primary.
Should be hidden members. Always prevent applications from seeing and querying delayed members.
Do vote in elections for primary, if members[n].votes is set to 1.
Behavior
Delayed members copy and apply operations from the source oplog on a delay. When choosing the amount of delay, consider that the amount of delay:
must be equal to or greater than your expected maintenance window durations.
must be smaller than the capacity of the oplog. For more information on oplog size, see Oplog Size.
Configuration example:
{
"_id" : <num>,
"host" : <hostname:port>,
"priority" : 0,
"slaveDelay" : <seconds>,
"hidden" : true
}
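If you go this route, one way to apply the delay to the existing hidden secondary is rs.reconfig() from a shell connected to M_pri (a minimal sketch; the member index 1 and the one-hour delay are assumptions based on the config in the question):
cfg = rs.conf()
cfg.members[1].priority = 0      // already 0 in the question's config
cfg.members[1].hidden = true     // already hidden
cfg.members[1].slaveDelay = 3600 // apply oplog entries one hour behind the primary
rs.reconfig(cfg)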

Related

Mongoose is querying secondary instead of primary server

For some unknown reason, mongoose is querying my secondary MongoDB server, and I can't figure out how to change that.
I've set db.setProfilingLevel(2) on my secondary server, and I see a lot of queries there for no reason.
When I view the records, I see:
"command" : {
"$readPreference" : {
"mode" : "secondaryPreferred"
}
}
This is odd because, according to the documentation, the default read preference should be primary.
When I run db.getMongo().getReadPref() I see that indeed that's the case:
ReadPreference {
mode: 'primary',
tags: undefined,
hedge: undefined,
maxStalenessSeconds: undefined,
minWireVersion: undefined
}
I also tried adding {readPreference: 'primary'} to my mongoose connection, but the issue remains the same.
Any suggestions where the secondaryPreferred setting might be coming from?
(I am not sure if my issue is with mongoose or MongoDB, so I've tagged them both)
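For reference, forcing the preference at the connection level in Mongoose looks roughly like the following (a sketch; the URI is a placeholder, and the readPreference option is passed through to the underlying MongoDB driver):
const mongoose = require('mongoose');
mongoose.connect('mongodb://db-host:27017/mydb', {
  readPreference: 'primary' // driver-level default for this connection
});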
Update
A full entry from the profiler on the SECONDARY server:
{
"op" : "query",
"ns" : "***",
"command" : {
"find" : "***",
"batchSize" : 1,
"singleBatch" : true,
"maxTimeMS" : 1000,
"$readPreference" : {
"mode" : "secondaryPreferred"
},
"readConcern" : {
"level" : "local"
},
"$db" : "***"
},
"keysExamined" : 0,
"docsExamined" : 1,
"cursorExhausted" : true,
"numYield" : 0,
"nreturned" : 1,
"queryHash" : "17830885",
"queryExecutionEngine" : "classic",
"locks" : {
"FeatureCompatibilityVersion" : {
"acquireCount" : {
"r" : NumberLong(1)
}
},
"Global" : {
"acquireCount" : {
"r" : NumberLong(1)
}
},
"Mutex" : {
"acquireCount" : {
"r" : NumberLong(1)
}
}
},
"flowControl" : {},
"readConcern" : {
"level" : "local",
"provenance" : "clientSupplied"
},
"responseLength" : 0,
"protocol" : "op_msg",
"millis" : 0,
"planSummary" : "COLLSCAN",
"execStats" : {
"stage" : "COLLSCAN",
"nReturned" : 1,
"executionTimeMillisEstimate" : 0,
"works" : 2,
"advanced" : 1,
"needTime" : 1,
"needYield" : 0,
"saveState" : 0,
"restoreState" : 0,
"isEOF" : 0,
"direction" : "forward",
"docsExamined" : 1
},
"ts" : ISODate("2022-10-01T19:00:03.842+07:00"),
"client" : "***", //IP address of the PRIMARY server
"allUsers" : [],
"user" : ""
}
Update 2
I can't replicate the issue on dev environment, so I'm guessing it's not a mongoose issue but something related to the servers setup.
Update 3
When looking at the profiler log again, I noticed that the client is the PRIMARY server IP, and not the app server.
This is super helpful information and what I was attempting to ask about in my comment.
Based on this, I suspect what is happening here is that this profiler entry is associated with a Mirrored Read. Borrowing some from the documentation:
Mirrored reads reduce the impact of primary elections following an outage or planned maintenance. After a failover in a replica set, the secondary that takes over as the new primary updates its cache as new queries come in. While the cache is warming up performance can be impacted.
Starting in version 4.4, mirrored reads pre-warm the caches of electable secondary replica set members. To pre-warm the caches of electable secondaries, the primary mirrors a sample of the supported operations it receives to electable secondaries.
One way to quickly prove or disprove this hypothesis would be to disable mirrored reads in the production environment. Instructions can be found in the MongoDB documentation on mirrored reads; disabling them involves setting the samplingRate to 0.0.
Overall, what you are observing is probably expected behavior. It has only become visible because you are inspecting the profiler, which includes all operations, and it is therefore not something to be concerned about. It sounds like the application itself is configured appropriately and using the primary read preference as designed.
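To test that hypothesis, the sampling rate can be changed at runtime from a mongo shell connected to the primary (setting the rate to 0.0 disables mirrored reads):
db.adminCommand({ setParameter: 1, mirrorReads: { samplingRate: 0.0 } })
db.adminCommand({ getParameter: 1, mirrorReads: 1 }) // confirm the new value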

In mongodb 3.0 replication, how elections happen when a secondary goes down

Situation: I have a MongoDB replication set over two computers.
One computer is a server that holds the primary node and the arbiter. This server is a live server and is always on. Its local IP used in replication is 192.168.0.4.
The second is a PC that the secondary node resides on and is on for a few hours a day. Its local IP used in replication is 192.168.0.5.
My expectation: I wanted the live server to be the main point of data interaction for my application, regardless of the state of the PC (whether it is reachable or not, since the PC is secondary), so I wanted to make sure the server's node is always primary.
The following is the result of rs.config():
liveSet:PRIMARY> rs.config()
{
"_id" : "liveSet",
"version" : 2,
"members" : [
{
"_id" : 0,
"host" : "192.168.0.4:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 10,
"tags" : {
},
"slaveDelay" : 0,
"votes" : 1
},
{
"_id" : 1,
"host" : "192.168.0.5:5051",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : 0,
"votes" : 1
},
{
"_id" : 2,
"host" : "192.168.0.4:5052",
"arbiterOnly" : true,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : 0,
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatTimeoutSecs" : 10,
"getLastErrorModes" : {
},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
}
}
}
Also I have set the storage engine to be WiredTiger, if that matters.
What I actually get, and the problem: When I turn off the PC, or kill its mongod process, then the node on the server becomes secondary.
The following is the output of the server when I killed PC's mongod process, while connected to primary node's shell:
liveSet:PRIMARY>
2015-11-29T10:46:29.471+0430 I NETWORK Socket recv() errno:10053 An established connection was aborted by the software in your host machine. 127.0.0.1:27017
2015-11-29T10:46:29.473+0430 I NETWORK SocketException: remote: 127.0.0.1:27017 error: 9001 socket exception [RECV_ERROR] server [127.0.0.1:27017]
2015-11-29T10:46:29.475+0430 I NETWORK DBClientCursor::init call() failed
2015-11-29T10:46:29.479+0430 I NETWORK trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2015-11-29T10:46:29.481+0430 I NETWORK reconnect 127.0.0.1:27017 (127.0.0.1) ok
liveSet:SECONDARY>
I have two doubts:
Considering this part of the MongoDB documentation:
Replica sets use elections to determine which set member will become primary. Elections occur after initiating a replica set, and also any time the primary becomes unavailable.
The election occurs when the primary is not available (or at the time of initiating, though that part does not concern our case), but the primary was always available, so why does an election happen?
Considering this part of the same documentation:
If a majority of the replica set is inaccessible or unavailable, the replica set cannot accept writes and all remaining members become read-only.
Considering the part 'members become read-only': I have two nodes up vs one down, so this should not affect our replication either.
Now my question: How to keep the node on the server as primary, when the node on PC is not reachable?
Update:
This is the output of rs.status().
Thanks to Wan Bachtiar, this now makes the behavior obvious, since the arbiter was not reachable.
liveSet:PRIMARY> rs.status()
{
"set" : "liveSet",
"date" : ISODate("2015-11-30T04:33:03.864Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "192.168.0.4:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 1807553,
"optime" : Timestamp(1448796026, 1),
"optimeDate" : ISODate("2015-11-29T11:20:26Z"),
"electionTime" : Timestamp(1448857488, 1),
"electionDate" : ISODate("2015-11-30T04:24:48Z"),
"configVersion" : 2,
"self" : true
},
{
"_id" : 1,
"name" : "192.168.0.5:5051",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 496,
"optime" : Timestamp(1448796026, 1),
"optimeDate" : ISODate("2015-11-29T11:20:26Z"),
"lastHeartbeat" : ISODate("2015-11-30T04:33:03.708Z"),
"lastHeartbeatRecv" : ISODate("2015-11-30T04:33:02.451Z"),
"pingMs" : 1,
"configVersion" : 2
},
{
"_id" : 2,
"name" : "192.168.0.4:5052",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"lastHeartbeat" : ISODate("2015-11-30T04:33:00.008Z"),
"lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
"configVersion" : -1
}
],
"ok" : 1
}
liveSet:PRIMARY>
As stated in the documentation, if a majority of the replica set is inaccessible or unavailable, the replica set cannot accept writes and all remaining members become read-only.
In this case the primary has to step down if the arbiter and the secondary are not reachable: with three voting members, a majority requires two, so once the PC's secondary is killed and the arbiter is unreachable the primary can only see itself. rs.status() should be able to determine the health of the replica members.
One thing you should also watch for is the primary's oplog size. The size of the oplog determines how long a replica set member can be down and still be able to catch up when it comes back online. The bigger the oplog, the longer you can tolerate a member being down, as the oplog can hold more operations. If a member does fall too far behind, you must resynchronise it by removing its data files and performing an initial sync.
See Check the size of the Oplog for more info.
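For example, a quick way to see the current oplog size and the time window it covers from the mongo shell:
rs.printReplicationInfo()
// prints the configured oplog size, the amount used, and the time
// range between the first and last oplog entries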
Regards,
Wan.

Mongos routing with ReadPreference=NEAREST

I'm having trouble diagnosing an issue where my Java application's requests to MongoDB are not getting routed to the Nearest replica, and I hope someone can help. Let me start by explaining my configuration.
The Configuration:
I am running a MongoDB instance in production that is a Sharded ReplicaSet. It is currently only a single shard (it hasn't gotten big enough yet to require a split). This single shard is backed by a 3-node replica set. 2 nodes of the replica set live in our primary data center. The 3rd node lives in our secondary datacenter, and is prohibited from becoming the Master node.
We run our production application simultaneously in both data centers, however the instance in our secondary data center operates in "read-only" mode and never writes data into MongoDB. It only serves client requests for reads of existing data. The objective of this configuration is to ensure that if our primary datacenter goes down, we can still serve client read traffic.
We don't want to waste all of this hardware in our secondary datacenter, so even in happy times we actively load balance a portion of our read-only traffic to the instance of our application running in the secondary datacenter. This application instance is configured with readPreference=NEAREST and is pointed at a mongos instance running on localhost (version 2.6.7). The mongos instance is obviously configured to point at our 3-node replica set.
From a mongos:
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"version" : 4,
"minCompatibleVersion" : 4,
"currentVersion" : 5,
"clusterId" : ObjectId("52a8932af72e9bf3caad17b5")
}
shards:
{ "_id" : "shard1", "host" : "shard1/failover1.com:27028,primary1.com:27028,primary2.com:27028" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "test", "partitioned" : false, "primary" : "shard1" }
{ "_id" : "MyApplicationData", "partitioned" : false, "primary" : "shard1" }
From the failover node of the replicaset:
shard1:SECONDARY> rs.status()
{
"set" : "shard1",
"date" : ISODate("2015-09-03T13:26:18Z"),
"myState" : 2,
"syncingTo" : "primary1.com:27028",
"members" : [
{
"_id" : 3,
"name" : "primary1.com:27028",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 674841,
"optime" : Timestamp(1441286776, 2),
"optimeDate" : ISODate("2015-09-03T13:26:16Z"),
"lastHeartbeat" : ISODate("2015-09-03T13:26:16Z"),
"lastHeartbeatRecv" : ISODate("2015-09-03T13:26:18Z"),
"pingMs" : 49,
"electionTime" : Timestamp(1433952764, 1),
"electionDate" : ISODate("2015-06-10T16:12:44Z")
},
{
"_id" : 4,
"name" : "primary2.com:27028",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 674846,
"optime" : Timestamp(1441286777, 4),
"optimeDate" : ISODate("2015-09-03T13:26:17Z"),
"lastHeartbeat" : ISODate("2015-09-03T13:26:18Z"),
"lastHeartbeatRecv" : ISODate("2015-09-03T13:26:18Z"),
"pingMs" : 53,
"syncingTo" : "primary1.com:27028"
},
{
"_id" : 5,
"name" : "failover1.com:27028",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 8629159,
"optime" : Timestamp(1441286778, 1),
"optimeDate" : ISODate("2015-09-03T13:26:18Z"),
"self" : true
}
],
"ok" : 1
}
shard1:SECONDARY> rs.conf()
{
"_id" : "shard1",
"version" : 15,
"members" : [
{
"_id" : 3,
"host" : "primary1.com:27028",
"tags" : {
"dc" : "primary"
}
},
{
"_id" : 4,
"host" : "primary2.com:27028",
"tags" : {
"dc" : "primary"
}
},
{
"_id" : 5,
"host" : "failover1.com:27028",
"priority" : 0,
"tags" : {
"dc" : "failover"
}
}
],
"settings" : {
"getLastErrorModes" : {"ACKNOWLEDGED" : {}}
}
}
The Problem:
The problem is that requests which hit this mongos in our secondary datacenter seem to be getting routed to a replica running in our primary datacenter, not the nearest node, which is running in the secondary datacenter. This incurs a significant amount of network latency and results in bad read performance.
My understanding is that the mongos is deciding which node in the replica set to route the request to, and it's supposed to honor the ReadPreference from my java driver's request. Is there a command I can run in the mongos shell to see the status of the replica set, including ping times to nodes? Or some way to see logging of incoming requests which indicates the node in the replicaSet that was chosen and why? Any advice at all on how to diagnose the root cause of my issue?
When the read preference is NEAREST, the system does not simply pick the member with the lowest network latency; it may select the primary as the nearest member if that connection is healthy. However, the nearest read mode, when combined with a tag set, selects the matching member with the lowest network latency. With nearest, the selected member may be either the primary or a secondary. How mongos behaves when read preferences are configured, in terms of network latency, is not explained very clearly in the official docs.
http://docs.mongodb.org/manual/core/read-preference/#replica-set-read-preference-tag-sets
hope this helps
If I start mongos with the flag -vvvv (4x verbose) then I am presented with request routing information in the log files, including information about the read preference used and the host to which requests were routed. For example:
2015-09-10T17:17:28.020+0000 [conn3] dbclient_rs say
using secondary or tagged node selection in shard1,
read pref is { pref: "nearest", tags: [ {} ] }
(primary : primary1.com:27028,
lastTagged : failover1.com:27028)
Despite the wording, when using nearest the absolute fastest member isn't necessarily the one chosen. Instead, a random member is chosen from the pool of members whose latency falls within the calculated latency window.
The latency window is calculated by taking the fastest member's ping and adding replication.localPingThresholdMs, whose default is 15ms: for example, if the fastest member responds in 5ms, any member with a ping up to 20ms is eligible. You can read more about the algorithm in the read preference documentation.
So what I do is combine nearest with tags so that I can manually target the member I know is geographically closest.
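As an illustration of that approach, using the dc tags from the rs.conf() above, a shell connected through mongos could pin reads like this (a sketch; the trailing empty document is a fallback that matches any member, and "mycollection" is a placeholder):
db.getMongo().setReadPref("nearest", [ { "dc" : "failover" }, { } ])
db.mycollection.find().readPref("nearest", [ { "dc" : "failover" }, { } ])
// the same tag set can also be supplied in a driver connection string via
// readPreference=nearest&readPreferenceTags=dc:failover&readPreferenceTags=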

MongoDB secondary replica does not have collections as in primary

I have set up a MongoDB replica set on my local machine and created a couple of collections in a database named "adaptive-db". From the mongo shell, when I connect to the primary and run show dbs, I can see and query my database "adaptive-db".
I then switched to the secondary and ran rs.slaveOk(), expecting the "adaptive-db" created on the primary to be present on the secondary as well, but I don't see it.
Here are the commands I ran:
shell:bin user1$ ./mongo localhost:27017
MongoDB shell version: 3.0.2
connecting to: localhost:27017/test
rs0:PRIMARY> show dbs
adaptive-db 0.125GB
local 0.281GB
rs0:PRIMARY> use adaptive-db
switched to db adaptive-db
rs0:PRIMARY> show collections
people
student
system.indexes
rs0:PRIMARY> db.people.find().count()
6003
rs0:PRIMARY> exit
bye
shell:bin user1$ ./mongo localhost:27018
MongoDB shell version: 3.0.2
connecting to: localhost:27018/test
rs0:SECONDARY> show dbs
2015-06-25T11:16:40.751-0400 E QUERY Error: listDatabases failed:{ "note" : "from execCommand", "ok" : 0, "errmsg" : "not master" }
at Error (<anonymous>)
at Mongo.getDBs (src/mongo/shell/mongo.js:47:15)
at shellHelper.show (src/mongo/shell/utils.js:630:33)
at shellHelper (src/mongo/shell/utils.js:524:36)
at (shellhelp2):1:1 at src/mongo/shell/mongo.js:47
rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> show dbs
admin 0.031GB
local 0.281GB
rs0:SECONDARY>
Can someone please explain why? Here is my rs.conf():
rs0:SECONDARY> rs.conf()
{
"_id" : "rs0",
"version" : 7,
"members" : [
{
"_id" : 0,
"host" : "localhost:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : 0,
"votes" : 1
},
{
"_id" : 1,
"host" : "localhost:27018",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : 0,
"votes" : 1
},
{
"_id" : 2,
"host" : "localhost:27019",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : 0,
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatTimeoutSecs" : 10,
"getLastErrorModes" : {
},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
}
}
}
Thanks
Run rs.slaveOk() on the secondary replica member; this allows read operations to run on the secondary for the current connection.
For reference:
http://docs.mongodb.org/manual/reference/method/rs.slaveOk/
I'm guessing a little, but judging by the other collections you see, you are in the local database, which, as the name suggests, is not replicated (it contains the oplog, startup_log and other things like that, which are specific to the instance and would get messed up if replicated).
Use a different database. Either connect to 127.0.0.1/somedb (use the appropriate IP/hostname), or run use somedb to switch databases while in the console. Then create collections and those should get replicated (to the database of the same name, of course - somedb in my example).
For newer versions of MongoDB, run rs.secondaryOk() on the secondary member instead.
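Putting that together, a minimal check against the setup in the question might look like this (ports taken from the question; use rs.slaveOk() on 3.0-era shells and rs.secondaryOk() on newer ones):
./mongo localhost:27018
rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> use adaptive-db
rs0:SECONDARY> show collections
rs0:SECONDARY> db.people.find().count()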

why is my mongo local db oplog gigantic

I'd appreciate any insights on this; I have two questions:
1) Why is my local db oplog massive and growing?
2) How can I safely delete (or reset) my local.oplog to free up the 18 GB of wasted space?
The scenario: I have been running mongod locally on a snapshot of production data like this:
mongod --dbpath /temp/MongoDumps/mongodata-2013-06-05_1205-snap/data
The weird thing I noticed is that my local db is huge:
> show dbs
local 18.0693359375GB
prod-snapshot 7.9501953125GB
This appears to be due to the gigantic local db oplog (even though it's a capped collection):
db.oplog.rs.stats()
{
"ns" : "local.oplog.rs",
"count" : 25319382,
"size" : 10440151664,
"avgObjSize" : 412.33832895289464,
"storageSize" : 18634489728,
"numExtents" : 9,
"nindexes" : 0,
"lastExtentSize" : 1463074816,
"paddingFactor" : 1,
"systemFlags" : 0,
"userFlags" : 0,
"totalIndexSize" : 0,
"indexSizes" : {
},
"capped" : true,
"max" : NumberLong("9223372036854775807"),
"ok" : 1
}
And despite not having set up any replica sets on my local machine, my local db seems to have inherited my production replica set configuration (maybe it's inheriting it through the snapshot???)
rs.config()
{
"_id" : "mongocluster1",
"version" : 38042,
"members" : [
{
"_id" : 4,
"host" : "mongolive-01D.mcluster-01:27017",
"tags" : {
"app" : "backend"
}
},
{
"_id" : 5,
"host" : "mongolive-01C.mcluster-01:27017"
},
{
"_id" : 11,
"host" : "mongoarbiter-01C.mcluster-01:27017",
"arbiterOnly" : true
},
{
"_id" : 7,
"host" : "mongoremote-01Z.mcluster-01:27017",
"priority" : 0,
"hidden" : true
},
{
"_id" : 21,
"host" : "mongodelayed-01D.mcluster-01:27017",
"priority" : 0,
"slaveDelay" : 3600,
"hidden" : true
}
]
}
Not sure if related but also seeing this:
> rs.status()
{ "ok" : 0, "errmsg" : "not running with --replSet" }
And when I start the server I get a replicaSet warning:
MongoDB shell version: 2.4.1
connecting to: test
Server has startup warnings:
** WARNING: soft rlimits too low. Number of files is 256, should be at least 1000
** WARNING: mongod started without --replSet yet 1 documents are present in local.system.replset
** Restart with --replSet unless you are doing maintenance and no other clients are connected.
** The TTL collection monitor will not start because of this.
You captured a snapshot of the data directory of a production node and therefore you got its EXACT database configuration.
This includes its "local" database. The local database contains (among other things) the replica set configuration and the oplog.
Since you intend to run your mongod in stand-alone mode you can simply drop the local database with no ill effect. Use the dropDatabase() command. This will drop the database and the space will be reclaimed by the OS.
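For example, with mongod running stand-alone (started without --replSet), from the mongo shell:
use local
db.dropDatabase() // removes local.oplog.rs and local.system.replset; the disk space is returned to the OS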