Suppose you have a three-node replica set. Node 1 is the primary, Node 2 is a secondary, and Node 3 is a secondary running with a delay of 10 seconds. All writes to the database are issued with w=majority and j=1 (by which we mean that the getLastError call has those values set).
A write operation (it could be an insert or an update) is initiated from your application at time=0. At time=5 seconds, the primary, Node 1, goes down for an hour and another node is elected primary.
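For concreteness, here is roughly what such a write looks like from the mongo shell (the collection name is made up; the legacy getLastError form matches the question's wording, and the options-style form is the modern equivalent):

    // Legacy form: write, then confirm via getLastError with w/j set
    db.orders.insert({ _id: 1, status: "new" })
    db.runCommand({ getLastError: 1, w: "majority", j: true })

    // Modern equivalent: write concern attached to the operation itself
    db.orders.insertOne(
        { _id: 2, status: "new" },
        { writeConcern: { w: "majority", j: true } }
    )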
Will there be a rollback of data when Node 1 comes back up? Choose the best answer.
1. Always yes
2. Always no
3. Maybe, it depends on whether Node 3 has processed the write.
4. Maybe, it depends on whether Node 2 has processed the write.
Any help would be greatly appreciated.
I am going to change my answer to 4; however, with w=majority it should arguably be 2. You could have an edge case where a wtimeout is returned for an operation that did not get acknowledged by a majority of the set. These problems should be very rare or almost never happen, but they are something to keep in mind.
Since a majority of the nodes (1 and 2) will acknowledge the write, if Node 1 goes down, Node 2 should have its operations and be up to speed. As such, Node 1 should not need to roll back to Node 2's state; instead, Node 1 will play catch-up.
The journal setting is not what determines whether a rollback occurs.
Please read this relevant excerpt from the MongoDB documentation: "A rollback does not occur if the write operations replicate to another member of the replica set before the primary steps down and if that member remains available and accessible to a majority of the replica set."
I think this is a question from a MongoDB exam, but the answer is easy to see:
Maybe, it depends on whether Node 2 has processed the write.
Consider a replica set with 3 nodes: 2 data nodes and one arbiter (PSA). When one of my data nodes goes down for some reason and I bring it back, it is in state STARTUP2 while it syncs with the primary. During this time I lose my change stream, because my replica set has 2 data nodes but I don't have a majority of nodes to read from.
How can I handle this issue?
I also read this MongoDB doc. Is it possible to set the primary node's priority higher than that of the secondary (the one syncing itself with the primary)? Can I keep a majority by doing this even while my secondary node is in the STARTUP2 state?
There are technically two types of majority; I'll call them "election majority" and "data majority".
Arbiters are supposed to help with "election majority": they help maintain primary availability in a PSA architecture should the S go down. However, they are not part of the "data majority".
"Data majority", in contrast, are both for voting and acknowledging majority-read and majority-write.
Change streams by design return only documents that are committed to the "data majority" of voting nodes, because a write that has propagated to them will not be rolled back. It would be confusing if a change stream declared that a document was written, the write was then rolled back, and the stream had to issue a "no wait, scratch that, the write didn't happen".
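As a minimal sketch (the database and collection names here are made up), a change stream in the shell only ever surfaces majority-committed writes:

    // Open a change stream; only majority-committed changes are delivered
    var cursor = db.getSiblingDB("app").orders.watch()
    while (cursor.hasNext()) {
        printjson(cursor.next())
    }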
Thus, by their nature, arbiters are not compatible with majority-read and majority-write scenarios such as change streams or transactions. However, arbiters still have their place in a replica set, provided you know what to expect from them.
See What is the default mongod write concern in which version? for a more complete explanation of write concerns and the effect of having arbiters.
A secondary in STARTUP2 is not a secondary yet. It may vote in elections, but it won't acknowledge majority writes since it's still starting up.
In terms of change streams, since in a PSA architecture the "data majority" is practically only the PS part of PSA, none of the data-bearing nodes can be offline if majority reads and writes are to be maintained.
The best solution is to replace the arbiter with an actual data-bearing node. This way, you get majority writes and majority reads, and you can have one node down and still maintain a majority.
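A rough sketch of that change from the shell (the hostnames are placeholders):

    // Remove the arbiter and add a third data-bearing member
    rs.remove("arbiter.example.net:27017")
    rs.add("mongodb3.example.net:27017")
    rs.status()    // verify the new member reaches SECONDARY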
Doing an initial sync to a secondary is a very time-consuming process. I haven't found anywhere in the MongoDB docs whether the primary can accept write operations during the initial sync, or whether that is not recommended. Is it safe to keep the primary operational (for writes) during this process?
Thanks
In order for a primary to accept a write, at least a quorum of voting replica set members must be available to vote, and they must vote for the same primary. For instance, for a 3-member replica set you need at least 2.
A secondary that is in initial sync should be in the RECOVERING state, and according to the documentation it can vote (http://docs.mongodb.org/manual/reference/replica-states/):
    3   RECOVERING   Can vote. Members either perform startup self-checks,
                     or transition from completing a rollback or resync.
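If you want to watch the syncing member's state yourself, one quick check from the shell (a sketch):

    // Print each member's name and current replica set state
    rs.status().members.forEach(function (m) {
        print(m.name + " : " + m.stateStr)
    })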
Now, should you? I think the answer depends on how many members were in the set before. If you've been running with 2 data nodes and 1 arbiter, whether running with only 1 data node for a while is acceptable is something only you can answer: yes, it's riskier, but what's your alternative, being down completely?
If you have 3 data nodes and 1 is down for an initial sync, I don't see much of an issue unless you have very high data-redundancy needs.
If you are starting from having only 1 node and are transitioning into a replica set, well, you are no worse off than you were before.
Above all else, always make certain you have at least 3 members in your replica set, preferably with at least 2 data nodes and, generally speaking, an odd number of voters. A minimal sketch of such a 3-member, all-data-bearing configuration is below.
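(Hostnames here are placeholders.)

    rs.initiate({
        _id: "rs0",
        members: [
            { _id: 0, host: "mongodb0.example.net:27017" },
            { _id: 1, host: "mongodb1.example.net:27017" },
            { _id: 2, host: "mongodb2.example.net:27017" }
        ]
    })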
Suppose I have set w=majority as the write concern and a node fails during a write operation. Will the majority be recalculated according to the currently alive nodes?
That is, suppose there are 3 nodes, so the majority is 2. If a node fails during a write operation, will the majority be decreased, or will it remain the same and wait for the node to come up?
The majority of a replica set is determined based on replica set configuration, not its current running state.
In other words, if you have a three-node replica set configured, then the majority is always two. If one node is offline, two is still the majority. If two nodes are offline, two is still the majority, and it cannot be satisfied until one of the offline nodes comes back online.
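A quick way to see this from the shell (a sketch, assuming all configured members are voting members; the formula is floor(members / 2) + 1):

    // Majority is computed from the configured membership, not live nodes
    var n = rs.conf().members.length
    print("majority = " + (Math.floor(n / 2) + 1))    // prints 2 for a 3-member set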
I'm writing to a 6-node mongo cluster.
To force write to all nodes, I'm using Write Concern with X=6 and timeout=2000.
My question: what happens if mongo cannot write to all 6 nodes within 2000 milliseconds?
Will mongo come back with "operation failed" or "operation partial success"?
I believe you mean w=6, and that you have already read the documentation about Write Concern. The documentation for getLastError explains the possible responses from getLastError().
The response in your timeout case should be something like this test case in the MongoDB codebase.
In your case, w=6 with 6 nodes means that if you lose 1 node, all writes will return errors. Is there any particular reason to use 6 nodes in your replica set? If there is only one replica set, 5 nodes could give the same level of availability: both a 5-node and a 6-node set stay writable as long as a majority survives, so each can tolerate losing 2 nodes.
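On the timeout behavior itself, a sketch (the collection name is made up): when wtimeout expires before all 6 members acknowledge, the operation returns a write concern error, but the write is not undone and may still replicate later, so it is closer to "partial success" than "failed":

    try {
        db.events.insertOne(
            { ts: new Date() },
            { writeConcern: { w: 6, wtimeout: 2000 } }
        )
    } catch (e) {
        // Write concern error: replication did not reach w=6 in time,
        // but the document may already be on the primary and some
        // secondaries; MongoDB does not roll it back on wtimeout.
        printjson(e)
    }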
My server setup is like this:
2 servers. MongoDB has a replica set spanning both servers; each is one node.
I then have my node.js server connect to MongoDB.
What happened was: when I killed the secondary server (shutting it down), the MongoDB primary stayed up, but the Node.js server then had connection issues with MongoDB. Even after I added the server back, it didn't work. I use mongoose and connect-mongo.
So, what happened? And how do I shut down a Mongo node properly?
If you have a replica set with 2 nodes, when one node goes down the other will demote itself to secondary. If you aren't connecting with slaveOk true, then you won't be able to read (and in either case you won't be able to write).
This is a safety measure imposed by MongoDB, which requires that a majority (meaning half plus one) of a replica set be able to see one another in order to ensure that a primary can be safely elected. If a majority cannot be seen, the nodes in the minority cannot know whether the "other half" have elected a primary. Having two primaries at the same time would be Very Bad (TM), as that could lead to conflicting updates.
In situations where you only want to run two nodes, you can also run an arbiter to break ties in the case that one node goes down or becomes otherwise invisible to the replica set. An arbiter is a normal mongod process, but does not store any data -- essentially it only participates in elections, and is idle otherwise. In a replica set with 2 "normal" nodes and one arbiter, either one of the two data-holding nodes can go down without losing a majority.
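A sketch of adding such an arbiter from the shell (the hostname is a placeholder):

    // With 2 data nodes + 1 arbiter, losing either data node still
    // leaves a 2-of-3 majority, so a primary can be elected
    rs.addArb("arbiter.example.net:27017")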
For more, see the MongoDB documentation on replica sets and the documentation on arbiters.
If your primary is still primary after you take down the secondary, it's a Node driver issue. In any case, with an even number of replica nodes you should always have an arbiter; the "why" is well documented in MongoDB's docs.
In case this is a node.js issue, which version of node-mongodb-native are you using? I had some different replica set issues 2 months ago, but they have been fixed in the latest versions. The driver's last replica set issue was closed on Sept 9th; you should give it a try with the last tagged version (V0.9.6.18 as I'm writing this).