MongoDB WriteConcern impact on replication

In general, MongoDB replicates from a Primary to Secondaries asynchronously by shipping the oplog from the primary to the secondaries, with the timing influenced by the number of write operations, elapsed time and other factors.
When describing WriteConcern options, MongoDB documentation states "...primary waits until the required number of secondaries acknowledge the write before returning write concern acknowledgment". This seems to suggest that a WriteConcern other than "w:1" would replicate to at least some of the members of the replica set in a blocking manner, potentially avoiding log shipping.
The basic question I'm trying to answer is this: if every write uses a WriteConcern of "majority", would MongoDB ever have to use log shipment? In other words, does using a WriteConcern of "majority" also control replication timing?
I would like to better understand how MongoDB handles WriteConcern of "majority". A few obvious options:
Primary sends write requests to every Secondary, and blocks the thread until majority respond with acknowledgment
or
Primary pre-selects Secondaries first and sends requests to only those secondaries, blocking the thread until all chosen secondaries respond with acknowledgment
or
Something much smarter than either of these options
If Option 1 is used, in most cases (assuming equidistant placement of secondaries) all secondaries will have received the write operation by the time the write completes, and there's a high probability (although not a guarantee) that all secondaries will have applied it. If true, this behavior enables use cases where writes need to be reflected on Secondaries sooner than the typical asynchronous replication process allows.
Obviously a WriteConcern of "majority" will incur a performance penalty, but this may be acceptable for specific use cases where read operations may target Secondaries (e.g. a ReadPreference of "nearest") and need more recent data.

if every write is using WriteConcern of "majority", would MongoDB ever have to use log shipment?
Replication in MongoDB uses what is termed the oplog. This is a record of all operations on the primary (the only node that accepts writes).
Instead of the primary pushing the oplog to the secondaries, the secondaries pull the oplog from the primary. If replication chaining is allowed (the default), a secondary can also pull the oplog from another secondary. So scenarios 1 and 2 that you posted are not how MongoDB replication works as of MongoDB 4.0.
The details of the replication process are described in the MongoDB GitHub wiki page: Replication Internals.
To quote the relevant parts regarding your question:
If a command includes a write concern, the command will just block in its own thread until the oplog entries it generates have been replicated to the requested number of nodes. The primary keeps track of how up-to-date the secondaries are to know when to return. A write concern can specify a number of nodes to wait for, or majority.
In other words, the secondaries continually report back to the primary how far along they have applied the oplog to their own data sets. Since the primary knows the timestamp at which the write took place, once a secondary has applied that timestamp, the primary can tell that the write has propagated to that secondary. To satisfy the write concern, the primary simply waits until the required number of secondaries have applied the write timestamp.
Note that only the thread specifying the write concern waits for this acknowledgment. No other threads are blocked by this waiting at all.
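As a rough illustration, a write issued with w: "majority" blocks only its own calling thread. A minimal pymongo sketch, where the connection string, database and collection names are placeholders:

```python
from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # placeholder
orders = client.test.orders.with_options(                          # hypothetical namespace
    write_concern=WriteConcern(w="majority", wtimeout=5000)
)

# Blocks only the calling thread until a majority of data-bearing members
# have applied the corresponding oplog entry (or wtimeout elapses).
orders.insert_one({"sku": "abc", "qty": 1})
```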
Regarding your other question:
Obviously a WriteConcern of "majority" will incur a performance penalty, but this may be acceptable for specific use cases where read operations may target Secondaries (e.g. a ReadPreference of "nearest") and need more recent data.
To achieve what you described, you need a combination of read and write concerns. See
Causal Consistency and Read and Write Concerns for more details on this subject.
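As an illustration, here is a hedged pymongo sketch of combining a majority write concern with a majority read concern; the connection string and namespace are assumptions:

```python
from pymongo import MongoClient
from pymongo.read_concern import ReadConcern
from pymongo.read_preferences import ReadPreference
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # placeholder
events = client.test.events  # hypothetical namespace

# Writes wait for a majority of data-bearing members; reads return only
# majority-committed data and may be served by the nearest member.
writes = events.with_options(write_concern=WriteConcern(w="majority"))
reads = events.with_options(
    read_concern=ReadConcern("majority"),
    read_preference=ReadPreference.NEAREST,
)

writes.insert_one({"type": "signup"})
doc = reads.find_one({"type": "signup"})  # can still be stale without a causal session
```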
Write majority is typically used for:
Ensuring that the write will not be rolled back in the event of the primary failure.
Ensuring that the application is not writing so fast that the provisioned hardware of the replica set cannot cope with the traffic; i.e. it can act as a backpressure mechanism.
In combination with read concern, provide the client with differing levels of consistency guarantees.
These points assume that the write majority was acknowledged and the acknowledgment was received by the client. There are multiple different failure scenarios possible (as expected with a distributed system that needs to cope with an unreliable network), but those are beyond the scope of this discussion.

Related

Guarantee consistency of data across microservices accessing a sharded cluster in MongoDB

My application is essentially a bunch of microservices deployed across Node.js instances. One service might write some data while a different service will read those updates. (specific example, I'm processing data that is inbound to my solution using a processing pipeline. Stage 1 does something, stage 2 does something else to the same data, etc. It's a fairly common pattern)
So, I have a large data set (~250GB now, and I've read that once a DB gets much larger than this size, it is impossible to introduce sharding to a database, at least, not without some major hoop jumping). I want to have a highly available DB, so I'm planning on a replica set with at least one secondary and an arbiter.
I am still researching my 'sharding' options, but I think that I can shard my data by the 'client' that it belongs to and so I think it makes sense for me to have 3 shards.
First question, if I am correct, if I have 3 shards and my replica set is Primary/Secondary/Arbiter (with Arbiter running on the Primary), I will have 6 instances of MongoDB running. There will be three primaries and three secondaries (with the Arbiter running on each Primary). Is this correct?
Second question. I've read conflicting info about what 'majority' means... If I have a Primary and Secondary and I'm writing using the 'majority' write acknowledgement, what happens when either the Primary or Secondary goes down? If the Arbiter is still there, the election can happen and I'll still have a Primary. But, does Majority refer to members of the replication set? Or to Secondaries? So, if I only have a Primary and I try to write with 'majority' option, will I ever get an acknowledgement? If there is only a Primary, then 'majority' would mean a write to the Primary alone triggers the acknowledgement. Or, would this just block until my timeout was reached and then I would get an error?
Third question... I'm assuming that as long as I do writes with 'majority' acknowledgement and do reads from all the Primaries, I don't need to worry about causally consistent data? I've read that doing reads from 'Secondary' nodes is not worth the effort. If reading from a Secondary, you have to worry about 'eventual consistency' and since writes are getting synchronized, the Secondaries are essentially seeing the same amount of traffic that the Primaries are. So there isn't any benefit to reading from the Secondaries. If that is the case, I can do all reads from the Primaries (using 'majority' read concern) and be sure that I'm always getting consistent data and the sharding I'm doing is giving me some benefits from distributing the load across the shards. Is this correct?
Fourth (and last) question... When are causally consistent sessions worthwhile? If I understand correctly, and I'm not sure that I do, then I think it is when I have a case like a typical web app (not some distributed application, like my current one), where there is just one (or two) nodes doing the reading and writing. In that case, I would use causally consistent sessions and do my writes to the Primary and reads from the Secondary. But, in that case, what would the benefit of reading from the Secondaries be, anyway? What am I missing? What is the use case for causally consistent sessions?
if I have 3 shards and my replica set is Primary/Secondary/Arbiter (with Arbiter running on the Primary), I will have 6 instances of MongoDB running. There will be three primaries and three secondaries (with the Arbiter running on each Primary). Is this correct?
A replica set Arbiter is still an instance of mongod. It's just that an Arbiter does not have a copy of the data and cannot become a Primary. You should have 3 instances per shard, which means 9 instances in total.
Since you mentioned that you would like to have a highly available database deployment, please note that the minimum recommended replica set members for production deployment would be a Primary with two Secondaries.
If I have a Primary and Secondary and I'm writing using the 'majority' write acknowledgement, what happens when either the Primary or Secondary goes down?
When either the Primary or the Secondary becomes unavailable, a w:majority write will either:
Wait indefinitely,
Wait until the unavailable node is restored, or
Fail with a timeout.
This is because an Arbiter carries no data and cannot acknowledge writes, but it is still counted as a voting member. See also Write Concern for Replica Sets.
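For example, with a Primary/Secondary/Arbiter set and the secondary down, a w: "majority" write can only wait or time out. A pymongo sketch, with placeholder names, of bounding that wait:

```python
from pymongo import MongoClient
from pymongo.errors import WTimeoutError
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # placeholder
items = client.app.items.with_options(                             # hypothetical namespace
    # Without wtimeout the write waits indefinitely for the unavailable
    # data-bearing member; with it, the wait is bounded.
    write_concern=WriteConcern(w="majority", wtimeout=5000)
)

try:
    items.insert_one({"x": 1})
except WTimeoutError:
    # The document may still have been written on the primary; only the
    # majority acknowledgment timed out.
    print("majority not acknowledged within 5 seconds")
```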
I can do all reads from the Primaries (using 'majority' read concern) and be sure that I'm always getting consistent data and the sharding I'm doing is giving me some benefits from distributing the load across the shards
Correct. MongoDB sharding is for scaling horizontally by distributing load across shards, while MongoDB replication is for providing high availability.
If you read only from the Primary and also specify readConcern: majority, the application will read data that has been acknowledged by the majority of the replica set members. This data is durable in the event of a partition (i.e. it will not be rolled back). See also Read Concern "majority".
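A minimal pymongo sketch of that read path; the mongos address and namespace are placeholders:

```python
from pymongo import MongoClient
from pymongo.read_concern import ReadConcern
from pymongo.read_preferences import ReadPreference

client = MongoClient("mongodb://localhost:27017/")   # placeholder mongos address
jobs = client.pipeline.jobs.with_options(            # hypothetical namespace
    read_concern=ReadConcern("majority"),
    read_preference=ReadPreference.PRIMARY,
)

# Returns only data acknowledged by a majority of the shard's replica set,
# i.e. data that will not be rolled back after a failover.
doc = jobs.find_one({"stage": 2})
```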
What is the use case for causally consistent sessions?
Causal Consistency is used if the application requires an operation to be logically dependent on a preceding operation (causal). For example, a write operation that deletes all documents based on a specified condition and a subsequent read operation that verifies the delete operation have a causal relationship. This is especially important in a sharded cluster environment, where write operations may go to different replica sets.
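A hedged pymongo sketch of a causally consistent session for exactly that delete-then-verify pattern; the address and namespace are placeholders:

```python
from pymongo import MongoClient
from pymongo.read_concern import ReadConcern
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017/")  # placeholder mongos/replica set address
jobs = client.pipeline.jobs.with_options(           # hypothetical namespace
    read_concern=ReadConcern("majority"),
    write_concern=WriteConcern(w="majority"),
)

# Operations inside the session are causally ordered: the read is guaranteed
# to observe the effects of the preceding delete, even across members/shards.
with client.start_session(causal_consistency=True) as session:
    jobs.delete_many({"status": "done"}, session=session)
    remaining = jobs.count_documents({"status": "done"}, session=session)
    assert remaining == 0
```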

Mongodb write performance/load primary vs secondary

In a replicated Mongodb environment, is the write performance/load on the secondaries the same as the primaries? If so, or not, why?
Edit: By writes to the secondary I am referring to the automatic propagation of the writes from the primary to the secondary.
Edit2: To help guide the conversation, http://docs.mongodb.org/manual/core/replica-set-sync/#multithreaded-replication might suggest that write performance from the primaries to the secondaries could be better, since the operations are applied in batches.
If by load you mean only writes in an isolated or off-peak system, then the write performance/load will be similar and fairly uninteresting. However, in a working system with concurrent reads and writes, the answer is no, because you can shift that read/write load by using a read preference of "secondary" or "secondaryPreferred" (anything but "primary"). In that scenario, with a replica set of 11 secondaries and 1 primary, one can clearly see that any single secondary has only a fraction of the CPU/memory/disk contention of the single primary. Recall that the default mode is brutal on the primary; there the secondaries exist only for redundancy and high availability.
primary: Default mode. All operations read from the current replica set primary.
One can think of a RAID system, whereby mirroring increases redundancy while striping increases performance. (True, not exactly the same mechanics, but from the user's point of view it's a similar result in terms of reads. Sharding is closer to RAID with striping.) Using the default read preference of 'primary' you tap only into mirroring; using a read preference of 'secondary' you tap into the greater read throughput.
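As an illustration of tapping the secondaries for read throughput, a pymongo sketch using secondaryPreferred; the namespace and query are placeholders:

```python
from pymongo import MongoClient
from pymongo.read_preferences import ReadPreference

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # placeholder

# Default read preference: every read hits the primary ("mirroring" only).
primary_reads = client.test.articles  # hypothetical namespace

# secondaryPreferred spreads reads across the secondaries, falling back to
# the primary if none are available; results may be slightly stale.
secondary_reads = client.test.articles.with_options(
    read_preference=ReadPreference.SECONDARY_PREFERRED
)

doc = secondary_reads.find_one({"slug": "hello-world"})  # placeholder query
```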

Can someone give me detailed technical reasons why writing to a secondary in MongoDB replica set is not allowed

I know we can't write to a secondary in MongoDB. But I can't find any technical reason why. In my case, I don't really care if there is a slight delay but write to a secondary might be faster. Please provide some reference if you can. Thanks!!
The reason why you can not write to a secondary is the way replication works:
Secondaries connect to a special collection on the primary, called the oplog. This oplog contains the operations which were run through the query optimizer. Basically, the oplog is a capped collection, and the secondaries use a tailable cursor to access its entries and process them from the oldest to the newest.
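To make the mechanism concrete, here is an illustrative pymongo sketch that tails the oplog with a tailable cursor, roughly the way a secondary conceptually consumes it (the connection string is a placeholder, and reading local.oplog.rs requires appropriate privileges):

```python
from pymongo import CursorType, MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # placeholder
oplog = client.local["oplog.rs"]  # capped collection of replicated operations

# Tailable cursor: keeps returning new entries as they are appended,
# roughly how a secondary consumes its sync source's oplog.
cursor = oplog.find({}, cursor_type=CursorType.TAILABLE_AWAIT)
for entry in cursor:
    print(entry["ts"], entry["op"], entry.get("ns"))
```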
When an election takes place because the primary goes down / steps down, the secondary with the most recent oplog entry is elected primary. The secondaries connect to the new primary, query for the oplog entries they haven't processed yet, and the cluster is back in sync.
This procedure is pretty straightforward. Now imagine one could write to a secondary. All nodes in the cluster would have to have a tailable cursor on all other nodes of the cluster, and maintaining a consistent state when one machine fails becomes very complicated, and in case of a failure even race-condition dependent. Effectively, there could no longer be a guarantee of even eventual consistency. It would be more or less a gamble.
That being said: a replica set is not for load balancing. A replica set's purpose is to enhance the availability and durability of the data. Because reading from a secondary is a non-risky thing, MongoDB made it possible, according to their dogma of offering the maximum of possible features without compromising scalability (which would be severely hampered if one could write to secondaries).
But MongoDB does provide a load balancing feature: sharding. Choosing the right shard key, you can distribute read and write load over (almost) as many shards as you want. Not to mention that you can provide a lot more of the precious RAM for a reasonable price when sharding.
There is a one liner answer:
Multi-master replication is a hairball.
If you were allowed to write to secondaries, MongoDB would have to use multi-master replication to get this working: http://en.wikipedia.org/wiki/Multi-master_replication where essentially every node copies to each other the ops (operations) they have received, and somehow does so without losing data.
This form of replication has many obstacles to overcome.
One would be throughput; remember that ops need to transfer across the entire network, so it is possible you might actually lose throughput while adding consistency problems. So getting better throughput would be a problem. It is much like having a secondary take all of the primary's ops, plus its own, replicate them outbound, and then asking it to do yet another job.
Adding consistency over a distributed set like this would also be hazardous; one main question that bugs MongoDB when asking whether a member is down is: "Is it really down, or just unavailable?". It is almost impossible to ensure true consistency in a distributed set like this, or at the very least very tricky.
Those are just two problems immediately.
Essentially, to sum up, MongoDB does not yet have multi-master replication. It could in the future, but I would not be jumping for joy if it does; I will most likely ignore such a feature. Normal replication and sharding, in both ACID and non-ACID databases, cause enough blood pressure as it is.

mongo - read Preference design strategy

I have an application for which I am tasked with designing a mongo backed data storage.
The application goals are to provide the latest data ( no stale data ) with the fastest load times.
The data size is on the order of a few million, and the application is write heavy.
In choosing the read strategy for a 3-node replica set (1 primary, 1 secondary, 1 arbiter), I came across two different strategies for determining where to source the reads from:
Read from the secondary to reduce load on the primary, with writeConcern = REPLICA_SAFE, thus ensuring the writes are done on both the primary and the secondary. Set the read preference to secondaryPreferred.
Always read from the primary, but ensure the data is on the primary before reading, so set writeConcern = SAFE. The read preference is the default, primaryPreferred.
What are the things to be considered before choosing one of the options.
According to the documentation, REPLICA_SAFE is a deprecated term and should be replaced with REPLICA_ACKNOWLEDGED. The other problem here is that the w value of this constant appears to be 2.
This is a problem for your configuration, as you have your Primary and only one Secondary, combined with an arbiter. In the event of a node going down, or being otherwise unreachable, with the level set like this it expects writes to be acknowledged by 2 nodes when 2 data-bearing nodes will not be available. You can leave write operations hanging this way.
The better case for your configuration would be MAJORITY, as no matter the number of nodes it will ensure writes to the Primary and a "majority" of the secondaries. But in your case, any write concern involving more than the PRIMARY will block on all writes if one of your nodes is down or unavailable, as you would need at least two more secondary nodes available so that there would still be a "majority" of nodes to acknowledge the write. Or drop the ARBITER and have two SECONDARY nodes.
So you will have to stick to the default w=1, where all writes are acknowledged by the PRIMARY, unless you can deal with writes failing when your one SECONDARY goes down.
You can set the read preference to secondaryPreferred as long as you accept that you may "possibly" be reading stale data, i.e. not the latest representation of your data, as the only real guarantee is of a write to the Primary node. The general replication considerations remain: the nodes should be somewhat equal in processing capability, or this can lead to lag or general performance degradation as a result of your query operations.
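A hedged pymongo sketch of that compromise (primary-only acknowledgment plus secondaryPreferred reads), shown in pymongo rather than the Java driver constants discussed above; the namespace and values are placeholders:

```python
from pymongo import MongoClient
from pymongo.read_preferences import ReadPreference
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # placeholder

coll = client.app.metrics.with_options(                 # hypothetical namespace
    write_concern=WriteConcern(w=1),                    # acknowledged by the primary only
    read_preference=ReadPreference.SECONDARY_PREFERRED, # reads may lag the primary
)

coll.insert_one({"metric": "load", "value": 0.7})
doc = coll.find_one({"metric": "load"})  # possibly stale when served by a secondary
```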
Remember that replication is implemented for redundancy and is not a system for improving performance. If you are looking for performance then perhaps look into scaling up your system hardware or implement sharding to distribute the load.

MongoDB writeConcern for production

My question may sound too general, but I'm ready to give any missing data.
We make something like a social network. In order to improve read performance and to ease the load on the primary (master) instance, we've set
readPreference=secondaryPreferred
in our replicaSet. But with this, there's no guarantee that the data has been written to the secondary instances before you read from them, so we had to set the
w=3
option.
So far, everything seems to be working but measurements on my local replicaSet show the following insert statistics.
Inserting 300 objects:
w=1 - 0.10s
w=3 - 1.31s
Inserting 5000 objects:
w=1 - 0.6s
w=3 - 14.6s
The question is, is this difference expected, or I'm doing something wrong?
The difference in performance is expected because w=3 means that you want to wait for acknowledgement that data was successfully replicated to at least two of your secondaries in addition to the acknowledgement from your primary (w=1).
For clarity, w=1 simply means that you want an acknowledgement from the primary that an operation was completed. Any errors that occur, such as duplicate key errors or network errors, would be reported back as part of that acknowledgement.
http://docs.mongodb.org/manual/reference/write-concern/
Refer to the link above, and you can see there are lower write concerns that let you trade safety for lower latency.
If you want a higher level of durability or safety, you might use j=1 to wait for an acknowledgement that your operation was written to the journal (allowing recovery from a failure). w=N (N > 1) increases safety by waiting for acknowledgement from N replica set members to ensure that your operation was successfully replicated to other members. To be clear, w > 1 isn't necessary to instruct the driver to write to the replicas; replication happens regardless. If you decide to use w=N, be aware that you can get yourself into a bad situation if replica set members fail and the number of available members falls below N. w=majority is a more flexible option.
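To make the trade-offs concrete, a small pymongo sketch of the write concern variants discussed here; the connection string and collection name are hypothetical:

```python
from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # placeholder

w1 = WriteConcern(w=1)                            # primary acknowledgment only (default)
j1 = WriteConcern(w=1, j=True)                    # also wait for the primary's journal
w3 = WriteConcern(w=3, wtimeout=5000)             # fixed count: blocks/times out if fewer than 3 members are up
wmaj = WriteConcern(w="majority", wtimeout=5000)  # adapts to the replica set size

posts = client.social.posts.with_options(write_concern=wmaj)  # hypothetical collection
posts.insert_one({"author": "alice", "text": "hello"})
```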
Lastly, you may want to re-evaluate why you're reading from the secondaries. Secondaries are eventually consistent, as MongoDB uses asynchronous replication. If you're expecting consistent reads, it makes more sense to read from the primary. If your reason for reading from the secondaries is scaling, you should consider sharding, as this is the primary mechanism for scale-out. Distributing load to secondaries rarely improves scalability: operations are replicated to them anyway, so you're not gaining much from a lower write load there. Sometimes it makes sense for distributing different types of workloads (and may lead to better memory utilization); for instance, running a map-reduce job on a secondary might make sense. Replica sets are primarily for high availability: fault tolerance providing automatic failover and handling of network partition issues.