MongoDB eventual consistency on replica sets when writing to two documents

We have a single client that serially writes to two documents (with {w:1}).
For example, the original documents may be:
{_id: "a", value: 0},
{_id: "b", value: 0}
and the client updates document "a" to {_id: "a", value: 1} and then, after the update completes, the client updates document "b" to {_id: "b", value: 1}.
A second client calls find({}) afterwards. The second client reads from a secondary, which may not have received all the changes.
Obviously it can read the following states:
{_id:"a",value:0},{_id:"b",value:0}
{_id:"a",value:1},{_id:"b",value:0}
{_id:"a",value:1},{_id:"b",value:1}
which are "real" states on the primary (at some moment in the past).
Can the second client see a state like: {_id:"a",value:0},{_id:"b",value:1}? Notice that this state never existed on the primary.
P.S.
The explanation here says:
Secondaries ... apply write operations in the order that they appear in the oplog.
Does that mean the secondaries change their documents in the same order they were updated on the primary?
P.S. Do find cursors "freeze" the state of the documents they are reading (i.e. ignore changes that were made after the cursor was created)? Could things be different if I used find(...).sort({_id:-1}), or if document "a"'s id were "c" (i.e. larger than "b")?
Thanks

First question: yes, the operations on the secondary are performed in the same order as on the primary. All operations are recorded in the oplog. The oplog itself is not a journal of the queries performed (e.g. updateMany()) but of what has to be done to the actual documents, so its operations are idempotent.
Regarding the cursor operation: it might happen that documents get moved or updated while iterating over the cursor. It may even happen that the same document appears twice on the cursor, if its index or storage location changes during the update.
There is a special snapshot mode that provides some sort of isolation, but it has some limitations, e.g. it cannot be used with sharding.
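To make the scenario concrete, here is a minimal pymongo sketch of the two serial {w:1} updates and the secondary read; the connection string, database and collection names are made up for illustration and are not part of the question:

from pymongo import MongoClient, ReadPreference, WriteConcern

client = MongoClient("mongodb://host1,host2,host3/?replicaSet=rs0")  # hypothetical URI
coll = client.test.docs
w1 = coll.with_options(write_concern=WriteConcern(w=1))

# Writer: two serial updates, each acknowledged by the primary only (w: 1)
w1.update_one({"_id": "a"}, {"$set": {"value": 1}})
w1.update_one({"_id": "b"}, {"$set": {"value": 1}})

# Reader: reads from a secondary, which may lag behind the primary
secondary = coll.with_options(read_preference=ReadPreference.SECONDARY)
print(list(secondary.find({})))

Because the secondary applies the oplog in order, whatever it has replicated corresponds to some prefix of the writes, subject to the cursor caveats described above.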

If our document was updated on the primary by the sequence:
change A
change B
change C
then the secondaries will apply the same sequence to the document:
change A
(the document can be read here without the later changes applied)
change B
(the document can be read here without the later changes applied)
change C
For locking, see this: MongoDB can optimise the sequence of operations, which can allow reads to proceed even while a document update is in flight.

Related

Ordering a sequence of writes to MongoDB v4.0 / DocumentDB

Problem
I need to establish write consistency for a sequence of queries using updateMany, against a DocumentDB cluster with only a single primary instance. I am not sure which approach to use, between Transactions, ordered BulkWrites, or simply setting a Majority write concern for each updateMany query.
Environment
AWS DocumentDB cluster, which maps to MongoDB v4.0, via pymongo 3.12.0.
Note: the cluster has a single primary instance, and no other instances. In practice, AWS will have us connect to the cluster in replica set mode. I am not sure whether this means we need to still think about this problem in terms of replica sets.
Description
I have a sequence of documents D, each of which is an array of records. Each record is of the form {field: MyField, from_id: A, to_id: B}.
To process a record, I need to look in my DB for all fields MyField that have value A, and then set that value to B. The actual query I use to do this is updateMany. The code looks something like:
def doWriteUpdate(record):
    query = ...  # format the query (filter and update) based on the record's information
    db.updateMany(query)

for doc in Documents:
    for record in doc:
        doWriteUpdate(record)
I need the update operations to happen such that the writes have actually been applied, and are visible, before the next doWriteUpdate query runs.
This is because I expect to encounter a situation where I can have a record {field: MyField, from_id: A, to_id: B}, and then a subsequent record (whether in the same document, or a following document) {field: MyField, from_id: B, to_id: C}. Being able to properly apply the latter record operation, depends on the former record operation having been committed to the database.
Possible Approaches
Transactions
I have tried wrapping my updateMany operation in a Transaction. If this had worked, I would have called it a day; but I exceed the size allowed: Total size of all transaction operations must be less than 33554432. Without rewriting the queries, this cannot be worked around, because the updateMany has several layers of array-filtering, and digs through a lot of documents. I am not even sure if transactions are appropriate in this case, because I am not using any replica sets, and they seem to be intended for ACID with regard to replication.
Ordered Bulk Writes
BulkWrite.updateMany would appear to guarantee execution order of a sequence of writes. So, one approach could be to generate the update queries for each record r in a document D, and then send those through (preserving order) as a BulkWrite. While this would seem to preserve execution order, I don't know a) whether the preservation of execution order also guarantees write consistency (everything executed serially is applied serially), and, more importantly, b) whether the following BulkWrites, for the other documents, will interleave with this one.
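For reference, an ordered bulk write in pymongo would look roughly like the sketch below; the collection name and filters are placeholders. Within a single bulk_write call with ordered=True, the server executes the operations sequentially and stops at the first error:

from pymongo import MongoClient, UpdateMany

client = MongoClient()       # hypothetical connection
coll = client.mydb.records   # hypothetical collection

ops = [
    UpdateMany({"MyField": "A"}, {"$set": {"MyField": "B"}}),
    UpdateMany({"MyField": "B"}, {"$set": {"MyField": "C"}}),
]
result = coll.bulk_write(ops, ordered=True)  # ordered=True is the default
print(result.modified_count)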
WriteConcern
Pymongo states that writes will block given a desired WriteConcern. My session is single-threaded, so this should give the desired behavior. However, MongoDB says
For multi-document transactions, you set the write concern at the transaction level, not at the individual operation level. Do not explicitly set the write concern for individual write operations in a transaction.
I am not clear on whether this pertains to "transactions" as in the general sense, or MongoDB Transactions set up through session objects. If it means the latter, then it shouldn't apply to my use case. If the former, then I don't know what other approach to use.
The proper write concern is majority, combined with a linearizable read concern. From the MongoDB documentation on real time order:
Combined with "majority" write concern, "linearizable" read concern enables multiple threads to perform reads and writes on a single document as if a single thread performed these operations in real time; that is, the corresponding schedule for these reads and writes is considered linearizable.
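A sketch of that advice in pymongo (the collection handle, filters and document id are placeholders; note that a linearizable read concern only applies to reads from the primary that identify a single document):

from pymongo import MongoClient, WriteConcern
from pymongo.read_concern import ReadConcern

client = MongoClient()       # hypothetical connection
coll = client.mydb.records

# Acknowledge each update only after a majority of the replica set has applied it
majority_coll = coll.with_options(write_concern=WriteConcern("majority"))
majority_coll.update_many({"MyField": "A"}, {"$set": {"MyField": "B"}})

# A later read that must observe those writes can use a linearizable read concern
linear_coll = coll.with_options(read_concern=ReadConcern("linearizable"))
doc = linear_coll.find_one({"_id": "record-1"})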

MongoDB: always increasing "_id" field?

Is the _id field in MongoDB always increasing for the next inserted document in the collection, even if we have multiple shards? So if I use collection.watch, do I always get a higher _id field for the next document than for the previous one? I need this to implement a catch-up subscription and not lose any document. So for every processed document from collection.watch I store its _id, and if I crash, I can select all documents with _id > last_seen_id in addition to collection.watch.
Or do I have to use some sort of auto-incremented value? I don't want to, because it will hurt performance a lot and defeat the purpose of sharding.
ObjectIds are monotonically increasing most of the time, but not all of the time. See What does MongoDB's documentation mean when it says ObjectIDs are "likely unique"? and Can a 4 byte timestamp value in MongoDb ObjectId overflow?. If you need a guaranteed monotonically increasing counter, you need to implement it yourself.
As you pointed out this isn't a trivial thing to implement in a distributed environment, which is why MongoDB doesn't provide this.
One possible solution:
Have a dedicated counter collection
Seed the collection with a document like {i: 1}
Issue a find-and-modify operation that uses $inc (https://docs.mongodb.com/manual/reference/operator/update/inc/) with no condition (thus matching any document in the collection, i.e. the one and only document, which is the counter)
Request the new document as the update result (e.g. return_document: :after, see https://docs.mongodb.com/ruby-driver/master/tutorials/ruby-driver-crud-operations/#update-options)
Use the returned value as the counter
This doesn't get you a queue. If you want a queue, there are various libraries and systems that provide queues.
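Putting those steps together, a minimal pymongo sketch of such a counter (the collection and field names are made up):

from pymongo import MongoClient, ReturnDocument

client = MongoClient()            # hypothetical connection
counters = client.mydb.counters

# Seed the one-and-only counter document once
counters.update_one({}, {"$setOnInsert": {"i": 1}}, upsert=True)

def next_counter():
    # Atomically increment the single counter document and return the new value
    doc = counters.find_one_and_update(
        {}, {"$inc": {"i": 1}}, return_document=ReturnDocument.AFTER
    )
    return doc["i"]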

MongoDB: Document-based ACID vs Multi-Document ACID

Consider an application in which we have some docs (I use doc instead of document in order to differentiate it from MongoDB's document) and modifications are performed on them. The only requirement we have is that changes on multiple docs are done atomically (All of them are done, or none). There are two ways to implement it:
A transaction is started and all the changes to the docs are performed inside it. Then it is commited. Whenever we need a doc we retrieve it by its ID.
A new document is added to MongoDB that includes all the changes to the docs (example below). Since a document is inserted atomically there is no need for a transaction. We put an index on changes.docId and whenever we want to retrieve a doc we find all changes on the doc (by the index) and aggregate them and produce the doc.
{
  _id: ...,
  changes: [
    {docId: 1, change: ...},
    {docId: 10, change: ...},
    {docId: 5, change: ...},
    ...
  ]
}
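For comparison, the two options might look roughly like this in pymongo; the collection names, session handling and change payloads are illustrative only, and option 1 requires a replica set (MongoDB 4.0+):

from pymongo import MongoClient

client = MongoClient()           # hypothetical connection
docs = client.mydb.docs
changes = client.mydb.changes

# Option 1: update each doc inside a multi-document transaction
with client.start_session() as session:
    with session.start_transaction():
        docs.update_one({"_id": 1}, {"$set": {"value": "x"}}, session=session)
        docs.update_one({"_id": 10}, {"$set": {"value": "y"}}, session=session)

# Option 2: record all changes in one document; a single insert is atomic by itself
changes.insert_one({"changes": [
    {"docId": 1, "change": "x"},
    {"docId": 10, "change": "y"},
]})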
Note that since we need the history of changes, even in the first solution we keep the changed values inside the doc. Thus, by the measure of storage space these two solutions are not much different (without considering indexes, ...).
The question is: which of these solutions is better?
Some of my own thoughts on this question:
The second solution may be faster in writes (It does not need transaction handling among different documents and shards).
The first solution may be faster in reads (The second solution needs to look for all the changes on the doc with the help of the index, which may be spread in different documents or even shards).
Assuming that reads are more prevalent than write (although not much), if satisfying ACID among multiple documents (and shards) in MongoDB is super efficient and very low-cost the first solution may be better. But, if transaction handling makes a lot of overhead on the system and requires a tremendous amount of coordination among shards, the second solution may be better.

Spring Data MongoDB Concurrent Updates Behavior

Imagine there's a document containing a single field: {availableSpots: 100},
and there are millions of users racing to get a spot by sending a request to an API server.
Each time a request comes in, the server reads the document and, if availableSpots is > 0, it decrements it by 1 and creates a booking in another collection.
Now I read that MongoDB locks the document whenever an update operation is performed.
What will happen if there are a million concurrent requests? Will it take a long time because the same document keeps getting locked? Also, the server reads the value of the document before it tries to update the document, and by the time it acquires the lock, the spot may not be available anymore.
It is also possible that multiple threads see "availableSpots > 0" as true at the same instant, but in reality availableSpots may not be enough for all the requests. How to deal with this?
The most important thing here is atomicity and concurrency.
1. Atomicity
Your operation to update (decrement by one) if availableSpots > 0:
db.collection.updateOne({ availableSpots: { $gt: 0 } }, { $inc: { availableSpots: -1 } })
is atomic.
$inc is an atomic operation within a single document.
Refer : https://docs.mongodb.com/manual/reference/operator/update/inc/
2. Concurrency
MongoDB has document-level concurrency control for write operations, so each update will take a lock on the document.
Now your questions:
What will happen if theres a million concurrent requests?
Yes, each update will be performed one at a time (due to locking), so throughput will slow down.
the server reads the value of document before it tries to update the
document, and by the time it acquires the lock, the spot may not be
available anymore.
Since the operation is atomic, this will not happen. It will work as you want: only 100 updates will succeed (each reporting one modified document), because once availableSpots reaches 0 the filter {availableSpots: {$gt: 0}} no longer matches and further updates modify nothing.
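A minimal pymongo sketch of this check-and-decrement pattern (the connection, collection names and ids besides availableSpots are assumptions for illustration):

from pymongo import MongoClient

client = MongoClient()            # hypothetical connection
spots = client.mydb.spots
bookings = client.mydb.bookings

result = spots.update_one(
    {"_id": "event-1", "availableSpots": {"$gt": 0}},
    {"$inc": {"availableSpots": -1}},
)
if result.modified_count == 1:
    # We atomically claimed a spot; record the booking separately
    bookings.insert_one({"event": "event-1", "user": "user-42"})
else:
    # The filter matched nothing: no spots were left at the moment of the update
    print("sold out")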
MongoDB uses WiredTiger as the default storage engine starting with version 3.2.
Wired Tiger provides document level concurrency:
From docs:
WiredTiger uses document-level concurrency control for write
operations. As a result, multiple clients can modify different
documents of a collection at the same time.
For most read and write operations, WiredTiger uses optimistic
concurrency control. WiredTiger uses only intent locks at the global,
database and collection levels. When the storage engine detects
conflicts between two operations, one will incur a write conflict
causing MongoDB to transparently retry that operation.
When multiple clients are trying to update a value in a document, only that document will be locked, not the entire collection.
My understanding is that you are concerned about the performance of many concurrent ACID-compliant transactions against two separate collections:
a collection (let us call it spots) with one document {availableSpots: 999..}
another collection (let us call it bookings) with multiple documents, one per booking.
Now I read that MongoDB locks the document whenever an update operation is performed.
It is also possible that the threads are getting "availableSpot > 0"
is true at the same instant in time, but in reality the availableSpot
may not be enough for all the requests. How to deal with this?
With version 4.0, MongoDB provides the ability to perform multi-document transactions against replica sets. (The forthcoming MongoDB 4.2 will extend this multi-document ACID transaction capability to sharded clusters.)
This means that no write operations within a multi-document transaction (such as updates to both the spots and bookings collections, per your proposed approach) are visible outside the transaction until the transaction commits.
Nevertheless, as noted in the MongoDB documentation on transactions a denormalized approach will usually provide better performance than multi-document transactions:
In most cases, multi-document transaction incurs a greater performance
cost over single document writes, and the availability of
multi-document transaction should not be a replacement for effective
schema design. For many scenarios, the denormalized data model
(embedded documents and arrays) will continue to be optimal for your
data and use cases. That is, for many scenarios, modeling your data
appropriately will minimize the need for multi-document transactions.
In MongoDB, an operation on a single document is atomic. Because you can use embedded documents and arrays to capture relationships between data in a single document structure instead of normalizing across multiple documents and collections, this single-document atomicity obviates the need for multi-document transactions for many practical use cases.
But do bear in mind that your use case, if implemented within one collection as a single denormalized document containing one availableSpots sub-document and many thousands of bookings sub-documents, may not be feasible as the maximum document size is 16MB.
So, in conclusion, a denormalized approach to write atomicity will usually perform better than a multi-document approach, but is constrained by the maximum document size of 16MB.
You can try using findAndModify() when updating the document. Each time, you will need to cherry-pick whichever fields you want to update in that particular document. Also, since MongoDB replicates data to primary and secondary nodes, you may want to adjust your WriteConcern values as well; you can read more about this in the official documentation. I have something similar coded that handles this kind of concurrency issue in MongoDB using Spring's mongoTemplate. Let me know if you want a Java reference for that.
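If it helps, the findAndModify approach suggested above looks roughly like this in pymongo (names are illustrative; Spring's MongoTemplate exposes an equivalent findAndModify operation):

from pymongo import MongoClient, ReturnDocument

client = MongoClient()            # hypothetical connection
spots = client.mydb.spots

# Atomically decrement and get the document back as it is after the update;
# None is returned if no document matched (i.e. no spots were available)
doc = spots.find_one_and_update(
    {"_id": "event-1", "availableSpots": {"$gt": 0}},
    {"$inc": {"availableSpots": -1}},
    return_document=ReturnDocument.AFTER,
)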

pull push atomic operation?

I have a document with two arrays, and I want to move one element from one array to the other. I tried this in the console and it works:
db.examplecol.update({_id: ObjectId("5056b4b2b9f53a21385076c5")} , {'$pull':{setA:3}, '$push': {setB:3}})
But I haven't yet seen an example of two updates in a single command. My question is whether this is an atomic operation. If something goes wrong in the middle of this operation, do I risk "losing" my element by its having been pulled but not pushed?
Based on MongoDB's Atomic Operations documentation, and since your operation is on a single document, the operation should be atomic. You should ensure that you're using journaling, so that if the power is pulled halfway through your update, MongoDB will recover to a known good state prior to the update.