I have two callbacks that often modify the same Mongo document with Mongoose. The modification is complex enough that I can't easily do it in one update(). As a result, much of the time only one of the two updates is applied, and I get a [VersionError: No matching document found.] error.
Is there any way to explicitly lock the document so that each update can wait for the other to finish, and can happen without worrying about a race condition?
MongoDB does not support document locking, but you could implement it yourself.
You have two choices:
Optimistic concurrency control (when there is low contention for the resource) - this can easily be implemented with a version number or timestamp field. On read you retrieve the version/timestamp and supply it with every update, where it is checked and, if it matches, incremented. A short sketch follows below.
Pessimistic concurrency control (when a single document is updated frequently) - this is a bit more problematic, since you have to handle recovery from the various failure scenarios in which your operations do not complete successfully (e.g. the process dies).
If you decide to follow the second option, here is an interesting read; it covers situations where you update a number of objects in a "single" operation, though.
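For the first option, a minimal sketch of a version-field check in the mongo shell (the collection name, version and payload fields are illustrative):

// Read the document and remember its current version.
var doc = db.things.findOne({ _id: id });   // id of the document being edited

// ...apply the complex modification to doc in application code...

// Write back only if nobody has bumped the version since we read it.
var res = db.things.updateOne(
    { _id: id, version: doc.version },
    { $set: { payload: doc.payload }, $inc: { version: 1 } }
);
if (res.modifiedCount === 0) {
    // Lost the race: re-read the document, redo the modification, and retry.
}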
This is MongoDB's api:
db.foo.watch([{$match: {"bar.baz": "qux" }}])
Let's say that collection foo contains millions of documents. The pipeline passed into watch means that for every document update the server will filter out the events that $match the query (but behind the scenes the stream is still triggered by any change to any document).
The problem is that as my application scales, my listeners will also scale and my intuition is that I will end up having n^2 complexity with this approach.
I think that as I add more listeners, database performance will deteriorate due to changes to documents that are not part of the $match query. There are other ways to deal with this, (web sockets & rooms) but before prematurely optimizing the system, I would like to know if my intuition is correct.
Actual Question:
Can I attach a listener to a single document, such that watch's performance isn't affected by sibling documents?
When I do collection.watch([$matchQuery]), does the MongoDB driver listen to all documents and then filter out the relevant ones? (This is what I am trying to avoid.)
The code collection.watch([$matchQuery]) actually means "watch the change stream for that collection" rather than watching the collection directly.
As far as I know, there is no way to add a listener to a single document. Since I do not know of any way, I will give you a couple of tips on how to avoid scalability problems with the approach you have chosen. Your code appears to be using change streams, which should not cause problems unless you open too many of them.
There are two ways to accomplish this by watching the entire collection from an external process, neither of which will lead to deterioration of database performance.
If you use change streams, you can open a single change stream with logic that checks for all the conditions you need to filter for over time. The mistake people often make is opening many change streams for single-document filtering tasks, and that is when problems appear.
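For instance, instead of opening one stream per document, a single change stream can carry a $match stage over the set of _ids the application currently has listeners for, and events can be dispatched in process (a sketch; the watchedIds list and the dispatch step are illustrative):

// One change stream for the whole collection, filtered server-side to the
// documents we actually have listeners for.
var watchedIds = [];   // _ids that currently have in-process listeners
var cs = db.foo.watch([
    { $match: { "documentKey._id": { $in: watchedIds } } }
]);
while (cs.hasNext()) {
    var event = cs.next();
    // route event to the listener registered for event.documentKey._id
}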
The simpler way, since you mentioned Atlas, is to use Triggers. You can use something called a match expression in your Trigger configuration so that the trigger only fires when the match expression evaluates to true. As noted in the documentation, with the expression below the trigger function will not execute unless a field (status, in this case) is updated to "blocked", but many other match expressions are available:
{
  "updateDescription.updatedFields": {
    "status": "blocked"
  }
}
I hope this helps. If not, I can keep digging. I think with change streams or Triggers, you are ok if you want to write a bit of code. :)
Imagine there's a document containing a single field: {availableSpots: 100}
and there are millions of users, racing to get a spot by sending a request to an API server.
Each time a request comes in, the server reads the document and, if availableSpots is > 0, decrements it by 1 and creates a booking in another collection.
Now I read that MongoDB locks the document whenever an update operation is performed.
What will happen if there are a million concurrent requests? Will it take a long time because the same document keeps getting locked? Also, the server reads the value of the document before it tries to update it, and by the time it acquires the lock, the spot may not be available anymore.
It is also possible that several threads see "availableSpots > 0" as true at the same instant, but in reality there may not be enough spots for all the requests. How do I deal with this?
The most important thing here is atomicity and concurrency.
1. Atomicity
Your operation to update (decrement by one) if availableSpots > 0 :
db.collection.updateOne({ availableSpots: { $gt: 0 } }, { $inc: { availableSpots: -1 } })
is atomic.
$inc is an atomic operation within a single document.
Refer : https://docs.mongodb.com/manual/reference/operator/update/inc/
2. Concurrency
MongoDB has document-level concurrency control for write operations, so each update takes a lock on the document.
Now your questions:
What will happen if theres a million concurrent requests?
Yes, each update will be performed one at a time (due to locking), so throughput will slow down.
the server reads the value of document before it tries to update the
document, and by the time it acquires the lock, the spot may not be
available anymore.
Since the operation is atomic, this will not happen. It will work as you want: only 100 updates will actually modify the document (each with a modified count of 1); once availableSpots reaches 0, the filter no longer matches and further updates change nothing.
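To illustrate, a sketch of handling one booking request in the shell (the spots/bookings collection names and the eventId/userId values are illustrative):

// Atomically claim a spot; the filter only matches while spots remain.
var res = db.spots.updateOne(
    { _id: eventId, availableSpots: { $gt: 0 } },
    { $inc: { availableSpots: -1 } }
);

if (res.modifiedCount === 1) {
    // We won a spot: record the booking in the other collection.
    db.bookings.insertOne({ eventId: eventId, userId: userId, bookedAt: new Date() });
} else {
    // No spots left: reject the request.
}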
MongoDB uses WiredTiger as the default storage engine starting with version 3.2.
Wired Tiger provides document level concurrency:
From docs:
WiredTiger uses document-level concurrency control for write
operations. As a result, multiple clients can modify different
documents of a collection at the same time.
For most read and write operations, WiredTiger uses optimistic
concurrency control. WiredTiger uses only intent locks at the global,
database and collection levels. When the storage engine detects
conflicts between two operations, one will incur a write conflict
causing MongoDB to transparently retry that operation.
When multiple clients are trying to update a value in a document, only that document will be locked, not the entire collection.
My understanding is that you are concerned about the performance of many concurrent ACID-compliant transactions against two separate collections:
a collection (let us call it spots) with one document {availableSpots: 999..}
another collection (let us call it bookings) with multiple documents, one per booking.
Now i read that mongodb locks the document whenever an update operation is performed.
It is also possible that the threads are getting "availableSpot > 0"
is true at the same instant in time, but in reality the availableSpot
may not be enough for all the requests. How to deal with this?
With version 4.0, MongoDB provides the ability to perform multi-document transactions against replica sets. (The forthcoming MongoDB 4.2 will extend this multi-document ACID transaction capability to sharded clusters.)
This means that no write operations within a multi-document transaction (such as updates to both the spots and bookings collections, per your proposed approach) are visible outside the transaction until the transaction commits.
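A minimal shell sketch of such a transaction over the spots and bookings collections (the database, collection and field names are illustrative; requires MongoDB 4.0+ and a replica set):

var session = db.getMongo().startSession();
session.startTransaction();
try {
    var spots    = session.getDatabase("mydb").spots;
    var bookings = session.getDatabase("mydb").bookings;

    var res = spots.updateOne(
        { _id: eventId, availableSpots: { $gt: 0 } },
        { $inc: { availableSpots: -1 } }
    );
    if (res.modifiedCount !== 1) {
        throw new Error("no spots left");
    }
    bookings.insertOne({ eventId: eventId, userId: userId, bookedAt: new Date() });

    session.commitTransaction();   // neither write is visible outside until this point
} catch (e) {
    session.abortTransaction();
    throw e;
} finally {
    session.endSession();
}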
Nevertheless, as noted in the MongoDB documentation on transactions, a denormalized approach will usually provide better performance than multi-document transactions:
In most cases, multi-document transaction incurs a greater performance
cost over single document writes, and the availability of
multi-document transaction should not be a replacement for effective
schema design. For many scenarios, the denormalized data model
(embedded documents and arrays) will continue to be optimal for your
data and use cases. That is, for many scenarios, modeling your data
appropriately will minimize the need for multi-document transactions.
In MongoDB, an operation on a single document is atomic. Because you can use embedded documents and arrays to capture relationships between data in a single document structure instead of normalizing across multiple documents and collections, this single-document atomicity obviates the need for multi-document transactions for many practical use cases.
But do bear in mind that your use case, if implemented within one collection as a single denormalized document containing one availableSpots sub-document and many thousands of bookings sub-documents, may not be feasible as the maximum document size is 16MB.
So, in conclusion, a denormalized approach to write atomicity will usually perform better than a multi-document approach, but is constrained by the maximum document size of 16MB.
You can try using findAndModify() when updating the document. Each time, you will need to pick exactly the fields you want to update in that particular document. Also, since MongoDB replicates data to primary and secondary nodes, you may want to adjust your WriteConcern values as well; you can read more about this in the official documentation. I have something similar coded that handles this kind of concurrency issue in MongoDB using Spring's MongoTemplate. Let me know if you want a Java reference for that.
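For reference, a shell sketch combining findAndModify with an explicit write concern (the collection, field names and the "majority" setting are illustrative):

// Atomically claim the document and wait for the write to be acknowledged by a majority.
var updated = db.items.findAndModify({
    query:  { _id: itemId, status: "available" },
    update: { $set: { status: "processing" } },
    new:    true,
    writeConcern: { w: "majority" }
});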
My use case is as follows -
I have a collection of documents in mongoDB which I have to send for analysis.
The format of the documents are as follows -
{ _id: ObjectId("517e769164702dacea7c40d8"),
  date: "1359911127494",
  status: "available",
  other_fields... }
I have a reader process which picks the first 100 documents with status:available, sorted by date, and modifies them to status:processing.
ReaderProcess sends the documents for analysis. Once the analysis is complete the status is changed to processed.
Currently the reader process first fetches 100 documents sorted by date and then updates the status to processing for each document in a loop. Is there a better/more efficient solution for this case?
Also, in future for scalability, we might go with more than one reader process.
In this case, I want the 100 documents picked by one reader process not to be picked by another reader process. But fetching and updating are separate queries right now, so it is very possible that multiple reader processes pick the same documents.
Bulk findAndModify (with limit) would have solved all these problems. But unfortunately it is not provided in MongoDB yet. Is there any solution to this problem?
As you mention, there is currently no clean way to do what you want. The best approach at this time for operations like this is:
1. Reader selects X documents with the appropriate limit and sorting.
2. Reader marks the documents returned by 1) with its own unique reader ID (e.g. update({_id:{$in:[<result set ids>]}, state:"available", $isolated:1}, {$set:{readerId:<your reader's ID>, state:"processing"}}, false, true)).
3. Reader selects all documents marked as processing with its own reader ID. At this point you are guaranteed exclusive access to the resulting set of documents.
4. Offer the result set from 3) for your processing.
Note that this even works in highly concurrent situations, as a reader can never reserve documents already reserved by another reader (note that step 2 can only reserve currently available documents, and writes are atomic). I would add a timestamp with the reservation time as well if you want to be able to time out reservations (for example, for scenarios where readers might crash/fail).
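A shell sketch of steps 1-3 (the items collection name and readerId value are illustrative; status/date follow the question; the $isolated flag is omitted here since it has been removed from recent MongoDB versions):

var readerId = "reader-42";   // this reader's unique ID

// 1) Select up to 100 available documents, oldest first.
var ids = db.items.find({ status: "available" }, { _id: 1 })
                  .sort({ date: 1 })
                  .limit(100)
                  .toArray()
                  .map(function (d) { return d._id; });

// 2) Reserve them: only documents that are still "available" get claimed.
db.items.updateMany(
    { _id: { $in: ids }, status: "available" },
    { $set: { status: "processing", readerId: readerId, reservedAt: new Date() } }
);

// 3) Re-read only the documents this reader actually won.
var mine = db.items.find({ status: "processing", readerId: readerId }).toArray();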
EDIT: More details:
All write operations can occasionally yield to pending operations if the write takes a relatively long time. This means that step 3) might not see all the documents marked by step 2) unless you take the following steps:
Use an appropriate "w" (write concern) value, meaning 1 or higher. This will ensure that the connection on which the write operation is invoked will wait for it to complete regardless of it yielding.
Make sure you do the read in step 3 on the same connection (only relevant for replica sets with slaveOk-enabled reads) or thread so that the operations are guaranteed to be sequential. The former can be done in most drivers with the "requestStart" and "requestDone" methods or similar (Java documentation here).
Add the $isolated flag to your multi-updates to ensure it cannot be interleaved with other write operations.
Also see comments for discussion regarding atomicity/isolation. I incorrectly assumed multi-updates were isolated. They are not, or at least not by default.
I know that I can't lock a single mongodb document, in fact there is no way to lock a collection either.
However, I've got this scenario, where I think I need some way to prevent more than one thread (or process, it's not important) from modifying a document. Here's my scenario.
I have a collection that contains objects of type A. I have some code that retrieves a document of type A, adds an element to an array that is a property of the document (a.arr.add(new Thing())) and then saves the document back to MongoDB. This code is parallel: multiple threads in my application can do these operations, and for now there is no way to prevent two threads from doing them in parallel on the same document. This is bad because one of the threads could overwrite the work of the other.
I do use the repository pattern to abstract access to the MongoDB collection, so I only have CRUD operations at my disposal.
Now that I think about it, maybe it's a limitation of the repository pattern and not a limitation of MongoDB that is causing me trouble. Anyway, how can I make this code "thread safe"? I guess there's a well-known solution to this problem, but being new to MongoDB and the repository pattern, I don't immediately see it.
Thanks
Hey, the only way I can think of right now is to add a status field and use findAndModify(), which enables you to modify a document atomically. It's a bit slower, but should do the trick.
So let's say you add a status attribute, and when you retrieve the document you change the status from "IDLE" to "RUNNING". Then you update the document and save it back to the collection, setting the status back to "IDLE" again.
Code example:
var doc = db.runCommand({
    "findAndModify" : "COLLECTION_NAME",
    "query" : {"_id": "ID_DOCUMENT", "status" : "IDLE"},
    "update" : {"$set" : {"status" : "RUNNING"} }
}).value
Change COLLECTION_NAME and ID_DOCUMENT to proper values. By default findAndModify() returns the old document, which means the status value will still be IDLE on the client side. So when you are done updating, just save/update everything again, setting the status back to IDLE.
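The release step can then be a guarded update in the same spirit (a sketch; replace the $set payload with your own changes):

db.getCollection("COLLECTION_NAME").updateOne(
    { "_id": "ID_DOCUMENT", "status": "RUNNING" },
    { "$set": { "status": "IDLE" /*, ...your modified fields... */ } }
);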
The only thing you need to be aware of is that you can only modify one document at a time.
Hope it helps.
Stumbled onto this question while working on MongoDB upgrades. Unlike at the time this question was asked, MongoDB now supports document-level locking out of the box.
From: http://docs.mongodb.org/manual/faq/concurrency/
"How granular are locks in MongoDB?
Changed in version 3.0.
Beginning with version 3.0, MongoDB ships with the WiredTiger storage engine, which uses optimistic concurrency control for most read and write operations. WiredTiger uses only intent locks at the global, database and collection levels. When the storage engine detects conflicts between two operations, one will incur a write conflict causing MongoDB to transparently retry that operation."
Classic solution when you want to make something thread-safe is to use locks (mutexes).
This is also called pessimistic locking as opposed to optimistic locking described here.
There are scenarios when pessimistic locking is more efficient (more details here). It is also far easier to implement (the major difficulty of optimistic locking is recovering from collisions).
MongoDB does not provide a locking mechanism, but one can easily be implemented at the application level (i.e. in your code):
Acquire lock
Read document
Modify document
Write document
Release lock
The granularity of the lock can be different: global, collection-specific, record/document-specific. The more specific the lock the less its performance penalty.
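A minimal shell sketch of a document-specific lock built on a separate locks collection (collection and field names are illustrative; the unique _id guarantees at most one holder):

function acquireLock(docId, owner) {
    try {
        // Succeeds only if no lock document exists yet for this _id.
        db.locks.insertOne({ _id: docId, owner: owner, lockedAt: new Date() });
        return true;                 // we hold the lock
    } catch (e) {
        if (e.code === 11000) {      // duplicate key: someone else holds the lock
            return false;
        }
        throw e;
    }
}

function releaseLock(docId, owner) {
    db.locks.deleteOne({ _id: docId, owner: owner });
}

If a holder can crash, a TTL index on lockedAt (or a timestamp check on acquisition) lets stale locks expire.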
"Doctor, it hurts when I do this"
"Then don't do that!"
Basically, what you're describing sounds like you've got a serial dependency there -- MongoDB or whatever, your algorithm has a point at which the operation has to be serialized. That will be an inherent bottleneck, and if you absolutely must do it, you'll have to arrange some kind of semaphore to protect it.
So, the place to look is at your algorithm. Can you eliminate that? Could you, for example, handle it with some kind of conflict resolution, like "get record into local; update; store record", so that after the store the new record would be the one retrieved on that key?
Answering my own question because I found a solution while doing research on the Internet.
I think what I need to do is use Optimistic Concurrency Control.
It consists of adding a timestamp, a hash or another unique identifier (I'll use UUIDs) to every document. The unique identifier must be changed each time the document is modified. Before updating the document I'll do something like this (in pseudo-code):
var oldUUID = doc.uuid;
doc.uuid = new UUID();

BeginTransaction();
if (GetDocUUIDFromDatabase(doc.id) == oldUUID)
{
    SaveToDatabase(doc);
    Commit();
}
else
{
    // Document was modified in the DB since we read it. We can't save our changes.
    RollBack();
    throw new ConcurrencyException();
}
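In MongoDB itself, this compare-and-swap can be collapsed into one atomic call by putting the old identifier in the update filter, which avoids the explicit transaction (a shell sketch; the collection and field names are illustrative):

var oldUuid = doc.uuid;
var res = db.collectionA.findOneAndUpdate(
    { _id: doc._id, uuid: oldUuid },              // matches only if nobody changed it since we read it
    { $set: { arr: doc.arr, uuid: UUID() } },     // write our changes with a fresh identifier
    { returnNewDocument: true }
);
if (res === null) {
    // Document was modified in the DB since we read it; reload and retry (or fail).
}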
Update:
With MongoDB 3.2 and later, which use the WiredTiger storage engine as the default, MongoDB locks at the document level by default. Document-level locking was introduced with WiredTiger in version 3.0, and WiredTiger became the default engine in version 3.2. Therefore MongoDB now has document-level locking.
As of 4.0, MongoDB supports Transactions for replica sets. Support for sharded clusters will come in MongoDB 4.2. Using transactions, DB updates will be aborted if a conflicting write occurs, solving your issue.
Transactions are much more costly in terms of performance so don't use Transactions as an excuse for poor NoSQL schema design!
An alternative is to do an in-place update.
For example:
http://www.mongodb.org/display/DOCS/Updating#comment-41821928
db.users.update( { level: "Sourcerer" }, { '$push' : { 'inventory' : 'magic wand'} }, false, true );
which will push 'magic wand' onto every "Sourcerer" user's inventory array. The update to each document/user is atomic.
If you have a system with more than one server, you'll need a distributed lock.
I prefer to use Hazelcast.
While saving, you can acquire a Hazelcast lock by entity id, fetch and update the data, then release the lock.
As an example:
https://github.com/azee/template-api/blob/master/template-rest/src/main/java/com/mycompany/template/scheduler/SchedulerJob.java
Just use lock.lock() instead of lock.tryLock()
Here you can see how to configure Hazelcast in your spring context:
https://github.com/azee/template-api/blob/master/template-rest/src/main/resources/webContext.xml
Instead of posting this as another question, I'll try it here as an answer: I wonder if this WiredTiger storage engine will handle the problem I pointed out here:
Limit inserts in mongodb
If the order of the elements in the array is not important to you, then the $push operator should be safe enough to prevent threads from overwriting each other's changes.
I had a similar problem where I had multiple instances of the same application which would pull data from the database (the order did not matter; all documents had to be updated - efficiently), work on it and write back the results. However, without any locking in place, all instances obviously pulled the same document(s) instead of intelligently distributing their workload.
I tried to solve it by implementing a lock at the application level, which would set a locked field in the corresponding document while it was being edited, so that no other instance of my application would pick the same document and waste time performing the same operation as the other instance(s).
However, when running dozens or more instances of my application, the timespan between reading the document (using find()) and setting the locked field to true (using update()) was too long, and the instances still pulled the same documents from the database, making my idea of speeding up the work using multiple instances pointless.
Here are 3 suggestions that might solve your problem depending on your situation:
Use findAndModify(), since the read and write operations are atomic with that function. Theoretically, a document requested by one instance of your application should then appear as locked to the other instances. And by the time the document is unlocked and visible to other instances again, it has also been modified. (A sketch follows these suggestions.)
If, however, you need to do other work between the read (find()) and write (update()) operations, you could use transactions.
Alternatively, if that does not solve your problem, a somewhat cheesy solution (which might suffice) is to have the application pull documents in large batches and have each instance pick a random document from that batch to work on. Obviously this shady solution relies on coincidence not punishing your application's efficiency.
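A shell sketch of the first suggestion, atomically claiming one unclaimed document per call (the collection and field names are illustrative):

// Atomically pick the oldest unclaimed document and mark it as locked by this instance.
var claimed = db.items.findOneAndUpdate(
    { locked: { $ne: true } },
    { $set: { locked: true, lockedBy: "instance-1", lockedAt: new Date() } },
    { sort: { date: 1 }, returnNewDocument: true }
);
if (claimed !== null) {
    // ...process the claimed document, then clear the lock or mark it as done...
}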
Sounds like you want to use MongoDB's atomic operators: http://www.mongodb.org/display/DOCS/Atomic+Operations
I have two collections, one (A) containing items to be processed (relatively small) and one (B) with those already processed (fairly large, with extra result fields).
Items are read from A, get processed and save()'d to B, then remove()'d from A.
The rationale is that indices can be different across these, and that the "incoming" collection can be kept very small and fast this way.
I've run into two issues with this:
if either remove() or save() times out or otherwise fails under load, I lose the item completely, or process it twice
if both fail, the side effects happen but there is no record of that
I can sidestep the double-failure case with findAndModify locks (not needed otherwise, we have a process-level lock) but then we have stale lock issues and partial failures can still happen. There's no way to atomically remove+save to different collections, as far as I can tell (maybe by design?)
Is there a Best Practice for this situation?
There's no way to atomically remove+save to different collections, as far as I can tell (maybe by design?)
Yes, this is by design. MongoDB explicitly does not provide joins or transactions. Remove + save is a form of transaction.
Is there a Best Practice for this situation?
You really have two low-complexity options here, both involve findAndModify.
Option #1: a single collection
Based on your description, you are basically building a queue with some extra features. If you use a single collection, then you use findAndModify to update the status of each item as it is processed.
Unfortunately, that means you will lose this: ...that the "incoming" collection can be kept very small and fast this way.
Option #2: two collections
The other option is basically a two phase commit, leveraging findAndModify.
Take a look at the docs for this here.
Once an item is processed in A, you set a field to flag it for deletion. You then copy that item over to B. Once it is copied to B, you can remove the item from A.
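A shell sketch of that flag, copy, remove sequence (the collection names follow the question; the flag field and itemId/result values are illustrative):

// 1) Flag the processed item in A so it is excluded from new work and marked for deletion.
db.A.updateOne({ _id: itemId }, { $set: { toDelete: true } });

// 2) Copy it, together with the extra result fields, into B.
var item = db.A.findOne({ _id: itemId });
item.result = result;                      // attach the processing results
db.B.insertOne(item);

// 3) Only after the copy has succeeded, remove the item from A.
db.A.deleteOne({ _id: itemId, toDelete: true });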
I've not tried this myself yet, but the new book 50 Tips and Tricks for MongoDB Developers mentions a few times using cron jobs (or services/schedulers) to clean up data like this. You could leave the documents in collection A flagged for deletion and run a daily job to clear them out, reducing the overall scope of the original transaction.
From what I've learned so far, I'd never leave the database in a state where I rely on the next database action succeeding unless it is the last action (journaling will resend the last db action upon recovery). For example, I have a three-phase account registration process where I create a user in CollectionA and then add another related document to CollectionB. When I create the user, I embed the details of the CollectionB document in CollectionA in case the second write fails. Later I will write a process that removes the embedded data from CollectionA once the document in CollectionB exists.
Not having transactions does cause pain points like this, but I think in some cases there are new ways of thinking about it. In my case, time will tell as I progress with my app