The MongoDB 3.0 FAQ states that:
MongoDB uses reader-writer locks that allow concurrent readers shared access to a resource, such as a database or collection, but give exclusive access to a single write operation.
I understand that a read operation will be blocked until the write commits its changes to main memory.
My question is:
Instead of getting blocked by the write lock, is there any way to allow the read operation to obtain the document (row) state that existed prior to the modification?
I know that the data prior to the write operation will be inconsistent (stale) data, but that is ok for me.
Related
I have a question regarding MongoDB. I read the documentation but did not get a clear idea, so I am asking here. I have one collection on which I need to perform 24/7 read and write operations from different sources. So the question is: may we perform concurrent read and write operations on the same collection at the same time? If not, what is the alternative, and what is the main reason behind that?
I have some Python crons which perform read/write operations on collections, and at the same time I have a Node.js backend API that performs read/write operations on the same collection. Will this cause any issue? Currently this all runs on MySQL, but per the client's requirement I need to move from MySQL to MongoDB. So please help me get clear about this problem.
Read FAQ: Concurrency
MongoDB uses reader-writer locks that allow concurrent readers shared access to a resource, such as a database or collection.
In addition to a shared (S) locking mode for reads and an exclusive (X) locking mode for write operations, intent shared (IS) and intent exclusive (IX) modes indicate an intent to read or write a resource using a finer granularity lock. When locking at a certain granularity, all higher levels are locked using an intent lock.
And regarding the WiredTiger engine:
For most read and write operations, WiredTiger uses optimistic concurrency control. WiredTiger uses only intent locks at the global, database and collection levels.
The current default MongoDB storage engine is WiredTiger, so if you use it you're OK. To check the engine, execute db.serverStatus().storageEngine.
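For completeness, here is a minimal sketch of the same check from Python with pymongo (the URI is a placeholder; point it at your own deployment):

    from pymongo import MongoClient

    # Placeholder URI; adjust for your deployment.
    client = MongoClient("mongodb://localhost:27017")

    # Equivalent of db.serverStatus().storageEngine in the mongo shell.
    engine = client.admin.command("serverStatus")["storageEngine"]
    print(engine["name"])  # prints "wiredTiger" on a default installation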
In my app, I am doing following with mongodb.
1. Start a MongoDB session and start a transaction
2. Read a document
3. Do some calculations based on values in the document and some other arguments
4. Update the document that was read in step 2 with the results of the calculations in step 3
5. Commit the transaction and end the session
The above procedure is executed with retries on TransientTransactionError, so if the transaction fails due to a concurrency issue, the procedure is retried.
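For illustration, a minimal pymongo sketch of that procedure, assuming a replica set (transactions require one); the URI, database/collection names, and the calculation are placeholders:

    from pymongo import MongoClient
    from pymongo.errors import OperationFailure

    # Placeholder URI; transactions require a replica set deployment.
    client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
    coll = client.mydb.accounts  # placeholder collection

    def run_procedure(doc_id, delta):
        while True:
            with client.start_session() as session:
                try:
                    # start_transaction() commits on clean exit from the
                    # block and aborts if the block raises.
                    with session.start_transaction():
                        doc = coll.find_one({"_id": doc_id}, session=session)
                        new_value = doc["value"] + delta  # placeholder calculation
                        coll.update_one({"_id": doc_id},
                                        {"$set": {"value": new_value}},
                                        session=session)
                    return  # committed successfully
                except OperationFailure as exc:
                    if exc.has_error_label("TransientTransactionError"):
                        continue  # write conflict: retry on a fresh read
                    raise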
If two concurrent invocations of the above procedure both read the document before either writes to it, I need exactly one invocation to successfully write to the document and the other to fail. If this doesn't happen, I don't get the result I am trying to achieve.
Can I expect MongoDB to fail one invocation in this scenario, so that the procedure is retried against the updated state of the document?
MongoDB multi-document transactions are atomic (i.e. provide an “all-or-nothing” proposition). When a transaction commits, all data changes made in the transaction are saved and visible outside the transaction. That is, a transaction will not commit some of its changes while rolling back others.
This is also elaborated further in In-progress Transactions and Write Conflicts:
If a transaction is in progress and a write outside the transaction modifies a document that an operation in the transaction later tries to modify, the transaction aborts because of a write conflict.
If a transaction is in progress and has taken a lock to modify a document, when a write outside the transaction tries to modify the same document, the write waits until the transaction ends.
See also the Write Conflicts section of the video How and When to Use Multi-Document Transactions to understand multi-document transactions in more depth (write locks, etc.).
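To see the first case concretely, here is an illustrative pymongo sketch of two overlapping transactions touching the same document (replica set assumed; the URI and collection name are placeholders). The second write aborts immediately with a write conflict rather than waiting:

    from pymongo import MongoClient
    from pymongo.errors import OperationFailure

    client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
    coll = client.test.items  # placeholder collection
    coll.insert_one({"_id": 1, "n": 0})

    s1, s2 = client.start_session(), client.start_session()
    s1.start_transaction()
    s2.start_transaction()

    # The first transaction takes the lock on the document.
    coll.update_one({"_id": 1}, {"$inc": {"n": 1}}, session=s1)
    try:
        # The second transaction modifies the same document and aborts
        # with a write conflict instead of blocking.
        coll.update_one({"_id": 1}, {"$inc": {"n": 1}}, session=s2)
    except OperationFailure as exc:
        print(exc.has_error_label("TransientTransactionError"))  # True
        s2.abort_transaction()
    s1.commit_transaction()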
If you are writing to the same document that you read in both transactions, then yes, one will roll back. But do make sure that your writes actually change the document, as MongoDB is smart enough not to perform an update if nothing has changed.
This behavior prevents lost updates.
Please see the source: https://www.mongodb.com/blog/post/how-to-select--for-update-inside-mongodb-transactions
In fact, I have the same implementation in one of my projects and it works as expected, although in my case multiple documents are read. In your specific example, that is not the case.
Even if you did not have transactions, you could use findAndModify with an appropriate query part (such as the example for the update operation here: https://www.mongodb.com/docs/manual/core/write-operations-atomicity/) to guarantee the behavior you expect.
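As a sketch of that non-transactional alternative (field names and the calculation are placeholders): putting the value you read into the query makes the check-and-update a single atomic operation, so it fails cleanly if another writer got there first.

    from pymongo import MongoClient, ReturnDocument

    client = MongoClient("mongodb://localhost:27017")
    coll = client.mydb.accounts  # placeholder collection

    doc = coll.find_one({"_id": 1})
    new_value = doc["value"] + 10  # placeholder calculation

    # Matches only if the document still holds the value we read, so the
    # read-check-write happens as one atomic server-side operation.
    updated = coll.find_one_and_update(
        {"_id": 1, "value": doc["value"]},
        {"$set": {"value": new_value}},
        return_document=ReturnDocument.AFTER,
    )
    if updated is None:
        pass  # another writer changed the document first: re-read and retry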
I am new to MongoDB, and for one of my applications I am planning to move some portion of it to MongoDB, where we need to handle optimistic concurrency. What are the best practices available with MongoDB?
Is MongoDB right choice for application which requires concurrency?
Yes, MongoDB would be a right choice for concurrency.
MongoDB locking is different from the locking in an RDBMS.
MongoDB uses multi-granularity locking (see WiredTiger) that allows operations to lock at the global, database, or collection level, and allows individual storage engines to implement their own concurrency control below the collection level (e.g., at the document level in WiredTiger).
MongoDB uses reader-writer locks that allow concurrent readers shared access to a resource, such as a database or collection, but in MMAPv1, give exclusive access to a single write operation.
WiredTiger uses optimistic concurrency control. WiredTiger uses only intent locks at the global, database and collection levels. When the storage engine detects conflicts between two operations, one will incur a write conflict causing MongoDB to transparently retry that operation.
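A common best practice for application-level optimistic concurrency is a version field: read the document, then make the update conditional on the version you read. A minimal pymongo sketch (field and collection names are placeholders):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    coll = client.mydb.items  # placeholder collection

    doc = coll.find_one({"_id": 1})

    # The update applies only if nobody bumped the version since our read.
    result = coll.update_one(
        {"_id": 1, "version": doc["version"]},
        {"$set": {"data": "new data"}, "$inc": {"version": 1}},
    )
    if result.modified_count == 0:
        pass  # lost the race: reload the document and retry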
MongoDB has a reader/writer latch for each database.
The latch is multiple-reader, single-writer, and it is writer-greedy, so we can have an unlimited number of simultaneous readers on a database,
but there can be only one writer at a time on any collection in any one database.
"Writer-greedy" means writes get priority: when a write request arrives, all read requests are blocked until the write is completed.
The lock here is called a latch since it is lighter than a lock and does its work within microseconds.
MongoDB is capable of running many simultaneous queries.
Hope it Helps!!
References
https://docs.mongodb.com/manual/faq/concurrency/
https://docs.mongodb.com/manual/reference/command/findAndModify/
Changed in version 2.2: The use of yielding expanded greatly in MongoDB 2.2, including the “yield for page fault.” MongoDB tracks the contents of memory and predicts whether data is available before performing a read. If MongoDB predicts that the data is not in memory, a read operation yields its lock while MongoDB loads the data into memory. Once the data is available in memory, the read reacquires the lock to complete the operation.
Taken from "Does a read or write operation ever yield the lock?"
Imagine that a write operation acquires the lock. How can a write be performed (changing a certain collection) while MongoDB is reading data from the same collection? What happens if part of the read collection was not changed by the write and part of it was changed?
Imagine that a write operation acquires the lock. How can a write be performed (changing a certain collection) while MongoDB is reading data from the same collection?
It cannot; MongoDB has a writer-greedy read/write database-level lock (http://docs.mongodb.org/manual/faq/concurrency/#what-type-of-locking-does-mongodb-use).
While a single-document write is occurring, MongoDB is blocked from reading at a database-wide level.
What happens if part of the read collection was not changed by the write and part of it was changed?
MongoDB is transactional only for a single document, which means that if part of a large multi-update fails, then it simply fails; there is no rollback, atomicity, or transaction for the lifetime of the multi-update.
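The same point holds for today's update_many outside of transactions; a small pymongo sketch (collection name and fields are placeholders):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    coll = client.mydb.tasks  # placeholder collection

    # Each matching document is updated atomically on its own, but the
    # operation as a whole is not: if it dies midway, already-updated
    # documents stay updated and there is no rollback.
    result = coll.update_many({"status": "pending"},
                              {"$set": {"status": "done"}})
    print(result.matched_count, result.modified_count)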
I have a collection in my database
1. I want to lock my collection when a user is updating a document.
2. While the collection is being updated, no operations except reads should be possible for other users.
Please give suggestions on how to lock a collection in MongoDB.
Best Regards
GSY
MongoDB already implements a writer-greedy database-level lock.
This means that when a specific document is being written to:
The User collection would be locked
No reads will be available until the data is written
The reason no reads are available is that MongoDB cannot do a consistent read while writing (darn you physics, you win again).
It is good to note that if you wish for a more complex lock, spanning multiple documents, this is not available in MongoDB, and there is no real way of implementing such a thing.
MongoDB locking already does that for you. See which operations acquire which lock and what each lock means.
See the MongoDB documentation on write operations, paying special attention to this section:
Isolation of Write Operations
The modification of a single document is always atomic, even if the write operation modifies multiple sub-documents within that document. For write operations that modify multiple documents, the operation as a whole is not atomic, and other operations may interleave.
No other operations are atomic. You can, however, attempt to isolate a write operation that affects multiple documents using the isolation operator.
To isolate a sequence of write operations from other read and write operations, see Perform Two Phase Commits.