How to avoid concurrent writes to a row using Slick? - postgresql

I have the following table:
case class Project(id: Int, name: String, locked: Boolean)
Users can request some processing to be done on the project - but I'd like to make sure only one processing job is being run on the project at a time.
My current approach is to set locked = true on the project whenever a job begins. If a user (malicious or otherwise) tries to start a second job while locked = true, the system should see that the project is already locked and respond with an error message such as 'please wait'.
I think I need to do this inside a transaction so that race conditions / concurrent requests can't slip through; otherwise a malicious user could send concurrent requests and start multiple jobs because every request saw locked = false (since they all started simultaneously).
How can I do this with Slick? My current attempt looks like this:
def lock(id: Long): Future[Int] = {
  val select = for { p <- projects if p.id === id && p.locked === false } yield p.locked
  val q = select.update(true).transactionally // attempting to use transactions
  db.run(q)
}
I believe db.run will return the number of rows that were updated, and if the p.locked === false condition fails, the number of updated rows will be 0, so I can use that to determine whether the project was successfully locked. And .transactionally should make this run in a transaction, so concurrent requests won't be an issue.
Are my assumptions / reasoning here correct? Is there a better way to do this?

The meaning of .transactionally here depends on which database you are using.
Without specifying anything else, you get the default transaction isolation level offered by your database. In Postgres, for example, that is READ COMMITTED, which means that of two concurrent transactions, one can see data committed by the other before it finishes.
I suggest always specifying the isolation level with .transactionally.withTransactionIsolation(transactionLevel) to avoid concurrency problems.
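For example, with Slick 3 and Postgres the update from the question could request a stricter level such as SERIALIZABLE. A minimal sketch; projects and db are assumed to be the table query and database object from the question:
import scala.concurrent.Future
import slick.jdbc.PostgresProfile.api._
import slick.jdbc.TransactionIsolation

def lock(id: Long): Future[Int] = {
  // Conditional update: only a row that is currently unlocked is touched.
  val select = for { p <- projects if p.id === id && p.locked === false } yield p.locked
  val action = select.update(true)
    .transactionally
    .withTransactionIsolation(TransactionIsolation.Serializable)
  db.run(action) // number of rows updated: 0 means the lock was already taken
}
As the question already notes, checking whether the returned count is 0 or 1 tells you whether the lock was acquired.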

Related

mongoDB optimistic concurrency control for update

I am simulating multiple concurrent requests for MongoDB's "update".
Here is the thing: I insert a document with amount=1000 into MongoDB, and every time I trigger the API it reads the amount, adds 50, and saves it back to the database. Basically it is a find-and-update operation on a single document.
// step 1: read the current balance for the account
err := globalDB.C("bank").Find(bson.M{"account": account}).One(&entry)
if err != nil {
    panic(err)
}
// step 2: simulate some processing time with a random sleep
wait := Random(1, 100)
time.Sleep(time.Duration(wait) * time.Millisecond)
// step 3: add to the current balance and update it back to the database
entry.Amount = entry.Amount + 50.000
err = globalDB.C("bank").UpdateId(entry.ID, &entry)
Here is the source code for the project.
I am simulating requests using Vegeta:
If I set -rate=10 (which means the API is triggered 10 times within one second, so the expected result is 1000 + 50 * 10 = 1500), the data is correct:
echo "GET http://localhost:8000" | \
vegeta attack -rate=10 -connections=1 -duration=1s | \
tee results.bin | \
vegeta report
But -rate=100 (the API is triggered 100 times within one second, so the expected result is 1000 + 50 * 100 = 6000) produces a very confusing result:
echo "GET http://localhost:8000" | \
vegeta attack -rate=100 -connections=1 -duration=1s | \
tee results.bin | \
vegeta report
In short, what I want to know is this: I thought MongoDB uses optimistic concurrency control, which means that if there is a write conflict it should retry, so latency goes up but the data is still guaranteed to be correct.
Why does the result look as if data correctness is not guaranteed at all in MongoDB?
I know some of you might notice the sleep at lines 41 and 42, but even after commenting it out, the result is still incorrect when I test with -rate=500.
Any clues why this is happening?
Generally you should extract the relevant segment of the code into the question. It is inconsiderate to ask people to locate the 5 relevant lines in your 76 line program.
Your test is performing concurrent find-and-modify operations. Let's suppose there are two concurrent processes A and B that each increment account balance by 50. Starting balance is 0. The order of operations could be:
A: what is the current balance for account 1234?
B: what is the current balance for account 1234?
DB -> A: balance for account 1234 is 0
DB -> B: balance for account 1234 is 0
A: new balance is 0+50 = 50
A: set balance for account 1234 to 50
DB -> A: ok, new balance for account 1234 is 50
B: new balance is 0+50 = 50
B: set balance for account 1234 to 50
DB -> B: ok, new balance for account 1234 is 50
From the database's perspective, there are no "write conflicts" here. You asked to set the balance to 50 for the given account twice.
There are different ways of solving this issue. One is to use conditional updates such that the process looks like this:
A: what is the current balance for account 1234?
B: what is the current balance for account 1234?
DB -> A: balance for account 1234 is 0
DB -> B: balance for account 1234 is 0
A: new balance is 0+50 = 50
A: if balance in account 1234 is 0, set balance to 50
DB -> A: ok, new balance for account 1234 is 50
B: new balance is 0+50 = 50
B: if balance in account 1234 is 0, set balance to 50
DB -> B: balance is not 0, no update was performed
B: err, let's start over
B: what is the current balance for account 1234?
DB -> B: balance for account 1234 is 50
B: new balance is 50+50 = 100
B: if balance in account 1234 is 50, set balance to 100
DB -> B: ok, new balance for account 1234 is 100
As you see, the database must support the conditional update and the application must handle the possibility of concurrent updates and retry the operation.
If the balance can go up and down, this is not a practically useful way of writing a debit & credit system (but if balance can only increase or only decrease, this would in fact work quite fine). In real systems you'd use a special field whose purpose is to identify the specific version of the document that was in existence at the moment the application retrieved some data; the update is conditioned on the current version of the document staying the same, and each update increments the version. Concurrent updates would then be detected because the version number is wrong rather than a content field.
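A minimal sketch of that version-field pattern, written with the official MongoDB Scala driver rather than the question's Go/mgo stack (the collection, field names, and retry limit are illustrative assumptions):
import org.mongodb.scala._
import org.mongodb.scala.model.Filters.{and, equal}
import org.mongodb.scala.model.Updates.{combine, inc, set}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

val client = MongoClient() // assumes a local mongod
val bank: MongoCollection[Document] = client.getDatabase("test").getCollection("bank")

// Read the document, then update it only if its version is unchanged; retry otherwise.
def addWithRetry(account: Int, delta: Double, retries: Int = 10): Future[Unit] =
  bank.find(equal("account", account)).first().toFuture().flatMap { doc => // assumes the document exists
    val balance = doc.get("amount").map(_.asDouble().getValue).getOrElse(0.0)
    val version = doc.get("version").map(_.asInt64().getValue).getOrElse(0L)
    bank.updateOne(
      and(equal("account", account), equal("version", version)),  // conditional on the version we read
      combine(set("amount", balance + delta), inc("version", 1L)) // bump the version on success
    ).toFuture().flatMap { result =>
      if (result.getModifiedCount == 1) Future.successful(())            // our version was still current
      else if (retries > 0) addWithRetry(account, delta, retries - 1)    // a concurrent update won; reload and retry
      else Future.failed(new RuntimeException("too many concurrent updates"))
    }
  }
If the document was modified between the read and the write, getModifiedCount is 0 and the whole read-compute-write cycle is repeated, exactly as in the dialogue above.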
There are ways to produce a "write conflict" on the database side, for example by using transactions as supported by MongoDB 4.0+. In principle this works the same way, but the "version" is called a "transaction identifier" and it's stored in a different place (not inline in the document being operated on). The principle is the same, though: the database would inform you that there was a write conflict, but you'd still need to reissue the operations.
Update:
I think you also need to distinguish between "optimistic concurrency control" as a concept, its implementation, and what the implementation applies to. https://docs.mongodb.com/manual/faq/concurrency/#how-granular-are-locks-in-mongodb for example says:
For most read and write operations, WiredTiger uses optimistic concurrency control. WiredTiger uses only intent locks at the global, database and collection levels. When the storage engine detects conflicts between two operations, one will incur a write conflict causing MongoDB to transparently retry that operation.
Reading this statement carefully, it applies to write operations at the storage engine level. I imagine that when MongoDB performs something like $set, or other atomic write operations, this would apply. But it doesn't apply to application-level operation sequences like the one in your example.
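For the specific increment in the question, the application-level read-modify-write cycle can also be avoided entirely by letting the server do the arithmetic with a single atomic $inc. A sketch with the same (assumed) Scala driver setup as above:
import org.mongodb.scala._
import org.mongodb.scala.model.Filters.equal
import org.mongodb.scala.model.Updates.inc

// One atomic server-side operation, so there is no application-level race to lose updates.
def credit(bank: MongoCollection[Document], account: Int, delta: Double) =
  bank.updateOne(equal("account", account), inc("amount", delta)).toFuture()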
If you try your example code with your favorite relational DBMS, I think you'll find it produces roughly the same result as you've seen with MongoDB, if you issue a transaction around each individual read and write (such that balance read and write are in different transactions), for the same reason - RDBMSes lock data (or use techniques like MVCC) for the lifetime of a transaction, but not across transactions.
Similarly if you put both balance read and balance write on the same account into a transaction in MongoDB, you may find that you are receiving transient errors when other transactions modify the account in question concurrently.
Lastly, the API that MongoDB implements for transactions (with retries) is described here. If you look at it carefully you'll find that it expects the application to reissue not just the transaction commit command, but to repeat the entire transaction operation. This is because generally, if there is a "write conflict" the starting data has changed, and simply attempting the final write again isn't enough - potentially calculations in the applications need to be redone, possibly even side effects of that process change as a result.

MongoDB: Upon incomplete Mass delete, are the deleted values gone or they are rolled back as in a typical RDBMS Transaction? [duplicate]

I know there are similar questions here but they are either telling me to switch back to regular RDBMS systems if I need transactions or use atomic operations or two-phase commit. The second solution seems the best choice. The third I don't wish to follow because it seems that many things could go wrong and I can't test it in every aspect. I'm having a hard time refactoring my project to perform atomic operations. I don't know whether this comes from my limited viewpoint (I have only worked with SQL databases so far), or whether it actually can't be done.
We would like to pilot test MongoDB at our company. We have chosen a relatively simple project - an SMS gateway. It allows our software to send SMS messages to the cellular network and the gateway does the dirty work: actually communicating with the providers via different communication protocols. The gateway also manages the billing of the messages. Every customer who applies for the service has to buy some credits. The system automatically decreases the user's balance when a message is sent and denies the access if the balance is insufficient. Also because we are customers of third party SMS providers, we may also have our own balances with them. We have to keep track of those as well.
I started thinking about how I can store the required data with MongoDB if I cut down some complexity (external billing, queued SMS sending). Coming from the SQL world, I would create a separate table for users, another one for SMS messages, and one for storing the transactions regarding the users' balance. Let's say I create separate collections for all of those in MongoDB.
Imagine an SMS sending task with the following steps in this simplified system:
check if the user has sufficient balance; deny access if there's not enough credit
send and store the message in the SMS collection with the details and cost (in the live system the message would have a status attribute and a task would pick it up for delivery and set the price of the SMS according to its current state)
decrease the user's balance by the cost of the sent message
log the transaction in the transaction collection
Now what's the problem with that? MongoDB can do atomic updates only on one document. In the previous flow it could happen that some kind of error creeps in and the message gets stored in the database but the user's balance is not updated and/or the transaction is not logged.
I came up with two ideas:
Create a single collection for the users, and store the balance as a field, user related transactions and messages as sub documents in the user's document. Because we can update documents atomically, this actually solves the transaction problem. Disadvantages: if the user sends many SMS messages, the size of the document could become large and the 4MB document limit could be reached. Maybe I can create history documents in such scenarios, but I don't think this would be a good idea. Also I don't know how fast the system would be if I push more and more data to the same big document.
Create one collection for users, and one for transactions. There can be two kinds of transactions: credit purchases with a positive balance change and messages sent with a negative balance change. A transaction may have a subdocument; for example, for messages sent, the details of the SMS can be embedded in the transaction. Disadvantages: I don't store the current user balance, so I have to calculate it every time a user tries to send a message, to tell whether the message can go through or not. I'm afraid this calculation could become slow as the number of stored transactions grows.
I'm a little bit confused about which method to pick. Are there other solutions? I couldn't find any best practices online about how to work around these kinds of problems. I guess many programmers who are trying to become familiar with the NoSQL world are facing similar problems in the beginning.
As of 4.0, MongoDB will have multi-document ACID transactions. The plan is to enable those in replica set deployments first, followed by the sharded clusters. Transactions in MongoDB will feel just like transactions developers are familiar with from relational databases - they'll be multi-statement, with similar semantics and syntax (like start_transaction and commit_transaction). Importantly, the changes to MongoDB that enable transactions do not impact performance for workloads that do not require them.
For more details see here.
Having distributed transactions doesn't mean that you should model your data as in tabular relational databases. Embrace the power of the document model and follow the good and recommended practices of data modeling.
Check this out, by Tokutek. They develop a plugin for Mongo that promises not only transactions but also a boost in performance.
To get to the point: if transactional integrity is a must, then don't use MongoDB; use only components in the system that support transactions. It is extremely hard to build something on top of a component in order to provide ACID-like functionality for non-ACID-compliant components. Depending on the individual use cases, it may make sense to separate actions into transactional and non-transactional actions in some way...
Now what's the problem with that? MongoDB can do atomic updates only on one document. In the previous flow it could happen that some kind of error creeps in and the message gets stored in the database but the user's balance is not reduced and/or the transaction is not logged.
This is not really a problem. The error you mention is either a logical error (a bug) or an IO error (network or disk failure). That kind of error can leave both transactionless and transactional stores in an inconsistent state. For example, if the SMS has already been sent but an error occurs while storing the message, the SMS sending can't be rolled back, which means it won't be logged, the user balance won't be reduced, etc.
The real problem here is that a user can take advantage of a race condition and send more messages than his balance allows. This also applies to an RDBMS, unless you do the SMS sending inside a transaction with the balance field locked (which would be a huge bottleneck). A possible solution for MongoDB is to use findAndModify first to reduce the balance and check the result: if it's negative, disallow sending and refund the amount (an atomic increment); if positive, continue sending and, in case sending fails, refund the amount. A balance history collection can also be maintained to help fix/verify the balance field.
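A rough sketch of that findAndModify idea, written with the MongoDB Scala driver (the collection, field names, and Boolean result are illustrative; the actual SMS sending and its error handling are omitted):
import com.mongodb.client.model.{FindOneAndUpdateOptions, ReturnDocument}
import org.mongodb.scala._
import org.mongodb.scala.model.Filters.equal
import org.mongodb.scala.model.Updates.inc
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

// Atomically deduct the cost first; refund if the balance went negative.
def trySend(users: MongoCollection[Document], userId: String, cost: Int): Future[Boolean] = {
  val returnUpdated = new FindOneAndUpdateOptions().returnDocument(ReturnDocument.AFTER)
  users.findOneAndUpdate(equal("_id", userId), inc("balance", -cost), returnUpdated)
    .toFuture()
    .flatMap { doc => // assumes the user document exists
      val newBalance = doc.get("balance").map(_.asInt32().getValue).getOrElse(0)
      if (newBalance < 0)
        // Not enough credit: refund with another atomic increment and refuse to send.
        users.updateOne(equal("_id", userId), inc("balance", cost)).toFuture().map(_ => false)
      else
        Future.successful(true) // enough credit: go ahead and send the SMS (refund again if sending fails)
    }
}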
The project is simple, but you have to support transactions for payment, which makes the whole thing difficult. So, for example, a complex portal system with hundreds of collections (forum, chat, ads, etc.) is in some respects simpler, because if you lose a forum or chat entry, nobody really cares. If you, on the other hand, lose a payment transaction, that's a serious issue.
So, if you really want a pilot project for MongoDB, choose one which is simple in that respect.
Transactions are absent in MongoDB for valid reasons. This is one of the things that makes MongoDB faster.
In your case, if transactions are a must, Mongo does not seem like a good fit.
Maybe RDBMS + MongoDB, but that will add complexity and make the application harder to manage and support.
This is probably the best blog post I found on implementing transaction-like features for MongoDB:
Syncing Flag: best for just copying data over from a master document
Job Queue: very general purpose, solves 95% of cases. Most systems need to have at least one job queue around anyway!
Two Phase Commit: this technique ensures that each entity always has all the information needed to get to a consistent state
Log Reconciliation: the most robust technique, ideal for financial systems
Versioning: provides isolation and supports complex structures
Read this for more info: https://dzone.com/articles/how-implement-robust-and
This is late, but I think it will help in the future. I use Redis to build a queue to solve this problem.
Requirement:
The image below shows 2 actions that need to execute concurrently, but phase 2 and phase 3 of action 1 need to finish before phase 2 of action 2 starts, or the other way around (a phase can be a REST API request, a database request, or a piece of JavaScript code...).
How a queue helps you
The queue makes sure that every block of code between lock() and release(), across many functions, will not run at the same time; it keeps them isolated.
function action1() {
    phase1();
    queue.lock("action_domain");
    phase2();
    phase3();
    queue.release("action_domain");
}
function action2() {
    phase1();
    queue.lock("action_domain");
    phase2();
    queue.release("action_domain");
}
How to build a queue
I will only focus on how to avoid the race condition part when building a queue on the backend side. If you don't know the basic idea of a queue, look here.
The code below only shows the concept; you need to implement it in the correct way.
function lock() {
    if (isRunning()) {
        addIsolateCodeToQueue(); // use a callback, delegate, function pointer... depending on your language
    } else {
        setStateToRunning();
        pickOneAndExecute();
    }
}
function release() {
    setStateToRelease();
    pickOneAndExecute();
}
But you need isRunning(), setStateToRelease() and setStateToRunning() to be isolated themselves, or else you face a race condition again. To do this I chose Redis, for ACID purposes and scalability.
The Redis documentation says this about its transactions:
All the commands in a transaction are serialized and executed sequentially. It can never happen that a request issued by another client is served in the middle of the execution of a Redis transaction. This guarantees that the commands are executed as a single isolated operation.
P/s:
I use Redis because my service already uses it; you can use any other mechanism that supports isolation to do this.
The action_domain in my code above is for when you only want action 1 called by user A to block action 2 of user A, without blocking other users. The idea is to use a unique lock key per user.
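As a concrete illustration of keeping that lock state in Redis, here is a minimal Scala sketch using the Jedis client; it uses SETNX as the atomic check-and-set instead of the MULTI/EXEC transaction quoted above, and the key layout is made up:
import redis.clients.jedis.Jedis

val jedis = new Jedis("localhost", 6379)

// Atomically claims the per-domain lock key: exactly one concurrent caller gets `true`.
def tryLock(actionDomain: String): Boolean = {
  val acquired = jedis.setnx(s"lock:$actionDomain", "running") == 1L
  if (acquired) jedis.expire(s"lock:$actionDomain", 60) // safety net so a crashed worker cannot hold the lock forever
  acquired
}

def release(actionDomain: String): Unit = jedis.del(s"lock:$actionDomain")
A caller that fails tryLock would push its block of code onto the queue (addIsolateCodeToQueue in the pseudocode above) instead of running it immediately.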
Transactions are now available in MongoDB 4.0. Sample here:
// Runs the txnFunc and retries if TransientTransactionError encountered
function runTransactionWithRetry(txnFunc, session) {
    while (true) {
        try {
            txnFunc(session); // performs transaction
            break;
        } catch (error) {
            // If transient error, retry the whole transaction
            if (error.hasOwnProperty("errorLabels") && error.errorLabels.includes("TransientTransactionError")) {
                print("TransientTransactionError, retrying transaction ...");
                continue;
            } else {
                throw error;
            }
        }
    }
}
// Retries commit if UnknownTransactionCommitResult encountered
function commitWithRetry(session) {
    while (true) {
        try {
            session.commitTransaction(); // Uses write concern set at transaction start.
            print("Transaction committed.");
            break;
        } catch (error) {
            // Can retry commit
            if (error.hasOwnProperty("errorLabels") && error.errorLabels.includes("UnknownTransactionCommitResult")) {
                print("UnknownTransactionCommitResult, retrying commit operation ...");
                continue;
            } else {
                print("Error during commit ...");
                throw error;
            }
        }
    }
}
// Updates two collections in a transaction
function updateEmployeeInfo(session) {
    employeesCollection = session.getDatabase("hr").employees;
    eventsCollection = session.getDatabase("reporting").events;
    session.startTransaction({ readConcern: { level: "snapshot" }, writeConcern: { w: "majority" } });
    try {
        employeesCollection.updateOne({ employee: 3 }, { $set: { status: "Inactive" } });
        eventsCollection.insertOne({ employee: 3, status: { new: "Inactive", old: "Active" } });
    } catch (error) {
        print("Caught exception during transaction, aborting.");
        session.abortTransaction();
        throw error;
    }
    commitWithRetry(session);
}
// Start a session.
session = db.getMongo().startSession({ mode: "primary" });
try {
    runTransactionWithRetry(updateEmployeeInfo, session);
} catch (error) {
    // Do something with error
} finally {
    session.endSession();
}

Mongo transactions and updates

If I've got an environment with multiple instances of the same client connecting to a MongoDB server and I want a simple locking mechanism to ensure single client access for a short time, can I safely use my own lock object?
Say I have one object with a lockState that can be "locked" or "unlocked" and the plan is everyone checks that it is "unlocked" before doing "stuff". To lock the system I say:
db.collection.update( { "lockState": "unlocked" }, { "lockState": "locked" })
(aka UPDATE lockObj SET lockState = 'locked' WHERE lockState = 'unlocked')
If two clients try to lock the system at the same time, is it possible that both clients can end up thinking they "have the lock"?
Both clients find the record by the query parameter of the update
Client 1 updates the record (which is an atomic operation)
update returns success
Client 2 updates the document (it already found it before client 1 modified it)
update returns success
I realize this is probably a very contrived case that would be very hard to reproduce, but is it possible or does mongo somehow make client 2's update fail?
Alternative approach
Use insert instead of update. insert is atomic and will fail if the document already exists.
To lock the system: db.locks.insert({someId: 27, state: "locked"}).
If the insert succeeds - I've got the lock and since the update was atomic, no one else can have it.
If the insert fails - someone else must have the lock.
If two clients try to lock the system at the same time, is it possible that both clients can end up thinking they "have the lock"?
No. Only one client at a time writes to the lock space (global, database, collection, or document, depending on your version and configuration), and the operations on that lock space are sequential and exclusive (read or write, not both) per document, so other connections will not mistakenly pick up a document in an in-between state and think that it is not locked by another client.
All operations on a single document are atomic, whether update or insert.
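Since the conditional update is atomic, a caller can tell whether it actually acquired the lock by checking how many documents were modified. A small sketch using the MongoDB Scala driver (the driver choice and names are assumptions, not from the question):
import org.mongodb.scala._
import org.mongodb.scala.model.Filters.equal
import org.mongodb.scala.model.Updates.set
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

// Only one concurrent caller can flip the single lock document from "unlocked" to "locked".
def tryAcquire(locks: MongoCollection[Document]): Future[Boolean] =
  locks.updateOne(equal("lockState", "unlocked"), set("lockState", "locked"))
    .toFuture()
    .map(_.getModifiedCount == 1) // 1: we got the lock; 0: someone else already holds it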

Locked Caching or Simple Cache with locking

Our e-commerce application, built on ATG, allows multiple users to update the same Order. Since the cache mode for Order is simple, this has resulted in a large number of ConcurrentUpdateExceptions and InvalidVersionExceptions. We were considering locked cache mode, but are skeptical about using locked caching, as Orders are updated very frequently and locking might result in deadlocks and have its own performance implications.
Is there a way we can continue using simple cache mode and minimize the occurrences of ConcurrentUpdateException and InvalidVersionException?
My experience has been that you have to use locked caching with orders on any medium- to high-volume ATG website. Also, remember that the end-user experience is bad when this happens, as users either get an error message (if the error handling is good) or something like a generic "internal server error".
The reason I believe you need to use locked caching for order is:
You can't guarantee that a user has not got multiple sessions open at the same time which are updating the shopping cart (which is just an incomplete Order). I have also seen examples where customers share their logins with family members etc and then wonder why all these items keep magically appearing in their shopping cart.
There are a number of processes which update the order including things like scenarios and customer service agents using the CSC module.
You could have code which updates orders in a non-safe way.
Some things which might help include:
Always use the OrderManager to load/update an order. Sounds obvious but I have seen a lot of updating orders via the repository.
Make sure that any updates are inside a transaction block.
Try to consolidate any background processes which might update orders to run on a small subset of your ATG instances (this will help reduce concurrency)
The ATG help has this to say about it:
A multi-server application might require locked caching, where only one Oracle ATG Web Commerce instance at a time has write access to the cached data of a given item type. You can use locked caching to prevent multiple servers from trying to update the same item simultaneously—for example, Commerce order items, which can be updated by customers on an external-facing server and by customer service agents on an internal-facing server. By restricting write access, locked caching ensures a consistent view of cached data among all Oracle ATG Web Commerce instances.
That said, converting to locked caching will almost certainly require performance testing and tuning of the order repository caches. It can and does result in deadlocks (I have seen that many times), but if configured correctly the deadlocks are infrequent.
Not sure what version of ATG you are using but for 10.2 there is a good explanation here of how you can get everything "in sync".
There is actually a Best Practices approach that was recommended in the legacy ATG Community a long time ago. I am just pasting it here.
When you are using the Order object with synchronization and transactions, there is a specific usage pattern that is critical to follow. Not following the expected pattern can lead to unnecessary ConcurrentUpdateExceptions, InvalidVersionExceptions, and deadlocks. The following sequence must be strictly adhered to in your code:
Obtain local-lock on profile ID.
Begin Transaction
Synchronize on Order
Perform ALL modifications to the order object.
Call OrderManager.updateOrder.
End Synchronization
End Transaction.
Release local-lock on profile ID.
Steps 1, 2, 7, 8 are done for you in the beforeSet() and afterSet() methods for ATG form handlers where order updates are expected. These include form handlers that extend PurchaseProcessFormHandler and OrderModifierFormHandler (deprecated). If your code accesses/modifies the order outside of a PurchaseProcessFormHandler, it will likely need to obtain the local-lock manually. The lock fetching can be done using the TransactionLockService.
So, if you have extended an ATG form handler based on PurchaseProcessFormHandler, and have written custom code in a handleXXX() method that updates an order, your code should look like:
synchronized( order )
{
    // Do order updates
    orderManager.updateOrder( order );
}
If you have written custom code updating an order outside of a PurchaseProcessFormHandler (e.g. CouponFormHandler, droplet, pipeline servlet, fulfillment-related), your code should look like:
ClientLockManager lockManager = getLocalLockManager(); // Should be configured as /atg/commerce/order/LocalLockManager
boolean acquireLock = false;
try
{
    acquireLock = !lockManager.hasWriteLock( profileId, Thread.currentThread() );
    if ( acquireLock )
        lockManager.acquireWriteLock( profileId, Thread.currentThread() );
    TransactionDemarcation td = new TransactionDemarcation();
    td.begin( transactionManager );
    boolean shouldRollback = false;
    try
    {
        synchronized( order )
        {
            // do order updates
            orderManager.updateOrder( order );
        }
    }
    catch ( ... e )
    {
        shouldRollback = true;
        throw e;
    }
    finally
    {
        try
        {
            td.end( shouldRollback );
        }
        catch ( Throwable th )
        {
            logError( th );
        }
    }
}
finally
{
    try
    {
        if ( acquireLock )
            lockManager.releaseWriteLock( profileId, Thread.currentThread(), true );
    }
    catch( Throwable th )
    {
        logError( th );
    }
}
This pattern is only useful to prevent ConcurrentUpdateExceptions, InvalidVersionExceptions, and deadlocks when multiple threads attempt to update the same order on the same ATG instance. This should be adequate for most situations on a commerce site since session stickiness will confine updates to the same order to the same ATG instance.

Multiple / Rapid ajax requests and concurrency issues with Entity Framework

I have an ASP.NET MVC4 application in which I am using Unity as my IoC container. The constructor for my controller takes a Repository, and that repository takes a UnitOfWork (DbContext). Everything seems to work fine until multiple ajax requests from the same session happen too quickly. I get the "Store update, insert, or delete statement affected an unexpected number of rows (0)" error due to a concurrency issue. This is what the method called from the ajax request looks like:
public void CaptureData(string apiKey, Guid sessionKey, FormElement formElement)
{
    var trackingData = _trackingService.FindById(sessionKey);
    if (trackingData != null)
    {
        var formItem = trackingData.FormElements
            .Where(f => f.Name == formElement.Name)
            .FirstOrDefault();
        if (formItem != null)
        {
            formItem.Value = formElement.Value;
            _formElementRepository.Update(formItem);
        }
    }
}
This only happens when the ajax requests come in rapidly. When the requests happen at a normal speed everything seems fine; it is as if the app needs time to catch up. I'm not sure how to handle the concurrency check in my repository so I don't miss an update. I have also tried setting "MultipleActiveResultSets" to true, and that didn't help.
As you mentioned in the comments, you are using a row version column. The point of this column is to prevent concurrent overwrites of the same row. You have two operations:
Read record - reads record and current row version
Update record - update record with specified key and row version. The row version is updated automatically
Now if those operations are executed by concurrent requests, you may get this:
Request A: Read record
Request B: Read record
Request A: Write record - changes row version!
Request B: Write record - fires exception because record with row version retrieved during Read record doesn't exist
The exception is thrown to tell you that you are trying to update stale data, because there is already a newer version of the updated record. Normally you need to refresh the data (by reloading the current record from the database) and try to save it again (see the sketch after the list below). In a highly concurrent scenario this handling may repeat many times, simply because your database is designed to prevent exactly this kind of overwrite. Your options are:
Remove the row version and let requests overwrite the value as they wish. If you really need concurrent request processing and you are happy to end up with "some" value, this may be the way to go.
Do not allow concurrent requests. If you need to process all updates, you most probably also need their real order. In such a case your application should not allow concurrent requests.
Use SQL / a stored procedure instead. By using table hints you will be able to lock the record during the Read operation, and no other request will be able to read that record before the first one saves its changes and commits or rolls back its transaction.
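The refresh-and-retry handling described above is not specific to Entity Framework. Purely as an illustration of the pattern, here is a sketch in Slick (matching the first question's stack rather than EF, with made-up table and column names):
import slick.jdbc.PostgresProfile.api._
import scala.concurrent.{ExecutionContext, Future}

// Hypothetical table with an Int rowVersion column used for optimistic concurrency.
class FormElements(tag: Tag) extends Table[(Long, String, String, Int)](tag, "form_elements") {
  def id         = column[Long]("id", O.PrimaryKey)
  def name       = column[String]("name")
  def value      = column[String]("value")
  def rowVersion = column[Int]("row_version")
  def * = (id, name, value, rowVersion)
}
val formElements = TableQuery[FormElements]

// Update only if the row version we read is still current; reload and retry if not.
def updateValue(db: Database, id: Long, newValue: String, retries: Int = 3)
               (implicit ec: ExecutionContext): Future[Boolean] =
  db.run(formElements.filter(_.id === id).result.headOption).flatMap {
    case None => Future.successful(false)
    case Some((_, _, _, version)) =>
      val q = formElements
        .filter(f => f.id === id && f.rowVersion === version)
        .map(f => (f.value, f.rowVersion))
        .update((newValue, version + 1))
      db.run(q).flatMap {
        case 1                 => Future.successful(true)                        // our version was still current
        case _ if retries > 0  => updateValue(db, id, newValue, retries - 1)     // stale read: reload and retry
        case _                 => Future.failed(new RuntimeException("too many concurrent updates"))
      }
  }
The idea is the same as what EF does for you: the UPDATE is conditioned on the version read earlier, an affected-row count of 0 means someone else got there first, and the caller reloads and retries.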