OptimisticLockException with concurrent JPA update under Spring Boot - jpa

Here is the sample project where the exception is reproduced.
This sample illustrates the issue that arises when many concurrent transactions modify an Account balance. An Account can have many Card entities bound to it. Transactions are related to an Order and take some time to complete. Each thread executes as follows:
the client requests '/order/{hashId}' for the first available Order for the given card hash id
the client starts a new tx for the given order - '/tx/{orderId}/start'
the client completes the tx - '/tx/{txId}/stop/{amount}', where the tx amount is subtracted from the Account balance.
Entity Locking
Account and Order entities are versioned with @javax.persistence.Version. In the last step the Account entity is locked with a pessimistic write lock:
@Override
public Account getLockedAccount(Integer id) {
    final Account account = findOne(id);
    em.lock(account, LockModeType.PESSIMISTIC_WRITE);
    return account;
}
Testing
To test concurrent access, use the JMeter script src/main/resources/StressTest.jmx. NB: extra libs have to be installed into the JMeter home to run the script, due to the usage of the JSON Path extractor. With these specific settings, on an average laptop you can get around 10% errors for the TxEnd request:
{
  "timestamp":1425407408204,
  "status":500,
  "error":"Internal Server Error",
  "exception":"org.springframework.orm.ObjectOptimisticLockingFailureException",
  "message":"Object of class [sample.data.jpa.domain.Account] with identifier [1]: optimistic locking failed; nested exception is org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) : [sample.data.jpa.domain.Account#1]",
  "path":"/tx/1443/stop/46.4"
}
Question
Despite using a pessimistic write lock, I still get the optimistic locking exception. Is there any other approach to ensure the integrity of the account without creating a task execution queue for all updates or synchronizing methods?
UPD: The workaround with a task executor is placed in another branch. Spring ThreadPoolTaskExecutor combined with a transactional task remediates the issue.
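For reference, a minimal sketch of what such a single-threaded executor workaround might look like; the AccountService bean and its void debit method are illustrative placeholders, not names from the sample project:

import java.math.BigDecimal;
import java.util.concurrent.Future;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

public class AccountUpdateQueue {

    private final ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    private final AccountService accountService; // hypothetical @Transactional service with a void debit(...) method

    public AccountUpdateQueue(AccountService accountService) {
        this.accountService = accountService;
        // a single worker thread serializes all balance updates
        executor.setCorePoolSize(1);
        executor.setMaxPoolSize(1);
        executor.initialize();
    }

    public Future<?> submitDebit(Integer accountId, BigDecimal amount) {
        // each submitted task performs its update inside the service's own transaction
        return executor.submit(() -> accountService.debit(accountId, amount));
    }
}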

Between the find and the locking, the Account object may already have been modified by another transaction.
You need to do it in one statement:
em.find(Account.class, id, LockModeType.PESSIMISTIC_WRITE)
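Applied to the getLockedAccount method from the question, the fix could look roughly like this (a sketch, assuming em is the injected EntityManager):

@Override
public Account getLockedAccount(Integer id) {
    // find and lock in a single operation so no other transaction can
    // modify the row between loading the entity and acquiring the lock
    return em.find(Account.class, id, LockModeType.PESSIMISTIC_WRITE);
}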

Related

Making POST requests idempotent

I have been looking for a way to design my API so that it will be idempotent, and part of that is making my POST request routes idempotent. While researching, I stumbled upon this article.
(If I have misunderstood something, please correct me!)
It gives a good explanation of the general idea, but it lacks concrete examples of how the author implemented it himself.
Someone asked the writer of the article how he would guarantee atomicity, so the writer added a code example.
Essentially, his code example covers two cases.
The flow if everything goes well:
Open a transaction on the DB that holds the data that needs to change because of the POST request
Inside this transaction, execute the needed change
Set the idempotency key, with the response to the client as its value, in the Redis store
Set an expiry time on that key
Commit the transaction
The flow if something inside the code goes wrong:
An exception occurs inside the flow of the function
A rollback of the transaction is performed
Notice that the transaction that is opened is for a certain DB; let's call it A.
However, it does not cover the Redis store that he also uses, meaning that the rollback of the transaction will only affect DB A.
So it covers the case where something happens inside the code that makes it impossible to complete the transaction.
But what will happen if the machine the code runs on crashes after it has already set the expiry time on the key and is about to commit the transaction?
In that case, the key will be available in the Redis store, but the transaction will not have been committed.
This results in a situation where the service is sure that the needed changes have already happened, but they haven't; the machine failed before it could finish.
I need to design the API in such a way that if either the change to the data or the setting of the key and value in Redis fails, both are rolled back.
What is the solution to this problem?
How can I guarantee atomicity between changing the needed data in one database and, at the same time, setting the key and the needed response in Redis, so that if either of them fails, both are rolled back? (Including the case where a machine crashes in the middle of the actions.)
Please add a code example when answering! I'm using the same technologies as in the article (nodejs, redis, mongo - for the data itself)
Thanks :)
Per the code example you shared in your question, the behavior you want is to make sure there was no crash on the server between the moment the idempotency key was set in Redis (saying this transaction already happened) and the moment the transaction is, in fact, persisted in your database.
However, when using Redis and another database together you have two independent points of failure, and two actions being executed sequentially at different moments (and even if they are executed asynchronously at the same time, there is no guarantee the server won't crash before both of them have completed).
What you can do instead is include in your transaction an insert statement into a table holding relevant information about this request, including the idempotency key. As the ACID properties ensure atomicity, either all the statements in the transaction are executed successfully or none of them are, which means your idempotency key will be available in your database if the transaction succeeded.
You can still use Redis, as it's going to provide faster results than your database.
A code example is provided below, but it might be good to think about how relevant a failure between the Redis insert and the database insert really is to your business (could it be handled with another strategy?) to avoid over-engineering.
async function execute(idempotentKey) {
  let db;
  try {
    // append to the query statement an insert into the executions table.
    // this will be persisted with the transaction
    const query = `
      UPDATE firsttable SET ...;
      UPDATE secondtable SET ...;
      INSERT INTO executions (idempotent_key, success) VALUES (:idempotent_key, true);
    `;
    db = await dbConnection();
    await db.beginTransaction();
    await db.execute(query);
    // we're setting a key on redis with the value "false".
    await redisClient.setAsync(idempotentKey, false, 'EX', process.env.KEY_EXPIRE_TIME);
    /*
    if the server crashes exactly here, the idempotent key will be on redis with false as its value.
    in that case there are two possibilities: the commit to the database succeeded or it didn't.
    if on the next request redis provides a false value, query the database to verify whether the transaction was executed.
    */
    await db.commit();
    // you can now set the key's value to true, meaning the commit succeeded and you won't need to query the database to verify that.
    await redisClient.setAsync(idempotentKey, true);
  } catch (err) {
    if (db) {
      await db.rollback();
    }
    throw err;
  }
}

Spring Batch: reuse existing service as a reader

I want to reuse an existing, transactional, paginated service class, which retrieves items from a database using JPA, as a reader inside a Spring Batch job. I want to do that instead of using the JpaPagingItemReader directly, basically because the JPA query is more complex to build and the service already provides this functionality.
My question is: what are the things I should take into account when developing the Spring Batch adapter over this service? Although the reference documentation http://docs.spring.io/spring-batch/trunk/reference/html/readersAndWriters.html#pagingItemReaders has a section on reusing existing services, it doesn't say anything about the constraints, if there are any, of using such a transactional service.
Now, I looked at the JpaPagingItemReader as an example for building the reader, and I came up with a couple of questions I couldn't find answers for, neither in the documentation nor on Stack Overflow, although this post https://stackoverflow.com/a/26549831/4473261 helped.
The first thing I noticed is that the JpaPagingItemReader uses a new transaction for reading a page of data. The post above says that this new transaction is needed "so that features like retry and skip can be correctly performed." I have also found this article related to the matter https://blog.codecentric.de/en/2012/03/transactions-in-spring-batch-part-3-skip-and-retry/ which says that "when a skippable exception occurs during reading, we just increase the skip count and keep the exception for a later call on the onSkipInRead method of the SkipListener, if configured. There's no rollback". So I assume that the reader has to read the records in a new transaction so that, if the transaction started when the processing of the chunk began is rolled back, the reader is not affected. I am wondering whether this is true, and whether in that case my adapter should create a new transaction, invoke the service inside that transaction and then commit the transaction, similarly to how the JpaPagingItemReader does it. If that is true, though, I wonder why the framework doesn't provide a template that creates the transaction, delegates the actual data retrieval to the service, and then commits the transaction.
Greetings,
Cristi
From a reader perspective, there really isn't much to be concerned about. You can see in our JmsItemReader, which obviously works with a transactional store, that we don't take any additional precautions within the ItemReader itself.
What really matters is how you configure your step. When configuring your step, you'll need to mark the reader as transactional so that Spring Batch handles rollback correctly. When Spring Batch reads items in a fault-tolerant step, the default behavior is to buffer them so that they won't be re-read on failure (retry, skip, etc.). However, since the items read from a transactional store are tied to the transaction (and are therefore reset when the rollback occurs), you need to tell Spring Batch not to buffer the items as they are read.
To mark the ItemReader as transactional, you'll set the not-quite-well-named flag is-reader-transactional-queue to true. You can read more about configuring steps and transactions in the documentation here: http://docs.spring.io/spring-batch/trunk/reference/html/configureStep.html
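As a rough illustration of that configuration (not taken from the answer above, and using placeholder names such as MyItemService and its nextItem method), the adapter plus the transactional-queue flag might be wired up in Java config roughly like this:

import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.adapter.ItemReaderAdapter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ServiceReaderStepConfig {

    @Bean
    public Step serviceReaderStep(StepBuilderFactory stepBuilderFactory,
                                  MyItemService myItemService,     // hypothetical existing transactional, paginated service
                                  ItemWriter<MyItem> writer) {
        // Delegate reading to the existing service method; the target method must
        // return one item per call and null once there is nothing left to read.
        ItemReaderAdapter<MyItem> reader = new ItemReaderAdapter<>();
        reader.setTargetObject(myItemService);
        reader.setTargetMethod("nextItem");

        return stepBuilderFactory.get("serviceReaderStep")
                .<MyItem, MyItem>chunk(100)
                .reader(reader)
                .writer(writer)
                // Java-config counterpart of is-reader-transactional-queue="true":
                // tells Spring Batch not to buffer read items across a rollback.
                .readerIsTransactionalQueue()
                .build();
    }
}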

Locked Caching or Simple Cache with locking

Our e-commerce application, built on ATG, allows multiple users to update the same Order. Since the cache mode for Order is simple, this has resulted in a large number of ConcurrentUpdateException and InvalidVersionException errors. We were considering the locked cache mode, but we are skeptical about using locked caching because Orders are updated very frequently, and locking might result in deadlocks and have its own performance implications.
Is there a way we can continue using simple cache mode and minimize the occurrences of ConcurrentUpdateException and InvalidVersionException?
My experience has been that you have to use locked caching with orders on any medium-to-high-volume ATG website. Also, remember that the end-user experience is bad when this happens: they either get an error message (if the error handling is good) or something like an "internal server error".
The reason I believe you need to use locked caching for order is:
You can't guarantee that a user has not got multiple sessions open at the same time which are updating the shopping cart (which is just an incomplete Order). I have also seen examples where customers share their logins with family members etc and then wonder why all these items keep magically appearing in their shopping cart.
There are a number of processes which update the order including things like scenarios and customer service agents using the CSC module.
You could have code which updates orders in a non-safe way.
Some things which might help include:
Always use the OrderManager to load/update an order. Sounds obvious but I have seen a lot of updating orders via the repository.
Make sure that any updates are inside a transaction block.
Try to consolidate any background processes which might update orders to run on a small subset of your ATG instances (this will help reduce concurrency)
The ATG help has this to say about it:
A multi-server application might require locked caching, where only one Oracle ATG Web Commerce instance at a time has write access to the cached data of a given item type. You can use locked caching to prevent multiple servers from trying to update the same item simultaneously—for example, Commerce order items, which can be updated by customers on an external-facing server and by customer service agents on an internal-facing server. By restricting write access, locked caching ensures a consistent view of cached data among all Oracle ATG Web Commerce instances.
That said, converting to locked caching will almost certainly require performance testing and tuning of the order repository caches. It can and does result in deadlocks (I have seen that many times), but if configured correctly the deadlocks are infrequent.
Not sure what version of ATG you are using, but for 10.2 there is a good explanation here of how you can get everything "in sync".
There is actually a Best Practices approach that was recommended in the Legacy ATG Community a long time ago. I'm just pasting it here.
When you are using the Order object with synchronization and transactions, there is a specific usage pattern that is critical to follow. Not following the expected pattern can lead to unnecessary ConcurrentUpdateExceptions, InvalidVersionExceptions, and deadlocks. The following sequence must be strictly adhered to in your code:
1. Obtain local-lock on profile ID.
2. Begin transaction.
3. Synchronize on Order.
4. Perform ALL modifications to the order object.
5. Call OrderManager.updateOrder.
6. End synchronization.
7. End transaction.
8. Release local-lock on profile ID.
Steps 1, 2, 7, 8 are done for you in the beforeSet() and afterSet() methods for ATG form handlers where order updates are expected. These include form handlers that extend PurchaseProcessFormHandler and OrderModifierFormHandler (deprecated). If your code accesses/modifies the order outside of a PurchaseProcessFormHandler, it will likely need to obtain the local-lock manually. The lock fetching can be done using the TransactionLockService.
So, if you have extended an ATG form handler based on PurchaseProcessFormHandler, and have written custom code in a handleXXX() method that updates an order, your code should look like:
synchronized( order )
{
    // Do order updates
    orderManager.updateOrder( order );
}
If you have written custom code updating an order outside of a PurchaseProcessFormHandler (e.g. CouponFormHandler, droplet, pipeline servlet, fulfillment-related), your code should look like:
ClientLockManager lockManager = getLocalLockManager(); // Should be configured as /atg/commerce/order/LocalLockManager
boolean acquireLock = false;
try
{
    acquireLock = !lockManager.hasWriteLock( profileId, Thread.currentThread() );
    if ( acquireLock )
        lockManager.acquireWriteLock( profileId, Thread.currentThread() );

    TransactionDemarcation td = new TransactionDemarcation();
    td.begin( transactionManager );
    boolean shouldRollback = false;
    try
    {
        synchronized( order )
        {
            // do order updates
            orderManager.updateOrder( order );
        }
    }
    catch ( Exception e )
    {
        shouldRollback = true;
        throw e;
    }
    finally
    {
        try
        {
            td.end( shouldRollback );
        }
        catch ( Throwable th )
        {
            logError( th );
        }
    }
}
finally
{
    try
    {
        if ( acquireLock )
            lockManager.releaseWriteLock( profileId, Thread.currentThread(), true );
    }
    catch ( Throwable th )
    {
        logError( th );
    }
}
This pattern is only useful to prevent ConcurrentUpdateExceptions, InvalidVersionExceptions, and deadlocks when multiple threads attempt to update the same order on the same ATG instance. This should be adequate for most situations on a commerce site since session stickiness will confine updates to the same order to the same ATG instance.

Encountered a dead lock situation while querying a table

We have a table called job which has a self-referencing key. We are using JPA with EclipseLink as the JPA provider. Sometimes we get the following exception:
Exception [EclipseLink-4002] (Eclipse Persistence Services -
2.3.2.v20111125-r10461): org.eclipse.persistence.exceptions.DatabaseException Internal
Exception: com.sybase.jdbc3.jdbc.SybSQLException: Your server command
(family id #0, process id #384) encountered a deadlock situation.
Please re-run your command.
We have an action in our UI which, when performed, sends a JMS message to an external component, creates a record in our job table, and then returns the job id to the client, who is redirected to the jobs view listing all jobs in the table. After the redirect, the client sends an Ajax request to list all jobs. While this operation is in progress we receive notifications from external components and update the job table records.
I strongly believe that this happens because we try to update the table while the select operation is still running. Can anyone please tell me how to solve this problem?
Thank you all in advance. Good day.
You may be able to get around the select/update conflict by changing the locking scheme for the table, in addition to having good indexes.
Sybase has good documentation on this here:
Performance and Tuning Series: Locking and Concurrency Control

What things should I consider when using System.Transactions in my EF project?

Background
I have both an MVC app and a Windows service that access the same data access library, which utilizes Entity Framework. The Windows service monitors certain activity on several tables and performs some calculations.
We are using the DAL project against several hundred databases, generating the connection string for the context at runtime.
We have a number of functions (both stored procedures and .NET methods which call on EF entities) which, because of the scope of data we are using, are VERY DB intensive and have the potential to block one another.
The problem
The Windows service is not so important that it can't wait; if something must be blocked, it can be the Windows service. Earlier I found a number of SO questions stating that System.Transactions is the way to go for setting your transaction isolation level to READ UNCOMMITTED to minimize locks.
I tried this, and I may be misunderstanding what is going on, so I need some clarification.
The method in the windows service is structured like so:
private bool _stopMe = false;

public void Update()
{
    EntContext context = new EntContext();
    do
    {
        var transactionOptions = new System.Transactions.TransactionOptions();
        transactionOptions.IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted;
        using (var transactionScope = new System.Transactions.TransactionScope(System.Transactions.TransactionScopeOption.Required, transactionOptions))
        {
            List<Ent1> myEnts = (from e....Complicated query here).ToList();
            SomeCalculations(myEnts);
            List<Ent2> myOtherEnts = (from e... Complicated query using entities from previous query here).ToList();
            MakeSomeChanges(myOtherEnts);
            context.SaveChanges();
        }
        Thread.Sleep(5000); // wait 5 seconds before allowing the do block to continue
    } while (!_stopMe);
}
When I execute my second query, an exception gets thrown:
The underlying provider failed on Open.
Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please
enable DTC for network access in the security configuration for MSDTC using the
Component Services Administrative tool.
The transaction manager has disabled its support for remote/network
transactions. (Exception from HRESULT: 0x8004D024)
Am I right to assume that I should not be calling more than one query in that using block? The first query returned just fine. This is being performed on one database at a time (really, other instances are being run in different threads and nothing from this thread touches the others).
My question is, is this how it should be used or is there more to this that I should know?
Of Note: This is a monitoring function, so it must be run repeatedly.
In your code you are using a transaction scope. It looks like the first query uses a lightweight DB transaction; when the second query comes, the transaction scope escalates the transaction to a distributed transaction.
The distributed transaction uses MSDTC.
This is where the error comes from: by default, MSDTC is not enabled. Even if it is enabled and started, it needs to be configured to allow a remote client to create a distributed transaction.