Find and update with PESSIMISTIC_WRITE lock does not prevent optimistic lock failure? - spring-data

I've tried to find and update one record using a PESSIMISTIC_WRITE lock, like below:
transactionService.executeTransaction(() -> {
    Entity newEntity = em.find(entityClass, id, LockModeType.PESSIMISTIC_WRITE);
    newEntity = setEntityWithProperty.setEntity(newEntity, property);
});
I'm using the TransactionTemplate in Spring. The entity itself has a version field used for optimistic locking. With a PESSIMISTIC_WRITE lock, other threads are not supposed to be able to read or write the record during the transaction (find and then update), right? But when the transaction commits, I can still get an OptimisticLockingFailureException. Why does this happen?
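For context, here is the kind of interleaving I suspect, as a minimal sketch (the variable names stale and otherProperty are illustrative): a second thread that loaded the same entity without a lock before my transaction acquired PESSIMISTIC_WRITE can still fail its own commit with a stale version, so the pessimistic lock in one transaction does not rule out optimistic failures elsewhere.

// Thread B: plain read, no lock, sees version = 1
transactionService.executeTransaction(() -> {
    Entity stale = em.find(entityClass, id);                        // no LockModeType
    // ... thread A runs its locked find-and-update here, bumping version to 2 ...
    stale = setEntityWithProperty.setEntity(stale, otherProperty);  // illustrative change
}); // commit issues UPDATE ... WHERE version = 1 -> 0 rows -> OptimisticLockingFailureException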

Related

How to use document locks to prevent external modification of records during Spring Data Mongodb transactions

I have a question regarding Spring Data Mongo and Mongo Transactions.
I have successfully implemented Transactions, and have verified the commit and rollback work as expected utilizing the Spring @Transactional annotation.
However, I am having a hard time getting the transactions to work the way I would expect in the Spring Data environment.
Spring Data does Mongo -> Java object mapping. So, the typical pattern for updating something is to fetch it from the database, make modifications, then save it back to the database. Prior to implementing transactions, we have been using Spring's optimistic locking to account for the possibility of updates happening to a record between the fetch and the update.
I was hoping that I would be able to not include the optimistic locking infrastructure for all of my updates once we were able to use Transactions. So, I was hoping that, in the context of a transaction, the fetch would create a lock, so that I could then do my updates and save, and I would be isolated so that no one could get in and make changes like previously.
However, based on what I have seen, the fetch does not create any kind of lock, so nothing prevents any other connection from updating the record, which means it appears that I have to maintain all of my optimistic locking code despite having native mongodb transaction support.
I know I could use mongodb findAndUpdate methods to do my updates and that would not allow interim modifications from occurring, but that is contrary to the standard pattern of Spring Data which loads the data into a Java Object. So, rather than just being able to manipulate Java Objects, I would have to either sprinkle mongo specific code throughout the app, or create Repository methods for every particular type of update I want to make.
Does anyone have any suggestions on how to handle this situation cleanly while maintaining the Spring Data paradigm of just using Java Objects?
Thanks in advance!
I was unable to find any way to do a 'read' lock within a Spring/MongoDB transaction.
However, in order to be able to continue to use the following pattern:
fetch record
make changes
save record
I ended up creating a method which does a findAndModify in order to 'lock' a record during fetch, then I can make the changes and do the save, and it all happens in the same transaction. If another process/thread attempts to update a 'locked' record during the transaction, it is blocked until my transaction completes.
For the lockForUpdate method, I leveraged the version field that Spring already uses for optimistic locking, simply because it is convenient and can easily be modified for a simple lock operation.
I also added my implementation to a Base Repository implementation to enable 'lockForUpdate' on all repositories.
This is the gist of my solution with a bit of domain specific complexity removed:
import static org.springframework.data.mongodb.core.query.Criteria.where;
import static org.springframework.data.mongodb.core.query.Query.query;

import java.io.Serializable;

import org.springframework.data.mongodb.core.FindAndModifyOptions;
import org.springframework.data.mongodb.core.MongoOperations;
import org.springframework.data.mongodb.core.query.Update;
import org.springframework.data.mongodb.repository.query.MongoEntityInformation;
import org.springframework.data.mongodb.repository.support.SimpleMongoRepository;

public class BaseRepositoryImpl<T, ID extends Serializable> extends SimpleMongoRepository<T, ID>
        implements BaseRepository<T, ID> {

    private final MongoEntityInformation<T, ID> entityInformation;
    private final MongoOperations mongoOperations;

    public BaseRepositoryImpl(MongoEntityInformation<T, ID> metadata, MongoOperations mongoOperations) {
        super(metadata, mongoOperations);
        this.entityInformation = metadata;
        this.mongoOperations = mongoOperations;
    }

    public T lockForUpdate(ID id) {
        // Verify the class has a version before trying to increment the version in order to lock a record
        try {
            getEntityClass().getMethod("getVersion");
        } catch (NoSuchMethodException e) {
            throw new InvalidConfigurationException("Unable to lock record without a version field", e);
        }

        return mongoOperations.findAndModify(query(where("_id").is(id)),
                new Update().inc("version", 1L), new FindAndModifyOptions().returnNew(true), getEntityClass());
    }

    private Class<T> getEntityClass() {
        return entityInformation.getJavaType();
    }
}
Then you can make calls along these lines when in the context of a transaction:

Record record = recordRepository.lockForUpdate(recordId);
...make changes to record...
recordRepository.save(record);
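To expose lockForUpdate on all repositories, the custom base class also has to be registered with Spring Data. This is a minimal sketch of the wiring (the configuration class name is made up; the BaseRepository interface mirrors the one implemented above):

import java.io.Serializable;

import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.data.mongodb.repository.config.EnableMongoRepositories;
import org.springframework.data.repository.NoRepositoryBean;

// Shared interface; concrete repositories extend this instead of MongoRepository directly
@NoRepositoryBean
public interface BaseRepository<T, ID extends Serializable> extends MongoRepository<T, ID> {
    T lockForUpdate(ID id);
}

// Tells Spring Data to back every repository with BaseRepositoryImpl
@Configuration
@EnableMongoRepositories(repositoryBaseClass = BaseRepositoryImpl.class)
class MongoRepositoryConfig {
}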

Is there a default transaction for the SaveChanges method?

If I make multiple operations with the same Entity Framework DbContext (add and update)
with one call to SaveChanges, will those changes be done as a transaction or not?
using (MyContext context = new MyContext())
{
    context.Table1.Add(entity1);
    context.Table2.Add(entity2);
    context.SaveChanges();
}
Or is there a chance that just one of them will be executed without the other?
Yes, it's wrapped in a transaction:
In all versions of Entity Framework, whenever you execute
SaveChanges() to insert, update or delete on the database the
framework will wrap that operation in a transaction. This transaction
lasts only long enough to execute the operation and then completes.
When you execute another such operation a new transaction is started.
https://learn.microsoft.com/en-us/ef/ef6/saving/transactions
You can't do a partial save; otherwise your DbContext could get into an inconsistent state. You can only call SaveChanges multiple times, after each change operation.

Within a JTA transaction (using container-managed transactions), the executeUpdate method of an explicit Query does an immediate commit

Within JBoss 7.1 AS, I'm using container-managed transactions. For each request, I do several entity updates. Most of the entities use the "insert, merge, refresh" methods from EntityManager to manage the updates. However, there is one entity that uses an explicit Query to do "executeUpdate" on the DB (see below for the code snippet). This SQL update is immediately committed to the DB and it is not aligned with the container-managed transaction (like the other entity updates). Is there any way to align an explicit SQL update (the one below) with the container-managed transaction? I'm trying to get rollback to work and this SQL update is not being rolled back. All other entity updates and inserts are working fine except this one. Thanks for all your help.
code snippet:
entityManager.createQuery(
        "UPDATE Balance a SET a.balanceValue = :newValue WHERE a.balanceId = :balanceId AND a.balanceValue = :currentValue")
    .setParameter("balanceId", cb.getBalanceId())
    .setParameter("currentValue", cb.getBalanceValue())
    .setParameter("newValue", newAmt)
    .executeUpdate();
Additional code (the code below uses a bean-managed transaction, but I get the same behaviour for CMT as well):
ut.begin();

ChargingBalance bal2 = entityManager.find(ChargingBalance.class, 13);
bal2.setResetValue((new Date()).getTime());

String UPDATE_BALANCE_AND_EXPIRYDATE_EQUAL = "UPDATE ChargingBalanceValue a"
        + " SET a.balanceValue = :newValue "
        + " WHERE a.balanceId = :balanceId";

Query query = entityManager.createQuery(UPDATE_BALANCE_AND_EXPIRYDATE_EQUAL)
        .setParameter("balanceId", 33)
        .setParameter("newValue", 1000L);

/* The executeUpdate command gets committed to the DB before ut.commit is executed */
query.executeUpdate();

/* This below only commits changes on ResetValue */
ut.commit();

ut.begin();

ChargingBalance bal = entityManager.find(ChargingBalance.class, 23);
bal.setResetValue(1011L);

query = entityManager.createQuery(UPDATE_BALANCE_AND_EXPIRYDATE_EQUAL)
        .setParameter("balanceId", 33)
        .setParameter("newValue", 2000L);
query.executeUpdate();

/* This rollback doesn't roll back changes executed by executeUpdate, but it rolls back the ResetValue change */
ut.rollback();
The executeUpdate command gets committed to DB before ut.commit is
executed
It has probably flushed the changes into the database, but not committed them, as you were in BMT.
You can try rolling back and verify whether the change was really committed or is still within the transaction.
This below only commits changes on ResetValue
When you execute a native or JPQL/HQL update query, it makes changes directly in the database, and the EntityManager might not be aware of those changes.
Therefore managed entities aren't refreshed implicitly by the EntityManager and might contain outdated/stale data.
You can go through the documentation for more details; below is the excerpt.
JPQL UPDATE queries provide an alternative way for updating entity
objects. Unlike SELECT queries, which are used to retrieve data from
the database, UPDATE queries do not retrieve data from the database,
but when executed, update the content of specified entity objects in
the database.
Updating entity objects in the database using an UPDATE query may be
slightly more efficient than retrieving entity objects and then
updating them, but it should be used cautiously because bypassing the
EntityManager may break its synchronization with the database. For
example, the EntityManager may not be aware that a cached entity
object in its persistence context has been modified by an UPDATE
query. Therefore, it is a good practice to use a separate
EntityManager for UPDATE queries.
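If the persistence context needs to see the result of such a bulk update within the same transaction, a common workaround (a sketch, not part of the original answer; someManagedChargingBalanceValue stands for any entity instance already managed in the persistence context) is to clear or refresh the affected entities after executeUpdate:

// Inside the same transaction, after the bulk JPQL UPDATE
query.executeUpdate();

// Option 1: detach everything so subsequent finds reload fresh state from the database
entityManager.flush();   // push pending managed-entity changes first
entityManager.clear();   // discard stale copies held in the persistence context

// Option 2: refresh a specific managed entity that the bulk UPDATE may have touched
entityManager.refresh(someManagedChargingBalanceValue);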

Optimistic concurrency updates using Entity Framework

I have a repository using EF 4.1 and DbContext. When updating an object, I receive this error:
Store update, insert, or delete statement affected an unexpected number of rows (0). Entities may have been modified or
deleted since entities were loaded. Refresh ObjectStateManager
entries.
I suppose it is connected with optimistic concurrency updates. Any idea how to solve it?
public void UpdateAddingCandidate(Event eventObj, int candidateId)
{
    Candidate newCandidate = db.Candidates.AsNoTracking().FirstOrDefault(x => x.CandidateId == candidateId);

    eventObj.Candidate = newCandidate;
    eventObj.CandidateId = newCandidate.CandidateId;

    db.Entry(eventObj).State = EntityState.Modified;
}
Look into ObjectContext.Refresh, which allows you to refresh entities from the database. You can set the RefreshMode to ClientWins or StoreWins.
Use try/catch logic and handle the conflict in the catch block: either force the change with ClientWins, or pull the changed data down into the context and restart the edit. In most cases, the latter is the better approach.

How to prevent non-repeatable query results using persistence API in Java SE?

I am using Java SE and learning about the use of a persistence API (toplink-essentials) to manage entities in a Derby DB. Note: this is (distance learning) university work, but it is not 'homework'; this issue crops up in the course materials.
I have two threads operating on the same set of entities. My problem is that, with every approach I have tried, the entities within a query result set (the query is performed within a transaction) in one thread can be modified so that the result set is no longer valid for the rest of the transaction.
e.g. from one thread this operation is performed:
static void updatePrices(EntityManager manager, double percentage) {
    EntityTransaction transaction = manager.getTransaction();
    transaction.begin();

    Query query = manager.createQuery("SELECT i FROM Instrument i where i.sold = 'no'");
    List<Instrument> results = (List<Instrument>) query.getResultList();

    // force thread interruption here (testing non-repeatable read)
    try { Thread.sleep(2000); } catch (Exception e) { }

    for (Instrument i : results) {
        i.updatePrice(percentage);
    }

    transaction.commit();
    System.out.println("Price update committed");
}
And if it is interrupted from another thread with this method:
private static void sellInstrument(EntityManager manager, int id)
{
    EntityTransaction transaction = manager.getTransaction();
    transaction.begin();

    Instrument instrument = manager.find(Instrument.class, id);
    System.out.println("Selling: " + instrument.toFullString());
    instrument.setSold(true);

    transaction.commit();
    System.out.println("Instrument sale committed");
}
What can happen is that when the thread within updatePrices() resumes, its query result set is invalid, and the price of a sold item ends up being updated to a different price from the one at which it was sold. (The shop wishes to keep records of sold items in the DB.) Since there are concurrent transactions occurring, I am using a different EntityManager for each thread (from the same factory).
Is it possible (through locking or some kind of context propagation) to prevent the results of a query becoming 'invalid' during an (interrupted) transaction? I have an idea that this kind of scenario is what Java EE is for, but what I want to know is whether it's doable in Java SE.
Edit:
Taking Vineet and Pascal's advice: using the @Version annotation in the entity's class (with an additional DB column) causes the large transaction (updatePrices()) to fail with an OptimisticLockException. This is very expensive if it happens at the end of a large set of query results, though. Is there any way to cause my query (inside updatePrices()) to lock the relevant rows, causing the thread inside sellInstrument() to either block or throw an exception (and then abort)? That would be much cheaper. (From what I understand, I do not have pessimistic locking in TopLink Essentials.)
Thread safety
I have a doubt about the way you manage your EntityManager. While an EntityManagerFactory is thread-safe (and should be created once at application startup), an EntityManager is not, and you should typically use one EntityManager per thread (or synchronize access to it, but I would use one per thread).
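A minimal sketch of that arrangement (the persistence unit name "instrumentsPU" and the 2.5 percentage are made up):

// Created once for the whole application; the EntityManagerFactory is thread-safe
final EntityManagerFactory emf = Persistence.createEntityManagerFactory("instrumentsPU");

// Each worker thread creates, uses and closes its own EntityManager
Runnable priceUpdater = new Runnable() {
    public void run() {
        EntityManager manager = emf.createEntityManager();
        try {
            updatePrices(manager, 2.5);
        } finally {
            manager.close();
        }
    }
};
new Thread(priceUpdater).start();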
Concurrency
JPA 1.0 supports (only) optimistic locking (if you use a Version attribute) and two lock modes that allow you to avoid dirty reads and non-repeatable reads through the EntityManager.lock() API. I recommend reading Read and Write Locking and/or the whole section 3.4 Optimistic Locking and Concurrency of the JPA 1.0 spec for full details.
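For reference, a minimal sketch of such a Version attribute (the field and entity layout are illustrative):

@Entity
public class Instrument {

    @Id
    private int id;

    // JPA checks and increments this column at commit time; a mismatch raises an OptimisticLockException
    @Version
    private long version;

    // ... price, sold flag, etc.
}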
PS: Note that pessimistic locking is not supported in JPA 1.0, or only through provider-specific extensions (it has been added to JPA 2.0, as well as other locking options). Just in case, TopLink supports it through the eclipselink.pessimistic-lock query hint.
As written in the JPA wiki, TopLink Essentials is supposed to support pessimistic locking in JPA 1.0 via a query hint:
// eclipselink.pessimistic-lock
Query query = em.createQuery("select f from Foo f where f.bar = :bar");
query.setParameter("bar", "foobar");
query.setHint("eclipselink.pessimistic-lock", "Lock");
query.getResultList();
I don't use TopLink so I can't confirm this hint is supported in all versions. If it isn't, then you'll have to use a native SQL query if you want to generate a "FOR UPDATE".
You might want to take a look at the EntityManager.lock() method, which allows you to obtain an optimistic or a pessimistic lock on an entity once a transaction has been initialized.
Going by your description of the problem, you wish to lock the database record once it has been 'selected' from the database. This can be achieved via a pessimistic lock, which is more or less equivalent to a SELECT ... FROM tbl FOR UPDATE statement.
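As a rough sketch of the EntityManager.lock() route (note that JPA 1.0 only defines the optimistic READ and WRITE modes; the pessimistic modes arrived with JPA 2.0):

transaction.begin();

Instrument instrument = manager.find(Instrument.class, id);

// JPA 1.0: WRITE is an optimistic lock that also forces a version increment on commit,
// so a conflicting concurrent update is detected at commit time.
manager.lock(instrument, LockModeType.WRITE);
// With JPA 2.0 you could instead request a true row lock via LockModeType.PESSIMISTIC_WRITE.

instrument.setSold(true);
transaction.commit();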