Exception Handling with an Entity Manager - jpa

I have heard that when committing transactions using an Entity Manager, it is good practice to try again if the commit fails, since the failure may be due to the object having been changed while the transaction was in progress.
Does this seem like a proper retry implementation?
int loopCount = 1;
boolean transactionCommitted = false;
while (!transactionCommitted && loopCount < 3) {
    EntityManager em = EMF.getInstance().getEntityManager();
    // declared before the try block so the finally block can reach it
    EntityTransaction tx = em.getTransaction();
    try {
        tx.begin();
        Player playerToEdit = em.find(Player.class, id);
        playerToEdit.setLastName(lastName);
        tx.commit();
        transactionCommitted = true;
    } catch (Exception e) {
        if (loopCount == 2) {
            // throw an exception, retry already occurred?
        }
    } finally {
        if (tx.isActive()) {
            tx.rollback();
        }
        em.close();
    }
    loopCount++;
}

As you are catching "Exception" in your catch block, you are retrying the update even in situations where it can never succeed. For example, if the entity can't be found in the database, you try to find it twice.
You should catch the most specific exception, "OptimisticLockException". This exception is thrown when the version of the entity doesn't match the version stored in the database. A @Version field in the entity is a requirement for this locking strategy.
There are other locking strategies for highly concurrent applications, but most of the time an optimistic locking strategy is the most appropriate.
As a small detail, using a constant for the number of retries instead of "magic numbers" improves code readability and makes it easier to change the number of retries in the future. Both points are combined in the sketch below.
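A minimal sketch of both suggestions, reusing the question's Player entity and EMF helper; the @Version field is an assumed addition the entity would need:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Player {
    @Id
    private Long id;

    private String lastName;

    @Version // JPA checks and increments this column on every update
    private int version;

    public void setLastName(String lastName) { this.lastName = lastName; }
}

And the retry loop, catching only the conflict case:

import javax.persistence.EntityManager;
import javax.persistence.EntityTransaction;
import javax.persistence.OptimisticLockException;

public class PlayerService {

    private static final int MAX_RETRIES = 2; // named constant, no magic number

    public void renamePlayer(Long id, String lastName) {
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            EntityManager em = EMF.getInstance().getEntityManager();
            EntityTransaction tx = em.getTransaction();
            try {
                tx.begin();
                Player player = em.find(Player.class, id);
                player.setLastName(lastName);
                em.flush(); // the version check runs here, so a conflict
                            // surfaces directly as OptimisticLockException
                tx.commit();
                return; // success, no retry needed
            } catch (OptimisticLockException e) {
                if (attempt == MAX_RETRIES) {
                    throw e; // out of retries; let the caller decide
                }
                // otherwise loop again: the next find() re-reads fresh state
            } finally {
                if (tx.isActive()) {
                    tx.rollback();
                }
                em.close();
            }
        }
    }
}

Note that a conflict detected at commit time, rather than at flush, may arrive wrapped in a RollbackException, so flushing explicitly before the commit keeps the catch clause simple.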

Generally it is a bad idea to retry submitting the same changes. Even if the thrown exception is an OptimisticLockException, retrying is not a good idea, because it could mean that one user silently overwrites changes another user has made. Imagine the following scenario:
User 1 changes entityX and commits it.
User 2 changes some of the fields of the same entityX and tries to commit it. The EntityManager throws an exception.
The correct behavior here would be to surface the exception to the user, so that they re-read the entity and reapply their modifications deliberately.
And now the most important argument for why this is dangerous:
Hibernate, at least, is known to misbehave badly if you try to reuse the EntityManager after it has thrown an exception. Before your data gets corrupted or your application stops working as intended, take a look at this article or this one.
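To make that concrete, a hedged sketch that reuses the question's names (EMF, Player): roll back, surface the conflict, and never touch the failed EntityManager again.

import javax.persistence.EntityManager;
import javax.persistence.EntityTransaction;
import javax.persistence.OptimisticLockException;

public void renamePlayer(Long id, String lastName) {
    EntityManager em = EMF.getInstance().getEntityManager();
    EntityTransaction tx = em.getTransaction();
    try {
        tx.begin();
        Player player = em.find(Player.class, id);
        player.setLastName(lastName);
        tx.commit();
    } catch (OptimisticLockException e) {
        if (tx.isActive()) {
            tx.rollback();
        }
        // do NOT retry on this EntityManager; tell the user instead,
        // so they can re-read the entity and reapply their change
        throw new IllegalStateException("Player was modified concurrently", e);
    } finally {
        em.close(); // discard the EM either way; never reuse one that has thrown
    }
}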

Related

InvalidOperationException using SqlServerRetryingExecutionStrategy without user-initiated transactions

I'm getting an intermittent InvalidOperationException in a Service Bus-triggered Azure Function:
The configured execution strategy 'SqlServerRetryingExecutionStrategy' does not support user-initiated transactions. Use the execution strategy returned by 'DbContext.Database.CreateExecutionStrategy()' to execute all the operations in the transaction as a retriable unit.
The message is pretty clear, but there are no user-initiated transactions in the code. The code is very simple:
try
{
    Context.Add<TEntity>(entity);
    await Context.SaveChangesAsync();
}
catch (Exception ex)
{
    ...
}
The context is registered as scoped using DI.
I was looking online for answers, but all the explanations point toward user-initiated transactions, of which there are none in the code.
Is there any other reason or explanation for this exception?
Is it possible that in some conditions the SaveChanges operation would be seen as a user-initiated transaction?
Is it possible for transactions from stored procedures to cause this?

How to bypass objects that fail to insert in Entity Framework?

Entity Framework is used in one of my projects.
I came across a situation where performance is terrible: multiple records need to be inserted into a table one by one, roughly 50 to 500 of them (not entirely sure, but a lot).
At first, I used:
dbcontext.Adds.Add(alist);
to do the insert. But I soon found out that if even one object has invalid data and cannot be inserted into the database correctly, none of the data gets inserted! The inserts are all tied to each other, and there is no way to bypass the incorrect ones.
Here is my solution:
foreach (var a in alist)
{
    //...
    try
    {
        dbcontext.Adds.Add(a);
        dbcontext.SaveChanges();
    }
    catch (Exception ex)
    {
        // just log and bypass, one by one
        log4net.LogManager.GetLogger("NOTSAVE").Info(a.ToString());
    }
}
Now it works properly.
But there is a big issue: the performance is terrible! The client may wait several seconds for each action, and customer feedback has suffered accordingly.
Does anyone know another solution to improve performance? Ideally each action would complete in 500 ms to 1 second, but currently, once there are more than 100 records, it can take well over a second each time, resulting in obvious and frequent pauses for the customer on every UI operation.
The first and most important thing you can do is turn off AutoDetectChangesEnabled; it has a huge impact on performance.
dbcontext.Configuration.AutoDetectChangesEnabled = false;
foreach (var a in alist)
{
    //...
    try
    {
        dbcontext.Adds.Add(a);
        dbcontext.SaveChanges();
    }
    catch (Exception ex)
    {
        // just log and bypass, one by one
        log4net.LogManager.GetLogger("NOTSAVE").Info(a.ToString());
    }
}
dbcontext.Configuration.AutoDetectChangesEnabled = true;
Read this article for more info: EntityFramework Performance and AutoDetectChanges
You can also use third-party libraries:
Entity Framework Extensions
EntityFramework.Utilities
And you can use System.Data.SqlClient.SqlBulkCopy in ADO.NET, which is really fast.

EclipseLink disable change tracking per query

We are currently experiencing severe slowdowns in our application due to change tracking in EclipseLink. The problem is home-made; we don't use JPA as it was meant to be used.
I would like to know how I can get cache hits (1st level) while keeping these entities out of change tracking unless some condition is met.
// not real code, but close if you decompose the layers and inline methods
public void foo(Long customerId, boolean changeName, String newName) {
    /*** check customer valid ***/
    // #1 HOW TO discard from change tracking? – Customer won’t get modified!
    Customer customer = entityManager.find(Customer.class, customerId);

    // some business rules
    // #2 They should be auto-discarded from change tracking (because #1 is also discarded)
    checkSomething(customer.getAddresses());
    checkSomething(customer.getPhoneNumbers());
    …

    /*** manipulate customer ***/
    // somewhere else, in different classes/methods …
    if (changeName) {
        // #3 HOW TO get a cache hit (1st level) - it was read in #1
        // newName should be persisted
        customer = entityManager.find(Customer.class, customerId);
        customer.setName(newName);
    }
}
It would be OK to use the EclipseLink API for #1 and #2, but I would prefer hints.
EclipseLink 2.4.2
2nd level cache: disabled
ChangeTrackingType: DEFERRED
Try using the read-only query hint, which can be passed as a property to find or to queries; see this for more on hints. The read-only hint should return the instance from the shared 2nd-level cache, which should not be modified. Because it is not added to the EntityManager's 1st-level cache, any other read without the hint will build/return the managed instance.
The documentation states this works for non-transactional read operations, so I'm not sure how it will behave if the EntityManager is using a transactional connection for reads, as it will not use the shared cache for reads through a transaction.
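For illustration, a sketch of passing that hint to find, using the question's entity names; QueryHints and HintValues are EclipseLink's constants classes:

import java.util.HashMap;
import java.util.Map;
import org.eclipse.persistence.config.HintValues;
import org.eclipse.persistence.config.QueryHints;

// #1: validation-only read - request a read-only instance
Map<String, Object> hints = new HashMap<String, Object>();
hints.put(QueryHints.READ_ONLY, HintValues.TRUE); // "eclipselink.read-only"
Customer readOnly = entityManager.find(Customer.class, customerId, hints);

// #3: read again without the hint when the entity really must change;
// this find() builds/returns the managed, change-tracked instance
Customer managed = entityManager.find(Customer.class, customerId);
managed.setName(newName);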

EJB - Commit and flush within a MDB

I have a message-driven bean which communicates with a database via an EntityManager. The EM is injected via @PersistenceContext, as usual. I want to flush changes to an entity immediately, without waiting for the MDB to fully complete its processing of the given Message.
For example:
MDB's onMessage() {
    Foo f = em.find(Foo.class, 123);
    f.setNewStatus("Performing work!");
    em.merge(f);
    em.flush();
    ...
    // Continue doing a lot of work...
    ...
    f.setNewStatus("Done!");
    em.merge(f);
    em.flush();
}
The problem is that I never see the "Performing Work!" status from outside the context of the MDB (e.g. by logging into the DB directly and checking the tuple's value).
This appears to be related to transactions. From online material, it sounds like a transaction is started within the context of onMessage() and not committed until the method completes. Hence, the intermediate status is never committed, since we eventually write "Done!", which overwrites Foo's value within the PersistenceContext.
Is there a solution to this type of problem? Some way to control the context of the transaction?
I think what you want to achieve is to see changes from outside the transaction before that transaction commits. This is only possible when the transaction isolation level is set to Read Uncommitted, which I don't think is the default in your DB.
What you can do is add a method that writes your status, annotated with the attribute @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW).
In this case, the container will suspend the current transaction, create a new one that is executed within this method, and resume the main transaction when the method finishes. A sketch of such a bean follows.
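A minimal sketch under those assumptions (the bean and method names are illustrative; Foo is the question's entity):

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class StatusUpdater {

    @PersistenceContext
    private EntityManager em;

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void updateStatus(int fooId, String status) {
        Foo f = em.find(Foo.class, fooId);
        f.setNewStatus(status); // managed entity; the change is written at commit
    } // the REQUIRES_NEW transaction commits here, independent of the caller
}

The MDB would inject this bean with @EJB and call updateStatus(123, "Performing work!"). Note that the call has to go through the injected proxy (not a plain this call) for the REQUIRES_NEW attribute to take effect.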

Calling a HTTP service within a JPA/JTA transaction - transaction integrity

I have a JSF/EJB/JPA application which uses container-managed persistence. There is one case where a call is made to an external service via HTTP which has a cost, this cost being allocated back to the requesting user. In the current implementation the process of making the HTTP request is performed by an EJB timer method running periodically in the background.
The timer method may have to deal with a number of requests in one invocation, although each request needs to be treated independently (independently with respect to allocating the cost back to the user, that is). If user A doesn't have enough credit to purchase a book, this mustn't cause a rollback that prevents the successful purchase of a book by user B and the corresponding debit of their balance.
To provide control over the transaction demarcation for independent processing of each request, I'm using bean-managed transactions for the class in which the timer method resides. This is a Java-pseudo-code version of what I've got now:
@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class MessageTimer {

    private void processMessages(UserMessage msg) {
        boolean commit = false;
        tx.begin();
        em.joinTransaction();
        try {
            userData = em.find(..., PESSIMISTIC_WRITE);
            if (user has enough credit) {
                debit cost from user;
                status = make external http request to order book from supplier;
                if (status == success) {
                    commit = true;
                }
            }
        } catch (Exception e) {
            tx.rollback();
            return; // already rolled back; don't roll back twice below
        }
        if (commit) {
            tx.commit();
        } else {
            tx.rollback();
        }
    }
}
So the idea is that I start a transaction, assume success and debit the cost from the user, call the HTTP service, and commit if it succeeds or roll back otherwise.
I have an uneasy feeling that I may not be anywhere near the right ballpark with this design, particularly having the lengthy HTTP call (actually done using jax-rs) inside the PESSIMISTIC_WRITE transaction. I wondered if I could firstly, within a transaction, debit the user (begin/debit/commit), then make the HTTP call, then credit the user back if any error happens, but then there's no transaction integrity.
This is new territory for me. Can anyone point me in the right direction? Is there an established way of doing what I'm trying to do?
Many thanks.
P.S. I'm using a GlassFish 3.1 stack with Seam 3.
I am not sure how the jax-rs communication layer behaves, but if the communication is single-threaded, then the code you have written holds a long-running transaction, which might make your application slower.
I am not a tech guru, but what I can suggest is this:
Debit the account in its own transaction, then make the jax-rs call on a separate thread. That way the transaction is closed before the call is sent to the remote node, so it is not a long-running transaction and the application will be faster. A rough sketch of that split follows.
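A rough illustration of that split, with all names hypothetical: the debit commits in its own transaction before the HTTP call, and a compensating credit is issued if the call fails. This gives eventual consistency rather than atomicity; if the compensating credit itself fails, a retry or reconciliation mechanism is still needed, which is the "no transaction integrity" trade-off the question mentions.

import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class BookOrderService {

    @EJB
    private AccountService accounts; // hypothetical bean whose debit/credit
                                     // methods each use REQUIRES_NEW

    @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED) // no JTA tx held during the HTTP call
    public void processRequest(UserMessage msg) {
        accounts.debit(msg.getUserId(), msg.getCost()); // tx #1 commits inside this call
        try {
            boolean ok = orderBookFromSupplier(msg); // jax-rs call with no transaction open
            if (!ok) {
                accounts.credit(msg.getUserId(), msg.getCost()); // tx #2: compensate
            }
        } catch (RuntimeException e) {
            accounts.credit(msg.getUserId(), msg.getCost()); // tx #2: compensate
            throw e;
        }
    }

    private boolean orderBookFromSupplier(UserMessage msg) {
        // the actual jax-rs client call would go here
        return true;
    }
}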