JPA : Parallel Thread : Inserting same record : SQLIntegrityConstraintViolationException

Using JpaRepository, we are trying to persist department and student details if they do not already exist. It works fine in a single-threaded environment, but it fails with multiple threads:
Caused by: java.sql.SQLIntegrityConstraintViolationException: Duplicate entry 'DEP12' for key 'departmentId'
Code snippet:
@Transactional
public void persistDetails(String departmentName, String studentName)
{
    Department dep = departmentRepository.findByDepartmentName(departmentName);
    if (dep == null) {
        dep = createDepartmentObject(departmentName);
        departmentRepository.save(dep);
    }
    ...
}
How can we achieve this in a multi-threaded environment? The call should not fail; instead, it should use the existing record and perform the other operations.
We also tried catching the exception and running the select query inside the catch block, but in that case it fetches the cached object rather than reading from the DB.
Catching the exception, code snippet:
@Transactional
public void persistDetails(String departmentName, String studentName)
{
    Department dep = departmentRepository.findByDepartmentName(departmentName);
    try {
        if (dep == null) {
            dep = createDepartmentObject(departmentName);
            departmentRepository.save(dep);
        }
    }
    catch (Exception e)
    {
        dep = departmentRepository.findByDepartmentName(departmentName);
    }
    ...
}

Implement your departmentRepository.save in such a way that it uses saveOrUpdate (if you are using Hibernate directly) or merge (if you are using the JPA API).

You are catching the exception in the wrong place. A catch like the one you show should be done outside of the transaction. Only then can you be sure you have consistent entities in the session.
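To illustrate the find-or-create pattern discussed above, here is a minimal plain-Java sketch. The `DepartmentStore` class and its method names are illustrative stand-ins for the repository, not Spring or JPA APIs; the in-memory map plays the role of the database's unique constraint.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for the department table: insert throws on a
// duplicate key, mimicking SQLIntegrityConstraintViolationException.
class DepartmentStore {
    private final Map<String, String> rows = new ConcurrentHashMap<>();

    String insert(String id, String name) {
        if (rows.putIfAbsent(id, name) != null) {
            throw new IllegalStateException("Duplicate entry '" + id + "'");
        }
        return name;
    }

    String find(String id) {
        return rows.get(id);
    }
}

public class FindOrCreate {
    // Catch OUTSIDE the insert attempt and fall back to a fresh select,
    // so the loser of the race simply adopts the winner's row.
    static String findOrCreate(DepartmentStore store, String id, String name) {
        String existing = store.find(id);
        if (existing != null) {
            return existing;
        }
        try {
            return store.insert(id, name);   // another thread may win the race
        } catch (IllegalStateException duplicate) {
            return store.find(id);           // re-read the winner's row
        }
    }

    public static void main(String[] args) {
        DepartmentStore store = new DepartmentStore();
        store.insert("DEP12", "Physics");    // simulate the other thread winning
        System.out.println(findOrCreate(store, "DEP12", "Chemistry")); // Physics
    }
}
```

In a real Spring/JPA setup the fallback lookup must run in a new transaction (for example via a separate method annotated with @Transactional(propagation = Propagation.REQUIRES_NEW)), because the original transaction is marked rollback-only once the constraint violation occurs.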

Related

JBoss Arquillian : How to force the database to throw an exception while running the Arquillian test?

I am implementing a feature where, if there is any exception while writing data into the DB, we retry 5 times before failing. I have implemented the feature but am not able to test it using an Arquillian test.
We are using JPA with Versant as the database. So far, I have been debugging the Arquillian test and, once my flow reaches the DB handler code, stopping the database. But this is the worst way of testing.
Do you have any suggestions on how to achieve this?
With JPA in mind, the easiest way is to add a method to your data access layer with which you can run native queries. Then you run a query against a nonexistent table or something similar. In my DAO utilities I found a method like this:
public List findByNativeQuery(String nativeQuery, Map<String, Object> args) {
    try {
        final EntityManager em = getEntityManager();
        final Query query = em.createNativeQuery(nativeQuery);
        if (args != null && !args.isEmpty()) {
            for (final Map.Entry<String, Object> entry : args.entrySet()) {
                query.setParameter(entry.getKey(), entry.getValue());
            }
        }
        return query.getResultList();
    }
    catch (RuntimeException e) {
        // best would be to wrap e in a checked exception and throw that;
        // rethrowing here keeps the method compilable
        throw e;
    }
}
Native solutions
There is the old trick of dividing by zero in the database. At selection time you could try:
select 1/0 from dual;
At insertion time (you need a table):
insert into test_table (test_number_field) values (1/0);
Pure JPA solution
You can try to utilize the @Version annotation and decrement the version to provoke an OptimisticLockException. This is thrown not in the database but in the Java layer, yet it fulfills your need.
All of these will make the DB call fail.
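The @Version mechanism mentioned above can be mimicked in plain Java to see why a tampered version number forces a failure. This sketch is illustrative only, not JPA itself: an update succeeds only when the caller's version matches the stored one, which is exactly the check that makes a decremented version blow up.

```java
import java.util.concurrent.atomic.AtomicLong;

public class VersionedRow {
    private final AtomicLong version = new AtomicLong(0);
    private volatile String value = "initial";

    /**
     * Mimics JPA optimistic locking: the update is applied only when
     * expectedVersion matches the current version; otherwise it throws,
     * like an OptimisticLockException on a stale or decremented @Version.
     */
    void update(long expectedVersion, String newValue) {
        if (!version.compareAndSet(expectedVersion, expectedVersion + 1)) {
            throw new IllegalStateException("stale version " + expectedVersion);
        }
        value = newValue;
    }

    long version() { return version.get(); }

    public static void main(String[] args) {
        VersionedRow row = new VersionedRow();
        row.update(0, "first");       // ok: versions match, bumps to 1
        try {
            row.update(0, "second");  // stale: like a decremented @Version
        } catch (IllegalStateException e) {
            System.out.println("optimistic lock failure simulated");
        }
    }
}
```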

Good design to Insert multiple records

I am working on a program that reads from a file and inserts line by line into an Oracle 11g database, using JTA/EclipseLink 2.3.x JPA with container-managed transactions.
I've developed the code below, but I'm bothered by the fact that the failed lines have to be identified and fixed manually.
public class CreateAccount {
    @PersistenceContext(unitName="filereader")
    private EntityManager em;

    private ArrayList<String> unprocessed;

    public void upload(){
        // read the file into unprocessed
        for (String s : unprocessed) {
            this.process(s);
        }
    }

    private void process(String s){
        // Set the entity's properties and validate the entity.
        em.persist(account);
    }
}
This first version takes a few seconds to commit 5000 rows to the database, as it seems to take advantage of prepared-statement caching. It works fine when all entities to persist are valid. However, I am concerned that even if I validate the entity, it can still fail for various unexpected reasons, and when any entity throws an exception during commit I cannot find the particular record that caused it, and all entities are rolled back.
I also tried another approach that begins and commits a new transaction for each line, without using a managed transaction, using the following code:
for (String s : unprocessedLines) {
    try {
        em.getTransaction().begin();
        this.process(s);
        em.getTransaction().commit();
    } catch (Exception e) {
        // Any exception that a line caused can be caught here
        e.printStackTrace();
    }
}
The second version works well for logging erroneous lines, as exceptions caused by individual lines are caught and handled, but it takes over 300 s to commit the same 5000 lines to the database. That is not reasonable when a large file is being processed.
Is there any workaround that lets me insert records quickly and at the same time be notified of any failed lines?
Well, this is more of a guess, but why don't you keep the transaction and commit it in batch? You keep the rollback exception and the speed at the same time:
EntityTransaction tx = em.getTransaction(); // declare it once instead of re-fetching it every time
try {
    tx.begin();
    for (String s : unprocessedLines) {
        this.process(s);
    }
    tx.commit();
} catch (RollbackException exc) {
    // here you have your rollback reason
} finally {
    if (tx.isActive()) {
        tx.rollback();
    }
}
My solution turned out to be a binary search, starting with a block of a reasonable size, e.g. last = first + 1023, to minimize the depth of the tree.
Note, however, that this works only if the error is deterministic, and it is worse than committing each record individually if the error rate is very high.
private boolean batchProcess(int first, int last){
    try {
        em.getTransaction().begin();
        for (int i = first; i <= last; i++) {
            this.process(unprocessedLines.get(i));
        }
        em.getTransaction().commit();
        return true;
    } catch (Exception e) {
        e.printStackTrace();
        if (em.getTransaction().isActive()) {
            em.getTransaction().rollback();
        }
        if (first == last) {
            failedLine.add(unprocessedLines.get(first));
        } else {
            int mid = (first + last) / 2 + 1;
            batchProcess(first, mid - 1);
            batchProcess(mid, last);
        }
        return false;
    }
}
For container-managed transactions, the binary search may need to run outside the context of the current transaction; otherwise a RollbackException will be thrown, because the container has already decided to roll back that transaction.
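The bisection idea above can be sketched independently of JPA. In this minimal plain-Java version (all names are illustrative), each recursive call stands in for one transaction: a batch that "rolls back" is split in half until each failing line is isolated, costing only O(log n) extra batches per bad line.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class BatchBisect {
    /**
     * Tries to "commit" lines[first..last] as one batch; on failure, splits
     * the range and recurses. `ok` stands in for a successful process(s);
     * one false result simulates a rollback of the whole batch.
     */
    static void bisect(List<String> lines, int first, int last,
                       Predicate<String> ok, List<String> failed) {
        boolean batchOk = true;
        for (int i = first; i <= last; i++) {
            if (!ok.test(lines.get(i))) {
                batchOk = false;             // simulated rollback
                break;
            }
        }
        if (batchOk) {
            return;                          // simulated commit
        }
        if (first == last) {
            failed.add(lines.get(first));    // single bad line isolated
        } else {
            int mid = (first + last) / 2 + 1;
            bisect(lines, first, mid - 1, ok, failed);
            bisect(lines, mid, last, ok, failed);
        }
    }

    public static void main(String[] args) {
        List<String> lines = List.of("a", "b", "BAD", "d", "e", "BAD2");
        List<String> failed = new ArrayList<>();
        bisect(lines, 0, lines.size() - 1, s -> !s.startsWith("BAD"), failed);
        System.out.println(failed); // both bad lines, in order
    }
}
```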

Why can't a transaction commit in a self-invoked EJB method with the @REQUIRES_NEW annotation?

First I want to explain my self-invoked EJB method in this situation. I have a stateful session bean with a method that starts a new transaction (annotated with @TransactionAttribute(REQUIRES_NEW)). To invoke this method from inside the bean itself and make the annotation effective, I use SessionContext#getBusinessObject() to achieve the effect of @EJB (@EJB here causes a stack overflow?!). My code is shown below:
@Stateful
@Local
public class TransactionTest implements ITransactionTest {
    @PersistenceContext(unitName="Table", type=PersistenceContextType.EXTENDED)
    private EntityManager manager;

    @Resource
    SessionContext sc;

    ITransactionTest me;

    @PostConstruct
    public void init(){
        me = this.sc.getBusinessObject(ITransactionTest.class);
    }

    public void generateRecord(int i) throws RuntimeException{
        Record record = new Record();
        record.setId(i+"");
        record.setStatus(1);
        manager.persist(record);
        manager.flush(); // If I don't flush, the result is correct. Why?
        me.updateRecord(i);
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void updateRecord(int i) throws RuntimeException{
        try {
            Record record = manager.find(Record.class, i+"");
            record.setStatus(2);
            manager.flush();
        } catch(Exception e) {
            e.printStackTrace();
            throw new RuntimeException();
        }
    }
}
generateRecord() runs properly: the console shows it executes the 'insert' and 'update' SQL without any exception (I use Hibernate as the JPA provider). However, the 'update' result does not appear in the database. Why? Does updateRecord() commit correctly?
I also tried two alternatives. The first is invoking generateRecord() (no longer calling updateRecord() internally) and then updateRecord() consecutively from another bean; that gives the right result.
The second is removing the first flush(); then both the 'insert' and 'update' statements are executed at the second flush(). This also produces the right result.
My program runs under JBoss 6.1.0-Final and the database is Oracle.
Best Regards,
Kajelas

How to remove multiple objects in a batch call using their IDs?

How can I remove multiple objects in a batch call using their IDs?
I tried this
EntityManager em = ...
em.getTransaction().begin();
try
{
    for (Visitor obj : map.keySet())
    {
        Visitor fake = em.getReference(Visitor.class, obj.getId());
        em.remove(fake);
    }
    em.getTransaction().commit();
}
catch (Exception ex)
{
    ex.printStackTrace();
}
I see DELETE statements in the log file, but then it throws
<openjpa-2.1.1-r422266:1148538 fatal store error> org.apache.openjpa.persistence.RollbackException: Optimistic locking errors were detected when flushing to the data store. The following objects may have been concurrently modified in another transaction: [com.reporting.data.Visitor-53043]
at org.apache.openjpa.persistence.EntityManagerImpl.commit(EntityManagerImpl.java:593)
at com.reporting.ui.DBUtil.saveAnswers(DBUtil.java:311)
I have a single thread.
Update:
I also tried
for (Visitor obj : map.keySet())
em.remove(obj);
But it's slow, because on every iteration it sends a SELECT to the server. I assume OpenJPA does this to reattach the object to the context.
After multiple experiments I ended up with a hacky JPQL query. Here is a code snippet:
List<Long> lstExistingVisitors = ...
Query qDeleteVisitors = em.createQuery("delete from Visitor obj where obj.id in (?1)");
qDeleteVisitors.setParameter(1, lstExistingVisitors);
qDeleteVisitors.executeUpdate();
I tried a list as big as 5000 IDs. It works fine with MySQL 5.1 and H2DB.
Try to use JPQL
em.createQuery("delete from Visitor v where v.id in (:param)")
.setParameter("param", idsList).executeUpdate();
OpenJPA docs: http://openjpa.apache.org/builds/1.2.0/apache-openjpa-1.2.0/docs/manual/jpa_langref.html#jpa_langref_bulk_ops
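One caveat with the bulk-delete approach above: some databases limit the size of an IN list (Oracle, for instance, rejects more than 1000 expressions with ORA-01795), so very large ID lists may need to be split and the bulk delete executed once per chunk. Here is a small plain-Java partitioning helper; the class and method names are our own, not part of any JPA API.

```java
import java.util.ArrayList;
import java.util.List;

public class Partition {
    // Splits `ids` into chunks of at most `size` elements, so each chunk
    // can be bound separately to "... where obj.id in (:param)".
    static <T> List<List<T>> partition(List<T> ids, int size) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += size) {
            chunks.add(ids.subList(i, Math.min(i + size, ids.size())));
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < 2500; i++) ids.add(i);
        System.out.println(partition(ids, 1000).size()); // 3
    }
}
```

Each chunk would then be passed to setParameter and executeUpdate in turn, all inside one transaction if atomicity is required.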

Getting access to newly inserted Identity ID before SaveChanges method will be called

I'm using the Entity Framework (LINQ to Entities) and I've come across a scenario where I need to access the newly inserted identity record before performing multiple operations using a procedure.
Following is the code snippet:
public void SaveQuote(Domain.Quote currentQuote)
{
    try
    {
        int newQuoteId;
        // Add quote and quoteline details to db
        if (currentQuote != null)
        {
            using (QuoteContainer quoteContainer = new QuoteContainer())
            {
                quoteContainer.AddToQuote(currentQuote);
                newQuoteId = currentQuote.QuoteId;
            }
        }
        else return;
        // Execution of some stored procedure using the newly generated QuoteId
    }
    catch (Exception ex)
    {
        throw ex;
    }
}
In the next function, quoteContainer.SaveChanges() will be called to commit the DB changes.
Can anyone suggest whether the above approach is correct?
Correct so far.
Remember: you cannot get the IDENTITY until the insert has occurred! On an update, your entity already holds the IDENTITY (mainly the PK).