How can I remove multiple objects in a batch call using their IDs?
I tried this:
EntityManager em = ...
em.getTransaction().begin();
try
{
    for (Visitor obj : map.keySet())
    {
        Visitor fake = em.getReference(Visitor.class, obj.getId());
        em.remove(fake);
    }
    em.getTransaction().commit();
}
catch (Exception ex)
{
    ex.printStackTrace();
}
I see DELETE statements in the log file, but then it throws:
<openjpa-2.1.1-r422266:1148538 fatal store error> org.apache.openjpa.persistence.RollbackException: Optimistic locking errors were detected when flushing to the data store. The following objects may have been concurrently modified in another transaction: [com.reporting.data.Visitor-53043]
at org.apache.openjpa.persistence.EntityManagerImpl.commit(EntityManagerImpl.java:593)
at com.reporting.ui.DBUtil.saveAnswers(DBUtil.java:311)
I have a single thread.
Update:
I also tried:
for (Visitor obj : map.keySet())
    em.remove(obj);
But it's slow, because on every iteration it sends a SELECT to the server. I assume OpenJPA does this to reattach the object to the persistence context.
After multiple experiments I ended up with a hacky JPQL query. Here is a code snippet:
List<Long> lstExistingVisitors = ...
Query qDeleteVisitors = em.createQuery("delete from Visitor obj where obj.id in (?1)");
qDeleteVisitors.setParameter(1, lstExistingVisitors);
qDeleteVisitors.executeUpdate();
I tried a list as big as 5,000 IDs. It works fine with MySQL 5.1 and H2.
Try to use a JPQL bulk delete:
em.createQuery("delete from Visitor v where v.id in (:param)")
.setParameter("param", idsList).executeUpdate();
OpenJPA docs: http://openjpa.apache.org/builds/1.2.0/apache-openjpa-1.2.0/docs/manual/jpa_langref.html#jpa_langref_bulk_ops
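One caveat if the ID list can grow very large: some databases cap the number of elements in an IN list (Oracle traditionally rejects more than 1,000 with ORA-01795), so it can be safer to delete in chunks. A minimal sketch of that idea, assuming the same Visitor entity; the chunk size of 500 is an arbitrary, conservative choice:
import java.util.List;
import javax.persistence.EntityManager;

public static void deleteVisitorsInChunks(EntityManager em, List<Long> ids) {
    final int CHUNK = 500; // arbitrary size, kept below common IN-list limits
    for (int i = 0; i < ids.size(); i += CHUNK) {
        // subList gives a view of the next slice of IDs
        List<Long> slice = ids.subList(i, Math.min(i + CHUNK, ids.size()));
        em.createQuery("delete from Visitor v where v.id in (:ids)")
          .setParameter("ids", slice)
          .executeUpdate();
    }
}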
Using JpaRepository, we are trying to persist department and student details if they do not already exist. It works fine in a single-threaded environment, but it fails with multiple threads:
Caused by: java.sql.SQLIntegrityConstraintViolationException: Duplicate entry 'DEP12' for key 'departmentId'
Code snippet:
@Transactional
public void persistDetails(String departmentName, String studentName)
{
    Department dep = departmentRepository.findByDepartmentName(departmentName);
    if (dep == null) {
        dep = createDepartmentObject(departmentName);
        departmentRepository.save(dep);
    }
    ...
}
How can we achieve this in a multi-threaded environment? It should not fail; instead it should use the existing record and perform the other operations.
We also tried catching the exception and running the select query inside the catch block, but in that case it fetches the cached object, not the current row from the DB.
Catching the exception, code snippet:
@Transactional
public void persistDetails(String departmentName, String studentName)
{
    Department dep = departmentRepository.findByDepartmentName(departmentName);
    try {
        if (dep == null) {
            dep = createDepartmentObject(departmentName);
            departmentRepository.save(dep);
        }
    }
    catch (Exception e)
    {
        dep = departmentRepository.findByDepartmentName(departmentName);
    }
    ...
}
Implement your departmentRepository.save in such a way that it uses saveOrUpdate (if you are using Hibernate directly) or merge (if you are using the JPA API).
You are catching the exception in the wrong place. The kind of catch you show here should be done outside of the transaction; only then can you be sure you have consistent entities in the session. A sketch of that idea follows.
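For illustration, a rough sketch of that advice with Spring, assuming the @Transactional persistDetails method shown above lives in a DetailsService bean; the DetailsFacade name and structure are hypothetical. The catch sits outside the transactional method, so the retry runs in a fresh transaction and finds the row the other thread committed:
import org.springframework.dao.DataIntegrityViolationException;
import org.springframework.stereotype.Component;

@Component
public class DetailsFacade {

    private final DetailsService detailsService; // the @Transactional bean from above

    public DetailsFacade(DetailsService detailsService) {
        this.detailsService = detailsService;
    }

    public void persistDetailsWithRetry(String departmentName, String studentName) {
        try {
            detailsService.persistDetails(departmentName, studentName);
        } catch (DataIntegrityViolationException e) {
            // The first transaction has rolled back: a concurrent thread won the
            // race on the unique key. Retrying in a new transaction makes
            // findByDepartmentName hit the database, not the stale session.
            detailsService.persistDetails(departmentName, studentName);
        }
    }
}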
I am implementing a feature where, if there is any exception while writing data to the DB, we should retry up to 5 times before failing. I have implemented the feature but am not able to test it with an Arquillian test.
We are using JPA with Versant as the database. So far I have been debugging the Arquillian test and stopping the database once my flow reaches the DB handler code, but this is the worst way of testing.
Do you have any suggestions on how to achieve this?
With JPA in mind, the easiest way is to add a method to your data access layer with which you can run native queries, and then run a query against a nonexistent table or something similar. In my DAO utilities I found a method like this:
public List findByNativeQuery(String nativeQuery, Map<String, Object> args) {
    try {
        final EntityManager em = getEntityManager();
        final Query query = em.createNativeQuery(nativeQuery);
        if (args != null && !args.isEmpty()) {
            for (final Map.Entry<String, Object> entry : args.entrySet()) {
                query.setParameter(entry.getKey(), entry.getValue());
            }
        }
        return query.getResultList();
    }
    catch (RuntimeException e) {
        // best wrapped and rethrown as a checked exception, e.g.
        // throw new DataAccessException(e.getMessage(), e);
        throw e; // rethrow so the failure actually surfaces
    }
}
Native solutions
There is the old trick of dividing by zero in the database. At selection time you could try:
select 1/0 from dual;
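Driven through the DAO method above, that might look like this in a test (the dao reference is hypothetical):
// provoke a failure from the database layer to exercise the retry logic
dao.findByNativeQuery("select 1/0 from dual", null);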
At insertion time (you need a table):
insert into test_table (test_number_field) values (1/0);
Pure JPA solution
You can try to utilize the @Version annotation and make the version stale, so that an OptimisticLockException is thrown. This one is raised not in the database but in the Java layer, yet it fulfills your need. A minimal sketch, assuming a hypothetical Account entity with a @Version column; rather than decrementing the in-memory version, it bumps the database copy behind the persistence context's back, which has the same stale-version effect:
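// Sketch: force an optimistic-lock failure for a retry test.
// "Account", its ACCOUNT/VERSION/ID columns, and the emf and id
// variables are hypothetical stand-ins.
EntityManager em = emf.createEntityManager();
em.getTransaction().begin();
Account a = em.find(Account.class, id);
// Bump the version column directly, bypassing the persistence context,
// so the in-memory copy of the entity is now stale:
em.createNativeQuery("update ACCOUNT set VERSION = VERSION + 1 where ID = ?")
  .setParameter(1, id)
  .executeUpdate();
a.setName("make the entity dirty"); // ensure a flush happens at commit
try {
    em.getTransaction().commit(); // provider detects the stale @Version value
} catch (RollbackException e) {
    // the cause chain contains an OptimisticLockException
}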
Any of these will make the write fail and so exercise your retry logic.
I am working on a program that reads from a file and inserts line by line into an Oracle 11g database, using JTA/EclipseLink 2.3.x JPA with container-managed transactions.
I've developed the code below, but I'm bothered by the fact that failed lines need to be identified and fixed manually.
public class CreateAccount {

    @PersistenceContext(unitName = "filereader")
    private EntityManager em;

    private ArrayList<String> unprocessed;

    public void upload() {
        // reading the file into unprocessed
        for (String s : unprocessed) {
            this.process(s);
        }
    }

    private void process(String s) {
        // Setting the entity with appropriate properties.
        // Validate the entity.
        em.persist(account);
    }
}
This first version takes a few seconds to commit 5,000 rows to the database, as it seems to take advantage of prepared-statement caching. It works fine when all entities to persist are valid. However, my concern is that even if I validate the entities, one can still fail for various unexpected reasons, and when any entity throws an exception during commit, I cannot find the particular record that caused it, and all entities are rolled back.
I also tried another approach that begins and commits a transaction for each line, without a managed transaction, using the following code around process(String s):
for (String s : unprocessedLines) {
    try {
        em.getTransaction().begin();
        this.process(s);
        em.getTransaction().commit();
    } catch (Exception e) {
        // any exception that a line caused can be caught here
        e.printStackTrace();
    }
}
The second version works well for logging erroneous lines, as exceptions caused by individual lines are caught and handled, but it takes over 300 s to commit the same 5,000 lines to the database. That time is not reasonable when a large file is being processed.
Is there any workaround that lets me validate and insert records quickly while still being notified of any failed lines?
Well, this is more of a guess, but why don't you try keeping one transaction and committing it as a batch? Then you keep the rollback exception and the speed at the same time:
EntityTransaction tx = em.getTransaction(); // declared once instead of invoking em.getTransaction() repeatedly
try {
    tx.begin();
    for (String s : unprocessedLines) {
        this.process(s);
    }
    tx.commit();
} catch (RollbackException exc) {
    // here you have your rollback reason
} finally {
    if (tx.isActive()) {
        tx.rollback();
    }
}
My solution turned out to be a binary search, starting with a block of reasonable size, e.g. last = first + 1023, to minimize the depth of the tree.
Note, however, that this works only if the error is deterministic, and it is worse than committing each record individually if the error rate is very high.
private void batchProcess(int first, int last) {
    try {
        em.getTransaction().begin();
        for (int i = first; i <= last; i++) {
            this.process(unprocessedLines.get(i));
        }
        em.getTransaction().commit();
    } catch (Exception e) {
        e.printStackTrace();
        if (em.getTransaction().isActive()) {
            em.getTransaction().rollback();
        }
        if (first == last) {
            // a single line failed; record it and move on
            failedLines.add(unprocessedLines.get(first));
        } else {
            // split the range and retry each half
            int mid = (first + last) / 2 + 1;
            batchProcess(first, mid - 1);
            batchProcess(mid, last);
        }
    }
}
With container-managed transactions, one may need to run the binary search outside the context of the calling transaction; otherwise there will be a RollbackException, because the container has already decided to roll back that transaction. A sketch of one way to arrange this follows.
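For illustration, a rough sketch under the assumption of an EJB container; the LineBatchWriter bean and its method names are hypothetical. Each attempt gets its own container-managed transaction, so a rollback in one range does not mark the caller's transaction rollback-only:
import java.util.List;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class LineBatchWriter {

    @PersistenceContext(unitName = "filereader")
    private EntityManager em;

    // REQUIRES_NEW suspends the caller's transaction and starts a fresh one,
    // so a rollback here stays contained to this range of lines.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void writeRange(List<String> lines, int first, int last) {
        for (int i = first; i <= last; i++) {
            process(lines.get(i)); // validate and em.persist, as in the question
        }
    }

    private void process(String s) {
        // set up and persist the entity for one line
    }
}
Note that writeRange must be called through the container proxy (i.e. from another bean); a direct self-invocation would bypass the REQUIRES_NEW attribute.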
I have an SQLite database mapped in an Entity Framework context.
I write to this database from several threads (a bad idea, I know). However, I tried using a global lock for my application, like this:
partial class MyDataContext : ObjectContext
{
    public new int SaveChanges()
    {
        lock (GlobalWriteLock.Lock)
        {
            try
            {
                int result = base.SaveChanges();
                Log.InfoFormat("fff Save changes performed for {0} entries", result);
                return result;
            }
            catch (UpdateException)
            {
                throw; // rethrow without resetting the stack trace
            }
        }
    }
}
Still, I get the database-file-locked exception all the way down from SQLite itself. How is this possible?
The only explanation I can see is that base.SaveChanges returns before the database is unlocked and continues working asynchronously after returning.
Is this the case? If so, how can I overcome this issue?
Note: My commits are usually updates of 1-100 entries and/or inserts of about 1-100 entries at a time.
I need to retrieve a single row from a table, and I was wondering which approach is better.
On the one hand, getSingleResult is designed for retrieving a single result, but it raises an exception. Does this method have any performance benefit over getResultList with
query.setFirstResult(0);
query.setMaxResults(1);
According to Effective Java by Joshua Bloch:
Use checked exceptions for conditions from which the caller can
reasonably be expected to recover. Use runtime exceptions to indicate
programming errors.
Credit to the source: Why you should never use getSingleResult() in JPA
@Entity
@NamedQuery(name = "Country.findByName",
            query = "SELECT c FROM Country c WHERE c.name = :name")
public class Country {

    @PersistenceContext
    transient EntityManager entityManager;

    public Country findByName(String name) {
        List<Country> results = entityManager
            .createNamedQuery("Country.findByName", Country.class)
            .setParameter("name", name)
            .getResultList();
        return results.isEmpty() ? null : results.get(0);
    }
}
getSingleResult throws a NonUniqueResultException if there are multiple rows. It is designed to retrieve the result when there is truly a single result.
The way you did it is fine, and JPA is designed to handle this properly. At the same time, you cannot compare the two approaches anyway, since getSingleResult simply won't work here.
However, depending on the code you are working on, it is often better to refine the query so that it returns a single result, if that's all you want; then you can just call getSingleResult, as sketched below.
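A minimal sketch of that refinement, assuming a hypothetical unique code column on Country; because the query can then match at most one row, getSingleResult is safe apart from the no-match case:
import java.util.Optional;
import javax.persistence.EntityManager;
import javax.persistence.NoResultException;

public Optional<Country> findByCode(EntityManager em, String code) {
    try {
        return Optional.of(em
            .createQuery("SELECT c FROM Country c WHERE c.code = :code", Country.class)
            .setParameter("code", code)
            .getSingleResult());
    } catch (NoResultException e) {
        return Optional.empty(); // no row matched the unique code
    }
}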
There is an alternative which I would recommend:
TypedQuery<Element> query = em.createQuery("your query", Element.class);
List<Element> elementList = query.getResultList();
return CollectionUtils.isEmpty(elementList) ? null : elementList.get(0);
This safeguards against a NullPointerException and ensures that only the first result is handed back.
getSingleResult throws a NonUniqueResultException if there are multiple rows, and a NoResultException if there are none. It is designed to retrieve a single result when there is truly a single result.
In combination with fetch(), the usage of setMaxResults(1) can lead to partially initialised objects. For example:
CriteriaBuilder cb = em.getCriteriaBuilder();
CriteriaQuery<Individual> query = cb.createQuery(Individual.class);
Root<Individual> root = query.from(Individual.class);
root.fetch(Individual_.contacts);
query.where(cb.equal(root.get(Individual_.id), id));
Individual i = em.createQuery(query)
    .setMaxResults(1) // assertion fails if the individual has 2 contacts
    .getResultList()
    .get(0);
assertEquals(2, i.getContacts().size());
So, I am using getResultList() without limit -- a bit unsatisfying.