How to Sync Entity Bean with Database After Trigger Update - jpa

PostgreSQL 9.1
Glassfish 3.1.2.2
Firefox 10.0.0.7
Linux
I am using JPA to persist entities in PostgreSQL. However, there is one table (named Position) that is populated by triggers, and I map it to an entity that I treat as read-only. The application never persists data into this table; its rows are created and modified by the triggers. I can load the data into my managed bean and view the Position table data fine.
The problem I have is that once the Position table has been modified by the triggers, querying it again returns the same values: the Position entity still contains the old data and is not reloaded.
Here is the bean which handles the Position entity. I added em.flush(), which didn't help, and I wasn't sure how to use em.refresh() in this situation. Besides, how would syncing help anyway, since without a query it doesn't know what I want to sync to?
Any help much appreciated.
The EJB ...
@Stateless
public class PositionBean implements IPosition {

    @PersistenceContext(unitName="positionbean-pu")
    private EntityManager em;

    public Position getPositionById(Integer posId) {
        Position pos = null;
        try {
            pos = (Position) em.createNamedQuery("findPosition")
                    .setParameter("posId", posId).getSingleResult();
        }
        catch (Exception e) {
            throw new EJBException(e.getMessage());
        }
        return pos;
    }
}
The entity bean ...
@Entity
@SequenceGenerator(name="posIdGen", initialValue=10000,
        sequenceName="pos_seq", allocationSize=1)
@NamedQuery(name="findPosition",
        query="SELECT p FROM Position p WHERE p.posId = :posId")
@Table(name="Position")
public class Position implements Serializable {
    // ...
persistence.xml
<persistence-unit name="positionbean-pu" transaction-type="JTA">
    <jta-data-source>jdbc/postgreSQLPool</jta-data-source>
</persistence-unit>

In case anyone runs into this, I learned how to use refresh() and it fixed my problem:
pos = (Position) em.createNamedQuery("findPosition")
        .setParameter("posId", posId).getSingleResult();
em.refresh(pos);
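For context: refresh() must be called on a managed entity; it discards the entity's in-memory state and re-reads the row from the database, which is what picks up the triggers' changes. Folded into the bean from the question, a sketch of the fixed method:
public Position getPositionById(Integer posId) {
    try {
        Position pos = (Position) em.createNamedQuery("findPosition")
                .setParameter("posId", posId).getSingleResult();
        em.refresh(pos); // overwrite any stale cached state with the current row
        return pos;
    }
    catch (Exception e) {
        throw new EJBException(e.getMessage());
    }
}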

In my opinion the correct way to create a read-only entity that is updated "externally" is as follows:
Use field-based access for the entity class by setting the @Access annotation to AccessType.FIELD (or remove the annotation, since it is the default access type), as stated in this answer. Also expose public getters only, as stated in this answer. This prevents the entity from being modified within the application.
Enable selective shared cache mode in the persistence.xml file and add the annotation @Cacheable(false) to the entity class, as stated in this answer. This forces JPA to always load the entity from the persistence layer when you call EntityManager#find(...).
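Putting both points together, a minimal sketch of such a read-only entity (the quantity field is hypothetical, added only for illustration):
@Entity
@Cacheable(false) // keep this entity out of the shared (L2) cache
@Access(AccessType.FIELD)
@Table(name="Position")
public class Position implements Serializable {

    @Id
    private Integer posId;

    private BigDecimal quantity; // hypothetical column

    // getters only, so the application cannot modify the state
    public Integer getPosId() { return posId; }
    public BigDecimal getQuantity() { return quantity; }
}
with the matching setting in persistence.xml:
<shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>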

Related

How id can be found in Transaction-Scoped Persistence context if it's not in the database

An example from Pro JPA:
@Stateless
public class AuditServiceBean implements AuditService {
    @PersistenceContext(unitName = "EmployeeService")
    EntityManager em;

    public void logTransaction(int empId, String action) {
        // verify employee number is valid
        if (em.find(Employee.class, empId) == null) {
            throw new IllegalArgumentException("Unknown employee id");
        }
        LogRecord lr = new LogRecord(empId, action);
        em.persist(lr);
    }
}

@Stateless
public class EmployeeServiceBean implements EmployeeService {
    @PersistenceContext(unitName = "EmployeeService")
    EntityManager em;

    @EJB
    AuditService audit;

    public void createEmployee(Employee emp) {
        em.persist(emp);
        audit.logTransaction(emp.getId(), "created employee");
    }
    // ...
}
And the text:
Even though the newly created Employee is not yet in the database, the
audit bean can find the entity and verify that it exists. This works
because the two beans are actually sharing the same persistence
context.
As far as I understand, the id is generated by the database. So how can emp.getId() be passed into audit.logTransaction() if the transaction has not been committed yet and the id has not been generated yet?
It depends on the GeneratedValue strategy. If you use something like the SEQUENCE or TABLE strategy, the persistence provider usually assigns the id to the entity (it has some reserved ids, based on the allocation size) immediately after the persist method is called.
But if you use the IDENTITY strategy, different providers may act differently. For example, in Hibernate the IDENTITY strategy performs the insert statement immediately and fills in the id field of the entity.
https://thoughts-on-java.org/jpa-generate-primary-keys/ says:
Hibernate requires a primary key value for each managed entity and
therefore has to perform the insert statement immediately.
But in EclipseLink, if you use the IDENTITY strategy, the id will only be assigned after flushing. So if you set the flush mode to AUTO (or call the flush method) you will have the id after persist.
https://wiki.eclipse.org/EclipseLink/UserGuide/JPA/Basic_JPA_Development/Entities/Ids/GeneratedValue says:
There is a difference between using IDENTITY and other id generation
strategies: the identifier will not be accessible until after the
insert has occurred – it is the action of inserting that caused the
identifier generation. Due to the fact that insertion of entities is
most often deferred until the commit time, the identifier would not be
available until after the transaction has been flushed or committed.
In the implementation, UnitOfWorkChangeSet has a collection for new entities, which have no real identity until they are inserted:
// This collection holds the new objects which will have no real identity until inserted.
protected Map<Class, Map<ObjectChangeSet, ObjectChangeSet>> newObjectChangeSets;
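To make the difference concrete, here is a sketch of the SEQUENCE case (the entity and sequence names are made up for the example):
@Entity
public class Employee {
    @Id
    @SequenceGenerator(name="empGen", sequenceName="emp_seq", allocationSize=50)
    @GeneratedValue(strategy=GenerationType.SEQUENCE, generator="empGen")
    private Long id;

    public Long getId() { return id; }
}

Employee emp = new Employee();
em.persist(emp);
// SEQUENCE: the provider can hand out an id from its preallocated block,
// so emp.getId() is already non-null here, before any flush or commit.
// IDENTITY: the id comes from the inserted row, so in EclipseLink
// getId() would still return null here until flush() or commit.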
JPA - Returning an auto generated id after persist() is a related question about EclipseLink.
There are good points at https://forum.hibernate.org/viewtopic.php?p=2384011#p2384011:
I am basically referring to some remarks in Java Persistence with
Hibernate. Hibernate's API guarantees that after a call to save() the
entity has an assigned database identifier. Depending on the id
generator type this means that Hibernate might have to issue an INSERT
statement before flush() or commit() is called. This can cause
problems at rollback time. There is a discussion about this on page
490 of Java Persistence with Hibernate.
In JPA persist() does not return a database identifier. For that
reason one could imagine that an implementation holds back the
generation of the identifier until flush or commit time.
Your approach might work fine for now, but you could run into troubles
when changing the id generator or JPA implementation (switching from
Hibernate to something else).
Maybe this is no issue for you, but I just thought I'd bring it up.

JPA EclipseLink HistoryPolicy example

I am following the example provided here by EclipseLink.
When I run my tests, they fail with:
javax.persistence.RollbackException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.7.1.v20171221-bd47e8f):
org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: org.postgresql.util.PSQLException: ERROR: relation "event_history" does not exist.
The framework isn't creating the table as I would expect. I have the following configuration:
<property name="eclipselink.ddl-generation" value="drop-and-create-tables"/>
From this link, I don't feel it's necessary to add the DescriptorCustomizer class to the persistence.xml file. But I may be wrong.
My question is: do I have to create the table manually, or am I doing something wrong? The examples I found relating to this feature are quite poor.
Some solutions are discussed in the EclipseLink forum.
Clovis Wichoski CLA Friend 2016-01-02 15:29:19 EST
The problem still occurs with 2.6.2.
Here is a SQL template that can be used to ease creating the table by hand (for a PostgreSQL database):
CREATE TABLE <tableName>_audit (
    LIKE <tableName> EXCLUDING ALL,
    audit_date_start timestamp,
    audit_date_end timestamp
);
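For reference, the history policy from the linked EclipseLink example is wired up through a descriptor customizer, roughly like this (a sketch: the class and column names are assumptions, and the history table itself still has to exist, e.g. created with the template above):
import org.eclipse.persistence.config.DescriptorCustomizer;
import org.eclipse.persistence.descriptors.ClassDescriptor;
import org.eclipse.persistence.history.HistoryPolicy;

public class EventHistoryCustomizer implements DescriptorCustomizer {
    @Override
    public void customize(ClassDescriptor descriptor) {
        HistoryPolicy policy = new HistoryPolicy();
        policy.addHistoryTableName("event_history"); // table name from the error message
        policy.addStartFieldName("start_date");      // assumed timestamp columns
        policy.addEndFieldName("end_date");
        descriptor.setHistoryPolicy(policy);
    }
}
It is registered on the entity with @Customizer(EventHistoryCustomizer.class); note that eclipselink.ddl-generation does not create the history table for you.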
Here is another possible solution:
Peter Hansson CLA Friend 2016-03-25 05:30:42 EDT
Yes, I've had the same issue.
I've explored a couple of avenues to get EclipseLink to generate the history tables for me (so that they always reflect their base table). I haven't been able to come up with a method that works, much less one that would be db agnostic.
I believe the only way to solve this is in the core of EclipseLink, for example by adding a new annotation, @HistoryTable.
I'm thinking something along the lines of the following:
Suppose you have base class, Person:
@Entity
public class Person {
    @Id
    private Long personId;
    private String firstName;
    private String lastName;
    ..
}
Then we could define a history entity for that entity as follows:
@Entity
@HistoryTable(base=Person.class, primaryKeyFields="personId,rowStartTime")
public class PersonHist {
    // Add here the extra fields/columns that should exist for the
    // history table.
    private Date rowStartTime;
    private Date rowEndTime;
    ..
}
The @HistoryTable annotation would replicate all fields from the base entity, including most field annotations, except for relationship annotations, which wouldn't be relevant on the history table.
By definition the history table's primary key will always be a composite of columns in the base table, typically as in the example. In the example the PersonHist entity will think it has an @Id annotation on the fields personId and rowStartTime. (Yeah, this area needs more brain work :-))

JPA error "Cannot merge an entity that has been removed" trying to delete and reinsert a row with SpringData

I have an entity (with a primary key that is not generated by a sequence) like this, in a Spring Data JPA/EclipseLink environment:
@Entity
@Table(name="MY_ENTITY")
public class MyEntity implements Serializable {
    @Id
    @Column(insertable=true, updatable=true, nullable=false)
    private String propertyid;
    // other columns
}
and I'm trying to delete a row from the table and reinsert it (with the same primary key).
My approach is to call deleteAll() to clean the table and then save() the new entity:
@Transactional
public void deleteAndSave(MyEntity entity) {
    propertyInfoRepository.deleteAll();
    propertyInfoRepository.flush(); // <- having it or not, nothing changes
    propertyInfoRepository.save(entity);
}
but this gives me this error :
Caused by: java.lang.IllegalArgumentException: Cannot merge an entity that has been removed: com.xxx.MyEntity#1f28c51
at org.eclipse.persistence.internal.sessions.MergeManager.registerObjectForMergeCloneIntoWorkingCopy(MergeManager.java:912)
at org.eclipse.persistence.internal.sessions.MergeManager.mergeChangesOfCloneIntoWorkingCopy(MergeManager.java:494)
at org.eclipse.persistence.internal.sessions.MergeManager.mergeChanges(MergeManager.java:271)
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.mergeCloneWithReferences(UnitOfWorkImpl.java:3495)
at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.mergeCloneWithReferences(RepeatableWriteUnitOfWork.java:378)
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.mergeCloneWithReferences(UnitOfWorkImpl.java:3455)
at org.eclipse.persistence.internal.jpa.EntityManagerImpl.mergeInternal(EntityManagerImpl.java:486)
at org.eclipse.persistence.internal.jpa.EntityManagerImpl.merge(EntityManagerImpl.java:463)
....
What am I doing wrong?
I do not understand why it is trying to merge the entity instead of simply reinsert it after its deletion.
Thanks for your help!
Directly to answer your question:
The problem is that the entity you are trying to save already has a persistent identity, i.e. an ID, which is why your repository tries to merge it rather than persist it.
If you look at this question, the merge seems to be triggered (at least) at the level of the Spring repository, so you might consider overriding the repository's save method and testing whether the problem is still there.
The JPA EntityManager keeps track of the state of each managed entity. In your case, you delete the entity and then try to merge it, which raises the exception. I can't tell whether your approach is correct (deleting and then merging seems weird), since you don't provide the whole picture, but you can try the following.
Assuming em is your EntityManager and entity your entity:
em.remove(entity);  // schedules the DELETE
em.flush();         // push the DELETE now; detaching before flushing would cancel the pending removal
em.detach(entity);  // detach(...) returns void, so there is no copy: the same instance simply becomes detached
entity.setId(null); // avoid duplicate key violations; optional since you are deleting the original row
em.persist(entity); // performs the required INSERT

OpenJPA throws OptimisticLockException

I am trying out OpenJPA and JPA. All I have is one entity class and a corresponding table in the database. One of the attributes of the entity is username, and the corresponding column in the table is varchar2(20). In my main method I tried to persist an instance of the entity with a username longer than 20 characters.
All I am doing is
em.getTransaction().begin();
em.persist(entity); //entity here is the instance with the username longer than 20
em.getTransaction().commit();
I tried this, hoping to get some other kind of exception, but I don't know why I am getting an OptimisticLockException.
I do not have any locking settings; I am using the default values for the locking properties.
Does anybody know what's happening here?
Not sure why this happens... I have noticed that the OptimisticLockException can be thrown in weird cases.
Adding a version field to your table and entity can often make OpenJPA work better with locking.
In your entity bean add this (and also add a column named VERSION to your table):
private Long version;

@Version
@Column(name="VERSION")
public Long getVersion() {
    return version;
}

public void setVersion(Long version) {
    this.version = version;
}
Hope this helps...

Portable JPA Batch / Bulk Insert

I just jumped on a feature written by someone else that seems slightly inefficient, but my knowledge of JPA isn't good enough to find a portable solution that's not Hibernate specific.
In a nutshell, the DAO method, called within a loop to insert each one of the new entities, does an "entityManager.merge(object);".
Isn't there a way defined in the JPA specs to pass a list of entities to the DAO method and do a bulk/batch insert instead of calling merge for every single object?
Plus, since the DAO method is annotated with @Transactional, I'm wondering if every single merge call is happening within its own transaction... which would not help performance.
Any idea?
No, there is no batch insert operation in vanilla JPA.
Yes, each insert will be done within its own transaction. The @Transactional attribute (with no qualifiers) means a propagation level of REQUIRED (create a transaction if it doesn't exist already). Assuming you have:
public class Dao {
    @Transactional
    public void insert(SomeEntity entity) {
        ...
    }
}
you do this:
public class Batch {
    private Dao dao;

    @Transactional
    public void insert(List<SomeEntity> entities) {
        for (SomeEntity entity : entities) {
            dao.insert(entity);
        }
    }

    public void setDao(Dao dao) {
        this.dao = dao;
    }
}
That way the entire group of inserts gets wrapped in a single transaction. If you're talking about a very large number of inserts, you may want to split them into groups of 1000, 10000, or whatever works, as a sufficiently large uncommitted transaction may starve the database of resources and possibly fail due to size alone.
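A related, portable idiom (a sketch, not from the answer above): keep everything in one transaction but flush and clear the persistence context in chunks, so it does not grow without bound:
@Transactional
public void insertAll(List<SomeEntity> entities) {
    final int chunkSize = 1000; // tune to your database/driver
    int count = 0;
    for (SomeEntity entity : entities) {
        em.persist(entity);
        if (++count % chunkSize == 0) {
            em.flush(); // push the pending INSERTs to the database
            em.clear(); // detach the flushed entities to free memory
        }
    }
}
Whether the INSERTs are actually batched at the JDBC level still depends on provider-specific settings (for example hibernate.jdbc.batch_size in Hibernate).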
Note: @Transactional is a Spring annotation. See Transactional Management in the Spring Reference.
What you could do, if you were in a crafty mood, is:
@Entity
public class SomeEntityBatch {

    @Id
    @GeneratedValue
    private int batchID;

    @OneToMany(cascade = {CascadeType.PERSIST, CascadeType.MERGE})
    private List<SomeEntity> entities;

    protected SomeEntityBatch() {
        // no-arg constructor required by JPA
    }

    public SomeEntityBatch(List<SomeEntity> entities) {
        this.entities = entities;
    }
}
List<SomeEntity> entitiesToPersist;
em.persist(new SomeEntityBatch(entitiesToPersist));
// remove the SomeEntityBatch object later
Because of the cascade, that will cause the entities to be inserted in a single operation.
I doubt there is any practical advantage to doing this over simply persisting the individual objects in a loop. It would be interesting to look at the SQL that the JPA implementation emits, and to benchmark.