I am working with JPA 2.0 and I have a field in a database table that has a default value. I marked that field in my entity definition with insertable = false so that inserts keep the default value. The insert is done correctly, but when I re-query the object, that field is null in the entity even though it was inserted correctly in the database.
This is my code:
@Entity
@Table(name="SOME_TABLE")
public class SomeTable implements Serializable {
    private static final long serialVersionUID = 1L;

    @EmbeddedId
    private SomeTablePK id;

    @Column(name="X1")
    private String x1;

    @Column(name="X2", insertable=false) // the field with the database default
    private Date x2;

    // ... more fields ...
    // ... setters and getters ...
}
Is there any way to force the entity manager to refresh the value of the field that I marked as insertable = false? Or what can I do to fix it?
Thank you very much.
PS. It is important to mention that I placed the following line in my persistence.xml to disable the cache.
<properties>
<property name="javax.persistence.sharedCache.mode" value="NONE"/>
</properties>
You will need to manually invoke refresh after the flush operation.
The Spec (3.2.4 Synchronization to the Database) says that:
The state of persistent entities is synchronized to the database at transaction commit. This synchronization involves writing to the database any updates to persistent entities and their relationships as specified above.
An update to the state of an entity includes both the assignment of a new value to a persistent property or field of the entity as well as the modification of a mutable value of a persistent property or field[28].
Pay particular attention to the following:
Synchronization to the database does not involve a refresh of any managed entities unless the refresh operation is explicitly invoked on those entities or cascaded to them as a result of the specification of the cascade=REFRESH or cascade=ALL annotation element value.
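A minimal sketch of what that looks like in practice, assuming an application-managed EntityManager em and that someTable is the entity being inserted:

em.getTransaction().begin();
em.persist(someTable);   // X2 is omitted from the INSERT because of insertable=false
em.flush();              // push the INSERT to the database
em.refresh(someTable);   // re-read the row so the database default for X2 is loaded into the entity
em.getTransaction().commit();

Date x2 = someTable.getX2(); // now holds the value produced by the database default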
I have been getting an error while trying to update a list of entities containing both persisted and detached (newly created) entities into my DB using JPA 2.0.
My entity contains internal entities which produce the error mentioned in the title when merging the data:
class Superclass {
    private A a;
    private String name;
    // getters and setters here...
}

class A {
    private long id;

    @OneToOne(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
    private B b;

    @OneToOne(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
    private C c;

    // getters and setters here...
}

class Dao {
    void daoInsert(Superclass x) {
        em.merge(x);
    }
}
I want any entity sent for persisting to be merged into the db.
Hibernate does provide a solution for this by adding the following to the persistence.xml.
Is there something I can do in JPA that matches what Hibernate offers?
Please do not suggest finding the entity using em.find() and then updating it manually, because I need both entities: the persisted entity and the newly created one.
Also, I'm using a Spring form to persist the entire parent entity into the DB.
I am sorry if I'm not clear enough, this is my first question and I'm really a beginner.
Any help will be most appreciated.
Found an answer to the question myself today. You just need to
remove CascadeType.MERGE from the relationship that is not allowing you to persist the detached entity.
If you're using CascadeType.ALL, then list all cascade types other than CascadeType.MERGE, as in the sketch below.
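For illustration, a minimal sketch of that change on the mapping from the question (assuming the @OneToOne on A.b is the one causing the error):

@OneToOne(cascade = {CascadeType.PERSIST, CascadeType.REMOVE,
                     CascadeType.REFRESH, CascadeType.DETACH}, // everything except MERGE
          fetch = FetchType.EAGER)
private B b;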
Now, removing CascadeType.MERGE from the cascade is one solution, but not the best one, because after removing MERGE from the cascade you won't be able to update the mapped object through that relationship any more.
If you want to merge the detached entity with Hibernate, then clear the entity manager before you merge the entity:
entityManager.clear();
//perform modification on object
entityManager.merge(object);
To solve this problem, make sure that the identifiers of your objects are automatically generated by adding @GeneratedValue(strategy = GenerationType.IDENTITY) on the identifier field, such as id.
That way, when the merge is carried out, the identifiers of the elements to merge will be automatically incremented relative to the objects already recorded in the database, avoiding primary key conflicts.
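A minimal sketch of that suggestion applied to the A entity from the question (assuming the id field shown there is its primary key):

@Entity
public class A {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY) // database assigns a fresh id on insert
    private long id;

    @OneToOne(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
    private B b;

    @OneToOne(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
    private C c;

    // getters and setters here...
}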
I have an entity (with a primary key that is not generated by a sequence) like this in a Spring Data JPA/EclipseLink environment:
@Entity
@Table(name="MY_ENTITY")
public class MyEntity implements Serializable {

    @Id
    @Column(insertable=true, updatable=true, nullable=false)
    private String propertyid;

    // other columns
}
and I'm trying to delete a row from the table and reinsert it (with the same primary key).
My approach is to call deleteAll() to clean the table and then save() the new Entity :
@Transactional
public void deleteAndSave(MyEntity entity) {
    propertyInfoRepository.deleteAll();
    propertyInfoRepository.flush(); // <- having it or not, nothing changes
    propertyInfoRepository.save(entity);
}
but this gives me this error :
Caused by: java.lang.IllegalArgumentException: Cannot merge an entity that has been removed: com.xxx.MyEntity#1f28c51
at org.eclipse.persistence.internal.sessions.MergeManager.registerObjectForMergeCloneIntoWorkingCopy(MergeManager.java:912)
at org.eclipse.persistence.internal.sessions.MergeManager.mergeChangesOfCloneIntoWorkingCopy(MergeManager.java:494)
at org.eclipse.persistence.internal.sessions.MergeManager.mergeChanges(MergeManager.java:271)
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.mergeCloneWithReferences(UnitOfWorkImpl.java:3495)
at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.mergeCloneWithReferences(RepeatableWriteUnitOfWork.java:378)
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.mergeCloneWithReferences(UnitOfWorkImpl.java:3455)
at org.eclipse.persistence.internal.jpa.EntityManagerImpl.mergeInternal(EntityManagerImpl.java:486)
at org.eclipse.persistence.internal.jpa.EntityManagerImpl.merge(EntityManagerImpl.java:463)
....
What am I doing wrong?
I do not understand why it is trying to merge the entity instead of simply reinsert it after its deletion.
Thanks for your help!
To answer your question directly:
The problem is that the entity you are trying to save already has a persistent identity, i.e. an ID, which is why your repository tries to merge rather than persist it.
If you look at this question, it seems that this is triggered (at least) at the level of the Spring repository, so you might consider overriding the save method of the repository and testing whether the problem is still there; a sketch of that idea follows.
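A hedged sketch of bypassing the merge-or-persist decision with a custom Spring Data repository fragment that always persists (the interface and class names here are hypothetical):

public interface MyEntityRepositoryCustom {
    MyEntity insert(MyEntity entity);
}

public class MyEntityRepositoryImpl implements MyEntityRepositoryCustom {

    @PersistenceContext
    private EntityManager em;

    @Override
    public MyEntity insert(MyEntity entity) {
        em.persist(entity); // always INSERT, never merge
        return entity;
    }
}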
The JPA EntityManager keeps track of the state of each managed entity. In your case, you delete the entity and then try to merge it, which raises the exception. I can't tell whether your approach is correct (it seems odd to delete and then merge) since you don't provide the whole picture, but you can try the following:
Assuming em is your EntityManager and entity your entity:
em.remove(entity);   // schedules the DELETE
em.flush();          // pushes the DELETE to the database before re-inserting
em.detach(entity);   // detach returns void; entity is now detached and the EM will not operate on it unless told to
entity.setId(null);  // avoid duplicate key violations; optional since you are deleting the original row
em.persist(entity);  // performs the required INSERT
I have a many-to-many relationship where the link table has an additional property. Hence the link table is represented by an entity class too, called Composition. The primary key of Composition is an @Embeddable linking to the corresponding entities, i.e. two @ManyToOne references.
It can happen that a user makes an error when selecting either of the two references, and hence the composite primary key must be updated. However, due to how JPA (Hibernate) works, this will of course always create a new row (insert) instead of an update, and the old Composition will still exist. The end result is that a new row was added instead of an existing one being updated.
Option 1:
The old Composition could just be deleted before the new one is inserted, but that would require the method handling this to receive both the old and the new version. Plus, since the updated version is actually a new entity, optimistic locking will not work and hence the last update will always win.
Option 2:
Native query. The query also increments the version column and includes the version in the WHERE clause. Throw an OptimisticLockException if the update count is 0 (concurrent modification or deletion).
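A minimal sketch of what Option 2 could look like, assuming a COMPOSITION table with FIRST_ID, SECOND_ID and VERSION columns (the column names and bound variables are hypothetical):

int updated = em.createNativeQuery(
        "UPDATE COMPOSITION SET FIRST_ID = ?1, SECOND_ID = ?2, VERSION = VERSION + 1 "
      + "WHERE FIRST_ID = ?3 AND SECOND_ID = ?4 AND VERSION = ?5")
    .setParameter(1, newFirstId)
    .setParameter(2, newSecondId)
    .setParameter(3, oldFirstId)
    .setParameter(4, oldSecondId)
    .setParameter(5, expectedVersion)
    .executeUpdate();

if (updated == 0) {
    // the row was concurrently modified or deleted
    throw new OptimisticLockException("Composition was changed or removed by another transaction");
}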
What is the better choice? What is the "common approach" to this issue?
Why not just change the primary key of Composition to be a UID which is auto-generated? Then the users could change the two references to the entities being joined without having to delete/re-create the Composition entity. Optimistic locking would then be maintained.
EDIT: For example:
@Entity
@Table(name = "COMPOSITION")
public class Composition {

    @Id
    @Column(name = "ID")
    private Long id; // Auto-generate using preferred method

    @ManyToOne(fetch = FetchType.LAZY, optional = false)
    @JoinColumn( .... as appropriate .... )
    private FirstEntity firstEntity;

    @ManyToOne(fetch = FetchType.LAZY, optional = false)
    @JoinColumn( .... as appropriate .... )
    private SecondEntity secondEntity;

    ....
To an already existing table structure with inheritance I am adding a new column, type (I cut some of the code):
@Entity
@Inheritance(strategy = InheritanceType.TABLE_PER_CLASS)
public class Account {
    ......

    @Column // already existing column
    private String name; // get/set also applied

    @Column(length=20) // newly added column
    @Enumerated(EnumType.STRING) // get/set also applied
    private AccountType type;

    ..........
}

@Entity
public class User extends Account {
    ................ // some other already existing fields
}
In my persistence.xml file I am using the following strategy for DDL generation:
<property name="eclipselink.ddl-generation" value="drop-and-create-tables"/>
When DDL generation runs, the newly added column type is successfully created in the Account table, BUT in the User table there is no such column at all (the strategy is TABLE_PER_CLASS).
I fixed that by dropping the database and creating it again; after that the current DDL generation was applied and type was also added as a column in User. Has anyone run into this kind of issue? I fixed it by dropping and recreating the DB, but I am not sure that should be the strategy in such cases in the future, especially for a production DB.
Thanks,
Simeon Angelov
DDL generation is for development, not production. The problem you are seeing is that when the table already exists, it cannot be recreated with the new field. Dropping and creating, or the "create-or-extend-tables" feature, will work if you are adding columns to the tables, as described here: http://wiki.eclipse.org/EclipseLink/DesignDocs/368365
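For reference, switching to the extend behaviour is a one-line change in persistence.xml (a sketch; this value assumes EclipseLink 2.4 or later):
<properties>
<!-- adds missing columns to existing tables instead of requiring a drop -->
<property name="eclipselink.ddl-generation" value="create-or-extend-tables"/>
</properties>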
Using JPA with EclipseLink, I would like to track the timestamp of the last update made to an entity instance. Assuming that this would be easy to combine with optimistic locking, I defined the entity as follows:
import javax.persistence.Version;
[...]
@Entity
public class Foo {
    @Id int id;
    @Version Timestamp lastChange;
    [...]
}
Updating a changed object is done with the following code:
EntityManager em = Persistence.createEntityManagerFactory("myConfiguration").createEntityManager();
em.getTransaction().begin();
em.merge(foo);
em.getTransaction().commit();
I would expect foo.lastChange to be set to the new timestamp each time an update to a changed instance is committed. However, while the LASTCHANGE field is updated in the database, it is not updated in the object itself. A second attempt to save the same object thus fails with an OptimisticLockException. I know that EclipseLink allows choosing between storing the version field in the cache or directly in the object, and I made sure that the configuration is set to IN_OBJECT.
The obvious question is: How to get the foo.lastChange field set to the updated timestamp value when saving to the database? Would
foo = em.find(Foo.class, foo.id);
be the only option? I suspect there must be a simpler way to do this.
merge does not modify its argument. It copies the state from its argument to the attached version of its argument, and returns the attached version. You should thus use
foo = em.merge(foo);
// ...
return foo;
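Applied to the update code from the question, that looks roughly like this (a sketch; the managed copy returned by merge is the instance whose @Version timestamp is updated at commit):

em.getTransaction().begin();
foo = em.merge(foo);          // keep working with the managed copy, not the original detached instance
em.getTransaction().commit(); // LASTCHANGE is updated both in the database and on the managed copy
// subsequent updates using foo now carry the current version and no longer fail with OptimisticLockException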