I am having some trouble with the DDL generation of TopLink Essentials. I am developing a GlassFish 2.1-based application and use JPA for persistence.
I have an object graph where a parent entity of class A owns a set of entities of class B. Entities of class B come in several flavors, which is modelled using inheritance. One such flavor is a composite entity class BC that bundles a set of several other B entities. All B entities in a BC must be owned by the same entity A that owns the BC itself. Note that not all B entities of an entity A have to be part of a composite BC; they can also be standalone.
So basically that maps to the following classes:
@Entity
class A {
    @OneToMany(mappedBy = "owner", cascade = { CascadeType.PERSIST, CascadeType.REMOVE })
    Set<B> bs;
}
@Entity
@Inheritance
abstract class B {
    @Id
    long id;

    @ManyToOne(cascade = { CascadeType.PERSIST, CascadeType.REMOVE })
    A owner;

    @ManyToOne(optional = true)
    BC composite;
}
@Entity
class BC extends B {
    @OneToMany(cascade = { CascadeType.PERSIST, CascadeType.REMOVE }, mappedBy = "composite")
    Set<B> parts;
}
When TopLink generates the DDL for this object hierarchy, it creates all foreign key constraints as expected. However, it does not set cascading rules on the constraints.
When I now try to delete an entire object graph via a reference to the A instance, there are situations where TopLink fails to remove the graph from the database correctly: when TopLink deletes a BC entity before deleting the contained B entities, the foreign key constraint for the "composite" relationship is violated.
This can be corrected by manually adjusting the generated DDL to CASCADE (or SET NULL) on the relevant foreign key constraint, which is fine for a production environment. It fails, however, in a test environment with in-memory (Derby) databases, where DDL generation is managed entirely by TopLink Essentials and thus leads to the constraint violation described above.
Is there any way to influence the DDL generation process so that the required cascading rules are set correctly by TopLink Essentials?
Thanks for your help!
This is not an issue with DDL generation, but with deletion.
TopLink Essentials had some issues with resolving deletes over complex object graphs or cyclic relationships. There are a few workarounds, such as deleting the dependent objects first and calling flush(), then deleting the other objects, or setting the foreign key to null first so the rows get updated. Using a customizer to mark the mapping privateOwned, or playing with the constraint dependency, may also work. You can also drop or defer the constraints.
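For example, the set-the-foreign-key-to-null workaround might look roughly like this (a minimal sketch, assuming the A/B/BC entities from the question, an active transaction, and a hypothetical helper deleteGraph):

void deleteGraph(EntityManager em, A a) {
    // Break the "composite" links first so no foreign key still
    // references a BC row when the graph is deleted.
    for (B b : a.bs) {
        b.composite = null;
    }
    em.flush(); // push the UPDATEs to the database before the DELETEs
    em.remove(a); // cascades the removal to all owned B/BC entities
}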
All of these deletion issues have been fixed in EclipseLink, so upgrading to the latest EclipseLink release should resolve the issue.
EclipseLink also supports a @CascadeOnDelete annotation to add the cascade to the constraint during DDL generation.
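Applied to the mapping from the question, that could look something like the sketch below (assuming EclipseLink; @CascadeOnDelete comes from the org.eclipse.persistence.annotations package):

import org.eclipse.persistence.annotations.CascadeOnDelete;

@Entity
public class BC extends B {
    // Tells EclipseLink DDL generation to add ON DELETE CASCADE to the
    // foreign key behind the "composite" relationship, so the database
    // itself removes the parts when a BC row is deleted.
    @OneToMany(cascade = { CascadeType.PERSIST, CascadeType.REMOVE }, mappedBy = "composite")
    @CascadeOnDelete
    Set<B> parts;
}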
Related
We have this relationship:
public class RuleProviderEntity implements Serializable
{
    ...
    @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
    @OrderColumn(name = RuleEntity.RULE_SEQUENCE)
    private List<RuleEntity> rules;
}
This alone creates a join table with 2 keys and the RULE_SEQUENCE column. So far so good, and it works for SELECTs.
Now there's a JPQL query:
DELETE FROM RuleProviderEntity WHERE ...
But this fails to cascade the delete to the RuleEntity rows. It just deletes the RuleProviderEntity and leaves the RuleEntity rows intact.
Is this supposed to work in JPA 2 and this is a Hibernate bug, or am I missing something in the config?
I know I could add @JoinTable but it would only override the defaults.
Also, orphanRemoval seems not to be necessary here.
Maybe I could do a workaround with @PreRemove, but I am not sure how.
You mean a JPQL bulk delete query is issued, rather than em.remove()?
A bulk delete query will NEVER respect cascade semantics, and it is not intended to (nor will it keep managed objects in memory consistent with the datastore). If you want cascading, then you need to call em.remove(). If in doubt about this, look at the JPA spec.
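A sketch of the difference (assuming an EntityManager em and an entity identifier id; the names are illustrative):

// Bulk delete: translated directly to SQL; cascade settings are ignored.
em.createQuery("DELETE FROM RuleProviderEntity p WHERE p.id = :id")
    .setParameter("id", id)
    .executeUpdate();

// em.remove(): deletes through the EntityManager, so CascadeType.ALL
// on the "rules" collection is honoured.
RuleProviderEntity provider = em.find(RuleProviderEntity.class, id);
if (provider != null) {
    em.remove(provider); // also removes the RuleEntity rows and the join table entries
}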
I have been getting an error while trying to update a list of entities containing both a persisted entity and a detached entity (a newly created one) into my DB using JPA 2.0.
My entity contains internal entities, which are giving an error (mentioned in the title) when merging the data:
class Superclass {
    private A a;
    private String name;
    // getters and setters here...
}

class A {
    private long id;

    @OneToOne(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
    private B b;

    @OneToOne(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
    private C c;

    // getters and setters here...
}

class Dao {
    void daoInsert(Superclass x) {
        em.merge(x);
    }
}
I want any entity sent for persisting to be merged into the DB.
Hibernate does provide a solution for this by adding the following to the persistence.xml.
Is there something I can do in JPA that is the same as in Hibernate?
Please do not suggest finding the entity using em.find() and then updating it manually, because I need both entities: the persisted one and the newly created one.
Also, I'm using a Spring form to persist the entire parent entity into the DB.
I am sorry if I'm not clear enough; this is my first question and I'm really a beginner.
Any help will be most appreciated.
Found an answer to the question myself today. You just need to remove CascadeType.MERGE from the entity that is not allowing you to persist the detached entity.
If you're using CascadeType.ALL, then list all cascade types other than CascadeType.MERGE instead.
Now, removing CascadeType.MERGE from the cascade is one solution, but not the best one, because after removing MERGE from the cascade you will never be able to update the mapped object.
If you want to merge the detached entity with Hibernate, clear the entity manager before you merge the entity:
entityManager.clear();
// perform modifications on the object
entityManager.merge(object);
To solve this problem, make sure the identifiers of your objects are generated automatically by adding @GeneratedValue(strategy = GenerationType.IDENTITY) on the identifier field, such as id.
This way, when the merge is carried out, the identifiers of the elements to merge are incremented automatically relative to the objects already recorded in the database, avoiding primary key conflicts.
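In code, that could look like this minimal sketch of the A entity from the question:

@Entity
class A {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY) // database assigns the key on insert
    private long id;

    @OneToOne(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
    private B b;

    @OneToOne(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
    private C c;
}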
I have the following problem: when I try to delete an entity that has the following relation:
@OneToMany(mappedBy = "pricingScheme", cascade = CascadeType.ALL, orphanRemoval = true)
private Collection<ChargeableElement> chargeableElements;
with a CrudRepository through the provided delete method, it removes the entity along with all its chargeable elements, which is fine. The problem appears when I try to use my custom delete:
@Modifying
@Query("DELETE FROM PricingScheme p WHERE p.listkeyId = :listkeyId")
void deleteByListkeyId(@Param("listkeyId") Integer listkeyId);
it says:
com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException:
Cannot delete or update a parent row: a foreign key constraint fails
(`listkey`.`chargeableelements`, CONSTRAINT `FK_pox231t1sfhadv3vy7ahsc1wt`
FOREIGN KEY (`pricingScheme_id`) REFERENCES `pricingschemes` (`id`))
Why am I not allowed to do this? Do @Query methods not support the cascade property? I know I could call findByListkeyId(…) first and then remove the persistent entity with the standard delete method, but that is inelegant. Is it possible to use a custom @Query method the way I tried to?
This has got nothing to do with Spring Data JPA, but is the way JPA specifies this to work (section 4.10, "Bulk Update and Delete Operations", of the JPA 2.0 specification):
A delete operation only applies to entities of the specified class and its subclasses. It does not cascade to related entities.
If you think about it, JPA cascades are not database-level cascades but ones maintained by the EntityManager. Hence, the EntityManager needs to know about the entity instance to be deleted and its related instances. With a bulk query it effectively can't know about those, as the persistence provider translates the query into SQL and executes it directly. So there's no way the EntityManager can analyze the object graph, as the execution happens entirely in the database.
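The findByListkeyId(…)-then-delete approach mentioned in the question is therefore the way to keep the cascade semantics. A sketch (the Integer ID type of the repository is an assumption):

public interface PricingSchemeRepository extends CrudRepository<PricingScheme, Integer> {
    PricingScheme findByListkeyId(Integer listkeyId);
}

// In a service: deleting the loaded entity goes through the EntityManager,
// so the cascade/orphanRemoval settings on chargeableElements apply.
@Transactional
public void removeByListkeyId(PricingSchemeRepository repository, Integer listkeyId) {
    PricingScheme scheme = repository.findByListkeyId(listkeyId);
    if (scheme != null) {
        repository.delete(scheme);
    }
}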
A related question and answer on this topic can be found here.
I have one table named PLACES with a composite primary key (parent_id, version_id). It holds a tree of objects linked through the keys. One child has exactly one parent, and one parent may have many children.
How can I describe it with JPA entity?
Use a ManyToOne relation from the child to the parent.
This is for OpenJPA. Might even work.
public class Place {
    @EmbeddedId
    PlaceId id;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumns({
        @JoinColumn(name = "PARENT_ID", referencedColumnName = "ID"),  // ID = matching primary key
        @JoinColumn(name = "PARENT_VER", referencedColumnName = "VER") // etc.
    })
    public Place parent;

    @OneToMany(fetch = FetchType.LAZY, mappedBy = "parent")
    public List<Place> childPlaces;
}
The OneToMany relation may be omitted if it's not needed. If I remember correctly, it needs to be managed by you in Java; i.e., children need to be added to it as well when creating child places.
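One way to do that management is a small helper on the parent that keeps both sides in sync (a sketch; addChild is a hypothetical method on the Place class above):

// Only the child's "parent" field is written to the database; the
// childPlaces list is the inverse side and must be maintained in memory.
public void addChild(Place child) {
    child.parent = this;
    this.childPlaces.add(child);
}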
Btw., I would advise against using a version column in a composite key to manually keep old versions of your data (for auditing or similar purposes), as that slows down and complicates all joins and will generally make you miserable at some point in your life. This is in contrast to a version column that is not part of a composite key and is used for optimistic locking.
You might want to look into some kind of built-in support for auditing/logging. OpenJPA has auditing support (OpenJPA Audit), and most databases provide some support as well, either out of the box or through triggers. All of these alternatives are faster and better than using composite keys.
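For comparison, the optimistic-locking kind of version column is a plain @Version field that is not part of the key; a minimal sketch with a surrogate key:

@Entity
public class Place {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id; // simple surrogate key instead of a composite one

    @Version
    private long version; // bumped by the provider on every update; not part of the key
}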
I have a many-to-many relationship where the link table has an additional property. Hence the link table is represented by an entity class too, called Composition. The primary key of Composition is an @Embeddable linking to the corresponding entities, i.e. two @ManyToOne references.
It can happen that a user makes an error when selecting either of the two references, in which case the composite primary key must be updated. However, due to how JPA (Hibernate) works, this will of course always create a new row (an insert) instead of an update, and the old Composition will still exist. The end result is that a new row was added instead of one being updated.
Option 1:
The old Composition could just be deleted before the new one is inserted, but that would require the method handling this to receive both the old and the new version. Plus, since the updated version is actually a new entity, optimistic locking will not work, and hence the last update will always win.
Option 2:
A native query. The query also increments the version column and includes the version in the WHERE clause. Throw an OptimisticLockException if the update count is 0 (concurrent modification or deletion).
Which is the better choice? What is the common approach to this issue?
Why not just change the primary key of Composition to an auto-generated UID? Then users could change the two references to the entities being joined without having to delete and re-create the Composition entity, and optimistic locking would still be maintained.
EDIT: For example:
@Entity
@Table(name = "COMPOSITION")
public class Composition {
    @Id
    @Column(name = "ID")
    private Long id; // auto-generate using preferred method

    @ManyToOne(fetch = FetchType.LAZY, optional = false)
    @JoinColumn( .... as appropriate .... )
    private FirstEntity firstEntity;

    @ManyToOne(fetch = FetchType.LAZY, optional = false)
    @JoinColumn( .... as appropriate .... )
    private SecondEntity secondEntity;

    ....