I'm having an issue with multiple EntityManager.merge() calls in a single transaction. This is using an Oracle database. Neither object exists yet. Entities:
@Entity
public class A {
    @Id
    @Column(name = "ID")
    public Long getID();

    @OneToOne(targetEntity = B.class)
    @JoinColumn(name = "ID")
    public B getB();
}
@Entity
public class B {
    @Id
    @Column(name = "ID")
    public Long getID();
}
The merge code looks something like this:
@Transactional
public void create(A a, B b) {
    a = entityManager.merge(a);
    b.setID(a.getID());
    entityManager.merge(b);
}
Object A's ID is generated from a sequence and is correctly set on B. Looking at the log, merge on A is called before merge on B. There is a @OneToOne mapping from A to B. However, at the end of the method, when the transaction actually commits, it tries to INSERT B before A, which throws an integrity constraint violation ("parent key not found").
If I add entityManager.flush() before the second merge, it works fine.
@Transactional
public void create(A a, B b) {
    a = entityManager.merge(a);
    entityManager.flush();
    b.setID(a.getID());
    entityManager.merge(b);
}
However, flush() is an expensive operation that shouldn't be necessary. All of this should be happening in the same transaction (the default propagation of @Transactional is Propagation.REQUIRED).
Any idea why this doesn't work without flush(), and why even though the merge on A happens before the merge on B, the actual INSERT on COMMIT is reversed?
If entities A and B do not have a relationship (e.g. @OneToOne, @OneToMany, ...), then the persistence provider cannot calculate the correct insertion order. IIRC, EclipseLink does not use the object-creation order when sending SQL statements to the database.
If you want to avoid flush(), simply set your constraints to be deferred.
As Frank mentioned, the code you have shown does not set an A->B relationship, so there is no way for the provider to know that this B instance needs to be inserted after the A. Other relationships may cause it to think that, in general, B needs to be inserted first.
Deferring constraints is possible on some databases and means telling the database to postpone constraint checking until the end of the transaction. If you defer or remove the constraints, you can then check whether the generated SQL is correct, or whether there is some other problem with the code and mappings that is being missed.
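On Oracle (the database mentioned in the question), making the foreign key deferrable might look like the following sketch; the constraint and column names here are illustrative assumptions, not taken from the question:

```sql
-- Recreate the FK as deferrable; it is then checked only at COMMIT time.
ALTER TABLE B DROP CONSTRAINT FK_B_A;
ALTER TABLE B ADD CONSTRAINT FK_B_A
    FOREIGN KEY (ID) REFERENCES A (ID)
    DEFERRABLE INITIALLY DEFERRED;
```

With the check deferred to commit, the INSERT order within the transaction no longer matters for this constraint.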
It appears that the merges are processed alphabetically (at least, that is one possibility) unless there is a bidirectional @OneToOne mapping.
Previously:
public class A {
    @OneToOne(targetEntity = B.class)
    @JoinColumn(name = "ID")
    public B getB();
}
public class B {
    @Id
    @Column(name = "ID")
    public Long getID();
}
Now:
public class A {
    @OneToOne(targetEntity = B.class)
    @JoinColumn(name = "ID")
    public B getB();
}
public class B {
    @Id
    @Column(name = "ID")
    public Long getID();

    @OneToOne(targetEntity = A.class)
    @JoinColumn(name = "ID")
    public A getA();
}
For what I'm doing it doesn't matter that B has a way to get to A, but I still don't understand why the annotations in A alone aren't sufficient.
Related
I am exploring Entity Framework 7 and I would like to know if there is a way to intercept a "SELECT" query. Every time an entity is created, updated or deleted I stamp the entity with the current date and time.
SELECT *
FROM MyTable
WHERE DeletedOn IS NULL
I would like all my SELECT queries to exclude deleted data (see WHERE clause above). Is there a way to do that using Entity Framework 7?
I am not sure what your underlying infrastructure looks like, or whether you have any abstraction between your application and Entity Framework. Assuming you are working with DbSet<T>, you could write an extension method to exclude data that has been deleted.
public class BaseEntity
{
public DateTime? DeletedOn { get; set; }
}
public static class EfExtensions
{
public static IQueryable<T> ExcludeDeleted<T>(this DbSet<T> dbSet)
where T : BaseEntity
{
return dbSet.Where(e => e.DeletedOn == null);
}
}
// Usage
context.Set<BaseEntity>().ExcludeDeleted().Where(...additional where clause...);
I have somewhat the same issue: I'm trying to intercept read queries (select, where, etc.) in order to inspect the returned result set. Unfortunately, EF Core has no equivalent of overriding SaveChanges for read queries.
You can, however, still hook into CommandExecuting and CommandExecuted in Entity Framework Core by using
var listener = _context.GetService<DiagnosticSource>();
(listener as DiagnosticListener).SubscribeWithAdapter(new CommandListener());
and creating a class with the following two methods:
public class CommandListener
{
[DiagnosticName("Microsoft.EntityFrameworkCore.Database.Command.CommandExecuting")]
public void OnCommandExecuting(DbCommand command, DbCommandMethod executeMethod, Guid commandId, Guid connectionId, bool async, DateTimeOffset startTime)
{
//do stuff.
}
[DiagnosticName("Microsoft.EntityFrameworkCore.Database.Command.CommandExecuted")]
public void OnCommandExecuted(object result, bool async)
{
//do stuff.
}
}
However, these are high-level interceptors, so you won't be able to view the returned result set (making them useless in your case).
I recommend two things. First, go and cast a vote for the implementation of "Hooks to intercept and modify queries on the fly at high and low level" at: https://data.uservoice.com/forums/72025-entity-framework-core-feature-suggestions/suggestions/1051569-hooks-to-intercept-and-modify-queries-on-the-fly-a
Second, you can use PostSharp (a commercial product) with interceptors such as LocationInterceptionAspect on properties or OnMethodBoundaryAspect on methods.
I am having an issue where EclipseLink (2.5) throws an OptimisticLockException even though the only change I'm making is adding or removing an item from a child list.
Entities:
@Entity
@Table(name="PLAN_ORG_RELATIONSHIP")
@Customizer(GridCacheCustomizer.class)
@AdditionalCriteria("CURRENT_TIMESTAMP BETWEEN this.startDate AND this.endDate")
@NamedQuery(name="PlanOrganizationRelationship.findAll", query="SELECT p FROM PlanOrganizationRelationship p")
@Portable
public class PlanOrganizationRelationship extends PrismObject implements Serializable {

    @OneToMany(mappedBy="planOrganizationRelationship", cascade=CascadeType.PERSIST, orphanRemoval=true)
    @PortableProperty(10)
    private List<PlanOrganizationAction> planOrganizationActions;

    public PlanOrganizationAction addPlanOrganizationAction(PlanOrganizationAction planOrganizationAction) {
        getPlanOrganizationActions().add(planOrganizationAction);
        planOrganizationAction.setPlanOrgRelationship(this);
        return planOrganizationAction;
    }

    public PlanOrganizationAction removePlanOrganizationAction(PlanOrganizationAction planOrganizationAction) {
        getPlanOrganizationActions().remove(planOrganizationAction);
        planOrganizationAction.setPlanOrgRelationship(null);
        return planOrganizationAction;
    }

    @Column(name="LST_UPDT_DT")
    @Version
    @PortableProperty(5)
    private Timestamp lastUpdatedDate;
}
Other side of One To Many:
@Entity
@Table(name="PLAN_ORGANIZATION_ACTION")
@Customizer(GridCacheCustomizer.class)
@AdditionalCriteria("CURRENT_TIMESTAMP BETWEEN this.startDate AND this.endDate")
@NamedQuery(name="PlanOrganizationAction.findAll", query="SELECT p FROM PlanOrganizationAction p")
@Portable
public class PlanOrganizationAction extends PrismObject implements Serializable {

    @ManyToOne
    @JoinColumn(name="PLN_ORG_RLTNP_SEQ_ID")
    @PortableProperty(7)
    private PlanOrganizationRelationship planOrganizationRelationship;
}
I have three paths: adding a new Relationship with Actions (both entities new), adding an Action, or removing an Action.
When adding both, I persist the parent and the children are persisted as well, which is the expected behavior.
When I try to add or remove, I do something like:
PlanOrganizationRelationship por = findById(..); // we use spring-data-jpa
por.addPlanOrganizationAction(action);
repo.save(por); // throws OptimisticLockException - even though @Version is the same
Any idea what is causing this issue?
Check cascade=CascadeType.PERSIST: it only applies to the persist operation (saving a newly created entity). Removing or adding a child entity means updating the parent entity. That's why you also need CascadeType.MERGE for update operations.
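As a sketch, the mapping from the question adjusted to cascade both operations (entity and property names taken from the question; treat this as illustrative, not a verified fix):

```java
@OneToMany(mappedBy = "planOrganizationRelationship",
           cascade = {CascadeType.PERSIST, CascadeType.MERGE},
           orphanRemoval = true)
@PortableProperty(10)
private List<PlanOrganizationAction> planOrganizationActions;
```

With MERGE in the cascade set, merging the parent also merges the modified children instead of treating them as untouched.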
Here are my entities:
@Entity
public class Actor {
    private List<Film> films;

    @ManyToMany
    @JoinTable(name = "film_actor",
        joinColumns = @JoinColumn(name = "actor_id"),
        inverseJoinColumns = @JoinColumn(name = "film_id"))
    public List<Film> getFilms() {
        return films;
    }
    //... more in here
Moving on:
@Entity
public class Film {
    private List<Actor> actors;

    @ManyToMany
    @JoinTable(name = "film_actor",
        joinColumns = @JoinColumn(name = "film_id"),
        inverseJoinColumns = @JoinColumn(name = "actor_id"))
    public List<Actor> getActors() {
        return actors;
    }
    //... more in here
And the join table:
@javax.persistence.IdClass(com.tugay.sakkillaa.model.FilmActorPK.class)
@javax.persistence.Table(name = "film_actor", schema = "", catalog = "sakila")
@Entity
public class FilmActor {
    private short actorId;
    private short filmId;
    private Timestamp lastUpdate;
So my problem is:
When I remove a Film from an Actor and merge that Actor, then check the database, I see that everything is fine. Say the actor id is 5 and the film id is 3: these ids are removed from the film_actor table.
The problem is that in my JSF project, although my beans are request scoped and are supposed to fetch fresh data, for the Film part they do not: they still return the Actor with id = 5 for the Film with id = 3. Here is a sample:
@RequestScoped
@Named
public class FilmTableBackingBean {

    @Inject
    FilmDao filmDao;

    List<Film> allFilms;

    public List<Film> getAllFilms() {
        if (allFilms == null || allFilms.isEmpty()) {
            allFilms = filmDao.getAll();
        }
        return allFilms;
    }
}
So, as you can see, this is a request scoped bean, and every time I access it, allFilms is initially null, so new data is fetched from the database. However, this fetched data does not match the data in the database: it still includes the removed Actor.
So I am guessing this is something like a cache issue.
Any help?
Edit: only after I restart the server is the information fetched by JPA correct.
Edit: This does not help either:
@Entity
public class Film {
    private short filmId;

    @ManyToMany(mappedBy = "films", fetch = FetchType.EAGER)
    public List<Actor> getActors() {
        return actors;
    }
The mapping is wrong.
The join table is mapped twice: once as the join table of the many-to-many association, and once as an entity. It's one or the other, but not both.
And the many-to-many is wrong as well. One side MUST be the inverse side and use the mappedBy attribute (and thus not define a join table, which is already defined at the other, owning side of the association). See example 7.24, and its preceding text, in the Hibernate documentation (which also applies to other JPA implementations).
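As a sketch, a corrected mapping might look like this, with the FilmActor entity removed entirely (choosing Actor as the owning side is an assumption here; it could equally be reversed):

```java
@Entity
public class Actor {
    // Owning side: defines the join table once.
    @ManyToMany
    @JoinTable(name = "film_actor",
        joinColumns = @JoinColumn(name = "actor_id"),
        inverseJoinColumns = @JoinColumn(name = "film_id"))
    private List<Film> films;
}

@Entity
public class Film {
    // Inverse side: no @JoinTable here; the association is mapped by Actor.films.
    @ManyToMany(mappedBy = "films")
    private List<Actor> actors;
}
```

Updates to the association must then go through the owning side (Actor.films) to be written to the join table.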
Side note: why use a short for an ID? A Long would be a wiser choice.
JB Nizet is correct, but you also need to maintain both sides of relationships, as there is caching in JPA. The EntityManager itself caches managed entities, so make sure your JSF project is closing and re-obtaining EntityManagers, clearing them if they are long-lived, or refreshing entities that might be stale. Providers like EclipseLink also have a second-level cache: http://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching
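Two standard JPA calls that can help here, as a sketch (assumes an injected EntityManager em; whether they fit depends on the bean's lifecycle):

```java
// Reload a possibly stale managed entity directly from the database,
// overwriting its state in the persistence context.
em.refresh(film);

// Evict everything from the provider's shared (second-level) cache.
em.getEntityManagerFactory().getCache().evictAll();
```

refresh() addresses staleness in the first-level cache of one EntityManager; evictAll() addresses the shared cache that survives across EntityManagers.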
Sorry if this is a duplicate.
Is it possible or recommended for the business layer to use objects instead of ids?
SELECT c
FROM Child AS c
WHERE c.parent = :parent
public List<Child> list(final Parent parent) {
    // must parent be managed?
    // how can I know that?
    // has parent even been persisted?
    return em.createNamedQuery(...)
             .setParameter("parent", parent)
             .getResultList();
}
This is the approach I currently use instead:
SELECT c
FROM Child AS c
WHERE c.parent.id = :parent_id
public List<Child> list(final Parent parent) {
    // wait! parent.id could be null!
    // it may not have been persisted yet!
    return list(parent.getId());
}
public List<Child> list(final long parentId) {
    return em.createNamedQuery(...)
             .setParameter("parent_id", parentId)
             .getResultList();
}
UPDATED QUESTION --------------------------------------
Can JAX-RS or JAX-WS classes, each injected with @EJB references, be said to participate in the same JTA transaction?
Here is the original problem that I have always been curious about.
Let's say we have two EJBs.
@Stateless
class ParentBean {
    public Parent find(...) {
    }
}
@Stateless
class ChildBean {
    public List<Child> list(final Parent parent) {
    }
    public List<Child> list(final long parentId) {
    }
}
What is the proper way to handle this in any EJB client?
@Stateless // <-- this is mandatory for being injected with @EJB, right?
@Path("/parents/{parent_id: \\d+}/children")
class ChildsResource {

    @GET
    public Response list(@PathParam("parent_id") final long parentId) {
        // do I just have to stick to this approach?
        final List<Child> children1 = childBean.list(parentId);

        // is this parent managed?
        // is it OK to pass it to another EJB?
        final Parent parent = parentBean.find(parentId);

        // is this gonna work?
        final List<Child> children2 = childBean.list(parent);
        ...
    }

    @EJB
    private ParentBean parentBean;

    @EJB
    private ChildBean childBean;
}
The following is presented as an answer only to the question "Is it possible or recommended for the business layer to use objects instead of ids?", because I unfortunately do not fully understand the second question about JAX-RS/JAX-WS classes and JTA.
It is possible, and in most cases also recommended. The whole purpose of ORM is that we can operate on objects and their relationships rather than on their representation in the database.
The id of an entity (especially a surrogate id) is often a concept that is only interesting close to the storage itself. When only the persistence provider needs to access the id, it often makes sense to make the id accessors protected. That way, less noise is exposed to the users of the entity.
There are also valid exceptions, as usual. For example, it may turn out that moving a whole entity over the wire is too resource-consuming and that a list of ids is preferable to a list of entities. Such a design decision should not be made before the problem actually exists.
If the parent has not been persisted yet, then the query won't work, and executing it doesn't make much sense. It's your responsibility to avoid executing it if the parent hasn't been persisted. But I would not make that a responsibility of the find method itself. Just make it clear in the method's documentation that the parent passed as an argument must have an ID, or at least be persistent. There is no need to repeat the same verification the entity manager does.
If it has been persisted but the flush hasn't happened yet, the entity manager must flush before executing the query, precisely so that the query finds the children of the new parent.
At least with Hibernate, you may execute the query with a detached parent. If the ID is there, the query will use it.
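A minimal sketch of such a guard, documenting the contract in code (the named query and accessor names are assumptions, not from the original code):

```java
public List<Child> list(final Parent parent) {
    // Fail fast if the caller passes a transient (never-persisted) parent;
    // the query below only makes sense for a parent that has an ID.
    if (parent.getId() == null) {
        throw new IllegalArgumentException("parent must be persistent (have an ID)");
    }
    return em.createNamedQuery("Child.findByParent", Child.class)
             .setParameter("parent", parent)
             .getResultList();
}
```

This keeps the id-vs-object decision out of the caller's hands while still rejecting unusable arguments early.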
I have a very basic relationship between two objects:
@Entity
public class A {
    @ManyToOne(optional = false)
    @JoinColumn(name = "B_ID", insertable = false, updatable = true)
    private StatusOfA sa;
    // getter + setter
}
@Entity
public class StatusOfA {
    @Id
    private long id;
    @Column
    private String status;
    // getter + setter
}
There is only a limited set of StatusOfA rows in the DB.
I perform an update on A in a transaction:
@TransactionAttribute
public void updateStatusOfA(long id) {
    A a = aDao.getAById(123);
    if (a != null) {
        a.getStatusOfA().getId(); // just to ensure the object is loaded from the DB
        StatusOfA anotherStatusOfA = statusOfADao.getStatusOfAById(456);
        a.setStatusOfA(anotherStatusOfA);
        aDao.saveOrPersistA(a);
    }
}
The saveOrPersistA method merges 'a' here.
I expect EclipseLink to perform only an UPDATE on 'a' to change its StatusOfA, but it executes a new INSERT into the StatusOfA table. Oracle then complains about a unique constraint violation (the StatusOfA row that EclipseLink tries to persist already exists...).
There is no cascading here, so the problem is not there, and Hibernate (under JPA 2) behaves as expected.
In the same project I have already built more complex relationships, and I'm really surprised that this one is not working.
Thanks in advance for your help.
What does statusOfADao.getStatusOfAById() do?
Does it use the same persistence context (same transaction and EntityManager)? You need to use the same EntityManager, as you should not mix objects from different persistence contexts.
What does saveOrPersistA do exactly? The merge() call should resolve everything correctly, but if the objects are really messed up, it may be difficult to merge everything as you expect.
Are you merging just A, or its status as well? Try also setting the status to the merged result of the status.
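As a sketch of that last suggestion (variable and method names assumed from the question's code, not verified):

```java
// Merge the status first so that 'a' references a managed StatusOfA,
// then merge 'a' itself; nothing is left pointing at a detached instance.
StatusOfA managedStatus = entityManager.merge(anotherStatusOfA);
a.setStatusOfA(managedStatus);
entityManager.merge(a);
```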
Assumptions: @Id @GeneratedValue(strategy = GenerationType.IDENTITY)
Let's consider the following implementations of statusOfADao.getStatusOfAById(456):
1. returns a "proxy" object with just the id set:
return new StatusOfA(456);
2. returns an entity loaded in a new transaction:
EntityManager em = emf.createEntityManager();
em.getTransaction().begin();
StatusOfA o = em.find(StatusOfA.class, 456); // or em.getReference(StatusOfA.class, 456);
em.getTransaction().commit();
return o;
3. returns a detached entity:
StatusOfA o = em.find(StatusOfA.class, 456); // or em.getReference(StatusOfA.class, 456);
em.detach(o);
return o;
4. returns a serialized-then-deserialized entity:
return ObjectCloner.deepCopy(em.find(StatusOfA.class, 456));
5. returns an attached (managed) entity:
return em.find(StatusOfA.class, 456);
Conclusions:
EclipseLink handles only implementation no. 5 as "expected".
Hibernate handles all five implementations as "expected".
No analysis is offered here of which behavior is JPA-spec compliant.