JPA bidirectional relations: structure in Database

I'm starting to use JPA in NetBeans with a GlassFish server and a Derby database, and I have some doubts about its behavior. I did the following experiment: I defined an entity "User" with the property
@OneToMany(cascade = ALL, mappedBy = "user")
public List<Thing> getThings() {
    return things;
}
and the entity "Thing" with the property
@ManyToOne
public Cook4User getUser() {
    return user;
}
Then I persisted one user and added one "Thing" to it. Everything looks all right: I can see the two tables "User" and "Thing" with one entry each, and the second table has a foreign key pointing to the user id.
Then I removed the element from the "Thing" table, ran a select statement to recover the user, called the getThings() method on it and... I still found the element that I had removed from the Thing table! How is that possible? Where is it stored? I can't see it anywhere in the DB! Thanks for clearing this up for me.
EDIT: I tried to isolate the lines of code that produce the issue.
@PersistenceContext
private EntityManager em;

User user = new User();
em.persist(user);
Thing thing = new Thing();
em.persist(thing);
user.getThings().add(thing);
em.remove(thing);
user = em.find(User.class, userid);
logger.log(Level.INFO, "user still contains {0} things", user.getThings().size());
// thing is still there!

In the form of a JUnit test with SpringRunner and Hibernate as the JPA implementation:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:spring-context.xml")
public class TestThings {

    private final Logger log = LoggerFactory.getLogger(TestThings.class);

    @PersistenceContext
    private EntityManager em;

    @Test
    @Transactional
    public void does_not_remove_thing() {
        Cook4User user = new Cook4User();
        em.persist(user);
        Thing thing = new Thing();
        em.persist(thing);
        user.getThings().add(thing);

        user = em.find(Cook4User.class, user.getId());
        user.getThings().forEach((t) -> log.info("1 >> Users thing: {}", t.getId()));

        em.remove(thing);
        em.flush();

        user = em.find(Cook4User.class, user.getId());
        user.getThings().forEach((t) -> log.info("2 >> Users thing: {}", t.getId()));
        assertThat(user.getThings()).isEmpty();
    }

    @Test
    @Transactional
    public void removes_thing_when_removed_from_owning_side() {
        Cook4User user = new Cook4User();
        em.persist(user);
        Thing thing = new Thing();
        em.persist(thing);
        user.getThings().add(thing);

        user = em.find(Cook4User.class, user.getId());
        user.getThings().forEach((t) -> log.info("1 >> Users thing: {}", t.getId()));

        user.getThings().remove(thing);

        user = em.find(Cook4User.class, user.getId());
        user.getThings().forEach((t) -> log.info("2 >> Users thing: {}", t.getId()));
        assertThat(user.getThings()).isEmpty();
    }
}
The first test, does_not_remove_thing, follows your question and fails as you have experienced. This is the output of that test with hibernate.show_sql logging set to true:
Hibernate:
insert
into
Cook4User
(id)
values
(null)
Hibernate:
insert
into
Thing
(id, user_id)
values
(null, ?)
[main] INFO TestThings - 1 >> Users thing: 1
[main] INFO TestThings - 2 >> Users thing: 1
java.lang.AssertionError: expecting empty but was:<[x.Thing#276]>
The second test removes_thing_when_removed_from_owning_side passes with output:
Hibernate:
insert
into
Cook4User
(id)
values
(null)
Hibernate:
insert
into
Thing
(id, user_id)
values
(null, ?)
[main] INFO TestThings - 1 >> Users thing: 1
So it looks like removing your Thing from the owning side of the relationship (he he) is the way to go.
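One way to make that reliable is to keep both sides of the bidirectional association in sync in one place. A minimal sketch, assuming helper methods on the user entity (the addThing/removeThing names and the orphanRemoval flag are my additions, not from the question):
@Entity
public class Cook4User {

    @Id
    @GeneratedValue
    private Long id;

    @OneToMany(cascade = CascadeType.ALL, mappedBy = "user", orphanRemoval = true)
    private List<Thing> things = new ArrayList<>();

    // Keep the in-memory collection and the owning @ManyToOne side consistent.
    public void addThing(Thing thing) {
        things.add(thing);
        thing.setUser(this);
    }

    public void removeThing(Thing thing) {
        things.remove(thing);
        thing.setUser(null);
    }
}
With orphanRemoval = true, calling removeThing(thing) inside the transaction is enough for the provider to issue the DELETE; without it, you would still call em.remove(thing) in addition to removing it from the collection.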
Although, I must be honest, I'm not sure why exactly that works and your way does not. I would understand if you removed your Thing from a detached entity, but that was not the case. Also, I was expecting a delete query for Thing somewhere after calling em.remove(thing), but there was nothing (I added em.flush() to try to force it).
Maybe someone else can shed some light on the finer mechanics of what's going on here?

Related

Spring Boot: collection of owning entities ends up being null in child of @ManyToMany relationship

I have a Spring Boot application and two entities, User and Role, that look like the following:
@Entity
public class User {

    // other fields

    @ManyToMany
    @JoinTable(
        name = "user_role",
        joinColumns = @JoinColumn(name = "user_id"),
        inverseJoinColumns = @JoinColumn(name = "role_id")
    )
    @JsonDeserialize(using = RolesJsonDeserializer.class)
    @NotEmpty
    private Set<Role> roles = new HashSet<>();

    // getters and setters
}
@Entity
public class Role {

    // other fields

    @ManyToMany(mappedBy = "roles")
    private List<User> users;

    @PreRemove // whenever a role is removed, remove the association with each user
    private void removeRolesFromUsers() {
        for (User u : users) {
            u.getRoles().remove(this);
        }
    }

    // getters and setters
}
And then I have a Spring JPA integration test annotated with @DataJpaTest and @AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE), which runs against a PostgreSQL database. I want to assert that, before a role is deleted from the database, all associated users are dissociated from it.
@Test
@Transactional
public void whenRoleIsDeleted_thenItIsDeletedFromAllUsersThatHadIt() {
    clearDatabase(userRepository, roleRepository);
    populateDatabase(USERS_COUNT, null, userRepository, roleRepository); // saves a bunch of pre-defined roles first, then saves users associated with those roles
    entityManager.flush();

    Role existingRole = randomExistingRole(); // obtains a role that exists
    System.out.println(existingRole.getUsers());

    Condition<RoleRepository> existingRole_ = new Condition<>(repo -> repo.findRoleByName(existingRole.getName())
            .isPresent(), "hasExistingRole");
    assertThat(roleRepository).has(existingRole_);

    entityManager.flush();
    roleRepository.delete(existingRole);

    // then assert that no user has existingRole in its associated roles
}
But my assertion can't complete because a NullPointerException is thrown when I try to remove existingRole. This exception does not happen when the application is used normally. It is thrown because the role has no associations to any user (its users collection is actually null), even though I specify mappedBy = "roles" on Role's users. The User, on the other hand, has roles in its roles field. If I set a breakpoint before roleRepository.delete(), I can see that no changes are made to the actual database until the end of the test method, which I suspect is the problem, although in the logs I can see that Hibernate populates the database and creates the join table just fine.
I tried the following:
Setting entityManager.setFlushMode(FlushModeType.COMMIT) to force every flush to commit a transaction; that didn't work.
Removing the @Transactional annotation from the test method; that didn't work.
Trying the @Rollback(false) annotation; that didn't work either, but it confirmed that if the database is populated before the test method starts, then the role does have users associated with it. That made me think that the presence of users in the actual database is important for this test to work as I expect it to.
I want to understand:
Why does this happen only in tests?
Why can't I see new users and roles in my database immediately after I commit a transaction, but only once the test method has finished executing, despite the absence of @Transactional?
EDIT
I solved this problem by adding logic to associate the role with the user manually when user.setRoles() is invoked. I don't know if this is the right way. Why won't Hibernate do this automatically?
public void setRoles(Set<Role> roles) {
    this.roles = roles;
    roles.forEach(role -> {
        if (!role.getUsers().contains(this)) {
            role.getUsers().add(this);
        }
    });
}
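An equivalent pattern, if you prefer not to hide this bookkeeping in the setter, is a pair of helper methods that maintain both sides explicitly (a sketch; the method names are mine, not from the code above):
public void addRole(Role role) {
    roles.add(role);
    role.getUsers().add(this); // keep the inverse side in sync
}

public void removeRole(Role role) {
    roles.remove(role);
    role.getUsers().remove(this);
}
Hibernate won't do this automatically because only the owning side of a bidirectional association (here User.roles, since Role.users is mappedBy) is consulted when writing to the database; keeping the in-memory inverse side consistent is the application's job.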

JPA not updating ManyToMany relationship in returning result

Here are my entities:
@Entity
public class Actor {

    private List<Film> films;

    @ManyToMany
    @JoinTable(name = "film_actor",
            joinColumns = @JoinColumn(name = "actor_id"),
            inverseJoinColumns = @JoinColumn(name = "film_id"))
    public List<Film> getFilms() {
        return films;
    }
    //... more in here
Moving on:
@Entity
public class Film {

    private List<Actor> actors;

    @ManyToMany
    @JoinTable(name = "film_actor",
            joinColumns = @JoinColumn(name = "film_id"),
            inverseJoinColumns = @JoinColumn(name = "actor_id"))
    public List<Actor> getActors() {
        return actors;
    }
    //... more in here
And the join table:
@javax.persistence.IdClass(com.tugay.sakkillaa.model.FilmActorPK.class)
@javax.persistence.Table(name = "film_actor", schema = "", catalog = "sakila")
@Entity
public class FilmActor {
    private short actorId;
    private short filmId;
    private Timestamp lastUpdate;
So my problem is:
When I remove a Film from an Actor, merge that Actor, and check the database, I see that everything is fine. Say the actor id is 5 and the film id is 3: I can see that these ids are removed from the film_actor table.
The problem is that in my JSF project, although my beans are request scoped and are supposed to fetch the new information, for the Film part they do not. They still bring me the Actor with id = 5 for the Film with id = 3. Here is some sample code:
@RequestScoped
@Named
public class FilmTableBackingBean {

    @Inject
    FilmDao filmDao;

    List<Film> allFilms;

    public List<Film> getAllFilms() {
        if (allFilms == null || allFilms.isEmpty()) {
            allFilms = filmDao.getAll();
        }
        return allFilms;
    }
}
So as you can see, this is a request-scoped bean, and every time I access it, allFilms is initially null, so new data is fetched from the database. However, this fetched data does not match the data in the database: it still includes the Actor.
So I am guessing this is something like a cache issue.
Any help?
Edit: Only after I restart the server is the information fetched by JPA correct.
Edit: This does not help either:
@Entity
public class Film {
    private short filmId;

    @ManyToMany(mappedBy = "films", fetch = FetchType.EAGER)
    public List<Actor> getActors() {
        return actors;
    }
The mapping is wrong.
The join table is mapped twice: once as the join table of the many-to-many association, and once as an entity. It's one or the other, but not both.
And the many-to-many is wrong as well. One side MUST be the inverse side and use the mappedBy attribute (and thus not define a join table, which is already defined on the other, owning side of the association). See example 7.24, and its preceding text, in the Hibernate documentation (which also applies to other JPA implementations).
Side note: why use a short for an ID? A Long would be a wiser choice.
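Put together, a corrected mapping along those lines might look like this (a sketch only; field access and Long ids are my choices here, and the FilmActor entity is dropped in favour of letting the association own the plain join table):
@Entity
public class Actor {

    @Id
    @GeneratedValue
    private Long id;

    // Owning side: the join table is defined here, and only here.
    @ManyToMany
    @JoinTable(name = "film_actor",
            joinColumns = @JoinColumn(name = "actor_id"),
            inverseJoinColumns = @JoinColumn(name = "film_id"))
    private List<Film> films = new ArrayList<>();
}

@Entity
public class Film {

    @Id
    @GeneratedValue
    private Long id;

    // Inverse side: no @JoinTable, just mappedBy pointing at the owning field.
    @ManyToMany(mappedBy = "films")
    private List<Actor> actors = new ArrayList<>();
}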
JB Nizet is correct, but you also need to maintain both sides of relationships, as there is caching in JPA. The EntityManager itself caches managed entities, so make sure your JSF project is closing and re-obtaining EntityManagers, clearing them if they are long lived, or refreshing entities that might be stale. Providers like EclipseLink also have a second-level cache: http://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching
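If stale results do show up from a long-lived EntityManager or the shared cache, the standard JPA calls below are usually enough to get fresh data (a sketch; which one you need depends on how your EntityManagers are scoped):
// Re-read one managed entity from the database, overwriting its in-memory state.
em.refresh(film);

// Detach everything in this persistence context so subsequent queries hit the database again.
em.clear();

// Evict the shared (second-level) cache, e.g. EclipseLink's, via the JPA 2.0 Cache API.
em.getEntityManagerFactory().getCache().evict(Film.class);
em.getEntityManagerFactory().getCache().evictAll();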

Updating entities in Extended Persistence Context

I have a form, Workflow, with fields like wfName, assignedUser, dueDate, turnAroundTime, etc.
It is backed by an entity Workflow with a reference to the User entity as Many-to-One.
When a change is made to the assignedUser field (it is an email address) and the form is submitted, I get a unique-constraint violation error on the USER entity.
I am not trying to achieve this. I only want to replace the User in the Workflow entity.
The save function is performed by a Stateful session bean, with an EXTENDED persistence context.
Am I missing something here? Is this the correct way to update information in a referenced field?
While setting the updated User I am doing
User user = workflow.getUser();
// This user has its email address changed on the screen, so get a fresh reference to the new user from the database.
user = (User) entityManager.createQuery("from User where email_address = :email_address")
        .setParameter("email_address", user.getEmailAddress())
        .getSingleResult();
// This newly found user is then put back into the Workflow entity.
workflow.setUser(user);
entityManager.merge(workflow);
No exception is thrown at the time these lines are executed, but later in the logs I find that it threw a
Caused by: java.sql.SQLException: ORA-00001: unique constraint (PROJ.UK_USER_ID) violated
There is no cascading configuration present in the entities.
The following is the association code for the entities.
The Workflow-User relation:
@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "USER_ID", nullable = false)
@NotNull
public GwpsUser getUser() {
    return user;
}

public void setUserByUserId(User user) {
    this.user = user;
}
The User-Workflow relation:
@OneToMany(fetch = FetchType.LAZY, mappedBy = "User")
public Set<Workflow> getWorkflowsForUserId() {
    return workflowsForUserId;
}

public void setWorkflowsForUserId(final Set<Workflow> workflowsForUserId) {
    this.workflowsForUserId = workflowsForUserId;
}
In the SFSB I have two methods loadWorkflow() and saveWorkflow().
@Begin(join = true)
@Transactional
public boolean loadProofData() {
    // Loading the DataModel here; the conversation starts
}
If I add flushMode = FlushModeType.MANUAL inside @Begin, the saveWorkflow() method saves the data properly, but only the first time. I have to go somewhere else and then come back to this page if I want to make any further changes.
The saveWorkflow() method looks like:
@SuppressWarnings("unchecked")
public boolean saveWorkflow() throws FileTransferException {
    // Do some other validations
    for (Workflow currentWorkflow : workflowData) {
        User user = currentWorkflow.getUser();
        // This user has its email address changed on the screen, so get a fresh reference to the new user from the database.
        user = (User) entityManager.createQuery("from User where email_address = :email_address")
                .setParameter("email_address", user.getEmailAddress())
                .getSingleResult();
        // This newly found user is then put back into the Workflow entity.
        currentWorkflow.setUser(user);
    }
    // Do some other things
}
I am not using the merge() method here, but the problem still persists.
Why are you calling merge? Is the workflow detached (serialized)?
If it is not detached, you should not call merge; just change the object and it should be updated.
Shouldn't you have a setUser method, not setUserByUserId? I'm not sure how this is working; perhaps include your full code. Your get/set methods might be corrupting your objects. In general, it is safer to annotate fields instead of methods, so that code in your get/set methods cannot cause odd side effects.
Ensure you are not creating two copies of the object; it seems your merge is somehow doing this. Enable logging and include the SQL. Calling flush() directly after your merge will cause any errors to be raised immediately.
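Concretely, if the workflow is still managed by the extended persistence context, the update can stay this simple (a sketch; the JPQL property name emailAddress is an assumption about your User mapping):
// All within the same (extended) persistence context:
User freshUser = entityManager
        .createQuery("select u from User u where u.emailAddress = :email", User.class)
        .setParameter("email", user.getEmailAddress())
        .getSingleResult();

currentWorkflow.setUser(freshUser); // managed entity: a plain setter is enough, no merge()
entityManager.flush();              // surfaces any constraint violation immediately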

GWT RequestFactory with Set sub-collections

I have a little problem with RequestFactory regarding the persistence of child collections in the shape of a Set. I am using GWT 2.5 with RequestFactory, and Hibernate 4/Spring 3 on the backend. I am using Spring's open-session-in-view filter so that collections can be persisted after findByID in the save method of my DAO. My problem is that everything seems to work fine when child collections are based on List, but when they are based on Set, apparently not all of the items from the client reach the server.
My code looks like this:
-The root entity IndicationTemplate:
@Entity
@Table (name = "vdasIndicationTemplate")
@org.hibernate.annotations.Table ( appliesTo = "vdasIndicationTemplate", indexes =
{@Index (name = "xieIndicationTemplateCreateUser", columnNames= {"createUserID"}),
@Index (name = "xieIndicationTemplateModifyUser", columnNames= {"modifyUserID"})})
public class IndicationTemplate extends AbstractEditable <Integer> implements IEntity <Integer>, IDateable, IDescriptable {
//...
private Set <ProposalTemplate> proposalTemplates = null;
//...
@OneToMany (fetch = FetchType.LAZY, mappedBy = "indicationTemplate"
, cascade = {CascadeType.MERGE, CascadeType.PERSIST, CascadeType.REFRESH, CascadeType.DETACH})
public Set <ProposalTemplate> getProposalTemplates () {
return proposalTemplates;
}
public void setProposalTemplates (Set <ProposalTemplate> proposalTemplates) {
this.proposalTemplates = proposalTemplates;
}
//...
}
-The child entity ProposalTemplate of course has the opposite ManyToOne mapping, and it also has 3 sub-collections of the same sort with 3 different entities.
-Client-side proxy for root entity:
@ProxyFor (value = IndicationTemplate.class, locator = PersistenceEntityLocator.class)
public interface IIndicationTemplateProxy extends IEntityProxy, IDeletableProxy, IDescriptableProxy {
//....
Set <IProposalTemplateProxy> getProposalTemplates ();
void setProposalTemplates (Set <IProposalTemplateProxy> proposalTemplateProxy);
}
-On the client, I render the attributes of the root entity and also the list of child entities. Then the user can update them, and the changes are stored back into the collection like this:
Set <IProposalTemplateProxy> newList = getElementsFromUiSomehow (); //these elements can be new or just the old ones with some changes
indicationTemplate.getProposalTemplates ().clear ();
indicationTemplate.getProposalTemplates ().addAll (newList);
-And then at some point:
requestContext.saveIndicationTemplate ((IIndicationTemplateProxy) entityProxy)
.fire (new Receiver <IIndicationTemplateProxy> ()
-The RequestContext looks something like:
@Service (value = TemplateService.class, locator = SpringServiceLocator.class)
public interface ITemplateRequestContext extends RequestContext {
/** saves (creates or updates) one given indication template */
Request <IIndicationTemplateProxy> saveIndicationTemplate (IIndicationTemplateProxy indicationTemplate);
//....
}
The problem is that only 1 child entity is added to the collection per request on the server side. For example, if indicationTemplate has 2 proposalTemplates and I add 4 more, then in saveIndicationTemplate on the server the entity contains only 3 instead of 6. It happens no matter how many entities I have previously and how many I add; I only get 1 more than before on the server. I did check the proxy object right before firing the requestContext method, and it is fully loaded, with all of its children. And finally, the weirdest thing is that if I replace Set with List (and make all the subsequent changes), everything works perfectly!
Could there be a reason why RequestFactory fails to transfer all the changes to the server when using Sets instead of Lists? By the way, I do prefer Sets in this case, which is why I am asking.
Anyone?
Thanks for helping!
I assume you are hitting this error. It is a known GWT bug which is still unfixed:
https://code.google.com/p/google-web-toolkit/issues/detail?id=6354&q=set&colspec=ID%20Type%20Status%20Owner%20Milestone%20Summary%20Stars
Try using List instead of Set and it should be fine.
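In practice that means switching the collection type on the proxy (and on the matching entity accessors) from Set to List, e.g. a sketch of the proxy side:
@ProxyFor (value = IndicationTemplate.class, locator = PersistenceEntityLocator.class)
public interface IIndicationTemplateProxy extends IEntityProxy, IDeletableProxy, IDescriptableProxy {
    //....
    // List instead of Set, to work around the RequestFactory issue linked above
    List <IProposalTemplateProxy> getProposalTemplates ();
    void setProposalTemplates (List <IProposalTemplateProxy> proposalTemplates);
}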

Eclipselink performs an unexpected insert in a many-to-one relationship

I have a very basic relationship between two objects:
@Entity
public class A {

    @ManyToOne(optional = false)
    @JoinColumn(name = "B_ID", insertable = false, updatable = true)
    private StatusOfA sa;

    // getter + setter
}

@Entity
public class StatusOfA {

    @Id
    private long id;

    @Column
    private String status;

    // getter + setter
}
There's only a limited set of StatusOfA in DB.
I perform an update on A in a transaction:
@TransactionalAttribute
public void updateStatusOfA(long id) {
    A a = aDao.getAById(123);
    if (a != null) {
        a.getStatusOfA().getId(); // just to ensure that the object is loaded from the DB
        StatusOfA anotherStatusOfA = statusOfADao.getStatusOfAById(456);
        a.setStatusOfA(anotherStatusOfA);
        aDao.saveOrPersistA(a);
    }
}
The saveOrPersistA method here merges 'a'.
I expect EclipseLink to perform only an update on 'a' to change its StatusOfA, but it executes a new insert into the StatusOfA table. Oracle then complains about a unique constraint violation (the StatusOfA that EclipseLink tries to persist already exists...).
There is no cascading here, so the problem is not there, and Hibernate (in JPA 2) behaves as expected.
In the same project I have already built some more complex relationships, and I'm really surprised to see that the relation here is not working.
Thanks in advance for your help.
What does statusOfADao.getStatusOfAById() do?
Does it use the same persistence context (same transaction and EntityManager)?
You need to use the same EntityManager, as you should not mix objects from different persistence contexts.
What does saveOrPersistA do exactly? The merge() call should resolve everything correctly, but if you have really messed up objects, it may be difficult to merge everything as you expect.
Are you merging just A, or its status as well? Try also setting the status to the merged result of the status.
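For example, resolving both entities through the same EntityManager before wiring them together avoids mixing persistence contexts entirely (a sketch, reusing the ids from the question; the id types are assumed):
// One persistence context, one transaction:
A a = em.find(A.class, 123L);                         // managed A
StatusOfA newStatus = em.find(StatusOfA.class, 456L); // managed, existing status row
a.setStatusOfA(newStatus);                            // plain field update on a managed entity
// No merge() needed; the commit flushes a single UPDATE of A's B_ID column.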
Assumptions: @Id @GeneratedValue(strategy = GenerationType.IDENTITY)
Let's consider the following implementations of statusOfADao.getStatusOfAById(456):
1. returns a "proxy" object with just the id set:
return new StatusOfA(456);
2. returns the entity from a new transaction:
EntityManager em = emf.createEntityManager();
em.getTransaction().begin();
StatusOfA o = em.find(StatusOfA.class, 456); // or em.getReference(StatusOfA.class, 456);
em.getTransaction().commit();
return o;
3. returns a detached entity:
StatusOfA o = em.find(StatusOfA.class, 456); // or em.getReference(StatusOfA.class, 456);
em.detach(o);
return o;
4. returns a serialized-then-deserialized entity:
return ObjectCloner.deepCopy(em.find(StatusOfA.class, 456));
5. returns an attached entity:
return em.find(StatusOfA.class, 456);
Conclusions:
EclipseLink handles only implementation no. 5 as "expected".
Hibernate handles all five implementations as "expected".
No analysis here of which behaviour is JPA-spec compliant.