I have an entity class with multiple foreign key constraints, which are handled by @ManyToMany etc.
public class MyExampleClazz {
.......
@ManyToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
@JoinTable(name = "secondClazzEntities", joinColumns = @JoinColumn(name = "id"),
        inverseJoinColumns = @JoinColumn(name = "id"))
List<MySecondClazz> secondClazz;
.....
}
For some cases, I would like to change the fetching strategy, e.g. from EAGER to LAZY and vice versa, because for some read operations I don't need EAGER fetching (imagine a RESTful service which offers only a small portion of the data and not everything), but in most cases I need EAGER.
One option could be to introduce a second entity (for the same table) with different annotations, but that would duplicate code and add maintenance effort.
Are there other ways to achieve the same result with less effort?
There are two layers where you can control data fetching in JPA:
At the level of entity class via fetch type and fetch mode
At the query level via the "join fetch" clause or using @EntityGraph
I suggest you use FetchType.LAZY by default for almost all associations, and fetch them only when you need them via @EntityGraph.
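For example, something like this (a rough sketch assuming Spring Data JPA; the repository name, the extra finder method and the Long id type are assumptions, not code from the question):

import java.util.Optional;
import org.springframework.data.jpa.repository.EntityGraph;
import org.springframework.data.jpa.repository.JpaRepository;

public interface MyExampleClazzRepository extends JpaRepository<MyExampleClazz, Long> {

    // Plain lookup: secondClazz is now LAZY and stays unloaded.
    Optional<MyExampleClazz> findById(Long id);

    // Same lookup, but the entity graph fetches secondClazz eagerly for this query only.
    @EntityGraph(attributePaths = "secondClazz")
    Optional<MyExampleClazz> findWithSecondClazzById(Long id);
}

This way the association is lazy everywhere by default, and each use case opts into eager loading per query.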
I'd like to optimize a Querydsl + Spring Data query. Currently I am using a BooleanBuilder as the predicate, which would be fine, but it joins too many tables. I don't need all the columns from the tables, and I don't need some of the tables at all. I believe using a projection would reduce the number of tables joined.
I tried using Projections.bean() and also extending MappingProjection, but both approaches end up selecting from multiple tables instead of using joins, which results in fewer rows than needed.
My data structure consists of a Booking entity and some related entities like User, so it looks something like the following:
@Entity
public class Booking {
    @ManyToOne
    @JoinColumn(name = "userId", nullable = false)
    private User endUser;
}

@Entity
public class User {
    @OneToMany(cascade = CascadeType.ALL, mappedBy = "endUser", fetch = FetchType.LAZY)
    private List<Booking> bookings;
}
I implemented a custom Querydsl projection repository as described here: Spring Data JPA and Querydsl to fetch subset of columns using bean/constructor projection
I'm trying a projection like the following:
Projections.bean(Booking.class,
    booking.uuid,
    Projections.bean(User.class,
        booking.endUser.uuid
    ).as(booking.endUser.getMetadata().getName())
);
The sql generated by the current solution looks something like this:
select (...)
from booking booking0_,
user user12_
where booking0_.user_id=user12_.id
So, how can I make Querydsl join the tables instead of selecting from all of them?
Am I on the right path to try to optimize the query? Does the projection make sense?
I ended up creating a DB view, making an entity for it and then querying that with Querydsl. This is really simple and straightforward, and the performance is good too.
This is probably where ORMs are just not capable enough.
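The approach looks roughly like this (a sketch only, not the actual code: the view name and its columns are placeholders):

import javax.persistence.*;
import org.hibernate.annotations.Immutable;

@Entity
@Immutable // Hibernate-specific: marks the view-backed entity as read-only
@Table(name = "booking_summary_view") // DB view joining booking and user with just the needed columns
public class BookingSummary {

    @Id
    @Column(name = "booking_uuid")
    private String bookingUuid;

    @Column(name = "user_uuid")
    private String userUuid;

    // getters omitted
}

Querydsl then generates a QBookingSummary type for it, and the query becomes a plain selectFrom(...) with no joins or projections to tune.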
We have this relationship:
public class RuleProviderEntity implements Serializable
{
...
@OneToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
@OrderColumn(name = RuleEntity.RULE_SEQUENCE)
private List<RuleEntity> rules;
}
This alone creates a join table with 2 keys and the RULE_SEQUENCE column. So far so good, and it works for SELECTs.
Now there's a JPQL query
DELETE FROM RuleProviderEntity WHERE ...
But this fails to cascade deleting the RuleEntity rows. It just deletes the RuleProviderEntity and leaves the RuleEntity intact.
Is this supposed to work in JPA 2 (making this a Hibernate bug), or am I missing something in the config?
I know I could add @JoinTable but it would only override the defaults.
Also, orphanRemoval doesn't seem necessary here.
Maybe I could do a workaround with @PreRemove, but I'm not sure how.
You mean a JPQL bulk delete query is issued, rather than em.remove()?
A bulk delete query will NEVER respect cascade semantics, and is not intended to (nor will it keep in-memory managed objects consistent with the datastore). If you want cascading, then you need to call em.remove(). If in doubt about this, look at the JPA spec.
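In other words, something like this (a sketch only; em is the EntityManager, and the WHERE condition with its name attribute is a placeholder for whatever criteria your bulk delete used):

// Load the matching providers and remove them one by one, so that
// CascadeType.ALL on the rules association is honoured.
List<RuleProviderEntity> providers = em.createQuery(
        "SELECT p FROM RuleProviderEntity p WHERE p.name = :name", RuleProviderEntity.class)
    .setParameter("name", name)
    .getResultList();

for (RuleProviderEntity provider : providers) {
    em.remove(provider); // also deletes the associated RuleEntity rows and the join table entries
}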
Hi, I have a table VariantValidityBE.
It has a relationship column like this:
@OneToMany(mappedBy = "variantValidityBE", fetch = FetchType.LAZY)
private List<VariantValidityValueBE> variantValidityBEList;
And in another table
@ManyToOne
@JoinColumn(name = "CATEGORY_ID", referencedColumnName = "ID")
private VariantValidityBE variantValidityBE;
And my method is like this
List<VariantValidityBE> resultList = getResultList(VariantValidityBE.FIND_ALL);
for (VariantValidityBE variantValidityBE : resultList) {
List<VariantValidityValueBE> options = variantValidityBE.getVariantValidityBEList();
}
The value of options only contains the old data; the newly inserted child record is not included.
The values are inserted into the DB correctly.
But if I restart the application, it returns the updated records.
I have used the same kind of relationship many times and never had this problem.
Since JPA entities are treated as regular Java objects, you are required to keep both sides of a bidirectional relationship in sync with each other when making changes. JPA will not perform magic to mirror changes made to one side of a bidirectional relationship onto the other for you. So when you add a new VariantValidityValueBE instance and set its variantValidityBE, you must also add the VariantValidityValueBE to the variantValidityBEList. Otherwise, the variantValidityBEList remains unchanged and stale until it is refreshed from the database.
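For example (a sketch using the entities from the question, assuming standard getter/setter names and an EntityManager em):

VariantValidityValueBE value = new VariantValidityValueBE();
value.setVariantValidityBE(variantValidityBE);           // owning side: sets the foreign key
variantValidityBE.getVariantValidityBEList().add(value); // inverse side: keep the collection in sync

em.persist(value);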
I have a many-to-many relationship where the link table has an additional property. Hence the link table is represented by an entity class too, called Composition. The primary key of Composition is an @Embeddable linking to the according entities, e.g. two @ManyToOne references.
It can happen that a user makes an error when selecting either of the two references, and hence the composite primary key must be updated. However, due to how JPA (Hibernate) works, this will of course always create a new row (an insert) instead of an update, and the old Composition will still exist. The end result is that a new row was added instead of one being updated.
Option 1:
The old Composition could just be deleted before the new one is inserted, but then the method handling this would need both the old and the new version. Plus, since the updated version is actually a new entity, optimistic locking will not work, and hence the last update will always win.
Option 2:
Use a native query. The query also increments the version column and includes the version in the WHERE clause. Throw an OptimisticLockException if the update count is 0 (concurrent modification or deletion).
What is the better choice? What is the "common approach" to this issue?
Why not just change the primary key of Composition to be a UID which is auto-generated? Then the users could change the two references to the entities being joined without having to delete/re-create the Composition entity. Optimistic locking would then be maintained.
EDIT: For example:
@Entity
@Table(name = "COMPOSITION")
public class Composition {

    @Id
    @Column(name = "ID")
    private Long id; // Auto-generate using preferred method

    @ManyToOne(fetch = FetchType.LAZY, optional = false)
    @JoinColumn( .... as appropriate .... )
    private FirstEntity firstEntity;

    @ManyToOne(fetch = FetchType.LAZY, optional = false)
    @JoinColumn( .... as appropriate .... )
    private SecondEntity secondEntity;
....
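    // Sketch: to keep optimistic locking working across such updates, the entity would
    // presumably also carry a version attribute (the column name here is an assumption).
    @Version
    @Column(name = "VERSION")
    private Long version;
}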
I have an m:n relationship book - borrow - user, where borrow is the join table.
The tables are given (cannot be changed):
on one side they are used by a JDBC app as well,
on the other side I would like to use them via JPA.
book(book_id) - borrow(book_id, user_id) - user(user_id)
The JPA annotations used:
User:
@OneToMany(targetEntity=BorrowEntity.class, mappedBy="user")
@JoinColumn(name="USER_ID", referencedColumnName="USER_ID")
private List<BorrowEntity> borrowings;
Book:
@OneToMany(targetEntity=BorrowEntity.class, mappedBy="book")
@JoinColumn(name="BOOK_ID", referencedColumnName="BOOK_ID")
private List<BorrowEntity> borrowings;
My problem is that with the settings above it adds some extra (undesired) columns to the borrow table:
'user_USER_ID' and 'book_BOOK_ID'
How can I configure the JPA annotations to keep just borrow(user_id, book_id), which is enough for the many-to-one?
First of all, since the borrow table is a pure join table, you don't need to map it at all. All you need is a @ManyToMany association using this borrow table as the join table:
@ManyToMany
@JoinTable(name = "borrow",
    joinColumns = @JoinColumn(name = "USER_ID"),
    inverseJoinColumns = @JoinColumn(name = "BOOK_ID"))
private List<Book> borrowedBooks;

...

@ManyToMany(mappedBy = "borrowedBooks")
private List<User> borrowingUsers;
If you really want to map the join table as an entity, then it should contain two @ManyToOne associations, one for each foreign key (see the sketch at the end of this answer). So the following is wrong:
@OneToMany(targetEntity=BorrowEntity.class, mappedBy="user")
@JoinColumn(name="USER_ID", referencedColumnName="USER_ID")
private List<BorrowEntity> borrowings;
Indeed, mappedBy means: this association is the inverse side of the bidirectional OneToMany/ManyToOne association, which is already mapped by the field user in the BorrowEntity entity. Please see the annotations on this field to know how to map the association.
So the @JoinColumn doesn't make sense. It's in contradiction with mappedBy. You just need the following:
@OneToMany(mappedBy="user")
private List<BorrowEntity> borrowings;
The targetEntity is also superfluous, since it's a List<BorrowEntity>: JPA can infer the target entity from the generic type of the list.
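For completeness, here is a sketch of what such a BorrowEntity could look like if the join table really had to be mapped as an entity (the BorrowId class, the @MapsId wiring and the Long id types are assumptions, not code from the question):

import java.io.Serializable;
import javax.persistence.*;

@Entity
@Table(name = "borrow")
public class BorrowEntity {

    @EmbeddedId
    private BorrowId id;

    @ManyToOne
    @MapsId("userId") // maps the USER_ID part of the composite key
    @JoinColumn(name = "USER_ID")
    private User user;

    @ManyToOne
    @MapsId("bookId") // maps the BOOK_ID part of the composite key
    @JoinColumn(name = "BOOK_ID")
    private Book book;
}

// In its own file:
@Embeddable
public class BorrowId implements Serializable {
    private Long userId;
    private Long bookId;
    // equals() and hashCode() are required for a composite key
}

The two OneToMany sides in User and Book would then simply use mappedBy="user" and mappedBy="book", as shown above.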