Java JPA: write only the ID for a nested entity

How can I avoid unnecessary queries to the DB?
I have a LoadEntity with two nested entities - CarrierEntity and DriverEntity. Java class:
@Entity
public class LoadEntity {
    ...
    @ManyToOne
    @JoinColumn(name="carrier_id", nullable=false)
    private CarrierEntity carrierEntity;

    @ManyToOne
    @JoinColumn(name="driver_id", nullable=false)
    private DriverEntity driverEntity;
}
But the API sends me carrierId and driverId, so currently I do this:
DriverEntity driverEntity = driverService.getDriverEntityById(request.getDriverId());
loadEntity.setDriverEntity(driverEntity);
loadRepository.save(loadEntity);
How can I write only driverId with JPA?

With Spring Data JPA you can always fall back on plain SQL.
Of course, this will sidestep all the great/annoying logic JPA gives you.
This means you won't get any events and the entities in memory might be out of sync with the database.
For the same reason you might also want to increment the version column, if you are using optimistic locking.
That said, you could update a single field like this:
interface LoadRepository extends CrudRepository<LoadEntity, Long> {

    @Modifying
    @Query(value = "update load_entity set driver_id = :driverId where carrier_id = :carrierId", nativeQuery = true)
    void updateDriverId(@Param("carrierId") Long carrierId, @Param("driverId") Long driverId);
}
If you just want to avoid loading the DriverEntity you may also use JpaRepository.getById.
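For example, a minimal sketch of that approach (assuming a DriverRepository extending JpaRepository<DriverEntity, Long> exists; names are illustrative):

DriverEntity driverRef = driverRepository.getById(request.getDriverId()); // returns a lazy reference, no SELECT is issued yet
loadEntity.setDriverEntity(driverRef);
loadRepository.save(loadEntity); // only the driver_id foreign key value is written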

Related

Spring Data JPA: Work with Pageable but with a specific set of fields of the entity

I am working with Spring Data 2.0.6.RELEASE.
I am working on pagination for performance and presentation purposes.
Regarding performance, I mean that when we have a lot of records it is better to show them through pages.
I have the following and it works fine:
interface PersonaDataJpaCrudRepository extends PagingAndSortingRepository<Persona, String> {
}
The @Controller works fine with:
@GetMapping(produces=MediaType.TEXT_HTML_VALUE)
public String findAll(Pageable pageable, Model model){
Through Thymeleaf I am able to apply pagination. Therefore, until here the goal has been accomplished.
Note: The Persona class is annotated with JPA (@Entity, @Id, etc.)
Now I am concerned about the following: even though pagination in Spring Data controls the number of records, what about the content of each record?
I mean: let's assume the Persona class contains 20 fields (consider any entity you want for your app). For an HTML view where a report only uses 4 fields (id, firstname, lastname, date), we end up with 16 unnecessary fields for each entity in memory.
I have tried the following:
interface PersonaDataJpaCrudRepository extends PagingAndSortingRepository<Persona, String> {

    @Query("SELECT p.id, p.nombre, p.apellido, p.fecha FROM Persona p")
    @Override
    Page<Persona> findAll(Pageable pageable);
}
If I do a simple print in the @Controller it fails with the following:
java.lang.ClassCastException:
[Ljava.lang.Object; cannot be cast to com.manuel.jordan.domain.Persona
If I avoid doing that, the view fails with:
Caused by:
org.springframework.expression.spel.SpelEvaluationException:
EL1008E:
Property or field 'id' cannot be found on object of type
'java.lang.Object[]' - maybe not public or not valid?
I have read many posts in SO such as:
java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast to
I understand the answer and I agree about the Object[] return type, because I am working with a specific set of fields.
Is it mandatory to work with the complete set of fields for each entity? Should I simply accept the memory cost of the 16 fields that are never used in this case, for each record retrieved?
Is there a way to work with a specific set of fields, or with Object[], with the current API of Spring Data?
Have a look at Spring Data projections. For example, an interface-based projection may be used to expose certain attributes through specific getter methods.
Interface:
interface PersonaSubset {
long getId();
String getNombre();
String getApellido();
String getFecha();
}
Repository method:
Page<PersonaSubset> findAll(Pageable pageable);
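A minimal usage sketch (repository name and page size are just placeholders):

Page<PersonaSubset> page = personaRepository.findAll(PageRequest.of(0, 20));
page.forEach(p -> System.out.println(p.getNombre() + " " + p.getApellido()));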
If you only want to read a specific set of columns you don't need to fetch the whole entity. Create a class containing requested columns - for example:
public class PersonBasicData {

    private String firstName;
    private String lastName;

    public PersonBasicData(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }

    // getters and setters if needed
}
Then you can specify the query using the @Query annotation on a repository method, using a constructor expression like this:
#Query("SELECT NEW some.package.PersonBasicData(p.firstName, p.lastName) FROM Person AS p")
You could also use the Criteria API to get it done programmatically:
CriteriaBuilder cb = entityManager.getCriteriaBuilder();
CriteriaQuery<PersonBasicData> query = cb.createQuery(PersonBasicData.class);
Root<Person> person = query.from(Person.class);
query.multiselect(person.get("firstName"), person.get("lastName"));
List<PersonBasicData> results = entityManager.createQuery(query).getResultList();
Be aware that the PersonBasicData instance is created just for read purposes - you won't be able to make changes to it and persist them back to your database, as the class is not marked as an entity and your JPA provider will therefore not manage it.

JPA not updating ManyToMany relationship in returning result

Here are my entities:
@Entity
public class Actor {

    private List<Film> films;

    @ManyToMany
    @JoinTable(name="film_actor",
        joinColumns = @JoinColumn(name="actor_id"),
        inverseJoinColumns = @JoinColumn(name="film_id"))
    public List<Film> getFilms(){
        return films;
    }
    //... more in here
Moving on:
@Entity
public class Film {

    private List<Actor> actors;

    @ManyToMany
    @JoinTable(name="film_actor",
        joinColumns = @JoinColumn(name="film_id"),
        inverseJoinColumns = @JoinColumn(name="actor_id"))
    public List<Actor> getActors(){
        return actors;
    }
    //... more in here
And the join table:
@javax.persistence.IdClass(com.tugay.sakkillaa.model.FilmActorPK.class)
@javax.persistence.Table(name = "film_actor", schema = "", catalog = "sakila")
@Entity
public class FilmActor {
    private short actorId;
    private short filmId;
    private Timestamp lastUpdate;
So my problem is:
When I remove a Film from an Actor and merge that Actor, and check the database, I see that everything is fine. Say the actor id is 5 and the film id is 3: I see that these ids are removed from the film_actor table.
The problem is that in my JSF project, although my beans are request scoped and they are supposed to fetch the new information, for the Film part they do not. They still bring me the Actor with id = 3 for the Film with id = 5. Here is some sample code:
@RequestScoped
@Named
public class FilmTableBackingBean {

    @Inject
    FilmDao filmDao;

    List<Film> allFilms;

    public List<Film> getAllFilms(){
        if(allFilms == null || allFilms.isEmpty()){
            allFilms = filmDao.getAll();
        }
        return allFilms;
    }
}
So as you can see this is a request scoped bean. And every time I access this bean, allFilms is initially null. So new data is fetched from the database. However, this fetched data does not match the data in the database. It still brings the Actor.
So I am guessing this is something like a cache issue.
Any help?
Edit: Only after I restart the server is the information fetched by JPA correct.
Edit: This does not help either:
@Entity
public class Film {
    private short filmId;

    @ManyToMany(mappedBy = "films", fetch = FetchType.EAGER)
    public List<Actor> getActors(){
        return actors;
    }
The mapping is wrong.
The join table is mapped twice: once as the join table of the many-to-many association, and once as an entity. It's one or the other, but not both.
And the many-to-many is wrong as well. One side MUST be the inverse side and use the mappedBy attribute (and thus not define a join table, which is already defined at the other, owning side of the association). See example 7.24, and its preceding text, in the Hibernate documentation (which also applies to other JPA implementations).
Side note: why use a short for an ID? A Long would be a wiser choice.
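A rough sketch of the corrected mapping along those lines (field access shown for brevity; the choice of owning side is arbitrary, and the separate FilmActor entity would be dropped):

@Entity
public class Film {
    @ManyToMany
    @JoinTable(name = "film_actor",
        joinColumns = @JoinColumn(name = "film_id"),
        inverseJoinColumns = @JoinColumn(name = "actor_id"))
    private List<Actor> actors;
}

@Entity
public class Actor {
    @ManyToMany(mappedBy = "actors") // inverse side: no join table defined here
    private List<Film> films;
}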
JB Nizet is correct, but you also need to maintain both sides of the relationship, as there is caching in JPA. The EntityManager itself caches managed entities, so make sure your JSF project is closing and re-obtaining EntityManagers, clearing them if they are long-lived, or refreshing entities that might be stale. Providers like EclipseLink also have a second-level cache: http://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching
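For example, a rough sketch (names follow the entities above, the rest is illustrative):

// keep both sides of the many-to-many in sync when removing the link
actor.getFilms().remove(film);
film.getActors().remove(actor);

// and refresh entities that might be stale, or evict the shared (second-level) cache
entityManager.refresh(film);
entityManager.getEntityManagerFactory().getCache().evictAll();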

EclipseLink - @CascadeOnDelete doesn't work with @Customizer

I have two entities. The "Price" class has "CalculableValue" stored as a SortedMap field.
In order to support the sorted map I wrote a customizer. After that, it seems @CascadeOnDelete is not working. If I remove a CalculableValue instance from the map and then save the "Price", EclipseLink only updates the priceId column to NULL in the calculableValues table...
I really want to keep the SortedMap. It helps avoid lots of routine work for value access at the Java level.
Also, there is no back-reference (ManyToOne) defined in the CalculableValue class; it will never be required for the application logic, so I wanted to keep it one-way.
Any ideas what the best way to resolve this issue is? I actually have lots of other dependencies like this, and pretty much everything is a OneToMany relation with values stored in a sorted map.
Price.java:
@Entity
@Table(uniqueConstraints={
    @UniqueConstraint(columnNames={"symbol", "datestring", "timestring"})
})
@Customizer(CustomDescriptorCustomizer.class)
public class Price extends CommonWithDate
{
    ...
    @CascadeOnDelete
    @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
    @MapKeyColumn(name="key")
    @JoinColumn(name = "priceId")
    private Map<String, CalculatedValue> calculatedValues =
        new TreeMap<String, CalculatedValue>();
    ...
}
public class CustomDescriptorCustomizer implements DescriptorCustomizer
{
    @Override
    public void customize(ClassDescriptor descriptor) throws Exception
    {
        DatabaseMapping mapping = descriptor.getMappingForAttributeName("calculatedValues");
        ((ContainerMapping) mapping).useMapClass(TreeMap.class, methodName);
    }
}
Your customizer should have no effect on this. It could be because you are using a @JoinColumn instead of a mappedBy, which should normally be used in a @OneToMany.
You can check the mapping in your customizer using isCascadeOnDeleteSetOnDatabase(), or set it using mapping.setIsCascadeOnDeleteSetOnDatabase(true).
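A rough sketch of doing that inside the existing customizer (assuming the mapping for "calculatedValues" is a OneToManyMapping; verify the exact mapping type against your EclipseLink version):

@Override
public void customize(ClassDescriptor descriptor) throws Exception {
    OneToManyMapping mapping =
            (OneToManyMapping) descriptor.getMappingForAttributeName("calculatedValues");
    mapping.setIsCascadeOnDeleteSetOnDatabase(true); // re-apply the cascade-on-delete behaviour
    // ... existing useMapClass(...) customization stays as before
}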

Adding entity doesn't refresh parent's collection

The question and problem are pretty simple, though annoying, and I am looking for a global solution because it's an application-wide problem for us.
The code below is really not interesting but I post it for clarification!
We use a PostgreSQL database with JPA 2.0 and we generated all the facades and entities; of course we did some editing, but not much really.
The problem is that every entity contains a Collection of its children, which however (for us only?) is NOT updated after creating a child element.
The objects are written to the database and you can select them easily, but what we would really like to see is the refreshed collection of children in the parent object.
Why is this happening? If we (manually) refresh the parent entity with em.refresh(parent) it does the trick, but that would mean a lot of work in the facades I guess. But maybe there is no other way?
Thanks for support!
/* EDIT */
I guess it has to be some annotation problem or cache or something, but I've already tried
@OneToMany(mappedBy = "idquestion", orphanRemoval=true, fetch= FetchType.EAGER)
and
@Cacheable(false)
didn't work properly.
/* EDIT */
Some sample code for understanding.
Database level:
CREATE TABLE Question (
idQuestion SERIAL,
questionContent VARCHAR,
CONSTRAINT Question_idQuestion_PK PRIMARY KEY (idQuestion)
);
CREATE TABLE Answer (
idAnswer SERIAL,
answerContent VARCHAR,
idQuestion INTEGER,
CONSTRAINT Answer_idAnswer_PK PRIMARY KEY (idAnswer),
CONSTRAINT Answer_idQuestion_FK FOREIGN KEY (idQuestion) REFERENCES Question(idQuestion)
);
Then we generated some entities in NetBeans 7.1; all of them look similar to:
@Entity
@Table(name = "question", catalog = "jobfairdb", schema = "public")
@XmlRootElement
@NamedQueries({ BLAH BLAH BLAH...})
public class Question implements Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    @Basic(optional = false)
    @NotNull
    @GeneratedValue(strategy= GenerationType.IDENTITY)
    @Column(name = "idquestion", nullable = false)
    private Integer idquestion;

    @Size(max = 2147483647)
    @Column(name = "questioncontent", length = 2147483647)
    private String questioncontent;

    @OneToMany(mappedBy = "idquestion", orphanRemoval=true)
    private Collection<Answer> answerCollection;
Getters... setters...
We use (again) generated facades for them, all extending an abstract facade like:
public abstract class CCAbstractFacade<T> {

    private Class<T> entityClass;

    public CCAbstractFacade(Class<T> entityClass) {
        this.entityClass = entityClass;
    }

    protected abstract EntityManager getEntityManager();

    public void create(T entity) {
        getEntityManager().persist(entity);
    }
The parent entity is updated automatically if you use container-managed transactions and you fetch the collection after the transaction is complete. Otherwise, you have to update the collection yourself.
This article explains this behaviour in detail: JPA implementation patterns: Bidirectional associations
EDIT:
The simplest way to use Container Managed Transactions is to have transaction-type="JTA" in persistence.xml and use Container-Managed Entity Managers.
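For example, a rough sketch of a container-managed entity manager in a JTA transaction (the session bean is illustrative):

@Stateless
public class AnswerFacade {

    @PersistenceContext // container-managed EntityManager that joins the JTA transaction
    private EntityManager em;

    public void create(Answer answer) {
        em.persist(answer); // written when the container-managed transaction commits
    }
}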
You seem to be setting the ManyToOne side but not adding to the OneToMany; you have to do both.
In JPA, and in Java in general, you must update both sides of a bi-directional relationship, otherwise the state of your objects will not be in sync. Not doing so would be wrong in any Java code, not just JPA.
There is no magic in JPA that will do this for you. EclipseLink does have a magic option for this that you could set through a customizer (mapping.setRelationshipPartnerAttributeName()), but it is not recommended; fixing your code to be correct is the best solution.
See,
http://en.wikibooks.org/wiki/Java_Persistence/Relationships#Object_corruption.2C_one_side_of_the_relationship_is_not_updated_after_updating_the_other_side
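A minimal sketch of fixing that for the Question/Answer code above (the setter name on the Answer side is assumed):

Answer answer = new Answer();
answer.setIdquestion(question);              // set the ManyToOne (owning) side
question.getAnswerCollection().add(answer);  // and also update the OneToMany side in memory
em.persist(answer);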

Eclipselink performs an unexpected insert in a many-to-one relationship

I have a very basic relationship between two objects:
@Entity
public class A {

    @ManyToOne(optional = false)
    @JoinColumn(name="B_ID", insertable=false, updatable=true)
    private StatusOfA sa;

    // getter+setter
}

@Entity
public class StatusOfA {

    @Id
    private long id;

    @Column
    private String status;

    // getter+setter
}
There's only a limited set of StatusOfA in DB.
I perform an update on A in a transaction:
@TransactionalAttribute
public void updateStatusOfA(long id) {
    A a = aDao.getAById(123);
    if(a != null) {
        a.getStatusOfA().getId(); // just to ensure that the object is loaded from the DB
        StatusOfA anotherStatusOfA = statusOfADao.getStatusOfAById(456);
        a.setStatusOfA(anotherStatusOfA);
        aDao.saveOrPersistA(a);
    }
}
The saveOrPersistA method here is merging 'a'.
I expect EclipseLink to perform only an update on 'a' to change the StatusOfA, but it executes a new insert on the StatusOfA table. Oracle then complains about a unique constraint violation (the StatusOfA that EclipseLink tries to persist already exists...).
There is no cascading here, so the problem is not there, and Hibernate (in JPA 2) behaves as expected.
In the same project I already built some more complex relationships, and I'm really surprised to see that the relation here is not working.
Thanks in advance for your help.
What does statusOfADao.getStatusOfAById() do?
Does it use the same persistence context (same transaction and EntityManager)?
You need to use the same EntityManager, as you should not mix objects from different persistence contexts.
What does saveOrPersistA do exactly? The merge() call should resolve everything correctly, but if you have really messed up objects, it may be difficult to merge everything as you expect.
Are you merging just A, or its status as well? Try also setting the status to the merged result of the status.
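A minimal sketch of that last suggestion (using the EntityManager directly; the daos from the question would wrap these calls):

StatusOfA managedStatus = em.merge(anotherStatusOfA); // merge the status in the same persistence context
a.setStatusOfA(managedStatus);                        // point A at the managed copy, not the detached one
aDao.saveOrPersistA(a);                               // merging A should now just update the foreign key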
Assumptions: @Id @GeneratedValue(strategy = GenerationType.IDENTITY)
Let's consider the following implementations of statusOfADao.getStatusOfAById(456):
1. returns "proxy" object with just id set:
return new StatusOfA(456);
2. returns entity in new transaction:
EntityManager em = emf.createEntityManager();
em.getTransaction().begin();
StatusOfA o = em.find(StatusOfA.class,456);//em.getReference(StatusOfA.class,456);
em.getTransaction().commit();
return o;
3. returns detached entity:
StatusOfA o = em.find(StatusOfA.class,456);//em.getReference(StatusOfA.class,456);
em.detach(o);
return o;
4. returns deserialized-serialized entity:
return ObjectCloner.deepCopy(em.find(StatusOfA.class,456));
5. returns attached entity:
return em.find(StatusOfA.class,456);
Conclusions:
EclipseLink handles only implementation no. 5 as "expected".
Hibernate handles all five implementations as "expected".
There is no analysis here of which behaviour is JPA spec compliant.