Persisting a list of an interface type with JPA2 - jpa

I suspect there's no perfect solution to this problem, so least-worst solutions are more than welcome.
I'm implementing a dashboard using PrimeFaces and I would like to persist the model backing it (using JPA2). I've written my own implementation of DashboardModel and DashboardColumn with the necessary annotations and other fields I need. The model is shown below:
@Entity
public class DashboardSettings implements DashboardModel, Serializable {

    @Id
    private long id;

    @OrderColumn( name="COLUMN_ORDER" )
    private List<DashboardColumn> columns;

    ...a few other fields...

    public DashboardSettings() {}

    @Override
    public void addColumn(DashboardColumn column) {
        this.columns.add(column);
    }

    @Override
    public List<DashboardColumn> getColumns() {
        return columns;
    }

    ...snip...
}
The problem is the columns field. I would like this field to be persisted into its own table, but because DashboardColumn is an interface (and from a third party, so it can't be changed) the field currently gets stored in a blob. If I change the type of the columns field to my own implementation (DashboardColumnSettings), which is marked with @Entity, the addColumn method would cease to work correctly - it would have to do a type check and cast.
The type check and cast are not the end of the world, as this code will only be consumed by our development team, but they are a trip hazard. Is there any way to have the columns field persisted while at the same time leaving its declared type as DashboardColumn?
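For reference, a minimal sketch of the type-check-and-cast workaround described above, assuming DashboardColumnSettings is the @Entity implementation (the exception message is illustrative):

@OrderColumn( name="COLUMN_ORDER" )
@OneToMany
private List<DashboardColumnSettings> columns;

@Override
public void addColumn(DashboardColumn column) {
    // Trip hazard: the interface accepts any DashboardColumn,
    // but only the entity implementation can be persisted.
    if (!(column instanceof DashboardColumnSettings)) {
        throw new IllegalArgumentException(
                "Only DashboardColumnSettings can be persisted");
    }
    this.columns.add((DashboardColumnSettings) column);
}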

You can try to use the targetEntity attribute, though I'm not sure it would be better than an explicit cast:

@OrderColumn( name="COLUMN_ORDER" )
@OneToMany(targetEntity = DashboardColumnSettings.class)
private List<DashboardColumn> columns;

It depends on the JPA implementation (you don't mention which one you use); the JPA spec doesn't define support for interface fields, nor for collections of interface types. DataNucleus JPA certainly allows it, primarily because we support it for JDO as well, where it is part of the JDO spec.

Related

JPA: How to get results by compromised where-clause

I have a table with 30 columns.
I fill the object within my Java code. Now I want to look up in my database whether the row has already been inserted. I can do this primitively, like:
SELECT *
FROM tablename
WHERE table.name=object.name
AND table.street=object.street
AND ...
AND ...
AND ...
I think you get it. It works, but in my opinion this is not the best solution.
Is there any kind of generic solution (e.g. one where I do not need to change the code if the table changes), where I can give the where-clause my object and it can match itself? That way the where-clause would also not be that massive.
The closest thing that comes to mind is Spring Data JPA Specifications.
You can isolate the where clauses in a Specification instance for a particular entity.
Afterwards, you just pass it to any of the @Repository methods:
public interface UserRepository extends CrudRepository<User, Long>,
        JpaSpecificationExecutor<User> {
}
Then in your service:
@Autowired
private UserRepository repo;

public void findMatching() {
    List<User> users = repo.findAll(new MyUserSpecification());
}
Then, whenever the db changes, you simply alter one place: the Specification implementation.
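A minimal sketch of what such a MyUserSpecification might look like (the matched fields name and street are illustrative assumptions, not from the original answer):

import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Predicate;
import javax.persistence.criteria.Root;
import org.springframework.data.jpa.domain.Specification;

public class MyUserSpecification implements Specification<User> {

    @Override
    public Predicate toPredicate(Root<User> root, CriteriaQuery<?> query, CriteriaBuilder cb) {
        // All column matching lives in one place; when the table
        // changes, only this method needs to be updated.
        return cb.and(
                cb.equal(root.get("name"), "John"),
                cb.equal(root.get("street"), "Main Street"));
    }
}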

How are fields set on an entity by Spring Data MongoDB?

I have a MongoRepository class
public interface UserRepository extends MongoRepository<User, Long> {
    User findById(Long id);
}
and my Entity pojo looks like this
@Document(collection = "user")
class User {
    Long id;
    String name;
    Department department;
    …
}
When I call the findById method, a User object is returned. I want to know how Spring Data MongoDB converts the DBObject to a Java object. I was under the impression that Spring Data MongoDB uses some sort of mapper (Jackson?) under the hood, which would call the setters/constructors of the Java entity class based on the field names in the class or the @Field annotation. But to my surprise, the setters are never invoked; only the default constructor is invoked.
Then how are the fields set? The reason I am asking is that if the setters were called, it would give me an option to set some other fields, maybe.
Thanks
Spring Data defaults to field access, as accessor methods can contain additional logic that we don't want to trigger by accident. If that's what you actually want, though, you can switch to property access by annotating your class with @AccessType(Type.PROPERTY).
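A minimal sketch of that switch, assuming Spring Data Commons' org.springframework.data.annotation.AccessType (the fields and setter logic are illustrative):

import org.springframework.data.annotation.AccessType;
import org.springframework.data.annotation.AccessType.Type;
import org.springframework.data.mongodb.core.mapping.Document;

@Document(collection = "user")
@AccessType(Type.PROPERTY)
class User {

    private Long id;
    private String name;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }

    public String getName() { return name; }

    public void setName(String name) {
        // With property access, Spring Data calls this setter while
        // mapping, so any extra logic here will be executed.
        this.name = name;
    }
}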
Spring has an entity converter in the layer below the repository. It uses reflection to read the types of the fields and the method signatures. The conversion logic is generic across the Spring Data repositories.
You can also introduce a custom converter, either your own or a Jackson-based one.
Take a look at the MappingMongoConverter class - it contains the logic that does all of this.
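As a rough illustration, a custom reading converter might look like the sketch below. It assumes the older DBObject-based Spring Data MongoDB API that the question uses, assumes User exposes setters, and would still need to be registered via CustomConversions in your Mongo configuration; the field names are illustrative:

import com.mongodb.DBObject;
import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.ReadingConverter;

@ReadingConverter
public class UserReadConverter implements Converter<DBObject, User> {

    @Override
    public User convert(DBObject source) {
        // Replaces the default reflection-based mapping for User,
        // so extra fields can be set here as well.
        User user = new User();
        user.setId((Long) source.get("_id"));
        user.setName((String) source.get("name"));
        return user;
    }
}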

How to properly use Locking or Transactions to prevent duplicates using Spring Data

What is the best way to check if a record exists and if it doesn't, create it (avoiding duplicates)?
Keep in mind that this is a distributed application running across many application servers.
I'm trying to avoid these:
Race Conditions
TOCTOU
A simple example:
Person.java
@Entity
public class Person {

    @Id
    @GeneratedValue
    private long id;

    private String firstName;
    private String lastName;

    //Getters and Setters Omitted
}
PersonRepository.java
public interface PersonRepository extends CrudRepository<Person, Long> {
    public Person findByFirstName(String firstName);
}
Some Method
public void someMethod() {
    Person john = new Person();
    john.setFirstName("John");
    john.setLastName("Doe");

    if (personRepo.findByFirstName(john.getFirstName()) == null) {
        personRepo.save(john);
    } else {
        //Don't Save Person
    }
}
Clearly, as the code currently stands, there is a chance that the Person could be inserted into the database between the time I check whether it already exists and the time I insert it myself. Thus a duplicate would be created.
How should I avoid this?
Based on my initial research, perhaps a combination of
@Transactional
@Lock
But the exact configuration is what I'm unsure of. Any guidance would be greatly appreciated. To reiterate, this application will be distributed across multiple servers so this must still work in a highly-available, distributed environment.
For inserts: if you want to prevent the same record from being persisted twice, you may want to take some precautions on the DB side. In your example, if firstName should be unique, define a unique index on that column (or a unique constraint over a group of columns) and let the DB handle the check; you just insert, and you get an exception if the record is already there.
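A minimal sketch of that approach (the constraint name and the catch of Spring's DataIntegrityViolationException are illustrative assumptions):

import javax.persistence.Entity;
import javax.persistence.Table;
import javax.persistence.UniqueConstraint;

@Entity
@Table(uniqueConstraints =
        @UniqueConstraint(name = "uk_person_first_name", columnNames = "firstName"))
public class Person {
    // ... fields as in the question ...
}

Then at the call site, just insert and handle the violation:

try {
    personRepo.save(john);
} catch (org.springframework.dao.DataIntegrityViolationException e) {
    // Another server inserted the same person first;
    // treat it as a duplicate instead of failing.
}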
For updates: use the @Version (javax.persistence.Version) annotation, like this:

@Version
private long version;

Define a version column in the tables; Hibernate or any other ORM will automatically populate the value and add the version to the where clause when the entity is updated. So if someone tries to update a stale copy of the entity, the update won't apply. Be careful: depending on how the update is issued, this may not throw an exception but simply return an update count of 0, so you may want to check for that.

JPA #PrePersist & LockModeType.OPTIMISTIC_FORCE_INCREMENT

I came across an interesting situation that I already know how to work around, but I was wondering if there is an elegant solution for this.
I have an entity which cannot have a @Version field, since it is based on a legacy database whose table has no column to hold this kind of value.
Basically it is something like this:
@Entity
public class MyEntity {

    @Id
    private int id;

    @Temporal(TemporalType.DATE)
    private java.util.Date lastUpdated;
}
This is basically just for EULA (End User License Agreement) checking.
I want the Date to be updated when the EULA has to be re-accepted (the new EULA date comes from elsewhere).
For that I was planning to use:
@PrePersist
@PreUpdate
protected void setPersistTime() {
    this.lastUpdated = new Date();
}
The @PrePersist is called correctly when the entity is stored for the first time, but on subsequent saves JPA seems to consider the entity unchanged, so @PreUpdate won't be called as there is nothing to update.
I was planning to use
em.refresh(myEntity, LockModeType.OPTIMISTIC_FORCE_INCREMENT);
But that won't work without @Version, which I cannot use due to the legacy db (there is no version field I could use, and the Date is of the wrong type for it).
Btw, I'm using EclipseLink.

Portable JPA Batch / Bulk Insert

I just jumped onto a feature written by someone else that seems slightly inefficient, but my knowledge of JPA isn't good enough to find a portable solution that's not Hibernate-specific.
In a nutshell, the DAO method, called within a loop to insert each one of the new entities, does an entityManager.merge(object);.
Isn't there a way defined in the JPA spec to pass a list of entities to the DAO method and do a bulk/batch insert instead of calling merge for every single object?
Plus, since the DAO method is annotated with @Transactional, I'm wondering if every single merge call is happening within its own transaction... which would not help performance.
Any idea?
No, there is no batch insert operation in vanilla JPA.
Yes, each insert will be done within its own transaction. The @Transactional attribute (with no qualifiers) means a propagation level of REQUIRED (create a transaction if one doesn't exist already). Assuming you have:
public class Dao {

    @Transactional
    public void insert(SomeEntity entity) {
        ...
    }
}
you do this:
public class Batch {

    private Dao dao;

    @Transactional
    public void insert(List<SomeEntity> entities) {
        for (SomeEntity entity : entities) {
            dao.insert(entity);
        }
    }

    public void setDao(Dao dao) {
        this.dao = dao;
    }
}
That way the entire group of inserts gets wrapped in a single transaction. If you're talking about a very large number of inserts, you may want to split them into groups of 1000, 10000 or whatever works, as a sufficiently large uncommitted transaction may starve the database of resources and possibly fail due to size alone (see the sketch after the note below).
Note: @Transactional is a Spring annotation. See Transactional Management in the Spring Reference.
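A rough sketch of that group-by-group splitting, assuming a hypothetical @Transactional insertChunk(List<SomeEntity>) method is added to Dao (the chunk size is illustrative; each chunk commits in its own transaction, so no single uncommitted transaction grows too large):

import java.util.List;

public class ChunkedBatch {

    private static final int CHUNK_SIZE = 1000; // illustrative; tune for your database

    private Dao dao;

    // Deliberately NOT @Transactional: each call to dao.insertChunk
    // opens and commits its own transaction (REQUIRED propagation),
    // keeping every transaction to a bounded size.
    public void insert(List<SomeEntity> entities) {
        for (int start = 0; start < entities.size(); start += CHUNK_SIZE) {
            int end = Math.min(start + CHUNK_SIZE, entities.size());
            dao.insertChunk(entities.subList(start, end));
        }
    }

    public void setDao(Dao dao) {
        this.dao = dao;
    }
}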
What you could do, if you were in a crafty mood, is:
@Entity
public class SomeEntityBatch {

    @Id
    @GeneratedValue
    private int batchID;

    @OneToMany(cascade = {CascadeType.PERSIST, CascadeType.MERGE})
    private List<SomeEntity> entities;

    protected SomeEntityBatch() {
        // no-arg constructor required by JPA
    }

    public SomeEntityBatch(List<SomeEntity> entities) {
        this.entities = entities;
    }
}
List<SomeEntity> entitiesToPersist;
em.persist(new SomeEntityBatch(entitiesToPersist));
// remove the SomeEntityBatch object later
Because of the cascade, that will cause the entities to be inserted in a single operation.
I doubt there is any practical advantage to doing this over simply persisting the individual objects in a loop. It would be interesting to look at the SQL that the JPA implementation emits, and to benchmark.
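If you do want to see that SQL, a minimal sketch of enabling statement logging when the provider is Hibernate (hibernate.show_sql and hibernate.format_sql are Hibernate-specific properties, not part of the JPA spec, and "my-unit" is a placeholder persistence-unit name):

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class ShowSql {
    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("hibernate.show_sql", "true");   // log each SQL statement
        props.put("hibernate.format_sql", "true"); // pretty-print the SQL

        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("my-unit", props);
        // ... run the persist calls above and watch the emitted SQL ...
        emf.close();
    }
}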