JPQL @NamedQuery with Entities or Ids? - jpa

Sorry if this is a duplicate.
Is it possible or recommended for the business layer to use objects instead of ids?
SELECT c
FROM Child AS c
WHERE c.parent = :parent
public List<Child> list(final Parent parent) {
    // Must parent be managed?
    // How can I know that?
    // Has parent even been persisted?
    return em.createNamedQuery(...)
             .setParameter("parent", parent)
             .getResultList();
}
This is how I currently work with it:
SELECT c
FROM Child AS c
WHERE c.parent.id = :parent_id
public List<Child> list(final Parent parent) {
    // wait! parent.id could be null!
    // it may not have been persisted yet!
    return list(parent.getId());
}
public List<Child> list(final long parentId) {
    return em.createNamedQuery(...)
             .setParameter("parent_id", parentId)
             .getResultList();
}
UPDATED QUESTION --------------------------------------
Can JAX-RS or JAX-WS classes, each injected with @EJB references, be said to participate in the same JTA transaction?
Here comes the very original problem that I have always been curious about.
Let's say we have two EJBs.
@Stateless
class ParentBean {
    public Parent find(...) {
    }
}
@Stateless
class ChildBean {
    public List<Child> list(final Parent parent) {
    }
    public List<Child> list(final long parentId) {
    }
}
What is the proper way to work with these from EJB clients?
@Stateless // <-- this is mandatory for being injected with @EJB, right?
@Path("/parents/{parent_id: \\d+}/children")
class ChildsResource {

    @GET
    public Response list(@PathParam("parent_id") final long parentId) {
        // do I just have to stick to this approach?
        final List<Child> children1 = childBean.list(parentId);

        // is this parent managed?
        // is it OK to pass it to another EJB?
        final Parent parent = parentBean.find(parentId);

        // is this gonna work?
        final List<Child> children2 = childBean.list(parent);
        ...
    }

    @EJB
    private ParentBean parentBean;

    @EJB
    private ChildBean childBean;
}

The following is presented as an answer only to the first question, "Is it possible or recommended for the business layer to use objects instead of ids?", because I unfortunately do not fully understand the second question about JAX-RS/JAX-WS classes and the same JTA transaction.
It is possible, and in most cases also recommended. The whole purpose of ORM is that we can operate on objects and their relationships, not on their representation in the database.
The id of an entity (especially in the case of a surrogate id) is often a concept that is only interesting when we are near the storage itself. When only the persistence provider needs to access the id, it often makes sense to declare the id accessor as protected. When we do so, less noise is exposed to the users of the entity.
There are also valid exceptions, as usual. It may, for example, turn out that moving a whole entity over the wire is too resource-consuming and that having a list of ids instead of a list of entities is preferable. Such a design decision should not be made before the problem actually exists.
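For illustration, a minimal sketch of such a protected surrogate-id accessor (the class is generic; the field and annotation choices are assumptions, not from the question):
@Entity
public class Parent {

    @Id
    @GeneratedValue
    private Long id;

    // Only the persistence provider needs the surrogate id, so it is
    // not exposed in the public API of the entity.
    protected Long getId() {
        return id;
    }
}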

If the parent has not been persisted yet, then the query won't work, and executing it doesn't make much sense. It's your responsibility to avoid executing it if the parent hasn't been persisted, but I would not make that a responsibility of the find method itself. Just make it clear in the documentation of the method that the parent passed as an argument must have an ID, or at least be persistent. There is no need to repeat the same verification the entity manager already performs.
If it has been persisted but the flush hasn't happened yet, the entity manager must flush before executing the query, precisely so that the query finds the children of the new parent.
At least with Hibernate, you may execute the query with a detached parent. If the ID is there, it will be used when the query executes.
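A minimal sketch of the documented-precondition approach described above (the named-query name and the guard are illustrative assumptions, not from the question):
/**
 * Lists the children of the given parent. The parent passed as an
 * argument must have an ID, i.e. it must at least have been
 * persisted; it may be detached.
 */
public List<Child> list(final Parent parent) {
    if (parent.getId() == null) { // hypothetical guard; assumes a Long surrogate id
        throw new IllegalArgumentException("parent has not been persisted yet");
    }
    return em.createNamedQuery("Child.findByParent", Child.class) // hypothetical query name
             .setParameter("parent", parent)
             .getResultList();
}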


How to know if a class is an @Entity (javax.persistence.Entity)?

How can I know if a class is annotated with javax.persistence.Entity?
Person (Entity)
@Entity
@Table(name = "t_person")
public class Person {
    ...
}
PersonManager
@Stateless
public class PersonManager {

    @PersistenceContext
    protected EntityManager em;

    public Person findById(int id) {
        Person person = this.em.find(Person.class, id);
        return person;
    }
}
I tried to do it with instanceof as follows:
@Inject
PersonManager manager;
Object o = manager.findById(1);
o instanceof Entity // false
However, the result is false. Shouldn't it be true?
While the existing answers provide a (somewhat) working solution, some things should be noted:
Using an approach based on reflection implies (a) a performance overhead and (b) security restrictions (see Drawbacks of Reflection).
Using an ORM-specific (here: Hibernate) approach risks the portability of the code to other execution environments, i.e., application containers or other customer-related settings.
Luckily, there is a third, JPA-only way of detecting whether a certain Java class (type) is a (managed) @Entity. This approach makes use of standardized access to javax.persistence.metamodel.Metamodel. With it you get the method
Set<EntityType<?>> getEntities();
It only lists types annotated with @Entity AND detected by the current instance of EntityManager you use. With every object of EntityType it is possible to call
Class<?> getJavaType();
For demonstration purposes, I quickly wrote a method which requires an instance of EntityManager (here: em), either injected or created ad-hoc:
private boolean isEntity(Class<?> clazz) {
    boolean foundEntity = false;
    Set<EntityType<?>> entities = em.getMetamodel().getEntities();
    for (EntityType<?> entityType : entities) {
        Class<?> entityClass = entityType.getJavaType();
        if (entityClass.equals(clazz)) {
            foundEntity = true;
        }
    }
    return foundEntity;
}
You can provide such a method (either public or protected) in a central place (such as a Service class) for easy re-use by your application components. The above example is just meant to give a direction of what to look for when aiming at a pure JPA approach.
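For instance, combined with the Person entity from the question (a usage sketch, assuming isEntity lives next to the injected EntityManager):
Object o = manager.findById(1);
boolean entity = isEntity(o.getClass()); // true: Person is a managed @Entity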
For reference see sections 5.1.1 (page 218) and 5.1.2 (page 219f) of the JPA 2.1 specification.
Hope it helps.
If the statement
sessionFactory.getClassMetadata(HibernateProxyHelper.getClassWithoutInitializingProxy(person)) != null
is true, then it is an entity. (Note that getClassWithoutInitializingProxy expects an object instance, not a Class.)
@NiVer's answer is valid. But if you don't have a Session or SessionFactory at that point, you could use reflection. Something like:
o.getClass().getAnnotation(Entity.class) != null
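One caveat worth adding: @Entity is not an @Inherited annotation, so calling getAnnotation on a runtime proxy subclass (e.g. a lazy-loading proxy) may return null even for a real entity. A defensive variant of the reflection check could walk up the hierarchy (the helper name is illustrative):
// Walks up the class hierarchy so that proxy subclasses still resolve
// to their annotated entity superclass.
static boolean isEntityClass(Class<?> clazz) {
    for (Class<?> c = clazz; c != null && c != Object.class; c = c.getSuperclass()) {
        if (c.isAnnotationPresent(Entity.class)) {
            return true;
        }
    }
    return false;
}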

Repository pattern: Deleting the aggregate root

When deleting a model (aggregate root) from the repository, all associated aggregates must be deleted too.
I am struggling to implement this in my Entity Framework 6 implementation of the repository pattern.
In my example, I want to delete a Customer from the CustomerRepository. All of the customer's Order objects should also be deleted.
Repository (stripped down):
public interface IRepository<T> where T : DomainEntity
{
    void Remove(T item);
}

public class EntityFrameworkRepository<T> : IRepository<T> where T : DomainEntity
{
    private readonly DbSet<T> dbSet;
    public DbContext context;

    public EntityFrameworkRepository(IUnitOfWork unitOfWork)
    {
        context = ((EntityFrameworkUnitOfWork)unitOfWork).context;
        dbSet = context.Set<T>();
    }

    public virtual void Remove(T item)
    {
        DbEntityEntry dbEntityEntry = context.Entry(item);
        if (dbEntityEntry.State == EntityState.Detached)
        {
            dbSet.Attach(item);
        }
        dbSet.Remove(item);
    }
}
public class EntityFrameworkUnitOfWork : IUnitOfWork
{
    public readonly DbContext context;

    public EntityFrameworkUnitOfWork()
    {
        this.context = new ReleaseContext();
    }

    public void Commit()
    {
        context.SaveChanges();
    }
}
ICustomerRepository and CustomerRepository (EF implementation):
public interface ICustomerRepository : IRepository<Customer>
{
    IEnumerable<Customer> GetAllActive();
}

public class CustomerRepository : EntityFrameworkRepository<Customer>, ICustomerRepository
{
    public CustomerRepository(IUnitOfWork unitOfWork)
        : base(unitOfWork)
    { }

    public override void Remove(Customer item)
    {
        item.Orders.Clear();
        base.Remove(item);
    }
}
Client-code:
customerRepository.Remove(customer);
unitOfWork.Commit();
Exception thrown:
System.InvalidOperationException: The operation failed: The
relationship could not be changed because one or more of the
foreign-key properties is non-nullable. When a change is made to a
relationship, the related foreign-key property is set to a null value.
If the foreign-key does not support null values, a new relationship
must be defined, the foreign-key property must be assigned another
non-null value, or the unrelated object must be deleted.
I would expect calling item.Orders.Clear() to indicate to EF that the associations must be deleted.
The error suggests your Clear() call hasn't removed the child entities.
Are you aware of the fluent API option to cascade on delete?
HasRequired(t => t.Parent).WithOptional().WillCascadeOnDelete(true);
So if you delete a root object, all dependents can be removed by the DB.
Although that option is not always available...
Since you are using IRepository, did you consider using a pattern like:
public int DeleteWhere(Expression<Func<TPoco, bool>> predicate)
{
    // materialize first so the list is stable while states are changed
    var delList = Context.Set<TPoco>().Where(predicate).ToList();
    foreach (var poco in delList)
    {
        SetEntityState(poco, EntityState.Deleted); // helper that sets the entry state
    }
    return delList.Count;
}
var custId = 1;
var repOrder = new Repository<Order>();
var delCnt = repOrder.DeleteWhere(t => t.CustomerId == custId);
MyContext.SaveChanges();
There is a good practice: don't delete anything :)
Mark it as "deleted" instead (see the sketch below).
Because WHY? Show us a real business requirement to physically delete stuff.
Not only does it slow things down (DBs usually lock a lot while deleting) and cause fragmentation, etc., but in most cases it is absurd! No business would allow you to physically delete a customer and a list of orders!
Business does not delete anything. In a real business, no one will go and find all the papers related to a particular customer and dispose of them in a shredder. Unless they did something illegal and the FBI is knocking at the door :)
Talk to your business experts who know a little about computers (these are the real business experts). They will tell you what happens to customers when they stop being customers (perhaps they are "archived", perhaps something else, or perhaps just nothing), and then model it.
It is us programmers who usually invent the concept of "deleting" stuff.
Besides, analysing historical information can be really helpful some time in the future!
There are only two cases in which a physical delete can be necessary:
To save disk space (which is not a problem any more, now that disk space is as cheap as dirt)
To meet a legal obligation to physically delete data when a customer wants it deleted (which is a rare requirement, usually found only in certain domains)
For #1, again, space is not a problem these days, so implementing delete can cost more than the benefit from it.
For #2, you want to be explicit anyway, and you would probably manage your data storage differently. For example, you may want to have a DB per client, so you can just drop the DB and all its backups to comply with the regulation (yes, you must remove the backups too in order to legally say that you no longer hold the deleted data).
So which case is yours? Why do you want to delete; what are your real business requirements?
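To make the soft-delete idea above concrete, here is a minimal sketch in JPA terms (Java, matching the rest of this page rather than the question's EF code; the flag and method names are illustrative):
@Entity
public class Customer {

    @Id
    @GeneratedValue
    private Long id;

    // soft-delete flag: rows are marked instead of physically removed
    private boolean deleted;

    public void markDeleted() {
        this.deleted = true;
    }
}
Queries then simply filter on the flag, e.g. SELECT c FROM Customer c WHERE c.deleted = false.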

JPA not updating ManyToMany relationship in returning result

Here are my entities:
@Entity
public class Actor {

    private List<Film> films;

    @ManyToMany
    @JoinTable(name = "film_actor",
            joinColumns = @JoinColumn(name = "actor_id"),
            inverseJoinColumns = @JoinColumn(name = "film_id"))
    public List<Film> getFilms() {
        return films;
    }
    // ... more in here
}
Moving on:
@Entity
public class Film {

    private List<Actor> actors;

    @ManyToMany
    @JoinTable(name = "film_actor",
            joinColumns = @JoinColumn(name = "film_id"),
            inverseJoinColumns = @JoinColumn(name = "actor_id"))
    public List<Actor> getActors() {
        return actors;
    }
    // ... more in here
}
And the join table:
@javax.persistence.IdClass(com.tugay.sakkillaa.model.FilmActorPK.class)
@javax.persistence.Table(name = "film_actor", schema = "", catalog = "sakila")
@Entity
public class FilmActor {
    private short actorId;
    private short filmId;
    private Timestamp lastUpdate;
    // ...
}
So my problem is:
When I remove a Film from an Actor and merge that Actor, then check the database, I see that everything is fine. Say the actor id is 5 and the film id is 3: I see that these ids are removed from the film_actor table.
The problem is that in my JSF project, although my beans are request scoped and are supposed to fetch fresh information, for the Film part they do not. They still bring me the Actor with id = 5 for the Film with id = 3. Here is a sample code:
@RequestScoped
@Named
public class FilmTableBackingBean {

    @Inject
    FilmDao filmDao;

    List<Film> allFilms;

    public List<Film> getAllFilms() {
        if (allFilms == null || allFilms.isEmpty()) {
            allFilms = filmDao.getAll();
        }
        return allFilms;
    }
}
So as you can see, this is a request scoped bean, and every time I access it, allFilms is initially null, so fresh data is fetched from the database. However, the fetched data does not match the data in the database: it still brings the Actor.
So I am guessing this is something like a cache issue.
Any help?
Edit: Only after I restart the server is the information fetched by JPA correct.
Edit: This does not help either:
@Entity
public class Film {

    private short filmId;

    @ManyToMany(mappedBy = "films", fetch = FetchType.EAGER)
    public List<Actor> getActors() {
        return actors;
    }
}
The mapping is wrong.
The join table is mapped twice: once as the join table of the many-to-many association, and once as an entity. It's one or the other, but not both.
And the many-to-many is wrong as well. One side MUST be the inverse side and use the mappedBy attribute (and thus not define a join table, which is already defined at the other, owning side of the association). See example 7.24, and its preceding text, in the Hibernate documentation (which also applies to other JPA implementations).
Side note: why use a short for an ID? A Long would be a wiser choice.
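A hedged sketch of what the corrected mapping could look like, with Film as the owning side, Actor as the inverse side, and the FilmActor entity removed (making Film the owner is an assumption; it could equally be Actor):
@Entity
public class Film {

    private List<Actor> actors;

    // owning side: the only place where the join table is defined
    @ManyToMany
    @JoinTable(name = "film_actor",
            joinColumns = @JoinColumn(name = "film_id"),
            inverseJoinColumns = @JoinColumn(name = "actor_id"))
    public List<Actor> getActors() {
        return actors;
    }
    // setters omitted
}

@Entity
public class Actor {

    private List<Film> films;

    // inverse side: no join table here, just mappedBy
    @ManyToMany(mappedBy = "actors")
    public List<Film> getFilms() {
        return films;
    }
    // setters omitted
}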
JB Nizet is correct, but you also need to maintain both sides of the relationship, because there is caching in JPA. The EntityManager itself caches managed entities, so make sure your JSF project is closing and re-obtaining EntityManagers, clearing them if they are long-lived, or refreshing entities that might be stale. Providers like EclipseLink also have a second-level cache: http://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching
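For illustration, a small helper that keeps both sides in sync when removing a film from an actor (the method name is illustrative; with the mapping sketched above, only the owning side is actually written to the join table):
public void removeFilm(Actor actor, Film film) {
    actor.getFilms().remove(film);  // inverse side: keeps the in-memory model consistent
    film.getActors().remove(actor); // owning side: this change is what gets flushed
}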

Flush() required for multiple EclipseLink merges in the same transaction?

I'm having an issue with multiple EntityManager.merge() calls in a single transaction, using an Oracle database. Neither object exists yet. Entities:
@Entity
public class A {

    @Id
    @Column(name = "ID")
    public Long getID();

    @OneToOne(targetEntity = B.class)
    @JoinColumn(name = "ID")
    public B getB();
}

@Entity
public class B {

    @Id
    @Column(name = "ID")
    public Long getID();
}
The merge code looks something like this:
@Transactional
public void create(A a, B b) {
    a = entityManager.merge(a);
    b.setId(a.getID());
    entityManager.merge(b);
}
Object A's ID is generated by a sequence, and it gets correctly set on B. Looking at the log, merge is called on A before it is called on B, and there is a @OneToOne mapping from A to B. However, at the end of the method, when it goes to actually commit, it tries to do the INSERT on B before the INSERT on A, which throws an IntegrityConstraintViolation because the "parent key was not found".
If I add entityManager.flush() before the second merge, it works fine.
@Transactional
public void create(A a, B b) {
    a = entityManager.merge(a);
    entityManager.flush();
    b.setId(a.getID());
    entityManager.merge(b);
}
However, flush() is an expensive operation that shouldn't be necessary. All of this should happen in the same transaction (the default propagation of @Transactional is Propagation.REQUIRED).
Any idea why this doesn't work without flush(), and why, even though the merge on A happens before the merge on B, the actual INSERTs on COMMIT are reversed?
If entities A and B do not have a relationship (i.e. @OneToOne, @OneToMany, ...), then the persistence provider cannot calculate the correct insertion order. IIRC, EclipseLink does not use the object-creation order when sending SQL statements to the database.
If you would like to refrain from using flush(), simply set your database constraints to be deferred.
As Frank mentioned, the code you have shown does not set the A->B relationship, so there is no way for the provider to know that this B object needs to be inserted before the A. Other relationships may cause it to think that, in general, A needs to be inserted first.
Deferring constraints can be done on some databases and refers to setting the database to defer constraint processing until the end of the transaction. If you defer or remove the constraints, you can then see whether the SQL being generated is correct, or whether there is another problem with the code and mappings that is being missed.
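As an aside, the standard JPA 2 way to model this kind of shared-primary-key one-to-one is a derived identity with @MapsId, which gives the provider the relationship it needs to order the inserts. A hedged sketch (letting B reference A, which matches the direction of the foreign key implied by the constraint violation, is an assumption on my part):
@Entity
public class B {

    @Id
    private Long id; // populated from the associated A by the provider

    // B's primary key is derived from the A it belongs to
    @OneToOne
    @MapsId
    @JoinColumn(name = "ID")
    private A a;

    public void setA(A a) {
        this.a = a;
    }
}
With such a mapping, calling b.setA(a) and persisting both in one transaction lets the provider insert A before B without an explicit flush().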
It appears that the merges are ordered alphabetically (at least, that is one possibility) unless there are bidirectional @OneToOne annotations.
Previously:
public class A {

    @OneToOne(targetEntity = B.class)
    @JoinColumn(name = "ID")
    public B getB();
}

public class B {

    @Id
    @Column(name = "ID")
    public Long getID();
}
Now:
public class A {

    @OneToOne(targetEntity = B.class)
    @JoinColumn(name = "ID")
    public B getB();
}

public class B {

    @Id
    @Column(name = "ID")
    public Long getID();

    @OneToOne(targetEntity = A.class)
    @JoinColumn(name = "ID")
    public A getA();
}
For what I'm doing, it doesn't matter that B has a way to get A, but I still don't understand why the annotations in A alone aren't sufficient.

WCF with Entity Framework Error

Error: The ObjectContext instance has been disposed and can no longer be used for operations that require a connection.
I am trying to create a WCF service with Entity Framework (VS 2010, .NET 4). When I run it, I get the above error.
I read something about editing the T4 template, but it appears that it already has
[DataContractAttribute(IsReference=true)]
public partial class Person : EntityObject
and
[DataMemberAttribute()]
public global::System.Int32 ID
{
    get
    {
        return _ID;
    }
I am not sure what the difference is between
[DataMemberAttribute()] and [DataMember]
or
[DataContractAttribute(IsReference=true)] and [DataContract]
either.
public Person GetPersonByID(int id)
{
    using (var ctx = new MyEntities())
    {
        return (from p in ctx.Person
                where p.ID == id
                select p).FirstOrDefault();
    }
}
How do WCF and EF work together properly?
Do you have navigation properties in your Person class? Did you disable lazy loading? If not, it will probably try to load the content of the navigation properties during serialization, and it fails because the context is already closed.
To your other questions:
[DataMemberAttribute()] and [DataMember] are the same; the latter is just a shorter name.
[DataContractAttribute(IsReference=true)] and [DataContract] are not the same. IsReference enables tracking of circular references in navigation properties. Without this parameter, a circular reference causes never-ending recursion.