In a system using EJB 3.1 and JPA 2.0, where should one cache CriteriaQuery objects?

The JPA 2.0 specification mentions in section 6.9 that CriteriaQuery objects are serializable, and hence may outlive any open EntityManagers or EntityManagerFactory instances:
CriteriaQuery objects must be serializable. A persistence vendor is required to support the subsequent deserialization of a CriteriaQuery object into a separate JVM instance of that vendor's runtime, where both runtime instances have access to any required vendor implementation classes.
The EJB 3.1 specification says in section 21.2.2:
An enterprise bean must not use thread synchronization primitives to synchronize execution of multiple instances, except if it is a Singleton session bean with bean-managed concurrency.
If I have a stateless session bean that wishes to pre-build a bunch of CriteriaQuery objects using a CriteriaBuilder obtained from an injected @PersistenceContext, where should I stash the results?
I can think of the following possibilities but am concerned that all but one run afoul of the "no synchronization primitives" clause above:
In a Map that is stored as the value of one of my bean's instance fields, understanding that I'll have to synchronize access to the map. My take: section 21.2.2 violation.
In a ConcurrentMap that is stored as the value of one of my bean's instance fields. My take: still a section 21.2.2 violation, as I'm sure the ConcurrentMap implementation synchronizes somewhere.
In a @Singleton EJB's instance field somewhere, where the @Singleton exists only to serve as this kind of cache; with bean-managed concurrency this should be legal, but now all my stateless session beans that want to make use of this CriteriaQuery cache have to inject the singleton into themselves... seems like a lot of overhead.
So it sounds like strictly speaking the last option is the only specification-compliant one. Am I correct?

I would consider putting them in a simple static context, accessible from anywhere. The problem lies in initializing them, since you need an entity manager instance to do that. Perhaps use a singleton EJB for initializing things, as described at Call method in EJB on JBoss startup. The singleton could initialize your criteria query cache, which could then serve criteria queries to your DAOs through the static context.
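A minimal sketch of that idea, assuming a hypothetical Person entity and made-up names; the singleton builds the queries once at startup and exposes them through a static, thread-safe map:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.annotation.PostConstruct;
import javax.ejb.ConcurrencyManagement;
import javax.ejb.ConcurrencyManagementType;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Root;

@Singleton
@Startup
@ConcurrencyManagement(ConcurrencyManagementType.BEAN)
public class CriteriaQueryCache {

    // static so DAOs can read the prebuilt queries without injecting the singleton
    private static final Map<String, CriteriaQuery<?>> QUERIES =
            new ConcurrentHashMap<String, CriteriaQuery<?>>();

    @PersistenceUnit
    private EntityManagerFactory emf;

    @PostConstruct
    void buildQueries() {
        CriteriaBuilder cb = emf.getCriteriaBuilder();
        CriteriaQuery<Person> byName = cb.createQuery(Person.class); // Person is a made-up entity
        Root<Person> p = byName.from(Person.class);
        byName.select(p).where(cb.equal(p.get("name"), cb.parameter(String.class, "name")));
        QUERIES.put("Person.byName", byName);
    }

    public static CriteriaQuery<?> get(String key) {
        return QUERIES.get(key);
    }
}

A stateless bean could then do em.createQuery((CriteriaQuery<Person>) CriteriaQueryCache.get("Person.byName")).setParameter("name", "Alice").getResultList(); with the usual unchecked-cast caveat.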
Another option would be to use JPQL, which has built-in support for precompiled queries. Of course you'd lose some advantages of using the Criteria API, though I think the main issue (type safety etc.) might be OK, since precompiled queries that are invalid should fail at deploy time rather than at runtime.
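For reference, a named JPQL query sketch (entity and query names are made up); the provider can validate it when the unit is deployed:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.NamedQuery;

@Entity
@NamedQuery(name = "Person.byName",
            query = "SELECT p FROM Person p WHERE p.name = :name")
public class Person {
    @Id
    private Long id;
    private String name;
}

It can then be executed with em.createNamedQuery("Person.byName", Person.class).setParameter("name", "Alice").getResultList();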

Related

When does CDI injection happen?

I am working with the WildFly server and am wondering when the injection actually happens. Does it happen at the time when it's needed, or is there some mechanism to do dependency resolution earlier?
If I use the @Inject annotation, I know that I would get an error if something cannot be injected (ambiguity, etc.). Does that mean that injection is done at deployment time? If so, how does that relate to this scenario: suppose I have BeanOne which injects BeanTwo, and BeanTwo injects BeanThree. Does this mean that this chain of beans will be allocated at deployment time? What happens if I have many chains like this, and suppose my bean pool is limited to some small number, say 2? How could it be done at deployment time when there are not enough beans and some of them would have to wait for their dependencies?
Is this case different from programmatic lookup of beans: CDI.current().select(MyStatelessBean.class).get();
or even injection using instances: @Inject Instance<MyStatelessBean> bean;?
The errors you are getting usually come from what is called the validation phase. That is done during deployment and does not mean that the actual beans are created.
In fact, bean creation is usually done lazily, especially when a proxy is in play (e.g. any normal-scoped bean). This is Weld-specific and other CDI implementations do not need to adhere to it, as the specification itself does not demand/forbid it.
In practice this means that when you @Inject Foo foo; all you actually get is a proxy object: a stateless 'shell' that knows how to get hold of the so-called contextual instance when needed. The contextual instance is created lazily, on demand, when you first attempt to use that bean, which is usually when you first try to invoke a method on it.
Thanks to the static nature of CDI, all dependencies of your beans are known at deployment time and can be validated, so the chain you had in your question can be verified and you will know whether all those beans are available/unsatisfied/ambiguous.
As for dynamic resolution, e.g. Instance<Bar>, this is somewhat different. CDI can only validate the initial declaration that you have; in the example above, that is a bean of type Foo with the default qualifier. Any subsequent calls to .select() methods are done at runtime, hence you always need to verify whether the instance you just tried to select is available, because you can easily select either a type that is not a bean, or a bean type with invalid qualifier(s). The Instance API offers special methods for just that.
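A small sketch of that runtime check (Bar and its doSomething() method are hypothetical):

import javax.enterprise.inject.Instance;
import javax.inject.Inject;

public class BarClient {

    @Inject
    Instance<Bar> bars;

    public void useBar() {
        // resolution happens at runtime, so check before dereferencing
        if (!bars.isUnsatisfied() && !bars.isAmbiguous()) {
            bars.get().doSomething();
        }
    }
}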

Entity Framework object materialization and dependency injection

I would like to be able to inject some dependencies (by using an IoC container) into entities just after they are loaded and materialized by Entity Framework (as a result of a query for instance).
It is possible to do so by hooking into the ObjectMaterialized event, but I'm wondering if there is a better way to achieve this, as I use EF 6 and code first.
Any advice or ideas?
Thanks
Riana
Although Entity Framework can be configured to allow dependencies to be injected into entities, I think it's safe to say that the general consensus (take a look at the opinions of Jimmy Bogard, Mark Seemann and me) is to not do this at all.
For me the main point is that classes like entities, DTOs and messages are very different from service classes. Entities, DTOs and messages are short-lived objects containing runtime data, while services contain behavior, are often long-lived, and simply process runtime data (such as entities).
That doesn't mean that you can't use services from your entities though. As Mark describes here, not letting your entities use services leads to an Anemic Domain Model. But what this does mean is that entities shouldn't be part of your object graph.
Instead, if you are practicing DDD, your entities can simply accept dependencies into the domain methods that you define on the entities. Those dependencies can then be supplied by the command handlers that execute the use case. In other words, dependencies are injected into the constructor of a command handler, and when calling an entity's domain method, the command handler supplies the dependencies that this method requires (usually just one or two) to that method (method injection).
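A minimal sketch of that shape, with entirely hypothetical names (shown in Java, but the pattern does not depend on the ORM or language):

// all names are made up, purely to illustrate method injection into a domain method
interface ShippingService {
    void ship(String orderId);
}

class Order {
    private final String id;

    Order(String id) {
        this.id = id;
    }

    // domain method: the service it needs is passed in per call
    void ship(ShippingService shipping) {
        shipping.ship(id);
    }
}

class ShipOrderHandler {
    private final ShippingService shipping; // constructor-injected into the handler

    ShipOrderHandler(ShippingService shipping) {
        this.shipping = shipping;
    }

    void handle(Order order) {
        order.ship(shipping); // the handler supplies the dependency to the domain method
    }
}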

How to get a detached object from JPA

In my application I need most objects fetched in detached mode (fetched with the find API).
I'm wondering if there is a way to request a detached object from the JPA provider and save the extra call to the detach() API.
In addition, I would expect an object created in this mode to be less expensive, since the JPA provider doesn't need to add it to the entity manager context.
Is there a way to achieve this with JPA APIs?
Is there a way to achieve such functionality with query results?
Specifically, I'm using EclipseLink, so if there is a specific way to do it with this implementation that would be helpful as well.
You can fetch a detached entity without an extra call to detach() if you fetch it outside a transaction. If you are not using container-managed transactions, it's trivial, simply do not start a transaction.
If you are using CMT, you have to make sure the requesting object is not a transaction-enabled EJB:
if in an EJB, suspend the transaction by annotating the appropriate method with @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED),
or
call the EntityManager from a POJO. You don't have to call it directly; it is only important that the query result ends up in a non-EJB object.
AFAIK, there is no performance gain to be expected, since the query result will always be put in the current persistence context, however short-lived it may be.
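A minimal sketch of the first variant (entity and method names are made up); with the transaction suspended, the persistence context ends with the call, so the returned entity is detached:

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class ReadOnlyPersonDao {

    @PersistenceContext
    private EntityManager em;

    // no transaction here, so the result is no longer managed once the method returns
    @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
    public Person findDetached(Long id) {
        return em.find(Person.class, id);
    }
}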
EDIT: There is another possibility to get detached objects which does not depend on transaction demarcations: JPA constructor expressions:
List<DTO> dtos = em.createQuery("SELECT NEW com.example.DTO(o.title, o.version) FROM Entity o").getResultList();
The constructed type must have a constructor with all the relevant attributes. The objects in the list, entities or not, will always be created detached. However there is a small overhead of instantiating a new object.
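The matching constructed type could look like this (the attribute types are assumptions based on the query above):

package com.example;

public class DTO {

    private final String title;
    private final Long version;

    // constructor parameters must match the SELECT NEW expression, in order
    public DTO(String title, Long version) {
        this.title = title;
        this.version = version;
    }
}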

Design decision for a Java EE 6 (EJB, JSF, CDI, JPA) application

I am developing a small (but growing) Java EE project based on the technologies EJB 3.1, JSF 2, CDI (Weld) and JPA 2, deployed on JBoss AS 7.1 Beta1.
As a starting point I created a Maven project based on the Knappsack Maven archetypes.
My architecture is basically the same provided by the archetype and as my project grows I think this archetype seems to be reaching its limits. I want to modify the basic idea of the archetype according to my needs. But let me first explain how the project is organized at the moment.
The whole project is built around Seam-like Home classes. The view references them (via EL in XHTML templates). Most of the Home classes are @Named and @RequestScoped (or, for short, @Model) or @ConversationScoped, and Enterprise Java Beans are @Injected. Basically these (normally @Local) EJBs are responsible for the database access (some kind of DAOs), in order to get transactions managed automatically by the container. So every DAO class has its own EntityManager injected via CDI. At the moment every DAO integrates aspects which logically belong together (e.g. there is a SchoolDao in the archetype which is responsible for creating Teachers, Students and Courses).
This of course results in growing DAOs which have no well defined task and which become hard to maintain and hard to understand. And as a painful side effect the risk of duplicate code grows.
As a consequence I want to break up this design by having only DAOs which are responsible for one specific task (a StudentDao, a TeacherDao and so on). And at this point I am in trouble. As each DAO has a reference to its own EntityManager, it cannot be guaranteed that something like the following will work (I think it never will :)
Teacher teacher = teacherDao.find(teacherId);
course.setTeacher(teacher);
courseDao.save(course);
The JPA implementation complains about a null value for column COURSE.TEACHER_ID (assuming Course has a non-nullable FK relationship to Teacher). Each DAO holds its own EntityManager: the teacher is managed by the one in the TeacherDao, but the one in the CourseDao tries to merge the Course @Entity.
Maybe the archetype I used is not suitable for larger applications. But what would be an appropriate design for such an application then, if the technologies I used are obligatory (EJB 3.1 for container-managed transactions [and later on other business-related stuff], JSF as the view technology, JPA as the database mapper, and CDI as the 'must have because it's hip' :)?
Edit:
I now have an EntityManager injected into the base class that all other DAO classes inherit from. So all DAOs use the same instance (the debugger shows the same object id), but I still have the problem that all entities I read from the database are immediately detached. This makes me wonder, as it means that there is either no container-managed transaction or the transaction gets closed immediately after the entity was read. Each DAO is a @Local @Stateless EJB. They are injected into my JSF beans (@Named and @RequestScoped), from where I want to make use of the CRUD operations. Is there anything I'm missing?
Having each DAO have its own EntityManager is a very bad design.
You should have an EntityManager per transaction/request and pass it to each DAO, or have them share the same one, or get it from the context.
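A hedged sketch of the "get it from the context" variant, with a container-injected, transaction-scoped EntityManager in a common base class (class names follow the question's example, each bean in its own source file):

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

public abstract class AbstractDao {

    // container-managed and transaction-scoped: all DAOs joining the same
    // JTA transaction work against the same persistence context
    @PersistenceContext
    protected EntityManager em;
}

@Stateless
public class TeacherDao extends AbstractDao {
    public Teacher find(Long id) {
        return em.find(Teacher.class, id);
    }
}

@Stateless
public class CourseDao extends AbstractDao {
    public void save(Course course) {
        em.merge(course);
    }
}

Note that the teacher only stays managed across both calls if they run in the same transaction, for example when both DAOs are invoked from one EJB facade method.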

Dynamically select datasource for entities at runtime

I have an entity bean that represents an expected result over multiple databases/datasources; different queries may be executed, but the same kind of result always comes back. So the bean is re-used over different datasources, which should be dynamically selectable.
Is it possible with JPA to select during runtime the data source to be used to execute a query, and return the same type of entity bean?
Also, does my EJB/application need to define the datasources that will be used? Or can I always specify via JNDI which datasource to use? Modifying the descriptors and re-deploying the application every time a new datasource is created is not an option.
Sorry if the question does not make 100% sense; it's rather difficult to get the idea across.
Is it possible with JPA to select during runtime the data source to be used to execute a query, and return the same type of entity bean?
You can't change the datasource of a persistence unit at runtime. However, you can configure several persistence units and use one EntityManagerFactory or another. Maybe JPA is not the right tool for your use case.
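A sketch of the several-persistence-units idea (the unit names and the ResultEntity type are made up); each unit points at a different datasource in persistence.xml:

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;

@Stateless
public class ResultFinder {

    @PersistenceUnit(unitName = "customers-pu")
    private EntityManagerFactory customersEmf;

    @PersistenceUnit(unitName = "archive-pu")
    private EntityManagerFactory archiveEmf;

    public ResultEntity find(boolean useArchive, Long id) {
        // pick the factory (and therefore the datasource) at runtime
        EntityManager em = (useArchive ? archiveEmf : customersEmf).createEntityManager();
        try {
            return em.find(ResultEntity.class, id);
        } finally {
            em.close();
        }
    }
}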
Modifying the descriptor's and re-deploying an application everytime a new datasource is created is not an option.
And how will the application be aware of the "available datasources"?
You can change the JPA datasource at runtime, but the approach is tricky (introspection, JPA-implementation specific, ...).
I've implemented my own javax.persistence.spi.PersistenceProvider which overrides org.hibernate.ejb.HibernatePersistence and sets the datasource in both the Map and the PersistenceUnitInfo of the PersistenceProvider just before creating the EntityManagerFactory. This way, my EntityManagerFactory has a datasource which has been configured at runtime. I keep my EntityManagerFactory until the application is undeployed.
You could use the same approach and create N different EntityManagerFactory instances, each with its own specific datasource. However, keep in mind that each EntityManagerFactory uses a lot of memory.