How would you set up a JSF entity, service and backing bean? [duplicate] - jpa

I'm trying to get used to how JSF works with regard to accessing data (coming from a Spring background).
I'm creating a simple example that maintains a list of users. I have something like:
<h:dataTable value="#{userListController.userList}" var="u">
    <h:column>#{u.userId}</h:column>
    <h:column>#{u.userName}</h:column>
</h:dataTable>
Then the "controller" has something like
@Named(value = "userListController")
@SessionScoped
public class UserListController {

    @EJB
    private UserListService userListService;

    private List<User> userList;

    public List<User> getUserList() {
        userList = userListService.getUsers();
        return userList;
    }
}
And the "service" (although it seems more like a DAO) has
public class UserListService {

    @PersistenceContext
    private EntityManager em;

    public List<User> getUsers() {
        Query query = em.createQuery("SELECT u from User as u");
        return query.getResultList();
    }
}
Is this the correct way of doing things? Is my terminology right? The "service" feels more like a DAO? And the controller feels like it's doing some of the job of the service.

Is this the correct way of doing things?
Apart from performing business logic inefficiently in a managed bean getter method, and using too broad a managed bean scope, it looks okay. If you move the service call from the getter method to a @PostConstruct method and use either @RequestScoped or @ViewScoped instead of @SessionScoped, it will look better.
See also:
Why JSF calls getters multiple times
How to choose the right bean scope?
Is my terminology right?
It's okay, as long as you're consistent with it and the code is readable in a sensible way. Only your way of naming classes and variables is somewhat awkward (illogical and/or duplication). For instance, I personally would use users instead of userList, use var="user" instead of var="u", and use id and name instead of userId and userName. Also, a "UserListService" sounds like it can only deal with lists of users instead of users in general. I'd rather use "UserService" so you can also use it for creating, updating and deleting users.
See also:
JSF managed bean naming conventions
The "service" feels more like a DAO?
It isn't exactly a DAO. Basically, JPA is the real DAO here. Previously, when JPA didn't exist, everyone homegrew DAO interfaces so that the service methods could keep using them even when the underlying implementation ("plain old" JDBC, or "good old" Hibernate, etc.) changed. The real task of a service method is transparently managing transactions. This isn't the responsibility of the DAO.
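As an illustration (a minimal sketch, not taken from the original answer), a container-managed transaction in a @Stateless service lets several EntityManager operations succeed or fail as one unit, without any explicit transaction plumbing in the service or a DAO:

@Stateless
public class UserService {

    @PersistenceContext
    private EntityManager em;

    // Both persists run in one container-managed transaction (default REQUIRED);
    // if either fails, the whole transaction is rolled back.
    public void register(User user) {
        em.persist(user);
        em.persist(new AuditEntry("registered", user)); // AuditEntry is a hypothetical entity
    }
}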
See also:
I found JPA, or alike, don't encourage DAO pattern
DAO and JDBC relation?
When is it necessary or convenient to use Spring or EJB3 or all of them together?
And the controller feels like it's doing some of the job of the service.
I can imagine that it does that in this relatively simple setup. However, the controller is in fact part of the frontend, not the backend. The service is part of the backend, which should be designed in such a way that it's reusable across all different frontends, such as JSF, JAX-RS, "plain" JSP+Servlet, even Swing, etc. Moreover, the frontend-specific controller (also called "backing bean" or "presenter") allows you to deal in a frontend-specific way with success and/or exceptional outcomes, such as, in JSF's case, displaying a faces message in case of an exception thrown from a service.
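For instance (a hypothetical sketch, not part of the original answer), a backing bean action could translate a service exception into a faces message like this:

public void save() {
    try {
        userService.create(user); // hypothetical service method
        FacesContext.getCurrentInstance().addMessage(null,
            new FacesMessage("User successfully created"));
    } catch (EJBException e) {
        FacesContext.getCurrentInstance().addMessage(null,
            new FacesMessage(FacesMessage.SEVERITY_ERROR, "Could not create user", null));
    }
}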
See also:
JSF Service Layer
What components are MVC in JSF MVC framework?
All in all, the correct approach would be like below:
<h:dataTable value="#{userBacking.users}" var="user">
    <h:column>#{user.id}</h:column>
    <h:column>#{user.name}</h:column>
</h:dataTable>

@Named
@RequestScoped // Use @ViewScoped once you bring in ajax (e.g. CRUD)
public class UserBacking {

    private List<User> users;

    @EJB
    private UserService userService;

    @PostConstruct
    public void init() {
        users = userService.listAll();
    }

    public List<User> getUsers() {
        return users;
    }
}

@Stateless
public class UserService {

    @PersistenceContext
    private EntityManager em;

    public List<User> listAll() {
        return em.createQuery("SELECT u FROM User u", User.class).getResultList();
    }
}
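For completeness, the User entity assumed by the snippets above might look roughly like this (a minimal sketch; the exact mapping and generation strategy are assumptions):

@Entity
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;

    public Long getId() { return id; }
    public String getName() { return name; }
    // setters, equals/hashCode, etc. omitted
}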
You can find a real world kickoff project here utilizing the canonical Java EE / JSF / CDI / EJB / JPA practices: Java EE kickoff app.
See also:
Creating master-detail pages for entities, how to link them and which bean scope to choose
Passing a JSF2 managed pojo bean into EJB or putting what is required into a transfer object
Filter do not initialize EntityManager
javax.persistence.TransactionRequiredException in small facelet application

It is a DAO (well, actually a repository, but don't worry about that difference too much), as it is accessing the database using the persistence context.
You should create a Service class that wraps that method and is where the transactions are invoked.
Sometimes the service classes feel unnecessary, but when you have a service method that calls many DAO methods, their use is more warranted.
I normally end up just creating the service, even if it does feel unnecessary, to ensure the patterns stay the same and the DAO is never injected directly.
This adds an extra layer of abstraction making future refactoring more flexible.
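A minimal sketch of that layering, assuming an injectable UserDao and container-managed transactions (the names here are hypothetical):

@Stateless
public class UserService {

    @Inject
    private UserDao userDao; // hypothetical DAO/repository

    // The container starts and commits the transaction around this method;
    // the DAO itself stays transaction-unaware.
    public List<User> findAllUsers() {
        return userDao.findAll();
    }
}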

Related

Injecting a Stateless EJB into another Stateless and using PersistenceContext

With JEE 5 / EJB 3.0, the life of Java developers became much easier. Later, influenced by Spring and CDI, similar approaches were also adopted in JEE.
Now, I hope I am doing it right, but just to be sure:
I have several Stateless EJBs, which all query and / or modify the database. An example is
@Stateless
public class AddressDBService {

    @PersistenceContext
    protected EntityManager em;
Some of the Stateless EJBs refer to other services like this:
@Stateless
public class AVeDBService {

    @PersistenceContext
    protected EntityManager em;

    @Inject
    private HomeToDealDBService homeToDealDBService;

    @Inject
    private AddressDBService addressDBservice;
and in the Stateless EJBs I have public methods like the ones below:
@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
public void saveEntity(Home home) throws EntityExistsException {
    this.em.persist(home);
    addressDBservice.saveAddress(home.getMainAddress(), home);
}
I am almost certain this usage is correct and thread-safe (the above services are in turn injected into JSF managed beans).
Could somebody confirm that my usage is correct, thread-safe and conforms to good practices?
My usage seems to conform with the following questions:
Is EntityManager really thread-safe?
Stateless EJB with more injected EJBs instances
The "is correct?" question can't be answered without know the goal of the project.
It could works? Yes, you have posted java-ee code that could deploy, but is not enough.
I usually use BCE (Boundary Control Entity) pattern and Domain Driven pattern.
In this pattern we use EJB for business logic services or endpoint (JAX-RS) and all other injections, that are the Control part, are CDI objects.
Entities (JPA) could use cascade to avoid to manually save related entities:
addressDBservice.saveAddress(home.getMainAddress(), home);
can be avoided if you define the entity like this:
@Entity
public class Home {

    @ManyToOne(cascade = CascadeType.ALL)
    private Address mainAddress;
}
The @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW) annotation corresponds to a specific transactional behavior; it is not required, so it is correct only if that is really what you want.
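For illustration (a sketch based on the default EJB behavior, not code taken from the question): if you omit the annotation, the business method uses TransactionAttributeType.REQUIRED, i.e. it joins the caller's transaction or starts a new one, which is usually what you want for a save operation.

@Stateless
public class AVeDBService {

    @PersistenceContext
    protected EntityManager em;

    // No @TransactionAttribute: the default is REQUIRED, so this method joins the
    // caller's transaction (or starts one). With cascade = CascadeType.ALL on
    // mainAddress, the address is persisted together with the home in the same transaction.
    public void saveEntity(Home home) {
        em.persist(home);
    }
}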

Common lib for EntityManager CDI

I have a common generic DAO in a common lib. I want each module which uses this DAO to initialize it with its own persistence unit:
public abstract class GenericDao implements IGenericDao {

    @PersistenceContext(unitName = "XXXX")
    private EntityManager entityManager;
and in another module:
public class CarDao extends GenericDao {
I have a lot of projects using this generic DAO, but each project has its own persistence unit.
The persistence units differ depending on the project in which the common library is used.
The point is that I could not use OOP with an abstract getEntityManager() injected in each microservice, because in the common project we have a history DAO shared by all microservices, and for each one I have to retrieve the EntityManager injected from that microservice.
Am I doing this right or wrong? And how do I set the persistence unit in each project? (Each project has a lot of DAOs and I don't want to repeat the CRUD methods each time.)
@PersistenceContext(unitName = "XXXX")
private EntityManager entityManager;
This should be done in each concrete class; the abstract one should implement the common operations using
getEntityManager().doSomething(entity)
with the getter getEntityManager() being abstract (a sketch follows below).
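A minimal sketch of that layout (class names, the type parameter and the unit name are assumptions for illustration):

// Common library: no @PersistenceContext here, only the abstract hook.
public abstract class GenericDao<T> implements IGenericDao<T> {

    protected abstract EntityManager getEntityManager();

    public void save(T entity) {
        getEntityManager().persist(entity);
    }
}

// In each module: the concrete DAO binds its own persistence unit.
public class CarDao extends GenericDao<Car> {

    @PersistenceContext(unitName = "carsPU") // hypothetical unit name
    private EntityManager entityManager;

    @Override
    protected EntityManager getEntityManager() {
        return entityManager;
    }
}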
Imho this is a design smell; EntityManager is already an abstraction and you have nothing to gain by encapsulating it.
[edit]
Regarding the "factory" approach, the way in CDI to dynamically inject resources is using producer methods.
You can thus create a method returning an EntityManager instance that will dynamically resolve the EntityManagerFactory according to the persistence unit name (see an example here).
Note that this is generally a bad idea, as the EntityManager scope is usually bound to the transaction scope; letting the container inject the EntityManager instance guarantees that the scope will be handled correctly (by the container). The only viable configuration with this approach is when you want an "application managed" EntityManager.
NB: the given example will instantiate a new EntityManagerFactory instance for each injection, which can be really catastrophic depending on the way you use it (the EntityManagerFactory should be created once for the whole application).
Be sure you are aware of the EntityManager lifecycle before going further.
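Purely for illustration, a hedged sketch of such a producer (names are hypothetical; the factory is created once and reused, and the produced EntityManager is application-managed, so the caller is responsible for transactions and for closing it):

@ApplicationScoped
public class EntityManagerProducer {

    // Created once for the whole application, as recommended above.
    private final EntityManagerFactory emf =
            Persistence.createEntityManagerFactory("myPU"); // hypothetical unit name

    @Produces
    public EntityManager produceEntityManager() {
        // Application-managed EntityManager: caller handles transactions and close().
        return emf.createEntityManager();
    }

    @PreDestroy
    public void close() {
        emf.close();
    }
}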
Thank you guys for your advice. In fact I was being totally stupid; in my GenericDao I simply put:
public abstract class GenericDao implements IGenericDao {

    @PersistenceContext
    private EntityManager entityManager;
As we have only one persistence unit, it will be injected automatically. It was so easy!
Then I can use @PersistenceContext in all DAOs, or simply (and best) call getEntityManager() from their parent IGenericDao.

How to make @EJB injection work on the server?

Looking at this answer, it says:
If you don't want to use an Application Client Container and instead just run the application client class through a java command, injection won't be possible and you'll have to perform a JNDI lookup.
However, given that I am trying to inject a DAO bean like the example shown here, if I cannot do the automatic injection, it means my application must manually do the JNDI lookup and all the transaction begin/end that I would get for free if @EJB actually worked.
However, since everything is within the same Eclipse EJB Project (it also failed with the same null handle when I had my client code in a Dynamic Web Project), surely there must be an easy way to get it all working? Can anyone suggest what I am doing wrong?
Finally, this article suggests that DAOs are not needed, but if I replace within my EJB:
@EJB MyDao dao;
with the more direct:
@PersistenceContext private EntityManager em;
I still get a similar null value; is this the same injection failure problem?
NB: I have just noticed this answer:
This is a bug in Glassfish (apparently in the web services stack).
I am running v4.0 Build 89; does it still have this bug? Does this mean I have to do all JPA actions the long-winded way?
I eventually found out that the problem/issue is that in order to use injection of @PersistenceContext, the class MUST be a bean itself. This is hinted at in the example on Wikipedia:
@Stateless
public class CustomerService {

    @PersistenceContext
    private EntityManager entityManager;

    public void addCustomer(Customer customer) {
        entityManager.persist(customer);
    }
}
I could delete this question, but perhaps leaving this answer might provide a hint to someone, or at least show them a minimal working example of EJB and JPA.

Entity Framework 6 Code First - Is Repository Implementation a Good One?

I am about to implement an Entity Framework 6 design with a repository and unit of work.
There are so many articles around and I'm not sure what the best advice is. For example, I really like the pattern implemented here, for the reasons suggested in the article here.
However, Tom Dykstra (Senior Programming Writer on Microsoft's Web Platform & Tools Content Team) suggests it should be done in another article: here
I subscribe to Pluralsight, and it is implemented in a slightly different way pretty much every time it is used in a course, so choosing a design is difficult.
Some people seem to suggest that unit of work is already implemented by DbContext as in this post, so we shouldn't need to implement it at all.
I realise that this type of question has been asked before and this may be subjective but my question is direct:
I like the approach in the first (Code Fizzle) article and wanted to know if it is perhaps more maintainable and as easily testable as other approaches and safe to go ahead with?
Any other views are more than welcome.
@Chris Hardie is correct: EF implements UoW out of the box. However, many people overlook the fact that EF also implements a generic repository pattern out of the box too:
var repos1 = _dbContext.Set<Widget1>();
var repos2 = _dbContext.Set<Widget2>();
var reposN = _dbContext.Set<WidgetN>();
...and this is a pretty good generic repository implementation that is built into the tool itself.
Why go through the trouble of creating a ton of other interfaces and properties, when DbContext gives you everything you need? If you want to abstract the DbContext behind application-level interfaces, and you want to apply command query segregation, you could do something as simple as this:
public interface IReadEntities
{
    IQueryable<TEntity> Query<TEntity>() where TEntity : class;
}

public interface IWriteEntities : IReadEntities, IUnitOfWork
{
    IQueryable<TEntity> Load<TEntity>() where TEntity : class;
    void Create<TEntity>(TEntity entity) where TEntity : class;
    void Update<TEntity>(TEntity entity) where TEntity : class;
    void Delete<TEntity>(TEntity entity) where TEntity : class;
}

public interface IUnitOfWork
{
    int SaveChanges();
}
You could use these 3 interfaces for all of your entity access, and not have to worry about injecting 3 or more different repositories into business code that works with 3 or more entity sets. Of course you would still use IoC to ensure that there is only 1 DbContext instance per web request, but all 3 of your interfaces are implemented by the same class, which makes it easier.
public class MyDbContext : DbContext, IWriteEntities
{
    public IQueryable<TEntity> Query<TEntity>() where TEntity : class
    {
        return Set<TEntity>().AsNoTracking(); // detach results from context
    }

    public IQueryable<TEntity> Load<TEntity>() where TEntity : class
    {
        return Set<TEntity>();
    }

    public void Create<TEntity>(TEntity entity) where TEntity : class
    {
        if (Entry(entity).State == EntityState.Detached)
            Set<TEntity>().Add(entity);
    }

    // ...etc
}
You now only need to inject a single interface into your dependency, regardless of how many different entities it needs to work with:
// NOTE: In reality I would never inject IWriteEntities into an MVC Controller.
// Instead I would inject my CQRS business layer, which consumes IWriteEntities.
// See @MikeSW's answer for more info as to why you shouldn't consume a
// generic repository like this directly by your web application layer.
// See http://www.cuttingedge.it/blogs/steven/pivot/entry.php?id=91 and
// http://www.cuttingedge.it/blogs/steven/pivot/entry.php?id=92 for more info
// on what a CQRS business layer that consumes IWriteEntities / IReadEntities
// (and is consumed by an MVC Controller) might look like.
public class RecipeController : Controller
{
    private readonly IWriteEntities _entities;

    //Using Dependency Injection
    public RecipeController(IWriteEntities entities)
    {
        _entities = entities;
    }

    [HttpPost]
    public ActionResult Create(CreateEditRecipeViewModel model)
    {
        Mapper.CreateMap<CreateEditRecipeViewModel, Recipe>()
            .ForMember(r => r.IngredientAmounts, opt => opt.Ignore());
        Recipe recipe = Mapper.Map<CreateEditRecipeViewModel, Recipe>(model);
        _entities.Create(recipe);
        foreach (Tag t in model.Tags) {
            _entities.Create(t);
        }
        _entities.SaveChanges();
        return RedirectToAction("CreateRecipeSuccess");
    }
}
One of my favorite things about this design is that it minimizes the entity storage dependencies on the consumer. In this example the RecipeController is the consumer, but in a real application the consumer would be a command handler. (For a query handler, you would typically consume IReadEntities only because you just want to return data, not mutate any state.) But for this example, let's just use RecipeController as the consumer to examine the dependency implications:
Say you have a set of unit tests written for the above action. In each of these unit tests, you new up the Controller, passing a mock into the constructor. Then, say your customer decides they want to allow people to create a new Cookbook or add to an existing one when creating a new recipe.
With a repository-per-entity or repository-per-aggregate interface pattern, you would have to inject a new repository instance IRepository<Cookbook> into your controller constructor (or, using @Chris Hardie's answer, write code to attach yet another repository to the UoW instance). This would immediately make all of your other unit tests break, and you would have to go back to modify the construction code in all of them, passing yet another mock instance, and widening your dependency array. However, with the above, all of your other unit tests will still at least compile. All you have to do is write additional test(s) to cover the new cookbook functionality.
I'm (not) sorry to say that the codefizzle and Dykstra articles and the previous answers are wrong, for the simple fact that they use the EF entities as domain (business) objects, which is a big WTF.
Update: For a less technical explanation (in plain words) read Repository Pattern for Dummies
In a nutshell, ANY repository interface should not be coupled to ANY persistence (ORM) detail. The repo interface deals ONLY with objects that make sense for the rest of the app (domain, maybe UI as in presentation). A LOT of people (with MS leading the pack, with intent I suspect) make the mistake of believing that they can reuse their EF entities, or that business objects can be built on top of them.
While it can happen, it's quite rare. In practice, you'll have a lot of domain objects 'designed' after database rules, i.e. bad modelling. The repository's purpose is to decouple the rest of the app (mainly the business layer) from its persistence form.
How do you decouple it when your repo deals with EF entities (a persistence detail) or its methods return IQueryable, a leaky abstraction with the wrong semantics for this purpose (IQueryable allows you to build a query, thus implying that you need to know persistence details, thus negating the repository's purpose and functionality)?
A domain object should never know about persistence, EF, joins, etc. It shouldn't know what db engine you're using or if you're using one. The same goes for the rest of the app, if you want it to be decoupled from the persistence details.
The repository interface knows only about what the higher layer knows. This means that a generic domain repository interface looks like this:
public interface IStore<TDomainObject> //where TDomainObject != EF (ORM) entity
{
    void Save(TDomainObject entity);
    TDomainObject Get(Guid id);
    void Delete(Guid id);
}
The implementation will reside in the DAL and will use EF to work with the db. However the implementation looks like this
public class UsersRepository : IStore<User>
{
    public UsersRepository(DbContext db) { }

    public void Save(User entity)
    {
        //map entity to one or more ORM entities
        //use EF to save it
    }

    //.. other methods implementation ...
}
You don't really have a concrete generic repository. The only usage of a concrete generic repository is when ANY domain object is stored in serialized form in a key-value like table. It isn't the case with an ORM.
What about querying?
public interface IQueryUsers
{
    PagedResult<UserData> GetAll(int skip, int take);
    //or
    PagedResult<UserData> Get(CriteriaObject criteria, int skip, int take);
}
UserData is the read/view model that fits the query context.
You can use EF directly for querying in a query handler if you don't mind that your DAL knows about view models, and in that case you won't need any query repo.
Conclusion
Your business object shouldn't know about EF entities.
The repository will use an ORM, but it never exposes the ORM to the rest of the app, so the repo interface will use only domain objects or view models (or any other app context object that isn't a persistence detail)
You do not tell the repo how to do its work, i.e. NEVER use IQueryable with a repo interface.
If you just want to use the db in an easier/cooler way and you're dealing with a simple CRUD app where you don't need (be sure about it) to maintain separation of concerns, then skip the repository altogether and use EF directly for all data access. The app will be tightly coupled to EF, but at least you'll cut out the middle man and it will be on purpose, not by mistake.
Note that using the repository in the wrong way will invalidate its use, and your app will still be tightly coupled to the persistence (ORM).
In case you believe the ORM is there to magically store your domain objects, it's not. The ORM's purpose is to simulate OOP storage on top of relational tables. It has everything to do with persistence and nothing to do with the domain, so don't use the ORM outside persistence.
DbContext is indeed built with the Unit of Work pattern. It allows all of its entities to share the same context as we work with them. This implementation is internal to the DbContext.
However, it should be noted that if you instantiate two DbContext objects, neither of them will see the other's entities that they are each tracking. They are insulated from one another, which can be problematic.
When I build an MVC application, I want to ensure that during the course of the request, all my data access code works off of a single DbContext. To achieve that, I apply the Unit of Work as a pattern external to DbContext.
Here is my Unit of Work object from a barbecue recipe app I'm building:
public class UnitOfWork : IUnitOfWork
{
    private BarbecurianContext _context = new BarbecurianContext();
    private IRepository<Recipe> _recipeRepository;
    private IRepository<Category> _categoryRepository;
    private IRepository<Tag> _tagRepository;

    public IRepository<Recipe> RecipeRepository
    {
        get
        {
            if (_recipeRepository == null)
            {
                _recipeRepository = new RecipeRepository(_context);
            }
            return _recipeRepository;
        }
    }

    public void Save()
    {
        _context.SaveChanges();
    }
**SNIP**
I attach all my repositories, which are all injected with the same DbContext, to my Unit of Work object. So long as any repositories are requested from the Unit of Work object, we can be assured that all our data access code will be managed with the same DbContext - awesome sauce!
If I were to use this in an MVC app, I would ensure the Unit of Work is used throughout the request by instantiating it in the controller, and using it throughout its actions:
public class RecipeController : Controller
{
    private IUnitOfWork _unitOfWork;
    private IRepository<Recipe> _recipeRepository;
    private IRepository<Category> _categoryRepository;
    private IRepository<Tag> _tagRepository;

    //Using Dependency Injection
    public RecipeController(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
        _categoryRepository = _unitOfWork.CategoryRepository;
        _recipeRepository = _unitOfWork.RecipeRepository;
        _tagRepository = _unitOfWork.TagRepository;
    }
Now in our action, we can be assured that all our data access code will use the same DbContext:
[HttpPost]
public ActionResult Create(CreateEditRecipeViewModel model)
{
    Mapper.CreateMap<CreateEditRecipeViewModel, Recipe>().ForMember(r => r.IngredientAmounts, opt => opt.Ignore());
    Recipe recipe = Mapper.Map<CreateEditRecipeViewModel, Recipe>(model);
    _recipeRepository.Create(recipe);
    foreach (Tag t in model.Tags) {
        _tagRepository.Create(t); //I'm using the same DbContext as the recipe repo!
    }
    _unitOfWork.Save();
Searching around the internet I found this: http://www.thereformedprogrammer.net/is-the-repository-pattern-useful-with-entity-framework/ It's a 2-part article about the usefulness of the repository pattern by Jon Smith.
The second part focuses on a solution. Hope it helps!
To answer your question: a repository with a unit of work pattern implementation is a bad one.
The DbContext of Entity Framework is implemented by Microsoft according to the unit of work pattern. That means context.SaveChanges transactionally saves your changes in one go.
The DbSet is also an implementation of the Repository pattern. Do not build repositories where all you do is:
void Add(Customer c)
{
    _context.Customers.Add(c);
}
Why create a one-liner method for something you can do inside the service anyway?
There is no benefit, and nobody is swapping the EF ORM for another ORM nowadays...
You do not need that freedom...
Chris Hardie argues that multiple context objects could be instantiated, but if you're already doing that, you're doing it wrong...
Just use an IoC tool you like, set up MyContext per HTTP request, and you're fine.
Take ninject for example:
kernel.Bind<ITeststepService>().To<TeststepService>().InRequestScope().WithConstructorArgument("context", c => new ITMSContext());
The service running the business logic gets the context injected.
Just keep it simple stupid :-)
You should consider "command/query objects" as an alternative, you can find a bunch of interesting articles around this area, but here is a good one:
https://rob.conery.io/2014/03/03/repositories-and-unitofwork-are-not-a-good-idea/
When you need a transaction over multiple DB objects, use one command object per command to avoid the complexity of the UOW pattern.
A query object per query is likely unnecessary for most projects. Instead you might choose to start with a 'FooQueries' object
...by which I mean you can start with a Repository pattern for READS but name it as "Queries" to be explicit that it does not and should not do any inserts/updates.
Later, you might find splitting out individual query objects worthwhile; if you want to add things like authorization and logging, you could feed a query object into a pipeline.
I always use UoW with EF code first. I find it more performant and easier to manage your contexts, to prevent memory leaks and such. You can find an example of my workaround on my GitHub: http://www.github.com/stefchri in the RADAR project.
If you have any questions about it feel free to ask them.

Does Guice Persist provide transaction scoped or application managed EntityManager?

We use Guice Persist to inject EntityManager in our project.
E.g.
public class MyDao {

    @Inject
    EntityManager em;

    public void someMethod() {
        //uses em instance
    }
}
But it is unclear to us how the injected instance of EntityManager is supposed to be used.
What type of EntityManager is this? (See e.g.: types of entity managers.) Under the hood, Guice Persist instantiates it via EntityManagerFactory.createEntityManager(), so I'd say it's an application-managed entity manager. But in the official wiki they write about a session-per-transaction strategy, which suggests that the EntityManager is (pseudo) transaction-scoped.
Should we invoke close() on it manually? Or will Guice take care of it?
What is the scope of first level cache? Only single transaction (like in transaction-scoped entity managers) or as long as I use the same injected instance of EntityManager (like in application managed entity managers)?
Even though the question is perfectly answered by Piotr, I'd like to add some practical advice on how to use guice-persist.
I've been having issues with it which were pretty hard to debug. In my application certain threads would display outdated data and sometimes EntityManager instances were left with old, dead database connections. The root cause was to be found in the way I used the @Transactional annotation (I only used it for methods that do updates/inserts/deletes, not for read-only methods). guice-persist stores EntityManager instances in a ThreadLocal as soon as you call get() on an injected Provider<EntityManager> instance (which I did for read-only methods). However, this ThreadLocal is only removed if you also call UnitOfWork.end() (which normally is done by the interceptor if @Transactional is on the method). Not doing so will leave the EM instance in the ThreadLocal, so that eventually every thread in your thread pool will have an old EM instance with stale cached entities.
So, as long as you stick to the following simple rules, the usage of guice-persist is straightforward (a minimal sketch follows the list):
Always inject Provider<EntityManager> instead of EntityManager directly.
For transaction-scoped semantics: always annotate each method with @Transactional (even the read-only methods). This way the JpaLocalTxnInterceptor will intercept the calls to your annotated methods, making sure not only to start and commit transactions but also to set and unset EM instances in the ThreadLocal.
For request-scoped semantics: use the PersistFilter servlet filter that ships with guice-persist. It will call begin() and end() on the UnitOfWork for you before and after the request is done, thereby populating and cleaning up the ThreadLocal.
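Here is a minimal sketch of rules 1 and 2 (the DAO and entity names are hypothetical, not taken from the original answer):

import com.google.inject.Inject;
import com.google.inject.Provider;
import com.google.inject.persist.Transactional;
import javax.persistence.EntityManager;

public class UserDao {

    // Rule 1: inject a Provider, never the EntityManager itself.
    @Inject
    private Provider<EntityManager> em;

    // Rule 2: annotate even read-only methods, so the interceptor
    // begins/ends the unit of work and cleans up the ThreadLocal.
    @Transactional
    public User find(long id) {
        return em.get().find(User.class, id);
    }

    @Transactional
    public void save(User user) {
        em.get().persist(user);
    }
}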
Hope this helps!
I did some research on the source code of guice-persist and read through the comments on the guice-persist wiki pages, and these are the answers I needed:
1. Lifecycle management of the EntityManager is kind of broken if it's injected via @Inject EntityManager. As stated in one of the comments on the wiki:
I confirm that injecting an EntityManager directly instead of a provider can be dangerous. If you're not inside a UnitOfWork or a method annotated with @Transactional, the first injection of an EntityManager in a thread will create a new EntityManager, never destroy it, and always use this specific EntityManager for this thread (EMs are stored thread-local). This can lead to terrible issues, like injection of a dead EntityManager (connection closed, etc.). So my recommendation is to always inject a Provider, or at least to inject an EntityManager directly only inside an opened UnitOfWork.
So the example in my question isn't the most correct usage. It creates a singleton instance of EntityManager (per thread) and will inject this instance everywhere :-(.
However, if I had injected a Provider and used it inside a @Transactional method, then the instance of EntityManager would be per-transaction. So the answer to this question is: if injected and used correctly, the entity manager is transaction-scoped.
2. If injected and used correctly, then I don't need to manually close the entity manager (guice-persist will handle that for me). If used incorrectly, closing it manually would be a very bad idea (a closed instance of EntityManager would be injected in every place where I @Inject EntityManager).
3. If injected and used correctly, then the scope of the L1 cache is a single transaction. If used incorrectly, the scope of the L1 cache is the lifetime of the application (the EntityManager is a singleton).
1. It depends on your module configuration. There are some basic bindings:
JpaPersistanceService
public class JpaPersistanceService implements Provider<EntityManager> {

    private EntityManagerFactory factory;

    public JpaPersistanceService(EntityManagerFactory factory) {
        this.factory = factory;
    }

    @Override
    public EntityManager get() {
        return factory.createEntityManager();
    }
}
Module binding
EntityManagerFactory factory = Persistence.createEntityManagerFactory(getEnvironment(stage));

bind(EntityManager.class).annotatedWith(Names.named("request")).toProvider(new JpaPersistanceService(factory)).in(RequestScoped.class);
bind(EntityManager.class).annotatedWith(Names.named("session")).toProvider(new JpaPersistanceService(factory)).in(SessionScoped.class);
bind(EntityManager.class).annotatedWith(Names.named("app")).toProvider(new JpaPersistanceService(factory)).asEagerSingleton();
Usage
#Inject #Named("request")
private EntityManager em; //inject a new EntityManager class every request
#Inject #Named("session")
private Provider<EntityManager> emProvider; //inject a new EntityManager class each session
//This is little bit tricky, cuz EntityManager is stored in session. In Stage.PRODUCTION are all injection created eagerly and there is no session at injection time. Session binding should be done in lazy way - inject provider and call emProvider.get() when em is needed;
#Inject #Named("application")
private EntityManager em; //inject singleton
2. Yes, you should, or else use JpaPersistModule [javadoc].
3. Well, this is about JPA configuration in persistence.xml and EntityManager scope
I'm injecting a provider... but I suspect something is wrong.
When I try to redeploy an application, I always have to restart the server because the JPA classes are cached.
I run into the following pseudo-bug:
https://bugs.eclipse.org/bugs/show_bug.cgi?id=326552
Theoretically, by injecting a Provider and getting an instance of EntityManager, you should not have to close anything...