I'm working on a REST API in Java using JAX-RS, EJB, JPA and JasperReports. Basically, the API calls an Oracle function that returns an id; with that id I run a select and generate a report with JasperReports, then I send the report back as the response. It works fine.
But I have doubts about whether I should use EJB at all, because I don't see what it buys me here: the Oracle function has a commit inside it, so if something goes wrong, the rollback triggered by the EJB won't undo anything, right? Also, the select that feeds the report is simple (just one table), and I've seen articles saying that if you only do a select there's no need to use EJB to control the transaction.
Also, how should I use CDI in this case? @Named on the classes and @Inject on the fields? Some coworkers say @Named should only be used with JSF, but I'm a junior seeking the truth about this; after researching a lot I still don't know how to handle it. I appreciate any help.
Thanks!
Do I need EJBs for transactions?
If you are using Java EE 7+, you can use @Transactional on your CDI beans instead of EJBs with @Stateless, @TransactionManagement and @TransactionAttribute. @Transactional provides the same semantics as @TransactionAttribute and makes any CDI bean transactional without the need for an EJB container. All of these approaches assume you are using JPA, which for a simple single query may be overkill.
https://docs.oracle.com/javaee/7/api/javax/transaction/Transactional.html
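As a minimal sketch (the class and method names are illustrative, not from your project), a plain CDI bean with a JTA transaction looks like this:

```java
import javax.transaction.Transactional;

// A plain CDI bean; no EJB container required.
public class ReportService {

    // Semantics equivalent to @TransactionAttribute(REQUIRED) on an EJB:
    // a JTA transaction is started (or joined) when the method is entered
    // and committed or rolled back when it returns.
    @Transactional(Transactional.TxType.REQUIRED)
    public void generateReport(long reportId) {
        // ... JPA work here runs inside the JTA transaction ...
    }
}
```

Note that this still cannot roll back anything the Oracle function already committed internally; the JTA transaction only covers work done through the JPA/JDBC connection it manages.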
What can I use instead of EJBs and @Transactional?
If you don't need or want an EntityManager, just use plain JDBC.
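A sketch of that approach for a read-only select, using a container-managed DataSource and try-with-resources (the JNDI name, table and column names are made up):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import javax.annotation.Resource;
import javax.sql.DataSource;

public class ReportJdbcDao {

    // Hypothetical JNDI name; configure the datasource on your server.
    @Resource(lookup = "java:jboss/datasources/ReportDS")
    DataSource ds;

    // A single read-only select needs no explicit transaction management.
    public List<String> findReportRows(long reportId) throws SQLException {
        List<String> rows = new ArrayList<>();
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement(
                 "SELECT description FROM report_row WHERE report_id = ?")) {
            ps.setLong(1, reportId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    rows.add(rs.getString(1));
                }
            }
        }
        return rows;
    }
}
```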
What does @Named do?
@Named makes a CDI bean accessible to Java EL via its declared name, or via its simple class name if no name is defined. You can also use @Named to distinguish between implementations, but I think CDI qualifiers are more suitable for that. So, if you don't need EL access, don't annotate the bean.
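To illustrate the difference (bean names here are assumptions):

```java
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.inject.Named;

// Only needed because a JSF/EL page refers to it as #{reportBean}.
@Named("reportBean")
@RequestScoped
public class ReportBean {
    public String getTitle() { return "Monthly report"; }
}

// A bean that is only ever injected into other beans needs no @Named:
// injection resolves by type (and qualifiers), not by name.
@RequestScoped
public class ReportFacade {
    @Inject
    ReportBean reportBean;
}
```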
How to provide CDI Beans to other CDI Beans?
In my opinion, CDI beans should be injected via fields, not constructor arguments. Constructor injection is usually advocated for testability, so you can instantiate your beans without a CDI container, but these days that is no longer hard to achieve with field injection either.
https://deltaspike.apache.org/documentation/test-control.html
Related
I am working with a WildFly server and am wondering when the injection actually happens. Does it happen at the moment it's needed, or is there some mechanism that resolves dependencies earlier?
If I use the @Inject annotation, I know I would get an error if something cannot be injected (ambiguity, etc.). Does that mean injection is done at deployment time? If so, how does that relate to this scenario: suppose I have BeanOne which injects BeanTwo, and BeanTwo injects BeanThree. Does this mean that this chain of beans will be allocated at deployment time? What happens if I have many more chains than this, and suppose my bean pool is limited to some small number, say 2? How could it all be done at deployment time when there are not enough beans and some of them would have to wait for their dependencies?
Is this case different from programmatic lookup of beans: CDI.current().select(MyStatelessBean.class).get();
or even injection using instances: @Inject Instance<MyStatelessBean> bean;?
The errors you are getting usually come from what is called the validation phase. That happens during deployment and does not mean the actual beans are created.
In fact, bean creation is usually done lazily, especially when a proxy is in play (e.g. any normal-scoped bean). This is Weld-specific behavior; other CDI implementations do not need to do the same, as the specification itself neither demands nor forbids it.
In practice this means that when you @Inject Foo foo; all you actually get is a proxy object: a stateless 'shell' that knows how to get hold of the so-called contextual instance when needed. The contextual instance is created lazily, on demand, when you first attempt to use the bean, which is usually when you first invoke a method on it.
Thanks to the static nature of CDI, all dependencies of your beans are known at deployment time and can be validated, so the chain in your question can be verified, and you will know whether each of those beans is available, unsatisfied or ambiguous.
As for dynamic resolution, e.g. Instance<Bar>, this is somewhat different. CDI can only validate the initial declaration you have; in my example above, that there is a bean of type Foo with the default qualifier. Any subsequent calls to the .select() methods happen at runtime, so you always need to verify that the instance you just selected is actually resolvable: you can easily select either a type that is not a bean at all, or a bean type with invalid qualifier(s). The Instance API offers special methods for just that.
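For example, before dereferencing a dynamically selected bean you can check the resolution with those Instance methods (Bar is an assumed bean type):

```java
import javax.enterprise.inject.Instance;
import javax.inject.Inject;

public class Consumer {

    @Inject
    Instance<Bar> bars; // only this declaration is validated at deployment

    public void useBar() {
        // .select()/.get() happen at runtime, so check first
        if (bars.isUnsatisfied()) {
            // no bean of type Bar (with these qualifiers) exists
        } else if (bars.isAmbiguous()) {
            // more than one candidate; narrow it down with a qualifier
        } else {
            Bar bar = bars.get(); // contextual instance created on demand
            bar.doWork();
        }
    }
}
```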
I'm having issues getting MyBatis and Javers (with Spring) integrated and working. I've followed the instructions at http://javers.org/documentation/spring-integration/: I've set up the aspect, annotated my entity class and registered it with Javers, and annotated the MyBatis mapper interface with @Repository and @JaversAuditable on the appropriate methods. Still, I haven't gotten it to work; I've even set breakpoints in the Javers aspect, but nothing triggers.
I've also gone about it the other way, using a MyBatis plugin interceptor, as per http://www.mybatis.org/mybatis-3/configuration.html#plugins (then used http://www.mybatis.org/spring/xref-test/org/mybatis/spring/ExecutorInterceptor.html as a basic example for commits). However, while it triggers, it doesn't do what I expected: it is basically just an around-aspect on the commit method, which takes a boolean rather than the entity(ies) being committed, which would have let me pass them to Javers. I suppose I could add an interceptor on the MyBatis update/insert methods, store the entities in a ThreadLocal or similar, and pass them to Javers when commit/rollback is called, but that's messy.
I've got no clue where to go from here, unless someone can see something I've missed with one of those 2 methods.
So, in my confusion, I realized that since MyBatis generates the concrete objects for the mapper interfaces, Spring never sees the creation of those objects; it simply registers the final object as a bean in the context. Thus, Javers never gets a chance to process the bean as it is created in order to do any proxying as necessary.
So, silly me. I ended up creating a Spring Data @Repository layer that mostly just passes calls through to the mapper. On updates I'm doing some extra bits, which this DAO shim layer (as I'm calling it) works well for.
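For anyone hitting the same wall, the shim layer looks roughly like this (class and method names are invented; the key point is that Spring itself creates this bean, so the Javers aspect can proxy it, unlike the MyBatis-generated mapper):

```java
import org.javers.spring.annotation.JaversAuditable;
import org.springframework.stereotype.Repository;

@Repository
public class PersonDao {

    private final PersonMapper mapper; // the MyBatis mapper interface

    public PersonDao(PersonMapper mapper) {
        this.mapper = mapper;
    }

    // Javers' aspect intercepts this Spring-managed bean and records
    // the entity's snapshot after the method completes successfully.
    @JaversAuditable
    public void update(Person person) {
        mapper.update(person); // plus the "extra bits" done on update
    }
}
```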
I'm trying to handle the Spring DAO exceptions (http://docs.spring.io/spring/docs/4.0.3.RELEASE/spring-framework-reference/htmlsingle/#dao-exceptions) in the service layer of my application, only to discover that the exceptions in the spring-data-commons module don't extend org.springframework.dao.DataAccessException.
Example: PropertyReferenceException.
As far as I can tell, all exceptions in this module, and maybe in the other sub-modules of Spring Data projects should extend DataAccessException.
Is there anything obvious that I'm not seeing here?
There is no need for Spring Data to use Spring Core's DataAccessException hierarchy, because you can use a PersistenceExceptionTranslator. The latter translates exceptions thrown by, e.g., Spring Data repositories into a DataAccessException subtype.
The PersistenceExceptionTranslator kicks in automatically when you mark your repository with the @Repository annotation. The service (using the annotated repository) can then catch DataAccessException if needed.
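A sketch of what that looks like (the Person entity, repository and method names are assumptions, using the pre-2.0 Spring Data API current at the time of the question):

```java
import org.springframework.dao.DataAccessException;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;

@Repository // enables persistence exception translation for this bean
interface PersonRepository extends JpaRepository<Person, Long> {
}

@Service
class PersonService {

    private final PersonRepository repository;

    PersonService(PersonRepository repository) {
        this.repository = repository;
    }

    Person load(Long id) {
        try {
            return repository.findOne(id);
        } catch (DataAccessException e) {
            // translated from the underlying JPA/JDBC exception
            throw new IllegalStateException("could not load person " + id, e);
        }
    }
}
```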
I realized after writing this question that I could sum it up in a few sentences: how can I manage transactions in Spring Data JPA with CDI the same way you would with @Transactional in Spring itself?
The first thing I did was set up the Spring Data JPA CDI integration based on the documentation here: http://static.springsource.org/spring-data/data-jpa/docs/current/reference/html/jpa.repositories.html#jpd.misc.cdi-integration
I set this up and it works fine for read operations, but not for write operations.
For example, the example from their docs works fine:
List<Person> people = repository.findAll();
So I have the basic setup complete.
This was written by hand and may have typos, but it is similar to the code I execute:
@Inject
UserRepository userRepository;

User user;

@Transactional
public void signUpUserAction() {
    userRepository.saveAndFlush(user);
}
Then I receive this error:
Caused by: javax.persistence.TransactionRequiredException: no transaction is in progress
At first I realized I did not have the @Transactional, so I added it, and it still did not work. (I believe that in Spring you need the AOP XML configuration to set up @Transactional, so it makes sense that this does not work in EE out of the box; I just do not know how to make it work.)
FYI, annotating with this does not work either:
@TransactionAttribute(TransactionAttributeType.REQUIRED)
Something I tried while writing this post sort of works... but I don't like the code and am still interested in using @Transactional; this code feels dirty. I'm pretty sure @Transactional handles calls into other transactional methods cleanly, while this code does not.
This saves, and I've verified it's in the database:
@Inject
EntityManager em;

@Inject
UserRepository userRepository;

private User user;

public void signUpUserAction() {
    em.getTransaction().begin();
    userRepository.saveAndFlush(user);
    em.getTransaction().commit();
}
So, in short, how can I use @Transactional or something similar to manage my transactions?
Thank you for any help.
If you run Spring Data in a CDI environment, you're not running a Spring container at all. So you'll need to use EJB session beans to work with the repositories, as CDI currently has no out-of-the-box transaction support. The CDI extension shipped with Spring Data basically provides an entry point into the Java EE world, and you use the standard transaction mechanisms available in that environment.
So you either inject a repository into a @Stateless bean directly, or you inject the CDI bean into one. This will then allow you to use the EJB transaction annotations on the EJB.
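A sketch of the first option, reusing the names from the question:

```java
import javax.ejb.Stateless;
import javax.inject.Inject;

@Stateless // default is @TransactionAttribute(REQUIRED) on business methods
public class UserService {

    @Inject
    UserRepository userRepository; // the repository provided by the CDI extension

    // Runs inside a container-managed JTA transaction, so saveAndFlush
    // no longer fails with "no transaction is in progress".
    public void signUpUser(User user) {
        userRepository.saveAndFlush(user);
    }
}
```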
For everyone who still has this question:
I have an experimental project that supports @Transactional in a CDI environment.
The project uses custom Narayana code as an interceptor and provides compatibility between it and the Spring Data JPA implementation.
Key points to take into consideration:
- Custom (Spring Data) CDI configuration: add a custom transactional post-processor (custom spring data cdi configuration)
- Implement a custom transactional post-processor (sample of a Custom Transactional Post Processor)
- Implement a custom transactional interceptor (sample of a custom transactional interceptor)
- Add a CDI producer for your custom TX interceptor (cdi producers)
- Create your custom repository fragments using @Transactional (JTA) (custom fragments)
- Compose your repository interface by extending the Repository interface and your fragments, with the @NoRepositoryBean annotation (custom repositories)
Take a look at this link, which has some tips: tips
Regards,
I am developing a small (but growing) Java EE project based on EJB 3.1, JSF 2, CDI (Weld) and JPA 2, deployed on JBoss AS 7.1 Beta1.
As a starting point I created a Maven project based on the Knappsack Maven archetypes.
My architecture is basically the same as provided by the archetype, and as my project grows, the archetype seems to be reaching its limits. I want to modify the basic idea of the archetype according to my needs. But let me first explain how the project is organized at the moment.
The whole project is built around Seam-like Home classes. The view references them (via EL in XHTML templates). Most of the Home classes are @Named and @RequestScoped (or, in short, @Model) or @ConversationScoped, and Enterprise Java Beans are @Injected. Basically these (normally @Local) EJBs are responsible for the database access (a kind of DAO), so that transactions are managed automatically by the container. Every DAO class has its own EntityManager injected via CDI. At the moment, every DAO integrates aspects which logically belong together (e.g. there is a SchoolDao in the archetype which is responsible for creating Teachers, Students and Courses).
This, of course, results in growing DAOs which have no well-defined task and which become hard to maintain and hard to understand. And as a painful side effect, the risk of duplicated code grows.
As a consequence, I want to break up this design and have DAOs which are each responsible for one specific task (a StudentDao, a TeacherDao and so on). And at this point I am in trouble. As each DAO has a reference to its own EntityManager, it cannot be guaranteed that something like the following will work (I think it never will :)
Teacher teacher = teacherDao.find(teacherId);
course.setTeacher(teacher);
courseDao.save(course);
The JPA implementation complains about a null value for the column COURSE.TEACHER_ID (assuming Course has a non-nullable FK relationship to Teacher). Each DAO holds its own EntityManager; the teacher is managed by the one in the TeacherDao, but the one in the CourseDao tries to merge the Course @Entity.
Maybe the archetype I used is not suitable for larger applications. But what would be an appropriate design for such an application, given that the technologies I used are obligatory (EJB 3.1 for container-managed transactions [and later on other business-related stuff], JSF as the view technology, JPA as the database mapper, and CDI as the 'must have because it's hip' :)?
Edit:
I now have the EntityManager injected in a base class that all other DAO classes inherit from. So all DAOs use the same instance (the debugger shows the same object id), but I still have the problem that all entities I read from the database are immediately detached. This makes me wonder, as it means that either there is no container-managed transaction, or the transaction is closed immediately after the entity is read. Each DAO is a @Local @Stateless EJB. They are injected into my JSF beans (@Named and @RequestScoped), from where I want to use the CRUD operations. Is there anything I'm missing?
Having each DAO hold its own EntityManager is a very bad design.
You should have one EntityManager per transaction/request and pass it to each DAO, have them all share the same one, or get it from the context.
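In a Java EE container, the usual way to get this (a sketch; the persistence unit name is an assumption) is to inject a container-managed persistence context into each DAO. Within one JTA transaction, every DAO injecting the same unit works against the same persistence context, so entities stay managed across DAO boundaries:

```java
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class TeacherDao {

    // Container-managed: not a private EntityManager per DAO, but a
    // reference to the transaction-scoped persistence context shared
    // by all DAOs participating in the current JTA transaction.
    @PersistenceContext(unitName = "schoolPU")
    private EntityManager em;

    public Teacher find(Long id) {
        return em.find(Teacher.class, id); // stays managed for the CourseDao too
    }
}
```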