I'm trying to update my (MySQL) database but it doesn't work.
Persistence code (called by a JSF managed bean):
@Override
public void changeEntrepriseStatut(int idEntreprise, int newStatut) {
    Entreprise entr = em.find(Entreprise.class, idEntreprise);
    em.persist(entr);
    entr.setEntrepriseStatutInscription(newStatut);
    em.merge(entr);
}
Ensure that you have an open transaction.
How you do that depends on how you manage your beans with JSF. There are at least four ways:
@ManagedBean - the least powerful option; you should never perform an operation that needs a transaction there (call an EJB instead, for example).
CDI beans - more powerful, but as far as I know they also have no built-in transaction support (not sure what the status of Java EE 7 is).
EJB - probably the easiest way, because transactions are container-managed by default and can be tuned with @TransactionAttribute.
Spring - I've never used it; if you are using it, please search the net for how to do that.
Additionally, never call em.persist on an already persisted entity. Just drop that line.
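As a sketch, the persistence code could live in a @Stateless EJB so the container opens a transaction around the call (the service class name here is made up). With the redundant persist dropped, the merge is not even needed: the entity loaded inside the transaction is managed, and the change is flushed on commit.

```java
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// Hypothetical service class; every public method runs in a
// container-managed transaction by default.
@Stateless
public class EntrepriseService {

    @PersistenceContext
    private EntityManager em;

    public void changeEntrepriseStatut(int idEntreprise, int newStatut) {
        // The entity returned by find(...) is managed; setting the
        // property is enough, the change is flushed when the
        // transaction commits.
        Entreprise entr = em.find(Entreprise.class, idEntreprise);
        entr.setEntrepriseStatutInscription(newStatut);
    }
}
```

The JSF managed bean then simply injects this service via @EJB and delegates to it.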
I have a question about how Spring Data repositories handle datasource connections. Assuming Spring Data repositories open and close the connection when the method executes, how does the transaction started by declaring @Transactional in my service layer span multiple repository calls?
Who handles the database connections? The @Transactional annotation or the JPA repository?
tl;dr
Ultimately it's the Spring JPA / transaction infrastructure managing the connection via the thread-bound management of EntityManager instances. The scope of the transaction is controlled by @Transactional annotations in the user code, but ultimately defaulted in Spring Data JPA's repository implementation. Connection acquisition is performed eagerly in case an OpenEntityManagerInViewFilter is used (enabled by default in Spring Boot 1.x and 2.x).
Details
SimpleJpaRepository is equipped with Spring's @Transactional annotations so that it will make sure it runs transactions in the cases JPA requires them (e.g. to execute a call to EntityManager.persist(…) or ….merge(…)). Their default configuration makes sure they automatically take part in transactions started at higher levels of abstraction. I.e. if you have a Spring component that's @Transactional itself, repositories will simply participate in the already-running transaction:
@Component
class MyService {

    private final FirstRepository first;
    private final SecondRepository second;

    // Constructor omitted for brevity

    @Transactional
    void someMethod() {
        … = first.save(…);
        … = second.save(…);
    }
}
Both repositories participate in the transaction and a failure in one of them will roll back the entire transaction.
To achieve that, the JpaTransactionManager will use the transaction management API exposed by JPA's EntityManager to start a transaction and acquire a connection for the lifetime of the EntityManager instance. See JpaTransactionManager.doBegin(…) for details.
The role of an OpenEntityManagerInViewFilter or –Interceptor
Unless explicitly deactivated, Spring Boot 1.x and 2.x web applications run with an OpenEntityManagerInViewFilter deployed. It's used to create an EntityManager, and thus acquire a connection, pretty early and keep it around until very late in the request processing, namely after the view has been rendered. This has the effect of making JPA lazy loading available during view rendering, but it keeps the connection open longer than if it were only needed for the actual transactional work.
That topic is quite a controversial one, as it's a tricky balance between developer convenience (the ability to traverse object relations that are loaded lazily during view rendering) and the risk of exactly that triggering expensive additional queries and keeping resources in use for a longer time.
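If the longer connection usage is a concern, the filter can be switched off explicitly; in Spring Boot this is a single property (shown here for an application.properties file):

```
spring.jpa.open-in-view=false
```

With the filter disabled, lazy associations must be fetched inside the transactional service layer (e.g. via fetch joins or entity graphs) before the view is rendered.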
My question is about the need to define a UserTransaction in a JSF Bean if multiple EJB methods are called.
This is my general scenario:
// JSF bean...
@EJB ejb1;
...
public String process(businessobject) {
    ejb1.op1(businessobject);
    ejb1.op2(businessobject);
    ....
}
Both EJB methods manipulate the same complex JPA entity bean object (including flush and detachment). I noticed in the database that some of the @OneToMany relations from my entity bean were duplicated when ejb1.op1() is called before ejb1.op2().
I understand that each EJB method starts a new transaction. And to me everything looks OK so far.
But the JSF code only works correctly if I add a UserTransaction to my JSF method like this:
// JSF bean...
@Resource UserTransaction tx;
@EJB ejb1;
...
public String process(businessobject) {
    try {
        tx.begin();
        ejb1.op1(businessobject);
        ejb1.op2(businessobject);
    } finally {
        tx.commit();
    }
    ....
}
I did not expect that it would be necessary to encapsulate both EJB calls in one UserTransaction. Why is this necessary?
Each @Stateless EJB method call from a client (in your case, the JSF managed bean) indeed counts by default as one full transaction. It lasts until the EJB method call returns, including nested EJB method calls.
Just merge them both into a single EJB method call if it must represent a single transaction.
public String process(Entity entity) {
ejb1.op1op2(entity);
// ...
}
with
public void op1op2(Entity entity) {
op1(entity);
op2(entity);
}
No need to fiddle with UserTransaction in the client. In a well-designed JSF-based client application you should never need it, either.
As to the why of transactions: they lock the DB while you're performing a business action on the entity. Your mistake was that you performed two apparently dependent business actions completely separately. In a highly concurrent system this may indeed cause a corrupted DB state, as you encountered yourself.
As to the why of transactions, this may be a good read: When is it necessary or convenient to use Spring or EJB3 or all of them together?
Is a user transaction needed for you? Generally, container-managed transactions are good enough and serve the purpose.
Even if you need user-managed transactions, it is not a good idea to have transaction management logic mingled with JSF logic.
For container-managed transactions, you should look at using @TransactionAttribute on the EJBs.
If all the methods in your EJB need the same level of transaction support, you can put the annotation at the class level. Otherwise, you can use the @TransactionAttribute annotation on each individual EJB method.
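As a sketch of both placements (the service class and method names are illustrative, not from the question):

```java
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

// Class-level annotation: the default for every business method.
@Stateless
@TransactionAttribute(TransactionAttributeType.REQUIRED)
public class OrderService {

    // Joins the caller's transaction, or starts one if none is
    // active (inherits the class-level REQUIRED setting).
    public void op1op2(Entity entity) {
        op1(entity);
        op2(entity);
    }

    // Method-level annotation overrides the class default:
    // always runs in its own, new transaction.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void audit(Entity entity) {
        // ...
    }

    private void op1(Entity entity) { /* ... */ }
    private void op2(Entity entity) { /* ... */ }
}
```

Note that calling op1op2 from a JSF bean gives you exactly the single-transaction behavior discussed above, without any UserTransaction in the client.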
I realized after writing this question that I could sum it up in a few sentences: how can I manage transactions in Spring Data JPA with CDI the same way you would by using @Transactional in Spring itself?
The first thing I did was set up the Spring Data JPA CDI integration based on the documentation here: http://static.springsource.org/spring-data/data-jpa/docs/current/reference/html/jpa.repositories.html#jpd.misc.cdi-integration
I set this up and it is working fine for read operations, but not for write operations.
For example, the sample from the docs works fine:
List<Person> people = repository.findAll();
So I have the basic setup complete.
This was written by hand and may have typos; it is similar to the code I execute:
@Inject
UserRepository userRepository;

User user;

@Transactional
public void signUpUserAction() {
    userRepository.saveAndFlush(user);
}
Then I receive this error
Caused by: javax.persistence.TransactionRequiredException: no transaction is in progress
At first I realized I did not have the @Transactional annotation, so I added it, and it still did not work. (I believe in Spring you need the AOP XML configuration to set up @Transactional, so it makes sense that this does not work in EE out of the box; I just do not know how to make it work.)
FYI, annotating with this does not work either:
@TransactionAttribute(TransactionAttributeType.REQUIRED)
Something I tried while writing this post sort of works... but I don't like the code and am still interested in using @Transactional. This code feels dirty; I'm pretty sure @Transactional handles calls to other transactional methods cleanly, while this code would not.
This saves, and I verified the row is in the database:
@Inject
EntityManager em;

@Inject
UserRepository userRepository;

private User user;

public void signUpUserAction() {
    em.getTransaction().begin();
    userRepository.saveAndFlush(user);
    em.getTransaction().commit();
}
So in short, how can I use #Transactional or something similar to manage my transactions?
Thank you for any help.
If you run Spring Data in a CDI environment, you're not running a Spring container at all. So you'll need to use EJB session beans to work with the repositories, as CDI currently does not support transactions out of the box. The CDI extension shipping with Spring Data basically provides an entry point into the Java EE world, and you'll use the standard transaction mechanisms available in that environment.
So you either inject a repository into a @Stateless bean directly, or you inject the CDI bean into one. This will then allow you to use EJB transaction annotations on the EJB.
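A minimal sketch of the first option, using the repository and entity types from the question (the session bean name is made up):

```java
import javax.ejb.Stateless;
import javax.inject.Inject;

// EJB session bean: every business method runs in a container-managed
// JTA transaction by default, so saveAndFlush(...) no longer fails
// with TransactionRequiredException.
@Stateless
public class UserSignUpService {

    @Inject
    UserRepository userRepository; // provided by the Spring Data CDI extension

    public void signUp(User user) {
        userRepository.saveAndFlush(user);
    }
}
```

The JSF bean then injects UserSignUpService (via @EJB or @Inject) and delegates to it, instead of calling the repository directly.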
For everyone who still has this question:
I have an experimental project that supports @Transactional in a CDI environment.
This project uses custom Narayana code as an interceptor and provides compatibility between it and the Spring Data JPA implementation.
Key points to take into consideration:
Custom (Spring Data) CDI configuration -> add a custom transactional post-processor (custom spring data cdi configuration)
Implement a custom transactional post-processor (sample of a Custom Transactional Post Processor)
Implement a custom transactional interceptor (sample of a custom transactional interceptor)
Add a CDI producer for your custom TX interceptor (cdi producers)
Create your custom repository fragments using @Transactional (JTA) (custom fragments)
Compose your repository interface by extending the Repository interface and your fragments with the @NoRepositoryBean annotation (custom repositories)
Take a look at this link, which has some tips: tips
Regards,
We are building a Java EE / JPA / CDI app with an Oracle database. The data model (which we can't change) implements security partly by using views and CLIENT_INFO, something like:
create view the_view
as select *
from the_table
where organization_id = USERENV('CLIENT_INFO')
where userenv('CLIENT_INFO') is basically set by calling
dbms_application_info.set_client_info(11);
Now, we have a series of stateless beans that basically inject the persistence context and execute queries (both native queries and regular POJO queries). We need a way to inject the client info (which we can get from the security context) into the persistence context before making calls to the EntityManager.
In a nutshell, I need to be able to call this:
@PersistenceContext
EntityManager em;

@Inject
UserInfo userInfo;

public TheView getTableData(long id) {
    // At this point, security information should be set..
    // Call the query
    return em.find(TheView.class, id);
}
without having to call a setClientInfo() manually..
One way of doing this would probably be using interceptors: annotate the method and make the call there (provided I can get hold of the PersistenceContext that the method will use..). Will this even work??
Any other way of doing this??
TIA!
The interceptor approach you are writing about sounds like an excellent fit.
I'm not 100% sure I understood your requirements correctly, but it seems as if it would be a good idea to decouple authorization logic from the actual business logic, to be able to write something like this:
...
@IsEditor("someMoreData")
public X getData() {
    ...
}
IsEditor is an interceptor and will encapsulate the relevant DB lookup.
Seam Security, as an independent CDI module, comes with a couple of concepts (and implementations); you should definitely check it out.
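As a rough sketch of the interceptor approach from the question (the interceptor binding, class names, and the UserInfo accessor are all made up; error handling is omitted), the client info could be set with a native PL/SQL call before the intercepted method runs. Whether this executes on the same connection as the subsequent query depends on the JPA provider and transaction configuration, so this is an assumption to verify:

```java
import javax.inject.Inject;
import javax.interceptor.AroundInvoke;
import javax.interceptor.Interceptor;
import javax.interceptor.InvocationContext;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// Hypothetical interceptor, bound to a custom @SecuredView
// interceptor-binding annotation on the business methods.
@SecuredView
@Interceptor
public class ClientInfoInterceptor {

    @PersistenceContext
    EntityManager em; // must be the same persistence unit the method uses

    @Inject
    UserInfo userInfo; // from the question: carries the security context

    @AroundInvoke
    public Object setClientInfo(InvocationContext ctx) throws Exception {
        // Set CLIENT_INFO inside the current transaction so that
        // USERENV('CLIENT_INFO') is visible to the view's WHERE clause.
        em.createNativeQuery(
                "begin dbms_application_info.set_client_info(?); end;")
          .setParameter(1, userInfo.getOrganizationId()) // hypothetical accessor
          .executeUpdate();
        return ctx.proceed();
    }
}
```

The annotated business method then needs no manual setClientInfo() call, which is exactly what the question asked for.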
If you are using EclipseLink, there is some info here on using EclipseLink with Oracle VPD, which seems similar.
Basically, you can use events to execute your call.
http://wiki.eclipse.org/EclipseLink/UserGuide/JPA/Basic_JPA_Development/Caching/Shared_and_Isolated#Oracle_Virtual_Private_Database_.28VPD.29
I am developing a small (but growing) Java EE project based on the technologies EJB 3.1, JSF 2, CDI (WELD) and JPA 2 deployed on a JBOSS AS 7.1Beta1.
As a starting point I created a Maven project based on the Knappsack Maven archetypes.
My architecture is basically the same provided by the archetype and as my project grows I think this archetype seems to be reaching its limits. I want to modify the basic idea of the archetype according to my needs. But let me first explain how the project is organized at the moment.
The whole project is built around Seam-like Home classes. The view references them (via EL in XHTML templates). Most of the Home classes are @Named and @RequestScoped (or, in short, @Model) or @ConversationScoped, and Enterprise Java Beans are injected via @Inject. Basically these (normally @Local) EJBs are responsible for the database access (some kind of DAOs), so transactions are managed automatically by the container. Every DAO class has its own EntityManager injected via CDI. At the moment every DAO integrates aspects which logically belong together (e.g. there is a SchoolDao in the archetype which is responsible for creating Teachers, Students and Courses).
This of course results in growing DAOs which have no well-defined task, become hard to maintain and hard to understand, and, as a painful side effect, increase the risk of duplicate code.
As a consequence I want to break up this design by having only DAOs which are responsible for one specific task (a StudentDao, a TeacherDao, and so on). And at this point I am in trouble. As each DAO has a reference to its own EntityManager, it cannot be guaranteed that something like the following will work (I think it never will :)
Teacher teacher = teacherDao.find(teacherId);
course.setTeacher(teacher);
courseDao.save(course);
The JPA implementation complains about a null value for column COURSE.TEACHER_ID (assuming Course has a non-nullable FK relationship to Teacher). Each DAO holds its own EntityManager; the teacher is managed by the one in the TeacherDao, but the other one in the CourseDao tries to merge the Course @Entity.
Maybe the archetype I used is not suitable for larger applications. But what would be an appropriate design for such an application then, IF the technologies I used are obligatory (EJB 3.1 for container-managed transactions [and later on other business-related stuff], JSF as the view technology, JPA as the database mapper, and CDI as the "must have because it's hip" :)?
Edit:
I now have an EntityManager injected in the base class all other DAO classes inherit from. So all DAOs use the same instance (the debugger shows the same object id), but I still have the problem that all entities I read from the database are immediately detached. This makes me wonder, as it means that either there is no container-managed transaction or the transaction is closed immediately after the entity is read. Each DAO is a @Local @Stateless EJB. They are injected into my JSF beans (@Named and @RequestScoped), from where I want to make use of the CRUD operations. Is there anything I missed?
Having each DAO hold its own EntityManager is a very bad design.
You should have an EntityManager per transaction/request and pass it to each DAO, have them share the same one, or get it from the context.
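A minimal sketch of the shared approach under Java EE (the persistence unit name is a made-up placeholder): let the container inject the same transaction-scoped persistence context into every DAO instead of giving each DAO its own EntityManager. Note that the find and the save must still happen within one transaction; when the DAOs are called from a non-transactional JSF bean, each EJB call demarcates its own transaction, so a facade EJB method wrapping both calls is needed.

```java
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// Base class: all DAOs see the same container-managed, transaction-scoped
// persistence context within a given transaction.
public abstract class AbstractDao {

    @PersistenceContext(unitName = "myPU") // hypothetical unit name
    protected EntityManager em;
}

@Stateless
public class TeacherDao extends AbstractDao {
    public Teacher find(long id) {
        return em.find(Teacher.class, id);
    }
}

@Stateless
public class CourseDao extends AbstractDao {
    public void save(Course course) {
        // Inside the same transaction, the Teacher loaded by TeacherDao
        // is still managed, so the FK to it can be written.
        em.persist(course);
    }
}
```

With this layout, the teacher/course snippet from the question works as long as it runs inside one transactional EJB method that delegates to both DAOs.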