I have become responsible for the EJB 3.1/JPA part of our project, which runs on GlassFish 4.0. I am quite new to EJB, so I am not very confident about the transaction attributes of session beans (and/or their methods). I am dealing with stateless session beans.
I read that the NOT_SUPPORTED, NEVER, and SUPPORTS transaction attributes should be used with caution because their behavior varies by application server vendor. However, I could not find statements like that in other sources. Are they really vendor specific? Also, is it correct that not annotating a method or bean defaults its transaction attribute to REQUIRED?
And one more situation. Say I have a transaction T and two stateless beans A and B with methods mA and mB, where mA calls mB. Which combinations of transaction attributes on these methods allow transaction T to complete successfully? I know that if mA has REQUIRED and mB has NEVER, an exception is thrown. Is SUPPORTS on mB good for any kind of incoming transaction, i.e. a safe option to make sure any transaction will pass through this method without error?
Thank you
GlassFish 4.0 is the reference implementation of Java EE 7 and according to the release notes it supports Enterprise JavaBeans 3.2 (JSR-345).
I read that the NOT_SUPPORTED, NEVER, and SUPPORTS transaction attributes should be used with caution because their behavior varies by application server vendor. I could not find such warnings in sources other than the one I mentioned. Please advise whether I should be concerned about that.
Basically, the EJB specification says what to implement, not how to implement it, so there may still be some rare corner cases, since we are not living in a perfect world. I suppose this is why you have been warned. On the other hand, I wouldn't worry about that, as GlassFish is widely used around the world and it certainly conforms to the JSR specification.
Also, is it correct that not annotating a method or bean defaults the transaction attribute to REQUIRED?
Yes, this is correct. According to EJB 3.2 Specification, chapter 8.3.7 Specification of the Transaction Attributes for a Bean’s Methods:
By default, the value of the transaction attribute for a method of a
bean with container-managed transaction demarcation is the REQUIRED
transaction attribute, and the transaction attribute does not need to
be explicitly specified in this case.
Can you please point me to a source where I can find the correct and incorrect combinations of transaction attributes on different methods of different EJBs in one call stack for a given transaction?
The ultimate source of knowledge is the aforementioned EJB specification (chapter 8.6, to be more specific), but you will find a lot of useful posts around.
In general take a closer look at transaction propagation and transaction demarcation related topics.
Will the SUPPORTS attribute adapt a method/bean to a calling method/bean with any transaction attribute, so that no error occurs?
Not really. I would say SUPPORTS propagates the transaction context (if any) of the calling method/bean, so you may safely query data within it, but you should avoid operations that change the persistence context.
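As an illustration, here is a minimal sketch of the REQUIRED-caller/SUPPORTS-callee combination discussed above. The bean and method names are hypothetical, not from the question; mB joins mA's transaction when one exists and runs without one otherwise:

```java
// BeanB.java
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class BeanB {
    // SUPPORTS: joins the caller's transaction if there is one,
    // otherwise runs without a transaction context. Safe for reads,
    // risky for writes (they may run non-transactionally).
    @TransactionAttribute(TransactionAttributeType.SUPPORTS)
    public String mB() {
        return "read-only work";
    }
}

// BeanA.java
import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class BeanA {
    @EJB
    private BeanB beanB;

    // REQUIRED is the default for container-managed transactions,
    // so this annotation could be omitted entirely.
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void mA() {
        // beanB.mB() executes inside mA's transaction.
        beanB.mB();
    }
}
```

Had mB been annotated with NEVER instead, the container would throw an EJBException on this call, since a transaction is active.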
Related
With JPA, we can manually use OPTIMISTIC or PESSIMISTIC locking to handle entity changes in transactions.
I wonder how JPA handles locking if we don't specify one of these two modes.
No locking mode is used?
If we don't define an explicit locking mode, can the database integrity be lost?
Thanks
I've scanned through section 3.4.4 Lock Modes of the Java Persistence API 2.0 Final Release specification, and while I couldn't find anything explicit (it doesn't state that this is the default or anything like that), there is a footnote which says the following:
The lock mode type NONE may be specified as a value of lock mode
arguments and also provides a default value for annotations.
The section is about the kinds of LockModeType values available and their usages and describes which methods takes an argument of this kind and whatnot.
So, as it says, LockModeType.NONE is the default for annotations (this is JPA, annotations left and right), and I guess that when you use EntityManager.find(Class, Object) the default LockModeType is used.
There are some other, subtle, hints to reinforce this. Section 3.1.1 EntityManager interface.
The find method (provided it is invoked without a lock or invoked with
LockModeType.NONE) and the getReference method are not required to be
invoked within a transaction context.
It makes sense. For example, if you use MySQL as your database and your storage engine of choice is InnoDB, then (by default) your tables will use the REPEATABLE READ isolation level; if you use some other RDBMS or another storage engine, this could change.
Right now I'm not exactly sure that isolation levels have anything to do with JPA lock modes (although it seems that way), but my point is that different database systems differ, so JPA can't decide for you (at least according to the specification) which lock mode to use by default; it'll use LockModeType.NONE if you don't instruct it otherwise.
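To make the difference concrete, here is a minimal sketch (the Account entity and its id are hypothetical) showing the two find variants; the two-argument overload behaves as if LockModeType.NONE had been passed:

```java
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

public class LockModeExample {
    // Account is assumed to be a mapped @Entity with a Long primary key.
    public void demo(EntityManager em) {
        // No lock mode argument: equivalent to LockModeType.NONE,
        // and not required to run inside a transaction context.
        Account a1 = em.find(Account.class, 1L);

        // Explicit lock mode: any mode other than NONE requires
        // an active transaction, or the provider throws
        // TransactionRequiredException.
        Account a2 = em.find(Account.class, 1L, LockModeType.OPTIMISTIC);
    }
}
```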
I've also found an article regarding isolation levels and lock modes, you might want to read it.
Oh, and to answer your last question.
If we don't define an explicit locking mode, can the database
integrity be lost?
It depends, but if you have concurrent transactions then the answer is probably yes.
According to the JPA 2.1 Final Release:
3.2 Version Attributes
The Version field or property is used by the persistence provider to perform optimistic locking. It is accessed and/or set by the persistence provider in the course of performing lifecycle operations on the entity instance. An entity is automatically enabled for optimistic locking if it has a property or field mapped with a Version mapping.
So if the entity is a versioned object, i.e. a @Version field has been specified, then the persistence provider will perform optimistic locking by default.
In the specification persistence_2.0, page 89:
If a versioned object is otherwise updated or removed, then the implementation must ensure that the
requirements of LockModeType.OPTIMISTIC_FORCE_INCREMENT are met, even if no explicit
call to EntityManager.lock was made.
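For illustration, a minimal sketch of a versioned entity (the class and field names are hypothetical). Once a @Version field is present, the provider checks and increments it automatically on every update, with no explicit lock call needed:

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Account {
    @Id
    private Long id;

    private String owner;

    // Managed entirely by the persistence provider: read when the
    // entity is loaded, compared and incremented at flush/commit.
    // A stale value causes an OptimisticLockException.
    @Version
    private Long version;
}
```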
I came across an old project using OpenJPA with DB2, running on WebSphere Liberty 18. In the persistence.xml file there is a persistence unit with the following declaration:
<persistence-unit name="my-pu" transaction-type="RESOURCE_LOCAL">
<provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>
<jta-data-source>jdbc/my-data-source</jta-data-source>
</persistence-unit>
In the case that we are using RESOURCE_LOCAL transactions and there is code to manually manage the transactions scattered throughout the whole application, shouldn't the data source be declared as "non-jta-data-source"? Interestingly it seems the application is working fine despite that. Any ideas why it works fine?
<non-jta-data-source> specifies a data source that refuses to enlist in JTA transactions. This means that if you do userTransaction.begin (or take advantage of any API by which the container starts a transaction for you) and you perform some operations on that data source (which is marked with transactional="false" in Liberty), those operations will not be part of the encompassing JTA transaction and can be committed or rolled back independently. It's definitely an advanced pattern, and if you don't know what you are doing, or temporarily forget that the data source doesn't enlist, you can end up writing code that corrupts your data.

At this point, you may be wondering why JPA even has such an option. I expect it isn't intended for the end user's usage of the JPA programming model at all, but is really for the JPA persistence provider (Hibernate/EclipseLink/OpenJPA) implementation. For example, consider the scenario where a JTA transaction is active on the thread and you perform an operation via JPA for which the persistence provider needs to generate a unique key, and to do so it must run a database command to reserve the next block of unique keys. The provider can't just do that within your transaction, because you might end up rolling it back, and then the same block of unique keys could be given out twice and errors would occur.

The JPA persistence provider really needs to suspend your transaction, run its own transaction, and then resume yours. In my opinion, suspend/resume would have been the natural solution here, but the JTA spec doesn't provide a standard way to obtain the TransactionManager, so my guess is that the JPA spec invented its own solution for situations like this: requiring a data source that bypasses transaction enlistment as an alternative. A JPA provider can run its own transactional operations on the non-jta-data-source while your JTA transaction continues on, unimpacted by it.
You'll also notice with the example I chose, that it doesn't apply to a number of paths through JPA. If your JPA entity is configured to have the database generate the unique keys instead, then the persistence provider doesn't need to perform its own database operations on a non-jta-data-source. If you aren't using JTA transactions, then the persistence provider doesn't need to worry about enlisting in your transaction because it can just use a different connection, so it doesn't need a non-jta-data-source there either.
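For comparison, a RESOURCE_LOCAL unit would conventionally declare the data source like this (a sketch adapted from the unit in the question, not the actual fix that was applied):

```xml
<persistence-unit name="my-pu" transaction-type="RESOURCE_LOCAL">
    <provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>
    <!-- With RESOURCE_LOCAL transactions, the data source is
         normally declared as non-jta-data-source, since the
         application manages transactions itself. -->
    <non-jta-data-source>jdbc/my-data-source</non-jta-data-source>
</persistence-unit>
```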
How, without using Spring as the DI framework (which offers the @Transactional annotation with a custom isolation level per transaction), can I set a custom isolation level for a specific/sensitive transaction in a web service built on Dropwizard + Hibernate + Guice (DI) + Spring Data JPA + PostgreSQL (all recent versions)?
Here's a simple working example of web service with the exact same stack (pull requests are more than welcome):
https://github.com/jeep87c/dropwizard-guice-springDataJPA-hibernate
We use Spring Data JPA as an abstraction layer above Hibernate, simply to save us from writing our own implementation for each DAO. In an ideal world, it would be the web service's only dependency on Spring, but as you'll see in this code sample, we are doing kind of a hack by resolving the DAO implementations with the Spring DI framework (using the beanFactory) so we can then register them in Guice. (I'm more than open to a better solution if you have one, but that is not the subject of this question.)
In this code sample, AbsenceResource.create deliberately performs a duplicate persist of the received payload. And there's an acceptance test, AbsenceResourceAcceptanceTest.rollbackTest, exercising this compromised API route and expecting the rollback to happen.
The business requirement here is: to create a new absence, we must first verify that no other absences collide with this one for the same employee in the same company. The sample repo I provide is actually simpler than my real-life scenario, which must check collisions against both absence and vacation entities for an employee in a multi-tenant (per-company) environment with a single table per entity (multi-tenancy with column filtering on the company id).
To prevent any concurrency issue resulting in a race condition that would let two colliding absences be wrongly inserted, we would like to set the isolation level to Serializable for this specific kind of transaction, which the PostgreSQL documentation reveals to be the only way to avoid such issues.
We looked into the dropwizard-hibernate library, but unfortunately it doesn't provide any way to set the isolation level per transaction.
So before I spend hours replacing Guice with Spring as our DI framework in our web service (as it looks like the only option for now), I'm seeking other potentially simple solutions that would achieve the same.
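One possible approach (a sketch under assumptions, not a verified fix for this exact stack): Hibernate's Session exposes doWork, which hands you the underlying JDBC Connection, so the isolation level can be raised just for the sensitive unit of work. The class and method names below are hypothetical:

```java
import java.sql.Connection;
import org.hibernate.Session;

public class SerializableAbsenceCreator {

    // The Session and the surrounding transaction boundaries are
    // assumed to come from the application's existing Hibernate
    // setup (e.g. dropwizard-hibernate's @UnitOfWork).
    public void createAbsence(Session session) {
        // Raise the isolation level on the current JDBC connection
        // before the collision check and insert run.
        session.doWork(connection ->
            connection.setTransactionIsolation(
                Connection.TRANSACTION_SERIALIZABLE));

        // ... perform the collision check and the insert here ...

        // Caveat: if the connection comes from a pool, the previous
        // isolation level should be restored before the connection
        // is returned, or the pool configured to reset it.
    }
}
```

A caller catching PostgreSQL serialization failures (SQLSTATE 40001) would still need retry logic, since Serializable transactions can be rolled back by the database on conflict.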
I am working on a very large application with over 100 modules and almost 500 tables in the database. We are converting this application to WPF/WCF using Entity Framework 4.2 Code First. Our database is SQL Anywhere 11. Because of the size of the database, we are using an approach similar to bounded DbContexts, as described by Julie Lerman here: http://msdn.microsoft.com/en-us/magazine/jj883952.aspx.
Each of our modules creates its own DbContext, modeling only the subset of the database that it needs.
However, we have run into a serious problem with the way DbContexts are created. Our modules are not neatly self-contained, nor can they be. Some contain operations that are called from several other modules. And when they are, they need to participate in transactions started by the calling modules. (And for architectural reasons, DTC is not an option for us.) In our old ADO architecture, there was no problem passing an open connection from module to module in order to support transactions.
I've looked at various DbContext constructor overloads, and tried managing the transaction from EntityConnection vs. the StoreConnection, and as far as I can tell, there is no combination that allows ModuleA to begin a transaction, call a function in ModuleB, and have ModuleB's DbContext participate in the transaction.
It comes down to two simple things:
Case 1. If I construct DbContextB with DbContextA's EntityConnection, DbContextB is not built with its own model metadata; it reuses DbContextA's metadata. Since the Contexts have different collections of DbSets, all ModuleB's queries fail. (The entity type is not a part of the current context.)
Case 2. If I construct DbContextB with ModuleA's StoreConnection, DbContextB does not recognize the StoreConnection's open transaction at the EntityConnection level, so EF tries to start a new transaction when ModuleB calls SaveChanges(). Since the database connection in fact has an open transaction, this generates a database exception. (Connection does not support parallel transactions.)
Is there any way to 1) force DbContextB to build its own model in Case 1, or 2) get DbContextB's ObjectContext to respect its StoreConnection's transaction state in Case 2?
(By the way, I saw some encouraging things in the EF6 alpha, but after testing it out, I found that the only difference was that I could create DbContextB on an open connection. Even then, the above two problems still exist.)
I suggest you try using the TransactionScope object to manage this for you. As long as all of your DbContexts use the same connection string (not the same connection object), the transaction should not try to enlist MS-DTC.
The JPA 2.0 specification mentions in section 6.9 that CriteriaQuery objects are serializable, and hence may outlive any open EntityManagers or EntityManagerFactory instances:
CriteriaQuery objects must be serializable. A persistence vendor is required to support the subsequent deserialization of a CriteriaQuery object into a separate JVM instance of that vendor's runtime, where both runtime instances have access to any required vendor implementation classes.
The EJB 3.1 specification says in section 21.2.2:
An enterprise bean must not use thread synchronization primitives to synchronize execution of multiple instances, except if it is a Singleton session bean with bean-managed concurrency.
If I have a stateless session bean that wishes to pre-build a bunch of CriteriaQuery objects using a CriteriaBuilder obtained from an injected @PersistenceContext, where should I stash the results?
I can think of the following possibilities but am concerned that all but one run afoul of the "no synchronization primitives" clause above:
In a Map that is stored as the value of one of my bean's instance fields, understanding that I'll have to synchronize access to the map. My take: section 21.2.2 violation.
In a ConcurrentMap that is stored as the value of one of my bean's instance fields. My take: still a section 21.2.2 violation, as I'm sure the ConcurrentMap implementation synchronizes somewhere.
In a @Singleton EJB's instance field somewhere, where the @Singleton exists only to serve as this kind of cache; with bean-managed concurrency this should be legal, but now all my stateless session beans that want to make use of this CriteriaQuery cache have to inject the singleton into themselves... it seems like a lot of overhead.
So it sounds like strictly speaking the last option is the only specification-compliant one. Am I correct?
I would consider putting them in a simple static context, accessible from anywhere. The problem lies in initializing them, since you need an EntityManager instance to do that. Perhaps use a singleton EJB for initialization, as described in "Call method in EJB on JBoss startup". The singleton could initialize your criteria query cache, which could then serve criteria queries to your DAOs through the static context.
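A sketch of that idea (the cache class, entity, and query names are all hypothetical): a @Startup @Singleton builds the queries once at deployment and publishes them through a static holder, so DAOs can read them without injecting the singleton:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.annotation.PostConstruct;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.criteria.CriteriaQuery;

@Startup
@Singleton
public class CriteriaQueryCache {

    // Populated once at startup, read-only afterwards.
    private static final Map<String, CriteriaQuery<?>> QUERIES =
            new ConcurrentHashMap<>();

    @PersistenceContext
    private EntityManager em;

    @PostConstruct
    void buildQueries() {
        // Account is a hypothetical @Entity; build each CriteriaQuery
        // once using the injected persistence context.
        CriteriaQuery<Account> q = em.getCriteriaBuilder()
                .createQuery(Account.class);
        q.from(Account.class);
        QUERIES.put("allAccounts", q);
    }

    // Static accessor: DAOs fetch prebuilt queries without
    // injecting the singleton itself.
    public static CriteriaQuery<?> get(String name) {
        return QUERIES.get(name);
    }
}
```

Whether this sidesteps the section 21.2.2 concern is debatable, since the ConcurrentHashMap still synchronizes internally; it merely moves the synchronization out of the stateless beans' own code.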
Another option would be to use JPQL, which has built-in support for precompiled (named) queries. Of course, you'd lose some advantages of the Criteria API, though I think the main issue (type safety, etc.) might be OK, since precompiled queries should throw an exception at deploy time rather than at runtime if they are invalid.