Spring Boot with application managed persistence context - jpa

I am trying to migrate an application from EJB3 + JTA + JPA (EclipseLink). Currently, this application uses an application-managed persistence context because the number of databases is unknown at design time.
The application-managed persistence context lets us control how each EntityManager is created (e.g. supply a different datasource JNDI name to create the proper EntityManager for a specific DB at runtime).
E.g.
Map<String, Object> properties = new HashMap<>();
properties.put(PersistenceUnitProperties.TRANSACTION_TYPE, "JTA");
//the datasource JNDI name comes from configuration, without prior knowledge of the number of databases
//currently, the DB JNDI names are stored in an externalized file
//the datasources are set up by the operations team
properties.put(PersistenceUnitProperties.JTA_DATASOURCE, "datasource-jndi");
properties.put(PersistenceUnitProperties.CACHE_SHARED_DEFAULT, "false");
properties.put(PersistenceUnitProperties.SESSION_NAME, "xxx");
//create the proper EntityManager to connect to a database decided at runtime
EntityManager em = Persistence.createEntityManagerFactory("PU1", properties).createEntityManager();
//query or update the DB
em.persist(entity);
em.createQuery(...).executeUpdate();
When deployed in an EJB container (e.g. WebLogic) with a proper TransactionAttribute (e.g. TransactionAttributeType.REQUIRED), the container takes care of transaction start/commit/rollback.
Now, I am trying to migrate this application to Spring Boot.
The problem I encounter is that no transaction is started even after I annotate the method with @Transactional(propagation = Propagation.REQUIRED).
The Spring application is packaged as an executable JAR file and runs with embedded Tomcat.
When I try to execute those update APIs, e.g. EntityManager.persist(..), EclipseLink always complains about:
javax.persistence.TransactionRequiredException: 'No transaction is currently active'
Sample code below:
//for data persistence
@Service
class DynamicServiceImpl implements DynamicService {
    //attempt to start a transaction
    @Transactional(propagation = Propagation.REQUIRED)
    public void saveData(String dbJndi, EntityA entity) {
        //this returns false - no transaction has been started
        TransactionSynchronizationManager.isActualTransactionActive();
        //create an EntityManager based on the input JNDI name to dynamically
        //determine which DB to save the data to
        EntityManager em = createEm(dbJndi);
        //save the data
        em.persist(entity);
    }
}
//restful service
@RestController
class DynamicRestController {
    @Autowired
    DynamicService service;

    @RequestMapping(value = "/saveRecord", method = RequestMethod.POST)
    public @ResponseBody String saveRecord() {
        //save data
        service.saveData(...);
    }
}
//startup application
@SpringBootApplication
class TestApp {
    public static void main(String[] args) {
        SpringApplication.run(TestApp.class, args);
    }
}
persistence.xml
-------------------------------------------
<persistence-unit name="PU1" transaction-type="JTA">
    <properties>
        <!-- commented out so that Spring handles the transaction??? -->
        <!-- <property name="eclipselink.target-server" value="WebLogic_10"/> -->
    </properties>
</persistence-unit>
-------------------------------------------
application.properties (just 3 lines of config)
-------------------------------------------
spring.jta.enabled=true
spring.jta.log-dir=spring-test # Transaction logs directory.
spring.jta.transaction-manager-id=spring-test
-------------------------------------------
My usage pattern does not follow the most typical use cases (e.g. with a known number of DBs - Spring + JPA + multiple persistence units: injecting the EntityManager).
Can anybody give me advice on how to solve this issue?
Is there anybody who has ever hit this situation where the DBs are not known at design time?
Thank you.

I finally got it working by:
Enabling Tomcat JNDI and creating the datasource JNDI entry for each DS programmatically
Adding the transaction dependencies:
com.atomikos:transactions-eclipselink:3.9.3 (my project uses EclipseLink instead of Hibernate)
org.springframework.boot:spring-boot-starter-jta-atomikos
org.springframework:spring-tx
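A minimal sketch of the first step - enabling JNDI in embedded Tomcat and binding a datasource programmatically - might look like this. It assumes Spring Boot 2's TomcatServletWebServerFactory; the JNDI name, driver, and URL are placeholders, not from the original question:

```java
// Hedged sketch: enable embedded Tomcat's JNDI registry (off by default) and bind
// one DataSource per target database, so EclipseLink's JTA_DATASOURCE lookup works.
@Bean
public TomcatServletWebServerFactory tomcatFactory() {
    return new TomcatServletWebServerFactory() {
        @Override
        protected TomcatWebServer getTomcatWebServer(Tomcat tomcat) {
            tomcat.enableNaming(); // embedded Tomcat disables JNDI unless asked
            return super.getTomcatWebServer(tomcat);
        }

        @Override
        protected void postProcessContext(Context context) {
            // one ContextResource per database, driven by the externalized JNDI list
            ContextResource resource = new ContextResource();
            resource.setName("jdbc/db1"); // hypothetical JNDI name
            resource.setType(DataSource.class.getName());
            resource.setProperty("driverClassName", "oracle.jdbc.OracleDriver");
            resource.setProperty("url", "jdbc:oracle:thin:@//host:1521/DB1");
            context.getNamingResources().addResource(resource);
        }
    };
}
```

With that in place, the EntityManagerFactory properties shown in the question can keep referring to datasources by JNDI name at runtime.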

You have pretty much answered the question yourself: "When deployed in an EJB container (e.g. WebLogic), with a proper TransactionAttribute (e.g. TransactionAttributeType.REQUIRED), the container will take care of the transaction start/end/rollback".
WebLogic is compliant with the Java Enterprise Edition specification, which is probably why it worked before, but now you are using Tomcat (in embedded mode), which is NOT.
So you simply cannot do what you are trying to do.
This statement in your persistence.xml file:
<persistence-unit name="PU1" transaction-type="JTA">
requires an Enterprise Server (WebLogic, Glassfish, JBoss etc.)
With Tomcat you can only do this:
<persistence-unit name="PU1" transaction-type="RESOURCE_LOCAL">
And you need to handle transactions yourself:
myEntityManager.getTransaction().begin();
... //Do your transaction stuff
myEntityManager.getTransaction().commit();
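A slightly fuller sketch of manual RESOURCE_LOCAL transaction handling, including rollback on failure (the persistence unit name and entity are placeholders taken from the question's own example):

```java
// Hedged sketch: RESOURCE_LOCAL means the application, not a JTA provider,
// owns the transaction boundaries via EntityTransaction.
EntityManager em = Persistence
        .createEntityManagerFactory("PU1", properties) // RESOURCE_LOCAL unit
        .createEntityManager();
EntityTransaction tx = em.getTransaction();
try {
    tx.begin();
    em.persist(entity); // do your transactional work here
    tx.commit();
} catch (RuntimeException e) {
    if (tx.isActive()) {
        tx.rollback();  // undo partial work on any failure
    }
    throw e;
} finally {
    em.close();
}
```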

Related

Role of PlatformTransactionManager with Spring batch 5

The documentation is not very clear about the role of PlatformTransactionManager in steps configuration.
First, stepBuilder.tasklet and stepBuilder.chunk require a PlatformTransactionManager as second parameter, while the migration guide says "it is now required to manually configure the transaction manager on any tasklet step definition (...) This is only required for tasklet steps, other step types do not require a transaction manager by design".
More over, in the documentation the transactionManager is injected via a method parameter:
/**
* Note the TransactionManager is typically autowired in and not needed to be explicitly
* configured
*/
But the transactionManager created by Spring Boot is linked to the DataSource created by Spring Boot based on spring.datasource.url. So with autoconfiguration, the following beans work together: dataSource, platformTransactionManager, jobRepository. That makes sense for managing job and step execution metadata.
But unless the readers, writers and tasklets work with this default DataSource used by the JobOperator, the auto-configured transactionManager must not be used for the step configuration. Am I right?
Tasklets or chunk-oriented steps will often need another PlatformTransactionManager:
if a step writes data to a specific DB, it needs a specific DataSource (not necessarily declared as a bean, otherwise the JobRepository will use it) and a specific PlatformTransactionManager linked to this DataSource
if a step writes data to a file or sends messages to a MOM, the ResourcelessTransactionManager is more appropriate. This useful implementation is not mentioned in the documentation.
As far as I understand, the implementation of PlatformTransactionManager for a step depends on where the data is written and has nothing to do with the transactionManager bean used by the JobOperator. Am I right?
Example:
var builder = new StepBuilder("step-1", jobRepository);
PlatformTransactionManager txManager = new ResourcelessTransactionManager();
return builder.<Input, Output> chunk(10, txManager)
.reader(reader())
.processor(processor())
.writer(writer()/*a FlatFileItemWriter*/)
.build();
or
@SpringBootApplication
@EnableBatchProcessing
public class MyJobConfiguration {
    private DataSource dsForStep1Writer;

    public MyJobConfiguration(@Value("${ds.for.step1.writer.url}") String url) {
        this.dsForStep1Writer = new DriverManagerDataSource(url);
    }

    // reader() method, processor() method

    JdbcBatchItemWriter<Output> writer() {
        return new JdbcBatchItemWriterBuilder<Output>()
                .dataSource(this.dsForStep1Writer)
                .sql("...")
                .itemPreparedStatementSetter((item, ps) -> {/*code*/})
                .build();
    }

    @Bean
    Step step1(JobRepository jobRepository) {
        var builder = new StepBuilder("step-1", jobRepository);
        var txManager = new JdbcTransactionManager(this.dsForStep1Writer);
        return builder.<Input, Output>chunk(10, txManager)
                .reader(reader())
                .processor(processor())
                .writer(writer())
                .build();
    }
    // other methods
}
Is that correct ?
Role of PlatformTransactionManager with Spring batch 5
The role of the transaction manager did not change between v4 and v5. I wrote an answer about this a couple of years ago for v4, so I will update it for v5 here:
In Spring Batch, there are two places where a transaction manager is used:
In the proxies created around the JobRepository/JobExplorer to create transactional methods when interacting with the job repository/explorer
In each step definition to drive the step's transaction
Typically, the same transaction manager is used in both places, but this is not a requirement. It is perfectly fine to use a ResourcelessTransactionManager with the job repository to not store any meta-data and a JpaTransactionManager in the step to persist data in a database.
Now in v5, #EnableBatchProcessing does not register a transaction manager bean in the application context anymore. You either need to manually configure one in the application context, or use the one auto-configured by Spring Boot (if you are a Spring Boot user).
What #EnableBatchProcessing will do though is look for a bean named transactionManager in the application context and set it on the auto-configured JobRepository and JobExplorer beans (this is configurable with the transactionManagerRef attribute). Again, this transaction manager bean could be manually configured or auto-configured by Boot.
Once that is in place, it is up to you to set that transaction manager on your steps or not.
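A minimal sketch of that split - one transaction manager for the job repository's meta-data, a different one driving a step's chunk transaction. The bean named transactionManager is the one @EnableBatchProcessing picks up; reader/processor/writer are placeholders, and the JPA setup is an assumption for illustration:

```java
@Configuration
@EnableBatchProcessing // wires the bean named "transactionManager" into the JobRepository
public class BatchTxConfig {

    // Used only for the job repository's meta-data tables.
    @Bean
    public PlatformTransactionManager transactionManager(DataSource dataSource) {
        return new JdbcTransactionManager(dataSource);
    }

    // A different transaction manager drives this step's chunk transaction.
    @Bean
    public Step step1(JobRepository jobRepository, EntityManagerFactory emf) {
        JpaTransactionManager stepTxManager = new JpaTransactionManager(emf);
        return new StepBuilder("step-1", jobRepository)
                .<Input, Output>chunk(10, stepTxManager)
                .reader(reader())       // placeholder
                .processor(processor()) // placeholder
                .writer(writer())       // placeholder
                .build();
    }
}
```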

Persistence unit not injected in webservice

I have an application that was working on Java 6 + Glassfish 3. I am migrating this project to run on Glassfish 5.1 with Oracle Java 8.
The application builds without problems. During deployment the only jpa/eclipselink related lines are
Info: EclipseLink, version: Eclipse Persistence Services - 2.7.4.v20190115-ad5b7c6b2a
Info: /file:/home/eelke/NetBeansProjects/MplusLicentieService/build/web/WEB-INF/classes/_MplusLicentieServicePU login successful
However, when a SOAP call is performed a NullPointerException is triggered. I have verified with the debugger that the reference that is null is in fact the injected persistence context. Here is the definition of the webservice, the EntityManager and one of the methods that failed. Within the method, em == null. I left out other statements and variables.
@WebService(serviceName = "LicentieWebService")
public class LicentieWebService {
    @PersistenceContext(unitName = "MplusLicentieServicePU")
    private EntityManager em;
    @Resource
    private javax.transaction.UserTransaction utx;

    @WebMethod(operationName = "getLicentie2")
    public QLicentieAntwoord getLicentie2(
            @WebParam(name = "licentieNr") String licentieNr,
            @WebParam(name = "filiaalNr") int filiaal,
            @WebParam(name = "werkplekNr") int werkplek,
            @WebParam(name = "codewoord") String codewoord) {
        try {
            Licentie lic = em.find(Licentie.class, licentieNr);
            ...
        } catch (ApplicationError ex) {
            ...
        }
    }
}
I also tried redefining the persistence unit in NetBeans, but this didn't change anything.
Some additional findings
In the same project there is also a stateless EJB with a function that is called by a timer schedule. Into this EJB a second stateless EJB is injected with @Inject; this works. Into this second EJB the same persistence context is injected as in the webservice. This works: it is injected and queries are executed as expected.
Found some log lines which might be related
Info: JAX-WS RI specific descriptor (WEB-INF/sun-jaxws.xml) is found
in the archive web and hence Enterprise Web Service (109) deployment
is disabled for this archive to avoid duplication of services.

Why isn't JobScope and StepScope available from an MVC thread?

I'm getting "Error creating bean with name 'scopedTarget.scopedTarget.processVlsCasesJob': Scope 'job' is not active for the current thread; consider defining a scoped proxy for this bean if you intend to refer to it from a singleton" from a job factory class. The factory is where the job and step beans are created in the correct job/step scopes, from a bean invoked during main application start-up.
@Component("processVlsCasesJobFactory")
public class ProcessVlsCasesJobFactoryImpl
        extends BatchJobFactoryAncestorImpl
        implements ProcessVlsCasesJobFactory {
    ...
    @Bean
    @Scope(scopeName = "job", proxyMode = ScopedProxyMode.INTERFACES)
    public ProcessVlsCasesJob processVlsCasesJob() {
        return new ProcessVlsCasesJobImpl();
    }
    ...
    @Bean
    @Scope(scopeName = "step", proxyMode = ScopedProxyMode.INTERFACES)
    public ProcessVlsCasesProcessCases processVlsCasesProcessCases() {
        return new ProcessVlsCasesProcessCasesImpl();
    }
    ...
    // other bean methods creating the step objects
Any attempt to let Spring auto-register a bean in the job/step scope fails with that type of error. If those scopes are only available when (I guess) a job is running, how do I "create" the bean in the scope from the thread of the main MVC application running in Tomcat?
Why isn't JobScope and StepScope available from an MVC thread?
Those are custom scopes specific to Spring Batch; they are not part of Spring MVC. You need to register them specifically (or use @EnableBatchProcessing to have them registered automatically).
how do I "create" the bean in the scope from the thread of the main MVC application running in Tomcat?
The main thread (processing the web request) should call a JobLauncher configured with an asynchronous TaskExecutor so that the batch job is executed in a separate thread. Please see the Running Jobs from within a Web Container section which provides more details and a code example of how to do that.
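A sketch of that pattern - a JobLauncher configured with an asynchronous TaskExecutor so the web request returns immediately while the job runs on its own thread. Bean, controller, and job names are hypothetical; TaskExecutorJobLauncher is the Spring Batch 5 launcher implementation:

```java
// Hedged sketch: asynchronous launcher for running batch jobs from web requests.
@Bean
public JobLauncher asyncJobLauncher(JobRepository jobRepository) throws Exception {
    TaskExecutorJobLauncher launcher = new TaskExecutorJobLauncher();
    launcher.setJobRepository(jobRepository);
    // run each job on a separate thread instead of the request thread
    launcher.setTaskExecutor(new SimpleAsyncTaskExecutor());
    launcher.afterPropertiesSet();
    return launcher;
}

@RestController
class JobController {
    @Autowired private JobLauncher asyncJobLauncher;
    @Autowired private Job processVlsCasesJob;

    @PostMapping("/runJob")
    public String run() throws Exception {
        // unique parameters so repeated requests start new job instances
        JobExecution exec = asyncJobLauncher.run(
                processVlsCasesJob,
                new JobParametersBuilder()
                        .addLong("startedAt", System.currentTimeMillis())
                        .toJobParameters());
        return "Launched execution " + exec.getId();
    }
}
```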
I finally found the answer: @EnableBatchProcessing doesn't work within an MVC application context. In the @Configuration bean I created to configure Spring Batch (with DB2) and set up all the Spring Batch beans (like jobLauncher), I added:
jobScope = new JobScope();
jobScope.setAutoProxy(Boolean.FALSE);
jobScope.setName(JobScoped.SCOPE_NAME);
((ConfigurableBeanFactory)applicationContext.getAutowireCapableBeanFactory())
.registerScope(JobScoped.SCOPE_NAME, jobScope);
stepScope = new StepScope();
stepScope.setAutoProxy(Boolean.FALSE);
stepScope.setName(StepScoped.SCOPE_NAME);
((ConfigurableBeanFactory)applicationContext.getAutowireCapableBeanFactory())
.registerScope(StepScoped.SCOPE_NAME, stepScope);
Then the two scopes were finally available at run time and the job/step scoped beans were registered at deployment and ran properly.
Was @EnableBatchProcessing added as part of Spring Boot? Is it only supposed to be used via a command-line tool?

#EJB annotation vs JNDI lookup + transaction

According to another post [1], there's no difference between invoking a session EJB via JNDI lookup and using the @EJB annotation. However, in the following scenario:
1.- call session EJB1(JDBC inserts here)
2.- From EJB1, call session EJB2 (more inserts here)
3.- Rollback the transaction (from EJB1)
If I use the @EJB annotation it works fine, but with the JNDI lookup it doesn't: the transaction in the second EJB is a new one and the rollback doesn't happen. All this with CMT.
I'm deploying all this stuff in a Geronimo/ibmwasce-2.1.1.6.
Do I need to pass the transaction from one EJB to another explicitly? I thought that was the container's job. Any clues?
[1] #EJB annotation vs JNDI lookup
Update:
Code via annotation:
@EJB
private CodAppEjb codAppejbAnotacion;
Code via jndi:
CodAppEjb codAppejb;
InitialContext ctx;
Properties properties = new Properties();
properties.setProperty("java.naming.provider.url", "ejbd://127.0.0.1:4201");
properties.setProperty("java.naming.factory.initial", "org.apache.openejb.client.RemoteInitialContextFactory");
ctx = new InitialContext(properties);
codAppejb = (CodAppEjb) ctx.lookup("CodAppEjbBeanRemote");
The transaction code is just the same.
It seems you have a transaction propagation problem.
The problem seems to be that your JNDI lookup searches for the remote EJB (not the local one), which does NOT get executed in the same transaction context as EJB1.
When using the @EJB annotation above, the local implementation is injected, with the same transaction context.
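A sketch of looking up the local business interface instead, which keeps the call in the caller's transaction context. The local interface name, JNDI path, and business method are hypothetical - the exact ejb-local-ref name is container-specific:

```java
// Hedged sketch: use the in-container component environment instead of a
// RemoteInitialContextFactory; local calls join the caller's CMT transaction.
InitialContext ctx = new InitialContext(); // no provider URL: stay in-container
CodAppEjbLocal codAppejb = (CodAppEjbLocal)
        ctx.lookup("java:comp/env/ejb/CodAppEjb"); // hypothetical ejb-local-ref name
codAppejb.doInserts(); // hypothetical method; runs in the same transaction as EJB1
```

By contrast, the ejbd:// lookup in the question goes through the remote protocol, where each invocation is treated as coming from an external client.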

Glassfish: JTA/JPA transaction not rolling back

I am running Glassfish 3.1.1 with an Oracle database and have run into an issue with transactions not rolling back, but only on one specific environment so far. The same application works as expected on other machines. However, two separate Glassfish domains on the same machine are impacted.
Within the affected environment, I get similar results with both a container-managed transaction (CMT) inside an EJB that throws a RuntimeException, and a bean-managed transaction (BMT) with UserTransaction#rollback().
In both cases, the underlying issue appears to be that the JDBC connection is somehow still set with autoCommit = true even though there is a JTA transaction in progress.
My EJB/CMT test looks like this:
@Named
@Stateless
public class TransactionTest {
    @PersistenceContext
    EntityManager entityManager;

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void rollbackTest() {
        Foo foo = new Foo();
        entityManager.persist(foo);
        entityManager.flush();
        throw new RuntimeException("should be rolled back");
    }
}
and my BMT/UserTransaction test is like this:
public void rollbackUtxTest() throws Exception {
    utx.begin();
    Foo foo = new Foo();
    entityManager.persist(foo);
    entityManager.flush();
    utx.rollback();
}
When I call either method, the INSERT INTO FOO is committed, even though the transactions were rolled back.
What am I missing - perhaps my connection pool / datasource is not set up right?
I'm using OracleConnectionPoolDataSource as the datasource class name. Is there something I need to do to ensure my database connections participate in JTA transactions?
UPDATE 1 I originally thought this was an issue with OracleConnectionPoolDataSource but it turned out it was not correlated. The same exact pool configuration works on one environment but not the other.
UPDATE 2 Clarified that this is not specifically an EJB/CMT issue, but a general JTA issue.
UPDATE 3 added info about JDBC autocommit. Confirmed that persistence.xml is correct.
It looks like this may be an issue with domain.xml, possibly a Glassfish bug.
In persistence.xml, I have
<jta-data-source>jdbc/TEST</jta-data-source>.
In domain.xml, I have
<jdbc-resource pool-name="TEST_POOL" description="" jndi-name="jdbc/TEST"></jdbc-resource>
But no corresponding <resource-ref ref="jdbc/TEST"> - either missing or misspelled. (I believe I ended up in that state by creating the JNDI datasource through the UI, realizing the name was wrong, then fixing the JNDI name in the domain.xml jdbc-resource by hand but not fixing it in the resource-ref.)
In this case, my injected EntityManager still works but is not participating in JTA transactions. If I fix domain.xml, it works as expected.
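For reference, the matching pair in domain.xml should look roughly like this. The pool name and JNDI name come from the question itself; the surrounding attributes and element placement are an assumption and vary by Glassfish version:

```xml
<resources>
    <jdbc-connection-pool name="TEST_POOL"
        res-type="javax.sql.ConnectionPoolDataSource"
        datasource-classname="oracle.jdbc.pool.OracleConnectionPoolDataSource"/>
    <jdbc-resource pool-name="TEST_POOL" jndi-name="jdbc/TEST"/>
</resources>
...
<servers>
    <server name="server">
        <!-- the ref must match the jdbc-resource jndi-name exactly,
             otherwise the resource is deployed but not bound to the server,
             and connections silently fall back to non-JTA behavior -->
        <resource-ref ref="jdbc/TEST"/>
    </server>
</servers>
```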
You didn't wrap your Exception in an EJBException.
See http://docs.oracle.com/javaee/6/tutorial/doc/bnbpj.html