Why is @Transactional required on a test case whose test method performs an update (through a Spring Data JPA repository) in the DAO layer, like this:
@Test
@Transactional
public void processTestSuccess() throws Exception {
    abc.process(); // abc is an instance of Abc
}

public class Abc {
    @Transactional
    public void process() {
        .....
        jpaRepository.update(10); // Spring Data JPA repository updating something
    }
}
When the test is run without the @Transactional annotation on it, the following exception is thrown at the line jpaRepository.update(10):
org.springframework.dao.InvalidDataAccessApiUsageException: Executing an update/delete query; nested exception is javax.persistence.TransactionRequiredException: Executing an update/delete query
Your method process() is annotated with @Transactional, which means it requires a transaction.
Since you seem to run this in a Spring application context, the annotation is evaluated and an exception is thrown if no transaction is available.
By adding @Transactional to the test you make a transaction available (which will be rolled back at the end of the test).
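If you prefer not to annotate the test method, you can also open the transaction programmatically. A minimal sketch using Spring's TransactionTemplate (the injected transactionManager field and the abc bean are assumptions about the surrounding test class):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;

// Inside the test class; the transaction manager comes from the application context.
@Autowired
private PlatformTransactionManager transactionManager;

@Test
public void processTestSuccess() {
    // Runs abc.process() inside a programmatic transaction, so the
    // update/delete query finds the transaction it requires.
    new TransactionTemplate(transactionManager)
            .executeWithoutResult(status -> abc.process());
}
```

Note that, unlike @Transactional on a test method, this commits at the end rather than rolling back, unless you call status.setRollbackOnly() inside the callback.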
Related
I'm writing a transactional JUnit-based integration test for a Spring Data JPA repository.
To check the number of rows in the table I use a separate JdbcTemplate.
I notice that in a transactional context, invoking org.springframework.data.repository.CrudRepository#save(S) doesn't take effect: the SQL insert is not performed and the number of rows in the table does not increase.
But if I invoke org.springframework.data.repository.CrudRepository#count after save(S), the SQL insert is performed and the number of rows increases.
I guess this is the behavior of the JPA cache, but how does it work in detail?
Code with Spring Boot:
@RunWith(SpringRunner.class)
@SpringBootTest
public class ErrorMessageEntityRepositoryTest {

    @Autowired
    private ErrorMessageEntityRepository errorMessageEntityRepository;

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @Test
    @Transactional
    public void save() {
        ErrorMessageEntity errorMessageEntity = aDefaultErrorMessageEntity().withUuid(null).build();
        assertTrue(TestTransaction.isActive());
        int sizeBefore = JdbcTestUtils.countRowsInTable(jdbcTemplate, "error_message");
        ErrorMessageEntity saved = errorMessageEntityRepository.save(errorMessageEntity);
        errorMessageEntityRepository.count(); // [!!!!] if this line is commented out, the test fails
        int sizeAfter = JdbcTestUtils.countRowsInTable(jdbcTemplate, "error_message");
        Assert.assertEquals(sizeBefore + 1, sizeAfter);
    }
}
Entity:
@Entity(name = "error_message")
public class ErrorMessageEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private UUID uuid;

    @NotNull
    private String details;
}
Repository:
public interface ErrorMessageEntityRepository extends CrudRepository<ErrorMessageEntity, UUID> {}
You are correct: this is a result of how JPA works.
JPA tries to delay SQL statement execution as long as possible.
When saving new instances, this means it will only perform the insert immediately if that is required in order to get an id for the entity.
Only when a flush event occurs are all changes stored in the persistence context flushed to the database. There are three triggers for that event:
1. The closing of the persistence context flushes all changes. In a typical setup, this is tied to a transaction commit.
2. Explicitly calling flush on the EntityManager, which you might do directly or, when using Spring Data JPA, via saveAndFlush.
3. Before executing a query, since you typically want to see your own changes in the query's results.
Number 3 is the effect you are seeing.
Note that the details are a little more complicated since you can configure a lot of this stuff. As usual, Vlad Mihalcea has written an excellent post about it.
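Applying trigger number 2 to the test above: a sketch (assuming the same repository and test setup, and that the repository extends JpaRepository rather than plain CrudRepository, since that is where saveAndFlush is declared) that forces the insert explicitly instead of relying on the count() query:

```java
// saveAndFlush() is JpaRepository's variant of save() that calls
// EntityManager.flush() immediately, so the INSERT is issued right away
// and the side JdbcTemplate (which participates in the same Spring-managed
// transaction) can see the new row before the transaction commits.
ErrorMessageEntity saved = errorMessageEntityRepository.saveAndFlush(errorMessageEntity);
int sizeAfter = JdbcTestUtils.countRowsInTable(jdbcTemplate, "error_message");
```

With this change, the count() call marked [!!!!] is no longer needed for the assertion to pass.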
To keep test data from polluting the database, Spring Test rolls the transaction back by default when running unit tests; that is, @Rollback is true by default. If you want the test data to be kept rather than rolled back, you can set @Rollback(value = false). If you are using a MySQL database and find that the transaction is still not rolled back despite the default, check whether the table's storage engine is InnoDB, because other engines such as MyISAM and Memory do not support transactions.
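A minimal sketch of the two variants just described (the test bodies are elided):

```java
@Test
@Transactional // changes are rolled back after the test (default: @Rollback(true))
public void savedRowDisappearsAfterTest() {
    // ... save entities; they will not survive the test
}

@Test
@Transactional
@Rollback(false) // transaction is committed; changes stay in the database
public void savedRowIsKeptAfterTest() {
    // ... save entities; they remain visible after the test
}
```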
I have a JPA entity class that is also an Elasticsearch document. The environment is a Spring Boot application using Spring Data JPA and Spring Data Elasticsearch.
@Entity
@Document(indexName = "...") // etc.
@EntityListeners(MyJpaEntityListener.class)
public class MyEntity {
    // ID, constructor and stuff following here
}
When an instance of this entity gets created, updated or deleted, it gets reindexed to Elasticsearch. This is currently achieved with a JPA EntityListener which reacts to PostPersist, PostUpdate and PostRemove events.
public class MyJpaEntityListener {

    @PostPersist
    @PostUpdate
    public void postPersistOrUpdate(MyEntity entity) {
        // Elasticsearch indexing code goes here
    }

    @PostRemove
    public void postRemove(MyEntity entity) {
        // Elasticsearch delete code goes here
    }
}
That's all working fine at the moment when a single entity, or a few, get modified during a single transaction: each modification triggers a separate index operation. But if a lot of entities get modified inside one transaction, it gets slow.
I would like to bulk-index all modified entities at the end (or after commit) of a transaction. I took a look at TransactionalEventListener, AOP and TransactionSynchronizationManager, but wasn't able to come up with a good setup so far.
How can I collect all modified entities per transaction in an elegant way, without doing it by hand in every service method?
And how can I trigger a bulk index at the end of a transaction with the entities collected during that transaction?
Thanks for your time and help!
One different and, in my opinion, elegant approach, since it doesn't mix Elasticsearch-related code into your services and entities, is to use Spring aspects with @AfterReturning on the transactional methods of the service layer.
The pointcut expression can be adjusted to catch all the service methods you want.
@Order(1) guarantees that this code runs after the transaction commit.
The code below is just a sample...you have to adapt it to work with your project.
@Aspect
@Component
@Order(1)
public class StoreDataToElasticAspect {

    @Autowired
    private SampleElasticsearchRepository elasticsearchRepository;

    @AfterReturning(pointcut = "execution(* com.example.DatabaseService.bulkInsert(..))")
    public void synonymsInserted(JoinPoint joinPoint) {
        Object[] args = joinPoint.getArgs();
        // Create Elasticsearch documents from the method parameters.
        // You can also inject database services if more information is needed for the documents.
        List<String> ids = (List<String>) args[0];
        // create a batch from the ids
        elasticsearchRepository.save(batch);
    }
}
And here is an example with a logging aspect.
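Another option the question itself mentions is Spring's @TransactionalEventListener. A sketch of that approach, under the assumption that services publish the entities they modified as an application event (the event class and the commented-out indexing call are hypothetical): the listener then bulk-indexes only after the commit succeeds, so nothing is indexed for a rolled-back transaction:

```java
import java.util.List;
import org.springframework.stereotype.Component;
import org.springframework.transaction.event.TransactionPhase;
import org.springframework.transaction.event.TransactionalEventListener;

// Event carrying the entities modified during one service call.
class EntitiesModifiedEvent {
    final List<MyEntity> entities;
    EntitiesModifiedEvent(List<MyEntity> entities) { this.entities = entities; }
}

@Component
class BulkIndexListener {

    // Invoked only after the surrounding transaction has committed;
    // if the transaction rolls back, the event is discarded.
    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
    public void onEntitiesModified(EntitiesModifiedEvent event) {
        // Hypothetical bulk-index call; adapt to your Elasticsearch client, e.g.:
        // elasticsearchOperations.save(event.entities);
    }
}
```

Inside a @Transactional service method you would publish the event via an injected ApplicationEventPublisher: publisher.publishEvent(new EntitiesModifiedEvent(modified)). Each service still has to publish, so this trades the aspect's pointcut maintenance for explicit event publication.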
My question is about the need to define a UserTransaction in a JSF Bean if multiple EJB methods are called.
This is my general scenario:
// JSF bean...
@EJB
Ejb1 ejb1;
...
public String process(BusinessObject businessobject) {
    ejb1.op1(businessobject);
    ejb1.op2(businessobject);
    ....
}
Both EJB methods manipulate the same complex JPA entity bean object (including flush and detachment). I noticed in the database that some of the @OneToMany relations of my entity bean were duplicated when ejb1.op1() is called before ejb1.op2().
I understand that each EJB method starts a new transaction, and so far everything looks OK to me.
But the JSF code only works correctly if I add a UserTransaction to my JSF method like this:
// JSF bean...
@Resource
UserTransaction tx;
@EJB
Ejb1 ejb1;
...
public String process(BusinessObject businessobject) {
    try {
        tx.begin();
        ejb1.op1(businessobject);
        ejb1.op2(businessobject);
    } finally {
        tx.commit();
    }
    ....
}
I did not expect that it would be necessary to wrap both EJB calls in one UserTransaction. Why is this necessary?
Each @Stateless EJB method call from a client (in your case, the JSF managed bean) indeed counts by default as one full transaction. The transaction lasts until the EJB method call returns, including any nested EJB method calls.
Just merge the two calls into a single EJB method if they must represent a single transaction.
public String process(Entity entity) {
    ejb1.op1op2(entity);
    // ...
}

with

public void op1op2(Entity entity) {
    op1(entity);
    op2(entity);
}
No need to fiddle with UserTransaction in the client. In a well designed JSF based client application you should never need it, either.
As to the why of transactions: a transaction locks the affected DB rows while you're performing a business action on the entity. Your mistake was that you performed two apparently dependent business actions completely separately. In a highly concurrent system this may indeed cause a corrupted DB state, as you encountered yourself.
On the why of transactions, this may also be a good read: When is it necessary or convenient to use Spring or EJB3 or all of them together?
Do you really need a user transaction? Generally, container-managed transactions are good enough and serve the purpose.
Even if you do need user-managed transactions, it is not good to have transaction management logic mingled with JSF logic.
For using container-managed transactions, you should look at the @TransactionAttribute annotation on the EJBs.
If all the methods in your EJB need the same level of transaction support, you can put the annotation at the class level. Otherwise, you can also use @TransactionAttribute on each individual EJB method.
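A sketch of this container-managed approach, reusing the Ejb1 bean and the merged op1op2 method from the earlier answer (the BusinessObject type and the op1/op2 bodies are placeholders):

```java
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class Ejb1 {

    // REQUIRED (also the default): join the caller's transaction if one
    // exists, otherwise start a new one. op1 and op2 therefore share a
    // single transaction when invoked through this method.
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void op1op2(BusinessObject businessobject) {
        op1(businessobject);
        op2(businessobject);
    }

    public void op1(BusinessObject businessobject) { /* ... */ }
    public void op2(BusinessObject businessobject) { /* ... */ }
}
```

The JSF bean then simply calls ejb1.op1op2(businessobject) without any UserTransaction handling.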
A typical Spring-MVC based CRUD application has the following layout.
the controller layer
@Controller
public class WebController {
    @Inject
    UserService userService;
    ...
}
the service layer
@Service
@Transactional
public class UserServiceImpl {
    @Inject
    UserDao userDao;
}
and the DAO layer
@Repository
public class UserDaoImpl {
    @Inject
    SessionFactory sessionFactory;
}
With the Spring annotations @Controller, @Service, @Repository, and @Transactional, we can take advantage of Spring's features. For example, if the data store is a traditional database such as MySQL, the @Transactional annotation can help make sure a transaction is completed. But if I use MongoDB as the data store, and MongoDB does not support transactions, is there still any advantage to using the @Transactional annotation? And what about the @Service and @Repository layers?
AFAIK, there are two types of entity manager:
1. Container-managed entity manager
2. Application-managed entity manager
Container-managed entity manager
This type of EM uses JTA transactions only.
Below is my code:
@PersistenceContext(unitName = "", type = PersistenceContextType.TRANSACTION)
EntityManager em;

public void persist(T entity) {
    em.persist(entity);
}
Questions:
1. A TransactionRequiredException is thrown when this code executes. Why? The exception no longer occurs after I add a @Resource UserTransaction to the method persist(). UserTransaction belongs to JTA, right?
2. EntityTransaction et = em.getTransaction();
Referring to the line above, why can't a JTA transaction type invoke getTransaction()?
3. Can an extended JTA-transaction EM be used outside of an EJB?
Application-managed entity manager
Can utilize JTA transactions
Can utilize JDBC transactions (resource-local transactions)
Could anyone please provide an example of source code for the JDBC transaction type?
A JPA persistence unit can be either JTA or RESOURCE_LOCAL.
If you use JTA, then you must use JTA for transactions, either through session beans or by accessing JTA directly.
See,
http://en.wikibooks.org/wiki/Java_Persistence/Runtime#Java_Enterprise_Edition
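For the RESOURCE_LOCAL side that the question asked about, here is a minimal sketch of an application-managed EntityManager with a resource-local (JDBC) transaction. The persistence unit name "myPU" and the entity variable are assumptions; the unit must be declared with transaction-type="RESOURCE_LOCAL" in persistence.xml:

```java
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.EntityTransaction;
import javax.persistence.Persistence;

EntityManagerFactory emf = Persistence.createEntityManagerFactory("myPU");
EntityManager em = emf.createEntityManager();
EntityTransaction tx = em.getTransaction(); // only available for RESOURCE_LOCAL units
try {
    tx.begin();
    em.persist(entity); // entity is some managed-to-be object you created
    tx.commit();
} catch (RuntimeException e) {
    if (tx.isActive()) {
        tx.rollback(); // undo partial work on failure
    }
    throw e;
} finally {
    em.close();
}
```

This also answers question 2 above: getTransaction() throws IllegalStateException on a JTA entity manager, because with JTA the transaction is controlled by the container (or UserTransaction), not by the EntityManager.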