MyBatis hangs when inserting - mybatis

I am trying to insert a record into an Oracle 11g database using MyBatis-Spring, but the insert hangs. Selects work fine.
I need another set of eyes to help me figure out what is going on.
Here are the code snippets that matter:
Logging output: (it hangs forever on the last line)
Running persistence.PartyMapperUnitTest
DEBUG [main] - Cache Hit Ratio [persistence.mapper.PartyMapper]: 0.0
DEBUG [main] - ==> Preparing: SELECT PARTY_ID, PARTY_SUBTYPE_CD, LIFECYCLE_CD, PARTY_STATUS_CD FROM PARTY WHERE PARTY_ID =2
DEBUG [main] - ==> Parameters:
DEBUG [main] - <== Total: 1
DEBUG [main] - ==> Preparing: INSERT INTO PARTY (PARTY_SUBTYPE_CD, LIFECYCLE_CD, PARTY_STATUS_CD, CREATED_BY) VALUES (?,?,?,?)
DEBUG [main] - ==> Parameters: partySubtypeCode1438810529048(String), lifecycleCode(String), partyStatusCode(String), createdBy(String)
***==== The application hangs forever at this log line ====***
Testing.sql (this works fine)
INSERT INTO PARTY (PARTY_SUBTYPE_CD, LIFECYCLE_CD, PARTY_STATUS_CD, CREATED_BY)
VALUES ( 'a', 'b', 'c', 'd');
applicationContext.xml
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource" />
</bean>
<bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
</bean>
PartyMapper.java
public interface PartyMapper<PartyEntity> {
    public PartyEntity fetch(Object entityId);
    public int insert(PartyEntity entity);
}
PartyMapper.xml
<insert id="insert"
        parameterType="persistence.entity.PartyEntity"
        keyProperty="partyId"
        keyColumn="PARTY_ID"
        useGeneratedKeys="true">
    INSERT INTO PARTY
        (PARTY_SUBTYPE_CD, LIFECYCLE_CD, PARTY_STATUS_CD, CREATED_BY)
    VALUES
        (#{partySubtypeCode}, #{lifecycleCode}, #{partyStatusCode}, #{createdBy})
</insert>
PartyMapperUnitTest.java
PartyEntity expectedParty = new PartyEntity();
expectedParty.setPartySubtypeCode("a");
expectedParty.setLifecycleCode("b");
expectedParty.setPartyStatusCode("c");
expectedParty.setCreatedBy("d");
partyMapper.insert(expectedParty);
=== EDIT ===
There are only two threads running while the unit test runs, and I don't see anything wrong there.
I added a Thread.dumpStack() before the insert, but did not see anything suspicious in it either:
Thread.dumpStack()
INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory#77ab3f0: defining beans [transactionManager,org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,sqlSessionFactory,dataSource,org.mybatis.spring.mapper.MapperScannerConfigurer#0,org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,mapper,partyMapper,org.springframework.context.annotation.ConfigurationClassPostProcessor$ImportAwareBeanPostProcessor#0]; root of factory hierarchy
java.lang.Exception: Stack trace
at java.lang.Thread.dumpStack(Thread.java:1365)
at td.com.naccms.cps.persistence.PartyMapperUnitTest.insert(PartyMapperUnitTest.java:48)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
...
...

How many connections do you have in your connection pool? I've seen this happen often when the DB pool has one connection and there are two concurrent transactions (so MyBatis cannot get a DB connection, and the JDBC connection pool is what is actually hanging). You can debug your app and pause it when it hangs; you should be able to see the threads and trace which one is blocked and where. Another possibility, though rarer, is that the table is locked. You can google for queries that will show you all the current locks in your DB.
edit after your comment
To get a proper error when this happens, my suggestion is to set defaultStatementTimeout in MyBatis. This will at least throw an exception rather than hang forever.
You might also want to configure some timeouts in your database connection pool, as some pools wait forever by default (and that's a loooong time :).
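For example, the setting could live in a small MyBatis config file referenced from the existing sqlSessionFactory bean (a sketch; the 30-second value and the mybatis-config.xml file name are just illustrations):
<!-- mybatis-config.xml: make statements fail instead of hanging forever -->
<!DOCTYPE configuration PUBLIC "-//mybatis.org//DTD Config 3.0//EN" "http://mybatis.org/dtd/mybatis-3-config.dtd">
<configuration>
    <settings>
        <setting name="defaultStatementTimeout" value="30"/> <!-- seconds; pick what fits your environment -->
    </settings>
</configuration>

<!-- applicationContext.xml: point the existing factory bean at it -->
<bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
    <property name="dataSource" ref="dataSource" />
    <property name="configLocation" value="classpath:mybatis-config.xml" />
</bean>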

Related

How to configure com.arjuna.ats.jta.orphanSafetyInterval in JBoss

I'm getting an XARecovery exception because MySQL replication breaks.
WARN [com.arjuna.ats.jta] (Periodic Recovery) ARJUNA016027: Local
XARecoveryModule.xaRecovery got XA exception XAException.XAER_NOTA:
com.mysql.jdbc.jdbc2.optional.MysqlXAException: XAER_NOTA: Unknown XID
The default timeout is 10 seconds. How do I increase the orphanSafetyInterval timeout?
Thanks!
This property can be set in standalone-full.xml as a system property:
<system-properties>
    <property name="com.arjuna.ats.jta.orphanSafetyInterval" value="50000"/>
    <property name="com.arjuna.ats.jta.xaAssumeRecoveryComplete" value="true"/>
</system-properties>
Moreover, you can also use xaAssumeRecoveryComplete to handle the unknown-XID error during XA recovery.
For more info, please go through the link below:
https://access.redhat.com/documentation/en-us/jboss_enterprise_application_platform/6.3/html/development_guide/limitations_of_the_xa_recovery_process
You can add orphanSafetyInterval as a JVM system property, for example:
-Dcom.arjuna.ats.jta.common.orphanSafetyInterval=20000

JobExecution null in spring batch

I am running jobs in parallel. My job execution is always null when I use JobRepositoryFactoryBean. I need to use it, because without it I cannot use the metadata tables, and I want to restart my job when it does not complete because of some failure, which means fetching the previous record from the metadata tables. If I use MapJobRepositoryFactoryBean instead, the job execution is not null, but then nothing is inserted into the metadata tables.
I referred to this link:
My job is always null. Can't inject a batch job with Spring Batch. Why?
But the solution in that link is not working for me.
My configuration is:
<bean id="batchScheduler" class="com.abc.BatchScheduler">
<property name="jobLauncher" ref="jobLauncher" />
<property name="jobtwo" ref="JobTwo" />
</bean>
I searched a lot. Please help me out. I am not able to proceed.
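For reference, the database-backed repository being discussed is usually wired roughly like this (a sketch; the jobRepository/jobLauncher bean names and the dataSource/transactionManager references are assumptions about your context):
<bean id="jobRepository" class="org.springframework.batch.core.repository.support.JobRepositoryFactoryBean">
    <!-- persists JobExecution state in the BATCH_* metadata tables -->
    <property name="dataSource" ref="dataSource" />
    <property name="transactionManager" ref="transactionManager" />
</bean>

<bean id="jobLauncher" class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
    <property name="jobRepository" ref="jobRepository" />
</bean>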

Spring Batch Integration, Email to be sent out in case of JobInstanceAlreadyCompleteException

I would like to put a hook somewhere in the following code/config to be able to spot a JobInstanceAlreadyCompleteException and then email the production support team that this occurred.
I have tried a JobExecutionListener#beforeJob() method in Spring Batch, but the JobInstanceAlreadyCompleteException is occurring before job execution.
I am using this Spring Batch Integration configuration from the documentation:
<int:channel id="inboundFileChannel"/>
<int:channel id="outboundJobRequestChannel"/>
<int:channel id="jobLaunchReplyChannel"/>
<int-file:inbound-channel-adapter id="filePoller"
channel="inboundFileChannel"
directory="file:/tmp/myfiles/"
filename-pattern="*.csv">
<int:poller fixed-rate="1000"/>
</int-file:inbound-channel-adapter>
<int:transformer input-channel="inboundFileChannel"
output-channel="outboundJobRequestChannel">
<bean class="io.spring.sbi.FileMessageToJobRequest">
<property name="job" ref="personJob"/>
<property name="fileParameterName" value="input.file.name"/>
</bean>
</int:transformer>
I want to handle JobInstanceAlreadyCompleteException in case the same CSV file name appears as the job parameter. Do I extend org.springframework.integration.handler.LoggingHandler?
I notice that class is reporting the error:
ERROR org.springframework.integration.handler.LoggingHandler - org.springframework.messaging.MessageHandlingException: org.springframework.batch.core.repository.JobInstanceAlreadyCompleteException: A job instance already exists and is complete for parameters={input.file.name=C:\Users\csv\file2015.csv}. If you want to run this job again, change the parameters.
The ERROR from org.springframework.integration.handler.LoggingHandler is produced by the default errorChannel, which is reached from the <poller> on your <int-file:inbound-channel-adapter>.
So, to handle it manually you just need to specify your own error-channel there and go ahead with the email sending:
<int-file:inbound-channel-adapter>
    <int:poller fixed-rate="1000" error-channel="sendErrorToEmailChannel"/>
</int-file:inbound-channel-adapter>

<int-mail:outbound-channel-adapter id="sendErrorToEmailChannel"/>
Of course, you will have to do some ErrorMessage transformation before sending it over e-mail, but those are details of the target business logic implementation.
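For instance, the <int-mail:outbound-channel-adapter> above could be expanded into a small chain that pulls the exception text out of the ErrorMessage and sets the mail headers (a sketch; the mailSender bean and the addresses are assumptions, not part of the original configuration):
<int:chain input-channel="sendErrorToEmailChannel">
    <!-- the ErrorMessage payload is a MessagingException; extract the underlying cause text -->
    <int:transformer expression="payload.cause.message"/>
    <int-mail:header-enricher>
        <int-mail:to value="prod-support@example.com"/>
        <int-mail:from value="batch@example.com"/>
        <int-mail:subject value="Batch job launch failed"/>
    </int-mail:header-enricher>
    <int-mail:outbound-channel-adapter mail-sender="mailSender"/>
</int:chain>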

Spring Batch: Duplicate rows after job re-run

Our Spring Batch application is, upon restart of a failed job, processing the same records again, resulting in duplicate rows, and we want to understand how to avoid this.
The Spring Integration poller which starts the batch job is configured to run every couple of hours. When it runs a second time, the job parameters will be the same, but if the previous run failed (for example, because of a DataTruncation exception), Spring Batch will not complain that the job has already completed.
At the point of failure, several hundred thousand records will already have been processed and copied from the source table to the destination table. When the job is run a subsequent time, the same rows are copied to the destination table again, resulting in duplicates. It therefore appears that the job is not being resumed, but restarted from the beginning.
The Spring Batch database is Derby (file based), set up when the application starts, and state does not appear to be maintained between restarts of the actual application (because a job can be run again with the same parameters). However, within one application run, state is maintained: for instance, if the job completes successfully, the next time the poller runs an exception is thrown because a job with those parameters has already completed.
Our job definition is as follows:
<batch:job id="publisherJob" >
<batch:step id="step1">
<batch:tasklet >
<batch:chunk reader="itemReader" processor="itemProcessor"
writer="itemWriter" commit-interval="${...}" />
</batch:tasklet>
<batch:listeners>
...
</batch:listeners>
</batch:job>
<bean id="itemReader" class="org.springframework.batch.item.database.JdbcCursorItemReader">
<property name="dataSource" ref="dataSource" />
<property name="sql" value="select ${...} from ${...} where ${...}" />
<property name="rowMapper" ref="rowMapper" />
</bean>
The WHERE clause includes an ORDER BY.
Our understanding was that Spring Batch would retain the state at which processing failed and proceed from that point (if the error in the source table has been fixed), therefore preventing duplicate rows. What has to be configured for this to happen?
Thanks
Spring Batch maintains state in that it remembers how many records were processed, not specifically which ones. Because of that, it's up to you to guarantee the order of the items is reproducible from run to run so that if we process 100 records in run 1 and fail, when we skip the first 100 records in run 2, those are the right 100 records to skip. You didn't provide the configuration for your JdbcCursorItemReader but my assumption is that you are not using an order by in your SQL. If you want restartability, you need some way to guarantee the order of the items. Using an order by in your SQL is the easiest way to accomplish this (there are others like using the process indicator pattern if that's needed).
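Applied to the reader in the question, that boils down to making the row order deterministic, roughly like this (a sketch; the ID sort column is a placeholder for whatever key makes sense in your source table):
<bean id="itemReader" class="org.springframework.batch.item.database.JdbcCursorItemReader">
    <property name="dataSource" ref="dataSource" />
    <!-- a stable sort key ensures "skip the first N rows" lands on the same rows after a restart -->
    <property name="sql" value="select ${...} from ${...} where ${...} order by ID" />
    <property name="rowMapper" ref="rowMapper" />
</bean>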

WebSphere Application Server V8.0.0.5 JPA Unable to persist

I have code that works perfectly on WAS 7 but fails when I run it on WAS 8.0.0.5. I am using JPA 2.0 with OpenJPA as my provider. Calling persist on my EntityManager throws a nested exception. Has anyone ever managed to write a JPA program on WAS 8.0.0.5?
Here is the exception:
WTRN0074E: Exception caught from before_completion synchronization operation: org.apache.openjpa.persistence.PersistenceException: DB2 SQL Error: SQLCODE=-204, SQLSTATE=42704, SQLERRMC=.OPENJPA_SEQUENCE_TABLE, DRIVER=3.58.81 {prepstmnt -1559269434 SELECT SEQUENCE_VALUE FROM .OPENJPA_SEQUENCE_TABLE WHERE ID = ? FOR READ ONLY WITH RS USE AND KEEP UPDATE LOCKS [params=?]}
The SQLCODE=-204 indicates that something is missing. The log keeps printing THAKHANI.OPENJPA_SEQUENCE_TABLE, which makes me think the table is missing. You could also check that the DB2 user JPA is using has permissions to create tables and run SELECT statements on them.
I managed to resolve the problem by selecting Identity as my primary key generation mechanism when generating entities from tables. I also added the following in my persistence.xml:
<properties>
    <!-- OpenJPA specific properties -->
    <property name="openjpa.TransactionMode" value="managed"/>
    <property name="openjpa.ConnectionFactoryMode" value="managed"/>
    <property name="openjpa.jdbc.DBDictionary" value="db2"/>
    <property name="openjpa.jdbc.Schema" value="<SchemaName>"/>
</properties>
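For completeness, the identity-based key generation that fixed it can also be expressed in an orm.xml mapping instead of annotations (a sketch; the entity class and column name are illustrative):
<entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm" version="2.0">
    <entity class="com.example.Party">
        <attributes>
            <id name="id">
                <column name="PARTY_ID"/>
                <!-- IDENTITY lets DB2 assign the key, so OpenJPA no longer needs OPENJPA_SEQUENCE_TABLE -->
                <generated-value strategy="IDENTITY"/>
            </id>
        </attributes>
    </entity>
</entity-mappings>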