I have a Spring Batch job with a tasklet step, and it gets run as soon as I deploy the job to Spring XD. I want the job to run only when I explicitly launch it. Is this the default behavior, or is it something I messed up?
<batch:job id="firstJob" restartable="false">
    <batch:step id="myDAO">
        <batch:tasklet ref="myDAOTasklet" />
        <batch:next on="NO_RECORD" to="jobFinish" />
        <batch:next on="*" to="nextStep" />
    </batch:step>
    <!-- remaining steps omitted -->
</batch:job>
I have a Spring Batch app that I've configured with a SkipPolicy so the whole batch won't fail if one record can't be inserted, but that isn't the behavior I'm seeing. When an insert into the database fails, PostgreSQL marks the transaction as aborted, so all the following inserts in that transaction fail too. I thought Spring Batch was supposed to retry the chunk one record at a time?
So we're wondering whether the problem is that we mark the service method with @Transactional. Should I just let Spring Batch control the transactions? Here's my job configuration:
<bean id="stepScope" class="org.springframework.batch.core.scope.StepScope">
    <property name="autoProxy" value="true"/>
</bean>

<bean id="skipPolicy" class="com.company.batch.common.job.listener.BatchSkipPolicy"/>
<bean id="chunkListener" class="com.company.batch.common.job.listener.ChunkExecutionListener"/>

<batch:job id="capBkdnJob">
    <batch:step id="capStep">
        <batch:tasklet throttle-limit="20">
            <batch:chunk reader="CapReader" processor="CapProcessor" writer="CapWriter"
                         commit-interval="50" skip-policy="skipPolicy" skip-limit="10">
                <batch:skippable-exception-classes>
                    <batch:include class="com.company.common.exception.ERDException"/>
                </batch:skippable-exception-classes>
            </batch:chunk>
            <batch:no-rollback-exception-classes>
                <batch:include class="com.company.common.exception.ERDException"/>
            </batch:no-rollback-exception-classes>
            <batch:listeners>
                <batch:listener ref="chunkListener"/>
            </batch:listeners>
        </batch:tasklet>
    </batch:step>
    <batch:listeners>
        <batch:listener ref="batchWorkerJobExecutionListener"/>
    </batch:listeners>
</batch:job>
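(For reference, the BatchSkipPolicy class referenced above isn't shown in the post. A minimal sketch of what such a policy typically looks like, assuming it skips only the ERDException up to the configured limit, using the Spring Batch 3.x SkipPolicy signature:)

import org.springframework.batch.core.step.skip.SkipPolicy;

import com.company.common.exception.ERDException;

public class BatchSkipPolicy implements SkipPolicy {

    private static final int MAX_SKIP_COUNT = 10; // mirrors skip-limit="10" above

    @Override
    public boolean shouldSkip(Throwable t, int skipCount) {
        // Skip only the known data-level exception; anything else fails the step.
        return t instanceof ERDException && skipCount < MAX_SKIP_COUNT;
    }
}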
The short answer is: no.
By default, Spring Batch uses the transaction manager defined as part of your JobRepository. That allows it to roll back the whole chunk when an error is encountered and then retry each item individually in its own transaction, so the one bad record can be skipped.
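In other words, drop the @Transactional from the service method and let the chunk own the transaction. A sketch of a writer in that style (CapService and the method names are illustrative, not taken from the post):

import java.util.List;

import org.springframework.batch.item.ItemWriter;

// Hypothetical service interface; in the question this is the @Transactional
// service, with the annotation removed so the chunk transaction is the only one.
interface CapService {
    void insert(Object record);
}

public class CapWriter implements ItemWriter<Object> {

    private final CapService capService;

    public CapWriter(CapService capService) {
        this.capService = capService;
    }

    @Override
    public void write(List<? extends Object> items) throws Exception {
        for (Object item : items) {
            // Runs inside the transaction Spring Batch opened for the chunk.
            // On failure the whole chunk rolls back (clearing PostgreSQL's
            // aborted state), then Batch replays the items one per transaction
            // so the skip policy can single out the bad record.
            capService.insert(item);
        }
    }
}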
I use the setup below in a project as a job definition.
In this project the batch jobs are defined in a database; the XML job definition below serves as a template for creating all of these batch jobs at runtime.
This works fine, except when a BeanCreationException occurs in the dataProcessor: in that case the skip policy is never called and the batch ends immediately instead.
What could be the reason for that? What do I have to do so that every exception in the dataProcessor goes through the SkipPolicy?
Thanks a lot in advance
Christian
Version: spring-batch 3.0.7
<batch:job id="MassenGevoJob" restartable="true">
    <batch:step id="selectDataStep" parent="selectForMassenGeVoStep" next="executeProcessorStep" />
    <batch:step id="executeProcessorStep" allow-start-if-complete="true" next="decideExitStatus">
        <batch:tasklet>
            <batch:chunk reader="dataReader" processor="dataProcessor"
                         writer="dataItemWriter" commit-interval="10"
                         skip-policy="batchSkipPolicy" />
            <batch:listeners>
                <batch:listener ref="batchItemListener" />
                <batch:listener ref="batchSkipListener" />
                <batch:listener ref="batchChunkListener" />
            </batch:listeners>
        </batch:tasklet>
    </batch:step>
    <batch:decision id="decideExitStatus" decider="failOnPendingObjectsDecider">
        <batch:fail on="FAILED_PENDING_OBJECTS" exit-code="FAILED_PENDING_OBJECTS" />
        <batch:next on="*" to="endFlowStep" />
    </batch:decision>
    <batch:step id="endFlowStep">
        <batch:tasklet ref="noopTasklet" />
    </batch:step>
    <batch:validator ref="batchParameterValidator" />
    <batch:listeners>
        <batch:listener ref="batchJobListener" />
    </batch:listeners>
</batch:job>
A BeanCreationException isn't really skippable because it usually happens before Spring Batch starts processing. It's also typically a fatal error for your application: Spring couldn't create a component you've defined as critical. If the creation of that bean is subject to issues and not having it is OK, I'd suggest wrapping its creation in a factory so that you can control any exceptions that come out of creating it. For example, if you can't create your custom ItemProcessor, your FactoryBean could return a PassThroughItemProcessor if that's acceptable.
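A rough sketch of that factory idea (names are illustrative; FragileDataProcessor stands in for the processor whose construction can fail):

import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.support.PassThroughItemProcessor;
import org.springframework.beans.factory.FactoryBean;

public class DataProcessorFactoryBean implements FactoryBean<ItemProcessor<Object, Object>> {

    @Override
    public ItemProcessor<Object, Object> getObject() {
        try {
            // The construction that may blow up at bean-creation time.
            return new FragileDataProcessor();
        } catch (RuntimeException e) {
            // Not having the custom processor is acceptable here, so degrade
            // to an identity processor instead of failing the whole context.
            return new PassThroughItemProcessor<>();
        }
    }

    @Override
    public Class<?> getObjectType() {
        return ItemProcessor.class;
    }

    @Override
    public boolean isSingleton() {
        return true;
    }

    // Hypothetical processor whose constructor can fail at runtime.
    static class FragileDataProcessor implements ItemProcessor<Object, Object> {
        FragileDataProcessor() {
            // ... construction work that can throw
        }

        @Override
        public Object process(Object item) {
            return item;
        }
    }
}

The step then refers to the bean produced by this factory as its processor, exactly as before.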
I've read through the Spring Batch docs a few times and searched for a way to skip a job step based on job parameters.
For example, say I have this job:
<batch:job id="job" restartable="true"
    xmlns="http://www.springframework.org/schema/batch">
    <batch:step id="step1-partitioned-export-master">
        <batch:partition handler="partitionHandler" partitioner="partitioner" />
        <batch:next on="COMPLETED" to="step2-join" />
    </batch:step>
    <batch:step id="step2-join">
        <batch:tasklet>
            <batch:chunk reader="xmlMultiResourceReader" writer="joinXmlItemWriter"
                         commit-interval="1000" />
        </batch:tasklet>
        <batch:next on="COMPLETED" to="step3-zipFile" />
    </batch:step>
    <batch:step id="step3-zipFile">
        <batch:tasklet ref="zipFileTasklet" />
        <!-- <batch:next on="COMPLETED" to="step4-fileCleanUp" /> -->
    </batch:step>
    <!-- <batch:step id="step4-fileCleanUp">
        <batch:tasklet ref="fileCleanUpTasklet" />
    </batch:step> -->
</batch:job>
I want to be able to skip step4 if desired, by specifying so in the job parameters.
The only somewhat related question I could find was "how to select which spring batch job to run based on application argument - spring boot java config", which seems to indicate that two distinct job contexts should be created and the decision made outside the batch step definition.
I have already followed this pattern: since I had a CSV export as well as XML as in the example, I split the two jobs into two separate spring-context.xml files, one for each export type, even though there were not many differences.
At that point I thought it was perhaps cleaner, since I could find no examples of alternatives.
But having to create four separate context files just to make it possible to include step4 or not for each export case seems a bit crazy.
I must be missing something here.
Can't you do that with a decider? See http://docs.spring.io/spring-batch/reference/html/configureStep.html (chapter 5.3.4, Programmatic Flow Decisions).
EDIT: link to the updated URL:
https://docs.spring.io/spring-batch/trunk/reference/html/configureStep.html#programmaticFlowDecisions
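For illustration, a decider that routes on a job parameter might look like this (the parameter name "skipCleanup" and the status strings are made up):

import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.job.flow.FlowExecutionStatus;
import org.springframework.batch.core.job.flow.JobExecutionDecider;

public class CleanupDecider implements JobExecutionDecider {

    @Override
    public FlowExecutionStatus decide(JobExecution jobExecution, StepExecution stepExecution) {
        // Read the flag passed as a job parameter at launch time.
        String skip = jobExecution.getJobParameters().getString("skipCleanup");
        return "true".equals(skip)
                ? new FlowExecutionStatus("SKIP_CLEANUP")
                : new FlowExecutionStatus("DO_CLEANUP");
    }
}

Wired in with a <batch:decision> element between step3-zipFile and step4-fileCleanUp (much like the decideExitStatus decision in the MassenGevoJob example above, with one transition per status), this keeps everything in a single job context instead of four.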
We have 10-15 different Spring Batch jobs, and for each job we have some common listeners such as an email notifier, a job duration listener, etc. For this I have added a parent job configuration with the common listeners and packaged it as one library.
Now in our concrete jobs I am using this parent job by extending it in the child job context, something like the snippet below, where "parentJob" is defined in the common library and has one job listener registered to it.
But when I run my child job, it does not execute the job listener registered in the parent job. What could be the issue?
Parent Job Def
<batch:job id="parentJob" abstract="true">
    <batch:listeners>
        <batch:listener ref="jobDurationListener"/>
    </batch:listeners>
</batch:job>
Child job
<batch:job id="job1" parent="parentJob">
    <batch:step id="step1">
        <batch:tasklet transaction-manager="transactionManager" start-limit="100">
            <batch:chunk reader="reader" writer="writer" commit-interval="1" />
        </batch:tasklet>
    </batch:step>
    <batch:listeners>
        <batch:listener ref="testListener"/>
    </batch:listeners>
</batch:job>
Sorry for this question, I should have read the documentation. Adding merge="true" to the listener definition in the child job resolved the issue: without it, the child's listener list replaces the parent's instead of being merged with it.
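For anyone else hitting this, the child job's listeners element then becomes the following, so the parent's jobDurationListener and the child's testListener both fire:

<batch:listeners merge="true">
    <batch:listener ref="testListener"/>
</batch:listeners>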
I am working on a project based on Spring Batch Admin. I use Spring Integration's
<int-jms:message-driven-channel-adapter/>
which picks messages off the queue and pushes them onto a channel that invokes the service activator; the service activator then launches the batch job.
Spring Batch Admin internally uses a taskExecutor with a pool size of 6 (available in spring-batch-admin-manager-1.2.2-release.jar). This task executor has its rejection policy configured as ABORT, i.e. if there are more than 6 job requests, the others should be aborted. But when I run the project with over 100 requests, I see them with status STARTING in the Spring Batch Admin console, although only 6 job requests get processed at a time.
I don't understand where the remaining job requests are getting queued. I'd appreciate it if someone could explain this or give me some pointers.
Configurations:
<int-jms:message-driven-channel-adapter id="jmsIn"
    connection-factory="connectionFactory"
    destination-name="${JMS.SERVER.QUEUE}" channel="jmsInChannel"
    extract-payload="false" send-timeout="20000"/>

<integration:service-activator id="serviceAct" input-channel="jmsInChannel"
    output-channel="fileNamesChannel" ref="handler" method="process" />

<bean id="handler" class="com.mycompany.integration.AnalysisMessageProcessor">
    <property name="jobHashTable" ref="jobsMapping" />
</bean>
<batch:job id="fullRebalanceJob" incrementer="jobIdIncrementer">
    <batch:step id="stepGeneral">
        <batch:tasklet>
            <bean class="com.mycompany.batch.tasklet.DoGeneralTasklet" scope="step">
                <property name="resultId" value="#{jobParameters[resultId]}" />
            </bean>
        </batch:tasklet>
        <batch:next on="REC-SELLS" to="stepRecordSells"/>
        <batch:fail on="FAILED" />
        <batch:listeners>
            <batch:listener ref="stepListener" />
        </batch:listeners>
    </batch:step>
    <batch:step id="stepDoNext">
        <batch:tasklet ref="dcnNext" />
    </batch:step>
</batch:job>
Thanks in advance. Let me know if more details are required.
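One plausible explanation, sketched under the assumption that Spring Batch Admin's taskExecutor is a ThreadPoolTaskExecutor with core and max pool size 6 and default queue settings: with an effectively unbounded queue, the ABORT policy never actually fires, so the extra launch requests simply wait in the executor's internal queue. Since SimpleJobLauncher creates the JobExecution (in STARTING status) before handing the run to the executor, the queued jobs show up as STARTING in the console until a thread frees up.

import java.util.concurrent.ThreadPoolExecutor;

import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

public class ExecutorQueueDemo {

    public static void main(String[] args) {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(6);
        executor.setMaxPoolSize(6);
        // queueCapacity defaults to Integer.MAX_VALUE; the AbortPolicy below
        // only rejects work once the queue is full, which effectively never
        // happens. Everything beyond 6 concurrent tasks just waits here.
        // executor.setQueueCapacity(0); // uncomment to see rejections instead
        executor.setRejectedExecutionHandler(new ThreadPoolExecutor.AbortPolicy());
        executor.initialize();

        // Only 6 of these run at once; the rest sit in the executor's queue,
        // analogous to the job requests stuck in STARTING.
        for (int i = 0; i < 100; i++) {
            final int jobNumber = i;
            executor.execute(() -> System.out.println("running job " + jobNumber));
        }
        executor.shutdown();
    }
}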