Generally, an ItemReader takes a resource name as an attribute. Can we pass a File object to any of the implementations of ItemReader?
I am using version 3 of the Spring Batch API.
UPDATE:
<bean id="cvsFileItemReader" class="org.springframework.batch.item.file.FlatFileItemReader" scope="step">
<!-- Read a csv file -->
<property name="resource" value="classpath:cvs/I_10000_3ColRem_input_File.csv" />
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
<!-- split it -->
<property name="lineTokenizer">
<bean
class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
<property name="names" value="customerId,year,month,numPurchases,sow,purchaseAmt,cm,mc,multiChannel,loyalty,productReturn,relationDur,cb" />
</bean>
</property>
<property name="fieldSetMapper">
<!-- map each record to an object -->
<bean
class="org.springframework.batch.item.file.mapping.BeanWrapperFieldSetMapper">
<property name="prototypeBeanName" value="testCLV" />
<property name="customEditors">
<map>
<entry key="java.lang.Double">
<ref local="doubleEditor" />
</entry>
</map>
</property>
</bean>
</property>
</bean>
</property>
</bean>
My App.java looks like this:
ApplicationContext context =
new ClassPathXmlApplicationContext(springConfig);
JobLauncher jobLauncher = (JobLauncher) context.getBean("jobLauncher");
Job job = (Job) context.getBean("reportJob");
try {
long a, b;
a = System.currentTimeMillis();
JobExecution execution = jobLauncher.run(job, new JobParameters());
b = System.currentTimeMillis();
System.out.println("Exit Status : " + execution.getStatus());
System.out.println("jobLauncher.run "+(b-a)+"mil to execute. ("+((b-a)/1000)+" seconds)");
} catch (Exception e) {
e.printStackTrace();
}
System.out.println("Done");
My requirement is as follows:
Through a web application a user will upload a file, from which I extract an InputStream.
Say I have an InputStream object called 'streamInput'. How could I inject this into the resource of the ItemReader and run my job?
You can create a Resource from an InputStream using InputStreamResource, and you can get an InputStream from a File object. You can read more about InputStreamResource here: http://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/core/io/InputStreamResource.html
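A minimal sketch of that idea, assuming the reader is built in code rather than as the step-scoped XML bean above ('streamInput' comes from the question; TestCLV stands in for the domain type behind the testCLV prototype bean, and everything else is illustrative):

import org.springframework.batch.item.file.FlatFileItemReader;
import org.springframework.core.io.InputStreamResource;
import org.springframework.core.io.Resource;

// Wrap the uploaded stream. Note that an InputStreamResource can only be
// read once, so a job using it cannot be restarted from this resource.
Resource resource = new InputStreamResource(streamInput);

FlatFileItemReader<TestCLV> reader = new FlatFileItemReader<>();
reader.setResource(resource);
// configure the lineMapper exactly as in the XML above, then use this reader in the step

Alternatively (and more restart-friendly), save the upload to a temporary file and pass its path as a job parameter that a step-scoped reader resolves with #{jobParameters['input.file.path']}.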
Related
I have a requirement with a dynamic query, where the JobParameters are built using JobParametersBuilder by setting String, Date, Long, etc. However, I am only able to read String values in JdbcCursorItemReader. How can we read types other than String in JdbcCursorItemReader so they can be set in the PreparedStatement query? Thanks in advance.
With a step-scoped bean and a SpEL expression you can inject and use any type of parameter in your query. Here is an example with a non-String parameter from one of the samples:
<bean id="itemReader" scope="step" autowire-candidate="false" class="org.springframework.batch.item.database.JdbcPagingItemReader">
<property name="dataSource" ref="dataSource" />
<property name="rowMapper">
<bean class="org.springframework.batch.sample.domain.trade.internal.CustomerCreditRowMapper" />
</property>
<property name="queryProvider">
<bean class="org.springframework.batch.item.database.support.SqlPagingQueryProviderFactoryBean">
<property name="dataSource" ref="dataSource"/>
<property name="fromClause" value="CUSTOMER"/>
<property name="selectClause" value="ID,NAME,CREDIT"/>
<property name="sortKeys">
<map>
<entry key="ID" value="ASCENDING"/>
</map>
</property>
<property name="whereClause" value="ID >= :minId and ID <= :maxId"/>
</bean>
</property>
<property name="parameterValues">
<map>
<entry key="minId" value="#{stepExecutionContext[minValue]}"/>
<entry key="maxId" value="#{stepExecutionContext[maxValue]}"/>
</map>
</property>
</bean>
EDIT: here is an example in the Java configuration style:
@Bean
@StepScope
public JdbcCursorItemReader<Person> personReader(@Value("#{jobParameters['id']}") Long id) {
    JdbcCursorItemReader<Person> itemReader = new JdbcCursorItemReader<>();
    itemReader.setSql("select * from person where id = " + id);
    // set other properties on the reader
    return itemReader;
}
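If you prefer not to concatenate the value into the SQL string, a variation of the same step-scoped reader can bind the typed value through a PreparedStatementSetter instead (a sketch; the dataSource bean and PersonRowMapper class are assumptions for illustration):

@Bean
@StepScope
public JdbcCursorItemReader<Person> personReader(
        @Value("#{jobParameters['id']}") Long id, DataSource dataSource) {
    JdbcCursorItemReader<Person> itemReader = new JdbcCursorItemReader<>();
    itemReader.setDataSource(dataSource);
    itemReader.setSql("select * from person where id = ?");
    // the Long job parameter is bound with its real type, not as a String
    itemReader.setPreparedStatementSetter(ps -> ps.setLong(1, id));
    itemReader.setRowMapper(new PersonRowMapper()); // assumed row mapper
    return itemReader;
}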
I am reading a txt file and writing a csv file with an ItemProcessor.
Below is my XML:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:batch="http://www.springframework.org/schema/batch" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/batch http://www.springframework.org/schema/batch/spring-batch-3.0.xsd
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-4.0.xsd">
<!-- JobRepository and JobLauncher are configuration/setup classes -->
<bean id="jobRepository" class="org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean" />
<bean id="jobLauncher" class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
<property name="jobRepository" ref="jobRepository" />
</bean>
<!-- ItemReader reads a complete line one by one from input file -->
<bean id="flatFileItemReader" class="org.springframework.batch.item.file.FlatFileItemReader" scope="step">
<property name="resource" value="classpath:Test.txt" />
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
<property name="fieldSetMapper">
<!-- Mapper which maps each individual items in a record to properties in POJO -->
<bean class="com.chaman.springbatch.ResultFieldSetMapper" />
</property>
<property name="lineTokenizer">
<!-- A tokenizer class to be used when items in input record are separated by specific characters -->
<bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
<!-- <property name="delimiter" value="|" /> -->
</bean>
</property>
</bean>
</property>
</bean>
<bean id="flatFileItemWriter" class="org.springframework.batch.item.file.FlatFileItemWriter" scope="step">`
`
<property name="resource" value="file:csv/Result.csv" />
<property name="lineAggregator">
<!-- An Aggregator which converts an object into delimited list of strings -->
<bean class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
<!-- <property name="delimiter" value="|" /> -->
<property name="fieldExtractor">
<!-- Extractor which returns the value of beans property through reflection -->
<bean class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
<property name="names" value="number" />
</bean>
</property>
</bean>
</property>
</bean>
<!-- XML ItemWriter which writes the data in XML format -->
<!-- <bean id="xmlItemWriter" class="org.springframework.batch.item.xml.StaxEventItemWriter">
<property name="resource" value="file:xml/examResult.xml" />
<property name="rootTagName" value="UniversityExamResultList" />
<property name="marshaller">
<bean class="org.springframework.oxm.jaxb.Jaxb2Marshaller">
<property name="classesToBeBound">
<list>
<value>com.websystique.springbatch.model.ExamResult</value>
</list>
</property>
</bean>
</property>
</bean> -->
<!-- Optional ItemProcessor to perform business logic/filtering on the input records -->
<bean id="itemProcessor" class="com.chaman.springbatch.ResultItemProcessor" />
<!-- Optional JobExecutionListener to perform business logic before and after the job -->
<bean id="jobListener" class="com.chaman.springbatch.ResultJobListener" />
<!-- Step will need a transaction manager -->
<bean id="transactionManager" class="org.springframework.batch.support.transaction.ResourcelessTransactionManager" />
<!-- Actual Job -->
<batch:job id="ResultJob">
<batch:step id="step1">
<batch:tasklet transaction-manager="transactionManager">
<batch:chunk reader="flatFileItemReader" writer="flatFileItemWriter" processor="itemProcessor" commit-interval="10" />
</batch:tasklet>
</batch:step>
<batch:listeners>
<batch:listener ref="jobListener" />
</batch:listeners>
</batch:job>
How do I run a Spring Batch job through CommandLineRunner if I am using XML-based configuration?
This is explained in the documentation, see Running Jobs from the Command Line. Here is an example:
java CommandLineJobRunner myJob-configuration.xml myJob param=value
package com.paul.testspringbatch;
import java.util.Date;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.boot.CommandLineRunner;
public class MyCommandLineRunner implements CommandLineRunner {
private JobLauncher jobLauncher;
private Job resultJob;
@Override
public void run(String... args) throws Exception {
JobParameters jobParameters = new JobParametersBuilder().addDate("start-date", new Date()).toJobParameters();
this.jobLauncher.run(resultJob, jobParameters);
}
public void setJobLauncher(JobLauncher jobLauncher) {
this.jobLauncher = jobLauncher;
}
public void setResultJob(Job resultJob) {
this.resultJob = resultJob;
}
}
add the bean to your xml:
<bean id="jobLauncher" class="com.paul.testspringbatch.MyCommandLineRunner">
<property name="jobLauncher" ref="jobLauncher" />
<property name="resultJob" ref="ResultJob" />
</bean>
When the application starts up, the job will be executed.
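If this runner lives inside a Spring Boot application (an assumption; CommandLineRunner is a Boot interface), the XML configuration can be imported so the bean above is picked up automatically. A minimal sketch, reusing the myJob-configuration.xml name from the command-line example:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ImportResource;

@SpringBootApplication
@ImportResource("classpath:myJob-configuration.xml") // the XML defining the job and the runner bean
public class BatchApplication {
    public static void main(String[] args) {
        SpringApplication.run(BatchApplication.class, args);
    }
}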
I am new to Spring Batch with a scheduler. My task is to read data from one table and write it into another table.
I have been going through blogs and various tutorials at random.
I don't know whether there is a direct approach to read from a database and write into a database. I took this approach:
Job 1: Reads the data from the db using JdbcCursorItemReader and writes it to a txt file using FlatFileItemWriter.
Job 2: Reads the data from the txt file using FlatFileItemReader and multiResourceItemReader, and writes it into another table using HibernateItemWriter.
I am using a scheduler, and it runs the batch every 20 seconds.
With this approach the first run works fine. For the second run (after 20 seconds), I update the data in the database (base table), but it does not write the updated data to the file or the database.
Here is my configuration and code:
package com.cg.schedulers;
public class UserScheduler {
@Autowired
private JobLauncher launcher;
@Autowired
private Job userJob;
@Autowired
private Job userJob2;
private JobExecution execution1, execution2;
public void run() {
try {
execution1 = launcher.run(userJob, new JobParameters());
execution2 = launcher.run(userJob2, new JobParameters());
System.out.println("Execution status: " + execution1.getStatus());
System.out.println("Execution status: " + execution2.getStatus());
} catch (JobExecutionAlreadyRunningException e) {
e.printStackTrace();
} catch (JobRestartException e) {
e.printStackTrace();
} catch (JobInstanceAlreadyCompleteException e) {
e.printStackTrace();
} catch (JobParametersInvalidException e) {
e.printStackTrace();
}
}
}
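Note that both jobs above are launched with empty JobParameters on every tick; since identical parameters identify the same JobInstance, Spring Batch will not re-run them, which matches the behavior described. A minimal sketch of passing a unique parameter per run (the parameter name is illustrative; the last answer in this thread suggests the same idea):

JobParameters params = new JobParametersBuilder()
        .addLong("run.ts", System.currentTimeMillis()) // unique value per run
        .toJobParameters();
execution1 = launcher.run(userJob, params);
execution2 = launcher.run(userJob2, params);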
XML configuration:
<import resource="spring-batch1.xml" />
<import resource="springbatch-database.xml" />
<context:annotation-config/>
<context:component-scan base-package="com.cg"/>
<!-- Reading data from -->
<bean id="itemReader"
class="org.springframework.batch.item.database.JdbcCursorItemReader"
scope="step">
<property name="dataSource" ref="dataSource" />
<property name="sql" value="select UserId, UserName, Password from USER" />
<property name="rowMapper">
<bean class="com.cg.mapper.UserRowMapper" />
</property>
</bean>
<!-- ItemWriter writes a line into output flat file -->
<bean id="flatFileItemWriter" class="org.springframework.batch.item.file.FlatFileItemWriter"
scope="step">
<property name="resource" value="file:csv/User.txt" />
<property name="lineAggregator">
<!-- An Aggregator which converts an object into delimited list of strings -->
<bean
class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
<property name="delimiter" value="," />
<property name="fieldExtractor">
<!-- Extractor which returns the value of beans property through reflection -->
<bean
class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
<property name="names" value="userId, username, password" />
</bean>
</property>
</bean>
</property>
</bean>
<!-- ItemReader reads a complete line one by one from input file -->
<bean id="flatFileItemReader" class="org.springframework.batch.item.file.FlatFileItemReader"
scope="step">
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
<property name="fieldSetMapper">
<!-- Mapper which maps each individual items in a record to properties
in POJO -->
<bean class="com.cg.mapper.UserFieldSetMapper" />
</property>
<property name="lineTokenizer">
<!-- A tokenizer class to be used when items in input record are separated
by specific characters -->
<bean
class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
<property name="delimiter" value="," />
</bean>
</property>
</bean>
</property>
</bean>
<bean id="multiResourceItemReader"
class="org.springframework.batch.item.file.MultiResourceItemReader">
<property name="resources" value="classpath:csv/User.txt" />
<property name="delegate" ref="flatFileItemReader" />
</bean>
<!-- Optional JobExecutionListener to perform business logic before and
after the job -->
<bean id="jobListener" class="com.cg.support.UserItemListener" />
<!-- Optional ItemProcessor to perform business logic/filtering on the input
records -->
<bean id="itemProcessor1" class="com.cg.support.UserItemProcessor" />
<bean id="itemProcessor2" class="com.cg.support.UserItemProcessor2" />
<!-- ItemWriter which writes data to database -->
<bean id="databaseItemWriter"
class="org.springframework.batch.item.database.HibernateItemWriter">
<property name="sessionFactory" ref="sessionFactory" />
</bean>
<!-- Actual Job -->
<batch:job id="userJob">
<batch:step id="step1">
<batch:tasklet transaction-manager="transactionManager">
<batch:chunk reader="itemReader" writer="flatFileItemWriter"
processor="itemProcessor1" commit-interval="10" />
</batch:tasklet>
</batch:step>
<batch:listeners>
<batch:listener ref="jobListener" />
</batch:listeners>
</batch:job>
<batch:job id="userJob2">
<batch:step id="step2">
<batch:tasklet transaction-manager="transactionManager">
<batch:chunk reader="multiResourceItemReader" writer="databaseItemWriter"
processor="itemProcessor2" commit-interval="10" />
</batch:tasklet>
</batch:step>
</batch:job>
<bean id="myScheduler" class="com.cg.schedulers.UserScheduler"/>
<task:scheduled-tasks>
<task:scheduled ref="myScheduler" method="run" cron="*/20 * * * * *" />
</task:scheduled-tasks>
Please provide a direct approach if possible, using Hibernate.
Execution status: COMPLETED
Thanks,
Vamshi.
I have a large file which may contain 100K to 500K records. I am planning to use chunk-oriented processing, and my thought is:
1) Split the large file into smaller files based on a count, say 10K records in each file.
2) If there are 100K records, I will get 10 files, each containing 10K records.
3) I would like to partition these 10 files and process them using 5 threads. I am thinking of using a custom MultiResourcePartitioner.
4) The 5 threads should process all 10 files created in the split process.
5) I don't want to create as many threads as there are files, since in that case I may face memory issues. Whatever the number of files, I would like to process them using only 5 threads (I can increase this based on my requirements).
Experts, could you let me know whether this can be achieved using Spring Batch? If yes, could you please share pointers or reference implementations?
Thanks in advance
The working job-config XML:
<description>Spring Batch File Chunk Processing</description>
<import resource="../config/batch-context.xml" />
<batch:job id="file-partition-batch" job-repository="jobRepository" restartable="false">
<batch:step id="master">
<batch:partition partitioner="partitioner" handler="partitionHandler" />
</batch:step>
</batch:job>
<batch:step id="slave">
<batch:tasklet>
<batch:chunk reader="reader" processor="compositeProcessor"
writer="compositeWriter" commit-interval="5">
</batch:chunk>
</batch:tasklet>
</batch:step>
<bean id="partitionHandler" class="org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler">
<property name="taskExecutor" ref="taskExecutor"/>
<property name="step" ref="slave" />
<property name="gridSize" value="5" />
</bean>
<bean id="partitioner" class="com.poc.partitioner.FileMultiResourcePartitioner">
<property name="resources" value="file:/Users/anupghosh/Documents/Spring_Batch/FilePartitionBatch/*.txt" />
<property name="threadName" value="feed-processor" />
</bean>
<bean id="taskExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
<property name="corePoolSize" value="5" />
<property name="maxPoolSize" value="5" />
</bean>
<bean id="reader" class="org.springframework.batch.item.file.FlatFileItemReader" scope="step">
<property name="resource" value="#{stepExecutionContext['fileName']}" />
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
<property name="lineTokenizer">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
<property name="delimiter" value="|"/>
<property name="names" value="key,docName,docTypCD,itemType,itemNum,launchDate,status" />
</bean>
</property>
<property name="fieldSetMapper">
<bean class="com.poc.mapper.FileRowMapper" />
</property>
</bean>
</property>
</bean>
<bean id="validatingProcessor" class="org.springframework.batch.item.validator.ValidatingItemProcessor">
<constructor-arg ref="feedRowValidator" />
</bean>
<bean id="feedProcesor" class="com.poc.processor.FeedProcessor" />
<bean id="compositeProcessor" class="org.springframework.batch.item.support.CompositeItemProcessor" scope="step">
<property name="delegates">
<list>
<ref bean="validatingProcessor" />
<ref bean="feedProcesor" />
</list>
</property>
</bean>
<bean id="recordDecWriter" class="com.poc.writer.RecordDecWriter" />
<bean id="reconFlatFileCustomWriter" class="com.poc.writer.ReconFileWriter">
<property name="reconFlatFileWriter" ref="reconFlatFileWriter" />
</bean>
<bean id="reconFlatFileWriter" class="org.springframework.batch.item.file.FlatFileItemWriter" scope="step">
<property name="resource" value="file:/Users/anupghosh/Documents/Spring_Batch/recon-#{stepExecutionContext[threadName]}.txt" />
<property name="shouldDeleteIfExists" value="true" />
<property name="lineAggregator">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
<property name="delimiter" value="|" />
<property name="fieldExtractor">
<bean class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
<property name="names" value="validationError" />
</bean>
</property>
</bean>
</property>
</bean>
<bean id="compositeWriter" class="org.springframework.batch.item.support.CompositeItemWriter">
<property name="delegates">
<list>
<ref bean="recordDecWriter" />
<ref bean="reconFlatFileCustomWriter" />
</list>
</property>
</bean>
<bean id="feedRowValidator" class="org.springframework.batch.item.validator.SpringValidator">
<property name="validator">
<bean class="com.poc.validator.FeedRowValidator"/>
</property>
</bean>
I was able to solve this using MultiResourcePartitioner. Below is the Java config:
@Bean
public Partitioner partitioner() throws IOException {
    MultiResourcePartitioner partitioner = new MultiResourcePartitioner();
    ClassLoader cl = this.getClass().getClassLoader();
    ResourcePatternResolver resolver = new PathMatchingResourcePatternResolver(cl);
    Resource[] resources = resolver.getResources("file:" + filePath + "/" + "*.csv");
    partitioner.setResources(resources);
    // no need to call partition() here; the framework invokes it with the grid size
    return partitioner;
}
@Bean
public TaskExecutor taskExecutor() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setMaxPoolSize(4);
    taskExecutor.afterPropertiesSet();
    return taskExecutor;
}
@Bean
@Qualifier("masterStep")
public Step masterStep() throws IOException {
    return stepBuilderFactory.get("masterStep")
            .partitioner("processData", partitioner())
            .step(processData())
            .taskExecutor(taskExecutor())
            .listener(pcStressStepListener)
            .build();
}
@Bean
@Qualifier("processData")
public Step processData() {
    return stepBuilderFactory.get("processData")
            .<pojo, pojo>chunk(5000)
            .reader(reader)
            .processor(processor())
            .writer(writer)
            .build();
}
@Bean(name = "reader")
@StepScope
public FlatFileItemReader<pojo> reader(@Value("#{stepExecutionContext['fileName']}") String filename) throws MalformedURLException {
    FlatFileItemReader<pojo> reader = new FlatFileItemReader<>();
    reader.setResource(new UrlResource(filename));
    reader.setLineMapper(new DefaultLineMapper<pojo>() {
        {
            setLineTokenizer(new DelimitedLineTokenizer() {
                {
                    setNames(FILE_HEADER); // assumed constant holding the header column names
                }
            });
            setFieldSetMapper(new BeanWrapperFieldSetMapper<pojo>() {
                {
                    setTargetType(pojo.class);
                }
            });
        }
    });
    return reader;
}
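For completeness, a minimal sketch of how the master step could be wired into a job (assuming a jobBuilderFactory field alongside the stepBuilderFactory above; this wiring is not shown in the original config, and the job name is borrowed from the XML earlier in the thread):

@Bean
public Job filePartitionJob() throws IOException {
    // the partitioned master step fans slave executions out onto the taskExecutor
    return jobBuilderFactory.get("file-partition-batch")
            .start(masterStep())
            .build();
}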
I'm using Spring Batch and Quartz to read from a database table and write into another table. The database is Oracle, and the connection pool is c3p0.
The problem is that each job must have unique parameters. I tried RunIdIncrementer, and I tried this code:
public class JobRerunner implements JobParametersIncrementer {
@Override
public JobParameters getNext(JobParameters parameters) {
System.out.println("got job parameters: " + parameters);
if (parameters==null || parameters.isEmpty()) {
return new JobParametersBuilder().addLong("run.id", System.currentTimeMillis()).toJobParameters();
}
long currentTime = parameters.getLong("run.id",System.currentTimeMillis()) + 1;
return new JobParametersBuilder().addLong("run.id",currentTime).toJobParameters();
}
}
but I get the same problem: the run.id is generated only once, and when the job runs for the second time it has no parameters at all, and the same on the third run (on the second and third runs JobParameters = null, so I get "Job Instance Already Exists").
Job context:
<batch:job id="readyReqPoolJob" restartable="true">
<batch:step id="readyReqPoolStep">
<batch:tasklet>
<batch:chunk reader="readyReqPoolReader" writer="readyReqPoolWrtiter"
commit-interval="100" />
</batch:tasklet>
</batch:step>
</batch:job>
<!-- ======================================================= -->
<!-- 6) READER -->
<!-- ======================================================= -->
<bean id="readyReqPoolReader"
class="org.springframework.batch.item.database.JdbcCursorItemReader">
<property name="dataSource" ref="dataSource" />
<property name="sql" value="select * from SF_ILA_Ready_Request_Pool" />
<property name="rowMapper" ref="ReadyReqPoolRowMapper" />
</bean>
<bean id="readyReqPoolWrtiter"
class="com.housekeepingservice.readyrequestpoolarchive.ReadyReqPoolArchiveWriter" />
<bean id="jobDetail" class="org.springframework.scheduling.quartz.JobDetailBean">
<property name="jobClass"
value="org.springframework.batch.sample.quartz.JobLauncherDetails" />
<property name="jobDataAsMap">
<map>
<entry key="jobName" value="readyReqPoolJob" />
<entry key="jobLocator" value-ref="jobRegistry" />
<entry key="jobLauncher" value-ref="jobLauncher" />
</map>
</property>
</bean>
<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
<property name="triggers">
<bean id="cronTrigger"
class="org.springframework.scheduling.quartz.CronTriggerFactoryBean">
<property name="jobDetail" ref="jobDetail" />
<property name="cronExpression" value="0 0/5 * * * ?" />
</bean>
</property>
</bean>
Main context:
<import resource="classpath:spring/batch/config/readyReqPoolContext.xml"
<import resource="classpath:spring/batch/config/jdbc.commons.xml" />
<!-- 1) USE ANNOTATIONS TO CONFIGURE SPRING BEANS -->
<context:component-scan base-package="com.housekeepingservice" />
<bean id="transactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource" />
</bean>
<tx:annotation-driven transaction-manager="transactionManager" />
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource" ref="dataSource" />
</bean>
<bean
class="org.springframework.batch.core.configuration.support.JobRegistryBeanPostProcessor">
<property name="jobRegistry" ref="jobRegistry" />
</bean>
<bean id="jobRegistry"
class="org.springframework.batch.core.configuration.support.MapJobRegistry" />
<!-- 3) JOB REPOSITORY -->
<bean id="jobRepository"
class="org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean">
<property name="transactionManager" ref="transactionManager" />
</bean>
<!-- 4) LAUNCH JOBS FROM A REPOSITORY -->
<bean id="jobLauncher"
class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
<property name="jobRepository" ref="jobRepository" />
<property name="taskExecutor" ref="taskExecutor" />
</bean>
<bean id="taskExecutor" class="org.springframework.core.task.SimpleAsyncTaskExecutor" />
<bean id="jobExplorer"
class="org.springframework.batch.core.explore.support.JobExplorerFactoryBean">
<property name="dataSource" ref="dataSource" />
</bean>
<bean name="jobParamatersIncrementer" class="org.springframework.batch.core.launch.support.RunIdIncrementer">
</bean>
Test.java
public class Test {
public static void main(String[] args) {
String[] springConfig = { "spring/batch/config/mainContext.xml" };
ApplicationContext context = new ClassPathXmlApplicationContext(
springConfig);
JobRerunner rerun = new JobRerunner();
JobLauncher jobLauncher = (JobLauncher) context.getBean("jobLauncher");
Job readyRequestPoolJob = (Job) context.getBean("readyReqPoolJob");
try {
JobParameters jobParameters = new JobParameters();
JobExecution execution2 = jobLauncher.run(readyRequestPoolJob, rerun.getNext(jobParameters));
System.out.println("Exit Status : " + execution2.getStatus());
} catch (Exception e) {
e.printStackTrace();
}
System.out.println("Done");
}
}
Log (note the JobInstance parameters in the first run versus the second run):
17:00:27,053 INFO SimpleJobLauncher:132 - Job: [FlowJob: [name=readyReqPoolJob]] launched with the following parameters: **[{run.id=1393855226339}]**
17:00:27.085 [Timer-0] DEBUG org.quartz.utils.UpdateChecker - Checking for available updated version of Quartz...
17:00:27,272 INFO SimpleStepHandler:135 - Executing step: [readyReqPoolStep]
17:02:08,791 INFO SimpleJobLauncher:135 - Job: [FlowJob: [name=readyReqPoolJob]] completed with the following parameters: [{run.id=1393855226339}] and the following status: [COMPLETED]
17:10:00.005 [org.springframework.scheduling.quartz.SchedulerFactoryBean#0_Worker-1] DEBUG org.quartz.core.JobRunShell - Calling execute on job DEFAULT.jobDetail
17:10:00,008 INFO JobLauncherDetails:69 - Quartz trigger firing with Spring Batch jobName=readyReqPoolJob
17:10:00,036 INFO SimpleJobLauncher:132 - Job: [FlowJob: [name=readyReqPoolJob]] launched with the following parameters: **[{}]**
17:10:00,059 INFO SimpleStepHandler:135 - Executing step: [readyReqPoolStep]
To launch a job with a JobParametersIncrementer you need two things:
Attach the RunIdIncrementer to your job.
Use a launcher that is aware of the incrementer.
I do not see any need for your own implementation; just use the existing one.
Attach the RunIdIncrementer to your job:
<batch:job id="readyReqPoolJob" incrementer="runIdIncrementer" restartable="true">
</batch:job>
<bean id="runIdIncrementer"
class="org.springframework.batch.core.launch.support.RunIdIncrementer"/>
Use a launcher
To launch it, you should use one of the following:
Option 1: CommandLineJobRunner with the -next option (see the API docs).
Option 2: Use JobOperator:
<bean id="jobOperator"
class="org.springframework.batch.core.launch.support.SimpleJobOperator">
<property name="jobRepository" ref="jobRepository" />
<property name="jobLauncher" ref="jobLauncher" />
<property name="jobRegistry" ref="jobRegistry" />
<property name="jobExplorer" ref="jobExplorer" />
</bean>
In the code:
jobOperator.startNextInstance(jobName);
Option 3: In JUnit you can use JobLauncherTestUtils.
Note that it has its own id incrementer and will ignore the one you use.
See also the following answer: SpringBatch: Test a JobExecutionListener
Set the step's allowStartIfComplete flag to true.
Add a parameter called 'timestamp', for example, or, if you want to use run.id, set the Job's jobParametersIncrementer with your jobParamatersIncrementer bean definition.
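A minimal sketch of the 'timestamp' suggestion, which guarantees a fresh JobInstance on every launch (readyReqPoolJob and jobLauncher are the beans from the question):

JobParameters jobParameters = new JobParametersBuilder()
        .addLong("timestamp", System.currentTimeMillis()) // unique per launch
        .toJobParameters();
jobLauncher.run(readyReqPoolJob, jobParameters);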