I would like to invoke an sftp:outbound-gateway from a Spring Batch tasklet in order to download a file from an SFTP server.
I've seen other posts on this subject, but I'm not sure what I'm doing wrong. Could anybody give me a hint based on my configuration? My batch job works, so the problem is just invoking the SFTP component in a batch step. I've marked the Spring Integration section with a comment so it is easier to read just the relevant configuration.
I can see in my logs: DEBUG [o.s.i.e.SourcePollingChannelAdapter] Received no Message during the poll, returning 'false'. So I am not receiving a file, but why?
Thanks in advance for the time you spend on this analysis!
<bean id="ftsSftpClientFactory" class="org.springframework.integration.sftp.session.DefaultSftpSessionFactory">
<property name="host" value="${my.import.sftp.localhost}"/>
<property name="user" value="${my.import.sftp.username}"/>
<property name="password" value="${my.import.sftp.passwort}"/>
</bean>
<!-- Start: Spring Integration -->
<int:channel id="replyChannel" >
<int:queue/>
</int:channel>
<int:channel id="requestChannel" />
<int-sftp:outbound-gateway id="sftpGateway"
session-factory="ftsSftpClientFactory"
request-channel="requestChannel"
reply-channel="replyChannel"
auto-startup="true"
command="get"
command-options="-P"
expression="payload"
remote-directory="."
local-directory="${my.import.sftp.copy.file.destinationpath}">
</int-sftp:outbound-gateway>
<bean name="copyFileTasklet" class="com.mydomain.CopyFileTasklet">
<property name="channel" ref="replyChannel" />
<property name="pollableChannel" ref="requestChannel" />
</bean>
<!-- Start: Spring Batch -->
<bean name="myImportTask" class="com.mydomain.MyImportTask">
<property name="job" ref="unternehmungImportJob"/>
<property name="jobLauncher" ref="jobLauncher"/>
</bean>
<bean id="jobDetail"
class="com.mydomain.MyImportJob">
<property name="myImportTask" ref="myImportTask" />
</bean>
<!--suppress SpringBatchModel -->
<batch:job id="myImportJob">
<batch:step id="copy-file-step" next="my-import-step">
<batch:tasklet ref="copyFileTasklet"/>
</batch:step>
<batch:step id="my-import-step">
<batch:tasklet>
<batch:chunk reader="myItemReader"
writer="myItemWriter"
commit-interval="10000">
<!--
skip-limit="10000"
<batch:skippable-exception-classes>
<batch:include class="java.lang.Exception"/>
<batch:exclude class="java.io.FileNotFoundException"/>
</batch:skippable-exception-classes> -->
</batch:chunk>
<batch:transaction-attributes isolation="DEFAULT" propagation="REQUIRED"/>
</batch:tasklet>
</batch:step>
</batch:job>
<bean id="myItemReader" scope="step" class="org.springframework.batch.item.file.FlatFileItemReader">
<property name="linesToSkip" value="1"/>
<property name="encoding" value="${my.import.batch.encoding}" />
<property name="resource" value="${my.import.batch.input.resource}"/>
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
<property name="lineTokenizer" ref="lineTokenizer"/>
<property name="fieldSetMapper">
<bean class="com.mydomain.MyImportMapper"/>
</property>
</bean>
</property>
</bean>
<bean id="myItemWriter" class="com.mydomain.MyItemWriter">
<property name="myApplicationService" ref="defaultmyApplicationService" />
</bean>
<bean id="lineTokenizer" class="com.mydomain.DelimitedLineTokenizerWithEOF">
<property name="delimiter" value="${my.import.batch.delimiter}" />
<property name="eofMarker" value="${my.import.batch.eof.marker}" />
</bean>
package com.mydomain;

// Imports assume Spring Integration 2.x/3.x; in 4.0+ Message, MessageChannel
// and PollableChannel live in org.springframework.messaging instead.
import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.integration.Message;
import org.springframework.integration.MessageChannel;
import org.springframework.integration.core.PollableChannel;

public class CopyFileTasklet implements Tasklet {

    private MessageChannel requestChannel;
    private PollableChannel replyChannel;

    public void setRequestChannel(MessageChannel requestChannel) {
        this.requestChannel = requestChannel;
    }

    public void setReplyChannel(PollableChannel replyChannel) {
        this.replyChannel = replyChannel;
    }

    @Override
    public RepeatStatus execute(StepContribution stepContribution, ChunkContext chunkContext) throws Exception {
        // Blocks up to 10 seconds waiting for a reply, but nothing was ever
        // sent to requestChannel, so no reply can arrive (see the answer below).
        Message<?> result = replyChannel.receive(10000);
        Object file = result.getPayload();
        return RepeatStatus.FINISHED;
    }
}
Your issue is that you don't initiate the Integration Flow from your custom Tasklet. Of course you can't receive anything from the replyChannel if you haven't sent a request beforehand.
If you just need to trigger the Integration Flow and get a result from it, it would be better to use a POJI <gateway> from that Tasklet:
public interface SftpGateway {
File download(String fileName);
}
<gateway id="sftpGateway" service-interface="com.my.proj.SftpGateway"
default-request-channel="requestChannel"/>
<bean name="copyFileTasklet" class="com.mydomain.CopyFileTasklet">
<property name="sftpGateway" ref="sftpGateway" />
</bean>
Something like that.
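For completeness, a minimal sketch of the reworked tasklet delegating to that gateway (the fileName property and its wiring are illustrative assumptions, not part of the original post):
package com.mydomain;

import java.io.File;

import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;

import com.my.proj.SftpGateway;

public class CopyFileTasklet implements Tasklet {

    private SftpGateway sftpGateway;

    private String fileName; // illustrative: the remote file to fetch, set via XML

    public void setSftpGateway(SftpGateway sftpGateway) {
        this.sftpGateway = sftpGateway;
    }

    public void setFileName(String fileName) {
        this.fileName = fileName;
    }

    @Override
    public RepeatStatus execute(StepContribution stepContribution, ChunkContext chunkContext) throws Exception {
        // The gateway sends fileName to requestChannel, the sftp outbound
        // gateway performs the GET, and the downloaded local File is returned.
        File file = sftpGateway.download(fileName);
        return RepeatStatus.FINISHED;
    }
}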
I am reading a txt file and writing a csv file with an ItemProcessor.
Below is my xml:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:batch="http://www.springframework.org/schema/batch" `xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/batch http://www.springframework.org/schema/batch/spring-batch-3.0.xsd
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-4.0.xsd">
<!-- JobRepository and JobLauncher are configuration/setup classes -->
<bean id="jobRepository" class="org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean" />
<bean id="jobLauncher" class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
<property name="jobRepository" ref="jobRepository" />
</bean>
<!-- ItemReader reads a complete line one by one from input file -->
<bean id="flatFileItemReader" class="org.springframework.batch.item.file.FlatFileItemReader" scope="step">
<property name="resource" value="classpath:Test.txt" />
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
<property name="fieldSetMapper">
<!-- Mapper which maps each individual items in a record to properties in POJO -->
<bean class="com.chaman.springbatch.ResultFieldSetMapper" />
</property>
<property name="lineTokenizer">
<!-- A tokenizer class to be used when items in input record are separated by specific characters -->
<bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
<!-- <property name="delimiter" value="|" /> -->
</bean>
</property>
</bean>
</property>
</bean>
<bean id="flatFileItemWriter" class="org.springframework.batch.item.file.FlatFileItemWriter" scope="step">`
`
<property name="resource" value="file:csv/Result.csv" />
<property name="lineAggregator">
<!-- An Aggregator which converts an object into delimited list of strings -->
<bean class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
<!-- <property name="delimiter" value="|" /> -->
<property name="fieldExtractor">
<!-- Extractor which returns the value of beans property through reflection -->
<bean class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
<property name="names" value="number" />
</bean>
</property>
</bean>
</property>
</bean>
<!-- XML ItemWriter which writes the data in XML format -->
<!-- <bean id="xmlItemWriter" class="org.springframework.batch.item.xml.StaxEventItemWriter">
<property name="resource" value="file:xml/examResult.xml" />
<property name="rootTagName" value="UniversityExamResultList" />
<property name="marshaller">
<bean class="org.springframework.oxm.jaxb.Jaxb2Marshaller">
<property name="classesToBeBound">
<list>
<value>com.websystique.springbatch.model.ExamResult</value>
</list>
</property>
</bean>
</property>
</bean> -->
<!-- Optional ItemProcessor to perform business logic/filtering on the input records -->
<bean id="itemProcessor" class="com.chaman.springbatch.ResultItemProcessor" />
<!-- Optional JobExecutionListener to perform business logic before and after the job -->
<bean id="jobListener" class="com.chaman.springbatch.ResultJobListener" />
<!-- Step will need a transaction manager -->
<bean id="transactionManager" class="org.springframework.batch.support.transaction.ResourcelessTransactionManager" />
<!-- Actual Job -->
<batch:job id="ResultJob">
<batch:step id="step1">
<batch:tasklet transaction-manager="transactionManager">
<batch:chunk reader="flatFileItemReader" writer="flatFileItemWriter" processor="itemProcessor" commit-interval="10" />
</batch:tasklet>
</batch:step>
<batch:listeners>
<batch:listener ref="jobListener" />
</batch:listeners>
</batch:job>
How do I run a Spring Batch job through a CommandLineRunner if I am using XML-based configuration?
This is explained in the documentation, see Running Jobs from the Command Line. Here is an example:
java CommandLineJobRunner myJob-configuration.xml myJob param=value
package com.paul.testspringbatch;
import java.util.Date;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.boot.CommandLineRunner;
public class MyCommandLineRunner implements CommandLineRunner {
private JobLauncher jobLauncher;
private Job resultJob;
@Override
public void run(String... args) throws Exception {
JobParameters jobParameters = new JobParametersBuilder().addDate("start-date", new Date()).toJobParameters();
this.jobLauncher.run(resultJob, jobParameters);
}
public void setJobLauncher(JobLauncher jobLauncher) {
this.jobLauncher = jobLauncher;
}
public void setResultJob(Job resultJob) {
this.resultJob = resultJob;
}
}
Add the bean to your xml (give it an id distinct from the jobLauncher bean it refers to):
<bean id="myCommandLineRunner" class="com.paul.testspringbatch.MyCommandLineRunner">
<property name="jobLauncher" ref="jobLauncher" />
<property name="resultJob" ref="ResultJob" />
</bean>
When the application starts up, the job will be executed.
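With XML-based configuration, the runner still needs a Spring Boot entry point to be invoked. A minimal sketch, assuming Spring Boot is on the classpath and that the batch beans live in myJob-configuration.xml (the file name is an assumption):
package com.paul.testspringbatch;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ImportResource;

// Boot calls every CommandLineRunner bean (including MyCommandLineRunner)
// once the application context has started.
@SpringBootApplication
@ImportResource("classpath:myJob-configuration.xml")
public class TestSpringBatchApplication {

    public static void main(String[] args) {
        SpringApplication.run(TestSpringBatchApplication.class, args);
    }
}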
I am trying to get job parameters into an ItemProcessor using Spring Batch annotations.
I implemented it by referring to the link below, but for me the variable (batchRunName) comes out as null when I try to access it in my Processor class.
Can anyone please take a look? I am sure I am missing something small.
How to get Job parameteres in to item processor using spring Batch annotation
public static void main(String[] args) {
contextObj = new ClassPathXmlApplicationContext(springConfig);
jobObj = (Job) contextObj.getBean("XYZ-1001-DD-01");
JobParametersBuilder jobBuilder = new JobParametersBuilder();
System.out.println("args[0] is " + args[0] );
jobBuilder.addString("batchRunName", args[0]);
public class TimeProcessor implements ItemProcessor<Time, TimeMetric> {
private DataSource dataSource;
@Value("#{jobParameters['batchRunNumber']}")
private String batchRunNumber;
public void setBatchRunNumber(String batchRunNumber) {
this.batchRunNumber = batchRunNumber;
}
<bean id="timeProcessor"
class="com.xyz.processor.TimeProcessor" scope="step">
<property name="dataSource" ref="oracledataSource" />
</bean>
=================FULL XML CONFIGURATION========================
<import resource="classpath:/batch/utility/skip/batch_skip.xml" />
<import resource="classpath:/batch/config/context-postgres.xml" />
<import resource="classpath:/batch/config/oracle-database.xml" />
<context:property-placeholder
location="classpath:/batch/jobs/TPF-1001-DD-01/TPF-1001-DD-01.properties" />
<bean id="gridSizePartitioner"
class="com.tpf.partitioner.GridSizePartitioner" />
<task:executor id="taskExecutor" pool-size="${pool.size}" />
<batch:job id="XYZJob" job-repository="jobRepository"
restartable="true">
<batch:step id="XYZSTEP">
<batch:description>Convert TIF files to PDF</batch:description>
<batch:partition partitioner="gridSizePartitioner">
<batch:handler task-executor="taskExecutor"
grid-size="${pool.size}" />
<batch:step>
<batch:tasklet allow-start-if-complete="true">
<batch:chunk commit-interval="${commit.interval}"
skip-limit="${job.skip.limit}">
<batch:reader>
<bean id="timeReader"
class="org.springframework.batch.item.database.JdbcCursorItemReader"
scope="step">
<property name="dataSource" ref="oracledataSource" />
<property name="sql">
<value>
select TIME_ID as timesheetId,count(*),max(CREATION_DATETIME) as creationDateTime , ILN_NUMBER as ilnNumber
from TS_FAKE_NAME
where creation_datetime >= '#{jobParameters['creation_start_date1']} 12.00.00.000000000 AM'
and creation_datetime < '#{jobParameters['creation_start_date2']} 11.59.59.999999999 PM'
and mod(time_id,${pool.size})=#{stepExecutionContext['partition.id']}
group by time_id ,ILN_NUMBER
</value>
</property>
<property name="rowMapper">
<bean
class="org.springframework.jdbc.core.BeanPropertyRowMapper">
<property name="mappedClass"
value="com.tpf.model.Time" />
</bean>
</property>
</bean>
</batch:reader>
<batch:processor>
<bean id="compositeItemProcessor"
class="org.springframework.batch.item.support.CompositeItemProcessor">
<property name="delegates">
<list>
<ref bean="timeProcessor" />
</list>
</property>
</bean>
</batch:processor>
<batch:writer>
<bean id="compositeItemWriter"
class="org.springframework.batch.item.support.CompositeItemWriter">
<property name="delegates">
<list>
<ref bean="timeWriter" />
</list>
</property>
</bean>
</batch:writer>
<batch:skippable-exception-classes>
<batch:include
class="com.utility.skip.BatchSkipException" />
</batch:skippable-exception-classes>
<batch:listeners>
<batch:listener ref="batchSkipListener" />
</batch:listeners>
</batch:chunk>
</batch:tasklet>
</batch:step>
</batch:partition>
</batch:step>
<batch:validator>
<bean
class="org.springframework.batch.core.job.DefaultJobParametersValidator">
<property name="requiredKeys">
<list>
<value>batchRunNumber</value>
<value>creation_start_date1</value>
<value>creation_start_date2</value>
</list>
</property>
</bean>
</batch:validator>
</batch:job>
<bean id="timesheetWriter" class="com.tpf.writer.TimeWriter"
scope="step">
<property name="dataSource" ref="dataSource" />
</bean>
<bean id="timeProcessor"
class="com.tpf.processor.TimeProcessor" scope="step">
<property name="dataSource" ref="oracledataSource" />
</bean>
I think you are facing the issue reported in BATCH-2351.
You can try to provide the job parameter via XML instead of the annotation (since the majority of your config is XML-based):
<bean id="timeProcessor" class="com.xyz.processor.TimeProcessor" scope="step">
<property name="dataSource" ref="oracledataSource" />
<property name="batchRunNumber" value="#{jobParameters['batchRunNumber']}" />
</bean>
Hope this helps.
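For the XML route to work, the processor just needs a plain setter; because the bean is step-scoped, the #{jobParameters[...]} expression is resolved when the step runs. A sketch of the processor without the @Value annotation (Time and TimeMetric are the question's own types; the package of TimeMetric is assumed):
package com.xyz.processor;

import javax.sql.DataSource;

import org.springframework.batch.item.ItemProcessor;

import com.tpf.model.Time;       // input type from the question's row mapper
import com.tpf.model.TimeMetric; // assumed package for the output type

public class TimeProcessor implements ItemProcessor<Time, TimeMetric> {

    private DataSource dataSource;

    private String batchRunNumber; // populated from jobParameters by the step-scoped bean

    public void setDataSource(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void setBatchRunNumber(String batchRunNumber) {
        this.batchRunNumber = batchRunNumber;
    }

    @Override
    public TimeMetric process(Time item) throws Exception {
        // batchRunNumber holds the launch-time parameter value here
        System.out.println("batchRunNumber = " + batchRunNumber);
        return null; // returning null filters the item; replace with the real mapping
    }
}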
I am new to Spring Batch with a scheduler. My task is to read data from one table and write it into another table.
I have been going through various blogs and tutorials.
I don't know whether there is a direct approach to read from one database and write into another. I took this approach:
Job 1: reads the data from the db using JdbcCursorItemReader and writes it into a txt file using FlatFileItemWriter.
Job 2: reads the data from the txt file using FlatFileItemReader and multiResourceItemReader, and writes it into another table using HibernateItemWriter.
I am using a scheduler which runs the batch every 20 seconds.
With this approach the first run works fine. For the second run (after 20 sec), I update the data in the base table, but the updated data is not written to the file or the database.
Here is my configuration & code:
package com.cg.schedulers;
public class UserScheduler {
@Autowired
private JobLauncher launcher;
@Autowired
private Job userJob;
@Autowired
private Job userJob2;
private JobExecution execution1,execution2;
public void run() {
try {
execution1 = launcher.run(userJob, new JobParameters());
execution2 = launcher.run(userJob2, new JobParameters());
System.out.println("Execution status: " + execution1.getStatus());
System.out.println("Execution status: " + execution2.getStatus());
} catch (JobExecutionAlreadyRunningException e) {
e.printStackTrace();
} catch (JobRestartException e) {
e.printStackTrace();
} catch (JobInstanceAlreadyCompleteException e) {
e.printStackTrace();
} catch (JobParametersInvalidException e) {
e.printStackTrace();
}
}
}
Xml Configuration
<import resource="spring-batch1.xml" />
<import resource="springbatch-database.xml" />
<context:annotation-config/>
<context:component-scan base-package="com.cg"/>
<!-- Reading data from -->
<bean id="itemReader"
class="org.springframework.batch.item.database.JdbcCursorItemReader"
scope="step">
<property name="dataSource" ref="dataSource" />
<property name="sql" value="select UserId, UserName, Password from USER" />
<property name="rowMapper">
<bean class="com.cg.mapper.UserRowMapper" />
</property>
</bean>
<!-- ItemWriter writes a line into output flat file -->
<bean id="flatFileItemWriter" class="org.springframework.batch.item.file.FlatFileItemWriter"
scope="step">
<property name="resource" value="file:csv/User.txt" />
<property name="lineAggregator">
<!-- An Aggregator which converts an object into delimited list of strings -->
<bean
class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
<property name="delimiter" value="," />
<property name="fieldExtractor">
<!-- Extractor which returns the value of beans property through reflection -->
<bean
class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
<property name="names" value="userId, username, password" />
</bean>
</property>
</bean>
</property>
</bean>
<!-- ItemReader reads a complete line one by one from input file -->
<bean id="flatFileItemReader" class="org.springframework.batch.item.file.FlatFileItemReader"
scope="step">
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
<property name="fieldSetMapper">
<!-- Mapper which maps each individual items in a record to properties
in POJO -->
<bean class="com.cg.mapper.UserFieldSetMapper" />
</property>
<property name="lineTokenizer">
<!-- A tokenizer class to be used when items in input record are separated
by specific characters -->
<bean
class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
<property name="delimiter" value="," />
</bean>
</property>
</bean>
</property>
</bean>
<bean id="multiResourceItemReader"
class="org.springframework.batch.item.file.MultiResourceItemReader">
<property name="resources" value="classpath:csv/User.txt" />
<property name="delegate" ref="flatFileItemReader" />
</bean>
<!-- Optional JobExecutionListener to perform business logic before and
after the job -->
<bean id="jobListener" class="com.cg.support.UserItemListener" />
<!-- Optional ItemProcessor to perform business logic/filtering on the input
records -->
<bean id="itemProcessor1" class="com.cg.support.UserItemProcessor" />
<bean id="itemProcessor2" class="com.cg.support.UserItemProcessor2" />
<!-- ItemWriter which writes data to database -->
<bean id="databaseItemWriter"
class="org.springframework.batch.item.database.HibernateItemWriter">
<property name="sessionFactory" ref="sessionFactory" />
</bean>
<!-- Actual Job -->
<batch:job id="userJob">
<batch:step id="step1">
<batch:tasklet transaction-manager="transactionManager">
<batch:chunk reader="itemReader" writer="flatFileItemWriter"
processor="itemProcessor1" commit-interval="10" />
</batch:tasklet>
</batch:step>
<batch:listeners>
<batch:listener ref="jobListener" />
</batch:listeners>
</batch:job>
<batch:job id="userJob2">
<batch:step id="step2">
<batch:tasklet transaction-manager="transactionManager">
<batch:chunk reader="multiResourceItemReader" writer="databaseItemWriter"
processor="itemProcessor2" commit-interval="10" />
</batch:tasklet>
</batch:step>
</batch:job>
<bean id="myScheduler" class="com.cg.schedulers.UserScheduler"/>
<task:scheduled-tasks>
<task:scheduled ref="myScheduler" method="run" cron="*/20 * * * * *" />
</task:scheduled-tasks>
Please suggest a direct approach, if possible using Hibernate.
Execution status: COMPLETED
Thanks,
Vamshi.
I have a large file which may contain 100K to 500K records. I am planning to use chunk-oriented processing and my thought is:
1) Split the large file into smaller files based on a count, let's say 10K records in each file.
2) If there are 100K records, I will get 10 files, each containing 10K records.
3) I would like to partition these 10 files and process them using 5 threads. I am thinking of using a custom MultiResourcePartitioner.
4) The 5 threads should process all 10 files created in the split process.
5) I don't want to create as many threads as there are files, as in that case I may face memory issues. Whatever the number of files, I would like to process them using only 5 threads (I can increase this based on my requirements).
Could an expert let me know whether this can be achieved using Spring Batch? If yes, could you please share pointers or reference implementations?
Thanks in advance
The working job-config xml
<description>Spring Batch File Chunk Processing</description>
<import resource="../config/batch-context.xml" />
<batch:job id="file-partition-batch" job-repository="jobRepository" restartable="false">
<batch:step id="master">
<batch:partition partitioner="partitioner" handler="partitionHandler" />
</batch:step>
</batch:job>
<batch:step id="slave">
<batch:tasklet>
<batch:chunk reader="reader" processor="compositeProcessor"
writer="compositeWriter" commit-interval="5">
</batch:chunk>
</batch:tasklet>
</batch:step>
<bean id="partitionHandler" class="org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler">
<property name="taskExecutor" ref="taskExecutor"/>
<property name="step" ref="slave" />
<property name="gridSize" value="5" />
</bean>
<bean id="partitioner" class="com.poc.partitioner.FileMultiResourcePartitioner">
<property name="resources" value="file:/Users/anupghosh/Documents/Spring_Batch/FilePartitionBatch/*.txt" />
<property name="threadName" value="feed-processor" />
</bean>
<bean id="taskExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
<property name="corePoolSize" value="5" />
<property name="maxPoolSize" value="5" />
</bean>
<bean id="reader" class="org.springframework.batch.item.file.FlatFileItemReader" scope="step">
<property name="resource" value="#{stepExecutionContext['fileName']}" />
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
<property name="lineTokenizer">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
<property name="delimiter" value="|"/>
<property name="names" value="key,docName,docTypCD,itemType,itemNum,launchDate,status" />
</bean>
</property>
<property name="fieldSetMapper">
<bean class="com.poc.mapper.FileRowMapper" />
</property>
</bean>
</property>
</bean>
<bean id="validatingProcessor" class="org.springframework.batch.item.validator.ValidatingItemProcessor">
<constructor-arg ref="feedRowValidator" />
</bean>
<bean id="feedProcesor" class="com.poc.processor.FeedProcessor" />
<bean id="compositeProcessor" class="org.springframework.batch.item.support.CompositeItemProcessor" scope="step">
<property name="delegates">
<list>
<ref bean="validatingProcessor" />
<ref bean="feedProcesor" />
</list>
</property>
</bean>
<bean id="recordDecWriter" class="com.poc.writer.RecordDecWriter" />
<bean id="reconFlatFileCustomWriter" class="com.poc.writer.ReconFileWriter">
<property name="reconFlatFileWriter" ref="reconFlatFileWriter" />
</bean>
<bean id="reconFlatFileWriter" class="org.springframework.batch.item.file.FlatFileItemWriter" scope="step">
<property name="resource" value="file:/Users/anupghosh/Documents/Spring_Batch/recon-#{stepExecutionContext[threadName]}.txt" />
<property name="shouldDeleteIfExists" value="true" />
<property name="lineAggregator">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
<property name="delimiter" value="|" />
<property name="fieldExtractor">
<bean class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
<property name="names" value="validationError" />
</bean>
</property>
</bean>
</property>
</bean>
<bean id="compositeWriter" class="org.springframework.batch.item.support.CompositeItemWriter">
<property name="delegates">
<list>
<ref bean="recordDecWriter" />
<ref bean="reconFlatFileCustomWriter" />
</list>
</property>
</bean>
<bean id="feedRowValidator" class="org.springframework.batch.item.validator.SpringValidator">
<property name="validator">
<bean class="com.poc.validator.FeedRowValidator"/>
</property>
</bean>
I was able to solve this using MultiResourcePartitioner. Below is the Java config:
@Bean
public Partitioner partitioner() throws IOException {
    MultiResourcePartitioner partitioner = new MultiResourcePartitioner();
    ClassLoader cl = this.getClass().getClassLoader();
    ResourcePatternResolver resolver = new PathMatchingResourcePatternResolver(cl);
    Resource[] resources = resolver.getResources("file:" + filePath + "/" + "*.csv");
    partitioner.setResources(resources);
    return partitioner;
}

@Bean
public TaskExecutor taskExecutor() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setMaxPoolSize(4);
    taskExecutor.afterPropertiesSet();
    return taskExecutor;
}

@Bean
@Qualifier("masterStep")
public Step masterStep() throws IOException {
    // one partition per file; the task executor's pool size caps concurrency
    return stepBuilderFactory.get("masterStep")
            .partitioner("processData", partitioner())
            .step(processData())
            .taskExecutor(taskExecutor())
            .listener(pcStressStepListener)
            .build();
}

@Bean
@Qualifier("processData")
public Step processData() {
    return stepBuilderFactory.get("processData")
            .<pojo, pojo>chunk(5000)
            .reader(reader)
            .processor(processor())
            .writer(writer)
            .build();
}

@Bean(name = "reader")
@StepScope
public FlatFileItemReader<pojo> reader(@Value("#{stepExecutionContext['fileName']}") String filename) throws MalformedURLException {
    FlatFileItemReader<pojo> reader = new FlatFileItemReader<>();
    reader.setResource(new UrlResource(filename));
    reader.setLineMapper(new DefaultLineMapper<pojo>() {
        {
            setLineTokenizer(new DelimitedLineTokenizer() {
                {
                    setNames(FILE_HEADER); // the file's column names
                }
            });
            setFieldSetMapper(new BeanWrapperFieldSetMapper<pojo>() {
                {
                    setTargetType(pojo.class);
                }
            });
        }
    });
    return reader;
}
I'm using Spring Batch & Quartz to read from one database table and write into another table. The database is Oracle and the connection pool is c3p0.
The problem is that each job run must have unique parameters. I tried RunIdIncrementer and I tried this code:
public class JobRerunner implements JobParametersIncrementer {
@Override
public JobParameters getNext(JobParameters parameters) {
System.out.println("got job parameters: " + parameters);
if (parameters==null || parameters.isEmpty()) {
return new JobParametersBuilder().addLong("run.id", System.currentTimeMillis()).toJobParameters();
}
long currentTime = parameters.getLong("run.id",System.currentTimeMillis()) + 1;
return new JobParametersBuilder().addLong("run.id",currentTime).toJobParameters();
}
}
But I get the same problem: run.id is generated only once, and when the job is run for the second time it has no parameters at all, and the same for the third time (for the second and third runs JobParameters is null, so I get "Job Instance Already Exists").
job context
<batch:job id="readyReqPoolJob" restartable="true">
<batch:step id="readyReqPoolStep">
<batch:tasklet>
<batch:chunk reader="readyReqPoolReader" writer="readyReqPoolWrtiter"
commit-interval="100" />
</batch:tasklet>
</batch:step>
</batch:job>
<!-- ======================================================= -->
<!-- 6) READER -->
<!-- ======================================================= -->
<bean id="readyReqPoolReader"
class="org.springframework.batch.item.database.JdbcCursorItemReader">
<property name="dataSource" ref="dataSource" />
<property name="sql" value="select * from SF_ILA_Ready_Request_Pool" />
<property name="rowMapper" ref="ReadyReqPoolRowMapper" />
</bean>
<bean id="readyReqPoolWrtiter"
class="com.housekeepingservice.readyrequestpoolarchive.ReadyReqPoolArchiveWriter" />
<bean id="jobDetail" class="org.springframework.scheduling.quartz.JobDetailBean">
<property name="jobClass"
value="org.springframework.batch.sample.quartz.JobLauncherDetails" />
<property name="jobDataAsMap">
<map>
<entry key="jobName" value="readyReqPoolJob" />
<entry key="jobLocator" value-ref="jobRegistry" />
<entry key="jobLauncher" value-ref="jobLauncher" />
</map>
</property>
</bean>
<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
<property name="triggers">
<bean id="cronTrigger"
class="org.springframework.scheduling.quartz.CronTriggerFactoryBean">
<property name="jobDetail" ref="jobDetail" />
<property name="cronExpression" value="0 0/5 * * * ?" />
</bean>
</property>
</bean>
main context:
<import resource="classpath:spring/batch/config/readyReqPoolContext.xml"
<import resource="classpath:spring/batch/config/jdbc.commons.xml" />
<!-- 1) USE ANNOTATIONS TO CONFIGURE SPRING BEANS -->
<context:component-scan base-package="com.housekeepingservice" />
<bean id="transactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource" />
</bean>
<tx:annotation-driven transaction-manager="transactionManager" />
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource" ref="dataSource" />
</bean>
<bean
class="org.springframework.batch.core.configuration.support.JobRegistryBeanPostProcessor">
<property name="jobRegistry" ref="jobRegistry" />
</bean>
<bean id="jobRegistry"
class="org.springframework.batch.core.configuration.support.MapJobRegistry" />
<!-- 3) JOB REPOSITORY -->
<bean id="jobRepository"
class="org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean">
<property name="transactionManager" ref="transactionManager" />
</bean>
<!-- 4) LAUNCH JOBS FROM A REPOSITORY -->
<bean id="jobLauncher"
class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
<property name="jobRepository" ref="jobRepository" />
<property name="taskExecutor" ref="taskExecutor" />
</bean>
<bean id="taskExecutor" class="org.springframework.core.task.SimpleAsyncTaskExecutor" />
<bean id="jobExplorer"
class="org.springframework.batch.core.explore.support.JobExplorerFactoryBean">
<property name="dataSource" ref="dataSource" />
</bean>
<bean name="jobParamatersIncrementer" class="org.springframework.batch.core.launch.support.RunIdIncrementer">
</bean>
Test.java
public class Test {
public static void main(String[] args) {
String[] springConfig = { "spring/batch/config/mainContext.xml" };
ApplicationContext context = new ClassPathXmlApplicationContext(
springConfig);
JobRerunner rerun = new JobRerunner();
JobLauncher jobLauncher = (JobLauncher) context.getBean("jobLauncher");
Job readyRequestPoolJob = (Job) context.getBean("readyReqPoolJob");
try {
JobParameters jobParameters = new JobParameters();
JobExecution execution2 = jobLauncher.run(readyRequestPoolJob, rerun.getNext(jobParameters));
System.out.println("Exit Status : " + execution2.getStatus());
} catch (Exception e) {
e.printStackTrace();
}
System.out.println("Done");
}
}
log (check out the job instance parameters in the first run and the second run):
17:00:27,053 INFO SimpleJobLauncher:132 - Job: [FlowJob: [name=readyReqPoolJob]] launched with the following parameters: **[{run.id=1393855226339}]**
17:00:27.085 [Timer-0] DEBUG org.quartz.utils.UpdateChecker - Checking for available updated version of Quartz...
17:00:27,272 INFO SimpleStepHandler:135 - Executing step: [readyReqPoolStep]
17:02:08,791 INFO SimpleJobLauncher:135 - Job: [FlowJob: [name=readyReqPoolJob]] completed with the following parameters: [{run.id=1393855226339}] and the following status: [COMPLETED]
17:10:00.005 [org.springframework.scheduling.quartz.SchedulerFactoryBean#0_Worker-1] DEBUG org.quartz.core.JobRunShell - Calling execute on job DEFAULT.jobDetail
17:10:00,008 INFO JobLauncherDetails:69 - Quartz trigger firing with Spring Batch jobName=readyReqPoolJob
17:10:00,036 INFO SimpleJobLauncher:132 - Job: [FlowJob: [name=readyReqPoolJob]] launched with the following parameters: **[{}]**
17:10:00,059 INFO SimpleStepHandler:135 - Executing step: [readyReqPoolStep]
To launch a job with a JobParametersIncrementer you need two things:
Attach the RunIdIncrementer to your job.
Use a launcher that is aware of the incrementer.
I do not see any need for your own implementation; just use the existing one.
Attach the RunIdIncrementer to your job:
<batch:job id="readyReqPoolJob" incrementer="runIdIncrementer" restartable="true">
</batch:job>
<bean id="runIdIncrementer"
class="org.springframework.batch.core.launch.support.RunIdIncrementer"/>
Use a launcher
To launch it you should use one of the following:
Option 1: CommandLineJobRunner with the -next option, see the API.
Option 2: Use a JobOperator (a usage sketch follows after the options):
<bean id="jobOperator"
class="org.springframework.batch.core.launch.support.SimpleJobOperator">
<property name="jobRepository" ref="jobRepository" />
<property name="jobLauncher" ref="jobLauncher" />
<property name="jobRegistry" ref="jobRegistry" />
<property name="jobExplorer" ref="jobExplorer" />
</bean>
In the code:
jobOperator.startNextInstance(jobName)
Option 3: In JUnit you can use JobLauncherTestUtils.
Note that it has its own id incrementer and will ignore the one you use;
see also the following answer: SpringBatch: Test a JobExecutionListener
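For illustration, a hedged rework of the Test class above using Option 2 (it assumes the jobOperator bean shown earlier has been added to mainContext.xml):
package com.housekeepingservice;

import org.springframework.batch.core.launch.JobOperator;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class Test {

    public static void main(String[] args) throws Exception {
        ApplicationContext context = new ClassPathXmlApplicationContext(
                "spring/batch/config/mainContext.xml");

        // startNextInstance() consults the incrementer attached to the job,
        // so every call produces a fresh run.id and thus a new JobInstance.
        JobOperator jobOperator = (JobOperator) context.getBean("jobOperator");
        jobOperator.startNextInstance("readyReqPoolJob");
    }
}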
Set the step's allowStartIfComplete flag to true.
Add a parameter called 'timestamp', for example, or, if you want to use run.id, set the job's jobParametersIncrementer to your jobParamatersIncrementer bean definition.
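If you go the 'timestamp' route, a minimal sketch of the launch side (a drop-in for the try block of the Test class above; every run then gets distinct parameters and therefore a new JobInstance):
// Build distinct parameters on every launch.
JobParameters jobParameters = new JobParametersBuilder()
        .addDate("timestamp", new Date())
        .toJobParameters();
JobExecution execution = jobLauncher.run(readyRequestPoolJob, jobParameters);
System.out.println("Exit Status : " + execution.getStatus());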