No property called 'resource' in MultiResourceItemReader - spring-batch

In the Spring Batch documentation (http://docs.spring.io/spring-batch/reference/html/scalability.html), section 7.4.3 says that the 'resource' property of MultiResourceItemReader can be set from the stepExecutionContext. But MultiResourceItemReader has no property called 'resource'; the property is 'resources'.
How, then, can a single resource be set on a MultiResourceItemReader from the stepExecutionContext, given that each context holds the single file name that was set during partitioning?

The property is indeed called resources (it is an array), and it can be set like so:
<bean id="multiResourceReader"
class=" org.springframework.batch.item.file.MultiResourceItemReader">
<property name="resources" value="file:some/folder/prefix*.csv" />
<property name="delegate" ref="flatFileItemReader" />
</bean>
When partitioning, you would not use a MultiResourceItemReader. Instead, just use a FlatFileItemReader in step scope.
<bean id="flatFileItemReader" scope="step"
class="org.springframework.batch.item.file.FlatFileItemReader">
<property name="resource" value="file:#{stepExecutionContext['FILE.NAME']}">
</bean>
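For the partitioning side (not shown above), a sketch of a partitioner that puts one file per step execution context could use a MultiResourcePartitioner. Note that it typically stores the full resource URL, so the step-scoped reader would then reference #{stepExecutionContext['FILE.NAME']} directly, without the extra "file:" prefix. The key name and folder pattern below are illustrative:
<bean id="partitioner"
      class="org.springframework.batch.core.partition.support.MultiResourcePartitioner">
    <!-- key under which each file is exposed in the step execution context;
         it must match the key used in the step-scoped reader expression -->
    <property name="keyName" value="FILE.NAME" />
    <!-- illustrative input pattern: one partition per matching file -->
    <property name="resources" value="file:some/folder/prefix*.csv" />
</bean>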

Related

spring batch flat file reader access header and footer variables to use in Writer

I am using the Spring Batch flat file reader to read a file with header and footer values. Below is a sample file; the output file should have every record appended with the header date and file sequence. Sample input and output are shown below.
Can someone please suggest:
Is there a way to set the header and footer values as job parameters and use them in the writer?
Is there any way to get the header and footer values in the writer? Thanks in advance!
HH20210218001 --Header starting with "HH", date(20210218), filesequence(001)
name110
name220
name330
name440
name770
FT005 --Footer with "FT" and number of records
Output: (date + file seq(001) + name + age)
20210218001name110
20210218001name330
20210218001name440
20210218001name770
Below is my file reader:
<bean id="fileItemReader"
class="org.springframework.batch.item.file.FlatFileItemReader"
scope="step">
<property name="resource" value="classpath:input/input.txt"></property>
<!-- <property name="linesToSkip" value="2" /> -->
<property name="lineMapper">
<bean
class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
<property name="lineTokenizer">
<bean
class="org.springframework.batch.item.file.transform.FixedLengthTokenizer">
<property name="names" value="name,age" />
<property name="columns" value="1-5,6-8"></property>
<property name="strict" value="false" />
</bean>
</property>
<property name="fieldSetMapper">
<bean class="org.test.MyMapper">
</bean>
</property>
</bean>
</property>
</bean>
Is there a way to set the header and footer values as job parameters and use them in the writer?
No, it's too late at that point. When the step starts, the job parameters are already set and cannot be changed.
Is there any way to get the header and footer values in the writer?
From your expected output, you don't need anything from the footer. If you want information from the header in your writer, you can do that in two steps (a configuration sketch follows the list):
step 1: read (only) the header and put any needed information in the execution context
step 2: use a step-scoped writer that gets any required information from the execution context and uses it to write items
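A minimal XML sketch of that two-step layout, assuming a hypothetical tasklet class (org.test.HeaderToContextTasklet) that reads the first line and stores the "20210218001" prefix under the key 'headerPrefix'. The ExecutionContextPromotionListener and the step-scoped late binding are standard Spring Batch; the class names and keys are illustrative:
<!-- step 1: a tasklet (hypothetical class) reads the header line and stores
     the date + sequence prefix in its step execution context under 'headerPrefix' -->
<bean id="readHeaderTasklet" class="org.test.HeaderToContextTasklet">
    <property name="resource" value="classpath:input/input.txt" />
</bean>

<!-- registered as a listener on step 1: promotes 'headerPrefix' from the step
     execution context to the job execution context when the step completes -->
<bean id="promotionListener"
      class="org.springframework.batch.core.listener.ExecutionContextPromotionListener">
    <property name="keys" value="headerPrefix" />
</bean>

<!-- step 2: any step-scoped bean (writer, processor, ...) can then late-bind the value -->
<bean id="prefixAwareWriter" class="org.test.MyPrefixAwareItemWriter" scope="step">
    <property name="headerPrefix" value="#{jobExecutionContext['headerPrefix']}" />
</bean>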

Spring Batch SQL Command with JobParameters

I am new to Spring Batch. Here I am getting some data from the DB using the following reader configuration, and I need to pass the value dynamically (through arguments).
<bean id="ItemReader"
class="org.springframework.batch.item.database.JdbcCursorItemReader">
<property name="dataSource" ref="dataSource" />
<property name="sql">
<value>
<![CDATA[
select * from table where section = #{jobParameters['section']}
]]>
</value>
</property>
<property name="rowMapper">
<bean class="xyzRowMapper" />
</property>
</bean>
JUnit Code:
JobParameters jobParameters = new JobParametersBuilder()
        .addString("section", section)
        .toJobParameters();
Can anybody help with this?
As explained in §5.4 Late Binding of Job and Step Attributes of the official Spring Batch documentation, you need to add scope="step" to your reader bean:
Using a scope of Step is required in order to use late binding, since the bean cannot actually be instantiated until the Step starts, which allows the attributes to be found. Because it is not part of the Spring container by default, the scope must be added explicitly, either by using the batch namespace or by including a bean definition explicitly for the StepScope (but not both).
Giving this:
<bean id="ItemReader" scope="step" class="org.springframework.batch.item.database.JdbcCursorItemReader">
<property name="dataSource" ref="dataSource" />
<property name="sql">
<value>
<![CDATA[
select * from table where section = #{jobParameters['section']}
]]>
</value>
</property>
<property name="rowMapper">
<bean class="xyzRowMapper" />
</property>
</bean>
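If the batch XML namespace is not used anywhere in that context, the step scope itself must be registered once, as the quoted documentation says (one or the other, not both):
<!-- explicit registration of the step scope; omit this if the batch namespace is already in use -->
<bean class="org.springframework.batch.core.scope.StepScope" />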

Additional properties files for spring-batch-admin

I have a web application using spring-batch and I'm now integrating spring-batch-admin for basic administration.
The problem is that the job configuration files (which are shared with the configuration of the existing application) use properties from files on my application's classpath, but spring-batch-admin's context is not able to load them.
The quick solution was to override the placeholderProperties bean in spring-batch-admin just to add my properties files:
<bean id="placeholderProperties" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<list>
<value>classpath:/org/springframework/batch/admin/bootstrap/batch.properties</value>
<value>classpath:batch-default.properties</value>
<value>classpath:batch-${ENVIRONMENT:hsql}.properties</value>
<value>classpath:/path/to/jobs-config.properties</value> <!-- adding my properties here -->
</list>
</property>
<property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE" />
<property name="ignoreResourceNotFound" value="true" />
<property name="ignoreUnresolvablePlaceholders" value="false" />
<property name="order" value="1" />
</bean>
I don't want to move my properties to one of spring-batch-admin's default files. Is there a simpler way to do this?
Answering my own question here...
As described in the documentation, every job configuration file placed under META-INF/spring/batch/jobs/*.xml is loaded by spring-batch-admin as a child context and property placeholders from the parent (i.e. this default bean) are inherited, but the child context can always create its own placeholder bean.
Given that, in my case, the job configuration files are shared with an existing application and use properties from the application classpath, the solution is to create a new job file in META-INF/spring/batch/jobs/*.xml specific to spring-batch-admin:
<!-- placeholder bean with additional properties for the child context -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="locations">
        <list>
            <value>classpath:/path/to/job-config.properties</value>
        </list>
    </property>
    <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE" />
    <property name="ignoreResourceNotFound" value="true" />
    <property name="ignoreUnresolvablePlaceholders" value="false" />
    <property name="order" value="1" />
</bean>

<!-- external job configuration file is imported -->
<import resource="classpath*:/path/to/job.xml" />

Naming output file given specific field in input file header

I am relatively new to Spring Batch.
I have an input file with a header. This header contains several fields, one of which I am interested in (YYYYMM data).
Here is my config for this:
<bean id="detaillesHeaderReaderCallback" class="fr.generali.ede.daemon.batch.dstaff.detailles.DetaillesHeaderReaderCallback" >
<property name="headerTokenizer" ref="headerTokenizer" />
<property name="fieldSetMapper" ref="fieldSetMapperHeaderLog07" />
<!-- need to write moisComptable to ChunkContext -->
<property name="chunkContext" value="#{chunkExecutionContext}" />
</bean>
<bean id="headerTokenizer"
class="org.springframework.batch.item.file.transform.FixedLengthTokenizer">
<property name="names" value="dummy1,moisComptable,dummy2" />
<property name="columns" value="1-22,23-28,29-146" />
</bean>
After that, in the next step of the job, I want to generate an output file whose name is composed of a static part and that header field:
<bean id="fileItemWriterLog07" class="org.springframework.batch.item.file.FlatFileItemWriter">
<property name="resource"
value="file:${batch.coherence.out.path}/DSTAF007_LOG_#{jobExecutionContext['moisComptable']}.txt" />
<property name="shouldDeleteIfExists" value="true" />
<property name="headerCallback" ref="DetaillesHeaderWriterCallbackLog07" />
...
</bean/>
(I have two jobs because I first write to a database, and then read from it.)
As one would guess, this doesn't work: the config file is flawed, so I get BeanCreationExceptions. But it gives an idea of what I want to achieve.
I have no exception on the ChunkContext (yet?) but one on the writer resource. Here is the exception:
Field or property 'jobExecutionContext' cannot be found on object of type 'org.springframework.beans.factory.config.BeanExpressionContext'
Does anyone have an idea about how to proceed ?
Thanks in advance.
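A possible direction, based on the late-binding rule quoted in the earlier answer (§5.4): #{jobExecutionContext[...]} is only resolved on a step-scoped bean, so a sketch of the writer might look like this, assuming the first step has promoted 'moisComptable' into the job execution context:
<bean id="fileItemWriterLog07" scope="step"
      class="org.springframework.batch.item.file.FlatFileItemWriter">
    <!-- step scope is required for #{jobExecutionContext[...]} to be late-bound -->
    <property name="resource"
              value="file:${batch.coherence.out.path}/DSTAF007_LOG_#{jobExecutionContext['moisComptable']}.txt" />
    <property name="shouldDeleteIfExists" value="true" />
    <property name="headerCallback" ref="DetaillesHeaderWriterCallbackLog07" />
</bean>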

Usage of CustomEditor with BeanWrapperFieldExtractor just like with BeanWrapperFieldSetMapper

I have written a simple Spring Batch application that reads a CSV file, does some transforming and writes a modified CSV to the disk.
The reading of the file into domain objects works like a charm. I use DelimitedLineTokenizer to tokenize the lines and a BeanWrapperFieldSetMapper to feed the values into a bean:
<bean id="reader" class="org.springframework.batch.item.file.FlatFileItemReader" scope="step">
<property name="resource" value="#{jobParameters['inputResource']}" />
<property name="linesToSkip" value="1" />
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
<property name="lineTokenizer">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
<property name="delimiter" value=";" />
<property name="names"
value="ID,NAME,DESCRIPTION,PRICE,DATE" />
</bean>
</property>
<property name="fieldSetMapper">
<bean class="org.springframework.batch.item.file.mapping.BeanWrapperFieldSetMapper">
<property name="targetType" value="myapp.MyDomainObject" />
<property name="customEditors">
<map>
<entry key="java.util.Date" value-ref="dateEditor" />
<entry key="java.math.BigDecimal" value-ref="numberEditor" />
</map>
</property>
</bean>
</property>
</bean>
</property>
</bean>
I especially like the ability of BeanWrapperFieldSetMapper to "guess" the field names, and the possibility to define custom editors, which I use to handle the special date and number formats used in the input file.
Now I would like to write the modified file in the same format as the input file.
I use the following configuration:
<bean id="writer" class="org.springframework.batch.item.file.FlatFileItemWriter" scope="step">
<property name="resource" value="#{jobParameters['outputResource']}" />
<property name="lineAggregator">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
<property name="delimiter" value=";" />
<property name="fieldExtractor">
<bean class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
<property name="names" value="id,name,description,price,date" />
</bean>
</property>
</bean>
</property>
</bean>
There are two things I miss with this configuration:
BeanWrapperFieldSetMapper allowed me to set custom editors, but BeanWrapperFieldExtractor has no such possibility. Is there a way to use them here?
Is there a way to define the headings in the first line of the file? I have not found any way to write an initial line that is not a bean... It would be great to use the same names here as in BeanWrapperFieldSetMapper, so that BeanWrapperFieldExtractor writes the initial line and guesses the bean property names just as BeanWrapperFieldSetMapper does.
Loading files is so comfortable in Spring Batch. Why is writing files so different? Am I missing something?
I have to use Spring Batch 2.1.x because we are using Spring 3.0.x, so an upgrade to 2.2.x is not an option.
What exactly do you need? To extract field properties as text? You can:
use a FormatterLineAggregator if your needs are not too complicated
write your own FieldExtractor that applies your custom editors (better)
generate a composite domain object made of the original domain object plus a text-formatted version, and pass the latter to the writer (but this breaks your current processor/writer)
use FlatFileItemWriter.headerCallback: if set, it lets you write a custom header
Writing feels like a pain in your case, compared with reading, because Spring Batch's reading components fit your needs out of the box. The standard components cover the most common use cases and a lot of scenarios, but sometimes we have to write a custom FieldExtractor ourselves! :) (A rough sketch of the first and last options follows.)
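A rough configuration sketch of the FormatterLineAggregator and FlatFileItemWriter.headerCallback suggestions, not a drop-in solution: the format string is an illustrative java.util.Formatter pattern for the id, name, description, price and date fields, and the header callback class is hypothetical:
<bean id="writer" class="org.springframework.batch.item.file.FlatFileItemWriter" scope="step">
    <property name="resource" value="#{jobParameters['outputResource']}" />
    <!-- hypothetical callback that writes "ID;NAME;DESCRIPTION;PRICE;DATE" as the first line -->
    <property name="headerCallback">
        <bean class="myapp.ColumnNamesHeaderCallback" />
    </property>
    <property name="lineAggregator">
        <bean class="org.springframework.batch.item.file.transform.FormatterLineAggregator">
            <!-- illustrative pattern: three strings, a price with two decimals, an ISO date -->
            <property name="format" value="%s;%s;%s;%.2f;%tF" />
            <property name="fieldExtractor">
                <bean class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
                    <property name="names" value="id,name,description,price,date" />
                </bean>
            </property>
        </bean>
    </property>
</bean>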