Write a Spring batch custom item writer - spring-batch

I need to write a Spring batch custom item writer that uses a footer, but I can't use the delegate pattern.
Is there another way to write a Spring batch custom item writer?
Thank you in advance.

Create a custom ItemWriter that also implements ItemStream (to manage restartability and footer writing) and override the following methods (see the sketch below):
ItemWriter.write(List<? extends T> items): write the items and, while writing, calculate the data needed for the footer
ItemStream.update(ExecutionContext): save the footer data calculated in write()
ItemStream.open(ExecutionContext): restore previously saved footer data
ItemStream.close(): do the real footer writing (directly in your own writer or using a callback)
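A minimal sketch of that approach, with no delegate involved; the String item type, the file handling, and the footer format are illustrative assumptions, not from the question:

import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.List;

import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.item.ItemStream;
import org.springframework.batch.item.ItemStreamException;
import org.springframework.batch.item.ItemWriter;

public class FooterAwareItemWriter implements ItemWriter<String>, ItemStream {

    private static final String COUNT_KEY = "footer.itemCount";

    private final String outputPath;   // illustrative: path of the file to write
    private BufferedWriter writer;
    private long itemCount;            // footer data accumulated during write()

    public FooterAwareItemWriter(String outputPath) {
        this.outputPath = outputPath;
    }

    @Override
    public void write(List<? extends String> items) throws Exception {
        for (String item : items) {
            writer.write(item);
            writer.newLine();
            itemCount++;               // calculate footer data while writing
        }
        writer.flush();
    }

    @Override
    public void open(ExecutionContext executionContext) throws ItemStreamException {
        // restore previously saved footer data on restart
        if (executionContext.containsKey(COUNT_KEY)) {
            itemCount = executionContext.getLong(COUNT_KEY);
        }
        try {
            // append mode so a restarted execution continues the same file
            writer = Files.newBufferedWriter(Paths.get(outputPath),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            throw new ItemStreamException("Could not open " + outputPath, e);
        }
    }

    @Override
    public void update(ExecutionContext executionContext) throws ItemStreamException {
        // save the footer data calculated so far in write()
        executionContext.putLong(COUNT_KEY, itemCount);
    }

    @Override
    public void close() throws ItemStreamException {
        try {
            // do the real footer writing just before closing the resource
            writer.write("FOOTER;items=" + itemCount);
            writer.newLine();
            writer.close();
        } catch (IOException e) {
            throw new ItemStreamException("Could not write footer to " + outputPath, e);
        }
    }
}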

Check here
Basically, you need to create a class that implements ItemWriter<YourModel> and FlatFileFooterCallback.
In the write method, define how the data is written; in writeFooter, write the footer of the file.
Then declare your class as a bean and set it as the writer in your job; a rough sketch follows.
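A sketch of that combination (the String item type and the footer line are placeholders; note that writeFooter is only invoked if this bean is also wired as the footerCallback of the FlatFileItemWriter that produces the file):

import java.io.IOException;
import java.io.Writer;
import java.util.List;

import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.file.FlatFileFooterCallback;

public class FooterCallbackItemWriter implements ItemWriter<String>, FlatFileFooterCallback {

    private long writtenItems;   // footer data collected while writing

    @Override
    public void write(List<? extends String> items) throws Exception {
        // define here how each item is written (or forwarded),
        // and collect whatever data the footer needs
        writtenItems += items.size();
    }

    @Override
    public void writeFooter(Writer writer) throws IOException {
        // called once at the end by the FlatFileItemWriter whose
        // footerCallback property points at this bean
        writer.write("FOOTER;items=" + writtenItems);
    }
}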

I've found a solution. I couldn't write a custom ItemWriter, so instead I created an output bean and overrode its toString method; in that method I format the output exactly as it has to appear in the file. Then I configured an ItemWriter with a PassThroughLineAggregator, which simply calls the output bean's toString method. And that's all!
Here's the code:
MOH_Diaria_Bean_Out.java:
package es.lac.absis.batch.app.percai.domain;
import java.util.ArrayList;
import java.util.List;
public class MOH_Diaria_Bean_Out {

    private List<MOH_Diaria_Bean> listaBeans = new ArrayList<MOH_Diaria_Bean>();

    public List<MOH_Diaria_Bean> getListaBeans() {
        return listaBeans;
    }

    public void setListaBeans(List<MOH_Diaria_Bean> listaBeans) {
        this.listaBeans = listaBeans;
    }

    public void add(MOH_Diaria_Bean bean) {
        listaBeans.add(bean);
    }

    @Override
    public String toString() {
        // Concatenate every bean's line, separated by newlines
        String salida = "";
        for (int j = 0; j < listaBeans.size(); j++) {
            MOH_Diaria_Bean bean = listaBeans.get(j);
            salida = salida + bean.toString();
            if (j < (listaBeans.size() - 1)) {
                salida = salida + "\n";
            }
        }
        return salida;
    }
}
ItemWriter:
<bean id="MOH_FusionadoFicheros_Writer" class="es.lac.absis.batch.arch.internal.writer.AbsisFlatFileItemWriter">
<property name="resource">
<bean class="es.lac.absis.batch.arch.internal.util.AbsisFileSystemResource">
<constructor-arg ref="filePCA00020"></constructor-arg>
</bean>
</property>
<property name="encoding" value="ISO8859_1"></property>
<property name="lineAggregator">
<bean class="org.springframework.batch.item.file.transform.PassThroughLineAggregator">
</bean>
</property>
</bean>

Related

Scheduler Spring boot

@Bean
public LockProvider lockProvider(DataSource dataSource) {
    return new JdbcTemplateLockProvider(dataSource);
}

@Bean
public ScheduledLockConfiguration taskScheduler(LockProvider lockProvider) {
    return ScheduledLockConfigurationBuilder
            .withLockProvider(lockProvider)
            .withPoolSize(10)
            .withDefaultLockAtMostFor(Duration.ofMinutes(10))
            .build();
}
My requirement is to run a scheduled task on only one instance in a clustered environment. For this I am using ShedLock, but the problem is that at server startup I get the exception below:
"java.lang.ClassCastException: net.javacrumbs.shedlock.spring.SpringLockableTaskSchedulerFactoryBean cannot be cast to org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler"
Please help me with this.
You can do this easily with dlock. You simply do the following and add the registrar to your XML config.
Java Code
@TryLock(name = "doSomeWork", owner = "serviceA", lockFor = ONE_MINUTE)
public void doSomeWork() {
    //...
}
XML Config
<!-- A bean for the lock implementation. Note that there should be only one global implementation-->
<bean id="postgresLock" class="com.yusufaytas.dlock.jdbc.PostgresIntervalLock">
<constructor-arg type="javax.sql.DataSource" ref="lockDataSource"/>
</bean>
<!-- The lock gets auto-registered to the registrar -->
<bean id="lockRegistrar" class="com.yusufaytas.dlock.spring.IntervalLockRegistrar"/>

unable to set/get the property values in the rowMapper

I'm trying to access the StepExecution in my RowMapper but am unable to do so. I have a property set in the XML called 'prop1'. I expect it to be set, but it is not. I also added a @BeforeStep method to the RowMapper, hoping I could get the step execution context, but this method is never invoked. Is there something else I need to do?
Here is my xml:
<bean id="bean1"
class="org.springframework.batch.item.database.JdbcCursorItemReader"
scope="step">
<property name="dataSource" ref="dataSource" />
<property name="sql"
value="${sql}"/>
<property name="fetchSize" value="${fetchSize}"></property>
<property name="rowMapper">
<bean id="rowMapper1" class="c.p.MyRowMapper" scope="step">
<property name="prop1" value="${prop1}"></property>
</property>
Here is my RowMapper:
public class MyRowMapper implements RowMapper<Object> {

    private String prop1;
    private StepExecution se;

    public String getProp1() {
        return prop1;
    }

    public void setProp1(String prop1) {
        this.prop1 = prop1;
    }

    @BeforeStep
    public void beforeStep(StepExecution stepExecution) {
        this.se = stepExecution;
    }
}
I have some properties set in the stepExecutionContext before this step in another step and I want to use them here in the RowMapper. The same thing works in the ItemProcessor but not the RowMapper. Please let me know if I need to do something more for lazy binding or any other issue.
Thanks.
The step execution context is not shared between steps. Maybe you mean the job execution context?
I suppose you could register rowMapper1 as a listener in the step (with the <listeners> tag; see the sketch below), but if you just want to read a value from the job execution context you can use
value="#{jobExecutionContext['foobar']}"
If you want a value injected from the step execution context instead, just replace jobExecutionContext with stepExecutionContext above.
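If you go the listener route, a minimal sketch could look like this; it assumes the same MyRowMapper bean is also referenced inside the step's <listeners> element, and 'foobar' is just the illustrative key from above:

import java.sql.ResultSet;
import java.sql.SQLException;

import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.StepExecutionListener;
import org.springframework.jdbc.core.RowMapper;

public class MyRowMapper implements RowMapper<Object>, StepExecutionListener {

    private StepExecution stepExecution;

    @Override
    public void beforeStep(StepExecution stepExecution) {
        // invoked because the mapper bean is registered as a step listener
        this.stepExecution = stepExecution;
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        return stepExecution.getExitStatus();
    }

    @Override
    public Object mapRow(ResultSet rs, int rowNum) throws SQLException {
        // read a value put into the job execution context by a previous step
        Object foobar = stepExecution.getJobExecution().getExecutionContext().get("foobar");
        // ... use it while mapping the row
        return foobar;
    }
}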

Spring batch: Create file by setting name programmatically

I have a spring batch job (defined in xml) which generates the csv export.
Inside FlatFileItemWriter bean I am setting resource, where the name of file is set.
<bean id="customDataFileWriter" class="org.springframework.batch.item.file.FlatFileItemWriter" scope="step">
<property name="resource" value="file:/tmp/export/custom-export.csv"/>
...
Now I need to set this file name according to certain logic, so I need to set it from a Java class. Any ideas?
Use the different builder classes of Spring Batch (job builder, step builder, and so on). Have a look at https://blog.codecentric.de/en/2013/06/spring-batch-2-2-javaconfig-part-1-a-comparison-to-xml/ to get an idea.
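For instance, with Java configuration the file name can be computed in the bean method before the writer is created. This is only a sketch; the date-based name and the String item type are assumptions:

import java.time.LocalDate;

import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.item.file.FlatFileItemWriter;
import org.springframework.batch.item.file.transform.PassThroughLineAggregator;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.FileSystemResource;

@Configuration
public class ExportConfiguration {

    @Bean
    @StepScope
    public FlatFileItemWriter<String> customDataFileWriter() {
        // any logic you like can build the file name here
        String fileName = "/tmp/export/custom-export-" + LocalDate.now() + ".csv";

        FlatFileItemWriter<String> writer = new FlatFileItemWriter<>();
        writer.setResource(new FileSystemResource(fileName));
        writer.setLineAggregator(new PassThroughLineAggregator<>());
        return writer;
    }
}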
You can extend FlatFileItemWriter and override the setResource method, adding your own logic to rename the file.
Here's an example implementation:
@Override
public void setResource(Resource resource) {
    if (resource instanceof ClassPathResource) {
        // Convert resource
        ClassPathResource res = (ClassPathResource) resource;
        try {
            String path = res.getPath();
            // Do something to "path" here
            File file = new File(path);
            // Check for permissions to write
            if (file.canWrite() || file.createNewFile()) {
                file.delete();
                // Call parent setter with new resource
                super.setResource(new FileSystemResource(file.getAbsolutePath()));
                return;
            }
        } catch (IOException e) {
            // File could not be read/written
        }
    }
    // If something went wrong or resource was delegated to MultiResourceItemWriter,
    // call parent setter with default resource
    super.setResource(resource);
}
Another possibility is the use of jobParameters, if your logic can be applied before the job is launched. See section 5.4, Late Binding, of the Spring Batch documentation.
Example:
<bean id="flatFileItemReader" scope="step" class="org.springframework.batch.item.file.FlatFileItemReader">
<property name="resource" value="#{jobParameters['input.file.name']}" />
</bean>
You can also use a MultiResourceItemWriter with a custom ResourceSuffixCreator. That will let you create 1 to n files with a common filename pattern.
Here's an example of the getSuffix method of a custom ResourceSuffixCreator:
@Override
public String getSuffix(int index) {
    // Your logic
    if (true)
        return "XXX" + index;
    else
        return "";
}

SkipListener for chain processors and multiple model items

I want a SkipListener for my CompositeItemProcessor which is a list of 2 processors processing an item like this:
Reader(itemRead) -> ValidationProcessor(itemRead, item) -> WrapProcessor(item, item)
The problem is that SkipListener only accepts the read model item (the itemRead; hopefully I'm wrong here) in
onSkipInProcess, as in onSkipInProcess(ValidationItem itemRead, Throwable arg1),
while I want the item produced by the 2nd processor too, inside this function.
There is also a high chance that the WrapProcessor might change its output (depending on the requirements) to something like WrapProcessor<item, anotherItem>.
Is there a way to solve this issue?
My compositeItemProcessor config:
<bean id="compositeItemProcessor"
class="org.springframework.batch.item.support.CompositeItemProcessor">
<property name="delegates">
<list>
<bean class="com.mycompany.ValidationProcessor">/
<bean class="com.mycompany.WrapProcessor" />
</list>
</property>
</bean>
You can carry your transformed object in a custom skip exception:
class WrapProcessor implements ItemProcessor<ValidationItem, WrappedObject> {

    @Override
    public WrappedObject process(ValidationItem item) throws Exception {
        WrappedObject wrapped = new WrappedObject(item);
        if (someCondition) {   // whatever condition marks the item as skippable
            throw new SkipWrappedObject(wrapped);
        }
        // ...
        return wrapped;
    }
}
and in onSkipInProcess(ValidationItem itemRead, Throwable arg1), arg1 can be cast to SkipWrappedObject and you can access the skipped wrapped object (see the sketch below).
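A sketch of what that exception and the listener method could look like; WrappedObject and ValidationItem are the types from the question, and the exception also has to be listed in the chunk's skippable-exception-classes for the skip to happen:

public class SkipWrappedObject extends RuntimeException {

    private final WrappedObject wrapped;

    public SkipWrappedObject(WrappedObject wrapped) {
        this.wrapped = wrapped;
    }

    public WrappedObject getWrapped() {
        return wrapped;
    }
}

and in the SkipListener:

@Override
public void onSkipInProcess(ValidationItem itemRead, Throwable t) {
    if (t instanceof SkipWrappedObject) {
        // both the read item and the transformed item are available here
        WrappedObject wrapped = ((SkipWrappedObject) t).getWrapped();
        // ... handle the skipped, wrapped object
    }
}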

How to process logically related rows after ItemReader in SpringBatch?

Scenario
To make it simple, let's suppose I have an ItemReader that returns me 25 rows.
The first 10 rows belong to student A
The next 5 belong to student B
and the 10 remaining belong to student C
I want to aggregate them together logically say by studentId and flatten them to end up with one row per student.
Problem
If I understand correctly, setting the commit interval to 5 will do the following:
Send 5 rows to the Processor (which will aggregate them or apply whatever business logic I tell it to).
After processing, it will write 5 rows.
Then it will do the same for the next 5 rows, and so on.
If that is true, then for the next five I would have to check the rows already written, read them back, aggregate them with the ones I am currently processing, and write them again.
I personally do not like that.
What is the best practice to handle a situation like this in Spring Batch?
Alternative
Sometimes I feel that it is much easier to write a regular Spring JDBC main program, because then I have full control over what I want to do. However, I wanted to take advantage of the job repository, the monitoring of the job state, the ability to restart and skip, and the job and step listeners...
My Spring Batch Code
My module-context.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:batch="http://www.springframework.org/schema/batch"
xsi:schemaLocation="http://www.springframework.org/schema/batch http://www.springframework.org/schema/batch/spring-batch-2.1.xsd
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">
<description>Example job to get you started. It provides a skeleton for a typical batch application.</description>
    <batch:job id="job1">
        <batch:step id="step1">
            <batch:tasklet transaction-manager="transactionManager" start-limit="100">
                <batch:chunk reader="attendanceItemReader"
                             processor="attendanceProcessor"
                             writer="attendanceItemWriter"
                             commit-interval="10" />
            </batch:tasklet>
        </batch:step>
    </batch:job>

    <bean id="attendanceItemReader" class="org.springframework.batch.item.database.JdbcCursorItemReader">
        <property name="dataSource">
            <ref bean="sourceDataSource"/>
        </property>
        <property name="sql"
                  value="select s.student_name ,s.student_id ,fas.attendance_days ,fas.attendance_value from K12INTEL_DW.ftbl_attendance_stumonabssum fas inner join k12intel_dw.dtbl_students s on fas.student_key = s.student_key inner join K12INTEL_DW.dtbl_schools ds on fas.school_key = ds.school_key inner join k12intel_dw.dtbl_school_dates dsd on fas.school_dates_key = dsd.school_dates_key where dsd.rolling_local_school_yr_number = 0 and ds.school_code = ? and s.student_activity_indicator = 'Active' and fas.LOCAL_GRADING_PERIOD = 'G1' and s.student_current_grade_level = 'Gr 9' order by s.student_id"/>
        <property name="preparedStatementSetter" ref="attendanceStatementSetter"/>
        <property name="rowMapper" ref="attendanceRowMapper"/>
    </bean>

    <bean id="attendanceStatementSetter" class="edu.kdc.visioncards.preparedstatements.AttendanceStatementSetter"/>

    <bean id="attendanceRowMapper" class="edu.kdc.visioncards.rowmapper.AttendanceRowMapper"/>

    <bean id="attendanceProcessor" class="edu.kdc.visioncards.AttendanceProcessor" />

    <bean id="attendanceItemWriter" class="org.springframework.batch.item.file.FlatFileItemWriter">
        <property name="resource" value="file:target/outputs/passthrough.txt"/>
        <property name="lineAggregator">
            <bean class="org.springframework.batch.item.file.transform.PassThroughLineAggregator" />
        </property>
    </bean>
</beans>
My supporting classes for the Reader.
A PreparedStatementSetter
package edu.kdc.visioncards.preparedstatements;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import org.springframework.jdbc.core.PreparedStatementSetter;
public class AttendanceStatementSetter implements PreparedStatementSetter {

    public void setValues(PreparedStatement ps) throws SQLException {
        ps.setInt(1, 7);
    }
}
and a RowMapper
package edu.kdc.visioncards.rowmapper;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.springframework.jdbc.core.RowMapper;
import edu.kdc.visioncards.dto.AttendanceDTO;
public class AttendanceRowMapper<T> implements RowMapper<AttendanceDTO> {

    public static final String STUDENT_NAME = "STUDENT_NAME";
    public static final String STUDENT_ID = "STUDENT_ID";
    public static final String ATTENDANCE_DAYS = "ATTENDANCE_DAYS";
    public static final String ATTENDANCE_VALUE = "ATTENDANCE_VALUE";

    public AttendanceDTO mapRow(ResultSet rs, int rowNum) throws SQLException {
        AttendanceDTO dto = new AttendanceDTO();
        dto.setStudentId(rs.getString(STUDENT_ID));
        dto.setStudentName(rs.getString(STUDENT_NAME));
        dto.setAttDays(rs.getInt(ATTENDANCE_DAYS));
        dto.setAttValue(rs.getInt(ATTENDANCE_VALUE));
        return dto;
    }
}
My processor
package edu.kdc.visioncards;
import java.util.HashMap;
import java.util.Map;
import org.springframework.batch.item.ItemProcessor;
import edu.kdc.visioncards.dto.AttendanceDTO;
public class AttendanceProcessor implements ItemProcessor<AttendanceDTO, Map<Integer, AttendanceDTO>> {

    private Map<Integer, AttendanceDTO> map = new HashMap<Integer, AttendanceDTO>();

    public Map<Integer, AttendanceDTO> process(AttendanceDTO dto) throws Exception {
        if (map.containsKey(new Integer(dto.getStudentId()))) {
            AttendanceDTO attDto = (AttendanceDTO) map.get(new Integer(dto.getStudentId()));
            attDto.setAttDays(attDto.getAttDays() + dto.getAttDays());
            attDto.setAttValue(attDto.getAttValue() + dto.getAttValue());
        } else {
            map.put(new Integer(dto.getStudentId()), dto);
        }
        return map;
    }
}
My concerns from code above
In the Processor, I create a HashMap, and as I process the rows I check whether I already have that student in the map. If it is not there, I add it; if it is already there, I grab it, take the values I am interested in, and add them to the row that I am currently processing.
After that, Spring Batch Framework writes to a File according to my configuration
My question is as follows:
I do not want it to go to the writer yet. I want to process all the remaining rows first. How do I keep the map that I have created in memory for the next set of rows that need to go through this same Processor? Every time a row is processed through AttendanceProcessor, the map is initialized. Should I put the map initialization in a static block?
In my application I created a CollectingJdbcCursorItemReader that extends the standard JdbcCursorItemReader and does exactly what you need. Internally it uses my CollectingRowMapper: an extension of the standard RowMapper that maps multiple related rows to one object.
Here is the code of the ItemReader; the code of the CollectingRowMapper interface, and an abstract implementation of it, is available in another answer of mine.
import java.sql.ResultSet;
import java.sql.SQLException;

import org.springframework.batch.item.ReaderNotOpenException;
import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.jdbc.core.RowMapper;

/**
 * A JdbcCursorItemReader that uses a {@link CollectingRowMapper}.
 * Like the superclass, this reader is not thread-safe.
 *
 * @author Pino Navato
 **/
public class CollectingJdbcCursorItemReader<T> extends JdbcCursorItemReader<T> {

    private CollectingRowMapper<T> rowMapper;
    private boolean firstRead = true;

    /**
     * Accepts a {@link CollectingRowMapper} only.
     **/
    @Override
    public void setRowMapper(RowMapper<T> rowMapper) {
        this.rowMapper = (CollectingRowMapper<T>) rowMapper;
        super.setRowMapper(rowMapper);
    }

    /**
     * Reads the next row and maps it to an item.
     **/
    @Override
    protected T doRead() throws Exception {
        if (rs == null) {
            throw new ReaderNotOpenException("Reader must be open before it can be read.");
        }
        try {
            if (firstRead) {
                if (!rs.next()) {   // Subsequent calls to next() will be executed by rowMapper
                    return null;
                }
                firstRead = false;
            } else if (!rowMapper.hasNext()) {
                return null;
            }
            T item = readCursor(rs, getCurrentItemCount());
            return item;
        }
        catch (SQLException se) {
            throw getExceptionTranslator().translate("Attempt to process next row failed", getSql(), se);
        }
    }

    @Override
    protected T readCursor(ResultSet rs, int currentRow) throws SQLException {
        T result = super.readCursor(rs, currentRow);
        setCurrentItemCount(rs.getRow());
        return result;
    }
}
You can use it just like the classic JdbcCursorItemReader: the only requirement is that you provide it a CollectingRowMapper instead of the classic RowMapper.
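The CollectingRowMapper itself lives in the author's other answer; the shape implied by the reader code above is roughly this (a sketch, not the original code):

import org.springframework.jdbc.core.RowMapper;

/**
 * Sketch of the interface used by CollectingJdbcCursorItemReader above:
 * a RowMapper that consumes several related rows per mapped object
 * (calling rs.next() itself) and reports whether the cursor still has
 * unread rows.
 */
public interface CollectingRowMapper<T> extends RowMapper<T> {

    boolean hasNext();
}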
I always follow this pattern:
I make my reader scope "step", and in a @PostConstruct method I fetch the results and put them in a Map (see the sketch below).
In the processor, I convert the associated collection into a writable list and send the writable list on.
In the ItemWriter, I persist the writable item(s), depending on the case.
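A rough sketch of that pattern applied to the attendance example above; the shortened SQL, the injected JdbcTemplate, and returning one student's rows per read() call are all assumptions:

import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

import javax.annotation.PostConstruct;

import org.springframework.batch.item.ItemReader;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;

import edu.kdc.visioncards.dto.AttendanceDTO;
import edu.kdc.visioncards.rowmapper.AttendanceRowMapper;

// declared with scope="step" so a fresh instance (and a fresh fetch) is created per step execution
public class GroupedAttendanceReader implements ItemReader<List<AttendanceDTO>> {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    private Iterator<List<AttendanceDTO>> groups;

    @PostConstruct
    public void init() {
        // fetch everything once and group the rows by student id
        List<AttendanceDTO> rows = jdbcTemplate.query(
                "select ... order by s.student_id",            // the query from the reader above
                new AttendanceRowMapper<AttendanceDTO>());
        Map<String, List<AttendanceDTO>> byStudent = new LinkedHashMap<>();
        for (AttendanceDTO row : rows) {
            byStudent.computeIfAbsent(row.getStudentId(), k -> new ArrayList<>()).add(row);
        }
        groups = byStudent.values().iterator();
    }

    @Override
    public List<AttendanceDTO> read() {
        // one item = all rows of one student; returning null ends the step
        return groups.hasNext() ? groups.next() : null;
    }
}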
Because you changed your question, I am adding a new answer.
If the students are ordered, there is no need for a list/map: you can keep exactly one student object in the processor as the "current" one and aggregate on it until a new one arrives (read: the id changes).
If the students are not ordered, you will never know when a specific student is "finished", and you would have to keep all students in a map which cannot be written until the end of the complete read sequence.
Beware:
the processor needs to know when the reader is exhausted
it is hard to get this working with an arbitrary commit rate and an "id" concept: if you aggregate items that are somehow identical, the processor just cannot know whether the currently processed item is the last one
basically, the use case is solved either completely at the reader level or at the writer level (see the other answer)
private SimpleItem currentItem;
private StepExecution stepExecution;

@Override
public SimpleItem process(SimpleItem newItem) throws Exception {
    SimpleItem returnItem = null;

    if (currentItem == null) {
        currentItem = new SimpleItem(newItem.getId(), newItem.getValue());
    } else if (currentItem.getId() == newItem.getId()) {
        // aggregate somehow
        String value = currentItem.getValue() + newItem.getValue();
        currentItem.setValue(value);
    } else {
        // "clone"/copy currentItem
        returnItem = new SimpleItem(currentItem.getId(), currentItem.getValue());
        // replace currentItem
        currentItem = newItem;
    }

    // reader exhausted?
    if (stepExecution.getExecutionContext().containsKey("readerExhausted")
            && (Boolean) stepExecution.getExecutionContext().get("readerExhausted")
            && currentItem.getId() == stepExecution.getExecutionContext().getInt("lastItemId")) {
        returnItem = new SimpleItem(currentItem.getId(), currentItem.getValue());
    }

    return returnItem;
}
Basically you are talking about batch processing with changing IDs(1), where the batch has to keep track of the change.
For Spring/Spring Batch this means:
an ItemWriter which checks the list of items for an id change
before the change, the items are stored in a temporary datastore(2) (List, Map, whatever) and are not written out
when the id changes, the aggregating/flattening business code runs on the items in the datastore and one item is written; the datastore can then be used for the next items with the next id
this concept needs a reader which tells the step "I'm exhausted" so the temporary datastore is properly flushed at the end of the items (file/database)
Here is a rough and simple code example:
@Override
public void write(List<? extends SimpleItem> items) throws Exception {
    // setup with first sharedId at startup
    if (currentId == null) {
        currentId = items.get(0).getSharedId();
    }

    // check for change of sharedId in input
    // keep items in temporary dataStore until id change of input
    // call delegate if there is an id change or if the reader is exhausted
    for (SimpleItem item : items) {
        // already known sharedId, add to tempData
        if (item.getSharedId() == currentId) {
            tempData.add(item);
        } else {
            // or new sharedId, write tempData, empty it, keep new id
            // the delegate does the flattening/aggregating
            delegate.write(tempData);
            tempData.clear();
            currentId = item.getSharedId();
            tempData.add(item);
        }
    }

    // check if reader is exhausted, flush tempData
    if ((Boolean) stepExecution.getExecutionContext().get("readerExhausted")
            && tempData.size() > 0) {
        delegate.write(tempData);
        // optional delegate.clear();
    }
}
(1) assuming the items are ordered by an ID (which can be composite too)
(2) a HashMap Spring bean, for thread safety
Use a StepExecutionListener and store the records as a map in the StepExecutionContext; you can then group them in the writer or in a writer listener and write them all at once.
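A minimal sketch of that idea; the class name and the aggregation logic are assumptions based on the attendance example above, and the processor also has to be registered as a step listener if it is not picked up automatically:

import java.util.HashMap;
import java.util.Map;

import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.StepExecutionListener;
import org.springframework.batch.item.ItemProcessor;

import edu.kdc.visioncards.dto.AttendanceDTO;

public class GroupingProcessor implements ItemProcessor<AttendanceDTO, AttendanceDTO>, StepExecutionListener {

    private Map<String, AttendanceDTO> byStudent;

    @Override
    public void beforeStep(StepExecution stepExecution) {
        // keep the grouping map in the step execution context so the writer
        // (or a writer/step listener) can pick it up and write it in one go
        byStudent = new HashMap<>();
        stepExecution.getExecutionContext().put("byStudent", byStudent);
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        return stepExecution.getExitStatus();
    }

    @Override
    public AttendanceDTO process(AttendanceDTO dto) {
        AttendanceDTO existing = byStudent.get(dto.getStudentId());
        if (existing == null) {
            byStudent.put(dto.getStudentId(), dto);
        } else {
            existing.setAttDays(existing.getAttDays() + dto.getAttDays());
            existing.setAttValue(existing.getAttValue() + dto.getAttValue());
        }
        return null; // filter every row; only the aggregated map is written at the end
    }
}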