Spring Batch FlatFileItemWriter write Object with List

I have a Partner POJO:
Partner: partnerId, List<Address>
Address POJO: addressId, address, city, country, pin
I want to create a flat file in Spring Batch. Each line of the file should be:
PartnerId;AddressId;Address;City;Country;Pin
I am getting a Partner POJO with an ID and a list of addresses.
How can I use the FlatFileItemWriter with the Partner POJO?
My FlatFileItemWriter configuration:
<?xml version="1.0" encoding="UTF-8"?>
<bean id="itemWriter" class="org.springframework.batch.item.file.FlatFileItemWriter">
    <property name="resource" value="file:outputFile.txt" />
    <property name="lineAggregator">
        <bean class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
            <property name="delimiter" value=";" />
            <property name="fieldExtractor">
                <bean class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
                    <property name="names" value="partnerId,addressId,address,city,country,pin" />
                </bean>
            </property>
        </bean>
    </property>
    <property name="headerCallback" ref="headerCallback" />
</bean>
I get an error on addressId (Partner has no such property; only the nested Address does).

You need to flatten your data and pass items to the writer in the flat form expected in the output file. For example:
class Partner {
    int id;
    List<Address> addresses;
    // getters and setters omitted
}

class Address {
    int addressId;
    String address, city, country, pin;
    // getters and setters omitted
}

// create this POJO to encapsulate the flat data (one line of the expected CSV)
class PartnerAddress {
    int partnerId, addressId;
    String address, city, country, pin;
    // getters and setters omitted
}
An item processor would prepare the data:
class PartnerItemProcessor implements ItemProcessor<Partner, List<PartnerAddress>> {

    @Override
    public List<PartnerAddress> process(Partner partner) {
        List<PartnerAddress> partnerAddresses = new ArrayList<>();
        for (Address address : partner.getAddresses()) {
            PartnerAddress partnerAddress = new PartnerAddress();
            partnerAddress.setPartnerId(partner.getId());
            partnerAddress.setAddressId(address.getAddressId());
            partnerAddress.setAddress(address.getAddress());
            partnerAddress.setCity(address.getCity());
            partnerAddress.setCountry(address.getCountry());
            partnerAddress.setPin(address.getPin());
            partnerAddresses.add(partnerAddress);
        }
        return partnerAddresses;
    }
}
The writer then receives the flattened PartnerAddress items and writes them to the flat file.
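Note that FlatFileItemWriter writes one item per line, so when the processor returns a List<PartnerAddress> per Partner, the writer actually receives a chunk of lists. A minimal sketch of one way to bridge that gap (the class name ListUnpackingItemWriter is assumed for illustration, not part of the original answer) is a delegating writer that flattens each chunk before handing it to the real FlatFileItemWriter:

import java.util.ArrayList;
import java.util.List;

import org.springframework.batch.item.ItemWriter;

// Hypothetical delegating writer: flattens a chunk of lists into a single
// list of items and hands it to the wrapped writer.
public class ListUnpackingItemWriter<T> implements ItemWriter<List<T>> {

    private ItemWriter<T> delegate;

    @Override
    public void write(List<? extends List<T>> items) throws Exception {
        List<T> flatList = new ArrayList<>();
        for (List<T> list : items) {
            flatList.addAll(list); // unpack each partner's addresses
        }
        delegate.write(flatList);
    }

    public void setDelegate(ItemWriter<T> delegate) {
        this.delegate = delegate;
    }
}

If the delegate is the FlatFileItemWriter from the configuration above, remember to register it as a stream in the step so it is opened and closed properly.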
Hope this helps.

Related

How can I make an XStreamMarshaller skip unknown bindings?

I'm working on a Spring Batch program that unmarshals XML files with XStreamMarshaller.
How can I make an XStreamMarshaller skip any unknown, unannotated fields?
<bean id="merge.reader.item"
class="org.springframework.batch.item.xml.StaxEventItemReader">
<property name="fragmentRootElementName" value="xml-fragment"/>
<property name="unmarshaller" ref="merge.reader.unmarshaller"/>
</bean>
<bean id="merge.reader.unmarshaller"
class="org.springframework.oxm.xstream.XStreamMarshaller">
<property name="aliases" ref="merge.reader.binder"/>
<property name="autodetectAnnotations" value="true"/>
</bean>
<util:map id="merge.reader.binder">
<entry key="xml-fragment" value="path.to.my.Model"/>
</util:map>
public class Model {

    @XStreamAlias(value = "one")
    private String one;

    @XStreamAlias(value = "other")
    private String other;
}
The problem is that new XML elements will be introduced at some point.
I don't want to (actually, I can't) add extra fields to my Model.
I'm answering my own question. The solution is the one @biziclop linked to. (Disclaimer: I also posted the same answer on that post.)
public class ExtendedXStreamMarshaller extends XStreamMarshaller {

    @Override
    protected void configureXStream(final XStream xstream) {
        super.configureXStream(xstream);
        xstream.ignoreUnknownElements(); // will it blend?
    }
}
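To use it, point the merge.reader.unmarshaller bean definition above at ExtendedXStreamMarshaller instead of org.springframework.oxm.xstream.XStreamMarshaller; the aliases and autodetectAnnotations properties stay the same.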

Spring Batch JdbcCursorItemReader

Is there a better way to load SQL from the file system for injection into a JdbcCursorItemReader?
I want to load the SQL query from a file instead of hardcoding it in the configuration file.
<!-- spring bean -->
<bean id="jdbcReader" class="com.sample.DatabaseReader">
    <property name="sql" value="query.sql"/>
</bean>
and then I extended JdbcCursorItemReader:
import java.io.FileReader;
import java.io.IOException;

import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.util.FileCopyUtils;

// extended cursor item reader
class DatabaseReader extends JdbcCursorItemReader {

    // overridden so the "sql" property can carry a file name instead of the query
    @Override
    public void setSql(String fileName) {
        try {
            // read the query from the given file path
            String query = FileCopyUtils.copyToString(new FileReader(fileName));
            super.setSql(query);
        } catch (IOException e) {
            throw new IllegalStateException("Could not read SQL from " + fileName, e);
        }
    }
}
Use Spring's PropertyPlaceholderConfigurer to inject the SQL directly into the reader (no need to extend the reader for this). An example would look like this:
<bean id="jdbcItemReader" class="org.springframework.batch.item.database.JdbcCursorItemReader">
<property name="dataSource" ref="dataSource" />
<property name="rowMapper" ref="myRowMapper>
<property name="sql" value="${batch.sql}"/>
</bean>
As long as you have a PropertyPlaceholderConfigurer configured that points to the properties file holding the batch.sql property, you should be good to go.
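A minimal sketch of the missing pieces (bean layout and file name assumed for illustration):

<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="location" value="classpath:batch.properties"/>
</bean>

with a batch.properties file on the classpath containing the query, for example:

batch.sql=SELECT id, name FROM customer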

Spring Batch : PassThroughFieldExtractor with BigDecimal formatting

I'm using Spring Batch to extract a CSV file from a DB table that has a mix of column types. The sample table SQL schema is:
[product] [varchar](16) NOT NULL,
[version] [varchar](16) NOT NULL,
[life_1_dob] [date] NOT NULL,
[first_itm_ratio] [decimal](9,6) NOT NULL,
The sample database column values for the first_itm_ratio field are:
first_itm_ratio
1.050750
0.920000
but I would like my CSV to drop the trailing zeros from the values:
first_itm_ratio
1.05075
0.92
I'd prefer not to define the formatting for each specific field in the table, but rather apply a global format to all columns of that data type.
My csvFileWriter bean
<bean id="csvFileWriter" class="org.springframework.batch.item.file.FlatFileItemWriter" scope="step">
<property name="resource" ref="fileResource"/>
<property name="lineAggregator">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
<property name="delimiter">
<util:constant static-field="org.springframework.batch.item.file.transform.DelimitedLineTokenizer.DELIMITER_COMMA"/>
</property>
<property name="fieldExtractor">
<bean class="org.springframework.batch.item.file.transform.PassThroughFieldExtractor" />
</property>
</bean>
</property>
</bean>
You can:
1. Write your own BigDecimalToStringConverter (implementing Converter<BigDecimal, String>) to format BigDecimal values without trailing zeros (see the sketch below)
2. Create a new ConversionService (MyConversionService) and register the custom converter on it
3. Extend DelimitedLineAggregator, inject MyConversionService, and override doAggregate() to format fields using the injected conversion service
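A minimal sketch of step 1 (assuming stripTrailingZeros() followed by toPlainString() is an acceptable way to drop the trailing zeros):

import java.math.BigDecimal;

import org.springframework.core.convert.converter.Converter;

public class BigDecimalToStringConverter implements Converter<BigDecimal, String> {

    @Override
    public String convert(BigDecimal source) {
        // 1.050750 -> "1.05075", 0.920000 -> "0.92"
        return source.stripTrailingZeros().toPlainString();
    }
}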
public class MyConversionService extends DefaultConversionService {

    public MyConversionService() {
        super();
        addConverter(new BigDecimalToStringConverter());
    }
}

public class MyFieldLineAggregator<T> extends DelimitedLineAggregator<T> {

    private ConversionService cs = new MyConversionService();

    @Override
    public String doAggregate(Object[] fields) {
        for (int i = 0; i < fields.length; i++) {
            final Object o = fields[i];
            // null guard avoids an NPE on nullable columns
            if (o != null && cs.canConvert(o.getClass(), String.class)) {
                fields[i] = cs.convert(o, String.class);
            }
        }
        return super.doAggregate(fields);
    }
}
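To wire it in, replace the DelimitedLineAggregator bean in the csvFileWriter configuration above with MyFieldLineAggregator (keeping the same delimiter and fieldExtractor properties); every BigDecimal field then passes through the converter before the line is assembled.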

Spring Social facebook + Spring Security

I want to integrate Spring Social Facebook into my application with Spring Security (I use XML configuration). All I need is to connect a Facebook account to my app's account. In a simple example I found this:
<bean id="connectionRepository" factory-method="createConnectionRepository"
factory-bean="usersConnectionRepository" scope="request">
<constructor-arg value="#{request.userPrincipal.name}" />
<aop:scoped-proxy proxy-target-class="false" />
</bean>
So, as I understand it, this method comes into play:
public ConnectionRepository createConnectionRepository(String userId) {
    if (userId == null) {
        throw new IllegalArgumentException("userId cannot be null");
    }
    return new JdbcConnectionRepository(userId, jdbcTemplate, connectionFactoryLocator, textEncryptor, tablePrefix);
}
It resives "userId" from #{request.userPrincipal.name}. So, my question: How can I pass "userId"
to this method if I want to obtain this "userId" using SecurityContextHolder.getContext().getAuthentication().getPrincipal().
The only way I see is to create my implementation of JdbcUsersConnectionRepository and redefine createConnectionRepository(String userId) method. But maybe there is more elegant solution.
There is another way:
<bean id="connectionRepository" factory-method="createConnectionRepository" factory-bean="usersConnectionRepository"
scope="request">
<constructor-arg value="#{authenticationService.getAuthenticatedUsername()}" />
<aop:scoped-proxy proxy-target-class="false" />
</bean>
#Service("authenticationService")
public class AuthenticationService {
public String getAuthenticatedUsername() {
return SecurityContextHolder.getContext().getAuthentication().getPrincipal();
}
}
You can also do it completely in SpEL (I do not like this kind of dependency):
<bean id="connectionRepository" factory-method="createConnectionRepository" factory-bean="usersConnectionRepository"
scope="request">
<constructor-arg value="#{T(org.springframework.security.core.context.SecurityContextHolder).getContext().getAuthentication().getPrincipal()}" />
<aop:scoped-proxy proxy-target-class="false" />
</bean>

Implementing SkipListener to write invalid records to a flat file

I am working on setting up a Spring Batch job that does the conventional READ > PROCESS > WRITE operation. However, I am trying to implement a listener that captures records considered invalid during the PROCESS phase and writes them out to an error log file.
My listener class uses an instance of FlatFileItemWriter to write the data, but Spring Batch is not instantiating the writer instance properly.
My listener class looks like this:
public class DTOProcessorListener extends SkipListenerSupport<AttributeReportGenerationDTO, AttributeValue> {

    private static final Logger LOGGER = LoggerFactory.getLogger(DTOProcessorListener.class);

    private FlatFileItemWriter<AttributeReportGenerationDTO> flatFileItemWriter;

    @Override
    public void onSkipInProcess(AttributeReportGenerationDTO item, Throwable t) {
        try {
            LOGGER.error("Record not processed for attribute value with ID : " + item.getAttributeValueId());
            List<AttributeReportGenerationDTO> list = new ArrayList<AttributeReportGenerationDTO>();
            list.add(item);
            flatFileItemWriter.write(list);
        } catch (Exception e) {
            LOGGER.error("Unable to write to the error output file", e);
        }
    }

    /**
     * @param flatFileItemWriter
     *            the flatFileItemWriter to set
     */
    public void setFlatFileItemWriter(FlatFileItemWriter<AttributeReportGenerationDTO> flatFileItemWriter) {
        this.flatFileItemWriter = flatFileItemWriter;
    }
}
and my job configuration XML looks like this:
<bean id="skipListener" class="something.DTOProcessorListener" scope="step">
<property name="flatFileItemWriter">
<bean id="errorItemWriter" class="org.springframework.batch.item.file.FlatFileItemWriter" scope="step">
<property name="resource" value="file:#{jobParameters['error.filename']}" />
<property name="appendAllowed" value="true" />
<property name="lineAggregator">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
<property name="fieldExtractor">
<bean class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
<property name="names"
value="productTypeId, productTypeName, productId, productName, skuId, skuName, attributeValueId, attributeName, attributeValue, attributeType, nonEditableValueCheckSum, editableValueCheckSum" />
</bean>
</property>
</bean>
</property>
<property name="headerCallback">
<bean class="something.CsvHeaderImplementation">
<property name="headerString"
value="Product Type ID,Product Type,Product ID,Product Name,Sku ID,Sku Name,Attribute Value ID,Attribute Name,Attribute Value,Attribute Type,Check Sum 1,Check Sum 2" />
</bean>
</property>
</bean>
</property>
</bean>
I get the error:
org.springframework.batch.item.WriterNotOpenException: Writer must be open before it can be written to
I am unable to register a stream entry in the job config because the FlatFileItemWriter bean is defined internally (inside the listener). If I create a bean outside of the listener and refer to it, it returns a proxy instance of the FlatFileItemWriter class.
Has anyone successfully wired up a writer to a flat file in a listener?
Thanks for the help.
Well, why don't you use the writer as a normal bean? You could register it as a stream, and to get around the step proxy you could use the PropertyPlaceholderConfigurer.
I created a working example under my GitHub repo, but I think Spring Batch could use an improvement here: it should be easier to implement error-item logging.
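For reference, the WriterNotOpenException occurs because FlatFileItemWriter is an ItemStream: something must call open() before write() and close() at the end, which normally happens when the writer is registered as a stream on the step. A minimal sketch of the manual alternative (class name assumed for illustration; the same callbacks could equally live on DTOProcessorListener itself):

import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.StepExecutionListener;
import org.springframework.batch.item.file.FlatFileItemWriter;

// Opens the error writer before the step and closes it afterwards,
// so onSkipInProcess() can safely call write() in between.
public class WriterLifecycleListener implements StepExecutionListener {

    private FlatFileItemWriter<AttributeReportGenerationDTO> flatFileItemWriter;

    @Override
    public void beforeStep(StepExecution stepExecution) {
        flatFileItemWriter.open(stepExecution.getExecutionContext());
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        flatFileItemWriter.close();
        return stepExecution.getExitStatus();
    }

    public void setFlatFileItemWriter(FlatFileItemWriter<AttributeReportGenerationDTO> writer) {
        this.flatFileItemWriter = writer;
    }
}

Register it as a listener on the same step, and inject the same writer instance the skip listener uses.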