I am new to Spring Batch and have a feed file with key/value pairs in .txt format. I need to load the file into a MySQL DB using Spring Batch. Is there any way to read a text file with key=value messages? Two records are separated by an empty line and the delimiter is '='.
Sample File:
Name=Jack
Id=ADC12345
ClassId=7018
Rank=-326
Name=Gile
Id=FED12345
ClassId=7018
Rank=-32
Name, Id, ClassId and Rank are the column names.
Here's a working solution (you just need a blank line after the last record or it won't be read):
1) Declare your business object:
public class Student {
private String name;
private String id;
private Integer classId;
private Integer rank;
// Getter + Setters
}
2) Declare a custom ItemStreamReader which delegates to an actual FlatFileItemReader:
public class CustomMultiLineItemReader implements ItemStreamReader<Student> {
private FlatFileItemReader<FieldSet> delegate;
@Override
public void open(ExecutionContext executionContext) throws ItemStreamException {
delegate.open(executionContext);
}
@Override
public void update(ExecutionContext executionContext) throws ItemStreamException {
delegate.update(executionContext);
}
@Override
public void close() throws ItemStreamException {
delegate.close();
}
// Getter + Setters
}
3) Override its read method to manually map your multiline records:
public Student read() throws Exception {
Student s = null;
FieldSet line;
while ((line = this.delegate.read()) != null) {
if (line.getFieldCount() == 0) {
return s; // Record must end with footer
} else {
String prefix = line.readString(0);
if (prefix.equals("Name")) {
s = new Student(); // Record must start with header
s.setName(line.readString(1));
}
else if (prefix.equals("Id")) {
s.setId(line.readString(1));
}
else if (prefix.equals("ClassId")) {
s.setClassId(line.readInt(1));
}
else if (prefix.equals("Rank")) {
s.setRank(line.readInt(1));
}
}
}
return null;
}
4) Declare the reader in the step and configure it:
<bean class="xx.xx.xx.CustomMultiLineItemReader">
<property name="delegate">
<bean class="org.springframework.batch.item.file.FlatFileItemReader">
<property name="resource" value="file:${YOUR_FILE}"></property>
<property name="linesToSkip" value="0"></property>
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.PatternMatchingCompositeLineMapper">
<property name="tokenizers">
<map>
<entry key="*">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
<property name="delimiter" value="="></property>
</bean>
</entry>
</map>
</property>
<property name="fieldSetMappers">
<map>
<entry key="*">
<bean class="org.springframework.batch.item.file.mapping.PassThroughFieldSetMapper" />
</entry>
</map>
</property>
</bean>
</property>
</bean>
</property>
</bean>
I used a PatternMatchingCompositeLineMapper to associate line content (here: *) with the corresponding LineTokenizer and FieldSetMapper (even though the matching is trivial in this case).
The PassThroughFieldSetMapper then hands the raw FieldSet to the reader, which does the mapping itself, and the DelimitedLineTokenizer splits each line on the "=" character.
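The original question also asks to load the records into MySQL, which the reading code above does not cover. A minimal writer sketch, assuming a hypothetical STUDENT(NAME, ID, CLASS_ID, RANK) table and the Student object from step 1 (note that RANK is a reserved word in MySQL 8+ and may need backtick-quoting):

import javax.sql.DataSource;
import org.springframework.batch.item.database.JdbcBatchItemWriter;

public class StudentWriterFactory {
    // Sketch only: the STUDENT table layout is an assumption, not part of the question.
    public static JdbcBatchItemWriter<Student> studentWriter(DataSource dataSource) throws Exception {
        JdbcBatchItemWriter<Student> writer = new JdbcBatchItemWriter<>();
        writer.setDataSource(dataSource);
        writer.setSql("INSERT INTO STUDENT (NAME, ID, CLASS_ID, RANK) VALUES (?, ?, ?, ?)");
        // bind each Student field to the matching ? placeholder
        writer.setItemPreparedStatementSetter((student, ps) -> {
            ps.setString(1, student.getName());
            ps.setString(2, student.getId());
            ps.setInt(3, student.getClassId());
            ps.setInt(4, student.getRank());
        });
        writer.afterPropertiesSet(); // validates that sql and dataSource are set
        return writer;
    }
}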
There are two challenges with this input format:
- detecting the start/end of a complete item
- splitting the item into key/value pairs
One solution could be a custom RecordSeparatorPolicy combined with a custom LineMapper, like this:
import java.util.HashMap;
import java.util.Map;
import org.junit.Test;
import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.item.file.FlatFileItemReader;
import org.springframework.batch.item.file.mapping.DefaultLineMapper;
import org.springframework.batch.item.file.mapping.FieldSetMapper;
import org.springframework.batch.item.file.separator.RecordSeparatorPolicy;
import org.springframework.batch.item.file.transform.DelimitedLineTokenizer;
import org.springframework.batch.item.file.transform.FieldSet;
import org.springframework.core.io.ClassPathResource;
import org.springframework.validation.BindException;
public class ReaderKeyValueTest {
@Test
public void test() throws Exception {
FlatFileItemReader<Map<String, String>> reader = new FlatFileItemReader<Map<String, String>>();
reader.setResource(new ClassPathResource("keyvalue.txt"));
// custom RecordSeparatorPolicy
reader.setRecordSeparatorPolicy(new RecordSeparatorPolicy() {
@Override
public String preProcess(final String record) {
// empty line is added to the previous 'item'
if (record.isEmpty()) {
return record;
} else {
// a line with content is part of an 'item'; let's enhance it by adding a separator
return record + ",";
}
}
@Override
public String postProcess(final String record) {
return record;
}
@Override
public boolean isEndOfRecord(final String record) {
// the end of a record is marked with the last key/value pair for "Rank"
return record.contains("Rank=");
}
});
DefaultLineMapper<Map<String, String>> lineMapper = new DefaultLineMapper<Map<String, String>>();
// the key/value pairs are separated with ',', so we can use the standard DelimitedLineTokenizer here
lineMapper.setLineTokenizer(new DelimitedLineTokenizer());
lineMapper.setFieldSetMapper(new FieldSetMapper<Map<String, String>>() {
@Override
public Map<String, String> mapFieldSet(final FieldSet fieldSet) throws BindException {
Map<String, String> item = new HashMap<String, String>();
// split each "Key=Value" and add to the Map
for (int i = 0; i < fieldSet.getValues().length; i++) {
String[] entry = fieldSet.getValues()[i].split("=");
item.put(entry[0], entry[1]);
}
return item;
}
});
reader.setLineMapper(lineMapper);
reader.open(new ExecutionContext());
Map<String, String> item;
while ((item = reader.read()) != null) {
System.out.println(item.toString());
}
reader.close();
}
}
The sysout produces:
{ClassId=7018, Id=ADC12345, Name=Jack, Rank=-326}
{ClassId=7018, Id=FED12345, Name=Gile, Rank=-32}
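If you still want typed Student items out of this approach (for example, to feed a JDBC writer for the MySQL load), a small ItemProcessor can adapt the Map. A sketch, assuming the Student class from the first answer; MapToStudentProcessor is a hypothetical name:

import java.util.Map;
import org.springframework.batch.item.ItemProcessor;

// Hypothetical adapter: converts the reader's Map items into Student objects.
// The map keys match the prefixes in the input file.
public class MapToStudentProcessor implements ItemProcessor<Map<String, String>, Student> {
    @Override
    public Student process(Map<String, String> item) {
        Student s = new Student();
        s.setName(item.get("Name"));
        s.setId(item.get("Id"));
        s.setClassId(Integer.valueOf(item.get("ClassId")));
        s.setRank(Integer.valueOf(item.get("Rank")));
        return s;
    }
}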
Any ideas on why the MultiResourceItemReader is leaving the last file locked, so that I cannot move it with the move tasklet? The move completes for all the other files, but not the last one read.
The IOException is:
java.nio.file.FileSystemException: C:\Users\UGDW\MyProjects\ngsa2\oab-outside-assets-batchlauncher\input\EQ_AcctData_4321_03292020.csv -> C:\Users\UGDW\MyProjects\ngsa2\oab-outside-assets-batchlauncher\output\EQ_AcctData_4321_03292020.csv_processed: The process cannot access the file because it is being used by another process.
Batch config (stripped down):
<batch:job id="stockPlanAccountDataJob">
<batch:step id="getFilesInInputDirectory" next="fileProcessing">
<tasklet ref="getFilesInInputDirectoryTasklet"/>
</batch:step>
<batch:step id="fileProcessing" next="moveFilesToOuputDirectory">
<tasklet>
<chunk reader="stockPlanAccountDataFileReader" processor="stockPlanAccountDataProcessor" writer="stockPlanConsoleItemWriter"
commit-interval="20" skip-limit="20">
<batch:skippable-exception-classes>
<batch:include class="java.lang.Exception"/>
<batch:exclude class="org.springframework.batch.item.file.FlatFileParseException"/>
</batch:skippable-exception-classes>
</chunk>
</tasklet>
</batch:step>
<batch:step id="moveFilesToOuputDirectory">
<tasklet ref="stockPlanMoveFilesTasklet"/>
</batch:step>
</batch:job>
<bean id="getFilesInInputDirectoryTasklet" class="simplepeekandmulti.GetFilesInInputDirectoryTasklet" scope="step"/>
<bean id="stockPlanAccountDataFileReader" class="simplepeekandmulti.StockPlanAccountDataFileReader" scope="step">
<property name="delegate" ref="preprocessorUsingPeekable"/>
</bean>
<bean id="preprocessorUsingPeekable" class="org.springframework.batch.item.support.SingleItemPeekableItemReader" scope="step">
<property name="delegate" ref="multiFileResourceReader"/>
</bean>
<bean name="multiFileResourceReader" class="org.springframework.batch.item.file.MultiResourceItemReader" scope="step">
<property name="resources" value="file:#{jobExecutionContext[filepattern]}" />
<property name="delegate" ref="genericFlatFileReader" />
<property name="strict" value="true" />
</bean>
<bean id="genericFlatFileReader" class="org.springframework.batch.item.file.FlatFileItemReader" scope="step">
<property name="lineMapper" ref="genericFileLineMapper"/>
</bean>
<bean name="genericFileLineMapper" class="org.springframework.batch.item.file.mapping.PassThroughLineMapper" scope="step" />
<bean id="stockPlanAccountDataProcessor" class="simplepeekandmulti.StockPlanAccountDataProcessor" scope="step"/>
<bean id="stockPlanMoveFilesTasklet" class="simplepeekandmulti.StockPlanMoveFilesTasklet" scope="step"/>
Reader (with dumb logic):
package simplepeekandmulti;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.PeekableItemReader;
import simplepeekandmulti.StockPlanAccountData;
import simplepeekandmulti.StockPlanFileInputAccountData;
public class StockPlanAccountDataFileReader implements ItemReader<StockPlanFileInputAccountData> {
private PeekableItemReader<String> delegate;
private AtomicLong itemsRead = new AtomicLong(0L);
private static final String PIPE = "|";
private static final String PIPE_SPLIT = "\\|";
private static final int NUM_RECORDS_PER_LINE = 6;
public PeekableItemReader<String> getDelegate() {
return delegate;
}
public void setDelegate(PeekableItemReader<String> delegate) {
this.delegate = delegate;
}
@Override
public StockPlanFileInputAccountData read() throws Exception {
String currentLine = delegate.read();
StockPlanFileInputAccountData inputData = new StockPlanFileInputAccountData();
int recs = 0;
List<String> errorList = new ArrayList<>();
while (currentLine != null) {
if (currentLine.contains(PIPE)) {
recs++;
setDetailLine(currentLine, inputData, recs, errorList);
} else {
errorList.add(currentLine);
}
if ((errorList.size() % 2) == 0) {
return inputData;
}
itemsRead.incrementAndGet();
currentLine = delegate.read();
}
return null;
}
private void setDetailLine(String inputLine, StockPlanFileInputAccountData inputData,
int numRecs, List<String> errorList) {
String[] entry = inputLine.split(PIPE_SPLIT);
if (entry.length == NUM_RECORDS_PER_LINE) {
inputData.setDataRecordsPerFile(numRecs);
StockPlanAccountData data = new StockPlanAccountData();
data.setExternalClientId(entry[0]);
data.setSSN(entry[1]);
data.setExternalParticipantId(entry[2]);
data.setFirstName(entry[3]);
data.setLastName(entry[4]);
data.setDateOfBirth(entry[5]);
inputData.addToDataList(data);
} else {
errorList.add("Detail Line Is Invalid, Does NOT have 6 columns, 5 pipes: " + inputLine);
}
}
}
Processor:
package simplepeekandmulti;
import java.util.ArrayList;
import java.util.List;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.annotation.BeforeStep;
import org.springframework.batch.item.ItemProcessor;
import com.vanguard.inst.batch.oab.springboot.data.StockPlanFileInputAccountData;
public class StockPlanAccountDataProcessor implements ItemProcessor<StockPlanFileInputAccountData, StockPlanFileInputAccountData> {
private StepExecution stepExecution;
@BeforeStep
public void beforeStep(StepExecution stepExecution) {
this.stepExecution = stepExecution;
}
public StockPlanFileInputAccountData process(StockPlanFileInputAccountData item) throws Exception {
List<String> errorList = new ArrayList<>(0);
if (errorList.isEmpty()) {
return item;
} else {
//exchangeEmailService.sendEmail(fileName, errorList);
return null;
}
}
}
Writer:
package simplepeekandmulti;
import java.util.List;
import org.springframework.batch.item.ItemWriter;
import org.springframework.stereotype.Component;
import com.vanguard.inst.batch.oab.springboot.data.StockPlanFileInputAccountData;
@Component
public class StockConsoleOutputItemWriter implements ItemWriter<StockPlanFileInputAccountData> {
@Override
public void write(List<? extends StockPlanFileInputAccountData> arg0) throws Exception {
// TODO Auto-generated method stub
}
}
Move files tasklet (with the file names hardcoded): the last file in the loop always fails.
package simplepeekandmulti;
import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.ArrayList;
import java.util.List;
import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
@Component
public class StockPlanMoveFilesTasklet implements Tasklet {
private static final String CLASS_NAME = StockPlanMoveFilesTasklet.class.getSimpleName();
#Value("$simplepeekandmulti-{INPUT_DIR}")
private String inputDir;
#Value("$simplepeekandmulti-{OUTPUT_DIR}")
private String outputDir;
private static final String PROCESSED = "_processed";
public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) {
String[] fileList = {"EQ_AcctData_3210_03302020.csv", "EQ_AcctData_4321_03302020.csv"};
try {
for (String fileName : fileList) {
Path pathFrom = FileSystems.getDefault().getPath(inputDir, fileName);
Path pathTo = FileSystems.getDefault().getPath(outputDir, fileName + PROCESSED);
Files.move(pathFrom, pathTo, StandardCopyOption.REPLACE_EXISTING);
}
} catch (IOException io) {
System.out.println(io.toString());
}
return RepeatStatus.FINISHED;
}
}
The CSV files simply have a header date, pipe-delimited records, and a footer with the total record count:
03/30/2020
3210|59658625|12000|AADFBCJGH|LLOQMNURS|1962-03-08
3210|10124602|12001|AADFBCJGH|LLOQMNURS|1962-03-08
2
03/30/2020
4321|5690154|13000|AADFBCJGH|LLOQMNURS|1988-10-23
4321|745701|13001|AADFBCJGH|LLOQMNURS|1988-10-23
2
I have the following use case, where I have to create a REST URL dynamically from properties. For that I have created a custom mediator which reads the properties and calls the backend service.
I am having an issue with how to send the response back to the user. It is in XML format, but I need to parse the XML and send just the text. For that I am using a PayloadFactory. I am attaching my code here; can someone please suggest what I am doing wrong?
<api xmlns="http://ws.apache.org/ns/synapse" name="tririgaProxy" context="/services">
<resource methods="GET" url-mapping="/employee">
<inSequence>
<sequence key="tririgaConf"/>
<property name="triUser" expression="get-property('triUser')"/>
<property name="triPass" expression="get-property('triPass')"/>
<property name="triURL" expression="get-property('triURL')"/>
<property name="triWfName" expression="get-property('triPeople.database.employee.wfName')"/>
<class name="com.wso2.tririga.mediator.IncomingMediator"/>
<payloadFactory media-type="text">
<format><![CDATA[$1]]></format>
<args>
<arg evaluator="xml" expression="/status/text()"/>
</args>
</payloadFactory>
</inSequence>
</resource>
</api>
Java Class:
package com.wso2.tririga.mediator;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.ClientProtocolException;
import org.apache.http.client.ResponseHandler;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.methods.HttpRequestBase;
import org.apache.http.client.utils.URIBuilder;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
import org.apache.synapse.MessageContext;
import org.apache.synapse.mediators.AbstractMediator;
import org.apache.synapse.util.PayloadHelper;
public class IncomingMediator extends AbstractMediator {
private static final Log log = LogFactory.getLog(IncomingMediator.class);
@Override
public boolean mediate(MessageContext msgContext) {
String triUser = String.valueOf(msgContext.getProperty("triUser"));
String triPass = String.valueOf(msgContext.getProperty("triPass"));
String triURL = String.valueOf(msgContext.getProperty("triURL"));
String triWfName = String.valueOf(msgContext.getProperty("triWfName"));
try {
URI uri = new URIBuilder(triURL)
.addParameter("USERNAME", triUser)
.addParameter("PASSWORD", triPass)
.addParameter("ioName", triWfName).build();
log.info("URI: "+uri.toString());
String response = execute(uri);
PayloadHelper.setTextPayload(msgContext, convertToXML(response));
} catch (URISyntaxException e) {
log.error("Error while creating URI", e);
}
return true;
}
private static String execute(URI uri) {
String responseBody = null;
CloseableHttpClient httpclient = HttpClients.createDefault();
try {
HttpGet get = new HttpGet(uri);
ResponseHandler<String> responseHandler = new ResponseHandler<String>() {
@Override
public String handleResponse(final HttpResponse response) throws ClientProtocolException, IOException {
int status = response.getStatusLine().getStatusCode();
if (status >= 200 && status < 300) {
HttpEntity entity = response.getEntity();
String responseStr = EntityUtils.toString(entity);
return "Successful".equalsIgnoreCase(responseStr) ? "RetCode=C;Message=Success" : "RetCode=F;Message=Failed because Itegration Exception";
} else {
throw new ClientProtocolException("Unexpected response status: " + status);
}
}
};
try {
responseBody = httpclient.execute(get, responseHandler);
} catch (ClientProtocolException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
} finally {
try {
httpclient.close();
} catch (IOException e) {
e.printStackTrace();
}
}
return responseBody;
}
private static String convertToXML(String response) {
return "<status>" + response + "</status>";
}
}
I don't get any response back from this.
Since you need to transform the response message, you need to do the payload transformation in the out sequence of the API.
Currently you are transforming the message in the in sequence.
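A minimal sketch of that idea, assuming the backend response still carries the <status> element built by the mediator; the payloadFactory simply moves from the inSequence into the outSequence:

<resource methods="GET" url-mapping="/employee">
    <inSequence>
        <!-- properties and class mediator as before -->
        <class name="com.wso2.tririga.mediator.IncomingMediator"/>
    </inSequence>
    <outSequence>
        <payloadFactory media-type="text">
            <format><![CDATA[$1]]></format>
            <args>
                <arg evaluator="xml" expression="/status/text()"/>
            </args>
        </payloadFactory>
        <send/>
    </outSequence>
</resource>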
I need to take data (input.xml) from one file, which is 100MB-200MB in size, and write it into four different files based on some logic.
Input XML:
<?xml version="1.0"?>
<Orders>
<Order><OrderId>1</OrderId><Total>10</Total><Name>jon1</Name></Order>
<Order><OrderId>2</OrderId><Total>20</Total><Name>jon2</Name></Order>
<Order><OrderId>3</OrderId><Total>30</Total><Name>jon3</Name></Order>
<Order><OrderId>4</OrderId><Total>40</Total><Name>jon4</Name></Order>
</Orders>
The logic: if Total is 1-10, write to file1; if Total is 11-20, write to file2; and so on.
Expected output:
1 10 jon1 -->write into file1
2 20 jon2 -->write into file2
3 30 jon3 -->write into file3
4 40 jon4 -->write into file4
Here I have enabled streaming in the DataMapper (under its configuration), but I'm not getting the proper output: only some of the records that should land in a given file after satisfying its condition actually end up there, and only in one file.
But if I disable the streaming button in the DataMapper it works fine. As there are lakhs of records, I must use the streaming option.
Is there any other way to configure the DataMapper to enable the streaming option?
Please suggest me on this. Thanks.
It is difficult to see the problem without a little more detail on what you are doing.
Nevertheless, I think this will probably help you try another approach.
The DataMapper loads the full XML document into memory even if you activate streaming; it has to, in order to support XPath (it loads the full XML input into a DOM).
So if you cannot afford to load a 200MB document into memory, you will need a workaround.
What I have done before is create a Java component that turns the input stream into an iterator with the help of a StAX parser. With a very simple implementation you can code an iterator that pulls from the stream to create the next element (a POJO, a map, a string...). In the Mule flow, after the Java component, you should be able to use a for-each with a choice inside and apply your logic.
A quick example for your data:
package tests;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Map.Entry;
import javax.xml.stream.FactoryConfigurationError;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
public class OrdersStreamIterator implements Iterator<Map<String,String>> {
final static Log LOGGER = LogFactory.getLog(OrdersStreamIterator.class);
final InputStream is;
final XMLStreamReader xmlReader;
boolean end = false;
HashMap<String,String> next;
public OrdersStreamIterator(InputStream is)
throws XMLStreamException, FactoryConfigurationError {
this.is = is;
xmlReader = XMLInputFactory.newInstance().createXMLStreamReader(is);
}
protected HashMap<String,String> _next() throws XMLStreamException {
int event;
HashMap<String,String> order = null;
String orderChild = null;
String orderChildValue = null;
while (xmlReader.hasNext()) {
event = xmlReader.getEventType();
if (event == XMLStreamConstants.START_ELEMENT) {
if (order==null) {
if (checkOrder()) {
order = new HashMap<String,String>();
}
}
else {
orderChild = xmlReader.getLocalName();
}
}
else if (event == XMLStreamConstants.END_ELEMENT) {
if (checkOrders()) {
end = true;
return null;
}
else if (checkOrder()) {
xmlReader.next();
return order;
}
else if (order!=null) {
order.put(orderChild, orderChildValue);
orderChild = null;
orderChildValue = null;
}
}
else if (order!=null && orderChild!=null){
switch (event) {
case XMLStreamConstants.SPACE:
case XMLStreamConstants.CHARACTERS:
case XMLStreamConstants.CDATA:
int start = xmlReader.getTextStart();
int length = xmlReader.getTextLength();
if (orderChildValue==null) {
orderChildValue = new String(xmlReader.getTextCharacters(), start, length);
}
else {
orderChildValue += new String(xmlReader.getTextCharacters(), start, length);
}
break;
}
}
xmlReader.next();
}
end = true;
return null;
}
protected boolean checkOrder() {
return "Order".equals(xmlReader.getLocalName());
}
protected boolean checkOrders() {
return "Orders".equals(xmlReader.getLocalName());
}
@Override
public boolean hasNext() {
if (end) {
return false;
}
else if (next==null) {
try {
next = _next();
} catch (XMLStreamException e) {
LOGGER.error(e.getMessage(), e);
end = true;
}
return !end;
}
else {
return true;
}
}
@Override
public Map<String,String> next() {
if (hasNext()) {
final HashMap<String,String> n = next;
next = null;
return n;
}
else {
return null;
}
}
@Override
public void remove() {
throw new UnsupportedOperationException("ReadOnly!");
}
// Test
public static String dump(Map<String,String> o) {
String s = "{";
for (Entry<String,String> e : o.entrySet()) {
if (s.length()>1) {
s+=", ";
}
s+= "\"" + e.getKey() + "\" : \"" + e.getValue() + "\"";
}
return s + "}";
}
public static void main(String[] argv) throws XMLStreamException, FactoryConfigurationError {
final InputStream is = OrdersStreamIterator.class.getClassLoader().getResourceAsStream("orders.xml");
final OrdersStreamIterator i = new OrdersStreamIterator(is);
while (i.hasNext()) {
System.out.println(dump(i.next()));
}
}
}
An example flow:
<flow name="testsFlow">
<http:listener config-ref="HTTP_Listener_Configuration" path="/" doc:name="HTTP"/>
<scripting:component doc:name="Groovy">
<scripting:script engine="Groovy"><![CDATA[return tests.OrdersStreamIterator.class.getClassLoader().getResourceAsStream("orders.xml");]]></scripting:script>
</scripting:component>
<set-payload value="#[new tests.OrdersStreamIterator(payload)]" doc:name="Iterator"/>
<foreach doc:name="For Each">
<logger message="#[tests.OrdersStreamIterator.dump(payload)]" level="INFO" doc:name="Logger"/>
</foreach>
</flow>
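From there, the choice inside the for-each can route each order map on its Total value. A rough Mule 3 sketch (the target directory, file names, and the two example ranges are placeholders; appending to an existing file also needs outputAppend="true" on the file connector):

<foreach doc:name="For Each">
    <choice doc:name="Choice">
        <when expression="#[Integer.parseInt(payload['Total']) &lt;= 10]">
            <set-payload value="#[payload['OrderId'] + ' ' + payload['Total'] + ' ' + payload['Name'] + '\n']"/>
            <file:outbound-endpoint path="/tmp/orders" outputPattern="file1.txt" doc:name="File1"/>
        </when>
        <when expression="#[Integer.parseInt(payload['Total']) &lt;= 20]">
            <set-payload value="#[payload['OrderId'] + ' ' + payload['Total'] + ' ' + payload['Name'] + '\n']"/>
            <file:outbound-endpoint path="/tmp/orders" outputPattern="file2.txt" doc:name="File2"/>
        </when>
        <otherwise>
            <logger message="No file rule for: #[tests.OrdersStreamIterator.dump(payload)]" level="WARN" doc:name="Unmatched"/>
        </otherwise>
    </choice>
</foreach>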
The Spring Batch JdbcCursorItemReader can accept a preparedStatementSetter:
<bean id="reader" class="org.springframework.batch.item.database.JdbcCursorItemReader">
<property name="dataSource" ref="..." />
<property name="sql" value="SELECT * FROM test WHERE col1 = ?">
<property name="rowMapper" ref="..." />
<property name="preparedStatementSetter" ref="..." />
</bean>
This works well if the SQL uses ? as placeholder(s), as in the above example. However, our pre-existing SQL uses named parameters, e.g. SELECT * FROM test WHERE col1 = :param.
Is there a way to get a JdbcCursorItemReader to work with a NamedPreparedStatementSetter rather than a simple PreparedStatementSetter?
Thanks
You can try with jobParameters; in this case you don't need any PreparedStatementSetter. Note the reader must be step-scoped for the late binding of #{jobParameters[...]} to work.
<bean id="reader" class="org.springframework.batch.item.database.JdbcCursorItemReader">
<property name="dataSource" ref="..." />
<property name="sql" value="SELECT * FROM test WHERE col1 = #{jobParameters['col1']">
<property name="rowMapper" ref="..." />
<property name="preparedStatementSetter" ref="..." />
</bean>
Pass the value when running the job:
JobParameters param = new JobParametersBuilder().addString("col1", "value1").toJobParameters();
JobExecution execution = jobLauncher.run(job, param);
Since we don't have an official solution from Spring, we can fix this problem using a simple approach:
Define an interface to provide the SqlParameterSource:
import org.springframework.jdbc.core.namedparam.SqlParameterSource;
public interface SqlParameterSourceProvider {
SqlParameterSource getSqlParameterSource();
}
Extend the JdbcCursorItemReader, adding the named-parameter features:
import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.jdbc.core.SqlTypeValue;
import org.springframework.jdbc.core.StatementCreatorUtils;
import org.springframework.jdbc.core.namedparam.*;
import org.springframework.util.Assert;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.*;
public class NamedParameterJdbcCursorItemReader<T> extends JdbcCursorItemReader<T> {
private SqlParameterSourceProvider parameterSourceProvider;
private String paramedSql;
public NamedParameterJdbcCursorItemReader(SqlParameterSourceProvider parameterSourceProvider) {
this.parameterSourceProvider = parameterSourceProvider;
}
@Override
public void setSql(String sql) {
Assert.notNull(parameterSourceProvider, "You have to set parameterSourceProvider before the SQL statement");
Assert.notNull(sql, "sql must not be null");
paramedSql = sql;
super.setSql(NamedParameterUtils.substituteNamedParameters(sql, parameterSourceProvider.getSqlParameterSource()));
}
@Override
protected void applyStatementSettings(PreparedStatement stmt) throws SQLException {
final ParsedSql parsedSql = NamedParameterUtils.parseSqlStatement(paramedSql);
final List<?> parameters = Arrays.asList(NamedParameterUtils.buildValueArray(parsedSql, parameterSourceProvider.getSqlParameterSource(), null));
for (int i = 0; i < parameters.size(); i++) {
StatementCreatorUtils.setParameterValue(stmt, i + 1, SqlTypeValue.TYPE_UNKNOWN, parameters.get(i));
}
}
}
Create a concrete class that implements the SqlParameterSourceProvider interface and holds the up-to-date parameter values to be used in your query:
public class MyCustomSqlParameterSourceProvider implements SqlParameterSourceProvider {
private Map<String, Object> params;
public void updateParams(Map<String, Object> params) {
this.params = params;
}
@Override
public SqlParameterSource getSqlParameterSource() {
final MapSqlParameterSource paramSource = new MapSqlParameterSource();
paramSource.addValues(params);
return paramSource;
}
}
Finally, update the Spring configuration:
<bean id="reader" class="org.wisecoding.stackoverflow.NamedParameterJdbcCursorItemReader">
<constructor-arg ref="sqlParameterSourceProvider"/>
<property name="dataSource" ref="..." />
<property name="sql" value=SELECT * FROM test WHERE col1 = :param" />
<property name="rowMapper" ref="..." />
<property name="preparedStatementSetter" ref="..." />
</bean>
<bean id="sqlParameterSourceProvider" class="org.wisecoding.stackoverflow.MyCustomSqlParameterSourceProvider">
</bean>
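A hypothetical call site for MyCustomSqlParameterSourceProvider, refreshing the values before the job runs (the key matches the :param placeholder in the SQL above):

Map<String, Object> params = new HashMap<>();
params.put("param", "value1");
sqlParameterSourceProvider.updateParams(params);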
Currently, there is not a way to do this. The JdbcCursorItemReader uses raw JDBC (a PreparedStatement) instead of the Spring JdbcTemplate under the hood (since there is no way to get the underlying ResultSet when using JdbcTemplate). If you'd like to contribute this as a new feature, or request it as a new feature, feel free to do so at jira.spring.io.
The original solution is in https://jira.spring.io/browse/BATCH-2521, but it does not support an id IN (:ids) clause.
Here is an enhancement:
import lombok.Setter;
import lombok.extern.slf4j.Slf4j;
import lombok.val;
import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.jdbc.core.PreparedStatementCreatorFactory;
import org.springframework.jdbc.core.namedparam.MapSqlParameterSource;
import org.springframework.jdbc.core.namedparam.NamedParameterUtils;
import java.util.Map;
@Slf4j
public class NamedParameterJdbcCursorItemReader<T> extends JdbcCursorItemReader<T> {
protected void setNamedParametersSql(String sql, Map<String, Object> parameters) {
val parsedSql = NamedParameterUtils.parseSqlStatement(sql);
val paramSource = new MapSqlParameterSource(parameters);
val sqlToUse = NamedParameterUtils.substituteNamedParameters(parsedSql, paramSource);
val declaredParams = NamedParameterUtils.buildSqlParameterList(parsedSql, paramSource);
val params = NamedParameterUtils.buildValueArray(parsedSql, paramSource, null);
val pscf = new PreparedStatementCreatorFactory(sql, declaredParams);
val pss = pscf.newPreparedStatementSetter(params);
log.info("sql: {}", sqlToUse);
log.info("parameters: {}", parameters);
setSql(sqlToUse);
setPreparedStatementSetter(pss);
}
}
Usage:
@Slf4j
public class UserItemJdbcReader extends NamedParameterJdbcCursorItemReader<UserEntity> {
@PostConstruct
public void init() {
val sql = "SELECT * FROM users WHERE id IN (:ids)";
val parameters = new HashMap<String, Object>(4);
parameters.put("ids", Arrays.asList(1,2,3));
setDataSource(dataSource);
setRowMapper(new UserRowMapper());
setNamedParametersSql(sql, parameters);
}
}
In my case I reused the ArgumentPreparedStatementSetter from spring-jdbc:
private static final String SQL = "SELECT * FROM payments.transactions WHERE time_stamp >= ? AND time_stamp <= ?";
...
Object[] args = new Object[2];
args[0] = new Date(Instant.now().minus(7, ChronoUnit.DAYS).toEpochMilli());
args[1] = new Date();
ArgumentPreparedStatementSetter argumentPreparedStatementSetter =
new ArgumentPreparedStatementSetter(args);
return new JdbcCursorItemReaderBuilder<>()
.name("dbReader")
.sql(SQL)
.preparedStatementSetter(argumentPreparedStatementSetter)
...
I want to fetch a string from the setValues() method of my ItemPreparedStatementSetter, which is my SQL string, and use it in the setSql() method of the ItemWriter. Can somebody help me achieve this?
Below is my PreparedStatementSetter class:
public class PreparedStatementSetter implements
ItemPreparedStatementSetter<Object>{
public static final int INT = 4;
public static final int STRING = 12;
public void setValues(Object item, PreparedStatement ps)
throws SQLException{
@SuppressWarnings({ "rawtypes", "unchecked" })
Map<String, Object> map = (LinkedHashMap) item;
int i = 0;
String columnType;
String sql="";
String final_sql;
try {
sql=generateSql();
} catch (ParserConfigurationException e) {
e.printStackTrace();
} catch (SAXException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
int len=map.size();
for(int k=0 ; k<len ; k++)
{
sql=sql+","+"?";
}
sql=sql+")";
// I want to use this final_sql string in the setSql() method of the ItemWriter
final_sql=sql.replaceFirst("," , " ");
for (Map.Entry<String, Object> entry : map.entrySet()) {
i++;
columnType = entry.getKey().substring(0,
(entry.getKey().indexOf("_")));
switch (Integer.parseInt(columnType)) {
case INT: {
ps.setInt(i, (Integer) (entry.getValue()));
break;
}
case STRING: {
ps.setString(i, (String) (entry.getValue()));
break;
}
}
}
}
private String generateSql()
throws ParserConfigurationException, SAXException, IOException
{
String sql="";
Insert insert;
String table="";
try
{
File is = new File("C:/Users/AMDecalog.Trainees/workspace/SpringJobExecuter/config/input1.xml");
JAXBContext context = JAXBContext.newInstance(Insert.class);
Unmarshaller unmarshaller = context.createUnmarshaller();
insert = (Insert) unmarshaller.unmarshal(is);
Insert in = insert;
List<String> into = in.getInto().getTablename();
for(String s : into)
{
table = table+s;
System.out.println(table);
}
sql = "insert into" + " " + table + " " + "values(";
System.out.println(sql);
}
catch (JAXBException e)
{
e.printStackTrace();
}
return sql;
}
OK, you are not implementing your PreparedStatementSetter the right way.
All you have to do is declare your SQL in the ItemWriter config or in the ItemWriter implementation.
I will assume you are using a JdbcBatchItemWriter:
public class MyItemWriter extends JdbcBatchItemWriter<MyDomainObj> {
@Override
public void afterPropertiesSet() throws Exception {
// set the SQL
String SQL = "UPDATE MYTABLE SET FIELD1 = ? WHERE FIELD2 = ?";
super.setSql(SQL);
super.afterPropertiesSet(); // let the parent validate the configuration
}
}
Now, your batch config should declare this writer like this:
<bean id="myItemWriter" class="xxx.yyy.MyItemWriter">
<property name="dataSource" ref="dataSourceIemt" />
<property name="itemPreparedStatementSetter" ref="myPreparedStatementSetter" />
</bean>
And finally:
#Component("myPreparedStatementSetter")
public class MyPreparedStatementSetter implements ItemPreparedStatementSetter<MyDomainObj> {
public void setValues(MyDomainObj obj, PreparedStatement ps) throws SQLException {
ps.setString(1, obj.getsometing());
ps.setString(2, obj.getsometingElse());
}
}
Hope it is clear.
Regards