Using Spring Batch JdbcCursorItemReader with NamedParameters - spring-batch

The Spring Batch JdbcCursorItemReader can accept a preparedStatementSetter:
<bean id="reader" class="org.springframework.batch.item.database.JdbcCursorItemReader">
<property name="dataSource" ref="..." />
<property name="sql" value="SELECT * FROM test WHERE col1 = ?">
<property name="rowMapper" ref="..." />
<property name="preparedStatementSetter" ref="..." />
</bean>
This works well if the SQL uses ? as placeholder(s), as in the above example. However, our pre-existing SQL uses named parameters, e.g. SELECT * FROM test WHERE col1 = :param.
Is there a way to get a JdbcCursorItemReader to work with a NamedPreparedStatementSetter rather than a simple PreparedStatementSetter?
Thanks

You can try with jobParameters. In this case you don't need any PreparedStatementSetter, but the reader must be step-scoped so the #{jobParameters[...]} expression can be resolved at runtime.
<bean id="reader" class="org.springframework.batch.item.database.JdbcCursorItemReader">
<property name="dataSource" ref="..." />
<property name="sql" value="SELECT * FROM test WHERE col1 = #{jobParameters['col1']">
<property name="rowMapper" ref="..." />
<property name="preparedStatementSetter" ref="..." />
</bean>
Pass the value when running the job:
JobParameters param = new JobParametersBuilder().addString("col1", "value1").toJobParameters();
JobExecution execution = jobLauncher.run(job, param);
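For reference, roughly the same idea in Java config (a minimal sketch assuming Spring Batch 4's JdbcCursorItemReaderBuilder and a hypothetical Test row type; the reader is step-scoped so the job parameter can be injected):
@Bean
@StepScope
public JdbcCursorItemReader<Test> reader(DataSource dataSource,
        @Value("#{jobParameters['col1']}") String col1) {
    return new JdbcCursorItemReaderBuilder<Test>()
            .name("reader")
            .dataSource(dataSource)
            .sql("SELECT * FROM test WHERE col1 = ?")
            // bind the injected job parameter to the single ? placeholder
            .preparedStatementSetter(new ArgumentPreparedStatementSetter(new Object[] { col1 }))
            .rowMapper(new BeanPropertyRowMapper<>(Test.class))
            .build();
}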

Since there is no official solution from Spring, we can work around this problem with a simple approach:
Define an interface that provides the SqlParameterSource:
import org.springframework.jdbc.core.namedparam.SqlParameterSource;
public interface SqlParameterSourceProvider {
SqlParameterSource getSqlParameterSource();
}
Extend JdbcCursorItemReader and add the named-parameter support:
import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.jdbc.core.SqlTypeValue;
import org.springframework.jdbc.core.StatementCreatorUtils;
import org.springframework.jdbc.core.namedparam.*;
import org.springframework.util.Assert;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.*;
public class NamedParameterJdbcCursorItemReader<T> extends JdbcCursorItemReader<T> {
private SqlParameterSourceProvider parameterSourceProvider;
private String paramedSql;
public NamedParameterJdbcCursorItemReader(SqlParameterSourceProvider parameterSourceProvider) {
this.parameterSourceProvider = parameterSourceProvider;
}
@Override
public void setSql(String sql) {
Assert.notNull(parameterSourceProvider, "You have to set parameterSourceProvider before the SQL statement");
Assert.notNull(sql, "sql must not be null");
paramedSql = sql;
super.setSql(NamedParameterUtils.substituteNamedParameters(sql, parameterSourceProvider.getSqlParameterSource()));
}
@Override
protected void applyStatementSettings(PreparedStatement stmt) throws SQLException {
final ParsedSql parsedSql = NamedParameterUtils.parseSqlStatement(paramedSql);
final List<?> parameters = Arrays.asList(NamedParameterUtils.buildValueArray(parsedSql, parameterSourceProvider.getSqlParameterSource(), null));
for (int i = 0; i < parameters.size(); i++) {
StatementCreatorUtils.setParameterValue(stmt, i + 1, SqlTypeValue.TYPE_UNKNOWN, parameters.get(i));
}
}
}
Create a concrete class that implements the SqlParameterSourceProvider interface and holds the current parameter values to be used in your query.
public class MyCustomSqlParameterSourceProvider implements SqlParameterSourceProvider {
private Map<String, Object> params;
public void updateParams(Map<String, Object> params) {
this.params = params;
}
@Override
public SqlParameterSource getSqlParameterSource() {
final MapSqlParameterSource paramSource = new MapSqlParameterSource();
paramSource.addValues(params);
return paramSource;
}
}
Finally, update the Spring configuration:
<bean id="reader" class="org.wisecoding.stackoverflow.NamedParameterJdbcCursorItemReader">
<constructor-arg ref="sqlParameterSourceProvider"/>
<property name="dataSource" ref="..." />
<property name="sql" value=SELECT * FROM test WHERE col1 = :param" />
<property name="rowMapper" ref="..." />
<property name="preparedStatementSetter" ref="..." />
</bean>
<bean id="sqlParameterSourceProvider" class="org.wisecoding.stackoverflow.MyCustomSqlParameterSourceProvider">
</bean>
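The parameter values come from the provider, so it has to be populated before the reader is used; a minimal usage sketch, assuming the provider bean above is injected as sqlParameterSourceProvider:
@Autowired
private MyCustomSqlParameterSourceProvider sqlParameterSourceProvider;

public void prepareParameters() {
    Map<String, Object> params = new HashMap<>();
    params.put("param", "value1"); // matches the :param placeholder in the SQL above
    sqlParameterSourceProvider.updateParams(params);
}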

Currently, there is no way to do this. The JdbcCursorItemReader uses raw JDBC (PreparedStatement) instead of the Spring JdbcTemplate under the hood (since there is no way to get the underlying ResultSet when using JdbcTemplate). If you'd like to contribute this as a new feature, or request it as a new feature, feel free to do so at jira.spring.io.

The original solution is in https://jira.spring.io/browse/BATCH-2521, but it does not support an IN clause such as id IN (:ids).
Here is an enhancement:
import lombok.Setter;
import lombok.extern.slf4j.Slf4j;
import lombok.val;
import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.jdbc.core.PreparedStatementCreatorFactory;
import org.springframework.jdbc.core.namedparam.MapSqlParameterSource;
import org.springframework.jdbc.core.namedparam.NamedParameterUtils;
import java.util.Map;
@Slf4j
public class NamedParameterJdbcCursorItemReader<T> extends JdbcCursorItemReader<T> {
protected void setNamedParametersSql(String sql, Map<String, Object> parameters) {
val parsedSql = NamedParameterUtils.parseSqlStatement(sql);
val paramSource = new MapSqlParameterSource(parameters);
val sqlToUse = NamedParameterUtils.substituteNamedParameters(parsedSql, paramSource);
val declaredParams = NamedParameterUtils.buildSqlParameterList(parsedSql, paramSource);
val params = NamedParameterUtils.buildValueArray(parsedSql, paramSource, null);
val pscf = new PreparedStatementCreatorFactory(sql, declaredParams);
val pss = pscf.newPreparedStatementSetter(params);
log.info("sql: {}", sqlToUse);
log.info("parameters: {}", parameters);
setSql(sqlToUse);
setPreparedStatementSetter(pss);
}
}
Usage:
@Slf4j
public class UserItemJdbcReader extends NamedParameterJdbcCursorItemReader<UserEntity> {
@PostConstruct
public void init() {
val sql = "SELECT * FROM users WHERE id IN (:ids)";
val parameters = new HashMap<String, Object>(4);
parameters.put("ids", Arrays.asList(1,2,3));
setDataSource(dataSource);
setRowMapper(new UserRowMapper());
setNamedParametersSql(sql, parameters);
}
}

In my case I reuse ArgumentPreparedStatementSetter from spring-jdbc:
private static final String SQL = "SELECT * FROM payments.transactions WHERE time_stamp >= ? AND time_stamp <= ?";
...
Object[] args = new Object[2];
args[0] = new Date(Instant.now().minus(7, ChronoUnit.DAYS).toEpochMilli());
args[1] = new Date();
ArgumentPreparedStatementSetter argumentPreparedStatementSetter =
new ArgumentPreparedStatementSetter(args);
return new JdbcCursorItemReaderBuilder<>()
.name("dbReader")
.sql(SQL)
.preparedStatementSetter(argumentPreparedStatementSetter)
...
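The trailing builder calls are elided above; a completed chain might look like the following, where the dataSource, rowMapper and Transaction type are assumptions added for illustration:
return new JdbcCursorItemReaderBuilder<Transaction>()
        .name("dbReader")
        .dataSource(dataSource)
        .sql(SQL)
        // the two ? placeholders are bound in order from the args array
        .preparedStatementSetter(argumentPreparedStatementSetter)
        .rowMapper(new BeanPropertyRowMapper<>(Transaction.class))
        .build();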

Related

Ignite client is taking a long time to start when we are connecting to multiple nodes

Scenario: I have two server nodes to begin with, and when we try to connect client nodes it takes 15+ minutes to start the client. Please find the server configuration below; the only change for the other server node is the IP address. On the console I am getting the error below. Thanks in advance.
[12:42:10] Possible failure suppressed accordingly to a configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext [type=SYSTEM_WORKER_BLOCKED, err=class o.a.i.IgniteException: GridWorker [name=tcp-comm-worker, igniteInstanceName=null, finished=false, heartbeatTs=1600672317715]]]
[12:42:40,486][SEVERE][tcp-disco-msg-worker-[5023dc59 172.16.0.189:48510]-#2][G] Blocked system-critical thread has been detected. This can lead to cluster-wide undefined behaviour [workerName=tcp-comm-worker, threadName=tcp-comm-worker-#1, blockedFor=18s]
[12:42:40] Possible failure suppressed accordingly to a configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext [type=SYSTEM_WORKER_BLOCKED, err=class o.a.i.IgniteException: GridWorker [name=tcp-comm-worker, igniteInstanceName=null, finished=false, heartbeatTs=1600672341604]]]
[12:42:49,498][SEVERE][tcp-disco-msg-worker-[5023dc59 172.16.0.189:48510]-#2][G] Blocked system-critical thread has been detected. This can lead to cluster-wide undefined behaviour [workerName=tcp-comm-worker, threadName=tcp-comm-worker-#1, blockedFor=27s]
[12:42:49] Possible failure suppressed accordingly to a configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext [type=SYSTEM_WORKER_BLOCKED, err=class o.a.i.IgniteException: GridWorker [name=tcp-comm-worker, igniteInstanceName=null, finished=false, heartbeatTs=1600672341604]]]
[12:43:01,603][SEVERE][tcp-disco-msg-worker-[5023dc59 172.16.0.189:48510]-#2][G] Blocked system-critical thread has been detected. This can lead to cluster-wide undefined behaviour [workerName=tcp-comm-worker, threadName=tcp-comm-worker-#1, blockedFor=39s]
Server configuration (partial):
<!-- <property name="consistentId" value="#{ systemEnvironment['IGNITE_CONSISTENT_ID'] }" /> -->
<!-- Enable task execution events for examples. -->
<property name="dataStorageConfiguration">
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
<property name="defaultDataRegionConfiguration">
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="persistenceEnabled" value="true" />
<property name="maxSize" value="#{4L * 1024 * 1024 * 1024}"/>
<property name="initialSize" value="#{1L * 1024 * 1024 * 1024}"/>
</bean>
</property>
</bean>
</property>
<!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="localPort" value="48510"/>
<property name="ipFinder">
<!--
Ignite provides several options for automatic discovery that can be used
instead os static IP based discovery. For information on all options refer
to our documentation: http://apacheignite.readme.io/docs/cluster-config
-->
<!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<!-- <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"> -->
<property name="addresses">
<list>
<!-- In distributed environment, replace with actual host IP address. -->
<value>127.0.0.1:48510..48512</value>
<value>X.16.0.X:48510..48512</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
<property name="communicationSpi">
<bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
<property name="localPort" value="48110"/>
<!-- <property name="localPortRange" value="1000"/> -->
</bean>
</property>
<property name="clientConnectorConfiguration">
<bean class="org.apache.ignite.configuration.ClientConnectorConfiguration">
<property name="port" value="10801"/>
</bean>
</property>
<property name="userAttributes">
<map>
<entry key="ROLE" value="SecindNode" />
</map>
</property>
</bean>
Client code:
public final class IgniteConnectionUtil {
private static final Logger logger = Logger.getLogger(IgniteConnectionUtil.class);
private static IgniteConnectionUtil instance;
private static Ignite ignite;
private static String CACHE_NAME = "CollectionCache";
private static String jdbcThinHost = null;
private IgniteConnectionUtil() {
if(ignite == null)
init();
try {
boolean clearRedisMap = ConfigurationManager.getInstance().getPropertyAsBoolean("CLEAR_REDIS_MAP",
"IN_MEMORY_DB", "CONFIG");
if (clearRedisMap)
InMemoryTableStore.getInstance().clearStore();
} catch (Exception e) {
logger.info("Unable to clear ignite-redis map");
}
}
public static synchronized void init() {
try {
if(!isIgniteEnabled() || ignite != null)
return;
logger.info("Ignite Client starting");
Ignition.setClientMode(true);
DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.setWalMode(WALMode.BACKGROUND);
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDataStorageConfiguration(storageCfg);
cfg.setPeerClassLoadingEnabled(true);
TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
String serverIp = ConfigurationManager.getInstance()
.getPropertyAsString("SERVER_ADDRESS", "IN_MEMORY_DB", "CONFIG");
//ipFinder.setAddresses(Arrays.asList(serverIp));
ipFinder.setAddresses(
Arrays.asList("127.0.0.1:48510","127.0.0.1:48511","127.0.0.1:48512",
"X.16.0.189:48510","X.16.0.X:48511","X.16.0.X:48512"
));
discoverySpi.setLocalPort(48510);
// timeout for which client node will try to connect to ignite servers
// it will throw exception and exit if server can not be found
long discoveryTimeout = ConfigurationManager.getInstance()
.getPropertyAsLong("DISCOVERY_TIMEOUT", "IN_MEMORY_DB", "CONFIG");
discoverySpi.setIpFinder(ipFinder).setJoinTimeout(discoveryTimeout);
TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
long communicationTimeout = ConfigurationManager.getInstance()
.getPropertyAsLong("COMMUNICATION_TIMEOUT", "IN_MEMORY_DB", "CONFIG");
commSpi.setConnectTimeout(communicationTimeout).setLocalPort(48110);
// this timeout is used to reconnect client to server if server has failed/restarted
long clientFailureDetectionTimeout = ConfigurationManager.getInstance()
.getPropertyAsLong("CLIENT_FAILURE_DETECTION_TIMEOUT", "IN_MEMORY_DB", "CONFIG");
cfg.setClientFailureDetectionTimeout(30000);
cfg.setDiscoverySpi(discoverySpi);
cfg.setCommunicationSpi(commSpi);
//cfg.setIncludeEventTypes(EventType.EVT_NODE_JOINED);
ignite = Ignition.start(cfg);
ignite.cluster().active(true);
ignite.cluster().baselineAutoAdjustEnabled(true);
ignite.cluster().baselineAutoAdjustTimeout(30000);
initializeJDBCThinDriver();
//igniteEventListen();
logger.info("Ignite Client started");
} catch (Exception e) {
logger.error("Error in starting ignite cluster", e);
}
}
public static synchronized IgniteConnectionUtil getInstance() {
if (instance == null) {
instance = new IgniteConnectionUtil();
} else {
try {
if(ignite == null || ignite.cluster() == null) {
logger.error("Illegal Ignite state. Will try to restart ignite clinet.");
init();
} else if(Ignition.state().equals(IgniteState.STOPPED_ON_SEGMENTATION)) {
logger.error("Reconnecting to Ignite");
ignite = null;
init();
}else if(!ignite.cluster().active())
ignite.cluster().active(true);
} catch(Exception e) {
logger.error("Ignite Exception. Please restart ignite server.");
}
}
return instance;
}
public static void initializeJDBCThinDriver() {
try {
Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
jdbcThinHost = ConfigurationManager.getInstance()
.getPropertyAsString("JDBC_THIN_HOST", "IN_MEMORY_DB", "CONFIG");
} catch (ClassNotFoundException e) {
logger.error("Error in loading IgniteJdbcThinDriver class", e);
}
}
public Connection getJDBCConnection() {
Connection conn = null;
try {
conn = DriverManager.getConnection("jdbc:ignite:thin://"+jdbcThinHost+"/");
if(conn == null )
{
conn = DriverManager.getConnection("jdbc:ignite:thin://172.16.0.189:10801/");
}
} catch (SQLException e) {
logger.error("Error in getting Ignite JDBC connection", e);
}
return conn;
}
public IgniteCache<?, ?> getOrCreateCache(String cacheName) {
CacheConfiguration<?, ?> cacheConfig = new CacheConfiguration<>(CACHE_NAME);
//cacheConfig.setDataRegionName("500MB_Region");
cacheConfig.setCacheMode(CacheMode.PARTITIONED);
cacheConfig.setBackups(1);
cacheConfig.setRebalanceMode(CacheRebalanceMode.ASYNC);
cacheConfig.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cacheConfig.setWriteSynchronizationMode(CacheWriteSynchronizationMode.PRIMARY_SYNC);
cacheConfig.setReadFromBackup(true);
cacheConfig.setCopyOnRead(true);
cacheConfig.setOnheapCacheEnabled(true);
cacheConfig.setSqlSchema("PUBLIC");
if(ignite != null) {
return ignite.getOrCreateCache(cacheConfig);
}else {
throw new IgniteSQLException("Internal Server Error Please contact support");
}
}
public IgniteCache<?, ?> getOrCreateCache() {
CacheConfiguration<?, ?> cacheConfig = new CacheConfiguration<>(CACHE_NAME);
//cacheConfig.setDataRegionName("500MB_Region");
cacheConfig.setCacheMode(CacheMode.PARTITIONED);
cacheConfig.setBackups(1);
cacheConfig.setRebalanceMode(CacheRebalanceMode.ASYNC);
cacheConfig.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cacheConfig.setWriteSynchronizationMode(CacheWriteSynchronizationMode.PRIMARY_SYNC);
cacheConfig.setReadFromBackup(true);
cacheConfig.setCopyOnRead(true);
cacheConfig.setOnheapCacheEnabled(true);
cacheConfig.setSqlSchema("PUBLIC");
if(ignite != null) {
return ignite.getOrCreateCache(cacheConfig);
}else {
throw new IgniteSQLException("Internal Server Error Please contact support");
}
}
public static synchronized void shutdown() throws Exception {
try {
if(ignite != null) {
ignite.close();
}
} catch(IgniteException ie) {
throw new Exception(ie);
} finally {
ignite = null;
}
}
public static boolean isIgniteEnabled() throws Exception {
return ConfigurationManager.getInstance().getPropertyAsBoolean("ENABLED",
"IN_MEMORY_DB");
}
}
Blocked system-critical thread has been detected. This can lead to cluster-wide undefined behaviour [workerName=tcp-comm-worker, threadName=tcp-comm-worker-#1, blockedFor=18s]
This likely means that the server node cannot connect to the client's communication port (47100 by default), or vice versa. In 2.8.1 or earlier, the connection needs to be traversable in both directions. In 2.9, a new operation mode will be introduced where the server never tries to connect to the client, only the other way around.
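A quick way to check that the communication port is reachable in both directions is a plain socket test run from each host against the other (a throwaway sketch; the host name and port are placeholders for your environment):
// run from the server host against the client host, and vice versa;
// 48110 is the TcpCommunicationSpi localPort used in the config above
try (Socket socket = new Socket()) {
    socket.connect(new InetSocketAddress("other-node-host", 48110), 3000);
    System.out.println("communication port reachable");
} catch (IOException e) {
    System.out.println("communication port NOT reachable: " + e);
}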

Is it possible to support sync and async Application Events in Spring 5?

<bean id="applicationEventMulticaster"
class="com.test.listener.CustomApplicationEventMulticaster">
<property name="taskExecutor" >
<bean class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
<property name="maxPoolSize" value="10"/>
<property name="corePoolSize" value="10"/>
<property name="waitForTasksToCompleteOnShutdown" value="true"/>
<property name="awaitTerminationSeconds" value="200"/>
</bean>
</property>
</bean>
public class CustomApplicationEventMulticaster extends SimpleApplicationEventMulticaster {
@Override
public void multicastEvent(final ApplicationEvent event, ResolvableType eventType) {
boolean async = (event instanceof AbstractApplicationEvent) ? ((AbstractApplicationEvent) event).isAsyncEvent()
: true;
final SecurityContext sc = SecurityContextHolder.getContext();
ResolvableType defaultEventType = ResolvableType.forInstance(event);
for (final ApplicationListener listener : getApplicationListeners(event, defaultEventType)) {
Executor executor = getTaskExecutor();
if (async && executor != null) {
executor.execute(() -> {
try {
SecurityContextHolder.setContext(sc);
listener.onApplicationEvent(event);
} finally {
SecurityContextHolder.clearContext();
}
});
} else {
listener.onApplicationEvent(event);
}
}
}
}
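For reference, a minimal sketch of what the AbstractApplicationEvent base class referenced above might look like (the original class is not shown, so this is an assumption):
public abstract class AbstractApplicationEvent extends ApplicationEvent {

    private final boolean asyncEvent;

    protected AbstractApplicationEvent(Object source, boolean asyncEvent) {
        super(source);
        this.asyncEvent = asyncEvent;
    }

    // the custom multicaster checks this flag to decide between sync and async dispatch
    public boolean isAsyncEvent() {
        return asyncEvent;
    }
}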
In the application, I am trying to trigger both sync and async events.
Is it fine to do this?
executor.execute(() -> {
try {
SecurityContext emptyContext = SecurityContextHolder.createEmptyContext();
emptyContext.setAuthentication(sc.getAuthentication());
listener.onApplicationEvent(event);
} finally {
SecurityContextHolder.clearContext();
}
});

MultiResourceItemReader leaving last file read locked, therefore can't move when done processing

Any ideas why the MultiResourceItemReader is leaving the last file locked, so that I cannot move it with the move tasklet? The move completes for all the other files, but not for the last one read.
The IOException is:
java.nio.file.FileSystemException: C:\Users\UGDW\MyProjects\ngsa2\oab-outside-assets-batchlauncher\input\EQ_AcctData_4321_03292020.csv -> C:\Users\UGDW\MyProjects\ngsa2\oab-outside-assets-batchlauncher\output\EQ_AcctData_4321_03292020.csv_processed: The process cannot access the file because it is being used by another process.
Batch config (stripped down):
<batch:job id="stockPlanAccountDataJob">
<batch:step id="getFilesInInputDirectory" next="fileProcessing">
<tasklet ref="getFilesInInputDirectoryTasklet"/>
</batch:step>
<batch:step id="fileProcessing" next="moveFilesToOuputDirectory">
<tasklet>
<chunk reader="stockPlanAccountDataFileReader" processor="stockPlanAccountDataProcessor" writer="stockPlanConsoleItemWriter"
commit-interval="20" skip-limit="20">
<batch:skippable-exception-classes>
<batch:include class="java.lang.Exception"/>
<batch:exclude class="org.springframework.batch.item.file.FlatFileParseException"/>
</batch:skippable-exception-classes>
</chunk>
</tasklet>
</batch:step>
<batch:step id="moveFilesToOuputDirectory">
<tasklet ref="stockPlanMoveFilesTasklet"/>
</batch:step>
</batch:job>
<bean id="getFilesInInputDirectoryTasklet" class="simplepeekandmulti.GetFilesInInputDirectoryTasklet" scope="step"/>
<bean id="stockPlanAccountDataFileReader" class="simplepeekandmulti.StockPlanAccountDataFileReader" scope="step">
<property name="delegate" ref="preprocessorUsingPeekable"/>
</bean>
<bean id="preprocessorUsingPeekable" class="org.springframework.batch.item.support.SingleItemPeekableItemReader" scope="step">
<property name="delegate" ref="multiFileResourceReader"/>
</bean>
<bean name="multiFileResourceReader" class="org.springframework.batch.item.file.MultiResourceItemReader" scope="step">
<property name="resources" value="file:#{jobExecutionContext[filepattern]}" />
<property name="delegate" ref="genericFlatFileReader" />
<property name="strict" value="true" />
</bean>
<bean id="genericFlatFileReader" class="org.springframework.batch.item.file.FlatFileItemReader" scope="step">
<property name="lineMapper" ref="genericFileLineMapper"/>
</bean>
<bean name="genericFileLineMapper" class="org.springframework.batch.item.file.mapping.PassThroughLineMapper" scope="step" />
<bean id="stockPlanAccountDataProcessor" class="simplepeekandmulti.StockPlanAccountDataProcessor" scope="step"/>
<bean id="stockPlanMoveFilesTasklet" class="simplepeekandmulti.StockPlanMoveFilesTasklet" scope="step"/>
Reader (with dumb logic):
package simplepeekandmulti;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.PeekableItemReader;
import simplepeekandmulti.StockPlanAccountData;
import simplepeekandmulti.StockPlanFileInputAccountData;
public class StockPlanAccountDataFileReader implements ItemReader<StockPlanFileInputAccountData> {
private PeekableItemReader<String> delegate;
private AtomicLong itemsRead = new AtomicLong(0L);
private static final String PIPE = "|";
private static final String PIPE_SPLIT = "\\|";
private static final int NUM_RECORDS_PER_LINE = 6;
public PeekableItemReader<String> getDelegate() {
return delegate;
}
public void setDelegate(PeekableItemReader<String> delegate) {
this.delegate = delegate;
}
@Override
public StockPlanFileInputAccountData read() throws Exception {
String currentLine = delegate.read();
StockPlanFileInputAccountData inputData = new StockPlanFileInputAccountData();
int recs = 0;
List<String> errorList = new ArrayList<>();
while (currentLine != null) {
if (currentLine.contains(PIPE)) {
recs++;
setDetailLine(currentLine, inputData, recs, errorList);
} else {
errorList.add(currentLine);
}
if ((errorList.size() % 2) == 0) {
return inputData;
}
itemsRead.incrementAndGet();
currentLine = delegate.read();
}
return null;
}
private void setDetailLine(String inputLine, StockPlanFileInputAccountData inputData,
int numRecs, List<String> errorList) {
String[] entry = inputLine.split(PIPE_SPLIT);
if (entry.length == NUM_RECORDS_PER_LINE) {
inputData.setDataRecordsPerFile(numRecs);
StockPlanAccountData data = new StockPlanAccountData();
data.setExternalClientId(entry[0]);
data.setSSN(entry[1]);
data.setExternalParticipantId(entry[2]);
data.setFirstName(entry[3]);
data.setLastName(entry[4]);
data.setDateOfBitrth(entry[5]);
inputData.addToDataList(data);
} else {
errorList.add("Detail Line Is Invalid, Does NOT have 6 columns, 5 pipes: " + inputLine);
}
}
}
Processor:
package simplepeekandmulti;
import java.util.ArrayList;
import java.util.List;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.annotation.BeforeStep;
import org.springframework.batch.item.ItemProcessor;
import com.vanguard.inst.batch.oab.springboot.data.StockPlanFileInputAccountData;
public class StockPlanAccountDataProcessor implements ItemProcessor<StockPlanFileInputAccountData, StockPlanFileInputAccountData> {
private StepExecution stepExecution;
@BeforeStep
public void beforeStep(StepExecution stepExecution) {
this.stepExecution = stepExecution;
}
public StockPlanFileInputAccountData process(StockPlanFileInputAccountData item) throws Exception {
List<String> errorList = new ArrayList<>(0);
if (errorList.isEmpty()) {
return item;
} else {
//exchangeEmailService.sendEmail(fileName, errorList);
return null;
}
}
}
Writer:
package simplepeekandmulti;
import java.util.List;
import org.springframework.batch.item.ItemWriter;
import org.springframework.stereotype.Component;
import com.vanguard.inst.batch.oab.springboot.data.StockPlanFileInputAccountData;
@Component
public class StockConsoleOutputItemWriter implements ItemWriter<StockPlanFileInputAccountData> {
@Override
public void write(List<? extends StockPlanFileInputAccountData> arg0) throws Exception {
// TODO Auto-generated method stub
}
}
Move files tasklet (with the file names hardcoded); the last file in the loop always fails to move:
package simplepeekandmulti;
import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.ArrayList;
import java.util.List;
import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
@Component
public class StockPlanMoveFilesTasklet implements Tasklet {
private static final String CLASS_NAME = StockPlanMoveFilesTasklet.class.getSimpleName();
#Value("$simplepeekandmulti-{INPUT_DIR}")
private String inputDir;
#Value("$simplepeekandmulti-{OUTPUT_DIR}")
private String outputDir;
private static final String PROCESSED = "_processed";
public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) {
String[] fileList = {"EQ_AcctData_3210_03302020.csv", "EQ_AcctData_4321_03302020.csv"};
try {
for (String fileName : fileList) {
Path pathFrom = FileSystems.getDefault().getPath(inputDir, fileName);
Path pathTo = FileSystems.getDefault().getPath(outputDir, fileName + PROCESSED);
Files.move(pathFrom, pathTo, StandardCopyOption.REPLACE_EXISTING);
}
} catch (IOException io) {
System.out.println(io.toString());
}
return RepeatStatus.FINISHED;
}
}
The CSV files simply have a header date, pipe-delimited records, and a footer with the total record count:
03/30/2020
3210|59658625|12000|AADFBCJGH|LLOQMNURS|1962-03-08
3210|10124602|12001|AADFBCJGH|LLOQMNURS|1962-03-08
2
03/30/2020
4321|5690154|13000|AADFBCJGH|LLOQMNURS|1988-10-23
4321|745701|13001|AADFBCJGH|LLOQMNURS|1988-10-23
2

Spring Batch to read key=value records and load them into a DB

I am new to Spring Batch and have a feed file with key/value pairs in .txt format. I need to load the file into a MySQL DB using Spring Batch. Is there a way to read a text file with key/value records? Records are separated by an empty line and the delimiter is '='.
Sample File:
Name=Jack
Id=ADC12345
ClassId=7018
Rank=-326
Name=Gile
Id=FED12345
ClassId=7018
Rank=-32
Name, ID, ClassId and Rank are the column values.
Here's a working solution (you just need a blank line after the last record or it won't be read):
1) Declare your business object:
public class Student {
private String name;
private String id;
private Integer classId;
private Integer rank;
// Getter + Setters
}
2) Declare a custom ItemStreamReader that delegates to the actual FlatFileItemReader:
public class CustomMultiLineItemReader implements ItemStreamReader<Student> {
private FlatFileItemReader<FieldSet> delegate;
@Override
public void open(ExecutionContext executionContext) throws ItemStreamException {
delegate.open(executionContext);
}
@Override
public void update(ExecutionContext executionContext) throws ItemStreamException {
delegate.update(executionContext);
}
@Override
public void close() throws ItemStreamException {
delegate.close();
}
// Getter + Setters
}
3) Override its read method to manually map your multiline records:
public Student read() throws Exception {
Student s = null;
for (FieldSet line = null; (line = this.delegate.read()) != null;) {
if (line.getFieldCount() == 0) {
return s; // Record must end with footer
} else {
String prefix = line.readString(0);
if (prefix.equals("Name")) {
s = new Student(); // Record must start with header
s.setName(line.readString(1));
}
else if (prefix.equals("Id")) {
s.setId(line.readString(1));
}
else if (prefix.equals("ClassId")) {
s.setClassId(line.readInt(1));
}
else if (prefix.equals("Rank")) {
s.setRank(line.readInt(1));
}
}
}
return null;
}
4) Declare the reader in the step and configure it:
<bean class="xx.xx.xx.CustomMultiLineItemReader">
<property name="delegate">
<bean class="org.springframework.batch.item.file.FlatFileItemReader">
<property name="resource" value="file:${YOUR_FILE}"></property>
<property name="linesToSkip" value="0"></property>
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.PatternMatchingCompositeLineMapper">
<property name="tokenizers">
<map>
<entry key="*">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
<property name="delimiter" value="="></property>
</bean>
</entry>
</map>
</property>
<property name="fieldSetMappers">
<map>
<entry key="*">
<bean class="org.springframework.batch.item.file.mapping.PassThroughFieldSetMapper" />
</entry>
</map>
</property>
</bean>
</property>
</bean>
</property>
</bean>
I used a PatternMatchingCompositeLineMapper to associate line content (here: *) with the corresponding lineTokenizer and lineMapper (even though it's useless in this case).
Then, the PassThroughFieldSetMapper lets the reader do the mapping, and the DelimitedLineTokenizer splits the line on the "=" character.
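To see what the tokenizer produces in isolation, here is a tiny sketch of the "=" split (illustration only, not part of the job configuration):
// DelimitedLineTokenizer splits each line on "=", yielding a two-field FieldSet
DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer("=");
FieldSet fieldSet = tokenizer.tokenize("Name=Jack");
String prefix = fieldSet.readString(0); // "Name"
String value = fieldSet.readString(1);  // "Jack"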
There are two challenges with this input format:
- detecting the start/end of a complete item
- splitting the item into key/value pairs
One solution is to use a custom RecordSeparatorPolicy and a custom LineMapper, like:
import java.util.HashMap;
import java.util.Map;
import org.junit.Test;
import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.item.file.FlatFileItemReader;
import org.springframework.batch.item.file.mapping.DefaultLineMapper;
import org.springframework.batch.item.file.mapping.FieldSetMapper;
import org.springframework.batch.item.file.separator.RecordSeparatorPolicy;
import org.springframework.batch.item.file.transform.DelimitedLineTokenizer;
import org.springframework.batch.item.file.transform.FieldSet;
import org.springframework.core.io.ClassPathResource;
import org.springframework.validation.BindException;
public class ReaderKeyValueTest {
@Test
public void test() throws Exception {
FlatFileItemReader<Map<String, String>> reader = new FlatFileItemReader<Map<String, String>>();
reader.setResource(new ClassPathResource("keyvalue.txt"));
// custom RecordSeparatorPolicy
reader.setRecordSeparatorPolicy(new RecordSeparatorPolicy() {
@Override
public String preProcess(final String record) {
// empty line is added to the previous 'item'
if (record.isEmpty()) {
return record;
} else {
// line with content means it is part of an 'item', lets enhance it with adding a separator
return record + ",";
}
}
@Override
public String postProcess(final String record) {
return record;
}
@Override
public boolean isEndOfRecord(final String record) {
// the end of a record is marked with the last key/value pair for "Rank"
if (record.contains("Rank=")) {
return true;
} else {
return false;
}
}
});
DefaultLineMapper<Map<String, String>> lineMapper = new DefaultLineMapper<Map<String, String>>();
// the key/value pairs are separated with ',', so we can use the standard DelimitedLineTokenizer here
lineMapper.setLineTokenizer(new DelimitedLineTokenizer());
lineMapper.setFieldSetMapper(new FieldSetMapper<Map<String, String>>() {
@Override
public Map<String, String> mapFieldSet(final FieldSet fieldSet) throws BindException {
Map<String, String> item = new HashMap<String, String>();
// split each "Key=Value" and add to the Map
for (int i = 0; i < fieldSet.getValues().length; i++) {
String[] entry = fieldSet.getValues()[i].split("=");
item.put(entry[0], entry[1]);
}
return item;
}
});
reader.setLineMapper(lineMapper);
reader.open(new ExecutionContext());
Map<String, String> item;
while ((item = reader.read()) != null) {
System.out.println(item.toString());
}
reader.read();
reader.close();
}
}
The sysout produces:
{ClassId=7018, Id=ADC12345, Name=Jack, Rank=-326}
{ClassId=7018, Id=FED12345, Name=Gile, Rank=-32}

In HornetQ, is the consumer automatically invoked?

I looked over all of the examples in HornetQ, but I couldn't find one where the consumer is automatically invoked whenever a message comes through from the producer.
Please let me know of any example code or hints.
Thanks in advance.
Use DefaultMessageListenerContainer. You can register a listener to it and consume messages asynchronously. Follow this link for more information about tuning MessageListenerContainer: http://bsnyderblog.blogspot.se/2010/05/tuning-jms-message-consumption-in.html.
HornetQ dependencies you need (I used a standalone hornetq-2.3.0.CR2; you also need some Spring jars):
<dependencies>
<!-- hornetq -->
<dependency>
<groupId>org.jboss.netty</groupId>
<artifactId>netty</artifactId>
<version>3.2.7.Final</version>
</dependency>
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-jms-client</artifactId>
<version>2.3.0.CR2</version>
</dependency>
<dependency>
<groupId>org.hornetq</groupId>
<artifactId>hornetq-core-client</artifactId>
<version>2.3.0.CR2</version>
</dependency>
<!-- hornetq -->
</dependencies>
The beans you should use in your applicationContext.xml (I didn't use JNDI for getting the ConnectionFactory and destinations; for that, you can follow this question):
<!-- It's ConnectionFactory to connect to hornetq. 5445 is hornetq acceptor port -->
<bean name="connectionFactory" class="messaging.jms.CustomHornetQJMSConnectionFactory">
<constructor-arg index="0" name="ha" value="false" />
<constructor-arg index="1" name="commaSepratedServerUrls" value="127.0.0.1:5445" />
</bean>
<bean id="destinationParent" class="messaging.jms.JmsDestinationFactoryBean" abstract="true">
<property name="pubSubDomain" value="false" /> <!-- default is queue -->
</bean>
<bean id="exampleDestination" parent="destinationParent">
<property name="destinationName" value="example" /> <!-- queue name -->
</bean>
<!-- MessageListener -->
<bean id="messageHandler" class="messaging.consumer.MessageHandler">
</bean>
<!-- MessageListenerContainer -->
<bean id="paymentListenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="destination" ref="exampleDestination" />
<property name="messageListener" ref="messageHandler" />
<property name="connectionFactory" ref="connectionFactory" />
<property name="sessionTransacted" value="true" />
<property name="concurrentConsumers" value="1" />
<property name="maxConcurrentConsumers" value="10" />
<property name="idleConsumerLimit" value="2" />
<property name="idleTaskExecutionLimit" value="5" />
<property name="receiveTimeout" value="3000" />
</bean>
CustomHornetQJMSConnectionFactory:
public class CustomHornetQJMSConnectionFactory extends org.hornetq.jms.client.HornetQJMSConnectionFactory
{
private static final long serialVersionUID = 1L;
public CustomHornetQJMSConnectionFactory(boolean ha, String commaSepratedServerUrls)
{
super(ha, converToTransportConfigurations(commaSepratedServerUrls));
}
public static TransportConfiguration[] converToTransportConfigurations(String commaSepratedServerUrls)
{
String [] serverUrls = commaSepratedServerUrls.split(",");
TransportConfiguration[] transportconfigurations = new TransportConfiguration[serverUrls.length];
for(int i = 0; i < serverUrls.length; i++)
{
String[] urlParts = serverUrls[i].split(":");
HashMap<String, Object> map = new HashMap<String,Object>();
map.put(TransportConstants.HOST_PROP_NAME, urlParts[0]);
map.put(TransportConstants.PORT_PROP_NAME, urlParts[1]);
transportconfigurations[i] = new TransportConfiguration(NettyConnectorFactory.class.getName(), map);
}
return transportconfigurations;
}
}
JmsDestinationFactoryBean (Used in destinationParent bean):
public class JmsDestinationFactoryBean implements FactoryBean<Destination>
{
private String destinationName;
private boolean pubSubDomain = false;
public void setDestinationName(String destinationName) {
this.destinationName = destinationName;
}
public void setPubSubDomain(boolean pubSubDomain) {
this.pubSubDomain = pubSubDomain;
}
@Override
public Class<?> getObjectType()
{
return Destination.class;
}
@Override
public boolean isSingleton()
{
return true;
}
@Override
public Destination getObject() throws Exception
{
if(pubSubDomain)
{
return HornetQJMSClient.createTopic(destinationName);
}
else
{
return HornetQJMSClient.createQueue(destinationName);
}
}
}
MessageHandler (received messages go to the onMessage method for processing; for simplicity, you can implement javax.jms.MessageListener instead of SessionAwareMessageListener):
public class MessageHandler implements org.springframework.jms.listener.SessionAwareMessageListener<Message>
{
@Override
public void onMessage(Message msg, Session session) throws JMSException
{
if(msg instanceof TextMessage)
{
System.out.println(((TextMessage)msg).getText());
session.commit();
}
else
{
session.rollback(); // send message back to the queue
}
}
}
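To see the listener fire, you can push a test message with Spring's JmsTemplate, reusing the connectionFactory and exampleDestination beans from above (a minimal sketch, not part of the original setup):
// send a TextMessage to the "example" queue; the DefaultMessageListenerContainer
// will deliver it to MessageHandler.onMessage asynchronously
JmsTemplate jmsTemplate = new JmsTemplate(connectionFactory);
jmsTemplate.convertAndSend(exampleDestination, "hello from the producer");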