How to use @BeforeStep job parameters in JdbcCursorItemReader for a named query - spring-batch

I have code like the following:
@Bean
public JdbcCursorItemReader<Map<String, Object>> itemReader() {
    return new JdbcCursorItemReader<Map<String, Object>>() {
        private JobParameters jobParameter;
        String sql = "select EMPLOYEE_ID as empId, EMPLOYEE_NAME as empName, EMPLOYEE_AGE as age from EMPLOYEE where EMPLOYEE_DEPT = :empDept and EMPLOYEE_SAL > :empSal";
        Map<String, Object> namedParameters = null;

        @PostConstruct
        public void initialize() throws Exception {
            setDataSource(dataSource);
            setSql("select 1 from dual");
            setRowMapper(new ColumnMapRowMapper());
        }

        @BeforeStep
        public void retrieveExecutionContext(StepExecution stepExecution) {
            jobParameter = stepExecution.getJobParameters();
            namedParameters = new HashMap<String, Object>() {
                {
                    put("bstd", jobParameter.getString("empDept"));
                    put("bwtn", jobParameter.getString("empSal"));
                }
            };
            jobParameter.getParameters().forEach((k, v) -> System.out.println("key =" + k + ", Value:" + v));
        }

        @Override
        public void afterPropertiesSet() throws Exception {
            setSql(NamedParameterUtils.substituteNamedParameters(sql, new MapSqlParameterSource(namedParameters)));
            setPreparedStatementSetter(new ListPreparedStatementSetter(
                    Arrays.asList(NamedParameterUtils.buildValueArray(sql, namedParameters))));
            setRowMapper(new ColumnMapRowMapper());
            setDataSource(dataSource);
            super.afterPropertiesSet();
        }
    };
}
I tried calling afterPropertiesSet, but I still see the exception below:
Caused by: org.springframework.dao.InvalidDataAccessApiUsageException: No value supplied for the SQL parameter 'empDept': No value registered for key 'empDept'
at org.springframework.jdbc.core.namedparam.NamedParameterUtils.buildValueArray(NamedParameterUtils.java:361) ~[spring-jdbc-5.3.22.jar:5.3.22]
at org.springframework.jdbc.core.namedparam.NamedParameterUtils.buildValueArray(NamedParameterUtils.java:485) ~[spring-jdbc-5.3.22.jar:5.3.22]
The requirement is a dynamic query, so I don't have control over the select query and the where conditions.
Thanks in advance,

You can use a SpEL expression to inject and use job parameters in your item reader bean definition as follows:
@Bean
@StepScope
public JdbcCursorItemReader<Map<String, Object>> itemReader(@Value("#{jobParameters['empDept']}") String empDept,
        @Value("#{jobParameters['empSal']}") String empSal) {
    JdbcCursorItemReader<Map<String, Object>> itemReader = new JdbcCursorItemReader<>();
    // use parameters 'empDept' and 'empSal' in your sql query as needed
    return itemReader;
}
Note that the item reader should be step-scoped for that to work. For more details, please refer to the documentation: Late Binding of Job and Step Attributes.
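For completeness, here is a minimal sketch of what such a step-scoped reader could look like, with the injected parameters bound through a PreparedStatementSetter. This assumes a dataSource field is available in the configuration class and switches the named placeholders from the question to positional ? placeholders:
@Bean
@StepScope
public JdbcCursorItemReader<Map<String, Object>> itemReader(@Value("#{jobParameters['empDept']}") String empDept,
        @Value("#{jobParameters['empSal']}") String empSal) {
    // positional '?' placeholders replace the named parameters from the question's query
    return new JdbcCursorItemReaderBuilder<Map<String, Object>>()
            .name("itemReader")
            .dataSource(dataSource) // assumed to be injected in the configuration class
            .sql("select EMPLOYEE_ID as empId, EMPLOYEE_NAME as empName, EMPLOYEE_AGE as age "
                    + "from EMPLOYEE where EMPLOYEE_DEPT = ? and EMPLOYEE_SAL > ?")
            .preparedStatementSetter(ps -> {
                ps.setString(1, empDept);
                ps.setString(2, empSal);
            })
            .rowMapper(new ColumnMapRowMapper())
            .build();
}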

Related

Use Chunk Listener for Indicator Pattern

I am trying to use the process-indicator pattern to make my job idempotent. I tried to use a write listener (afterWrite) to update the Mongo documents by setting a field PROCESSED: true. However, there are issues when there is a large number of chunks.
MongoDB item reader (10000 docs) ---chunk(1000)--> JDBC batch item writer (only 5000 rows are saved in the table after the step completes)
The following code configures the step:
@Bean
public MongoItemReader<X> Reader() throws Exception {
    MongoItemReader<X> reader = new MongoItemReader<>();
    reader.setTemplate(mongoTemplate);
    reader.setCollection("MY_COLLECTION");
    reader.setTargetType(X.class);
    reader.setQuery("{PROCESSED: {$exists: false}}");
    reader.setSort(new HashMap<String, Sort.Direction>() {{
        put("_id", Sort.Direction.ASC);
    }});
    reader.afterPropertiesSet();
    return reader;
}
@Bean
public XItemProcessor x_item_processor() {
    return new XItemProcessor();
}
@Bean
public X_Item_Listener item_listener() {
    return new X_Item_Listener();
}
@Bean
public X_Step_Listener step_listener() {
    return new X_Step_Listener();
}
@Bean
public JdbcBatchItemWriter<Y> YWriter() {
    JdbcBatchItemWriter<Y> Y_Writer = new JdbcBatchItemWriter<>();
    Y_Writer.setDataSource(dataSource);
    Y_Writer.setAssertUpdates(true);
    Y_Writer.setItemSqlParameterSourceProvider(new BeanPropertyItemSqlParameterSourceProvider<>());
    Y_Writer.setSql("INSERT INTO Y (Y1,Y2,Y3,Y4) VALUES (:y1, :y2, :y3, :y4)");
    Y_Writer.afterPropertiesSet();
    return Y_Writer;
}
@Bean
public Step XY_Step() throws Exception {
    return stepBuilderFactory.get("XY")
            .<X, Y>chunk(1000)
            .reader(Reader())
            .processor(x_item_processor())
            .writer(YWriter())
            .faultTolerant()
            .skipLimit(Integer.MAX_VALUE)
            .skip(Exception.class)
            .listener((ItemProcessListener<? super X, ? super Y>) item_listener())
            .listener(step_listener())
            .build();
}
Here is a snippet of the code used in the afterWrite listener for updating the Mongo documents:
@Autowired
private MongoTemplate mongoTemplate;

@Transactional(propagation = Propagation.REQUIRES_NEW)
public void afterWrite(List<? extends Y> items) {
    BulkOperations ops = mongoTemplate.bulkOps(BulkOperations.BulkMode.UNORDERED, "MY_COLLECTION");
    for (Y item : items) {
        Update update = new Update().set("PROCESSED", true);
        ops.updateOne(new Query(Criteria.where("_id").is(item.getID())), update);
    }
    ops.execute();
}
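For reference, the listener class itself is shaped roughly like the following skeleton (method bodies elided; this assumes X_Item_Listener implements Spring Batch 4's ItemWriteListener<Y> so that afterWrite is invoked after each chunk is written):
public class X_Item_Listener implements ItemWriteListener<Y> {

    @Autowired
    private MongoTemplate mongoTemplate;

    @Override
    public void beforeWrite(List<? extends Y> items) {
        // nothing to do before the chunk is written
    }

    @Override
    public void afterWrite(List<? extends Y> items) {
        // bulk-set PROCESSED on the corresponding Mongo documents, as in the snippet above
    }

    @Override
    public void onWriteError(Exception exception, List<? extends Y> items) {
        // nothing to do here in this sketch
    }
}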

Kafka RecordFilterStrategy does not filter records when using spring-kafka ReplyingKafkaTemplate

Hi, I have the following configuration for ReplyingKafkaTemplate. I want to filter messages before the consumer, based on the correlation ID, but for some reason nothing is filtered. Can anyone suggest what is wrong with this?
@Bean
public ConcurrentMessageListenerContainer<String, FireflyResponse> replyContainer() {
    ConcurrentKafkaListenerContainerFactory<String, FireflyResponse> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(retry));
    factory.setRetryTemplate(retryTemplate);
    factory.setConcurrency(3);
    factory.setBatchListener(true);
    factory.setAckDiscarded(true);
    factory.setRecordFilterStrategy(new RecordFilterStrategy<String, FireflyResponse>() {
        @Override
        public boolean filter(ConsumerRecord<String, FireflyResponse> consumerRecord) {
            return consumerRecord.headers().lastHeader(KafkaHeaders.CORRELATION_ID) == null;
        }
    });
    return factory.createContainer(responseTopic);
}
@Bean
public ReplyingKafkaTemplate<String, FireflyRequest, FireflyResponse> kafkaTemplate(
        ConcurrentMessageListenerContainer<String, FireflyResponse> replyContainer) {
    ReplyingKafkaTemplate<String, FireflyRequest, FireflyResponse> template = new ReplyingKafkaTemplate<>(
            producerFactory(), replyContainer);
    template.setDefaultReplyTimeout(Duration.ofSeconds(connectionTimeout));
    template.setSharedReplyTopic(true);
    return template;
}
The replying template ALWAYS sets the correlation id header...
@Override
public RequestReplyFuture<K, V, R> sendAndReceive(ProducerRecord<K, V> record, @Nullable Duration replyTimeout) {
    Assert.state(this.running, "Template has not been start()ed"); // NOSONAR (sync)
    CorrelationKey correlationId = this.correlationStrategy.apply(record);
    Assert.notNull(correlationId, "the created 'correlationId' cannot be null");
    ...
It needs it to correlate the reply with a request.
EDIT
It appears you are trying to filter the response; that is not supported; only requests are filtered.
Simply return null from the listener if you don't want to reply.
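On the server side, that could look like the following sketch (the topic property and the handleRequest, shouldIgnore, and buildResponse names are hypothetical; returning null from the listener means no reply record is sent, so the requesting side simply times out for that request):
@KafkaListener(topics = "${request.topic}")
@SendTo
public FireflyResponse handleRequest(FireflyRequest request) {
    if (shouldIgnore(request)) {
        return null; // no reply is sent for this request
    }
    return buildResponse(request);
}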

Save on JDBC connections by using JdbcCursorItemReader or JdbcPagingItemReader

In my Spring Batch project, I used JdbcCursorItemReader to read data and process it in parallel. I can run the batch locally without any problem.
I also heard that JdbcPagingItemReader is recommended over JdbcCursorItemReader for parallel processing, as the cursor reader holds the connection for the whole read, while the paging reader can release the connection once each page has been read.
I then switched to JdbcPagingItemReader in step2, but to my surprise, I got the exception below when running locally:
Caused by: java.sql.SQLTransientConnectionException: HikariPool-1 -
Connection is not available, request timed out after 300001ms.
However, it seems the above exception occurs in step1, before the paging reader in step2 is executed, and that is the only change I made. Please shed some light on why the exception is thrown and whether it is good practice to use the paging reader instead of the cursor reader in parallel processing. Your help is much appreciated!
The code snippet is pasted below:
@Bean
@StepScope
public Flow createParallelSubFlow() {
    List<Flow> subFlowList = new ArrayList<>();
    List<Stream> streamList;
    try {
        streamList = dataSourceConfig.streamMapper()
                .getStreamListByStatus(Constants.PENDING_STATUS_CD);
    } catch (Exception e) {
    }
    streamList.forEach(stream -> {
        long id = stream.getStreamId();
        String flowName = "stream" + id + "_flow";
        Flow subFlow = new FlowBuilder<Flow>(flowName)
                .start(step1(id))
                .next(step2(id))
                .end();
        subFlowList.add(subFlow);
    });
    return new FlowBuilder<Flow>("splitFlow").split(new SimpleAsyncTaskExecutor())
            .add(subFlowList.toArray(new Flow[0])).build();
}
public Step step1(long id) {
    return stepBuilderFactory.get("step1")
            .<Domain, Domain>chunk(100)
            .reader(reader1(id))
            .writer(writer1())
            .build();
}
//@StepScope
//@Bean
public Step step2(long id) {
    return stepBuilderFactory.get("step2")
            .<Domain, Domain>chunk(100)
            .reader(cursorReader2(id))
            .processor(processor2)
            .writer(writer2())
            .build();
}
public JdbcCursorItemReader<Domain> cursorReader2(Long id) {
    return new JdbcCursorItemReaderBuilder<Domain>()
            .dataSource(dataSourceConfig.dataSource())
            .name("cursorReader")
            .sql(Constants.QUERY_SQL)
            .preparedStatementSetter(new PreparedStatementSetter() {
                @Override
                public void setValues(PreparedStatement ps) throws SQLException {
                    ps.setLong(1, id);
                }
            })
            .rowMapper(new RowMapper())
            .build();
}
// Switch from cursorReader2 to pagingReader2 in step2
public JdbcPagingItemReader<Domain> pagingReader2(Long id) {
    return new JdbcPagingItemReaderBuilder<Domain>()
            .dataSource(dataSourceConfig.dataSource())
            .name("pagingReader")
            .queryProvider(queryProvider())
            .parameterValues(parameterValues(id))
            .rowMapper(new RowMapper())
            .pageSize(100)
            .build();
}
@Bean
public PagingQueryProvider queryProvider() {
    SqlPagingQueryProviderFactoryBean providerFactory = new SqlPagingQueryProviderFactoryBean();
    Map<String, Order> sortKeys = new HashMap<>(2);
    sortKeys.put("ID", Order.ASCENDING);
    providerFactory.setDataSource(dataSourceConfig.dataSource());
    providerFactory.setSelectClause("SELECT Clause");
    providerFactory.setFromClause("FROM Clause");
    providerFactory.setWhereClause("WHERE Clause");
    providerFactory.setSortKeys(sortKeys);
    PagingQueryProvider pagingQueryProvider = null;
    try {
        pagingQueryProvider = providerFactory.getObject();
    } catch (Exception e) {
        logger.error("Failed to get PagingQueryProvider", e);
        throw new RuntimeException("Failed to get PagingQueryProvider", e);
    }
    return pagingQueryProvider;
}
private Map<String, Object> parameterValues(Long id) {
    Map<String, Object> parameterValues = new HashMap<>();
    parameterValues.put("1", id);
    return parameterValues;
}
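For context, the keys in the parameterValues map are matched against named placeholders in the paging reader's where clause, so a concrete version of queryProvider/parameterValues could look like the sketch below (table, column, and placeholder names are purely illustrative):
@Bean
public PagingQueryProvider queryProvider() throws Exception {
    SqlPagingQueryProviderFactoryBean providerFactory = new SqlPagingQueryProviderFactoryBean();
    Map<String, Order> sortKeys = new HashMap<>(2);
    sortKeys.put("ID", Order.ASCENDING);
    providerFactory.setDataSource(dataSourceConfig.dataSource());
    providerFactory.setSelectClause("select ID, NAME, STATUS");
    providerFactory.setFromClause("from DOMAIN_TABLE");
    providerFactory.setWhereClause("where STREAM_ID = :streamId");
    providerFactory.setSortKeys(sortKeys);
    return providerFactory.getObject();
}

private Map<String, Object> parameterValues(Long id) {
    Map<String, Object> parameterValues = new HashMap<>();
    parameterValues.put("streamId", id); // key matches the :streamId placeholder in the where clause
    return parameterValues;
}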

Spring Bean Scope for StringRedisConnection

I have the following two bean definitions for Spring Data Redis. I can't seem to find the relevant documentation to determine the scopes (singleton, request, or session) of these beans for a web app.
@Bean
public StringRedisTemplate redisTemplate() throws Exception {
    StringRedisTemplate redisTemplate = new StringRedisTemplate();
    redisTemplate.setConnectionFactory(jedisConnectionFactory());
    return redisTemplate;
}
@Bean
public StringRedisConnection stringRedisConnection() throws Exception {
    return new DefaultStringRedisConnection(redisTemplate().getConnectionFactory().getConnection());
}
Thanks to @Christoph Strobl's recommendation, here is the implementation I am currently using:
public List<String> testAutoComplete(String key, String query, int limitCount) {
    StringRedisSerializer serializer = new StringRedisSerializer();
    RedisZSetCommands.Range range = Range.range();
    range.gt(query);
    RedisZSetCommands.Limit limit = new RedisZSetCommands.Limit();
    limit.count(limitCount);
    return template.execute(new RedisCallback<List<String>>() {
        public List<String> doInRedis(RedisConnection connection) {
            Set<byte[]> results = connection.zRangeByLex(serializer.serialize(key), range, limit);
            List<String> resultAsString = new ArrayList<String>();
            for (byte[] result : results) {
                resultAsString.add(serializer.deserialize(result));
            }
            return resultAsString;
        }
    }, false);
}
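For example, I call it like this (key, prefix, and limit values are only illustrative):
List<String> suggestions = testAutoComplete("autocomplete:cities", "Par", 10);
suggestions.forEach(System.out::println);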

Spring Batch: FlatFileItemWriter header never called

I have a weird issue with my FlatFileItemWriter callbacks.
I have a custom ItemWriter implementing both FlatFileFooterCallback and FlatFileHeaderCallback. Consequently, I set the header and footer callbacks on my FlatFileItemWriter like this:
ItemWriter Bean
@Bean
@StepScope
public ItemWriter<CityItem> writer(FlatFileItemWriter<CityProcessed> flatWriter, @Value("#{jobExecutionContext[inputFile]}") String inputFile) {
    CityItemWriter itemWriter = new CityItemWriter();
    flatWriter.setHeaderCallback(itemWriter);
    flatWriter.setFooterCallback(itemWriter);
    itemWriter.setDelegate(flatWriter);
    itemWriter.setInputFileName(inputFile);
    return itemWriter;
}
FlatFileItemWriter Bean
@Bean
@StepScope
public FlatFileItemWriter<CityProcessed> flatFileWriterArchive(@Value("#{jobExecutionContext[outputFileArchive]}") String outputFile) {
    FlatFileItemWriter<CityProcessed> flatWriter = new FlatFileItemWriter<CityProcessed>();
    FileSystemResource isr;
    isr = new FileSystemResource(new File(outputFile));
    flatWriter.setResource(isr);
    DelimitedLineAggregator<CityProcessed> aggregator = new DelimitedLineAggregator<CityProcessed>();
    aggregator.setDelimiter(";");
    BeanWrapperFieldExtractor<CityProcessed> beanWrapper = new BeanWrapperFieldExtractor<CityProcessed>();
    beanWrapper.setNames(new String[]{
            "country", "name", "population", "popUnder25", "pop25To50", "pop50to75", "popMoreThan75"
    });
    aggregator.setFieldExtractor(beanWrapper);
    flatWriter.setLineAggregator(aggregator);
    flatWriter.setEncoding("ISO-8859-1");
    return flatWriter;
}
Step Bean
@Bean
public Step stepImport(StepBuilderFactory stepBuilderFactory, ItemReader<CityFile> reader, ItemWriter<CityItem> writer, ItemProcessor<CityFile, CityItem> processor,
        @Qualifier("flatFileWriterArchive") FlatFileItemWriter<CityProcessed> flatFileWriterArchive, ExecutionContextPromotionListener executionContextListener) {
    return stepBuilderFactory.get("stepImport").<CityFile, CityItem> chunk(10).reader(reader(null)).processor(processor).writer(writer).stream(flatFileWriterArchive)
            .listener(executionContextListener).build();
}
I have the classic content in my writeFooter, writeHeader and write methods.
ItemWriter code
public class CityItemWriter implements ItemWriter<CityItem>, FlatFileFooterCallback, FlatFileHeaderCallback, ItemStream {

    private FlatFileItemWriter<CityProcessed> writer;
    private static int nbUnknown = 0;
    private static int nbSup10000 = 0;
    private static int nbInf10000 = 0;
    private String inputFileName = "-";

    public void setDelegate(FlatFileItemWriter<CityProcessed> delegate) {
        writer = delegate;
    }

    public void setInputFileName(String name) {
        inputFileName = name;
    }

    private Predicate<String> isNullValue() {
        return p -> p == null;
    }

    @Override
    public void write(List<? extends CityItem> cities) throws Exception {
        List<CityProcessed> citiesCSV = new ArrayList<>();
        for (CityItem item : cities) {
            String populationAsString = "";
            String less25AsString = "";
            String more25AsString = "";
            /*
             * Some processing to get the totals for Unknown/Sup 10000/Inf 10000
             * and other data
             */
            // Write in CSV file
            CityProcessed cre = new CityProcessed();
            cre.setCountry(item.getCountry());
            cre.setName(item.getName());
            cre.setPopulation(populationAsString);
            cre.setLess25(less25AsString);
            cre.setMore25(more25AsString);
            citiesCSV.add(cre);
        }
        writer.write(citiesCSV);
    }

    @Override
    public void writeFooter(Writer fileWriter) throws IOException {
        String newLine = "\r\n";
        String totalUnknown = "Subtotal:;Unknown;" + String.valueOf(nbUnknown) + newLine;
        String totalSup10000 = ";Sum Sup 10000;" + String.valueOf(nbSup10000) + newLine;
        String totalInf10000 = ";Sum Inf 10000;" + String.valueOf(nbInf10000) + newLine;
        String total = "Total:;;" + String.valueOf(nbSup10000 + nbInf10000 + nbUnknown) + newLine;
        fileWriter.write(newLine);
        fileWriter.write(totalUnknown);
        fileWriter.write(totalSup10000);
        fileWriter.write(totalInf10000);
        fileWriter.write(total);
    }

    @Override
    public void writeHeader(Writer fileWriter) throws IOException {
        String newLine = "\r\n";
        String firstLine = "FILE PROCESSED ON: ;" + new SimpleDateFormat("MM/dd/yyyy").format(new Date()) + newLine;
        String secondLine = "Filename: ;" + inputFileName + newLine;
        String colNames = "Country;Name;Population...;...having less than 25;...having more than 25";
        fileWriter.write(firstLine);
        fileWriter.write(secondLine);
        fileWriter.write(newLine);
        fileWriter.write(colNames);
    }

    @Override
    public void close() throws ItemStreamException {
        writer.close();
    }

    @Override
    public void open(ExecutionContext context) throws ItemStreamException {
        writer.open(context);
    }

    @Override
    public void update(ExecutionContext context) throws ItemStreamException {
        writer.update(context);
    }
}
When I run my batch, I only get the data for each city (the write method part) and the footer lines. If I comment out the whole content of the write method and the footer callback, I still don't get the header lines. I tried adding a System.out.println() in my header callback; it looks like it's never called.
Here is an example of the CSV file produced by my batch:
France;Paris;2240621;Unknown;Unknown
France;Toulouse;439553;Unknown;Unknown
Spain;Barcelona;1620943;Unknown;Unknown
Spain;Madrid;3207247;Unknown;Unknown
[...]
Subtotal:;Unknown;2
;Sum Sup 10000;81
;Sum Inf 10000;17
Total:;;100
What is weird is that my header used to work, back when I first added both the footer and header callbacks. I didn't change them, and I don't see what I've done in my code to break my header callback... And of course, I have no saved copy of my first version. Since I only noticed now that my header has disappeared (I checked my last few files, and it looks like the header has been missing for a while without my noticing), I can't just roll back my modifications to see when/why it happened.
Do you have any idea how to solve this problem?
Thanks
When using Java config as you are, it's best to return the most specific type possible (the opposite of what you're normally told to do in Java programming). In this case, your writer bean returns ItemWriter, but is step scoped. Because of this, a proxy is created that can only see the type your Java config returns, which in this case is ItemWriter, and it does not expose the methods of the ItemStream interface. If you return CityItemWriter, I'd expect things to work.
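In other words, a sketch of the change (same body as the bean in the question, only the declared return type differs):
@Bean
@StepScope
public CityItemWriter writer(FlatFileItemWriter<CityProcessed> flatWriter,
        @Value("#{jobExecutionContext[inputFile]}") String inputFile) {
    CityItemWriter itemWriter = new CityItemWriter();
    flatWriter.setHeaderCallback(itemWriter);
    flatWriter.setFooterCallback(itemWriter);
    itemWriter.setDelegate(flatWriter);
    itemWriter.setInputFileName(inputFile);
    return itemWriter;
}
With the concrete return type, the step-scoped proxy also exposes the ItemStream methods, which is what the reasoning above relies on.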