Kafka RecordFilterStrategy does not filter records when using spring-kafka ReplyingKafkaTemplate - apache-kafka

Hi, I have the following configuration for ReplyingKafkaTemplate and I want to filter messages before the consumer based on the correlation ID, but for some reason it doesn't filter. Can anyone suggest what is wrong with this?
@Bean
public ConcurrentMessageListenerContainer<String, FireflyResponse> replyContainer() {
    ConcurrentKafkaListenerContainerFactory<String, FireflyResponse> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(retry));
    factory.setRetryTemplate(retryTemplate);
    factory.setConcurrency(3);
    factory.setBatchListener(true);
    factory.setAckDiscarded(true);
    factory.setRecordFilterStrategy(new RecordFilterStrategy<String, FireflyResponse>() {
        @Override
        public boolean filter(ConsumerRecord<String, FireflyResponse> consumerRecord) {
            return consumerRecord.headers().lastHeader(KafkaHeaders.CORRELATION_ID) == null;
        }
    });
    return factory.createContainer(responseTopic);
}

@Bean
public ReplyingKafkaTemplate<String, FireflyRequest, FireflyResponse> kafkaTemplate(
        ConcurrentMessageListenerContainer<String, FireflyResponse> replyContainer) {
    ReplyingKafkaTemplate<String, FireflyRequest, FireflyResponse> template = new ReplyingKafkaTemplate<>(
            producerFactory(), replyContainer);
    template.setDefaultReplyTimeout(Duration.ofSeconds(connectionTimeout));
    template.setSharedReplyTopic(true);
    return template;
}

The replying template ALWAYS sets the correlation id header...
@Override
public RequestReplyFuture<K, V, R> sendAndReceive(ProducerRecord<K, V> record, @Nullable Duration replyTimeout) {
    Assert.state(this.running, "Template has not been start()ed"); // NOSONAR (sync)
    CorrelationKey correlationId = this.correlationStrategy.apply(record);
    Assert.notNull(correlationId, "the created 'correlationId' cannot be null");
    ...
It needs it to correlate the reply with a request.
EDIT
It appears you are trying to filter the response; that is not supported; only requests are filtered.
Simply return null from the listener if you don't want to reply.
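For example, a minimal sketch of the server-side request handler (the topic name and the isValid/buildResponse methods are hypothetical, not from the original question):
// Sketch only: when a @KafkaListener annotated with @SendTo returns null,
// no reply record is published, so nothing reaches the reply container.
@KafkaListener(topics = "firefly-requests")   // hypothetical request topic
@SendTo                                       // reply topic taken from the REPLY_TOPIC header set by the template
public FireflyResponse handleRequest(FireflyRequest request) {
    if (!isValid(request)) {      // hypothetical validation
        return null;              // no reply is sent for this request
    }
    return buildResponse(request); // hypothetical processing
}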

Related

How to use @BeforeStep Job Parameters in JdbcCursorItemReader for named Query

I have the following code:
@Bean
public JdbcCursorItemReader<Map<String, Object>> itemReader() {
    return new JdbcCursorItemReader<Map<String, Object>>() {
        private JobParameters jobParameter;
        String sql = "select EMPLOYEE_ID as empId, EMPLOYEE_NAME as empName, EMPLOYEE_AGE as age from EMPLOYEE where EMPLOYEE_DEPT = :empDept and EMPLOYEE_SAL > :empSal";
        Map<String, Object> namedParameters = null;

        @PostConstruct
        public void initialize() throws Exception {
            setDataSource(dataSource);
            setSql("select 1 from dual");
            setRowMapper(new ColumnMapRowMapper());
        }

        @BeforeStep
        public void retrieveExecutionContext(StepExecution stepExecution) {
            jobParameter = stepExecution.getJobParameters();
            namedParameters = new HashMap<String, Object>() {
                {
                    put("bstd", jobParameter.getString("empDept"));
                    put("bwtn", jobParameter.getString("empSal"));
                }
            };
            jobParameter.getParameters().forEach((k, v) -> System.out.println("key =" + k + ", Value:" + v));
        }

        @Override
        public void afterPropertiesSet() throws Exception {
            setSql(NamedParameterUtils.substituteNamedParameters(sql, new MapSqlParameterSource(namedParameters)));
            setPreparedStatementSetter(new ListPreparedStatementSetter(
                    Arrays.asList(NamedParameterUtils.buildValueArray(sql, namedParameters))));
            setRowMapper(new ColumnMapRowMapper());
            setDataSource(dataSource);
            super.afterPropertiesSet();
        }
    };
}
I tried calling afterPropertiesSet, but I am still seeing the exception below:
Caused by: org.springframework.dao.InvalidDataAccessApiUsageException: No value supplied for the SQL parameter 'empDept': No value registered for key 'empDept'
at org.springframework.jdbc.core.namedparam.NamedParameterUtils.buildValueArray(NamedParameterUtils.java:361) ~[spring-jdbc-5.3.22.jar:5.3.22]
at org.springframework.jdbc.core.namedparam.NamedParameterUtils.buildValueArray(NamedParameterUtils.java:485) ~[spring-jdbc-5.3.22.jar:5.3.22]
The requirement is a dynamic query, so I don't have control over the select statement or the where conditions.
Thanks in advance.
You can use a SpEL expression to inject and use job parameters in your item reader bean definition as follows:
@Bean
@StepScope
public JdbcCursorItemReader<Map<String, Object>> itemReader(@Value("#{jobParameters['empDept']}") String empDept, @Value("#{jobParameters['empSal']}") String empSal) {
    JdbcCursorItemReader<Map<String, Object>> itemReader = new JdbcCursorItemReader<>();
    // use parameters 'empDept' and 'empSal' in your sql query as needed
    return itemReader;
}
Note that the item reader should be step-scoped for that to work. For more details, please refer to the documentation: Late Binding of Job and Step Attributes.
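As a minimal sketch of one way to wire the injected values into the query (assuming the EMPLOYEE table and the empDept/empSal parameters from the question), they can be bound as ordinary positional prepared-statement parameters:
@Bean
@StepScope
public JdbcCursorItemReader<Map<String, Object>> itemReader(
        DataSource dataSource,
        @Value("#{jobParameters['empDept']}") String empDept,
        @Value("#{jobParameters['empSal']}") String empSal) {
    JdbcCursorItemReader<Map<String, Object>> reader = new JdbcCursorItemReader<>();
    reader.setDataSource(dataSource);
    // positional placeholders instead of named parameters; values come straight from the job parameters
    reader.setSql("select EMPLOYEE_ID as empId, EMPLOYEE_NAME as empName, EMPLOYEE_AGE as age "
            + "from EMPLOYEE where EMPLOYEE_DEPT = ? and EMPLOYEE_SAL > ?");
    reader.setPreparedStatementSetter(new ArgumentPreparedStatementSetter(new Object[] { empDept, empSal }));
    reader.setRowMapper(new ColumnMapRowMapper());
    return reader;
}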

Use Chunk Listener for Indicator Pattern

I am trying to use the processed-indicator pattern to make my job idempotent. I tried to use a write listener (afterWrite) to update the mongo documents by setting a field PROCESSED: true. However, there were issues when there is a large number of chunks.
MongoDB item reader (10000 docs) ---chunk(1000)--> JDBC batch item writer (only 5000 are saved in the table after the step completes)
The following code defines the step:
@Bean
public MongoItemReader<X> Reader() throws Exception {
    MongoItemReader<X> reader = new MongoItemReader<>();
    reader.setTemplate(mongoTemplate);
    reader.setCollection("MY_COLLECTION");
    reader.setTargetType(X.class);
    reader.setQuery("{PROCESSED: {$exists: false}}");
    reader.setSort(new HashMap<String, Sort.Direction>() {{
        put("_id", Sort.Direction.ASC);
    }});
    reader.afterPropertiesSet();
    return reader;
}

@Bean
public XItemProcessor x_item_processor() {
    return new XItemProcessor();
}

@Bean
public X_Item_Listener item_listener() {
    return new X_Item_Listener();
}

@Bean
public X_Step_Listener step_listener() {
    return new X_Step_Listener();
}

@Bean
public JdbcBatchItemWriter<Y> YWriter() {
    JdbcBatchItemWriter<Y> Y_Writer = new JdbcBatchItemWriter<>();
    Y_Writer.setDataSource(dataSource);
    Y_Writer.setAssertUpdates(true);
    Y_Writer.setItemSqlParameterSourceProvider(new BeanPropertyItemSqlParameterSourceProvider<>());
    Y_Writer.setSql("INSERT INTO Y (Y1,Y2,Y3,Y4) VALUES (:y1, :y2, :y3, :y4)");
    Y_Writer.afterPropertiesSet();
    return Y_Writer;
}

@Bean
public Step XY_Step() throws Exception {
    return stepBuilderFactory.get("XY")
            .<X, Y>chunk(1000)
            .reader(Reader())
            .processor(x_item_processor())
            .writer(YWriter())
            .faultTolerant()
            .skipLimit(Integer.MAX_VALUE)
            .skip(Exception.class)
            .listener((ItemProcessListener<? super X, ? super Y>) item_listener())
            .listener(step_listener())
            .build();
}
Here is a snippet of the code used in the after-write listener to update the mongo documents:
@Autowired
private MongoTemplate mongoTemplate;

@Transactional(propagation = Propagation.REQUIRES_NEW)
public void afterWrite(List<? extends Y> items) {
    BulkOperations ops = mongoTemplate.bulkOps(BulkOperations.BulkMode.UNORDERED, "MY_COLLECTION");
    for (Y item : items) {
        Update update = new Update().set("PROCESSED", true);
        ops.updateOne(new Query(Criteria.where("_id").is(item.getID())), update);
    }
    ops.execute();
}

search for a very simple EsperIO Kafka example

I'm desperately looking for example code for the Esper CEP Kafka adapter. I've already installed Kafka and wrote data to a Kafka topic using a producer, and now I want to process it with Esper CEP. Unfortunately the Esper documentation for the Kafka adapter is not very meaningful. Does anyone have a very simple example?
Edit:
So far I have added an adapter and it seems to work. However, I don't know how to read from the adapter or how to link a CEP pattern to it. This is my code so far:
config.addImport(KafkaOutputDefault.class);
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class.getName());
props.put(ConsumerConfig.GROUP_ID_CONFIG, "group.id");
props.put(EsperIOKafkaConfig.INPUT_SUBSCRIBER_CONFIG, EsperIOKafkaInputSubscriberByTopicList.class.getName());
props.put(EsperIOKafkaConfig.TOPICS_CONFIG, "test123");
props.put(EsperIOKafkaConfig.INPUT_PROCESSOR_CONFIG, EsperIOKafkaInputProcessorDefault.class.getName());
props.put(EsperIOKafkaConfig.INPUT_TIMESTAMPEXTRACTOR_CONFIG, EsperIOKafkaInputTimestampExtractorConsumerRecord.class.getName());
Configuration config2 = new Configuration();
config2.addPluginLoader("KafkaInput", EsperIOKafkaInputAdapterPlugin.class.getName(), props, null);
EsperIOKafkaInputAdapter adapter = new EsperIOKafkaInputAdapter(props, "default");
adapter.start();
I've had the same problem. I created a sample project you could have a look at, especially the plain-esper branch.
An even more simplified version would be:
public class KafkaExample implements Runnable {

    private String runtimeURI;

    public KafkaExample(String runtimeURI) {
        this.runtimeURI = runtimeURI;
    }

    public static void main(String[] args) {
        new KafkaExample("KafkaExample").run();
    }

    @Override
    public void run() {
        Configuration configuration = new Configuration();
        configuration.getCommon().addImport(KafkaOutputDefault.class);
        configuration.getCommon().addEventType(String.class);

        Properties consumerProps = new Properties();
        // Kafka consumer properties
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, UUID.randomUUID().toString());
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, OffsetResetStrategy.EARLIEST.toString().toLowerCase());

        // EsperIO Kafka input adapter properties
        consumerProps.put(EsperIOKafkaConfig.INPUT_SUBSCRIBER_CONFIG, Consumer.class.getName());
        consumerProps.put(EsperIOKafkaConfig.INPUT_PROCESSOR_CONFIG, InputProcessor.class.getName());
        consumerProps.put(EsperIOKafkaConfig.INPUT_TIMESTAMPEXTRACTOR_CONFIG, EsperIOKafkaInputTimestampExtractorConsumerRecord.class.getName());

        configuration.getRuntime().addPluginLoader("KafkaInput", EsperIOKafkaInputAdapterPlugin.class.getName(), consumerProps, null);

        String stmt = "@name('sampleQuery') select * from String";
        EPCompiled compiled;
        try {
            compiled = EPCompilerProvider.getCompiler().compile(stmt, new CompilerArguments(configuration));
        } catch (EPCompileException ex) {
            throw new RuntimeException(ex);
        }

        EPRuntime runtime = EPRuntimeProvider.getRuntime(runtimeURI, configuration);
        EPDeployment deployment;
        try {
            deployment = runtime.getDeploymentService().deploy(compiled, new DeploymentOptions().setDeploymentId(UUID.randomUUID().toString()));
        } catch (EPDeployException ex) {
            throw new RuntimeException(ex);
        }

        EPStatement statement = runtime.getDeploymentService().getStatement(deployment.getDeploymentId(), "sampleQuery");
        statement.addListener((newData, oldData, sta, run) -> {
            for (EventBean nd : newData) {
                System.out.println(nd.getUnderlying());
            }
        });

        while (true) {} // keep the JVM alive so the adapter keeps delivering events (demo only)
    }
}
public class Consumer implements EsperIOKafkaInputSubscriber {

    @Override
    public void subscribe(EsperIOKafkaInputSubscriberContext context) {
        Collection<String> collection = new ArrayList<String>();
        collection.add("input");
        context.getConsumer().subscribe(collection);
    }
}
public class InputProcessor implements EsperIOKafkaInputProcessor {

    private EPRuntime runtime;

    @Override
    public void init(EsperIOKafkaInputProcessorContext context) {
        this.runtime = context.getRuntime();
    }

    @Override
    public void process(ConsumerRecords<Object, Object> records) {
        for (ConsumerRecord record : records) {
            if (record.value() != null) {
                try {
                    runtime.getEventService().sendEventBean(record.value().toString(), "String");
                } catch (Exception e) {
                    throw e;
                }
            }
        }
    }

    public void close() {}
}
Sample code follows. This code assumes there are already some messages in the topic. This does not loop and wait for more messages.
Properties consumerProps = new Properties();
consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, ip);
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class.getName());
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class.getName());
consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "mygroup");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
consumer.subscribe(Collections.singletonList("test123")); // subscribe before polling; topic name taken from the question

ConsumerRecords<String, String> rows = consumer.poll(1000);
Iterator<ConsumerRecord<String, String>> it = rows.iterator();
while (it.hasNext()) {
    ConsumerRecord<String, String> row = it.next();
    MyEvent event = new MyEvent(row.value()); // transform string to event
    // process event
    runtime.sendEvent(event);
}

How to get the original message after hitting the error handler and write it to a file

I've been building a Spring Integration email service using the Java DSL.
The service must have a recovery policy so that it retries sending the emails, but I'm not having success.
A brief story: the application receives a payload and headers and tries to send them to the email server. It tries 3 times and, in case of failure, it creates a new file with the headers and body of the message.
How could I get the original message (headers and payload) and write that pair to a JSON file when sending the email fails?
Thanks.
These are my beans and the service:
/**
 * #################
 * MESSAGE ENDPOINTS
 * #################
 */
@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata poller() {
    return Pollers
            .fixedRate(NumberUtils.createLong(QUEUE_RATE))
            .maxMessagesPerPoll(NumberUtils.createLong(QUEUE_CAPACITY))
            .errorHandler(e -> LOG.error("Exception : " + e.getMessage()))
            .get();
}

@Bean
public MessageChannel recoveryChannel() {
    return MessageChannels.direct().get();
}

@MessagingGateway
public static interface MailService {
    @Gateway(requestChannel = "mail.input")
    void sendMail(String body, @Headers Map<String, String> headers);
}

@Bean
public RetryPolicy retryPolicy() {
    final Map<Class<? extends Throwable>, Boolean> map =
            new HashMap<Class<? extends Throwable>, Boolean>() {
                {
                    put(MailSendException.class, true);
                    put(RuntimeException.class, true);
                }
                private static final long serialVersionUID = -1L;
            };
    final RetryPolicy ret = new SimpleRetryPolicy(3, map, true);
    return ret;
}

@Bean
public RetryTemplate retryTemplate() {
    final RetryTemplate ret = new RetryTemplate();
    ret.setRetryPolicy(retryPolicy());
    ret.setThrowLastExceptionOnExhausted(false);
    return ret;
}

@Bean
public Advice retryAdvice() {
    final RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
    advice.setRetryTemplate(retryTemplate());
    RecoveryCallback<Object> recoveryCallBack = new ErrorMessageSendingRecoverer(recoveryChannel());
    advice.setRecoveryCallback(recoveryCallBack);
    return advice;
}

private MailSendingMessageHandlerSpec mailOutboundAdapter() {
    MailSendingMessageHandlerSpec msmhs =
            Mail.outboundAdapter(emailServerHost())
                    .port(serverPort())
                    .credentials(MAIL_USER_NAME, MAIL_PASSWORD)
                    .protocol(emailProtocol())
                    .javaMailProperties(p -> p
                            .put("mail.debug", "true")
                            .put("mail.smtp.ssl.enable", enableSSL())
                            .put("mail.smtp.connectiontimeout", 5000)
                            .put("mail.smtp.timeout", 5000));
    return msmhs;
}

@Bean
public FileWritingMessageHandler fileOutboundAdapter() {
    FileWritingMessageHandler fwmhs = Files
            .outboundAdapter(new File("logs/errors/"))
            .autoCreateDirectory(true)
            .get();
    return fwmhs;
}

/**
 * ################
 * FLOWS
 * ################
 */
@Bean
public IntegrationFlow smtp() {
    return IntegrationFlows.from("mail.input")
            .channel(MessageChannels.queue())
            .handle(this.mailOutboundAdapter(),
                    e -> e.id("smtpOut")
                          .advice(retryAdvice()))
            .get();
}

@Bean
public IntegrationFlow errorFlow() {
    return IntegrationFlows.from(recoveryChannel())
            .transform(Transformers.toJson())
            .enrichHeaders(c -> c.header(FileHeaders.FILENAME, "emailErrors"))
            .handle(this.fileOutboundAdapter())
            .get();
}
}
The error message has a MessagingException payload. It has two properties: cause and failedMessage.
The failed message is the message at the point of failure, with its headers and payload.
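As a sketch (not from the original answer; the transform lambda and the headers/payload map are illustrative), the error flow above could pull the failed message back out before writing the file:
@Bean
public IntegrationFlow errorFlow() {
    return IntegrationFlows.from(recoveryChannel())
            // the error message payload is a MessagingException; recover the original message from it
            .transform(MessagingException.class, ex -> {
                Message<?> failed = ex.getFailedMessage();
                Map<String, Object> content = new HashMap<>();
                content.put("headers", failed.getHeaders());
                content.put("payload", failed.getPayload());
                return content;
            })
            .transform(Transformers.toJson())
            .enrichHeaders(c -> c.header(FileHeaders.FILENAME, "emailErrors"))
            .handle(this.fileOutboundAdapter())
            .get();
}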

Spring batch partitioning is not working

I am using Spring Batch partitioning to merge data from a group of related flat files into a single file. The batch is failing with the two issues below:
The first slave step thread is failing because data is written to the file writer before it is opened. The value of the variable inputFileNames (step context data provided by the partitioner) for this thread is [20002, 20003].
The second slave step thread is failing because the partitioning data is missing from the step context. The value of inputFileNames for this thread is null.
Please let me know if I am missing something in the configuration.
// log with Error info
2015-12-26 17:59:14,165 DEBUG [SimpleAsyncTaskExecutor-1] c.d.d.b.r.ReaderConfiguration [ReaderBatchConfiguration.java:473] inputFileNames ----[20002, 20003]
2015-12-26 17:59:14,165 DEBUG [SimpleAsyncTaskExecutor-1] c.d.d.b.r.BatchConfiguration [BatchConfiguration.java:389] consumer ----p2
2015-12-26 17:59:14,275 ERROR [SimpleAsyncTaskExecutor-1] o.s.b.c.s.AbstractStep [AbstractStep.java:225] Encountered an error executing step testConsumersInputFileMergeStep in job testFileForInputJob
org.springframework.batch.item.WriterNotOpenException: Writer must be open before it can be written to
at org.springframework.batch.item.file.FlatFileItemWriter.write(FlatFileItemWriter.java:255) ~[spring-batch-infrastructure-3.0.3.RELEASE.jar:3.0.3.RELEASE]
2015-12-26 18:00:14,421 DEBUG [SimpleAsyncTaskExecutor-2] c.d.d.b.r.ReaderBatchConfiguration [ReaderConfiguration.java:474] inputFileNames ----null
// Partitioner
public class ProvisioningInputFilePartitioner implements Partitioner {

    @Override
    public Map<String, ExecutionContext> partition(int gridSize) {
        Map<String, ExecutionContext> filesToProcess = getFilesToProcess(outboundSourceFolder);
        Map<String, ExecutionContext> execCtxs = new HashMap<>();
        for (Entry<String, ExecutionContext> entry : filesToProcess.entrySet()) {
            execCtxs.put(entry.getKey(), entry.getValue());
        }
        return execCtxs;
    }

    private Map<String, ExecutionContext> getFilesToProcess(String outboundSourceFolder2) {
        Map<String, ExecutionContext> contexts = new HashMap<>();
        ExecutionContext execCtx1 = new ExecutionContext();
        List<String> inputFileNames1 = Arrays.asList("20001", "22222");
        execCtx1.put("consumer", "p1");
        execCtx1.put("inputFileNames", inputFileNames1);
        contexts.put("p1", execCtx1);

        ExecutionContext execCtx2 = new ExecutionContext();
        List<String> inputFileNames2 = Arrays.asList("20002", "20003");
        execCtx1.put("consumer", "p2");
        execCtx1.put("inputFileNames", inputFileNames2);
        contexts.put("p2", execCtx2);

        return contexts;
    }
}
// Writer
@Bean
@StepScope
public ItemWriter<String> testConsumerFileItemWriter(@Value("#{stepExecutionContext[consumer]}") String consumer) {
    logger.debug("consumer ----" + consumer);
    FileSystemResource fileSystemResource = new FileSystemResource(new File(outboundSourceFolder, consumer + ".txt"));
    FlatFileItemWriter<String> fileItemWriter = new FlatFileItemWriter<>();
    fileItemWriter.setResource(fileSystemResource);
    fileItemWriter.setLineAggregator(new PassThroughLineAggregator<String>());
    return fileItemWriter;
}

@Bean
public Partitioner provisioningInputFilePartitioner() {
    return new ProvisioningInputFilePartitioner();
}

@Bean
public TaskExecutor taskExecutor() {
    return new SimpleAsyncTaskExecutor();
}

// Reader
@Bean
@StepScope
public ItemReader<String> testInputFilesReader(@Value("#{stepExecutionContext[inputFileNames]}") List<String> inputFileNames) {
    logger.debug("inputFileNames ----" + inputFileNames);
    MultiResourceItemReader<String> multiResourceItemReader = new MultiResourceItemReader<String>();
    ...
    return multiResourceItemReader;
}

// Slave step
@Bean
public Step testConsumersInputFileMergeStep(StepBuilderFactory stepBuilder, ItemReader<String> testInputFilesReader,
        ItemWriter<String> testConsumerFileItemWriter) {
    return stepBuilder.get("testConsumersInputFileMergeStep").<String, String>chunk(1).reader(testInputFilesReader)
            .writer(testConsumerFileItemWriter).build();
}

// Master step
@Bean
public Step testConsumersFilePartitionerStep(StepBuilderFactory stepBuilder, Step testConsumersInputFileMergeStep, Partitioner provisioningInputFilePartitioner,
        TaskExecutor taskExecutor) {
    return stepBuilder.get("testConsumersFilePartitionerStep").partitioner(testConsumersInputFileMergeStep)
            .partitioner("testConsumersInputFileMergeStep", provisioningInputFilePartitioner)
            .taskExecutor(taskExecutor)
            .build();
}

// Job
@Bean
public Job testFileForInputJob(JobBuilderFactory factory, Step testFileForInputStep, Step testConsumersFilePartitionerStep) {
    return factory.get("testFileForInputJob").incrementer(new RunIdIncrementer()).start(testConsumersFilePartitionerStep).build();
}