Pause Kafka Consumer with spring-cloud-stream and Functional Style

I'm trying to implement a retry mechanism for my Kafka stream application. The idea is that I would get the consumer, the partition ID, and the topic name from the input topic, and then pause the consumer for the duration stored in the payload.
I've searched for documentation and examples, but all I found are examples based on the classic bindings provided by spring-cloud-stream. I'm trying to see if there's a way to get access to this information with the functional style.
For example, the following code gives me access to the consumer with the classic binding style:
@StreamListener(Sink.INPUT)
public void in(String in, @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer) {
    System.out.println(in);
    consumer.pause(Collections.singleton(new TopicPartition("myTopic", 0)));
}
How do I get the equivalent with the functional style?
I tried the following code, but I'm getting an exception saying no such binding is found.
@Bean
public Function<Message<?>, KStream<String, String>> process() {
    message -> {
        Consumer<?, ?> consumer = message.getHeaders().get(KafkaHeaders.CONSUMER, Consumer.class);
        String topic = message.getHeaders().get(KafkaHeaders.RECEIVED_TOPIC, String.class);
        Integer partitionId = message.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID, Integer.class);
        CustomPayload payload = (CustomPayload) message.getPayload();
        if (payload.getRetryTime() < System.currentTimeMillis()) {
            consumer.pause(Collections.singleton(new TopicPartition(topic, partitionId)));
        }
    }
}
The exception I got:
Caused by: java.lang.IllegalStateException: No factory found for binding target type: org.springframework.messaging.Message among registered factories: channelFactory,messageSourceFactory,kStreamBoundElementFactory,kTableBoundElementFactory,globalKTableBoundElementFactory
at org.springframework.cloud.stream.binding.AbstractBindableProxyFactory.getBindingTargetFactory(AbstractBindableProxyFactory.java:82)
at org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsBindableProxyFactory.bindInput(KafkaStreamsBindableProxyFactory.java:191)
at org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsBindableProxyFactory.afterPropertiesSet(KafkaStreamsBindableProxyFactory.java:111)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1853)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1790)
... 96 more

In your functional bean example, you are mixing both Message and KStream; that is the reason for that specific exception. The functional bean could be rewritten as below.
@Bean
public java.util.function.Consumer<Message<?>> process() {
    return message -> {
        Consumer<?, ?> consumer = message.getHeaders().get(KafkaHeaders.CONSUMER, Consumer.class);
        String topic = message.getHeaders().get(KafkaHeaders.RECEIVED_TOPIC, String.class);
        Integer partitionId = message.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID, Integer.class);
        CustomPayload payload = (CustomPayload) message.getPayload();
        if (payload.getRetryTime() < System.currentTimeMillis()) {
            consumer.pause(Collections.singleton(new TopicPartition(topic, partitionId)));
        }
    };
}
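If it helps, a minimal binding configuration for this functional consumer could look like the following sketch (the property names assume a recent spring-cloud-stream version; the destination and group names are only examples, not taken from the question):
spring.cloud.function.definition=process
spring.cloud.stream.bindings.process-in-0.destination=myTopic
spring.cloud.stream.bindings.process-in-0.group=myGroup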

Related

error handling when consume as a batch in kafka

I have a Kafka consumer written in Java Spring Boot (Spring Kafka). My consumer looks like below.
@RetryableTopic(
        attempts = "4",
        backoff = @Backoff(delay = 1000, multiplier = 2.0),
        autoCreateTopics = "false",
        topicSuffixingStrategy = TopicSuffixingStrategy.SUFFIX_WITH_INDEX_VALUE,
        include = {ResourceAccessException.class, MyCustomRetryableException.class})
@KafkaListener(topics = "myTopic", groupId = "myGroup", autoStartup = "true", concurrency = "3")
public void consume(@Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
                    @Header("custom_header_1") String customHeader1,
                    @Header("custom_header_2") String customHeader2,
                    @Header("custom_header_3") String customHeader3,
                    @Header(required = false, name = KafkaHeaders.RECEIVED_MESSAGE_KEY) String key,
                    @Payload(required = false) String message) {
    log.info("-------------------------");
    log.info(key);
    log.info(message);
    log.info("-------------------------");
}
I have used the @RetryableTopic annotation to handle errors. I have written a custom exception class, and whenever a method throws my custom exception (MyCustomRetryableException.class), the message is retried according to the backoff and number of attempts defined in the annotation. So here I don't have to do anything: Kafka simply publishes failing messages to the correct DLT topic. All I have to do is create the DLT-related topics, since I have used autoCreateTopics = "false".
Now I'm trying to consume messages in batches. I changed my Kafka config like below in order to consume batch-wise:
@Bean
public ConsumerFactory<String, Object> consumerFactory() {
    Map<String, Object> config = new HashMap<>();
    // default configs like bootstrap servers, key and value deserializers are here
    config.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "5");
    return new DefaultKafkaConsumerFactory<>(config);
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.getContainerProperties().setCommitLogLevel(LogIfLevelEnabled.Level.DEBUG);
    factory.setBatchListener(true);
    return factory;
}
Now that I have added a batch listener, @RetryableTopic is no longer supported. So how can I publish failed messages to the DLT, which was previously handled by @RetryableTopic?
If anyone can answer with an example it would be great. Thank you in advance.
See the documentation.
Use a DefaultErrorHandler with a DeadLetterPublishingRecoverer.
Non-blocking retries are not supported; retries will use the configured BackOff.
Throw a BatchListenerFailedException to indicate which record in the batch failed, and only that one will be sent to the DLT.
With any other exception, the whole batch will be retried (and sent to the DLT if retries are exhausted).
https://docs.spring.io/spring-kafka/docs/current/reference/html/#retrying-batch-eh
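As a rough sketch of that answer (the bean names, the injected KafkaTemplate, and the back-off values are assumptions on my part, not stated in the answer; DefaultErrorHandler and DeadLetterPublishingRecoverer live in org.springframework.kafka.listener, FixedBackOff in org.springframework.util.backoff):
@Bean
public DefaultErrorHandler batchErrorHandler(KafkaTemplate<String, Object> kafkaTemplate) {
    // After the retries below are exhausted, publish the failing record to <topic>.DLT
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(kafkaTemplate);
    // Blocking retries: 3 attempts, 1 second apart (values are illustrative)
    return new DefaultErrorHandler(recoverer, new FixedBackOff(1000L, 3L));
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory(
        DefaultErrorHandler batchErrorHandler) {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setBatchListener(true);
    // The common error handler replaces what @RetryableTopic did for the single-record listener
    factory.setCommonErrorHandler(batchErrorHandler);
    return factory;
}
Inside the batch listener, throwing new BatchListenerFailedException("processing failed", failedRecord) (failedRecord being the ConsumerRecord that could not be processed) is what lets the handler commit the records before it and send only that record to the DLT once the back-off is exhausted.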

Spring cloud Kafka does infinite retry when it fails

Currently, I am having an issue where one of the consumer functions throws an error which makes Kafka retry the records again and again.
@Bean
public Consumer<List<RuleEngineSubject>> processCohort() {
    return personDtoList -> {
        for (RuleEngineSubject subject : personDtoList) {
            processSubject(subject);
        }
    };
}
This is the consumer; processSubject throws a custom exception which causes it to fail.
processCohort-in-0:
  destination: internal-process-cohort
  consumer:
    max-attempts: 1
    batch-mode: true
    concurrency: 10
  group: process-cohort-group
The above is my binding configuration for Kafka.
Currently, I am attempting to retry 2 times and then send the record to a dead letter queue, but I have been unsuccessful and am not sure which is the right approach to take.
I have tried to implement a custom handler that handles the error so it does not retry again, but I am not sure how to send the record to a dead letter queue.
@Bean
ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer() {
    return (container, dest, group) -> {
        if (group.equals("process-cohort-group")) {
            container.setBatchErrorHandler(new BatchErrorHandler() {
                @Override
                public void handle(Exception thrownException, ConsumerRecords<?, ?> data) {
                    data.records(dest).forEach(r -> {
                        System.out.println(r.value());
                    });
                    System.out.println("failed payload='{}'" + thrownException.getLocalizedMessage());
                }
            });
        }
    };
}
This stops the infinite retry but does not send anything to a dead letter queue. Can I get suggestions on how to retry two times and then send the record to a dead letter queue? From my understanding the batch listener does not know how to recover when there is an error; could someone help shine a light on this?
Retry 15 times, then publish the record to the topicname.DLT topic:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setCommonErrorHandler(
            new DefaultErrorHandler(
                    new DeadLetterPublishingRecoverer(kafkaTemplate()), kafkaBackOffPolicy()));
    factory.setConsumerFactory(kafkaConsumerFactory());
    return factory;
}

@Bean
public ExponentialBackOffWithMaxRetries kafkaBackOffPolicy() {
    var exponentialBackOff = new ExponentialBackOffWithMaxRetries(15);
    exponentialBackOff.setInitialInterval(Duration.ofMillis(500).toMillis());
    exponentialBackOff.setMultiplier(2);
    exponentialBackOff.setMaxInterval(Duration.ofSeconds(2).toMillis());
    return exponentialBackOff;
}
You need to configure a suitable error handler in the listener container; you can disable retry and DLQ in the binding and use a DeadLetterPublishingRecoverer instead. See the answer to Retry max 3 times when consuming batches in Spring Cloud Stream Kafka Binder. A sketch of that approach follows.
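A rough sketch in spring-cloud-stream terms (assumptions, not code from the linked answer: spring-kafka 2.8+ so that setCommonErrorHandler is available on the container, an injectable KafkaTemplate for dead-letter publishing, and the group name from the question):
@Bean
ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> dltCustomizer(
        KafkaTemplate<Object, Object> kafkaTemplate) {
    return (container, destination, group) -> {
        if ("process-cohort-group".equals(group)) {
            // After two retries, publish the failing record to internal-process-cohort.DLT
            DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(kafkaTemplate);
            container.setCommonErrorHandler(new DefaultErrorHandler(recoverer, new FixedBackOff(1000L, 2L)));
        }
    };
}
Keeping max-attempts: 1 in the binding leaves retries entirely to the container's error handler, so the record is retried twice and then published to the DLT instead of being redelivered forever.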

remote-partitioning in spring-batch with Kafka as middleware

I'm trying to use spring-batch remote-partitioning for scaling the Job and Apache Kafka as the middleware.
Here is a brief configuration of the masterStep:
@Bean
public Step managerStep() {
    return managerStepBuilderFactory.get("managerStep")
            .partitioner("workerStep", filePartitioner)
            .outputChannel(requestForWorkers())
            .inputChannel(repliesFromWorkers())
            .build();
}
So I'm using channels both for sending requests to the workers and for receiving responses from them. I know the other option is to poll the JobRepository (which works fine in my case), but I would rather not use it.
Here are some of the Kafka configs:
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
spring.kafka.producer.properties.spring.json.add.type.headers=true
spring.kafka.consumer.properties.spring.json.trusted.packages=org.springframework.batch.integration.partition,org.springframework.batch.core
The master and the workers are configured, and the master can send requests through Kafka to the workers. The workers start processing, and everything is fine until the workers try to send the response back through Kafka.
As you see, I'm using the JsonSerializer and JsonDeserializer for sending/receiving the messages. The problem is that when Jackson tries to serialize the StepExecution, it falls into an infinite loop, since the StepExecution has a JobExecution in it and the JobExecution also has a List of StepExecutions:
Caused by: org.apache.kafka.common.errors.SerializationException: Can't serialize data [StepExecution: id=3001, version=6, name=workerStep:61127a319d6caf656442ff53, status=COMPLETED, exitStatus=COMPLETED, readCount=10, filterCount=0, writeCount=10 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=4, rollbackCount=0, exitDescription=] for topic [repliesFromWorkers]
Caused by: com.fasterxml.jackson.databind.JsonMappingException: Infinite recursion (StackOverflowError) (through reference chain: org.springframework.batch.core.JobExecution["stepExecutions"]->java.util.Collections$UnmodifiableRandomAccessList[0]->org.springframework.batch.core.StepExecution["jobExecution"]->org.springframework.batch.core.JobExecution["stepExecutions"]->java.util.Collections$UnmodifiableRandomAccessList[0]->org.springframework.batch.core.StepExecution["jobExecution"]->org.springframework.batch.core.JobExecution["stepExecutions"]-....
So I thought maybe I could customize the serialization of the StepExecution so it ignores the List of StepExecutions in the JobExecution of the first StepExecution. But even in this case, it fails on the master side while deserializing this StepExecution:
Caused by: com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot construct instance of `org.springframework.batch.core.StepExecution` (although at least one Creator exists): cannot deserialize from Object value (no delegate- or property-based Creator)
Is there any way to make this work?
I'm using Spring Boot 2.4.2 and the corresponding versions of spring-boot-starter-batch, spring-batch-integration, spring-integration-kafka and spring-kafka.
You can create a custom (de)serializer and handle it manually. Something like this will help:
public class KafkaStringOrByteSerializer<T> extends JsonSerializer<T> {

    private final Serializer<Object> byteSerializer = new DefaultSerializer();
    private final org.apache.kafka.common.serialization.Serializer<String> stringSerializer = new StringSerializer();

    @Override
    public byte[] serialize(String topic, T data) {
        if (needsBinarySerializer(data)) {
            return this.serializeBinary(data);
        } else {
            return stringSerializer.serialize(topic, (String) data);
        }
    }

    private boolean needsBinarySerializer(Object data) {
        if (data instanceof byte[] || data instanceof Byte[] || data instanceof Byte) {
            return true;
        }
        if (data != null && data.getClass() != null) {
            return (data.getClass().getName()).startsWith("org.springframework.batch");
        }
        return false;
    }

    private byte[] serializeBinary(Object data) {
        try (ByteArrayOutputStream output = new ByteArrayOutputStream()) {
            byteSerializer.serialize(data, output);
            return output.toByteArray();
        } catch (IOException e) {
            throw new MessageConversionException("Cannot convert object to bytes", e);
        }
    }
}
A similar approach can be taken for the deserializer; a rough sketch is shown below.
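For completeness, one possible shape for that deserializer (an illustration of mine, not code from the original answer: it assumes the Spring Batch messages were JDK-serialized by the serializer above and treats everything else as a plain string; Deserializer here is org.apache.kafka.common.serialization.Deserializer and DefaultDeserializer comes from org.springframework.core.serializer):
public class KafkaStringOrByteDeserializer implements Deserializer<Object> {

    private final DefaultDeserializer byteDeserializer = new DefaultDeserializer();
    private final StringDeserializer stringDeserializer = new StringDeserializer();

    @Override
    public Object deserialize(String topic, byte[] data) {
        if (data == null) {
            return null;
        }
        // Assumption: Spring Batch envelopes were written with JDK serialization by the
        // custom serializer above; anything else is treated as a plain string
        try (ByteArrayInputStream input = new ByteArrayInputStream(data)) {
            return byteDeserializer.deserialize(input);
        } catch (Exception e) {
            return stringDeserializer.deserialize(topic, data);
        }
    }
}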

Kafka streams exactly once delivery

My goal is to consume from topic A, do some processing and produce to topic B, as a single atomic action. To achieve this I see two options:
Use a spring-kafka @KafkaListener and a KafkaTemplate, as described here.
Use Streams eos (exactly-once) functionality.
I have successfully verified option #1. By successfully, I mean that if my processing fails (IllegalArgumentException is thrown) the consumed message from topic A keeps being consumed by the KafkaListener. This is what I expect, as the offset is not committed and DefaultAfterRollbackProcessor is used.
I am expecting to see the same behaviour if, instead of a KafkaListener, I use a stream for consuming from topic A, processing, and sending to topic B (option #2). But even though an IllegalArgumentException is thrown while I process, the message is only consumed once by the stream. Is this the expected behaviour?
In the Streams case the only configuration I have is the following:
@Configuration
@EnableKafkaStreams
public class KafkaStreamsConfiguration {

    @Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
    public StreamsConfig kStreamsConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "http://localhost:9092");
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "calculate-tax-sender-invoice-stream");
        props.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://localhost:8082");
        // this should be enough to enable transactions
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
        return new StreamsConfig(props);
    }

    // required to create and start a new KafkaStreams, as when an exception is thrown the stream dies
    // see here: https://docs.spring.io/spring-kafka/reference/html/_reference.html#after-rollback
    @Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_BUILDER_BEAN_NAME)
    public StreamsBuilderFactoryBean myKStreamBuilder(StreamsConfig streamsConfig) {
        StreamsBuilderFactoryBean streamsBuilderFactoryBean = new StreamsBuilderFactoryBean(streamsConfig);
        streamsBuilderFactoryBean.setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
            @Override
            public void uncaughtException(Thread t, Throwable e) {
                log.debug("StopStartStreamsUncaughtExceptionHandler caught exception {}, stopping StreamsThread ..", e);
                streamsBuilderFactoryBean.stop();
                log.debug("creating and starting a new StreamsThread ..");
                streamsBuilderFactoryBean.start();
            }
        });
        return streamsBuilderFactoryBean;
    }
}
My Stream is like this:
@Autowired
public SpecificAvroSerde<InvoiceEvents> eventSerde;

@Autowired
private TaxService taxService;

@Bean
public KStream<String, InvoiceEvents> kStream(StreamsBuilder builder) {
    KStream<String, InvoiceEvents> kStream = builder.stream("A",
            Consumed.with(Serdes.String(), eventSerde));
    kStream
            .mapValues(v -> {
                // get tax from possibly remote service
                // an IllegalArgumentException("Tax calculation failed") is thrown by getTaxForInvoice()
                int tax = taxService.getTaxForInvoice(v);
                // create a TaxCalculated event
                InvoiceEvents taxCalculatedEvent = InvoiceEvents.newBuilder().setType(InvoiceEvent.TaxCalculated).setTax(tax).build();
                log.debug("Generating TaxCalculated event: {}", taxCalculatedEvent);
                return taxCalculatedEvent;
            })
            .to("B", Produced.with(Serdes.String(), eventSerde));
    return kStream;
}
The happy-path streams scenario works: if no exception is thrown while processing, the message appears properly in topic B.
My unit test:
@Test
public void calculateTaxForInvoiceTaxCalculationFailed() throws Exception {
    log.debug("running test calculateTaxForInvoiceTaxCalculationFailed..");
    Mockito.when(taxService.getTaxForInvoice(any(InvoiceEvents.class)))
            .thenThrow(new IllegalArgumentException("Tax calculation failed"));
    InvoiceEvents invoiceCreatedEvent = createInvoiceCreatedEvent();
    List<KeyValue<String, InvoiceEvents>> inputEvents = Arrays.asList(
            new KeyValue<String, InvoiceEvents>("A", invoiceCreatedEvent));
    Properties producerConfig = new Properties();
    producerConfig.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "http://localhost:9092");
    producerConfig.put(ProducerConfig.ACKS_CONFIG, "all");
    producerConfig.put(ProducerConfig.RETRIES_CONFIG, 1);
    producerConfig.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    producerConfig.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class.getName());
    producerConfig.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://localhost:8082");
    producerConfig.put(ProducerConfig.CLIENT_ID_CONFIG, "unit-test-producer");
    // produce with key
    IntegrationTestUtils.produceKeyValuesSynchronously("A", inputEvents, producerConfig);
    // wait for 30 seconds - I should observe re-consumptions of invoiceCreatedEvent, but I do not
    Thread.sleep(30000);
    // ...
}
Update:
In my unit test I sent 50 invoiceEvents (orderId=1,...,50), processed them, and sent them to a destination topic.
In my logs the behaviour I see is as follows:
invoiceEvent.orderId = 43 → consumed and successfully processed
invoiceEvent.orderId = 44 → consumed and IllegalArgumentException thrown
..new stream starts..
invoiceEvent.orderId = 44 → consumed and successfully processed
invoiceEvent.orderId = 45 → consumed and successfully processed
invoiceEvent.orderId = 46 → consumed and successfully processed
invoiceEvent.orderId = 47 → consumed and successfully processed
invoiceEvent.orderId = 48 → consumed and successfully processed
invoiceEvent.orderId = 49 → consumed and successfully processed
invoiceEvent.orderId = 50 → consumed and IllegalArgumentException thrown
...
[29-0_0-producer] task [0_0] Error sending record (key A value {"type": ..., "payload": {"id": "46", ... }}} timestamp 1529583666036) to topic invoice-with-tax.t due to {}; No more records will be sent and no more offsets will be recorded for this task.
..new stream starts..
invoiceEvent.orderId = 46 → consumed and successfully processed
invoiceEvent.orderId = 47 → consumed and successfully processed
invoiceEvent.orderId = 48 → consumed and successfully processed
invoiceEvent.orderId = 49 → consumed and successfully processed
invoiceEvent.orderId = 50 → consumed and successfully processed
Why after the 2nd failure, it re-consumes from invoiceEvent.orderId = 46?
The key points to have option 2 (Streams Transactions) working are:
Assign a Thread.UncaughtExceptionHandler() so that you start a new StreamThread in case of any uncaught exception (by default the StreamThread dies - see the code snippet that follows). This can even happen if producing to the Kafka broker fails; it does not have to be related to your business logic code in the stream.
Consider setting a policy for handling de-serialization of messages (when you consume). Check DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG (javadoc): for example, should you ignore it and consume the next message, or stop consuming from the relevant Kafka partition? (A small example follows the configuration below.)
In the case of Streams, even if you set MAX_POLL_RECORDS_CONFIG=1 (one record per poll/batch), consumed offsets and produced messages are still not committed per message. This leads to cases like the one described in the question (see "Why after the 2nd failure, it re-consumes from invoiceEvent.orderId = 46?").
Kafka transactions simply do not work on Windows yet. The fix will be delivered in Kafka 1.1.1 (https://issues.apache.org/jira/browse/KAFKA-6052).
Consider checking how you handle serialization exceptions (or, in general, exceptions during production) (here and here).
@Configuration
@EnableKafkaStreams
public class KafkaStreamsConfiguration {

    @Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
    public StreamsConfig kStreamsConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "http://localhost:9092");
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "blabla");
        props.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://localhost:8082");
        // this should be enough to enable transactions
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
        return new StreamsConfig(props);
    }

    @Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_BUILDER_BEAN_NAME)
    public StreamsBuilderFactoryBean myKStreamBuilder(StreamsConfig streamsConfig) {
        StreamsBuilderFactoryBean streamsBuilderFactoryBean = new StreamsBuilderFactoryBean(streamsConfig);
        streamsBuilderFactoryBean.setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
            @Override
            public void uncaughtException(Thread t, Throwable e) {
                log.debug("StopStartStreamsUncaughtExceptionHandler caught exception {}, stopping StreamsThread ..", e);
                streamsBuilderFactoryBean.stop();
                log.debug("creating and starting a new StreamsThread ..");
                streamsBuilderFactoryBean.start();
            }
        });
        return streamsBuilderFactoryBean;
    }
}
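For the second point above, a small illustration (an addition of mine, not part of the original answer): Kafka Streams ships a LogAndContinueExceptionHandler (org.apache.kafka.streams.errors) that logs a record that cannot be deserialized and moves on, instead of letting the stream thread die.
// added to the props map in kStreamsConfigs() above
props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
        LogAndContinueExceptionHandler.class);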

Kafka Consumer not getting invoked when the kafka Producer is set to Sync

I have a requirement where there are 2 topics to be maintained, one with a synchronous approach and the other with an asynchronous one.
The asynchronous one works as expected, invoking the consumer; however, with the synchronous approach the consumer code is not getting invoked.
Below is the producer configuration declared in the config file:
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9093");
props.put(ProducerConfig.RETRIES_CONFIG, 3);
props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
props.put(ProducerConfig.ACKS_CONFIG, "all");
props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
I have enabled autoFlush (true) here:
@Bean(name = "KafkaPayloadSyncTemplate")
public KafkaTemplate<String, KafkaPayload> KafkaPayloadSyncTemplate() {
    return new KafkaTemplate<String, KafkaPayload>(producerFactory(), true);
}
The control stops thereafter, not making any calls to the consumer after returning the recordMetadataResults object:
private List<RecordMetadata> sendPayloadToKafkaTopicInSync() throws InterruptedException, ExecutionException {
    final List<RecordMetadata> recordMetadataResults = new ArrayList<RecordMetadata>();
    KafkaPayload kafkaPayload = constructKafkaPayload();
    ListenableFuture<SendResult<String, KafkaPayload>> future =
            KafkaPayloadSyncTemplate.send(TestTopic, kafkaPayload);
    SendResult<String, KafkaPayload> results;
    results = future.get();
    recordMetadataResults.add(results.getRecordMetadata());
    return recordMetadataResults;
}
Consumer Code
public class KafkaTestListener {

    @Autowired
    TestServiceImpl TestServiceImpl;

    public final CountDownLatch countDownLatch = new CountDownLatch(1);

    @KafkaListener(id = "POC", topics = "TestTopic", group = "TestGroup")
    public void listen(ConsumerRecord<String, KafkaPayload> record, Acknowledgment acknowledgment) {
        countDownLatch.countDown();
        TestServiceImpl.consumeKafkaMessage(record);
        System.out.println("Acknowledgment : " + acknowledgment);
        acknowledgment.acknowledge();
    }
}
Based on the issue, I have 2 questions:
Should we manually call listen() inside the listener class when it's a sync producer? If yes, how do I do that?
If the listener (@KafkaListener) gets called automatically, what other setup/configuration do I need to add to make this work?
Thanks for the inputs in advance.
-Srikant
You should be sure that you use consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); in your consumer properties.
Not sure what you mean by sync/async, but produce and consume are fully independent operations, and you can't affect the consumer from your producer side, because the Kafka broker sits in between.
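For illustration, that property belongs in the consumer factory configuration; a minimal sketch (the bean name, the deserializers, and the group are assumptions based on the question, not a verified setup):
@Bean
public ConsumerFactory<String, KafkaPayload> consumerFactory() {
    Map<String, Object> consumerProps = new HashMap<>();
    consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9093");
    consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "TestGroup");
    consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    // Without this, a group that connects for the first time after the record was published
    // starts at the latest offset and never sees the already-sent message
    consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    return new DefaultKafkaConsumerFactory<>(consumerProps);
}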