I was wondering if there is a way to catch an exception/throwable when producing a Kafka message via the KafkaTemplate.
I don't see anything that would throw a KafkaException if something fails. I need to find out whether the message was committed to Kafka before I can continue with my application flow. I know the ListenableFuture callback logs a failure, but I don't know how to propagate that onFailure result back to the caller.
public void sendToKafkaTopic(KafkaMessage data) {
    ListenableFuture<SendResult<String, KafkaMessage>> future = kafkaTemplate.send(primaryKafkaTopic, data);
    future.addCallback(new ListenableFutureCallback<SendResult<String, KafkaMessage>>() {
        @Override
        public void onSuccess(SendResult<String, KafkaMessage> result) {
            log.info("sent message='{}' with offset={}", data,
                    result.getRecordMetadata().offset());
        }

        @Override
        public void onFailure(Throwable ex) {
            log.error("unable to send message='{}'", data, ex);
        }
    });
}
I was trying to find something like this:
public void sendToKafkaTopic(KafkaMessage data) {
    try {
        kafkaTemplate.send(primaryKafkaTopic, data);
    } catch (RuntimeException ex) {
        log.error("unable to send message='{}'", data, ex);
        throw new CustomException(ex);
    }
}
before I can continue with my application flow
There is nothing wrong with just calling Future.get(): when an exception happens downstream, it is thrown to you from that blocking get().
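For example, a minimal sketch reusing the names from the question (the timeout and CustomException are assumptions, not part of the original code):
public void sendToKafkaTopic(KafkaMessage data) {
    try {
        // get() blocks until the broker acknowledges the send, or throws if it failed.
        SendResult<String, KafkaMessage> result =
                kafkaTemplate.send(primaryKafkaTopic, data).get(10, TimeUnit.SECONDS);
        log.info("sent message='{}' with offset={}", data, result.getRecordMetadata().offset());
    } catch (InterruptedException ex) {
        Thread.currentThread().interrupt();
        throw new CustomException(ex);
    } catch (ExecutionException | TimeoutException ex) {
        // The real send failure is the cause of the ExecutionException.
        log.error("unable to send message='{}'", data, ex);
        throw new CustomException(ex);
    }
}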
Related
How can I get into "@Override public void onFailure(Throwable ex) { ... }"?
It always goes into "@Override public void onSuccess(SendResult<String, KafkaPgResponseDto> result) {...}".
I want to print
log.error("Unable to send message=["+kafkaPgResponseDto.toString()+"] due to : " + ex.getMessage());
Please help...
ListenableFuture<SendResult<String, KafkaPgResponseDto>> future = pgResponseKafkaTemplate.send(kurlyTopicNamePgResponse, kafkaPgResponseDto);
future.addCallback(new ListenableFutureCallback<SendResult<String, KafkaPgResponseDto>>() {
    @Override
    public void onSuccess(SendResult<String, KafkaPgResponseDto> result) {
        KafkaPgResponseDto kafkaPgResponseDto = result.getProducerRecord().value();
        log.debug("Send message=[" + kafkaPgResponseDto.toString() + "] with offset=[" + result.getRecordMetadata().offset() + "]");
    }

    @Override
    public void onFailure(Throwable ex) {
        log.error("Unable to send message=[" + kafkaPgResponseDto.toString() + "] due to : " + ex.getMessage());
        kafkaTransactionService.failedProcessingKafkaHistorySave(orderNo, kurlyTopicNamePgResponse, gson.toJson(payload), ex.toString());
    }
});
I believe you don't need a real Kafka broker to test this functionality. Consider using a MockProducer injected into that KafkaTemplate and emulating an error to exercise the onFailure() case: https://www.baeldung.com/kafka-mockproducer
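A minimal sketch of that setup, assuming String serializers and a throwaway topic name (with autoComplete=false the test drives the outcome explicitly):
MockProducer<String, String> mockProducer =
        new MockProducer<>(false, new StringSerializer(), new StringSerializer());

// A ProducerFactory that hands the template the mock instead of a real producer.
ProducerFactory<String, String> producerFactory = new ProducerFactory<String, String>() {
    @Override
    public Producer<String, String> createProducer() {
        return mockProducer;
    }
};
KafkaTemplate<String, String> template = new KafkaTemplate<>(producerFactory);

template.send("some-topic", "payload").addCallback(
        result -> System.out.println("onSuccess"),
        ex -> System.out.println("onFailure: " + ex.getMessage()));

// Fail the pending send so the onFailure branch runs.
mockProducer.errorNext(new RuntimeException("simulated broker error"));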
We are using spring-kafka 2.3.0 in our app. We have observed some processing glitches in the scenarios below with this service:
@Service
@EnableScheduling
public class KafkaService {

    public void sendToKafkaProducer(String data) {
        kafkaTemplate.send(configuration.getProducer().getTopicName(), data);
    }

    @KafkaListener(id = "consumer_grpA_id",
            topics = "#{__listener.getEnvironmentConfiguration().getConsumer().getTopicName()}",
            groupId = "consumer_grpA", autoStartup = "false")
    public void onMessage(ConsumerRecord<String, String> data) throws Exception {
        passA(data.value());
    }

    private void passB(String message) {
        // counter to keep track of retry attempts
        if (counter.containsKey(message.getEventID())) {
            // RETRY_COUNT = 5
            if (counter.get(message.getEventID()) < RETRY_COUNT) {
                retryAgain(message);
            }
        } else {
            firstRetryPass(message);
        }
    }

    private void retryAgain(String message) {
        counter.put(message.getEventID(), counter.get(message.getEventID()) + 1);
        try {
            registry.stop(); // pause the listener
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void firstRetryPass(String message) {
        // first-time entry for count and time
        counter.put(message.getEventID(), 1);
        try {
            registry.stop(); // pause the listener
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void passA(String message) {
        try {
            passToTarget(message); // call the target processor
            LOGGER.info("Message Processed Successfully to the target");
        } catch (Exception e) {
            targetUnavailable = true;
            passB(message);
        }
    }

    private void passToTarget(String message) {
        // processor logic; if the target is not available, retry after 15 mins via passB
    }

    @Scheduled(cron = "0 0/15 * 1/1 * ?")
    public void scheduledMethod() {
        try {
            if (targetUnavailable) {
                registry.start();
                firstTimeStart = false;
            }
            LOGGER.info(">>>Scheduler Running ?>>>" + registry.isRunning());
        } catch (Exception e) {
            LOGGER.error(e.getMessage());
        }
    }
}
On receipt of the first message after a gap in processing, the consumer doesn't pick it up; the subsequent messages are processed.
As we don't have direct access to the Kafka topics, we can't identify which events the consumer didn't pick up.
How do we track the events that are not picked up, and why does this happen?
We also configured a scheduler whose job is to keep the Kafka listener registry running. Is this scheduler required when we already have a listener configured?
What are the memory and CPU utilization implications of keeping the listener running? That was one of the reasons we used the registry to stop the listener explicitly whenever the target is down, so we need to validate whether this approach is sustainable. My hunch is that it works against the basic design of the listener, whose main job is to keep listening for new events regardless of the target's status.
You shouldn't stop the registry on the listener thread unless you use stop(Runnable) - otherwise there will be a deadlock and a delay since the container waits for the listener to exit.
Stopping the container (via the registry) won't actually take effect until any remaining records fetched by the last poll have been processed (unless you set max.poll.records=1).
When the listener exits normally, the record's offset will be committed so that record will not be redelivered on the next start.
You can use the ContainerStoppingErrorHandler for this use case. See here.
Throw an exception and the error handler will stop the container for you.
But that will stop the container on the first try.
If you want retries, use a SeekToCurrentErrorHandler and call the ContainerStoppingErrorHandler from the recoverer after retries are exhausted.
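A hedged sketch of that combination, keeping the listener id and registry from the question (the retry count is illustrative; spring-kafka 2.3's SeekToCurrentErrorHandler accepts a recoverer plus an org.springframework.util.backoff.FixedBackOff). Here the recoverer stops the container through the registry, which likewise halts consumption once retries are exhausted:
@Bean
public SeekToCurrentErrorHandler errorHandler(KafkaListenerEndpointRegistry registry) {
    // The recoverer runs only after the retries are exhausted.
    return new SeekToCurrentErrorHandler((record, exception) -> {
        MessageListenerContainer container = registry.getListenerContainer("consumer_grpA_id");
        if (container != null) {
            container.stop(() -> { }); // stop(Runnable) avoids blocking the listener thread
        }
    }, new FixedBackOff(0L, 4L)); // 5 deliveries in total: the original plus 4 retries
}
Wire it into your container factory with factory.setErrorHandler(errorHandler).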
Is there a way for me to retrieve the error codes (https://kafka.apache.org/protocol#protocol_error_codes) from Kafka when my producer fails to publish a message successfully?
Could you kindly provide a sample POC for retrieving the error codes in Java?
You can register callbacks on the future returned by the producer, for success as well as failure. Please see the code below:
public ListenableFuture<SendResult<K, V>> sendMessage(String topicName, V message) {
    log.info("sending message: {} to topic {}", message, topicName);
    ListenableFuture<SendResult<K, V>> future = kafkaTemplate.send(topicName, message);
    addCallback(message, future);
    return future;
}

private void addCallback(V message, ListenableFuture<SendResult<K, V>> future) {
    future.addCallback(new ListenableFutureCallback<SendResult<K, V>>() {
        @Override
        public void onSuccess(SendResult<K, V> result) {
            // callback for success; use result to inspect attributes such as
            // the offset via result.getRecordMetadata().offset(), etc.
        }

        @Override
        public void onFailure(Throwable ex) {
            // use ex to inspect the exception that occurred while
            // publishing the message
        }
    });
}
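For the numeric protocol error codes specifically, a hedged sketch of an onFailure body: kafka-clients surfaces protocol errors as org.apache.kafka.common.errors.ApiException subclasses, and org.apache.kafka.common.protocol.Errors.forException(...) maps an exception back to its wire code (unwrapping the cause assumes Spring wraps the original failure):
@Override
public void onFailure(Throwable ex) {
    // Spring may wrap the producer failure, so inspect the cause first.
    Throwable cause = (ex.getCause() != null) ? ex.getCause() : ex;
    if (cause instanceof ApiException) {
        Errors error = Errors.forException(cause);
        log.error("send failed with protocol error {} (code {})", error.name(), error.code());
    } else {
        log.error("send failed", ex);
    }
}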
I used Guava's ListeningExecutorService in my project and got confused about the exception handling.
I created a thread pool, submitted a task to it, set a timeout on the ListenableFuture, and added a callback to it.
final ListeningExecutorService threadPool = MoreExecutors.listeningDecorator(Executors.newCachedThreadPool());
Futures.addCallback(listenableFuture, new FutureCallback<**>() {
    @Override
    public void onSuccess(@Nullable ** data) {
        xxxxxxx;
    }

    @Override
    public void onFailure(Throwable t) {
        xxxxxxxxxx;
        if (t instanceof CancellationException) {
            throw new QueryException("yyyy");
        } else {
            throw new QueryException("zzzzz");
        }
    }
});
I can't catch the exception thrown inside the callback, so I used another ListenableFuture to collect the failures:
ListenableFuture allFutures = Futures.allAsList(allFuturesList);
try {
    allFutures.get();
} catch (CancellationException ce) {
    throw new QueryException("");
} catch (InterruptedException t) {
    throw new QueryException("");
} catch (ExecutionException e) {
    Throwable t = e.getCause();
    if (t instanceof QueryException)
        throw (QueryException) t;
    else
        throw new QueryException();
} catch (QueryException qe) {
    throw qe;
} catch (Exception e) {
    throw new QueryException();
}
But when the callback throws a QueryException, allFutures doesn't surface it; allFutures.get() only throws a CancellationException without the detailed error message.
How can I get my detailed error message?
Futures.allAsList doesn't do what you expect it to do
From the Javadoc (emphasis is mine):
Canceling this future will attempt to cancel all the component futures, and if any of the provided futures fails or is canceled, this one is, too.
What you should probably do is create your own aggregating future. You can base your code on Guava's own internal mechanism. See the source code for more info.
In any case, do not throw exceptions from your FutureCallback::onFailure method; they propagate on whatever thread runs the callback, not back to the code waiting on the future.
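A minimal sketch of that advice: instead of throwing from the callback, inspect each future yourself and unwrap the ExecutionException, whose cause is the exception the task actually threw (QueryException is assumed, as in the question, to be unchecked with a String constructor):
for (ListenableFuture<?> future : allFuturesList) {
    try {
        future.get();
    } catch (ExecutionException e) {
        // The cause carries the detailed message from the failed task.
        throw new QueryException(e.getCause().getMessage());
    } catch (CancellationException e) {
        throw new QueryException("task cancelled, e.g. by the timeout");
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new QueryException("interrupted while waiting");
    }
}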
This is my code. It seems to execute only one request at a time:
public class RestFulService extends AbstractVerticle {

    @Override
    public void start() throws Exception {
        Router router = Router.router(vertx);
        router.get("/test/hello/:input").handler(new Handler<RoutingContext>() {
            @Override
            public void handle(RoutingContext routingContext) {
                WorkerExecutor executor = vertx.createSharedWorkerExecutor("my-worker-pool", 10, 120000);
                executor.executeBlocking(future -> {
                    try {
                        Thread.sleep(5000);
                        future.complete();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }, false, res -> {
                    System.out.println("The result is: " + res.result());
                    routingContext.response().end("routing1" + res.result());
                    executor.close();
                });
            }
        });
    }
}
When I send 10 requests from the browser at the same time, it takes 50000 ms to complete them all.
Please guide me in fixing this.
Try with curl; I suspect your browser is using the same connection for all requests (thus waiting for a response before sending the next request).
By the way, you don't need to call createSharedWorkerExecutor on each request. You can do it once when the verticle is started.
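A minimal sketch of that change (the HTTP server setup at the end is an assumption, since the question doesn't show it; the pool name and sizes are copied from the question):
public class RestFulService extends AbstractVerticle {

    private WorkerExecutor executor;

    @Override
    public void start() throws Exception {
        // Create the worker pool once, not on every request.
        executor = vertx.createSharedWorkerExecutor("my-worker-pool", 10, 120000);

        Router router = Router.router(vertx);
        router.get("/test/hello/:input").handler(routingContext ->
            executor.executeBlocking(future -> {
                try {
                    Thread.sleep(5000); // the blocking work from the question
                    future.complete();
                } catch (InterruptedException e) {
                    future.fail(e);
                }
            }, false, res ->
                routingContext.response().end("routing1" + res.result())));

        // Assumed: the question doesn't show how the server is started.
        vertx.createHttpServer().requestHandler(router).listen(8080);
    }
}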