I have two subscribers pointing to the same subscription of a topic.
As per the documentation, Pub/Sub redelivers a message if the subscriber takes longer than the acknowledgement deadline to acknowledge it.
I have configured the default value, which is 10 seconds, but processing takes approximately 1 minute to complete and acknowledge.
Below is my sample code:
import com.google.cloud.pubsub.v1.AckReplyConsumer;
import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.pubsub.v1.ProjectSubscriptionName;
import com.google.pubsub.v1.PubsubMessage;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;

public class SubscribeAsyncExample {

    private Subscriber subscriber = null;

    @PostConstruct
    public void init() throws Exception {
        // TODO(developer): Replace these variables before running the sample.
        String projectId = "your-project-id";
        String subscriptionId = "your-subscription-id";
        subscribeAsyncExample(projectId, subscriptionId);
    }

    public void subscribeAsyncExample(String projectId, String subscriptionId) {
        ProjectSubscriptionName subscriptionName = ProjectSubscriptionName.of(projectId, subscriptionId);
        // Instantiate an asynchronous message receiver.
        MessageReceiver receiver = (PubsubMessage message, AckReplyConsumer consumer) -> {
            // Handle the incoming message, then ack it.
            System.out.println("Id: " + message.getMessageId());
            System.out.println("Data: " + message.getData().toStringUtf8());
            int sleepingTime = 20000; // simulate slow processing (20s, beyond the 10s ack deadline)
            System.out.println("sleepingTime:" + sleepingTime);
            try {
                Thread.sleep(sleepingTime);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            consumer.ack();
            System.out.println("test completed");
        };
        try {
            subscriber = Subscriber.newBuilder(subscriptionName, receiver).build();
            // Start the subscriber.
            subscriber.startAsync().awaitRunning();
            System.out.printf("Listening for messages on %s:\n", subscriptionName.toString());
            // Allow the subscriber to run for 30s unless an unrecoverable error occurs.
            // subscriber.awaitTerminated(30, TimeUnit.SECONDS);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @PreDestroy
    public void preDestroy() throws Exception {
        // Shut down the subscriber. Stop receiving messages.
        subscriber.stopAsync();
    }
}
Below is the log output:
20:53:24,300 INFO [stdout] (Thread-128) Id: 1288313732423842
20:53:24,300 INFO [stdout] (Thread-128) Data: abc13
**20:53:24,300 INFO** [stdout] (Thread-128) sleepingTime:20000
**20:53:44,300 INFO** [stdout] (Thread-128) test completed
When using the Cloud Pub/Sub client libraries, the deadline is automatically extended, up to the MaxAckExtensionPeriod specified in the Subscriber.Builder. This extension period defaults to one hour. To change this value, you'd want to change the line that creates the subscriber as follows:
subscriber = Subscriber.newBuilder(subscriptionName, receiver)
.setMaxAckExtensionPeriod(Duration.ofSeconds(10))
.build();
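Conversely, if the goal is to let a handler like the ~1 minute one above finish without redelivery, a minimal sketch that raises the ceiling past the worst-case processing time (the 2 minute margin is an assumption, not a recommended value):
subscriber = Subscriber.newBuilder(subscriptionName, receiver)
    // Keep extending the ack deadline for up to 2 minutes while the
    // handler runs, so the message is not redelivered mid-processing.
    .setMaxAckExtensionPeriod(Duration.ofMinutes(2))
    .build();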
Related
We are using spring-kafka 2.3.0 in our app and have observed some processing glitches in the scenarios below.
@Service
@EnableScheduling
public class KafkaService {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaService.class);
    private static final int RETRY_COUNT = 5;

    // Supporting state used below.
    private final Map<String, Integer> counter = new ConcurrentHashMap<>();
    private volatile boolean targetUnavailable;
    private volatile boolean firstTimeStart;

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    @Autowired
    private EnvironmentConfiguration configuration;

    public EnvironmentConfiguration getEnvironmentConfiguration() {
        return configuration;
    }

    public void sendToKafkaProducer(String data) {
        kafkaTemplate.send(configuration.getProducer().getTopicName(), data);
    }

    @KafkaListener(id = "consumer_grpA_id",
            topics = "#{__listener.getEnvironmentConfiguration().getConsumer().getTopicName()}",
            groupId = "consumer_grpA", autoStartup = "false")
    public void onMessage(ConsumerRecord<String, String> data) throws Exception {
        passA(data.value());
    }

    private void passB(String message) {
        // counter keeps track of retry attempts, keyed by the event (RETRY_COUNT = 5)
        if (counter.containsKey(message)) {
            if (counter.get(message) < RETRY_COUNT) {
                retryAgain(message);
            }
        } else {
            firstRetryPass(message);
        }
    }

    private void retryAgain(String message) {
        counter.put(message, counter.get(message) + 1);
        try {
            registry.stop(); // pause the listener
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void firstRetryPass(String message) {
        // First-time entry for count and time.
        counter.put(message, 1);
        try {
            registry.stop(); // pause the listener
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void passA(String message) {
        try {
            passToTarget(message); // call the target processor
            LOGGER.info("Message Processed Successfully to the target");
        } catch (Exception e) {
            targetUnavailable = true;
            passB(message);
        }
    }

    private void passToTarget(String message) {
        // Processor logic; if the target is not available, retry after 15 mins via passB.
    }

    @Scheduled(cron = "0 0/15 * 1/1 * ?")
    public void scheduledMethod() {
        try {
            if (targetUnavailable) {
                registry.start();
                firstTimeStart = false;
            }
            LOGGER.info(">>>Scheduler Running ?>>>" + registry.isRunning());
        } catch (Exception e) {
            LOGGER.error(e.getMessage());
        }
    }
}
On receipt of the first message after a gap in processing, the consumer doesn't pick up that first message; the subsequent messages are processed.
As we don't have direct access to the Kafka topics, we aren't able to identify which event wasn't picked up by the consumer.
How do we track the events that are not picked up, and why does this happen?
We also configured a scheduler whose job is to keep the Kafka listener registry running. Is this scheduler required when we already have a listener configured?
What are the memory and CPU utilization implications of keeping the listener running? That was one of the reasons we used the registry to stop the listener explicitly whenever the target is down, so we need to validate whether this approach is sustainable. My hunch is that it works against the basic design of a listener, whose main job is to keep listening for new events irrespective of the target's status.
You shouldn't stop the registry on the listener thread unless you use stop(Runnable); otherwise there will be a deadlock and a delay, since the container waits for the listener to exit.
Stopping the container (via the registry) won't actually take effect until any remaining records fetched by the last poll have been processed (unless you set max.poll.records=1).
When the listener exits normally, the record's offset will be committed so that record will not be redelivered on the next start.
You can use the ContainerStoppingErrorHandler for this use case.
Throw an exception and the error handler will stop the container for you.
But that will stop the container on the first try.
If you want retries, use a SeekToCurrentErrorHandler and call the ContainerStoppingErrorHandler from the recoverer after retries are exhausted.
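A minimal sketch of that combination, assuming spring-kafka 2.3's BackOff-based constructor and the container id consumer_grpA_id from the listener above (the bean wiring is illustrative):
@Bean
public SeekToCurrentErrorHandler errorHandler(KafkaListenerEndpointRegistry registry) {
    // Replay a failed record 4 more times (5 delivery attempts in all);
    // once retries are exhausted, the recoverer stops the container.
    return new SeekToCurrentErrorHandler((record, exception) -> {
        // stop(Runnable) avoids blocking the listener thread (see the
        // deadlock note above); the callback is a no-op here.
        registry.getListenerContainer("consumer_grpA_id").stop(() -> { });
    }, new FixedBackOff(0L, 4L));
}
Set it on the container factory with factory.setErrorHandler(errorHandler) and throw from the listener instead of calling registry.stop() inline.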
I am using ReplyingKafkaTemplate to establish a synchronous call between two microservices.
The receiver of the event is annotated with @SendTo as below:
@KafkaListener(topics = "${kafka.topic.prefix}" + "${kafka.topic.name}",
        containerFactory = "customEventKafkaListenerFactory")
@SendTo
public CustomResponseEvent onMessage(
        @Payload @Valid CustomRequestEvent event,
        @Header(KafkaHeaders.CORRELATION_ID) String correlationId,
        @Header(KafkaHeaders.REPLY_TOPIC) String replyTopic) {
    // Making some REST API calls to another external system here using RestTemplate
}
The REST API call can throw a 4xx or 5xx. There are multiple such calls, some to internal systems and some to external systems. It may be a bad design, but let's not get into that.
I would like to have a global exception handler for the RestTemplate where I can catch all the exceptions and then return a response to the original sender of the event.
I am using the same replyTopic and correlationId as received in the consumer to publish the event.
But the receiver of the response still throws a "No pending reply" exception.
With the approach I have above, is it possible to achieve such a central error-response publisher?
Is there any other alternative better suited for this exception handling?
The @KafkaListener annotation comes with the:
/**
 * Set an {@link org.springframework.kafka.listener.KafkaListenerErrorHandler} bean
 * name to invoke if the listener method throws an exception.
 * @return the error handler.
 * @since 1.3
 */
String errorHandler() default "";
That one is used to catch and process all the downstream exceptions, and if it returns a result, the result is sent back to the replyTopic:
public void onMessage(ConsumerRecord<K, V> record, Acknowledgment acknowledgment, Consumer<?, ?> consumer) {
    Message<?> message = toMessagingMessage(record, acknowledgment, consumer);
    logger.debug(() -> "Processing [" + message + "]");
    try {
        Object result = invokeHandler(record, acknowledgment, message, consumer);
        if (result != null) {
            handleResult(result, record, message);
        }
    }
    catch (ListenerExecutionFailedException e) { // NOSONAR ex flow control
        if (this.errorHandler != null) {
            try {
                Object result = this.errorHandler.handleError(message, e, consumer);
                if (result != null) {
                    handleResult(result, record, message);
                }
            }
            catch (Exception ex) {
                throw new ListenerExecutionFailedException(createMessagingErrorMessage(// NOSONAR stack trace loss
                        "Listener error handler threw an exception for the incoming message",
                        message.getPayload()), ex);
            }
        }
        else {
            throw e;
        }
    }
}
See the RecordMessagingMessageListenerAdapter source code for more info.
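For example, a sketch of such an error handler; CustomResponseEvent.failure() is a hypothetical factory standing in for whatever error payload your reply contract uses:
@Bean
public KafkaListenerErrorHandler replyErrorHandler() {
    return (message, exception) -> {
        // Returning a non-null result sends it to the reply topic, with the
        // correlation id propagated so the ReplyingKafkaTemplate can match it.
        Throwable cause = exception.getCause() != null ? exception.getCause() : exception;
        return CustomResponseEvent.failure(cause.getMessage()); // hypothetical factory
    };
}
Then reference it from the listener: @KafkaListener(..., errorHandler = "replyErrorHandler").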
When a Kafka producer invokes the send() method, it returns a Future of RecordMetadata, which contains:
public RecordMetadata(TopicPartition topicPartition,
                      long baseOffset,
                      long relativeOffset,
                      long timestamp,
                      java.lang.Long checksum,
                      int serializedKeySize,
                      int serializedValueSize)
This contains the timestamp of the record in the topic/partition, but is there a way to find out the timestamp of the acknowledgment sent by the broker?
I am noticing a delay in acknowledgment receipt and would like to debug further to understand the cause of this delay.
Is there a log level in the Kafka broker that allows printing acknowledgment information in the server logs?
I found the TRACE log level in both Apache Kafka and Spring Kafka. Could it be what you are looking for?
org.springframework.kafka.core.KafkaTemplate
protected ListenableFuture<SendResult<K, V>> doSend(final ProducerRecord<K, V> producerRecord) {
    final Producer<K, V> producer = getTheProducer();
    if (this.logger.isTraceEnabled()) {
        this.logger.trace("Sending: " + producerRecord);
    }
    ...
    producer.send(producerRecord, new Callback() {
        @Override
        public void onCompletion(RecordMetadata metadata, Exception exception) {
            ...
            if (KafkaTemplate.this.logger.isTraceEnabled()) {
                KafkaTemplate.this.logger.trace("Sent ok: " + producerRecord + ", metadata: " + metadata);
            }
            ...
        }
    });
    ...
}
org.apache.kafka.clients.producer.KafkaProducer
private Future<RecordMetadata> doSend(ProducerRecord<K, V> record, Callback callback) {
    ...
    log.trace("Sending record {} with callback {} to topic {} partition {}",
            record, callback, record.topic(), partition);
    ...
}
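If broker-side TRACE is too heavy, a client-side sketch that approximates the acknowledgment latency by timing the send callback (the variable names here are assumptions for debugging, not Kafka API values):
final long sendNanos = System.nanoTime();
producer.send(record, (metadata, exception) -> {
    long ackLatencyMs = (System.nanoTime() - sendNanos) / 1_000_000;
    if (exception == null) {
        // metadata.timestamp() is the record's timestamp in the partition;
        // ackLatencyMs approximates the round trip until the ack arrived.
        System.out.println("Acked " + metadata + " after ~" + ackLatencyMs + " ms");
    }
});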
I have a distributed queue on WebLogic. Messages are read from the queue using the JMS onMessage() method. However, the messages are not purged from the queue as long as the deployment is running; the message state string is always 'receive'. How do we ensure that a message is not picked up again if the deployment is restarted?
@Override
public void onMessage(Message msg) {
    try {
        String msgText;
        if (msg instanceof TextMessage) {
            msgText = ((TextMessage) msg).getText();
        } else {
            msgText = msg.toString();
        }
        System.out.println("Message Received from Message_RESPONSE_QUEUE: " + msgText + " - " + count++);
        // now send the message to queue2
        InitialContext ic2 = getInitialContext2();
        getMsgFromQueue qs = new getMsgFromQueue();
        qs.init2(ic2, QUEUE2);
        qs.send(msg, null);
    } catch (JMSException jmse) {
        // log delivery failures instead of swallowing them
        Logger.getLogger(getMsgFromQueue.class.getName()).log(Level.SEVERE, null, jmse);
    } catch (NamingException ex) {
        Logger.getLogger(getMsgFromQueue.class.getName()).log(Level.SEVERE, null, ex);
    }
}
A message is not removed from a JMS queue until the JMS server receives an acknowledgement.
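A minimal sketch of explicit acknowledgement, assuming a standalone JMS consumer rather than a container-managed MDB (process() is a hypothetical handler):
// Non-transacted session in which the client controls acknowledgement.
Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
MessageConsumer consumer = session.createConsumer(queue);
consumer.setMessageListener(msg -> {
    try {
        process(msg);      // hypothetical business logic
        msg.acknowledge(); // the broker may now purge the message
    } catch (JMSException e) {
        // No acknowledge: the message stays on the queue for redelivery.
    }
});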
Here are some references that you may find useful:
http://docs.oracle.com/cd/E17904_01/web.1111/e15493/prog_details.htm#i1156227
http://docs.oracle.com/cd/E17904_01/web.1111/e15493/prog_details.htm#i1152248
I am trying to create a synchronous request using JMS on JBoss.
The code for the MDB is:
@Resource(mappedName = "java:/ConnectionFactory")
private ConnectionFactory connectionFactory;

@Override
public void onMessage(Message message) {
    logger.info("Received message for client call");
    if (message instanceof ObjectMessage) {
        Connection con = null;
        try {
            con = connectionFactory.createConnection();
            con.start();
            Requests requests = (Requests) ((ObjectMessage) message).getObject();
            String response = getClient().get(getRequest(requests));
            Session ses = con.createSession(true, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = ses.createProducer(message.getJMSReplyTo());
            TextMessage replyMsg = ses.createTextMessage();
            replyMsg.setJMSCorrelationID(message.getJMSCorrelationID());
            replyMsg.setText(response);
            logger.info("Sending reply to client call : " + response);
            producer.send(replyMsg);
        } catch (JMSException e) {
            logger.severe(e.getMessage());
        } finally {
            if (con != null) {
                try {
                    con.close();
                } catch (Exception e2) {
                    logger.severe(e2.getMessage());
                }
            }
        }
    }
}
The code for the client is:
@Resource(mappedName = "java:/ConnectionFactory")
private QueueConnectionFactory queueConnectionFactory;

@Resource(mappedName = "java:/queue/request")
private Queue requestQueue;

@Override
public Responses getResponses(Requests requests) {
    QueueConnection connection = null;
    try {
        connection = queueConnectionFactory.createQueueConnection();
        connection.start();
        QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer messageProducer = session.createProducer(requestQueue);
        ObjectMessage message = session.createObjectMessage();
        message.setObject(requests);
        TemporaryQueue temp = session.createTemporaryQueue();
        MessageConsumer consumer = session.createConsumer(temp);
        message.setJMSReplyTo(temp);
        messageProducer.send(message);
        Message response = consumer.receive();
        if (response instanceof TextMessage) {
            logger.info("Received response");
            return new Responses(null, ((TextMessage) response).getText());
        }
    } catch (JMSException e) {
        logger.severe(e.getMessage());
    } finally {
        if (connection != null) {
            try {
                connection.close();
            } catch (Exception e2) {
                logger.severe(e2.getMessage());
            }
        }
    }
    return null;
}
The message is received fine on the queue, the response message is created, and the MessageProducer sends the response without issue and with no errors. However, the consumer just sits and waits indefinitely. I have also tried creating a separate reply queue rather than using a temporary queue, and the result is the same.
I am guessing that I am missing something basic with this setup, but I cannot for the life of me see anything I am doing wrong.
There is no other code. The two things I have read that can cause this problem are that connection.start() isn't called, or that the responses are going to some other receiver, neither of which is happening here (as far as I know, there are no other messaging parts to the code outside of these classes yet).
So I guess my question is: should the above code work, or am I missing some fundamental understanding of the JMS flow?
So... I persevered and I got it to work.
The answer is that when I create the session, the transacted attribute in both the client and the MDB had to be set to false:
Session ses = con.createSession(true, Session.AUTO_ACKNOWLEDGE);
had to be changed to:
Session ses = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
for both client and server.
I know why now! I was effectively doing what is described below, which is taken from the Oracle JMS documentation:
If you try to use a request/reply mechanism, whereby you send a message and then try to receive a reply to the sent message in the same transaction, the program will hang, because the send cannot take place until the transaction is committed. The following code fragment illustrates the problem:
// Don’t do this!
outMsg.setJMSReplyTo(replyQueue);
producer.send(outQueue, outMsg);
consumer = session.createConsumer(replyQueue);
inMsg = consumer.receive();
session.commit();
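For completeness, a sketch of how the transacted variant could work, using the same names as the fragment above: commit the send before blocking on the reply.
outMsg.setJMSReplyTo(replyQueue);
producer.send(outQueue, outMsg);
session.commit();   // the request is only dispatched to the broker here
consumer = session.createConsumer(replyQueue);
inMsg = consumer.receive();
session.commit();   // acknowledges the reply in the next transaction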