Kafka producer send blocks indefinitely when Kafka servers are down

I'm using Kafka 0.11.0.0. I have a test program that publishes to a Kafka topic; if the ZooKeeper and Kafka servers are down (which is normal in my development environment; I bring them up as needed), then the call to KafkaProducer<>.send() hangs indefinitely.
I either need send() to return, preferably indicating the error, or I need a way to check whether the servers are up or down. Basically, I want my test tool to be able to tell me, "Hey, dummy, start up Kafka!" instead of hanging.
Is there a way for my producer task to determine whether the servers are up or down?
I'm calling the send() like this:
kafkaProducer.send(new ProducerRecord<>(KAFKA_TOPIC, KAFKA_KEY, message),
        (rm, ex) -> {
            System.out.println("**** " + rm + "\n**** " + ex);
        });
I have linger.ms = 1; I've tried retries = 0, 1, and 2, and send() still blocks. I've never seen the callback invoked.
Older posts suggest setting metadata.fetch.timeout.ms to a small value, but that property is gone in 0.11. Others suggest calling command-line utilities to see whether the servers are OK... but the referenced utilities also seem to be gone.
What's the graceful way to get this done?

We can send messages to the broker in three ways:
Fire-and-forget:
We send a message to the server and don't really care whether it arrives successfully or not. Most of the time it will arrive successfully, since Kafka is highly available and the producer will retry sending messages automatically. However, some messages will get lost using this method.
Asynchronous send:
We call the send() method with a callback function, which gets triggered when a response is received from the Kafka broker.
Synchronous send:
We send a message, the send() method returns a Future object, and we use get() to wait on the future and see whether the send() was successful or not.
The simplest way to send a message synchronously is as follows:
ProducerRecord<String, String> record =
        new ProducerRecord<>(KAFKA_TOPIC, KEY, message);
try {
    producer.send(record).get();
} catch (Exception e) {
    e.printStackTrace();
}
Here, we are using Future.get() to wait for a reply from Kafka. This method will throw an exception if the record is not sent successfully to Kafka. If there were no errors, we will get a RecordMetadata object that we can use to retrieve the offset the message was written to.
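For comparison, an asynchronous send passes a callback that is invoked with either the metadata or the exception. A minimal sketch, reusing the same placeholder names as above:
ProducerRecord<String, String> record =
        new ProducerRecord<>(KAFKA_TOPIC, KEY, message);
producer.send(record, (metadata, exception) -> {
    if (exception != null) {
        // Broker unreachable, metadata timeout, expired batch, etc.
        exception.printStackTrace();
    } else {
        System.out.println("Written to partition " + metadata.partition()
                + " at offset " + metadata.offset());
    }
});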
Hope this helps.

That is strange. It should return with an error saying either "Failed to update metadata" or "Expiring x number of records".
Check the request.timeout.ms and max.block.ms settings for your producer. By default max.block.ms is 60 seconds, so send() will block for up to a minute waiting for metadata before failing with a TimeoutException.
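Lowering max.block.ms makes send() fail fast when the brokers are unreachable, so the error surfaces quickly instead of appearing to hang. A minimal sketch of the producer setup, assuming plain String keys and values; the bootstrap address and one-second timeouts are just example values:
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
// How long send() may block waiting for metadata (default 60000 ms).
props.put("max.block.ms", "1000");
// How long an individual request may wait for a broker response.
props.put("request.timeout.ms", "1000");
KafkaProducer<String, String> producer = new KafkaProducer<>(props);
With settings like these, a send() attempted while the brokers are down fails within about a second (the error surfaces either as an exception from send() or through the callback/Future) rather than after a minute.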

Related

Failures in streaming handling of requests - what happens to the connection?

The documentation for akka-http explains that it is important to consume a request stream entirely, since bytes that are not pulled will be interpreted as backpressure (https://doc.akka.io/docs/akka-http/current/implications-of-streaming-http-entity.html). When you know beforehand that the stream can be ignored, you should use discardEntityBytes, or otherwise read it fully. There is also the option of closing the connection by attaching the stream to a Sink.cancelled.
My question is what happens when the stream fails.
Is the stream drained or is the connection closed? Or is it the responsibility of the implementation to recover from errors and either drain or close the connection? If so, what is a good code pattern for this?
Does it matter if a request is completed with a Future or if the response is streaming?
What if, instead of an unexpected failure, you determine halfway through the stream that the rest of it can be ignored? Is throwing an exception a good way of stopping stream processing?
Example completing with a future:
val route =
  post {
    extractDataBytes { data =>
      complete {
        data
          .via(flow1)
          .via(flow2) // say an error happens here at some point
          .runWith(sink)
      }
    }
  }
If the server connection is having problems, then the connection will be closed automatically.

Same message to several services

I have one MSMQ queue which is listened to by five Windows services. I used BeginPeek and the PeekCompleted event for this purpose. My problem is that, of the five services, only one is the right recipient of any given message. The other four just read the message but take no action, and the right recipient can only be identified by reading the message.
Now I have added code to my services to check whether the criteria match; if the message is meant to be processed by that service, I use Receive to dequeue it from MSMQ. Is that a good idea?
Secondly, if the message does not satisfy any service's condition, all five services just peek it and never receive it, so the message stays in the queue. I understand that. But the same message then gets processed over and over, because it is never removed.
private void queue_PeekCompleted(object sender, PeekCompletedEventArgs e)
{
    MessageQueue queue = (MessageQueue)sender;
    //Message msg = queue.EndPeek(e.AsyncResult);
    Message msg = e.Message;

    // Read the message and check whether the criteria match
    if (CriteriaMatches)
    {
        queue.ReceiveById(e.Message.Id);
    }

    queue.EndPeek(e.AsyncResult);
    queue.BeginPeek();
}
Appreciate your help.
Thanks,
Fayaz
Set the messages to expire after a set (short) period. They will then move to the dead letter queue where you can have another service waiting for arrivals. This service could then raise an alert, for example, as soon as a message arrives.

HornetQ message remains in queue after ack

We are using hornetq-core 2.2.21.Final (standalone). After reading a non-transactional message, the message still remains in the queue even though it was acknowledged.
The session is created using:
sessionFactory.createSession(true, true, 0)
Locator settings:
val transConf = new TransportConfiguration(classOf[NettyConnectorFactory].getName, map)
val locator = HornetQClient.createServerLocatorWithoutHA(transConf)
locator.setBlockOnDurableSend(false)
locator.setBlockOnNonDurableSend(false)
locator.setAckBatchSize(0)        // also tried without this setting
locator.setConsumerWindowSize(0)  // also tried without this setting
The message is acknowledged using message.acknowledge().
I think the problem might be that there are two queues on the same address.
I also tried setting the message expiration, but it didn't help; messages are still piling up in the queue.
Please advise.
It seems you are using the core API. Are you explicitly calling acknowledge on the messages?
If you have two queues on the same address, an ack will only acknowledge the message on the queue you are consuming from; the copy routed to the other queue is untouched. In that case the system is acting normally.
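To illustrate: each queue bound to an address gets its own copy of a routed message, and acknowledge() only removes the copy from the queue the consumer was created on. A rough Java sketch against the core client API (not code from the question; the queue names "queue.a" and "queue.b" and the transport configuration are hypothetical):
import org.hornetq.api.core.SimpleString;
import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.core.client.*;
import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;

public class AckOneQueueDemo {
    public static void main(String[] args) throws Exception {
        TransportConfiguration transConf =
                new TransportConfiguration(NettyConnectorFactory.class.getName());
        ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(transConf);
        ClientSessionFactory factory = locator.createSessionFactory();
        ClientSession session = factory.createSession(true, true, 0); // auto-commit sends and acks
        session.start();

        // Consume and ack from one of the two queues bound to the address.
        ClientConsumer consumer = session.createConsumer("queue.a");
        ClientMessage message = consumer.receive(5000);
        if (message != null) {
            message.acknowledge(); // removes the copy from queue.a only
        }

        // The copy routed to the second queue on the same address is untouched.
        ClientSession.QueueQuery query = session.queueQuery(new SimpleString("queue.b"));
        System.out.println("queue.b still holds " + query.getMessageCount() + " message(s)");

        session.close();
        locator.close();
    }
}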

Message Queue (MSMQ) does not throw an exception when a message is not received on the other end

Let's say I try to send to an authenticated transactional queue by calling msg.send(object, MessageQueueTransactionType.Single). The message never arrives in the transactional queue, and no exception is thrown.
What I want to accomplish is this: if the message fails to send, perform some function and abort the transaction. But since no exception is thrown, I am unable to handle the failure.
I am sending the object from a web application to a local message queue on the same machine.
My code in the web application is as follows:
MessageQueueTransaction mqTran = new MessageQueueTransaction();
try
{
    using (System.Messaging.Message msg = new System.Messaging.Message())
    {
        mqTran.Begin();
        MessageQueue adminQ = new MessageQueue(AdminQueuePath);
        MessageQueue msgQ = new MessageQueue(queuePath);
        msgQ.DefaultPropertiesToSend.Recoverable = true;
        msg.Body = obj;
        msg.Recoverable = true;
        msg.Label = "Object";
        msg.TimeToReachQueue = new TimeSpan(0, 0, 30);
        msg.AcknowledgeType = AcknowledgeTypes.FullReachQueue;
        msg.ResponseQueue = adminQ;
        msg.AdministrationQueue = adminQ;
        msgQ.Send(msg, MessageQueueTransactionType.Single);
        mqTran.Commit();
    }
}
catch (Exception e)
{
    mqTran.Abort();
    // Do some processing if the send fails
}
It's not going to throw an exception for failure to deliver, only for failure to place the message on the queue. One of the points of message queuing is that the messages are durable, so that you can take appropriate measures if delivery fails. This means you need to program another process to read the dead-letter queue.
Because the entire process is asynchronous, your code flow is not going to be exception-driven the way your code block would like. Your transaction is simply the "sending transaction" in the overall delivery workflow.
Recommendation: Check your message queue to find the messages, either in the outgoing queue or the transactional dead-letter queue.

Remove messages from MSMQ

I have a program which reads MSMQ using GetAllMessages, but it does not remove the messages from the queue, so the following code keeps getting the same messages. I do not want to process the same message again and again. How can I make sure that MSMQ deletes the messages I have already read, or at least that I don't receive them again?
while (true)
{
    Message[] receivedMessages = queue.GetAllMessages();
    foreach (Message msg in receivedMessages)
    {
        // ... processing
    }
}
GetAllMessages() gives you a copy of the messages in the queue, but doesn't delete them.
Use any of the Receive methods to receive and remove messages from the queue.