HornetQ message remains in queue after ack

We are using hornetq-core 2.2.21.Final standalone. After reading a non-transactional message, the message still remains in the queue although it was acknowledged.
The session is created using:
sessionFactory.createSession(true, true, 0)
Locator settings:
val transConf = new TransportConfiguration(classOf[NettyConnectorFactory].getName,map)
val locator = HornetQClient.createServerLocatorWithoutHA(transConf)
locator.setBlockOnDurableSend(false)
locator.setBlockOnNonDurableSend(false)
locator.setAckBatchSize(0) // also tried without this setting
locator.setConsumerWindowSize(0) // also tried without this setting
The message is acknowledged using message.acknowledge().
I think the problem might be two queues on the same address.
I also tried to set the message expiration, but it didn't help; messages are still piling up in the queue.
Please advise.

It seems you are using the core API. Are you explicitly calling acknowledge() on the messages?
If you have two queues on the same address, ack will only acknowledge the messages on the queue you are consuming from. In that case the system is acting normally.
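For reference, here is a minimal Java sketch of explicit acknowledgment with the core API, matching the session settings from the question; the queue name and receive timeout are assumptions, and exception handling is omitted:

ClientSession session = sessionFactory.createSession(true, true, 0); // autoCommitSends, autoCommitAcks, ackBatchSize = 0
ClientConsumer consumer = session.createConsumer("example.queue");   // hypothetical queue name
session.start();

ClientMessage msg = consumer.receive(5000); // wait up to 5 seconds
if (msg != null) {
    msg.acknowledge(); // removes only this queue's reference to the message;
                       // a second queue on the same address keeps its own copy
}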

Related

Understanding ExecutionReport in QuickFix better

I am a newbie to FIX in general, and I have started with QuickFix to practice. I apologize in advance for the following trivial questions.
I understand that to handle an ExecutionReport I need to use the crack() method inside FromApp() and implement OnMessage().
I have two questions:
1) What happens if the session suddenly drops in the middle of the ExecutionReports for a partially filled order? What is the way to handle this situation: trying to reconnect and sending a request? Can you provide a simple explanation in steps, and tell me which QuickFix API methods I should use?
2) If I need to implement a FIX engine to handle drop copy, should I be aware of anything in particular?
Thank you for your help.
1) Just make sure the ResetOnDisconnect parameter is set to N for your trading session: ResetOnDisconnect=N (docs).
QuickFix will automatically attempt to reconnect every ReconnectInterval seconds.
Once reconnected (with ResetOnDisconnect=N), it will also automatically exchange the last known message sequence numbers with the FIX server, and the messages lost during the disconnection will be re-sent; so, without writing a line of code, you will receive the missing messages.
Also, if the disconnection lasted a longer period of time, you may want to send an Order Status Request (H) message to the FIX server to receive the current ExecutionReport for your pending orders (see the sketch at the end of this answer).
2) The question is too general for me to answer...
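For the Order Status Request mentioned in 1), here is a minimal QuickFIX/J (FIX 4.4) sketch; the order details are hypothetical, and the sessionID is assumed to be the one captured in your onLogon() callback:

import quickfix.Session;
import quickfix.SessionID;
import quickfix.field.ClOrdID;
import quickfix.field.Side;
import quickfix.field.Symbol;
import quickfix.fix44.OrderStatusRequest;

// sessionID would typically be the SessionID remembered from onLogon()
public void requestOrderStatus(SessionID sessionID) throws quickfix.SessionNotFound {
    OrderStatusRequest request = new OrderStatusRequest();
    request.set(new ClOrdID("ORDER-123")); // hypothetical ClOrdID of the pending order
    request.set(new Symbol("EUR/USD"));    // hypothetical symbol
    request.set(new Side(Side.BUY));
    Session.sendToTarget(request, sessionID);
}

The resulting ExecutionReport then arrives through the usual fromApp()/crack()/onMessage() path.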

Kafka producer send blocks indefinitely when kafka servers are down

I'm using Kafka 0.11.0.0. I have a test program that publishes to a Kafka topic; if the ZooKeeper and Kafka servers are down (which is normal in my development environment; I bring them up as needed), then the call to KafkaProducer<>.send() hangs indefinitely.
I either need to have send() return, preferably indicating the error, or I need a way to check whether the servers are up or down. Basically, I want my test tool to be able to tell me, "Hey, dummy, start up Kafka!" instead of hanging.
Is there a way for my producer task to determine whether the servers are up or down?
I'm calling the send() like this:
kafkaProducer.send(new ProducerRecord<>(KAFKA_TOPIC, KAFKA_KEY, message),
        (rm, ex) -> {
            System.out.println("**** " + rm + "\n**** " + ex);
        });
I have linger.ms = 1; I've tried retries=0, 1, and 2, and send() still blocks. I've never seen the callback called.
Older messages suggest setting metadata.fetch.timeout.ms to a small value, but that's gone in 0.11. Others suggest calling command line utilities to see if the servers are OK...but the referenced utilities also seem to be gone.
What's the graceful way to get this done?
We can send messages to the broker in three ways:
Fire-and-forget:
We send a message to the server and don’t really care if it arrives successfully or not. Most of the time it will arrive successfully, since Kafka is highly available and the producer will retry sending messages automatically. However, some messages will get lost using this method.
Asynchronous send:
We call the send() method with a callback function, which gets triggered when a response is received from the Kafka broker.
Synchronous send:
We send a message, the send() method returns a Future object, and we use get() to wait on the future and see whether the send() was successful or not.
The simplest way to send a message synchronously is as follows:
ProducerRecord<String, String> record =
        new ProducerRecord<>(KAFKA_TOPIC, KEY, message);
try {
    producer.send(record).get();
} catch (Exception e) {
    e.printStackTrace();
}
Here, we are using Future.get() to wait for a reply from Kafka. This method will throw an exception if the record is not sent successfully to Kafka. If there were no errors, we will get a RecordMetadata object that we can use to retrieve the offset the message was written to.
Hope this helps.
That is strange. It should return with an error saying either "Failed to update metadata" or "Expiring x number of records".
Check the request.timeout.ms and max.block.ms settings for your producer. By default max.block.ms is 60 seconds, which is how long send() will block waiting for metadata before failing.
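As a hedged sketch of the fail-fast behaviour (the broker address, topic, key/value, and timeout value are assumptions), capping max.block.ms makes send() throw instead of hanging when the brokers are unreachable:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("max.block.ms", "3000"); // give up waiting for metadata after 3 seconds

KafkaProducer<String, String> producer = new KafkaProducer<>(props);
try {
    producer.send(new ProducerRecord<>("test-topic", "key", "hello")).get();
    System.out.println("Kafka is up");
} catch (Exception e) {
    // With the brokers down, this surfaces as a TimeoutException
    // ("Failed to update metadata after 3000 ms."): your cue to start Kafka.
    System.out.println("Kafka appears to be down: " + e);
}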

RabbitMQ - how acknowledgments and requeue/remove work

I tried to find an answer to my question on the Internet, but with no success. From what I know, when I get a message it is held in the queue until the moment I ack it. So that may take very long, potentially infinitely long, and that is OK for me.
However, my question is about what happens in the case of sending a nack.
Is the message requeued? What does that mean exactly: is it left in the queue, or removed and pushed onto the end of the queue?
Thanks in advance,
Regards
According to the RabbitMQ documentation on NACKs, when a consumer sends a NACK it also specifies whether the message should be requeued or not. An example with Pika's basic_nack:
channel.basic_nack(delivery_tag = method.delivery_tag, requeue = True)
If the requeue parameter is set to False, the message will be discarded by RabbitMQ (and therefore lost).
If the requeue parameter is set to True, the message will stay in the queue until a consumer receives and ACKs it.
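The same choice exists in other clients; for example, a minimal sketch with the RabbitMQ Java client, where deliveryTag identifies the delivery being rejected and the second argument (multiple=false) rejects only this one message:

// Requeue: the message goes back into the queue and will be redelivered.
channel.basicNack(deliveryTag, false, true);

// Discard: the message is dropped (or routed to a dead-letter exchange, if one is configured).
channel.basicNack(deliveryTag, false, false);

Note that RabbitMQ requeues a message to its original position in the queue when possible, not to the tail.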

Get subscriber filter from a ZMQ PUB socket

I noticed in the FAQ, in the Monitoring section, that it's not possible to get a list of connected peers or to be notified when peers connect/disconnect.
Does this imply that it's also not possible to know which topics a PUB/XPUB socket knows it should publish, from its upstream feedback? Or is there some way to access that data?
I know that ZMQ >= 3.0 "supports PUB/SUB filtering at the publisher", but what I really want is to filter at my application code, using the knowledge ZMQ has about which topics are subscribed to.
My use-case is that I want to publish info about the status of a robot. Some topics involve major hardware actions, like switching the select lines on an ADC to read IR values.
I have a publisher thread running on the bot that should only do that "read" to get IR data when there are actually subscribers. However, since I can only feed a string into my pub_sock.send, I always have to do the costly operation, even if ZMQ is about to drop that message when there are no subscribers.
I have an implementation that uses a backchannel REQ/REP socket to send topic information, which my app can check in its publish loop, thereby only collecting data that needs to be collected. This seems very inelegant though, since ZMQ must already have the data I need, as evidenced by its filtering at the publisher.
I noticed that in this mailing list message, the OP seems to be able to see subscribe messages being sent to an XPUB socket.
However, there's no mention of how they did that, and I'm not seeing any such ability in the docs (still looking). Maybe they were just using Wireshark (to see upstream subscribe messages to an XPUB socket).
Using the zmq.XPUB socket type, there is a way to detect new and leaving subscribers. The following code sample shows how:
# Publisher side
import zmq

port_nr = 5556  # assumed port number

ctx = zmq.Context.instance()
xpub_socket = ctx.socket(zmq.XPUB)
xpub_socket.bind("tcp://*:%d" % port_nr)

poller = zmq.Poller()
poller.register(xpub_socket, zmq.POLLIN)  # register for reads only

events = dict(poller.poll(1000))
if xpub_socket in events:
    msg = xpub_socket.recv()
    if msg[:1] == b'\x01':
        topic = msg[1:]
        print("Topic '%s': new subscriber" % topic.decode())
    elif msg[:1] == b'\x00':
        topic = msg[1:]
        print("Topic '%s': subscriber left" % topic.decode())
Note that the zmq.XSUB socket type does not subscribe in the same manner as the "normal" zmq.SUB. Code sample:
# Subscriber side
import zmq

port_nr = 5556  # assumed port number

ctx = zmq.Context.instance()

# Subscribing with a zmq.SUB socket
sub_socket = ctx.socket(zmq.SUB)
sub_socket.setsockopt(zmq.SUBSCRIBE, b"sometopic")  # OK
sub_socket.connect("tcp://localhost:%d" % port_nr)

# Subscribing with a zmq.XSUB socket
xsub_socket = ctx.socket(zmq.XSUB)
xsub_socket.connect("tcp://localhost:%d" % port_nr)
# xsub_socket.setsockopt(zmq.SUBSCRIBE, b"sometopic")  # NOK, raises zmq.error.ZMQError: Invalid argument
xsub_socket.send(b'\x01' + b'sometopic')  # OK, a single frame starting with 0x01 triggers the subscribe event on the publisher
I'd also like to point out the zmq.XPUB_VERBOSE socket option (e.g. xpub_socket.setsockopt(zmq.XPUB_VERBOSE, 1)). If set, all subscription events are received on the socket; if not set, duplicate subscriptions are filtered out. See also the following post: ZMQ: No subscription message on XPUB socket for multiple subscribers (Last Value Caching pattern)
At least for the XPUB/XSUB socket case, you can keep subscription state by forwarding and handling the packets manually:
import zmq

context = zmq.Context()

xsub_socket = context.socket(zmq.XSUB)
xsub_socket.bind('tcp://*:10000')

xpub_socket = context.socket(zmq.XPUB)
xpub_socket.bind('tcp://*:10001')

poller = zmq.Poller()
poller.register(xpub_socket, zmq.POLLIN)
poller.register(xsub_socket, zmq.POLLIN)

while True:
    try:
        events = dict(poller.poll(1000))
    except KeyboardInterrupt:
        break

    if xpub_socket in events:
        message = xpub_socket.recv_multipart()
        # HERE goes some subscription handling code which
        # inspects message
        xsub_socket.send_multipart(message)

    if xsub_socket in events:
        message = xsub_socket.recv_multipart()
        xpub_socket.send_multipart(message)
(this is Python code but I guess C/C++ looks quite similar)
I'm currently working on this topic and I will add more information as soon as possible.

How to Purge an MSMQ Outgoing Queue

Is there any way to purge an outgoing queue? It doesn't appear that I can do it with the MMC snap-in, and when I try to purge it in code I get the error "Format name is invalid". The computer the messages are being sent to does not exist, so they will never be sent; however, the queues have filled up the maximum storage space for MSMQ, so every time my application tries to send another message I get the insufficient-resources exception.
I've tried the following formats, and they all fail with the "format name is invalid" exception:
DIRECT=OS:COMPUTER\private$\queuename
OS:COMPUTER\private$\queuename
COMPUTER\private$\queuename
You should be able to purge it manually from the MMC snap-in. MSMQ gets very stingy when it reaches its storage limits, so a lot of operations will fail with "permission denied" and things like that.
The long-term solution obviously is to modify the configuration so there is enough storage space for your particular usage patterns.
Edit: You might be running into a limitation in the managed API related to admin capabilities and remote queues. Take a look at this article by Ingo Rammer. It even includes a P/Invoke example.
It is possible to use managed code to purge an outgoing queue:
using (var msgQueue = new MessageQueue(GetPrivateMqPath(queueName, remoteIP), QueueAccessMode.ReceiveAndAdmin))
{
    msgQueue.Purge();
}
in which GetPrivateMqPath is:
private static string GetPrivateMqPath(string queueName, string remoteIP)
{
    if (!string.IsNullOrEmpty(remoteIP))
        return String.Format("FORMATNAME:DIRECT=TCP:{0}\\private$\\{1}", remoteIP, queueName);
    else
        return @".\private$\" + queueName;
}
QueueAccessMode.ReceiveAndAdmin is what points at the outgoing queue.
You could try FORMATNAME:DIRECT=OS:computer\PRIVATE$\queuename.