I tried to find an answer to my question on the Internet, but with no luck. From what I know, when I get a message it is held in the queue until the moment I ack it. That may take very long, potentially forever, and that is OK for me.
However, my question is about what happens when I send a nack.
Is the message requeued? What does that mean exactly: is it left in the queue where it is, or removed and pushed to the end of the queue?
Thanks in advance,
Regards
According to the RabbitMQ documentation on NACKs, when a consumer sends a NACK, it also specifies if the message should be requeued or not. Example with Pika's basic_nack:
channel.basic_nack(delivery_tag=method.delivery_tag, requeue=True)
If the requeue parameter is set to False, the message will be discarded by RabbitMQ (and therefore lost).
If the requeue parameter is set to True, the message is returned to the queue and redelivered, staying there until a consumer receives and ACKs it.
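For comparison, here is a minimal sketch of the same ack/nack choice with the RabbitMQ Java client; the queue name, host, and the process() helper are made up for illustration:

import com.rabbitmq.client.*;
import java.nio.charset.StandardCharsets;

public class NackDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker address

        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare("work", true, false, false, null); // hypothetical queue

        DeliverCallback onDeliver = (consumerTag, delivery) -> {
            long tag = delivery.getEnvelope().getDeliveryTag();
            try {
                process(new String(delivery.getBody(), StandardCharsets.UTF_8));
                channel.basicAck(tag, false); // success: message leaves the queue
            } catch (Exception e) {
                // requeue=true : the message goes back into the queue for redelivery
                // requeue=false: the message is discarded (or dead-lettered if configured)
                channel.basicNack(tag, false, true);
            }
        };
        channel.basicConsume("work", false, onDeliver, consumerTag -> { });
    }

    private static void process(String body) { /* hypothetical work */ }
}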
I am a newbie at FIX in general, and I have started with QuickFix to get some practice. I apologize in advance for the following trivial questions.
I have understood that to handle ExecutionReport messages I need to call crack() inside fromApp() and implement onMessage().
But I have two questions:
1) What happens if the session suddenly drops in the middle of the ExecutionReport messages for a partially filled order? What is the way to handle this situation: trying to reconnect and sending a request? Can you please provide a simple explanation in steps, and tell me which QuickFix API methods I should use?
2) If I need to implement a FIX engine to handle drop copy, should I be aware of anything in particular?
Thank you for your help
1) Just make sure the ResetOnDisconnect parameter is set to N for your trading session: ResetOnDisconnect=N (docs)
QuickFix will automatically attempt to reconnect every ReconnectInterval seconds.
Once reconnected (with ResetOnDisconnect=N), it will also automatically exchange the last known message sequence numbers with the FIX server, and the messages lost during the disconnection will be re-sent, so without a line of code you will receive the missing messages.
Also, if the disconnection lasted a longer period of time, you may want to send an Order Status Request (H) message to the FIX server to receive up-to-date ExecutionReports for your pending orders.
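For reference, those settings live in the QuickFix session configuration file; a minimal sketch, where the CompIDs, host, and port are placeholders:

[DEFAULT]
ConnectionType=initiator
ReconnectInterval=30
ResetOnDisconnect=N

[SESSION]
BeginString=FIX.4.4
SenderCompID=CLIENT1
TargetCompID=BROKER
SocketConnectHost=localhost
SocketConnectPort=9876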
2) The question is too general for me to answer...
I'm using Kafka 0.11.0.0. I have a test program that publishes to a Kafka topic; if the zookeeper and Kafka servers are down (which is normal in my development environment; I bring them up as needed) then the call to KafkaProducer<>.send() hangs indefinitely.
I either need send() to return, preferably indicating the error, or I need a way to check whether the servers are up or down. Basically, I want my test tool to be able to tell me, "Hey, dummy, start up Kafka!" instead of hanging.
Is there a way for my producer task to determine whether the servers are up or down?
I'm calling the send() like this:
kafkaProducer.send(new ProducerRecord<>(KAFKA_TOPIC, KAFKA_KEY, message),
        (rm, ex) -> {
            System.out.println("**** " + rm + "\n**** " + ex);
        });
I have linger.ms = 1; I've tried retries=0, 1, and 2, and send() still blocks. I've never seen the callback called.
Older messages suggest setting metadata.fetch.timeout.ms to a small value, but that's gone in 0.11. Others suggest calling command line utilities to see if the servers are OK...but the referenced utilities also seem to be gone.
What's the graceful way to get this done?
We can send messages to the broker in three ways:
Fire-and-forget:
We send a message to the server and don't really care whether it arrives successfully or not. Most of the time it will arrive successfully, since Kafka is highly available and the producer will retry sending messages automatically. However, some messages will get lost using this method.
Asynchronous send:
We call the send() method with a callback function, which gets triggered when a response is received from the Kafka broker.
Synchronous send:
We send a message, the send() method returns a Future object, and we use get() to wait on the future and see whether the send() was successful or not.
The simplest way to send a message synchronously is as follows:
ProducerRecord<String, String> record =
        new ProducerRecord<>(KAFKA_TOPIC, KEY, message);
try {
    producer.send(record).get();
} catch (Exception e) {
    e.printStackTrace();
}
Here, we are using Future.get() to wait for a reply from Kafka. This method will throw an exception if the record is not sent successfully to Kafka. If there were no errors, we will get a RecordMetadata object that we can use to retrieve the offset the message was written to.
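For the asynchronous variant, the same record can be sent with a callback that receives either the metadata or the exception; a short sketch reusing the producer and record from above:

producer.send(record, (metadata, exception) -> {
    if (exception != null) {
        exception.printStackTrace(); // the send failed
    } else {
        System.out.println("Written at offset " + metadata.offset());
    }
});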
Hope this helps.
That is strange. It should return with an error saying either "Failed to update metadata" or "Expiring x number of records".
Check the request.timeout.ms and max.block.ms settings for your producer. By default max.block.ms is 60 seconds, which is how long send() will block waiting for metadata when the brokers are unreachable.
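Along those lines, here is a minimal sketch of a fail-fast producer; the broker address, topic, and timeout values are illustrative, not recommendations:

import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FailFastProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // Fail fast instead of blocking on metadata for the default 60 s.
        props.put("max.block.ms", "3000");       // caps how long send() may block
        props.put("request.timeout.ms", "3000"); // caps the wait for a broker response

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "hello"))
                    .get(5, TimeUnit.SECONDS); // bound the wait on the Future as well
            System.out.println("Kafka is up; message sent.");
        } catch (Exception e) {
            System.err.println("Hey, dummy, start up Kafka! (" + e + ")");
        }
    }
}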
I have one MSMQ queue which is listened to by five Windows services. I used BeginPeek and the PeekCompleted event for this purpose. My problem is that among the five services, only one is the right recipient of each message. The other four just read the message, but perform no action. This can only be determined after reading the message.
Now, I added code to my services to check whether the criteria match, and if the message is being processed by the right service, I use Receive to dequeue the message from MSMQ. Is that a good idea?
Secondly, if the message does not satisfy any service's condition, all five services just peek it but never receive it, so the message stays in the queue. I understand that. But then the same message is processed an infinite number of times, because it is never removed.
private void queue_PeekCompleted(object sender, PeekCompletedEventArgs e)
{
    MessageQueue queue = (MessageQueue)sender;

    // Complete the asynchronous peek exactly once to get the message.
    Message msg = queue.EndPeek(e.AsyncResult);

    // Read the message and check whether the criteria match.
    if (CriteriaMatches) // placeholder for the matching logic
    {
        // Only the matching service removes the message from the queue.
        queue.ReceiveById(msg.Id);
    }

    // Resume listening for the next message.
    queue.BeginPeek();
}
Appreciate your help.
Thanks,
Fayaz
Set the messages to expire after a set (short) period. They will then move to the dead letter queue where you can have another service waiting for arrivals. This service could then raise an alert, for example, as soon as a message arrives.
I use the perform javascript call to perform an action on the server, like this:
subscription.perform('action', {...});
However, from what I've seen there seems to be no built-in JavaScript "success" callback, i.e. one that lets me know the action has completed on the server's side (or possibly failed). I was thinking about sending a broadcast at the end of the action like so:
def action(data)
...do_stuff
ActionCable.server.broadcast "room", success_message...
end
But all clients subscribed to this "room" would receive that message, possibly resulting in false positives. In addition, from what I've heard, message order isn't guaranteed, so a previous broadcast inside this action could be delivered after the success message, possibly leading to further issues.
Any ideas on this or am I missing something completely?
Looking at https://github.com/xtian/action-cable-js/blob/master/dist/cable.js and https://developer.mozilla.org/en-US/docs/Web/API/WebSocket#send(), perform just executes WebSocket.send() and returns true or false, and there is no way to know whether your data has arrived. (That is just not possible with WebSockets, it seems.)
You could try using just a http call (I recommend setting up an api with jbuilder), or indeed broadcasting back a success message.
You can solve the order of the messages by creating a timestamp on the server, and sending it along with the message, and then sorting the messages with Javascript.
Good luck!
Maybe what you are looking for is the transmit method: https://api.rubyonrails.org/v6.1.3/classes/ActionCable/Channel/Base.html#method-i-transmit
It sends a message to the current connection being handled for a channel.
We are using hornetq-core 2.2.21.Final standalone. After reading a non-transactional message, the message still remains in the queue although it was acknowledged.
The session is created using:
sessionFactory.createSession(true, true, 0)
The locator settings:
val transConf = new TransportConfiguration(classOf[NettyConnectorFactory].getName,map)
val locator = HornetQClient.createServerLocatorWithoutHA(transConf)
locator.setBlockOnDurableSend(false)
locator.setBlockOnNonDurableSend(false)
locator.setAckBatchSize(0) // also tried without this setting
locator.setConsumerWindowSize(0)// also tried without this setting
The message is acknowledged using message.acknowledge().
I think the problem might be two queues on the same address.
I also tried setting the message expiration, but it didn't help; messages are still piling up in the queue.
Please advise.
It seems you are using the core API. Are you explicitly calling acknowledge on the messages?
If you have two queues on the same address, an ack will only acknowledge the message in the queue you are consuming from; each queue holds its own copy of the message. In that case the system is acting normally.
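To illustrate, here is a minimal sketch with the core API, where the queue and address names are made up: the message is sent once to the address, each bound queue gets its own copy, and acknowledge() only settles the copy consumed from queueA.

import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.core.client.*;
import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;

public class TwoQueuesOneAddress {
    public static void main(String[] args) throws Exception {
        ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(
                new TransportConfiguration(NettyConnectorFactory.class.getName()));
        ClientSessionFactory factory = locator.createSessionFactory();
        // autoCommitSends=true, autoCommitAcks=true, ackBatchSize=0 (as in the question)
        ClientSession session = factory.createSession(true, true, 0);

        session.createQueue("the.address", "queueA", true);
        session.createQueue("the.address", "queueB", true); // second queue, same address

        ClientProducer producer = session.createProducer("the.address");
        producer.send(session.createMessage(true)); // both queues receive a copy

        session.start();
        ClientConsumer consumerA = session.createConsumer("queueA");
        ClientMessage msg = consumerA.receive(1000);
        msg.acknowledge(); // removes the copy in queueA only;
                           // the copy in queueB stays until consumed from queueB
        session.close();
    }
}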