I am very much new to Mule ESB.
I have created a flow which has multiple queues (I am using RabbitMQ). The flow works roughly like this: messages are put into the first queue, read and forwarded to the second queue, which is read and forwarded to the third, and so on.
Note: I am sending messages concurrently using JMeter.
Let's say that before all messages can be moved from the second queue into the third, my RabbitMQ server is stopped. In this case, I want to recover my messages. I should also be able to tell which messages made it into the third queue and which are still left behind.
I might not have put my question in an elegant or understandable way, but I hope you understand what I want to achieve.
You can use the Rollback Exception Strategy (http://www.mulesoft.org/documentation/display/current/Rollback+Exception+Strategy) along with transactions (http://www.mulesoft.org/documentation/display/current/Transaction+Management). When properly implemented, messages that have not been delivered to the second queue will be rolled back automatically.
In the rollback exception strategy you can define your own custom behavior. As for knowing which messages made it to the third queue, why not use a RabbitMQ client to inspect that queue?
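For reference, a rough Mule 3 sketch of what that combination can look like. All names (connector, queues, routing key) and the redelivery count are illustrative, and the exact elements/attributes depend on your Mule and AMQP transport version, so check the docs linked above:

```xml
<!-- Illustrative Mule 3 sketch: consume from the second queue inside a
     transaction, publish to the third in the same transaction, and roll
     back (redeliver) on failure. -->
<amqp:connector name="amqpConnector" host="localhost" port="5672" />

<flow name="secondToThird">
    <amqp:inbound-endpoint queueName="second.queue" connector-ref="amqpConnector">
        <amqp:transaction action="ALWAYS_BEGIN" />
    </amqp:inbound-endpoint>

    <amqp:outbound-endpoint exchangeName="" routingKey="third.queue"
                            connector-ref="amqpConnector">
        <amqp:transaction action="ALWAYS_JOIN" />
    </amqp:outbound-endpoint>

    <!-- Undelivered messages are rolled back to second.queue and retried -->
    <rollback-exception-strategy maxRedeliveryAttempts="3">
        <logger level="ERROR" message="Rolling back #[message.id]" />
    </rollback-exception-strategy>
</flow>
```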
Related
I am using Spring Cloud Stream over RabbitMQ for my project. I have a processor that reads from a source, processes the message, and publishes it to the sink.
Is my understanding correct that if my application picks up an event from the stream and fails (e.g. app sudden death):
unless I ack the message or
I save the message after reading it from the queue
then my event would be lost? What other options would I have to make sure not to lose the event in such a case?
Digging through the RabbitMQ documentation I found a very useful example page covering the different types of queues and message deliveries for RabbitMQ, most of which apply to AMQP in general.
In particular looking at the work queue example for java, I found exactly the answer that I was looking for:
Message acknowledgment
Doing a task can take a few seconds. You may wonder what happens if
one of the consumers starts a long task and dies with it only partly
done. With our current code, once RabbitMQ delivers a message to the
consumer it immediately marks it for deletion. In this case, if you
kill a worker we will lose the message it was just processing. We'll
also lose all the messages that were dispatched to this particular
worker but were not yet handled. But we don't want to lose any tasks.
If a worker dies, we'd like the task to be delivered to another
worker.
In order to make sure a message is never lost, RabbitMQ supports
message acknowledgments. An ack(nowledgement) is sent back by the
consumer to tell RabbitMQ that a particular message has been received,
processed and that RabbitMQ is free to delete it.
If a consumer dies (its channel is closed, connection is closed, or
TCP connection is lost) without sending an ack, RabbitMQ will
understand that a message wasn't processed fully and will re-queue it.
If there are other consumers online at the same time, it will then
quickly redeliver it to another consumer. That way you can be sure
that no message is lost, even if the workers occasionally die.
There aren't any message timeouts; RabbitMQ will redeliver the message
when the consumer dies. It's fine even if processing a message takes a
very, very long time.
Manual message acknowledgments are turned on by default. In previous
examples we explicitly turned them off via the autoAck=true flag. It's
time to set this flag to false and send a proper acknowledgment from
the worker, once we're done with a task.
Thinking about it, using the ACK seems to be the logical thing to do. The reason I didn't think of it before is that I had only considered an ACK from the perspective of the publisher, not of the broker. The piece of documentation above was very useful to me.
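The ack/requeue contract the tutorial describes can be sketched with a toy in-memory model. To be clear, this is not the RabbitMQ client API; every class and method name here is invented for illustration:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Toy model of a broker queue with manual acknowledgments: a message that
// was delivered but not yet acked is requeued if its consumer dies.
class AckQueue<T> {
    private final Queue<T> ready = new ArrayDeque<>();
    private final Map<Long, T> unacked = new HashMap<>(); // deliveryTag -> message
    private long nextTag = 0;

    void publish(T msg) { ready.add(msg); }

    // Deliver the next message; it stays "unacked" until ack(tag) is called.
    long deliver() {
        T msg = ready.remove();
        long tag = ++nextTag;
        unacked.put(tag, msg);
        return tag;
    }

    // Consumer finished processing: the broker is free to delete the message.
    void ack(long tag) { unacked.remove(tag); }

    // Consumer died without acking: requeue everything it was holding.
    void consumerDied(long... tags) {
        for (long tag : tags) {
            T msg = unacked.remove(tag);
            if (msg != null) ready.add(msg);
        }
    }

    int readyCount() { return ready.size(); }
}

public class Demo {
    public static void main(String[] args) {
        AckQueue<String> q = new AckQueue<>();
        q.publish("task-1");
        q.publish("task-2");

        long tag1 = q.deliver();            // a worker picks up task-1
        q.consumerDied(tag1);               // the worker crashes before acking
        System.out.println(q.readyCount()); // task-1 is back: prints 2

        long tag2 = q.deliver();
        q.ack(tag2);                        // processed successfully
        System.out.println(q.readyCount()); // prints 1
    }
}
```

With the real client, the same contract is what you get by passing `autoAck=false` to `basicConsume` and calling `basicAck` only after the work is done.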
Let's say my consumer acked and, for some strange reason, did not handle the message well.
Is there a technique to go over the message again?
Requirements:
1. Same order.
2. Continue receiving new messages and placing them last in the queue?
Therefore, not re-queueing the failed messages to the front (which would mess up the order), but something like moving the index back (as in Kafka)?
Thanks.
1 - Same order is something that is not compatible with the async AMQP model.
In general RabbitMQ feeds messages in the order they were added, but if a redelivery occurs, that message will be delivered ASAP. Also, if one message among several is not acked, it will be rescheduled for the client ASAP too.
2 - Dead lettering may help you.
Sure, you can manually add a message back to the exchange it was originally published to, although that is not what would be called best practice (it works, and works pretty well, in some cases). But you do have to protect your application from cycling messages that fail and are then delivered again (headers solve this problem for me, at least).
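That "republish to the tail, with cycle protection" idea can be sketched with a toy in-memory model. This is not the RabbitMQ client API; the retry counter here stands in for a custom header you would set on the republished message:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Predicate;

// Toy sketch: a failed message goes back to the TAIL of the queue with a
// retry count; after maxRetries it is dead-lettered instead of cycling
// forever. New messages keep arriving in order behind it.
public class RetryQueue {
    static class Msg {
        final String body;
        int retries; // stands in for a custom "retry count" header
        Msg(String body) { this.body = body; }
    }

    final Queue<Msg> main = new ArrayDeque<>();
    final Queue<Msg> deadLetter = new ArrayDeque<>();
    final int maxRetries;

    RetryQueue(int maxRetries) { this.maxRetries = maxRetries; }

    void publish(String body) { main.add(new Msg(body)); }

    // Consume one message; on handler failure, requeue at the tail
    // or dead-letter once the retry budget is exhausted.
    void consumeOne(Predicate<String> handler) {
        Msg m = main.remove();
        if (handler.test(m.body)) return;          // processed fine
        if (++m.retries > maxRetries) deadLetter.add(m);
        else main.add(m);                          // back of the queue, not the front
    }
}
```

Note the trade-off: a failed message loses its original position (it moves behind newer messages), which is exactly why strict ordering and redelivery don't combine well in AMQP.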
I'm using node-amqp. For each queue, there is one sender and one consumer. On the sender side, I need to maintain a list of active consumers. The question is: when a consumer machine crashes, how would I get a notification so I can delete it from the list on the sender side?
I think you may not be using the MQ concept correctly. The whole point is to decouple the consumers from the producers. On the whole, it is not the job of the producers to know anything about the consumers, except the type of message they will be consuming. So much so that the producer will keep producing even if a consumer crashes, and the messages will simply build up in the queue it was reading from.
There is a way to do it by using RabbitMQ's HTTP API (at http://server-name:55672/api/) to get the list of connections, but it is too heavyweight for frequent queries. Another way, in theory, is to use alternate exchanges to detect undelivered messages, but I haven't tried that approach yet.
Also, it may be possible to detect an unexpected consumer disconnection by using dead-letter exchanges, as described here: http://www.rabbitmq.com/dlx.html
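If you really do need the producer to track live consumers, a common workaround outside AMQP itself is to have consumers send periodic heartbeat messages and expire them after a timeout. A minimal sketch of the producer-side registry, with all names invented (shown in Java rather than node-amqp, since the idea is transport-agnostic):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: producer-side registry of consumers, expired by heartbeat timeout.
public class ConsumerRegistry {
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();
    private final long timeoutMillis;

    public ConsumerRegistry(long timeoutMillis) { this.timeoutMillis = timeoutMillis; }

    // Called whenever a heartbeat message arrives from a consumer.
    public void heartbeat(String consumerId, long nowMillis) {
        lastSeen.put(consumerId, nowMillis);
    }

    // Drop consumers that have gone quiet; return the ids that were removed.
    public List<String> expire(long nowMillis) {
        List<String> dead = new ArrayList<>();
        lastSeen.forEach((id, seen) -> {
            if (nowMillis - seen > timeoutMillis) dead.add(id);
        });
        dead.forEach(lastSeen::remove);
        return dead;
    }

    public boolean isActive(String consumerId) { return lastSeen.containsKey(consumerId); }
}
```

Crash detection then becomes "no heartbeat within the timeout", which is the best any party can do over an async transport anyway.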
Is it possible to have a mutex on a RabbitMQ queue, i.e. if one client is reading from the queue, no other client should read from it?
Let me explain my scenario:
Two applications running on two different servers, reading the same queue. But if one application is running and reading messages from the queue, the other application should not do anything. If the main application fails or is stopped, then the other application should start reading from this queue.
This is a kind of failover mechanism. Has anyone tried this before? Any help is much appreciated.
As long as I have searched, no ready-made solutions found... A simple solution is:
Create a queue; call it the Lock Queue.
Put exactly one message in it, and make the application read that message from the queue.
Whenever the application starts on another server, it will wait for the message in the Lock Queue. So if the first application fails, the second one will read the message and start processing messages in the desired queue.
A mutex in a queue, that's it.
Note: This approach works only if there is exactly one message in the lock queue; make sure you handle that in your application.
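The lock-queue idea can be sketched with a toy model. This is illustrative only: in a real setup the "token" would be an unacked RabbitMQ delivery, so the broker itself requeues it when the holder's connection dies:

```java
import java.util.Optional;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy sketch of the lock-queue pattern: the queue holds exactly one token.
// Whoever holds the token is the active instance; when the holder dies the
// token is requeued and the standby instance can take over.
public class LockQueue {
    private final BlockingQueue<String> lock = new ArrayBlockingQueue<>(1);

    public LockQueue() { lock.add("token"); } // exactly one message, ever

    // Non-blocking attempt to become the active instance; a real standby
    // would block on the queue instead of polling once.
    public Optional<String> tryAcquire() {
        return Optional.ofNullable(lock.poll());
    }

    // Simulates the broker requeueing the token after the holder crashes.
    public void holderDied(String token) {
        lock.add(token);
    }
}
```

Usage: the first instance acquires the token and starts consuming; the second instance gets nothing and waits; once the holder dies and the token is requeued, the standby acquires it and takes over.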
This talk explicitly explains why this is a bad idea:
http://www.youtube.com/watch?v=XiXZOF6dZuE&feature=share&t=29m55s
from ~ 29m 55s in
Can someone tell me whether MSMQ (using transactions) supports competing consumers? Basically, I have multiple threads dequeueing messages off a single queue. Just wanted to make sure this will work, since MSMQ sometimes behaves differently than I expect.
If you are calling Receive from multiple processes on the same machine on the same queue, you will not get the same message more than once -- unless you rollback a transaction from a read.
If you are using 2008/w7 and are receiving on multiple machines from the same remote queue within a transaction, you should not see the same message twice (again, unless you roll back).
If you are using an enumerator to peek the messages and then remove an interesting one (via RemoveCurrent), you should expect to see an exception that the message has already been removed if another consumer has picked it up.
If you are on 2003/XP, you cannot do remote receives in a transaction so all bets are off there.
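The competing-consumers behaviour described here (each message received by exactly one of several readers) is not MSMQ-specific; the pattern can be sketched with a shared concurrent queue. This is Java rather than .NET, and the names are illustrative, but the guarantee being tested is the same: a thread-safe dequeue hands each message to exactly one worker:

```java
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch: several worker threads poll one shared queue; an atomic dequeue
// guarantees each message is consumed exactly once, with no duplicates.
public class CompetingConsumers {
    public static List<Integer> drain(Queue<Integer> queue, int workers)
            throws InterruptedException {
        List<Integer> consumed = new CopyOnWriteArrayList<>();
        Thread[] threads = new Thread[workers];
        for (int i = 0; i < workers; i++) {
            threads[i] = new Thread(() -> {
                Integer msg;
                while ((msg = queue.poll()) != null) consumed.add(msg);
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        return consumed;
    }

    public static void main(String[] args) throws InterruptedException {
        Queue<Integer> q = new ConcurrentLinkedQueue<>();
        for (int i = 0; i < 1000; i++) q.add(i);
        List<Integer> consumed = drain(q, 4);
        // Every message consumed exactly once, though per-thread order varies.
        System.out.println(consumed.size()); // prints 1000
    }
}
```

The transactional-rollback caveat from the answer above is the one difference a broker adds: a rolled-back receive puts the message back, so another consumer may then legitimately see it again.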