Configuring ActiveMQ messages - queue

In my project I am using ActiveMQ. Now I need to configure my messages to be automatically deleted from the queue after a certain time limit (say 24 hours). Can anybody say how and where to configure that? Can anyone give me sample code to do so? Thanks

Use JMS Time To Live on the message to tell the broker how long the message should remain on the queue. You can also configure on the broker what happens when a message expires; for instance, it can be sent to a DLQ.
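For example, with the ActiveMQ JMS client it would look roughly like this (broker URL and queue name are placeholders in this sketch):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ExpiringProducer {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("MY.QUEUE"));

        // Expire messages 24 hours after they are sent; JMS TTL is given in milliseconds.
        producer.setTimeToLive(24L * 60 * 60 * 1000);

        producer.send(session.createTextMessage("hello"));
        connection.close();
    }
}

Expired messages are discarded by the broker (or routed to whatever expiry/DLQ destination you configure) instead of being delivered.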

Related

With ActiveMQ Artemis is it possible to find out if a listener has stopped listening to a topic?

I'm using ActiveMQ Artemis 2.18.0 and some Spring Boot clients that communicate with each other via topics. The Spring Boot clients use JMS for all MQTT operations.
I'd like to know if it is possible for a producer with one or more subscribers to find out whether a certain subscriber is actively listening or not. For example, there are 3 clients - SB1, SB2, and SB3. SB1 publishes to test/topic, and SB2 and SB3 are subscribed to test/topic. If SB2 shuts down for any reason would it be possible for SB1 to become aware of this?
I understand that queues would be the way to go for this, but my project is much better suited to the use of topics, and it is set up this way already and works fine. There's just one operation where it must be determined whether the listener is active or not in order to update the listener's online status, a crucial parameter. Right now, clients and the server continually poll a database so that the online status is periodically updated; I want to avoid doing this and use something that Artemis may provide instead.
Apache ActiveMQ Artemis emits notifications to inform listeners of potentially interesting events, such as a consumer being created or closed; see Management Notifications at http://activemq.apache.org/components/artemis/documentation/latest/management.html.
A listener on the management notification address would receive a message for each consumer created or closed; see the Management Notification Example at http://activemq.apache.org/components/artemis/documentation/latest/examples.html#management-notification
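A minimal sketch of such a listener with the Artemis JMS client (the notification address "activemq.notifications" and the _AMQ_NotifType property are the defaults described in the docs; verify them against your broker configuration):

import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ConsumerNotificationListener {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Management notifications are published to an address; consume it as a topic.
        MessageConsumer consumer = session.createConsumer(session.createTopic("activemq.notifications"));
        consumer.setMessageListener(message -> {
            try {
                // CONSUMER_CREATED / CONSUMER_CLOSED notifications tell you when subscribers come and go.
                System.out.println("Notification: " + message.getStringProperty("_AMQ_NotifType"));
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });
        connection.start();
    }
}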
Part of the point of pub/sub-based messaging is to decouple the information producer (publisher) from the consumer (subscriber). As a rule a publisher REALLY shouldn't care if there even are any subscribers.
If you want to know the status of the subscriber then it's up to the subscriber to update this, not the publisher. Things like the Last Will & Testament feature allow the subscriber's status to be updated even when it fails to do so explicitly before going offline.
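The question's clients talk JMS to Artemis, but if the MQTT path is available, a Last Will sketch with Eclipse Paho (the status topic names here are invented for illustration) looks like this:

import org.eclipse.paho.client.mqttv3.*;

public class LwtExample {
    public static void main(String[] args) throws MqttException {
        MqttClient client = new MqttClient("tcp://localhost:1883", "SB2");
        MqttConnectOptions options = new MqttConnectOptions();
        // If SB2 disappears without a clean disconnect, the broker publishes this retained "offline" status.
        options.setWill("status/SB2", "offline".getBytes(), 1, true);
        client.connect(options);

        // Once connected, SB2 explicitly announces itself as online on the same status topic.
        MqttMessage online = new MqttMessage("online".getBytes());
        online.setRetained(true);
        client.publish("status/SB2", online);
    }
}

SB1 can then subscribe to status/SB2 instead of polling a database.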

How to skip old messages when connecting to a RabbitMQ producer

I've looked into the expiration and TTL policies for messages and queues, but I'm not sure if that's the best way to accomplish what I'm trying to do.
Ideally, when my consumer connects to my sender, I want to skip any old, unreceived messages, and only receive messages that are sent after connection. In Kafka, this was accomplished by configuring the consumer to essentially seek the queue to the end before beginning the consumption of more messages. A direct RabbitMQ equivalent to this feature didn't seem to exist, but I have to imagine there's a more efficient way to accomplish this without making the TTL or expiration on the messages very short.
How do I consume only messages received after connecting to a RabbitMQ producer?
What ended up working for us was configuring the producer during publishing rather than configuring the consumer.
channel.basic_publish([other params], properties=pika.BasicProperties(expiration='1000'))
This causes the messages to expire after one second, which was good enough for our needs.
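For reference, the equivalent per-message expiration with the RabbitMQ Java client would look roughly like this (host and queue name are placeholders):

import com.rabbitmq.client.*;

public class ExpiringPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            channel.queueDeclare("my-queue", true, false, false, null);
            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .expiration("1000")   // per-message TTL in milliseconds
                    .build();
            channel.basicPublish("", "my-queue", props, "hello".getBytes());
        }
    }
}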

How to handle application failure after reading an event from the source in Spring Cloud Stream with RabbitMQ

I am using Spring Cloud Stream over RabbitMQ for my project. I have a processor that reads from a source, processes the message, and publishes it to the sink.
Is my understanding correct that if my application picks up an event from the stream and fails (e.g. sudden app death):
unless I ack the message or
I save the message after reading it from the queue
then my event would be lost? What other option would I have to make sure not to lose the event in such a case?
Digging through the RabbitMQ documentation I found this very useful example page covering the different types of queues and message deliveries for RabbitMQ, and most of them can be used with AMQP.
In particular, looking at the work queue example for Java, I found exactly the answer I was looking for:
Message acknowledgment
Doing a task can take a few seconds. You may wonder what happens if
one of the consumers starts a long task and dies with it only partly
done. With our current code, once RabbitMQ delivers a message to the
consumer it immediately marks it for deletion. In this case, if you
kill a worker we will lose the message it was just processing. We'll
also lose all the messages that were dispatched to this particular
worker but were not yet handled. But we don't want to lose any tasks.
If a worker dies, we'd like the task to be delivered to another
worker.
In order to make sure a message is never lost, RabbitMQ supports
message acknowledgments. An ack(nowledgement) is sent back by the
consumer to tell RabbitMQ that a particular message has been received,
processed and that RabbitMQ is free to delete it.
If a consumer dies (its channel is closed, connection is closed, or
TCP connection is lost) without sending an ack, RabbitMQ will
understand that a message wasn't processed fully and will re-queue it.
If there are other consumers online at the same time, it will then
quickly redeliver it to another consumer. That way you can be sure
that no message is lost, even if the workers occasionally die.
There aren't any message timeouts; RabbitMQ will redeliver the message
when the consumer dies. It's fine even if processing a message takes a
very, very long time.
Manual message acknowledgments are turned on by default. In previous
examples we explicitly turned them off via the autoAck=true flag. It's
time to set this flag to false and send a proper acknowledgment from
the worker, once we're done with a task.
Thinking about it, using the ACK seems to be the logical thing to do. The reason I didn't think about it before is that I thought of an ACK only from the perspective of the publisher and not of the broker. The piece of documentation above was very useful to me.
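For illustration, here is roughly what the tutorial's manual-ack consumer looks like with the plain RabbitMQ Java client (queue name and host are placeholders; in Spring Cloud Stream the binder drives the acknowledgement for you, but the underlying principle is the same):

import com.rabbitmq.client.*;
import java.nio.charset.StandardCharsets;

public class Worker {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare("task_queue", true, false, false, null);

        boolean autoAck = false; // unacked messages are re-queued if this worker dies
        channel.basicConsume("task_queue", autoAck, (consumerTag, delivery) -> {
            String body = new String(delivery.getBody(), StandardCharsets.UTF_8);
            // ... process the task ...
            // Only after the work is done do we let RabbitMQ delete the message.
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        }, consumerTag -> { });
    }
}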

Does ActiveMQ ensure persistence?

I am using activemq queues in my project. Does it guarantee that messages will remain in the queue until dispatched even in the event of failures?
If enabled to do so, yes... it will persist messages in a message store (file, database, etc.) and only remove them after they have been successfully dequeued.
see this page for details on persistence options: http://activemq.apache.org/persistence.html
see this page for exception handling options: http://activemq.apache.org/message-redelivery-and-dlq-handling.html
In addition to the broker persistence configuration, you need to ensure the message producer delivery mode is persistent - see this.
On the consumer side, the acknowledgement mode of the session determines when the message is acknowledged. Usually the default behavior of the JMS client is AUTO - the message is acknowledged when the receive method returns. But watch out: some wrappers like Spring might send the ACK earlier! In that case you may want to use client acknowledgement or transactions...
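Putting both pieces together, a rough JMS sketch (broker URL and queue name are placeholders):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PersistenceExample {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        // Producer side: PERSISTENT delivery mode so the broker writes the message to its store.
        Session producerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = producerSession.createProducer(producerSession.createQueue("MY.QUEUE"));
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);
        producer.send(producerSession.createTextMessage("important"));

        // Consumer side: CLIENT_ACKNOWLEDGE defers removal until the application calls acknowledge().
        Session consumerSession = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        MessageConsumer consumer = consumerSession.createConsumer(consumerSession.createQueue("MY.QUEUE"));
        Message message = consumer.receive();
        // ... process the message ...
        message.acknowledge();

        connection.close();
    }
}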

How does a queue sender know that a consumer crashed?

I'm using node-amqp. For each queue, there is one sender and one consumer. On the sender side, I need to maintain a list of active consumers. The question is: when a consumer's machine crashes, how would I get a notification and delete it from the list on the sender side?
I think you may not be using the MQ concept correctly. The whole point is to decouple the consumers from the producers. On the whole it is not the job of the producers to know anything about the consumers, except the type of message they will be consuming - so much so that the producer will keep producing even if a consumer crashes, and the messages will continue to build up in the queue it was reading from.
There is a way to do it by using RabbitMQ's HTTP API (at http://server-name:55672/api/) to get the list of connections, but that is too heavyweight for frequent queries. Another way, in theory, is to use alternate exchanges to detect undelivered messages, but I haven't tried that approach yet.
It may also be possible to detect unexpected consumer disconnection by using dead-letter exchanges, as described here: http://www.rabbitmq.com/dlx.html
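The question uses node-amqp, but the dead-lettering setup is just queue arguments; a sketch with the RabbitMQ Java client (all exchange/queue names are placeholders) would be:

import com.rabbitmq.client.*;
import java.util.HashMap;
import java.util.Map;

public class DlxSetup {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // Messages rejected or expired on "work" are re-routed to the "dlx" exchange,
            // where a monitoring consumer can observe them.
            channel.exchangeDeclare("dlx", "fanout");
            channel.queueDeclare("undelivered", true, false, false, null);
            channel.queueBind("undelivered", "dlx", "");

            Map<String, Object> workArgs = new HashMap<>();
            workArgs.put("x-dead-letter-exchange", "dlx");
            channel.queueDeclare("work", true, false, false, workArgs);
        }
    }
}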