With ActiveMQ Artemis is it possible to find out if a listener has stopped listening to a topic? - activemq-artemis

I'm using ActiveMQ Artemis 2.18.0 and some Spring Boot clients that communicate with each other via topics. The Spring Boot clients use JMS for all MQTT operations.
I'd like to know if it is possible for a producer with one or more subscribers to find out whether a certain subscriber is actively listening or not. For example, there are 3 clients - SB1, SB2, and SB3. SB1 publishes to test/topic, and SB2 and SB3 are subscribed to test/topic. If SB2 shuts down for any reason would it be possible for SB1 to become aware of this?
I understand that queues would be the way to go for this, but my project is much better suited to the use of topics, and it is set up this way already and works fine. There's just one operation where it must be determined whether the listener is active or not in order to update the listener's online status, a crucial parameter. Right now, clients and the server continually poll a database so that the online status is periodically updated; I want to avoid doing this and use something that Artemis may provide instead.

Apache ActiveMQ Artemis emits notifications to inform listeners of potentially interesting events, such as a consumer being created or closed; see Management Notifications at http://activemq.apache.org/components/artemis/documentation/latest/management.html.
A listener on the management notification address would receive a message for each consumer created or closed; see the Management Notification Example at http://activemq.apache.org/components/artemis/documentation/latest/examples.html#management-notification
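Below is a minimal sketch of such a listener using plain JMS. It assumes the default notification address name (activemq.notifications) and the default _AMQ_NotifType message property; check your broker.xml if you have changed the management configuration.

import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ConsumerNotificationListener {
    public static void main(String[] args) throws Exception {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (Connection connection = cf.createConnection()) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // Management notifications are sent to a multicast address, so consume it as a topic.
            Topic notifications = session.createTopic("activemq.notifications");
            MessageConsumer consumer = session.createConsumer(notifications);
            consumer.setMessageListener(message -> {
                try {
                    // _AMQ_NotifType carries the event type, e.g. CONSUMER_CREATED / CONSUMER_CLOSED.
                    String type = message.getStringProperty("_AMQ_NotifType");
                    if ("CONSUMER_CREATED".equals(type) || "CONSUMER_CLOSED".equals(type)) {
                        System.out.println("Consumer notification: " + type);
                    }
                } catch (JMSException e) {
                    e.printStackTrace();
                }
            });
            connection.start();
            Thread.sleep(60_000); // keep the listener alive for a minute in this demo
        }
    }
}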

Part of the point of pub/sub-based messaging is to decouple the information producer (publisher) from the consumer (subscriber). As a rule, a publisher REALLY shouldn't care whether there even are any subscribers.
If you want to know the status of the subscriber then it's up to the subscriber to update this, not the publisher. Things like the Last Will & Testament feature allow the subscriber's status to be updated in the event that it fails to do so explicitly when going offline.
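For illustration, here is a rough sketch of a subscriber registering a Last Will & Testament with the Eclipse Paho MQTT client. The status/sb2 topic and the online/offline payloads are made-up names, not something from the question; the broker publishes the will message if the client disappears without disconnecting cleanly, and the client overwrites the retained status itself on a clean shutdown.

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;

public class Sb2StatusPublisher {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://localhost:1883", "SB2");
        MqttConnectOptions options = new MqttConnectOptions();
        // Broker publishes this retained "offline" message if SB2 dies ungracefully.
        options.setWill("status/sb2", "offline".getBytes(), 1, true);
        client.connect(options);

        // Announce we are online (retained, so late subscribers such as SB1 see the latest status).
        client.publish("status/sb2", "online".getBytes(), 1, true);

        // ... normal work ...

        // On a clean shutdown, explicitly flip the status before disconnecting.
        client.publish("status/sb2", "offline".getBytes(), 1, true);
        client.disconnect();
    }
}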

Related

MassTransit Ensure Queues Created before Publishing Messages

We have multiple services and use the publish/subscribe pattern for sending events from service A to be handled by other services (B & C). The goal is to allow multiple queues to receive messages from a Producer by matching the binding key / topic.
This works fine if services B & C start first. In that case, the Subscribe method creates the Exchanges and Queues to receive the messages when published. However, if service A starts first, then the published messages are lost as the receiving queues are not created.
Looking for the Best Practice way to ensure the queues are created before publish. The producer does not have knowledge of the consumers, and there may be more consumers over time for a given message type, so we can't have the producer code take responsibility for queue creation.
Our current implementation is using RabbitMQ on the backplane, but we want to migrate over time to SQS and Azure Service Bus, so we need this to be message-broker agnostic.
The simple answer: start your consumer services before you start your publishers.
Alternatively, you could use the DeployTopologyOnly flag with a custom build or command line to deploy the queues/exchanges/bindings without actually starting the consumers, but it will still be the consumer service with all of its configuration.

What happens to subscribers when the Kafka service is down? Do they need to subscribe to a specific topic when it restarts?

Currently I have to send events externally to the client, which needs to subscribe to these events. I have an endpoint that the client calls (subscribe) that follows the Server-Sent Events specification. This opens an HTTP connection that is kept alive by the server, which sends "heartbeat" events.
The problem is that when this service needs to be redeployed, or it goes down, it is the responsibility of the client to re-subscribe by calling this endpoint again in order to keep receiving the events in real time.
I was wondering: if I switch to a technology like RabbitMQ or Kafka, can I solve this problem? In other words, I would like to remove the client's responsibility to re-subscribe if something goes wrong on the server side.
If you can attach articles/resources to your answer, that would be great.
With RabbitMQ, the reconnection feature depends on the client library. For example, the Java and .NET clients do provide this (check here).
With Kafka I see there are configurations to support this. It's also worth reading the excellent recommendations from Kafka for surviving broker outages here.
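As an illustration of the RabbitMQ side, here is a minimal sketch using the RabbitMQ Java client's automatic recovery options, which re-establish the connection and re-register consumers after a broker restart without any application-level re-subscribe logic; the host and queue name are placeholders.

import com.rabbitmq.client.*;

public class RecoveringConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        factory.setAutomaticRecoveryEnabled(true);  // recover connections and channels automatically
        factory.setNetworkRecoveryInterval(5000);   // retry every 5 seconds
        factory.setTopologyRecoveryEnabled(true);   // re-declare queues/exchanges/bindings and consumers

        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare("events", true, false, false, null);

        // This consumer is re-registered by the client library after a recovered outage.
        channel.basicConsume("events", true,
                (consumerTag, delivery) -> System.out.println(new String(delivery.getBody())),
                consumerTag -> { /* consumer was cancelled */ });
    }
}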

ActiveMQ Artemis How to control the multicast queue name?

When a consumer subscribes to a topic, a multicast queue is automatically created with a system generated name. I'd like to know if it's possible to control this name generation to make it more friendly (like consumer_id + session_id + idx).
I was using the web console to monitor, and with the previous version of ActiveMQ (prior to Artemis) I used to see the consumer names for each subscription to a topic, which was very convenient.
What I used to see in the web console with ActiveMQ 5.0:
What I see now in the console with Artemis:
These UUID-named queues are temporary subscriptions that will be deleted as soon as the client disconnects.
The classic way of having named durable subscriptions of the form clientId.subscriptionName is to set the clientId and subscriptionName properties on your client. Note that durable subscriptions will continue to receive messages even when the subscriber disconnects.
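A minimal JMS sketch of that classic approach (the broker URL, client ID, and subscription name are placeholders):

import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class DurableSubscriber {
    public static void main(String[] args) throws Exception {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (Connection connection = cf.createConnection()) {
            connection.setClientID("consumer-1");  // the clientId part of the subscription queue name
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("example.foo");
            // Artemis backs this subscription with a queue named "consumer-1.my-subscription".
            MessageConsumer subscriber = session.createDurableSubscriber(topic, "my-subscription");
            connection.start();
            Message message = subscriber.receive(5000);
            System.out.println("Received: " + message);
        }
    }
}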
With Artemis, you can also use the fully qualified queue name (FQQN) feature to achieve the same, but with the additional benefit of full control over the durable subscription name:
First, create a multicast address like this:
<address name="example.foo">
   <multicast>
      <queue name="q1"/>
      <queue name="q2"/>
   </multicast>
</address>
At this point, you can send messages to the example.foo address and consume them from the example.foo::q1 and example.foo::q2 queues (note the :: separator).
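For example, a JMS consumer can attach directly to one of those queues by using the FQQN as the queue name; this is a rough sketch assuming the broker.xml snippet above and a broker on localhost:

import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class FqqnConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (Connection connection = cf.createConnection()) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // "address::queue" is the Artemis fully qualified queue name syntax.
            Queue q1 = session.createQueue("example.foo::q1");
            MessageConsumer consumer = session.createConsumer(q1);
            connection.start();
            Message message = consumer.receive(5000);
            System.out.println("Received: " + message);
        }
    }
}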
It appears to me that you're not looking in the right place for consumer information in the ActiveMQ Artemis web console. Currently you're looking in the main navigation tree which shows "major" components like acceptors, addresses, queues, etc. Consumers do not appear here by design. Your screenshot of the ActiveMQ Artemis web console simply shows the multicast queues on an address. In the JMS topic use-case each subscriber gets their own queue typically called a "subscription queue". However, a subscription queue is not the same as a consumer. The consumer consumes from the subscription queue. If you want to see the corresponding consumers then you need to open the "Consumers" tab.
For example, here is a screen shot when 3 non-durable JMS subscribers are connected to a JMS topic named myTopic:
As you note, the names of the subscription queues don't really tell you much. However, if you click on the "Consumers" tab you will see a wealth of information, e.g.:
From the "Consumers" tab you can see where consumers are connecting from, what queue & address they're using, when they were created, their client ID, etc. If there are lots of consumers you can filter them based on, for example, the address or queue they're using.
It's important to note that ActiveMQ Artemis does not represent every consumer with an underlying JMX MBean. This is what ActiveMQ 5.x does and this can cause resource utilization problems with a large number of consumers since MBeans are fairly "heavy" objects.
Note: These screenshots were taken with the latest release (i.e. 2.16.0) which includes a newer version of the web console than the one you appear to be using in your screenshot.

Can I edit messages on mqtt server?

Building an instant chat app (native iOS and web). Exploring whether to use XMPP or MQTT as the application protocol. Seemingly I can't have users editing old messages on XMPP. Can messages be edited on MQTT?
Example: I want to implement "Edit Message" like Slack offers, but upon clicking "(edited)" to allow the user to see the different versions of the message and their timestamps (like the edit history for comments you find in Facebook), enabling an "audit trail" of the conversation.
Follow-up: As it seems this can only be achieved through a "hack", would it be better to get the hack done on XMPP or MQTT or some other protocol/websockets/JSON, etc?
Once an MQTT message is published to the broker, the publishing client has no more control over that message at all.
Most brokers will not allow you to edit the message either, as they will just forward the message instantly to all clients subscribed to the relevant topics and queue the message for any offline clients with persistent subscriptions.
The only exception may be the mosca broker, which has a callback for when messages reach the broker, but this would not allow a user to edit a message, only the system to possibly update the payload in the instant before it is forwarded to the subscribed clients.
Hardlib's advice is correct: editing messages in this way is not supported by most MQTT implementations, and implementing it would break the loose coupling between publisher and subscriber that is the virtue of MQTT. In other words, this should be implemented at a higher level or through other means.
That said, if I understand editing to mean the ability to change what the broker forwards to clients that were not online during the initial publication, you could implement this with retained messages. Consider this:
Client A is subscribed to topic clientb/# and Client B is subscribed to topic clienta/#.
Client A publishes a message to clienta/(unique message id) while Client B is not actively connected. The broker retains the message.
Client A decides to edit the message, so (through some interface you devise) they publish an amended message to clienta/(unique message id), which replaces the original and, from a subscriber's perspective, edits what is there.
Client B receives the amended message when they come online and (as long as there isn't a persistent session or something like that) has no knowledge of the change.
From this example you can probably tell why this is a bad idea, as the server would retain every single message in a different topic and would likely need regular pruning... not to mention that it would make a mess out of timestamps and require all sorts of other workarounds. However, if there is some reason that you have to implement it this way, you could hack something usable together.
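As a rough illustration of that retained-message flow (the topic name and payloads here are hypothetical, using the Eclipse Paho client): republishing a retained message to the same per-message topic replaces what the broker hands to clients that subscribe later.

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class RetainedMessageEdit {
    public static void main(String[] args) throws Exception {
        MqttClient clientA = new MqttClient("tcp://localhost:1883", "clientA");
        clientA.connect();

        String topic = "clienta/msg-0001";  // one topic per unique message id

        MqttMessage original = new MqttMessage("hello wrld".getBytes());
        original.setRetained(true);         // broker keeps the last retained payload on this topic
        clientA.publish(topic, original);

        // "Edit": publish a new retained payload to the same topic; late subscribers
        // (e.g. Client B coming online) only ever see the amended version.
        MqttMessage edited = new MqttMessage("hello world".getBytes());
        edited.setRetained(true);
        clientA.publish(topic, edited);

        clientA.disconnect();
    }
}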

Is RabbitMQ capable of "pushing" messages from a queue to a consumer?

With RabbitMQ, is there a way to "push" messages from a queue TO a consumer as opposed to having a consumer "poll and pull" messages FROM a queue?
This has been the cause of some debate on a current project I'm on. The argument from one side is that having consumers (i.e. a Windows service) "poll" a queue until a new message arrives is somewhat inefficient and less desirable than having the message "pushed" automatically from the queue to the subscriber(s)/consumer(s).
I can only seem to find information supporting the idea of consumers "polling and pulling" messages off of a queue (e.g. using a Windows service to poll the queue for new messages). There isn't much information on the idea of "pushing" messages to a consumer/subscriber...
Having the server push messages to the client is one of the two ways to get messages to the client, and the preferred way for most applications. This is known as consuming messages via a subscription.
The client is connected. (The AMQP/RabbitMQ/most messaging systems model is that the client is always connected - except for network interruptions, of course.)
You use the client API to arrange that your channel consume messages by supplying a callback method. Then whenever a message is available the server sends it to the client over the channel and the client application gets it via an asynchronous callback (typically one thread per channel). You can set the "prefetch count" on the channel which controls the amount of pipelining your client can do over that channel. (For further parallelism an application can have multiple channels running over one connection, which is a common design that serves various purposes.)
The alternative is for the client to poll for messages one at a time, over the channel, via a get method.
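Here is a short sketch contrasting the two modes with the RabbitMQ Java client (the queue name and host are placeholders): basicConsume registers a callback that the broker pushes deliveries to, while basicGet pulls a single message on demand.

import com.rabbitmq.client.*;

public class PushVsPull {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare("work", true, false, false, null);
        channel.basicQos(10);  // prefetch count: how many unacked messages may be pipelined

        // Push: register a consumer; the broker delivers messages to the callback as they arrive.
        channel.basicConsume("work", false, (consumerTag, delivery) -> {
            System.out.println("Pushed: " + new String(delivery.getBody()));
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        }, consumerTag -> { /* consumer was cancelled */ });

        // Pull: fetch a single message on demand (returns null if the queue is empty).
        GetResponse response = channel.basicGet("work", false);
        if (response != null) {
            System.out.println("Pulled: " + new String(response.getBody()));
            channel.basicAck(response.getEnvelope().getDeliveryTag(), false);
        }
    }
}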
You "push" messages from Producer to Exchange.
https://www.rabbitmq.com/tutorials/tutorial-three-python.html
BTW this is fitting very well IoT scenarios. Devices produce messages and sends them to an exchange. Queue is handling persistence, FIFO and other features, as well as delivery of messages to subscribers.
And, by the way, you never "Poll" the queue. Instead, you always subscribe to publisher. Similar to observer pattern. Generally, I would say genius principle.
So it is similar to post box or post office, except it sends you a notification when message is available.
Quoting from the docs here:
AMQP brokers either deliver messages to consumers subscribed to queues, or consumers fetch/pull messages from queues on demand.
And from here:
Storing messages in queues is useless unless applications can consume them. In the AMQP 0-9-1 Model, there are two ways for applications to do this:
Have messages delivered to them ("push API")
Fetch messages as needed ("pull API")
With the "push API", applications have to indicate interest in consuming messages from a particular queue. When they do so, we say that they register a consumer or, simply put, subscribe to a queue. It is possible to have more than one consumer per queue or to register an exclusive consumer (excludes all other consumers from the queue while it is consuming).
Each consumer (subscription) has an identifier called a consumer tag. It can be used to unsubscribe from messages. Consumer tags are just strings.
The RabbitMQ broker is like a server that won't send data to a consumer unless the consumer client registers itself with the server. But then the following question comes up:
Can RabbitMQ keep the client consumer's details and connect to the client when a packet arrives?
The answer is no. So what is the alternative? Write a plugin yourself that maintains the client information in some kind of config. The plugin would pull from the RabbitMQ queue and push to the client.
Please take a look at this plugin; it might help:
https://www.rabbitmq.com/shovel.html
Frankly speaking, the client would need to implement the AMQP protocol to receive messages this way and would have to listen for connections on some port, which sounds like another server.