HornetQ Core Bridge - one publisher, multiple consumers

Server A publishes data to the topic source/topic, and two durable subscribers, sub-b and sub-c, are configured to listen to that topic. Subscribers sub-b and sub-c receive identical data.
Is there a way in HornetQ to configure multiple core bridges so that messages from the sub-b subscription are forwarded to server B and messages from the sub-c subscription to server C?
The HornetQ documentation suggests using a core bridge instead of a JMS bridge where possible:
It's always preferable to use a core bridge if you can.
The bridgeType schema definition does not seem to support using a subscriber name the way a JMS bridge bean definition does.
The workaround I came up with is to use a JMS bridge, but I was wondering if anyone has come across this issue before. Would you mind sharing your thoughts?

A JMS topic (i.e. source/topic) is represented in the broker simply as an address. A JMS subscription (i.e. sub-b and sub-c) is represented in the broker as a queue associated with the relevant address (source/topic in this case). The queue's internal name is a combination of details from the JMS subscriber (e.g. client ID, subscription name, etc.). When a message is sent to the JMS topic the broker routes a reference to that message to each subscription so that every subscription gets every message (assuming its selector matches).
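That routing behavior can be sketched in a few lines of plain Python (a toy illustration of the semantics described above, not broker code; the class and names are invented for illustration):

```python
from collections import deque

class Address:
    """Toy model of a broker address with multiple subscription queues."""
    def __init__(self):
        self.queues = {}  # subscription name -> queue of message references

    def add_subscription(self, name):
        self.queues[name] = deque()

    def send(self, message):
        # The broker routes a reference to the message to every subscription,
        # so each subscriber sees every message.
        for q in self.queues.values():
            q.append(message)

addr = Address()
addr.add_subscription("sub-b")
addr.add_subscription("sub-c")
addr.send("event-1")
print(addr.queues["sub-b"][0])  # event-1
print(addr.queues["sub-c"][0])  # event-1
```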
A core bridge listens for messages arriving in a queue and then forwards those messages to an address, either locally or on a remote broker.
In your case, you can create a bridge which listens on the queue of each JMS subscription and then forwards those messages to a remote broker of your choosing.
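As a sketch, the bridge configuration in hornetq-configuration.xml might look like the following. The subscription queue names, forwarding addresses, and connector names here are assumptions for illustration; you would need to look up the actual internal names of the sub-b and sub-c subscription queues (e.g. via the management console) and define connectors pointing at servers B and C.

```xml
<bridges>
   <!-- forward messages from sub-b's subscription queue to server B -->
   <bridge name="bridge-to-b">
      <queue-name>clientB.sub-b</queue-name>
      <forwarding-address>jms.topic.source/topic</forwarding-address>
      <static-connectors>
         <connector-ref>server-b-connector</connector-ref>
      </static-connectors>
   </bridge>
   <!-- forward messages from sub-c's subscription queue to server C -->
   <bridge name="bridge-to-c">
      <queue-name>clientC.sub-c</queue-name>
      <forwarding-address>jms.topic.source/topic</forwarding-address>
      <static-connectors>
         <connector-ref>server-c-connector</connector-ref>
      </static-connectors>
   </bridge>
</bridges>
```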

Related

MassTransit Ensure Queues Created before Publishing Messages

We have multiple services and use the publish/subscribe pattern for sending events from service A to be handled by other services (B & C). The goal is to allow multiple queues to receive messages from a Producer by matching the binding key / topic.
This works fine if services B & C start first. In that case, the Subscribe method creates the Exchanges and Queues to receive the messages when published. However, if service A starts first, then the published messages are lost as the receiving queues are not created.
Looking for the Best Practice way to ensure the queues are created before publish. The producer does not have knowledge of the consumers, and there may be more consumers over time for a given message type, so we can't have the producer code take responsibility for queue creation.
Our current implementation uses RabbitMQ on the backplane, but we want to migrate over time to SQS and Azure Service Bus, so we need this to be message-broker agnostic.
The simple answer: start your consumer services before you start your publishers.
Alternatively, you could use the DeployTopologyOnly flag with a custom build or command line to deploy the queues/exchanges/bindings without actually starting the consumers, but it will still be the consumer service, with all of its configuration, that performs the deployment.

With ActiveMQ Artemis is it possible to find out if a listener has stopped listening to a topic?

I'm using ActiveMQ Artemis 2.18.0 and some Spring Boot clients that communicate with each other via topics. The Spring Boot clients use JMS for all MQTT operations.
I'd like to know if it is possible for a producer with one or more subscribers to find out whether a certain subscriber is actively listening or not. For example, there are 3 clients - SB1, SB2, and SB3. SB1 publishes to test/topic, and SB2 and SB3 are subscribed to test/topic. If SB2 shuts down for any reason would it be possible for SB1 to become aware of this?
I understand that queues would be the way to go for this, but my project is much better suited to the use of topics; it is set up this way already and works fine. There's just one operation where it must be determined whether a listener is active or not, in order to update that listener's online status, a crucial parameter. Right now, clients and the server continually poll a database so that the online status stays up to date; I want to avoid doing this and use something that Artemis may provide instead.
Apache ActiveMQ Artemis emits notifications to inform listeners of potentially interesting events, such as a consumer being created or closed; see Management Notifications at http://activemq.apache.org/components/artemis/documentation/latest/management.html.
A listener on the management notification address receives a message for each consumer that is created or closed; see the Management Notification Example at http://activemq.apache.org/components/artemis/documentation/latest/examples.html#management-notification
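For reference, the notification address is configurable in broker.xml (the value below is the default). A consumer attached to this address receives a management message for events such as CONSUMER_CREATED and CONSUMER_CLOSED, with details such as the address, queue, and client ID carried in the message properties:

```xml
<core xmlns="urn:activemq:core">
   <!-- address to which management notifications are sent (default shown) -->
   <management-notification-address>activemq.notifications</management-notification-address>
</core>
```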
Part of the point of pub/sub-based messaging is to decouple the information producer (publisher) from the consumer (subscriber). As a rule, a publisher really shouldn't care whether there are any subscribers at all.
If you want to know the status of a subscriber then it's up to the subscriber to update it, not the publisher. Features like MQTT's Last Will & Testament allow the subscriber's status to be updated in the event of a failure, complementing an explicit status update when it goes offline cleanly.

ActiveMQ Artemis: how to control the multicast queue name?

When a consumer subscribes to a topic, a multicast queue is automatically created with a system generated name. I'd like to know if it's possible to control this name generation to make it more friendly (like consumer_id + session_id + idx).
I was using the web console to monitor, and with the previous version of ActiveMQ (prior to Artemis) I used to see the consumer names for each subscription to a topic which was very convenient.
What I used to see in the web console with ActiveMQ 5.0:
What I see now in the console with Artemis:
These UUID named queues are temporary subscriptions that will be deleted as soon as the client disconnects.
The classic way of having named durable subscriptions of the form clientId.subscriptionName is to set the clientId and subscriptionName properties on your client. Note that a durable subscription will continue to accumulate messages even while the subscriber is disconnected.
With Artemis, you can also use the fully qualified queue names (FQQN) feature to achieve the same, but with the additional benefit of full control on the durable subscription name:
First, create a multicast address like this:
<address name="example.foo">
<multicast>
<queue name="q1"></queue>
<queue name="q2"></queue>
</multicast>
</address>
At this point, you can send messages to example.foo topic and consume them from example.foo::q1 and example.foo::q2 queues (note the :: separator).
It appears to me that you're not looking in the right place for consumer information in the ActiveMQ Artemis web console. Currently you're looking in the main navigation tree which shows "major" components like acceptors, addresses, queues, etc. Consumers do not appear here by design. Your screenshot of the ActiveMQ Artemis web console simply shows the multicast queues on an address. In the JMS topic use-case each subscriber gets their own queue typically called a "subscription queue". However, a subscription queue is not the same as a consumer. The consumer consumes from the subscription queue. If you want to see the corresponding consumers then you need to open the "Consumers" tab.
For example, here is a screenshot taken when 3 non-durable JMS subscribers are connected to a JMS topic named myTopic:
As you note, the names of the subscription queues don't really tell you much. However, if you click on the "Consumers" tab you will see a wealth of information, e.g.:
From the "Consumers" tab you can see where consumers are connecting from, what queue & address they're using, when they were created, their client ID, etc. If there are lots of consumers you can filter them based on, for example, the address or queue they're using.
It's important to note that ActiveMQ Artemis does not represent every consumer with an underlying JMX MBean. This is what ActiveMQ 5.x does and this can cause resource utilization problems with a large number of consumers since MBeans are fairly "heavy" objects.
Note: These screenshots were taken with the latest release (i.e. 2.16.0) which includes a newer version of the web console than the one you appear to be using in your screenshot.

Kafka vs JMS for event publishing

In our scenario we have a set of microservices which interact with other services by sending event messages. We anticipate millions of messages per day at peak. Every message is targeted at one or more listener types.
Our requirements are as follows:
Zero lost messages.
Ability to dynamically register multiple listeners of a specific type in order to increase throughput.
Listeners are not guaranteed to be alive when messages are dispatched.
We are considering two options:
Send each message to JMS main queue then listeners of that queue will route the messages to specific queues according to message content, and then target services will listen to those specific queues.
Send messages to a Kafka topic by message type then target services will subscribe to the relevant topic and consume the messages.
What are the cons and pros for using either JMS or Kafka for that purpose?
Your first requirement is "zero lost messages". However, if you want publish-subscribe semantics (i.e. topics in JMS) and listeners are not guaranteed to be alive when messages are dispatched, then non-durable JMS subscriptions are a non-starter, as messages sent while a subscriber is offline are simply discarded (i.e. lost). Durable subscriptions avoid this, but each subscriber must have registered its subscription before the messages are sent.
I would suggest going with Kafka, as it has fault-tolerance mechanisms, and even if a message is not captured by any listener you can retrieve it again from the Kafka cluster.
Along with this, you can easily add a new listener, or a listener to a consumer group, and Kafka (together with ZooKeeper) will take care of managing it.
In summary, Kafka is a distributed publish-subscribe messaging system that is designed to be fast, scalable, and durable. Like many publish-subscribe messaging systems, Kafka maintains feeds of messages in topics. Producers write data to topics and consumers read from topics.
It is also very easy to integrate.
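The key difference for the "listeners are not guaranteed to be alive" requirement is Kafka's retained log: messages stay in the topic until retention expires, and each consumer group tracks its own offset, so a consumer that comes up later resumes where it left off. A toy sketch of that model (plain Python, purely illustrative, not real Kafka client code):

```python
class TopicLog:
    """Toy model of a Kafka topic partition: an append-only log with
    per-consumer-group offsets. Messages are retained whether or not
    any consumer is currently alive."""
    def __init__(self):
        self.log = []
        self.offsets = {}  # consumer group -> next offset to read

    def produce(self, message):
        self.log.append(message)

    def poll(self, group):
        # Each group reads from its own committed offset onward.
        start = self.offsets.get(group, 0)
        records = self.log[start:]
        self.offsets[group] = len(self.log)
        return records

topic = TopicLog()
topic.produce("event-1")
topic.produce("event-2")        # no consumer alive yet: nothing is lost
print(topic.poll("service-b"))  # ['event-1', 'event-2']
topic.produce("event-3")
print(topic.poll("service-b"))  # ['event-3']
print(topic.poll("service-c"))  # ['event-1', 'event-2', 'event-3']
```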

Is RabbitMQ capable of "pushing" messages from a queue to a consumer?

With RabbitMQ, is there a way to "push" messages from a queue TO a consumer as opposed to having a consumer "poll and pull" messages FROM a queue?
This has been the cause of some debate on a current project I'm on. The argument from one side is that having consumers (e.g. a Windows service) "poll" a queue until a new message arrives is somewhat inefficient and less desirable than the idea of having the message "pushed" automatically from the queue to the subscriber(s)/consumer(s).
I can only seem to find information supporting the idea of consumers "polling and pulling" messages off of a queue (e.g. using a Windows service to poll the queue for new messages). There isn't much information on the idea of "pushing" messages to a consumer/subscriber...
Having the server push messages to the client is one of the two ways to get messages to the client, and the preferred way for most applications. This is known as consuming messages via a subscription.
The client is connected. (The AMQP/RabbitMQ/most messaging systems model is that the client is always connected - except for network interruptions, of course.)
You use the client API to arrange that your channel consume messages by supplying a callback method. Then whenever a message is available the server sends it to the client over the channel and the client application gets it via an asynchronous callback (typically one thread per channel). You can set the "prefetch count" on the channel which controls the amount of pipelining your client can do over that channel. (For further parallelism an application can have multiple channels running over one connection, which is a common design that serves various purposes.)
The alternative is for the client to poll for messages one at a time, over the channel, via a get method.
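The two delivery modes can be sketched with a toy in-process channel (plain Python illustrating only the shape of the API, not actual AMQP): basic_consume registers a callback that is invoked as messages arrive (push), while basic_get fetches one message on demand (pull).

```python
from collections import deque

class ToyChannel:
    """Toy illustration of AMQP push vs pull delivery on one queue."""
    def __init__(self):
        self.queue = deque()
        self.callback = None

    def basic_consume(self, callback):
        # Push API: register a callback; the "broker" invokes it
        # whenever a message becomes available.
        self.callback = callback

    def basic_get(self):
        # Pull API: explicitly fetch a single message (or None if empty).
        return self.queue.popleft() if self.queue else None

    def publish(self, message):
        if self.callback is not None:
            self.callback(message)      # pushed straight to the consumer
        else:
            self.queue.append(message)  # waits until someone pulls it

received = []
ch = ToyChannel()
ch.basic_consume(received.append)
ch.publish("hello")      # delivered via the callback
print(received)          # ['hello']

ch2 = ToyChannel()
ch2.publish("world")
print(ch2.basic_get())   # world
```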
You "push" messages from the producer to an exchange.
https://www.rabbitmq.com/tutorials/tutorial-three-python.html
By the way, this fits IoT scenarios very well: devices produce messages and send them to an exchange, while the queue handles persistence, FIFO, and other features, as well as delivery of messages to subscribers.
And you never "poll" the queue. Instead, you subscribe, similar to the observer pattern. Generally, I would say it's an ingenious principle.
It is similar to a post box or post office, except that it sends you a notification when a message is available.
Quoting from the docs here:
AMQP brokers either deliver messages to consumers subscribed to queues, or consumers fetch/pull messages from queues on demand.
And from here:
Storing messages in queues is useless unless applications can consume them. In the AMQP 0-9-1 Model, there are two ways for applications to do this:
Have messages delivered to them ("push API")
Fetch messages as needed ("pull API")
With the "push API", applications have to indicate interest in consuming messages from a particular queue. When they do so, we say that they register a consumer or, simply put, subscribe to a queue. It is possible to have more than one consumer per queue or to register an exclusive consumer (excludes all other consumers from the queue while it is consuming).
Each consumer (subscription) has an identifier called a consumer tag. It can be used to unsubscribe from messages. Consumer tags are just strings.
The RabbitMQ broker is like a server that won't send data to a consumer unless the consumer client has registered itself with the server. That raises the question:
Can RabbitMQ keep consumer details and connect out to the client when a message arrives?
The answer is no. The alternative would be to write a plugin yourself that maintains client information in some kind of configuration, pulls from the RabbitMQ queue, and pushes to the client.
This plugin might help:
https://www.rabbitmq.com/shovel.html
Frankly speaking, the client would need to implement the AMQP protocol to receive messages that way, and would have to listen for connections on some port, which sounds like another server.
Regards,
Vishal