What functionality does a queue provide that can't be achieved with a topic?

The main requirement that I run into is that consumers cannot compete for a single message on a topic. For example, I have a client who publishes call center events. Several systems subscribe to these events. One of these systems is the actual call routing application which has multiple instances running. If each instance subscribes then the call is routed to all of them. However, if the message is dropped onto a queue and all the instances consume off the same queue then only one will receive the message and the call goes to that operator. If the publishing application converts from topics to a queue, the call center works but all the other subscriber apps don't get the message.
The solution (as implemented in WebSphere MQ) was to create an administrative subscription on the topic and deliver the messages to a queue that all application instances consume from. So the producer apps are still publishers, all the dynamic subscribers still get copies of the message and the call center app instances compete for a single instance of each published message.
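For illustration, here is a minimal sketch of the consuming side of that pattern using the JMS 2.0 API (the JNDI names and the queue name CALL.ROUTING.QUEUE are placeholders, and the administrative subscription that feeds the queue is configured in the broker, not in code). Every call-router instance runs this same code against the same queue, so each published event is handed to exactly one instance:

    import javax.jms.ConnectionFactory;
    import javax.jms.JMSConsumer;
    import javax.jms.JMSContext;
    import javax.jms.Queue;
    import javax.naming.InitialContext;

    public class CallRouterInstance {
        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext();
            // Placeholder JNDI names; in practice these come from your environment.
            ConnectionFactory cf = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
            Queue routingQueue = (Queue) jndi.lookup("jms/CALL.ROUTING.QUEUE");

            try (JMSContext ctx = cf.createContext()) {
                JMSConsumer consumer = ctx.createConsumer(routingQueue);
                // All running instances consume from the same queue, so they
                // compete for messages: each event is delivered to one instance only.
                consumer.setMessageListener(msg -> {
                    // ... route the call represented by msg ...
                });
                Thread.sleep(Long.MAX_VALUE); // keep this instance alive (sketch only)
            }
        }
    }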
Also, you can't use browse semantics on a topic whereas you can on a queue. With topics you can specify selectors to filter the messages that are returned but that's about it. With queues you can browse, reset the browse pointer and then browse some more.
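A small JMS sketch of queue browsing (the connection factory and queue are assumed to come from JNDI or the provider's API; the selector is just an example). Browsing returns copies of the messages without consuming them, and creating a fresh browser effectively resets the browse position:

    import java.util.Enumeration;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSContext;
    import javax.jms.Message;
    import javax.jms.Queue;
    import javax.jms.QueueBrowser;

    public class BrowseExample {
        static void browse(ConnectionFactory cf, Queue queue) throws Exception {
            try (JMSContext ctx = cf.createContext()) {
                // Nothing is removed from the queue while browsing.
                QueueBrowser browser = ctx.createBrowser(queue, "JMSPriority > 4"); // optional selector
                Enumeration<?> messages = browser.getEnumeration();
                while (messages.hasMoreElements()) {
                    Message m = (Message) messages.nextElement();
                    System.out.println(m.getJMSMessageID());
                }
                browser.close();
            }
        }
    }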
If you put a message on a queue and nothing is there to receive it, the message remains queued up. If you put a message to a topic and there are no active subscribers or durable subscriptions, the message is discarded. Therefore messages in a queue are naturally durable whereas messages on a topic may or may not be.
From a pure JMS perspective, Queue and Topic are both subtypes of Destination and are interchangeable as long as you don't try to browse. An application may not even know whether the destination it opens is a queue or a topic unless it checks with instanceof at run time.
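For example, code handed a generic Destination (say, from JNDI) can only tell the two apart with an instanceof check, roughly like this:

    import javax.jms.Destination;
    import javax.jms.Queue;
    import javax.jms.Topic;

    public class DestinationKind {
        static String describe(Destination dest) throws Exception {
            if (dest instanceof Queue) {
                return "queue: " + ((Queue) dest).getQueueName();
            } else if (dest instanceof Topic) {
                return "topic: " + ((Topic) dest).getTopicName();
            }
            return "unknown destination type";
        }
    }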

Related

Event broadcasting in Kafka?

Is there a way to have an event delivered to every subscriber of a topic regardless of consumer group? Think of a "refresh your local cache" kind of scenario.
As far as Kafka is concerned, you cannot subscribe to a topic without a consumer group.
Out of the box, this isn't how a Kafka consumer group behaves; there is no way to make all consumers in a group read all messages from all partitions. Within a group, each partition is assigned to at most one consumer (any extra consumers sit idle), which makes "fan out" hard, and each message is delivered to only one consumer in the group before its offset is committed and the group moves forward to consume later events.
You'd need a layer above the consumer to decouple yourself from the consumer-group limitations.
For example, with Interactive Queries, you'd consume a topic and build a StateStore from the data that comes in, effectively building a distributed cache. With that, you can layer in an RPC layer (mentioned in those docs) that allows external applications over a protocol of your choice (e.g. HTTP) to later query and poll that data. From an application that is polling the data, you then would have the option of forwarding "notification events" via any method of your choice.
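A rough sketch of that first step with Kafka Streams (assuming Kafka Streams 2.5+ and string keys/values; the application id, the topic name myTopic and the store name cache-store are placeholders). The RPC/HTTP layer that external applications would poll is left out:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.common.utils.Bytes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StoreQueryParameters;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.state.KeyValueStore;
    import org.apache.kafka.streams.state.QueryableStoreTypes;
    import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

    public class CacheBuilder {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "cache-builder");      // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder

            StreamsBuilder builder = new StreamsBuilder();
            // Materialize the topic into a local, queryable state store.
            builder.table("myTopic",
                    Consumed.with(Serdes.String(), Serdes.String()),
                    Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("cache-store"));

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();

            // In a real service you would wait for the instance to reach RUNNING
            // before querying, and expose the store through an RPC/HTTP endpoint.
            ReadOnlyKeyValueStore<String, String> store = streams.store(
                    StoreQueryParameters.fromNameAndType("cache-store", QueryableStoreTypes.keyValueStore()));
            String cached = store.get("some-key"); // later: serve reads like this
        }
    }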
As for a framework that already exposes most of this out of the box, check out Azkarra Streams (I have no affiliation).
Or you can use alternative solutions such as Kafka Connect to write the data to "message boards" such as Slack or Telegram instead, where many people explicitly subscribe to those channel notifications.

WebLogic JMS queues and the differences between production, consumption, and insertion

I want a WebLogic queue to receive messages, but I don't want those messages to be processed further. I want the messages I've sent to the queue to stay there until they are consumed.
So I think I need to pause Production and Consumption but leave Insertion running, so that every message sent to that queue will stay there and I will be able to read each message created there. Am I right?
Based on the WebLogic documentation on this subject, you should only pause consumption. If you pause production, then producers will not be able to send messages to the queue. As the documentation states:
When a JMS destination is "paused for production," new and existing producers attached to that destination are unable to produce new messages for that destination. A producer that attempts to send a message to a paused destination receives an exception that indicates that the destination is paused.
Also, if you pause insertion then any in-flight messages will not appear on the queue either. Again, from the documentation:
When a JMS destination is paused for "insertion," both messages inserted as a result of in-flight work and new messages sent by producers are prevented from appearing on the destination. Use insertion pause to stop all messages from appearing on a destination.
That said, if consumption is paused then you won't be able to consume the messages either, although you should be able to use a JMS browser to inspect them.

Is it possible for a single message to be given to multiple instances of the same subscription in Google Cloud Pub/Sub?

I have a publisher that publishes messages to a particular topic (myTopic). In Pub/Sub I create a subscription named myTopicSub to this topic (myTopic), and I have a VM that runs a service that listens on my subscription myTopicSub.
This works.
My problem is: if there is a need to scale and I add 5 more VMs to handle more messages from my subscription, is it possible for Pub/Sub to send the same message to more than one VM?
I only need one VM to process each message once.
Cloud Pub/Sub offers at-least-once delivery. That means that a message can be delivered multiple times and in some cases, can be delivered to two different subscribers for the same subscription within a short period of time. That particular type of duplicate delivery is rare, but not impossible.
Subscribers have to be able to handle the delivery of duplicates and, depending on their nature, may handle it in different ways. For some, all actions are idempotent, so re-processing the same message has no ill effects. In other cases, the subscribers need to track which messages they have received and processed; if a message is a duplicate, they just immediately ack it instead of processing it.
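A hedged sketch of that second approach with the Google Cloud Pub/Sub Java client (the project id, the subscription name myTopicSub, and the in-memory set are placeholders; a real service would keep its processed-message record in a shared, persistent store so every VM sees the same history):

    import com.google.cloud.pubsub.v1.AckReplyConsumer;
    import com.google.cloud.pubsub.v1.MessageReceiver;
    import com.google.cloud.pubsub.v1.Subscriber;
    import com.google.pubsub.v1.ProjectSubscriptionName;
    import com.google.pubsub.v1.PubsubMessage;

    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    public class DedupingSubscriber {
        // Stand-in for a shared deduplication store.
        private static final Set<String> processed = ConcurrentHashMap.newKeySet();

        public static void main(String[] args) {
            ProjectSubscriptionName subscription =
                    ProjectSubscriptionName.of("my-project", "myTopicSub"); // placeholders

            MessageReceiver receiver = (PubsubMessage message, AckReplyConsumer consumer) -> {
                if (!processed.add(message.getMessageId())) {
                    consumer.ack();   // duplicate: acknowledge without reprocessing
                    return;
                }
                // ... process the message (ideally idempotently) ...
                consumer.ack();
            };

            Subscriber subscriber = Subscriber.newBuilder(subscription, receiver).build();
            subscriber.startAsync().awaitRunning();
        }
    }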

Is RabbitMQ capable of "pushing" messages from a queue to a consumer?

With RabbitMQ, is there a way to "push" messages from a queue TO a consumer as opposed to having a consumer "poll and pull" messages FROM a queue?
This has been the cause of some debate on a current project I'm on. The argument from one side is that having consumers (i.e. a Windows service) "poll" a queue until a new message arrives is somewhat inefficient and less desirable than the idea of having the message "pushed" automatically from the queue to the subscriber(s)/consumer(s).
I can only seem to find information supporting the idea of consumers "polling and pulling" messages off of a queue (e.g. using a Windows service to poll the queue for new messages). There isn't much information on the idea of "pushing" messages to a consumer/subscriber...
Having the server push messages to the client is one of the two ways to get messages to the client, and the preferred way for most applications. This is known as consuming messages via a subscription.
The client is connected. (The AMQP/RabbitMQ/most messaging systems model is that the client is always connected - except for network interruptions, of course.)
You use the client API to arrange that your channel consume messages by supplying a callback method. Then whenever a message is available the server sends it to the client over the channel and the client application gets it via an asynchronous callback (typically one thread per channel). You can set the "prefetch count" on the channel which controls the amount of pipelining your client can do over that channel. (For further parallelism an application can have multiple channels running over one connection, which is a common design that serves various purposes.)
The alternative is for the client to poll for messages one at a time, over the channel, via a get method.
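For concreteness, a minimal sketch with the RabbitMQ Java client (amqp-client 5.x; the host and queue name are placeholders) showing both styles: a subscription whose callback the broker pushes deliveries into, and the one-message-at-a-time pull via basicGet:

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;
    import com.rabbitmq.client.GetResponse;

    public class PushVsPull {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");                       // placeholder
            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();
            channel.queueDeclare("work", true, false, false, null);

            // Push style: register a consumer; the broker delivers messages to the callback.
            channel.basicQos(10); // prefetch: up to 10 unacked messages in flight on this channel
            DeliverCallback onDelivery = (consumerTag, delivery) -> {
                System.out.println("pushed: " + new String(delivery.getBody()));
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            };
            channel.basicConsume("work", false, onDelivery, consumerTag -> { });

            // Pull style: explicitly ask for one message (returns null if the queue is empty).
            GetResponse response = channel.basicGet("work", false);
            if (response != null) {
                System.out.println("pulled: " + new String(response.getBody()));
                channel.basicAck(response.getEnvelope().getDeliveryTag(), false);
            }
            // Sketch only: the connection is left open so pushed deliveries keep arriving.
        }
    }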
You "push" messages from Producer to Exchange.
https://www.rabbitmq.com/tutorials/tutorial-three-python.html
BTW this fits IoT scenarios very well. Devices produce messages and send them to an exchange. The queue handles persistence, FIFO and other features, as well as delivery of messages to subscribers.
And, by the way, you never "poll" the queue. Instead, you always subscribe to the publisher, similar to the observer pattern. Generally, I would say it's an ingenious principle.
So it is similar to a post box or post office, except it sends you a notification when a message is available.
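A small sketch of the producer side described above (publishing to an exchange, never directly to a queue) with the RabbitMQ Java client; the exchange name, routing key and payload are all invented for the example:

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.MessageProperties;

    public class DevicePublisher {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");                                   // placeholder
            try (Connection connection = factory.newConnection();
                 Channel channel = connection.createChannel()) {
                channel.exchangeDeclare("telemetry", "topic", true);        // durable topic exchange
                // The producer never talks to a queue directly; a queue bound to
                // "telemetry" with a matching routing key receives the message.
                channel.basicPublish("telemetry", "sensor.temperature",
                        MessageProperties.PERSISTENT_TEXT_PLAIN,
                        "23.5".getBytes());
            }
        }
    }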
Quoting from the docs here:
AMQP brokers either deliver messages to consumers subscribed to queues, or consumers fetch/pull messages from queues on demand.
And from here:
Storing messages in queues is useless unless applications can consume them. In the AMQP 0-9-1 Model, there are two ways for applications to do this:
Have messages delivered to them ("push API")
Fetch messages as needed ("pull API")
With the "push API", applications have to indicate interest in consuming messages from a particular queue. When they do so, we say that they register a consumer or, simply put, subscribe to a queue. It is possible to have more than one consumer per queue or to register an exclusive consumer (excludes all other consumers from the queue while it is consuming).
Each consumer (subscription) has an identifier called a consumer tag. It can be used to unsubscribe from messages. Consumer tags are just strings.
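In the Java client, that consumer tag is simply the string returned by basicConsume, and cancelling it is how you unsubscribe (the queue name is a placeholder):

    import com.rabbitmq.client.Channel;

    public class ConsumerTagExample {
        // channel is assumed to be an already-open channel; "work" is a placeholder queue name.
        static void subscribeThenCancel(Channel channel) throws Exception {
            String consumerTag = channel.basicConsume("work", true,
                    (tag, delivery) -> System.out.println(new String(delivery.getBody())),
                    tag -> { });

            // ... later, stop receiving pushed messages for this subscription:
            channel.basicCancel(consumerTag);
        }
    }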
The RabbitMQ broker is like a server that won't send data to a consumer unless the consumer client registers itself with the server. But then the question becomes:
Can RabbitMQ keep the client consumer details and connect to the client when a message arrives?
The answer is no. So what is the alternative? Write a plugin yourself that maintains client information in some kind of config; the plugin would pull from the RabbitMQ queue and push to the client.
Please take a look at this plugin, it might help:
https://www.rabbitmq.com/shovel.html
Frankly speaking, the client would need to implement the AMQP protocol to receive messages and would have to listen for connections on some port, which sounds like just another server.

How does a queue sender know that a consumer crashed?

I'm using node-amqp. For each queue, there is one sender and one consumer. On the sender side, I need to maintain a list of active consumers. The question is: when a consumer machine crashes, how would I get a notification and delete it from the list on the sender side?
I think you may not be using the MQ concept correctly. The whole point is to decouple the consumers from the producers. On the whole, it is not the job of the producers to know anything about the consumers, except the type of message they will be consuming; indeed, the producer will keep producing if a consumer crashes, and the messages will continue to build up in the queue it was reading from.
There is a way to do it by using RabbitMQ's HTTP API (at http://server-name:55672/api/) to get the list of connections, but it is too heavyweight for frequent queries. Another way, in theory, is to use alternate exchanges to detect undelivered messages, but I haven't tried that approach yet.
Also, it may be possible to detect unexpected consumer disconnection by using dead-letter exchanges, as described here: http://www.rabbitmq.com/dlx.html
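If you experiment with the dead-letter-exchange idea, the wiring is just a queue argument. A hedged sketch with the RabbitMQ Java client for illustration (the question uses node-amqp, and all names here are invented): rejected or expired messages from the work queue are re-routed to a monitoring queue instead of being dropped:

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    import java.util.HashMap;
    import java.util.Map;

    public class DeadLetterSetup {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");                                   // placeholder
            try (Connection connection = factory.newConnection();
                 Channel channel = connection.createChannel()) {

                // Exchange + queue that collect dead-lettered messages.
                channel.exchangeDeclare("dead-letters", "fanout", true);
                channel.queueDeclare("dead-letter-monitor", true, false, false, null);
                channel.queueBind("dead-letter-monitor", "dead-letters", "");

                // Work queue: anything rejected (or expired) here is re-routed
                // to the "dead-letters" exchange instead of being dropped.
                Map<String, Object> queueArgs = new HashMap<>();
                queueArgs.put("x-dead-letter-exchange", "dead-letters");
                channel.queueDeclare("work", true, false, false, queueArgs);
            }
        }
    }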