Is there a way to control the message output speed of an MQTT broker?

I am planning an MQTT project with multiple publishers, where the MQTT broker receives messages from the different publishers. Is there a way to control the message output rate from the MQTT broker? For example, the broker would queue incoming messages and forward them to subscribers at 2-second intervals, even though it receives messages all the time.
Can the output rate from the MQTT broker be controlled in this way?

No, not really.
It's important to understand that MQTT is pub/sub, not really a message queuing system.
It is worth remembering that if the subscribing clients cannot consume and process messages as quickly as they are created, the messages will have to back up somewhere (most likely in the broker), and the system will eventually fail due to memory or storage exhaustion.
Assuming the client is subscribed at QoS 1 or 2 and the broker is configured to allow only one in-flight message at a time, the client can control the rate at which it handles messages by controlling how it handles the QoS handshake. But this tends to be a bad idea for the reasons already mentioned, and not all clients give you any control over the handshake steps.
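As a rough illustration, here is a minimal sketch using the Paho Python client (1.x API). It assumes a Mosquitto broker with max_inflight_messages 1 in mosquitto.conf; the host name and topic filter are placeholders. In this client the QoS 1 PUBACK is sent only after on_message returns, so blocking in the callback delays the handshake and therefore the next delivery:

```python
import time
import paho.mqtt.client as mqtt

# Assumes the broker side cooperates: "max_inflight_messages 1" in
# mosquitto.conf, so only one unacknowledged QoS 1 message is in flight.

def on_message(client, userdata, msg):
    print(f"received {msg.topic}: {msg.payload!r}")
    time.sleep(2)  # PUBACK is sent only after this callback returns,
                   # so this delays the next delivery by ~2 seconds

client = mqtt.Client(client_id="slow-consumer", clean_session=False)
client.on_message = on_message
client.connect("localhost", 1883)      # placeholder host/port
client.subscribe("sensors/#", qos=1)   # placeholder topic filter
client.loop_forever()
```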

Related

Why does kafka consumer poll the broker?

I'm currently learning about Kafka architecture, and I'm confused as to why the consumer polls the broker. Why wouldn't the consumer simply subscribe to the broker, supply some callback information, and wait for the broker to get a record? Then, when the broker gets a relevant record, it could look up who needs to know about it and use the callback information to dispatch the messages. This would hugely reduce the number of network operations.
Kafka can be used as a messaging service, but that is not its only possible use case. You could also treat it as a remote file from which bytes (records) can be read on demand.
Also, if a notification mechanism were implemented in message-broker fashion, as you suggest, the broker would need to handle slow consumers. Kafka leaves all control to the consumers, allowing them to consume at their own speed.
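As a concrete sketch of what that control looks like, here is a minimal poll loop using the confluent-kafka Python client; the broker address, topic, group id, and the handle() function are placeholder assumptions:

```python
from confluent_kafka import Consumer

# Placeholder broker address, group id, and topic.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "demo-group",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["events"])

try:
    while True:
        msg = consumer.poll(1.0)   # the consumer decides when to ask
        if msg is None:
            continue               # nothing available yet; ask again
        if msg.error():
            print("error:", msg.error())
            continue
        handle(msg.value())        # hypothetical processing function;
                                   # runs at whatever pace it needs
finally:
    consumer.close()
```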

Paho-MQTT check message queue size

I'm publishing MQTT messages from an Arduino and subscribing to them from a Raspberry Pi. Sometimes the publishing goes faster than the Raspberry Pi can receive (and process) the messages.
I'm looking for a way to check how many messages are queued on the Raspberry Pi side. I'm using Paho-MQTT. I can only see that it is possible to set a maximum queue size; how can I check the current queue size, if that is possible at all?
There is no such queue in the broker; messages are delivered as they are published.
The Paho client is single-threaded and the message-received callback runs on the network thread, so QoS 0 messages may back up on the network stack. QoS 1/2 messages will back up in the broker until the QoS handshake for the current message completes.
The max_queued_messages setting is about how many outgoing QoS 1/2 messages the client will accept to publish before blocking, not how many incoming messages it will queue up.
If you want to queue messages in a measurable way, have the message-received callback place messages onto a local queue, and have a second thread (or a pool of threads, if messages can be handled in parallel) take messages from that local queue, as sketched below.
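A minimal sketch of that pattern with the standard library's queue and threading modules (Paho 1.x API); the host, topic filter, and handle() function are placeholders:

```python
import queue
import threading
import paho.mqtt.client as mqtt

work = queue.Queue()               # local, measurable queue

def on_message(client, userdata, msg):
    work.put(msg)                  # hand off fast; keeps the network thread free

def worker():
    while True:
        msg = work.get()           # blocks until a message is available
        handle(msg)                # hypothetical processing function
        work.task_done()

threading.Thread(target=worker, daemon=True).start()

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)  # placeholder host
client.subscribe("arduino/#")      # placeholder topic filter
client.loop_forever()
```

work.qsize() then answers the original question: it reports how many messages have been received but not yet processed.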

Kafka vs JMS for event publishing

In our scenario we have a set of microservices which interact with other services by sending event messages. We anticipate millions of messages per day at peak. Every message is targeted at one or more listener types.
Our requirements are as follows:
Zero lost messages.
The ability to dynamically register multiple listeners of a specific type in order to increase throughput.
Listeners are not guaranteed to be alive when messages are dispatched.
We are considering two options:
Send each message to a main JMS queue; listeners of that queue route the messages to specific queues according to message content, and the target services listen to those specific queues.
Send messages to a Kafka topic by message type; the target services subscribe to the relevant topic and consume the messages.
What are the cons and pros for using either JMS or Kafka for that purpose?
Your first requirement is "zero lost messages". However, if you want publish-subscribe semantics (i.e. topics in JMS) but listeners are not guaranteed to be alive when messages are dispatched, then JMS is a non-starter, as those messages will simply be discarded (i.e. lost).
I would suggest going with Kafka, as it has a built-in fault-tolerance mechanism: even if a message is not captured by any listener, it is retained in the cluster (for the configured retention period) and can be retrieved later.
You can also easily add new listeners, individually or within a consumer group, and Kafka (together with ZooKeeper) will manage the rebalancing for you.
In summary, Kafka is a distributed publish-subscribe messaging system that is designed to be fast, scalable, and durable. Like many publish-subscribe messaging systems, Kafka maintains feeds of messages in topics. Producers write data to topics and consumers read from topics.
It is also very easy to integrate, as sketched below.
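A rough sketch of both points using the confluent-kafka Python client; the broker address, topic, and group names are placeholder assumptions:

```python
from confluent_kafka import Consumer

def make_consumer(group_id):
    # Placeholder broker address; auto.offset.reset=earliest lets a brand
    # new group re-read everything still within the retention window.
    return Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": group_id,
        "auto.offset.reset": "earliest",
    })

# Two consumers in the same group split the topic's partitions between
# them (more throughput); a consumer in a fresh group gets its own full
# copy of the stream, which is how missed messages can be re-read.
worker_a = make_consumer("order-service")
worker_b = make_consumer("order-service")
replayer = make_consumer("audit-replay")
for c in (worker_a, worker_b, replayer):
    c.subscribe(["order-events"])  # placeholder topic
```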

Keeping a series of messages in an MQTT Topic

I'm not sure if this is possible or not. If I set a number of messages to be persisted under a topic for some period of time, can I later grab all of them?
I have an MQTT Broker (Mosquitto) set up already for communication between my services but I now also need some storage for several messages, ideally keeping the last 24 hours worth of messages and being able to pull them out later.
Message persistence is only for clients that have subscribed (at QoS 1 or 2) but are currently disconnected, and that reconnect with the cleanSession flag set to false; in that case, all the messages published while the client was disconnected are delivered on reconnect.
You cannot use an MQTT broker to store an arbitrary number of messages and retrieve them later. If a client is connected, all messages for its collection of subscribed topics will be delivered as soon as possible.
If you want to log messages for later, you will have to implement this separately; there are plenty of examples of applications that store messages in databases.
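For example, here is a minimal sketch of a Paho (1.x API) subscriber that logs every message into SQLite; the host, topic filter, and database file name are placeholders:

```python
import sqlite3
import time
import paho.mqtt.client as mqtt

# check_same_thread=False because Paho invokes on_message from its
# network thread, not the thread that opened the connection.
db = sqlite3.connect("mqtt_log.db", check_same_thread=False)
db.execute("CREATE TABLE IF NOT EXISTS messages (ts REAL, topic TEXT, payload BLOB)")

def on_message(client, userdata, msg):
    db.execute("INSERT INTO messages VALUES (?, ?, ?)",
               (time.time(), msg.topic, msg.payload))
    db.commit()

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)  # placeholder host
client.subscribe("#")              # log everything; narrow this in practice
client.loop_forever()
```

Pulling the last 24 hours back out is then a single query, e.g. SELECT topic, payload FROM messages WHERE ts > strftime('%s','now') - 86400.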

Is RabbitMQ capable of "pushing" messages from a queue to a consumer?

With RabbitMQ, is there a way to "push" messages from a queue TO a consumer as opposed to having a consumer "poll and pull" messages FROM a queue?
This has been the cause of some debate on a current project I'm on. The argument from one side is that having consumers (e.g. a Windows service) "poll" a queue until a new message arrives is inefficient and less desirable than having the message "pushed" automatically from the queue to the subscriber(s)/consumer(s).
I can only seem to find information supporting the idea of consumers "polling and pulling" messages off of a queue (e.g. using a Windows service to poll the queue for new messages). There isn't much information on the idea of "pushing" messages to a consumer/subscriber...
Having the server push messages to the client is one of the two ways to get messages to the client, and the preferred way for most applications. This is known as consuming messages via a subscription.
The client is connected. (The AMQP/RabbitMQ/most messaging systems model is that the client is always connected - except for network interruptions, of course.)
You use the client API to arrange for your channel to consume messages by supplying a callback method. Then, whenever a message is available, the server sends it to the client over the channel, and the client application receives it via an asynchronous callback (typically one thread per channel). You can set the "prefetch count" on the channel, which controls the amount of pipelining your client can do over that channel. (For further parallelism, an application can have multiple channels running over one connection, which is a common design that serves various purposes.)
The alternative is for the client to poll for messages one at a time, over the channel, via a get method.
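Here is a minimal sketch of both styles using the pika Python client; the queue name and host are placeholders:

```python
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.queue_declare(queue="tasks")    # placeholder queue name

# Push API: register a consumer and let the broker deliver messages.
ch.basic_qos(prefetch_count=10)    # the pipelining limit discussed above

def on_delivery(channel, method, properties, body):
    print("got:", body)
    channel.basic_ack(delivery_tag=method.delivery_tag)

ch.basic_consume(queue="tasks", on_message_callback=on_delivery)
# ch.start_consuming()             # blocks, dispatching the callback

# Pull API: fetch one message on demand; method is None if the queue
# is currently empty.
method, properties, body = ch.basic_get(queue="tasks", auto_ack=True)
```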
You "push" messages from Producer to Exchange.
https://www.rabbitmq.com/tutorials/tutorial-three-python.html
By the way, this fits IoT scenarios very well. Devices produce messages and send them to an exchange. The queue handles persistence, FIFO ordering, and other features, as well as delivery of messages to subscribers.
And, by the way, you never "poll" the queue. Instead, you subscribe, similar to the observer pattern; a simple but powerful principle.
So it is similar to a post office box, except that it sends you a notification when a message is available.
Quoting from the docs here:
AMQP brokers either deliver messages to consumers subscribed to queues, or consumers fetch/pull messages from queues on demand.
And from here:
Storing messages in queues is useless unless applications can consume them. In the AMQP 0-9-1 Model, there are two ways for applications to do this:
Have messages delivered to them ("push API")
Fetch messages as needed ("pull API")
With the "push API", applications have to indicate interest in consuming messages from a particular queue. When they do so, we say that they register a consumer or, simply put, subscribe to a queue. It is possible to have more than one consumer per queue or to register an exclusive consumer (excludes all other consumers from the queue while it is consuming).
Each consumer (subscription) has an identifier called a consumer tag. It can be used to unsubscribe from messages. Consumer tags are just strings.
The RabbitMQ broker is a server that won't send data to a consumer unless the consumer client has registered itself with the server. That raises the question:
Can RabbitMQ keep the consumer's details and connect to the client when a message arrives?
The answer is no. The alternative is to write your own bridge that maintains the client information in some kind of configuration, pulls from the RabbitMQ queue, and pushes to the client.
The Shovel plugin might help with this:
https://www.rabbitmq.com/shovel.html
Frankly speaking, the client would need to implement the AMQP protocol to receive messages that way, and would have to listen for connections on some port, which effectively makes it another server.