I'm publishing MQTT messages from an Arduino and subscribing to them from a Raspberry Pi. Sometimes the publishing goes faster than the Raspberry Pi can receive (and process).
I'm looking for a way of checking how many messages are queued on the Raspberry Pi side. I'm using Paho-MQTT. I only see that it is possible to set a maximum queue size, but how can I check the current queue size? (If possible.)
There is no queue in the broker; all messages are delivered as they are published.
The Paho client is single-threaded and the message-received callback is handled on the network thread, so messages may back up on the network stack (for QOS 0 messages). QOS 1/2 messages will back up in the broker until the QOS handshake for the current message completes.
The max_queued_messages setting is about how many QOS 1/2 messages the client will accept for publishing before blocking, not how many incoming messages it will queue up.
If you want to queue messages in a measurable way, have the message-received callback place them onto a local queue and have a second thread (or a pool of threads, if they can be handled in parallel) take messages from that local queue, as sketched below.
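By way of illustration, here is a minimal sketch of that arrangement using paho-mqtt and Python's standard queue and threading modules. The broker address and topic are placeholders, and dropping on overflow is just one possible policy:

    import queue
    import threading

    import paho.mqtt.client as mqtt

    backlog = queue.Queue(maxsize=1000)   # bounded so a fast publisher can't exhaust memory

    def on_message(client, userdata, msg):
        # Runs on the Paho network thread; keep it short and just enqueue.
        try:
            backlog.put_nowait((msg.topic, msg.payload))
        except queue.Full:
            pass  # pick your own policy here: drop, log, block, ...

    def process(topic, payload):
        # Stand-in for the real (possibly slow) processing.
        print(topic, len(payload), "bytes; locally queued:", backlog.qsize())

    def worker():
        while True:
            topic, payload = backlog.get()   # blocks until a message is available
            process(topic, payload)
            backlog.task_done()

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("raspberrypi.local", 1883)   # placeholder broker address
    client.subscribe("arduino/#")               # placeholder topic

    threading.Thread(target=worker, daemon=True).start()
    client.loop_forever()

backlog.qsize() gives you the measurable backlog you were asking about, and queue.Queue(maxsize=...) gives you the bound.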
I am new to ZMQ and am not sure if what I want is even possible or if I should use another technology.
I would like to have a socket that multiple servers can stream to.
It appears that a ZMQ socket can do this based on this documentation: http://api.zeromq.org/4-0:zmq-setsockopt
How would I implement a ZMQ socket on the receiving end that only grabs the latest message sent from each server?
You can do this with ZMQ's PUB/SUB pattern.
The first key thing is that a SUB socket can be connected to multiple PUBlishers. This is covered in Chapter 1 of the guide:
Some points about the publish-subscribe (pub-sub) pattern:
A subscriber can connect to more than one publisher, using one connect call each time. Data will then arrive and be interleaved “fair-queued” so that no single publisher drowns out the others.
If a publisher has no connected subscribers, then it will simply drop all messages.
If you’re using TCP and a subscriber is slow, messages will queue up on the publisher. We’ll look at how to protect publishers against this using the “high-water mark” later.
So, that means that you can have a single SUB socket on your client. This can be connected to several PUB sockets, one for each server from which the client needs to stream messages.
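As a minimal pyzmq illustration (the server addresses and ports here are invented placeholders):

    import zmq

    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.setsockopt(zmq.SUBSCRIBE, b"")           # subscribe to everything
    sub.connect("tcp://server-a.example:5556")   # one connect per publishing server
    sub.connect("tcp://server-b.example:5557")

    while True:
        msg = sub.recv()                         # messages from both servers arrive fair-queued
        print(msg)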
Latest Message
The "latest message" can be partially dealt with (as I suspect you'd started to find) using high water marks. The ZMQ_RCVHWM option allows the number of messages buffered on the receiving socket to be limited to 1, though this is an imprecise control.
You also have to consider what it is that is meant by "latest" message; the PUB servers and SUB client will have different views of what this is. For example, when the zmq_send() function on a PUB server returns, the sent message is the one that the PUBlisher would regard as the "latest".
However, over in the client there is no knowledge of this as nothing has yet got down through the PUBlishing server's operating system network stack, nothing has yet touched the Ethernet, etc. So the SUBscribing client's view of the "latest" message at that point in time is whichever message is in ZMQ's internal buffers / queues waiting for the application to read it. This message could be quite old in comparison to the one the PUBlisher has just started sending.
In reality, the "latest" message seen by the client SUBscriber will be dependent on how fast the SUBscriber application runs.
Provided it's fast enough to keep up with all the PUBlishers, then every single message the SUBscriber gets will be as close to the "latest" message as it can get (the message will be only as old as the network propagation delays and the time taken to transit through ZMQ's internal protocols, buffers and queues).
If the SUBscriber isn't fast enough to keep up, then the "latest" messages it sees will be at least as old as the processing time per message multiplied by the number of PUBlishers. If you've set the receive HWM to 1 and the subscriber is not keeping up, the publishers will keep trying to publish, but the subscriber's socket will keep rejecting messages until the subscribing application has cleared out the old message causing the congestion by calling zmq_recv().
If the subscriber can't keep up, the best thing to do in the subscriber is:
have a receiving thread dedicated to receiving messages and dispose of them until processing becomes available
have a separate processing thread that does the processing.
Have the two threads communicate via ZMQ, using a REQ/REP pattern via an inproc connection.
The receiving thread can zmq_poll both the SUB socket connection to the PUBlishing servers and the REP socket connection to the processing thread.
If the receiving thread receives a message on the REP socket, it can reply with the next message read from the SUB socket.
If it receives a message from the SUB socket with no REPly due, it disposes of the message.
The processing thread sends 1-byte messages (the content doesn't matter) to its REQ socket to request the latest message, and receives the latest message from the PUBlishers in reply.
Or, something like that (a rough sketch follows below). That'll keep messages flowing from the PUBlishers to the SUBscriber, so the SUBscriber always has a message as close as possible to being "the latest" and processes it as and when it can, disposing of the messages it can't deal with.
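A rough pyzmq sketch of that receiver/processor arrangement, with invented addresses and a trivial stand-in for the real processing, might look like this:

    import threading
    import time

    import zmq

    ctx = zmq.Context()

    def receiver():
        sub = ctx.socket(zmq.SUB)
        sub.setsockopt(zmq.SUBSCRIBE, b"")
        sub.setsockopt(zmq.RCVHWM, 1)                  # keep receive-side buffering small
        sub.connect("tcp://server-a.example:5556")     # one connect per PUBlishing server
        sub.connect("tcp://server-b.example:5557")

        rep = ctx.socket(zmq.REP)
        rep.bind("inproc://latest")                    # link to the processing thread

        poller = zmq.Poller()
        poller.register(sub, zmq.POLLIN)
        poller.register(rep, zmq.POLLIN)

        pending = False                                # is the processor waiting for a message?
        while True:
            events = dict(poller.poll())
            if rep in events:
                rep.recv()                             # 1-byte "give me the latest" request
                pending = True
            if sub in events:
                msg = sub.recv()
                if pending:
                    rep.send(msg)                      # reply with the next message off the wire
                    pending = False
                # else: no REPly due, so the message is simply disposed of

    def processor():
        req = ctx.socket(zmq.REQ)
        req.connect("inproc://latest")
        while True:
            req.send(b"x")                             # ask the receiving thread for the latest message
            msg = req.recv()
            print("processing", msg)                   # the slow work goes here

    threading.Thread(target=receiver, daemon=True).start()
    time.sleep(0.5)                                    # crude: let the receiver bind inproc://latest first
    processor()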
I am planning an MQTT project with multiple publishers; the MQTT broker receives messages from all of them. Is there a way to control the message output rate from the MQTT broker? For example, the broker would forward queued messages to subscribers at 2-second intervals, even though it receives messages all the time.
Is it possible to control the outbound rate from the MQTT broker in this way?
No, not really.
It's important to understand that MQTT is pub/sub, not really a message-queueing system.
It is worth remembering that if the subscribing clients cannot consume and process messages as quickly as they are created, the system will eventually fail: the messages have to back up somewhere (most likely in the broker), which will eventually fall over from memory or storage exhaustion.
Assuming the client is subscribed at QOS 1 or 2 and the broker is configured to allow only one in-flight message at a time, the client should be able to control the rate at which it handles messages by controlling how it handles the QOS handshake. But this tends to be a bad idea for the reasons already mentioned, and not all clients give you any control over the handshake steps.
I want a Weblogic queue to receive a message, but I don't want to process that message further. I want the messages I've sent to the queue to stay there before they are consumed.
So I think I need to pause Production and Consumption but leave Insertion to run so every message sent to that Queue will stay there, and I will be able to read each message created there. Am I right?
Based on the Weblogic documentation on this subject you should only pause consumption. If you pause production then producers will not be able to send messages to the queue. As the documentation states:
When a JMS destination is "paused for production," new and existing producers attached to that destination are unable to produce new messages for that destination. A producer that attempts to send a message to a paused destination receives an exception that indicates that the destination is paused.
Also, if you pause insertion then any in-flight messages will not appear on the queue either. Again, from the documentation:
When a JMS destination is paused for "insertion," both messages inserted as a result of in-flight work and new messages sent by producers are prevented from appearing on the destination. Use insertion pause to stop all messages from appearing on a destination.
That said, if consumption is paused then you won't be able to consume the messages either, although you should be able to use a JMS browser to inspect them.
With RabbitMQ, is there a way to "push" messages from a queue TO a consumer as opposed to having a consumer "poll and pull" messages FROM a queue?
This has been the cause of some debate on a current project I'm on. The argument from one side is that having consumers (i.e. a Windows service) "poll" a queue until a new message arrives is somewhat inefficient and less desirable than the idea of having the message "pushed" automatically from the queue to the subscriber(s)/consumer(s).
I can only seem to find information supporting the idea of consumers "polling and pulling" messages off of a queue (e.g. using a windows service to poll the queue for new messages). There isn't much information on the idea of "pushing" messages to a consumer/subscriber...
Having the server push messages to the client is one of the two ways to get messages to the client, and the preferred way for most applications. This is known as consuming messages via a subscription.
The client is connected. (The AMQP/RabbitMQ/most messaging systems model is that the client is always connected - except for network interruptions, of course.)
You use the client API to arrange that your channel consume messages by supplying a callback method. Then whenever a message is available the server sends it to the client over the channel and the client application gets it via an asynchronous callback (typically one thread per channel). You can set the "prefetch count" on the channel which controls the amount of pipelining your client can do over that channel. (For further parallelism an application can have multiple channels running over one connection, which is a common design that serves various purposes.)
The alternative is for the client to poll for messages one at a time, over the channel, via a get method.
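To make the two styles concrete, here is a hedged sketch using the pika Python client; the host, queue name and prefetch value are placeholders rather than anything RabbitMQ prescribes:

    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue="work")

    # Push: register a callback and let the broker deliver messages as they arrive.
    channel.basic_qos(prefetch_count=10)   # how many unacked messages may be pipelined to us

    def on_message(ch, method, properties, body):
        print("got", body)
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="work", on_message_callback=on_message)
    channel.start_consuming()              # blocks; messages arrive via the callback

    # Pull (the alternative): explicitly ask for one message at a time.
    # method, properties, body = channel.basic_get(queue="work", auto_ack=True)
    # if method is None:
    #     print("queue was empty")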
You "push" messages from Producer to Exchange.
https://www.rabbitmq.com/tutorials/tutorial-three-python.html
BTW this fits IoT scenarios very well. Devices produce messages and send them to an exchange. The queue handles persistence, FIFO ordering and other features, as well as delivery of messages to subscribers.
And, by the way, you never "poll" the queue. Instead, you subscribe, similar to the observer pattern. Generally, I would say it is a genius principle.
So it is similar to a post box or post office, except it sends you a notification when a message is available.
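As a small illustration of the producer side (exchange name and payload are invented, loosely following the linked tutorial), using the pika Python client:

    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.exchange_declare(exchange="sensor-data", exchange_type="fanout")

    # "Push" a message to the exchange; any queues bound to it get a copy,
    # and the broker delivers those copies to the subscribed consumers.
    channel.basic_publish(exchange="sensor-data", routing_key="", body=b"temperature=21.5")
    conn.close()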
Quoting from the docs here:
AMQP brokers either deliver messages to consumers subscribed to queues, or consumers fetch/pull messages from queues on demand.
And from here:
Storing messages in queues is useless unless applications can consume them. In the AMQP 0-9-1 Model, there are two ways for applications to do this:
Have messages delivered to them ("push API")
Fetch messages as needed ("pull API")
With the "push API", applications have to indicate interest in consuming messages from a particular queue. When they do so, we say that they register a consumer or, simply put, subscribe to a queue. It is possible to have more than one consumer per queue or to register an exclusive consumer (excludes all other consumers from the queue while it is consuming).
Each consumer (subscription) has an identifier called a consumer tag. It can be used to unsubscribe from messages. Consumer tags are just strings.
The RabbitMQ broker is like a server that won't send data to a consumer unless the consumer client has registered itself with the server. That raises the following question:
Can RabbitMQ keep the consumer client's details and connect to the client when a message arrives?
The answer is no. The alternative is to write a plugin yourself that maintains client information in some kind of config; the plugin would pull from the RabbitMQ queue and push to the client.
This plugin might help:
https://www.rabbitmq.com/shovel.html
Frankly speaking, the client would need to implement the AMQP protocol to receive messages, and would have to listen for connections on some port to do so. That sounds like another server.
I have a transactional private message queue (among other message queues on which I have not seen this problem) on a Windows Server 2008 R2 server.
This particular queue has a recurring problem, happening every few weeks, where the console shows a nonzero count of messages in the queue, but there are no messages in the queue itself or in any subqueue. Queue Explorer shows the same thing. Performance counters report a number of messages matching the count shown in the built-in MSMQ console and Queue Explorer.
I cannot find any messages. I understand that I could see a situation like this for outgoing queues with dead letter tracking such that it may have been delivered to a remote machine but not yet processed. This is not an outgoing queue, though. Messages are sourced from remote machines and have landed here on this machine.
Also, I am certain that the count I'm seeing does not come from journal messages or subqueues.
Does this make any sense? Is there a logical explanation for this and under some circumstance this is expected? If so, what is it?
EDIT: Removed info about purging queue removing the count - that was incorrect. Purging actually does nothing and leaves me in the same state as before with a count reflected, but no messages showing.
As you noted, you can see a message count on an outgoing queue if source journaling is in use; the invisible messages are there in case they need to be moved to the DLQ. I would expect your problem to be similar - there should be a visible message in the outgoing queue and an invisible message in the destination queue because delivery hasn't completed. I assume a handshake or storage acknowledgement has been lost along the way, or perhaps the message has been processed and removed from the queue but MSMQ couldn't tell the sender. Check the outgoing queues on the remote machines sending TO this queue.