How to reject messages greater than 100 MB in HornetQ

We would like to reject messages larger than 50 MB being placed in HornetQ.
Can we restrict this in the configuration at the queue or connection-factory level?
Placing large messages in HornetQ is causing heap issues and crashing the server.
Any help is appreciated.

Edit your XML configuration like:
<address-setting match="your/queue/address">
   <max-size-bytes>104857600</max-size-bytes> <!-- 100 MB -->
   <page-size-bytes>104857600</page-size-bytes>
   <address-full-policy>DROP</address-full-policy>
</address-setting>
From the docs:
Messages are stored per address on the file system.
Instead of paging messages when the max size is reached, an address can also be configured to just drop messages when the address is full.
… to do so, set address-full-policy to DROP (messages will be silently dropped).
The above settings are documented at:
https://docs.jboss.org/hornetq/2.2.5.Final/user-manual/en/html/queue-attributes.html
The message-size elements in particular are covered in the paging chapter: https://docs.jboss.org/hornetq/2.2.5.Final/user-manual/en/html/paging.html
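Note that DROP silently discards messages once the address is full; it does not reject an individual oversized message at send time. If you need a hard per-message size limit, one option is to enforce it on the producer side before sending. A minimal sketch (the 50 MB threshold, class, and method names are illustrative assumptions, not HornetQ API):
import javax.jms.BytesMessage;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;

public class SizeGuardedProducer {
    private static final long MAX_BYTES = 50L * 1024 * 1024; // hypothetical 50 MB limit

    public static void send(Session session, MessageProducer producer, byte[] payload)
            throws JMSException {
        // Reject oversized payloads before they ever reach the broker.
        if (payload.length > MAX_BYTES) {
            throw new IllegalArgumentException(
                    "Payload of " + payload.length + " bytes exceeds the " + MAX_BYTES + "-byte limit");
        }
        BytesMessage message = session.createBytesMessage();
        message.writeBytes(payload);
        producer.send(message);
    }
}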

Related

Set a retention policy for ActiveMQ Artemis dead-letter queues?

Is there a best practice on setting a retention policy on ActiveMQ Artemis dead-letter queues?
I was looking through the documentation, but I cannot find anything related. My best approach would be calling removeMessages(string) with a filter AMQTimestamp > TIMESTAMP.
There's no real best practice here, as it's really dependent on the use-case, and use-cases vary widely in their needs in this regard.
Using removeMessages(string) with a filter AMQTimestamp > TIMESTAMP is certainly fine when you want to remove messages administratively (or even potentially with a script). However, if you want to set up something more automated you can just use the expiry-delay address setting, e.g.:
<address-setting match="myAddress">
   <expiry-delay>300000</expiry-delay> <!-- 300 seconds (5 minutes) -->
</address-setting>
If there's no expiry address defined then the messages will simply be removed after the expiry-delay elapses. If there is an expiry address defined (e.g. in a parent's address-setting) then those messages will be routed to any queues bound to that address according to the configured routing type(s). However, if you want to clear the inherited expiry address so that the messages are just dropped, you can, e.g.:
<address-setting match="myAddress">
   <expiry-address/> <!-- empty value clears any inherited expiry address -->
   <expiry-delay>300000</expiry-delay> <!-- 300 seconds (5 minutes) -->
</address-setting>
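For the administrative removeMessages approach mentioned above, a minimal JMX sketch against an Artemis 2.x broker (the service URL, broker name, and DLQ address/queue names are placeholders for illustration):
import javax.management.MBeanServerConnection;
import javax.management.MBeanServerInvocationHandler;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

import org.apache.activemq.artemis.api.core.management.QueueControl;

public class DlqCleaner {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // MBean name follows the Artemis 2.x naming convention.
            ObjectName queueName = new ObjectName(
                    "org.apache.activemq.artemis:broker=\"0.0.0.0\",component=addresses,"
                    + "address=\"DLQ\",subcomponent=queues,routing-type=\"anycast\",queue=\"DLQ\"");
            QueueControl queue = MBeanServerInvocationHandler.newProxyInstance(
                    mbsc, queueName, QueueControl.class, false);
            // Remove everything enqueued more than an hour ago
            // (AMQTimestamp is milliseconds since the epoch).
            long cutoff = System.currentTimeMillis() - 60 * 60 * 1000L;
            int removed = queue.removeMessages("AMQTimestamp < " + cutoff);
            System.out.println("Removed " + removed + " messages");
        }
    }
}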

Paging mode in ActiveMQ Artemis

As far as I understand, paging is carried out on addresses when they exceed the defined size. Currently we are seeing paging, but not on any known address (queue). It seems to be an internal queue from ActiveMQ. Is it possible to understand what kind of address ActiveMQ is paging here?
WARN [org.apache.activemq.artemis.core.server] AMQ222038: Starting paging on address '$.artemis.internal.my-cluster.fec50662-55c7-11eb-91d1-005056903119'; size is currently: 25,238,532 bytes; max-size-bytes: -1; global-size-bytes: 524,357,417
This is important for us, because we have determined that this paging is preventing the messages in our queues from being consumed.
The address named $.artemis.internal.my-cluster.fec50662-55c7-11eb-91d1-005056903119, and the related queue, are used for intra-cluster communication. When messages need to be moved from one node to another they are sent to this address and then forwarded to another broker by the internal cluster bridge.
Given the log message, I would surmise that you've reached the global limit (global-size-bytes is calculated by adding up the bytes from all addresses). You might consider increasing global-max-size in broker.xml.
You say that this paging is preventing your consumers from consuming messages. However, it's also worth noting that paging is typically caused by consumers not consuming messages, not the other way around. When consumers slow down or stop then messages build up in the broker and it has no choice but to begin paging. Therefore you would likely see both of these things simultaneously which could lead to misattribution.
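For reference, the global limit lives under the core element of broker.xml; a minimal sketch (the 1 GiB value is only an example, size it to your heap and workload):
<core xmlns="urn:activemq:core">
   <!-- other core configuration elided -->
   <global-max-size>1073741824</global-max-size> <!-- example: 1 GiB -->
</core>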

Service Bus Explorer

Our project has Microsoft Service Bus (on-premises) running on Windows 2012 R2 servers for message processing.
When sending messages above the size limit (say 10 MB) to a Service Bus topic, Service Bus shows a processing error and throws a socket timeout exception.
Just wanted to know:
Has anyone worked with sending large messages (say > 10 MB) to Service Bus topics? Would appreciate any suggested approach on how to handle this.
Also, is there a way to increase the Service Bus timeout configuration or the message size limit settings on Service Bus topics, either through PowerShell cmdlets or Service Bus Explorer?
Service Bus queues support a maximum message size of 256 KB (the header, which includes the standard and custom application properties, can have a maximum size of 64 KB).
There is no limit on the number of messages held in a queue, but there is a cap on the total size of the messages held by a queue. This queue size is defined at creation time, with an upper limit of 5 GB.
Are you asking about sending a message of size 10 MB? Service Bus doesn't allow messages that large. For Premium, the maximum message size is 1 MB, and for Standard, it's 256 KB, as @Ana said.
Also is there a way to increase the service bus timeout configuration or message size limit settings?
Yes, it is possible to configure the time-to-live property of messages, either at the time of queue/subscription creation or when sending an individual message. Refer to setting time-to-live for a queue as well as for an individual message.
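As an illustration, per-message TTL with the Azure Service Bus Java SDK looks roughly like this (a sketch only; the connection string and queue name are placeholders, and the on-premises Service Bus 1.1 API differs from the Azure SDK shown here):
import java.time.Duration;

import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusMessage;
import com.azure.messaging.servicebus.ServiceBusSenderClient;

public class TtlSender {
    public static void main(String[] args) {
        ServiceBusSenderClient sender = new ServiceBusClientBuilder()
                .connectionString("<your-connection-string>") // placeholder
                .sender()
                .queueName("myqueue")                         // placeholder
                .buildClient();

        ServiceBusMessage message = new ServiceBusMessage("payload");
        // Per-message TTL: the message expires five minutes after enqueue
        // unless the queue's default TTL is shorter.
        message.setTimeToLive(Duration.ofMinutes(5));
        sender.sendMessage(message);
        sender.close();
    }
}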
Also is there a way to increase message size limit settings?
No, as the maximum size is 1 MB (this may be increased by Azure in the future).
To answer the question "Can we send messages (say > 10 MB) to Service Bus topics":
As of today, the updated answer is YES: the Premium tier of Service Bus supports message sizes up to 100 MB, whereas Standard is still limited to 256 KB.
See the documentation on how to enable large-message support for an existing queue (or topic).
Recommended:
While 100 MB message payloads are supported, it's recommended to keep the message payloads as small as possible to ensure reliable performance from the Service Bus namespace.
The Premium tier is recommended for production scenarios.

jute.maxbuffer affects only incoming traffic

Does this value only affect incoming traffic? If I set this value to, say, 4 MB on the ZooKeeper server as well as the ZooKeeper client and then start my client, will I still get data > 4 MB when I request a path like /abc/asyncMultiMap/subs? If /subs has data greater than 4 MB, is the server going to break it up into chunks <= 4 MB and send it to the client in pieces?
I am using ZooKeeper 3.4.6 on both the client (via vertx-zookeeper) and the server. I see errors on clients complaining that the packet length is greater than 4 MB:
java.io.IOException: Packet len4194374 is out of range!
at org.apache.zookeeper.ClientCnxnSocket.readLength(ClientCnxnSocket.java:112) ~[zookeeper-3.4.6.jar:3.4.6-1569965]
"This is a server-side setting"
This statement is incorrect, jute.maxbuffer is evaluated on client as well by Record implementing classes that receive InputArchive. Each time a field is read and stored into an InputArchive the value is checked against jute.maxbuffer. Eg ClientCnxnSocket.readConnectResult
I investigated it in ZK 3.4
There is no chunking in the response.
This is a server-side setting. You will get this error if the entirety of the response is greater than the jute.maxbuffer setting. This response limit includes the list of children of a znode as well, so even if /subs itself does not hold much data but has enough children that the combined length of their paths exceeds the max buffer size, you will get the error.
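Since the check happens on the client too, the usual workaround is to raise jute.maxbuffer on the client JVM as well as the server (a sketch; the 8 MB value and connect string are just examples):
import org.apache.zookeeper.ZooKeeper;

public class BigBufferClient {
    public static void main(String[] args) throws Exception {
        // jute.maxbuffer is read when the client connection is created, so set it
        // before constructing the ZooKeeper instance; in practice, prefer passing
        // -Djute.maxbuffer=8388608 on the JVM command line.
        System.setProperty("jute.maxbuffer", String.valueOf(8 * 1024 * 1024)); // example: 8 MB

        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });
        byte[] data = zk.getData("/abc/asyncMultiMap/subs", false, null);
        System.out.println("Read " + data.length + " bytes");
        zk.close();
    }
}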

divert on large files on hornetq leaves a copy of the file behind

We have this queue where we get some large files and many small ones, so we use divert to get the large files onto their own queue.
The divert is exclusive. However, we see that under the large-messages folder there are two copies of the message (which is a BytesMessage, a file with properties) for every large message diverted. Once we consume the diverted message, one of them goes away, but the other is left behind until HornetQ is restarted. When it is restarted, we see messages like the following:
[org.hornetq.core.server] HQ221018: Large message: 19,327,352,827 did not have any associated reference, file will be deleted
We use streaming over JMS to put them in and get them out.
Below is the divert configuration. By the way, anything over 100 KB is considered a large message in our HornetQ.
Are we missing anything, or did we just discover a bug?
The HornetQ version is 2.3.0.
<diverts>
   <divert name="large-message-divert">
      <routing-name>large-message-divert</routing-name>
      <address>jms.queue.FileDelivery</address>
      <forwarding-address>jms.queue.FileDelivery.large</forwarding-address>
      <filter string="_HQ_LARGE_SIZE IS NOT NULL AND _HQ_LARGE_SIZE > 52428800"/> <!-- 50 MB -->
      <exclusive>true</exclusive>
   </divert>
</diverts>
This is likely a bug fixed in a later release of HornetQ, since there have been at least two relevant fixes between 2.3.0 and the time this answer was written:
HORNETQ-1292 - Delete large message from disk when message is dropped
HORNETQ-431 - Large Messages Files NOT DELETED on unbounded address
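For reference, the "streaming over JMS" mentioned in the question is typically done through HornetQ's JMS_HQ_InputStream property on a BytesMessage; a minimal producer-side sketch (queue name and file path are placeholders):
import java.io.BufferedInputStream;
import java.io.FileInputStream;

import javax.jms.BytesMessage;
import javax.jms.MessageProducer;
import javax.jms.Session;

public class LargeFileSender {
    public static void send(Session session, String queueName, String path) throws Exception {
        MessageProducer producer = session.createProducer(session.createQueue(queueName));
        BytesMessage message = session.createBytesMessage();
        try (BufferedInputStream in = new BufferedInputStream(new FileInputStream(path))) {
            // HornetQ streams the body from this InputStream at send time,
            // so the whole file never has to fit in memory at once.
            message.setObjectProperty("JMS_HQ_InputStream", in);
            producer.send(message);
        }
    }
}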