Filter messages by content property - activemq-artemis

I am using ActiveMQ Artemis 2.17.0 and I am trying to filter messages by a property in the message content.
Sample message content:
{test:1}
For example, I want to search for every message whose test property equals 1.
How can I do that?

The ActiveMQ Artemis web console allows you to browse the messages on a queue and apply a filter.
Up to ActiveMQ Artemis 2.15 the messages were filtered on the client side, and you could use the body keyword to filter messages by their content. However, filtering messages on the client side is inefficient and inconsistent.
Since ActiveMQ Artemis 2.16 the messages are filtered on the broker side using a filter, but you cannot filter messages by their content.
ActiveMQ Artemis 2.18 will support XPath filters to filter messages by their content; see ARTEMIS-3137 for further details.
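Note that, based on ARTEMIS-3137, XPath filters apply to XML message bodies, so a JSON body like {test:1} would still not match. The selector shape below follows the classic ActiveMQ XPath selector syntax; the expression itself is only illustrative:

```sql
XPATH '/root[test=''1'']'
```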

Related

Message order issue in single consumer connected to ActiveMQ Artemis queue

Is there any possibility of a message ordering issue with a single queue consumer and multiple producers?
producer1 publishes message m1 at 2021-06-27 02:57:44.513 and producer2 publishes message m2 at 2021-06-27 02:57:44.514 on the same queue worker_consumer_queue. Client code connected to the queue, configured as a single consumer, should receive the messages in order (m1 first and then m2), correct? Sometimes the messages are received in the wrong order. The version is ActiveMQ Artemis 2.17.0.
Even though I mentioned multiple producers, the messages are published one after another from the same thread, using the property blockOnDurableSend=false.
I create and close a producer on each message publish, on the same JVM. My assumption is that the queue preserves the order of published messages, whether from the same thread or from different threads, even with async sends. The timestamp is from getJMSTimestamp(). Does async publishing also maintain order in any internal queue?
If you use blockOnDurableSend=false you're basically saying you don't strictly care about the order, or even whether the message makes it to the broker at all. Using blockOnDurableSend=false basically means "fire and forget."
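For reference, in the ActiveMQ Artemis JMS client this flag is typically set as a parameter on the connection URL, e.g. in jndi.properties (the broker address here is an assumed example):

```properties
# With blockOnDurableSend=false the client does not wait for the broker to
# acknowledge durable sends, so loss and reordering become possible.
connectionFactory.ConnectionFactory=tcp://localhost:61616?blockOnDurableSend=false
```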
Furthermore, the JMSTimestamp is not the time the message is actually sent, as noted in the javax.jms.Message JavaDoc:
The JMSTimestamp header field contains the time a message was handed off to a provider to be sent. It is not the time the message was actually transmitted, because the actual send may occur later due to transactions or other client-side queueing of messages.
With more than one producer there is no guarantee that the messages will be processed in order.
Multiple producers, ActiveMQ Artemis, and one consumer form a distributed system, and the lack of a global clock is a defining characteristic of distributed systems.
Even if the producers and ActiveMQ Artemis were on the same machine and used the same clock, ActiveMQ Artemis could not receive the messages in the same order the producers created and sent them, because the time to create a message and the time to send a message include variable latencies.
The easiest solution is to trust the order in which ActiveMQ Artemis receives the messages, adding a timestamp with an interceptor or enabling the ingress timestamp; see ARTEMIS-2919 for further details.
If the easiest solution doesn't work, the distributed solution is to implement a total ordering algorithm such as Lamport timestamps.
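As a minimal sketch of that last suggestion (the class and method names here are my own, not an Artemis API): a Lamport clock advances on every send and merges the remote timestamp on every receive, which yields a consistent logical ordering without a global clock.

```java
import java.util.concurrent.atomic.AtomicLong;

public class LamportClock {
    private final AtomicLong time = new AtomicLong();

    // Called before sending: advance the local clock and stamp the message.
    public long tick() {
        return time.incrementAndGet();
    }

    // Called on receive: merge the sender's timestamp into the local clock.
    public long onReceive(long remoteTimestamp) {
        return time.updateAndGet(local -> Math.max(local, remoteTimestamp) + 1);
    }

    public long current() {
        return time.get();
    }
}
```

Ties between equal timestamps can be broken with a fixed producer id to make the order total.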
Well, as it seems, it is not a bug within Artemis; when it comes to a millisecond difference it is more likely network lag or something similar.
So as a workaround I came up with this idea: you could create an algorithm in which a received message waits ~100 ms before it is actually processed (whatever you want to do with the message), and check whether another message that your application received afterwards was actually sent before it. So basically have your own receive queue with a delay.
If there is a message that was sent before it, you could simply move that one up in your own ordering. You could also consider rejecting the first message back to your bus; depending on your settings on queues and topics, you would be able to receive it again afterwards.
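A sketch of that workaround (all names are hypothetical): received messages are buffered in a priority queue keyed by send timestamp and only released once they have waited the delay, so a slightly late-arriving but earlier-sent message is re-sequenced first.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

public class DelayedReorderBuffer {

    private static class Entry {
        final long sendTimestamp; // e.g. the producer's JMSTimestamp
        final long receivedAt;    // local arrival time in millis
        final String payload;

        Entry(long sendTimestamp, long receivedAt, String payload) {
            this.sendTimestamp = sendTimestamp;
            this.receivedAt = receivedAt;
            this.payload = payload;
        }
    }

    private final long delayMillis;
    private final PriorityQueue<Entry> buffer =
            new PriorityQueue<>((a, b) -> Long.compare(a.sendTimestamp, b.sendTimestamp));

    public DelayedReorderBuffer(long delayMillis) {
        this.delayMillis = delayMillis;
    }

    // Buffer a newly received message instead of processing it immediately.
    public void offer(long sendTimestamp, long receivedAt, String payload) {
        buffer.add(new Entry(sendTimestamp, receivedAt, payload));
    }

    // Release, in send-timestamp order, every message at the head of the
    // queue that has already waited at least delayMillis.
    public List<String> drainReady(long now) {
        List<String> ready = new ArrayList<>();
        while (!buffer.isEmpty() && now - buffer.peek().receivedAt >= delayMillis) {
            ready.add(buffer.poll().payload);
        }
        return ready;
    }
}
```

Note the trade-off: every message is delayed by ~100 ms, and a message sent earlier but delayed longer than that window would still be processed out of order.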

Is it possible to have a DeadLetter Queue topic on Kafka Source Connector side?

We have a challenge with the events processed by the IBM MQ Source connector: it processes N messages but sends only N-100 of them, where the 100 missing messages are poison messages.
But from the blog post below by Robin Moffatt, I can see that it is not doable to have a DLQ on the source connector side.
https://www.confluent.io/blog/kafka-connect-deep-dive-error-handling-dead-letter-queues/
The following note is mentioned in the article above:
Note that there is no dead letter queue for source connectors.
1Q) Please confirm whether anyone has used a dead letter queue with the IBM MQ Source Connector (the documentation is below).
https://github.com/ibm-messaging/kafka-connect-mq-source
2Q) Has anyone used a DLQ on the side of any other source connector?
3Q) Why is it a limitation not to have a DLQ on the source connector side?
Thanks.
errors.tolerance is available for source connectors too - refer to the docs.
However, if you compare that to sinks, no, the DLQ options are not available. You would instead need to parse the connector logs for the event details, then pipe them to a topic on your own.
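For example, a source connector config can still skip and log bad records with the standard error-handling properties (the connector class is from the IBM MQ source connector project linked in the question; treat the values as an assumed example):

```properties
name=mq-source
connector.class=com.ibm.eventstreams.connect.mqsource.MQSourceConnector
# Skip records that fail in conversion or transformation instead of failing the task.
errors.tolerance=all
# Log the details of each failed record.
errors.log.enable=true
errors.log.include.messages=true
# Note: the errors.deadletterqueue.* properties only apply to sink connectors.
```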
Overall, how would a source connector decide which events are bad? A network connection exception means that no messages would be read at all, so there's nothing to produce. If messages fail to serialize to Kafka events, then they would also fail to be produced... Your options are either to fail fast, or to skip and log.
If you just want to send the binary data through as-is, then nothing would be "poisonous"; that can be done with the ByteArrayConverter class. It's not really a good use case for Kafka Connect, since Connect is primarily designed around structured types with parsable schemas, but at least with that option the data gets into Kafka, and you can use Kafka Streams to branch/filter the good messages from the bad ones.
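The converter is set in the connector config; this minimal fragment uses the converter class shipped with Kafka Connect to pass message bodies through as raw bytes:

```properties
key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
```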

Is it possible in Spring Kafka to send a messages that will expire on a per message (not per template or higher) basis

I am trying to use Kafka as a request-response system between two clients, much like RabbitMQ, and I was wondering whether it is possible to set an expiration on a message so that after it is posted it will automatically be deleted from the Kafka servers.
I'm trying to do it on a per-message level as well (though even per-topic would be okay, and I'd like to use the same template if possible).
I was checking ProducerRecord, but all it had was a timestamp. I also don't see any mention of it in KafkaHeaders.
Kafka records are deleted in segments (groups of messages) based on the overall topic retention.
Spring is just a client; it doesn't control the server-side logic of the log cleaner.
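The closest knob is therefore per-topic, not per-message, retention. These topic-level settings (example values) bound how long records survive, keeping in mind that only records in closed segments are eligible for deletion:

```properties
# Topic-level configs (e.g. set via kafka-configs.sh --alter --entity-type topics):
retention.ms=60000   # segments become eligible for deletion after ~1 minute
segment.ms=10000     # roll segments every 10 s so deletion granularity is finer
```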

Kafka DSL stream swallow custom headers

Is it possible to forward incoming messages with custom headers from topic A to topic B in a DSL stream processor?
I notice that all of my incoming messages in topic A contain custom headers, but when I put them into topic B all the headers are swallowed by the stream processor.
I use the stream.to(outputTopic); method to forward the messages.
I have found this ticket, which is still OPEN.
https://issues.apache.org/jira/browse/KAFKA-5632?src=confmacro
Your observation is correct. Up to Kafka 1.1, Kafka Streams drops record headers.
Record header support was added in the (upcoming) Kafka 2.0, allowing you to read and modify headers using the Processor API (cf. https://issues.apache.org/jira/browse/KAFKA-6850). With KAFKA-6850, record headers are also preserved (i.e., auto-forwarded) if the DSL is used.
The mentioned issue KAFKA-5632 is about header manipulation at the DSL level, which is still not supported in Kafka 2.0.
To manipulate headers using the DSL in Kafka 2.0, you can mix and match the Processor API into the DSL by using KStream#transformValues(), #transform(), or #process().
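A rough sketch of that mix-and-match approach (the topic names and header value are illustrative; this assumes the Kafka 2.0 Streams API, where the record's headers are reachable through the ProcessorContext):

```java
StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> stream = builder.stream("topic-A");

stream.transform(() -> new Transformer<String, String, KeyValue<String, String>>() {
    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
    }

    @Override
    public KeyValue<String, String> transform(String key, String value) {
        // Incoming headers are auto-forwarded; they can also be inspected
        // or modified here before the record is written to topic B.
        context.headers().add("processed-by", "my-app".getBytes());
        return KeyValue.pair(key, value);
    }

    @Override
    public void close() { }
}).to("topic-B");
```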

How do I group messages in JBoss ESB?

I have JSON messages coming into a JMS queue on a JBoss server.
I want to group them by some criterion, e.g. parse each message and group by the attribute "group".
I need to accumulate messages for X minutes, then create a new message representing each group and call a service to process each group message.
I can't find a way to read messages from the JMS queue and produce fewer ESB messages in a transactional way. I don't want to lose messages during a restart.
If you stumbled upon this like I did: I suggest you use a message Aggregator for this. Please have a look at the following link for more details on how to do it: https://access.redhat.com/site/documentation/en-US/JBoss_Enterprise_SOA_Platform/4.2/html-single/SOA_ESB_Message_Action_Guide/index.html#section-Aggregator
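A hedged sketch of what the Aggregator action can look like in jboss-esb.xml (the action name and timeout value are illustrative; verify the exact property names against the guide linked above):

```xml
<action name="aggregate" class="org.jboss.soa.esb.actions.Aggregator">
    <!-- Flush an incomplete series after this many milliseconds. -->
    <property name="timeoutInMillies" value="300000"/>
</action>
```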