ActiveMQ Artemis divert logging

Is there a way to log ActiveMQ Artemis diverts, so you can see which divert was applied to a given message? I can't seem to find this in the docs.

You can activate TRACE logging for org.apache.activemq.artemis.core.server.impl.DivertImpl. This logger emits a message for each diverted message.
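For example, with Artemis 2.27 and later (which use Log4j2 for broker logging) you could add something like the following to the broker's etc/log4j2.properties; older versions configure the equivalent category in etc/logging.properties instead:

logger.divert.name = org.apache.activemq.artemis.core.server.impl.DivertImpl
logger.divert.level = TRACE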

Another option can be found in the docs:
Diverts can also be configured to apply a Transformer. If specified,
all diverted messages will have the opportunity of being transformed
by the Transformer.
Since Transformer is an interface you can implement, you could log each diverted message from your implementation without touching the message itself, as in the sketch below.
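A minimal sketch, assuming the org.apache.activemq.artemis.core.server.transformer.Transformer interface from Artemis 2.x (the class name LoggingDivertTransformer is just a placeholder):

import org.apache.activemq.artemis.api.core.Message;
import org.apache.activemq.artemis.core.server.transformer.Transformer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Logs each diverted message and returns it unchanged.
public class LoggingDivertTransformer implements Transformer {
   private static final Logger LOG = LoggerFactory.getLogger(LoggingDivertTransformer.class);

   @Override
   public Message transform(Message message) {
      LOG.info("Divert applied to message with ID {}", message.getMessageID());
      return message;
   }
}

You would then reference the class from the divert's transformer configuration in broker.xml (e.g. transformer-class-name) and make sure the class is on the broker's classpath.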


Why are there kafka and brokers in the scdf trace shown in zipkin?

I have a Spring Cloud Data Flow environment created in Kubernetes and a Zipkin environment created as well. But when I look at the Dependencies view in Zipkin, I see that in addition to the applications that exist in the stream, there are also a broker and kafka.
Can anyone tell me why this is? And is there any way to keep broker and kafka from showing up?
That's because one of the brokers got resolved as being Kafka (for example, via special message headers) and the other didn't. It's either a bug in Sleuth or you're using an uninstrumented library.

ActiveMQ Artemis: full disk policy

I'm using ActiveMQ Artemis 2.17. I've set the address-full-policy to PAGE and ran some crash tests. When the disk is full, I get the message "AMQ222212: Disk Full! Blocking message production on address...". Is there a setting to raise an error to the producers instead of blocking them?
When the disk is full the only option is to block. However, if the protocol in use doesn't support flow control, an exception will be thrown instead. This is noted in the documentation.
For what it's worth, individual addresses can be configured to behave differently when they reach max-size-bytes. One of those options is to return a failure to producers, e.g.:
<address-full-policy>FAIL</address-full-policy>
See the documentation for more details.
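For instance, an address-setting in broker.xml along these lines (the match value and size limit here are placeholders):

<address-setting match="my.address.#">
   <max-size-bytes>104857600</max-size-bytes>
   <address-full-policy>FAIL</address-full-policy>
</address-setting>

With FAIL, a producer sending to an address that has exceeded max-size-bytes receives an exception rather than being blocked. Note this covers the per-address size limit, not the global disk-full condition from AMQ222212.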

Dead letter queue for Kafka Connect HTTP sink

I am writing an HTTP sink plugin for Kafka Connect. Its purpose is to send an HTTP request for each message in the configured Kafka topic. I want to send the message to a dead letter queue in case the HTTP request fails. Can I make use of the dead letter queue configuration provided for sink plugins?
The reason for this question is that the Kafka Connect documentation and several blogs mention that only errors in the transformer and converter will be sent to the dead letter queue, not the ones raised during put(). Since the task of sending the HTTP request is done in put(), is there a way to send failed HTTP messages to the DLQ? If not, is it possible to send the message to some other Kafka topic for further processing?
According to @Neil, this might be informative:
KIP-610 (implemented in Kafka 2.6) added DLQ support for issues when interacting with the end system. KIP-298 added DLQ support, but only for issues prior to the sink-system interaction.
Check the versions of your Connect cluster and sink connector to see if they support it.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-610%3A+Error+Reporting+in+Sink+Connectors
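For illustration, a minimal sketch of the KIP-610 reporter inside a SinkTask (Kafka Connect 2.6+; HttpSinkTask and sendHttpRequest are placeholder names, not part of any real connector):

import java.util.Collection;
import java.util.Map;
import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.sink.ErrantRecordReporter;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public class HttpSinkTask extends SinkTask {
   @Override
   public void put(Collection<SinkRecord> records) {
      // May be null if no DLQ is configured (the method itself only exists on 2.6+ workers).
      ErrantRecordReporter reporter = context.errantRecordReporter();
      for (SinkRecord record : records) {
         try {
            sendHttpRequest(record); // your HTTP call
         } catch (Exception e) {
            if (reporter != null) {
               reporter.report(record, e); // routes the failed record to the DLQ topic
            } else {
               throw new ConnectException("HTTP request failed", e);
            }
         }
      }
   }

   private void sendHttpRequest(SinkRecord record) {
      // placeholder for the actual HTTP request
   }

   @Override
   public void start(Map<String, String> props) { }

   @Override
   public void stop() { }

   @Override
   public String version() { return "1.0"; }
}

The reporter is only available when the connector is configured with a DLQ, e.g. errors.tolerance=all and errors.deadletterqueue.topic.name; otherwise errantRecordReporter() returns null.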

Is it possible to log all incoming messages in Apache Kafka

I need to know if it is possible to configure logging for an Apache Kafka broker so it writes all produced/consumed topics and their messages.
I've been looking at log4j.properties, but none of the suggested properties seems to do what I need.
Looking at the log files generated by Kafka, none of them seems to log the messages written to the different topics.
UPDATE:
Not exactly what I was looking for, but for anyone looking for something similar, I found https://github.com/kafka-lens/kafka-lens, which provides a friendly GUI to view messages on different topics.
I feel like there's some confusion with the word "log".
As you're talking about log4j, I assume you mean what I'd call "application logs". Kafka does not write the records it handles to its application/log4j logs. In Kafka, log4j logs are only used to trace errors and give some context about the work brokers are doing.
On the other hand, Kafka writes/reads records into/from its "log", the Kafka log. These files are stored in the path specified by log.dirs (/tmp/kafka-logs by default) and are not directly readable. You can use the DumpLogSegments tool to read them if you want, for example:
bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --print-data-log \
  --files /tmp/kafka-logs/topic-0/00000000000000000000.log
(--files takes the segment files to dump; --print-data-log includes the actual message payloads in the output.)

Diverts deleted when restarting ActiveMQ Artemis

I have searched and haven't found anything about this online or in the manual.
I have set up addresses and use both multicast to several queues and anycast to a single queue (all durable queues). To these I have connected diverts created at runtime via the API.
The diverts work great when sending messages. But when I restart the ActiveMQ Artemis instance, the diverts are deleted. Everything else is in place; just the diverts are deleted.
Any ideas on how to keep diverts after a restart?
Diverts created via the management API during runtime are volatile. If you wish to have a divert which survives a broker restart you should modify the broker.xml with the desired divert configuration.
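A minimal sketch of such a configuration inside broker.xml's <core> element (the divert name and addresses are placeholders):

<diverts>
   <divert name="my-divert">
      <address>source.address</address>
      <forwarding-address>target.address</forwarding-address>
      <exclusive>false</exclusive>
   </divert>
</diverts>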
Of course, the current behavior may not work for your use-case. If that's true then I would encourage you to open a "Feature Request" at the Artemis JIRA project. Furthermore, if you're really committed to seeing the behavior change, you can download the code, make the necessary changes, and submit a pull request (or attach a patch to the JIRA). Check out the Artemis Hacking Guide for help getting started.