I am sending all my application logs to Kafka using the Log4j2 Kafka appender, and it works. But when I purposefully bring down the broker, the application hangs and the Kafka appender keeps retrying to establish the connection.
How can I stop writing to Kafka when the broker(s) are down, and resume once they are available again?
Here is the appender configuration I have used:
<Kafka name="KafkaServiceStatInfo" topic="testKafkaLogs">
<PatternLayout pattern="%m"/>
<Property name="bootstrap.servers">localhost:9092</Property>
<Property name="acks">0</Property>
</Kafka>
<Async name="Async">
<AppenderRef ref="KafkaServiceStatInfo"/>
</Async>
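Not part of the original configuration, but one producer-level knob relevant to this kind of hang is max.block.ms, which bounds how long a send may block while broker metadata is unavailable. The appender's Property elements are passed through to the underlying KafkaProducer, so a sketch (the 10-second value is purely illustrative) would look like:
<Kafka name="KafkaServiceStatInfo" topic="testKafkaLogs">
    <PatternLayout pattern="%m"/>
    <Property name="bootstrap.servers">localhost:9092</Property>
    <Property name="acks">0</Property>
    <!-- illustrative: bound how long the producer may block while the broker is down -->
    <Property name="max.block.ms">10000</Property>
</Kafka>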
Related
When I use the log4j KafkaAppender with a single broker and that broker is stopped, the KafkaAppender waits for a very long time before failing. I use syncSend=false. I want to set some timeout so that the appender doesn't wait so long.
Could you tell me how I need to configure the KafkaAppender in order to prevent this wait?
There is no timeout setting on KafkaAppender itself, but there are a few timeout options that can be configured on KafkaProducer. The options are described in the Kafka documentation.
Here is an example Kafka appender configuration with two Kafka producer timeout settings at their default values:
<Appenders>
    <Kafka name="Kafka" topic="log-test">
        <PatternLayout pattern="%date %message"/>
        <Property name="bootstrap.servers">localhost:9092</Property>
        <Property name="request.timeout.ms">30000</Property><!-- 30 seconds -->
        <Property name="transaction.timeout.ms">60000</Property><!-- 1 minute -->
    </Kafka>
</Appenders>
You might want to play with those to get the expected behaviour.
Also, remember that the syncSend option was added in log4j 2.8. If you use an older version, it will have no effect.
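For completeness, syncSend is set as an attribute on the Kafka appender element itself rather than as a producer Property; a minimal sketch combining it with the example above:
<Kafka name="Kafka" topic="log-test" syncSend="false">
    <PatternLayout pattern="%date %message"/>
    <Property name="bootstrap.servers">localhost:9092</Property>
    <Property name="request.timeout.ms">30000</Property>
</Kafka>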
I'm configuring an Apache Artemis message broker. The broker will accept big files, and downstream consumers access the topic to process the latest files. Now I'm wondering how to make the latest files available for dev runs. Because the messages only arrive a few times a day, the test runs need access to the last few sent messages and can't wait for the next one.
For production and staging systems, I found that durable subscriptions work fine. I've adapted an Apache Camel config to serve as an illustration. Here are two consumers that receive messages, each using a durable subscription:
<route id="inbox">
<from uri="file:inbox"/>
<to uri="activemq:topic:testing"/>
</route>
<route id="outbox-staging">
<from uri="activemq:topic:testing?clientId=staging&durableSubscriptionName=staging"/>
<to uri="file:outbox-staging"/>
</route>
<route id="outbox-production">
<from uri="activemq:topic:testing?clientId=production&durableSubscriptionName=production"/>
<to uri="file:outbox-production"/>
</route>
This is fine. If a consumer is offline, it will pick up messages when it comes back online. Now if another consumer joins for testing:
<route id="outbox-testing" streamCache="true">
<from uri="activemq:topic:testing?clientId=my-local-consumer&durableSubscriptionName=my-local-consumer"/>
<to uri="file:outbox-local"/>
</route>
because the subscription didn't exist before, the consumer has to wait for new messages. What I'm looking for is for new subscribers to be immediately primed with the available messages. I found different names for the concept, such as prefetchPolicy, consumerWindowSize, or "retroactive consumer", but it's unclear to me which terms apply to Apache Artemis and how to set them up, because the examples mostly refer to Apache ActiveMQ.
How can I configure Artemis so that a consumer joining on a new subscription gets past messages?
The prefetchPolicy doesn't apply to ActiveMQ Artemis. It's for ActiveMQ 5.x.
The consumerWindowSize does apply to ActiveMQ Artemis.
However, neither prefetchPolicy nor consumerWindowSize apply to this situation as they're both related to "flow control" and have nothing to do with putting "missed" messages onto a JMS topic subscription.
The "retroactive consumer" feature is for ActiveMQ 5.x. A similar feature (called "retroactive address") will be available in ActiveMQ Artemis 2.11. It was implemented as part of ARTEMIS-2504.
Therefore you have a few options:
Wait for ActiveMQ Artemis 2.11 to be released (should be released in January).
Build your own version of ActiveMQ Artemis based on the master branch which includes the retroactive address feature.
Modify your test environment so that new subscribers don't have to wait so long for messages (e.g. send them more frequently).
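For reference, the retroactive address feature mentioned above is enabled per address through the address-settings in broker.xml; a sketch for the testing address from the question (the message count of 100 is purely illustrative):
<address-settings>
    <address-setting match="testing">
        <!-- keep the last 100 messages on the address and deliver them to subscriptions created later -->
        <retroactive-message-count>100</retroactive-message-count>
    </address-setting>
</address-settings>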
I have two components communicating over a JMS queue in a WildFly instance. As soon as the consumer of the queue disconnects or gets stopped, the messages are forwarded to the DLQ (at least when WildFly is restarted).
Is it possible to configure WildFly to automatically redeliver the messages from the DLQ as soon as a consumer reconnects to the queue?
Some details:
WildFly version: 8.2.0
standalone.xml - as far as I can tell, nothing special:
<jms-destinations>
    <jms-queue name="ExpiryQueue">
        <entry name="java:/jms/queue/ExpiryQueue"/>
        <durable>false</durable>
    </jms-queue>
    <jms-queue name="DLQ">
        <entry name="java:/jms/queue/DLQ"/>
        <durable>false</durable>
    </jms-queue>
    ...
    <jms-queue name="Q1-Producer-to-Consumer">
        <entry name="java:/queue/Q1-Producer-to-Consumer"/>
        <entry name="java:jboss/exported/queue/Q1-Producer-to-Consumer"/>
        <durable>false</durable>
    </jms-queue>
</jms-destinations>
Thanks.
The DLQ only gets messages that have thrown an exception during message processing. If a consumer disconnects, the messages will simply still be sitting there awaiting delivery.
If you are seeing an issue whereby messages hit the DLQ during a server restart, this suggests that your consumer is consuming messages before the resources it requires are available, and so is erroring when processing them. You would be better off fixing your consumer so that it doesn't start consuming messages too early, rather than trying to fish the failed messages back from the DLQ.
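For reference, which queue acts as the DLQ and how many failed deliveries it takes before a message is moved there are controlled by the address-settings of the messaging subsystem in standalone-full.xml. A sketch with illustrative values, using the element names of the HornetQ-based subsystem in WildFly 8.x:
<address-settings>
    <address-setting match="#">
        <dead-letter-address>jms.queue.DLQ</dead-letter-address>
        <expiry-address>jms.queue.ExpiryQueue</expiry-address>
        <!-- illustrative: move a message to the DLQ after 10 failed delivery attempts -->
        <max-delivery-attempts>10</max-delivery-attempts>
        <redelivery-delay>0</redelivery-delay>
    </address-setting>
</address-settings>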
I'm using the Virtual Topics concept of ActiveMQ 5.14.5 (https://activemq.apache.org/virtual-destinations) with the MQTT protocol.
ActiveMQ will pick up the messages written to the topic and write them to a queue (or multiple queues or topics). See the ${ACTIVEMQ_HOME}/conf/activemq.xml configuration below:
<beans>
    <broker>
        ...
        <destinationInterceptors>
            <virtualDestinationInterceptor>
                <virtualDestinations>
                    <compositeQueue name="MY.QUEUE">
                        <forwardTo>
                            <queue physicalName="FOO" />
                            <topic physicalName="BAR" />
                        </forwardTo>
                    </compositeQueue>
                </virtualDestinations>
            </virtualDestinationInterceptor>
        </destinationInterceptors>
        ...
    </broker>
</beans>
Using the MQTT.fx software (https://mqttfx.jensd.de/), I'm only able to dequeue from a topic (BAR). How can I dequeue from a queue (FOO) to see the messages that arrived on it?
I am new to this and am learning about the MQTT protocol and the ActiveMQ broker.
I am using VoltDB, and my use case is to import data from Kafka into VoltDB.
I am using the following command:
kafkaloader test --brokers <>:2181, --topic kafkavoltdb
In the deployment.xml file, the configuration is:
<security enabled="false" provider="hash"/>
<import>
    <configuration type="kafka" enabled="true" format="csv">
        <property name="topics">kafkavoltdb</property>
        <property name="procedure">TEST.insert</property>
        <property name="brokers">brokers:6667</property>
    </configuration>
</import>
I am not able to fetch data from Kafka into VoltDB; the kafkaloader command hangs and does not throw any error. The logs show:
Failed to get Kafka partition info
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata.
Note: I am using Apache Kafka (HDP version 3.0, Kerberos-secured cluster).
Kindly help me with a solution.