timeout with KafkaAppender in log4j? - apache-kafka

When I use the KafkaAppender of Log4j with a single broker and that broker is stopped, the appender waits for a very long time before failing. I use syncSend=false. I want to set a timeout so the appender doesn't wait that long.
How do I need to configure the KafkaAppender to prevent this wait?

There is no timeout setting on KafkaAppender itself, but there are a few timeout options that can be configured on KafkaProducer. The options are described in the Kafka documentation.
Here is an example Kafka appender configuration with two Kafka producer timeout settings, shown with their default values:
<Appenders>
  <Kafka name="Kafka" topic="log-test">
    <PatternLayout pattern="%date %message"/>
    <Property name="bootstrap.servers">localhost:9092</Property>
    <Property name="request.timeout.ms">30000</Property><!-- 30 seconds -->
    <Property name="transaction.timeout.ms">60000</Property><!-- 1 minute -->
  </Kafka>
</Appenders>
You might want to play with those to get the expected behaviour.
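If most of the delay happens while the broker is unreachable, the producer's max.block.ms setting (how long send() and metadata fetches may block, 60 seconds by default) is usually the one worth lowering as well; the 5-second value below is only an illustration, not a recommendation:
<Kafka name="Kafka" topic="log-test">
  <PatternLayout pattern="%date %message"/>
  <Property name="bootstrap.servers">localhost:9092</Property>
  <Property name="max.block.ms">5000</Property><!-- fail after ~5 seconds instead of the default 60 -->
</Kafka>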
Also, remember that the syncSend option was only added in Log4j 2.8; in older versions it has no effect.

Related

Log4j2 kafka appender failover handling when all of Kafka Brokers are unavailable?

I am sending all my application logs to Kafka using the Log4j2 Kafka appender, and it works. But when I purposefully bring down the broker, the application hangs and the Kafka appender keeps retrying to establish the connection.
How can I stop writing to Kafka when the broker(s) are down, and resume once they are available again?
Following is the appender configuration I have used.
<Kafka name="KafkaServiceStatInfo" topic="testKafkaLogs">
<PatternLayout pattern="%m"/>
<Property name="bootstrap.servers">localhost:9092</Property>
<Property name="acks">0</Property>
</Kafka>
<Async name="Async">
<AppenderRef ref="KafkaServiceStatInfo"/>
</Async>
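One direction worth exploring (a sketch, not a verified fix from the original thread) is to combine a low producer blocking timeout with Log4j2's Failover appender, so that records go to a fallback appender while Kafka is down and the primary is retried periodically; note that the Kafka appender needs ignoreExceptions="false" for the Failover appender to see its failures. The file appender name, file name and retry interval below are placeholders:
<Appenders>
  <Kafka name="KafkaServiceStatInfo" topic="testKafkaLogs" ignoreExceptions="false">
    <PatternLayout pattern="%m"/>
    <Property name="bootstrap.servers">localhost:9092</Property>
    <Property name="acks">0</Property>
    <Property name="max.block.ms">5000</Property><!-- bound how long a send may block when the broker is down -->
  </Kafka>
  <File name="FallbackFile" fileName="kafka-fallback.log">
    <PatternLayout pattern="%m%n"/>
  </File>
  <Failover name="Failover" primary="KafkaServiceStatInfo" retryIntervalSeconds="60">
    <Failovers>
      <AppenderRef ref="FallbackFile"/>
    </Failovers>
  </Failover>
</Appenders>
Loggers would then reference the Failover appender rather than the Kafka appender directly.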

Redelivering JMS messages from the DLQ

I have two components communicating over a JMS queue in a WildFly instance. As soon as the consumer of the queue disconnects or gets stopped, the messages are forwarded to the DLQ (at least when WildFly is restarted).
Is it possible to configure WildFly to automatically redeliver the messages from the DLQ as soon as a consumer reconnects to the queue?
Some details
WildFly version: 8.2.0
standalone.xml - As far as I can tell, nothing special
<jms-destinations>
  <jms-queue name="ExpiryQueue">
    <entry name="java:/jms/queue/ExpiryQueue"/>
    <durable>false</durable>
  </jms-queue>
  <jms-queue name="DLQ">
    <entry name="java:/jms/queue/DLQ"/>
    <durable>false</durable>
  </jms-queue>
  ...
  <jms-queue name="Q1-Producer-to-Consumer">
    <entry name="java:/queue/Q1-Producer-to-Consumer"/>
    <entry name="java:jboss/exported/queue/Q1-Producer-to-Consumer"/>
    <durable>false</durable>
  </jms-queue>
</jms-destinations>
Thanks.
The DLQ only gets messages that have thrown an exception during message processing. If a consumer disconnects, the messages will simply remain on the queue awaiting delivery.
If you are seeing an issue whereby messages hit the DLQ during a server restart, this suggests that your consumer is consuming messages before the resources it requires are available, and is therefore erroring while processing them. You would be better off fixing your consumer so that it doesn't start consuming messages too early, rather than trying to fish the failed messages back from the DLQ.
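If the underlying goal is to control when messages end up in the DLQ at all, the usual knobs in WildFly 8's HornetQ messaging subsystem are the address-settings for the queue, such as the redelivery delay and the number of delivery attempts before dead-lettering; the values below are placeholders, a sketch rather than a recommended configuration:
<address-settings>
  <address-setting match="jms.queue.Q1-Producer-to-Consumer">
    <dead-letter-address>jms.queue.DLQ</dead-letter-address>
    <redelivery-delay>5000</redelivery-delay>
    <max-delivery-attempts>10</max-delivery-attempts>
  </address-setting>
</address-settings>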

voltdb not fetching data from kafka topic

I am using VoltDB, and my use case is to import data from Kafka into VoltDB.
I am using the following command:
kafkaloader test --brokers <>:2181, --topic kafkavoltdb
In the deployment.xml file, the configuration is:
<security enabled="false" provider="hash"/>
<import>
  <configuration type="kafka" enabled="true" format="csv">
    <property name="topics">kafkavoltdb</property>
    <property name="procedure">TEST.insert</property>
    <property name="brokers">brokers:6667</property>
  </configuration>
</import>
I am not able to fetch data from Kafka into VoltDB; the kafkaloader command hangs and does not throw any error. The logs show:
Failed to get Kafka partition info
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata.
Note: I am using Apache Kafka (HDP 3.0, Kerberos-secured cluster).
Kindly help me with a solution.
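One thing that stands out (an observation rather than a confirmed fix): the kafkaloader command points --brokers at port 2181, which is the ZooKeeper port, while the deployment file uses the broker port 6667. kafkaloader expects a Kafka broker host:port, so the command should look roughly like this, with kafka-broker-host as a placeholder for your actual broker:
kafkaloader test --brokers kafka-broker-host:6667 --topic kafkavoltdb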

Cannot Restart Kafka Consumer Application, Failing due to OffsetOutOfRangeException

Currently, my Kafka Consumer streaming application is manually committing the offsets into Kafka with enable.auto.commit set to false.
The application failed when I tried restarting it, throwing the exception below:
org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions:{partition-12=155555555}
Assuming the above error is because the message is no longer present (the partition data was deleted due to the retention period), I tried the following:
I disabled the manual commit and enabled auto-commit (enable.auto.commit=true and auto.offset.reset=earliest).
It still fails with the same error:
org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions:{partition-12=155555555}
Please suggest ways to restart the job so that it can successfully read from an offset that is still present in the partition.
You are trying to read offset 155555555 from partition 12 of your topic, but most probably it has already been deleted due to your retention policy.
You can either use the Kafka Streams Application Reset Tool to reset your Kafka Streams application's internal state, so that it can reprocess its input data from scratch:
$ bin/kafka-streams-application-reset.sh
Option (* = required)          Description
---------------------          -----------
* --application-id <id>        The Kafka Streams application ID (application.id)
--bootstrap-servers <urls>     Comma-separated list of broker urls with format: HOST1:PORT1,HOST2:PORT2
                               (default: localhost:9092)
--intermediate-topics <list>   Comma-separated list of intermediate user topics
--input-topics <list>          Comma-separated list of user input topics
--zookeeper <url>              Format: HOST:PORT
                               (default: localhost:2181)
or start your consumer using a fresh consumer group ID.
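If this is a plain consumer application rather than a Kafka Streams application, the group's committed offsets can also be reset directly with the kafka-consumer-groups tool (available since Kafka 0.11); the group and topic names below are placeholders:
$ bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
    --group my-consumer-group --topic my-topic \
    --reset-offsets --to-earliest --execute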
I met the same problem, and I use the package org.apache.spark.streaming.kafka010 in my application. In the beginning I suspected that the auto.offset.reset strategy was having no effect, but when I read the description of the method fixKafkaParams in the object KafkaUtils, I found that the configuration had been overwritten. I guess the reason it tweaks the ConsumerConfig.AUTO_OFFSET_RESET_CONFIG configuration for the executor is to keep the offsets obtained by the driver and the executor consistent.

How to redirect logging in akka?

I am implementing a distributed database with Scala 2.9 and Akka 2.0. My current problem is that I want to redirect the standard logging to a file instead of stdout. I don't really want to use SLF4J or SLF4S. Is there a simple way to redirect the logging output?
The Akka documentation for logging says that you can register handlers in the config like this:
akka {
  # Event handlers to register at boot time (Logging$DefaultLogger logs to STDOUT)
  event-handlers = ["akka.event.Logging$DefaultLogger"]
  # Options: ERROR, WARNING, INFO, DEBUG
  loglevel = "DEBUG"
}
There is also an SLF4J handler
akka.event.slf4j.Slf4jEventHandler
Using this, you can plug in any SLF4J-compliant library, such as Logback, to write your logs wherever you want.
Edit:
To use Logback to log to a file, you have to add Logback as a dependency and add the Slf4jEventHandler to your config:
akka {
  # Event handlers to register at boot time (Logging$DefaultLogger logs to STDOUT)
  event-handlers = ["akka.event.slf4j.Slf4jEventHandler"]
  # Options: ERROR, WARNING, INFO, DEBUG
  loglevel = "DEBUG"
}
and add a Logback config to your project that looks something like this (taken from the Logback docs):
<configuration>
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>testFile.log</file>
    <append>true</append>
    <!-- encoders are assigned the type
         ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
    <encoder>
      <pattern>%-4relative [%thread] %-5level %logger{35} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="DEBUG">
    <appender-ref ref="FILE" />
  </root>
</configuration>
Due to the asynchronous logging in Akka you cannot use the %thread variable in your log pattern; instead, use the sourceThread variable from the MDC. You can read about that at the bottom of this page: http://doc.akka.io/docs/akka/2.0/scala/logging.html
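For example, the file appender's pattern from the config above would then look something like this, using Logback's %X{...} syntax to read the value from the MDC:
<pattern>%-4relative [%X{sourceThread}] %-5level %logger{35} - %msg%n</pattern>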
You don't have to use SLF4J or Logback explicitly in your code; just use the normal Akka logging, and the handler will take care of everything else.