kafka crashing when consuming messages - apache-kafka

Kafka used to work perfectly on my own computer. I'm working on another computer now, where it reports the following error:
ERROR Error while creating log for __consumer_offsets-41 in dir C:\tmp\kafka-logs (kafka.server.LogDirFailureChannel)
java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:940)
at kafka.log.AbstractIndex.<init>(AbstractIndex.scala:126)
at kafka.log.TimeIndex.<init>(TimeIndex.scala:54)
at kafka.log.LogSegment$.open(LogSegment.scala:635)
at kafka.log.Log.loadSegments(Log.scala:573)
at kafka.log.Log.<init>(Log.scala:290)
at kafka.log.Log$.apply(Log.scala:2141)
at kafka.log.LogManager.$anonfun$getOrCreateLog$1(LogManager.scala:701)
at scala.Option.getOrElse(Option.scala:121)
at kafka.log.LogManager.getOrCreateLog(LogManager.scala:659)
at kafka.cluster.Partition.$anonfun$getOrCreateReplica$1(Partition.scala:199)
at kafka.utils.Pool$$anon$2.apply(Pool.scala:61)
at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660)
at kafka.utils.Pool.getAndMaybePut(Pool.scala:60)
at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:195)
at kafka.cluster.Partition.$anonfun$makeLeader$3(Partition.scala:373)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
at scala.collection.Iterator.foreach(Iterator.scala:937)
at scala.collection.Iterator.foreach$(Iterator.scala:937)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
at scala.collection.IterableLike.foreach(IterableLike.scala:70)
at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike.map(TraversableLike.scala:233)
at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at kafka.cluster.Partition.$anonfun$makeLeader$1(Partition.scala:373)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:259)
at kafka.cluster.Partition.makeLeader(Partition.scala:367)
at kafka.server.ReplicaManager.$anonfun$makeLeaders$5(ReplicaManager.scala:1170)
at scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:145)
at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:235)
at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:228)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:145)
at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:1168)
at kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:1080)
at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:185)
at kafka.server.KafkaApis.handle(KafkaApis.scala:110)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:937)
... 41 more
[2019-03-25 14:55:00,296] INFO [ReplicaManager broker=0] Stopping serving replicas in dir C:\tmp\kafka-logs (kafka.server.ReplicaManager)
[2019-03-25 14:55:00,296] ERROR [ReplicaManager broker=0] Error while making broker the leader for partition Topic: __consumer_offsets; Partition: 41; Leader: None; AllReplicas: ; InSyncReplicas: in dir None (kafka.server.ReplicaManager)
This error appears whenever I try to consume a topic by issuing the following command:
bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic UpdateObserver --from-beginning
My Java version is:
java version "1.8.0_91"
Java(TM) SE Runtime Environment (build 1.8.0_91-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.91-b15, mixed mode)
PS: Deleting the tmp directory doesn't solve the problem; it just lets me launch Kafka again. As soon as I try to consume from a given topic, it crashes.

I was facing the same issue. Upgrading the Java version to 16.0.1 solved the problem for me.
Also set JAVA_HOME to the JDK 16 installation directory, without the bin folder, for example C:\Program Files\Java\jdk-16.0.1.
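For instance, on Windows you could set it from a command prompt like this (a quick sketch; the JDK path is the example from the answer above, so adjust it to your installation):
REM persist JAVA_HOME for the current user (example path)
setx JAVA_HOME "C:\Program Files\Java\jdk-16.0.1"
REM open a new prompt, then check which JVM is picked up
"%JAVA_HOME%\bin\java" -version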

Related

windows kafka java.nio.file.FileSystemException

This error occurs very frequently on Windows Server 2012.
Kafka version: 2.3.1
The error log:
[2019-12-05 03:57:51,567] ERROR Uncaught exception in scheduled task 'kafka-log-retention' (kafka.utils.KafkaScheduler)
org.apache.kafka.common.errors.KafkaStorageException: Error while deleting segments for MetadataLog-0 in dir D:\GpsPlatform\kafka\.\tmp\kafka-logs
Caused by: java.nio.file.FileSystemException: D:\GpsPlatform\kafka\.\tmp\kafka-logs\MetadataLog-0\00000000000003368617.index -> D:\GpsPlatform\kafka\.\tmp\kafka-logs\MetadataLog-0\00000000000003368617.index.deleted: The process cannot access the file because it is being used by another process.
at java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:92)
at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103)
at java.base/sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:395)
at java.base/sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:292)
at java.base/java.nio.file.Files.move(Files.java:1425)
at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:815)
at kafka.log.AbstractIndex.renameTo(AbstractIndex.scala:209)
at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:509)
at kafka.log.Log.asyncDeleteSegment(Log.scala:1982)
at kafka.log.Log.deleteSegment(Log.scala:1967)
at kafka.log.Log.$anonfun$deleteSegments$3(Log.scala:1493)
at kafka.log.Log.$anonfun$deleteSegments$3$adapted(Log.scala:1493)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at kafka.log.Log.$anonfun$deleteSegments$2(Log.scala:1493)
at scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.java:23)
at kafka.log.Log.maybeHandleIOException(Log.scala:2085)
at kafka.log.Log.deleteSegments(Log.scala:1484)
at kafka.log.Log.deleteOldSegments(Log.scala:1479)
at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1557)
at kafka.log.Log.deleteOldSegments(Log.scala:1547)
at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:914)
at kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:911)
at scala.collection.immutable.List.foreach(List.scala:392)
at kafka.log.LogManager.cleanupLogs(LogManager.scala:911)
at kafka.log.LogManager.$anonfun$startup$2(LogManager.scala:395)
at kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:114)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:65)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:830)
Suppressed: java.nio.file.FileSystemException: D:\GpsPlatform\kafka\.\tmp\kafka-logs\MetadataLog-0\00000000000003368617.index -> D:\GpsPlatform\kafka\.\tmp\kafka-logs\MetadataLog-0\00000000000003368617.index.deleted: The process cannot access the file because it is being used by another process.
at java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:92)
at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103)
at java.base/sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:309)
at java.base/sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:292)
at java.base/java.nio.file.Files.move(Files.java:1425)
at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:812)
... 29 more
After running for a period of time, a similar exception is reported again, causing Kafka to crash. How can this exception be completely resolved?
If you have to run Kafka in a Windows environment, you have to disable log retention: retention triggers segment deletion, and on Windows the broker cannot rename or delete log segment files that are still memory-mapped, which is what produces the FileSystemException above.
In Kafka's server.properties:
log.retention.hours=-1
log.cleaner.enable=false
# Remove any other lines that start with log.retention.*
To run Kafka on Windows, it's recommended to do so using WSL2, as detailed here. Otherwise you will keep encountering the kinds of problems described above.
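As a rough sketch, assuming you have extracted a Kafka distribution inside a WSL2 Ubuntu shell, you simply use the Linux quickstart commands (run each in its own terminal):
# from the extracted Kafka directory, start ZooKeeper, then the broker
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties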

Kafka not working after consumer has started to consume data

I am new to Kafka and installed it on Windows 10 following the steps in https://kafka.apache.org/quickstart.
In step 5, after starting the consumer, I ran the following command:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
The above command produces the following error:
[2019-03-18 19:09:44,905] ERROR Error while loading log dir C:\tmp\kafka-logs (kafka.log.LogManager)
java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(Unknown Source)
at kafka.log.AbstractIndex.<init>(AbstractIndex.scala:126)
at kafka.log.OffsetIndex.<init>(OffsetIndex.scala:54)
at kafka.log.LogSegment$.open(LogSegment.scala:634)
at kafka.log.Log$$anonfun$kafka$log$Log$$loadSegmentFiles$3.apply(Log.scala:434)
at kafka.log.Log$$anonfun$kafka$log$Log$$loadSegmentFiles$3.apply(Log.scala:421)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
After this error, even if I restart the Kafka server, the same error occurs.
Another error occurs when I list or create topics:
bin/kafka-topics.sh --list --zookeeper localhost:2181
log4j:ERROR Could not read configuration file from URL [file:/c/Users/sboyapal/Projects/Polaris/kafka_2.11-2.1.0/bin/../config/tools-log4j.properties].
java.io.FileNotFoundException: \c\Users\sboyapal\Projects\Polaris\kafka_2.11-2.1.0\bin\..\config\tools-log4j.properties (The system cannot find the path specified)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(Unknown Source)
at java.io.FileInputStream.<init>(Unknown Source)
at java.io.FileInputStream.<init>(Unknown Source)
at sun.net.www.protocol.file.FileURLConnection.connect(Unknown Source)
at sun.net.www.protocol.file.FileURLConnection.getInputStream(Unknown Source)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
at com.typesafe.scalalogging.Logger$.apply(Logger.scala:48)
at kafka.utils.Log4jControllerRegistration$.<init>(Logging.scala:25)
at kafka.utils.Log4jControllerRegistration$.<clinit>(Logging.scala)
at kafka.utils.Logging$class.$init$(Logging.scala:47)
at kafka.admin.TopicCommand$.<init>(TopicCommand.scala:40)
at kafka.admin.TopicCommand$.<clinit>(TopicCommand.scala)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
log4j:ERROR Ignoring configuration file [file:/c/Users/sboyapal/Projects/Polaris/kafka_2.11-2.1.0/bin/../config/tools-log4j.properties].
log4j:WARN No appenders could be found for logger (kafka.utils.Log4jControllerRegistration$).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
test
The only way to fix the issue is to delete the C:\tmp\kafka-logs directory. After that, start the Kafka server and follow the same steps again.
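A minimal sketch of that recovery, run from the Kafka installation directory on Windows (the log path is the one from the question; run the two start scripts in separate prompts):
REM stop the broker, then delete the corrupted log directory
rmdir /s /q C:\tmp\kafka-logs
REM restart ZooKeeper and the broker, then retry the quickstart steps
bin\windows\zookeeper-server-start.bat config\zookeeper.properties
bin\windows\kafka-server-start.bat config\server.properties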

Kafka Startup Error on topic delete

Kafka Version: 0.10.2.1 (Server)
Zookeeper: bundled with Kafka
Issue
If you delete the topic and then restart the broker, it fails. The broker is configured with delete.topic.enable=true.
Delete Command:
./kafka-topics.sh --zookeeper localhost:2182 --delete --topic MY.TOPIC.NAME
The only way out for now is to go to the log directory manually and remove the topic directories using rm -rf. After that it's OK.
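A sketch of that manual cleanup, assuming the default <topic>-<partition> directory naming under the broker's log.dirs (stop the broker first):
# e.g. inside /tmp/kafka-logs; removes every partition directory of the topic
rm -rf MY.TOPIC.NAME-*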
Error:
[2017-06-09 12:24:43,359] ERROR There was an error in one of the threads during logs loading: java.lang.StringIndexOutOfBoundsException: String index out of range: -1 (kafka.log.LogManager)
[2017-06-09 12:24:43,360] FATAL [Kafka Server 101], Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.StringIndexOutOfBoundsException: String index out of range: -1
at java.lang.String.substring(String.java:1967)
at kafka.log.Log$.parseTopicPartitionName(Log.scala:1146)
at kafka.log.LogManager.$anonfun$loadLogs$10(LogManager.scala:153)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2017-06-09 12:24:43,363] INFO [Kafka Server 101], shutting down (kafka.server.KafkaServer)
If your topic indeed has a dot (".") in its name, as shown, I believe you have been hit by this defect:
KAFKA-5232 Kafka broker fails to start if a topic containing dot in its name is marked for delete but hasn't been deleted during previous uptime

NoClassDefFoundError: kafka/admin/ShutdownBroker

When I run the Kafka broker shutdown from the provided shell script, I get a NoClassDefFoundError and I don't know how to resolve it.
Please help.
command:
bin/kafka-run-class.sh kafka.admin.ShutdownBroker --zookeeper 172.19.41.48:2181,172.19.41.50:2181,172.19.41.52:2181,172.19.41.55:2181,172.19.41.57:2181/huadong/kafka --broker 5 --num.retries 3 --retry.interval.ms 600
exception:
kafka.admin.ShutdownBroker --zookeeper 172.19.41.48:2181,172.19.41.50:2181,172.19.41.52:2181,172.19.41.55:2181,172.19.41.57:2181/huadong/kafka --broker 5 --num.retries 3 --retry.interval.ms 600
/export/servers/jdk1.6.0_25/bin/java -Xmx256M -server -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=bin/../logs -Dlog4j.configuration=file:bin/../config/tools-log4j.properties -cp .:/export/servers/jdk1.6.0_25/lib/dt.jar:/export/servers/jdk1.6.0_25/lib/tools.jar:bin/../core/build/dependant-libs-2.10.4*/*.jar:bin/../examples/build/libs//kafka-examples*.jar:bin/../contrib/hadoop-consumer/build/libs//kafka-hadoop-consumer*.jar:bin/../contrib/hadoop-producer/build/libs//kafka-hadoop-producer*.jar:bin/../clients/build/libs/kafka-clients*.jar:bin/../libs/jopt-simple-3.2.jar:bin/../libs/kafka_2.11-0.8.2.1.jar:bin/../libs/kafka_2.11-0.8.2.1-javadoc.jar:bin/../libs/kafka_2.11-0.8.2.1-scaladoc.jar:bin/../libs/kafka_2.11-0.8.2.1-sources.jar:bin/../libs/kafka_2.11-0.8.2.1-test.jar:bin/../libs/kafka-clients-0.8.2.1.jar:bin/../libs/log4j-1.2.16.jar:bin/../libs/lz4-1.2.0.jar:bin/../libs/metrics-core-2.2.0.jar:bin/../libs/scala-library-2.11.5.jar:bin/../libs/scala-parser-combinators_2.11-1.0.2.jar:bin/../libs/scala-xml_2.11-1.0.2.jar:bin/../libs/slf4j-api-1.7.6.jar:bin/../libs/slf4j-log4j12-1.6.1.jar:bin/../libs/snappy-java-1.1.1.6.jar:bin/../libs/zkclient-0.3.jar:bin/../libs/zookeeper-3.4.6.jar:bin/../core/build/libs/kafka_2.10*.jar kafka.admin.ShutdownBroker --zookeeper 172.19.41.48:2181,172.19.41.50:2181,172.19.41.52:2181,172.19.41.55:2181,172.19.41.57:2181/huadong/kafka --broker 5 --num.retries 3 --retry.interval.ms 600
Exception in thread "main" java.lang.NoClassDefFoundError: kafka/admin/ShutdownBroker
Caused by: java.lang.ClassNotFoundException: kafka.admin.ShutdownBroker
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: kafka.admin.ShutdownBroker. Program will exit.
CLASSPATH:
[admin@A06-R12-302F0402-I36-59 kafka_2.11-0.9.0.1]$ echo $CLASSPATH
.:/export/servers/jdk1.7.0_71/lib/dt.jar:/export/servers/jdk1.7.0_71/lib/tools.jar:/export/servers/kafka_2.11-0.9.0.1/libs/*
The Kafka developers removed the helper tool for initiating a graceful broker shutdown, per the discussion in ticket KAFKA-1298 (removal commit, documentation page diff).
Now the only supported way to gracefully shut down a broker is to send a SIGTERM signal to the broker process: this initiates syncing logs to disk and starts re-election of new leaders for the partitions the current broker was leading.
The easiest way to stop a broker gracefully now is to use the kafka-server-stop.sh script provided as part of the Kafka distribution.
Configuration options that affect this behavior (a short sketch follows this list):
controlled.shutdown.enable
controlled.shutdown.max.retries
controlled.shutdown.retry.backoff.ms
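A minimal sketch of both approaches (the broker PID is illustrative; the property values shown are the usual defaults):
# preferred: the bundled script, which sends SIGTERM to the broker process
bin/kafka-server-stop.sh
# equivalent by hand: SIGTERM (never SIGKILL), so controlled shutdown can run
kill -s TERM <broker-pid>

# server.properties: controlled-shutdown tuning
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000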

How to change the log.dirs for Kafka on Ubuntu 14.04

I have installed Kafka on Ubuntu 14.04. By default, log.dirs was set to /tmp/kafka-logs. I ran the server and it was working fine. Then I shut down the server using kafka-server-stop.sh and changed log.dirs in the config file to a new directory.
After that, when I tried to start the Kafka server, I got the following error:
[2016-06-08 18:24:45,206] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
java.lang.StringIndexOutOfBoundsException: String index out of range: -1
at java.lang.String.substring(String.java:1911)
at kafka.log.Log$.parseTopicPartitionName(Log.scala:833)
at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$3$$anonfun$apply$7$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:138)
at kafka.utils.Utils$$anon$1.run(Utils.scala:54)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2016-06-08 18:24:45,208] INFO [Kafka Server 0], shutting down (kafka.server.KafkaServer)
I performed the following steps (sketched below):
Stop the Kafka servers.
Copy the data from /tmp into the new directory.
Change log.dirs in server.properties.
Restart the Kafka servers.
It worked fine for me. I referred to this post.
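A rough sketch of those steps on Ubuntu (the target directory /data/kafka-logs is just an example; adjust to your setup):
# stop the broker
bin/kafka-server-stop.sh
# copy the existing data to the new location
sudo mkdir -p /data/kafka-logs
sudo cp -r /tmp/kafka-logs/* /data/kafka-logs/
# point log.dirs at the new directory in config/server.properties:
#   log.dirs=/data/kafka-logs
# restart the broker
bin/kafka-server-start.sh -daemon config/server.properties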