NoClassDefFoundError: kafka/admin/ShutdownBroker - apache-kafka

When I run the broker shutdown from the provided shell script, it fails with a NoClassDefFoundError and I don't know how to resolve it.
Please help.
command:
bin/kafka-run-class.sh kafka.admin.ShutdownBroker --zookeeper 172.19.41.48:2181,172.19.41.50:2181,172.19.41.52:2181,172.19.41.55:2181,172.19.41.57:2181/huadong/kafka --broker 5 --num.retries 3 --retry.interval.ms 600
exception:
/export/servers/jdk1.6.0_25/bin/java -Xmx256M -server -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=bin/../logs -Dlog4j.configuration=file:bin/../config/tools-log4j.properties -cp .:/export/servers/jdk1.6.0_25/lib/dt.jar:/export/servers/jdk1.6.0_25/lib/tools.jar:bin/../core/build/dependant-libs-2.10.4*/*.jar:bin/../examples/build/libs//kafka-examples*.jar:bin/../contrib/hadoop-consumer/build/libs//kafka-hadoop-consumer*.jar:bin/../contrib/hadoop-producer/build/libs//kafka-hadoop-producer*.jar:bin/../clients/build/libs/kafka-clients*.jar:bin/../libs/jopt-simple-3.2.jar:bin/../libs/kafka_2.11-0.8.2.1.jar:bin/../libs/kafka_2.11-0.8.2.1-javadoc.jar:bin/../libs/kafka_2.11-0.8.2.1-scaladoc.jar:bin/../libs/kafka_2.11-0.8.2.1-sources.jar:bin/../libs/kafka_2.11-0.8.2.1-test.jar:bin/../libs/kafka-clients-0.8.2.1.jar:bin/../libs/log4j-1.2.16.jar:bin/../libs/lz4-1.2.0.jar:bin/../libs/metrics-core-2.2.0.jar:bin/../libs/scala-library-2.11.5.jar:bin/../libs/scala-parser-combinators_2.11-1.0.2.jar:bin/../libs/scala-xml_2.11-1.0.2.jar:bin/../libs/slf4j-api-1.7.6.jar:bin/../libs/slf4j-log4j12-1.6.1.jar:bin/../libs/snappy-java-1.1.1.6.jar:bin/../libs/zkclient-0.3.jar:bin/../libs/zookeeper-3.4.6.jar:bin/../core/build/libs/kafka_2.10*.jar kafka.admin.ShutdownBroker --zookeeper 172.19.41.48:2181,172.19.41.50:2181,172.19.41.52:2181,172.19.41.55:2181,172.19.41.57:2181/huadong/kafka --broker 5 --num.retries 3 --retry.interval.ms 600
Exception in thread "main" java.lang.NoClassDefFoundError: kafka/admin/ShutdownBroker
Caused by: java.lang.ClassNotFoundException: kafka.admin.ShutdownBroker
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: kafka.admin.ShutdownBroker. Program will exit.
CLASSPATH:
[admin@A06-R12-302F0402-I36-59 kafka_2.11-0.9.0.1]$ echo $CLASSPATH
.:/export/servers/jdk1.7.0_71/lib/dt.jar:/export/servers/jdk1.7.0_71/lib/tools.jar:/export/servers/kafka_2.11-0.9.0.1/libs/*

The Kafka developers removed the helper tool that initiated a graceful broker shutdown, per the discussion in ticket KAFKA-1298 (removal commit, documentation page diff).
The only supported way to gracefully shut down a broker now is to send a SIGTERM signal to the broker process:
this initiates syncing of the logs to disk and starts re-election of new leaders for the partitions the broker was leading.
The easiest way to stop a broker gracefully now is the kafka-server-stop.sh script shipped with the Kafka distribution.
Configuration options that affect this behavior (a short usage sketch follows the list):
controlled.shutdown.enable
controlled.shutdown.max.retries
controlled.shutdown.retry.backoff.ms
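For illustration, a minimal sketch of both shutdown paths and the relevant server.properties entries (the values shown are the usual defaults; verify them against your broker version):

# Option 1: the script shipped with Kafka, which finds the broker process and sends it SIGTERM
bin/kafka-server-stop.sh

# Option 2: send SIGTERM by hand (the grep pattern mirrors what kafka-server-stop.sh does)
kill -s TERM $(ps ax | grep -i 'kafka.Kafka' | grep java | grep -v grep | awk '{print $1}')

# server.properties entries controlling the controlled-shutdown behavior:
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000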

Related

Running Kafka ACL Commands fail

I am trying to run Kafka ACL commands (add, delete, and list). An example command is:
../bin/kafka-acls.sh --authorizer-properties
zookeeper.connect=localhost:2181 --add --allow-principal User:demouser
--operation Create --operation Describe --topic demo-topic
But I always get the error:
Error while executing ACL command: Exception while loading Zookeeper JAAS login context 'Client'
org.apache.kafka.common.KafkaException: Exception while loading Zookeeper JAAS login context 'Client'
at org.apache.kafka.common.security.JaasUtils.isZkSecurityEnabled(JaasUtils.java:45)
at kafka.admin.AclCommand$AuthorizerService.withAuthorizer(AclCommand.scala:197)
at kafka.admin.AclCommand$AuthorizerService.addAcls(AclCommand.scala:221)
at kafka.admin.AclCommand$.main(AclCommand.scala:70)
at kafka.admin.AclCommand.main(AclCommand.scala)
Caused by: java.lang.SecurityException: java.io.IOException: /remote/sde108/kafka/kafka/config/config.conf (No such file or directory)
at sun.security.provider.ConfigFile$Spi.<init>(ConfigFile.java:137)
at sun.security.provider.ConfigFile.<init>(ConfigFile.java:102)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at javax.security.auth.login.Configuration$2.run(Configuration.java:255)
at javax.security.auth.login.Configuration$2.run(Configuration.java:247)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.Configuration.getConfiguration(Configuration.java:246)
at org.apache.kafka.common.security.JaasUtils.isZkSecurityEnabled(JaasUtils.java:42)
... 4 more
Caused by: java.io.IOException: /remote/sde108/kafka/kafka/config/config.conf (No such file or directory)
at sun.security.provider.ConfigFile$Spi.ioException(ConfigFile.java:666)
at sun.security.provider.ConfigFile$Spi.init(ConfigFile.java:262)
at sun.security.provider.ConfigFile$Spi.<init>(ConfigFile.java:135)
... 15 more
Based on my search, there is no file called config.conf in the Kafka or ZooKeeper installation. Has anyone encountered a similar problem and knows how to fix it?
Moving my comment to an answer.
Based on the error, a JAAS file has been loaded from somewhere, most likely through an environment variable. You can look for it with env | grep KAFKA.
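A minimal sketch of that check, assuming the JAAS file was injected through KAFKA_OPTS (both the variable and the file path are illustrative):

# Look for a JAAS config coming in from the environment
env | grep -i kafka
# If KAFKA_OPTS (or a similar variable) carries -Djava.security.auth.login.config=...,
# either clear the stale setting:
unset KAFKA_OPTS
# ...or repoint it at a JAAS file that actually exists:
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/existing/jaas.conf"

With the stale setting removed, kafka-acls.sh should no longer try to load the missing config.conf.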

Kafka not working after consumer has started to consume data

I am new to Kafka and installed it on Windows 10 following the steps in https://kafka.apache.org/quickstart
In step 5, after starting the consumer, I get the following error when I run this command:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
[2019-03-18 19:09:44,905] ERROR Error while loading log dir C:\tmp\kafka-logs (kafka.log.LogManager)
java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(Unknown Source)
at kafka.log.AbstractIndex.<init>(AbstractIndex.scala:126)
at kafka.log.OffsetIndex.<init>(OffsetIndex.scala:54)
at kafka.log.LogSegment$.open(LogSegment.scala:634)
at kafka.log.Log$$anonfun$kafka$log$Log$$loadSegmentFiles$3.apply(Log.scala:434)
at kafka.log.Log$$anonfun$kafka$log$Log$$loadSegmentFiles$3.apply(Log.scala:421)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
After this error, the same error keeps appearing even if I restart the Kafka server.
Another error occurs when I list or create topics:
bin/kafka-topics.sh --list --zookeeper localhost:2181
log4j:ERROR Could not read configuration file from URL [file:/c/Users/sboyapal/Projects/Polaris/kafka_2.11-2.1.0/bin/../config/tools-log4j.properties].
java.io.FileNotFoundException: \c\Users\sboyapal\Projects\Polaris\kafka_2.11-2.1.0\bin\..\config\tools-log4j.properties (The system cannot find the path specified)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(Unknown Source)
at java.io.FileInputStream.<init>(Unknown Source)
at java.io.FileInputStream.<init>(Unknown Source)
at sun.net.www.protocol.file.FileURLConnection.connect(Unknown Source)
at sun.net.www.protocol.file.FileURLConnection.getInputStream(Unknown Source)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
at com.typesafe.scalalogging.Logger$.apply(Logger.scala:48)
at kafka.utils.Log4jControllerRegistration$.<init>(Logging.scala:25)
at kafka.utils.Log4jControllerRegistration$.<clinit>(Logging.scala)
at kafka.utils.Logging$class.$init$(Logging.scala:47)
at kafka.admin.TopicCommand$.<init>(TopicCommand.scala:40)
at kafka.admin.TopicCommand$.<clinit>(TopicCommand.scala)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
log4j:ERROR Ignoring configuration file [file:/c/Users/sboyapal/Projects/Polaris/kafka_2.11-2.1.0/bin/../config/tools-log4j.properties].
log4j:WARN No appenders could be found for logger (kafka.utils.Log4jControllerRegistration$).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
test
The only way I found to fix the issue is to delete the C:\tmp\kafka-logs directory. After that, start the Kafka server and follow the same steps again.
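For example, from Git Bash (using the same /c/... path mapping that appears in the log4j error above):

# stop the broker first, then remove the corrupted log directory
rm -rf /c/tmp/kafka-logs
# restart the broker and retry the consumer
bin/kafka-server-start.sh config/server.properties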

kafka crashing when consuming messages

Kafka used to work perfectly on my own computer. I'm working on another computer now, where it says:
ERROR Error while creating log for __consumer_offsets-41 in dir C:\tmp\kafka-logs (kafka.server.LogDirFailureChannel)
java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:940)
at kafka.log.AbstractIndex.<init>(AbstractIndex.scala:126)
at kafka.log.TimeIndex.<init>(TimeIndex.scala:54)
at kafka.log.LogSegment$.open(LogSegment.scala:635)
at kafka.log.Log.loadSegments(Log.scala:573)
at kafka.log.Log.<init>(Log.scala:290)
at kafka.log.Log$.apply(Log.scala:2141)
at kafka.log.LogManager.$anonfun$getOrCreateLog$1(LogManager.scala:701)
at scala.Option.getOrElse(Option.scala:121)
at kafka.log.LogManager.getOrCreateLog(LogManager.scala:659)
at kafka.cluster.Partition.$anonfun$getOrCreateReplica$1(Partition.scala:199)
at kafka.utils.Pool$$anon$2.apply(Pool.scala:61)
at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660)
at kafka.utils.Pool.getAndMaybePut(Pool.scala:60)
at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:195)
at kafka.cluster.Partition.$anonfun$makeLeader$3(Partition.scala:373)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
at scala.collection.Iterator.foreach(Iterator.scala:937)
at scala.collection.Iterator.foreach$(Iterator.scala:937)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
at scala.collection.IterableLike.foreach(IterableLike.scala:70)
at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike.map(TraversableLike.scala:233)
at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at kafka.cluster.Partition.$anonfun$makeLeader$1(Partition.scala:373)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:259)
at kafka.cluster.Partition.makeLeader(Partition.scala:367)
at kafka.server.ReplicaManager.$anonfun$makeLeaders$5(ReplicaManager.scala:1170)
at scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:145)
at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:235)
at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:228)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:145)
at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:1168)
at kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:1080)
at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:185)
at kafka.server.KafkaApis.handle(KafkaApis.scala:110)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:937)
... 41 more
[2019-03-25 14:55:00,296] INFO [ReplicaManager broker=0] Stopping serving replicas in dir C:\tmp\kafka-logs (kafka.server.ReplicaManager)
[2019-03-25 14:55:00,296] ERROR [ReplicaManager broker=0] Error while making broker the leader for partition Topic: __consumer_offsets; Partition: 41; Leader: None; AllReplicas: ; InSyncReplicas: in dir None (kafka.server.ReplicaManager)
This error appears whenever I try to consume a topic by issuing the following command:
bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic UpdateObserver --from-beginning
My Java version is:
java version "1.8.0_91"
Java(TM) SE Runtime Environment (build 1.8.0_91-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.91-b15, mixed mode)
PS: deleting the tmp directory doesn't solve the problem; it just lets me launch Kafka again, and as soon as I try to consume from a given topic it crashes.
I faced the same issue. Upgrading Java to version 16.0.1 solved it for me.
Also set JAVA_HOME to the Java 16 installation directory, without the bin folder, for example C:\Program Files\Java\jdk-16.0.1
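For example, from a Command Prompt (the install path is the one from this answer; adjust it to wherever your JDK actually lives):

setx JAVA_HOME "C:\Program Files\Java\jdk-16.0.1"
rem open a new terminal so the change takes effect, then verify:
echo %JAVA_HOME%
"%JAVA_HOME%\bin\java" -version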

Failed to list kafka topics in kubernetes after adding jmx_javaagent

I have a running Kafka pod in my Kubernetes cluster. To get custom metrics in Prometheus format, I configured jmx_prometheus_javaagent and assigned it port 2255, and I can list the metrics at localhost:2255/metrics.
The issue is that, after this, I am no longer able to list the Kafka topics. I get the following error:
bash-4.3# /opt/kafka/bin/kafka-topics.sh --list --zookeeper dz-zookeeper:2181
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at sun.net.httpserver.ServerImpl.bind(ServerImpl.java:133)
at sun.net.httpserver.HttpServerImpl.bind(HttpServerImpl.java:54)
at io.prometheus.jmx.shaded.io.prometheus.client.exporter.HTTPServer.<init>(HTTPServer.java:145)
at io.prometheus.jmx.shaded.io.prometheus.jmx.JavaAgent.premain(JavaAgent.java:49)
FATAL ERROR in native method: processing of -javaagent failed
Aborted (core dumped)
Any idea how to solve this error?
You've set it up so that the Java agent is loaded not just for the Kafka server but also for all of the command-line tools. You should change your configuration so it is only loaded for the server.
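For example, if the agent is injected through KAFKA_OPTS in the pod spec, every tool that goes through kafka-run-class.sh inherits it and tries to bind port 2255 a second time. A sketch of scoping it to the broker only (the agent jar and config paths are illustrative):

# set KAFKA_OPTS only on the broker's start command, not container-wide
KAFKA_OPTS="-javaagent:/opt/jmx_prometheus_javaagent.jar=2255:/opt/jmx-config.yaml" \
  /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties

# CLI tools then run without the agent and no longer fight over the port
/opt/kafka/bin/kafka-topics.sh --list --zookeeper dz-zookeeper:2181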
Alternatively, get the Kafka broker container ID using docker ps, then run the Kafka command against that container from the command line this way:
docker exec -it CONTAINERID /bin/bash -c "KAFKA_OPTS= && kafka-topics --zookeeper 127.0.0.1:2181 --list"
Change the CONTAINERID and the ZooKeeper address to fit your environment.

Multiple Brokers Kafka 0.7

I am trying to start multiple brokers using Kafka 0.7.2. I get the following error:
Exception in thread "kafka-acceptor" java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
at kafka.network.Acceptor.run(SocketServer.scala:128)
at java.lang.Thread.run(Thread.java:724)
I created two different config files for the two brokers and ran them with the commands:
env JMX_PORT=9999 bin/kafka-server-start.sh config/server.properties
env JMX_PORT=10000 bin/kafka-server-start.sh config/server1.properties
I did the same with 0.8, and it worked fine. Am I missing something here?
Kill the process holding the port using the following command:
sudo fuser -k 2181/tcp
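If you'd rather see what is actually holding the port before killing anything (2181 is the port from this answer; substitute the broker or JMX port as needed):

sudo fuser 2181/tcp        # prints the PID(s) bound to the port
sudo lsof -i :2181         # shows the owning process in more detail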