Multiple Brokers Kafka 0.7

I am trying to start multiple brokers using Kafka 0.7.2. I get the following error:
Exception in thread "kafka-acceptor" java.net.BindException:
Address already in use at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
at kafka.network.Acceptor.run(SocketServer.scala:128)
at java.lang.Thread.run(Thread.java:724)
I created two different config files for the two brokers and ran them with the commands:
env JMX_PORT=9999 bin/kafka-server-start.sh config/server.properties
env JMX_PORT=10000 bin/kafka-server-start.sh config/server1.properties
I did the same with 0.8, and it worked fine. Am I missing something here?

Kill the process holding the port using the following command:
sudo fuser -k 2181/tcp
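If the port really is free, the usual cause when running two brokers on one machine is that both config files share the same broker port. A sketch of the overrides the second broker's config would need (property names per Kafka 0.7; the values are example assumptions):

```properties
# config/server1.properties — second broker on the same host (example values)
brokerid=1                 # must be unique per broker
port=9093                  # the first broker keeps the default 9092
log.dir=/tmp/kafka-logs-1  # separate log directory per broker
```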

Related

Authenticate Kafka CLI with Kafka running on Confluent

I have a Kafka cluster running on Confluent Cloud, but I'm not able to reset the commit offset from the UI. Hence, I'm trying to do it via Kafka's CLI as below:
kafka-consumer-groups --bootstrap-server=my_cluster.confluent.cloud:9092 --list
However, I'm bumping into the below error, and I think it has to do with how I authenticate.
Error: Executing consumer group command failed due to org.apache.kafka.common.KafkaException: Failed to find brokers to send ListGroups
java.util.concurrent.ExecutionException: org.apache.kafka.common.KafkaException: Failed to find brokers to send ListGroups
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396)
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2073)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
at kafka.admin.ConsumerGroupCommand$ConsumerGroupService.listConsumerGroups(ConsumerGroupCommand.scala:203)
at kafka.admin.ConsumerGroupCommand$ConsumerGroupService.listGroups(ConsumerGroupCommand.scala:198)
at kafka.admin.ConsumerGroupCommand$.run(ConsumerGroupCommand.scala:70)
at kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:59)
at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)
Caused by: org.apache.kafka.common.KafkaException: Failed to find brokers to send ListGroups
at org.apache.kafka.clients.admin.KafkaAdminClient$24.handleFailure(KafkaAdminClient.java:3368)
at org.apache.kafka.clients.admin.KafkaAdminClient$Call.handleTimeoutFailure(KafkaAdminClient.java:838)
at org.apache.kafka.clients.admin.KafkaAdminClient$Call.fail(KafkaAdminClient.java:804)
at org.apache.kafka.clients.admin.KafkaAdminClient$TimeoutProcessor.handleTimeouts(KafkaAdminClient.java:934)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.timeoutPendingCalls(KafkaAdminClient.java:1013)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1367)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1331)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: findAllBrokers
Here is an example of listing consumer groups:
kafka-consumer-groups --bootstrap-server <ccloud kafka>:9092 --command-config consumer.properties --list
consumer.properties
bootstrap.servers=<ccloud kafka>:9092
ssl.endpoint.identification.algorithm=https
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<KEY>" password="<SECRET>";
You'll want to use the --command-config option to pass a properties file that contains your Confluent Cloud credentials.

Running Kafka ACL Commands fail

I am trying to run Kafka ACL commands for adding, deleting, and listing ACLs. An example command is:
../bin/kafka-acls.sh --authorizer-properties
zookeeper.connect=localhost:2181 --add --allow-principal User:demouser
--operation Create --operation Describe --topic demo-topic
But I always get the error:
Error while executing ACL command: Exception while loading Zookeeper JAAS login context 'Client'
org.apache.kafka.common.KafkaException: Exception while loading Zookeeper JAAS login context 'Client'
at org.apache.kafka.common.security.JaasUtils.isZkSecurityEnabled(JaasUtils.java:45)
at kafka.admin.AclCommand$AuthorizerService.withAuthorizer(AclCommand.scala:197)
at kafka.admin.AclCommand$AuthorizerService.addAcls(AclCommand.scala:221)
at kafka.admin.AclCommand$.main(AclCommand.scala:70)
at kafka.admin.AclCommand.main(AclCommand.scala)
Caused by: java.lang.SecurityException: java.io.IOException: /remote/sde108/kafka/kafka/config/config.conf (No such file or directory)
at sun.security.provider.ConfigFile$Spi.<init>(ConfigFile.java:137)
at sun.security.provider.ConfigFile.<init>(ConfigFile.java:102)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at javax.security.auth.login.Configuration$2.run(Configuration.java:255)
at javax.security.auth.login.Configuration$2.run(Configuration.java:247)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.Configuration.getConfiguration(Configuration.java:246)
at org.apache.kafka.common.security.JaasUtils.isZkSecurityEnabled(JaasUtils.java:42)
... 4 more
Caused by: java.io.IOException: /remote/sde108/kafka/kafka/config/config.conf (No such file or directory)
at sun.security.provider.ConfigFile$Spi.ioException(ConfigFile.java:666)
at sun.security.provider.ConfigFile$Spi.init(ConfigFile.java:262)
at sun.security.provider.ConfigFile$Spi.<init>(ConfigFile.java:135)
... 15 more
Based on my search, there is no file called config.conf in the Kafka or ZooKeeper installation. Has anyone encountered a similar problem and knows how to fix it?
Moving my comment to an answer.
Based on the error, a JAAS config file is being loaded from somewhere.
It could be set through an environment variable; you can check with env | grep KAFKA
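To make the check concrete, here is a sketch (the property value shown in the comment is hypothetical, modeled on the path from the stack trace):

```shell
# Kafka's CLI scripts pass KAFKA_OPTS straight to the JVM, so a setting like
#   KAFKA_OPTS=-Djava.security.auth.login.config=/path/to/config.conf
# makes every tool try to load that JAAS file. Look for it:
env | grep -i 'login.config' || true

# If the referenced file does not exist, either create a valid JAAS file
# at that path or clear the variable for this session:
unset KAFKA_OPTS
```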

Failed to list Kafka topics in Kubernetes after adding jmx_prometheus_javaagent

I have a running Kafka pod in my Kubernetes cluster. To get custom metrics in Prometheus format, I configured jmx_prometheus_javaagent and assigned it port 2255, and I am able to list the metrics at localhost:2255/metrics.
The issue is that after this, I am no longer able to list the Kafka topics. I get the following error:
bash-4.3# /opt/kafka/bin/kafka-topics.sh --list --zookeeper dz-zookeeper:2181
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at sun.net.httpserver.ServerImpl.bind(ServerImpl.java:133)
at sun.net.httpserver.HttpServerImpl.bind(HttpServerImpl.java:54)
at io.prometheus.jmx.shaded.io.prometheus.client.exporter.HTTPServer.<init>(HTTPServer.java:145)
at io.prometheus.jmx.shaded.io.prometheus.jmx.JavaAgent.premain(JavaAgent.java:49)
FATAL ERROR in native method: processing of -javaagent failed
Aborted (core dumped)
Any idea how to solve this error?
You've set it up so that the Java agent is loaded not just for the Kafka server, but also for all of the command-line tools. You should change your configuration so it is only loaded for the server.
Get the kafka broker container ID by using "docker ps"
Then run the kafka command against that container from the command line this way:
docker exec -it CONTAINERID /bin/bash -c "KAFKA_OPTS= && kafka-topics --zookeeper 127.0.0.1:2181 --list"
Change CONTAINERID and the ZooKeeper address to fit your environment.
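The same workaround applies inside Kubernetes (a sketch; the pod name is a placeholder and the script path is taken from the question):

```shell
# Clear KAFKA_OPTS for the CLI tool so it does not load the javaagent again
# (port 2255 is already bound by the running broker's exporter)
kubectl exec -it <kafka-pod> -- /bin/bash -c \
  "KAFKA_OPTS= /opt/kafka/bin/kafka-topics.sh --list --zookeeper dz-zookeeper:2181"
```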

Error starting Kafka kafka_2.10-0.10.2.1

bin/zookeeper-shell.sh config/zookeeper.properties
Connecting to config/zookeeper.properties
Exception in thread "main" java.net.UnknownHostException: config: Name or service not known
To run the ZooKeeper shell you need to pass the host:port of a running ZooKeeper server as the parameter, not the properties file:
./bin/zookeeper-shell.sh localhost:2181
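Once connected you can run shell commands; a one-shot sanity check that brokers are registered might look like this (assuming the brokers register under the default /brokers/ids path):

```shell
./bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids
```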

NoClassDefFoundError: kafka/admin/ShutdownBroker

When I run the Kafka broker shutdown from the provided shell script, I get a NoClassDefFoundError exception and I don't know how to resolve it.
Please help.
command:
bin/kafka-run-class.sh kafka.admin.ShutdownBroker --zookeeper 172.19.41.48:2181,172.19.41.50:2181,172.19.41.52:2181,172.19.41.55:2181,172.19.41.57:2181/huadong/kafka --broker 5 --num.retries 3 --retry.interval.ms 600
exception:
kafka.admin.ShutdownBroker --zookeeper 172.19.41.48:2181,172.19.41.50:2181,172.19.41.52:2181,172.19.41.55:2181,172.19.41.57:2181/huadong/kafka --broker 5 --num.retries 3 --retry.interval.ms 600
/export/servers/jdk1.6.0_25/bin/java -Xmx256M -server -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=bin/../logs -Dlog4j.configuration=file:bin/../config/tools-log4j.properties -cp .:/export/servers/jdk1.6.0_25/lib/dt.jar:/export/servers/jdk1.6.0_25/lib/tools.jar:bin/../core/build/dependant-libs-2.10.4*/*.jar:bin/../examples/build/libs//kafka-examples*.jar:bin/../contrib/hadoop-consumer/build/libs//kafka-hadoop-consumer*.jar:bin/../contrib/hadoop-producer/build/libs//kafka-hadoop-producer*.jar:bin/../clients/build/libs/kafka-clients*.jar:bin/../libs/jopt-simple-3.2.jar:bin/../libs/kafka_2.11-0.8.2.1.jar:bin/../libs/kafka_2.11-0.8.2.1-javadoc.jar:bin/../libs/kafka_2.11-0.8.2.1-scaladoc.jar:bin/../libs/kafka_2.11-0.8.2.1-sources.jar:bin/../libs/kafka_2.11-0.8.2.1-test.jar:bin/../libs/kafka-clients-0.8.2.1.jar:bin/../libs/log4j-1.2.16.jar:bin/../libs/lz4-1.2.0.jar:bin/../libs/metrics-core-2.2.0.jar:bin/../libs/scala-library-2.11.5.jar:bin/../libs/scala-parser-combinators_2.11-1.0.2.jar:bin/../libs/scala-xml_2.11-1.0.2.jar:bin/../libs/slf4j-api-1.7.6.jar:bin/../libs/slf4j-log4j12-1.6.1.jar:bin/../libs/snappy-java-1.1.1.6.jar:bin/../libs/zkclient-0.3.jar:bin/../libs/zookeeper-3.4.6.jar:bin/../core/build/libs/kafka_2.10*.jar kafka.admin.ShutdownBroker --zookeeper 172.19.41.48:2181,172.19.41.50:2181,172.19.41.52:2181,172.19.41.55:2181,172.19.41.57:2181/huadong/kafka --broker 5 --num.retries 3 --retry.interval.ms 600
Exception in thread "main" java.lang.NoClassDefFoundError: kafka/admin/ShutdownBroker
Caused by: java.lang.ClassNotFoundException: kafka.admin.ShutdownBroker
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: kafka.admin.ShutdownBroker. Program will exit.
CLASSPATH:
[admin#A06-R12-302F0402-I36-59 kafka_2.11-0.9.0.1]$ echo $CLASSPATH
.:/export/servers/jdk1.7.0_71/lib/dt.jar:/export/servers/jdk1.7.0_71/lib/tools.jar:/export/servers/kafka_2.11-0.9.0.1/libs/*
The Kafka developers removed the helper tool for initiating a graceful broker shutdown, as per the discussion in ticket KAFKA-1298 (removal commit, documentation page diff).
Now the only supported way to gracefully shut down a broker is to send a SIGTERM signal to the broker process:
this initiates syncing the logs to disk and starts re-election of new partition leaders for the partitions the current broker was leading.
The easiest way to stop a broker gracefully is to use the kafka-server-stop.sh script provided as part of the Kafka distribution.
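The SIGTERM route can be sketched as follows (the process-matching pattern is an assumption; kafka-server-stop.sh does essentially the same thing):

```shell
# Find the broker JVM(s) and send SIGTERM to trigger controlled shutdown
PIDS=$(ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}')
kill -s TERM $PIDS
```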
Configuration options which affect this behavior:
controlled.shutdown.enable
controlled.shutdown.max.retries
controlled.shutdown.retry.backoff.ms
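For reference, a sketch of these options with their commonly cited defaults (verify against the broker configuration documentation for your version):

```properties
controlled.shutdown.enable=true           # enabled by default since 0.8.2
controlled.shutdown.max.retries=3         # attempts before falling back to an unclean shutdown
controlled.shutdown.retry.backoff.ms=5000 # wait between retries
```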