I am trying to test the Kafka consumer by consuming data from a topic on a remote Kafka cluster. I get the following error when I use kafka-console-consumer.sh:
ERROR Error processing message, terminating consumer process: (kafka.tools.ConsoleConsumer$)
java.lang.IllegalStateException: No entry found for connection 2147475658
at org.apache.kafka.clients.ClusterConnectionStates.nodeState(ClusterConnectionStates.java:330)
at org.apache.kafka.clients.ClusterConnectionStates.disconnected(ClusterConnectionStates.java:134)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:885)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:276)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.tryConnect(ConsumerNetworkClient.java:548)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:655)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:635)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:575)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:389)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:297)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:231)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:316)
at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1214)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1179)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1164)
at kafka.tools.ConsoleConsumer$ConsumerWrapper.receive(ConsoleConsumer.scala:436)
at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:104)
at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:76)
at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:54)
at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
Processed a total of 0 messages
Here is the command that I use:
./bin/kafka-console-consumer.sh --bootstrap-server SSL://{IP}:{PORT},SSL://{IP}:{PORT},SSL://{IP}:{PORT} --consumer.config ./config/consumer.properties --topic MYTOPIC --group MYGROUP
Here is the ./config/consumer.properties file:
bootstrap.servers=SSL://{IP}:{PORT},SSL://{IP}:{PORT},SSL://{IP}:{PORT}
# consumer group id
group.id=MYGROUP
# What to do when there is no initial offset in Kafka or if the current
# offset does not exist any more on the server: latest, earliest, none
auto.offset.reset=earliest
#### Security
security.protocol=SSL
ssl.key.password=test1234
ssl.keystore.location=/opt/kafka/config/certs/keystore.jks
ssl.keystore.password=test1234
ssl.truststore.location=/opt/kafka/config/certs/truststore.jks
ssl.truststore.password=test1234
Do you have any idea what the problem is?
I have found the problem. It was a DNS problem in the end. I was reaching the Kafka brokers by their IP addresses, but the brokers reply with their DNS names. After making those DNS names resolvable on the consumer side, it started working again.
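One way to see which names the brokers actually advertise is kafkacat's metadata listing; a sketch (the broker address, SSL options, and the sample metadata line are illustrative, not from the original post):

```shell
# kafkacat (now kcat) prints cluster metadata with -L, including the
# hostnames the brokers advertise, e.g.:
#   kafkacat -b 10.1.2.3:9093 -X security.protocol=ssl ... -L
# A broker line in that metadata looks roughly like this (sample):
meta='  broker 1 at kafka-broker-1.internal:9093'
# Extract the advertised hostname, e.g. to map it in /etc/hosts on the consumer side:
host=$(echo "$meta" | sed -E 's/.* at ([^:]+):.*/\1/')
echo "$host"   # → kafka-broker-1.internal
```

If the printed hostname is not resolvable from the consumer machine, that is exactly the situation described above.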
I had this problem (with consumers and producers) when running Kafka and Zookeeper as Docker containers.
The solution was to set advertised.listeners in the config/server.properties file of the Kafka brokers, so that it contains the IP address of the container, e.g.
advertised.listeners=PLAINTEXT://172.15.0.8:9092
See https://github.com/maxant/kafkaplayground/blob/master/start-kafka.sh for an example of a script used to start Kafka inside the container after setting up the properties file correctly.
It seems the Kafka cluster's listeners property is not configured in server.properties.
On the remote Kafka cluster, this property should be uncommented and set with the proper host name:
listeners=PLAINTEXT://0.0.0.0:9092
In my case I was receiving that error while trying to connect to my Kafka container; I had to pass the following:
-e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092
Hope it helps someone.
Are you sure the remote Kafka is running? I would suggest running nmap -p PORT HOST to verify that the port is open (unless it is configured differently, the port should be 9092). If that is OK, you can use kafkacat, which makes things easier.
Create a consumer: kafkacat -b HOST:PORT -t YOUR_TOPIC -C -o beginning
Create a producer: kafkacat -b HOST:PORT -t YOUR_TOPIC -P
When I execute the command below in Kafka
./kafka-consumer-groups.sh --bootstrap-server sample-address:9092 --list
I'm facing the following error:
Error: Executing consumer group command failed due to org.apache.kafka.common.KafkaException: Failed to find brokers to send ListGroups
java.util.concurrent.ExecutionException: org.apache.kafka.common.KafkaException: Failed to find brokers to send ListGroups
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:262)
at kafka.admin.ConsumerGroupCommand$ConsumerGroupService.listGroups(ConsumerGroupCommand.scala:132)
at kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:58)
at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)
Caused by: org.apache.kafka.common.KafkaException: Failed to find brokers to send ListGroups
at org.apache.kafka.clients.admin.KafkaAdminClient$22.handleFailure(KafkaAdminClient.java:2610)
at org.apache.kafka.clients.admin.KafkaAdminClient$Call.fail(KafkaAdminClient.java:614)
at org.apache.kafka.clients.admin.KafkaAdminClient$TimeoutProcessor.handleTimeouts(KafkaAdminClient.java:730)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.timeoutPendingCalls(KafkaAdminClient.java:798)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1092)
at java.base/java.lang.Thread.run(Thread.java:835)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
In my case, I noticed that we were using SSL:
listeners=SSL://sample-address:9092
so I figured I needed to pass the SSL properties in the command, and it worked:
bin/kafka-consumer-groups.sh --bootstrap-server sample-address:9092 --list --command-config /kafka/config/client-ssl.properties
client-ssl.properties
bootstrap.servers=sample-address:9092
security.protocol=SSL
ssl.truststore.location=/keys/truststore.jks
ssl.truststore.password=*****
ssl.keystore.location=/keys/keystore.jks
ssl.keystore.password=*****
After a lot of debugging, I replicated this scenario, and the solution below worked for me.
I made changes in server.properties (which is mainly responsible for starting the Kafka server): in the listeners key, pass the IP address instead of "localhost".
Detailed steps below.
These are the configurations you have to make sure of while running the command.
Check that a correct IP address and port combination is passed in the command:
bin/kafka-consumer-groups.sh --bootstrap-server 192.168.X.X:4848 --list
Most important: configure listeners with the IP address in server.properties correctly.
listeners=PLAINTEXT://192.168.X.X:4848 (working)
listeners=PLAINTEXT://localhost:4848 (not working)
After the change, restart the Kafka server.
Note: this issue generally comes up in VirtualBox, after changing the network setting from NAT to bridged.
I ran into the "Failed to find brokers to send ListGroups" problem, but with the "Timed out waiting to send the call" exception.
In this case, the problem was that the bootstrap server was not reachable from the machine where I was running the kafka-consumer-groups CLI tool.
Since we have VPC peering between our Kubernetes cluster in GCP and Confluent Cloud, I resolved the issue by executing the Confluent CLI tool inside the k8s cluster.
I am a beginner with Kafka and the Confluent package. I want to start multiple brokers so as to consume the topic.
It can be done via this setting:
{'bootstrap.servers': 'host:port,...'}
This setting can be defined in the server config file, or in the script as well.
But how shall I run those brokers? If I just add multiple endpoints to the bootstrap servers, I get this error:
java.lang.IllegalArgumentException: requirement failed: Each listener must have a different name, listeners: PLAINTEXT://:9092, PLAINTEXT://:9093
cp config/server.properties config/server-1.properties
cp config/server.properties config/server-2.properties
config/server-1.properties:
broker.id=1
listeners=PLAINTEXT://:9093
log.dirs=/tmp/kafka-logs-1
config/server-2.properties:
broker.id=2
listeners=PLAINTEXT://:9094
log.dirs=/tmp/kafka-logs-2
Reference: kafka_quickstart_multibroker
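The two override files above can also be generated with a short script; this is only a sketch mirroring the quickstart values (broker ids 1 and 2, ports 9093 and 9094), writing to the current directory rather than config/:

```shell
# Generate per-broker override files matching the quickstart values.
# Each broker is then started with:
#   bin/kafka-server-start.sh config/server-N.properties
for i in 1 2; do
  printf 'broker.id=%s\nlisteners=PLAINTEXT://:%s\nlog.dirs=/tmp/kafka-logs-%s\n' \
    "$i" "$((9092 + i))" "$i" > "server-$i.properties"
done
cat server-1.properties
# → broker.id=1
# → listeners=PLAINTEXT://:9093
# → log.dirs=/tmp/kafka-logs-1
```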
Done.
I had actually specified the same port for both, and hence the issue.
With the brokers set up on different ports, it works fine even if one broker goes down.
Everybody, there is a virtual server in the local area network whose IP is 192.168.18.230, and my machine's IP is 192.168.0.175.
Today, I tried to use my machine (192.168.0.175) to send some messages to my virtual server (192.168.18.230) with the Kafka console producer:
$ bin/kafka-console-producer.sh --broker-list 192.168.18.230:9092 --topic test
but something went wrong. The description of the problem is:
[2017-04-10 17:25:40,396] ERROR Error when sending message to topic test with key: null, value: 6 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for test-0 due to 1568 ms has passed since batch creation plus linger time
But when I use the kafka-topics script to list topics, it works:
$ bin/kafka-topics.sh --list --zookeeper 192.168.18.230:2181
This problem has confused me for a very long time; can anybody help me solve it?
If you have a ZooKeeper instance running, you can of course ask for the list of topics. However, it seems that you have no Kafka broker available.
You may have ZooKeeper running, but not Kafka.
Your Kafka producer might be running on a machine which cannot access your virtual machine where your Kafka broker is running.
Also, not only should the broker port be open, it must also be answered by the broker, i.e. the (advertised) listeners of your Kafka broker must contain your virtual machine's IP (an IP accessible from where your Kafka producer is running, because a VM can have multiple IPs and there is no rule that all IPs will be accessible).
For example, if your virtual machine has two IPs, 1.2.3.4 and 4.3.2.1, and your producer on another machine points to 1.2.3.4, you must first be able to ping and telnet to this IP.
Next, you must have this IP, 1.2.3.4, in the advertised listeners of your Kafka broker:
advertised.listeners=PLAINTEXT://1.2.3.4:9092
You should then set this IP as bootstrap.servers in your Kafka producer.
You should also ensure that the port is not open just to localhost or 127.0.0.1. For example, when you run netstat, it should not show just localhost:9092 or 127.0.0.1:9092; it should be listening on the wildcard address 0.0.0.0, or on your IP, 1.2.3.4:9092.
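That netstat check can be sketched as a small filter; the sample line below is illustrative (on the broker you would pipe real netstat -tln output through the same logic):

```shell
# A broker bound only to the loopback interface is unreachable from other machines.
line='tcp 0 0 127.0.0.1:9092 0.0.0.0:* LISTEN'   # sample netstat line (illustrative)
case "$line" in
  *127.0.0.1:9092*|*localhost:9092*) echo "loopback only - remote clients cannot connect" ;;
  *0.0.0.0:9092*)                    echo "all interfaces - reachable remotely" ;;
  *)                                 echo "bound to a specific IP" ;;
esac
# → loopback only - remote clients cannot connect
```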
I want to know the list of taken broker ids in a Kafka cluster. For example, in a cluster with 10 nodes, if I create a topic with 10 partitions (or more), I can see from the output of a describe-topic command the brokers to which it has been assigned:
./bin/kafka-topics --describe --zookeeper <zkconnect>:2181 --topic rbtest3
Can I collect this information without creating a topic?
You can get the list of used broker ids using the ZooKeeper CLI:
zookeeper-3.4.8$ ./bin/zkCli.sh -server zookeeper-1:2181 ls /brokers/ids | tail -1
[0]
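The last line of that zkCli output is a JSON-style array of the taken ids. A small sketch for turning such a line into one id per line (the three-broker sample array is illustrative):

```shell
ids='[0, 1, 2]'   # sample `ls /brokers/ids` output (illustrative)
# Strip the brackets and spaces, then split on commas:
echo "$ids" | tr -d '[] ' | tr ',' '\n'
# → 0
# → 1
# → 2
```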
You can also use the zookeeper-shell.sh script that ships with the Kafka distribution, like this:
linux$ ./zookeeper-shell.sh zookeeper-IPaddress:2181 <<< "ls /brokers/ids"
Just add the IP address of any of your Zookeeper servers (and/or change the port if necessary, for example when running multiple Zookeeper instances on the same server).
This alternative can be useful when, for example, you find yourself inside a container (Docker, LXC, etc.) that is exclusively running a Kafka client; but Zookeeper itself is somewhere else (say, in a different container).
I hope it helps. =:)
# kafka broker id
cat $KAFKA_HOME/logs/meta.properties
If you want to know the broker ID of a specific broker, the easiest way I found is to look at its controller.log:
cat /var/log/kafka/controller.log
[2021-02-18 13:20:22,639] INFO [ControllerEventThread controllerId=1003] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
[2021-02-18 13:20:22,646] DEBUG [Controller id=1003] Broker 1002 has been elected as the controller, so stopping the election process. (kafka.controller.KafkaController)
controllerId=1003 ---> this is your brokerID (1003)
[substitute your path to the kafka logs, of course ...]
You can use kafka-manager, an open-source tool powered by Yahoo.
You can run the following command:
bin/kafka-run-class.sh kafka.tools.GetOffsetShell --topic=<your topic> --broker-list=<your broker list> --time=-2
This will list all of the brokers with their ids and the beginning offsets.
Using the Zookeeper CLI
sh /bin/zkCli.sh -server zookeeper-1:2181 ls /brokers/ids
Then, to get the details of a broker, you can use "get" with the ids output by the previous command:
sh /bin/zkCli.sh -server zookeeper-1:2181 get /brokers/ids/<broker-id>
I have two machines, localhost and 192.168.1.110, each running an independent single-machine Kafka.
(1)At localhost, I run:
➜ kafka_2.11-0.10.0.0 bin/kafka-console-producer.sh --broker-list 192.168.1.110:9092 --topic test
this is a message
[2016-08-24 18:15:27,441] ERROR Error when sending message to topic test with key: null, value: 2 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Batch containing 1 record(s) expired due to timeout while requesting metadata from brokers for test-0
Why can't the message be sent to the broker at 192.168.1.110?
Could I use the broker IP directly in the consumer or producer?
If I can only use a hostname, does this relate to advertised.host.name?
Then how do I set up advertised.host.name? Does this hostname have to be globally resolvable (could I use /etc/hosts to resolve it)?
(2)
I edited /etc/hosts to make localhost point to 192.168.1.110,
then I run:
➜ kafka_2.11-0.10.0.0 bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
and I could successfully send messages to 192.168.1.110 and consume them there.
(3) I edited /etc/hosts to make rrlocalhost point to 192.168.1.110,
then I run:
➜ kafka_2.11-0.10.0.0 bin/kafka-console-producer.sh --broker-list rrlocalhost:9092 --topic test
then I sent messages to rrlocalhost, and there was the same error as in (1).
Definitely you can use the IP address directly.
The broker config advertised.host.name is registered in ZooKeeper, and producers and consumers fetch it as cluster metadata. If you configure it with a local nickname, producers and consumers will have trouble communicating with the broker.
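As a sketch, the relevant server.properties entries on the 192.168.1.110 broker would look like this (advertised.host.name applies to older brokers such as the 0.10.0 used here; on newer broker versions advertised.listeners supersedes it):

```properties
# Advertise an address that the producer's machine can actually resolve and reach:
advertised.host.name=192.168.1.110
# On newer broker versions, the equivalent setting is:
# advertised.listeners=PLAINTEXT://192.168.1.110:9092
```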