Kafka consumer-groups script to list all consumer groups not working - apache-kafka

When I execute the command below in Kafka
./kafka-consumer-groups.sh --bootstrap-server sample-address:9092 --list
I'm facing the error below:
Error: Executing consumer group command failed due to org.apache.kafka.common.KafkaException: Failed to find brokers to send ListGroups
java.util.concurrent.ExecutionException: org.apache.kafka.common.KafkaException: Failed to find brokers to send ListGroups
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:262)
at kafka.admin.ConsumerGroupCommand$ConsumerGroupService.listGroups(ConsumerGroupCommand.scala:132)
at kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:58)
at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)
Caused by: org.apache.kafka.common.KafkaException: Failed to find brokers to send ListGroups
at org.apache.kafka.clients.admin.KafkaAdminClient$22.handleFailure(KafkaAdminClient.java:2610)
at org.apache.kafka.clients.admin.KafkaAdminClient$Call.fail(KafkaAdminClient.java:614)
at org.apache.kafka.clients.admin.KafkaAdminClient$TimeoutProcessor.handleTimeouts(KafkaAdminClient.java:730)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.timeoutPendingCalls(KafkaAdminClient.java:798)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1092)
at java.base/java.lang.Thread.run(Thread.java:835)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.

In my case I noticed that we were using SSL:
listeners=SSL://sample-address:9092
so I figured that I needed to pass the SSL properties to the command, and it worked:
bin/kafka-consumer-groups.sh --bootstrap-server sample-address:9092 --list --command-config /kafka/config/client-ssl.properties
client-ssl.properties:
bootstrap.servers=sample-address:9092
security.protocol=SSL
ssl.truststore.location=/keys/truststore.jks
ssl.truststore.password=*****
ssl.keystore.location=/keys/keystore.jks
ssl.keystore.password=*****
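Before rerunning the CLI, it can be worth sanity-checking the --command-config file itself; a minimal sketch in Python (function names are ours, and the parser only covers plain key=value lines, not escapes or continuations):

```python
import os

def load_props(path):
    """Parse a simple Java-style .properties file into a dict.

    Comments and blank lines are skipped; only plain key=value lines
    are handled.
    """
    props = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, sep, value = line.partition("=")
            if sep:
                props[key.strip()] = value.strip()
    return props

def missing_ssl_files(props):
    """Return the keystore/truststore paths referenced by the config
    that do not exist on disk."""
    keys = ("ssl.truststore.location", "ssl.keystore.location")
    return [props[k] for k in keys if k in props and not os.path.isfile(props[k])]
```

A missing store file is usually surfaced by the client as a configuration error rather than a timeout, but checking up front is cheaper than waiting on the CLI.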

After a lot of debugging, I replicated this scenario and the solution below worked for me.
I made a change in server.properties (the file mainly responsible for starting the Kafka server): pass the machine's IP address instead of "localhost" in the listeners key.
Find the detailed steps below:
These are the configurations you have to verify while running the command.
Check that the correct IP address and port combination is passed in the command:
bin/kafka-consumer-groups.sh --bootstrap-server 192.168.X.X:4848 --list
Most importantly, configure listeners with the IP address in server.properties correctly:
listeners=PLAINTEXT://192.168.X.X:4848 (working)
listeners=PLAINTEXT://localhost:4848 (not working)
After the change, restart the Kafka server.
Note: this issue typically appears in a VirtualBox VM, after changing the network setting from NAT to bridged.
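An alternative to hard-coding the VM's IP in listeners is to bind on all interfaces and advertise the routable address, which also survives the NAT-to-bridged switch mentioned above; a sketch using this answer's example address, not a verified config:

```properties
# server.properties (example IP/port from this answer)
listeners=PLAINTEXT://0.0.0.0:4848
advertised.listeners=PLAINTEXT://192.168.X.X:4848
```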

I ran into the "Failed to find brokers to send ListGroups" problem as well, but with the "Timed out waiting to send the call" exception.
In this case the problem was that the bootstrap server was not reachable from the machine where I was running the kafka-consumer-groups CLI tool.
Since we have a VPC peering between our Kubernetes cluster in GCP and Confluent Cloud, I resolved the issue by executing the confluent cli tool inside the k8s cluster.

Related

Trouble with Apache Kafka to Allow External Connections

I'm just having a difficult time with Kafka right now, but I feel like I'm close.
I have two VMs on FreeNAS running locally. Both Running Ubuntu 18.04 LTS.
VM Graylog: 192.168.1.25. Running Graylog Server. Working well retrieving rsyslogs and apache from itself.
VM Kafka: 192.168.1.16. Running Kafka.
My goal is to have VM Graylog pull logs from VM Kafka, via a Graylog Kafka UDP input. The secondary goal is to replicate this, except that the Kafka instance will sit on my VPS server feeding Apache logs from a website. Of course, I want to test this in a dev environment first.
I am able to have my VM Kafka server listen successfully with this command:
/opt/kafka_2.13-2.6.0/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic rsyslog_kafka --from-beginning
This is my 60-kafka.conf file:
module(load="omkafka")
template(name="json"
         type="list"
         option.json="on") {
    constant(value="{")
    constant(value="\"#timestamp\":\"") property(name="timereported" dateFormat="rfc33$
    constant(value="\",\"#version\":\"1")
    constant(value="\",\"message\":\"") property(name="msg")
    constant(value="\",\"host\":\"") property(name="hostname")
    constant(value="\",\"severity\":\"") property(name="syslogseverity-text")
    constant(value="\",\"facility\":\"") property(name="syslogfacility-text")
    constant(value="\",\"programname\":\"") property(name="programname")
    constant(value="\",\"procid\":\"") property(name="procid")
    constant(value="\"}\n")
}
action(
    broker=["192.168.1.16:9092"]
    type="omkafka"
    topic="rsyslog_kafka"
    template="json"
)
I'm using the default server.properties file which doesn't contain any listeners, just the defaults. I do understand I need to set the listeners and advertised.listeners.
I've attempted the following settings to no avail:
Attempt 1:
listeners = PLAINTEXT://localhost:9092
advertised.listeners=PLAINTEXT://192.168.1.16:9092
Attempt 2:
listeners = PLAINTEXT://127.0.0.1:9092
advertised.listeners=PLAINTEXT://192.168.1.16:9092
This is after reloading both Kafka and rsyslog and confirming their statuses are active.
Example errors when attempting to read messages.
A bunch of these:
[2020-08-20 00:52:42,248] WARN [Consumer clientId=consumer-console-consumer-70205-1, groupId=console-consumer-70205] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
Followed by an infinite amount of these:
[2020-08-20 00:48:50,598] WARN [Consumer clientId=consumer-console-consumer-11975-1, groupId=console-consumer-11975] Error while fetching metadata with correlation id 254 : {rsyslog_kafka=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
I feel like I'm close. Perhaps there is something I'm just not understanding. I've read lots of similar articles where they say to just replace the IP addresses with your server's. I feel like I've done that, with no success.
You need to set listeners to PLAINTEXT://0.0.0.0:9092 in order to bind externally.
The advertised listener ought to be set to an address that your consumers will be able to use to discover the cluster
Note: Docker Compose might be easier than VMs
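For the setup in this question, the two settings above would combine into something like this in server.properties (IPs taken from the question; a sketch, not a verified config):

```properties
# Bind on all interfaces so VM Graylog (192.168.1.25) can connect in...
listeners=PLAINTEXT://0.0.0.0:9092
# ...and advertise the LAN address clients should dial after metadata discovery
advertised.listeners=PLAINTEXT://192.168.1.16:9092
```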

Kafka Topic Creation with --bootstrap-server gives timeout Exception (kafka version 2.5)

When trying to create a topic using --bootstrap-server,
I am getting the exception "Error while executing Kafka topic command: Timed out waiting for a node":
kafka-topics --bootstrap-server localhost:9092 --topic boottopic --replication-factor 3 --partitions
However, the following works fine, using --zookeeper:
kafka-topics --zookeeper localhost:2181 --topic boottopic --replication-factor 3 --partitions
I am using Kafka version 2.5 and, as per my knowledge, since version >2.2 all the offsets and metadata are stored on the broker itself. So, while creating a topic there's no need to connect to Zookeeper.
Please help me understand this behaviour.
Note - I have set up a Zookeeper quorum and Kafka broker cluster each containing 3 instance on a single machine (for dev purposes)
Old question, but I'll answer anyway for the sake of internet wisdom.
You probably have auth set; when using --bootstrap-server you need to also specify your credentials with --command-config.
since version >2.2, all the ... metadata are stored on the broker itself
False. Topic metadata is still stored on Zookeeper until KIP-500 is completed.
The AdminClient.createTopics() method that is used internally will, however, delegate to Zookeeper from the Controller broker node in the cluster.
Hard to say what the error is, but most common issue is that Kafka is not running, you have SSL enabled and the certs are wrong, or the listeners are misconfigured.
For example, in the listeners, the default broker port on a Cloudera Kafka installation would be 6667, not 9092
each containing 3 instance on a single machine
Running 3 instances on one machine does not improve resiliency or performance unless you have 3 CPUs and 3 separate HDDs on that one motherboard.
"Error while executing Kafka topic command: Timed out waiting for a
node"
This seems like your broker is down, is inaccessible from where you are running those commands, or hasn't started yet (perhaps it is still starting).
Sometimes broker startup takes long because it performs some cleaning operations. You may want to check your Kafka broker's startup logs, see if it is ready, and then try creating the topics by passing in the bootstrap servers.
There could also be errors during broker startup, such as "Too many open files", a wrong Zookeeper URL, or Zookeeper not being accessible by your broker, to name a few.
If you are able to create topics by passing in your Zookeeper URL, that means Zookeeper is up, but it does not necessarily mean that your Kafka broker(s) are also up and running: Zookeeper can run without a broker, but not vice versa.
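The "check your broker startup logs" advice can be scripted; a minimal sketch in Python (the marker string matches what brokers of this era log on successful startup, but treat it as a heuristic since exact wording varies by version, and the log path below is only an example):

```python
def broker_started(log_path):
    """Scan a Kafka broker log for the startup-complete marker.

    Brokers log a line ending in 'started (kafka.server.KafkaServer)'
    once startup finishes; wording can vary by version, so this is a
    heuristic, not a guarantee.
    """
    marker = "started (kafka.server.KafkaServer)"
    with open(log_path) as f:
        return any(marker in line for line in f)

# Example (path is an assumption; use your installation's log location):
# broker_started("/var/log/kafka/server.log")
```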

Kafka Consumer: No entry found for connection

I am trying to check the Kafka consumer by consuming data from a topic on a remote Kafka cluster. I am getting the following error when I use kafka-console-consumer.sh:
ERROR Error processing message, terminating consumer process: (kafka.tools.ConsoleConsumer$)
java.lang.IllegalStateException: No entry found for connection 2147475658
at org.apache.kafka.clients.ClusterConnectionStates.nodeState(ClusterConnectionStates.java:330)
at org.apache.kafka.clients.ClusterConnectionStates.disconnected(ClusterConnectionStates.java:134)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:885)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:276)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.tryConnect(ConsumerNetworkClient.java:548)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:655)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:635)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:575)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:389)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:297)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:231)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:316)
at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1214)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1179)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1164)
at kafka.tools.ConsoleConsumer$ConsumerWrapper.receive(ConsoleConsumer.scala:436)
at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:104)
at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:76)
at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:54)
at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
Processed a total of 0 messages
Here is the command that I use:
./bin/kafka-console-consumer.sh --bootstrap-server SSL://{IP}:{PORT},SSL://{IP}:{PORT},SSL://{IP}:{PORT} --consumer.config ./config/consumer.properties --topic MYTOPIC --group MYGROUP
Here is the ./config/consumer.properties file:
bootstrap.servers=SSL://{IP}:{PORT},SSL://{IP}:{PORT},SSL://{IP}:{PORT}
# consumer group id
group.id=MYGROUP
# What to do when there is no initial offset in Kafka or if the current
# offset does not exist any more on the server: latest, earliest, none
auto.offset.reset=earliest
#### Security
security.protocol=SSL
ssl.key.password=test1234
ssl.keystore.location=/opt/kafka/config/certs/keystore.jks
ssl.keystore.password=test1234
ssl.truststore.location=/opt/kafka/config/certs/truststore.jks
ssl.truststore.password=test1234
Do you have any idea what the problem is?
I have found the problem. It was a DNS problem in the end. I was reaching the Kafka brokers by IP address, but the brokers reply with their DNS names. After setting up the DNS names on the consumer side, it started working again.
I had this problem (with consumers and producers) when running Kafka and Zookeeper as Docker containers.
The solution was to set advertised.listeners in the config/server.properties file of the Kafka brokers, so that it contains the IP address of the container, e.g.
advertised.listeners=PLAINTEXT://172.15.0.8:9092
See https://github.com/maxant/kafkaplayground/blob/master/start-kafka.sh for an example of a script used to start Kafka inside the container after setting up the properties file correctly.
It seems the Kafka cluster's listeners property is not configured in server.properties.
On the remote Kafka cluster, this property should be uncommented and set with the proper host name:
listeners=PLAINTEXT://0.0.0.0:9092
In my case I was receiving that error while trying to connect to my Kafka container, and I had to pass the following:
-e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092
Hope it helps someone
Are you sure the remote Kafka is running? I would suggest running nmap -p PORT HOST to verify the port is open (unless it is configured differently, the port should be 9092). If that is OK, then you can use kafkacat, which makes things easier. Create a consumer by running kafkacat -b HOST:PORT -t YOUR_TOPIC -C -o beginning, or create a producer by running kafkacat -b HOST:PORT -t YOUR_TOPIC -P
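The nmap check in this answer amounts to a TCP connect test, which can also be done without extra tools; a minimal Python sketch (function name is ours):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout.

    A rough stand-in for `nmap -p PORT HOST`: it confirms something is
    listening and reachable, not that it actually speaks the Kafka protocol.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```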

Start multiple brokers in kafka

Beginner in Kafka and the Confluent package. I want to start multiple brokers so as to consume the topic.
It can be done via this setting:
{'bootstrap.servers': 'host1:port1,host2:port2,...'}
This setting can be defined in the server config file, or in the script as well.
But how shall I run those? If I just add multiple endpoints to the bootstrap servers, it gives this error:
java.lang.IllegalArgumentException: requirement failed: Each listener must have a different name, listeners: PLAINTEXT://:9092, PLAINTEXT://:9093
cp config/server.properties config/server-1.properties
cp config/server.properties config/server-2.properties
config/server-1.properties:
broker.id=1
listeners=PLAINTEXT://:9093
log.dirs=/tmp/kafka-logs-1
config/server-2.properties:
broker.id=2
listeners=PLAINTEXT://:9094
log.dirs=/tmp/kafka-logs-2
Reference: kafka_quickstart_multibroker
Done.
I had actually specified the same port for the producer and consumer, and that was the issue.
With the brokers set up on different ports, it works fine even if one broker goes down.
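With several brokers on different ports, bootstrap.servers becomes a comma-separated list, and a malformed entry produces confusing client errors; a minimal sketch that validates the list up front (function name is ours, and it tolerates a protocol prefix even though the client property expects bare host:port entries):

```python
def parse_bootstrap(servers):
    """Split a bootstrap.servers string into (host, port) pairs.

    Tolerates an optional protocol prefix such as PLAINTEXT:// or SSL://,
    which some examples include even though the client property itself
    expects plain host:port entries.
    """
    pairs = []
    for entry in servers.split(","):
        entry = entry.strip()
        if "://" in entry:
            entry = entry.split("://", 1)[1]  # drop protocol prefix
        host, sep, port = entry.rpartition(":")
        if not sep or not port.isdigit():
            raise ValueError(f"bad bootstrap entry: {entry!r}")
        pairs.append((host, int(port)))
    return pairs
```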

Kafka 0.10.1.0 Test Server: Request METADATA failed

I'm getting basic errors after setting up a super simple single-instance Kafka VM. This is for tiny-volume development testing.
This is using the latest Confluent Platform 3.1.1, which includes the almost-latest Kafka 0.10.1.0.
FYI, a slightly newer bug-fix release, Kafka 0.10.1.1, is out, but the next post-3.1.1 Confluent Platform binary that includes it isn't available quite yet.
I configure /etc/kafka/server.properties with (I'm using a static local IP for dev testing simplicity):
listeners=PLAINTEXT://192.168.50.20:9092
advertised.listeners=PLAINTEXT://192.168.50.20:9092
(is that right?)
Simple console admin commands are generating errors. This leads me to believe that there is something wrong with the basic setup/configuration.
~$ /usr/bin/kafka-consumer-groups --new-consumer --bootstrap-server localhost:9092 --list
Error while executing consumer group command Request METADATA failed on brokers List(localhost:9092 (id: -1 rack: null))
java.lang.RuntimeException: Request METADATA failed on brokers List(localhost:9092 (id: -1 rack: null))
at kafka.admin.AdminClient.sendAnyNode(AdminClient.scala:67)
at kafka.admin.AdminClient.findAllBrokers(AdminClient.scala:87)
at kafka.admin.AdminClient.listAllGroups(AdminClient.scala:96)
at kafka.admin.AdminClient.listAllGroupsFlattened(AdminClient.scala:117)
at kafka.admin.AdminClient.listAllConsumerGroupsFlattened(AdminClient.scala:121)
at kafka.admin.ConsumerGroupCommand$KafkaConsumerGroupService.list(ConsumerGroupCommand.scala:304)
at kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:66)
at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)
EDIT: The problem, thanks to Gondola_Ride, was that I specified the IP in listeners in server.properties. I could connect via that IP, but not via localhost. The solution was to use host 0.0.0.0, the convention for binding to all local TCP interfaces:
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://192.168.50.20:9092
Try adding an entry for the host 192.168.50.20 in /etc/hosts on the machine where you are running this command, and see if it works.
Something like
127.0.0.1 localhost.localdomain localhost
OR
192.168.50.20 hostname hostname-alias
Then try using it in the command.
OR
Try using the IP address directly in the command instead of localhost.