Authenticate Kafka CLI with Kafka running on Confluent Cloud

I have a Kafka cluster running on Confluent Cloud, but I'm not able to reset the committed offsets from the UI. Hence, I'm trying to do it via Kafka's CLI as below:
kafka-consumer-groups --bootstrap-server=my_cluster.confluent.cloud:9092 --list
However, I'm running into the error below, and I think it has to do with how I authenticate.
Error: Executing consumer group command failed due to org.apache.kafka.common.KafkaException: Failed to find brokers to send ListGroups
java.util.concurrent.ExecutionException: org.apache.kafka.common.KafkaException: Failed to find brokers to send ListGroups
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396)
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2073)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
at kafka.admin.ConsumerGroupCommand$ConsumerGroupService.listConsumerGroups(ConsumerGroupCommand.scala:203)
at kafka.admin.ConsumerGroupCommand$ConsumerGroupService.listGroups(ConsumerGroupCommand.scala:198)
at kafka.admin.ConsumerGroupCommand$.run(ConsumerGroupCommand.scala:70)
at kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:59)
at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)
Caused by: org.apache.kafka.common.KafkaException: Failed to find brokers to send ListGroups
at org.apache.kafka.clients.admin.KafkaAdminClient$24.handleFailure(KafkaAdminClient.java:3368)
at org.apache.kafka.clients.admin.KafkaAdminClient$Call.handleTimeoutFailure(KafkaAdminClient.java:838)
at org.apache.kafka.clients.admin.KafkaAdminClient$Call.fail(KafkaAdminClient.java:804)
at org.apache.kafka.clients.admin.KafkaAdminClient$TimeoutProcessor.handleTimeouts(KafkaAdminClient.java:934)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.timeoutPendingCalls(KafkaAdminClient.java:1013)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1367)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1331)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: findAllBrokers

Here is an example of listing consumer groups:
kafka-consumer-groups --bootstrap-server <ccloud kafka>:9092 --command-config consumer.properties --list
consumer.properties
bootstrap.servers=<ccloud kafka>:9092
ssl.endpoint.identification.algorithm=https
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<KEY>" password="<SECRET>";

You'll want to use the --command-config option to pass a properties file that contains your CCloud credentials.
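Since the original goal was resetting committed offsets, the same --command-config file should work with the reset command as well. A minimal sketch, where the group and topic names are placeholders (drop --execute to get a dry run first):
kafka-consumer-groups --bootstrap-server <ccloud kafka>:9092 \
  --command-config consumer.properties \
  --group <my_consumer_group> --topic <my_topic> \
  --reset-offsets --to-earliest --execute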

Related

Issues setting up Kafka on localhost

I ran into an issue while trying to set up Kafka (version 2.12-2.4.0) on my local machine by following:
https://kafka.apache.org/quickstart
I created a very simple spring-boot app with a producer and consumer by following some online tutorials. When I started the app, it would spin for 30 seconds and then throw connection errors in the logs while trying to create a topic.
I thought maybe my spring-boot app was misconfigured, so I tried creating a topic from the command line, but I got a similar error:
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic testing
Error while executing topic command : org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
[2020-02-11 21:57:06,545] ERROR java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at kafka.admin.TopicCommand$AdminClientTopicService.createTopic(TopicCommand.scala:225)
at kafka.admin.TopicCommand$TopicService.createTopic(TopicCommand.scala:194)
at kafka.admin.TopicCommand$TopicService.createTopic$(TopicCommand.scala:189)
at kafka.admin.TopicCommand$AdminClientTopicService.createTopic(TopicCommand.scala:217)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:61)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
Kafka and ZooKeeper were up and running, but nothing could connect to Kafka.
I noticed that the kafka logs said it was listening on 0.0.0.0:9092:
INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
I went into server.properties and changed the listener configuration to:
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://127.0.0.1:9092
This solved the problem. I don't know if this is an issue specific to my laptop, but I wanted to save other people time. I didn't have to change my spring-boot connection configuration; connecting to localhost still worked.
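As an alternative sketch (an assumption on my part, not something from the original post), you can keep the broad bind address but control the address Kafka hands back to clients via advertised.listeners:
# server.properties
# Bind on all interfaces, but advertise loopback to clients.
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://127.0.0.1:9092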

How do I ask Kafka which topics have configuration overrides?

I know the default retention period, but I want to see whether any topics override it.
I've tried using kafka-topics, but it doesn't connect to ZooKeeper.
kafka-topics --zookeeper localhost:2181 --describe --topics-with-overrides
I expect the output to be basically nothing, but instead I get this error:
Exception in thread "main" kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
at kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply$mcV$sp(ZooKeeperClient.scala:268)
at kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply(ZooKeeperClient.scala:264)
at kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply(ZooKeeperClient.scala:264)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
at kafka.zookeeper.ZooKeeperClient.kafka$zookeeper$ZooKeeperClient$$waitUntilConnected(ZooKeeperClient.scala:264)
at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:97)
at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1693)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:57)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
It turns out my --zookeeper parameter wasn't correct; ZooKeeper is not hosted on the Kafka server. This worked:
kafka-topics --zookeeper myactualzookeeperserver.com:2181 --describe --topics-with-overrides
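On newer Kafka versions (2.2+), where the tools talk to the brokers directly and the --zookeeper flag is deprecated, the equivalent should be the --bootstrap-server form (the broker address is a placeholder):
kafka-topics --bootstrap-server mybroker.com:9092 --describe --topics-with-overrides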

Apache Beam: Timeout while initializing partition 'topic-1'. Kafka client may not be able to connect to servers

I get this error when my Apache Beam application connects to my Kafka cluster with ACLs enabled. Please help me fix this issue.
Caused by: java.io.IOException: Reader-4: Timeout while initializing partition 'test-1'. Kafka client may not be able to connect to servers.
org.apache.beam.sdk.io.kafka.KafkaUnboundedReader.start(KafkaUnboundedReader.java:128)
org.apache.beam.runners.dataflow.worker.WorkerCustomSources$UnboundedReaderIterator.start(WorkerCustomSources.java:779)
org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.start(ReadOperation.java:361)
org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:194)
org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:159)
org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:76)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1228)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:143)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:967)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
I have a Kafka cluster with 3 nodes on GKE. I created a topic with a replication factor of 3 and 5 partitions.
kafka-topics --create --zookeeper zookeeper:2181 \
--replication-factor 3 --partitions 5 --topic topic
I set read permission on the topic test for the test_consumer_group consumer group.
kafka-acls --authorizer-properties zookeeper.connect=zookeeper:2181 \
--add --allow-principal User:CN=myuser.test.io --consumer \
--topic test --group 'test_consumer_group'
In my Apache Beam application, I set the configuration group.id=test_consumer_group.
Testing with the console consumer does not work either:
$ docker run --rm -v `pwd`:/cert confluentinc/cp-kafka:5.1.0 \
kafka-console-consumer --bootstrap-server kafka.xx.xx:19092 \
--topic topic --consumer.config /cert/client-ssl.properties
[2019-03-08 05:43:07,246] WARN [Consumer clientId=consumer-1, groupId=test_consumer_group]
Received unknown topic or partition error in ListOffset request for
partition test-3 (org.apache.kafka.clients.consumer.internals.Fetcher)
Seems like a communication issue between your Kafka readers and the brokers: the Kafka client may not be able to connect to the servers.
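One sanity check worth sketching here (reusing the authorizer properties from the question): list the ACLs actually applied and confirm they cover the topic the consumer reads, since the ACL above was granted on test while the console consumer reads topic:
kafka-acls --authorizer-properties zookeeper.connect=zookeeper:2181 \
  --list --topic test
kafka-acls --authorizer-properties zookeeper.connect=zookeeper:2181 \
  --list --group test_consumer_group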

Facing an issue finding the depth of a Kafka topic on an SSL-enabled port

I'm facing an issue while finding the depth of a topic in Kafka on an SSL-enabled port using kafka.tools.GetOffsetShell, though I am able to consume messages from the same port.
However, I am able to execute the command to get the depth of a topic on the PLAINTEXT port.
Below is the error stack:
sshuser@wn0:~$ sudo bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list wn0.internal.cloudapp.net:9093 --topic TP.TOPIC --time -1 --offsets 1 --security-protocol SSL
{metadata.broker.list=wn0.internal.cloudapp.net:9093, request.timeout.ms=1000, client.id=GetOffsetShell, security.protocol=SSL}
[2017-09-26 19:21:59,026] WARN Fetching topic metadata with correlation id 0 for topics [Set(TP.TOPIC)] from broker [BrokerEndPoint(0,wn0.internal.cloudapp.net:9093)] failed (kafka.client.ClientUtils$)
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:99)
at kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:140)
at kafka.network.BlockingChannel.receive(BlockingChannel.scala:131)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:84)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:81)
at kafka.producer.SyncProducer.send(SyncProducer.scala:126)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:96)
at kafka.tools.GetOffsetShell$.main(GetOffsetShell.scala:98)
at kafka.tools.GetOffsetShell.main(GetOffsetShell.scala)
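The stack trace points at the old BlockingChannel/SyncProducer code path, and java.io.EOFException is the classic symptom of a plaintext client hitting an SSL port: that legacy path never completes a TLS handshake, regardless of the --security-protocol flag. A workaround sketch under that assumption is to read the end offsets through a tool that honors SSL client properties, such as kafka-consumer-groups (group name and truststore paths are placeholders, and this assumes a consumer group that has read the topic):
client-ssl.properties:
security.protocol=SSL
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=<password>

bin/kafka-consumer-groups.sh --bootstrap-server wn0.internal.cloudapp.net:9093 \
  --command-config client-ssl.properties \
  --describe --group <consumer_group>
The LOG-END-OFFSET column in the --describe output gives the same per-partition depth that GetOffsetShell --time -1 would report.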

Apache Kafka producer program is throwing kafka.common.KafkaException and java.nio.channels.ClosedChannelException. How do I resolve it?

kafka.common.KafkaException: fetching topic metadata for topics [Set(tweets)] from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
at kafka.producer.async.DefaultEventHandler$$anonfun$handle$2.apply$mcV$sp(DefaultEventHandler.scala:78)
at kafka.utils.Utils$.swallow(Utils.scala:172)
at kafka.utils.Logging$class.swallowError(Logging.scala:106)
at kafka.utils.Utils$.swallowError(Utils.scala:45)
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:78)
at kafka.producer.Producer.send(Producer.scala:77)
at kafka.javaapi.producer.Producer.send(Producer.scala:33)
at com.aail.kafka.KafkaConnection.r(KafkaConnection.java:141)
at com.aail.kafka.Postgresconnection.main(Postgresconnection.java:40)
Caused by: java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
... 10 more
[2016-02-17 09:02:36,876] ERROR Failed to send requests for topics tweets with correlation ids in [0,32]
How do I resolve this issue? Any help would be appreciated.
Thanks in advance.
I would follow the points below to investigate, some of which you may have already checked.
Check if the console producer and console consumer are working as expected. The following are the commands to run once the ZooKeeper and Kafka servers are started and the topic has been created:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
Check if advertised.host.name is set to the correct value in the server.properties file.
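A minimal sketch of that setting for a single local broker (localhost is an assumption; use an address your producer can actually reach):
# server.properties
advertised.host.name=localhost
Note that on newer brokers, advertised.listeners supersedes this legacy property.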
Please do let me know if any of these help to solve the issue or if you were able to solve it with some other changes.