I understand my default retention period, but I want to see if any topics have special overrides for the retention period.
I've tried using the kafka-topics tool, but it can't connect to ZooKeeper.
kafka-topics --zookeeper localhost:2181 --describe --topics-with-overrides
I expected the output to be basically empty, but instead I get this error:
Exception in thread "main" kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
at kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply$mcV$sp(ZooKeeperClient.scala:268)
at kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply(ZooKeeperClient.scala:264)
at kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply(ZooKeeperClient.scala:264)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
at kafka.zookeeper.ZooKeeperClient.kafka$zookeeper$ZooKeeperClient$$waitUntilConnected(ZooKeeperClient.scala:264)
at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:97)
at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1693)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:57)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
It turns out my --zookeeper parameter was wrong: ZooKeeper is not hosted on the Kafka server. This worked:
kafka-topics --zookeeper myactualzookeeperserver.com:2181 --describe --topics-with-overrides
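On newer Kafka versions (roughly 2.2 and later), the same check can be done without touching ZooKeeper at all, since kafka-topics talks to the brokers directly. A sketch, assuming a broker is reachable at the hypothetical address myactualkafkabroker.com:9092 and a topic named my-topic:
kafka-topics --bootstrap-server myactualkafkabroker.com:9092 --describe --topics-with-overrides
kafka-configs --bootstrap-server myactualkafkabroker.com:9092 --entity-type topics --entity-name my-topic --describe
The second command lists the per-topic config overrides (such as retention.ms) for a single topic.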
Related
I'm trying to create a topic on my Kafka cluster with this command:
kafka-topics.sh --bootstrap-server 127.0.0.1:2181 --topic first_topic --create --partitions 3 --replication-factor 1
and I get this error:
[2022-02-03 11:25:28,635] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (/127.0.0.1:2181) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
So I looked at kafka_2.12-3.1.0\config\server.properties, where I have
listeners=PLAINTEXT://localhost:9092
Any help would be much appreciated.
2181 is typically the port used by ZooKeeper. If you want to connect through ZooKeeper, and you're not running Kafka in KRaft (ZooKeeper-less) mode, then you need to do as Umeshwaran said and use the --zookeeper argument.
However, you can also use --bootstrap-server; if you do, specify the broker address and port, which from your listeners config is 9092:
kafka-topics.sh --bootstrap-server 127.0.0.1:9092 --topic first_topic --create --partitions 3 --replication-factor 1
This article should clarify things.
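To double-check that the topic was created, a quick describe against the same broker (a sketch, reusing the assumed address and port from above):
kafka-topics.sh --bootstrap-server 127.0.0.1:9092 --describe --topic first_topic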
This is because your ZooKeeper is not running. I was facing the same issue, and it started working once I started ZooKeeper (I am using WSL) with:
zookeeper-server-start.sh ~/kafka_2.13-3.3.1/config/zookeeper.properties
I am also running Kafka; if you don't know how to start it, use this command:
kafka-server-start.sh ~/kafka_2.13-3.3.1/config/server.properties
Hint: replace kafka_2.13-3.3.1 with the Kafka version you are actually using.
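If you want to confirm that ZooKeeper is actually reachable before re-running the topic command, a quick sanity check (a sketch, assuming the default port 2181 on the same machine):
zookeeper-shell.sh localhost:2181 ls /brokers/ids
If this lists broker IDs, both ZooKeeper and at least one broker are up; if it times out, ZooKeeper is not running or is listening elsewhere.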
I installed the latest version of Kafka on Ubuntu 20.04, and when I try to create a topic I get this error:
~/Kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic TutorialTopic
Exception in thread "main" kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:271)
at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:125)
at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1948)
at kafka.admin.TopicCommand$ZookeeperTopicService$.apply(TopicCommand.scala:378)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:56)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
I tried:
- increasing the ZooKeeper connection timeout from 18000 to 60000
- changing --zookeeper localhost:2181 to --zookeeper zookeeper:2181
but I still get the same issue. Do you know why?
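One way to narrow this down is to check whether anything is listening on port 2181 at all before tuning timeouts; a sketch, assuming ZooKeeper was started on the same machine and nc is available:
echo srvr | nc localhost 2181
If this prints nothing or the connection is refused, ZooKeeper itself is not running (or is bound to a different interface), and no client-side timeout will help.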
I got this error when my Apache Beam application connects to my Kafka cluster with ACLs enabled. Please help me fix this issue.
Caused by: java.io.IOException: Reader-4: Timeout while initializing partition 'test-1'. Kafka client may not be able to connect to servers.
org.apache.beam.sdk.io.kafka.KafkaUnboundedReader.start(KafkaUnboundedReader.java:128)
org.apache.beam.runners.dataflow.worker.WorkerCustomSources$UnboundedReaderIterator.start(WorkerCustomSources.java:779)
org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.start(ReadOperation.java:361)
org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:194)
org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:159)
org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:76)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1228)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:143)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:967)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
I have a 3-node Kafka cluster on GKE. I created a topic with replication factor 3 and 5 partitions.
kafka-topics --create --zookeeper zookeeper:2181 \
--replication-factor 3 --partitions 5 --topic topic
I set read permission on the topic test for the test_consumer_group consumer group.
kafka-acls --authorizer-properties zookeeper.connect=zookeeper:2181 \
--add --allow-principal User:CN=myuser.test.io --consumer \
--topic test --group 'test_consumer_group'
In my Apache Beam application, I set the consumer configuration group.id=test_consumer_group.
I also tested with the console consumer, and it does not work either.
$ docker run --rm -v `pwd`:/cert confluentinc/cp-kafka:5.1.0 \
kafka-console-consumer --bootstrap-server kafka.xx.xx:19092 \
--topic topic --consumer.config /cert/client-ssl.properties
[2019-03-08 05:43:07,246] WARN [Consumer clientId=consumer-1, groupId=test_consumer_group]
Received unknown topic or partition error in ListOffset request for
partition test-3 (org.apache.kafka.clients.consumer.internals.Fetcher)
This looks like a communication issue between your Kafka readers and the brokers; the Kafka client may not be able to connect to the servers.
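Two checks that usually narrow this down (a sketch; the ZooKeeper address and the topic name test are taken from your own commands and the error message, so adjust as needed):
kafka-topics --zookeeper zookeeper:2181 --describe --topic test
kafka-acls --authorizer-properties zookeeper.connect=zookeeper:2181 --list --topic test
The first confirms the topic exists and every partition has a leader; the second shows exactly which ACLs are applied to the topic, so you can verify that the principal and consumer group match what the Beam application uses.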
I am getting the errors below while trying to send messages through the Kafka console producer and read them with the console consumer.
The commands I entered:
Console 1 (producer):
export PATH=$PATH:/usr/hdp/current/kafka-broker/bin
kafka-topics.sh --create --zookeeper ip-172-31-20-58.ec2.internal:2181 --replication-factor 1 --partitions 1 --topic testuday1234
kafka-console-producer.sh --broker-list ip-172-31-20-58.ec2.internal:6667 --topic testuday1234
Console 2 (consumer):
export PATH=$PATH:/usr/hdp/current/kafka-broker/bin
kafka-console-consumer.sh --zookeeper localhost:2181 --topic testuday1234 --from-beginning
Please help me resolve these errors.
Error I am getting in the producer console:
[udaychitukula6587#ip-172-31-38-183 ~]$ kafka-console-producer.sh --broker-list ip-172-31-20-58.ec2.internal:6667 --topic testuday1234
hi
[2018-05-28 15:27:36,761] WARN Error while fetching metadata [{TopicMetadata for topic testuday1234 -> No partition metadata for topic testuday1234 due to kafka.common.LeaderNotAvailableException}] for topic [testuday1234]: class kafka.common.LeaderNotAvailableException (kafka.producer.BrokerPartitionInfo)
[2018-05-28 15:27:36,774] WARN Error while fetching metadata [{TopicMetadata for topic testuday1234 -> No partition metadata for topic testuday1234 due to kafka.common.LeaderNotAvailableException}] for topic [testuday1234]: class kafka.common.LeaderNotAvailableException (kafka.producer.BrokerPartitionInfo)
Error I am getting in the consumer console:
[udaychitukula6587#ip-172-31-38-183 ~]$ kafka-console-consumer.sh --zookeeper localhost:2181 --topic testuday123 --from-beginning
{metadata.broker.list=ip-172-31-20-58.ec2.internal:6667,ip-172-31-53-48.ec2.internal:6667,ip-172-31-60-179.ec2.internal:6667, request.timeout.ms=30000, client.id=console-consumer-63526, security.protocol=PLAINTEXT}
{metadata.broker.list=ip-172-31-20-58.ec2.internal:6667,ip-172-31-53-48.ec2.internal:6667,ip-172-31-60-179.ec2.internal:6667, request.timeout.ms=30000, client.id=console-consumer-63526, security.protocol=PLAINTEXT}
There are a couple of things I've noted here.
First, in newer versions of Kafka (I think from 0.10.1) the console consumer needs the --bootstrap-server option rather than --zookeeper. Could you please confirm which version you're using, and also try running the consumer command with the --bootstrap-server option?
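For example, something like this (a sketch, reusing the broker address and port from your producer command):
kafka-console-consumer.sh --bootstrap-server ip-172-31-20-58.ec2.internal:6667 --topic testuday1234 --from-beginning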
Second, for the producer in such a scenario, I would recommend checking three things to confirm where the issue might be (sketched as commands below):
The controller of a Kafka cluster is elected via ZooKeeper, so it is worth opening a zookeeper-client shell to check that the brokers are registered (under the znode path /brokers/ids/[brokerId]) and that there is an active controller (under /controller).
Try running kafka-topics --describe --topic <topic> to see if the topic has an active leader for each partition, i.e. the Leader column in the output should NOT be None. I've run into this myself before.
The last one is about the broker's port number: please check and confirm that the broker is actually listening on that port. You'll find this information (listeners and advertised.listeners) in the server.properties file on the broker. I found this post that you might find useful, where the user had a problem with port 6667.
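A sketch of those three checks, reusing the ZooKeeper and broker addresses from your own commands (adjust hosts and paths to your install):
zookeeper-shell.sh ip-172-31-20-58.ec2.internal:2181 ls /brokers/ids
kafka-topics.sh --zookeeper ip-172-31-20-58.ec2.internal:2181 --describe --topic testuday1234
grep -E 'listeners|advertised' /etc/kafka/conf/server.properties   # config path varies by distribution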
I hope this helps!
When I try to consume messages from the Kafka server hosted in EC2 using the Kafka console tool (v0.9.0.1, which I think uses the old consumer API), I get the following exception.
How can I overcome this?
kafka-console-consumer.sh --zookeeper zookeeper.xx.com:2181 --topic my-replicated-topic
[2016-04-06 15:52:40,247] WARN [console-consumer-12572_Rathas-MacBook-Pro.local-1459921957380-6ebc238f-leader-finder-thread], Failed to add leader for partitions [my-replicated-topic,1],[my-replicated-topic,2],[my-replicated-topic,0]; will retry (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:98)
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83)
at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:149)
at kafka.consumer.SimpleConsumer.earliestOrLatestOffset(SimpleConsumer.scala:188)
at kafka.consumer.ConsumerFetcherThread.handleOffsetOutOfRange(ConsumerFetcherThread.scala:84)
at kafka.server.AbstractFetcherThread$$anonfun$addPartitions$2.apply(AbstractFetcherThread.scala:187)
at kafka.server.AbstractFetcherThread$$anonfun$addPartitions$2.apply(AbstractFetcherThread.scala:182)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:778)
at scala.collection.immutable.Map$Map1.foreach(Map.scala:116)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:777)
at kafka.server.AbstractFetcherThread.addPartitions(AbstractFetcherThread.scala:182)
at kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:88)
at kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:78)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:778)
at scala.collection.immutable.Map$Map3.foreach(Map.scala:161)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:777)
at kafka.server.AbstractFetcherManager.addFetcherForPartitions(AbstractFetcherManager.scala:78)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:95)
at
I created the topic like this:
/kafka-topics.sh --create --zookeeper zookeeper.xx.com:2181 --replication-factor 3 --partitions 3 --topic my-replicated-topic
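For what it's worth, a ClosedChannelException from the old consumer usually means the client cannot reach the host and port that the brokers advertise in ZooKeeper. A sketch of how that could be checked from the consumer machine (assuming a broker with id 0 exists; adjust the id as needed):
kafka-topics.sh --zookeeper zookeeper.xx.com:2181 --describe --topic my-replicated-topic
zookeeper-shell.sh zookeeper.xx.com:2181 get /brokers/ids/0
The second command shows the advertised host and port for that broker; if those are not reachable from the consumer machine, the leader-finder thread will keep failing as above.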