I am trying to create my first topic by using the command below:
./bin/kafka-topics.sh --bootstrap-server localhost:2181 --create --topic test --partitions 3 --replication-factor 1
and then I am getting the error below.
[2021-10-07 14:03:15,144] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:2181) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-10-07 14:03:15,251] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:2181) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-10-07 14:03:15,418] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:2181) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
Could you please advise on what exactly the issue is here and how it can be resolved?
In the Apache Kafka documentation, under "Notable changes in 2.2.0", it states that
The bin/kafka-topics.sh command line tool is now able to connect directly to brokers with --bootstrap-server instead of zookeeper. The old --zookeeper option is still available for now. Please read KIP-377 for more information.
(As of Apache Kafka 3.0.0, the --zookeeper flag was removed.)
It's unclear what version of Kafka you are using, but given that it accepted the --bootstrap-server flag, you're on at least 2.2.0 (probably < 3.0.0 given the directory name, but that's not important for this question).
If you're using --bootstrap-server, then you want to connect to the port of the Kafka server, not Apache ZooKeeper.
Recall that
The bin/kafka-topics.sh command line tool is now able to connect directly to brokers
Therefore, port 9092 is typically used, so your command should be
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic test --partitions 3 --replication-factor 1
However, --zookeeper is typically used with port 2181, since that's the port Apache ZooKeeper tends to run on.
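If you'd rather verify this programmatically, here is a minimal Scala sketch (an illustration, assuming a broker on localhost:9092 and the kafka-clients library on the classpath) that creates the same topic through the AdminClient API, which, like --bootstrap-server, talks to the broker rather than ZooKeeper:

import java.util.{Collections, Properties}
import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig, NewTopic}

object CreateTopicSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    // Broker port (9092), not the ZooKeeper port (2181)
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
    val admin = AdminClient.create(props)
    try {
      // Mirrors: --create --topic test --partitions 3 --replication-factor 1
      admin.createTopics(Collections.singleton(new NewTopic("test", 3, 1.toShort))).all().get()
      println("Topic created")
    } finally {
      admin.close()
    }
  }
}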
I ran into an issue while trying to set up Kafka (version 2.12-2.4.0) on my local machine by following:
https://kafka.apache.org/quickstart
I created a very simple spring-boot app that had a producer and a consumer, following some online tutorials. When I started up my app, it would spin for 30 seconds and then start throwing connection errors in the logs while trying to create a topic.
I thought maybe my spring-boot app was misconfigured so I tried creating a topic from the command line but I got a similar error:
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic testing
Error while executing topic command : org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
[2020-02-11 21:57:06,545] ERROR java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at kafka.admin.TopicCommand$AdminClientTopicService.createTopic(TopicCommand.scala:225)
at kafka.admin.TopicCommand$TopicService.createTopic(TopicCommand.scala:194)
at kafka.admin.TopicCommand$TopicService.createTopic$(TopicCommand.scala:189)
at kafka.admin.TopicCommand$AdminClientTopicService.createTopic(TopicCommand.scala:217)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:61)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
Kafka and zookeeper were up and running but nothing could connect to kafka.
I noticed that the kafka logs said it was listening on 0.0.0.0:9092:
INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
I went into server.properties and changed the listeners value to:
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://127.0.0.1:9092
This solved the problem. I don't know if this is an issue specific to my laptop, but I wanted to save other people time. I didn't have to change my spring-boot connection configuration; connecting to localhost still worked.
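For anyone who wants to confirm the fix from code rather than from logs, here is a minimal Scala sketch (assumptions: kafka-clients on the classpath, broker listening on 127.0.0.1:9092 as configured above):

import java.util.Properties
import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig}

object BrokerReachabilityCheck {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092")
    val admin = AdminClient.create(props)
    try {
      // describeCluster() forces a metadata round-trip; if the listener is
      // wrong this call times out instead of printing the broker list
      admin.describeCluster().nodes().get().forEach { n =>
        println(s"Broker ${n.id} at ${n.host}:${n.port}")
      }
    } finally {
      admin.close()
    }
  }
}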
I have an environment problem.
I want to use ZooKeeper and a Kafka cluster to solve my problem.
My ZooKeeper version is 3.4.12 and Kafka is 2.12-2.1.0.
I also changed zoo.cfg in ZooKeeper:
dataDir=D:/WEBSOCKET/zookeeper-3.4.12/data
and server.properties in Kafka:
log.dirs=D:/WEBSOCKET/kafka_2.12-2.1.0/logs
I watched all the tutorials and did everything exactly the same way.
I also open ZooKeeper before starting Kafka.
These are my commands:
1) open zookeeper (zkServer.cmd)
2) in kafka
.\bin\windows\kafka-server-start.bat .\config\server.properties
3) create topic
.\bin\windows\kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic hello
4) create a producer
.\bin\windows\kafka-console-producer.bat --bootstrap-server localhost:2181 --topic hello
5) create a consumer
.\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:2181 --topic hello
When I get to step 5, it always fails.
ZooKeeper gives me a lot of console output like:
WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#383] - Exception causing close of session 0x0: null
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#215] - Accepted socket connection from /127.0.0.1:55192
and
2019-01-08 17:05:24,822 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1040] - Closed socket connection for client /127.0.0.1:50874 (no session established for client)
2019-01-08 17:05:25,783 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#215] - Accepted socket connection from /127.0.0.1:56089
I don't know how to fix it. I have been googling for two days...
When I open Kafka in step 2, ZooKeeper sometimes doesn't have any response, or shows me this:
[ProcessThread(sid:0 cport:2181)::PrepRequestProcessor#596] - Got user-level KeeperException when processing sessionid:0x1000058f8960000 type:multi cxid:0x36 zxid:0x69 txntype:-1 reqpath:n/a aborting remaining multi ops. Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
I also googled this, but it was not helpful.
I set this in Kafka before:
advertised.host.name = localhost
listeners=PLAINTEXT://127.0.0.1:9092
My hosts file has:
127.0.0.1 localhost
Please help me get a local server running; I want to start coding my project.
Thank you for reading.
Producers and consumers need to use port 9092 (Kafka)
You are seeing ZooKeeper logs and errors because you are trying to use bootstrap-server or broker-list with port 2181 (ZooKeeper).
Check the quickstart guide again.
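Concretely, assuming the Kafka 2.1 scripts from the question (where the console producer still takes --broker-list rather than --bootstrap-server), steps 4 and 5 should look like:

.\bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic hello
.\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic hello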
I am using a single-node Kafka v0.10.2 (16 GB RAM, 8 cores) and a single-node ZooKeeper v3.4.9 (4 GB RAM, 1 core). I have 64 consumer groups and 500 topics, each with 250 partitions. I am able to execute the commands which require only the Kafka broker, and they run fine, e.g.
./kafka-consumer-groups.sh --bootstrap-server localhost:9092
--describe --group
But when I execute admin commands like create topic or alter topic, for example
./kafka-topics.sh --create --zookeeper :2181
--replication-factor 1 --partitions 1 --topic
the following exception is displayed:
Error while executing topic command : replication factor: 1 larger
than available brokers: 0 [2017-11-16 11:22:13,592] ERROR
org.apache.kafka.common.errors.InvalidReplicationFactorException:
replication factor: 1 larger than available brokers: 0
(kafka.admin.TopicCommand$)
I checked that my broker is up. In server.log, the following warnings are present:
[2017-11-16 11:14:26,959] WARN Client session timed out, have not heard from server in 15843ms for sessionid 0x15aa7f586e1c061 (org.apache.zookeeper.ClientCnxn)
[2017-11-16 11:14:28,795] WARN Unable to reconnect to ZooKeeper service, session 0x15aa7f586e1c061 has expired (org.apache.zookeeper.ClientCnxn)
[2017-11-16 11:21:46,055] WARN Unable to reconnect to ZooKeeper service, session 0x15aa7f586e1c067 has expired (org.apache.zookeeper.ClientCnxn)
Below mentioned is my Kafka server configuration :
broker.id=1
delete.topic.enable=true
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/kafka/data/logs
num.partitions=1
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=<zookeeperIP>:2181
zookeeper.connection.timeout.ms=6000
Zookeeper Configuration is :
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/zookeeper/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
autopurge.snapRetainCount=20
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=48
I am not able to figure out which configuration to tune. What am I missing? Any help will be appreciated.
When you run a command with the zookeeper argument, like
./kafka-topics.sh --create --zookeeper :2181 --replication-factor 1
--partitions 1 --topic
it means that the tool will go and ask ZooKeeper about the broker details. If broker details are available in ZooKeeper, it is able to connect to the broker.
In your scenario, I think ZooKeeper lost the broker details. ZooKeeper usually stores all this metadata in a tree of paths.
To check whether ZooKeeper has the broker path or not, you need to log into the ZooKeeper shell using /bin/zkCli.sh -server localhost:2181
After a successful connection, run ls / and you will see output like this:
[controller, controller_epoch, brokers, zookeeper, admin, isr_change_notification, consumers, config]
Then run ls /brokers; the output will be [ids, topics, seqid].
Then run ls /brokers/ids; the output will be [0], an array of broker ids. If your array is empty ([]), that means no broker details are present in your ZooKeeper.
In that case, you need to restart your broker and ZooKeeper.
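Before restarting, you can also inspect what a registered broker actually advertised, still from the same zkCli.sh session (a hypothetical check; the exact JSON fields vary by Kafka version):

get /brokers/ids/0

The JSON this prints includes an "endpoints" field with the host and port the broker registered; if that address is wrong, clients will fail to connect even though the broker id is listed.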
Update:
This problem doesn't usually happen; it means your ZooKeeper server is being closed (killed) or is losing the broker path.
To overcome this, it is better to run two more ZooKeepers, i.e., a complete three-node ZooKeeper ensemble.
If it is local, use localhost:2181, localhost:2182, localhost:2183.
If it is a cluster, use three instances: zookeeper1:2181, zookeeper2:2181, zookeeper3:2181.
A three-node ensemble can tolerate one node failure.
To create a topic, use the following command:
./kafka-topics.sh --create --zookeeper
localhost:2181,localhost:2182,localhost:2183 --replication-factor 1
--partitions 1 --topic
While producing messages in Kafka, I am getting the following error:
$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic nil_PF1_P1
hi
hello
[2016-07-19 17:06:34,542] ERROR Error when sending message to topic nil_PF1_P1 with key: null, value: 2 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[2016-07-19 17:07:34,544] ERROR Error when sending message to topic nil_PF1_P1 with key: null, value: 5 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
$ bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic nil_PF1_P1
Topic:nil_PF1_P1 PartitionCount:1 ReplicationFactor:1 Configs:
Topic: nil_PF1_P1 Partition: 0 Leader: 2 Replicas: 2 Isr: 2
Any idea on this?
Instead of changing server.properties, include the address 0.0.0.0 in the command itself.
Instead of
/usr/bin/kafka-console-producer --broker-list Hostname:9092 --topic MyFirstTopic1
use
/usr/bin/kafka-console-producer --broker-list 0.0.0.0:9092 --topic MyFirstTopic1
It may be because of some parameters from Kafka's server.properties file. You can find more information here
Stop the Kafka server with
cd $KAFKA_HOME/bin
./kafka-server-stop.sh
Change
listeners=PLAINTEXT://hostname:9092
to
listeners=PLAINTEXT://0.0.0.0:9092
in $KAFKA_HOME/config/server.properties
Restart the Kafka server with
$KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties
I know this is old but this may work for someone else who's dealing with it:
I changed 2 things:
1. change the "bootstrap.servers" property or the --broker-list option to 0.0.0.0:9092
2. change (uncomment and edit, in my case) two properties in server.properties:
listeners=PLAINTEXT://your.host.name:9092 to listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://your.host.name:9092 to advertised.listeners=PLAINTEXT://localhost:9092
I faced a similar problem, where I was able to produce and consume on localhost but not from different machines on the network. Based on a few answers, I got the clue that we essentially need to expose advertised.listeners to the producer and consumer; however, giving 0.0.0.0 was also not working. So I gave the exact IP against advertised.listeners:
advertised.listeners=PLAINTEXT://HOST.IP:9092
And I left listeners=PLAINTEXT://:9092 as it is.
With this, the broker exposes the advertised IP and port to producers and consumers.
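To see why the exact IP matters: clients use bootstrap.servers only for the very first connection, then switch to whatever address the broker advertises in its metadata, so a remote producer only works if advertised.listeners resolves from that machine. A minimal Scala sketch from the remote machine (HOST.IP is a placeholder, as above):

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

object RemoteProduceSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    // Only the first connection uses this address; later requests go to
    // the broker's advertised.listeners, which is why 0.0.0.0 there fails
    props.put("bootstrap.servers", "HOST.IP:9092")
    props.put("key.serializer", classOf[StringSerializer].getName)
    props.put("value.serializer", classOf[StringSerializer].getName)
    val producer = new KafkaProducer[String, String](props)
    try {
      producer.send(new ProducerRecord[String, String]("test", "hello over the network")).get()
    } finally {
      producer.close()
    }
  }
}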
If you are running a Hortonworks cluster, check the listening port in Ambari.
In my case, 9092 was not my port. I went to Ambari and found the listening port was set to 6667.
It worked for me. :)
I got the same error today with confluent_kafka 0.9.2 (0x90200) and librdkafka 0.9.2 (0x90401). In my case, I had specified the wrong broker port in the Tutorialspoint example:
$ kafka-console-producer.sh --broker-list localhost:9092 --topic tutorialpoint-basic-ops-01
although my broker was started on port 9094:
$ cat server-02.properties
broker.id=2
port=9094
log.dirs=/tmp/kafka-example-logs-02
zookeeper.connect=localhost:2181
Although port 9092 was not open (checked with netstat -tunap), it took 60s for kafka-console-producer.sh to raise an error. It looks like this tool needs a fix to:
fail faster
give a more explicit error message
I faced the above exception stacktrace, investigated, and found the root cause. I hit it when I set up a Kafka cluster with two nodes with the following settings in server.properties. Here I denote the server.properties of Kafka nodes 1 and 2 as broker1.properties and broker2.properties.
broker1.properties settings
listeners=PLAINTEXT://A.B.C.D:9092
zookeeper.connect=A.B.C.D:2181,E.F.G.H:2181
broker2.properties settings
listeners=PLAINTEXT://E.F.G.H:9092
zookeeper.connect=A.B.C.D:2181,E.F.G.H:2181
I was trying to start a producer from node1 or from node2 using the following command:
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic OUR_TOPIC
and I was getting the above timeout exception stacktrace, although Kafka was running on both machines.
Whether the producer was started from the leader node or from a follower, I always got the same result.
When using the commands below from any broker, I was able to produce messages:
./bin/kafka-console-producer.sh --broker-list A.B.C.D:9092 --topic OUR_TOPIC
or
./bin/kafka-console-producer.sh --broker-list E.F.G.H:9092 --topic OUR_TOPIC
or
./bin/kafka-console-producer.sh --broker-list A.B.C.D:9092,E.F.G.H:9092 --topic OUR_TOPIC
So the root cause is that the Kafka broker internally uses the listeners=PLAINTEXT://E.F.G.H:9092 property when a producer connects, and the address given to the producer must match it. Changing this property to listeners=PLAINTEXT://localhost:9092 would make our very first command work.
Had this issue:
Using Hortonworks HDP 2.5.
Kerberisation enabled
Fixed by providing the correct security protocol and ports.
Example commands:
./kafka-console-producer.sh --broker-list sand01.intranet:6667,san02.intranet:6667,san03.intranet:6667 --topic test --security-protocol PLAINTEXTSASL
./kafka-console-consumer.sh --zookeeper sand01:2181 --topic test --from-beginning --security-protocol PLAINTEXTSASL
In my case, I am using Kafka in Docker with OpenShift, and I was getting the same problem. It got fixed when I passed the environment variable KAFKA_LISTENERS with a value of PLAINTEXT://:9092. This eventually creates an entry listeners=PLAINTEXT://:9092 in server.properties.
The listener doesn't have to include a hostname.
Another scenario here. I had no clue what was happening until I found a Kafka log with the following message:
Caused by: java.lang.IllegalArgumentException: Invalid version for API key 3: 2
Apparently the producer was using a newer kafka-client (Java) than the Kafka server, and the API used was invalid (client on 1.1 and server on 10.0). On the client/producer side I got:
Error producing to topic Failed to update metadata after 60000 ms.
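If you suspect such a client/broker version mismatch, Kafka ships a tool that lists the API versions a broker supports (available since 0.10.2; a hedged suggestion, since the exact output depends on your versions):

./kafka-broker-api-versions.sh --bootstrap-server localhost:9092

Comparing its output against the client library version makes this kind of "Invalid version" error much easier to spot.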
For Apache Kafka v2.11-1.1.0
Start zookeeper server:
$ bin/zookeeper-server-start.sh config/zookeeper.properties
Start kafka server:
$ bin/kafka-server-start.sh config/server.properties
Create a topic named "my_topic":
$ bin/kafka-topics.sh --create --topic my_topic --zookeeper localhost:2181 --replication-factor 1 --partitions 1
Start the producer:
$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my_topic
Start the consumer:
$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my_topic --from-beginning
I use Apache Kafka on a Hortonworks (HDP 2.X release) installation. The error message encountered means that the Kafka producer was not able to push the data to the segment log file. From a command-line console, that would mean two things:
You are using incorrect port for the brokers
Your listener config in server.properties are not working
If you encounter the error message while writing via the Scala API, additionally check the connection to the Kafka cluster using telnet <cluster-host> <broker-port>.
NOTE: If you are using the Scala API to create a topic, it takes some time for the brokers to learn about the newly created topic. So, immediately after topic creation, producers might fail with the error Failed to update metadata after 60000 ms; one way to handle this is sketched below.
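One way to avoid that race from Scala is to block on the topic-creation future before producing. A minimal sketch, assuming the AdminClient API (kafka-clients 0.11+) rather than whatever admin utility was originally used:

import java.util.{Collections, Properties}
import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig, NewTopic}

object CreateTopicThenProduce {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.30.1.5:6667") // HDP broker port used in this answer
    val admin = AdminClient.create(props)
    try {
      // Blocks until the brokers actually know about the topic, so an
      // immediate produce doesn't hit "Failed to update metadata"
      admin.createTopics(Collections.singleton(new NewTopic("rdl_test_2", 1, 1.toShort))).all().get()
    } finally {
      admin.close()
    }
  }
}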
I did the following checks in order to resolve this issue:
The first difference, once I checked via Ambari, is that Kafka brokers listen on port 6667 on HDP 2.x (vanilla Apache Kafka uses 9092).
listeners=PLAINTEXT://localhost:6667
Next, use the IP instead of localhost.
I executed netstat -na | grep 6667:
tcp 0 0 192.30.1.5:6667 0.0.0.0:* LISTEN
tcp 1 0 192.30.1.5:52242 192.30.1.5:6667 CLOSE_WAIT
tcp 0 0 192.30.1.5:54454 192.30.1.5:6667 TIME_WAIT
So, I modified the producer call to use the IP and not localhost:
./kafka-console-producer.sh --broker-list 192.30.1.5:6667 --topic rdl_test_2
To check whether new records are being written, monitor the /kafka-logs folder:
cd /kafka-logs/<topic name>/
ls -lart
-rw-r--r--. 1 kafka hadoop 0 Feb 10 07:24 00000000000000000000.log
-rw-r--r--. 1 kafka hadoop 10485756 Feb 10 07:24 00000000000000000000.timeindex
-rw-r--r--. 1 kafka hadoop 10485760 Feb 10 07:24 00000000000000000000.index
Once the producer successfully writes, the segment log file 00000000000000000000.log will grow in size.
See the size below:
-rw-r--r--. 1 kafka hadoop 10485760 Feb 10 07:24 00000000000000000000.index
-rw-r--r--. 1 kafka hadoop 45 Feb 10 09:16 00000000000000000000.log
-rw-r--r--. 1 kafka hadoop 10485756 Feb 10 07:24 00000000000000000000.timeindex
At this point, you can run kafka-console-consumer.sh:
./kafka-console-consumer.sh --bootstrap-server 192.30.1.5:6667 --topic rdl_test_2 --from-beginning
The response is hello world.
After this step, if you want to produce messages via the Scala API, change the listeners value (from localhost to a public IP) and restart the Kafka brokers via Ambari:
listeners=PLAINTEXT://192.30.1.5:6667
A sample producer is as follows:
package com.scalakafka.sample

import java.util.Properties
import java.util.concurrent.TimeUnit

import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

class SampleKafkaProducer {

  // Producer configuration pointing at the broker's advertised listener (port 6667 on HDP)
  case class KafkaProducerConfigs(brokerList: String = "192.30.1.5:6667") {
    val properties = new Properties()
    val batchsize: java.lang.Integer = 1
    properties.put("bootstrap.servers", brokerList)
    properties.put("key.serializer", classOf[StringSerializer])
    properties.put("value.serializer", classOf[StringSerializer])
    properties.put("batch.size", batchsize)
    // properties.put("linger.ms", 1)
    // properties.put("buffer.memory", 33554432)
  }

  val producer = new KafkaProducer[String, String](KafkaProducerConfigs().properties)

  def produce(topic: String, messages: Iterable[String]): Unit = {
    messages.foreach { m =>
      println(s"Sending $topic and message is $m")
      // send() returns a Future; .get() blocks until the broker acknowledges
      val result = producer.send(new ProducerRecord(topic, m)).get()
      println(s"the write status is ${result}")
    }
    producer.flush()
    // give in-flight requests a short window to complete before closing
    producer.close(10L, TimeUnit.MILLISECONDS)
  }
}
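A hypothetical usage of the class above (the topic name is just an example; note that produce() closes the producer, so it is meant to be called once):

object SampleKafkaProducerApp {
  def main(args: Array[String]): Unit = {
    val sample = new SampleKafkaProducer
    sample.produce("rdl_test_2", Seq("hello", "world"))
  }
}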
Hope this helps someone.
Adding the following properties after the topic option helped with the same issue:
... --topic XXX --property "parse.key=true" --property "key.separator=:"
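With those properties set, the console producer splits every input line on the first ":" into a record key and value, so typing, for example:

mykey:myvalue

sends a record whose key is mykey and whose value is myvalue (lines without the separator raise an error while parse.key is true).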
Hope this helps someone.