I am trying to create my first topic by using the command below:
./bin/kafka-topics.sh --bootstrap-server localhost:2181 --create --topic test --partitions 3 --replication-factor 1
and then I am getting the error below.
[2021-10-07 14:03:15,144] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:2181) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-10-07 14:03:15,251] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:2181) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-10-07 14:03:15,418] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:2181) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
Could you please advise on what exactly the issue is here and how it can be resolved?
In the Apache Kafka documentation, under Notable changes in 2.2.0, it states that:
The bin/kafka-topics.sh command line tool is now able to connect directly to brokers with --bootstrap-server instead of zookeeper. The old --zookeeper option is still available for now. Please read KIP-377 for more information.
(As of Apache Kafka 3.0.0, the --zookeeper flag was removed.)
It's unclear what version of Kafka you are using, but given that it accepted the --bootstrap-server flag, you're on at least 2.2.0 (probably below 3.0.0 given the directory name, but that's not important for this question).
If you're using --bootstrap-server, then you want to connect to the port associated with the Kafka broker, not Apache ZooKeeper.
Recall that
The bin/kafka-topics.sh command line tool is now able to connect directly to brokers
Therefore, port 9092 (the Kafka broker's default) is typically used, so your command should be:
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic test --partitions 3 --replication-factor 1
However, --zookeeper is typically used with port 2181, since that's the port Apache ZooKeeper tends to run on.
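If you are not sure which port your broker is actually listening on, here is a quick sanity check (a sketch, assuming a typical local install with the config under ./config/):

grep -E '^(listeners|port)' ./config/server.properties
netstat -tlnp | grep -E '9092|2181'

The first command shows the configured listener; the second shows which of the two ports (9092 for Kafka, 2181 for ZooKeeper) actually has a listening socket.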
I have installed confluent-oss-5.0.0 on an Azure VM and exposed all necessary ports so it can be accessed via the public IP address.
I tried changing the following things in etc/kafka/server.properties to achieve this, but no luck.
Approach - 1
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://<publicIP>:9092
--------------------------------------
Approach - 2
advertised.listeners=PLAINTEXT://<publicIP>:9092
--------------------------------------
Approach - 3
listeners=PLAINTEXT://<publicIP>:9092
I got the errors below:
pj@pj-HP-EliteBook-840-G1:~/confluent-kafka/confluent-oss-5.0.0/bin$ kafka-console-producer --broker-list <publicIp>:9092 --topic pj_test123
>dfsds
[2019-03-25 19:13:38,784] WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
pj@pj-HP-EliteBook-840-G1:~/confluent-kafka/confluent-oss-5.0.0/bin$ kafka-console-producer --broker-list <publicIp>:9092 --topic pj_test123
>message1
>message2
>[2019-03-25 19:20:13,216] ERROR Error when sending message to topic pj_test123 with key: null, value: 3 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s) for pj_test123-0: 1503 ms has passed since batch creation plus linger time
[2019-03-25 19:20:13,218] ERROR Error when sending message to topic pj_test123 with key: null, value: 3 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
pj@pj-HP-EliteBook-840-G1:~/confluent-kafka/confluent-oss-5.0.0/bin$ kafka-console-consumer --bootstrap-server <publicIp>:9092 --topic pj_test123 --from-beginning
[2019-03-25 19:29:27,742] WARN [Consumer clientId=consumer-1, groupId=console-consumer-42352] Error while fetching metadata with correlation id 2 : {pj_test123=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
pj@pj-HP-EliteBook-840-G1:~/confluent-kafka/confluent-oss-5.0.0/bin$ kafka-console-consumer --bootstrap-server <publicIp>:9092 --topic pj_test123 --from-beginning
[2019-03-25 19:27:06,589] WARN [Consumer clientId=consumer-1, groupId=console-consumer-33252] Connection to node 0 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
All other services like ZooKeeper, Kafka Connect, and the REST API are working fine using <publicIP>:<port>.
kafka-topics --zookeeper 13.71.115.20:2181 --list (this is working)
Ref:
Not able to access messages from confluent kafka on EC2
https://kafka.apache.org/documentation/#brokerconfigs
Why I cannot connect to Kafka from outside?
Solutions
Thanks, @Robin Moffatt, it works for me. I made the changes below, along with allowing all Kafka-related ports in Azure networking.
kafka@kafka:~/confluent-oss-5.0.0$ sudo vi etc/kafka/server.properties
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:19092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
advertised.listeners=INTERNAL://<privateIp>:9092,EXTERNAL://<publicIp>:19092
inter.broker.listener.name=INTERNAL
You need to configure both internal and external listeners for your broker. This article details how: https://rmoff.net/2018/08/02/kafka-listeners-explained/.
You will also have to give public access to port 9092 (your broker). To do that:
Go to your Virtual machine in Azure portal
Select Networking under settings in the left menu
Add inbound port rule
Add port 9092 to be accessible from anywhere
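If you prefer to script this step, the same inbound rule can be added with the Azure CLI; a minimal sketch, where myResourceGroup and myKafkaVM are placeholders for your own resource group and VM names:

az vm open-port --resource-group myResourceGroup --name myKafkaVM --port 9092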
I got this error when my Apache Beam application connected to my Kafka cluster with ACLs enabled. Please help me fix this issue.
Caused by: java.io.IOException: Reader-4: Timeout while initializing partition 'test-1'. Kafka client may not be able to connect to servers.
org.apache.beam.sdk.io.kafka.KafkaUnboundedReader.start(KafkaUnboundedReader.java:128)
org.apache.beam.runners.dataflow.worker.WorkerCustomSources$UnboundedReaderIterator.start(WorkerCustomSources.java:779)
org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.start(ReadOperation.java:361)
org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:194)
org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:159)
org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:76)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1228)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:143)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:967)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
I have a Kafka cluster with 3 nodes on GKE. I created a topic with replication factor 3 and 5 partitions.
kafka-topics --create --zookeeper zookeeper:2181 \
--replication-factor 3 --partitions 5 --topic topic
I set read permission on the topic test for the test_consumer_group consumer group.
kafka-acls --authorizer-properties zookeeper.connect=zookeeper:2181 \
--add --allow-principal User:CN=myuser.test.io --consumer \
--topic test --group 'test_consumer_group'
In my Apache Beam application, I set the configuration group.id=test_consumer_group.
I also tested with the console consumer, and it does not work either:
$ docker run --rm -v `pwd`:/cert confluentinc/cp-kafka:5.1.0 \
kafka-console-consumer --bootstrap-server kafka.xx.xx:19092 \
--topic topic --consumer.config /cert/client-ssl.properties
[2019-03-08 05:43:07,246] WARN [Consumer clientId=consumer-1, groupId=test_consumer_group]
Received unknown topic or partition error in ListOffset request for
partition test-3 (org.apache.kafka.clients.consumer.internals.Fetcher)
This looks like a communication issue between your Kafka readers and the brokers, as the message says: Kafka client may not be able to connect to servers.
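Since the cluster has ACLs enabled, it is also worth confirming that the ACL really covers the topic and consumer group being used; a hedged check, assuming the same ZooKeeper address as in the question:

kafka-acls --authorizer-properties zookeeper.connect=zookeeper:2181 --list --topic test
kafka-acls --authorizer-properties zookeeper.connect=zookeeper:2181 --list --group test_consumer_group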
I have an environment problem.
I want to use ZooKeeper and a Kafka cluster to solve my problem.
My ZooKeeper version is 3.4.12 and my Kafka version is 2.12-2.1.0.
I also changed zoo.cfg in ZooKeeper:
dataDir=D:/WEBSOCKET/zookeeper-3.4.12/data
and server.properties in Kafka:
log.dirs=D:/WEBSOCKET/kafka_2.12-2.1.0/logs
I followed all the tutorials and did everything the exact same way. I am also opening ZooKeeper before starting Kafka.
These are my commands:
1) open zookeeper (zkServer.cmd)
2) in kafka
.\bin\windows\kafka-server-start.bat .\config\server.properties
3) create topic
.\bin\windows\kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic hello
4) create a producer
.\bin\windows\kafka-console-producer.bat --bootstrap-server localhost:2181 --topic hello
5) create a consumer
.\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:2181 --topic hello
When I get to step 5, it always fails.
ZooKeeper gives me a lot of console output like:
WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@383] - Exception causing close of session 0x0: null
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted socket connection from /127.0.0.1:55192
and
2019-01-08 17:05:24,822 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1040] - Closed socket connection for client /127.0.0.1:50874 (no session established for client)
2019-01-08 17:05:25,783 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted socket connection from /127.0.0.1:56089
I don't know how to fix it. I have been googling for two days...
When I start Kafka in step 2, ZooKeeper sometimes doesn't respond at all, or shows me this:
[ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@596] - Got user-level KeeperException when processing sessionid:0x1000058f8960000 type:multi cxid:0x36 zxid:0x69 txntype:-1 reqpath:n/a aborting remaining multi ops. Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
I also googled this, but it was not helpful.
I set this in Kafka before:
advertised.host.name = localhost
listeners=PLAINTEXT://127.0.0.1:9092
My hosts file has:
127.0.0.1 localhost
Please help me create a local server; I want to start coding my project.
Thank you for reading all of this.
Producers and consumers need to use port 9092 (Kafka)
You are seeing ZooKeeper logs and errors because you are pointing --bootstrap-server / --broker-list at port 2181 (ZooKeeper).
Check the quickstart guide again.
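Concretely, steps 4 and 5 from the question should target the broker on port 9092. Note also that in Kafka 2.1 the console producer takes --broker-list rather than --bootstrap-server:

.\bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic hello
.\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic hello --from-beginning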
While producing messages in Kafka, I am getting the following error:
$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic nil_PF1_P1
hi
hello
[2016-07-19 17:06:34,542] ERROR Error when sending message to topic nil_PF1_P1 with key: null, value: 2 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[2016-07-19 17:07:34,544] ERROR Error when sending message to topic nil_PF1_P1 with key: null, value: 5 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
$ bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic nil_PF1_P1
Topic:nil_PF1_P1 PartitionCount:1 ReplicationFactor:1 Configs:
Topic: nil_PF1_P1 Partition: 0 Leader: 2 Replicas: 2 Isr: 2
Any ideas on this?
Instead of changing server.properties, include the address 0.0.0.0 in the command itself.
Instead of
/usr/bin/kafka-console-producer --broker-list Hostname:9092 --topic MyFirstTopic1
use
/usr/bin/kafka-console-producer --broker-list 0.0.0.0:9092 --topic MyFirstTopic1
It may be because of some parameters in Kafka's server.properties file. You can find more information here.
Stop the Kafka server with
cd $KAFKA_HOME/bin
./kafka-server-stop.sh
Change
listeners=PLAINTEXT://hostname:9092
to
listeners=PLAINTEXT://0.0.0.0:9092
in $KAFKA_HOME/config/server.properties
Restart the Kafka server with
$KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties
I know this is old but this may work for someone else who's dealing with it:
I changed 2 things:
1. changed the bootstrap.servers property (or the --broker-list option) to 0.0.0.0:9092
2. changed (uncommented and edited, in my case) two properties in server.properties:
listeners=PLAINTEXT://your.host.name:9092 to listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://your.host.name:9092 to advertised.listeners=PLAINTEXT://localhost:9092
I faced a similar problem, where I was able to produce and consume on localhost but not from different machines on the network. Based on a few answers, I got the clue that we essentially need to expose advertised.listeners to the producer and consumer; however, giving 0.0.0.0 was also not working, so I gave the exact IP in advertised.listeners:
advertised.listeners=PLAINTEXT://HOST.IP:9092
And I left listeners=PLAINTEXT://:9092 as it is.
With this, the broker exposes the advertised IP and port to producers and consumers.
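Taken together, the relevant server.properties lines look like this, where HOST.IP stands in for the machine's real address:

listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://HOST.IP:9092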
If you are running a Hortonworks cluster, check the listening port in Ambari.
In my case, 9092 was not my port. I went to Ambari and found the listening port was set to 6667.
It worked for me. :)
I got the same error today with confluent_kafka 0.9.2 (0x90200) and librdkafka 0.9.2 (0x90401). In my case, I had specified the wrong broker port in the Tutorialspoint example:
$ kafka-console-producer.sh --broker-list localhost:9092 --topic tutorialpoint-basic-ops-01
although my broker was started on port 9094:
$ cat server-02.properties
broker.id=2
port=9094
log.dirs=/tmp/kafka-example-logs-02
zookeeper.connect=localhost:2181
Although port 9092 was not open (checked with netstat -tunap), it took 60 s for kafka-console-producer.sh to raise an error. It looks like this tool needs a fix to:
fail faster
with a more explicit error message.
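In the meantime you can make it fail faster yourself: the 60 s wait comes from the producer's max.block.ms setting (default 60000), which recent versions of the console producer let you override; a hedged example, reusing the wrong-port command from above so it gives up after roughly 5 s:

kafka-console-producer.sh --broker-list localhost:9092 --topic tutorialpoint-basic-ops-01 --producer-property max.block.ms=5000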
I faced the above exception stack trace. I investigated and found the root cause: I hit it when I set up a Kafka cluster with two nodes, with the following settings in server.properties. Below, I denote the server.properties of Kafka nodes 1 and 2 as broker1.properties and broker2.properties.
broker1.properties settings
listeners=PLAINTEXT://A.B.C.D:9092
zookeeper.connect=A.B.C.D:2181,E.F.G.H:2181
broker2.properties settings
listeners=PLAINTEXT://E.F.G.H:9092
zookeeper.connect=A.B.C.D:2181,E.F.G.H:2181
I was trying to start a producer from node1 or from node2 using the following command:
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic OUR_TOPIC
and I was getting the above timeout exception stack trace, although Kafka was running on both machines.
Whether the producer was started from the leader node or from a follower, I always got the same error.
Using any of the commands below from either broker, I was able to produce messages:
./bin/kafka-console-producer.sh --broker-list A.B.C.D:9092 --topic OUR_TOPIC
or
./bin/kafka-console-producer.sh --broker-list E.F.G.H:9092 --topic OUR_TOPIC
or
./bin/kafka-console-producer.sh --broker-list A.B.C.D:9092,E.F.G.H:9092 --topic OUR_TOPIC
So the root cause is that the Kafka broker internally uses the listeners=PLAINTEXT://E.F.G.H:9092 property when a producer connects: the address given to --broker-list must match one of the broker's listeners. Changing this property to listeners=PLAINTEXT://localhost:9092 would make our very first command work.
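As a hedged alternative, if remote clients must also connect, you can keep listeners bound to the node's address and advertise a name that every client can resolve; for example, in broker1.properties (broker1.example.com is a placeholder hostname):

listeners=PLAINTEXT://A.B.C.D:9092
advertised.listeners=PLAINTEXT://broker1.example.com:9092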
Had this issue:
Using Hortonworks HDP 2.5.
Kerberos enabled
Fixed by providing the correct security protocol and ports.
Example commands:
./kafka-console-producer.sh --broker-list sand01.intranet:6667,san02.intranet:6667,san03.intranet:6667 --topic test --security-protocol PLAINTEXTSASL
./kafka-console-consumer.sh --zookeeper sand01:2181 --topic test --from-beginning --security-protocol PLAINTEXTSASL
In my case, I am using the Kafka Docker image with OpenShift and was getting the same problem. It got fixed when I passed the environment variable KAFKA_LISTENERS with a value of PLAINTEXT://:9092, which eventually creates an entry listeners=PLAINTEXT://:9092 in server.properties.
The listener doesn't have to have a hostname.
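For illustration, here is a minimal sketch of passing that variable with plain Docker; the image name, the ZooKeeper address, and the advertised listener are all assumptions, and other images may expect different variable names:

# zookeeper:2181 and the advertised localhost listener are placeholders
docker run -d \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  -e KAFKA_LISTENERS=PLAINTEXT://:9092 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 \
  confluentinc/cp-kafka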
Another scenario here. I had no clue what was happening until I found a Kafka log with the following message:
Caused by: java.lang.IllegalArgumentException: Invalid version for API key 3: 2
Apparently the producer was using a newer kafka-clients (Java) library than the Kafka server, and the API version used was invalid (client on 1.1 and server on 0.10). On the client/producer side, I got:
Error producing to topic Failed to update metadata after 60000 ms.
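If you hit this, a hedged way to confirm which client library version your producer actually pulls in (assuming a Maven build) is:

mvn dependency:tree | grep kafka-clients

and then compare that against the broker version.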
For Apache Kafka v2.11-1.1.0
Start zookeeper server:
$ bin/zookeeper-server-start.sh config/zookeeper.properties
Start kafka server:
$ bin/kafka-server-start.sh config/server.properties
Create a topic named "my_topic":
$ bin/kafka-topics.sh --create --topic my_topic --zookeeper localhost:2181 --replication-factor 1 --partitions 1
Start the producer:
$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my_topic
Start the consumer:
$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my_topic --from-beginning
I use Apache Kafka on a Hortonworks (HDP 2.x release) installation. The error message encountered means that the Kafka producer was not able to push the data to the segment log file. From a command-line console, that would mean one of two things:
You are using incorrect port for the brokers
Your listener config in server.properties are not working
If you encounter the error message while writing via the Scala API, additionally check the connection to the Kafka cluster using telnet <cluster-host> <broker-port>.
NOTE: If you are using the Scala API to create a topic, it takes some time for the brokers to learn about the newly created topic. So, immediately after topic creation, producers might fail with the error Failed to update metadata after 60000 ms; a short wait loop, as sketched below, works around this.
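A minimal sketch of such a wait, assuming ZooKeeper on localhost:2181 and polling until the topic reports a leader:

until ./kafka-topics.sh --zookeeper localhost:2181 --describe --topic rdl_test_2 | grep -q 'Leader:' ; do
  sleep 1
done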
I did the following checks in order to resolve this issue:
The first difference I found once I checked via Ambari is that Kafka brokers listen on port 6667 on HDP 2.x (stock Apache Kafka uses 9092):
listeners=PLAINTEXT://localhost:6667
Next, use the IP instead of localhost.
I executed netstat -na | grep 6667
tcp 0 0 192.30.1.5:6667 0.0.0.0:* LISTEN
tcp 1 0 192.30.1.5:52242 192.30.1.5:6667 CLOSE_WAIT
tcp 0 0 192.30.1.5:54454 192.30.1.5:6667 TIME_WAIT
So, I modified the producer call to use the IP and not localhost:
./kafka-console-producer.sh --broker-list 192.30.1.5:6667 --topic rdl_test_2
To check whether new records are being written, monitor the /kafka-logs folder:
cd /kafka-logs/<topic name>/
ls -lart
-rw-r--r--. 1 kafka hadoop 0 Feb 10 07:24 00000000000000000000.log
-rw-r--r--. 1 kafka hadoop 10485756 Feb 10 07:24 00000000000000000000.timeindex
-rw-r--r--. 1 kafka hadoop 10485760 Feb 10 07:24 00000000000000000000.index
Once the producer successfully writes, the segment log file 00000000000000000000.log will grow in size.
See the size below:
-rw-r--r--. 1 kafka hadoop 10485760 Feb 10 07:24 00000000000000000000.index
-rw-r--r--. 1 kafka hadoop 45 Feb 10 09:16 00000000000000000000.log
-rw-r--r--. 1 kafka hadoop 10485756 Feb 10 07:24 00000000000000000000.timeindex
At this point, you can run kafka-console-consumer.sh:
./kafka-console-consumer.sh --bootstrap-server 192.30.1.5:6667 --topic rdl_test_2 --from-beginning
The response is: hello world
After this step, if you want to produce messages via the Scala API, change the listeners value (from localhost to a public IP) and restart the Kafka brokers via Ambari:
listeners=PLAINTEXT://192.30.1.5:6667
A sample producer is as follows:
package com.scalakafka.sample

import java.util.Properties
import java.util.concurrent.TimeUnit

import org.apache.kafka.clients.producer.{ProducerRecord, KafkaProducer}
import org.apache.kafka.common.serialization.{StringSerializer, StringDeserializer}

class SampleKafkaProducer {

  case class KafkaProducerConfigs(brokerList: String = "192.30.1.5:6667") {
    val properties = new Properties()
    val batchsize: java.lang.Integer = 1
    properties.put("bootstrap.servers", brokerList)
    properties.put("key.serializer", classOf[StringSerializer])
    properties.put("value.serializer", classOf[StringSerializer])
    // properties.put("serializer.class", classOf[StringDeserializer])
    properties.put("batch.size", batchsize)
    // properties.put("linger.ms", 1)
    // properties.put("buffer.memory", 33554432)
  }

  val producer = new KafkaProducer[String, String](KafkaProducerConfigs().properties)

  def produce(topic: String, messages: Iterable[String]): Unit = {
    messages.foreach { m =>
      println(s"Sending $topic and message is $m")
      // send() returns a Future; .get() blocks until the broker acknowledges
      val result = producer.send(new ProducerRecord(topic, m)).get()
      println(s"the write status is ${result}")
    }
    producer.flush()
    producer.close(10L, TimeUnit.MILLISECONDS)
  }
}
Hope this helps someone.
Adding properties like these after the topic helped with the same issue (note there should be no spaces around the = sign, or the properties won't parse):
... --topic XXX --property "parse.key=true" --property "key.separator=:"
Hope this helps someone.