kafka.admin.TopicCommand Failing - apache-kafka

I am using a single-node Kafka v0.10.2 (16 GB RAM, 8 cores) and a single-node ZooKeeper v3.4.9 (4 GB RAM, 1 core). I have 64 consumer groups and 500 topics, each with 250 partitions. Commands that require only the Kafka broker run fine, for example:
./kafka-consumer-groups.sh --bootstrap-server localhost:9092
--describe --group
But when I execute an admin command such as create topic or alter topic, for example:
./kafka-topics.sh --create --zookeeper :2181
--replication-factor 1 --partitions 1 --topic
the following exception is displayed:
Error while executing topic command : replication factor: 1 larger than available brokers: 0
[2017-11-16 11:22:13,592] ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: replication factor: 1 larger than available brokers: 0 (kafka.admin.TopicCommand$)
I checked that my broker is up. In server.log the following warnings appear:
[2017-11-16 11:14:26,959] WARN Client session timed out, have not heard from server in 15843ms for sessionid 0x15aa7f586e1c061 (org.apache.zookeeper.ClientCnxn)
[2017-11-16 11:14:28,795] WARN Unable to reconnect to ZooKeeper service, session 0x15aa7f586e1c061 has expired (org.apache.zookeeper.ClientCnxn)
[2017-11-16 11:21:46,055] WARN Unable to reconnect to ZooKeeper service, session 0x15aa7f586e1c067 has expired (org.apache.zookeeper.ClientCnxn)
Below mentioned is my Kafka server configuration :
broker.id=1
delete.topic.enable=true
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/kafka/data/logs
num.partitions=1
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=<zookeeperIP>:2181
zookeeper.connection.timeout.ms=6000
Zookeeper Configuration is :
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/zookeeper/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
autopurge.snapRetainCount=20
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=48
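The session-expiry warnings above can be understood through ZooKeeper's tick-based session timeout: the server clamps a client's requested session timeout to between 2× and 20× tickTime, and Kafka 0.10 requests its zookeeper.session.timeout.ms (default 6000 ms). A small sketch of that arithmetic, using the tickTime=2000 from the config above (the function name is illustrative, not part of any Kafka or ZooKeeper API):

```python
def negotiated_session_timeout(requested_ms, tick_time_ms):
    """ZooKeeper clamps the requested client session timeout
    to the window [2 * tickTime, 20 * tickTime]."""
    lo, hi = 2 * tick_time_ms, 20 * tick_time_ms
    return max(lo, min(requested_ms, hi))

# tickTime=2000 from the ZooKeeper config above; 6000 ms is
# Kafka 0.10's default zookeeper.session.timeout.ms request.
timeout = negotiated_session_timeout(6000, 2000)
print(timeout)          # 6000 -> the request falls inside [4000, 40000]
print(15843 > timeout)  # True -> the 15843 ms silence in the log expires the session
```

If the broker really goes silent toward ZooKeeper for ~16 s (GC pauses, overload), the session expires and the broker's ephemeral registration under /brokers/ids disappears, which matches the "available brokers: 0" error.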
I am not able to figure out which configuration to tune or what I am missing. Any help will be appreciated.

When you run the topic command with the --zookeeper argument, like
./kafka-topics.sh --create --zookeeper :2181 --replication-factor 1
--partitions 1 --topic
the tool asks ZooKeeper for the broker details. If the broker details are available in ZooKeeper, it can connect to the broker.
In your scenario, I think ZooKeeper has lost the broker details. ZooKeeper stores all of its configuration in a tree of paths.
To check whether ZooKeeper still has the broker path, log into the ZooKeeper shell using bin/zkCli.sh -server localhost:2181.
After a successful connection, run ls / and you will see output like this:
[controller, controller_epoch, brokers, zookeeper, admin, isr_change_notification, consumers, config]
Then run ls /brokers; the output will be [ids, topics, seqid].
Then run ls /brokers/ids; the output will be [0], an array of broker ids. If the array is empty ([]), that means no broker details are present in your ZooKeeper.
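The bracketed list that zkCli prints for ls /brokers/ids is easy to turn into a registered/unregistered decision; a hypothetical helper (the parsing is illustrative, not part of any Kafka tool):

```python
def broker_ids(zkcli_output):
    """Parse the bracketed list printed by zkCli's `ls /brokers/ids`,
    e.g. '[0]', '[0, 1, 2]' or '[]', into a list of broker ids."""
    inner = zkcli_output.strip().strip("[]")
    return [int(part) for part in inner.split(",") if part.strip()]

print(broker_ids("[0]"))  # [0] -> one broker is registered
print(broker_ids("[]"))   # []  -> no brokers: restart broker and ZooKeeper
```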
In that case, you need to restart your broker and ZooKeeper.
Update:
This problem should not normally happen; it means your ZooKeeper server is being closed (killed) or is losing the broker path.
To guard against this, it is better to run two more ZooKeepers, i.e. a complete three-node ZooKeeper ensemble.
Locally, use localhost:2181, localhost:2182, localhost:2183.
In a cluster, use three instances: zookeeper1:2181, zookeeper2:2181, zookeeper3:2181.
A three-node ensemble keeps its quorum through one node failure.
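The failure tolerance of a ZooKeeper ensemble follows from majority quorum; a quick sketch of the arithmetic:

```python
def tolerated_failures(ensemble_size):
    """A ZooKeeper ensemble needs a strict majority (quorum) of
    ensemble_size // 2 + 1 nodes; the remaining nodes may fail."""
    quorum = ensemble_size // 2 + 1
    return ensemble_size - quorum

print(tolerated_failures(3))  # 1 -> a 3-node ensemble survives one failure
print(tolerated_failures(5))  # 2
```

This is also why adding a second node alone does not help: a 2-node ensemble still tolerates zero failures.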
Then create the topic with the following command:
./kafka-topics.sh --create --zookeeper
localhost:2181,localhost:2182,localhost:2183 --replication-factor 1
--partitions 1 --topic

Related

Unable to produce message when leader is not available

I am unable to produce a message when the leader is not available. I created a new topic with replication factor 2 using the command below:
~/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --topic TestTopic202 --partitions 1 --replication-factor 2
Then
~/kafka/bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic TestTopic202
Output:
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Topic:TestTopic202 PartitionCount:1 ReplicationFactor:2 Configs:
Topic: TestTopic202 Partition: 0 Leader: 1 Replicas: 1,2 Isr: 1,2
After creating the topic I stopped broker-1, the leader for this topic, to test fault tolerance. I expected broker-2 to be elected leader, but instead I received a broker-down error message.
~/kafka/bin/kafka-console-producer.sh --broker-list 34.93.59.30:9092 --topic TestTopic202
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
>Hi
After bringing it down:
^C[kafka#hgtestsrv1 ~]$ ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic TestTopic202
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
>Hi
[2019-09-03 12:25:47,305] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2019-09-03 12:25:47,407] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2019-09-03 12:25:47,511] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2019-09-03 12:25:47,765] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2019-09-03 12:25:48,222] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
^Corg.apache.kafka.common.KafkaException: Producer closed while send in progress
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:862)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:839)
at kafka.tools.ConsoleProducer$.send(ConsoleProducer.scala:75)
at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:57)
at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
Caused by: org.apache.kafka.common.KafkaException: Requested metadata update after close
at org.apache.kafka.clients.Metadata.awaitUpdate(Metadata.java:200)
at org.apache.kafka.clients.producer.KafkaProducer.waitOnMetadata(KafkaProducer.java:982)
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:859)
... 4 more
Can anyone tell me how to test fault tolerance?
My server.properties
default.replication.factor=2
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=2
transaction.state.log.min.isr=1
Before producing a message, Kafka fetches metadata for the topic: how many partitions it has, which broker is the leader for each partition, and so on.
Approach #1. It is important that the bootstrap servers (broker list) you give your producer point to a reachable broker. You said you brought broker-1 down, so you cannot include it as the only entry in your Kafka producer's bootstrap servers.
The following configuration needs to be changed.
bootstrap.servers=<IP>:<Port>
Set this to your broker-2 and try again.
For your question of testing fault tolerance...
Your approach is right, but you must include multiple (if not all) brokers in your producer/consumer bootstrap.servers property.
You gave localhost:9092. Is this the broker you brought down? Where is the other broker running? The producer must be given a running broker.
Check to see if the other broker is running and is registered with the same zookeeper.
Update:
You gave two different addresses, 34.93.59.30:9092 and then localhost:9092; I suspect they are two different brokers on two different machines.
Approach #2. If that is the case, ensure (as said before) that your localhost:9092 broker is registered with the same ZooKeeper as the 34.93.59.30:9092 broker (most likely 34.93.59.30:2181 if you use the default ports).
So in the server.properties of your localhost:9092 broker (your second broker), give the same ZooKeeper as your broker-1:
zookeeper.connect=34.93.59.30:2181
And first of all, ensure that your localhost machine can actually connect to that ZooKeeper.
Approach #3. If you still cannot figure this out, check the output of describe topics
~/kafka/bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic TestTopic202
after you have brought broker-1 down. It should show you the current leader.
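To read the describe output programmatically, one can pull the Leader and Isr fields out of a partition line; a sketch (the line format is taken from the output shown above, the function name is illustrative):

```python
import re

def leader_and_isr(describe_line):
    """Extract the leader id and in-sync replica set from one
    partition line of `kafka-topics.sh --describe` output."""
    leader = int(re.search(r"Leader:\s*(-?\d+)", describe_line).group(1))
    isr_text = re.search(r"Isr:\s*([\d,]+)", describe_line).group(1)
    return leader, [int(x) for x in isr_text.split(",")]

line = "Topic: TestTopic202\tPartition: 0\tLeader: 1\tReplicas: 1,2\tIsr: 1,2"
print(leader_and_isr(line))  # (1, [1, 2])
```

If failover worked, the leader after stopping broker-1 should become 2 and the Isr should shrink to just 2; a leader of -1 means no leader was elected.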

Kafka producer and consumer on separate computers aren't communicating

I'm using kafka_2.11-1.1.0. This is my server.properties file:
broker.id=1
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.1.110:2181,192.168.1.64:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
On the second computer, broker.id=2. I got the IP addresses for the zookeeper.connect line by typing ipconfig into the command prompt and using the IPv4 address under 'Ethernet adapter Local Area Connection' on one computer and the IPv4 address under 'Wireless LAN adapter Wi-Fi' on the other.
I ran these commands on each computer (to anyone following along, run the first on both computers before running the second):
bin\windows\zookeeper-server-start.bat config\zookeeper.properties
bin\windows\kafka-server-start.bat config\server.properties
On the first computer, I made a topic and started a producer console:
bin\windows\kafka-topics.bat --create --zookeeper 192.168.1.110:2181 --replication-factor 2 --partitions 1 --topic test
bin\windows\kafka-console-producer.bat --broker-list 192.168.1.110:2181 --topic test
On the second one, I started a consumer console:
bin\windows\kafka-console-consumer.bat --bootstrap-server 192.168.1.64:2181 --topic test
When I tried sending a message, the consumer did not receive it. The ZooKeeper server console on each computer looped through the following messages (about once a second), with each IP value corresponding to its respective PC and the port number increasing by one with each loop:
INFO Accepted socket connection from /192.168.1.110:55371 (org.apache.zookeeper.server.NIOServerCnxnFactory)
WARN Exception causing close of session 0x0 due to java.io.EOFException (org.apache.zookeeper.server.NIOServerCnxn)
INFO Closed socket connection for client /192.168.1.110:55371 (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
In the producer console, this error was received after a minute:
ERROR Error when sending message to topic test with key: null, value: 6 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
How do I fix this? Any help would be appreciated.
UPDATE - SOLVED - courtesy of Victor:
Change:
bin\windows\kafka-console-producer.bat --broker-list 192.168.1.110:2181 --topic test
and
bin\windows\kafka-console-consumer.bat --bootstrap-server 192.168.1.64:2181 --topic test
To:
bin\windows\kafka-console-producer.bat --broker-list 192.168.1.110:9092 --topic test
and
bin\windows\kafka-console-consumer.bat --bootstrap-server 192.168.1.64:9092 --topic test
UPDATE 2
For anyone who is following this to set up two computers with Kafka - I found that this method doesn't always work. The permanent solution I later found was to use the same IP for both computers. I used the IP of the computer with the ethernet connection, which happened to be the one with the producer.
I believe you have to pass the producer a list of Kafka brokers, not a ZooKeeper quorum.
So change this:
bin\windows\kafka-console-producer.bat --broker-list 192.168.1.110:2181
To something like this:
bin\windows\kafka-console-producer.bat --broker-list 192.168.1.110:9092
(I'm assuming you are running your Kafka server there.)
I got a similar error writing to Kafka with Spark Streaming:
Error connecting to Zookeeper with Spark Streaming Structured

Issues with Apache Kafka Quickstart

I am new to Kafka and seem to be having several issues with the 'Quickstart' guide for Apache Kafka found here:
https://kafka.apache.org/quickstart#quickstart_kafkaconnect
Ultimately I am trying to learn how to load a Kafka queue with many Kafka messages, so the Step 7 part of this Quickstart guide seemed relevant.
I installed the binary download (Scala 2.11 - kafka_2.11-1.1.0.tgz ) found here:
https://kafka.apache.org/downloads
I had initially tried to jump straight to step 7, but realised after finding this question (Kafka Connect implementation errors) that I had to do the few steps prior to it.
Therefore I followed the first step successfully:
tar -xzf kafka_2.11-1.1.0.tgz
cd kafka_2.11-1.1.0
Then I followed step 2:
bin/zookeeper-server-start.sh config/zookeeper.properties
But I get the error
ERROR Unexpected exception, exiting abnormally (org.apache.zookeeper.server.ZooKeeperServerMain)
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:90)
at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:117)
at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:87)
at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:53)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)
But when I run the next command in that same step:
bin/kafka-server-start.sh config/server.properties
The Kafka server seems to run successfully?
So then I tried to continue to step 3 to create a topic:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
But this produces the error:
Error while executing topic command : Replication factor: 1 larger than available brokers: 0.
[2018-04-09 14:13:26,908] ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 1 larger than available brokers: 0.
(kafka.admin.TopicCommand$)
Then trying step 4:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
This seems to work and I can write a message, but then I get a connection error (probably because the previous steps haven't worked successfully):
kafka_2.11-1.1.0 user$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
>This is a message
[2018-04-09 14:17:52,631] WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-04-09 14:17:52,687] WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
Does anyone know why these issues are occurring and how I can fix them? I can't find any more information in that tutorial about these problems.
As the error suggests, you have something already running on ZooKeeper's default port. Either stop it or change the ZooKeeper properties file to use another port.
The address localhost:2181 is already in use. Since ZooKeeper cannot start, the Kafka broker won't start either. The replication factor must be less than or equal to the number of available brokers, and since no broker is available, the following error is reported (even if you are using --replication-factor 1):
Error while executing topic command : Replication factor: 1 larger than available brokers: 0.
[2018-04-09 14:13:26,908] ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 1 larger than available brokers: 0.
(kafka.admin.TopicCommand$)
You either need to stop the process that is listening on port 2181 or change the ZK default port to one that is not currently in use.
To see which process (PID) is listening on port 2181, run:
lsof -i -n -P | grep 2181
If you want to kill that process, then run
kill -9 PID
where PID is the process ID you got from the lsof command.
Otherwise, change the port in the zookeeper.properties file by modifying the clientPort=2181 parameter, and then change the zookeeper.connect=localhost:2181 parameter in the server.properties file accordingly.
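Whether a port is already bound can also be checked directly from code; a small sketch using only the standard library (the helper is illustrative):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is accepting connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

# Example: bind a throwaway listener and probe it.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
print(port_in_use(port))          # True while the listener is up
listener.close()
```

Running port_in_use(2181) before starting ZooKeeper tells you the same thing the lsof command does, without needing root.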

Kafka Producer not able to send messages

I am very new to Kafka, using Kafka 0.11.
Number of alive brokers '1' does not meet the required replication factor '3' for the offsets topic (configured via 'offsets.topic.replication.factor')
I get the above error when sending a message to a topic.
kafka-topics --zookeeper localhost:2181 --topic test --describe
Topic:test1 PartitionCount:1 ReplicationFactor:1 Configs:
Topic: test1 Partition: 0 Leader: 0 Replicas: 0 Isr: 0
How are you starting the broker? What is in your server.properties file? The one provided with the downloaded package should have the following line:
offsets.topic.replication.factor=1
Just to be clear, the error you see is not related to the topic you are trying to publish to. Today, Kafka no longer saves consumer topic offsets in ZooKeeper but in an internal topic named __consumer_offsets. Of course, if you have 1 broker you can't have a replication factor of 3. So I'd like to take a look at your server.properties: if the above property is missing, the default is 3.
In my case, the error was similar:
ERROR [KafkaApi-2] Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
Cluster : 2 Brokers (ID=1,ID=2) with hostname-1 and hostname-2
Kafka version : 1.0.1
listeners=PLAINTEXT://:9090,SSL://:9091,PLAINTEXT_EXT://:9092,SSL_EXT://:9093,SASL_SSL://:9094,SASL_PLAINTEXT://:9095
and both broker server.properties is set to offsets.topic.replication.factor=1
but I had configured my advertised hostname as hostname-1 on both brokers for the protocols used for inter-broker communication, and thus the broker with ID=2 kept giving the above error.
advertised.listeners=PLAINTEXT://hostname-2:9090,SSL://hostname-2:9091,PLAINTEXT_EXT://<EXTERNAL_IP>:9092,SSL_EXT://<EXTERNAL_IP>:9093,SASL_SSL://hostname-1:9094,SASL_PLAINTEXT://hostname-1:9095
Correcting the SASL_SSL and SASL_PLAINTEXT entries fixed this error.
PS: SASL_PLAINTEXT is the security.inter.broker.protocol in this cluster. This error also seems to be related to port availability.
This means your cluster has a default replication factor set to some number; to override it, edit server.properties and set the replication factor parameter to a value of your choice:
offsets.topic.replication.factor=1
In my case I wanted to run a single-node Kafka with a single-node ZooKeeper; in that case you need to create the topic with replication factor 1, otherwise you will get an error:
mansoor@c2dkb05-usea1d:~$ ./bin/kafka-topics.sh --create --zookeeper zookeeper-svc:2181 --replication-factor 2 --partitions 2 --topic mqttDeviceEvents
Error while executing topic command : Replication factor: 2 larger than available brokers: 1.
[2020-06-18 14:39:46,533] ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 2 larger than available brokers: 1.
The correct way to create the topic, when you have a single-node Kafka, is:
mansoor@c2dkb05-usea1d:~$ ./bin/kafka-topics.sh --create --zookeeper zookeeper-svc:2181 --replication-factor 1 --partitions 2 --topic mqttDeviceEvents
Created topic mqttDeviceEvents.
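The rule behind all of these errors is the same; a minimal sketch of the check the tools perform (the function name is illustrative, not Kafka's actual code):

```python
def check_replication_factor(replication_factor, alive_brokers):
    """Topic creation requires at least as many live brokers as the
    requested replication factor."""
    if replication_factor > alive_brokers:
        raise ValueError(
            f"Replication factor: {replication_factor} larger than "
            f"available brokers: {alive_brokers}."
        )

check_replication_factor(1, 1)      # fine: single-node Kafka, RF 1
try:
    check_replication_factor(2, 1)  # the failing command above
except ValueError as e:
    print(e)
```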

What is the command to list down all the available brokers in Apache Kafka?

I want to run a multi node cluster in Apache Kafka.
I have made three server.properties files: server, server1 and server2. I have also given them different broker ids and different port numbers. Still, on running the script kafka-topics.sh with a replication factor of 3, it throws an error saying that replication factor 3 is larger than the number of brokers, 0.
I used this command :
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replica-topic
Error shown is
Error while executing topic command replication factor: 3 larger than available brokers: 0
kafka.admin.AdminOperationException: replication factor: 3 larger than available brokers: 0
at kafka.admin.AdminUtils$.assignReplicasToBrokers(AdminUtils.scala:70)
at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:171)
at kafka.admin.TopicCommand$.createTopic(TopicCommand.scala:93)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:55)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
Could you let me know where I am going wrong ?
I think you should start at least 3 Kafka servers, to make sure the number of brokers is greater than or equal to the replication factor.
First, we make a config file for each of the brokers:
cp config/server.properties config/server-1.properties
cp config/server.properties config/server-2.properties
Now edit these new files and set the following properties:
config/server-1.properties:
broker.id=1
listeners=PLAINTEXT://:9093
log.dir=/tmp/kafka-logs-1
config/server-2.properties:
broker.id=2
listeners=PLAINTEXT://:9094
log.dir=/tmp/kafka-logs-2
The broker.id property is the unique and permanent name of each node in the cluster. We have to override the port and log directory only because we are running all of these on the same machine, and we want to keep the brokers from trying to register on the same port or overwrite each other's data.
We already have Zookeeper and our single node started, so we just need to start the two new nodes:
bin/kafka-server-start.sh config/server-1.properties
bin/kafka-server-start.sh config/server-2.properties
Now create a new topic with a replication factor of three:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
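The manual edits above (a unique broker.id, port, and log.dir per broker) can also be scripted; a sketch, assuming the same overrides as this answer (the helper and the port scheme 9092 + broker_id are illustrative):

```python
def broker_overrides(base_lines, broker_id):
    """Rewrite broker.id, listeners and log.dir in a copied
    server.properties so each local broker is unique."""
    overrides = {
        "broker.id": str(broker_id),
        "listeners": f"PLAINTEXT://:{9092 + broker_id}",
        "log.dir": f"/tmp/kafka-logs-{broker_id}",
    }
    out = []
    for line in base_lines:
        key = line.split("=", 1)[0].strip()
        out.append(f"{key}={overrides[key]}" if key in overrides else line)
    return out

base = ["broker.id=0", "listeners=PLAINTEXT://:9092", "log.dir=/tmp/kafka-logs"]
print(broker_overrides(base, 1))
# ['broker.id=1', 'listeners=PLAINTEXT://:9093', 'log.dir=/tmp/kafka-logs-1']
```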