multi-broker kafka InvalidReplicationFactorException - apache-kafka

I'm having some problems while getting kafka to work in a multi-broker cluster.
It's managed by Ambari 2.6.3.0, and I installed the Kafka broker on both hosts. Both show as Started with no alerts, but when I try to create a topic with a replication factor of 2, it fails.
bin/kafka-topics.sh --create --zookeeper master:2181 --replication-factor 2 --partitions 1 --topic myTopic
which results in the following error:
bigdata#master:/usr/hdp/2.6.3.0-57/kafka$ bin/kafka-topics.sh --create --zookeeper master:2181 --replication-factor 2 --partitions 1 --topic myTopic
Error while executing topic command : replication factor: 2 larger than available brokers: 1
[2017-09-19 08:45:00,430] ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: replication factor: 2 larger than available brokers: 1
(kafka.admin.TopicCommand$)
When looking in ZooKeeper for active brokers, I only get one id, so the second broker does not seem to have joined the cluster.
bigdata#master:/usr/hdp/2.6.3.0-57/zookeeper$ bin/zkCli.sh
[zk: master:2181(CONNECTED) 0] connect master:2181
[zk: master:2181(CONNECTED) 1] ls /brokers/ids
[1001]
I would appreciate any answer or suggestion on how to get both brokers listed together so I can create the topic with replication factor 2.
EDIT: added logs (on pastebin, to avoid pasting such a long text here)
server.log: https://pastebin.com/rjKUxE5y
Ambari: [screenshot]

Related

Error when creating a topic in Apache Kafka

Does anyone know how to fix the error when creating a topic in Kafka?
C:\kafka\bin\windows>kafka-topics.bat --create --bootstrap-server localhost:2181 --replication-factor 1 --partition 1 --topic test
Exception in thread "main" joptsimple.UnrecognizedOptionException: partition is not a recognized option
at joptsimple.OptionException.unrecognizedOption(OptionException.java:108)
at joptsimple.OptionParser.handleLongOptionToken(OptionParser.java:510)
at joptsimple.OptionParserState$2.handleArgument(OptionParserState.java:56)
at joptsimple.OptionParser.parse(OptionParser.java:396)
at kafka.admin.TopicCommand$TopicCommandOptions.<init>(TopicCommand.scala:567)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:47)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
The parameter is --partitions, not --partition.
Also, the bootstrap server normally runs on port 9092 by default (2181 is ZooKeeper's port):
C:\kafka\bin\windows>kafka-topics.bat --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test
In recent versions, you don't have to create topics through ZooKeeper; you can create them directly against Kafka's bootstrap servers. ZooKeeper is planned to be removed altogether in a later version, so current versions are preparing for that.
Use the command below to create a new topic. I suggest adding the following parameters as well to control the topic behaviour appropriately; note that each topic-level setting needs its own --config flag:
kafka-topics.bat --create --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1 --config retention.ms=604800000 --config segment.bytes=26214400 --config retention.bytes=1048576000 --config min.insync.replicas=1 --topic test
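The numeric values in that command can be sanity-checked with shell arithmetic (a sketch; the variable names simply mirror Kafka's topic-level config keys):

```shell
# Sanity-check the numeric config values used above.
retention_ms=$((7 * 24 * 60 * 60 * 1000))   # 7 days in milliseconds
segment_bytes=$((25 * 1024 * 1024))         # 25 MiB per log segment
retention_bytes=$((1000 * 1024 * 1024))     # 1000 MiB retained per partition

echo "retention.ms=${retention_ms}"         # 604800000
echo "segment.bytes=${segment_bytes}"       # 26214400
echo "retention.bytes=${retention_bytes}"   # 1048576000
```

So retention.ms=604800000 is exactly one week, segment.bytes=26214400 is 25 MiB, and retention.bytes=1048576000 is 1000 MiB.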

Kafka Broker may not be available on 127.0.0.1:2181

I'm trying to create a topic on my Kafka cluster with this command:
kafka-topics.sh --bootstrap-server 127.0.0.1:2181 --topic first_topic --create --partitions 3 --replication-factor 1
and I get this error:
[2022-02-03 11:25:28,635] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (/127.0.0.1:2181) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
So I looked at kafka_2.12-3.1.0\config\server.properties, where I have
listeners=PLAINTEXT://localhost:9092
Any help will be very appreciated.
2181 is typically the port used by ZooKeeper. If you want to specify that, and you're not running Kafka in KRaft (ZooKeeper-less) mode, then you need to do as @Umeshwaran said and use the --zookeeper argument.
However, you can use --bootstrap-server, but if you are doing so then specify the broker address and port, which from your listeners config is 9092:
kafka-topics.sh --bootstrap-server 127.0.0.1:9092 --topic first_topic --create --partitions 3 --replication-factor 1
This article should clarify things.
This is because your ZooKeeper is not running. I was facing the same issue, but it started working once I ran ZooKeeper (I am using WSL) with:
zookeeper-server-start.sh ~/kafka_2.13-3.3.1/config/zookeeper.properties
Also make sure Kafka itself is running; if you don't know how to start it, use this command:
kafka-server-start.sh ~/kafka_2.13-3.3.1/config/server.properties
Hint: replace kafka_2.13-3.3.1 with the version of Kafka you are using.

Kafka messages are stored in cluster/ensemble but aren't retrieved by consumer

I have a 2 server zookeeper+Kafka setup with the following config duplicated over the two servers:
Kafka config
broker.id=1 #2 for the second server
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://server1_ip:9092
zookeeper.connect=server1_ip:2181,server2_ip:2181
num.partitions=3
offsets.topic.replication.factor=2 #how many servers should every topic be replicated to
zookeeper config
dataDir=/tmp/zookeeper
clientPort=2181
maxClientCnxns=200
admin.enableServer=false
server.1=server1_ip:2888:3888
server.2=server2_ip:2888:3888
initLimit=20
syncLimit=10
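Worth noting about an ensemble config like the one above: ZooKeeper also requires a myid file in each server's dataDir, whose contents match that host's server.N entry. A minimal sketch, assuming the dataDir from the config above:

```shell
# Create the myid file ZooKeeper expects in dataDir.
# Run on server1; on server2 the file must contain 2, matching server.2=...
DATADIR=/tmp/zookeeper
mkdir -p "$DATADIR"
echo 1 > "$DATADIR/myid"
cat "$DATADIR/myid"   # 1
```

Without a myid file on each host, the servers cannot identify themselves within the ensemble.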
Successfully created a topic using:
/usr/local/kafka/bin/kafka-topics.sh --create --zookeeper server1_ip:2181,server2_ip:2181 --replication-factor 2 --partitions 3 --topic replicatedtest
Doing a Describe on topic using:
/usr/local/kafka/bin/kafka-topics.sh --bootstrap-server=server1_ip:2181,server2_ip:2181 --describe --topic replicatedtest
shows the following:
Topic: replicatedtest PartitionCount: 3 ReplicationFactor: 2 Configs: segment.bytes=1073741824
Topic: replicatedtest Partition: 0 Leader: 2 Replicas: 2,1 Isr: 2,1
Topic: replicatedtest Partition: 1 Leader: 1 Replicas: 1,2 Isr: 1,2
Topic: replicatedtest Partition: 2 Leader: 2 Replicas: 2,1 Isr: 2,1
At this point everything looks good. However when I push messages using the following:
/usr/local/kafka/bin/kafka-console-producer.sh --broker-list server1_ip:9092,server2_ip:9092 --topic replicatedtest
>Hello
>Hi
and call the consumer script using:
/usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server server1_ip:9092,server2_ip:9092 --topic replicatedtest --from-beginning
The consumer just stalls.
When I check via an admin UI (KafkaMagic), the messages do come up. So it looks like they are stored successfully, but for some reason the consumer script can't get to them.
Any ideas?
Many thanks in advance!
==Edit==
added a 3rd server. Changed log level to TRACE in tools-log4j.properties and this is what the consumer script outputs:
https://docs.google.com/document/d/12nfML7M2a5QyXQswIZ_QVGuNqkc2DTRLPKqvRfDWKDY/edit?usp=sharing
Some corrections:
offsets.topic.replication.factor does not set the default replication factor for created topics; it applies to the internal __consumer_offsets topic that stores your consumer-group offsets.
I have never seen a setup with 2 ZooKeepers; the recommendation is an odd number (1, 3, or at most 5), where 3 is usually more than enough.
For brokers the recommended setup is likewise at least 3, with a replication factor of 3 and a minimum in-sync replica count of 2.
For further assistance, please provide the server logs, set the consumer log level to debug, run a consumer-groups describe, and run the console consumer with --group test1 for easier investigation.
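The difference between the offsets setting and a true per-topic default can be shown as a server.properties fragment (a sketch; note that default.replication.factor only affects automatically created topics, which is a standard broker setting not mentioned in the question):

```properties
# Replication factor of the internal __consumer_offsets topic only:
offsets.topic.replication.factor=2
# Default replication factor for automatically created topics:
default.replication.factor=2
```

Topics created explicitly with kafka-topics.sh still use whatever --replication-factor is passed on the command line.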
Update: according to the logs you provided, the group coordinator is not available.
I faced a similar issue. The problem was that when you start your Kafka broker, there is a property for the offsets topic replication factor, which defaults to 3.
Can you run a topic --list and make sure the __consumer_offsets topic exists?
If not, please create it, restart the brokers/ZooKeepers, and try to consume again:
kafka-topics --bootstrap-server node1:9092 --partitions 50 --replication-factor 3 --create --topic __consumer_offsets
Looks like the issue happened because I started off with one node and then decided to move to a cluster/ensemble setup that includes the original node. __consumer_offsets apparently needed to be reset in this case. This is what I ended up doing to solve the issue:
stop zookeeper and kafka on all 3 servers
systemctl stop kafka
systemctl stop zookeeper
delete existing topic data
rm -rf /tmp/kafka-logs
rm -rf /tmp/zookeeper/version-2
delete __consumer_offsets (calling the same delete command on each zookeeper instance might not be necessary)
/usr/local/kafka/bin/zookeeper-shell.sh server1_ip:2181 <<< "deleteall /brokers/topics/__consumer_offsets"
/usr/local/kafka/bin/zookeeper-shell.sh server2_ip:2181 <<< "deleteall /brokers/topics/__consumer_offsets"
/usr/local/kafka/bin/zookeeper-shell.sh server3_ip:2181 <<< "deleteall /brokers/topics/__consumer_offsets"
restart servers
systemctl start zookeeper
systemctl start kafka
recreate __consumer_offsets
/usr/local/kafka/bin/kafka-topics.sh --zookeeper server1_ip:2181,server2_ip:2181,server3_ip:2181 --create --topic __consumer_offsets --partitions 50 --replication-factor 3
Solution was based off: https://community.microstrategy.com/s/article/Kafka-cluster-health-check-fails-with-the-error-Group-coordinator-lookup-failed-The-coordinator-is-not-available?language=en_US

Kafka Producer not able to send messages

I am very new to Kafka.
Using Kafka 0.11
Number of alive brokers '1' does not meet the required replication factor '3' for the offsets topic (configured via 'offsets.topic.replication.factor')
I get the above error when sending a message to a topic.
kafka-topics --zookeeper localhost:2181 --topic test --describe
Topic:test1 PartitionCount:1 ReplicationFactor:1 Configs:
Topic: test1 Partition: 0 Leader: 0 Replicas: 0 Isr: 0
How are you starting the broker? What does your server.properties file look like? The one provided with the downloaded package should have the following line:
offsets.topic.replication.factor=1
Just to be clear, the error you see is not related to the topic you are trying to publish to. Today, Kafka no longer saves consumer offsets in ZooKeeper but in an internal topic named __consumer_offsets. Of course, if you have 1 broker you can't have a replication factor of 3, so I'd like to take a look at your server.properties. If the above property is missing, the default is 3.
In my case, my error is also similar.
ERROR [KafkaApi-2] Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
Cluster : 2 Brokers (ID=1,ID=2) with hostname-1 and hostname-2
Kafka version : 1.0.1
listeners=PLAINTEXT://:9090,SSL://:9091,PLAINTEXT_EXT://:9092,SSL_EXT://:9093,SASL_SSL://:9094,SASL_PLAINTEXT://:9095
and both broker server.properties is set to offsets.topic.replication.factor=1
but I had configured the advertised hostname as hostname-1 on both brokers for the protocols used for inter-broker communication, and thus the broker with ID=2 kept giving the above error.
advertised.listeners=PLAINTEXT://hostname-2:9090,SSL://hostname-2:9091,PLAINTEXT_EXT://<EXTERNAL_IP>:9092,SSL_EXT://<EXTERNAL_IP>:9093,SASL_SSL://hostname-1:9094,SASL_PLAINTEXT://hostname-1:9095
Correcting the SASL_SSL and SASL_PLAINTEXT entries fixed this error.
PS: SASL_PLAINTEXT is the security.inter.broker.protocol in this cluster. This error seems to be related to port availability as well.
This means your cluster has a default replication factor set to some number. To override it, edit server.properties and set the parameter to a value of your choice:
offsets.topic.replication.factor=1
In my case I wanted to run a single-node Kafka with a single-node ZooKeeper. In that case you need to create the topic with replication factor 1, otherwise you will get an error:
mansoor#c2dkb05-usea1d:~$ ./bin/kafka-topics.sh --create --zookeeper zookeeper-svc:2181 --replication-factor 2 --partitions 2 --topic mqttDeviceEvents
Error while executing topic command : Replication factor: 2 larger than available brokers: 1.
[2020-06-18 14:39:46,533] ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 2 larger than available brokers: 1.
The correct way to create the topic when you have a single-node Kafka is:
mansoor#c2dkb05-usea1d:$ ./bin/kafka-topics.sh --create --zookeeper zookeeper-svc:2181 --replication-factor 1 --partitions 2 --topic mqttDeviceEvents
Created topic mqttDeviceEvents.

What is the command to list down all the available brokers in Apache Kafka?

I want to run a multi node cluster in Apache Kafka.
I have made three server.properties files: server, server1 and server2. I have also given them different broker ids and different port numbers. Still, on running the kafka-topics.sh script with a replication factor of 3, it throws an error saying that replication factor: 3 is larger than the number of brokers: 0.
I used this command :
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replica-topic
Error shown is:
Error while executing topic command replication factor: 3 larger than available brokers: 0
kafka.admin.AdminOperationException: replication factor: 3 larger than available brokers: 0
at kafka.admin.AdminUtils$.assignReplicasToBrokers(AdminUtils.scala:70)
at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:171)
at kafka.admin.TopicCommand$.createTopic(TopicCommand.scala:93)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:55)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
Could you let me know where I am going wrong ?
I think you should start at least 3 Kafka servers, so that the number of brokers is greater than or equal to the replication factor:
First we make a config file for each of the brokers:
cp config/server.properties config/server-1.properties
cp config/server.properties config/server-2.properties
Now edit these new files and set the following properties:
config/server-1.properties:
broker.id=1
listeners=PLAINTEXT://:9093
log.dir=/tmp/kafka-logs-1
config/server-2.properties:
broker.id=2
listeners=PLAINTEXT://:9094
log.dir=/tmp/kafka-logs-2
The broker.id property is the unique and permanent name of each node in the cluster. We have to override the port and log directory only because we are running these all on the same machine, and we want to keep the brokers from all trying to register on the same port or overwrite each other's data.
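The overrides described above can also be scripted. This is a sketch that generates the two files from a minimal stand-in base config (the real stock server.properties has many more lines; GNU sed's -i flag is assumed, so on macOS use -i ''):

```shell
# Generate per-broker configs, overriding broker.id, port, and log dir
# as described above. The base file is a minimal stand-in for the
# stock config/server.properties.
mkdir -p config
printf 'broker.id=0\nlisteners=PLAINTEXT://:9092\nlog.dir=/tmp/kafka-logs\n' \
  > config/server.properties

for i in 1 2; do
  cp config/server.properties "config/server-$i.properties"
  sed -i \
    -e "s|^broker.id=.*|broker.id=$i|" \
    -e "s|^listeners=.*|listeners=PLAINTEXT://:$((9092 + i))|" \
    -e "s|^log.dir=.*|log.dir=/tmp/kafka-logs-$i|" \
    "config/server-$i.properties"
done

grep '^broker.id=' config/server-1.properties   # broker.id=1
grep '^listeners=' config/server-2.properties   # listeners=PLAINTEXT://:9094
```

This produces the same server-1.properties and server-2.properties contents shown above.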
We already have Zookeeper and our single node started, so we just need to start the two new nodes:
bin/kafka-server-start.sh config/server-1.properties
bin/kafka-server-start.sh config/server-2.properties
Now create a new topic with a replication factor of three:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic