Kafka Producer not able to send messages - apache-kafka

I am very new to Kafka.
Using Kafka 0.11
Number of alive brokers '1' does not meet the required replication factor '3' for the offsets topic (configured via 'offsets.topic.replication.factor')
I get the above error when sending a message to a topic.
kafka-topics --zookeeper localhost:2181 --topic test --describe
Topic:test1 PartitionCount:1 ReplicationFactor:1 Configs:
Topic: test1 Partition: 0 Leader: 0 Replicas: 0 Isr: 0

How are you starting the broker? What does your server.properties file contain? The one provided with the downloaded package should have the following line:
offsets.topic.replication.factor=1
Just to be clear, the error you see is not related to the topic you are trying to publish to. Kafka no longer saves consumer offsets in Zookeeper; it saves them in an internal topic named __consumer_offsets. Of course, with only 1 broker you can't have a replication factor of 3, so I'd like to take a look at your server.properties. If the above property is missing, the default is 3.
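A small sketch (plain shell, with illustrative file contents) of how the effective value is resolved; the default of 3 is what makes a fresh single-broker setup fail:

```shell
#!/bin/sh
# Sketch: Kafka reads offsets.topic.replication.factor from server.properties
# and defaults to 3 when the key is absent, which breaks a single-broker setup.
get_rf() {
  rf=$(awk -F= '$1 == "offsets.topic.replication.factor" {print $2}' "$1")
  echo "${rf:-3}"
}

with_key=$(mktemp)
printf 'offsets.topic.replication.factor=1\n' > "$with_key"
without_key=$(mktemp)
printf 'broker.id=0\n' > "$without_key"

present=$(get_rf "$with_key")    # key set to 1: fine for one broker
missing=$(get_rf "$without_key") # key absent: default 3 needs three live brokers
echo "key present: $present, key missing: $missing"
rm -f "$with_key" "$without_key"
```

For a single-broker development setup, keeping offsets.topic.replication.factor=1 in server.properties avoids the error.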

In my case, the error was similar:
ERROR [KafkaApi-2] Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
Cluster: 2 brokers (ID=1, ID=2) with hostnames hostname-1 and hostname-2
Kafka version: 1.0.1
listeners=PLAINTEXT://:9090,SSL://:9091,PLAINTEXT_EXT://:9092,SSL_EXT://:9093,SASL_SSL://:9094,SASL_PLAINTEXT://:9095
Both brokers' server.properties have offsets.topic.replication.factor=1,
but I had configured the advertised hostname as hostname-1 on both brokers for the protocols used for inter-broker communication, so the broker with ID=2 kept reporting the error above. Broker 2's (incorrect) configuration was:
advertised.listeners=PLAINTEXT://hostname-2:9090,SSL://hostname-2:9091,PLAINTEXT_EXT://<EXTERNAL_IP>:9092,SSL_EXT://<EXTERNAL_IP>:9093,SASL_SSL://hostname-1:9094,SASL_PLAINTEXT://hostname-1:9095
Correcting the hostname in the SASL_SSL and SASL_PLAINTEXT entries fixed this error.
PS: SASL_PLAINTEXT is the security.inter.broker.protocol in this cluster. This error also seems to be related to port availability.
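For reference, the corrected broker-2 value presumably advertises hostname-2 on the SASL endpoints as well (the external IP placeholder is from the original configuration):

```properties
advertised.listeners=PLAINTEXT://hostname-2:9090,SSL://hostname-2:9091,PLAINTEXT_EXT://<EXTERNAL_IP>:9092,SSL_EXT://<EXTERNAL_IP>:9093,SASL_SSL://hostname-2:9094,SASL_PLAINTEXT://hostname-2:9095
```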

This means your cluster's default replication factor is set to some higher number. To override it, edit server.properties and set the replication factor to a value of your choice:
offsets.topic.replication.factor=1
In my case I wanted to run a single-node Kafka with a single-node ZooKeeper; in that case you need to create the topic with replication factor 1, otherwise you will get an error:
mansoor@c2dkb05-usea1d:~$ ./bin/kafka-topics.sh --create --zookeeper zookeeper-svc:2181 --replication-factor 2 --partitions 2 --topic mqttDeviceEvents
Error while executing topic command : Replication factor: 2 larger than available brokers: 1.
[2020-06-18 14:39:46,533] ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 2 larger than available brokers: 1.
The correct way to create the topic on a single-node Kafka is:
mansoor@c2dkb05-usea1d:$ ./bin/kafka-topics.sh --create --zookeeper zookeeper-svc:2181 --replication-factor 1 --partitions 2 --topic mqttDeviceEvents
Created topic mqttDeviceEvents.
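The check behind these messages is simple: the requested replication factor is compared against the number of live brokers registered in ZooKeeper. A minimal sketch of the same rule, with the broker count hard-coded for illustration:

```shell
#!/bin/sh
# Sketch of the validation behind InvalidReplicationFactorException:
# a topic cannot have more replicas than there are live brokers.
available_brokers=1   # single-node cluster, as in the example above
requested_rf=2        # what the failing command asked for

if [ "$requested_rf" -gt "$available_brokers" ]; then
  verdict="Replication factor: $requested_rf larger than available brokers: $available_brokers"
else
  verdict="ok: replication factor $requested_rf fits $available_brokers broker(s)"
fi
echo "$verdict"
```

With a single broker, only --replication-factor 1 passes this check.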

Related

Error in creating the kafka topic with replication factor 2 in a two broker cluster

I have a 2-broker Kafka cluster, and I have configured the ZooKeeper ensemble of both nodes in both brokers. Now when I try creating a topic with replication factor 2, it says the replication factor is larger than the available brokers.
Node1 server.properties
broker.id=0
zookeeper.connect=10.142.0.4:2181,10.142.0.43:2181
log.dirs=/home/******/kafka-logs
Node 2 server.properties
broker.id=1
zookeeper.connect=10.142.0.43:2181,10.142.0.4:2181
log.dirs=/home/******/kafka-logs
When I tried to create a Kafka topic with replication factor 2 and 2 partitions, it showed an error. Below is my command to create the topic:
bin/kafka-topics.sh --create --zookeeper 10.142.0.43:2181 --replication-factor 2 --partitions 2 --topic logdata
Below is the error I am facing:
Error while executing topic command : Replication factor: 2 larger than available brokers: 0.
[2019-04-08 06:08:40,876] ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 2 larger than available brokers: 0.
(kafka.admin.TopicCommand$)

kafka.admin.TopicCommand Failing

I am using a single-node Kafka v0.10.2 (16 GB RAM, 8 cores) and a single-node ZooKeeper v3.4.9 (4 GB RAM, 1 core). I have 64 consumer groups and 500 topics, each with 250 partitions. I am able to execute the commands which require only the Kafka broker, and they run fine, e.g.:
./kafka-consumer-groups.sh --bootstrap-server localhost:9092
--describe --group
But when I execute admin commands like create topic or alter topic, for example
./kafka-topics.sh --create --zookeeper :2181
--replication-factor 1 --partitions 1 --topic
the following exception is displayed:
Error while executing topic command : replication factor: 1 larger
than available brokers: 0 [2017-11-16 11:22:13,592] ERROR
org.apache.kafka.common.errors.InvalidReplicationFactorException:
replication factor: 1 larger than available brokers: 0
(kafka.admin.TopicCommand$)
I checked that my broker is up. The following warnings are in server.log:
[2017-11-16 11:14:26,959] WARN Client session timed out, have not heard from server in 15843ms for sessionid 0x15aa7f586e1c061 (org.apache.zookeeper.ClientCnxn)
[2017-11-16 11:14:28,795] WARN Unable to reconnect to ZooKeeper service, session 0x15aa7f586e1c061 has expired (org.apache.zookeeper.ClientCnxn)
[2017-11-16 11:21:46,055] WARN Unable to reconnect to ZooKeeper service, session 0x15aa7f586e1c067 has expired (org.apache.zookeeper.ClientCnxn)
Below mentioned is my Kafka server configuration :
broker.id=1
delete.topic.enable=true
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/kafka/data/logs
num.partitions=1
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=<zookeeperIP>:2181
zookeeper.connection.timeout.ms=6000
Zookeeper Configuration is :
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/zookeeper/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
autopurge.snapRetainCount=20
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=48
I am not able to figure out which configuration to tune or what I am missing. Any help will be appreciated.
When you run the tool with the zookeeper argument, like
./kafka-topics.sh --create --zookeeper :2181 --replication-factor 1
--partitions 1 --topic
the tool asks ZooKeeper for broker details; if broker details are available in ZooKeeper, it can connect to the broker.
In your scenario, I think ZooKeeper lost the broker details. ZooKeeper stores all this configuration in a tree of paths.
To check whether ZooKeeper has the broker path, log into the ZooKeeper shell using bin/zkCli.sh -server localhost:2181.
After a successful connection, run ls / and you will see output like this:
[controller, controller_epoch, brokers, zookeeper, admin, isr_change_notification, consumers, config]
Then run ls /brokers; the output will be [ids, topics, seqid].
Then run ls /brokers/ids; the output will be [0], an array of broker ids. If the array is empty ([]), no broker details are present in your ZooKeeper.
In that case, you need to restart your broker and ZooKeeper.
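The decision rule above can be sketched as a small check on the ls /brokers/ids output; the value here is pasted in by hand rather than fetched from a live ZooKeeper:

```shell
#!/bin/sh
# Sketch: interpret the `ls /brokers/ids` output copied from a zkCli.sh session.
# "[]" means no broker has registered its ephemeral node in ZooKeeper.
ids='[]'   # paste the actual output here, e.g. '[0]' or '[]'
if [ "$ids" = "[]" ]; then
  status="no brokers registered - restart broker and ZooKeeper"
else
  status="registered broker ids: $ids"
fi
echo "$status"
```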
Updated:
This problem doesn't usually happen; it means your ZooKeeper server is being closed (killed) or losing the broker path.
To make this more robust, run two more ZooKeepers, i.e. a complete ensemble of 3 ZooKeeper nodes.
Locally, use localhost:2181, localhost:2182, localhost:2183.
In a cluster, use three instances: zookeeper1:2181, zookeeper2:2181, zookeeper3:2181.
A three-node ensemble can tolerate one node failure (a majority of the three must stay up).
Then create the topic with the following command:
./kafka-topics.sh --create --zookeeper
localhost:2181,localhost:2182,localhost:2183 --replication-factor 1
--partitions 1 --topic
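For reference, a minimal three-node ensemble configuration might look like the following in each node's zoo.cfg; the server.N hostnames and ports are illustrative, and each node also needs a myid file (containing 1, 2, or 3) in its dataDir:

```properties
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/zookeeper/data
clientPort=2181
# peer and leader-election ports for the ensemble
server.1=zookeeper1:2888:3888
server.2=zookeeper2:2888:3888
server.3=zookeeper3:2888:3888
```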

multi-broker kafka InvalidReplicationFactorException

I'm having some problems getting Kafka to work in a multi-broker cluster.
It's managed by Ambari 2.6.3.0. I installed the Kafka broker on both hosts and both show Started with no alerts, but when I try to create a topic with a replication factor, it doesn't work:
bin/kafka-topics.sh --create --zookeeper master:2181 --replication-factor 2 --partitions 1 --topic myTopic
which results in the following error:
bigdata@master:/usr/hdp/2.6.3.0-57/kafka$ bin/kafka-topics.sh --create --zookeeper master:2181 --replication-factor 2 --partitions 1 --topic myTopic
Error while executing topic command : replication factor: 2 larger than available brokers: 1
[2017-09-19 08:45:00,430] ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: replication factor: 2 larger than available brokers: 1
(kafka.admin.TopicCommand$)
When looking in ZooKeeper for active brokers, I only get one id, so the second broker does not appear to be part of the cluster.
bigdata@master:/usr/hdp/2.6.3.0-57/zookeeper$ bin/zkCli.sh
[zk: master:2181(CONNECTED) 0] connect master:2181
[zk: master:2181(CONNECTED) 1] ls /brokers/ids
[1001]
I would appreciate any answer or suggestion on how to get both brokers listed together so that I can create the topic with replication factor 2.
EDIT: added logs (on pastebin, to avoid pasting long text here):
server.log: https://pastebin.com/rjKUxE5y

Kafka uncommitted messages

Let's say a partition has 4 replicas (1 leader, 3 followers) and all are currently in sync. min.insync.replicas is set to 3 and request.required.acks is set to all (-1).
The producer sends a message to the leader, and the leader appends it to its log. Then two of the replicas crash before they can fetch this message. The remaining replica successfully fetches the message and appends it to its own log.
The leader, after a certain timeout, will send an error (NotEnoughReplicas, I think) to the producer, since the min.insync.replicas condition is not met.
My question is: what will happen to the message that was appended to the leader's log and to one replica's log?
Will it be delivered to consumers when the crashed replicas come back online and the broker starts accepting and committing new messages (i.e., the high watermark advances in the log)?
If fewer than min.insync.replicas replicas are available and the producer uses acks=all, then the message is not committed and consumers will not receive it, even after the crashed replicas come back and are added to the ISR list again. You can test this in the following way.
Start two brokers with min.insync.replicas = 2
$ ./bin/kafka-server-start.sh ./config/server-1.properties
$ ./bin/kafka-server-start.sh ./config/server-2.properties
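The walkthrough assumes two server property files; a minimal sketch of the relevant keys (ports and log directories here are illustrative):

```properties
# config/server-1.properties
broker.id=1
listeners=PLAINTEXT://:9092
log.dirs=/tmp/kafka-logs-1
min.insync.replicas=2

# config/server-2.properties
broker.id=2
listeners=PLAINTEXT://:9093
log.dirs=/tmp/kafka-logs-2
min.insync.replicas=2
```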
Create a topic with 1 partition and RF=2. Make sure both brokers are in the ISR list.
$ ./bin/kafka-topics.sh --zookeeper zookeeper-1 --create --topic topic1 --partitions 1 --replication-factor 2
Created topic "topic1".
$ ./bin/kafka-topics.sh --zookeeper zookeeper-1 --describe --topic topic1
Topic:topic1 PartitionCount:1 ReplicationFactor:2 Configs:
Topic: topic1 Partition: 0 Leader: 1 Replicas: 1,2 Isr: 1,2
Run a console consumer and a console producer. Make sure the producer uses acks=-1:
$ ./bin/kafka-console-consumer.sh --new-consumer --bootstrap-server kafka-1:9092,kafka-2:9092 --topic topic1
$ ./bin/kafka-console-producer.sh --broker-list kafka-1:9092,kafka-2:9092 --topic topic1 --request-required-acks -1
Produce some messages; the consumer should receive them.
Kill one of the brokers (I killed the broker with id=2) and check that the ISR list has shrunk to one broker:
$ ./bin/kafka-topics.sh --zookeeper zookeeper-1 --describe --topic topic1
Topic:topic1 PartitionCount:1 ReplicationFactor:2 Configs:
Topic: topic1 Partition: 0 Leader: 1 Replicas: 1,2 Isr: 1
Try to produce again. In the producer you should get several
Error: NOT_ENOUGH_REPLICAS
errors (one per retry) and finally
Messages are rejected since there are fewer in-sync replicas than required.
The consumer will not receive these messages.
Restart the killed broker and try to produce again.
The consumer will receive these messages, but not the ones you sent while one of the replicas was down.
From my understanding, the high watermark will not advance until both failed follower brokers have recovered and caught up.
See this blog post for more details: http://www.confluent.io/blog/hands-free-kafka-replication-a-lesson-in-operational-simplicity/
Error observed:
Messages are rejected since there are fewer in-sync replicas than required.
To resolve this, I increased the replication factor so that enough in-sync replicas were available, and it worked.

What is the command to list down all the available brokers in Apache Kafka?

I want to run a multi-node cluster in Apache Kafka.
I have made three server.properties files: server, server1 and server2. I have also given them different broker ids and different port numbers. Still, on running kafka-topics.sh with a replication factor of 3, it throws an error saying that replication factor: 3 is larger than the number of brokers: 0.
I used this command:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replica-topic
The error shown is:
Error while executing topic command replication factor: 3 larger than
available brokers: 0 kafka.admin.AdminOperationException: replication
factor: 3 larger than available brokers: 0 at
kafka.admin.AdminUtils$.assignReplicasToBrokers(AdminUtils.scala:70)
at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:171) at
kafka.admin.TopicCommand$.createTopic(TopicCommand.scala:93) at
kafka.admin.TopicCommand$.main(TopicCommand.scala:55) at
kafka.admin.TopicCommand.main(TopicCommand.scala)
Could you let me know where I am going wrong?
I think you should start at least 3 Kafka servers, so that the number of brokers is greater than or equal to the replication factor.
First, we make a config file for each of the brokers:
cp config/server.properties config/server-1.properties
cp config/server.properties config/server-2.properties
Now edit these new files and set the following properties:
config/server-1.properties:
broker.id=1
listeners=PLAINTEXT://:9093
log.dir=/tmp/kafka-logs-1
config/server-2.properties:
broker.id=2
listeners=PLAINTEXT://:9094
log.dir=/tmp/kafka-logs-2
The broker.id property is the unique and permanent name of each node in the cluster. We have to override the port and log directory only because we are running all the brokers on the same machine, and we want to keep them from trying to register on the same port or overwriting each other's data.
We already have Zookeeper and our single node started, so we just need to start the two new nodes:
bin/kafka-server-start.sh config/server-1.properties
bin/kafka-server-start.sh config/server-2.properties
Now create a new topic with a replication factor of three:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic