We have a 3-node Kafka/ZooKeeper cluster in which Kafka and ZooKeeper communicate over SSL.
We are currently using Apache Kafka 2.5 and ZooKeeper 3.5.7. We are trying to increase the replication factor of Kafka topics using the method below:
To increase the number of replicas for a given topic you have to:
Specify the extra replicas in a custom reassignment JSON file. For example, you could create increase-replication-factor.json and put this content in it:
{"version":1,
"partitions":[
{"topic":"signals","partition":0,"replicas":[0,1,2]},
{"topic":"signals","partition":1,"replicas":[0,1,2]},
{"topic":"signals","partition":2,"replicas":[0,1,2]}
]}
Use the file with the --execute option of the kafka-reassign-partitions tool
For example:
$ kafka-reassign-partitions --zookeeper localhost:2182 --reassignment-json-file increase-replication-factor.json --execute --command-config zookeeper_client.properties
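The tool also has a --verify option to check the status of the reassignment afterwards with the same JSON file:
$ kafka-reassign-partitions --zookeeper localhost:2182 --reassignment-json-file increase-replication-factor.json --verify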
But we are facing a problem when running kafka-reassign-partitions: while running this command, the connection to ZooKeeper fails with the error below:
Save this to use as the --reassignment-json-file option during rollback
Partitions reassignment failed due to KeeperErrorCode = NoAuth for
/admin/reassign_partitions
org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth
for /admin/reassign_partitions
at org.apache.zookeeper.KeeperException.create(KeeperException.java:120)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:564)
at kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1644)
at kafka.zk.KafkaZkClient.createPartitionReassignment(KafkaZkClient.scala:871)
We have not created any ACLs on the topic, yet we still get this issue.
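For reference, since we are on Kafka 2.5, the same reassignment can also be submitted through the brokers instead of ZooKeeper (the KIP-455 API available since Kafka 2.4), which avoids the tool writing to the ZooKeeper znode entirely. A sketch, assuming a reachable broker listener and a suitable AdminClient config file:
$ kafka-reassign-partitions --bootstrap-server <broker>:9092 --command-config client.properties --reassignment-json-file increase-replication-factor.json --execute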
Related
I get the error after running these simple commands:
I started the ZooKeeper and Kafka servers, then I executed the command:
./kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
and then executed the command:
./kafka-console-producer --broker-list localhost:9092 --topic test
I get a list of WARN messages like:
[2019-12-08 21:36:13,024] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 37 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
What did I do wrong?
Thanks
If your broker has auto.create.topics.enable set to true, then this error will be transient and you should be able to produce messages without any further errors.
It happens just because the producer is asking for metadata about the topic it wants to write to, but that topic doesn't exist in the cluster and the partition leader (where the producer wants to write) doesn't exist yet.
If you retry, the broker will create the topic and the command will work fine.
If the above configuration is set to false, the broker doesn't create the topic automatically on the first request from a client, so you have to create it upfront.
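For reference, that flag is a plain broker setting in server.properties (true is the default in stock Kafka):
auto.create.topics.enable=true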
Finally, though it's not your case, the above error can even happen when the topic exists but, for example, the broker that is leader for the specific topic partition is down and a new leader election is in progress.
I wanted to add a comment, but it seems I can't. Just go through this link: someone had a similar problem, and it seems the cause may not be what you have done but something different.
Link: https://grokbase.com/t/kafka/users/134qvay38q/leadernotavailable-exception
I am trying to set log.retention.hours as a broker-level configuration for Kafka 0.10.2.x, but I am getting this error for the command below.
kafka-configs.sh --zookeeper zookeeper:2181 --entity-type brokers --entity-name 0 --alter --add-config log.retention.hours=-1
Error while executing config command requirement failed: Unknown Dynamic Configuration 'log.retention.hours'.
java.lang.IllegalArgumentException: requirement failed: Unknown Dynamic Configuration 'log.retention.hours'.
at scala.Predef$.require(Predef.scala:277)
at kafka.server.DynamicConfig$.$anonfun$validate$1(DynamicConfig.scala:101)
at kafka.server.DynamicConfig$.$anonfun$validate$1$adapted(DynamicConfig.scala:100)
at scala.collection.Iterator.foreach(Iterator.scala:929)
at scala.collection.Iterator.foreach$(Iterator.scala:929)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1406)
at kafka.server.DynamicConfig$.kafka$server$DynamicConfig$$validate(DynamicConfig.scala:100)
at kafka.server.DynamicConfig$Broker$.validate(DynamicConfig.scala:59)
at kafka.admin.AdminUtils$.changeBrokerConfig(AdminUtils.scala:555)
at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:105)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:68)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
As the error says, that property is not dynamic (it cannot be modified while the broker is running).
Plus, that feature isn't even available in your version:
From Kafka version 1.1 onwards, some of the broker configs can be updated without restarting the broker.
You can set retention at the topic level; otherwise, you need to edit the server.properties file of every broker and gracefully restart them.
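For example, the topic-level override can be applied while the brokers are running (a sketch, assuming a topic named my-topic; retention.ms is the per-topic property, and -1 means no time limit):
kafka-configs.sh --zookeeper zookeeper:2181 --entity-type topics --entity-name my-topic --alter --add-config retention.ms=-1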
I'm sure you have a good reason for "disabling" retention, but I would suggest trying compacted topics first
log.retention.hours is a read-only property at the broker level, so it can't be changed dynamically using kafka-configs.sh.
Change it in server.properties and restart the brokers.
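That is, a sketch of the static change, reusing the value from the question:
# in server.properties on every broker, followed by a rolling restart
log.retention.hours=-1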
Here are the details on read-only and dynamic broker configs:
https://kafka.apache.org/documentation/#dynamicbrokerconfigs
I am running Kafka 0.10.0 on CDH 5.9; the cluster is kerberized.
What I am trying to do is to write messages from a remote machine to my Kafka broker.
The cluster (where Kafka is installed) has internal as well as external IP addresses.
The machines' hostnames within the cluster resolve to the private IPs, while the remote machine resolves the same hostnames to the public IP addresses.
I opened the necessary port 9092 (I am using the SASL_PLAINTEXT protocol) from the remote machine to the Kafka broker and verified it using telnet.
First Step - in addition to the standard properties for the Kafka Broker, I configured the following:
listeners=SASL_PLAINTEXT://0.0.0.0:9092
advertised.listeners=SASL_PLAINTEXT://<hostname>:9092
I am able to start the console consumer with
kafka-console-consumer --new-consumer --topic <topicname> --from-beginning --bootstrap-server <hostname>:9092 --consumer.config consumer.properties
I am able to use my custom producer from another machine within the cluster.
Relevant excerpt of producer properties:
security.protocol=SASL_PLAINTEXT
bootstrap.servers=<hostname>:9092
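For completeness, the client config also carries the usual Kerberos pieces for SASL_PLAINTEXT (sketched here; kafka is the customary service principal name on CDH):
sasl.kerberos.service.name=kafka
together with a JAAS file passed to the producer JVM via -Djava.security.auth.login.config=/path/to/jaas.conf (the path is illustrative).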
I am not able to use my custom producer from the remote machine:
Exception org.apache.kafka.common.errors.TimeoutException: Batch containing 1 record(s) expired due to timeout while requesting metadata from brokers for <topicname>-<partition>
using the same producer properties. I am able to telnet to the Kafka broker from that machine, and /etc/hosts includes the hostnames and public IPs.
Second Step - I modified server.properties:
listeners=SASL_PLAINTEXT://0.0.0.0:9092
advertised.listeners=SASL_PLAINTEXT://<kafkaBrokerInternalIP>:9092
consumer & producer within the same cluster still run fine (bootstrap servers are now the internal IP with port 9092)
as expected, the remote producer fails (but that is obvious given that it is not aware of the internal IP addresses)
Third Step - where it gets hairy :(
listeners=SASL_PLAINTEXT://0.0.0.0:9092
advertised.listeners=SASL_PLAINTEXT://<kafkaBrokerPublicIP>:9092
starting my consumer with
kafka-console-consumer --new-consumer --topic <topicname> --from-beginning --bootstrap-server <hostname>:9092 --consumer.config consumer.properties
gives me a warning, but I don't think this is right...
WARN clients.NetworkClient: Error while fetching metadata with correlation id 1 : {<topicname>=LEADER_NOT_AVAILABLE}
starting my consumer with
kafka-console-consumer --new-consumer --topic <topicname> --from-beginning --bootstrap-server <KafkaBrokerPublicIP>:9092 --consumer.config consumer.properties
just hangs after those log messages:
INFO utils.AppInfoParser: Kafka version : 0.10.0-kafka-2.1.0
INFO utils.AppInfoParser: Kafka commitId : unknown
It seems like it cannot find a coordinator, as in the normal flow this would be the next log line:
INFO internals.AbstractCoordinator: Discovered coordinator <hostname>:9092 (id: <someNumber> rack: null) for group console-consumer-<someNumber>.
starting the producer on a cluster node with bootstrap.servers=<hostname>:9092,
I observe the same as with the consumer:
WARN NetworkClient:600 - Error while fetching metadata with correlation id 0 : {<topicname>=LEADER_NOT_AVAILABLE}
starting the producer on a cluster node with bootstrap.servers=<kafkaBrokerPublicIP>:9092 I get
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
starting the producer on my remote machine with either bootstrap.servers=<hostname>:9092 or bootstrap.servers=<kafkaBrokerPublicIP>:9092 I get
NetworkClient:600 - Error while fetching metadata with correlation id 0 : {<topicname>=LEADER_NOT_AVAILABLE}
I have been struggling for the past three days to get this to work, but I am out of ideas :/ My understanding is that advertised.listeners serves exactly this purpose, so either I am doing something wrong, or there is something wrong in the machine setup.
Any hints are very much appreciated!
I ran into this issue recently.
In my case, I had enabled Kafka ACLs; after disabling them by commenting out these two configurations, the problem was worked around.
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:kafka
And here is a thread that may help you, I think:
https://gist.github.com/jorisdevrede/a7933a99251452bb1867
What is mentioned in it at the end:
If you only use a SASL_PLAINTEXT listener on the Kafka Broker, you have to make sure that you have set security.inter.broker.protocol=SASL_PLAINTEXT too, otherwise you will get a LEADER_NOT_AVAILABLE error in the client.
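Put together, the broker side of that advice would look roughly like this in server.properties (reusing the listener lines from the question; the public IP is a placeholder):
listeners=SASL_PLAINTEXT://0.0.0.0:9092
advertised.listeners=SASL_PLAINTEXT://<kafkaBrokerPublicIP>:9092
security.inter.broker.protocol=SASL_PLAINTEXT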
I found something very weird with Kafka.
I have a producer with 3 brokers:
bin/kafka-console-producer.sh --broker-list localhost:9093,localhost:9094,localhost:9095 --topic topic
Then I try to run a consumer with the new API:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9093,localhost:9094,localhost:9095 --topic topic --from-beginning
I got nothing! BUT if I use the old API:
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic topic
I got my messages!
What am I doing wrong?
PS: I am using Kafka 0.10.
I eventually resolved my problem thanks to this similar post: Kafka bootstrap-servers vs zookeeper in kafka-console-consumer
I believe it was a bug / wrong configuration on my side leading to a problem between ZooKeeper and Kafka.
SOLUTION:
First, be sure to have enabled topic deletion in the server.properties files of your brokers:
# Switch to enable topic deletion or not, default value is false
delete.topic.enable=true
Then delete the topic:
bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic myTopic
Remove the log directories of your brokers (the log.dirs setting, /tmp/kafka-logs by default).
EDIT: I faced the problem again, and I also had to remove the ZooKeeper log files in /tmp/zookeeper/version-2/.
Finally, delete the topic under /brokers/topics in ZooKeeper as follows:
$ kafka/bin/zookeeper-shell.sh localhost:2181
Connecting to localhost:2181
Welcome to ZooKeeper!
JLine support is disabled
rmr /brokers/topics/myTopic
And restart your brokers and create your topic again.
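To double-check that the topic is really gone before recreating it, you can list the remaining topics:
bin/kafka-topics.sh --zookeeper localhost:2181 --list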
After fighting a while with the same problem: specifying --partition makes the console consumer's new API work (but it hangs...). I have CDH 5.12 + Kafka 0.11 (from parcel).
UPD:
I also found out that Kafka 0.11 (versioned as 3.0.0 in the CDH parcel) does not consume messages correctly. After downgrading to Kafka 0.10 it became OK, and --partition is not needed any more.
I had the same problem, but I was using a single broker instance for my "cluster", and I was getting this error:
/var/log/messages
[2018-04-04 22:29:39,854] ERROR [KafkaApi-20] Number of alive brokers '1' does not meet the required replication factor '3' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
I just added the setting offsets.topic.replication.factor=1 to my server configuration file and restarted. It started to work fine.
I have set up a multi-node Kafka cluster; everything seems to work well and shows no error logs until I try to push a message through one producer. I get the message:
Bootstrap broker host2:2181 disconnected (org.apache.kafka.clients.NetworkClient)
and in the ZooKeeper logs I am getting:
"WARN Exception causing close of session 0x0 due to java.io.IOException: Unreasonable length = 1701969920 (org.apache.zookeeper.server.NIOServerCnxn)"
I cleaned up my data directory, which is /var/zookeeper/data; still no luck.
Any help on this would be much appreciated.
Vaibhav, looking at this line (Bootstrap broker host2:2181), it looks like you are trying to connect to a ZooKeeper instance rather than a broker instance. By default the Kafka broker runs on port 9092, so the producer and consumer should be invoked as per the commands below.
Producer :
bin/kafka-console-producer.sh --broker-list host1:9092,host2:9092 \
--topic "topic_name"
Consumer:
bin/kafka-console-consumer.sh --bootstrap-server <broker_host>:9092 \
--topic "topic_name" --from-beginning
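If in doubt about the port, check the listeners line in each broker's server.properties; 9092 is only the default, e.g.:
listeners=PLAINTEXT://host1:9092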