I have a 10-node Kafka cluster; both a Kafka broker and ZooKeeper are running on each node. Recently we added 3 new nodes (8, 9 and 10), and yesterday 2 nodes were down (2 and 4). I have a topic with 60 partitions and a replication factor of 3. In Kafka Manager, Brokers Skewed % is showing 50 and Brokers Leader Skewed % is showing 70. I manually reassigned the partitions from the UI and Brokers Skewed % is now 0, but that didn't change Brokers Leader Skewed %. I also ran the command:
$ kafka-preferred-replica-election.sh --zookeeper localhost:2181 --path-to-json-file test.json
Warning: --zookeeper is deprecated and will be removed in a future version of Kafka.
Use --bootstrap-server instead to specify a broker to connect to.
Created preferred replica election path with ...
Successfully started preferred replica election for partitions Set(..
but it didn't change anything. I can see in the UI that brokers 8 and 10 have no leaders. How can I rebalance leaders evenly across all brokers? I read that a rolling restart of all brokers can solve it, but I can't (under normal circumstances) restart my production Kafka cluster.
kafka version: 2.3.0
zookeeper version: 3.4.12
I have added a Kafka Manager screenshot and highlighted the issue with a red circle.
I would appreciate any help.
I was able to solve this issue. If you look at the screenshots carefully, you will notice that none of the preferred replicas (the first entry in the Replicas column) is assigned to broker 8 or 10, and only partition 37 has broker 9 as its preferred replica, so Preferred Replica Election cannot help.
I shuffled the preferred replicas using the Manual Partition Assignments option (moving brokers 8, 9 and 10 to replica 0 in 17 partitions (6*3 - 2)), then ran Reassign Partitions, and then ran Preferred Replica Election.
I hope it will be helpful to others.
Note: nodes 8, 9 and 10 were added recently.
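For anyone without Kafka Manager, roughly the same fix can be done with the CLI tools. This is only a sketch: the topic name and broker lists below are placeholders, not my real assignment; the point is that the first broker in each replicas list becomes the preferred leader for that partition.
reassign.json (illustrative):
{"version":1,
"partitions":[
{"topic":"my-topic","partition":0,"replicas":[8,1,2]},
{"topic":"my-topic","partition":1,"replicas":[9,3,5]},
{"topic":"my-topic","partition":2,"replicas":[10,6,7]}
]}
$ kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file reassign.json --execute
$ kafka-preferred-replica-election.sh --zookeeper localhost:2181
Running kafka-preferred-replica-election.sh without --path-to-json-file triggers the election for all partitions.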
I am setting up a Confluent Kafka cluster (Community edition) with 3 ZooKeeper and 5 Kafka broker nodes.
The requirement is that we should be able to keep operating in the live environment even if 2 broker nodes are down.
What should be the recommended
- replication factor,
- in-sync replicas (min.insync.replicas)
for topics with 50 partitions?
In most cases the suggested replication factor is 3. What would be the impact if we increased it to 5 in the described cluster configuration?
Setting the replication factor to 5 would mean that every partition exists on every broker in the cluster. If two brokers are down, the replication factor requirement is no longer met and your topics will be under-replicated (Kafka will report under-replicated partitions).
min.insync.replicas should then be set to 3 (or less), otherwise producing a message with acks=all would fail while two brokers are down. Producing with acks=0 or acks=1 would still work even with a higher min.insync.replicas, because those settings don't wait for the full in-sync set.
Also note that while two nodes are down, you can't create new topics with a replication factor of 5 (also see KIP-409).
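For reference, a minimal sketch of creating a topic with these settings; the topic name is hypothetical and the command assumes a ZooKeeper-based setup (newer clusters would use --bootstrap-server instead):
$ kafka-topics.sh --create --zookeeper localhost:2181 --topic my-topic --partitions 50 --replication-factor 5 --config min.insync.replicas=3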
I am a newbie to Kafka and want to understand the behaviour of Apache Kafka in the scenario below.
Consider I have a topic with:
- partitions: 3
- brokers: 3
- replication factor: 3
- min.insync.replicas: 2
- producer acks = all
- unclean leader election: false
As per my understanding, if broker 1 goes down there is no harm and no data loss, because the ISR still has 2 members and writes will succeed.
If node 1 comes back up, it will again follow the leader and catch up.
My question is: if node 1 never comes back up and is removed from the ISR list, my desired replication factor is still 3. If I add a new node 4, how do I make node 4 automatically copy the partitions from the failed node 1, so that a replication factor of 3 is maintained?
Topic replica assignments are stored in ZooKeeper, and if you add a new broker you have to update the replica assignment for the topic(s) yourself. (As far as I know there is no way to do it automatically.)
But you can do it manually using the kafka-reassign-partitions.sh tool.
Steps:
Create a JSON file that describes the desired replica assignment for each partition. For example:
{"version":1,
"partitions":[
{"topic":"YourTopic","partition":0,"replicas":[3,2,4]},
{"topic":"YourTopic","partition":1,"replicas":[2,4,3]},
{"topic":"YourTopic","partition":2,"replicas":[4,3,2]}
]}
Execute this command to reassign partitions.
./kafka/bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file my_file.json --execute
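Optionally, you can then check that the reassignment finished and that the new replicas have joined the ISR (using the same JSON file and your topic name):
./kafka/bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file my_file.json --verify
./kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic YourTopic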
We are trying to implement Kafka HA using a Kafka cluster. While doing R&D, we found that the minimum recommended number of nodes for both ZooKeeper and Kafka brokers is 3.
We understand why ZooKeeper should have a minimum of 3 nodes: for leader election, a majority of at least (n+1)/2 nodes must be up and running.
But it's not clear why a minimum of 3 Kafka brokers is required. Why can't we implement HA with 2 Kafka brokers and 3 ZooKeeper nodes?
The minimum number of ZooKeeper nodes is 3 because of the quorum requirement. The count should be odd, because an even number of nodes adds no extra fault tolerance; e.g. a ZooKeeper ensemble of 8 nodes tolerates no more failures than one of 7. Having very many ZooKeeper nodes also isn't good, because of the overhead of the consensus algorithm (e.g. Paxos-style agreement).
For the Kafka cluster, personally I think 2 brokers can work, but 3 brokers is better. The reason is maintaining the ISR (in-sync replicas).
Say your Kafka cluster has 2 brokers. To maintain high availability and data consistency, we set the replication factor to 2, so each partition has 2 replicas. The interesting part is the min.insync.replicas setting. If you set min.insync.replicas to 1 and the leader fails right after acknowledging a write, you may have no remaining in-sync replica holding that data. If you set min.insync.replicas to 2, then when either the leader or the follower fails, producers using acks=all can no longer write.
If our Kafka cluster has 3 brokers, we can set the replication factor to 3 and min.insync.replicas to 2. With this configuration, we can tolerate losing 1 replica (either the leader or a follower) and keep working. For instance, if we lose the leader, there is at least one in-sync follower ready to take over. If we lose one of the followers, we still have a remaining follower to keep the in-sync count at 2.
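As a sketch, assuming a topic named my-topic, the min ISR for the 3-broker case can be set per topic like this (a ZooKeeper-based cluster is assumed; newer versions use --bootstrap-server):
$ kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --entity-name my-topic --alter --add-config min.insync.replicas=2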
In addition to @hqt's answer:
You can set up a Kafka HA cluster with only 2 brokers, but the recommended replication factor for production is 3, so you need 3 brokers in order to achieve that.
Also, consider that Confluent is working on migrating leader election into Kafka itself, so in the future you will not need ZooKeeper anymore, which may imply running an odd number of Kafka brokers.
I have an issue with my Kafka cluster.
I have 3 brokers, so when I stop broker 1 (for example), each topic partition whose leader was broker 1 changes its leader to the second broker in its replica list.
So this is the expected behaviour and it works fine.
But when I restart broker 1 I need to execute:
./kafka-preferred-replica-election.sh --zookeeper myHost
because the current leader is the other replica.
So my question is:
is there a way to configure Kafka to do this automatically?
Thanks.
I'm assuming your default (when all brokers are running) assignment is balanced, and the preferred leaders are evenly spread.
Yes, Kafka can re-elect the preferred leaders for all partitions automatically when a broker is restarted. This is actually enabled by default; see auto.leader.rebalance.enable.
Upon restarting a broker, Kafka can take up to leader.imbalance.check.interval.seconds to trigger the re-election. This defaults to 5 minutes. So maybe you just did not wait long enough!
There is also leader.imbalance.per.broker.percentage, which defines the percentage of non-preferred leaders allowed. This defaults to 10%.
For full details about these configurations, see the broker configs section on Kafka's website.
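Putting that together, the relevant broker settings (shown here at their default values) look like this in server.properties:
auto.leader.rebalance.enable=true
leader.imbalance.check.interval.seconds=300
leader.imbalance.per.broker.percentage=10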
I have a cluster of 2 Kafka brokers and a topic with replication factor 2. If one of the brokers dies, will my producers be able to continue sending new messages to this degraded cluster of 1 node? Or does replication factor 2 require 2 live nodes, so that messages will be refused?
It depends on a few factors:
What is your producer configuration for acks? If you set it to "all", the leader broker won't answer with an ACK until the message has been replicated to all nodes in the ISR list. At that point it is up to your producer to decide whether it cares about the ACKs or not.
What is your value for min.insync.replicas? If the number of in-sync replicas falls below this value, the leader broker won't accept more messages from producers until more nodes are available.
So basically your producers may pause for a while, until more nodes are up.
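As a quick way to see the effect, you can send a few messages with acks=all from the console producer while one of the two brokers is down (broker address and topic name here are placeholders):
$ kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer-property acks=all
With min.insync.replicas=2 and only one broker up, these sends will fail with a NotEnoughReplicas error; with min.insync.replicas=1 they will go through.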
Messages will not be rejected just because the number of live brokers is less than the configured number of replicas. Whenever the missing broker (or a replacement broker with the same broker id) rejoins the cluster, the data gets replicated to it again.
You can reproduce this scenario by configuring a replication factor of 3 or more and starting only one broker.
Kafka will handle leader failover for the partitions that existing producers and consumers were using, but it will be problematic for new topics.
You can start a single broker with a default replication factor of 2 or 3 configured; that does work. However, you cannot create a topic with that replication factor until you have that many brokers in the cluster. Whether the topic is auto-created on the first message or created manually, Kafka will throw an error:
kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic test
Error while executing topic command : Replication factor: 3 larger than available brokers: 1.
[2018-08-08 15:23:18,339] ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 1.
As soon as the broker rejoins the Kafka cluster, data will be replicated to it again; the replication factor does not block the publisher from sending messages.
A replication factor of 2 doesn't require 2 live brokers; whether messages can be published while one broker is down depends on these configurations:
- acks
- min.insync.replicas
Check those configurations as mentioned above by @Javier.