Kafka does not replicate a topic to brokers that were not assigned to the topic when it was created? - apache-kafka

I have a topic "reptop" with replication factor 3. My cluster consists of 4 brokers [IDs: 0, 1, 2, 3]. When the topic was created, brokers 0, 2 and 3 were assigned to it, with broker 2 as the leader. Now when one of my brokers, leader or follower, goes down, Kafka does not replicate the topic to broker 1 even though it is healthy and the ISR is smaller than the replication factor; but when the broker that had gone down, and was initially assigned to the topic, comes back up, Kafka replicates the topic to that node again. So the question is: why does Kafka not replicate the topic to brokers that were not assigned to it when it was created, even though there are healthy brokers in the cluster and the ISR is below the replication factor?

This is by design. If you want to reassign the partitions, you must do so with the reassignment tool. Another option is to bring up a new broker instance with the missing ID. Kafka does not "self heal" like, say, HDFS, and there are many cases where you wouldn't want it to. If you do want that behaviour, there are tools out there, such as the Confluent rebalancer, that can be used.
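For illustration only (not part of the original answer): besides the command-line reassignment tool (kafka-reassign-partitions.sh), Kafka 2.4+ exposes the same operation through the Java AdminClient. The sketch below assumes the topic from the question has a single partition 0 and moves it onto brokers 1, 2 and 3; the bootstrap address is a placeholder.

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitionReassignment;
import org.apache.kafka.common.TopicPartition;

public class ReassignReptop {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Move partition 0 of "reptop" onto brokers 1, 2 and 3,
            // i.e. swap the unavailable broker 0 for the healthy broker 1.
            Map<TopicPartition, Optional<NewPartitionReassignment>> reassignment = new HashMap<>();
            reassignment.put(
                new TopicPartition("reptop", 0),
                Optional.of(new NewPartitionReassignment(Arrays.asList(1, 2, 3))));

            // Triggers the same data movement the CLI tool would; waits for the request to be accepted.
            admin.alterPartitionReassignments(reassignment).all().get();
        }
    }
}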

Related

Does ZooKeeper return the Kafka broker's DNS (the one that has the leader partition)?

I'm new to Kafka and I want to ask a question.
If there are 3 Kafka brokers (kafka1, kafka2, kafka3) in the same Kafka cluster,
and topic=test (replication=2),
where kafka1 has the leader partition and kafka2 has the follower partition:
If the producer sends data to kafka3, how does the data get stored on kafka1 and kafka2?
I heard that if the producer sends data to kafka3, ZooKeeper finds the broker that has the leader partition and returns that broker's DNS or IP address,
and then the producer resends to that broker using the metadata.
Is that right? If it's wrong, please tell me how it works.
Thanks a lot!
Every Kafka topic partition has its own leader, so if you have 2 partitions, Kafka assigns a leader for each partition. They might end up on the same Kafka node or on different ones.
When a producer connects to the Kafka cluster, it learns who the partition leaders are. All writes must go through the corresponding partition leader, which is responsible for keeping track of the in-sync replicas.
Consumers likewise only talk to the corresponding partition leaders to get data.
If a partition leader goes down, one of the replicas becomes the leader, and all producers and consumers are notified of the change.
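As a small illustration (not part of the original answer), the Java producer can show the per-partition leaders it discovers; the broker names and the topic test are simply the ones from the question above:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.serialization.StringSerializer;

public class ShowPartitionLeaders {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Any reachable broker works for bootstrapping; the client then discovers the rest of the cluster.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka1:9092,kafka2:9092,kafka3:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // partitionsFor() triggers a metadata fetch, so the client learns the leader
            // of every partition of "test" regardless of which broker answered.
            for (PartitionInfo p : producer.partitionsFor("test")) {
                System.out.printf("partition %d -> leader %s%n", p.partition(), p.leader());
            }
        }
    }
}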

Kafka Inter Broker Communication

I understand producers/consumers need to talk to brokers to learn the leader for a partition. Brokers talk to ZooKeeper to announce that they have joined the cluster.
Is it true that
Brokers know who is the leader for a given partition from zk
zk detects broker left/died. Then it re-elects leader and sends new leader info to all brokers
Question:
Why do we need brokers to communicate with each other? Is it just
so they can move partitions around, or do they also query metadata from each other? If so, what would be an example of a metadata exchange?
Producers/consumers request metadata from one of the brokers (each of them caches it), and that is how they know who the leader for a partition is.
Regarding the "is it true that" section:
Brokers know who the leader for a given partition is thanks to ZooKeeper and one of the brokers. To be more precise, one of them decides who will be the leader. That broker is called the controller. The first broker that connects to ZooKeeper becomes the controller, and its role is to decide which broker will be the leader and which ones will be replicas, and to inform them about it. The controller itself is not excluded from this process; it is a broker like any other, with the additional responsibility of choosing leaders and replicas.
ZooKeeper indeed detects when a broker dies/leaves, but it doesn't re-elect the leader. That is the controller's responsibility. When one of the brokers leaves the cluster, the controller gets the information from ZooKeeper and starts the reassignment.
About your question: brokers do communicate with each other (replicas read messages from leaders, and the controller informs other brokers about changes), but they do not exchange metadata among themselves; they write metadata to ZooKeeper.
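As a rough illustration (not part of the original answer; the bootstrap address is a placeholder), the Java AdminClient can report which broker currently acts as the controller:

import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeClusterResult;
import org.apache.kafka.common.Node;

public class ShowController {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            DescribeClusterResult cluster = admin.describeCluster();
            // The controller is the broker that won the controller election in ZooKeeper.
            Node controller = cluster.controller().get();
            System.out.println("controller broker id: " + controller.id());
            System.out.println("all brokers: " + cluster.nodes().get());
        }
    }
}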
A broker is a Kafka server that runs in a Kafka cluster.
"A Kafka cluster is made up of multiple Kafka Brokers. Each Kafka Broker has a unique ID (number). Kafka Brokers contain topic log partitions. Connecting to one broker bootstraps a client to the entire Kafka cluster"
Each broker holds a number of partitions and each of these partitions can be either a leader or a replica for a topic. All writes and reads to a topic go through the leader and the leader coordinates updating replicas with new data. If a leader fails, a replica takes over as the new leader.

How producers find the Kafka partition leader

The producer sends messages by setting up a list of Kafka brokers as follows.
props.put("bootstrap.servers", "127.0.0.1:9092,127.0.0.1:9092,127.0.0.1:9092");
I wonder "producers" how to know that which of the three brokers knew which one had a partition leader.
For a typical distributed server, either you have a load bearing server or have a virtual IP, but for Kafka, how is it loaded?
Does the producers program try to connect to one broker at random and look for a broker with a partition leader?
A Kafka cluster contains multiple broker instances. For each partition, at any given time exactly one broker is the leader while the remaining replicas are the in-sync replicas (ISR), which hold the replicated data. When the leader broker is taken down unexpectedly, one of the ISR becomes the leader.
Kafka chooses the leader among a partition's replicas using ZooKeeper. When a producer publishes a message to a partition in a topic, it is forwarded to that partition's leader.
According to Kafka documentation:
The partitions of the log are distributed over the servers in the Kafka cluster with each server handling data and requests for a share of the partitions. Each partition is replicated across a configurable number of servers for fault tolerance.
Each partition has one server which acts as the "leader" and zero or more servers which act as "followers". The leader handles all read and write requests for the partition while the followers passively replicate the leader. If the leader fails, one of the followers will automatically become the new leader. Each server acts as a leader for some of its partitions and a follower for others so load is well balanced within the cluster.
You can find the topic and partition leader using this piece of code.
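The linked snippet is not reproduced here; as a rough stand-in (Java AdminClient assumed, topic name and broker address are placeholders), the leader, replicas and ISR of each partition can be listed like this:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

public class FindPartitionLeaders {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Describe the topic and print leader, replicas and ISR per partition.
            TopicDescription topic = admin.describeTopics(Collections.singleton("my-topic"))
                                          .all().get().get("my-topic");
            for (TopicPartitionInfo p : topic.partitions()) {
                System.out.printf("partition %d: leader=%s replicas=%s isr=%s%n",
                        p.partition(), p.leader(), p.replicas(), p.isr());
            }
        }
    }
}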
EDIT:
The producer sends a meta request with a list of topics to one of the brokers you supplied when configuring the producer.
The response from the broker contains a list of partitions in those topics and the leader for each partition. The producer caches this information and therefore, it knows where to redirect the messages.
It's quite an old question, but I had the same question and, after researching it, I want to share the answer because I hope it can help others.
To determine the leader of a partition, the producer uses a request type called a metadata request, which includes a list of topics the producer is interested in.
The broker's response specifies which partitions exist in those topics, the replicas for each partition, and which replica is the leader.
Metadata requests can be sent to any broker because all brokers have a metadata cache that contains this information.

Why is my kafka topic not consumable with a broker down?

My issue is that I have a three-broker Kafka cluster and an availability requirement: I need to be able to consume from and produce to a topic when one or two of my three brokers are down.
I also have a reliability requirement to have a replication factor of 3. These seem to be conflicting requirements to me. Here is how my problem manifests:
I create a new topic with replication factor 3
I send several messages to that topic
I kill one of my brokers to simulate a broker issue
I attempt to consume the topic I created
My consumer hangs
I review my logs and see the error:
Number of alive brokers '2' does not meet the required replication factor '3' for the offsets topic
If I set offsets.topic.replication.factor to 1 on all my brokers, then I'm able to produce and consume my topics, even if I set the topic-level replication factor to 3.
Is this an okay configuration? Or can you see any pitfalls in setting things up this way?
You only need as many brokers as your replication factor when creating the topic.
I'm guessing that in your case you start with a fresh cluster and no consumers have connected yet. In this case, the __consumer_offsets internal topic does not exist, as it is only created when it is first needed. So first connect a consumer for a moment (see the sketch below), and then kill one of the brokers.
Apart from that, in order to consume you only need 1 broker up, the leader for the partition.
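As a hedged sketch of the "connect a consumer for a moment" step above (group id, topic name and broker addresses are placeholders, not from the question), a throwaway consumer like this is enough to make the cluster create __consumer_offsets while all three brokers are still up:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class WarmUpOffsetsTopic {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092,broker3:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "warmup-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // Joining a consumer group is enough to make the cluster create the internal
        // __consumer_offsets topic with the replication factor configured on the brokers.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton("my-topic"));
            consumer.poll(Duration.ofSeconds(5));
        }
    }
}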

One Kafka broker connects to multiple zookeepers

I'm new to Kafka, zookeeper and Storm.
In our environment we have one Kafka broker connecting to multiple ZooKeepers. Is there an advantage to having the producer send messages to a specific topic and partition with one broker connected to multiple ZooKeepers, versus multiple brokers connected to multiple ZooKeepers?
Yes there is. Kafka allows you to scale by adding brokers. When you use a Kafka cluster with a single broker, as you have, all partitions reside on that single broker. But when you have multiple brokers, Kafka will split the partitions between them. So, broker A may be elected leader for partitions 1 and 2 of your topic, and broker B leader for partition 3. So, when you publish messages to the topic, the client will split the messages between the various partitions on the two brokers.
Note that I also mentioned leader election. Adding brokers to your Kafka cluster gives you replication. Kafka uses ZooKeeper to elect a leader for each partition as I mentioned in my example. Once a leader is elected, the client splits messages among partitions and sends each message to the leader for the appropriate partition. Depending on the topic configuration, the leader may synchronously replicate messages to a backup. So, in my example, if the replication factor for the topic is 2 then broker A will synchronously replicate messages for partitions 1 and 2 to broker B and broker B will synchronously replicate messages for partition 3 to broker A.
So, that's all to say that adding brokers gives you both scalability and fault-tolerance.
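To make the example concrete, here is a minimal sketch (not from the original answer) of creating such a topic with the Java AdminClient; the topic name, partition count and bootstrap addresses are placeholders, and the replication factor of 2 matches the example above:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "brokerA:9092,brokerB:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions spread over the brokers, each partition replicated on 2 brokers,
            // so losing one broker still leaves a copy of every partition.
            NewTopic topic = new NewTopic("my-topic", 3, (short) 2);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}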