After an unexpected shutdown of brokers, some topic partitions remain offline even though all the brokers are back up and running. Does anyone know the solution for this issue?
2019-05-17T10:40:32,379 [myid:] - INFO [controller-event-thread:Logging$class#70] - [Controller 3]: Starting preferred replica leader election for partitions [topic,9]
2019-05-17T10:40:32,379 [myid:] - INFO [controller-event-thread:Logging$class#70] - [Partition state machine on Controller 3]: Invoking state change to OnlinePartition for partitions [topic,9]
2019-05-17T10:40:32,380 [myid:] - INFO [controller-event-thread:Logging$class#70] - [PreferredReplicaPartitionLeaderSelector]: Current leader -1 for partition [topic,9] is not the preferred replica. Triggering preferred replica leader election
2019-05-17T10:40:32,380 [myid:] - WARN [controller-event-thread:Logging$class#85] - [Controller 3]: Partition [topic,9] failed to complete preferred replica leader election. Leader is -1
My colleague and I just ran into a similar problem; in our case, however, we were trying to delete a topic that had offline partitions. The key to your problem is that your leader is -1.
The way we fixed this was by manually editing the znode in Zookeeper to point the leader to a broker that was online and then doing a rolling restart of the cluster. Using the Zookeeper CLI, get the following znode:
/brokers/topics/<my-topic>/partitions/0/state.
In our case it returned:
{"controller_epoch":52,"leader":-1,"version":1,"leader_epoch":35,"isr":[5]}
Notice that the leader is -1. You might try updating the znode, setting the leader to a broker that is up and running.
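For reference, this is roughly what the fix looked like for us in the Zookeeper CLI (zkCli.sh). The values below are just our example output; substitute your own topic, epochs, and a broker ID that is alive and preferably already in the ISR:
get /brokers/topics/<my-topic>/partitions/0/state
{"controller_epoch":52,"leader":-1,"version":1,"leader_epoch":35,"isr":[5]}
set /brokers/topics/<my-topic>/partitions/0/state {"controller_epoch":52,"leader":5,"version":1,"leader_epoch":35,"isr":[5]}
After updating the znode we did the rolling restart mentioned above so the controller and brokers picked up the change.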
I have a 5-broker Kafka 0.10 cluster with replication factor 3, and this is a production cluster.
The broker IDs are:
101
102
103
104
105
After a couple of months in which the cluster was fine, we observed the following logs in the Kafka server.log.
From the log we can see many lines of the 'This server is not the leader for that topic-partition' exception.
The topic kopa.thrn.bvff has 100 partitions,
and we can see that all 100 partitions are balanced, so there is no need to run kafka-reassign-partitions.
What could be the possible reason?
Please help me.
[2023-01-19 11:53:37,434] ERROR [ReplicaFetcherThread-0-101], Error for partition [kopa.thrn.bvff,78] to broker 101:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
[2023-01-19 11:53:37,434] ERROR [ReplicaFetcherThread-0-101], Error for partition [kopa.thrn.bvff,23] to broker 101:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
[2023-01-19 11:53:37,434] ERROR [ReplicaFetcherThread-0-101], Error for partition [kopa.thrn.bvff,63] to broker 101:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
[2023-01-19 11:53:37,434] ERROR [ReplicaFetcherThread-0-101], Error for partition [kopa.thrn.bvff,98] to broker 101:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
[2023-01-19 11:53:37,434] ERROR [ReplicaFetcherThread-0-101], Error for partition [kopa.thrn.bvff,3] to broker 101:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
Each partition in Kafka is managed by a leader broker and follower brokers. Since you have replication factor 3, each partition will have one leader broker and two follower brokers.
When a Kafka producer produces data, it connects to the leader and writes the data there; the followers then copy the data from the leader.
Now, partition leadership can be reassigned based on the leader's availability: if the leader is unavailable for some time for any reason in a distributed environment (busy CPU, network partition, etc.), Kafka will run a leader election for the partition to elect a new leader.
You can see who the leader is and who the followers are with the topic describe command.
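A minimal sketch, assuming the zookeeper-based tooling that ships with Kafka 0.10 and a placeholder zookeeper address:
bin/kafka-topics.sh --describe --zookeeper <zookeeper-host>:2181 --topic kopa.thrn.bvff
The Leader column shows the current leader broker ID for each partition, and Isr shows the in-sync replicas.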
In your case, the partition leader has changed due to some unavailability of the previous leader. If you have Kafka metrics, you should be able to see those leader election events for the partition. In a distributed environment it is hard to ensure that one broker will remain the leader forever.
We are getting many errors in server.log on each of our 3 Kafka machines (we have 3 Kafka brokers in the cluster):
[2019-12-05 13:25:09,529] ERROR [ReplicaFetcherThread-0-1], Error for partition [jdty.dee.rules.time,91] to broker 1001:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
[2019-12-05 13:25:09,529] ERROR [ReplicaFetcherThread-0-1], Error for partition [jdty.dee.export.profiles,96] to broker 1001:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
[2019-12-05 13:25:09,529] ERROR [ReplicaFetcherThread-0-1], Error for partition [jdty.dee.control.tt.state,40] to broker 1001:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
[2019-12-05 13:25:09,529] ERROR [ReplicaFetcherThread-0-1], Error for partition [jdty.dee.control.tt,67] to broker 1001:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
From my understanding, each topic partition is served by one or multiple brokers - one is the leader and the remaining brokers are followers.
A producer needs to send new messages to the leader broker, which internally replicates the data to all followers.
I assume that the producer client does not connect to the correct broker: it connects to a follower instead of the leader, and this follower rejects the send request.
So my question is: how should I configure the producer in order to avoid these errors?
Follower brokers fetch from the leader broker (the leader does not push to followers). Hence, it seems that a follower broker is trying to fetch from a broker that is no longer the leader of the partition. This can happen if the leader of a partition changed. The corresponding follower broker should update its cluster metadata automatically to rediscover the new leader. If the error persists, it indicates that this follower broker has issues updating its metadata.
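If you want to confirm whether that follower is actually falling behind while the errors occur, one illustrative check (assuming the zookeeper-based tooling of this Kafka version and a placeholder zookeeper address) is:
bin/kafka-topics.sh --describe --zookeeper <zookeeper-host>:2181 --under-replicated-partitions
An empty result means all followers are in sync; partitions that stay listed here while the errors repeat point at the follower whose metadata is stale.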
We have a cluster of 4 nodes in production. We observed that one of the nodes ran into a situation where it constantly shrank and expanded the ISR for more than an hour and was unable to recover until the broker was bounced.
[2017-02-21 14:52:16,518] INFO Partition [skynet-large-stage,5] on broker 0: Shrinking ISR for partition [skynet-large-stage,5] from 2,0 to 0 (kafka.cluster.Partition)
[2017-02-21 14:52:16,543] INFO Partition [skynet-large-stage,37] on broker 0: Shrinking ISR for partition [skynet-large-stage,37] from 1,0 to 0 (kafka.cluster.Partition)
[2017-02-21 14:52:16,544] INFO Partition [skynet-large-stage,13] on broker 0: Shrinking ISR for partition [skynet-large-stage,13] from 1,0 to 0 (kafka.cluster.Partition)
[2017-02-21 14:52:16,545] INFO Partition [__consumer_offsets,46] on broker 0: Shrinking ISR for partition [__consumer_offsets,46] from 3,2,0 to 3,0 (kafka.cluster.Partition)
.
.
I'd like to know what would cause this issue and why the broken broker was not kicked out of the ISR.
Kafka version is 0.10.1.0
There was a bug, KAFKA-4477, that got fixed, but in general I've seen this same problem when Kafka brokers time out talking to a zookeeper node (the default is a 6000 ms timeout) because of some transient network blip. At that point they get kicked out of the cluster, partition leadership changes, clients have to rebalance, etc. For high-volume clusters it's a pain.
Simply increasing this timeout has helped me several times before:
zookeeper.session.timeout.ms
The default value according to the official docs is 6000 ms. I found that simply increasing it to 15000 ms made the cluster rock solid.
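For example, in server.properties on each broker (15000 ms is just the value that worked for me; tune it for your environment and restart the brokers for it to take effect):
# raise the zookeeper session timeout from the 6000 ms default
zookeeper.session.timeout.ms=15000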
Documentation for 0.11.0 Kafka version: https://kafka.apache.org/0110/documentation.html
Today I ran into an issue: broker 2 does not exist in zookeeper. I thought broker 2 was down, but it is still running fine. I checked the zookeeper log, and
it only mentions "Established session 0x153514345be0321 with negotiated timeout 6000 for client broker 2".
From the removed broker's server and state-change logs:
[2016-03-08 06:00:00,257] ERROR Controller 2 epoch 19 initiated state change for partition [robotEvents,186] from OfflinePartition to OnlinePartition failed (state.change.logger)
kafka.common.StateChangeFailedException: encountered error while electing leader for partition [***,186] due to: aborted leader election for partition [****,186] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 2 went through a soft failure and another controller was elected with epoch 20..
A lot of these kinds of errors occur. From the source code, this error seems to be a normal part of leader election, but it doesn't explain why broker 2 was removed from zk. Any idea?
Thanks in advance
I have been running my services that work with Kafka for a year already, and no spontaneous leader changes happened.
But for the last 2 weeks this has started happening quite often.
The Kafka log shows:
[2015-09-27 15:35:14,826] INFO [ReplicaFetcherManager on broker 2]
Removed fetcher for partitions [myTopic] (kafka.server.ReplicaFetcherManager)
[2015-09-27 15:35:14,830] INFO Truncating log myTopic-0 to offset 11520979. (kafka.log.Log)
[2015-09-27 15:35:14,845] WARN [Replica Manager on Broker 2]: Fetch request with correlation id 713276 from client ReplicaFetcherThread-0-2 on partition [myTopic,0] failed due to Leader not local for partition [myTopic,0] on broker 2 (kafka.server.ReplicaManager)
[2015-09-27 15:35:14,857] WARN [Replica Manager on Broker 2]: Fetch request with correlation id 256685 from client mirrormaker-1 on partition [myTopic,0] failed due to Leader not local for partition [myTopic,0] on broker 2 (kafka.server.ReplicaManager)
[2015-09-27 15:35:20,171] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions [myTopic,0] (kafka.server.ReplicaFetcherManager)
What can cause the leader to switch? If there is info on this in the Kafka documentation, please just point me to the link; I've failed to find it.
System configuration
kafka version: kafka_2.10-0.8.2.1
os: Red Hat Enterprise Linux Server release 6.5 (Santiago)
server.properties (differs from default):
broker.id=001
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.flush.interval.messages=10000
log.flush.interval.ms=1000
log.retention.bytes=-1
controlled.shutdown.enable=true
auto.create.topics.enable=false
It appears the lead broker is down for that partition. It might be that the data directory (log.dirs) configured in server.properties is out of space and the broker is not able to accommodate more data.
Also, what is the replication factor of the topic and how many brokers are in the cluster?
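A quick way to rule the disk-space theory in or out is to check the filesystem that holds your log.dirs path on the affected broker; the path below is only a placeholder, use whatever log.dirs points to in your server.properties:
df -h /var/lib/kafka/logs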
I am assuming you have one topic and one partition with a replication factor of 2, which is not a good configuration for optimal Kafka performance and consumers.
Your logs are not clear enough to explain the leader switch. The major issue with your topic may be that it has only one leader because it has only one partition, and that single log file is getting bigger day by day. Kafka internally does rebalancing at some level (details not confirmed); that could be the reason for your leader switch, but I am not sure.
Also, your 2nd log line says the log was truncated. Can you please go through the logs in detail and check whether this happens only after truncation?
As you mentioned, you already checked your Kafka log directory files and their sizes. Please run the describe command when you hit this issue; the leader switch will be reflected there as well. Or, if you can set up a dashboard that displays the leader over time, it will be easier for you to find the root cause.
bin/kafka-topics.sh --describe --zookeeper Zookeeperhost:Port --topic TopicName
Suggestion: I suggest you create a new topic with more partitions (read the Kafka documentation to get a good idea of the optimum number of partitions) and start writing to it. Or you can look into how to add partitions to the current topic, as sketched below.
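If you do add partitions to the existing topic, a rough sketch of the command (same placeholder Zookeeper host and topic name as above; the partition count 8 is only an example) would be:
bin/kafka-topics.sh --alter --zookeeper Zookeeperhost:Port --topic TopicName --partitions 8
Keep in mind that adding partitions changes the key-to-partition mapping for keyed messages, so be careful if your consumers rely on per-key ordering.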
Last thing: is the leader switch causing issues in your clients, or are you only worried about the warnings?