I have a production Kafka 0.10 cluster with 5 brokers and a replication factor of 3.
The broker IDs are:
101
102
103
104
105
After a couple of months in which the cluster was fine, we observed the following entries in the Kafka server.log.
The log shows many lines with the 'This server is not the leader for that topic-partition' exception.
The topic kopa.thrn.bvff has 100 partitions,
and all 100 partitions are balanced, so there is no need to run kafka-reassign-partitions.
What could be the possible reason?
Please help me.
[2023-01-19 11:53:37,434] ERROR [ReplicaFetcherThread-0-101], Error for partition [kopa.thrn.bvff,78] to broker 101:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
[2023-01-19 11:53:37,434] ERROR [ReplicaFetcherThread-0-101], Error for partition [kopa.thrn.bvff,23] to broker 101:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
[2023-01-19 11:53:37,434] ERROR [ReplicaFetcherThread-0-101], Error for partition [kopa.thrn.bvff,63] to broker 101:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
[2023-01-19 11:53:37,434] ERROR [ReplicaFetcherThread-0-101], Error for partition [kopa.thrn.bvff,98] to broker 101:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
[2023-01-19 11:53:37,434] ERROR [ReplicaFetcherThread-0-101], Error for partition [kopa.thrn.bvff,3] to broker 101:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
Each partition in Kafka is managed by a leader broker and follower brokers. Since you have a replication factor of 3, each partition has one leader broker and two follower brokers.
When a Kafka producer produces data, it connects to the leader of the partition and writes the data there; the followers then copy the data from the leader.
Leadership of a partition can change based on the leader's availability: if the leader was unavailable for some time for any reason common in a distributed environment (busy CPU, network partition, etc.), Kafka runs a leader election for that partition to elect a new leader.
You can see which broker is the leader and which are the followers with the topic describe command, as shown below.
In your case, the partition leaders have changed because the previous leaders were temporarily unavailable. If you collect Kafka metrics, you should be able to see the leader election events for those partitions. In a distributed environment it is hard to guarantee that one broker will remain the leader forever.
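A minimal check, assuming ZooKeeper is reachable at zkhost:2181 (a placeholder for your own quorum):

bin/kafka-topics.sh --describe --zookeeper zkhost:2181 --topic kopa.thrn.bvff

The output lists, for every partition, the Leader, Replicas and Isr columns, so you can confirm which brokers currently lead the partitions that show up in the errors.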
Related
I noticed the following message in the Kafka broker logs:
Partition <topic name> marked as failed (kafka.server.ReplicaFetcherThread)
My question is: how and when does Kafka mark a partition as failed?
I need to monitor the logs and take action on such messages.
Is this message actionable or something that can be ignored?
We are getting many errors in server.log on each of the 3 Kafka machines (we have 3 Kafka brokers in the cluster):
[2019-12-05 13:25:09,529] ERROR [ReplicaFetcherThread-0-1], Error for partition [jdty.dee.rules.time,91] to broker 1001:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
[2019-12-05 13:25:09,529] ERROR [ReplicaFetcherThread-0-1], Error for partition [jdty.dee.export.profiles,96] to broker 1001:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
[2019-12-05 13:25:09,529] ERROR [ReplicaFetcherThread-0-1], Error for partition [jdty.dee.control.tt.state,40] to broker 1001:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
[2019-12-05 13:25:09,529] ERROR [ReplicaFetcherThread-0-1], Error for partition [jdty.dee.control.tt,67] to broker 1001:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
From my understanding, each topic partition is served by one or more brokers: one is the leader and the remaining brokers are followers.
A producer needs to send new messages to the leader broker, which internally replicates the data to all followers.
I assume that my producer client does not connect to the correct broker; it connects to a follower instead of the leader, and this follower rejects the send request.
So my question is: how should I configure the producer in order to avoid these errors?
Follower brokers fetch from the leader broker (the leader does not push to the followers). Hence, these errors mean that a follower broker is trying to fetch from a broker that is no longer the leader; this can happen if the leader of a partition changed. The follower broker should update its cluster metadata automatically and rediscover the new leader. If the error persists, it indicates that this follower broker has issues updating its metadata. Note that the errors are logged by the brokers' ReplicaFetcherThread, not by your producer, so this is not something producer configuration can fix.
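A quick way to check whether the followers have settled again, assuming ZooKeeper at zkhost:2181 (a placeholder; on newer broker versions use --bootstrap-server instead of --zookeeper):

bin/kafka-topics.sh --describe --zookeeper zkhost:2181 --under-replicated-partitions
bin/kafka-topics.sh --describe --zookeeper zkhost:2181 --unavailable-partitions

If both commands print nothing, leadership has stabilized and the followers have caught up; if the same partitions keep showing up, the corresponding broker is not refreshing its metadata.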
I have been writing a Kafka producer example in Java. I send simple data, just "Test" + an integer, as the value. I use the properties below; after I start the producer client and messages are in flight, I kill a broker and suddenly receive the error below instead of the producer retrying.
The setup is 3 brokers and a topic with 3 partitions, a replication factor of 3, and no min.insync.replicas configured.
Below are the properties configured:

Properties config = new Properties();
// (bootstrap.servers and the key/value serializers are not shown here)
config.put(ProducerConfig.ACKS_CONFIG, "all");
config.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "1");
config.put(CommonClientConfigs.RETRIES_CONFIG, 60);
config.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
config.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 10000);
config.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 30000);
config.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 10000);
config.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 1048576);
config.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
config.put(ProducerConfig.LINGER_MS_CONFIG, 0);
config.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 1073741824); // 1 GB
The result when I kill all of the brokers, or sometimes just one of them, is as below.
Error:
WARN org.apache.kafka.clients.producer.internals.Sender - [Producer
clientId=producer-1] Got error produce response with correlation id 124
on topic-partition testing001-0, retrying (59 attempts left). Error:
NETWORK_EXCEPTION
27791 [kafka-producer-network-thread | producer-1] WARN
org.apache.kafka.clients.producer.internals.Sender - [Producer
clientId=producer-1] Received invalid metadata error in produce request
on partition testing001-0 due to
org.apache.kafka.common.errors.NetworkException: The server disconnected
before a response was received.. Going to request metadata update now
28748 [kafka-producer-network-thread | producer-1] ERROR
org.apache.kafka.common.utils.KafkaThread - Uncaught exception in thread
'kafka-producer-network-thread | producer-1':
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(Unknown Source)
at java.nio.ByteBuffer.allocate(Unknown Source)
at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate
(MemoryPool.java:30)
at org.apache.kafka.common.network.NetworkReceive.readFrom
(NetworkReceive.java:112)
at org.apache.kafka.common.network.KafkaChannel.receive
(KafkaChannel.java:335)
at org.apache.kafka.common.network.KafkaChannel.read
(KafkaChannel.java:296)
at org.apache.kafka.common.network.Selector.attemptRead
(Selector.java:560)
at org.apache.kafka.common.network.Selector.pollSelectionKeys
(Selector.java:496)
at org.apache.kafka.common.network.Selector.poll(Selector.java:425)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:510)
at org.apache.kafka.clients.producer.internals.Sender.run
(Sender.java:239)
at org.apache.kafka.clients.producer.internals.Sender.run
(Sender.java:163)
at java.lang.Thread.run(Unknown Source)
I assume you are testing the producer. When a producer connects to the Kafka cluster, you pass all broker addresses (host:port) as a comma-separated string; in your case there are three brokers. When the producer connects, the cluster responds with its metadata as part of initialization. Assume your producer writes messages to a single topic only. The cluster maintains a leader broker for every partition of that topic; after identifying the leader, your producer communicates only with that leader for as long as it is alive.
In your testing scenario you are deliberately killing broker instances. When that happens, the cluster needs to elect a new leader for the affected partitions and the controller has to propagate the new metadata to your producer. If the metadata changes frequently (in your case you may kill another broker in the meantime), the producer may receive invalid metadata.
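A minimal sketch of the connection side of such a producer, assuming the hostnames broker1/broker2/broker3 and the topic testing001 (placeholders); listing every broker in bootstrap.servers lets the client fetch fresh metadata from a surviving broker when one is killed:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // List all brokers so metadata can still be refreshed if one of them goes down.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092,broker3:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.RETRIES_CONFIG, 60);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The client refreshes metadata and retries while partition leadership moves between brokers.
            producer.send(new ProducerRecord<>("testing001", "Test" + 1));
        }
    }
}

With all three brokers in the list, killing a single broker still leaves the client somewhere to rediscover the new partition leaders from; killing all brokers, as in your test, leaves it nothing to refresh metadata from.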
I reconfigured my Kafka cluster, changing:
the default replication factor from 1 to 3, and
the location of the Kafka data directory on disk.
After restarting all nodes the cluster seemed OK, but then I noticed that none of the existing topics were coming online. The logs contain messages like this for each topic:
state-change.log: [2018-02-01 12:41:42,176] ERROR Controller 826437096 epoch 19 initiated state change for partition [filedrop,0] from OfflinePartition to OnlinePartition failed (state.change.logger)
So none of the topics are usable; listing topics with kafkacat -L -b shows that the leaders are not available:
Metadata for all topics (from broker -1: lol-045:9092/bootstrap):
7 brokers:
broker 826437096 at lol-044:9092
broker 746155422 at lol-047:9092
broker 651737161 at lol-046:9092
broker 728512596 at lol-048:9092
broker 213763378 at lol-045:9092
broker 622553932 at lol-049:9092
broker 746727274 at lol-050:9092
14 topics:
topic "lol.stripped" with 3 partitions:
partition 2, leader -1, replicas:, isrs:, Broker: Leader not available
partition 1, leader -1, replicas:, isrs:, Broker: Leader not available
partition 0, leader -1, replicas:, isrs:, Broker: Leader not available
However, newly created topics are correctly replicated and healthy:
topic "lol-kafka-health" with 3 partitions:
partition 2, leader 622553932, replicas: 622553932,213763378,651737161, isrs: 622553932,213763378,651737161
partition 1, leader 213763378, replicas: 622553932,213763378,826437096, isrs: 213763378,826437096,622553932
partition 0, leader 826437096, replicas: 213763378,746727274,826437096, isrs: 826437096,746727274,213763378
So I think some kind of metadata corruption happened during the reconfiguration.
My question is:
Is there any way I can get these topic partitions back online?
Given that:
the broker IDs were changed during the reconfiguration
the ZooKeeper cluster for Kafka went down temporarily during the reconfiguration
In addition, are there some procedures I can use to investigate how recoverable these topics are?
Many thanks in advance!
The procedure described here allowed me to reassign the leaderless partitions to leaders with the new broker IDs:
https://community.cloudera.com/t5/Data-Ingestion-Integration/Move-partitions-from-invalid-leader/td-p/43334
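In essence, that procedure boils down to writing a reassignment JSON that maps each offline partition onto the new broker IDs and feeding it to kafka-reassign-partitions.sh. A rough sketch, using broker IDs from the listing above and zkhost:2181 as a placeholder for the ZooKeeper quorum:

cat > reassign.json <<'EOF'
{"version":1,"partitions":[
  {"topic":"lol.stripped","partition":0,"replicas":[826437096,213763378,651737161]},
  {"topic":"lol.stripped","partition":1,"replicas":[213763378,651737161,622553932]},
  {"topic":"lol.stripped","partition":2,"replicas":[651737161,622553932,826437096]}
]}
EOF

bin/kafka-reassign-partitions.sh --zookeeper zkhost:2181 --reassignment-json-file reassign.json --execute
bin/kafka-reassign-partitions.sh --zookeeper zkhost:2181 --reassignment-json-file reassign.json --verify

Repeat for each affected topic, then re-run kafkacat -L to check that every partition has a live leader again.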
I had been running my services that work with Kafka for a year already, and no spontaneous leader changes happened.
But for the last two weeks this has started to happen quite often.
The Kafka log for such an event:
[2015-09-27 15:35:14,826] INFO [ReplicaFetcherManager on broker 2]
Removed fetcher for partitions [myTopic] (kafka.server.ReplicaFetcherManager)
[2015-09-27 15:35:14,830] INFO Truncating log myTopic-0 to offset 11520979. (kafka.log.Log)
[2015-09-27 15:35:14,845] WARN [Replica Manager on Broker 2]: Fetch request with correlation id 713276 from client ReplicaFetcherThread-0-2 on partition [myTopic,0] failed due to Leader not local for partition [myTopic,0] on broker 2 (kafka.server.ReplicaManager)
[2015-09-27 15:35:14,857] WARN [Replica Manager on Broker 2]: Fetch request with correlation id 256685 from client mirrormaker-1 on partition [myTopic,0] failed due to Leader not local for partition [myTopic,0] on broker 2 (kafka.server.ReplicaManager)
[2015-09-27 15:35:20,171] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions [myTopic,0] (kafka.server.ReplicaFetcherManager)
What can cause a leader switch? If this is covered somewhere in the Kafka documentation, please just point me to the link; I have failed to find it.
System configuration
kafka version: kafka_2.10-0.8.2.1
os: Red Hat Enterprise Linux Server release 6.5 (Santiago)
server.properties (differs from default):
broker.id=001
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.flush.interval.messages=10000
log.flush.interval.ms=1000
log.retention.bytes=-1
controlled.shutdown.enable=true
auto.create.topics.enable=false
It appears that the leader broker is down for that partition. It might be that the data directory (log.dirs) configured in server.properties has run out of space and the broker cannot accommodate more data.
Also, what is the replication factor of the topic, and how many brokers are in the cluster?
I am assuming you have one topic with one partition and a replication factor of 2, which is not a good configuration for optimal Kafka performance and for consumers.
Your logs are not clear enough to explain the leader switch. The main issue with your topic may be that it has only one leader because it has only one partition, so the single log file keeps growing in size day by day. Kafka does some internal rebalancing at some level (details not confirmed), and that could be the reason for your leader switch, but I am not sure.
Also, your second log line says the log was truncated. Can you please go through the logs in detail and check whether this happens only after a truncation?
As you mentioned, you have already checked your Kafka log directory files and their sizes. Please run the describe command when you hit this issue; the leader switch will show up there as well. Alternatively, set up a dashboard that displays the leader over time; then it will be easy for you to find the root cause.
bin/kafka-topics.sh --describe --zookeeper Zookeeperhost:Port --topic TopicName
Suggestion: I would suggest creating a new topic with more partitions (read the Kafka documentation to get a good idea of the optimal number of partitions) and starting to write to it. Or you can increase the partition count of the current topic, as sketched below.
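Increasing the partition count can be done with the same tool as the describe command above (TopicName and the ZooKeeper address are placeholders; note that adding partitions changes how keyed messages map to partitions):

bin/kafka-topics.sh --alter --zookeeper Zookeeperhost:Port --topic TopicName --partitions 3

Here 3 is just an example target count; it must be higher than the topic's current number of partitions.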
Last thing: is the leader switch causing issues for your clients, or are you only worried about the warnings?