How to fix the Java Kafka producer error "Received invalid metadata error in produce request on partition" and OutOfMemoryError when a broker is down - apache-kafka

I have been creating a Kafka producer example in Java. I am sending simple data, just "Test" + an integer as the value. I have configured the properties below, and after starting the producer client, while messages are in flight, I kill a broker. Instead of retrying, the producer suddenly fails with the error below.
I am using 3 brokers and a topic with 3 partitions, replication factor 3, and no min.insync.replicas set.
Below are the configured properties:
config.put(ProducerConfig.ACKS_CONFIG, "all");
config.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "1");
config.put(CommonClientConfigs.RETRIES_CONFIG, 60);
config.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
config.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG ,10000);
config.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG ,30000);
config.put(ProducerConfig.MAX_BLOCK_MS_CONFIG ,10000);
config.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG , 1048576);
config.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
config.put(ProducerConfig.LINGER_MS_CONFIG, 0);
config.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 1073741824); // 1GB
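For completeness, the send loop is essentially the following (a simplified sketch: the bootstrap servers are placeholders, the serializers are assumptions, and the topic name testing001 is taken from the error logs below):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerTest {
    public static void main(String[] args) {
        Properties config = new Properties();
        // Placeholder addresses; the real setup uses the three brokers mentioned above.
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092,broker3:9092");
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // ... plus the settings listed above (acks, retries, idempotence, buffer.memory, etc.) ...

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(config)) {
            for (int i = 0; i < 1_000_000; i++) {
                // Value is just "Test" + a running integer, as described above.
                producer.send(new ProducerRecord<>("testing001", "Test" + i));
            }
        }
    }
}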
The result when I kill all of my brokers, or sometimes just one of them, is the following.
**Error:**
WARN org.apache.kafka.clients.producer.internals.Sender - [Producer
clientId=producer-1] Got error produce response with correlation id 124
on topic-partition testing001-0, retrying (59 attempts left). Error:
NETWORK_EXCEPTION
27791 [kafka-producer-network-thread | producer-1] WARN
org.apache.kafka.clients.producer.internals.Sender - [Producer
clientId=producer-1] Received invalid metadata error in produce request
on partition testing001-0 due to
org.apache.kafka.common.errors.NetworkException: The server disconnected
before a response was received.. Going to request metadata update now
28748 [kafka-producer-network-thread | producer-1] ERROR
org.apache.kafka.common.utils.KafkaThread - Uncaught exception in thread
'kafka-producer-network-thread | producer-1':
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(Unknown Source)
at java.nio.ByteBuffer.allocate(Unknown Source)
at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:112)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:335)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:296)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:560)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:496)
at org.apache.kafka.common.network.Selector.poll(Selector.java:425)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:510)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)
at java.lang.Thread.run(Unknown Source)

I assume you are testing the producer. When a producer connects to the Kafka cluster, you pass all broker IPs and ports as a comma-separated string; in your case there are three brokers. As part of initialization, the cluster controller responds with the cluster metadata. Assume your producer is publishing to a single topic. The cluster maintains a leader among the brokers for each topic partition, and after identifying the leader, your producer communicates only with that leader for as long as it is alive.
In your test scenario you are deliberately killing broker instances. When that happens, the Kafka cluster needs to elect a new leader for your topic and the controller has to pass the new metadata to your producer. If the metadata changes frequently (in your case you may kill another broker in the meantime), the producer may receive invalid metadata.
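If you want to see this happening from the client side, a quick sketch (not part of your original code) is to dump the partition metadata the producer currently holds while you kill brokers; KafkaProducer.partitionsFor() returns the producer's current view of the leaders:

import java.util.List;
import org.apache.kafka.common.PartitionInfo;

// Sketch: print the leader the producer currently knows for each partition of the topic.
// "producer" is the already-configured KafkaProducer<String, String> from the question.
List<PartitionInfo> partitions = producer.partitionsFor("testing001");
for (PartitionInfo p : partitions) {
    System.out.println("partition " + p.partition()
            + " leader=" + (p.leader() == null ? "none" : p.leader().idString())
            + " in-sync replicas=" + p.inSyncReplicas().length);
}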

Related

Kafka producer does not signal that all brokers are unreachable

When all brokers/nodes of a cluster are unreachable, the error in the Kafka producer callback is a generic "Topic XXX not present in metadata after 60000 ms".
When I activate the DEBUG log level, I can see that all attempts to deliver the message to any node are failing:
DEBUG org.apache.kafka.clients.NetworkClient - Initialize connection to node node2.url:443 (id: 2 rack: null) for sending metadata request
DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node node2.url:443 (id: 2 rack: null) using address node2.url:443/X.X.X.X:443
....
DEBUG org.apache.kafka.clients.NetworkClient - Disconnecting from node 2 due to socket connection setup timeout. The timeout value is 16024 ms.
DEBUG org.apache.kafka.clients.NetworkClient - Initialize connection to node node0.url:443 (id: 0 rack: null) for sending metadata request
DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node node0.url:443 (id: 0 rack: null) using address node0.url:443/X.X.X.X:443
....
DEBUG org.apache.kafka.clients.NetworkClient - Disconnecting from node 0 due to socket connection setup timeout. The timeout value is 17408 ms.
and so on, until, after the deliver timeout, the send() Callback gets the error:
ERROR my.kafka.SenderClass - Topic XXX not present in metadata after 60000 ms.
Unlike the bootstrap URL, all nodes could be unreachable, for example because of wrong DNS entries or similar.
How can the application understand that all nodes were not reachable? This is traced only as DEBUG information and is not available to the producer send() callback.
Such error detail at the application level would speed up troubleshooting.
This kind of error is usually signaled by standard SOAP/REST web service interfaces.
The producer only cares about the cluster Controller for bootstrapping and the leaders of the partitions it needs to write to (one of those leaders could be the Controller). That being said, it doesn't need to know about "all" brokers.
How can the application understand that all nodes were not reachable?
If you set acks=1 or acks=all, then the callback should know at least one broker had the data written. If not, there was some error.
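For example, checking the exception in the send() callback (a sketch; topic and value are placeholders) surfaces the failure at the application level rather than only at DEBUG:

// Sketch: with acks=all, a null exception in the callback means the leader and all
// in-sync replicas acknowledged the write.
producer.send(new ProducerRecord<>("XXX", "value"), (metadata, exception) -> {
    if (exception == null) {
        System.out.println("Written to " + metadata.topic() + "-" + metadata.partition()
                + " at offset " + metadata.offset());
    } else if (exception instanceof org.apache.kafka.common.errors.TimeoutException) {
        // The generic "Topic XXX not present in metadata after 60000 ms" style errors
        // end up here when no broker could be reached in time.
        System.err.println("Delivery timed out; brokers may be unreachable: " + exception.getMessage());
    } else {
        System.err.println("Send failed: " + exception);
    }
});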
You can use an AdminClient, outside of the Producer client, to describe the topic(s) and fetch metadata about the partition leaders, then use standard TCP socket requests from Java to try to reach those advertised listeners.
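A rough sketch of that approach (topic name, bootstrap servers and timeout are placeholders; if the describeTopics() call itself times out, none of the bootstrap brokers were reachable):

import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.Node;

public class LeaderReachabilityCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "node0.url:443,node1.url:443,node2.url:443");

        try (AdminClient admin = AdminClient.create(props)) {
            // Fetch current metadata for the topic, including the leader of each partition.
            TopicDescription desc = admin.describeTopics(Collections.singletonList("XXX"))
                    .all().get().get("XXX");
            desc.partitions().forEach(p -> {
                Node leader = p.leader();
                if (leader == null || leader.isEmpty()) {
                    System.err.println("Partition " + p.partition() + " currently has no leader");
                    return;
                }
                // Plain TCP connect to the advertised listener to check basic reachability.
                try (Socket socket = new Socket()) {
                    socket.connect(new InetSocketAddress(leader.host(), leader.port()), 5000);
                    System.out.println("Leader " + leader + " of partition " + p.partition() + " is reachable");
                } catch (Exception e) {
                    System.err.println("Leader " + leader + " of partition " + p.partition() + " is unreachable: " + e);
                }
            });
        }
    }
}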
FWIW, port 443 should ideally be reserved for HTTPS traffic, not Kafka. Kafka is not a REST/SOAP service.

kafka + This server is not the leader for that topic-partition + kafka producer

We are getting many errors in server.log on each of our 3 Kafka machines (we have 3 Kafka brokers in the cluster):
[2019-12-05 13:25:09,529] ERROR [ReplicaFetcherThread-0-1], Error for partition [jdty.dee.rules.time,91] to broker 1001:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
[2019-12-05 13:25:09,529] ERROR [ReplicaFetcherThread-0-1], Error for partition [jdty.dee.export.profiles,96] to broker 1001:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
[2019-12-05 13:25:09,529] ERROR [ReplicaFetcherThread-0-1], Error for partition [jdty.dee.control.tt.state,40] to broker 1001:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
[2019-12-05 13:25:09,529] ERROR [ReplicaFetcherThread-0-1], Error for partition [jdty.dee.control.tt,67] to broker 1001:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
From my understanding, each topic is served by one or multiple brokers: one is the leader and the remaining brokers are followers.
A producer needs to send new messages to the leader broker, which internally replicates the data to all followers.
I assume that the producer client does not connect to the correct broker; it connects to a follower instead of the leader, and this follower rejects the send request.
So my question is: how should the producer be configured in order to avoid these errors?
Follower brokers fetch from the leader broker (the leader does not push to followers). Hence, it seems that a follower broker is trying to fetch from the wrong (former leader) broker. This can happen if the leader of a partition has changed. The corresponding follower broker should update its cluster metadata automatically and rediscover the new leader. If the error persists, it indicates that this follower broker has issues updating its metadata.

Subset of stream's changelog and repartition partitions not available as broker is down - how stream should behave?

My setup consists of 3 kafka brokers (2.11-1.1.1), a single ZK and a java service that is using the Streams API.
The java service is consuming from topic A, performs a persistent stream operation (backed up by a changelog and a repartition streams topic) and writes to topic B. EOS semantics are enabled.
Given that the changelog and repartition topics have a replication factor of 1, how should the Streams Java app behave when one of my brokers is down (e.g. in my DEV env the disk is full on only one broker)? Will the stream continue to consume even if 1/3 of the changelog and repartition partitions are not reachable?
EDIT: Also given that topics A, B and __consumer_offsets have RF=3.
In my java service logs I see:
2019-01-04 09:14:38,787 UTC WARN kafka-producer-network-thread | trsb-app-
nonprod.snapshot-14fa12b2-ac15-4ecc-8729-8f6c4a0034a7-StreamThread-2-0_4-
producer org.apache.kafka.clients.NetworkClient warn | [Producer
clientId=trsb-app-nonprod.snapshot-14fa12b2-ac15-4ecc-8729-8f6c4a0034a7-
StreamThread-2-0_4-producer, transactionalId=trsb-app-nonprod.snapshot-0_4]
Connection to node 1 could not be established. Broker may not be available.
2019-01-04 09:14:38,797 UTC WARN kafka-producer-network-thread | trsb-app-
nonprod.snapshot-14fa12b2-ac15-4ecc-8729-8f6c4a0034a7-StreamThread-2-1_10-
producer org.apache.kafka.clients.NetworkClient warn | [Producer
clientId=trsb-app-nonprod.snapshot-14fa12b2-ac15-4ecc-8729-8f6c4a0034a7-
StreamThread-2-1_10-producer, transactionalId=trsb-app-nonprod.snapshot-
1_10] Connection to node 1 could not be established. Broker may not be
available.
And nothing is consumed.
In both working broker logs I see:
[2019-01-04 13:56:56,449] WARN Resetting first dirty offset of trsb-app-
nonprod.snapshot-store.invoices-changelog-43 to log start offset 99 since
the checkpointed offset 95 is invalid. (kafka.log.LogCleanerManager$)
[2019-01-04 13:56:56,449] WARN Resetting first dirty offset of trsb-app-
nonprod.snapshot-store.invoices-changelog-40 to log start offset 103 since
the checkpointed offset 100 is invalid. (kafka.log.LogCleanerManager$)
Since you are using exactly-once semantics, a minimum of 3 brokers is needed to continue processing, so your app would not continue to process if one of the brokers went down. Read here (see the processing.guarantee section) for more info regarding this:
https://kafka.apache.org/10/documentation/streams/developer-guide/config-streams.html#id25
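For reference, the relevant Streams settings look roughly like this (a sketch; the application id is taken from your logs, the bootstrap servers are placeholders, and bumping replication.factor to 3 for the internal changelog/repartition topics is the recommendation from the linked docs rather than your current setup):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "trsb-app-nonprod.snapshot");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092,broker3:9092");
// Exactly-once relies on the brokers' transaction log, which with default broker settings
// needs 3 brokers (transaction.state.log.replication.factor=3, transaction.state.log.min.isr=2).
props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
// Replicate the internal changelog/repartition topics so that losing one broker does not
// make a subset of their partitions unavailable.
props.put(StreamsConfig.REPLICATION_FACTOR_CONFIG, 3);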
The stream continues to consume, but since the state store, depending on the message key, may not be pushable to its corresponding changelog partition, some keys may fail and those transactions will fail and be rolled back. As a result, the first consumed key on topic A whose state store push fails will block its partition until the broker is up again. This is because the state store push is part of the EOS transaction.

Kafka Producer Warnings WARN [kafka-producer-network-thread ]

I am seeing below WARN in my Kafka producer.
WARN [kafka-producer-network-thread | producer-4] o.a.k.c.NetworkClient
[NetworkClient.java:588] Connection to node -1 could not be
established. Broker may not be available.
Kafka Client version = 0.11.0.1
Kafka Broker version is same as client Version.
My data is flowing into the Kafka cluster and I am able to consume it as well, but I keep getting these errors. Any clues?

Kafka mirror maker duplicates when DCs are isolated

We have 5 kafka 1.0.0 clusters:
4 of them are made of 3 nodes and are in different regions in the world
the last one is made of 5 nodes and is an aggregate only cluster.
We are using MirrorMaker (later referenced as MM) to read from the regional clusters and copy the data in the aggregate cluster in our HQ datacenter.
Not being sure where to run it, we currently have 2 setups in our prod environment:
MM in the region: reading locally and pushing to the aggregate cluster in the remote data-center (DC), before committing offsets locally. I tend to call this the push mode (pushing the data).
MM in the DC of the aggregate cluster: reading the data remotely and writing it locally, before committing the offsets on the remote DC.
What happened is that the entire DC hosting our aggregate cluster became totally isolated from a network point of view, and in both cases we got duplicated records in our aggregate cluster.
Push mode = MM local to the regional cluster, pushing data to remote aggregate cluster
MM started to throw errors like this:
WARN [Producer clientId=producer-1] Got error produce response with correlation id 674364 on topic-partition <topic>-4, retrying (2147483646 attempts left). Error: NETWORK_EXCEPTION (org.apache.kafka.clients.producer.internals.Sender)
then:
WARN [Producer clientId=producer-1] Connection to node 1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
which is ok so far because of idempotence.
But finally we got errors like:
ERROR Error when sending message to topic debug_sip_callback-delivery with key: null, value: 1640 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for <topic>-4: 30032 ms has passed since batch creation plus linger time
ERROR Error when sending message to topic <topic> with key: null, value: 1242 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
java.lang.IllegalStateException: Producer is closed forcefully.
causing MM to stop, and I think this is what causes the duplicates (I need to dig into the code, but it could be that it lost the idempotence state and on restart resumed from the previously committed offsets).
Pull mode = MM local to the aggregate cluster, pulling data from remote regional cluster
MM instances (with logs at INFO level in this case) started seeing the broker as dead:
INFO [Consumer clientId=mirror-maker-region1-agg-0, groupId=mirror-maker-region1-agg] Marking the coordinator kafka1.region1.internal:9092 (id: 2147483646 rack: null) dead (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
At the same time on the broker side, we got:
INFO [GroupCoordinator 1]: Member mirror-maker-region1-agg-0-de2af312-befb-4af7-b7b0-908ca8ecb0ed in group mirror-maker-region1-agg has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
...
INFO [GroupCoordinator 1]: Group mirror-maker-region1-agg with generation 42 is now empty (__consumer_offsets-2) (kafka.coordinator.group.GroupCoordinator)
Later on MM side, a lot of:
WARN [Consumer clientId=mirror-maker-region1-agg-0, groupId=mirror-maker-region1-agg] Connection to node 2 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
and finally when network came back:
ERROR [Consumer clientId=mirror-maker-region1-agg-0, groupId=mirror-maker-region1-agg] Offset commit failed on partition <topic>-dr-8 at offset 382424879: The coordinator is not aware of this member. (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
i.e., it could not commit to region1 the offsets it had already written to agg because of the rebalance, and after the rebalance it resumed from the previously committed offsets, causing duplicates.
Configuration
Our MM instances are configured like this:
For our consumer:
bootstrap.servers=kafka1.region1.intenal:9092,kafka2.region1.internal:9092,kafka3.region1.internal:9092
group.id=mirror-maker-region-agg
auto.offset.reset=earliest
isolation.level=read_committed
For our producer:
bootstrap.servers=kafka1.agg.internal:9092,kafka2.agg.internal:9092,kafka3.agg.internal:9092,kafka4.agg.internal:9092,kafka5.agg.internal:9092
compression.type=none
request.timeout.ms=30000
max.block.ms=60000
linger.ms=15000
max.request.size=1048576
batch.size=32768
buffer.memory=134217728
retries=2147483647
max.in.flight.requests.per.connection=1
acks=all
enable.idempotence=true
Any idea how we can get "only once" delivery on top of exactly-once when the DCs are isolated for 30 minutes?