While doing a proof of concept with Kafka and Flink, I discovered the following: it seems that Kafka producer errors can be caused by the workload running on the Flink side?!
Here are more details:
I have sample files named like sample??.EDR, each made of ~700,000 rows with values like "entity", "value", "timestamp"
I use the following command to create the kafka topic:
~/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --partitions 1 --replication-factor 1 --topic gprs
I use the following command to load the sample files into the topic:
[13:00] kafka#ubu19: ~/fms
% /home/kafka/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic gprs < ~/sample/sample01.EDR
On the Flink side I have jobs that aggregate the value for each entity over sliding windows of 6 hours and 72 hours (aggregationeachsix, aggregationeachsentytwo).
I tried three scenarios:
Load the files into the topic without any job running
Load the files into the topic with the aggregationeachsix job running
Load the files into the topic with both the aggregationeachsix and aggregationeachsentytwo jobs running
The result is that the first two scenarios work, but in the third scenario I get the following errors on the Kafka producer side while loading the files (not always on the same file; it can be the first, second, third or even a later one):
[plenty of lines before this part]
[2017-08-09 12:56:53,409] ERROR Error when sending message to topic gprs with key: null, value: 35 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 233 record(s) for gprs-0: 1560 ms has passed since last append
[2017-08-09 12:56:53,409] ERROR Error when sending message to topic gprs with key: null, value: 37 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 233 record(s) for gprs-0: 1560 ms has passed since last append
[2017-08-09 12:56:53,409] ERROR Error when sending message to topic gprs with key: null, value: 37 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 233 record(s) for gprs-0: 1560 ms has passed since last append
[2017-08-09 12:56:53,412] WARN Got error produce response with correlation id 1627 on topic-partition gprs-0, retrying (2 attempts left). Error: NETWORK_EXCEPTION (org.apache.kafka.clients.producer.internals.Sender)
[2017-08-09 12:56:53,412] WARN Got error produce response with correlation id 1626 on topic-partition gprs-0, retrying (2 attempts left). Error: NETWORK_EXCEPTION (org.apache.kafka.clients.producer.internals.Sender)
[2017-08-09 12:56:53,412] WARN Got error produce response with correlation id 1625 on topic-partition gprs-0, retrying (2 attempts left). Error: NETWORK_EXCEPTION (org.apache.kafka.clients.producer.internals.Sender)
[2017-08-09 12:56:53,412] WARN Got error produce response with correlation id 1624 on topic-partition gprs-0, retrying (2 attempts left). Error: NETWORK_EXCEPTION (org.apache.kafka.clients.producer.internals.Sender)
[2017-08-09 12:56:53,515] ERROR Error when sending message to topic gprs with key: null, value: 35 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 8 record(s) for gprs-0: 27850 ms has passed since batch creation plus linger time
[2017-08-09 12:56:53,515] ERROR Error when sending message to topic gprs with key: null, value: 37 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 8 record(s) for gprs-0: 27850 ms has passed since batch creation plus linger time
[plenty of lines after this part]
My question is: why can Flink have an impact on the Kafka producer, and what do I need to change to avoid this error?
It looks like you are saturating your network when both Flink and the Kafka producer are using it, and that is why you get TimeoutExceptions.
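If the saturation is only transient, one mitigation to try (a sketch based on the console-producer command above; the timeout and retry values are illustrative, not recommendations) is to give the producer more headroom before it expires batches, for example:
% /home/kafka/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic gprs --request-timeout-ms 30000 --message-send-max-retries 5 < ~/sample/sample01.EDR
The console producer's request timeout defaults to a low value (1500 ms), which lines up with the "1560 ms has passed since last append" expirations in the log above.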
Related
I am using Spring Boot 2.1.9 and Spring Kafka 2.2.9 with Kafka chained transactions.
I keep getting the following warnings from the Kafka producer, and because of them some functionality occasionally does not work.
Why are these errors occurring? Is there a problem in my configuration?
2020-05-04 09:12:35.216 WARN [xxxxx-order-service,,,] 10 --- [ad | producer-8] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-8, transactionalId=xxxxx-Order-Service-JOg4T1vFzW4tuc-2] Got error produce response with correlation id 1946 on topic-partition process_event-0, retrying (2147483646 attempts left). Error: UNKNOWN_PRODUCER_ID
2020-05-04 09:12:35.327 WARN [xxxxx-order-service,,,] 10 --- [ad | producer-8] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-8, transactionalId=xxxxx-Order-Service-JOg4T1vFzW4tuc-2] Got error produce response with correlation id 1950 on topic-partition audit-0, retrying (2147483646 attempts left). Error: UNKNOWN_PRODUCER_ID
2020-05-04 09:12:53.512 WARN [xxxxx-order-service,,,] 10 --- [ad | producer-6] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-6, transactionalId=xxxxx-Order-Service-JOg4T1vFzW4tuc-0] Got error produce response with correlation id 5807 on topic-partition process_submitted_page_count-2, retrying (2147483646 attempts left). Error: UNKNOWN_PRODUCER_ID
2020-05-04 09:12:53.632 WARN [xxxxx-order-service,,,] 10 --- [ad | producer-6] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-6, transactionalId=xxxxx-Order-Service-JOg4T1vFzW4tuc-0] Got error produce response with correlation id 5811 on topic-partition process_event-0, retrying (2147483646 attempts left). Error: UNKNOWN_PRODUCER_ID
2020-05-04 09:12:53.752 WARN [xxxxx-order-service,,,] 10 --- [ad | producer-6] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-6, transactionalId=xxxxx-Order-Service-JOg4T1vFzW4tuc-0] Got error produce response with correlation id 5816 on topic-partition audit-0, retrying (2147483646 attempts left). Error: UNKNOWN_PRODUCER_ID
I assume that you might be hitting this issue.
When a streams application has little traffic, then it is possible that consumer purging would delete even the last message sent by a producer (i.e., all the messages sent by this producer have been consumed and committed), and as a result, the broker would delete that producer's ID. The next time when this producer tries to send, it will get this UNKNOWN_PRODUCER_ID error code, but in this case, this error is retriable: the producer would just get a new producer id and retries, and then this time it will succeed.
Proposed Solution: Upgrade Kafka
This issue has been fixed in versions 2.4.0+, so if you are still hitting it you need to upgrade to a newer Kafka version.
Alternative Solution: Increase retention time & transactional.id.expiration.ms
Alternatively, if you cannot (or don't want to) upgrade, you can increase the retention period (log.retention.hours) as well as transactional.id.expiration.ms, which defines the amount of inactivity time that must pass before a transactional producer ID is considered expired (it defaults to 7 days).
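A minimal broker-side sketch, assuming both settings go into the broker's server.properties (the values below are illustrative, not recommendations, and typically require a broker restart to take effect):
log.retention.hours=336
transactional.id.expiration.ms=1209600000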
Hi, when I run the Kafka producer it displays this error. Do I need to delete the topics?
[2019-04-09 01:48:29,011] WARN [Producer clientId=console-producer] Received invalid metadata error in produce request on partition test2-0 due to org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.. Going to request metadata update now (org.apache.kafka.clients.producer.internals.Sender)
[2019-04-09 01:48:29,124] ERROR Error when sending message to topic test2 with key: null, value: 0 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.
[2019-04-09 01:48:29,132] WARN [Producer clientId=console-producer] Received invalid metadata error in produce request on partition test2-0 due to org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.. Going to request metadata update now (org.apache.kafka.clients.producer.internals.Sender)
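For reference, the current leader and in-sync replicas for the partition named in the error can be inspected like this (a sketch; the topic name test2 is taken from the log above, and the ZooKeeper address localhost:2181 is an assumption):
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test2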
kafka-producer-perf-test.sh is throwing a corrupt message error and not producing any results. What's the reason?
I am trying to run a performance test on my Kafka cluster using the shell scripts provided with Kafka, but it is generating a CORRUPT_MESSAGE error.
Error: CORRUPT_MESSAGE (org.apache.kafka.clients.producer.internals.Sender)
[2019-01-03 10:33:09,119] WARN [Producer clientId=producer-1] Got error produce response with correlation id 2396 on topic-partition my_topic-1, retrying (2147483519 attempts left). Error: CORRUPT_MESSAGE (org.apache.kafka.clients.producer.internals.Sender)
I am running the following command to start the test:
bin/kafka-producer-perf-test.sh --topic my_topic --num-records 50 --throughput 10 --producer-props bootstrap.servers=kafka1:9092 key.serializer=org.apache.kafka.common.serialization.StringSerializer value.serializer=org.apache.kafka.common.serialization.StringSerializer --record-size 1
What could be the reason behind this?
Edit 1:
We have a cluster of 5 brokers on r5.xlarge machines (4 cores, 32 GB RAM) with heap option -Xmx3G.
We have 5 Kafka 1.0.0 clusters:
4 of them consist of 3 nodes each and are in different regions of the world
the last one consists of 5 nodes and is an aggregate-only cluster.
We are using MirrorMaker (referred to below as MM) to read from the regional clusters and copy the data into the aggregate cluster in our HQ datacenter.
Not being sure where to run it, we currently have 2 setups in our prod environment (a launch sketch follows this list):
MM in the region: reading locally and pushing to the aggregate cluster in the remote data center (DC) before committing offsets locally. I tend to call this the push mode (pushing the data).
MM in the DC of the aggregate cluster: reading the data remotely and writing it locally before committing the offsets on the remote DC (the pull mode).
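For reference, a MirrorMaker instance of this kind is typically launched roughly like this (a sketch; the whitelist and stream count are illustrative, and consumer.properties / producer.properties correspond to the consumer and producer configurations listed further down):
bin/kafka-mirror-maker.sh --consumer.config consumer.properties --producer.config producer.properties --whitelist '.*' --num.streams 1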
What happened is that the entire DC hosting our aggregate cluster became totally isolated from a network point of view, and in both cases we got duplicated records in our aggregate cluster.
Push mode = MM local to the regional cluster, pushing data to remote aggregate cluster
MM started to throw errors like this:
WARN [Producer clientId=producer-1] Got error produce response with correlation id 674364 on topic-partition <topic>-4, retrying (2147483646 attempts left). Error: NETWORK_EXCEPTION (org.apache.kafka.clients.producer.internals.Sender)
then:
WARN [Producer clientId=producer-1] Connection to node 1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
which is ok so far because of idempotence.
But finally we got errors like:
ERROR Error when sending message to topic debug_sip_callback-delivery with key: null, value: 1640 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for <topic>-4: 30032 ms has passed since batch creation plus linger time
ERROR Error when sending message to topic <topic> with key: null, value: 1242 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
java.lang.IllegalStateException: Producer is closed forcefully.
causing MM to stop, and I think this is what causes the duplicates (I need to dig into the code, but it could be that it lost the idempotence state and, on restart, resumed from the previously committed offsets).
Pull mode = MM local to the aggregate cluster, pulling data from remote regional cluster
MM instances (with logs at INFO level in this case) started seeing the broker as dead:
INFO [Consumer clientId=mirror-maker-region1-agg-0, groupId=mirror-maker-region1-agg] Marking the coordinator kafka1.region1.internal:9092 (id: 2147483646 rack: null) dead (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
At the same time on the broker side, we got:
INFO [GroupCoordinator 1]: Member mirror-maker-region1-agg-0-de2af312-befb-4af7-b7b0-908ca8ecb0ed in group mirror-maker-region1-agg has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
...
INFO [GroupCoordinator 1]: Group mirror-maker-region1-agg with generation 42 is now empty (__consumer_offsets-2) (kafka.coordinator.group.GroupCoordinator)
Later, on the MM side, a lot of:
WARN [Consumer clientId=mirror-maker-region1-agg-0, groupId=mirror-maker-region1-agg] Connection to node 2 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
and finally, when the network came back:
ERROR [Consumer clientId=mirror-maker-region1-agg-0, groupId=mirror-maker-region1-agg] Offset commit failed on partition <topic>-dr-8 at offset 382424879: The coordinator is not aware of this member. (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
i.e., because of the rebalancing it could not commit in region1 the offsets of data already written to agg, and after the rebalance it resumed from the previously committed offsets, causing duplicates.
Configuration
Our MM instances are configured like this:
For our consumer:
bootstrap.servers=kafka1.region1.intenal:9092,kafka2.region1.internal:9092,kafka3.region1.internal:9092
group.id=mirror-maker-region-agg
auto.offset.reset=earliest
isolation.level=read_committed
For our producer:
bootstrap.servers=kafka1.agg.internal:9092,kafka2.agg.internal:9092,kafka3.agg.internal:9092,kafka4.agg.internal:9092,kafka5.agg.internal:9092
compression.type=none
request.timeout.ms=30000
max.block.ms=60000
linger.ms=15000
max.request.size=1048576
batch.size=32768
buffer.memory=134217728
retries=2147483647
max.in.flight.requests.per.connection=1
acks=all
enable.idempotence=true
Any idea how we can get "only once" delivery (i.e., no duplicates) on top of exactly-once when DCs are isolated for 30 minutes?
I'm getting the error below when running the producer client, which takes messages from an input file, kafka_message.log. This log file is filled with 100,000 records per second, each message being 4,096 bytes long.
Error:
[2017-01-09 14:45:24,813] ERROR Error when sending message to topic test2R2P2 with key: null, value: 4096 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Batch containing 3 record(s) expired due to timeout while requesting metadata from brokers for test2R2P2-0
[2017-01-09 14:45:24,816] ERROR Error when sending message to topic test2R2P2 with key: null, value: 4096 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Batch containing 3 record(s) expired due to timeout while requesting metadata from brokers for test2R2P2-0
[2017-01-09 14:45:24,816] ERROR Error when sending message to topic test2R2P2 with key: null, value: 4096 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Batch containing 3 record(s) expired due to timeout while requesting metadata from brokers for test2R2P2-0
[2017-01-09 14:45:24,816] ERROR Error when sending message to topic test2R2P2 with key: null, value: 4096 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Batch containing 3 record(s) expired due to timeout while requesting metadata from brokers for test2R2P2-0
[2017-01-09 14:45:24,816] ERROR Error when sending message to topic test2R2P2 with key: null, value: 4096 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Batch containing 3 record(s) expired due to timeout while requesting metadata from brokers for test2R2P2-0
Command I run:
$ bin/kafka-console-producer.sh --broker-list x.x.x.x:xxxx,x.x.x.x:xxxx --batch-size 1000 --message-send-max-retries 10 --request-required-acks 1 --topic test2R2P2 <~/kafka_message.log
There are 2 brokers running, and the topic has 2 partitions and a replication factor of 2.
Can someone please help me understand what this error means? I also see message loss, meaning not all the messages from the input file end up in the topic.
On a separate note: I see data loss when running kafka-producer-perf-test.sh and killing one of the brokers (in a 3-node cluster) while the test is running. Is this expected behavior? I see the same results across multiple tests.
Commands I run:
describe topic:
$ bin/kafka-topics.sh --zookeeper x.x.x.x:2181/kafka-framework --describe |grep test4
Topic:test4R2P2 PartitionCount:2 ReplicationFactor:2 Configs:
Topic: test4R2P2 Partition: 0 Leader: 0 Replicas: 1,0 Isr: 0,1
Topic: test4R2P2 Partition: 1 Leader: 0 Replicas: 0,1 Isr: 0,1
run perf test:
$ bin/kafka-producer-perf-test.sh --num-records 100000 --record-size 4096 --throughput 1000 --topic test4R2P2 --producer-props bootstrap.servers=x.x.x.x:xxxx,x.x.x.x:xxxx
consumer command:
$ bin/kafka-console-consumer.sh --zookeeper x.x.x.x:2181/kafka-framework --topic test4R2P2 1>~/kafka_message.log
checking message count:
$ wc -l ~/kafka_message.log
399418 /home/montana/kafka_message.log
I see only 399418 messages in the topic test4R2P2, whereas I have put in a total of 400000 messages by running the perf test 4 times.
exception thrown by perf command:
org.apache.kafka.common.errors.NetworkException: The server disconnected before a response was received.
org.apache.kafka.common.errors.NetworkException: The server disconnected before a response was received.
exceptions thrown by consumer command:
[2017-01-10 07:40:07,246] WARN [ConsumerFetcherThread-console-consumer-46599_node-44a8422fe1a0-1484033822261-f07d33d7-0-1], Error in fetch kafka.consumer.ConsumerFetcherThread$FetchRequest#695be565 (kafka.consumer.ConsumerFetcherThread)
[2017-01-10 07:40:07,472] WARN Fetching topic metadata with correlation id 1 for topics [Set(test4R2P2)] from broker [BrokerEndPoint(1,10.105.26.1,31052)] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
[2017-01-10 07:42:23,073] WARN [ConsumerFetcherThread-console-consumer-46599_node-44a8422fe1a0-1484033822261-f07d33d7-0-0], Error in fetch kafka.consumer.ConsumerFetcherThread$FetchRequest#7bd94073 (kafka.consumer.ConsumerFetcherThread)
[2017-01-10 07:44:58,195] WARN [ConsumerFetcherThread-console-consumer-46599_node-44a8422fe1a0-1484033822261-f07d33d7-0-1], Error in fetch kafka.consumer.ConsumerFetcherThread$FetchRequest#2855ee73 (kafka.consumer.ConsumerFetcherThread)
[2017-01-10 07:44:58,404] WARN Fetching topic metadata with correlation id 3 for topics [Set(test4R2P2)] from broker [BrokerEndPoint(1,10.105.26.1,31052)] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
[2017-01-10 07:45:47,127] WARN [ConsumerFetcherThread-console-consumer-46599_node-44a8422fe1a0-1484033822261-f07d33d7-0-0], Error in fetch kafka.consumer.ConsumerFetcherThread$FetchRequest#f8887da (kafka.consumer.ConsumerFetcherThread)
[2017-01-10 07:50:56,291] ERROR [ConsumerFetcherThread-console-consumer-46599_node-44a8422fe1a0-1484033822261-f07d33d7-0-1], Error for partition [test4R2P2,1] to broker 1:kafka.common.NotLeaderForPartitionException (kafka.consumer.ConsumerFetcherThread)
Based on the comments, this suggestion from #amethystic seems to do the trick:
...you could increase the value for "request.timeout.ms" ...
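Applied to the console-producer invocation above, that could look like this (a sketch; the 30000 ms value is illustrative, not a recommendation):
$ bin/kafka-console-producer.sh --broker-list x.x.x.x:xxxx,x.x.x.x:xxxx --batch-size 1000 --message-send-max-retries 10 --request-required-acks 1 --request-timeout-ms 30000 --topic test2R2P2 <~/kafka_message.log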