ms has passed since batch creation plus linger time - apache-kafka

I am using SQL 2.4.1v as the consumer and Spring Boot as the producer for my Kafka topic. While trying to insert records onto the topic, I get the following errors:
WARN 12044 --- [ad | producer-1] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-1] Got error produce response with correlation id 4457 on topic-partition TRANS_INBOUND-20, retrying (0 attempts left). Error: NETWORK_EXCEPTION
WARN 12044 --- [ad | producer-1] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-1] Received invalid metadata error in produce request on partition TRANS_INBOUND-20 due to org.apache.kafka.common.errors.NetworkException: The server disconnected before a response was received.. Going to request metadata update now
ERROR 12044 --- [ad | producer-1] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='1-356194-2018-01-02-STATUS' and payload='com.TransRecord#48323556' to topic TRANS_INBOUND:
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for TRANS_INBOUND-82: 501 ms has passed since batch creation plus linger time
Following are my producer settings:
acks: 1
retries: 1
batchSize: 100
lingerMs: 5
bufferMemory: 33554432
requestTimeoutMs: 600
autoOffsetReset: latest
enableAutoCommit: false
reconnectBackoffMaxMs: 1000
reconnectBackoffMs: 50
retryBackoffMs: 100
I have already tried many combinations of batchSize, lingerMs and requestTimeoutMs, but nothing works; I still see the errors above quite often. What might be wrong here, and how can I fix it?
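For reference, the expiry in that error is driven by the producer's request.timeout.ms (and, on clients 2.1+, by delivery.timeout.ms), so a 600 ms request timeout combined with a 100-byte batch size leaves almost no headroom once the broker is slow to respond; note also that autoOffsetReset and enableAutoCommit are consumer settings and have no effect on the producer. A minimal sketch with more forgiving values, assuming the camelCase keys above map onto the standard producer properties (the exact names in your Spring configuration may differ, and the values below are only illustrative):
acks=1
retries=3
batch.size=16384             # client default; 100 bytes forces a request per record
linger.ms=5
buffer.memory=33554432
request.timeout.ms=30000     # client default; 600 ms expires batches almost immediately
delivery.timeout.ms=120000   # on clients 2.1+ this is the overall upper bound for a send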

Related

Apache Flink, Kafka consumer - org.apache.kafka.clients.producer.internals.Sender retrying (2147483646 attempts left). Error: NETWORK_EXCEPTION

We are using the Apache Flink Kafka consumer to consume the payload. We are intermittently facing delays in processing. We have added logs in our business logic and everything looks good, but we keep getting the error below.
[kafka-producer-network-thread | producer-44] WARN org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=producer-44] Got error produce response with correlation id 82 on topic-partition topicname-ingress-0, retrying (2147483646 attempts left). Error: NETWORK_EXCEPTION. Error Message: Disconnected from node 0
[kafka-producer-network-thread | producer-44] WARN org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=producer-44] Received invalid metadata error in produce request on partition topicname-ingress-0 due to org.apache.kafka.common.errors.NetworkException: Disconnected from node 0. Going to request metadata update now

Received invalid metadata error in produce request on partition topic-0 due to org.apache.kafka.common.errors.NotLeaderForPartitionException

We use a Spring Kafka Streams producer to produce data to a Kafka topic. When we ran a resiliency test, we got the error below.
2020-08-28 16:18:35.536 WARN [,,,] 26 --- [ad | producer-3] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-3] Received invalid metadata error in produce request on partition topic1-0 due to org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.. Going to request metadata update now
 log: 2020-08-28 16:18:35.536 WARN [,,,] 26 --- [ad | producer-3] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-3] Got error produce response with correlation id 80187 on topic-partition topic1-0, retrying (4 attempts left). Error: NOT_LEADER_FOR_PARTITION
[Producer clientId=producer-3] Received invalid metadata error in produce request on partition topic1-0 due to org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.. Going to request metadata update now.
The warning should appear only during the period we are running the resiliency test (broker down/up testing), but these warnings keep occurring even after the test period, and only for one particular partition (here topic1-0). All the other partitions are working fine.
This is the producer config we have:
spring.cloud.stream.kafka.binder.requiredAcks=all
spring.cloud.stream.kafka.binder.configuration.retries=5
spring.cloud.stream.kafka.binder.configuration.metadata.max.age.ms=3000
spring.cloud.stream.kafka.binder.configuration.max.in.flight.requests.per.connection=1
spring.cloud.stream.kafka.binder.configuration.retry.backoff.ms=10000
We have retry config too, and the producer does retry and request fresh metadata, as you can see in the log above, but it keeps getting the same warning for that particular partition. Our Kafka team is also analyzing this issue. I searched Google for a solution but found nothing useful.
Is there any config or anything else missing?
Please help me. Thanks in advance.
This error occurs when Kafka is down. Restarting Kafka worked for me! :)

Getting UNKNOWN_PRODUCER_ID exception in Spring Kafka

I am using Spring Boot 2.1.9 and Spring Kafka 2.2.9 with Kafka chained transactions.
I keep getting these warnings from the Kafka producer, and because of them some functionality occasionally does not work.
Why are these errors occurring? Is there a problem in my config?
2020-05-04 09:12:35.216 WARN [xxxxx-order-service,,,] 10 --- [ad | producer-8] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-8, transactionalId=xxxxx-Order-Service-JOg4T1vFzW4tuc-2] Got error produce response with correlation id 1946 on topic-partition process_event-0, retrying (2147483646 attempts left). Error: UNKNOWN_PRODUCER_ID
2020-05-04 09:12:35.327 WARN [xxxxx-order-service,,,] 10 --- [ad | producer-8] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-8, transactionalId=xxxxx-Order-Service-JOg4T1vFzW4tuc-2] Got error produce response with correlation id 1950 on topic-partition audit-0, retrying (2147483646 attempts left). Error: UNKNOWN_PRODUCER_ID
2020-05-04 09:12:53.512 WARN [xxxxx-order-service,,,] 10 --- [ad | producer-6] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-6, transactionalId=xxxxx-Order-Service-JOg4T1vFzW4tuc-0] Got error produce response with correlation id 5807 on topic-partition process_submitted_page_count-2, retrying (2147483646 attempts left). Error: UNKNOWN_PRODUCER_ID
2020-05-04 09:12:53.632 WARN [xxxxx-order-service,,,] 10 --- [ad | producer-6] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-6, transactionalId=xxxxx-Order-Service-JOg4T1vFzW4tuc-0] Got error produce response with correlation id 5811 on topic-partition process_event-0, retrying (2147483646 attempts left). Error: UNKNOWN_PRODUCER_ID
2020-05-04 09:12:53.752 WARN [xxxxx-order-service,,,] 10 --- [ad | producer-6] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-6, transactionalId=xxxxx-Order-Service-JOg4T1vFzW4tuc-0] Got error produce response with correlation id 5816 on topic-partition audit-0, retrying (2147483646 attempts left). Error: UNKNOWN_PRODUCER_ID
I assume that you might be hitting this issue.
When a streams application has little traffic, then it is possible that consumer purging would delete even the last message sent by a producer (i.e., all the messages sent by this producer have been consumed and committed), and as a result, the broker would delete that producer's ID. The next time when this producer tries to send, it will get this UNKNOWN_PRODUCER_ID error code, but in this case, this error is retriable: the producer would just get a new producer id and retries, and then this time it will succeed.
Proposed Solution: Upgrade Kafka
This issue has been fixed in versions 2.4.0+, so if you are still hitting it you need to upgrade to a newer Kafka version.
Alternative Solution: Increase retention time & transactional.id.expiration.ms
Alternatively, if you cannot (or don't want to) upgrade, you can increase the retention period (log.retention.hours) as well as transactional.id.expiration.ms, which defines the amount of inactivity time that needs to pass for a producer to be considered expired (it defaults to 7 days).
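On the broker side that could look roughly like the following sketch (values are illustrative; the defaults are 168 hours and 604800000 ms, i.e. 7 days):
log.retention.hours=336                     # keep log segments for 14 days
transactional.id.expiration.ms=1209600000   # expire idle transactional producers after 14 days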

1 partitions have leader brokers without a matching listener

I'm trying to get Kafka to work for the first time, and I'm getting the error described below. Is there any reason why Kafka would throw this error?
The error:
[kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] 1 partitions have leader brokers without a matching listener, including [test1-0]
port=9020
advertised.host.name=10.44.72.204
advertised.port=9020
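For what it's worth, advertised.host.name and advertised.port are legacy settings; newer brokers describe their endpoints with listeners/advertised.listeners, and this warning usually means the leader broker for the partition is down or is not advertising a listener the client can resolve. A minimal sketch reusing the host and port from the question (adjust to your environment):
listeners=PLAINTEXT://0.0.0.0:9020
advertised.listeners=PLAINTEXT://10.44.72.204:9020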

Error when sending message to topic in Kafka

Hi, when I run the Kafka producer it displays this error.
Do I need to delete the topics?
[2019-04-09 01:48:29,011] WARN [Producer clientId=console-producer] Received invalid metadata error in produce request on partition test2-0 due to org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.. Going to request metadata update now (org.apache.kafka.clients.producer.internals.Sender)
[2019-04-09 01:48:29,124] ERROR Error when sending message to topic test2 with key: null, value: 0 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.
[2019-04-09 01:48:29,132] WARN [Producer clientId=console-producer] Received invalid metadata error in produce request on partition test2-0 due to org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.. Going to request metadata update now (org.apache.kafka.clients.producer.internals.Sender)