Restoration of GlobalKTables is extremely slow - apache-kafka

Since we introduced GlobalKTables into the Kafka Streams topologies of several services, their startup times have grown unbearably long. We have a listener observing the state of state store restoration, and this is what appears in the logs:
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO o.a.k.clients.consumer.KafkaConsumer - [Consumer clientId=wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-global-consumer, groupId=null] Subscribed to partition(s): wrwks-bef-equipmenttyp-aggregat-privat-1-4
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO o.a.k.clients.consumer.KafkaConsumer - [Consumer clientId=wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-global-consumer, groupId=null] Seeking to offset 4 for partition wrwks-bef-equipmenttyp-aggregat-privat-1-4
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] DEBUG a.w.b.p.h.StreamAndStateStoreStateListener - onRestoreStart for store wrwks-bef-equipmenttyp-aggregat-privat-1-STATE-STORE-0000000005, partition wrwks-bef-equipmenttyp-aggregat-privat-1-4
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO a.w.b.p.h.StreamAndStateStoreStateListener - started state restore for store wrwks-bef-equipmenttyp-aggregat-privat-1-STATE-STORE-0000000005, remaining partitions: 1
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO a.w.b.p.h.StreamAndStateStoreStateListener - publishing availability information: liveness false, readiness false
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] DEBUG a.w.b.p.h.StreamAndStateStoreStateListener - onRestoreEnd for store wrwks-bef-equipmenttyp-aggregat-privat-1-STATE-STORE-0000000005, partition wrwks-bef-equipmenttyp-aggregat-privat-1-4
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO a.w.b.p.h.StreamAndStateStoreStateListener - finished state restore for store wrwks-bef-equipmenttyp-aggregat-privat-1-STATE-STORE-0000000005
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO a.w.b.p.h.StreamAndStateStoreStateListener - publishing availability information: liveness false, readiness false
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO o.a.k.clients.consumer.KafkaConsumer - [Consumer clientId=wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-global-consumer, groupId=null] Subscribed to partition(s): wrwks-bef-equipmenttyp-aggregat-privat-1-1
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO o.a.k.clients.consumer.KafkaConsumer - [Consumer clientId=wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-global-consumer, groupId=null] Seeking to offset 1 for partition wrwks-bef-equipmenttyp-aggregat-privat-1-1
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] DEBUG a.w.b.p.h.StreamAndStateStoreStateListener - onRestoreStart for store wrwks-bef-equipmenttyp-aggregat-privat-1-STATE-STORE-0000000005, partition wrwks-bef-equipmenttyp-aggregat-privat-1-1
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO a.w.b.p.h.StreamAndStateStoreStateListener - started state restore for store wrwks-bef-equipmenttyp-aggregat-privat-1-STATE-STORE-0000000005, remaining partitions: 1
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO a.w.b.p.h.StreamAndStateStoreStateListener - publishing availability information: liveness false, readiness false
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] DEBUG a.w.b.p.h.StreamAndStateStoreStateListener - onRestoreEnd for store wrwks-bef-equipmenttyp-aggregat-privat-1-STATE-STORE-0000000005, partition wrwks-bef-equipmenttyp-aggregat-privat-1-1
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO a.w.b.p.h.StreamAndStateStoreStateListener - finished state restore for store wrwks-bef-equipmenttyp-aggregat-privat-1-STATE-STORE-0000000005
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO a.w.b.p.h.StreamAndStateStoreStateListener - publishing availability information: liveness false, readiness false
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO o.a.k.clients.consumer.KafkaConsumer - [Consumer clientId=wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-global-consumer, groupId=null] Subscribed to partition(s): wrwks-bef-equipmenttyp-aggregat-privat-1-6
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO o.a.k.clients.consumer.KafkaConsumer - [Consumer clientId=wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-global-consumer, groupId=null] Seeking to offset 1 for partition wrwks-bef-equipmenttyp-aggregat-privat-1-6
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] DEBUG a.w.b.p.h.StreamAndStateStoreStateListener - onRestoreStart for store wrwks-bef-equipmenttyp-aggregat-privat-1-STATE-STORE-0000000005, partition wrwks-bef-equipmenttyp-aggregat-privat-1-6
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO a.w.b.p.h.StreamAndStateStoreStateListener - started state restore for store wrwks-bef-equipmenttyp-aggregat-privat-1-STATE-STORE-0000000005, remaining partitions: 1
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO a.w.b.p.h.StreamAndStateStoreStateListener - publishing availability information: liveness false, readiness false
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] DEBUG a.w.b.p.h.StreamAndStateStoreStateListener - onRestoreEnd for store wrwks-bef-equipmenttyp-aggregat-privat-1-STATE-STORE-0000000005, partition wrwks-bef-equipmenttyp-aggregat-privat-1-6
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO a.w.b.p.h.StreamAndStateStoreStateListener - finished state restore for store wrwks-bef-equipmenttyp-aggregat-privat-1-STATE-STORE-0000000005
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO a.w.b.p.h.StreamAndStateStoreStateListener - publishing availability information: liveness false, readiness false
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO o.a.k.clients.consumer.KafkaConsumer - [Consumer clientId=wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-global-consumer, groupId=null] Subscribed to partition(s): wrwks-bef-equipmenttyp-aggregat-privat-1-7
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO o.a.k.clients.consumer.KafkaConsumer - [Consumer clientId=wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-global-consumer, groupId=null] Seeking to offset 1 for partition wrwks-bef-equipmenttyp-aggregat-privat-1-7
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] DEBUG a.w.b.p.h.StreamAndStateStoreStateListener - onRestoreStart for store wrwks-bef-equipmenttyp-aggregat-privat-1-STATE-STORE-0000000005, partition wrwks-bef-equipmenttyp-aggregat-privat-1-7
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO a.w.b.p.h.StreamAndStateStoreStateListener - started state restore for store wrwks-bef-equipmenttyp-aggregat-privat-1-STATE-STORE-0000000005, remaining partitions: 1
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO a.w.b.p.h.StreamAndStateStoreStateListener - publishing availability information: liveness false, readiness false
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] DEBUG a.w.b.p.h.StreamAndStateStoreStateListener - onRestoreEnd for store wrwks-bef-equipmenttyp-aggregat-privat-1-STATE-STORE-0000000005, partition wrwks-bef-equipmenttyp-aggregat-privat-1-7
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO a.w.b.p.h.StreamAndStateStoreStateListener - finished state restore for store wrwks-bef-equipmenttyp-aggregat-privat-1-STATE-STORE-0000000005
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO a.w.b.p.h.StreamAndStateStoreStateListener - publishing availability information: liveness false, readiness false
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO o.a.k.clients.consumer.KafkaConsumer - [Consumer clientId=wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-global-consumer, groupId=null] Subscribed to partition(s): wrwks-bef-equipmenttyp-aggregat-privat-1-3
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO o.a.k.clients.consumer.KafkaConsumer - [Consumer clientId=wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-global-consumer, groupId=null] Seeking to offset 4 for partition wrwks-bef-equipmenttyp-aggregat-privat-1-3
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] DEBUG a.w.b.p.h.StreamAndStateStoreStateListener - onRestoreStart for store wrwks-bef-equipmenttyp-aggregat-privat-1-STATE-STORE-0000000005, partition wrwks-bef-equipmenttyp-aggregat-privat-1-3
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO a.w.b.p.h.StreamAndStateStoreStateListener - started state restore for store wrwks-bef-equipmenttyp-aggregat-privat-1-STATE-STORE-0000000005, remaining partitions: 1
[wrwks-bef-projekt-export-projektAggregate-5e88016d-ae03-49fe-80b0-46c2238f528d-GlobalStreamThread] INFO a.w.b.p.h.StreamAndStateStoreStateListener - publishing availability information: liveness false, readiness false
This topic contains only 20 records, yet bootstrapping takes several minutes when it completes at all. In other cases it never completes, which leads to repeated restarts by the Kubernetes watchdog after the 5-minute wait configured in our cluster.
A very annoying workaround is to delete the state store directory and restart the service; it then usually starts within a reasonable amount of time.
This behaviour applies to all services using GlobalKTables. All services use Kafka Streams via Spring Cloud Stream 2021.0.5.
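For reference, the log lines above come from a restore listener of the kind sketched below. This is a minimal illustration, not our exact code: the class name and log wording are placeholders, while `StateRestoreListener` and `setGlobalStateRestoreListener` are the standard Kafka Streams API (under Spring Cloud Stream the registration goes through a `StreamsBuilderFactoryBean` customizer).

```java
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.processor.StateRestoreListener;

// Minimal sketch of a restore listener like the one producing the log lines
// above; class name and messages are illustrative.
public class LoggingRestoreListener implements StateRestoreListener {

    @Override
    public void onRestoreStart(TopicPartition partition, String storeName,
                               long startingOffset, long endingOffset) {
        System.out.printf("started state restore for store %s, partition %s (offsets %d..%d)%n",
                          storeName, partition, startingOffset, endingOffset);
    }

    @Override
    public void onBatchRestored(TopicPartition partition, String storeName,
                                long batchEndOffset, long numRestored) {
        System.out.printf("restored %d records for store %s, partition %s%n",
                          numRestored, storeName, partition);
    }

    @Override
    public void onRestoreEnd(TopicPartition partition, String storeName,
                             long totalRestored) {
        System.out.printf("finished state restore for store %s (%d records)%n",
                          storeName, totalRestored);
    }
}

// Registration must happen before KafkaStreams#start():
// streams.setGlobalStateRestoreListener(new LoggingRestoreListener());
```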

Related

Random "Timed out ProduceRequest in flight" messages

I am getting some random timeout errors while publishing messages using Confluent.Kafka. The application runs in a Kubernetes cluster and is built using the .NET 6 framework.
When the default timeout (60000 ms) is reached, the message is still published successfully thanks to the configured retries.
These are the logs:
[08:03:34 INF] Publishing message with key 'dada33d9-067c-4528-8e85-b9f50858fb1b' and version 'v1' to topic 'topic.name'...
[08:04:34 INF] rdkafka#producer-1: REQTMOUT - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: Timed out ProduceRequest in flight (after 60018ms, timeout #0)
[08:04:34 WRN] rdkafka#producer-1: REQTMOUT - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
[08:04:34 ERR] rdkafka#producer-1: FAIL - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: 1 request(s) timed out: disconnect (after 1247331ms in state UP)
[08:04:34 INF] A non fatal error with error code 'Local_TimedOut' occurred: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: 1 request(s) timed out: disconnect (after 1247331ms in state UP). The client will automatically try to recover from this error.
[08:04:34 INF] Message with key 'dada33d9-067c-4528-8e85-b9f50858fb1b' and version 'v1' successfully published to topic 'topic.name' in partition '[4]:4765'.
After changing SocketTimeoutMs from 60000 ms to 5000 ms, the message is instead published after about 5 seconds:
[13:45:46 INF] Publishing message with key 'db380009-a910-43e4-8a1b-457e5d163f9a' and version 'v1' to topic 'topic.name'...
[13:45:51 INF] rdkafka#producer-1: REQTMOUT - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: Timed out ProduceRequest in flight (after 5103ms, timeout #0)
[13:45:51 WRN] rdkafka#producer-1: REQTMOUT - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
[13:45:51 ERR] rdkafka#producer-1: FAIL - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: 1 request(s) timed out: disconnect (after 309267ms in state UP)
[13:45:51 INF] A non fatal error with error code 'Local_TimedOut' occurred: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: 1 request(s) timed out: disconnect (after 309267ms in state UP). The client will automatically try to recover from this error.
[13:45:52 INF] Message with key 'db380009-a910-43e4-8a1b-457e5d163f9a' and version 'v1' successfully published to topic 'topic.name' in partition '[7]:14638'.
I can't find a pattern in this behavior, but it happens fairly often, both when the application is deployed and in my local environment.
Based on the logs from App Insights for one of our environments:
Last 30 Days: 305 timeouts out of 55.72k messages published => 0.55%
Last 7 Days: 92 timeouts out of 14.60k messages published => 0.62%
And from another environment where we don't publish too many messages:
Last 30 Days: 63 timeouts out of 467 messages published => 13.49%
Last 7 Days: 20 timeouts out of 244 messages published => 8.2%
I tried changing the default values for some of the configuration settings, like raising LingerMs from its default of 5 ms to 100 ms, but it didn't help.
Does anyone have any idea what could be the reason behind this? I don't know if there is something wrong with my code/configuration, or something wrong with the Kafka broker.
The broker is not owned or maintained by our team, but I know it is built on top of Apache Kafka, using Strimzi, running in Kubernetes.
Client configuration:
CompressionType: CompressionType.Snappy
Acks: Acks.Leader
SecurityProtocol: SecurityProtocol.SaslSsl
EnableSslCertificateVerification: true
MessageMaxBytes: 100000000
SocketKeepaliveEnable: true
MessageSendMaxRetries: 10
RetryBackoffMs: 100
LingerMs: 100
SocketTimeoutMs: 5000
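The two runs are consistent with simple retry arithmetic: each in-flight ProduceRequest waits out SocketTimeoutMs before librdkafka tears down the connection and retries, so lowering that timeout shortens the stall. The sketch below is illustrative arithmetic only, under the assumption that every attempt hits the socket timeout; parameter names mirror the settings above, and the real client additionally caps the total with MessageTimeoutMs (default 300000 ms).

```java
// Back-of-the-envelope worst case before a single message exhausts its retries,
// assuming every attempt times out at the socket level and retries are spaced
// by retry.backoff.ms. Illustrative only; the real client caps the total at
// message.timeout.ms.
public class RetryBudget {
    static long worstCaseMs(long socketTimeoutMs, long retryBackoffMs, int maxRetries) {
        // first attempt + maxRetries re-sends, each waiting out the socket timeout
        return (maxRetries + 1) * socketTimeoutMs + maxRetries * retryBackoffMs;
    }

    public static void main(String[] args) {
        // original setting: 11 * 60000 + 10 * 100 = 661000 ms (capped in practice
        // by message.timeout.ms = 300000)
        System.out.println(worstCaseMs(60000, 100, 10));
        // after lowering SocketTimeoutMs: 11 * 5000 + 10 * 100 = 56000 ms
        System.out.println(worstCaseMs(5000, 100, 10));
    }
}
```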
Edit:
More logs with Debug = "broker,topic,msg"
[12:24:24 INF] Publishing message with key 'cd23c386-0855-4207-9868-c1ef0acf059c' and version 'v2' to topic 'topic.name'...
[12:24:24 INF] rdkafka#producer-1: PRODUCE - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: topic.name [7]: Produce MessageSet with 1 message(s) (1041 bytes, ApiVersion 7, MsgVersion 2, MsgId 0, BaseSeq -1, PID{Invalid}, snappy)
[12:24:27 INF] rdkafka#producer-1: REQTMOUT - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: Timed out ProduceRequest in flight (after 3016ms, timeout #0)
[12:24:27 INF] rdkafka#producer-1: REQERR - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: ProduceRequest failed: Local: Timed out: explicit actions Retry,MsgPossiblyPersisted
[12:24:27 INF] rdkafka#producer-1: MSGSET - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: topic.name [7]: MessageSet with 1 message(s) (MsgId 0, BaseSeq -1) encountered error: Local: Timed out (actions Retry,MsgPossiblyPersisted)
[12:24:27 INF] rdkafka#producer-1: REQTMOUT - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
[12:24:27 INF] rdkafka#producer-1: FAIL - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: 1 request(s) timed out: disconnect (after 298222ms in state UP) (_TIMED_OUT)
[12:24:27 INF] rdkafka#producer-1: FAIL - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: 1 request(s) timed out: disconnect (after 298222ms in state UP)
[12:24:27 INF] rdkafka#producer-1: STATE - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: Broker changed state UP -> DOWN
[12:24:29 INF] rdkafka#producer-1: QRYLEADER - [thrd:main]: Topic topic.name [1]: broker is down: re-query
[12:24:29 INF] A non fatal error with error code 'Local_TimedOut' occurred: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: 1 request(s) timed out: disconnect (after 298222ms in state UP). The client will automatically try to recover from this error.
[12:24:29 INF] rdkafka#producer-1: QRYLEADER - [thrd:main]: Topic topic.name [4]: broker is down: re-query
[12:24:29 INF] rdkafka#producer-1: QRYLEADER - [thrd:main]: Topic topic.name [7]: broker is down: re-query
[12:24:29 INF] rdkafka#producer-1: STATE - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: Broker changed state DOWN -> INIT
[12:24:29 INF] rdkafka#producer-1: TOPPAR - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: topic.name [7] 1 message(s) queued but broker not up
[12:24:29 INF] rdkafka#producer-1: STATE - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: Broker changed state INIT -> TRY_CONNECT
[12:24:29 INF] rdkafka#producer-1: CONNECT - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: broker in state TRY_CONNECT connecting
[12:24:29 INF] rdkafka#producer-1: STATE - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: Broker changed state TRY_CONNECT -> CONNECT
[12:24:29 INF] rdkafka#producer-1: METADATA - [thrd:main]: Topic topic.name partition 0 Leader 0
[12:24:29 INF] rdkafka#producer-1: METADATA - [thrd:main]: Topic topic.name partition 1 Leader 1
[12:24:29 INF] rdkafka#producer-1: METADATA - [thrd:main]: Topic topic.name partition 2 Leader 2
[12:24:29 INF] rdkafka#producer-1: METADATA - [thrd:main]: Topic topic.name partition 3 Leader 0
[12:24:29 INF] rdkafka#producer-1: METADATA - [thrd:main]: Topic topic.name partition 4 Leader 1
[12:24:30 INF] rdkafka#producer-1: METADATA - [thrd:main]: Topic topic.name partition 5 Leader 2
[12:24:30 INF] rdkafka#producer-1: METADATA - [thrd:main]: Topic topic.name partition 6 Leader 0
[12:24:30 INF] rdkafka#producer-1: METADATA - [thrd:main]: Topic topic.name partition 7 Leader 1
[12:24:30 INF] rdkafka#producer-1: METADATA - [thrd:main]: Topic topic.name partition 8 Leader 2
[12:24:30 INF] rdkafka#producer-1: METADATA - [thrd:main]: Topic topic.name partition 9 Leader 0
[12:24:30 INF] rdkafka#producer-1: METADATA - [thrd:main]: sasl_ssl://oauth-2.test.bin.az.company.tech:9094/2: 1/1 requested topic(s) seen in metadata
[12:24:30 INF] rdkafka#producer-1: METADATA - [thrd:main]: Topic topic.name partition 0 Leader 0
[12:24:30 INF] rdkafka#producer-1: METADATA - [thrd:main]: Topic topic.name partition 1 Leader 1
[12:24:30 INF] rdkafka#producer-1: METADATA - [thrd:main]: Topic topic.name partition 2 Leader 2
[12:24:30 INF] rdkafka#producer-1: METADATA - [thrd:main]: Topic topic.name partition 3 Leader 0
[12:24:30 INF] rdkafka#producer-1: METADATA - [thrd:main]: Topic topic.name partition 4 Leader 1
[12:24:30 INF] rdkafka#producer-1: METADATA - [thrd:main]: Topic topic.name partition 5 Leader 2
[12:24:30 INF] rdkafka#producer-1: CONNECT - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: Connecting to ipv4#13.95.89.147:9094 (sasl_ssl) with socket 5896
[12:24:30 INF] rdkafka#producer-1: METADATA - [thrd:main]: Topic topic.name partition 6 Leader 0
[12:24:30 INF] rdkafka#producer-1: TOPPAR - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: topic.name [7] 1 message(s) queued but broker not up
[12:24:30 INF] rdkafka#producer-1: METADATA - [thrd:main]: Topic topic.name partition 7 Leader 1
[12:24:30 INF] rdkafka#producer-1: METADATA - [thrd:main]: Topic topic.name partition 8 Leader 2
[12:24:30 INF] rdkafka#producer-1: METADATA - [thrd:main]: Topic topic.name partition 9 Leader 0
[12:24:30 INF] rdkafka#producer-1: METADATA - [thrd:main]: sasl_ssl://oauth-2.test.bin.az.company.tech:9094/2: 1/1 requested topic(s) seen in metadata
[12:24:30 INF] rdkafka#producer-1: CONNECT - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: Connected to ipv4#13.95.89.147:9094
[12:24:30 INF] rdkafka#producer-1: STATE - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: Broker changed state CONNECT -> SSL_HANDSHAKE
[12:24:30 INF] rdkafka#producer-1: TOPPAR - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: topic.name [7] 1 message(s) queued but broker not up
[12:24:30 INF] rdkafka#producer-1: TOPPAR - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: topic.name [7] 1 message(s) queued but broker not up
[12:24:30 INF] rdkafka#producer-1: CONNECTED - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: Connected (#2)
[12:24:30 INF] rdkafka#producer-1: STATE - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: Broker changed state SSL_HANDSHAKE -> APIVERSION_QUERY
[12:24:30 INF] rdkafka#producer-1: TOPPAR - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: topic.name [7] 1 message(s) queued but broker not up
[12:24:30 INF] rdkafka#producer-1: TOPPAR - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: topic.name [7] 1 message(s) queued but broker not up
[12:24:30 INF] rdkafka#producer-1: TOPPAR - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: topic.name [7] 1 message(s) queued but broker not up
[12:24:30 INF] rdkafka#producer-1: AUTH - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: Auth in state APIVERSION_QUERY (handshake supported)
[12:24:30 INF] rdkafka#producer-1: STATE - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: Broker changed state APIVERSION_QUERY -> AUTH_HANDSHAKE
[12:24:30 INF] rdkafka#producer-1: TOPPAR - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: topic.name [7] 1 message(s) queued but broker not up
[12:24:31 INF] rdkafka#producer-1: TOPPAR - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: topic.name [7] 1 message(s) queued but broker not up
[12:24:31 INF] rdkafka#producer-1: SASLMECHS - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: Broker supported SASL mechanisms: OAUTHBEARER
[12:24:31 INF] rdkafka#producer-1: AUTH - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: Auth in state AUTH_HANDSHAKE (handshake supported)
[12:24:31 INF] rdkafka#producer-1: STATE - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: Broker changed state AUTH_HANDSHAKE -> AUTH_REQ
[12:24:31 INF] rdkafka#producer-1: TOPPAR - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: topic.name [7] 1 message(s) queued but broker not up
[12:24:31 INF] rdkafka#producer-1: TOPPAR - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: topic.name [7] 1 message(s) queued but broker not up
[12:24:31 INF] rdkafka#producer-1: OAUTHBEARER - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: SASL OAUTHBEARER authentication successful (principal=)
[12:24:31 INF] rdkafka#producer-1: STATE - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: Broker changed state AUTH_REQ -> UP
[12:24:31 INF] rdkafka#producer-1: PRODUCE - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: topic.name [7]: Produce MessageSet with 1 message(s) (1041 bytes, ApiVersion 7, MsgVersion 2, MsgId 0, BaseSeq -1, PID{Invalid}, snappy)
[12:24:31 INF] rdkafka#producer-1: MSGSET - [thrd:sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1]: sasl_ssl://oauth-1.test.bin.az.company.tech:9094/1: topic.name [7]: MessageSet with 1 message(s) (MsgId 0, BaseSeq -1) delivered
[12:24:31 INF] Message with key 'cd23c386-0855-4207-9868-c1ef0acf059c' and version 'v2' successfully published to topic 'topic.name' in partition '[7]:18182'.

kafka gives Unsubscribed all topics or patterns and assigned partitions

I have a simple consumer with dead-letter-topic functionality, added via the @RetryableTopic annotation and a @DltHandler method. It works fine on my local machine: it builds correctly and the application functions correctly.
But in a separate environment, where Kafka and ZooKeeper run as a sidecar, it emits the INFO log below and the consumers stop.
2022-08-03 13:41:09.976 INFO 1 --- [ner#3-dlt-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-dlt-1, groupId=xxx-dlt] Resetting generation due to: consumer pro-actively leaving the group
2022-08-03 13:41:09.978 INFO 1 --- [ner#3-dlt-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-dlt-1, groupId=xxx-dlt] Request joining group due to: consumer pro-actively leaving the group
2022-08-03 13:41:09.979 INFO 1 --- [ner#3-dlt-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-xxx-dlt-1, groupId=xxx-dlt] Unsubscribed all topics or patterns and assigned partitions
2022-08-03 13:41:09.978 INFO 1 --- [etry-1000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-1000-2, groupId=xxx-retry-1000] Resetting generation due to: consumer pro-actively leaving the group
2022-08-03 13:41:09.982 INFO 1 --- [etry-1000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-1000-2, groupId=xxx-retry-1000] Request joining group due to: consumer pro-actively leaving the group
2022-08-03 13:41:09.978 INFO 1 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-4, groupId=xxx] Revoke previously assigned partitions xxx-0
2022-08-03 13:41:09.983 INFO 1 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : xxx: partitions revoked: [xxx-0]
2022-08-03 13:41:09.984 INFO 1 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=xxx-4, groupId=xxx] Member consumer-xxx-4-b9e92dbb-3726-48b1-8ff6-3e550c0956c9 sending LeaveGroup request to coordinator localhost:9092 (id: 2147483646 rack: null) due to the consumer unsubscribed from all topics
I'm new to Kafka and I don't know what the log above means.
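For context, the consumer is wired roughly as follows. This is a minimal sketch, not my exact code: the topic name, group id, and retry settings are placeholders (though a delay of 1000 ms with multiplier 3 would match the xxx-retry-1000 / xxx-retry-3000 / xxx-dlt topics visible in the logs); the annotations are the standard Spring Kafka API.

```java
import org.springframework.kafka.annotation.DltHandler;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.annotation.RetryableTopic;
import org.springframework.kafka.retrytopic.TopicSuffixingStrategy;
import org.springframework.retry.annotation.Backoff;
import org.springframework.stereotype.Component;

// Minimal sketch of a @RetryableTopic consumer with a DLT handler; topic name,
// group id, and retry settings are placeholders, not the real configuration.
@Component
public class RetryingConsumer {

    @RetryableTopic(
            attempts = "3",
            backoff = @Backoff(delay = 1000, multiplier = 3.0),
            topicSuffixingStrategy = TopicSuffixingStrategy.SUFFIX_WITH_DELAY_VALUE)
    @KafkaListener(topics = "xxx", groupId = "xxx")
    public void listen(String message) {
        // normal processing; an exception thrown here routes the record
        // through the -retry-* topics and eventually to the -dlt topic
        process(message);
    }

    @DltHandler
    public void handleDlt(String message) {
        // record exhausted all retries and landed on the dead letter topic
    }

    private void process(String message) { /* ... */ }
}
```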
Edit
Since the log above is not enough, here is the full log:
2022-08-03 13:39:29.777 INFO 1 --- [ main] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-xxx-4, groupId=xxx] Subscribed to topic(s): xxx
2022-08-03 13:39:29.777 INFO 1 --- [etry-3000-0-C-1] org.apache.kafka.clients.Metadata : [Consumer clientId=consumer-xxx-retry-3000-3, groupId=xxx-retry-3000] Resetting the last seen epoch of partition xxx-retry-3000-0 to 0 since the associated topicId changed from null to 9cpZF_2cTs2Db0R0OYMtFA
2022-08-03 13:39:29.782 INFO 1 --- [etry-3000-0-C-1] org.apache.kafka.clients.Metadata : [Consumer clientId=consumer-xxx-retry-3000-3, groupId=xxx-retry-3000] Cluster ID: s28GwPkmTEOOKRkAUkwjWw
2022-08-03 13:39:29.784 INFO 1 --- [etry-3000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-3000-3, groupId=xxx-retry-3000] Discovered group coordinator localhost:9092 (id: 2147483646 rack: null)
2022-08-03 13:39:29.784 INFO 1 --- [ner#3-dlt-0-C-1] org.apache.kafka.clients.Metadata : [Consumer clientId=consumer-xxx-dlt-1, groupId=xxx-dlt] Cluster ID: s28GwPkmTEOOKRkAUkwjWw
2022-08-03 13:39:29.785 INFO 1 --- [ner#3-dlt-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-dlt-1, groupId=xxx-dlt] Discovered group coordinator localhost:9092 (id: 2147483646 rack: null)
2022-08-03 13:39:29.803 INFO 1 --- [etry-1000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-1000-2, groupId=xxx-retry-1000] (Re-)joining group
2022-08-03 13:39:29.894 INFO 1 --- [etry-3000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-3000-3, groupId=xxx-retry-3000] (Re-)joining group
2022-08-03 13:39:29.898 INFO 1 --- [ner#3-dlt-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-dlt-1, groupId=xxx-dlt] (Re-)joining group
2022-08-03 13:39:29.998 INFO 1 --- [ntainer#0-0-C-1] org.apache.kafka.clients.Metadata : [Consumer clientId=consumer-xxx-4, groupId=xxx] Resetting the last seen epoch of partition xxx-0 to 0 since the associated topicId changed from null to Pg5jHtdbRu-y3lfbrX23Kg
2022-08-03 13:39:29.999 INFO 1 --- [ntainer#0-0-C-1] org.apache.kafka.clients.Metadata : [Consumer clientId=consumer-xxx-4, groupId=xxx] Cluster ID: s28GwPkmTEOOKRkAUkwjWw
2022-08-03 13:39:30.003 INFO 1 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-4, groupId=xxx] Discovered group coordinator localhost:9092 (id: 2147483646 rack: null)
2022-08-03 13:39:30.073 INFO 1 --- [etry-1000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-1000-2, groupId=xxx-retry-1000] Request joining group due to: need to re-join with the given member-id
2022-08-03 13:39:30.076 INFO 1 --- [etry-1000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-1000-2, groupId=xxx-retry-1000] (Re-)joining group
2022-08-03 13:39:30.075 INFO 1 --- [ner#3-dlt-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-dlt-1, groupId=xxx-dlt] Request joining group due to: need to re-join with the given member-id
2022-08-03 13:39:30.078 INFO 1 --- [etry-3000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-3000-3, groupId=xxx-retry-3000] Request joining group due to: need to re-join with the given member-id
2022-08-03 13:39:30.083 INFO 1 --- [etry-3000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-3000-3, groupId=xxx-retry-3000] (Re-)joining group
2022-08-03 13:39:30.083 INFO 1 --- [ner#3-dlt-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-dlt-1, groupId=xxx-dlt] (Re-)joining group
2022-08-03 13:39:30.098 INFO 1 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-4, groupId=xxx] (Re-)joining group
2022-08-03 13:39:30.173 INFO 1 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-4, groupId=xxx] Request joining group due to: need to re-join with the given member-id
2022-08-03 13:39:30.178 INFO 1 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-4, groupId=xxx] (Re-)joining group
2022-08-03 13:39:30.278 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8088 (http) with context path ''
2022-08-03 13:39:30.474 INFO 1 --- [ main] c.w.e.e.EventHubAdaptorApplication : Started EventHubAdaptorApplication in 40.773 seconds (JVM running for 44.671)
2022-08-03 13:39:33.089 INFO 1 --- [etry-1000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-1000-2, groupId=xxx-retry-1000] Successfully joined group with generation Generation{generationId=25, memberId='consumer-xxx-retry-1000-2-1ad2ca33-d1a3-4626-8460-4008d8705cf2', protocol='range'}
2022-08-03 13:39:33.094 INFO 1 --- [ner#3-dlt-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-dlt-1, groupId=xxx-dlt] Successfully joined group with generation Generation{generationId=25, memberId='consumer-xxx-dlt-1-74310b12-d021-4bfe-b6eb-67d69fa99d68', protocol='range'}
2022-08-03 13:39:33.093 INFO 1 --- [etry-3000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-3000-3, groupId=xxx-retry-3000] Successfully joined group with generation Generation{generationId=25, memberId='consumer-xxx-retry-3000-3-ed7c07c9-18a3-4edb-a3c1-cfac0e8b6448', protocol='range'}
2022-08-03 13:39:33.098 INFO 1 --- [etry-3000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-3000-3, groupId=xxx-retry-3000] Finished assignment for group at generation 25: {consumer-xxx-retry-3000-3-ed7c07c9-18a3-4edb-a3c1-cfac0e8b6448=Assignment(partitions=[xxx-retry-3000-0])}
2022-08-03 13:39:33.098 INFO 1 --- [etry-1000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-1000-2, groupId=xxx-retry-1000] Finished assignment for group at generation 25: {consumer-xxx-retry-1000-2-1ad2ca33-d1a3-4626-8460-4008d8705cf2=Assignment(partitions=[xxx-retry-1000-0])}
2022-08-03 13:39:33.098 INFO 1 --- [ner#3-dlt-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-dlt-1, groupId=xxx-dlt] Finished assignment for group at generation 25: {consumer-xxx-dlt-1-74310b12-d021-4bfe-b6eb-67d69fa99d68=Assignment(partitions=[xxx-dlt-0])}
2022-08-03 13:39:33.113 INFO 1 --- [etry-3000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-3000-3, groupId=xxx-retry-3000] Successfully synced group in generation Generation{generationId=25, memberId='consumer-xxx-retry-3000-3-ed7c07c9-18a3-4edb-a3c1-cfac0e8b6448', protocol='range'}
2022-08-03 13:39:33.115 INFO 1 --- [etry-3000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-3000-3, groupId=xxx-retry-3000] Notifying assignor about the new Assignment(partitions=[xxx-retry-3000-0])
2022-08-03 13:39:33.119 INFO 1 --- [ner#3-dlt-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-dlt-1, groupId=xxx-dlt] Successfully synced group in generation Generation{generationId=25, memberId='consumer-xxx-dlt-1-74310b12-d021-4bfe-b6eb-67d69fa99d68', protocol='range'}
2022-08-03 13:39:33.120 INFO 1 --- [ner#3-dlt-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-dlt-1, groupId=xxx-dlt] Notifying assignor about the new Assignment(partitions=[xxx-dlt-0])
2022-08-03 13:39:33.124 INFO 1 --- [etry-1000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-1000-2, groupId=xxx-retry-1000] Successfully synced group in generation Generation{generationId=25, memberId='consumer-xxx-retry-1000-2-1ad2ca33-d1a3-4626-8460-4008d8705cf2', protocol='range'}
2022-08-03 13:39:33.125 INFO 1 --- [etry-3000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-3000-3, groupId=xxx-retry-3000] Adding newly assigned partitions: xxx-retry-3000-0
2022-08-03 13:39:33.126 INFO 1 --- [etry-1000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-1000-2, groupId=xxx-retry-1000] Notifying assignor about the new Assignment(partitions=[xxx-retry-1000-0])
2022-08-03 13:39:33.126 INFO 1 --- [etry-1000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-1000-2, groupId=xxx-retry-1000] Adding newly assigned partitions: xxx-retry-1000-0
2022-08-03 13:39:33.128 INFO 1 --- [ner#3-dlt-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-dlt-1, groupId=xxx-dlt] Adding newly assigned partitions: xxx-dlt-0
2022-08-03 13:39:33.184 INFO 1 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-4, groupId=xxx] Successfully joined group with generation Generation{generationId=25, memberId='consumer-xxx-4-b9e92dbb-3726-48b1-8ff6-3e550c0956c9', protocol='range'}
2022-08-03 13:39:33.185 INFO 1 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-4, groupId=xxx] Finished assignment for group at generation 25: {consumer-xxx-4-b9e92dbb-3726-48b1-8ff6-3e550c0956c9=Assignment(partitions=[xxx-0])}
2022-08-03 13:39:33.274 INFO 1 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-4, groupId=xxx] Successfully synced group in generation Generation{generationId=25, memberId='consumer-xxx-4-b9e92dbb-3726-48b1-8ff6-3e550c0956c9', protocol='range'}
2022-08-03 13:39:33.274 INFO 1 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-4, groupId=xxx] Notifying assignor about the new Assignment(partitions=[xxx-0])
2022-08-03 13:39:33.274 INFO 1 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-4, groupId=xxx] Adding newly assigned partitions: xxx-0
2022-08-03 13:39:33.285 INFO 1 --- [etry-3000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-3000-3, groupId=xxx-retry-3000] Setting offset for partition xxx-retry-3000-0 to the committed offset FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:9092 (id: 1 rack: null)], epoch=0}}
2022-08-03 13:39:33.282 INFO 1 --- [etry-1000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-1000-2, groupId=xxx-retry-1000] Setting offset for partition xxx-retry-1000-0 to the committed offset FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:9092 (id: 1 rack: null)], epoch=0}}
2022-08-03 13:39:33.292 INFO 1 --- [ner#3-dlt-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-dlt-1, groupId=xxx-dlt] Setting offset for partition xxx-dlt-0 to the committed offset FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:9092 (id: 1 rack: null)], epoch=0}}
2022-08-03 13:39:33.299 INFO 1 --- [ner#3-dlt-0-C-1] o.s.k.l.KafkaMessageListenerContainer : xxx-dlt: partitions assigned: [xxx-dlt-0]
2022-08-03 13:39:33.303 INFO 1 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-4, groupId=xxx] Setting offset for partition xxx-0 to the committed offset FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:9092 (id: 1 rack: null)], epoch=0}}
2022-08-03 13:39:33.304 INFO 1 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : xxx: partitions assigned: [xxx-0]
2022-08-03 13:39:33.390 INFO 1 --- [etry-3000-0-C-1] o.s.k.l.KafkaMessageListenerContainer : xxx-retry-3000: partitions assigned: [xxx-retry-3000-0]
2022-08-03 13:39:33.391 INFO 1 --- [etry-1000-0-C-1] o.s.k.l.KafkaMessageListenerContainer : xxx-retry-1000: partitions assigned: [xxx-retry-1000-0]
2022-08-03 13:41:09.918 INFO 1 --- [ner#3-dlt-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-dlt-1, groupId=xxx-dlt] Revoke previously assigned partitions xxx-dlt-0
2022-08-03 13:41:09.920 INFO 1 --- [ner#3-dlt-0-C-1] o.s.k.l.KafkaMessageListenerContainer : xxx-dlt: partitions revoked: [xxx-dlt-0]
2022-08-03 13:41:09.974 INFO 1 --- [ner#3-dlt-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-dlt-1, groupId=xxx-dlt] Member consumer-xxx-dlt-1-74310b12-d021-4bfe-b6eb-67d69fa99d68 sending LeaveGroup request to coordinator localhost:9092 (id: 2147483646 rack: null) due to the consumer unsubscribed from all topics
2022-08-03 13:41:09.975 INFO 1 --- [etry-1000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-1000-2, groupId=xxx-retry-1000] Revoke previously assigned partitions xxx-retry-1000-0
2022-08-03 13:41:09.975 INFO 1 --- [etry-1000-0-C-1] o.s.k.l.KafkaMessageListenerContainer : xxx-retry-1000: partitions revoked: [xxx-retry-1000-0]
2022-08-03 13:41:09.975 INFO 1 --- [etry-1000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-1000-2, groupId=xxx-retry-1000] Member consumer-xxx-retry-1000-2-1ad2ca33-d1a3-4626-8460-4008d8705cf2 sending LeaveGroup request to coordinator localhost:9092 (id: 2147483646 rack: null) due to the consumer unsubscribed from all topics
2022-08-03 13:41:09.976 INFO 1 --- [etry-1000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-1000-2, groupId=xxx-retry-1000] Resetting generation due to: consumer pro-actively leaving the group
2022-08-03 13:41:09.976 INFO 1 --- [etry-1000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-1000-2, groupId=xxx-retry-1000] Request joining group due to: consumer pro-actively leaving the group
2022-08-03 13:41:09.976 INFO 1 --- [etry-1000-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-xxx-retry-1000-2, groupId=xxx-retry-1000] Unsubscribed all topics or patterns and assigned partitions
2022-08-03 13:41:09.976 INFO 1 --- [ner#3-dlt-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-dlt-1, groupId=xxx-dlt] Resetting generation due to: consumer pro-actively leaving the group
2022-08-03 13:41:09.978 INFO 1 --- [ner#3-dlt-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-dlt-1, groupId=xxx-dlt] Request joining group due to: consumer pro-actively leaving the group
2022-08-03 13:41:09.979 INFO 1 --- [ner#3-dlt-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-xxx-dlt-1, groupId=xxx-dlt] Unsubscribed all topics or patterns and assigned partitions
2022-08-03 13:41:09.978 INFO 1 --- [etry-1000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-1000-2, groupId=xxx-retry-1000] Resetting generation due to: consumer pro-actively leaving the group
2022-08-03 13:41:09.982 INFO 1 --- [etry-1000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-1000-2, groupId=xxx-retry-1000] Request joining group due to: consumer pro-actively leaving the group
2022-08-03 13:41:09.978 INFO 1 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-4, groupId=xxx] Revoke previously assigned partitions xxx-0
2022-08-03 13:41:09.983 INFO 1 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : xxx: partitions revoked: [xxx-0]
2022-08-03 13:41:09.984 INFO 1 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-4, groupId=xxx] Member consumer-xxx-4-b9e92dbb-3726-48b1-8ff6-3e550c0956c9 sending LeaveGroup request to coordinator localhost:9092 (id: 2147483646 rack: null) due to the consumer unsubscribed from all topics
2022-08-03 13:41:09.985 INFO 1 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-4, groupId=xxx] Resetting generation due to: consumer pro-actively leaving the group
2022-08-03 13:41:09.985 INFO 1 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-4, groupId=xxx] Request joining group due to: consumer pro-actively leaving the group
2022-08-03 13:41:09.986 INFO 1 --- [ntainer#0-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-xxx-4, groupId=xxx] Unsubscribed all topics or patterns and assigned partitions
2022-08-03 13:41:09.988 INFO 1 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-4, groupId=xxx] Resetting generation due to: consumer pro-actively leaving the group
2022-08-03 13:41:09.988 INFO 1 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-4, groupId=xxx] Request joining group due to: consumer pro-actively leaving the group
2022-08-03 13:41:09.979 INFO 1 --- [etry-3000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-3000-3, groupId=xxx-retry-3000] Revoke previously assigned partitions xxx-retry-3000-0
2022-08-03 13:41:09.988 INFO 1 --- [etry-3000-0-C-1] o.s.k.l.KafkaMessageListenerContainer : xxx-retry-3000: partitions revoked: [xxx-retry-3000-0]
2022-08-03 13:41:09.988 INFO 1 --- [etry-3000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-3000-3, groupId=xxx-retry-3000] Member consumer-xxx-retry-3000-3-ed7c07c9-18a3-4edb-a3c1-cfac0e8b6448 sending LeaveGroup request to coordinator localhost:9092 (id: 2147483646 rack: null) due to the consumer unsubscribed from all topics
2022-08-03 13:41:09.989 INFO 1 --- [etry-3000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-3000-3, groupId=xxx-retry-3000] Resetting generation due to: consumer pro-actively leaving the group
2022-08-03 13:41:09.989 INFO 1 --- [etry-3000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-3000-3, groupId=xxx-retry-3000] Request joining group due to: consumer pro-actively leaving the group
2022-08-03 13:41:09.989 INFO 1 --- [etry-3000-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-xxx-retry-3000-3, groupId=xxx-retry-3000] Unsubscribed all topics or patterns and assigned partitions
2022-08-03 13:41:09.984 INFO 1 --- [ner#3-dlt-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-dlt-1, groupId=xxx-dlt] Resetting generation due to: consumer pro-actively leaving the group
2022-08-03 13:41:09.989 INFO 1 --- [ner#3-dlt-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-dlt-1, groupId=xxx-dlt] Request joining group due to: consumer pro-actively leaving the group
2022-08-03 13:41:09.990 INFO 1 --- [ner#3-dlt-0-C-1] org.apache.kafka.common.metrics.Metrics : Metrics scheduler closed
2022-08-03 13:41:09.990 INFO 1 --- [ner#3-dlt-0-C-1] org.apache.kafka.common.metrics.Metrics : Closing reporter org.apache.kafka.common.metrics.JmxReporter
2022-08-03 13:41:09.990 INFO 1 --- [ner#3-dlt-0-C-1] org.apache.kafka.common.metrics.Metrics : Metrics reporters closed
2022-08-03 13:41:09.997 INFO 1 --- [etry-3000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-3000-3, groupId=xxx-retry-3000] Resetting generation due to: consumer pro-actively leaving the group
2022-08-03 13:41:09.997 INFO 1 --- [ntainer#0-0-C-1] org.apache.kafka.common.metrics.Metrics : Metrics scheduler closed
2022-08-03 13:41:09.997 INFO 1 --- [etry-3000-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-xxx-retry-3000-3, groupId=xxx-retry-3000] Request joining group due to: consumer pro-actively leaving the group
2022-08-03 13:41:09.997 INFO 1 --- [ntainer#0-0-C-1] org.apache.kafka.common.metrics.Metrics : Closing reporter org.apache.kafka.common.metrics.JmxReporter
2022-08-03 13:41:09.997 INFO 1 --- [ntainer#0-0-C-1] org.apache.kafka.common.metrics.Metrics : Metrics reporters closed
2022-08-03 13:41:10.073 INFO 1 --- [etry-3000-0-C-1] org.apache.kafka.common.metrics.Metrics : Metrics scheduler closed
2022-08-03 13:41:10.074 INFO 1 --- [etry-3000-0-C-1] org.apache.kafka.common.metrics.Metrics : Closing reporter org.apache.kafka.common.metrics.JmxReporter
2022-08-03 13:41:10.074 INFO 1 --- [etry-3000-0-C-1] org.apache.kafka.common.metrics.Metrics : Metrics reporters closed
2022-08-03 13:41:10.076 INFO 1 --- [etry-3000-0-C-1] o.a.kafka.common.utils.AppInfoParser : App info kafka.consumer for consumer-xxx-retry-3000-3 unregistered
2022-08-03 13:41:10.077 INFO 1 --- [etry-3000-0-C-1] o.s.k.l.KafkaMessageListenerContainer : xxx-retry-3000: Consumer stopped
2022-08-03 13:41:10.079 INFO 1 --- [etry-1000-0-C-1] org.apache.kafka.common.metrics.Metrics : Metrics scheduler closed
2022-08-03 13:41:10.079 INFO 1 --- [etry-1000-0-C-1] org.apache.kafka.common.metrics.Metrics : Closing reporter org.apache.kafka.common.metrics.JmxReporter
2022-08-03 13:41:10.080 INFO 1 --- [etry-1000-0-C-1] org.apache.kafka.common.metrics.Metrics : Metrics reporters closed
2022-08-03 13:41:10.084 INFO 1 --- [ntainer#0-0-C-1] o.a.kafka.common.utils.AppInfoParser : App info kafka.consumer for consumer-xxx-4 unregistered
2022-08-03 13:41:10.084 INFO 1 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : xxx: Consumer stopped
2022-08-03 13:41:10.085 INFO 1 --- [etry-1000-0-C-1] o.a.kafka.common.utils.AppInfoParser : App info kafka.consumer for consumer-xxx-retry-1000-2 unregistered
2022-08-03 13:41:10.085 INFO 1 --- [etry-1000-0-C-1] o.s.k.l.KafkaMessageListenerContainer : xxx-retry-1000: Consumer stopped
2022-08-03 13:41:10.087 INFO 1 --- [ner#3-dlt-0-C-1] o.a.kafka.common.utils.AppInfoParser : App info kafka.consumer for consumer-xxx-dlt-1 unregistered
2022-08-03 13:41:10.087 INFO 1 --- [ner#3-dlt-0-C-1] o.s.k.l.KafkaMessageListenerContainer : xxx-dlt: Consumer stopped

Some Kafka Producers slower than others

We are seeing a strange situation where some producers, running exactly the same code, are slower than others, and the slow producers stay slow even after the faster ones have finished their work. We are running 32 Kubernetes pods, each with 4 producers, and each producer writes to a single partition. A given pod can contain both fast and slow producers. Any suggestions?
Below is an example of the logging produced by the 4 producers on one instance. Three of them finished in roughly the same amount of time, whereas the fourth took 2 hours and 19 minutes.
2022-01-25 20:13:26.519 WARN 1 --- [cket-migrator-1] c.n.qube.kafka.Qube2KafkaBucketMigrator : Completed kafka migration of 'SCRUBBED TOPIC NAME', getRecords count = 25028043, duration = 29m 16s
2022-01-25 20:13:26.520 INFO 1 --- [cket-migrator-1] c.n.qube.kafka.Qube2KafkaBucketMigrator : I'm going to sleep (500ms), wake me when something happens
2022-01-25 20:13:26.520 INFO 1 --- [cket-migrator-1] c.n.qube.kafka.Qube2KafkaBucketMigrator : Woke up
2022-01-25 20:14:34.268 WARN 1 --- [cket-migrator-1] c.n.qube.kafka.Qube2KafkaBucketMigrator : Completed kafka migration of 'SCRUBBED TOPIC NAME', getRecords count = 25028043, duration = 30m 24s
2022-01-25 20:14:34.268 INFO 1 --- [cket-migrator-1] c.n.qube.kafka.Qube2KafkaBucketMigrator : I'm going to sleep (500ms), wake me when something happens
2022-01-25 20:14:34.268 INFO 1 --- [cket-migrator-1] c.n.qube.kafka.Qube2KafkaBucketMigrator : Woke up
2022-01-25 20:16:12.747 WARN 1 --- [cket-migrator-1] c.n.qube.kafka.Qube2KafkaBucketMigrator : Completed kafka migration of 'SCRUBBED TOPIC NAME', getRecords count = 25028043, duration = 32m 2s
2022-01-25 20:16:12.748 INFO 1 --- [cket-migrator-1] c.n.qube.kafka.Qube2KafkaBucketMigrator : I'm going to sleep (500ms), wake me when something happens
2022-01-25 20:16:12.748 INFO 1 --- [cket-migrator-1] c.n.qube.kafka.Qube2KafkaBucketMigrator : Woke up
2022-01-25 22:03:54.030 WARN 1 --- [cket-migrator-1] c.n.qube.kafka.Qube2KafkaBucketMigrator : Completed kafka migration of 'SCRUBBED TOPIC NAME', getRecords count = 25028043, duration = 2h 19m 44s
2022-01-25 22:03:54.030 INFO 1 --- [cket-migrator-1] c.n.qube.kafka.Qube2KafkaBucketMigrator : I'm going to sleep (500ms), wake me when something happens
2022-01-25 22:03:54.030 INFO 1 --- [cket-migrator-1] c.n.qube.kafka.Qube2KafkaBucketMigrator : Woke up
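Since all producers run identical code, comparing client-side producer metrics between a fast and a slow instance can show where the time goes: waiting in the record accumulator, waiting for buffer memory, or waiting on broker responses. A diagnostic sketch (not a fix) using the standard Kafka producer metrics; the class name is made up, and it assumes you have access to the `KafkaProducer` instances inside the migrator:

```java
import java.util.Map;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;

public class ProducerMetricsDump {

    // Print the producer metrics most relevant to throughput differences.
    // Comparing these between a fast and a slow producer narrows down the
    // bottleneck: accumulator wait, buffer exhaustion, broker latency, or retries.
    static void dump(KafkaProducer<?, ?> producer) {
        Map<MetricName, ? extends Metric> metrics = producer.metrics();
        for (Map.Entry<MetricName, ? extends Metric> e : metrics.entrySet()) {
            String name = e.getKey().name();
            if (name.equals("record-queue-time-avg")        // time records wait in the accumulator
                    || name.equals("request-latency-avg")   // broker round-trip time
                    || name.equals("batch-size-avg")        // effective batching
                    || name.equals("bufferpool-wait-time-total") // time blocked on buffer.memory
                    || name.equals("record-retry-rate")) {  // retries point at broker/partition issues
                System.out.println(name + " = " + e.getValue().metricValue());
            }
        }
    }
}
```

If the slow producer shows a high `request-latency-avg` or `record-retry-rate` while the fast ones do not, the difference is likely on the broker/partition side (e.g. partition leadership placement) rather than in the client code.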

My consumer group is stuck in rebalancing and does not consume (Kafka v0.11)

My consumer group is not consuming data from a topic on a Kafka v0.11 cluster. It starts up but never processes any messages. I see the following logs:
2019-07-24 11:25:08.222 Level=INFO 47637 --- [ntainer#0-9-C-1] o.a.k.c.c.internals.AbstractCoordinator : Discovered coordinator xxxx:9097 (id: 2147477640 rack: null) for group cs-ms-mapping-processor-dev-new.
2019-07-24 11:25:08.222 Level=INFO 47637 --- [ntainer#0-2-C-1] o.a.k.c.c.internals.AbstractCoordinator : Discovered coordinator xxxx:9097 (id: 2147477640 rack: null) for group cs-ms-mapping-processor-dev-new.
2019-07-24 11:25:08.222 Level=INFO 47637 --- [ntainer#0-5-C-1] o.a.k.c.c.internals.AbstractCoordinator : Discovered coordinator xxxx:9097 (id: 2147477640 rack: null) for group cs-ms-mapping-processor-dev-new.
2019-07-24 11:25:08.222 Level=INFO 47637 --- [ntainer#0-7-C-1] o.a.k.c.c.internals.AbstractCoordinator : Discovered coordinator xxxx:9097 (id: 2147477640 rack: null) for group cs-ms-mapping-processor-dev-new.
2019-07-24 11:25:08.227 Level=INFO 47637 --- [ntainer#0-7-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Revoking previously assigned partitions [] for group cs-ms-mapping-processor-dev-new
2019-07-24 11:25:08.227 Level=INFO 47637 --- [ntainer#0-5-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Revoking previously assigned partitions [] for group cs-ms-mapping-processor-dev-new
2019-07-24 11:25:08.227 Level=INFO 47637 --- [ntainer#0-3-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Revoking previously assigned partitions [] for group cs-ms-mapping-processor-dev-new
2019-07-24 11:25:08.227 Level=INFO 47637 --- [ntainer#0-9-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Revoking previously assigned partitions [] for group cs-ms-mapping-processor-dev-new
2019-07-24 11:25:08.227 Level=INFO 47637 --- [ntainer#0-2-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Revoking previously assigned partitions [] for group cs-ms-mapping-processor-dev-new
2019-07-24 11:25:08.228 Level=INFO 47637 --- [ntainer#0-3-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked:[]
2019-07-24 11:25:08.228 Level=INFO 47637 --- [ntainer#0-5-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked:[]
2019-07-24 11:25:08.228 Level=INFO 47637 --- [ntainer#0-7-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked:[]
2019-07-24 11:25:08.228 Level=INFO 47637 --- [ntainer#0-9-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked:[]
2019-07-24 11:25:08.228 Level=INFO 47637 --- [ntainer#0-2-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked:[]
2019-07-24 11:25:08.228 Level=INFO 47637 --- [ntainer#0-7-C-1] o.a.k.c.c.internals.AbstractCoordinator : (Re-)joining group cs-ms-mapping-processor-dev-new
2019-07-24 11:25:08.228 Level=INFO 47637 --- [ntainer#0-5-C-1] o.a.k.c.c.internals.AbstractCoordinator : (Re-)joining group cs-ms-mapping-processor-dev-new
2019-07-24 11:25:08.228 Level=INFO 47637 --- [ntainer#0-9-C-1] o.a.k.c.c.internals.AbstractCoordinator : (Re-)joining group cs-ms-mapping-processor-dev-new
2019-07-24 11:25:08.228 Level=INFO 47637 --- [ntainer#0-3-C-1] o.a.k.c.c.internals.AbstractCoordinator : (Re-)joining group cs-ms-mapping-processor-dev-new
2019-07-24 11:25:08.228 Level=INFO 47637 --- [ntainer#0-2-C-1] o.a.k.c.c.internals.AbstractCoordinator : (Re-)joining group cs-ms-mapping-processor-dev-new
2019-07-24 11:25:08.233 Level=INFO 47637 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : Discovered coordinator xxxx:9097 (id: 2147477640 rack: null) for group cs-ms-mapping-processor-dev-new.
2019-07-24 11:25:08.234 Level=INFO 47637 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Revoking previously assigned partitions [] for group cs-ms-mapping-processor-dev-new
2019-07-24 11:25:08.234 Level=INFO 47637 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked:[]
2019-07-24 11:25:08.234 Level=INFO 47637 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : (Re-)joining group cs-ms-mapping-processor-dev-new
2019-07-24 11:25:08.269 Level=INFO 47637 --- [ntainer#0-6-C-1] o.a.k.c.c.internals.AbstractCoordinator : Discovered coordinator xxxx:9097 (id: 2147477640 rack: null) for group cs-ms-mapping-processor-dev-new.
2019-07-24 11:25:08.270 Level=INFO 47637 --- [ntainer#0-6-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Revoking previously assigned partitions [] for group cs-ms-mapping-processor-dev-new
2019-07-24 11:25:08.270 Level=INFO 47637 --- [ntainer#0-6-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked:[]
2019-07-24 11:25:08.270 Level=INFO 47637 --- [ntainer#0-6-C-1] o.a.k.c.c.internals.AbstractCoordinator : (Re-)joining group cs-ms-mapping-processor-dev-new
2019-07-24 11:25:08.396 Level=INFO 47637 --- [ntainer#0-1-C-1] o.a.k.c.c.internals.AbstractCoordinator : Discovered coordinator xxxx:9097 (id: 2147477640 rack: null) for group cs-ms-mapping-processor-dev-new.
2019-07-24 11:25:08.397 Level=INFO 47637 --- [ntainer#0-1-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Revoking previously assigned partitions [] for group cs-ms-mapping-processor-dev-new
2019-07-24 11:25:08.398 Level=INFO 47637 --- [ntainer#0-1-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked:[]
2019-07-24 11:25:08.398 Level=INFO 47637 --- [ntainer#0-1-C-1] o.a.k.c.c.internals.AbstractCoordinator : (Re-)joining group cs-ms-mapping-processor-dev-new
2019-07-24 11:25:08.521 Level=INFO 47637 --- [ntainer#0-4-C-1] o.a.k.c.c.internals.AbstractCoordinator : Discovered coordinator xxxx:9097 (id: 2147477640 rack: null) for group cs-ms-mapping-processor-dev-new.
2019-07-24 11:25:08.522 Level=INFO 47637 --- [ntainer#0-4-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Revoking previously assigned partitions [] for group cs-ms-mapping-processor-dev-new
2019-07-24 11:25:08.522 Level=INFO 47637 --- [ntainer#0-4-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked:[]
2019-07-24 11:25:08.522 Level=INFO 47637 --- [ntainer#0-4-C-1] o.a.k.c.c.internals.AbstractCoordinator : (Re-)joining group cs-ms-mapping-processor-dev-new
2019-07-24 11:25:08.522 Level=INFO 47637 --- [ntainer#0-8-C-1] o.a.k.c.c.internals.AbstractCoordinator : Discovered coordinator xxxx:9097 (id: 2147477640 rack: null) for group cs-ms-mapping-processor-dev-new.
2019-07-24 11:25:08.523 Level=INFO 47637 --- [ntainer#0-8-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Revoking previously assigned partitions [] for group cs-ms-mapping-processor-dev-new
2019-07-24 11:25:08.523 Level=INFO 47637 --- [ntainer#0-8-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked:[]
2019-07-24 11:25:08.524 Level=INFO 47637 --- [ntainer#0-8-C-1] o.a.k.c.c.internals.AbstractCoordinator : (Re-)joining group cs-ms-mapping-processor-dev-new
I also see this log:
Auto-commit of offsets {cs-ms-notification-prod-6=OffsetAndMetadata{offset=401005946, metadata=''}} failed for group cs-ms-mapping-processor-prod-new: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records
max-poll-records: 1
session.timeout.ms: "450000"
request.timeout.ms: "480000"
max.poll.interval.ms: "450000"
Code snippet:
@KafkaListener(topics = "${kafka.notification.topic-name:notification}", containerFactory = NOTIFICATION_LISTENER_CONTAINER_FACTORY)
public void accept(KafkaStatusMessage kafkaStatusMessage) {
    ContentId contentId = kafkaStatusMessage.getHeaders();
    StatusInfo statusInfo = kafkaStatusMessage.getCurrentStatus();
    List<StatusInfo> previousStatuses = kafkaStatusMessage.getPreviousStatuses();
}
DefaultKafkaConsumerFactory<String, KafkaStatusMessage> consumerFactory = new DefaultKafkaConsumerFactory<>(
        kafkaProperties.buildConsumerProperties(),
        new StringDeserializer(),
        jsonDeserializer);
These are the config values I have right now. I would like the group to consume messages from the topic, but it is not doing so.
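The config values above mix two spellings (`max-poll-records` in Spring relaxed-binding style, the rest as raw Kafka property names), which can mean they come from different config sources and one of them may not actually be applied. With Spring Boot auto-configuration they would normally all live under `spring.kafka.consumer`; a sketch, property paths assumed from standard Spring Boot conventions:

```yaml
spring:
  kafka:
    consumer:
      max-poll-records: 1          # bound to max.poll.records
      properties:                  # arbitrary Kafka consumer properties
        session.timeout.ms: 450000
        request.timeout.ms: 480000
        max.poll.interval.ms: 450000
```

Note that the quoted error message names `max.poll.interval.ms` (time between `poll()` calls), not `session.timeout.ms`, as the limit being exceeded, so it is worth verifying that this particular value is really reaching the consumer, e.g. by checking the `ConsumerConfig values:` block that the client logs at startup.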

Spring-Kafka: Rebalancing happens during consumer pause/resume, although per the documentation it should not

Spring-Kafka: According to the documentation, pausing/resuming a consumer with the pause/resume methods should not cause a rebalance when automatic partition assignment is used, but that is not what I observe: a rebalance happens anyway. How can I pause the consumer, keep polling for a period (so heartbeats continue), and then resume without triggering a rebalance?
Use case: the consumer should pause for a period while continuing to poll, and resume once the time is up; Kafka should not rebalance while the consumer is paused.
System.out.println("Consumer[" + Thread.currentThread().getName() + "] Partition [" + topicPartition + "] stopped consumption.");
consumer.pause(Collections.singleton(topicPartition));
try {
    Thread.sleep(60000);
    consumer.resume(Collections.singleton(topicPartition));
    System.out.println("Consumer[" + Thread.currentThread().getName() + "] Partition [" + topicPartition + "] resumed consumption.");
} catch (InterruptedException e) {
    e.printStackTrace();
}
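The `Thread.sleep(60000)` above blocks the poll loop itself, so `max.poll.interval.ms` can still expire and trigger exactly the rebalance seen in the logs below: `pause()` only avoids a rebalance if `poll()` keeps being called while the partition is paused (the client then simply returns no records for it). A sketch of that pattern against a plain `KafkaConsumer`, mirroring the 60-second window above; this illustrates the polling contract, it is not the Spring listener API:

```java
import java.time.Duration;
import java.util.Collections;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.TopicPartition;

public class PauseKeepPolling {

    // Pause one partition but keep the poll loop alive. Each poll() resets the
    // max.poll.interval.ms timer, so the group coordinator does not consider
    // this member failed while the partition is paused.
    static void pauseFor(Consumer<String, String> consumer,
                         TopicPartition topicPartition,
                         long pauseMillis) {
        consumer.pause(Collections.singleton(topicPartition));
        long resumeAt = System.currentTimeMillis() + pauseMillis;
        while (System.currentTimeMillis() < resumeAt) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            // records is empty for the paused partition; if other, unpaused
            // partitions are assigned, their records arrive here and must
            // still be handled (or those partitions paused as well).
        }
        consumer.resume(Collections.singleton(topicPartition));
    }
}
```

Newer spring-kafka versions also expose `pause()`/`resume()` on the listener container itself, which keep the underlying consumer polling for you instead of requiring this manual loop.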
Logs:
2019-02-19 15:19:49.173 INFO 82272 --- [rTaskExecutor-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=customer] (Re-)joining group
2019-02-19 15:19:49.175 INFO 82272 --- [rTaskExecutor-2] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-3, groupId=customer] (Re-)joining group
2019-02-19 15:19:49.181 INFO 82272 --- [rTaskExecutor-3] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-4, groupId=customer] (Re-)joining group
2019-02-19 15:19:49.192 INFO 82272 --- [rTaskExecutor-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=customer] Successfully joined group with generation 581
2019-02-19 15:19:49.192 INFO 82272 --- [rTaskExecutor-2] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-3, groupId=customer] Successfully joined group with generation 581
2019-02-19 15:19:49.194 INFO 82272 --- [rTaskExecutor-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-2, groupId=customer] Setting newly assigned partitions [spring-kafka-topic-2, spring-kafka-topic-0, spring-kafka-topic-1]
2019-02-19 15:19:49.194 INFO 82272 --- [rTaskExecutor-2] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-3, groupId=customer] Setting newly assigned partitions [spring-kafka-topic-4, spring-kafka-topic-5, spring-kafka-topic-3]
2019-02-19 15:19:49.218 INFO 82272 --- [rTaskExecutor-2] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [spring-kafka-topic-4, spring-kafka-topic-5, spring-kafka-topic-3]
2019-02-19 15:19:49.219 INFO 82272 --- [rTaskExecutor-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [spring-kafka-topic-2, spring-kafka-topic-0, spring-kafka-topic-1]
2019-02-19 15:19:49.223 INFO 82272 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2019-02-19 15:19:49.233 INFO 82272 --- [ main] c.g.s.S.SpringKafkaSupportApplication : Started SpringKafkaSupportApplication in 3.43 seconds (JVM running for 3.85)
Consumer[customerTaskExecutor-1] received message[Customer(name=, phoneNumber=20)]
Consumer[customerTaskExecutor-2] received message[Customer(name=test 6, phoneNumber=6)]
Consumer[customerTaskExecutor-1] Partition [spring-kafka-topic-2] stopped consumption.
Consumer[customerTaskExecutor-1] Partition [spring-kafka-topic-1] stopped consumption.
2019-02-19 15:19:52.200 INFO 82272 --- [rTaskExecutor-2] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-3, groupId=customer] Attempt to heartbeat failed since group is rebalancing
2019-02-19 15:19:52.200 INFO 82272 --- [rTaskExecutor-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=customer] Attempt to heartbeat failed since group is rebalancing
2019-02-19 15:19:52.200 INFO 82272 --- [rTaskExecutor-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-2, groupId=customer] Revoking previously assigned partitions [spring-kafka-topic-2, spring-kafka-topic-0, spring-kafka-topic-1]
2019-02-19 15:19:52.200 INFO 82272 --- [rTaskExecutor-2] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-3, groupId=customer] Revoking previously assigned partitions [spring-kafka-topic-4, spring-kafka-topic-5, spring-kafka-topic-3]
2019-02-19 15:19:52.200 INFO 82272 --- [rTaskExecutor-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked: [spring-kafka-topic-2, spring-kafka-topic-0, spring-kafka-topic-1]
2019-02-19 15:19:52.200 INFO 82272 --- [rTaskExecutor-2] o.s.k.l.KafkaMessageListenerContainer : partitions revoked: [spring-kafka-topic-4, spring-kafka-topic-5, spring-kafka-topic-3]
2019-02-19 15:19:52.200 INFO 82272 --- [rTaskExecutor-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=customer] (Re-)joining group
2019-02-19 15:19:52.200 INFO 82272 --- [rTaskExecutor-2] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-3, groupId=customer] (Re-)joining group
2019-02-19 15:19:52.209 INFO 82272 --- [rTaskExecutor-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=customer] Successfully joined group with generation 582
2019-02-19 15:19:52.209 INFO 82272 --- [rTaskExecutor-2] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-3, groupId=customer] Successfully joined group with generation 582
2019-02-19 15:19:52.209 INFO 82272 --- [rTaskExecutor-3] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-4, groupId=customer] Successfully joined group with generation 582
2019-02-19 15:19:52.209 INFO 82272 --- [rTaskExecutor-3] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-4, groupId=customer] Setting newly assigned partitions [spring-kafka-topic-4, spring-kafka-topic-5]
2019-02-19 15:19:52.210 INFO 82272 --- [rTaskExecutor-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-2, groupId=customer] Setting newly assigned partitions [spring-kafka-topic-0, spring-kafka-topic-1]
2019-02-19 15:19:52.210 INFO 82272 --- [rTaskExecutor-2] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-3, groupId=customer] Setting newly assigned partitions [spring-kafka-topic-2, spring-kafka-topic-3]
2019-02-19 15:19:52.211 INFO 82272 --- [rTaskExecutor-3] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [spring-kafka-topic-4, spring-kafka-topic-5]
2019-02-19 15:19:52.212 INFO 82272 --- [rTaskExecutor-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [spring-kafka-topic-0, spring-kafka-topic-1]
2019-02-19 15:19:52.212 INFO 82272 --- [rTaskExecutor-2] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [spring-kafka-topic-2, spring-kafka-topic-3]
Consumer[customerTaskExecutor-3] received message[Customer(name=test 6, phoneNumber=6)]
Read the Kafka documentation.
Pausing the consumer simply means that subsequent poll()s will return no records until you call resume(), but you still have to call poll() within max.poll.interval.ms in order to prevent a rebalance.
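In other words, the fix is to keep calling poll() while the partitions are paused instead of sleeping on the polling thread. A minimal sketch of that pattern is below; the PausableConsumer interface is a hypothetical stand-in for the few KafkaConsumer methods used (pause, resume, poll), so the sketch compiles without the kafka-clients dependency, and the elapsed-time check is factored into a small helper:

```java
import java.time.Duration;
import java.util.Collection;

public class PauseWithoutRebalance {

    // Stand-in for the handful of KafkaConsumer methods used below;
    // with the real client, substitute org.apache.kafka.clients.consumer.Consumer.
    interface PausableConsumer {
        void pause(Collection<String> partitions);
        void resume(Collection<String> partitions);
        void poll(Duration timeout); // the real poll() returns ConsumerRecords
    }

    // Pure helper: has the pause window elapsed? (testable without a broker)
    static boolean shouldResume(long pausedAtMs, long nowMs, long pauseDurationMs) {
        return nowMs - pausedAtMs >= pauseDurationMs;
    }

    // Instead of Thread.sleep(60000), keep polling: paused partitions return
    // no records, but each poll() keeps the consumer inside the
    // max.poll.interval.ms contract, so no rebalance is triggered.
    static void pausedPollLoop(PausableConsumer consumer,
                               Collection<String> partitions,
                               long pauseDurationMs) {
        consumer.pause(partitions);
        long pausedAt = System.currentTimeMillis();
        while (!shouldResume(pausedAt, System.currentTimeMillis(), pauseDurationMs)) {
            consumer.poll(Duration.ofMillis(500)); // returns empty batches while paused
        }
        consumer.resume(partitions);
    }
}
```

With this shape, the 60-second "pause" from the question no longer starves poll(), which is exactly what the log excerpt shows happening (heartbeat failures followed by partition revocation about three seconds after the sleep begins).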
I just had the same "group" of messages with consumers and Spring Kafka. I got the same results with @KafkaListener and with a non-annotated Spring ConcurrentMessageListenerContainer; adjusting the parameters did not behave exactly the same as in plain Java.
I re-wrote it in plain Java using consumer.poll(), started the threads with an ExecutorService, and adjusted the parameters per Gary Russell's advice. Everything now works properly: I no longer see these messages or lost heartbeats during rebalancing. Plain Java examples are on the Cloudurable and Confluent websites:
http://cloudurable.com/blog/kafka-tutorial-kafka-consumer/index.html
https://docs.confluent.io/current/clients/consumer.html#
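The threading arrangement described above (one poll loop per thread, launched via an ExecutorService) can be sketched as follows. This is an assumption about the answerer's setup, not their actual code: the Runnable stands in for a per-thread poll loop, and in the real rewrite each Runnable would create its own KafkaConsumer (consumers are not thread-safe) and loop over poll():

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConsumerThreads {

    // Launch one poll loop per thread. Each Runnable is expected to own its
    // own KafkaConsumer instance, since KafkaConsumer is not thread-safe.
    static void runConsumers(int threads, Runnable pollLoop) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.submit(pollLoop);
        }
        pool.shutdown();
        try {
            // In a real service this would await application shutdown instead.
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```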