StreamsException: No valid committed offset found for input topic - apache-kafka

I have a two-node Kafka cluster with replication factor two. My Kafka version is 0.10.2.0.
Previously my application used the 0.11.0.0 Kafka Streams API. I changed it to Kafka Streams 0.11.0.1 to fix the 'CommitFailedException' exceptions the Kafka Streams application was throwing, as suggested at https://issues.apache.org/jira/browse/KAFKA-5786 and https://issues.apache.org/jira/browse/KAFKA-5152
Following are ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [1.1.1.1:9092, 1.1.1.2:9092]
check.crcs = true
client.id = sample-app-0.0.1-27c65ef7-d07f-4619-96be-852f5772d73d-StreamThread-1-consumer
connections.max.idle.ms = 540000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = sample-app-0.0.1
heartbeat.interval.ms = 3000
interceptor.classes = null
internal.leave.group.on.close = false
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 2147483647
max.poll.records = 1000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [org.apache.kafka.streams.processor.internals.StreamPartitionAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
But one of my applications is throwing the below exception after running for some time, even though previously committed offsets are available. What would be the possible reason? Please explain.
2017-12-04 12:03:25,410 ERROR c.e.s.c.f.k.s.r.f.SampleStreamsApp [sample-app-0.0.1-096850bc-5f18-4c59-b0c7-63ad00e08aa1-StreamThread-1] No valid committed offset found for input topic sample (partition 0) and no valid reset policy configured. You need to set configuration parameter "auto.offset.reset" or specify a topic specific reset policy via KStreamBuilder#stream(StreamsConfig.AutoOffsetReset offsetReset, ...) or KStreamBuilder#table(StreamsConfig.AutoOffsetReset offsetReset, ...)
org.apache.kafka.streams.errors.StreamsException: No valid committed offset found for input topic sample (partition 0) and no valid reset policy configured. You need to set configuration parameter "auto.offset.reset" or specify a topic specific reset policy via KStreamBuilder#stream(StreamsConfig.AutoOffsetReset offsetReset, ...) or KStreamBuilder#table(StreamsConfig.AutoOffsetReset offsetReset, ...)
at org.apache.kafka.streams.processor.internals.StreamThread.resetInvalidOffsets(StreamThread.java:567)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:538)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
Caused by: org.apache.kafka.clients.consumer.NoOffsetForPartitionException: Undefined offset with no reset policy for partitions: [sample-0]
at org.apache.kafka.clients.consumer.internals.Fetcher.resetOffsets(Fetcher.java:425)
at org.apache.kafka.clients.consumer.internals.Fetcher.resetOffsetsIfNeeded(Fetcher.java:254)
at org.apache.kafka.clients.consumer.KafkaConsumer.updateFetchPositions(KafkaConsumer.java:1640)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1083)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
... 3 more
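For reference, a minimal sketch of the two options the error message mentions, assuming the 0.11.x Streams API (the enum location org.apache.kafka.streams.processor.TopologyBuilder.AutoOffsetReset is my assumption for that version; topic and servers are taken from the question):
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.processor.TopologyBuilder;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "sample-app-0.0.1");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "1.1.1.1:9092,1.1.1.2:9092");
// Option 1: global fallback applied whenever no committed offset exists for a partition
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

KStreamBuilder builder = new KStreamBuilder();
// Option 2: topic-specific reset policy, overriding the global setting for this topic only
KStream<byte[], byte[]> stream = builder.stream(TopologyBuilder.AutoOffsetReset.EARLIEST, "sample");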

Related

akka kafka consumer poll speed backpressure

The project uses akka-stream-kafka, and one process consumes several Kafka topics. When I receive a message from Kafka, I run the pre-processing business logic and update Couchbase. Even when the Kafka producer starts generating data very quickly, I want the consumer to control its consumption speed. When registering the Kafka consumer I specified options such as "max.poll.records" and "fetch.max.bytes", but CPU usage is abnormally high because the consumption speed is still too fast. Is there a way to control the consumption speed?
// consumerSettings and processFn are the application's ConsumerSettings and per-record processing function
Consumer.plainSource(consumerSettings, Subscriptions.topics("topicName"))
        .mapAsync(parallelism, processFn)
        .delay(Duration.ofNanos(1L), DelayOverflowStrategy.backpressure());
I had no choice but to adjust the consumption speed by adding the delay as above, but it is too slow. Please help me.
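A rate-limiting alternative, only as a rough sketch (consumerSettings, processFn and the 100 records/second figure are placeholders, and this assumes the Akka 2.6 Java DSL), is the built-in throttle stage, which backpressures the Kafka source instead of delaying each element:
import java.time.Duration;
import akka.kafka.Subscriptions;
import akka.kafka.javadsl.Consumer;
import akka.stream.ThrottleMode;

Consumer.plainSource(consumerSettings, Subscriptions.topics("topicName"))
        // allow at most 100 records per second; excess demand is backpressured to the Kafka source
        .throttle(100, Duration.ofSeconds(1), 100, ThrottleMode.shaping())
        .mapAsync(parallelism, processFn);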
2022:04:28 10:19:49.566 INFO --- [akka.actor.consume-dispatcher-25] o.a.k.clients.consumer.ConsumerConfig[logAll:347] : ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [127.0.0.1:9092]
check.crcs = true
client.dns.lookup = default
client.id =
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 10485760
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = group1
group.instance.id = null
heartbeat.interval.ms = 5000
interceptor.classes = []
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 100
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
This is my consumer config.

Timeout expired while fetching topic metadata - Kafka

We are trying to consume messages from a Kafka broker hosted standalone on Windows.
The consumer is running in Kubernetes.
server.properties:
listeners=PLAINTEXT://:29092
advertised.listeners=PLAINTEXT://myhostname:29092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
Consumer Error:
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1627883198273
[main] INFO XXX.XXX.KafkaConsumerProperties - Kafka Topic Name : table-update
[main] INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-GroupConsumer-1, groupId=GroupConsumer] Subscribed to topic(s): table-update
[main] INFO XX.XX.XXXXX- Could not run Loader: Timeout expired while fetching topic metadata .
Consumer config values :
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [myhostname:29092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = consumer-GroupConsumer-1
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = GroupConsumer
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 52428800
max.poll.interval.ms = 2147483647
max.poll.records = 1000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 120000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 60000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = XXXX
Please help me in resolving this issue.
While fetching the metadata for topics there are several possible error conditions, such as an invalid topic name, no leader available, offline partitions, etc. You can get more information about the errors by logging at debug level; the debug level logs the error condition:
log.debug("Topic metadata fetch included errors: {}", errors);
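For example, assuming a log4j.properties based setup (adjust accordingly for logback or log4j2), client-side debug logging can be enabled with:
log4j.logger.org.apache.kafka.clients=DEBUG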

Kafka timeout exception even though the linger.ms is 0

In the Spring + Kafka application, batch.size is the default value 16384, and linger.ms=0. So I think that when a message is created, it will immediately be sent to the broker, because linger is 0. But while running, it still gets the exception sometimes.
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for cn-mcd-pos-promotion-3: 30033 ms has passed since batch creation plus linger time
Most of the time, the producer is OK. Sometimes it gets this exception. What is weird is that all the exceptions are related to cn-mcd-pos-promotion-3.
What may cause this timeout even though linger.ms is 0?
What does the -3 mean in cn-mcd-pos-promotion-3?
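For what it's worth, a minimal sketch of a send callback that shows where such failures surface (producer, key and value are placeholders, not the actual application code); the trailing -3 in cn-mcd-pos-promotion-3 is the partition number of the topic:
import org.apache.kafka.clients.producer.ProducerRecord;

producer.send(new ProducerRecord<>("cn-mcd-pos-promotion", key, value),
        (metadata, exception) -> {
            if (exception != null) {
                // expired batches are reported here (and via the returned Future) as
                // org.apache.kafka.common.errors.TimeoutException
                exception.printStackTrace();
            } else {
                // metadata.partition() is the number that appears as the "-3" suffix
                System.out.printf("sent to %s-%d at offset %d%n",
                        metadata.topic(), metadata.partition(), metadata.offset());
            }
        });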
PS: the producer config:
bootstrap.servers = [wqdcsrv3078.cn.infra:9093, wqdcsrv3079.cn.infra:9093, wqdcsrv3080.cn.infra:9095]
buffer.memory = 33554432
client.id = mcd.client
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.jaas.config = [hidden]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = SCRAM-SHA-256
security.protocol = SASL_PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.springframework.kafka.support.serializer.JsonSerializer
Could anyone help? Thanks.

@KafkaListener not recovering after DisconnectException

I have a Kafka consumer (Spring Boot) configured using @KafkaListener. This was running in production and all was good until, as part of maintenance, the brokers were restarted. Per the docs, I was expecting that the Kafka listener would recover once the brokers were back up. However, this is not what I observed from the logs. The logs stopped with the following exception:
2020-04-22 11:11:28,802|INFO|automator-consumer-app-id-0-C-1|org.apache.kafka.clients.FetchSessionHandler|[Consumer clientId=automator-consumer-app-id-0, groupId=automator-consumer-app-id] Node 10 was unable to process the fetch request with (sessionId=2138208872, epoch=348): FETCH_SESSION_ID_NOT_FOUND.
2020-04-22 11:24:23,798|INFO|automator-consumer-app-id-0-C-1|org.apache.kafka.clients.FetchSessionHandler|[Consumer clientId=automator-consumer-app-id-0, groupId=automator-consumer-app-id] Error sending fetch request (sessionId=499459047, epoch=314160) to node 7: org.apache.kafka.common.errors.DisconnectException.
2020-04-22 11:36:37,241|INFO|automator-consumer-app-id-0-C 1|org.apache.kafka.clients.FetchSessionHandler|[Consumer clientId=automator-consumer-app-id-0, groupId=automator-consumer-app-id] Error sending fetch request (sessionId=2033512553, epoch=342949) to node 4: org.apache.kafka.common.errors.DisconnectException.
Once the application was restarted, connectivity was reestablished. I was wondering if this could be related to any of the consumer configuration below.
2020-04-22 12:46:59,681|INFO|main|org.apache.kafka.clients.consumer.ConsumerConfig|ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [msk-e00-br1.int.bell.ca:9092]
check.crcs = true
client.dns.lookup = default
client.id = automator-consumer-app-id-0
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = automator-consumer-app-id
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
Increase the value of max.incremental.fetch.session.cache.slots. The default value is 1K. You can refer to the answer here: How to check the actual number of incremental fetch session cache slots used in Kafka cluster?
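On the broker side that would be a server.properties entry along these lines (2000 is only an illustrative value, not a recommendation):
# default is 1000; size it to the number of consumers/fetch sessions in the cluster
max.incremental.fetch.session.cache.slots=2000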
Basically, your application is continuously listening for messages from the topic; if no message is published to the topic for a while, you will get this type of exception.
org.apache.kafka.common.errors.DisconnectException: null
DisconnectException class
If we start sending messages to the topic, the application will start running and consume those messages.
Here you need to increase the request timeout in your properties file:
consumer.request.timeout.ms:

Kafka consumer process order with concurrency

I have a producer writing to a Kafka topic with 100 partitions, and it chooses the partition by user ID, so a user's messages are necessarily processed in the order they were submitted to the queue.
The service responsible for consuming has 2-10 instances, and each one has the following in its configuration:
spring.cloud.stream.bindings.input.consumer.concurrency=10
spring.cloud.stream.bindings.input.consumer.partitioned=true
I recently noticed that although the consumer starts to process the partition's messages in order, sometimes one message finishes before the one that preceded it, because it is easier to process.
It's important to me to maintain the current processing rate of the service, and because I'm not familiar with the threading model of Spring Cloud Stream I wanted to consult and ask for others' knowledge. What is the best way to ensure that one user's message is processed only after the previous ones are done?
--EDIT--
As requested, more relevant params.
Kafka version: 0.10.2.1
spring-cloud-stream version: 1.1.0.RELEASE
binder params:
spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset=false
spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOnError=true
spring.cloud.stream.kafka.bindings.input.consumer.enableDlq=true
consumer configuration as printed to the console:
2018-12-11 09:56:51,975 [RMI TCP Connection(6)-127.0.0.1] INFO [AbstractConfig::logAll] - ConsumerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
max.partition.fetch.bytes = 1048576
bootstrap.servers = [localhost:9092]
ssl.keystore.type = JKS
enable.auto.commit = true
sasl.mechanism = GSSAPI
interceptor.classes = null
exclude.internal.topics = true
ssl.truststore.password = null
client.id = consumer-11
ssl.endpoint.identification.algorithm = null
max.poll.records = 2147483647
check.crcs = true
request.timeout.ms = 40000
heartbeat.interval.ms = 3000
auto.commit.interval.ms = 5000
receive.buffer.bytes = 65536
ssl.truststore.type = JKS
ssl.truststore.location = null
ssl.keystore.password = null
fetch.min.bytes = 1
send.buffer.bytes = 131072
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
group.id =
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
ssl.key.password = null
fetch.max.wait.ms = 500
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
session.timeout.ms = 30000
metrics.num.samples = 2
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXT
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
auto.offset.reset = latest
producer config as printed to the console:
2018-12-11 09:56:52,439 [-kafka-listener-1] INFO [AbstractConfig::logAll] - ProducerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [localhost:9092]
ssl.keystore.type = JKS
sasl.mechanism = GSSAPI
max.block.ms = 60000
interceptor.classes = null
ssl.truststore.password = null
client.id = producer-5
ssl.endpoint.identification.algorithm = null
request.timeout.ms = 30000
acks = 1
receive.buffer.bytes = 32768
ssl.truststore.type = JKS
retries = 0
ssl.truststore.location = null
ssl.keystore.password = null
send.buffer.bytes = 131072
compression.type = none
metadata.fetch.timeout.ms = 60000
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
batch.size = 16384
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXT
max.request.size = 1048576
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
linger.ms = 0
The partitions are distributed across the container threads.
If the container concurrency is 10 and you have 20 partitions, each consumer (thread) will normally be assigned 2 partitions.
This guarantees delivery order within a partition.
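As a rough sketch with the numbers from the question (100 partitions, up to 10 instances, concurrency=10), the existing binding already yields roughly one partition per listener thread, so records within a partition, and therefore for a given user ID, are handled sequentially on that thread:
# 10 instances x concurrency 10 = 100 listener threads for 100 partitions -> ~1 partition each;
# with only 2 instances it is ~5 partitions per thread, still processed in order within each partition
spring.cloud.stream.bindings.input.consumer.concurrency=10
spring.cloud.stream.bindings.input.consumer.partitioned=true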