How to optimize EmbeddedKafka and Mongo logs in Spring Boot

How can I properly keep only the relevant logs when using MongoDB and Kafka in a Spring Boot application?
2022-08-02 11:14:58.148 INFO 363923 --- [ main] kafka.server.KafkaConfig : KafkaConfig values:
advertised.listeners = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.heartbeat.interval.ms = 2000
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
broker.session.timeout.ms = 9000
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
...
2022-08-02 11:15:11.005 INFO 363923 --- [er-event-thread] state.change.logger : [Controller id=0 epoch=1] Changed partition test_cfr_prv_customeragreement_event_disbursement_ini-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isr=List(0), zkVersion=0)
2022-08-02 11:15:11.005 INFO 363923 --- [er-event-thread] state.change.logger : [Controller id=0 epoch=1] Changed partition test_cfr_prv_customeragreement_event_receipt_ini-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isr=List(0), zkVersion=0)
2022-08-02 11:15:11.017 INFO 363923 --- [er-event-thread] state.change.logger : [Controller id=0 epoch=1] Sending LeaderAndIsr request to broker 0 with 2 become-leader and 0 become-follower partitions
2022-08-02 11:15:11.024 INFO 363923 --- [er-event-thread] state.change.logger : [Controller id=0 epoch=1] Sending UpdateMetadata request to brokers HashSet(0) for 2 partitions
2022-08-02 11:15:11.026 INFO 363923 --- [er-event-thread] state.change.logger : [Controller id=0 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions
2022-08-02 11:15:11.028 INFO 363923 --- [quest-handler-0] state.change.logger : [Broker id=0] Handling LeaderAndIsr request correlationId 1 from controller 0 for 2 partitions
Example of undesired logs:
2022-08-02 11:15:04.578 INFO 363923 --- [ Thread-3] o.s.b.a.mongo.embedded.EmbeddedMongo : {"t":{"$date":"2022-08-02T11:15:04.578+02:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"20.04"}}}
2022-08-02 11:15:04.579 INFO 363923 --- [ Thread-3] o.s.b.a.mongo.embedded.EmbeddedMongo : {"t":{"$date":"2022-08-02T11:15:04.578+02:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"127.0.0.1","port":34085},"replication":{"oplogSizeMB":10,"replSet":"rs0"},"security":{"authorization":"disabled"},"storage":{"dbPath":"/tmp/embedmongo-db-66eab1ce-d099-40ec-96fb-f759ef3808a4","syncPeriodSecs":0}}}}
2022-08-02 11:15:04.585 INFO 363923 --- [ Thread-3] o.s.b.a.mongo.embedded.EmbeddedMongo : {"t":{"$date":"2022-08-02T11:15:04.585+02:00"},"s":"I", "c":"STORAGE", "id":22297, "ctx":"initandlisten","msg":"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem","tags":["startupWarnings"]}
Please find here a link to a sample project: github.com/smaillns/springboot-mongo-kafka
If we run a test we'll get a bunch of logs! What's wrong with the current configuration?
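One way to cut this noise (a sketch only, not taken from the sample project; the logger names come from the output shown above) is to raise the log level of the chatty categories in the application.yml used by the tests:
logging:
  level:
    kafka: WARN
    org.apache.kafka: WARN
    state.change.logger: WARN
    org.springframework.boot.autoconfigure.mongo.embedded.EmbeddedMongo: WARN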

Related

Kafka consumer message commit issue

Kafka newbie.
Kafka version: 2.3.1
I am trying to consume Kafka messages from two topics using Spring Cloud Stream. I have not done much configuration apart from the Kafka binder and some simple settings like the ones below. Whenever "Group coordinator lbbb111a.uat.pncint.net:9092 (id: 2147483641 rack: null) is unavailable or invalid, will attempt rediscovery" happens, a bunch of messages that have already been processed get processed again. I am not sure what is happening.
spring.cloud.stream.kafka.binder.brokers: xxxxx:9094
spring:
  cloud:
    stream:
      default:
        group: bbb-bl-kyc
      bindings:
        input:
          destination: bbb.core.sar.blul.events,bbb.core.sar.bluloc.events
          contentType: application/json
          consumer:
            headerMode: embeddedHeaders
spring.kafka.consumer.properties.spring.json.trusted.packages: "*"
spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms: 1000
# Custom serializer configurations to secure data
spring.cloud.stream.kafka.binder.configuration:
  key.serializer: org.apache.kafka.common.serialization.StringSerializer
  value.serializer: pnc.aop.core.kafka.serialization.MessageSecuredByteArraySerializer
  value.deserializer: pnc.aop.core.kafka.serialization.MessageSecuredByteArrayDeserializer
  key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
2020-05-29 07:01:11.389 INFO 1 --- [container-0-C-1] p.a.b.k.service.KYCOrchestrationService : Done with Customer xxxx MS call response handling Confm Id: 159073553171893 Appln Id: HSUKQJDJNZNMWVZZ
2020-05-29 07:01:11.393 INFO 1 --- [container-0-C-1] p.a.b.kyc.service.DMSIntegrationService : Message written to the DMS topic successfully 159073553171893
2020-05-29 07:01:11.394 INFO 1 --- [container-0-C-1] p.a.b.k.s.AdminConsoleProducerService : Message written to Admin console Application Log topic successfully Confm Id: 159073553171893 Appln Id: HSUKQJDJNZNMWVZZ
2020-05-30 17:21:13.140 INFO 1 --- [ad | bbb-bl-kyc] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-4, groupId=bbb-bl-kyc] Group coordinator lbbb111a.uat.pncint.net:9092 (id: 2147483641 rack: null) is unavailable or invalid, will attempt rediscovery
2020-05-30 17:21:13.122 INFO 1 --- [ad | bbb-bl-kyc] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=bbb-bl-kyc] Group coordinator lbbb111a.uat.pncint.net:9092 (id: 2147483641 rack: null) is unavailable or invalid, will attempt rediscovery
2020-05-30 17:21:14.522 INFO 1 --- [ad | bbb-bl-kyc] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=bbb-bl-kyc] Discovered group coordinator lbbb111a.uat.pncint.net:9092 (id: 2147483641 rack: null)
2020-05-30 17:21:14.692 INFO 1 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-4, groupId=bbb-bl-kyc] Discovered group coordinator lbbb111a.uat.pncint.net:9092 (id: 2147483641 rack: null)
2020-05-30 17:21:15.151 INFO 1 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-4, groupId=bbb-bl-kyc] Attempt to heartbeat failed for since member id consumer-4-f5a03efd-75cd-425b-94e1-efd3d728d7ca is not valid.
2020-05-30 17:21:15.152 INFO 1 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-4, groupId=bbb-bl-kyc] Revoking previously assigned partitions [bbb.core.sar.bluloc.events-0]
2020-05-30 17:21:15.173 INFO 1 --- [container-0-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder$1 : bbb-bl-kyc: partitions revoked: [bbb.core.sar.bluloc.events-0]
2020-05-30 17:21:15.141 INFO 1 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=bbb-bl-kyc] Attempt to heartbeat failed for since member id consumer-2-52012bae-1b22-4211-b107-803fb3765720 is not valid.
2020-05-30 17:21:15.175 INFO 1 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-4, groupId=bbb-bl-kyc] (Re-)joining group
2020-05-30 17:21:15.176 INFO 1 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-2, groupId=bbb-bl-kyc] Revoking previously assigned partitions [bbb.core.sar.blul.events-0]
2020-05-30 17:21:15.184 INFO 1 --- [container-0-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder$1 : bbb-bl-kyc: partitions revoked: [bbb.core.sar.blul.events-0]
2020-05-30 17:21:15.184 INFO 1 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=bbb-bl-kyc] (Re-)joining group
2020-05-30 17:21:18.200 INFO 1 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-4, groupId=bbb-bl-kyc] Successfully joined group with generation 66
2020-05-30 17:21:18.200 INFO 1 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=bbb-bl-kyc] Successfully joined group with generation 66
2020-05-30 17:21:18.200 INFO 1 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-4, groupId=bbb-bl-kyc] Setting newly assigned partitions: bbb.core.sar.bluloc.events-0
2020-05-30 17:21:18.200 INFO 1 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-2, groupId=bbb-bl-kyc] Setting newly assigned partitions: bbb.core.sar.blul.events-0
2020-05-30 17:21:18.203 INFO 1 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-2, groupId=bbb-bl-kyc] Found no committed offset for partition bbb.core.sar.blul.events-0
2020-05-30 17:21:18.203 INFO 1 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-4, groupId=bbb-bl-kyc] Found no committed offset for partition bbb.core.sar.bluloc.events-0
2020-05-30 17:21:18.537 INFO 1 --- [container-0-C-1] o.a.k.c.c.internals.SubscriptionState : [Consumer clientId=consumer-2, groupId=bbb-bl-kyc] Resetting offset for partition bbb.core.sar.blul.events-0 to offset 4.
2020-05-30 17:21:18.538 INFO 1 --- [container-0-C-1] o.a.k.c.c.internals.SubscriptionState : [Consumer clientId=consumer-4, groupId=bbb-bl-kyc] Resetting offset for partition bbb.core.sar.bluloc.events-0 to offset 0.
2020-05-30 17:21:18.621 INFO 1 --- [container-0-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder$1 : bbb-bl-kyc: partitions assigned: [bbb.core.sar.blul.events-0]
2020-05-30 17:21:18.625 INFO 1 --- [container-0-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder$1 : bbb-bl-kyc: partitions assigned: [bbb.core.sar.bluloc.events-0]
2020-05-30 17:21:18.822 INFO 1 --- [container-0-C-1] p.a.b.k.stream.KYCbbbCoreEventsListener : Initiating KYC Orchestration 159071814927374
2020-05-30 17:21:18.826 INFO 1 --- [container-0-C-1] p.a.b.k.stream.KYCbbbCoreEventsListener : Initiating KYC Orchestration null
2020-05-30 17:21:18.928 INFO 1 --- [container-0-C-1] p.a.b.k.s.AdminConsoleProducerService : Message written to Admin console Application topic successfully Confm Id: null Appln Id: XQZ58K3H3XZADTAT
Without changing much of the consumer configuration, you will have at-least-once delivery semantics.
When the group coordinator is temporarily unavailable, your consumer cannot commit the offsets of the messages it has processed. After re-joining, the consumer will process the same messages again (as they were not committed yet), leading to duplicates.
You can find more details on the GroupCoordinator and delivery semantics here

Kafka consumer does not poll records intermittently

I have written a simple utility in Scala to read Kafka messages as byte arrays.
The utility works on one machine but not on the other. Both run the same OS (CentOS 7) and use the same Kafka server (which is on another machine altogether).
However, Kafka Tool (www.kafkatool.com) works on the machine where the utility does not, so it is unlikely to be an accessibility issue.
Following is the essence of the consumer code:
import java.io.{BufferedOutputStream, FileOutputStream}
import java.util.Properties
import scala.collection.JavaConverters._
import org.apache.kafka.clients.consumer.KafkaConsumer

val outputFile = "output.txt"
val topic = "test_topic"
val server = "localhost:9092"
val id = "record-tool"

val props = new Properties()
props.put("bootstrap.servers", server)
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer")
props.put("auto.offset.reset", "earliest")
props.put("enable.auto.commit", "false")
props.put("max.partition.fetch.bytes", "104857600")
props.put("group.id", id)

val bos = new BufferedOutputStream(new FileOutputStream(outputFile))
val consumer = new KafkaConsumer[String, Array[Byte]](props)
consumer.subscribe(Seq(topic).asJava)

// Keep polling until a poll returns no records, writing every record value to the file
Stream.continually(consumer.poll(5000).asScala.toList)
  .takeWhile(_.nonEmpty)
  .flatten
  .foreach(c => bos.write(c.value))

consumer.close()
bos.close()
I don't see any errors in the logs either; following is the debug log:
[root@vm util]# bin/record-tool consume --server=kafka-server:9092 --topic=test_topic --asBin --debug
16:44:02.548 [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
max.partition.fetch.bytes = 104857600
bootstrap.servers = [kafka-server:9092]
ssl.keystore.type = JKS
enable.auto.commit = false
sasl.mechanism = GSSAPI
interceptor.classes = null
exclude.internal.topics = true
ssl.truststore.password = null
client.id =
ssl.endpoint.identification.algorithm = null
max.poll.records = 2147483647
check.crcs = true
request.timeout.ms = 40000
heartbeat.interval.ms = 3000
auto.commit.interval.ms = 5000
receive.buffer.bytes = 65536
ssl.truststore.type = JKS
ssl.truststore.location = null
ssl.keystore.password = null
fetch.min.bytes = 1
send.buffer.bytes = 131072
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
group.id = record-tool
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
ssl.key.password = null
fetch.max.wait.ms = 500
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
session.timeout.ms = 30000
metrics.num.samples = 2
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXT
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
auto.offset.reset = earliest
16:44:02.550 [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Starting the Kafka consumer
16:44:02.621 [main] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 1 to Cluster(nodes = [kafka-server:9092 (id: -1 rack: null)], partitions = [])
16:44:02.632 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:
16:44:02.636 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:
16:44:02.637 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:
16:44:02.637 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:
16:44:02.638 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:
16:44:02.638 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:
16:44:02.639 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:
16:44:02.649 [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
max.partition.fetch.bytes = 104857600
bootstrap.servers = [kafka-server:9092]
ssl.keystore.type = JKS
enable.auto.commit = false
sasl.mechanism = GSSAPI
interceptor.classes = null
exclude.internal.topics = true
ssl.truststore.password = null
client.id = consumer-1
ssl.endpoint.identification.algorithm = null
max.poll.records = 2147483647
check.crcs = true
request.timeout.ms = 40000
heartbeat.interval.ms = 3000
auto.commit.interval.ms = 5000
receive.buffer.bytes = 65536
ssl.truststore.type = JKS
ssl.truststore.location = null
ssl.keystore.password = null
fetch.min.bytes = 1
send.buffer.bytes = 131072
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
group.id = record-tool
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
ssl.key.password = null
fetch.max.wait.ms = 500
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
session.timeout.ms = 30000
metrics.num.samples = 2
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXT
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
auto.offset.reset = earliest
16:44:02.657 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name heartbeat-latency
16:44:02.657 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name join-latency
16:44:02.657 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name sync-latency
16:44:02.659 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name commit-latency
16:44:02.663 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-fetched
16:44:02.664 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name records-fetched
16:44:02.664 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name fetch-latency
16:44:02.664 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name records-lag
16:44:02.664 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name fetch-throttle-time
16:44:02.666 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.10.0.1
16:44:02.666 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : a7a17cdec9eaa6c5
16:44:02.668 [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Kafka consumer created
16:44:02.680 [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Subscribed to topic(s): test_topic
16:44:02.681 [main] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending coordinator request for group record-tool to broker kafka-server:9092 (id: -1 rack: null)
16:44:02.695 [main] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node -1 at kafka-server:9092.
16:44:02.816 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-sent
16:44:02.817 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-received
16:44:02.818 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.latency
16:44:02.820 [main] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node -1
16:44:02.902 [main] DEBUG org.apache.kafka.clients.NetworkClient - Sending metadata request {topics=[test_topic]} to node -1
16:44:02.981 [main] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 2 to Cluster(nodes = [kafka-server.mydomain.com:9092 (id: 0 rack: null)], partitions = [Partition(topic = test_topic, partition = 0, leader = 0, replicas = [0,], isr = [0,]])
16:44:02.982 [main] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received group coordinator response ClientResponse(receivedTimeMs=1583225042982, disconnected=false, request=ClientRequest(expectResponse=true, callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler#434a63ab, request=RequestSend(header={api_key=10,api_version=0,correlation_id=0,client_id=consumer-1}, body={group_id=record-tool}), createdTimeMs=1583225042692, sendTimeMs=1583225042904), responseBody={error_code=0,coordinator={node_id=0,host=kafka-server.mydomain.com,port=9092}})
16:44:02.983 [main] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Discovered coordinator kafka-server.mydomain.com:9092 (id: 2147483647 rack: null) for group record-tool.
16:44:02.983 [main] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node 2147483647 at kafka-server.mydomain.com:9092.
16:44:02.986 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Revoking previously assigned partitions [] for group record-tool
16:44:02.986 [main] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - (Re-)joining group record-tool
16:44:02.988 [main] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending JoinGroup ({group_id=record-tool,session_timeout=30000,member_id=,protocol_type=consumer,group_protocols=[{protocol_name=range,protocol_metadata=java.nio.HeapByteBuffer[pos=0 lim=25 cap=25]}]}) to coordinator kafka-server.mydomain.com:9092 (id: 2147483647 rack: null)
16:44:03.051 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-2147483647.bytes-sent
16:44:03.051 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-2147483647.bytes-received
16:44:03.052 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-2147483647.latency
16:44:03.052 [main] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node 2147483647
16:44:03.123 [main] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received successful join group response for group record-tool: {error_code=0,generation_id=3,group_protocol=range,leader_id=consumer-1-f82633ab-a06e-4474-8ddb-1ec096d6c7f2,member_id=consumer-1-f82633ab-a06e-4474-8ddb-1ec096d6c7f2,members=[{member_id=consumer-1-f82633ab-a06e-4474-8ddb-1ec096d6c7f2,member_metadata=java.nio.HeapByteBuffer[pos=0 lim=25 cap=25]}]}
16:44:03.123 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Performing assignment for group record-tool using strategy range with subscriptions {consumer-1-f82633ab-a06e-4474-8ddb-1ec096d6c7f2=Subscription(topics=[test_topic])}
16:44:03.124 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Finished assignment for group record-tool: {consumer-1-f82633ab-a06e-4474-8ddb-1ec096d6c7f2=Assignment(partitions=[test_topic-0])}
16:44:03.124 [main] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Sending leader SyncGroup for group record-tool to coordinator kafka-server.mydomain.com:9092 (id: 2147483647 rack: null): {group_id=record-tool,generation_id=3,member_id=consumer-1-f82633ab-a06e-4474-8ddb-1ec096d6c7f2,group_assignment=[{member_id=consumer-1-f82633ab-a06e-4474-8ddb-1ec096d6c7f2,member_assignment=java.nio.HeapByteBuffer[pos=0 lim=33 cap=33]}]}
16:44:03.198 [main] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Successfully joined group record-tool with generation 3
16:44:03.199 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Setting newly assigned partitions [test_topic-0] for group record-tool
16:44:03.200 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group record-tool fetching committed offsets for partitions: [test_topic-0]
16:44:03.268 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group record-tool has no committed offset for partition test_topic-0
16:44:03.269 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Resetting offset for partition test_topic-0 to earliest offset.
16:44:03.270 [main] DEBUG org.apache.kafka.clients.NetworkClient - Initiating connection to node 0 at kafka-server.mydomain.com:9092.
16:44:03.336 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-0.bytes-sent
16:44:03.337 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-0.bytes-received
16:44:03.337 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node-0.latency
16:44:03.338 [main] DEBUG org.apache.kafka.clients.NetworkClient - Completed connection to node 0
16:44:03.407 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - Fetched offset 0 for partition test_topic-0
16:44:06.288 [main] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Received successful heartbeat response for group record-tool
16:44:07.702 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name connections-closed:
16:44:07.702 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name connections-created:
16:44:07.702 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name bytes-sent-received:
16:44:07.702 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name bytes-sent:
16:44:07.703 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name bytes-received:
16:44:07.703 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name select-time:
16:44:07.704 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name io-time:
16:44:07.704 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name node--1.bytes-sent
16:44:07.705 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name node--1.bytes-received
16:44:07.705 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name node--1.latency
16:44:07.705 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name node-2147483647.bytes-sent
16:44:07.706 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name node-2147483647.bytes-received
16:44:07.706 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name node-2147483647.latency
16:44:07.706 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name node-0.bytes-sent
16:44:07.707 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name node-0.bytes-received
16:44:07.707 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name node-0.latency
16:44:07.707 [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - The Kafka consumer has closed.
What I noticed is that within takeWhile(_.nonEmpty) the list is empty.
Is there any mistake in the code? Thanks.

Flink application sink KafkaProducer is throwing java heap space error (outofmemory)

I have created a Flink app which takes a DataStream of strings and sinks it to Kafka. The DataStream is built from a simple collection of strings via fromCollection.
List<String> listOfStrings = new ArrayList<>();
listOfStrings.add("testkafka1");
listOfStrings.add("testkafka2");
listOfStrings.add("testkafka3");
listOfStrings.add("testkafka4");
DataStream<String> testStringStream = env.fromCollection(listOfStrings);
Flink runs on Kubernetes with parallelism 1 and 1 task manager. As soon as the Flink job starts, it fails with the following error.
ERROR org.apache.kafka.common.utils.KafkaThread - Uncaught exception in kafka-producer-network-thread | producer-1:
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:97)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:75)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:203)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:167)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:381)
at org.apache.kafka.common.network.Selector.poll(Selector.java:326)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:433)
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71)
at org.apache.kafka.clients.producer.internals.Sender.awaitLeastLoadedNodeReady(Sender.java:409)
at org.apache.kafka.clients.producer.internals.Sender.maybeSendTransactionalRequest(Sender.java:337)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:204)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:748)
The taskmanager config I have is (taken from the taskmanager logs):
Starting Task Manager
config file:
jobmanager.rpc.address: component-app-adb71002-tm-5c6f4d58bd-rtblz
jobmanager.rpc.port: 6123
jobmanager.heap.size: 1024m
taskmanager.heap.size: 1024m
taskmanager.numberOfTaskSlots: 2
parallelism.default: 1
jobmanager.execution.failover-strategy: region
blob.server.port: 6124
query.server.port: 6125
blob.server.port: 6125
fs.s3a.aws.credentials.provider: org.apache.flink.fs.s3base.shaded.com.amazonaws.auth.DefaultAWSCredentialsProviderChain
jobmanager.heap.size: 524288k
jobmanager.rpc.port: 6123
jobmanager.web.port: 8081
metrics.internal.query-service.port: 50101
metrics.reporter.dghttp.apikey: f52362263f032f2ebc3622cafc0171cd
metrics.reporter.dghttp.class: org.apache.flink.metrics.datadog.DatadogHttpReporter
metrics.reporter.dghttp.tags: componentingestion,dev
query.server.port: 6124
taskmanager.heap.size: 1048576k
taskmanager.numberOfTaskSlots: 1
web.upload.dir: /opt/flink
jobmanager.rpc.address: component-app-adb71002
taskmanager.host: 10.42.6.6
Starting taskexecutor as a console application on host component-app-adb71002-tm-5c6f4d58bd-rtblz.
2020-02-11 15:19:20,519 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner - --------------------------------------------------------------------------------
2020-02-11 15:19:20,520 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner - Starting TaskManager (Version: 1.9.2, Rev:c9d2c90, Date:24.01.2020 # 08:44:30 CST)
2020-02-11 15:19:20,520 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner - OS current user: flink
2020-02-11 15:19:20,520 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner - Current Hadoop/Kerberos user: <no hadoop dependency found>
2020-02-11 15:19:20,520 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner - JVM: OpenJDK 64-Bit Server VM - Oracle Corporation - 1.8/25.242-b08
2020-02-11 15:19:20,521 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner - Maximum heap size: 922 MiBytes
2020-02-11 15:19:20,521 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner - JAVA_HOME: /usr/local/openjdk-8
2020-02-11 15:19:20,521 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner - No Hadoop Dependency available
2020-02-11 15:19:20,521 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner - JVM Options:
2020-02-11 15:19:20,521 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner - -XX:+UseG1GC
2020-02-11 15:19:20,521 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner - -Xms922M
2020-02-11 15:19:20,521 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner - -Xmx922M
2020-02-11 15:19:20,521 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner - -XX:MaxDirectMemorySize=8388607T
2020-02-11 15:19:20,521 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner - -Dlog4j.configuration=file:/opt/flink/conf/log4j-console.properties
2020-02-11 15:19:20,522 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner - -Dlogback.configurationFile=file:/opt/flink/conf/logback-console.xml
2020-02-11 15:19:20,522 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner - Program Arguments:
2020-02-11 15:19:20,522 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner - --configDir
2020-02-11 15:19:20,522 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner - /opt/flink/conf
2020-02-11 15:19:20,522 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner - Classpath: /opt/flink/lib/flink-metrics-datadog-1.9.2.jar:/opt/flink/lib/flink-table-blink_2.12-1.9.2.jar:/opt/flink/lib/flink-table_2.12-1.9.2.jar:/opt/flink/lib/log4j-1.2.17.jar:/opt/flink/lib/slf4j-log4j12-1.7.15.jar:/opt/flink/lib/flink-dist_2.12-1.9.2.jar:::
The producer config that I have is:
acks = 1
batch.size = 16384
bootstrap.servers = [XXXXXXXXXXXXXXXX] ---I masked it intentionally
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 3
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = Source: Collection Source -> Sink: Unnamed-eb99017e0f9125fa6648bf56123bdcf7-19
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
Most of the producer config is default; is there something I am missing here or something wrong with the config?
As Dominik suggested, the issue is not related to the heap.
If the broker is set up with SSL authentication and the client is not set up for SSL auth, this exception is thrown.
This is a bug in Kafka:
https://issues.apache.org/jira/browse/KAFKA-4090
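A minimal sketch of the client-side SSL settings that would then need to be added to the producer configuration (the property names appear in the config dump above; the paths and passwords are placeholders):
security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=<truststore-password>
# Only needed if the broker also requires client (mutual) authentication:
ssl.keystore.location=/path/to/client.keystore.jks
ssl.keystore.password=<keystore-password>
ssl.key.password=<key-password>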

Spring Cloud Stream Kafka Stream application shows Resetting offset for partition event-x to offset 0 on every restart

I have a Spring Cloud Stream Kafka Streams application that reads from a topic (event) and performs some simple processing:
@Configuration
class EventKStreamConfiguration {

    private val logger = LoggerFactory.getLogger(javaClass)

    @StreamListener
    fun process(@Input("event") eventStream: KStream<String, EventReceived>) {
        eventStream.foreach { key, value ->
            logger.info("--------> Processing Event {}", value)
            // Save in DB
        }
    }
}
This application is using a Kafka environment from Confluent Cloud, with an event topic with 6 partitions. The full configuration is:
spring:
  application:
    name: events-processor
  cloud:
    stream:
      schema-registry-client:
        endpoint: ${schema-registry-url:http://localhost:8081}
      kafka:
        streams:
          binder:
            brokers: ${kafka-brokers:localhost}
            configuration:
              application:
                id: ${spring.application.name}
              default:
                key:
                  serde: org.apache.kafka.common.serialization.Serdes$StringSerde
              schema:
                registry:
                  url: ${spring.cloud.stream.schema-registry-client.endpoint}
              value:
                subject:
                  name:
                    strategy: io.confluent.kafka.serializers.subject.RecordNameStrategy
              processing:
                guarantee: exactly_once
          bindings:
            event:
              consumer:
                valueSerde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
      bindings:
        event:
          destination: event
  data:
    mongodb:
      uri: ${mongodb-uri:mongodb://localhost/test}

server:
  port: 8085

logging:
  level:
    org.springframework.kafka.config: debug

---
spring:
  profiles: confluent-cloud
  cloud:
    stream:
      kafka:
        streams:
          binder:
            autoCreateTopics: false
            configuration:
              retry:
                backoff:
                  ms: 500
              security:
                protocol: SASL_SSL
              sasl:
                mechanism: PLAIN
                jaas:
                  config: xxx
              basic:
                auth:
                  credentials:
                    source: USER_INFO
              schema:
                registry:
                  basic:
                    auth:
                      user:
                        info: yyy
Messages are being correctly processed by the KStream. If I restart the application they are not reprocessed. Note: I don’t want them to be reprocessed, so this behaviour is ok.
However the startup logs show some strange bits:
First it displays the creation of a restore consumer client, with auto.offset.reset = none:
2019-07-19 10:20:17.120 INFO 82473 --- [ main] o.a.k.s.p.internals.StreamThread : stream-thread [events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f-StreamThread-1] Creating restore consumer client
2019-07-19 10:20:17.123 INFO 82473 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = none
Then it creates a consumer client with auto.offset.reset = earliest:
2019-07-19 10:20:17.235 INFO 82473 --- [ main] o.a.k.s.p.internals.StreamThread : stream-thread [events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f-StreamThread-1] Creating consumer client
2019-07-19 10:20:17.241 INFO 82473 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
The final traces of the startup log show an offset reset to 0. This happens on every restart of the application:
2019-07-19 10:20:31.577 INFO 82473 --- [-StreamThread-1] o.a.k.s.p.internals.StreamThread : stream-thread [events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f-StreamThread-1] State transition from PARTITIONS_ASSIGNED to RUNNING
2019-07-19 10:20:31.578 INFO 82473 --- [-StreamThread-1] org.apache.kafka.streams.KafkaStreams : stream-client [events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f] State transition from REBALANCING to RUNNING
2019-07-19 10:20:31.669 INFO 82473 --- [events-processor] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f-StreamThread-1-consumer, groupId=events-processor] Resetting offset for partition event-3 to offset 0.
2019-07-19 10:20:31.669 INFO 82473 --- [events-processor] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f-StreamThread-1-consumer, groupId=events-processor] Resetting offset for partition event-0 to offset 0.
2019-07-19 10:20:31.669 INFO 82473 --- [events-processor] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f-StreamThread-1-consumer, groupId=events-processor] Resetting offset for partition event-1 to offset 0.
2019-07-19 10:20:31.669 INFO 82473 --- [events-processor] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f-StreamThread-1-consumer, groupId=events-processor] Resetting offset for partition event-5 to offset 0.
2019-07-19 10:20:31.670 INFO 82473 --- [events-processor] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f-StreamThread-1-consumer, groupId=events-processor] Resetting offset for partition event-4 to offset 0.
What's the reason why there are two consumers configured?
Why does the second one have auto.offset.reset = earliest when I haven't configured it explicitly and the Kafka default is latest?
I want the default (auto.offset.reset = latest) behaviour and it seems to be working fine. However, doesn't it contradict what I see in the logs?
UPDATE:
I would rephrase the third question like this: why do the logs show that the partitions are being reset to 0 on every restart, and why, despite that, are no messages redelivered to the KStream?
UPDATE 2:
I've simplified the scenario, this time with a native Kafka Streams application. The behaviour is exactly the same as observed with Spring Cloud Stream. However, inspecting the consumer-group and the partitions I've found it kind of makes sense.
KStream:
import java.util.Properties
import java.util.concurrent.CountDownLatch
import kotlin.system.exitProcess
import org.apache.kafka.common.serialization.Serdes
import org.apache.kafka.streams.KafkaStreams
import org.apache.kafka.streams.StreamsBuilder
import org.apache.kafka.streams.StreamsConfig

fun main() {
    val props = Properties()
    props[StreamsConfig.APPLICATION_ID_CONFIG] = "streams-wordcount"
    props[StreamsConfig.BOOTSTRAP_SERVERS_CONFIG] = "localhost:9092"
    props[StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG] = 0
    props[StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG] = Serdes.String().javaClass.name
    props[StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG] = Serdes.String().javaClass.name

    val builder = StreamsBuilder()
    val source = builder.stream<String, String>("streams-plaintext-input")
    source.foreach { key, value -> println("$key $value") }

    val streams = KafkaStreams(builder.build(), props)
    val latch = CountDownLatch(1)

    // Attach a shutdown handler to catch Ctrl-C
    Runtime.getRuntime().addShutdownHook(object : Thread("streams-wordcount-shutdown-hook") {
        override fun run() {
            streams.close()
            latch.countDown()
        }
    })

    try {
        streams.start()
        latch.await()
    } catch (e: Throwable) {
        exitProcess(1)
    }
    exitProcess(0)
}
This is what I've seen:
1) With an empty topic, the startup shows a resetting of all partitions to offset 0:
07:55:03.885 [streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-2 to offset 0.
07:55:03.886 [streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-3 to offset 0.
07:55:03.886 [streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-0 to offset 0.
07:55:03.886 [streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-1 to offset 0.
07:55:03.886 [streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-4 to offset 0.
07:55:03.886 [streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-5 to offset 0
2) I put one message in the topic and inspect the consumer group, seeing that the record is in partition 4:
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
streams-plaintext-input 0 - 0 - streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer-905a307a-4c49-4d8b-ac2e-5525ba2e8a8e /127.0.0.1 streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer
streams-plaintext-input 5 - 0 - streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer-905a307a-4c49-4d8b-ac2e-5525ba2e8a8e /127.0.0.1 streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer
streams-plaintext-input 1 - 0 - streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer-905a307a-4c49-4d8b-ac2e-5525ba2e8a8e /127.0.0.1 streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer
streams-plaintext-input 2 - 0 - streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer-905a307a-4c49-4d8b-ac2e-5525ba2e8a8e /127.0.0.1 streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer
streams-plaintext-input 3 - 0 - streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer-905a307a-4c49-4d8b-ac2e-5525ba2e8a8e /127.0.0.1 streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer
streams-plaintext-input 4 1 1 0 streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer-905a307a-4c49-4d8b-ac2e-5525ba2e8a8e /127.0.0.1 streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer
3) I restart the application. Now the resetting only affects the empty partitions (0, 1, 2, 3, 5):
07:57:39.477 [streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-2 to offset 0.
07:57:39.478 [streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-3 to offset 0.
07:57:39.478 [streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-0 to offset 0.
07:57:39.479 [streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-1 to offset 0.
07:57:39.479 [streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-5 to offset 0.
4) I insert another message, inspect the consumer group state and the same thing happens: the record is in partition 2 and when restarting the application it only resets the empty partitions (0, 1, 3, 5):
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
streams-plaintext-input 0 - 0 - streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer-cb04e2bd-598f-455f-b913-1370b4144dd6 /127.0.0.1 streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer
streams-plaintext-input 5 - 0 - streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer-cb04e2bd-598f-455f-b913-1370b4144dd6 /127.0.0.1 streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer
streams-plaintext-input 1 - 0 - streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer-cb04e2bd-598f-455f-b913-1370b4144dd6 /127.0.0.1 streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer
streams-plaintext-input 2 1 1 0 streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer-cb04e2bd-598f-455f-b913-1370b4144dd6 /127.0.0.1 streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer
streams-plaintext-input 3 - 0 - streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer-cb04e2bd-598f-455f-b913-1370b4144dd6 /127.0.0.1 streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer
streams-plaintext-input 4 1 1 0 streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer-cb04e2bd-598f-455f-b913-1370b4144dd6 /127.0.0.1 streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer
08:00:42.313 [streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-3 to offset 0.
08:00:42.314 [streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-0 to offset 0.
08:00:42.314 [streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-1 to offset 0.
08:00:42.314 [streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-5 to offset 0.
What's the reason why there are two consumers configured?
The restore consumer client is a dedicated consumer for fault tolerance and state management. It is responsible for restoring the state from the changelog topics, and it is displayed separately from the application consumer client. You can find more information here:
https://docs.confluent.io/current/streams/monitoring.html#kafka-restore-consumer-client-id
Why does the second one have auto.offset.reset = earliest when I haven't configured it explicitly and the Kafka default is latest?
You are right: the default value of auto.offset.reset is latest for the Kafka consumer. But in Spring Cloud Stream the default value for the consumer startOffset is earliest, hence it shows earliest for the second consumer. It also depends on the spring.cloud.stream.bindings.<channelName>.group binding: if the group is set explicitly, startOffset is set to earliest; otherwise it is set to latest for an anonymous consumer.
Reference : Spring Cloud Stream Kafka Consumer Properties
I want the default (auto.offset.reset = latest) behaviour and it
seems to be working fine. However, doesn't it contradict what I see in
the logs?
In the case of an anonymous consumer group, the default value for startOffset will be latest.
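To illustrate the point above, a minimal sketch using the regular Kafka binder property names from the reference above (the binding name event and the group name are only examples; the Kafka Streams binder has a similar consumer startOffset property under spring.cloud.stream.kafka.streams.bindings):
spring:
  cloud:
    stream:
      bindings:
        event:
          group: events-processor   # explicit group, so startOffset defaults to earliest
      kafka:
        bindings:
          event:
            consumer:
              startOffset: latest   # override if the Kafka default behaviour is wanted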

KafkaConsumer: how to increase log level?

When I run my Java application and instantiate the KafkaConsumer object (fed with the minimum required properties: key and value deserializers and group id), I see lots of INFO messages on stdout (if I provide unsupported properties, I also see WARNING messages).
I want to see when fetch events take place. I assume that by increasing the log level to DEBUG I will be able to see that. Unfortunately, I am not able to increase it.
I tried to feed the log4j.properties file in multiple ways (placing the file at specific paths and also passing it as a parameter with -Dlog4j.configuration). The output remains the same.
cd /Users/user/git/kafka/toys; JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_191.jdk/Contents/Home "/Applications/NetBeans/NetBeans 8.2.app/Contents/Resources/NetBeans/java/maven/bin/mvn" "-Dexec.args=-classpath %classpath ch.demo.toys.CarthusianConsumer" -Dexec.executable=/Library/Java/JavaVirtualMachines/jdk1.8.0_191.jdk/Contents/Home/bin/java -Dexec.classpathScope=runtime -DskipTests=true org.codehaus.mojo:exec-maven-plugin:1.2.1:exec
Running NetBeans Compile On Save execution. Phase execution is skipped and output directories of dependency projects (with Compile on Save turned on) will be used instead of their jar artifacts.
Scanning for projects...
------------------------------------------------------------------------
Building toys 1.0-SNAPSHOT
------------------------------------------------------------------------
--- exec-maven-plugin:1.2.1:exec (default-cli) # toys ---
Jul 10, 2019 2:52:00 PM org.apache.kafka.common.config.AbstractConfig logAll
INFO: ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [kafka-server:9090, kafka-server:9091, kafka-server:9092]
check.crcs = true
client.dns.lookup = default
client.id =
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = carthusian-consumer
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.IntegerDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 100000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = DEBUG
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
Jul 10, 2019 2:52:01 PM org.apache.kafka.common.utils.AppInfoParser$AppInfo <init>
INFO: Kafka version: 2.3.0
Jul 10, 2019 2:52:01 PM org.apache.kafka.common.utils.AppInfoParser$AppInfo <init>
INFO: Kafka commitId: fc1aaa116b661c8a
Jul 10, 2019 2:52:01 PM org.apache.kafka.common.utils.AppInfoParser$AppInfo <init>
INFO: Kafka startTimeMs: 1562763121219
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.KafkaConsumer subscribe
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Subscribed to topic(s): sequence
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.Metadata update
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Cluster ID: REIXp5FySKGPHlRyfTALLQ
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler onSuccess
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Discovered group coordinator kafka-tds:9091 (id: 2147483646 rack: null)
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.ConsumerCoordinator onJoinPrepare
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Revoking previously assigned partitions []
Revoke event: []
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.AbstractCoordinator sendJoinGroupRequest
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] (Re-)joining group
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.AbstractCoordinator sendJoinGroupRequest
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] (Re-)joining group
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1 onSuccess
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Successfully joined group with generation 96
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.ConsumerCoordinator onJoinComplete
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Setting newly assigned partitions: sequence-1, sequence-0
Assignment event: [sequence-1, sequence-0]
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.SubscriptionState lambda$requestOffsetReset$3
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Seeking to EARLIEST offset of partition sequence-1
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.SubscriptionState lambda$requestOffsetReset$3
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Seeking to EARLIEST offset of partition sequence-0
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.SubscriptionState maybeSeekUnvalidated
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Resetting offset for partition sequence-0 to offset 0.
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.SubscriptionState maybeSeekUnvalidated
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Resetting offset for partition sequence-1 to offset 0.
Loaded 9804 records from [sequence-0] partitions
Loaded 9804 records from [sequence-1] partitions
Loaded 9799 records from [sequence-0] partitions
Loaded 9799 records from [sequence-1] partitions
Loaded 9799 records from [sequence-0] partitions
Loaded 9799 records from [sequence-1] partitions
Loaded 9799 records from [sequence-0] partitions
Loaded 9799 records from [sequence-1] partitions
Loaded 9799 records from [sequence-0] partitions
Solved by placing the following (simple) log4j.properties under src/main/resources and running the app straight from the console (rather than from the IDE). Fetch messages are now shown.
# Root logger option
log4j.rootLogger=DEBUG, stdout
# Direct log messages to stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1} - %m%n
At the moment I do not know which class is generating the messages I am looking for, hence the DEBUG setting on the rootLogger.
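If the root-level DEBUG output turns out to be too noisy, a narrower sketch (same log4j 1.x setup and stdout appender as above) is to keep the root logger at INFO and raise only the Kafka consumer packages; fetch activity is logged by classes such as org.apache.kafka.clients.consumer.internals.Fetcher:
# Root logger stays at INFO
log4j.rootLogger=INFO, stdout
# Only the Kafka consumer internals (including the Fetcher) log at DEBUG
log4j.logger.org.apache.kafka.clients.consumer=DEBUG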