I ran the kafka-topics.sh shell script to change the partition count of a topic from 3 to 10:
./bin/kafka-topics.sh --zookeeper xxx.xx.xxx.xx:2181/my-broker-0 --alter --topic target-topic-example-1 --partitions 10
As soon as I executed the command, the Kafka cluster started reporting errors:
[2018-11-20 14:45:29,954] INFO Created log for partition [target-topic-example-1,5] in /data/kafka/brokers with properties {flush.messages -> 9223372036854775807, message.timestamp.type -> CreateTime, segment.bytes -> 536870912, preallocate -> false, cleanup.policy -> [delete], delete.retention.ms -> 86400000, segment.ms -> 604800000, min.insync.replicas -> 1, file.delete.delay.ms -> 60000, retention.ms -> 259200000, max.message.bytes -> 10000000, message.format.version -> 0.10.1-IV2, index.interval.bytes -> 4096, segment.index.bytes -> 10485760, retention.bytes -> -1, segment.jitter.ms -> 0, min.cleanable.dirty.ratio -> 0.5, compression.type -> producer, unclean.leader.election.enable -> true, message.timestamp.difference.max.ms -> 9223372036854775807, min.compaction.lag.ms -> 0, flush.ms -> 9223372036854775807}. (kafka.log.LogManager)
[2018-11-20 14:45:29,955] INFO Partition [target-topic-example-1,5] on broker 8: No checkpointed highwatermark is found for partition [target-topic-example-1,5] (kafka.cluster.Partition)
[2018-11-20 14:45:29,957] INFO Completed load of log target-topic-example-1-4 with 1 log segments and log end offset 0 in 1 ms (kafka.log.Log)
[2018-11-20 14:45:29,957] INFO Created log for partition [target-topic-example-1,4] in /data/kafka/brokers with properties {flush.messages -> 9223372036854775807, message.timestamp.type -> CreateTime, segment.bytes -> 536870912, preallocate -> false, cleanup.policy -> [delete], delete.retention.ms -> 86400000, segment.ms -> 604800000, min.insync.replicas -> 1, file.delete.delay.ms -> 60000, retention.ms -> 259200000, max.message.bytes -> 10000000, message.format.version -> 0.10.1-IV2, index.interval.bytes -> 4096, segment.index.bytes -> 10485760, retention.bytes -> -1, segment.jitter.ms -> 0, min.cleanable.dirty.ratio -> 0.5, compression.type -> producer, unclean.leader.election.enable -> true, message.timestamp.difference.max.ms -> 9223372036854775807, min.compaction.lag.ms -> 0, flush.ms -> 9223372036854775807}. (kafka.log.LogManager)
[2018-11-20 14:45:29,958] INFO Partition [target-topic-example-1,4] on broker 8: No checkpointed highwatermark is found for partition [target-topic-example-1,4] (kafka.cluster.Partition)
[2018-11-20 14:45:29,958] INFO [ReplicaFetcherManager on broker 8] Removed fetcher for partitions target-topic-example-1-5,target-topic-example-1-4 (kafka.server.ReplicaFetcherManager)
[2018-11-20 14:45:29,958] INFO Truncating log target-topic-example-1-5 to offset 0. (kafka.log.Log)
[2018-11-20 14:45:29,958] INFO Truncating log target-topic-example-1-4 to offset 0. (kafka.log.Log)
[2018-11-20 14:45:29,961] INFO [ReplicaFetcherManager on broker 8] Added fetcher for partitions List([target-topic-example-1-5, initOffset 0 to broker BrokerEndPoint(1,My-broker02-172.cm.skp,9092)] , [target-topic-example-1-4, initOffset 0 to broker BrokerEndPoint(0,My-broker01-172.cm.skp,9092)] ) (kafka.server.ReplicaFetcherManager)
[2018-11-20 14:45:30,062] ERROR [ReplicaFetcherThread-0-1], Error for partition [other-topic-1,3] to broker 1:org.apache.kafka.common.errors.UnknownServerException: The server experienced an unexpected error when processing the request (kafka.server.ReplicaFetcherThread)
[2018-11-20 14:45:30,070] ERROR [ReplicaFetcherThread-0-1], Error for partition [other-topic-2,0] to broker 1:org.apache.kafka.common.errors.UnknownServerException: The server experienced an unexpected error when processing the request (kafka.server.ReplicaFetcherThread)
[2018-11-20 14:45:30,070] ERROR [ReplicaFetcherThread-0-1], Error for partition [other-topic-3,3] to broker 1:org.apache.kafka.common.errors.UnknownServerException: The server experienced an unexpected error when processing the request (kafka.server.ReplicaFetcherThread)
[2018-11-20 14:45:30,070] ERROR [ReplicaFetcherThread-0-1], Error for partition [other-topic-4,0] to broker 1:org.apache.kafka.common.errors.UnknownServerException: The server experienced an unexpected error when processing the request (kafka.server.ReplicaFetcherThread)
In the end, the command to change the number of partitions worked.
However, I do not understand what the errors above mean.
(For reference, there are 10 brokers in total and more than 500 topics.)
Just altering the number of partitions isn't enough; you also need to rebalance the cluster. The new partitions only become active once the consumers are informed about them.
Please refer to my answer for the process of adding partitions: https://stackoverflow.com/a/58293572/8702415
I would also recommend running a preferred replica election:
./bin/kafka-preferred-replica-election.sh --zookeeper localhost:2181
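If you prefer doing this from application code rather than the shell scripts, the same partition increase can also be done with the Java AdminClient. A minimal sketch (the bootstrap address is a placeholder; note that this only adds partitions, it does not move existing data):

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

public class AddPartitionsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap address; replace with your brokers.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Raise the total partition count of the topic to 10,
            // equivalent to the kafka-topics.sh --alter call above.
            admin.createPartitions(
                    Collections.singletonMap("target-topic-example-1", NewPartitions.increaseTo(10))
            ).all().get();
        }
    }
}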
Related
Kafka consumer continues to print error messages
I built a Kafka cluster (version 2.3.0) with 5 machines. The topic has a single partition (partition 0) with 3 replicas. When I consume it with the kafka-clients API, the consumer keeps printing exceptions:
Offset commit failed on partition test-0 at offset 1: The request timed out.
However, reading and writing messages on this topic works fine.
Consumer configuration:
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [qs-kfk-01:9092, qs-kfk-02:9092, qs-kfk-03:9092, qs-kfk-04:9092, qs-kfk-05:9092]
check.crcs = true
client.id =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = erp-sales
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
Java code:
// Poll the underlying Kafka consumer, then hand each record to our own handler
ConsumerRecords<K, V> consumerRecords = _kafkaConsumer.poll(50L);
for (ConsumerRecord<K, V> record : consumerRecords.records(topic)) {
    kafkaConsumer.receive(topic, record.key(), record.value(), record.partition(), record.offset());
}
I have tried the following:
Increased the request timeout to 5 minutes (did not help; see the sketch after this list).
Switched to a different group-id (this worked): as long as I use the original group-id, the problem occurs.
Killed the machine hosting the group coordinator; after the coordinator moved to another machine, the error remained.
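For reference, the timeout change in the first attempt was a plain consumer-property change, roughly like the sketch below. Only request.timeout.ms reflects what was actually changed; the remaining properties are just the minimum needed to construct a consumer and mirror the configuration dump above.

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class TimeoutConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "qs-kfk-01:9092,qs-kfk-02:9092,qs-kfk-03:9092");
        props.put("group.id", "erp-sales");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // The attempted change: raise the request timeout from the default 30 s to 5 minutes.
        props.put("request.timeout.ms", "300000");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe and poll as in the loop shown further down
        }
    }
}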
I get continuous error message output in the console:
2019-11-03 16:21:11.687 DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=erp-sales] Sending asynchronous auto-commit of offsets {test-0=OffsetAndMetadata{offset=1, metadata=''}}
2019-11-03 16:21:11.704 ERROR org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=erp-sales] Offset commit failed on partition test-0 at offset 1: The request timed out.
2019-11-03 16:21:11.704 INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=erp-sales] Group coordinator qs-kfk-04:9092 (id: 2147483643 rack: null) is unavailable or invalid, will attempt rediscovery
2019-11-03 16:21:11.705 DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=erp-sales] Asynchronous auto-commit of offsets {test-0=OffsetAndMetadata{offset=1, metadata=''}} failed due to retriable error: {}
org.apache.kafka.clients.consumer.RetriableCommitFailedException: Offset commit failed with a retriable exception. You should retry committing the latest consumed offsets.
Caused by: org.apache.kafka.common.errors.TimeoutException: The request timed out.
2019-11-03 16:21:11.708 DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=erp-sales] Manually disconnected from 2147483643. Removed requests: OFFSET_COMMIT.
2019-11-03 16:21:11.708 DEBUG org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient - [Consumer clientId=consumer-1, groupId=erp-sales] Cancelled request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=4, clientId=consumer-1, correlationId=42) due to node 2147483643 being disconnected
2019-11-03 16:21:11.708 DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=erp-sales] Asynchronous auto-commit of offsets {test-0=OffsetAndMetadata{offset=1, metadata=''}} failed due to retriable error: {}
org.apache.kafka.clients.consumer.RetriableCommitFailedException: Offset commit failed with a retriable exception. You should retry committing the latest consumed offsets.
Caused by: org.apache.kafka.common.errors.DisconnectException: null
Why does the offset commit fail?
Why is the offset information in the Kafka cluster still correct even though the commit fails?
Thank you for your help.
Group coordinator unavailability is the main cause of this issue.
This issue has already been raised in the Kafka community (KAFKA-7017).
You can fix it by deleting the internal __consumer_offsets topic and restarting the cluster.
You can go through the following post for a few more details:
https://www.stuckintheloop.com/posts/my_first_kafka_outage/
@Rohit Yadav, thanks for the answer. I have switched to another consumer group for the time being and it is working well. But now the client has another problem, with continuous output:
2019-11-05 14:11:14.892 INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=fs-sales-be] Error sending fetch request (sessionId=2035993355, epoch=1) to node 4: org.apache.kafka.common.errors.DisconnectException.
2019-11-05 14:11:14.892 INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=fs-sales-be] Error sending fetch request (sessionId=181175071, epoch=INITIAL) to node 5: org.apache.kafka.common.errors.DisconnectException.
What is causing this? Nodes 4 and 5 are in a healthy state.
I have a Spring Cloud Stream Kafka Streams application that reads from a topic (event) and performs some simple processing:
@Configuration
class EventKStreamConfiguration {

    private val logger = LoggerFactory.getLogger(javaClass)

    @StreamListener
    fun process(@Input("event") eventStream: KStream<String, EventReceived>) {
        eventStream.foreach { key, value ->
            logger.info("--------> Processing Event {}", value)
            // Save in DB
        }
    }
}
This application is using a Kafka environment from Confluent Cloud, with an event topic with 6 partitions. The full configuration is:
spring:
  application:
    name: events-processor
  cloud:
    stream:
      schema-registry-client:
        endpoint: ${schema-registry-url:http://localhost:8081}
      kafka:
        streams:
          binder:
            brokers: ${kafka-brokers:localhost}
            configuration:
              application:
                id: ${spring.application.name}
              default:
                key:
                  serde: org.apache.kafka.common.serialization.Serdes$StringSerde
              schema:
                registry:
                  url: ${spring.cloud.stream.schema-registry-client.endpoint}
              value:
                subject:
                  name:
                    strategy: io.confluent.kafka.serializers.subject.RecordNameStrategy
              processing:
                guarantee: exactly_once
          bindings:
            event:
              consumer:
                valueSerde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
      bindings:
        event:
          destination: event
  data:
    mongodb:
      uri: ${mongodb-uri:mongodb://localhost/test}
server:
  port: 8085
logging:
  level:
    org.springframework.kafka.config: debug
---
spring:
  profiles: confluent-cloud
  cloud:
    stream:
      kafka:
        streams:
          binder:
            autoCreateTopics: false
            configuration:
              retry:
                backoff:
                  ms: 500
              security:
                protocol: SASL_SSL
              sasl:
                mechanism: PLAIN
                jaas:
                  config: xxx
              basic:
                auth:
                  credentials:
                    source: USER_INFO
              schema:
                registry:
                  basic:
                    auth:
                      user:
                        info: yyy
Messages are being correctly processed by the KStream. If I restart the application they are not reprocessed. Note: I don’t want them to be reprocessed, so this behaviour is ok.
However, the startup logs show some strange bits:
First, they show the creation of a restore consumer client with auto.offset.reset = none:
2019-07-19 10:20:17.120 INFO 82473 --- [ main] o.a.k.s.p.internals.StreamThread : stream-thread [events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f-StreamThread-1] Creating restore consumer client
2019-07-19 10:20:17.123 INFO 82473 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = none
Then it creates a consumer client with auto.offset.reset = earliest:
2019-07-19 10:20:17.235 INFO 82473 --- [ main] o.a.k.s.p.internals.StreamThread : stream-thread [events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f-StreamThread-1] Creating consumer client
2019-07-19 10:20:17.241 INFO 82473 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
The final traces of the startup log show an offset reset to 0. This happens on every restart of the application:
2019-07-19 10:20:31.577 INFO 82473 --- [-StreamThread-1] o.a.k.s.p.internals.StreamThread : stream-thread [events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f-StreamThread-1] State transition from PARTITIONS_ASSIGNED to RUNNING
2019-07-19 10:20:31.578 INFO 82473 --- [-StreamThread-1] org.apache.kafka.streams.KafkaStreams : stream-client [events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f] State transition from REBALANCING to RUNNING
2019-07-19 10:20:31.669 INFO 82473 --- [events-processor] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f-StreamThread-1-consumer, groupId=events-processor] Resetting offset for partition event-3 to offset 0.
2019-07-19 10:20:31.669 INFO 82473 --- [events-processor] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f-StreamThread-1-consumer, groupId=events-processor] Resetting offset for partition event-0 to offset 0.
2019-07-19 10:20:31.669 INFO 82473 --- [events-processor] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f-StreamThread-1-consumer, groupId=events-processor] Resetting offset for partition event-1 to offset 0.
2019-07-19 10:20:31.669 INFO 82473 --- [events-processor] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f-StreamThread-1-consumer, groupId=events-processor] Resetting offset for partition event-5 to offset 0.
2019-07-19 10:20:31.670 INFO 82473 --- [events-processor] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=events-processor-9a8069c4-3fb6-4d76-a207-efbbadd52b8f-StreamThread-1-consumer, groupId=events-processor] Resetting offset for partition event-4 to offset 0.
What's the reason why there are two consumers configured?
Why does the second one have auto.offset.reset = earliest when I haven't configured it explicitly and the Kafka default is latest?
I want the default (auto.offset.reset = latest) behaviour and it seems to be working fine. However, doesn't it contradict what I see in the logs?
UPDATE:
I would rephrase the third question like this: why do the logs show that the partitions are being reset to 0 on every restart, and yet no messages are redelivered to the KStream?
UPDATE 2:
I've simplified the scenario, this time with a native Kafka Streams application. The behaviour is exactly the same as with Spring Cloud Stream. However, after inspecting the consumer group and the partitions, I've found that it actually makes sense.
KStream:
import org.apache.kafka.common.serialization.Serdes
import org.apache.kafka.streams.KafkaStreams
import org.apache.kafka.streams.StreamsBuilder
import org.apache.kafka.streams.StreamsConfig
import java.util.Properties
import java.util.concurrent.CountDownLatch
import kotlin.system.exitProcess

fun main() {
    val props = Properties()
    props[StreamsConfig.APPLICATION_ID_CONFIG] = "streams-wordcount"
    props[StreamsConfig.BOOTSTRAP_SERVERS_CONFIG] = "localhost:9092"
    props[StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG] = 0
    props[StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG] = Serdes.String().javaClass.name
    props[StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG] = Serdes.String().javaClass.name

    val builder = StreamsBuilder()
    val source = builder.stream<String, String>("streams-plaintext-input")
    source.foreach { key, value -> println("$key $value") }

    val streams = KafkaStreams(builder.build(), props)
    val latch = CountDownLatch(1)

    // attach shutdown handler to catch control-c
    Runtime.getRuntime().addShutdownHook(object : Thread("streams-wordcount-shutdown-hook") {
        override fun run() {
            streams.close()
            latch.countDown()
        }
    })

    try {
        streams.start()
        latch.await()
    } catch (e: Throwable) {
        exitProcess(1)
    }
    exitProcess(0)
}
This is what I've seen:
1) With an empty topic, the startup shows a resetting of all partitions to offset 0:
07:55:03.885 [streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-2 to offset 0.
07:55:03.886 [streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-3 to offset 0.
07:55:03.886 [streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-0 to offset 0.
07:55:03.886 [streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-1 to offset 0.
07:55:03.886 [streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-4 to offset 0.
07:55:03.886 [streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-3549a54e-49db-4490-bd9f-7156e972021a-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-5 to offset 0
2) I put one message in the topic and inspect the consumer group, seeing that the record is in partition 4:
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
streams-plaintext-input 0 - 0 - streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer-905a307a-4c49-4d8b-ac2e-5525ba2e8a8e /127.0.0.1 streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer
streams-plaintext-input 5 - 0 - streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer-905a307a-4c49-4d8b-ac2e-5525ba2e8a8e /127.0.0.1 streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer
streams-plaintext-input 1 - 0 - streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer-905a307a-4c49-4d8b-ac2e-5525ba2e8a8e /127.0.0.1 streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer
streams-plaintext-input 2 - 0 - streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer-905a307a-4c49-4d8b-ac2e-5525ba2e8a8e /127.0.0.1 streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer
streams-plaintext-input 3 - 0 - streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer-905a307a-4c49-4d8b-ac2e-5525ba2e8a8e /127.0.0.1 streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer
streams-plaintext-input 4 1 1 0 streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer-905a307a-4c49-4d8b-ac2e-5525ba2e8a8e /127.0.0.1 streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer
3) I restart the application. Now the resetting only affects the empty partitions (0, 1, 2, 3, 5):
07:57:39.477 [streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-2 to offset 0.
07:57:39.478 [streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-3 to offset 0.
07:57:39.478 [streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-0 to offset 0.
07:57:39.479 [streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-1 to offset 0.
07:57:39.479 [streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-b1565eca-7d80-4550-97d2-e78ead62a840-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-5 to offset 0.
4) I insert another message, inspect the consumer group state and the same thing happens: the record is in partition 2 and when restarting the application it only resets the empty partitions (0, 1, 3, 5):
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
streams-plaintext-input 0 - 0 - streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer-cb04e2bd-598f-455f-b913-1370b4144dd6 /127.0.0.1 streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer
streams-plaintext-input 5 - 0 - streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer-cb04e2bd-598f-455f-b913-1370b4144dd6 /127.0.0.1 streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer
streams-plaintext-input 1 - 0 - streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer-cb04e2bd-598f-455f-b913-1370b4144dd6 /127.0.0.1 streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer
streams-plaintext-input 2 1 1 0 streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer-cb04e2bd-598f-455f-b913-1370b4144dd6 /127.0.0.1 streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer
streams-plaintext-input 3 - 0 - streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer-cb04e2bd-598f-455f-b913-1370b4144dd6 /127.0.0.1 streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer
streams-plaintext-input 4 1 1 0 streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer-cb04e2bd-598f-455f-b913-1370b4144dd6 /127.0.0.1 streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer
08:00:42.313 [streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-3 to offset 0.
08:00:42.314 [streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-0 to offset 0.
08:00:42.314 [streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-1 to offset 0.
08:00:42.314 [streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=streams-wordcount-addb08ed-62ce-47f9-a446-f2ee0592c53d-StreamThread-1-consumer, groupId=streams-wordcount] Resetting offset for partition streams-plaintext-input-5 to offset 0.
What's the reason why there are two consumers configured?
The restore consumer client is a dedicated consumer used for fault tolerance and state management. It is responsible for restoring state from the changelog topics, and it is reported separately from the application consumer client. You can find more information here:
https://docs.confluent.io/current/streams/monitoring.html#kafka-restore-consumer-client-id
Why does the second one have auto.offset.reset = earliest when I haven't configured it explicitly and the Kafka default is latest?
You are right: the default value of auto.offset.reset for the Kafka consumer is latest. But in Spring Cloud Stream, the default value of the consumer startOffset property is earliest, which is why the second consumer shows earliest. It also depends on the spring.cloud.stream.bindings.<channelName>.group binding: if the group is set explicitly, startOffset defaults to earliest; otherwise it defaults to latest for an anonymous consumer.
Reference : Spring Cloud Stream Kafka Consumer Properties
I want the default (auto.offset.reset = latest) behaviour and it seems to be working fine. However, doesn't it contradict what I see in the logs?
In the case of an anonymous consumer group, the default value for startOffset is latest.
When I run my Java application and instantiate the KafkaConsumer object (fed with the minimum required properties: key and value deserializers and group.id), I see lots of INFO messages on stdout (if I provide unsupported properties, I also see WARNING messages).
I want to see when fetch events take place. I assume that by increasing the log level to DEBUG I will be able to see that. Unfortunately, I am not able to increase it.
I tried to supply the log4j.properties file in multiple ways (placing the file at specific paths and also passing it as a parameter with -Dlog4j.configuration). The output remains the same:
cd /Users/user/git/kafka/toys; JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_191.jdk/Contents/Home "/Applications/NetBeans/NetBeans 8.2.app/Contents/Resources/NetBeans/java/maven/bin/mvn" "-Dexec.args=-classpath %classpath ch.demo.toys.CarthusianConsumer" -Dexec.executable=/Library/Java/JavaVirtualMachines/jdk1.8.0_191.jdk/Contents/Home/bin/java -Dexec.classpathScope=runtime -DskipTests=true org.codehaus.mojo:exec-maven-plugin:1.2.1:exec
Running NetBeans Compile On Save execution. Phase execution is skipped and output directories of dependency projects (with Compile on Save turned on) will be used instead of their jar artifacts.
Scanning for projects...
------------------------------------------------------------------------
Building toys 1.0-SNAPSHOT
------------------------------------------------------------------------
--- exec-maven-plugin:1.2.1:exec (default-cli) @ toys ---
Jul 10, 2019 2:52:00 PM org.apache.kafka.common.config.AbstractConfig logAll
INFO: ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [kafka-server:9090, kafka-server:9091, kafka-server:9092]
check.crcs = true
client.dns.lookup = default
client.id =
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = carthusian-consumer
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.IntegerDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 100000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = DEBUG
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
Jul 10, 2019 2:52:01 PM org.apache.kafka.common.utils.AppInfoParser$AppInfo <init>
INFO: Kafka version: 2.3.0
Jul 10, 2019 2:52:01 PM org.apache.kafka.common.utils.AppInfoParser$AppInfo <init>
INFO: Kafka commitId: fc1aaa116b661c8a
Jul 10, 2019 2:52:01 PM org.apache.kafka.common.utils.AppInfoParser$AppInfo <init>
INFO: Kafka startTimeMs: 1562763121219
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.KafkaConsumer subscribe
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Subscribed to topic(s): sequence
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.Metadata update
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Cluster ID: REIXp5FySKGPHlRyfTALLQ
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler onSuccess
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Discovered group coordinator kafka-tds:9091 (id: 2147483646 rack: null)
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.ConsumerCoordinator onJoinPrepare
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Revoking previously assigned partitions []
Revoke event: []
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.AbstractCoordinator sendJoinGroupRequest
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] (Re-)joining group
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.AbstractCoordinator sendJoinGroupRequest
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] (Re-)joining group
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1 onSuccess
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Successfully joined group with generation 96
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.ConsumerCoordinator onJoinComplete
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Setting newly assigned partitions: sequence-1, sequence-0
Assignment event: [sequence-1, sequence-0]
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.SubscriptionState lambda$requestOffsetReset$3
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Seeking to EARLIEST offset of partition sequence-1
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.SubscriptionState lambda$requestOffsetReset$3
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Seeking to EARLIEST offset of partition sequence-0
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.SubscriptionState maybeSeekUnvalidated
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Resetting offset for partition sequence-0 to offset 0.
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.SubscriptionState maybeSeekUnvalidated
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Resetting offset for partition sequence-1 to offset 0.
Loaded 9804 records from [sequence-0] partitions
Loaded 9804 records from [sequence-1] partitions
Loaded 9799 records from [sequence-0] partitions
Loaded 9799 records from [sequence-1] partitions
Loaded 9799 records from [sequence-0] partitions
Loaded 9799 records from [sequence-1] partitions
Loaded 9799 records from [sequence-0] partitions
Loaded 9799 records from [sequence-1] partitions
Loaded 9799 records from [sequence-0] partitions
Solved by placing the following (simple) log4j.properties under src/main/resources and running the app straight from the console (rather than from the IDE). Fetch messages are now shown.
# Root logger option
log4j.rootLogger=DEBUG, stdout
# Direct log messages to stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1} - %m%n
At the moment I do not know which class is generating the messages I am looking for, hence the DEBUG setting on the rootLogger.
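If editing log4j.properties is inconvenient, the level can also be raised programmatically before creating the consumer. A sketch, assuming the application uses the log4j 1.x binding (matching the log4j.properties above); the logger name targets the consumer internals package, which is where fetch activity is logged at DEBUG:

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class RaiseKafkaLogLevel {
    public static void main(String[] args) {
        // Raise only the consumer internals (Fetcher, coordinator, offsets) to DEBUG
        // instead of setting DEBUG on the root logger.
        Logger.getLogger("org.apache.kafka.clients.consumer.internals").setLevel(Level.DEBUG);

        // ... create the KafkaConsumer and poll as usual ...
    }
}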
The command:
./bin/kafka-consumer-perf-test.sh --topic test_topic --messages 500000 --zookeeper zookeepernode --show-detailed-stats --consumer.config ./conf/connect-distributed.properties
leads to
{metadata.broker.list=kafkanode1:9093,kafkanode2:9093,av3l338p.kafkanode3:9093, request.timeout.ms=30000, client.id=perf-consumer-68473, security.protocol=SASL_SSL}
WARN Fetching topic metadata with correlation id 0 for topics [Set(test_topic)] from broker [BrokerEndPoint(1003,kafkanode3,9093)] failed (kafka.client.ClientUtils$)
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:99)
[...]
WARN Fetching topic metadata with correlation id 0 for topics [Set(test_topic)] from broker [BrokerEndPoint(1001,kafkanode1,9093)] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedByInterruptException
at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
[...]
WARN Fetching topic metadata with correlation id 0 for topics [Set(test_topic)] from broker [BrokerEndPoint(1002,kafkanode2,9093)] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:122)
Try adding the "--new-consumer" flag:
./bin/kafka-consumer-perf-test.sh --new-consumer --broker-list <broker-host>:9093 --topic <topic-read-from> --messages <number-of-messages> --show-detailed-stats --consumer.config consumer.properties --zookeeper <zookeeper-host>:2181
The consumer.properties file looks like this (including the SASL_SSL and Kerberos configuration):
zookeeper.connect=<zookeeper-host>:2181
# timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
#consumer group id
group.id=<consumer-group>
security.protocol=SASL_SSL
ssl.keystore.location=<path-to-keystore-file>
ssl.keystore.password=<password>
ssl.truststore.location=<path-to-truststore-file>
ssl.truststore.password=<password>
The output should look like:
time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec
2017-06-01 11:29:15:194, 0, 1,6726, 4,1816, 5000, 12500,0000
2017-06-01 11:29:15:207, 0, 3,3916, 859,4761, 10000, 2500000,0000
2017-06-01 11:29:15:208, 0, 5,1009, 1709,3201, 15000, 5000000,0000
2017-06-01 11:29:15:209, 0, 6,7850, 1684,1164, 20000, 5000000,0000
2017-06-01 11:29:15:291, 0, 8,5101, 21,2977, 25000, 61728,3951
2017-06-01 11:29:15:460, 0, 10,2138, 10,0808, 30000, 29585,7988
2017-06-01 11:29:15:462, 0, 11,9258, 1711,9637, 35000, 5000000,0000
2017-06-01 11:29:15:463, 0, 13,6348, 1709,0416, 40000, 5000000,0000
2017-06-01 11:29:15:673, 0, 15,3255, 8,0511, 45000, 23809,5238
2017-06-01 11:29:15:962, 0, 17,0417, 5,9384, 50000, 17301,0381
2017-06-01 11:29:15:963, 0, 18,7520, 1710,2280, 55000, 5000000,0000
2017-06-01 11:29:15:963, 0, 20,4568, Infinity, 60000, Infinity
2017-06-01 11:29:16:090, 0, 22,1223, 13,1138, 65000, 39370,0787
2017-06-01 11:29:16:282, 0, 23,8131, 8,8062, 70000, 26041,6667
I am learning Kafka from the official documentation, and all the commands work well, such as:
Step 1: start ZooKeeper
Step 2: start the server
Step 3: create a topic
Step 4: send some messages and consume them
But I can't send and consume messages with the Java API.
I use the producer API to send 100 messages and then the consumer API to read them.
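Roughly, the producer side is the standard quickstart-style loop; a minimal sketch of what I mean (the broker address is a placeholder, and test2 is the topic from the logs below):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 100; i++) {
                producer.send(new ProducerRecord<>("test2", Integer.toString(i), "message-" + i));
            }
            // Closing the producer (via try-with-resources) flushes any batched records
            // before the JVM exits.
        }
    }
}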
Here are the logs when starting the producer:
[2016-10-24 11:08:28,727] INFO Topic creation {"version":1,"partitions":{"0":[0]}} (kafka.admin.AdminUtils$)
[2016-10-24 11:08:28,737] INFO [KafkaApi-0] Auto creation of topic test2 with 1 partitions and replication factor 1 is successful (kafka.server.KafkaApis)
[2016-10-24 11:08:28,815] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions test2-0 (kafka.server.ReplicaFetcherManager)
[2016-10-24 11:08:28,825] INFO Completed load of log test2-0 with 1 log segments and log end offset 0 in 3 ms (kafka.log.Log)
[2016-10-24 11:08:28,829] INFO Created log for partition [test2,0] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2016-10-24 11:08:28,834] INFO Partition [test2,0] on broker 0: No checkpointed highwatermark is found for partition [test2,0] (kafka.cluster.Partition)
But the Kafka log file's size is 0 bytes.
Path: /tmp/kafka-logs/test2-0/
Files:
-rw-r--r-- 1 root root 10485760 Oct 24 11:08 00000000000000000000.index
-rw-r--r-- 1 root root 0 Oct 24 11:08 00000000000000000000.log
-rw-r--r-- 1 root root 10485756 Oct 24 11:08 00000000000000000000.timeindex