I'm seeing that, even though the Kafka topic has a lot of messages (millions) queued up, the vert.x consumer only fetches 500 messages (the default fetch amount), which it then passes on to the handler. But after the messages have been handled and committed, the consumer just stops and waits for about 35 seconds before it fetches another batch of messages.
I would expect that the consumer would keep on fetching until it manages to catch up with the partition and then pause. How do I make it do so?
The consumer is set up with the following code:
kafkaConsumer.subscribe(topic, result -> {
    if (result.succeeded()) {
        log.info("Kafka consumer successfully subscribed to topic {}", topic);
    } else {
        log.error("Kafka consumer failed to subscribe to topic {}", topic);
    }
    promise.handle(result);
});
With the following configuration for the consumer:
group.id=somegroup
auto.offset.reset=latest
enable.auto.commit=false
max.poll.interval.ms=300000
max.poll.records=500
session.timeout.ms=10000
heartbeat.interval.ms=3000
I'm using vert.x 3.9.2 and Kafka 2.4.1.
The delay was caused by a number of reasons. The most significant one was that each individual message in the fetched batch was manually committed, one after the other. Using auto-commit sped things up, and I believe that committing only the last offset of each batch would speed things up even more.
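For illustration, here is a minimal sketch of committing once per fetched batch instead of once per message. It uses the plain Apache Kafka Java client rather than the vert.x wrapper, and handle() is a hypothetical stand-in for the actual message handler:
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// assumes a KafkaConsumer<String, String> named consumer with enable.auto.commit=false
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
    Map<TopicPartition, OffsetAndMetadata> toCommit = new HashMap<>();
    for (ConsumerRecord<String, String> record : records) {
        handle(record); // hypothetical handler for a single message
        // remember only the highest processed offset per partition
        toCommit.put(new TopicPartition(record.topic(), record.partition()),
                new OffsetAndMetadata(record.offset() + 1));
    }
    if (!toCommit.isEmpty()) {
        consumer.commitSync(toCommit); // one commit per batch instead of one per message
    }
}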
Related
Is there any way to stop Kafka consumers from consuming messages for some time?
I want the consumer to stop for some time and later start consuming messages from the last unconsumed message.
Most Kafka libraries have close or pause methods on the Consumer implementation. Or, you could throw some fatal exception during consumption.
Resuming from the last committed offset (i.e. the first unconsumed message) for a consumer group is the default behavior.
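As a rough sketch of the pause/resume approach with the plain Java client (when to pause and for how long is up to your code), assuming a subscribed KafkaConsumer named consumer:
import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecords;

// Pause fetching on all partitions currently assigned to this consumer
consumer.pause(consumer.assignment());

// Keep calling poll() so the consumer stays in the group; it returns no
// records for paused partitions
ConsumerRecords<String, String> none = consumer.poll(Duration.ofMillis(100));

// Later, resume the paused partitions; consumption continues from the
// position the consumer left off at, not from the beginning
consumer.resume(consumer.paused());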
I have been learning Kafka for a couple of months, and I keep seeing the word "commit" come up in both producer and consumer contexts. It was confusing to me for a long time, but I think I have a better understanding now.
It would be great if someone could validate my understanding, or correct me if I am wrong or missing something in the points below:
commit in Producer:
Commit comes up in a producer context only when we are dealing with transactions. Here a commit means that a transactional producer has been able to successfully write a message to a partition in a topic.
commit in Consumer:
Kafka does not itself automatically track which consumer has read which message. A consumer needs to notify the broker that it has read a particular message in a topic. This acknowledgment process, by which a consumer notifies which message/partition in a topic it has read successfully (so that other consumers don't re-read it), is known as a "commit".
From the book Kafka: The Definitive Guide:
How does a consumer commit an offset? It produces a message to Kafka, to a special __consumer_offsets topic, with the committed offset for each partition.
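In code, with the plain Java client, that acknowledgment is just an explicit commit call. A minimal sketch, assuming enable.auto.commit=false, a subscribed KafkaConsumer named consumer, and a hypothetical handle() method:
import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;

ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
for (ConsumerRecord<String, String> record : records) {
    handle(record); // hypothetical processing of one message
}
// Tell the broker (via the __consumer_offsets topic) how far this group has read
consumer.commitSync();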
Also, another area where the word "commit" comes up in a consumer setting is the consumer's isolation.level, i.e. isolation.level=read_committed. This, however, applies only in a transactional setting. When we are using a transactional producer, the consumer's isolation.level specifies whether it will read messages only after they are "committed" by the producer. More details here
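As a small illustration (the broker address and group id below are made up), the consumer side of that setting is just a configuration property:
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");    // assumed broker address
props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                   // hypothetical group id
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
// Only return records from transactions that the producer has committed
props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);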
Again, would be great if someone could validate my understanding.
If my understanding is correct, a producer commit does not only mean that a transactional producer has successfully written records to a topic; it also means that the consumer involved in the transaction (in other words, in the atomic read-process-write cycle) has successfully consumed records from a topic.
Before calling producer.commitTransaction(), one should call producer.sendOffsetsToTransaction(Map<TopicPartition,OffsetAndMetadata> offsets, String consumerGroupId) so that the broker knows which records were consumed by the consumer during an atomic read-process-write cycle.
The following code shows a common processing pattern (a read-process-write loop) when dealing with transactions:
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(timeout));
    producer.beginTransaction();
    for (ConsumerRecord<String, String> record : records) {
        // producerRecord(...) builds the output record from the consumed one
        producer.send(producerRecord("outputTopic", record));
    }
    // attach the consumed offsets to the transaction, so they are committed
    // atomically with the produced records
    producer.sendOffsetsToTransaction(currentOffsets(consumer), group);
    producer.commitTransaction();
}
In short, in a transactional context, it is also the duty of the producer to "commit for the consumer".
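For the loop above to work, the producer has to be created with a transactional.id and initialized once before the loop. A minimal sketch (the broker address and transactional id are assumptions):
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");      // assumed broker address
props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "my-transactional-id");  // hypothetical id, unique per producer instance
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
KafkaProducer<String, String> producer = new KafkaProducer<>(props);
producer.initTransactions(); // registers the transactional id with the broker, called once per producer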
More information about transactions can be found in this article
I have a Kafka consumer which fetches from the broker at a given fetch interval. It fetches at that interval, and that is fine when there are messages in the topic. But I really don't understand why the consumer sends more fetch requests when there are no messages in the Kafka topic.
In general, consumers send two types of requests to the broker:
Heartbeat
Poll request
Heartbeats are sent via a separate thread, and their interval is configured with heartbeat.interval.ms (3 seconds by default).
For poll requests there is no specific interval; it's up to your code (there is just an upper bound, max.poll.interval.ms).
It is absolutely reasonable to send more frequent poll requests when there is no data in the partition(s) your consumer is assigned to. Suppose you have code like this:
void consumeLoop() {
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
        if (!records.isEmpty()) {
            processMessages(records);
        }
    }
}
As you can see, if no records are returned from poll, your consumer will immediately send another poll request. But if there is data, you first process these records before sending the next poll request.
I have a Kafka cluster. There is only one topic, and 3 different consumer groups are taking the same messages from this topic and processing them differently according to their own logic.
Is there any problem with using the same topic for multiple consumer groups?
I have this doubt because I am trying to implement an exception topic and to reprocess failed messages.
Suppose I have the message "secret" in topic A.
All 3 of my consumer groups took the message "secret".
2 of my consumer groups successfully completed processing the message.
But one of my consumer groups failed to process the message.
So I put the message in the topic "failed_topic".
I want to retry processing this message for the failed consumer group. But if I put this message back in my actual topic A, the other 2 consumer groups will process it a second time.
Can someone please let me know how I can implement reprocessing properly for this scenario?
First of all, in Kafka each consumer group has its own offset for each subscribed topic-partition, and these offsets are managed separately per consumer group. So a failure in one consumer group doesn't affect the other consumer groups.
You can check current offsets for a consumer group with this cli command:
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group
Is there any problem with using the same topic for multiple consumer groups?
No, there is no problem. Actually, this is the normal behaviour of the topic-based publisher/subscriber pattern.
To implement re-processing logic there are some important points to consider:
You should keep calling poll() even while you are re-processing the same message. Otherwise, after max.poll.interval.ms your consumer will be considered dead and its partitions will be revoked.
By calling poll() you get messages that your consumer group has not read yet. So one poll() gives you messages up to max.poll.records, and the next poll() gives you the next group of messages. So to reprocess failed messages you need to call the seek method:
public void seek(TopicPartition partition, long offset): Overrides the fetch offsets that the consumer will use on the next poll(timeout)
Ideally the number of consumers in your consumer group should be equal to the number of partitions of the subscribed topic. Kafka will take care of assigning partitions to consumers evenly (one partition per consumer). But even if this condition is satisfied at the very beginning, after some time a consumer may die and Kafka may assign more than one partition to a single consumer. This can lead to problems: suppose your consumer is responsible for two partitions; when you poll() you will get messages from both of these partitions, and when a message cannot be consumed you should seek on all of the assigned partitions (not just the one the failed message came from). Otherwise you may skip some messages.
Let's try to write some pseudocode to implement the re-processing logic in case of an exception, using this information:
public void consumeLoop() {
    while (true) {
        // max.poll.records = 1, so at most one record is returned per poll
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
        for (ConsumerRecord<String, String> currentRecord : records) {
            TopicPartition topicPartition =
                    new TopicPartition(currentRecord.topic(), currentRecord.partition());
            try {
                processMessage(currentRecord);
            } catch (Exception e) {
                // rewind so that the next poll() returns the same record again
                consumer.seek(topicPartition, currentRecord.offset());
                continue;
            }
            // commit the offset of the next record to be read
            consumer.commitSync(Collections.singletonMap(
                    topicPartition, new OffsetAndMetadata(currentRecord.offset() + 1)));
        }
    }
}
Notes about the code:
max.poll.records is set to one to keep the seek logic simple.
On every exception we seek and poll to get the same message again (we have to keep polling to be considered alive by Kafka).
enable.auto.commit is disabled; offsets are committed manually after successful processing.
Is there any problem with using the same topic for multiple consumer groups?
Not at all
if I put this message back in my actual topic A, the other 2 consumer groups will process it a second time.
Exactly, and you would create a loop (the third group would fail, put it back, the other 2 would accept it, the third would fail again, and so on).
Basically, you are asking about a "dead-letter queue", which would be a specific topic for each consumer group. Kafka can hold tens of thousands of topics, so this shouldn't be an issue in your use case.
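A rough sketch of that pattern (the dead-letter topic name, the process() logic, and the producer are all assumptions), where only the failing group's own dead-letter topic receives the record:
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Called from the failing group's poll loop; dlqProducer is a regular KafkaProducer
void handleWithDeadLetter(ConsumerRecord<String, String> record,
                          KafkaProducer<String, String> dlqProducer) {
    try {
        process(record); // hypothetical business logic of this consumer group
    } catch (Exception e) {
        // Only this group reads from its own dead-letter topic, so the other
        // two consumer groups (which read only topic A) never see the record again
        dlqProducer.send(new ProducerRecord<>("groupC.failed_topic", record.key(), record.value()));
    }
}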
When our application scales up or scales down, the Kafka consumer group rebalances.
For example, when the application scales down, one of the consumers is killed and the partitions which were earlier assigned to this consumer are distributed across the other consumers in the group. When this happens, I see errors in my application logs saying the processing of the in-flight message has been aborted.
I know the entire consumer group pauses (i.e. does not read any new messages) while the consumer group is rebalancing. But what happens to the messages which were read by the consumers before pausing? Can we gracefully handle the messages which are currently being processed?
Thanks in advance!
The messages which were read but not committed will be ignored when a consumer rebalance occurs. After the rebalance is completed, the consumers will resume consuming from the last committed offset, so you won't lose any messages.
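One common way to handle in-flight messages more gracefully is to commit whatever has already been processed from a ConsumerRebalanceListener before the partitions are revoked. A sketch, assuming manual offset management, a KafkaConsumer named consumer, a placeholder topic name, and a map that the poll loop keeps up to date:
import java.util.Collection;
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Offsets of records that have finished processing, updated by the poll loop
Map<TopicPartition, OffsetAndMetadata> processedOffsets = new ConcurrentHashMap<>();

consumer.subscribe(Collections.singleton("my-topic"), new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // Commit what has already been processed before the partitions move away,
        // so the new owner resumes right after the last fully processed record
        consumer.commitSync(processedOffsets);
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // Nothing special needed; consumption resumes from the committed offsets
    }
});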