Consume messages without committing from Kafka 10 consumer - apache-kafka

I have a requirement to read messages from a topic, batch them, and push the batch to an external system. If the batch fails for any reason, I need to consume the same set of messages again and repeat the process. So for every batch, the from and to offsets for each partition are stored in a database. In order to achieve this, I create one Kafka consumer per partition by assigning the partition to it; based on the previously stored offsets, the consumer seeks to that position and starts reading. I have turned off auto-commit and I don't commit offsets from the consumer. For every batch, I create a new consumer per partition, read messages from the last stored offset, and publish them to the external system. Do you see any problems with consuming messages without committing offsets and using the same consumer group across batches, given that at any point there won't be more than one consumer per partition?

Your design seems reasonable to me.
Committing offsets to Kafka is just a convenient built-in mechanism to keep track of offsets. However, there is no requirement whatsoever to use it -- you can use any other mechanism to track offsets, too (like a DB, as in your case).
Furthermore, if you assign partitions manually, there is no group management anyway, so the group.id parameter has no effect. See http://docs.confluent.io/current/clients/consumer.html for more details.
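A minimal sketch of that design with the plain Java consumer API (the topic name, partition number, and DB lookup are placeholders, not part of the question):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // never commit to Kafka
props.put(ConsumerConfig.GROUP_ID_CONFIG, "batch-reader");    // no effect with manual assignment

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    TopicPartition tp = new TopicPartition("my-topic", 0);  // one consumer per partition
    consumer.assign(Collections.singletonList(tp));         // manual assignment, no group management
    consumer.seek(tp, loadLastOffsetFromDb(tp));            // hypothetical DB lookup
    ConsumerRecords<String, String> batch = consumer.poll(Duration.ofSeconds(1));
    // push the batch to the external system; on success, store the new
    // end offset in the DB instead of committing to Kafka
}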

In Kafka version 2 I achieved this behaviour without the need for a database to store the offsets.
The following is a configuration for spring-boot-kafka, but it should also work with any Kafka consumer API:
spring:
  kafka:
    bootstrap-servers: ...
    consumer:
      value-deserializer: ...
      max-poll-records: 1000
      enable-auto-commit: false
      fetch-min-size: 262144 # 1/4 MB
      group-id: ...
      fetch-max-wait: 10000 # we consume every 10 s, or sooner once 1/4 MB or 1000 records have accumulated
      auto-offset-reset: earliest
    listener:
      type: batch
      concurrency: 7
      ack-mode: manual
This gives me the messages in batches of at most 1000 records (depending on load). I then write these records asynchronously to a database and count how many success callbacks I get. If the number of successful writes equals the received batch size, I acknowledge the batch, i.e. I commit the offsets. This design was very reliable even in a high-load production environment.
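With the config above, a batch listener along these lines does the manual acknowledgment (a sketch; the topic name and the async-write helper are placeholders, not part of the original setup):

import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;

@KafkaListener(topics = "my-topic") // placeholder topic
public void onBatch(List<ConsumerRecord<String, String>> records, Acknowledgment ack) {
    // hypothetical helper: write records asynchronously and count success callbacks
    long successes = writeAsyncAndCountSuccesses(records);
    if (successes == records.size()) {
        ack.acknowledge(); // commits the batch's offsets (ack-mode: manual)
    }
    // no ack otherwise: the offsets stay uncommitted and the batch is re-delivered
}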

Related

Is it possible to make a Kafka Consumer override/ignore its configurations when doing a records poll?

I have a Kafka consumer that should consume a minimum of 1 MB worth of records on each poll. This data is then written to file and stored partitioned by date - for example, records consumed during 2022-09-22 should be written to a file and stored in the date_id=20220922 folder. The file size should be a minimum of 1 MB.
The configuration properties fetch.min.bytes and fetch.max.wait.ms are tuned to get the desired behavior. The problem, though, arrives when a new day starts. On a day change, the consumer should consume the remaining records on the topic (less than 1 MB) without having to wait for the poll size threshold to be met or for the wait time to run out. The consumer should do a kind of "force fetch" of the remaining records available on the topic.
Is it possible to override the configuration of the consumer to achieve this behavior?
The properties are what they are - you cannot change them at runtime without stopping the consumer and creating a new one with different config settings.
Worth mentioning that the HDFS/S3 sink connectors from Confluent already have a date-based directory partitioner. They also work for local storage, but distributed storage makes more sense when your Kafka consumers are distributed.
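So the closest thing to a "force fetch" on day change is recreating the consumer with different fetch settings, something like the fragment below (dayChanged() is a hypothetical check; props and consumer come from the surrounding setup):

if (dayChanged()) { // hypothetical date-rollover check
    consumer.close();
    // fetch.min.bytes=1 makes poll() return whatever is available immediately
    props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, "1");
    consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
}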

How does Spring Kafka manual commit works in Batch listener mode

I have a topic with 2 partitions. I am using Kafka batch listener mode in my consumer application. Since I am using a single consumer application, I will receive messages from both partitions. Once the consumer application has processed the list of messages, I want to manually commit the largest offset of each partition.
If I use MANUAL_IMMEDIATE mode, will it commit the highest offset of each partition? If not, what approach should I use?
Yes; acknowledgment.acknowledge() will commit the highest offset for each partition for which records were received.
However, the container will do it automatically for you if you use the default AckMode.BATCH. This is simpler than dealing with manual acks.
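For completeness, manual ack mode is configured on the container factory, roughly like this (a sketch assuming a standard Spring Boot setup; bean and type names follow the usual Spring Kafka conventions):

import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties;

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.setBatchListener(true); // deliver each poll's records as a List
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
    return factory;
}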

Is it possible to control how often a Spring Kafka Message Listener switches between its assigned partitions?

When a Spring Kafka MessageListener is consuming messages from multiple partitions, it keeps processing messages from one partition until there are no more, and only after that does it continue with the next partition (based on my observations).
Is it possible to set a max number of messages/batches and tell the Listener to switch faster to the next partition rather than later?
This would improve fairness and consume evenly from all assigned partitions.
"switch faster to the next partition, consume evenly from all assigned partitions"
I don't think Kafka has any properties for this; see the Kafka consumer configs: https://kafka.apache.org/documentation/#consumerconfigs
It's a bit weird, though. You could see a partition replica in Kafka as a log file. Your consumer's poll runs in one thread, and for better performance it should consume from one file, with the next poll consuming from another file, rather than splitting each poll evenly across many partitions. Eventually, you still need to consume all of the messages on the topic.

Apache Kafka the order of messages in partition guarantee

Read this article about message ordering in a topic partition: https://blog.softwaremill.com/does-kafka-really-guarantee-the-order-of-messages-3ca849fd19d2
Allowing retries without setting max.in.flight.requests.per.connection to 1 will potentially change the ordering of records because if two batches are sent to a single partition, and the first fails and is retried but the second succeeds, then the records in the second batch may appear first.
According to it, there are two producer configurations that can achieve the ordering guarantee:
max.in.flight.requests.per.connection=1 // can impact producer throughput
or alternatively
enable.idempotence=true
max.in.flight.requests.per.connection // to be less than or equal to 5
retries // to be greater than 0
acks=all
Can anybody explain how the second configuration achieves the ordering guarantee? Also, does the second configuration enable exactly-once semantics?
Idempotence (exactly-once, in-order semantics per partition):
Idempotent delivery enables the producer to write a message to Kafka exactly once to a particular partition of a topic during the lifetime of a single producer, without data loss and in order per partition.
Idempotence is one of the key features needed to achieve exactly-once semantics in Kafka. Setting enable.idempotence=true eventually gets you exactly-once semantics per partition, meaning no duplicates and no data loss for a particular partition. If an error occurs and the producer sends a message multiple times, it will still be written to Kafka only once.
Kafka producers use the concepts of a PID and sequence numbers to achieve idempotence, as explained below:
PID and Sequence Number
Idempotent producers use a producer ID (PID) and a sequence number when producing messages. The producer keeps incrementing the sequence number on each message published, and the numbers are tied to a unique PID. The broker always compares the current sequence number with the previous one: it rejects the write if the new number is not exactly one greater than the previous one, which avoids duplication (number too low) and at the same time detects lost messages (number too high).
In a failure scenario the broker will still keep track of the sequence numbers and avoid duplication.
Note: when the producer restarts, a new PID gets assigned, so idempotency is guaranteed only within a single producer session.
If you use enable.idempotence=true, you can keep max.in.flight.requests.per.connection at up to 5 and still get the ordering guarantee, which brings better parallelism and improves performance.
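In producer code, the second configuration looks roughly like this (a sketch; broker address, topic, and serializers are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");                  // PID + sequence numbers
props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "5");         // <= 5 is safe with idempotence
props.put(ProducerConfig.RETRIES_CONFIG, String.valueOf(Integer.MAX_VALUE));  // retries > 0
props.put(ProducerConfig.ACKS_CONFIG, "all");

try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
    producer.send(new ProducerRecord<>("my-topic", "key", "value")); // placeholder topic
}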
The idempotence feature was introduced in Kafka 0.11+. Before that, we could achieve some level of guarantee by using max.in.flight.requests.per.connection together with the retries and acks settings:
max.in.flight.requests.per.connection=1
retries set to a big number
acks=all
max.in.flight.requests.per.connection=1 makes sure that while messages are being retried, additional messages will not be sent.
This guarantees at-least-once delivery, but it comes at a cost in performance and throughput, which is what motivated the enable.idempotence feature: better performance while still guaranteeing ordering.
exactly_once: To achieve exactly-once semantics along with idempotence, we need to set the consumer isolation level to read_committed; the following parameters are then fixed and cannot be overridden:
isolation.level=read_committed (consumers will always read committed data only)
enable.idempotence=true (the producer will always have idempotence enabled)
max.in.flight.requests.per.connection=5 (the producer will have at most five in-flight requests per connection)
enable.idempotence is a newer setting that was introduced as part of KIP-98 (implemented in Kafka 0.11+). Before it, users would have to set max.in.flight to 1.
The way it works (abbreviated) is that producers now put sequence numbers on outgoing produce batches, and brokers keep track of these sequence numbers per producer connected to them. If a broker receives a batch out of order (say batch 3 after batch 1), it rejects it and expects to see batch 2 (which the producer will retransmit). For the complete details you should read KIP-98.

Kafka one consumer with two different checkpoints

I have a Kafka consumer project which consumes data from a specific Kafka topic. 90% of the records are processed as soon as I get them, but I have to delay processing the remaining records (10%).
Since these records need to be delayed, I can't commit their offsets, which may cause Kafka to reassign the partitions to other nodes. To avoid that, I could read the same topic twice and delay the data-fetching part in the second consumer, but that requires deserializing everything twice and so comes with an overhead.
Is it possible to read records using a single consumer but have two separate commits? It would basically be like having two different consumers in terms of commits: consumer.poll would be called on a single consumer, but there would be two consumer.commitSync calls for each batch. That would help me avoid the extra deserialization and also the network cost.
Below are the things you can do to achieve this.
Create a pipeline with two topics (T1, T2): push all the messages that can be processed immediately (90%) to topic T1 and the rest of the messages (10%) to topic T2.
Make your Kafka consumer configurable, i.e. let the polling interval, batch size, and batch timeout be passed in whenever you start the consumer.
If the consumption of your second topic is time-based, schedule a cron job which starts and stops your consumer of topic T2 when required.
Regarding consumer groups, you can place both of your topics in the same group or in different ones; it's completely your choice.
This way you keep the topics clean, and every time you need to process the messages you can do so easily, having set up the pipeline just once.
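As an aside, the plain Java consumer does support commitSync with explicit per-partition offsets, but Kafka keeps only one committed offset per group and partition, so a second commit for the same partition simply overwrites the first; that is why the two-topic pipeline is suggested. The splitting step could be as simple as this fragment (needsDelay() is a hypothetical predicate for the delayed 10%):

if (needsDelay(record)) {
    producer.send(new ProducerRecord<>("T2", record.key(), record.value())); // process later
} else {
    producer.send(new ProducerRecord<>("T1", record.key(), record.value())); // process now
}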