Does a Kafka consumer read messages from the active segment in the partition? - apache-kafka

Let us say I have a partition (partition-0) with 4 segments that are committed and eligible for compaction. None of these segments will contain duplicate keys, since compaction has already run on all 4 of them.
Now there is an active segment which is not yet closed. If a consumer starts reading data from partition-0, does it also read the messages from the active segment?
Note: My goal is to not provide duplicate data to the consumer for a particular key.

Your concern is valid, as the consumer will also read the messages from the active segment. Log compaction does not guarantee that you have exactly one value for a particular key, but rather at least one.
Here is how Log Compaction is introduced in the documentation:
Log compaction ensures that Kafka will always retain at least the last known value for each message key within the log of data for a single topic partition.
However, you can try to run compaction more frequently to keep your active and non-compacted segments as small as possible. This, however, comes at a cost, as running the log cleaner takes up resources.
There are a number of topic-level configurations related to log compaction. Here are the most important ones (all details can be looked up in the topic configuration documentation); a sketch of applying them follows the list:
delete.retention.ms
max.compaction.lag.ms
min.cleanable.dirty.ratio
min.compaction.lag.ms
segment.bytes
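For illustration, here is a minimal sketch of setting some of these per topic with the Java Admin client. The topic name my-topic, the broker address, and the chosen values are assumptions, not recommendations:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;

public class TuneCompactionConfigs {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic"); // assumption
            Collection<AlterConfigOp> ops = Arrays.asList(
                    // Example values only; tune them to your workload.
                    new AlterConfigOp(new ConfigEntry("min.cleanable.dirty.ratio", "0.1"), AlterConfigOp.OpType.SET),
                    new AlterConfigOp(new ConfigEntry("max.compaction.lag.ms", "3600000"), AlterConfigOp.OpType.SET),
                    new AlterConfigOp(new ConfigEntry("segment.bytes", "1048576"), AlterConfigOp.OpType.SET));
            admin.incrementalAlterConfigs(Collections.singletonMap(topic, ops)).all().get();
        }
    }
}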
However, I am quite convinced that you will not be able to guarantee that your consumer never receives any duplicates from a log-compacted topic.
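If you strictly need at most one value per key, the deduplication has to happen on the consumer side. A minimal sketch, assuming String keys/values and a topic named my-topic, that simply keeps the latest value per key in memory:

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class LatestValuePerKey {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "dedup-example");           // assumption
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        Map<String, String> latest = new HashMap<>(); // last seen value per key
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // assumption
            while (true) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                    // A later record for the same key overwrites the earlier one,
                    // which is what compaction will eventually do on the broker anyway.
                    latest.put(rec.key(), rec.value());
                }
            }
        }
    }
}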

Related

Kafka: trigger a compaction on a low-throughput topic

A common concern for PII on a compacted topic is ensuring that, after some time, the topic gets compacted even if no new message is written to it that would trigger a segment roll and a compaction run.
Using Kafka 2.6.
The topic I have needs to be compacted 1 hour after a PII-cleaning record is written, and as the topic is very low volume there might not be any more writes for a couple of days. Thus both the old and the new key/value stay around.
When reading https://cwiki.apache.org/confluence/display/KAFKA/KIP-354%3A+Add+a+Maximum+Log+Compaction+Lag it's not clear whether a write needs to happen or if some housekeeping would ensure closing the active segment and then running a compaction.
I have configured the topic with:
cleanup.policy=compact
min.cleanable.dirty.ratio=0.01
min.compaction.lag.ms=<1 hour>
max.compaction.lag.ms=<1 hour>
segment.ms=<1 hour>
segment.bytes=<1 MB>
What am I missing?

Delete a specific record in a Kafka topic using compaction

I am trying to delete a specific message or record from a Kafka topic. I understand that Kafka was not built to do that. But is it possible to use topic compaction with the ability to replace a record with an empty record using a specific Kafka key? How can this be done?
Thank you
Yes, you can get rid of a particular message if you have a compacted topic.
In that case your message key becomes the identifier. If you want to delete a particular message, you send a message with the same key and a null value to the topic. This is called a tombstone message. Kafka will keep this tombstone around for a configurable amount of time (so your consumers can deal with the deletion). After this set amount of time, the cleaner thread will remove the tombstone message, and the key will be gone from the partition in Kafka.
In general, please note that the old (to-be-deleted) message will not disappear immediately. Depending on the configuration, it can take some time before the replacement of the individual message happens.
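For illustration, here is a minimal sketch of sending such a tombstone with the Java producer; the topic name and key are placeholders:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class SendTombstone {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // A null value marks the key for deletion: once the cleaner runs and
            // delete.retention.ms has passed, the key disappears from the partition.
            producer.send(new ProducerRecord<>("my-topic", "key-to-delete", null)); // placeholders
            producer.flush();
        }
    }
}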
I found this summary of the configurations quite helpful (link to blog):
1) To activate compaction, the cleanup policy cleanup.policy=compact should be set.
2) The consumer sees all tombstones as long as it reaches the head of the log in a period less than the topic config delete.retention.ms (the default is 24 hours).
3) The number of cleaner threads is configurable through the log.cleaner.threads config.
4) The cleaner thread then chooses the log with the highest dirty ratio:
dirty ratio = number of bytes in the head / total number of bytes in the log (head + tail)
5) The topic config min.compaction.lag.ms guarantees a minimum period that must pass before a message can be compacted.
6) To delay the start of compacting records after they are written, use the broker config log.cleaner.min.compaction.lag.ms (the topic-level counterpart is min.compaction.lag.ms). Records won't get compacted until after this period. The setting gives consumers time to get every record.
Log compaction is introduced in the documentation as:
Log compaction ensures that Kafka will always retain at least the last known value for each message key within the log of data for a single topic partition.
How the cleaner works is described here:
Log compaction is handled by the log cleaner, a pool of background threads that recopy log segment files, removing records whose key appears in the head of the log. Each compactor thread works as follows:
1) It chooses the log that has the highest ratio of log head to log tail
2) It creates a succinct summary of the last offset for each key in the head of the log
3) It recopies the log from beginning to end, removing keys which have a later occurrence in the log. New, clean segments are swapped into the log immediately, so the additional disk space required is just one additional log segment (not a full copy of the log).
4) The summary of the log head is essentially just a space-compact hash table. It uses exactly 24 bytes per entry. As a result, with 8GB of cleaner buffer one cleaner iteration can clean around 366GB of log head (assuming 1k messages).
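To make the arithmetic in point 4 concrete: an 8GB buffer holds 8 * 2^30 / 24 ≈ 358 million entries, and at roughly 1KB per message that corresponds to about 358 million * 1KB ≈ 366GB of log head per cleaner iteration.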

Kafka compaction for de-duplication

I'm trying to understand how Kafka compaction works and have the following question: does Kafka guarantee uniqueness of keys for messages stored in a topic with compaction enabled?
Thanks!
The short answer is no.
Kafka doesn't guarantee uniqueness for keys stored in a topic, regardless of which cleanup policy is enabled.
In Kafka you have two types of cleanup.policy:
delete - after the configured retention time, messages won't be available anymore. There are several properties that can be used for that: log.retention.hours, log.retention.minutes, log.retention.ms. By default log.retention.hours is set to 168, which means that messages older than 7 days will be deleted.
compact - for each key, at least one message will be available. In some situations it can be exactly one, but in most cases it will be more. The compaction process runs periodically in the background; it copies log parts, removing duplicates and keeping only the last value for each key.
If you want to read only one value for each key, you have to use the KTable<K,V> abstraction from Kafka Streams, as sketched below.
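For illustration, a minimal Kafka Streams sketch along those lines; the topic name, application id, and serdes are assumptions:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KTable;
import java.util.Properties;

public class LatestByKey {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "latest-by-key-example"); // assumption
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");     // assumption
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // A KTable is a changelog view of the topic: it holds only the latest value per key.
        KTable<String, String> table = builder.table("my-topic"); // assumption
        table.toStream().foreach((k, v) -> System.out.println(k + " -> " + v)); // prints each update

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}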
Related question regarding latest value for key and compaction:
Kafka only subscribe to latest message?
Looking at the 4 guarantees of Kafka compaction, number 4 states:
Any consumer progressing from the start of the log will see at least the final state of all records in the order they were written. Additionally, all delete markers for deleted records will be seen, provided the consumer reaches the head of the log in a time period less than the topic's delete.retention.ms setting (the default is 24 hours). In other words: since the removal of delete markers happens concurrently with reads, it is possible for a consumer to miss delete markers if it lags by more than delete.retention.ms.
So, you will have more than one value for a key whenever the head of the log has not been compacted yet.
As I understand it, if you set a 24h delete retention (delete.retention.ms=86400000), you'll have a unique value for a single key among all messages older than 24h. That's your "at least", but not "only", as many other messages for the same key may have arrived during the last 24 hours.
So it is guaranteed that you'll catch at least one value per key, but not just the last, because compaction hasn't acted on the most recent messages.
Edit: as cricket's comment states, even if you set a delete retention property of 1 day, log.roll.ms is what defines when a log segment is closed, based on the record timestamps. As this last (active) segment is never compacted, it becomes the second factor that prevents you from having just the last value for a known key. If your topic starts at T0, then messages written after T0 + log.roll.ms will be in the open log segment and thus not compacted.

How does Kafka replay work in the case of log compaction?

In Kafka, if log compaction is enabled, only the most recent value for each key is stored. If we try to replay these messages, will it just replay the latest messages? How exactly does Kafka replay work?
Yes. For a duplicated key, records at earlier offsets are dropped and the record at the newest offset is kept. The consumer simply skips over the resulting gaps in the broker offsets and reads all messages that are still available.
Also, log compaction happens on a schedule, so you might see the same key within a partition for a certain amount of time, depending on the properties defined on the broker/topic.
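For illustration, a minimal sketch of replaying partition 0 of a compacted topic from the beginning with the Java consumer; the topic name and broker address are placeholders. Note that the code does not assume contiguous offsets:

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class ReplayCompactedTopic {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0); // assumption
            consumer.assign(Collections.singletonList(tp));
            consumer.seekToBeginning(Collections.singletonList(tp));
            while (true) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                    // Offsets may jump where compaction removed older records for a key.
                    System.out.println(rec.offset() + ": " + rec.key() + " -> " + rec.value());
                }
            }
        }
    }
}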

Kafka retention AFTER initial consuming

I have a Kafka cluster with one consumer, which is processing TBs of data every day. Once a message is consumed and committed, it can be deleted immediately (or after a retention of a few minutes).
It looks like the log.retention.bytes and log.retention.hours configurations count from message creation, which is not good for me.
In cases where the consumer is down for maintenance or due to an incident, I want to keep the data until it comes back online. If I happen to run out of space, I want to refuse to accept new data from the producers, and NOT delete data that wasn't consumed yet (so log.retention.bytes doesn't help me).
Any ideas?
If you can ensure your messages have unique keys, you can configure your topic to use compaction instead of a timed-retention policy. Then have your consumer, after having processed each message, send a message back to the same topic with the message key and a null value. Kafka would compact away such messages. You can tune the compaction parameters to your needs, including the log segment file size: since the head segment is never compacted, you may want to set it to a smaller size if you want compaction to kick in sooner.
However, as I mentioned before, this would only work if messages have unique keys; otherwise you can't simply turn on compaction, as that would cause loss of previous messages with the same key during periods when your consumer is down (or has fallen behind the head segment).
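A minimal sketch of this consume-then-tombstone pattern; all names are placeholders, and error handling and offset management are omitted:

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class ConsumeThenTombstone {
    public static void main(String[] args) {
        Properties cProps = new Properties();
        cProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        cProps.put(ConsumerConfig.GROUP_ID_CONFIG, "cleanup-example");         // assumption
        cProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        cProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        Properties pProps = new Properties();
        pProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        pProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        pProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(pProps)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // assumption
            while (true) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                    process(rec); // your processing logic (placeholder)
                    // Tombstone the key so compaction can reclaim the space later.
                    producer.send(new ProducerRecord<>("my-topic", rec.key(), null));
                }
            }
        }
    }

    private static void process(ConsumerRecord<String, String> rec) {
        // placeholder for the real work
    }
}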