I'm trying to understand how Kafka compaction works and have the following question: does Kafka guarantee uniqueness of keys for messages stored in a topic with compaction enabled?
Thanks!
The short answer is no.
Kafka does not guarantee key uniqueness for messages stored in a topic with compaction enabled.
In Kafka you have two cleanup.policy options:
delete - messages older than the configured retention time are no longer available. There are several properties that can be used for that: log.retention.hours, log.retention.minutes, log.retention.ms. By default log.retention.hours is set to 168, which means that messages older than 7 days will be deleted.
compact - at least one message will be available for each key. In some situations it will be exactly one, but in most cases it will be more. The compaction process runs periodically in the background: it recopies log segments, removing duplicates and leaving only the latest value for each key. A minimal topic-creation sketch is shown below.
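For illustration, here is a minimal sketch of creating one topic per cleanup policy with the Java AdminClient. The broker address and topic names are assumptions for the example, not anything from the question:

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class CreateTopics {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // cleanup.policy=delete: messages older than 7 days are removed.
            NewTopic deleted = new NewTopic("events-deleted", 3, (short) 1)
                    .configs(Map.of(
                            TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_DELETE,
                            TopicConfig.RETENTION_MS_CONFIG, String.valueOf(7L * 24 * 60 * 60 * 1000)));

            // cleanup.policy=compact: at least the latest value per key is retained.
            NewTopic compacted = new NewTopic("events-compacted", 3, (short) 1)
                    .configs(Map.of(
                            TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT));

            admin.createTopics(List.of(deleted, compacted)).all().get();
        }
    }
}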
If you want to read only one value for each key, you have to use the KTable<K,V> abstraction from Kafka Streams.
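A minimal Kafka Streams sketch of that idea; the application id, broker address, topic name and String serdes are assumptions for the example:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KTable;

public class LatestValuePerKey {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "latest-value-per-key");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // A KTable is a changelog view of the topic: per key, only the latest value is kept.
        KTable<String, String> table = builder.table("my-compacted-topic");

        // Downstream logic only ever sees the current value per key.
        table.toStream().foreach((key, value) -> System.out.println(key + " -> " + value));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}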
Related question regarding latest value for key and compaction:
Kafka only subscribe to latest message?
Looking at the 4 guarantees of Kafka compaction, number 4 states:
Any consumer progressing from the start of the log will see at least
the final state of all records in the order they were written.
Additionally, all delete markers for deleted records will be seen,
provided the consumer reaches the head of the log in a time period
less than the topic's delete.retention.ms setting (the default is 24
hours). In other words: since the removal of delete markers happens
concurrently with reads, it is possible for a consumer to miss delete
markers if it lags by more than delete.retention.ms.
So you may have more than one value for a key as long as the head of the topic has not yet been cleaned up under the delete.retention.ms policy.
As I understand it, if you set a 24h delete retention (delete.retention.ms=86400000), you'll have a single value per key only among the messages that are more than 24 hours old. That's your "at least", but not "only", since many other messages for the same key may have arrived during the last 24 hours.
So it is guaranteed that you'll catch at least the latest value, but not only the latest, because the cleaner has not acted on the recent messages yet.
Edit: as cricket's comment states, even if you set a delete retention property of 1 day, log.roll.ms is what defines when a log segment is closed, based on the message's timestamp. Since this last (active) segment is never compacted, it becomes the second factor that prevents you from having just the latest value for a known key. If your topic starts at T0, then messages after T0 + log.roll.ms will be in the open log segment and thus not compacted.
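If you want the active segment to roll (and so become eligible for compaction) sooner, the topic-level counterpart of log.roll.ms is segment.ms. A small sketch with the Java AdminClient; the topic name and the chosen values are placeholders:

import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class ShrinkActiveSegment {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-compacted-topic");

            // Roll the active segment after 10 minutes and keep tombstones for 24 hours.
            Collection<AlterConfigOp> ops = List.of(
                    new AlterConfigOp(new ConfigEntry("segment.ms", "600000"), AlterConfigOp.OpType.SET),
                    new AlterConfigOp(new ConfigEntry("delete.retention.ms", "86400000"), AlterConfigOp.OpType.SET));

            admin.incrementalAlterConfigs(Map.of(topic, ops)).all().get();
        }
    }
}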
Let us say I have a partition (partition-0) with 4 segments that are committed and are eligible for compaction. So all these segments will not have any duplicate data since the compaction is done on all the 4 segments.
Now, there is an active segment which is still not closed. Meanwhile, if the consumer starts reading the data from the partition-0, does it also read the messages from active segment?
Note: My goal is to not provide duplicate data to the consumer for a particular key.
Your concerns are valid as the Consumer will also read the messages from the active segment. Log compaction does not guarantee that you have exactly one value for a particular key, but rather at least one.
Here is how Log Compaction is introduced in the documentation:
Log compaction ensures that Kafka will always retain at least the last known value for each message key within the log of data for a single topic partition.
However, you can try to get compaction running more frequently so that your active and not-yet-compacted segments stay as small as possible. This, however, comes at a cost, as running the log cleaner takes up resources.
There are a lot of configurations at topic level that are related to log compaction. Here are the most important ones (all details can be looked up here):
delete.retention.ms
max.compaction.lag.ms
min.cleanable.dirty.ratio
min.compaction.lag.ms
segment.bytes
However, I am quite convinced that you will not be able to guarantee that your consumer never gets any duplicates with a log-compacted topic.
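If having exactly one value per key really matters to the application, one workaround (not something Kafka guarantees for you) is to deduplicate on the consumer side. A rough sketch, assuming String keys and values, a topic called my-compacted-topic, and that keeping the latest value per key in memory is acceptable:

import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class DedupByKey {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "dedup-example");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        Map<String, String> latestByKey = new HashMap<>();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-compacted-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Later records for the same key overwrite earlier ones, so any
                    // duplicates still sitting in the head of the log are harmless here.
                    latestByKey.put(record.key(), record.value());
                }
                // ... hand latestByKey (or just the changed entries) to the application ...
            }
        }
    }
}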
I am trying to delete a specific message or record from a Kafka topic. I understand that Kafka was not built to do that. But is it possible to use topic compaction with the ability to replace a record with an empty record using a specific Kafka key? How can this be done?
Thank you
Yes, you could get rid of a particular message if you have a compacted topic.
In that case your message key becomes the identifier. If you then want to delete a particular message, you need to send a message with the same key and an empty (null) value to the topic. This is called a tombstone message. Kafka will keep this tombstone around for a configurable amount of time (so your consumers can deal with the deletion). After this set amount of time, the cleaner thread will remove the tombstone message, and the key will be gone from the partition in Kafka.
In general, please note that the old (to-be-deleted) message will not disappear immediately. Depending on the configuration, it can take some time before the individual message is actually replaced.
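A minimal producer sketch for sending such a tombstone; the broker address, topic name and key are placeholders, and the topic is assumed to have cleanup.policy=compact:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SendTombstone {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Same key as the record to "delete", null value: this is the tombstone.
            producer.send(new ProducerRecord<>("my-compacted-topic", "key-to-delete", null));
            producer.flush();
        }
    }
}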
I found this summary on the configurations quite helpful (link to blog)
1) To activate compaction, the cleanup policy cleanup.policy=compact should be set.
2) The consumer sees all tombstones as long as the consumer reaches head of a log in a period less than the topic config delete.retention.ms (the default is 24 hours).
3) The number of cleaner threads is configurable through the log.cleaner.threads config.
4) The cleaner thread then chooses the log with the highest dirty ratio.
dirty ratio = the number of bytes in the head / total number of bytes in the log (tail + head); see the worked example after this list.
5) The topic config min.compaction.lag.ms is used to guarantee a minimum period that must pass before a message can be compacted.
6) To set a delay before records start being compacted after they are written, use the broker config log.cleaner.min.compaction.lag.ms (or the topic-level min.compaction.lag.ms). Records won't get compacted until after this period. The setting gives consumers time to get every record.
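Worked example for the dirty ratio: if a partition log has 800 MB in the already-cleaned tail and 200 MB in the head, the dirty ratio is 200 / (800 + 200) = 0.2. With the default min.cleanable.dirty.ratio of 0.5, the cleaner would not yet pick that log.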
The log compaction is introduced as
Log compaction ensures that Kafka will always retain at least the last known value for each message key within the log of data for a single topic partition.
Its guarantees are listed here:
Log compaction is handled by the log cleaner, a pool of background threads that recopy log segment files, removing records whose key appears in the head of the log. Each compactor thread works as follows:
1) It chooses the log that has the highest ratio of log head to log tail
2) It creates a succinct summary of the last offset for each key in the head of the log
3) It recopies the log from beginning to end removing keys which have a later occurrence in the log. New, clean segments are swapped into the log immediately so the additional disk space required is just one additional log segment (not a full copy of the log).
4) The summary of the log head is essentially just a space-compact hash table. It uses exactly 24 bytes per entry. As a result with 8GB of cleaner buffer one cleaner iteration can clean around 366GB of log head (assuming 1k messages).
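For the arithmetic behind that figure: 8 GiB / 24 bytes ≈ 358 million entries in the offset map, and at roughly 1 KiB per message that corresponds to about 358 million × 1 KiB ≈ 366 GB of log head per cleaning pass.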
In all the Kafka tutorials I've read so far they all mention "Kafka partitions are immutable". However, I also read from this site https://towardsdatascience.com/log-compacted-topics-in-apache-kafka-b1aa1e4665a7 that from time to time, Kafka will remove older messages in the partition (depending on the retention time you set in the log-compact command). You can see from the screenshot below that data within the partition has clearly changed after removing the duplicate keys in the partition:
So my question is what exactly does it mean to say "Kafka partitions are immutable"?
Kafka partitions are described as "immutable" in the sense that a producer can only append messages to a partition; it cannot change the value of an existing message (i.e. one with the same key). The partition itself is a commit log that works in append-only mode from a producer's point of view.
Of course, it means that without any kind of mechanisms like deletion (by retention time) and compaction, the partition size could grow endlessly.
At this point you could think .. "so it's not immutable!" as you mentioned.
Well, as I said the immutability is from a producer's point of view. Deletion and compaction are administrative operations.
For example, deleting records is also possible using the Admin Client API ... but we are always talking about administrative stuff, not producer/consumer related stuff.
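For reference, a minimal sketch of that administrative path. Note that it truncates a partition up to a given offset rather than deleting one specific record; the broker address, topic, partition and offset are placeholders:

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

public class DeleteOldRecords {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Delete all records in partition 0 of "my-topic" with an offset lower than 42.
            TopicPartition tp = new TopicPartition("my-topic", 0);
            admin.deleteRecords(Map.of(tp, RecordsToDelete.beforeOffset(42L))).all().get();
        }
    }
}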
If you think about compaction and how it works: the producer initially sends, for example, a message with key = A and payload = "Hello". After a while, in order to "update" the value, it sends a new message with the same key = A and payload = "Hi" ... but actually it's really a new message appended at the end of the partition log; it is the compaction thread in the broker that does the work of deleting the old message with the "Hello" payload, leaving just the new one.
In the same way a producer can send a message with key = A and payload = null. This is the way to actually delete the messages for that key (such a null-valued record is called a "tombstone"). The producer is still just appending a new message to the partition; it is the compaction thread that will delete the earlier messages with key = A once it sees the tombstone (and, after delete.retention.ms, the tombstone itself).
Individual messages are immutable.
Compaction and retention will drop messages, but they don't alter messages or offsets.
Data in Kafka is stored in topics; topics are partitioned; each partition is further divided into segments; and finally each segment has a log file to store the actual messages, an index file to store the positions of the messages within the log file, and a timeindex file. For example:
$ ls -l /mnt/data/kafka/*consumer*/00000000004618814867*
-rw-r--r-- 1 kafka kafka 10485760 Oct 3 23:41 /mnt/data/kafka/__consumer_offsets-7/00000000004618814867.index
-rw-r--r-- 1 kafka kafka 8189913 Oct 3 23:41 /mnt/data/kafka/__consumer_offsets-7/00000000004618814867.log
-rw-r--r-- 1 kafka kafka 10485756 Oct 3 23:41 /mnt/data/kafka/__consumer_offsets-7/00000000004618814867.timeindex
In the scenario where log.cleanup.policy (or cleanup.policy on a particular topic) is set to delete, complete log segments (one or more) are deleted outright.
In the scenario where the parameter is set to compact, compaction is done in the background by periodically recopying log segments: the log is recopied from beginning to end, removing keys which have a later occurrence in the log. New, clean segments are swapped into the log immediately, so the additional disk space required is just one additional log segment (not a full copy of the log). In other words, the old segment is replaced by a new, compacted segment.
See more about distributed logs:
https://kafka.apache.org/documentation.html#compaction
https://medium.com/@durgaswaroop/a-practical-introduction-to-kafka-storage-internals-d5b544f6925f
https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying
https://bookkeeper.apache.org/distributedlog/docs/0.5.0/user_guide/architecture/main
https://bravenewgeek.com/building-a-distributed-log-from-scratch-part-1-storage-mechanics/
Immutability is a property of the records stored within the partitions themselves. When the source (documentation or articles) states immutability within the context of topics or partitions, they are usually referring to either one of two things, both of which are correct in a limited context:
Records are immutable. Once a record is written, its contents can never be altered. A record can be deleted by the broker when either (a) the contents of the partition are pruned due to the retention limit, (b) a new record is added for the same key that supersedes the original record and compaction takes place, or (c) a record is added for the same key with a null value, which acts as a tombstone record, deleting the original without adding a replacement.
Partitions are append-only from a client's perspective, in that a client is not permitted to modify records or directly remove records from a partition, only append to the partition. This is somewhat debatable, because a client can induce the deletion of a record through the compaction feature, although this operation is asynchronous and the client cannot specify precisely which record should be deleted.
Assume I'm using log.message.timestamp.type=LogAppendTime.
Also assume number of messages per topic/partition during first read:
topic0:partition0: 5
topic0:partition1: 0
topic0:partition2: 3
topic1:partition0: 2
topic1:partition1: 0
topic1:partition2: 4
and during second read:
topic0:partition0: 5
topic0:partition1: 2
topic0:partition2: 3
topic1:partition0: 2
topic1:partition1: 4
topic1:partition2: 4
If I read first message from each partition, does Kafka guarantee that reading again from each partition won't return a message that's older than those I read during first read?
Focus on topic0:partition1 and topic1:partition1, which didn't have any messages during the first read, but do during the second read.
Kafka guarantees message ordering at the partition level, so your use case perfectly fits Kafka's architecture.
There are some concepts to explain in here. First of all, you have the starting consumer position (when you first launch a new consumer group), defined by the auto.offset.reset parameter.
This will kick in only if there's no saved offset for that group, or if a saved offset is not valid anymore (e.g., if it was already deleted by retention policies). You should normally only worry about this if you launch a new consumer group (and you want to decide whether it starts from the oldest messages or from the present, i.e. the newest ones).
Regarding your example, in normal conditions (there are no consumer shutdowns, etc.), you have nothing to worry about. Consumers within the same consumer group will only read their messages once, no matter the number of partitions or the number of consumers. These consumers remember their last read offset and periodically save it in the __consumer_offsets topic.
There are 2 properties that define this periodical recording:
enable.auto.commit
Setting it to true (which is the default value) will allow the automatic commit to the __consumer_offsets topic.
auto.commit.interval.ms
Defines when the offsets are committed. For example, with a value of 10000, your consumer offsets will be stored every 10 seconds.
You can also set enable.auto.commit to false and store your offsets in your own way (e.g. in a database), but this is a more special use case.
The auto offset committing will allow you to stop your consumers, and start them again later without losing any message nor reprocessing already processed ones (it's like a mark in a book's page). If you don't stop your consumers (and without any errors from broker/zookeeper/consumers), even less worries for you.
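A small consumer configuration sketch showing these properties together; the broker address, group id and topic names are placeholders:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OffsetConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // Only used when the group has no valid saved offset yet.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        // Commit consumed offsets to __consumer_offsets automatically...
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        // ...every 10 seconds.
        props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "10000");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("topic0", "topic1"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    System.out.printf("%s-%d@%d: %s%n",
                            record.topic(), record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}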
For more info, you can take a look here: https://docs.confluent.io/current/clients/consumer.html#concepts
Hope it helps!
I have a Kafka cluster with one consumer, which is processing TB's of data every day. Once a message is consumed and committed, it can be deleted immediately (or after a retention of few minutes).
It looks like the log.retention.bytes and log.retention.hours configurations count from message creation, which is not good for me.
In case where the consumer is down for maintenance/incident, I want to keep the data until it comes back online. If I happen to run out of space, I want to refuse accepting new data from the producers, and NOT delete data that wasn't consumed yet (so the log.retention.bytes doesn't help me).
Any ideas?
If you can ensure your messages have unique keys, you can configure your topic to use compaction instead of a time-based retention policy. Then have your consumer, after having processed each message, send a message back to the same topic with the message's key and a null value. Kafka would compact away such messages. You can tune the compaction parameters to your needs (and the log segment file size: since the head segment is never compacted, you may want to set it to a smaller size if you want compaction to kick in sooner).
However, as I mentioned before, this would only work if messages have unique keys, otherwise you can't simply turn on compaction as that would cause loss of previous messages with the same key during periods when your consumer is down (or has fallen behind the head segment).
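A rough sketch of that pattern (process a record, then write a tombstone for its key back to the same topic). The broker address, topic and group id are placeholders, and error handling is omitted:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ConsumeThenTombstone {
    static final String TOPIC = "my-compacted-topic";

    public static void main(String[] args) {
        Properties cProps = new Properties();
        cProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        cProps.put(ConsumerConfig.GROUP_ID_CONFIG, "tombstoning-processor");
        cProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        cProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        cProps.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        Properties pProps = new Properties();
        pProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        pProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        pProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(pProps)) {
            consumer.subscribe(List.of(TOPIC));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    if (record.value() == null) continue; // skip tombstones we wrote ourselves
                    process(record);
                    // Mark the message as processed so compaction can eventually drop it.
                    producer.send(new ProducerRecord<>(TOPIC, record.key(), null));
                }
                consumer.commitSync();
            }
        }
    }

    static void process(ConsumerRecord<String, String> record) {
        System.out.println("processed " + record.key());
    }
}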