Difference between KTable and local store - apache-kafka

What is the difference between these entities?
As I understand it, a KTable is simply a Kafka topic with the compaction deletion policy. Also, if logging is enabled for the KTable, then there is also a changelog, and in that case the deletion policy is compact,delete.
A local store is an in-memory key-value cache based on RocksDB. But the local store also has a changelog.
In both cases, we get the last value for a key for a certain period of time (?). The local store is used for aggregation steps, joins, etc., but a new topic with the compaction strategy is also created for it.
For example:
KStream<K, V> source = builder.stream(topic1);
KTable<K, V> table = builder.table(topic2); // what happens here if I read data from a topic with the delete and compact deletion policies? Will an additional topic be created to store the data, or will just a local store (cache) be used for it?
// or
KTable<K, V> table2 = builder.table(..., Materialized.as("key-value-store-name")); // what happens here? As I understand it, I just specified a concrete name for the local store and now I can query it as a regular key-value store
source.groupByKey().aggregate(initialValue, aggregationLogic, Materialized.as(...)); // will a new aggregation topic with the compaction deletion policy be created here, or will only a local store be used?
Also, I can create a state store using builder.addStateStore(...), where I can enable/disable logging (changelog) and caching (???).
I've read this: https://docs.confluent.io/current/streams/developer-guide/memory-mgmt.html, but some details are still unclear to me, especially the case where we can disable the Streams cache (but not the RocksDB cache) and end up with a full copy of a CDC system for a relational database.

A KTable is a logical abstraction of a table that is updated over time. Additionally, you can think of it not as a materialized table, but as a changelog stream that consists of all update records to the table. Compare https://docs.confluent.io/current/streams/concepts.html#duality-of-streams-and-tables. Hence, conceptually a KTable is something hybrid if you wish, however, it's easier to think of it as a table that is updated over time.
Internally, a KTable is implemented using RocksDB and a topic in Kafka. RocksDB stores the current data of the table (note that RocksDB is not an in-memory store and can write to disk). At the same time, each update to the KTable (i.e., to RocksDB) is written into the corresponding Kafka topic. This topic is used for fault-tolerance reasons (note that RocksDB itself is considered ephemeral; writing to disk via RocksDB does not provide fault-tolerance -- the changelog topic does), and it is configured with log compaction enabled to make sure that the latest state of RocksDB can be restored by reading from the topic.
If you have a KTable that is created by a windowed aggregation, the Kafka topic is configured with compact,delete to expire old data (i.e., old windows) to avoid that the table (i.e., RocksDB) grows unbounded.
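For illustration, a minimal sketch of such a windowed aggregation (the topic and store names are made up); the changelog topic backing the windowed store is created with cleanup.policy=compact,delete:

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

StreamsBuilder builder = new StreamsBuilder();

// "clicks" is a hypothetical input topic keyed by user id
KTable<Windowed<String>, Long> clicksPerWindow = builder
    .stream("clicks", Consumed.with(Serdes.String(), Serdes.String()))
    .groupByKey()
    .windowedBy(TimeWindows.of(Duration.ofMinutes(5)))
    .count(Materialized.as("clicks-per-5min"));
// the changelog topic backing "clicks-per-5min" uses compact,delete so expired windows are eventually removed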
Instead of RocksDB, you can also use an in-memory store for a KTable that does not write to disk. This store would also have a changelog topic that tracks all updates to the store for fault-tolerance reasons.
If you add a store manually via builder.addStateStore(), you can also add RocksDB or in-memory stores. In this case, you can enable changelogging for fault-tolerance, similar to a KTable (note that when a KTable is created, internally it uses the exact same API -- i.e., a KTable is a higher-level abstraction hiding some internal details).
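A rough sketch of adding a store manually (store name and serdes are arbitrary); the same builder works for persistent (RocksDB) and in-memory stores, and logging can be toggled:

import java.util.Collections;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;

StreamsBuilder builder = new StreamsBuilder();

// persistent (RocksDB-backed) store; use Stores.inMemoryKeyValueStore(...) for the in-memory variant
StoreBuilder<KeyValueStore<String, Long>> storeBuilder =
    Stores.keyValueStoreBuilder(
            Stores.persistentKeyValueStore("my-manual-store"),
            Serdes.String(),
            Serdes.Long())
        .withLoggingEnabled(Collections.emptyMap());   // or .withLoggingDisabled() to skip the changelog topic

builder.addStateStore(storeBuilder);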
For caching: this is implemented within Kafka Streams on top of a store (either RocksDB or in-memory), and you can enable/disable it for "plain" stores you add manually as well as for KTables. Compare https://docs.confluent.io/current/streams/developer-guide/memory-mgmt.html. Thus, this caching is independent of RocksDB's own caching.
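As a sketch (topic and store names are made up), caching can be toggled per store via Materialized when building a KTable:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

StreamsBuilder builder = new StreamsBuilder();

KTable<String, String> table = builder.table(
    "input-topic",
    Consumed.with(Serdes.String(), Serdes.String()),
    Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("table-store")
        .withCachingDisabled());   // Streams record cache off; RocksDB's internal block cache is unaffected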

Related

Kafka streams state store for what?

As I understood from the book, a Kafka Streams state store is an in-memory key/value storage used to store data before sending it to Kafka or after filtering.
I am confused by some theoretical questions.
What is the difference between Kafka Streams state and other in-memory storage like Redis, etc.?
What is a real use case for state storage in Kafka Streams?
Why is a topic not an alternative to state storage?
Why is a topic not an alternative to state storage?
A topic contains messages in a sequential order that typically represents a log.
Sometimes we want to aggregate these messages, group them, perform an operation such as a sum, and store the result in a place from which we can retrieve it later using a key. In this case, an ideal solution is a key-value store rather than a topic, which is a log structure.
What is a real use case for state storage in Kafka Streams?
A simple use case would be word count, where we have a word and a counter of how many times it has occurred. You can see more examples at kafka-streams-examples on GitHub.
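A minimal word-count sketch (topic and store names are made up; Grouped is available in newer Kafka versions, older ones use Serialized); the counts end up in a local key-value state store:

import java.util.Arrays;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;

StreamsBuilder builder = new StreamsBuilder();

KTable<String, Long> wordCounts = builder
    .stream("text-lines", Consumed.with(Serdes.String(), Serdes.String()))
    .flatMapValues(line -> Arrays.asList(line.toLowerCase().split("\\W+")))
    .groupBy((key, word) -> word, Grouped.with(Serdes.String(), Serdes.String()))
    .count(Materialized.as("word-counts"));   // counts live in a local state store backed by a changelog topic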
What is the difference between Kafka Streams state and other in-memory storage like Redis, etc.?
State can be considered as a savepoint from where you can resume your data processing or it might also contain some useful information needed for further processing (like the previous word count which we need to increment), so it can be stored using Redis, RocksDB, Postgres etc.
Redis can be a plugin for Kafka Streams state storage; however, the default persistent state storage for Kafka Streams is RocksDB.
Therefore, Redis is not an alternative to Kafka Streams state but an alternative to Kafka Streams' default RocksDB.
Why is a topic not an alternative to state storage?
A topic is the final state store storage under the hood (everything is a topic in Kafka).
If you create a microservice named "myStream" with a state store named "MyState", you'll see a myStream-MyState-changelog topic appear which holds the history of all changes to the state store.
RocksDB is only a local cache to improve performance, with a first layer of local backup on the local disk, but in the end the real high availability and exactly-once processing guarantee is provided by the underlying changelog topic.
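To illustrate (the names match the example above, everything else is an assumption): the changelog topic name is derived from the application.id and the store name:

import java.util.Properties;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Materialized;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "myStream");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

StreamsBuilder builder = new StreamsBuilder();
builder.table("some-topic", Materialized.as("MyState"));
// a topic named "myStream-MyState-changelog" backs the "MyState" store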
What is the difference between Kafka Streams state and other in-memory storage like Redis, etc.?
What is a real use case for state storage in Kafka Streams?
It is not storage as such; it's just a local, efficient, guaranteed in-memory state used to handle some business cases in a fully streamed way.
As an example:
For each incoming order (Topic1), I want to find any previous order (Topic2) for the same location in the last 6 hours.
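A possible sketch of that example (value types, the keying by location, and the use of default serdes for the join are assumptions); the join state is kept in local window stores backed by changelog topics:

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;

StreamsBuilder builder = new StreamsBuilder();

// both streams are assumed to be keyed by location, so orders for the same location meet in the join
KStream<String, String> incomingOrders = builder.stream("Topic1", Consumed.with(Serdes.String(), Serdes.String()));
KStream<String, String> previousOrders = builder.stream("Topic2", Consumed.with(Serdes.String(), Serdes.String()));

KStream<String, String> matches = incomingOrders.join(
    previousOrders,
    (incoming, previous) -> incoming + " matched with " + previous,
    JoinWindows.of(Duration.ofHours(6)));   // look back up to 6 hours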

GlobalKTable Refresh Logic

When updates are made to the underlying topic of a GlobalKTable, what is the logic by which all instances of the Kafka Streams app get the latest data? Below are my follow-up questions:
Would the GlobalKTable be locked at the record level or the table level while the updates are happening?
According to this blog: Kafka GlobalKTable Latency Issue, the latency can go up to 0.5s?! If so, is there any alternative to reduce the latency?
Since GlobalKTable uses RocksDB by default as the state store, are all features of RocksDB available to use?
I understand the GlobalKTable should not be used for use cases that require frequent updates to the lookup data. Is there any other key-value store that we can use for use cases that might require updates to table data - Redis, for example?
I could not find much documentation about GlobalKTable and its internals. Is there any documentation available?
GlobalKTables are updated asynchronously. Hence, there is no guarantee whatsoever about when the different instances are updated.
Also, the "global thread" uses a dedicated "global consumer" that you can fine-tune individually to reduce latency: https://docs.confluent.io/current/streams/developer-guide/config-streams.html#naming
RocksDB is integrated via JNI and the JNI interface does not expose all features of RocksDB. Furthermore, the "table" abstraction "hides" RocksDB to some extent. However, you can tune RocksDB via rocksdb.config.setter (https://docs.confluent.io/current/streams/developer-guide/config-streams.html#rocksdb-config-setter).
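As a sketch of tuning that global consumer (the specific values are placeholders, not recommendations), configs prefixed with "global.consumer." are applied only to it:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

// configs prefixed with "global.consumer." are applied only to the global consumer
props.put("global.consumer." + ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 100);
props.put("global.consumer." + ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 1);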
The Javadocs for KStream#join() are pretty clear that joins against a GlobalKTable only occur as records in the stream are processed. Therefore, to answer your question, there are no automatic updates that happen to the underlying KStreams: new messages would need to be processed in them in order for them to see the updates.
"Table lookup join" means, that results are only computed if KStream
records are processed. This is done by performing a lookup for
matching records in the current internal GlobalKTable state. In
contrast, processing GlobalKTable input records will only update the
internal GlobalKTable state and will not produce any result records.
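For reference, a minimal sketch of such a stream-GlobalKTable join (topic names, types and the key-extraction logic are assumptions):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;

StreamsBuilder builder = new StreamsBuilder();

KStream<String, String> orders = builder.stream("orders", Consumed.with(Serdes.String(), Serdes.String()));
GlobalKTable<String, String> customers = builder.globalTable("customers", Consumed.with(Serdes.String(), Serdes.String()));

KStream<String, String> enriched = orders.join(
    customers,
    (orderKey, orderValue) -> orderValue,                       // hypothetical: the order value holds the customer id used for the lookup
    (orderValue, customerValue) -> orderValue + " / " + customerValue);
// the joiner only fires when an order record is processed; GlobalKTable updates alone never emit results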
If a GlobalKTable is materialized as a key-value store, most of the methods for iterating over and mutating KeyValueStore implementations use the synchronized keyword to prevent interference from multiple threads updating the state store concurrently.
You might be able to reduce the latency by using an in-memory key-value store, or by using a custom state store implementation.
Interacting with state stores is controlled via a set of interfaces in Kafka Streams, for example KeyValueStore, so in this sense you're not interacting directly with RocksDB APIs.

Cost of Kstream Vs cost of KTable with respect to the state store

I am trying to better understand how to set up my cluster for running my Kafka Streams application, and to get a better sense of the volume of data that will be involved.
In that regard, while I can quickly see that a KTable requires a state store, I wonder whether creating a KStream from a topic immediately means copying all of the log of that topic into the state store, obviously in an append-only fashion I suppose - especially if we want to expose the stream for querying?
Does Kafka automatically replicate the data into the state store as it moves through the source topic when it is a KStream? As said above, this seems obvious for a KTable because of the updates, but for a KStream I just want confirmation of what happens.
State stores are created whenever any stateful operation is called or when a stream is windowed.
You are right that a KTable requires a state store. A KTable is an abstraction of a changelog stream where each record represents an update. Internally it is implemented using RocksDB, where all the updated values are stored in the state store and in a changelog topic. At any time, the state store can be rebuilt from the changelog topic.
A KStream is a different concept: it represents an abstraction over a record stream, an unbounded dataset in append-only format. It doesn't create any state store while reading a source topic.
Unless you want to see the updated changelog, it is okay to use a KStream instead of a KTable, as it avoids creating an unwanted state store. KTables are always expensive compared to KStreams. It also depends on how you want to use the data.
If you want to expose the stream for querying, you need to materialize the stream into a state store.
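One possible sketch of that materialization (topic and store names are made up): keep the latest value per key in a named store so it can be queried via interactive queries:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

StreamsBuilder builder = new StreamsBuilder();

KStream<String, String> stream = builder.stream("events", Consumed.with(Serdes.String(), Serdes.String()));

// keep only the latest value per key in a named store, queryable by that name
KTable<String, String> latest = stream
    .groupByKey()
    .reduce((oldValue, newValue) -> newValue,
            Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("events-latest"));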

Kafka Streams: Is it possible to have "compact,delete" policy on state stores?

Kafka Streams state stores are "compact" by default. Is it possible to set "compact,delete" with a retention policy on a state store?
Yes, it is possible to configure topics with both retention and compaction, and Kafka Streams uses this setting for windowed KTables.
If you really want to set this, you can update the corresponding changelog topic config manually after it is created.
However, setting a topic retention time for changelog topics deletes the data only from the topic. Data is not deleted from the local state store. State stores don't offer a TTL, and RocksDB's TTL setting cannot be enabled (for technical reasons that we hope to resolve eventually).
If you want to delete data cleanly, you should use tombstone messages, which will delete the data from the store as well as from the changelog topic (instead of using retention time).
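As a sketch (topic name, key and serializers are assumptions), a tombstone is simply a record with a null value for the key to be deleted:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", StringSerializer.class.getName());
props.put("value.serializer", StringSerializer.class.getName());

try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
    // a null value is a tombstone: it removes "some-key" from the table's store and, via compaction, from the changelog
    producer.send(new ProducerRecord<>("input-topic", "some-key", null));
}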
If you are using the default RocksDB store, there is an option to set the CompactionStyle to FIFO:
FIFO compaction style is the simplest compaction strategy. It is suited for keeping event log data with very low overhead (query log for example). It periodically deletes the old data, so it's basically a TTL compaction style.
and then use the TTL:
A new option, compaction_options_fifo.ttl, has been introduced for this to delete SST files for which the TTL has expired. This feature enables users to drop files based on time rather than always based on size, say, drop all SST files older than a week or a month.
RocksDB FIFO doc
To actually set FIFO compaction you have to implement RocksDBConfigSetter and register it via the configuration property rocksdb.config.setter.
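A rough sketch of such a config setter (the class name is arbitrary; whether the FIFO TTL itself is reachable through the bundled RocksDB Java API depends on the RocksDB version and should be verified):

import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.CompactionStyle;
import org.rocksdb.Options;

public class FifoCompactionConfigSetter implements RocksDBConfigSetter {
    @Override
    public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
        options.setCompactionStyle(CompactionStyle.FIFO);
        // whether compaction_options_fifo.ttl can be set through the bundled RocksDB Java API
        // depends on the RocksDB version -- verify before relying on it
    }
}

It is then registered via props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, FifoCompactionConfigSetter.class);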

Kafka: topic compaction notification?

I was given the following architecture that I'm trying to improve.
I receive a stream of DB changes which end up in a compacted topic. The stream is basically key/value pairs and the keyspace is large (~4 GB).
The topic is consumed by one Kafka Streams process that stores the data in RocksDB (separate for each consumer/shard). The processor does two different things:
join the data into another stream.
check if a message from the topic is a new key or an update to an existing one. If it is an update, it sends the old key/value pair and the new key/value pair to a different topic (updates are rare).
The construct has a couple of problems:
The two different functionalities of the stream processor belong to different teams and should not be part of the same code base. They are put together to save memory. If we separated them, we would have to duplicate the RocksDB stores.
I would prefer to use a normal KTable join instead of the handcrafted join that's currently in the code.
RocksDB seems to be a bit of overkill if the data is already persisted in a topic. We are currently running into some performance issues, and I assume it would be faster if we just kept everything in memory.
Question 1:
Is there a way to hook into the compaction process of a compacted topic? I would like a notification (to a different topic) for every key that is actually compacted (including the old and new value).
If this is somehow possible I could easily split the code bases apart and simplify the join.
Question 2:
Any other idea on how this can be solved more elegantly?
Your overall design makes sense.
About your join semantics: I guess you need to stick with the Processor API, as a regular KTable cannot provide what you want. It's also not possible to hook into the compaction process.
However, Kafka Streams also supports in-memory state stores: https://kafka.apache.org/documentation/streams/developer-guide/processor-api.html#state-stores
RocksDB is used by default to allow the state to be larger than the available main memory. Spilling to disk with RocksDB is not about reliability -- however, it has the advantage that stores can be recreated more quickly if an instance comes back online on the same machine, as it's not required to re-read the whole changelog topic.
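If you go with an in-memory store, a sketch might look like this (topic and store names are made up); the changelog topic still provides fault-tolerance:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.Stores;

StreamsBuilder builder = new StreamsBuilder();

// in-memory instead of the default RocksDB store; restoration re-reads the changelog on restart
builder.table(
    "db-changes",
    Consumed.with(Serdes.String(), Serdes.String()),
    Materialized.as(Stores.inMemoryKeyValueStore("changes-store")));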
Whether you want to split the app into two is your own decision, depending on how many resources you want to provide.