We materialize the KTable into an internal state store.
a) How and where can I specify that this internal state store should be persistent and automatically backed up to another Kafka topic?
b) How can we specify that this internal state store should be a global one, so that any of my stream tasks can refer to it?
c) Is there a frequency at which incoming message records are written to the internal state store? Can it happen that a particular record gets processed by the stream processor and stored in the KTable, but the stream processor dies before it can write the entry to the internal state store?
The snippet we use right now:
KTable<String, String> KT0 = streamsBuilder.table(AppConfigs.topicName, Materialized.as(AppConfigs.stateStoreName));
Any responses would be highly appreciated!
a) If you have a custom implementation of a state store, you can pass it via Materialized.as(KeyValueBytesStoreSupplier).
b) For the global-store use case, you can use builder.globalTable().
c) Writes happen on a per-record basis, but they may be cached in memory. Before input topic offsets are committed, the state store is flushed, so you will never lose any data. By default, Kafka Streams provides at-least-once processing semantics.
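For (a) and (b), a minimal sketch (topic and store names are placeholders; note that a store created via Materialized.as(String) is already a persistent RocksDB store backed by a changelog topic by default):

import java.util.HashMap;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.Stores;

StreamsBuilder builder = new StreamsBuilder();

// a) an explicitly persistent store, with the changelog backup topic enabled
KTable<String, String> table = builder.table(
        "input-topic",
        Materialized.<String, String>as(Stores.persistentKeyValueStore("my-store"))
                .withLoggingEnabled(new HashMap<>()));

// b) a global table, readable from every stream task
GlobalKTable<String, String> global = builder.globalTable(
        "global-input-topic",
        Materialized.as("my-global-store"));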
I am currently working on the deployment of a distributed stream processing chain using Kafka, but not the Kafka Streams library. I've created a kind of node which can be executed, takes a topic as input, processes the obtained data, and sends it to an output topic. The node is a simple consumer/producer pair, associated with a unique upstream partition. The producer is idempotent, and the processing is done in a transaction context such as:
producer.initTransactions();
try {
    producer.beginTransaction();
    // process the polled records and produce the results
    producer.commitTransaction();
} catch (KafkaException e) {
    producer.abortTransaction();
}
I also used the producer.sendOffsetsToTransaction() method to ensure an atomic commit for the consumer.
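Roughly like this (a sketch; the offset map and group id shown are placeholders, not my actual code):

Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
offsets.put(new TopicPartition(record.topic(), record.partition()),
        new OffsetAndMetadata(record.offset() + 1));
// must be called between beginTransaction() and commitTransaction()
producer.sendOffsetsToTransaction(offsets, "my-consumer-group");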
I would like to use a key-value store for keeping the state of my nodes (I was thinking about MapDB, which looks simple to use).
But I wonder: if I update my state inside the transaction, with a map.put(key, value) for example, will the transaction ensure that the state is updated exactly once?
Thank you very much
Kafka only promises exactly-once for its own components: when I produce X to the output topic, I will also commit X's offset on the input topic. Either both succeed or both fail, i.e. the operation is atomic.
So whatever you do between consuming and producing is entirely on you to make exactly-once, unless you use the state store provided by Kafka itself. That is available to you if you use Kafka Streams.
If you cannot switch to Kafka Streams, it is still possible to ensure exactly-once yourself if you track Kafka's offsets in MapDB and add sufficient checks.
For example, assuming you are trying to do deduplication here:
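A hedged sketch (the MapDB file name, map name, and surrounding consumer loop are assumptions, not your actual code):

import java.util.concurrent.ConcurrentMap;
import org.mapdb.DB;
import org.mapdb.DBMaker;
import org.mapdb.Serializer;

DB db = DBMaker.fileDB("processor-state.db").transactionEnable().make();
ConcurrentMap<String, Long> processedOffsets =
        db.hashMap("offsets", Serializer.STRING, Serializer.LONG).createOrOpen();

// inside the poll loop, for each record:
String sourcePartition = record.topic() + "-" + record.partition();
Long lastProcessed = processedOffsets.get(sourcePartition);
if (lastProcessed == null || record.offset() > lastProcessed) {
    // process and produce inside the Kafka transaction ...
    processedOffsets.put(sourcePartition, record.offset());
    db.commit();  // persist the MapDB state before committing the Kafka transaction
} else {
    // this offset was already handled by an earlier (possibly crashed) run: skip it
}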
This is just one way of doing things, assuming that whatever you put in MapDB is committed right away. Even if it is not, you can always consult the "source of truth", which are the topics here, and reconstruct the lost data.
I have a Kafka Streams topology that reads from an input topic, updates some state, and determines whether the state entry needs to remain in the state store or can be deleted. If it can be deleted, it is removed; otherwise a punctuator that runs every 10s expires items from the state store.
I recently found out that punctuators run on the same stream thread and can potentially block processing of the stream.
What are some patterns I can use to execute the logic inside the punctuator on a separate thread pool to avoid blocking stream processing?
Appreciate your help.
Matthias J. Sax already said that's not possible with state stores so far, and since he works at Confluent, I believe that's the latest news.
However, what we did in our case was use a KStream-KTable join instead of a state store. I'm not sure if that's possible in your case, but let me explain what we did; maybe it's of some use for you as well:
We have two topics, A and B. Topic A is consumed with a KStream, and topic B is consumed with a KTable. We transform the KTable data so we can join it against the KStream for topic A. We join them, perform our operations, and "delete" the data from topic B by writing a null value (a tombstone) with the original key back to topic B, using map and through. So when we get another record on topic A, there is no longer a value in our KTable to join with (exactly what we wanted).
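A sketch of the pattern (topic names, value types, and the join logic are placeholders; to() is used here where we originally used through()):

KStream<String, String> streamA = builder.stream("topic-A");
KTable<String, String> tableB = builder.table("topic-B");

KStream<String, String> joined =
        streamA.join(tableB, (aValue, bValue) -> aValue + "|" + bValue);

// ... perform the actual operations on `joined` ...

// then "delete" the consumed table entry by writing a tombstone back to topic B
joined.mapValues(value -> (String) null).to("topic-B");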
I hope it helps.
I am trying to better understand what happens at the resource level when you create a KStream and a KTable. Below, I will mention some conclusions that I have come to, as I understand them (feel free to correct me).
Firstly, every topic has a number of partitions, and all the messages in those partitions are stored on the hard disk(s) in sequential order.
A KStream does not need to store the messages that are read from a topic in another location, because the offset is sufficient to retrieve those messages from the topic it is connected to.
(Is this correct?)
The question regards the KTable. As I understand it, a KTable, in contrast with a KStream, updates each message with the same key. In order to do that, you have to either store the messages arriving from the topic externally in a static table, or read the whole message queue each time a new message arrives. The latter does not seem very efficient in terms of performance. Is the first approach I presented correct?
"read the whole message queue each time a new message arrives"
All messages are only read at the fresh start of the application. Once the app has read up to the latest offset, it just updates the table like any other consumer.
How disk usage is determined ultimately depends on the state store you've configured for the application, along with its own settings: for example, in-memory vs. RocksDB vs. an external state store interface that you've written on your own.
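For instance, a hedged sketch of opting into an in-memory store instead of the default persistent RocksDB store (topic and store names are placeholders):

KTable<String, String> table = builder.table(
        "input-topic",
        Materialized.<String, String>as(Stores.inMemoryKeyValueStore("my-store")));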
I've been reading a few articles about using Kafka and Kafka Streams (with state stores) as an Event Store implementation.
https://www.confluent.io/blog/event-sourcing-using-apache-kafka/
https://www.confluent.io/blog/event-sourcing-cqrs-stream-processing-apache-kafka-whats-connection/
The implementation idea is the following:
1. Store entity changes (events) in a Kafka topic.
2. Use Kafka Streams with a state store (which by default uses RocksDB) to update and cache the entity snapshot.
3. Whenever a new command is executed, get the entity from the store, execute the operation on it, and continue with step 1.
The issue with this workflow is that the state store is updated asynchronously (step 2), so when a new command is processed, the retrieved entity snapshot might be stale (it has not yet been updated with the events from previous commands).
Is my understanding correct? Is there a simple way to handle such a case with Kafka?
"Is my understanding correct?"
As far as I have been able to tell, yes, which means that it is an unsatisfactory event store for many event-sourced domain models.
In short, there's no support for "first writer wins" when adding events to a topic, which means that Kafka doesn't help you ensure that the topic satisfies its invariants.
There have been proposals/tickets to address this, but I haven't found evidence of progress.
https://issues.apache.org/jira/browse/KAFKA-2260
https://cwiki.apache.org/confluence/display/KAFKA/KIP-27+-+Conditional+Publish
Yes, there is a simple way.
Use a key for each Kafka message. Messages with the same key always* go to the same partition.
One consumer can read from one or many partitions, but one partition cannot be read by two consumers (in the same group) simultaneously.
The maximum number of working consumers is therefore <= the number of partitions of a topic. You can create more consumers, but the extra ones will only be backup nodes.
Something like this example:
Assumptions:
There is a Kafka topic abc with partitions p0 and p1.
There is a consumer C1 consuming from p0 and a consumer C2 consuming from p1. The consumers work asynchronously.
km(key, command) denotes a Kafka message.
# producer creating messages
km(key1, add) -> p0
km(key2, add) -> p1
km(key1, edit) -> p0
km(key3, add) -> p1
km(key3, edit) -> p1
# consumer C1 will read messages km(key1, add), km(key1, edit), and their order will be preserved
# consumer C2 will read messages km(key2, add), km(key3, add), km(key3, edit)
If you write commands to Kafka and then materialize a view in Kafka Streams, the materialized view will be updated asynchronously. This helps you separate writes from reads so the read path can scale.
If you want consistent read-write semantics over your commands/events, you might be better off writing to a database. Events can either be extracted from the database into Kafka using a CDC connector (write-through), or you can write to the database and then to Kafka in a transaction (write-aside).
Another option is to implement long polling on the read (so if you write trade1.version2 then want to read it again the read will block until trade1.version2 is available). This isn't suitable for all use cases but it can be useful.
Example here: https://github.com/confluentinc/kafka-streams-examples/blob/4eb3aa4cc9481562749984760de159b68c922a8f/src/main/java/io/confluent/examples/streams/microservices/OrdersService.java#L165
The Command Pattern that you want to implement is already part of the Akka framework. I don't know whether you have experience with the framework or not, but I strongly advise you to look there before you implement your own solution.
Also, given the volume of events that we receive in today's IT, I advise integrating it with a state machine.
If you'd like to see how it all fits together, please check my blog :)
I am trying to better understand how to set up my cluster for running my Kafka Streams application. I'm trying to get a better sense of the volume of data that will be involved.
In that regard, while I can quickly see that a KTable requires a state store, I wonder whether creating a KStream from a topic immediately means copying the whole log of that topic into a state store (in an append-only fashion, I suppose), especially if we want to expose the stream for querying.
Does Kafka automatically replicate the data into a state store as records arrive in the source topic when it is a KStream? As said above, this seems obvious for a KTable because of the updates, but for a KStream I just want confirmation of what happens.
State stores are created whenever a stateful operation is called or when a stream is windowed.
You are right that a KTable requires a state store. A KTable is an abstraction over a changelog stream where each record represents an update. Internally, it is implemented using RocksDB: the updated values are stored in the state store and in a changelog topic. At any time, the state store can be rebuilt from the changelog topic.
A KStream is a different concept: it is an abstraction over a record stream, an unbounded dataset in append-only format. It doesn't create any state store when reading a source topic.
Unless you need the updated changelog, it is okay to use a KStream instead of a KTable, as it avoids creating an unneeded state store. KTables are always more expensive compared to KStreams. It also depends on how you want to use the data.
If you want to expose the stream for querying, you need to materialize the stream into a state store.
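A minimal sketch of one common way to do that (topic and store names are placeholders): aggregating the stream makes Kafka Streams materialize it into a queryable store.

KTable<String, Long> counts = builder
        .stream("events", Consumed.with(Serdes.String(), Serdes.String()))
        .groupByKey()
        .count(Materialized.as("events-count-store"));

// once the application is running, query the store via interactive queries:
ReadOnlyKeyValueStore<String, Long> store =
        streams.store("events-count-store", QueryableStoreTypes.keyValueStore());
Long count = store.get("some-key");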