I have a Kafka Streams topology that reads from an input topic, updates some state, and determines whether the state entry needs to remain in the state store or can be deleted. If it can be deleted, it is removed immediately; otherwise, a punctuator that runs every 10s expires items from the state store.
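For context, the punctuator is scheduled roughly like this (a sketch; the store name, value type, and expiry rule are illustrative, not my exact code):

import java.time.Duration;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;

// inside Processor#init(ProcessorContext context):
KeyValueStore<String, Long> store =
        (KeyValueStore<String, Long>) context.getStateStore("expiry-store");
context.schedule(Duration.ofSeconds(10), PunctuationType.WALL_CLOCK_TIME, now -> {
    try (KeyValueIterator<String, Long> it = store.all()) {
        while (it.hasNext()) {
            KeyValue<String, Long> entry = it.next();
            if (now - entry.value > Duration.ofMinutes(5).toMillis()) {
                store.delete(entry.key); // expire stale entries
            }
        }
    }
});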
I recently found out that the punctuators run on the same stream thread and can potentially block processing of the stream.
What are some patterns I can use to execute the logic inside the punctuator in a separate thread pool to avoid blocking stream processing?
Appreciate your help.
Matthias J. Sax has already said that's not possible with state stores so far, and since he works at Confluent, I believe that's the latest news.
However, what we did in our case was use a KStream-KTable join instead of a state store. I'm not sure if that's possible for your case, but let me explain what we did; maybe it's of some use for you as well:
We have two topics, A and B. Topic A is consumed with a KStream; Topic B is consumed with a KTable. We transform the KTable data so we can join it against the KStream for Topic A. We join it, perform our operations, and "delete" the data from Topic B by writing a null value with the original key back to Topic B, using map and through. So when we get another record in Topic A, there are no longer values in our KTable to join with (exactly what we wanted).
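In code, the shape is roughly this (a sketch; topic names and String types are illustrative, and mapValues/to stands in for the map-and-through step described above):

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> streamA = builder.stream("topic-A");
KTable<String, String> tableB = builder.table("topic-B");

streamA
        .join(tableB, (aValue, bValue) -> aValue + "|" + bValue)
        // ... perform the actual operations on the joined record here ...
        // then "delete" the joined entry by writing a tombstone (null value)
        // with the original key back to Topic B:
        .mapValues(joined -> (String) null)
        .to("topic-B");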
I hope it helps.
Let's say I have Topology A that streams from Source A to Stream A, and Topology B that streams from Source B to Stream B (used as Table B).
Then I have a stream/table join that joins Stream A and Table B.
As expected, the join only triggers when something arrives in Stream A and there's a correlating record in Table B.
I have an architecture where the source topics are still populated while the Kafka Streams app is DEAD, and messages always arrive in Source B before Source A.
I am finding that when I restart Kafka Streams (by redeploying the app), the topology that streams records to Stream A can start BEFORE the topology that streams records to Table B.
And as a result, the join won't trigger.
I know this is probably the expected behaviour; there's no coordination between separate topologies.
I was wondering if there is a mechanism, a delay or something, that can ORDER/sequence the start of the topologies?
Once they are up, they are fine, as I can ensure the messages arrive in the right order.
I think you want to try setting the max.task.idle.ms to something greater than the default (0), maybe 30 secs? It's tough to give a precise answer, so you'll have to experiment some.
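For example, a minimal sketch (the 30s value is just a starting point to experiment with):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
// let a task wait up to 30 seconds for data on all of its input partitions,
// giving the Table B side a chance to be read before Stream A records
props.put(StreamsConfig.MAX_TASK_IDLE_MS_CONFIG, 30_000L);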
HTH,
Bill
If you need to trigger a downstream result from both sides of the join, you have to do a KTable-to-KTable join. From the javadoc:
"The join is computed by (1) updating the internal state of one KTable and (2) performing a lookup for a matching record in the current (i.e., processing time) internal state of the other KTable. This happens in a symmetric way, i.e., for each update of either this or the other input KTable the result gets updated."
EDIT: Even if you do a stream-to-KTable join, which triggers only when a new stream event is emitted on the left side of the join (KTable updates do not emit downstream events), when you start the topology, Streams will try to do timestamp re-synchronisation using the timestamps of the input events, so there should not be any race condition between the rate of consumption of the KTable source and the stream topic. BUT, my understanding is that this works on a best-effort basis. E.g., if two events have exactly the same timestamp, then Streams cannot deduce which should be processed first.
Reading about log compaction on a topic, I was wondering if there is any way for a consumer to get hold of any of the positions/offsets of the following?
end of the head
start of the tail
compaction cleaner point
Basically the point at which the compacted and non-compacted parts of the log meet?
I've read that there is a cleaner-offset-checkpoint file that sits on the broker at /var/lib/kafka/data/cleaner-offset-checkpoint but is the info in this file available to a consumer?
My use case is a consumer that will consume compacted keys one way and non-compacted keys another way.
thanks for any advice.
UPDATE:
Thinking, for example, of a topic holding various customer events like in https://www.confluent.io/blog/put-several-event-types-kafka-topic/: new customer, customer updates name, customer updates address, etc. Log compaction, I believe, will leave one event per customer in the tail but still many events per customer in the head (assuming compaction is slower than message production..?). A new consumer of this topic would have to treat all compacted messages as CREATEs, but then also treat each non-compacted message as its more fine-grained event type? In any case, I was wondering if a consumer could tell how far along a topic's compaction has got at any given time?
It's not possible with the consumer API, no.
If you want to check that checkpoint file on disk, you could use JSch, for example, to SSH into a broker and read the file. If it has offset data, you could then use the consumer's seek methods, but keep in mind that the log cleaner thread may be actively running when you seek to or consume that data.
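If you did extract an offset that way, seeking to it would look roughly like this (a hypothetical sketch; the topic, partition, and offset value are assumptions):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
TopicPartition tp = new TopicPartition("my-topic", 0);
consumer.assign(Collections.singletonList(tp));

// cleanerOffset: a value you extracted from cleaner-offset-checkpoint yourself
long cleanerOffset = 42L;
consumer.seek(tp, cleanerOffset);
// records before this position are from the compacted tail,
// records at/after it are from the not-yet-compacted head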
A new consumer of this topic would have to treat all compacted messages as CREATES, but then also treat non-compacted message as their more fine grained event?
I don't think this is a valid use case. For a stream of customer updates, you'd just update a customer model in a table via a streaming reduce function. If any consumer restarts, it'll always have to read from the beginning of the topic to rebuild its local state and then continue reading any updates to those stored values, so it doesn't make sense to skip past them all, or to have two separate consumers.
I also don't necessarily think you need different models. Some UUID would be unique, and every event can contain the full model of a "customer". Most fields can remain optional/nullable until they are provided by a new message with all those fields set (or not), and this defines a batch update, since you can set/update/remove multiple attributes at once. If you need more granularity, that's also possible to define at the producer level by storing and looping over your attributes and producing individual "customer" objects with each new attribute.
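As a sketch, the reduce-based table mentioned above could look like this (topic name, serdes, and the last-write-wins merge are illustrative):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;

StreamsBuilder builder = new StreamsBuilder();
KTable<String, String> customers = builder
        .stream("customer-events", Consumed.with(Serdes.String(), Serdes.String()))
        .groupByKey()
        // keep the latest full "customer" record per key; a real merge function
        // could instead combine the non-null fields of old and new values
        .reduce((oldValue, newValue) -> newValue);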
We materialize the KTable into an Internal-State-Store.
a.) How and where can I specify that this Internal-State-Store should be persistent and automatically backed up to another Kafka topic?
b.) How can we specify that this Internal-State-Store should be a global one, so that any of my stream tasks can refer to it?
c.) Is there a frequency at which incoming message records are written to the Internal-State-Store? Can it happen that a particular message record gets processed by the stream processor and stored in the KTable, but then my stream processor dies before it can make the entry in the Internal-State-Store?
Below is the snippet we use right now:
KTable<String, String> KT0 = streamsBuilder.table(AppConfigs.topicName, Materialized.as(AppConfigs.stateStoreName));
Any responses shall be highly appreciated!
a) If you have a custom implementation of a state store, you can pass it via Materialized.as(KeyValueBytesStoreSupplier).
b) For global-store use-case, you can use builder.globalKTable().
c) Writes happen on a per-record basis, but they may be cached in memory. Before input topic offsets are committed, the state store will be flushed, though, and thus you will never miss any data. By default, Kafka Streams provides at-least-once processing semantics.
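A minimal sketch of (a) and (b) (topic and store names are illustrative; as far as I know, changelog logging is enabled by default and withLoggingEnabled just lets you tune the changelog topic's configuration):

import java.util.Collections;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.Stores;

StreamsBuilder builder = new StreamsBuilder();

// (a) an explicitly persistent store, backed up to a changelog topic
KTable<String, String> kt = builder.table(
        "input-topic",
        Materialized.<String, String>as(Stores.persistentKeyValueStore("my-store"))
                .withLoggingEnabled(Collections.emptyMap()));

// (b) a GlobalKTable is fully replicated to every application instance,
// so any stream task can look up any key
GlobalKTable<String, String> global = builder.globalKTable("reference-topic");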
I am trying to better understand what happens at the level of resources when you create a KStream and a KTable. Below, I will mention some conclusions that I have come to, as I understand them (feel free to correct me).
Firstly, every topic has a number of partitions, and all the messages in those partitions are stored on the hard disk(s) in sequential order.
A KStream does not need to store the messages that are read from a topic in another location again, because the offset is sufficient to retrieve those messages from the topic it is connected to.
(Is this correct?)
The question regards the KTable. As I understand it, a KTable, in contrast with a KStream, keeps only the latest message for each key. In order to do that, you have to either store the messages that arrive from the topic externally in a static table, or read the whole message queue each time a new message arrives. The latter does not seem very efficient in terms of time performance. Is the first approach I presented correct?
read the whole message queue each time a new message arrives.
All messages are only read at the fresh start of the application. Once the app reads up to the latest offset, it's just updating the table like any other consumer
How disk usage is determined ultimately depends on the state store you've configured for the application, along with its own settings: for example, in-memory vs. RocksDB vs. an external state store interface that you've written on your own.
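For instance (a sketch; topic and store names are illustrative):

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.Stores;

StreamsBuilder builder = new StreamsBuilder();

// RocksDB-backed store: state spills to local disk
builder.table("topic-one",
        Materialized.<String, String>as(Stores.persistentKeyValueStore("disk-store")));

// in-memory store: no local disk usage, but bounded by available heap
builder.table("topic-two",
        Materialized.<String, String>as(Stores.inMemoryKeyValueStore("mem-store")));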
I've been reading a few articles about using Kafka and Kafka Streams (with state store) as Event Store implementation.
https://www.confluent.io/blog/event-sourcing-using-apache-kafka/
https://www.confluent.io/blog/event-sourcing-cqrs-stream-processing-apache-kafka-whats-connection/
The implementation idea is the following:
Store entity changes (events) in a Kafka topic
Use Kafka Streams with a state store (by default RocksDB) to update and cache the entity snapshot
Whenever a new command is executed, get the entity from the store, execute the operation on it, and continue with step #1
The issue with this workflow is that the state store is updated asynchronously (step 2), and when a new command is processed, the retrieved entity snapshot might be stale (as it was not updated with events from previous commands).
Is my understanding correct? Is there a simple way to handle such case with kafka?
Is my understanding correct?
As far as I have been able to tell, yes -- which means that it is an unsatisfactory event store for many event-sourced domain models.
In short, there's no support for "first writer wins" when adding events to a topic, which means that Kafka doesn't help you ensure that the topic satisfies its invariants.
There have been proposals/tickets to address this, but I haven't found evidence of progress.
https://issues.apache.org/jira/browse/KAFKA-2260
https://cwiki.apache.org/confluence/display/KAFKA/KIP-27+-+Conditional+Publish
Yes, there is a simple way.
Use a key for your Kafka messages. Messages with the same key always* go to the same partition.
One consumer can read from one or many partitions, but one partition cannot be read by two consumers (in the same group) simultaneously.
The maximum number of working consumers is always <= the number of partitions for a topic. You can create more consumers, but the extra consumers will just be backup nodes.
Something like this example:
Assumptions:
There is a Kafka topic abc with partitions p0, p1.
There is consumer C1 consuming from p0, and consumer C2 consuming from p1. The consumers work asynchronously.
km(key, command) - a Kafka message.
#Producer creating messages:
km(key1,add) -> p0
km(key2,add) -> p1
km(key1,edit) -> p0
km(key3,add) -> p1
km(key3,edit) -> p1
#Consumer C1 will read messages km(key1,add), km(key1,edit), and the order will be preserved
#Consumer C2 will read messages km(key2,add), km(key3,add), km(key3,edit)
If you write commands to Kafka and then materialize a view in Kafka Streams, the materialized view will be updated asynchronously. This helps you separate writes from reads so the read path can scale.
If you want consistent read-write semantics over your commands/events you might be better writing to a database. Events can either be extracted from the database into Kafka using a CDC connector (write-through) or you can write to the database and then to Kafka in a transaction (write-aside).
Another option is to implement long polling on the read (so if you write trade1.version2 then want to read it again the read will block until trade1.version2 is available). This isn't suitable for all use cases but it can be useful.
Example here: https://github.com/confluentinc/kafka-streams-examples/blob/4eb3aa4cc9481562749984760de159b68c922a8f/src/main/java/io/confluent/examples/streams/microservices/OrdersService.java#L165
The Command Pattern that you want to implement is already a part of the Akka framework. I don't know whether you have experience with the framework or not, but I strongly advise you to look there before you implement your own solution.
Also, given the amount of events that we receive in today's IT, I advise integrating it with a state machine.
If you'd like to see how we can put it all together, please check my blog :)