I created a Kafka topic and sent some messages to it.
I created an application with a streams topology. Basically, it computed a running sum and materialized it to a local state store.
I saw that a new directory was created in the state folder I configured.
I could read the sum from the local state store.
Everything was good so far.
Then, I turned off my application which was running the stream.
I removed the directory created in my state folder.
I restarted the Kafka cluster.
I restarted my application which has the streams topology.
In my understanding, the state should have been gone and Kafka would need to do the aggregation again. But it did not: I was still able to get the previous sum result.
How come? Where did Kafka save the local state store?
Here is my code
Reducer<Double> reduceFunction = (subtotal, amount) -> {
    // detect when the reducer is triggered
    System.out.println("reducer is running to add subtotal with amount..." + amount);
    return subtotal + amount;
};

groupedByAccount.reduce(reduceFunction,
        Materialized.<String, Double, KeyValueStore<Bytes, byte[]>>as(BALANCE).withValueSerde(Serdes.Double()));
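(For reference, groupedByAccount and BALANCE are defined elsewhere in the app; the upstream part of the topology presumably looks something like the sketch below, where the input topic name and serdes are assumptions.)

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KGroupedStream;
import org.apache.kafka.streams.kstream.KStream;

StreamsBuilder builder = new StreamsBuilder();

// "transactions" is a hypothetical input topic of (accountId, amount) records
KStream<String, Double> transactions =
        builder.stream("transactions", Consumed.with(Serdes.String(), Serdes.Double()));

// group by account key so the reduce above sums per account;
// BALANCE is the store name constant, e.g. "balance"
KGroupedStream<String, Double> groupedByAccount =
        transactions.groupByKey(Grouped.with(Serdes.String(), Serdes.Double()));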
I explicitly put the System.out.println in the reduceFunction, so whenever it executes I should see the message on the console.
But I did not see any output after restarting the Kafka cluster and my application.
Does Kafka really recover the state? Or does it save the state somewhere else?
If I'm not mistaken, then according to Designing Event-Driven Systems by Ben Stopford (free book), page 137 states:
We could store these stats in a state store and they'll be saved locally as well as being backed up to Kafka, using what's called a changelog topic, inheriting all of Kafka's durability guarantees.
It seems like a copy of your state store is also backed up in Kafka itself (i.e. in a changelog topic). I don't think restarting a cluster will flush out (or remove) messages already in a topic, since they are persisted on the brokers' disks (with the topic metadata tracked in ZooKeeper).
So once you restart your cluster and application again, the local state store is recovered from the changelog topic in Kafka.
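One way to convince yourself of this is to consume the changelog topic directly with a plain consumer. Kafka Streams names it <application.id>-<store-name>-changelog, so the sketch below assumes an application id of "balance-app" and a store named "balance"; substitute your own values:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.DoubleDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "changelog-inspector"); // hypothetical group id
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

try (KafkaConsumer<String, Double> consumer =
         new KafkaConsumer<>(props, new StringDeserializer(), new DoubleDeserializer())) {
    // changelog topic name follows <application.id>-<store-name>-changelog
    consumer.subscribe(Collections.singletonList("balance-app-balance-changelog"));
    for (ConsumerRecord<String, Double> record : consumer.poll(Duration.ofSeconds(5))) {
        System.out.println(record.key() + " -> " + record.value());
    }
}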
Related
I'm working with Spring Cloud Stream and I have a BiFunction that receives a KStream and a GlobalKTable. I don't want to lose the GlobalKTable data after my application restarts, but that is what is happening.
@Bean
public BiFunction<KStream<String, MyClass1>, GlobalKTable<String, MyClass2>, KStream<String, MyClass3>> process() {
    ...
}
I've also configured the "materializedAs" property:
spring.cloud.stream.kafka.streams.bindings.process-in-1.consumer.materializedAs: MYTABLE
I have a topic A that has a retention time of 1 week. So, if a message from topic A was erased due to the retention time and my application restarts, the GlobalKTable can no longer find this message.
Should the GlobalKTable data really be erased when my application restarts?
A GlobalKTable always restores from the input topic directly; it builds its state store from the input topic. If the state store is already there and in sync with the input topic, I believe the restore on startup will be faster (therefore, if you are using Spring for Apache Kafka < 2.7, you need to do what Gary suggested above). However, if the input topic is completely removed, then the state store needs to be rebuilt entirely from scratch from the new input topic. That is the reason why you are not seeing any data restored on startup after deleting the topic. This thread has some more details on this topic.
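For illustration, here is roughly the plain Kafka Streams equivalent of such a binding (the topic name is an assumption and String stands in for MyClass2). The point is that the store named MYTABLE is rebuilt from the input topic itself, so anything the topic has already deleted via retention cannot be restored:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

StreamsBuilder builder = new StreamsBuilder();

// The GlobalKTable's store is restored directly from this topic on startup,
// so records the topic has already removed via retention cannot come back.
GlobalKTable<String, String> table = builder.globalTable(
        "topic-a", // hypothetical input topic with 1-week retention
        Consumed.with(Serdes.String(), Serdes.String()),
        Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("MYTABLE"));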
See the binder documentation.
By default, the KafkaStreams.cleanUp() method is called when the binding is stopped. See the Spring Kafka documentation. To modify this behavior, simply add a single CleanupConfig @Bean (configured to clean up on start, stop, or neither) to the application context; the bean will be detected and wired into the factory bean.
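For example, a minimal sketch of such a bean (the class name is arbitrary) that cleans up neither on start nor on stop, so the local state survives restarts:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.CleanupConfig;

@Configuration
public class StateRetentionConfig {

    // cleanupOnStart = false, cleanupOnStop = false:
    // keep the local state directory across application restarts
    @Bean
    public CleanupConfig cleanupConfig() {
        return new CleanupConfig(false, false);
    }
}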
Spring for Apache Kafka 2.7 and later does not remove the state by default any longer: https://github.com/spring-projects/spring-kafka/commit/eff205404389b563849fdd4dceb52b23aeb38f20
I have a Kafka Streams application with multiple joins on KTables. Finally, I am using the Processor API to build a state store and perform some business logic. Earlier I had a purge job which deleted the Kafka logs and Kafka Streams' state dir every morning, so that only today's data, produced by my producers, would be processed. Up to that point everything was working as expected.
But deleting the Kafka log directory is not a good approach, so I decided to use cleanup.policy to delete data from Kafka while still deleting the Kafka Streams state dirs. I think this approach is causing a problem in the state stores, where data is still being restored from the changelog topics on app startup.
Is there a way to purge the entire data set from Kafka and all of Kafka Streams' state stores along with the changelog topics?
Appreciate your help.
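(For reference, the state-dir part of such a purge can also be done programmatically: KafkaStreams.cleanUp() removes the application's local state directory and may only be called before start() or after close(). A minimal sketch with assumed config values:)

import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app"); // hypothetical id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

StreamsBuilder builder = new StreamsBuilder();
builder.stream("input-topic"); // placeholder source so the topology is non-empty

KafkaStreams streams = new KafkaStreams(builder.build(), props);
streams.cleanUp(); // deletes <state.dir>/<application.id> before the app starts
streams.start();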
I have a Global State Store attached to my topology. The global state store reads from a compacted topic. It holds 100,000 records, and these records need to be in the state store for correct processing of the topology.
Question:
Q. During an application restart, will Kafka Streams start the global state store thread and make sure that the state is fully built before starting the stream threads?
I am trying to find some documentation related to this topic. Please point me to code or documentation as well.
It depends on whether there is still any state remaining in the state.dir local filesystem for your application.id.
If there is, the app will resume rebuilding data onto that existing store. Otherwise, the topic will have to be consumed from the beginning to recreate that data.
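Concretely, the two settings that determine where that local state lives are state.dir and application.id (values below are examples only):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
// local state is kept under <state.dir>/<application.id>/ and survives restarts
// for as long as that directory is left in place
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "global-store-app");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(StreamsConfig.STATE_DIR_CONFIG, "/var/lib/kafka-streams");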
I'm currently using Kafka Streams for a stateful application. The state is not stored in a Kafka state store, though, but rather just in memory for the time being. This means whenever I restart the application, all state is lost and has to be rebuilt by processing all records from the start.
After doing some research on Kafka state stores, this seems to be exactly the solution I'm looking for to persist state between application restarts (either in memory or on disk). However, I find the resources online lack some pretty important details, so I still have a couple of questions on how this would work exactly:
If the stream is set to start from offset latest, will the state still be (re)calculated from all the previous records?
If previously already processed records need to be reprocessed in order to rebuild the state, will this propagate records through the rest of the Streams topology (e.g. InputTopic -> stateful processor -> OutputTopic, will this result in duplicated records in the OutputTopic because of rebuilding state)?
State stores use their own changelog topics, and kafka-streams state stores take on responsibility for loading from them. If your state stores are uninitialised, your kafka-streams app will rehydrate its local state store from the changelog topic using EARLIEST, since it has to read every record.
This means the startup sequence for a brand new instance is roughly:
Observe there is no local state-store cache
Load the local state store by consuming from the changelog topic for the state store (the changelog topic name is <application.id>-<state-store-name>-changelog)
Read each record and update a local RocksDB instance accordingly
Emit nothing downstream during this restoration, since it is internal bookkeeping, not your actual topology
Determine your consumer group's starting offsets, using EARLIEST or LATEST according to how you configured the topology. Note this is only a concern if your consumer group doesn't have any offsets yet
Process stuff, emitting records according to the topology
Whether you set your actual topology's auto.offset.reset to LATEST or EARLIEST is up to you. In the event the offsets are lost, or you create a new group, it's a balance between potentially skipping records (LATEST) vs. handling reprocessing of old records and deduplication (EARLIEST).
Long story short: state restoration is different from processing, and is handled by kafka-streams itself.
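If you want to see this restore phase for yourself, you can attach a restore listener before starting the application. A minimal sketch (application id, topic, and the placeholder topology are assumptions):

import java.util.Properties;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.StateRestoreListener;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-stateful-app"); // hypothetical id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

StreamsBuilder builder = new StreamsBuilder();
builder.stream("input-topic"); // placeholder source; your real topology goes here

KafkaStreams streams = new KafkaStreams(builder.build(), props);

// Log the restore phase so it is visible that restoration runs before processing
streams.setGlobalStateRestoreListener(new StateRestoreListener() {
    @Override
    public void onRestoreStart(TopicPartition tp, String store, long start, long end) {
        System.out.printf("Restoring %s for %s: offsets %d..%d%n", store, tp, start, end);
    }

    @Override
    public void onBatchRestored(TopicPartition tp, String store, long batchEnd, long numRestored) {
        System.out.printf("Restored a batch of %d records into %s%n", numRestored, store);
    }

    @Override
    public void onRestoreEnd(TopicPartition tp, String store, long totalRestored) {
        System.out.printf("Finished restoring %s (%d records total)%n", store, totalRestored);
    }
});

streams.start();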
If the stream is set to start from offset latest, will the state still be (re)calculated from all the previous records?
If you are re-launching the same application (e.g. after having stopped it before), then state will not be recalculated by reprocessing the original input data. Instead, the state will be restored from its "backup" (every state store or KTable is durably stored in a Kafka topic, the so-called "changelog topic" of that table/state store for such purposes) so that its data is exactly what it was when the application was stopped. This behavior enables you to seamlessly stop+restart your applications without skipping over records that arrived between "stop" and "restart".
But there is a different caveat that you need to be aware of: The configuration to set the offset start point (latest or earliest) is only used when you run your Kafka Streams application for the first time. Afterwards, whenever you stop+restart your application, it will always continue where it previously stopped. That's because, if the app has run at least once, it has stored its consumer offset information in Kafka, which allows it to know from where to automatically resume operations once it is being restarted.
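To make that concrete, the offset start point is supplied through the usual Streams configuration, but it is only consulted when the application's consumer group has no committed offsets yet (first run, or after a reset):

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app"); // hypothetical id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// ignored once the group has committed offsets; Streams then resumes from those
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // or "latest"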
If you need the different behavior of always (re)starting from e.g. the latest offsets (thus potentially skipping records that arrived in between when you stopped the application and when you restarted it), you must reset your Kafka Streams application. One of the steps the reset tool performs is removing the application's consumer offset information from Kafka, which makes the application think that it was never started before, so to speak.
If previously already processed records need to be reprocessed in order to rebuild the state, will this propagate records through the rest of the Streams topology (e.g. InputTopic -> stateful processor -> OutputTopic, will this result in duplicated records in the OutputTopic because of rebuilding state)?
This reprocessing will not happen by default as explained above. State will be automatically reconstructed to its prior state (pun intended) at the point when the application was stopped.
Reprocessing would only happen if you manually reset your application (see above) and e.g. configure the application to re-read historical data (like setting auto.offset.reset to earliest after you did the reset).
I have a kafka streams app which is currently just joining two KStreams with a 5-minute window and writing the join result to another topic.
Since I am joining two topics over a time window, my app will have state associated with it. I was under the impression that the state stores in my app would get pruned after every 5-minute window (because my app only cares about the 5-minute window of events for the join state).
I was expecting constant disk-space utilization, but that does not seem to be the case. It's been 12 hours and I do not see the state store getting cleaned up; it is consistently growing.
So I have multiple concerns about this now:
When does Kafka Streams app clean up its state?
If one of the app instances in the Kafka Streams cluster fails, and I boot another host and make it join the cluster, is there, after rebalancing, an orphaned state store sitting on the disk for the partitions that got rebalanced away?
My understanding is that events are joined only if they happen within the defined window, so why does Kafka need to hold on to data older than the defined window period in its state store?
Let me know if you need any other information from me regarding my streams app. I am currently running kafka-streams version 2.2.1 and my brokers are also on the same version.
When does Kafka Streams app clean up its state?
The size of the state depends on the retention period, which is 1 day by default.
At the moment, it's not possible to change the retention period for KStream-KStream joins -- adding this feature is already work in progress: https://issues.apache.org/jira/browse/KAFKA-8558
If one of the app instances in the Kafka Streams cluster fails, and I boot another host and make it join the cluster, is there, after rebalancing, an orphaned state store sitting on the disk for the partitions that got rebalanced away?
Yes. However, this state will be cleaned up (if you restart Kafka Streams on the recovered host) if it is not reused within a configurable period of time (state.cleanup.delay.ms).
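That delay is a regular Streams setting; a sketch showing it alongside its default value:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
// how long local state for tasks no longer assigned to this instance is kept
// before the cleanup thread removes it; the default is 600000 ms (10 minutes)
props.put(StreamsConfig.STATE_CLEANUP_DELAY_MS_CONFIG, 600_000);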
My understanding is that events are joined only if they happen within the defined window, so why does Kafka need to hold on to data older than the defined window period in its state store?
Having a retention period larger than your window size allows Kafka Streams to process out-of-order data. Note that Kafka Streams uses event-time semantics, not processing-time semantics.
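To make the window-vs-retention distinction concrete, here is a minimal sketch of the kind of 5-minute join in question (topic names and value types are assumptions). The 5 minutes only bound which records can join; the backing window stores keep records for the retention period (1 day by default in 2.2.x) so that out-of-order events can still be matched:

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> left =
        builder.stream("left-topic", Consumed.with(Serdes.String(), Serdes.String()));
KStream<String, String> right =
        builder.stream("right-topic", Consumed.with(Serdes.String(), Serdes.String()));

// 5-minute join window; the backing window stores retain records for the
// default retention so late/out-of-order events can still be joined
left.join(right,
          (l, r) -> l + "|" + r,
          JoinWindows.of(Duration.ofMinutes(5)),
          Joined.with(Serdes.String(), Serdes.String(), Serdes.String()))
    .to("joined-topic", Produced.with(Serdes.String(), Serdes.String()));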