Currently I have 3 Kafka brokers with 150 partitions.
I also have 3 consumers, each assigned to a group of partitions.
Each consumer has its own local state store backed by RocksDB. This local key-value store is queried during gRPC calls. During rebalancing (if a consumer disappears), the data is written to the local stores of the other consumers.
After the consumers have been running for around 2 weeks, the services seem to run out of memory.
Is there a solution to the local storage growing too much? Can we remove data of partitions that are not needed anymore? Or is there a way to remove the stored data after the consumer is restored?
You can use the cleanUp() method when starting or shutting down the Kafka Streams application to clean up the local state storage.
cleanUp()
Do a clean up of the local StateStore by deleting all data with regard
to the application ID. May only be called either before this
KafkaStreams instance is started by calling start(), or
after the instance is closed by calling close().
KafkaStreams app = new KafkaStreams(builder.build(), props);
// Delete the application's local state.
// Note: In real application you'd call `cleanUp()` only under
// certain conditions. See tip on `cleanUp()` below.
app.cleanUp();
app.start();
Note: To avoid the corresponding recovery overhead, you should not call
cleanUp() by default but only if you really need to. Otherwise, you wipe out local state and trigger an expensive state restoration. You
won't lose data and the program will still be correct, but you may
slow down startup significantly (depending on the size of your state).
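For illustration only, here is one way to gate that call; the system property name below is made up and not part of the Kafka Streams API:
KafkaStreams app = new KafkaStreams(builder.build(), props);

// Hypothetical switch (-Dapp.reset.local.state=true): wipe the local state only when
// explicitly requested (e.g. to reclaim disk from obsolete stores), not on every start.
if (Boolean.getBoolean("app.reset.local.state")) {
    app.cleanUp(); // allowed only before start() or after close()
}
app.start();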
In case you are looking to delete entries from the state store during the lifecycle of your Kafka Streams application, you can very well remove them from the state store; after all, it is just a key-value store persisted in RocksDB.
Assume you are using the Kafka Streams Processor API:
// Obtain the store that was registered under "localstorename"
KeyValueStore<String, String> dsStore =
        (KeyValueStore<String, String>) context.getStateStore("localstorename");

// Iterate over all entries and delete them; close the iterator to free RocksDB resources
try (KeyValueIterator<String, String> iter = dsStore.all()) {
    while (iter.hasNext()) {
        KeyValue<String, String> entry = iter.next();
        dsStore.delete(entry.key);
    }
}
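If you only need to drop a subset of the keys rather than everything, a range scan avoids touching the rest of the store. This is just a sketch; the "tenant-42:" key prefix is a made-up convention, not something from the question:
// Delete only keys that fall inside a given range (here: everything with a made-up prefix).
try (KeyValueIterator<String, String> iter = dsStore.range("tenant-42:", "tenant-42:\uffff")) {
    while (iter.hasNext()) {
        dsStore.delete(iter.next().key);
    }
}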
I created a Kafka topic and sent some messages to it.
I created an application containing the streams topology. Basically, it summed the values and materialized the result to a local state store.
I saw the new directory was created in the state folder I configured.
I could read the sum from the local state store.
Everything was good so far.
Then, I turned off my application which was running the stream.
I removed the directory created in my state folder.
I restarted the Kafka cluster.
I restarted my application which has the stream topology.
In my understanding, the state was gone and Kafka needed to do the aggregation again. But it did not. I was still able to get the previous sum result.
How come? Where did Kafka save the local state store?
Here is my code
Reducer<Double> reduceFunction = (subtotal, amount) -> {
// detect when the reducer is triggered
System.out.println("reducer is running to add subtotal with amount..." + amount);
return subtotal + amount;
};
groupedByAccount.reduce(reduceFunction,
Materialized.<String, Double, KeyValueStore<Bytes, byte[]>>as(BALANCE).withValueSerde(Serdes.Double()));
I explicitly put the System.out.println in the reduceFunction, so whenever it is executed I should see output on the console.
But I did not see any output after restarting the Kafka cluster and my application.
Does Kafka really recover the state? Or does it save the state somewhere else?
If I'm not mistaken, Designing Event-Driven Systems by Ben Stopford (free book) states on page 137:
We could store these stats in a state store and they'll be saved locally as well as being backed up to
Kafka, using what's called a changelog topic, inheriting all of Kafka's durability
guarantees.
It seems like a copy of your state store is also backed up in Kafka itself (i.e. a changelog topic). Restarting a cluster does not flush out (or remove) messages already in a topic, since the brokers persist them on disk.
So once you restart your cluster and application again, the local state store is recovered from Kafka.
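As a rough illustration of that changelog backup (reusing the BALANCE store from the code above): logging is enabled by default for materialized stores, and can be tuned or switched off explicitly. This is a sketch, not the asker's code:
// Changelog-backed store (the default): the state is replicated to an internal
// "<application.id>-<store name>-changelog" topic and restored from it after a restart.
Materialized<String, Double, KeyValueStore<Bytes, byte[]>> materialized =
        Materialized.<String, Double, KeyValueStore<Bytes, byte[]>>as(BALANCE)
                .withValueSerde(Serdes.Double())
                .withLoggingEnabled(new HashMap<>()); // optional extra topic configs go here

// .withLoggingDisabled() would make the store truly local-only; then deleting the
// state directory really would lose the aggregation state.
groupedByAccount.reduce(reduceFunction, materialized);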
I built a Kafka Streams application with a state store. Now I am trying to scale this application. When running it on three different servers, Kafka splits up the partitions and state stores randomly.
For example:
Instance1 gets: partition-0, partition-1
Instance2 gets: partition-2, stateStore-repartition-0
Instance3 gets: stateStore-repartition-1, stateStore-repartition-2
I want to assign one stateStore and one partition per instance. What am I doing wrong?
My KafkaStreams Config:
final Properties properties = new Properties();
properties.setProperty(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
properties.setProperty(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS_CONFIG);
try {
properties.setProperty(StreamsConfig.STATE_DIR_CONFIG,
Files.createTempDirectory(stateStoreName).toAbsolutePath().toString());
} catch (final IOException e) {
// use the default one
}
And my stream is:
stream.groupByKey()
.windowedBy(TimeWindows.of(timeWindowDuration))
.<TradeStats>aggregate(
() -> new TradeStats(),
(k, v, tradestats) -> tradestats.add(v),
Materialized.<String, TradeStats, WindowStore<Bytes, byte[]>>as(stateStoreName)
.withValueSerde(new TradeStatsSerde()))
.toStream();
From what I can see so far (as mentioned in my comment on your question, please share your state store definition), everything is fine. I suspect a slight misconception on your side regarding the question
What am I doing wrong?
Basically, nothing. :-)
For the partition part of your question: they get distributed across the consumers according to the configured assignor (consult https://kafka.apache.org/26/javadoc/index.html?org/apache/kafka/clients/consumer/CooperativeStickyAssignor.html or related implementations).
For the state store part of your question: maybe there lies a little misconception about how (in-memory) state stores work. They are usually backed by a Kafka topic which does not reside on your application host(s) but in the Kafka cluster itself. To be more precise, a part of the whole state store lives in the local (RocksDB or in-memory) key/value store on each of your application hosts, exactly as the state store assignment in your question shows. However, these are only parts or slices of the complete state store, which is maintained in the Kafka cluster.
So in a nutshell: everything is fine, let Kafka do the assignment job and interfere with it only if you have really special use cases or good reasons. :-) Kafka also ensures correct redundancy and re-balancing of all partitions in case of outages of your application hosts.
If you still want to assign something on your own, the use-case would be interesting for further help.
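If you just want to verify the resulting placement rather than change it, Kafka Streams exposes it at runtime. A small sketch, assuming streams is your running KafkaStreams instance and that application.server is set on each instance so the host information is meaningful:
// Print which instance hosts which shard of the store (host info comes from the
// "application.server" config of each instance).
for (StreamsMetadata metadata : streams.allMetadataForStore(stateStoreName)) {
    System.out.println("Instance " + metadata.hostInfo()
            + " hosts partitions " + metadata.topicPartitions()
            + " for stores " + metadata.stateStoreNames());
}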
Currently I have the following setup:
StoreBuilder storeBuilder = Stores.keyValueStoreBuilder(
Stores.persistentKeyValueStore("kafka.topics.table"),
new SomeKeySerde(),
new SomeValueSerde());
streamsBuilder.addStateStore(storeBuilder);
final KStream<byte[], SomeClass> requestsStream = streamsBuilder
.stream("myTopic", Consumed.with(Serdes.ByteArray(), theSerde));
requestsStream
.filter((key, request) -> Objects.nonNull(request))
.process(() -> new SomeClassUpdater("kafka.topics.table", maxNumMatches), "kafka.topics.table");
Properties streamsConfiguration = loadConfiguration();
KafkaStreams streams = new KafkaStreams(streamsBuilder.build(), streamsConfiguration);
streams.start();
Why do I need the local state store, since I'm not doing any other computation with it and the data is also stored in the Kafka changelog? Also, at what moment is the data written to the local store: does it store locally and commit to the changelog at the same time?
The problem I'm facing is that I'm storing locally and over time I run into memory problems, especially when repartitioning happens often, because the old partitions still sit around and fill up memory.
So my question is: why do we need the persistence with RocksDB, since:
the data is persisted in the Kafka changelog
the RAM disk is gone anyway when the container is gone?
On a single thread we can have multiple tasks, up to the number of partitions of the topic. Each partition has its own state store, and these state stores save their data to a changelog, which is an internal Kafka topic. An instance can also maintain standby replicas of the state stores of other partitions, in order to recover the data of a partition whose task fails.
Without a local state store, when one of your tasks fails, it has to go to the internal topic, i.e. the changelog, and re-fetch all the data for the partition, which is time consuming. Maintaining a local state store (plus standby replicas) reduces recovery time, because a failed task can pick up the data from another instance's state store almost immediately.
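The replica part above is not on by default; it refers to standby replicas, which are enabled via num.standby.replicas. A minimal sketch (application id and bootstrap servers are placeholders):
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");            // placeholder
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
// Keep one warm copy of every store shard on another instance, so a failed task
// can resume from the standby instead of replaying the whole changelog topic.
props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);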
Consider a system that consumes an event stream from Kafka in order to analyze records stored in a database.
In some cases, an event matches a condition that means the corresponding record should be analyzed later in the future.
Perhaps the simplest solution to implement this logic is to write the timestamp of the future processing to the database and periodically run some kind of select to find the records due for re-processing.
Is there a more convenient and scalable way to do it? It looks like another timestamped event stream that should be processed once the current time becomes greater than or equal to the event's timestamp. What are the options to implement such behavior?
In my opinion, depending on how long you need to store these events, you can just create a stream that filters for them and pushes them into a new topic that can be processed later, for example as sketched below. If it is more for historical purposes, then it might be better to push them into a DBMS.
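A minimal sketch of that idea; the topic names, the Event type, and the needsLaterProcessing() predicate are all placeholders, not part of the question:
// Route events that must be looked at again later into their own topic.
StreamsBuilder builder = new StreamsBuilder();
KStream<String, Event> events = builder.stream("events");            // hypothetical source topic
events.filter((key, event) -> event.needsLaterProcessing())          // hypothetical predicate
      .to("events-deferred");                                        // consumed later by another job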
You can try a state store in Kafka Streams, which can be used by stream processing applications to store and query data later.
Kafka Streams automatically creates and manages such state stores when you call stateful operators such as count() or aggregate(), or when you window a stream. By default the store can be in-memory, but you can back it with persistent storage (e.g. Portworx volumes) to handle fault scenarios.
Below shows how to initialize the StateStore:
StoreBuilder<KeyValueStore<String, String>> statStore = Stores
.keyValueStoreBuilder(Stores.persistentKeyValueStore("uniqueName"), Serdes.String(),
Serdes.String())
.withLoggingDisabled(); // disable backing up the store to a change log topic
Below shows how to add the state store to the Kafka Streams topology:
Topology builder = new Topology();
builder.addSource("Source", topic)
.addProcessor("SourceProcessName", () -> new ProcessorClass(), "Source")
.addStateStore(statStore, "SourceProcessName")
.addSink("SinkProcessName", sinkTopic, "SourceProcessName");
In the process() method, you can store each Kafka record in the state store as a key/value pair and read it back later. Note that the store must be looked up by the name it was registered with ("uniqueName" above), not by the Java variable name:
KeyValueStore<String, String> dsStore = (KeyValueStore<String, String>) context.getStateStore("uniqueName");
dsStore.put(key, value);                    // in process(): store the incoming record
// To read it back later, iterate and close the iterator when done:
try (KeyValueIterator<String, String> iter = dsStore.all()) {
    while (iter.hasNext()) {
        KeyValue<String, String> entry = iter.next();
    }
}
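To come back to the original question (re-process events once their due time has passed), one possible sketch is to pair the store with a wall-clock punctuator registered in the processor's init(). The assumption that the stored value holds the due timestamp as a string is mine, not the answer's:
// Every minute, forward the entries whose due timestamp has passed and remove them.
context.schedule(Duration.ofMinutes(1), PunctuationType.WALL_CLOCK_TIME, now -> {
    List<String> due = new ArrayList<>();
    try (KeyValueIterator<String, String> iter = dsStore.all()) {
        while (iter.hasNext()) {
            KeyValue<String, String> entry = iter.next();
            if (Long.parseLong(entry.value) <= now) {   // assumes the value is the due timestamp
                due.add(entry.key);
            }
        }
    }
    for (String key : due) {
        context.forward(key, dsStore.get(key));         // re-emit for downstream processing
        dsStore.delete(key);                            // and drop it from the store
    }
});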
I am running multiple Kafka Streams consumer instances (2 instances) on my local machine, each having its own custom local store with a different name.
As per the documentation, if one of the instances goes down then Kafka has to sync the store of the dead instance into the store of the instance that is still alive (correct me if I am wrong).
I have configured both instances with the same application id to let Kafka know these instances belong to the same group.
When one of the instances is killed, the store of the other (alive) instance does not get synced with the store of the dead instance. I have enabled the changelog topic on both stores.
However, when I use the same store name in both instances, the stores get synced as expected; I am not sure whether these instances are then pointing to one store. I have a different StreamsConfig.STATE_DIR_CONFIG location for each of the two instances.
Please let me know if I am missing something. Can the store name be different on different instances of the application? Does Kafka automatically take care of replaying the changelog topic onto the new instance's store?
//below is my stream configuration
@Bean
public KafkaStreams kafkaStreams(KafkaProperties properties,
        @Value("${spring.application.name}") String appName) {
final Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, appName);
props.put(StreamsConfig.CLIENT_ID_CONFIG, "client2");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, properties.getBootstrapServers());
props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, JsonSerde.class);
props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 10 * 1000);
//props.put(StreamsConfig.STATE_DIR_CONFIG, "/tmp/kafka-streams1");
props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, "1");
props.put(StreamsConfig.consumerPrefix(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG),
new RoundRobinAssignor().getClass().getName());
props.put("auto.offset.reset", "earliest");
final KafkaStreams kafkaStreams = new KafkaStreams(kafkaStreamTopology(), props);
System.out.println("Invoked kafkaStreams");
//kafkaStreams.cleanUp();
kafkaStreams.start();
return kafkaStreams;
}
I am running multiple Kafka Streams consumer instances (2 instances) on my local machine, each having its own custom local store with a different name.
This sounds incorrect. If you run multiple instances with the same application.id (i.e., group.id), all instances must execute the same code. (I am wondering why your application does not crash in the first place.)
I am not 100% sure what you are trying to achieve. It might be helpful if you could share your topology code.
Note that Kafka Streams shards logical stores based on the number of input topic partitions (cf. https://docs.confluent.io/current/streams/architecture.html). Maybe you are confusing shards with logical stores?
If you want to have two logical stores, each with one shard, you can still run multiple instances and the stores will be executed on different instances (and fail-over will work too). However, you still need to "include" both stores on both instances on startup, as the sketch below illustrates.
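A rough sketch of that last point, with made-up store names: both instances build the exact same topology declaring both stores, and Kafka Streams decides which instance actually hosts which one.
// Both instances must run this same topology; "store-a" and "store-b" are placeholders.
StreamsBuilder builder = new StreamsBuilder();
builder.addStateStore(Stores.keyValueStoreBuilder(
        Stores.persistentKeyValueStore("store-a"), Serdes.String(), Serdes.String()));
builder.addStateStore(Stores.keyValueStoreBuilder(
        Stores.persistentKeyValueStore("store-b"), Serdes.String(), Serdes.String()));
// ...attach processors to each store as usual; which instance ends up hosting
// "store-a" vs. "store-b" (and their standbys) is decided by Kafka Streams.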