Clean up changelog topic backing session windows - apache-kafka

We are aggregating in session windows by using the following code:
.windowedBy(SessionWindows.with(...))
.aggregate(..., ..., ...)
The state store that is created for us automatically is backed by a changelog topic with cleanup.policy=compact.
When redeploying our topology, we found that restoring the state store took much longer than expected (10+ minutes). The explanation seems to be that even though a session has been closed, it is still present in the changelog topic.
We noticed that session windows have a default maintain duration of one day but even after the inactivity + maintain durations have been exceeded, it does not look like messages are removed from the changelog topic.
a) Do we need to manually delete "old" (by our definition) messages to keep the size of the changelog topic under control? (This may be the case, as hinted at by [1].)
b) Would it be possible to somehow have the changelog topic created with cleanup.policy=compact,delete and would that even make sense?
[1] A session store seems to be created internally by Kafka Streams' UnwindowedChangelogTopicConfig (and not WindowedChangelogTopicConfig), which may make this comment from "Kafka Streams - reducing the memory footprint for large state stores" relevant: "For non-windowed store, there is no retention policy. The underlying topic is compacted only. Thus, if you know, that you don't need a record anymore, you would need to delete it via a tombstone. But it's a little tricky to achieve... – Matthias J. Sax Jun 27 '17 at 22:07"

You are hitting a bug. I just created a ticket for this: https://issues.apache.org/jira/browse/KAFKA-7101.
I would recommend modifying the topic config manually to fix the issue in your deployment.
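Until the fix is released, a minimal sketch of that manual change using Kafka's AdminClient (available since Kafka 2.3; the topic name here is a placeholder following the <application.id>-<store name>-changelog naming pattern):

    import java.util.Arrays;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AlterConfigOp;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    public class FixChangelogCleanupPolicy {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC,
                        "my-app-my-session-store-changelog"); // placeholder name
                admin.incrementalAlterConfigs(Map.of(topic, Arrays.asList(
                        // the policy a windowed changelog should have had:
                        new AlterConfigOp(new ConfigEntry("cleanup.policy", "compact,delete"),
                                AlterConfigOp.OpType.SET),
                        // retention must cover inactivity gap + maintain duration
                        new AlterConfigOp(new ConfigEntry("retention.ms", "172800000"),
                                AlterConfigOp.OpType.SET))))
                        .all().get();
            }
        }
    }

The same change can also be made with the kafka-configs.sh command-line tool if you prefer not to script it.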

Related

Apache Samza flush table update to changelog immediately

If I specify a changelog backing for a RocksDB table in Samza, is there a configuration option to change the async write time to the changelog? I want to reduce it to a shorter interval, but I cannot see anything in the config reference.
The scenario I want is to write to a changelog from a stream after bridging a legacy JMS connection. This legacy connection provides partial updates, and I want to merge the partial updates into a fuller message, building a cache of these messages in the Samza streaming application and writing them down to a changelog.
If I use a changelog configured with stores.store-name.changelog, then the changes I make to the Samza API Table will eventually be written to the changelog, but not quickly enough for my needs, so I want to configure the max wait time for propagating to the changelog.
Alternatively, it seems that using withSideInputs to bootstrap my table each time and then using sendTo will update faster; I could keep a local store to read and write the cache to, and always have the changelog as the golden source.
The reason I want the changelog to be written quickly is that other applications are reading from it.
Yes, you can configure how often changes are committed to the changelog using the config:
task.commit.ms
Docs
Then writes to the store will be flushed when the commit happens:
profileTable.put(message.key, message.value)
A note on this: higher volumes of input appear to result in changes reaching the changelog topic before this commit interval elapses. Also, be careful not to set it too low, as that will massively slow down overall throughput at higher volumes.
You can also use the low-level API to commit on a particular stream task: the TaskCoordinator provides a commit API for manual commits, as sketched below.
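A minimal sketch of that low-level route (assuming the Samza 1.x task API; the store name and types are placeholders):

    import org.apache.samza.context.Context;
    import org.apache.samza.storage.kv.KeyValueStore;
    import org.apache.samza.system.IncomingMessageEnvelope;
    import org.apache.samza.task.InitableTask;
    import org.apache.samza.task.MessageCollector;
    import org.apache.samza.task.StreamTask;
    import org.apache.samza.task.TaskCoordinator;

    public class ProfileCacheTask implements StreamTask, InitableTask {
        private KeyValueStore<String, String> profileTable;

        @Override
        @SuppressWarnings("unchecked")
        public void init(Context context) {
            // "profile-table" is a placeholder for your configured store name
            profileTable = (KeyValueStore<String, String>)
                    context.getTaskContext().getStore("profile-table");
        }

        @Override
        public void process(IncomingMessageEnvelope envelope,
                MessageCollector collector, TaskCoordinator coordinator) {
            // merge the partial update into the cache (merge logic omitted)
            profileTable.put((String) envelope.getKey(), (String) envelope.getMessage());
            // request a commit for this task now instead of waiting for
            // task.commit.ms; this also flushes the change to the changelog
            coordinator.commit(TaskCoordinator.RequestScope.CURRENT_TASK);
        }
    }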

Kafka - different configuration settings

I am going through the documentation, and there seem to be a lot of moving parts with respect to message processing, like exactly-once processing and at-least-once processing, with the relevant settings scattered here and there. There doesn't seem to be a single place that documents, even roughly, the properties that need to be configured for exactly-once and at-least-once processing.
I know there are many moving parts involved and it always depends. However, as I mentioned before: what, at a minimum, are the settings to be configured to provide exactly-once, at-most-once, and at-least-once processing?
You might be interested in the first part of the Kafka FAQ, which describes some approaches on how to avoid duplication during data production (i.e. on the producer side):
Exactly once semantics has two parts: avoiding duplication during data production and avoiding duplicates during data consumption.
There are two approaches to getting exactly once semantics during data production:
Use a single-writer per partition and every time you get a network error check the last message in that partition to see if your last write succeeded.
Include a primary key (UUID or something) in the message and deduplicate on the consumer.
If you do one of these things, the log that Kafka hosts will be duplicate-free. However, reading without duplicates depends on some co-operation from the consumer too. If the consumer is periodically checkpointing its position then if it fails and restarts it will restart from the checkpointed position. Thus if the data output and the checkpoint are not written atomically it will be possible to get duplicates here as well. This problem is particular to your storage system. For example, if you are using a database you could commit these together in a transaction. The HDFS loader Camus that LinkedIn wrote does something like this for Hadoop loads. The other alternative that doesn't require a transaction is to store the offset with the data loaded and deduplicate using the topic/partition/offset combination.
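For a concrete starting point, here is a minimal sketch of the client settings usually associated with each guarantee, using the modern idempotent/transactional producer that postdates the FAQ quote above (property names are from the standard Java client; the transactional id is a placeholder):

    import java.util.Properties;

    // At-most-once-leaning producer: fire and forget, no retries.
    Properties atMostOnce = new Properties();
    atMostOnce.put("acks", "0");
    atMostOnce.put("retries", "0");

    // At-least-once producer: wait for all in-sync replicas, retry on error.
    Properties atLeastOnce = new Properties();
    atLeastOnce.put("acks", "all");
    atLeastOnce.put("retries", String.valueOf(Integer.MAX_VALUE));

    // Exactly-once within Kafka: idempotent, transactional producer...
    Properties exactlyOnce = new Properties();
    exactlyOnce.put("enable.idempotence", "true");
    exactlyOnce.put("acks", "all");
    exactlyOnce.put("transactional.id", "my-producer-1"); // placeholder id

    // ...paired with a consumer that only reads committed records.
    Properties consumerProps = new Properties();
    consumerProps.put("isolation.level", "read_committed");

Note that the exactly-once settings only cover Kafka-to-Kafka flows; moving data into or out of external systems still needs the deduplication strategies from the quote.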

Fault tolerance in Flink file Sink

I am using Flink streaming with the Kafka consumer connector (FlinkKafkaConsumer) and the file sink (StreamingFileSink) in cluster mode with an exactly-once policy.
The file sink writes the files to the local disk.
I’ve noticed that if a job fails and automatic restart is on, the task managers look for the leftover files from the last failing job (hidden files).
Obviously, since the tasks can be assigned to different task managers, this adds up to more failures over and over again.
The only solution I found so far is to delete the hidden files and resubmit the job.
If I get it right (and please correct me if I'm wrong), the events in the hidden files were not committed to the bootstrap-server, so there is no data loss.
Is there a way, forcing Flink to ignore the files that were written already? Or maybe there is a better way to implement the solution (maybe somehow with savepoints)?
I got a very detailed answer on the Flink mailing list. TL;DR: in order to implement exactly-once, I have to use some kind of distributed FS.
The full answer:
A local filesystem is not the right choice for what you are trying to achieve. I don't think you can achieve a true exactly once policy in this setup. Let me elaborate on why.
The interesting bit is how it behaves on checkpoints. The behavior is controlled by a RollingPolicy. As you have not said what format you use, let's assume you use the row format first. For a row format, the default rolling policy (when to change the file from in-progress to pending) is that the file will be rolled if it reaches 128MB, is older than 60 sec, or has not been written to for 60 sec. It does not roll on a checkpoint. Moreover, StreamingFileSink considers the filesystem to be a durable sink that can be accessed after a restore. That implies that it will try to append to this file when restoring from a checkpoint/savepoint.
Even if you rolled the files on every checkpoint, you might still face the problem that you can have some leftovers, because the StreamingFileSink moves the files from pending to complete after the checkpoint is completed. If a failure happens between finishing the checkpoint and moving the files, it will not be able to move them after a restore (it would do so if it had access).
Lastly a completed checkpoint will contain offsets of records that were processed successfully end-to-end, which means records that are assumed committed by the StreamingFileSink. This can be records written to an in-progress file with a pointer in a StreamingFileSink checkpointed metadata, records in a "pending" file with an entry in a StreamingFileSink checkpointed metadata that this file has been completed or records in "finished" files.[1]
Therefore as you can see there are multiple scenarios when the StreamingFileSink has to access the files after a restart.
One last thing: you mentioned "committing to the bootstrap-server". Bear in mind that Flink does not use offsets committed back to Kafka for guaranteeing consistency. It can write those offsets back, but only for monitoring/debugging purposes. Flink stores/restores the processed offsets from its checkpoints. [3]
Let me know if it helped. I tried my best ;) BTW I highly encourage reading the linked sources as they try to describe all that in a more structured way.
I am also cc'ing Kostas, who knows more about the StreamingFileSink than I do, so he can maybe correct me somewhere.
[1] https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/connectors/streamfile_sink.html
[2] https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/connectors/kafka.html
[3] https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/connectors/kafka.html#kafka-consumers-offset-committing-behaviour-configuration
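For reference, setting those row-format rolling defaults explicitly looks like this (a sketch against the Flink 1.10 StreamingFileSink API; the path and encoder are placeholders):

    import java.util.concurrent.TimeUnit;
    import org.apache.flink.api.common.serialization.SimpleStringEncoder;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
    import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy;

    StreamingFileSink<String> sink = StreamingFileSink
            .forRowFormat(new Path("hdfs:///output"), new SimpleStringEncoder<String>("UTF-8"))
            .withRollingPolicy(DefaultRollingPolicy.builder()
                    .withMaxPartSize(128 * 1024 * 1024)                     // roll at 128 MB
                    .withRolloverInterval(TimeUnit.SECONDS.toMillis(60))    // or after 60 sec
                    .withInactivityInterval(TimeUnit.SECONDS.toMillis(60))  // or 60 sec idle
                    .build())
            .build();

Note that none of these conditions fire on a checkpoint, which is exactly the behavior described above, and the sink still needs to reach its in-progress and pending files after a restore, hence the distributed-filesystem path.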

Kafka Streams - How does it save the state internally

There are a lot of articles across the internet about the usage of Kafka Streams, but almost nothing about how it works internally.
Does it use any features of Kafka outside the standard set (let's call the librdkafka implementation the "standard")?
If it saves the state inside RocksDB (or any custom StateStore), how does it guarantee that the state save and the commit happen in one transaction?
The same question for the case when the state is saved in a compacted log (the commit and the log update should be in one transaction).
Thank you.
I found the answers by combining information from several threads here.
It uses transactions (see https://stackoverflow.com/a/54593467/414016), which are not (yet) supported by librdkafka.
It doesn't really rely on RocksDB; instead, it saves the state changes into the changelog (see https://stackoverflow.com/a/50264900/414016).
It does this using the transactions mentioned above.
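To illustrate the transactional part: enabling it in a Kafka Streams application is a single setting (a sketch; the constants are from the official Java client, where EXACTLY_ONCE maps to processing.guarantee=exactly_once):

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app"); // placeholder
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    // Wraps each task's output writes, changelog writes and offset commits
    // in a single Kafka transaction.
    props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);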

Synchronising transactions between database and Kafka producer

We have a micro-services architecture, with Kafka used as the communication mechanism between the services. Some of the services have their own databases. Say the user makes a call to Service A, which should result in a record (or set of records) being created in that service’s database. Additionally, this event should be reported to other services, as an item on a Kafka topic. What is the best way of ensuring that the database record(s) are only written if the Kafka topic is successfully updated (essentially creating a distributed transaction around the database update and the Kafka update)?
We are thinking of using spring-kafka (in a Spring Boot WebFlux service), and I can see that it has a KafkaTransactionManager, but from what I understand this is more about Kafka transactions themselves (ensuring consistency across the Kafka producers and consumers), rather than synchronising transactions across two systems (see here: “Kafka doesn't support XA and you have to deal with the possibility that the DB tx might commit while the Kafka tx rolls back.”). Additionally, I think this class relies on Spring’s transaction framework which, at least as far as I currently understand, is thread-bound, and won’t work if using a reactive approach (e.g. WebFlux) where different parts of an operation may execute on different threads. (We are using reactive-pg-client, so are manually handling transactions, rather than using Spring’s framework.)
Some options I can think of:
Don’t write the data to the database: only write it to Kafka. Then use a consumer (in Service A) to update the database. This seems like it might not be the most efficient, and will have problems in that the service which the user called cannot immediately see the database changes it should have just created.
Don’t write directly to Kafka: write to the database only, and use something like Debezium to report the change to Kafka. The problem here is that the changes are based on individual database records, whereas the business significant event to store in Kafka might involve a combination of data from multiple tables.
Write to the database first (if that fails, do nothing and just throw the exception). Then, when writing to Kafka, assume that the write might fail. Use the built-in auto-retry functionality to get it to keep trying for a while. If that eventually completely fails, try to write to a dead letter queue and create some sort of manual mechanism for admins to sort it out. And if writing to the DLQ fails (i.e. Kafka is completely down), just log it some other way (e.g. to the database), and again create some sort of manual mechanism for admins to sort it out.
Anyone got any thoughts or advice on the above, or able to correct any mistakes in my assumptions above?
Thanks in advance!
I'd suggest to use a slightly altered variant of approach 2.
Write into your database only, but in addition to the actual table writes, also write "events" into a special table within that same database; these event records would contain the aggregations you need. In the simplest case, you'd insert another entity, e.g. one mapped by JPA, which contains a JSON property with the aggregate payload. Of course this could be automated by some means of a transaction listener / framework component.
Then use Debezium to capture the changes just from that table and stream them into Kafka. That way you have both: eventually consistent state in Kafka (the events in Kafka may trail behind or you might see a few events a second time after a restart, but eventually they'll reflect the database state) without the need for distributed transactions, and the business level event semantics you're after.
(Disclaimer: I'm the lead of Debezium; funnily enough I'm just in the process of writing a blog post discussing this approach in more detail)
Here are the posts
https://debezium.io/blog/2018/09/20/materializing-aggregate-views-with-hibernate-and-debezium/
https://debezium.io/blog/2019/02/19/reliable-microservices-data-exchange-with-the-outbox-pattern/
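A minimal sketch of that outbox write (hypothetical names; assumes JPA, with Debezium configured to capture only the outbox table):

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Lob;

    @Entity
    public class OutboxEvent {
        @Id @GeneratedValue
        private Long id;
        private String aggregateType; // e.g. "Order", used for topic routing
        private String aggregateId;   // e.g. the order id, used as record key
        private String eventType;     // e.g. "OrderCreated"
        @Lob
        private String payload;       // aggregate JSON, may span several tables
        // constructors/getters/setters omitted
    }

    // Both writes share one local ACID transaction, so they succeed or fail together:
    @Transactional
    public void placeOrder(Order order) {
        entityManager.persist(order);
        entityManager.persist(new OutboxEvent("Order", order.getId().toString(),
                "OrderCreated", toJson(order))); // toJson() is a placeholder
    }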
First of all, I have to say that I'm neither a Kafka nor a Spring expert, but I think it's more of a conceptual challenge when writing to independent resources, and the solution should be adaptable to your technology stack. Furthermore, I should say that this solution tries to solve the problem without an external component like Debezium, because in my opinion each additional component brings challenges in testing, maintaining, and running an application, which is often underestimated when choosing such an option. Also, not every database can be used as a Debezium source.
To make sure that we are talking about the same goals, let's clarify the situation in a simplified airline example, where customers can buy tickets. After a successful order the customer will receive a message (mail, push notification, ...) that is sent by an external messaging system (the system we have to talk with).
In a traditional JMS world with an XA transaction between our database (where we store orders) and the JMS provider, it would look like the following: the client sends the order to our app, where we start a transaction. The app stores the order in its database. Then the message is sent to JMS and you can commit the transaction. Both operations participate in the transaction even though they're talking to their own resources. As the XA transaction guarantees ACID, we're fine.
Let's bring Kafka (or any other resource that is not able to participate in the XA transaction) into the game. As there is no coordinator that syncs both transactions anymore, the main idea of the following is to split the processing into two parts with a persistent state.
When you store the order in your database, you can also store the message (with aggregated data) that you want to send to Kafka afterwards in the same database (e.g. as JSON in a CLOB column). Same resource, so ACID is guaranteed and everything is fine so far. Now you need a mechanism that polls your "KafkaTasks" table for new tasks that should be sent to a Kafka topic (e.g. with a timer service; maybe Spring's @Scheduled annotation can be used), as sketched below. After the message has been successfully sent to Kafka, you can delete the task entry. This ensures that the message is only sent to Kafka when the order was also successfully stored in the application database. Did we achieve the same guarantees as with an XA transaction? Unfortunately no, as there is still the chance that writing to Kafka works but the deletion of the task fails. In this case the retry mechanism (you would need one, as mentioned in your question) would reprocess the task and send the message twice. If your business case is happy with this "at-least-once" guarantee, you're done here with an (IMHO) semi-complex solution that could easily be implemented as framework functionality, so not everyone has to bother with the details.
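A minimal sketch of that polling mechanism (hypothetical names; assumes Spring's @Scheduled and spring-kafka's KafkaTemplate, whose send() returns a future that can be blocked on):

    // If the send fails or times out, the exception skips the delete, so the
    // task stays in the table and is retried on the next run (at-least-once).
    @Scheduled(fixedDelay = 1000)
    public void publishPendingKafkaTasks() throws Exception {
        for (KafkaTask task : kafkaTaskRepository.findAll()) { // hypothetical repository
            kafkaTemplate.send("order-events", task.getKey(), task.getPayload())
                    .get(10, TimeUnit.SECONDS); // block until the broker acks
            kafkaTaskRepository.delete(task);   // only reached after a successful send
        }
    }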
If you need “exactly-once” then you cannot store your state in the application database (in this case “deletion of a task” is the “state”) but instead you must store it in Kafka (assuming that you have ACID guarantees between two Kafka topics). An example: Let’s say you have 100 tasks in the table (IDs 1 to 100) and the task job processes the first 10. You write your Kafka messages to their topic and another message with the ID 10 to “your topic”. All in the same Kafka-transaction. In the next cycle you consume your topic (value is 10) and take this value to get the next 10 tasks (and delete the already processed tasks).
If there are easier (in-application) solutions with the same guarantees I’m looking forward to hear from you!
Sorry for the long answer but I hope it helps.
The approaches described above are all solid ways to tackle the problem and are well-defined patterns. You can explore them in the links provided below.
Pattern: Transactional outbox
Publish an event or message as part of a database transaction by saving it in an OUTBOX in the database.
http://microservices.io/patterns/data/transactional-outbox.html
Pattern: Polling publisher
Publish messages by polling the outbox in the database.
http://microservices.io/patterns/data/polling-publisher.html
Pattern: Transaction log tailing
Publish changes made to the database by tailing the transaction log.
http://microservices.io/patterns/data/transaction-log-tailing.html
Debezium is a valid answer but (as I've experienced) it can require some extra overhead of running an extra pod and making sure that pod doesn't fall over. This could just be me griping about a few back-to-back instances where pods OOM-errored and didn't come back up, networking rule rollouts dropped some messages, and WAL access to an AWS Aurora DB started behaving oddly... It seems that everything that could have gone wrong did. Not saying Debezium is bad; it's fantastically stable, but often for devs running it becomes a networking skill rather than a coding skill.
A KISS solution using normal coding techniques that will work 99.99% of the time (and inform you about the 0.01%) would be:
Start transaction.
Sync save to DB.
-> If that fails, bail out.
Async send message to Kafka.
Block until the topic reports that it has received the message.
-> If it times out or fails, abort the transaction.
-> If it succeeds, commit the transaction.
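A sketch of those steps with Spring and spring-kafka (hypothetical repository/template names; kafkaTemplate.send() returns a future, so get() turns it into a blocking, acknowledged send):

    // Any exception - DB failure, Kafka timeout, broker error - propagates out
    // of the method and rolls the surrounding DB transaction back.
    @Transactional
    public void createOrder(Order order) throws Exception {
        orderRepository.save(order); // sync save to DB
        kafkaTemplate.send("order-events", order.getId().toString(), toJson(order))
                .get(10, TimeUnit.SECONDS); // block until the broker acks
        // returning normally commits the DB transaction
    }

The remaining 0.01% window is that the send can succeed and the DB commit still fail afterwards, leaving a message in Kafka without a matching database record.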
I'd suggest using a newer approach: the 2-phase message. With this approach much less code is needed, and you don't need Debezium any more.
https://betterprogramming.pub/an-alternative-to-outbox-pattern-7564562843ae
For this new approach, what you need to do is:
When writing your database, write an event record to an auxiliary table.
Submit a 2-phase message to DTM
Write a service to query whether an event is saved in the auxiliary table.
With the help of the DTM SDK, you can accomplish the above 3 steps with 8 lines in Go, much less code than in other solutions.
// Build the message that will be delivered once the local transaction commits
msg := dtmcli.NewMsg(DtmServer, gid).
    Add(busi.Busi+"/TransIn", &TransReq{Amount: 30})
// Do the local DB write and submit the 2-phase message atomically
err := msg.DoAndSubmitDB(busi.Busi+"/QueryPrepared", db, func(tx *sql.Tx) error {
    return AdjustBalance(tx, busi.TransOutUID, -req.Amount)
})
// The endpoint DTM calls back to check whether the local transaction committed
app.GET(BusiAPI+"/QueryPrepared", dtmutil.WrapHandler2(func(c *gin.Context) interface{} {
    return MustBarrierFromGin(c).QueryPrepared(db)
}))
Each of your original options has its disadvantage:
The user cannot immediately see the database changes it has just created.
Debezium will capture the log of the database, which may be much larger than the events you want. Deployment and maintenance of Debezium is also not an easy job.
The "built-in auto-retry functionality" is not cheap; it may require a lot of code or maintenance effort.