I have a use case that can be described as follows:
A dump that is generated each day at 4 am.
An online stream that runs from 12 am for 24 hours.
We use the dump as a lookup: any content that exists in both the online stream and the dump gets a special offer. But we face a problem because our proposed solution is limited. We created a stream that joins the lookup dump stream with the online stream over 24 hours, but there is a gap: the dump is not ready before 4 am, so the join finds nothing during those first 4 hours, and if we extend the window period we lose each day's refreshed data.
Any help?
This question was cross-posted on the ksqlDB GitHub page: https://github.com/confluentinc/ksql/issues/7935
Copying my reply from there:
This is tricky... You need to understand that joins have temporal semantics. For a stream-table join, this implies that a stream record joins against the table "version" that is valid according to the event-time of the stream record. Thus, if your table updates are timestamped at 4 am, all stream records from 12 am to 4 am happen before the table update and are not eligible to "see" those updates.
Even if you could change the timestamp of the table updates to 12 am, the issue is that ksqlDB does not yet support a GRACE PERIOD for stream-table joins (we are already working on this, though)...
The bottom line might be that, to work around the current ksqlDB limitations, you would need to change your upstream ingestion to "wait" until all table updates have been published before you publish the stream updates... Not sure if this is possible in your end-to-end setup.
To learn more about temporal join semantics, check out this Kafka Summit talk: https://www.confluent.io/events/kafka-summit-europe-2021/temporal-joins-in-kafka-streams-and-ksqldb/
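To make the temporal behavior concrete, here is a minimal sketch of an equivalent stream-table join in the Kafka Streams Java DSL (ksqlDB is built on Kafka Streams); the topic names, key/value types, and join logic are assumptions for illustration only:

    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KTable;

    public class OfferJoinTopology {
        public static StreamsBuilder build() {
            StreamsBuilder builder = new StreamsBuilder();

            // Lookup table fed by the daily 4 am dump (assumed topic name).
            KTable<String, String> dump = builder.table("daily-dump");
            // Online stream covering the full 24 hours (assumed topic name).
            KStream<String, String> online = builder.stream("online-events");

            // A stream record joins against the table state that is valid for the
            // record's timestamp. Events timestamped between 12 am and 4 am precede
            // the dump update, so they find no match and produce no offer result.
            online.join(dump, (event, offer) -> event + " -> special offer: " + offer)
                  .to("events-with-offer");

            return builder;
        }
    }

Because there is no grace period yet, the workaround described above (holding back the stream records until the dump has been fully published) is what keeps this join from missing matches.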
Related
I have an architecture where I would like to query a ksqlDB table from a Kafka stream A (created by ksqlDB). On startup, Service A will load all the data from this table into a hashmap, and afterward it will start consuming from Kafka Stream A and act on any events to update this hashmap. I want to avoid any race condition in which I would miss events that were propagated to Kafka Stream A between the time I queried the table and the time I started consuming from Kafka Stream A. Is there a way I can retrieve the latest offset that my query to the table is populated by, so that I can use that offset to start consuming from Kafka Stream A?
Another thing to mention is that we have hundreds of instances of our app going up and down, so reading directly off the Kafka stream is not an option; reading an entire stream's worth of data every time our apps come up is not a scalable solution. Reading the event stream's data into a hashmap on the service is a hard requirement. This is why the ksqlDB table seems like a good option, since we can get the latest state of the data in the format needed and then just update based on events from the stream. Kafka Stream A is essentially a CDC stream off a MySQL table that has been enriched with other data.
You used "materialized view" but I'm going to pretend I
heard "table". I have often used materialized views
in a historical reporting context, but not with live updates.
I assume that yours will behave similar to a "table".
I assume that all events, and DB rows, have timestamps.
Hopefully they are "mostly monotonic", so applying a
small safety window lets us efficiently process just
the relevant recent ones.
The crux of the matter is racing updates; we need to prohibit races. Each time an instance of a writer, such as your app, comes up, assign it a new name. Rolling a GUID is often the most convenient way to do that, or perhaps prepend it with a timestamp if sort order matters. Ensure that each DB row mentions that "owning" name.
"I want to avoid any race condition in which I would miss any events that were propagated to Kafka Stream A in the time between I queried the materialized view, and when I started consuming off Kafka Stream A."
We will need a guaranteed monotonic column with an integer ID or a timestamp; let's call it ts. Then:
1. Query m = max(ts).
2. Do a big query of records with ts < m, slowly filling your hashmap.
3. Start consuming Stream A.
4. Do a small query of records with ts >= m, updating the hashmap.
5. Continue to loop through subsequently arriving Stream A records.
Now you're caught up, and can maintain the hashmap in sync with the DB.
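A rough sketch of that sequence, assuming the table can be queried like an ordinary SQL table (e.g. via JDBC or a ksqlDB pull query) and that Stream A is read with a plain Kafka consumer; the table name, column names, and topic name below are placeholders:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.time.Duration;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class HashmapBootstrap {
        private final Map<String, String> state = new HashMap<>();

        public void bootstrap(Connection db, KafkaConsumer<String, String> consumer) throws Exception {
            // 1. Query m = max(ts).
            long m;
            try (PreparedStatement st = db.prepareStatement("SELECT MAX(ts) FROM lookup_table");
                 ResultSet rs = st.executeQuery()) {
                rs.next();
                m = rs.getLong(1);
            }

            // 2. Big query of records with ts < m, slowly filling the hashmap.
            loadRange(db, "SELECT k, v FROM lookup_table WHERE ts < ?", m);

            // 3. Start consuming Stream A (assumed topic name).
            consumer.subscribe(List.of("stream-a"));

            // 4. Small query of records with ts >= m, updating the hashmap.
            loadRange(db, "SELECT k, v FROM lookup_table WHERE ts >= ?", m);

            // 5. Loop over subsequently arriving Stream A records, keeping the map in sync.
            while (true) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofMillis(500))) {
                    state.put(rec.key(), rec.value());
                }
            }
        }

        private void loadRange(Connection db, String sql, long bound) throws Exception {
            try (PreparedStatement st = db.prepareStatement(sql)) {
                st.setLong(1, bound);
                try (ResultSet rs = st.executeQuery()) {
                    while (rs.next()) {
                        state.put(rs.getString("k"), rs.getString("v"));
                    }
                }
            }
        }
    }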
Your business logic probably requires that you treat DB rows mentioning the "self" GUID in a different way from rows that existed prior to startup. Think of it as de-dup, or ignoring replayed rows.
You may find offsetsForTimes() useful.
There's also listOffsets().
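For example, a minimal sketch of pinning the consumer's start position to a timestamp with offsetsForTimes(); the topic name and the source of snapshotTimeMs (e.g. the timestamp captured just before the big query) are assumptions:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
    import org.apache.kafka.common.TopicPartition;

    public class SeekToTimestamp {
        // Seek each partition of "stream-a" to the first offset whose record
        // timestamp is >= snapshotTimeMs, then poll from there.
        public static void seek(KafkaConsumer<String, String> consumer, long snapshotTimeMs) {
            Map<TopicPartition, Long> query = new HashMap<>();
            consumer.partitionsFor("stream-a").forEach(
                p -> query.put(new TopicPartition(p.topic(), p.partition()), snapshotTimeMs));

            consumer.assign(query.keySet());
            Map<TopicPartition, OffsetAndTimestamp> offsets = consumer.offsetsForTimes(query);
            offsets.forEach((tp, oat) -> {
                if (oat != null) {                 // null when no record is that recent
                    consumer.seek(tp, oat.offset());
                }
            });
        }
    }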
In my case, daily data is produced to one topic. I want the ksqlDB server to create two tables daily, for today and for yesterday. For example, Today-Table and Yesterday-Table are two KTables over the same topic, with the condition that Today-Table contains today's data and Yesterday-Table contains yesterday's data. This is pretty achievable when it comes to static creation.
But what I need here is for it to roll over automatically: the next day, today's data should become yesterday's data, the new day's data should become today's data, and so on.
As of tomorrow, today is yesterday.
The server setup is Confluent Cloud itself.
Is it achievable with ksqlDB? How can I do that? What should be the prerequisites?
Please shed some light on this case.
Thanks.
I am using a Windowed Join between two streams, let's say a 7 day window.
On initial load, all records in the DB (via a Kafka Connect source connector) are loaded into the streams. It seems that ALL records then end up in the window state store for those first 7 days, because the producer/ingestion timestamps are all in current time rather than a field (like create_time) that might be in the message value.
Is there a recommended way to balance the initial load against the Windows of the join?
Well, the question is: which records do you want to join to each other? And which timestamp does the source connector set as the record timestamp (this might also depend on the topic configuration, [log.]message.timestamp.type)?
The join is executed based on whatever the TimestampExtractor returns. By default, that is the record timestamp. If you want to base the join on some other timestamp, a custom timestamp extractor is the way to go.
If you want processing-time semantics, though, you may want to use the WallclockTimestampExtractor.
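If you go the custom-extractor route, a minimal sketch might look like this; it assumes the record value is deserialized to a Map and carries an epoch-millis create_time field, both of which are assumptions about your data:

    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.streams.processor.TimestampExtractor;

    public class CreateTimeExtractor implements TimestampExtractor {
        @Override
        public long extract(final ConsumerRecord<Object, Object> record, final long partitionTime) {
            final Object value = record.value();
            if (value instanceof Map) {
                final Object ts = ((Map<?, ?>) value).get("create_time"); // assumed field name
                if (ts instanceof Long && (Long) ts >= 0) {
                    return (Long) ts;
                }
            }
            // Fall back to the record timestamp if the field is missing or invalid.
            return record.timestamp();
        }
    }

It would be registered via StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG in Kafka Streams; in ksqlDB, the TIMESTAMP property in the WITH clause of the stream definition plays a similar role.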
We are using Debezium + PostgreSQL.
Notice that we get 4 types of events for create, read, update and delete - c, r, u and d.
The read type of event is unused by our application. Actually, I could not think of a use case for the 'r' events unless we are doing auditing or mirroring the activities of a transaction.
We are facing difficulties scaling, and I suspect it is because the network is getting hogged by read-type events.
How do we filter out those events in PostgreSQL itself?
I got a clue from one of the contributors to use snapshot.mode. I guess it is something that has to be done when Debezium creates a snapshot, but I am unable to figure out how to do that.
It is likely that your database has existed for some time and contains data and changes that have been purged from the logical decoding logs. If you then start using the Debezium PostgreSQL connector to start capturing changes into Kafka, the question becomes what a consumer of the events in Kafka should be able to see.
One scenario is that a consumer should be able to see events for all rows in the database, even those that existed prior to the start of CDC. For example, this allows a consumer to completely reproduce/replicate all of the existing data and keep that data in sync over time. To accomplish this, the Debezium PostgreSQL connector can begin, on startup, by creating a snapshot of the database contents before it starts capturing the changes. This is done atomically, so that even if the snapshot process takes a while to run, the connector will still see all of the events that occurred since the snapshot process was started. These events are represented as "read" events, since in effect the connector is simply reading the existing rows. However, they are identical to "insert" events, so any application could treat reads and inserts in the same way.
On the other hand, if consumers of the events in Kafka do not need to see events for all existing rows, then the connector can be configured to avoid the snapshot and to instead begin by capturing the changes. This may be useful in some scenarios where the entire database state need not be found in Kafka, but instead the goal is to simply capture the changes that are occurring.
The Debezium PostgreSQL connector will work either way, so you should use the approach that works for how you're consuming the events.
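For reference, the snapshot behavior is controlled by the connector's snapshot.mode setting. A hedged sketch of the relevant part of a connector configuration (the available modes and their names depend on your Debezium version):

    # Illustrative Kafka Connect properties for the Debezium PostgreSQL connector
    connector.class=io.debezium.connector.postgresql.PostgresConnector
    # Skip the initial snapshot, so no "r" (read) events are emitted;
    # only changes from the WAL are captured going forward.
    snapshot.mode=never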
I'm working on a project where we need to stream real-time updates from Oracle to a bunch of systems (Cassandra, Hadoop, real-time processing, etc.). We are planning to use GoldenGate to capture the changes from Oracle, write them to Kafka, and then let the different target systems read the events from Kafka.
There are quite a few design decisions that need to be made:
What data to write into Kafka on updates?
GoldenGate emits updates in the form of a record ID and the updated field. These changes can be written into Kafka in one of three ways:
Full rows: For every field change, emit the full row. This gives a full representation of the 'object', but probably requires making a query to get the full row.
Only updated fields: The easiest, but it's kind of weird to work with as you never have a full representation of an object easily accessible. How would one write this to Hadoop?
Events: Probably the cleanest format (and the best fit for Kafka), but it requires a lot of work to translate DB field updates into events.
Where to perform data transformation and cleanup?
The schema in the Oracle DB is generated by a 3rd party CRM tool, and is hence not very easy to consume - there are weird field names, translation tables, etc. This data can be cleaned in one of (a) source system, (b) Kafka using stream processing, (c) each target system.
How to ensure in-order processing for parallel consumers?
Kafka allows each consumer to read a different partition, where each partition is guaranteed to be in order. Topics and partitions need to be picked in a way that guarantees that messages in each partition are completely independent. If we pick a topic per table and hash records to partitions based on record_id, this should work most of the time. However, what happens when a new child object is added? We need to make sure it gets processed before the parent uses its foreign_id.
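For illustration, a minimal sketch of keying the change messages by record_id so that Kafka's default partitioner sends every change for the same record to the same partition, preserving per-record ordering (topic naming and payload format are assumptions):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class ChangePublisher {
        private final KafkaProducer<String, String> producer;

        public ChangePublisher(String bootstrapServers) {
            Properties props = new Properties();
            props.put("bootstrap.servers", bootstrapServers);
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());
            this.producer = new KafkaProducer<>(props);
        }

        // Keying by record_id hashes every change for the same record to the same
        // partition, so its changes are consumed in order. Cross-record ordering
        // (e.g. child before parent) is not covered by this alone.
        public void publish(String table, String recordId, String changePayload) {
            producer.send(new ProducerRecord<>("cdc." + table, recordId, changePayload));
        }
    }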
One solution I have implemented is to publish only the record ID into Kafka and, in the consumer, use a lookup to the origin DB to get the complete record. I would think that in a scenario like the one described in the question, you may want to use the CRM tool's API to look up that particular record rather than reverse-engineer the record lookup in your code.
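A rough sketch of that consumer-side pattern, assuming the messages carry only the record ID and that some client wrapper around the CRM tool's API exists; the topic name, CrmClient, and CrmRecord are all hypothetical:

    import java.time.Duration;
    import java.util.List;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ChangeEventHandler {

        // Hypothetical abstractions over the CRM tool's lookup API.
        public interface CrmClient {
            CrmRecord fetchRecord(String id);
        }
        public interface CrmRecord { }

        private final KafkaConsumer<String, String> consumer;
        private final CrmClient crm;

        public ChangeEventHandler(KafkaConsumer<String, String> consumer, CrmClient crm) {
            this.consumer = consumer;
            this.crm = crm;
        }

        public void run() {
            consumer.subscribe(List.of("oracle-changes"));         // assumed topic name
            while (true) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                    String recordId = rec.value();                  // only the ID is published
                    CrmRecord fullRecord = crm.fetchRecord(recordId); // fetch the complete record
                    process(fullRecord);
                }
            }
        }

        private void process(CrmRecord fullRecord) {
            // write to Cassandra / Hadoop / downstream processing here
        }
    }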
How did you end up implementing the solution?