ksqlDB: Dynamic KTable creation on a topic

In my case, daily data is produced to one topic. I want the ksqlDB server to create two tables daily, for today and yesterday. For example, Today-Table and Yesterday-Table are two KTables over the same topic, with the condition that Today-Table contains today's data and Yesterday-Table contains yesterday's data. This is quite achievable with static creation.
But what I need here is for it to roll over automatically: on the next day, today's data should become yesterday's data, the next day's data should become today's data, and so on.
In other words, from tomorrow's perspective, today is yesterday.
The server is ksqlDB on Confluent Cloud.
Is this achievable with ksqlDB? How can I do it? What are the prerequisites?
Please shed some light on this case.
Thanks.
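As far as I know, ksqlDB has no built-in way to make a table definition roll over by date on its own, so one possible workaround is a small external job that re-issues the date-filtered CREATE statements once a day via the ksqlDB Java client. The sketch below is only illustrative: the endpoint, credentials, stream/column names, and the statement body are all assumptions, and dropping a CTAS table may first require terminating its persistent query.

```java
import io.confluent.ksql.api.client.Client;
import io.confluent.ksql.api.client.ClientOptions;
import java.time.LocalDate;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DailyTableRoller {

    public static void main(String[] args) {
        // Assumption: a Confluent Cloud ksqlDB endpoint reachable over TLS with an API key.
        ClientOptions options = ClientOptions.create()
                .setHost("your-ksqldb-endpoint")
                .setPort(443)
                .setUseTls(true)
                .setBasicAuthCredentials("KSQLDB_API_KEY", "KSQLDB_API_SECRET");
        Client client = Client.create(options);

        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Fire once a day; aligning the first run to midnight is omitted for brevity.
        scheduler.scheduleAtFixedRate(() -> rollTables(client), 0, 24, TimeUnit.HOURS);
    }

    static void rollTables(Client client) {
        try {
            // Dropping a CTAS table may first require terminating its persistent query.
            client.executeStatement("DROP TABLE IF EXISTS YESTERDAY_TABLE;").get();
            client.executeStatement("DROP TABLE IF EXISTS TODAY_TABLE;").get();
            client.executeStatement(createStatement("YESTERDAY_TABLE", LocalDate.now().minusDays(1))).get();
            client.executeStatement(createStatement("TODAY_TABLE", LocalDate.now())).get();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    // Placeholder: return the same CREATE TABLE ... AS SELECT statement you already
    // use for static creation, with the date literal rolled forward. The stream and
    // column names here are purely illustrative.
    static String createStatement(String table, LocalDate date) {
        return "CREATE TABLE " + table + " AS "
             + "SELECT id, LATEST_BY_OFFSET(payload) AS payload "
             + "FROM source_stream "
             + "WHERE event_date = '" + date + "' "
             + "GROUP BY id EMIT CHANGES;";
    }
}
```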

Related

How do you get the latest offset from a remote query to a Table in ksqlDB?

I have an architecture where I would like to query a ksqlDB table that is built from a Kafka stream A (created by ksqlDB). On startup, Service A will load all the data from this table into a hashmap, and afterward it will start consuming from Kafka Stream A and act on any events to update this hashmap. I want to avoid any race condition in which I would miss events that were propagated to Kafka Stream A in the time between when I queried the table and when I started consuming from Kafka Stream A. Is there a way that I can retrieve the latest offset that my query to the table reflects, so that I can use that offset to start consuming from Kafka Stream A?
Another thing to mention is that we have hundreds of instances of our app going up and down, so reading directly off the Kafka stream is not an option: reading an entire stream's worth of data every time our apps come up is not a scalable solution. Reading the event stream's data into a hashmap on the service is a hard requirement. This is why the ksqlDB table seems like a good option, since we can get the latest state of the data in the format needed and then just update it based on events from the stream. Kafka Stream A is essentially a CDC stream off a MySQL table that has been enriched with other data.
You used "materialized view" but I'm going to pretend I
heard "table". I have often used materialized views
in a historical reporting context, but not with live updates.
I assume that yours will behave similar to a "table".
I assume that all events, and DB rows, have timestamps.
Hopefully they are "mostly monotonic", so applying a
small safety window lets us efficiently process just
the relevant recent ones.
The crux of the matter is racing updates.
We need to prohibit races.
Each time an instance of a writer, such as your app,
comes up, assign it a new name.
Rolling a guid is often the most convenient way to do that,
or perhaps prepend it with a timestamp if sort order matters.
Ensure that each DB row mentions that "owning" name.
want to avoid any race condition in which I would miss any events that were propagated to Kafka Stream A in the time between I queried the materialized view, and when I started consuming off Kafka Stream A.
We will need a guaranteed monotonic column, either an integer ID or a timestamp. Let's call it ts.
1. Query m = max(ts).
2. Do a big query of records < m, slowly filling your hashmap.
3. Start consuming Stream A.
4. Do a small query of records >= m, updating the hashmap.
5. Continue to loop through subsequently arriving Stream A records.
Now you're caught up, and can maintain the hashmap in sync with the DB.
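A minimal sketch of that sequence, under heavy assumptions: Row, queryMaxTs(), queryRecordsBefore(), queryRecordsFrom() and parseRow() are hypothetical stand-ins for your table-query mechanism (e.g. ksqlDB pull queries via its client) and record type, and the topic name "stream-a" is illustrative. Where consumption actually starts is addressed by offsetsForTimes(), mentioned below.

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import java.time.Duration;
import java.util.*;

// Hypothetical pieces: Row, queryMaxTs(), queryRecordsBefore(), queryRecordsFrom()
// and parseRow() stand in for your table-query mechanism and record type.
Map<String, Row> state = new HashMap<>();

long m = queryMaxTs();                                   // 1. m = max(ts)

for (Row r : queryRecordsBefore(m)) {                    // 2. big query of records < m
    state.put(r.key(), r);
}

Properties props = new Properties();                     // 3. start consuming Stream A
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "service-a-" + UUID.randomUUID()); // fresh name per instance
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("stream-a"));

for (Row r : queryRecordsFrom(m)) {                      // 4. small query of records >= m
    state.put(r.key(), r);
}

while (true) {                                           // 5. keep applying Stream A records
    for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofMillis(500))) {
        state.put(rec.key(), parseRow(rec.value()));
    }
}
```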
Your business logic probably requires that you treat DB rows mentioning the "self" guid in a different way from rows that existed prior to startup. Think of it as de-dup, or ignoring replayed rows.
You may find offsetsForTimes() useful; there's also listOffsets().
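For example, offsetsForTimes() can translate the ts watermark m (minus a small safety window) into starting offsets for Stream A's topic, so the consumer seeks to roughly the right place instead of relying on committed offsets. A sketch, where the topic name and safety window are assumptions, and consumer is a KafkaConsumer using assign() rather than subscribe():

```java
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;
import java.util.*;

long safetyWindowMs = 60_000L;   // assumed "mostly monotonic" slack of one minute

List<TopicPartition> partitions = new ArrayList<>();
for (PartitionInfo p : consumer.partitionsFor("stream-a")) {
    partitions.add(new TopicPartition(p.topic(), p.partition()));
}
consumer.assign(partitions);     // manual assignment so we control the starting offsets

Map<TopicPartition, Long> lookup = new HashMap<>();
for (TopicPartition tp : partitions) {
    lookup.put(tp, m - safetyWindowMs);
}

// For each partition, find the earliest offset whose timestamp is >= (m - safety window).
Map<TopicPartition, OffsetAndTimestamp> offsets = consumer.offsetsForTimes(lookup);
for (Map.Entry<TopicPartition, OffsetAndTimestamp> e : offsets.entrySet()) {
    if (e.getValue() != null) {
        consumer.seek(e.getKey(), e.getValue().offset());
    } else {
        // No record at or after that timestamp yet: start at the end of the partition.
        consumer.seekToEnd(Collections.singletonList(e.getKey()));
    }
}
```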

Event trigger using Mongo/Kafka

I have a MongoDB instance with a collection holding calendar events. This is fed using a Kafka application.
These events need to feed into other downstream systems using Kafka Streams, but what I'd like to investigate is whether it would be possible to only trigger an event to a downstream system when the event has just happened (rather than passing future events downstream).
So if an event is received and written to Mongo for a date in the future, the downstream system will only know about it when that date is reached, and not before.
I've looked at the normal connectors (MongoDB -> Kafka: https://www.mongodb.com/kafka-connector) and that functionality isn't provided.
One of the ways I thought about doing this would be to write a custom application that queries the MongoDB collection on a schedule, between the "last run" time and "now", to get all the events that occur within that window and create a downstream event in Kafka (setting indexes on the queried fields in the Mongo documents).
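A rough sketch of that scheduled query with the MongoDB Java driver; the connection string, database/collection/field names, and the loadLastRunTimestamp(), saveLastRunTimestamp() and publishToKafka() helpers are all assumptions.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import com.mongodb.client.model.Filters;
import org.bson.Document;
import java.time.Instant;
import java.util.Date;

// Runs on a schedule: fetch events whose date falls between the last run and now,
// then emit each one downstream to Kafka.
MongoClient mongo = MongoClients.create("mongodb://localhost:27017");
MongoCollection<Document> events = mongo.getDatabase("calendar").getCollection("events");

Date lastRun = loadLastRunTimestamp();          // hypothetical: persisted from the previous run
Date now = Date.from(Instant.now());

// Relies on an index on "eventDate", as mentioned above.
try (MongoCursor<Document> cursor = events.find(
        Filters.and(Filters.gte("eventDate", lastRun), Filters.lt("eventDate", now))).iterator()) {
    while (cursor.hasNext()) {
        publishToKafka(cursor.next());          // hypothetical producer wrapper
    }
}
saveLastRunTimestamp(now);                      // hypothetical
```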
Is there any other way?
Many thanks for reading.
Jill
Instead of querying MongoDB, I would suggest creating a consumer group on the original Kafka topic that the MongoDB data is ingested from; if you recognize that the date is in the future, create a Rundeck/Airflow scheduled task configured for that date, so your consumer logic stays simple.
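A rough sketch of that consumer, assuming the calendar events carry an epoch-millis event time, and that extractEventTime(), scheduleTriggerAt() (the Rundeck/Airflow hook) and publishDownstream() are hypothetical helpers; the topic name is illustrative.

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "calendar-trigger-scheduler");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Collections.singletonList("calendar-events")); // same topic MongoDB is fed from
    while (true) {
        for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
            long eventTimeMs = extractEventTime(rec.value());   // hypothetical parser
            if (eventTimeMs > System.currentTimeMillis()) {
                scheduleTriggerAt(eventTimeMs, rec.value());    // hypothetical: Rundeck/Airflow task for that date
            } else {
                publishDownstream(rec.value());                 // already due: forward immediately
            }
        }
    }
}
```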
Another solution you can try is to make some changes to the source connector that you found and try to get them merged.
Good luck! I'm here if you have any questions.

Issue in creating refreshed stream

I have a use case that can be described as follows:
A dump that is generated each day at 4 am.
An online stream that runs from 12 am for 24 hours.
We use the dump as a lookup: any content that exists in both the online stream and the dump gets a special offer. But our proposed solution is limited. We created a stream that joins the lookup dump stream with the online stream over 24 hours, but there is a gap: the dump is not ready before 4 am, so the join finds nothing in those first 4 hours, and if we extend the window period we lose each day's refreshed data.
Any help?
This question was cross-posted on the ksqlDB GitHub page: https://github.com/confluentinc/ksql/issues/7935
Copying my reply from there:
This is tricky... You need to understand that joins have temporal semantics. For a stream-table join, this implies that a stream record joins to the table "version" that is valid according to the event-time of the stream record. Thus, if your table updates are timestamped at 4 am, all stream records from 12 am to 4 am happen before the table update and are not eligible to "see" those updates.
And even if you could change the timestamp of the table updates to 12 am, the issue is that ksqlDB does not support a GRACE PERIOD for stream-table joins yet (we are already working on this, though)...
The bottom line might be that, to work around the current ksqlDB limitations, you would need to change your upstream ingestion to "wait" until all the table updates have been published before you publish the stream updates... Not sure if this is possible in your end-to-end setup.
To learn more about temporal join semantics, check out this Kafka Summit talk: https://www.confluent.io/events/kafka-summit-europe-2021/temporal-joins-in-kafka-streams-and-ksqldb/

How do I implement Event Sourcing using Kafka?

I would like to implement the event-sourcing pattern using kafka as an event store.
I want to keep it as simple as possible.
The idea:
My app contains a list of customers. Customers can be created and deleted. Very simple.
When a request to create a customer comes in, I create the event CUSTOMER_CREATED, including the customer data, and store it in a Kafka topic using a KafkaProducer. The same happens when a customer is deleted, with the event CUSTOMER_DELETED.
Now when I want to list all customers, I have to replay all the events that have happened so far to get the current state, meaning a list of all customers.
I would create a temporary customer list and then process all the events one by one (create customer, create customer, delete customer, create customer, etc.), consuming these events with a KafkaConsumer. In the end I return the temporary list.
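A bare-bones sketch of that replay, assuming a topic named "customer-events" keyed by customer ID, with a value that carries an event type; extractEventType() is a hypothetical parsing helper.

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;
import java.time.Duration;
import java.util.*;

Map<String, String> customers = new HashMap<>();   // customer id -> latest customer payload

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    List<TopicPartition> partitions = new ArrayList<>();
    for (PartitionInfo p : consumer.partitionsFor("customer-events")) {
        partitions.add(new TopicPartition(p.topic(), p.partition()));
    }
    consumer.assign(partitions);
    consumer.seekToBeginning(partitions);           // replay from the very first event

    Map<TopicPartition, Long> end = consumer.endOffsets(partitions);
    boolean caughtUp = false;
    while (!caughtUp) {
        for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofMillis(200))) {
            String type = extractEventType(rec.value());        // hypothetical JSON helper
            if ("CUSTOMER_CREATED".equals(type)) {
                customers.put(rec.key(), rec.value());
            } else if ("CUSTOMER_DELETED".equals(type)) {
                customers.remove(rec.key());
            }
        }
        // Caught up once our position reaches the end offsets captured at start.
        caughtUp = partitions.stream().allMatch(tp -> consumer.position(tp) >= end.get(tp));
    }
}
// customers.values() is now the current list of customers.
```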
I want to keep it as simple as possible; it's just about giving me an understanding of how event sourcing works in practice. Is this event sourcing? And also: how do I create snapshots when implementing it this way?
when i want to list all customers, i have to replay all events that happened so far
You actually don't, or at least not once your app starts fresh and is actively collecting/tombstoning the data. I encourage you to look up the "stream-table duality", which basically states that your table is the current state of the world in your system, and a snapshot in time of all the streamed events thus far, which would be ((customers added + customers modified) - customers deleted).
The way you implement this in Kafka would be to use a compacted Kafka topic for your customers, which can be read into a Kafka Streams KTable and persisted in memory or spilled to disk (backed by RocksDB). The message key would be some UUID for the customer, or some other identifier that cannot change (e.g. not name, email, phone, etc., as all of these can change).
With that, you can implement Interactive Queries on it to scan or look up a certain customer's details.
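A minimal sketch of that setup, assuming a compacted topic named "customers" keyed by the customer UUID with string values; the store name and serdes are assumptions, and in real code you would wait for the streams instance to reach the RUNNING state before querying the store.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import java.util.Properties;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "customer-view");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

StreamsBuilder builder = new StreamsBuilder();
// Read the compacted "customers" topic into a KTable backed by a RocksDB state store.
KTable<String, String> customers =
        builder.table("customers", Materialized.as("customers-store"));

KafkaStreams streams = new KafkaStreams(builder.build(), props);
streams.start();
// (Wait for the RUNNING state here before querying.)

// Interactive query: look up one customer's latest state by its UUID key.
ReadOnlyKeyValueStore<String, String> store = streams.store(
        StoreQueryParameters.fromNameAndType("customers-store",
                QueryableStoreTypes.keyValueStore()));
String customer = store.get("7f9c1b2e-0000-0000-0000-000000000000");   // illustrative key
```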
Theoretically you can do event sourcing with Kafka as you mentioned, replaying all events at application start, but if you have 100,000 events to reach a state, it is not practical.
As mentioned in the previous answer, you can use a Kafka Streams KTable to get a sense of event sourcing, but since the KTable is hosted in the key/value database RocksDB, querying the data will be quite limited (you can ask "what is the state of customer id 123456789", but you can't ask "give me all customers with state CUSTOMER_DELETED").
To achieve that flexibility, we need help from another pattern: Command Query Responsibility Segregation (CQRS). Personally, I advise you to use Kafka as the reliable, extremely performant broker and give the responsibility for event sourcing to a dedicated framework like Akka (which synergizes naturally with Kafka), with Apache Cassandra for persistence, Akka Finite State Machine for the command side, and Akka Projection for the query side.
If you want to see a sample of how all these technology stacks play together, I have a blog post for it. I hope it can help you.

Oracle change-data-capture with Kafka best practices

I'm working on a project where we need to stream real-time updates from Oracle to a bunch of systems (Cassandra, Hadoop, real-time processing, etc.). We are planning to use GoldenGate to capture the changes from Oracle, write them to Kafka, and then let the different target systems read the events from Kafka.
There are quite a few design decisions that need to be made:
What data to write into Kafka on updates?
GoldenGate emits updates in the form of a record ID and the updated fields. These changes can be written into Kafka in one of three ways:
Full rows: For every field change, emit the full row. This gives a full representation of the 'object', but probably requires making a query to get the full row.
Only updated fields: The easiest, but it's kind of weird to work with as you never have a full representation of an object easily accessible. How would one write this to Hadoop?
Events: Probably the cleanest format (and the best fit for Kafka), but it requires a lot of work to translate DB field updates into events.
Where to perform data transformation and cleanup?
The schema in the Oracle DB is generated by a 3rd party CRM tool, and is hence not very easy to consume - there are weird field names, translation tables, etc. This data can be cleaned in one of (a) source system, (b) Kafka using stream processing, (c) each target system.
How to ensure in-order processing for parallel consumers?
Kafka allows each consumer to read a different partition, where each partition is guaranteed to be in order. Topics and partitions need to be picked in a way that guarantees that messages in each partition are completely independent. If we pick a topic per table and hash records to partitions based on record_id, this should work most of the time. However, what happens when a new child object is added? We need to make sure it gets processed before the parent uses its foreign_id.
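For the ordering part, a tiny sketch of the "topic per table, keyed by record_id" idea; the topic name and the change object are illustrative assumptions.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.Properties;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
    // One topic per table; the record_id is the message key, so every change to a
    // given record hashes to the same partition and is therefore consumed in order.
    // "change" is a hypothetical object representing one GoldenGate update.
    producer.send(new ProducerRecord<>("crm.customers", change.recordId(), change.toJson()));
}
```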
One solution I have implemented is to publish only the record ID into Kafka and, in the consumer, use a lookup to the origin DB to get the complete record. I would think that in a scenario like the one described in the question, you may want to use the CRM tool's API to look up that particular record rather than reverse-engineering the record lookup in your code.
How did you end up implementing the solution?