merge collections with different window strategies - apache-beam

We have 3 different data sources, and eventually we need to do some kind of inner join between them.
We create all PCollections with a group-by-key:
pcollectionA - Implemented using state (the data does not change)
pcollectionB - windowed for 5 hours; if another event arrives within that time we would like to extend the window by another 5 hours. Implemented using a custom window.
pcollectionC - windowed for 30 minutes. Implemented using a fixed window.
Our purpose is to send pcollectionC events only if the relevant key exists in both pcollectionA and pcollectionB.
What is the best way to implement this, given that regular joins do not work due to the window differences?

From your question I am assuming that the first two streams (pcollectionA and pcollectionB) are more like control/reference data used by the stream processor of pcollectionC.
Have you tried the Broadcast State Pattern? You can convert pcollectionA and pcollectionB to Broadcast streams and then look them up in the processor of pcollectionC.
Here is a very high level design:
// A and B below stand for the element types of pcollectionA and pcollectionB

// broadcast pcollectionA
BroadcastStream<A> A_BroadcastStream = pcollectionA_stream
    .broadcast(aStateDescriptor);

// broadcast pcollectionB
BroadcastStream<B> B_BroadcastStream = pcollectionB_stream
    .broadcast(bStateDescriptor);

// process pcollectionC with the broadcast data from A and B
DataStream<String> output = pcollectionC_Stream
    .connect(A_BroadcastStream)
    .process(
        // process/join with pcollectionA items
    )
    .connect(B_BroadcastStream)
    .process(
        // process/join with pcollectionB items
    );
This documentation has a great example for this design pattern - https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/broadcast_state/
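To make that a little more concrete, here is a sketch of what the first of those process() calls could look like, assuming (purely for illustration) that each stream carries Tuple2<key, payload> elements and that A_BroadcastStream was created with the aStateDescriptor shown below; the variable names are placeholders:

import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction;
import org.apache.flink.util.Collector;

// descriptor for the broadcast state holding pcollectionA entries, keyed by event key
MapStateDescriptor<String, String> aStateDescriptor =
    new MapStateDescriptor<>("pcollectionA-state", Types.STRING, Types.STRING);

DataStream<Tuple2<String, String>> cFilteredByA = pcollectionC_Stream
    .connect(A_BroadcastStream)
    .process(new BroadcastProcessFunction<Tuple2<String, String>, Tuple2<String, String>, Tuple2<String, String>>() {

        @Override
        public void processBroadcastElement(Tuple2<String, String> refEvent,
                                            Context ctx,
                                            Collector<Tuple2<String, String>> out) throws Exception {
            // keep the latest pcollectionA entry per key
            ctx.getBroadcastState(aStateDescriptor).put(refEvent.f0, refEvent.f1);
        }

        @Override
        public void processElement(Tuple2<String, String> event,
                                    ReadOnlyContext ctx,
                                    Collector<Tuple2<String, String>> out) throws Exception {
            // forward the pcollectionC event only if its key was seen in pcollectionA
            if (ctx.getBroadcastState(aStateDescriptor).contains(event.f0)) {
                out.collect(event);
            }
        }
    });
// the same pattern applies to B_BroadcastStream for the pcollectionB condition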

Related

Event data aggregation using Kafka Streams in a system with high volume of data

In our company we have a use case which maps to the one described by Confluent about joining movies and ratings using a KStream and a KTable.
Here is the problem statement described in the Confluent article:
Suppose you have a set of movies that have been released and a stream
of ratings from moviegoers about how entertaining they are. In this
tutorial, we'll write a program that joins each rating with content
about the movie. Related pattern: Event Joiner
The short answer they provide is the following:
KStream<String, Rating> ratings = ...
KTable<String, Movie> movies = ...
final MovieRatingJoiner joiner = new MovieRatingJoiner();
KStream<String, RatedMovie> ratedMovie = ratings.join(movies, joiner);
Source:
https://developer.confluent.io/tutorials/join-a-stream-to-a-table/kstreams.html
However, in our case the data we are dealing with has the following properties:
P1. The total number of movies is about 500 million, while the number of ratings is even higher.
P2. We always need to have the complete movie catalog in our lookup store; otherwise we may receive a rating for a movie we don't have, and the join will not work.
P3. A new rating can be received at any time, and a new movie can be received at any time.
P4. We only need to send the rated-movie event (the join result) if the rating was made within the last month. If the rating is older than one month, we don't want to send the rated-movie event anymore.
Then, considering properties P1 and P2, I think a persistent database is more appropriate for the movie lookup than a KTable. A KTable can use local disk storage and is fault-tolerant, since it can be rebuilt from the Kafka topic if it crashes; even so, that rebuild could take a long time when we are talking about hundreds of millions of movies.
Given P3, we need to guarantee there is no race condition which may lead to an event being lost.
With P4 we are considering a time window of one month.
As you can imagine, the rating will most likely belong to an already existing movie, so the initial lookup will be successful more than 95% of the time. However, it is possible to receive a rating for a new movie which is not yet present in our system, or for a movie which we will never receive.
Apart from KStream-KTable, I was also thinking about a KStream-KStream windowed join, but considering the volume of the data and the window of one month, I expect the memory usage to be too high.
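For reference, the KStream-KStream variant I have in mind would look roughly like this (just a sketch: the topic names and the RatedMovie constructor are placeholders, and the serdes are assumed to come from the default config):

import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, Rating> ratings = builder.stream("ratings");   // placeholder topic
KStream<String, Movie> movies = builder.stream("movies");      // placeholder topic

// symmetric windowed join: both sides are buffered in state for up to ~one month
KStream<String, RatedMovie> ratedMovies = ratings.join(
    movies,
    (rating, movie) -> new RatedMovie(movie, rating),            // placeholder joiner
    JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofDays(30)) // the one-month window
);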
Based on the properties described above, is there any recommended pattern using KStream-KStream or KStream-KTable joins which could fit this use case?
Otherwise I believe the best option we have is to use two persistent data sources for the lookup, like this:

Process item in a window with Kafka streams

I'm trying to process some events in a sliding window with Kafka Streams, but I think I don't understand some details of Kafka Streams, so I'm not able to do what I want.
What I have:
an input topic of events with key/value like (Int, Person)
What I want:
read these events within a sliding window of 10 minutes
process each element in the sliding window
filter and count some elements, and fire events to another Kafka topic (e.g. if a wrong value is detected)
Simply put: get all the events in a sliding window of 10 minutes, do a foreach on them, compute some stats/events in the context of the window, and move on to the next window...
What I tried:
I tried to mix the Streams DSL and the Processor API, like this:
val streamBuilder = new StreamsBuilder()
streamBuilder.stream[Int, Person](topic)
.groupBy((_, value) => PersonWrapper(value.id, value.name))
.windowedBy(TimeWindows.of(10 * 60 * 1000L).advanceBy(1 * 60 * 1000L))
// now I have a window of (PersonWrapper, Person) right ?
streamBuilder.build().addProcessor(....)
And now I'd add a processor to this topology to process each event of the sliding window.
I don't understand what TimeWindowedKStream is, and why we need a KGroupedStream to apply a window on events. It would help if someone could enlighten me about Kafka Streams and what I'm trying to do.
Did you read the documentation: https://docs.confluent.io/current/streams/developer-guide/dsl-api.html#windowing
Windowing is a special form of grouping (grouping based on time)
Grouping is always required to compute an aggregation in Kafka Streams.
After you have a grouped and windowed stream, you call aggregate() for the actual processing (no need to attach a Processor manually; the call to aggregate() implicitly adds a Processor for you).
Btw: Kafka Streams does not really support "sliding windows" for aggregation. The window you define is called a hopping window.
KGroupedStream and TimeWindowedKStreams are basically just helper classes and an intermediate representation that allows for a fluent API design.
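In Java DSL terms, the whole thing could look roughly like this (a sketch: the topic names, the Person type, and the filter condition are placeholders, and the serdes are assumed to come from the default config):

import java.time.Duration;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

StreamsBuilder builder = new StreamsBuilder();
KStream<Integer, Person> persons = builder.stream("person-events");   // placeholder topic

KTable<Windowed<Integer>, Long> countsPerWindow = persons
    .groupByKey()                                                      // or groupBy(...) to re-key first
    .windowedBy(TimeWindows.of(Duration.ofMinutes(10))                 // 10-minute windows
                           .advanceBy(Duration.ofMinutes(1)))          // hopping every minute
    .count();                                                          // aggregate()/reduce() work the same way

// react to the per-window results, e.g. fire an event to another topic
countsPerWindow.toStream()
    .filter((windowedKey, count) -> count > 100)                       // placeholder condition
    .map((windowedKey, count) -> KeyValue.pair(windowedKey.key(), count))
    .to("alerts");                                                     // placeholder topic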
The tutorial is also a good way to get started: https://docs.confluent.io/current/streams/quickstart.html
You should also check out the examples: https://github.com/confluentinc/kafka-streams-examples

Category projections using kafka and cassandra for event-sourcing

I'm using Cassandra and Kafka for event-sourcing, and it works quite well. But I've just recently discovered a potentially major flaw in the design/set-up. A brief intro to how it is done:
The aggregate command handler is basically a Kafka consumer, which consumes messages of interest on a topic:
1.1 When it receives a command, it loads all events for the aggregate and replays the aggregate event handler for each event to bring the aggregate up to its current state.
1.2 Based on the command and business logic it then applies one or more events to the event store. This involves inserting the new event(s) into the event store table in Cassandra. The events are stamped with a version number for the aggregate - starting at version 0 for a new aggregate - making projections possible. In addition it sends the event to another topic (for projection purposes).
1.3 A Kafka consumer listens on the topic to which these events are published. This consumer acts as a projector. When it receives an event of interest, it loads the current read model for the aggregate, checks that the version of the event it has received is the expected one, and then updates the read model.
This seems to work very well. The problem arises when I want to have what EventStore calls category projections. Let's take an Order aggregate as an example. I can easily project one or more read models per Order. But if I want to, for example, have a projection which contains a customer's 30 last orders, then I would need a category projection.
I'm just scratching my head over how to accomplish this. I'm curious to know if anyone else is using Cassandra and Kafka for event sourcing. I've read in a couple of places that some people discourage it. Maybe this is the reason.
I know EventStore has support for this built in. Maybe using Kafka as event store would be a better solution.
With this kind of architecture, you have to choose between:
Global event stream per type - simple
Partitioned event stream per type - scalable
Unless your system is fairly high throughput (say at least tens or hundreds of events per second, sustained, for the stream type in question), the global stream is the simpler approach. Some systems (such as Event Store) give you the best of both worlds: very fine-grained streams (such as per aggregate instance) combined with the ability to combine them into larger streams (per stream type/category/partition, per multiple stream types, etc.) in a performant and predictable way out of the box, while still staying simple by only requiring you to keep track of a single global event position.
If you go partitioned with Kafka:
Your projection code will need to handle concurrent consumer groups accessing the same read models when processing events for different partitions that need to go into the same models. Depending on your target store for the projection, there are lots of ways to handle this (transactions, optimistic concurrency, atomic operations, etc.), but it would be a problem for some target stores.
Your projection code will need to keep track of the stream position of each partition, not just a single position. If your projection reads from multiple streams, it has to keep track of lots of positions.
Using a global stream removes both of those concerns - performance is usually going to be good enough.
In either case, you'll likely also want to get the stream position into the long term event storage (i.e. Cassandra) - you could do this by having a dedicated process reading from the event stream (partitioned or global) and just updating the events in Cassandra with the global or partition position of each event. (I have a similar thing with MongoDB - I have a process reading the 'oplog' and copying oplog timestamps into events, since oplog timestamps are totally ordered).
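As a sketch, that dedicated position-tagging process could be little more than a plain consumer; the topic name, group id and serializers are assumptions, and the actual Cassandra update is left as a comment:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");                 // assumed broker
props.put("group.id", "event-position-tagger");                   // assumed group id
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(List.of("order-events"));                   // assumed event topic
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
        for (ConsumerRecord<String, String> record : records) {
            // record.partition() and record.offset() give the per-partition position
            // (a single global position if the topic has one partition);
            // update the corresponding event row in Cassandra with that position here
        }
        consumer.commitSync();
    }
}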
Another option is to drop Cassandra from the initial command processing and use Kafka Streams instead:
Partitioned command stream is processed by joining with a partitioned KTable of aggregates
Command result and events are computed
Atomically, KTable is updated with changed aggregate, events are written to event stream and command response is written to command response stream.
You would then have a downstream event processor that copies the events into Cassandra for easier querying, etc. (and which can add the Kafka stream position to each event as it does so, to give the category ordering). This can help with catch-up subscriptions, etc. if you don't want to use Kafka for long-term event storage. (To catch up, you'd just read as far as you can from Cassandra and then switch to streaming from Kafka from the position of the last Cassandra event.) On the other hand, Kafka itself can store events forever, so this isn't always necessary.
I hope this helps a bit with understanding the tradeoffs and problems you might encounter.

Synchronize Data From Multiple Data Sources

Our team is trying to build a predictive maintenance system whose task is to look at a set of events and predict whether these events depict a set of known anomalies or not.
We are at the design phase and the current system design is as follows:
The events may occur on multiple sources of an IoT system (such as cloud platform, edge devices or any intermediate platforms)
The events are pushed by the data sources into a message queueing system (currently we have chosen Apache Kafka).
Each data source has its own queue (Kafka Topic).
From the queues, the data is consumed by multiple inference engines (which are actually neural networks).
Depending upon the feature set, an inference engine will subscribe to multiple Kafka topics and stream data from those topics to continuously output the inference.
The overall architecture follows the single-responsibility principle, meaning that every component is separate from the others and runs inside its own Docker container.
Problem:
In order to classify a set of events as an anomaly, the events have to occur in the same time window. E.g. say there are three data sources pushing their respective events into Kafka topics, but for some reason the data is not synchronized.
So one of the inference engines pulls the latest entries from each of the Kafka topics, but the corresponding events in the pulled data do not belong to the same time window (say 1 hour). That results in invalid predictions due to out-of-sync data.
Question
We need to figure out how we can make sure that the data from all three sources is pushed in order, so that when an inference engine requests entries (say the last 100 entries) from multiple Kafka topics, the corresponding entries in each topic belong to the same time window.
I would suggest KSQL, which is a streaming SQL engine that enables real-time data processing against Apache Kafka. It also provides nice functionality for Windowed Aggregation etc.
There are 3 ways to define windows in KSQL: hopping windows, tumbling windows, and session windows. Hopping and tumbling windows are time windows, because they're defined by fixed durations that you specify. Session windows are dynamically sized based on incoming data and defined by periods of activity separated by gaps of inactivity.
In your context, you can use KSQL to query and aggregate the topics of interest using Windowed Joins. For example,
SELECT t1.id, ...
FROM topic_1 t1
INNER JOIN topic_2 t2
WITHIN 1 HOURS
ON t1.id = t2.id;
Some suggestions -
Handle delay at the producer end -
Ensure all three producers always send data in sync to the Kafka topics by tuning batch.size and linger.ms.
E.g. if linger.ms is set to 1000, buffered messages are sent to Kafka within at most 1 second.
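For illustration, a producer configured that way might look like this (the broker address and serializers are assumptions, and the batch.size value is just an example):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumed broker
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
          "org.apache.kafka.common.serialization.StringSerializer");
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
          "org.apache.kafka.common.serialization.StringSerializer");
props.put(ProducerConfig.LINGER_MS_CONFIG, 1000);       // wait at most 1 second to fill a batch
props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024); // 64 KB batches (example value)

KafkaProducer<String, String> producer = new KafkaProducer<>(props);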
Handle delay at the consumer end -
Any streaming engine on the consumer side (be it Kafka Streams, Spark Streaming, or Flink) provides windowing functionality to join/aggregate stream data based on keys, while also accounting for delayed data through its window functions.
See the Flink windows documentation for reference on how to choose the right window type.
To handle this scenario, the data sources must provide some mechanism for the consumer to realize that all relevant data has arrived. The simplest solution is to publish each batch from a data source with a batch ID (GUID) of some form. Consumers can then wait until the next batch ID shows up, marking the end of the previous batch. This approach assumes sources will not skip a batch; otherwise they will become permanently misaligned. There is no algorithm to detect this, but you might have some fields in the data that reveal a discontinuity and allow you to realign the data.
A weaker version of this approach is to either just wait x seconds and assume all sources succeed within that time, or look at some form of timestamps (logical or wall clock) to detect that a source has moved on to the next time window, implicitly signalling completion of the last window.
The following recommendations should maximize success of event synchronization for the anomaly detection problem using timeseries data.
Use a network time synchronizer on all producer/consumer nodes
Use a heartbeat message from producers every x units of time with a fixed start time. For example: the messages are sent every two minutes at the start of the minute.
Build predictors for producer message delay. Use the heartbeat messages to compute this.
With these primitives, we should be able to align the timeseries events, accounting for time drifts due to network delays.
On the inference engine side, expand your windows on a per-producer level to sync up events across producers.

How does kafka streams compute watermarks?

Does Kafka Streams internally compute watermarks? Is it possible to observe the result of a window (only) when it completes (i.e. when watermark passes end of window)?
Kafka Streams does not use watermarks internally, but a new feature in 2.1.0 lets you observe the result of a window when it closes. It's called suppression, and you can read about it in the docs under Window Final Results:
KGroupedStream<UserId, Event> grouped = ...;
grouped
    .windowedBy(TimeWindows.of(Duration.ofHours(1)).grace(Duration.ofMinutes(10)))
    .count()
    .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()));
Does Kafka Streams internally compute watermarks?
No. Kafka Streams follows a continuous update processing model that does not require watermarks. You can find more details about this online:
https://www.confluent.io/blog/watermarks-tables-event-time-dataflow-model/
https://www.confluent.io/resources/streams-tables-two-sides-same-coin
Is it possible to observe the result of a window (only) when it completes (i.e. when watermark passes end of window)?
You can observe the result of a window at any point in time: either by subscribing to the result changelog stream via, for example, KTable#toStream()#foreach() (i.e. a push-based approach), or via Interactive Queries that let you query the result window actively (i.e. a pull-based approach).
As mentioned by @Dmitry, for the push-based approach you can also use the suppress() operator if you are only interested in a window's final result.
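For the push-based approach, a minimal sketch reusing the grouped stream from the snippet above (without suppress(), the callback fires on every update to the window, not only when it closes):

grouped
    .windowedBy(TimeWindows.of(Duration.ofHours(1)).grace(Duration.ofMinutes(10)))
    .count()
    .toStream()
    .foreach((windowedKey, count) -> {
        // invoked on each update of the hourly window for this key
        System.out.println(windowedKey.window().start() + " -> " + count);
    });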