Last value corresponding to each key sent on a Kafka topic

We have a Kafka topic configured on which we publish accumulated reports for each stock we traded throughout the day.
For example, Stock A - Buy-50, Sell-60; Stock B - Buy-44, Sell-34; etc. The key while publishing is the RIC code of the stock.
The next day I want all consumers to get the last published positions for each stock individually. I want to understand how to configure Kafka producer/consumer to achieve this behavior.
One thing that comes to mind is creating a partition for each stock; this will result in individual offsets for each stock, and all consumers can point to the HIGHEST offset and get the latest position.
Is this the correct approach or am I missing something obvious?

Your approach will work, but only if you don't care too much about time boundaries - for example, if you do not need the counts for each day separately, with a strict requirement that only events that happened between, say, [01/25/2017 00:00 - 01/26/2017 00:00] are counted.
If you do need counts per day in a strict manner, you could try using Kafka Streams, with the RIC as the key and the window set to 24 hours based on the event timestamp.
This is just one other way to do that - I'm sure there are more approaches available!
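As a rough illustration of the Kafka Streams idea above (the topic name, serdes, and the Position type are assumptions, not part of the question), keeping only the last published position per RIC within each 24-hour window:

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.TimeWindows;

StreamsBuilder builder = new StreamsBuilder();
builder.stream("daily-positions", Consumed.with(Serdes.String(), positionSerde)) // key = RIC code
    .groupByKey()
    .windowedBy(TimeWindows.of(Duration.ofHours(24)))
    // keep only the last value seen for each RIC within the 24-hour window
    .reduce((previousPosition, latestPosition) -> latestPosition)
    .toStream()
    .foreach((windowedRic, position) -> { /* expose the latest position per stock */ });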

Related

How does Google Dataflow determine the watermark for various sources?

I was reviewing the documentation to understand how Google Dataflow handles watermarks, and it only offers the rather vague:
The data source determines the watermark
It seems you can add more flexibility through withAllowedLateness, but what will happen if we do not configure this?
Thoughts so far
I found something indicating that if your source is Google PubSub, it already provides a watermark which will be used; but what if the source is something else? For example a Kafka topic (which I believe does not inherently have a watermark, so I don't see how something like this would apply).
Is it always 10 seconds, or just 0? Does it look at the last few minutes to determine the max lag, and if so, how many (surely not since forever, as that would get distorted by the initial start of processing, which might see giant lag)? I could not find anything on the topic.
I also searched the Apache Beam documentation outside the context of Google Dataflow, but did not find anything explaining this either.
When using Apache Kafka as a data source, each Kafka partition may have a simple event time pattern (ascending timestamps or bounded out-of-orderness). However, when consuming streams from Kafka, multiple partitions often get consumed in parallel, interleaving the events from the partitions and destroying the per-partition patterns (this is inherent in how Kafka’s consumer clients work).
In that case, you can use Flink’s Kafka-partition-aware watermark generation. Using that feature, watermarks are generated inside the Kafka consumer, per Kafka partition, and the per-partition watermarks are merged in the same way as watermarks are merged on stream shuffles.
For example, if event timestamps are strictly ascending per Kafka partition, generating per-partition watermarks with the ascending timestamps watermark generator will result in perfect overall watermarks. Note that no TimestampAssigner is provided in the sketch below; the timestamps of the Kafka records themselves will be used instead.
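A minimal sketch of what that looks like with Flink's Kafka consumer (the event type, deserialization schema, and properties are placeholders; the WatermarkStrategy-based API shown here is the one from recent Flink releases):

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

FlinkKafkaConsumer<MyEvent> kafkaSource =
    new FlinkKafkaConsumer<>("my-topic", new MyEventDeserializationSchema(), kafkaProps);

// Watermarks are generated per Kafka partition inside the consumer; since no
// TimestampAssigner is given, the Kafka record timestamps themselves are used.
kafkaSource.assignTimestampsAndWatermarks(WatermarkStrategy.forMonotonousTimestamps());

DataStream<MyEvent> stream = env.addSource(kafkaSource);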
In any data processing system, there is a certain amount of lag between the time a data event occurs (the “event time”, determined by the timestamp on the data element itself) and the time the actual data element gets processed at any stage in your pipeline (the “processing time”, determined by the clock on the system processing the element). In addition, there are no guarantees that data events will appear in your pipeline in the same order that they were generated.
For example, let’s say we have a PCollection that’s using fixed-time windowing, with windows that are five minutes long. For each window, Beam must collect all the data with an event time timestamp in the given window range (between 0:00 and 4:59 in the first window, for instance). Data with timestamps outside that range (data from 5:00 or later) belong to a different window.
However, data isn’t always guaranteed to arrive in a pipeline in time order, or to always arrive at predictable intervals. Beam tracks a watermark, which is the system’s notion of when all data in a certain window can be expected to have arrived in the pipeline. Once the watermark progresses past the end of a window, any further element that arrives with a timestamp in that window is considered late data.
From our example, suppose we have a simple watermark that assumes approximately 30s of lag time between the data timestamps (the event time) and the time the data appears in the pipeline (the processing time), then Beam would close the first window at 5:30. If a data record arrives at 5:34, but with a timestamp that would put it in the 0:00-4:59 window (say, 3:38), then that record is late data.
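To connect this back to withAllowedLateness from the question, here is a hedged Beam (Java) sketch; the window length, lateness bound, and the events PCollection are illustrative assumptions:

import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;

PCollection<MyEvent> windowed =
    events.apply(
        Window.<MyEvent>into(FixedWindows.of(Duration.standardMinutes(5)))
            // keep accepting data for up to 30 seconds after the watermark passes
            // the end of the window; with the default of zero, late data is dropped
            .withAllowedLateness(Duration.standardSeconds(30))
            .discardingFiredPanes());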

KStreamWindowAggregate 2.0.1 vs 2.5.0: skipping records instead of processing

I've recently upgraded my Kafka Streams application from 2.0.1 to 2.5.0. As a result I'm seeing a lot of warnings like the following:
org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor Skipping record for expired window. key=[325233] topic=[MY_TOPIC] partition=[20] offset=[661798621] timestamp=[1600041596350] window=[1600041570000,1600041600000) expiration=[1600059629913] streamTime=[1600145999913]
There seems to be new logic in the KStreamWindowAggregate class that checks whether a window has closed. If it has closed, the messages are skipped. In 2.0.1 these messages were still processed.
Question
Is there a way to get the same behavior as before? I'm seeing lots of gaps in my data with this upgrade and I'm not sure how to solve this, as previously these gaps were not seen.
The aggregate function that I'm using already deals with windowing and as a result with expired windows. How does this new logic relate to these expired windows?
Update
While exploring further, I see that it is indeed related to the grace period (in ms). In my custom TimestampExtractor (which uses the timestamp from the payload instead of the record timestamp), I can see that for the records triggering the expired-window warnings, the incoming record timestamp is indeed more than 24 hours ahead of the event time from the payload.
I assume this is caused by consumer lags of over 24 hours.
The timestamp extractor's extract method has a partitionTime parameter which, according to the docs, is:
partitionTime the highest extracted valid timestamp of the current record's partition (could be -1 if unknown)
So is this the create time of the record on the topic? And is there a way to influence this so that my records are no longer skipped?
In 2.0.1 these messages were still processed.
That is a little surprising (even if I would need to double-check the code), at least for the default config. By default, the store retention time is set to 24h, and thus in 2.0.1 messages older than 24h should also not be processed, as the corresponding state has already been purged. If you did change the store retention time (via Materialized#withRetention) to a larger value, you would also need to increase the window grace period via the TimeWindows#grace() method accordingly.
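For illustration only (the grouped stream, store name, and window size below are assumptions, not the asker's code), increasing both settings together in the 2.5 DSL might look like this:

import java.time.Duration;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.state.WindowStore;

groupedStream
    // 30-second windows that accept out-of-order records for up to 48 hours
    .windowedBy(TimeWindows.of(Duration.ofSeconds(30)).grace(Duration.ofHours(48)))
    .count(Materialized.<String, Long, WindowStore<Bytes, byte[]>>as("counts-store")
        // retention must cover window size plus grace period
        .withRetention(Duration.ofHours(48).plus(Duration.ofSeconds(30))));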
The aggregate function that I'm using already deals with windowing and as a result with expired windows. How does this new logic relate to these expired windows?
I'm not sure what you mean by this, or how you actually do it. The old and new logic are similar with regard to how long a window is stored (the retention time config). The new part is the grace period, which you can increase to the same value as the retention time if you wish.
About "partition time": it is computed base on whatever TimestampExtractor returns. For your case, it's the max of whatever you extracted from the message payload.

How to control retention over aggregate state store and changelog topic

My use case is the following:
Orders are flowing into an activation system via a topic. I have to identify changes for records of the same key. I compare the existing value with the new value using the aggregate function and output an event that points out the type of change identified, e.g. a DueDate change.
The key is a randomly generated number and the number of unique keys is pretty much unbounded. The same key will be reused if the ordering system pushes a revision to an existing order.
The code has been running for a couple of months in production, but the state store and changelog topic keep growing and there is a concern about space usage. I would like records to expire after 90 days in the state store. I read about ways to apply time-based retention on a state store, and it looks like windowing the aggregation is a way of achieving that.
I understand that windowed aggregations are only available for tumbling and hopping windows. Sliding windows are available for join operations only.
A tumbling window wouldn't work in this case because I would have windows for days 0-90, 90-180, etc., and I wouldn't be able to identify an update on day 92 for a record that came in on day 89 (they wouldn't share the same window).
Now the only other option is a hopping window:
TimeWindows timeWindow = TimeWindows.of(Duration.ofDays(90)).advanceBy(Duration.ofDays(1)).until(Duration.ofDays(1).toMillis());
The problem is that I'll have to persist and update 90 windows. When the stream starts, 90 windows will be created: 0-90, 1-91, 2-92, 3-93, etc. If I have a retention of 1 day on the windows, the window 0-90 will be cleaned up on day 91.
Now let's say on day 90 I get an update. Correct me if I'm wrong, but my understanding is that I will have to update 90 windows, and my state store will be quite large by that time because of all the duplicates. Maybe this is where I'm missing something: if a record is present in 90 windows, is it physically written to disk 90 times?
In the end all I need is to prevent my state store and changelog topic from growing indefinitely. 90 days of historical data is sufficient to support my use case.
Would there be a better way to approach this?
It might be simpler not to use the DSL but the Processor API with a windowed state store. A windowed state store is just a key-value store with expiration. Hence, you can use it much like a key-value store -- you only provide an additional timestamp that will be used to expire data eventually.
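A minimal sketch of that idea (store name, the Order type, and its serde are placeholders, and the surrounding Transformer is abbreviated), using a window store with 90-day retention as a key-value store with expiration:

import java.time.Duration;
import java.time.Instant;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;
import org.apache.kafka.streams.state.WindowStore;
import org.apache.kafka.streams.state.WindowStoreIterator;

StoreBuilder<WindowStore<String, Order>> storeBuilder =
    Stores.windowStoreBuilder(
        Stores.persistentWindowStore("orders-store",
            Duration.ofDays(90),   // retention: records older than 90 days are dropped
            Duration.ofDays(90),   // window size (the store is used as a plain KV store)
            false),                // do not retain duplicates
        Serdes.String(), orderSerde);

// Inside the Transformer/Processor, look up the previous value and store the new one,
// keyed by the record key and timestamped with the record's event time:
WindowStoreIterator<Order> previous =
    store.fetch(key, Instant.ofEpochMilli(0), Instant.ofEpochMilli(context.timestamp()));
store.put(key, newOrder, context.timestamp());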

Kafka Streams Intelligently skip messages

I have a simple Kafka 2.0.1 Streams application, as explained in https://kafka.apache.org/documentation/streams/
Imagine the stream to be a series of stock prices. For each price I trigger some CPU- and I/O-intensive computations. Obviously prices arrive at a very high rate, so let's assume the following scenario:
A price arrives for a stock at 10 AM and I schedule a series of computations which, say, take approximately 3 minutes to finish.
In the meantime 3 more prices arrive, say at 10:01, 10:02 and 10:03.
Is there any intelligent way in Kafka to skip the price updates at 10:01 and 10:02 and go straight to the one at 10:03 (i.e., the latest price update for the stock)? There is no point in my processing the updates at 10:01 and 10:02.
In Akka I could perhaps do this with a custom mailbox. It is possible this isn't a pure streaming requirement; however, it sounded like a simple enough requirement that other people ought to have faced it.
You can use a KTable to store the updated state for stock prices. It always keeps the latest record, updating the previous value with the new one. If there are 3 records for key "stock1" and the records below arrive in the stream at the given times:
<stock1, 10> // at time 10:01
<stock1, 8> // at time 10:02
<stock1, 13> // at time 10:03
KTable will result in <stock1, 13> for stock1.
Kafka will produce the eventual result based on event time. I would recommend going with a KTable and always picking the latest record from the stream.
You can find more information about KTables here: https://docs.confluent.io/current/streams/concepts.html#ktable
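A minimal sketch of that suggestion (topic name, serdes, and the runExpensiveComputation callback are assumptions); note that how aggressively intermediate updates are collapsed also depends on record caching (cache.max.bytes.buffering) and commit.interval.ms:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;

StreamsBuilder builder = new StreamsBuilder();

// The table always reflects the latest price per stock key
KTable<String, Double> latestPrices =
    builder.table("stock-prices", Consumed.with(Serdes.String(), Serdes.Double()));

// Downstream, trigger the expensive work per update; intermediate values may be
// collapsed by the record cache before they ever reach this point
latestPrices.toStream()
    .foreach((stock, price) -> runExpensiveComputation(stock, price));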

Need advice on storing time series data in aligned 10 minute batches per channel

I have time series data in Kafka. The schema is quite simple - the key is the channel name, and the values are Long/Double tuples of the timestamp and the value (in reality it's a custom Avro object but it boils down to this). They always come in correct chronological order.
The desired end result is data packaged in 10-minute batches, aligned at 10 minutes (i.e., 00:00 < t <= 00:10, 00:10 < t <= 00:20, ..., 23:50 < t <= 00:00). Each package is to contain only data from one channel.
My idea is to have two Spark Streaming jobs. The first one takes the data from the Kafka topics and dumps it to a table in a Cassandra database where the key is the timestamp and the channel name, and every time such an RDD hits a 10 minute boundary, this boundary is posted to another topic, alongside the channel whose boundary is hit.
The second job listens to this "boundary topic", and for every received 10 minute boundary, the data is pulled from Cassandra, some calculations like min, max, mean, stddev are done and the data and these results are packaged to a defined output directory. That way, each directory contains the data from one channel and one 10 minute window.
However, this looks a bit clunky and like a lot of extra work to me. Is this a feasible solution or are there any other more efficient tricks to it, like some custom windowing of the Kafka data?
I agree with your intuition that this solution is clunky. How about simply using the time windowing functionality built into the Streams DSL?
http://kafka.apache.org/11/documentation/streams/developer-guide/dsl-api.html#windowing
The most natural output would be a new topic containing the windowed aggregations, but if you really need it written to a directory, that should be possible with Kafka Connect.
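A rough sketch of that DSL approach (topic name, the measurement serde, and the ChannelBatch aggregate container are assumptions), producing epoch-aligned 10-minute aggregates per channel:

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.TimeWindows;

StreamsBuilder builder = new StreamsBuilder();

builder.stream("measurements", Consumed.with(Serdes.String(), measurementSerde)) // key = channel name
    .groupByKey()
    // tumbling windows aligned to the epoch: 00:00-00:10, 00:10-00:20, ...
    .windowedBy(TimeWindows.of(Duration.ofMinutes(10)))
    .aggregate(
        ChannelBatch::new,
        (channel, measurement, batch) -> { batch.add(measurement); return batch; },
        Materialized.with(Serdes.String(), channelBatchSerde))
    .toStream()
    .foreach((windowedChannel, batch) -> writeBatch(windowedChannel, batch)); // or .to(...) a topic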
I work with Flink stream processing, not Spark Streaming, but I guess the programming concepts of both are alike. So, supposing the data are ordered chronologically and you want to aggregate data for every 10 minutes and do some processing on the aggregated data, I think the best approach is to use the streaming window functions. I suggest defining a function that maps every incoming record's timestamp to the start of its 10-minute interval:
12:10:24 ----> 12:10:00
12:10:30 ----> 12:10:00
12:25:24 ----> 12:20:00
So you can create a keyed stream object like:
StreamObject<Long, Tuple<data>>
where the Long field is the mapped timestamp of each message. Then you can apply a window; you should research what kind of window is most appropriate for your case.
Point: Setting a key for the data stream will cause the window function to consider a logical window for every key.
In the simplest case, you would define a time window of 10 minutes and aggregate all data arriving in that period of time.
The other approach, if you know the rate at which data is generated and how many messages will be generated in a period of 10 minutes, is to use a count window. For example, a window with a count of 20 will listen to the stream, aggregate all the messages with the same key in a logical window, and apply the window function only when the number of messages in the window reaches 20.
After messages are aggregated in a window as desired, you can apply your processing logic using a reduce function or a similar operation.
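As a hedged illustration of the time-window variant in Flink (ChannelReading, ChannelStats, and StatsAggregate are placeholders; the source is assumed to carry event-time timestamps and watermarks already):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

DataStream<ChannelStats> tenMinuteStats =
    readings                                                  // DataStream<ChannelReading>
        .keyBy(reading -> reading.getChannel())               // one logical window per channel
        .window(TumblingEventTimeWindows.of(Time.minutes(10))) // aligned 10-minute windows
        .aggregate(new StatsAggregate());                     // e.g. min/max/mean/stddev per window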