KStreamWindowAggregate 2.0.1 vs 2.5.0: skipping records instead of processing - apache-kafka

I've recently upgraded my Kafka Streams application from 2.0.1 to 2.5.0. As a result I'm seeing a lot of warnings like the following:
org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor Skipping record for expired window. key=[325233] topic=[MY_TOPIC] partition=[20] offset=[661798621] timestamp=[1600041596350] window=[1600041570000,1600041600000) expiration=[1600059629913] streamTime=[1600145999913]
There seems to be new logic in the KStreamWindowAggregate class that checks whether a window has closed. If it has closed, the messages are skipped. Compared to 2.0.1, these messages were still processed.
Question
Is there a way to get the same behavior as before? With this upgrade I'm seeing lots of gaps in my data and I'm not sure how to solve this, as these gaps were not there previously.
The aggregate function that I'm using already deals with windowing, and as a result with expired windows. How does this new logic relate to these expired windows?
Update
While exploring further, I see it is indeed related to the grace period in ms. In my custom TimestampExtractor (which uses the timestamp from the payload instead of the record's normal timestamp), I can see that for the expired-window warnings the incoming record arrives more than 24 hours after the event time from the payload.
I assume this is caused by consumer lags of over 24 hours.
The timestamp extractor's extract method has a partitionTime parameter, which according to the docs:
partitionTime the highest extracted valid timestamp of the current record's partition (could be -1 if unknown)
So is this the create time of the record on the topic? And is there a way to influence this so that my records are no longer skipped?

Compared to 2.0.1, these messages were still processed.
That is a little bit surprising (even if I would need to double-check the code), at least for the default config. By default, store retention time is set to 24h, and thus in 2.0.1 messages older than 24h should also not be processed, as the corresponding state has already been purged. If you did change the store retention time (via Materialized#withRetention) to a larger value, you would also need to increase the window grace period via the TimeWindows#grace() method accordingly.
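For illustration, here is a minimal sketch (topic and store names are placeholders, and the exact generics may need adjusting) of keeping window size, grace period, and store retention aligned on the DSL:

Duration windowSize = Duration.ofSeconds(30);
Duration gracePeriod = Duration.ofDays(7);          // tolerate up to 7 days of lateness
Duration retention = windowSize.plus(gracePeriod);  // retention must cover window size + grace

KTable<Windowed<String>, Long> counts = builder
    .<String, String>stream("MY_TOPIC")
    .groupByKey()
    .windowedBy(TimeWindows.of(windowSize).grace(gracePeriod))
    .count(Materialized.<String, Long, WindowStore<Bytes, byte[]>>as("my-window-store")
        .withRetention(retention));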
The aggregate function that I'm using already deals with windowing, and as a result with expired windows. How does this new logic relate to these expired windows?
Not sure what you mean by this or how you actually do this? The old and new logic are similar with regard to how long a window is stored (retention time config). The new part is the grace period, which you can increase to the same value as the retention time if you wish.
About "partition time": it is computed base on whatever TimestampExtractor returns. For your case, it's the max of whatever you extracted from the message payload.

Related

Is time-based log compaction in Kafka based on wall-clock time, event time, or a mix of both?

I have been trying to understand how to set up time-based log compaction, but still can't understand its behavior properly. In particular, I am interested in the behavior of log.roll.ms.
What I would like to understand is the following statement taken from the official Kafka docs: https://kafka.apache.org/documentation.html#upgrade_10_1_breaking
The log rolling time is no longer depending on log segment create time. Instead it is now based on the timestamp in the messages. More specifically, if the timestamp of the first message in the segment is T, the log will be rolled out when a new message has a timestamp greater than or equal to T + log.roll.ms
In T + log.roll.ms:
a) I understand that T is based on the timestamp of the message and can therefore be considered the event time. However, what is the clock behind log.roll.ms? In Kafka Streams, for instance, when working with event time it is clear what the stream time is: it is the highest timestamp seen. So does the time for log compaction progress with the timestamps of the messages, and is therefore event time, or does it progress based on the wall-clock time of the brokers?
I thought it was event time, but then I saw the following talk, https://www.confluent.io/kafka-summit-san-francisco-2019/whats-the-time-and-why/, where Matthias J. Sax talks about it. From his talk I got confused. It seems that compaction is indeed driven by both the event time T and wall-clock time.

How does Google Dataflow determine the watermark for various sources?

I was just reviewing the documentation to understand how Google Dataflow handles watermarks, and it just mentions the very vague:
The data source determines the watermark
It seems you can add more flexibility through withAllowedLateness, but what will happen if we do not configure this?
Thoughts so far
I found something indicating that if your source is Google Pub/Sub it already provides a watermark, which will be used, but what if the source is something else? For example, a Kafka topic (which I believe does not inherently have a watermark, so I don't see how something like this would apply).
Is it always 10 seconds, or just 0? Is it looking at the last few minutes to determine the max lag and if so how many (surely not since forever as that would get distorted by the initial start of processing which might see giant lag)? I could not find anything on the topic.
I also searched outside the context of Google DataFlow for Apache Beam documentation but did not find anything explaining this either.
When using Apache Kafka as a data source, each Kafka partition may have a simple event time pattern (ascending timestamps or bounded out-of-orderness). However, when consuming streams from Kafka, multiple partitions often get consumed in parallel, interleaving the events from the partitions and destroying the per-partition patterns (this is inherent in how Kafka’s consumer clients work).
In that case, you can use Flink’s Kafka-partition-aware watermark generation. Using that feature, watermarks are generated inside the Kafka consumer, per Kafka partition, and the per-partition watermarks are merged in the same way as watermarks are merged on stream shuffles.
For example, if event timestamps are strictly ascending per Kafka partition, generating per-partition watermarks with the ascending timestamps watermark generator will result in perfect overall watermarks. Note that a TimestampAssigner is not provided in the example; the timestamps of the Kafka records themselves will be used instead.
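For reference, the Flink snippet that goes with this description is roughly the following (assuming the 1.11+ WatermarkStrategy API; topic name, properties, and the out-of-orderness bound are placeholders):

FlinkKafkaConsumer<String> kafkaSource =
    new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), kafkaProps);

// watermarks are generated per Kafka partition inside the consumer and then merged
kafkaSource.assignTimestampsAndWatermarks(
    WatermarkStrategy.<String>forBoundedOutOfOrderness(Duration.ofSeconds(20)));
// or, if timestamps are strictly ascending per partition:
// kafkaSource.assignTimestampsAndWatermarks(WatermarkStrategy.<String>forMonotonousTimestamps());

DataStream<String> stream = env.addSource(kafkaSource);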
In any data processing system, there is a certain amount of lag between the time a data event occurs (the “event time”, determined by the timestamp on the data element itself) and the time the actual data element gets processed at any stage in your pipeline (the “processing time”, determined by the clock on the system processing the element). In addition, there are no guarantees that data events will appear in your pipeline in the same order that they were generated.
For example, let’s say we have a PCollection that’s using fixed-time windowing, with windows that are five minutes long. For each window, Beam must collect all the data with an event time timestamp in the given window range (between 0:00 and 4:59 in the first window, for instance). Data with timestamps outside that range (data from 5:00 or later) belong to a different window.
However, data isn’t always guaranteed to arrive in a pipeline in time order, or to always arrive at predictable intervals. Beam tracks a watermark, which is the system’s notion of when all data in a certain window can be expected to have arrived in the pipeline. Once the watermark progresses past the end of a window, any further element that arrives with a timestamp in that window is considered late data.
From our example, suppose we have a simple watermark that assumes approximately 30s of lag time between the data timestamps (the event time) and the time the data appears in the pipeline (the processing time), then Beam would close the first window at 5:30. If a data record arrives at 5:34, but with a timestamp that would put it in the 0:00-4:59 window (say, 3:38), then that record is late data.
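In the Beam Java SDK, the fixed windowing and lateness handling from this example look roughly like the following (the events PCollection, element type, and durations are placeholders):

PCollection<MyEvent> windowed = events.apply(
    Window.<MyEvent>into(FixedWindows.of(Duration.standardMinutes(5)))
        // keep accepting elements for up to 30s after the watermark passes the end of the window
        .withAllowedLateness(Duration.standardSeconds(30))
        .triggering(AfterWatermark.pastEndOfWindow())
        .discardingFiredPanes());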

Kafka Streams topology with windowing doesn't trigger state changes

I am building the following Kafka Streams topology (pseudo code):
gK = builder.stream().groupByKey();
g1 = gK.windowedBy(TimeWindows.of(Duration.parse("PT1H"))).reduce().mapValues().toStream().mapValues().selectKey();
g2 = gK.reduce().mapValues();
g1.leftJoin(g2).to();
If you notice, this is a rhombus-like (diamond-shaped) topology that starts at a single input topic and ends in a single output topic, with messages flowing through two parallel flows that eventually get joined together at the end. One flow applies (tumbling?) windowing, the other does not. Both parts of the flow work on the same key (apart from the WindowedKey intermediately introduced by the windowing).
The timestamp for my messages is event time. That is, it gets picked from the message body by my custom-configured TimestampExtractor implementation. The actual timestamps in my messages are several years in the past.
That all works well at first sight in my unit tests with a couple of input/output messages and in the runtime environment (with real Kafka).
The problem seems to come when the number of messages starts being significant (e.g. 40K).
My failing scenario is the following:
1) ~40K records with the same key get uploaded into the input topic first
2) ~40K updates come out of the output topic, as expected
3) another ~40K records with the same key (but different from the key in step 1) get uploaded into the input topic
4) only ~100 updates come out of the output topic, instead of the expected new ~40K updates. There is nothing special about those ~100 updates; their contents seem to be right, but only for certain time windows. For other time windows there are no updates, even though the flow logic and input data should definitely generate 40K records. In fact, when I swap the datasets in steps 1) and 3), I get exactly the same situation, with ~40K updates coming from the second dataset and the same ~100 from the first.
I can easily reproduce this issue in the unit tests using TopologyTestDriver locally (but only on bigger numbers of input records).
In my tests, I've tried disabling caching with StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG. Unfortunately, that didn't make any difference.
UPDATE
I tried both reduce() calls and aggregate() calls instead. The issue persists in both cases.
What I'm also noticing is that, with StreamsConfig.TOPOLOGY_OPTIMIZATION set to StreamsConfig.OPTIMIZE and without it, the mapValues() handler gets called in the debugger before the preceding reduce() (or aggregate()) handlers, at least the first time. I didn't expect that.
Tried both join() and leftJoin(); unfortunately, same result.
In the debugger, the second portion of the data doesn't trigger the reduce() handler in the "left" flow at all, but does trigger the reduce() handler in the "right" flow.
With my configuration, if the number of records is 100 in each dataset, the problem doesn't manifest itself and I'm getting 200 output messages as I expect. When I raise the number to 200 in each dataset, I'm getting fewer than the expected 400 messages out.
So, it seems at the moment that something like "old" windows get dropped and the new records for those old windows get ignored by the stream.
There is a window retention setting that can be set, but with the default value that I use I was expecting windows to retain their state and stay active for at least 12 hours (which significantly exceeds the run time of my unit tests).
Tried to amend the left reducer with the following Window storage config:
Materialized.as(
Stores.inMemoryWindowStore(
"rollup-left-reduce",
Duration.ofDays(5 * 365),
Duration.ofHours(1), false)
)
still no difference in results.
The same issue persists even with only the single "left" flow, without the "right" flow and without join(). It seems that the problem is in the window retention settings of my setup. The timestamps (event time) of my input records span 2 years. The second dataset starts from the beginning of those 2 years again. This place in Kafka Streams is what makes the second dataset's records get ignored:
https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/state/internals/InMemoryWindowStore.java#L125
Kafka Streams version is 2.4.0. I'm also using Confluent dependencies version 5.4.0.
My questions are
What could be the reason for such behaviour?
Did I miss anything in my stream topology?
Is such topology expected to work at all?
After some debugging time I found the reason for my problem.
My input datasets contain records with timestamps that span 2 years. When I load the first dataset, the "observed" time of my stream gets set to the maximum timestamp from that input dataset.
Uploading the second dataset, which starts with records whose timestamps are 2 years before the new observed time, causes the stream internals to drop the messages. This can be seen if you set the Kafka logging to TRACE level.
So, to fix my problem I had to configure the retention and grace period for my windows:
instead of
.windowedBy(TimeWindows.of(windowSize))
I have to specify
.windowedBy(TimeWindows.of(windowSize).grace(Duration.ofDays(5 * 365)))
Also, I had to explicitly configure reducer storage settings as:
Materialized.as(
Stores.inMemoryWindowStore(
"rollup-left-reduce",
Duration.ofDays(5 * 365),
windowSize, false)
)
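Putting the two changes together, the "left" windowed reduce now looks roughly like this (value types simplified to String, the reducer is a placeholder):

KTable<Windowed<String>, String> left = gK
    .windowedBy(TimeWindows.of(windowSize).grace(Duration.ofDays(5 * 365)))
    .reduce(
        (oldValue, newValue) -> newValue,      // placeholder reducer
        Materialized.as(
            Stores.inMemoryWindowStore(
                "rollup-left-reduce",
                Duration.ofDays(5 * 365),      // retention covering the full timestamp span
                windowSize,                    // must match the window size used in windowedBy()
                false)));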
That's it, the output is as expected.

How to control retention over aggregate state store and changelog topic

My use case is the following:
Orders are flowing into an activation system via a topic. I have to identify changes for records of the same key. I compare the existing value with the new value using the aggregate function and output an event that indicates the type of change identified, e.g. a DueDate change.
The key is a randomly generated number and the number of unique keys is pretty much unbounded. The same key will be reused in case the ordering system pushes a revision to an existing order.
The code has been running for a couple of months in production, but the state store and changelog topic are growing and there is a concern about space usage. I would like to have records expire after 90 days in the state store. I read about ways to apply a time-based retention on the state store, and it looks like windowing the aggregation is a way of achieving that.
I understand that windowed aggregations are only available for tumbling and hopping windows. Sliding windows are available for join operations only.
A tumbling window wouldn't work in this case because I would have windows for days 0-90, 90-180, and so on, and I wouldn't be able to identify an update on day 92 for a record that came in on day 89 (they wouldn't share the same window).
Now the only other option is a hopping window.
TimeWindows timeWindow = TimeWindows.of(Duration.ofDays(90)).advanceBy(Duration.ofDays(1)).until(Duration.ofDays(1).toMillis());
The problem is that I'll have to persist and update 90 windows. When the stream starts, 90 windows will be created 0-90, 1-91, 2-92, 3-93 etc. If I have a retention of 1 day on the windows, the window 0-90 will be cleaned up on day 91.
Now let's say on day 90 I get an update. Correct me if I'm wrong, but my understanding is that I will have to update 90 windows, and my state store will be quite large by that time because of all the duplicates. Maybe this is where I'm missing something. If a record is present in 90 windows, is it physically written to disk 90 times?
In the end all I need is to prevent my state store and changelog topic from growing indefinitely. 90 days of historical data is sufficient to support my use case.
Would there be a better way to approach this?
It might be simpler to not use the DSL but the Processor API with a windowed state store. A windowed state store is just a key-value store with expiration. Hence, you can use it similarly to a key-value store -- you only provide an additional timestamp that will be used to expire data eventually.
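A minimal sketch of that idea (store, topic, and type names are hypothetical; this assumes String default serdes and the pre-2.7 Processor API):

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.Stores;
import org.apache.kafka.streams.state.WindowStore;
import org.apache.kafka.streams.state.WindowStoreIterator;

public class OrderChangeDetector implements Processor<String, String> {
    private ProcessorContext context;
    private WindowStore<String, String> store;

    @Override
    @SuppressWarnings("unchecked")
    public void init(final ProcessorContext context) {
        this.context = context;
        this.store = (WindowStore<String, String>) context.getStateStore("orders-by-key");
    }

    @Override
    public void process(final String key, final String newValue) {
        final long now = context.timestamp();
        // look up the latest previous value for this key within the last 90 days
        String previous = null;
        try (WindowStoreIterator<String> iter =
                 store.fetch(key, now - Duration.ofDays(90).toMillis(), now)) {
            while (iter.hasNext()) {
                previous = iter.next().value;
            }
        }
        if (previous != null && !previous.equals(newValue)) {
            context.forward(key, "change detected");   // placeholder change event
        }
        // write with the record timestamp; entries older than the store retention expire
        store.put(key, newValue, now);
    }

    @Override
    public void close() { }
}

The store and topology wiring could then look like:

Topology topology = new Topology();
topology.addSource("orders-source", "orders");
topology.addProcessor("change-detector", OrderChangeDetector::new, "orders-source");
topology.addStateStore(
    Stores.windowStoreBuilder(
        Stores.persistentWindowStore("orders-by-key",
            Duration.ofDays(90),   // retention: entries older than 90 days are expired
            Duration.ofDays(90),   // "window size" here only bounds fetch ranges
            false),
        Serdes.String(), Serdes.String()),
    "change-detector");
topology.addSink("order-changes-sink", "order-changes", "change-detector");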

StreamsException: Extracted timestamp value is negative, which is not allowed

This could be a duplicate of Error in Kafka Streams using kafka-node - negative timestamp, but it certainly is not. My Kafka Streams app does some transformation logic on each message and forwards it to a new topic. There is no time-based aggregation/processing in the app, so there is no need to use any custom timestamp extractor. This app was running fine for several days, but all of a sudden the app threw a negative-timestamp exception.
Exception in thread "StreamThread-4" org.apache.kafka.streams.errors.StreamsException: Extracted timestamp value is negative, which is not allowed.
After this exception was thrown from all StreamThreads (10 in total), the app was effectively frozen, as there was no further progress on the stream for several hours. No exception was thrown after that. When I restarted the app, it started to process only newly arriving messages.
Now the question is: what happened to the messages that came in between (after the exception was thrown and before the app was restarted)? In case those missing messages had no embedded timestamp (highly unlikely, as no changes happened in the broker and producer), shouldn't the app have thrown an exception for each such message? Or does the app stop the stream's progress when it first detects a negative timestamp in a message? Is there a way to handle this situation so that the app can keep the stream progressing, even after detecting a negative timestamp? My app uses Kafka Streams library version 0.10.0.1-cp1.
Note: I can easily put in a custom timestamp extractor which checks for a negative timestamp in each message, but that is a lot of unnecessary overhead for my app. All I want to understand is why the stream did not progress after detecting a message with a negative timestamp.
Even if you do not have any time-based operator, a Kafka Streams application checks if the timestamps returned by the timestamp extractor are valid, because timestamps are used to determine the processing order of records from different partitions, to ensure records are processed in order and all partitions are consumed in a time-aligned manner.
If a negative timestamp is detected, the application (or actually the corresponding thread) dies. Unfortunately, it is currently not possible to recover from such an exception and you would need to restart your application. See also Confluent FAQs: http://docs.confluent.io/3.1.1/streams/faq.html#invalid-timestamp-exception
If your application dies and you restart it, it will resume processing where it left off. Unfortunately, in Kafka 0.10.0.1 there is a bug (fixed in upcoming release 0.10.2) and in case of failure an incorrect offset can get committed and the application "steps over" some records. I assume this happened in your case, and if you have only some records with an invalid timestamp, those records might have been skipped, allowing your application to resume after restart. This behavior is actually a bug -- without the bug, Kafka Streams would try to process those records with an invalid timestamp again and again and fail every time until you provide a custom timestamp extractor that fixes the problem by returning a valid timestamp.
How to fix it:
The correct fix would be to provide a custom timestamp extractor that never returns an invalid (i.e., negative) timestamp.
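For example (sketch only; note that in 0.10.0.x the extract() method takes only the record, the second parameter was added in 0.10.2), such an extractor could fall back to a previously seen or wall-clock timestamp:

public class NonNegativeTimestampExtractor implements TimestampExtractor {
    @Override
    public long extract(final ConsumerRecord<Object, Object> record, final long previousTimestamp) {
        final long ts = record.timestamp();
        if (ts >= 0) {
            return ts;
        }
        // never return a negative value, so the stream thread does not die on broken records
        return previousTimestamp >= 0 ? previousTimestamp : System.currentTimeMillis();
    }
}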
I have no explanation for why you got invalid timestamps though... This is quite strange, and you might want to investigate your producer setup and try to figure out if there is a possibility that your producer puts an invalid timestamp (even if this is unlikely -- I have no other idea what the root cause of the problem could be).
Further remarks:
In the next release (0.10.2), handling invalid timestamps gets simplified and Kafka Streams provides more built-in timestamp extractors that handle records with invalid timestamps differently. For example, this allows you to auto-skip records with invalid timestamps instead of raising an error (current behavior). For more details see KIP-93: https://cwiki.apache.org/confluence/display/KAFKA/KIP-93%3A+Improve+invalid+timestamp+handling+in+Kafka+Streams
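For example, wiring in the log-and-skip extractor would look roughly like this (in 0.10.2 the config constant is TIMESTAMP_EXTRACTOR_CLASS_CONFIG; later releases renamed it to DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG):

Properties props = new Properties();
// skip (and log) records whose extracted timestamp is invalid instead of failing
props.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
        LogAndSkipOnInvalidTimestamp.class);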