Kafka Streams sliding aggregation window discards out-of-order record that belongs to the window when there is no grace period - apache-kafka

I get the following message and, honestly, I don't understand it.
o.a.k.s.k.i.KStreamSlidingWindowAggregate - Skipping record for expired window. topic=[...] partition=[0] offset=[16880] timestamp=[1662556875000] window=[1662542475000,1662556875000] expiration=[1662556942000] streamTime=[1662556942000]
streamTime=[1662556942000]
timestamp=[1662556875000]
streamTime-timestamp = 67s
Window size is 4 hours.
Grace period is 0.
Why was the record skipped so that I got no output message? It belongs to the window. Yes, the record is out-of-order.
Update:
After reading more about Kafka Streams, I understand that for each message it creates two windows:
(message time - window), and this window includes the message;
(message time + window), and this window excludes the message.
Window 1 is expired. Window 2 is not. That's why I didn't see an output message.
But logically it's wrong: the message belongs to a window, yet I have no output message.
Example
sliding window time diff = 10, grace = 0
stream time = 0
send message (time = 10, key = 2) -> output message for key = 2; stream time = 10
send message (time = 4, key = 1) -> no output message;
send message (time = 5, key = 1) -> no output message;
the last message belongs to the window (stream-time - window-time)
------ restart stream -------
stream time = 0
send message (time = 10, key = 2) -> output message for key = 2; stream time = 10
send message (time = 4, key = 2) -> 2 output messages

In Kafka Streams, sliding windows are event based: a new window is created each time a new record enters the window or an old record drops out of it. A window is defined by a record timestamp and a fixed duration.
Each record creates a window [record.timestamp - duration, record.timestamp], and
each dropped record creates a window [record.timestamp + 1ms, record.timestamp + 1ms + duration].
(Be aware that other stream processing frameworks use a totally different definition of 'sliding windows'.)
The record is not included in the window when
stream-time > window-end + grace-period
(https://kafka.apache.org/27/javadoc/org/apache/kafka/streams/kstream/SlidingWindows.html)
For your initial example, the grace period is zero and your window ends (at the record timestamp) before the stream-time; thus the record is not included in the window.
For the second example, I am not sure. My guess is the records with key=1 are expired because the stream-time (10) has exceeded the record timestamps (4, 5) and the grace period is 0. For the records with key=2, one window is created for the record with timestamp=10 and an update of the same window is emitted, because the record satisfies the condition above. However, no additional windows are created for the out-of-order record, because the grace period is zero.
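For reference, a minimal sketch (the topic name, serdes and builder are my assumptions, not code from the question) of the same kind of sliding-window aggregation with a non-zero grace period; since stream-time was only 67 s ahead of the record timestamp, any grace larger than that would have kept the record:
import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.*;

StreamsBuilder builder = new StreamsBuilder();
KTable<Windowed<String>, Long> counts = builder
        .stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()))
        .groupByKey()
        .windowedBy(SlidingWindows.withTimeDifferenceAndGrace(
                Duration.ofHours(4),      // window size from the question
                Duration.ofMinutes(5)))   // illustrative grace; anything > 67 s keeps the record
        .count();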

Related

Kafka Streams - GroupBy - Late Event - persistentWindowStore - WindowBy with Grace Period and Suppress

My goal is to calculate successful and failed messages from source to destination per second and to sum up their results on a daily basis.
I had two options to do that:
Stream the events, then group them by time#source#destination:
KeyValueBytesStoreSupplier streamStore = Stores.persistentKeyValueStore("store-name");
sourceStream.selectKey((k, v) -> v.getDataTime() + KEY_SEPERATOR + SRC + KEY_SEPERATOR + DEST)
    .groupByKey()
    .aggregate(
        DO SOME Aggregation,
        Materialized.<String, AggregationObject>as(streamStore)
            .withKeySerde(Serdes.String())
            .withValueSerde(AggregationObjectSerdes));
After trying this first approach we noticed that the state store keeps growing, because the number of unique keys keeps increasing and, if I am correct, because the state topics are only "compact" they never expire.
NumberOfUniqueKeys = 86,400 seconds in a day X SOURCE X DESTINATION
Then we thought that if we did not put a time field into the key we could reduce the state store size, so we tried a windowing operation as the second approach:
using a windowing operation with persistentWindowStore, CustomTimeStampExtractor, windowedBy and suppress
WindowBytesStoreSupplier streamStore = Stores.persistentWindowStore("store-name", Duration.ofHours(6), Duration.ofSeconds(1), false);
sourceStream.selectKey((k, v) -> SRC + KEY_SEPERATOR + DEST)
    .groupByKey()
    .windowedBy(TimeWindows.of(Duration.ofSeconds(1)).grace(Duration.ofSeconds(5)))
    .aggregate(
        {
            DO SOME Aggregation
        },
        Materialized.<String, AggregationObject>as(streamStore)
            .withKeySerde(Serdes.String())
            .withValueSerde(AggregationObjectSerdes))
    .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()))
    .toStream();
After trying this second approach we reduced the state store size, but then we had a problem with late-arriving events. So we added a 5-second grace period together with the suppress operation; however, using a grace period and suppress still did not guarantee that all late-arriving events would be handled, and another side effect of the suppress operation is latency, because it only emits the aggregation result after the window's grace period has passed.
BTW
using the windowing operation caused us to get a WARNING message like
"WARN 1 --- [-StreamThread-2] o.a.k.s.state.internals.WindowKeySchema : Warning: window end time was truncated to Long.MAX"
I checked the reason in the source code and found this:
https://github.com/a0x8o/kafka/blob/master/streams/src/main/java/org/apache/kafka/streams/state/internals/WindowKeySchema.java
/**
 * Safely construct a time window of the given size,
 * taking care of bounding endMs to Long.MAX_VALUE if necessary
 */
static TimeWindow timeWindowForSize(final long startMs,
                                    final long windowSize) {
    long endMs = startMs + windowSize;
    if (endMs < 0) {
        LOG.warn("Warning: window end time was truncated to Long.MAX");
        endMs = Long.MAX_VALUE;
    }
    return new TimeWindow(startMs, endMs);
}
BUT it actually does not make any sense to me how endMs can be lower than 0...
Questions:
If we go with approach 1, how can we reduce the state store size? In approach 1 it was guaranteed that all events would be processed and no event would be missed because of latency.
If we go with approach 2, how should I tune my logic to catch late-arriving data and reduce latency?
Why do I get the warning message in approach 2, although all time fields in my model are positive?
What other options can you suggest besides these two approaches?
I need some expert help :)
BR,
According to the Kafka mailing list, regarding the warning message
WARNING message like "WARN 1 --- [-StreamThread-2] o.a.k.s.state.internals.WindowKeySchema : Warning: window end time was truncated to Long.MAX"
I was told:
You can get this message "o.a.k.s.state.internals.WindowKeySchema :
Warning: window end time was truncated to Long.MAX" when your
TimeWindowedDeserializer is created without a windowSize. There are two
constructors for a TimeWindowedDeserializer; are you using the one with
windowSize?
https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/kstream/TimeWindowedDeserializer.java#L46-L55
It calls WindowKeySchema with a Long.MAX_VALUE
https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/kstream/TimeWindowedDeserializer.java#L84-L90
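To illustrate (a hedged sketch, not code from the application above; TimeWindowedDeserializer and StringDeserializer are the standard kafka-streams/kafka-clients classes): when the deserializer has no window size it falls back to Long.MAX_VALUE, so startMs + windowSize wraps around to a negative long, which is exactly the endMs < 0 branch shown earlier. Passing the real window size to the constructor avoids it:
long startMs = System.currentTimeMillis();
long endMs = startMs + Long.MAX_VALUE;   // overflows to a negative value -> warning and truncation

// Constructor that takes an explicit window size (1 second, matching the windows above):
Deserializer<Windowed<String>> windowedKeyDeserializer =
        new TimeWindowedDeserializer<>(new StringDeserializer(), Duration.ofSeconds(1).toMillis());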

Kafka streams dropping messages during windowing and on restart

I am working on a Kafka Streams application with the following topology:
private final Initializer<Set<String>> eventInitializer = () -> new HashSet<>();
final StreamsBuilder streamBuilder = new StreamsBuilder();
final KStream<String, AggQuantityByPrimeValue> eventStreams = streamBuilder.stream("testTopic",
    Consumed.with(Serdes.String(), valueSerde));
final KStream<String, Value> filteredStreams = eventStreams
    .filter((key, clientRecord) -> recordValidator.isAllowedByRules(clientRecord));
final KGroupedStream<Integer, Value> groupedStreams = filteredStreams.groupBy(
    (key, transactionEntry) -> transactionEntry.getNodeid(),
    Serialized.with(Serdes.Integer(), valueSerde));
/* Hopping window */
final TimeWindowedKStream<Integer, Value> windowedGroupStreams = groupedStreams
    .windowedBy(TimeWindows.of(Duration.ofSeconds(30)).advanceBy(Duration.ofSeconds(25))
        .grace(Duration.ofSeconds(0)));
/* Aggregating the events */
final KStream<Windowed<Integer>, Set<String>> suppressedStreams = windowedGroupStreams
    .aggregate(eventInitializer, countAggregator, Materialized.as("counts-aggregate"))
    .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded())
        .withName("suppress-window"))
    .toStream();
suppressedStreams.foreach((windowed, value) -> eventProcessor.publish(windowed.key(), value));
return new KafkaStreams(streamBuilder.build(), config.getKafkaConfigForStreams());
I am observing that intermittently a few events are getting dropped during/after windowing.
For example:
All records can be seen/printed in the isAllowedByRules() method; they are valid (allowed by the filters) and consumed by the stream.
But when printing the events in countAggregator, I can see that a few events do not come through it.
Current configurations for streams:
Properties streamsConfig = new Properties();
streamsConfig.put(StreamsConfig.APPLICATION_ID_CONFIG, "kafka-app-id");
streamsConfig.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, <bootstraps-server>);
streamsConfig.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
streamsConfig.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 30000);
streamsConfig.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 5);
streamsConfig.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 10000);
streamsConfig.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
streamsConfig.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 10485760);
streamsConfig.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 10485760);
streamsConfig.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 10485760);
/*For window buffering across all threads*/
streamsConfig.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 52428800);
streamsConfig.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.Integer().getClass().getName());
streamsConfig.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, customSerdesForSet);
Initially I was using a tumbling window, but I found that mostly at the end of a window a few events were getting lost, so I changed to a hopping window (better to duplicate than to lose). Then the dropped events went down to zero. But today, after almost 4 days, I again saw a few dropped events, and there is one pattern among them: they are late by almost a minute compared to the other events that were produced together with them. My expectation, then, is that these late events should show up in one of the future windows, but that didn't happen. Correct me here if my understanding is not right.
Also, as mentioned in the title, on a (graceful) restart of the streams application I could see a few events getting lost again at the aggregation step, even though they were processed by the isAllowedByRules() method.
I have searched a lot on Stack Overflow and other sites but couldn't find the root cause of this behaviour. Is it related to some configuration that I am missing or setting incorrectly, or could it be due to some other reason?
From my understanding, you have an empty grace period:
/* Hopping window */
...
.grace(Duration.ofSeconds(0))
So your window is closed without permitting any late arrivals.
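For reference, a hedged sketch of what a non-zero grace period could look like on your hopping window; the 60-second value is only illustrative, chosen because your late records were about a minute behind:
/* Hopping window with a grace period (fragment of the windowedBy call above) */
.windowedBy(TimeWindows.of(Duration.ofSeconds(30))
        .advanceBy(Duration.ofSeconds(25))
        .grace(Duration.ofSeconds(60)))
Keep in mind that, combined with suppress(untilWindowCloses(...)), a larger grace period also delays when the final result for a window is emitted.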
Then, regarding your sub-question:
But then expectation is that these late events should come in any of the future windows but that didn't happen. Correct me here if my understanding is not right.
Maybe you're mixing event time and processing time.
Your record will be categorized as 'late' if the timestamp of the record (added by the producer at produce time, or by the brokers when it arrives in the cluster if not set by the producer) is outside your current window.
Here is an example with 2 records '*'.
Their event times (et1 and et2) fit inside the window:

|        window        |
t1                     t2
|    *           *     |
    et1         et2

But the processing time of et2 (pt2) is in fact as follows:

|        window        |
t1                     t2
|    *                 |      *
    pt1                      pt2

Here the window is a slice of time between t1 and t2 (processing time).
et1 and et2 are respectively the event times of the 2 records '*'.
et1 and et2 are timestamps set in the records themselves.
In this example et1 and et2 are both between t1 and t2, but et2 was received after the window closed; since your grace period is 0, it will be skipped.
This might be an explanation.

Siddhi delayed query

I am struggling to understand this query:
from heartbeats#window.time(1 hour) insert expired events into delayedStream;
from every e = heartbeats -> e2 = heartbeats[deviceId == e.deviceId]
or expired = delayedStream[deviceId == e.deviceId]
within 1 hour 10 minutes
select e.deviceId, e2.deviceId as id2, expired.deviceId as id3
insert into tmpStream;
The first query delays all events by 1 hour.
The second query matches events that occurred 1 hour ago for which no newer events have been seen.
This works, but I don't understand this part:
from every e = heartbeats -> e2 = heartbeats[deviceId == e.deviceId] or expired = delayedStream[deviceId == e.deviceId]
The second part of the query (or expired = ...) checks whether the event with the given deviceId is on the delayedStream. What is the purpose of the first part, and how does it all come together so that this query finds devices that have sent no data for more than 1 hour?
I don't think the above query will be accurate if you want to check whether a sensor did not send a reading in the last 1 hour. I tweaked the windows to 1 minute and sent 2 events:
[2019-07-19 16:48:23,774] heartbeats : Event{timestamp=1563535103772, data=[1], isExpired=false}
[2019-07-19 16:48:24,696] tmpStream : Event{timestamp=1563535104694, data=[1, 1, null], isExpired=false}
[2019-07-19 16:48:24,697] heartbeats : Event{timestamp=1563535104694, data=[1], isExpired=false}
[2019-07-19 16:49:23,774] tmpStream : Event{timestamp=1563535163772, data=[1, null, 1], isExpired=false}
Let's say events arrive at 10:00 and 10:15; the outputs at tmpStream will be at 10:15 (first part) and 11:00 (due to the delayed stream). The second match is incorrect, as it should occur at 11:15 per the use case.
However, if you want to improve the query, you can use Siddhi's feature for detecting non-occurring events (https://siddhi.io/en/v5.0/docs/query-guide/#detecting-non-occurring-events); it will be simpler.

Spark sliding window is not triggering when there is no data from the data stream

I have a simple Spark SQL query with a time window of 5 minutes and a trigger policy of every minute:
val withTime = eventStreams(0).selectExpr("*", "cast(cast(parsed.time as long)/1000 as timestamp) as event_time")
val momentumDataAggQuery = withTime
.selectExpr("parsed.symbol", "parsed.bid", "parsed.ask", "event_time")
.withWatermark("event_time", "1 minutes")
.groupBy(col("symbol"), window(col("event_time"), "5 minutes", "60 seconds")) // change to 60 minutes
.agg(first("bid", true).as("first_bid"), first("ask").as("first_ask"), last("bid").as("last_bid"), last("ask").as("last_ask"))
val momentumDataQuery = momentumDataAggQuery
.selectExpr("window.start", "window.end", "ln(((last_bid + last_ask)/2)/((first_bid + first_ask)/2)) as momentum", "symbol")
When there is data from the stream, the query gets triggered every minute to calculate 'momentum', but it stops when there are no new data points. I expected it to keep using the old data and update every minute even when there are not enough new data points.
Consider the following example:
In the 1st window there is only one data point, so the log return is zero.
In the 2nd window there are only two data points, so it takes log(97.5625/97.4625), where 97.5625 was received at 11:53 and 97.4625 was received at 11:52:10, within the time window 12:19 <> 12:54... it went on calculating the log return as long as there were sufficient data points.
However, when there were NO more data points after 15:56:12, say for the window 12:54 <> 12:59, I expected it to take ln(97.8625/97.6625), where the inputs were generated at 11:56:12 and 11:54:11 respectively. That was not the case, though: those results were never generated.
Is there something wrong with my spark sql?

Issues using retract in then condition of a rule

I am trying to write a rule to detect whether a given event has occurred 'n' number of times in the last 'm' duration of time.
I am using Drools version 5.4.Final. I have also tried 5.5.Final with no effect.
I have found that there are a couple of Conditional Elements, as Drools calls them, accumulate and collect. I have used collect in my sample rule below:
rule "check-login-attack-rule-1"
dialect "java"
when
$logMessage: LogMessage()
$logMessages : ArrayList ( size >= 3 )
from collect(LogMessage(getAction().equals(Action.Login)
&& isProcessed() == false)
over window:time(10s))
then
LogManager.debug(Poc.class, "!!!!! Login Attack detected. Generating alert.!!!"+$logMessages.size());
LogManager.debug(Poc.class, "Current Log Message: "+$logMessage.getEventName()+":"+(new Date($logMessage.getTime())));
int size = $logMessages.size();
for(int i = 0 ; i < size; i++) {
Object msgObj = $logMessages.get(i);
LogMessage msg = (LogMessage) msgObj;
LogManager.debug(Poc.class, "LogMessage: "+msg.getEventName()+":"+(new Date(msg.getTime())));
msg.setProcessed(true);
update(msgObj); // Does not work. Rule execution does not proceed beyond this point.
// retract(msgObj) // Does not work. Rule execution does not proceed beyond this point.
}
// Completed processing the logs over a given window. Now removing the processed logs.
//retract($logMessages) // Does not work. Rule execution does not proceed beyond this point.
end
The code to inject logs is below. It injects a log every 3 seconds and fires the rules:
final StatefulKnowledgeSession kSession = kBase.newStatefulKnowledgeSession();
long msgId = 0;
while (true) {
    // Generate a log message every 3 secs.
    // Every alternate log message will satisfy a rule condition.
    LogMessage log = new LogMessage();
    log.setEventName("msg:" + msgId);
    log.setAction(LogMessage.Action.Login);
    LogManager.debug(Poc.class, "PUSHING LOG: " + log.getEventName() + ":" + log.getTime());
    kSession.insert(log);
    kSession.fireAllRules();
    LogManager.debug(Poc.class, "PUSHED LOG: " + log.getEventName() + ":" + (new Date(log.getTime())));
    // Sleep for 3 secs
    try {
        sleep(3 * 1000L);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    msgId++;
}
With this, what I could achieve is checking for the existence of the above LogMessages within the last 10 secs. I could also find the exact set of LogMessages that occurred in the last 10 secs and triggered the rule.
The problem is that once these messages are processed, they should not take part in the next cycle of evaluation. This is something I have not been able to achieve. I'll explain with an example.
Consider the timeline below. It shows the insertion of log messages and the alerts that should be generated.
Expected Result
Secs -- Log -- Alert
0 -- LogMessage1 -- No Alert
3 -- LogMessage2 -- No Alert
6 -- LogMessage3 -- Alert1 (LogMessage1, LogMessage2, LogMessage3)
9 -- LogMessage4 -- No Alert
12 -- LogMessage5 -- No Alert
15 -- LogMessage6 -- Alert2 (LogMessage4, LogMessage5, LogMessage6)
But what is happening with the current code is:
Actual Result
Secs -- Log -- Alert
0 -- LogMessage1 -- No Alert
3 -- LogMessage2 -- No Alert
6 -- LogMessage3 -- Alert1 (LogMessage1, LogMessage2, LogMessage3)
9 -- LogMessage4 -- Alert2 (LogMessage2, LogMessage3, LogMessage4)
12 -- LogMessage5 -- Alert3 (LogMessage3, LogMessage4, LogMessage5)
15 -- LogMessage6 -- Alert4 (LogMessage4, LogMessage5, LogMessage6)
Essentially, I am not able to discard the messages that have already been processed and have taken part in an alert generation. I tried to use retract to remove the processed facts from working memory, but when I added retract to the then part of the rule, the rules stopped firing altogether. I have not been able to figure out why the rules stop firing after adding the retract.
Kindly let me know where I am going wrong.
You seem to be forgetting to mark the other 3 facts in the list as processed. You would need a helper class registered as a global to do so, because it has to be done in a for loop (a sketch of that idea follows the list below). Otherwise, these groups of messages can trigger the rule as well:
1 - no triggering
1,2 - no triggering
1,2,3 - triggers
2,3,4 - triggers, because a new fact is added and 2 and 3 were in the list
3,4,5 - triggers, because a new fact is added and 3 and 4 were in the list
and so on
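A hedged sketch of that helper-as-global idea (ProcessedMarker and markAll are made-up names for illustration, not a Drools API): register the helper on the session before the insert/fire loop, declare it as a global in the DRL, and let the rule's then block hand it the collected $logMessages so that every message of a matched batch is flagged and the isProcessed() == false constraint keeps it out of later matches.
import java.util.List;
import org.drools.runtime.StatefulKnowledgeSession;

public class ProcessedMarker {
    private final StatefulKnowledgeSession session;

    public ProcessedMarker(StatefulKnowledgeSession session) {
        this.session = session;
    }

    // Mark every LogMessage of a matched batch as processed and notify working memory.
    public void markAll(List<?> batch) {
        for (Object o : batch) {
            LogMessage msg = (LogMessage) o;
            msg.setProcessed(true);
            session.update(session.getFactHandle(msg), msg);
        }
    }
}

Registration in the driver code, before the while (true) loop (the DRL also needs a matching declaration, global ProcessedMarker processedMarker;):
kSession.setGlobal("processedMarker", new ProcessedMarker(kSession));
The rule's then block would then call processedMarker.markAll($logMessages) instead of updating each fact inline.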
hope this helps