Kafka Late Arrival message processing - apache-kafka

We are new to Kafka and are trying to understand how late-arriving records are handled, based on the details below. Please help us with the following questions.
To process late-arriving records, which time parameter can be chosen to determine how long we can wait for them?
If the records still have not arrived by that time, what happens? Will they be discarded and never processed?

Consumers performing time-series analysis are the only ones that care about time; Kafka itself does not.
For example, say you had a device emitting metrics for some game while in airplane mode, caching data locally until the network reconnected.
In that case, you would collect both the time the events originally occurred while offline and the time the records reached Kafka.
Topics are append-only, so your analysis can only see the data in the order it entered the system; it is up to your analysis to work out the minimum time window of the original events.
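As an illustration of carrying both times (this sketch is not from the original answer; the topic, key, and payload fields are hypothetical), a producer can put the original event time into the record timestamp and into the payload, while the time of arrival at Kafka is either the broker's log-append time or simply when the consumer observes the record:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OfflineMetricsProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Pretend the event happened an hour ago, while the device was offline.
        long originalEventTime = System.currentTimeMillis() - 3_600_000L;

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The record timestamp carries the original event time. If the topic is
            // configured with message.timestamp.type=LogAppendTime the broker will
            // overwrite it with the ingestion time, so the payload keeps a copy too.
            ProducerRecord<String, String> record = new ProducerRecord<>(
                    "game-metrics",        // hypothetical topic
                    null,                  // let the partitioner pick the partition
                    originalEventTime,     // explicit event-time timestamp
                    "device-42",           // key
                    "{\"eventTime\":" + originalEventTime + ",\"metric\":\"fps\",\"value\":58}");
            producer.send(record);
        }
    }
}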

Related

Is time-based log compaction in Kafka based on wall-clock time, event time, or a mix of both?

I have been trying to understand how to set up time-based log compaction, but I still can't understand its behavior properly. In particular, I am interested in the behavior of log.roll.ms.
What I would like to understand is the following statement taken from the official Kafka documentation: https://kafka.apache.org/documentation.html#upgrade_10_1_breaking
The log rolling time is no longer depending on log segment create time. Instead it is now based on the timestamp in the messages. More specifically, if the timestamp of the first message in the segment is T, the log will be rolled out when a new message has a timestamp greater than or equal to T + log.roll.ms
Regarding T + log.roll.ms:
a) I understand that T is based on the timestamp of the message and can therefore be considered event time. However, what is the clock behind log.roll.ms? In Kafka Streams, for instance, when working with event time it is clear what stream time is: the highest timestamp seen so far. So does the time for log rolling progress with the timestamps of the messages, and is therefore event time, or does it progress with the wall-clock time of the brokers?
I thought it was event time, but then I saw the following talk, https://www.confluent.io/kafka-summit-san-francisco-2019/whats-the-time-and-why/, in which Matthias J. Sax talks about it. His talk left me confused; it seems that rolling is indeed driven by both the event time T and wall-clock time.
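For context (not part of the original question): log.roll.ms is a broker-wide default, and the per-topic counterpart is segment.ms. A minimal sketch of setting it with the AdminClient, assuming a hypothetical topic named my-topic and a one-hour roll interval:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class SegmentRollConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
            // segment.ms is the topic-level override of the broker's log.roll.ms:
            // a segment whose first message has timestamp T becomes eligible for
            // rolling once a message with timestamp >= T + segment.ms arrives.
            AlterConfigOp setRoll = new AlterConfigOp(
                    new ConfigEntry("segment.ms", "3600000"),
                    AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(
                    Collections.singletonMap(topic, Collections.singletonList(setRoll)))
                 .all().get();
        }
    }
}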

How does Google Dataflow determine the watermark for various sources?

I was just reviewing the documentation to understand how Google Dataflow handles watermarks, and it just mentions the very vague:
The data source determines the watermark
It seems you can add more flexibility through withAllowedLateness, but what happens if we do not configure it?
Thoughts so far
I found something indicating that if your source is Google Pub/Sub, it already has a watermark which will be used, but what if the source is something else? For example, a Kafka topic (which I believe does not inherently have a watermark, so I don't see how something like this would apply).
Is it always 10 seconds, or just 0? Does it look at the last few minutes to determine the maximum lag, and if so, how many minutes (surely not since forever, as that would be distorted by the initial start of processing, which might see a giant lag)? I could not find anything on the topic.
I also searched outside the context of Google Dataflow for Apache Beam documentation, but did not find anything explaining this either.
When using Apache Kafka as a data source, each Kafka partition may have a simple event time pattern (ascending timestamps or bounded out-of-orderness). However, when consuming streams from Kafka, multiple partitions often get consumed in parallel, interleaving the events from the partitions and destroying the per-partition patterns (this is inherent in how Kafka’s consumer clients work).
In that case, you can use Flink’s Kafka-partition-aware watermark generation. Using that feature, watermarks are generated inside the Kafka consumer, per Kafka partition, and the per-partition watermarks are merged in the same way as watermarks are merged on stream shuffles.
For example, if event timestamps are strictly ascending per Kafka partition, generating per-partition watermarks with the ascending-timestamps watermark generator will result in perfect overall watermarks. Note that no TimestampAssigner is provided in the example; the timestamps of the Kafka records themselves are used instead.
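The example the quoted documentation refers to is not reproduced above. A minimal sketch in the same spirit, assuming Flink's FlinkKafkaConsumer connector and a hypothetical topic named events:

import java.util.Properties;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class PartitionAwareWatermarks {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "watermark-demo");

        FlinkKafkaConsumer<String> kafkaSource =
                new FlinkKafkaConsumer<>("events", new SimpleStringSchema(), props);

        // Watermarks are generated inside the consumer, per Kafka partition, and then
        // merged. No TimestampAssigner is set, so the Kafka record timestamps are used.
        kafkaSource.assignTimestampsAndWatermarks(
                WatermarkStrategy.<String>forMonotonousTimestamps());

        DataStream<String> stream = env.addSource(kafkaSource);
        stream.print();
        env.execute("partition-aware-watermarks");
    }
}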
In any data processing system, there is a certain amount of lag between the time a data event occurs (the “event time”, determined by the timestamp on the data element itself) and the time the actual data element gets processed at any stage in your pipeline (the “processing time”, determined by the clock on the system processing the element). In addition, there are no guarantees that data events will appear in your pipeline in the same order that they were generated.
For example, let’s say we have a PCollection that’s using fixed-time windowing, with windows that are five minutes long. For each window, Beam must collect all the data with an event time timestamp in the given window range (between 0:00 and 4:59 in the first window, for instance). Data with timestamps outside that range (data from 5:00 or later) belong to a different window.
However, data isn’t always guaranteed to arrive in a pipeline in time order, or to always arrive at predictable intervals. Beam tracks a watermark, which is the system’s notion of when all data in a certain window can be expected to have arrived in the pipeline. Once the watermark progresses past the end of a window, any further element that arrives with a timestamp in that window is considered late data.
From our example, suppose we have a simple watermark that assumes approximately 30s of lag time between the data timestamps (the event time) and the time the data appears in the pipeline (the processing time), then Beam would close the first window at 5:30. If a data record arrives at 5:34, but with a timestamp that would put it in the 0:00-4:59 window (say, 3:38), then that record is late data.
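A minimal sketch of that example in the Beam Java SDK, using five-minute fixed windows; the one-minute allowed lateness is an arbitrary placeholder, not a value from the documentation:

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TimestampedValue;
import org.joda.time.Duration;
import org.joda.time.Instant;

public class FixedWindowExample {
    public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Two elements with event-time timestamps at 0:00 and 3:38; both fall in the
        // first five-minute window, regardless of when they actually arrive.
        PCollection<String> events = p.apply(Create.timestamped(
                TimestampedValue.of("on-time", new Instant(0L)),
                TimestampedValue.of("late", new Instant(3 * 60_000L + 38_000L))));

        // Five-minute fixed windows; elements whose timestamps fall in a window that the
        // watermark has already passed are still accepted for one minute of allowed lateness.
        PCollection<String> windowed = events.apply(
                Window.<String>into(FixedWindows.of(Duration.standardMinutes(5)))
                        .withAllowedLateness(Duration.standardMinutes(1))
                        .discardingFiredPanes());

        p.run().waitUntilFinish();
    }
}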

Deduplication using Kafka-Streams

I want to deduplicate records in my kafka-streams application, which uses a state store, and I am following this very good example:
https://github.com/confluentinc/kafka-streams-examples/blob/5.5.0-post/src/test/java/io/confluent/examples/streams/EventDeduplicationLambdaIntegrationTest.java
I have a few questions about this example.
If I understand correctly, this example briefly does the following:
A message comes into the input topic.
Look in the store: if the key does not exist, write it to the state store and forward the record.
If the key does exist, drop the record, so deduplication is applied.
But in the code example there is a time-window size that you can set, as well as a retention time for the messages in the state store. You can also check whether a record is in the store by giving a time range from timeFrom to timeTo:
final long eventTime = context.timestamp();
final WindowStoreIterator<String> timeIterator = store.fetch(
    key,
    eventTime - leftDurationMs,   // look back leftDurationMs before the event time
    eventTime + rightDurationMs   // and look ahead rightDurationMs after it
);
What is the actual purpose of timeTo and timeFrom? I am not sure why I am checking the next time interval, since that would mean checking future messages that have not arrived in my topic yet.
My second question: is this time interval related to, and should it hit, the previous time window?
If I am able to search a time interval by giving timeTo and timeFrom, why is the time-window size important?
If I set the window size to 12 hours, am I guaranteed that messages are deduplicated for 12 hours?
I think of it like this:
The first message with key "A" arrives in the first minute after the application starts; after 11 hours, a message with key "A" arrives again. Can I catch this duplicated message by giving a large enough time interval, such as eventTime - 12 hours?
Thanks for any ideas!
The time-window size decides how long you want deduplication to run: forever, or only for, say, 5 minutes. Kafka has to store these records, so a large time window may consume a lot of resources on your server.
timeFrom and timeTo exist because your record (event) may arrive or be processed late in Kafka, so the event time of the record may be one minute ago rather than now. While Kafka Streams is processing such an "old" record, it also needs to take into account records that are not that old, i.e. records that are in the "future" relative to the old one.
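A minimal sketch of the duplicate check built around the fetch call above, loosely following the Confluent example (the method name, value type, and variable names are assumptions, not the exact example code):

import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.WindowStore;
import org.apache.kafka.streams.state.WindowStoreIterator;

public class DeduplicationCheck {
    // Returns true if a record with this key was already seen within the surrounding
    // time range; otherwise remembers the key so later duplicates can be detected.
    static boolean isDuplicate(final String key, final ProcessorContext context,
                               final WindowStore<String, Long> store,
                               final long leftDurationMs, final long rightDurationMs) {
        final long eventTime = context.timestamp();
        try (final WindowStoreIterator<Long> iterator = store.fetch(
                key, eventTime - leftDurationMs, eventTime + rightDurationMs)) {
            final boolean seen = iterator.hasNext();
            if (!seen) {
                // Store the key at its event time; the stored value is arbitrary here.
                store.put(key, eventTime, eventTime);
            }
            return seen;
        }
    }
}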

Synchronize Data From Multiple Data Sources

Our team is trying to build a predictive maintenance system whose task is to look at a set of events and predict whether these events depict a set of known anomalies or not.
We are at the design phase and the current system design is as follows:
The events may occur on multiple sources of an IoT system (such as cloud platform, edge devices or any intermediate platforms)
The events are pushed by the data sources into a message queueing system (currently we have chosen Apache Kafka).
Each data source has its own queue (Kafka Topic).
From the queues, the data is consumed by multiple inference engines (which are actually neural networks).
Depending upon the feature set, an inference engine will subscribe to multiple Kafka topics and stream data from those topics to continuously output inferences.
The overall architecture follows the single-responsibility principle meaning that every component will be separate from each other and run inside a separate Docker container.
Problem:
In order to classify a set of events as an anomaly, the events have to occur in the same time window. For example, say there are three data sources pushing their respective events into Kafka topics, but for some reason the data is not synchronized.
So one of the inference engines pulls the latest entries from each of the Kafka topics, but the corresponding events in the pulled data do not belong to the same time window (say, 1 hour). That will result in invalid predictions due to out-of-sync data.
Question
We need to figure out how we can make sure that the data from all three sources is pushed in order, so that when an inference engine requests entries (say the last 100 entries) from multiple Kafka topics, the corresponding entries in each topic belong to the same time window.
I would suggest KSQL, which is a streaming SQL engine that enables real-time data processing against Apache Kafka. It also provides nice functionality for Windowed Aggregation etc.
There are three ways to define windows in KSQL: hopping windows, tumbling windows, and session windows. Hopping and tumbling windows are time windows, because they're defined by fixed durations that you specify. Session windows are dynamically sized based on incoming data and defined by periods of activity separated by gaps of inactivity.
In your context, you can use KSQL to query and aggregate the topics of interest using Windowed Joins. For example,
SELECT t1.id, ...
FROM topic_1 t1
INNER JOIN topic_2 t2
WITHIN 1 HOURS
ON t1.id = t2.id;
Some suggestions -
Handle delay at the producer end -
Ensure all three producers always send data to the Kafka topics in sync by tuning batch.size and linger.ms (see the sketch below).
For example, if linger.ms is set to 1000, messages are batched for at most one second before being sent to Kafka.
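A minimal sketch of those producer settings (the broker address, topic, and payload are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class LowLatencyProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // Wait at most 1 second to fill a batch before sending it.
        props.put(ProducerConfig.LINGER_MS_CONFIG, "1000");
        // Send the batch earlier if it reaches 32 KB.
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, "32768");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("sensor-events", "device-1", "{\"temp\":21.5}"));
        }
    }
}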
Handle delay at the consumer end -
Any streaming engine on the consumer side (be it Kafka Streams, Spark Structured Streaming, or Flink) provides windowing functionality to join/aggregate stream data based on keys while accounting for delayed data via windowing functions.
Check the Flink windows documentation for guidance on how to choose the right window type: link
To handle this scenario, the data sources must provide some mechanism for the consumer to realize that all relevant data has arrived. The simplest solution is to have each data source publish batches with a batch ID (a GUID of some form). Consumers can then wait until the next batch ID shows up, marking the end of the previous batch. This approach assumes sources will not skip a batch, otherwise they will get permanently misaligned. There is no algorithm to detect this, but you might have some fields in the data that show discontinuity and allow you to realign the data.
A weaker version of this approach is to either just wait x seconds and assume all sources succeed within that much time, or to look at some form of timestamps (logical or wall clock) to detect that a source has moved on to the next time window, implicitly signaling completion of the last window.
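A rough sketch of the batch-ID idea on the consumer side, assuming (purely for illustration) that each record's key carries the batch ID and the topic is named source-a:

import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class BatchBoundaryConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "inference-engine-1");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("source-a"));
            String currentBatchId = null;
            List<String> currentBatch = new ArrayList<>();

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    String batchId = record.key(); // batch ID carried in the key (assumption)
                    if (currentBatchId != null && !Objects.equals(batchId, currentBatchId)) {
                        // A new batch ID marks the previous batch as complete.
                        System.out.printf("batch %s complete with %d events%n",
                                currentBatchId, currentBatch.size());
                        currentBatch.clear();
                    }
                    currentBatchId = batchId;
                    currentBatch.add(record.value());
                }
            }
        }
    }
}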
The following recommendations should maximize the success of event synchronization for the anomaly-detection problem using time-series data.
Use a network time synchronization service on all producer/consumer nodes.
Send a heartbeat message from each producer every x units of time, aligned to a fixed start time (see the sketch after this list). For example: the messages are sent every two minutes, at the start of the minute.
Build predictors for producer message delay; use the heartbeat messages to compute it.
With these primitives, we should be able to align the time-series events, accounting for time drifts due to network delays.
At the inference-engine side, expand your windows at a per-producer level to sync up events across producers.
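A minimal sketch of such a heartbeat producer (the topic name, producer ID, and two-minute interval are placeholders):

import java.util.Properties;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class HeartbeatProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        long intervalMs = 2 * 60_000L;                                // every two minutes
        long now = System.currentTimeMillis();
        long delayToNextBoundary = intervalMs - (now % intervalMs);   // align to the interval boundary

        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            long sendTime = System.currentTimeMillis();
            // Consumers compare this send time with the record's arrival time
            // to estimate the delay introduced by this producer.
            producer.send(new ProducerRecord<>("heartbeats", "producer-1", Long.toString(sendTime)));
        }, delayToNextBoundary, intervalMs, TimeUnit.MILLISECONDS);
    }
}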

GCP Dataflow: System Lag for streaming from Pub/Sub IO

We use "System Lag" to check the health of our Dataflow jobs. For example if we see an increase in system lag, we will try to see how to bring this metric down. There are few question regarding this metric.
1) What does system lag exactly means?
The maximum time that an item of data has been awaiting processing
Above is what we see in GCP Console when we hit information icon. What does an item of data mean in this case? Stream processing has concept of Windowing, event time vs processing time, watermark, etc. When is an item considered awaiting to be processed? For example is it simply when the message arrives regardless of its state?
2) What is the optimum threshold for this metric?
We try to keep this metric as low as possible, but we don't have any recommendation on how low we should keep it. For example do we have some recommendation such as keeping system lag between 20s to 30s is optimum.
3)How does system lag implicates sinks
How does system lag affect latency of the event itself?
Depending on the pipeline being executed there are a number of places that elements may be queued up awaiting processing. This is typically when the elements are passed between machines, such as within a GroupByKey, although the PubSub source also reflects the oldest unacked element.
For a given step (sinks included) "System Lag" measures the age of the oldest element in the closest input queue to that step.
It is not unusual for there to be spikes in this measure -- elements are pulled off the queue after they are processed, so if many new elements are delivered it may take a while before the queue is back to a manageable size. What is important is that the system lag goes back down after these spikes.
The latency of a sink depends on several factors:
The rate that elements arrive in the pipeline limits the rate the input watermark advances.
The configuration of windowing and triggers affect how long the pipeline must wait before emitting a given window.
System lag is a measure of how much added delay is currently being introduced by code executing within the pipeline.
It is likely easier to look at the "Data Watermark" of the sink, which reports the point in (event) time up to which the sink has processed data.