We have an IoT device that sends in sensor readings when a user interacts with it. The user will typically interact with the device multiple times, with each interaction generating a sensor reading. The device then transmits each sensor reading to a backend API, but neither order nor timeliness is guaranteed.
On the backend API we must group all sensor readings that occurred within the same 2 minutes into a single event, and timestamp that event with the timestamp of the earliest reading in the window. Because we have no ordering or timeliness guarantees from the IoT device, we currently use a schema where each user has one document per day containing an array of events, and each event in turn holds an array of sensor readings. On every incoming sensor reading we query this whole document and determine, in application code, the correct event based on the existing sensor readings and the new one just received.
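Roughly, the document shape looks like this (a sketch only; class and field names below are illustrative, only the nesting comes from our schema):
// Rough sketch of the per-user, per-day document described above.
// Field and class names are hypothetical; only the nesting (document -> events -> readings) is ours.
import java.time.Instant
import java.time.LocalDate

data class SensorReading(val value: Double, val recordedAt: Instant)

data class WindowedEvent(
    val eventTimestamp: Instant,          // timestamp of the earliest reading in the 2-minute window
    val readings: List<SensorReading>
)

data class DailyDocument(
    val userId: String,
    val day: LocalDate,
    val events: List<WindowedEvent>
)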
We've run into performance and locking issues from querying the entire document, doing this event windowing in application code, and then saving the entire document back. Is there a way to do this event windowing entirely in MongoDB? What pattern would give us the highest overall write throughput?
I am working on an application tracking objects detected by multiple sensors. I receive my inputs by consuming Kafka messages and I save the information in a PostgreSQL database.
An object is located in a specific location if its last scan came from a sensor in that exact location. For example:
Object last scanned by the sensor in room 1 -> the last known location for the object is room 1.
The scans happen continuously, and we can set the frequency anywhere from a few seconds to a few minutes. So an object that wasn't scanned in the last hour, for example, needs to be considered out of range.
So my question is: how can I design a system that generates some sort of notification when a device is out of range?
For example, if the timestamp of the last detection is more than 5 minutes old, a notification is triggered.
The only solution I can think of is a batch job that repeatedly checks for all objects whose last detection time is more than 5 minutes old. But I am wondering if that is the right approach, and I would like to ask if there is a better way.
I use Kotlin and Spring Boot for my application.
Thank you for your help.
You would need some type of heartbeat mechanism, yes.
Query all objects whose "last seen" timestamp is older than your threshold, and fire an alert when that returned result set is larger than some tolerable size (e.g. if you are willing to accept intermittently lost devices and expect them to be found in the next scan).
As far as "where/how" to alert - up to you. Slack webhooks are a popular example. Grafana can do alerting and query your database.
I'm using Cassandra and Kafka for event-sourcing, and it works quite well. But I've just recently discovered a potentially major flaw in the design/set-up. A brief intro to how it is done:
The aggregate command handler is basically a kafka consumer, which consumes messages of interest on a topic:
1.1 When it receives a command, it loads all events for the aggregate, and replays the aggregate event handler for each event to get the aggregate up to current state.
1.2 Based on the command and business logic it then applies one or more events to the event store. This involves inserting the new event(s) into the event store table in Cassandra. The events are stamped with a version number for the aggregate - starting at version 0 for a new aggregate - making projections possible. In addition it sends the event to another topic (for projection purposes).
1.3 A Kafka consumer listens on the topic to which these events are published. This consumer acts as a projector. When it receives an event of interest, it loads the current read model for the aggregate, checks that the version of the event it has received is the expected version, and then updates the read model.
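For concreteness, a stripped-down sketch of steps 1.1 and 1.2 (the types and the event-store interface below are placeholders standing in for the Cassandra table and the Kafka publisher):
// Hypothetical domain types; real ones would carry actual payloads.
class Command(val aggregateId: String)
class Event

class Aggregate private constructor() {
    fun apply(e: Event): Aggregate = this                 // 1.1: aggregate event handler (placeholder)
    fun decide(c: Command): List<Event> = emptyList()     // 1.2: business logic (placeholder)
    companion object { fun initial() = Aggregate() }
}

data class StoredEvent(val aggregateId: String, val version: Long, val event: Event)

interface EventStore {
    fun load(aggregateId: String): List<StoredEvent>      // read from Cassandra, ordered by version
    fun append(events: List<StoredEvent>)                 // insert into Cassandra and publish for projections
}

fun handle(command: Command, store: EventStore) {
    val history = store.load(command.aggregateId)

    // 1.1: replay all past events to bring the aggregate up to its current state
    val state = history.fold(Aggregate.initial()) { agg, stored -> agg.apply(stored.event) }

    // 1.2: apply business logic and stamp new events with consecutive versions (0 for a new aggregate)
    val nextVersion = (history.lastOrNull()?.version ?: -1L) + 1
    val newEvents = state.decide(command)
        .mapIndexed { i, e -> StoredEvent(command.aggregateId, nextVersion + i, e) }
    store.append(newEvents)
}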
This seems to work very well. The problem comes when I want to have what EventStore calls category projections. Let's take an Order aggregate as an example. I can easily project one or more read models per Order. But if I want, for example, a projection that contains a customer's 30 last orders, then I would need a category projection.
I'm just scratching my head over how to accomplish this. I'm curious to know if anyone else is using Cassandra and Kafka for event sourcing. I've read in a couple of places that some people discourage it. Maybe this is the reason.
I know EventStore has support for this built in. Maybe using Kafka as event store would be a better solution.
With this kind of architecture, you have to choose between:
Global event stream per type - simple
Partitioned event stream per type - scalable
Unless your system is fairly high throughput (say at least 10s or 100s of events per second for sustained periods to the stream type in question), the global stream is the simpler approach. Some systems (such as Event Store) give you the best of both worlds, by having very fine-grained streams (such as per aggregate instance) but with the ability to combine them into larger streams (per stream type/category/partition, per multiple stream types, etc.) in a performant and predictable way out of the box, while still being simple by only requiring you to keep track of a single global event position.
If you go partitioned with Kafka:
Your projection code will need to handle concurrent consumer groups accessing the same read models when processing events for different partitions that need to go into the same models. Depending on your target store for the projection, there are lots of ways to handle this (transactions, optimistic concurrency, atomic operations, etc.), but it would be a problem for some target stores.
Your projection code will need to keep track of the stream position of each partition, not just a single position (see the sketch below). If your projection reads from multiple streams, it has to keep track of many positions.
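A minimal sketch of that bookkeeping (the read-model shape is hypothetical; the point is just that each model, or a checkpoint stored next to it, records the last offset applied from every partition):
// The read model carries the last applied offset per partition alongside its data, so that
// updates can be made idempotent and positions can be recovered on restart.
data class ReadModel(
    val data: Map<String, Any?> = emptyMap(),
    val positions: Map<Int, Long> = emptyMap()   // partition -> last applied offset
)

// Apply an event only if its (partition, offset) has not been seen yet; otherwise skip it.
fun applyIfNew(
    model: ReadModel,
    partition: Int,
    offset: Long,
    update: (ReadModel) -> ReadModel
): ReadModel {
    val lastApplied = model.positions[partition] ?: -1L
    if (offset <= lastApplied) return model      // duplicate delivery or replay: ignore
    return update(model).copy(positions = model.positions + (partition to offset))
}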
Using a global stream removes both of those concerns - performance is usually good enough.
In either case, you'll likely also want to get the stream position into the long term event storage (i.e. Cassandra) - you could do this by having a dedicated process reading from the event stream (partitioned or global) and just updating the events in Cassandra with the global or partition position of each event. (I have a similar thing with MongoDB - I have a process reading the 'oplog' and copying oplog timestamps into events, since oplog timestamps are totally ordered).
Another option is to drop Cassandra from the initial command processing and use Kafka Streams instead:
Partitioned command stream is processed by joining with a partitioned KTable of aggregates
Command result and events are computed
Atomically, KTable is updated with changed aggregate, events are written to event stream and command response is written to command response stream.
You would then have a downstream event processor that copies the events into Cassandra for easier querying etc. (and which can add the Kafka stream position to each event as it does so, to give the category ordering). This can help with catch-up subscriptions, etc. if you don't want to use Kafka for long-term event storage. (To catch up, you'd just read as far as you can from Cassandra and then switch to streaming from Kafka from the position of the last Cassandra event.) On the other hand, Kafka itself can store events forever, so this isn't always necessary.
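A rough sketch of that Kafka Streams variant (topic names, the decide() function and String payloads are placeholders; the atomicity of the state update plus event output comes from enabling exactly-once processing, and default serdes are assumed to be configured as String):
// Sketch only: commands keyed by aggregate id drive a state-store-backed handler that emits
// events; Streams changelogs the store, which plays the role of the aggregate KTable.
// Uses the classic transform() API for brevity.
import org.apache.kafka.common.serialization.Serdes
import org.apache.kafka.streams.KeyValue
import org.apache.kafka.streams.StreamsBuilder
import org.apache.kafka.streams.Topology
import org.apache.kafka.streams.kstream.Transformer
import org.apache.kafka.streams.kstream.TransformerSupplier
import org.apache.kafka.streams.processor.ProcessorContext
import org.apache.kafka.streams.state.KeyValueStore
import org.apache.kafka.streams.state.Stores

const val AGGREGATE_STORE = "aggregate-store"

// Placeholder business logic: current state (null for a new aggregate) + command -> new state and events.
fun decide(state: String?, command: String): Pair<String, List<String>> =
    Pair(state ?: "", emptyList())

class CommandHandler : Transformer<String, String, KeyValue<String, String>> {
    private lateinit var store: KeyValueStore<String, String>

    override fun init(context: ProcessorContext) {
        @Suppress("UNCHECKED_CAST")
        store = context.getStateStore(AGGREGATE_STORE) as KeyValueStore<String, String>
    }

    override fun transform(aggregateId: String, command: String): KeyValue<String, String> {
        val (newState, events) = decide(store.get(aggregateId), command)
        store.put(aggregateId, newState)                          // "KTable" update, changelogged by Streams
        return KeyValue(aggregateId, events.joinToString("\n"))   // simplified: all events in one record
    }

    override fun close() {}
}

fun buildTopology(): Topology {
    val builder = StreamsBuilder()
    builder.addStateStore(
        Stores.keyValueStoreBuilder(
            Stores.persistentKeyValueStore(AGGREGATE_STORE), Serdes.String(), Serdes.String()
        )
    )
    builder.stream<String, String>("order-commands")
        .transform(
            TransformerSupplier<String, String, KeyValue<String, String>> { CommandHandler() },
            AGGREGATE_STORE
        )
        .to("order-events")   // a downstream processor can copy these into Cassandra with their positions
    return builder.build()
}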
I hope this helps a bit with understanding the tradeoffs and problems you might encounter.
Our team is trying to build a predictive maintenance system whose task is to look at a set of events and predict whether these events depict a set of known anomalies or not.
We are at the design phase and the current system design is as follows:
The events may occur on multiple sources of an IoT system (such as cloud platform, edge devices or any intermediate platforms)
The events are pushed by the data sources into a message queueing system (currently we have chosen Apache Kafka).
Each data source has its own queue (Kafka Topic).
From the queues, the data is consumed by multiple inference engines (which are actually neural networks).
Depending upon the feature set, an inference engine will subscribe to multiple Kafka topics and stream data from those topics to continuously output the inference.
The overall architecture follows the single-responsibility principle meaning that every component will be separate from each other and run inside a separate Docker container.
Problem:
In order to classify a set of events as an anomaly, the events have to occur in the same time window. For example, say there are three data sources pushing their respective events into Kafka topics, but for some reason the data is not synchronized.
So one of the inference engines pulls the latest entries from each of the Kafka topics, but the corresponding events in the pulled data do not belong to the same time window (say 1 hour). That will result in invalid predictions due to out-of-sync data.
Question
We need to figure out how we can make sure that the data from all three sources is pushed in order, so that when an inference engine requests entries (say the last 100 entries) from multiple Kafka topics, the corresponding entries in each topic belong to the same time window.
I would suggest KSQL, which is a streaming SQL engine that enables real-time data processing against Apache Kafka. It also provides nice functionality for Windowed Aggregation etc.
There are 3 ways to define Windows in KSQL: hopping windows, tumbling windows, and session windows. Hopping and tumbling windows are time windows, because they're defined by fixed durations that you specify. Session windows are dynamically sized based on incoming data and defined by periods of activity separated by gaps of inactivity.
In your context, you can use KSQL to query and aggregate the topics of interest using Windowed Joins. For example,
SELECT t1.id, ...
FROM topic_1 t1
INNER JOIN topic_2 t2
WITHIN 1 HOURS
ON t1.id = t2.id;
Some suggestions -
Handle delay at the producer end -
Ensure all three producers send data to Kafka in sync by tuning batch.size and linger.ms.
e.g. if linger.ms is set to 1000, the producer waits up to 1 second to fill a batch, so all buffered messages are sent to Kafka within about 1 second.
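For reference, a small sketch of those producer settings in Kotlin (batch.size and linger.ms are standard producer configs; the broker address and values here are illustrative only):
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.clients.producer.ProducerConfig
import org.apache.kafka.common.serialization.StringSerializer
import java.util.Properties

fun buildProducer(): KafkaProducer<String, String> {
    val props = Properties().apply {
        put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
        put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer::class.java.name)
        put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer::class.java.name)
        put(ProducerConfig.LINGER_MS_CONFIG, "1000")    // wait up to 1 s to fill a batch before sending
        put(ProducerConfig.BATCH_SIZE_CONFIG, "65536")  // batch up to 64 KiB per partition
    }
    return KafkaProducer<String, String>(props)
}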
Handle delay at the consumer end -
Any streaming engine on the consumer side (be it Kafka Streams, Spark Streaming, or Flink) provides windowing functionality to join/aggregate stream data by key while accounting for late-arriving data.
See the Flink documentation on windows for guidance on how to choose the right window type.
To handle this scenario, the data sources must provide some mechanism for the consumer to realize that all relevant data has arrived. The simplest solution is to have each data source publish batches with a batch id (a GUID of some form). Consumers can then wait until the next batch id shows up, marking the end of the previous batch. This approach assumes sources will not skip a batch, otherwise they will become permanently misaligned. There is no algorithm to detect this, but you might have some fields in the data that show the discontinuity and allow you to realign the data.
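A minimal sketch of that handshake on the consumer side (the record shape is hypothetical): buffer records per source and treat the arrival of a new batch id as the end-of-batch marker for that source.
// Buffer records per source; a record carrying a new batch id signals that the
// previously buffered batch for that source is complete.
data class SourceRecord(val source: String, val batchId: String, val payload: String)

class BatchAligner(
    private val onBatchComplete: (source: String, batch: List<SourceRecord>) -> Unit
) {
    private val buffers = mutableMapOf<String, MutableList<SourceRecord>>()

    fun accept(record: SourceRecord) {
        val buffer = buffers.getOrPut(record.source) { mutableListOf() }
        if (buffer.isNotEmpty() && buffer.first().batchId != record.batchId) {
            onBatchComplete(record.source, buffer.toList())   // previous batch is now complete
            buffer.clear()
        }
        buffer.add(record)
    }
}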
A weaker version of this approach is to either just wait x seconds and assume all sources succeed within that time, or look at some form of timestamps (logical or wall clock) to detect that a source has moved on to the next time window, implicitly showing completion of the last window.
The following recommendations should maximize success of event synchronization for the anomaly detection problem using timeseries data.
Use a network time synchronizer on all producer/consumer nodes
Use a heartbeat message from producers every x units of time with a fixed start time, e.g. the messages are sent every two minutes at the start of the minute.
Build predictors for producer message delay. Use the heartbeat messages to compute this.
With these primitives, we should be able to align the timeseries events, accounting for time drifts due to network delays.
At the inference engine side, expand your windows on a per-producer basis to sync up events across producers.
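A small sketch of that per-producer widening, assuming the consumer knows each producer's heartbeat schedule (e.g. every two minutes at the start of the minute, as above):
import java.time.Duration
import java.time.Instant

// Track the worst observed heartbeat delay per producer and widen that producer's
// alignment window accordingly on the inference side.
class DelayEstimator {
    private val worstDelay = mutableMapOf<String, Duration>()

    // expectedAt: the scheduled heartbeat time (fixed start time + n * interval); receivedAt: arrival time.
    fun onHeartbeat(producer: String, expectedAt: Instant, receivedAt: Instant) {
        val delay = Duration.between(expectedAt, receivedAt)
        val previous = worstDelay[producer] ?: Duration.ZERO
        if (delay > previous) worstDelay[producer] = delay
    }

    // Expand the base window for this producer by its estimated delay.
    fun windowFor(producer: String, baseWindow: Duration): Duration =
        baseWindow.plus(worstDelay[producer] ?: Duration.ZERO)
}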
I'm using DynamoDB Streams + Kinesis Client Library (KCL).
How can I measure latency between when an event was created in a stream and when it was processed on KCL side?
As far as I know, KCL's MillisBehindLatest metric is specific to Kinesis Streams (not DynamoDB Streams).
The approximateCreationDateTime record attribute has only minute-level precision, which is not acceptable for monitoring in sub-second latency systems.
Could you please help with some useful metrics for monitoring DynamoDB Streams latency?
You can change the way you do writes in your application to allow it to track the propagation delay of mutations in the table's stream. For example, you could always update a 'last_updated' timestamp attribute when you create and update items. That way, when your creations and updates appear in the stream, you can estimate the propagation delay by subtracting last_updated in the NEW_IMAGE of the stream record from the current time.
Because deletions do not have a NEW_IMAGE in stream records, your deletes would need to take place in two steps:
1. a logical deletion, where you write the 'logically_deleted' timestamp to the item, and
2. a physical deletion, where you actually call DeleteItem immediately following step 1.
Then you would use the same math as for creations and updates, the only differences being that you would use the OLD_IMAGE when processing deletions and that you would need to subtract roughly 10 ms to account for the time it takes to perform the logical delete (step 1).
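A tiny sketch of that calculation, with the stream record reduced to its NEW_IMAGE attribute map and last_updated stored as epoch milliseconds (in a real KCL record processor you would extract that map from the deserialized DynamoDB stream record):
import java.time.Instant

// Propagation delay = time now minus the writer's last_updated stamp from the NEW_IMAGE.
// Returns null if the attribute is missing or not a number.
fun propagationDelayMillis(newImage: Map<String, String>, now: Instant = Instant.now()): Long? {
    val lastUpdated = newImage["last_updated"]?.toLongOrNull() ?: return null
    return now.toEpochMilli() - lastUpdated
}

// For deletions, read logically_deleted from the OLD_IMAGE instead and subtract roughly
// 10 ms for the logical-delete write, as described above.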
I'm looking for a CEP engine, but I don't know if any engine meets my requirements.
My system has to process multiple streams of event data and generate complex events, and this is exactly what almost any CEP engine fits perfectly (Esper, Drools).
I store all raw events in a database (it's not the CEP part, but I do this) and use rules (or continuous queries or something similar) to trigger custom actions on complex events. But some of my rules depend on events in the past.
For instance: I could have a sensor sending an event every time my spouse comes home or leaves, and if both my car and the car of my fancy woman are near the house, I get an SMS saying 'Dangerous'.
The problem is that with a restart of the event processing service I lose all information on the state of the system (is my wife at home?), and to restore it I need to replay events for an unknown period of time. The system state can depend not only on raw events, but on complex events as well.
The same problem arises when I need some report on complex events in the past. I have raw events data stored in database, and could generate these complex events replaying raw events, but I don't know for which exactly period I have to replay them.
At the same time it's clear that for most rules it's possible to automatically determine the number of past events that must be processed (or the period of time for which events must be loaded) to restore the system state.
If a given action depends on the presence of my wife at home, the CEP system has to request the last status change. If a report on complex events is requested and a complex event depends on the average price within the previous period, all price change events for this period should be replayed. And so on...
Am I missing something?
The RuleCore CEP Server might solve your problems if I remember correctly. It does not lose state if you restart it and it contains a virtual logical clock so that you can replay events using any notion of time.
I'm not sure if your question is whether current CEP products offer joining historical data with live events, but if that's what you need, Esper allows you to pull data from JDBC sources (which connects your historical data with your live events) and reflect them in your EPL statements. I guess you have already checked the Esper website; if not, you'll see that Esper has excellent documentation with lots of cookbook examples.
But even if you model your historical events after your live events, that does not solve your problem with choosing the correct timeframe, and as you wrote, this timeframe is use case dependent.
As previous people mentioned, I don't think your problem is really an engine problem, but more of a use-case one. All engines I am familiar with, including Drools Fusion and Esper, can join incoming events with historical data and/or state data queried on demand from an external source (like a database). It seems to me that what you need to do is persist state (or "timestamp check-points") when a relevant change happens and reload that state on restart, instead of replaying events for an unknown time frame.
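A minimal sketch of that idea: persist the derived state together with a timestamp checkpoint whenever a relevant change happens, and reload it on startup instead of replaying an unbounded window of raw events (the store interface is hypothetical):
import java.time.Instant

data class Checkpoint<S>(val state: S, val lastEventAt: Instant)

// Hypothetical persistence for checkpoints, e.g. one database row per rule.
interface CheckpointStore<S> {
    fun save(checkpoint: Checkpoint<S>)
    fun load(): Checkpoint<S>?          // null if nothing has been persisted yet
}

class StatefulRule<S>(
    initial: S,
    private val store: CheckpointStore<S>,
    private val onEvent: (S, String, Instant) -> S   // rule logic: fold an event into the state
) {
    private var current: Checkpoint<S> = store.load() ?: Checkpoint(initial, Instant.EPOCH)

    fun handle(event: String, at: Instant) {
        val next = onEvent(current.state, event, at)
        if (next != current.state) {                  // only checkpoint on relevant changes
            current = Checkpoint(next, at)
            store.save(current)
        }
    }

    fun state(): S = current.state
}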
Alternatively, if using Drools, you can inspect existing rules (kind of reflection on your rules/queries) to figure out which types of events your rules need and backtrack your event log until a point in time where all requirements are met and load/replay your events from there using the session clock.
Finally, you can use a cluster to reduce the restarts, but that does not solve the problem you describe.
Hope it helps.