When using IBM IoT Real-Time Insights, we can define alerts that trigger when an incoming event satisfies some rule condition, which in turn can trigger an action such as sending an email. Is there a technique that will generate an alert the first time a condition is true, but not for subsequent events until the condition has first evaluated to false again?
Imagine I am receiving temperature events from a pipeline sensor. When the temperature passes 120°F, I want to receive an alert. However, it appears to me that the temperature value will remain above this threshold for each subsequent event, which will result in the condition evaluating to true many times. If the action I want performed is to email me, then I believe I will receive a new email for each new event received, which isn't what I want. What I really want is to receive an email the first time the temperature passes the threshold and no more until the problem is corrected, after which it may happen again.
The continuous triggering of alerts once a rule threshold is met is a limitation that is currently being addressed by the IoT Real-Time Insights team. Keep an eye on the What's new in Bluemix section for news about new features as they are released.
The latest update of IoT Real-Time Insights includes updates to rule conditions. To control the number of alerts that are triggered over a time period, you can set a condition frequency requirement, for example to trigger the rule only the first time a condition is met and then not again for the next hour. Rules can now be triggered each time a condition is met, or when a condition is met N times in M days/hours/minutes. For more information, see Conditional Triggering in the IoT Real-Time Insights documentation.
I am working on an application that tracks objects detected by multiple sensors. I receive my inputs by consuming a Kafka message, and I save the information in a PostgreSQL database.
An object is located in a specific location if its last scan was detected by the sensor in that exact location. For example:
Object last scanned by the sensor in room 1 -> the object's last known location is room 1.
The scans happen continuously, and we can set a frequency of a few seconds to a few minutes. So an object that wasn't scanned in the last hour, for example, needs to be considered out of range.
So my question now is: how can I design a system that generates some sort of notification when a device is out of range?
For example, if the timestamp of the last detection is more than 5 minutes old, it triggers a notification.
The only solution I can think of is to create a batch job that repeatedly checks for all the objects whose last detection time is more than 5 minutes ago. But I am wondering if that is the right approach, and I would like to ask if there is a better way.
I use Kotlin and Spring Boot for my application.
Thank you for your help.
You would need some type of heartbeat mechanism, yes.
Query for all objects whose "last seen" timestamp is older than your threshold, and fire an alert when the returned result set is larger than some tolerable size (e.g. if you are willing to accept intermittently lost devices and expect them to be found in the next scan).
As far as "where/how" to alert: that is up to you. Slack webhooks are a popular example, and Grafana can do alerting and query your database.
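If it helps, here is a minimal sketch of such a scheduled check in Kotlin/Spring Boot. It assumes @EnableScheduling on the application class and a hypothetical objects table with id and last_seen columns; adapt the names and thresholds to your schema.

import org.springframework.jdbc.core.JdbcTemplate
import org.springframework.scheduling.annotation.Scheduled
import org.springframework.stereotype.Component
import java.sql.Timestamp
import java.time.Duration
import java.time.Instant

@Component
class OutOfRangeChecker(private val jdbc: JdbcTemplate) {

    // Runs every minute; tune fixedDelay to your scan frequency.
    @Scheduled(fixedDelay = 60_000)
    fun checkForStaleObjects() {
        val cutoff = Instant.now().minus(Duration.ofMinutes(5))
        // Hypothetical schema: objects(id, last_seen).
        val staleIds = jdbc.queryForList(
            "SELECT id FROM objects WHERE last_seen < ?",
            String::class.java,
            Timestamp.from(cutoff)
        )
        if (staleIds.isNotEmpty()) {
            // Placeholder: post to a Slack webhook, send an email, etc.
            println("Out of range: $staleIds")
        }
    }
}

If you would rather not write this yourself, the same query can live in a Grafana alert rule instead.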
Accordingly, I have a question about the modeling process. I want my agent to have an event that is triggered by both time and a condition, for example: goToSchool if it is after 6 am and there is a school bus. I am confused about whether to use the timeout trigger (which cannot use the condition) or the condition trigger (which cannot use the timeout), or whether there is a possible alternative.
In your example, "if it is more than 6 am" is a condition and not a timeout. A timeout trigger is used when you want an event to happen at an exact time. In your case, while "more than 6 am" is time-related, it is still a condition. So I would use a condition-triggered event with two conditions:
getHourOfDay() > 6 && <bus condition>
The getHourOfDay() function returns the hour of the day in 24-hour format.
You need to keep in mind something important about condition-triggered events: they are only evaluated "on change". I recommend you read this carefully:
https://help.anylogic.com/index.jsp?topic=%2Fcom.anylogic.help%2Fhtml%2Fstatecharts%2Fcondition-event.html
My recommendation would be to use the onChange() function in the block controlling your bus arrival so that the condition is evaluated each time a bus arrives.
I am simplifying my problem with the following scenario:
3 friends share a loyalty card. The card has two restrictions:
it can be used at most 10 times (it does not matter who uses the card, i.e. friend_a alone could use it 10 times);
the max money on the card is 200, so with one "event" of value = 200 the card is "completed".
I am using a Kafka producer that sends events to the Kafka cluster like this:
{ "name": "friend_1", "value": 10 }
{ "name": "friend_3", "value": 20 }
The events are posted to a topic that is consumed by a Kafka Streams application, which groups by key and aggregates to sum the money spent. That seems to work; however, I am facing a "concurrency issue".
Let's imagine the card has been used 9 times, so only 1 use remains, and the total money spent is 190, which means there are 10 units left to spend.
So friend_2 wants to buy something that costs 11 units (which should not be allowed) and friend_3 wants to buy something that costs 9 units (which should be allowed). friend_3 will modify the state, using the card for the 10th time. All other future attempts should not modify anything.
So it seems reasonable for the card user to know whether the event he sent modified the usage count and the total amount. How can I do this in Kafka? Using the streams aggregation I can always increase the values, but how do I know if my action "modified the state" of the card?
UPDATE: the card user should immediately get negative feedback if the transaction violates a rule.
From how I understand your question, there are a few options.
One option is to derive a new stream after the aggregation that you filter() for data that would modify 'the state' of the card, like filtering all events that have > 200 units spent or > 10 uses. This stream can then be used to inform the card user(s) that the card has been spent, e.g. by sending an email. This approach can be implemented solely with the DSL.
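As a minimal DSL sketch of that idea in Kotlin (topic and store names like transactions, card-exhausted and card-totals are hypothetical, and it assumes events are already keyed by card id with the amount as a Long):

import org.apache.kafka.common.serialization.Serdes
import org.apache.kafka.common.utils.Bytes
import org.apache.kafka.streams.StreamsBuilder
import org.apache.kafka.streams.kstream.Consumed
import org.apache.kafka.streams.kstream.Grouped
import org.apache.kafka.streams.kstream.Materialized
import org.apache.kafka.streams.kstream.Produced
import org.apache.kafka.streams.state.KeyValueStore

fun buildTopology(builder: StreamsBuilder) {
    val totals = builder
        .stream("transactions", Consumed.with(Serdes.String(), Serdes.Long()))
        .groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
        .aggregate(
            { 0L },                                 // initial total per card
            { _, amount, total -> total + amount }, // sum the money spent
            Materialized.`as`<String, Long, KeyValueStore<Bytes, ByteArray>>("card-totals")
                .withKeySerde(Serdes.String())
                .withValueSerde(Serdes.Long())
        )

    // Derive a stream of cards that crossed the 200-unit limit and use it
    // to inform the card user(s), e.g. via a downstream email service.
    totals.toStream()
        .filter { _, total -> total > 200L }
        .to("card-exhausted", Produced.with(Serdes.String(), Serdes.Long()))
}

Note that the filter fires again on every later update above the limit, so downstream you may still want to deduplicate; the Processor API option below gives you tighter control over that.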
When more flexibility or tighter control is needed, another option is to use the Processor API (which you can integrate with the DSL, so most of your code could keep using the DSL), where you implement the aggregation step yourself, working with state stores that you attach to a Transformer or Processor. During the aggregation, you can implement the logic that checks whether the incoming event is valid (in your example: friend_3 with 9 units is valid, friend_2 with 11 units is not). If it is valid, the aggregation increases the card's counters (units and uses), and that's it. If it is invalid, the event is discarded and will not modify the counters; instead, the Transformer/Processor can emit a new event to another stream that tells the card user(s) that something didn't work. You can similarly implement the functionality to inform users that a card has been fully used, that a card is no longer usable, or any other 'state change' of that card.
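A rough Kotlin sketch of such a validating step with the Processor API (the store name card-store and the "uses:spent" string encoding are placeholders for illustration; a real implementation would use a proper serde):

import org.apache.kafka.streams.processor.api.Processor
import org.apache.kafka.streams.processor.api.ProcessorContext
import org.apache.kafka.streams.processor.api.Record
import org.apache.kafka.streams.state.KeyValueStore

class CardValidator : Processor<String, Long, String, String> {
    private lateinit var context: ProcessorContext<String, String>
    private lateinit var store: KeyValueStore<String, String>

    override fun init(context: ProcessorContext<String, String>) {
        this.context = context
        store = context.getStateStore("card-store")
    }

    override fun process(record: Record<String, Long>) {
        // Current counters for this card, encoded as "uses:spent".
        val (uses, spent) = (store.get(record.key()) ?: "0:0")
            .split(":").map { it.toLong() }
        val amount = record.value()

        if (uses < 10 && spent + amount <= 200) {
            // Valid: the event modifies the card's state.
            store.put(record.key(), "${uses + 1}:${spent + amount}")
            context.forward(record.withValue("ACCEPTED"))
        } else {
            // Invalid: the state is left untouched; emit negative feedback.
            context.forward(record.withValue("REJECTED"))
        }
    }
}

You would wire this up with StreamsBuilder#addStateStore and KStream#process, connecting the store by name; the ACCEPTED/REJECTED output stream gives the card user the immediate feedback you mention in your update.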
Also, depending on what you want to do, take a look at the interactive queries feature of Kafka Streams. Sometimes other applications may want to do quick point lookups (queries) of the latest state of something, like the card's state, which can be done with interactive queries via e.g. a REST API.
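For example, a point lookup against the store materialized in the DSL sketch above might look like this (names are again assumptions):

import org.apache.kafka.streams.KafkaStreams
import org.apache.kafka.streams.StoreQueryParameters
import org.apache.kafka.streams.state.QueryableStoreTypes

// Look up a card's latest total from the local "card-totals" state store,
// e.g. from inside a REST controller.
fun cardTotal(streams: KafkaStreams, cardId: String): Long? =
    streams.store(
        StoreQueryParameters.fromNameAndType(
            "card-totals",
            QueryableStoreTypes.keyValueStore<String, Long>()
        )
    ).get(cardId)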
Hope this helps!
I want to use the Google Measurement Protocol to record offline events, i.e. take data from an EPOS system and track it in Google Analytics. This would be a batch process running once a day. How do I tell Google what the date of the event is? If the console app went offline for a few days, I wouldn't want three days' worth of events to be associated with one day.
Your best bet currently is to use the Queue Time Measurement Protocol parameter.
v=1&tid=UA-123456-1&cid=5555&t=pageview&dp=%2FpageA&qt=343
Queue Time is used to collect offline / latent hits. The value represents the time delta (in milliseconds) between when the hit being reported occurred and the time the hit was sent. The value must be greater than or equal to 0. Values greater than four hours may lead to hits not being processed.
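A minimal Kotlin sketch of such a batch sender (the tid/cid values and the event fields are placeholders; note that because of the four-hour limit on qt, a once-a-day batch may silently lose older hits, so you may need to upload more often):

import java.net.URI
import java.net.URLEncoder
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.time.Instant

// Sends one offline hit, back-dating it with the qt (queue time) parameter.
fun sendOfflineHit(client: HttpClient, occurredAt: Instant) {
    val queueTimeMs = Instant.now().toEpochMilli() - occurredAt.toEpochMilli()
    val body = listOf(
        "v" to "1",
        "tid" to "UA-123456-1", // placeholder property id
        "cid" to "5555",        // placeholder client id
        "t" to "event",
        "ec" to "epos",         // hypothetical category/action
        "ea" to "sale",
        "qt" to queueTimeMs.toString()
    ).joinToString("&") { (k, v) -> "$k=${URLEncoder.encode(v, "UTF-8")}" }

    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://www.google-analytics.com/collect"))
        .header("Content-Type", "application/x-www-form-urlencoded")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()
    client.send(request, HttpResponse.BodyHandlers.discarding())
}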
I'm looking for a CEP engine, but I don't know if any engine meets my requirements.
My system has to process multiple streams of event data and generate complex events, and this is exactly what almost any CEP engine fits perfectly (Esper, Drools).
I store all raw events in a database (this is not the CEP part, but I do it) and use rules (or continuous queries or something similar) to generate custom actions on complex events. But some of my rules depend on events in the past.
For instance: I could have a sensor sending an event every time my spouse comes home or leaves, and if both my car and the car of my fancy woman are near the house, I get the SMS 'Dangerous'.
The problem is that on a restart of the event processing service I lose all information about the state of the system (is my wife at home?), and to restore it I need to replay events for an unknown period of time. The system state can depend not only on raw events, but on complex events as well.
The same problem arises when I need a report on complex events in the past. I have the raw event data stored in the database and could generate these complex events by replaying the raw events, but I don't know for exactly which period I have to replay them.
At the same time, it's clear that for most rules it's possible to automatically determine the number of past events to process (or the period of time for which to load events) in order to restore the system state.
If a given action depends on whether my wife is at home, the CEP system has to request the last status change. If a report on complex events is requested and a complex event depends on the average price within the previous period, all price change events for this period should be replayed. And so on...
Am I missing something?
The RuleCore CEP Server might solve your problems, if I remember correctly. It does not lose state if you restart it, and it contains a virtual logical clock so that you can replay events using any notion of time.
I'm not sure whether your question is about whether current CEP products offer joining historical data with live events, but if that's what you need, Esper allows you to pull data from JDBC sources (which connects your historical data with your live events) and reflect it in your EPL statements. I guess you have already checked the Esper website; if not, you'll see that Esper has excellent documentation with lots of cookbook examples.
But even if you model your historical events after your live events, that does not solve your problem of choosing the correct timeframe, and as you wrote, this timeframe is use-case dependent.
As others have mentioned, I don't think your problem is really an engine problem, but more of a use-case one. All the engines I am familiar with, including Drools Fusion and Esper, can join incoming events with historical data and/or state data queried on demand from an external source (like a database). It seems to me that what you need to do is persist state (or "timestamp check-points") when a relevant change happens and re-load that state on restarts, instead of replaying events for an unknown time frame; a rough sketch of this idea follows below.
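Here is that check-pointing idea as a Kotlin sketch, assuming a simple JDBC table cep_state(fact, value, updated_at) and PostgreSQL upsert syntax (all names are hypothetical):

import java.sql.DriverManager
import java.sql.Timestamp
import java.time.Instant

class StateCheckpointer(private val jdbcUrl: String) {

    // Persist a derived fact (e.g. "wife_at_home" -> "true") whenever a
    // relevant change happens, instead of relying on event replay later.
    fun save(fact: String, value: String) {
        DriverManager.getConnection(jdbcUrl).use { conn ->
            conn.prepareStatement(
                "INSERT INTO cep_state(fact, value, updated_at) VALUES (?, ?, ?) " +
                "ON CONFLICT (fact) DO UPDATE SET value = EXCLUDED.value, " +
                "updated_at = EXCLUDED.updated_at"
            ).use { st ->
                st.setString(1, fact)
                st.setString(2, value)
                st.setTimestamp(3, Timestamp.from(Instant.now()))
                st.executeUpdate()
            }
        }
    }

    // On restart, load the persisted facts back into working memory instead
    // of replaying raw events for an unknown period.
    fun load(): Map<String, String> =
        DriverManager.getConnection(jdbcUrl).use { conn ->
            conn.createStatement().use { st ->
                val rs = st.executeQuery("SELECT fact, value FROM cep_state")
                val state = mutableMapOf<String, String>()
                while (rs.next()) state[rs.getString("fact")] = rs.getString("value")
                state
            }
        }
}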
Alternatively, if you are using Drools, you can inspect the existing rules (a kind of reflection on your rules/queries) to figure out which types of events your rules need, backtrack your event log to a point in time where all requirements are met, and load/replay your events from there using the session clock.
Finally, you can use a cluster to reduce the restarts, but that does not solve the problem you describe.
Hope it helps.