I'm working on a project which needs to process a large number of events per second. The project uses Drools 6.5 running in stream mode. The data is fed to the engine as "event" objects.
Due to the large number of events that need to be processed, the automatic memory management provided by Drools simplifies development significantly. However, the Drools documentation is somewhat vague in this area. I need to count the number of events that meet certain conditions in the past T seconds and fire a rule if that count exceeds a threshold. I currently use sliding windows to achieve this. The problem is that Drools either discards events before T seconds have passed since their insertion (when a #expires value is set) or does not discard them at all (when the #expires tag is removed), thus either making inference impossible or causing a heap memory overflow in the long run.
Is there a better approach to the problem? Can anyone clarify how inferred expiration works? Am I doing something wrong?
Any help would be greatly appreciated.
After a few hours of exploring the documentation for Drools 6.5, I finally found out what was happening. I will leave the information here to help anyone else who might have the same problem.
Important
An explicit expiration policy for a given event type overrides any inferred expiration offset for that same type.
As the documentation says (section 9.8.1), an explicit #expires tag overrides any inferred expiration offset, so to let the engine handle the events' life cycle, do not use this tag.
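For example, a declaration like the following (SensorEvent is just an illustrative type name, not something from the original post) carries no explicit expiration at all, so the engine is free to infer one from the temporal constraints used in the rules:

declare SensorEvent
    @role( event )
end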
7.5.1. Passive Mode
With Passive mode not only is the user responsible for working memory operations, such as insert(), but also for when the rules are to evaluate the data and fire the resulting rule instantiations - using fireAllRules()
Apparently, in order to use the inferred expiration feature, one cannot use the passive execution mode. Running kSession.fireUntilHalt() runs the engine in active mode and enables the use of inferred expiration offsets.
TL;DR:
1. Remove any #expires tag.
2. Run the engine using fireUntilHalt() in a dedicated thread.
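A minimal sketch of point 2, assuming kSession is a stateful KieSession built from a KieBase configured with EventProcessingOption.STREAM (the class and thread names are illustrative):

import org.kie.api.runtime.KieSession;

public class EngineRunner {

    public static Thread start(final KieSession kSession) {
        // fireUntilHalt() blocks, so it needs its own thread; events can then
        // be inserted from other threads and expire via the inferred offsets.
        Thread engineThread = new Thread(kSession::fireUntilHalt, "drools-engine");
        engineThread.start();
        return engineThread;
    }
}

Call kSession.halt() and dispose() the session when shutting down.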
Related
I just have a general question regarding memory management when using "Stateful sessions" in Drools. For context, I'm specifically looking to use a ksession in "Stream" mode, together with fireUntilHalt(), to process an infinite stream of events. Each event is timestamped; however, I'm mainly writing rules using the length-based window notation (i.e. window:length()) and the from accumulate syntax for decision making.
The docs are a little vague, though, about how memory management works in this case. They suggest that when temporal operators are used, the engine can automatically remove any facts/events that can no longer match. However, would this also apply to rules that only use window:length()? Or would my system need to manually delete events that are no longer applicable, to avoid running out of memory?
window:time() lets the engine calculate an expiration, so automatic removal works. window:length(), however, doesn't calculate an expiration, so those events are retained.
You can confirm the behaviour with my example:
https://github.com/tkobayas/kiegroup-examples/tree/master/Ex-cep-window-length-8.32
FYI:
https://github.com/kiegroup/drools/blob/8.32.0.Final/drools-core/src/main/java/org/drools/core/rule/SlidingLengthWindow.java#L145
You would need to explicitly delete them or set #expires (if it's possible for your application to specify expiration with time) to avoid OOME.
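For instance, if your events can be bounded in time, an explicit declaration like this (StreamEvent and the 10-minute offset are just placeholders) keeps a window:length() rule from accumulating events forever:

declare StreamEvent
    @role( event )
    @expires( 10m )
end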
Thank you for pointing out that the documentation is not clear about this. I have filed a doc JIRA to get it explained:
https://issues.redhat.com/browse/DROOLS-7282
For example, I load a lot of Drools rules to run. How do I know which rule is running right now, so that I can find that rule?
Assuming you're talking about the right hand side of the rules, you'll want to use an AgendaEventListener. This is an interface defining a listener that you can create to watch the event lifecycle. For more information about the event model, please refer to the Drools documentation.
The easiest way to do this would be to extend either DefaultAgendaEventListener or DebugAgendaEventListener. Both of these classes implement all of the interface methods. The Default listener implements each method as a "no-op", so you can override just the methods you care about. The Debug listener implements each method with a logging statement, logging the toString() of the triggering event to INFO. If you're just learning about the Drools lifecycle, hooking up the various Debug listeners is a great way to watch and learn how rules and events process in rules.
(Also the cool thing about listeners is that they allow you to put breakpoints in the "when" clause that trigger when specific conditions are met -- eg when a rule match is created. In general I find that listeners are a great debugging tool because they allow you to put breakpoints in methods that trigger when different parts of the Drools lifecycle occur.)
Anyway, what you'll want to do is create an event listener and then pay attention to one or more of these specific events:
BeforeMatchFired
AfterMatchFired
MatchCreated
Which events to pay attention to depends on where you think the issue is.
If you think the issue is in the "when" clause (left-hand side, LHS), the MatchCreated event is what is triggered when Drools evaluates the LHS and decides that this rule is valid for firing based on the input data. It is then put on, effectively, a priority queue based on salience. When the rule is the highest priority on the queue, it is picked up for firing -- at this point the BeforeMatchFired event is triggered; note that this is before the "then" clause (right-hand side, RHS) is evaluated. Then Drools will actually do the work on the RHS, and once it finishes, trigger the AfterMatchFired.
Things get a little more complicated when your rules do things like updates/retracts/etc -- you'll start having to consider potential match cancellations when Drools re-evaluates the LHS and decides that a rule is no longer valid to be fired per the facts in working memory. But in general, these are the tools you'll want to start with.
The way I would traditionally identify long-running rules would be to start timing within the BeforeMatchFired and to stop timing in the AfterMatchFired, and then log the resulting rule execution time. Note that you want to be careful here to log the execution of the current rule, tracking it by name; if your rule extends another rule, you might find that your execution flow goes BeforeMatchFired(Child) -> BeforeMatchFired(Parent) -> AfterMatchFired(Parent) -> AfterMatchFired(Child), so if you're naively stopping a shared timer you might start having issues. My preferred way of doing this is by tracking timers by rule name in a ThreadLocal or even a thread-safe map implementation, but you can go whichever route you'd like.
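As a sketch of that timing approach (the class name and logging are my own; only the listener API itself is from Drools):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.kie.api.event.rule.AfterMatchFiredEvent;
import org.kie.api.event.rule.BeforeMatchFiredEvent;
import org.kie.api.event.rule.DefaultAgendaEventListener;

public class RuleTimingListener extends DefaultAgendaEventListener {

    // Start times keyed by rule name, so nested parent/child firings don't clobber each other.
    private final Map<String, Long> startTimes = new ConcurrentHashMap<>();

    @Override
    public void beforeMatchFired(BeforeMatchFiredEvent event) {
        startTimes.put(event.getMatch().getRule().getName(), System.nanoTime());
    }

    @Override
    public void afterMatchFired(AfterMatchFiredEvent event) {
        String ruleName = event.getMatch().getRule().getName();
        Long start = startTimes.remove(ruleName);
        if (start != null) {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("Rule '" + ruleName + "' took " + elapsedMs + " ms");
        }
    }
}

Attach it with kieSession.addEventListener(new RuleTimingListener()) before firing rules.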
If you're using a very new version of Drools (7.41+), there is a new library called drools-metric which you can use to identify slow rules. I haven't personally used this library yet because the newest versions of Drools have started introducing non-backwards-compatible changes in minor releases, but this is an option as well.
You can read more about drools-metric in the official documentation here (you'll need to scroll down a bit.) There's some tuning you'll need to do because the module only logs instances where the thresholds are exceeded. The docs that I've linked to include the Maven dependency you'll need to import, along with information about configuration, and some examples of the output and how to understand what it's telling you.
I started using Drools a week ago.
I need to calculate the average of a metric over a window duration, say 4s. The Drools snippet below will do this job.
... over window:time(4s) ...
However, I want the window duration to be an input to the rule, with the value taken from a control-panel UI where someone, say the customer, can specify it.
I tried many options, including the one below, but that doesn't compile.
... over window:time($SlidingWindowDuration)
I Googled for hours, but there is little documentation available on this subject.
Any clues in this regard would be of great help to me.
The length of a sliding window:time cannot be set dynamically. (I think this is so because dynamic lengths would make it impossible to infer an expiration offset for the automatic removal of obsolete events.)
Note that if this length can be set by the user before the engine is started and remains constant afterwards, you can insert the duration into the rule text, compile on the fly (only the rules that need last-minute editing) and execute.
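As a rough sketch of that compile-on-the-fly approach (MetricEvent, with a numeric value field, is an assumed event class; the KieHelper calls are standard API):

import org.kie.api.conf.EventProcessingOption;
import org.kie.api.io.ResourceType;
import org.kie.api.runtime.KieSession;
import org.kie.internal.utils.KieHelper;

public class DynamicWindowSession {

    public static KieSession buildSession(int windowSeconds) {
        // Splice the user-supplied duration into the rule text before compiling it.
        String drl =
            "import com.example.MetricEvent;\n" +
            "declare MetricEvent @role( event ) end\n" +
            "rule \"Average over sliding window\"\n" +
            "when\n" +
            "    accumulate( MetricEvent( $v : value ) over window:time(" + windowSeconds + "s);\n" +
            "                $avg : average( $v ) )\n" +
            "then\n" +
            "    System.out.println( \"Average: \" + $avg );\n" +
            "end\n";

        // Stream mode is required for sliding windows.
        return new KieHelper()
                .addContent(drl, ResourceType.DRL)
                .build(EventProcessingOption.STREAM)
                .newKieSession();
    }
}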
To be absolutely dynamic, you'll have to implement the "window" mechanism explicitly. Make the timestamp an attribute of the event and set it explicitly: then you can base reasoning on timestamp differences, retract old events explicitly and compute the average over all that's left using a simple accumulate CE.
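A hedged DRL sketch of that explicit variant (MetricEvent, its value and timestamp fields, and the Clock fact holding the configurable duration are illustrative names):

rule "Retract expired events"
when
    Clock( $now : now, $window : windowMillis )
    $e : MetricEvent( timestamp < ($now - $window) )
then
    retract( $e );   // delete( $e ) in newer versions
end

rule "Average over remaining events"
when
    accumulate( MetricEvent( $v : value );
                $avg : average( $v ) )
then
    System.out.println( "Average over window: " + $avg );
end

The Clock fact would have to be updated periodically from application code so the retraction rule is re-evaluated as time advances.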
Atomically might not be the right word. When modelling cellular automata or neural networks, you usually have two copies of the system state: one is the current state, and one is the state of the next step that you are updating. This ensures that the state of the system as a whole remains unchanged while running all of the rules to determine the next step. For example, if you run the rules for one cell/neuron to determine its state for the next step, and then run the rules for the next cell, its neighbor, you want the input for those rules to be the current state of the neighbor cell, not its updated state.
This may seem inefficient, since each step requires you to copy all of the current-step states to the next-step states before updating them, but it is important to do this to accurately simulate the system as if all cells/neurons were actually being processed simultaneously, and thus all inputs for rules/firing functions were the current states.
Something that has bothered me when designing rules for expert systems is how one rule can run and update some facts that should trigger other rules to run; you might have 100 rules queued up to run in response, with salience used as a fragile way to ensure the really important ones run first. As these rules run, the system changes more. The state of the facts is constantly changing, so by the time you get to processing the 100th rule, the state of the system has changed significantly since the time it was added to the queue, when it was really responding to the first fact change. It might have changed so drastically that the rule doesn't have a chance to react to the original state of the system when it really should have. Usually, as a workaround, you carefully adjust its salience, but then that moves other rules down the list and you run into a chicken-and-egg problem. Other workarounds involve adding "processing flag" facts that serve as a locking mechanism to suppress certain rules until other rules have processed. These all feel like hacks and cause rules to include criteria beyond just the core domain model.
If you built a really sophisticated system that modeled a problem accurately, you would really want the changes to the facts to be staged to a separate "updates" queue that doesn't affect the current facts until the rule queue is empty. So let's say you make a fact change that fills the queue of rules to run with 100 rules. All of these rules would run, but none of them would update facts in the current fact list; any change they make gets queued to a change list, and that ensures no other rules get activated while the current batch is processing. Once all rules are processed, the fact changes get applied to the current fact list, all at once, and that triggers more rules to be activated. Rinse and repeat. So it becomes much like how neural networks or cellular automata are processed: run all rules against an unchanging current state, queue changes, and after running all rules apply the changes to the current state.
Is this mode of operation a concept that exist in the academic world of expert systems? I'm wondering if there is a term for it.
Does Drools have the capability to run in a way that allows all rules to run without affecting the current facts, and queue fact changes separately until all rules have run? If so, how? I don't expect you to write the code for me, but just some keywords of what it's called or keywords in the API, some starting point to help me search.
Do any other expert/rule engines have this capability?
Note that in such a case, the order rules run in no longer matters, because all of the rules queued to run will all be seeing only the current state. Thus as the queue of rules is run and cleared, none of the rules see any of the changes the other rules are making, because they are all being run against the current set of facts. Thus the order becomes irrelevant and the complexities of managing rule execution order go away. All fact changes are pending and not applied to the current state until all rules have been cleared from the queue. Then all of those changes are applied at once, and thus cause relevant rules to queue again. So my goal is not to have more control over the order that rules run in, but to avoid the issue of rule execution order entirely by using an engine that simulates simultaneous rule execution.
If I understand what you describe:
You have one fact that is managed by many rules
Each rule should apply to the initial value of your fact and has no right to modify the fact value (so as not to affect other rules' executions)
You then batch all the updates made by the rules on your fact
Other rules apply to this new fact value in a similar manner, "simultaneously"
It seems to me that this is the Unit of Work design pattern, just like Hibernate implements it (and many ORMs do, in fact): http://www.codeproject.com/Articles/581487/Unit-of-Work-Design-Pattern
Basically, you store all the changes in memory (in a "technical" fact, for instance) and then, once all the rules based on the initial value have been fired, execute a "transaction" that updates the fact value, and so on. Hibernate does that with its session: you modify your attached object, and only when required does it execute the update query against the database; not every modification of the Java object produces a query.
Still, you will have trouble if updates conflict (the same fact field modified with different values: which one do you choose? It's the same as a source version control conflict). You will have to define a deterministic way to order updates, but it only has to be defined once and is then available to all rules; for non-conflicting changes it will work seamlessly.
This workaround may or may not work, based on your rather vague description. If you really are concerned about rules triggering further activations, why not queue the intermediate state yourself, and once the current evaluation is complete, insert those new facts into the working memory?
You would have to invoke fireAllRules() after inserting each fact, though, which could be quite expensive. Then, in the rules, rather than inserting the facts directly, push them into a queue. Once the above call returns, walk through the queue doing the same (or do it after inserting the original facts completely...).
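A rough sketch of that queueing idea (the pendingFacts global and the driver loop are my own illustration, not a built-in Drools feature; the DRL would need a matching "global java.util.Queue pendingFacts" declaration, and its rules would add to that queue on the right-hand side instead of inserting directly):

import java.util.ArrayDeque;
import java.util.Queue;
import org.kie.api.runtime.KieSession;

public class TwoPhaseDriver {

    public static void run(KieSession session, Iterable<Object> initialFacts) {
        Queue<Object> pending = new ArrayDeque<>();
        session.setGlobal("pendingFacts", pending);

        for (Object fact : initialFacts) {
            session.insert(fact);
        }

        boolean appliedChanges;
        do {
            // Fire every activation against the unchanged state; rules only enqueue changes.
            session.fireAllRules();

            // Apply the queued changes as one batch, which may activate further rules.
            appliedChanges = !pending.isEmpty();
            for (Object changed : new ArrayDeque<>(pending)) {
                session.insert(changed);
            }
            pending.clear();
        } while (appliedChanges);
    }
}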
I would imagine that this will be quite slow. To speed it up, you could have multiple parallel working memories with the same rules and evaluate multiple facts in one go into several queues, etc. But things get pretty hairy...
Anyway, just an idea that's too long for the comments...
If the subject sounds confusing, that is because the problem itself is very confusing to us. Here is the thing.
We have an application that leverages the Drools rule engine to evaluate Java beans - fact objects in Drools' terms - on their field values and update a particular flag field within each bean to "true" or "false" according to the evaluation result. All the evaluations and update operations are defined in the template.
The way it invokes Drools is like this: first it creates a stateful session before the first use. When we have a list of beans, we insert them one by one into the session and call fireAllRules(). After firing the rules, we keep the session for later use. Once we have another batch of beans, we do the same again, and again, and again...
This seemed to make sense. But later, during testing, we found that although the rule engine worked fine for the first batch, the following batches didn't. Some beans were mistakenly updated; that is, even though no fields matched any rules, the flag was still updated to true.
Then we thought maybe we should not reuse the session, so we put all the beans from all batches into one big list. But soon we found that the problematic beans still got the wrong update. What's weirder, if we run this test on different machines, the problematic beans can be different. Yet if we test any of the problematic beans on its own in a unit test, everything works fine.
I hope I have explained the problem. We are new to Drools; maybe we did something wrong somewhere without knowing it. Could anyone point us in the right direction? That would be a very big help!
It sounds to me as though you're not clearing out working memory after each 'fireAllRules'.
If you use a stateful session, then every fact which you insert will remain in working memory until you explicitly retract it. Therefore every time you fire rules, you are re-evaluating the original set of facts, plus the new ones.
It might be useful to add a little debugging to your code. Using session.getObjects(), you will be able to see what facts are in working memory before and after execution of the rules. This should indicate what is not being retracted between evaluations.
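A hedged sketch combining both suggestions, printing the working-memory contents around each firing and clearing the session between batches (the class name and bean handling are placeholders for your own code):

import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.rule.FactHandle;

public class BatchEvaluator {

    public static void evaluateBatch(KieSession session, Iterable<Object> beans) {
        for (Object bean : beans) {
            session.insert(bean);
        }

        System.out.println("Facts before firing: " + session.getObjects().size());
        session.fireAllRules();
        System.out.println("Facts after firing: " + session.getObjects().size());

        // Retract everything so the next batch is evaluated on its own,
        // not against all previously inserted beans.
        for (FactHandle handle : session.getFactHandles()) {
            session.delete(handle);
        }
    }
}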