I have a knowledge base for performing validation of my data model. Modification events from the UI are posted asynchronously to a separate thread that updates the knowledge base and fires the rules. Validation errors trigger a logical insert of an error object. I can collect these and post events asynchronously back to the UI thread. However, to make it easier to keep the UI up-to-date, I also want to post an event when the user fixes an error – i.e. when an error object is retracted from the knowledge base.
I have two ideas for how to do this, neither of which I like:
I could listen to working memory events from procedural code, but that would violate the encapsulation of the validation functionality within the knowledge base.
Alternatively, I could insert a flag object paired with my logical insertion of an error object and write a rule that detects un-paired flags, retracts them, and fires the "error fixed" event.
Is there a clean and simple way to activate a rule based on the logical retraction of an error object as described above?
Self-answering so that I can link to this later and find out if there is a better way to do it.
Here's the approach I wound up taking:
When a validation rule is triggered, insertLogical an object with a unique id representing the validation error (e.g. ValidationMessage).
ValidationMessage has a property "flagged", which defaults to false.
Define a rule that triggers on the existence of unflagged ValidationMessages. In the RHS, flag the message, make an onAssert call to a global event handler, and insert a second object (e.g. ValidationMessageFlag) with the same id as the ValidationMessage.
Define a rule that triggers on the existence of a ValidationMessageFlag when no corresponding ValidationMessage (with the same id) exists. In the RHS, call onRetract on the global event handler and retract the ValidationMessageFlag.
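In DRL, that pair of rules might look roughly like this (the handler global, its onAssert/onRetract methods, and the id accessors are assumptions about your model, named to match the scheme above):

```
global com.example.ValidationEventHandler handler

rule "Flag new ValidationMessage"
when
    $msg : ValidationMessage( flagged == false )
then
    modify( $msg ) { setFlagged( true ) }
    handler.onAssert( $msg );
    insert( new ValidationMessageFlag( $msg.getId() ) );
end

rule "Detect retracted ValidationMessage"
when
    $flag : ValidationMessageFlag( $id : id )
    not ValidationMessage( id == $id )
then
    handler.onRetract( $id );
    retract( $flag );
end
```

When the logically inserted ValidationMessage goes away, the orphaned flag matches the second rule, which fires the "error fixed" notification and cleans up after itself.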
For example, I load a lot of Drools rules to run. How do I know which rule is running at a given moment, so that I can identify it?
Assuming you're talking about the right hand side of the rules, you'll want to use an AgendaEventListener. This is an interface which defines a listener that you can create that watches the Event Lifecycle. For more information about the event model, please refer to the Drools documentation.
The easiest way to do this would be to extend either DefaultAgendaEventListener or DebugAgendaEventListener. Both of these classes implement all of the interface methods. The Default listener implements each method as a "no-op", so you can override just the methods you care about. The Debug listener implements each method with a logging statement, logging the toString() of the triggering event to INFO. If you're just learning about the Drools lifecycle, hooking up the various Debug listeners is a great way to watch and learn how rules and events process in rules.
(Also, a cool thing about listeners is that they let you put breakpoints in methods that trigger when specific parts of the Drools lifecycle occur -- e.g. when a rule match is created. In general I find this makes listeners a great debugging tool.)
Anyway, what you'll want to do is create an event listener and then pay attention to one or more of these specific events:
BeforeMatchFired
AfterMatchFired
MatchCreated
Which events to pay attention to depend on where you think the issue is.
If you think the issue is in the "when" clause (left-hand side, LHS), the MatchCreated event is what is triggered when Drools evaluates the LHS and decides that this rule is valid for firing based on the input data. It is then put on, effectively, a priority queue based on salience. When the rule is the highest priority on the queue, it is picked up for firing -- at this point the BeforeMatchFired event is triggered; note that this is before the "then" clause (right-hand side, RHS) is evaluated. Then Drools will actually do the work on the RHS, and once it finishes, trigger the AfterMatchFired.
Things get a little more complicated when your rules do things like updates/retracts/etc -- you'll start having to consider potential match cancellations when Drools re-evaluates the LHS and decides that a rule is no longer valid to be fired per the facts in working memory. But in general, these are the tools you'll want to start with.
The way I would traditionally identify long-running rules is to start a timer in BeforeMatchFired and stop it in AfterMatchFired, then log the resulting rule execution time. Note that you want to be careful here to track the execution of the current rule by name: if your rule extends another rule, you might find that your execution flow goes BeforeMatchFired(Child) -> BeforeMatchFired(Parent) -> AfterMatchFired(Parent) -> AfterMatchFired(Child), so if you naively stop a single shared timer you'll run into issues. My preferred way of doing this is to track timers by rule name in a ThreadLocal or a thread-safe map, but you can go whichever route you'd like.
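As a sketch of that bookkeeping (plain Java; the class is illustrative, and the two methods are what you would call from the corresponding AgendaEventListener callbacks, passing event.getMatch().getRule().getName()):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Tracks execution time per rule name, keeping a stack of start times
// per rule so that nested firings (e.g. a child rule extending a
// parent) don't clobber each other. Drools fires rules on a single
// thread, so the per-rule deques themselves don't need extra locking.
public class RuleTimer {
    private final Map<String, Deque<Long>> starts = new ConcurrentHashMap<>();
    private final Map<String, Long> totals = new ConcurrentHashMap<>();

    // Call from AgendaEventListener.beforeMatchFired(...)
    public void beforeMatchFired(String ruleName) {
        starts.computeIfAbsent(ruleName, k -> new ArrayDeque<>())
              .push(System.nanoTime());
    }

    // Call from AgendaEventListener.afterMatchFired(...)
    public void afterMatchFired(String ruleName) {
        long start = starts.get(ruleName).pop();
        totals.merge(ruleName, System.nanoTime() - start, Long::sum);
    }

    public long totalNanos(String ruleName) {
        return totals.getOrDefault(ruleName, 0L);
    }
}
```

Because each rule name keeps its own stack, the parent/child interleaving described above pairs up correctly.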
If you're using a very new version of Drools (7.41+), there is a new library called drools-metric which you can use to identify slow rules. I haven't personally used this library yet because the newest versions of Drools have started introducing non-backwards-compatible changes in minor releases, but this is an option as well.
You can read more about drools-metric in the official documentation here (you'll need to scroll down a bit.) There's some tuning you'll need to do because the module only logs instances where the thresholds are exceeded. The docs that I've linked to include the Maven dependency you'll need to import, along with information about configuration, and some examples of the output and how to understand what it's telling you.
With Firestore Increment, what happens if you're using it in a Cloud Function and the Cloud Function is accidentally invoked twice?
To make sure that your function behaves correctly on retried execution attempts, you should make it idempotent by implementing it so that an event results in the desired results (and side effects) even if it is delivered multiple times.
E.g. the function is trying to increment a document field by 1
document("post/Post_ID_1")
    .updateData(["likes" : FieldValue.increment(1)])
So while Increment may be atomic it's not idempotent? If we want to make our counters idempotent we still need to use a transaction and keep track of who was the last person to like the post?
It will increment once for each invocation of the function. If that's not acceptable, you will need to write some code to figure out if any subsequent invocations are valid for your case.
There are many strategies to implement this, and it's up to you to choose one that suits your needs. The usual strategy is to use the event ID in the context object passed to your function to determine if that event has been successfully processed in the past. Maybe this involves storing that record in another document, in Redis, or somewhere that persists long enough for duplicates to be prevented (an hour should be OK).
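A minimal in-memory sketch of that event-ID guard (Java; the class and method names are invented, and in a real function the seen-ID record would live in Firestore, Redis, or similar rather than a map in memory):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Deduplicates deliveries by event ID: only the first call for a given
// ID performs the side effect (here, incrementing a counter).
public class IdempotentCounter {
    private final Map<String, Boolean> seenEventIds = new ConcurrentHashMap<>();
    private long likes = 0;

    public void handleEvent(String eventId) {
        // putIfAbsent returns null only on the first delivery of this ID
        if (seenEventIds.putIfAbsent(eventId, Boolean.TRUE) == null) {
            likes++;  // stand-in for the FieldValue.increment(1) update
        }
    }

    public long getLikes() { return likes; }
}
```

A retried delivery of the same event ID falls through without incrementing, which is exactly the idempotency the atomic increment alone doesn't give you.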
I need to stop a record from being inserted when a certain condition is met. I did this in the before insert of my trigger with the help of addError().
I have an issue with this solution: apex addError - remove default error message.
I want to remove this default error message and keep only my customized message. And I want to make it bold and a bit bigger too. I am now convinced that these things are not possible with addError().
Is there any alternative solution to this? I mean, to stop this record from being inserted?
The object in question is ObjectA, and ObjectA has a lookup to ObjectB. This ObjectB field on ObjectA has to be unique: no two ObjectA records can contain a lookup to the same ObjectB record. That's when I need to stop the insertion.
Can someone help me with this?
bold and a bit bigger too
Possible only if you have a custom UI (Visualforce / Aura Component / Lightning Web Component...). I wouldn't spend too much time on this. Focus on getting your logic right and making sure it also runs via the API (so not only manual inserts but also, for example, Data Loader is protected).
If addError doesn't do what you need, then consider adding a helper Text(18) field. Mark it unique and use a before insert, before update trigger (or workflow) to populate it with the value from that lookup.
Uniqueness should be handled by the database. Are you really ready to write that "before insert" perfectly? What about update? What about undelete (restore from the recycle bin)? What if I want to load 2 identical records at the same time? That trigger starts to look a bit more complex. And what if the user is not allowed to see the record with which the clash should be detected (sharing rules etc.)? I mean, your scenario sounds like the uniqueness should be "global", but you need a really good reason to write "without sharing" code in the trigger handler.
It's all certainly possible, but it's so much easier to just make it a unique field and call it a day. And tell the business to deal with the not-necessarily-friendly error message.
I am currently beginning my first real attempt at a DDD/CQRS/ES system after studying a lot of material and examples.
1) I have seen event sourcing examples where the Aggregates are Event Handlers and their Handle method for each event is what mutates the state on the object instance (They implement an IHandleEvent<EventType> interface for events that would mutate the state)
2) I have also seen examples where the Aggregates would just look like plain classic Entity classes modelling the domain.
Another Event Handler class is involved in mutating the state.
State, of course, is mutated on an aggregate by the event handlers in both cases: when rebuilding the aggregate from a repository call that fetches all the previous events for that aggregate, and when a command handler calls methods on the aggregate. In the latter case, though, I've seen examples where the events are published by the command handler rather than by the aggregate, which I'm convinced is wrong.
My question is: what are the pros and cons of methods (1) and (2)?
The job of receiving/handling a command is different from actioning it. The approach I take is to have a handler whose job is to receive the command. The command holds the AggregateId, which the handler can use to get all the events for that aggregate. It can then apply those events to the aggregate via a LoadFromHistory method. This brings the aggregate up to date and makes it ready to receive the command. So the short version is: option 2.
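A stripped-down sketch of that shape (Java; the aggregate, event, and method names are invented for illustration -- real implementations would use typed event classes):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal aggregate rebuilt from its event stream, per option (2):
// plain entity methods, with state mutated only by applying events.
public class PostAggregate {
    private int likes = 0;
    private final List<Object> uncommitted = new ArrayList<>();

    // Replay historical events without re-recording them
    public void loadFromHistory(List<Object> history) {
        history.forEach(this::apply);
    }

    // Command method: validate, then generate and apply a new event
    public void like() {
        Object event = "PostLiked";   // stand-in for a real event class
        apply(event);                 // mutate state via the event...
        uncommitted.add(event);       // ...and queue it for the repository
    }

    private void apply(Object event) {
        if ("PostLiked".equals(event)) likes++;
    }

    public int getLikes() { return likes; }
    public List<Object> getUncommittedEvents() { return uncommitted; }
}
```

The handler loads the history, calls the command method, then hands the uncommitted events to the event store; the aggregate never knows who invoked it.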
I have some posts that you may find helpful. The first is an overview of the flow of a typical CQRS/ES application -- not how it should be, just how they often are. You can find it at CQRS – A Step-by-Step Guide to the Flow of a typical Application!
I also have a post on how to build an aggregate root for CQRS and ES, if that's helpful. You can find it at Aggregate Root – How to Build One for CQRS and Event Sourcing.
Anyway, I hope that helps. All the best building your CQRS/ES app!
While I agree with Codescribler, I need to go a bit further into the details. ES is about expressing an entity's state as a stream of events (which will be stored). A message handler is just a service implementation which tells an entity what to do.
With ES, the entity implements its changes by generating one or more events and then applying them to itself. The entity doesn't know whether its changes come from a command handler or an event handler (it should "always" be a command handler, but well... sometimes it doesn't matter); it modifies state via its own events, which will be published by a service (or the event store itself).
But... in a recent app, for pragmatic reasons my ES entity accepted the commands directly, although the entity itself wasn't an implementation of a command handler. The handler would just relay the command to the entity.
So you can actually handle messages directly with an entity, but only as an implementation detail. I wouldn't recommend designating an entity as a command/event handler, as that violates the separation of concerns.
I have an event-driven architecture with about 1000 event types, and each event type can have multiple listeners -- around 2 per event on average, giving 2000 handlers.
For each event handler I have a rule that must be evaluated to decide whether handling that event is required or not.
handle(MyEvent xxx) {
    ksession.execute(xxx.getPayload());
    // Here I want only the rules that are named/identified against my event to be fired
}
I could add MyEvent to the LHS of the specific rule.
But I want the matching to be preprocessed, to save processing time after the event is fired.
Is there a better way to fire only a specific rule, rather than letting the underlying engine evaluate all 2000 rules to figure out which one is applicable to the Payload fact?
I can identify the rules for specific event handlers at design time, and I want to exploit this for better performance.
If you select which rule to fire from outside the rules engine, then there is absolutely no point in using a rules engine!
Evaluating which rules should activate is what Drools is designed to do. Fast. Drools does not need to evaluate 2000 rules every time you call fireAllRules, just because you have 2000 rules. When you create a knowledge base, the rules are compiled into a graph which lets the engine determine which rules might fire for certain matches. The graph is updated every time a fact is inserted, modified or retracted. It's a bit like having an indexed database table.
Of course, you can technically do this. Use the fireAllRules(AgendaFilter) method to filter the rules which may fire.
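For reference, the filtering call is a one-liner against the KieSession API. This is a sketch against the Drools 7 API (AgendaFilter has a single accept(Match) method, so a lambda works); allowedRuleNames stands in for whatever design-time mapping you build from event type to rule names:

```java
// Fire only the rules preselected for this event type; activations
// that don't pass the filter are simply not fired by this call.
int fired = kieSession.fireAllRules(
        match -> allowedRuleNames.contains(match.getRule().getName()));
```

But note the caveat above: you're paying for the engine's matching anyway, so filtering at fire time saves you little over just putting MyEvent in the LHS.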