Event Based Drools Rule - drools

I am new to Drools Fusion and am unable to create a rule for the condition below:
- Read the server log file (date, error message, etc.).
- If an event with Event Type: ERROR and Event Message: "Memory Error" is found, trigger some action (as of now just a System.out.println).
- Within 1 hour, it should not trigger the action again for the same Event Message and Event Type (if it is found in the log file again).
- After 1 hour, if the same event is found, it has to trigger the action again.
Note: the date and time specified in the log file have to be used.
Any help would be appreciated.

I'm not sure exactly what you're looking for, so I'll respond conceptually. I'm going to assume you are trying to do everything within the Drools framework.
For Drools to be constantly aware of the server log, you will need to run a stateful knowledge session and continually insert new facts into it. These facts would be derived from the server log.
It sounds like you want to model events. Make an Event class; for this example, the class should probably have "type" and "message" fields. Presumably you would insert new Event objects from code that is constantly pulling information from the server log (by reading a file, through REST, or whatever).
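For instance, here is a minimal sketch of such a class, assuming an extra logDate field to carry the timestamp parsed from the log file (the package and all field names are illustrative):

package com.example;

import java.util.Date;

// Minimal fact class; "type", "message" and the extra "logDate" field
// (the date & time parsed from the log line) are assumed names.
public class Event {
    private final String type;     // e.g. "ERROR"
    private final String message;  // e.g. "Memory Error"
    private final Date logDate;    // timestamp taken from the log file itself

    public Event(String type, String message, Date logDate) {
        this.type = type;
        this.message = message;
        this.logDate = logDate;
    }

    public String getType() { return type; }
    public String getMessage() { return message; }
    public Date getLogDate() { return logDate; }
}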
In order to do time-based logic you can use cron expressions. You can also use Calendar in more recent versions of Drools.
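Here is a brief sketch of doing it with cron, assuming the Drools 5.x API, Fusion's STREAM mode, and the Event class above; the package, rule name, @expires window, and exact cron schedule are illustrative assumptions:

import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseConfiguration;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.conf.EventProcessingOption;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;

public class LogMonitor {
    public static void main(String[] args) {
        // The rule matches "Memory Error" events; the cron timer gates the
        // consequence to at most once per hour, and @expires retires events
        // older than an hour. @timestamp makes Drools use the log file's
        // own date & time instead of the insertion time.
        String drl =
            "package com.example.rules\n" +
            "import com.example.Event\n" +
            "declare Event\n" +
            "    @role( event )\n" +
            "    @timestamp( logDate )\n" +
            "    @expires( 1h )\n" +
            "end\n" +
            "rule \"Memory error alert\"\n" +
            "    timer (cron: 0 0 * * * ?)\n" +
            "when\n" +
            "    $e : Event( type == \"ERROR\", message == \"Memory Error\" )\n" +
            "then\n" +
            "    System.out.println(\"Memory error at \" + $e.getLogDate());\n" +
            "end\n";

        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newByteArrayResource(drl.getBytes()), ResourceType.DRL);
        if (kbuilder.hasErrors()) {
            throw new IllegalStateException(kbuilder.getErrors().toString());
        }

        // STREAM mode so Drools Fusion treats Event facts as events in time.
        KnowledgeBaseConfiguration config = KnowledgeBaseFactory.newKnowledgeBaseConfiguration();
        config.setOption(EventProcessingOption.STREAM);
        KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase(config);
        kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());

        StatefulKnowledgeSession session = kbase.newStatefulKnowledgeSession();
        // Your log-tailing code would keep inserting facts as lines are parsed:
        // session.insert(new Event("ERROR", "Memory Error", parsedDate));
        session.fireUntilHalt(); // typically run on its own thread
    }
}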

Related

Version of an aggregate in event sourcing

According to event sourcing, when a command is handled, all resulting domain events have to be stored. Per event, the system must increase the version of the aggregate. My event store is something like this:
(AggregateId, AggregateVersion, Sequence, Data, EventName, CreatedDate)
(AggregateId, AggregateVersion) is the key.
In some cases it does not make sense to increase the version of an aggregate. For example, a command registers a user and raises RegisteredUser, WelcomeEmailEvent, and GiftCardEvent.
How can I handle this problem?
Avoid confusing your representation-of-information-changes events with your publishing-for-use-elsewhere events.
"Event sourcing", as commonly understood in the domain-driven-design and CQRS space, is a kind of data model. We're talking specifically about the messages an aggregate sends to its future self that describe its own changes over time.
It's "just" another way of storing the state of the aggregate, same as we would do if we were storing information in a relational database, or a document store, etc.
Messages that we are going to send to other components and then forget about don't need to have events in the event stream.
In some cases, there can be confusion when we haven't recognized that there are multiple different processes at work.
A requirement like "when a new user is registered, we should send them a welcome email" is not necessarily part of the registration process; it might instead be an independent process that is triggered by the appearance of a RegisteredUser event. The information that you need to save for the SendEmail process would be "somewhere else" - outside of the Users event history.
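For instance, here is a minimal sketch of such an independent process (all names are assumptions for illustration): it subscribes to RegisteredUser and keeps whatever bookkeeping it needs in its own storage.

// Illustrative names only. The process reacts to the domain event;
// nothing about the email ends up in the User's event stream.
interface EmailGateway {
    void sendWelcome(String address);
}

record RegisteredUser(String userId, String email) {}

class WelcomeEmailProcess {
    private final EmailGateway gateway;

    WelcomeEmailProcess(EmailGateway gateway) {
        this.gateway = gateway;
    }

    // Invoked by whatever pub/sub mechanism delivers domain events.
    void on(RegisteredUser event) {
        gateway.sendWelcome(event.email());
        // Sent flags, retries, etc. live in this process's own storage,
        // "somewhere else" - outside the Users event history.
    }
}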
An event changes the state of an aggregate, and therefore changes its version. If the state is not changed, there should be no event for this aggregate.
In your example, I would ask myself: if WelcomeEmailEvent does not change the state of the User aggregate, then whose state does it change? Perhaps some other aggregate, such as an EmailNotification service that cares about successful or failed email attempts. In that case I would make it an event of the aggregate whose state it changes, and it will affect the version of that aggregate.
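A small sketch of that rule of thumb (all names are illustrative): every event appended to the aggregate's stream bumps its version, and anything that does not change state never enters the stream.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: version == number of state-changing events stored.
class UserAggregate {
    private final List<Object> uncommitted = new ArrayList<>();
    private long version = 0;
    private String email;

    void register(String userId, String email) {
        // A real state change: record the event, which bumps the version.
        applyAndRecord(new RegisteredUser(userId, email));
        // No WelcomeEmailEvent here: it would not change this aggregate's
        // state, so it belongs in neither this stream nor this version.
    }

    private void applyAndRecord(Object event) {
        apply(event);
        uncommitted.add(event);
    }

    private void apply(Object event) {
        if (event instanceof RegisteredUser e) {
            this.email = e.email();
        }
        version++; // one increment per stored event
    }
}

record RegisteredUser(String userId, String email) {}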

CQRS/Event Sourcing: Should events (types) be shared?

Should events be shared? I am experimenting with CQRS and Event Sourcing and am wondering whether events (the types) should be shared/defined between services.
Case:
A request comes in and a new createUser command is pushed onto the 'commands' event log. Service A (business logic) fetches this command and generates the data of the new user. Once the new user is created, it pushes the new data onto the 'events' event log with the event name newUser. Service B (projector) notices the new event and starts processing it.
Here lies my question. Should we define, for every event type (in this case newUser), the logic that needs to run in order to update the materialised view? In the example below there are two types of events, and for every event the actions that need to happen are defined. In this case the event types are defined in both the logic service and the projector service.
# <- onEvent
switch event.type:
    case "newUser":
        putUser(firstName=data.firstName, lastName=data.lastName)  # put this data in the database
    case "updateUserFirstName":
        updateUser(where id = 1, firstName=data.firstName)
Or is it a good idea to define in the event the type of operation that needs to be performed? In that case event types are not shared, and the projector service is able to handle unknown/new events without any modification.
# <- onEvent
switch event.operation:
    case "create":
        putUser(...)
    case "update":
        updateUser(...)  # update only the data defined in the event
Is option 2 viable? Or will I run into issues when choosing this strategy?
Events reflect something that has happened. They are usually named in the past tense - userCreated.
Generic events (or one event type per entity) have a number of drawbacks:
- Finding proper past-tense names for event types becomes more difficult.
- You lose some expressivity, since the whole domain meaning is no longer immediately apparent just from looking at the event type.
- It is impossible to subscribe to events in a fine-grained, streamlined way, because you need to "open the envelope" to find out which specific event you're dealing with.
- There is a discrepancy between the events you talk about with domain experts (for instance during Event Storming sessions) and the way they are encoded in your types, messages, etc.
I wouldn't recommend it except maybe in a very free-form/dynamic system where the entities are not known in advance.
I recommend using the event type to determine which business logic/rules will consume the event.
As @guillaume31 mentioned, use the past tense to name your events. But if you want to plan for the future, you should also version your event types. For example, you can name your event types "userCreated_v1" or "userFirstNameChanged_v1". This gives you the ability to change the structure of event messages in the future and easily associate new business logic/rules with the new events.
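A brief sketch of how that dispatch might look in the projector, assuming versioned, past-tense type names and an event envelope carrying a string type plus a loose data payload (all names are illustrative):

import java.util.Map;

// Illustrative event envelope: a type tag plus a loose key/value payload.
record EventEnvelope(String type, Map<String, String> data) {}

class UserProjector {
    void onEvent(EventEnvelope event) {
        switch (event.type()) {
            case "userCreated_v1" ->
                insertUser(event.data().get("firstName"), event.data().get("lastName"));
            case "userFirstNameChanged_v1" ->
                updateFirstName(event.data().get("id"), event.data().get("firstName"));
            // A changed schema gets a new tag ("userCreated_v2") and its own
            // case, so old and new events can coexist in the same stream.
            default -> { /* unknown event: log and skip, or fail fast */ }
        }
    }

    private void insertUser(String firstName, String lastName) { /* write to the read DB */ }
    private void updateFirstName(String id, String firstName) { /* update the read DB */ }
}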

Creation Concurrency with CQRS and EventStore

Baseline info:
I'm using an external OAuth provider for login. If the user logs in with the external OAuth provider, they are OK to enter my system. However, this user may not yet exist in my system. It's not really a technology issue, but I'm using JOliver EventStore, for what it's worth.
Logic:
I'm not given a guid for new users; I just have an email address.
I check my read model before sending a command: if the user email exists, I issue a Login command with the ID; if not, I issue a CreateUser command with a generated ID. My issue is in the case of a new user.
A save occurs in the event store with the new ID.
Issue:
Assume two create commands are somehow issued before the read model is updated, due to a browser refresh or some other anomaly that occurs before consistency with the read model is achieved. That's OK; that's not my problem.
What Happens:
Because the new ID is a Guid comb, there's no chance the event store will know that these two CreateUser commands represent the same user. By the time they get to the read model, the read model will know (because they have the same email) and can merge the two records or take some other compensating action. But now my read model is out of sync with the event store which still thinks these are two separate entities.
Perhaps it doesn't matter because:
- Replaying the events will have the same effect on the read model, so that should be OK.
- Because both commands are duplicate "Create" commands, they should contain identical information, so it's not like I'm losing anything in the event store.
Can anybody illuminate how they handled similar issues? If some compensating action needs to occur does the read model service issue some kind of compensation command when it realizes it's got a duplicate entry? Is there a simpler methodology I'm not considering?
You're very close to what I'd consider a proper possible solution. The scenario, if I may summarize, is somewhat like this:
1. Perform the OAuth-entication.
2. Using the read model, decide between a recurring visitor and a new visitor, based on the email address.
3. In case of a new visitor, send a RegisterNewVisitor command message that gets handled and stored in the event store.
4. Assume there is some concurrency going on that, for the same email address, causes two RegisterNewVisitor messages, each containing what the system thinks is the key associated with the email address. These keys (guids) are different.
5. Detect this duplicate-key issue in the read model and merge both read model records into one record.
Now, instead of merging the records in the read model, why not send a ResolveDuplicateVisitorEmailAddress { Key1, Key2 } command towards your domain model, leaving it up to the domain model (the codified form of the business decision to be taken) to resolve this issue? You could even have a dedicated read model to deal with these kinds of issues; the other read model will just get a kind of DuplicateVisitorEmailAddressResolved event and project it into the proper records.
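A rough sketch of that compensation flow, with all message and handler names assumed for illustration (this is not the JOliver EventStore API):

import java.util.UUID;

// Illustrative message shapes for the compensation flow.
record ResolveDuplicateVisitorEmailAddress(UUID key1, UUID key2) {}
record DuplicateVisitorEmailAddressResolved(UUID survivingKey, UUID mergedKey) {}

class VisitorDuplicationHandler {
    // The business decision (which registration survives) is codified here
    // in the domain model rather than in the read model.
    void handle(ResolveDuplicateVisitorEmailAddress cmd) {
        UUID surviving = cmd.key1(); // e.g. keep the earlier registration
        UUID merged = cmd.key2();
        append(new DuplicateVisitorEmailAddressResolved(surviving, merged));
    }

    // Read models then project this event into the proper records.
    private void append(Object event) { /* write to the event store stream */ }
}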
Word of warning: you've asked a technical question and I gave you a technical, possible solution. In general, I would not apply this technique unless I had some business indicator that it is worth investing in (what's the frequency of a user logging in concurrently for the first time? Maybe solving it this way is just a way of ignoring the root cause: flaky OAuth, no register-new-visitor process in place, etc.). There are other technical solutions to this problem, but I wanted to give you the one closest to what you already have in place. They range from registering new visitors sequentially to keeping an in-memory projection of the visitors not yet in the read model.

Obtain ServiceDeploymentId in TrackingParticipant

In WF4, I've created a descendant of TrackingParticipant. In the Track method, record.InstanceId gives me the GUID of the workflow instance.
I'm using the SqlWorkflowInstanceStore for persistence. By default records are automatically deleted from the InstancesTable when the workflow completes. I want to keep it that way to keep the transaction database small.
This creates a problem for reporting, though. My TrackingParticipant will log the instance ID to a reporting table (along with other tracking information), but I'll want to join to the ServiceDeploymentsTable. If the workflow is complete, that GUID won't be in the InstancesTable, so I won't be able to look up the ServiceDeploymentId.
How can I obtain the ServiceDeploymentId in the TrackingParticipant? Alternately, how can I obtain it in the workflow to add it to a CustomTrackingRecord?
You can't get the ServiceDeploymentId in the TrackingParticipant. Basically the ServiceDeploymentId is an internal detail of the SqlWorkflowInstanceStore.
I would set the SqlWorkflowInstanceStore to not delete the workflow instance upon completion, and delete it myself at some later point in time after saving the ServiceDeploymentId with the InstanceId.
An alternative is to use auto cleanup with the SqlWorkflowInstanceStore and retrieve the ServiceDeploymentId when the first tracking record is generated. At that point the workflow is not complete, so the original instance record is still there.

How do I prevent duplicate values in my read database with CQRS

Say that I have a User table in my read database (using SQL Server). In a regular read/write database I can put an index on the table to make sure that two users aren't added with the same email address. So if I try to add a user with an email address that already exists in my table for a different user, SQL Server will throw an exception.
In CQRS I can't do that, since if I decouple the write to my read database from the domain model by putting it on an asynchronous queue, I won't get the exception thrown back to me. I will return "OK" to the UI, and the user will think that he has been added to the database, when in fact he will never be added to the read database.
I can do a search in the read database, checking if there is already a user in my database with that email address, and if there is one, throw an exception back to the UI. But if two users press the save button at the same time, I will do two checks against the database, see that there isn't any user in the database with that email address, and send back that it's OK. Both commands go on my queue, and one will later fail (by hitting the unique index).
Am I supposed to load all users from my event source (it's a SQL Server) and then do the check on that collection, to see if I have a user that already has this email address? That sounds a bit crazy to me...
How have you people solved it?
The only way I can see is to not use an asynchronous queue but a synchronous one, but that will affect performance really badly, especially when you have many "read storages" to write to...
Need some help here...
Searching for "CQRS set-based validation" will give you solutions to this issue.
Greg Young posted about the business impact of embracing eventual consistency: http://codebetter.com/gregyoung/2010/08/12/eventual-consistency-and-set-validation/
Jérémie Chassaing posted about discovering missing aggregate roots in the domain: http://thinkbeforecoding.com/post/2009/10/28/Uniqueness-validation-in-CQRS-Architecture
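One pattern those links describe is making the uniqueness check synchronous on the command side, backed by a small dedicated table with a unique key. A rough sketch, where the ReservedEmails table and all other names are illustrative assumptions:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class EmailReservations {
    // Assumes a table like: CREATE TABLE ReservedEmails (Email nvarchar(256) PRIMARY KEY)
    // The unique key makes SQL Server reject the second concurrent writer,
    // so the command can be rejected before anything reaches the queue.
    boolean tryReserve(Connection conn, String email) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO ReservedEmails (Email) VALUES (?)")) {
            ps.setString(1, email);
            ps.executeUpdate();
            return true;
        } catch (SQLException e) {
            // 2627 = primary key / unique constraint violation in SQL Server;
            // anything else is rethrown rather than treated as a duplicate.
            if (e.getErrorCode() == 2627) {
                return false;
            }
            throw e;
        }
    }
}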
Related stack overflow questions:
How to handle set based consistency validation in CQRS?
CQRS Validation & uniqueness