I am currently stuck on a problem and hope you can help me with some ideas or best practices here...
Let's say I have an application using Event Sourcing and CQRS. I have:
A green pot containing a number
A red pot containing a number
A setting that defines which pot's number should be displayed in the UI.
A calculated result that contains the number to be displayed
The current state of my application is:
Red pot: 10
Green pot: 20
Setting: Red
Result: 10 (value from red pot)
I have a Calculation service that subscribes to the Red Pot, Green Pot, and Settings services. I have a View Updater service that additionally subscribes to the Calculation service and updates the read model on any changes.
Now the following events arrive:
Green Pot: 25
Setting: Green Pot
The View Updater service is a bit busy today and has some delay in updating the read model.
The Calculation service handles the Green Pot event. It fetches the setting from the read model (which is still set to Red) and does nothing.
After that, the Calculation service handles the Settings event. It fetches the Green Pot value (which is still 20) from the read model and sends a new event (Result: 20).
After that, the View Updater handles both events and updates the read model.
In this case, my application is not consistent - not even eventually.
Do you have any thoughts on how to handle something like this? Thanks for any ideas :-)
My first thought is that it isn't clear that you share the common understanding of eventual consistency. Martin Kleppmann's talk emphasized three ideas:
Eventual Delivery
Convergence
No data loss
My second thought is that you seem to have introduced a race condition into your Calculation service design. If RedPot, GreenPot, and Setting are separate aggregates/streams, then there really isn't any time binding between them; the arrival order of events from those sources is inherently racy.
Udi Dahan wrote Race Conditions Don't Exist:
A microsecond difference in timing shouldn’t make a difference to core business behaviors.
This is where convergence comes into play: you need to design your solutions so that they reach the same results even though the timing of messages is different. This often means that you need to include in your model some concept of a clock, or time, and some way of defining the interval in which some result is true.
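For example (a hedged sketch, not from the original design): one way to model that interval is to carry the "as of" basis inside the result itself, so any consumer can tell exactly which input versions a value reflects.

// Hypothetical illustration: a result that names the event positions it was
// computed from, so "Result: 20" can never be confused with a result that is
// based on stale inputs.
record CalculationResult(int displayedValue,
                         long greenPotVersion,
                         long redPotVersion,
                         long settingsVersion) { }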
As you've defined the problem, the Results produced by the calculation service are more about caching a snapshot than maintaining a history. So a different way to think about the problem is that the calculation service should not accept arbitrary data from the read model, but rather data that aligns with the events it is consuming.
Calculation service: "Hi, I can haz green pot as of event #40?"
Read model: "Sorry, no can haz, retry-after: 10 minutes"
Calculation service: "OK, I'll try again later."
The Calculation service should subscribe to and receive all events (GreenPotUpdated, RedPotUpdated and SettingsChanged).
It is not OK for the Calculation service to depend on an eventually consistent read model. Instead, it should maintain its own private state, ensuring that the events are received in the correct order.
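A minimal sketch of that idea (hypothetical names, assuming a single ordered subscription that delivers all three event types): the service folds each event into its own state and recomputes the result, so the outcome depends only on the order of the stream, never on a lagging read model.

// Hedged sketch: a Calculation service that never consults the shared read
// model. It folds GreenPotUpdated, RedPotUpdated and SettingsChanged into
// private state and recomputes the result after every event.
final class CalculationService {
    private int greenPot;
    private int redPot;
    private String setting = "Red";   // assumed initial setting

    // Called by the ordered event subscription; returns the new result.
    int handle(Object event) {
        if (event instanceof GreenPotUpdated e) {
            greenPot = e.value();
        } else if (event instanceof RedPotUpdated e) {
            redPot = e.value();
        } else if (event instanceof SettingsChanged e) {
            setting = e.pot();
        }
        // Replaying the same stream always converges to the same value.
        return "Green".equals(setting) ? greenPot : redPot;
    }

    record GreenPotUpdated(int value) { }
    record RedPotUpdated(int value) { }
    record SettingsChanged(String pot) { }
}

With this design, the scenario above converges: handling Green Pot: 25 leaves the result at the red value, and handling Setting: Green Pot then yields 25, regardless of how far behind the View Updater is.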
Related
I am new to Drools / Fusion (7.x) and am not sure how to solve this requirement. Assume I have event objects as Event{long: timestamp, id: string} where id identifies a physical asset (like a tractor) and timestamp represents the time the event fired relative to the asset. In my scenario these Events do not arrive in my system in 'real time', meaning they can be seconds, minutes or even days late. And my rules system needs to monitor multiple assets. Given this, when rules are evaluated the clock needs to be relative to the asset being monitored; it can't be a clock that spans assets.
I'm aware of Pseudo Clock, is there a way to assign Pseudo clocks per Asset?
My assumption is that a clock must always progress forward or temporal functions will not work properly. Take for example the following scenario:
Fact A for Asset 1 arrives at 1:00; it is inserted into memory and rules fired. Then Fact B arrives for the same Asset 1 at 2:00. It too is inserted and rules fired. Now Fact Z arrives for Asset 2 at 1:30 (-30 minutes from the clock). I'm assuming I shouldn't simply move the clock backwards and evaluate; furthermore, I'd want to set the clock back to 2:00 afterwards, since that was the "latest" data I received. Now assume I am monitoring thousands of assets, all sending data at different times...
The best way I can think of to address this is to keep a clock per asset and then save the engine state when each asset's data is evaluated. Can individual KieSessions have different clocks, or is the clock set at the container level?
Sample rule: When Fact 1 arrives after Fact 2 for the same Asset.
You're approaching the problem incorrectly. Regardless of whether you're using a realtime or pseudo clock, you're using a clock. You can't say "Fact #1 uses clock A, and Fact #2 uses clock B."
Instead you should be leveraging the metadata tags for events, specifically the @timestamp tag. This tag indicates to Drools that a specific field inside the event is actually the timestamp for the event, rather than the actual time the fact enters working memory.
For example:
import com.example.SampleEvent

declare SampleEvent
    @role( event )
    // this field is actually in the object; it's not the time the fact was inserted
    @timestamp( createdDateTime )
end
Not knowing anything about what your rules are actually doing, the major issue I can foresee is that if your rules rely on the temporal operators or define an expiry (@expires), they're not going to work and you'll need to redesign them. Especially for expirations: once an event expires, it is removed from working memory; when your out-of-band events come in, any previously expired events are already gone and can't be worked against.
Of course that concern would be true regardless of whether you use @timestamp or your original "different pseudo clock" plan. Either way you're going to have to manage the fact that events cannot live forever in working memory; you will eventually run out of resources and your system will crash. Events must be evicted at some point, so you'll need to design around that in both your models and your rules.
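For context, here is a hedged sketch of the session setup this answer assumes (packaging of the DRL via kmodule.xml is omitted; the classpath container is just one of several ways to build the KieBase):

import org.kie.api.KieBase;
import org.kie.api.KieBaseConfiguration;
import org.kie.api.KieServices;
import org.kie.api.conf.EventProcessingOption;
import org.kie.api.runtime.KieSession;

public class SessionSetup {
    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();

        // Temporal operators and @expires require stream mode.
        KieBaseConfiguration conf = ks.newKieBaseConfiguration();
        conf.setOption(EventProcessingOption.STREAM);

        KieBase kbase = ks.getKieClasspathContainer().newKieBase(conf);
        KieSession session = kbase.newKieSession();

        // Because SampleEvent declares @timestamp( createdDateTime ), the
        // engine takes the event's time from that field even when the fact
        // is inserted days after it happened.
        // session.insert(new SampleEvent("tractor-1", createdDateTime));
        session.fireAllRules();
        session.dispose();
    }
}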
I am modeling a ticket system with various SLAs. The model must contain several service blocks with different reaction times (from 2 to 32 hours). In the service blocks only working hours should be taken into account, so the service-block timeout should pause during non-working hours and on weekends. Could you please kindly tell me how I can realize this?
Thank you very much in advance!
I can think of two answers, one simplified but workable in many cases, the other more advanced and probably more accurate:
Simplified approach: I would set the model in hours and keep everything running as is, without any stops. Then, at the end of the simulation, if the total time is 100 hours and you know that you have 8 hours/day with 5 days/week, you know the total duration is 2.5 weeks. Of course, this might have limitations or might become more complex later on if you want day-specific actions (e.g. you want to differentiate between Monday, Tuesday, etc.).
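As a toy illustration of that conversion (using the 8 hours/day, 5 days/week figures from the example):

// Map 100 model-hours of pure working time back to calendar time.
double workingHours = 100.0;
double hoursPerWeek = 8.0 * 5.0;                     // 40 working hours per week
double calendarWeeks = workingHours / hoursPerWeek;  // = 2.5 weeks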
Advanced, more accurate approach: Create resources whose capacities are defined by a schedule and assign them to your services. Create a schedule and specify the working hours in it. Check the link below to learn more about schedules. I call this the more advanced approach because you need to make sure the schedule is defined correctly and that all elements in the model are properly controlled (e.g. non-service blocks such as sources, delays, etc.).
https://help.anylogic.com/topic/com.anylogic.help/html/data/schedule.html?resultof=%22%73%63%68%65%64%75%6c%65%73%22%20%22%73%63%68%65%64%75%6c%22%20
I personally would use the first approach if the model is rather simple and modeling working hours is enough for analysis. Otherwise, I'd go for option 2.
Finally, another option I'd like to highlight is the "suspend/resume" functions. I am only adding this because you asked how to stop the timeout; these functions specifically stop and resume it. But you'll need to define the times at which they are executed (through an event, for example).
Good day
I'm a new user trying to find my way with AnyLogic.
Any help with the following question will be appreciated.
Is it possible to start a model with initial values/quantities given to certain blocks/sections in the model? In other words, not have the model start from 0 but from the given values.
You can run a "warmup" period manually and save that as a model snapshot. In future runs, you can start off from that snapshot by loading it; see the help on model snapshots.
This is the general problem of model initialisation (e.g., if you're modelling a manufacturing facility, you may want the run to start with the facility at the state it would be at on 9am next Monday morning). There is no generic answer: what initialisation you need is 100% model-dependent (as is how easy/hard this is).
In particular, process models make this difficult because entities (agents) are expected to have flowed through the process up to the point they 'start' in. You can use things like extra initialisation-only Source/Enter blocks to 'inject' agents into the appropriate process points, but in most models it's not this easy: you will have all kinds of model state that needs to be made consistent with this (e.g., the agents flowing through the process might have attributes which have changed based on what's happened to them so far, so this would have to be made consistent).
That's why warm-up periods (letting the model run 'from empty' for a period until its state is qualitatively what you want as your starting point) are a common approach. Model snapshots can help you here (see Ben's answer) but they're not the only way of doing it. (You can also just 'reset' all your metrics/output gathering at the point when you determine the warm-up period has ended, i.e. you are effectively establishing a new 'time zero', but, again, exactly what you need to do is 100% model-dependent.)
I found time to be the best value to use as an event version.
I can merge perfectly independent events of different event sources on different servers whenever needed, without worrying about read-side event-order synchronization. I know which event (from server 1) happened before the other (from server 2) without the need for a global sequential event id generator that all read sides would depend on.
As long as the time is a globally ever-increasing event version, different teams in companies can act as distributed event sources or event readers, and everyone can always rely on the contract.
The world's simplest notification from a write side to subscribed read sides, followed by a query pulling the recent changes from the underlying write side, can simplify everything.
Are there any side effects I'm not aware of ?
Time is indeed increasing and you get a deterministic number; however, event versioning does not only serve the purpose of preventing conflicts. We always say that when we commit a new event to the event store, we send the new event version there as well, and it must match the expected version on the event store side, which must be the previous version plus exactly one. If there are a thousand or three million ticks between two events, I do not really care; this does not give me the information I need. But knowing whether I have missed an event along the way is critical. So I would not use anything other than an incremental counter, with events versioned per aggregate/stream.
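To make the "previous version plus exactly one" check concrete, here is a minimal sketch (an in-memory stand-in, not any particular event store's API):

import java.util.HashMap;
import java.util.Map;

// Per-stream optimistic concurrency: an append succeeds only when the
// caller's expected version matches the stream's current version, so the
// new version is exactly current + 1 and gaps are impossible to miss.
final class InMemoryEventStore {
    private final Map<String, Long> currentVersion = new HashMap<>();

    synchronized long append(String streamId, long expectedVersion, Object event) {
        long current = currentVersion.getOrDefault(streamId, 0L);
        if (current != expectedVersion) {
            throw new IllegalStateException(
                "expected version " + expectedVersion + " but stream is at " + current);
        }
        long next = current + 1;   // a counter, not a timestamp: every gap is visible
        currentVersion.put(streamId, next);
        // persisting (streamId, next, event) would go here
        return next;
    }
}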
This is a simplified example scenario.
I collect interval temperature data from a room's heating, let's say every minute.
Additionally, there is a light switch, which sends its status (on/off) when someone switches the light on or off. All events are stored in Crate with their timestamps.
I'd like to query all temperature readings while the light switch is in the "on"-state.
My design is to denormalize the data, so each temperature event gets a new field containing the light switch status. My query then breaks down to a simple filter. However, I only have events when someone presses the switch, but no continuous readings. So, I need to read out all light switch data, reconstruct the switch's state over time and update all temperature data accordingly.
Using Crate, is there a way of doing everything within Crate using Crate's SQL only, i.e. 'in-Crate' data updates? I do not want to set up and maintain external client programs for such operations.
In more complex scenarios, I may also face the problem of first reading a huge amount of data via a single client program in order to afterwards update other data stored within the same data store. This "bottleneck" design approach worries me. Any ideas?
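To illustrate the state reconstruction I mean, here is a toy sketch of the logic I would otherwise run in an external client (types and values are made up):

import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// Rebuild the switch's state over time from sparse on/off events, then tag
// each temperature reading with the switch state in effect at its timestamp.
final class SwitchStateJoin {
    public static void main(String[] args) {
        NavigableMap<Long, Boolean> switchEvents = new TreeMap<>();
        switchEvents.put(100L, true);    // light switched on at t=100
        switchEvents.put(250L, false);   // light switched off at t=250

        long[] temperatureTimestamps = {90L, 120L, 300L};
        for (long ts : temperatureTimestamps) {
            // latest switch event at or before this reading; default is "off"
            Map.Entry<Long, Boolean> last = switchEvents.floorEntry(ts);
            boolean lightOn = last != null && last.getValue();
            System.out.println("t=" + ts + " lightOn=" + lightOn);
        }
    }
}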
Thanks,
nodot