Domain Events vs. Event Aggregator vs. ... other (event handling)

I have a composite structure in my domain where the leaf node (Allocation) has a DurationChanged event that I would like to use at the top of my presentation layer view model structure (in the TimeSheetViewModel), and I am wondering what the best way is to get to it.
Options that come to mind include:
Subscribe to it in the TimeSheetComposite. Each composite is ultimately composed of Allocations, and the TimeSheetComposite is the Model to the TimeSheetViewModel. It seems I would also need an event in the TimeSheetComposite that gets fired when a child DurationChanged event is fired; the TimeSheetViewModel would subscribe to the latter event.
Ignore the DurationChanged event and just follow the INPC chain that bubbles up to the TimeSheetViewModel when AllocationViewModel.Amount is changed. I would lose a useful piece of information, namely the old Amount prior to the edit, but I can calculate the needed end results cheaply enough if necessary.
Make the DurationChanged event a Domain Event; I do not currently use domain events, but I sure like the concept and it looks like there is enough code in Udi's article to get started with it.
Set up some sort of Event Aggregator to publish & subscribe to DurationChanged. I am not yet sure what the difference is between Domain Events and Event Aggregators, or whether they are complementary or alternative approaches to the same problem. The implementation here using Rx looks promising (a rough sketch of what I mean follows this list).
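To make the aggregator idea concrete, here is roughly the shape I have in mind: hand-rolled and simplified, and the type names (EventAggregator, DurationChangedEvent) are mine, not from any particular library.

    using System;
    using System.Reactive.Linq;
    using System.Reactive.Subjects;

    // Hypothetical message type carrying the old value that INPC alone would lose.
    public class DurationChangedEvent
    {
        public Guid AllocationId { get; set; }
        public TimeSpan OldDuration { get; set; }
        public TimeSpan NewDuration { get; set; }
    }

    // Minimal hand-rolled aggregator: publishers push messages onto a Subject,
    // subscribers filter by message type with OfType.
    public class EventAggregator
    {
        private readonly Subject<object> _messages = new Subject<object>();

        public void Publish<TMessage>(TMessage message)
        {
            _messages.OnNext(message);
        }

        public IObservable<TMessage> GetEvent<TMessage>()
        {
            return _messages.OfType<TMessage>();
        }
    }

    // Usage from the TimeSheetViewModel:
    // _aggregator.GetEvent<DurationChangedEvent>()
    //            .Subscribe(e => RecalculateTotalsByDate());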
In this design, the TimeSheetViewModel needs to know when an Allocation.Duration has changed so it can get a new total of all allocation durations by date.
How would you provide the DurationChanged notice?
Cheers,
Berryl
(Figure: domain composite structure & event)
(Figure: presentation layer structure)

I wound up listening for the leaf event in the (TimeSheet)Composite and then essentially re-raising a similar event there, to make it easy for the (TimeSheet)ViewModel to listen to it.
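Roughly what that wiring looks like, simplified from my actual classes (so treat the member names as illustrative):

    using System;
    using System.Collections.Generic;

    // Leaf: raises DurationChanged whenever its Duration is edited.
    public class Allocation
    {
        public event EventHandler DurationChanged;

        private TimeSpan _duration;
        public TimeSpan Duration
        {
            get { return _duration; }
            set
            {
                if (_duration == value) return;
                _duration = value;
                var handler = DurationChanged;
                if (handler != null) handler(this, EventArgs.Empty);
            }
        }
    }

    // Composite: subscribes to each child and re-raises a similar event,
    // so the TimeSheetViewModel only has one thing to listen to.
    public class TimeSheetComposite
    {
        private readonly List<Allocation> _allocations = new List<Allocation>();

        public event EventHandler AllocationDurationChanged;

        public void Add(Allocation allocation)
        {
            allocation.DurationChanged += OnChildDurationChanged;
            _allocations.Add(allocation);
        }

        private void OnChildDurationChanged(object sender, EventArgs e)
        {
            var handler = AllocationDurationChanged;
            if (handler != null) handler(sender, e);
        }
    }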
When I understand DomainEvents / EventAggregators better I will revisit this one.
Cheers,
Berryl

Related

Snapshotting as Domain Event in Event Sourcing

I have some long-lived aggregates in my event sourcing model that are used constantly and will accumulate a large number of events. I am thinking about using snapshots to optimize the re-hydration of these aggregates; the aggregates in question are warehouses.
My question is whether or not I should produce a specific event for snapshotting, something like "WarehouseStateSnapshotted". In my current prototype, the snapshot state is saved by duplicated code in a few command handlers. I feel this is not the right place to handle it. I would rather dispatch an event for the snapshot to my service bus and have an event handler take care of saving the snapshot state. This may, however, violate the domain-driven pattern of events themselves. Have others created events for snapshots?
If this is not the right approach, should I at least move my snapshotting logic out of the command handlers and into the aggregate class?
Thanks!
EDIT (title revised): This comment seems to suggest that snapshots as domain events are the wrong approach.
EDIT2: Simplified Question - Is it appropriate to have repos injected into command handlers?
Let me attack the easy one first. The snapshotting logic does not belong in the aggregate. Whether and when you snapshot is purely a performance concern and so does not belong with business rules. It helps to draw the line by imagining a server with infinite resources: if you don't need to do "the thing" on this amazing machine, "the thing" does not belong in the aggregate.
In the link you posted above I agree with RBanks54 that the snapshot does not belong in the aggregate event stream, for all the reasons he lists. I think your solution to dispatch an event on the service bus, then handle that event in a different command, is the correct approach. Handling snapshotting in the context of handling a new event means you cannot snapshot unless a new event is received. Having a distinct message on the service bus means any process can request a snapshot when appropriate.
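To make that concrete, a minimal sketch of handling a dedicated snapshot message outside the command handlers might look like this; all the types here, including the Warehouse stub and the store interfaces, are invented for illustration.

    using System;
    using System.Collections.Generic;

    // Message placed on the service bus whenever any process decides a snapshot is due.
    public class TakeWarehouseSnapshot
    {
        public Guid WarehouseId { get; set; }
    }

    public interface IEventStore
    {
        IEnumerable<object> GetEvents(Guid aggregateId);
    }

    public interface ISnapshotStore
    {
        void Save(Guid aggregateId, int version, object state);
    }

    // Stand-in aggregate; a real Warehouse would apply each event to its state.
    public class Warehouse
    {
        public int Version { get; private set; }

        public static Warehouse LoadFromHistory(IEnumerable<object> events)
        {
            var warehouse = new Warehouse();
            foreach (var e in events)
            {
                warehouse.Version++; // Apply(e) would go here in a real aggregate
            }
            return warehouse;
        }

        public object GetState()
        {
            return new { Version }; // real code would return a serializable state object
        }
    }

    // Dedicated handler: snapshotting lives here, not in the command handlers
    // and not in the aggregate.
    public class TakeWarehouseSnapshotHandler
    {
        private readonly IEventStore _eventStore;
        private readonly ISnapshotStore _snapshotStore;

        public TakeWarehouseSnapshotHandler(IEventStore eventStore, ISnapshotStore snapshotStore)
        {
            _eventStore = eventStore;
            _snapshotStore = snapshotStore;
        }

        public void Handle(TakeWarehouseSnapshot message)
        {
            var warehouse = Warehouse.LoadFromHistory(_eventStore.GetEvents(message.WarehouseId));
            _snapshotStore.Save(message.WarehouseId, warehouse.Version, warehouse.GetState());
        }
    }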
My question is whether or not I should produce a specific event for snapshotting, something like "WarehouseStateSnapshotted".
"It depends".
The reference you should review for snapshotting is CQRS Documents, by Greg Young. It's relatively old (2010), but it serves as a simple introduction to snapshotting as a concept.
There's nothing wrong with generating snapshots asynchronously and storing them outside of the event stream.
You can use any sensible trigger for the snapshotting process; you don't necessarily need an event in the stream. "Snapshot every 100 events" or "Snapshot every 10 minutes" or "Snapshot when the admin clicks the snapshot button" are all viable.
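For example, a "snapshot every 100 events" policy can be a trivial, purely infrastructural check; the class below is illustrative, not from any framework.

    // Illustrative, infrastructure-only policy: snapshot once every N events.
    public class EveryNEventsSnapshotPolicy
    {
        private readonly int _interval;

        public EveryNEventsSnapshotPolicy(int interval)
        {
            _interval = interval;
        }

        // Called after new events are persisted; true when the number of events
        // since the last snapshot has reached the threshold.
        public bool ShouldSnapshot(int currentVersion, int lastSnapshotVersion)
        {
            return currentVersion - lastSnapshotVersion >= _interval;
        }
    }

    // e.g. new EveryNEventsSnapshotPolicy(100).ShouldSnapshot(streamVersion, snapshotVersion)

Whatever persists the events, or a background job, or an admin action, can consult a policy like this and request a snapshot when it returns true.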
Some domains have a natural cadence to them, where the domain itself might suggest a snapshot -- think "closing the books on the fiscal year".
I'm somewhat skeptical about putting a domain-agnostic "make a snapshot" message into the event stream - I don't think it's appropriate to make the aggregate responsible for snapshot cadence. It's not broken, but it does feel a bit like overloading the semantics of the event stream with a different concern.
I have been dabbling a bit with event sourcing, but I'm no expert. I do not particularly like the idea of a separate "stream" representing a snapshot; it isn't much of a stream, since it only stores the last snapshot. In my Shuttle.Recall project, which is still in its infancy, I store snapshots as normal domain events, but they are specifically marked as snapshots, and the last snapshot version is stored separately so that it can be loaded first and the events after that version applied on top. I find some advantages to this, in that you can also add some functionality around snapshots.
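A sketch of what that loading sequence looks like; every type here is invented for illustration, and Shuttle.Recall's actual API will differ.

    using System;
    using System.Collections.Generic;

    public class Snapshot
    {
        public int Version { get; set; }
        public object State { get; set; }
    }

    public interface ISnapshotRepository
    {
        Snapshot GetLatest(Guid aggregateId); // returns null when no snapshot exists
    }

    public interface IEventRepository
    {
        IEnumerable<object> GetEventsAfter(Guid aggregateId, int version);
    }

    public interface IEventSourced
    {
        void RestoreFrom(object state, int version);
        void Apply(object domainEvent);
    }

    public class AggregateRehydrator
    {
        private readonly ISnapshotRepository _snapshots;
        private readonly IEventRepository _events;

        public AggregateRehydrator(ISnapshotRepository snapshots, IEventRepository events)
        {
            _snapshots = snapshots;
            _events = events;
        }

        public TAggregate Get<TAggregate>(Guid id) where TAggregate : IEventSourced, new()
        {
            var aggregate = new TAggregate();
            var fromVersion = 0;

            var snapshot = _snapshots.GetLatest(id);
            if (snapshot != null)
            {
                aggregate.RestoreFrom(snapshot.State, snapshot.Version);
                fromVersion = snapshot.Version;
            }

            // Only the events recorded after the snapshot version are replayed.
            foreach (var e in _events.GetEventsAfter(id, fromVersion))
            {
                aggregate.Apply(e);
            }

            return aggregate;
        }
    }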
When you use snapshots as a purely technical performance improvement, they may not add much value to your domain. But if a snapshot does not belong in the aggregate/domain, then how would one go about hydrating the aggregate from the snapshot?
In some instances a snapshot may be very much part of the domain. When you look at your monthly bank statement you will not find each and every transaction (event) from the day that you opened up your account. Instead we have an opening balance (snapshot) with the new transactions (events) for that month. In this way the "MonthEndProcessed" event may very well be a snapshot.
I also don't really buy the argument that, should a snapshot contain an error, you cannot fix it because an event stream is immutable. What happens if your event contains an error? Can you not fix it? These errors should ideally not make it into a production system, but if they do, they should be fixed. The immutability, to me anyway, relates to the typical interaction with the system: we do not typically make changes to an event once it has taken place.
In some instances it may even be beneficial to go back and change some events to a newer version. These should be kept to a minimum and ideally avoided but perhaps it may be pragmatic in some instances.
But like I said... I'm still learning :)

Should Aggregates be Event Handlers

I am currently beginning my first real attempt at a DDD/CQRS/ES system after studying a lot of material and examples.
1) I have seen event sourcing examples where the Aggregates are Event Handlers and their Handle method for each event is what mutates the state on the object instance (They implement an IHandleEvent<EventType> interface for events that would mutate the state)
2) I have also seen examples where the Aggregates would just look like plain classic Entity classes modelling the domain.
Another Event Handler class is involved in mutating the state.
State, of course, is mutated on an aggregate by the event handlers in both cases: when rebuilding the aggregate from a repository call that gets all the previous events for that aggregate, and when a command handler calls methods on the aggregate. In the latter case, though, I've seen examples where the events are published by the command handler rather than by the aggregate, which I'm convinced is wrong.
My question is what are the pros and cons between method (1) and (2)
The job of receiving/handling a command is different from actioning it. The approach I take is to have a handler whose job is to receive a command. The command holds the AggregateId, which the handler can then use to get all the events for the aggregate. It can then apply those events to the aggregate via a LoadFromHistory method. This brings the aggregate up to date and makes it ready to receive the command. So the short version is: option 2.
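In code, the shape I mean is roughly the following; every type here is a placeholder rather than a real framework.

    using System;
    using System.Collections.Generic;

    public class RenameItem
    {
        public Guid AggregateId { get; set; }
        public string NewName { get; set; }
    }

    public class ItemRenamed
    {
        public string NewName { get; set; }
    }

    public interface IEventStore
    {
        IEnumerable<object> GetEvents(Guid aggregateId);
        void Append(Guid aggregateId, IEnumerable<object> newEvents);
    }

    // Option (2) style aggregate: a plain class whose behaviour methods raise
    // events and whose private Apply mutates state.
    public class Item
    {
        private readonly List<object> _uncommitted = new List<object>();

        public string Name { get; private set; }

        public void LoadFromHistory(IEnumerable<object> history)
        {
            foreach (var e in history)
            {
                Apply(e);
            }
        }

        public void Rename(string newName)
        {
            var e = new ItemRenamed { NewName = newName };
            Apply(e);
            _uncommitted.Add(e);
        }

        public IEnumerable<object> GetUncommittedEvents()
        {
            return _uncommitted;
        }

        private void Apply(object e)
        {
            var renamed = e as ItemRenamed;
            if (renamed != null)
            {
                Name = renamed.NewName;
            }
        }
    }

    // The handler receives the command, loads the aggregate's history,
    // brings it up to date, and only then asks it to act.
    public class RenameItemHandler
    {
        private readonly IEventStore _eventStore;

        public RenameItemHandler(IEventStore eventStore)
        {
            _eventStore = eventStore;
        }

        public void Handle(RenameItem command)
        {
            var item = new Item();
            item.LoadFromHistory(_eventStore.GetEvents(command.AggregateId));

            item.Rename(command.NewName);

            _eventStore.Append(command.AggregateId, item.GetUncommittedEvents());
        }
    }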
I have some posts that you may find helpful. The first is an overview of the flow of a typical CQRS/ES application - not how it should be, just how they often are. You can find that at CQRS – A Step-by-Step Guide to the Flow of a typical Application!
I also have a post on how to build an aggregate root for CQRS and ES, if that's helpful. You can find that at Aggregate Root – How to Build One for CQRS and Event Sourcing
Anyway, I hope that helps. All the best building your CQRS/ES app!
While I agree with Codescribler, I need to go into a bit more detail. ES is about expressing an entity's state as a stream of events (which will be stored). A message handler is just a service implementation which tells an Entity what to do.
With ES, the entity implements its changes by generating one or more events and then applying them to itself. The entity doesn't know whether its changes come from a command or an event handler (it should 'always' be a command handler, but sometimes it doesn't matter); it modifies state via its own events, which will then be published by a service (or the event store itself).
But... in a recent app, for pragmatic reasons my ES entity accepted the commands directly, although the entity itself wasn't an implementation of a command handler. The handler would just relay the command to the entity.
So you can actually handle messages directly with an entity, but only as an implementation detail. I wouldn't recommend designating an entity as a command/event handler, as that is a violation of separation of concerns.
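For what it's worth, the pragmatic relay described above looks roughly like this (all names are invented):

    using System;

    // Only the handler is registered with the bus/framework; the entity just
    // happens to accept the command object as input.
    public class PlaceOrder
    {
        public Guid OrderId { get; set; }
        public decimal Quantity { get; set; }
    }

    public class Order
    {
        public Guid Id { get; private set; }
        public decimal Quantity { get; private set; }

        public Order(Guid id)
        {
            Id = id;
        }

        // Pragmatic shortcut: the entity takes the command directly, then
        // generates/applies its own events as usual (omitted here).
        public void Place(PlaceOrder command)
        {
            Quantity = command.Quantity;
        }
    }

    public class PlaceOrderHandler
    {
        public void Handle(PlaceOrder command)
        {
            var order = new Order(command.OrderId); // load via a repository in real code
            order.Place(command);                   // the handler merely relays the message
            // persist / publish the resulting events as usual
        }
    }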

EventStore: learning how to use it

I'm trying to learn EventStore. I like the concept, but when I try to apply it in practice I keep getting stuck at the same point.
Let's see the code:
    foreach (var k in stream.CommittedEvents)
    {
        // handling events
    }
Two questions about that:
When an app starts up after some maintenance, how do we bookmark, in a safe way, which events to start reading from? Is there a pattern to use?
As soon as the events are all consumed, the loop ends... what about messages arriving at run time? I would expect the call to block until a new message arrives (which of course would need to be handled on a thread), or something like BeginRead/EndRead.
Do I have to wire up an ESB to handle run-time events, or does the EventStore provide some facility to do this?
Let me try to explain better with an example.
Suppose the aggregate is a financial portfolio, and the application is one that shows that portfolio to a trader. Suppose the trader connects to the web app and looks at his own portfolio. The current state is the whole history, so I potentially have to read a lot of records to reproduce the status. I guess this could be done with a so-called snapshot, but who is responsible for creating it? When should one choose to create a snapshot for an aggregate? How can one tell whether a snapshot for an aggregate exists?
For the runtime part: as soon as the user looks at the reconstructed portfolio state, the real-time part begins to run. The user can place an order, and a new position can be created by successfully executing that order in the market. How is the portfolio updated by the infrastructure? I would expect, though maybe I'm completely wrong, the same event stream to be the source of that new event (the new long position); otherwise I would have two paths handling the state of the same aggregate. I would like to know whether this is how the strategy is supposed to work, even though having two state agents that can possibly overlap feels a little tricky.
Just to clarify the overlap I fear:
I know event handling has to be idempotent, so I know it shouldn't be a problem anyway.
But let's consider the following:
I subscribe to the event bus before streaming the events to build up the state of the portfolio. Some "open position" events appear on the bus: I must handle them, but maybe the portfolio is not yet in the correct state to handle them, since it is not yet up to date. Even if I am able to handle such events, I will find them again when I read the stream.
More insidious: I open the stream, read all the events, and build a state. Then I subscribe to the bus: some messages arrive on the bus in the gap between the end of the stream read and the beginning of the subscription; those events are missed and the aggregate is not in the correct state.
Please be patient, everyone; my English is poor and the topic is tricky. I hope I managed to convey my doubt :)
The current state is the whole history, so I potentially have to read a lot of records to reproduce the status. I guess this could be done with a so-called snapshot, but who is responsible for creating it?
In CQRS and event sourcing, queries are served by projections which are generated from events emitted by aggregates. You don't use the aggregate instance as reconstituted from the event store to display information.
The term snapshot refers specifically to an optimization of the event store which allows rebuilding the aggregate without replaying all of the events.
Projections are essentially event handlers which maintain a denormalized view of aggregates. Events emitted from aggregates are published, possibly out of band, and the projection subscribes to and handles those events. A projection can combine multiple aggregates if a requirement exists to display summary information, for instance. In case of a trading application, each view will typically contain data from various aggregates. Projections are designed in a consumer-driven way - application requirements determine the different views of the underlying data that are needed.
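As a rough sketch, a projection is little more than a class that handles the events it cares about and updates a read model; the event and view types below are invented for the portfolio example.

    using System;
    using System.Collections.Generic;

    // Invented event for the trading example.
    public class PositionOpened
    {
        public Guid PortfolioId { get; set; }
        public string Symbol { get; set; }
        public decimal Quantity { get; set; }
    }

    // Denormalized view the UI reads; no aggregate is loaded to serve queries.
    public class PortfolioSummary
    {
        public PortfolioSummary()
        {
            PositionsBySymbol = new Dictionary<string, decimal>();
        }

        public Guid PortfolioId { get; set; }
        public Dictionary<string, decimal> PositionsBySymbol { get; private set; }
    }

    // The projection subscribes to published events and keeps the view current.
    public class PortfolioSummaryProjection
    {
        private readonly Dictionary<Guid, PortfolioSummary> _views =
            new Dictionary<Guid, PortfolioSummary>();

        public void Handle(PositionOpened e)
        {
            PortfolioSummary view;
            if (!_views.TryGetValue(e.PortfolioId, out view))
            {
                view = new PortfolioSummary { PortfolioId = e.PortfolioId };
                _views[e.PortfolioId] = view;
            }

            decimal existing;
            view.PositionsBySymbol.TryGetValue(e.Symbol, out existing);
            view.PositionsBySymbol[e.Symbol] = existing + e.Quantity;
        }

        public PortfolioSummary Get(Guid portfolioId)
        {
            PortfolioSummary view;
            _views.TryGetValue(portfolioId, out view);
            return view;
        }
    }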
With this type of workflow you have to embrace eventual consistency throughout your application. For instance, if an end user is viewing their portfolio and initiating new trades, the UI has to subscribe to updates to reflect updated projections in an asynchronous manner.
Take a look here for an overview of CQRS and event sourcing.

Messaging - All attributes or just an id pointer

We are considering integrating messaging (publishing events) into our system; we have multiple components, a few different stacks, etc. We'll start with a small number of publishers and subscribers and gradually introduce it where it makes sense.
If we publish an event, say of type 'NewProductAddedToCatalogue', should it include all the attributes of the new product, just the new product Id, or perhaps some form of REST URL, e.g. http://db.intranet/products/[uuid]? What are the advantages of each approach? I feel some subscribers would just be interested in a minimal number of attributes, whilst others, e.g. the website publisher, might want access to them all (or most). Are there any significant downsides to either approach?
The quick answer - why not publish two types of event message?
One could be a lightweight event with just the product ID and this would be used by subscribers who would then enrich the event data themselves.
The other message would contain all the data needed to make sense of the event, for consumers who didn't want to enrich the data.
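To illustrate the two shapes (the type and property names are made up):

    using System;

    // "Thin" event: just enough to know something happened; subscribers who need
    // more go back to the product service/API to enrich it.
    public class ProductAddedToCatalogue
    {
        public Guid ProductId { get; set; }
        public string ResourceUrl { get; set; } // e.g. a REST URL for the product
    }

    // "Fat" event: carries everything most subscribers need, so they can react
    // without calling back into the publisher's data source.
    public class ProductAddedToCatalogueDetailed
    {
        public Guid ProductId { get; set; }
        public string Name { get; set; }
        public string Description { get; set; }
        public decimal Price { get; set; }
        public string Category { get; set; }
        public DateTime AddedOnUtc { get; set; }
    }

Which shape you favour comes down to the coupling and staleness trade-offs discussed below.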
The longer answer - I don't really like the "lightweight" event idea. The problem with this is that you are basically turning your event message into a "something changed" notification.
This divorces the event from its underlying data change - for example, a notification does not say what has changed, only that something has changed. It's entirely possible that the event message has been delayed to the point where the underlying data is no longer in the same state as it was when the event was raised (whether this is a problem for you comes down to your individual requirements).
More importantly however, the lookup to "enrich" the data introduces coupling between components - the idea behind an event message is that the event subscriber can just process it - the subscriber doesn't need to know anything about the publisher of the message - or, more specifically, about the data source that the message came from.
However, there are some benefits - notification-type message processing is idempotent by nature so there's less effort involved there.
This is an interesting topic we've discussed at length since our product catalog is at our core. What we've found is that each Subscriber will be interested in a common set of data and then it will enhance that data with its own. An example of that would be a Marketing subscriber that would add consumer friendly images and descriptions. This is quite different than a Supply Chain subscriber that would add things like height, weight, and cube. This approach works when each component is responsible for its own data.
If you are in the situation where some of your catalog is centrally managed, we've found that it's easiest to send each Subscriber the common elements plus the data it is interested in. We've found that there really isn't a ton of overlap in the data, and you can keep your systems decoupled.

Recreate a graph that changes in time

I have an entity in my domain that represents a city electrical network. Currently my model is an entity with a List that contains breakers, transformers, and lines.
The network changes every time a breaker is opened or closed, users can change connections, etc.
In all examples of CQRS the EventStore is queried with Version and aggregateId.
Do you think I have to implement events only for the "network" aggregate or also for every "Connectable" item?
In this case, when I have to replay all events to get the "actual" status (based on a date), I can have nearly 10,000-20,000 events to process.
Should an Event modify one property, or do I need an Event that modifies an object (containing all the properties of that object)?
There's always an exception to the rule, but I think you need to have an event for every command handled in your domain. You can get around the problem of processing so many events by making use of Snapshots.
http://thinkbeforecoding.com/post/2010/02/25/Event-Sourcing-and-CQRS-Snapshots
I assume you mean that currently your "connectable items" are part of the "network" aggregate and you are asking whether they should be their own aggregates? That really depends on the nature of your system and problem, and it is more of a DDD issue than simply a CQRS one. However, if the nature of your changes is typically to operate on the items independently of one another, then they should probably be aggregate roots themselves. Regardless, in order to answer that question we would need to know much more about the system you are modeling.
As for the challenge of replaying thousands of events, you certainly do not have to replay all your events for each command. Sure, snapshotting is an option, but even better is caching the aggregate root objects in memory after they are first loaded, to ensure that you do not have to source from events with each command (unless the system crashes, in which case you can rely on snapshots for quicker recovery, though you may not need them with caching since you only pay the loading penalty once).
Now if you are distributing this system across multiple hosts or threads there are some other issues to consider but I think that discussion is best left for another question or the forums.
Finally, you asked (I think) whether an event can modify more than one property of an object's state. Yes, if that is what makes sense based on what the event represents. The idea of an event is simply that it represents a state change in the aggregate; however, these events should also represent concepts that make sense to the business.
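For the electrical network example, a single business-meaningful event touching several properties might look like this (the names are hypothetical):

    using System;

    // One event that legitimately changes several properties of the network
    // state at once, rather than one event per property changed.
    public class BreakerOpened
    {
        public Guid NetworkId { get; set; }
        public Guid BreakerId { get; set; }
        public DateTime OccurredOnUtc { get; set; }

        // Downstream consequences captured with the event itself.
        public Guid[] DeEnergizedLineIds { get; set; }
        public Guid[] IsolatedTransformerIds { get; set; }
    }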
I hope that helps.