Maintain references between aggregates - CQRS

I'm trying to wrap my head around how to maintain id references between two aggregates, e.g. when an event happens on either side that affects the relationship, the other side is updated as well in an eventually consistent manner.
I have two aggregates, one for "Team" and one for "Event", in the context of a festival with the following code:
@Aggregate
public class Event {
    @AggregateIdentifier
    private EventId eventId;
    private Set<TeamId> teams; // ids of associated teams
    // ... protected constructor, getters/setters and command handlers ...
}

@Aggregate
public class Team {
    @AggregateIdentifier
    private TeamId teamId;
    private EventId eventId; // owning event
    // ... protected constructor, getters/setters and command handlers ...
}
A Team must always be associated with an event (through the eventId). An event contains a list of associated teams (through the team id set).
When a team is created (CreateTeamCommand) on the Team aggregate, I would like the TeamId set on the Event aggregate to be updated with the team id of the newly created team.
If the command "DeleteEventCommand" on the Event aggregate is executed, all teams associated to the event should also be deleted.
If a team is moved from one event to another (MoveTeamToEventCommand on the Team aggregate), the eventId on the Team aggregate should be updated, and the TeamId should be removed from the old Event aggregate and added to the new one.
My current idea was to create a saga: run SagaLifecycle.associateWith for both the eventId of the Event aggregate and the teamId of the Team aggregate, with an @StartSaga handler on the event resulting from the CreateTeamCommand (essentially the first time the relationship starts), and then have an event handler for every event that affects the relationship (a rough sketch follows below). My main issues with this solution are:
1: It would mean a unique saga instance for each possible combination of team and event. Could this cause trouble performance-wise if it scaled to e.g. 1 million events with 50 teams each? (This is unrealistic for this scenario, but relevant for a general solution to maintaining relationships between aggregates.)
2: It would require custom commands and event handlers dedicated to updating the team list of the Event aggregate, because the resulting events must not be processed by the saga itself, to avoid an infinite loop of reference updates.
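For concreteness, here is a rough sketch of the saga described above, assuming a recent Axon 4.x, Java records, and ids simplified to Strings; every event and command name is hypothetical, not taken from the question. Note that the saga would start on the TeamCreatedEvent produced by CreateTeamCommand rather than on the command itself:

import org.axonframework.commandhandling.gateway.CommandGateway;
import org.axonframework.modelling.saga.SagaEventHandler;
import org.axonframework.modelling.saga.SagaLifecycle;
import org.axonframework.modelling.saga.StartSaga;
import org.axonframework.spring.stereotype.Saga;
import org.springframework.beans.factory.annotation.Autowired;

// Hypothetical messages (ids simplified to Strings).
record TeamCreatedEvent(String teamId, String eventId) {}
record TeamMovedToEventEvent(String teamId, String oldEventId, String newEventId) {}
record RegisterTeamWithEventCommand(String eventId, String teamId) {}
record UnregisterTeamFromEventCommand(String eventId, String teamId) {}

@Saga
public class TeamEventRelationSaga {

    @Autowired
    private transient CommandGateway commandGateway;

    @StartSaga
    @SagaEventHandler(associationProperty = "teamId")
    public void on(TeamCreatedEvent event) {
        // Track both ends of the relationship.
        SagaLifecycle.associateWith("eventId", event.eventId());
        // Dedicated command (concern 2 above) so the resulting event
        // does not loop back into this saga.
        commandGateway.send(new RegisterTeamWithEventCommand(event.eventId(), event.teamId()));
    }

    @SagaEventHandler(associationProperty = "teamId")
    public void on(TeamMovedToEventEvent event) {
        commandGateway.send(new UnregisterTeamFromEventCommand(event.oldEventId(), event.teamId()));
        commandGateway.send(new RegisterTeamWithEventCommand(event.newEventId(), event.teamId()));
        SagaLifecycle.associateWith("eventId", event.newEventId());
    }
}

One saga instance would exist per team, which is exactly the scaling concern raised in point 1.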
Thank you for reading this small story and I hope someone can either confirm that I'm on the right track or point me in the direction of a proper solution.

An event contains a list of associated teams (through the team id set).
If by "An event" you mean "An Event aggregate" here, I don't believe your Event aggregate needs team ids. If you think it does, it would be great to understand your reasoning.
What I think you need instead is for your read side to know about this. Your read model for a single "Event" can listen to the events produced by CreateTeamCommand and MoveTeamToEventCommand (e.g. TeamCreatedEvent and TeamMovedToEventEvent), as well as all other "Event"-related events, and build up the projection accordingly (see the sketch below). Remember: don't design your aggregates with querying concerns in mind.
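As a minimal sketch of that read side (Axon-style @EventHandler; the event names, String ids, and repository are assumptions for illustration):

import org.axonframework.eventhandling.EventHandler;
import org.springframework.stereotype.Component;

// Hypothetical events and storage abstraction.
record TeamCreatedEvent(String teamId, String eventId) {}
record TeamMovedToEventEvent(String teamId, String oldEventId, String newEventId) {}

interface EventViewRepository {
    void addTeamToEvent(String eventId, String teamId);
    void moveTeam(String teamId, String oldEventId, String newEventId);
}

@Component
public class EventTeamsProjection {

    private final EventViewRepository repository;

    public EventTeamsProjection(EventViewRepository repository) {
        this.repository = repository;
    }

    @EventHandler
    public void on(TeamCreatedEvent event) {
        // The projection, not the aggregate, tracks which teams belong to an event.
        repository.addTeamToEvent(event.eventId(), event.teamId());
    }

    @EventHandler
    public void on(TeamMovedToEventEvent event) {
        repository.moveTeam(event.teamId(), event.oldEventId(), event.newEventId());
    }
}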
If the command "DeleteEventCommand" on the Event aggregate is executed, all teams associated to the event should also be deleted.
A few things here:
Again, your read side can listen on this event, and update the projections accordingly.
You can also perform validation in the relevant command handlers of the Team aggregate to check whether the Event exists before performing the operation. This won't be exactly in sync, but it will cover most cases (see the "How can I verify that a customer ID really exists when I place an order?" section here).
If you really want to delete the associated Team aggregates off the back of a DeleteEventCommand, you need to handle this inside a saga: there is no way to perform it atomically without leaking data-storage specifics into your domain model. So you need retry and idempotency guarantees here, which a saga can give you (a hedged sketch follows below). It's not exactly what you are suggesting, but a related fact is that a single command can't act on a set of aggregates; see the "How can I update a set of aggregates with a single command?" section here.
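A hedged sketch of such a deletion saga, again assuming Axon 4 and hypothetical message and lookup names; the team lookup goes to the read side, so the whole flow is eventually consistent:

import java.util.List;
import org.axonframework.commandhandling.gateway.CommandGateway;
import org.axonframework.modelling.command.TargetAggregateIdentifier;
import org.axonframework.modelling.saga.EndSaga;
import org.axonframework.modelling.saga.SagaEventHandler;
import org.axonframework.modelling.saga.StartSaga;
import org.axonframework.spring.stereotype.Saga;
import org.springframework.beans.factory.annotation.Autowired;

record EventDeletedEvent(String eventId) {}
record DeleteTeamCommand(@TargetAggregateIdentifier String teamId) {}

interface TeamLookupService {
    List<String> findTeamIds(String eventId); // read-side query
}

@Saga
public class EventDeletionSaga {

    @Autowired
    private transient CommandGateway commandGateway;
    @Autowired
    private transient TeamLookupService teamLookup;

    @StartSaga
    @EndSaga // one-shot saga: starts and ends on the same event
    @SagaEventHandler(associationProperty = "eventId")
    public void on(EventDeletedEvent event) {
        // Deleting a team twice is a no-op, so redelivery of this event
        // (the retry/idempotency point above) is safe.
        for (String teamId : teamLookup.findTeamIds(event.eventId())) {
            commandGateway.send(new DeleteTeamCommand(teamId));
        }
    }
}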

Related

Version of aggregate in event sourcing

In event sourcing, when a command is handled, all resulting domain events have to be stored. For each event, the system must increase the version of the aggregate. My event store looks something like this:
(AggregateId, AggregateVersion, Sequence, Data, EventName, CreatedDate)
(AggregateId, AggregateVersion) is key
In some cases it does not make sense to increase the version of an aggregate. For example, a command registers a user and raises RegisteredUser, WelcomeEmailEvent, and GiftCardEvent.
How can I handle this problem?
How can I handle this problem?
Avoid confusing your representation-of-information-changes events with your publishing-for-use-elsewhere events.
"Event sourcing", as commonly understood in the domain-drive-design and cqrs space, is a kind of data model. We're talking specifically about the messages an aggregate sends to its future self that describe its own changes over time.
It's "just" another way of storing the state of the aggregate, same as we would do if we were storing information in a relational database, or a document store, etc.
Messages that we are going to send to other components and then forget about don't need to have events in the event stream.
In some cases, there can be confusion when we haven't recognized that there are multiple different processes at work.
A requirement like "when a new user is registered, we should send them a welcome email" is not necessarily part of the registration process; it might instead be an independent process that is triggered by the appearance of a RegisteredUser event. The information that you need to save for the SendEmail process would be "somewhere else" - outside of the Users event history.
An event changes the state of an aggregate, and therefore changes its version. If the state is not changed, then there should be no event for this aggregate.
In your example, I would ask myself: if WelcomeEmailEvent does not change the state of the User aggregate, whose state does it change? Perhaps some other aggregate - some EmailNotification service that cares about successful or failed email attempts. In that case I would make it an event of the aggregate whose state it changes, and it will affect the version of that aggregate.
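To make the version mechanics concrete, here is a minimal JDBC sketch of an append that uses the (AggregateId, AggregateVersion) key from the schema above as an optimistic concurrency check (the table name and JDBC wiring are assumptions, and the Sequence column is omitted for brevity). Only state-changing events go through this path; a send-an-email message never touches the version:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public final class EventStoreAppender {

    public void append(Connection conn, String aggregateId, long expectedVersion,
                       String eventName, String data) throws SQLException {
        String sql = "INSERT INTO event_store "
                + "(AggregateId, AggregateVersion, EventName, Data, CreatedDate) "
                + "VALUES (?, ?, ?, ?, CURRENT_TIMESTAMP)";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, aggregateId);
            // The primary key (AggregateId, AggregateVersion) makes this an
            // optimistic concurrency check: a concurrent writer that already
            // claimed expectedVersion + 1 causes a duplicate-key failure here.
            stmt.setLong(2, expectedVersion + 1);
            stmt.setString(3, eventName);
            stmt.setString(4, data);
            stmt.executeUpdate();
        }
    }
}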

How to persist aggregate/read model from "EventStore" in a database?

Trying to implement Event Sourcing and CQRS for the first time, but got stuck when it came to persisting the aggregates.
This is where I'm at now
I've setup "EventStore" an a stream, "foos"
Connected to it from node-eventstore-client
I subscribe to events with catchup
This is all working fine.
With the help of the eventAppeared handler function I can build the aggregate whenever events occur. This is great, but what do I do with it?
Let's say I build an aggregate that is a list of Foos:
[
  {
    id: 'some aggregate uuidv5 made from barId and bazId',
    barId: 'qwe',
    bazId: 'rty',
    isActive: true,
    history: [
      {
        id: 'some event uuid',
        data: {
          isActive: true,
        },
        timestamp: 123456788,
        eventType: 'IsActiveUpdated'
      },
      {
        id: 'some event uuid',
        data: {
          barId: 'qwe',
          bazId: 'rty',
        },
        timestamp: 123456789,
        eventType: 'FooCreated'
      }
    ]
  }
]
To follow CQRS I will build the above aggregate within a Read Model, right? But how do I store this aggregate in a database?
I guess a NoSQL database should be fine for this, but I definitely need a db since I will put a gRPC API in front of this and other read models/aggregates.
But how do I actually get from having built the aggregate to persisting it in the db?
I once tried following this tutorial https://blog.insiderattack.net/implementing-event-sourcing-and-cqrs-pattern-with-mongodb-66991e7b72be, which was super simple, since you'd use mongodb both as the event store and as a view for the aggregate, updating the view when new events come in. It had its flaws and limitations (the aggregation pipeline), which is why I've now turned to "EventStore" for the event store part.
But how to persist the aggregate, which is currently just built and stored in code/memory from events in "EventStore"...?
I feel this may be a silly question, but do I have to loop over each item in the array and insert each item into the db table/collection, or is there a way to dump the whole array/aggregate there at once?
What happens after? Do you create a materialized view per aggregate and query against that?
I'm open to picking the best db for this, whether that is postgres/other rdbms, mongodb, cassandra, redis, table storage etc.
Last question. For now I'm just using a single stream "foos", but at this level I expect new events to happen quite frequently (every couple of seconds or so) but as I understand it you'd still persist it and update it using materialized views right?
So given that barId and bazId in combination can be used for grouping events, instead of a single stream I'd think more specialized streams such as foos-barId-bazId would be the way to go, to try and reduce the frequency of incoming new events to a point where recreating materialized views will make sense.
Is there a general rule of thumb saying not to recreate/update/refresh materialized views if the update frequency gets below a certain limit? Then the only other alternative would be querying from a normal table/collection?
Edit:
In the end I'm trying to make a gRPC api that has just 2 rpcs - one for getting a single foo by id and one for getting all foos (with optional field for filtering by status - but that is not so important). The simplified proto would look something like this:
rpc GetFoo(FooRequest) returns (Foo);
rpc GetFoos(FoosRequest) returns (FoosResponse);

message FooRequest {
  string id = 1; // uuid
}

// If the optional status field is not specified, return all foos
message FoosRequest {
  // If this field is specified, only return the Foos whose isActive matches
  FooStatus status = 1;
  enum FooStatus {
    UNKNOWN = 0;
    ACTIVE = 1;
    INACTIVE = 2;
  }
}

message FoosResponse {
  repeated Foo foos = 1;
}

message Foo {
  string id = 1; // uuid
  string bar_id = 2; // uuid
  string baz_id = 3; // uuid
  bool is_active = 4;
  repeated Event history = 5;
  google.protobuf.Timestamp last_updated = 6;
}

message Event {
  string id = 1; // uuid
  google.protobuf.Any data = 2;
  google.protobuf.Timestamp timestamp = 3;
  string eventType = 4;
}
The incoming events would look something like this:
{
  id: 'some event uuid',
  barId: 'qwe',
  bazId: 'rty',
  timestamp: 123456789,
  eventType: 'FooCreated'
}
{
  id: 'some event uuid',
  isActive: true,
  timestamp: 123456788,
  eventType: 'IsActiveUpdated'
}
As you can see, there is no uuid to make GetFoo(uuid) possible in the gRPC API, which is why I generate a uuidv5 from the barId and bazId, which combined form a valid uuid. I do that in the projection/aggregate you see above.
Also, the GetFoos rpc will either return all foos (if the status field is left undefined) or return the foos whose isActive matches the status field (if specified).
Yet I can't figure out how to continue from the catchup subscription handler.
I have the events stored in "EventStore" (https://eventstore.com/), and using a catch-up subscription I have built an aggregate/projection with an array of Foos in the form I want them. But to be able to get a single Foo by id from my gRPC API, I guess I need to store this entire aggregate/projection in a database of some sort, so I can connect and fetch the data from the gRPC API? And every time a new event comes in, I need to apply it to the database as well, or how does this work?
I think I've read every resource I can possibly find on the internet, but still I'm missing some key pieces of information to figure this out.
The gRPC part is not so important; it could be REST, I guess. My big question is how to make the aggregated/projected data available to the API service (possibly more APIs will need it as well). I guess I need to store the aggregated/projected data, with the generated uuid and history fields, in a database so it can be fetched by uuid from the API service. But which database, and how is this storing process done from the catch-up event handler where I build the aggregate?
I know exactly how you feel! This is basically what happened to me when I first tried to do CQRS and ES.
I think you have a couple of gaps in your knowledge which I'm sure you will rapidly plug. You hydrate an aggregate from the event stream as you are doing. That IS your aggregate persisted. The read model is something different. Let me explain...
Your read model is the thing you use to run queries against and to provide data for display in a UI, for example. Your aggregates are not (directly) involved in that. In fact, they should be encapsulated, meaning you can't 'see' their state from the outside, i.e. no getters and setters, with the exception of the aggregate ID, which would have a getter.
This article gives you a helpful overview of how it all fits together: CQRS + Event Sourcing – Step by Step
The idea is that when an aggregate changes state it can only do so via an event it generates. You store that event in the event store. That event is also published so that read models can be updated.
Also, looking at your aggregate, it looks more like a typical read-model object or DTO. An aggregate is interested in functionality, not properties. So you would expect to see public void functions for issuing commands to the aggregate, but not public properties like isActive or history.
I hope that makes sense.
EDIT:
Here are some more practical suggestions.
"To follow CQRS I will build the above aggregate within a Read Model, right? "
You do not build aggregates in the read model. They are separate things on separate sides of the CQRS equation. Aggregates are on the command side. Queries are done against read models, which are different from aggregates.
Aggregates have public void functions and no getters or setters (with the exception of the aggregate id). They are encapsulated. They generate events when their state changes as a result of a command being issued. These events are stored in an event store and are used to recover the state of an aggregate. In other words, that is how an aggregate is stored (a minimal sketch follows below).
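A minimal sketch of such an encapsulated aggregate, in plain Java with illustrative names (no framework assumed): public void command methods, no public state, and events as the only output:

import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

public class Foo {

    private final UUID id;        // the only state exposed via a getter
    private boolean isActive;     // internal state: no getter, no setter
    private final List<Object> uncommittedEvents = new ArrayList<>();

    public Foo(UUID id) {
        this.id = id;
    }

    public UUID getId() {
        return id;
    }

    // Command method: validates, then records the state change as an event.
    public void activate() {
        if (isActive) {
            return; // nothing changed, so no event is recorded
        }
        apply(new IsActiveUpdated(id, true));
    }

    private void apply(Object event) {
        mutate(event);                // change state from the event...
        uncommittedEvents.add(event); // ...and queue it for the event store
    }

    // Also used when rehydrating the aggregate from stored events.
    private void mutate(Object event) {
        if (event instanceof IsActiveUpdated e) {
            this.isActive = e.isActive();
        }
    }

    public record IsActiveUpdated(UUID fooId, boolean isActive) {}
}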
The events go on to be published so that event handlers and other processes can react to them and update the read model and/or trigger new cascading commands.
"Last question. For now I'm just using a single stream "foos", but at this level I expect new events to happen quite frequently (every couple of seconds or so) but as I understand it you'd still persist it and update it using materialized views right?"
Every couple of seconds is very likely to be fine. I'm more concerned about the "persist and update using materialised views" part. I don't know exactly what you mean by that, but it doesn't sound like you have the right idea. Views should be very simple read models, with no need for the complex relations you find in an RDBMS, and they are therefore highly optimised for fast reads.
There can be a lot of confusion around the terminology and jargon used in DDD, CQRS, and ES. I think in this case the confusion lies in what you think an aggregate is. You mention that you would like to persist your aggregate as a read model. As @Codescribler mentioned, at the sink end of your event stream there isn't a concept of an aggregate. Concretely, in ES, commands are applied to aggregates in your domain by loading the previous events pertaining to that aggregate, rehydrating the aggregate by folding each previous event onto it, and then applying the command, which generates more events to be persisted in the event store.
Downstream, a subscribing process receives all the events in order and builds a read model based on the events and the data they contain. The confusion here is that this read model, at this end, is not an aggregate per se. It might very well look exactly like your aggregate at the domain end, or it could be a read model that doesn't use all the events and/or all the event data.
For example, you may choose to use every bit of information and build a read model that looks exactly like the aggregate hydrated up to the newest event (likely your source of confusion). You may instead have another process that builds a read model that only tallies a specific type of event. You might even subscribe to multiple streams and "join" them into a big read model.
As for how to store it, this is really up to you. It seems to me like you are taking the events and rebuilding your aggregate plus a history of events in an in-memory structure. This, of course, doesn't scale, which is why you want to store it at rest in a database. I wouldn't use the in-memory structure, since you would need to do a lot of state diffing when you flush to the database. You should modify the database directly in response to each individual event. Ideally, you also transactionally store the stream position with that modification, so you don't process the same event again in the case of a failure (a sketch follows below).
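A hedged sketch of that suggestion with plain JDBC (table and column names are assumptions, and the checkpoint row is assumed to be seeded): the read table is updated per event, and the stream position is stored in the same transaction so a replayed event is detected and skipped:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public final class FooProjection {

    public void onEvent(Connection conn, long streamPosition,
                        String fooId, boolean isActive) throws SQLException {
        conn.setAutoCommit(false);
        try {
            // Only advance if this event is newer than the stored checkpoint.
            String checkpointSql =
                "UPDATE projection_checkpoint SET position = ? "
                + "WHERE name = 'foos' AND position < ?";
            try (PreparedStatement stmt = conn.prepareStatement(checkpointSql)) {
                stmt.setLong(1, streamPosition);
                stmt.setLong(2, streamPosition);
                if (stmt.executeUpdate() == 0) {
                    conn.rollback(); // already processed: skip the event
                    return;
                }
            }
            try (PreparedStatement stmt = conn.prepareStatement(
                    "UPDATE foo_view SET is_active = ? WHERE id = ?")) {
                stmt.setBoolean(1, isActive);
                stmt.setString(2, fooId);
                stmt.executeUpdate();
            }
            conn.commit();
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}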
Hope this helps a bit.

CQRS Read Model Design when Event Sourcing with a Parent-Child-GrandChild... relationship

I'm in the process of writing my first CQRS application, let's say my system dispatches the following commands:
CreateContingent (Id, Name)
CreateTeam (Id, Name)
AssignTeamToContingent (TeamId, ContingentId)
CreateParticipant (Id, Name)
AssignParticipantToTeam (ParticipantId, TeamId)
Currently, these result in identical events, just worded in the past tense (ContingentCreated, TeamCreated, etc.) and containing the same properties. (I'm not so sure that is correct, which is one of my questions.)
My issue lies with the read models.
I have a Contingents read model, that subscribes to ContingentCreated, and has the following methods:
List<ContingentQueries.Contingent> GetContingents();
ContingentQueries.Contingent GetContingent(System.Guid id);
ContingentQueries.Contingent GetContingent(string province);
The Contingent class returned by these looks like this:
public class Contingent
{
    public Guid Id { get; internal set; }
    public string Province { get; internal set; }
}
This works fine for a listing, but when I start thinking about the read model for a view of a single contingent and all its teams, things start to get messy. Given the methods, it seems trivial:
ContingentQueries.Contingent GetContingent(System.Guid id);
ContingentQueries.Contingent GetContingent(string province);
And I create the Contingent class for these queries with a collection of teams:
public class Contingent
{
    public Guid Id { get; internal set; }
    public string Province { get; internal set; }
    public IList<Team> Teams { get; internal set; }
}

public class Team
{
    public Guid Id { get; internal set; }
    public string Name { get; internal set; }
}
In theory, I should only need to subscribe to ContingentCreated, and TeamAssignedToContingent, but the TeamAssignedToContingent event only has the team and contingent ids... so I can't set the Team.Name property of this read model.
Do I also subscribe to TeamCreated? And keep another copy of teams for use here?
Or when events are raised, should they have more information in them? Should the AssignTeamToContingent command handler be querying for team information to add to the event - information which may not yet exist, because the read model might not have been updated with the team yet?
What if I wanted to show the team name, and the participant who has been assigned to the team and named captain, in this Contingent view? Do I also store the participants in the read model? This seems like a ton of duplication.
For some extra context: participants are not necessarily part of any contingent or team; they could just be guests, or delegates and/or spares of a contingent. However, roles can shift. A delegate who is not a team member could also be a spare and, due to injury, be assigned to a team, while still remaining a delegate. That is why the generic "Participant" is used.
I understand read models shouldn't be heavier than necessary, but I'm struggling with the fact that I may need data from events I'm not subscribed to (TeamCreated), and in theory shouldn't be, yet because of a later event (TeamAssignedToContingent) I do need that info. What do I do?
UPDATE: Thought about it overnight, and it seems like all the options I have thought of are bad. There must be another.
Option 1: Add more data to raised events
Do command handlers start using read models, which may not yet be written, so they can get that data?
What if a future subscriber to the event needs even more info? I can't add it!
Option 2: Have Event Handlers subscribe to more events?
This results in the read model storing data it might not care about.
Example: storing participants in the ContingentView read model so that when a person is assigned to a team and marked as captain, we know their name.
Option 3: Have Event Handlers query other Read Models?
This feels like the best approach, but it also feels wrong. Should the Contingent view query the Participant view to get the name if and when it needs it? It does eliminate the drawbacks of 1 and 2.
Option 4: ...?
What you're probably going to end up with is a combination of 1 and 2. There is absolutely no problem with enhancing or enriching events to have more information in them. It is actually quite helpful.
What if a future subscriber to the event needs even more info? I can't add it!
You can add it to events going forward. If you need historical data, you'll have to find it somewhere else. You can't build events with all the data in the world on them just on the off chance that you'll need it in the future. It is even allowable to go back and add more data to your historical events. Some people would frown on this as rewriting history, but you're not; you're simply enhancing history. As long as you don't change the existing data, you should be fine.
I suspect that your event handlers will also need to subscribe to more events as time goes on. For instance, will teams need to be renamed? If so, you'll need to handle that event. Don't be afraid to have multiple copies of the same data in different read views. Coming from a relational database background, this duplication of data is one of the hardest things to get used to.
In the end be pragmatic. If you're encountering pain in applying CQRS then change how you're applying it.
Another option that comes to my mind is:
On the read side, you first build normalized views from the events coming off the write side.
E.g.:
contingents table:
-----------
id, name
teams table:
-----------
id, name
contingents_teams table:
-----------
contingent_id, team_id
On the ContingentCreated event, you just insert a record into the contingents table.
On the TeamCreated event, you just insert a record into the teams table.
On the TeamAssignedToContingent event, you insert a record into the contingents_teams table.
On the TeamNameChanged event, you update the appropriate record in the teams table.
And so on (a sketch of these handlers follows below).
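A sketch of those handlers with plain JDBC (the wiring is an assumption, and event payloads are reduced to the fields used):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public final class ContingentProjection {

    private final Connection conn;

    public ContingentProjection(Connection conn) {
        this.conn = conn;
    }

    public void onContingentCreated(String id, String name) throws SQLException {
        execute("INSERT INTO contingents (id, name) VALUES (?, ?)", id, name);
    }

    public void onTeamCreated(String id, String name) throws SQLException {
        execute("INSERT INTO teams (id, name) VALUES (?, ?)", id, name);
    }

    public void onTeamAssignedToContingent(String contingentId, String teamId) throws SQLException {
        execute("INSERT INTO contingents_teams (contingent_id, team_id) VALUES (?, ?)",
                contingentId, teamId);
    }

    public void onTeamNameChanged(String teamId, String newName) throws SQLException {
        execute("UPDATE teams SET name = ? WHERE id = ?", newName, teamId);
    }

    private void execute(String sql, String first, String second) throws SQLException {
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, first);
            stmt.setString(2, second);
            stmt.executeUpdate();
        }
    }
}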
Thus, your data is (eventually) consistent and in sync with the write side.
Yes, you need to use joins on the read side to fetch data...
If this normalized view does not satisfy read performance needs, then you can build a denormalized view from this normalized data.
It requires an extra step to build the denormalized view, but to me it is the simplest solution I can think of for now.
Maybe you will not even need a denormalized view.
After all (in my opinion), the main value of ES is capturing user intent, confidence in your data (no destructive operations, a single source of truth), the ability to answer questions nobody could think of in the past, and big value for analytics.
As I said, if you need to optimize performance, you always can build a denormalized view from the normalized view of the read side.

SO style reputation system with CQRS & Event Sourcing

I am diving into my first forays with CQRS and Event Sourcing and I have a few points I'd like some guidance on. I would like to implement an SO-style reputation system. This seems a perfect fit for this architecture.
Keeping SO as the example: say a question is upvoted. This generates an UpvoteCommand, which increases the question's total score and fires off a QuestionUpvotedEvent.
It seems like the author's User aggregate should subscribe to the QuestionUpvotedEvent, which could increase the reputation score. But how/when to do this subscription is not clear to me. In Greg Young's example the event/command handling is wired up in the global.asax, but this doesn't seem to involve any routing based on aggregate id.
It seems as though every User aggregate would subscribe to every QuestionUpvotedEvent, which doesn't seem correct. To make such a scheme work, the event handler would have to contain behaviour to identify whether that user owned the question that was just upvoted, and Greg Young implied this should not be in event handler code, which should merely involve state change.
What am I getting wrong here?
Any guidance much appreciated.
EDIT
I guess what we are talking about here is inter-aggregate communication between the Question and User aggregates. One solution I can see is that the QuestionUpvotedEvent is subscribed to by a ReputationEventHandler, which could then fetch the corresponding User AR and call a corresponding method on it, e.g. YourQuestionWasUpvoted. This would in turn generate a user-specific UserQuestionUpvoted event, thereby preserving replayability in the future. Is this heading in the right direction?
EDIT 2
See also the discussion on Google Groups here.
My understanding is that aggregates themselves should not be subscribing to events. The domain model only raises events. It's the query side or other infrastructure components (such as an emailing component) that subscribe to events.
Domain Services are designed to work with use-cases/commands that involve more than one aggregate.
What I would do in this situation:
VoteUpQuestionCommand gets invoked.
The handler for VoteUpQuestionCommand calls:
IQuestionVotingService.VoteUpQuestion(Guid questionId, Guid userId);
This then fetches both the Question and User aggregates, calling the appropriate methods on both, such as user.IncrementReputation(int amount) and question.VoteUp(). This would raise two events, UsersReputationIncreasedEvent and QuestionUpVotedEvent respectively, which would be handled by the query side (a sketch follows below).
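A sketch of that flow, written in Java for consistency with the earlier snippets (the repositories and aggregate stubs are assumptions based on the answer's pseudocode):

import java.util.UUID;

// Hypothetical aggregate stubs; real versions would raise events internally.
class Question {
    void voteUp() { /* raises QuestionUpVotedEvent */ }
}
class User {
    void incrementReputation(int amount) { /* raises UsersReputationIncreasedEvent */ }
}
interface Repository<T> {
    T getById(UUID id);
    void save(T aggregate);
}

public final class QuestionVotingService {

    private final Repository<Question> questions;
    private final Repository<User> users;

    public QuestionVotingService(Repository<Question> questions, Repository<User> users) {
        this.questions = questions;
        this.users = users;
    }

    // One use case touching two aggregates; each raises its own event,
    // and the query side reacts to both.
    public void voteUpQuestion(UUID questionId, UUID userId) {
        Question question = questions.getById(questionId);
        User author = users.getById(userId);
        question.voteUp();
        author.incrementReputation(10); // the amount is an assumption
        questions.save(question);
        users.save(author);
    }
}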
My rule of thumb: if you do inter-AR communication use a saga. It keeps things within the transactional boundary and makes your links explicit => easier to handle/maintain.
The User aggregate should have a QuestionAuthored event... in that event it subscribes to the QuestionUpvotedEvent... similarly it should have a QuestionDeletedEvent and/or QuestionClosedEvent in which it does the proper handling, like unsubscribing from the QuestionUpvotedEvent, etc.
EDIT - as per comment:
I would implement the Question as an external event source and handle it via a gateway. The gateway, in turn, is responsible for handling any replay correctly so the end result stays exactly the same - except for special events like rejection events...
This is an old question and tagged as answered, but I think I can add something to it.
After a few months of reading, practising, and building a small framework and application based on CQRS+ES, I think CQRS tries to decouple component dependencies and responsibilities. Some resources write that for each command you should change at most one aggregate in the command handler (you can load more than one aggregate in a handler, but only one of them may change).
So in your case I think the best practice is @Tom's answer: you should use a saga. If your framework doesn't support sagas (like my small framework), you can create an event handler such as UpdateUserReputationByQuestionVotedEvent. In that handler, create a command like UpdateUserReputation(Guid userId, int amount), UpdateUserReputation(Guid userId, Guid questionId, int amount), or UpdateUserReputation(Guid userId, string description, int amount). After the command is sent to its handler, the handler loads the user by user id and updates its state and properties. In this style of handling you can build more complex scenarios or workflows (a sketch follows below).
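A sketch of that fallback (plain Java; the message types and dispatcher are assumptions drawn from the answer's text):

import java.util.UUID;

// Hypothetical message types from the answer's text.
record QuestionUpvotedEvent(UUID questionId, UUID authorId) {}
record UpdateUserReputation(UUID userId, int amount) {}

interface CommandSender {
    void send(Object command);
}

public final class UpdateUserReputationByQuestionVotedEvent {

    private final CommandSender commands;

    public UpdateUserReputationByQuestionVotedEvent(CommandSender commands) {
        this.commands = commands;
    }

    // Translate the question-side event into a user-side command; the command
    // handler then loads the User by id and updates its state.
    public void handle(QuestionUpvotedEvent event) {
        commands.send(new UpdateUserReputation(event.authorId(), 10)); // amount assumed
    }
}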

How to get a list of aggregates using JOliver's CommonDomain and EventStore?

The repository in CommonDomain only exposes GetById(). So what do I do if my handler needs a list of Customers, for example?
At face value, if you needed to perform operations on multiple aggregates, you would just provide the ids of each aggregate in your command (which the client would obtain from the query side), then get each aggregate from the repository.
However, looking at one of your comments in response to another answer, I see that what you are actually referring to is set-based validation.
This very question has raised quite a lot of debate about how to do it, and Greg Young has written a blog post on it.
The classic question is "how do I check that the username hasn't already been used when processing my CreateUserCommand?". I believe the suggested approach is to assume that the client has already done this check by asking the query side before issuing the command. When the User aggregate is created, the UserCreatedEvent will be raised and handled by the query side. There, the insert will fail (either because of a check or a unique constraint in the DB), and a compensating command would be issued, which would delete the newly created aggregate and perhaps email the user telling them the username is already taken.
The main point is, you assume the client has done the check. I know this approach is difficult to grasp at first, but it's the nature of eventual consistency (a sketch of the compensating flow follows below).
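A hedged sketch of that compensating flow on the read side (plain JDBC; all names are assumptions, and the exception handling is simplified - in production you would check the SQLState for a unique violation rather than catching any SQLException):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.UUID;

public final class UserNameProjection {

    public interface CommandSender { void send(Object command); }
    public record DeleteUserCommand(UUID userId) {}

    private final Connection conn;
    private final CommandSender commands;

    public UserNameProjection(Connection conn, CommandSender commands) {
        this.conn = conn;
        this.commands = commands;
    }

    // Handles UserCreatedEvent on the query side: try to claim the username.
    public void onUserCreated(UUID userId, String username) throws SQLException {
        try (PreparedStatement stmt = conn.prepareStatement(
                "INSERT INTO usernames (username, user_id) VALUES (?, ?)")) {
            stmt.setString(1, username);
            stmt.setString(2, userId.toString());
            stmt.executeUpdate();
        } catch (SQLException duplicate) {
            // Unique constraint hit: someone took the name first, so issue
            // the compensating command described above.
            commands.send(new DeleteUserCommand(userId));
        }
    }
}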
Also, you might want to read this other question, which is similar and contains some wise words from Udi Dahan.
In the classic event sourcing model, queries like "get all customers" would be carried out by a separate query handler which listens to all events in the domain and builds a query model to satisfy the relevant questions.
If you need to query customers by last name, for instance, you could listen to all customer-created and customer-name-changed events and just update a table of last-name to customer-id pairs. You could hold other information relevant to the UI that is showing the data, or you could simply hold ids and go to the repository for the relevant customers in order to work with them further.
You don't need a list of customers in your handler. Each aggregate MUST be processed in its own transaction. If you want to show this list to the user, just build an appropriate view.
Your command needs to contain the id of the aggregate root it should operate on.
This id will be looked up by the client sending the command, using a view in your read model. This view will be populated with data from the events that your AR emits.