Event sourcing (pub/sub): dependent events, and chaining/queueing the events properly

Say I have the following model:
Channel [channel_id]
Group [group_id]
Channel [channel_id] 1:1 [channel_id] ChannelGroupReference [group_id] 1:1 [group_id] Group
Each of the above, including references, runs as a separate microservice.
This is not obvious, so I should mention that a channel does not necessarily have a group connected to it: there may be channels with groups and channels without groups.
Next.
Group creation event and Channel creation event are triggered from different locations:
If we create a Group: this is handled by GroupsApp, then we create a Channel afterwards, and then a Group:Channel relationship is created;
If we create a Channel (without a group): the channel is created by ChannelsApp and that's it (no further references).
The way I see it:
I create a Group // handled by GroupApp
I fire the GroupCreated event // fired by GroupApp (contains group_id)
I handle GroupCreated event by ChannelApp
I fire the ChannelCreated event by ChannelApp (contains both group_id and channel_id)
I handle the ChannelCreated event by GroupChannelReferenceApp and based on the group_id field value I decide if the reference should be created or not (so for channels without group no reference is created)
The problem I see in this case is that ChannelApp becomes a kind of proxy, knowing and caring too much about the Group entity as well, while the only app I would like to care about both Group and Channel is the [3] GroupChannelApp. ChannelApp should care only about Channels and fire only Channel-related events.
I need some sort of chain/queue (a saga won't work because of the distribution), which will first store 'knowledge' about the group event, then about the channel created for the group (based on the events fired), and then fire the proper GroupChannelWhateverCreated event, so that GroupChannelReferenceApp can handle it and store the relationship.
This is a simplified example of the 3rd app handling separate events fired by the 1st and 2nd apps. The chain/queue might be longer (group membership, channel membership, etc.). Some of the steps have to wait for the side services to process the events first.
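One way to model that chain is a small correlator (process-manager style) component that consumes both event streams, remembers what it has already seen, and emits the combined event once both halves are present. A rough Java sketch, with hypothetical event types and publisher; in practice the "seen" state would live in a durable store rather than an in-memory map:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class GroupChannelCorrelator {

    // Hypothetical event/bus types, just enough to make the sketch self-contained.
    record GroupCreated(String groupId) {}
    record ChannelCreated(String channelId, String groupId) {} // groupId may be null
    record GroupChannelCreated(String groupId, String channelId) {}
    interface EventPublisher { void publish(Object event); }

    private final EventPublisher publisher;
    private final Map<String, Boolean> seenGroups = new ConcurrentHashMap<>();
    private final Map<String, String> seenChannels = new ConcurrentHashMap<>(); // group_id -> channel_id

    public GroupChannelCorrelator(EventPublisher publisher) {
        this.publisher = publisher;
    }

    public void onGroupCreated(GroupCreated event) {
        seenGroups.put(event.groupId(), Boolean.TRUE);
        tryEmit(event.groupId());
    }

    public void onChannelCreated(ChannelCreated event) {
        if (event.groupId() == null) {
            return; // channel without a group: nothing to correlate
        }
        seenChannels.put(event.groupId(), event.channelId());
        tryEmit(event.groupId());
    }

    private void tryEmit(String groupId) {
        String channelId = seenChannels.get(groupId);
        if (channelId != null && seenGroups.containsKey(groupId)) {
            publisher.publish(new GroupChannelCreated(groupId, channelId));
            seenGroups.remove(groupId);
            seenChannels.remove(groupId);
        }
    }
}

With this, ChannelApp stays unaware of groups beyond passing the group_id through, and only the correlator and GroupChannelReferenceApp care about the combined relationship.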
Question
In general, how should I handle the events for the third app, which requires data from both Group and Channel events, taking into account that those two events are fired by different sources and that I need both IDs to store the reference? By the time I create a reference, both Group and Channel should have been created, or one might not exist; but the point is, I do not want those [1] and [2] apps to know about each other.

Related

Synchronising events between microservices using Kafka and MongoDb connector

I'm experimenting with microservices architecture. I have UserService and ShoppingService.
In UserService I'm using MongoDb. When I create a new user in UserService I want to sync basic user info to ShoppingService. In UserService I'm using something like event sourcing: when I create a new User, I first create the UserCreatedEvent and then apply the event onto the domain User object. So in the end I get the domain User object that has the current state and a list of events containing one UserCreatedEvent.
I wonder if I should persist the Events collection as a nested property of the User document or in a separate UserEvents collection. I was planning to use Kafka Connect to synchronize the events from UserService to ShoppingService.
If I decide to persist the events inside the User document, then I don't need the transaction that I would otherwise use to save the event to a separate UserEvents collection, but I can't set up the Kafka connector to track changes in the nested property only.
If I decide to persist the events in a separate UserEvents collection, I need to wrap the changes to User and UserEvents in a transaction. But saving the events to a separate collection makes setting up the Kafka connector very easy, because I track only inserts and don't need to track updates of the nested events array in the User document.
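For the second option, the write path can be as simple as appending one document per event to the UserEvents collection. A minimal sketch with the MongoDB Java driver (collection and field names are assumptions, and the accompanying User update is left out):

import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class UserEventStore {

    private final MongoCollection<Document> userEvents;

    public UserEventStore(String connectionString) {
        this.userEvents = MongoClients.create(connectionString)
                .getDatabase("userservice")
                .getCollection("UserEvents");
    }

    // Append-only: the Kafka connector then only has to watch inserts on this collection.
    public void append(String userId, long sequence, String type, Document payload) {
        userEvents.insertOne(new Document("userId", userId)
                .append("sequence", sequence)
                .append("type", type)
                .append("payload", payload));
    }
}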
I think I will go with the second option for the sake of simplicity, but maybe I've missed something. Is it a good idea to implement it like this?
I would generally advise the second approach. Note that you can also eliminate the need for a transaction by observing that User is just a snapshot based on the UserEvents up to some point in the stream and thus doesn't have to be immediately updated.
With this, your read operation for User can be: select a user from User (the latest snapshot), which includes a version/sequence number saying that it's as-of some event; then select the events with later sequence numbers and apply those events to the user. If there's some querier which wants a faster response and can tolerate getting something stale, a different endpoint (or an option in the query) can bypass the event replay.
You can then have some asynchronous process which subscribes to the stream of user events and updates User based on those events.
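A minimal sketch of that read path, with hypothetical snapshot and event stores (the names and interfaces are illustrative, not taken from the question):

import java.util.List;

public class UserReader {

    // Hypothetical snapshot/event types and stores, only to make the sketch self-contained.
    interface User { User apply(UserEvent event); }
    interface UserEvent {}
    record UserSnapshot(User state, long lastAppliedSequence) {}
    interface SnapshotStore { UserSnapshot findLatest(String userId); }
    interface EventStore { List<UserEvent> findAfter(String userId, long sequence); }

    private final SnapshotStore snapshots;
    private final EventStore events;

    public UserReader(SnapshotStore snapshots, EventStore events) {
        this.snapshots = snapshots;
        this.events = events;
    }

    // "Fresh" read: latest snapshot plus any events recorded after it.
    public User load(String userId) {
        UserSnapshot snapshot = snapshots.findLatest(userId);
        User user = snapshot.state();
        for (UserEvent event : events.findAfter(userId, snapshot.lastAppliedSequence())) {
            user = user.apply(event); // same apply() used when the event was recorded
        }
        return user;
    }

    // "Fast but possibly stale" read: skip the replay and return the snapshot as-is.
    public User loadStale(String userId) {
        return snapshots.findLatest(userId).state();
    }
}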

Maintain reference between aggregates

I'm trying to wrap my head around how to maintain id references between two aggregates, e.g. when an event happens on either side that affects the relationship, the other side is updated as well in an eventually consistent manner.
I have two aggregates, one for "Team" and one for "Event", in the context of a festival with the following code:
@Aggregate
public class Event {
    @AggregateIdentifier
    private EventId eventId;
    private Set<TeamId> teams; // List of associated teams
    // ... protected constructor, getters/setters and command handlers ...
}

@Aggregate
public class Team {
    @AggregateIdentifier
    private TeamId teamId;
    private EventId eventId; // Owning event
    // ... protected constructor, getters/setters and command handlers ...
}
A Team must always be associated to an event (through the eventId). An event contains a list of associated teams (through the team id set).
When a team is created (CreateTeamCommand) on the Team aggregate, I would like the TeamId set on the Event aggregate to be updated with the team id of the newly created team.
If the command "DeleteEventCommand" on the Event aggregate is executed, all teams associated to the event should also be deleted.
If a team is moved from one event to another event (MoveTeamToEventCommand) on the Team aggregate, the eventId on the Team aggregate should be updated, and the TeamId should be removed from the old Event aggregate and added to the new Event aggregate.
My current idea was to create a saga where I would run SagaLifecycle.associateWith for both the eventId on the Event aggregate and the teamId on the Team aggregate, with a @StartSaga on the "CreateTeamCommand" (essentially the first time the relationship starts), and then have an event handler for every event that affects the relationship. My main issues with this solution are:
1: It would mean I would have a unique saga for each possible combination of team and event. Could this cause trouble performance-wise if it was scaled to e.g. 1 million events with each event having 50 teams? (This is unrealistic for this scenario but relevant for a general solution to maintaining relationships between aggregates.)
2: It would require custom commands and event handlers dedicated to updating the team list of the Event aggregate, as the resulting events should not be processed in the saga, to avoid an infinite loop of updating references.
Thank you for reading this small story and I hope someone can either confirm that I'm on the right track or point me in the direction of a proper solution.
An event contains a list of associated teams (through the team id set).
If by "An event" you mean the Event aggregate here, I don't believe your Event aggregate needs team ids. If you think it does, it would be great to understand your reasoning on this.
What I think you do need, though, is for your read side to know about this. Your read model for a single "Event" can listen on the events resulting from CreateTeamCommand and MoveTeamToEventCommand, as well as all other "Event"-related events, and build up the projection accordingly. Remember, don't design your aggregates with querying concerns in mind.
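A minimal sketch of such a read-side projection, assuming Axon-style @EventHandler methods and hypothetical event payloads and repository (the names are illustrative):

import org.axonframework.eventhandling.EventHandler;

public class EventTeamsProjection {

    // Hypothetical read-model store and event payloads (not from the question).
    interface EventTeamsRepository {
        void addTeamToEvent(String eventId, String teamId);
        void removeTeamFromEvent(String eventId, String teamId);
    }
    record TeamCreatedEvent(String teamId, String eventId) {}
    record TeamMovedToEventEvent(String teamId, String oldEventId, String newEventId) {}

    private final EventTeamsRepository repository;

    public EventTeamsProjection(EventTeamsRepository repository) {
        this.repository = repository;
    }

    @EventHandler
    public void on(TeamCreatedEvent event) {
        // New team: add it to the owning event's team list in the projection.
        repository.addTeamToEvent(event.eventId(), event.teamId());
    }

    @EventHandler
    public void on(TeamMovedToEventEvent event) {
        // Team moved: update the reference on both sides of the relationship.
        repository.removeTeamFromEvent(event.oldEventId(), event.teamId());
        repository.addTeamToEvent(event.newEventId(), event.teamId());
    }
}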
If the command "DeleteEventCommand" on the Event aggregate is executed, all teams associated to the event should also be deleted.
A few things here:
Again, your read side can listen on this event, and update the projections accordingly.
You can also start performing validation in the relevant command handlers for the Team aggregate, to check whether the Event exists before performing the operations. This won't be exactly in sync, but will cover most cases (see the "How can I verify that a customer ID really exists when I place an order?" section here).
If you really want to delete the associated Team aggregates off the back of a DeleteEventCommand, you need to handle this inside a Saga, as there is no way for you to perform this atomically without leaking the data storage system's specifics into your domain model. So you have certain retry and idempotency needs here, which a saga can give you. It's not exactly what you are suggesting here, but a related fact is that a single command can't act on a set of aggregates; see the "How can I update a set of aggregates with a single command?" section here.
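A rough sketch of such a saga, again with Axon-style annotations and hypothetical event/command types; in particular, the assumption that the deletion event carries the affected team ids (e.g. copied from a read model) is mine:

import java.util.List;
import org.axonframework.commandhandling.gateway.CommandGateway;
import org.axonframework.modelling.saga.SagaEventHandler;
import org.axonframework.modelling.saga.SagaLifecycle;
import org.axonframework.modelling.saga.StartSaga;

public class DeleteEventTeamsSaga {

    // Hypothetical event/command types.
    record EventDeletedEvent(String eventId, List<String> teamIds) {}
    record DeleteTeamCommand(String teamId) {}

    private transient CommandGateway commandGateway; // injected by the framework

    @StartSaga
    @SagaEventHandler(associationProperty = "eventId")
    public void on(EventDeletedEvent event) {
        // Dispatch one DeleteTeamCommand per associated team; the saga gives us a
        // place to retry until all of them have been processed.
        for (String teamId : event.teamIds()) {
            commandGateway.send(new DeleteTeamCommand(teamId));
        }
        SagaLifecycle.end(); // simplified: a real saga would end once all deletions are confirmed
    }
}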

Mixpanel: Merge duplicate people profiles and also merge events

I have duplicate profiles due to switching of the identifier in the code. I would like to merge the duplicate profiles now and also merge the events / activity feed.
I got the API working and by calling
deduplicate_people(prop_to_match='$email',merge_props=True,case_sensitive=False,backup=True,backup_file=None)
Duplicates are in fact removed, but the events / activity feed is not merged, so I'd lose many events.
Is there a way to remove the duplicates and merge the events / activity feed at the same time?
The duplicates happen because some people use an ID and others an email as distinct_id, due to the change of identifier. The events are linked to the corresponding person by that ID or email.
So here is what I ended up doing to re-create the identity mapping for people and their events:
I used Mixpanel's API (export_people / export_events) to create a backup of people and events. I wrote a script that creates a mapping "distinct_id <-> email" for people that use an actual ID as distinct_id and not an email (each person has an $email field regardless of the content of the $distinct_id).
Then I went over all exported events. For each event that had an ID as distinct_id, I used the mapping to change that distinct_id to the email, and saved the updated events in a JSON file. This re-creates the reference from events to the person using email as distinct_id -- the events that would otherwise get lost.
Then I went ahead and used the de-duplicate API from Mixpanel to delete all duplicates -- thus losing some events. Finally, I imported the events from the previous step, which gave me back those missing events.
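A minimal sketch of the mapping and remapping steps, assuming the exports have already been parsed into simple records (the field semantics follow the description above; everything here is illustrative):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DistinctIdRemapper {

    // Hypothetical parsed shapes of the people and event exports.
    record Person(String distinctId, String email) {}
    record Event(String name, String distinctId, Map<String, Object> properties) {}

    // Step 2: build "distinct_id <-> email" for people whose distinct_id is not an email.
    static Map<String, String> buildMapping(List<Person> people) {
        Map<String, String> idToEmail = new HashMap<>();
        for (Person p : people) {
            if (!p.distinctId().contains("@")) { // crude check: treat non-emails as raw IDs
                idToEmail.put(p.distinctId(), p.email());
            }
        }
        return idToEmail;
    }

    // Step 3: rewrite events so they reference the person by email instead of raw ID.
    static List<Event> remapEvents(List<Event> events, Map<String, String> idToEmail) {
        List<Event> remapped = new ArrayList<>();
        for (Event e : events) {
            String email = idToEmail.get(e.distinctId());
            remapped.add(email == null ? e : new Event(e.name(), email, e.properties()));
        }
        return remapped;
    }
}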
Three open questions to consider before using this approach:
I believe events are not actually deleted on deduplication. So by importing them again there are probably duplicate events in the system that are just not referenced to a person and that may show up at some point.
The deduplication by $email kept the people that use an email as distinct_id and removed the ones with the actual ID. I don't know if this is true every time or whether it was a coincidence. My approach will fail for people that still use an ID as distinct_id.
I suppose it's generally discouraged to hack around the distinct_id like that, because making a mistake may result in data loss. So make sure to get it right.

CQRS/Event Sourcing: Should events (types) be shared?

Should events be shared? I am experimenting with CQRS and Event Sourcing and wonder if events (the types) should be shared/defined between services.
Case:
A request comes in and a new createUser command is pushed into the 'commands' event log. Service A (business logic) fetches this command and generates the data of the new user. Once the new user is created it pushes the new data into the 'events' event log with the event name newUser. Service B (projector) notices the new event and starts processing it.
Here lies my question. Should we define, for every event type (in this case newUser), the logic that needs to run in order to update the materialised view? In the example below there are 2 types of events, and for every event the actions that need to happen are defined. In this case the event types are defined in both the logic service and the projector service.
# <- onEvent
switch event.type
    case "newUser"
        putUsers(firstName=data.firstName, lastName=data.lastName)  # put this data in the database
    case "updateUserFirstName"
        updateUsers(where id = 1, firstName=data.firstName)
Or is it a good idea to define in the event the type of operation that needs to be performed? In that case the event types are not shared, and the projector service is able to handle unknown/new events without any modification.
# <- onEvent
switch event.operation
    case "create"
        putUser(...)
    case "update"
        updateUser(...)  # update only the data defined in the event
Is option 2 viable? Or will I run into issues when choosing this strategy?
Events reflect something that has happened. They are usually named in the past tense - userCreated.
Generic events (or one event type per entity) have a number of drawbacks:
Finding proper past tense names for event types becomes more difficult
You lose some of the expressivity since the whole domain meaning is no longer immediately apparent just looking at the event type
Impossible to subscribe to events in a fine-grained, streamlined way, because you need to "open the envelope" to find out which specific event you're dealing with
Discrepancy between events you talk about with domain experts (for instance during Event Storming sessions) and the way they are encoded in your types, messages, etc.
I wouldn't recommend it except maybe in a very free-form/dynamic system where the entities are not known in advance.
I recommend using the event type to determine what type of business logic/rules will consume the event.
As @guillaume31 mentioned, use the past tense to name your events. But if you want to plan for the future, you should also version your event types. For example, you can name your event types "userCreated_v1" or "userFirstNameChanged_v1". This gives you the ability to change the structure of event messages in the future and easily associate new business logic/rules with the new events.
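A small sketch of what explicit, versioned event types might look like on the projector side (the event classes and the store are hypothetical):

public class UserProjector {

    // Hypothetical, explicitly versioned event types and a materialised-view store.
    record UserCreated_v1(String userId, String firstName, String lastName) {}
    record UserFirstNameChanged_v1(String userId, String firstName) {}
    interface UserReadModelStore {
        void insertUser(String userId, String firstName, String lastName);
        void updateFirstName(String userId, String firstName);
    }

    private final UserReadModelStore store;

    public UserProjector(UserReadModelStore store) {
        this.store = store;
    }

    // One branch per explicit event type; the projector knows exactly what each one means.
    public void onEvent(Object event) {
        if (event instanceof UserCreated_v1 e) {
            store.insertUser(e.userId(), e.firstName(), e.lastName());
        } else if (event instanceof UserFirstNameChanged_v1 e) {
            store.updateFirstName(e.userId(), e.firstName());
        }
        // Unknown types are skipped until a new branch (e.g. for a _v2 event) is added.
    }
}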

Event Sourcing and dealing with data dependencies

Given a REST API with the following operations resulting in events posted to Kafka:
AddCategory
UpdateCategory
RemoveCategory
AddItem (refers to a category by some identifier)
UpdateItem
RemoveItem
And an environment where multiple users may use the REST API at the same time, and the consumers must all get the same events. The consumers may be offline for an extended period of time (more than a day). New consumers may be added, and others removed.
The problems:
Event ordering (is the only workaround a single topic/partition?)
AddItem before AddCategory, invalid category reference.
UpdateItem before AddCategory, used to be a valid reference, now invalid.
RemoveCategory before AddItem, category reference invalid.
....infinite list of other concurrency issues.
Event Store snapshots for fast resync of restarted consumers
Should there be a compacted log topic for both categories and items, each entity keyed by its identifier?
Can the whole compacted log topic be somehow identified as an offset?
Should there only be one entry in the compacted log topic, and its data contain a serialized blob of all categories and items at a given offset (this would require a single topic/partition)?
How to deal with the handover from replaying the rendered-entities event store to the "live stream" of commands/events? Encode the offset in each item in the compacted log view, and use that to replay from the live event log?
Are there other systems that fit this problem better?
I will give you a partial answer based on my experience in Event sourcing.
Event ordering (is the only workaround a single topic/partition?)
AddItem before AddCategory, invalid category reference.
UpdateItem before AddCategory, used to be a valid reference, now invalid.
RemoveCategory before AddItem, category reference invalid.
....infinite list of other concurrency issues.
All scalable Event stores that I know of guarantee event ordering inside a partition only. In DDD terms, the Event store ensures that the Aggregate is rehydrated correctly by replaying the events in the order they were generated. An Apache Kafka topic seems to be a good choice for that. While this is sufficient for the Write side of an application, it is harder for the Read side to use. Harder, but not impossible.
Given that the events are already validated by the Write side (because they represent facts that already happened) we can be sure that any inconsistency that appears in the system is due to the wrong ordering of events. Also, given that the Read side is eventually consistent with the Write side, the missing events will eventually reach our Read models.
So, first thing: in your case AddItem before AddCategory, invalid category reference should in fact be ItemAdded before CategoryAdded (the terms are in the past tense).
Second, when ItemAdded arrives, you try to load the Category by ID, and if that fails (because of the delayed CategoryAdded event) you can create a NotYetAvailableCategory with the ID equal to the referenced ID in the ItemAdded event and a title of "Not Yet Available, Please Wait a Few Milliseconds". Then, when the CategoryAdded event arrives, you just update all the Items that reference that category ID. So the main idea is that you create temporary entities that will be finalized when their events eventually arrive.
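A minimal sketch of that placeholder pattern on the read side (all types and repositories here are hypothetical, for illustration only):

public class CatalogProjection {

    // Hypothetical read-model types and stores.
    record ItemAdded(String itemId, String categoryId, String title) {}
    record CategoryAdded(String categoryId, String title) {}
    record CategoryView(String categoryId, String title) {}
    record ItemView(String itemId, String categoryId, String title) {}
    interface CategoryReadRepository {
        CategoryView findById(String categoryId);
        void save(CategoryView category);
    }
    interface ItemReadRepository {
        void save(ItemView item);
        void refreshCategory(String categoryId, String title); // re-denormalise the category title
    }

    private final CategoryReadRepository categories;
    private final ItemReadRepository items;

    public CatalogProjection(CategoryReadRepository categories, ItemReadRepository items) {
        this.categories = categories;
        this.items = items;
    }

    public void on(ItemAdded event) {
        // Referenced category not here yet? Create a temporary placeholder entry.
        if (categories.findById(event.categoryId()) == null) {
            categories.save(new CategoryView(event.categoryId(), "Not Yet Available, Please Wait"));
        }
        items.save(new ItemView(event.itemId(), event.categoryId(), event.title()));
    }

    public void on(CategoryAdded event) {
        // Finalize the placeholder and update the items that already reference this category.
        categories.save(new CategoryView(event.categoryId(), event.title()));
        items.refreshCategory(event.categoryId(), event.title());
    }
}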
In the case of CategoryRemoved before ItemAdded, category reference invalid: when the ItemAdded event arrives, you could check whether the category was deleted (by having a ListOfCategoriesThatWereDeleted read model) and then take the appropriate action in your Item entity - what that is depends on your business.