Updating last accessed time when separating Commands and Queries - cqrs

Consider a function: IsWalletValid(walletID). It returns true if the walletID exists in the database, and updates a 'last_accessed_time' field.
A task runs periodically to remove any wallets that have not been accessed for a set period of time.
Seems like an easy solution for what we want to do, but IsWalletValid() has a side effect because it writes to the database.
Should we add an additional 'UpdateLastAccessedTime(walletID)' function? Every time we call IsWalletValid() we will also need to remember to call UpdateLastAccessedTime(walletID).

Do verifying that a wallet is valid and updating its last_accessed_time field need to be transactionally consistent (ACID)? You could use eventual consistency here:
The IsWalletValid method publishes a WalletAccessed event, then an event handler updates last_accessed_time asynchronously.
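For illustration, a minimal TypeScript-flavoured sketch of that split (walletRepository, eventBus and the event shape are assumed infrastructure, not a specific library):
type WalletAccessed = { type: "WalletAccessed"; walletId: string; accessedAt: Date };

// Assumed infrastructure:
declare const walletRepository: {
  exists(walletId: string): Promise<boolean>;
  updateLastAccessedTime(walletId: string, at: Date): Promise<void>;
};
declare const eventBus: {
  publish(event: WalletAccessed): void;
  subscribe(type: "WalletAccessed", handler: (e: WalletAccessed) => Promise<void>): void;
};

// Query side: a pure read, plus an event announcing the access.
async function isWalletValid(walletId: string): Promise<boolean> {
  const exists = await walletRepository.exists(walletId);
  if (exists) {
    eventBus.publish({ type: "WalletAccessed", walletId, accessedAt: new Date() });
  }
  return exists;
}

// Asynchronous handler: the only place that writes last_accessed_time.
eventBus.subscribe("WalletAccessed", async (event) => {
  await walletRepository.updateLastAccessedTime(event.walletId, event.accessedAt);
});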

If last_accessed_time is not used by domain logic to make decisions on the write side, it could just be a facet of the read-only projection. This seems to be the same concern as other, more verbose read-audit concerns. Just because data is being written and maintained doesn't mean it necessarily needs to be part of the write model of the system. If you did want to implement this as part of the domain, and perhaps store it within the same event store, it could be treated as a separate auditing context outside the boundary of the original aggregate being audited.

Related

Firestore Increment - Cloud Function Invoked Twice

With Firestore Increment, what happens if you're using it in a Cloud Function and the Cloud Function is accidentally invoked twice?
To make sure that your function behaves correctly on retried execution attempts, you should make it idempotent by implementing it so that an event results in the desired results (and side effects) even if it is delivered multiple times.
E.g. the function is trying to increment a document field by 1
document("post/Post_ID_1").
updateData(["likes" : FieldValue.increment(1)])
So while Increment may be atomic, it's not idempotent? If we want to make our counters idempotent, do we still need to use a transaction and keep track of who was the last person to like the post?
It will increment once for each invocation of the function. If that's not acceptable, you will need to write some code to figure out if any subsequent invocations are valid for your case.
There are many strategies to implement this, and it's up to you to choose one that suits your needs. The usual strategy is to use the event ID in the context object passed to your function to determine if that event has been successfully processed in the past. Maybe this involves storing that record in another document, in Redis, or somewhere that persists long enough for duplicates to be prevented (an hour should be OK).
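As an illustration, a TypeScript sketch of an idempotent increment keyed on the event ID from the context object, assuming the first-generation firebase-functions API; the trigger path and the processedEvents collection name are illustrative:
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

export const onLikeCreated = functions.firestore
  .document("posts/{postId}/likes/{likeId}")
  .onCreate(async (snapshot, context) => {
    const eventRef = db.collection("processedEvents").doc(context.eventId);
    const postRef = db.doc(`posts/${context.params.postId}`);

    await db.runTransaction(async (tx) => {
      // If this event ID was already recorded, this is a duplicate delivery.
      const alreadyProcessed = (await tx.get(eventRef)).exists;
      if (alreadyProcessed) {
        return;
      }
      tx.update(postRef, { likes: admin.firestore.FieldValue.increment(1) });
      tx.set(eventRef, { processedAt: admin.firestore.FieldValue.serverTimestamp() });
    });
  });
A periodic cleanup of old processedEvents documents (anything older than the hour mentioned above, say) keeps the deduplication collection small.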

How to store sagas’ data?

From what I read, aggregates must only contain properties that are used to protect their invariants.
I also read that sagas can be aggregates, which makes sense to me.
Now I modeled a registration process using a saga: on a RegistrationStarted event it sends a ReserveEmail command, which will trigger an EmailReserved or EmailReservationFailed event depending on whether the email is free. A listener will then either send a validation link or a message telling the user an account already exists.
I would like to use data from the RegistrationStarted event in this listener (say the IP and user-agent). How should I do it?
Storing these data in the saga? But they’re not used to protect invariants.
Pushing them through ReserveEmail command and the resulting event? Sounds tedious.
Project the saga to the read model? What about eventual consistency?
Another way?
Rinat Abdullin wrote a good overview of sagas / process managers.
The usual answer is that the saga has copies of the events that it cares about, and uses the information in those events to compute the command messages to send.
List[Command] processManager(List[Event] events)
Pushing them through ReserveEmail command and the resulting event?
Yes, that's the usual approach; we get a list [RegistrationStarted], and we use that to calculate the result [ReserveEmail]. Later on, we'll get [RegistrationStarted, EmailReserved], and we can use that to compute the next set of commands (if any).
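A minimal TypeScript-flavoured sketch of that computation (the event and command shapes are illustrative, not taken from the original question's codebase):
type RegistrationStarted = { type: "RegistrationStarted"; registrationId: string; email: string; ip: string; userAgent: string };
type EmailReserved = { type: "EmailReserved"; registrationId: string };
type EmailReservationFailed = { type: "EmailReservationFailed"; registrationId: string };
type Event = RegistrationStarted | EmailReserved | EmailReservationFailed;
type Command = { type: string; [key: string]: unknown };

// The saga is (conceptually) a pure function of the events it has seen so far.
function processManager(events: Event[]): Command[] {
  const started = events.find((e): e is RegistrationStarted => e.type === "RegistrationStarted");
  if (!started) return [];

  if (events.some((e) => e.type === "EmailReserved")) {
    // The data copied from RegistrationStarted (IP, user agent) is still available here.
    return [{ type: "SendValidationLink", email: started.email, ip: started.ip, userAgent: started.userAgent }];
  }
  if (events.some((e) => e.type === "EmailReservationFailed")) {
    return [{ type: "SendAccountExistsNotice", email: started.email }];
  }
  return [{ type: "ReserveEmail", email: started.email }];
}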
Sounds tedious.
The data has to travel between the two capabilities somehow. So you are either copying the data from one message to another, or you are copying a correlation identifier from one message to another and then allowing the consumer to decide how to use the correlation identifier to fetch a copy of the data.
Storing these data in the saga? But they’re not used to protect invariants.
You are typically going to be storing events in the sagas (to keep track of what has happened). That gives you a copy of the data provided in the event. You don't have an invariant to protect because you are just caching a copy of a decision made somewhere else. You won't usually have the process manager running queries to collect additional data.
What about eventual consistency?
By their nature, sagas are always going to be "eventually consistent"; the "state" of an instance of a saga is just cached copies of data controlled elsewhere. The data is probably nanoseconds old by the time the saga sees it, there's no point in pretending that the data is "now".
If I understand correctly, I could model my saga as a Registration aggregate storing all the events whose correlation identifier is its own identifier?
Udi Dahan, writing about CQRS:
Here’s the strongest indication I can give you to know that you’re doing CQRS correctly: Your aggregate roots are sagas.

How to prevent lost updates on the views in a distributed CQRS/ES system?

I have a CQRS/ES application where some of the views are populated by events from multiple aggregate roots.
I have a CashRegisterActivated event on the CashRegister aggregate root and a SaleCompleted event on the Sale aggregate root. Both events are used to populate the CashRegisterView. The CashRegisterActivated event creates the CashRegisterView or sets it active in case it already exists. The SaleCompleted event sets the last sale sequence number and updates the cash in the drawer.
When two of these events arrive within milliseconds, the first update is overwritten by the last one. So that's a lost update.
I already have a few possible solutions in mind, but they all have their drawbacks:
Marshal all event processing for a view or for one record of a view on the same thread. This works fine on a single node, but once you scale out, things start to get complex. You need to ensure all events for a view are delivered to the same node. And you need to migrate to another node when it goes down. This requires some smart load balancer which is aware of the events and the views.
Lock the record before updating to make sure no other threads or nodes modify it in the meantime. This will probably work fine, but it means giving up on a lock-free system. Threads will sit there, waiting for a lock to be freed. Locking also means increased latency when I scale out the data store (if I'm not mistaken).
For the record: I'm using Java with Apache Camel, RabbitMQ to deliver the events and MariaDB for the view data store.
I have a CQRS/ES application where some of the views in the read model are populated by events from multiple aggregate roots.
That may be a mistake.
You are driving a process off of an isolated event, but composing a view normally requires a history rather than a single event.
A more likely implementation would be to use the arrival of the events to mark the current view stale, and to use a single writer to update the view from the history of events produced by the aggregate(s) concerned.
And that requires a smart messaging solution. I thought "Smart endpoints and dumb pipes" would be a good practice for CQRS/ES systems.
It is. The endpoints just need to be smart enough to understand when they need histories, or when events are sufficient.
A view, after all, is just a snapshot. You take inputs (X.history, Y.history), produce a snapshot, write the snapshot into your view store (possibly with metadata describing the positions in the histories that were used), and you are done.
The events are just used to indicate to the writer that a previous snapshot is stale. You don't use the event to extend the history, you use the event to tell the writer that a history has changed.
You don't lose updates with multiple events, because the event itself, with all of its state, is captured in the history. It's the history that is used to build the event-sourced view.
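A rough TypeScript sketch of that shape (the event store, view store and the scheduling of the single writer are assumed infrastructure; stream names and event shapes are illustrative):
type DomainEvent =
  | { type: "CashRegisterActivated"; position: number }
  | { type: "SaleCompleted"; position: number; saleSequenceNumber: number; amount: number };
type CashRegisterView = { active: boolean; lastSaleSequenceNumber: number; cashInDrawer: number };

declare const eventStore: { readHistory(streamId: string): Promise<DomainEvent[]> };
declare const viewStore: { saveSnapshot(viewId: string, view: CashRegisterView): Promise<void> };

// Incoming events only mark the view stale; they are not applied directly.
const staleViews = new Set<string>();
function onEventArrived(cashRegisterId: string): void {
  staleViews.add(cashRegisterId);
}

// A single writer rebuilds each stale view from the relevant histories,
// so two events arriving milliseconds apart cannot overwrite each other.
async function rebuildStaleViews(): Promise<void> {
  for (const id of staleViews) {
    staleViews.delete(id);
    const history = (await eventStore.readHistory(`CashRegister-${id}`))
      .concat(await eventStore.readHistory(`Sales-for-${id}`))
      .sort((a, b) => a.position - b.position);

    const view: CashRegisterView = { active: false, lastSaleSequenceNumber: 0, cashInDrawer: 0 };
    for (const event of history) {
      if (event.type === "CashRegisterActivated") view.active = true;
      if (event.type === "SaleCompleted") {
        view.lastSaleSequenceNumber = event.saleSequenceNumber;
        view.cashInDrawer += event.amount;
      }
    }
    await viewStore.saveSnapshot(id, view);
  }
}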
Konrad Garus wrote
... handling events coming from a single source is easier, but more importantly because a DB-backed event store trivially guarantees ordering and has no issues with lost or duplicate messages.
A solution could be to detect when this situation happens and do a retry.
To do this:
Add to each view table the aggregate version number, kept up to date as events are applied
On each update statement, add "aggr_version = n-1" to the WHERE clause (where n is the version of the event being processed)
When the update statement reports that no records were modified, it probably means the event was processed out of order and a retry strategy can be performed (see the sketch below)
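A minimal sketch of that conditional update in TypeScript (the db.execute helper, table and column names are illustrative; the SQL is MariaDB-flavoured):
declare const db: { execute(sql: string, params: unknown[]): Promise<{ affectedRows: number }> };
declare function scheduleRetry(event: unknown): Promise<void>;

// Apply a SaleCompleted event to the view, but only if it arrives in order.
async function applySaleCompleted(event: { cashRegisterId: string; version: number; amount: number }): Promise<void> {
  const result = await db.execute(
    `UPDATE cash_register_view
        SET cash_in_drawer = cash_in_drawer + ?, aggr_version = ?
      WHERE cash_register_id = ? AND aggr_version = ?`,
    [event.amount, event.version, event.cashRegisterId, event.version - 1]
  );

  if (result.affectedRows === 0) {
    // Event n-1 has not been applied yet (or this is a duplicate):
    // push the event onto a retry queue instead of losing the update.
    await scheduleRetry(event);
  }
}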
The problem is that this adds complexity and is hard to test. The performance bottleneck is very likely in the database, so a single process with a failover solution will probably be the easiest solution.
Although I see you ask how to handle these things at scale: I've seen people recommend using a single-threaded approach until it actually becomes a problem, and then addressing it.
I would have a process manager per view model, draw the events you need from the store, and write them single-threaded.
I combined the answers of VoiceOfUnreason and StefRave into something I think might work. Populating a view from multiple aggregate roots indeed feels wrong. We have out-of-order detection with a retry queue, so an event on an aggregate root will only be processed when the last completely processed event is version n-1.
So when I create new aggregate roots for the views that would be populated by multiple aggregate roots (say aggregate views), all updates for the view will be synchronised without row locking or thread synchronisation. We have conflict detection with a retry mechanism on the aggregate roots as well, that will take care of concurrency on the command side. So if I just construct these aggregate roots from the events I'm currently using to populate the aggregate views, I will have solved the lost update problem.
Thoughts on this solution?

CQRS, Event-Sourcing and Web-Applications

As I am reading some CQRS resources, there is a recurrent point I do not catch. For instance, let's say a client emits a command. This command is integrated by the domain, so it can refresh its domain model (DM). On the other hand, the command is persisted in an Event-Store. That is the most common scenario.
1) When we say the DM is refreshed, I suppose data is persisted in the underlying database (if any). Am I right? Otherwise, we would deal with a memory-transient model, which, I suppose, would not be a good thing? (State is not supposed to remain in memory on the server side outside a client request.)
2) If data is persisted, I suppose the read model that relies on it is automatically updated, as each client that requests it generates a new "state/context" in the application (in the case of a web application or a RESTful architecture)?
3) If the command is persisted, does that mean we deal with Event Sourcing (by construction, when we use CQRS)? Does Event Sourcing invalidate the database update process? (If state is reconstructed from the Event Store, maintaining the database seems useless.)
Does CQRS only apply to multi-database systems (where data is propagated to separate databases), and, if it deals with memory-transient models, does that fit well with web applications or RESTful services?
1) As already said, the only things that are really stored are the events.
The only thing commands do is run consistency checks prior to raising events. In pseudo-code:
public void BorrowBook(BorrowableBook dto) {
    if (dto is valid)
        RaiseEvent(new BookBorrowedEvent(dto))
    else
        throw exception
}

public void Apply(BookBorrowedEvent evt) {
    this.aProperty = evt.aProperty;
    ...
}
Current state is retrieved by applying the events sequentially. Because of this, you have to pay great attention in the design phase, since there are common pitfalls to avoid (maybe you have already read it, but let me suggest this article by Martin Fowler).
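A minimal TypeScript-flavoured sketch of that rebuild, using the book example above (the event shapes and the loadEvents helper are illustrative):
type BookEvent =
  | { type: "BookBorrowed"; bookId: string; borrowerId: string }
  | { type: "BookReturned"; bookId: string };

class BorrowableBook {
  borrowed = false;
  borrowerId?: string;

  apply(event: BookEvent): void {
    switch (event.type) {
      case "BookBorrowed":
        this.borrowed = true;
        this.borrowerId = event.borrowerId;
        break;
      case "BookReturned":
        this.borrowed = false;
        this.borrowerId = undefined;
        break;
    }
  }
}

declare function loadEvents(bookId: string): Promise<BookEvent[]>;

// Current state = new instance + every stored event applied in order.
async function loadBook(bookId: string): Promise<BorrowableBook> {
  const book = new BorrowableBook();
  for (const event of await loadEvents(bookId)) {
    book.apply(event);
  }
  return book;
}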
So far so good, but this is just Event Sourcing. CQRS comes into play if you decide to use a different database to persist the state of an aggregate.
In my project we have a projection that, every x minutes, applies the new events (from the event store) to the aggregate and saves the results in a separate instance of MongoDB (the presentation layer will access this DB for reading). This model is clearly eventually consistent, but in this way you really separate Command (write) from Query (read).
2) If you have decided to divide the write model from the read model, there are various options you can use to keep them synchronized:
Every x seconds, apply the events since the last checkpoint (some solutions offer snapshots to avoid heavy reapplication), as sketched below
A projection that subscribes to events and updates the read model as soon as an event is raised
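A small TypeScript-flavoured sketch of the first option, a catch-up loop from the last checkpoint (checkpointStore, eventStore and readModel are assumed infrastructure, not a specific library):
declare const checkpointStore: { load(): Promise<number>; save(position: number): Promise<void> };
declare const eventStore: { readFrom(position: number): Promise<{ position: number; event: unknown }[]> };
declare const readModel: { project(event: unknown): Promise<void> };

// Run every x seconds: apply only the events recorded since the last checkpoint.
async function catchUp(): Promise<void> {
  let position = await checkpointStore.load();
  for (const { position: next, event } of await eventStore.readFrom(position)) {
    await readModel.project(event);
    position = next;
  }
  await checkpointStore.save(position);
}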
3) The only things stored are the events. In fact we have an event store, not a command store :)
Is the database useless? It depends! How many events do you need to reapply to take the aggregate to the current state?
Three? Maybe you don't need a database for the read model.
The thing to grok is that the ONLY things stored are the events*. The domain model is rebuilt from the events.
So yes, the domain model is memory-transient, as you say, in that no representation of the domain model is stored*, only the events that happened to the domain to put the model in its current state.
When an element from the domain model is loaded, a new instance of the element is created and the events that affect that instance are replayed one after the other, in the right order, to put the element into the correct state.
You could keep instances of your domain objects around, subscribed to new events so that they stay up to date without being rebuilt from all the events every time, but usually it's quick enough just to load all the events from the database and apply them every time, in the same way that you might load the instance from the database on every call to your web service.
*Unless you have snapshots of your domain objects to reduce the number of events you need to load/process
Persistence of data is not strictly needed. It might be sufficient to have enough copies in enough different locations (GigaSpaces). So no, a database is not required. This is (or at least was, a few years ago) used in production by the Dutch eBay equivalent.

How cursor.observe works and how to avoid multiple instances running?

Observe
I was trying to figure out how cursor.observe runs inside Meteor, but found nothing about it.
The docs say:
Establishes a live query that notifies callbacks on any change to the query result.
I would like to understand better what live query means.
Where will my observer function be executed? By Meteor or by Mongo?
Multiple runs
When we have more than one user subscribing to an observer, one instance runs for each client, leading to a performance and race-condition issue.
How can I implement my observer so that it behaves like a singleton? Just one instance running for all clients.
Edit: There was a third question here, but now it is a separated question: How to avoid race conditions on cursor.observe?
Server side, as of right now, observe works as follows:
Construct the set of documents that match the query.
Regularly poll the database with the query and take a diff of the changes, emitting the relevant events to the callbacks.
When matching data is changed/inserted into Mongo by Meteor itself, emit the relevant events, short-circuiting step #2 above.
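For reference, a small usage sketch of the documented cursor.observe API (the Messages collection and the callback bodies are illustrative):
import { Mongo } from "meteor/mongo";

const Messages = new Mongo.Collection("messages");

// Server side: Meteor keeps this query "live" via the poll-and-diff mechanism
// described above, and fires these callbacks on changes to the result set.
const handle = Messages.find({ room: "lobby" }).observe({
  added(doc) { console.log("added", doc._id); },
  changed(newDoc, oldDoc) { console.log("changed", newDoc._id); },
  removed(doc) { console.log("removed", doc._id); },
});

// Later, stop the live query when it is no longer needed.
handle.stop();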
There are plans (possibly in the next release) to automatically ensure that calls to subscribe that have the same arguments are shared. So basically taking care of the singleton part for you automatically.
Certainly you could achieve something like this yourself, but I believe it's a high priority for the meteor team, so it's probably not worth the effort at this point.