How cursor.observe works and how to avoid multiple instances running? - mongodb

Observe
I was trying to figure out how cursor.observe runs inside Meteor, but found nothing about it.
The docs say:
Establishes a live query that notifies callbacks on any change to the query result.
I would like to understand better what live query means.
Where will my observer function be executed? By Meteor or by Mongo?
Multiple runs
When more than one user subscribes to an observer, one instance runs for each client, leading to performance and race-condition issues.
How can I implement my observer as a singleton? Just one instance running for all clients.
Edit: There was a third question here, but it is now a separate question: How to avoid race conditions on cursor.observe?

Server side, as of right now, observe works as follows:
1. Construct the set of documents that match the query.
2. Regularly poll the database with the query and take a diff of the changes, emitting the relevant events to the callbacks.
3. When matching data is changed/inserted into Mongo by Meteor itself, emit the relevant events, short-circuiting step #2 above.
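For reference, here is roughly what those server-side callbacks look like. This is only a sketch: the Messages collection and query are made up, and the polling and diffing described above happen inside the Meteor server process, with Mongo just answering the queries.

// Server side. Meteor, not Mongo, runs these callbacks.
const Messages = new Meteor.Collection("messages"); // hypothetical collection

const handle = Messages.find({ room: "lobby" }).observe({
  added(doc) {
    console.log("entered result set:", doc._id);
  },
  changed(newDoc, oldDoc) {
    console.log("changed while matching:", newDoc._id);
  },
  removed(oldDoc) {
    console.log("left result set:", oldDoc._id);
  },
});

// Stop the live query when it is no longer needed.
handle.stop();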
There are plans (possibly in the next release) to automatically ensure that calls to subscribe that have the same arguments are shared. So basically, that takes care of the singleton part for you.
Certainly you could achieve something like this yourself (see the sketch below), but I believe it's a high priority for the Meteor team, so it's probably not worth the effort at this point.
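If you did want to roll the sharing yourself in the meantime, the idea is roughly the following. This is a sketch only; observerCache and shareObserver are invented names, not Meteor API, and later Meteor versions do this de-duplication for you.

// Multiplex one live query across many subscribers.
type Callbacks = { added?: Function; changed?: Function; removed?: Function };

const observerCache: Record<
  string,
  { handle: { stop(): void }; subscribers: Set<Callbacks> }
> = {};

function shareObserver(
  key: string, // e.g. a stable serialization of the query arguments
  makeCursor: () => { observe(cb: Callbacks): { stop(): void } },
  callbacks: Callbacks
) {
  let entry = observerCache[key];
  if (!entry) {
    const subscribers = new Set<Callbacks>();
    // A single underlying observer fans events out to every subscriber.
    const handle = makeCursor().observe({
      added: (doc: any) => subscribers.forEach((s) => s.added?.(doc)),
      changed: (n: any, o: any) => subscribers.forEach((s) => s.changed?.(n, o)),
      removed: (doc: any) => subscribers.forEach((s) => s.removed?.(doc)),
    });
    entry = { handle, subscribers };
    observerCache[key] = entry;
  }
  entry.subscribers.add(callbacks);

  return {
    stop() {
      entry.subscribers.delete(callbacks);
      if (entry.subscribers.size === 0) {
        entry.handle.stop(); // last subscriber gone: stop the live query
        delete observerCache[key];
      }
    },
  };
}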

Related

Firestore Increment - Cloud Function Invoked Twice

With Firestore Increment, what happens if you're using it in a Cloud Function and the Cloud Function is accidentally invoked twice?
To make sure that your function behaves correctly on retried execution attempts, you should make it idempotent by implementing it so that an event produces the desired result (and side effects) even if it is delivered multiple times.
E.g. the function is trying to increment a document field by 1
document("post/Post_ID_1").
updateData(["likes" : FieldValue.increment(1)])
So while Increment may be atomic it's not idempotent? If we want to make our counters idempotent we still need to use a transaction and keep track of who was the last person to like the post?
It will increment once for each invocation of the function. If that's not acceptable, you will need to write some code to figure out if any subsequent invocations are valid for your case.
There are many strategies to implement this, and it's up to you to choose one that suits your needs. The usual strategy is to use the event ID in the context object passed to your function to determine if that event has been successfully processed in the past. Maybe this involves storing that record in another document, in Redis, or somewhere that persists long enough for duplicates to be prevented (an hour should be OK).
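A minimal sketch of that event-ID strategy, assuming a Cloud Function written against firebase-functions v1 and a hypothetical processedEvents collection used as the deduplication record (the trigger path and names are invented for illustration):

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

// Hypothetical trigger: a "like" document is created under a post.
export const onLikeCreated = functions.firestore
  .document("posts/{postId}/likes/{likeId}")
  .onCreate(async (snap, context) => {
    const eventId = context.eventId; // the same ID is redelivered on retries
    const postRef = db.doc(`posts/${context.params.postId}`);
    const markerRef = db.doc(`processedEvents/${eventId}`);

    await db.runTransaction(async (tx) => {
      // If this event ID was already handled, skip the increment.
      const marker = await tx.get(markerRef);
      if (marker.exists) {
        return;
      }
      tx.update(postRef, { likes: admin.firestore.FieldValue.increment(1) });
      // Record the event ID so a retried delivery becomes a no-op.
      tx.set(markerRef, {
        processedAt: admin.firestore.FieldValue.serverTimestamp(),
      });
    });
  });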

Snapshotting as Domain Event in Event Sourcing

I have some fairly long-lived aggregates in my event sourcing model that will accumulate a large number of events. I am thinking about using snapshots to optimize the re-hydration of these aggregates; e.g. the aggregates are warehouses.
My question is whether or not I should produce a specific event for snapshotting, something like "WarehouseStateSnapshotted". In my current prototype, the snapshot state is saved by duplicate code in a few command handlers. I feel this is not the right place to be handling it. I would rather dispatch an event for the snapshot to my service bus, and have an event handler take care of saving the snapshot state. This may, however, violate the domain-driven idea of what events themselves represent. Have others created events for snapshots?
If this is not the right approach, should I at least move my snapshotting logic out of the command handlers and into the aggregate class?
Thanks!
EDIT: This comment seems to suggest that snapshots as domain events are the wrong approach.
EDIT2: Simplified Question - Is it appropriate to have repos injected into command handlers?
Let me attack the easy one first. The snapshotting logic does not belong in the aggregate. Whether and when you snapshot is purely a performance concern and so does not belong with the business rules. It helps to draw the line by imagining a server with infinite resources. If you don’t need to do “the thing” on this amazing machine, “the thing” does not belong in the aggregate.
In the link you posted above I agree with RBanks54 that the snapshot does not belong in the aggregate event stream, for all the reasons he lists. I think your solution to dispatch an event on the service bus, then handle that event in a different command, is the correct approach. Handling snapshotting in the context of handling a new event means you cannot snapshot unless a new event is received. Having a distinct message on the service bus means any process can request a snapshot when appropriate.
My question is whether or not I should produce a specific event for snapshotting, so something like "WarehouseStateSnapshotted".
"It depends".
The reference you should review for snapshotting is CQRS Documents, by Greg Young. It's relatively old (2010), but it serves as a simple introduction to snapshotting as a concept.
There's nothing wrong with generating snapshots asynchronously and storing them outside of the event stream.
You can use any sensible trigger for the snapshotting process; you don't necessarily need an event in the stream. "Snapshot every 100 events" or "Snapshot every 10 minutes" or "Snapshot when the admin clicks the snapshot button" are all viable.
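To make that concrete, here is a minimal sketch of an out-of-band snapshotter using a "snapshot every 100 events" trigger. The EventStore, SnapshotStore, and rehydrate names are assumptions for illustration, not a particular framework's API:

// Snapshot cadence is a pure performance choice, kept outside the aggregate.
interface DomainEvent { version: number; type: string; data: unknown; }
interface Snapshot { aggregateId: string; version: number; state: unknown; }
interface EventStore {
  readEvents(aggregateId: string, fromVersion: number): Promise<DomainEvent[]>;
}
interface SnapshotStore {
  load(aggregateId: string): Promise<Snapshot | null>;
  save(snapshot: Snapshot): Promise<void>;
}

const SNAPSHOT_EVERY = 100;

async function maybeSnapshot(
  aggregateId: string,
  events: EventStore,
  snapshots: SnapshotStore,
  rehydrate: (state: unknown, newEvents: DomainEvent[]) => unknown
): Promise<void> {
  const last = await snapshots.load(aggregateId);
  const fromVersion = last ? last.version + 1 : 0;
  const newEvents = await events.readEvents(aggregateId, fromVersion);
  if (newEvents.length < SNAPSHOT_EVERY) return; // not enough churn yet

  // Fold the new events onto the previous snapshot state and persist it
  // outside of the event stream.
  const state = rehydrate(last ? last.state : null, newEvents);
  await snapshots.save({
    aggregateId,
    version: newEvents[newEvents.length - 1].version,
    state,
  });
}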
Some domains have a natural cadence to them, where the domain itself might suggest a snapshot -- think "closing the books on the fiscal year".
I'm somewhat skeptical about putting a domain agnostic "make a snapshot" message into the event stream - I don't think it's appropriate to have the aggregate be responsible for snapshot cadence. It's not broken, but it does feel a bit like overloading the semantics of the event stream with a different concern.
I have been dabbling a bit with event sourcing, but I'm no expert. I do not particularly like the idea of a separate "stream" representing a snapshot; it isn't much of a stream, since it only stores the last snapshot. In my Shuttle.Recall project, which is still in its infancy, I store snapshots as normal domain events, but they are specifically marked as snapshots, and the last snapshot version is stored separately so that it can be loaded first and the events after that version applied. I find some advantages to this in that you can also add some functionality around snapshots.
When you are using snapshots as a purely technical performance improvement it may not add much value to your domain. If a snapshot does not belong in the aggregate/domain then how would one go about hydrating the aggregate from the snapshot?
In some instances a snapshot may be very much part of the domain. When you look at your monthly bank statement you will not find each and every transaction (event) from the day that you opened up your account. Instead we have an opening balance (snapshot) with the new transactions (events) for that month. In this way the "MonthEndProcessed" event may very well be a snapshot.
I also don't really buy the argument that, should a snapshot contain an error, you cannot fix it because the event stream is immutable. What happens if your event contains an error? Can you not fix it? These errors should ideally not make it into a production system, but if they do, they should be fixed. The immutability, to me anyway, relates to the typical interaction with the system: we do not typically make changes to an event once it has taken place.
In some instances it may even be beneficial to go back and change some events to a newer version. These should be kept to a minimum and ideally avoided but perhaps it may be pragmatic in some instances.
But like I said... I'm still learning :)

How to prevent lost updates on the views in a distributed CQRS/ES system?

I have a CQRS/ES application where some of the views are populated by events from multiple aggregate roots.
I have a CashRegisterActivated event on the CashRegister aggregate root and a SaleCompleted event on the Sale aggregate root. Both events are used to populate the CashRegisterView. The CashRegisterActivated event creates the CashRegisterView or sets it active in case it already exists. The SaleCompleted event sets the last sale sequence number and updates the cash in the drawer.
When two of these events arrive within milliseconds, the first update is overwritten by the last one. So that's a lost update.
I already have a few possible solutions in mind, but they all have their drawbacks:
Marshal all event processing for a view or for one record of a view on the same thread. This works fine on a single node, but once you scale out, things start to get complex. You need to ensure all events for a view are delivered to the same node. And you need to migrate to another node when it goes down. This requires some smart load balancer which is aware of the events and the views.
Lock the record before updating to make sure no other threads or nodes modify it in the meantime. This will probably work fine, but it means giving up on a lock-free system. Threads will sit there, waiting for a lock to be freed. Locking also means increased latency when I scale out the data store (if I'm not mistaken).
For the record: I'm using Java with Apache Camel, RabbitMQ to deliver the events and MariaDB for the view data store.
I have a CQRS/ES application where some of the views in the read model are populated by events from multiple aggregate roots.
That may be a mistake.
Driving a process off of an isolated event can work, but composing a view normally requires a history rather than a single event.
A more likely implementation would be to use the arrival of the events to mark the current view stale, and to use a single writer to update the view from the history of events produced by the aggregate(s) concerned.
And that requires a smart messaging solution. I thought "Smart endpoints and dumb pipes" would be a good practice for CQRS/ES systems.
It is. The endpoints just need to be smart enough to understand when they need histories, or when events are sufficient.
A view, after all, is just a snapshot. You take inputs (X.history, Y.history), produce a snapshot, write the snapshot into your view store (possibly with meta data describing the positions in the histories that were used), and you are done.
The events are just used to indicate to the writer that a previous snapshot is stale. You don't use the event to extend the history, you use the event to tell the writer that a history has changed.
You don't lose updates with multiple events, because the event itself, with all of its state, is captured in the history. It's the history that is used to build the event-sourced view.
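A rough sketch of that shape, collapsing the multiple histories into a single history() call for brevity. Every type and name here (EventStore, ViewStore, applyToCashRegisterView) is invented for illustration, not a specific framework:

// Events only mark the view stale; a single writer rebuilds it from history.
interface DomainEvent { aggregateId: string; type: string; data: unknown; }
interface EventStore { history(viewId: string): Promise<DomainEvent[]>; }
interface ViewStore {
  markStale(viewId: string): Promise<void>;
  nextStale(): Promise<string | null>;
  save(viewId: string, view: object): Promise<void>;
}

// Any consumer may run this; it only flags the view as needing a rebuild.
async function onEvent(event: DomainEvent, views: ViewStore): Promise<void> {
  await views.markStale(event.aggregateId); // cheap, idempotent, order-independent
}

// Exactly one writer runs this loop per view store.
async function viewWriter(events: EventStore, views: ViewStore): Promise<void> {
  let viewId = await views.nextStale();
  while (viewId !== null) {
    // Rebuild from the history, not from the triggering event, so concurrent
    // or reordered events cannot lose updates.
    const history = await events.history(viewId);
    const view = history.reduce(applyToCashRegisterView, {} as object);
    await views.save(viewId, view);
    viewId = await views.nextStale();
  }
}

function applyToCashRegisterView(view: object, event: DomainEvent): object {
  // Fold one event into the view; the details depend on the domain.
  return { ...view, lastEventType: event.type };
}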
Konrad Garus wrote
... handling events coming from a single source is easier, but more importantly because a DB-backed event store trivially guarantees ordering and has no issues with lost or duplicate messages.
A solution could be to detect when this situation happens and retry.
To do this:
Add to each table the aggregate version number, which is kept up to date.
On each update statement, add the following to the where clause: "aggr_version=n-1" (where n is the version of the event being processed).
When the update statement modifies no records, it probably means that the event was processed out of order, and a retry strategy can be applied (see the sketch below).
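In code, the guarded update might look roughly like this. The table, columns, and the db client (with execute returning the number of affected rows) are made up for illustration:

// Version-guarded update: only apply event n on top of version n-1.
async function applySaleCompleted(
  db: { execute(sql: string, params: unknown[]): Promise<number> },
  viewId: string,
  eventVersion: number,
  cashDelta: number
): Promise<void> {
  const affected = await db.execute(
    `UPDATE cash_register_view
        SET cash_in_drawer = cash_in_drawer + ?,
            aggr_version   = ?
      WHERE id = ? AND aggr_version = ?`,
    [cashDelta, eventVersion, viewId, eventVersion - 1]
  );
  if (affected === 0) {
    // Out of order or concurrent: park the event on a retry queue.
    throw new Error(`event v${eventVersion} for ${viewId} not applied, retry later`);
  }
}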
The problem is that this adds complexity and is hard to test. The performance bottleneck is very likely in the database, so a single process with a failover solution will probably be the easiest solution.
Although I see you ask how to handle these things at scale, I've seen people recommend using a single-threaded approach until it actually becomes a problem, and then addressing it.
I would have a process manager per view model, draw the events you need from the store, and write them single-threaded.
I combined the answers of VoiceOfUnreason and StefRave into something I think might work. Populating a view from multiple aggregate roots feels wrong indeed. We have out-of-order detection with a retry queue, so an event on an aggregate root will only be processed when the last completely processed event is version n-1.
So when I create new aggregate roots for the views that would be populated by multiple aggregate roots (say aggregate views), all updates for the view will be synchronised without row locking or thread synchronisation. We have conflict detection with a retry mechanism on the aggregate roots as well, that will take care of concurrency on the command side. So if I just construct these aggregate roots from the events I'm currently using to populate the aggregate views, I will have solved the lost update problem.
Thoughts on this solution?

Updating last accessed time when separating Commands and Queries

Consider a function: IsWalletValid(walletID). It returns true if the walletID exists in the database, and updates a 'last_accessed_time' field.
A task runs periodically to remove any wallets that have not been accessed for a set period of time.
Seems like an easy solution for what we want to do, but IsWalletValid() has a side effect because it writes to the database.
Should we add an additional 'UpdateLastAccessedTime(walletID)' function? Every time we call IsWalletValid() we will also need to remember to call UpdateLastAccessedTime(walletID).
Do verifying that a wallet is valid and updating its last_accessed_time field need to be transactionally consistent (ACID)? You could use eventual consistency here:
The method IsWalletValid publishes a WalletAccessed event, then an event handler updates last_accessed_time asynchronously.
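A sketch of that split. WalletStore and EventBus are hypothetical interfaces invented for illustration, not a particular library:

// The query stays side-effect free from the caller's point of view;
// the timestamp write happens asynchronously in an event handler.
interface WalletStore {
  exists(walletId: string): Promise<boolean>;
  touch(walletId: string, at: Date): Promise<void>;
}
interface EventBus {
  publish(event: { type: string; walletId: string; at: Date }): void;
}

async function isWalletValid(
  walletId: string,
  wallets: WalletStore,
  bus: EventBus
): Promise<boolean> {
  const valid = await wallets.exists(walletId);
  if (valid) {
    // The side effect is expressed as an event, not performed inline.
    bus.publish({ type: "WalletAccessed", walletId, at: new Date() });
  }
  return valid;
}

// Elsewhere, an asynchronous handler owns the write:
async function onWalletAccessed(
  event: { walletId: string; at: Date },
  wallets: WalletStore
): Promise<void> {
  await wallets.touch(event.walletId, event.at); // update last_accessed_time
}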
If last_accessed_time is not used by domain logic to make decisions during write handling, this could just be a facet of the read-only projection. It seems like the same concern as other, more verbose read-audit concerns. Just because data is being written and maintained doesn't mean that it necessarily needs to be part of the write model of the system. If, however, you did want to implement this as part of the domain, and perhaps store it within the same event store, it could be considered a separate auditing context outside the boundary of the original aggregate being audited.

Recreate a graph that changes over time

I have an entity in my domain that represents a city electrical network. Currently my model is an entity with a List that contains breakers, transformers, and lines.
The network changes every time a breaker is opened/closed, users can change connections, etc.
In all examples of CQRS the EventStore is queried with Version and aggregateId.
Do you think I have to implement events only for the "network" aggregate or also for every "Connectable" item?
In this case, when I have to replay all events to get the current status (as of a date), I can have nearly 10,000-20,000 events to process.
Should an event modify one property, or do I need an event that modifies an object (containing all properties of the object)?
There's always an exception to the rule, but I think you need to have an event for every command handled in your domain. You can get around the problem of processing so many events by making use of snapshots.
http://thinkbeforecoding.com/post/2010/02/25/Event-Sourcing-and-CQRS-Snapshots
I assume you mean that currently your "connectable items" are part of the "network" aggregate, and you are asking whether they should be their own aggregates? That really depends on the nature of your system and problem, and is more of a DDD issue than simply a CQRS one. However, if the nature of your changes is typically to operate on the items independently of one another, then they should probably be aggregate roots themselves. Regardless, in order to answer that question we would need to know much more about the system you are modeling.
As for the challenge of replaying thousands of events, you certainly do not have to replay all your events for each command. Sure, snapshotting is an option, but even better is caching the aggregate root objects in memory after they are first loaded, to ensure that you do not have to source from events with each command (unless the system crashes, in which case you can rely on snapshots for quicker recovery, though you may not need them with caching since you only pay the penalty of loading once).
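A small sketch of that caching idea; the Network class, EventStore interface, and event shape are invented for illustration:

// Source from events once, then reuse the cached instance for later commands.
interface NetworkEvent { aggregateId: string; version: number; data: unknown; }
interface EventStore { readAll(aggregateId: string): Promise<NetworkEvent[]>; }

class Network {
  version = 0;
  apply(event: NetworkEvent): void {
    // Mutate breakers/transformers/lines according to the event.
    this.version = event.version;
  }
}

const cache = new Map<string, Network>();

async function loadNetwork(id: string, store: EventStore): Promise<Network> {
  const cached = cache.get(id);
  if (cached) return cached; // replay cost is paid only once
  const network = new Network();
  for (const event of await store.readAll(id)) {
    network.apply(event);
  }
  cache.set(id, network);
  return network;
}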
Now if you are distributing this system across multiple hosts or threads there are some other issues to consider but I think that discussion is best left for another question or the forums.
Finally, you asked (I think) whether an event can modify more than one property of the state of an object. Yes, if that is what makes sense based on what that event represents. The idea of an event is simply that it represents a state change in the aggregate; however, these events should also represent concepts that make sense to the business.
I hope that helps.