CQRS/EventStore: How to dispatch undispatched events?

With the amazing EventStore 3.0 library there is a store.Advanced.GetUndispatchedCommits() method.
What is the best way/pattern to dispatch these?
Do I simply loop and call my dispatcher and then update the commit to indicate it has been dispatched (if so how would I do this)?
Also, when the EventStore is wired up, a dispatch of any undispatched commits is attempted at startup. How can I prevent this from happening?
These questions are more to do with learning how EventStore works rather than a problem with a project.

The "dispatch scheduler" is the thing that attempts to load up all undispatched commits during startup. You could create your own and override the method that is called at startup. At the same time, I wouldn't recommend it. The model behind the EventStore is all about fault tolerance and failure recovery. Specifically, if the machine dies just after a commit has been persisted and the dispatcher hasn't yet finished pushing the message onto the wire, you want the EventStore to pickup where it left off at startup by dispatching any undispatched commits.

Related

Multithreading: best method for lossy thread notifications in Swift?

I have a high-priority audio thread that runs periodically and should do minimal synchronization.
Sometimes the main thread needs to ensure that at least one audio cycle has passed and certain parameters have been picked up, before sending the next batch of parameters. For example, when disabling an audio node the main thread needs to wait until the next cycle when the disabling command is picked up and the node shuts itself down.
At times it is important for the main thread to wait until the command is fully executed, but other times it's not important, so nobody might be listening to the sync event. Hence the "lossy" scenario.
So what is the best way of notifying other threads about an event with minimal overhead and possibly in a "lossy" way?
I can't think of a way of using a semaphore for this task. Are there any canonical ways of achieving this? It looks like Java's notifyAll() works precisely this way; if so, what synchronization mechanism is used behind notifyAll()?
Edit: I've been thinking, is there such a thing as "send me a semaphore in a queue and I'll signal it"? It seems a bit too complicated, but theoretically it could do the job. Are there any simpler tools for the same task?
As a rule, you never want to block the main thread (or at least not for more than a few milliseconds). If the response might ever take longer than that, rather than actually waiting, we adopt asynchronous patterns and let the main thread proceed. Sure, if you need to prevent user interaction in the meantime, we'd do that, but we wouldn't block the main thread.
The key concern is that if an app blocks the main thread for too long, you have a bad UX (where the app appears to freeze) and you risk having your app killed by the watchdog process. I would therefore not advise using semaphores (or any other similar mechanisms) to have the main thread wait for something from your audio engine controller.
So, for example, let's say the main thread wants to tell the audio engine to pause playback, but you want the UI to "wait" for that to be acknowledged and handled. Instead of actually waiting, we would set up an asynchronous pattern where the main thread notifies the audio engine that it wants it to pause, and the audio controller then notifies the main thread when that request has been processed via some callback mechanism (e.g., a delegate protocol, a completion handler closure, etc.). If you need to prevent user interaction during the intervening time, you'd disable the relevant controls and show a UIActivityIndicatorView (i.e., a spinner) or something like that, which would be removed when the completion handler is called.
Now, you used the term "lossy", but that generally conveys that you don't mind the request getting lost, and I'm assuming that is not really the case. I'm assuming you don't want the request itself to be lost, but rather that the main thread doesn't care about the response, confident that the audio controller will get to it when it can. In that case, you'd probably still give this sort of request a callback mechanism, but the main thread just wouldn't avail itself of it.
Now, if you have a sequence of commands that you want the audio engine to process in order, the audio controller might keep a private, internal queue for these requests, configured not to start subsequent requests until the prior ones have finished. The main thread shouldn't worry about whether the required audio cycle has passed; it should just send whatever requests are appropriate and let the audio controller handle them in the desired order and timing.
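To make the "fire the command, optionally get called back" shape concrete, here is a language-agnostic sketch written in C# purely for illustration (the question is Swift, but the pattern carries over); every type and member name here is invented, not part of any audio API: commands go onto the controller's internal serial queue, each with an optional completion callback that a caller can simply omit when it doesn't care about the response.

    // Illustrative only: invented types, not a real audio API. The point is the
    // shape: a serial internal queue of commands, each with an optional
    // completion callback ("lossy" callers just pass null).
    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    public sealed class AudioController
    {
        private readonly BlockingCollection<(string Command, Action Completion)> queue =
            new BlockingCollection<(string, Action)>();

        public AudioController()
        {
            // Single consumer preserves ordering; commands are handled one at a time.
            Task.Run(ProcessLoop);
        }

        // Callers that care pass a completion; callers that don't pass nothing.
        public void Enqueue(string command, Action completion = null) =>
            queue.Add((command, completion));

        private void ProcessLoop()
        {
            foreach (var (command, completion) in queue.GetConsumingEnumerable())
            {
                // ... apply the command on the next audio cycle ...
                completion?.Invoke(); // notify only if someone is listening
            }
        }
    }

In a real app you would bounce the completion back onto the main thread/queue before touching any UI.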

CQRS/EventStore: How are failures to deliver events handled?

Getting into CQRS and I understand that you have commands (app layer) and events (from the domain).
In the simple case where events are used to update the read model, can those read model updates fail? If there is no "bug" then I cannot see them failing, and as I am using EventStore, I know there is a dispatched flag on each commit which allows failures to be retried.
So my question is do I have to do anything in addition to EventStore to handle failures?
Coming from a world where everything is done in one transaction, the fact that things are now done separately worries me.
Of course there may be cases where a published event will fail in the read models.
You have to make sure you can detect that and solve it.
The nice thing is that you can replay all the events again and again, so you have the chance not only to fix the error but also to test the fix by replaying every single event if you want.
I use NServiceBus as my publishing mechanism which allows me to use an error queue. Using my other logging tools together with the error queue I can easily determine what happened since I have the error log and the actual message that caused the error in the first place.
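Replaying in EventStore 3.x essentially means iterating the persisted commits and feeding every event back through your read model handlers. A minimal sketch, assuming the GetFrom(DateTime) overload I remember on store.Advanced; IRebuildReadModels is a hypothetical handler interface of your own, standing in for whatever denormalizer you use:

    // Sketch only: GetFrom(DateTime) is recalled from the EventStore 3.x API;
    // IRebuildReadModels is a hypothetical interface, not part of the library.
    using System;
    using EventStore;

    public interface IRebuildReadModels
    {
        void Handle(object @event);
    }

    public static class ReadModelRebuilder
    {
        public static void ReplayAll(IStoreEvents store, IRebuildReadModels handler)
        {
            foreach (var commit in store.Advanced.GetFrom(DateTime.MinValue))
                foreach (var eventMessage in commit.Events)
                    handler.Handle(eventMessage.Body); // re-project into the read model tables
        }
    }

Because the event store is the source of truth, you can point this at an empty set of read model tables and rebuild them from scratch after fixing a bug.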

EventStore and more than one unit of work?

In replies to a few questions, Jonathan Oliver mentions using an AsynchronousCommitDispatcher to handle multiple units of work.
I am still in the design stage of my project (and still learning CQRS and ES) and have a few questions:
Would I create an AsynchronousCommitDispatcher for each aggregate root that will be affected by a domain event being raised?
What happens if I have some sort of locking mechanism where the dispatched event cannot make a change to an aggregate root if it is locked by another user? Does AsynchronousCommitDispatcher retry if there is a lock?
What if the system goes down before a domain event is handled? Unless I persist the fact that it has not been handled, won't it be lost?
My initial understanding was that these types of dispatchers were for messaging across the wire or for updating the read model. Here we are using one to update another aggregate root. Is this correct?
TIA
JD
The commit dispatchers are all about pushing events onto the wire after everything has been completed successfully. No, you don't need more than one dispatcher for a given endpoint. The AsyncCommitScheduler (which uses a dispatcher) is multi-threaded and can dispatch more than one event at a time.
A dispatcher is not about handling an incoming message--that's what your message handlers are for. The dispatcher just sends once everything is complete.
Yes dispatchers can help update read models, but not in the way you think. Instead, the dispatchers just push the messages into your messaging framework (MSMQ, RabbitMQ, or, at a higher level, NServiceBus/MassTransit). Then once a message is received at your view models, you update your view model tables accordingly.
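As an illustration of "the dispatcher just pushes the messages into your messaging framework", here is a rough wire-up sketch. The fluent method names are recalled from EventStore 3.x and may differ slightly in your version, and IPublishMessages is a hypothetical stand-in for whatever NServiceBus/MassTransit/MSMQ facade you actually use:

    // Sketch only: fluent API names recalled from EventStore 3.x; IPublishMessages
    // is a hypothetical abstraction over your own bus, not a library type.
    using EventStore;
    using EventStore.Dispatcher;

    public interface IPublishMessages
    {
        void Publish(object message);
    }

    public static class EventStoreWireup
    {
        public static IStoreEvents Build(IPublishMessages bus)
        {
            return Wireup.Init()
                .UsingSqlPersistence("EventStore")   // connection string name
                .InitializeStorageEngine()
                .UsingJsonSerialization()
                .UsingAsynchronousDispatchScheduler()
                    .DispatchTo(new DelegateMessageDispatcher(commit =>
                    {
                        // Push each domain event in the commit onto the wire; the
                        // message handlers on the other side update the read models.
                        foreach (var eventMessage in commit.Events)
                            bus.Publish(eventMessage.Body);
                    }))
                .Build();
        }
    }

The dispatcher's only job is that inner loop; consuming the published events (including updating other aggregates via new commands, if that's your design) happens in ordinary message handlers on the receiving side.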

Windows Workflow: Persistence and Polling

I'm currently learning the WF framework, so bear with me; mostly I'm looking for where to start looking, not necessarily a direct answer. I just can't seem to figure out how to begin researching what I'd like in The Google.
Let's say I have a simple one-step workflow (much more complicated than that, but for simplicity's sake). This workflow needs to watch a certain record in the database to see when it changes. I don't have the capability to "push" via a trigger from the database when the row changes, so I need to poll for it every so often.
This workflow needs to be persisted to the database to be durable against restarts and whatnot as this is a long-running workflow. I'm trying to figure out the best way to get it to check every 3 minutes or so and also persist to the database. Do the persistence capabilities of the framework allow for that? It seems to be time-based. And since the workflow won't be reawakened by an external event, how does it reload from the database and check the same step it did previously again? Does it attempt the last unfulfilled activity automatically upon reloading?
Do "while" activities with a delay attached to it work at all, or can it be handled solely through the persistence services?
I'm not sure what you mean by "handled solely through persistence services". Persistence refers only to the storing of an idle workflow.
You could have a Delay and a Code activity in a Sequence inside a While loop. While in the Delay, the workflow will go idle and may be persisted if necessary. However, depending on how much state needs to be persisted and/or how many such workflows you would have running at any one time, a leaner approach may be necessary.
A leaner approach would be to externalise the DB watching and have a "DB watching" workflow service raise an event when the desired change has occurred. This service would be added to the workflow runtime.
To that end you need a service contract, which is defined by an interface with the [ExternalDataExchange] attribute. This interface in turn defines an event that the service will raise when the desired DB change is detected. It also defines a method that a workflow can call to specify what change this service should be looking for. The method should accept an instance GUID so that the requesting workflow instance can be found when the DB change is detected.
In the workflow you use a CallExternalMethodActivity to call this service's method. You then flow to a HandleExternalEventActivity which listens for the event. At this point the workflow will go idle and can be persisted. It will remain there until the service raises the event.
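To make that service contract concrete, here is a rough WF 3.x sketch (System.Workflow.Activities); the interface, method, and event names are invented for illustration:

    // Rough sketch for WF 3.x; IRecordWatcherService, WatchRecord and
    // RecordChanged are invented names, not part of the framework.
    using System;
    using System.Workflow.Activities;

    [ExternalDataExchange]
    public interface IRecordWatcherService
    {
        // Called from the workflow (via CallExternalMethodActivity) to say which
        // record to watch; the instance id lets the service route the event back
        // to the requesting workflow instance.
        void WatchRecord(Guid workflowInstanceId, int recordId);

        // Raised by the service when the change is detected; the workflow listens
        // with a HandleExternalEventActivity and stays idle (persistable) until then.
        event EventHandler<ExternalDataEventArgs> RecordChanged;
    }

The service implementation does the actual polling (or receives a push, if you ever get one) and raises RecordChanged with an ExternalDataEventArgs constructed from the stored instance GUID, which wakes the persisted workflow.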

Microsoft Message Queue Missing Messages

I am using C# and .NET Framework 1.1 (yes, it's old, but I inherited this stuff and can't upgrade). I place messages on a transactional queue but they do not arrive on the queue about 50% of the time. I'm running in a workgroup on Windows XP Professional with all service packs installed. I don't see any messages in the dead letter queue either.
Any ideas where to look?
If it isn't hitting the queue at all and isn't going to the dead-letter queue, it suggests the item isn't being sent to the queue. You should be able to confirm that this is the case by switching on the journal for the queue.
Assuming it isn't hitting the queue, it is probably a transaction issue. I would check that you are definitely committing the message to the queue every time. Make sure there aren't any exceptions being thrown and swallowed that causes the transaction to roll back or never be committed (essentially the same thing). Also make sure there aren't any conditional statements that mean the commit gets skipped.
I would add some logging around every location where a transaction is started, committed or rolled back, and also around any location where you create a message. You can then review your log to see the order of events and spot where things go astray.
Another option would be to remove all of the transaction code and test the code against a non-transactional queue. If the messages all appear then it is a transactional problem. If not, the issue is elsewhere.
I use MSMQ a lot and the one thing I have learned through experience is that it works really well and the weak point is me :-)
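For what it's worth, the "make sure the commit actually happens" advice above looks something like this in code (the queue path is made up; this shape works on .NET 1.1 as well):

    // Queue path is made up. The point is that Commit/Abort are explicit and a
    // swallowed exception can no longer silently skip the Commit.
    using System;
    using System.Messaging;

    public class TransactionalSender
    {
        public void Send(object body)
        {
            MessageQueue queue = new MessageQueue(@".\private$\myqueue");
            MessageQueueTransaction tx = new MessageQueueTransaction();
            tx.Begin();
            try
            {
                queue.Send(body, tx);
                tx.Commit();   // without this the message never appears on the queue
            }
            catch (Exception ex)
            {
                tx.Abort();
                Console.Error.WriteLine("MSMQ send failed: " + ex); // or your logging tool
                throw;
            }
            finally
            {
                queue.Close();
            }
        }
    }

If the same code against a non-transactional queue delivers every message, the transaction handling is almost certainly where the 50% are disappearing.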