Time-based Sagas with Event Sourcing - CQRS

Let's say I wanted to have a saga that gets created by some event, then sits and waits for a few hours, and, if nothing happens, sends off some command.
Now, if this Saga was all in-memory and I had to restart the app/server, the saga would be unloaded and never seen again, right?
Would I use Event Sourcing to bring this Saga up to speed once the system is back online?
If so, I would pretty much need a separate Event Store of "active sagas" that can be replayed at system startup to bring my sagas up to speed. So far it seems good to me, but how would I implement the timeout?
I would need some way of "faking" the timeouts at replay, taking into account that there may be several subsequent timeouts, depending on the events going into the saga.

The best way to achieve this capability is with another endpoint that is capable of returning a message back to you at a certain point in time. For example, your saga may dispatch a message to this "timeout manager" saying "wake me in 1 hour", or 1 day, or even 1 year. The message would then be returned to you at that time. Ideally, this message would have business meaning that causes an action to occur.
Perhaps the best example of this is something like customer signup where, if the customer hasn't confirmed their account within 7 days of signup, you'd notify them via email. The "timeout message" would effectively be: RemindUserToConfirmAccountMessage. When this message is received back by the saga after 7 days, the saga would determine, based upon its current state, whether that message needs to be handled and a customer email needs to be sent. But if the user has already confirmed his/her account, the message can be discarded with no action taken.
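A minimal sketch of that shape in Python (SignupSaga, the timeout_manager interface, and the email gateway are illustrative assumptions, not any particular framework's API):

    from dataclasses import dataclass
    from datetime import timedelta

    @dataclass
    class RemindUserToConfirmAccountMessage:
        user_id: str

    class SignupSaga:
        """Started by a signup event; reminds the user if they never confirm."""
        def __init__(self, user_id, timeout_manager, email_gateway):
            self.user_id = user_id
            self.confirmed = False
            self.email_gateway = email_gateway
            # Ask the (hypothetical) timeout manager to return this message in 7 days.
            timeout_manager.request_timeout(
                timedelta(days=7), RemindUserToConfirmAccountMessage(user_id))

        def on_account_confirmed(self, event):
            self.confirmed = True

        def on_timeout(self, message):
            # The timeout always arrives; current state decides whether it matters.
            if not self.confirmed:
                self.email_gateway.send_confirmation_reminder(self.user_id)
            # Otherwise the message is discarded with no action taken.

Because the pending timeout lives in the timeout manager rather than in the saga's memory, a restart doesn't lose it; on replay the saga only needs its own events to rebuild the confirmed flag.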

Related

Google Calendar API Watch Channel: Should I get a push notification for each resource?

When subscribing to a Calendar channel, https://developers.google.com/google-apps/calendar/v3/push, should I expect to get a push notification for each new event created?
In testing, if I create 21 events (each at 2-second intervals), I get about 7 notifications.
It's hard to tell from the docs whether I should be getting a notification for each event created, or whether I should use the notification as a trigger to do a sync.
What are you guys doing for your apps?
Google Calendar watches only make sense when you're also using the sync token feature. They are basically instructions to do another sync, which will bring in 1 or more event changes. The reason you got fewer than 21 messages is that Google rate limits the messages (in your case to what looks like every ~3 seconds... my experience is closer to 10s).
The callout about notifications not being 100% reliable is actually a somewhat different concern from the "only 7" callbacks issue. Until yesterday, my experience was that watches were 99.9% reliable in terms of delivering a notification within a few seconds of a change. But for the 0.1%, you'll want to have some sort of fallback force sync... it could run once an hour, or upon login, etc.
I've noticed something similar. Scroll down to the very bottom of that page you linked:
Notifications are not 100% reliable. Expect a small percentage of messages to get dropped under normal working conditions. Make sure to handle these missing messages gracefully, so that the application still syncs even if no push messages are received.
If you've called watch on the calendar to register/create a notification channel, I'm assuming they're doing some throttling/bucketing to push out notifications at a coarse-grained level. I'm testing this out myself, but I believe the original intention of asking for incremental changes by setting timeMin equal to a previously recorded sync time still holds true:
https://developers.googleblog.com/2013/07/google-calendar-api-push-notifications.html
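As a rough sketch with the Google API Python client (service comes from googleapiclient.discovery.build("calendar", "v3", ...); calendar_id and the token persistence are up to you, and handling an expired sync token, which surfaces as an HTTP 410, is omitted), each push notification would then just trigger a pass like this:

    def incremental_sync(service, calendar_id, sync_token=None):
        # Call this from the push-notification handler and from a periodic
        # fallback job, since notifications are not 100% reliable.
        changed, page_token = [], None
        while True:
            resp = service.events().list(
                calendarId=calendar_id,
                syncToken=sync_token,   # None on the very first (full) sync
                pageToken=page_token,
            ).execute()
            changed.extend(resp.get("items", []))
            page_token = resp.get("nextPageToken")
            if not page_token:
                # Persist this token; the next notification resumes from here.
                return changed, resp["nextSyncToken"]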

Is there a way to rely on the Postgres Notify/Listen mechanism?

I have implemented a Notify/Listen mechanism, so that when a special request is sent to the web server, I can use NOTIFY to tell the workers (in Python) that there's a pending request waiting to be processed.
The implementation works fine, but the problem is that if the worker server is restarting, the notification gets lost, since at that particular moment there's no listener.
I could implement a service like RabbitMQ or similar, but my needs are so simple that deploying such a heavyweight solution would be overkill.
Is there any way, a configuration variable perhaps, that can give some persistence to the notification mechanism?
Thanks in advance
I don't think there is a way to persist notification channels, but you can simply store the pending requests in a table and have the worker check for any missed work on startup.
Either a timestamp or a pending/completed flag would work, depending on what kind of work it's doing.
For consistency, you can have the NOTIFY fire from an INSERT trigger on the queue table, and have the worker always check for any remaining work (not just a specific request) when notified.
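A sketch of that worker loop with psycopg2 (the requests table, its columns, and the new_request channel are illustrative; with multiple workers you'd also want FOR UPDATE SKIP LOCKED around the row claim):

    import select
    import psycopg2
    import psycopg2.extensions

    def process(payload):
        print("processing", payload)  # stand-in for the real work

    def drain_pending(cur):
        # The table is the source of truth: this handles rows inserted while
        # the worker was down as well as rows announced by a notification.
        cur.execute("SELECT id, payload FROM requests WHERE status = 'pending' ORDER BY id")
        for req_id, payload in cur.fetchall():
            process(payload)
            cur.execute("UPDATE requests SET status = 'done' WHERE id = %s", (req_id,))

    conn = psycopg2.connect("dbname=app")
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cur = conn.cursor()

    drain_pending(cur)                  # catch up after a restart
    cur.execute("LISTEN new_request;")  # the INSERT trigger does NOTIFY new_request
    while True:
        if select.select([conn], [], [], 60) != ([], [], []):
            conn.poll()
            conn.notifies.clear()
            drain_pending(cur)          # re-check the table, not just the payload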

Replaying events - validating transitions

I'm wondering exactly what logic should be contained when applying an event to a state while replaying events using some event sourcing solution.
Specifically, I'm wondering about validation. Say I've got an entity which can be in one of the following statuses:
Logged
Active
Close
Cancelled
and the progression needs to be Logged->Active->Close or Logged->Active->Cancelled; we cannot jump from Logged to Close directly, for example.
Now, I understand the validation should be contained in commands: UpdateState would check whether the entity's current state allows a transition to the desired one, and would produce an appropriate StatusUpdated event, which would be persisted into the event store.
The question is, what should I do when replaying it back? Should I just update the status, or should I perform the same validation (so that, if the status-transition requirements change, it won't be possible to load some previously updated entities unless we add additional logic), to ensure we won't end up with entities that do not satisfy our current logic?
PS. I think I've got problems grasping this because, in my understanding, events are essentially just about 'announcing' something that has already happened (and the sender's state is already modified) so that interested parties can react accordingly. And in the case of loading/replaying events, you need to alter said state instead of actually 'announcing' anything...
You do not need to perform any validation when replaying the event stream.
Commands model things that will be done in the future: you ask the system to do something for you. It's up to the system to decide whether to do it or not, e.g. based on business rules and validation.
Events, in contrast, model things that have already happened. As in reality, you can not change the past.
So this means that when an event gets persisted, it was the consequence of a command that was considered valid at the point in time it was processed. Replaying an event stream simply means having a look at what happened in the past, and you can not change this.
Hence, you do not need to run any validation again.
Moreover, this means that if one day your business logic changes, all the things (business accidents!) that happened in the past still have happened, so they must not change. Hence you are not allowed to apply today's validation logic, as it may differ from the logic that was in force when you stored the events.
And again: You can not (and should not) change the past :-)
Example
Suppose you have a way of validating credit card numbers. A customer comes to your shop and pays; you consider his/her card valid given your current set of rules, and everything is fine.
Then, one day the credit card institution changes the way credit card numbers are calculated, and hence you have another validation algorithm.
When you now play back your past events, the payment has happened, with or without the new validation rules, and you can not change the fact that it happened! If you wanted to undo it, you would have to create a new transaction to send money back to the customer. Again, this would result in a new event, not in a changed one from the past.
So, to cut a long story short: don't validate events against anything. They are valid by definition, as they have already happened.
Any event stream that's been written to the event store should be valid to play back without introducing any logic into the event handlers. If you needed to change your transition process, you'd need to look at doing some sort of conversion, along the lines of this example.
Regarding your last point: event sourcing is a technique for persisting and restoring the state of an entity using a historical record of ordered events. It just so happens that when you're saving the entity, you can also publish those events for any interested parties to consume.
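A minimal Python sketch of that split (the entity and event shapes are illustrative): the command path validates against today's transition rules, while apply/replay only mutates state.

    class InvalidTransition(Exception):
        pass

    ALLOWED = {"Logged": {"Active"}, "Active": {"Close", "Cancelled"}}

    class Entity:
        def __init__(self):
            self.status = "Logged"

        # Command side: validate against the current rules, then emit an event.
        def update_status(self, new_status):
            if new_status not in ALLOWED.get(self.status, set()):
                raise InvalidTransition(f"{self.status} -> {new_status}")
            event = {"type": "StatusUpdated", "status": new_status}
            self.apply(event)
            return event  # the caller appends this to the event store

        # Event side: no validation, just state mutation.
        def apply(self, event):
            if event["type"] == "StatusUpdated":
                self.status = event["status"]

        @classmethod
        def replay(cls, events):
            entity = cls()
            for event in events:
                entity.apply(event)  # past events are valid by definition
            return entity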

Service Broker: Same Service in 'From' & 'To' clause in BEGIN DIALOG Statement

In my Service Broker design, I need to make asynchronous calls and need some work to be done in the background (inside SQL Server only, e.g. updating tables).
There are certain points to take into consideration, based on the requirements:
It's a one-way data push: just place a message into the SB queue and forget. No acknowledgement required.
Only one database is involved in the design. There is no need for multiple databases.
Messages will be placed on the SB queue using a stored proc (this SP will be called by an application).
Given the above points, the requirement doesn't seem to call for creating 2 different SB services, as only one service would suffice. I designed the scenario with only one SB service, assigning the same service name to the 'From' & 'To' clauses when creating a conversation dialog. The program pushes data to the SB queue and the activator fires the associated stored procedure. It works just fine.
    DECLARE @RecordConversationHandle UNIQUEIDENTIFIER;

    BEGIN DIALOG CONVERSATION @RecordConversationHandle
        FROM SERVICE [UpdateQueueStatus]
        TO SERVICE 'UpdateQueueStatus'
        WITH ENCRYPTION = OFF;
Please share any suggestions on the above proposed design. Any suggestions, issues, or anything that demands attention to improve the design for better performance & scalability would be much appreciated.
Service Broker is designed for dialogs, not monolog conversations. Don't design something new (there are tons of good reasons why they are always dialogs).
You can create a sending service (Service1), which is used for sending messages and which receives "End Dialog" messages and ends the dialog. The other (Service2) receives messages, does some processing with them, and ends the dialog when the work is done.
The main reason for two services in a dialog, and for dialog-oriented conversations, is the ability to disable a queue. The initiator's queue may be enabled while at the same time, for some purpose or reason, the target's queue is disabled. In this case, sending messages runs without the "disabled queue" error, and the messages wait in the transmission queue until the target queue becomes enabled again.
That is why a contract may contain just one message type, and why a service may be created without specifying any contract: such a service can only act as the initiator.
There is a caveat: BEGIN CONVERSATION TIMER. It puts the standard message https://schemas.microsoft.com/SQL/ServiceBroker/DialogTimer onto the local queue that the specified conversation dialog belongs to.
One use case where a dialog on the same service may be useful is a recovery process. However, in this case, there should be a specific message type that is received at a higher priority than ordinary messages. The activation procedure first receives a recovery message, tries to recover, and rolls back if unsuccessful; then it receives the ordinary messages and either commits the receipt of both message types or, if unsuccessful again, just rolls back.

Is there a replication delay in Salesforce.com via the APEX API?

I have been using SOAP to talk to Salesforce.com, using the getUpdated() call with the timestamp I retrieve from the getServerTimestamp() call.
I have watched my process check (it polls every minute): a few seconds after I save a change in the Sandbox environment, I see it poll and get no <ids> in the getUpdated call, and then on the next poll the modified id shows up.
Is there a backend replication delay in SFDC? I suspect there is, but I have had no luck identifying its magnitude. Has anyone else experienced this?
Additionally, I should mention that this is all in a Sandbox copy of the environment, which may confuse matters even further.
Update: I just tested this: I made a change, and my poll ran 48 seconds later and did not see the updated object, but 1 minute 48 seconds later it did. So that is one data point. (I know my SOAP endpoint and Web interface are both running on the same server at SFDC, tapp0.)
There's no delay in the recording of the change, but the getUpdated/getDeleted calls round the specified times down to the nearest minute, so an end time of 'now' gets rounded down and the just-made change falls outside the range.
Also, if you're doing near-real-time replication via these calls, make sure to pay attention to the in-flight transaction timestamp that is returned; otherwise you can miss changes (as a change's timestamp may precede the actual transaction commit time).
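Given both behaviours, a defensive poller floors its window to minute boundaries and overlaps consecutive windows, deduplicating ids rather than trusting exact cut-offs. A rough Python sketch (get_updated stands in for your SOAP getUpdated() wrapper and is an assumption, as is the two-minute overlap):

    from datetime import datetime, timedelta, timezone

    def floor_to_minute(ts):
        return ts.replace(second=0, microsecond=0)

    def poll_updates(get_updated, last_covered, seen_ids, overlap=timedelta(minutes=2)):
        # One polling pass; get_updated(start, end) must return a list of record ids.
        # Overlapping the previous window tolerates the server-side rounding to
        # minute boundaries and changes stamped before their transaction committed;
        # seen_ids deduplicates across the overlap.
        now = datetime.now(timezone.utc)
        start = floor_to_minute(last_covered - overlap)
        end = floor_to_minute(now)
        fresh = [i for i in get_updated(start, end) if i not in seen_ids]
        seen_ids.update(fresh)
        return fresh, end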