I am new to microservices and CQRS event handling, and I am trying to understand them with one simple task. In this task I have three external REST services to handle one transaction/request. The three services are:
Step 1: Create customer.
Step 2: Create business for the customer.
Step 3: Create address for the business.
I want to implement a saga for these events with an InMemorySagaRepository and a saga manager.
Where exactly do I have to initiate the SagaManager with the repository: in the RestController or in the CommandHandler?
Can you please help me understand the saga flow?
Thanks in advance.
Half a year later, I'm making an edit, as I've now taken a course held by Greg Young called Greg Young's CQRS, Domain Events, Event Sourcing and how to apply DDD.
I really recommend it to anyone thinking about CQRS. It helped a LOT to understand what things actually are.
Original answer
In our product we use Sagas as something that reacts to events.
This means that our sagas are really just Subscribers to a specific Event. The saga then holds some logic as to whether it should do something or not.
If the saga finds that an action should be taken, it creates a Command which it puts on the CommandBus.
This means that Sagas are just 'reactors' and use the same path into the system as a user would (skipping the API layer, etc.).
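To make the 'reactor' idea concrete, here is a minimal C# sketch. All the type names here (ICommandBus, IEventSubscriber, CustomerCreated, CreateBusiness) are illustrative assumptions of mine, not from any particular framework:

using System;

// Hypothetical message types for illustration only.
public class CustomerCreated { public Guid CustomerId; }
public class CreateBusiness { public Guid CustomerId; }

public interface ICommandBus { void Send(object command); }
public interface IEventSubscriber<TEvent> { void Handle(TEvent @event); }

// The saga subscribes to an event, decides whether to act,
// and reacts by putting a command on the command bus.
public class CustomerOnboardingSaga : IEventSubscriber<CustomerCreated>
{
    private readonly ICommandBus bus;

    public CustomerOnboardingSaga(ICommandBus bus)
    {
        this.bus = bus;
    }

    public void Handle(CustomerCreated @event)
    {
        // 'Reactor' logic: the command takes the same path into the
        // system as a command issued by a user would.
        bus.Send(new CreateBusiness { CustomerId = @event.CustomerId });
    }
}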
But what a Saga really is, and what it should do, differs from one person talking about them to the next. (Disclaimer: this is how I read these posts; they might actually all say the same thing, but in a way too fluffy for me [and my team] to see that.)
http://blog.jonathanoliver.com/cqrs-sagas-with-event-sourcing-part-i-of-ii/ for example raises the point that Sagas should not contain 'business logic' (anything that contains an 'if' is business logic, according to the post).
https://msdn.microsoft.com/en-us/library/jj591569.aspx talks about Sagas as 'Process Managers' which coordinate things between different Aggregates (remember that Aggregate1 can't talk to Aggregate2 directly, so a 'Process Manager' is required to orchestrate the communication). To put it simply: Event -> Saga -> Command -> Event -> Saga... until you reach the final destination.
https://lostechies.com/jimmybogard/2013/03/21/saga-implementation-patterns-variations/ talks about two different patterns of what a Saga is. One is the 'Publish-gatherer', which basically coordinates what should happen based on a Command. The other is the 'Reporter', which just reports the status of things to where they need to go. It doesn't coordinate things; it just reports whatever it needs to report.
http://kellabyte.com/2012/05/30/clarifying-the-saga-pattern/ has a write-up of what the Saga pattern 'is'. The claim is that Sagas should/could compensate for different workflows that break.
http://cqrs.nu/Faq/sagas has a very short description of what Sagas are and basically says 'they are state machines that let aggregates react to other aggregates'.
So, given that, what is it you actually want the Saga to do? Should it coordinate everything? Or should it just react and not care what the Aggregates do?
My edited part
So, after taking the course on CQRS and talking with Greg about this, I've come to the conclusion that there is quite a lot of confusion out there on the web.
Let's start with just the concept 'Saga'. A Saga actually has nothing to do with CQRS; it's not a concept of it. A 'Saga' is a form of two-phase commit, only it's optimised for success rather than failure ( https://en.wikipedia.org/wiki/Compensating_transaction ).
Now, what most people mean when they talk CQRS and say "Saga" is "Process Manager". And Process Managers are quite complicated, it seems (Greg has a whole other course just for Process Managers).
Basically, what they do is manage the whole process of something (as the name suggests). The Microsoft link above is pretty much what it's all about.
To answer the question:
Where exactly do I have to initiate the SagaManager with the repository: in the RestController or in the CommandHandler?
Outside of them both. A Process Manager is its own thing; it spans aggregates and repositories. Conceptually, it might be better to look at it as a user doing all the things you want the PM to do, except that you program the user's interactions and tell it what to listen for.
Disclaimer: I do not work for Greg, or anyone who stands to gain from my recommending his courses. It's just that I learned a lot from it, so I recommend it just like I would recommend reading Eric Evans' book on DDD.
In my application I've built a Saga process manager using this MSDN documentation. My Saga is implemented in the application service layer; it listens to events from the Sales, Warehouse & Billing bounded contexts and, when an event occurs, sends commands via the service bus.
A simple example; I hope it helps you analyze how to build your saga (I am registering the saga as a handler in the composition root) ;):
SAGA:
public class SalesSaga : Saga<SalesSagaData>,
ISagaStartedBy<OrderPlaced>,
IMessageHandler<StockReserved>,
IMessageHandler<PaymentAccepted>
{
private readonly ISagaPersister storage;
private readonly IBus bus;
public SalesSaga(ISagaPersister storage, IBus bus)
{
this.storage = storage;
this.bus = bus;
}
public void Handle(OrderPlaced message)
{
// Send ReserveStock command
// Save SalesSagaData
}
public void Handle(StockReserved message)
{
// Restore & Update SalesSagaData
// Send BillCustomer command
// Save SalesSagaData
}
public void Handle(PaymentAccepted message)
{
// Restore & Update SalesSagaData
// Send AcceptOrder command
// Complete Saga (Dispose SalesSagaData)
}
}
InMemorySagaPersister (as the SalesSagaData ID I am using the OrderID; it's unique across the whole process):
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

public sealed class InMemorySagaPersister : ISagaPersister
{
private static readonly Lazy<InMemorySagaPersister> instance = new Lazy<InMemorySagaPersister>(() => new InMemorySagaPersister());
private InMemorySagaPersister()
{
}
public static InMemorySagaPersister Instance
{
get
{
return instance.Value;
}
}
private readonly ConcurrentDictionary<int, ISagaData> data = new ConcurrentDictionary<int, ISagaData>();
public T GetByID<T>(int id) where T : ISagaData
{
T value;
var tData = new ConcurrentDictionary<int, T>(data.Where(c => c.Value.GetType() == typeof(T))
.Select(c => new KeyValuePair<int, T>(c.Key, (T)c.Value))
.ToArray());
tData.TryGetValue(id, out value);
return value;
}
public bool Save(ISagaData sagaData)
{
bool result;
ISagaData existingValue;
data.TryGetValue(sagaData.Id, out existingValue);
if (existingValue == null)
result = data.TryAdd(sagaData.Id, sagaData);
else
result = data.TryUpdate(sagaData.Id, sagaData, existingValue);
return result;
}
public bool Complete(ISagaData sagaData)
{
ISagaData existingValue;
return data.TryRemove(sagaData.Id, out existingValue);
}
}
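For completeness, the wiring in my composition root looks roughly like this. Bus and RegisterHandler are placeholders for whatever container or service bus registration API you use:

// Sketch of composition-root wiring; Bus and RegisterHandler are
// placeholders, not a specific service bus API.
var bus = new Bus();
var saga = new SalesSaga(InMemorySagaPersister.Instance, bus);

// Route each message type the saga handles to the matching Handle overload.
bus.RegisterHandler<OrderPlaced>(saga.Handle);
bus.RegisterHandler<StockReserved>(saga.Handle);
bus.RegisterHandler<PaymentAccepted>(saga.Handle);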
One approach might be to have some sort of starting command that starts the Saga. In this scenario it would be configured in your composition root to listen for a certain command type. Once a command has been received by your message dispatcher (or whatever messaging middleware you have), it would look for any Sagas that are registered to be started by that command. You would then create the Saga and pass it the command. It could then react to other commands and events as they happen.
In your scenario, I would suggest your Saga is a type of command handler, so it would be initiated upon receiving a command.
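A rough sketch of how that dispatcher-side lookup could work, mirroring the ISagaStartedBy<T> marker interface used in the answer above (the dispatcher itself is an assumption of mine, for illustration only):

using System;
using System.Collections.Generic;

public interface ISagaStartedBy<TCommand>
{
    void Handle(TCommand command);
}

// Illustrative only: a dispatcher that creates and starts sagas
// registered as being started by the incoming command type.
public class SagaDispatcher
{
    private readonly Dictionary<Type, Func<object>> factories =
        new Dictionary<Type, Func<object>>();

    public void RegisterStartedBy<TCommand>(Func<ISagaStartedBy<TCommand>> factory)
    {
        factories[typeof(TCommand)] = () => factory();
    }

    public void Dispatch<TCommand>(TCommand command)
    {
        Func<object> factory;
        if (factories.TryGetValue(typeof(TCommand), out factory))
        {
            // Create the saga and pass it the command that starts it.
            var saga = (ISagaStartedBy<TCommand>)factory();
            saga.Handle(command);
        }
    }
}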
OK, I have been at this for a while ...
I am trying to track when the user changes input languages from the Language Bar.
I have a Text Service DLL - modeled from MSDN and WinSDK samples - that registers fine, and I can use the interfaces ITfActiveLanguageProfileNotifySink & ITfLanguageProfileNotifySink and see those events just fine.
I have also finally realized that when I change languages, these events occur for the application/process that currently has focus.
What I need is simply to have these events call back to my own application when it has focus. I know I am missing something.
Any help here is appreciated.
Thanks.
I did some double-checking, and you should be able to create a thread manager object without implementing ITextStoreACP so long as you don't call ITfThreadMgr::Activate.
So, the code should look like:
HRESULT hr = CoInitialize(NULL);
if (SUCCEEDED(hr))
{
    // Create the TSF thread manager (no need to call Activate,
    // so ITextStoreACP does not have to be implemented).
    ITfThreadMgr* pThreadMgr(NULL);
    hr = CoCreateInstance(CLSID_TF_ThreadMgr, NULL, CLSCTX_INPROC_SERVER,
                          IID_ITfThreadMgr, (LPVOID*)&pThreadMgr);
    if (SUCCEEDED(hr))
    {
        ITfSource* pSource;
        hr = pThreadMgr->QueryInterface(IID_ITfSource, (LPVOID*)&pSource);
        if (SUCCEEDED(hr))
        {
            // Register this object as the sink; keep m_dwCookie so the sink
            // can be removed later with ITfSource::UnadviseSink.
            hr = pSource->AdviseSink(IID_ITfActiveLanguageProfileNotifySink,
                                     (ITfActiveLanguageProfileNotifySink*)this,
                                     &m_dwCookie);
            pSource->Release();
        }
    }
}
Alternatively, you can use ITfLanguageProfileNotifySink - this interface is driven from the ITfInputProcessorProfiles object instead of ITfThreadMgr. There's a sample of how to set it up on the MSDN page for ITfLanguageProfileNotifySink.
For both objects, you need to keep the source object (ITfThreadMgr or ITfInputProcessorProfiles) as well as the sink object (what you implement) alive until your application exits.
Before your application exits, you need to remove the sink from the source object using ITfSource::UnadviseSink, and then release the source object (using Release). (You don't need to keep the ITfSource interface alive for the life of your application, though.)
I have a Windows service running workflows. The workflows are XAMLs loaded from a database (users can define their own workflows using a rehosted designer). It is configured with one instance of the SqlWorkflowInstanceStore, to persist workflows when they become idle. (It's basically derived from the example code in \ControllingWorkflowApplications from Microsoft's WCF/WF samples.)
But sometimes I get an error like the one below:
System.Runtime.DurableInstancing.InstanceOwnerException: The execution of an InstancePersistenceCommand was interrupted because the instance owner registration for owner ID 'a426269a-be53-44e1-8580-4d0c396842e8' has become invalid. This error indicates that the in-memory copy of all instances locked by this owner have become stale and should be discarded, along with the InstanceHandles. Typically, this error is best handled by restarting the host.
I've been trying to find the cause, but it is hard to reproduce in development; on production servers, however, I get it once in a while. One hint I found: when I look at the LockOwnersTable, the LockExpiration is set to 01/01/2000 0:0:0 and it's not getting updated anymore, while under normal circumstances it should be updated every x seconds according to the Host Lock Renewal period...
So, why would SqlWorkflowInstanceStore stop renewing this LockExpiration, and how can I detect the cause of it?
This happens because there is a procedure running in the background trying to extend the lock of the instance store every 30 seconds, and it seems that once it fails to connect to the SQL server, it marks this instance store as invalid.
You can see the same behaviour if you delete the instance store record from the [LockOwnersTable] table.
The proposed solution is: when this exception fires, you need to free the old instance store and initialize a new one:
public class WorkflowInstanceStore : IWorkflowInstanceStore, IDisposable
{
public WorkflowInstanceStore(string connectionString)
{
_instanceStore = new SqlWorkflowInstanceStore(connectionString);
InstanceHandle handle = _instanceStore.CreateInstanceHandle();
InstanceView view = _instanceStore.Execute(handle,
new CreateWorkflowOwnerCommand(), TimeSpan.FromSeconds(30));
handle.Free();
_instanceStore.DefaultInstanceOwner = view.InstanceOwner;
}
public InstanceStore Store
{
get { return _instanceStore; }
}
public void Dispose()
{
if (null != _instanceStore)
{
var deleteOwner = new DeleteWorkflowOwnerCommand();
InstanceHandle handle = _instanceStore.CreateInstanceHandle();
_instanceStore.Execute(handle, deleteOwner, TimeSpan.FromSeconds(10));
handle.Free();
}
}
private InstanceStore _instanceStore;
}
You can find the best practices for creating an instance store handle at this link:
Workflow Instance Store Best Practices
This is an old thread but I just stumbled on the same issue.
Damir's Corner suggests checking whether the instance handle is still valid before calling the instance store. I hereby quote the whole post:
Certain aspects of Workflow Foundation are still poorly documented; the persistence framework being one of them. The following snippet is typically used for setting up the instance store:
var instanceStore = new SqlWorkflowInstanceStore(connectionString);
instanceStore.HostLockRenewalPeriod = TimeSpan.FromSeconds(30);
var instanceHandle = instanceStore.CreateInstanceHandle();
var view = instanceStore.Execute(instanceHandle,
new CreateWorkflowOwnerCommand(), TimeSpan.FromSeconds(10));
instanceStore.DefaultInstanceOwner = view.InstanceOwner;
It's difficult to find a detailed explanation of what all of this does; and to be honest, usually it's not necessary. At least not until you start encountering problems, such as InstanceOwnerException:

The execution of an InstancePersistenceCommand was interrupted because the instance owner registration for owner ID '9938cd6d-a9cb-49ad-a492-7c087dcc93af' has become invalid. This error indicates that the in-memory copy of all instances locked by this owner have become stale and should be discarded, along with the InstanceHandles. Typically, this error is best handled by restarting the host.
The error is closely related to the HostLockRenewalPeriod property, which defines how long an obtained instance handle is valid without being renewed. If you try monitoring the database while an instance store with a valid instance handle is instantiated, you will notice [System.Activities.DurableInstancing].[ExtendLock] being called periodically. This stored procedure is responsible for renewing the handle. If for some reason it fails to be called within the specified HostLockRenewalPeriod, the above-mentioned exception will be thrown when attempting to persist a workflow. A typical reason for this would be a temporarily inaccessible database due to maintenance or networking problems. It's not something that happens often, but it's bound to happen if you have a long-living instance store, e.g. in a constantly running workflow host, such as a Windows service.

Fortunately, it's not all that difficult to fix the problem once you know the cause of it. Before using the instance store, you should always check if the handle is still valid, and renew it if it's not:
if (!instanceHandle.IsValid)
{
instanceHandle = instanceStore.CreateInstanceHandle();
var view = instanceStore.Execute(instanceHandle,
new CreateWorkflowOwnerCommand(), TimeSpan.FromSeconds(10));
instanceStore.DefaultInstanceOwner = view.InstanceOwner;
}
It's definitely less invasive than the restart of the host suggested by the error message.
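Putting the quoted advice together, here is a small sketch of a wrapper that revalidates the handle before the store is handed out. The class is my own illustration, not from the quoted post:

using System;
using System.Activities.DurableInstancing;
using System.Runtime.DurableInstancing;

// Sketch: renew the owner handle whenever it has become invalid,
// e.g. after a temporary database outage.
public class RenewingInstanceStore
{
    private readonly SqlWorkflowInstanceStore store;
    private InstanceHandle handle;

    public RenewingInstanceStore(string connectionString)
    {
        store = new SqlWorkflowInstanceStore(connectionString);
        store.HostLockRenewalPeriod = TimeSpan.FromSeconds(30);
        RenewOwner();
    }

    public InstanceStore Store
    {
        get
        {
            if (!handle.IsValid)
                RenewOwner(); // the handle went stale; register a new owner
            return store;
        }
    }

    private void RenewOwner()
    {
        handle = store.CreateInstanceHandle();
        var view = store.Execute(handle,
            new CreateWorkflowOwnerCommand(), TimeSpan.FromSeconds(10));
        store.DefaultInstanceOwner = view.InstanceOwner;
    }
}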
You have to be sure the owner registration has not expired.
Here is how I usually handle this issue:
public SqlWorkflowInstanceStore SetupSqlpersistenceStore()
{
SqlWorkflowInstanceStore sqlWFInstanceStore = new SqlWorkflowInstanceStore(ConfigurationManager.ConnectionStrings["DB_WWFConnectionString"].ConnectionString);
sqlWFInstanceStore.InstanceCompletionAction = InstanceCompletionAction.DeleteAll;
InstanceHandle handle = sqlWFInstanceStore.CreateInstanceHandle();
InstanceView view = sqlWFInstanceStore.Execute(handle, new CreateWorkflowOwnerCommand(), TimeSpan.FromSeconds(30));
handle.Free();
sqlWFInstanceStore.DefaultInstanceOwner = view.InstanceOwner;
return sqlWFInstanceStore;
}
And here is how you can use this method:
wfApp.InstanceStore = SetupSqlpersistenceStore();
Hope this helps.
Regarding the BoilerplateJS example, how should we adjust those modules so that they intercommunicate, such that once the user makes a change in one module, the other related modules are updated with that change?
For example, if there is a module to retrieve inputs from the user, such as name and sales, and another module to display the retrieved data in a table or a graph, can you explain with an example how that interconnection occurs, considering event handling?
Thanks!!
In BoilerplateJS, each of your modules will have its own moduleContext object. This module context object contains two methods, 'listen' and 'notify'. Have a look at the context class at '/src/core/context.js' for more details.
The component that needs to 'listen' to an event should register for it by specifying the name of the event and a callback handler. The component that raises the event should use the 'notify' method to let others know something interesting happened (optionally passing a parameter).
Get the latest BoilerplateJS code from GitHub. I just committed changes making clickCounter a composite component, where the 'clickme' component raises an event and the 'lottery' component listens to the event and responds.
Code for notifying the Event:
moduleContext.notify('LOTTERY_ACTIVITY', this.numberOfClicks());
Code for listening to the Event:
moduleContext.listen("LOTTERY_ACTIVITY", function(activityNumber) {
var randomNum = Math.floor(Math.random() * 3) + 1;
self.hasWon(randomNum === activityNumber);
});
I would look at using a publish-subscribe library, such as Amplify. Using this technique, it is easy for one module to act as a publisher of events and for others to register as subscribers, listening and responding to these events in a highly decoupled manner.
As you are already using Knockout, you might be interested in trying Ryan Niemeyer's knockout-postbox plugin first. More background on this library is available here, including a demo fiddle. You can always switch to Amplify later if you need to.
We have an HTTP end-point that takes a long time to run and can also be called concurrently by users. As part of this request, we update the model inside a synchronized block so that other (possibly concurrent) requests pick up that change.
E.g.
MyModel m = null;
synchronized (lockObject) {
    m = MyModel.findById(id);
    if (m.status == PENDING) {
        m.status = ACTIVE;
        m.save(); // only reached when we set m.status = ACTIVE
    } else {
        // render a response back to the user that the operation is not allowed
        return;
    }
}
// Long-running operation continues here. It can involve further changes to instance "m"
The reason for the synchronized block is to ensure that even concurrent requests pick up the latest status. However, the underlying JPA does not commit my changes (m.save()) until the request is complete. Since this is a long-running request, I do not want to wait until the request is complete, and I still want to ensure that other callers are notified of the change in status. I tried calling "m.em().flush(); JPA.em().getTransaction().commit();" after m.save(), but that makes the transaction unavailable for the subsequent actions in the same request. Can I just call "JPA.em().getTransaction().begin();" and let Play handle the transaction from then on? If not, what is the best way to handle this use case?
UPDATE:
Based on the response, I modified my code as follows:
MyModel m = null;
synchronized (lockObject) {
    m = MyModel.findById(id);
    if (m.status == PENDING) {
        m.status = ACTIVE;
        m.save(); // only reached when we set m.status = ACTIVE
    } else {
        // render a response back to the user that the operation is not allowed
        return;
    }
}
new MyModelUpdateJob(m.id).now();
And in my job, I have the following code:
public void doJob() {
    MyModel m = MyModel.findById(id);
    System.out.println(m.status); // This still prints the old status, as if m.save() had no effect...
}
What am I missing?
Put your update code in a job and call
new MyModelUpdateJob(id).now().get();
Thus the update will be done in another transaction that is committed at the end of the job.
Ouch, as soon as you add more Play servers, you will be in trouble. You may want to use optimistic locking in your example, or (and I advise against it) pessimistic locking... ick.
HOWEVER, looking at your code, maybe read the article Building on Quicksand. I am not sure you need a synchronized block in that case at all... try to go after being idempotent.
In your case, if user 1 and user 2 both call that method and it is pending, then it goes to active (idempotent). Whichever of user 1 or user 2 wins, the result would be the same as if you had the synchronized block anyway.
I am sure, however, that you have a more complex scenario not shown here, BUT READ that article Building on Quicksand, as it really changes the traditional way of thinking and is how Google and Amazon and other very large-scale systems operate.
Another option for distributed transactions across Play servers is ZooKeeper, which the big NoSQL guys use, BUT only as a last resort ;) ;)
later,
Dean
I'm building a WP7 app, and I'm now at the point of handling the tombstoning part of it.
What I am doing is saving the viewmodel of the page in the Page.State bag when the NavigatedFrom event occurs, and reading it back in NavigatedTo (with some checks to detect whether I should read from the bag or from the real live data of the application).
At first, my VM was just a wrapper around the domain model:
public string Nome
{
get
{
return _dm.Nome;
}
set
{
if (value != _dm.Nome)
{
_dm.Nome = value;
NotifyPropertyChanged("Nome");
}
}
}
But this didn't always work, because when saving to the bag and then reading back, the domain model was not deserialized correctly.
Then I changed my VM implementation to be just a copy of the properties I needed from the DM:
public string Nome
{
get
{
return _nome;
}
set
{
if (value != _nome)
{
_nome = value;
NotifyPropertyChanged("Nome");
}
}
}
and with the constructor that does:
_nome = dm.Nome;
And now it works, but I am not sure if this is the right approach.
Thx
Simone
Any transient state information should be persisted in the Application.Deactivated event and then restored in the Application.Activated event for tombstoning support.
If you need to store anything between application sessions, then you could use the Application.Closing event; but depending on what you need to store, you could just store it whenever it changes. Again, depending on what you need to store, you can either restore it in the Application.Launching event or just read it when you need it.
The approach you take depends entirely on your application's requirements, and the method and location you use to store your data are also up to you (binary serialization to isolated storage is generally accepted as being the fastest).
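As a minimal illustration of the Activated/Deactivated approach (the "ViewModel" key, the CurrentViewModel property and the MyViewModel type are placeholders of mine; whatever you store this way must be serializable):

using Microsoft.Phone.Shell;

// In App.xaml.cs: save transient state when the app is deactivated
// (tombstoned) and restore it when the app is activated again.
private void Application_Deactivated(object sender, DeactivatedEventArgs e)
{
    PhoneApplicationService.Current.State["ViewModel"] = CurrentViewModel;
}

private void Application_Activated(object sender, ActivatedEventArgs e)
{
    object saved;
    if (PhoneApplicationService.Current.State.TryGetValue("ViewModel", out saved))
    {
        CurrentViewModel = (MyViewModel)saved;
    }
}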
I don't know the details of your application, but saving and restoring data in NavigatedFrom/NavigatedTo is unlikely to be the right place to do it if you are looking to implement support for tombstoning.
I'd recommend against making a copy of part of the model, as when tombstoning you'd (probably) need to persist both the full (app-level) model and the page-level copy.
Again the most appropriate solution will depend on the complexity of your application and the models it uses.
Application.Activated/Deactivated is a good place to handle tombstoning.
See why OnNavigatedTo/From may not be appropriate for your needs here.
How to correctly handle application deactivation and reactivation - Peter Torr's Blog
Execution Model Overview for Windows Phone