In an MVVM LOB app, say I have a ViewModel that allows the user to launch a long-running business process, let's pretend it's the workflow of creating an order.
When the CreateOrder command executes on the ViewModel, how does the UnitOfWork object (DbContext in EF) get created and managed throughout its lifetime? Is the ViewModel responsible for managing its lifetime, passing it off to some wizard dialog service, and eventually committing it to the database? Seems like a violation of SRP. But if the ViewModel doesn't manage this process, who/what does? Some kind of OrderManagerService?
Also, where does IoC/Dependency Injection fit into this picture? For unit testing obviously I don't want the ViewModel to instantiate a new UnitOfWork that's coupled to the database. But if this business process only launches if/when a user requests it, obviously a UnitOfWork can't be injected into the ViewModel upon app startup.
Thanks
I think you nailed it with the OrderManager service. You really don't want the accumulation of this change occurring in the view layer. Create a PendingOrder object to accumulate the changes for your UnitOfWork, and put it in a memory store or an external data store (probably memory).
This keeps your view layer clean, and makes testing easier.
That largely dissolves your IoC/testing issue, too: unit test your PendingOrder code independently of your UI, then mock/stub it for your UI testing.
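To make that concrete, here is a minimal sketch of the shape being described. All the names (IUnitOfWork, PendingOrder, IOrderManager) are illustrative rather than from any particular library; the key point is that the ViewModel receives a manager backed by a factory, not a live UnitOfWork, so nothing database-coupled exists until the user actually starts the process:

```csharp
using System;
using System.Collections.Generic;

// Illustrative names only - adapt to your own domain and container.
public interface IUnitOfWork : IDisposable
{
    void Commit();
}

// Accumulates the in-flight order outside the view layer.
public class PendingOrder
{
    public List<string> Lines { get; private set; }

    public PendingOrder()
    {
        Lines = new List<string>();
    }
}

public interface IOrderManager
{
    PendingOrder BeginOrder();
    void CommitOrder(PendingOrder order);
}

// Owns the UnitOfWork's lifetime; the ViewModel only ever talks to this service.
public class OrderManager : IOrderManager
{
    // A factory is injected at startup instead of a live UnitOfWork,
    // so no DbContext exists until the business process actually runs.
    private readonly Func<IUnitOfWork> _unitOfWorkFactory;

    public OrderManager(Func<IUnitOfWork> unitOfWorkFactory)
    {
        _unitOfWorkFactory = unitOfWorkFactory;
    }

    public PendingOrder BeginOrder()
    {
        return new PendingOrder();
    }

    public void CommitOrder(PendingOrder order)
    {
        // The UnitOfWork is created only now, when the user completes the process.
        using (var uow = _unitOfWorkFactory())
        {
            // ...map the PendingOrder onto entities tracked by the unit of work...
            uow.Commit();
        }
    }
}
```

In a test, the ViewModel gets a mocked IOrderManager; in production, the container hands OrderManager a factory that news up your EF-backed unit of work. This also answers the injection-at-startup concern: the factory is cheap to inject at startup even though the contexts it creates are not.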
I'm building a simple REST API for generating some objects that must be created and sent periodically out of the API. The nature of the objects doesn't matter, nor does the framework supporting the REST interface (Spray, Play Framework, whatever else). My question is: what would be a good scalable actor design for this system using Akka? Suppose the service crashes, or is migrated, or stops for whatever other reason. In order to recover the description of the tasks (what objects must be sent and when), is akka-persistence a good way to go here, or is it better to persist such things in a traditional DB?
Thanks.
NOTE: I would also like to know: supposing there's some actor which is not stateful itself but creates many child actors, is it good practice to use akka-persistence to replay the messages which cause this actor to create its children again (the children also being stateless)?
In a traditional DB you would most likely end up modeling this with timestamps and events, and with event sourcing this is already the native model.
Akka-persistence would be a natural fit for this scenario, since it will persist every event describing what objects must be created and sent out periodically. Snapshot support will also speed up recovery when the number of events gets very large.
In the case of crashes or migration, the recovery process will handle this just fine.
Regarding your note: if the actor is truly stateless then there is no need to persist the events that cause the children to be created, since the children can be recreated on demand. If the existence of the children does need to be recovered, then the actor is not stateless, and in that case it may indeed make sense to persist those events.
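For a concrete feel of the event-sourced scheduler shape, here is a minimal sketch. The question is JVM-flavoured (Spray/Play), but the persistent-actor pattern is the same across Akka flavours; this sketch happens to use Akka.NET's Akka.Persistence, and every type name in it (SchedulerActor, ScheduleTask, TaskScheduled) is invented for illustration:

```csharp
using System.Collections.Generic;
using Akka.Persistence;

// Illustrative command/event types - not from any real schema.
public sealed class ScheduleTask { public string ObjectSpec; public int IntervalSeconds; }
public sealed class TaskScheduled { public string ObjectSpec; public int IntervalSeconds; }

public class SchedulerActor : ReceivePersistentActor
{
    public override string PersistenceId { get { return "object-scheduler"; } }

    private readonly List<TaskScheduled> _tasks = new List<TaskScheduled>();

    public SchedulerActor()
    {
        // Commands are turned into journaled events before mutating state.
        Command<ScheduleTask>(cmd =>
            Persist(new TaskScheduled { ObjectSpec = cmd.ObjectSpec, IntervalSeconds = cmd.IntervalSeconds },
                    Apply));

        // After a crash or migration the journal replays every TaskScheduled,
        // rebuilding the schedule exactly as it was.
        Recover<TaskScheduled>(Apply);

        // Snapshots short-circuit long replays once the event count grows.
        Recover<SnapshotOffer>(offer => _tasks.AddRange((List<TaskScheduled>)offer.Snapshot));
    }

    private void Apply(TaskScheduled evt)
    {
        _tasks.Add(evt);
        // ...(re)start the timer or child actor that sends this object periodically...
        if (_tasks.Count % 100 == 0)
            SaveSnapshot(new List<TaskScheduled>(_tasks));
    }
}
```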
In an MvvmCross solution I have a singleton Service class which gets items from a web service and updates a public ObservableCollection. It does this every five seconds and items may be added or removed or their properties changed.
I also have a ViewModel with a public property that is set to the Service's ObservableCollection. The View is bound to the ObservableCollection so that when items are added, removed or changed the view should update to show this.
However, as expected, I am getting a threading exception because the ObservableCollection is being updated by a thread other than the Main/UI one and the binding therefore cannot update the UI.
Within the Service I do not have the InvokeOnMainThread call readily available so there is no obvious cross platform way to get back on to the main thread when updating the ObservableCollection. Also, doing this just seems wrong - a Service shouldn't be concerning itself with UI matters (whereas a ViewModel can).
Also I am a bit nervous about exposing events from a Service in case this causes ViewModels not to be garbage collected. I note that in #slodge's N+1 series http://mvvmcross.wordpress.com/ he is using a messaging service, presumably to avoid just this.
So a possible solution may be to publish a message with the latest list of items, and for the ViewModel to subscribe to the message and update its own ObservableCollection on the UI thread by comparing the message contents to it. But this seems a little clunky.
Any suggestions on the best way to implement this would be appreciated - thanks.
The original requirement that INotifyCollectionChanged notifications must be raised on the UI thread really comes from the synchronous way that the Windows controls update based upon the Added/Removed/Moved/Replaced/Reset notifications.
This synchronous update is, of course, entirely sensible - it would be very hard to update the UI display while another thread is actively changing it.
There are 'new' changes in .NET 4.5 which may mean the future is nicer... but overall these look quite complicated to me - see https://stackoverflow.com/a/14602121/373321
The ways I know of to handle this are essentially the same as those outlined in your post:
A. keep the ObservableCollection in the Service/Model layer and marshal all events there onto the UI thread - this is possible using any class which inherits from MvxMainThreadDispatchingObject - or can be done by calling MvxMainThreadDispatcher.Instance.RequestMainThreadAction(action)
Whilst it's unfortunate that this means your Service/Model does have some threading knowledge, this approach can work well for the overall app experience (there is a short sketch of this at the end of this answer).
B. make a duplicate copy of the collection in the ViewModel - updating it by some weak reference type mechanism
e.g. by sending it Messages which tell it what has been Added, Removed, Replaced or Moved (or completely Reset) - note that for this to work, it's important that the Messages arrive in the correct order!
or e.g. allowing snapshots to be sent across from the Service/Model layer
Which of these to choose depends on:
the frequency, type and size of the collection changes - e.g. whether you are only getting occasional single line updates, whether you are getting frequent large bursts of changes, or whether you are mainly seeing complex groups of changes (which essentially are Resets as far as the UI is concerned)
the animation level required in the UI - e.g. should added/deleted items animate in/out? If no animation is required then it can sometimes be easier just to replace the entire list with a new snapshot rather than to work out the incremental changes.
the size of the collection itself - obviously duplicating a large collection can cause out-of-memory issues
the persistence required on the collection - if persistence is required, then ObservableCollection itself may not be appropriate and you may need to use a custom INotifyCollectionChanged implementation (like the MyCustomList samples)
I personally generally choose the (A) approach in apps - but it does depend on the situation and on the characteristics of the collection and its changes.
Note that while this is most definitely an mvvm issue, the underlying problem is one independent of databinding - how do you update an on-screen display of a list while the list itself is changing on a background thread?
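Here is a rough sketch of option (A), using the MvxMainThreadDispatcher API named above (MvvmCross v3-era; ItemsService and Item are placeholder names):

```csharp
using System.Collections.Generic;
using System.Collections.ObjectModel;
using Cirrious.CrossCore.Core; // MvvmCross v3-era namespace; adjust to your version

public class Item
{
    public string Name { get; set; }
}

// The Service still owns the ObservableCollection, but every mutation is
// marshalled onto the UI thread before the collection is touched.
public class ItemsService
{
    public ObservableCollection<Item> Items { get; private set; }

    public ItemsService()
    {
        Items = new ObservableCollection<Item>();
    }

    // Called from the background polling thread every five seconds.
    private void OnServerUpdate(List<Item> added, List<Item> removed)
    {
        MvxMainThreadDispatcher.Instance.RequestMainThreadAction(() =>
        {
            foreach (var item in removed)
                Items.Remove(item);
            foreach (var item in added)
                Items.Add(item);
        });
    }
}
```

The ViewModel can then expose `Items` directly, since every change notification is now guaranteed to fire on the UI thread.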
After using EFProfiler (absolutely fantastic tool BTW!) to profile a few of our Entity Framework applications, it seems that, in most cases, the ObjectContexts are never closed.
For example, after running it locally, EF Profiler told me that 326 ObjectContexts were opened, yet only 1 was closed.
So my question is, should I worry about this? Or is it self-contained within Entity Framework?
If you're not using an IoC container, is there any way you can close the ObjectContexts manually after each request, for example in the Application_EndRequest of your Global.asax, thereby simulating a "per request" lifestyle for your contexts?
ObjectContexts will be disposed eventually if your application is not holding onto them explicitly, but in general, you should try to dispose them deterministically as soon as possible after you are done with them. In most cases, they will hold onto database connections until they are disposed. In my current web application, we use an IoC container (Autofac) to ensure that any ObjectContext opened during a request is disposed at the end of the request, and does not have to wait for garbage collection.
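Without a container, a hand-rolled version of that "per request" lifestyle could look roughly like the sketch below; OrderEntities is a placeholder for whatever your generated ObjectContext type is:

```csharp
using System;
using System.Web;

// Global.asax.cs - a hand-rolled "context per request" sketch (no IoC container).
public class Global : HttpApplication
{
    private const string ContextKey = "__objectContext";

    // Lazily creates one context per request and stashes it in HttpContext.Items.
    public static OrderEntities CurrentContext
    {
        get
        {
            var ctx = (OrderEntities)HttpContext.Current.Items[ContextKey];
            if (ctx == null)
            {
                ctx = new OrderEntities(); // placeholder for your ObjectContext type
                HttpContext.Current.Items[ContextKey] = ctx;
            }
            return ctx;
        }
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        var ctx = (OrderEntities)HttpContext.Current.Items[ContextKey];
        if (ctx != null)
        {
            ctx.Dispose(); // releases the underlying database connection deterministically
            HttpContext.Current.Items.Remove(ContextKey);
        }
    }
}
```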
I suggest you do worry about it and try to fix the issue, as ObjectContexts are pretty "bulky". If you have too many of them, your application may eventually end up using more memory than it needs to, and IIS will restart your application more frequently.
A single-user desktop application is unique in that you know the in-memory data is current. So rather than going through the pain of creating a new context for intermittent database operations then reattaching objects, would using just one context for the entire application session carry any risks (other than a multi-user requirement arising later)?
The context is 'transaction' based (i.e. for the commit), therefore I would not make it a singleton.
I like this article: Singleton datacontext where it states that:
A DataContext is lightweight and is not expensive to create
and
You are probably saving a few 10s of milliseconds. The word micro optimisation springs to mind - in which case you probably shouldn't be using Entity Framework.
The only risks of using a single DataContext, AFAIK, are growing the change log too large and exhausting main memory, or losing lots of the changes the user made in case of a crash. I'm not sure the transaction behaviour is configurable.
But you'll also have to manage thread synchronization (as with any shared data in a multi-threaded application), so maybe you're better off using a DataContext per data operation - e.g. opening a Form to edit users in the app should open its own DataContext and commit it on save or close.
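A sketch of that per-operation pattern, assuming a WinForms edit screen (UsersContext, usersBindingSource and the form itself are placeholder/designer-generated names, not from any library):

```csharp
using System;
using System.Linq;
using System.Windows.Forms;

// One short-lived context per edit screen: created with the form,
// committed on save, disposed when the form closes.
public partial class EditUsersForm : Form
{
    private readonly UsersContext _context = new UsersContext(); // placeholder context type

    private void EditUsersForm_Load(object sender, EventArgs e)
    {
        usersBindingSource.DataSource = _context.Users.ToList();
    }

    private void saveButton_Click(object sender, EventArgs e)
    {
        _context.SaveChanges(); // commits only this form's unit of work
    }

    protected override void OnFormClosed(FormClosedEventArgs e)
    {
        _context.Dispose(); // change log and connection go away with the form
        base.OnFormClosed(e);
    }
}
```

Scoping the context to the form keeps the change log small and sidesteps the cross-thread sharing problem entirely.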
I'm currently learning the WF framework, so bear with me; mostly I'm looking for where to start looking, not necessarily a direct answer. I just can't seem to figure out how to begin researching what I'd like in The Google.
Let's say I have a simple one-step workflow (much more complicated than that, but for simplicity's sake). This workflow needs to watch a certain record in the database to see when it changes. I don't have the capability to "push" via a trigger from the database when the row changes, so I need to poll for it every so often.
This workflow needs to be persisted to the database to be durable against restarts and whatnot as this is a long-running workflow. I'm trying to figure out the best way to get it to check every 3 minutes or so and also persist to the database. Do the persistence capabilities of the framework allow for that? It seems to be time-based. And since the workflow won't be reawakened by an external event, how does it reload from the database and check the same step it did previously again? Does it attempt the last unfulfilled activity automatically upon reloading?
Do "while" activities with a delay attached to it work at all, or can it be handled solely through the persistence services?
I'm not sure what you mean by "handled solely through persistence services"? Persistence refers only to the storing of an idle workflow.
You could have a Delay and a Code activity in a Sequence in a While loop. When in the Delay the workflow will go idle and may be persisted if necessary. However depending on how much state is needed when persisting the workflow and/or how many such workflows you would have running at any one time may mean that a leaner approach is necessary.
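In code, that While + Delay + Code arrangement might be wired up roughly like this (WF 3.x APIs; PollingWorkflow and CheckRecord are illustrative names, and the database check itself is left as a stub):

```csharp
using System;
using System.Workflow.Activities;

// Illustrative WF 3.x sketch: poll every 3 minutes until a change is seen.
public class PollingWorkflow : SequentialWorkflowActivity
{
    private bool _changeDetected;

    public PollingWorkflow()
    {
        var delay = new DelayActivity
        {
            Name = "Wait",
            TimeoutDuration = TimeSpan.FromMinutes(3) // workflow goes idle here and can be persisted
        };

        var poll = new CodeActivity { Name = "CheckDatabase" };
        poll.ExecuteCode += OnCheckDatabase;

        var body = new SequenceActivity { Name = "PollBody" };
        body.Activities.Add(delay);
        body.Activities.Add(poll);

        var condition = new CodeCondition();
        condition.Condition += OnEvaluateCondition;

        var loop = new WhileActivity { Name = "PollLoop", Condition = condition };
        loop.Activities.Add(body);

        Activities.Add(loop);
    }

    private void OnCheckDatabase(object sender, EventArgs e)
    {
        _changeDetected = CheckRecord();
    }

    private void OnEvaluateCondition(object sender, ConditionalEventArgs e)
    {
        e.Result = !_changeDetected; // keep looping until the change is seen
    }

    private bool CheckRecord()
    {
        // ...query the watched row and return true once it has changed...
        return false;
    }
}
```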
A leaner approach would be to externalise the DB watching and have some "DB watching" workflow service raise an event when the desired change has occurred. This service would be added to the workflow runtime.
To that end you need a service contract, which is defined by an interface with the [ExternalDataExchange] attribute. This interface in turn defines an event that the service will raise when the desired DB change is detected. It also defines a method that a workflow can call to specify what change this service should be looking for. The method should accept an instance GUID so that the requesting instance can be found when the DB change is detected.
In the workflow you use a CallExternalMethodActivity to call this service's method. You then flow to a HandleExternalEventActivity which listens for the event. At this point the workflow will go idle and can be persisted. It will remain there until the service raises the event.
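A sketch of what that contract could look like (IDbWatcherService and the event args type are invented names):

```csharp
using System;
using System.Workflow.Activities;

// Illustrative service contract for the external "DB watching" service.
[ExternalDataExchange]
public interface IDbWatcherService
{
    // Called by the workflow (via CallExternalMethodActivity) to say which
    // record it cares about; the instance id lets the event be routed back.
    void WatchRecord(Guid workflowInstanceId, int recordId);

    // Raised by the service (received via HandleExternalEventActivity)
    // when the change is detected.
    event EventHandler<RecordChangedEventArgs> RecordChanged;
}

[Serializable]
public class RecordChangedEventArgs : ExternalDataEventArgs
{
    public RecordChangedEventArgs(Guid instanceId, int recordId)
        : base(instanceId) // the base class carries the target workflow instance id
    {
        RecordId = recordId;
    }

    public int RecordId { get; private set; }
}
```

The service implementation itself would do the 3-minute polling on its own timer and, on detecting a change, raise RecordChanged with the requesting instance's GUID so the runtime can reload that workflow from the persistence store and resume it.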