Where does common/global data go in MVVM?

I'm trying to get my head around MVVM and have come up with a test application that I think will give me a good foundation. Suppose my application has a service that goes out every minute and gets the latest flight arrival and departure information at an airport. Now suppose I have 3 different views: InboundView, OutboundView and GateView. The Inbound and Outbound views would simply display the various flight details for inbound and outbound flights, as I'm sure we've all seen on the flight boards in an airport. The GateView would display similar flight information, but might be sorted by gate # instead of flight #.
So the model for the Flight object would contain the flight data details as well as an instance of a Gate object that would be updated appropriately once a flight arrives.
So all 3 views are using the same flight data service. I know I can pass an instance of that service to each VM, but then I'd need to hook up the appropriate INPC events in each view model, and that seems less than ideal as the number of views/VMs increases.
Right now, each VM uses a ListCollectionView wrapped around the passed-in collection of flight data, and I just sort/filter based on inbound/outbound, etc. But I was hoping to incorporate the service results into a sort of parent view model that would pass a reference to itself along to the sub-views. Then I could handle all the INPC and related events at the parent view model level, and changes to a particular flight (such as its gate) would automatically trickle down to each of the sub-views, instead of my having to handle that separately in each of the VMs.
I've looked into the Messenger framework for MVVM Light but it still seems like each of the sub-VMs would have to register for the message and respond to it individually.
Does that make sense? Am I on the right track here?

So all 3 views are using the same flight data service. I know I can pass an instance of that service to each VM, but then I'd need to hook up the appropriate INPC events in each view model, and that seems less than ideal as the number of views/VMs increases.
You don't necessarily have to do this if the "service" implements INotifyPropertyChanged. Remember, you can bind to a property within a property, i.e. {Binding Path=FlightService.Gate} or whatever, which may work. (It's difficult to know your requirements here, though.)
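As a rough sketch of that idea (the FlightDataService name and its members are invented for illustration): the service implements INotifyPropertyChanged and the view model just exposes it, so XAML can bind through it with a nested path.

using System.ComponentModel;
using System.Runtime.CompilerServices;

public class FlightDataService : INotifyPropertyChanged
{
    private string _lastUpdated;

    // Example property the service updates on each polling cycle.
    public string LastUpdated
    {
        get => _lastUpdated;
        set { _lastUpdated = value; OnPropertyChanged(); }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged([CallerMemberName] string name = null) =>
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
}

public class InboundViewModel
{
    // Expose the service itself; the view can then bind through it with a
    // nested path, e.g. {Binding Path=FlightService.LastUpdated}.
    public FlightDataService FlightService { get; }

    public InboundViewModel(FlightDataService flightService) =>
        FlightService = flightService;
}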
I've looked into the Messenger framework for MVVM Light but it still seems like each of the sub-VMs would have to register for the message and respond to it individually.
Yes, if you wanted to use a messaging framework, you would need this to be handled in each of the ViewModels. Alternatively, you could use some form of service location or constructor injection to "pull in" the flight service. The latter is my personal preference here.
The advantage of handling this in each VM is that each VM will likely want to handle things somewhat differently (otherwise, why is there more than one?). By pulling a reference to the service in via IoC, you can handle this any way you need to.
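Here's a minimal sketch of the constructor-injection approach, with invented names: one shared service exposes an ObservableCollection, and each VM wraps it in its own ListCollectionView with its own filter/sort, so the per-VM differences stay in the VM while change notification stays centralized.

using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Windows.Data;

public enum FlightDirection { Inbound, Outbound }

public class Flight
{
    public string FlightNumber { get; set; }
    public FlightDirection Direction { get; set; }
}

public interface IFlightDataService
{
    // Updated by the service every minute; collection-change
    // notifications reach every bound view automatically.
    ObservableCollection<Flight> Flights { get; }
}

public class OutboundViewModel
{
    public ListCollectionView Flights { get; }

    public OutboundViewModel(IFlightDataService service)
    {
        // Each VM differs only in how it filters/sorts the shared data.
        Flights = new ListCollectionView(service.Flights)
        {
            Filter = item => ((Flight)item).Direction == FlightDirection.Outbound
        };
        Flights.SortDescriptions.Add(new SortDescription(
            nameof(Flight.FlightNumber), ListSortDirection.Ascending));
    }
}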

Related

Clean architecture and user logins in Flutter - how do I store user information?

I've been trying to use Reso Coder's Flutter adaptation of Uncle Bob's Clean Architecture.
My app connects to an API, and most requests (other than logging in) require an authentication token.
Furthermore, upon logging in, user profile information (like a name and profile picture) is received.
I need a way to save this data upon login, and use it in both future API requests and my app's UI.
As I'm new to Uncle Bob's Clean Architecture, I'm not quite sure where this data belongs. Here are the ideas I've come up with, all involving storing the data in a User object:
Store the User in the repository layer, in an authentication feature directory. Other repository-level methods can pass it to the appropriate datasource methods.
This seems to make the most sense; other repository-level methods that make other API calls can use the stored User easily, passing it to methods in the data source layer.
If this is the way to go, I'm not quite sure how other features (that use the API) would access the User - is it okay to have a repository depend on another, and pass the authentication repository to the new feature repository?
Store the User in the repository layer, in an authentication feature directory. Other (non-login) use cases can depend both on this repository and on one relevant to their own feature, passing the User to their repository methods.
This also breaks the vertical feature barrier, but it may be cleaner than idea 1.
For both these ideas, here's what my repository looks like:
abstract class AuthenticationRepository {
  /// The current user.
  User get currentUser;

  /// True if logged in.
  bool get loggedIn;

  /// Logs in, saving the [User].
  Future<void> login(AuthenticationParams params);

  /// Logs out, disposing of the [User].
  Future<void> logout();

  /// Same as [logout], but logs out of all devices.
  Future<void> logoutAll();

  /// Retrieves stored login credentials.
  Future<AuthenticationParams> retrieveStoredCredentials();
}
Are these ideas "valid", and are there any better ways of doing it?
I see another option for tackling the problem. The solution I want to talk about comes from domain-driven design and is an event-based approach.
In DDD you have the concept of a bounded context. A business object (Uncle Bob's entity) can have different meanings in different bounded contexts. Take a look at your user business object. The data and methods that one use case uses are often different from the data and methods that other use cases use. That's why you have different user objects in different bounded contexts. They are a kind of perspective that each use case has on the same business object.
If a business object is modified in one bounded context, it can emit a business event. Another feature can listen to those events. The event mechanism can either be a simple observer pattern or, if you need to distribute your application features via microservices, a message queue. In case you use a simple observer pattern, the event emitter and event handler can run within the same data source transaction. But they can also run in different ones. It depends on your needs.
So when the sign-up use case registers a new user, it emits a UserSignedUpEvent. Other features can now listen to this event. The event carries the information of the user, like the email, the name, the profile image and other information that the user provided during sign-up. Other features can now save the pieces of data they need to their own data source. It can be the same one the sign-up use case uses (just other tables or another schema). But it is also possible that it is a completely different data source, maybe another kind of data source like a NoSQL DB. The part I wrote above about transactions is of course more difficult if you have different data sources.
The main point is that each feature has its own data and manages it. It might be a copy of the whole user information, but in a lot of cases it is only a subset.
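As a concrete illustration (a C# sketch with invented names, using a plain in-process observer rather than a message queue): the sign-up feature publishes the event, and each interested feature stores only the subset of the data it needs.

using System;
using System.Collections.Generic;

// The business event, carrying the data captured at sign-up.
public record UserSignedUpEvent(Guid UserId, string Email, string Name);

// A minimal in-process event bus (observer pattern).
public static class DomainEvents
{
    private static readonly List<Action<UserSignedUpEvent>> _handlers = new();

    public static void Subscribe(Action<UserSignedUpEvent> handler) =>
        _handlers.Add(handler);

    public static void Publish(UserSignedUpEvent e) =>
        _handlers.ForEach(h => h(e));
}

// The profile feature keeps its own copy of just the fields it cares about.
public class ProfileFeature
{
    private readonly Dictionary<Guid, string> _displayNames = new();

    public ProfileFeature() =>
        DomainEvents.Subscribe(e => _displayNames[e.UserId] = e.Name);
}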
The event-based approach can give you perfect modularization. But as it always is when something looks great, it comes at a cost: you have to duplicate some part or even all of the data. When you think of a microservice architecture where some features live in different microservices, this duplication increases the availability of the service. The service can operate even if the main service that manages the data is down, because a local copy exists. But now you have to deal with consistency issues - eventual consistency.
At this point I like to stop and guide you to other sources for details:
Chapter 8: Domain Events, Implementing Domain-Driven Design, Vaughn Vernon
The many meanings of event-driven architectures, Martin Fowler

Fetching potentially needed data from a repository

We have (roughly) following architecture:
1. The application service does the infrastructure job - it fetches data from repositories which are hidden behind interfaces.
2. An object graph is created and passed to the appropriate domain service.
3. The domain service does its thing and raises appropriate events.
4. Events are handled in different application services, which perform some persistence operations (altering repositories, sending e-mails, etc.).
However, the domain service (3) has become so complex that it requires data from different external APIs, but only if particular conditions are satisfied. For example, if Product X is of type Car, we need to know the price of that car model from some external CatalogService (an invented example) hidden behind ICatalogService. This is a potentially expensive operation (a REST call).
How do we go about this?
A. Do we pre-fetch all the data in the application service listed as (1), even though we might not need it? B. Do we inject the ICatalogService interface into the given domain service and fetch the data only when needed? The latter solution might create performance issues if some other client of the domain service calls it repeatedly, without knowing there is a REST call hidden inside it.
Or did we simply get the domain model wrong?
This question is related to Domain Driven Design.
How do we go about this?
There are two common patterns.
One is to pass the capability to make the query into the domain model, allowing the model to fetch the information itself when it is needed. What this will usually look like is defining an interface / a contract that will be consumed by the domain model, but implemented in the application/infrastructure layers.
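A sketch of this first pattern, with all names invented: the contract lives in the domain layer, infrastructure supplies the REST-backed implementation, and the domain service makes the expensive call only when the condition holds.

// Domain layer: the contract the domain model consumes.
public enum ProductType { Car, Other }

public class Product
{
    public ProductType Type { get; set; }
    public string ModelId { get; set; }
    public decimal ListPrice { get; set; }
}

public interface ICatalogService
{
    decimal GetPrice(string modelId);
}

public class PricingService
{
    private readonly ICatalogService _catalog;

    public PricingService(ICatalogService catalog) => _catalog = catalog;

    public decimal PriceFor(Product product)
    {
        // The expensive REST-backed call happens only when the condition holds.
        return product.Type == ProductType.Car
            ? _catalog.GetPrice(product.ModelId)
            : product.ListPrice;
    }
}

// The application/infrastructure layer would supply the implementation:
// public class RestCatalogService : ICatalogService { ... }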
The other is to extend the protocol between the domain model and the application, so that we can signal to the application layer what information is needed, and then the application code can decide how to provide it. You end up with something like a state machine for the processes, with the application code coordinating the exchange of information between the external api and the domain model.
If you use a bit of imagination, you've already got a state machine something like this, as your application code is already coordinating the movement of inputs to the repository and the domain model. The difference, of course, is that the existing "state machine" is simple and linear enough that it may not be obvious there is a state machine present at all.
How exactly would you signal the application layer?
Simple queries; that is, the application code pulls the information it needs out of the domain model and uses that information to compute the next action. When the action is completed, the application code pushes information to the domain model.
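A sketch of what that pull/push exchange might look like, reusing the Product type from the sketch above (all names are invented): the domain model exposes its next required input as plain data, and the application code decides how to satisfy it.

public abstract record NextStep;
public record NeedsCatalogPrice(string ModelId) : NextStep;
public record Done(decimal Price) : NextStep;

public class PricingProcess
{
    private decimal? _fetchedPrice;

    // The application code "pulls" this to learn what the model needs next.
    public NextStep Next(Product product) =>
        product.Type == ProductType.Car && _fetchedPrice is null
            ? new NeedsCatalogPrice(product.ModelId)
            : new Done(_fetchedPrice ?? product.ListPrice);

    // ...and "pushes" the answer back once it has obtained it.
    public void ProvidePrice(decimal price) => _fetchedPrice = price;
}

// Application layer coordination:
// while (process.Next(product) is NeedsCatalogPrice need)
//     process.ProvidePrice(catalog.GetPrice(need.ModelId));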
There isn't enough information to give you good, targeted advice. I suspect you need to refactor your domains into further subdomains. It sounds like your domain service has far more than one responsibility. Keep the service simple.
In addition, if you have a long-running task, like a service call that takes a long time, you need to architect it away. The most supple design will not keep the consumer waiting; it'll return immediately with some sort of result to the user, even if it's simply a periodic status update.

Client Interaction With Event Sourcing

I have been recently looking into event sourcing and have some questions about the interactions with clients.
So event sourcing sounds great: decoupling all your microservices, keeping your information in immutable events, and formulating stored states off of that to fit your needs is really handy. Having events propagate through your system/services, each reacting to them in its own way, is all fine.
The issue I am having lies with understanding the client interaction.
So you want clients to interact with the system, but they now need to do this via events. They can no longer submit a state to mutate your existing one.
So the question is: how do clients fire off specific events and interact with not just an event-based system, but a system based on event sourcing?
My understanding is that you no longer use the REST API as resources (which you can get, update, delete, etc., handling them as resources), but instead post to an endpoint as an event.
So how do these endpoints work?
My second question is: how does the user get responses back?
For instance, let's say we have an event to place an order.
You're going to fire off an event and it's going to do its thing. Again, my understanding is that you don't validate the request now (e.g. checking whether the user placing the order has enough money), but instead fire it off to be placed, and it will be handled in the system.
e.g. it will now be:
- Order placed.
- This will be picked up by the pricing service, which will fire either a "money reserved" or a "money exceeded" event, based on whether the user can afford it.
- The order service will then listen for those events and mark the order accordingly (denied if there is not enough credit).
So because this is an async process and the user has fired and forgotten, how do you then show the user that it has either failed or succeeded? Do you show them an order confirmation page with the order status as it is (even if it's pending), or do you poll until it changes (WebSockets or something)?
I'm sorry if a lot of this is nonsense; I am still learning about this architecture and am very much in the mindset of a monolith with REST responses.
Any help would be appreciated.
The issue I am having lies with understanding the client interaction.
Some of the issue may be understanding, but I promise you a fair share of the issue is that the literature sucks.
In particular, the word "Event" gets re-used a lot of different ways. If you aren't paying very careful attention to which meaning is being used, you are going to get knotted.
Event Sourcing is really about persistence - how does a microservice store its private copy of state for later re-use? Instead of destructively overwriting our previous state, we write new information that links back to the previous state. If you imagine each microservice storing each change of state as a commit in its own git repository, you are in the right ballpark.
That's a different animal from using Event Messages to communicate information between one microservice and another.
There's some obvious overlap, of course, because the one message that you are likely to share with other microservices is "I just changed state".
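As a minimal sketch of the persistence idea (invented names, no real event-store library): state is an append-only list of events, and current state is recovered by folding over the history, much like replaying commits.

using System.Collections.Generic;
using System.Linq;

public abstract record AccountEvent;
public record Deposited(decimal Amount) : AccountEvent;
public record Withdrawn(decimal Amount) : AccountEvent;

public class AccountStream
{
    // The private, append-only history: nothing is ever overwritten.
    private readonly List<AccountEvent> _events = new();

    public void Append(AccountEvent e) => _events.Add(e);

    // Current state is derived by replaying the history from the start.
    public decimal Balance() => _events.Aggregate(0m, (balance, e) => e switch
    {
        Deposited d => balance + d.Amount,
        Withdrawn w => balance - w.Amount,
        _ => balance
    });
}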
So how do these endpoints work?
The same way that web forms do. I send you a representation of a form, the client displays the form to you. You fill in your data and submit the form, the client processes the contents of the form, and sends back to me an HTTP request with a "FormSubmitted" event in the message body.
You can achieve similar results by sending new representations of the state, but it's a bit error-prone to strip away the semantic intent and then try to guess it again on the server. So you are more likely to instead see task-based user interfaces, or protocols that clearly identify the semantics of the change.
When the outside world is the authority for some piece of data (a shopper's shipping address, for example), you are more likely to see the more traditional "just edit the existing representation" approach.
So because this is an async process and the user has fired and forgotten, how do you then show the user that it has either failed or succeeded?
Fire and forget really doesn't work for a distributed protocol on an unreliable network. In most cases, at-least-once delivery is important, so Fire until verified is the more common option. The initial acknowledgement of the message might be something like 202 Accepted -- "We received your message, we wrote it down, here's our current progress, here are some links you can fetch for progress reports".
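A sketch of that acknowledgement shape, assuming an ASP.NET Core minimal API (.NET 6+ with implicit usings; the routes and types are invented): accept the message, write it down, and hand back a link the client can poll for progress.

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

var statuses = new Dictionary<Guid, string>(); // demo store: order id -> status

app.MapPost("/orders", (PlaceOrder command) =>
{
    var id = Guid.NewGuid();
    statuses[id] = "Pending";                        // write it down first
    // ... hand the command off for asynchronous processing ...
    return Results.Accepted($"/orders/{id}/status"); // 202 + progress link
});

app.MapGet("/orders/{id:guid}/status", (Guid id) =>
    statuses.TryGetValue(id, out var status) ? Results.Ok(status) : Results.NotFound());

app.Run();

public record PlaceOrder(string Sku, int Quantity);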
It doesn't seem to me that event sourcing fits with the traditional REST model where you CRUD a resource.
Jim Webber's 2011 talk may help to prune away the noise. A REST API is a disguise that your domain model wears; you exchange messages about manipulating resources, and as a side effect your domain model does useful work.
One way you could do this that would look more "traditional" is to work with representations of the event stream. I do a GET /08ff2ec9-a9ad-4be2-9793-18e232dbe615 and it returns me a representation of a list of events. I append a new event onto the end of that list, and PUT /08ff2ec9-a9ad-4be2-9793-18e232dbe615, and interesting side effects happen. Or perhaps I instead create a patch document that describes my change, and PATCH /08ff2ec9-a9ad-4be2-9793-18e232dbe615.
But more likely, I would do something else -- instead of GET /08ff2ec9-a9ad-4be2-9793-18e232dbe615 to fetch a representation of the list of events, I'd probably GET /08ff2ec9-a9ad-4be2-9793-18e232dbe615 to fetch a representation of available protocols - which is to say, a document filled with hyper links. From there, I might GET /08ff2ec9-a9ad-4be2-9793-18e232dbe615/603766ac-92af-47f3-8265-16f003ce5a09 to obtain a representation of the data collection form. I fill in the details of my event, submit the form, and POST /08ff2ec9-a9ad-4be2-9793-18e232dbe615 the form data to the server.
You can, of course, use any spelling you like for the URI.
In the first case, we need something like an HTTP capable document editor; the second case uses something more like a web browser.
If there were lots of different kinds of events, then the second case might well have lots of different form resources, all submitting POST /08ff2ec9-a9ad-4be2-9793-18e232dbe615 requests.
(You don't have to have all of the forms submitting to the same URI, but there are advantages to consider).
In a non-event-sourcing pattern, I guess that would first be put into the database, and then the event gets raised.
Even when you aren't event sourcing, there may still be some advantages to committing events to your durable store before emitting them. See Pat Helland: Data on the Outside versus Data on the Inside.
So you want clients to interact with the system, but they now need to do this via events.
Clients don't have to. Clients may not even be aware of the underlying event store.
There are a number of trade-offs to consider and decisions to make when implementing an event-sourced system. To start with, you can try to name a few pre-computer-era examples of event-sourced systems and look at their non-functional characteristics.
So the question is: how do clients fire off specific events
Clients don't send events. Rather, they should express an intent (a command). It is then the responsibility of the event-sourced system to validate the intent and either reject it, or accept it and store the corresponding event. That would mean an intent to change the system's state was accepted, and the stored event confirms the change.
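A sketch of that split, with invented names: the client's intent arrives as a command, the system validates it against current state, and only an accepted intent becomes a stored event.

using System;
using System.Collections.Generic;

public record PlaceOrderCommand(Guid OrderId, decimal Amount); // the intent
public record OrderPlacedEvent(Guid OrderId, decimal Amount);  // the stored fact

public class OrderCommandHandler
{
    private readonly List<OrderPlacedEvent> _store = new();
    private readonly decimal _availableCredit;

    public OrderCommandHandler(decimal availableCredit) =>
        _availableCredit = availableCredit;

    public bool Handle(PlaceOrderCommand command)
    {
        // Validate the intent against current state...
        if (command.Amount > _availableCredit)
            return false;                 // ...and reject it, storing nothing.

        // Accepted: the stored event confirms the state change.
        _store.Add(new OrderPlacedEvent(command.OrderId, command.Amount));
        return true;
    }
}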
My understanding is that you no longer use the REST API as resources
REST is one of the options. You just consider different things as resources. A command can be a REST resource. An event-sourced entity can be a resource to which you POST a command. If you like it async, you can later GET the command to check its status. You can GET an entity to know its current state. You can GET events from a class of entities as a means of subscription.
If we are talking about an end user, then most likely they don't deal with the event store directly. There is some third tier in between, which does CQRS. From a user client perspective it can be fronted with REST, GraphQL, SOAP, gRPC or even e-mail - whatever transport solution you find suitable. The command-processing part of CQRS is the specifically domain-driven part. It decides which intents to accept and which to reject.
The event store itself is responsible for data consistency, i.e. it should not allow two concurrent events leading to an invalid state to be published. This is what pre-computer event-sourced systems are good at. You usually have some physical object as an entity, so you lock it for update simply by taking hold of it.
Then an end-user client usually reads from some prepared read model. The responsibility of the read (the R in CQRS) component is to prepare read-optimised data for clients. This data may come from multiple event-sourced entities of the same or different classes. Again, a client may interact with a read model over whatever transport is suitable.
While an event store is consistent, and consistent immediately, a read model is only eventually consistent. But it's up to you to tune this eventuality.
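A sketch of such a read-model projector (invented names): it consumes stored events and maintains a denormalised view that clients read directly; the small lag between store and view is the eventual consistency mentioned above.

using System;
using System.Collections.Generic;

public record OrderPlaced(Guid OrderId, decimal Amount);

// The R in CQRS: folds events into a read-optimised structure.
public class OrderTotalsProjection
{
    private readonly Dictionary<Guid, decimal> _totals = new();

    // Invoked for each stored event as it is published; the view lags
    // the event store slightly.
    public void Apply(OrderPlaced e) =>
        _totals[e.OrderId] = _totals.GetValueOrDefault(e.OrderId) + e.Amount;

    // Clients read this denormalised answer directly.
    public decimal TotalFor(Guid orderId) => _totals.GetValueOrDefault(orderId);
}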
Just try to throw REST out of the architecture for a while and consider it one of the available transport options - that may help you get to the root of the matter.

Hosting a Workflow in an MVVM application

I'm designing an MVVM application that does not use WPF or Silverlight. It will simply present web pages in HTML5, styled with CSS3.
The domain is a perfect case for using WF because it involves a number of activities in a long-running process. Specifically, I am tracking the progress of interactions with a customer over a 30 day period and that involves filling out various forms at points along the way, getting approvals from a supervisor at certain times, and making certain that the designated order of activities is followed and is executed correctly.
Each activity will normally be represented by a form on a view designed to capture the desired information at that step. Stated differently, the view that a user sees will be determined by where she is in the workflow at that moment.
My research so far has turned up examples where the workflow is used to execute business logic in accordance with the flowchart that defines it.
In my situation, I need a user to log in and then pick up where she left off in the workflow (for example, some new external event has occurred and she needs to fill out the form for that, or move forward in the workflow to that step).
And I need to support the case where the supervisor logs in and can basically be presented with activities that need approval at that time.
So... it seems to me that a WF solution might be appropriate, but maybe the way I want to use it is inverted - like the cart pulling the horse, so to speak.
I'd appreciate any insight that anyone here can offer.
Thanks - Steve
I have designed an app similar to yours, based on WPF, where the screens shown by the application are driven by workflows.
I use a task-based approach. I have some custom activities that create user tasks in a DB. There is a different type of task for every form type that the application supports. When the workflow reaches one of these special activities, the task is saved to the DB and the WF goes idle (a bookmark).
Once the user submits the form, the WF is resumed up to the point where another user task is reached, and so on.
Tasks can be assigned to different users along the way (end user, supervisor, ...), and each user has a pending-tasks list where they can resume previous WF instances, etc.
Then, to generate the user views (HTML5 forms in your case), you have to read the pending task and translate it into the corresponding form.
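For illustration, here is roughly what such a task activity can look like with WF's System.Activities (the task-store call is a stand-in and the names are invented): the activity records the task and creates a bookmark, which lets the workflow go idle until the form comes back.

using System.Activities;

public sealed class UserTaskActivity : NativeActivity<string>
{
    public InArgument<string> FormType { get; set; }

    // Tells the runtime this activity may idle the workflow.
    protected override bool CanInduceIdle => true;

    protected override void Execute(NativeActivityContext context)
    {
        var formType = context.GetValue(FormType);
        // ... save a pending task row to the DB here (not shown) ...

        // Go idle until the form for this task is submitted.
        context.CreateBookmark(formType, OnFormSubmitted);
    }

    private void OnFormSubmitted(NativeActivityContext context,
                                 Bookmark bookmark, object formData)
    {
        Result.Set(context, (string)formData);
    }
}

When the user submits the form, the host resumes the workflow with WorkflowApplication.ResumeBookmark, passing the form data as the bookmark value.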
Hope you find it useful.

Where to put my application logic when using Entity Framework and MVVM

In the last few days I have spent a lot of time creating the architecture for my program, but I still have a problem with it. At the moment it looks like this:
DataLayer: here reside my context class, which derives from DbContext, and the mapper classes (derived from EntityTypeConfiguration, like JobMap) for the domain objects.
DomainLayer: here reside my domain/business objects, like Job or Schedule.
Presentation Layer: here I have the *ViewModel and *View classes (I use WPF for the views).
Now to my question: I want to build a scheduling application with some optimization abilities (it is a single-user, single-PC application, so no further decoupling, like for a web application, is needed). But I have the problem that I don't know where this application fits into the architecture.
Consider the following use case: the user clicks a "Start" button on the view, which calls the ViewModel, which redirects to my scheduling/optimization application. This app then gets all the new jobs from the database and creates/updates the current schedule. The ViewModel should then update the old schedule with the newly created one. Finally, the view shows the generated schedule to the user.
In this case my ViewModel knows about my application (because it calls it) and about my domain/business objects (because my app will deliver e.g. a Schedule domain object, which the ViewModel encapsulates).
Is this a correct usage of the EF, MVVM and my application?
Regards
To start, you'll want to identify which pieces of your application go where, and that's fairly easy to do. Essentially, you have to ask yourself: does this method or class help define my domain? If the answer is yes, you put it in the domain layer; if not, you put it in presentation.
Here's how you'd look at it in your example:
- Your presentation layer (PL) receives a message via the Start button.
- The PL calls the domain and tells it to generate a schedule. This call is probably to a domain service.
- Your domain service is then in charge of populating the Job domain objects, creating a new Schedule domain object (or modifying an existing one), and returning the Schedule domain object.
- Your PL then simply displays the returned Schedule.
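Here's a compact sketch of that flow, with all names invented: the domain service owns the job-to-schedule logic, and the ViewModel only calls it and displays the result.

using System.Collections.Generic;

public class Job { }
public class Schedule { }

public interface IJobRepository
{
    IReadOnlyList<Job> GetNewJobs();
}

// Domain layer: owns the job-to-schedule business logic.
public class SchedulingService
{
    private readonly IJobRepository _jobs;

    public SchedulingService(IJobRepository jobs) => _jobs = jobs;

    public Schedule GenerateSchedule()
    {
        var newJobs = _jobs.GetNewJobs();
        var schedule = new Schedule();
        // ... optimization/assignment logic over newJobs stays here,
        // in the domain ...
        return schedule;
    }
}

// Presentation layer: the ViewModel only calls the domain and displays results.
public class ScheduleViewModel
{
    private readonly SchedulingService _scheduler;

    public Schedule CurrentSchedule { get; private set; }

    public ScheduleViewModel(SchedulingService scheduler) => _scheduler = scheduler;

    public void OnStartClicked() => CurrentSchedule = _scheduler.GenerateSchedule();
}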
This might be different if you just wanted to obtain an existing Schedule object. Instead of calling a domain service, you would ask a domain repository to get the existing schedule. The repository would be the way of encapsulating or otherwise obscuring the data layer from your PL and from your Domain.
Now, what you DON'T want to do:
Do not get the list of jobs in your PL and then use that list of jobs to create the schedule in the controller of your MVVM. This would be business logic that defines your domain.
If Schedules are commonly generated from Jobs, regardless of whether that's done from MVVM or a PHP site, then don't add complexity to your PL and domain layer by forcing the PL to first get the jobs and pass them back into the domain for a Schedule to be generated. The fact that those two concepts are tied to each other means that the relationship helps define your domain, and thus belongs in your domain layer. An exception might be when the jobs and the schedule to be modified both rely on context from the front end (user input), but even this isn't always an exception.
Do not pass VMs into your domain. Let your controller filter out the data and determine what needs to be sent to which domain part.
It's really hard to give precise details of what you should place where, because only you have a clear view of what defines your domain, but here's essentially how I break it down:
Could I change/replace this without affecting how my business/domain works?
If the answer is yes, it does not belong in your domain. Example: you could replace your entire MVVM front end with flat PHP or ASPX, and even though it'd be a lot of work and a huge pain, you could do it without affecting how the rest of the business operates.