Where to put my application logic when using Entity Framework and MVVM

Over the last few days I have spent a lot of time creating the architecture for my program, but I still have a problem with it. At the moment it looks like this:
DataLayer: here reside my context class (derived from DbContext) and the mapper classes (derived from EntityTypeConfiguration), such as JobMap for the domain objects.
DomainLayer: here reside my domain/business objects, like Job or Schedule.
Presentation Layer: here I have the *ViewModel and *View classes (I use WPF for the views).
Now to my question: I want to build a scheduling application with some optimization abilities (it is a single-user, single-PC application, so no further decoupling like a web application is needed). The problem is that I don't know where this application logic fits into the architecture.
Consider the following use case: the user clicks a "Start" button on the View, which calls the ViewModel, which redirects to my scheduling/optimization logic. That logic then gets all the new jobs from the database and creates/updates the current schedule. The ViewModel should then replace the old schedule with the newly created one. Finally, the View shows the generated schedule to the user.
In this case my ViewModel knows about my application logic (because it calls it) and about my domain/business objects (because the logic will deliver, e.g., a Schedule domain object, which the ViewModel encapsulates).
Is this a correct usage of EF, MVVM, and my application logic?
Regards

To start, you'll want to identify which pieces of your application go where, and that's fairly easy to do. Essentially, you have to ask yourself: does this method or class help define my domain? If the answer is yes, you put it in the domain layer, and if not, you put it in the presentation layer.
Here's how you'd look at it in your example:
Your Presentation layer (PL) receives a message via the Start button.
The PL calls the Domain and tells it to generate a schedule. This call is probably to a domain service.
Your domain service is then in charge of populating the Job domain objects, creating a new Schedule domain object (or modifying an existing one), and returning the Schedule domain object.
Your PL then simply displays the returned Schedule (see the sketch after this list).
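To make that concrete, here's a minimal C# sketch of such a domain service. The names IJobRepository and SchedulingService are illustrative, not prescribed by EF or MVVM:

using System.Collections.Generic;

// Your existing domain objects (stubbed here for illustration).
public class Job { /* ... */ }

public class Schedule
{
    public Schedule(IReadOnlyList<Job> jobs) { Jobs = jobs; }
    public IReadOnlyList<Job> Jobs { get; }
}

// Domain layer: the repository contract hides the EF data layer.
public interface IJobRepository
{
    IReadOnlyList<Job> GetNewJobs();
}

// Domain service: owns the "generate a schedule from jobs" rule.
public class SchedulingService
{
    private readonly IJobRepository _jobs;

    public SchedulingService(IJobRepository jobs) { _jobs = jobs; }

    public Schedule GenerateSchedule()
    {
        var newJobs = _jobs.GetNewJobs();
        // ... your optimization logic runs here ...
        return new Schedule(newJobs);
    }
}

The ViewModel's Start command then only calls GenerateSchedule() and wraps the returned Schedule for the View; the scheduling rules themselves never leak into the PL.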
This might be different if you just wanted to obtain an existing Schedule object. Instead of calling a domain service, you would ask a domain repository to get the existing schedule. The repository would be the way of encapsulating or otherwise obscuring the data layer from your PL and from your Domain.
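In code, such a repository is again just a contract owned by the domain and implemented in the data layer. A rough sketch, where MyDbContext stands in for your DbContext subclass and the query is an assumption about your schema:

using System.Linq;

// The domain layer owns the contract...
public interface IScheduleRepository
{
    Schedule GetCurrentSchedule();
    void Save(Schedule schedule);
}

// ...while the data layer implements it with EF.
public class EfScheduleRepository : IScheduleRepository
{
    private readonly MyDbContext _context;

    public EfScheduleRepository(MyDbContext context) { _context = context; }

    public Schedule GetCurrentSchedule()
    {
        // however you mark the "current" schedule in your schema
        return _context.Schedules.First();
    }

    public void Save(Schedule schedule)
    {
        // attach/mark modified according to your EF version, then:
        _context.SaveChanges();
    }
}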
Now, what you DON'T want to do:
Do not get the list of jobs in your PL and then use that list of jobs to create the schedule in your ViewModel. That would be business logic that defines your domain.
If Schedules are commonly generated from Jobs, regardless of whether the caller is your MVVM app or a PHP site, then don't add complexity in your PL and Domain Layer by forcing the PL to first get the jobs and pass them back into the Domain for a Schedule to be generated. The fact that those two concepts are tied to each other means that the relationship helps define your domain, and thus belongs in your domain layer. An exception might be when both the jobs and the schedule to be modified rely on context from the front end (user input), but even that isn't always an exception.
Do not pass VMs into your domain. Let your ViewModel filter out the data and determine what needs to be sent to which domain part.
It's really hard to give precise details of what you should place where, because only you have a clear view of what defines your domain, but here's essentially how I break it down:
Could I change/replace this without affecting how my business/domain works?
If the answer is yes, it does not belong in your domain. Example: you could replace your entire MVVM front end with flat PHP or ASPX, and even though it'd be a lot of work and a huge pain, you could do it without affecting how the rest of the business operates.

Related

Clean architecture and user logins in Flutter - how do I store user information?

I've been trying to use Reso Coder's Flutter adaptation of Uncle Bob's Clean Architecture.
My app connects to an API, and most requests (other than logging in) require an authentication token.
Furthermore, upon logging in, user profile information (like a name and profile picture) is received.
I need a way to save this data upon login, and use it in both future API requests and my app's UI.
As I'm new to Uncle Bob's Clean Architecture, I'm not quite sure where this data belongs. Here are the ideas I've come up with, all involving storing the data in a User object:
Store the User in the repository layer, in an authentication feature directory. Other repository-level methods can pass it to the appropriate datasource methods.
This seems to make the most sense; other repository-level methods that make other API calls can use the stored User easily, passing it to methods in the data source layer.
If this is the way to go, I'm not quite sure how other features (that use the API) would access the User - is it okay to have a repository depend on another, and pass the authentication repository to the new feature repository?
Store the User in the repository layer, in an authentication feature directory. Other (non-login) usecases can depend on both this repository and on one relevant to their own feature, passing the User to their repository methods.
This is also breaking the vertical feature barrier, but it may be cleaner than idea 1.
For both these ideas, here's what my repository looks like:
abstract class AuthenticationRepository {
  /// The current user.
  User get currentUser;

  /// True if logged in.
  bool get loggedIn;

  /// Logs in, saving the [User].
  Future<void> login(AuthenticationParams params);

  /// Logs out, disposing of the [User].
  Future<void> logout();

  /// Same as [logout], but logs out of all devices.
  Future<void> logoutAll();

  /// Retrieves stored login credentials.
  Future<AuthenticationParams> retrieveStoredCredentials();
}
Are these ideas "valid", and are there any better ways of doing it?
I see another option to tackle the problem. The solution I want to talk about comes from domain-driven design and is an event-based approach.
In DDD you have the concept of a bounded context. A business object (Uncle Bob's entity) can have different meanings in different bounded contexts. Take a look at your user business object: the data and methods that one use case uses are often different from the data and methods that other use cases use. That's why you have different user objects in different bounded contexts. They are a kind of perspective that each use case has on the same business object.
If a business object is modified in one bounded context, it can emit a business event. Another feature can listen to those events. The event mechanism can be a simple observer pattern, or, if you need to distribute your application features via microservices, a message queue. If you use a simple observer pattern, the event emitter and event handler can run within the same data source transaction, but they can also run in different ones. It depends on your needs.
So when the sign-up use case registers a new user, it emits a UserSignedUpEvent. Other features can now listen to this event. The event carries the information about the user, like the email, the name, the profile image, and other information that the user provided during sign-up. Other features can now save the pieces of data they need to their own data source. It can be the same one that the sign-up use case uses (just other tables or another schema), but it is also possible that it is a completely different data source, maybe another kind of data source like a NoSQL DB. The point I made above about transactions is of course more difficult if you have different data sources.
The main point is that each feature has its own data and manages it. It might be a copy of the whole user information, but in a lot of cases it is only a subset.
The event-based approach can give you perfect modularization. But, as always when something looks great, it comes at a cost: you have to duplicate some or even all of the data. When you think of a microservice architecture where some features live in different microservices, the duplication increases the availability of the service: a service can operate even if the main service that manages the data is down, because a local copy exists. But now you have to deal with consistency issues, i.e. eventual consistency.
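As a minimal sketch of the observer-pattern variant, in C# (the event bus and all type names here are invented for illustration; a real implementation would add error handling and unsubscription):

using System;
using System.Collections.Generic;

// The event emitted by the sign-up bounded context.
public record UserSignedUpEvent(string UserId, string Email, string Name, string ProfileImageUrl);

// A simple in-process event bus (observer pattern).
public static class DomainEvents
{
    private static readonly List<Action<UserSignedUpEvent>> _handlers = new();

    public static void Subscribe(Action<UserSignedUpEvent> handler) => _handlers.Add(handler);

    public static void Publish(UserSignedUpEvent evt)
    {
        foreach (var handler in _handlers) handler(evt);
    }
}

// Another feature stores only the subset of user data it needs.
public class ProfileFeature
{
    public ProfileFeature()
    {
        DomainEvents.Subscribe(evt =>
        {
            // save evt.Name and evt.ProfileImageUrl to this feature's own store
        });
    }
}

The sign-up use case then calls DomainEvents.Publish(new UserSignedUpEvent(...)) after the user is persisted; swapping this in-process bus for a message queue later does not change the listening features.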
At this point I'd like to stop and guide you to other sources for the details:
Chapter 8: Domain Events, in Implementing Domain-Driven Design, Vaughn Vernon
The many meanings of event-driven architectures, Martin Fowler

Fetching potentially needed data from repository - DDD

We have (roughly) the following architecture:
Application service does the infrastructure job - fetches data from repositories which are hidden behind interfaces.
Object graph is created and passed to appropriate domain service.
Domain service does its thing and raises appropriate events.
Events are handled in different application services which perform some persistent operations (altering repositories, sending e-mails etc).
However, domain service (3) has become so complex that it requires data from different external APIs, but only when particular conditions are satisfied. For example, if Product X is of type Car, we need to know the price of that car model from some external CatalogService (example invented) hidden behind ICatalogService. This is a potentially expensive operation (a REST call).
How do we go about this?
Do we pre-fetch all data in the application service listed as (1), even though we might not need it? Or do we inject the ICatalogService interface into the given domain service and fetch the data only when needed? The latter solution might create performance issues if some other client of the domain service calls it repeatedly without knowing there is a REST call hidden inside it.
Or did we simply get the domain model wrong?
This question is related to Domain Driven Design.
How do we go about this?
There are two common patterns.
One is to pass the capability to make the query into the domain model, allowing the model to fetch the information itself when it is needed. What this will usually look like is defining an interface / a contract that will be consumed by the domain model, but implemented in the application/infrastructure layers.
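A hedged C# sketch of that first pattern, reusing the ICatalogService idea from the question (Product, ProductType, and the method names are invented here):

// Stubs for illustration.
public enum ProductType { Car, Other }

public class Product
{
    public ProductType Type { get; set; }
    public string Model { get; set; }
    public decimal BasePrice { get; set; }
}

// Contract defined in the domain model, implemented in application/infrastructure.
public interface ICatalogService
{
    decimal GetPriceFor(string carModel);
}

// The domain service decides when the expensive call is actually needed.
public class PricingDomainService
{
    private readonly ICatalogService _catalog;

    public PricingDomainService(ICatalogService catalog) { _catalog = catalog; }

    public decimal PriceProduct(Product product)
    {
        if (product.Type == ProductType.Car)
            return _catalog.GetPriceFor(product.Model); // REST call happens only on this branch
        return product.BasePrice;
    }
}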
The other is to extend the protocol between the domain model and the application, so that we can signal to the application layer what information is needed, and then the application code can decide how to provide it. You end up with something like a state machine for the processes, with the application code coordinating the exchange of information between the external api and the domain model.
If you use a bit of imagination, you've already got a state machine something like this, as your application code is already coordinating the movement of inputs between the repository and the domain model. The difference, of course, is that the existing "state machine" is simple and linear enough that it may not be obvious that there is a state machine present at all.
how exactly would you signal the application layer?
Simple queries; which is to say, the application code pulls the information it needs out of the domain model and uses that information to compute the next action. When the action is completed, the application code pushes information to the domain model.
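A sketch of that second pattern in the same C# terms, reusing Product and ICatalogService from the previous sketch (PricingProcess and all member names are invented):

// The domain model signals what it still needs instead of fetching it.
public class PricingProcess
{
    public string RequiredCatalogModel { get; private set; }
    public decimal? Price { get; private set; }

    public void Start(Product product)
    {
        if (product.Type == ProductType.Car)
            RequiredCatalogModel = product.Model; // "I need a catalog price for this model"
        else
            Price = product.BasePrice;
    }

    public void SupplyCatalogPrice(decimal price)
    {
        Price = price;
        RequiredCatalogModel = null;
    }
}

// The application layer coordinates the exchange.
public static class PricingCoordinator
{
    public static decimal Run(PricingProcess process, Product product, ICatalogService catalog)
    {
        process.Start(product);
        if (process.RequiredCatalogModel != null)
            process.SupplyCatalogPrice(catalog.GetPriceFor(process.RequiredCatalogModel));
        return process.Price.Value;
    }
}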
There isn't enough information to give you good, targeted advice. I suspect you need to refactor your domain into further subdomains. It sounds like your domain service has way more than one responsibility. Keep the service simple.
In addition, if you have a long-running task like a service call that takes a long time, then you need to architect it away. The most supple design will not keep the consumer waiting; it'll return immediately with some sort of result to the user, even if it's simply a periodic status update.

Hosting a Workflow in an MVVM application

I'm designing an MVVM application that does not use WPF or Silverlight. It will simply present web pages in HTML5, styled with CSS3.
The domain is a perfect case for using WF because it involves a number of activities in a long-running process. Specifically, I am tracking the progress of interactions with a customer over a 30 day period and that involves filling out various forms at points along the way, getting approvals from a supervisor at certain times, and making certain that the designated order of activities is followed and is executed correctly.
Each activity will normally be represented by a form on a view designed to capture the desired information at that step. Stated differently, the view that a user sees will be determined by where she is in the workflow at that moment.
My research so far has turned up examples where the workflow is used to execute business logic in accordance with the flowchart that defines it.
In my situation, I need a user to log in and then pick up where she left off in the workflow (for example, some new external event has occurred and she needs to fill out the form for that, or move forward in the workflow to that step).
And I need to support the case where the supervisor logs in and can basically be presented with activities that need approval at that time.
So... it seems to me that a WF solution might be appropriate, but maybe the way I want to use it is inverted, like the cart pulling the horse, so to speak.
I'd appreciate any insight that anyone here can offer.
Thanks - Steve
I have designed an app similar to yours; it is based on WPF, but the screens shown by the application are actually driven by workflows.
I use a task-based approach. I have some custom activities that create user tasks in a DB. There are different types of tasks, one for every form type that the application supports. When the workflow reaches one of these special activities, the task is saved to the DB and the WF goes idle (bookmark).
Once the user submits the form, the WF is resumed up to the point where another user task is reached, and so on.
Tasks can be assigned to different users along the way (final user, supervisor, ...), and each user has a pending-tasks list where they can resume previous WF instances, etc.
Then, to generate the user views (HTML5 forms in your case), you have to read the pending task and translate it into the corresponding form.
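In WF 4 terms, the "save a task and go idle" step is a custom activity that creates a bookmark. A minimal sketch; the task-persistence line is a stand-in for your own DB code:

using System.Activities;

// Custom activity: persists a user task, then idles on a bookmark.
public sealed class CreateUserTask : NativeActivity
{
    public InArgument<string> TaskId { get; set; }

    // Tells the runtime this activity may make the workflow go idle.
    protected override bool CanInduceIdle => true;

    protected override void Execute(NativeActivityContext context)
    {
        var taskId = TaskId.Get(context);
        // SaveTaskToDatabase(taskId);  // your own persistence code here
        context.CreateBookmark(taskId, OnFormSubmitted);
    }

    private void OnFormSubmitted(NativeActivityContext context, Bookmark bookmark, object value)
    {
        // 'value' carries the submitted form data passed at resume time.
    }
}

When the user submits the form, the host resumes the instance with WorkflowApplication.ResumeBookmark(taskId, formData), and execution continues to the next user task.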
Hope you find it useful.

Where does common/global data go in MVVM?

I'm trying to get my head around MVVM and came up with a test application that I think will give me a good foundation. Suppose my application has a service that goes out every minute and gets the latest flight arrival and departure information at an airport. Now suppose I have 3 different views: InboundView, OutboundView and GateView. The Inbound and Outbound views would simply display the various flight details for inbound and outbound flights that I'm sure we've all seen on the flight boards in the airport. The GateView would display similar flight information but might be sorted by gate # instead of flight #.
So the model for the Flight object would contain the flight data details as well as an instance of a Gate object that would be updated appropriately once a flight arrives.
So all 3 views are using the same flight data service, and I know I can pass an instance of that service to each VM, but then I'd need to hook up the appropriate INPC events at each view model, and that seems less than ideal as the number of views/VMs increases.
Right now, each VM uses a ListCollectionView wrapped around the passed-in collection of flight data, and I just sort/filter based on inbound/outbound, etc. But I was hoping to incorporate the service results into a sort of parent view model that would pass a reference to itself along to the sub-views. Then I could handle all the INPC etc. events at the parent view model level, and changes to a particular flight's data (such as its gate) would automatically trickle down to each of the subviews, instead of being handled separately in each of the VMs.
I've looked into the Messenger framework for MVVM Light but it still seems like each of the sub-VMs would have to register for the message and respond to it individually.
Does that make sense? Am I on the right track here?
So all 3 views are using the same flight data service, and I know I can pass an instance of that service to each VM, but then I'd need to hook up the appropriate INPC events at each view model, and that seems less than ideal as the number of views/VMs increases.
You don't necessarily have to do this if the "service" implements INotifyPropertyChanged. Remember, you can bind to a property within a property, i.e. {Binding Path=FlightService.Gate} or whatever, which may work. (It's difficult to know your requirements here, though.)
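A small sketch of that idea, assuming the service is exposed as a property on each VM (FlightService and its members are invented names):

using System.ComponentModel;
using System.Collections.ObjectModel;

public class Flight
{
    public bool IsInbound { get; set; } // gate, flight #, etc. omitted
}

// The shared service raises INPC itself, so bindings can reach through it.
public class FlightService : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    public ObservableCollection<Flight> Flights { get; } = new ObservableCollection<Flight>();

    private string _status;
    public string Status
    {
        get { return _status; }
        set { _status = value; PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Status))); }
    }
}

// Each VM exposes the same service instance for the View to bind through.
public class InboundViewModel
{
    public FlightService FlightService { get; }

    public InboundViewModel(FlightService service) { FlightService = service; }
}

A view can then use {Binding FlightService.Status} (or bind an ItemsControl to FlightService.Flights) without each VM re-raising change events.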
I've looked into the Messenger framework for MVVM Light but it still seems like each of the sub-VMs would have to register for the message and respond to it individually.
Yes, if you wanted to use a messaging framework, you would need this to be handled in each of the ViewModels. Alternatively, you could use some form of service location or constructor injection to "pull in" the flight service. The latter is my personal preference here.
The advantage of handling this in each VM is that each VM will likely want to handle things somewhat differently (otherwise, why is there more than one?). By pulling a reference to the service in via IoC, you can handle this any way you need to.
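Under constructor injection, each VM takes the shared service and applies its own view-specific ListCollectionView filter, along the lines of this sketch (reusing the FlightService above; IsInbound is a hypothetical flag on Flight):

using System.Windows.Data;

// Same shared data, VM-specific filtering.
public class OutboundViewModel
{
    public ListCollectionView Flights { get; }

    public OutboundViewModel(FlightService service)
    {
        Flights = new ListCollectionView(service.Flights)
        {
            Filter = item => !((Flight)item).IsInbound
        };
    }
}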

Making Catalyst calls from the model?

I'm using Catalyst with Catalyst::Plugin::Authentication and Catalyst::Plugin::Authorization::Roles, and am wondering if there is a better approach to adding an attribute to a model that I'm not seeing.
Each user is permitted to access one or more companies, but there is always one primary (current) company at a time. The permitted list is stored in the database, and database access is primarily through DBIC.
My first inclination is to say that it's the user that has a current company, and thus put it as part of the user model: give the user package a "sub company { … }" to get/set the user's current company. The database check is fairly easy; just use "$self->search_related" (a DBIC method, inherited by the user model).
The problems I run into are:
The current company needs to persist between requests, but I'd rather not store it in the database (it should only persist for this session). The natural place is the session…
There is a role, akin to Unix's root, that allows you to act as any company, ignoring the list in the database. Checking this role can be done through the database, but everywhere else in the app uses $c->assert_user_role and friends.
I've heard it's best to keep the models as Catalyst-independent as possible. It also seems pretty weird to have a model manipulating $c->session.
Of course, I could move those checks to the controllers, and have the model accept whatever the controller sends, but that's violating DRY pretty heavily, and just begging for a security issue if I forget one of the checks somewhere.
Any suggestions? Or do I just shrug and go ahead and do it in the model?
Thanks, and apologies for the title, I couldn't come up with a good one.
The key is to create an instance of the model class for each request, and then pass in the parts of the request you need. In this case, you probably want to pass in a base resultset. Your model will make all the database calls via $self->resultset->..., and it will "just work" for the current user. (If the current user is root, then you just pass in $schema->resultset("Foo"). If the current user is someone else, then pass in $schema->resultset("Foo")->stuff_that_can_be_seen_by($c->user). Your model then no longer cares.)
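The same per-request, pre-restricted data access idea can be sketched outside Perl; here it is in C#/LINQ terms, with all names invented:

using System.Linq;

public class Company
{
    public int Id { get; set; }
    public int OwnerId { get; set; }
}

// The model receives an already-restricted query source per request.
public class CompanyModel
{
    private readonly IQueryable<Company> _companies;

    public CompanyModel(IQueryable<Company> companies) { _companies = companies; }

    // Every query goes through the restricted source; the model
    // no longer knows or cares who the current user is.
    public Company Find(int id)
    {
        return _companies.Single(c => c.Id == id);
    }
}

// Per-request wiring, done where the request context is available:
//   root user:  new CompanyModel(db.Companies)
//   other user: new CompanyModel(db.Companies.Where(c => c.OwnerId == user.Id))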
I have some slides about this, but they are very outdated:
http://www.jrock.us/doqueue-grr/slide95c.html#end
(See the stuff immediately before and after, also.)
Note that restricted resultsets and web ACLs are orthogonal. You want to make the model as tight as possible (so that your app can't accidentally do something you don't want it to, even if the code says to), but various web-only details will still need to be encoded in ACLs. ("You are not allowed to view this page." is different from "You can only delete your own objects, not everyone's". The ACL handles the first case, the restricted resultset handles the second. Even if you write $rs->delete, since the resultset is restricted, you didn't delete everything in the database. You only deleted the things that you have permission to delete. Convenient!)