I am new to ASP.NET MVC and I am trying to implement best practices for a small-to-mid size application that uses a web service as its data source. The web service exposes the following methods to support the application:
AuthenticateCustomer - returns the customer ID if valid email/password
GetCustomer - returns a serialized object containing customer information
Etc
My issue is that all of these services return Success (bool) and Message (string) values depending on the result of the operation. The Message contains descriptive information if an error occurred. I'm not sure whether the invocation of the web services belongs in the Repository layer, but I think it is important to be able to pass the Success and Message values up through the Repository -> Service -> Controller layers. The only way I can think of doing this is by either littering the Repository methods with out arguments:
public int AuthenticateCustomer(string Email, string Password, out bool Success, out string Message);
or create some sort of wrapper that contains the intended return value (integer) and the Success and Message values. However, each web service method returns different values so a one-size-fits-all wrapper would not work. Also, these values would need to be passed up through the Service layer, and it just seems like validation of some sort is happening at the Repository level.
Any thoughts on how to accomplish:
1. Separation of concerns (validation, data access via web service) while ...
2. Having the ability to maintain the feedback received from the web service and pass it through all the way to the View?
P.S. - Sorry for the terse question. It is a bit difficult to explain with any kind of brevity.
Do you need a repository? From what you described, I view the Web Service as the repository. I would then use the Services (Business Layer) to call the Web Service, and have the logic there handle errors or success.
Have the Business Layer convert the Customer object from the Web Service into a Business object, and pass that on to the Controllers.
Repository (Web Services) -> Services (Business Layer) -> Presentation Layer (Controllers)
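To make that concrete, here is a rough sketch of that shape; every type and member name below is invented for illustration and is not your actual web-service proxy:

using System;

// Hypothetical stand-ins for the web-service proxy and its response DTO.
public class GetCustomerResponse
{
    public bool Success { get; set; }
    public string Message { get; set; }
    public CustomerDto Customer { get; set; }
}

public class CustomerDto { public int Id { get; set; } public string Email { get; set; } }

// The business object handed to the controllers.
public class Customer { public int Id { get; set; } public string Email { get; set; } }

public interface ICustomerWebService
{
    GetCustomerResponse GetCustomer(int customerId);
}

// Business layer: calls the web service, inspects Success/Message, and maps
// the DTO onto a business object.
public class CustomerService
{
    private readonly ICustomerWebService _webService;

    public CustomerService(ICustomerWebService webService)
    {
        _webService = webService;
    }

    public Customer GetCustomer(int customerId)
    {
        var response = _webService.GetCustomer(customerId);

        if (!response.Success)
        {
            // The business layer decides how a failure surfaces; throwing is
            // one option, a result object is another.
            throw new InvalidOperationException(response.Message);
        }

        return new Customer { Id = response.Customer.Id, Email = response.Customer.Email };
    }
}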
Martin's answer is good.
I could possibly see a case for a thin layer around the web services to facilitate testing.
If you do go that route and Success = false is not a common condition, you could throw an exception of your own type with the Message wrapped into it. I would only do this if it really was an exceptional condition. If you find your business layer catching the exception just to return an error code to the presentation layer, then it's the wrong approach.
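If you do take the exception route, the wrapper can stay tiny; a sketch (the type name is made up):

using System;

// Thrown by the thin layer around the web service only when Success == false
// and the failure is genuinely exceptional.
public class WebServiceOperationException : Exception
{
    public WebServiceOperationException(string message) : base(message) { }
}

// Usage inside the thin layer:
// if (!response.Success)
//     throw new WebServiceOperationException(response.Message);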
Related
I have a resource called “subscriptions”
I need to update a subscription’s send date. When a request is sent to my endpoint, my server will call a third-party system to update the passed subscription.
“subscriptions” have other types of updates. For instance, you can change a subscription’s frequency. This operation also involves calling a third-party system from my server.
To be truly “RESTful,” must I force these different types of updates to share an endpoint?
PATCH subscriptions/:id
I can hypothetically use my controller behind the endpoint to fire different functions depending on the query string... But what if I need to add a third or fourth “update” type action? Should they ALL run through this single PATCH route?
To be truly “RESTful,” must I force these different types of updates to share an endpoint?
No - but you will often want to.
Consider how you would support this on the web: you might have a number of different HTML forms, each accepting a slightly different set of inputs from the user. When the form is submitted, the browser will use the input controls and form metadata to construct an HTTP (POST) request. The target URI of the request is copied from the form action.
So your question is analogous to: should we use the same action for all of our different forms?
And the answer is yes, if you want the general purpose HTTP application to understand which resource is expected to change in response to the message. One reason that you might want that is cache invalidation; using the right target URI allows all of the caches to understand which previously cached responses should not be reused.
Is that choice free? No - it adds some ambiguity to your access logs, and routing the request to the appropriate handler in your code takes a bit more work.
Trying to use PATCH with a different target URI is a little bit weird, and suggests that maybe you are trying to stretch PATCH beyond the standard constraints.
PATCH (and PUT) have remote authoring semantics; what they mean is "make your copy of the target resource look like my copy". These are methods we would use if we were trying to fix a spelling error on a web page.
Trying to change the representation of one resource by sending a remote authoring request to a different resource makes it harder for general-purpose HTTP application components to add value. You are coloring outside of the lines, and that means accepting the liability if anything goes wrong, because you are using standardized messages in a non-standard way.
That said, it is reasonable to have many different resources that present representations of the same domain entity. Instead of putting everything you know about a user into one web page, you can spread it out among several that are linked together.
You might have, for example, a web page for an invoice, and then another web page for shipping information, and another web page for billing information. You now have a resource model with clearer separation of concerns, and can combine the standardized meanings of PUT/PATCH with this resource model to further your business goals.
We can create as many resources as we need (in the web level; at the REST level) to get a job done. -- Webber, 2011
So, in your example, would I do one endpoint like this: user/:id/invoice/:id, and then another like this: user/:id/billing/:id?
Resources, not endpoints.
GET /invoice/12345
GET /invoice/12345/shipping-address
GET /invoice/12345/billing-address
Or
GET /invoice/12345
GET /shipping-address/12345
GET /billing-address/12345
The spelling conventions that you use for resource identifiers don't actually matter very much.
So if it makes life easier for you to stick all of these into a hierarchy that includes both users and invoices, that's also fine.
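To tie that back to the original subscription example, here is a rough sketch of what separate sub-resources could look like, assuming an ASP.NET Core style server purely for illustration (the resource, route, and type names are all invented):

using System;
using Microsoft.AspNetCore.Mvc;

// Each kind of update gets its own resource; PATCH keeps its
// "make your copy of this resource look like my copy" meaning for each one.
[ApiController]
public class SubscriptionsController : ControllerBase
{
    // PATCH /subscriptions/42/send-date
    [HttpPatch("subscriptions/{id}/send-date")]
    public IActionResult UpdateSendDate(int id, [FromBody] SendDateRepresentation body)
    {
        // call the third-party system here, then return the updated representation
        return Ok(body);
    }

    // PATCH /subscriptions/42/frequency
    [HttpPatch("subscriptions/{id}/frequency")]
    public IActionResult UpdateFrequency(int id, [FromBody] FrequencyRepresentation body)
    {
        return Ok(body);
    }
}

public record SendDateRepresentation(DateTime SendDate);
public record FrequencyRepresentation(string Frequency);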
I've been trying to use Reso Coder's Flutter adaptation of Uncle Bob's Clean Architecture.
My app connects to an API, and most requests (other than logging in) require an authentication token.
Furthermore, upon logging in, user profile information (like a name and profile picture) is received.
I need a way to save this data upon login, and use it in both future API requests and my app's UI.
As I'm new to Uncle Bob's Clean Architecture, I'm not quite sure where this data belongs. Here are the ideas I've come up with, all involving storing the data in a User object:
Store the User in the repository layer, in an authentication feature directory. Other repository-level methods can pass it to the appropriate datasource methods.
This seems to make the most sense; other repository-level methods that make other API calls can use the stored User easily, passing it to methods in the data source layer.
If this is the way to go, I'm not quite sure how other features (that use the API) would access the User - is it okay to have a repository depend on another, and pass the authentication repository to the new feature repository?
Store the User in the repository layer, in an authentication feature directory. Other (non-login) usecases can depend on both this repository and on one relevant to their own feature, passing the User to their repository methods.
This is also breaking the vertical feature barrier, but it may be cleaner than idea 1.
For both these ideas, here's what my repository looks like:
abstract class AuthenticationRepository {
  /// The current user.
  User get currentUser;

  /// True if logged in.
  bool get loggedIn;

  /// Logs in, saving the [User].
  Future<void> login(AuthenticationParams params);

  /// Logs out, disposing of the [User].
  Future<void> logout();

  /// Same as [logout], but logs out of all devices.
  Future<void> logoutAll();

  /// Retrieves stored login credentials.
  Future<AuthenticationParams> retrieveStoredCredentials();
}
Are these ideas "valid", and are there any better ways of doing it?
I see another option to tackle the problem. The solution I want to talk about comes from domain-driven design and is an event-based approach.
In DDD you have the concept of a bounded context. A business object (Uncle Bob's entity) can have different meanings in different bounded contexts. Take a look at your user business object. The data and methods that one use case needs are often different from the data and methods that other use cases use. That's why you have different user objects in different bounded contexts. They are a kind of perspective that each use case has on the same business object.
If a business object is modified in one bounded context, it can emit a business event. Another feature can listen to those events. The event mechanism can either be a simple observer pattern or, if you need to distribute your application features via microservices, a message queue. If you use a simple observer pattern, the event emitter and event handler can run within the same data source transaction, but they can also run in different ones. It depends on your needs.
So when the sign-up use case registers a new user, it emits a UserSignedUpEvent. Other features can now listen to this event. The event carries the information of the user, like the email, the name, the profile image and other information that the user provided during sign-up. Other features can then save the piece of data they need to their own data source. It can be the same one the sign-up use case uses (just other tables or another schema), but it can also be a completely different data source, maybe even another kind of data source like a NoSQL DB. The part I wrote above about transactions is of course more difficult if you have different data sources.
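As a rough, language-agnostic illustration of the simple observer variant (shown here in C#; every name below is invented):

using System.Collections.Generic;
using System.Linq;

// The event carries the data other features may want to copy.
public record UserSignedUpEvent(string UserId, string Email, string Name, string ProfileImageUrl);

public interface IHandles<TEvent>
{
    void Handle(TEvent domainEvent);
}

// A very small in-process publisher (the "simple observer pattern").
public static class DomainEvents
{
    private static readonly List<object> Handlers = new List<object>();

    public static void Subscribe<TEvent>(IHandles<TEvent> handler) => Handlers.Add(handler);

    public static void Publish<TEvent>(TEvent domainEvent)
    {
        foreach (var handler in Handlers.OfType<IHandles<TEvent>>())
            handler.Handle(domainEvent);
    }
}

// Example: the profile feature keeps its own copy of the subset it needs.
public class ProfileFeatureHandler : IHandles<UserSignedUpEvent>
{
    public void Handle(UserSignedUpEvent e)
    {
        // save e.Name and e.ProfileImageUrl to this feature's own data source
    }
}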
The main point is that each feature has its own data and manages it. It might be a copy of the whole user information, but in a lot of cases it is only a subset.
The event-based approach can give you perfect modularization. But, as is always the case when something looks great, it comes at a cost: you have to duplicate some or even all of the data. In a microservice architecture, where some features live in different microservices, this duplication increases the availability of each service. A service can operate even if the main service that manages the data is down, because a local copy exists. But now you have to deal with consistency issues - eventual consistency.
At this point I like to stop and guide you to other sources for details:
Chapter 8: Domain Events, Implementing Domain-Driven Design, Vaughn Vernon
The many meanings of event-driven architectures, Martin Fowler
We have (roughly) following architecture:
Application service does the infrastructure job - fetches data from repositories which are hidden behind interfaces.
Object graph is created and passed to appropriate domain service.
Domain service does its thing and raises appropriate events.
Events are handled in different application services which perform some persistent operations (altering repositories, sending e-mails etc).
However, domain service (3) has become so complex that it requires data from different external APIs, but only if particular conditions are satisfied. For example, if Product X is of type Car, we need to know the price of that car model from some external CatalogService (example invented) hidden behind ICatalogService. This is a potentially expensive operation (a REST call).
How do we go about this?
A. Do we pre-fetch all the data in the Application Service listed as (1), even though we might not need it? Or do we inject the ICatalogService interface into the given Domain Service and fetch the data only when needed? The latter solution might create performance issues if some other client of the Domain Service calls it repeatedly without knowing there is a REST call hidden inside it.
Or did we simply get the domain model wrong?
This question is related to Domain Driven Design.
How do we go about this?
There are two common patterns.
One is to pass the capability to make the query into the domain model, allowing the model to fetch the information itself when it is needed. What this will usually look like is defining an interface / a contract that will be consumed by the domain model, but implemented in the application/infrastructure layers.
The other is to extend the protocol between the domain model and the application, so that we can signal to the application layer what information is needed, and then the application code can decide how to provide it. You end up with something like a state machine for the processes, with the application code coordinating the exchange of information between the external api and the domain model.
If you use a bit of imagination, you've already got a state machine something like this, as your application code is already coordinating the movement of inputs between the repository and the domain model. The difference, of course, is that the existing "state machine" is simple and linear enough that it may not be obvious that there is a state machine present at all.
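A sketch of the first pattern, reusing the invented CatalogService example from the question (the method and type names here are assumptions):

// Contract defined alongside the domain model, in domain terms.
public interface ICatalogService
{
    decimal GetPriceFor(string carModel);
}

public enum ProductType { Car, Other }
public record Product(ProductType Type, string Model, decimal ListPrice);

// The domain service consumes the contract and only calls it when needed.
public class PricingDomainService
{
    private readonly ICatalogService _catalog;

    public PricingDomainService(ICatalogService catalog) => _catalog = catalog;

    public decimal PriceProduct(Product product)
    {
        if (product.Type == ProductType.Car)
            return _catalog.GetPriceFor(product.Model); // the expensive call happens only on this branch
        return product.ListPrice;
    }
}

// Implemented in the application/infrastructure layer, e.g. over the REST client.
public class RestCatalogService : ICatalogService
{
    public decimal GetPriceFor(string carModel)
    {
        // call the external CatalogService here
        return 0m;
    }
}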
How exactly would you signal the application layer?
Simple queries - which is to say, the application code pulls the information it needs out of the domain model and uses that information to compute the next action. When the action is completed, the application code pushes information to the domain model.
There isn't enough information to give you good, targeted advice. I suspect you need to refactor your domains into further subdomains. It sounds like your domain service has far more than one responsibility. Keep the service simple.
In addition, if you have a long-running task like a service call that takes a long time, then you need to architect it away. The most supple design will not keep the consumer waiting; it will return immediately with some sort of result to the user, even if it's simply a periodic status update.
I am working on a WinForms application using Entity Framework 6 with the following layers:
Presentation
Application
Domain
Infrastructure
When a user clicks the save button in the UI, it calls an application service in the Application layer and passes in the request. The application service then calls a domain service with the request. The domain service calls on several entities within the domain model to perform validations on data used in the request.
One or more validations in the domain model require information from a repository to determine if data in the request received from the Presentation Layer conforms to certain business rules.
I am considering two options to address this.
1. Have the Application Service retrieve the information needed from the repositories for validation and pass those values into the Domain Service, which will call on the domain model and entities to validate the incoming request for rules and values. Then let the Application Service save the request when the Domain Service has finished its validations, returning control to the Application Service, which was synchronously waiting for completion of the validations. If I do this, the domain layer will have no direct or indirect (injected) reference to the repositories. Unit testing of the Domain Service will be easier because nothing is injected into it to perform validations; everything it needs is already passed in. The drawback is that some business knowledge is put into the Application Service, because it now needs to know which repository information to retrieve for validation of a request.
2. When calling the Domain Service for validation of the request, inject an instance of the Application Service into it. The Domain Service can then get information from the repository using the injected Application Service, whose service contract is defined in the Domain Layer. Once all the information is available, it is passed as needed to various entities to validate rules and values. Once validation is completed, the Domain Service saves the request using the injected Application Service. When the Domain Service is done and exits, it returns the status of the save operation to the Application Service, which has been waiting for validation to complete. The outer, waiting Application Service can then return the results of the save to the UI. One concern I have here is that when unit testing the Domain Service, I will have to mock the injected Application Service.
Which option or other course of action would work out better? Thanks in advance.
"Should An Application Service Be Injected Into A Domain Service"
No, never!
Resolving the data from the application service and passing it to the domain service is usually fine, but if you think that domain logic is leaking then
you can apply the Interface Segregation Principle (ISP) and define an interface in the domain based on what contract is required to query for the "wanted data". Implement that interface on your repository or any other object that can fulfill the task and inject it into your domain service.
E.g. (pseudo-code)
//domain
public interface WantedDataProvider {
    WantedData findWantedData(...);
}

public class SomeDomainService {
    WantedDataProvider wantedDataProvider; // injected, e.g. via the constructor

    public SomeDomainService(WantedDataProvider wantedDataProvider) {
        this.wantedDataProvider = wantedDataProvider;
    }
}

//infrastructure
public class SomeRepository implements WantedDataProvider {
    public WantedData findWantedData(...) {
        //implementation, e.g. a query against the data store
    }
}
EDIT:
I have a request aggregate root with an employee name. One rule is the employee must be full time and not a contractor.
If the information to perform the validation already exists on an aggregate, you may also use this AR as a factory for other ARs. Assuming an employee holds its contract type...
Employee employee = employeeRepository.findById(employeeId);
Request request = employee.submitRequest(requestDetails); //throws if not full time
requestRepository.add(request);
Note that the invariant can only be made eventually consistent here, unless you change your aggregate boundaries, but it's the same with the other solutions.
When a significant process or transformation in the domain is not a natural responsibility of an Entity or Value Object, add an operation to the model as a standalone interface declared as a Service. Define the interface in terms of the language of the model and make sure the operation name is part of the Ubiquitous Language. Make the Service stateless - Evans, the Blue Book
Don't lean too heavily toward modelling a domain concept as a Service. Do so only if the circumstances fit. If we aren't careful, we might start to treat Services as our modelling "silver bullet". Using Services overzealously will usually result in the negative consequences of creating an Anemic Domain Model, where all the domain logic resides in Services. - Vernon, the Red Book
What I am trying to say with all this quoting is that you seem to see Domain Services as something you MUST have, when you absolutely don't have to use them. Your application services can happily use repositories to fetch the aggregate and then call the aggregate root's methods to perform the necessary operations. Your aggregate root is responsible for protecting its own invariants by executing the necessary validations inside itself and throwing a validation exception if anything goes wrong.
In case you absolutely must use a domain service: if you are doing it right, meaning you implement an onion architecture where inner layers do not know about outer layers, your domain service won't be able to know about the application service. You can, if you absolutely have to, pass a delegate into the domain service to do something that the application service desires, but that scenario is usually more complex than the problem warrants. Seeing that you only require validation, please read the cited parts of the Blue and Red Books and make the right decision. Again, there is no mandatory domain service in DDD; that is a common misconception.
What are "best practices" concerning error handling in an ASP.NET MVC2 web app that is DDD designed? For example, let's take the most common aspect of a web app, the login:
UserController: Obviously coordinates a few domain objects to eventually log in or refuse the user, and redirects to other parts of the web interface as needed. In my case, it's a few calls to different UserTasks methods like IsLoggedIn() or LogIn(), plus some RedirectToAction.
UserTasks: Has the meat of the work of coordinating relevant domain objects and services, like SecurityService and lower domain objects, such as calling SecurityService.ValidateUser() or checking User.IsUserInactive().
SecurityService: Obviously coordinates authentication/authorization services. Similar to a MembershipProvider, without the excess baggage.
User: Represents the user. Not anemic, as it has various User-specific methods such as IsUserInactive() that check IsDeleted, IsLockedOut, or whether the user is between FromDt and ThruDt.
How do you bubble up errors so that they are informative and not hostile to users? Do you litter the code with exceptions and then just handle them all in Application_Error()? For example, should ValidateUser() throw an ArgumentNullException when the password is empty and an AuthenticationException when the password isn't right, or should it return bool = false? If the latter, how do you inform the user of what caused the validation to fail?
I'm assuming you're using WhoCanHelpMe / S#arp Architecture based on the naming conventions I see? If so, I'd highly recommend a look at this article, which demonstrates the implementation of a cleaner application services layer. Have a look at the ActionConfirmation result being returned from the service layer; we have found this an ideal way to return a less-than-nasty error result from the Tasks layer.
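If you are not on that stack, the same idea is easy to roll yourself. A rough sketch of such a result type (this is not the actual ActionConfirmation API; all names are invented):

// A small result object the Tasks layer returns instead of throwing
// for expected failures like a wrong password.
public class TaskResult
{
    public bool WasSuccessful { get; private set; }
    public string Message { get; private set; }

    public static TaskResult Success() =>
        new TaskResult { WasSuccessful = true, Message = string.Empty };

    public static TaskResult Failure(string message) =>
        new TaskResult { WasSuccessful = false, Message = message };
}

// In the controller: map the message to ModelState so the view can show it.
// var result = userTasks.LogIn(email, password);
// if (!result.WasSuccessful)
//     ModelState.AddModelError("", result.Message);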