What are "best practices" concerning error handling in an ASP.NET MVC2 web app that is DDD designed? For example, let's take the most common aspect of a web app, the login:
UserController: Obviously coordinates a few domain objects to eventually log in or refuse the user, and redirects to other parts of the web interface as needed. In my case, it's a few calls to different UserTasks methods like IsLoggedIn() or LogIn(), plus some RedirectToAction.
UserTasks: Has the meat of the work of coordinating the relevant domain services (like SecurityService) and lower domain objects, such as calling SecurityService.ValidateUser() or checking User.IsUserInactive().
SecurityService: Obviously coordinates authentication/authorization services. Similar to a MembershipProvider, without the excess baggage.
User: Represents the user. Not anemic, as it has various User-specific methods such as IsUserInactive(), which checks IsDeleted, IsLockedOut, and whether the user is between FromDt and ThruDt.
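For illustration, the User entity looks roughly like this (simplified sketch):

public class User
{
    public bool IsDeleted { get; set; }
    public bool IsLockedOut { get; set; }
    public DateTime FromDt { get; set; }
    public DateTime ThruDt { get; set; }

    // Behavior lives on the entity rather than in an external service.
    public bool IsUserInactive()
    {
        var now = DateTime.Now;
        return IsDeleted || IsLockedOut || now < FromDt || now > ThruDt;
    }
}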
How do you bubble up errors so that they are informative and not hostile to users? Do you litter the code with exceptions and then just handle them all in Application_Error()? For example, should ValidateUser() throw an ArgumentNullException when the password is empty and an AuthenticationException when the password isn't right, or just return false? If the latter, how do you inform the user of what caused the validation to fail?
I'm assuming you're using WhoCanHelpMe / S#arp Architecture, based on the naming conventions I see? If so, I'd highly recommend a look at this article, which demonstrates the implementation of a cleaner application services layer. Have a look at the ActionConfirmation result being returned from the service layer; we have found this an ideal way to return a less-than-nasty error result from the Tasks layer.
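The idea, roughly (a sketch of the pattern, not the exact S#arp type; ISecurityService.ValidateUser() returning null on a failed check is an assumption here):

public class ActionConfirmation<T>
{
    public bool WasSuccessful { get; private set; }
    public string Message { get; private set; }
    public T Value { get; private set; }

    public static ActionConfirmation<T> Success(T value)
    {
        return new ActionConfirmation<T> { WasSuccessful = true, Value = value };
    }

    public static ActionConfirmation<T> Failure(string message)
    {
        return new ActionConfirmation<T> { WasSuccessful = false, Message = message };
    }
}

public interface ISecurityService
{
    User ValidateUser(string email, string password); // null when invalid
}

// In UserTasks: validation failures become data, not exceptions.
public class UserTasks
{
    private readonly ISecurityService securityService;

    public UserTasks(ISecurityService securityService)
    {
        this.securityService = securityService;
    }

    public ActionConfirmation<User> LogIn(string email, string password)
    {
        if (string.IsNullOrEmpty(password))
            return ActionConfirmation<User>.Failure("Please enter your password.");

        var user = securityService.ValidateUser(email, password);
        return user == null
            ? ActionConfirmation<User>.Failure("The email or password is incorrect.")
            : ActionConfirmation<User>.Success(user);
    }
}

The controller then branches on WasSuccessful and surfaces Message via ModelState, so users see a friendly explanation and Application_Error() is reserved for genuinely unexpected failures.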
I've been trying to use Reso Coder's Flutter adaptation of Uncle Bob's Clean Architecture.
My app connects to an API, and most requests (other than logging in) require an authentication token.
Furthermore, upon logging in, user profile information (like a name and profile picture) is received.
I need a way to save this data upon login, and use it in both future API requests and my app's UI.
As I'm new to Uncle Bob's Clean Architecture, I'm not quite sure where this data belongs. Here are the ideas I've come up with, all involving storing the data in a User object:
Store the User in the repository layer, in an authentication feature directory. Other repository-level methods can pass it to the appropriate datasource methods.
This seems to make the most sense; other repository-level methods that make other API calls can use the stored User easily, passing it to methods in the data source layer.
If this is the way to go, I'm not quite sure how other features (that use the API) would access the User - is it okay to have a repository depend on another, and pass the authentication repository to the new feature repository?
Store the User in the repository layer, in an authentication feature directory. Other (non-login) usecases can depend on both this repository and on one relevant to their own feature, passing the User to their repository methods.
This also breaks the vertical feature barrier, but it may be cleaner than idea 1.
For both these ideas, here's what my repository looks like:
abstract class AuthenticationRepository {
  /// The current user.
  User get currentUser;

  /// True if logged in.
  bool get loggedIn;

  /// Logs in, saving the [User].
  Future<void> login(AuthenticationParams params);

  /// Logs out, disposing of the [User].
  Future<void> logout();

  /// Same as [logout], but logs out of all devices.
  Future<void> logoutAll();

  /// Retrieves stored login credentials.
  Future<AuthenticationParams> retrieveStoredCredentials();
}
Are these ideas "valid", and are there any better ways of doing it?
I see another option to tackle the problem. The solution I want to talk about comes from domain-driven design and is an event-based approach.
In DDD you have the concept of a bounded context. A business object (Uncle Bob's entity) can have different meanings in different bounded contexts. Take a look at your user business object. The data and methods that one use case uses are often different from the data and methods that other use cases use. That's why you have different user objects in different bounded contexts. They are a kind of perspective that each use case has on the same business object.
If a business object is modified in one bounded context, it can emit a business event. Another feature can listen to those events. The event mechanism can either be a simple observer pattern or, if you need to distribute your application features via microservices, a message queue. If you use a simple observer pattern, the event emitter and event handler can run within the same data source transaction. But they can also run in different ones. It depends on your needs.
So when the sign-up use case registers a new user, it emits a UserSignedUpEvent. Other features can now listen to this event. The event carries the information of the user, like the email, the name, the profile image, and other information that the user provided during sign-up. Other features can now save the piece of data they need to their own data source. It can be the same one the sign-up use case uses (just other tables or another schema). But it is also possible that it is a completely different data source, maybe another kind of data source like a NoSQL DB. The part I wrote above about transactions is of course more difficult if you have different data sources.
The main point is that each feature has its own data and manages it. It might be a copy of the whole user information, but in a lot of cases it is only a subset.
The event-based approach can give you perfect modularization. But as always when something looks great, it comes at a cost. You have to duplicate some or even all of the data. In a microservice architecture, where some features live in different microservices, the duplication increases the availability of the service: it can operate even when the main service that manages the data is down, because a local copy exists. But now you have to deal with consistency issues - eventual consistency.
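A minimal sketch of the in-process observer variant (shown in C# for brevity; the shape is the same in Dart, and all names are illustrative):

using System.Collections.Generic;

public class UserSignedUpEvent
{
    public string Email { get; set; }
    public string Name { get; set; }
    public string ProfileImageUrl { get; set; }
}

public interface IHandle<TEvent>
{
    void Handle(TEvent evt);
}

// A feature stores only the slice of user data it needs, in its own data source.
public class ProfileFeature : IHandle<UserSignedUpEvent>
{
    public void Handle(UserSignedUpEvent evt)
    {
        // Save evt.Name and evt.ProfileImageUrl to this feature's own storage.
    }
}

public class SignUpUseCase
{
    private readonly List<IHandle<UserSignedUpEvent>> listeners =
        new List<IHandle<UserSignedUpEvent>>();

    public void Subscribe(IHandle<UserSignedUpEvent> listener)
    {
        listeners.Add(listener);
    }

    public void SignUp(string email, string name, string imageUrl)
    {
        // ... persist the new user in this context's own data source ...

        var evt = new UserSignedUpEvent { Email = email, Name = name, ProfileImageUrl = imageUrl };
        foreach (var listener in listeners)
            listener.Handle(evt); // same transaction, or hand off to a message queue
    }
}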
At this point I'd like to stop and guide you to other sources for details:
Chapter 8: Domain Events, Implementing Domain-Driven Design, Vaughn Vernon
The many meanings of event-driven architectures, Martin Fowler
We have (roughly) the following architecture:
1. Application service does the infrastructure job - fetches data from repositories, which are hidden behind interfaces.
2. Object graph is created and passed to the appropriate domain service.
3. Domain service does its thing and raises appropriate events.
4. Events are handled in different application services, which perform some persistent operations (altering repositories, sending e-mails, etc.).
However, the domain service (3) has become so complex that it requires data from different external APIs, but only if particular conditions are satisfied. For example - if Product X is of type Car, we need to know the price of that car model from some external CatalogService (invented example) hidden behind ICatalogService. This is a potentially expensive operation (a REST call).
How do we go about this?
A. Do we pre-fetch all the data in the application service listed as (1), even though we might not need it? B. Do we inject the ICatalogService interface into the given domain service and fetch data only when needed? The latter solution might create performance issues if some other client of the domain service calls it repeatedly without knowing there is a REST call hidden inside.
Or did we simply get the domain model wrong?
This question is related to Domain Driven Design.
How do we go about this?
There are two common patterns.
One is to pass the capability to make the query into the domain model, allowing the model to fetch the information itself when it is needed. What this will usually look like is defining an interface / a contract that will be consumed by the domain model, but implemented in the application/infrastructure layers.
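A sketch of this first option, using the question's ICatalogService example (Product/ProductType are invented here for illustration):

public enum ProductType { Car, Other }

public class Product
{
    public ProductType Type { get; set; }
    public string Model { get; set; }
    public decimal BasePrice { get; set; }
}

// Contract owned by the domain model...
public interface ICatalogService
{
    decimal GetModelPrice(string model);
}

// ...implemented in the infrastructure layer (the REST client) and passed in.
public class PricingService
{
    private readonly ICatalogService catalog;

    public PricingService(ICatalogService catalog)
    {
        this.catalog = catalog;
    }

    public decimal PriceOf(Product product)
    {
        // The expensive call happens only when the condition actually demands it.
        return product.Type == ProductType.Car
            ? catalog.GetModelPrice(product.Model)
            : product.BasePrice;
    }
}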
The other is to extend the protocol between the domain model and the application, so that we can signal to the application layer what information is needed, and then the application code can decide how to provide it. You end up with something like a state machine for the processes, with the application code coordinating the exchange of information between the external api and the domain model.
If you use a bit of imagination, you've already got a state machine something like this, since your application code is already coordinating the movement of inputs to the repository and the domain model. The difference, of course, is that the existing "state machine" is simple and linear enough that it may not be obvious that there is a state machine present at all.
How exactly would you signal the application layer?
Simple queries - which is to say, the application code pulls the information it needs out of the domain model and uses that information to compute the next action. When the action is completed, the application code pushes information to the domain model.
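A sketch of that pull/push exchange, reusing the Product type from the sketch above (all other names invented for illustration):

public interface IProductRepository { Product Get(int id); }
public interface ICatalogClient { decimal FetchPrice(string model); }

// The domain model exposes what it needs; it never sees the REST client.
public class PricingProcess
{
    private readonly Product product;
    private decimal? price;

    public PricingProcess(Product product) { this.product = product; }

    public bool NeedsCatalogPrice
    {
        get { return product.Type == ProductType.Car; }
    }

    public string Model
    {
        get { return product.Model; }
    }

    public void ApplyCatalogPrice(decimal catalogPrice)
    {
        price = catalogPrice;
    }

    public void Complete()
    {
        price = price ?? product.BasePrice; // then raise events, etc.
    }
}

// Application layer: coordinates between the domain model and the external API.
public class PricingApplicationService
{
    private readonly IProductRepository products;
    private readonly ICatalogClient catalog;

    public PricingApplicationService(IProductRepository products, ICatalogClient catalog)
    {
        this.products = products;
        this.catalog = catalog;
    }

    public void Price(int productId)
    {
        var pricing = new PricingProcess(products.Get(productId));

        // Pull: the application asks the model what it needs next.
        if (pricing.NeedsCatalogPrice)
        {
            var price = catalog.FetchPrice(pricing.Model); // the expensive REST call
            pricing.ApplyCatalogPrice(price);              // Push: hand the answer back.
        }

        pricing.Complete();
    }
}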
There isn't enough information to give you targeted advice. I suspect you need to refactor your domains into further subdomains. It sounds like your domain service has way more than one responsibility. Keep the service simple.
In addition, if you have a long-running task, like a service call that takes a long time, you need to architect it away. The most supple design will not keep the consumer waiting. It'll return immediately with some sort of result to the user, even if it's simply a periodic status update.
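For instance, instead of blocking on the slow call, the service might hand back a job id the consumer can poll (a sketch; names are illustrative):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class SlowOperationService
{
    private readonly ConcurrentDictionary<Guid, string> statuses =
        new ConcurrentDictionary<Guid, string>();

    public Guid Start()
    {
        var id = Guid.NewGuid();
        statuses[id] = "Pending";

        Task.Run(() =>
        {
            // ... the long-running external call goes here ...
            statuses[id] = "Completed";
        });

        return id; // The consumer gets an answer immediately.
    }

    public string GetStatus(Guid id)
    {
        string status;
        return statuses.TryGetValue(id, out status) ? status : "Unknown";
    }
}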
In REST services, we normally use GET requests when we want to retrieve some data from the server; however, we can also retrieve data using a POST request.
We use POST to create, PUT to update, and DELETE to delete; however, we could even create new data using a DELETE request.
So I was just wondering: what is the real reason behind that? Why are these conventions used?
So I was just wondering: what is the real reason behind that? Why are these conventions used?
So the world doesn't fall apart!
No, but seriously, why are any protocols or standards created? Take this historical scenario. Back in the early days of Google, many developers (relative to nowadays) weren't too savvy about the HTTP protocol. What you might've found was a bunch of sites that just made use of the well-known (maybe only known) GET method. So there would be links that were GET requests, but that performed operations meant for POST requests - operations that changed the state of the server (sometimes very important changes of state). Enter Google, who spends its days crawling the web. So now you have all these links that Google is crawling - all these GET requests changing the state of servers. These companies are getting a bunch of hits that change state on their servers. They all think they're being attacked! But Google isn't doing anything wrong. HTTP semantics state that GET requests should not have state-changing behavior; GET should be a "read only" method. So finally these companies smartened up and started following HTTP semantics. True story.
The moral of the story: follow protocols; that's what they're there for - to follow.
You seem to be looking at it from the perspective of server implementation. Yes, you can implement your server to accept a DELETE request to "get" something. That's not really the matter at hand. When implementing the server, you need to think about what the client expects. Ultimately, you are creating an API. Look at it from a code API perspective:
public class Foo {
    public Bar bar;

    public Bar deleteBar() {
        return bar; // Really?!
    }

    public void getBar() {
        bar = null; // What the..??!
    }
}
I don't know how long a developer would last in the game writing code like that. Any caller expecting to "get" Bar (simply by naming semantics) has another thing coming. The same goes for your REST services. It is ultimately a web API, and it should follow the semantics of the protocol (namely HTTP) on which it is built. Those who understand the protocol will have an idea of what the API does (at least in the CRUD sense), simply based on the type of request they make.
My suggestion to you, or anyone trying to learn REST, is to get a good handle on HTTP. I would keep the following document handy. Read it once, then keep it as a reference:
Hypertext Transfer Protocol (HTTP/1.1): Semantics and Content
GET responses are cached by proxies; POST and DELETE responses are not!
Yes, you can create data with GET, but then you have to invalidate those cached responses. Why do the extra work?
Also, the maximum accepted URL and header sizes differ, because each method has a different intended usage.
I recommend reading the spec, which clearly states how each HTTP method should be used.
Why are these conventions used?
They are conventions, that is, best practices that have been adopted as a standard. You do not have to adhere to the standard, but most consumers of a REST service assume you do. That way it is easier to understand the implementation / interface.
I am new to ASP.NET MVC and I am trying to implement best practices for a small-to-mid size application that uses a web service as its data source. The web service exposes the following methods to support the application:
AuthenticateCustomer - returns the customer ID if valid email/password
GetCustomer - returns a serialized object containing customer information
Etc
My question is, all of these services return Success (bool) and Message (string) values depending on the result of the operation. The message contains descriptive information if an error occurred. I'm not sure if the invocation of the web services belongs in the Repository layer, but I think it is important to be able to pass the Success and Message values up through the Repository -> Service -> Controller layers. The only way I can think of doing this is by either littering the Repository methods with out arguments:
public int AuthenticateCustomer(string email, string password, out bool success, out string message);
or create some sort of wrapper that contains the intended return value (integer) and the Success and Message values. However, each web service method returns different values so a one-size-fits-all wrapper would not work. Also, these values would need to be passed up through the Service layer, and it just seems like validation of some sort is happening at the Repository level.
Any thoughts on how to accomplish:
1. Separation of concerns (validation, data access via web service) while ...
2. Having the ability to maintain the feedback received from the web service and pass it through all the way to the View?
P.S. - Sorry for the terse question. It is a bit difficult to explain with any kind of brevity.
Do you need a repository? From what you described, I am viewing the Web Service as the repository. I would then use the Services (Business Layer) to call to the Web Service, and have the logic in there handle errors or success.
Have the Business Layer convert the Customer object from the Web Service into a Business object, and pass that on to the Controllers.
Repository (Web Services) -> Services (Business Layer) -> Presentation Layer (Controllers)
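In code, that conversion might look something like this (all names illustrative; the response shape follows the Success/Message convention from the question):

using System;

public class Customer { public int Id { get; set; } public string Email { get; set; } }

public class GetCustomerResponse
{
    public bool Success { get; set; }
    public string Message { get; set; }
    public Customer Customer { get; set; }
}

public interface ICustomerWebService
{
    GetCustomerResponse GetCustomer(int customerId);
}

// Business layer: call the web service, react to Success/Message,
// and hand a business object (not the wire object) to the controller.
public class CustomerService
{
    private readonly ICustomerWebService webService;

    public CustomerService(ICustomerWebService webService)
    {
        this.webService = webService;
    }

    public Customer GetCustomer(int customerId)
    {
        var response = webService.GetCustomer(customerId);
        if (!response.Success)
            throw new InvalidOperationException(response.Message);

        // Map the serialized wire object into a business object.
        return new Customer
        {
            Id = response.Customer.Id,
            Email = response.Customer.Email
        };
    }
}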
Martin's answer is good.
I could possibly see a case for a thin layer around the web services to facilitate testing.
If you do go that route and Success = false is not a common condition, you could throw an exception of your own type with Success and Message wrapped into it. I would only do this if it really was an exceptional condition. If you find your business layer catching the exception just to return an error code to the presentation layer, then it's the wrong approach.
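That exception type can be as small as this (a sketch; WebServiceException is an invented name):

public class WebServiceException : Exception
{
    public WebServiceException(string message) : base(message) { }
}

// In the thin wrapper, only for the truly exceptional case:
// if (!response.Success)
//     throw new WebServiceException(response.Message);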
I'm trying to develop a simple REST API. I'm still trying to understand the basic architectural paradigms for it. I need some help with the following:
"Resources" should be nouns, right? So, I should have "user", not "getUser", right?
I've seen this approach in some APIs: www.domain.com/users/ (returns list), www.domain.com/users/user (do something specific to a user). Is this approach good?
In most examples I've seen, the input and output values are usually just name/value pairs (e.g. color='red'). What if I wanted to send or return something more complex than that? Am I forced to deal with XML only?
Assume a PUT to /user/ method to add a new user to the system. What would be a good format for input parameter (assume the only fields needed are 'username' and 'password')? What would be a good response if the user is successful? What if the user has failed (and I want to return a descriptive error message)?
What is a good & simple approach to authentication & authorization? I'd like to restrict most of the methods to users who have "logged in" successfully. Is passing username/password at each call OK? Is passing a token considered more secured (if so, how should this be implemented in terms of expiration, etc.)?
For point 1, yes. Nouns are expected.
For point 2, I'd expect /users to give me a list of users. I'd expect /users/123 to give me a particular user.
For point 3, you can return anything. Your client can specify what it wants. e.g. text/xml, application/json etc. by using an HTTP request header, and you should comply as much as you can with that request (although you may only handle, say, text/xml - that would be reasonable in a lot of situations).
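In ASP.NET MVC terms (the stack used elsewhere in this thread), honoring the Accept header might look roughly like this (a simplified sketch; real content negotiation would also weigh q-values, and UserDto is an invented name):

using System.IO;
using System.Linq;
using System.Web.Mvc;
using System.Xml.Serialization;

public class UserDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class UsersController : Controller
{
    public ActionResult Show(int id)
    {
        var user = new UserDto { Id = id, Name = "Alice" }; // stand-in for a real lookup

        // Naive negotiation: return JSON if the client asked for it.
        var accepts = Request.AcceptTypes ?? new string[0];
        if (accepts.Contains("application/json"))
            return Json(user, JsonRequestBehavior.AllowGet);

        // Otherwise fall back to XML.
        var serializer = new XmlSerializer(typeof(UserDto));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, user);
            return Content(writer.ToString(), "text/xml");
        }
    }
}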
For point 4, I'd expect POST to create a new user. PUT would update an existing object. For reporting success or errors, you should be using the existing HTTP success/error codes. e.g. 200 OK. See this SO answer for more info.
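For example, a client creating a user and checking the status code (the URL and body are illustrative):

using System;
using System.IO;
using System.Net;

class CreateUserExample
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://www.domain.com/users");
        request.Method = "POST";
        request.ContentType = "application/json";

        using (var writer = new StreamWriter(request.GetRequestStream()))
            writer.Write("{\"username\":\"alice\",\"password\":\"secret\"}");

        // 201 Created signals success; GetResponse() throws a WebException
        // for 4xx/5xx, whose response body can carry the descriptive error.
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine((int)response.StatusCode);
        }
    }
}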
The most important constraint of REST is the hypermedia constraint ("hypertext as the engine of application state"). Think of your web application as a state machine where each state can be requested by the client (e.g. GET /user/1). Once the client has one such state (think: a user looking at a web page), it sees a bunch of links that it can follow to go to the next state in the application. For example, there might be a link from the 'user state' that the client can follow to go to the details state.
This way, the server presents the client with the application's state machine one state at a time, at runtime. The clever thing: since the state machine is discovered at runtime, one state at a time, the server can dynamically change the state machine at runtime.
Having said that...
on 1. The resources essentially represent the application states you want to present to the client. They will often closely match domain objects (e.g. user), but make sure you understand that the representations you provide for them are not simply serialized domain objects but states of your web application.
Thinking in terms of GET /users/123 is fine. Do NOT place any action inside a URI. Although not harmful (it is just an opaque string) it is confusing to say the least.
on 2. As Brian said. You might want to take a look at the Atom Publishing Protocol RFC (5023) because it explains create/read/update cycles pretty well.
on 3. Focus on document oriented messages. Media types are an essential part of REST because they provide the application semantics (completely). Do not use generic types such as application/xml or application/json as you'll couple your clients and servers around the often implicit schema. If nothing fits your needs, just make up your own type.
Maybe you are interested in an example I am hacking together using UBL: http://www.nordsc.com/blog/?cat=13
on 4. Normally, use POST /users/ for creation. Have a look at RFC 5023 - this will clarify that. It is an easy to understand spec.
on 5. Since you cannot use sessions (a stateful server) and still be RESTful, you have to send credentials in every request. The various HTTP auth schemes handle that already. It is also important with regard to caching, because the HTTP Authorization header has special specified semantics for caches (no public caching). If you stuff your credentials into a cookie, you lose that important piece.
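Concretely, with HTTP Basic auth the credentials ride along on every request (a sketch; URL and credentials are illustrative):

using System;
using System.Net;
using System.Text;

class BasicAuthExample
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://www.domain.com/users/123");

        // No server-side session: each request carries its own Authorization header.
        var token = Convert.ToBase64String(Encoding.ASCII.GetBytes("alice:secret"));
        request.Headers["Authorization"] = "Basic " + token;

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine(response.StatusCode);
        }
    }
}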
All HTTP status codes have a certain application semantic. Use them, do not tunnel your own error semantics through HTTP.
You can come visit #rest IRC or join rest-discuss on Yahoo for detailed discussions.
Jan