Is it good to return the domain model from a REST API in a DDD application?

If you were to have a REST layer on top of your DDD app for CRUD, would you let the REST layer return the domain model (in terms of data), say for a GET?

Generally, you'd want to be able to change your domain objects (for instance when you learn something new about the domain), without having to change a public interface/API to your system. Same thing the other way around: if a change is required to a public interface, you don't want to have to change your domain model.
So from this perspective I'd never expose my domain objects as-is over a public interface. Instead I'd create data transfer objects (DTOs) that are part of the public interface. This way, my domain model and my public API can change independently.
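A minimal sketch of that separation in Go (all type and field names here are illustrative, not taken from the question):

```go
package main

import "fmt"

// Invoice is the domain object; it evolves with the domain model.
type Invoice struct {
	ID    int
	Lines []Line
}

type Line struct {
	Description string
	Cents       int
}

// Total is domain behavior that stays on the domain object.
func (inv Invoice) Total() int {
	total := 0
	for _, l := range inv.Lines {
		total += l.Cents
	}
	return total
}

// InvoiceDTO is the public contract of the REST API; it is changed
// deliberately and independently of the domain type above.
type InvoiceDTO struct {
	ID         int `json:"id"`
	TotalCents int `json:"totalCents"`
}

// toDTO is the single place that maps domain state onto the public shape.
func toDTO(inv Invoice) InvoiceDTO {
	return InvoiceDTO{ID: inv.ID, TotalCents: inv.Total()}
}

func main() {
	inv := Invoice{ID: 1, Lines: []Line{{"widget", 500}, {"gadget", 250}}}
	fmt.Printf("%+v\n", toDTO(inv)) // {ID:1 TotalCents:750}
}
```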

You should not expose the DDD model. This is absolutely correct, because a SOA frontend should not expose implementation details to clients. Your users should depend on a business function, not an implementation detail… But this assumes a nice design of several, maybe heterogeneous, applications united into a SOA bus.
I would like to add to the answer because the mention of a CRUD interface makes me think this could be a case of SOA abuse, where SOA principles are used to glue the layers of an application instead of a network of applications. SOA is meant as a way for the enterprise to connect its systems; it is not a way to implement MVC! So simple yet so misunderstood. For example, just because your front-end GUI uses services to access the backend, you do not have a "SOA application"… whatever that means.
If this is a case of SOA used to glue layers, please revise your design and use an appropriate design architecture for that level of abstraction. Otherwise you will misinterpret the recommendations found here about not exposing the DDD model and not using CRUD, and you will surely end up creating a separate domain model for the services interface, which you will then have to map to the DDD model. That gets so complicated that you will need Dozer and the like to map the same thing under different names, and so on, until you end up with a bloated, unmaintainable mess…
... just be careful.
-Alex
Redzedi is so right that we need a clarification....
Like everything, this is much easier to say than to do. Serializing a complex domain model can be so difficult that you end up either putting no logic in the domain, the anemic model antipattern (http://martinfowler.com/bliki/AnemicDomainModel.html), or keeping a separate anemic model for persistence, i.e. DTOs.
I don't know which is worse, but both options are bad. You should put the logic that belongs in the model in the model, and you should be able to serialize it directly everywhere.
In my experience using the domain model for many years, I believe the best thing is a point in the middle. Yes, as Fowler and Evans state, business objects should carry logic, but not all of it (http://codebetter.com/gregyoung/2009/07/15/the-anemic-domain-model-pattern/); a little anemia with a nice service layer is best.
For example, an invoice should know about its items and have a procedure to calculate its total, which depends on the items. But an invoice's item does not need to know about invoicing. So what happens when an item's cost changes: should the item hold a pointer back to its parent invoice, a circular reference, and call the invoice's total-calculation procedure?
I believe not. I think that is a task for the service layer, which should receive the event first and then orchestrate the procedure, without having to couple all the business objects together for implementation purposes and without violating the business interaction rules, which are what a domain model is for.
-Alex
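A minimal Go sketch of the arrangement described above; the PricingService and all names are invented for illustration:

```go
package main

import "fmt"

// Item knows its own cost but nothing about invoicing: no back-pointer.
type Item struct {
	Name  string
	Cents int
}

// Invoice knows its items and how to total them: logic that belongs to it.
type Invoice struct {
	Number string
	Items  []*Item
}

func (inv *Invoice) Total() int {
	total := 0
	for _, it := range inv.Items {
		total += it.Cents
	}
	return total
}

// PricingService is the service-layer piece: it receives the cost-change
// event and orchestrates the affected invoices, keeping the objects decoupled.
type PricingService struct {
	invoices []*Invoice
}

func (s *PricingService) OnItemCostChanged(item *Item, newCents int) {
	item.Cents = newCents
	for _, inv := range s.invoices {
		fmt.Printf("invoice %s recalculated: %d\n", inv.Number, inv.Total())
	}
}

func main() {
	milk := &Item{Name: "milk", Cents: 120}
	inv := &Invoice{Number: "A-1", Items: []*Item{milk}}
	svc := &PricingService{invoices: []*Invoice{inv}}
	svc.OnItemCostChanged(milk, 150) // invoice A-1 recalculated: 150
}
```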

Related

How to structure a RESTful backend API with a database?

I want to make an API using REST which interacts with (stores data in) a database.
While reading about design patterns I came across Remote Facade, and the book I was reading mentions that the role of this facade is to translate the coarse-grained methods from the remote calls into fine-grained local calls, and that it should not contain any extra logic. As an explanation, it says that the program should still work without this facade.
Here's an example
Yet I have two questions:
Considering I also have a database, does it make sense to split the general call into specific calls for each attribute? Doesn't it make more sense to just have a general "get data" method that runs one query against the database and converts the result into a usable object, to reduce the number of database calls? So instead of splitting get address into get street, get city and get zip, make one DB call for all that info.
With all this in mind, and, in my case using golang, how should the project be structured in terms of files and functions?
I will have the main file with all the endpoints from the REST API, calling the controllers that handle these requests.
I will have a set of files that define those controllers. Are these controllers the remote facade? Should those methods not have logic in that case, and just call the equivalent local methods?
Should the local methods call the database directly, or should they use some sort of helper class that accesses the database?
Assuming the answer to all of these is yes, does the following structure make sense?
Main
Controllers
Domain
Database helper
First and foremost, as Mike Amundsen has stated
Your data model is not your object model is not your resource model is not your affordance model
Jim Webber said something very similar: by implementing a REST architecture you have two models, an integration model in the form of the Web, which is governed by HTTP, and your domain model. Resources adapt and project your domain model to the world, though there is no 1:1 mapping between the data in your database and the representations you send out. A typical REST system has many more resources than you have DB entries in your domain model.
With that being said, it is hard to give concrete advice on how you should structure your project, especially in terms of a certain framework you want to use. According to Robert "Uncle Bob" C. Martin, looking at the code structure should tell you something about the intent of the application and not about the framework you use; architecture is about intent. What you usually see instead is the default structure imposed by a framework such as Maven, Ruby on Rails, ... For golang you should probably read through documentation or blog posts, which may or may not give you some ideas.
In terms of accessing the database, you might either follow a micro-service architecture, where each service maintains its own database, or attempt something like a distributed monolith that acts as one cohesive system and shares the database among all its parts. If you scale out and a couple of parallel services consume data, e.g. via a message broker, you might need a distributed lock and/or queue to guarantee that the same data is not consumed by multiple instances at the same time.
What you should do, however, is design your data layer in a way that scales well. What many developers forget or underestimate is the benefit they can gain from caching. On the Web, links are used to reference one resource from another, giving the relation semantic context through well-defined link-relation names. Link relations also allow a server to control its own namespace and change URIs as needed. But URIs are not only pointers to a resource a client can invoke; they are also keys into a cache. Caching can take place at multiple locations: on the server side, to avoid costly calculations or lookups; on the client side, to avoid sending requests out at all; and on intermediary hops, which take pressure off heavily requested servers. Fielding even made caching a constraint that needs to be respected.
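As a small illustration, a Go net/http handler can make a representation cacheable just by setting the standard headers; the route, max-age and ETag value below are made up for the sketch:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// The URI /teams/a is both the resource identifier and the cache key.
	http.HandleFunc("/teams/a", func(w http.ResponseWriter, r *http.Request) {
		// Allow clients and intermediaries to reuse this representation
		// for 60 seconds without contacting the server again.
		w.Header().Set("Cache-Control", "public, max-age=60")
		w.Header().Set("ETag", `"v1"`)
		// Conditional request: a match means nothing changed, so answer
		// 304 and let the cached copy be reused.
		if r.Header.Get("If-None-Match") == `"v1"` {
			w.WriteHeader(http.StatusNotModified)
			return
		}
		fmt.Fprintln(w, `{"name":"Team A"}`)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```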
Which attributes you should create queries for depends entirely on the use case you are trying to support. In the address example given, it does make sense to return the address information all at once, as the street or zip code is rarely queried on its own. If the address is part of some user or employee data, it is less clear whether to return that information as part of the user or employee data or just as a link to be queried on its own in a further request. What you return may also depend on the capabilities of the media type the client and your service agree upon (content negotiation).
If you implement something like a grouping, e.g. some football players and the categories they belong to, such as their teams and whether they are offense or defense players, you might have a Team A resource that includes all of the players as embedded data. Within the DB you could have either a separate table for teams with references to the respective players, or the team could just be a column in the player table. We don't know, and a client usually doesn't care either. From a design perspective you should, however, be aware of the benefits and consequences of embedding all the players at once versus providing links to the respective players, or a mixed approach of presenting some base data plus a link to learn further details.
The latter approach is probably the most sensible, as it gives a client enough information to determine whether more detailed data is needed or not. If it is, a simple GET request to the provided URI is enough, and that request might be served by a cache and thus never reach the actual server at all. The first approach certainly has the disadvantage that it doesn't use caching optimally and may return far more data than actually needed. The links-only approach may not provide enough information, forcing the client to perform a follow-up request just to learn anything about a team member. But as mentioned before, you as the service designer decide which URIs or queries are returned to the client, and you can design your system and data model accordingly.
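A sketch of the mixed approach as Go types; the field names and the href layout are assumptions for illustration, not a prescribed format:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PlayerSummary carries enough base data for a client to decide whether the
// details are worth a follow-up GET on Href, which a cache may answer.
type PlayerSummary struct {
	Name string `json:"name"`
	Href string `json:"href"` // link to the full player resource
}

type Team struct {
	Name    string          `json:"name"`
	Players []PlayerSummary `json:"players"`
}

func main() {
	t := Team{
		Name: "Team A",
		Players: []PlayerSummary{
			{Name: "Alice", Href: "/players/1"},
			{Name: "Bob", Href: "/players/2"},
		},
	}
	out, _ := json.MarshalIndent(t, "", "  ")
	fmt.Println(string(out))
}
```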
In general, what you do in a REST architecture is provide a client with choices. It is good practice to design the overall interaction flow as a state machine that is traversed by receiving requests and returning responses. As REST uses the same interaction model as the Web, it probably feels more natural to design the whole system as if you were implementing it for the Web, and then apply that design to your REST system.
Whether controllers should contain business logic or not is primarily an opinionated question. As Jim Webber correctly stated, HTTP, which is the de facto transport layer of REST, is an
application protocol whose application domain is the transfer of documents over a network. That is what HTTP does. It moves documents around. ... HTTP is an application protocol, but it is NOT YOUR application protocol.
He further points out that you have to narrow HTTP into a domain application protocol and trigger business activities as a side effect of moving documents around the network. So it's the side effect of moving documents over the network that triggers your business logic. There is no hard rule on whether to include business logic in your controller or not, but usually you try to keep the business logic in its own layer, e.g. as a service that you invoke from within the controller. That allows you to test the business logic without the controller and thus without a real HTTP request.
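A minimal Go sketch of that separation, with an assumed AccountService standing in for the business layer:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// AccountService holds the business logic; it can be tested without HTTP.
type AccountService struct{}

func (AccountService) Close(id string) error {
	fmt.Println("closing account", id) // business rules would live here
	return nil
}

// The handler (controller) only translates between HTTP and the service:
// the business activity fires as a side effect of moving the document.
func closeAccountHandler(svc AccountService) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if err := svc.Close(r.URL.Query().Get("id")); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusNoContent)
	}
}

func main() {
	http.Handle("/accounts/close", closeAccountHandler(AccountService{}))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```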
While this answer can't provide more detailed information, partly due to the broad nature of the question itself, I hope I could shed some light on the areas you should put some thought into, and on the fact that your data model is not necessarily your resource or affordance model.

What specific issue does the repository pattern solve?

(Note: My question has very similar concerns as the person who asked this question three months ago, but it was never answered.)
I recently started working with MVC3 + Entity Framework and I keep reading that the best practice is to use the repository pattern to centralize access to the DAL. This is also accompanied by explanations that you want to keep the DAL separate from the domain and especially the view layer. But in the examples I've seen, the repository is (or appears to be) simply returning DAL entities, i.e. in my case the repository would return EF entities.
So my question is, what good is the repository if it only returns DAL entities? Doesn't this add a layer of complexity that doesn't eliminate the problem of passing DAL entities around between layers? If the repository pattern creates a "single point of entry into the DAL", how is that different from the context object? If the repository provides a mechanism to retrieve and persist DAL objects, how is that different from the context object?
Also, I read in at least one place that the Unit of Work pattern centralizes repository access in order to manage the data context object(s), but I don't grok why this is important either.
I'm 98.8% sure I'm missing something here, but from my readings I didn't see it. Of course I may just not be reading the right sources... :\
I think the term "repository" is commonly thought of in the way the "repository pattern" is described by the book Patterns of Enterprise Application Architecture by Martin Fowler.
A Repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection. Client objects construct query specifications declaratively and submit them to Repository for satisfaction. Objects can be added to and removed from the Repository, as they can from a simple collection of objects, and the mapping code encapsulated by the Repository will carry out the appropriate operations behind the scenes.
On the surface, Entity Framework accomplishes all of this, and can be used as a simple form of a repository. However, there can be more to a repository than simply a data layer abstraction.
According to the book Domain Driven Design by Eric Evans, a repository has these advantages:
They present clients with a simple model for obtaining persistent objects and managing their life cycle
They decouple application and domain design from persistence technology, multiple database strategies, or even multiple data sources
They communicate design decisions about object access
They allow easy substitution of a dummy implementation, for unit testing (typically using an in-memory collection).
The first point roughly equates to the paragraph above, and it's easy to see that Entity Framework itself easily accomplishes it.
Some would argue that EF accomplishes the second point as well. But commonly EF is used simply to turn each database table into an EF entity and pass it through to the UI. It may be abstracting the mechanism of data access, but it's hardly abstracting away the relational data structure behind the scenes.
In simpler applications that are mostly data-oriented, this might not seem an important point. But as an application's domain rules / business logic become more complex, you may want to be more object-oriented. It's not uncommon that the relational structure of the data contains idiosyncrasies that aren't important to the business domain but are side effects of the data storage. In such cases you need to abstract not only the persistence mechanism but also the nature of the data structure itself. EF alone generally won't help you do that, but a repository layer will.
As for the third advantage, EF will do nothing (from a DDD perspective) to help. Typically DDD uses the repository not just to abstract the mechanism of data persistence, but also to provide constraints around how certain data can be accessed:
We also need no query access for persistent objects that are more convenient to find by traversal. For example, the address of a person could be requested from the Person object. And most important, any object internal to an AGGREGATE is prohibited from access except by traversal from the root.
In other words, you would not have an 'AddressRepository' just because you have an Address table in your database. If your design chooses to manage how the Address objects are accessed in this way, the PersonRepository is where you would define and enforce the design choice.
Also, a DDD repository would typically be where certain business concepts relating to sets of domain data are encapsulated. An OrderRepository may have a method called OutstandingOrdersForAccount which returns a specific subset of Orders. Or a Customer repository may contain a PreferredCustomerByPostalCode method.
Entity Framework's DataContext classes don't lend themselves well to such functionality without the added repository abstraction layer. They do work well for what DDD calls Specifications, which can be simple boolean expressions sent in to a simple method that will evaluate the data against the expression and return a match.
As for the fourth advantage, while I'm sure there are certain strategies that might let one substitute for the datacontext, wrapping it in a repository makes it dead simple.
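A minimal Go sketch of both ideas, a repository whose methods name business concepts and a drop-in in-memory substitute for tests (all names invented for the example):

```go
package main

import "fmt"

type Order struct {
	ID      int
	Account string
	Settled bool
}

// OrderRepository expresses a business concept (outstanding orders), not a
// table scan; the persistence technology behind it is invisible to callers.
type OrderRepository interface {
	OutstandingOrdersForAccount(account string) []Order
}

// InMemoryOrderRepository is the "dummy implementation" substitution:
// an ordinary slice standing in for the database during unit tests.
type InMemoryOrderRepository struct {
	orders []Order
}

func (r *InMemoryOrderRepository) OutstandingOrdersForAccount(account string) []Order {
	var out []Order
	for _, o := range r.orders {
		if o.Account == account && !o.Settled {
			out = append(out, o)
		}
	}
	return out
}

func main() {
	repo := &InMemoryOrderRepository{orders: []Order{
		{ID: 1, Account: "acme", Settled: false},
		{ID: 2, Account: "acme", Settled: true},
	}}
	fmt.Println(repo.OutstandingOrdersForAccount("acme")) // [{1 acme false}]
}
```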
Regarding 'Unit of Work', here's what the DDD book has to say:
Leave transaction control to the client. Although the REPOSITORY will insert into and delete from the database, it will ordinarily not commit anything. It is tempting to commit after saving, for example, but the client presumably has the context to correctly initiate and commit units of work. Transaction management will be simpler if the REPOSITORY keeps its hands off.
Entity Framework's DbContext basically resembles a Repository (and a Unit of Work as well). You don't necessarily have to abstract it away in simple scenarios.
The main advantage of the repository is that your domain can be ignorant and independent of the persistence mechanism. In a layer based architecture, the dependencies point from the UI layer down through the domain (or usually called business logic layer) to the data access layer. This means the UI depends on the BLL, which itself depends on the DAL.
In a more modern architecture (as propagated by domain-driven design and other object-oriented approaches) the domain should have no outward-pointing dependencies. This means the UI, the persistence mechanism and everything else should depend on the domain, and not the other way around.
A repository will then be represented through its interface inside the domain but have its concrete implementation outside the domain, in the persistence module. This way the domain depends only on the abstract interface, not the concrete implementation.
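A compact Go sketch of that dependency direction, with the "packages" collapsed into one file and marked by comments; all names are illustrative:

```go
package main

import "fmt"

// --- domain (inner layer): owns the abstraction ---------------------------

type Customer struct{ ID, Name string }

// CustomerRepository is declared *inside* the domain; the domain never
// imports the persistence package, so the dependency points inward.
type CustomerRepository interface {
	ByID(id string) (Customer, error)
}

// --- persistence (outer layer): provides the implementation ---------------

// sqlCustomerRepository would live in a separate persistence package and
// satisfy the domain's interface; here it is faked for the sketch.
type sqlCustomerRepository struct{}

func (sqlCustomerRepository) ByID(id string) (Customer, error) {
	return Customer{ID: id, Name: "Jane"}, nil // imagine a real query here
}

// --- application: wires the concrete implementation into the domain -------

func main() {
	var repo CustomerRepository = sqlCustomerRepository{}
	c, _ := repo.ByID("42")
	fmt.Println(c.Name)
}
```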
That basically is object-orientation versus procedural programming on an architectural level.
See also the Ports and Adapters a.k.a. Hexagonal Architecture.
Another advantage of the repository is that you can create similar access mechanisms to various data sources. Not only to databases but to cloud-based stores, external APIs, third-party applications, etc.
You're right, in those simple cases the repository is just another name for a DAO and it brings only one benefit: the fact that you can switch EF to another data access technique. Today you're using MSSQL, tomorrow you'll want cloud storage, or a micro-ORM instead of EF, or to switch from MSSQL to MySQL.
In all those cases it's good that you use a repository, as the rest of the app won't care about what storage you're using now.
There's also the limited case where you get information from multiple sources (DB + file system); a repo will act as the facade, but it's still just another name for a DAO.
A 'real' repository is valid only when you're dealing with domain/business objects; for data-centric apps which won't change storage, the ORM alone is enough.
It would be useful in situations where you have multiple data sources, and want to access them using a consistent coding strategy.
For example, you may have multiple EF data models, and some data accessed using traditional ADO.NET with stored procs, and some data accessed using a 3rd party API, and some accessed from an Access database living on a Windows NT4 server sitting under a blanket of dust in your broom closet.
You may not want your business or front-end layers to care about where the data is coming from, so you build a generic repository pattern to access "data", rather than to access "Entity Framework data".
In this scenario, your actual repository implementations will be different from each other, but the code that calls them wouldn't know the difference.
Given your scenario, I would simply opt for a set of interfaces that represent what data structures (your Domain Models) need to be returned from your data layer. Your implementation can then be a mixture of EF, Raw ADO.Net or any other type of Data Store/Provider. The key strategy here is that the implementation is abstracted away from the immediate consumer - your Domain layer. This is useful when you want to unit test your domain objects and, in less common situations - change your data provider / database platform altogether.
You should, if you haven't already, consider using an IoC container, as they make loose coupling of your solution very easy by way of dependency injection. There are many available; personally I prefer Ninject.
The domain layer should encapsulate all of your business logic - the rules and requirements of the problem domain, and can be consumed directly by your MVC3 web application. In certain situations it makes sense to introduce a services layer that sits above the domain layer, but this is not always necessary, and can be overkill for straightforward web applications.
Another thing to consider is that even when you know you will be working with a single data store, it still might make sense to create a repository abstraction. The reason is that there might be a feature your application needs that your ORM du jour either handles badly (performance), doesn't support at all, or that you just don't know how to make the ORM bend to.
If you are wrapping your ORM behind a well thought out repository interface, you can easily switch between different technologies as you see fit. It's not uncommon in my repositories to see some methods use EF for their work and others to use something like PetaPoco, or (gasp) ADO.net code. The repository abstraction enables you to use exactly the right tool for the job at hand without leaking these complexities into the client code.
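In Go terms the same idea might look like the sketch below, where each repository method quietly delegates to whichever access path suits it; the helper functions stand in for an ORM call and hand-tuned SQL:

```go
package main

import "fmt"

type Report struct{ Rows int }

// These helpers stand in for two different data-access tools; in a real
// codebase one might be an ORM query and the other hand-tuned SQL.
func ormFindAll() []string { return []string{"a", "b"} }
func rawAggregate() Report { return Report{Rows: 2} }

// WidgetRepository hides which tool each method uses: the simple read goes
// through the convenient path, the hot aggregate through the fast one.
type WidgetRepository struct{}

func (WidgetRepository) All() []string       { return ormFindAll() }
func (WidgetRepository) UsageReport() Report { return rawAggregate() }

func main() {
	repo := WidgetRepository{}
	// Callers can't tell, and don't care, which tool served each call.
	fmt.Println(repo.All(), repo.UsageReport())
}
```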
I think there is a big misunderstanding of what many articles call "repository." And that's why there are doubts about what real value those abstractions bring.
In my opinion the repository in its pure form is IEnumerable, while you and many articles are talking about a "data access service."
I've blogged about it here.

Considering the following architectural changes and need some advice (Domain Entities, DTO, Aggregates)

About a year ago I set up a solution consisting of an ASP.NET MVC 3 (now) presentation layer, an application layer, a domain layer and an infrastructure layer (cross-cutting stuff and data). I decided to keep the domain model in a separate project from the domain logic and took a relaxed approach to the presentation layer by passing it the domain entities instead of DTOs, since we really only have one front end right now.
We are going to be servicing a distributed layer soon, in addition to our main website, and I will use DTOs there, but I am considering using DTOs in the main website as well. I am also wondering whether I should bother to break out the framework code in the domain layer (IRepository, IUnitOfWork, entity/value object supertypes, etc.). Let me list the questions I need feedback on:
1) I was pretty diligent about not having an anemic domain model, and I also watched out for behavior that was specific to presentation concerns. Most of the business calculations needed are on the domain entities. Is it OK for the presentation layer to call this behavior directly, or should it instead call an application service that then calls the domain entities? That would suggest there is no reason for the presentation layer to know about the domain entities, and it could use DTOs instead. Alternatively, I could have the DTOs expose these behaviors, but then I feel like I am robbing the domain entities. So I guess that is three options (rich domain objects called directly, a service layer, or DTOs with behavior); which is best?
2) Right now I have a domain project, which has domain services, specifications and logic and is orchestrated by the application layer and separate project for the domain model (used by presentation layer and application layer). I also have framework interfaces for generic repository and unit of work pattern here. Should I break the framework stuff out into a separate project and combine the rest into one project?
3) I want to reorganize my domain layer into aggregates, right now all of the domain model is organized by modules, basically all the types for each module are in one namespace. Would it be better to organize the entities, value objects, services and other stuff by the aggregates?
4) Should I use the Separated Interface pattern for infrastructure services that are basically .net framework helper library types? For example configuration objects or validation runners? What is the benefit there in doing so?
5) Lastly, not many examples I have seen use interfaces for domain entities. For almost every object I have, I prefer to pass around interfaces for dependency reasons, and it makes testing much easier. Is it valid to use interfaces instead of concretes? I should mention that we use EF 4.3.1 (soon to upgrade to the latest version) and I seem to remember that EF had a problem with using interfaces or something. Should I be exposing interfaces instead of the domain entities?
Thank you very much in advance.
Project Structure:
Presentation.Web
| |
| Application
| | |
Domain.Model - Domain
(Infrastructure.Data, Infrastructure.Core, Infrastructure.Security)
Explanation:
Presentation.Web (MVC3 Web Project)
Application
-- Service layer that orchestrates the domain layer and responds to requests from the presentation layer (get this, update that). This is organized by module; for example, if I had a customer module I would have Application.Customer, and in it would be all of the application services.
Domain
-- Contains domain services, specifications, calculations and other domain logic that is not exposed as behavior on domain entities. For example a calculation that involves several domain entities exposed as a domain service for the application layer to call.
-- Also contains framework code for a specification framework and the main interfaces for a generic repository and unit of work pattern.
Domain.Model
-- Contains the domain entities and enumerations, organized by module. For example, I might have a customer module which has a Customer entity, a CustomerOrder entity, etc. This is broken out from the domain project so that the objects can be used by the application and presentation layers.
Infrastructure.Security
-- Security infrastructure for authentication and authorization
Infrastructure.Core
-- Cross-cutting stuff used by multiple layers (validators, logging, configuration, extensions, IoC, email, etc.). Most of the projects depend on interfaces in this project (except Domain.Model) for infrastructure services.
Infrastructure.Data
-- Repository Implementations via LINQ and EF 4.3.1, mapping layer, Unit of Work implementation. Interfaces are in Domain project (separated interfaces pattern)
1) First, determine whether your main website really needs to use the application layer. IMHO, if your application services and your main website are on the same web server, then you should evaluate whether the potential performance loss is worth having your main website call app server methods when it could call the domain objects directly. However, if your application server is definitely on another server, then yes, you should have the application server call your domain objects and pass only DTOs back and forth between it and any presentation layers you may have, including your main website.
2) This is really a question on preference of organization. Both are valid. You choose.
3) Another question on preference of organization. I, personally, organize my code by bounded context first. Then, I have entities and aggregate roots directly under them. Then, I have folders for Enumerations, Repositories (interfaces), Services (interfaces), Specifications, and Values. The namespaces do not reflect this organizational structure past the last bounded context folder. But, again, you should do this in the way that best suits how you look at the code.
4) This is an implementation concern. I, personally, only break out implementation concerns into interfaces if I think there is a good possibility that I will need to swap out the implementations in the future. That being said, I usually organize my helper libraries into specific infrastructure contexts (e.g. MainContext.Web.MVC.Helpers or MainContext.Web.WebForms.Helpers). These rarely change, and I have yet to come across an instance where I needed to swap out implementations entirely.
5) From my understanding, it is perfectly valid to use interfaces instead of concretes for your domain entities. That being said, I have yet to run into a case where I needed different implementations for my domain entities. The only reason I can even think of would be if you needed to change your business logic for one application but leave an older application using the original business logic. If your business objects are good models for the domain, I can't fathom you actually running into this problem, but I have seen examples where people do this just for the sake of the abstraction. IMHO, that is not worth the extra coding effort, but if it makes you feel good inside or you get some actual benefit (e.g. making testing easier), there isn't any reason why you can't abstract out your domain entities. That being said, domain services and repositories should definitely have contracts that allow you to swap out their implementations.
Answer 5 derives from the idea that the application is the one that chooses the implementations. If you are trying to achieve onion architecture, then your application is going to choose the concrete implementations for everything (repositories, domain services, and other abstracted implementation concerns). I see no reason why it can't just use domain aggregates directly, since they are the concrete representation of your domain model. (Note: all entities should be encapsulated into aggregates. The application should never be able to hold a reference to an entity that is not an aggregate root in that context.)
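A small Go sketch of that encapsulation rule: the aggregate root is the only exported handle, and the internal entity type is deliberately unexported (all names invented):

```go
package main

import (
	"errors"
	"fmt"
)

// orderLine is an entity internal to the Order aggregate; it is unexported,
// so callers cannot hold a reference to it except via the root.
type orderLine struct {
	sku string
	qty int
}

// Order is the aggregate root: the only handle the application gets.
type Order struct {
	lines []orderLine
}

func (o *Order) AddLine(sku string, qty int) error {
	if qty <= 0 {
		return errors.New("quantity must be positive") // invariant enforced by the root
	}
	o.lines = append(o.lines, orderLine{sku: sku, qty: qty})
	return nil
}

func (o *Order) TotalQty() int {
	total := 0
	for _, l := range o.lines {
		total += l.qty
	}
	return total
}

func main() {
	var o Order
	_ = o.AddLine("sku-1", 3)
	fmt.Println(o.TotalQty()) // 3; the lines themselves stay inside the aggregate
}
```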

Service Layer vs Business Layer in architecting web applications?

I know this might sound silly but I am finding it hard to understand the need of a service layer and its differences with business layer.
So, we are using ASP.NET MVC 2 and have a data access layer which does all the querying against the database, and then we have the business layer which has the business logic and validations that need to be done. Finally we have the presentation layer which basically has all the views. In addition we also have some helpers, DTOs and viewmodel classes in different folders as part of our libraries. But I have tried to read about architecture, and it seems that a service layer is an important part of an architecture.
All I understand is that a service layer is something that calls all the functions.
But I can't really see the need for a service layer in our application? Or it might already be there and I can't see it... Can anyone explain with an example how a service layer is important? How is it different from a business layer, because from what I have read they seem pretty similar?
Is it even needed in the first place? All we are trying to do is architect our application in the best possible way. What are your thoughts and experiences?
It is all about decoupling your app into self-contained pieces, each one defined by the requirement to do one job really well.
This allows you to apply specialised design patterns and best practices to each component.
For example, the business layer's job is to implement the business logic. Full stop. Exposing an API designed to be consumed by the presentation layer is not its "concern".
This go-between role is best performed by a service layer. Factoring out this specialised layer allows you to apply much more specialised patterns to each individual component.
There is no need to design things this way, but the accumulated experience of the community indicates that it results in an application that is much easier to develop and maintain, because you know exactly what each component is expected to do, even before you start coding the app.
Each layer should do one job really well. The go-between role that the service layer performs is one such well-defined job, and that is the reason for its existence: it is a unit of complexity that is designed the same way over and over again, rather than having to reinvent the wheel each time or mangle this role into the business logic where it does not belong. Think of the service layer as a mapping component. It is external to the business logic and does not belong in its classes, or in the controllers either.
Also, as a result of being factored out of the business logic, you get simpler business objects that are easier to use by other applications and services that the "business" consumes.
ASP.NET MVC is nothing if not a platform to enable you to write your apps as specialised components.
As a result of this increasing understanding of how to specialise components, programs are evolving from a primordial bowl of soup and spaghetti into something different and strange. The complexity they can address, whilst still using simple structures, is increasing. Evolution is getting going. If life is anything to go by, this has to be good, so keep the ball rolling.
You might find the term Architecture Astronaut interesting.
The point is, don't get caught up in all of these "layers" that people bandy about. Every time you add another layer to the application, there has to be a purpose for it.
For example, some people successfully combine the concepts of a data access and business logic layer into one. It's not right for every solution, but it works out perfectly for a lot of them. Some people might even combine presentation with business... which is a major no-no in a lot of circles but, again, may be perfect for the need in question.
Basically, the problem you are solving should dictate the structure of the application. If other applications need to integrate with yours, then a service layer may need to be added. This might take the form of simple web forms to which others can post data, or it might go further to full-on web services. There might even be situations where you want the service layer to be the primary go-to location for multiple presentations.
You can get as complicated as you want, but a good rule of thumb is to keep it simple until the complications become necessary.
In some designs, the service layer is not used by the presentation layer.
The service layer is called by other applications that want to use the business and data access layers in the application.
In a way, the service layer is another front-end separate from the presentation layer.
See the architectural diagram here. The users access the application through the presentation layer. And the external systems access the application through the services layer. The presentation layer and the services layer talk to the application facade in the business layer.
As an example of what those other "external systems" might be, web services and WCF services call the service layer. Some other web application could call this application's service layer in a web service call. This would be one way to ensure that both apps are applying the same business logic, and that any changes made to the business logic are reflected in both of the apps.
As Chris Lively points out, one shouldn't get carried away with creating layers. I would recommend only creating the layers that would be useful in your application. In my experience, the need for a service layer is not frequent, but the need for a business layer is very frequent.
The Service Layer is usually constructed in terms of discrete operations that have to be supported for a client.
For example, a Service Layer may expose a Create Account operation, whereas the Business Layer may consist of validating the parameters needed to create an account, constructing the data objects to be persisted, etc.
Oftentimes, the Service Layer uses procedural or Transaction Script style code to orchestrate the business and/or logic layers.
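A minimal Go sketch of that split, with a coarse-grained service operation orchestrating fine-grained business-layer steps (every name here is invented for illustration):

```go
package main

import (
	"errors"
	"fmt"
)

// --- business layer: fine-grained rules -----------------------------------

func validateEmail(email string) error {
	if email == "" {
		return errors.New("email required")
	}
	return nil
}

type Account struct{ Email string }

func newAccount(email string) Account { return Account{Email: email} }

// --- service layer: one coarse operation per client use case --------------

// CreateAccount is the discrete operation a client sees; internally it is
// a small transaction script stitching the business steps together.
func CreateAccount(email string) (Account, error) {
	if err := validateEmail(email); err != nil {
		return Account{}, err
	}
	acct := newAccount(email)
	fmt.Println("persisting", acct.Email) // persistence would happen here
	return acct, nil
}

func main() {
	if _, err := CreateAccount("a@example.com"); err != nil {
		fmt.Println("failed:", err)
	}
}
```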
Knowing this, you may realize that your Business Layer really is a Service Layer as well. At some point, the point from which you're asking this question being one such point, the distinction is mostly semantic.
From my perspective, a service layer allows you to isolate your presentation layer from your business layer, in the same way the business and data access layers isolate you from how you persist the data.
Inside your business layer you'd put things that are pivotal to your 'business'. A contrived (and probably poorly conceived) example would be for that to be the place where, say, discounting prices on a product occurs.
The service layer allows you to further separate the interface from the business, or even to swap out other business layers depending on the changing scenarios of the business.
Not every application needs one though (a lot of variables go into that determination), too much architecture can introduce complexities your team may not need.
Have a look at what Randy Stafford says about Service Layer in the "P of EAA" book:
http://martinfowler.com/eaaCatalog/serviceLayer.html
A Service Layer defines an application's boundary [Cockburn PloP] and its set of available operations from the perspective of interfacing client layers. It encapsulates the application's business logic, controlling transactions and coordinating responses in the implementation of its operations.
Simple. To expose your business logic to a client, use a service layer.
Ask yourself:
When changing the business logic, should the service layer change?
If the answer is "not always" then a service layer is needed.
I know this thread is old, but one useful thing I've done in the Service layer is to handle transactions (Business Layer shouldn't need to know how to handle rollbacks, ordering of operations, etc.).
Another thing I've done is used it to translate between domain entities and DTOs. The Business layer deals with the domain model, but I've passed the data back to the presentation layer in the form of DTOs (in some cases it wasn't practical to expose the whole domain model to the presentation layer for various reasons), so the service layer handles this mapping.
Ultimately, I see the business layer as more fine-grained, whereas the service layer can be coarser in that it could call multiple operations in the BLL and order the calls within one service call.
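A toy Go sketch of those two jobs, owning the transaction boundary and mapping the domain model to a DTO; the Tx type fakes a database transaction for the example:

```go
package main

import "fmt"

// Tx fakes a database transaction handle for the sketch.
type Tx struct{ committed bool }

func begin() *Tx      { fmt.Println("BEGIN"); return &Tx{} }
func (t *Tx) Commit() { t.committed = true; fmt.Println("COMMIT") }
func (t *Tx) Rollback() {
	if !t.committed {
		fmt.Println("ROLLBACK") // only if something failed before Commit
	}
}

// Order is the rich domain object; OrderDTO is the flat shape the
// presentation layer receives instead of the domain model itself.
type Order struct{ ID, Items int }

type OrderDTO struct {
	ID        int `json:"id"`
	ItemCount int `json:"itemCount"`
}

// PlaceOrder is the service-layer operation: it owns the transaction
// boundary and the domain-to-DTO mapping, so neither the business objects
// nor the controllers have to know about either.
func PlaceOrder(items int) (OrderDTO, error) {
	tx := begin()
	defer tx.Rollback() // no-op once committed

	order := Order{ID: 1, Items: items} // business-layer calls would go here

	tx.Commit()
	return OrderDTO{ID: order.ID, ItemCount: order.Items}, nil
}

func main() {
	dto, _ := PlaceOrder(3)
	fmt.Printf("%+v\n", dto)
}
```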
Yes, and I would also note on that the service layer is a good place for authentication, both role based and user based.

What are the pros and cons of using a Data Services Layer?

This is a discussion that seems to reappear regularly in the SOA world. I heard it as far back as '95, but it's probably been a topic of conversation long before that. I definitely have my own opinions about it, but I'd like to hear some good, solid arguments for having a Data Services Layer, and likewise for arguments against having one.
What value does it add to a systems architecture?
What are the inherent pitfalls?
What are common anti-patterns?
Links to articles are definitely acceptable.
To avoid confusion, this article describes the type of Data Service Layer I'm talking about. Essentially, a thin layer above the database that provides SOAP access to data and includes no business logic.
Data services are quite data-oriented, suited to projects without much logic that always do CRUD. For instance, it can fit if you have a log service or a properties service: you just do CRUD against it.
If the domain behind that database is complex, with complex logic, you will need to move that logic up above the service (maybe into an orchestration), so you will divide the logic across several services. In that case I think it is better to use a single, thicker service (DAL, BLL and SIL) that manages that domain and exposes just one interface.
In the end it is another tool; it depends on the problem.