What constitutes a rich domain model in a POJO/POCO? - class-design

What is the difference between
a simple fields-accessors-mutators class, and
a rich-modeled class?
What constitutes rich modeling in business-domain classes?

"Rich" as used here implies "rich behavior" (as opposed to state).
There is technical behavior and domain behavior. Accessors and mutators are technical; they lack the "why" which defines business interest.
Domain objects represent the "why" and encapsulate the "how". Actually, all objects do that; domain objects do it specifically for business value.
Let's say you, as an employee domain object, have to request a day off work. You have two options:
Tell your manager and he marks the schedule.
Ask your manager for the schedule and mark it.
Model 1 is rich. The "why" (vacation time) encapsulates the "how" (marking the schedule).
Model 2 relegates the manager to a simple property bag and leaks the scheduling abstraction.
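A minimal C# sketch of the two models (all class and member names here are made up purely for illustration):

using System;
using System.Collections.Generic;

public class Employee { public string Name { get; set; } }

public class Schedule
{
    private readonly List<(Employee, DateTime)> _absences = new List<(Employee, DateTime)>();
    public void MarkAbsence(Employee employee, DateTime day) => _absences.Add((employee, day));
}

// Model 1 (rich): the caller states the "why" (a day off); the manager encapsulates the "how".
public class Manager
{
    private readonly Schedule _schedule = new Schedule();
    public void RequestDayOff(Employee employee, DateTime day) => _schedule.MarkAbsence(employee, day);
}

// Model 2 (anemic): the manager is reduced to a property bag and the scheduling abstraction
// leaks to the caller, who must do: manager.Schedule.MarkAbsence(employee, day);
public class AnemicManager
{
    public Schedule Schedule { get; set; } = new Schedule();
}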

A model is rich when your business logic is encapsulated in your business objects; in other words, when you have a Business Objects (Domain Model) layer without the need for a separate Business Logic layer.

Related

Use of the ubiquitous language in application service arguments

The ubiquitous language (UL) is used across a whole bounded context, in both the domain model and the application layer, right? OK. Then the names of an application service's methods belong to the UL. But the arguments of those methods, since domain objects shouldn't be exposed to clients, won't (cannot) be terms from the UL. If you used UL vocabulary to name method args, then you would be exposing domain objects outside the application.
How do you explain this contradiction about naming application service parameters?
Maybe the question seems a bit philosophical, but so is DDD; it's a philosophy of software development, and it is based on the UL.
UPDATE
Someone asked for an example, not just philosophy. Well let's say our domain is about a shop selling products. One method of an application service could be:
addProductToShoppingCart ( Product product, ShoppingCart shoppingCart );
But Product and ShoppingCart are entities/value objects of the domain model, and we shouldn't expose them to clients.
So the args should be DTOs or primitive types. But such types don't belong to the UL. Product and ShoppingCart do belong to the UL and should be the args of the method, but using them breaks the rule against exposing the domain to clients.
I think the application service layer should strive to reflect the UL as much as it can without leaking details of the domain model's technical solution. In other words, you want the application service public API to be expressed in terms of the ubiquitous language, but you do not want the client code to be coupled to the domain model layer.
"If you used UL vocabulary to name method args, then you would be exposing domain objects outside the application."
That's a misconception: method arguments should be named using UL terms where you can, but argument types shouldn't be types defined in the domain package. This is for technical reasons only, as that segregation lets you change the domain model independently of the application's public API.
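For instance, a sketch along these lines (the service and repository names are hypothetical) keeps the UL in the method and parameter names while keeping domain types out of the signature:

using System;

// The method and parameter names speak the ubiquitous language,
// but the argument types are primitives/DTOs, not classes from the domain layer.
public class ShoppingCartApplicationService
{
    public void AddProductToShoppingCart(Guid productId, Guid shoppingCartId, int quantity)
    {
        // Inside the application layer the real domain objects are rehydrated, e.g.:
        // var cart = _shoppingCartRepository.Get(shoppingCartId);
        // var product = _productRepository.Get(productId);
        // cart.Add(product, quantity);
    }
}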
An example would be much better to discuss than just the "philosophy". But..
The contradiction is that most DDD designs do not in fact follow the UL rigorously enough. Look at almost any publicly available "DDD" design, for example Vaughn Vernon's Github repository.
The "Domain" (i.e. Value Objects and Entities) usually are modeled as data-only "objects", with little if any business logic. Right there the method names already left the UL and are purely in technical terrain (setters, getters usually).
Same with Services. Services are not part of the "Domain" at all. Try telling a business person that you've implemented a PasswordService, I guarantee a blank stare back. Services are also purely technical on the outside, with some business-related methods in them, that could actually belong to some Value Object or Entity.
So, while I agree with the "philosophy" part, the building blocks defined by Eric Evans as used today are far from an optimal implementation of that philosophy.
Take a look at my presentation about exactly this issue: https://speakerdeck.com/robertbraeutigam/object-oriented-domain-driven-design

Is it possible to do DDD and REST interface and language mapping?

REST has a uniform interface constraint, which boils down to the following in a very condensed, opinion-based form:
You have to use standards like HTTP, URI, MIME, etc...
You have to use hyperlinks.
You have to use RDF vocabs to annotate data and hyperlinks with semantics.
You do all of these to decouple the client from the implementation details of the service.
DDD with CQRS (or without it) is very similar as far as I understand.
By CQRS you define an interface to interact with the domain model. This interface consists of command and query classes.
By DDD you define domain events to decouple the domain model from the persistence details.
By DDD you have one ubiquitous language per bounded context which expresses the semantics.
You do all of these to completely decouple the domain model from the outside world.
Is it possible to map the REST uniform interface to the domain interface defined by commands and queries and domain events? (So the REST service code would be generated automatically.)
Is it possible to map the linked data semantics to the ubiquitous languages? (So you wouldn't need to define very similar terms, just find and reuse existing vocabs.)
Please add a very simple mapping example to your answer, showing why or why not!
I don't think this is possible. There is a term which I believe describes this problem: it is called ontology alignment.
In this case we have at least 3 ontologies:
the ubiquitous language (UL) of the domain model
the application specific vocab (ASO) of the REST service
the linked open data vocabs (LODO) which the application specific vocab uses
So we have at least 2 alignments:
the UL : ASO alignment
the ASO : LODO alignment
Our problem is related to the UL : ASO alignment, so let's talk about these ontologies.
The UL is object oriented, because we are talking about DDD and a domain model. So most of the domain objects (entities, value objects) are real objects and not data structures. The non-object-oriented part of it consists of the DTOs on the interface of the domain model, such as command + domain event, query + result, and error.
In contrast the ASO is strictly procedural, we manipulate the resources (data structures) using a set of standard methods (procedures) on them.
So from my aspect we are talking about 2 very different things and we got the following options:
make the ASO more object oriented -> RPC
make the UL less object oriented -> anaemic domain model
So from my point of view we can do the following things:
we can automatically map entities to resources and commands to operations by CRUD, for example the HydraBundle does this with active records (we can do just the same with DDD and without CQRS)
with a complex domain model we can manually map commands to operations (see the sketch after this list), for example:
the operation POST transaction {...} can result in a SendMoneyCommand {...}
the operation GET orders/123/total can result in an OrderTotalQuery {...}
with a complex domain model we cannot map entities to resources, because we would have to define new resources to describe a new service or a new entity method, for example:
the operation POST transaction {...} can result in account.sendMoney(anotherAccount, ...)
the operation GET orders/123/total can result in an SQL query on a read database without ever touching a single entity
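As a rough illustration of the manual command mapping, a hand-written translation from a REST operation to a command might look like this (SendMoneyCommand, ICommandBus and the request type are assumed names, not tied to any particular framework):

using System;

public class TransactionRequest
{
    public Guid FromAccountId { get; set; }
    public Guid ToAccountId { get; set; }
    public decimal Amount { get; set; }
}

public class SendMoneyCommand
{
    public Guid FromAccountId { get; set; }
    public Guid ToAccountId { get; set; }
    public decimal Amount { get; set; }
}

public interface ICommandBus
{
    void Dispatch(object command);
}

public class TransactionController
{
    private readonly ICommandBus _bus;

    public TransactionController(ICommandBus bus) { _bus = bus; }

    // Handles: POST transaction {...} -> SendMoneyCommand {...}
    public void Post(TransactionRequest body)
    {
        _bus.Dispatch(new SendMoneyCommand
        {
            FromAccountId = body.FromAccountId,
            ToAccountId = body.ToAccountId,
            Amount = body.Amount
        });
    }
}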
I think it is not possible to do this kind of ontology alignment between DDD+CQRS and REST, but I am not an expert on this topic. What I think we can do is create an application-specific vocab with resource classes, properties and operations, and map the operations to the commands/queries and the properties to the command/query properties.
You have posed some interesting questions here.
To start with, I do not quite agree with:
By DDD you define domain events to decouple the domain model from the
persistence details.
I think you might be confusing Event Sourcing (ES) with DDD. ES can be used with DDD, but it's very much optional; in fact, you should give it a lot of thought before choosing it as your persistence mechanism.
Now to the bulk of your question: do REST and DDD get along, and if so, how?
My take on it: yes, they do get along. However, you generally do not want to expose your domain model via a REST interface; you want to build an abstraction over it and then expose that.
You can refer to this answer here, for a little more detail.
However, I cannot recommend the Implementing Domain-Driven Design book enough; Chapter 14, Application, deals with your concern to a fair degree.
I could not have explained it more thoroughly than the book, hence I'm referring you there :)

ORM Entities vs. Domain Entities under Entity Framework 6.0

I stumbled upon the following two articles First and Second in which the author states in summary that ORM Entities and Domain Entities shouldn't be mixed up.
I face exactly this problem at the moment, as I code with EF 6.0 using the Code First approach. I use the POCO classes both as entities in EF and as my domain/business objects. But I frequently find myself in the situation where I define a property as public or a navigation property as virtual only because EF forces me to do so.
I don't know what to take as the bottom line of the two articles. Should I really create, for example, a CustomerEF class for Entity Framework and a CustomerD class for my domain, then create a repository which consumes CustomerD, maps it to CustomerEF, does some queries, and then maps the received CustomerEF back to CustomerD? I thought EF was all about mapping my domain entities to the data.
So please give me some advice. Am I overlooking an important thing EF is able to provide me with? Or is this a problem which cannot be completely solved by EF? In the latter case, what is a good way to manage it?
I agree with the general idea of these posts. An ORM class model is part of a data access layer first and foremost (even if it consists of so-called POCOs). If any conflict of interests arises between persistence and business logic (or any other concern), decisions should always be made in favor of persistence.
However, as software developers we always have to balance between purism and pragmatism. Whether or not to use the persistence model as a domain model depends on a number of factors:
The size/coherence of the development team. When the whole team knows that properties can be public just because of ORM requirements, but should not be set all over the place, it may not be a big deal. If everybody knows (and obeys) that an ID property is not to be used in business logic, having IDs may not be a big deal. A scattered, inexperienced or undisciplined team may need more stringent segregation of code.
The overlap between business logic concerns and persistence concerns. Object oriented design thrives when a class model sticks to SOLID principles. But these principles are not necessarily at odds with persistence concerns. I mean that although the concerns are different, in the end their resultant requirements may be quite similar. For instance, both concerns may require valid object state and correct associations.
There can be use cases, however, in which objects temporarily need to be in a state that absolutely shouldn't be stored. This may be a reason to work with dedicated domain classes. Another reason may be that the entity model just can't fulfill the best segmentation of responsibilities. For instance, a business process "blacklisting customer" may require data that is scattered over so many entity objects that new domain classes must be designed that can encapsulate the data and the methods working on them. In other words: doing this by entities would violate the Tell Don't Ask principle.
The need for layering. For instance, if the data access layer targets different database vendors it may have to consist of interchangeable parts that are vendor-specific (e.g. to account for subtle differences in data types between Oracle and Sql Server or to exploit vendor-specific features). Using the persistence model as domain model would probably bleed vendor-specific implementations into the business logic. That would be really bad. There the data access layer should be precisely that, a layer.
(Very trivial) The amount of data. Creating objects takes time and resources. When "many" objects are involved in a business case it may just be too expensive to build both entity objects and domain objects.
And more, undoubtedly.
So I would always try to be a pragmatist. If entity classes do a decent job, go for it. If the mismatch is too large, create a business domain for appropriate parts of the business logic. I would not slavishly follow a (any) design pattern just because it is a good pattern. Contrary to what is said in the post, it requires a lot of maintenance to map an entity model onto a business model. When you find yourself creating myriads of business classes that are almost identical to entity classes it's time to rethink what you're doing.
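If you do decide to split the models, the mapping is usually confined to the repository. A rough sketch, with made-up class names in the spirit of the question's CustomerEF/CustomerD idea:

using System.Collections.Generic;

// Persistence model: shaped by EF requirements (public setters, virtual navigation properties).
public class CustomerEF
{
    public int Id { get; set; }
    public string Name { get; set; }
    public bool IsBlacklisted { get; set; }
    public virtual ICollection<OrderEF> Orders { get; set; }
}

public class OrderEF
{
    public int Id { get; set; }
}

// Domain model: shaped by business rules, free of ORM constraints.
public class CustomerD
{
    public CustomerD(string name) { Name = name; }
    public string Name { get; private set; }
    public bool IsBlacklisted { get; private set; }
    public void Blacklist() { IsBlacklisted = true; }
}

// The repository is the only place that knows about both models.
public class CustomerRepository
{
    public CustomerD ToDomain(CustomerEF entity)
    {
        var customer = new CustomerD(entity.Name);
        if (entity.IsBlacklisted) customer.Blacklist();
        return customer;
    }

    public CustomerEF ToEntity(CustomerD customer)
    {
        return new CustomerEF { Name = customer.Name, IsBlacklisted = customer.IsBlacklisted };
    }
}

The cost is exactly the maintenance burden mentioned above: every new property touches both classes and the mapping code.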

Business logic in Entity Framework POCOs using partial classes?

I have business logic that could either sit in a business logic/service layer or be added to new members of an extended domain class (EF T4 generated POCO) that exploits the partial class feature.
So I could have:
a) bool OrderBusiness.OrderCanBeCancelledOnline(Order order) .. or (IOrder order)
or
b) bool order.CanBeCancelledOnline() .. i.e. the order itself knows whether or not it can be cancelled.
For me option b) is more OO. However, option a) allows more complex logic to be applied, e.g. using other domain objects or services.
At the moment I have a mix of both and this doesn't seem elegant.
Any guidance on this would be much appreciated!
The key thing about OO for me is that you tell objects to do things for you. You don't pull attributes out and make the decisions yourself (in a helper class or other).
So I agree with your assertion about option b). Since you require additional logic, there's no harm in performing an operation on the object whilst passing references to additional helper objects such that they collaborate. Whether you do this at the time of the operation itself, or pre-populate your order object with those collaborating entities is very much dependent upon your current situation.
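For example, one way to keep the decision on the order while still using outside logic is to pass the collaborator into the call (the policy interface below is a made-up example):

using System;

// A collaborator that encapsulates logic the order cannot know by itself.
public interface ICancellationPolicy
{
    bool AllowsOnlineCancellation(DateTime orderDate);
}

public class Order
{
    public DateTime OrderDate { get; private set; }
    public bool IsShipped { get; private set; }

    public Order(DateTime orderDate) { OrderDate = orderDate; }

    // The order still owns the decision, but delegates part of it to the collaborator.
    public bool CanBeCancelledOnline(ICancellationPolicy policy)
    {
        return !IsShipped && policy.AllowsOnlineCancellation(OrderDate);
    }
}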
You can also use extension methods on the POCOs to wrap your BLL methods.
So you can keep using your current BLLs.
In C#, something like:
// Everything must be static: both the class and the method.
public static class OrderBusiness
{
    // Note the 'this' modifier, which makes this an extension method on Order.
    public static bool CanBeCancelledOnline(this Order order)
    {
        // logic ...
        return true; // placeholder result
    }
}
And now you can do order.CanBeCancelledOnline()
This is likely to depend on the complexity of your application and does require some judgement that comes with experience. The short answer is that if your project is anything more than a pretty simple one then you are best off putting your logic in the domain classes.
The longer answer:
If you place your logic within a service layer you are effectively following the transaction script pattern, and ending up with an anaemic domain model. This can be a valid route, but it generally works best with simple and small projects. The problem is that the transaction script layer (your service layer) becomes more complicated to maintain as it grows.
So the alternative is to create a rich domain model that contains the logic within it. Keeping logic together with the class it applies to is a key part of good OO design, and in a complex project pretty essential. It usually requires a bit more thought and effort initially, which is why for very simple projects people sometimes use the transaction script pattern.
If you are unsure which to go with, it is normally not too difficult a job to refactor your logic from your service layer into the domain, but you need to make the call early enough that the job is not too large.
Contrary to one of the answers, using POCO classes does not mean you can't have business logic in your domain classes. POCO is about not applying framework specific structures to your domain classes, such as methods and interfaces specific to a particular ORM. A class with some functions to apply business logic is clearly still a Plain-Old-CLR-Object.
A common question, and one that is partially subjective.
IMO, you should go with Option A.
POCOs should be exactly that: "plain old CLR" objects. If you start applying business logic to them, they cease to be POCOs. :)
You can certainly put your business logic in the same assembly as your POCOs, just don't add methods directly to them; create helper classes to facilitate business rules. The only thing your POCOs should have is properties mapping to your domain model.
It really depends on how complex your business rules are. In our application, the business rules are very straightforward, so we use Option A.
But if your business rules start to get messy, consider using the Specification Pattern.
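A bare-bones C# sketch of that pattern (the rule inside IsSatisfiedBy is only a placeholder):

// Specification pattern: each business rule becomes its own object
// that can be composed, reused and tested in isolation.
public interface ISpecification<T>
{
    bool IsSatisfiedBy(T candidate);
}

public class Order
{
    public bool IsShipped { get; set; }
    public bool IsRefunded { get; set; }
}

public class OrderCanBeCancelledOnline : ISpecification<Order>
{
    public bool IsSatisfiedBy(Order order)
    {
        // placeholder rule
        return !order.IsShipped && !order.IsRefunded;
    }
}

// Usage: bool ok = new OrderCanBeCancelledOnline().IsSatisfiedBy(order);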

Passing DTOs around in the domain model

I see DTO types being created within and passed between types in the domain model. Is this good practice?
I always thought DTOs were to be used principally at context boundaries (i.e. at the edge of the object graph) to decouple context implementations (e.g. at the domain/UI boundary).
Your question is sort of subjective, but that's ok. As with most "hard and fast rules", there really are no hard and fast rules. There are only guidelines. There is always an exception, or some special case where the best course of action is to do something against best practices (like using a goto statement to instantly break out of multiple nested loops).
That being said, no, passing around DTO types within your domain model is not a good practice. DTO stands for data transfer object, the transfer typically meaning transport across some boundary. If you're staying inside your domain model, you shouldn't be converting to DTO types and then back to domain types.
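A small sketch of where the mapping belongs (the service, repository and DTO names are invented for illustration): domain objects are used internally, and the DTO only appears where data crosses the application boundary.

using System;

// The DTO exists only to carry data across the boundary (e.g. to the UI or over the wire).
public class OrderDto
{
    public Guid Id { get; set; }
    public decimal Total { get; set; }
}

// Domain object: used freely inside the domain and application layers.
public class Order
{
    public Guid Id { get; private set; }
    public decimal Total { get; private set; }
    public Order(Guid id, decimal total) { Id = id; Total = total; }
}

public interface IOrderRepository
{
    Order Get(Guid id);
}

// Application service at the edge: domain object in, DTO out.
public class OrderApplicationService
{
    private readonly IOrderRepository _orders;
    public OrderApplicationService(IOrderRepository orders) { _orders = orders; }

    public OrderDto GetOrder(Guid id)
    {
        Order order = _orders.Get(id);                               // domain type stays inside
        return new OrderDto { Id = order.Id, Total = order.Total };  // DTO crosses the boundary
    }
}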
Creating a DTO hierarchy that parallels your domain model, just for the sake of layering purity, seems like an anti-pattern to me. I'd argue against it every time.
EJB 1.0 encouraged using DTOs this way, because passing entity EJBs that were chatty was inefficient. People would load the data into DTOs to avoid network traffic. I think it's unnecessary now.