Domain Driven Design: where does the workflow logic lie?

In my project, I have a workflow which operates on multiple entities to accomplish a business transaction. What is the best place to represent the workflow logic? Currently I just create a "XXXManager" which is responsible for collaborating with entity objects to conclude a business transaction. Are there other options?

I'd say you're doing the right thing having something collaborate with multiple entities to get something done. The important thing is that each entity (and indeed each service) should have a single responsibility.
The overarching workflow you're talking about is something that you can consider as a part of your Application Layer.
According to Paul Gielens (paraphrased), the Application Layer's responsibility is to digest coarse-grained requests (messages/commands) to achieve a certain overarching goal. It does this by sending a message to Domain Services for fulfillment. It then also (optionally) decides to send notifications to the Infrastructure Service.
But then what is a 'Service'?! It's an overloaded term, but one that's described well (again, by Paul Gielens).
You might also want to read about Onion Architecture for more ideas...
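As a rough sketch of such an application-layer coordinator (all names below are hypothetical, not from the question): it digests one coarse-grained request and delegates the actual rules to entities and domain services.

```java
import java.util.List;

// Hypothetical collaborators; in a real system these would be fleshed out.
interface InventoryService { void reserve(List<String> items); }
interface BillingService { void charge(Order order); }
interface OrderRepository { void save(Order order); }

record PlaceOrderCommand(List<String> items) {}

record Order(List<String> items) {
    static Order create(List<String> items) { return new Order(items); }
}

// The application-layer service digests one coarse-grained request and
// coordinates entities and domain services to fulfil it; the business
// rules themselves stay in the entities and domain services.
public class PlaceOrderService {
    private final InventoryService inventory;
    private final BillingService billing;
    private final OrderRepository orders;

    public PlaceOrderService(InventoryService inventory,
                             BillingService billing,
                             OrderRepository orders) {
        this.inventory = inventory;
        this.billing = billing;
        this.orders = orders;
    }

    public void placeOrder(PlaceOrderCommand command) {
        inventory.reserve(command.items());          // domain service
        Order order = Order.create(command.items()); // entity
        billing.charge(order);                       // domain service
        orders.save(order);                          // repository
        // optionally notify infrastructure services (email, messaging, ...)
    }
}
```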

Usually there is a domain object that should actually handle the control flow but gets mistaken for a mere "entity".
Is there an instance of something that gets created as a result of this workflow? If so, the workflow logic probably belongs in there. Consider "Order" in the model below.
(Diagram: Order model — http://img685.imageshack.us/img685/4383/order.png)
"Order" is both a noun and a verb. The "Order" is the object that is created as a result of "Ordering". Like any good class, it has both data and behavior (and I don't mean getters and setters). The behavior is the dynamic process that goes with the data, i.e., the ordering workflow. "Order" is the controller.
This is why OO was invented.
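A minimal sketch of an Order that owns its own ordering workflow (the states and transitions are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// A sketch of "Order" as both data and behavior: the entity owns the
// ordering workflow instead of delegating it to an XXXManager.
public class Order {
    private enum State { DRAFT, PLACED, PAID, SHIPPED }

    private final List<String> items = new ArrayList<>();
    private State state = State.DRAFT;

    public void addItem(String item) {
        require(state == State.DRAFT, "items can only be added to a draft order");
        items.add(item);
    }

    public void place() {
        require(state == State.DRAFT && !items.isEmpty(), "nothing to place");
        state = State.PLACED; // workflow transitions live with the data
    }

    public void pay() {
        require(state == State.PLACED, "order must be placed before paying");
        state = State.PAID;
    }

    public void ship() {
        require(state == State.PAID, "order must be paid before shipping");
        state = State.SHIPPED;
    }

    private static void require(boolean condition, String message) {
        if (!condition) throw new IllegalStateException(message);
    }
}
```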

DDD might not be exactly about this sort of thing, so I would suggest taking a look at the Service Layer architectural pattern. Martin Fowler's book Patterns of Enterprise Application Architecture explains it well. You can find the description of the pattern on Fowler's web site as well.

Creating workflow systems can be a daunting prospect. Have you considered using workflow engines?
If I understand you correctly, you will need to create a manager which keeps track of the different transactions in the workflow related to a user. There are probably other ways of doing it, but I've always used engines.

To the great answers, I'd like to add "domain events" (the link is just to one possible implementation), which is something Evans himself has come to put more focus on ("increased emphasis on events").
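A minimal sketch of one possible implementation (all names hypothetical): the aggregate records events as it changes, and the application layer publishes them once the work is done.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// A minimal domain-events sketch, one of many possible implementations:
// the aggregate records what happened; publishing happens afterwards.
record OrderPlaced(String orderId) {}

class Order {
    private final List<Object> pendingEvents = new ArrayList<>();
    private final String id;

    Order(String id) { this.id = id; }

    void place() {
        // ... state changes ...
        pendingEvents.add(new OrderPlaced(id)); // record, don't publish yet
    }

    List<Object> pullEvents() {
        List<Object> events = new ArrayList<>(pendingEvents);
        pendingEvents.clear();
        return events;
    }
}

class EventPublisher {
    private final List<Consumer<Object>> subscribers = new ArrayList<>();

    void subscribe(Consumer<Object> subscriber) { subscribers.add(subscriber); }

    void publish(List<Object> events) {
        events.forEach(event -> subscribers.forEach(s -> s.accept(event)));
    }
}
```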


How to structure a RESTful backend API with a database?

I want to make an API using REST which interacts with (stores data in) a database.
While reading about design patterns I came across the remote facade, and the book I was reading mentions that the role of this facade is to translate the coarse-grained methods from the remote calls into fine-grained local calls, and that it should not have any extra logic. As an explanation, it says that the program should still work without this facade.
Here's an example
Yet I have two questions:
Considering I also have a database, does it make sense to split the general call into specific calls for each attribute? Doesn't it make more sense to just have a general "get data" method that runs one query against the database and converts it into a usable object, to reduce the number of database calls? So instead of splitting get-address into get-street, get-city, and get-zip, make one DB call for all that info.
With all this in mind, and, in my case using golang, how should the project be structured in terms of files and functions?
I will have the main file with all the endpoints from the REST API, calling the controllers that handle these requests.
I will have a set of files that define those controllers. Are these controllers the remote facade? Should those methods not have logic in that case, and just call the equivalent local methods?
Should the local methods call the database directly, or should they use some sort of helper class that accesses the database?
Assuming the answer to all of these is yes, does the following structure make sense?
Main
Controllers
Domain
Database helper
First and foremost, as Mike Amundsen has stated
Your data model is not your object model is not your resource model is not your affordance model
Jim Webber said something very similar: by implementing a REST architecture you have an integration model, in the form of the Web, which is governed by HTTP, the other model being the domain model. Resources adapt and project your domain model to the world, though there is no 1:1 mapping between the data in your database and the representations you send out. A typical REST system has many more resources than you have DB entries in your domain model.
With that being said, it is hard to give concrete advice on how you should structure your project, especially in terms of a certain framework you want to use. According to Robert "Uncle Bob" C. Martin, looking at the code structure should tell you something about the intent of the application and not about the framework you use; in his words, architecture is about intent. What you usually see instead is the default structure imposed by a framework such as Maven, Ruby on Rails, and so on. For golang you should probably read through certain documentation or blogs, which might or might not give you some ideas.
In terms of accessing the database, you might either follow a microservice architecture where each service maintains its own database, or attempt something like a distributed monolith that acts as one cohesive system and shares the database among all its parts. If you scale out and a couple of parallel services consume data, e.g. via a message broker, you might need a distributed lock and/or queue to guarantee that the data is not consumed by multiple instances at the same time.
What you should do, however, is design your data layer in a way that scales well. What many developers often forget or underestimate is the benefit they can gain from caching. Links are basically used on the Web to reference from one resource to another, giving the relation some semantic context through well-defined link-relation names. Link relations also allow a server to control its own namespace and change URIs as needed. But URIs are not only pointers to a resource a client can invoke; they are also keys for a cache. Caching can take place in multiple locations: on the server side to avoid costly calculations or lookups, on the client side to avoid sending requests out at all, or on intermediary hops, which take pressure off heavily requested servers. Fielding even made caching a constraint that needs to be respected.
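As an illustration of URIs doubling as cache keys, here is a minimal Spring-style sketch (the Address resource and its lookup are hypothetical) that marks a response as cacheable for clients and intermediaries:

```java
import java.util.concurrent.TimeUnit;
import org.springframework.http.CacheControl;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

record Address(String street, String city, String zip) {}

// The URI acts as the cache key: repeated GETs for the same address can be
// answered by a client or intermediary cache without hitting the origin.
@RestController
class AddressController {

    @GetMapping("/addresses/{id}")
    ResponseEntity<Address> get(@PathVariable String id) {
        Address address = lookup(id);
        return ResponseEntity.ok()
                .cacheControl(CacheControl.maxAge(1, TimeUnit.HOURS))
                .body(address);
    }

    private Address lookup(String id) {
        // stand-in for the real data access
        return new Address("Main St 1", "Springfield", "12345");
    }
}
```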
Which attributes you should create queries for depends entirely on the use case you attempt to depict. In the address example given, it does make sense to return the address information all at once, as the street or zip code is rarely queried on its own. If the address is part of some user or employee data, it is less clear whether to return that information as part of the user or employee data or just as a link that should be queried on its own as part of a further request. What you return may also depend on the capabilities of the media type the client and your service agree upon (content negotiation).
If you implement something like a grouping, e.g. some football players and certain categories they belong to, such as their teams and whether they are offense or defense players, you might have a Team A resource that includes all of the players as embedded data. Within the DB you could either have a separate table for teams with references to the respective players, or the team could just be a column in the player table. We don't know, and a client usually doesn't care either. From a design perspective you should, however, be aware of the benefits and consequences of embedding all the players at once, of providing only links to the respective players, or of using a mixed approach that presents some base data plus a link to learn further details.
The latter approach is probably the most sensible way, as it gives a client enough information to determine whether more detailed data is needed or not. If needed, a simple GET request to the provided URI is enough, which might be served by a cache and thus never reach the actual server at all. The first approach has the clear disadvantage that it doesn't reuse caching optimally and may return far more data than actually needed. The links-only approach may not provide enough information, forcing the client to perform a follow-up request to learn data about the team member. But as mentioned before, you as the service designer decide which URIs or queries are returned to the client and can design your system and data model accordingly.
In general, what you do in a REST architecture is provide a client with choices. It is good practice to design the overall interaction flow as a state machine which is traversed by receiving requests and returning responses. As REST uses the same interaction model as the Web, it probably feels more natural to design the whole system as if you were implementing it for the Web, and then apply that design to your REST system.
Whether controllers should contain business logic or not is primarily an opinionated question. As Jim Webber correctly stated, HTTP, which is the de-facto transport layer of REST, is an
application protocol whose application domain is the transfer of documents over a network. That is what HTTP does. It moves documents around. ... HTTP is an application protocol, but it is NOT YOUR application protocol.
He further points out that you have to narrow HTTP into a domain application protocol and trigger business activities as a side effect of moving documents around the network. So, it's the side effect of moving documents over the network that triggers your business logic. There is no strict rule on whether to include business logic in your controller or not, but usually you try to keep the business logic in its own layer, i.e. as a service that you just invoke from within the controller. That allows you to test the business logic without the controller and thus without a real HTTP request.
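A minimal Spring-style sketch of that split (OrderService, PlaceOrderRequest and the endpoint are hypothetical names): the controller only translates the HTTP document into a call on the business layer, so the logic is testable without any HTTP request.

```java
import java.net.URI;
import java.util.List;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical business-layer service: plain Java, testable without HTTP.
interface OrderService {
    String placeOrder(String customerId, List<String> items);
}

record PlaceOrderRequest(String customerId, List<String> items) {}

// The controller only moves the document: it translates the HTTP request
// into a call on the business layer and the result back into a response.
@RestController
class OrderController {
    private final OrderService orderService;

    OrderController(OrderService orderService) {
        this.orderService = orderService;
    }

    @PostMapping("/orders")
    ResponseEntity<Void> placeOrder(@RequestBody PlaceOrderRequest request) {
        String id = orderService.placeOrder(request.customerId(), request.items());
        return ResponseEntity.created(URI.create("/orders/" + id)).build();
    }
}
```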
While this answer can't provide more detailed information, partly due to the broad nature of the question itself, I hope I could shed some light on which areas you should put some thought into, and show that your data model is not necessarily your resource or affordance model.

How many layers should your small REST system have?

I am building a simple REST deployable using Spring Boot. I decided to create it by writing a failing acceptance test first, followed by TDD until it's green.
My module is pretty simple; I have 3 APIs:
Retrieves a list of data from the datastore.
Adds an item to the datastore.
Deletes an item from the datastore.
I feel like it is a good idea to abstract the datastore, have it backed by a Map data structure for testing purposes, and use it with either a NoSQL or SQL DB for deployments/releases and end-to-end testing.
On the service-layer side I am unsure, since it would just delegate calls to the repository with no logic.
So the standard approach would be controller->service->repository. In my case the service does not do much (possibly some exception handling, but not more) and I will end up with an interface and an implementation as extras, as well as a few more lines of code. I feel like going for a controller->repository solution in my situation, but it is not a practice I have seen and I'm not sure how others would see it.
What's the best way to implement this sort of system?
I feel like it is a good idea to abstract the datastore
You are right. The abstraction is called 'Repository' in DDD (Domain Driven Design) for example.
On the service-layer side I am unsure, since it would just delegate calls to the repository with no logic.
I'm pretty sure there is data that you want to validate. So you should have a layer in the middle (e.g. the domain layer) which will be in charge of this validation.
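A tiny sketch of that middle layer (the Item type and its rules are hypothetical): the domain guards its invariants before anything reaches the repository.

```java
// Hypothetical domain type that validates itself on construction, so the
// controller and repository can stay free of these rules.
record Item(String name, int quantity) {
    Item {
        if (name == null || name.isBlank()) {
            throw new IllegalArgumentException("item name must not be blank");
        }
        if (quantity < 0) {
            throw new IllegalArgumentException("quantity must not be negative");
        }
    }
}
```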
Even so, if you feel like your application is simple and doesn't require such layers, go without. You will have a less supple design, but more simplicity at first. Be careful: while evolving your app, you could run into trouble.
Hope this will help.
This is rather an opinion-based question, but if you are asking whether a 3-layer architecture is a must, to that I say no. Be pragmatic: if you don't see a reason for a class/layer/module to exist, it does not need to exist.
A repository has a purpose (to store/retrieve), and the API layer has a purpose: to offer those things through HTTP.
Here is an article on building small services with the Spark framework: https://dzone.com/articles/building-simple-restful-api
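For the concrete case in the question (list/add/delete), a minimal controller->repository sketch in Spring Boot might look like this; the Item entity and its fields are hypothetical, and Spring Data generates the repository implementation. If real logic shows up later, a service layer can be introduced between the two without touching the HTTP surface.

```java
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import org.springframework.data.repository.CrudRepository;
import org.springframework.web.bind.annotation.*;

// Hypothetical JPA entity backing the three APIs.
@Entity
class Item {
    @Id
    @GeneratedValue
    private Long id;
    private String name;

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// Spring Data derives the implementation from this interface.
interface ItemRepository extends CrudRepository<Item, Long> {}

@RestController
@RequestMapping("/items")
class ItemController {
    private final ItemRepository repository;

    ItemController(ItemRepository repository) {
        this.repository = repository;
    }

    @GetMapping
    Iterable<Item> list() { return repository.findAll(); }             // API 1: list

    @PostMapping
    Item add(@RequestBody Item item) { return repository.save(item); } // API 2: add

    @DeleteMapping("/{id}")
    void delete(@PathVariable Long id) { repository.deleteById(id); }  // API 3: delete
}
```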

CQRS - is it allowed to call the read side from the write side?

I started reading about CQRS and I'm a little confused.
Is it allowed to call the read side from within the write side to get additional information?
At http://cqrs.nu/Faq/command-handlers they say it is not allowed, but in the CQRS Journey code I found that they call a service 'IPricingService' which internally uses a DAO service class.
So what must I do to get additional information inside my aggregate root?
CQRS Journey should not be seen as a manual. It is just the story of some team fighting their way to CQRS with all the limitations of using only the Microsoft stack. Per se, you should not use your read model in command handlers or domain logic. But you can query your read model from the client to fetch the data you need for your command and to validate the command.
Since I got some downvotes on this answer, I need to point out that what I wrote is the established practice within the pattern: neither does the read side access the write side, nor does the write side get data from the read side.
However, the definition of "client" could be a subject of discussion. For example, I would not trust a public-facing JS browser application to be a proper "client". Instead, I would use my REST API layer as the "client" in CQRS, and the web application would be just a UI layer for this client. In this case, the REST API service call processing is a legitimate read-side reader, since it needs to validate everything the UI layer sends to prevent forgery and to check some business rules. When this work is done, the command is formed and sent over to the write side. The validations and everything else are synchronous, and command handling is then asynchronous.
UPDATE: In the light of some disagreements below, I would like to point to Udi's article from 2009 talking about CQRS in general, commands and validation in particular.
The CQRS FAQ (http://cqrs.nu/Faq) suggests:
"How can I communicate between bounded contexts?
Exclusively in terms of their public API. This could involve subscribing to events coming from another bounded context. Or one bounded context could act like a regular client of another, sending commands and queries."
So although within one BC it's not possible to use the read side from the write side and vice versa, another bounded context or service could. In essence, this would be acting like a human using the user interface.
Yes, assuming you have accepted the eventual consistency of the read side. Now the question is where. Although there is no hard rule on this, it is preferable to pass the data to the command handler as opposed to retrieving it inside. From my observation there are two ways:
Do it in a domain service
Basically, create a layer where you execute the necessary queries to build the data. This is as straightforward as making API calls. However, if you have your microservices running on Lambda/serverless, it's probably not a great fit, as we tend to avoid a situation where one Lambda calls another.
Do it on the client side
Have the client query the data, then pass it to you. To prevent tampering, encrypt it. You can implement the decryption in the same place you validate and transform the DTO into a command. To me this is the better alternative, as it requires fewer moving parts.
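A rough sketch of the second option (all names hypothetical): the API layer, acting as the "client", queries the read side and carries the result inside the command, so the handler itself never touches the read model.

```java
import java.math.BigDecimal;

// The command carries the read-side data it needs; the handler stays
// confined to the write model.
record AddOrderLine(String orderId, String productId,
                    int quantity, BigDecimal unitPrice) {}

interface PriceReadModel { // read side, queried before dispatch
    BigDecimal currentPrice(String productId);
}

class AddOrderLineHandler {
    void handle(AddOrderLine command) {
        // no read-model access here: everything needed arrived in the command;
        // load the aggregate, invoke its behavior, persist the result ...
    }
}

class OrderApi {
    private final PriceReadModel prices;
    private final AddOrderLineHandler handler;

    OrderApi(PriceReadModel prices, AddOrderLineHandler handler) {
        this.prices = prices;
        this.handler = handler;
    }

    void addOrderLine(String orderId, String productId, int quantity) {
        BigDecimal price = prices.currentPrice(productId); // read-side query
        handler.handle(new AddOrderLine(orderId, productId, quantity, price));
    }
}
```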
I think it depends.
If in your architecture the "command side" updates the projections in real time (synchronously), you could do that by calling the query API (although that seems strange).
But if your projections (query side) are updated asynchronously, it would be a bad idea: you could get stale, "unreal" data.
Maybe this situation suggests a design problem that you should solve.
For instance: if from one context/domain you think you need information from another, it could be a domain definition problem.
I assume this because reading data from itself (the same domain) during a command operation doesn't make much sense (in that case it could be an API design problem).

Is it good to return the domain model from a REST API over a DDD application?

If you were to have a REST layer on top of your DDD app for CRUD, would you let the REST layer spit out the domain model (in terms of data), say for a GET?
Generally, you'd want to be able to change your domain objects (for instance when you learn something new about the domain), without having to change a public interface/API to your system. Same thing the other way around: if a change is required to a public interface, you don't want to have to change your domain model.
So from this perspective I'd never expose my domain objects as-is over a public interface. Instead I'd create data transfer objects (DTO) that are part of the public interface. This way, changes to my domain and public api can change independently.
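A minimal sketch of that separation (Customer and CustomerDto are hypothetical): the domain object can change shape freely as long as the mapping keeps producing the same DTO.

```java
// The domain object stays internal; only the DTO is part of the public API.
class Customer { // internal domain object
    private final String firstName;
    private final String lastName;

    Customer(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }

    String firstName() { return firstName; }
    String lastName() { return lastName; }
}

record CustomerDto(String displayName) {} // stable public contract

class CustomerMapper {
    static CustomerDto toDto(Customer customer) {
        // if the domain later renames or splits fields, only this mapping
        // changes; the representation exposed by the API stays the same
        return new CustomerDto(customer.firstName() + " " + customer.lastName());
    }
}
```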
You should not expose the DDD model. This is absolutely correct, because a SOA frontend should not expose implementation details to clients. Your users should depend on a business function, not an implementation detail… But this assumes a nice design of several, maybe heterogeneous, applications united into a SOA bus.
I would like to add to the answer, because the mention of a CRUD interface makes me think that this could be a case of SOA abuse, where SOA principles are used to glue the layers of an application instead of a network of applications. SOA is meant as a way for the enterprise to connect its systems; it is not a way to implement MVC! So simple, yet so misunderstood. For example, just because your front-end GUI uses services to access the backend, you do not have an "SOA application"… whatever that means.
If this is a case of SOA used to glue layers, please revise your design and use an appropriate design architecture for that level of abstraction. Otherwise you will misinterpret the recommendations found here about not exposing the DDD model and not being CRUDy, and you will surely end up creating a separate domain model for the services interface, which you will then have to map to the DDD model, which is so complicated that you will need to use Dozer and things like that to map the same thing with different names, and so forth, until you end up with a bloated, unmaintainable mess…
.. just be careful.
-Alex
Redzedi is so right that we need a clarification....
Like everything, this is much easier said than done. Serializing a complex domain model can be so difficult that you end up either not putting any logic in the domain, i.e. the anemic model antipattern (http://martinfowler.com/bliki/AnemicDomainModel.html), or having a separate anemic model for persistence, i.e. DTOs.
I don't know which is worse, but both options are bad. You should put the logic that belongs in the model in the model, and you should be able to serialize it directly everywhere.
In my experience using the domain model for many years, I believe the best thing is a point in the middle. Yes, as Fowler and Evans state, business objects should carry logic, but not all of it (http://codebetter.com/gregyoung/2009/07/15/the-anemic-domain-model-pattern/); a little anemia with a nice service layer is best.
For example, an invoice should know about its items and have a procedure to calculate its total, which depends on the items. But an invoice's item does not need to know about invoicing. So what happens when an item changes in cost? Should it have a pointer back to the parent invoice, as a circular reference, and call the invoice's calculate-total procedure?
I believe not. I think that's a task for the service layer, which should receive the event first and then orchestrate the procedure, without having to couple all the business objects together for implementation purposes and violating the business interaction rules, which is what a domain model is for.
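A small sketch of that invoice example (names hypothetical): the Invoice can total its items, the item knows nothing about invoicing, and a service-layer object receives the "cost changed" event and orchestrates the recalculation.

```java
import java.math.BigDecimal;
import java.util.List;

class InvoiceItem {
    private BigDecimal cost;

    InvoiceItem(BigDecimal cost) { this.cost = cost; }

    BigDecimal cost() { return cost; }

    void changeCost(BigDecimal newCost) { this.cost = newCost; }
    // note: no back-pointer to Invoice
}

class Invoice {
    private final List<InvoiceItem> items;

    Invoice(List<InvoiceItem> items) { this.items = items; }

    BigDecimal total() { // behavior lives next to the data it depends on
        return items.stream()
                .map(InvoiceItem::cost)
                .reduce(BigDecimal.ZERO, BigDecimal::add);
    }
}

class InvoicingService {
    // the service layer reacts to the event and orchestrates the objects,
    // keeping the business objects decoupled from one another
    void onItemCostChanged(Invoice invoice, InvoiceItem item, BigDecimal newCost) {
        item.changeCost(newCost);
        BigDecimal newTotal = invoice.total();
        // ... persist the new total, notify interested parties, etc.
    }
}
```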
-Alex

What are the pros and cons of using a Data Services Layer?

This is a discussion that seems to reappear regularly in the SOA world. I heard it as far back as '95, but it's probably been a topic of conversation long before that. I definitely have my own opinions about it, but I'd like to hear some good, solid arguments for having a Data Services Layer, and likewise for arguments against having one.
What value does it add to a systems architecture?
What are the inherent pitfalls?
What are common anti-patterns?
Links to articles are definitely acceptable.
To avoid confusion, this article describes the type of Data Service Layer I'm talking about. Essentially, a thin layer above the database that provides SOAP access to data and includes no business logic.
Data services are quite data-oriented, suited to projects without logic that just do CRUD. For instance, they can suit a log service or a properties service, where you will just do CRUD against it.
If the domain behind that database is complex, with complex logic, you will need to manage that logic above that service (maybe in an orchestration), so you will divide the logic into several services. In that case I think it is better to use a single thicker service (DAL, BLL and SIL) that manages that domain and exposes just one interface.
In the end it is just another tool; it depends on the problem.