Domain Driven Design - Shared entities across bounded contexts - entity-framework

I am new to domain-driven design and am trying to learn it and implement it in my project. My project structure so far is similar to this:
Maintainance Folder
    Maintainance.Data (Class Library)
    Maintainance.Domain (Class Library)
    Maintainance.Domain.Tests (test project)
MovieBooking Folder
    MovieBooking.Data (Class Library)
    MovieBooking.Domain (Class Library)
    MovieBooking.Domain.Tests (test project)
SharedKernel
    Common things
Web Application
    MovieBooking MVC Web Application (which has a reference to MovieBooking.Domain)
In the Maintainance bounded context I am keeping all CRUD and GetAll-type operations for entities such as Movie, Country, Category and Subcategory in the Maintainance DbContext.
Now in the MovieBooking data layer I will also need to use these entities (mostly to display names or fill dropdowns in views; only a subset is needed - not all properties, just a few like Id and Name).
There are a few ways I can access these entities in the MovieBooking bounded context:
1. Via web services - create a Web API for the common entities like Movie, Country, Category and Subcategory, and call that API from the web project (to fill dropdowns or get names from entities).
2. Via a reference context (separate DbContext) - configure a DbSet and then map a database view (with only the required fields) to that DbSet.
Example:
modelBuilder.Entity<Movie>().ToTable(ViewName);
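For example, something along these lines (EF6-style; the MovieRef class, the context name and the view name are just illustrative):

using System.Data.Entity;

// Trimmed-down entity: only the fields this bounded context needs.
public class MovieRef
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Read-only reference context inside the MovieBooking BC.
public class MovieBookingReferenceContext : DbContext
{
    public DbSet<MovieRef> Movies { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Map the reduced entity to a database view instead of the full table.
        modelBuilder.Entity<MovieRef>().ToTable("vw_MovieNames");
        base.OnModelCreating(modelBuilder);
    }
}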
For (1), it can be a long-term implementation solution for me.
For (2), I would have to create a view (with only a few properties) for each required table, and that will increase the number of views in my DB drastically, as this is an enterprise-level application.
Is there any other way I can achieve this? Is there anything in DDD I am missing that I should look at?

Option 2, while it will save you time, is actually a very bad idea from the DDD perspective, as it allows for violations of the transactional boundary guarantees that each aggregate is meant to enforce/represent.
Option 1 seems the better option, although there is still quite a bit of wiggle room for interpretation based on your brief description of your proposed solution. If I understood correctly, it is generally recommended to follow the points below:
Do not expose your aggregate state directly, since this exposes internals and increases coupling. Simply create meaningful DTOs and use something like AutoMapper to map your aggregates to DTOs easily and with little effort before sending them over (see the sketch after these points).
Have a duplicate of the DTO definition in your client. This will reduce coupling and allow for easier deployments.
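To illustrate the DTO point, a minimal sketch (the MovieDto type and its property list are assumptions, not taken from the question; Movie stands for the aggregate from the question):

using AutoMapper;

// Hypothetical DTO sent to clients/other contexts; it exposes only what they need.
public class MovieDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class MovieMappings
{
    // One-time AutoMapper configuration.
    private static readonly IMapper Mapper =
        new MapperConfiguration(cfg => cfg.CreateMap<Movie, MovieDto>()).CreateMapper();

    // Map the aggregate to a DTO before it leaves the bounded context.
    public static MovieDto ToDto(Movie movie) => Mapper.Map<MovieDto>(movie);
}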
I strongly recommend reading the DDD orange book although I have to say that I cannot recall specifically on which chapter this is discussed. You will also benefit a lot by reading about hexagonal architecture (and I would search for that term in the orange book to find more info about your question).
There is actually one alternative that I can think of: if you're publishing events from your BCs, you can create a workflow to translate the domain events into "public" events and then, in the other BC, listen for the public events you need and store the data you need somewhere inside that BC. The difficulty of this ranges from very easy to quite problematic depending on your infrastructure. Be aware that it is not a very good idea to re-use your domain events for transmitting data to other BCs, since this closely couples the two BCs.
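A rough sketch of that translation idea (all type names here are made up for illustration, and the publish mechanism is left abstract):

using System;

// Internal domain event of one BC; it may carry details other BCs should never see.
public class MovieRenamed
{
    public int MovieId { get; set; }
    public string NewName { get; set; }
    public string InternalAuditInfo { get; set; }
}

// Slimmer "public" event that is safe to share across bounded contexts.
public class MoviePublicRenamed
{
    public int MovieId { get; set; }
    public string Name { get; set; }
}

public class MovieEventTranslator
{
    private readonly Action<MoviePublicRenamed> _publish; // e.g. a message-bus send

    public MovieEventTranslator(Action<MoviePublicRenamed> publish) => _publish = publish;

    public void Handle(MovieRenamed domainEvent)
    {
        // Only the data other contexts actually need crosses the boundary.
        _publish(new MoviePublicRenamed
        {
            MovieId = domainEvent.MovieId,
            Name = domainEvent.NewName
        });
    }
}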
I hope this helps. Please do not hesitate to elaborate if I did not understand the question well enough.

Related

Is it possible to do DDD and REST interface and language mapping?

REST has a uniform interface constraint which, in a very condensed, opinion-based form, means the following:
You have to use standards like HTTP, URI, MIME, etc...
You have to use hyperlinks.
You have to use RDF vocabs to annotate data and hyperlinks with semantics.
You do all of these to decouple the client from the implementation details of the service.
DDD with CQRS (or without it) is very similar as far as I understand.
By CQRS you define an interface to interact with the domain model. This interface consists of command and query classes.
By DDD you define domain events to decouple the domain model from the persistence details.
By DDD you have one ubiquitous language per bounded context which expresses the semantics.
You do all of these to completely decouple the domain model from the outside world.
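For example, the domain interface I have in mind consists of classes like these (the names are purely illustrative):

using System;

// A command expresses intent to change state.
public class SendMoneyCommand
{
    public Guid FromAccountId { get; set; }
    public Guid ToAccountId { get; set; }
    public decimal Amount { get; set; }
}

// A query only reads state and returns a result.
public class OrderTotalQuery
{
    public int OrderId { get; set; }
}

// A domain event records something that has already happened.
public class MoneySent
{
    public Guid FromAccountId { get; set; }
    public Guid ToAccountId { get; set; }
    public decimal Amount { get; set; }
}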
Is it possible to map the REST uniform interface to the domain interface defined by commands and queries and domain events? (So the REST service code would be generated automatically.)
Is it possible to map the linked data semantics to the ubiquitous languages? (So you wouldn't need to define very similar terms, just find and reuse existing vocabs.)
Please add a very simple mapping example to your answer showing why or why not!
I don't think this is possible. There is a term which I believe describes this problem: it is called ontology alignment.
In this case we have at least 3 ontologies:
the ubiquitous language (UL) of the domain model
the application specific vocab (ASO) of the REST service
the linked open data vocabs (LODO) which the application specific vocab uses
So we have at least 2 alignments:
the UL : ASO alignment
the ASO : LODO alignment
Our problem is related to the UL : ASO alignment, so let's talk about these ontologies.
The UL is object oriented, because we are talking about DDD and a domain model. So most of the domain objects (entities, value objects) are real objects and not data structures. The non-object-oriented parts of it are the DTOs, like command+domainEvent, query+result and error, on the interface of the domain model.
In contrast the ASO is strictly procedural, we manipulate the resources (data structures) using a set of standard methods (procedures) on them.
So from my perspective we are talking about 2 very different things, and we have the following options:
make the ASO more object oriented -> RPC
make the UL less object oriented -> anaemic domain model
So from my point of view we can do the following things:
we can automatically map entities to resources and commands to operations by CRUD, for example the HydraBundle does this with active records (we can do just the same with DDD and without CQRS)
we can manually map commands to operations by a complex domain model
the operation POST transaction {...} can result in a SendMoneyCommand {...}
the operation GET orders/123/total can result in an OrderTotalQuery {...}
we cannot map entities to resources by a complex domain model, because we have to define new resources to describe a new service or a new entity method, for example
the operation POST transaction {...} can result in account.sendMoney(anotherAccount, ...)
the operation GET orders/123/total can result in an SQL query on a read database without ever touching a single entity
I think it is not possible to do this kind of ontology alignment between DDD+CQRS and REST, but I am not an expert on this topic. What I think we can do is create an application-specific vocab with resource classes, properties and operations, and map the operations to the commands/queries and the properties to the command/query properties.
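As a concrete illustration of the manual mapping, a rough ASP.NET Web API style sketch (the dispatcher abstractions, controller and routes are assumptions, and the command/query shapes are the SendMoneyCommand/OrderTotalQuery examples from above):

using System.Web.Http;

// Hypothetical abstractions standing in for the domain model's interface.
public interface ICommandDispatcher { void Dispatch(object command); }
public interface IQueryDispatcher { TResult Execute<TResult>(object query); }

public class BankingController : ApiController
{
    private readonly ICommandDispatcher _commands;
    private readonly IQueryDispatcher _queries;

    public BankingController(ICommandDispatcher commands, IQueryDispatcher queries)
    {
        _commands = commands;
        _queries = queries;
    }

    // POST /transactions  ->  SendMoneyCommand
    [HttpPost, Route("transactions")]
    public IHttpActionResult PostTransaction(SendMoneyCommand command)
    {
        _commands.Dispatch(command);
        return Ok();
    }

    // GET /orders/123/total  ->  OrderTotalQuery
    [HttpGet, Route("orders/{orderId}/total")]
    public IHttpActionResult GetOrderTotal(int orderId)
    {
        decimal total = _queries.Execute<decimal>(new OrderTotalQuery { OrderId = orderId });
        return Ok(total);
    }
}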
You have posed some interesting questions here.
To start with I do not quite agree with
By DDD you define domain events to decouple the domain model from the
persistence details.
I think you might be confusing Event Sourcing (ES) with DDD. ES can be used with DDD, but it is very much optional; in fact, you should give it a lot of thought before choosing it as your persistence mechanism.
Now to the bulk of your question: do REST and DDD get along, and if yes, how?
My take on it: yes, they do get along. However, generally you do not want to expose your domain model via a REST interface; you want to build an abstraction over it and then expose that.
You can refer to this answer here, for a little more detail.
However, I cannot recommend the Implementing Domain-Driven Design book enough; Chapter 14, "Application", deals with your concern to a fair degree.
I could not have explained it more thoroughly than the book does, hence I am referring you there :)

Using ViewModels instead of DTOs as the result of a CQRS query

Reading an SO question, I realized that my read services could provide smarter objects like ViewModels instead of plain DTOs. This makes me reconsider what information should be provided by the objects returned by the read services.
Before, using just DTOs, my read service simply made a flat mapping of a database query into a hash-like structure, with minimal normalization and no behavior.
However, I tend to think of a ViewModel as something "smarter" that can carry generated information not provided by the database, like a status icon, calculated values, reformatted values, default values, etc.
I am starting to see that the construction of some ViewModel objects might get more complicated, and that there are potential downsides if I make my generic ReadServiceInterface return ViewModels only:
(1) Should I plan some design restriction for the ViewModels returned by my CQRS? Like making sure that their construction is almost as fast as a plain DTO?
(2) DTOs by nature are easily serialized and ready to be sent to an external system in a SOA architecture or embedded into a message. Does this mean that using ViewModels will have a negative impact on my architecture?
(3) Which type of ViewModels should I keep outside my Read Services?
(4) Should I expect all ViewModels to be retrieved from Read Services?
In the past I implemented some ViewModels that needed more than one query. In CQRS, I suppose, that is a design smell, since everything they provide should come from only one query.
I am starting a new project, where I had thought that any query would return either aggregate objects or DTOs. But now ViewModels come into play, and I am wondering:
(5) Should I plan that queries within my architecture will yield two type of objects (ViewModels+Aggregates) or three (+DTO)?
View models (VMs) serve a single master: the view. We usually consider the VM a pretty dumb object, so in this regard there's no technical difference between a VM and a DTO; only their purpose and semantics are different.
How you build a VM is an implementation detail. Some VMs are pre-generated and stored in a VM repository. Others are built in real time by a service (or a query handler), either by querying the db directly or by querying other repos/services and then assembling the results. There's no right or wrong and no rules about how to do it. It comes down to preference.
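For illustration, one way the "built in real time" flavour might look (all types below are invented, not from the question):

// Flat row returned by a single read-side query.
public class OrderRow
{
    public int OrderId { get; set; }
    public decimal Total { get; set; }
    public string Status { get; set; }
}

// "Smart" view model: carries derived, presentation-oriented values.
public class OrderSummaryVm
{
    public int OrderId { get; set; }
    public decimal Total { get; set; }
    public string TotalFormatted { get; set; }  // derived, not stored in the db
    public string StatusIcon { get; set; }      // derived from the status code
}

public static class OrderSummaryVmBuilder
{
    public static OrderSummaryVm Build(OrderRow row)
    {
        return new OrderSummaryVm
        {
            OrderId = row.OrderId,
            Total = row.Total,
            TotalFormatted = row.Total.ToString("C"),
            StatusIcon = row.Status == "Shipped" ? "icon-truck" : "icon-clock"
        };
    }
}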
In CQRS the important part is the separation of commands from queries, i.e. having more than one model. There's no rule about how many queries you should do or whether you should return a view model or a DTO. As long as you have at least one read model dedicated to queries, it's CQRS.
Don't let technicalities complicate your design. Proper design is more about high level structure and not low level implementation. Use CQRS because having a read model simplifies your app, not for other reasons. Aim for simplification and clean code, not for rigid rules that dictate a 'how to' recipe.

Using Scala Play and Slick, how do I pass non-trivial relationships to the view

I'm looking at building an application using Play. Imagine a typical eCommerce domain model: Customer, Order, Order Line Item, Product.
In investigating various options for persistence the recommendation seems to be to avoid ORM layers in Scala and use a different abstraction, such as Slick.
Where I am stuck is that with an ORM I could pass a single "Order" object to my view, which could then use existing relationships to pull related information from the Customer, OrderLines, and Products. With Slick, I'm currently passing a tuple of (Order, Customer, Seq[(OrderLine, Product)]) to the view to provide the same information. If you start to complicate the model a bit more, say with an Address on the customer object, it gets very messy quickly.
Is this the recommended approach or am I missing something? I've found several Play-Slick example applications, but they just have 1 or 2 entities, so they don't really address the issue I bring up here.
Have a look at the Slick-Examples, especially: This one
If you implemented your classes correctly, you should be able to access the Customer object via the Order object or vice versa (for example order.customer.name, or something like that, to access the customer's name).

How to expose read model from shared module

I am working on developing a set of assemblies that encapsulate parts of our domain that will be shared by many applications. Using the example of an order management system, one such assembly will contain all of the core operations an application can perform to/with an order. We are applying a simple version of CQS/CQRS so that all operations that change the state of the "system" are represented as public commands, such as CancelOrderCommand, ShipOrderCommand and CreateOrderCommand. The command handlers are internal to the assembly.
The question I am struggling to answer is how to best expose the read model to consuming code?
The read model will be used by consuming code to perform queries. I don't know all of the ways the read model will be used, so the interface needs to be flexible enough to allow any query.
What complicates it for me is that I not only need to expose my aggregate root, but there are also several "lookup" lists of related data that client applications may use. For example, each order has an associated OrderType which is data-driven (i.e., not an enum) and contains several properties that will drive some of our business rules controlling what operations can/cannot be performed, etc. It is easy to manage this relationship inside my module; however, a client application that allows order creation will most likely need to display the list of possible OrderTypes to the user. As a result, I need to expose not only the list of Order aggregates but also the supporting list of OrderTypes (and other lookup lists) from my read model.
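To make it concrete, the kind of read-side surface I have in mind looks roughly like this (the names are just illustrative):

using System;
using System.Collections.Generic;
using System.Linq.Expressions;

public class OrderReadModel
{
    public int OrderId { get; set; }
    public string OrderTypeCode { get; set; }
    public decimal Total { get; set; }
}

public class OrderTypeLookup
{
    public string Code { get; set; }
    public string DisplayName { get; set; }
    public bool AllowsCancellation { get; set; } // one of the rule-driving properties
}

// The read-side interface the shared assembly would expose to consuming applications.
public interface IOrderReadService
{
    IReadOnlyList<OrderReadModel> FindOrders(Expression<Func<OrderReadModel, bool>> filter);
    IReadOnlyList<OrderTypeLookup> GetOrderTypes();
}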
How is this typically done?
I'm not sure what else to explain that will help trigger a solution, so please ask away...
I have never seen a CQRS based implementation expose a full dataset for ad-hoc querying so this is an interesting situation! In a typical CQRS scenario you would expose very specific queries because you may want to raise events when they are called (for caching for example - see this post for more details on that).
However, since this is your design, let's not worry about "typical" or "correct" CQRS; I guess you just need a solution! One of the best new mechanisms for exposing data for flexible querying I have seen is the Open Data Protocol (OData). It will allow consumers to implement their own filtering, sorting and paging over a data source you expose.
Most implementations of this seem to deal with relational data. If you are dealing with a relational data source then OData might be a nice way to go. I suspect by your comment of "expose my aggregate root" that you might be using a document database? If so, there is one example I have seen of OData services on top of MongoDB: http://bloggingabout.net/blogs/vagif/archive/2012/10/11/mongodb-odata-provider-now-supports-arrays-and-nested-collections.aspx.
I hope that helps, OData is definitely worth looking into. It seems to be growing really quickly and is getting good support on both server and client technology platforms.

Linq to SQL, Entity Framework, Repository Pattern, and Dependency Injection

Stephan Walters' video on MVC and models is a very good and light discussion of the various topics listed in this question's title. The one question listed in the notes that went unanswered was:
If you create an interface/repository pattern for Linq2SQL, do Linq2SQL's classes still cause a dependency on LINQ, even though you pass the classes out via ToList()?
The answer is probably an easy YES; however, what standard mechanism would you use to represent the data?
Let's say you have a Product entity that is made up of three tables (Prices, Text, and Photos): you could have sets of prices for different regions, different text for localization, and different photos. (Sounds like a builder pattern.) Would you create a slice of these tables, grabbing the right prices, text, and photos into a single List? Since Lists may be proprietary, would you use a Dictionary object?
I thank you for your answers. I am very interested in the "standard and proper" way to do it rather than 101 possibilities.
Another quick question: is Entity Framework ready for a complicated database yet? There are a lot of constructs that Linq2SQL likes that EF does not. EF seems to require identity fields as primary keys (HAHA), but it seems like every demo does this. I want to use EF, but I constantly fail to make it work, falling back to Linq2SQL.
If you keep the L2S classes on the other side of the repository facade (remember, that's all a repository is - a facade), then you decouple the rest of your application from L2S. This means that the job of the code behind your repository is to turn the L2S classes into "domain" objects (custom classes), and the repository then returns those.
In this sense, the Repository is returning fully formed "Product" objects with all their related Price, Text, and Photo data. This is called an Aggregate Root.
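A minimal sketch of that idea (the ProductDataContext and its table/column names are assumptions; the point is that only the hand-rolled Product classes cross the facade):

using System.Collections.Generic;
using System.Linq;

// Hand-rolled "domain" classes - no LINQ to SQL types leak past the repository.
public class Price { public string Region { get; set; } public decimal Amount { get; set; } }
public class LocalizedText { public string Culture { get; set; } public string Value { get; set; } }
public class Photo { public string Url { get; set; } }

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<Price> Prices { get; set; }
    public List<LocalizedText> Texts { get; set; }
    public List<Photo> Photos { get; set; }
}

public class ProductRepository
{
    private readonly ProductDataContext _db; // the L2S DataContext stays inside the facade

    public ProductRepository(ProductDataContext db) { _db = db; }

    public Product GetById(int id)
    {
        // Query the L2S tables, then assemble a plain Product aggregate.
        var row = _db.Products.Single(p => p.Id == id);

        return new Product
        {
            Id = row.Id,
            Name = row.Name,
            Prices = _db.Prices.Where(p => p.ProductId == id)
                         .Select(p => new Price { Region = p.Region, Amount = p.Amount }).ToList(),
            Texts = _db.Texts.Where(t => t.ProductId == id)
                        .Select(t => new LocalizedText { Culture = t.Culture, Value = t.Value }).ToList(),
            Photos = _db.Photos.Where(ph => ph.ProductId == id)
                         .Select(ph => new Photo { Url = ph.Url }).ToList()
        };
    }
}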
There shouldn't be a problem with Lists, since they are CLR objects.
As far as EF for advanced scenarios goes, my advice would be: not yet, for the reasons you note.
The standard mechanism I'd use to represent the data is a Data Transfer Object. I would never return a LINQ to SQL or Entity Framework object across a service boundary, and I would hesitate to return one across a layer boundary of any kind. This is because these objects will serialize implementation-dependent data.