Shared Entities in Clean Architecture - Flutter

I'm reading up on clean architecture and want to apply it to some software I've written. In this software, entities such as User and Group play a very central role. For this reason it would make sense to share these entities across the codebase to avoid repetition, but doing so also seems to go against the rules of clean architecture.
What is the right decision to be made in this case? Should I just throw in a "shared/entities" folder in core?
The project I am working on is programmed mainly in Flutter, if that's relevant to the answer.

I know zero about Flutter or Dart (apart from what I just Googled), but it looks like it supports interfaces.
A good approach is not to throw users and groups into a shared folder, but to create some really light-weight representations of them, and throw those into a shared folder / common library. An interface is a great way to do this because in the interface you can specify the important basics that parts of your application need to generically know about, without adding implementation specific baggage and dependencies.
When designing your interfaces, be sure to be aware of principles like SOLID - especially the Interface segregation principle (the I in SOLID).
To elaborate a bit more:
Define an IUser interface.
Have your User class implement IUser.
Have some way of instantiating IUser using your implementations, like User.
Wherever your application references User, switch it to use IUser.
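For illustration, a minimal C# sketch of those four steps (the same idea applies in Dart, where any class exposes an implicit interface); the member names and the UserFactory helper are hypothetical:

// Shared / common library: the light-weight contract the rest of the app depends on.
public interface IUser
{
    string Id { get; }
    string DisplayName { get; }
}

// Implementation-specific type, kept out of the shared library.
public class User : IUser
{
    public string Id { get; }
    public string DisplayName { get; }

    public User(string id, string displayName)
    {
        Id = id;
        DisplayName = displayName;
    }
}

// One way of instantiating IUser without callers referencing User directly.
public static class UserFactory
{
    public static IUser Create(string id, string displayName) => new User(id, displayName);
}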
Update in response to comment
My usual starting-point architecture is to define DTOs which I share across all layers of the app (UI, business logic, data access), basically what I think you're suggesting. For me, DTOs are as light-weight as possible: basically information carriers. These are shared across all layers in a common package, as per clean architecture, so that the different layers of the architecture (e.g. business logic and data access) are not tightly coupled to each other.
However, users can be a bit of a special case in that they can have security / identity / credential related concerns which can become quite specialized and/or implementation and technology specific. This, I suspect, is partly where #duffymo's comment about tight coupling comes in.
With my answer I'm basically saying that, yes, you can put DTOs into a common folder, and for tricky cases (like users) you always have the option of defining interfaces on them as well; e.g. the User DTO might have properties typed as interfaces rather than concrete types:
public class UserInfo
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    // An interface-typed property keeps implementation-specific baggage out of the DTO.
    public ITaxData TaxData { get; set; }
}
I admit it's entirely possible I'm over-complicating things with the "(some) DTO properties as interfaces" idea.

Related

Domain Driven Design - Shared entities across bounded contexts

I am new to domain-driven design and trying to learn it and apply it in my project. My project structure so far is similar to this:

Maintainance folder
    Maintainance.Data (class library)
    Maintainance.Domain (class library)
    Maintainance.Domain.Tests (test project)
MovieBooking folder
    MovieBooking.Data (class library)
    MovieBooking.Domain (class library)
    MovieBooking.Domain.Tests (test project)
SharedKernel
    Common things
Web application
    MovieBooking MVC web application (which has a reference to MovieBooking.Domain)
In the Maintainance bounded context I keep all the CRUD / GetAll-type operations for entities such as Movie, Country, Category and Subcategory in the Maintainance DbContext.
Now, in the MovieBooking data layer I will also need to use these entities (mostly to display a name or fill dropdowns in a view; only a subset of properties is needed, such as Id and Name).
There are a few ways I can access these entities in the MovieBooking bounded context:
1. Via web services: create a web API for the common entities (Movie, Country, Category, Subcategory) and call it from the web project (to fill dropdowns or fetch names from entities).
2. Via a reference context (a separate DbContext): configure a DbSet and map a database view (with only the required fields) to it.
Example:
modelBuilder.Entity<Movie>().ToTable("ViewName");
Option (1) could be a long-term implementation solution for me.
For option (2), I would have to create a view (with only a few properties) for each required table, which would drastically increase the number of views in my database, as this is an enterprise-level application.
Is there any other way I can achieve this? Is there anything in DDD I am missing here?
Option 2, while it will save you time, is actually a very bad idea from the DDD perspective, as it allows for violations of the transactional boundary guarantees that each aggregate is meant to enforce/represent.
Option 1 seems the better option, although there is still quite a bit of wiggle room for interpretation based on your brief description of your proposed solution. If I understood correctly, it is generally recommended to follow the points below:
Do not expose your aggregate state directly, since this exposes internals and increases coupling. Simply create meaningful DTOs and use something like AutoMapper to map your aggregates to DTOs easily and with little effort before sending them over (see the sketch after this list).
Have a duplicate of the DTO definition in your client. This will reduce coupling and allow for easier deployments.
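As a rough C# sketch of the first point, assuming the classic AutoMapper configuration API; the Movie aggregate is taken from the question, and the MovieDto shape is hypothetical:

using System;
using AutoMapper;

public class Movie // stand-in for the real aggregate
{
    public Guid Id { get; private set; }
    public string Name { get; private set; }
}

// Flat, meaningful DTO: just an information carrier.
public class MovieDto
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

public static class MovieMapping
{
    // Configure the aggregate-to-DTO mapping once, at startup.
    private static readonly IMapper Mapper =
        new MapperConfiguration(cfg => cfg.CreateMap<Movie, MovieDto>()).CreateMapper();

    // Map the aggregate before sending it over, so its internals stay hidden.
    public static MovieDto ToDto(Movie movie) => Mapper.Map<MovieDto>(movie);
}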
I strongly recommend reading the DDD orange book, although I have to say I cannot recall in which chapter this is discussed. You will also benefit a lot from reading about hexagonal architecture (and I would search for that term in the orange book to find more about your question).
There is actually one alternative I can think of: if you're publishing events from your BCs, you can create a workflow that translates the domain events into "public" events, and then, in the other BC, listen for the public events you need and store the data you need somewhere inside that BC. The difficulty of this ranges from very easy to quite problematic depending on your infrastructure. Be aware that it is not a good idea to re-use your domain events for transmitting data to other BCs, since this tightly couples the two BCs.
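A C# sketch of that translation step, with hypothetical event types and a delegate standing in for whatever bus your infrastructure provides:

using System;

// Internal domain event: owned by its BC and free to change shape.
public record MovieRenamed(Guid MovieId, string NewName);

// Public event: a stable, deliberately minimal contract other BCs may depend on.
public record MovieNameChanged(Guid MovieId, string Name);

public class MovieEventTranslator
{
    private readonly Action<MovieNameChanged> _publish;

    public MovieEventTranslator(Action<MovieNameChanged> publish) => _publish = publish;

    // Translate rather than re-use the domain event, so the two BCs stay decoupled.
    public void Handle(MovieRenamed domainEvent) =>
        _publish(new MovieNameChanged(domainEvent.MovieId, domainEvent.NewName));
}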
I hope this helps. Please do not hesitate to elaborate if I did not understand the question well enough.

Foreign key between aggregate roots

I understand the concept of an aggregate root, and I know that one aggregate root must reference another by identity (http://dddcommunity.org/wp-content/uploads/files/pdf_articles/Vernon_2011_2.pdf), so what I don't get is how I can force Entity Framework to add a foreign key constraint between two aggregates.
Let's suppose I have a simplified domain:
using System;
using System.ComponentModel.DataAnnotations;

public class AggregateOne
{
    [Key]
    public Guid AggregateOneID { get; private set; }
    public Guid AggregateTwoFK { get; private set; }
    /* Other properties and methods */
}

public class AggregateTwo
{
    [Key]
    public Guid AggregateTwoID { get; private set; }
    /* Other properties and methods */
}
With this domain design, Entity Framework doesn't know that there is a relationship between AggregateOne and AggregateTwo, and consequently there is no foreign key in the generated database.
In DDD, EF doesn't exist. Domain relationships are not the same as database relationships. Don't try to mix EF with domain modeling, they don't work together. So in a nutshell, what you have there is not DDD, just plain old relational db masquerading as DDD. EF would be used by the Repositories and would care about persisting one Aggregate Root (AR).
Two ARs can work together; however, you need to model the process according to the domain. EF is there to act as a db for the app; it's concerned with persistence issues and shouldn't care about the Domain. Persistence is all about storage and not about reflecting domain relationships (the EF entity is not the domain entity, although they can have the same name and look similar; the important detail is that they belong to different layers and handle different issues). The Domain repositories care only about persisting the AR in a way that can easily be restored when it changes. If more ARs need to be persisted together, embrace eventual consistency and learn how to use a service bus and sagas. It will greatly simplify your life (consider it a kind of implementation of the unit of work pattern).
For querying, the cleanest and most elegant way is to generate/update a read model suitable for the querying use cases, usually after a domain event tells the "world" that something changed in the Domain (see the sketch below).
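A minimal C# sketch of that idea, with hypothetical names: a handler listens for a domain event and updates a flat read model shaped purely for querying:

using System;
using System.Collections.Generic;

public record OrderPlaced(Guid OrderId, string CustomerName);

// Denormalized row shaped for the querying use case; no domain behaviour.
public class OrderSummary
{
    public Guid OrderId { get; set; }
    public string CustomerName { get; set; }
}

public class OrderSummaryProjection
{
    private readonly IDictionary<Guid, OrderSummary> _readModel;

    public OrderSummaryProjection(IDictionary<Guid, OrderSummary> readModel) =>
        _readModel = readModel;

    // Runs after the domain event is published; queries never touch the aggregates.
    public void Handle(OrderPlaced e) =>
        _readModel[e.OrderId] = new OrderSummary { OrderId = e.OrderId, CustomerName = e.CustomerName };
}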
Doing DDD right is not straightforward, and it's very easy to fall into the trap of believing you're applying DDD when in fact you're just CRUD-ing away using DDD terminology. Also, IMO, CQRS is a must with DDD if you like an easy life.
Understand the domain without rushing it or being superficial, identify the bounded contexts, model the domain concepts and their use cases (very important!!!), define repository interfaces as you need them, and implement the repositories only when there's nothing else left to do. (The real repos, that is; in the meantime you can use fake ones, like in-memory repos. They're very fast to implement, and since your app is decoupled it shouldn't care how persistence is implemented, right?) I know it sounds weird, but this is how you get a maintainable DDD app.
The point of implementing the repositories last is to really decouple the app from the persistence details, and also to have defined the expectations (repository methods) the app has of persistence. Once those are defined, you can write tests :D and then implement the repositories. The bonus is that you get to focus on the repo implementation in isolation, and when all the tests pass, you know everything works as it should. An in-memory fake might look like the sketch below.
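A minimal C# sketch of such a fake, with hypothetical Order and IOrderRepository types:

using System;
using System.Collections.Generic;

public class Order
{
    public Guid Id { get; set; }
}

public interface IOrderRepository
{
    Order GetById(Guid id);
    void Save(Order order);
}

// Fast to write; lets the app and its tests run while the real
// (e.g. EF-backed) implementation is deferred until last.
public class InMemoryOrderRepository : IOrderRepository
{
    private readonly Dictionary<Guid, Order> _store = new Dictionary<Guid, Order>();

    public Order GetById(Guid id) => _store[id];

    public void Save(Order order) => _store[order.Id] = order;
}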
Why should you have two completely different objects? Why not simply expose your entities as domain objects through a domain interface?
In that case there's no issue with having your entities also act as domain objects, with their implementation details neatly hidden behind the interface.
Another point: a neat way to represent aggregate roots with EF is to make sure the foreign key column also makes up part of the primary key of the dependent entity. In your case that would mean AggregateOneID and AggregateTwoFK together form the composite primary key of AggregateOne. This ensures that EF doesn't need a repository for removing instances of AggregateOne: as long as an instance is removed from AggregateTwo's collection, it will be properly marked for deletion from the database. (If you don't have a key like this, you also need to remove it from the AggregateOne set, because EF would otherwise throw an exception, not understanding the developer's intent that AggregateOne should be deleted.)

Persistence ignorance and DDD reality

I'm trying to implement fully valid persistence ignorance with little effort. I have many questions though:
The simplest option
It's really straightforward: is it okay to have entities annotated with Spring Data annotations, just like in SOA (but having them really do the logic)? What are the consequences, other than having to use persistence annotations in the entities, which doesn't really follow the PI principle? I mean, is that really the case with Spring Data? It provides nice repositories which do what repositories in DDD should do. The problem is then with the entities themselves...
The harder option
In order to make an entity unaware of where the data it operates on came from, it is natural to inject that data as an interface through the constructor. Another advantage is that we can always perform lazy loading, which we have by default in the Neo4j graph database, for instance. The drawback is that aggregates (which are composed of entities) will be totally aware of all the data even if they don't use it; possibly this could lead to debugging difficulties, as the data is totally exposed (the DAOs would be hierarchical, just like the aggregates). This would also force us to use adapters for the repositories, as they don't store real entities anymore... and any translation is ugly... Another thing is that we cannot instantiate an entity without such a DAO, though there could be in-memory implementations in the domain... again, more layers. Some say that injecting DAOs breaks PI too.
The hardest option
The entity could be wrapped in a lazy-loader which decides where the data should come from. It could be both in-memory and in-database, and it could handle any operations that need transactions and so on. A complex layer, though it might be generic to some extent perhaps...? Have a read about it here.
Do you know any other solution? Or maybe I'm missing something in the mentioned ones. Please share your thoughts!
I achieve persistence ignorance (almost) for free, as a side effect of proper domain modeling.
In particular:
if you correctly define each context's boundary, you will obtain small entities without any need for lazy loading (which actually becomes an antipattern/code smell in a DDD project)
if you can't simply use SQL in your repository, map a set of DTOs to your db schema and use them in factories to initialize entity classes (see the sketch after this list)
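A C# sketch of the second point, with hypothetical OrderDto/Order/OrderFactory types: the DTO mirrors the db schema, and a factory turns it into a clean entity:

using System;

// DTO mapped to the db schema; carries data, no behaviour.
public class OrderDto
{
    public Guid Id { get; set; }
    public string Status { get; set; }
}

public enum OrderStatus { Pending, Shipped }

// Domain entity: expresses business invariants only, knows nothing of persistence.
public class Order
{
    public Guid Id { get; }
    public OrderStatus Status { get; private set; }

    internal Order(Guid id, OrderStatus status)
    {
        Id = id;
        Status = status;
    }
}

// The factory owns the translation, keeping the entity persistence-ignorant.
public static class OrderFactory
{
    public static Order FromDto(OrderDto dto) =>
        new Order(dto.Id, Enum.Parse<OrderStatus>(dto.Status));
}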
In DDD projects, persistence ignorance is relevant for the domain model itself, not for repositories, factories and other applicative code. Indeed, you are very unlikely to change the ORM and/or the DB in the future.
The only (but very strong) rationale behind persistence ignorance of the domain model is separation of concerns: in the domain model you should express business invariants only! Persistence is an infrastructural concern!
For example, without persistence ignorance (and with lazy loading), the domain model has to handle possible exceptions from the db; its complexity grows, and business rules get buried under technological details.
Personally I find it near impossible to achieve a clean domain model when trying to use the same entities as the ORM.
My solution is to model my domain entities as I see fit and to ensure that any ORM entities don't leak outside of the repositories. This means that my repositories accept and return domain entities (see the sketch below).
This means you lose "most of your ORM goodness" and end up "using your ORM for simple CRUD operations".
Both of these trade-offs are fine for me; I would rather have a clean domain model that I can use than one polluted with artefacts from my DB or ORM. It also cuts the amount of time I spend "wrestling with my ORM" down to zero.
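A rough C# sketch of that shape, with hypothetical types (CustomerRecord as the ORM entity, Customer as the domain entity, and delegates standing in for the ORM session):

using System;

// ORM entity: shaped by the database, never leaves the repository.
public class CustomerRecord
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

// Domain entity: modelled as the domain requires, free of ORM artefacts.
public class Customer
{
    public Guid Id { get; }
    public string Name { get; }

    public Customer(Guid id, string name)
    {
        Id = id;
        Name = name;
    }
}

public class CustomerRepository
{
    private readonly Func<Guid, CustomerRecord> _load;
    private readonly Action<CustomerRecord> _persist;

    public CustomerRepository(Func<Guid, CustomerRecord> load, Action<CustomerRecord> persist)
    {
        _load = load;
        _persist = persist;
    }

    // The translation happens here, so ORM entities never leak out.
    public Customer GetById(Guid id)
    {
        var record = _load(id);
        return new Customer(record.Id, record.Name);
    }

    public void Save(Customer customer) =>
        _persist(new CustomerRecord { Id = customer.Id, Name = customer.Name });
}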
As a side-note, I find document databases a much better fit for DDD.
Once you put persistence mapping into your domain model:
your code depends on the framework. If you decide to change that framework, you have to change both the persistence layer and the model layer source code: more work, more changes, more merging of code, etc.
your domain model jar file depends on the Spring/Hibernate jars, etc.
your classes grow larger and larger as business code and persistence-related code accumulate
I have to admit that I don't understand the harder and hardest options.
We used separate interfaces and implementations for domain entities, and provided separate Hibernate mapping files along with the repositories.
Entities are created using a factory (or a repository, later); the identifier is generated within the persistence layer, and the entity does not need it until it is persisted.
Lazy loading is provided by a special implementation of List once:
the mapping of an entity contains it, and
the entity/aggregate is fetched from the persistence layer.
The only issue is related to transactions: when you use a lazy-loaded collection outside the transaction scope, it fails.
I would follow the simplest option unless I ran into a stone wall. There are also pitfalls such as this when you adopt the PI principle.
Sometimes some compromises are acceptable:
public class Order {
    private String status; // my ORM does not support enums, so status is stored as a String

    public Status status() {
        return Status.of(this.status);
    }

    public boolean is(Status status) {
        // use status() instead of getStatus() in the domain model
        return status() == status;
    }
}

Business logic in Entity Framework POCOs using partial classes?

I have business logic that could either sit in a business logic/service layer or be added as new members of an extended domain class (an EF T4-generated POCO) that exploits the partial class feature.
So I could have:
a) bool OrderBusiness.OrderCanBeCancelledOnline(Order order) .. or (IOrder order)
or
b) bool order.CanBeCancelledOnline() .. i.e. the order itself knows whether or not it can be cancelled.
For me, option b) is more OO. However, option a) allows more complex logic to be applied, e.g. using other domain objects or services.
At the moment I have a mix of both and this doesn't seem elegant.
Any guidance on this would be much appreciated!
The key thing about OO for me is that you tell objects to do things for you. You don't pull attributes out and make the decisions yourself (in a helper class or elsewhere).
So I agree with your assertion about option b). Since you require additional logic, there's no harm in performing an operation on the object whilst passing references to additional helper objects so that they collaborate. Whether you do this at the time of the operation itself or pre-populate your order object with those collaborating entities very much depends on your current situation.
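For instance, a minimal C# sketch of the "pass collaborators at the time of the operation" flavour; the ICancellationPolicy type is hypothetical:

using System;

public interface ICancellationPolicy
{
    bool AllowsOnlineCancellation(DateTime orderedAt);
}

public class Order
{
    public DateTime OrderedAt { get; }

    public Order(DateTime orderedAt) => OrderedAt = orderedAt;

    // The order still makes the decision, but collaborates with a policy passed in.
    public bool CanBeCancelledOnline(ICancellationPolicy policy) =>
        policy.AllowsOnlineCancellation(OrderedAt);
}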
You can also use extension methods on the POCOs to wrap your BLL methods,
so you can keep using your current BLLs.
In C#, something like:
// Everything must be static: both the class and the method.
public static class OrderBusiness
{
    // Note the 'this' modifier: it makes this an extension method on Order.
    public static bool CanBeCancelledOnline(this Order order)
    {
        // ... the existing BLL logic goes here ...
        return true; // placeholder
    }
}
And now you can do order.CanBeCancelledOnline().
This is likely to depend on the complexity of your application and does require some judgement that comes with experience. The short answer is that if your project is anything more than a pretty simple one then you are best off putting your logic in the domain classes.
The longer answer:
If you place your logic within a service layer you are effectively following the transaction script pattern and ending up with an anaemic domain model. This can be a valid route, but it generally works best with simple and small projects. The problem is that the transaction script layer (your service layer) becomes more complicated to maintain as it grows.
So the alternative is to create a rich domain model that contains the logic within it. Keeping logic together with the class it applies to is a key part of good OO design, and in a complex project pretty essential. It usually requires a bit more thought and effort initially, which is why for very simple projects people sometimes use the transaction script pattern.
If you are unsure which to go with, it is not normally too difficult a job to refactor your logic from your service layer into the domain, but you need to make the call early enough that the job is not too large.
Contrary to one of the answers, using POCO classes does not mean you can't have business logic in your domain classes. POCO is about not applying framework specific structures to your domain classes, such as methods and interfaces specific to a particular ORM. A class with some functions to apply business logic is clearly still a Plain-Old-CLR-Object.
A common question, and one that is partially subjective.
IMO, you should go with Option A.
POCO's should be exactly that, "plain-old-CLR" objects. If you start applying business logic to them, they cease to be POCO's. :)
You can certainly put your business logic in the same assembly as your POCO's, just don't add methods directly to them, create helper classes to facilitate business rules. The only thing your POCO's should have is properties mapping to your domain model.
It really depends on how complex your business rules are. In our application, the business rules are very straightforward, so we use Option A.
But if your business rules start to get messy, consider using the Specification pattern.
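A minimal C# sketch of that pattern, with hypothetical types; each rule becomes a small, named, independently testable class:

public enum OrderStatus { Pending, Shipped }

public class Order
{
    public OrderStatus Status { get; set; }
}

public interface ISpecification<T>
{
    bool IsSatisfiedBy(T candidate);
}

// One business rule per specification; messy rules can be composed with And/Or wrappers.
public class CanBeCancelledOnlineSpec : ISpecification<Order>
{
    public bool IsSatisfiedBy(Order order) => order.Status == OrderStatus.Pending;
}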

DDD: What are good reasons for you to loosely-couple Entities?

Back in December, there was a post that was answered with "it is OK to use concrete types [for simple objects]".
But I keep seeing more and more simple entities with interfaces in sample projects, and even in the very large enterprise application I just took over (89 interfaces and counting).
Is it that people are not picking the best approach and are just shotgunning the project with the "my project is loosely coupled!" approach?
Or am I missing something? I can unit test with concrete types for the IService, IFactory and IRepository implementations I have (and it works quite well). I am also building my first "anticorruption layer" for abstracting a lot of these third-party tools away from the main domain. This anticorruption layer has a number of facades, translators, and adapters, all of which are loosely coupled (or planned to be).
So, is there something I'm missing about entities having interfaces?
public interface IContent
{
    Int32 ContentID { get; set; }
}

IList<IContent> list = new List<IContent>();
Edit: I should also mention that the enterprise app I have, with all of these interfaces, has zero unit tests. lol
It is more important for entities that have responsibilities to conform to an interface than it is for simple data objects. If you can define the entity in terms of its methods then, yes, you'll benefit from an interface. I can't see that an object which will simply be used as a DTO within the application gains any great advantage from having an interface.
That said, there is certainly benefit in abstracting away "entities" created by a third-party tool, or a framework like L2S, in my opinion.