DDD, CQRS, onion architecture, EF Core for an enterprise-level app

Recently I have found that the following approach works great for many projects that I have worked on.
The issue, however, is that I have read that the EF Core DbContext is a UoW by itself, and that I should NOT create my own UoW and repositories. But in that case, I am unable to abstract my persistence layer from my application logic layer.
The TL;DR question is:
Is it possible NOT to have my own repositories or my own UoW and still follow the architecture described below, with DbContext as the UoW?
My architecture is as follows:
Layer 1 (innermost):
Aggregates, Entities, POCO domain classes, Value Objects
Layer 2:
Domain services
Layer 3:
Application services (CQRS commands, queries, handlers) and Repository Interfaces
Layer 4A (persistence layer):
Repositories implementation (DbContext injected here)
EF Core mappings (ORM mappings)
Layer 4B:
ASP.NET MVC API (DI registered here)
API controllers just issue commands and queries (via MediatR).
The advantage of the above approach is that the app core (layers 1, 2 and 3) is completely abstracted from persistence.
The disadvantage is that you really have to write your own repositories.
Is this the correct approach? Or am I missing something?

Why is a DbContext a unit of work?
The DbContext captures all changes that you are making within one single transaction via one single commit (SaveChanges).
Why shouldn't you create your own?
Ideally, you should only be committing to one single data store via one single transaction. If you are either saving to multiple data stores in multiple transactions or saving to the same data store in several transactions, then you face the likely possibility of data corruption. If you are using a distributed transaction across multiple data stores, well then God help you.
SaveChanges should therefore be sufficient, so why create your own?
But what about abstraction?
If SaveChanges is sufficient, then how do we abstract out our dependency on EF? You can introduce an IUnitOfWork interface with a single method, Commit, which you can implement by calling DbContext.SaveChanges.
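For illustration, a minimal sketch of that interface and an EF Core implementation (my own example; AppDbContext is a placeholder context, not something from the question):
using Microsoft.EntityFrameworkCore;

public interface IUnitOfWork
{
    void Commit();
}

// Placeholder EF Core context; the real one would expose the aggregate DbSets
public class AppDbContext : DbContext { }

// Lives in the persistence layer (Layer 4A in the question) and is registered in DI
public sealed class EfUnitOfWork : IUnitOfWork
{
    private readonly AppDbContext _dbContext;

    public EfUnitOfWork(AppDbContext dbContext) => _dbContext = dbContext;

    public void Commit() => _dbContext.SaveChanges();
}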
And repositories?
I am not sure I understand not creating Repositories as a hard rule. As part of abstracting out your persistence layer, it is helpful to have a layer such as IRepository to provide that separation. That said, you should not be creating a repository per table. A repository per Aggregate is more appropriate. Each repository will load the entire Aggregate to ensure consistency within the boundary of the Aggregate.
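As an illustration of a per-Aggregate repository (my own sketch; Order and OrderId are placeholder domain types):
using System;

public readonly record struct OrderId(Guid Value);   // placeholder identifier type
public sealed class Order { /* aggregate root: order header plus its order lines */ }

// One repository per Aggregate root, not per table
public interface IOrderRepository
{
    // Loads the whole aggregate so invariants can be enforced within its boundary
    Order? GetById(OrderId id);
    void Add(Order order);
    void Remove(Order order);
}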
...
In general, I would caution against following advice that speaks in absolutes if you don't understand the reasoning behind that advice. You should be able to formulate the same conclusion given the same starting information for yourself. Otherwise, you are just applying rote memorization to a pattern that does not always benefit from that approach.

Related

Using an abstraction layer over DbContext

DbContext in EF Code First implements the Unit of Work and Repository patterns, as the MSDN site says:
A DbContext instance represents a combination of the Unit Of Work and Repository patterns such that it can be used to query from a database and group together changes that will then be written back to the store as a unit. DbContext is conceptually similar to ObjectContext.
Does it mean that using additional UoW and Repository abstractions (such as IRepository and IUnitOfWork) over DbContext is wrong?
In other words, does using another abstraction layer over DbContext add any additional value to our code?
Value such as a technology-independent DAL (our domain would depend on IRepository and IUnitOfWork instead of DbContext)?
Consider this - you currently have two strong ORMs, each having its pros and cons over the other:
Entity Framework
NHibernate
Additionally there are several more micro ORMs, such as:
Dapper
Massive
PetaPoco
...
And to make things even more complicated, there are clients / drivers for non-SQL databases such as:
C# driver for MongoDb
StackExchange Driver for Redis
...
And of course, one more thing that always has to be taken into consideration is whether there will be testing that involves mocking the data access layer.
The decision whether to use the UoW/Repository pattern should come from your project itself.
If your project is short-term, with a limited budget, and you are not likely to be using anything other than Entity Framework and SQL, then introducing a UoW/Repository layer of abstraction will just cost you additional, pointless development time which you could have spent on something else, or used to finish the project earlier and earn some extra cash.
However, if the project is long-running and involves a more complex development lifecycle that includes continuous testing, then the UoW/Repository pattern is a must. With the number of databases now in use and the NoSQL movement arriving heavily in the .NET ecosystem, nailing down your choice of ORM and database might cause severe refactoring once you decide to scale out (e.g. scaling out with MongoDb is much cheaper than with SQL, so your client might suddenly ask you to move everything to MongoDb). As sides are shifting constantly right now and new ideas are being implemented (such as combined graph+document databases), no one can make a good statement about which database will be the best choice for your project a year from now.
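To illustrate that point with my own sketch (not from the original answer; Customer, AppDbContext and the repository names are all placeholders, written in EF Core style for concreteness), the abstraction could look like this, implemented with EF today and re-implementable with the MongoDb C# driver later:
using System;
using Microsoft.EntityFrameworkCore;

public class Customer
{
    public Guid Id { get; set; }
    public string Name { get; set; } = "";
}

public class AppDbContext : DbContext
{
    public DbSet<Customer> Customers => Set<Customer>();
}

// The layers above depend only on this interface, not on any ORM
public interface ICustomerRepository
{
    Customer? FindById(Guid id);
    void Add(Customer customer);
}

// Entity Framework implementation used today
public sealed class EfCustomerRepository : ICustomerRepository
{
    private readonly AppDbContext _db;
    public EfCustomerRepository(AppDbContext db) => _db = db;
    public Customer? FindById(Guid id) => _db.Customers.Find(id);
    public void Add(Customer customer) => _db.Customers.Add(customer);
}

// Later, a MongoCustomerRepository built on the MongoDb C# driver could implement
// the same interface without changing any of the calling code.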
There is no bool answer to this question.
This is just my point of view, and it comes from a developer who works on both short-term and long-running projects.

Persistence ignorance and DDD reality

I'm trying to implement fully valid persistence ignorance with little effort. I have many questions though:
The simplest option
It's really straightforward - is it okay to have Entities annotated with Spring Data annotations just like in SOA (but make them really do the logic)? What are the consequences, other than having to use persistence annotations in the Entities, which doesn't really follow the PI principle? I mean, is it really the case with Spring Data that it provides nice repositories which do what repositories in DDD should do? The problem is with the Entities themselves then...
The harder option
In order to make an Entity unaware of where the data it operates on came from, it is natural to inject that data as an interface through the constructor. Another advantage is that we can always perform lazy loading - which we have by default in the Neo4j graph database, for instance. The drawback is that Aggregates (which are composed of Entities) will be fully aware of all the data even if they don't use it - possibly this could lead to debugging difficulties, as the data is totally exposed (DAOs would be hierarchical just like Aggregates). This would also force us to use adapters for the repositories, as they no longer store real Entities... and any translation is ugly... Another thing is that we cannot instantiate an Entity without such a DAO - though there could be in-memory implementations in the domain... again, more layers. Some say that injecting DAOs breaks PI too.
The hardest option
The Entity could be wrapped in a lazy-loader which decides where the data should come from. It could be both in-memory and in-database, and it could handle any operations which need transactions and so on. A complex layer, though, but it might be generic to some extent perhaps...? Have a read about it here
Do you know any other solution? Or maybe I'm missing something in mentioned ones. Please share your thoughts!
I achieve persistence ignorance (almost) for free, as a side effect of proper domain modeling.
In particular:
if you correctly define each context's boundary, you will obtain small entities without any need for lazy loading (which actually becomes an antipattern/code smell in a DDD project)
if you can't simply use SQL in your repository, map a set of DTOs to your db schema and use them in factories to initialize entity classes (see the sketch just below).
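A small sketch of that idea (my own illustration, written in C# for consistency with the rest of this page even though the question is about Spring; Order, OrderDto, OrderFactory and Status are placeholder names):
using System;

public enum Status { New, Paid, Shipped }

// Domain entity: no persistence concerns, invariants enforced in the constructor
public sealed class Order
{
    public Guid Id { get; }
    public Status Status { get; }
    public Order(Guid id, Status status) { Id = id; Status = status; }
}

// DTO shaped like the db schema; it lives in the persistence layer
public sealed class OrderDto
{
    public Guid Id { get; set; }
    public string Status { get; set; } = "";
}

// Factory that rebuilds the domain entity from the persistence DTO
public static class OrderFactory
{
    public static Order FromDto(OrderDto dto)
        => new Order(dto.Id, Enum.Parse<Status>(dto.Status));
}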
In DDD projects, persistence ignorance is relevant for the domain model itself, not for repositories, factories and other application code. Indeed, you are very unlikely to change the ORM and/or the DB in the future.
The only (but very strong) rationale behind persistence ignorance of the domain model is separation of concerns: in the domain model you should express business invariants only! Persistence is an infrastructural concern!
For example, without persistence ignorance (and with lazy loading), the domain model would have to handle possible exceptions from the db, its complexity grows and business rules get buried under technological details.
Personally I find it near impossible to achieve a clean domain model when trying to use the same entities as the ORM.
My solution is to model my domain entities as I see fit and ensure that any ORM entities don't leak outside of the repositories. This means that my repositories accept and return domain entities.
This means you lose "most of your ORM goodness" and end up "using your ORM for simple CRUD operations".
Both of these trade-offs are fine for me, I would rather have a clean domain model that I can use, rather than one polluted with artefacts from my DB or ORM. It also cuts down the amount of time I spend "wrestling with my ORM" to zero.
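A rough sketch of that boundary (my own illustration in C#/EF terms; Product, ProductRecord and the repository are all placeholder names):
using System;
using Microsoft.EntityFrameworkCore;

// Domain entity: behaviour and invariants live here, no ORM attributes
public sealed class Product
{
    public Guid Id { get; }
    public string Name { get; }
    public Product(Guid id, string name) { Id = id; Name = name; }
}

// ORM entity mapped to the table; it never leaves the repository
public class ProductRecord
{
    public Guid Id { get; set; }
    public string Name { get; set; } = "";
}

public sealed class ProductRepository
{
    private readonly DbContext _db;
    public ProductRepository(DbContext db) => _db = db;

    // Accepts and returns domain entities only
    public Product? FindById(Guid id)
    {
        var record = _db.Set<ProductRecord>().Find(id);
        return record is null ? null : new Product(record.Id, record.Name);
    }

    public void Add(Product product)
        => _db.Set<ProductRecord>().Add(new ProductRecord { Id = product.Id, Name = product.Name });
}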
As a side-note, I find document databases a much better fit for DDD.
Once you provide persistence mapping in your domain model:
your code depends on the framework. If you decide to change that framework, you have to change both the persistence layer and the domain model source code - more work, more changes, more merging of code, etc.
your domain model jar file depends on the spring/hibernate jars etc.
your classes become larger and larger as the business code and persistence-related code grow
I have to admit that I don't understand the harder and hardest options.
We used separate interfaces and implementations for domain entities, and provided separate mapping files using Hibernate, along with repositories.
Entities are created using a factory (or a repository, later); the identifier is generated within the persistence layer, and the entity does not need it until it is being persisted.
Lazy loading is provided by a special implementation of List once:
the mapping of an entity contains it
the entity/aggregate is fetched from the persistence layer
The only issue is related to transactions: when you use a lazy-loaded collection outside of the transaction scope, it fails.
I would follow the simplest option unless I ran into a stone wall. There are also pitfalls such as this when you adopt the PI principle.
Sometimes some compromises are acceptable.
public class Order {
    private String status; // my ORM does not support enum, so keep a String field

    public Status status() {
        return Status.of(this.status);
    }

    public boolean is(Status status) {
        return status() == status; // use status() instead of getStatus() in the domain model
    }
}

Entity Framework - No Repository abstraction

In my project, I need to use EF and abstract the queries from the presentation layer. Based on what I've been reading in questions and answers all over the net, EF is built with the Repository pattern on its DbSet and Unit of Work on DbContext.
The Repository pattern can easily meet the requirement, but I don't want to repeat that implementation, and I am now confused about where I should initialize or access the DbContext. Should it be in the service layer?
MVC4 Web API will be used for this project
One way I have seen this done in the past is to essentially remove the DbContext's dependency on a physical database by creating an interface for your context, and then making your data access calls from your Services Layer (Business Logic Layer).
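A minimal sketch of that idea (my own example; the names IAppContext, AppContext and Customer are assumptions, and the EF6-style IDbSet is used here for illustration):
using System.Data.Entity;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The Services Layer depends on this interface instead of the concrete DbContext
public interface IAppContext
{
    IDbSet<Customer> Customers { get; }
    int SaveChanges();
}

// Concrete context bound to the physical database
public class AppContext : DbContext, IAppContext
{
    public IDbSet<Customer> Customers { get; set; }
}

// In unit tests, a fake IAppContext can back Customers with an in-memory
// collection, which is where the LINQ to Objects caveat below comes from.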
There is, however, a disadvantage to this approach: your unit tests (which will be using a fake implementation of your DbContext) will run your queries with LINQ to Objects, whereas your concrete implementation will use LINQ to Entities, which does not support all LINQ to Objects methods.
There's documentation on MSDN (http://msdn.microsoft.com/en-us/library/bb738550.aspx) which highlights these differences.
I also recommend reading this article (http://kearon.blogspot.com.au/2011/02/mocking-entity-framework-4-code-first.html) which demonstrates how to make DbContext unit testable by removing the underlying dependency on a physical database.
Hope this all helps!

Implementing Repository pattern and doing Tests

I have read almost all the articles about the Repository pattern and its different implementations. Many of them have been judged bad practice (e.g. using IQueryable<T> instead of IList<T>), etc. That's why I'm still stuck and haven't been able to settle on the right one.
So:
Do I need the Repository pattern to apply IoC in my MVVM applications?
If yes, what is an efficient IRepository implementation over EF entities that is good practice and more testable?
How can I test my Repositories and UnitOfWork behaviour? Unit tests against in-memory Repositories? Integration tests?
Edit: According to the answers, I added the first question.
Ayende Rahien has a lot of posts about the repository pattern (http://ayende.com/blog/search?q=repository) and why it is bad when you are using ORMs. I thought Repository was the way to go. Maybe it was, in 2004. Not now. ORMs already implement a repository: in the case of EF it is IDbSet, and DbContext is the UnitOfWork. Creating a repository to wrap EF or other ORMs adds a lot of unnecessary complexity.
Do integration testing of the code that will interact with the database.
The repository pattern adds an extra layer when you are using EF, so you should make sure that you actually need this extra level of indirection.
The original idea of the Repository pattern is to protect the layers above from the complexity of, and from knowing about, the structure of the database. In many ways EF's ORM model already protects the code from the physical implementation in the database, so the need for a repository is smaller.
There are 2 basic alternatives:
Let the business layer talk directly to the EF model
Build a data layer that implements the Repository pattern; this layer maps EF objects to POCOs
For tests:
When we use EF directly, we use a transaction scope with rollback so that the tests do not change the data (see the sketch after this list).
When we use the Repository pattern, we use Rhino Mocks to mock the repository.
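For illustration, a sketch of the transaction-scope-with-rollback approach (my own example; OrdersContext, Order and the MSTest attributes are assumptions, written in EF6 style):
using System.Data.Entity;
using System.Linq;
using System.Transactions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderPersistenceTests
{
    [TestMethod]
    public void Saving_an_order_is_rolled_back_after_the_test()
    {
        // Complete() is never called on the scope, so everything rolls back at the end
        using (var scope = new TransactionScope())
        using (var db = new OrdersContext())
        {
            db.Orders.Add(new Order { Number = "TEST-1" });
            db.SaveChanges();

            Assert.AreEqual(1, db.Orders.Count(o => o.Number == "TEST-1"));
        }
    }
}

// Assumed EF context and entity, used only for this illustration
public class OrdersContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public class Order
{
    public int Id { get; set; }
    public string Number { get; set; }
}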
We use both approaches; the Repository pattern gives a more clearly layered app and therefore perhaps more control, while using EF directly gives an app with less code that is therefore faster to build.

Entity Framework as Repository and UnitOfWork?

I'm starting a new project and have decided to try to incorporate DDD patterns and also to include LINQ to Entities. When I look at EF's ObjectContext, it seems to perform the functions of both the Repository and Unit of Work patterns:
Repository in the sense that the underlying data-level interface is abstracted from the entity representation, and I can request and save data through the ObjectContext.
Unit of Work in the sense that I can write all my inserts/updates to the ObjectContext and execute them all in one shot when I call SaveChanges().
It seems redundant to put another layer of these patterns on top of the EF ObjectContext. It also seems that the Model classes can be incorporated directly on top of the EF-generated entities using 'partial class'.
I'm new at DDD so please let me know if I'm missing something here.
I don't think that the Entity Framework is a good implementation of Repository, because:
The object context is insufficiently abstract to do good unit testing of things which reference it, since it is bound to the DB access. Having an IRepository reference instead works much better for creating unit tests.
When a client has access to the ObjectContext, the client can do pretty much anything it cares to. The only real control you have over this at all is to make certain types or properties private. It is hard to implement good data security this way.
On a non-trivial model, the ObjectContext is insufficiently abstract. You may, for example, have both tables and stored procedures mapped to the same entity type. You don't really want the client to have to distinguish between the two mappings.
On a related note, it is difficult to write comprehensive and well-enforced business rules and entity code. Indeed, whether or not this is even a good idea is debatable.
On the other hand, once you have an ObjectContext, implementing the Repository pattern is trivial. Indeed, for cases that are not particularly complex, the Repository is something of a wrapper around the ObjectContext and the Entity types.
I would say that you should look at the ObjectContext as your UnitOfWork, and not as a repository.
An ObjectContext cannot be a repository - imho - since it is 'too generic'.
You should create your own repositories, which have specialized methods (like GetCustomersWithGoldStatus, for instance) next to the regular CRUD methods.
So what I would do is create repositories (one for each aggregate root) and let those repositories use the ObjectContext.
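For example, a rough sketch of such a repository built on the ObjectContext (my own illustration; the Customer entity and the EF6 namespaces are assumptions):
using System.Collections.Generic;
using System.Data.Entity.Core.Objects;
using System.Linq;

// Assumed entity, mapped in the EF model
public class Customer
{
    public int Id { get; set; }
    public string Status { get; set; }
}

// One repository per aggregate root, sharing the ObjectContext (the unit of work)
public class CustomerRepository
{
    private readonly ObjectContext _context;
    public CustomerRepository(ObjectContext context) { _context = context; }

    private IQueryable<Customer> Customers => _context.CreateObjectSet<Customer>();

    // Regular CRUD-style method
    public Customer GetById(int id) => Customers.Single(c => c.Id == id);

    // Specialized, intention-revealing query method
    public IList<Customer> GetCustomersWithGoldStatus()
        => Customers.Where(c => c.Status == "Gold").ToList();

    public void Add(Customer customer) => _context.CreateObjectSet<Customer>().AddObject(customer);
}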
I like to have a repository layer for the following reasons:
EF gotchas
When you look at some of the current tutorials on EF (the Code First version), it is apparent that there are a number of gotchas to be handled, particularly around object graphs (entities containing entities) and disconnected scenarios. I think a repository layer is great for wrapping these up in one place.
A clear picture of data access mechanisms
A repository gives a specific picture of how the BL is accessing and updating the data store. It exposes methods that have a clear single purpose and that can be tested independently of the BL. A standard example from the textbooks: Find() to find a single entity. A more application-specific example: Clear() to clear down a db table.
A place for optimizations
Inevitably you come up against performance hits when using vanilla EF. I use the repository to hide the optimization mechanisms from the BL.
Examples (see the sketch after this list):
GetKeys() to project cached keys from the tables (for insert/update decisions). Reading the key only is faster and uses less memory than reading the full entity.
Bulk load via SqlBulkCopy. EF will insert by individual SQL statements. If you want a single statement to insert multiple rows, SqlBulkCopy is a good mechanism. The repository encapsulates this and provides the metadata for SqlBulkCopy. As well as the Insert method, you need StartBatch() and EndBatch() methods, which is also an argument for a UnitOfWork layer.
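A condensed sketch of both optimizations (my own illustration; Product, ProductsContext, the table name and the connection string handling are assumptions, written in EF6 style):
using System.Collections.Generic;
using System.Data;
using System.Data.Entity;
using System.Data.SqlClient;
using System.Linq;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ProductsContext : DbContext
{
    public DbSet<Product> Products { get; set; }
}

public class ProductRepository
{
    private readonly ProductsContext _db;
    private readonly string _connectionString;

    public ProductRepository(ProductsContext db, string connectionString)
    {
        _db = db;
        _connectionString = connectionString;
    }

    // Key-only projection: cheaper than materialising full entities
    public HashSet<int> GetKeys()
        => new HashSet<int>(_db.Products.Select(p => p.Id));

    // Bulk load: one SqlBulkCopy call instead of one INSERT statement per row
    public void BulkInsert(IEnumerable<Product> products)
    {
        var table = new DataTable();
        table.Columns.Add("Id", typeof(int));
        table.Columns.Add("Name", typeof(string));
        foreach (var p in products)
            table.Rows.Add(p.Id, p.Name);

        using (var bulk = new SqlBulkCopy(_connectionString) { DestinationTableName = "dbo.Products" })
        {
            bulk.WriteToServer(table);
        }
    }
}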