DbContext in EF Code First implements the Unit of Work and Repository patterns, as the MSDN documentation says:
A DbContext instance represents a combination of the Unit Of Work and Repository patterns such that it can be used to query from a database and group together changes that will then be written back to the store as a unit. DbContext is conceptually similar to ObjectContext.
Does it mean that using other UoW and Repository abstractions (such as IRepository and IUnitOfWork) over DbContext is wrong?
In other words, does using another abstraction layer over DbContext add any value to our code?
Value such as a technology-independent DAL (our domain would depend on IRepository and IUnitOfWork instead of DbContext).
Consider this - you currently have two strong ORMs, each having its pros and cons over the other:
Entity Framework
NHibernate
Additionally there are several more micro ORMs, such as:
Dapper
Massive
PetaPoco
...
And to make things even more complicated, there are clients / drivers for non-SQL databases such as:
C# driver for MongoDb
StackExchange Driver for Redis
...
And of course, one more thing that always has to be taken into consideration is whether there will be testing that involves mocking the data access layer.
The decision whether to use the UoW/Repository pattern should come from your project itself.
If your project is short-term, with a limited budget, and you are unlikely ever to use anything but Entity Framework and SQL, then introducing a UoW/Repository layer of abstraction will just cost you additional, pointless development time which you could have spent on something else, or used to complete the project earlier and earn some extra cash.
However, if the project is long-running and involves a more complex development lifecycle that includes continuous testing, then the UoW/Repository pattern is a must. With the number of databases now in use and the NoSQL movement coming heavily into the .NET ecosystem, nailing down your selection of ORM and database early might cause severe refactoring once you decide to scale out (e.g. scaling out with MongoDb is much cheaper than with SQL, so your client might suddenly ask you to move everything to MongoDb). With sides shifting constantly right now and new ideas being implemented (such as combined graph + document databases), no one can make a good prediction about which database will be the best choice for your project a year from now.
There is no boolean answer to this question.
This is just my point of view, and it comes from a developer who works on both short-term and long-running projects.
Recently I have found that the following approach works great for many projects that I have worked on.
The issue, however, is that I have read that the EF Core DbContext is a UoW by itself, and that I should NOT create my own UoW and repositories. But in that case, I am unable to abstract my persistence layer from my application logic layer.
TL;DR question is:
Is it possible NOT to have my own repositories or my own UoW and still follow the architecture described below, with DbContext as the UoW?
My architecture is as follows:
Layer 1 (most inner):
Aggregates, Entities, POCO domain classes, Value Objects
Layer 2:
Domain services
Layer 3:
Application services (CQRS commands, queries, handlers) and Repository Interfaces
Layer 4A (persistence layer):
Repositories implementation (DbContext injected here)
EF Core mappings (ORM mappings)
Layer 4B:
ASP.NET MVC API (DI registered here)
API controllers just issue commands and queries (via MediatR).
The advantage of the above approach is that the app core (layers 1, 2 and 3) is completely abstracted from persistence.
The disadvantage is that you really have to write your own repositories.
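For concreteness, here is a minimal sketch of the layer 3 / layer 4A split (all names here - IOrderRepository, Order, AppDbContext - are hypothetical; EF Core and the usual Microsoft.EntityFrameworkCore / System.Threading.Tasks usings are assumed):

// Layer 3 (application services): persistence-agnostic contract.
public interface IOrderRepository
{
    Task<Order> GetAsync(Guid id);
    void Add(Order order);
}

// Layer 4A (persistence): the only place the DbContext appears.
public class EfOrderRepository : IOrderRepository
{
    private readonly AppDbContext _db;

    public EfOrderRepository(AppDbContext db) => _db = db;

    public Task<Order> GetAsync(Guid id) =>
        _db.Orders.SingleOrDefaultAsync(o => o.Id == id);

    public void Add(Order order) => _db.Orders.Add(order);
}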
Is this the correct approach? Or am I missing something?
Why is a DbContext a unit of work?
The DbContext captures all changes that you make and writes them back within one single transaction via one single commit (SaveChanges).
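For example (ShopContext, Order and Customer are hypothetical), the insert and the update below are written back together in one commit:

using (var db = new ShopContext())
{
    db.Orders.Add(new Order { Id = 1 });    // tracked insert
    var customer = db.Customers.Find(42);
    customer.Status = "Gold";               // tracked modification
    db.SaveChanges();                       // one transaction for both changes
}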
Why shouldn't you create your own?
Ideally, you should only be committing to one single data store via one single transaction. If you are either saving to multiple data stores in multiple transactions or saving to the same data store in several transactions, then you face the likely possibility of data corruption. If you are using a distributed transaction across multiple data stores, well then God help you.
SaveChanges should therefore be sufficient, so why create your own?
But what about abstraction?
If SaveChanges is sufficient, then how do we abstract out our dependency on EF? You can introduce an IUnitOfWork interface with a single method, Commit, which you can implement by calling DbContext.SaveChanges.
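A minimal sketch, assuming an EF context named AppDbContext (the names are my own, not from any library):

public interface IUnitOfWork
{
    void Commit();
}

public class EfUnitOfWork : IUnitOfWork
{
    private readonly AppDbContext _db;

    public EfUnitOfWork(AppDbContext db) => _db = db;

    // The context already tracks every change; committing is the only
    // responsibility left worth abstracting.
    public void Commit() => _db.SaveChanges();
}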
And repositories?
I am not sure I understand "do not create Repositories" as a hard rule. As part of abstracting out your persistence layer, it is helpful to have a layer such as IRepository to provide that separation. That said, you should not be creating a repository per table; a repository per Aggregate is more appropriate. Each repository loads the entire Aggregate, to ensure consistency within the boundary of the Aggregate.
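As a sketch (a hypothetical Order aggregate with Lines and Payments):

public class OrderRepository : IOrderRepository
{
    private readonly AppDbContext _db;

    public OrderRepository(AppDbContext db) => _db = db;

    public Order Get(Guid id) =>
        _db.Orders
           .Include(o => o.Lines)       // load the entire aggregate...
           .Include(o => o.Payments)    // ...never a partial slice of it
           .Single(o => o.Id == id);
}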
...
In general, I would caution against following advice that speaks in absolutes if you don't understand the reasoning behind that advice. You should be able to formulate the same conclusion given the same starting information for yourself. Otherwise, you are just applying rote memorization to a pattern that does not always benefit from that approach.
Our company has been using Telerik Open Access for years. We have multiple projects using it, including some in development and some in production that need updating. Because Telerik no longer updates or supports Open Access, we are having a variety of problems. We've got users who have to go to another workstation because we can't get Open Access onto their computers, and we've got projects where we can't add or update tables because the visual designer doesn't work in modern Visual Studio versions. So my question is: how do we convert these, and what do we convert them to?
I've heard of Microsoft Entity Framework, and we used to just call stored procedures instead of having a separate data project. Obviously our clients aren't going to pay us for the hours to switch, so we need something that works quickly. How do we convert our existing Telerik Open Access project to Microsoft Entity Framework, straight SQL queries, or some other data layer option?
Here's an example of what we have currently.
A separate Visual Studio project that acts as our data layer, where all the code was created by Telerik Open Access's visual designer:
We then have a DataAccess.cs class in our main project that creates the instance of the data layer:
Then we call it using LINQ statements in the main project:
I don't know OA hands-on, so I can only put in my two cents.
I'm afraid this isn't going to be an easy transition. I've yet to see the first seamless transition from one data layer implementation to another (and I've seen a few). The main cause of this is that IQueryable is a leaky abstraction. That is, the data layer exposes IQueryables, but
it doesn't support all features of the interface, and
it adds its own features, and
it's got its own interpretation of how to implement the features that are supported.
This means that if you're going to port your data layer to EF, you may notice that some LINQ queries throw runtime errors because they contain unsupported .NET methods/properties (for instance DateTime.Date), or perform worse -- or better -- or return data in subtly different shapes or sort orders.
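To illustrate with DateTime.Date (DockReports as in the code further down; the CreatedAt column is hypothetical): the first query compiles against any IQueryable but throws NotSupportedException at runtime in EF6; the second uses EF6's canonical function instead.

// Throws NotSupportedException in EF6: DateTime.Date cannot be translated to SQL.
var today = dbContext.DockReports
    .Where(r => r.CreatedAt.Date == DateTime.Today)
    .ToList();

// EF6 workaround:
var todayFixed = dbContext.DockReports
    .Where(r => DbFunctions.TruncateTime(r.CreatedAt) == DateTime.Today)
    .ToList();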
Some important OA features that are missing in EF:
Runtime mappings (EF's mapping is static)
Bulk update/delete functions (with EF: only by using third-party libraries)
Second-level cache
Profiler and Tuning Advisor
Streaming of large objects
Mixing database-side and client-side evaluation of LINQ queries (EF6: only db-evaluation)
On the other hand, the basic architectures of OA and EF don't seem to be too different. They both:
support POCOs
work with simple navigation properties
have a fluent mapping API
support LINQ through IQueryable<T>, where T is an entity class.
most importantly: both revolve around the Unit of Work and Repository patterns (for EF: DbContext and DbSet, respectively).
All in all, it's going to be a delicate process of converting, trying, and testing. One good thing is that your current DAL is already abstracted to a certain extent. Another is that the syntax doesn't even look too different. Where you have ...
dbContext.Add(newDockReport);
dbContext.SaveChanges();
... using EF this would become ...
dbContext.DockReports.Add(newDockReport);
dbContext.SaveChanges();
With EF Core it wouldn't even have to change one bit.
But that's another important choice to make: EF6 or EF Core? EF6 is stable, mature, and feature-rich, but at the end of its life cycle (a phrase you've probably come to hate by now). EF Core, on the other hand, is the future, but it is presently struggling to get bug-free in its major functions and is not yet as feature-rich as EF6 (some features will never return, although other new features are great improvements over EF6). At the moment I'd be wary of using EF Core for enterprise applications, but I'm pretty sure that a year from now this will no longer be an issue.
Whichever way you go, I'd start the process by writing a large number of integration tests, if you haven't done so already. The advantage of integration tests is that you can avoid the hassle of mocking either framework first (which isn't trivial).
I have never heard of a tool that can do that. Expect it to take time.
You have to figure out how to do it yourself:
(for the safest way to migrate)
First, take a simple page that uses your data layer; it will be your test page. Pick one whose LINQ logic is easy to adapt.
Add a LINQ to SQL class: right-click > Add > LINQ to SQL Classes.
Drag in only the tables this page needs, and add the relationships if necessary.
In the test page, create a new data context from the LINQ to SQL classes.
Use it, fixing the types and renaming whatever has to be renamed.
Fix errors until it compiles.
Stock up on coffee and anything else that can boost your brain.
Add tables to your context to match the ones you had in Telerik Data Access.
Debug for days.
Come back with a new question on how to fix a new issue twice a day.
To help with the dbContext.Add() difference between the two frameworks, you could use this extension method in EF 6.x:
public static class DbContextExtensions
{
    // Adds the entity to its corresponding DbSet and saves immediately.
    public static void Add<T>(this DbContext db, T entityToCreate) where T : class
    {
        db.Set<T>().Add(entityToCreate);
        db.SaveChanges();
    }
}
Then you could do:
db.Add(obj);
See Gert Arnold's answer: In Entity Framework, how do I add a generic entity to its corresponding DbSet without a switch statement that enumerates all the possible DbSets?
I'm trying to implement fully valid persistence ignorance with little effort. I have many questions though:
The simplest option
It's really straightforward: is it okay to have Entities annotated with Spring Data annotations just like in SOA (but have them really do the logic)? What are the consequences, other than having to use persistence annotations in the Entities, which doesn't really follow the PI principle? I mean, is that really the case with Spring Data? It provides nice repositories which do what repositories in DDD should do. The problem is with the Entities themselves then...
The harder option
In order to make an Entity unaware of where the data it operates on came from, it is natural to inject that data as an interface through the constructor. Another advantage is that we could always perform lazy loading, which we have by default in the Neo4j graph database for instance. The drawback is that Aggregates (which are composed of Entities) will be totally aware of all the data even if they don't use it; possibly this could lead to debugging difficulties, as the data is totally exposed (the DAOs would be hierarchical, just like the Aggregates). This would also force us to use adapters for the repositories, as they don't store real Entities anymore... and any translation is ugly... Another thing is that we cannot instantiate an Entity without such a DAO, though there could be in-memory implementations in the domain... again, more layers. Some say that injecting DAOs breaks PI too.
The hardest option
The Entity could be wrapped in a lazy loader which decides where the data should come from. It could be both in-memory and in-database, and it could handle any operations which need transactions and so on. A complex layer, though it might be generic to some extent, perhaps...? Have a read about it here.
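Translated into a rough C# sketch (all names hypothetical): the Entity only sees a loader and never knows whether the data lives in memory or in a database.

public interface IOrderData
{
    string Status { get; }
}

public class Order
{
    private readonly Lazy<IOrderData> _data;

    // The loader can be an in-memory stub or a database-backed implementation.
    public Order(Func<IOrderData> loader) => _data = new Lazy<IOrderData>(loader);

    public string Status => _data.Value.Status;  // fetched on first access
}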
Do you know any other solution? Or maybe I'm missing something in mentioned ones. Please share your thoughts!
I achieve persistence ignorance (almost) for free, as a side effect of proper domain modeling.
In particular:
if you correctly define each context's boundary, you will obtain small entities without any need for lazy loading (which actually becomes an antipattern/code smell in a DDD project)
if you can't simply use SQL in your repository, map a set of DTOs to your db schema, and use them in factories to initialize entity classes.
In DDD projects, persistence ignorance is relevant to the domain model itself, not to repositories, factories and other applicative code. Indeed, you are very unlikely to change the ORM and/or the DB in the future.
The only (but very strong) rationale behind persistence ignorance of the domain model is separation of concerns: in the domain model you should express business invariants only! Persistence is an infrastructural concern!
For example, without persistence ignorance (and with lazy loading), the domain model has to handle possible exceptions from the db; its complexity grows, and business rules end up buried under technological details.
Personally I find it near impossible to achieve a clean domain model when trying to use the same entities as the ORM.
My solution is to model my domain entities as I see fit and ensure that any ORM entities don't leak outside of the repositories. This means that my repositories accept and return domain entities.
This means you lose "most of your ORM goodness" and end up "using your ORM for simple CRUD operations".
Both of these trade-offs are fine for me; I would rather have a clean domain model that I can use than one polluted with artefacts from my DB or ORM. It also cuts down the amount of time I spend "wrestling with my ORM" to zero.
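A minimal sketch of the approach (OrderRecord is the hypothetical EF entity, Order the domain entity; neither name comes from a real library):

public class OrderRepository : IOrderRepository
{
    private readonly ShopContext _db;

    public OrderRepository(ShopContext db) => _db = db;

    public Order Get(int id)
    {
        var record = _db.OrderRecords.Find(id);      // ORM entity stays inside
        return new Order(record.Id, record.Status);  // domain entity goes out
    }

    public void Save(Order order)
    {
        var record = _db.OrderRecords.Find(order.Id);
        record.Status = order.Status;                // map back onto the ORM entity
        _db.SaveChanges();
    }
}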
As a side-note, I find document databases a much better fit for DDD.
Once you put persistence mapping in your domain model:
your code depends on the framework. If you decide to change that framework, you have to change both the persistence layer and the model layer source code: more work, more changes, more merging of code, etc.
your domain model jar file depends on Spring/Hibernate jars, etc.
your classes grow larger and larger as business code and persistence-related code accumulate
I have to admit that I don't understand the harder and hardest options.
We used separate interfaces and implementations for domain entities, and provided separate Hibernate mapping files along with repositories.
Entities are created using a factory (or a repository, later); the identifier is generated within the persistence layer, and the entity does not need it until it is persisted.
Lazy loading is provided by a special implementation of List once:
the mapping of an entity contains it
the entity/aggregate is fetched from the persistence layer
The only issue is related to transactions: when you use a lazy-loaded collection outside of the transaction scope, it fails.
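For comparison, EF with lazy loading enabled has the same failure mode (hypothetical entities):

Order order;
using (var db = new ShopContext())
{
    order = db.Orders.Find(42);
}
// Throws ObjectDisposedException: the context that would have loaded the lazy
// collection is gone, just like a closed session/transaction.
var lineCount = order.Lines.Count;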
I would follow the simplest option unless I ran into a stone wall. There are also pitfalls such as this when you adopt the PI principle.
Sometimes compromises are acceptable.
public class Order {
    private String status; // my ORM does not support enums, so store it as a String

    public Status status() {
        return Status.of(this.status);
    }

    public boolean is(Status status) {
        // use status() instead of getStatus() in the domain model
        return status() == status;
    }
}
From reading various books and articles, I seem to find the Repository pattern suggested rather often. I get the point if you need to be able to swap your data layer from one technology to another, but my question is: if I know with 100% certainty that I will not use any other tech for data access, is there any reason to use said pattern?
The thing I doubt the most is that I don't really see what this extra layer of abstraction brings to the table in this scenario. In my experience, EF with its fluent LINQ-to-Entities functionality should be more than enough for pretty much all my needs.
The most common examples seem to start the repositories with methods such as FindAll, Find, Add and Delete, all of which are very easily accessible directly through EF (so there is no code duplication to speak of).
So am I just missing some big point, or is the repository more for when you need to support multiple different data access technologies?
There are many opinions on the issue, but after using repositories in two projects, I never tried the pattern again.
Too much pain, with hundreds of methods for all those cases and no clear benefits (I'm almost never going to swap out EF for another ORM).
The best advice would be to try it out so you can make an informed opinion on which route to take.
Some opinions against it here
I think you're on the right track. I asked myself the same question two years ago, after I had used the repository pattern in some projects. I came to the conclusion that hiding your ORM behind a repository implemented on top of that ORM gets you nothing but unnecessary work. In addition to implementing meaningless FindAll, Find, Add, ... methods, you lose some of the performance optimization possibilities that the ORM gives you, or at least it becomes quite hard to apply some of them.
So if you're not going to switch your ORM within the lifetime of your project, I don't see any benefit in applying the repository pattern.
So instead of preparing for a situation where one could easily switch the ORM in the future, I would suggest doing some more investigation upfront, choosing an ORM wisely, sticking with it, and staying away from the repository pattern.
What people don't realize is that EF is already a Repository and a Unit-of-Work.
The Repository has recently become an anti-pattern. Never use a design pattern because it's cool or trendy, or to build your resume; in fact, this should be a standard rule for all design patterns.
Only build a Repository and Unit of Work on top of EF if your application:
is very large (lasagna layers)
is long-lived (10 years or so)
has more than 5 developers working in parallel
has developers that are separated geographically
requires a lot of maintenance
spans multiple data access infrastructures
A good indication is when upgrading from EF5 to EF6 requires you to knock on everybody's door.
I'm not as hot on the repository pattern as I used to be, but I still find it can be useful in the following scenarios (assuming swapping the ORM isn't one of them):
Unit testing (assuming you'd rather mock or stub than use SQLite or hit a real db)
Being able to stub out data access during development via a repository that has an in-memory IEnumerable as its backing source, as sketched below.
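A sketch of such a stub (ICustomerRepository and Customer are hypothetical):

using System.Collections.Generic;
using System.Linq;

public class InMemoryCustomerRepository : ICustomerRepository
{
    private readonly List<Customer> _customers = new List<Customer>();

    public Customer Find(int id) => _customers.FirstOrDefault(c => c.Id == id);
    public IEnumerable<Customer> FindAll() => _customers;
    public void Add(Customer customer) => _customers.Add(customer);
    public void Delete(Customer customer) => _customers.Remove(customer);
}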
I actually disagree that EF is a correct example of the "Repository pattern". It is a typed data access layer with an exposed LINQ implementation.
Please note that if one fully endorses EF as "the business domain", then the above does not hold; however, I use EF - however poorly it fits the role - with Schema First, in which EF is not the strict business domain. The term "correct" is used to reinforce this viewpoint; adjust for your own perspective / design.
A correct Repository pattern, in my book, exposes aggregate roots for the relevant operations. That is, the implementation details (EF) are kept within the Repository as much as possible: the Repository takes care of mapping the relevant domain objects to the underlying model.
This is in agreement with how Microsoft defines the Repository pattern - note that the business entities are mapped to a data source. (And thus my fundamental disagreement with EF fulfilling this role: EF only has a chance of sanely mapping business entities when designed Code First.)
The best summation / article I have found is Repository pattern, done right, by Gauffin. While he approaches the Repository pattern from a more extreme view than I do, here are some key points as to why EF simply bleeds through as an Active Record / ORM pattern.
Here are some selected excerpts that highlight why I do not think that EF is a proper implementation of the Repository pattern.
The repository pattern is an abstraction. Its purpose is to reduce complexity and make the rest of the code persistence ignorant. As a bonus it allows you to write unit tests instead of integration tests. The problem is that many developers fail to understand the pattern's purpose - and create repositories [i.e. EF] which leak persistence-specific information up to the caller
..
Using repositories is not about being able to switch persistence technology (i.e. changing the database, or using a web service etc. instead) .. The Repository pattern does allow you to do that, but it's not the main purpose.
..
When people talk about the Repository pattern and unit tests, they are not saying that the pattern allows you to use unit tests for the data access layer .. If you use ORM/LINQ in your business logic, you can never be sure why the tests fail.
..
Do note that the repository pattern is only useful if you have POCOs which are mapped using Code First. Otherwise you'll just break the abstraction using the entities. [That is, only EF Code First can even attempt to meet this requirement.]
..
Building a correct repository implementation is very easy. In fact, you only have to follow a single rule: Do not add anything into the repository class until the very moment that you need it
And if you do use EF as a Repository - I believe it's still a typed DAL, however much it bleeds - please do not try to hide it in a "more generic" pattern; the Repository pattern is not about type unification, it is about exposing aggregate roots for a particular context and the operations on them.
See What specific issue does the repository pattern solve? as well.
While I speak against EF being a correct Repository pattern - it is the Active Record pattern - I am not a Repository purist. That is, there are cases when I will let EF (or L2S) entities bleed through; I accept this as technical debt. Just understand the cost of breaking such a Repository / Domain boundary.
I'm starting a new project and have decided to try to incorporate DDD patterns, and also to include LINQ to Entities. When I look at EF's ObjectContext, it seems to perform the functions of both the Repository and the Unit of Work patterns:
Repository, in the sense that the underlying data-level interface is abstracted from the entity representation, and I can request and save data through the ObjectContext.
Unit of Work, in the sense that I can write all my inserts/updates to the ObjectContext and execute them all in one shot when I call SaveChanges().
It seems redundant to put another layer of these patterns on top of the EF ObjectContext. It also seems that the Model classes can be incorporated directly on top of the EF-generated entities using 'partial class'.
I'm new at DDD so please let me know if I'm missing something here.
I don't think that the Entity Framework is a good implementation of Repository, because:
The object context is insufficiently abstract to do good unit testing of things which reference it, since it is bound to the DB access. Having an IRepository reference instead works much better for creating unit tests.
When a client has access to the ObjectContext, the client can do pretty much anything it cares to. The only real control you have over this at all is to make certain types or properties private. It is hard to implement good data security this way.
On a non-trivial model, the ObjectContext is insufficiently abstract. You may, for example, have both tables and stored procedures mapped to the same entity type. You don't really want the client to have to distinguish between the two mappings.
On a related note, it is difficult to write comprehensive and well-enforced business rules and entity code. Indeed, whether this is even a good idea is debatable.
On the other hand, once you have an ObjectContext, implementing the Repository pattern is trivial. Indeed, for cases that are not particularly complex, the Repository is something of a wrapper around the ObjectContext and the Entity types.
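For illustration, a trivial wrapper using the EF4-era ObjectContext API (names hypothetical):

public class CustomerRepository : ICustomerRepository
{
    private readonly MyObjectContext _context;

    public CustomerRepository(MyObjectContext context) => _context = context;

    public Customer Find(int id) =>
        _context.Customers.SingleOrDefault(c => c.Id == id);

    public void Add(Customer customer) =>
        _context.Customers.AddObject(customer);  // ObjectSet<T>.AddObject
}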
I would say that you should look at the ObjectContext as your UnitOfWork, and not as a repository.
An ObjectContext cannot be a repository, imho, since it is "too generic".
You should create your own Repositories, which have specialized methods (like GetCustomersWithGoldStatus for instance) next to the regular CRUD methods.
So what I would do is create repositories (one for each aggregate root) and let those repositories use the ObjectContext.
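A sketch of what such a repository's contract could look like (only GetCustomersWithGoldStatus comes from above; the rest is illustrative):

public interface ICustomerRepository
{
    // regular CRUD
    Customer Get(int id);
    void Add(Customer customer);
    void Remove(Customer customer);

    // specialized, intention-revealing queries
    IEnumerable<Customer> GetCustomersWithGoldStatus();
}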
I like to have a repository layer for the following reasons:
EF gotchas
When you look at some of the current tutorials on EF (the Code First version), it is apparent that there are a number of gotchas to be handled, particularly around object graphs (entities containing entities) and disconnected scenarios. I think a repository layer is great for wrapping these up in one place.
A clear picture of data access mechanisms
A repository gives a specific picture of how the BL is accessing and updating the data store. It exposes methods that have a clear, single purpose and can be tested independently of the BL. A standard example from the textbooks: Find() to find a single entity. A more application-specific example: Clear() to clear down a db table.
A place for optimizations
Inevitably you come up against performance hits when using vanilla EF. I use the repository to hide the optimization mechanisms from the BL.
Examples (both sketched below):
GetKeys() to project cached keys from the tables (for insert/update decisions). Reading only the key is faster and uses less memory than reading the full entity.
Bulk load via SqlBulkCopy. EF inserts via individual SQL statements; if you want a single operation to insert multiple rows, SqlBulkCopy is a good mechanism. The repository encapsulates this and provides the metadata for SqlBulkCopy. As well as the Insert method, you need StartBatch() and EndBatch() methods, which is also an argument for a UnitOfWork layer.
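Hedged sketches of both optimizations, assuming a hypothetical DockReports table and repository fields _db and _connectionString:

// 1) Key-only projection: faster and lighter than materializing full entities.
public IReadOnlyCollection<int> GetKeys()
{
    return _db.DockReports.Select(r => r.Id).ToList();
}

// 2) Bulk load via SqlBulkCopy: one bulk operation instead of per-row inserts.
public void BulkInsert(DataTable rows)
{
    using (var bulk = new SqlBulkCopy(_connectionString))
    {
        bulk.DestinationTableName = "DockReports";
        bulk.WriteToServer(rows);
    }
}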