I have watched Julie Lerman's videos about using EF in an enterprise application. Now I am developing a website using "Bounded Contexts" and other stuff she has taught in that series.
The problem is that I do not know how to use bounded contexts (BCs) from within my "business layer". To make it clearer: how should the BL know which specific BC it should use?
Suppose the UI requests a list of products from the business layer. In BL I have a method that returns a list of products: GetAll(). This method does not know which part of the UI (site admin, moderator or public user) has requested the list of products. Since each user/scenario has its own bounded context, the list needs to be pulled using that related context. How should the BL choose the appropriate BC?
Moreover, I do not want the UI layer to interact with the data layer.
How can this be done?
If by business layer you mean a place where all your business rules are defined, then that is a bounded context.
A bounded context looks at your system from a certain angle so that business rules can be implemented in a compartmentalised fashion (with the goal that it is easier to handle the overall problem by splitting it into smaller chunks).
http://martinfowler.com/bliki/BoundedContext.html
Front-end
So assuming you have an ASP.NET MVC front end, the controllers are the things that call your use cases/user stories, which the domain presents through a standard, known interface.
public class UserController : Controller
{
    private readonly ICommandHandler<ChangeNameCommand> handler;

    public UserController(ICommandHandler<ChangeNameCommand> handler)
    {
        this.handler = handler;
    }

    public ActionResult ChangeUserName(string id, string name)
    {
        try
        {
            var command = new ChangeNameCommand(id, name);
            handler.Handle(command);
        }
        catch (Exception e)
        {
            // add error logging and display info
            ViewBag.Error = e.Message;
        }

        // everything went OK (or the error was captured above), let the user know
        return View("Index");
    }
}
Domain Application (Use Cases)
Next, you would have a domain application entry point that implements the use case (this would be a command or query handler).
You may call this directly and have the code run in-process with your front end application, or you may have a WebAPI or WCF service in front of it presenting the domain application services. It doesn't really matter; how the system is distributed depends on the system requirements (it is often simpler from an infrastructure perspective not to distribute if not needed).
The domain application layer then orchestrates the user story - it will new up repositories, fetch entities, perform an operation on them, and then write back to the repository. The code here should not be complex or contain domain logic.
public class NewUserHandler : ICommandHandler<ChangeNameCommand>
{
    private readonly IRepository repository;

    public NewUserHandler(IRepository repository)
    {
        this.repository = repository;
    }

    public void Handle(ChangeNameCommand command)
    {
        var userId = new UserId(command.UserId);
        var user = this.repository.GetById<User>(userId);
        user.ChangeName(command.NewName);
        this.repository.Save(user);
    }
}
Domain Model
The entities themselves implement their own business logic in the domain model. You may also have domain services for logic which doesn't naturally fit nicely inside an individual entity.
public class User
{
    protected string Name;
    protected DateTime NameLastChangedOn;

    public void ChangeName(string newName)
    {
        // not the best of business rules, just an example...
        if ((DateTime.UtcNow - NameLastChangedOn).Days < 30)
        {
            throw new DomainException("Cannot change name more than once every 30 days");
        }

        this.Name = newName;
        this.NameLastChangedOn = DateTime.UtcNow;
    }
}
Infrastructure
You would have infrastructure which implements the code to fetch and persist entities in your backing store. For you this is Entity Framework and the DbContext (my example code above is not using EF, but you can substitute it).
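Purely as an illustration (this is my sketch, not part of the original example), an EF-flavoured repository shaped to match how NewUserHandler uses its IRepository above might look roughly like this; the key handling is an assumption:

public class EfRepository // matches the shape implied by NewUserHandler's repository usage
{
    private readonly DbContext context;

    public EfRepository(DbContext context)
    {
        this.context = context;
    }

    public TEntity GetById<TEntity>(object id) where TEntity : class
    {
        // Find expects the mapped primary key value, so a UserId value object
        // would need to expose the underlying primitive it wraps.
        return context.Set<TEntity>().Find(id);
    }

    public void Save<TEntity>(TEntity entity) where TEntity : class
    {
        // Entities loaded from this context are change-tracked,
        // so SaveChanges persists the modifications made by the domain model.
        context.SaveChanges();
    }
}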
Answer to your question - Which bounded context should the front end application call?
Not to make the answer complex or long, but I included the above code to set the background and hopefully make things easier to understand, as I think the terms you are using are getting a little mixed up.
With the above code in place, as you implement more command and query handlers, which bounded context is called from your front end application depends on the specific user story the user wishes to perform.
User stories will generally be clustered across different bounded contexts, so you would just select the command or query for the bounded context that implements the required functionality - don't worry about making it anything more complicated than that.
Let the problem you are trying to solve dictate the mapping, and don't be afraid that this mapping may change as your insight into the problem improves.
Sidenote
As a side note, to mention things I found useful (I started my DDD journey with EF)... with Entity Framework there are ORM concepts that are often required, such as defining mapping relationships and navigation properties between entities, and what happens with cascade deletes and updates. For me, this started to influence how I designed my entities, rather than the problem dictating how the entities should be designed. You may find this interesting: http://mehdi.me/ambient-dbcontext-in-ef6/
You may also want to look at http://geteventstore.com and event sourcing, which takes away any headaches of ORM mapping (but comes with added complexity and workarounds needed to get acceptable performance). What is best to use depends on the situation, but it's always good to know all the options.
I also use SimpleInjector to wire up my classes and inject into the MVC controller (as a prebuilt command or query handler); more info here: https://cuttingedge.it/blogs/steven/pivot/entry.php?id=91.
Using an IoC container is a personal preference only and not set in stone.
This book is also awesome: https://vaughnvernon.co/?page_id=168
I mention the above as I started my DDD journey with EF and had the exact same question as you.
Related
As the title suggests, what is the best practice when designing service layers? I do understand the service layer should always return a DTO so that domain (entity) objects are preserved within the service layer. But what should the input to the service layer from the controllers be?
I put forward three of my own suggestions below:
Method 1:
In this method the domain object (Item) is preserved within the service layer.
class Controller
{
    @Autowired
    private ItemService service;

    public ItemDTO createItem(ItemDTO dto)
    {
        // service layer returns a DTO object and accepts a DTO object
        return service.createItem(dto);
    }
}
Method 2:
This is where the service layer receives a custom request object. I have seen this pattern used extensively in the AWS Java SDK and also in the Google Cloud Java API.
class Controller
{
    @Autowired
    private ItemService service;

    public ItemDTO createItem(CreateItemRequest request)
    {
        // service layer returns a DTO object and accepts a custom request object
        return service.createItem(request);
    }
}
Method 3:
The service layer accepts a DTO and returns a domain object. I am not a fan of this method, but it's been used extensively at my workplace.
class Controller
{
    @Autowired
    private ItemService service;

    public ItemDTO createItem(CreateItemRequest request)
    {
        // service layer accepts a request object and returns a domain object
        Item item = service.createItem(request);
        return ItemDTO.fromEntity(item);
    }
}
If all 3 of the above methods are incorrect or not the best way to do it, please advise me on the best practice.
Conceptually speaking, you want to be able to reuse the service/application layer across presentation layers and through different access ports (e.g. console app talking to your app through a web socket). Furthermore, you do not want every single domain change to bubble up into the layers above the application layer.
The controller conceptually belongs to the presentation layer. Therefore, you wouldn't want the application layer to be coupled upon a contract defined in the same conceptual layer the controller is defined in. You also wouldn't want the controller to depend upon the domain or it may have to change when the domain changes.
You want a solution where the application layer method contracts (parameters & return type) are expressed either in Java native types or in types defined within the service layer boundary.
If we take an IDDD sample from Vaughn Vernon, we can see that his application service method contracts are defined in Java native types. His application service command methods also do not yield any result given he used CQRS, but we can see query methods do return a DTO defined in the application/service layer package.
Of the 3 methods listed above, which ones are correct/wrong?
Both #1 and #2 are very similar and could be right from a dependency standpoint, as long as ItemDto and CreateItemRequest are defined in the application layer package, but I would favor #2 since the input data type is named after the use case rather than simply the kind of entity it deals with: an entity-naming focus fits better with CRUD, and because of that you might find it difficult to find good names for the input data types of other use case methods operating on the same kind of entity. #2 has also been popularized through CQRS (where commands are usually sent to a command bus), but it is not exclusive to CQRS. Vaughn Vernon also uses this approach in the IDDD samples. Please note that what you call a request is usually called a command.
However, #3 would not be ideal given it couples the controller (presentation layer) with the domain.
For example, some methods receive 4 or 5 args. According to Eric Evans in Clean Code, such methods must be avoided.
That's a good guideline to follow and I'm not saying the samples couldn't be improved, but keep in mind that in DDD, the focus is put on naming things according to the Ubiquitous Language (UL) and following it as closely as possible. Therefore, forcing new concepts into the design just for the sake of grouping arguments together could potentially be harmful. Ironically, the process of attempting to do so may still offer some good insights and allow you to discover overlooked & useful domain concepts that could enrich the UL.
PS: Robert C. Martin wrote Clean Code, not Eric Evans, who is famous for the blue book.
I'm from a C# background, but the concept remains the same here.
In a situation like this, where we have to pass parameters/state from the application layer to the service layer and then return a result from the service layer, I tend to follow separation of concerns. The service layer does not need to know about the request parameter of your application layer/controller. Similarly, what you return from the service layer should not be coupled to what you return from your controller. These are different layers with different requirements and separate concerns; we should avoid tight coupling.
For the above example, I would do something like this:
class Controller
{
    @Autowired
    private ItemService service;

    public ItemResponse createItem(CreateItemRequest request)
    {
        var createItemDto = GetDto(request);
        var itemDto = service.createItem(createItemDto);
        return GetItemResponse(itemDto);
    }
}
This may feel like more work, since you now need to write additional code to convert between the different objects. However, it gives you great flexibility and makes the code much easier to maintain. For example, CreateItemDto may have additional/computed fields compared to CreateItemRequest. In such cases, you do not need to expose those fields in your request object. You only expose your data contract to the client and nothing more. Similarly, you only return the relevant fields to the client, as opposed to whatever you return from the service layer.
If you want to avoid manual mapping between DTO and request objects, C# has libraries like AutoMapper. In the Java world, I'm sure there is an equivalent; maybe ModelMapper can help.
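For what it's worth, in C# (my background) the AutoMapper version of that mapping step could look roughly like this; this is only a sketch using the type names from the example above:

// configure once at startup
Mapper.CreateMap<CreateItemRequest, CreateItemDto>();
Mapper.CreateMap<ItemDto, ItemResponse>();

// inside the controller action
var createItemDto = Mapper.Map<CreateItemDto>(request);
var itemDto = service.createItem(createItemDto);
return Mapper.Map<ItemResponse>(itemDto);

On the Java side, if I remember its API correctly, ModelMapper's map(request, CreateItemDto.class) plays a similar role.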
Entity Framework Layer Guidance
I'm in the design stage of a WPF business application. The first stage of this application will be a WPF/Desktop application. Later iterations may include a browser based mini version.
I envision creating a DLL or two that contain the domain model & DbContext that all applications (desktop or browser) will use.
My intention is to ride or die with EF. I'm not worried about using DI/repository patterns etc. for flexibility; the benefits of using them don't outweigh the added complexity, in my opinion, for this project. My plan is to use a model and a derived DbContext.
Having said that, I'm looking for input on where to put certain types of method code.
An example will hopefully make my question more clear:
Let's say I have the following two entities..
Entity: Employee
Entity: PermissionToken
Inside of these two entities I have a ManyToMany relationship resulting in me creating another entity for the relationship:
EmployeesPermissionTokens
For clarity, the PermissionToken entity's primary key is an enum representing the permission.
In the application, let's say the current user is administering employees and wants to grant a permission to an employee.
In the app, I could certainly code this as:
var e = dbcontext.Employees.Find(1);
var pt = new PermissionToken
{
    PermissionID = PermissionTypeEnum.DELETEUSER,
    ...
};
e.PermissionTokens.Add(pt);
But it seems to me that it would be more convenient to wrap that code in a method so that one line of code could perform those actions from whatever application chooses to do so. Where would a method like that live in all of this?
I've thought about adding a static method to the EF Entity:
In the Employee class:
public static void GrantPermission(Employee e, PermissionToken token)
{
    e.PermissionTokens.Add(token);
}
Going further, what would be really convenient for the app would be the ability to write a line like this:
Permissions.GrantToEmployee(EmployeeID employeeId, PermissionTypeEnum permissionId);
Of course that means that the method would have to be able to access the DbContext to grab the Employee object and the PermissionToken object by ID to do its work. I really want to avoid my entities knowing about/calling the DbContext because I feel that, long term, the entities get stuffed full of DbContext code which, in my opinion, shouldn't even be in the model classes.
So Where would a method like this go?
My gut tells me to put this sort of code in my derived DbContext since, in order to do these sorts of things, the method is going to need access to a DbContext anyway.
Does this make sense, or am I missing something? I hate to write oodles of code and then figure out 3 months later that I went down the wrong road to start with. Where should these types of methods live? I know there is probably a purist answer to this, but I'm looking for a clean, real world solution.
First of all, you are making a good decision not to abstract EF behind a repository.
With the EF context you have a class supporting the Unit of Work pattern which handles your data access needs. There is no need to wrap it up in a repository.
However this does not mean you should call the Context directly from your controller or viewmodel.
You could indeed just extend the DbContext; however, I suggest using services to mediate between your controllers/view models and your DbContext.
If, for example, your controller is handling a user request (e.g. the user has clicked a button), then your controller should call a service to achieve whatever "use case" is behind that button.
In your case this could be a PermissionService; the PermissionService would be the home for all operations concerning permissions.
public class PermissionService
{
    private readonly DbContext context;

    public PermissionService(DbContext context)
    {
        this.context = context;
    }

    public bool AddPermission(Employee e, PermissionType type) { }
    public bool RemovePermission(Employee e, PermissionType type) { }
}
Your service of course needs access to the DbContext.
It makes sense to use DI here and register the DbContext with a DI Container.
Thus the context will be injected into all your services. This is pretty straight forward and I do not see any extra complexity here.
However, if you don't want to do this you can simply new up the DbContext inside your services. Of course this is harder / impossible to mock for testing.
I am about to implement an Entity Framework 6 design with a repository and unit of work.
There are so many articles around and I'm not sure what the best advice is. For example, I really like the pattern implemented here, for the reasons suggested in the article here.
However, Tom Dykstra (Senior Programming Writer on Microsoft's Web Platform & Tools Content Team) suggests it should be done differently in another article: here
I subscribe to Pluralsight, and it is implemented in a slightly different way pretty much every time it is used in a course so choosing a design is difficult.
Some people seem to suggest that unit of work is already implemented by DbContext as in this post, so we shouldn't need to implement it at all.
I realise that this type of question has been asked before and this may be subjective but my question is direct:
I like the approach in the first (Code Fizzle) article and wanted to know whether it is perhaps more maintainable, as easily testable as other approaches, and safe to go ahead with.
Any other views are more than welcome.
@Chris Hardie is correct, EF implements UoW out of the box. However, many people overlook the fact that EF also implements a generic repository pattern out of the box too:
var repos1 = _dbContext.Set<Widget1>();
var repos2 = _dbContext.Set<Widget2>();
var reposN = _dbContext.Set<WidgetN>();
...and this is a pretty good generic repository implementation that is built into the tool itself.
Why go through the trouble of creating a ton of other interfaces and properties, when DbContext gives you everything you need? If you want to abstract the DbContext behind application-level interfaces, and you want to apply command query segregation, you could do something as simple as this:
public interface IReadEntities
{
    IQueryable<TEntity> Query<TEntity>() where TEntity : class;
}

public interface IWriteEntities : IReadEntities, IUnitOfWork
{
    IQueryable<TEntity> Load<TEntity>() where TEntity : class;
    void Create<TEntity>(TEntity entity) where TEntity : class;
    void Update<TEntity>(TEntity entity) where TEntity : class;
    void Delete<TEntity>(TEntity entity) where TEntity : class;
}

public interface IUnitOfWork
{
    int SaveChanges();
}
You could use these 3 interfaces for all of your entity access, and not have to worry about injecting 3 or more different repositories into business code that works with 3 or more entity sets. Of course you would still use IoC to ensure that there is only 1 DbContext instance per web request, but all 3 of your interfaces are implemented by the same class, which makes it easier.
public class MyDbContext : DbContext, IWriteEntities
{
    public IQueryable<TEntity> Query<TEntity>() where TEntity : class
    {
        return Set<TEntity>().AsNoTracking(); // detach results from context
    }

    public IQueryable<TEntity> Load<TEntity>() where TEntity : class
    {
        return Set<TEntity>();
    }

    public void Create<TEntity>(TEntity entity) where TEntity : class
    {
        if (Entry(entity).State == EntityState.Detached)
            Set<TEntity>().Add(entity);
    }

    ...etc
}
You now only need to inject a single interface into your dependency, regardless of how many different entities it needs to work with:
// NOTE: In reality I would never inject IWriteEntities into an MVC Controller.
// Instead I would inject my CQRS business layer, which consumes IWriteEntities.
// See @MikeSW's answer for more info as to why you shouldn't consume a
// generic repository like this directly by your web application layer.
// See http://www.cuttingedge.it/blogs/steven/pivot/entry.php?id=91 and
// http://www.cuttingedge.it/blogs/steven/pivot/entry.php?id=92 for more info
// on what a CQRS business layer that consumes IWriteEntities / IReadEntities
// (and is consumed by an MVC Controller) might look like.
public class RecipeController : Controller
{
    private readonly IWriteEntities _entities;

    // Using Dependency Injection
    public RecipeController(IWriteEntities entities)
    {
        _entities = entities;
    }

    [HttpPost]
    public ActionResult Create(CreateEditRecipeViewModel model)
    {
        Mapper.CreateMap<CreateEditRecipeViewModel, Recipe>()
            .ForMember(r => r.IngredientAmounts, opt => opt.Ignore());
        Recipe recipe = Mapper.Map<CreateEditRecipeViewModel, Recipe>(model);
        _entities.Create(recipe);
        foreach (Tag t in model.Tags)
        {
            _entities.Create(t);
        }
        _entities.SaveChanges();
        return RedirectToAction("CreateRecipeSuccess");
    }
}
One of my favorite things about this design is that it minimizes the entity storage dependencies on the consumer. In this example the RecipeController is the consumer, but in a real application the consumer would be a command handler. (For a query handler, you would typically consume IReadEntities only because you just want to return data, not mutate any state.) But for this example, let's just use RecipeController as the consumer to examine the dependency implications:
Say you have a set of unit tests written for the above action. In each of these unit tests, you new up the Controller, passing a mock into the constructor. Then, say your customer decides they want to allow people to create a new Cookbook or add to an existing one when creating a new recipe.
With a repository-per-entity or repository-per-aggregate interface pattern, you would have to inject a new repository instance IRepository<Cookbook> into your controller constructor (or, using @Chris Hardie's answer, write code to attach yet another repository to the UoW instance). This would immediately make all of your other unit tests break, and you would have to go back and modify the construction code in all of them, passing yet another mock instance and widening your dependency array. However, with the above, all of your other unit tests will still at least compile. All you have to do is write additional test(s) to cover the new cookbook functionality.
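For example, a unit test against the action above only ever needs one mock; something along these lines (an illustrative Moq/NUnit sketch of my own, and the view model's Tags property shape is assumed):

[Test]
public void Create_SavesRecipeAndRedirects()
{
    var entities = new Mock<IWriteEntities>();
    var controller = new RecipeController(entities.Object);
    var model = new CreateEditRecipeViewModel { Tags = new List<Tag>() };

    var result = controller.Create(model) as RedirectToRouteResult;

    Assert.That(result, Is.Not.Null);
    entities.Verify(e => e.Create(It.IsAny<Recipe>()), Times.Once());
    entities.Verify(e => e.SaveChanges(), Times.Once());
}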
I'm (not) sorry to say that the Code Fizzle and Dykstra articles and the previous answers are wrong, for the simple fact that they use the EF entities as domain (business) objects, which is a big WTF.
Update: For a less technical explanation (in plain words) read Repository Pattern for Dummies
In a nutshell, ANY repository interface should not be coupled to ANY persistence (ORM) detail. The repo interface deals ONLY with objects that make sense for the rest of the app (domain, maybe UI as in presentation). A LOT of people (with MS leading the pack, with intent I suspect) make the mistake of believing that they can reuse their EF entities, or that business objects can be built on top of them.
While it can happen, it's quite rare. In practice, you'll have a lot of domain objects 'designed' after database rules, i.e. bad modelling. The repository's purpose is to decouple the rest of the app (mainly the business layer) from its persistence form.
How do you decouple it when your repo deals with EF entities (a persistence detail) or its methods return IQueryable, a leaky abstraction with the wrong semantics for this purpose (IQueryable lets you build a query, implying that you need to know persistence details, which negates the repository's purpose and functionality)?
A domain object should never know about persistence, EF, joins etc. It shouldn't know what db engine you're using or whether you're using one. The same goes for the rest of the app, if you want it to be decoupled from the persistence details.
The repository interface knows only about what the higher layers know. This means that a generic domain repository interface looks like this:
public interface IStore<TDomainObject> // where TDomainObject != EF (ORM) entity
{
    void Save(TDomainObject entity);
    TDomainObject Get(Guid id);
    void Delete(Guid id);
}
The implementation will reside in the DAL and will use EF to work with the db. However the implementation looks like this
public class UsersRepository : IStore<User>
{
    private readonly DbContext db;

    public UsersRepository(DbContext db)
    {
        this.db = db;
    }

    public void Save(User entity)
    {
        // map the domain entity to one or more ORM entities
        // use EF to save them
    }

    // .. other methods implementation ...
}
You don't really have a concrete generic repository. The only usage of a concrete generic repository is when ANY domain object is stored in serialized form in a key-value like table. It isn't the case with an ORM.
What about querying?
public interface IQueryUsers
{
    PagedResult<UserData> GetAll(int skip, int take);
    // or
    PagedResult<UserData> Get(CriteriaObject criteria, int skip, int take);
}
The UserData is the read/view model fit for the query context usage.
You can use EF directly for querying in a query handler, if you don't mind your DAL knowing about view models; in that case you won't need any query repo.
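For example (my own sketch, not from this answer), a query handler that skips the repository and projects straight onto the read model could look something like this; the DbContext, property names and PagedResult shape are assumptions:

public class UsersQueryHandler
{
    private readonly MyDbContext db;

    public UsersQueryHandler(MyDbContext db)
    {
        this.db = db;
    }

    public PagedResult<UserData> GetAll(int skip, int take)
    {
        var total = db.Users.Count();

        var items = db.Users
            .OrderBy(u => u.Name)      // paging needs a stable order
            .Skip(skip)
            .Take(take)
            .Select(u => new UserData  // projection: the full entity is never exposed
            {
                Id = u.Id,
                Name = u.Name
            })
            .ToList();

        return new PagedResult<UserData>(items, total);
    }
}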
Conclusion
Your business object shouldn't know about EF entities.
The repository will use an ORM, but it never exposes the ORM to the rest of the app, so the repo interface will use only domain objects or view models (or any other app context object that isn't a persistence detail)
You do not tell the repo how to do its work, i.e. NEVER use IQueryable in a repo interface.
If you just want to use the db in an easier/cooler way and you're dealing with a simple CRUD app where you don't need (be sure about it) to maintain separation of concerns, then skip the repository altogether and use EF directly for all data access. The app will be tightly coupled to EF, but at least you'll cut out the middle man and it will be on purpose, not by mistake.
Note that using the repository in the wrong way will invalidate its use, and your app will still be tightly coupled to the persistence (ORM).
In case you believe the ORM is there to magically store your domain objects, it's not. The ORM's purpose is to simulate OOP storage on top of relational tables. It has everything to do with persistence and nothing to do with the domain, so don't use the ORM outside persistence.
DbContext is indeed built with the Unit of Work pattern. It allows all of its entities to share the same context as we work with them. This implementation is internal to the DbContext.
However, it should be noted that if you instantiate two DbContext objects, neither of them will see the other's entities that they are each tracking. They are insulated from one another, which can be problematic.
When I build an MVC application, I want to ensure that during the course of the request, all my data access code works off of a single DbContext. To achieve that, I apply the Unit of Work as a pattern external to DbContext.
Here is my Unit of Work object from a barbecue recipe app I'm building:
public class UnitOfWork : IUnitOfWork
{
    private BarbecurianContext _context = new BarbecurianContext();
    private IRepository<Recipe> _recipeRepository;
    private IRepository<Category> _categoryRepository;
    private IRepository<Tag> _tagRepository;

    public IRepository<Recipe> RecipeRepository
    {
        get
        {
            if (_recipeRepository == null)
            {
                _recipeRepository = new RecipeRepository(_context);
            }
            return _recipeRepository;
        }
    }

    public void Save()
    {
        _context.SaveChanges();
    }
    **SNIP**
I attach all my repositories, which are all injected with the same DbContext, to my Unit of Work object. So long as any repositories are requested from the Unit of Work object, we can be assured that all our data access code will be managed with the same DbContext - awesome sauce!
If I were to use this in an MVC app, I would ensure the Unit of Work is used throughout the request by instantiating it in the controller, and using it throughout its actions:
public class RecipeController : Controller
{
    private IUnitOfWork _unitOfWork;
    private IRepository<Recipe> _recipeRepository;
    private IRepository<Category> _categoryRepository;
    private IRepository<Tag> _tagRepository;

    // Using Dependency Injection
    public RecipeController(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
        _categoryRepository = _unitOfWork.CategoryRepository;
        _recipeRepository = _unitOfWork.RecipeRepository;
        _tagRepository = _unitOfWork.TagRepository;
    }
Now in our action, we can be assured that all our data access code will use the same DbContext:
[HttpPost]
public ActionResult Create(CreateEditRecipeViewModel model)
{
    Mapper.CreateMap<CreateEditRecipeViewModel, Recipe>().ForMember(r => r.IngredientAmounts, opt => opt.Ignore());
    Recipe recipe = Mapper.Map<CreateEditRecipeViewModel, Recipe>(model);
    _recipeRepository.Create(recipe);
    foreach (Tag t in model.Tags)
    {
        _tagRepository.Create(t); // I'm using the same DbContext as the recipe repo!
    }
    _unitOfWork.Save();
}
Searching around the internet, I found this: http://www.thereformedprogrammer.net/is-the-repository-pattern-useful-with-entity-framework/ It's a two-part article about the usefulness of the repository pattern by Jon Smith.
The second part focuses on a solution. Hope it helps!
To answer your question: a repository-with-unit-of-work implementation is a bad one.
The Entity Framework DbContext is implemented by Microsoft according to the unit of work pattern. That means context.SaveChanges saves your changes transactionally, in one go.
The DbSet is also an implementation of the repository pattern. Do not build repositories just so that you can do:
void Add(Customer c)
{
    _context.Customers.Add(c);
}
Why create a one-liner method for something you can just do inside the service anyway?
There is no benefit, and nobody is swapping EF for another ORM nowadays...
You do not need that freedom...
Chris Hardie argues that multiple context objects could be instantiated, but if you are doing that you are already doing it wrong...
Just use an IoC tool you like, set up the context per HTTP request, and you are fine.
Take Ninject, for example:
kernel.Bind<ITeststepService>().To<TeststepService>().InRequestScope().WithConstructorArgument("context", c => new ITMSContext());
The service running the business logic gets the context injected.
Just keep it simple stupid :-)
You should consider "command/query objects" as an alternative. You can find a bunch of interesting articles in this area, but here is a good one:
https://rob.conery.io/2014/03/03/repositories-and-unitofwork-are-not-a-good-idea/
When you need a transaction over multiple DB objects, use one command object per command to avoid the complexity of the UOW pattern.
A query object per query is likely unnecessary for most projects. Instead you might choose to start with a 'FooQueries' object
...by which I mean you can start with a Repository pattern for READS but name it as "Queries" to be explicit that it does not and should not do any inserts/updates.
Later, you might find splitting out individual query objects worthwhile; if you want to add things like authorization and logging, you could feed a query object into a pipeline.
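A minimal command object along those lines might look like this (my own sketch; the type and property names are made up):

public class DeactivateCustomerCommand
{
    private readonly MyDbContext db;

    public DeactivateCustomerCommand(MyDbContext db)
    {
        this.db = db;
    }

    public void Execute(int customerId)
    {
        var customer = db.Customers.Find(customerId);
        customer.IsActive = false;

        // SaveChanges already wraps the tracked changes in a single transaction,
        // so no separate unit-of-work class is needed here.
        db.SaveChanges();
    }
}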
I always use UoW with EF code first. I find it more performant and easier to manage your contexts, preventing memory leaks and such. You can find an example of my workaround on my GitHub: http://www.github.com/stefchri in the RADAR project.
If you have any questions about it feel free to ask them.
I have always used the Repository pattern, but for my latest project I wanted to see if I could perfect my use of it and my implementation of “Unit of Work”. The more I dug, the more I started asking myself: "Do I really need it?"
It all started with a couple of comments on Stack Overflow that traced back to Ayende Rahien's posts on his blog, two in particular:
repository-is-the-new-singleton
ask-ayende-life-without-repositories-are-they-worth-living
This could probably be debated forever, and it depends on the application. What I'd like to know is:
would this approach be suited to an Entity Framework project?
using this approach, does the business logic still go in a service layer, or in extension methods (as shown below; I know the extension method is using the NHibernate session)?
That's easily done using extension methods. Clean, simple and reusable.
public static IEnumerable<T> GetAll<T>(
    this ISession instance, Expression<Func<T, bool>> where) where T : class
{
    return instance.QueryOver<T>().Where(where).List();
}
Using this approach and Ninject as DI, do I need to make the context an interface and inject it into my controllers?
I've gone down many paths and created many implementations of repositories on different projects and... I've thrown in the towel and given up on it. Here's why.
Coding for the exception
Do you code for the 1% chance your database is going to change from one technology to another? If you're thinking about your business's future state and say yes, that's a possibility, then a) they must have a lot of money to afford a migration to another DB technology, or b) you're choosing a DB technology for fun, or c) something has gone horribly wrong with the first technology you decided to use.
Why throw away the rich LINQ syntax?
LINQ and EF were developed so you could do neat stuff with them to read and traverse object graphs. Creating and maintaining a repository that can give you the same flexibility is a monstrous task. In my experience, any time I've created a repository I've ALWAYS had business logic leak into the repository layer, either to make queries more performant or to reduce the number of hits to the database.
I don't want to create a method for every single permutation of a query that I have to write. I might as well write stored procedures. I don't want GetOrder, GetOrderWithOrderItem, GetOrderWithOrderItemWithOrderActivity, GetOrderByUserId, and so on... I just want to get the main entity and traverse and include the object graph as I so please.
Most examples of repositories are bullshit
Unless you are developing something REALLY bare-bones like a blog or something your queries are never going to be as simple as 90% of the examples you find on the internet surrounding the repository pattern. I cannot stress this enough! This is something that one has to crawl through the mud to figure out. There will always be that one query that breaks your perfectly thought out repository/solution that you've created, and it's not until that point where you second guess yourself and the technical debt/erosion begins.
Don't unit test me bro
But what about unit testing if I don't have a repository? How will I mock? Simple, you don't. Lets look at it from both angles:
No repository - You can mock the DbContext using an IDbContext or some other tricks but then you're really unit testing LINQ to Objects and not LINQ to Entities because the query is determined at runtime... OK so that's not good! So now it's up to the integration test to cover this.
With repository - You can now mock your repositories and unit test the layer(s) in between. Great, right? Well, not really... In the cases above where you have to leak logic into the repository layer to make queries more performant and/or reduce hits to the database, how can your unit tests cover that? It's now in the repo layer and you don't want to test IQueryable<T>, right? Also, let's be honest, your unit tests aren't going to cover the queries that have a 20-line .Where() clause and .Include() a bunch of relationships and hit the database again to do all this other stuff, blah, blah, blah, anyway, because the query is generated at runtime. Also, since you created the repository to keep the upper layers persistence ignorant, if you now want to change your database technology, sorry, your unit tests are definitely not going to guarantee the same results at runtime, so it's back to integration tests. So the whole point of the repository seems weird.
2 cents
We already lose a lot of functionality and syntax when using EF over plain stored procedures (bulk inserts, bulk deletes, CTEs, etc.) but I also code in C# so I don't have to type binary. We use EF so we can have the possibility of using different providers and to work with object graphs in a nice related way amongst many things. Certain abstractions are useful and some are not.
The repository pattern is an abstraction. Its purpose is to reduce complexity and make the rest of the code persistence ignorant. As a bonus, it allows you to write unit tests instead of integration tests.
The problem is that many developers fail to understand the pattern's purpose and create repositories which leak persistence-specific information up to the caller (typically by exposing IQueryable<T>). By doing so they get no benefit over using the OR/M directly.
Update to address another answer
Coding for the exception
Using repositories is not about being able to switch persistence technology (i.e. changing the database or using a web service etc. instead). It's about separating business logic from persistence to reduce complexity and coupling.
Unit tests vs integration tests
You do not write unit tests for repositories. Period.
But by introducing repositories (or any other abstraction layer between persistence and business) you are able to write unit tests for the business logic. That is, you do not have to worry about your tests failing due to an incorrectly configured database.
As for the queries: if you use LINQ you also have to make sure that your queries work, just as you do with repositories, and that is done using integration tests.
The difference is that if you have not mixed your business logic with LINQ statements, you can be 100% sure that it's your persistence code that is failing and not something else.
If you analyze your tests you will also see that they are much cleaner if you have not mixed concerns (i.e. LINQ + business logic).
Repository examples
Most examples are bullshit. That is very true. However, if you google any design pattern you will find a lot of crappy examples. That is no reason to avoid using a pattern.
Building a correct repository implementation is very easy. In fact, you only have to follow a single rule:
Do not add anything into the repository class until the very moment that you need it
A lot of coders are lazy and try to make a generic repository, using a base class with a lot of methods that they might need. YAGNI. You write the repository class once and keep it as long as the application lives (which can be years). Why fuck it up by being lazy? Keep it clean, without any base class inheritance. That will make it much easier to read and maintain.
(The above statement is a guideline and not a law. A base class can very well be motivated. Just think before you add it, so that you add it for the right reasons)
Old stuff
Conclusion:
If you don't mind having LINQ statements in your business code and don't care about unit tests, I see no reason not to use Entity Framework directly.
Update
I've blogged both about the repository pattern and what "abstraction" really means: http://blog.gauffin.org/2013/01/repository-pattern-done-right/
Update 2
For a single entity type with 20+ fields, how will you design query methods to support any permutation/combination? You don't want to limit searching only by name; what about searching with navigation properties, listing all orders with an item with a specific price code, or a 3-level navigation property search? The whole reason IQueryable was invented was to be able to compose any combination of searches against the database. Everything looks great in theory, but users' needs win over theory.
Again: An entity with 20+ fields is incorrectly modeled. It's a GOD entity. Break it down.
I'm not arguing that IQueryable wasn't made for querying. I'm saying that it's not right for an abstraction layer like the Repository pattern, since it's leaky. There is no 100% complete LINQ-to-SQL provider (EF included).
They all have implementation-specific things, like how to use eager/lazy loading or how to do SQL "IN" statements. Exposing IQueryable in the repository forces the user to know all those things. Thus the whole attempt to abstract away the data source is a complete failure. You just add complexity without getting any benefit over using the OR/M directly.
Either implement the Repository pattern correctly or just don't use it at all.
(If you really want to handle big entities, you can combine the Repository pattern with the Specification pattern. That gives you a complete abstraction which is also testable.)
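Roughly (my sketch, not from this answer), a Repository + Specification combination keeps IQueryable inside the repository while still letting callers compose searches; the entity and context names are made up:

public interface ISpecification<T>
{
    Expression<Func<T, bool>> ToExpression();
}

public class ActiveCustomersSpecification : ISpecification<Customer>
{
    public Expression<Func<Customer, bool>> ToExpression()
    {
        return c => c.IsActive; // the filtering rule lives in the specification
    }
}

public class CustomerRepository
{
    private readonly MyDbContext db;

    public CustomerRepository(MyDbContext db)
    {
        this.db = db;
    }

    public IReadOnlyList<Customer> Find(ISpecification<Customer> spec)
    {
        // the IQueryable never leaves the repository
        return db.Customers.Where(spec.ToExpression()).ToList();
    }
}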
IMO both the Repository abstraction and the UnitOfWork abstraction have a very valuable place in any meaningful development. People will argue about implementation details, but just as there are many ways to skin a cat, there are many ways to implement an abstraction.
Your question is specifically to use or not to use and why.
As you have no doubt realised, you already have both these patterns built into Entity Framework: DbContext is the UnitOfWork and DbSet is the Repository. You don’t generally need to unit test the UnitOfWork or Repository themselves, as they are simply facilitating between your classes and the underlying data access implementations. What you will find yourself needing to do, again and again, is mock these two abstractions when unit testing the logic of your services.
You can mock, fake or whatever with external libraries, but that adds layers of code dependencies (that you don’t control) between the logic doing the testing and the logic being tested.
So a minor point is that having your own abstraction for UnitOfWork and Repository gives you maximum control and flexibility when mocking your unit tests.
All very well, but for me, the real power of these abstractions is they provide a simple way to apply Aspect Oriented Programming techniques and adhere to the SOLID principles.
So you have your IRepository:
public interface IRepository<T>
    where T : class
{
    T Add(T entity);
    void Delete(T entity);
    IQueryable<T> AsQueryable();
}
And its implementation:
public class Repository<T> : IRepository<T>
    where T : class
{
    private readonly IDbSet<T> _dbSet;

    public Repository(PPContext context)
    {
        _dbSet = context.Set<T>();
    }

    public T Add(T entity)
    {
        return _dbSet.Add(entity);
    }

    public void Delete(T entity)
    {
        _dbSet.Remove(entity);
    }

    public IQueryable<T> AsQueryable()
    {
        return _dbSet.AsQueryable();
    }
}
Nothing out of the ordinary so far but now we want to add some logging - easy with a logging Decorator.
public class RepositoryLoggerDecorator<T> : IRepository<T>
    where T : class
{
    Logger logger = LogManager.GetCurrentClassLogger();

    private readonly IRepository<T> _decorated;

    public RepositoryLoggerDecorator(IRepository<T> decorated)
    {
        _decorated = decorated;
    }

    public T Add(T entity)
    {
        logger.Log(LogLevel.Debug, () => DateTime.Now.ToLongTimeString());
        T added = _decorated.Add(entity);
        logger.Log(LogLevel.Debug, () => DateTime.Now.ToLongTimeString());
        return added;
    }

    public void Delete(T entity)
    {
        logger.Log(LogLevel.Debug, () => DateTime.Now.ToLongTimeString());
        _decorated.Delete(entity);
        logger.Log(LogLevel.Debug, () => DateTime.Now.ToLongTimeString());
    }

    public IQueryable<T> AsQueryable()
    {
        return _decorated.AsQueryable();
    }
}
All done, and with no change to our existing code. There are numerous other cross-cutting concerns we can add, such as exception handling, data caching or data validation, and throughout the design and build process the most valuable thing enabling us to add simple features without changing any of our existing code is our IRepository abstraction.
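For instance, here is a deliberately naive caching decorator of my own, following exactly the same shape; a real one would need proper invalidation, but it shows how another concern drops in without touching existing code:

public class RepositoryCachingDecorator<T> : IRepository<T>
    where T : class
{
    private static readonly ObjectCache Cache = MemoryCache.Default;
    private readonly IRepository<T> _decorated;

    public RepositoryCachingDecorator(IRepository<T> decorated)
    {
        _decorated = decorated;
    }

    public T Add(T entity)
    {
        Cache.Remove(typeof(T).FullName); // crude invalidation on writes
        return _decorated.Add(entity);
    }

    public void Delete(T entity)
    {
        Cache.Remove(typeof(T).FullName);
        _decorated.Delete(entity);
    }

    public IQueryable<T> AsQueryable()
    {
        var key = typeof(T).FullName;
        var cached = Cache.Get(key) as List<T>;
        if (cached == null)
        {
            cached = _decorated.AsQueryable().ToList();
            Cache.Set(key, cached, DateTimeOffset.Now.AddMinutes(5));
        }
        return cached.AsQueryable();
    }
}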
Now, many times I have seen this question on StackOverflow – “how do you make Entity Framework work in a multi tenant environment?”.
https://stackoverflow.com/search?q=%5Bentity-framework%5D+multi+tenant
If you have a Repository abstraction then the answer is “it’s easy, add a decorator”:
public class RepositoryTennantFilterDecorator<T> : IRepository<T>
    where T : class
{
    // public for the unit test example below
    public readonly IRepository<T> _decorated;

    public RepositoryTennantFilterDecorator(IRepository<T> decorated)
    {
        _decorated = decorated;
    }

    public T Add(T entity)
    {
        return _decorated.Add(entity);
    }

    public void Delete(T entity)
    {
        _decorated.Delete(entity);
    }

    public IQueryable<T> AsQueryable()
    {
        return _decorated.AsQueryable().Where(o => true);
    }
}
IMO you should always place a simple abstraction over any 3rd party component that will be referenced in more than a handful of places. From this perspective an ORM is the perfect candidate as it is referenced in so much of our code.
The answer that normally comes to mind when someone says “why should I have an abstraction (e.g. Repository) over this or that 3rd party library” is “why wouldn’t you?”
P.S. Decorators are extremely simple to apply using an IoC Container, such as SimpleInjector.
[TestFixture]
public class IRepositoryTesting
{
    [Test]
    public void IRepository_ContainerRegisteredWithTwoDecorators_ReturnsDecoratedRepository()
    {
        Container container = new Container();
        container.RegisterLifetimeScope<PPContext>();
        container.RegisterOpenGeneric(
            typeof(IRepository<>),
            typeof(Repository<>));
        container.RegisterDecorator(
            typeof(IRepository<>),
            typeof(RepositoryLoggerDecorator<>));
        container.RegisterDecorator(
            typeof(IRepository<>),
            typeof(RepositoryTennantFilterDecorator<>));
        container.Verify();

        using (container.BeginLifetimeScope())
        {
            var result = container.GetInstance<IRepository<Image>>();

            Assert.That(
                result,
                Is.InstanceOf(typeof(RepositoryTennantFilterDecorator<Image>)));
            Assert.That(
                (result as RepositoryTennantFilterDecorator<Image>)._decorated,
                Is.InstanceOf(typeof(RepositoryLoggerDecorator<Image>)));
        }
    }
}
First of all, as suggested in some answers, EF itself already implements the repository pattern; there is no need to create a further abstraction just to call it a repository.
Mockable Repository for Unit Tests, do we really need it?
We let EF talk to a test DB in unit tests, testing our business logic straight against a SQL test database. I don't see any benefit in having a mock of any repository pattern at all. What is really wrong with doing unit tests against a test database? As it is, bulk operations are not possible and we end up writing raw SQL anyway. SQLite in memory is a perfect candidate for doing unit tests against a real database.
Unnecessary Abstraction
Do you want to create a repository just so that in the future you can easily replace EF with NHibernate or anything else? Sounds like a great plan, but is it really cost effective?
Linq kills unit tests?
I would like to see any examples of how it can.
Dependency Injection, IoC
Wow, these are great words; sure, they look great in theory, but sometimes you have to trade off between great design and a great solution. We did use all of that, and we ended up throwing it all in the trash and choosing a different approach. Size vs speed (size of code and speed of development) matters hugely in real life. Users need flexibility; they don't care if your code is great in terms of DI or IoC.
Unless you are building Visual Studio
All these great designs are needed if you are building a complex program like Visual Studio or Eclipse, which will be developed by many people and needs to be highly customizable. All the great development patterns came into the picture after the years of development these IDEs have gone through, and they have evolved to a place where all these design patterns matter so much. But if you are building a simple web-based payroll or a simple business app, it is better to evolve your development over time, instead of spending time building it for a million users when it will only be deployed for hundreds of users.
Repository as Filtered View - ISecureRepository
On the other side, a repository should be a filtered view of EF which guards access to data by applying the necessary filters based on the current user/role.
But doing so complicates the repository even more, as it ends up being a huge code base to maintain. People end up creating different repositories for different user types or combinations of entity types. Not only this, we also end up with lots of DTOs.
The following is an example implementation of a filtered repository without creating a whole set of classes and methods. It may not answer the question directly, but it can be useful in deriving one.
Disclaimer: I am author of Entity REST SDK.
http://entityrestsdk.codeplex.com
Keeping the above in mind, we developed an SDK which creates a repository as a filtered view based on a SecurityContext, which holds filters for CRUD operations. Only two kinds of rules simplify any complex operation: the first is access to the entity, and the other is a read/write rule per property.
The advantage is that you do not rewrite business logic or repositories for different user types; you simply block or grant them access.
public class DefaultSecurityContext : BaseSecurityContext
{
    public static DefaultSecurityContext Instance = new DefaultSecurityContext();

    // UserID for currently logged in User
    public static long UserID
    {
        get
        {
            return long.Parse(HttpContext.Current.User.Identity.Name);
        }
    }

    public DefaultSecurityContext()
    {
    }

    protected override void OnCreate()
    {
        // User can access his own Account only
        var acc = CreateRules<Account>();
        acc.SetRead(y => x => x.AccountID == UserID);
        acc.SetWrite(y => x => x.AccountID == UserID);

        // User can only modify AccountName and EmailAddress fields
        acc.SetProperties(SecurityRules.ReadWrite,
            x => x.AccountName,
            x => x.EmailAddress);

        // User can read AccountType field
        acc.SetProperties<Account>(SecurityRules.Read,
            x => x.AccountType);

        // User can access his own Orders only
        var order = CreateRules<Order>();
        order.SetRead(y => x => x.CustomerID == UserID);

        // User can modify Order only if OrderStatus is not complete
        order.SetWrite(y => x => x.CustomerID == UserID
            && x.OrderStatus != "Complete");

        // User can only modify OrderNotes and OrderStatus
        order.SetProperties(SecurityRules.ReadWrite,
            x => x.OrderNotes,
            x => x.OrderStatus);

        // User can not delete orders
        order.SetDelete(order.NotSupportedRule);
    }
}
These LINQ rules are evaluated against the database in the SaveChanges method for every operation, and they act as a firewall in front of the database.
There is a lot of debate over which method is correct, so I take the view that both are acceptable and use whichever one I like the most (which, for me, is no repository and no UoW).
In EF, UoW is implemented via the DbContext and the DbSets are the repositories.
As for how to work with the data layer, I just work directly on the DbContext object; for complex queries I will make extension methods for the query that can be reused, as in the sketch below.
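Something along these lines, purely as an illustration (the entity and property names are made up):

public static class OrderQueries
{
    // a reusable, composable query over the context's IQueryable
    public static IQueryable<Order> OpenOrdersFor(this IQueryable<Order> orders, int customerId)
    {
        return orders.Where(o => o.CustomerId == customerId && !o.IsComplete);
    }
}

// usage, straight off the context:
// var open = db.Orders.OpenOrdersFor(42).ToList();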
I believe Ayende also has some posts about how abstracting out CUD operations is bad.
I always make an interface and have my context implement it, so I can use an IoC container for DI.
What most people apply over EF is not the Repository pattern. It is a Facade pattern (abstracting the calls to EF methods into simpler, easier-to-use versions).
EF is the one applying the Repository pattern (and the Unit of Work pattern as well). That is, EF is the one abstracting the data access layer so that the user has no idea they are dealing with SQL Server.
And at that, most "repositories" over EF are not even good Facades as they merely map, quite straightforwardly, to single methods in EF, even to the point of having the same signatures.
The two reasons, then, for applying this so-called "Repository" pattern over EF are to allow easier testing and to establish a subset of "canned" calls to it. Not bad in themselves, but clearly not a repository.
LINQ is the 'repository' of nowadays.
ISession + LINQ already is the repository, and you need neither GetXByY methods nor a QueryData(Query q) generalization. Being a little paranoid about DAL usage, I still prefer a repository interface. (From a maintainability point of view, we also still have to have some facade over specific data access interfaces.)
Here is the repository we use - it decouples us from direct usage of NHibernate, but provides a LINQ interface (as well as ISession access in exceptional cases, which are subject to eventual refactoring).
class Repo<T> where T : class
{
    ISession _session; // via IoC

    public IQueryable<T> Query()
    {
        return _session.Query<T>();
    }
}
The repository (or whatever one chooses to call it) is for me, at this time, mostly about abstracting away the persistence layer.
I use it coupled with query objects, so I am not coupled to any particular technology in my applications. It also eases testing a lot.
So, I tend to have
public interface IRepository : IDisposable
{
    void Save<TEntity>(TEntity entity);
    void SaveList<TEntity>(IEnumerable<TEntity> entities);

    void Delete<TEntity>(TEntity entity);
    void DeleteList<TEntity>(IEnumerable<TEntity> entities);

    IList<TEntity> GetAll<TEntity>() where TEntity : class;
    int GetCount<TEntity>() where TEntity : class;

    void StartConversation();
    void EndConversation();

    // if query objects can be self sustaining (i.e. not need additional configuration - think session),
    // there is no need to include this method in the repository.
    TResult ExecuteQuery<TResult>(IQueryObject<TResult> query);
}
Possibly add async methods with callbacks as delegates.
The repo is easy to implement generically, so I don't have to touch a line of the implementation from app to app. Well, this is true at least when using NH; I did it also with EF, but it made me hate EF 4. The conversation is the start of a transaction - very cool if a few classes share the repository instance. Also, for NH, one repo in my implementation equals one session, which is opened at the first request.
Then the Query Objects
public interface IQueryObject<TResult>
{
    /// <summary>Provides configuration options.</summary>
    /// <remarks>
    /// If the query object is used through a repository, this method might or might not be called
    /// depending on the particular implementation of the repository.
    /// If not used through a repository, it can be useful as a configuration option.
    /// </remarks>
    void Configure(object parameter);

    /// <summary>Implementation of the query.</summary>
    TResult GetResult();
}
I use Configure in NH only to pass in the ISession; in EF it more or less makes no sense.
An example query would be... (NH):
public class GetAll<TEntity> : AbstractQueryObject<IList<TEntity>>
    where TEntity : class
{
    public override IList<TEntity> GetResult()
    {
        return this.Session.CreateCriteria<TEntity>().List<TEntity>();
    }
}
To do an EF query you would have to have the context in the abstract base, not the session, but of course the interface would be the same.
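Something like this, roughly (my guess at the EF flavour; the Context property on the abstract base is an assumption):

public class GetAllEf<TEntity> : AbstractQueryObject<IList<TEntity>>
    where TEntity : class
{
    public override IList<TEntity> GetResult()
    {
        // Context would be a DbContext exposed by the abstract base,
        // e.g. handed in via Configure, instead of the NH ISession.
        return this.Context.Set<TEntity>().ToList();
    }
}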
In this way the queries are themselves encapsulated and easily testable. Best of all, my code relies only on interfaces. Everything is very clean. Domain (business) objects are just that; there is no mixing of responsibilities as with the Active Record pattern, which is hardly testable and mixes data access (query) code into the domain object, thereby mixing concerns (an object which fetches itself??). Everybody is still free to create POCOs for data transfer.
All in all, this approach provides a lot of code reuse and simplicity, at the loss of nothing I can imagine. Any ideas?
And thanks a lot to Ayende for his great posts and continued dedication. It's his idea here (the query object), not mine.
For me, it's a simple decision, with relatively few factors. The factors are:
1. Repositories are for domain classes.
2. In some of my apps, domain classes are the same as my persistence (DAL) classes; in others they are not.
3. When they are the same, EF is providing me with repositories already.
4. EF provides lazy loading and IQueryable. I like these.
5. Abstracting/'facading'/re-implementing a repository over EF usually means losing lazy loading and IQueryable.
So, if my app can't justify #2, separate domain and data models, then I usually won't bother with #5.
This question doesn't let me sleep; I've been trying to find a solution for a year now, but... still nothing has come to mind. Probably you can help me, because I think this is a very common issue.
I have an n-layered application: presentation layer, business logic layer, model layer. Suppose for simplicity that my application contains, in the presentation layer, a form that allows a user to search for a customer. Now the user fills in the filters through the UI and clicks a button. Something happens and the request arrives from the presentation layer at a business layer method like CustomerSearch(CustomerFilter myFilter). The business logic layer keeps it simple: it creates a query on the model and gets back results.
Now the question: how do you face the problem of loading data? I mean, the business logic layer doesn't know that that particular method will be invoked just by that form. So I think it doesn't know whether the requesting form needs just the Customer objects back, or the Customer objects with the linked Order entities.
Let me try to explain better:
Our form just wants to list customers, searching by surname. It has nothing to do with orders. So the business logic query will be something like:
(from c in ctx.CustomerSet
 where c.Name.Contains(strQry)
 select c).ToList();
Now this is working correctly. Two days later your boss asks you to add a form that lets you search for customers like the other one, but you also need to show the total count of orders created by each customer. Now I'd like to reuse that query and add the piece of logic that attaches (includes) the orders and returns them as well.
How would you handle this request?
Here is the best idea (I think) I've had so far; I'd like to hear from you:
my CustomerSearch method in the BLL doesn't create the query directly but passes it through private extension methods that compose the ObjectQuery, like:
private static ObjectQuery<Customer> SearchCustomers(this ObjectQuery<Customer> qry, CustomerFilter myFilter)
and
private static ObjectQuery<Customer> IncludeOrders(this ObjectQuery<Customer> qry)
but this doesn't convince me as it seems too complex.
Thanks,
Marco
Consider moving to DTOs for the interface between the presentation layer and the business layer; see, for example: http://msdn.microsoft.com/en-us/magazine/ee236638.aspx
Something like Automapper can relieve much of the pain associated with moving to DTOs and the move will make explicit what you can and cannot do with the results of a query, i.e. if it's on the DTO it's loaded, if it's not you need a different DTO.
Your current plan sounds rather too tightly coupled between the presentation layer and the data layer.
I would agree with the comment from Hightechrider in reference to using DTOs, however you have a valid question with regard to business entities.
One possible solution (I'm using something along these lines on a project I'm developing) is to use DTOs that are read-only, at least from the presentation layer perspective. Your query/get operations would only return DTOs; this would give you the lazy loading capability.
You could set up your business layer to return an Editable object that wraps the DTO when an object/entity is updated/created. Your editable object could enforce any business rules, and then when it is saved/passed to the business layer, the DTO it wrapped (with the updated values) could be passed to the data layer.
public class Editable
{
    // .......initialize this, other properties/methods....

    public bool CanEdit<TRet>(Expression<Func<Dto, TRet>> property)
    {
        // do something to determine can edit
        return true;
    }

    public bool Update<TRet>(Expression<Func<Dto, TRet>> property, TRet updatedValue)
    {
        if (CanEdit(property))
        {
            // set the value on the property of the DTO (somehow)
            return true;
        }
        return false;
    }

    public Dto ValueOf { get; private set; }
}
This gives you the ability to enforce whether the user can get editable objects from the business layer, as well as allowing the business object to enforce whether the user has permission to edit specific properties of an object. A common problem I run into in the domain I work in is that some users can edit all of the properties and others cannot, while anyone can view the values of the properties. Additionally, the presentation layer gains the ability to determine what to expose as editable to the user, as dictated and enforced by the business layer.
Another thought I had: your business layer could expose IQueryable, or take standard expressions as arguments that you pass to your data layer. For example, I have a page-building query something like this:
public class PageData
{
    public int PageNum;
    public int TotalNumberPages;
    public IEnumerable<Dto> DataSet;
}

public class BL
{
    public PageData GetPagedData(int pageNum, int itemsPerPage, Expression<Func<Dto, bool>> whereClause)
    {
        var dataCt = dataContext.Dtos.Where(whereClause).Count();
        var dataSet = dataContext.Dtos.Where(whereClause).Skip(pageNum * itemsPerPage).Take(itemsPerPage);

        var ret = new PageData
        {
            //init this
        };
        return ret;
    }
}
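A call site for that might look like this (illustrative only; the construction of BL and the Dto property names are assumptions):

var bl = new BL();

// the presentation layer only supplies an expression over the DTO,
// so it never touches the data layer directly
PageData page = bl.GetPagedData(
    pageNum: 0,
    itemsPerPage: 20,
    whereClause: dto => dto.Name.Contains("Rossi"));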