EF4 with MVC3 - Do I need The Repository Pattern? - entity-framework

I have recently learned of the Repository and Unit of Work Design Patterns and thought that I would implement them in a new EF4 MVC3 project, since abstraction is generally good.
As I add them to the project, I am wondering if the juice is worth the proverbial squeeze, given the following:
It is EXTREMELY unlikely that the underlying data access mechanism will change from EF4.
This level of abstraction will require more overhead/confusion to the project and to other developers on the team.
The only real benefit I see to using the Repository pattern is for unit testing the application. Abstracting away the data store doesn't seem useful since I know the datastore won't change, and further, that EF4 already provides a pretty good abstraction (I just call .AddObject() and it looks like I am modifying an in-memory collection and I just call .SaveChanges() which already provides the unit of work pattern).
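For example, the direct EF4 usage I'm describing looks roughly like this (the context and entity names here are made up for illustration):

```csharp
using (var context = new StoreEntities())   // hypothetical ObjectContext
{
    var customer = new Customer { Name = "Alice" };
    context.Customers.AddObject(customer);  // feels like adding to an in-memory collection
    customer.Name = "Alice B.";             // change tracking picks this edit up
    context.SaveChanges();                  // one unit of work commits everything
}
```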
Should I even bother implementing this abstraction? I feel like there must be some massive benefit that I am missing, but it just doesn't feel like I need to go down this route. I am willing to be convinced otherwise; can someone make a case? Thanks.

I recommend reading this answer and all the linked questions. The repository is a very popular pattern, and it really does make your application feel nice and clean. It makes you feel that your architecture is correct, but some assumptions about the repository pattern with EF are not correct. In my opinion (described in those answers):
It will make some of the more complex EF-related tasks much harder to achieve, or your repository and UoW implementations will need to expose a public interface very similar to EF's.
It will not make your code more unit-testable, because all interactions with the repository must still be covered by integration tests. My experience is not the only one to show that mocking EF code by replacing LINQ to Entities with LINQ to Objects does not really test your code.
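To illustrate the first point, here is a typical "generic" repository sketch (illustrative names only). It ends up mirroring EF's own surface almost one-to-one, so the abstraction buys very little:

```csharp
// Compare these members with ObjectSet<T> / ObjectContext: the public
// interface is nearly identical to EF's own.
public interface IRepository<T> where T : class
{
    IQueryable<T> Query();   // ~ context.CreateObjectSet<T>()
    void Add(T entity);      // ~ objectSet.AddObject(entity)
    void Remove(T entity);   // ~ objectSet.DeleteObject(entity)
}

public interface IUnitOfWork : IDisposable
{
    IRepository<T> Repository<T>() where T : class;
    void Commit();           // ~ context.SaveChanges()
}
```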

Yes, yes, yes :) First of all, the repository pattern helps you inject your dependencies for unit testing. Secondly, it gives a very clear view of exactly which data access methods are available, rather than having people write miscellaneous code against the EF layer directly. Download the POCO templates for EF4, though, so your classes don't carry the EF properties around with them, in case you use them as models and/or don't want any EF dependency references in your MVC app (assuming your repository work is in a separate project, which I recommend). If you are using view models everywhere it's not as much of a concern, but it's nice working with a "Customer" object without extra methods on it. It's cleaner, in my opinion.
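To sketch what I mean (all names here are illustrative, not from any real project): an explicit repository interface documents exactly what data access exists, and the controller can take it through its constructor for unit testing:

```csharp
public interface ICustomerRepository
{
    Customer GetById(int id);
    IList<Customer> GetWithOpenOrders();
    void Add(Customer customer);
}

public class CustomerController : Controller
{
    private readonly ICustomerRepository _customers;

    // The dependency is injected, so tests can pass in a fake repository.
    public CustomerController(ICustomerRepository customers)
    {
        _customers = customers;
    }

    public ActionResult Details(int id)
    {
        return View(_customers.GetById(id));
    }
}
```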

Related

Most common practice for managing ObjectContext in EF

I'm really confused about the standard way to create and dispose of the context in my MVC3 app with multiple layers. I started with EF4 and upgraded to EF5, and the default MSDN tutorials always seem to recommend working within using blocks, which seems particularly lousy: it seems to me that I then have to pass the context object up and down the method chain.
I've done a fair bit of reading about context per request, repository patterns, unit of work patterns, etc, and it seems everyone is reinventing the wheel.
Are developers really sitting on a plethora of different EF implementations, or is there a common approach that I missed in a master tutorial?
There might be a few different ways to implement the UoW and Repository patterns, but one thing everyone agrees on is that they are pretty useful, because they create an abstraction layer over the context created by Entity Framework.
There are several reasons not to use the EF DbContext directly; two of them are preventing misuse and abstracting away complex features that should not be exposed to all developers.
Now, concerning the implementation: I did not feel like I was reinventing the wheel when I came up with this way of using UoW and repositories. Please have a look and tell me what you think; it's pretty straightforward.
Hope that helps!
What you really need to remember is that the underlying context is still a DbConnection. It is recommended to wrap it in a using statement so you don't forget to dispose of it when you are done.
Other than that, it really depends on what you are doing. Sometimes wrapping it in a using statement is fine. Other times you may need to keep an instance around and keep using it; again, just remember to dispose of it when you are done.
I think the repository pattern is fairly popular for abstracting the context, so you can just call methods on the repository, which returns results from the context and keeps it alive.
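As a sketch of both styles (the context and entity names are made up for illustration):

```csharp
// Short-lived: the using block disposes the context (and its underlying
// DbConnection) even if an exception is thrown.
using (var context = new ShopContext())
{
    var user = context.Users.Find(userId);
    // ... work with user ...
}

// Longer-lived: keep one instance per request and dispose it at the end,
// e.g. in an MVC controller.
public class HomeController : Controller
{
    private readonly ShopContext _context = new ShopContext();

    protected override void Dispose(bool disposing)
    {
        if (disposing) _context.Dispose();
        base.Dispose(disposing);
    }
}
```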

Is there an advantage with Code First (DbContext model) vs EDMX files?

Our group has a DBA who manages all the databases. We started to use Code First and it's working okay. Now there are suggestions that we should be using a database-first approach, but as far as I am aware that requires us to do the mapping in a diagram, and we cannot use the Fluent API.
We're happy with the idea of POCO classes so would it be best for us to just continue with Code First or is there a particular advantage (other than stored procedure use) with using EDMX files and the traditional way of working?
The main advantages are flexibility, avoiding code generation, and acquiring more control over how things are done behind the scenes.
As you define the mappings in code, you have more power in terms of mapping strategies, tweaking, and configuration.
In summary: your domain won't be database-driven. You have your domain model, and it's the database that needs to fit it. For me, this is how a serious domain should work with a serious OR/M. An OR/M makes it much more feasible to build truly object-oriented domains, while handling the pain of interoperating with a very different world: the relational model.
If you really want a platform-independent, neutral domain model, Code First is the way to go.
Maybe I'm biased, but in my opinion serious medium-to-large projects should start and continue with Code First. Code generation, the EDMX paradigm, and that kind of sugar work as long as your domain isn't that complex. Once it gets complex, you need to work out your own data and domain strategies.
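As a rough illustration of that control (class and table names invented for the example), the Fluent API lets you bend the schema to the DBA's conventions without touching the POCOs:

```csharp
public class BlogContext : DbContext
{
    public DbSet<Post> Posts { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Map the clean POCO onto the DBA's naming conventions.
        modelBuilder.Entity<Post>()
            .ToTable("tbl_Posts")
            .HasKey(p => p.Id);

        modelBuilder.Entity<Post>()
            .Property(p => p.Title)
            .HasMaxLength(200)
            .IsRequired();
    }
}
```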

How to Unit Test the Entity Framework (EF) in my business logic?

I have read a lot about unit testing the Entity Framework.
I am posting this question because there are simply too many solutions to this problem!
Here are the solutions I found:
Use an expensive commercial tool called TypeMock (mentioned here).
Use an alpha open-source tool called Effort (mentioned here).
Use the Repository pattern with Rhino Mocks, and test the isolated LINQ queries against a real database (mentioned here).
Some problems with some of the methods stated here:
You cannot get around the fact that you need to supply an ObjectContext with a connection string
If you fake the ObjectContext, some things that work in unit tests won't work in production (like calling functions inside the queries).
Some of the articles I read were from 3-4 years ago.
Does anyone here have experience with this issue who can help me choose the best solution?
Just to make things clear:
My business logic functions aren't just simple functions like 'GetUserById'.
Some of the functions access objects that have relationships to other objects
(for example, I can add a user + department + office in the same function).
For doing stuff like this I would recommend using the Repository pattern with a mocking framework like Rhino Mocks or Moq to test your business logic, and then writing some integration tests for your repository.
This follows the Single Responsibility Principle, lets you test your business logic without nearly as much overhead (mocking ObjectContext is a pain), and lets you test your queries against real data. I would strongly state that any well-tested solution is going to include both unit and integration tests.
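A rough sketch with Moq (all type names invented; the point is that the business logic is tested against a fake repository, while the repository's real queries get integration tests):

```csharp
[Test]
public void AddUser_WithNewDepartment_AddsBoth()
{
    var repo = new Mock<IUserRepository>();
    repo.Setup(r => r.FindDepartment("Sales")).Returns((Department)null);

    var service = new UserService(repo.Object);
    service.AddUser("Bob", "Sales", "Office 12");

    // Verify the business rule without ever touching ObjectContext.
    repo.Verify(r => r.Add(It.Is<User>(u => u.Department.Name == "Sales")));
}
```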

Entity Framework, application layers and separation of concerns

I'm using Entity Framework 4.1 and ASP.NET MVC 3 for my application. MVC provides the presentation layer, an intermediate library provides the business logic, and the Entity Framework sort of acts as the data layer, I guess.
I could separate the Entity Framework code into a set of repository classes, or an appropriate variation thereof, whatever constitutes a worthwhile data layer, but I'm having trouble resolving a design problem I have.
If the multi-layered approach exists to help me keep concerns separated, then it stands to reason that my choice of data persistence should also not be a concern of the presentation layer. The problem is that by using the Entity Framework, I'm basically tightly coupling my application to the notion that entity changes are tracked and persisted automatically.
As such, let's say in a hypothetical world I found a reason not to use the Entity Framework and wanted to swap it out. A well-designed solution should allow me to do this at the appropriate layer and not have dependent layers affected, but because all code is being written with the knowledge that the data layer tracks object changes, I would only be able to swap out the Entity Framework for something that works in a similar fashion, for example nHibernate.
How do I get to use the Entity Framework but not need to write my code in a way that assumes that entity changes are being tracked by the data layer?
UPDATE for those still wondering about this issue in their own scenarios:
Ayende Rahien wrote a great article shooting down this whole argument:
http://ayende.com/blog/4567/the-false-myth-of-encapsulating-data-access-in-the-dal
If you want to continue down this path, you should give up your programming job and go study philosophy. Entity Framework is an abstraction over persistence, and the law of leaky abstractions says that any non-trivial abstraction is, to some degree, leaky.
Agile methodologies come with a really interesting principle: do not prepare for hypothetical situations. Most of the time it is just gold plating. Every change has its cost. Changing the persistence layer later in the project is costly, but it is also very rare. From the customer's perspective there is no reason to pay part of that cost up front in the majority of projects, where the change is never needed. If we discuss the customer's perspective more deeply, we can say that he should not pay for it at all, because choosing a bad API that has to be replaced later is a failure of the developers/architects. Refactor your code regularly, but only as far as needed to add the new features the customer wants; otherwise you can hardly be competitive in the market. This of course has some exceptions:
The customer wants such an abstraction (or the architecture demands it for some reason and the customer agrees). In that case you must account for it and define an architecture open to such changes.
It is a hobby or open-source project, where you can do what you want because you are not constrained by resources.
Now to your problem. If you want such a high-level abstraction, you should not expose entities to your controllers. Expose DTOs from the business layer (or even from the repositories) and add fields like IsNew, IsModified, and IsDeleted to those DTOs. Now your UI is completely separated from persistence, but your architecture is much more complex, and there is probably no reason for such complexity; it is over-architected. Another way is simply to turn off change tracking (add AsNoTracking() to each query) and proxy creation on your entities (context.Configuration.ProxyCreationEnabled); lazy loading will not work either. That is like throwing away most of the features persistence frameworks offer you.
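A sketch of both options (type and property names invented for illustration):

```csharp
// Option 1: a DTO that carries its own change state, so the UI never
// touches tracked entities.
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public bool IsNew { get; set; }
    public bool IsModified { get; set; }
    public bool IsDeleted { get; set; }
}

// Option 2: switch off tracking and proxy creation entirely.
public List<Customer> LoadDetachedCustomers(ShopContext context)
{
    context.Configuration.ProxyCreationEnabled = false;
    return context.Customers
        .AsNoTracking()          // results come back detached
        .Where(c => c.IsActive)
        .ToList();
}
```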
There are also other points of view. I recommend you reading Ayende's recent posts about repository and his comments to Sharp architecture.
Short answer? You don't. You could turn off EF's tracking and then not worry about it, but that's about it.
If you're going to write your presentation layer with the expectation that changes are being tracked and persisted automatically, then whatever you replace EF with has to do that too. You can't swap it out for something that doesn't track and persist changes automatically and just expect things to keep working. That would be like taking a system that relies on a TCP/IP connection for duplex communication, swapping in an HTTP connection (which, by the nature of HTTP, isn't really duplex), and expecting things to work the same way. They don't.
If you want to be able to swap out your persistence layer for something else and not have to change anything else, then you need to wrap EF (or whatever) in your own custom code to provide the functionality you want. Then you have to provide implementations for anything not provided by whatever you swap to.
This is doable, but it's going to be an awful lot of work for a problem that very rarely actually happens. It's also going to add extra complexity to the project. Ladislav is bang on: it's not worth abstracting this far.
You should implement the repository pattern and plain POCOS if you are concerned about potentially swapping out EF.
There is a great project on Codeplex that goes over Domain Driven design including documentation. Take a look at that.
http://microsoftnlayerapp.codeplex.com/
After reading about the Microsoft N-Layer project, please read Ayende's weblog.
Ayende posted a series of posts about the advantages and disadvantages of the Microsoft N-Layer project.

Is MEF mature enough to bet the company on?

My company needs to rewrite a large monolithic program, and I want it written using a plugin-type architecture. Currently the best solution appears to be MEF, but as it is a fairly 'new' thing I am wary of betting the future of my company (and my reputation) on it.
Does anyone have a feeling for how mature a solution MEF is?
Thanks
Visual Studio's entire extension system is now built on MEF.
That is to say, Microsoft is dog-fooding it (as they are doing with WPF).
Given that the framework developers themselves will be working with it, you can feel pretty confident that it is here to stay. However, as with any first release, you are almost guaranteed to have some growing pains when the next release comes around.
Personally, I would go for it. It is certainly better than the tightly-coupled-reflection-based alternative.
I don't think it is necessary to "bet on MEF". Your code should have very little dependencies on MEF.
You can use the technique of dependency injection to break up your monolithic application into components which have only a single responsibility, and which limit their knowledge of other components to abstractions. See this blog post by Nicholas Blumhardt for a nice overview of the type of relations that can exist between components.
Wiring the components together into an application can then be done with any dependency injection framework, or even manually. The component logic shouldn't need to be aware of the container - there might not even be a container.
In the case of MEF, you do need to add import/export attributes to your classes. However, you can still ignore those attributes and reuse those components without MEF, e.g. by using another DI framework like AutoFac.
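A small sketch (invented names) of what that looks like: the MEF attributes are only metadata, and nothing stops you from wiring the same classes by hand or with another container:

```csharp
public interface IMessageSender
{
    void Send(string message);
}

[Export(typeof(IMessageSender))]
public class EmailSender : IMessageSender
{
    public void Send(string message) { /* send an email */ }
}

public class Notifier
{
    private readonly IMessageSender _sender;

    [ImportingConstructor]
    public Notifier(IMessageSender sender) { _sender = sender; }

    public void NotifyAll() { _sender.Send("System updated"); }
}

// Manual wiring, no MEF container involved:
// var notifier = new Notifier(new EmailSender());
```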
It's a relatively new technology, so I'm not sure if it's exactly mature. I'm sure it will change quite a bit over the next several years, perhaps merging with other frameworks to better support IoC. That said, MS has a pretty good history of preserving backwards compatibility, so now that MEF is actually part of the Framework, I would consider the public interfaces stable.
That said, MEF might not actually be the right solution for your project. It depends on your extensibility needs and how large is 'large'. If you want to support true extensibility, including the possibility for third-party plugins, it has an enormous impact on your design responsibilities. It's much harder to make changes to the infrastructure as you now need to maintain very stable public interfaces. If you're really only after the IoC features, you're probably better off with a true IoC framework, which more clearly limits your design responsibility to support of your internal dependencies. If you're betting the future of the company, this is the bigger question, in my mind.