I'm really confused about the standard way to create and dispose of your context for my MVC3 app with multiple layers. I started with EF4 and upgraded to EF5, and the default MSDN tutorials always seem to demonstrate working within using blocks, which strikes me as particularly lousy - it seems to me that I have to pass the context object up and down the method chain.
I've done a fair bit of reading about context per request, repository patterns, unit of work patterns, etc, and it seems everyone is reinventing the wheel.
Are developers really sitting on a plethora of different EF implementations, or is there a common approach that I missed in a master tutorial?
There might be a few different ways to implement UoW and the Repository pattern, but one thing everyone agrees on is that they're pretty useful, because they create an abstraction level over the Context created by Entity Framework.
There are several reasons not to use the EF DbContext directly; two of them are preventing misuse and abstracting away complex features that should not be exposed to all the developers.
Now, concerning the implementation, I did not feel like I was reinventing the wheel when I came up with the following way of using UoW and Repositories. Please have a look and tell me what you think! It's pretty straightforward.
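A minimal sketch of the shape I mean is below (the type names - IRepository, IUnitOfWork, EfUnitOfWork, Customer - are made up for illustration, not from any library):

    using System;
    using System.Linq;
    using System.Data.Entity;

    public class Customer { public int Id { get; set; } public string Name { get; set; } }

    // The abstraction the upper layers see: no DbContext anywhere.
    public interface IRepository<T> where T : class
    {
        IQueryable<T> Query();
        void Add(T entity);
        void Remove(T entity);
    }

    public interface IUnitOfWork : IDisposable
    {
        IRepository<Customer> Customers { get; }
        int Commit();
    }

    // EF-backed implementations, kept in the data layer.
    public class EfRepository<T> : IRepository<T> where T : class
    {
        private readonly DbSet<T> _set;
        public EfRepository(DbContext context) { _set = context.Set<T>(); }

        public IQueryable<T> Query() { return _set; }
        public void Add(T entity) { _set.Add(entity); }
        public void Remove(T entity) { _set.Remove(entity); }
    }

    public class EfUnitOfWork : IUnitOfWork
    {
        private readonly DbContext _context;

        public EfUnitOfWork(DbContext context)
        {
            _context = context;
            Customers = new EfRepository<Customer>(context);
        }

        public IRepository<Customer> Customers { get; private set; }

        public int Commit() { return _context.SaveChanges(); } // SaveChanges is the commit
        public void Dispose() { _context.Dispose(); }
    }

The controller (or business layer) only ever sees IUnitOfWork, so the EF context is created and disposed in exactly one place.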
Hope that helps!
What you really need to remember is that the underlying context still uses a DbConnection. It is recommended to wrap it in a using statement so you don't forget to dispose of it when you are done.
Other than that, it really depends on what you are doing. Sometimes wrapping it in a using statement is fine. Other times you may need to keep an instance of it around and keep using it; again, just remember to dispose of it when you are done.
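For example, a short-lived context might look like this (MyContext and its Orders set are placeholders for your own types):

    // Create, use and dispose of the context in a single scope.
    public List<Order> GetOpenOrders(int customerId)
    {
        using (var context = new MyContext())
        {
            return context.Orders
                          .Where(o => o.CustomerId == customerId)
                          .ToList();
        } // Dispose() runs here, releasing the underlying connection.
    }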
I think the repository pattern is fairly popular for abstracting the context, so you can just call methods on the repository, have it return results from the context, and keep the context alive behind the scenes.
I'm using the Entity Framework 4.1 and ASP.NET MVC 3 for my application. MVC provides the presentation layer, an intermediate library provides the business logic and the Entity Framework sort of acts as the data layer I guess?
I could separate the Entity Framework code into a set of repository classes, or an appropriate variation thereof, whatever constitutes a worthwhile data layer, but I'm having trouble resolving a design problem I have.
If the multi-layered approach exists to help me keep concerns separated, then it stands to reason that my choice of data persistence should also not be a concern of the presentation layer. The problem is that by using the Entity Framework, I'm basically tightly coupling my application to the notion that entity changes are tracked and persisted automatically.
As such, let's say in a hypothetical world I found a reason not to use the Entity Framework and wanted to swap it out. A well-designed solution should allow me to do this at the appropriate layer and not have dependent layers affected, but because all code is being written with the knowledge that the data layer tracks object changes, I would only be able to swap out the Entity Framework for something that works in a similar fashion, for example nHibernate.
How do I get to use the Entity Framework but not need to write my code in a way that assumes that entity changes are being tracked by the data layer?
UPDATE for those still wondering about this issue in their own scenarios:
Ayende Rahien wrote a great article shooting down this whole argument:
http://ayende.com/blog/4567/the-false-myth-of-encapsulating-data-access-in-the-dal
If you want to continue this way, you should give up your programming job and go study philosophy. Entity Framework is an abstraction of persistence, and there is the rule of leaky abstractions, which says that any non-trivial abstraction is to some degree leaky.
Agile methodologies come with a really interesting phenomenon: do not prepare for hypothetical situations. Most of the time it is just gold plating. Every change has its cost. Changing the persistence layer later in the project is costly, but it is also very rare. From the customer's perspective there is no reason to pay part of that cost up front in the majority of projects, where the change is never needed. If we discuss the customer's perspective more deeply, we can say that he should not pay for it at all, because choosing a bad API which has to be replaced later on is a failure of the developers / architects. Refactor your code regularly, but only to the point needed for adding the new features the customer wants; otherwise you can hardly be competitive on the market. This of course has some exceptions:
The customer wants such an abstraction (or the architecture demands it for some reason and the customer agrees with it). In that case you must plan for it and define an architecture open to such changes.
It is a hobby or open source project where you can do what you want, because it is not constrained by resources.
Now to your problem. If you want such a high level of abstraction, you should not expose entities to your controller. Expose DTOs from the business layer (or even from repositories) and add fields like IsNew, IsModified and IsDeleted to those DTOs. Now your UI is completely separated from the persistence, but your architecture is much more complex, and there is probably no reason for such complexity - it is over-architected. Another way is simply turning off tracking (add AsNoTracking() to each query) and proxy creation on your entities (context.Configuration.ProxyCreationEnabled) - lazy loading will not work either. That is like throwing away most of the features persistence frameworks offer you.
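For reference, turning both off looks roughly like this in EF 4.1+ (MyContext and Orders are placeholders):

    using (var context = new MyContext())
    {
        // No dynamic proxies: lazy loading stops working as a side effect.
        context.Configuration.ProxyCreationEnabled = false;

        // No change tracking: entities come back detached.
        var orders = context.Orders
                            .AsNoTracking()
                            .Where(o => o.IsOpen)
                            .ToList();
    }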
There are also other points of view. I recommend reading Ayende's recent posts about the repository pattern and his comments on S#arp Architecture.
Short answer? You don't. You could turn off EF's tracking and then not worry about it, but that's about it.
If you're going to write your presentation layer with the expectation that changes are being tracked and persisted automatically, then whatever you replace EF with has to do that too. You can't swap it out for something that doesn't track and persist changes automatically and just expect things to keep working. That would be like taking a system that relies on a TCP/IP connection for duplex communication, swapping in an HTTP connection (which by the nature of HTTP isn't really duplex) and expecting things to work the same way. They don't.
If you want to be able to swap out your persistence layer for something else and not have to change anything else, then you need to wrap EF (or whatever) in your own custom code to provide the functionality you want. Then you have to provide implementations for anything not provided by whatever you swap to.
This is doable, but it's going to be an awful lot of work for a problem that very rarely actually happens. It's also going to add extra complexity to the project. Ladislav is bang on: it's not worth abstracting this far.
You should implement the repository pattern and plain POCOs if you are concerned about potentially swapping out EF.
There is a great project on CodePlex that covers Domain-Driven Design, including documentation. Take a look at that.
http://microsoftnlayerapp.codeplex.com/
After reading the Microsoft n-layer project, please also read Ayende's weblog. Ayende posted a series of posts about the advantages and disadvantages of the Microsoft n-layer project.
It seems a lot of Objective-C code is using Singletons nowadays.
While a lot of people complain about Singletons, e.g. Google (Where Have All the Singletons Gone?), their fellow engineers use them anyway: http://code.google.com/mobile/analytics/docs/iphone/
I know we already have some answers on Stack Overflow, but they are not totally specific to Objective-C as a dynamic language: Objective-C has categories, while many other languages do not.
So what is your opinion? Do you still use Singleton? If so, how do you make your app more testable?
Updated: I think we need to use code as an example for a more concrete discussion; so many discussions on SO are theory-based without a single line of code.
Let's use the Google Analytics iOS SDK as an example:
// Initialization
[[GANTracker sharedTracker] startTrackerWithAccountID:@"UA-0000000-1"
                                       dispatchPeriod:kGANDispatchPeriodSec
                                             delegate:nil];

// Track page view
NSError *error = nil;
[[GANTracker sharedTracker] trackPageview:@"/app_entry_point"
                                withError:&error];
The beauty of the above code is that once you have initialized the tracker with "startTrackerWithAccountID:", you can call "trackPageview:" throughout your app without passing configuration around.
If you think Singleton is bad, can you improve the above code?
Many thanks for your input; have a happy Friday.
This post is likely to be downvote-bait, but I don't really understand why singletons get no love. They're perfectly valid, you just have to understand what they're useful for.
In iOS development, you have one and only one instance of the application you currently are. You're only one application, right? You're not two or zero applications, are you? So the framework provides you with a UIApplication singleton through which to get at application-level OS and framework features. Modeling that as a singleton is appropriate.
If you've got data fields of which there can and should be only one, and you need to get to them from all over the place in your app, there's totally nothing wrong with modeling that as a singleton too. Creating a singleton as a globals bucket is probably a misuse of the pattern, and I think that's probably what most people object to about them. But if you're modeling something that has "singleness" to it, a singleton might well be the way to go.
Some developers seem to have a fundamental disgust for singletons, but when actually asked why, they mumble something about globals and namespaces and aesthetics. Which I guess I can understand, if you've really resolved once and for all that Singletons are an anti-pattern and to be abhorred in all cases. But you're not thinking anymore, at that point. And the framework design disagrees with you.
I think most developers go through the Singleton phase, where you have everything you need at your fingertips, in a bunch of wonderful Singletons.
Then you discover that unit testing with Singletons can be difficult. You don't actually want to connect to the database, but your Singleton does. Add a layer of redirection and mock it.
Then you discover that unit testing isn't the only time you need different behaviour. You make your Singleton configurable to have different behaviour based on a parameter. You start to wonder if you need to split it into two Singletons. Then your code needs to know which Singleton to use, so you need a Singleton that knows which Singleton to use.
Then some other code starts messing with the values in your Singleton, while you're using it. How dare they! If you wanted just anybody to get at those values from anywhere, you'd make them global...
Once you get to this point, you start wondering if Singletons were the right solution. You start to see the dangers of global data, particularly within an OO design, where you just assume your data won't get poked at by other people.
So you go back and start passing the data along, rather than looking it up (this used to be called good OO design, but now it has a fancy name like "Dependency Injection").
Eventually you learn that Singletons are fine in moderation. You learn to recognize when your Singleton needs to stop being single.
So you get shared objects like UIApplication and NSUserDefaults. Those are good uses of Singletons.
I got burned enough in the Java Singleton craze a decade ago. I don't even consider writing my own Singletons. The only time I've needed anything similar in recent memory is wanting to cache the result of [NSCalendar currentCalendar] (which takes a long time). I created a category on NSCalendar and cached it as a static variable. I felt a bit dirty, but the alternative was painfully slow code.
To summarize and for those who tl;dr:
Singletons are a tool. They're not likely to be the right tool, but you have to discover that for yourself.
Why do you need an answer that is "totally Objective-C specific"? Singletons aren't totally Obj-C specific either, and you're able to use those. Functions aren't Obj-C specific, integers aren't Obj-C specific, and yet you're able to use all of those in your Obj-C code.
The obvious replacements for a singleton work in any language.
A singleton is a badly-designed global.
So the simplest replacement is to just make it a regular global, without the silly "one instance only" restriction.
A more thorough solution is, instead of having a globally accessible object at all, to pass it as a parameter to the functions that need it.
And finally, you can go for a hybrid solution using a Dependency Injection framework.
The problem with singletons is that they can lead to tight coupling. Let's say you're building an airline booking system: your booking controller might use an id<FlightsClient>.
A common way to obtain it within the controller would be as follows:
_flightsClient = [FlightsClient sharedInstance];
Drawbacks:
It becomes difficult to test a class in isolation.
If you want to change the flight client for another implementation, it's necessary to search through the application and swap it out one by one.
If there's a case where the application should use a different implementation (e.g. OnlineFlightClient, OfflineFlightClient), things get tricky.
A good workaround is to apply the dependency injection design pattern.
Think of dependency injection as telling an architectural story. When the key actors in your application are pulled up into an assembly, the application's configuration is correctly modularized (removing duplication). Having created this script, it's now easy to reconfigure or swap one actor for another. In this way we need not understand all of a problem at once; it's easy to evolve our app's design as the requirements evolve.
Here's a dependency injection library: https://github.com/typhoon-framework/Typhoon
I have recently learned of the Repository and Unit of Work Design Patterns and thought that I would implement them in a new EF4 MVC3 project, since abstraction is generally good.
As I add them to the project, I am wondering if the juice is worth the proverbial squeeze, given the following:
It is EXTREMELY unlikely that the underlying data access mechanism will change from EF4.
This level of abstraction will require more overhead/confusion to the project and to other developers on the team.
The only real benefit I see to using the Repository pattern is for unit testing the application. Abstracting away the data store doesn't seem useful since I know the datastore won't change and, further, EF4 already provides a pretty good abstraction (I just call .AddObject() and it looks like I am modifying an in-memory collection, and I just call .SaveChanges(), which already provides the unit of work pattern).
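To illustrate what I mean (MyEntities and Products stand in for a generated EF4 context and entity set):

    using (var context = new MyEntities())
    {
        var product = new Product { Name = "Widget" };
        context.Products.AddObject(product); // feels like an in-memory collection
        context.SaveChanges();               // the unit of work commits here
    }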
Should I even bother implementing this abstraction? I feel like there must be some massive benefit that I am missing, but it just doesn't feel like I need to go down this route. I am willing to be convinced otherwise; can someone make a case? Thanks.
I recommend reading this answer and all linked questions. The repository is a very popular pattern, and it really makes your application look nice and clean. It makes you feel that your architecture is correct, but some assumptions about the repository pattern with EF are not correct. In my opinion (described in those answers):
It will make some more complex EF-related tasks much harder to achieve, or your repository and UoW implementations will need a public interface very similar to EF's.
It will not make your code better unit-testable, because all interactions with the repository must still be covered by integration tests. My experience, and not only mine, has proved that mocking EF code by replacing linq-to-entities with linq-to-objects does not test your code.
Yes, yes, yes : ) - first of all, the repository pattern helps you inject your dependencies for unit testing. Secondly, it gives a very clear view of exactly what data access methods are available to get something, rather than people coding miscellaneously against the EF layer directly. Do download the POCO templates for EF4, though, so your classes don't carry the EF properties around with them if you happen to use them as models, and/or so you don't need any EF dependency libraries referenced in your MVC app - assuming your repository work is in a separate project (which I recommend). If you are using all view models then it's not as much of a concern, but it's nice working with a "Customer" object without extra methods on it. It's cleaner in my opinion.
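For example, with constructor injection the controller never touches EF directly (ICustomerRepository and FakeCustomerRepository are invented for illustration):

    public interface ICustomerRepository
    {
        Customer GetById(int id);
    }

    public class CustomerController : Controller
    {
        private readonly ICustomerRepository _customers;

        // The dependency comes in through the constructor, so tests can pass a fake.
        public CustomerController(ICustomerRepository customers)
        {
            _customers = customers;
        }

        public ActionResult Details(int id)
        {
            return View(_customers.GetById(id));
        }
    }

    // In a unit test: no EF, no database.
    public class FakeCustomerRepository : ICustomerRepository
    {
        public Customer GetById(int id) { return new Customer { Id = id }; }
    }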
My company needs to rewrite a large monolithic program, and I want it written using a plugin-type architecture. Currently the best solution appears to be MEF, but as it is a fairly 'new' thing I am wary of betting the future of my company (and my reputation) on it.
Does anyone have a feeling for how mature a solution MEF is?
Thanks
Visual Studio's entire extension system is now built on MEF.
That is to say that Microsoft is Dog-fooding it (like they are doing with WPF).
Given that the framework developers themselves will be working with it, you can feel pretty confident that it is here to stay. However, as with any first release, you are almost guaranteed to have some growing pains when the next release comes around.
Personally, I would go for it. It is certainly better than the tightly-coupled-reflection-based alternative.
I don't think it is necessary to "bet on MEF". Your code should have very little dependencies on MEF.
You can use the technique of dependency injection to break up your monolithic application into components which have only a single responsibility, and which limit their knowledge of other components to abstractions. See this blog post by Nicholas Blumhardt for a nice overview of the type of relations that can exist between components.
Wiring the components together into an application can then be done with any dependency injection framework, or even manually. The component logic shouldn't need to be aware of the container - there might not even be a container.
In the case of MEF, you do need to add import/export attributes to your classes. However, you can still ignore those attributes and reuse those components without MEF, e.g. by using another DI framework like AutoFac.
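To give a feel for how thin the MEF-specific surface is, here is a rough sketch (IMessageSender and EmailSender are made-up names; the attributes and container types are MEF's own):

    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Hosting;

    public interface IMessageSender
    {
        void Send(string message);
    }

    [Export(typeof(IMessageSender))] // the only MEF-specific line in the component
    public class EmailSender : IMessageSender
    {
        public void Send(string message) { /* ... */ }
    }

    public class Program
    {
        [Import]
        public IMessageSender Sender { get; set; }

        public static void Main()
        {
            var program = new Program();
            var catalog = new AssemblyCatalog(typeof(Program).Assembly);
            using (var container = new CompositionContainer(catalog))
            {
                container.ComposeParts(program); // satisfies [Import]s from [Export]s
            }
            program.Sender.Send("hello");
        }
    }

Strip the two attributes and EmailSender is a plain class you can wire up with AutoFac or by hand.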
It's a relatively new technology, so I'm not sure if it's exactly mature. I'm sure it will change quite a bit over the next several years, perhaps merging with other frameworks to better support IoC. That said, MS has a pretty good history of preserving backwards compatibility, so now that MEF is actually part of the Framework, I would consider the public interfaces stable.
That said, MEF might not actually be the right solution for your project. It depends on your extensibility needs and how large is 'large'. If you want to support true extensibility, including the possibility for third-party plugins, it has an enormous impact on your design responsibilities. It's much harder to make changes to the infrastructure as you now need to maintain very stable public interfaces. If you're really only after the IoC features, you're probably better off with a true IoC framework, which more clearly limits your design responsibility to support of your internal dependencies. If you're betting the future of the company, this is the bigger question, in my mind.
I think the title speaks for itself guys - why should I write an interface and then implement a concrete class if there is only ever going to be 1 concrete implementation of that interface?
I think you shouldn't ;)
There's no need to shadow all your classes with corresponding interfaces.
Even if you're going to make more implementations later, you can always extract the interface when it becomes necessary.
This is a question of granularity. You shouldn't clutter your code with unnecessary interfaces, but they are useful at the boundaries between layers.
Someday you may try to test a class that depends on this interface. Then it's nice that you can mock it.
I'm constantly creating and removing interfaces. Some were not worth the effort and some are really needed. My intuition is mostly right but some refactorings are necessary.
The question is, if there is only going to ever be one concrete implementation, should there be an interface?
YAGNI - You Ain't Gonna Need It (from Wikipedia):
According to those who advocate the YAGNI approach, the temptation to write code that is not necessary at the moment, but might be in the future, has the following disadvantages:
* The time spent is taken from adding, testing or improving necessary functionality.
* The new features must be debugged, documented, and supported.
* Any new feature imposes constraints on what can be done in the future, so an unnecessary feature now may prevent implementing a necessary feature later.
* Until the feature is actually needed, it is difficult to fully define what it should do and to test it. If the new feature is not properly defined and tested, it may not work right, even if it eventually is needed.
* It leads to code bloat; the software becomes larger and more complicated.
* Unless there are specifications and some kind of revision control, the feature may not be known to programmers who could make use of it.
* Adding the new feature may suggest other new features. If these new features are implemented as well, this may result in a snowball effect towards creeping featurism.
Two somewhat conflicting answers to your question:
You do not need to extract an interface from every single concrete class you construct, and
Most Java programmers don't build as many interfaces as they should.
Most systems (even "throwaway code") evolve and change far past what their original design intended for them. Interfaces help them to grow flexibly by reducing coupling. In general, here are the warning signs that you ought to be coding to an interface:
Do you even suspect that another concrete class might need the same interface (like, if you suspect your data access objects might need XML representation down the road -- something that I've experienced)?
Do you suspect that your code might need to live on the other side of a Web Services layer?
Does your code form a service layer for some outside client?
If you can honestly answer "no" to all these questions, then an interface might be overkill. Might. But again, unforeseen consequences are the name of the game in programming.
You need to decide what the programming interface is, by specifying the public functions. If you don't do a good job of that, the class would be difficult to use.
Therefore, if you decide later you need to create a formal interface, you should have the design ready to go.
So, you do need to design an interface, but you don't need to write it as an interface and then implement it.
I use a test driven approach to creating my code. This will often lead me to create interfaces where I want to supply a mock or dummy implementation as part of my test fixture.
I would not normally create any code unless it has some relevance to my tests, and since you cannot easily test an interface, only an implementation, that leads me to create interfaces if I need them when supplying dependencies for a test case.
I will also sometimes create interfaces when refactoring, to remove duplication or improve code readability.
You can always refactor your code to introduce an interface if you find out you need one later.
The only exception to this would be if I were designing an API for release to a third party - where the cost of making API changes is high. In this case I might try to predict the type of changes I might need to do in the future and work out ways of creating my API to minimise future incompatible changes.
One thing which no one has mentioned yet is that sometimes an interface is necessary in order to avoid dependency issues. You can have the interface in a common project with few dependencies, and the implementation in a separate project with lots of dependencies.
"Only Ever going to have One implementation" == famous last words
It doesn't cost much to make an interface and then derive a concrete class from it. The process of doing it can make you rethink your design and often leads to a better end product. And once you've done it, if you ever find yourself eating those words - as frequently happens - you won't have to worry about it. You're already set. Whereas otherwise you have a pile of refactoring to do and it's gonna be a pain.
Edited to clarify: I'm working on the assumption that this class is going to be spread relatively far and wide. If it's a tiny utility class used by one or two other classes in a single package then yeah, don't worry about it. If it's a class that's going to be used in multiple packages by multiple other classes then my previous answer applies.
The question should be: "how can you ever be sure, that there is only going to ever be one concrete implementation?"
How can you be totally sure?
By the time you thought this through, you would already have created the interface and be on your way without assumptions that might turn out to be wrong.
With today's coding tools (like ReSharper), it really doesn't take much time at all to create and maintain interfaces alongside your classes, whereas discovering that now you need an extra implementation and having to replace all the concrete references can take a long time and is no fun at all - believe me.
A lot of this is taken from a Rainsberger talk on InfoQ: http://www.infoq.com/presentations/integration-tests-scam
There are 3 reasons to have a class:
It holds some Value
It helps Persist some entity
It performs some Service
The majority of services should have interfaces. An interface creates a boundary, hides the implementation, and you already have a second client: all of the tests that interact with that service.
Basically, if you would ever want to mock it out in a unit test, it should have an interface.
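For example, a rough sketch (IMailService, OrderProcessor and SpyMailService are invented for the example):

    public interface IMailService
    {
        void Send(string to, string body);
    }

    public class OrderProcessor
    {
        private readonly IMailService _mail;
        public OrderProcessor(IMailService mail) { _mail = mail; }

        public void Process(string customerEmail)
        {
            // ... business logic ...
            _mail.Send(customerEmail, "Your order has shipped.");
        }
    }

    // Test double: records the call instead of sending real mail.
    public class SpyMailService : IMailService
    {
        public int SendCount;
        public void Send(string to, string body) { SendCount++; }
    }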