I would like to have one DbContext with lazy loading enabled for writes, and another DbContext with lazy loading disabled for reads. Both should work on the same model. I would like to construct the two contexts with their lazy-loading settings, inject them into the service class, and use each of them where appropriate.
Is this even possible?
I guess I am trying to avoid having to set lazy loading to false inside the service methods.
You can, but it's probably a bad idea. You won't be able to use an entity retrieved from one context with the other (not directly anyway). To write to an entity retrieved using the "read" context you'll have to read it again using the "write" context in order to modify it.
Instead, you can simply enable or disable lazy loading as needed before you make use of your context.
context.Configuration.LazyLoadingEnabled = false; // or true
You could make it easier by defining a custom constructor that sets the LazyLoadingEnabled property.
public MyDbContext(bool lazyLoad)
    : base(nameOrConnectionString: "MyDbContext")
{
    this.Configuration.LazyLoadingEnabled = lazyLoad;
}
If you really, really need to, you could subclass your DbContext and set LazyLoadingEnabled in the constructor, but it still seems like a bad idea.
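If you do go that route, a minimal sketch might look like the following (the ReadMyDbContext, WriteMyDbContext, and MyService names are purely illustrative, and the constructor is the one shown above):

// Hypothetical subclasses: same model, different lazy-loading behaviour.
public class ReadMyDbContext : MyDbContext
{
    public ReadMyDbContext() : base(lazyLoad: false) { }
}

public class WriteMyDbContext : MyDbContext
{
    public WriteMyDbContext() : base(lazyLoad: true) { }
}

// The service takes both and uses whichever is appropriate, keeping in mind
// that entities loaded by one context cannot simply be saved through the other.
public class MyService
{
    private readonly ReadMyDbContext _readContext;
    private readonly WriteMyDbContext _writeContext;

    public MyService(ReadMyDbContext readContext, WriteMyDbContext writeContext)
    {
        _readContext = readContext;
        _writeContext = writeContext;
    }
}

Registering the two subclasses separately in your container then gives you constructor injection of both without flipping the flag inside the service methods.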
Related
I've decided to use fluent mapping in Entity Framework. My intention was to map everything by code, without any attributes or automatic mapping conventions. The best way I've found is the EntityTypeConfiguration class, which I implement for each entity in my project.
Later I added a property to one of my entities. This property doesn't need to be persisted. I expected that until I added a mapping for it, it would be ignored by the database and the persistence layer. Unfortunately it doesn't work that way, and the property gets mapped. The only way around it is the Ignore method or the NotMapped attribute, but I don't want to do that explicitly.
Is there any way to stop Entity Framework from auto-mapping? I've tried removing all conventions from the DbModelBuilder, but it doesn't help.
So far as I am aware, there is no other way around it. You need to use either Ignore() or [NotMapped]. I tend to prefer the former as it does not clutter up the model.
Actually I have tried a lot of ways:
- custom convention to remove mapped properties
- removing all conventions
But the easiest (and cleanest) way was to use reflection inside the mapping class and to disable all property mappings that weren't configured.
The code for that (and a usage example) is in my public gist:
https://gist.github.com/hidegh/36d92380c720804dee043fde8a863ecb
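For reference, here is a rough sketch of that reflection idea; this is not the gist's code, and the IgnoreAllBut helper name plus the explicit property-name list are my own assumptions:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;
using System.Data.Entity.ModelConfiguration;
using System.Data.Entity.ModelConfiguration.Configuration;

public static class MappingExtensions
{
    // Ignore every public property of TEntity that is not in the explicitly mapped list.
    public static void IgnoreAllBut<TEntity>(this EntityTypeConfiguration<TEntity> config,
                                             params string[] mappedProperties)
        where TEntity : class
    {
        var mapped = new HashSet<string>(mappedProperties);

        foreach (var prop in typeof(TEntity).GetProperties().Where(p => !mapped.Contains(p.Name)))
        {
            // Build the expression x => x.SomeProperty ...
            var param = Expression.Parameter(typeof(TEntity), "x");
            var lambda = Expression.Lambda(Expression.Property(param, prop), param);

            // ... and call the generic Ignore<TProperty>(...) overload via reflection.
            typeof(StructuralTypeConfiguration<TEntity>)
                .GetMethods()
                .First(m => m.Name == "Ignore")
                .MakeGenericMethod(prop.PropertyType)
                .Invoke(config, new object[] { lambda });
        }
    }
}

A configuration class would then call something like this.IgnoreAllBut("Id", "Name") after its explicit mappings; the linked gist takes the same idea further by detecting the configured properties itself.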
I'm fighting a few different design concepts within the context of MVVM that mainly stem from the question of when to initialize a ViewModel. To be more specific in terms of "initializing" I'm referring to loading values such as selection values, security context, and other things that could in some cases cause a few second delay.
Possible strategies:
Pass arguments to ViewModel constructor and do loading in the constructor.
Only support a parameterless constructor on the ViewModel and instead support initialize methods that take parameters and do the loading.
A combination of options 1 and 2, where arguments are passed to the ViewModel constructor but loading is deferred until an Initialize method is called.
A variation on option 3 where instead of parameters being passed to the ViewModel constructor they are set directly on properties.
Effect on ViewModel property getters and setters
In cases where initialization is deferred, there is a need to know whether the ViewModel is in a state that can be considered available; the IsBusy property generally serves this purpose, just as it does for other async and time-consuming operations. What this also means, though, is that since most properties on the ViewModel expose values retrieved from a model object, we constantly have to write the following kind of plumbing to make sure the model is available.
public string Name
{
    get
    {
        if (_customerModel == null) // Check model availability
        {
            return string.Empty;
        }
        return _customerModel.Name;
    }
}
Although the check is simple, it adds to the plumbing of INPC and the other kinds of necessities that make the ViewModel somewhat cumbersome to write and maintain. In some cases it becomes even more problematic, since there may not always be a reasonable default to return from the property getter. That might be the case with a boolean property such as IsCommercialAccount, where it doesn't make sense to return true or false when no model is available, which raises a whole slew of other design questions such as nullability. With option 1 from above, where everything is passed into the constructor and loaded there, we only need to concern ourselves with a null ViewModel from the View; when the ViewModel is not null, it is guaranteed to be initialized.
Support for deferred initialization
With option 4 it is also possible to rely on ISupportInitialize, which could be implemented in the base class of the ViewModel to provide a consistent way of signaling whether the ViewModel is initialized and to kick off the initialization via the standard BeginInit method. This could also be used with options 2 and 3, but it makes less sense if all initialization parameters are set in one atomic transaction. At least this way, the condition shown above could turn into something like the guard sketched below.
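For example, with a hypothetical IsInitialized flag maintained by the ViewModel base class between BeginInit and EndInit:

public string Name
{
    get
    {
        // One consistent availability check instead of per-model null checks.
        return IsInitialized ? _customerModel.Name : string.Empty;
    }
}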
How the design affects IoC
In terms of IoC I understand that options 1 and 3 can be done using constructor injection, which is generally preferred, and that options 2 and 4 can be accomplished using method and property injection respectively. My concern, however, is not with IoC or how to pass in these parameters, but rather with the overall design and how it affects the ViewModel implementation and its public interface, although I'd like to be a good citizen and make IoC a bit easier if necessary in the future.
Testability
All three options seem to support the concept of testability equally, which doesn't help much in deciding between them, although it's arguable that option 4 could require a broader set of tests to ensure proper behavior of properties whose behavior depends on the initialization state.
Command-ability
Options 2, 3, and 4 all have the side effect of requiring code-behind in the View to call initialization methods on the ViewModel; however, these could be exposed as commands if necessary. In most cases one would probably be calling these methods directly after construction anyway, like below.
var viewModel = new MyViewModel();
this.DataContext = viewModel;
// Wrap in an async call if necessary
Task.Factory.StartNew(() => viewModel.InitializeWithAccountNumber(accountNumber));
Some other thoughts
I've tried variations on these strategies as I've been working with the MVVM design pattern but haven't concluded on a best practice yet. I would love to hear what the community thinks and attempt to find a reasonable consensus on the best way forward for initializing ViewModels or otherwise dealing with their properties when they are in an unavailable state.
An ideal case may be to use the State pattern, where the ViewModel itself is swapped out with different ViewModel objects that represent the different states. We could have a general BusyViewModel implementation that represents the busy state, which removes one of the needs for an IsBusy property on the ViewModel; then, when the next ViewModel is ready, it is swapped in on the View, allowing that ViewModel to follow the strategy outlined in option 1, where it is fully initialized during construction. This leaves open some questions about who is responsible for managing the state transitions. It could, for example, be the responsibility of BusyViewModel to abstract something similar to a BackgroundWorker or a Task that does the initialization itself and presents the internal ViewModel when ready. Swapping the DataContext on the View, on the other hand, may require either handling an event in the View or giving BusyViewModel limited access to the DataContext property of the View so it can be set in the traditional State pattern sense. If people are doing something similar along these lines I would definitely like to know, because my Google searching hasn't turned up much yet.
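To sketch what I mean (all names here are hypothetical; ShellViewModel is whatever owns the region being swapped, and MyViewModel is assumed to do all of its loading in its constructor as in option 1):

using System.ComponentModel;
using System.Threading.Tasks;

// Marker state object the View can show a busy template for.
public class BusyViewModel { }

public class ShellViewModel : INotifyPropertyChanged
{
    private object _currentViewModel;

    public event PropertyChangedEventHandler PropertyChanged;

    // The View binds a ContentControl (or a region's DataContext) to this property.
    public object CurrentViewModel
    {
        get { return _currentViewModel; }
        private set
        {
            _currentViewModel = value;
            var handler = PropertyChanged;
            if (handler != null)
            {
                handler(this, new PropertyChangedEventArgs("CurrentViewModel"));
            }
        }
    }

    public void Load(string accountNumber)
    {
        CurrentViewModel = new BusyViewModel(); // busy state while loading

        // The real ViewModel is fully initialized during construction (option 1),
        // built on a background thread and swapped in when it is ready.
        Task.Factory.StartNew(() => new MyViewModel(accountNumber))
            .ContinueWith(t => CurrentViewModel = t.Result,
                          TaskScheduler.FromCurrentSynchronizationContext());
    }
}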
My general approach to object-oriented design, whether I am creating a ViewModel or any other type of class, is this: everything that can be passed to the constructor should be passed to the constructor. This reduces the need for some sort of IsInitialized state and makes your objects less complex. Sometimes certain frameworks make it hard to follow this approach, IoC containers for example (although they should allow constructor injection), but I still adhere to it as a general rule.
I've just implemented a repository based on EFv4 POCO entity templates.
When I do this
public Client Load(Guid firmId, int prettyId)
{
    var client = (from c in _ctx.Clients
                  where c.firm_id == firmId && c.PrettyId == prettyId
                  select c).FirstOrDefault();
    return client;
}
the client returned is of type
{System.Data.Entity.DynamicProxies.Client_8E92CA62619EB03F03DF1A1FC60C5B21F87ECC5D85B65759DB3A3949B8A606D3}
What is happening here? I thought I would get rid of any reference to types from System.Data.Entity namespace. The returned instance should be of type Client, which is a simple POCO class.
I can confirm that the solution is to set
context.ContextOptions.ProxyCreationEnabled = false;
which disables creation of dynamic proxy typed objects and leaves us with simple POCOs, which is what we were after with EF POCO templates in the first place.
But you lose lazy loading of navigation properties and change tracking on entities. For the first, you either have to use context.LoadProperty() or the Include() method on your ObjectQuery object. For the second, I do not know the solution yet (actually it doesn't really make sense to have change tracking on POCOs).
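For example, a version of the Load method from the question with proxies off and the navigation data eager-loaded instead (the "Orders" navigation property is just a placeholder, and you would normally set the option once, e.g. in the context's constructor):

public Client Load(Guid firmId, int prettyId)
{
    // Disable dynamic proxies for this ObjectContext (EFv4 POCO templates).
    _ctx.ContextOptions.ProxyCreationEnabled = false;

    // Without proxies there is no lazy loading, so eager-load what you need.
    var client = _ctx.Clients
                     .Include("Orders") // placeholder navigation property
                     .Where(c => c.firm_id == firmId && c.PrettyId == prettyId)
                     .FirstOrDefault();
    return client;
}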
Also here is a similar question I would like to point out
What are the downsides to turning off ProxyCreationEnabled for CTP5 of EF code first
I agree that Mare's answer is correct. However, I would add a note of caution.
If you run a query while ProxyCreationEnabled is true (the default), EF will return DynamicProxies. If you subsequently run a query with the setting set to false, EF will still return the cached DynamicProxies objects, regardless of the ProxyCreationEnabled setting.
This can be configured globally for the EF context in the *Model.Context.tt file under *Model.edmx:
if (!loader.IsLazyLoadingEnabled(container))
...
this.Configuration.LazyLoadingEnabled = false;
this.Configuration.ProxyCreationEnabled = false;
These will be added to the *Model.context.cs generated file, and will persist between updates from the Database.
I prefer this setting because I do not want child objects that reference back to the parent being loaded from the database automatically.
Alternatively, it can be handled at the JSON serialization level:
JSON.NET Error Self referencing loop detected for type
A fellow developer and I are conversing (to put it lightly) over Lazy-Loading of Properties of an object.
He says to use a static IoC lookup call to resolve and lazy-load an object's child objects.
I say that violates SRP, and to use the owning Service to resolve that object.
So, how would you handle Lazy-Loading following IoC and SRP?
You cannot unit test that lazy-loaded property. He rebuts that one by saying, "you already have unit tests for the UserStatsService - there's your code coverage." A valid point, but the property itself still remains untested for "complete" coverage.
Setup / code patterns:
Project is using strict Dependency Injection rules (injected in the ctors of all services, repositories, etc).
Project is using IoC by way of Castle (but could be anything, like Unity).
An example is below.
public class User
{
    public Guid UserId { get; set; }

    private UserStats _userStats;

    // lazy-loading of an object on an object
    public UserStats UserStats
    {
        get
        {
            if (_userStats == null)
            {
                // ComponentsLookup is just a wrapper around IoC
                // (Castle/Unity/etc.)
                _userStats = ComponentsLookup
                    .Fetch<UserStatsService>()
                    .GetByUserId(this.UserId);
            }
            return _userStats;
        }
    }
}
The above shows an example of lazy-loading an object. I say not to use this, and to access UserStatsService from the UI layer wherever you need that object.
EDIT: One answer below reminded me of the NHibernate trick for lazy loading, which is to mark the property virtual, allowing NHibernate to generate a proxy that overrides it and performs the lazy loading itself. Slick, yes, but we are not using NHibernate.
No one really tackles the matter of Lazy-Loading. Some good articles and SO questions get close:
Using Dependency Injection frameworks for classes with many dependencies
http://blog.vuscode.com/malovicn/archive/2009/10/16/inversion-of-control-single-responsibility-principle-and-nikola-s-laws-of-dependency-injection.aspx
I do see a benefit of lazy loading. Don't get me wrong, all I did was lazy-load my complex types and their sub-types until I switched to the D.I. ways of the ninja. The benefit is in the UI layer, where a user's stats are displayed, say, in a list with 100 rows. But with DI, you now have to go through a few extra lines of code to get those user stats (so as not to violate SRP or the Law of Demeter), and that long path of lookups gets walked 100+ times.
Yes, yes, adding caching and ensuring the UserStatsService is coded as a singleton greatly lowers the performance cost.
But I am wondering if anyone else out there has a [stubborn] developer that just won't bend to the IoC and D.I. rules completely, and has valid performance/coding points to justify the work-arounds.
Entities themselves should not have the responsibility of lazy loading. That is an infrastructural concern whose solution will lie elsewhere.
Let's say an entity is used in two separate contexts. In the first, its children are used heavily and are eagerly-loaded. In the second, they are used rarely and are lazy-loaded. Is that also the entity's concern? What would the configuration look like?
NHibernate answers these questions by proxying the entity type. A property of type IList<Entity> is set by the infrastructure to an implementation which knows about lazy loading. The entity remains blissfully unaware. Parent references (like in your question) are also handled, requiring only a simple property.
Now that the concern is outside the entity, the infrastructure (ORM) is responsible for determining context and configuration (like eager/lazy loading).
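Outside of an ORM, one way to keep the same shape without the entity knowing about the container is to have the owning service hand the entity a Lazy<UserStats> when it builds it. This is only a sketch under that assumption, reusing the UserStats/UserStatsService types from the question:

using System;

public class User
{
    private readonly Lazy<UserStats> _userStats;

    public User(Guid userId, Lazy<UserStats> userStats)
    {
        UserId = userId;
        _userStats = userStats;
    }

    public Guid UserId { get; private set; }

    // Still lazy, but the entity has no idea how the value is produced.
    public UserStats UserStats
    {
        get { return _userStats.Value; }
    }
}

// The service/repository that materializes the entity supplies the loader.
public class UserService
{
    private readonly UserStatsService _statsService;

    public UserService(UserStatsService statsService)
    {
        _statsService = statsService;
    }

    public User GetUser(Guid userId)
    {
        return new User(userId, new Lazy<UserStats>(() => _statsService.GetByUserId(userId)));
    }
}

The stats are still only fetched when something reads User.UserStats, but the lookup path is injected rather than resolved through a static ComponentsLookup.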
I'm a total newbie at Entity Framework and ASP.Net MVC, having learned mostly from tutorials, without having a deep understanding of either. (I do have experience on .Net 2.0, ADO.Net and WebForms)
My current doubt comes from the way I'm instancing my Entities objects.
Basically I'm doing this in my controllers:
public class PostsController : Controller {
private NorthWindEntities db = new NorthWindEntities();
public ActionResult Index() {
// Use the db object here, never explicitly Close/Dispose it
}
}
I'm doing it like this because I found it in some MSDN blog that seemed authoritative enough to me that I assumed this was a correct way.
However, I feel pretty un-easy about this. Although it saves me a lot of code, I'm used to doing:
using (NorthWindEntities db = new NorthWindEntities())
{
}
In every single method that needs a connection, and if that method calls others that'll need it, it'll pass db as a parameter to them. This is how I did everything with my connection objects before Linq-to-SQL existed.
The other thing that makes me uneasy is that NorthWindEntities implements IDisposable, which by convention means I should be calling its Dispose() method, and I'm not.
What do you think about this?
Is it correct to instance the Entities object as I'm doing? Should it take care of its connections by opening and closing them for each query?
Or should I be disposing it explicitly with a using() clause?
Thanks!
Controller itself implements IDisposable. So you can override Dispose and dispose of anything (like an object context) that you initialize when the controller is instantiated.
The controller only lives as long as a single request. So having a using inside an action and having one object context for the whole controller is exactly the same number of contexts: 1.
The big difference between these two methods is that the action will have completed before the view has rendered. So if you create your ObjectContext in a using statement inside the action, the ObjectContext will have been disposed before the view has rendered. So you better have read anything from the context that you need before the action completes. If the model you pass to the view is some lazy list like an IQueryable, you will have disposed the context before the view is rendered, causing an exception when the view tries to enumerate the IQueryable.
By contrast, if you initialize the ObjectContext when the Controller is initialized (or write lazy initialization code causing it to be initialized when the action is run) and dispose of the ObjectContext in the Controller.Dispose, then the context will still be around when the view is rendered. In this case, it is safe to pass an IQueryable to the view. The Controller will be disposed shortly after the view is rendered.
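A minimal sketch of that approach, using the context from the question (the Customers set is just a placeholder for whatever your view actually needs):

public class PostsController : Controller
{
    private readonly NorthWindEntities db = new NorthWindEntities();

    public ActionResult Index()
    {
        // Safe to hand the view a lazy IQueryable here: the context stays alive
        // while the view renders and is only disposed along with the controller.
        return View(db.Customers);
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            db.Dispose();
        }
        base.Dispose(disposing);
    }
}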
Finally, I'd be remiss if I didn't point out that it's probably a bad idea to have your Controller be aware of the Entity Framework at all. Look into using a separate assembly for your model and the repository pattern to have the controller talk to the model. A Google search will turn up quite a bit on this.
You are making a good point here. How long should the ObjectContext live? All the patterns-and-practices books (like Dino Esposito's Microsoft .NET: Architecting Applications for the Enterprise) tell you that a DataContext must not live long, nor should it be cached.
I was just wondering: why not have, in your case, a ControllerBase class (I'm not aware of the MVC implementation details, so bear with me) where the ObjectContext is initialized once for all controllers? Especially think about the Identity Map pattern, which Entity Framework already implements. Even if you need to call another controller besides your PostsController, it would still work with the same context and improve performance as well.