My project uses Entity Framework Core as its ORM. Lazy loading is enabled by default:
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
optionsBuilder.UseLazyLoadingProxies();
}
I need to write code that processes a big collection of objects with navigation properties. To decrease the number of database requests, I would like to load the objects together with their navigation properties "eagerly".
What is the right way to do this? Can I just use something like this:
dbContext.MyObjects.Include(myObject => myObject.NavigationProperty).ToListAsync()
Yes, Include forces EF to load the data for the given navigation property as part of the query.
You might also consider using the .ToListAsync() overload and awaiting the call, so the calling thread is not blocked while a huge result set is loaded from the database.
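Putting the two together, a minimal sketch (the variable names mirror the question's snippet):
// requires using Microsoft.EntityFrameworkCore; for Include and ToListAsync
var items = await dbContext.MyObjects
    .Include(myObject => myObject.NavigationProperty)
    .ToListAsync();
// items and their navigation properties are now fully loaded, so iterating
// them does not trigger any additional lazy-loading queries.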
Doco with examples can be found here: https://learn.microsoft.com/en-us/ef/core/querying/related-data/eager
I would like to have one DbContext with lazy loading enabled for writes, and another DbContext with lazy loading disabled for reads. Both should work on the same model. I would like to inject the two DbContexts, constructed with their respective lazy-loading settings, into the service class and use each of them where appropriate.
Is this even possible?
I guess I am trying to avoid having to set lazyloading to false inside the service methods.
You can, but it's probably a bad idea. You won't be able to use an entity retrieved from one context with the other (not directly anyway). To write to an entity retrieved using the "read" context you'll have to read it again using the "write" context in order to modify it.
Instead, you can simply enable or disable lazy loading as needed before you make use of your context.
DbContext.Configuration.LazyLoadingEnabled = false; //or true
You could make it easier by simply defining a custom constructor that sets LazyLoadingEnabled.
public MyDbContext(bool lazyLoad)
    : base(nameOrConnectionString: "MyDbContext")
{
    this.Configuration.LazyLoadingEnabled = lazyLoad;
}
If you really really need to, you could subclass your DbContext and set LazyLoading in the constructor, but it just seems like a bad idea.
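If you do go the subclass route, a minimal sketch could look like this (ReadOnlyMyDbContext is an illustrative name that simply chains to the constructor shown above):
public class ReadOnlyMyDbContext : MyDbContext
{
    // reuses the MyDbContext(bool) constructor with lazy loading turned off
    public ReadOnlyMyDbContext() : base(false) { }
}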
I'm sorry if my question is a common one, but I ran into this problem while designing my ASP.NET MVC 4.0 application using Entity Framework 5.
If I choose eager loading, I simply use:
public Problem getProblemById(int id) {
using(DBEntity ctx = new DBEntity ())
{
return (Problem) ctx.Posts.Find(id);
}
}
But if I use eager loading I run into a problem: when I want to navigate through related data such as the Comments (of a Problem) or the User (of a Problem), I must manually use Include to load those properties. And sometimes, if I don't end up using those properties, I lose performance, and maybe I lose the strength of Entity Framework.
If I use lazy loading, there are two ways to use the DbContext object. The first way is to use the DbContext object locally:
public Problem getProblemById(int id) {
DBEntity ctx = new DBEntity ();
return (Problem) ctx.Posts.Find(id);
}
With this approach, I think I will get a memory leak, because ctx is never disposed.
The second way is to make the DbContext object static and use it globally:
static DBEntity ctx = new DBEntity ();
public Problem getProblemById(int id) {
return (Problem) ctx.Posts.Find(id);
}
I have read some blogs saying that if I use this approach, I must handle concurrent access myself (because multiple requests are sent to the server), OMG. For example this link:
Entity Framework DBContext Usage
So how should I design my app? Please help me figure it out.
Thanks :)
Don't use a static DBContext object. See c# working with Entity Framework in a multi threaded server
A simple rule for ASP.Net MVC: use a DBContext instance per user request.
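A minimal sketch of that rule (the controller, its action, and the Comments/User/Id property names are illustrative; DBEntity and Posts come from the question):
// using System.Data.Entity; // needed for the lambda-based Include overload
public class ProblemController : Controller
{
    private readonly DBEntity ctx = new DBEntity();

    public ActionResult Details(int id)
    {
        // eager-load only what this action actually needs
        var problem = ctx.Posts
            .Include(p => p.Comments)
            .Include(p => p.User)
            .FirstOrDefault(p => p.Id == id);
        return View(problem);
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
            ctx.Dispose(); // the context lives for exactly one request
        base.Dispose(disposing);
    }
}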
As for using lazy loading or not, I would say it depends, but personally I would deactivate lazy loading. IMO it's a broken feature because it has fundamental issues:
* it's just too hard to handle exceptions, because a SQL request can fail anywhere in your code (not just in the DAL, since any piece of code can touch a navigation property)
* poor performance if it's not used carefully
* it's too easy to write broken code that produces thousands of SQL requests
I've been creating a prototype for a modern MUD engine. A MUD is a simple form of simulation and provides a good way to test a concept I'm working on. This has led me to a couple of places in my code where things are a bit unclear, and the design is coming into question (probably because it's flawed). I'm using model first (I may need to change this) and I've designed a top-down architecture of game objects. I may be doing this completely wrong.
What I've done is create a MUDObject entity. This entity is effectively a base for all of my other logical constructs, such as characters, their items, race, etc. I've also created a set of three meta classes which are used for logical purposes as well: Attributes, Events, and Flags. They are fairly straightforward, and all inherit from MUDObject.
The MUDObject class is designed to provide default data behavior for all of the objects; this includes deletion of dead objects, the automatic clearing of floors, etc. It is also designed so this logic can be overridden virtually if needed, for example checking a room to see if an effect has ended and deleting the effect (removing the flag).
public partial class MUDObject
{
public virtual void Update()
{
if (this.LifeTime.Value.CompareTo(DateTime.Now) > 0)
{
using (var context = new ReduxDataContext())
{
context.MUDObjects.DeleteObject(this);
}
}
}
public virtual void Pause()
{
}
public virtual void Resume()
{
}
public virtual void Stop()
{
}
}
I've also got a class World; it is derived from MUDObject, contains the areas and rooms (which in turn contain the game objects), and handles the timer that runs the updates. (This will probably be moved; it was put here because, if it works, it would limit updates to only the objects in-world at the time.)
public partial class World
{
private Timer ticker;
public void Start()
{
this.ticker = new Timer(3000.0);
this.ticker.Elapsed += ticker_Elapsed;
this.ticker.Start();
}
private void ticker_Elapsed(object sender, ElapsedEventArgs e)
{
this.Update();
}
public override void Update()
{
this.CurrentTime += 3;
// update contents
base.Update();
}
public override void Pause()
{
this.ticker.Enabled = false;
// update contents
base.Pause();
}
public override void Resume()
{
this.ticker.Enabled = true;
// update contents
base.Resume();
}
public override void Stop()
{
this.ticker.Stop();
// update contents
base.Stop();
}
}
I'm curious about a couple of things.
1) Is there a way to recode the context so that it has separate ObjectSets for each type derived from MUDObject, i.e. context.MUDObjects.Flags or context.Flags? If not, how can I query a child type specifically?
2) Does the Update/Pause/Resume/Stop architecture I'm using work properly when placed into the EF entities directly, given that it's for data purposes only?
Will locking be an issue?
Does the partial class automatically commit changes when they are made?
Would I be better off using a flat repository and doing this in the game engine directly?
1) Is there a way to recode the context so that it has separate ObjectSets for each type derived from MUDObject?
Yes, there is. If you decide that you want to define a base class for all your entities, it is common to have an abstract base class that is not part of the Entity Framework model. The model then only contains the derived types, and the context contains DbSets of the derived types (if it is a DbContext), like
public DbSet<Flag> Flags { get; set; }
If appropriate you can implement inheritance between classes, but that would be to express polymorphism, not to implement common persistence-related behaviour.
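A code-first flavoured sketch of that layout (class names are borrowed from the question, the context name is illustrative; in a model-first design you would express the same shape in the designer instead):
// the common base class is a plain code artifact, kept out of the EF model below
public abstract class MUDObject
{
    public int Id { get; set; }
    public DateTime? LifeTime { get; set; }
}

public class Flag : MUDObject { }
public class World : MUDObject { }

public class MudContext : DbContext
{
    public DbSet<Flag> Flags { get; set; }    // context.Flags
    public DbSet<World> Worlds { get; set; }  // context.Worlds

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // keep the shared base class out of the mapped model
        modelBuilder.Ignore<MUDObject>();
    }
}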
2) Does the Update/Pause/Resume/Stop architecture I'm using work properly when placed into the EF entities directly?
No. Entities are not supposed to know anything about persistence. The context is responsible for creating them, tracking their changes and updating/deleting them. I think that also answers your question about automatically committing changes: no.
Elaboration:
I think here it's good to bring up the single responsibility principle. A general pattern would be to
* let a context populate objects from a store
* let the objects act according to their responsibilities (the simulation)
* let a context store their state whenever necessary
I think Pause/Resume/Stop could be responsibilities of MUD objects. Update is an altogether different kind of action and responsibility.
Now I have to speculate, but take your World class. You should be able to express its responsibility in a short phrase, maybe something like "harbour other objects" or "define boundaries". I don't think it should do the timing. I think the timing should be the responsibility of some core utility which signals that a time interval has elapsed. Other objects know how to respond to that (e.g. do some state change, or, the context or repository, save changes).
Well, this is only an example of how to think about it, probably far from correct.
One other thing is that I think saving changes should be done far less often than the state changes of the objects that carry out the simulation; otherwise it would probably slow down the process dramatically. Maybe it should be done at longer intervals or on a user action.
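To make that more concrete, here is a speculative sketch (all names are invented for illustration) of a core timing utility that only signals elapsed intervals and leaves the reaction to whoever subscribes, whether that is the world, individual rooms, or a persistence service that saves changes at longer intervals:
using System;
using System.Timers;

public class GameClock : IDisposable
{
    private readonly Timer ticker = new Timer(3000.0);

    // raised every interval; subscribers decide how to react
    public event EventHandler IntervalElapsed;

    public void Start()
    {
        ticker.Elapsed += OnElapsed;
        ticker.Start();
    }

    private void OnElapsed(object sender, ElapsedEventArgs e)
    {
        var handler = IntervalElapsed;
        if (handler != null)
            handler(this, EventArgs.Empty);
    }

    public void Dispose()
    {
        ticker.Dispose();
    }
}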
First thing to say: if you are using EF 4.1 (as it is tagged) you should really consider moving to version 5.0 (you will need to make a .NET 4.5 project for this).
Besides several performance improvements, you can benefit from other features as well. The code I will show you works for 5.0 (I don't know if it will work for the 4.1 version).
Now, let's go through your questions:
Is there a way to recode the context so that it has separate ObjectSets for each type derived from MUDObject? If not how can I query a child type specifically?
i.e. context.MUDObjects.Flags or context.Flags
Yes, you can, but calling it is a little different. You will not have Context.Worlds; you will only have the set of the base class exposed on the context. If you want to get the set of Worlds (which inherit from MUDObject), you will call:
var worlds = context.MUDObjects.OfType<World>();
Or you can do it in a more direct way by using generics:
var worlds = context.Set<World>();
If you define your inheritance the right way, you should have an abstract class called MUDObject and all the others should inherit from that class. EF can work perfectly well with this, you just need to set it up right.
Does the Update/Pause/Resume/Stop architecture I'm using work properly when placed into the EF entities directly, given that it's for data purposes only?
In this case I think you should consider using a design pattern called the Strategy pattern; do some research on it, it will fit your objects.
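A very small illustration of what that could mean here (the interface and class names are hypothetical, not part of your model):
public interface IUpdateStrategy
{
    void Update(MUDObject target);
}

public class ExpireDeadObjects : IUpdateStrategy
{
    public void Update(MUDObject target)
    {
        // decide whether the object's lifetime has passed and, if so,
        // hand it to a repository/context for deletion instead of
        // letting the entity delete itself
    }
}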
Will locking be an issue?
Depends on how you develop the system....
Does the partial class automatically commit changes when they are made?
I did not understand that question at first... Partial classes are just like regular classes; they are simply split across different files, but when compiled (or even at design time, because of vshost.exe) they are in fact just one class.
Would I be better off using a flat repository and doing this in the game engine directly?
Hard to answer; it all depends on the requirements of the game, the deployment strategy, and so on.
I'm looking for suggestions on how to approach using an ORM (in this case, EF5) in the design of modular Non-Monolithic applications, with a Core part and 3rd party Modules, where the Core has no direct Reference to the 3rd party Modules, and Modules only have a reference to Core/Common tables and classes.
For arguments sake, a close enough analogy would be DNN.
CodeFirst:
With CodeFirst, the approach I used was to build up the model of the Db via reflection: in the Core DbContext's initialization phase, I used reflection to find any class in any dll (eg Core or the various Modules) implementing IDbInitializer (a custom contract containing an Execute() method) that defines just that dll's structure. Each dll added to the DbModel what it knew about itself.
Any subsequent seeding was also handled in the same way (searching for a specific IDbSeeder contract and executing it).
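A rough sketch of what that reflection-based model building might look like (the exact shape of IDbInitializer is assumed here; the asker's real contract may differ):
public interface IDbInitializer
{
    // each dll describes its own part of the model
    void Execute(DbModelBuilder modelBuilder);
}

// inside the core DbContext, e.g. in OnModelCreating:
var initializers = AppDomain.CurrentDomain.GetAssemblies()
    .SelectMany(a => a.GetTypes())
    .Where(t => typeof(IDbInitializer).IsAssignableFrom(t) && !t.IsInterface && !t.IsAbstract)
    .Select(t => (IDbInitializer)Activator.CreateInstance(t));

foreach (var initializer in initializers)
    initializer.Execute(modelBuilder);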
Pros:
* the approach works for now.
* The same core DbContext can be used across all repositories, as long as each repo uses dbContext.GetSet(), rather than expecting it to be a property of the dbContext. No biggie.
Cons:
* it only works at startup (ie, adding new modules would require an AppPool refresh).
* CodeFirst is great for a POC, but in EF5 it's not mature enough for Enterprise work yet (and I can't wait for EF6 for StoredProcs and other features to be added).
* My DBA hates CodeFirst, at least for the Core, wanting to optimize that part with Stored Procs as much as possible... We're a team, so I have to try to find a way to please him, if I can find a way...
Database-first:
The DbModel phase appears to happen prior to the DbContext's constructor (reading from the embedded *.edmx resource file). DbInitialization is never invoked (as the model is deemed complete), so I can't add more tables than what the Core knows about.
If I can't add elements to the Model dynamically, as one can with CodeFirst, it means that
* either the Core DbContext's Model has to have knowledge of every table in the Db -- Core AND every 3rd party module. Making the application Monolithic and highly coupled, defeating the very thing I am trying to achieve.
* Or each 3rd party has to create their own DbContext, importing Core tables, leading to
* versioning issues (module not updating their *.edmx's when Core's *.edmx is updated, etc.)
* duplication everywhere, in different memory contexts = hard to track down concurrency issues.
At this point, it seems to me that the CodeFirst approach is the only way that Modular software can be achieved with EF. But hopefully someone else knows how to make DatabaseFirst shine -- is there any way of 'appending' DbSets to the model created from the embedded *.edmx file?
Or any other ideas?
I would consider using a sort of plugin architecture, since that's your overall design for the application itself.
You can accomplish the basics of this by doing something like the following (note that this example uses StructureMap - here is a link to the StructureMap Documentation):
Create an interface from which your DbContext objects can derive.
public interface IPluginContext {
IDictionary<String, DbSet> DataSets { get; }
}
In your Dependency Injection set-up (using StructureMap) - do something like the following:
Scan(s => {
s.AssembliesFromApplicationBaseDirectory();
s.AddAllTypesOf<IPluginContext>();
s.WithDefaultConventions();
});
For<IEnumerable<IPluginContext>>().Use(x =>
x.GetAllInstances<IPluginContext>()
);
For each of your plugins, either alter the {plugin}.Context.tt file - or add a partial class file which causes the DbContext being generated to derive from IPluginContext.
public partial class FooContext : IPluginContext { }
Alter the {plugin}.Context.tt file for each plugin to expose something like:
public IDictionary<String, DbSet> DataSets {
get {
// Here is where you would have the .tt file output a reference
// to each property, keyed on its property name as the Key -
// in the form of an IDictionary.
}
}
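For instance, the generated body might end up looking roughly like this (Foo and Bar stand in for whatever entity types the plugin actually defines):
public IDictionary<String, DbSet> DataSets {
    get {
        // DbContext.Set(Type) returns the non-generic DbSet for each entity type
        return new Dictionary<String, DbSet> {
            { "Foo", Set(typeof(Foo)) },
            { "Bar", Set(typeof(Bar)) }
        };
    }
}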
You can now do something like the following:
// This could be inside a service class, your main Data Context, or wherever
// else it becomes convenient to call.
public DbSet DataSet(String name) {
    var plugins = ObjectFactory.GetInstance<IEnumerable<IPluginContext>>();
    // find the plugin context that exposes a set under the requested name,
    // then return that set (or null if no plugin knows about it)
    var owner = plugins.FirstOrDefault(p => p.DataSets.ContainsKey(name));
    return owner == null ? null : owner.DataSets[name];
}
Forgive me if the syntax isn't perfect - I'm doing this within the post, not within the compiler.
The end result gives you the flexibility to do something like:
// Inside an MVC controller...
public JsonResult GetPluginByTypeName(String typeName) {
var dataSet = container.DataSet(typeName);
if (dataSet != null) {
// materialize the set (AllowGet is required when returning JSON from a GET)
return Json(dataSet.Cast<object>().ToList(), JsonRequestBehavior.AllowGet);
} else {
return Json("Unable to locate that object type.");
}
}
Clearly, in the long-run - you would want the control to be inverted, where the plugin is the one actually tying into the architecture, rather than the server expecting a type. You can accomplish the same kind of thing using this sort of lazy-loading, however - where the main application exposes an endpoint that all of the Plugins tie to.
That would be something like:
public interface IPlugin : IDisposable {
void EnsureDatabase();
void Initialize();
}
You now can expose this interface to any application developers who are going to create plugins for your architecture (DNN style) - and your StructureMap configuration works something like:
Scan(s => {
s.AssembliesFromApplicationBaseDirectory(); // Upload all plugin DLLs here
// NOTE: Remember that this gives people access to your system!!!
// Given what you are developing, though, I am assuming you
// already get that.
s.AddAllTypesOf<IPlugin>();
s.WithDefaultConventions();
});
For<IEnumerable<IPlugin>>().Use(x => x.GetAllInstances<IPlugin>());
Now, when you initialize your application, you can do something like:
// Global.asax
public static IEnumerable<IPlugin> plugins =
ObjectFactory.GetInstance<IEnumerable<IPlugin>>();
public void Application_Start() {
foreach(IPlugin plugin in plugins) {
plugin.EnsureDatabase();
plugin.Initialize();
}
}
Each of your IPlugin objects can now contain its own database context, manage the process of installing (if necessary) its own database instance / tables, and dispose of itself gracefully.
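For illustration, a single plugin might look something like this (ForumPlugin is a placeholder name; FooContext is the plugin's generated context from earlier):
public class ForumPlugin : IPlugin
{
    private FooContext context;

    public void EnsureDatabase()
    {
        context = new FooContext();
        // create this plugin's own tables if they don't exist yet
        context.Database.CreateIfNotExists();
    }

    public void Initialize()
    {
        // seed reference data, register routes, wire up services, etc.
    }

    public void Dispose()
    {
        if (context != null)
            context.Dispose();
    }
}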
Clearly this isn't a complete solution - but I hope it starts you off in a useful direction. :) If I can help clarify anything herein, please let me know.
While it's a CodeFirst approach, and not the cleanest solution, what about forcing initialization to run even after start up, as in the answer posted here: Forcing code-first to always initialize a non-existent database? (I know precisely nothing about CodeFirst, but seeing as this is a month old with no posts it's worth a shot)
So I'm using Entity Framework Code First (so no .edmx)
I have a base entity class with a bool IsEnabled to do soft deletes.
I am using the repository pattern, so all queries against the repository can be filtered by IsEnabled.
However, any time I use the repository to get a MyType that is enabled, lazy loading MyType.Items may mean that some of the Items are not enabled.
Is there a way, perhaps with the EF fluent API, to describe how to do filtering on tables?
Update:
If I have a DbSet
public class UnitOfWork : DbContext
{
private IDbSet<MyObj> _MyObj;
public IDbSet<MyObj> MyObjs
{
get { return _MyObj ?? (_MyObj = base.Set<MyObj>()); }
}
}
Is there any way I can tell the DbContext to filter the DbSet?
No, there is no way to define a filter for lazy loading (nor for eager loading using Include). If you want your navigation collections to be populated only with items where IsEnabled is true, you can only shape your queries accordingly, for example with explicit loading:
context.Entry(parent).Collection(p => p.Items).Query()
.Where(i => i.IsEnabled)
.Load();
This will populate the Items collection of parent only with the enabled items.
Edit
I feel a bit like the messenger of bad news about a lost battle who gets his head knocked off. Maybe it's too hard to believe that Entity Framework sometimes does not have the capabilities you want. To improve my chances of convincing you, I'll add a quote from an authority, Julie Lerman:
Neither eager loading nor deferred/lazy loading in the Entity Framework allows you to filter or sort the related data being returned.
It looks like it is still possible. If you are interested you can take a look at the Wiktor Zychla blog post, where he gives a solution to the soft delete problem.
This http://blogs.claritycon.com/blog/2012/01/25/a-smarter-infrastructure-automatically-filtering-an-ef-4-1-dbset/ basically defines how I can achieve what I was looking for.
Basically you create a FilteredDbSet and make all of your DbContext's IDbSet properties return it.
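In outline (assuming a FilteredDbSet<T> wrapper along the lines of the linked post, which applies the predicate to every query and to added entities), the context then ends up looking something like this:
public class UnitOfWork : DbContext
{
    private IDbSet<MyObj> _myObjs;
    public IDbSet<MyObj> MyObjs
    {
        get
        {
            // every query through this set is automatically restricted to enabled rows
            return _myObjs ?? (_myObjs = new FilteredDbSet<MyObj>(this, o => o.IsEnabled));
        }
    }
}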