EF Core one-to-many infinite includes - entity-framework

I have two tables, Appointment and TaskAllocation, with a one-to-many relationship. Now when I get the Appointment:
public IEnumerable<Appointment> GetAppointments(int employeeId, DateTime date)
{
    return _context.Appointment.Where(a => a.EmployeeId == employeeId &&
                                           a.AppointmentDate == date)
                               .Include(a => a.Tasks).ToList();
}
The result is one appointment containing its many tasks, each task referencing that appointment again with its many tasks, and so on.

In your ConfigureServices, you need to add JSON options to handle reference loops:
.AddJsonOptions(options =>
{
    options.SerializerSettings.ReferenceLoopHandling = ReferenceLoopHandling.Serialize;
    options.SerializerSettings.PreserveReferencesHandling = PreserveReferencesHandling.Objects;
});
or you may choose to directly ignore reference loops by
options.SerializerSettings.ReferenceLoopHandling = ReferenceLoopHandling.Ignore;
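For context, a fuller sketch of where this goes, assuming ASP.NET Core 2.x where AddJsonOptions still configures Newtonsoft.Json (on 3.0+ you would use AddNewtonsoftJson from the Microsoft.AspNetCore.Mvc.NewtonsoftJson package instead):

// requires using Newtonsoft.Json; for ReferenceLoopHandling
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc()
        .AddJsonOptions(options =>
        {
            // stop the serializer from endlessly walking Appointment -> Tasks -> Appointment
            options.SerializerSettings.ReferenceLoopHandling = ReferenceLoopHandling.Ignore;
        });
}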

Related

What is the difference between creating a new object inside select LINQ clause and inside a method

I have an entity class which is mapped to an SQL table:
public class EntityItem {
    public virtual ICollection<EntityItem2> SomeItems { get; set; }
}
And I have the following two snippets:
var items = _repository.Table.Where(x => x.Id == id)
    .Select(x => new ItemModel {
        Items = x.SomeItems.Select(y => new SomeItem { /* mapping is here... */ }).ToList()
    });
And
var items = _repository.Table.Where(x => x.Id == id).Select(x => someModelMapper.BuildModel(x));

//inside a mapper
public ItemModel BuildModel(EntityType entity)
{
    var model = new ItemModel();
    model.Items = entity.SomeItems.Select(x => anotherMapper.BuildModel(x));
    return model;
}
As a result, I am getting different SQL queries in both cases. Moreover, the second snippet runs much slower than the first one. As I can see in SQL Profiler, the second snippet generates many SQL queries.
So my questions:
Why is that happening?
How can I create new objects as in the second snippet but avoid the large number of SQL queries?
The likely reason you are seeing a difference in performance is due to EF Core materializing the query prematurely. When a Linq statement is compiled, EF attempts to translate it into SQL. If you call a function within the expression, EF6 would have raised an exception to the effect that the method cannot be converted to SQL. EF Core tries to be clever: when it encounters a method it cannot convert, it executes the query up to the point it could get to, and then continues to execute the rest as Linq2Object, where your method can run. IMO this is a pretty stupid feature and represents a huge performance landmine, and while it's fine to offer it as a possible option, it should be disabled by default.
You're probably seeing extra queries due to lazy loading after the main query runs, to populate the view models in the mapping method.
For instance if I execute:
var results = context.Parents.Select(x => new ParentViewModel
{
    ParentId = x.ParentId,
    Name = x.Name,
    OldestChildName = x.Children.OrderByDescending(c => c.BirthDate).Select(c => c.Name).FirstOrDefault() ?? "No Child"
}).Single(x => x.ParentId == parentId);
That would execute as one statement. Calling a method to populate the view model:
var results = context.Parents
.Select(x => buildParentViewModel(x))
.Single(x => x.ParentId == parentId);
would execute something like:
var results = context.Parents
    .ToList()
    .Select(x => new ParentViewModel
    {
        ParentId = x.ParentId,
        Name = x.Name,
        OldestChildName = x.Children.OrderByDescending(c => c.BirthDate).Select(c => c.Name).FirstOrDefault() ?? "No Child"
    }).Single(x => x.ParentId == parentId);
at worst or:
var results = context.Parents
    .Where(x => x.ParentId == parentId)
    .ToList()
    .Select(x => new ParentViewModel
    {
        ParentId = x.ParentId,
        Name = x.Name,
        OldestChildName = x.Children.OrderByDescending(c => c.BirthDate).Select(c => c.Name).FirstOrDefault() ?? "No Child"
    }).Single();
... at best. These are due to the extra .ToList() call prior to the Select, which is roughly what the premature execution will do automatically. The issue with these queries compared to the first one shows up when it comes to loading the child's name. In the first query, the SQL generated pulls the parent and related child's details in one query. In the alternative cases the query executes to pull the parent's details, but getting the child details constitutes a lazy-load call to get more details, since that part is executed as Linq2Object.
The solution would be to use AutoMapper and its built-in ProjectTo method to populate your view model. This places the mapping code into the query automatically so that it works like the first scenario, without you needing to write out all of the mapping code.
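A minimal sketch of that approach, assuming AutoMapper with its QueryableExtensions package and reusing the Parent/ParentViewModel names from the example above (the profile setup shown is just one way to configure it):

using AutoMapper;
using AutoMapper.QueryableExtensions;

var config = new MapperConfiguration(cfg =>
    cfg.CreateMap<Parent, ParentViewModel>()
       .ForMember(d => d.OldestChildName, o => o.MapFrom(s =>
           s.Children.OrderByDescending(c => c.BirthDate)
                     .Select(c => c.Name)
                     .FirstOrDefault() ?? "No Child")));

var result = context.Parents
    .Where(x => x.ParentId == parentId)
    .ProjectTo<ParentViewModel>(config)   // the mapping is folded into the SQL projection
    .Single();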

Issues with Automatically setting created and modified date on each record in EF Core

Using ASP.NET Core 2.2 with EF Core, I have followed various guides in trying to implement the automatic creation of date/time values when creating either a new record or editing/updating an existing one.
The current result is that when I initially create a new record, the CreatedDate & UpdatedDate columns are populated with the current date/time.
However, the first time I edit this same record, the UpdatedDate column is given a new date/time value (as expected) BUT for some reason, EF Core is wiping out the value of the original CreatedDate, which results in SQL assigning a default value.
Required result I need as follows:
Step 1: New row created, both the CreatedDate & UpdatedDate columns are given a date/time value (this already works)
Step 2: When editing and saving an existing row, I want EF Core to update only the UpdatedDate column with the new date/time, BUT leave the CreatedDate column unmodified with the original creation date.
I'm using EF Core code first, and do not want to go down the fluent API route.
One of the guides I was partially following is https://www.entityframeworktutorial.net/faq/set-created-and-modified-date-in-efcore.aspx but neither this nor the other solutions I've tried gives the result I am after.
Baseclass:
public class BaseEntity
{
    public DateTime? CreatedDate { get; set; }
    public DateTime? UpdatedDate { get; set; }
}
DbContext Class:
public override Task<int> SaveChangesAsync(bool acceptAllChangesOnSuccess, CancellationToken cancellationToken = default(CancellationToken))
{
    var entries = ChangeTracker.Entries().Where(E => E.State == EntityState.Added || E.State == EntityState.Modified).ToList();
    foreach (var entityEntry in entries)
    {
        if (entityEntry.State == EntityState.Modified)
        {
            entityEntry.Property("UpdatedDate").CurrentValue = DateTime.Now;
        }
        else if (entityEntry.State == EntityState.Added)
        {
            entityEntry.Property("CreatedDate").CurrentValue = DateTime.Now;
            entityEntry.Property("UpdatedDate").CurrentValue = DateTime.Now;
        }
    }
    return base.SaveChangesAsync(acceptAllChangesOnSuccess, cancellationToken);
}
UPDATE FOLLOWING ADVICE FROM STEVE IN COMMENTS BELOW
I spent a bit more time debugging today; it turns out the methods I posted above appear to be functioning as expected, i.e. when editing an existing row and saving it, only the entityEntry.State == EntityState.Modified IF statement is being called. So what I'm finding is that after saving the entity, the CreatedDate column is being overwritten with a NULL value; I can see this by watching the SQL explorer after a refresh. I believe the issue is along the lines of what Steve mentions below: "If it is #null then this might also explain the behavior in that it is not being loaded with the entity for whatever reason."
But I'm a little lost in tracing where this CreatedDate value is being dropped during the edit/save process.
The image below shows the result at the point just before the entity is saved following an update. In the debugger I'm not quite sure where to find the entry for CreatedDate to see what value is held at this step, but it appears to be missing from the debugger list, so I'm wondering whether somehow it doesn't know about the existence of this field when saving.
Below is the method I have in my form 'Edit' Razor page model class:
public class EditModel : PageModel
{
private readonly MyProject.Data.ApplicationDbContext _context;
public EditModel(MyProject.Data.ApplicationDbContext context)
{
_context = context;
}
[BindProperty]
public RuleParameters RuleParameters { get; set; }
public async Task<IActionResult> OnGetAsync(int? id)
{
if (id == null)
{
return NotFound();
}
RuleParameters = await _context.RuleParameters
.Include(r => r.SystemMapping).FirstOrDefaultAsync(m => m.ID == id);
if (RuleParameters == null)
{
return NotFound();
}
ViewData["SystemMappingID"] = new SelectList(_context.SystemMapping, "ID", "MappingName");
return Page();
}
public async Task<IActionResult> OnPostAsync()
{
if (!ModelState.IsValid)
{
return Page();
}
_context.Attach(RuleParameters).State = EntityState.Modified;
try
{
await _context.SaveChangesAsync();
}
catch (DbUpdateConcurrencyException)
{
if (!RuleParametersExists(RuleParameters.ID))
{
return NotFound();
}
else
{
throw;
}
}
return RedirectToPage("./Index");
}
private bool RuleParametersExists(int id)
{
return _context.RuleParameters.Any(e => e.ID == id);
}
}
Possibly one of the reasons for this issue is the fact that I have not included the CreatedDate field in my Edit Razor Page form, so when I update the entity, which in turn runs the OnPostAsync method server side, there is no value stored for the CreatedDate field and therefore nothing in the bag by the time SaveChangesAsync is called in my DbContext class. But I also didn't think this was necessary? Otherwise I'd struggle to see the value of using an inherited BaseEntity class, i.e. not having to manually add the CreatedDate & UpdatedDate attributes to every model class where I want to use them...
It may be easier to just give your BaseEntity a constructor:
public BaseEntity()
{
UpdatedDate = DateTime.Now;
CreatedDate = CreatedDate ?? UpdatedDate;
}
Then you can have your DbContext override SaveChangesAsync like:
public override Task<int> SaveChangesAsync(
bool acceptAllChangesOnSuccess,
CancellationToken token = default)
{
foreach (var entity in ChangeTracker
.Entries()
.Where(x => x.Entity is BaseEntity && x.State == EntityState.Modified)
.Select(x => x.Entity)
.Cast<BaseEntity>())
{
entity.UpdatedDate = DateTime.Now;
}
return base.SaveChangesAsync(acceptAllChangesOnSuccess, token);
}
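If some code paths call the synchronous SaveChanges instead, a similar override there keeps the behaviour consistent; a sketch along the same lines:

public override int SaveChanges(bool acceptAllChangesOnSuccess)
{
    foreach (var entity in ChangeTracker
        .Entries()
        .Where(x => x.Entity is BaseEntity && x.State == EntityState.Modified)
        .Select(x => x.Entity)
        .Cast<BaseEntity>())
    {
        // stamp the modification date exactly as in the async override above
        entity.UpdatedDate = DateTime.Now;
    }
    return base.SaveChanges(acceptAllChangesOnSuccess);
}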
Possibly one of the reasons for this issue is the fact that I have not included the CreatedDate field in my Edit Razor Page form, so when I update the entity, which in turn runs the OnPostAsync method server side, there is no value stored for the CreatedDate field and therefore nothing in the bag by the time SaveChangesAsync is called in my DbContext class.
That's true. Your posted data does not contain the original CreatedDate, so when the entity is saved to the database the value is null, and EF cannot know the correct value unless you assign it before saving. It is necessary.
You could just add the code below to your Razor form.
<input type="hidden" asp-for="CreatedDate" />
Update:
To handle it on the server side, you could assign the data manually:
public async Task<IActionResult> OnPostAsync()
{
    // load the original row without tracking it, so attaching RuleParameters below does not conflict
    RuleParameters originalData = await _context.RuleParameters.AsNoTracking()
        .FirstOrDefaultAsync(m => m.ID == RuleParameters.ID);
    RuleParameters.CreatedDate = originalData.CreatedDate;
    _context.Attach(RuleParameters).State = EntityState.Modified;
    await _context.SaveChangesAsync();
    return RedirectToPage("./Index");
}
I don't suspect EF is doing this, but rather your database, or you're inadvertently inserting records instead of updating them.
A simple test: put breakpoints in your SaveChangesAsync method within both the Modified and Added handlers and then run a unit test that loads an entity, edits it, and saves it. Which breakpoint is hit? If the behavior seems to be normal with a simple unit test, repeat again with your code.
If the Modified breakpoint is hit, and only the Modified handler is hit then check the state of the CreatedDate value in the entity modified. Does it still reflect the original CreatedDate? If yes, then it would appear that something in your schema will be overwriting it on save. If no then you have a bug in your code that has caused it to update. If it is #null then this might also explain the behaviour in that it is not being loaded with the entity for whatever reason. Check that the property has not been configured as something like a Computed property.
If the Added breakpoint is hit at all, then this would point at a scenario where you're dealing with a detached entity, such as an entity that was read from a different DB Context and being associated to another entity in the current DB Context and saved as a byproduct. When a DbContext encounters an entity that was loaded and disassociated with a different DbContext, it will treat that entity as a completely new entity and insert a new record. The biggest single culprit for this is invariably MVC code where people pass entities to/from views. Entity references are loaded in one request, serialized to the view, and then passed back on another request. Devs assume they are receiving an entity that they can just associate to a new entity and save, but the Context of this request doesn't know about that entity, and that "entity" isn't actually an entity, it is now a POCO shell of data that the serializer created. It's no different to you newing up a new class and populating fields. EF won't know the difference. The result of this is you will trip the Added condition for your entity, and after completion you will have a duplicate record. (with different PK if EF is configured to treat PKs as Identity)
So an example is an Order screen: When presenting a screen to create a new order I may have loaded the Customer and passed that to the view to display customer information and will want to associate to the new order:
var customer = context.Customers.Single(x => x.CustomerId == 15);
var newOrder = new Order { Customer = customer };
return View(newOrder);
This looks innocent enough. When we go to save the new order after setting their details:
public ActionResult Save(Order newOrder)
{
context.Orders.Add(newOrder);
newOrder.Customer.Orders.Add(newOrder);
context.SaveChanges();
// ...
}
newOrder had a reference to Customer #15, so all looks good. We're even associating the new order to the customer's order collection. We might even want to have updated fields on the customer record to reflect a change to the Modified date. However, newOrder in this case, and all associated data including .Customer, are plain 'ol C# objects at this point. We've added the new order to the Context, but as far as the context is concerned, the Customer referenced is also a new record. It will ignore the Customer ID if that is set as an Identity column and it will save a brand new Customer record (ID #16 for example) with all of the same details as Customer #15 and associate that to the new order. It can be subtle and easy to miss until you start querying Customers and spotting duplicate-looking rows.
If you are passing entities to/from views, I'd be very wary of this gotcha. Attaching and setting modified state is one option, but that involves trusting that the data has not been tampered with. As a general rule, calls to update entities should never pass entities & attach them, but rather re-load those entities, validate row version, validate the data coming in, and only copy across fields you expect should ever be modified before saving the entity associated to the DbContext.
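A rough sketch of that pattern, using the order example above (OrderUpdateModel and its fields are hypothetical stand-ins for whatever view model your form actually posts):

public ActionResult Save(OrderUpdateModel model)
{
    // re-load the tracked entity rather than attaching the posted object
    var order = context.Orders.Single(x => x.OrderId == model.OrderId);

    // validate the row version / concurrency token and the incoming data here...

    // copy across only the fields a user is expected to be able to change
    order.ShipToAddress = model.ShipToAddress;
    order.Notes = model.Notes;

    context.SaveChanges();
    return RedirectToAction("Index");
}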
Hopefully that gives you a few ideas on things to check to get to the bottom of the issue.

How to Use Automapper on DTO Returned From EF?

I was told to use automapper in the code below. I cannot get clarification for reasons that are too lengthy to go into. What object am I supposed to be mapping to what object? I don't see a "source" object, since the source is the database...
Would really appreciate any help on how to do this with automapper. Note, the actual fields are irrelevant, I need help with the general concept. I do understand how mapping works when mapping from one object to another.
public IQueryable<Object> ReturnDetailedSummaries(long orgId)
{
var summaries = from s in db.ReportSummaries
where s.OrganizationId == orgId
select new SummaryViewModel
{
Id = s.Id,
Name = s.Name,
AuditLocationId = s.AuditLocationId,
AuditLocationName = s.Location.Name,
CreatedOn = s.CreatedOn,
CreatedById = s.CreatedById,
CreatedByName = s.User.Name,
OfficeId = s.OfficeId,
OfficeName = s.Office.Name,
OrganizationId = s.OrganizationId,
OrganizationName = s.Organization.Name,
IsCompleted = s.IsCompleted,
isHidden = s.isHidden,
numberOfItemsInAuditLocations = s.numberOfItemsInAuditLocations,
numberOfLocationsScanned = s.numberOfLocationsScanned,
numberOfItemsScanned = s.numberOfItemsScanned,
numberofDiscrepanciesFound = s.numberofDiscrepanciesFound
};
return summaries;
}
It is handy and a timesaver, especially if you use one-to-one naming between translation layers. Here is how I use it.
For single item
public Domain.Data.User GetUserByUserName(string userName)
{
Mapper.CreateMap<User, Domain.Data.User>();
return (
from s in _dataContext.Users
where s.UserName==userName
select Mapper.Map<User, Domain.Data.User>(s)
).SingleOrDefault();
}
Multiple Items
public List<Domain.Data.User> GetUsersByProvider(int providerID)
{
Mapper.CreateMap<User, Domain.Data.User>();
return (
from s in _dataContext.Users
where s.ProviderID== providerID
select Mapper.Map<User, Domain.Data.User>(s)
).ToList();
}
It looks like you already have a model, SummaryViewModel?
If this isn't the DTO, then presumably you want to do:
Mapper.CreateMap<SummaryViewModel, SummaryViewModelDto>();
SummaryViewModelDto summaryViewModelDto =
Mapper.Map<SummaryViewModel, SummaryViewModelDto>(summaryViewModel);
AutoMapper will copy fields from one object to another, to save you having to do it all manually.
See https://github.com/AutoMapper/AutoMapper/wiki/Getting-started
The source is your entity class ReportSummary, the target is SummaryViewModel:
Mapper.CreateMap<ReportSummary, SummaryViewModel>();
The best way to use AutoMapper in combination with an IQueryable data source is through the Project.To API:
var summaries = db.ReportSummaries.Where(s => s.OrganizationId == orgId)
.Project().To<SummaryViewModel>();
Project.To translates the properties in the target model straight to the selected columns in the generated SQL.
Mapper.Map, on the other hand, only works on in-memory collections, so you can only use it when you first fetch complete ReportSummary objects from the database. (In this case there may not be much of a difference, but in other cases it can be substantial).

Linq to select top 1 related entity

How can I include a related entity, but only select the top 1?
public EntityFramework.Member Get(string userName)
{
    var query = from member in context.Members.Include(m => m.Renewals)
                where member.UserName == userName
                select member;
    return query.SingleOrDefault();
}
According to MSDN:
"Note that it is not currently possible to filter which related entities are loaded. Include will always bring in all related entities."
http://msdn.microsoft.com/en-us/data/jj574232
There is also a uservoice item for this functionality:
http://data.uservoice.com/forums/72025-entity-framework-feature-suggestions/suggestions/1015345-allow-filtering-for-include-extension-method
The approach of using an anonymous object works, even though it's not as clean as you might wish:
public Member GetMember(string username)
{
var result = (from m in db.Members
where m.Username == username
select new
{
Member = m,
FirstRenewal = m.Renewals.FirstOrDefault()
}).AsEnumerable().Select(r => r.Member).FirstOrDefault();
return result;
}
The FirstRenewal property is used just to make EF6 load the first renewal into the Member object. As a result, the Member returned from the GetMember() method contains only the first renewal.
This code generates a single query to the DB, so maybe it's good enough for you.
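As a side note, if you are on EF Core 5 or later (the question above targets EF6), filtered Include can express this directly without the anonymous-type workaround; a sketch:

public Member GetMember(string username)
{
    return db.Members
        .Include(m => m.Renewals.Take(1))   // optionally add an OrderBy before Take to pick a specific renewal
        .FirstOrDefault(m => m.Username == username);
}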

How to do recursive load with Entity framework?

I have a tree structure in the DB with a TreeNodes table. The table has nodeId, parentId and parameterId. In EF, the structure is like TreeNode.Children where each child is a TreeNode...
I also have a Tree table containing id, name and rootNodeId.
At the end of the day I would like to load the tree into a TreeView but I can't figure out how to load it all at once.
I tried:
var trees = from t in context.TreeSet.Include("Root").Include("Root.Children").Include("Root.Children.Parameter")
.Include("Root.Children.Children")
where t.ID == id
select t;
This will get me the first 2 generations but not more.
How do I load the entire tree with all generations and the additional data?
I had this problem recently and stumbled across this question after I figured a simple way to achieve results. I provided an edit to Craig's answer providing a 4th method, but the powers-that-be decided it should be another answer. That's fine with me :)
My original question / answer can be found here.
This works so long as the items in the table all know which tree they belong to (which in your case it looks like they do: t.ID). That said, it's not clear which entities you really have in play, but even if you've got more than one, you must have an FK in the Children entity if that's not a TreeSet.
Basically, just don't use Include():
var query = from t in context.TreeSet
where t.ID == id
select t;
// if TreeSet.Children is a different entity:
var query = from c in context.TreeSetChildren
// guessing the FK property TreeSetID
where c.TreeSetID == id
select c;
This will bring back ALL the items for the tree and put them all in the root of the collection. At this point, your result set will look like this:
-- Item1
-- Item2
-- Item3
-- Item4
-- Item5
-- Item2
-- Item3
-- Item5
Since you probably want your entities coming out of EF only hierarchically, this isn't what you want, right?
Then, exclude the descendants present at the root level:
Fortunately, because you have navigation properties in your model, the child entity collections will still be populated as you can see by the illustration of the result set above. By manually iterating over the result set with a foreach() loop, and adding those root items to a new List<TreeSet>(), you will now have a list with root elements and all descendants properly nested.
If your trees get large and performance is a concern, you can sort your return set ascending by ParentID (it's nullable, right?) so that all the root items come first. Iterate and add as before, but break from the loop once you reach an item whose ParentID is not null (a sketch of this variant follows the listing below).
var subset = query
// execute the query against the DB
.ToList()
// filter out non-root-items
.Where(x => !x.ParentId.HasValue);
And now subset will look like this:
-- Item1
-- Item2
-- Item3
-- Item4
-- Item5
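A sketch of the sort-and-break variant mentioned above, assuming the entity has a nullable ParentId and that the provider sorts NULLs first for ascending order (as SQL Server does):

var ordered = query
    .OrderBy(x => x.ParentId)   // roots (null ParentId) come first
    .ToList();

var roots = new List<TreeSet>();
foreach (var item in ordered)
{
    if (item.ParentId.HasValue)
        break;                  // past the roots; the rest are already nested via navigation fix-up
    roots.Add(item);
}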
About Craig's solutions:
You really don't want to use lazy loading for this!! A design built around the necessity for n+1 querying will be a major performance sucker. (Well, to be fair, if you're going to allow a user to selectively drill down the tree, then it could be appropriate. Just don't use lazy loading for getting them all up-front!) I've never tried the nested set stuff, and I wouldn't suggest hacking EF configuration to make this work either, given there is a far easier solution. Another reasonable suggestion is creating a database view that provides the self-linking, then mapping that view to an intermediary join/link/m2m table. Personally, I found that solution to be more complicated than necessary, but it probably has its uses.
When you use Include(), you are asking the Entity Framework to translate your query into SQL. So think: How would you write an SQL statement which returns a tree of an arbitrary depth?
Answer: Unless you are using specific hierarchy features of your database server (which are not SQL standard, but supported by some servers, such as SQL Server 2008, though not by its Entity Framework provider), you wouldn't. The usual way to handle trees of arbitrary depth in SQL is to use the nested sets model rather than the parent ID model.
Therefore, there are three ways which you can use to solve this problem:
Use the nested sets model. This requires changing your metadata.
Use SQL Server's hierarchy features, and hack the Entity Framework into understanding them (tricky, but this technique might work). Again, you'll need to change your metadata.
Use explicit loading or EF 4's lazy loading instead of eager loading. This will result in many database queries instead of one.
I wanted to post up my answer since the others didn't help me.
My database is a little different, basically my table has an ID and a ParentID. The table is recursive. The following code gets all children and nests them into a final list.
public IEnumerable<Models.MCMessageCenterThread> GetAllMessageCenterThreads(int msgCtrId)
{
var z = Db.MCMessageThreads.Where(t => t.ID == msgCtrId)
.Select(t => new MCMessageCenterThread
{
Id = t.ID,
ParentId = t.ParentID ?? 0,
Title = t.Title,
Body = t.Body
}).ToList();
foreach (var t in z)
{
t.Children = GetChildrenByParentId(t.Id);
}
return z;
}
private IEnumerable<MCMessageCenterThread> GetChildrenByParentId(int parentId)
{
var children = new List<MCMessageCenterThread>();
var threads = Db.MCMessageThreads.Where(x => x.ParentID == parentId);
foreach (var t in threads)
{
var thread = new MCMessageCenterThread
{
Id = t.ID,
ParentId = t.ParentID ?? 0,
Title = t.Title,
Body = t.Body,
Children = GetChildrenByParentId(t.ID)
};
children.Add(thread);
}
return children;
}
For completeness, here's my model:
public class MCMessageCenterThread
{
public int Id { get; set; }
public int ParentId { get; set; }
public string Title { get; set; }
public string Body { get; set; }
public IEnumerable<MCMessageCenterThread> Children { get; set; }
}
I wrote something recently that does N+1 selects to load the whole tree, where N is the number of levels of your deepest path in the source object.
This is what I did, given the following self-referencing class
public class SomeEntity
{
    public int Id { get; set; }
    public int? ParentId { get; set; }
    public string Name { get; set; }
}
I wrote the following DbSet helper
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;
using System.Threading.Tasks;

namespace Microsoft.EntityFrameworkCore
{
    public static class DbSetExtensions
    {
        public static async Task<TEntity[]> FindRecursiveAsync<TEntity, TKey>(
            this DbSet<TEntity> source,
            Expression<Func<TEntity, bool>> rootSelector,
            Func<TEntity, TKey> getEntityKey,
            Func<TEntity, TKey> getChildKeyToParent)
            where TEntity : class
        {
            // Keeps track of keys already processed, so as not to fall into
            // infinite recursion when the data contains cycles
            var alreadyProcessed = new HashSet<TKey>();
            TEntity[] result = await source.Where(rootSelector).ToArrayAsync();
            TEntity[] currentRoots = result;
            while (currentRoots.Length > 0)
            {
                TKey[] currentParentKeys = currentRoots.Select(getEntityKey).Except(alreadyProcessed).ToArray();
                alreadyProcessed.UnionWith(currentParentKeys); // HashSet<T> has no AddRange
                // Note: because getChildKeyToParent is a delegate, this predicate cannot be
                // translated to SQL and will be evaluated client-side (or throw, depending
                // on the EF Core version)
                Expression<Func<TEntity, bool>> childPredicate = x => currentParentKeys.Contains(getChildKeyToParent(x));
                currentRoots = await source.Where(childPredicate).ToArrayAsync();
            }
            return result;
        }
    }
}
Whenever you need to load a whole tree you simply call this method, passing in three things
The selection criteria for your root objects
How to get the property for the primary key of the object (SomeEntity.Id)
How to get the child's property that refers to its parent (SomeEntity.ParentId)
For example
SomeEntity[] myEntities = await DataContext.SomeEntity.FindRecursiveAsync(
    rootSelector: x => x.Id == 42,
    getEntityKey: x => (int?)x.Id,          // cast so TKey matches the nullable ParentId
    getChildKeyToParent: x => x.ParentId);
Alternatively, if you can add a RootId column to the table then for each non-root entry you can set this column to the ID of the root of the tree. Then you can fetch everything with a single select
DataContext.SomeEntity.Where(x => x.Id == rootId || x.RootId == rootId)
For an example of loading child objects, I'll use the example of a Comment object, where each comment may have a child comment.
private static void LoadComments(<yourObject> q, Context yourContext)
{
    if (null == q || null == yourContext)
    {
        return;
    }
    yourContext.Entry(q).Reference(x => x.Comment).Load();
    Comment curComment = q.Comment;
    while (null != curComment)
    {
        curComment = LoadChildComment(curComment, yourContext);
    }
}

private static Comment LoadChildComment(Comment c, Context yourContext)
{
    if (null == c || null == yourContext)
    {
        return null;
    }
    yourContext.Entry(c).Reference(x => x.ChildComment).Load();
    return c.ChildComment;
}
Now, if you have something that holds collections of itself, you would need to use Collection instead of Reference and do the same sort of diving down. At least that's the approach I took in this scenario, as we were dealing with Entity Framework and SQLite.
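A small sketch of what the Collection counterpart could look like (the Replies navigation and the LoadReplies name are hypothetical, just to illustrate the shape):

private static void LoadReplies(Comment c, Context yourContext)
{
    if (null == c || null == yourContext)
    {
        return;
    }
    // explicitly load the collection navigation for this comment
    yourContext.Entry(c).Collection(x => x.Replies).Load();
    foreach (var reply in c.Replies)
    {
        LoadReplies(reply, yourContext);   // dive down into each child's own collection
    }
}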
This is an old question, but the other answers either had n+1 database hits or their models were conducive to trunk-to-leaves approaches. In this scenario, a tag list is loaded as a tree, and a tag can have multiple parents. The approach I use takes only two database hits: the first to get the tags for the selected articles, and another that eager-loads the join table. Thus, this works from the leaves toward the trunk; if your join table is large or if the result cannot really be cached for reuse, then eager loading the whole thing starts to show the tradeoffs of this approach.
To begin, I initialize two HashSets: one to hold the root nodes (the resultset), and another to keep a reference to each node that has been "hit."
var roots = new HashSet<AncestralTagDto>(); //no parents
var allTags = new HashSet<AncestralTagDto>();
Next, I grab all of the leaves that the client requested, placing them into an object that holds a collection of children (but that collection will remain empty after this step).
var startingTags = await _dataContext.ArticlesTags
.Include(p => p.Tag.Parents)
.Where(t => t.Article.CategoryId == categoryId)
.GroupBy(t => t.Tag)
.ToListAsync()
.ContinueWith(resultTask =>
resultTask.Result.Select(
grouping => new AncestralTagDto(
grouping.Key.Id,
grouping.Key.Name)));
Now, let's grab the tag self-join table, and load it all into memory:
var tagRelations = await _dataContext.TagsTags.Include(p => p.ParentTag).ToListAsync();
Now, for each tag in startingTags, add that tag to the allTags collection, then walk the tree toward the roots to gather its ancestors recursively:
foreach (var tag in startingTags)
{
allTags.Add(tag);
GetParents(tag);
}
return roots;
Lastly, here's the nested recursive method that builds the tree:
void GetParents(AncestralTagDto tag)
{
var parents = tagRelations.Where(c => c.ChildTagId == tag.Id).Select(p => p.ParentTag);
if (parents.Any()) //then it's not a root tag; keep climbing toward the roots
{
foreach (var parent in parents)
{
//have we already seen this parent tag before? If not, instantiate the dto.
var parentDto = allTags.SingleOrDefault(i => i.Id == parent.Id);
if (parentDto is null)
{
parentDto = new AncestralTagDto(parent.Id, parent.Name);
allTags.Add(parentDto);
}
parentDto.Children.Add(tag);
GetParents(parentDto);
}
}
else //the tag is a root tag, and should be in the root collection. If it's not in there, add it.
{
//this block could be simplified to just roots.Add(tag), but it's left this way for other logic.
var existingRoot = roots.SingleOrDefault(i => i.Equals(tag));
if (existingRoot is null)
roots.Add(tag);
}
}
Under the covers, I am relying on the properties of a HashSet to prevent duplicates. To that end, it's important that the intermediate object you use (I used AncestralTagDto here, and its Children collection is also a HashSet) overrides the Equals and GetHashCode methods as appropriate for your use case.
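For illustration, a minimal sketch of what such an AncestralTagDto could look like, assuming an int Id uniquely identifies a tag (Equals/GetHashCode are keyed on Id so the HashSets above de-duplicate correctly):

public class AncestralTagDto
{
    public AncestralTagDto(int id, string name)
    {
        Id = id;
        Name = name;
    }

    public int Id { get; }
    public string Name { get; }
    public HashSet<AncestralTagDto> Children { get; } = new HashSet<AncestralTagDto>();

    // two DTOs are "the same tag" when their Ids match
    public override bool Equals(object obj) => obj is AncestralTagDto other && other.Id == Id;
    public override int GetHashCode() => Id.GetHashCode();
}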