I have the ReportingActivity entity class.
public class ReportingActivity
{
    public int ReportingActivityID { get; set; }
    public DateTime ReportingActivitySend { get; set; }
    public string Remark { get; set; }
    public string SendersCSV { get; set; }
    public string MailSenderStatus { get; set; }
    public long RptGenerationCostMiliseconds { get; set; }
    public DateTime RptGeneratedDateTime { get; set; }
    public string RptGeneratedByWinUser { get; set; }
    public string RptGeneratedOnMachine { get; set; }
    public Int64 Run { get; set; }
    public byte[] AttachmentFile { get; set; }
    public virtual Report Report { get; set; }
    public virtual Employee Employee { get; set; }
    public virtual ReportingTask ReportingTask { get; set; }
}
I use this code to load data:
ctxDetail = new ReportingContext();
ctxDetail.ReportingActivity
    .Where(x => x.Employee.EmployeeID == currentEmployee.EmployeeID)
    .Load();
My code retrieves all the columns (like SELECT * FROM ...).
My question is: how do I skip the byte[] column? Ideally, recommend a way to improve my code so that I can specify an exact list of columns.
Normally, when dealing with a schema where records have large, seldom-accessed details, it is beneficial to split those details off into a separate table, as David mentions, with a one-to-one relationship back to the main record. These can be eager or lazy loaded as desired when they are needed.
If changing the schema is not practical, then another option when retrieving data from the table is to use projection via Select to populate a view model containing just the fields you need, excluding the larger fields. This will speed up reads for things like views; however, for operations like updates you will still need to load the entire entity, including the large fields, to ensure you don't accidentally overwrite or erase data. It is possible to perform updates without loading this data, but it adds complexity and a risk of introducing bugs if mismanaged later.
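For illustration, here is a minimal sketch of that projection approach against the entity above. The ReportingActivityListItem view model and its chosen fields are hypothetical; project whichever columns you actually need:

// Hypothetical view model holding only the columns needed for display.
public class ReportingActivityListItem
{
    public int ReportingActivityID { get; set; }
    public DateTime ReportingActivitySend { get; set; }
    public string Remark { get; set; }
    public string MailSenderStatus { get; set; }
}

// The generated SQL selects only the projected columns;
// AttachmentFile is never read from the database.
var items = ctxDetail.ReportingActivity
    .Where(x => x.Employee.EmployeeID == currentEmployee.EmployeeID)
    .Select(x => new ReportingActivityListItem
    {
        ReportingActivityID = x.ReportingActivityID,
        ReportingActivitySend = x.ReportingActivitySend,
        Remark = x.Remark,
        MailSenderStatus = x.MailSenderStatus
    })
    .ToList();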
You can use Table Splitting, and optionally Lazy Loading, to have only the commonly needed columns loaded.
See
https://learn.microsoft.com/en-us/ef/core/modeling/table-splitting
That page is for EF Core, but the same approach works in EF6 code-first.
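A rough sketch of what that could look like for the entity above, shown with the EF6 fluent API (the linked page covers the EF Core equivalent). The ReportingActivityDetail type is hypothetical; both entities share a key and map to the same table:

// Hypothetical second entity, mapped to the SAME table, holding the blob.
public class ReportingActivityDetail
{
    [Key]
    public int ReportingActivityID { get; set; }
    public byte[] AttachmentFile { get; set; }
}

// ReportingActivity keeps the commonly used columns plus a lazy 1:1 navigation.
public class ReportingActivity
{
    public int ReportingActivityID { get; set; }
    // ... other scalar columns ...
    public virtual ReportingActivityDetail Detail { get; set; }
}

// In OnModelCreating: a required 1:1 and both entities on one table.
modelBuilder.Entity<ReportingActivity>()
    .HasRequired(a => a.Detail)
    .WithRequiredPrincipal();
modelBuilder.Entity<ReportingActivity>().ToTable("ReportingActivity");
modelBuilder.Entity<ReportingActivityDetail>().ToTable("ReportingActivity");

With lazy loading left on, the Detail reference (and therefore the attachment bytes) is only queried when it is actually accessed.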
I've hit a snag while building a .NET MVC site. I have two related objects and am struggling to link them properly. Specifically:
public class Address
{
    public int AddressId { get; set; }
    public string Street { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string PostCode { get; set; }

    [ForeignKey("AddressCategory")] // <-- EF adds field to below object's table
    public int AddressCategoryId { get; set; }
    public virtual AddressCategory AddressCategory { get; set; }
}

public class AddressCategory
{
    public int AddressCategoryId { get; set; }
    public string Description { get; set; }
}
Adding the [ForeignKey] data annotation to the Address object results in EF adding an Address_AddressId column to the AddressCategory table, which I don't want (or need) to happen.
I've tried to omit the ForeignKey attribute, but then I run into other errors because .NET can't link the tables (e.g. Unknown column 'Extent1.AddressId' in 'field list'). Additionally, I wouldn't be able to use:
var addresses = db.Addresses.Include(l => l.AddressCategory);
Is there any way to link the 2 tables without EF adding an additional column to the AddressCategory table?
Thank you to @cloudikka for responding. After much trial and error, I seem to have gotten it to work simply by omitting any ForeignKey reference from either object. I let EF rebuild the database and perform all scaffolding (CRUD forms), and they have been created perfectly.
My takeaway is that foreign-key attributes should be used for parent-child relationships, but not for look-up tables. I clearly have much to learn about ASP.NET MVC!
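For reference, a minimal sketch of the convention-based version that works without the attribute (assuming EF6 defaults): because the scalar property AddressCategoryId matches the navigation property name AddressCategory plus "Id", EF infers the foreign key on its own.

public class Address
{
    public int AddressId { get; set; }
    public string Street { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string PostCode { get; set; }

    // No [ForeignKey] needed: AddressCategoryId is matched to the
    // AddressCategory navigation property purely by naming convention.
    public int AddressCategoryId { get; set; }
    public virtual AddressCategory AddressCategory { get; set; }
}

public class AddressCategory
{
    public int AddressCategoryId { get; set; }
    public string Description { get; set; }
}

With this in place, db.Addresses.Include(a => a.AddressCategory) works as expected, and no extra column is added to the AddressCategory table.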
I use Entity Framework Code First to access my SQL Server database. The "Client" table currently has about 90 columns:
[Table("Clients")]
public class Client
{
public int Id { get; set; }
public string Property1 { get; set; }
...
public string Property90 { get; set; }
}
I have decided to vertically partition this table into 3 tables, because often not all the properties are used. However, I still have legacy code (that I can't change right now) that expects the full Client object with all 90 columns.
My solution so far is to split the Client class into 3 classes corresponding with the new tables, and then use Table Per Type inheritance to allow the legacy code to access the Client object as though the original Clients table is still there:
[Table("Clients")]
public class Client: Client1
{
public int Id { get; set; }
public string Property1 { get; set; }
...
public string Property30 { get; set; }
}
[Table("Client1s")]
public class Client1: Client2
{
public int Id { get; set; }
public string Property31 { get; set; }
...
public string Property60 { get; set; }
}
[Table("Client2s")]
public class Client2
{
public int Id { get; set; }
public string Property61 { get; set; }
...
public string Property90 { get; set; }
}
However, this somehow seems a bit clunky to me.
Is there a more elegant way to achieve vertical partitioning with Entity Framework Code First?
So, considering you refer to the existing approach as being used by "legacy" systems and your new partitioned approach is most likely intended to be the new "correct" way going forwards, my advice would be to keep them as separated as possible.
What you could look to do is replace the existing monolithic Clients table with a database view that joins the 3 separate, partitioned tables back together. Then you can hook up the existing Clients class in all its former glory to the view, leaving your legacy systems relatively untouched, in theory.
I'd also recommend ditching the inheritance idea and leaving the 3 new partitioned classes completely independent of one another. Otherwise, both legacy and new systems will be extremely sensitive to any changes being made to classes and properties within that entire inheritance chain.
By doing it this way you are then free to change and evolve the new classes independently and modify any underlying table structures however you see fit in the future. Provided you maintain the view's integrity and consistency, your legacy systems should continue to function as normal without any repercussions or regressions, mostly :-)
In my humble experience, shielding the old from changes in the new far outweighs the slight inconveniences of having some code duplication and stricter boundaries.
To answer your elegance question more directly I'd say that insulating your classes against unnecessary coupling and avoiding the "ripple effect" yields the more elegant solution.
Anyway, I hope it helps.
I'm going to assume your actual classes have much more meaningful property names, right?
// LEGACY SYSTEMS
[Obsolete("Use the newer partitioned classes going forwards")]
[Table("vClients")]
public class Client
{
    public int Id { get; set; }
    public string Property1 { get; set; }
    ...
    public string Property90 { get; set; }
}

// NEW STRUCTURES
[Table("Client1s")]
public class Client1
{
    public int Id { get; set; }
    public string Property1 { get; set; }
    ...
    public string Property30 { get; set; }
}

[Table("Client2s")]
public class Client2
{
    public int Id { get; set; }
    public string Property31 { get; set; }
    ...
    public string Property60 { get; set; }
}

[Table("Client3s")]
public class Client3
{
    public int Id { get; set; }
    public string Property61 { get; set; }
    ...
    public string Property90 { get; set; }
}
Updating the base tables via the view can be done using INSTEAD OF triggers, like so:
CREATE TRIGGER ClientsLegacyInsertAdapter ON vClients
INSTEAD OF INSERT
AS
BEGIN
    BEGIN TRANSACTION

    INSERT INTO Client1s
    SELECT Id, Property1, Property2, ..., Property30
    FROM inserted;

    INSERT INTO Client2s
    SELECT Id, Property31, Property32, ..., Property60
    FROM inserted;

    INSERT INTO Client3s
    SELECT Id, Property61, Property62, ..., Property90
    FROM inserted;

    COMMIT TRANSACTION
END
You should be able to use the same technique for UPDATE and DELETE commands also.
We are using Entity Framework with the Repository pattern in a web-based application to fetch data from the database. Because of our complex business domain, our models sometimes get complex, and this causes strange behaviour in Entity Framework's eager-loading system.
Please imagine our real model like this: we have tables, boxes which are on a table, pencil cases which can be on a table or in a box, and pencils which can be on a table, in a box, or in a pencil case.
We modelled this in our application like this:
public class Table
{
    public int TableID { get; set; }
    public virtual ICollection<Box> Boxes { get; set; }
    public virtual ICollection<PencilCases> PencilCases { get; set; }
    public virtual ICollection<Pencils> Pencils { get; set; }
}

public class Box
{
    public int BoxID { get; set; }
    public int TableID { get; set; }

    [ForeignKey("TableID")]
    public virtual Table Table { get; set; }

    public virtual ICollection<PencilCases> PencilCases { get; set; }
    public virtual ICollection<Pencils> Pencils { get; set; }
}

public class PencilCases
{
    public int PencilCaseID { get; set; }
    public int? BoxID { get; set; }
    public int TableID { get; set; }

    [ForeignKey("TableID")]
    public virtual Table Table { get; set; }

    [ForeignKey("BoxID")]
    public virtual Box Box { get; set; }

    public virtual ICollection<Pencils> Pencils { get; set; }
}

public class Pencils
{
    public int PencilID { get; set; }
    public int? PencilCaseID { get; set; }
    public int? BoxID { get; set; }
    public int TableID { get; set; }

    [ForeignKey("TableID")]
    public virtual Table Table { get; set; }

    [ForeignKey("BoxID")]
    public virtual Box Box { get; set; }

    [ForeignKey("PencilCaseID")]
    public virtual PencilCases PencilCase { get; set; }
}
Our repository pattern implementation is similar to this tutorial: http://www.asp.net/mvc/tutorials/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
So we call the Get method like this:
var tables = unitOfWork.TableRepository.Get(includeProperties: "Boxes, PencilCases, Boxes.Pencils");
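For context, the Get method in that tutorial turns the comma-separated includeProperties string into string-based Include calls, roughly like this (a paraphrased sketch, not the exact tutorial code; the string-based Include extension requires using System.Data.Entity):

public class GenericRepository<TEntity> where TEntity : class
{
    private readonly DbSet<TEntity> dbSet;

    public GenericRepository(DbContext context)
    {
        dbSet = context.Set<TEntity>();
    }

    public virtual IEnumerable<TEntity> Get(
        Expression<Func<TEntity, bool>> filter = null,
        Func<IQueryable<TEntity>, IOrderedQueryable<TEntity>> orderBy = null,
        string includeProperties = "")
    {
        IQueryable<TEntity> query = dbSet;

        if (filter != null)
            query = query.Where(filter);

        // Each comma-separated path becomes one eager-load instruction.
        foreach (var includeProperty in includeProperties.Split(
            new[] { ',' }, StringSplitOptions.RemoveEmptyEntries))
        {
            query = query.Include(includeProperty.Trim());
        }

        return orderBy != null ? orderBy(query).ToList() : query.ToList();
    }
}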
The problem is that the result is very different from my expectations; I expected only the Boxes, PencilCases, and Boxes.Pencils collections to be fetched, but all the Pencil entities were fetched from the database, including Pencils, PencilCases.Pencils, and Boxes.PencilCases.Pencils. This recursive fetch causes an OutOfMemoryException because of the amount of data.
I can't understand why Entity Framework fetches all the Pencils collections rather than just Boxes.Pencils. I also tried specifying the include list with an Expression instead of a query path, but the result didn't change.
First off, I'm fairly new to EF myself, so please excuse me if the following is not 100% accurate. However, I dealt with this exact same problem just a couple of days ago, so hopefully this will help.
The problem is that when EF loads a specific entity, it will add that entity to every part of the Data Model that it appears in - not just the parts that were explicitly loaded.
This means that every Pencil in Boxes.Pencils that is also in the ICollection of Table.Pencils will be automatically resolved even though you did not specifically ask for it.
By itself that fact does not present a problem, and can even be helpful in a user-driven MVC application.
Where it all goes wrong is when you try to do anything that recurses through the data entity, such as trying to map the self-recursing data entity to a business model, or trying to turn it into JSON/XML.
Now, there are several solutions to this problem:
Implement a mapper / encoder that hashes / remembers each object and only adds it once:
The problem with this one is that it can lead to some hard-to-predict results, especially when you want / need the object in multiple places. Additionally, hashing and comparing every object could be costly.
Implement a mapper / encoder that can be configured to ignore some properties
Relatively simple - if you can specify that you don't want to map or encode Pencil at all, you won't have any issues. The downside is, of course, that you could still encounter a stack overflow if you are not vigilant about specifying the ignored properties.
Implement a mapper / encoder with specifyable recursion depth
This is a very simple and pretty decent solution - simply set a hard limit on recursion depth, either globally or on a per-type basis, and you won't have any more stack overflows. The downside is that you would still end up with elements that you don't want, and thus an unnecessarily bloated return object.
Implement custom business entities
This is probably the best solution - simply create a new business entity with the offending navigational properties removed. The primary downside is that it would require you to create different business entities for different purposes.
Here is an example:
// Removed Pencils
public class BusinessTable
{
    public int TableID { get; set; }
    public IEnumerable<Box> Boxes { get; set; }
    public IEnumerable<PencilCases> PencilCases { get; set; }
}

// Removed Table & PencilCases
public class BusinessBox
{
    public int BoxID { get; set; }
    public int TableID { get; set; }
    public IEnumerable<Pencils> Pencils { get; set; }
}

// Removed Table & Box & Pencils
public class BusinessPencilCases
{
    public int PencilCaseID { get; set; }
    public int? BoxID { get; set; }
    public int TableID { get; set; }
}

// Removed Table, Box, PencilCase
public class BusinessPencils
{
    public int PencilID { get; set; }
    public int? PencilCaseID { get; set; }
    public int? BoxID { get; set; }
    public int TableID { get; set; }
}
Now when you map your Data Entity to this set of Business Entities, you won't get any more errors.
For the mapping aspect of this, there are two solutions: doing things manually with a mapping factory (see the Example of Model Factory), or using a mapping library such as ValueInjecter or AutoMapper, both of which are available as NuGet packages.
For AutoMapper:
I don't use AutoMapper, but you'd have to create a config file that looks something like this:
Mapper.CreateMap<Table, BusinessTable>();
Mapper.CreateMap<Box, BusinessBox>();
Mapper.CreateMap<PencilCases, BusinessPencilCases>();
Mapper.CreateMap<Pencils, BusinessPencils>();
And then in your query:
var tables = unitOfWork.TableRepository.Get(includeProperties: "Boxes, PencilCases, Boxes.Pencils");
var result = Mapper.Map<IEnumerable<Table>, IEnumerable<BusinessTable>>(tables);
Or
var tables = unitOfWork.TableRepository.Get(includeProperties: "Boxes, PencilCases, Boxes.Pencils").Project().To<BusinessTable>();
For more info pertaining AutoMapper ( like how to set up a config file ): https://github.com/AutoMapper/AutoMapper/wiki/Getting-started
For ValueInjecter:
var tables = unitOfWork.TableRepository.Get(includeProperties: "Boxes, PencilCases, Boxes.Pencils");
var result = new List<BusinessTable>().InjectFrom(tables);
Or:
var tables = unitOfWork.TableRepository.Get(includeProperties: "Boxes, PencilCases, Boxes.Pencils");
var result = tables.Select(x => new BusinessTable().InjectFrom(x)).Cast<BusinessTable>();
It might also be worthwhile to look at additional ValueInjecter injections, like SmartConventionInjection, Deep Cloning, Useful Injections, and the ORM with ValueInjecter guide.
I also made a few injections for my own project that may be of use to you, which you can find on my GitHub.
With MaxDepthCloneInjector, for example, you can supply a dictionary of (property name, max recursion depth) pairs, and it will only map values included in the dictionary, and only down to the specified depth.
Two more pieces of advice:
If you want a bit more freedom with your queries, you should consider using the Query Expression Syntax for some of your more complex needs. There's also some good information in this answer on SO: How to limit number of related data with Include.
If you are planning to run queries that include navigational properties like the one in your example: STICK WITH EAGER LOADING. A query like that with Lazy Loading would lead to the N + 1 problem. As a rule of thumb:
Use Lazy Loading if you don't need the entire result set right away, for example if you are developing an application where data requirements naturally expand based on the user's interaction with the application.
Use Eager Loading if you need the entire result set right away, for example in a Web API, or an application that needs to work with the complete entity.
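To make the N + 1 point concrete, here is a small sketch (EF6 syntax; "context" is a hypothetical DbContext exposing your Tables set, and the nested Include requires using System.Data.Entity):

// Eager loading: one round-trip that materializes tables, boxes and pencils.
var tablesWithPencils = context.Tables
    .Include(t => t.Boxes.Select(b => b.Pencils))
    .ToList();

// Lazy loading: 1 query for the tables, then 1 more per table the moment
// its Boxes collection is touched -- the classic N + 1 pattern.
foreach (var table in context.Tables.ToList())
{
    var boxCount = table.Boxes.Count; // triggers a separate query per table
}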
Best of luck,
Felix
I am getting an error when trying to run this code:
Unable to determine the principal end of an association between the
types 'AddressBook.DAL.Models.User' and 'AddressBook.DAL.Models.User'.
The principal end of this association must be explicitly configured
using either the relationship fluent API or data annotations.
The objective is that I am creating a base class that has common fields for all the tables.
If I don't use the base class, everything works fine.
namespace AddressBook.DAL.Models
{
    public class BaseTable
    {
        [Required]
        public DateTime DateCreated { get; set; }

        [Required]
        public DateTime DateLastUpdatedOn { get; set; }

        [Required]
        public virtual int CreatedByUserId { get; set; }

        [ForeignKey("CreatedByUserId")]
        public virtual User CreatedByUser { get; set; }

        [Required]
        public virtual int UpdatedByUserId { get; set; }

        [ForeignKey("UpdatedByUserId")]
        public virtual User UpdatedByUser { get; set; }

        [Required]
        public RowStatus RowStatus { get; set; }
    }

    public enum RowStatus
    {
        NewlyCreated,
        Modified,
        Deleted
    }
}
namespace AddressBook.DAL.Models
{
    public class User : BaseTable
    {
        [Key]
        public int UserID { get; set; }

        public string UserName { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string MiddleName { get; set; }
        public string Password { get; set; }
    }
}
You need to provide mapping information to EF. The following article describes code-first strategies for the different EF entity-inheritance models (table-per-type, table-per-hierarchy, etc.). Not all of the scenarios are directly what you are doing here, but pay attention to the mapping code, because that's what you need to consider (and it's good info in case you want to use inheritance for other scenarios). Note that inheritance does have limitations and costs when it comes to ORMs, particularly with polymorphic associations (which makes the TPC scenario somewhat difficult to manage). http://weblogs.asp.net/manavi/archive/2010/12/24/inheritance-mapping-strategies-with-entity-framework-code-first-ctp5-part-1-table-per-hierarchy-tph.aspx
The other way EF can handle this kind of scenario is by aggregating a complex type into a "fake" compositional relationship. In other words, even though your audit fields are part of some transactional entity table, you can split them out into a common complex type which can be associated with any other entity that contains those same fields. The difference here is that you'd actually be encapsulating those fields into another type. So, for example, if you moved your audit fields into an "Audit" complex type, you would have something like:
User.Audit.DateCreated
instead of
User.DateCreated
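A minimal sketch of that complex-type approach (note that an EF complex type may only contain scalar properties, so the CreatedByUser/UpdatedByUser navigation properties would have to stay on the entity itself):

[ComplexType]
public class Audit
{
    public DateTime DateCreated { get; set; }
    public DateTime DateLastUpdatedOn { get; set; }
    public RowStatus RowStatus { get; set; }
}

public class User
{
    [Key]
    public int UserID { get; set; }
    public string UserName { get; set; }

    // Stored as extra columns on the Users table, not as a separate table.
    public Audit Audit { get; set; }
}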
In any case, you still need to provide the appropriate mapping information.
This article here explains how to do this: http://weblogs.asp.net/manavi/archive/2010/12/11/entity-association-mapping-with-code-first-part-1-one-to-one-associations.aspx
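As an illustration of the kind of mapping information EF is asking for, here is a sketch using the EF6 fluent API (applied to the User entity; every concrete entity deriving from BaseTable would need the same treatment):

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Declare User as the principal of both self-referencing audit
    // relationships, and switch cascade delete off to avoid cycles.
    modelBuilder.Entity<User>()
        .HasRequired(u => u.CreatedByUser)
        .WithMany()
        .HasForeignKey(u => u.CreatedByUserId)
        .WillCascadeOnDelete(false);

    modelBuilder.Entity<User>()
        .HasRequired(u => u.UpdatedByUser)
        .WithMany()
        .HasForeignKey(u => u.UpdatedByUserId)
        .WillCascadeOnDelete(false);
}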
I have a database, and I have entity POCO's, and all I want to use EF for is to map between the two and keep track of changes for loading, saving, etc.
I have been reading a lot of the literature (such as it is) on "Code First", and I am unclear on how much of the database information I need to supply when there is not going to be a database generated.
For example, does EF need to know which properties are keys, the maximum length of string properties, the relationships between the tables, etc.? Or if it does need to know, can it get that information from the database itself? In other words, do I have to provide [Key] annotations and such, or provide configuration information detailing the foreign-key relationships, if no database needs to be created?
UPDATE: To make things a little clearer, the following code is what I am talking about. I have to manually create this class derived from DbContext. I could supply a lot of DB information about the properties in OnModelCreating, or in attributes attached to the properties in the entity classes.
public class SchedulerContext : DbContext
{
    public SchedulerContext(EntityConnection connection)
        : base(connection)
    {
    }

    public DbSet<Client> Clients { get; set; }
    public DbSet<ConsultantDistrict> ConsultantDistricts { get; set; }
    public DbSet<ConsultantInterviewSetting> ConsultantInterviewSettings { get; set; }
    public DbSet<ConsultantUnavailable> ConsultantsUnavailable { get; set; }
    public DbSet<CustomEmailTemplate> CustomEmailTemplates { get; set; }
    public DbSet<DateEvent> DateEvents { get; set; }
    public DbSet<Event> Events { get; set; }
    public DbSet<EventItem> EventItems { get; set; }
    public DbSet<EventItemUserViewed> EventItemsUserViewed { get; set; }
    public DbSet<FlaggedDate> FlaggedDates { get; set; }
    public DbSet<Interview> Interviews { get; set; }
    public DbSet<Interviewee> Interviewees { get; set; }
    public DbSet<IntervieweeNote> IntervieweeNotes { get; set; }
    public DbSet<InterviewEvent> InterviewEvents { get; set; }
    public DbSet<NotificationSent> NotificationsSent { get; set; }
    public DbSet<SchedulerRole> SchedulerRoles { get; set; }
    public DbSet<SiteEvent> SiteEvents { get; set; }
    public DbSet<UnavailableHour> UnavailableHours { get; set; }
    public DbSet<UserLogin> UserLogins { get; set; }
    public DbSet<UserSites> UserSites { get; set; }
    public DbSet<Visit> Visits { get; set; }

    protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);

        modelBuilder.Entity<ConsultantUnavailable>().MapSingleType().ToTable("ConsultantsUnavailable");
        modelBuilder.Entity<EventItemUserViewed>().MapSingleType().ToTable("EventItemsUserViewed");
    }
}
Yes, the EF does need information on string field lengths, foreign keys, etc., in the model. For example, if a DB FK has a cascade, the EF needs to know that so that it doesn't force you to manually delete detail records; if the EF is aware of the cascade it will let the DB handle that. Similarly, if the EF is aware that a key is store-generated (e.g., auto-incremented), it won't complain when you don't set it on a new record, because it will presume that the DB will do that.
However, the code-only approach follows "convention over configuration": you don't have to specify values that the EF can guess. You can read about those here.
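As a hedged illustration of that split (the Name property and its length are hypothetical, and this uses the released DbModelBuilder API rather than the CTP syntax shown above): keys named Id are discovered by convention, while facts EF cannot guess from the class, such as a column's maximum length, must be stated so the model matches the existing database.

public class Client
{
    public int Id { get; set; }      // key by convention (named "Id")
    public string Name { get; set; } // nvarchar(max) unless configured
}

public class SchedulerContext : DbContext
{
    public DbSet<Client> Clients { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Hypothetical: the existing column is nvarchar(50), so say so.
        modelBuilder.Entity<Client>()
            .Property(c => c.Name)
            .HasMaxLength(50);
    }
}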
If you are doing Code Only, the EF doesn't look at the DB at all when creating the model.
There is no way to tell the EF to look at code and the DB to create the model. You have to choose one or the other.