Entity Framework; Object-Oriented delete - entity-framework

I'm building out a stock management system at the moment, using Entity Framework 4.
A little background, my entities go a bit like this (only showing needed info)
Product --> ProductLocations
WarehouseLocation --> has many ProductLocations
Each ProductLocation has a Quantity
What I'm trying to do is have it so that when you call something like Product.TakeFromLocation(wl as WarehouseLocation, qty as Integer), it deletes the ProductLocation if its quantity falls to zero.
However... Product is an entity, as is ProductLocation, and they are meant to be persistence ignorant. I'm using the POCO EF templates, with a couple of modifications so that they produce an IEntities interface and generate a FakeEntities class using in-memory versions for testing. This means my entities don't know anything about Entity Framework, and don't inherit from anything or implement any interfaces, so Context.DeleteObject() is out of bounds.
Anyone encountered a similar scenario and got any ideas on how to get round this?
I kind of thought that if SaveChanges() on the context is a partial method, I could extend it to check for 0-quantities - but then it's going to do this for all saves, which is a bit of a drag on 90%+ of operations that aren't doing this.

I would do it in the SavingChanges event, not SaveChanges().
As for the performance "overhead," I suspect it's somewhere between trivial and un-measurable.
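For illustration, here's a minimal sketch of that approach (the concrete context name Entities is an assumption based on the question's IEntities/FakeEntities setup, and ProductLocation/Quantity come from the question). The handler is wired up wherever the real context is created, so the entities themselves stay persistence ignorant:
var context = new Entities(); // the generated POCO context; name is an assumption
context.SavingChanges += (sender, e) =>
{
    var objectContext = (ObjectContext)sender;

    // Find tracked ProductLocations whose quantity has dropped to zero...
    var emptyLocations = objectContext.ObjectStateManager
        .GetObjectStateEntries(EntityState.Added | EntityState.Modified)
        .Select(entry => entry.Entity)
        .OfType<ProductLocation>()
        .Where(pl => pl.Quantity == 0)
        .ToList();

    // ...and delete them before the changes are persisted.
    foreach (var location in emptyLocations)
    {
        objectContext.DeleteObject(location);
    }
};
That keeps the rule out of the entities, and the only cost for the 90%+ of saves that never touch ProductLocation quantities is a quick scan of the change set.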

Related

How do we switch from Telerik Open Access to anything else?

Our company has been using Telerik Open Access for years. We have multiple projects using it, including some in development and some in production that need to be updated. Because Telerik no longer updates or supports Open Access, we are having a variety of problems. We've got users that have to go to another workstation because we can't get Open Access on their computers, and we've got projects where we can't add or update tables because the visual designer doesn't work in modern Visual Studio versions. So my question is, how do we convert these, and what do we convert them to?
I've heard of Microsoft Entity Framework, and we used to just call stored procedures instead of having a separate data project. Obviously our clients aren't going to pay us for the hours to switch, so we need something that works quickly. How do we convert our existing Telerik Open Access project to Entity Framework, straight SQL queries, or some other data layer option?
Here's an example of what we have currently.
A separate Visual Studio project that acts as our data layer where all the code was created by Telerik Open Access's visual designer:
We then have a DataAccess.cs class in our main project that creates the instance of the data layer:
Then we call it by using linq statements in the main project:
I don't know OA hands-on, so I can only put my two cents in.
I'm afraid this isn't going to be an easy transition. I've yet to see the first seamless transition from one data layer implementation to another (and I've seen a few). The main cause for this is that IQueryable is a leaky abstraction. That is, the data layer exposes IQueryables, but
it doesn't support all features of the interface, and
it adds its own features, and
it's got its own interpretation of how to implement the features that are supported.
This means that if you're going to port your data layer to EF, you may notice that some LINQ queries throw runtime errors because they contain unsupported .NET methods/properties (for instance DateTime.Date), or perform worse -- or better -- or return data in subtly different shapes or sorting orders.
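For instance (a hedged sketch; DockReports and CreatedOn are just placeholder names), a query that evaluates happily in memory can blow up once EF has to translate it:
// Throws NotSupportedException in EF6: DateTime.Date cannot be translated to SQL.
var today = context.DockReports
    .Where(r => r.CreatedOn.Date == DateTime.Today)
    .ToList();

// The EF6-translatable equivalent uses the canonical helper instead:
var todayReports = context.DockReports
    .Where(r => DbFunctions.TruncateTime(r.CreatedOn) == DateTime.Today)
    .ToList();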
Some important OA features that are missing in EF:
Runtime mappings (EF's mapping is static)
Bulk update/delete functions (with EF: only by using third-party libraries)
Second-level cache
Profiler and Tuning Advisor
Streaming of large objects
Mixing database-side and client-side evaluation of LINQ queries (EF6: only db-evaluation)
On the other hand, the basic architectures of OA and EF don't seem to be too different. They both -
support POCOs
work with simple navigation properties
have a fluent mapping API
support LINQ through IQueryable<T>, where T is an entity class.
most importantly: both revolve around the Unit of Work and Repository patterns (for EF: DbContext and DbSet, respectively) -- see the sketch below.
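In EF6 terms those last three points look roughly like this (a sketch only; DockReport is borrowed from the snippet further down and its shape is assumed):
public class WarehouseContext : DbContext                  // unit of work
{
    public DbSet<DockReport> DockReports { get; set; }     // repository / entity set

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // fluent mapping instead of the OA visual designer
        modelBuilder.Entity<DockReport>()
                    .HasKey(d => d.Id)
                    .ToTable("DockReport");
    }
}

public class DockReport                                     // plain POCO entity
{
    public int Id { get; set; }
    public DateTime CreatedOn { get; set; }
}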
All in all it's going to be a delicate process of converting, trying and testing. One good thing is that your current DAL is already abstracted to a certain extent. Another is that the syntax doesn't even look too different. Where you have ...
dbContext.Add(newDockReport);
dbContext.SaveChanges();
... using EF this would become ...
dbContext.DockReports.Add(newDockReport);
dbContext.SaveChanges();
With EF-core it wouldn't even have to change one bit.
But that's another important choice to make: EF6 or EF-core? EF6 is stable, mature, and feature-rich, but at the end of its life cycle (a phrase you've probably come to hate by now). EF-core, on the other hand, is the future, but it is presently struggling to get bug-free in its major functions and is not yet as feature-rich as EF6 (some features will never return, although other new features are great improvements over EF6). At the moment I'd be wary of using EF-core for enterprise applications, but I'm pretty sure that a year from now this will no longer be an issue.
Whichever way you go, I'd start the process by writing large amounts of integration tests, if you didn't do so already. The advantage of integration tests is that you can avoid the hassle of mocking either framework first (which isn't trivial).
I have never heard of a tool that can do that. Expect it to take time.
You have to figure out how to do it yourself:
(this is the safest way I know to migrate)
First, pick a simple page that uses your data layer; it will be your test page. Choose one simple enough that you can adapt the LINQ logic.
Add a LINQ to SQL class: right-click > Add > LINQ to SQL Classes.
Drop onto it only the tables this page needs, and add the associations if required.
In the test page, create a new data context from the LINQ to SQL model (see the sketch after these steps).
Use it, fixing the types and renaming whatever has to be renamed.
Fix errors until it compiles.
Stock up on coffee and anything else that can boost your brain.
Add tables to your context to match the ones you had in Telerik Data Access.
Debug for days.
Come back with a new question on how to fix a new issue twice a day.
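As a rough sketch of the test-page step above (WarehouseDataContext and DockReport stand in for whatever the LINQ to SQL designer generates for your tables):
using (var db = new WarehouseDataContext())
{
    // the same LINQ shape you had against the OA context, retyped against the new one
    var openReports = db.DockReports
                        .Where(r => !r.IsClosed)
                        .OrderBy(r => r.CreatedOn)
                        .ToList();
}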
To help with the dbContext.Add() difference between the two frameworks, you could use this extension method in EF 6.x:
public static void Add<T>(this DbContext db, T entityToCreate) where T : class
{
    db.Set<T>().Add(entityToCreate);
    db.SaveChanges();
}
then you could do :
db.Add(obj);
See Gert Arnold's answer: In Entity Framework, how do I add a generic entity to its corresponding DbSet without a switch statement that enumerates all the possible DbSets?

Entity Framework 4.1 for a large number of tables (715)

I'm developing a data access layer for a database with over 700 tables. I created the model including all the tables, which generated a huge model. I then changed the model to use DbContext from 4.1, which seemed to improve how it compiled and worked. The designer didn't seem to work at all.
I then created a test app which just added two records to the table, but the processor went to 100% in the db.SaveChanges method. Being a black box, it was difficult to ascertain what went wrong.
So my questions are
Is Entity Framework the best approach to a large database?
If so, should the model be broken down into logical areas? I did note that you can't have the same SQL table in multiple models.
I have read that the code-only approach is best in these large cases. What is that?
Any guidance would be truly appreciated
Thanks
A large database is always something special. Any technology has some pros and cons when working with a large database.
The problem you have encountered is most probably related to building the model. When you start the application and use EF-related stuff for the first time, EF must build the model description and compile it - this is the most time-consuming operation you can find in EF. The complexity of this operation grows with the number of entities in the model. Once the model is compiled it is reused for the whole lifetime of the application (if you restart the application or unload the application domain, the model must be compiled again). You can avoid this by precompiling the model. That is done at design time, where you use some tool to generate code from the model and include that code in your project (it must be done again after each change in the model). For EDMX-based models you can use EdmGen.exe to generate views, and for code-first-based models you can use EF Power Tools CTP1.
EDMX (the designer) was improved in VS 2010 SP1 to be able to work with large models, but I still think "large" in this case means around 100 entities / tables. At the same time, you rarely need 715 tables in the same model. I believe that these 715 tables actually model several domains, so you can divide them into multiple models.
The same is true when you are using DbContext and code first. If you model a class, do you think it is correct design for the class to expose 715 properties? I don't think so, but that is exactly what your derived DbContext looks like - it has a public property for each exposed entity set (in the simplest mapping that means one property per table).
The same entity can be used in multiple models, but you should try to avoid that as much as possible because it can introduce some complexities when loading an entity in one context type and using it in another context type.
Code only = code first = Entity Framework where you define the mapping in code without using EDMX.
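A rough sketch of what dividing the model could look like with code first (all class names here are made up for illustration; the entity classes themselves are omitted):
// Instead of one context exposing 715 sets, each functional area gets its own small context.
public class OrderingContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
    public DbSet<OrderLine> OrderLines { get; set; }
}

public class InventoryContext : DbContext
{
    public DbSet<Product> Products { get; set; }
    public DbSet<StockMovement> StockMovements { get; set; }
}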
Take a look at this post:
http://blogs.msdn.com/b/adonet/archive/2008/11/24/working-with-large-models-in-entity-framework-part-1.aspx

Entity Framework Table Per Type Performance

So it turns out that I am the last person to discover the fundamental flaw that exists in Microsoft's Entity Framework when implementing TPT (Table Per Type) inheritance.
Having built a prototype with 3 subclasses, the base table/class consisting of 20+ columns and the child tables consisting of ~10 columns, everything worked beautifully and I continued to work on the rest of the application, having proved the concept. Now the time has come to add the other 20 subtypes and OMG, I've just started looking at the SQL being generated on a simple select, even though I'm only interested in accessing the fields on the base class.
This page has a wonderful description of the problem.
Has anyone gone into production using TPT and EF, are there any workarounds that will mean that I won't have to:
a) Convert the schema to TPH (which goes against everything I try to achieve with my DB design - urrrgghh!)?
b) rewrite with another ORM?
The way I see it, I should be able to add a reference to a stored procedure from within EF (probably using EFExtensions) that has the T-SQL that selects only the fields I need; even using the code generated by EF for the monster UNION/JOIN inside the SP would prevent the SQL from being generated on every call - not something I would intend to do, but you get the idea.
The killer I've found is that when I'm selecting a list of entities linked to the base table (but the entity I'm selecting is not a subclass table), and I want to filter by the PK of the base table, and I do .Include("BaseClassTableName") so I can filter using x => x.BaseClass.PK == 1 and access other properties, it performs the monster SQL generation here too.
I can't use EF4 as I'm limited to the .NET 2.0 runtime with 3.5 SP1 installed.
Has anyone got any experience of getting out of this mess?
This seems a bit confused. You're talking about TPT, but when you say:
The way I see it, I should be able to add a reference to a stored procedure from within EF (probably using EFExtensions) that has the T-SQL that selects only the fields I need; even using the code generated by EF for the monster UNION/JOIN inside the SP would prevent the SQL from being generated on every call - not something I would intend to do, but you get the idea.
Well, that's Table per Concrete Class mapping (using a proc rather than a table, but still, the mapping is TPC...). The EF supports TPC, but the designer doesn't. You can do it in code-first if you get the CTP.
Your preferred solution of using a proc will cause performance problems if you restrict queries, like this:
var q = from c in Context.SomeChild
        where c.SomeAssociation.Foo == foo
        select c;
The DB optimizer can't see through the proc implementation, so you get a full scan of the results.
So before you tell yourself that this will fix your results, double-check that assumption.
Note that you can always specify custom SQL for any mapping strategy with ObjectContext.ExecuteStoreQuery.
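For example (a hedged sketch; BaseSummary is a hand-written DTO and the SQL is purely illustrative):
// The result columns are mapped onto BaseSummary's properties by name.
public class BaseSummary
{
    public int Id { get; set; }
    public string Name { get; set; }
}

var baseRows = objectContext
    .ExecuteStoreQuery<BaseSummary>(
        "SELECT Id, Name FROM dbo.BaseTable WHERE Id = {0}", baseId)
    .ToList();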
However, before you do any of this, consider that, as RPM1984 points out, your design seems to overuse inheritance. I like this quote from NHibernate in Action:
[A]sk yourself whether it might be better to remodel inheritance as delegation in the object model. Complex inheritance is often best avoided for all sorts of reasons unrelated to persistence or ORM. [Your ORM] acts as a buffer between the object and relational models, but that doesn't mean you can completely ignore persistence concerns when designing your object model.
We've hit this same problem and are considering porting our DAL from EF4 to LLBLGen because of this.
In the meantime, we've used compiled queries to alleviate some of the pain:
Compiled Queries (LINQ to Entities)
This strategy doesn't prevent the mammoth queries, but the time it takes to generate the query (which can be huge) is only done once.
You can use compiled queries with Include() like this:
static readonly Func<AdventureWorksEntities, int, Subcomponent> subcomponentWithDetailsCompiledQuery =
    CompiledQuery.Compile<AdventureWorksEntities, int, Subcomponent>(
        (ctx, id) => ctx.Subcomponents
                        .Include("SubcomponentType")
                        .Include("A.B.C.D")
                        .FirstOrDefault(s => s.Id == id));

public Subcomponent GetSubcomponentWithDetails(int id)
{
    return subcomponentWithDetailsCompiledQuery.Invoke(ObjectContext, id);
}

Entity Framework - Foreign key constraints not added for inherited entity

It appears to me that a strange phenomenon is occurring with inherited entities (TPT) in EF4.
I have three entities.
1. Asset
2. Property
3. Activity
Property is a derived-type of Asset.
Property has many activities (many-to-many)
When modeling this in my EDMX, everything seems fine until I try to insert a new Property into the database. If the property does not contain any Activity, it works, but all hell breaks loose when I add some new activities to the new Property.
As it turns out after 2 days of crawling the web and fiddling around, I noticed that in the EF store (SSDL) some of the constraints between entities were not picked up during the update process.
The Property_Activity table which links properties and activities showed only one constraint, FK_Property_Activity_Activity, but FK_Property_Activity_Property was missing.
I knew this was an Entity Framework anomaly because when I switched the relationship in the database to:
Asset <--> Asset_Activity <--> Activity
After an update, all foreign key constraints are picked up and the save is successful, with or without activities in the new property.
Is this intended or a bug in EF?
How do I get around this problem?
Should I abandon inheritance altogether?
Not a bug, but a poor visual designer.
It's generally best to simply manage the entity XML by hand.
Not using inheritance works well for many situations.
Basically I use Update Model from Database in the visual designer, but knowing that the designer has its quirks, I simply use it to stub out the basics of what I want. Then I go into the entity XML myself and clean it up the way I want. Just a note: complex types are a pain with the designer. If you plan to use complex types, get ready to learn your entity XML well.

What are the pros/cons of returning POCO objects from a Repository rather than EF Entities?

Following the way Rob does it, I have the classes that are generated by the LINQ to SQL wizard, and then a copy of those classes that are POCOs. In my repositories I return these POCOs rather than the LINQ to SQL models:
return from c in DataContext.Customer
       where c.ID == id
       select new MyPocoModels.Customer { ID = c.ID, Name = c.Name };
I understand that the benefit of this is that the POCO models can be instantiated more easily, which will make my code more testable.
I'm now moving from LINQ to SQL over to Entity Framework and I'm about half way through an EF book. It seems there's a lot of goodness I'm going to lose out on by returning POCOs from my repositories rather than the EF entities.
I still haven't really embraced unit testing, so I feel like I'm wasting a lot of time creating these extra POCOs and writing the code to populate them, when all I appear to be gaining is testable code, yet I'm also going to lose out on a lot of the benefits of EF by not being able to track my objects.
Does anyone have any advice for a relative newb to all this ORM/Repository stuff?
Anthony
Another reason people don't like the auto-generated objects (in LINQ to SQL for example) is their built-in "magic".
Usually the magic is invisible and you never notice it, but when you try to do things like serialize one of those objects and then deserialize it (for example when using web services), its internal connection to the data source is broken and special hacks need to be employed to "put the magic back in".
With POCOs, you don't have to worry about those sorts of things and can get a better separation between your data and service layers. The downside of course is that you have to write lots of boring POCO -> magic object and magic object -> POCO conversion code. But in the end I think it's usually worth it, especially for large or complex projects.
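The conversion code referred to above usually ends up looking something like this (a sketch only; the LINQ to SQL Customer and MyPocoModels.Customer come from the earlier snippet, the rest is illustrative):
public static class CustomerMappings
{
    public static MyPocoModels.Customer ToPoco(this Customer entity)
    {
        return new MyPocoModels.Customer { ID = entity.ID, Name = entity.Name };
    }

    public static void ApplyTo(this MyPocoModels.Customer poco, Customer entity)
    {
        // copy scalar values back onto the tracked LINQ to SQL object before SubmitChanges()
        entity.Name = poco.Name;
    }
}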
The main reason is that a lot of people like to develop their model with a specific mindset: like DDD for instance. They might want to use a specific pattern (like Spec or State) for things like statuses (instead of enums) - or you might want to use a Factory for instantiation.
OO breaks when you try to use Tables as Objects when things get more complex. Simple sites work OK - but when you get to big big things, it gets ugly.
So - as always - it depends what you think your project will turn into.
My experience is that once you start writing complex queries, the .Include method is worthless and you will find yourself either:
a) writing a lot of queries to get the data you want, or
b) abusing anonymous types to load the data in a single query and then writing a lot of code just to pass that data to your entities (roughly like the sketch below).
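A hedged illustration of point (b) - the Customer/Order shapes are assumed, the pattern is what matters:
var data = (from c in context.Customers
            where c.ID == id
            select new
            {
                c.ID,
                c.Name,
                Orders = c.Orders.Select(o => new { o.ID, o.Total })
            }).Single();

// ...followed by hand-written code just to move it into your entities/POCOs:
var customer = new MyPocoModels.Customer { ID = data.ID, Name = data.Name };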
POCOs are the way to go, IMHO.