TransactionScope with Object context on dependent objects - entity-framework

I'm working on an MVC3 application and I'm using Entity Framework linked to an Oracle database (11g R2).
I'm encountering an issue when trying to use a single object context inside a TransactionScope.
Here is the code:
using (TransactionScope scope = new TransactionScope())
{
    using (Entities context = new Entities())
    {
        // Right insert
        T_RIGRIGHT entity1 = new T_RIGRIGHT()
        {
            RIGCODE = "test1",
            RIGINSERTLOGIN = "aco",
            RIGINSERTDATE = DateTime.Now,
            RIGUPDATELOGIN = "aco",
            RIGUPDATEDATE = DateTime.Now
        };
        context.AddToT_RIGRIGHT(entity1);
        context.SaveChanges();

        // Right/Profile insert
        T_RIPRIGHTPROFILE entity2 = new T_RIPRIGHTPROFILE()
        {
            PROID = 3,
            RIGID = entity1.RIGID,
            RIPINSERTLOGIN = "aco",
            RIPINSERTDATE = DateTime.Now,
            RIPUPDATELOGIN = "aco",
            RIPUPDATEDATE = DateTime.Now
        };
        context.AddToT_RIPRIGHTPROFILE(entity2);
        context.SaveChanges(); // SaveChanges fails due to the FK constraint on the table
    }
    scope.Complete();
}
Let me explain the code...
First I create an entity called entity1 as a T_RIGRIGHT element.
Then I instantiate a T_RIPRIGHTPROFILE element that uses the id of the T_RIGRIGHT element created before.
The execution fails on the second context.SaveChanges(), and the exception concerns the foreign key constraint on the table T_RIPRIGHTPROFILE (it requires a T_RIGRIGHT).
I hope my explanations are clear enough.
Is there any way to make it work?
P.S.: I apologize for my English as it's not my native language.

You are trying to assign the FK entity1.RIGID of an entity that has not been committed to the DB:
RIGID = entity1.RIGID,
If you look at entity1 closely you will see that RIGID is 0 by default - instead you should set the navigation property representing the FK relationship:
RIG = entity1,
This will enable EF to properly relate these entities. For this, entity1 does not have to be committed to the DB yet, so you do not even need the extra SaveChanges() call.
Also, in your scenario you do not need a TransactionScope - EF already uses a transaction internally in SaveChanges(). Based on the suggested changes you only need one SaveChanges() call, and hence no outer transaction scope is needed.
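Put together, the corrected code might look like the following sketch (assuming the navigation property generated on T_RIPRIGHTPROFILE is indeed named RIG, as suggested above):
using (Entities context = new Entities())
{
    T_RIGRIGHT entity1 = new T_RIGRIGHT()
    {
        RIGCODE = "test1",
        RIGINSERTLOGIN = "aco",
        RIGINSERTDATE = DateTime.Now,
        RIGUPDATELOGIN = "aco",
        RIGUPDATEDATE = DateTime.Now
    };

    T_RIPRIGHTPROFILE entity2 = new T_RIPRIGHTPROFILE()
    {
        PROID = 3,
        RIG = entity1, // navigation property instead of RIGID = entity1.RIGID
        RIPINSERTLOGIN = "aco",
        RIPINSERTDATE = DateTime.Now,
        RIPUPDATELOGIN = "aco",
        RIPUPDATEDATE = DateTime.Now
    };

    context.AddToT_RIPRIGHTPROFILE(entity2); // entity1 is added through the relationship
    context.SaveChanges(); // one call: EF inserts the parent row first, then the child
}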

Related

Auto detection of changes with disconnected entities

I am making a simple editor on a web server that lets users change/add data in a single table stored on an MS SQL server.
I am using Entity Framework 6 to do this, and I am wondering how I should track the changes made to the entity model.
I would have hoped that I could load new data into the context, have the context automatically diff it against what's in the DB, and then call SaveChanges().
But from what I read online, it looks like I need to loop through all the data and check for myself what changed, so that I can then call Context.Entry(myEntry).State = Added or Context.Entry(myEntry).State = Modified.
Is there no way for EF to automatically detect what's new, what's modified and what's unchanged?
I would recommend passing ViewModels or DTOs to the view, then mapping them back to a freshly loaded entity on commit. EF will automatically update only the values that actually change: setting a property to the value it already holds will not trigger an update, whereas attaching an entity and setting its state to Modified will update all columns.

Passing entities, while convenient, exposes more about your data structure than your UI may present, and they can be tampered with before being sent back. Never trust anything coming back from the client. When serialized to a client, the data is no longer an entity; it is a JSON block of data. When sent back to the server, it isn't a tracked entity; it is a POCO with the entity's signature. No change tracking that EF entities provide will apply on the client or survive serialization/deserialization.
For example: given a Child entity that has a name and a birth date, we select a DTO to pass to the view. The view changes the name; we get the DTO back, copy all values (modified or otherwise) back into the entity, and call SaveChanges().
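The DTO class itself isn't shown in the answer; a minimal sketch of the shape the code below assumes:
public class ChildDto
{
    // Only the fields the view actually needs.
    public int ChildId { get; set; }
    public string Name { get; set; }
    public DateTime BirthDate { get; set; }
}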
// For example, loading the child in the controller to pass to the view...
ChildDto childDto = null;
using (var context = new TestDbContext())
{
    childDto = context.Children
        .Select(x => new ChildDto
        {
            ChildId = x.ChildId,
            Name = x.Name,
            BirthDate = x.BirthDate
        }).Single(x => x.ChildId == 1);
}
// View updates just the name...
childDto.Name = "Luke";

// Example if the view passed DTO back to controller to update...
using (var context = new TestDbContext())
{
    var child = context.Children.Single(x => x.ChildId == 1);
    child.Name = childDto.Name;
    child.BirthDate = childDto.BirthDate;
    context.SaveChanges();
}
If the name changed and the birth date did not, the EF-generated UPDATE statement would only update Name. If the name was already "Luke", then no UPDATE statement would be issued at all. You can verify this behavior with an SQL profiler to see if/when/what SQL EF sends to the database.
Automapper can help simplify this for getting the DTO back into the entity:
var mappingConfig = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<Child, ChildDto>();
    cfg.CreateMap<ChildDto, Child>();
});
Then when reading, leverage ProjectTo instead of Select:
using (var context = new TestDbContext())
{
    childDto = context.Children
        .ProjectTo<ChildDto>(mappingConfig)
        .Single(x => x.ChildId == 1);
}
... and when updating the entity:
using (var context = new TestDbContext())
{
    var child = context.Children.Single(x => x.ChildId == 1);
    var mapper = mappingConfig.CreateMapper();
    mapper.Map(childDto, child); // copies values from DTO to the entity instance.
    context.SaveChanges();
}
It's important to validate the DTO prior to copying values across to the entity, whether doing it manually or with Automapper. The Automapper config can also be set up to copy across only the values that are expected/allowed to change, as sketched below.
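As a sketch of that last point, assuming only the name is allowed to change, the reverse map can simply ignore the other members (ForMember/Ignore is standard AutoMapper configuration):
var mappingConfig = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<Child, ChildDto>();
    cfg.CreateMap<ChildDto, Child>()
        .ForMember(dest => dest.ChildId, opt => opt.Ignore())    // never overwrite the key
        .ForMember(dest => dest.BirthDate, opt => opt.Ignore()); // not editable in this view
});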

How to insert parent/child while doing EntityFramework.BulkInsert?

I am using Entity Framework 6 and I am trying to insert parent-child data into the database.
I am using EntityFramework.BulkInsert to insert the data. I have an auto-increment int primary key in all the tables.
My object is as follows:
var parentObjects = new List<parentObject>();
var childObjects = new List<childObject>();
for (int i = 0; i <= 100; i++)
{
    var parentObj = new parentObject()
    {
        Name = "p1",
        Address = "a1"
    };
    childObjects = SeedInitializer.ChildItems.OrderBy(x => new Random().Next()).Take(2).ToList(); // this gets 2 child objects
    foreach (var childObj in childObjects)
    {
        childObj.ParentObject = parentObj;
        //childObj.CommissionPlanId = i; // tried this, still not working
        parentObj.ChildObjects.Add(childObj);
    }
    parentObjects.Add(parentObj);
}
// When I do a quickwatch on parentObjects, I see child objects in each parentObject,
// but with the last id of parentObject
context.BulkInsert(parentObjects, 1000);
context.SaveChanges();
On save, only 2 records are created in the childObject table, and with a wrong parentObject id, i.e. 0.
I am not able to understand why the child items are not getting created while the parent objects are. Can someone help me understand where I am making a mistake?
Disclaimer: I'm the owner of EntityFramework.BulkInsert
You cannot.
This feature has never been implemented.
Disclaimer: I'm the owner of Entity Framework Extensions
However, this new library (not free) can easily handle this kind of scenario.
BulkSaveChanges works exactly like SaveChanges (it handles parent/child relationships) but is way faster!
All methods are supported:
Bulk SaveChanges
Bulk Insert
Bulk Delete
Bulk Update
Bulk Merge
Example
// Easy to use
context.BulkSaveChanges();
// Easy to customize
context.BulkSaveChanges(bulk => bulk.BatchSize = 100);
I do not think there is an easy way to accomplish this task, because in order to insert the children you have to actually finish inserting the parents and get their ids. Normal EF inserts have the advantage that each INSERT also embeds a SELECT to fetch the just-generated identifier, so that EF can push it into the children (if any).
One possible solution is the following:
Add a Guid RefProperty to the ParentObject type which is also persisted
Add a Guid BatchId to the ParentObject type which is also persisted
Add a Guid RefProperty to the ChildObject type which is not persisted
Save the whole structure by using the following (mainly pseudocode) sequence
var batchId = Guid.NewGuid();
parentObjects.ForEach(item => item.BatchId = batchId);
// set RefProperty for all parents and children to reflect the proper parentage
TransactionScope scope = null;
try
{
    scope = new TransactionScope();
    context.BulkInsert(parentObjects, 1000);
    var newParents = context.ParentObjects.Where(_ => _.BatchId == batchId);
    var refPropMap = newParents.ToDictionary(_ => _.RefProperty, _ => _.ParentId);
    childObjects.ForEach(item => item.ParentId = refPropMap[item.RefProperty]);
    context.BulkInsert(childObjects, 1000);
    context.SaveChanges();
    scope.Complete();
}
finally
{
    scope?.Dispose();
}
Note: this is not tested.
This is quite ugly, but it should do the trick: it minimizes round-trips to SQL Server and still runs as one single transaction.
In order to make the SELECT faster, an index should be placed on the ParentObject table's BatchId column, covering (including) its key.
Alternative: change the design of these tables to use UNIQUEIDENTIFIER columns instead of auto-increment keys. This way, all identifiers can be set before making the inserts, as sketched below.
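A minimal, untested sketch of that alternative, assuming the key properties are changed to Guid Id/ParentId (hypothetical names, not from the question):
// Client-generated keys: no identity round-trip is needed to discover them.
foreach (var parentObj in parentObjects)
{
    parentObj.Id = Guid.NewGuid();
    foreach (var childObj in parentObj.ChildObjects)
    {
        childObj.Id = Guid.NewGuid();
        childObj.ParentId = parentObj.Id; // FK is known before any insert happens
    }
}
context.BulkInsert(parentObjects, 1000);
context.BulkInsert(parentObjects.SelectMany(p => p.ChildObjects).ToList(), 1000);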

Having a hard time with Entity Framework detached POCO objects

I want to use EF DbContext/POCO entities in a detached manner, i.e. retrieve a hierarchy of entities from my business tier, make some changes, then send the entire hierarchy back to the business tier to persist back to the database. Each BLL call uses a different instance of the DbContext. To test this I wrote some code to simulate such an environment.
First I retrieve a Customer plus related Orders and OrderLines:-
Customer customer;
using (var context = new TestContext())
{
    customer = context.Customers.Include("Orders.OrderLines").SingleOrDefault(o => o.Id == 1);
}
Next I add a new Order with two OrderLines:-
var newOrder = new Order { OrderDate = DateTime.Now, OrderDescription = "Test" };
newOrder.OrderLines.Add(new OrderLine { ProductName = "foo", Order = newOrder, OrderId = newOrder.Id });
newOrder.OrderLines.Add(new OrderLine { ProductName = "bar", Order = newOrder, OrderId = newOrder.Id });
customer.Orders.Add(newOrder);
newOrder.Customer = customer;
newOrder.CustomerId = customer.Id;
Finally I persist the changes (using a new context):-
using (var context = new TestContext())
{
    context.Customers.Attach(customer);
    context.SaveChanges();
}
I realise this last part is incomplete, as no doubt I'll need to change the state of the new entities before calling SaveChanges(). Do I Add or Attach the customer? Which entities' states will I have to change?
Before I can get to this stage, running the above code throws an exception:
An object with the same key already exists in the ObjectStateManager.
It seems to stem from not explicitly setting the ID of the two OrderLine entities, so both default to 0. I thought it was fine to do this as EF would handle things automatically. Am I doing something wrong?
Also, working in this "detached" manner, there seems to be a lot of work required to set up the relationships - I have to add the new order entity to the customer.Orders collection, set the new order's Customer property, and set its CustomerId property. Is this the correct approach, or is there a simpler way?
Would I be better off looking at self-tracking entities? I'd read somewhere that they are being deprecated, or at least being discouraged in favour of POCOs.
You basically have 2 options:
A) Optimistic.
You can proceed pretty close to the way you're proceeding now, and just attach everything as Modified and hope. The code you're looking for instead of .Attach() is:
context.Entry(customer).State = EntityState.Modified;
Definitely not intuitive. This weird-looking call attaches the detached (or newly constructed by you) object as Modified. Source: http://blogs.msdn.com/b/adonet/archive/2011/01/29/using-dbcontext-in-ef-feature-ctp5-part-4-add-attach-and-entity-states.aspx
If you're unsure whether an object has been added or modified you can use the last segment's example:
context.Entry(customer).State = customer.Id == 0 ?
    EntityState.Added :
    EntityState.Modified;
You need to take these actions on all of the objects being added/modified, so if this object is complex and has other objects that need to be updated in the DB via FK relationships, you need to set their EntityState as well.
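Applied to the question's graph, that would look roughly like the following untested sketch (variable names from the question; the order of the calls matters, since marking the new order as Added pulls the whole detached graph into the context):
using (var context = new TestContext())
{
    // Attaches newOrder and every reachable detached entity as Added,
    // including the two new OrderLines (duplicate 0 keys are allowed for Added entities).
    context.Entry(newOrder).State = EntityState.Added;

    // The customer already exists in the DB, so correct its state afterwards.
    context.Entry(customer).State = EntityState.Modified; // or EntityState.Unchanged

    context.SaveChanges();
}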
Depending on your scenario you can make these kinds of don't-care writes cheaper by using a different Context variation:
public class MyDb : DbContext
{
    . . .

    public static MyDb CheapWrites()
    {
        var db = new MyDb();
        db.Configuration.AutoDetectChangesEnabled = false;
        db.Configuration.ValidateOnSaveEnabled = false;
        return db;
    }
}
using (var db = MyDb.CheapWrites())
{
    db.Entry(customer).State = customer.Id == 0 ?
        EntityState.Added :
        EntityState.Modified;
    db.SaveChanges();
}
You're basically just disabling some extra calls EF makes on your behalf that you're ignoring the results of anyway.
B) Pessimistic. You can actually query the DB to verify the data hasn't changed/been added since you last picked it up, then update it if it's safe.
var existing = db.Customers.Find(customer.Id);
// Some logic here to decide whether updating is a good idea, like
// verifying selected values haven't changed, then
db.Entry(existing).CurrentValues.SetValues(customer);
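As a sketch of what that "some logic" might look like, assuming the entity carries a byte[] concurrency token column named RowVersion (an assumption; it is not in the question's model):
var existing = db.Customers.Find(customer.Id);
if (existing == null)
    throw new InvalidOperationException("The customer was deleted by another user.");
if (!existing.RowVersion.SequenceEqual(customer.RowVersion)) // compare rowversion byte arrays
    throw new InvalidOperationException("The customer was changed by another user.");
db.Entry(existing).CurrentValues.SetValues(customer);
db.SaveChanges();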

Entity Framework AddObject not adding Object to EntitySet

I have the following piece of code
private void DoAddPropertyType()
{
    var ctx = Globals.DbContext;
    var propType = new PropertyType()
    {
        ID = Guid.NewGuid(),
        Name = "NewType",
        Description = "New Property Type",
        ModifiedDate = DateTime.Now
    };
    ctx.AddToPropertyTypes(propType);
    PropertyTypes.Add(propType);
}
Globals.DbContext provides a static reference to the ObjectContext initiated on startup. For some reason the ctx.AddToPropertyTypes(propType); bit does not add the entity to the context. If I set a breakpoint after that line and browse the ctx.PropertyTypes entity set, it is not there. Any ideas?
EDIT 1:
If I add a ctx.SaveChanges() after the ctx.AddToPropertyTypes(propType) and step through, the actual adding appears to happen only once SaveChanges executes. This however does not suit my requirements, as I want to validate objects prior to saving, and I wanted to iterate through the entities in the entity set. Does anyone know of an alternative approach?
That is exactly the point of your issue: ctx.PropertyTypes is not a real collection - it is an entry point to the database, and your "browsing" actually executes a query against the database, where your new object has not yet been stored. If you want to find a new object added to the context without saving it first, you must search for the object inside the ObjectStateManager:
var entity = ctx.ObjectStateManager
    .GetObjectStateEntries(EntityState.Added)
    .Where(e => !e.IsRelationship)
    .Select(e => e.Entity)
    .OfType<PropertyType>()
    .SingleOrDefault(p => p.ID == ...);
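And since the stated goal is to validate everything pending before saving, the same query without the final filter enumerates all pending adds; Validate below is a hypothetical stand-in for your own checks:
var pendingAdds = ctx.ObjectStateManager
    .GetObjectStateEntries(EntityState.Added)
    .Where(e => !e.IsRelationship)
    .Select(e => e.Entity)
    .OfType<PropertyType>();

foreach (var propType in pendingAdds)
{
    Validate(propType); // hypothetical validation hook, runs before SaveChanges()
}
ctx.SaveChanges();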

EF Code First - Recreate Database If Model Changes

I'm currently working on a project which is using EF Code First with POCOs. I have 5 POCOs that so far depend on the POCO "User".
The POCO "User" should refer to my already existing Membership table "aspnet_Users" (which I map it to in the OnModelCreating method of the DbContext).
The problem is that I want to take advantage of the "Recreate Database If Model Changes" feature that Scott Gu shows at http://weblogs.asp.net/scottgu/archive/2010/07/16/code-first-development-with-entity-framework-4.aspx - what the feature basically does is recreate the database as soon as it sees any changes in my POCOs. What I want it to do is recreate the database but somehow NOT delete the whole database, so that aspnet_Users stays alive. However, it seems impossible, as it either makes a whole new database or replaces the current one.
So my question is: am I doomed to define my database tables by hand, or can I somehow merge my POCOs into my current database and still make use of the feature without wiping it all?
As of EF Code First in CTP5, this is not possible. Code First will either drop and create your database or not touch it at all. I think in your case you should manually create your full database and then try to come up with an object model that matches the DB.
That said, the EF team is actively working on the feature you are looking for: altering the database instead of recreating it:
Code First Database Evolution (aka Migrations)
I was just able to do this in EF 4.1 with the following considerations:
CodeFirst
DropCreateDatabaseAlways
keeping the same connection string and database name
The database is still deleted and recreated - it has to be, for the schema to reflect your model changes - but your data remains intact.
Here's how: you read your database into your in-memory POCO objects, and then, after the POCO objects have successfully made it into memory, you let EF drop and recreate the database. Here is an example:
public class NorthwindDbContextInitializer : DropCreateDatabaseAlways<NorthwindDbContext>
{
    /// <summary>
    /// Connection from which to read the data to insert into the new database.
    /// Not the same connection instance as the DbContext's, but it may have the same connection string.
    /// </summary>
    DbConnection connection;
    Dictionary<Tuple<PropertyInfo, Type>, System.Collections.IEnumerable> map;

    public NorthwindDbContextInitializer(DbConnection connection, Dictionary<Tuple<PropertyInfo, Type>, System.Collections.IEnumerable> map = null)
    {
        this.connection = connection;
        this.map = map ?? ReadDataIntoMemory();
    }

    // Read data into memory BEFORE the database is dropped.
    Dictionary<Tuple<PropertyInfo, Type>, System.Collections.IEnumerable> ReadDataIntoMemory()
    {
        var map = new Dictionary<Tuple<PropertyInfo, Type>, System.Collections.IEnumerable>();
        switch (connection.State)
        {
            case System.Data.ConnectionState.Closed:
                connection.Open();
                break;
        }
        using (this.connection)
        {
            var metaquery = from p in typeof(NorthwindDbContext).GetProperties().Where(p => p.PropertyType.IsGenericType)
                            let elementType = p.PropertyType.GetGenericArguments()[0]
                            let dbsetType = typeof(DbSet<>).MakeGenericType(elementType)
                            where dbsetType.IsAssignableFrom(p.PropertyType)
                            select new Tuple<PropertyInfo, Type>(p, elementType);
            foreach (var tuple in metaquery)
            {
                map.Add(tuple, ExecuteReader(tuple));
            }
            this.connection.Close();
            // Call Delete explicitly; if you let the framework do it implicitly,
            // it will complain that the connection is in use.
            Database.Delete(this.connection);
        }
        return map;
    }

    protected override void Seed(NorthwindDbContext context)
    {
        foreach (var keyvalue in this.map)
        {
            foreach (var obj in (System.Collections.IEnumerable)keyvalue.Value)
            {
                PropertyInfo p = keyvalue.Key.Item1;
                dynamic dbset = p.GetValue(context, null);
                dbset.Add((dynamic)obj);
            }
        }
        context.SaveChanges();
        base.Seed(context);
    }

    System.Collections.IEnumerable ExecuteReader(Tuple<PropertyInfo, Type> tuple)
    {
        DbCommand cmd = this.connection.CreateCommand();
        cmd.CommandText = string.Format("select * from [dbo].[{0}]", tuple.Item2.Name);
        DbDataReader reader = cmd.ExecuteReader();
        using (reader)
        {
            ConstructorInfo ctor = typeof(Test.ObjectReader<>).MakeGenericType(tuple.Item2)
                .GetConstructors()[0];
            ParameterExpression p = Expression.Parameter(typeof(DbDataReader));
            LambdaExpression newlambda = Expression.Lambda(Expression.New(ctor, p), p);
            var objreader = (System.Collections.IEnumerable)newlambda.Compile().DynamicInvoke(reader);
            MethodCallExpression toArray = Expression.Call(typeof(Enumerable),
                "ToArray",
                new Type[] { tuple.Item2 },
                Expression.Constant(objreader));
            LambdaExpression lambda = Expression.Lambda(toArray, Expression.Parameter(typeof(IEnumerable<>).MakeGenericType(tuple.Item2)));
            var array = (System.Collections.IEnumerable)lambda.Compile().DynamicInvoke(new object[] { objreader });
            return array;
        }
    }
}
This example relies on an ObjectReader class, which you can find here if you need it.
I wouldn't bother with the blog articles; read the documentation.
Finally, I would still suggest you always back up your database before running the initialization (e.g. if the Seed method throws an exception, all your data is in memory, so you risk losing your data once the program terminates). A model change isn't exactly an afterthought action anyway, so be sure to back your data up.
One thing you might consider is using a "disconnected" foreign key. You can leave the ASPNETDB alone and just reference the user in your DB using the user's key (a Guid). You can access the logged-in user as follows:
MembershipUser currentUser = Membership.GetUser(User.Identity.Name, true /* userIsOnline */);
And then use the User's key as a FK in your DB:
Guid UserId = (Guid)currentUser.ProviderUserKey;
This approach architecturally decouples your DB from the ASPNETDB and the associated provider. Operationally, however, the data will of course be loosely connected, since the IDs will live in each DB. Note also that there will be no referential constraints, which may or may not be an issue for you.
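A minimal sketch of what this looks like on the POCO side, using a hypothetical Post entity (names are illustrative, not from the question):
public class Post
{
    public int Id { get; set; }
    public string Title { get; set; }

    // "Disconnected" FK: stores the aspnet_Users key with no referential
    // constraint, since the user lives in the separate ASPNETDB database.
    public Guid UserId { get; set; }
}

// When creating a record for the current member:
MembershipUser currentUser = Membership.GetUser(User.Identity.Name, true /* userIsOnline */);
var post = new Post
{
    Title = "Hello",
    UserId = (Guid)currentUser.ProviderUserKey
};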