Following Julia Lerman's book 'DbContext' in an N-Tier solution for keeping track of changes, I provided each entity with a State property and an OriginalValues dictionary (through IObjectWithState). After the entity is constructed, I copy the original values into this dictionary. See this sample (example 4-23) from the book:
public BreakAwayContext()
{
    ((IObjectContextAdapter)this).ObjectContext.ObjectMaterialized += (sender, args) =>
    {
        var entity = args.Entity as IObjectWithState;
        if (entity != null)
        {
            entity.State = State.Unchanged;
            entity.OriginalValues = BuildOriginalValues(this.Entry(entity).OriginalValues);
        }
    };
}
In the constructor of the BreakAwayContext (inherited from DbContext), the ObjectMaterialized event is handled. To retrieve the original values of the entity, the DbEntityEntry is obtained from the context by the call to this.Entry(entity). This call is slowing the process down: 80% of the time in this event handler is spent on that call.
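For reference, the BuildOriginalValues helper used above essentially copies each original property value into the dictionary; a rough sketch (adapted from the book's pattern, not verbatim):

private static Dictionary<string, object> BuildOriginalValues(DbPropertyValues originalValues)
{
    var result = new Dictionary<string, object>();
    foreach (var propertyName in originalValues.PropertyNames)
    {
        var value = originalValues[propertyName];
        var complexValues = value as DbPropertyValues;
        if (complexValues != null)
            result[propertyName] = BuildOriginalValues(complexValues); // nested complex type
        else
            result[propertyName] = value;
    }
    return result;
}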
Is there a faster way to retrieve the original values or the entity's DbEntityEntry?
Context.Entry() calls DetectChanges(), whose cost depends on the number of objects in the context and can be very slow. In your case you could replace it with the faster version: ((IObjectContextAdapter)ctx).ObjectContext.ObjectStateManager.GetObjectStateEntry(obj);
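Applied to the handler above, the change could look roughly like this. Note that ObjectStateEntry exposes original values as a DbDataRecord rather than a DbPropertyValues, so the BuildOriginalValues overload below is an assumption added for illustration:

((IObjectContextAdapter)this).ObjectContext.ObjectMaterialized += (sender, args) =>
{
    var entity = args.Entity as IObjectWithState;
    if (entity != null)
    {
        entity.State = State.Unchanged;

        // Go straight to the ObjectStateManager instead of DbContext.Entry(),
        // avoiding the implicit DetectChanges() call.
        var objectContext = ((IObjectContextAdapter)this).ObjectContext;
        var stateEntry = objectContext.ObjectStateManager.GetObjectStateEntry(args.Entity);
        entity.OriginalValues = BuildOriginalValues(stateEntry.OriginalValues);
    }
};

// Hypothetical overload working on the DbDataRecord exposed by ObjectStateEntry:
private static Dictionary<string, object> BuildOriginalValues(System.Data.Common.DbDataRecord record)
{
    var result = new Dictionary<string, object>();
    for (var i = 0; i < record.FieldCount; i++)
        result[record.GetName(i)] = record.GetValue(i);
    return result;
}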
I have a method that receives an IEnumerable<Guid> of IDs of objects I want to delete. One suggested approach is as follows:
foreach (Guid id in ids)
{
    var tempInstance = new MyEntity { Id = id };
    DataContext.Attach(tempInstance); // Exception here
    DataContext.Remove(tempInstance);
}
This works fine if the objects aren't already loaded into memory. But my problem is that when they are already loaded then the Attach method throws an InvalidOperationException - The instance of entity type 'MyEntity' cannot be tracked because another instance with the key value 'Id:...' is already being tracked. The same happens if I use DataContext.Remove without calling Attach.
foreach (Guid id in ids)
{
    var tempInstance = new MyEntity { Id = id };
    DataContext.Remove(tempInstance); // Exception here
}
I don't want to use DataContext.Find to grab the instance of an already loaded object because that will load the object into memory if it isn't already loaded.
I cannot use DataContext.ChangeTracker to find already loaded objects because only objects with modified state appear in there and my objects might be loaded and unmodified.
The following approach throws the same InvalidOperationException when setting EntityEntry.State, even when I override GetHashCode and Equals on MyEntity to ensure dictionary lookups see them as the same object.
foreach (Guid id in ids)
{
    var tempInstance = new MyEntity { Id = id };
    EntityEntry entry = DataContext.Entry(tempInstance);
    entry.State = EntityState.Deleted; // Exception here
}
The only way I have found so far to delete objects by ID, without knowing whether the object is already loaded, is the following:
foreach (Guid id in ids)
{
    var tempInstance = new MyEntity { Id = id };
    try
    {
        DataContext.Attach(tempInstance); // Exception here
    }
    catch (InvalidOperationException)
    {
    }
    DataContext.Remove(tempInstance);
}
It's odd that I am able to call DataContext.Remove(tempInstance) without error after the exception from Attach, but at this point it works without an exception and also deletes the correct rows from the database when DataContext.SaveChanges is executed.
I don't like catching the exception. Is there a "good" way of achieving what I want?
Note: If the class has a self-reference then you need to load the objects into memory so EntityFrameworkCore can determine in which order to delete the objects.
Strangely, although this is quite a common exception in EF6 and EF Core, neither of them publicly exposes a method for programmatically detecting the already tracked entity instance with the same key. Note that overriding GetHashCode and Equals doesn't help, since EF uses reference equality for tracking entity instances.
Of course the instance can be obtained from the DbSet<T>.Local property, but that would not be as efficient as the internal EF mechanism used by Find and by the methods throwing the aforementioned exception. All we need is the first part of the Find implementation, returning null when the entity is not found instead of loading it from the database.
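For illustration, the Local-based lookup (public API only, but a linear scan of the tracked instances) would look roughly like this:

// Linear scan of the already tracked instances; fine for small contexts.
var tracked = DataContext.Set<MyEntity>().Local.FirstOrDefault(e => e.Id == id);
DataContext.Remove(tracked ?? new MyEntity { Id = id });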
Luckily, for EF Core the method we need can be implemented relatively easily using some of the EF Core internals (under the standard "This API supports the Entity Framework Core infrastructure and is not intended to be used directly from your code. This API may change or be removed in future releases." policy). Here is a sample implementation, tested on EF Core 2.0.1:
using Microsoft.EntityFrameworkCore.Internal;

namespace Microsoft.EntityFrameworkCore
{
    public static partial class CustomExtensions
    {
        public static TEntity FindTracked<TEntity>(this DbContext context, params object[] keyValues)
            where TEntity : class
        {
            var entityType = context.Model.FindEntityType(typeof(TEntity));
            var key = entityType.FindPrimaryKey();
            var stateManager = context.GetDependencies().StateManager;
            var entry = stateManager.TryGetEntry(key, keyValues);
            return entry?.Entity as TEntity;
        }
    }
}
Now you can simply use:
foreach (var id in ids)
    DataContext.Remove(DataContext.FindTracked<MyEntity>(id) ?? new MyEntity { Id = id });
or
DataContext.RemoveRange(ids.Select(id =>
    DataContext.FindTracked<MyEntity>(id) ?? new MyEntity { Id = id }));
I have a big, complex object graph that I want to insert into the database using a DbContext and the SaveChanges method.
This object is the result of parsing a text file with 40k lines (around 3 MB of data). Some collections inside this object have thousands of items.
I am able to parse the file correctly and add it to the context so that it can start tracking the object. But when I try to SaveChanges, it says:
Microsoft.EntityFrameworkCore.DbUpdateException: An error occurred while updating the entries. See the inner exception for details. ---> System.Data.SqlClient.SqlException: String or binary data would be truncated.
I would like to know if there is a smart and efficient way of discovering which object is causing the issue. It seems that a varchar field is too small to store the data, but there are a lot of tables and fields to check manually.
I would like to get a more specific error somehow. I have already configured an ILoggerProvider and enabled the EnableSensitiveDataLogging option on my DbContext to see which SQL queries are being generated. I even added MiniProfiler to be able to see the parameter values, because they are not present in the log generated by the DbContext.
Reading around on the web, I found out that in EF6 there is some validation that happens before the SQL is passed to the database to be executed, but it seems that in EF Core this is not available anymore. So how can I solve this?
After some research, the only approach I've found to solve this is implementing some validation by overriding the DbContext's SaveChanges method. I merged these two approaches to build mine:
Implementing Missing Features in Entity Framework Core - Part 3
Validation in EF Core
The result is...
ApplicationDbContext.cs
// Requires System.ComponentModel.DataAnnotations (ValidationContext, Validator)
// and Microsoft.EntityFrameworkCore.Infrastructure (for GetService).

public override int SaveChanges(bool acceptAllChangesOnSuccess)
{
    ValidateEntities();
    return base.SaveChanges(acceptAllChangesOnSuccess);
}

public override async Task<int> SaveChangesAsync(bool acceptAllChangesOnSuccess, CancellationToken cancellationToken = new CancellationToken())
{
    ValidateEntities();
    return await base.SaveChangesAsync(acceptAllChangesOnSuccess, cancellationToken);
}

private void ValidateEntities()
{
    var serviceProvider = this.GetService<IServiceProvider>();
    var items = new Dictionary<object, object>();

    // Validate only entities that are about to be inserted or updated
    var entities = from entry in ChangeTracker.Entries()
                   where entry.State == EntityState.Added || entry.State == EntityState.Modified
                   select entry.Entity;

    foreach (var entity in entities)
    {
        var context = new ValidationContext(entity, serviceProvider, items);
        var results = new List<ValidationResult>();
        if (Validator.TryValidateObject(entity, context, results, true)) continue;

        foreach (var result in results)
        {
            if (result == ValidationResult.Success) continue;
            var errorMessage = $"{entity.GetType().Name}: {result.ErrorMessage}";
            throw new ValidationException(errorMessage);
        }
    }
}
Note that it's not necessary to override the other SaveChanges overloads, because they call these two.
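One caveat: Validator.TryValidateObject only reports what the entity classes declare, so for the truncation problem the string properties need length attributes matching the column sizes. A hypothetical example (Customer is an illustrative entity, not from the question):

using System.ComponentModel.DataAnnotations;

public class Customer                // hypothetical entity, for illustration only
{
    public int Id { get; set; }

    [MaxLength(50)]                  // matches nvarchar(50) in the database, so
    public string Name { get; set; } // TryValidateObject flags oversized values
}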
The error tells you that you're writing more characters to a field than it can hold.
For example, this error would be thrown when you create a field as NVARCHAR(4) or CHAR(4) and write 'hello' to it.
So you could simply check the length of the values you read in to find the one causing your problem: there is at least one value that is too long for its field.
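A way to automate that check, sketched against EF Core's model metadata (a method placed on the DbContext; it assumes the column sizes are configured in the model via attributes or the fluent API):

// Walk the change tracker and flag string values longer than the configured max length.
private IEnumerable<string> FindOversizedStrings()
{
    foreach (var entry in ChangeTracker.Entries()
        .Where(e => e.State == EntityState.Added || e.State == EntityState.Modified))
    {
        foreach (var property in entry.Metadata.GetProperties()
            .Where(p => p.ClrType == typeof(string)))
        {
            var maxLength = property.GetMaxLength();          // null when not configured
            var value = entry.CurrentValues[property] as string;
            if (maxLength.HasValue && value != null && value.Length > maxLength.Value)
                yield return $"{entry.Metadata.Name}.{property.Name}: {value.Length} chars (max {maxLength})";
        }
    }
}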
I have the following Update generic method for my entities:
public void Update<T>(T entity) where T : class
{
    DbEntityEntry dbEntityEntry = DbContext.Entry(entity);
    if (dbEntityEntry.State == System.Data.Entity.EntityState.Detached)
    {
        DbContext.Set<T>().Attach(entity);
    }
    dbEntityEntry.State = System.Data.Entity.EntityState.Modified;
}
After SaveChanges() the data is successfully updated in the DB.
Now I need to implement an audit log before SaveChanges(), but I noticed that CurrentValues are equal to OriginalValues:
// For updates, we only want to capture the columns that actually changed
if (!object.Equals(dbEntry.OriginalValues.GetValue<object>(propertyName), dbEntry.CurrentValues.GetValue<object>(propertyName)))
{
    // here I add a new Audit Log entity
}
Any clue on how to solve this? Or is there a better way to do it in Entity Framework 6?
If you are using a disconnected entity, you can set the original values without affecting the entity instance values. Adapt this method to your needs:
public static void LoadOriginalValues(this WorkflowsContext db, DbEntityEntry entity)
{
    var props = entity.GetDatabaseValues();
    foreach (var p in props.PropertyNames)
    {
        if (entity.Property(p).IsModified)
        {
            entity.Property(p).OriginalValue = props[p];
        }
    }
}
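Usage would be roughly as follows (WorkflowsContext is the answerer's own context type; substitute your DbContext subclass):

// After marking the disconnected entity as Modified (e.g. via the Update<T> method above),
// refresh its original values from the database so the audit comparison sees real changes.
Update(entity);                                        // attaches and sets State = Modified
DbContext.LoadOriginalValues(DbContext.Entry(entity)); // extension defined above
// Now OriginalValues holds the DB values and CurrentValues the incoming ones.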
The original values are recovered from the entity itself. If the entity is being tracked by a context, this information is available.
In your case, you're using a disconnected entity, so there is no change tracking, and the entity doesn't have the original values.
So, in this case, if you need the original values there is no other option than getting them from the DB and comparing them, one by one.
If you want to get an entity that behaves as if it had been tracked by the context, you can use a context to read the entity from the DB, and use something like ValueInjecter to automatically set the property values from the disconnected entity onto the tracked entity.
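A sketch of that last idea, using EF's own CurrentValues.SetValues instead of ValueInjecter (same effect: copy the scalar values of the disconnected entity onto a tracked one; MyEntity and Id are placeholders for your actual type and key):

// Load the tracked entity first (this captures the original values from the DB),
// then overwrite its current values with the disconnected entity's values.
var tracked = DbContext.Set<MyEntity>().Find(disconnected.Id);
DbContext.Entry(tracked).CurrentValues.SetValues(disconnected);
// OriginalValues now reflect the database, CurrentValues the incoming changes.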
I'm using EF5 and attaching a disconnected graph of POCO entities to my context, something like this:
using (var context = new MyEntities())
{
    context.Configuration.AutoDetectChangesEnabled = false;
    context.MyEntities.Attach(myEntity);

    // Code to walk the entity graph and set each entity's state
    // using ObjectStateManager omitted for clarity ..
    context.SaveChanges();
}
The entity "myEntity" is a large graph of entities, with many child collections, which in turn have their own child collections, and so on. The entire graph contains in the order of 10000 entities, but only a small number are usually changed.
The code to set the entity states and the actual SaveChanges() is fairly quick (<200ms). It's the Attach() that's the problem here, and takes 2.5 seconds, so I was wondering if this could be improved. I've seen articles that tell you to set AutoDetectChangesEnabled = false, which I'm doing above, but it makes no difference in my scenario. Why is this?
I am afraid that 2.5 seconds for attaching an object graph with 10,000 entities is "normal". It's probably the entity snapshot creation, which takes place when you attach the graph, that takes this time.
If "only a small number are usually changed" - say 100 - you could consider loading the original entities from the database and changing their properties instead of attaching the whole graph, for example:
using (var context = new MyEntities())
{
    // try with and without this line
    // context.Configuration.AutoDetectChangesEnabled = false;
    foreach (var child in myEntity.Children)
    {
        if (child.IsModified)
        {
            var childInDb = context.Children.Find(child.Id);
            context.Entry(childInDb).CurrentValues.SetValues(child);
        }
        //... etc.
    }
    //... etc.
    context.SaveChanges();
}
Although this will create a lot of single database queries, only "flat" entities without navigation properties are loaded, and attaching (which occurs when calling Find) won't consume much time. To reduce the number of queries you could also try loading entities of the same type as a "batch" using a Contains query:
var modifiedChildIds = myEntity.Children
    .Where(c => c.IsModified)
    .Select(c => c.Id);

// one DB query
context.Children.Where(c => modifiedChildIds.Contains(c.Id)).Load();

foreach (var child in myEntity.Children)
{
    if (child.IsModified)
    {
        // no DB query because the children are already loaded
        var childInDb = context.Children.Find(child.Id);
        context.Entry(childInDb).CurrentValues.SetValues(child);
    }
}
It's just a simplified example under the assumption that you only have to change scalar properties of the entities. It can become arbitrarily more complex if modifications of relationships (children added to and/or removed from the collections, etc.) are involved.
I want to invoke a validation function on my entity objects right before they are stored with ObjectContext.SaveChanges(). I could keep track of all changed objects myself and then loop through them, invoking their validation methods, but I suppose an easier approach would be to implement some callback that the ObjectContext invokes before saving each entity. Can the latter be done at all? Is there any alternative?
I've figured out how. Basically, we can handle the SavingChanges event of the ObjectContext and loop through the newly added/modified entities to invoke their validation functions. Here's the code I used.
partial void OnContextCreated()
{
    SavingChanges += PerformValidation;
}

void PerformValidation(object sender, System.EventArgs e)
{
    var objStateEntries = ObjectStateManager.GetObjectStateEntries(
        EntityState.Added | EntityState.Modified);

    var violatedRules = new List<RuleViolation>();
    foreach (ObjectStateEntry entry in objStateEntries)
    {
        var entity = entry.Entity as IRuleValidator;
        if (entity != null)
            violatedRules.AddRange(entity.Validate());
    }

    if (violatedRules.Count > 0)
        throw new ValidationException(violatedRules);
}
Well, you could do it that way, but it means that you're allowing your clients to access the ObjectContext directly, and, personally, I like to abstract that away in order to make the clients more testable.
What I do is use the repository pattern, and do the validation when save is called on a repository.
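A minimal sketch of that idea, reusing the IRuleValidator, RuleViolation, and ValidationException types from the answer above (the repository shape itself is just illustrative, not a specific library API):

public class Repository<T> where T : class
{
    private readonly ObjectContext _context;
    private readonly ObjectSet<T> _set;

    public Repository(ObjectContext context)
    {
        _context = context;
        _set = context.CreateObjectSet<T>();
    }

    public void Add(T entity)
    {
        _set.AddObject(entity);
    }

    public void Save()
    {
        // Validate pending inserts/updates before the context persists them,
        // so clients never need to touch the ObjectContext directly.
        var violatedRules = _context.ObjectStateManager
            .GetObjectStateEntries(EntityState.Added | EntityState.Modified)
            .Select(e => e.Entity)
            .OfType<IRuleValidator>()
            .SelectMany(v => v.Validate())
            .ToList();

        if (violatedRules.Count > 0)
            throw new ValidationException(violatedRules);

        _context.SaveChanges();
    }
}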