I have this method that deletes the object if it exists and then inserts the new instance either way:
internal void SaveCarAccident(WcfContracts.BLObjects.Contract.Dtos.CarAccident DTOCarAccident)
{
    using (var context = BLObjectsFactory.Create())
    {
        context.ContextOptions.ProxyCreationEnabled = false;
        CarAccident NewCarAccident = ConvertToCarAccident(DTOCarAccident);
        CarAccident carFromDB = context.CarAccident.FirstOrDefault(
            current => current.CarAccidentKey.Equals(NewCarAccident.CarAccidentKey));
        if (carFromDB != null)
            context.CarAccident.DeleteObject(carFromDB);
        context.CarAccident.AddObject(NewCarAccident);
        context.SaveChanges();
    }
}
I sometimes get an exception that the key already exists in the table.
I wanted to know whether the way I save the changes is the problem (calling SaveChanges once after both the delete and the insert, rather than after each one).
At the time I got the exception, there were a few clients invoking the method simultaneously. I have already blocked other clients from writing, but could this be the problem?
Thanks,
Eran
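One way to sidestep the race, if delete-then-insert is the culprit: update the existing row in place instead of deleting and re-adding it, so the context never tracks two entities with the same key. A minimal sketch, assuming the same BLObjectsFactory and ConvertToCarAccident helpers; ApplyCurrentValues is part of the ObjectContext API this code already uses:

internal void SaveCarAccidentUpsert(WcfContracts.BLObjects.Contract.Dtos.CarAccident DTOCarAccident)
{
    using (var context = BLObjectsFactory.Create())
    {
        context.ContextOptions.ProxyCreationEnabled = false;
        CarAccident incoming = ConvertToCarAccident(DTOCarAccident);
        CarAccident existing = context.CarAccident.FirstOrDefault(
            current => current.CarAccidentKey.Equals(incoming.CarAccidentKey));
        if (existing != null)
        {
            // copy the scalar values onto the already-tracked entity instead of delete + insert
            context.CarAccident.ApplyCurrentValues(incoming);
        }
        else
        {
            context.CarAccident.AddObject(incoming);
        }
        context.SaveChanges();
    }
}

Note that two concurrent callers can still race between the query and SaveChanges, so a unique-key violation remains possible; catching it and retrying, or serializing access to the method, closes that window.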
I am using SqlBulkCopy (.NET) with ObjectReader (FastMember) to perform an import from an XML-based file. I have added the proper column mappings.
In certain instances I get an error: Failed to convert parameter value from a String to a Int32.
I'd like to understand how to:
1. Trace the actual table column that failed
2. Get the "current" record on the ObjectReader
sample code:
using (ObjectReader reader = genericReader.GetReader())
{
try
{
sbc.WriteToServer(reader); //sbc is SqlBulkCopy instance
transaction.Commit();
}
catch (Exception ex)
{
transaction.Rollback();
}
}
Does the "ex" carry more information than just the error:
System.InvalidOperationException : The given value of type String from the data source cannot be converted to type int of the specified target column.
Simple Answer
The simple answer is no. One of the reasons .NET's SqlBulkCopy is so fast is that it does not log anything it does, so you can't directly get any additional information from the exception it raises. However, David Catriel has written an article about this and delivered a possible solution, which you can read about here.
Even though this method may provide the answer you are looking for, I suggest using the helper method only when debugging, as it could have a noticeable performance impact if run consistently within your code.
Why Use a Workaround
The lack of logging definitely speeds things up, but when you are pumping hundreds of thousands of rows and suddenly have a failure on one of them because of a constraint, you're stuck. All the SqlException will tell you is that something went wrong with a given constraint (you'll get the constraint's name at least), but that's about it. You're then stuck having to go back to your source, run separate SELECT statements on it (or do manual searches), and find the culprit rows on your own.

On top of that, it can be a very long and iterative process if you've got data with several potential failures in it, because SqlBulkCopy will stop as soon as the first failure is hit. Once you correct that one, you need to rerun the load to find the second error, etc.
Advantages:

- Reports all possible errors that the SqlBulkCopy would encounter
- Reports all culprit data rows, along with the exception each row would cause
- The entire thing is run in a transaction that is rolled back at the end, so no changes are committed

Disadvantages:

- For extremely large amounts of data it might take a couple of minutes.
- This solution is reactive; i.e. the errors are not returned as part of the exception raised by your SqlBulkCopy.WriteToServer() process. Instead, this helper method is executed after the exception is raised to try and capture all possible errors along with their related data. This means that in case of an exception, your process will take longer to run than just running the bulk copy.
- You cannot reuse the same DataReader object from the failed SqlBulkCopy, as readers are forward-only fire hoses that cannot be reset. You'll need to create a new reader of the same type (e.g. re-issue the original SqlCommand, recreate the reader based on the same DataTable, etc.).
Using the GetBulkCopyFailedData Method
private void TestMethod()
{
    SqlConnection connection = null;
    SqlBulkCopy bulkCopy = null;
    DataTable dataTable = new DataTable();
    // load some sample data into the DataTable
    IDataReader reader = dataTable.CreateDataReader();
    try
    {
        connection = new SqlConnection("connection string goes here ...");
        connection.Open();
        bulkCopy = new SqlBulkCopy(connection);
        bulkCopy.DestinationTableName = "Destination table name";
        bulkCopy.WriteToServer(reader);
    }
    catch (Exception exception)
    {
        // loop through all inner exceptions to see if any relate to a constraint failure
        bool dataExceptionFound = false;
        Exception tmpException = exception;
        while (tmpException != null)
        {
            if (tmpException is SqlException
                && tmpException.Message.Contains("constraint"))
            {
                dataExceptionFound = true;
                break;
            }
            tmpException = tmpException.InnerException;
        }
        if (dataExceptionFound)
        {
            // call the helper method to document the errors and invalid data
            string errorMessage = GetBulkCopyFailedData(
                connection.ConnectionString,
                bulkCopy.DestinationTableName,
                dataTable.CreateDataReader());
            throw new Exception(errorMessage, exception);
        }
        throw; // not a constraint failure; don't swallow it
    }
    finally
    {
        if (connection != null && connection.State == ConnectionState.Open)
        {
            connection.Close();
        }
    }
}
GetBulkCopyFailedData() then opens a new connection to the database, creates a transaction, and begins bulk copying the data one row at a time. It does so by reading through the supplied DataReader and copying each row into an empty DataTable. The DataTable is then bulk copied into the destination database, and any exceptions resulting from this are caught and documented (along with the DataRow that caused them), and the cycle repeats itself with the next row. At the end of the DataReader we roll back the transaction and return the complete error message. Fixing the problems in the data source should now be a breeze.
The GetBulkCopyFailedData Method
/// <summary>
/// Build an error message with the failed records and their related exceptions.
/// </summary>
/// <param name="connectionString">Connection string to the destination database</param>
/// <param name="tableName">Table name into which the data will be bulk copied.</param>
/// <param name="dataReader">DataReader to bulk copy</param>
/// <returns>Error message with failed constraints and invalid data rows.</returns>
public static string GetBulkCopyFailedData(
    string connectionString,
    string tableName,
    IDataReader dataReader)
{
    StringBuilder errorMessage = new StringBuilder("Bulk copy failures:" + Environment.NewLine);
    SqlConnection connection = null;
    SqlTransaction transaction = null;
    SqlBulkCopy bulkCopy = null;
    DataTable tmpDataTable = new DataTable();
    try
    {
        connection = new SqlConnection(connectionString);
        connection.Open();
        transaction = connection.BeginTransaction();
        bulkCopy = new SqlBulkCopy(connection, SqlBulkCopyOptions.CheckConstraints, transaction);
        bulkCopy.DestinationTableName = tableName;
        // create a DataTable with the layout of the data
        DataTable dataSchema = dataReader.GetSchemaTable();
        foreach (DataRow row in dataSchema.Rows)
        {
            tmpDataTable.Columns.Add(new DataColumn(
                row["ColumnName"].ToString(),
                (Type)row["DataType"]));
        }
        // create an object array to hold the data being transferred into
        // tmpDataTable in the loop below
        object[] values = new object[dataReader.FieldCount];
        // loop through the source data
        while (dataReader.Read())
        {
            // clear the temp DataTable from which the single-record bulk copy will be done
            tmpDataTable.Rows.Clear();
            // get the data for the current source row
            dataReader.GetValues(values);
            // load the values into the temp DataTable
            tmpDataTable.LoadDataRow(values, true);
            // perform the bulk copy of the one row
            try
            {
                bulkCopy.WriteToServer(tmpDataTable);
            }
            catch (Exception ex)
            {
                // an exception was raised with the bulk copy of the current row.
                // The row that caused it is the only one in the temp DataTable,
                // so document it and add it to the error message.
                DataRow faultyDataRow = tmpDataTable.Rows[0];
                errorMessage.AppendFormat("Error: {0}{1}", ex.Message, Environment.NewLine);
                errorMessage.AppendFormat("Row data: {0}", Environment.NewLine);
                foreach (DataColumn column in tmpDataTable.Columns)
                {
                    errorMessage.AppendFormat(
                        "\tColumn {0} - [{1}]{2}",
                        column.ColumnName,
                        faultyDataRow[column.ColumnName].ToString(),
                        Environment.NewLine);
                }
            }
        }
    }
    catch (Exception ex)
    {
        throw new Exception(
            "Unable to document SqlBulkCopy errors. See inner exceptions for details.",
            ex);
    }
    finally
    {
        if (transaction != null)
        {
            // the per-row copies were only diagnostic; never commit them
            transaction.Rollback();
        }
        if (connection != null && connection.State != ConnectionState.Closed)
        {
            connection.Close();
        }
    }
    return errorMessage.ToString();
}
Entity Framework: 6.1.3.
I have a function that reads a simple table for a record and either updates it or first creates a new entity. Either way, it then calls AddOrUpdate and SaveChangesAsync. This function has worked for quite some time without any apparent problem.
In my current situation, however, I'm getting a return value of 0 from SaveChangesAsync. I have a breakpoint just before the save and verified that the record doesn't exist. I step through the code and, as expected, a new entity is created. The curious part is that the record is now in the table as desired. If I understand the documentation, 0 should indicate that nothing was written out.
I'm not using transactions for this operation. Other database operations, including writes, would have already occurred on the context prior to this function being called; however, they should all have been committed.
So how can I get a return of 0 and still have something written out?
Here is a slightly reduced code fragment:
var settings = OrganizationDb.Settings;
var setting = await settings.FirstOrDefaultAsync(x => x.KeyName == key).ConfigureAwait(false);
if (setting == null)
{
    setting = new Setting()
    {
        KeyName = key,
    };
}
setting.Value = value;
settings.AddOrUpdate(setting);
if (await OrganizationDb.SaveChangesAsync().ConfigureAwait(false) == 0)
{
    //// error handling - record not written out.
}
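As a first diagnostic step (a sketch, not a fix; it assumes OrganizationDb is the DbContext used above), dump what the change tracker believes is pending immediately before the save, so the 0 return value can be reconciled with the row that appears:

// diagnostic sketch: list pending entries just before SaveChangesAsync
var pending = OrganizationDb.ChangeTracker.Entries()
    .Where(e => e.State == EntityState.Added
             || e.State == EntityState.Modified
             || e.State == EntityState.Deleted)
    .ToList();
System.Diagnostics.Debug.WriteLine("Pending before save: " + pending.Count);

If this reports zero pending entries while the row still shows up afterwards, the insert must have been flushed by an earlier operation on the same context rather than by this SaveChangesAsync call.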
Inspired by Ruby on Rails, I want to add a delete callback to Entity Framework. I've started by overriding SaveChanges() to loop over the tracked entities, and created an interface that is called whenever an entity gets deleted.
var changedEntities = ChangeTracker.Entries();
foreach (var changedEntity in changedEntities)
{
    if (changedEntity.Entity is IBeforeDelete && changedEntity.State == EntityState.Deleted)
    {
        IBeforeDelete saveChange = (IBeforeDelete)changedEntity.Entity;
        saveChange.BeforeDelete(this, changedEntity);
    }
}
This works quite well, but I found one problem and I don't know how to solve it. If an entity gets deleted within the callback, the ChangeTracker needs to be updated to respect the newly deleted items. How can I solve that? Or is there another solution? Or am I doing it wrong?
Good question. If I understand you correctly, your BeforeDelete implementations might delete a different entity that also needs to have BeforeDelete called on it. This could recurse forever, so the only thing I can think of is to recursively check the change tracker's entries to see whether new ones were added after the last batch was processed.
Untested:
public override int SaveChanges()
{
    var processed = new List<DbEntityEntry>();
    var entries = ChangeTracker.Entries();
    do
    {
        foreach (var entry in entries)
        {
            processed.Add(entry);
            if (entry.Entity is IBeforeDelete && entry.State == EntityState.Deleted)
            {
                IBeforeDelete saveChange = (IBeforeDelete)entry.Entity;
                saveChange.BeforeDelete(this, entry);
            }
        }
    } while ((entries = ChangeTracker.Entries().Except(processed)).Any());
    return base.SaveChanges();
}
You can filter out uncommitted entity changes...
var changedEntities = Context.ChangeTracker
    .Entries<TEntity>()
    .Where(e => e.State != EntityState.Detached)
    .Select(e => e.Entity);
and lock on the rest in your "if" block:
lock (changedEntity.Entity) { ... interface code }
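Putting the filter and the lock together, the override might look like the sketch below; it uses the question's IBeforeDelete interface, and whether the lock helps depends on other threads sharing the same entity instances. It also omits the recursion handling from the previous answer for brevity; combine it with the do/while loop above if callbacks can delete further entities:

public override int SaveChanges()
{
    // only tracked, deleted entries that implement the callback interface
    var deleted = ChangeTracker.Entries()
        .Where(e => e.State == EntityState.Deleted && e.Entity is IBeforeDelete)
        .ToList();
    foreach (var entry in deleted)
    {
        // serialize callback execution per entity instance, as suggested above
        lock (entry.Entity)
        {
            ((IBeforeDelete)entry.Entity).BeforeDelete(this, entry);
        }
    }
    return base.SaveChanges();
}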
I'm trying to save a few entities with this code:
this.UserService.Users.Add(eUser);
if (SelectedRewindItems != null && SelectedRewindItems.Count > 0)
{
    foreach (var ug in SelectedRewindItems)
    {
        HpmModel.Usergroup nUg = new HpmModel.Usergroup();
        decimal numId;
        var a = Decimal.TryParse(ug.Key.ToString(), out numId);
        nUg.Groupid = numId;
        nUg.Userid = eUser.Userid;
        // eUser.Usergroups.Add(nUg);
        this.UserService.Usergroups.Add(nUg);
    }
}
var submitOp = this.UserService.SubmitChanges();
IsSuccess = true;
ActionMessageOnButtonSuccess = User.Fname + " " + User.Lname + " Added Successfully !!";
string message = null;
if (submitOp.EntitiesInError.Any())
{
    message = string.Empty;
    Entity entityInError = submitOp.EntitiesInError.First();
    if (entityInError.EntityConflict != null)
    {
        EntityConflict conflict = entityInError.EntityConflict;
        foreach (var cm in conflict.PropertyNames)
        {
            message += string.Format("{0}", cm);
        }
    }
    else if (entityInError.ValidationErrors.Any())
    {
        message += "\r\n" + entityInError.ValidationErrors.First().ErrorMessage;
    }
    MessageBox.Show(message);
}
else
{
    MessageBox.Show("Submit Done");
}
But I'm getting this error:
System.InvalidOperationException was unhandled by user code
HResult=-2146233079
Message=The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state.
Inner exception message: Saving or accepting changes failed because more than one entity of type 'HpmModel.Usergroup' have the same primary key value. Ensure that explicitly set primary key values are unique. Ensure that database-generated primary keys are configured correctly in the database and in the Entity Framework model. Use the Entity Designer for Database First/Model First configuration. Use the 'HasDatabaseGeneratedOption' fluent API or 'DatabaseGeneratedAttribute' for Code First configuration.
Source=EntityFramework
StackTrace:
at System.Data.Entity.Core.Objects.ObjectContext.SaveChangesToStore(SaveOptions options, IDbExecutionStrategy executionStrategy, Boolean startLocalTransaction)
at System.Data.Entity.Core.Objects.ObjectContext.<>c__DisplayClass2a.b__27()
at System.Data.Entity.Infrastructure.DefaultExecutionStrategy.Execute[TResult](Func`1 operation)
at System.Data.Entity.Core.Objects.ObjectContext.SaveChangesInternal(SaveOptions options, Boolean executeInExistingTransaction)
at System.Data.Entity.Core.Objects.ObjectContext.SaveChanges(SaveOptions options)
at System.Data.Entity.Core.Objects.ObjectContext.SaveChanges()
at OpenRiaServices.DomainServices.EntityFramework.LinqToEntitiesDomainService`1.InvokeSaveChanges(Boolean retryOnConflict) in c:\Code\Repos\openriaservices\OpenRiaServices.DomainServices.EntityFramework\Framework\LinqToEntitiesDomainService.cs:line 145
at OpenRiaServices.DomainServices.EntityFramework.LinqToEntitiesDomainService`1.PersistChangeSet() in c:\Code\Repos\openriaservices\OpenRiaServices.DomainServices.EntityFramework\Framework\LinqToEntitiesDomainService.cs:line 138
at OpenRiaServices.DomainServices.Server.DomainService.PersistChangeSetInternal()
at OpenRiaServices.DomainServices.Server.DomainService.Submit(ChangeSet changeSet)
InnerException: System.InvalidOperationException
HResult=-2146233079
Message=Saving or accepting changes failed because more than one entity of type 'HpmModel.Usergroup' have the same primary key value. Ensure that explicitly set primary key values are unique. Ensure that database-generated primary keys are configured correctly in the database and in the Entity Framework model. Use the Entity Designer for Database First/Model First configuration. Use the 'HasDatabaseGeneratedOption' fluent API or 'DatabaseGeneratedAttribute' for Code First configuration.
Source=EntityFramework
StackTrace:
at System.Data.Entity.Core.Objects.ObjectStateManager.FixupKey(EntityEntry entry)
at System.Data.Entity.Core.Objects.EntityEntry.AcceptChanges()
at System.Data.Entity.Core.Objects.ObjectContext.AcceptAllChanges()
at System.Data.Entity.Core.Objects.ObjectContext.SaveChangesToStore(SaveOptions options, IDbExecutionStrategy executionStrategy, Boolean startLocalTransaction)
InnerException:
When I checked the database, the entities had been saved, but it still gives me this issue.
Is this because I'm trying to save them after saving the User and then the UserGroup entities separately, or should child entities be saved along with their parent entities? I'm a beginner, so I'm facing challenges.
After wasting a lot of time, I came to know that I needed to fix my EDMX file and entity code.
So I added this to my entity:
[DatabaseGenerated(DatabaseGeneratedOption.Identity)]
In the SSDL file, on the Usergroup Id node of my Users -> Usergroup (1-M) relationship, I added:
StoreGeneratedPattern="Identity" [SSDL]
In the CSDL:
ed:StoreGeneratedPattern="Identity"
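For comparison, in a Code First model the equivalent fix sits directly on the key property; a hypothetical sketch (class and property names guessed from the code below, attributes from System.ComponentModel.DataAnnotations and .Schema):

public class Usergroup
{
    [Key]
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public decimal Id { get; set; }      // store-generated identity, matching the EDMX change

    public decimal Groupid { get; set; }
    public decimal Userid { get; set; }
}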
In my code:
this.UserService.Users.Add(eUser);
if (SelectedRewindItems != null && SelectedRewindItems.Count > 0)
{
    foreach (var ug in SelectedRewindItems)
    {
        HpmModel.Usergroup nUg = new HpmModel.Usergroup();
        decimal numId;
        var a = Decimal.TryParse(ug.Key.ToString(), out numId);
        nUg.Groupid = numId;
        nUg.Userid = eUser.Userid;
        eUser.Usergroups.Add(nUg);
    }
}
After applying these changes, SaveChanges() worked.
This blog post helped me.
I am trying to solve a situation with rolling back our data contexts.
We are using one TransactionScope and, inside it, two data contexts for two different databases.
At the end we want to save the changes on both databases, so we call SaveChanges on each, but the problem is that when an error occurs on the second database, the changes on the first database are still saved.
What am I doing wrong here, such that the first database doesn't roll back?
Thank you,
Jakub
public void DoWork()
{
    using (var scope = new TransactionScope())
    {
        using (var rawData = new IntranetRawDataDevEntities())
        {
            rawData.Configuration.AutoDetectChangesEnabled = true;
            using (var dataWareHouse = new IntranetDataWareHouseDevEntities())
            {
                dataWareHouse.Configuration.AutoDetectChangesEnabled = true;

                // ... some operations with the data - no SaveChanges() is called here.

                // Save changes for all items.
                if (!errors)
                {
                    // First database save.
                    rawData.SaveChanges();

                    // Fake data to make the second database save fail.
                    dataWareHouse.Tasks.Add(new PLKPIDashboards.DataWareHouse.Task()
                    {
                        Description = string.Empty,
                        Id = 0,
                        OperationsQueue = new OperationsQueue(),
                        Queue_key = 79,
                        TaskTypeSLAs = new Collection<TaskTypeSLA>(),
                        Tasktype = null
                    });

                    // Second database save.
                    dataWareHouse.SaveChanges();
                    scope.Complete();
                }
                else
                {
                    scope.Dispose();
                }
            }
        }
    }
}
From this article http://blogs.msdn.com/b/alexj/archive/2009/01/11/savechanges-false.aspx
try to use
rawData.SaveChanges(false);
dataWareHouse.SaveChanges(false);
//if everything is ok
scope.Complete();
rawData.AcceptAllChanges();
dataWareHouse.AcceptAllChanges();
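Note that SaveChanges(false) in that article is the EF v1 ObjectContext overload; your contexts look DbContext-based (they expose Configuration), where the equivalent is SaveChanges(SaveOptions) on the underlying ObjectContext. A sketch of the adapted pattern, assuming both contexts derive from DbContext (IObjectContextAdapter lives in System.Data.Entity.Infrastructure; ObjectContext and SaveOptions in System.Data.Entity.Core.Objects):

using (var scope = new TransactionScope())
{
    var rawCtx = ((IObjectContextAdapter)rawData).ObjectContext;
    var dwCtx = ((IObjectContextAdapter)dataWareHouse).ObjectContext;

    // save without accepting changes, so both state managers keep their pending state
    rawCtx.SaveChanges(SaveOptions.DetectChangesBeforeSave);
    dwCtx.SaveChanges(SaveOptions.DetectChangesBeforeSave);

    // if everything is ok
    scope.Complete();

    // only after the transaction commits do we accept the tracked changes
    rawCtx.AcceptAllChanges();
    dwCtx.AcceptAllChanges();
}

This way, if the second SaveChanges throws, scope.Complete() is never reached, the ambient transaction rolls back both databases, and neither context has accepted its changes.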