UPDATED: I have a piece of code that creates records when they don't exist, or updates them when they do exist. However, while trying to update the records I get this exception:
Microsoft.EntityFrameworkCore.DbUpdateException The DELETE statement
conflicted with the REFERENCE constraint
public static string AddCurrencies(ApplicationDbContext db)
{
// ...
foreach (Currency c in db.Currency.ToList())
{
try
{
db.Remove(c); // the troublemaker!
db.SaveChanges();
}
catch
{
// probably in use (foreign key)
}
}
// ...
foreach (Currency c in CurrencyList)
{
var c_db = db.Currency.FirstOrDefault(x => x.Code == c.Code);
if (c_db == null)
{
// adding
db.Currency.Add(c);
}
else
{
// updating
c_db.Name = c.Name;
c_db.LocalDisplay = c.LocalDisplay;
}
db.SaveChanges(); // exception fired if updating!
}
// ...
}
After some investigation, and having been able to turn on SQL debugging, I found out that the Remove() "persists" and that it will be retried with the second call to SaveChanges(), hence the exception. Now the question is reformulated: how do I "undo" (for lack of a better expression) the Remove() commands that failed?
I managed to solve this issue this way:
var entry = context.Entry(entity);
entry.Reload();
for each entry where delete failed.
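In loop form, a minimal sketch of how that could look in the cleanup pass above (assuming the only failures are foreign-key conflicts on the delete):
foreach (Currency c in db.Currency.ToList())
{
    try
    {
        db.Remove(c);
        db.SaveChanges();
    }
    catch (DbUpdateException)
    {
        // The entity is still tracked in the Deleted state; reloading it
        // from the database discards the pending delete so the next
        // SaveChanges() does not retry it.
        db.Entry(c).Reload();
    }
}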
Related
I am building an API where I receive a specific object sent as JSON, which then gets converted into another object of another type, so we have sentObject and convertedObject. Now I can do this:
using (var dbContext = _dbContextFactory.CreateDbContext())
using (var dbContext2 = _dbContextFactory2.CreateDbContext())
{
await dbContext.AddAsync(sentObject);
await dbContext.SaveChangesAsync();
await dbContext2.AddAsync(convertedObject);
await dbContext2.SaveChangesAsync();
}
Now I had a problem where the first SaveChanges call went OK but the second threw an error about a date field that was not set properly. The first SaveChanges call had already happened, so the data was inserted into the database while the second SaveChanges failed, which cannot happen in my use case.
What I want to do is if the second SaveChanges call goes wrong then I basically want to rollback the changes that have been made by the first SaveChanges.
My first thought was to delete cascade but the sentObject has a complex structure and I don't want to run into circular problems with delete cascade.
Are there any tips on how I could roll back my changes if one of the SaveChanges calls fails?
You can call context.Database.BeginTransaction as follows:
using (var dbContextTransaction = context.Database.BeginTransaction())
{
context.Database.ExecuteSqlCommand(
#"UPDATE Blogs SET Rating = 5" +
" WHERE Name LIKE '%Entity Framework%'"
);
var query = context.Posts.Where(p => p.Blog.Rating >= 5);
foreach (var post in query)
{
post.Title += "[Cool Blog]";
}
context.SaveChanges();
dbContextTransaction.Commit();
}
(taken from the docs)
You can therefore begin a transaction for dbContext in your case and, if the second command fails, call dbContextTransaction.Rollback();
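Applied to the two-context code from the question, a rough sketch could look like the following (assuming the contexts target different databases, so they cannot share a single transaction):
using (var dbContext = _dbContextFactory.CreateDbContext())
using (var dbContext2 = _dbContextFactory2.CreateDbContext())
using (var transaction = dbContext.Database.BeginTransaction())
{
    await dbContext.AddAsync(sentObject);
    await dbContext.SaveChangesAsync();

    try
    {
        await dbContext2.AddAsync(convertedObject);
        await dbContext2.SaveChangesAsync();

        // Only make the first insert permanent once the second one succeeded.
        transaction.Commit();
    }
    catch
    {
        // Undo the first insert; disposing the transaction without Commit()
        // would also roll it back.
        transaction.Rollback();
        throw;
    }
}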
Alternatively, you can implement the cleanup logic yourself, but it would be messy to maintain that as your code here evolves in the future.
Here is example code that works for me; there is no need to call the rollback function explicitly. Calling the rollback function can itself fail: if you do it inside the catch block, for example, an exception can be thrown silently and you will never know about it. The rollback happens automatically when the transaction object in the using statement gets disposed without a commit. You can see this if you go to SSMS and look at the open transactions while debugging. See this for reference: https://github.com/dotnet/EntityFramework.Docs/issues/327
Using Transactions or SaveChanges(false) and AcceptAllChanges()?
using (var transactionApplication = dbContext.Database.BeginTransaction())
{
try
{
await dbContext.AddAsync(toInsertApplication);
await dbContext.SaveChangesAsync();
using (var transactionPROWIN = dbContextPROWIN.Database.BeginTransaction())
{
try
{
await dbContextPROWIN.AddAsync(convertedApplication);
await dbContextPROWIN.SaveChangesAsync();
transactionPROWIN.Commit();
insertOperationResult = ("Insert successfull", false);
}
catch (Exception e)
{
Logger.LogError(e.ToString());
insertOperationResult = ("Insert converted object failed", true);
return;
}
}
transactionApplication.Commit();
}
catch (DbUpdateException dbUpdateEx)
{
Logger.LogError(dbUpdateEx.ToString());
if (dbUpdateEx.InnerException.ToString().ToLower().Contains("overflow"))
{
insertOperationResult = ("DateTime overflow", true);
return;
}
//transactionApplication.Rollback();
insertOperationResult = ("Duplicated UUID", true);
}
catch (Exception e)
{
Logger.LogError(e.ToString());
transactionApplication.Rollback();
insertOperationResult = ("Insert Application: Some other error happened", true);
}
}
The following Trigger is firing twice:
trigger AccountTrigger on Account ( before insert, after insert, before update, after update, before delete, after delete) {
AccountTriggerHandler handle = new AccountTriggerHandler(trigger.new, trigger.oldMap);
System.debug('AccountTrigger created a handler instance: ' + handle);
// Currently the Trigger is firing twice with no obvious reason.
if (Trigger.isBefore) {
if (Trigger.isInsert) {
handle.beforeInsert();
}
if (Trigger.isUpdate) {
// Call handler here!
}
if (Trigger.isDelete) {
// Call handler here!
}
}
if (Trigger.isAfter) {
if (Trigger.isInsert) {
// Call handler here!
}
if (Trigger.isUpdate) {
// Call handler here!
}
if (Trigger.isDelete) {
// Call handler here!
}
}
}
The debug output shows two handler instances. The weird thing is: the first one seems to be empty. How can that be?
EDIT 1:
The test code:
@isTest
public class AccountTestTest {
@isTest
public static void testAccountInsert() {
// Insert an Account
Account a = new Account(name='TestCustomer');
insert a;
Account queryAccount = [SELECT Account.id, Account.name FROM Account WHERE Id = :a.Id];
System.debug('TEST RESULT: ' + queryAccount);
System.debug('AccountTestTest completed.');
// Actually test something...
}
}
I know it's missing asserts, but for the sake of simplicity, I just tried this one.
It's because of "before insert". The trigger body runs once for the before insert event and once more for after insert, so the handler is constructed twice; in the before pass the record Ids haven't been generated yet, which is why the first instance looks empty. If you don't have any logic that fits best into before insert (complex validations? field pre-population?), remove that event.
I get a runtime exception from the code below. I think it is not creating a primary key (which is CustomerID, an int). I used the wizard, and I have experience from years ago with ADO.NET, but this is my first project using Entity Framework 6.0 with DbContext. I generated the EDMX file using "database first", so an actual database is present. I don't think any other table has a mandatory foreign-key association with this table, so I think (but am not sure) this runtime error comes from a failure to create the new primary key. This is either a very easy question or a hard one, so I will check back later.
Paul
[OperationContract]
public void DoWork ()
{
try
{
using (var ctx = new MyEDMXdBExistingDatabase())
{
CUSTOMER myCustomerREALLYNEW = new CUSTOMER();
myCustomerREALLYNEW.LastName = "NeWLASTNAME";
myCustomerREALLYNEW.CustomerID = 0; //set primary key to zero and let entity framework create new one automagically? I think this works based on past experience, but it's not working now
ctx.CUSTOMERs.Add(myCustomerREALLYNEW);
ctx.SaveChanges(); // PROBLEM HERE: if I comment out this line there is no runtime error
}
}
catch (DbEntityValidationException ex1)
{
Debug.WriteLine(ex1.Message);
}
catch (Exception ex)
{
Debug.WriteLine(ex.Message);
}
finally
{
}
}
I added the following code and it showed me that the new CUSTOMER was lacking a mandatory, non-null field (the Address). Once I added myCustomerREALLYNEW.Address = "123 Street"; the new record/object could be added.
Here is the code I added to catch the runtime exception, which I got off the net:
catch (DbEntityValidationException ex1)
{
Debug.WriteLine("!" + ex1.Message);
List<string> errorMessages = new List<string>();
foreach (DbEntityValidationResult validationResult in ex1.EntityValidationErrors)
{
string entityName = validationResult.Entry.Entity.GetType().Name;
foreach (DbValidationError error in validationResult.ValidationErrors)
{
errorMessages.Add(entityName + "." + error.PropertyName + ": " + error.ErrorMessage);
}
}
foreach (string s in errorMessages)
{
Debug.WriteLine(s);
}
}
See Resolving optimistic concurrency exceptions with Reload (database wins):
using (var context = new BloggingContext())
{
var blog = context.Blogs.Find(1);
blog.Name = "The New ADO.NET Blog";
bool saveFailed;
do
{
saveFailed = false;
try
{
context.SaveChanges();
}
catch (DbUpdateConcurrencyException ex)
{
saveFailed = true;
// Update the values of the entity that failed to save from the store
ex.Entries.Single().Reload();
}
} while (saveFailed);
}
Why is the method SaveChanges() called after Reload()?
This call will never change the data in the database.
I agree it's not too clear. The intention of this piece of code is in the sentence
The entity is then typically given back to the user in some form and they must try to make their changes again and re-save.
So it would have been better if they had added a comment:
...
// User evaluates current values and may make new changes.
try
{
context.SaveChanges();
}
...
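For contrast, the same article also documents the reverse resolution ("client wins"), where the original values are refreshed from the database so the in-memory changes can be re-saved over them; a minimal sketch of that catch block:
catch (DbUpdateConcurrencyException ex)
{
    saveFailed = true;

    // Client wins: refresh the original values from the database so the
    // concurrency check passes, then let the loop retry SaveChanges with
    // the values currently in memory.
    var entry = ex.Entries.Single();
    entry.OriginalValues.SetValues(entry.GetDatabaseValues());
}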
We are receiving a file from a client (Silverlight) via WCF, and on the server side I parse this file. Each line in the file is transformed into an object and stored in the database. If the file is very large (10,000 entries or more), I get the following error (MSSQLEXPRESS):
The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements.
I tried a lot (TransactionOptions timeout set and so on), but nothing works. The above exception is sometimes raised after 3,000 and sometimes after 6,000 processed objects, but I never succeed in processing all of them.
I append my source; hopefully somebody has an idea and can help me:
public xxxResponse SendLogFile(xxxRequest request)
{
const int INTERMEDIATE_SAVE = 100;
using (var context = new EntityFramework.Models.Cubes_ServicesEntities())
{
// start a new transactionscope with the timeout of 0 (unlimited time for developing purposes)
using (var transactionScope = new TransactionScope(TransactionScopeOption.RequiresNew,
new TransactionOptions
{
IsolationLevel = System.Transactions.IsolationLevel.Serializable,
Timeout = TimeSpan.FromSeconds(0)
}))
{
try
{
// open the connection manually to prevent undesired close of DB
// (MSDTC)
context.Connection.Open();
int timeout = context.Connection.ConnectionTimeout;
int Counter = 0, Counter2 = 0;
// read the file submitted from client
using (var reader = new StreamReader(new MemoryStream(request.LogFile)))
{
try
{
while (!reader.EndOfStream)
{
Counter++;
Counter2++;
string line = reader.ReadLine();
if (String.IsNullOrEmpty(line)) continue;
// Create a new object
DomainModel.LogEntry le = CreateLogEntryObject(line);
// an attach it to the context, set its state to added.
context.AttachTo("LogEntry", le);
context.ObjectStateManager.ChangeObjectState(le, EntityState.Added);
// while not 100 objects were attached, go on
if (Counter != INTERMEDIATE_SAVE) continue;
// after 100 objects, make a call to SaveChanges.
context.SaveChanges(SaveOptions.None);
Counter = 0;
}
}
catch (Exception exception)
{
// cleanup
reader.Close();
transactionScope.Dispose();
throw exception;
}
}
// do a final SaveChanges
context.SaveChanges();
transactionScope.Complete();
context.Connection.Close();
}
catch (Exception e)
{
// cleanup
transactionScope.Dispose();
context.Connection.Close();
throw e;
}
}
var response = CreateSuccessResponse<ServiceSendLogEntryFileResponse>("SendLogEntryFile successful!");
return response;
}
}
There is no bulk insert in Entity Framework. You call SaveChanges after every 100 records, but it will still execute 100 separate INSERTs, with a database round trip for each one.
Setting the timeout of the transaction is also limited by the maximum transaction timeout configured at machine level (I think the default value is 10 minutes). How long does it take before your operation fails?
The best thing you can do is rewrite your insert logic with plain ADO.NET or with a bulk insert.
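As an illustration of the bulk-insert route, a rough sketch using SqlBulkCopy; the table name, column names and LogEntry properties here are made up for the example and will differ in the real model:
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

static class LogEntryBulkInserter
{
    // Hypothetical mapping: adjust table/column names to the real LogEntry schema.
    public static void BulkInsert(string connectionString, IEnumerable<DomainModel.LogEntry> entries)
    {
        var table = new DataTable();
        table.Columns.Add("Message", typeof(string));
        table.Columns.Add("LoggedAt", typeof(DateTime));

        foreach (var e in entries)
            table.Rows.Add(e.Message, e.LoggedAt);

        using (var bulk = new SqlBulkCopy(connectionString))
        {
            bulk.DestinationTableName = "dbo.LogEntry";
            bulk.WriteToServer(table);   // one bulk operation instead of one INSERT per row
        }
    }
}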
Btw. throw exception and throw e? That is an incorrect way to rethrow exceptions; a bare throw; preserves the original stack trace.
Important edit:
SaveChanges(SaveOptions.None)!!! means changes are not accepted after saving, so all records stay in the Added state. Because of that, the first call to SaveChanges inserts the first 100 records, the second call inserts the first 100 again plus the next 100, the third call inserts the first 200 plus the next 100, and so on.
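So in that loop, a sketch of the fix (assuming the intent was simply to flush every 100 records) is either to let SaveChanges accept the changes, or to accept them explicitly:
if (Counter == INTERMEDIATE_SAVE)
{
    // Default behaviour: save and accept changes, so already-saved entities
    // leave the Added state and are not inserted again.
    context.SaveChanges();

    // Or, if SaveOptions.None is really wanted, accept explicitly:
    // context.SaveChanges(SaveOptions.None);
    // context.AcceptAllChanges();

    Counter = 0;
}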
I had exactly the same issue. I wrote EF code to bulk-insert 1000 records at a time.
It was working from the beginning, with a little problem with MSDTC that I configured to allow remote clients and admins, but after that it was OK. I did a lot of work with this, but one day it JUST STOPPED WORKING.
I am getting
The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements.
VERY WEIRD! Sometimes the error changes. My suspicion is that it is MSDTC somehow; strange behavior.
I am now changing the code to not use TransactionScope!
I hate it when something works and then just stops. I also tried to run this in a VM, another enormous waste of time...
My code:
private void AddTicks(FileHelperTick[] fhTicks)
{
List<ForexEF.Entities.Tick> Ticks = new List<ForexEF.Entities.Tick>();
var str = LeTicks(ref fhTicks, ref Ticks);
using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, new TransactionOptions()
{
IsolationLevel = System.Transactions.IsolationLevel.Serializable,
Timeout = TimeSpan.FromSeconds(180)
}))
{
ForexEF.EUR_TICKSContext contexto = null;
try
{
contexto = new ForexEF.EUR_TICKSContext();
contexto.Configuration.AutoDetectChangesEnabled = false;
int count = 0;
foreach (var tick in Ticks)
{
count++;
contexto = AddToContext(contexto, tick, count, 1000, true);
}
contexto.SaveChanges();
}
finally
{
if (contexto != null)
contexto.Dispose();
}
scope.Complete();
}
}
private ForexEF.EUR_TICKSContext AddToContext(ForexEF.EUR_TICKSContext contexto, ForexEF.Entities.Tick tick, int count, int commitCount, bool recreateContext)
{
contexto.Set<ForexEF.Entities.Tick>().Add(tick);
if (count % commitCount == 0)
{
contexto.SaveChanges();
if (recreateContext)
{
contexto.Dispose();
contexto = new ForexEF.EUR_TICKSContext();
contexto.Configuration.AutoDetectChangesEnabled = false;
}
}
return contexto;
}
It times out due to the TransactionScope default maximum timeout; check machine.config for that.
Check out this link:
http://social.msdn.microsoft.com/Forums/en-US/windowstransactionsprogramming/thread/584b8e81-f375-4c76-8cf0-a5310455a394/
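As a quick way to verify what the machine-level cap actually is on the affected box, a small sketch that just prints the System.Transactions limits (MaximumTimeout reflects the machineSettings maxTimeout value in machine.config, which defaults to 10 minutes):
using System;
using System.Transactions;

class TransactionTimeoutCheck
{
    static void Main()
    {
        // Any TransactionScope timeout above MaximumTimeout is clamped to it.
        Console.WriteLine("DefaultTimeout: " + TransactionManager.DefaultTimeout);
        Console.WriteLine("MaximumTimeout: " + TransactionManager.MaximumTimeout);
    }
}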