We are receiving a file from a client (Silverlight) via WCF, and on the server side I parse this file. Each line in the file is transformed into an object and stored in the database. If the file is very large (10,000 entries or more), I get the following error (MSSQLEXPRESS):
The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements.
I have tried a lot (setting the TransactionOptions timeout and so on), but nothing works. The exception above is raised sometimes after 3000 and sometimes after 6000 processed objects, but I never succeed in processing all of them.
I have appended my source; hopefully somebody has an idea and can help me:
public xxxResponse SendLogFile(xxxRequest request)
{
const int INTERMEDIATE_SAVE = 100;
using (var context = new EntityFramework.Models.Cubes_ServicesEntities())
{
// start a new transactionscope with the timeout of 0 (unlimited time for developing purposes)
using (var transactionScope = new TransactionScope(TransactionScopeOption.RequiresNew,
new TransactionOptions
{
IsolationLevel = System.Transactions.IsolationLevel.Serializable,
Timeout = TimeSpan.FromSeconds(0)
}))
{
try
{
// open the connection manually to prevent undesired close of DB
// (MSDTC)
context.Connection.Open();
int timeout = context.Connection.ConnectionTimeout;
int Counter = 0;
// read the file submitted from client
using (var reader = new StreamReader(new MemoryStream(request.LogFile)))
{
try
{
while (!reader.EndOfStream)
{
Counter++;
string line = reader.ReadLine();
if (String.IsNullOrEmpty(line)) continue;
// Create a new object
DomainModel.LogEntry le = CreateLogEntryObject(line);
// and attach it to the context, setting its state to Added.
context.AttachTo("LogEntry", le);
context.ObjectStateManager.ChangeObjectState(le, EntityState.Added);
// while not 100 objects were attached, go on
if (Counter != INTERMEDIATE_SAVE) continue;
// after 100 objects, make a call to SaveChanges.
context.SaveChanges(SaveOptions.None);
Counter = 0;
}
}
catch (Exception exception)
{
// cleanup
reader.Close();
transactionScope.Dispose();
throw exception;
}
}
// do a final SaveChanges
context.SaveChanges();
transactionScope.Complete();
context.Connection.Close();
}
catch (Exception e)
{
// cleanup
transactionScope.Dispose();
context.Connection.Close();
throw e;
}
}
var response = CreateSuccessResponse<ServiceSendLogEntryFileResponse>("SendLogEntryFile successful!");
return response;
}
}
There is no bulk insert in Entity Framework. You call SaveChanges after 100 records, but it will execute 100 separate inserts with a database round trip for each insert.
Setting the timeout of the transaction also depends on the maximum transaction timeout, which is configured at machine level (I think the default value is 10 minutes). How long does it take before your operation fails?
The best thing you can do is rewrite your insert logic with plain ADO.NET or with a bulk insert.
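For illustration, a minimal SqlBulkCopy sketch for the log-entry case (the table and column names here are placeholders, not taken from your model):
// Sketch only: build a DataTable from the parsed lines and push it to SQL Server in one round trip.
var table = new DataTable();
table.Columns.Add("Message", typeof(string));
table.Columns.Add("CreatedAt", typeof(DateTime));
foreach (var line in lines) // 'lines' stands in for the parsed file content
    table.Rows.Add(line, DateTime.UtcNow);
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var bulkCopy = new SqlBulkCopy(connection) { DestinationTableName = "LogEntry" })
    {
        bulkCopy.ColumnMappings.Add("Message", "Message");
        bulkCopy.ColumnMappings.Add("CreatedAt", "CreatedAt");
        bulkCopy.WriteToServer(table); // one bulk operation instead of one insert per row
    }
}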
Btw. throw exception and throw e? That is an incorrect way to rethrow exceptions.
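For the record, the usual pattern is:
catch (Exception e)
{
    // log or clean up here ...
    throw;      // rethrows and preserves the original stack trace
    // throw e; // would reset the stack trace - avoid
}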
Important edit:
SaveChanges(SaveOptions.None) !!! means do not accept changes after saving, so all records are still in the Added state. Because of that, the first call to SaveChanges inserts the first 100 records. The second call inserts the first 100 again plus the next 100, the third call inserts the first 200 plus the next 100, etc.
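A minimal sketch of the batching fix, based on the code in the question (the parameterless SaveChanges accepts changes after saving):
if (Counter == INTERMEDIATE_SAVE)
{
    // Accept changes after saving so already-inserted entities leave the Added state
    // and are not re-inserted on the next call.
    context.SaveChanges();
    Counter = 0;
}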
I had exactly the same issue. I wrote EF code to bulk insert 1000 records at a time.
It was working from the beginning, with a little problem with MSDTC that I configured to allow remote clients and admin, but after that it was OK. I did a lot of work with this, but one day it JUST STOPPED WORKING.
I am getting
The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements.
VERY WEIRD! Sometimes the error changes. My suspicion is MSDTC somehow; strange behavior.
I am now switching away from using TransactionScope!
I hate it when something works and then just stops. I also tried to run this in a VM, another enormous waste of time...
My code:
private void AddTicks(FileHelperTick[] fhTicks)
{
List<ForexEF.Entities.Tick> Ticks = new List<ForexEF.Entities.Tick>();
var str = LeTicks(ref fhTicks, ref Ticks);
using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, new TransactionOptions()
{
IsolationLevel = System.Transactions.IsolationLevel.Serializable,
Timeout = TimeSpan.FromSeconds(180)
}))
{
ForexEF.EUR_TICKSContext contexto = null;
try
{
contexto = new ForexEF.EUR_TICKSContext();
contexto.Configuration.AutoDetectChangesEnabled = false;
int count = 0;
foreach (var tick in Ticks)
{
count++;
contexto = AddToContext(contexto, tick, count, 1000, true);
}
contexto.SaveChanges();
}
finally
{
if (contexto != null)
contexto.Dispose();
}
scope.Complete();
}
}
private ForexEF.EUR_TICKSContext AddToContext(ForexEF.EUR_TICKSContext contexto, ForexEF.Entities.Tick tick, int count, int commitCount, bool recreateContext)
{
contexto.Set<ForexEF.Entities.Tick>().Add(tick);
if (count % commitCount == 0)
{
contexto.SaveChanges();
if (recreateContext)
{
contexto.Dispose();
contexto = new ForexEF.EUR_TICKSContext();
contexto.Configuration.AutoDetectChangesEnabled = false;
}
}
return contexto;
}
It times out due to the TransactionScope default maximum timeout; check machine.config for that.
Check out this link:
http://social.msdn.microsoft.com/Forums/en-US/windowstransactionsprogramming/thread/584b8e81-f375-4c76-8cf0-a5310455a394/
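For reference, the machine-wide cap is configured in machine.config under system.transactions; raising it looks roughly like this (the two-hour value is only an example):
<configuration>
  <system.transactions>
    <machineSettings maxTimeout="02:00:00" />
  </system.transactions>
</configuration>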
I am building an API where I get a specific object sent as JSON and it then gets converted into another object of another type, so we have sentObject and convertedObject. Now I can do this:
using (var dbContext = _dbContextFactory.CreateDbContext())
using (var dbContext2 = _dbContextFactory2.CreateDbContext())
{
await dbContext.AddAsync(sentObject);
await dbContext.SaveChangesAsync();
await dbContext2.AddAsync(convertedObject);
await dbContext2.SaveChangesAsync();
}
Now I had a problem where the first SaveChanges call went OK, but the second threw an error about a date field that was not properly set. So the data from the first SaveChanges was already inserted into the database while the second SaveChanges failed, which must not happen in my use case.
What I want is: if the second SaveChanges call goes wrong, I basically want to roll back the changes that were made by the first SaveChanges.
My first thought was cascade delete, but the sentObject has a complex structure and I don't want to run into circular problems with cascade delete.
Are there any tips on how I could roll back my changes if one of the SaveChanges calls fails?
You can call context.Database.BeginTransaction as follows:
using (var dbContextTransaction = context.Database.BeginTransaction())
{
context.Database.ExecuteSqlCommand(
#"UPDATE Blogs SET Rating = 5" +
" WHERE Name LIKE '%Entity Framework%'"
);
var query = context.Posts.Where(p => p.Blog.Rating >= 5);
foreach (var post in query)
{
post.Title += "[Cool Blog]";
}
context.SaveChanges();
dbContextTransaction.Commit();
}
(taken from the docs)
You can therefore begin a transaction for dbContext in your case, and if the second command fails, call dbContextTransaction.Rollback();
Alternatively, you can implement the cleanup logic yourself, but it would be messy to maintain that as your code here evolves in the future.
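Applied to the two contexts from the question, a minimal sketch could look like this (names taken from the question; note this only makes the first insert conditional on the second one succeeding, it is not a true distributed transaction):
using (var dbContext = _dbContextFactory.CreateDbContext())
using (var dbContext2 = _dbContextFactory2.CreateDbContext())
using (var transaction = await dbContext.Database.BeginTransactionAsync())
{
    await dbContext.AddAsync(sentObject);
    await dbContext.SaveChangesAsync();   // written, but not committed yet

    await dbContext2.AddAsync(convertedObject);
    await dbContext2.SaveChangesAsync();  // if this throws, the transaction above is
                                          // rolled back automatically when it is disposed

    await transaction.CommitAsync();      // commit the first insert only after the second succeeded
}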
Here is example code that works for me; there is no need to call the rollback function. Calling the rollback function can fail: if you do it inside the catch block, for example, it can throw its own exception silently and you will never know about it. The rollback happens automatically when the transaction object in the using statement is disposed. You can see this if you go to SSMS and look at the open transactions while debugging. See this for reference: https://github.com/dotnet/EntityFramework.Docs/issues/327
Using Transactions or SaveChanges(false) and AcceptAllChanges()?
using (var transactionApplication = dbContext.Database.BeginTransaction())
{
try
{
await dbContext.AddAsync(toInsertApplication);
await dbContext.SaveChangesAsync();
using (var transactionPROWIN = dbContext2.Database.BeginTransaction())
{
try
{
await dbContext2.AddAsync(convertedApplication);
await dbContext2.SaveChangesAsync();
transactionPROWIN.Commit();
insertOperationResult = ("Insert successfull", false);
}
catch (Exception e)
{
Logger.LogError(e.ToString());
insertOperationResult = ("Insert converted object failed", true);
return;
}
}
transactionApplication.Commit();
}
catch (DbUpdateException dbUpdateEx)
{
Logger.LogError(dbUpdateEx.ToString());
if (dbUpdateEx.InnerException.ToString().ToLower().Contains("overflow"))
{
insertOperationResult = ("DateTime overflow", true);
return;
}
//transactionApplication.Rollback();
insertOperationResult = ("Duplicated UUID", true);
}
catch (Exception e)
{
Logger.LogError(e.ToString());
transactionApplication.Rollback();
insertOperationResult = ("Insert Application: Some other error happened", true);
}
}
We have two .net services (.Net core console applications) which are accessing a postgres db table.
Service 1 inserts some 500 rows every 1 minute. It runs as a background thread.
Service 2 reads data from the same table continuously. There is an MQTT publisher which keeps reading data from this table whenever new data is requested. This also happens very frequently, i.e. at least 4-5 times a minute.
We are getting a "FATAL: sorry, too many clients already" error.
What I am assuming is that since writes and reads are happening simultaneously and too frequently, the connections are not getting disposed properly.
Is there a way to avoid reading whenever a write is happening?
EDITED
Thanks for the reply. I know some connection pooling is happening, but I am not sure where. So my question is how to avoid concurrent access to the Postgres DB.
I was not sure what part of the code I could post to make the question clearer.
I have a using clause on the DbContext and also dispose it, like below.
This is retrieval section
using (PlatinumDBContext platinumDBContext = new PlatinumDBContext())
{
try
{
var data = platinumDBContext.TrendPoints.Where(x => ids.Contains(x.TrendPointID) && x.TimeStamp >= DateTime.Now.AddHours(-timeinHours));
result = data.Select(x => new Last24hours
{
Label = x.TrendPointID.ToString(),
Value = (double)x.TrendPointValue,
time = x.TimeStamp.ToString("MM/dd/yyyy HH:mm:ss")
}).ToList();
}
catch (Exception oE)
{
}
finally {
platinumDBContext.Dispose();
}
}
This is the insertion section
using (PlatinumDBContext platinumDBContext = new PlatinumDBContext())
{
try
{
foreach (var point in trendPoints)
{
if (point != null)
{
TrendPoint item = new TrendPoint();
item.CreatedDate = DateTime.Now;
item.ObjectState = ObjectState.Added;
item.TrendPointID = point.TrendID;
item.TrendPointValue = double.IsNaN(point.Value) ? decimal.MinValue : (decimal)point.Value;
item.TimeStamp = new DateTime(point.TimeStamp);
platinumDBContext.Add(item);
}
}
platinumDBContext.SaveChanges();
}
catch (Exception ex)
{
}
finally
{
platinumDBContext.Dispose();
}
}
Regards,
Geervani
I am using SqlBulkCopy (.NET) with ObjectReader (FastMember) to perform an import from an XML-based file. I have added the proper column mappings.
In certain instances I get an error: Failed to convert parameter value from a String to a Int32.
I'd like to understand how to:
1. Trace the actual table column which has failed
2. Get the "current" record on the ObjectReader
sample code:
using (ObjectReader reader = genericReader.GetReader())
{
try
{
sbc.WriteToServer(reader); //sbc is SqlBulkCopy instance
transaction.Commit();
}
catch (Exception ex)
{
transaction.Rollback();
}
}
Does the "ex" carry more information then just the error:
System.InvalidOperationException : The given value of type String from the data source cannot be converted to type int of the specified target column.
Simple Answer
The simple answer is no. One of the reasons .NET's SqlBulkCopy is so fast is that it does not log anything it does. You can't directly get any additional information from the .NET SqlBulkCopy exception. However, David Catriel has written an article about this and delivered a possible solution, which you can read about in full here.
Even though this method may provide the answer you are looking for, I suggest using the helper method only when debugging, as it could quite possibly have some performance impact if run consistently within your code.
Why Use A Work Around
The lack of logging definitely speeds things up, but when you are pumping hundreds of thousands of rows and suddenly have a failure on one of them because of a constraint, you're stuck. All the SqlException will tell you is that something went wrong with a given constraint (you'll get the constraint's name at least), but that's about it. You're then stuck having to go back to your source, run separate SELECT statements on it (or do manual searches), and find the culprit rows on your own.
On top of that, it can be a very long and iterative process if you've got data with several potential failures in it, because SqlBulkCopy will stop as soon as the first failure is hit. Once you correct that one, you need to rerun the load to find the second error, etc.
advantages:
Reports all possible errors that the SqlBulkCopy would encounter
Reports all culprit data rows, along with the exception that row would be causing
The entire thing is run in a transaction that is rolled back at the end, so no changes are committed.
disadvantages:
For extremely large amounts of data it might take a couple of minutes.
This solution is reactive; i.e. the errors are not returned as part of the exception raised by your SqlBulkCopy.WriteToServer() process. Instead, this helper method is executed after the exception is raised to try and capture all possible errors along with their related data. This means that in case of an exception, your process will take longer to run than just running the bulk copy.
You cannot reuse the same DataReader object from the failed SqlBulkCopy, as readers are forward-only fire hoses that cannot be reset. You'll need to create a new reader of the same type (e.g. re-issue the original SqlCommand, recreate the reader based on the same DataTable, etc.); see the sketch below.
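For the FastMember case from the question, recreating the reader for the helper call might look like this (a sketch; the source collection and member names are assumptions):
// 'records' stands in for whatever in-memory collection the original ObjectReader was created from.
using (ObjectReader retryReader = ObjectReader.Create(records, "ColumnA", "ColumnB"))
{
    string report = GetBulkCopyFailedData(connectionString, "DestinationTable", retryReader);
}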
Using the GetBulkCopyFailedData Method
private void TestMethod()
{
// new code
SqlConnection connection = null;
SqlBulkCopy bulkCopy = null;
DataTable dataTable = new DataTable();
// load some sample data into the DataTable
IDataReader reader = dataTable.CreateDataReader();
try
{
connection = new SqlConnection("connection string goes here ...");
connection.Open();
bulkCopy = new SqlBulkCopy(connection);
bulkCopy.DestinationTableName = "Destination table name";
bulkCopy.WriteToServer(reader);
}
catch (Exception exception)
{
// loop through all inner exceptions to see if any relate to a constraint failure
bool dataExceptionFound = false;
Exception tmpException = exception;
while (tmpException != null)
{
if (tmpException is SqlException
&& tmpException.Message.Contains("constraint"))
{
dataExceptionFound = true;
break;
}
tmpException = tmpException.InnerException;
}
if (dataExceptionFound)
{
// call the helper method to document the errors and invalid data
string errorMessage = GetBulkCopyFailedData(
connection.ConnectionString,
bulkCopy.DestinationTableName,
dataTable.CreateDataReader());
throw new Exception(errorMessage, exception);
}
}
finally
{
if (connection != null && connection.State == ConnectionState.Open)
{
connection.Close();
}
}
}
GetBulkCopyFailedData() then opens a new connection to the database, creates a transaction, and begins bulk copying the data one row at a time. It does so by reading through the supplied DataReader and copying each row into an empty DataTable. The DataTable is then bulk copied into the destination database, and any exceptions resulting from this are caught, documented (along with the DataRow that caused it), and the cycle then repeats itself with the next row. At the end of the DataReader we roll back the transaction and return the complete error message. Fixing the problems in the data source should now be a breeze.
The GetBulkCopyFailedData Method
/// <summary>
/// Build an error message with the failed records and their related exceptions.
/// </summary>
/// <param name="connectionString">Connection string to the destination database</param>
/// <param name="tableName">Table name into which the data will be bulk copied.</param>
/// <param name="dataReader">DataReader to bulk copy</param>
/// <returns>Error message with failed constraints and invalid data rows.</returns>
public static string GetBulkCopyFailedData(
string connectionString,
string tableName,
IDataReader dataReader)
{
StringBuilder errorMessage = new StringBuilder("Bulk copy failures:" + Environment.NewLine);
SqlConnection connection = null;
SqlTransaction transaction = null;
SqlBulkCopy bulkCopy = null;
DataTable tmpDataTable = new DataTable();
try
{
connection = new SqlConnection(connectionString);
connection.Open();
transaction = connection.BeginTransaction();
bulkCopy = new SqlBulkCopy(connection, SqlBulkCopyOptions.CheckConstraints, transaction);
bulkCopy.DestinationTableName = tableName;
// create a datatable with the layout of the data.
DataTable dataSchema = dataReader.GetSchemaTable();
foreach (DataRow row in dataSchema.Rows)
{
tmpDataTable.Columns.Add(new DataColumn(
row["ColumnName"].ToString(),
(Type)row["DataType"]));
}
// create an object array to hold the data being transferred into tmpDataTable
//in the loop below.
object[] values = new object[dataReader.FieldCount];
// loop through the source data
while (dataReader.Read())
{
// clear the temp DataTable from which the single-record bulk copy will be done
tmpDataTable.Rows.Clear();
// get the data for the current source row
dataReader.GetValues(values);
// load the values into the temp DataTable
tmpDataTable.LoadDataRow(values, true);
// perform the bulk copy of the one row
try
{
bulkCopy.WriteToServer(tmpDataTable);
}
catch (Exception ex)
{
// an exception was raised with the bulk copy of the current row.
// The row that caused the current exception is the only one in the temp
// DataTable, so document it and add it to the error message.
DataRow faultyDataRow = tmpDataTable.Rows[0];
errorMessage.AppendFormat("Error: {0}{1}", ex.Message, Environment.NewLine);
errorMessage.AppendFormat("Row data: {0}", Environment.NewLine);
foreach (DataColumn column in tmpDataTable.Columns)
{
errorMessage.AppendFormat(
"\tColumn {0} - [{1}]{2}",
column.ColumnName,
faultyDataRow[column.ColumnName].ToString(),
Environment.NewLine);
}
}
}
}
catch (Exception ex)
{
throw new Exception(
"Unable to document SqlBulkCopy errors. See inner exceptions for details.",
ex);
}
finally
{
if (transaction != null)
{
transaction.Rollback();
}
if (connection.State != ConnectionState.Closed)
{
connection.Close();
}
}
return errorMessage.ToString();
}
I'm developing a cloud service (worker role) for collecting data from a number of instruments. These instruments report data randomly, every minute or so. The service itself is not performance critical and doesn't need to be asynchronous. The instruments are able to resend their data for up to an hour after a failed connection attempt.
I have tried several implementations for my cloud service including this one:
http://msdn.microsoft.com/en-us/library/system.net.sockets.tcplistener.stop(v=vs.110).aspx
But all of them hang my cloud server sooner or later (sometimes within an hour).
I suspect something is wrong with my code. I have a lot of logging in my code, but I get no errors. The service just stops receiving incoming connections.
In Azure portal it seems like the service is running fine. No error logs and no suspicious cpu usage etc.
If I restart the service it will run fine again until it hangs next time.
Would be most grateful if someone could help me with this.
public class WorkerRole : RoleEntryPoint
{
private LoggingService _loggingService;
public override void Run()
{
_loggingService = new LoggingService();
StartListeningForIncommingTCPConnections();
}
private void StartListeningForIncommingTCPConnections()
{
TcpListener listener = null;
try
{
listener = new TcpListener(RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["WatchMeEndpoint"].IPEndpoint);
listener.Start();
while (true)
{
_loggingService.Log(SeverityLevel.Info, "Waiting for connection...");
var client = listener.AcceptTcpClient();
var remoteEndPoint = client.Client != null ? client.Client.RemoteEndPoint.ToString() : "Unknown";
_loggingService.Log(SeverityLevel.Info, String.Format("Connected to {0}", remoteEndPoint));
var netStream = client.GetStream();
var data = String.Empty;
using (var reader = new StreamReader(netStream, Encoding.ASCII))
{
data = reader.ReadToEnd();
}
_loggingService.Log(SeverityLevel.Info, "Received data: " + data);
ProcessData(data); //data is processed and stored in database (all resources are released when done)
client.Close();
_loggingService.Log(SeverityLevel.Info, String.Format("Connection closed for {0}", remoteEndPoint));
}
}
catch (Exception exception)
{
_loggingService.Log(SeverityLevel.Error, exception.Message);
}
finally
{
if (listener != null)
listener.Stop();
}
}
private void ProcessData(String data)
{
try
{
var processor = new Processor();
var lines = data.Split('\n');
foreach (var line in lines)
processor.ProcessLine(line);
processor.ProcessMessage();
}
catch (Exception ex)
{
_loggingService.Log(SeverityLevel.Error, ex.Message);
throw new Exception(ex.InnerException.Message);
}
}
}
One strange observation I just made:
I checked the log recently and no instrument has connected for the last 30 minutes (which indicates that the service is down).
I connected to the service via a TCP client I've written myself and uploaded some test data.
This worked fine.
When I checked the log again my test data had been stored.
The strange thing is that 4 other instruments had connected at about the same time and sent their data successfully.
Why couldn't they connect by themselves before I connected with my test client?
Also, what does this setting in .csdef do for an InputEndpoint, idleTimeoutInMinutes?
===============================================
Edit:
For a couple of days now my cloud service has been running successfully.
Unfortunately, this morning the last log entry was from this line:
_loggingService.Log(SeverityLevel.Info, String.Format("Connected to {0}", remoteEndPoint));
No other connections could be made after this. Not even from my own test TCP client (I didn't get any error though, but no data was stored and there were no new logs).
This makes me think that following code causes the service to hang:
var netStream = client.GetStream();
var data = String.Empty;
using (var reader = new StreamReader(netStream, Encoding.ASCII))
{
data = reader.ReadToEnd();
}
I've read somewhere that StreamReader's ReadToEnd() could hang. Is this possible?
I have now changed this piece of code to this:
int i;
var bytes = new Byte[256];
var data = new StringBuilder();
const int dataLimit = 10;
var dataCount = 0;
while ((i = netStream.Read(bytes, 0, bytes.Length)) != 0)
{
data.Append(Encoding.ASCII.GetString(bytes, 0, i));
if (dataCount >= dataLimit)
{
_loggingService.Log(SeverityLevel.Error, "Reached data limit");
break;
}
dataCount++;
}
Another explanation could be something hanging in the database. I use the SqlConnection and SqlCommand classes to read and write to my database. I always close my connection afterwards (finally block).
SqlConnection and SqlCommand should have default timeouts, right?
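As far as I know the ADO.NET defaults are a 15-second connect timeout and a 30-second command timeout; they can be made explicit like this:
using (var connection = new SqlConnection("connection string goes here;Connect Timeout=15"))
using (var command = new SqlCommand("SELECT 1", connection))
{
    command.CommandTimeout = 30; // seconds; 0 would mean wait indefinitely
    connection.Open();
    command.ExecuteScalar();
}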
===============================================
Edit:
After some more debugging I found out that when the service wasn't responding it "hung" on this line of code:
while ((i = netStream.Read(bytes, 0, bytes.Length)) != 0)
After some digging I found out that the NetworkStream class and its read methods can actually hang, even though MS states otherwise.
NetworkStream read hangs
I've now changed my code into this:
Thread thread = null;
var task = Task.Factory.StartNew(() =>
{
thread = Thread.CurrentThread;
while ((i = netStream.Read(bytes, 0, bytes.Length)) != 0)
{
// Translate data bytes to a ASCII string.
data.Append(Encoding.ASCII.GetString(bytes, 0, i));
}
streamReadSucceeded = true;
});
task.Wait(5000);
if (streamReadSucceeded)
{
//Process data
}
else
{
thread.Abort();
}
Hopefully this will stop the hanging.
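An alternative I am considering (just a sketch; the 5-second value is arbitrary) is to bound the synchronous read with NetworkStream.ReadTimeout instead of aborting a thread:
netStream.ReadTimeout = 5000; // milliseconds; a synchronous Read throws an IOException when it elapses
try
{
    int read;
    var buffer = new byte[256];
    while ((read = netStream.Read(buffer, 0, buffer.Length)) != 0)
    {
        data.Append(Encoding.ASCII.GetString(buffer, 0, read));
    }
    streamReadSucceeded = true;
}
catch (IOException)
{
    _loggingService.Log(SeverityLevel.Error, "Read timed out");
}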
I'd say that part of your problem is that you are processing your data on the thread that listens for connections from clients. This would prevent new clients from connecting if another client has started a long-running operation of some type. I'd suggest you defer your processing to worker threads, thus freeing the "listener" thread to accept new connections.
Another problem you could be experiencing: if your service throws an error, the service will stop accepting connections as well.
private static void ListenForClients()
{
tcpListener.Start();
while (true)
{
TcpClient client = tcpListener.AcceptTcpClient();
Thread clientThread = new Thread(new ParameterizedThreadStart(HandleClientComm));
clientThread.Start(client);
}
}
private static void HandleClientComm(object obj)
{
try
{
using(TcpClient tcpClient = (TcpClient)obj)
{
Console.WriteLine("Got Client...");
using (NetworkStream clientStream = tcpClient.GetStream())
using (StreamWriter writer = new StreamWriter(clientStream))
using(StreamReader reader = new StreamReader(clientStream))
{
//do stuff
}
}
}
catch(Exception ex)
{
}
}
I'm in ASP.NET MVC and am (mostly) using Entity Framework. I want to call a stored procedure without waiting for it to finish. My current approach is to use a background worker. Trouble is, it works fine without using the background worker, but fails to execute with it.
In the DoWork event handler when I call
command.ExecuteNonQuery();
it just "disappears" (never gets to next line in debug mode).
Anyone have tips on calling a sproc asynchronously? BTW, it'll be SQL Azure in production if that matters; for now SQL Server 2008.
public void ExecAsyncUpdateMemberScoreRecalc(MemberScoreRecalcInstruction instruction)
{
var bw = new BackgroundWorker();
bw.DoWork += new DoWorkEventHandler(AsyncUpdateMemberScoreRecalc_DoWork);
bw.WorkerReportsProgress = false;
bw.WorkerSupportsCancellation = false;
bw.RunWorkerAsync(instruction);
}
private void AsyncUpdateMemberScoreRecalc_DoWork(object sender, DoWorkEventArgs e)
{
var instruction = (MemberScoreRecalcInstruction)e.Argument;
string connectionString = string.Empty;
using (var sprocEntities = new DSAsyncSprocEntities()) // getting the connection string
{
connectionString = sprocEntities.Connection.ConnectionString;
}
using (var connection = new EntityConnection(connectionString))
{
connection.Open();
EntityCommand command = connection.CreateCommand();
command.CommandText = DSConstants.Sproc_MemberScoreRecalc;
command.CommandType = CommandType.StoredProcedure;
command.Parameters.AddWithValue(DSConstants.Sproc_MemberScoreRecalc_Param_SageUserId, instruction.SageUserId);
command.Parameters.AddWithValue(DSConstants.Sproc_MemberScoreRecalc_Param_EventType, instruction.EventType);
command.Parameters.AddWithValue(DSConstants.Sproc_MemberScoreRecalc_Param_EventCode, instruction.EventCode);
command.Parameters.AddWithValue(DSConstants.Sproc_MemberScoreRecalc_Param_EventParamId, instruction.EventParamId);
int result = 0;
// NEVER RETURNS FROM RUNNING NEXT LINE (and never executes)... yet it works if I do the same thing directly in the main thread.
result = command.ExecuteNonQuery();
}
}
Add a try/catch around the call and see whether any exception is being thrown that is aborting the thread.
try {
result = command.ExecuteNonQuery();
} catch(Exception ex) {
// Log this error and, if needed, handle it; otherwise rethrow:
throw;
}