As the title suggests, is the following code atomic (in the ACID sense)? That is, if I call SaveChanges, will all the Product.Add INSERT statements be executed together (or rolled back if there is an error)?
using (DBEntities ctx = new DBEntities())
{
    for (int i = 0; i < 10; i++)
    {
        ctx.Products.Add(new Product("Product " + (i + 1)));
    }
    ctx.SaveChanges();
}
MSDN states:
SaveChanges operates within a transaction. SaveChanges will roll back
that transaction and throw an exception if any of the dirty
ObjectStateEntry objects cannot be persisted.
However, looking at the profiler, this doesn't seem to be the case. Am I required to wrap the block in a TransactionScope?
using (DBEntities ctx = new DBEntities())
{
    for (int i = 0; i < 10; i++)
    {
        ctx.Products.Add(new Product("Product " + (i + 1)));
    }
    ctx.SaveChanges();
}
This SaveChanges() call will run inside a transaction automatically; you don't need to wrap it in a new TransactionScope.
Related
I was wondering whether it's possible to wait for a callback before continuing a process.
I'm using a library that handles a future internally and then, if it was successful, invokes a callback; otherwise it handles the error internally with no callback.
Now I'm trying to use this library to create an instance, then fill it with random test data and then update that entity.
Map generateRandomizedInstance() {
  lib.createEntity((result1) {
    result1["a"] = generateRandomA();
    result1["b"] = generateRandomB();
    result1["c"] = generateRandomC();
    ...
    lib.updateEntity(result1, (result2) {
      // want to return this result2
      return result2;
    });
  });
}
This would be fine if I'm only creating one entity and updating it once, but I want to create lots of random data:
ButtonElement b = querySelector("button.create")..onClick.listen((e) {
  for (int i = 0; i < 500; i++) {
    generateRandomizedInstance();
  }
});
It doesn't take long for this code to crash spectacularly as the callbacks aren't coming back fast enough.
I've tried changing the method signature to
generateRandomizedInstance() async {
and then doing:
for (int i = 0; i < 500; i++) {
  print(await generateRandomizedInstance());
}
but that await syntax seems to be invalid, and I'm not completely sure how to wrap that callback code in some kind of Future that I can wait on before continuing to the next iteration of the loop.
I've tried a while loop at the end of generateRandomizedInstance that waits for a result variable to become non-null, but that locks up the browser, and since I don't always get a callback, in some cases it could cause an infinite loop.
Any ideas / suggestions on how to pause that for loop while waiting for the callback?
This should do what you want (note the import of dart:async, which Completer lives in):

import 'dart:async';

Future<Map> generateRandomizedInstance() {
  Completer<Map> c = new Completer<Map>();
  lib.createEntity((result1) {
    result1["a"] = generateRandomA();
    result1["b"] = generateRandomB();
    result1["c"] = generateRandomC();
    ...
    lib.updateEntity(result1, (result2) {
      // complete the future with result2 instead of returning it
      c.complete(result2);
    });
  });
  return c.future;
}
ButtonElement b = querySelector("button.create")..onClick.listen((e) async {
  for (int i = 0; i < 500; i++) {
    await generateRandomizedInstance();
  }
});
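For comparison, the same callback-to-future bridge can be sketched in Java (CallbackLib here is a hypothetical stand-in for a callback-style library, not the real API): a CompletableFuture plays the role of the Completer, and join() plays the role of await.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

// Hypothetical stand-in for a callback-style library.
class CallbackLib {
    static void createEntity(Consumer<String> onDone) {
        onDone.accept("entity");
    }
}

public class Main {
    // Bridge the callback into a future, as Completer.complete does in Dart.
    static CompletableFuture<String> generateInstance(int i) {
        CompletableFuture<String> c = new CompletableFuture<>();
        CallbackLib.createEntity(result -> c.complete(result + "-" + i));
        return c;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            // join() blocks on the future, like `await` in the Dart loop.
            System.out.println(generateInstance(i).join());
        }
    }
}
```

The key point carries across languages: the producer side completes the future exactly once from inside the callback, and the consumer side awaits it before starting the next iteration.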
I first initialize the BulkWriteOperation and add several inserts to it through a for loop, then call execute. I then reinitialize the BulkWriteOperation and try to add more inserts, but I keep getting:
java.lang.IllegalStateException: already executed
My Code:
BulkWriteOperation builder = coll.initializeOrderedBulkOperation();
for (int i = 0; i < 10; i++) {
    BasicDBObject doc = new BasicDBObject("Something", something);
    builder.insert(doc);
}
builder.execute();

builder = coll.initializeOrderedBulkOperation();
for (int i = 0; i < 10; i++) {
    BasicDBObject doc = new BasicDBObject("Something", something);
    builder.insert(doc);
}
builder.execute();
There isn't a way to reset an existing BulkWriteOperation object after it has been executed; you need to create a new one, like this:
builder = coll.initializeOrderedBulkOperation();
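The behaviour can be illustrated with a minimal stand-in class (a sketch, not the driver's actual implementation): like BulkWriteOperation, it refuses any use after execute(), so a fresh instance is needed for each batch.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of a one-shot bulk builder: one execute() per instance.
class OneShotBulk {
    private final List<String> pending = new ArrayList<>();
    private boolean executed = false;

    void insert(String doc) {
        if (executed) throw new IllegalStateException("already executed");
        pending.add(doc);
    }

    int execute() {
        if (executed) throw new IllegalStateException("already executed");
        executed = true;
        return pending.size(); // pretend the batch was written
    }
}

public class Main {
    public static void main(String[] args) {
        OneShotBulk builder = new OneShotBulk();
        for (int i = 0; i < 10; i++) builder.insert("doc" + i);
        System.out.println("first batch: " + builder.execute());

        // Reusing the executed instance fails, just like the driver:
        try {
            builder.insert("extra");
        } catch (IllegalStateException e) {
            System.out.println("reuse failed: " + e.getMessage());
        }

        // A fresh builder per batch works.
        builder = new OneShotBulk();
        for (int i = 0; i < 10; i++) builder.insert("doc" + i);
        System.out.println("second batch: " + builder.execute());
    }
}
```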
I want to know why code fragment 1 is faster than code fragment 2 when using POCOs with Devart dotConnect for Oracle.
I tried it with 100,000 records and Code 1 is far faster than Code 2. Why? I thought SaveChanges would clear the buffer, making it faster since there is only one connection. Am I wrong?
Code 1:
for (var i = 0; i < 100000; i++)
{
    using (var ctx = new MyDbContext())
    {
        MyObj obj = new MyObj();
        obj.Id = i;
        obj.Name = "Foo " + i;
        ctx.MyObjects.Add(obj);
        ctx.SaveChanges();
    }
}
Code 2:
using (var ctx = new MyDbContext())
{
    for (var i = 0; i < 100000; i++)
    {
        MyObj obj = new MyObj();
        obj.Id = i;
        obj.Name = "Foo " + i;
        ctx.MyObjects.Add(obj);
        ctx.SaveChanges();
    }
}
The first code snippet is faster because the same connection is taken from the pool every time, so there are no performance losses from re-opening it.
In the second case, 100,000 objects are gradually added to the context. Slow snapshot-based change tracking is used (when there are no dynamic proxies), which means every SaveChanges() has to check whether any of the cached objects has changed. Each subsequent iteration therefore takes more and more time.
We recommend trying the following approach; it should perform better than either of the ones above:
using (var ctx = new MyDbContext())
{
    for (var i = 0; i < 100000; i++)
    {
        MyObj obj = new MyObj();
        obj.Id = i;
        obj.Name = "Foo " + i;
        ctx.MyObjects.Add(obj);
    }
    ctx.SaveChanges();
}
EDIT
If you execute a large number of operations within one SaveChanges(), it is also useful to configure the Entity Framework behaviour of the Devart dotConnect for Oracle provider:
// Turn on the Batch Updates mode:
var config = OracleEntityProviderConfig.Instance;
config.DmlOptions.BatchUpdates.Enabled = true;
// If necessary, enable the mode of re-using parameters with the same values:
config.DmlOptions.ReuseParameters = true;
// If the object has many nullable properties and a significant share of them
// are unset (i.e., null), omitting the explicit insert of NULL values greatly
// decreases the size of the generated SQL:
config.DmlOptions.InsertNullBehaviour = InsertNullBehaviour.Omit;
Only some of the options are mentioned here. The full list is available in our article:
http://www.devart.com/blogs/dotconnect/index.php/new-features-of-entity-framework-support-in-dotconnect-providers.html
Am I wrong to assume that when SaveChanges() is called, all the
objects in cache are stored to DB and the cache is cleared, so each
loop is independent?
SaveChanges() sends and commits all changes to the database, but change tracking continues for all entities attached to the context. The next SaveChanges(), if snapshot-based change tracking is used, will again start the long process of checking the values of every property of every tracked object (changed or not?).
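The cost difference can be sketched with a toy model (a hypothetical simplification, not EF code): if each save re-scans every tracked object, saving once per object costs quadratically many scans in total, while saving once at the end costs linearly many.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of snapshot-based change tracking: save() re-scans every
// tracked object to detect changes.
class ToyContext {
    final List<Object> tracked = new ArrayList<>();
    long scans = 0;

    void add(Object o) { tracked.add(o); }

    void save() {
        // snapshot comparison: check every tracked object for changes
        scans += tracked.size();
    }
}

public class Main {
    public static void main(String[] args) {
        int n = 1000;

        // Save after every add: 1 + 2 + ... + n scans in total.
        ToyContext perItem = new ToyContext();
        for (int i = 0; i < n; i++) { perItem.add(new Object()); perItem.save(); }

        // Add everything, then save once: n scans in total.
        ToyContext once = new ToyContext();
        for (int i = 0; i < n; i++) once.add(new Object());
        once.save();

        System.out.println("save per item: " + perItem.scans + " scans"); // 500500
        System.out.println("save once: " + once.scans + " scans");        // 1000
    }
}
```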
I have an issue implementing CCR with SQL. When I step through my code, the updates and inserts I am trying to execute work great. But when I run through my interface without any breakpoints, it appears to work and shows the inserts and updates, yet at the end of the run nothing has actually been written to the database.
I then added a pause to my code every time I pull a new thread from my pool, and it works... but that defeats the purpose of async coding, right? I want my interface to be faster, not slower...
Any suggestions... here is part of my code:
I use two helper classes to set my ports and get a response back...
/// <summary>
/// Gets the Reader, requires connection to be managed
/// </summary>
public static PortSet<Int32, Exception> GetReader(SqlCommand sqlCommand)
{
    Port<Int32> portResponse = null;
    Port<Exception> portException = null;
    GetReaderResponse(sqlCommand, ref portResponse, ref portException);
    return new PortSet<Int32, Exception>(portResponse, portException);
}
// Wrapper for SqlCommand's GetResponse
public static void GetReaderResponse(SqlCommand sqlCom,
    ref Port<Int32> portResponse, ref Port<Exception> portException)
{
    EnsurePortsExist(ref portResponse, ref portException);
    sqlCom.BeginExecuteNonQuery(ApmResultToCcrResultFactory.Create(
        portResponse, portException,
        delegate(IAsyncResult ar) { return sqlCom.EndExecuteNonQuery(ar); }), null);
}
then I do something like this to queue up my calls...
DispatcherQueue queue = CreateDispatcher();
String[] commands = new String[2];
Int32 result = 0;
commands[0] = "exec someupdateStoredProcedure";
commands[1] = "exec someInsertStoredProcedure '" + Settings.Default.RunDate.ToString() + "'";

for (Int32 i = 0; i < commands.Length; i++)
{
    using (SqlConnection connSP = new SqlConnection(Settings.Default.nbfConn + ";MultipleActiveResultSets=true;Async=true"))
    using (SqlCommand cmdSP = new SqlCommand())
    {
        connSP.Open();
        cmdSP.Connection = connSP;
        cmdSP.CommandTimeout = 150;
        cmdSP.CommandText = "set arithabort on; " + commands[i];
        Arbiter.Activate(queue, Arbiter.Choice(ApmToCcrAdapters.GetReader(cmdSP),
            delegate(Int32 reader) { result = reader; },
            delegate(Exception e) { result = 0; throw new Exception(e.Message); }));
    }
}
where ApmToCcrAdapters is the class name where my helper methods are...
The problem is that when I pause my code right after the call to Arbiter.Activate and check my database, everything looks fine... but if I remove the pause and run my code straight through, nothing happens to the database, and no exceptions are thrown either...
The problem here is that you are calling Arbiter.Activate within the scope of your two using blocks. Don't forget that the CCR task you create is queued while the current thread continues... right past the end of the using blocks. You've created a race condition: the Choice must execute before connSP and cmdSP are disposed, and that only happens when you interfere with the thread timings, as you observed while debugging.
If instead you were to deal with disposal manually in the handler delegates for the Choice, this problem would no longer occur, however this makes for brittle code where it's easy to overlook disposal.
I'd recommend implementing the CCR iterator pattern and collecting results with a MultipleItemReceive so that you can keep your using statements. It makes for cleaner code. Off the top of my head, it would look something like this:
private IEnumerator<ITask> QueryIterator(
    string command,
    PortSet<Int32, Exception> resultPort)
{
    using (SqlConnection connSP =
        new SqlConnection(Settings.Default.nbfConn
            + ";MultipleActiveResultSets=true;Async=true"))
    using (SqlCommand cmdSP = new SqlCommand())
    {
        connSP.Open();
        cmdSP.Connection = connSP;
        cmdSP.CommandTimeout = 150;
        cmdSP.CommandText = "set arithabort on; " + command;
        yield return Arbiter.Choice(ApmToCcrAdapters.GetReader(cmdSP),
            delegate(Int32 reader) { resultPort.Post(reader); },
            delegate(Exception e) { resultPort.Post(e); });
    }
}
and you could use it something like this:
var resultPort = new PortSet<Int32, Exception>();
foreach (var command in commands)
{
    Arbiter.Activate(queue,
        Arbiter.FromIteratorHandler(() => QueryIterator(command, resultPort))
    );
}
Arbiter.Activate(queue,
    Arbiter.MultipleItemReceive(
        resultPort,
        commands.Count(),
        (results, exceptions) => {
            // everything is done and you've got two
            // collections here, results and exceptions,
            // to process as you want
        }
    )
);
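The underlying race can be reproduced in miniature in Java terms (a sketch with a toy Conn class, not CCR or SqlConnection): a task is queued inside a resource scope, the scope exits and disposes the resource, and the task then fails; moving ownership of the resource into the task, as the iterator pattern above does, fixes it.

```java
import java.util.concurrent.*;

// Toy disposable resource standing in for SqlConnection.
class Conn implements AutoCloseable {
    volatile boolean open = true;
    int query() {
        if (!open) throw new IllegalStateException("connection disposed");
        return 42;
    }
    @Override public void close() { open = false; }
}

public class Main {
    public static void main(String[] args) throws Exception {
        ExecutorService queue = Executors.newSingleThreadExecutor();
        CountDownLatch gate = new CountDownLatch(1);

        // Broken shape: the task is queued inside the resource scope, then
        // the scope exits and disposes the connection before the task runs
        // -- the same race as queuing the Choice inside the using blocks.
        Future<Integer> broken = null;
        try (Conn c = new Conn()) {
            broken = queue.submit(() -> { gate.await(); return c.query(); });
        } // c is disposed here while the task is still queued
        gate.countDown();
        try {
            broken.get();
            System.out.println("broken version succeeded (unexpected)");
        } catch (ExecutionException e) {
            System.out.println("broken version failed: " + e.getCause().getMessage());
        }

        // Fixed shape: the task owns the resource for its whole lifetime.
        Future<Integer> fixed = queue.submit(() -> {
            try (Conn c = new Conn()) {
                return c.query();
            }
        });
        System.out.println("fixed version result: " + fixed.get());
        queue.shutdown();
    }
}
```

The latch only makes the race deterministic for the demo; in the real code the timing simply depends on when the dispatcher gets around to the queued task.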
How can I update a column value (programmatically) in a row of a DataTable from the RowChanged event, without triggering an infinite loop (which I currently get)?
Note I do not want to use the DataColumn.Expression property.
For example the following gives me a recursive loop and stack overflow error:
DataColumn dc = new DataColumn("OverallSize", typeof(long));
DT_Webfiles.Columns.Add(dc);
DT_Webfiles.RowChanged += new DataRowChangeEventHandler(DT_Row_Changed);

private static void DT_Row_Changed(object sender, DataRowChangeEventArgs e)
{
    Console.Out.WriteLine("DT_Row_Changed - Size = " + e.Row["OverallSize"]);
    e.Row["OverallSize"] = e.Row["OverallSize"] ?? 0;
    e.Row["OverallSize"] = (long)e.Row["OverallSize"] + 1;
}
thanks
PS. In fact, using the RowChanging event (rather than RowChanged) makes sense to me (i.e., change the value before it is saved, so to speak). However, when I try this I get a "Cannot change a proposed value in the RowChanging event" exception at the following line in the handler:
e.Row["OverallSize"] = e.Row["OverallSize"] ?? 0;
You could unsubscribe within the event handler, though that would make your code non-thread-safe.
DataColumn dc = new DataColumn("OverallSize", typeof(long));
DT_Webfiles.Columns.Add(dc);
DT_Webfiles.RowChanged += new DataRowChangeEventHandler(DT_Row_Changed);

private static void DT_Row_Changed(object sender, DataRowChangeEventArgs e)
{
    e.RowChanged -= new DataRowChangeEventHandler(DT_Row_Changed);
    Console.Out.WriteLine("DT_Row_Changed - Size = " + e.Row["OverallSize"]);
    e.Row["OverallSize"] = e.Row["OverallSize"] ?? 0;
    e.Row["OverallSize"] = (long)e.Row["OverallSize"] + 1;
    e.RowChanged += new DataRowChangeEventHandler(DT_Row_Changed);
}
Try this instead (DataRowChangeEventArgs has no RowChanged event; you have to detach from the row's table):

e.Row.Table.RowChanged -= new DataRowChangeEventHandler(DT_Row_Changed);

and

e.Row.Table.RowChanged += new DataRowChangeEventHandler(DT_Row_Changed);
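An alternative to unsubscribing and re-subscribing is a re-entrancy guard: a flag that suppresses notifications triggered from inside the handler itself. Sketched here with a toy observable cell in Java (a hypothetical model, not the DataTable API):

```java
import java.util.function.IntConsumer;

// Toy change-notification model: setting a value fires a listener, and the
// listener itself sets a value, which would recurse forever without a guard.
class Cell {
    private int value;
    private IntConsumer onChanged = v -> {};
    private boolean inHandler = false;   // re-entrancy guard

    void setOnChanged(IntConsumer l) { onChanged = l; }

    void set(int v) {
        value = v;
        if (inHandler) return;           // skip changes we caused ourselves
        inHandler = true;
        try { onChanged.accept(value); }
        finally { inHandler = false; }
    }

    int get() { return value; }
}

public class Main {
    public static void main(String[] args) {
        Cell size = new Cell();
        // The handler adjusts the value it was notified about, which
        // re-enters set() but does not recurse further.
        size.setOnChanged(v -> size.set(v + 1));
        size.set(10);
        System.out.println(size.get()); // 11
    }
}
```

Unlike the unsubscribe approach, the guard never leaves a window where the handler is detached, which sidesteps the thread-safety caveat mentioned above.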