Replay(1) returning result from disconnected source - system.reactive

I have an Observable stream (representing a network connection of data values) which I'm Replaying and RefCounting. The underlying source disconnects as expected when the subscriber count hits zero, but Replay(1) still hands that source's last value to the next subscriber, which I consider stale. I've fixed this with a custom implementation of ReplaySubject that forgets the last value when the last observer disconnects, but it's a leaky abstraction, and I'm wondering whether there's a more idiomatic way to solve this problem.
Example failing test:
[Test]
public async Task Replay_DoesNotReplay_AfterRefCountDisconnection()
{
    int i = 0;
    var replay = Observable.Defer(
        () =>
        {
            i++;
            return new[] { i, i }.ToObservable();
        })
        .Replay(1)
        .RefCount();

    var first = await replay.Take(1).ToTask();
    var second = await replay.Take(1).ToTask();

    Assert.AreEqual(1, first);
    Assert.AreEqual(2, second);
}
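For reference, here is a rough sketch of the kind of workaround described above; it is not the asker's actual code, and the name ReplayNoStale is invented. The idea is to multicast through a fresh ReplaySubject(1) every time the subscriber count goes from zero to one, so nothing from a previous connection can be replayed.

using System;
using System.Reactive.Disposables;
using System.Reactive.Linq;
using System.Reactive.Subjects;

public static class ReplayNoStaleExtensions
{
    // Sketch only: a ref-counted multicast that uses a *new* ReplaySubject(1)
    // per connection, so a stale value cannot survive a disconnect.
    public static IObservable<T> ReplayNoStale<T>(this IObservable<T> source)
    {
        var gate = new object();
        var count = 0;
        IConnectableObservable<T> connectable = null;
        IDisposable connection = null;

        return Observable.Create<T>(observer =>
        {
            IDisposable subscription;
            lock (gate)
            {
                if (count++ == 0)
                {
                    // Fresh replay subject for this connection.
                    connectable = source.Multicast(new ReplaySubject<T>(1));
                    subscription = connectable.Subscribe(observer);
                    connection = connectable.Connect();
                }
                else
                {
                    subscription = connectable.Subscribe(observer);
                }
            }

            return Disposable.Create(() =>
            {
                lock (gate)
                {
                    subscription.Dispose();
                    if (--count == 0)
                    {
                        // Last observer gone: drop the connection and the buffered value.
                        connection.Dispose();
                        connectable = null;
                    }
                }
            });
        });
    }
}

With this in place the failing test above passes, because the second subscription triggers a new connection with an empty replay buffer.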

Related

EF Core ChangeTracker entities snapshot and reset to that snapshot

Is it possible to take a snapshot of the current state of EF Core's ChangeTracker and then reset later to that snapshot if needed?
Let's say I want to make the following code into reality:
public async Task ExecuteTransactionAsync(DbContext context)
{
    var executionStrategy = new SqlServerRetryingExecutionStrategy(context);
    var retries = 0;
    object snapshot = null;
    await executionStrategy.ExecuteAsync(async () =>
    {
        var txOptions = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadCommitted
        };
        using var transaction = new TransactionScope(TransactionScopeOption.RequiresNew, txOptions, TransactionScopeAsyncFlowOption.Enabled);

        if (retries > 0 && snapshot != null)
        {
            // This method in reality doesn't exist.
            context.ChangeTracker.ResetToSnapshot(snapshot);
        }
        // This method in reality doesn't exist.
        snapshot = context.ChangeTracker.TakeSnapshot();
        retries += 1;

        // Let's imagine that we are doing an insert operation on the context.
        context.DoInsert();
        await context.SaveChangesAsync(false);

        // Let's imagine that we are doing an update operation on the context.
        context.DoUpdate();
        // Let's say this SaveChanges fails the first time and the transaction will retry.
        await context.SaveChangesAsync(false);

        transaction.Complete();
    });
}
Why would this be useful?
I have found it hard to work with the ChangeTracker and retrying transactions. The transaction itself of course works fine; the changes are rolled back on the database side between retries. However, the ChangeTracker doesn't seem to have a similar rollback functionality.
There is the SaveChanges(false) option, which is supposed to be used with transactions in order to preserve the state of the ChangeTracker between retries. However, if I had an insert on the first run of the transaction, on the second run (the first retry) another entity would be added to the ChangeTracker, and then on a successful SaveChanges two entities would get inserted into the database, while my expectation from the code was to have one entity inserted.
public async Task ExecuteTransactionAsync(DbContext context)
{
    var executionStrategy = new SqlServerRetryingExecutionStrategy(context);
    await executionStrategy.ExecuteAsync(async () =>
    {
        var txOptions = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadCommitted
        };
        using var transaction = new TransactionScope(TransactionScopeOption.RequiresNew, txOptions, TransactionScopeAsyncFlowOption.Enabled);

        // On the first retry, the ChangeTracker will already have a new entity added.
        // DoInsert will add another one and now I will have two new entities
        // when my expectation was just one.
        context.DoInsert();
        await context.SaveChangesAsync(false);

        context.DoUpdate();
        // Let's say this SaveChanges fails the first time and the transaction will retry.
        await context.SaveChangesAsync(false);

        transaction.Complete();
    });
}
A similar problem would occur when updating an entity with the += or -= operators. If, for example, an entity had Price = 10 and DoUpdate did Price -= 2, then after the first retry (so after the code wrapped in the transaction had run twice) I would have Price = 6, because 2 was subtracted twice, when my expectation at the end of the transaction was Price = 8.
So it got me thinking that a total reset to some ChangeTracker state at some point in time would be extremely useful.
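EF Core doesn't ship a TakeSnapshot/ResetToSnapshot pair, but a rough hand-rolled approximation is possible for simple cases. The sketch below (the ChangeTrackerSnapshot type is invented for illustration) records each tracked entity's state and current values and later restores them, detaching anything tracked after the snapshot; it ignores navigations, identity resolution, and store-generated keys, so treat it as a starting point rather than a drop-in fix.

using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.ChangeTracking;

// Hypothetical helper, not an EF Core API.
// Usage: var snap = ChangeTrackerSnapshot.Take(context); ... snap.ResetTo(context);
public sealed class ChangeTrackerSnapshot
{
    private readonly List<(object Entity, EntityState State, PropertyValues Values)> _entries =
        new List<(object, EntityState, PropertyValues)>();

    public static ChangeTrackerSnapshot Take(DbContext context)
    {
        var snapshot = new ChangeTrackerSnapshot();
        foreach (var entry in context.ChangeTracker.Entries())
        {
            // Clone the current property values so later edits don't leak into the snapshot.
            snapshot._entries.Add((entry.Entity, entry.State, entry.CurrentValues.Clone()));
        }
        return snapshot;
    }

    public void ResetTo(DbContext context)
    {
        // Detach anything that started being tracked after the snapshot was taken.
        foreach (var entry in context.ChangeTracker.Entries().ToList())
        {
            if (!_entries.Any(e => ReferenceEquals(e.Entity, entry.Entity)))
                entry.State = EntityState.Detached;
        }

        // Restore recorded values and states (simplified: scalar properties only).
        foreach (var (entity, state, values) in _entries)
        {
            var entry = context.Entry(entity);
            entry.CurrentValues.SetValues(values);
            entry.State = state;
        }
    }
}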

Rate limiting observable [duplicate]

I would like to set up an Rx subscription that can respond to an event right away, and then ignore subsequent events that happen within a specified "cooldown" period.
The out of the box Throttle/Buffer methods respond only once the timeout has elapsed, which is not quite what I need.
Here is some code that sets up the scenario, and uses a Throttle (which isn't the solution I want):
class Program
{
    static Stopwatch sw = new Stopwatch();

    static void Main(string[] args)
    {
        var subject = new Subject<int>();
        var timeout = TimeSpan.FromMilliseconds(500);

        subject
            .Throttle(timeout)
            .Subscribe(DoStuff);

        var factory = new TaskFactory();

        sw.Start();

        factory.StartNew(() =>
        {
            Console.WriteLine("Batch 1 (no delay)");
            subject.OnNext(1);
        });

        factory.StartNewDelayed(1000, () =>
        {
            Console.WriteLine("Batch 2 (1s delay)");
            subject.OnNext(2);
        });

        factory.StartNewDelayed(1300, () =>
        {
            Console.WriteLine("Batch 3 (1.3s delay)");
            subject.OnNext(3);
        });

        factory.StartNewDelayed(1600, () =>
        {
            Console.WriteLine("Batch 4 (1.6s delay)");
            subject.OnNext(4);
        });

        Console.ReadKey();
        sw.Stop();
    }

    private static void DoStuff(int i)
    {
        Console.WriteLine("Handling {0} at {1}ms", i, sw.ElapsedMilliseconds);
    }
}
The output of running this right now is:
Batch 1 (no delay)
Handling 1 at 508ms
Batch 2 (1s delay)
Batch 3 (1.3s delay)
Batch 4 (1.6s delay)
Handling 4 at 2114ms
Note that batch 2 isn't handled (which is fine!) because Throttle waits for 500ms of silence before emitting. Batch 3 is also not handled (which is less alright, because it happened more than 500ms after batch 2) due to its proximity to batch 4.
What I'm looking for is something more like this:
Batch 1 (no delay)
Handling 1 at ~0ms
Batch 2 (1s delay)
Handling 2 at ~1000ms
Batch 3 (1.3s delay)
Batch 4 (1.6s delay)
Handling 4 at ~1600ms
Note that batch 3 wouldn't be handled in this scenario (which is fine!) because it occurs within 500ms of Batch 2.
EDIT:
Here is the implementation for the "StartNewDelayed" extension method that I use:
/// <summary>Creates a Task that will complete after the specified delay.</summary>
/// <param name="factory">The TaskFactory.</param>
/// <param name="millisecondsDelay">The delay after which the Task should transition to RanToCompletion.</param>
/// <returns>A Task that will be completed after the specified duration.</returns>
public static Task StartNewDelayed(
    this TaskFactory factory, int millisecondsDelay)
{
    return StartNewDelayed(factory, millisecondsDelay, CancellationToken.None);
}

/// <summary>Creates a Task that will complete after the specified delay.</summary>
/// <param name="factory">The TaskFactory.</param>
/// <param name="millisecondsDelay">The delay after which the Task should transition to RanToCompletion.</param>
/// <param name="cancellationToken">The cancellation token that can be used to cancel the timed task.</param>
/// <returns>A Task that will be completed after the specified duration and that's cancelable with the specified token.</returns>
public static Task StartNewDelayed(this TaskFactory factory, int millisecondsDelay, CancellationToken cancellationToken)
{
    // Validate arguments
    if (factory == null) throw new ArgumentNullException("factory");
    if (millisecondsDelay < 0) throw new ArgumentOutOfRangeException("millisecondsDelay");

    // Create the timed task
    var tcs = new TaskCompletionSource<object>(factory.CreationOptions);
    var ctr = default(CancellationTokenRegistration);

    // Create the timer but don't start it yet. If we start it now,
    // it might fire before ctr has been set to the right registration.
    var timer = new Timer(self =>
    {
        // Clean up both the cancellation token and the timer, and try to transition to completed
        ctr.Dispose();
        ((Timer)self).Dispose();
        tcs.TrySetResult(null);
    });

    // Register with the cancellation token.
    if (cancellationToken.CanBeCanceled)
    {
        // When cancellation occurs, cancel the timer and try to transition to cancelled.
        // There could be a race, but it's benign.
        ctr = cancellationToken.Register(() =>
        {
            timer.Dispose();
            tcs.TrySetCanceled();
        });
    }

    if (millisecondsDelay > 0)
    {
        // Start the timer and hand back the task...
        timer.Change(millisecondsDelay, Timeout.Infinite);
    }
    else
    {
        // Just complete the task, and keep execution on the current thread.
        ctr.Dispose();
        tcs.TrySetResult(null);
        timer.Dispose();
    }

    return tcs.Task;
}
Here's my approach. It's similar to others that have gone before, but it doesn't suffer the over-zealous window production problem.
The desired function works a lot like Observable.Throttle but emits qualifying events as soon as they arrive rather than delaying for the duration of the throttle or sample period. For a given duration after a qualifying event, subsequent events are suppressed.
Given as a testable extension method:
public static class ObservableExtensions
{
    public static IObservable<T> SampleFirst<T>(
        this IObservable<T> source,
        TimeSpan sampleDuration,
        IScheduler scheduler = null)
    {
        scheduler = scheduler ?? Scheduler.Default;
        return source.Publish(ps =>
            ps.Window(() => ps.Delay(sampleDuration, scheduler))
              .SelectMany(x => x.Take(1)));
    }
}
The idea is to use the overload of Window that creates non-overlapping windows using a windowClosingSelector that uses the source time-shifted back by the sampleDuration. Each window will therefore: (a) be closed by the first element in it and (b) remain open until a new element is permitted. We then simply select the first element from each window.
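Applied to the question's setup (a sketch using its subject and timeout variables), the throttled subscription would simply become:

subject
    .SampleFirst(timeout)
    .Subscribe(DoStuff);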
Rx 1.x Version
The Publish extension method used above is not available in Rx 1.x. Here is an alternative:
public static class ObservableExtensions
{
    public static IObservable<T> SampleFirst<T>(
        this IObservable<T> source,
        TimeSpan sampleDuration,
        IScheduler scheduler = null)
    {
        scheduler = scheduler ?? Scheduler.Default;
        var sourcePub = source.Publish().RefCount();
        return sourcePub.Window(() => sourcePub.Delay(sampleDuration, scheduler))
                        .SelectMany(x => x.Take(1));
    }
}
The solution I found after a lot of trial and error was to replace the throttled subscription with the following:
subject
    .Window(() => { return Observable.Interval(timeout); })
    .SelectMany(x => x.Take(1))
    .Subscribe(i => DoStuff(i));
Edited to incorporate Paul's clean-up.
Awesome solution Andrew! We can take this a step further though and clean up the inner Subscribe:
subject
    .Window(() => { return Observable.Interval(timeout); })
    .SelectMany(x => x.Take(1))
    .Subscribe(DoStuff);
The initial answer I posted has a flaw: namely that the Window method, when used with an Observable.Interval to denote the end of the window, sets up an infinite series of 500ms windows. What I really need is a window that starts when the first result is pumped into the subject, and ends after the 500ms.
My sample data masked this problem because the data broke down nicely into the windows that were already going to be created. (i.e. 0-500ms, 501-1000ms, 1001-1500ms, etc.)
Consider instead this timing:
factory.StartNewDelayed(300, () =>
{
    Console.WriteLine("Batch 1 (300ms delay)");
    subject.OnNext(1);
});

factory.StartNewDelayed(700, () =>
{
    Console.WriteLine("Batch 2 (700ms delay)");
    subject.OnNext(2);
});

factory.StartNewDelayed(1300, () =>
{
    Console.WriteLine("Batch 3 (1.3s delay)");
    subject.OnNext(3);
});

factory.StartNewDelayed(1600, () =>
{
    Console.WriteLine("Batch 4 (1.6s delay)");
    subject.OnNext(4);
});
What I get is:
Batch 1 (300ms delay)
Handling 1 at 356ms
Batch 2 (700ms delay)
Handling 2 at 750ms
Batch 3 (1.3s delay)
Handling 3 at 1346ms
Batch 4 (1.6s delay)
Handling 4 at 1644ms
This is because the windows begin at 0ms, 500ms, 1000ms, and 1500ms and so each Subject.OnNext fits nicely into its own window.
What I want is:
Batch 1 (300ms delay)
Handling 1 at ~300ms
Batch 2 (700ms delay)
Batch 3 (1.3s delay)
Handling 3 at ~1300ms
Batch 4 (1.6s delay)
After a lot of struggling and an hour banging on it with a co-worker, we arrived at a better solution using pure Rx and a single local variable:
bool isCoolingDown = false;

subject
    .Where(_ => !isCoolingDown)
    .Subscribe(
        i =>
        {
            DoStuff(i);
            isCoolingDown = true;
            Observable
                .Interval(cooldownInterval)
                .Take(1)
                .Subscribe(_ => isCoolingDown = false);
        });
Our assumption is that calls to the subscription method are synchronized. If they are not, then a simple lock could be introduced.
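If OnNext can in fact arrive from multiple threads, a gated variant along these lines (a sketch only, keeping the same cooldown behaviour) would do:

// Sketch: same cooldown idea, but guarded by a lock so concurrent OnNext calls
// can't both slip past the check.
object gate = new object();
bool isCoolingDown = false;

subject.Subscribe(i =>
{
    lock (gate)
    {
        if (isCoolingDown) return;
        isCoolingDown = true;
    }

    DoStuff(i);

    Observable
        .Timer(cooldownInterval)
        .Subscribe(_ => { lock (gate) { isCoolingDown = false; } });
});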
Use .Scan()!
This is what I use for throttling when I need the first hit (after a certain period) immediately, but want to delay (and group/ignore) any subsequent hits.
Basically it works like Throttle, but fires immediately if the previous OnNext was >= interval ago; otherwise it schedules the value at exactly interval from the previous hit. And of course, if multiple hits come within the 'cooling down' period, the additional ones are ignored, just like Throttle does.
The difference with your use case is that if you get an event at 0 ms and 100 ms, they will both be handled (at 0 ms and 500 ms), which might be what you actually want (otherwise, the accumulator is easy to adapt to ignore ANY hit closer than interval to the previous one).
public static IObservable<T> QuickThrottle<T>(this IObservable<T> src, TimeSpan interval, IScheduler scheduler)
{
    return src
        .Scan(new ValueAndDueTime<T>(), (prev, id) => AccumulateForQuickThrottle(prev, id, interval, scheduler))
        .Where(vd => !vd.Ignore)
        .SelectMany(sc => Observable.Timer(sc.DueTime, scheduler).Select(_ => sc.Value));
}

private static ValueAndDueTime<T> AccumulateForQuickThrottle<T>(ValueAndDueTime<T> prev, T value, TimeSpan interval, IScheduler s)
{
    var now = s.Now;

    // Ignore this completely if there is already a future item scheduled,
    // but do keep the dueTime for accumulation!
    if (prev.DueTime > now) return new ValueAndDueTime<T> { DueTime = prev.DueTime, Ignore = true };

    // Schedule this item at least one interval after the previous one
    var min = prev.DueTime + interval;
    var nextTime = (now < min) ? min : now;
    return new ValueAndDueTime<T> { DueTime = nextTime, Value = value };
}

private class ValueAndDueTime<T>
{
    public DateTimeOffset DueTime;
    public T Value;
    public bool Ignore;
}
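Hypothetical usage against the question's subject (a sketch; the 500 ms interval and Scheduler.Default are just example arguments):

subject
    .QuickThrottle(TimeSpan.FromMilliseconds(500), Scheduler.Default)
    .Subscribe(DoStuff);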
I've got another one for you. This one doesn't use Repeat() or Interval(), so it might be what you are after:
subject
    .Window(() => Observable.Timer(TimeSpan.FromMilliseconds(500)))
    .SelectMany(x => x.Take(1));
Well, the most obvious thing would be to use Repeat() here. However, as far as I know, Repeat() might introduce problems where notifications disappear between the moment the stream stops and the moment we subscribe again. In practice this has never been a problem for me.
subject
    .Take(1)
    .Concat(Observable.Empty<long>().Delay(TimeSpan.FromMilliseconds(500)))
    .Repeat();
Remember to replace long with the actual type of your source.
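Applied to the question's Subject<int>, that would look something like this (sketch):

subject
    .Take(1)
    .Concat(Observable.Empty<int>().Delay(TimeSpan.FromMilliseconds(500)))
    .Repeat()
    .Subscribe(DoStuff);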
UPDATE:
Updated query to use Concat instead of Merge
I have stumbled upon this question while trying to re-implement my own solution to the same or similar problem using .Window
Take a look, it seems to be the same as this one and solved quite elegantly:
https://stackoverflow.com/a/3224723/58463
It's an old post, but no answer really filled my needs, so I'm giving my own solution:
public static IObservable<T> ThrottleOrImmediate<T>(this IObservable<T> source, TimeSpan delay, IScheduler scheduler)
{
    return Observable.Create<T>((obs, token) =>
    {
        // Next item cannot be sent before that time
        DateTime nextItemTime = default;

        return Task.FromResult(source.Subscribe(async item =>
        {
            var currentTime = DateTime.Now;

            // If we have already reached the next item time
            if (currentTime - nextItemTime >= TimeSpan.Zero)
            {
                // The following item will be sent only after the set delay
                nextItemTime = currentTime + delay;
                // Send the current item with the scheduler
                scheduler.Schedule(() => obs.OnNext(item));
            }
            // There is still time before we can send an item
            else
            {
                // We schedule the time for the following item
                nextItemTime = currentTime + delay;
                try
                {
                    await Task.Delay(delay, token);
                }
                catch (TaskCanceledException)
                {
                    return;
                }

                // If the next item schedule was changed by another item then we stop here
                if (nextItemTime > currentTime + delay)
                    return;
                else
                {
                    // Set the next possible time for an item and send the item with the scheduler
                    nextItemTime = currentTime + delay;
                    scheduler.Schedule(() => obs.OnNext(item));
                }
            }
        }));
    });
}
The first item is sent immediately, then subsequent items are throttled; if a later item arrives after the delay has elapsed, it is sent immediately too.
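Hypothetical usage with the question's subject (a sketch; the interval and scheduler are example arguments):

subject
    .ThrottleOrImmediate(TimeSpan.FromMilliseconds(500), Scheduler.Default)
    .Subscribe(DoStuff);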

How to limit API calls per second with angular2

I have an API limit of 10 calls per second (though thousands per day). However, when I run this function (called for each style ID of an object, more than 10 per second):
getStyleByID(styleID: number): void {
  this._EdmundsAPIService.getStyleByID(styleID).subscribe(
    style => { this.style.push(style); },
    error => this.errorMessage = <any>error);
}
from this function (only 1 call, used onInit):
getStylesWithoutYear(): void {
  this._EdmundsAPIService.getStylesWithoutYear(this.makeNiceName, this.modelNiceName, this.modelCategory)
    .subscribe(
      styles => {
        this.styles = styles;
        this.styles.years.forEach(year =>
          year.styles.forEach(style =>
            this.getStyleByID(style.id)));
        console.log(this.styles);
      },
      error => this.errorMessage = <any>error);
}
It makes more than 10 calls a second. How can I throttle or slow down these calls in order to prevent getting a 403 error?
I have a pretty neat solution where you combine two observables with the .zip() operator:
An observable emitting the requests.
Another observable emitting a value every .1 second.
You end up with one observable emitting requests every .1 second (= 10 requests per second).
Here's the code (JSBin):
// Stream of style ids you need to request (this will be throttled).
const styleIdsObs = new Rx.Subject<number>();

// Getting a style means pushing a new styleId to the stream of style ids.
const getStyleByID = (id) => styleIdsObs.next(id);

// This second observable will act as the "throttler".
// It emits one value every .1 second, so 10 values per second.
const intervalObs = Rx.Observable.interval(100);

Rx.Observable
  // Combine the 2 observables. The obs now emits a styleId every .1s.
  .zip(styleIdsObs, intervalObs, (styleId, i) => styleId)
  // Get the style, i.e. run the request.
  .mergeMap(styleId => this._EdmundsAPIService.getStyleByID(styleId))
  // Use the style.
  .subscribe(style => {
    console.log(style);
    this.style.push(style);
  });

// Launch a bunch of requests at once, they'll be throttled automatically.
for (let i = 0; i < 20; i++) {
  getStyleByID(i);
}
Hopefully you'll be able to translate my code to your own use case. Let me know if you have any questions.
UPDATE: Thanks to Adam, there's also a JSBin showing how to throttle the requests if they don't come in consistently (see convo in the comments). It uses the concatMap() operator instead of the zip() operator.
You could use a timed Observable that triggers every n milliseconds. I didn't adapt your code but this one shows how it would work:
someMethod() {
  // flatten your styles into an array:
  let stylesArray = ["style1", "style2", "style3"];
  // create a scheduled Observable that triggers each second
  let source = Observable.timer(1000, 1000);
  // use a counter to track when all styles are processed
  let counter = 0;

  let subscription = source.subscribe(x => {
    if (counter < stylesArray.length) {
      // call your API here
      counter++;
    } else {
      subscription.unsubscribe();
    }
  });
}
Find here a plunk that shows it in action
While I didn't test this code, I would try something along these lines.
Basically I create a variable that keeps track of when the next request is allowed to be made. If that time has not passed, and a new request comes in, it will use setTimeout to allow that function to run at the appropriate time interval. If the delayUntil value is in the past, then the request can run immediately, and also push back the timer by 100 ms from the current time.
delayUntil = Date.now();

getStylesWithoutYear(): void {
  this.delayRequest(() => {
    this._EdmundsAPIService.getStylesWithoutYear(this.makeNiceName, this.modelNiceName, this.modelCategory)
      .subscribe(
        styles => {
          this.styles = styles;
          this.styles.years.forEach(year =>
            year.styles.forEach(style =>
              this.getStyleByID(style.id)));
          console.log(this.styles);
        },
        error => this.errorMessage = <any>error);
  });
}

delayRequest(delayedFunction) {
  if (this.delayUntil > Date.now()) {
    setTimeout(delayedFunction, this.delayUntil - Date.now());
    this.delayUntil += 100;
  } else {
    delayedFunction();
    this.delayUntil = Date.now() + 100;
  }
}

Reactive Extensions (Rx) Switch() produces new observable which is not subscribed to provided OnCompleted()

I have a problem with my Rx subscription using the Switch operator.
_performSearchSubject
    .AsObservable()
    .Select(_ => PerformQuery())
    .Switch()
    .ObserveOn(_synchronizationContextService.SynchronizationContext)
    .Subscribe(DataArrivedForPositions, PositionQueryError, PositionQueryCompleted)
    .DisposeWith(this);
The flow is:
Some properties change and the performSearchSubject.OnNext is called
PerformPositionQuery() is called, which returns a new observable each time it is hit
The service which responds through this observer calls OnNext twice and OnCompleted once when the data receive is done
Method DataArrivedForPositions is called twice as expected
Method PositionQueryCompleted is never called, though observer.OnCompleted() is called inside my data service.
Code for dataService is:
protected override void Request(Request request, IObserver<Response> observer)
{
    query.Arrive += p => QueryReceive(request.RequestId, p, observer, query);
    query.Error += (type, s, message) => QueryError(observer, message);
    query.NoMoreData += id => QueryCompleted(observer);
    query.Execute(request);
}

private void QueryError(IObserver<PositionSheetResponse> observer, string message)
{
    observer.OnError(new Exception(message));
}

private void QueryCompleted(IObserver<PositionSheetResponse> observer)
{
    observer.OnCompleted();
}

private void QueryReceive(Guid requestId, Qry0079Receive receiveData, IObserver<PositionSheetResponse> observer, IQry0079PositionSheet query)
{
    observer.OnNext(ConvertToResponse(requestId, receiveData));
}
Switch result will only Complete when your outer observable (_performSearchSubject) completes. I assume in your case this one never does (it's probably bound to a user action performing the search).
What's unclear is when you expect PositionQueryCompleted to be called. If it's after each and every successful query is processed, then your stream needs to be modified, because Switch loses the information that the query stream completed, and it also lacks information about the UI (wrong scheduler even) to say whether its data was actually processed.
There may be other ways to achieve it, but basically you want your query stream's completion to survive through Switch (which currently ignores this event). For instance, you can transform your query stream to have n+1 events, with one extra event for the completion:
_performSearchSubject
    .AsObservable()
    .Select(_ =>
        PerformQuery()
            .Select(Data => new { Data, Complete = false })
            .Concat(Observable.Return(new { Data = (string)null, Complete = true })))
You can safely apply .Switch().ObserveOn(_synchronizationContextService.SynchronizationContext) on it, but then you need to modify your subscription:
.Subscribe(data =>
{
    if (data.Complete) PositionQueryCompleted();
    else DataArrivedForPositions(data.Data);
}, PositionQueryError)
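Put together, the modified pipeline would look roughly like this (a sketch only; it assumes PerformQuery() yields the Response objects the subscriber methods expect, so the payload is typed as Response instead of the string placeholder above):

_performSearchSubject
    .AsObservable()
    .Select(_ => PerformQuery()
        .Select(response => new { Data = response, Complete = false })
        .Concat(Observable.Return(new { Data = (Response)null, Complete = true })))
    .Switch()
    .ObserveOn(_synchronizationContextService.SynchronizationContext)
    .Subscribe(
        item =>
        {
            if (item.Complete) PositionQueryCompleted();
            else DataArrivedForPositions(item.Data);
        },
        PositionQueryError)
    .DisposeWith(this);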

Bulk inserts with EntityFramework 4.0 causes abort of transaction

We are receiving a file from a client (Silverlight) via WCF, and on the server side I parse this file. Each line in the file is transformed into an object and stored in the database. If the file is very large (10,000 entries or more), I get the following error (MSSQLEXPRESS):
The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements.
I tried a lot (setting the TransactionOptions timeout and so on), but nothing works. The above exception is sometimes raised after 3,000 objects, sometimes after 6,000 objects processed, but I never succeed in processing all of them.
I append my source; hopefully somebody has an idea and can help me:
public xxxResponse SendLogFile(xxxRequest request)
{
    const int INTERMEDIATE_SAVE = 100;

    using (var context = new EntityFramework.Models.Cubes_ServicesEntities())
    {
        // start a new transactionscope with the timeout of 0 (unlimited time for developing purposes)
        using (var transactionScope = new TransactionScope(TransactionScopeOption.RequiresNew,
            new TransactionOptions
            {
                IsolationLevel = System.Transactions.IsolationLevel.Serializable,
                Timeout = TimeSpan.FromSeconds(0)
            }))
        {
            try
            {
                // open the connection manually to prevent undesired close of DB
                // (MSDTC)
                context.Connection.Open();
                int timeout = context.Connection.ConnectionTimeout;
                int Counter = 0;

                // read the file submitted from client
                using (var reader = new StreamReader(new MemoryStream(request.LogFile)))
                {
                    try
                    {
                        while (!reader.EndOfStream)
                        {
                            Counter++;
                            Counter2++;
                            string line = reader.ReadLine();
                            if (String.IsNullOrEmpty(line)) continue;

                            // Create a new object
                            DomainModel.LogEntry le = CreateLogEntryObject(line);

                            // and attach it to the context, set its state to added.
                            context.AttachTo("LogEntry", le);
                            context.ObjectStateManager.ChangeObjectState(le, EntityState.Added);

                            // while not 100 objects were attached, go on
                            if (Counter != INTERMEDIATE_SAVE) continue;

                            // after 100 objects, make a call to SaveChanges.
                            context.SaveChanges(SaveOptions.None);
                            Counter = 0;
                        }
                    }
                    catch (Exception exception)
                    {
                        // cleanup
                        reader.Close();
                        transactionScope.Dispose();
                        throw exception;
                    }
                }

                // do a final SaveChanges
                context.SaveChanges();
                transactionScope.Complete();
                context.Connection.Close();
            }
            catch (Exception e)
            {
                // cleanup
                transactionScope.Dispose();
                context.Connection.Close();
                throw e;
            }
        }

        var response = CreateSuccessResponse<ServiceSendLogEntryFileResponse>("SendLogEntryFile successful!");
        return response;
    }
}
There is no bulk insert in Entity Framework. You call SaveChanges after every 100 records, but it will execute 100 separate inserts, with a database round trip for each insert.
Setting the timeout of the transaction is also subject to the maximum transaction timeout, which is configured at machine level (I think the default value is 10 minutes). How long does it take before your operation fails?
The best thing you can do is rewrite your insert logic with plain ADO.NET or with bulk insert.
Btw. throw exception and throw e? That is an incorrect way to rethrow exceptions; use a bare throw; so the original stack trace is preserved.
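A minimal sketch of the idiomatic rethrow (not from the original answer; DoWork is a hypothetical operation):

void Example()
{
    try
    {
        DoWork(); // hypothetical operation that may fail
    }
    catch (Exception)
    {
        // do any cleanup here, then rethrow without resetting the stack trace:
        throw;
    }
}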
Important edit:
SaveChanges(SaveOptions.None) !!! means do not accept changes after saving, so all records are still in the Added state. Because of that, the first call to SaveChanges will insert the first 100 records. The second call will insert the first 100 again plus the next 100, the third call will insert the first 200 plus the next 100, etc.
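One way to avoid that (a sketch, not part of the original answer) is to accept changes explicitly after each intermediate save, so already-saved entities leave the Added state; if the transaction later rolls back, the context should be discarded anyway:

// Sketch: replace the intermediate save in the question's loop with a save
// followed by an explicit accept, so saved entities are no longer Added.
context.SaveChanges(SaveOptions.DetectChangesBeforeSave);
context.AcceptAllChanges();
Counter = 0;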
I had exactly the same issue. I had EF code that bulk-inserted 1000 records at a time.
It was working from the beginning, apart from a small problem with MSDTC, which I configured to allow remote clients and admin, but after that it was OK. I did a lot of work with this, but one day it JUST STOPPED WORKING.
I am getting
The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements.
VERY WEIRD! Sometimes the error changes. My suspect is MSDTC somehow; strange behavior.
I am now changing the code to not use TransactionScope!
I hate it when something worked and then just stops. I also tried running this in a VM, another enormous waste of time...
My code:
private void AddTicks(FileHelperTick[] fhTicks)
{
    List<ForexEF.Entities.Tick> Ticks = new List<ForexEF.Entities.Tick>();
    var str = LeTicks(ref fhTicks, ref Ticks);

    using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, new TransactionOptions()
    {
        IsolationLevel = System.Transactions.IsolationLevel.Serializable,
        Timeout = TimeSpan.FromSeconds(180)
    }))
    {
        ForexEF.EUR_TICKSContext contexto = null;
        try
        {
            contexto = new ForexEF.EUR_TICKSContext();
            contexto.Configuration.AutoDetectChangesEnabled = false;

            int count = 0;
            foreach (var tick in Ticks)
            {
                count++;
                contexto = AddToContext(contexto, tick, count, 1000, true);
            }

            contexto.SaveChanges();
        }
        finally
        {
            if (contexto != null)
                contexto.Dispose();
        }

        scope.Complete();
    }
}

private ForexEF.EUR_TICKSContext AddToContext(ForexEF.EUR_TICKSContext contexto, ForexEF.Entities.Tick tick, int count, int commitCount, bool recreateContext)
{
    contexto.Set<ForexEF.Entities.Tick>().Add(tick);

    if (count % commitCount == 0)
    {
        contexto.SaveChanges();
        if (recreateContext)
        {
            contexto.Dispose();
            contexto = new ForexEF.EUR_TICKSContext();
            contexto.Configuration.AutoDetectChangesEnabled = false;
        }
    }

    return contexto;
}
It times out due to the TransactionScope default maximum timeout; check the machine.config for that.
Check out this link:
http://social.msdn.microsoft.com/Forums/en-US/windowstransactionsprogramming/thread/584b8e81-f375-4c76-8cf0-a5310455a394/