Is there a way to have an observable sequence resume execution with the next element in the sequence if an error occurs?
From this post it looks like you need to specify a new observable sequence in Catch() to resume execution, but what if you needed to just continue processing with the next element in the sequence instead? Is there a way to achieve this?
UPDATE:
The scenario is as follows:
I have a bunch of elements that I need to process. The processing is made up of a bunch of steps. I have
decomposed the steps into tasks that I would like to compose.
I followed the guidelines for ToObservable() posted here
to convert my tasks to observables for composition.
So basically I'm doing something like this:
foreach (var element in collection)
{
    var result = from aResult in DoAAsync(element).ToObservable()
                 from bResult in DoBAsync(aResult).ToObservable()
                 from cResult in DoCAsync(bResult).ToObservable()
                 select cResult;

    result.Subscribe( /* register OnNext and OnError handlers here */ );
}
Or I could do something like this:
var result =
from element in collection.ToObservable()
from aResult in DoAAsync(element).ToObservable()
from bResult in DoBAsync(aResult).ToObservable()
from cResult in DoCAsync(bResult).ToObservable()
select cResult;
What is the best way to continue processing the other elements even if, say, processing one of the elements throws an exception? Ideally I would like to log the error and move on.
Both James & Richard made some good points, but I don't think they have given you the best method for solving your problem.
James suggested using .Catch(Observable.Never<Unit>()). He was wrong when he said that "will ... allow the stream to continue" because once you hit an exception the stream must end - that is what Richard pointed out when he mentioned the contract between observers and observables.
Also, using Never in this way will cause your observables to never complete.
The short answer is that .Catch(Observable.Empty<Unit>()) is the correct way to change a sequence from one that ends with an error to one that ends with completion.
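For illustration, here is a minimal sketch of that substitution (the failing Select is just a stand-in for a step that throws):
var numbers = Observable.Range(0, 5)
    .Select(i => 10 / (i - 3))          // throws a DivideByZeroException when i == 3
    .Catch(Observable.Empty<int>());    // the error is replaced by a normal completion
numbers.Subscribe(
    x => Console.WriteLine(x),
    () => Console.WriteLine("Completed"));
Note that any values after the failing element are still lost, which is exactly why the rest of this answer applies the Catch per element.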
You've hit on the right idea of using SelectMany to process each value of the source collection so that you can catch each exception, but you're left with a couple of issues.
You're using tasks (TPL) just to turn a function call into an observable. This forces your observable to use task pool threads which means that the SelectMany statement will likely produce values in a non-deterministic order.
Also, you hide the actual calls that process your data, making refactoring and maintenance harder.
I think you're better off creating an extension method that allows the exceptions to be skipped. Here it is:
public static IObservable<R> SelectAndSkipOnException<T, R>(
this IObservable<T> source, Func<T, R> selector)
{
return
source
.Select(t =>
Observable.Start(() => selector(t)).Catch(Observable.Empty<R>()))
.Merge();
}
With this method you can now simply do this:
var result =
collection.ToObservable()
.SelectAndSkipOnException(t =>
{
var a = DoA(t);
var b = DoB(a);
var c = DoC(b);
return c;
});
This code is much simpler, but it hides the exception(s). If you want to hang on to the exceptions while letting your sequence continue then you need to do some extra funkiness. Adding a couple of overloads to the Materialize extension method works to keep the errors.
public static IObservable<Notification<R>> Materialize<T, R>(
this IObservable<T> source, Func<T, R> selector)
{
return source.Select(t => Notification.CreateOnNext(t)).Materialize(selector);
}
public static IObservable<Notification<R>> Materialize<T, R>(
this IObservable<Notification<T>> source, Func<T, R> selector)
{
Func<Notification<T>, Notification<R>> f = nt =>
{
if (nt.Kind == NotificationKind.OnNext)
{
try
{
return Notification.CreateOnNext<R>(selector(nt.Value));
}
catch (Exception ex)
{
ex.Data["Value"] = nt.Value;
ex.Data["Selector"] = selector;
return Notification.CreateOnError<R>(ex);
}
}
else
{
if (nt.Kind == NotificationKind.OnError)
{
return Notification.CreateOnError<R>(nt.Exception);
}
else
{
return Notification.CreateOnCompleted<R>();
}
}
};
return source.Select(nt => f(nt));
}
These methods allow you to write this:
var result =
collection
.ToObservable()
.Materialize(t =>
{
var a = DoA(t);
var b = DoB(a);
var c = DoC(b);
return c;
})
.Do(nt =>
{
if (nt.Kind == NotificationKind.OnError)
{
/* Process the error in `nt.Exception` */
}
})
.Where(nt => nt.Kind != NotificationKind.OnError)
.Dematerialize();
You can even chain these Materialize methods and use ex.Data["Value"] & ex.Data["Selector"] to get back the value and the selector function that threw the error.
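For example, the "/* Process the error */" step above could pull the failing value back out of Exception.Data, something like this sketch (the console logging is just illustrative):
static void LogFailure<R>(Notification<R> nt)
{
    if (nt.Kind == NotificationKind.OnError)
    {
        var ex = nt.Exception;
        // The Materialize overload above stashed the offending value and selector here.
        Console.WriteLine("Processing of {0} failed: {1}", ex.Data["Value"], ex.Message);
    }
}
You would call this from the Do handler in place of the placeholder comment.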
I hope this helps.
The contract between IObservable and IObserver is OnNext* (OnCompleted|OnError)?, which is upheld by all operators, even if not by the source.
Your only choice is to re-subscribe to the source using Retry, but if the source returns the same IObservable instance for every subscription you won't see any new values.
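For illustration, re-subscribing only helps if each subscription produces a fresh sequence; a minimal sketch (CreateSource() is a hypothetical factory that builds a new cold observable per call):
var resilient = Observable
    .Defer(() => CreateSource())   // new inner sequence per subscription
    .Retry();                      // re-subscribe whenever the sequence errors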
Could you supply more information on your scenario? Maybe there is another way of looking at it.
Edit: Based on your updated feedback, it sounds like you just need Catch:
var result =
from element in collection.ToObservable()
from aResult in DoAAsync(element).ToObservable().Log().Catch(Observable.Empty<TA>())
from bResult in DoBAsync(aResult).ToObservable().Log().Catch(Observable.Empty<TB>())
from cResult in DoCAsync(bResult).ToObservable().Log().Catch(Observable.Empty<TC>())
select cResult;
This replaces an error with an Empty, which would not trigger the next sequence (since it uses SelectMany under the hood).
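Note that Log() above isn't a built-in Rx operator; a minimal sketch of such a logging helper (assuming it should just tap the stream and write out errors before Catch swallows them) might be:
public static class LoggingExtensions
{
    public static IObservable<T> Log<T>(this IObservable<T> source)
    {
        // Pass values through untouched; write errors out as a side effect.
        return source.Do(
            _ => { },
            ex => Console.WriteLine("Error: " + ex));
    }
}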
Related
I'm an RxJava newcomer, and I'm having some trouble wrapping my head around how to do the following.
I'm using Retrofit to invoke a network request that returns a Single<Foo>, which is the type I ultimately want to consume via my Subscriber instance (call it SingleFooSubscriber).
Foo has an internal property items typed as List<String>.
If Foo.items is not empty, I would like to invoke separate, concurrent network requests for each of its values. (The actual results of these requests are inconsequential for SingleFooSubscriber, as they will be cached externally.)
SingleFooSubscriber.onComplete() should be invoked only when Foo and all Foo.items have been fetched.
fetchFooCall
.subscribeOn(Schedulers.io())
// Approach #1...
// the idea here would be to "merge" the results of both streams into a single
// reactive type, but i'm not sure how this would work given that the item emissions
// could be far greater than one. using zip here i don't think it would ever
// complete.
.flatMap { foo ->
if(foo.items.isNotEmpty()) {
Observable.zip(
Observable.fromIterable(foo.items),
Observable.just(foo),
{ source1, source2 ->
// hmmmm...
}
).toSingle()
} else {
Single.just(foo)
}
}
// ...or Approach #2...
// i think this would result in the streams for Foo and items being handled sequentially,
// which is not really ideal because
// 1) i think it would entail nested streams (i get the feeling i should be using flatMap
// instead)
// 2) and i'm not sure SingleFooSubscriber.onComplete() would depend on the completion of
// the stream for items
.doOnSuccess { data ->
if(data.items.isNotEmpty()) {
// hmmmm...
}
}
.observeOn(AndroidSchedulers.mainThread())
.subscribe(
{ data -> /* onSuccess() */ },
{ error -> /* onError() */ }
)
Any thoughts on how to approach this would be greatly appreciated!
Bonus points: in trying to come up with a solution to this, I've begun to question the decision to use the Single reactive type vs the Observable reactive type. Most (all, except this one Foo.items case?) of my streams actually revolve around consuming a single instance of something, so I leaned toward Single to represent my streams, as I thought it would add some semantic clarity to the code. Does anybody have any general guidance on when to use one vs the other?
You need to nest flatMaps and then convert back to Single:
retrofit.getMainObject()
.flatMap(v ->
Flowable.fromIterable(v.items)
.flatMap(w ->
retrofit.getItem(w.id).doOnNext(x -> w.property = x)
)
.ignoreElements()
.toSingleDefault(v) // complete with the original Foo once all item requests are done
)
I'm experimenting with Reactive Extensions on various platforms, and one thing that annoys me a bit is the glitches.
Even though for UI code these glitches might not be that problematic, and usually one can find an operator that works around them, I still find debugging code more difficult in the presence of glitches: the intermediate results are not important to debug, but my mind does not know when a result is intermediate or "final".
Having worked a bit with pure functional FRP in Haskell and synchronous data-flow systems, it also 'feels' wrong, but that is of course subjective.
But when hooking RX up to non-UI actuators (like motors or switches), I think glitches are more problematic. How would one make sure that only the correct value is sent to the external actuators?
Maybe this can be solved by some 'dispatcher' that knows when some 'external sensor' fired the initiating event, so that all internal events are handled before forwarding the final result(s) to the actuators. Something like what is described in the Flapjax paper.
The question(s) I hope to get answers for are:
Is there something in RX that makes fixing glitches for synchronous notifications impossible?
If not, does a (preferably production quality) library or approach exist for RX that fixes synchronous glitches? Especially for single-threaded JavaScript this might make sense.
If no general solution exists, how would RX be used to control external sensors/actuators without glitches at the actuators?
Let me give an example
Suppose I want to print a sequence of tuples (a,b) where the contract is
a = n, b = 10 * floor(n / 10)
where n is the stream of natural numbers 0, 1, 2, ...
So I expect the following sequence
(a=0, b=0)
(a=1, b=0)
(a=2, b=0)
...
(a=9, b=0)
(a=10, b=10)
(a=11, b=10)
...
In RX, to make things more interesting, I will use a filter to compute the b stream:
var n = Observable
.Interval(TimeSpan.FromSeconds(1))
.Publish()
.RefCount();
var a = n.Select(t => "a=" + t);
var b = n.Where(t => t % 10 == 0)
.Select(t => "b=" + t);
var ab = a.CombineLatest(b, Tuple.Create);
ab.Subscribe(Console.WriteLine);
This gives what I believed to be a glitch (temporary violation of the invariant/contract):
(a=0, b=0)
(a=1, b=0)
(a=2, b=0)
...
(a=10, b=0) <-- glitch?
(a=10, b=10)
(a=11, b=10)
I realize that this is the correct behavior of CombineLatest, but I also thought this was called a glitch because in a real pure FRP system, you do not get these intermediate-invariant-violating results.
Note that in this example, I would not be able to use Zip, and also WithLatestFrom would give an incorrect result.
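For example, here is roughly why Zip doesn't fit (a sketch): it pairs the two streams positionally rather than by the n that produced them, so the pairs drift apart after the first one.
var zipped = a.Zip(b, Tuple.Create);
// (a=0, b=0), (a=1, b=10), (a=2, b=20), ...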
Of course I could just simplify this example into one monadic computation, never multi-casting the n stream occurrences (this would mean not being able to filter but just map), but that's not the point: IMO in RX you always get a 'glitch' whenever you split and rejoin an observable stream:
s
/ \
a b
\ /
t
For example, in FlapJAX you don't get these problems.
Does any of this make sense?
Thanks a lot,
Peter
Update: Let me try to answer my own question in an RX context.
First of all, it seems my understanding of what a "glitch" is, was wrong. From a pure FRP standpoint, what looked like glitches in RX to me, seems actually correct behavior in RX.
So I guess that in RX we need to be explicit about the "time" at which we expect to actuate values combined from sensors.
In my own example, the actuator is the console, and the sensor the interval n.
So if I change my code
ab.Subscribe(Console.WriteLine);
into
ab.Sample(n).Subscribe(Console.WriteLine);
then only the "correct" values are printed.
This does mean that when we get an observable sequence that combines values from sensors, we must know all the original sensors, merge them all, and sample the values with that merged signal before sending any values to the actuators...
So an alternative approach would be to "lift" IObservable into a "Sensed" structure that remembers and merges the originating sensors, for example like this:
public struct Sensed<T>
{
public IObservable<T> Values;
public IObservable<Unit> Sensors;
public Sensed(IObservable<T> values, IObservable<Unit> sensors)
{
Values = values;
Sensors = sensors;
}
public IObservable<Unit> MergeSensors(IObservable<Unit> sensors)
{
return sensors == Sensors ? Sensors : Sensors.Merge(sensors);
}
public IObservable<T> MergeValues(IObservable<T> values)
{
return values == Values ? Values : Values.Merge(values);
}
}
And then we must transfer all the RX methods to this "Sensed" structure:
public static class Sensed
{
public static Sensed<T> Sensor<T>(this IObservable<T> source)
{
var hotSource = source.Publish().RefCount();
return new Sensed<T>(hotSource, hotSource.Select(_ => Unit.Default));
}
public static Sensed<long> Interval(TimeSpan period)
{
return Observable.Interval(period).Sensor();
}
public static Sensed<TOut> Lift<TIn, TOut>(this Sensed<TIn> source, Func<IObservable<TIn>, IObservable<TOut>> lifter)
{
return new Sensed<TOut>(lifter(source.Values), source.Sensors);
}
public static Sensed<TOut> Select<TIn, TOut>(this Sensed<TIn> source, Func<TIn, TOut> func)
{
return source.Lift(values => values.Select(func));
}
public static Sensed<T> Where<T>(this Sensed<T> source, Func<T, bool> func)
{
return source.Lift(values => values.Where(func));
}
public static Sensed<T> Merge<T>(this Sensed<T> source1, Sensed<T> source2)
{
return new Sensed<T>(source1.MergeValues(source2.Values), source1.MergeSensors(source2.Sensors));
}
public static Sensed<TOut> CombineLatest<TIn1, TIn2, TOut>(this Sensed<TIn1> source1, Sensed<TIn2> source2, Func<TIn1, TIn2, TOut> func)
{
return new Sensed<TOut>(source1.Values.CombineLatest(source2.Values, func), source1.MergeSensors(source2.Sensors));
}
public static IDisposable Actuate<T>(this Sensed<T> source, Action<T> next)
{
return source.Values.Sample(source.Sensors).Subscribe(next);
}
}
My example then becomes:
var n = Sensed.Interval(TimeSpan.FromMilliseconds(100));
var a = n.Select(t => "a=" + t);
var b = n.Where(t => t % 10 == 0).Select(t => "b=" + t);
var ab = a.CombineLatest(b, Tuple.Create);
ab.Actuate(Console.WriteLine);
And again only the "desired" values are passed to the actuator, but with this design, the originating sensors are remembered in the Sensed structure.
I'm not sure if any of this makes "sense" (pun intended), maybe I should just let go of my desire for pure FRP, and live with it. After all, time is relative ;-)
Peter
Say for example I have an enumerable
dim e = Enumerable.Range(0, 1024)
I'd like to be able to do
dim o = e.ToObservable(Timespan.FromSeconds(1))
so that the observable would generate values every second until the enumerable is exhausted. I can't figure out a simple way to do this.
You can use Interval together with Zip to get the desired functionality:
var sequence = Observable.Interval(TimeSpan.FromSeconds(1))
.Zip(e.ToObservable(), (tick, index) => index)
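For a quick check (e.g. in a console app), subscribing looks like this; it prints one value per second:
using (sequence.Subscribe(Console.WriteLine))
{
    Console.ReadLine();   // keep the process alive while values arrive
}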
I also looked for a solution, and after reading the intro to Rx I made myself one:
There is an Observable.Generate() overload which I have used to make my own ToObservable() extension method, taking a TimeSpan as the period:
public static class MyEx {
public static IObservable<T> ToObservable<T>(this IEnumerable<T> enumerable, TimeSpan period)
{
return Observable.Generate(
enumerable.GetEnumerator(),
x => x.MoveNext(),
x => x,
x => x.Current,
x => period);
}
public static IObservable<T> ToObservable<T>(this IEnumerable<T> enumerable, Func<T,TimeSpan> getPeriod)
{
return Observable.Generate(
enumerable.GetEnumerator(),
x => x.MoveNext(),
x => x,
x => x.Current,
x => getPeriod(x.Current));
}
}
Already tested in LINQPad. My only concern is what happens to the enumerator instance after the resulting observable is, e.g., disposed. Any corrections appreciated.
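One way to address that concern (a sketch, untested): wrap the enumerator in Observable.Using so its lifetime is tied to each subscription, e.g. as a replacement for the first overload above:
public static IObservable<T> ToObservable<T>(this IEnumerable<T> enumerable, TimeSpan period)
{
    // Using disposes the enumerator when the subscription is disposed or the sequence ends.
    return Observable.Using(
        () => enumerable.GetEnumerator(),
        e => Observable.Generate(e, x => x.MoveNext(), x => x, x => x.Current, _ => period));
}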
You'd need something to schedule notifying observers with each value taken from the Enumerable.
You can use the recursive Schedule overload on an Rx scheduler.
Public Shared Function Schedule ( _
scheduler As IScheduler, _
dueTime As TimeSpan, _
action As Action(Of Action(Of TimeSpan)) _
) As IDisposable
On each scheduled invocation, simply call enumerator.MoveNext(), and call OnNext(enumerator.Current), and finally OnCompleted when MoveNext() returns false. This is pretty much the bare-bones way of doing it.
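In C#, a bare-bones sketch of that approach (not production-hardened; error handling around MoveNext is omitted) could look like this:
public static IObservable<T> ToObservable<T>(this IEnumerable<T> source, TimeSpan period, IScheduler scheduler)
{
    return Observable.Create<T>(observer =>
    {
        var enumerator = source.GetEnumerator();
        var schedule = scheduler.Schedule(period, self =>
        {
            if (enumerator.MoveNext())
            {
                observer.OnNext(enumerator.Current);
                self(period);   // reschedule ourselves for the next value
            }
            else
            {
                observer.OnCompleted();
            }
        });
        // Dispose both the pending schedule and the enumerator when unsubscribed.
        return new CompositeDisposable(schedule, enumerator);
    });
}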
An alternative way to express your requirement is to restate it as "for a sequence, have a minimum interval between each value".
See this answer. The test case resembles your original question.
You could always do this very simple approach:
dim e = Enumerable.Range(0, 1024)
dim o = e.ToObservable().Do(Sub (x) Thread.Sleep(1000))
When you subscribe to o the values take a second to be produced.
I can only assume that you are using Range to dumb down your question.
Do you want every value that the Enumerable pushes to be delayed by a second?
var e = Enumerable.Range(0, 10);
var o = Observable.Interval(TimeSpan.FromSeconds(1))
.Zip(e, (_,i)=>i);
Or do you want only the latest value of the Enumerable to be pushed each second, i.e. reading from an Enumerable that is being evaluated as you enumerate it (perhaps some IO)? In that case CombineLatest is more useful than Zip.
Or perhaps you just want to get a value every second, in which case just use the Observable.Interval method
var o = Observable.Interval(TimeSpan.FromSeconds(1));
If you explain your problem space then the community will be able to better help you.
Lee
*Excuse the C# answer, but I don't know what the equivalent VB.NET code would be.
I've been using .NET Reactive Extensions to observe log events as they come in. I'm currently using a class that implements IObservable and uses a ReplaySubject to store the logs; that way I can filter and replay the logs (for example: show me all the Error logs, or show me all the Verbose logs) without losing the logs I've buffered.
The problem is, even though I've set a buffer size on the subject:
this.subject = new ReplaySubject<LogEvent>(10);
The memory usage of my program goes through the roof when I use OnNext to add to the observable collection on an infinite loop:
internal void WatchForNewEvents()
{
Task.Factory.StartNew(() =>
{
while (true)
{
dynamic parameters = new ExpandoObject();
// TODO: Add parameters for getting specific log events
if (this.logEventRepository.GetManyHasNewResults(parameters))
{
foreach (var recentEvent in this.logEventRepository.GetMany(parameters))
{
this.subject.OnNext(recentEvent);
}
}
// Commented this out for now to really see the memory go up
// Thread.Sleep(1000);
}
});
}
Does the buffer size on ReplaySubject not work? It doesn't seem to be clearing the buffer when the buffer size is reached. Any help much appreciated!
UPDATE:
I add subscribers like this (Is this wrong?):
public IDisposable Subscribe(IObserver<LogEvent> observer)
{
return this.subject.Subscribe(observer);
}
...which is called like:
// Inserts into UI ListView
this.logEventObservable.Subscribe(evt => this.InsertNewLogEvent(evt));
I'm not sure if this is the definitive answer, but I suspect that you're hitting an issue because of concurrency around the scheduler you're using. The constructor you're calling on ReplaySubject looks like this:
public ReplaySubject(int bufferSize)
: this(bufferSize, TimeSpan.MaxValue, Scheduler.CurrentThread)
{ }
The Scheduler.CurrentThread worries me. Try changing it to Scheduler.ThreadPool and see if that helps.
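That is, assuming you keep the same buffer size, something like:
// Same buffer, but with an explicit scheduler instead of the implicit CurrentThread default.
this.subject = new ReplaySubject<LogEvent>(10, Scheduler.ThreadPool);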
Also, as a side note, you seem to be mixing Rx with TPL and old fashioned thread sleeping. It's usually best to avoid doing that. You could change your WatchForNewEvents code to look like this:
dynamic parameters = new ExpandoObject();
var newEvents =
from n in Observable.Interval(TimeSpan.FromSeconds(1.0))
where this.logEventRepository.GetManyHasNewResults(parameters)
from recentEvent in
this.logEventRepository.GetMany(parameters).ToObservable()
select recentEvent;
newEvents.Subscribe(this.subject);
That's a nice compact Rx-y way of doing things.
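One small addition: Subscribe returns an IDisposable, so it's worth keeping hold of it if you ever need to stop the polling (the field name here is just illustrative):
// Hypothetical field to hold the handle so polling can be stopped on shutdown.
this.newEventsSubscription = newEvents.Subscribe(this.subject);
// ... later, e.g. on shutdown:
this.newEventsSubscription.Dispose();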
I've just started using Lambda expressions, and really like the shortcut. I also like the fact that I have scope within the lambda of the encompassing method. One thing I am having trouble with is nesting lambdas. Here is what I am trying to do:
public void DoSomeWork()
{
MyContext context = new MyDomainContext();
context.GetDocumentTypeCount(ci.CustomerId, io =>
{
if (io.HasError)
{
// Handle error
}
// Do some work here
// ...
// make DB call to get data
EntityQuery<AppliedGlobalFilter> query =
from a in context.GetAppliedGlobalFiltersQuery()
where a.CustomerId == ci.CustomerId && a.FilterId == 1
select a;
context.Load<AppliedGlobalFilter>(query, lo =>
{
if (lo.HasError)
{
}
// Do more work in this nested lambda.
// Get compile time error here
}
}, null);
}, null);
}
The second lambda is where I get the following compile time error:
Cannot convert Lambda expression to type 'System.ServiceModel.DomainService.Client.LoadBehavior' because it is not a delegate type
The compiler is choosing the wrong overload for the Load method even though I am using the same overload style I did in the previous lambda.
Is this because I am trying to nest? Or do I have something else wrong?
Thanks,
-Scott
Found the problem as described in my comment above. I'll head back to work now - red face and all....
I realize this is not the answer you want, but I suggest caution about lengthy and/or nested lambdas. They work, but they often make code harder to read / maintain by other developers. I try to limit my lambdas in length to three statements, with no nesting.
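To make that concrete, here is a sketch of that advice applied to the code in the question: pull each nested callback out into a named method (the operation types are assumptions based on typical WCF RIA Services signatures, and context/ci would need to become fields, so adjust to your generated code):
private void OnDocumentTypeCountLoaded(InvokeOperation<int> io)
{
    if (io.HasError)
    {
        // Handle error
        return;
    }
    var query = from a in this.context.GetAppliedGlobalFiltersQuery()
                where a.CustomerId == this.ci.CustomerId && a.FilterId == 1
                select a;
    this.context.Load<AppliedGlobalFilter>(query, OnAppliedGlobalFiltersLoaded, null);
}
private void OnAppliedGlobalFiltersLoaded(LoadOperation<AppliedGlobalFilter> lo)
{
    if (lo.HasError)
    {
        // Handle error
        return;
    }
    // Do more work here
}
The original call then becomes context.GetDocumentTypeCount(ci.CustomerId, OnDocumentTypeCountLoaded, null); with no nesting at all.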