What does the Wait operator in Rx.NET do? - system.reactive

In v2.2.5 of the Rx.NET library, there is an operator named Wait that is defined as follows:
public virtual TSource Wait<TSource>(IObservable<TSource> source)
Neither the class library reference on MSDN nor this page mentions this operator.
From looking at its implementation, which is a bit too convoluted to follow, my guess is that it waits for the observable to produce all of its elements and returns the last element if there was one, and default(TSource) otherwise. But I am not sure.
If this is correct, then how is it different from LastOrDefaultAsync?
What does it actually do?

The IntelliSense documentation seems pretty accurate:
Waits for the observable sequence to complete and returns the last element of the sequence.
If the sequence terminates with an OnError notification, the exception is thrown.
https://github.com/Reactive-Extensions/Rx.NET/blob/master/Rx.NET/Source/System.Reactive.Linq/Reactive/Linq/Observable.Blocking.cs#L493
So the operator will block the calling thread (YUCK!) until the sequence completes and then yield the last value.
LastOrDefaultAsync, in contrast, returns an IObservable<T>, so it is not blocking.
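If it helps to see the distinction outside of Rx.NET, here is a rough analogy using plain Scala futures (illustrative only; the object and value names are made up): Await.result blocks the calling thread the way Wait does, while composing with map/foreach stays non-blocking, which is closer in spirit to LastOrDefaultAsync:
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object BlockingVsAsync extends App {
  val work: Future[Int] = Future { Thread.sleep(100); 42 }

  // "Wait"-style: block the calling thread until the value is available.
  println("blocked for: " + Await.result(work, 1.second))

  // "LastOrDefaultAsync"-style: stay asynchronous; the value is delivered later
  // to a callback, and the calling thread is free to do other work meanwhile.
  work.map(_ * 2).foreach(v => println("async result: " + v))
  Thread.sleep(200) // keep the demo JVM alive long enough to see the callback
}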

The documentation for these methods is on the Observable class, not on the query language implementation.
Waits for the observable sequence to complete and returns the last element of the sequence.
If the sequence terminates with an OnError notification, the exception is thrown.
https://github.com/Reactive-Extensions/Rx.NET/blob/v2.2.5/Rx.NET/Source/System.Reactive.Linq/Reactive/Linq/Observable.Blocking.cs#L493
It's essentially a synonym of Last<TSource>().

The description of Wait provided in the question is not fully correct.
Here are the similarities between Wait and LastOrDefaultAsync:
Both logically wait to receive all the values in the source observable. But as Lee Campbell points out in his answer, Wait blocks the current thread while LastOrDefaultAsync does not.
Here is the summary of differences between Wait and LastOrDefaultAsync:
If there are no elements in the observable sequence, Wait throws an InvalidOperationException; LastOrDefaultAsync produces default(TSource).
If an exception occurs during the observation of the observable, Wait reports the exception by invoking observer.OnError and then also rethrows it immediately afterwards; LastOrDefaultAsync only reports the exception by calling observer.OnError on all subscribed observers. In both cases, however, the observation stops on error.
The XML documentation that ships with the Rx source code (and with the binary distribution, whether via NuGet or the MSI installer) explains it thus:
Waits for the observable sequence to complete and returns the last element of the sequence.
If the sequence terminates with an OnError notification, the exception is thrown.
Exceptions
Throws ArgumentNullException if source is null.
Throws InvalidOperationException if the source sequence is empty.


How to ensure the order of user callback function in OpenCL?

I am working on an OpenCL implementation in which a particular host-side function has to be called every time a clEnqueueReadBuffer command finishes executing.
I am calling the kernels in a loop; in an in-order queue it looks like this:
clEnqueueNDRangeKernel() -> clEnqueueReadBuffer(&Event) ->
clEnqueueNDRangeKernel() -> clEnqueueReadBuffer(&Event) .......
I have used clSetEventCallback() to register a callback function on the event of each read command. I have observed that, although the command queue is an in-order queue, the callback functions do not execute in order.
The OpenCL 1.2 specification also mentions:
The order in which the registered user callback functions are called
is undefined. There is no guarantee that the callback functions
registered for various execution status values for an event will be
called in the exact order that the execution status of a command
changes.
Can anyone give me a solution? I want the callback functions to execute in order.
A simple solution could be to subscribe the same callback function to both events. In the callback code, you can check the status of each relevant event and perform the operation you want accordingly.
Note that on some implementations the driver will batch multiple commands for execution.
The immediate effect is that multiple events will be signaled "at once", even though the associated commands complete at different times.
// event1 & event2 are likely to be signaled at once:
clEnqueueNDRangeKernel();
clEnqueueReadBuffer(&event1);
clEnqueueNDRangeKernel();
clEnqueueReadBuffer(&event2);
Whereas:
// event1 is likely to be signaled before event2:
clEnqueueNDRangeKernel();
clEnqueueReadBuffer(&event1);
clFlush(queue);
clEnqueueNDRangeKernel();
clEnqueueReadBuffer(&event2);
clFlush(queue);
I would also check which exact thread the callbacks are invoked on.
Is it the same thread each time, or a different one? If the implementation opens a new thread for this task, it might be wiser to open a single thread yourself and wait for the events in the order that you wish, as sketched below.
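This is not OpenCL code, just a generic sketch of that last suggestion (written in Scala purely to keep all examples in this digest in one language; onEventComplete stands in for whatever your clSetEventCallback callback does). Each callback only signals a per-read latch, and one dedicated thread waits on the latches in submission order and does the real per-read work:
import java.util.concurrent.CountDownLatch

object OrderedCallbacks extends App {
  val latches = Vector.fill(4)(new CountDownLatch(1))

  // Stand-in for the per-event callback: cheap, and safe to call in any order.
  def onEventComplete(i: Int): Unit = latches(i).countDown()

  // One consumer thread enforces the ordering you actually care about.
  val consumer = new Thread(() => {
    for (i <- latches.indices) {
      latches(i).await() // blocks until read i has completed
      println(s"processing result of read $i, in order")
    }
  })
  consumer.start()

  // Simulate out-of-order completion: read 2 finishes before 0, 1, 3.
  Seq(2, 0, 1, 3).foreach { i => Thread.sleep(10); onEventComplete(i) }
  consumer.join()
}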

Completing scala promises race

I can't seem to find anywhere whether complete and tryComplete are atomic operations on Promises in Scala. Promises are only supposed to be written to once, but if two tryCompletes happen concurrently, in two different callbacks for example, could something go wrong? Or are we assured that tryComplete is atomic?
First, a quick note: success(...) is equivalent to calling complete(Success(...)), and complete(...) is itself just tryComplete(...) that throws an IllegalStateException when the promise has already been completed.
In the docs it says
As mentioned before, promises have single-assignment semantics. As such, they can be completed only once. Calling success on a promise that has already been completed (or failed) will throw an IllegalStateException.
A promise can only complete once. Digging into the source code, DefaultPromise extends AtomicReference (i.e. it is thread safe) and so all writes are atomic. This means that if you have two threads completing a promise, only one of them can ever succeed, and it will be whichever got there first. The other will get false back from tryComplete (or an IllegalStateException if it used complete/success).
Here's a small example of what happens when you try and complete a promise twice.
https://scastie.scala-lang.org/hTYBqVywSQCl8bFSgQI0Sg
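In case the link is unavailable, a minimal sketch of the same idea (object and value names are illustrative):
import scala.concurrent.Promise
import scala.util.Success

object PromiseRace extends App {
  val p = Promise[Int]()

  // tryComplete is atomic: exactly one write wins.
  println(p.tryComplete(Success(1))) // true  - this write completed the promise
  println(p.tryComplete(Success(2))) // false - already completed, no exception
  println(p.future.value)            // Some(Success(1))

  // complete/success on an already-completed promise throws instead.
  try p.success(3)
  catch { case e: IllegalStateException => println("second success failed: " + e) }
}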
Though apparently one can circumvent the immutability of a Future by doing some weird casting acrobatics.
https://contributors.scala-lang.org/t/defaultpromise-violates-encapsulation/3440
One should probably avoid that.

Difference between map and subscribe on Mono\Flux?

Am I right to assume that map is essentially a subscribe with a return type? They both seem to get called asynchronously when the promise gets resolved?
For example, if I dispatch a list of 3 async calls concurrently, would applying the map operation in the manner below be blocking?
Flux.merge(albums.stream().map(album -> {
        // 2. call and handler for async call
        Mono<CoverResponse> responseMono = clientRequestHandler.makeAsyncCall();
        return responseMono.map(response -> processResponse());
    }).collect(Collectors.toList()))
    .then(Mono.just(monoResponse));
In the above snippet, is each map operation going to be blocking? If, say, the first call takes 3 ms to return and every other call takes 2 ms, are we going to wait 3 ms + 2 ms + 2 ms = 7 ms for the entire operation, or just 3 ms, since by the time the first call resolves the 2 ms calls have already resolved?
First of all, nothing will happen until someone subscribes. Subscribe is the last thing in the chain; it is what triggers the start of all the events.
Second, you need to understand the difference between running something in parallel and running something non-blocking.
To resolve your first map, it needs to make the REST call, and then, with the response, it needs to run your second map. These two won't run in parallel.
Your responseMono.map can't be run until Mono<Response> responseMono actually has something in it. Think of it as a Promise that will signal the application when it has been resolved.
Or you can think of it as a chain of callbacks.
So in your example you are calling clientRequestHandler.makeAsyncCall() and returning a Mono<CoverResponse>; the next part, responseMono.map, won't trigger until there is a CoverResponse in the mono. So your "async" call probably is async, but it still adheres to list order since it is all one sequential stream.
But map is a mapping function: it takes something out of the box, performs a computation on it, and returns a new value, possibly of a new type.
What makes reactive better than other options is that when you perform your side effect, the remote call somewhere that takes time, the thread that is processing this won't hang around and wait for the external request to finish; it will start doing other things, like processing other requests.
Then, when the Mono<Response> signals the system that there is "something in the box", a response, the same thread or any other available thread will continue processing the request.
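To make the first point concrete, here is a small sketch of "map describes, subscribe triggers". It calls reactor-core from Scala only to keep all examples in this digest in one language; the explicit java.util.function types are just to satisfy the Java interop, and the values are made up:
import java.util.function.{Consumer, Function}
import reactor.core.publisher.Mono

// reactor-core used from Scala purely for illustration.
object MapVsSubscribe extends App {
  val transform: Function[String, String] =
    id => { println("transforming " + id); id.toUpperCase }
  val printIt: Consumer[String] = v => println("received " + v)

  // map only *describes* a transformation; nothing has executed yet.
  val pipeline: Mono[String] = Mono.just("cover-123").map(transform)
  println("pipeline assembled, nothing transformed yet")

  // subscribe is what actually triggers the flow and delivers the value.
  pipeline.subscribe(printIt)
}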

Scala Future `.onComplete` function discarded after call?

Are the function bodies passed to Future.onComplete(), and their closures, discarded and so garbage collected after they are called?
I ask because I'm writing an unbounded sequence of Future instances. Each Future has an .onComplete { case Failure(t)...} that refers to the previous known-good value from a previous Future. What I want to avoid is the total history of all Future results being kept in the JVM's memory because of references in closure bodies.
Perhaps Scala is more clever than this, but skimming the code related to execution contexts and futures isn't yielding much.
Thanks.
The class that normally implements Future and that you want to look at is DefaultPromise.
It contains mutable state that is being updated as the Future completes.
If you call onComplete after the future has already been completed, it just schedules your callback immediately with the result. The callback is not recorded anywhere.
If you call onComplete while the result is not yet available, the callback is added to a list of "listeners".
When the result becomes available (someone calls complete on the promise), all listeners are scheduled to run with that result, and the list of listeners is discarded (the internal state changes to "completed with this result").
This means that your callback list only builds up while the "upstream future" is incomplete. After that, everything gets resolved and garbage-collected.
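A tiny illustration of both paths (callback registered before completion vs after; the names are made up):
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Promise

object ListenerDemo extends App {
  val p = Promise[Int]()

  // Registered while the result is not yet available: stored as a listener.
  p.future.onComplete(r => println("registered early, got: " + r))

  p.success(1) // runs the stored listeners and then drops the listener list

  // Registered after completion: scheduled to run immediately, never stored.
  p.future.onComplete(r => println("registered late, got: " + r))

  Thread.sleep(100) // give the global execution context time to run the callbacks
}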
"list of listeners" above is a bit of a simplification. There is special care being taken that these listeners do not end up referring to each-other, specifically to break reference loops that would prevent garbage collection to work when constructing futures recursively. Apparently this was indeed a problem in earlier versions.
The problem of leaks is solved by automatically breaking these chains of promises, so that promises don't refer to each other in a long chain. This allows each promise to be individually collected. The idea is to "flatten" the chain of promises, so that instead of each promise pointing to its neighbour, they instead point directly to the promise at the root of the chain. This means that only the root promise is referenced, and all the other promises are available for garbage collection as soon as they're no longer referenced by user code.
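So for the pattern described in the question, something like the following hypothetical sketch will not accumulate history: each onComplete closure only holds the loop index and a reference to the shared last-known-good value, and once a future completes, its listener entry is dropped (the names and the step body are made up):
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future
import scala.util.{Failure, Success}

object ChainDemo {
  @volatile private var lastGood: Int = 0            // previous known-good value

  private def step(i: Int): Future[Int] = Future(i)  // placeholder for the real work

  // Once step(i) completes, its callback runs and the listener entry is dropped,
  // so earlier futures and their results become eligible for garbage collection.
  def loop(i: Int): Unit =
    step(i).onComplete {
      case Success(v) => lastGood = v; loop(i + 1)
      case Failure(_) => println(s"step $i failed; last good value was $lastGood")
    }
}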

protractor: what is the relationship between the control flow and javascript event loop?

I'm having a difficult time trying to understand how the control flow in protractor work in relation to how JS event loop works. Here is what I know so far:
Protractor control flow stores commands that return promises in a queue. The first command will be at the front of the queue and the last command will be at the back. No command will be executed until the command in front of it has its promise resolved.
The JS event loop stores asynchronous tasks (callbacks, to be specific). Callbacks are not executed until all functions on the stack have completed and the stack is empty. Before running each callback, there is a check on whether the stack is empty.
So let's take this code as an example. It clicks a search button, which makes an API request; then, after the data is returned, it checks whether the field that stores the returned data exists.
elem('#searchButton').click(); // will make an API call to retrieve data
browser.wait(ExpectedConditions.presenceOf(elem('#resultDataField')), 3000);
expect(elem('#resultDataField').isPresent()).toBeTruthy();
So with this code, I'm able to get it to work. But I don't know how it does it. How is the event loop applied in this scenario?
The core of the ControlFlow implementation is in runEventLoop_ (in Selenium's promise.js implementation).
As I understand it, the ControlFlow registers a call to runEventLoop_ with the JS event loop (e.g., with a 0-second timeout or some such). The call to runEventLoop_ can be thought of as a single iteration of a normal event loop. It registers code to actually run a scheduled task (i.e., actually do the work you queued up during your it() block). Once that task completes or fails (e.g., by hooking its async promise callbacks), the next iteration of runEventLoop_ is scheduled (see the calls to scheduleEventLoop in runEventLoop_).
There is some complexity when a callback ends up registering new promises: those need to be "inserted" before the old next event, which is accomplished by creating a "nested" control flow. Mostly, you should never have to know this.
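The mechanism described here (run one queued task, and only schedule the next "iteration" once that task's promise settles) can be sketched independently of Protractor. The following is a conceptual sketch only, written in Scala to keep all examples in this digest in one language; MiniControlFlow and its members are made up, not Selenium APIs:
import scala.collection.mutable
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

object MiniControlFlow {
  private val queue = mutable.Queue.empty[() => Future[Any]]

  // Commands are queued in the order the test declares them.
  def schedule(task: () => Future[Any]): Unit = queue.enqueue(task)

  // One "iteration of the event loop": run the next task, and only once its
  // future settles, schedule the following iteration.
  def runEventLoop(): Unit =
    if (queue.nonEmpty) {
      val task = queue.dequeue()
      task().onComplete(_ => runEventLoop())
    }
}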