Consider a method DoSomething() which could be declared to return either:
IObservable<IEnumerable<T>> DoSomething()
IObservable<T> DoSomething()
Considering both:
Should the method FlatMap the IEnumerable into an IObservable?
Should it be left to the consumer to do as they please?
Is one way or the other more correct?
Does it matter either way? Should your consumer really just do as they please, given that the intent is the same whichever form is returned?
Consider the signature of this Observable.Buffer extension method:
IObservable<IList<TSource>> Buffer<TSource>(this IObservable<TSource> source, int count)
Clearly it wouldn't be very useful if the result was flattened to an IObservable<TSource>.
As you said yourself, IObservable<T> and IObservable<IEnumerable<T>> are conceptually different.
IObservable<IEnumerable<T>> does differ semantically from IObservable<T>.
IObservable<IEnumerable<T>> represents zero or more moments in time where zero or more values are reported.
IObservable<T> represents zero or more moments in time where a single value is reported.
The former can thus be used to represent the activity between points in time: returning an empty enumerable positively reports nil activity since the last value. The latter cannot do that explicitly.
These two forms can therefore be used to represent two different real-world ways of reporting values.
There's nothing wrong with returning IObservable<IEnumerable<T>> if that's what your method means. Buffer is one example where it makes sense - anything with batching would make sense. If you had a service that accepted batched requests for orders, then an IObservable<IEnumerable<Order>> could be valid. You could flatten it, and that might also be valid.
It depends whether the concept of the batch/buffer itself is integral to what your method is supposed to do. If your method just happens to be using buffer to achieve its aims, but the method isn't a batch operation by nature, then it will probably be more intuitive to flatten the Observable.
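To make that concrete, here is a minimal sketch using RxJava's buffer operator, which mirrors Rx.NET's Buffer, contrasting a batched stream with its flattened form:

import java.util.List;
import io.reactivex.Observable;

public class BufferVsFlatten {
    public static void main(String[] args) {
        // Batched: each notification is a whole batch, so batch
        // boundaries are part of what the consumer observes.
        Observable<List<Integer>> batched = Observable.range(1, 6).buffer(3);
        batched.subscribe(batch -> System.out.println("batch: " + batch));
        // batch: [1, 2, 3]
        // batch: [4, 5, 6]

        // Flattened: batch boundaries are erased; the consumer sees
        // only a stream of single values 1..6.
        Observable<Integer> flattened =
                Observable.range(1, 6).buffer(3).flatMapIterable(list -> list);
        flattened.subscribe(value -> System.out.println("value: " + value));
    }
}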
I am searching for a way to repeat the last element when the subscriber of a Flux signals demand but the publisher has not supplied a new element.
Of course this approach would logically introduce eager streaming, but in my case that's exactly what I want, similarly to onBackpressureDrop and others, where an infinite demand is requested upstream.
I kind of need the exact opposite - with my subscriber being faster than the publisher.
I struggle to think of a case where it wouldn't be better for the subscriber to simply cache the last emitted value within itself and do what it needs to do there (whether that's looping, firing on a scheduled executor or something else entirely) rather than deliberately having an infinite demand on the last value emitted by the Flux.
Something akin to the following might work, but is incredibly hacky (that being said, I couldn't think of a better way):
flux.subscribe(str -> {
    // Repeat the current element until the next one arrives; note this
    // subscribes to flux a second time, so it only behaves as intended
    // for a hot publisher.
    Mono.just(str).repeat()
        .takeUntilOther(flux.next())
        .subscribe(s -> {
            // actual subscriber logic
        });
});
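For what it's worth, a possibly less hacky formulation uses switchMap: each upstream element switches the downstream to a fresh Mono.just(s).repeat(), so the latest value is re-emitted on demand until the next element arrives and cancels it. A sketch, with the source and timings invented for the demo:

import java.time.Duration;
import org.reactivestreams.Subscription;
import reactor.core.publisher.BaseSubscriber;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class RepeatLatestDemo {
    public static void main(String[] args) throws InterruptedException {
        // Slow publisher: one element per second.
        Flux<String> source = Flux.just("a", "b", "c")
                .delayElements(Duration.ofSeconds(1));

        // switchMap cancels the previous inner Flux whenever a new
        // element arrives, so the latest value is repeated only for as
        // long as it actually is the latest.
        source.switchMap(s -> Mono.just(s).repeat())
              .subscribe(new BaseSubscriber<String>() {
                  @Override
                  protected void hookOnSubscribe(Subscription subscription) {
                      request(1); // pull one value at a time
                  }

                  @Override
                  protected void hookOnNext(String value) {
                      System.out.println(value); // actual subscriber logic
                      // Faster-than-publisher subscriber: ask again every
                      // 250 ms; if no new element has arrived, the latest
                      // one is delivered again. (Demo only: blocking in
                      // hookOnNext is not production practice.)
                      try {
                          Thread.sleep(250);
                      } catch (InterruptedException e) {
                          Thread.currentThread().interrupt();
                      }
                      request(1);
                  }
              });

        Thread.sleep(4000); // keep the JVM alive for the demo
    }
}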
I want to implement the following functions in the most reactive way. I need these to implement the bijections for automatic conversion between the said types.
def convertScalaRXObservableToTwitterFuture[A](a: Observable[A]): TwitterFuture[A] = ???
def convertScalaRXObservableToTwitterFutureList[A](a: Observable[A]): TwitterFuture[List[A]] = ???
I came across this article on a related subject but I can't get it working.
Unfortunately the claim in that article is not correct, and there can't be a true bijection between Observable and anything like Future. The thing is that Observable is a more powerful abstraction that can represent things that can't be represented by a Future. For example, an Observable might represent an infinite sequence; see Observable.interval. Obviously there is no way to represent something like that with a Future. The documentation for the Observable.toList call used in that article explicitly mentions this:
Returns a Single that emits a single item, a list composed of all the items emitted by the finite source ObservableSource.
and later it says:
Sources that are infinite and never complete will never emit anything through this operator and an infinite source may lead to a fatal OutOfMemoryError.
Even if you limit yourself to finite Observables, a Future still can't fully express the semantics of an Observable. Consider Observable.intervalRange, which generates a limited range one by one over some time period. With an Observable, the first event arrives after initialDelay and then you get an event each period. With a Future you get only one event, and only once the whole sequence has been generated and the Observable has completed. This means that by transforming Observable[A] into Future[List[A]] you immediately lose the main benefit of Observable, its reactivity: you can't process events one by one; you have to process them all in a single batch.
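A small RxJava 2 sketch of that difference (the counts and timings are invented): the raw Observable delivers events one by one, while toList() delivers everything in a single bunch only at completion:

import java.util.concurrent.TimeUnit;
import io.reactivex.Observable;

public class IntervalRangeDemo {
    public static void main(String[] args) throws InterruptedException {
        // Reactive: events 1..5 arrive one by one, 100 ms apart.
        Observable.intervalRange(1, 5, 0, 100, TimeUnit.MILLISECONDS)
                .subscribe(i -> System.out.println("event: " + i));

        // toList() emits a single List only after the source completes:
        // all five values arrive in one bunch, roughly 500 ms later.
        Observable.intervalRange(1, 5, 0, 100, TimeUnit.MILLISECONDS)
                .toList()
                .subscribe(list -> System.out.println("batch: " + list));

        Thread.sleep(1000); // keep the JVM alive for the demo
    }
}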
To sum up, the claim in the first paragraph of the article:
convert between the two, without losing the asynchronous and event-driven nature of them.
is false, because the conversion Observable[A] -> Future[List[A]] loses exactly the "event-driven nature" of Observable, and there is no way to work around this.
P.S. Actually, the fact that Future is less powerful than Observable should not be a big surprise. If it were not, why would anybody have created Observable in the first place?
As per the documentation:
Flux is a stream which can emit 0..N elements:
Flux<String> fl = Flux.just("a", "b", "c");
Mono is a stream of 0..1 elements:
Mono<String> mn = Mono.just("hello");
Both are implementations of the Publisher interface from Reactive Streams.
Can't we just use Flux in most cases, since it can also emit 0..1 elements, thus satisfying the conditions of a Mono?
Or are there specific situations where only Mono can be used and Flux cannot handle the operation?
Please suggest.
In many cases, you are doing some computation or calling a service and you expect exactly one result (or maybe zero or one result), and not a collection that contains possibly multiple results. In such cases, it's more convenient to have a Mono.
Compare it to "regular" Java: you would not use List as the return type of any method that can return zero or one result. You would use Optional instead, which makes it immediately clear that you do not expect more than one result.
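As an illustration of that convention, a sketch of a hypothetical repository (the User type and method names are made up) where the return types alone communicate the expected cardinality:

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

class User {
}

// Hypothetical repository: the signatures alone tell callers what
// cardinality to expect, just as Optional<User> vs List<User> would
// in blocking code.
interface UserRepository {
    Mono<User> findById(String id); // zero or one result
    Flux<User> findAll();           // zero to many results
}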
Flux is equivalent to an RxJava Observable and is capable of emitting:
- zero or more items (a stream of many elements)
- and then, optionally, completing or failing
Mono emits one item at most (a stream of zero or one element).
Relations:
- If you concatenate two Monos you get a Flux
- You can call single() on a Flux to get back a Mono
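Both relations in a minimal runnable sketch:

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class MonoFluxRelations {
    public static void main(String[] args) {
        // Concatenating two Monos yields a Flux of two elements.
        Flux<String> two = Mono.just("a").concatWith(Mono.just("b"));
        two.subscribe(System.out::println); // a, b

        // single() turns a one-element Flux back into a Mono
        // (it errors if the Flux emits zero or more than one element).
        Mono<String> one = two.filter("a"::equals).single();
        one.subscribe(System.out::println); // a
    }
}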
From the docs:
This distinction carries a bit of semantic information into the type, indicating the rough cardinality of the asynchronous processing. For instance, an HTTP request produces only one response, so there is not much sense in doing a count operation. Expressing the result of such an HTTP call as a Mono thus makes more sense than expressing it as a Flux, as it offers only operators that are relevant to a context of zero items or one item.
Simply put, Mono is used to handle zero or one result, while Flux is used to handle zero to many results, possibly even infinitely many.
Both behave in a purely asynchronous and fully non-blocking manner.
I think it is good practice to use Mono in cases where we know we can only get one result. In this way, we make it known to other developers working on the same thing that the result can be 0 or 1.
We are following that approach on all our projects.
Here is one good tutorial on Reactive Streams and the uses of Mono and Flux: Reactive programming in Java.
I've used Supplier quite often, and now I'm looking at the new Optional in Guava 10.
In contrast to a Supplier, an Optional guarantees never to return null; it throws an IllegalStateException instead. In addition, it is immutable and thus has a fixed, known value once it is created. A Supplier, by contrast, may be used to create different or lazy values triggered by calling get() (though it is not required to do so).
I followed the discussion about why an Optional should not extend a Supplier and I found:
...it would not be a well-behaved Supplier
But I can't see why, as Supplier explicitly states:
No guarantees are implied by this interface.
For me it would fit, but it seems I've been employing Suppliers differently from how they were originally intended. Can someone please explain to me why an Optional should NOT be used as a Supplier?
Yes, it is quite easy to convert an Optional into a Supplier (and you may even choose whether the adapted Supplier.get() returns Optional.get() or Optional.orNull()), but you need an additional transformation and have to create new objects for each one :-(
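For reference, those adaptations are one-liners with Java 8 method references (or Guava's Suppliers helpers); which one you choose determines what the adapted get() does when the Optional is absent. A sketch using Guava's com.google.common.base types:

import com.google.common.base.Optional;
import com.google.common.base.Supplier;
import com.google.common.base.Suppliers;

public class OptionalAsSupplier {
    public static void main(String[] args) {
        Optional<String> present = Optional.of("hello");
        Optional<String> absent = Optional.absent();

        // get() semantics: throws IllegalStateException when absent.
        Supplier<String> strict = present::get;

        // orNull() semantics: returns null when absent.
        Supplier<String> lenient = absent::orNull;

        // For known-present values there is also Suppliers.ofInstance.
        Supplier<String> fixed = Suppliers.ofInstance(present.get());

        System.out.println(strict.get());  // hello
        System.out.println(lenient.get()); // null
        System.out.println(fixed.get());   // hello
    }
}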
Seems there is some mismatch between the intended use of a Supplier and my understanding of its documentation.
Dieter.
Consider the case of
Supplier<String> s = Optional.absent();
Think about this: you would have a type containing one method, taking no arguments, for which it's a programmer error to ever invoke that method! Does that really make sense?
You'd only want Supplierness for "present" optionals, but then, just use Suppliers.ofInstance.
A Supplier is generally expected to be capable of returning objects (assuming no unexpected errors occur). An Optional is something that explicitly may not be capable of returning objects.
I think "no guarantees are implied by this interface" generally means that there are no guarantees about how it retrieves an object, not that the interface need not imply the ability to retrieve an object at all. Even if you feel it is OK for a Supplier instance to throw an exception every time you call get() on it, the Guava authors do not feel that way and choose to only provide suppliers that can be expected to be generally well-behaved.
The other day I was wondering why scala.collection.Map defines its unzip method as
def unzip [A1, A2] (implicit asPair: ((A, B)) ⇒ (A1, A2)): (Iterable[A1], Iterable[A2])
Since the method returns "only" a pair of Iterables instead of a pair of Seqs, it is not guaranteed that the key/value pairs in the original map occur at matching indices in the returned sequences, because Iterable doesn't guarantee the order of traversal. So if I had a
Map((1,A), (2,B))
, then after calling
Map((1,A), (2,B)) unzip
I might end up with
... = (List(1, 2),List(A, B))
just as well as with
... = (List(2, 1),List(B, A))
While I can imagine storage-related reasons behind this (think of HashMaps, for example), I wonder what you think about this behavior. It might appear to users of Map.unzip that the items are returned in matching pair order (and I bet this is almost always the case), yet since there's no guarantee, this might yield hard-to-find bugs in the library user's code.
Maybe that behavior should be expressed more explicitly in the accompanying scaladoc?
EDIT: Please note that I'm not referring to maps as ordered collections. I'm only interested in "matching" sequences after unzip, i.e. for
val (keys, values) = someMap.unzip
it holds for all i that (keys(i), values(i)) is an element of the original mapping.
Actually, the examples you gave will not occur. The Map will always be unzipped in a pair-wise fashion. Your statement that Iterable does not guarantee the ordering is not entirely true. It is more accurate to say that any given Iterable does not have to guarantee the ordering; whether it does depends on the implementation. In the case of Map.unzip, the ordering of pairs is not guaranteed, but the items in the pairs will not change the way they are matched up -- that matching is a fundamental property of the Map. You can read the source of GenericTraversableTemplate to verify this is the case.
If you expand unzip's description, you'll get the answer:
definition classes: GenericTraversableTemplate
In other words, it didn't get specialized for Map.
Your argument is sound, though, and I daresay you might get your wish if you open an enhancement ticket with your reasoning. Especially if you go ahead and produce a patch as well -- if nothing else, you'll at least learn a lot more about Scala collections in doing so.
Maps, generally, do not have a natural sequence: they are unordered collections. The fact your keys happen to have a natural order does not change the general case.
(However I am at a loss to explain why Map has a zipWithIndex method. This provides a counter-argument to my point. I guess it is there for consistency with other collections and that, although it provides indices, they are not guaranteed to be the same on subsequent calls.)
If you use a LinkedHashMap or LinkedHashSet, the iterators are supposed to return the pairs in the original insertion order. With other HashMaps, yeah, you have no control. Retaining the original insertion order is quite useful in UI contexts: it allows you to re-sort tables on any column in a web application without changing types, for instance.
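A quick Java illustration of that last point, with invented keys:

import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class IterationOrderDemo {
    public static void main(String[] args) {
        Map<String, Integer> hash = new HashMap<>();
        Map<String, Integer> linked = new LinkedHashMap<>();
        for (String key : new String[] {"banana", "apple", "cherry"}) {
            hash.put(key, key.length());
            linked.put(key, key.length());
        }
        // HashMap: some hash-dependent order the caller cannot rely on.
        System.out.println(hash.keySet());
        // LinkedHashMap: always the insertion order [banana, apple, cherry].
        System.out.println(linked.keySet());
    }
}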