Using `onBackpressureLatest` to drop intermediate messages in blocking Flowable - rx-java2

I have a chain where I do some blocking IO calls (e.g. an HTTP call). I want the blocking call to consume a value, proceed without being interrupted, but drop everything that piles up in the meantime, and then consume the next value in the same manner.
Consider the following example:
fun main() {
    Flowable.interval(100, TimeUnit.MILLISECONDS).onBackpressureLatest().map {
        Thread.sleep(1000)
        it
    }.blockingForEach { println(it) }
}
From a naive point of view, I would expect it to print something like 0, 10, 20, ..., but it prints 0, 1, 2, ....
What am I doing wrong?
EDIT:
I thought about naively adding debounce to eat up the incoming stream:
fun main() {
    Flowable.interval(100, TimeUnit.MILLISECONDS)
        .debounce(0, TimeUnit.MILLISECONDS)
        .map {
            Thread.sleep(1000)
            it
        }
        .blockingForEach { println(it) }
}
But, now I get a java.lang.InterruptedException: sleep interrupted.
EDIT:
What seems to work is the following:
fun main() {
    Flowable.interval(100, TimeUnit.MILLISECONDS)
        .throttleLast(0, TimeUnit.MILLISECONDS)
        .map {
            Thread.sleep(1000)
            it
        }
        .blockingForEach { println(it) }
}
The output is 0, 10, 20, ..., as expected!
Is that the correct way?
I noted that throttleLast switches to the computation scheduler. Is there a way to go back to the original scheduler?
EDIT:
I also get an occasional java.lang.InterruptedException: sleep interrupted with that variant.

The simplest approach to solving the problem is:
fun <T> Flowable<T>.lossy(): Flowable<T> {
    return onBackpressureLatest().observeOn(Schedulers.io(), false, 1)
}
By calling lossy on a Flowable, it drops every element that arrives faster than the downstream consumer can process it: onBackpressureLatest keeps only the most recent value, and observeOn with a buffer size of 1 requests items from upstream one at a time.
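For example, here is a minimal sketch (my own combination of the lossy helper above with the question's original pipeline; the exact numbers printed depend on timing) showing the intended drop-intermediate behaviour:

import io.reactivex.Flowable
import io.reactivex.schedulers.Schedulers
import java.util.concurrent.TimeUnit

fun <T> Flowable<T>.lossy(): Flowable<T> =
    onBackpressureLatest().observeOn(Schedulers.io(), false, 1)

fun main() {
    Flowable.interval(100, TimeUnit.MILLISECONDS)
        .lossy()
        .map {
            Thread.sleep(1000) // simulate the blocking IO call
            it
        }
        .blockingForEach { println(it) } // prints roughly 0, 10, 20, ...
}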


Repeat Single based on onSuccess() value

I want to repeat a Single based on the single value emitted in onSuccess(). Here is a working example
import org.reactivestreams.Publisher;
import io.reactivex.Flowable;
import io.reactivex.Single;
import io.reactivex.functions.Function;

public class Temp {
    void main() {
        Job job = new Job();
        Single.just(job)
                .map(this::processJob)
                .repeatWhen(new Function<Flowable<Object>, Publisher<?>>() {
                    @Override
                    public Publisher<?> apply(Flowable<Object> objectFlowable) throws Exception {
                        // TODO repeat when Single emits false
                        return null;
                    }
                })
                .subscribe();
    }

    /**
     * returns true if process succeeded, false if failed
     */
    boolean processJob(Job job) {
        return true;
    }

    class Job {
    }
}
I understand how repeatWhen works for Observables by relying on the "complete" notification. However, since Single doesn't receive that notification, I'm not sure what the Flowable<Object> is really giving me. Also, why do I need to return a Publisher from this function?
Instead of relying on a boolean value, you could make your job throw an exception when it fails:
class Job {
    var isSuccess: Boolean = false
}

fun processJob(job: Job): String {
    if (job.isSuccess) {
        return "job succeeds"
    } else {
        throw Exception("job failed")
    }
}

val job = Job()
Single.just(job)
    .map { processJob(it) }
    .retry() // will resubscribe until your job succeeds
    .subscribe(
        { value -> print(value) },
        { error -> print(error) }
    )
I saw a small discrepancy between the latest docs and your code, so I did a little digging...
(Side note: I think the semantics of retryWhen make it the more appropriate operator for your case, so I've substituted it for your usage of repeatWhen. But I think the root of your problem remains the same in either case.)
The signature for retryWhen is:
retryWhen(Function<? super Flowable<Throwable>,? extends Publisher<?>> handler)
That parameter is a factory function whose input is a source that emits any time onError is called upstream, giving you the ability to insert custom retry logic that may be influenced by interrogating the underlying Throwable. This begins to answer your first question of "I'm not sure what the Flowable<Object> is really giving me" - it shouldn't be Flowable<Object> to begin with, it should be Flowable<Throwable> (for the reason I just described).
So where did Flowable<Object> come from? I managed to reproduce IntelliJ's generation of this code through its auto-complete feature using RxJava version 2.1.17. Upgrading to 2.2.0, however, produces the correct result of Flowable<Throwable>. So, see if upgrading to the latest version generates the correct result for you as well.
As for your second question of "Also why do I need to return a Publisher from this function?" - this is used to determine whether re-subscription should happen. If the factory function returns a Publisher that emits a terminal state (i.e. calls onError() or onComplete()), re-subscription will not happen. However, if onNext() is called, it will. (This also explains why the Publisher isn't typed - the type doesn't matter; the only thing that does matter is what kind of notification it publishes.)
Another way to rewrite this, incorporating the above, might be as follows:
// just some type to use as a signal to retry
private class SpecialException extends RuntimeException {}

// job processing results in a Completable that either completes or
// doesn't (by way of an exception)
private Completable rxProcessJob(Job job) {
    return Completable.complete();
    // return Completable.error(new SpecialException());
}

...

rxProcessJob(new Job())
        .retryWhen(errors -> {
            return errors.flatMap(throwable -> {
                if (throwable instanceof SpecialException) {
                    return PublishProcessor.just(1);
                }
                return PublishProcessor.error(throwable);
            });
        })
        .subscribe(
                () -> {
                    System.out.println("## onComplete()");
                },
                error -> {
                    System.out.println("## onError(" + error.getMessage() + ")");
                }
        );
I hope that helps!
The accepted answer would work, but it is hackish. You don't need to throw an error; simply filter the output of processJob, which converts the Single to a Maybe, and then use the repeatWhen handler to decide how many times, or with what delay, you want to resubscribe. See the Kotlin code below, taken from a working example; you should be able to translate it to Java easily.
filter { it }
    .repeatWhen { handler ->
        handler.zipWith(1..3) { _, i -> i }
            .flatMap { retryCount -> Flowable.timer(retryDelay.toDouble().pow(retryCount).toLong(), TimeUnit.SECONDS) }
            .doOnNext { log.warn("Retrying...") }
    }
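To make the idea concrete, here is a small self-contained sketch of the same pattern (my own illustration, not the answerer's code: the jobResult Single, the "succeeds on the third attempt" logic, the bound of 3 repeats, and the simple linear delay are all assumptions made for the example):

import io.reactivex.Flowable
import io.reactivex.Single
import java.util.concurrent.TimeUnit

fun main() {
    var attempts = 0
    // Hypothetical job that only reports success on the third attempt.
    val jobResult: Single<Boolean> = Single.fromCallable {
        attempts++
        println("attempt $attempts")
        attempts >= 3
    }

    val succeeded: Boolean? = jobResult
        .filter { it }                               // Single<Boolean> -> Maybe<Boolean>; empty when false
        .repeatWhen { completions ->                 // re-subscribe whenever the Maybe completes empty
            completions
                .zipWith(Flowable.range(1, 3)) { _, n -> n }                    // allow at most 3 repeats
                .flatMap { n -> Flowable.timer(n.toLong(), TimeUnit.SECONDS) }  // grow the delay each time
        }
        .firstElement()                              // stop repeating once a value gets through the filter
        .blockingGet()

    println("job succeeded: ${succeeded == true}")
}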

How is onNext called in Scala Rx

I'm trying to understand how observable works. Here is my code.
def make: Stream[Int] = {
  Stream.cons(scala.util.Random.nextInt(10), {
    println("Making ..")
    Thread.sleep(1000)
    make
  })
}
val y = Observable.from(make)
y.foreach(a => println(a))
make will produce a new value every second, and I'm making an observable out of it. The foreach loop will go on forever, printing newly produced values.
As I understand it, a => println(a) is the callback that becomes onNext(t) in the Rx observable.
What I'm trying to figure out is how that is glued to the producer: when a new value is produced, where does onNext get called? I looked into the Rx code for a while and still cannot figure it out.
Thanks.
It seems that subscribing to the stream calls rx.Observable's
Subscription subscribe(Subscriber<? super T> subscriber)
This will call the IterableProducer.request method, which has this bit of code:
if (n == Long.MAX_VALUE && REQUESTED_UPDATER.compareAndSet(this, 0, Long.MAX_VALUE)) {
    // fast-path without backpressure
    while (it.hasNext()) {
        if (o.isUnsubscribed()) {
            return;
        }
        o.onNext(it.next());
    }
    if (!o.isUnsubscribed()) {
        o.onCompleted();
    }
}
onNext is the observer's code, and its input is the producer's it.next(). That's how it's glued together.

Akka Future - Parallel versus Concurrent?

From the well-written Akka Concurrency:
As I understand it, the diagram points out that both numSummer and charConcat will run on the same thread.
Is it possible to run each Future in parallel, i.e. on separate threads?
The picture on the left shows them running in parallel.
The point of the illustration is that the Future.apply method is what kicks off the execution, so if it doesn't happen until the first future's result is flatMapped (as in the picture on the right), then you don't get the parallel execution.
(Note that by "kicked off", I mean the relevant ExecutionContext is told about the job. How it parallelizes is a different question and may depend on things like the size of its thread pool.)
Equivalent code for the left:
val numSummer = Future { ... }  // execution kicked off
val charConcat = Future { ... } // execution kicked off

numSummer.flatMap { numsum =>
  charConcat.map { string =>
    (numsum, string)
  }
}
and for the right:
Future { ... } // execution kicked off
  .flatMap { numsum =>
    Future { ... } // execution kicked off (Note that this does not happen until
                   // the first future's result (`numsum`) is available.)
      .map { string =>
        (numsum, string)
      }
  }

RxJava user-retry observable with .cache operator?

I have an observable that I create with the following code.
Observable.create(new Observable.OnSubscribe<ReturnType>() {
    @Override
    public void call(Subscriber<? super ReturnType> subscriber) {
        try {
            if (!subscriber.isUnsubscribed()) {
                subscriber.onNext(performRequest());
            }
            subscriber.onCompleted();
        } catch (Exception e) {
            subscriber.onError(e);
        }
    }
});
performRequest() performs a long-running task, as you might expect.
Now, since I might be launching the same Observable twice or more in a very short amount of time, I decided to write the following transformer:
protected Observable.Transformer<ReturnType, ReturnType> attachToRunningTaskIfAvailable() {
    return origObservable -> {
        synchronized (mapOfRunningTasks) {
            // If not in maps
            if (!mapOfRunningTasks.containsKey(getCacheKey())) {
                Timber.d("Cache miss for %s", getCacheKey());
                mapOfRunningTasks.put(
                        getCacheKey(),
                        origObservable
                                .doOnTerminate(() -> {
                                    Timber.d("Removed from tasks %s", getCacheKey());
                                    synchronized (mapOfRunningTasks) {
                                        mapOfRunningTasks.remove(getCacheKey());
                                    }
                                })
                                .cache()
                );
            } else {
                Timber.d("Cache Hit for %s", getCacheKey());
            }
            return mapOfRunningTasks.get(getCacheKey());
        }
    };
}
This basically puts the original observable, with .cache applied, in a HashMap<String, Observable>.
It disallows multiple requests with the same getCacheKey() (for example, login) from calling performRequest() in parallel. Instead, if a second login request arrives while another is in progress, the second request's observable gets "discarded" and the already-running one is used instead. All the calls to onNext are cached and sent to both subscribers, so my backend is actually hit only once.
Now, suppose this code:
// Observable loginTask
public void doLogin(Observable<UserInfo> loginTask) {
    loginTask.subscribe(
            (userInfo) -> {},
            (throwable) -> {
                if (userWantsToRetry()) {
                    doLogin(loginTask);
                }
            }
    );
}
Here loginTask was composed with the previous transformer. Well, when an error occurs (it might be connectivity) and userWantsToRetry() returns true, I basically re-call the method with the same observable. Unfortunately, that observable has been cached, so I'll receive the same error without performRequest() being hit again, since the sequence just gets replayed.
Is there a way I could have both the "same requests grouping" behavior that the transformer provides me AND the retry button?
Your question has a lot going on and it's hard to put it into direct terms. I can make a couple of recommendations though. Firstly, your Observable.create can be simplified by using Observable.defer(Func0<Observable<T>>). This will run the function every time a new subscriber subscribes, and it will catch and channel any exceptions to the subscriber's onError.
Observable.defer(() -> {
    return Observable.just(performRequest());
});
Next, you can use observable.repeatWhen(Func1<Observable<Void>, Observable<?>>) to decide when you want to retry. Repeat operators re-subscribe to the observable after an onComplete event. This particular overload sends an event to a subject whenever an onComplete event is received, and the function you provide receives that subject. Your function should call something like takeWhile(predicate) and complete when you do not want to retry again.
Observable.just(1, 2, 3).flatMap((Integer num) -> {
    final AtomicInteger tryCount = new AtomicInteger(0);
    return Observable.just(num)
            .repeatWhen((Observable<? extends Void> notifications) ->
                    notifications.takeWhile((x) -> num == 2 && tryCount.incrementAndGet() != 3));
})
.subscribe(System.out::println);
Output:
1
2
2
2
3
The above example shows that repeats happen only when the emitted value is 2, up to a maximum of 2 repeats (so 2 is printed three times). If you switch to a repeatWhen in your case, the flatMap would contain your decision as to whether to use a cached observable or the realWork observable. Hope this helps!
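Putting the pieces together for the original caching question, here is a rough Kotlin/RxJava 2 sketch of the idea (my own illustration, not the answerer's code: request, runningTasks and performRequest are stand-ins for the question's transformer, map and HTTP call). The key point is that defer re-reads the task map on every (re)subscription and doOnTerminate removes the entry before the terminal event propagates, so a retry builds a fresh request instead of replaying the cached failure:

import io.reactivex.Observable

// Stand-in for the question's mapOfRunningTasks.
private val runningTasks = mutableMapOf<String, Observable<String>>()

// Stand-in for the long-running performRequest().
private fun performRequest(key: String): String = "result for $key"

fun request(key: String): Observable<String> = Observable.defer {
    synchronized(runningTasks) {
        runningTasks.getOrPut(key) {
            Observable.fromCallable { performRequest(key) }
                .doOnTerminate {                     // runs before the terminal event reaches subscribers,
                    synchronized(runningTasks) {     // so the map is already clean when a retry re-subscribes
                        runningTasks.remove(key)
                    }
                }
                .cache()                             // concurrent callers share one in-flight request
        }
    }
}

fun main() {
    request("login")
        .retry(3)                                    // each retry goes back through defer -> fresh request
        .subscribe({ println(it) }, { e -> println("failed: $e") })
}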

How to implement a self-cancelling poller without using a var?

I'm curious whether it's possible to safely implement a self-cancelling poller without using a var to keep the instance of akka.actor.Cancellable.
So far, I came up with something like what you see in the example below. However, I'm curious whether it's safe to assume that the "tick" message will never be dispatched before the hotswap happens, i.e. that the line that schedules the poller:
tick(1, 5, context.system.scheduler.schedule(Duration.Zero, 3 seconds, self, "tick"))
is basically the same as:
val poll = context.system.scheduler.schedule(Duration.Zero, 3 seconds, self, "tick")
tick(1, 5, poll)
So, I would think that in some cases the first tick would be received before the hotswap has a chance to happen... Thoughts?
import akka.actor.{Cancellable, ActorSystem}
import akka.actor.ActorDSL._
import concurrent.duration._

object PollerDemo {
  def run() {
    implicit val system = ActorSystem("DemoPoller")
    import system.dispatcher
    actor(new Act {
      become {
        case "tick" => println("UH-OH!")
        case "start" =>
          become {
            tick(1, 5, context.system.scheduler.schedule(Duration.Zero, 3 seconds, self, "tick"))
          }
      }
      def tick(curr: Long, max: Long, poll: Cancellable): Receive = {
        case "tick" => {
          println(s"poll $curr/$max")
          if (curr > max)
            cancel(poll)
          else
            become { tick(curr + 1, max, poll) }
        }
      }
      def cancel(poll: Cancellable) {
        println("cancelling")
        poll.cancel()
        println(s"cancelled successfully? ${poll.isCancelled}")
        println("shutting down")
        context.system.shutdown()
      }
    }) ! "start"
    system.awaitTermination(1 minute)
  }
}
My guess is that your code will be okay. Remember, actors only process their mailbox one message at a time. When you receive the start message, you set up a timer that will deliver another message to the mailbox and then you swap the receive implementation. Because you do the receive swap while you are still processing the start message, you will have already changed the actor's receive behavior before it processes the next message in the mailbox. So when it moves on to process the tick message, you can be sure that it will be using the new receive behavior.
You could verify this by sending an additional tick message inside the first become like so:
become {
  self ! "tick"
  tick(1, 5, context.system.scheduler.schedule(Duration.Zero, 3 seconds, self, "tick"))
}
Here we are really eliminating the timer from the equation when asking whether a message sent during the become block will be processed by the old receive or the new receive. I did not run this, but from my understanding of Akka, both of these ticks should be handled by the new receive.
You really can't do pure functional programming with actors. Sending them messages is a side effect. Since their receive function doesn't return a result, all an actor can do when receiving a message is perform side effects. Just about every single thing your code does is for side effects.
You might be avoiding vars in your implementation of the code, but become is mutating a var in the Actor superclass. context.system.scheduler.schedule is clearly side-effecting and mutating state somewhere. Every single thing that cancel does is a side effect. system.awaitTermination(1 minute) is not a function...