private String stringResult = null;
private Throwable throwableResult = null;

@Test
public void whereIsTheThrowable() {
    Observable.just("foo")
        .map(this::justBlowUp)
        .retryWhen(errors -> errors.zipWith(Observable.range(1, 3), (n, i) -> i))
        .subscribe(s -> stringResult = s, throwable -> throwableResult = throwable);

    assertNull(stringResult);
    assertNotNull(throwableResult);
}

private String justBlowUp(String s) {
    throw new RuntimeException();
}
That test fails with RxJava 2.1.7. retryWhen() appears to consume the Throwable, even after it no longer retries, so the subscribe() error lambda never receives any Throwable. While this test is silly (justBlowUp() just blows up), you can imagine an Observable chain where the work usually succeeds, only occasionally fails, and seldom fails four times in succession. Even in that rare case, though, it would be useful to have the Throwable for logging purposes.
retryUntil() does allow subscribe() to receive the final Throwable... but inside retryUntil() we do not have the Throwable at all and cannot make decisions based on it (e.g., retry N times if it seems to be an Internet connectivity error, but fail fast for everything else). retryWhen() seems to be the more powerful option, but how do we get the final Throwable after retryWhen() stops retrying?
I could use a field to hold the Throwable, set inside the retryWhen() logic, but it feels like there should be a more idiomatic solution.
retryWhen() treats the completion of the handler Observable as a signal to complete normally; therefore, the handler should signal the error itself once the retries have been exhausted:
Observable.just("foo")
    .map(this::justBlowUp)
    .retryWhen(errors -> errors.flatMap(new Function<Throwable, Observable<Integer>>() {
        int count;

        @Override
        public Observable<Integer> apply(Throwable error) {
            if (count++ < 3) {
                return Observable.just(count);
            }
            return Observable.error(error);
        }
    }))
    .test()
    .assertFailure(RuntimeException.class);
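If you prefer to stay closer to the original zipWith() style, a variant along these lines should behave the same way (a sketch of my own, assuming RxJava 2.x, not taken from the answer above): retry while the attempt counter is below the limit, and re-emit the error once the limit is reached.

// Sketch (assumption: RxJava 2.x, same test setup as above) -- a zipWith-based
// handler that retries three times and then re-emits the last error so that
// subscribe()/test() still receives the Throwable.
Observable.just("foo")
    .map(this::justBlowUp)
    .retryWhen(errors -> errors
        .zipWith(Observable.range(1, 4), (error, attempt) -> attempt < 4
            ? Observable.just(attempt)            // attempts 1..3: signal "retry"
            : Observable.<Integer>error(error))   // 4th failure: propagate the error
        .flatMap(o -> o))                         // flatten so the error terminates the handler
    .test()
    .assertFailure(RuntimeException.class);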
Given the following code, is it guaranteed that System.out.println(v) will print 1? What if I change the io and computation schedulers to other schedulers?
I have checked the source of the computation scheduler; it seems to use the executor's submit method, and according to the documentation, submit happens-before the execution of the actual Runnable. So I think the happens-before relationship is guaranteed in this case, but does this apply to other schedulers?
import io.reactivex.Completable;
import io.reactivex.schedulers.Schedulers;

public class Test {
    static int v = 0;

    public static void main(String[] args) {
        Completable.create(e -> { v = 1; e.onComplete(); })
            .subscribeOn(Schedulers.io())
            .observeOn(Schedulers.computation())
            .subscribe(() -> System.out.println(v));

        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
Also, if I assign 1 to v before Completable#create, is this change visible to the Completable's body?
Given the following code, is it guaranteed that System.out.println(v) will print 1?
Yes.
If you, however, swapped the order, there is no guarantee:
Completable.create(e -> {e.onComplete(); v = 1;})
What if I change the io and computation schedulers to other schedulers?
All standard schedulers have this guarantee.
but does this apply to other schedulers?
Any asynchronous scheduler is expected to provide this happens-before relationship and the standard ones are guaranteed because of the underlying ExecutorService.
if I assign 1 to v before Completable#create, is this change visible to Completable's body?
subscribeOn also establishes a happens-before relationship, so upon subscription, v is committed and the body of create will see the value.
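A minimal sketch of that last point (my own illustration, assuming the same imports and class as the question's Test): the write to v happens on the calling thread before subscribe(), and subscribeOn carries that across the thread boundary into the create body.

// Sketch: a write before subscribe() is visible inside create() via subscribeOn's happens-before.
static int v = 0;

public static void main(String[] args) throws InterruptedException {
    v = 1; // written on the main thread, before subscribe()
    Completable.create(e -> {
        System.out.println(v); // guaranteed to print 1, even though this runs on an io() thread
        e.onComplete();
    })
    .subscribeOn(Schedulers.io())
    .subscribe();

    Thread.sleep(1000); // keep the JVM alive long enough for the io() thread to run
}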
I'm using RxJava in an Android application with RxAndroid. I'm using mergeDelayError to combine two Retrofit network calls into one observable that will process emitted items from either source and deliver the error if either one fails. This is not working; it only fires the onError action when either source encounters an error. To test this I switched to a very simple example, and still the successAction is never called when one source emits an error. See the example below.
Observable.mergeDelayError(
        Observable.error(new RuntimeException()),
        Observable.just("Hello"))
    .observeOn(AndroidSchedulers.mainThread())
    .subscribeOn(Schedulers.io())
    .finallyDo(completeAction)
    .subscribe(successAction, errorAction);
The success action will only be called if I use two success observables. Am I missing something with how mergeDelayError is supposed to work?
EDIT:
I've found that if I remove the observeOn and subscribeOn, everything works as expected. I need to specify threads, and I thought that was the whole point of using Rx. Any idea why specifying those Schedulers would break the behavior?
Use .observeOn(AndroidSchedulers.mainThread(), true) instead of .observeOn(AndroidSchedulers.mainThread()).
public final Observable<T> observeOn(Scheduler scheduler, boolean delayError) {
    return observeOn(scheduler, delayError, RxRingBuffer.SIZE);
}
Above is the signature of the observeOn function. The following code works:
Observable.mergeDelayError(
        Observable.error(new RuntimeException()),
        Observable.just("Hello"))
    .observeOn(AndroidSchedulers.mainThread(), true)
    .subscribeOn(Schedulers.io())
    .subscribe(new Subscriber<String>() {
        @Override
        public void onCompleted() {
        }

        @Override
        public void onError(Throwable e) {
        }

        @Override
        public void onNext(String s) {
        }
    });
Got this trick from ConcatDelayError thread: https://github.com/ReactiveX/RxJava/issues/3908#issuecomment-217999009
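To add a bit of context (my note, not part of the original answer): without the delayError flag, observeOn is allowed to deliver an onError as soon as it arrives, cutting ahead of items still queued for the target thread, so the error from the first source can reach the subscriber before "Hello" does. Passing true asks observeOn to hold the error back until the already-queued items have been delivered.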
This still seems like a bug in the mergeDelayError operator, but I was able to get it working by duplicating the observeOn and subscribeOn for each observable.
Observable.mergeDelayError(
        Observable.error(new RuntimeException())
            .observeOn(AndroidSchedulers.mainThread())
            .subscribeOn(Schedulers.io()),
        Observable.just("Hello")
            .observeOn(AndroidSchedulers.mainThread())
            .subscribeOn(Schedulers.io()))
    .finallyDo(completeAction)
    .subscribe(successAction, errorAction);
I think you are not waiting for the terminal event, and the main thread quits before the events are delivered to your observer. The following test passes for me with RxJava 1.0.14:
@Test
public void errorDelayed() {
    TestSubscriber<Object> ts = TestSubscriber.create();

    Observable.mergeDelayError(
            Observable.error(new RuntimeException()),
            Observable.just("Hello"))
        .subscribeOn(Schedulers.io()).subscribe(ts);

    ts.awaitTerminalEvent();

    ts.assertError(RuntimeException.class);
    ts.assertValue("Hello");
}
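Outside of a test, the same idea can be expressed with a CountDownLatch that the terminal events release (my own sketch, not from the answer above, and only appropriate where blocking is acceptable, so not on the Android main thread):

// Sketch (RxJava 1.x): keep the calling thread alive until the merged stream terminates.
CountDownLatch done = new CountDownLatch(1);

Observable.mergeDelayError(
        Observable.error(new RuntimeException()),
        Observable.just("Hello"))
    .subscribeOn(Schedulers.io())
    .subscribe(
        s -> System.out.println("got: " + s),
        err -> { System.out.println("failed: " + err); done.countDown(); },
        () -> done.countDown());

done.await(5, TimeUnit.SECONDS); // "Hello" is delivered, then the delayed error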
I have the following scenario I need to achieve:
perform a network call for each request object in a list, with a 1-second delay between calls
and I have the following implementation using RxJava 2:
emit an interval stream
emit an iterable stream
zip them to emit each item from the iterable source
which so far has no problem, and I fully understand how it works. Now I integrated the above with the following:
map each item emitted from zip into a new observable that defers/postpones an observable source for a network call
each mapped-emitted observable will perform an individual network call for each request
and I ended up with the following code
Observable
    .zip(Observable.interval(1, TimeUnit.SECONDS), Observable.fromIterable(iterableRequests),
        new BiFunction<Long, RequestInput, RequestResult>() {
            @Override
            public RequestResult apply(@NonNull Long aLong, @NonNull final RequestInput request) throws Exception {
                return request;
            }
        })
    .map(new Function<RequestResult, ObservableSource<?>>() {
        @Override
        public ObservableSource<?> apply(@NonNull RequestResult requestResult) throws Exception {
            // map each requestResult into this observable and perform a new stream
            return Observable
                .defer(new Callable<ObservableSource<?>>() {
                    // return a postponed observable for each subscriber
                })
                .retryWhen(new Function<Observable<Throwable>, ObservableSource<?>>() {
                    // return throwable observable
                });
        }
    })
    .subscribe(new Observer<ObservableSource<?>>() {
        //.. onSubscribe {}
        //.. onError {}
        //.. onComplete {}

        @Override
        public void onNext(ObservableSource<?> observableSource) {
            // actual subscription for each of the Observable.defer inside
            // so it will start to emit and perform the necessary operation
        }
    });
But the problem is that it executes the Observable.defer source only ONCE, while the iteration keeps going (I put a Log inside the map operator to see the iteration).
Can anyone please guide me on how I can achieve what I want? I have gone through a lot of papers and drawn a lot of marble diagrams just to see where I am with my code.
I don't know if the diagram I created illustrates the thing that I want; if it does, I don't know why the sample code doesn't behave as the diagram portrays.
Any help would be greatly appreciated.
The first part is fine, but the map step is a bit unneeded. What you are doing is mapping each RequestResult to an Observable and then manually subscribing to it in Observer.onNext(). Actually, the defer is not necessary, since you're creating a separate Observable for each RequestResult with different data; the defer happens at each subscribe you do in onNext(), while the map happens, as you observed, for each emission of the zipped RequestResult.
What you probably need is a simple flatMap() to map each RequestResult value to a separate Observable that will do the network request; flatMap will merge the result of each request back into the stream, so you just need to handle the final value emitted for each request instead of subscribing manually to each Observable.
Just keep in mind that order might be lost if some requests take longer than the delay between them.
Observable.zip(Observable.interval(1, TimeUnit.SECONDS), Observable.fromIterable(iterableRequests),
        new BiFunction<Long, RequestInput, RequestResult>() {
            @Override
            public RequestResult apply(@NonNull Long aLong,
                                       @NonNull final RequestInput request) throws Exception {
                return request;
            }
        })
    .flatMap(new Function<RequestResult, ObservableSource<?>>() {
        @Override
        public ObservableSource<?> apply(RequestResult requestResult) throws Exception {
            return createObservableFromRequest(requestResult)
                .retryWhen(new Function<Observable<Throwable>, ObservableSource<?>>() {
                    // return throwable observable
                });
        }
    })
    .subscribe(new Observer<Object>() {
        //.. onSubscribe {}
        //.. onError {}
        //.. onComplete {}

        @Override
        public void onNext(Object networkResult) {
            // do something with each network result request emission
        }
    });
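If the ordering concern mentioned above matters, one possible variant (my own sketch, not part of the original answer, assuming the same createObservableFromRequest helper) is to use concatMap instead of flatMap: it subscribes to each inner Observable one at a time and therefore preserves the request order, at the cost of not running the requests concurrently.

// Sketch (assumption: RxJava 2.x) -- replaces the .flatMap(...) step above.
// concatMap keeps the emission order of the zipped requests.
.concatMap(new Function<RequestResult, ObservableSource<?>>() {
    @Override
    public ObservableSource<?> apply(RequestResult requestResult) throws Exception {
        return createObservableFromRequest(requestResult);
    }
})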
I managed to make it work; somewhere inside the Observable.defer, my retrofitClient was null:
retrofitClient.getApiURL().post(request); // client was null
My retrofitClient was null (I looked through the code and noticed it was not initialized; once I initialized it properly, it worked).
Now, can anybody tell me why Rx didn't throw an exception back to the original observable stream? There was no NullPointerException, so I'm confused.
I have heard several reactive programming folks say "don't break the monad, just continue it". I see the merit in this. There are still certain instances I'm confused about, especially when the Observable is finally consumed or subscribed to. This is even more confusing when several observables have to be consumed at once, and it doesn't feel very practical to combine them into a single observable.
Let's say for instance I have a TrackedAircraft type. Some of its properties are final while other properties are Observable.
public interface TrackedAircraft {
    public int getFlightNumber();
    public int getCapacity();
    public Observable<String> getOrigin();
    public Observable<String> getDestination();
    public Observable<Integer> getMileage();
    public Observable<Point> getLocation();
}
I could wire this to a GUI pretty easily, and just subscribe controls to be updated with each emission of each property. But what about an email or a body of static text? This is not as straightforward, because the only way I can think of not breaking the monad is to combine all the observables, which sounds like a pain, especially if I have an Observable emitting TrackedFlights.
Is it okay to block in this situation? Or is there a more monadic way to accomplish this I haven't thought of?
public static String generateEmailReportForFlight(TrackedAircraft flight) {
    return new StringBuilder().append("FLIGHT NUMBER: ").append(flight.getFlightNumber()).append("\r\n")
        .append("CAPACITY: ").append(flight.getCapacity()).append("\r\n")
        .append("ORIGIN: ").append(flight.getOrigin() /*What do I do here without blocking?*/)
        .append("DESTINATION: ").append(flight.getDestination() /*What do I do here without blocking?*/)
        .append("MILEAGE: ").append(flight.getMileage() /*What do I do here without blocking?*/)
        .append("LOCATION: ").append(flight.getLocation() /*What do I do here without blocking?*/)
        .toString();
}
///
Observable<TrackedAircraft> trackedFlights = ...;
trackedFlights.map(f -> generateEmailReportForFlight(f));
You can use flatMap + combineLatest:
Observable<TrackedAircraft> trackedFlights = ...

trackedFlights
    .flatMap(flight -> emailReport(flight))
    .subscribe(msg -> sendEmail(msg));

Observable<String> emailReport(TrackedAircraft flight) {
    return Observable.combineLatest(
        flight.getOrigin(),
        flight.getDestination(),
        flight.getMileage(),
        flight.getLocation(),
        (origin, destination, mileage, location) -> {
            return new StringBuilder()
                .append("FLIGHT NUMBER: ").append(flight.getFlightNumber())
                .append("\r\n")
                .append("CAPACITY: ").append(flight.getCapacity())
                .append("\r\n")
                .append("ORIGIN: ").append(origin)
                .append("\r\n")
                .append("DESTINATION: ").append(destination)
                .append("\r\n")
                .append("MILEAGE: ").append(mileage)
                .append("\r\n")
                .append("LOCATION: ").append(location)
                .toString();
        }
    );
}
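One caveat worth noting (my addition, not from the original answer): combineLatest emits only after every source has produced at least one value, and it re-emits a new report each time any of the four properties changes. If the email should be sent once per flight, one option is to limit the inner stream:

// Hypothetical variant: send a single report per flight rather than one per property change.
trackedFlights
    .flatMap(flight -> emailReport(flight).take(1))
    .subscribe(msg -> sendEmail(msg));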
I am writing a server in Netty, in which I need to make a call to memcached. I am using spymemcached and can easily do the synchronous memcached call. I would like this memcached call to be async. Is that possible? The examples provided with Netty do not seem to be helpful.
I tried using callbacks: I created an ExecutorService pool in my Handler and submitted a callback worker to it, like this:
public class MyHandler extends ChannelInboundMessageHandlerAdapter<MyPOJO> implements CallbackInterface {
    ...
    private static ExecutorService pool = Executors.newFixedThreadPool(20);

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MyPOJO pojo) {
        ...
        CallingbackWorker worker = new CallingbackWorker(key, this);
        pool.submit(worker);
        ...
    }

    public void myCallback() {
        // get response
        this.ctx.nextOutboundMessageBuf().add(response);
    }
}
CallingbackWorker looks like:
public class CallingbackWorker implements Callable {
    public CallingbackWorker(String key, CallbackInterface c) {
        this.c = c;
        this.key = key;
    }

    public Object call() {
        // get value from key
        c.myCallback(value);
    }
}
However, when I do this, this.ctx.nextOutboundMessageBuf() in myCallback gets stuck.
So, overall, my question is: how to do async memcached calls in Netty?
There are two problems here: a small-ish issue with the way you're trying to code this, and a bigger one with many libraries that provide async service calls, but no good way to take full advantage of them in an async framework like Netty. That forces users into suboptimal hacks like this one, or a less-bad, but still not ideal approach I'll get to in a moment.
First the coding problem. The issue is that you're trying to call a ChannelHandlerContext method from a thread other than the one associated with your handler, which is not allowed. That's pretty easy to fix, as shown below. You could code it a few other ways, but this is probably the most straightforward:
private static ExecutorService pool = Executors.newFixedThreadPool(20);

public void channelRead(final ChannelHandlerContext ctx, final Object msg) {
    //...
    final GetFuture<String> future = memcachedClient().getAsync("foo", stringTranscoder());

    // first wait for the response on a pool thread
    pool.execute(new Runnable() {
        public void run() {
            String value;
            Exception err;
            try {
                value = future.get(3, TimeUnit.SECONDS); // or whatever timeout you want
                err = null;
            } catch (Exception e) {
                err = e;
                value = null;
            }
            // put results into final variables; compiler won't let us do it directly above
            final String fValue = value;
            final Exception fErr = err;

            // now process the result on the ChannelHandler's thread
            ctx.executor().execute(new Runnable() {
                public void run() {
                    handleResult(fValue, fErr);
                }
            });
        }
    });
    // note that we drop through to here right after calling pool.execute() and
    // return, freeing up the handler thread while we wait on the pool thread.
}

private void handleResult(String value, Exception err) {
    // handle it
}
That will work, and might be sufficient for your application. But you've got a fixed-size thread pool, so if you're ever going to handle much more than 20 concurrent connections, that will become a bottleneck. You could increase the pool size, or use an unbounded one, but at that point you might as well be running under Tomcat, as memory consumption and context-switching overhead start to become issues, and you lose the scalability that was the attraction of Netty in the first place!
And the thing is, Spymemcached is NIO-based, event-driven, and uses just one thread for all its work, yet provides no way to fully take advantage of its event-driven nature. I expect they'll fix that before too long, just as Netty 4 and Cassandra have recently by providing callback (listener) methods on Future objects.
Meanwhile, being in the same boat as you, I researched the alternatives, and not being too happy with what I found, I wrote (yesterday) a Future tracker class that can poll up to thousands of Futures at a configurable rate, and call you back on the thread (Executor) of your choice when they complete. It uses just one thread to do this. I've put it up on GitHub if you'd like to try it out, but be warned that it's still wet, as they say. I've tested it a lot in the past day, and even with 10000 concurrent mock Future objects, polling once a millisecond, its CPU utilization is negligible, though it starts to go up beyond 10000. Using it, the example above looks like this:
// in some globally-accessible class:
public static final ForeignFutureTracker FFT = new ForeignFutureTracker(1, TimeUnit.MILLISECONDS);

// in a handler class:
public void channelRead(final ChannelHandlerContext ctx, final Object msg) {
    // ...
    final GetFuture<String> future = memcachedClient().getAsync("foo", stringTranscoder());

    // add a listener for the Future, with a timeout in 2 seconds, and pass
    // the Executor for the current context so the callback will run
    // on the same thread.
    Global.FFT.addListener(future, 2, TimeUnit.SECONDS, ctx.executor(),
        new ForeignFutureListener<String, GetFuture<String>>() {
            public void operationSuccess(String value) {
                // do something ...
                ctx.fireChannelRead(someval);
            }
            public void operationTimeout(GetFuture<String> f) {
                // do something ...
            }
            public void operationFailure(Exception e) {
                // do something ...
            }
        });
}
You don't want more than one or two FFT instances active at any time, or they could become a drain on CPU. But a single instance can handle thousands of outstanding Futures; about the only reason to have a second one would be to handle higher-latency calls, like S3, at a slower polling rate, say 10-20 milliseconds.
One drawback of the polling approach is that it adds a small amount of latency. For example, polling once a millisecond, on average it will add 500 microseconds to the response time. That won't be an issue for most applications, and I think is more than offset by the memory and CPU savings over the thread pool approach.
I expect within a year or so this will be a non-issue, as more async clients provide callback mechanisms, letting you fully leverage NIO and the event-driven model.