There is a sample program below that replicates my issue.
Issue:
1. Apply the flatMap transformation to some Observable.
2. Subscribe to that Observable and store the subscription somewhere.
3. Dispose of the subscription before the Observable terminates naturally.
4. In an Observable returned by the mapper function, raise an exception.
5. The flatMap operator doesn't know how to handle the exception, hands it to RxJavaPlugins.onError, and the program crashes/quits.
Preferred/expected behaviour:
The error should be propagated to my onError handler instead of crashing the program via RxJavaPlugins#onError.
The culprit is the snippet below, found in ObservableFlatMap. The issue is that once the parent has been disposed, invocations of addThrowable return false, so the error is never propagated down to onError.
@Override
public void onError(Throwable t) {
    if (parent.errors.addThrowable(t)) {
        if (!parent.delayErrors) {
            parent.disposeAll();
        }
        done = true;
        parent.drain();
    } else {
        RxJavaPlugins.onError(t);
    }
}
What can I do in this situation? I need an operator that acts like flatMap but propagates errors down to my onError handler instead of crashing my program.
This is a real scenario in an Android app of mine: subscriptions are automatically disposed when the user exits a window/activity, and exceptions (typically InterruptedIOException) may be raised after disposal.
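For context, a common mitigation (along the lines of the pattern suggested in the RxJava 2 wiki) is a more lenient global error handler that unwraps UndeliverableException and silently drops the interruption-related errors that follow disposal. This is only a sketch: it avoids the crash, but it still does not deliver the error to onError:

RxJavaPlugins.setErrorHandler(e -> {
    if (e instanceof UndeliverableException) {
        e = e.getCause(); // unwrap the error flatMap could not deliver
    }
    if (e instanceof InterruptedException || e instanceof IOException) {
        // Expected when disposal interrupts a blocking call
        // (e.g. InterruptedIOException extends IOException): ignore.
        return;
    }
    // Anything else is likely a genuine bug: surface it.
    Thread.currentThread().getUncaughtExceptionHandler()
            .uncaughtException(Thread.currentThread(), e);
});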
Code to replicate the issue
import io.reactivex.Observable;
import io.reactivex.disposables.Disposable;
import io.reactivex.plugins.RxJavaPlugins;
import io.reactivex.schedulers.Schedulers;

public class Main {

    public static void main(String[] args) throws InterruptedException {
        RxJavaPlugins.setErrorHandler((throwable) -> {
            System.out.println("Please don't come through here");
            throwable.printStackTrace();
        });

        Disposable disposable = Observable.just(1)
                .subscribeOn(Schedulers.computation())
                .flatMap((item) -> {
                    return Observable.just(1)
                            .doOnNext((arg) -> Thread.sleep(1000))
                            .doOnNext((arg) -> {
                                throw new RuntimeException("Error");
                            });
                })
                .subscribe(System.out::println, (throwable) -> {
                    System.out.println("Please come through here");
                    throwable.printStackTrace();
                });

        Thread.sleep(500);
        disposable.dispose();
        Thread.sleep(1000);
    }
}
Execution output
Please don't come through here
io.reactivex.exceptions.UndeliverableException: java.lang.InterruptedException: sleep interrupted
at io.reactivex.plugins.RxJavaPlugins.onError(RxJavaPlugins.java:349)
at io.reactivex.internal.operators.observable.ObservableFlatMap$InnerObserver.onError(ObservableFlatMap.java:573)
at io.reactivex.internal.operators.observable.ObservableDoOnEach$DoOnEachObserver.onError(ObservableDoOnEach.java:119)
at io.reactivex.internal.operators.observable.ObservableDoOnEach$DoOnEachObserver.onError(ObservableDoOnEach.java:119)
at io.reactivex.internal.operators.observable.ObservableDoOnEach$DoOnEachObserver.onNext(ObservableDoOnEach.java:99)
at io.reactivex.internal.operators.observable.ObservableScalarXMap$ScalarDisposable.run(ObservableScalarXMap.java:248)
at io.reactivex.internal.operators.observable.ObservableJust.subscribeActual(ObservableJust.java:35)
at io.reactivex.Observable.subscribe(Observable.java:10903)
at io.reactivex.internal.operators.observable.ObservableDoOnEach.subscribeActual(ObservableDoOnEach.java:42)
at io.reactivex.Observable.subscribe(Observable.java:10903)
at io.reactivex.internal.operators.observable.ObservableDoOnEach.subscribeActual(ObservableDoOnEach.java:42)
at io.reactivex.Observable.subscribe(Observable.java:10903)
at io.reactivex.internal.operators.observable.ObservableFlatMap$MergeObserver.subscribeInner(ObservableFlatMap.java:162)
at io.reactivex.internal.operators.observable.ObservableFlatMap$MergeObserver.onNext(ObservableFlatMap.java:139)
at io.reactivex.internal.operators.observable.ObservableSubscribeOn$SubscribeOnObserver.onNext(ObservableSubscribeOn.java:58)
at io.reactivex.internal.operators.observable.ObservableScalarXMap$ScalarDisposable.run(ObservableScalarXMap.java:248)
at io.reactivex.internal.operators.observable.ObservableJust.subscribeActual(ObservableJust.java:35)
at io.reactivex.Observable.subscribe(Observable.java:10903)
at io.reactivex.internal.operators.observable.ObservableSubscribeOn$SubscribeTask.run(ObservableSubscribeOn.java:96)
at io.reactivex.internal.schedulers.ScheduledDirectTask.call(ScheduledDirectTask.java:38)
at io.reactivex.internal.schedulers.ScheduledDirectTask.call(ScheduledDirectTask.java:26)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException: sleep interrupted
at java.lang.Thread.sleep(Native Method)
at Main.lambda$null$1(Main.java:18)
at io.reactivex.internal.operators.observable.ObservableDoOnEach$DoOnEachObserver.onNext(ObservableDoOnEach.java:95)
... 22 more
Expected/preferred output
Please come through here
java.lang.RuntimeException: Error
at Main.lambda$null$2(Main.java:19)
at io.reactivex.internal.operators.observable.ObservableDoOnEach$DoOnEachObserver.onNext(ObservableDoOnEach.java:95)
at io.reactivex.internal.operators.observable.ObservableDoOnEach$DoOnEachObserver.onNext(ObservableDoOnEach.java:103)
at io.reactivex.internal.operators.observable.ObservableScalarXMap$ScalarDisposable.run(ObservableScalarXMap.java:248)
at io.reactivex.internal.operators.observable.ObservableJust.subscribeActual(ObservableJust.java:35)
at io.reactivex.Observable.subscribe(Observable.java:10903)
at io.reactivex.internal.operators.observable.ObservableDoOnEach.subscribeActual(ObservableDoOnEach.java:42)
at io.reactivex.Observable.subscribe(Observable.java:10903)
at io.reactivex.internal.operators.observable.ObservableDoOnEach.subscribeActual(ObservableDoOnEach.java:42)
at io.reactivex.Observable.subscribe(Observable.java:10903)
at io.reactivex.internal.operators.observable.ObservableFlatMap$MergeObserver.subscribeInner(ObservableFlatMap.java:162)
at io.reactivex.internal.operators.observable.ObservableFlatMap$MergeObserver.onNext(ObservableFlatMap.java:139)
at io.reactivex.internal.operators.observable.ObservableSubscribeOn$SubscribeOnObserver.onNext(ObservableSubscribeOn.java:58)
at io.reactivex.internal.operators.observable.ObservableScalarXMap$ScalarDisposable.run(ObservableScalarXMap.java:248)
at io.reactivex.internal.operators.observable.ObservableJust.subscribeActual(ObservableJust.java:35)
at io.reactivex.Observable.subscribe(Observable.java:10903)
at io.reactivex.internal.operators.observable.ObservableSubscribeOn$SubscribeTask.run(ObservableSubscribeOn.java:96)
at io.reactivex.internal.schedulers.ScheduledDirectTask.call(ScheduledDirectTask.java:38)
at io.reactivex.internal.schedulers.ScheduledDirectTask.call(ScheduledDirectTask.java:26)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
The problem is the .subscribeOn(Schedulers.computation()) before flatMap. When you dispose, you interrupt the thread that produces the flatMap observables, which breaks the whole subscription. To fix this, apply subscribeOn inside (or after) flatMap, or observe on a different thread.
Working example:
RxJavaPlugins.setErrorHandler((throwable) -> {
    System.out.println("Please don't come through here");
    throwable.printStackTrace();
});

Disposable disposable = Observable.just(1)
        .flatMap((item) -> {
            return Observable.just(1)
                    .doOnNext((arg) -> Thread.sleep(1000))
                    .doOnNext((arg) -> {
                        throw new IllegalStateException("Error");
                    })
                    .subscribeOn(Schedulers.computation());
        })
        .subscribe(System.out::println,
                (throwable) -> {
                    System.out.println("Please come through here");
                    throwable.printStackTrace();
                });

Thread.sleep(500);
disposable.dispose();
Thread.sleep(1000);
Related
I'm developing an Apache Beam pipeline to publish unbounded data to a Pub/Sub topic. Publishing is done using the Pub/Sub IO connector PubsubIO.writeMessages().
If the Pub/Sub connection fails while the pipeline is processing, I need to capture the connection failure and identify the data that was being processed at the time of the failure. But I couldn't find a straightforward failure-handling mechanism in Apache Beam's Pub/Sub write.
When I test this with a bad Pub/Sub connection, the pipeline keeps trying to connect, throwing the following exception for a while, and if the connection stays unsuccessful the pipeline execution fails.
com.google.api.gax.rpc.UnavailableException: io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:69)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1041)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1215)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:771)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:563)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:553)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:68)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:739)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:718)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
at io.grpc.Status.asRuntimeException(Status.java:535)
... 10 more
Caused by: io.grpc.netty.shaded.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: no further information: /127.0.0.1:58843
Caused by: java.net.ConnectException: Connection refused: no further information
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:779)
at io.grpc.netty.shaded.io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
at io.grpc.netty.shaded.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.grpc.netty.shaded.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
I tried to catch this exception around the Pub/Sub write transform, and that doesn't work either.
So my question is: is there any way to capture the above exception and keep the pipeline going until the connection is successful? My Pub/Sub write code snippet is as follows:
public class PubSubWrite extends PTransform<PCollection<String>, PDone> {

    private final String outputTopic;

    public PubSubWrite(String outputTopic) {
        this.outputTopic = outputTopic;
    }

    @Override
    public PDone expand(PCollection<String> input) {
        return input
            .apply(
                "convertMessagesToPubsubMessages",
                MapElements.into(TypeDescriptor.of(PubsubMessage.class))
                    .via(
                        (String json) ->
                            new PubsubMessage(json.getBytes(Charsets.UTF_8), ImmutableMap.of("SOURCE", "TEST"))))
            .apply(
                "writePubsubMessagesToPubSub", PubsubIO.writeMessages().to(outputTopic));
    }
}
There is no native API for error handling in PubsubIO transforms, as you can see in the documentation.
I recommend you open a feature request on the issue tracker asking for an error-handling implementation in the Java library's PubsubIO connector.
Meanwhile, you could return an empty error collection, or implement the transform to catch the exception yourself.
Example for the empty error collection:

@Override
public WithFailures.Result<PDone, PubsubMessage> expand(PCollection<String> input) {
    PDone done = input
        .apply(
            "convertMessagesToPubsubMessages",
            MapElements.into(TypeDescriptor.of(PubsubMessage.class))
                .via(
                    (String json) ->
                        new PubsubMessage(json.getBytes(Charsets.UTF_8), ImmutableMap.of("SOURCE", "TEST"))))
        .apply(
            "writePubsubMessagesToPubSub", PubsubIO.writeMessages().to(outputTopic));
    return WithFailures.Result.of(done, EmptyErrors.in(input.getPipeline()));
}

private static class EmptyErrors extends PTransform<PBegin, PCollection<PubsubMessage>> {

    /** Creates an empty error collection in the given pipeline. */
    public static PCollection<PubsubMessage> in(Pipeline pipeline) {
        return pipeline.apply(new EmptyErrors());
    }

    @Override
    public PCollection<PubsubMessage> expand(PBegin input) {
        return input.apply(Create.empty(PubsubMessageWithAttributesCoder.of()));
    }
}
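And a hedged sketch of the "catch it yourself" route: do the publish inside a DoFn with a try/catch and route failing elements to a dead-letter output. Here publish() stands in for a hypothetical direct Pub/Sub client call (it is not part of PubsubIO), and input is the PCollection<String> from the question:

// Tags for the main and dead-letter outputs (names are illustrative).
final TupleTag<String> successTag = new TupleTag<String>() {};
final TupleTag<String> failureTag = new TupleTag<String>() {};

PCollectionTuple results = input.apply("publishWithCatch",
    ParDo.of(new DoFn<String, String>() {
        @ProcessElement
        public void process(ProcessContext c) {
            try {
                publish(c.element());              // hypothetical direct client call
                c.output(c.element());
            } catch (Exception e) {
                c.output(failureTag, c.element()); // keep the element that failed
            }
        }
    }).withOutputTags(successTag, TupleTagList.of(failureTag)));

PCollection<String> deadLetters = results.get(failureTag); // inspect or persist these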
Usually such failures are retried by the runner. For example, the Dataflow runner will retry failures indefinitely for streaming jobs. Note that this is in addition to any local (VM-level) retries for errors that produce retriable HTTP error codes (for example, 5xx). So the pipeline should continue once you fix the underlying issue, but note that your backlog might grow significantly if the pipeline is unable to process data for some time, so you might see a delay.
I'm trying to use Spring Batch remote partitioning for scaling the job, with Apache Kafka as the middleware.
Here is a brief configuration of the manager step:
@Bean
public Step managerStep() {
    return managerStepBuilderFactory.get("managerStep")
            .partitioner("workerStep", filePartitioner)
            .outputChannel(requestForWorkers())
            .inputChannel(repliesFromWorkers())
            .build();
}
So I'm using channels both for sending requests to the workers and for receiving responses from them. I know the other option is to poll the JobRepository (which works fine in my case), but I would rather not use it.
Here are also some of the Kafka configs:
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
spring.kafka.producer.properties.spring.json.add.type.headers=true
spring.kafka.consumer.properties.spring.json.trusted.packages=org.springframework.batch.integration.partition,org.springframework.batch.core
The master and the workers are configured, and the master can send requests through Kafka to the workers. The workers start processing and everything is fine until they try to send the response back through Kafka.
As you can see, I'm using the JsonSerializer and JsonDeserializer for sending/receiving the messages. The problem is that when Jackson tries to serialize the StepExecution, it falls into an infinite loop, since the StepExecution holds a JobExecution and the JobExecution in turn holds a List of StepExecutions:
Caused by: org.apache.kafka.common.errors.SerializationException: Can't serialize data [StepExecution: id=3001, version=6, name=workerStep:61127a319d6caf656442ff53, status=COMPLETED, exitStatus=COMPLETED, readCount=10, filterCount=0, writeCount=10 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=4, rollbackCount=0, exitDescription=] for topic [repliesFromWorkers]
Caused by: com.fasterxml.jackson.databind.JsonMappingException: Infinite recursion (StackOverflowError) (through reference chain: org.springframework.batch.core.JobExecution["stepExecutions"]->java.util.Collections$UnmodifiableRandomAccessList[0]->org.springframework.batch.core.StepExecution["jobExecution"]->org.springframework.batch.core.JobExecution["stepExecutions"]->java.util.Collections$UnmodifiableRandomAccessList[0]->org.springframework.batch.core.StepExecution["jobExecution"]->org.springframework.batch.core.JobExecution["stepExecutions"]-....
So I thought maybe I could customize the serialization of the StepExecution so that it ignores the List of StepExecutions inside its JobExecution. But even in that case, deserializing the StepExecution fails on the master side:
Caused by: com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot construct instance of `org.springframework.batch.core.StepExecution` (although at least one Creator exists): cannot deserialize from Object value (no delegate- or property-based Creator)
Is there any way to make this work?
I'm using Spring Boot 2.4.2 and its corresponding versions of spring-boot-starter-batch, spring-batch-integration, spring-integration-kafka and spring-kafka.
You can create a custom (de)serializer and handle it manually. Something like this will help:
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.core.serializer.DefaultSerializer;
import org.springframework.core.serializer.Serializer;
import org.springframework.kafka.support.serializer.JsonSerializer;
import org.springframework.messaging.converter.MessageConversionException;

public class KafkaStringOrByteSerializer<T> extends JsonSerializer<T> {

    private final Serializer<Object> byteSerializer = new DefaultSerializer();
    private final org.apache.kafka.common.serialization.Serializer<String> stringSerializer = new StringSerializer();

    @Override
    public byte[] serialize(String topic, T data) {
        if (needsBinarySerializer(data)) {
            return this.serializeBinary(data);
        } else {
            return stringSerializer.serialize(topic, (String) data);
        }
    }

    // Spring Batch types are not Jackson-friendly, so they go through
    // standard Java serialization instead of JSON.
    private boolean needsBinarySerializer(Object data) {
        if (data instanceof byte[] || data instanceof Byte[] || data instanceof Byte) {
            return true;
        }
        if (data != null && data.getClass() != null) {
            return data.getClass().getName().startsWith("org.springframework.batch");
        }
        return false;
    }

    private byte[] serializeBinary(Object data) {
        try (ByteArrayOutputStream output = new ByteArrayOutputStream()) {
            byteSerializer.serialize(data, output);
            return output.toByteArray();
        } catch (IOException e) {
            throw new MessageConversionException("Cannot convert object to bytes", e);
        }
    }
}
A similar approach can be taken for the deserializer.
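For the consumer side, a minimal sketch, assuming the reply channel only ever carries Java-serialized Spring Batch objects (the class name here is illustrative):

import java.io.ByteArrayInputStream;
import java.io.IOException;
import org.apache.kafka.common.serialization.Deserializer;
import org.springframework.core.serializer.DefaultDeserializer;
import org.springframework.messaging.converter.MessageConversionException;

public class KafkaBatchDeserializer<T> implements Deserializer<T> {

    // Standard Java deserialization, mirroring DefaultSerializer on the producer side.
    private final DefaultDeserializer byteDeserializer = new DefaultDeserializer();

    @Override
    @SuppressWarnings("unchecked")
    public T deserialize(String topic, byte[] data) {
        try (ByteArrayInputStream input = new ByteArrayInputStream(data)) {
            return (T) byteDeserializer.deserialize(input);
        } catch (IOException e) {
            throw new MessageConversionException("Cannot convert bytes to object", e);
        }
    }
}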
Is there a way to wait for a future to complete without blocking the event loop?
An example of a use case with querying Mongo:
Future<Result> dbFut = Future.future();
mongo.findOne("myusers", myQuery, new JsonObject(), res -> {
    if (res.succeeded()) {
        ...
        dbFut.complete(res.result());
    } else {
        ...
        dbFut.fail(res.cause());
    }
});

// Here I need the result of the DB query
if (dbFut.succeeded()) {
    doSomethingWith(dbFut.result());
} else {
    error();
}
I know the doSomethingWith(dbFut.result()); can be moved into the handler, but if it's long, the code becomes unreadable (callback hell?). Is that the right solution? Is that the only solution without additional libraries?
I'm aware that RxJava simplifies the code, but as I don't know it, learning Vert.x and RxJava at the same time is just too much.
I also wanted to give vertx-sync a try. I put the dependency in the pom.xml; everything got downloaded fine, but when I started my app I got the following error:
maurice#mickey> java \
-javaagent:~/.m2/repository/co/paralleluniverse/quasar-core/0.7.5/quasar-core-0.7.5-jdk8.jar \
-jar target/app-dev-0.1-fat.jar \
-conf conf/config.json
Error opening zip file or JAR manifest missing : ~/.m2/repository/co/paralleluniverse/quasar-core/0.7.5/quasar-core-0.7.5-jdk8.jar
Error occurred during initialization of VM
agent library failed to init: instrument
I know what the error means in general, but I don't know what it means in this context... I tried to google it but didn't find any clear explanation of which manifest to put where. And as before, unless it's mandatory, I prefer to learn one thing at a time.
So, back to the question: is there a way with "basic" Vert.x to wait for a future without disturbing the event loop?
You can set a handler for the future to be executed upon completion or failure:
Future<Result> dbFut = Future.future();
mongo.findOne("myusers", myQuery, new JsonObject(), res -> {
    if (res.succeeded()) {
        ...
        dbFut.complete(res.result());
    } else {
        ...
        dbFut.fail(res.cause());
    }
});

dbFut.setHandler(asyncResult -> {
    if (asyncResult.succeeded()) {
        // your logic here
    }
});
This is a pure Vert.x way that doesn't block the event loop.
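If the follow-up logic is long, chaining with compose() keeps the code flat instead of nesting handlers. A hedged sketch, where findUser() and loadOrders() are hypothetical helpers that each return a Future:

findUser()
        .compose(user -> loadOrders(user)) // runs only if findUser() succeeded
        .setHandler(ar -> {
            if (ar.succeeded()) {
                doSomethingWith(ar.result());
            } else {
                error();
            }
        });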
I agree that you should not block in the Vert.x processing pipeline, but I make one exception to that rule: start-up. By design, I want to block while my HTTP server is initialising.
This code might help you:
/**
 * @return null when waiting on {@code Future<Void>}
 */
@Nullable
public static <T>
T awaitComplete(Future<T> f)
        throws Throwable {
    final Object lock = new Object();
    final AtomicReference<AsyncResult<T>> resultRef = new AtomicReference<>(null);
    synchronized (lock) {
        // We *must* be locked before registering a callback.
        // If the result is ready, the callback is called immediately!
        f.onComplete(
            (AsyncResult<T> result) -> {
                resultRef.set(result);
                synchronized (lock) {
                    lock.notify();
                }
            });
        // Check before waiting: if the future was already complete, the callback
        // has run synchronously on this thread and no further notify() will come.
        while (null == resultRef.get()) {
            // Nested sync on lock is fine. If we get a spurious wake-up before
            // resultRef is set, we need to reacquire the lock, then wait again.
            // Ref: https://stackoverflow.com/a/249907/257299
            synchronized (lock) {
                // @Blocking
                lock.wait();
            }
        }
    }
    final AsyncResult<T> result = resultRef.get();
    @Nullable
    final Throwable t = result.cause();
    if (null != t) {
        throw t;
    }
    @Nullable
    final T x = result.result();
    return x;
}
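A hedged usage sketch, reusing the Mongo future from the question (acceptable only during start-up, before the event loop is serving traffic):

try {
    Result result = awaitComplete(dbFut);
    doSomethingWith(result);
} catch (Throwable t) {
    error();
}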
When I run the following snippet, I don't see any backpressure.
public static void main(String[] args) throws InterruptedException {
    MyFileProcessor pro = new MyFileProcessor();
    Timer t = new Timer();
    t.start();

    Disposable x = pro
            .generateFlowable(new File("path\\to\\file.raw"))
            .subscribeOn(Schedulers.io(), false)
            .observeOn(Schedulers.io())
            .map(y -> {
                System.out.println(Thread.currentThread().getName() + " xxx");
                return y;
            })
            .subscribe(onNext -> {
                System.out.println(Thread.currentThread().getName() + " " + new String(onNext));
                Thread.sleep(100);
            }, Throwable::printStackTrace, () -> {
                System.out.println("Done");
                t.end();
                System.out.println(t.getTotalTime());
            });

    Thread.sleep(1000000000);
}
When I run the class above I get alternating lines of
RxCachedThreadScheduler-1 xxx
RxCachedThreadScheduler-1 Line1
....
It's using the same thread.
Now when I move the observeOn to just before the subscribe, I see a bunch of
RxCachedThreadScheduler-1 xxx
Followed by a bunch of
RxCachedThreadScheduler-1 Line1
I am assuming this is backpressure, but the thread used is still the same.
Why am I seeing this behavior?
Why is only one thread being utilized?
There is no operator as such for observeOn to operate on, so why am I seeing this behavior?
[edit]
public Flowable<byte[]> generateFlowable(File file) {
    return Flowable.generate(() -> new BufferedInputStream(new FileInputStream(file)), (bufferedIs, output) -> {
        try {
            byte[] data = getMessageRawData(bufferedIs);
            if (data != null)
                output.onNext(data);
            else
                output.onComplete();
        } catch (Exception e) {
            output.onError(e);
        }
        return bufferedIs;
    }, bufferedIs -> {
        try {
            bufferedIs.close();
        } catch (IOException ex) {
            RxJavaPlugins.onError(ex);
        }
    });
}
Why is only one thread being utilized?
It works correctly because you check the running thread after observeOn, so you are supposed to see the same thread there and below, no matter what happens above it. subscribeOn affects generateFlowable, where, I suppose, you don't print the current thread, and thus you don't see that it runs on a different IO thread.
Now when I move the observeOn to just before the subscribe
There shouldn't be any difference unless something odd happens in generateFlowable.
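A quick way to verify this is to log the current thread both before and after observeOn. A hedged sketch, reusing the names from the question:

pro.generateFlowable(new File("path\\to\\file.raw"))
        .subscribeOn(Schedulers.io(), false)
        .map(y -> {
            // Runs on the subscribeOn thread, i.e. where generateFlowable emits.
            System.out.println(Thread.currentThread().getName() + " before observeOn");
            return y;
        })
        .observeOn(Schedulers.io())
        .map(y -> {
            // Runs on the observeOn thread; everything below sees this thread.
            System.out.println(Thread.currentThread().getName() + " after observeOn");
            return y;
        })
        .subscribe(bytes -> System.out.println(new String(bytes)));

If the two printed names differ, subscribeOn and observeOn are both doing their job; the original snippet simply never logged the thread above observeOn.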
For various valid reasons, some jobs in the job store are old and can no longer be recovered. For instance, when the Job class is no longer part of the .NET assemblies after a refactor. I'm wondering how to gracefully catch these problems when the scheduler starts, and then delete the unrecoverable jobs.
When the app starts, I basically do this (abridged):
IScheduler scheduler = <create a scheduler and a jobstore object>
try { scheduler.Start(); } catch {}
try { scheduler.Start(); } catch {}
try { scheduler.Start(); } catch {}
If I call Start() three times, the scheduler eventually starts. The reason I have to do this hacky thing is that Start() throws exceptions for unrecoverable, old jobs:
"Failure occured during job recovery." and "Could not load type 'MyOldClassName' from assembly 'MyAssembly'."
I want to gracefully remove the broken jobs and avoid these exceptions. In my actual code, I log these exceptions.
Is there a better way to do this?
I found one way to do this: running the following before Start() cures the problem.
var jobs = this._scheduler.GetJobKeys(GroupMatcher<JobKey>.AnyGroup());
foreach (var jobKey in jobs)
{
    try
    {
        // Attempt to access the job type. If it fails, then we know it's broken.
        Type t = _scheduler.GetJobDetail(jobKey).JobType;
    }
    catch (JobPersistenceException ex)
    {
        if (ex.InnerException != null)
        {
            if (ex.InnerException.GetType() == typeof(TypeLoadException))
            {
                // The job's class no longer exists, so it can never run again: delete it.
                _scheduler.DeleteJob(jobKey);
            }
        }
        else
        {
            // log this
        }
    }
    catch (Exception ex)
    {
        // log this
    }
}