I'm already querying some external resource with Flux.using(). Now I want to implement a kind of optimistic locking: read some state before the query starts to execute and check whether it was updated after the query has finished. If so, throw an exception to abort the HTTP request handling.
I've achieved this by using doOnComplete:
final AtomicReference<String> initialState = new AtomicReference<>();
return Flux.just("some", "constant", "data")
        .doOnComplete(() -> initialState.set(getState()))
        .concatWith(Flux.using(...)) // actual data query
        .doOnComplete(() -> {
            if (!initialState.get().equals(getState())) throw new RuntimeException();
        })
        .concatWithValues("another", "constant", "data");
My questions:
Is it correct? Is it guaranteed that the first doOnComplete lambda finishes before the Flux.using() source starts, and that the second doOnComplete lambda executes strictly after it completes?
Does a more elegant solution exist?
The first doOnComplete is executed after Flux.just("some", "constant", "data") has emitted all of its elements, and the second one after the publisher passed to concatWith completes successfully. This works because both publishers have a finite number of elements.
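A quick way to convince yourself of the ordering is to log from each callback; a minimal sketch (concatWith only subscribes to its argument after the upstream completes):

Flux.just("a", "b")
        .doOnComplete(() -> System.out.println("first doOnComplete"))  // after "a", "b"
        .concatWith(Flux.just("c"))
        .doOnComplete(() -> System.out.println("second doOnComplete")) // after "c"
        .subscribe(System.out::println);
// prints: a, b, first doOnComplete, c, second doOnComplete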
With the proposed approach, however, the pre-/postconditions of a particular operation are handled outside of that operation, at a higher level. In other words, the condition check that belongs to the operation leaks into the flux definition.
Suggestion: push the condition check down into the operation:
var otherElements = Flux.using( // actual data query
        () -> "other",
        x -> {
            var initialState = getState();
            return Flux.just(x).doOnComplete(() -> {
                if (!initialState.equals(getState())) throw new IllegalStateException();
            });
        },
        x -> { } // no cleanup needed for this resource
);
Flux.just("some", "constant", "data")
.concatWith(otherElements)
.concatWith(Mono.just("another")) // "constant", "data" ...
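If you'd rather not throw from a side-effect callback, the same postcondition can also be expressed as an explicit error source appended to the query. A minimal sketch, assuming a hypothetical actualDataQuery() that wraps the real Flux.using(...) call and the same getState() helper:

var guarded = Flux.using(
        () -> getState(),            // capture the initial state as the "resource"
        initial -> actualDataQuery() // hypothetical wrapper around the real query
                .concatWith(Flux.defer(() -> initial.equals(getState())
                        ? Flux.empty()
                        : Flux.error(new IllegalStateException("state changed during query")))),
        initial -> { });             // nothing to clean up

This keeps the capture and the check in one place, and the failure arrives downstream as a regular onError signal.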
Related
I have a Single flow organized like this:
getSomething() // returns Single<>
.flatMap(something -> {
// various things
return Single.defer( () -> {
// various other things
return Single.<SomeType>create(emitter -> {
// some more stuff
someCallbackApi(result -> {
if (result.isError()) {
emitter.onError( result.getCause() );
} else {
// guaranteed non-null data
emitter.onSuccess( result.getData() ); // this generates NoSuchElement
}
});
});
})
.retryWhen( ... )
.flatMap( data -> handle(data) )
.retryWhen( ... );
})
.retryWhen( ... )
.onErrorResumeNext(error -> process(error))
.subscribe(data -> handleSuccess(data), error -> handleError(error));
In test cases, the callback-API Single successfully retries a number of times (determined by the test case), and each time, on the last retry, the call to emitter.onSuccess() generates the exception below. What is going on? I haven't been able to restructure or change the downstream operators or subscribers to avoid the problem.
java.util.NoSuchElementException: null
at io.reactivex.internal.operators.flowable.FlowableSingleSingle$SingleElementSubscriber.onComplete(FlowableSingleSingle.java:116)
at io.reactivex.subscribers.SerializedSubscriber.onComplete(SerializedSubscriber.java:168)
at io.reactivex.internal.operators.flowable.FlowableRepeatWhen$WhenReceiver.onComplete(FlowableRepeatWhen.java:118)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.drainLoop(FlowableFlatMap.java:426)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.drain(FlowableFlatMap.java:366)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.onComplete(FlowableFlatMap.java:338)
at io.reactivex.internal.operators.flowable.FlowableZip$ZipCoordinator.drain(FlowableZip.java:210)
at io.reactivex.internal.operators.flowable.FlowableZip$ZipSubscriber.onNext(FlowableZip.java:381)
at io.reactivex.processors.UnicastProcessor.drainFused(UnicastProcessor.java:363)
at io.reactivex.processors.UnicastProcessor.drain(UnicastProcessor.java:396)
at io.reactivex.processors.UnicastProcessor.onNext(UnicastProcessor.java:458)
at io.reactivex.processors.SerializedProcessor.onNext(SerializedProcessor.java:103)
at io.reactivex.internal.operators.flowable.FlowableRepeatWhen$WhenSourceSubscriber.again(FlowableRepeatWhen.java:171)
at io.reactivex.internal.operators.flowable.FlowableRetryWhen$RetryWhenSubscriber.onError(FlowableRetryWhen.java:76)
at io.reactivex.internal.operators.single.SingleToFlowable$SingleToFlowableObserver.onError(SingleToFlowable.java:67)
at io.reactivex.internal.operators.single.SingleFlatMap$SingleFlatMapCallback$FlatMapSingleObserver.onError(SingleFlatMap.java:116)
at io.reactivex.internal.operators.flowable.FlowableSingleSingle$SingleElementSubscriber.onError(FlowableSingleSingle.java:97)
at io.reactivex.subscribers.SerializedSubscriber.onError(SerializedSubscriber.java:142)
at io.reactivex.internal.operators.flowable.FlowableRepeatWhen$WhenReceiver.onError(FlowableRepeatWhen.java:112)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.checkTerminate(FlowableFlatMap.java:567)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.drainLoop(FlowableFlatMap.java:374)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.drain(FlowableFlatMap.java:366)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.innerError(FlowableFlatMap.java:606)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$InnerSubscriber.onError(FlowableFlatMap.java:672)
at io.reactivex.internal.subscriptions.EmptySubscription.error(EmptySubscription.java:55)
at io.reactivex.internal.operators.flowable.FlowableError.subscribeActual(FlowableError.java:40)
at io.reactivex.Flowable.subscribe(Flowable.java:14918)
at io.reactivex.Flowable.subscribe(Flowable.java:14865)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.onNext(FlowableFlatMap.java:163)
at io.reactivex.internal.operators.flowable.FlowableZip$ZipCoordinator.drain(FlowableZip.java:249)
at io.reactivex.internal.operators.flowable.FlowableZip$ZipSubscriber.onNext(FlowableZip.java:381)
at io.reactivex.processors.UnicastProcessor.drainFused(UnicastProcessor.java:363)
at io.reactivex.processors.UnicastProcessor.drain(UnicastProcessor.java:396)
at io.reactivex.processors.UnicastProcessor.onNext(UnicastProcessor.java:458)
at io.reactivex.processors.SerializedProcessor.onNext(SerializedProcessor.java:103)
at io.reactivex.internal.operators.flowable.FlowableRepeatWhen$WhenSourceSubscriber.again(FlowableRepeatWhen.java:171)
at io.reactivex.internal.operators.flowable.FlowableRetryWhen$RetryWhenSubscriber.onError(FlowableRetryWhen.java:76)
at io.reactivex.internal.operators.single.SingleToFlowable$SingleToFlowableObserver.onError(SingleToFlowable.java:67)
at io.reactivex.internal.operators.single.SingleFlatMap$SingleFlatMapCallback$FlatMapSingleObserver.onError(SingleFlatMap.java:116)
at io.reactivex.internal.disposables.EmptyDisposable.error(EmptyDisposable.java:78)
at io.reactivex.internal.operators.single.SingleError.subscribeActual(SingleError.java:42)
at io.reactivex.Single.subscribe(Single.java:3603)
at io.reactivex.internal.operators.single.SingleFlatMap$SingleFlatMapCallback.onSuccess(SingleFlatMap.java:84)
at io.reactivex.internal.operators.flowable.FlowableSingleSingle$SingleElementSubscriber.onComplete(FlowableSingleSingle.java:114)
at io.reactivex.subscribers.SerializedSubscriber.onComplete(SerializedSubscriber.java:168)
at io.reactivex.internal.operators.flowable.FlowableRetryWhen$RetryWhenSubscriber.onComplete(FlowableRetryWhen.java:82)
at io.reactivex.internal.subscriptions.DeferredScalarSubscription.complete(DeferredScalarSubscription.java:134)
at io.reactivex.internal.operators.single.SingleToFlowable$SingleToFlowableObserver.onSuccess(SingleToFlowable.java:62)
at io.reactivex.internal.operators.single.SingleCreate$Emitter.onSuccess(SingleCreate.java:67)
Solved:
Many thanks to @dano for pointing out the retryWhen behavior when used with Single. In this case, the outermost retryWhen operator had a bad terminating condition, roughly like:
.retryWhen(errors -> errors.zipWith(Flowable.range(1, maxRetries), ...)
    .flatMap(zipped -> {
        if (zipped.retryCount() <= maxRetries) {
            return Flowable.just(0L);
        }
        return Flowable.error(new Exception());
    }))
Flowable.range() completes when it has generated the last number, which causes the Single to emit NoSuchElementException. Bumping the count argument to Flowable.range() by one is enough to fix the problem:
.retryWhen(errors -> errors.zipWith(Flowable.range(1, maxRetries + 1), ...)
    .flatMap(zipped -> {
        if (zipped.retryCount() <= maxRetries) {
            return Flowable.just(0L);
        }
        return Flowable.error(new Exception());
    }))
This is happening because of the way you implemented the callback you passed to retryWhen. The retryWhen documentation states (emphasis mine):
Re-subscribes to the current Single if and when the Publisher returned
by the handler function signals a value.
If the Publisher signals an onComplete, the resulting Single will
signal a NoSuchElementException.
One of the Flowable instances you're returning inside of the calls to retryWhen is emitting onComplete, which leads to the NoSuchElementException.
Here's a very simple example that produces the same error:
Single.error(new Exception("hey"))
.retryWhen(e -> Flowable.just(1))
.subscribe(System.out::println, e -> e.printStackTrace());
The stacktrace this produces starts with this, same as yours:
java.util.NoSuchElementException
at io.reactivex.internal.operators.flowable.FlowableSingleSingle$SingleElementSubscriber.onComplete(FlowableSingleSingle.java:116)
at io.reactivex.subscribers.SerializedSubscriber.onComplete(SerializedSubscriber.java:168)
at io.reactivex.internal.operators.flowable.FlowableRepeatWhen$WhenReceiver.onComplete(FlowableRepeatWhen.java:118)
You don't include any of your code from inside the retryWhen calls, so I can't say exactly what you did wrong, but generally you want to chain whatever you do to the Flowable that is passed in. So my example above would look like this, if we really wanted to retry forever:
Single.error(new Exception("hey"))
.retryWhen(e -> e.flatMap(ign -> Flowable.just(1)))
.subscribe(System.out::println, e -> e.printStackTrace());
Could you please advise how I can stop sending to my third Kafka topic when control reaches the catch block? Currently the message is sent both to the error topic and to the topic it should go to under normal processing. A snippet of the code is below:
@Component
public class Abc {
    private final StreamBridge streamBridge;

    public Abc(StreamBridge streamBridge) {
        this.streamBridge = streamBridge;
    }

    @Bean
    public Function<KStream<String, KafkaClass>, KStream<String, KafkaClass>> hiProcess() {
        return input -> input.map((key, value) -> {
            KafkaClass stream = null;
            try {
                stream = processFunction();
            }
            catch (Exception e) {
                Message<KafkaClass> mess = MessageBuilder.withPayload(value).build();
                streamBridge.send("errProcess-out-0", mess);
            }
            return new KeyValue<>(key, stream);
        });
    }
}
This can be implemented using the following pattern:
KafkaClass stream;

return input -> input
    .branch((k, v) -> {
            try {
                stream = processFunction();
                return true;
            }
            catch (Exception e) {
                Message<KafkaClass> mess = MessageBuilder.withPayload(v).build();
                streamBridge.send("errProcess-out-0", mess);
                return false;
            }
        },
        (k, v) -> true)[0]
    .map((k, v) -> new KeyValue<>(k, stream));
Here, we are using the branching feature (API) of KStream to split your input into two paths: the normal flow and the one causing the errors. This is accomplished by providing two filters to the branch method call. The first filter is the normal flow, in which you call the processFunction method and get a response back. If there is no exception, the filter returns true, and the result of the branch operation is captured in the first element of the output array ([0]), which is processed downstream by the map operation that sends the final result to the outbound topic.
On the other hand, if processFunction throws an exception, the filter sends whatever is necessary to the error topic using StreamBridge and returns false. Since the downstream map operation is only applied to the first element of the branching array ([0]), nothing is sent outbound. When the first filter returns false, the record goes to the second filter, which always returns true; this is a no-op filter whose results are completely ignored.
One downside of this particular implementation is that you need to store the response from processFunction in an instance field and mutate it on each incoming KStream record so that you can access its value in the final map call where you send the output. However, for this particular use case, this may not be an issue. A sketch of one way to avoid that shared field follows.
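A hedged sketch of one alternative, assuming a small hypothetical Outcome holder type: run processFunction once per record in mapValues, keep both the result and the original value, and branch on the outcome instead of mutating shared state:

// Hypothetical holder: result is null when processing failed.
class Outcome {
    final KafkaClass result;   // null on failure
    final KafkaClass original; // incoming value, kept for the error topic
    Outcome(KafkaClass result, KafkaClass original) {
        this.result = result;
        this.original = original;
    }
}

return input -> {
    KStream<String, Outcome>[] branches = input
            .mapValues(value -> {
                try {
                    return new Outcome(processFunction(), value);
                } catch (Exception e) {
                    return new Outcome(null, value);
                }
            })
            .branch((k, o) -> o.result != null, // success path
                    (k, o) -> true);            // failure path

    // Failure branch: forward the original record to the error topic as a side effect.
    branches[1].foreach((k, o) ->
            streamBridge.send("errProcess-out-0", MessageBuilder.withPayload(o.original).build()));

    // Success branch: carry the processed result downstream.
    return branches[0].mapValues(o -> o.result);
};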
I have an aggregator utility class where I have to join data from more than one Cassandra table. My production code looks like the snippet below, but is not exactly the same.
@Autowired FollowersRepository followersRepository;
@Autowired TopicRepository topicRepository;

@GetMapping("/info")
public Flux<FullDetails> getData() {
    return Flux.create(emitter -> {
        followersRepository.findAll()
            .doOnNext(data -> {
                List<String> all = data.getTopiclist(); // will get the list of topic ids
                List<Alltopics> processedList = new ArrayList<Alltopics>();
                all.forEach(action -> {
                    topicRepository.findById(action) // will get full detail about the topic
                        .doOnSuccess(topic -> {
                            processedList.add(topic);
                            if (processedList.size() >= all.size()) {
                                FullDetails fulldetails = new FullDetails(action, processedList);
                                emitter.next(fulldetails);
                                //emitter.complete();
                            }
                        })
                        .subscribe();
                });
            })
            .doOnComplete(() -> {
                System.out.println("All the data are processed !!!");
                //emitter.complete(); // executes once all the data have been pushed from the database, without waiting for the doOnNext callbacks to complete
            })
            .subscribe();
    });
}
For more details, refer to the code here: CodeLink.
I have tried doOnComplete and doOnFinally on the outer Flux; they do not wait for all the inner non-blocking calls to complete.
I want onComplete to be called only after all the nested Mono/Flux (non-blocking) requests inside the Flux have been processed.
For nested blocking Flux/Mono, the outer Flux's doOnComplete method executes after the inner Flux/Mono completes.
PS: in the example below, I am not able to find where to place emitter.complete(), because doOnComplete() is called before all the inner Monos have completed.
Request body:
[{ "content":"Intro to React and operators", "author":"Josh Long", "name":"Spring WebFlux" },{ "content":"Intro to Flux", "author":"Josh Long", "name":"Spring WebFlux" },{ "content":"Intro to Mono", "author":"Josh Long", "name":"Spring WebFlux" }]
My REST controller:
#PostMapping("/topics")
public Flux<?> loadTopic(#RequestBody Flux<Alltopics> data)
{
return Flux.create(emitter ->{
data
.map(topic -> {
topic.setTopicid(null ==topic.getTopicid() || topic.getTopicid().isEmpty()?UUID.randomUUID().toString():topic.getTopicid());
return topic;
})
.doOnNext(topic -> {
topicRepository.save(topic).doOnSuccess(persistedTopic ->{
emitter.next(persistedTopic);
//emitter.complete();
}).subscribe();
})
.doOnComplete(() -> {
//emitter.complete();
System.out.println(" all the data are processed!!!");
}).subscribe();
});
}
Here are a few rules that you should follow when writing a reactive pipeline:
doOnXYZ operators should never be used for heavy I/O, latency-bound work, or any reactive operation. They should be reserved for "side-effect" operations, such as logging, metrics and so on.
you should never subscribe from within a pipeline or a method that returns a reactive type. Doing so decouples that operation from the main pipeline, meaning there is no guarantee you'll get the expected result at the right time, nor that the complete/error signals will be known to your application (illustrated right after this list).
you should never block from within a pipeline or a method that returns a reactive type. This will create critical issues for your application at runtime.
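To make the second rule concrete with the repository from the question (names assumed), compare:

// Anti-pattern: the inner subscribe() detaches each save from the main pipeline,
// so its completion and error signals never reach the caller.
data.doOnNext(topic -> topicRepository.save(topic).subscribe());

// Preferred: keep the inner publisher inside the pipeline so its signals propagate.
data.flatMap(topic -> topicRepository.save(topic));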
Now because your code snippet is quite convoluted, I'll just give you the general direction to follow with another code snippet.
#GetMapping("/info")
public Flux<FullDetails> getData(){
return followersRepository.findAll()
.flatMap(follower -> {
Mono<List<Alltopics>> topics = topicRepository.findAllById(follower.getTopiclist()).collectList();
return topics.map(topiclist -> new FullDetails(follower.getId(), topiclist));
});
}
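Applying the same rules to the POST endpoint from the question, a minimal sketch (same assumed types and repository) could be:

@PostMapping("/topics")
public Flux<Alltopics> loadTopic(@RequestBody Flux<Alltopics> data) {
    return data
            .map(topic -> {
                if (topic.getTopicid() == null || topic.getTopicid().isEmpty()) {
                    topic.setTopicid(UUID.randomUUID().toString());
                }
                return topic;
            })
            .flatMap(topicRepository::save); // the returned Flux completes only after every save has completed
}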
I have a paged interface. Given a starting point, a request will produce a list of results and a continuation indicator.
I've created an observable that is built by constructing and flat-mapping an observable that reads the page. The result of this observable contains both the data for the page and a value to continue with. I pluck the data and flat-map it to the subscriber, producing a stream of values.
To handle the paging, I've created a subject for the next-page values. It's seeded with an initial value; then each time I receive a response with a valid next page, I push to the pages subject and trigger another read, until there is nothing more to read.
Is there a more idiomatic way of doing this?
function records(start = 'LATEST', limit = 1000) {
let pages = new rx.Subject();
this.connect(start)
.subscribe(page => pages.onNext(page));
let records = pages
.flatMap(page => {
return this.read(page, limit)
.doOnNext(result => {
let next = result.next;
if (next === undefined) {
pages.onCompleted();
} else {
pages.onNext(next);
}
});
})
.pluck('data')
.flatMap(data => data);
return records;
}
That's a reasonable way to do it. It has a couple of potential flaws (that may or may not impact you, depending on your use case):
You provide no way to observe any errors that occur in this.connect(start)
Your observable is effectively hot. If the caller does not immediately subscribe to the observable (perhaps they store it and subscribe later), then they'll miss the completion of this.connect(start) and the observable will appear to never produce anything.
You provide no way to unsubscribe from the initial connect call if the caller changes its mind and unsubscribes early. Not a real big deal, but usually when one constructs an observable, one should try to chain the disposables together so it all cleans up properly if the caller unsubscribes.
Here's a modified version:
It passes errors from this.connect to the observer.
It uses Observable.create to create a cold observable that only starts its business when the caller actually subscribes, so there is no chance of missing the initial page value and stalling the stream.
It combines the this.connect subscription disposable with the overall subscription disposable
Code:
function records(start = 'LATEST', limit = 1000) {
return Rx.Observable.create(observer => {
let pages = new Rx.Subject();
let connectSub = new Rx.SingleAssignmentDisposable();
let resultsSub = new Rx.SingleAssignmentDisposable();
let sub = new Rx.CompositeDisposable(connectSub, resultsSub);
// Make sure we subscribe to pages before we issue this.connect()
// just in case this.connect() finishes synchronously (possible if it caches values or something?)
let results = pages
.flatMap(page => this.read(page, limit))
.doOnNext(r => r.next !== undefined ? pages.onNext(r.next) : pages.onCompleted())
.flatMap(r => r.data);
resultsSub.setDisposable(results.subscribe(observer));
// now query the first page
connectSub.setDisposable(this.connect(start)
.subscribe(p => pages.onNext(p), e => observer.onError(e)));
return sub;
});
}
Note: I've not used the ES6 syntax before, so hopefully I didn't mess anything up here.
I have two methods that both return an IObservable
IObservable<Something[]> QueryLocal();
and
IObservable<Something[]> QueryWeb();
QueryLocal is always successful. QueryWeb is susceptible to both a timeout and possible web errors.
I wish to implement a QueryLocalAndWeb() that calls both and combines their results.
So far I have:
IObservable<Something[]> QueryLocalAndWeb()
{
var a = QueryLocal();
var b = QueryWeb();
var plan = a.And(b).Then((x, y) => x.Concat(y).ToArray());
return Observable.When(plan).Timeout(TimeSpan.FromSeconds(10), a);
}
However, I'm not sure that it handles the case where QueryWeb yields an error.
In the future I might have a QueryWeb2() that also needs to be taken into account.
So, how do I combine the results from a number of IObservables ignoring the ones that throw errors (or time out)?
I guess OnErrorResumeNext should be able to handle this scenario:
From MSDN:
Continues an observable sequence that is terminated normally or by an
exception with the next observable sequence.
IObservable<Something[]> QueryLocalAndWeb()
{
var a = QueryLocal();
var b = QueryWeb().Timeout(TimeSpan.FromSeconds(10));
return Observable.OnErrorResumeNext(b, a);
}
You can then concatenate the emitted arrays by aggregating over the returned observable.
I am assuming that both the local and web queries are cold observables, i.e. they start producing values only when someone subscribes to them.
How about:
var plan = a.And(b.Catch(Observable.Empty<Something[]>())).Then((x, y) => x.Concat(y).ToArray());