I have a method deleteFeedTable() which returns a Completable, and when it finishes I want to start another subscription.
What I did was combine the two using the concatWith operator, but this results in a nested subscription, which I'd like to avoid.
disposables.add(
localDataSource.deleteFeedTable()
.doOnComplete(() -> {
preferencesManager.setFeedTableUpdateState(false);
})
.concatWith(new Completable() {
@Override
protected void subscribeActual(CompletableObserver s) {
s.onSubscribe(localDataSource.getLastStoredId()
.flatMap(lastStoredId -> remoteDataSource.getFeed(lastStoredId))
.doOnNext(feedItemList -> localDataSource.saveFeed(feedItemList))
.map(feedItemList -> {
Timber.i("MESA STO MAP");
List<Feed> feedList = new ArrayList<>();
for (FeedItem feedItem : feedItemList) {
feedList.add(mapper.from(feedItem));
}
downloadImageUseCase.downloadPhotos(feedList);
return feedList;
})
.subscribe());
}
})
.subscribeOn(schedulerProvider.io())
.observeOn(schedulerProvider.mainThread())
.subscribe(() -> {}, throwable -> Log.i("THROW", "loadData ", throwable)));
Is there a way I can avoid the nested subscription? Or is there another way to add it to the disposables variable so I can clear the subscription later?
Use andThen:
disposables.add(
localDataSource.deleteFeedTable()
.doOnComplete(() -> {
preferencesManager.setFeedTableUpdateState(false);
})
.andThen(
localDataSource.getLastStoredId()
.flatMap(lastStoredId -> remoteDataSource.getFeed(lastStoredId))
.doOnNext(feedItemList -> localDataSource.saveFeed(feedItemList))
.map(feedItemList -> {
Timber.i("MESA STO MAP");
List<Feed> feedList = new ArrayList<>();
for (FeedItem feedItem : feedItemList) {
feedList.add(mapper.from(feedItem));
}
downloadImageUseCase.downloadPhotos(feedList);
return feedList;
})
)
.subscribeOn(schedulerProvider.io())
.observeOn(schedulerProvider.mainThread())
.subscribe(() -> {}, throwable -> Log.i("THROW", "loadData ", throwable)));
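This also addresses the second part of the question: with andThen the whole flow is one chain, so the single Disposable returned by subscribe() (the one added to disposables) cancels both the table delete and the feed reload when cleared. A minimal sketch using the same sources:

// One chain, one Disposable.
Disposable d = localDataSource.deleteFeedTable()
        .andThen(localDataSource.getLastStoredId())  // subscribed only after the delete completes
        .subscribe();
disposables.add(d);

// Later (e.g. when the screen is destroyed) this cancels whichever stage is still running.
disposables.clear();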
I intend to share data aggregated by one stream with another one to reduce (re)processing time when restarting services or rebuilding aggregates.
This stream creates the store that the changelog topic belongs to:
@Bean
fun wirtschaftseinheiten() = Consumer<KStream<String, WirtschaftseinheitAggregat>> {
it.toTable(Materialized.`as`(wirtschaftseinheitTableStoreSupplier))
}
And this is how I join the changelog topic:
fun KStream<String, ProjektEvent>.leftJoin(wirtschaftseinheiten: KTable<String, WirtschaftseinheitAggregat>): KTable<String, ProjektAggregat> =
mapValues { _, v -> ProjektAggregat(projekt = v, projektErstelltAm = v.metaInfo.createdAt) }
.groupByKey()
// take the earliest date which should be from event with ACTION = CREATE_REQUEST
.reduce { prev, next -> if (next.projektErstelltAm?.isAfter(prev.projektErstelltAm) == true) next.copy(projektErstelltAm = prev.projektErstelltAm) else next }
.toStream()
.toTable(Materialized.`as`(preliminaryProjektStoreSupplier))
.leftJoin(
wirtschaftseinheiten,
{ projektAggregat -> projektAggregat.projekt?.projekt?.technischerPlatz?.take(7) },
{ projektAggregat, wirtschaftseinheit ->
if (wirtschaftseinheit != null) {
projektAggregat + wirtschaftseinheit
} else {
logger().error("No wirtschaftseinheit found for $projektAggregat")
projektAggregat
}
},
Materialized.`as`(projektWirtschaftseinheitJoinStoreSupplier)
)
but unfortunately no match will be found as the right side is always null.
If I join the topic directly, it of course works, but due to migrations I have to rebuild topics, which also means re-consuming the topic declared in wirtschaftseinheitTableStoreSupplier, and this is time-consuming.
So, a general question: is this a feasible approach? If not, is there a better one?
Switching from KTable to GlobalKTable solved the issue:
fun KStream<String, ProjektEvent>.leftJoin(wirtschaftseinheiten: GlobalKTable<String, WirtschaftseinheitAggregat>): KTable<String, ProjektAggregat> =
mapValues { _, v -> ProjektAggregat(projekt = v, projektErstelltAm = v.metaInfo.createdAt) }
.groupByKey()
// take the earliest date which should be from event with ACTION = CREATE_REQUEST
.reduce(
{ current, next ->
if (next.projektErstelltAm?.isAfter(current.projektErstelltAm) == true)
next.copy(projektErstelltAm = current.projektErstelltAm)
else
next
},
Materialized.`as`(preliminaryProjektStoreSupplier)
)
.toStream()
.leftJoin(
wirtschaftseinheiten,
{ _, projektAggregat -> projektAggregat.projekt?.projekt?.technischerPlatz?.take(7) },
{ projektAggregat, wirtschaftseinheit ->
if (wirtschaftseinheit != null) {
projektAggregat + wirtschaftseinheit
} else {
logger().error("No wirtschaftseinheit found for $projektAggregat")
projektAggregat
}
},
)
.toTable(Materialized.`as`(projektWirtschaftseinheitJoinStoreSupplier))
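For context (not part of the original setup): a GlobalKTable is fully replicated and bootstrapped on every application instance, and a KStream-GlobalKTable join resolves the extracted key against that local copy without any co-partitioning or repartitioning requirement, which is likely why the lookup now finds a match. Outside the binder-style Consumer bean used above, it would be declared roughly like this plain Kafka Streams (Java) sketch; the topic, store, and serde names are assumptions:

StreamsBuilder builder = new StreamsBuilder();

// Fully replicated lookup table: every instance holds the complete contents,
// so the stream-side key extractor can resolve any key locally.
GlobalKTable<String, WirtschaftseinheitAggregat> wirtschaftseinheiten =
        builder.globalTable(
                "wirtschaftseinheiten",                                  // assumed topic name
                Consumed.with(Serdes.String(), wirtschaftseinheitSerde), // assumed serde
                Materialized.as("wirtschaftseinheit-global-store"));     // assumed store name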
What I am trying to accomplish is to return a simple Mono<Response>.
I am calling different backend APIs in the method detailsHandler.fetchDetailsValue.
Since this is a synchronous, blocking call, I am wrapping it in Mono.fromCallable, as suggested in the documentation.
But I am facing this error when compiling:
error: local variables referenced from a lambda expression must be final or effectively final
Inside the .subscribe lambda I am trying to populate a Response object that is declared outside the lambda. Since I need the object returned from the fetchDetailsValue method once the subscription produces a value, how can I return this response object?
Please correct me if I am doing something wrong below and suggest how to fix it. I appreciate any input. Thanks!
Below is the sample code:
@Override
public Mono<Response> getDetails(Mono<RequestDO> requestDO) {
return requestDO.flatMap(
request -> {
Response response = new Response();
Mono<List<Object>> optionalMono = Mono.fromCallable(() -> {
return detailsHandler.fetchDetailsValue(request);
});
optionalMono.subscribeOn(Schedulers.boundedElastic())
.subscribe(result -> {
Cat1 cat1Object = null;
Cat2 cat2Object = null;
for(Object obj : result) {
if (obj instanceof Cat1) {
cat1Object = (Cat1) obj;
response.addResponseObj(cat1Object); // error: local variables referenced from a lambda expression must be final or effectively final
}
if (obj instanceof Cat2) {
cat2Object = (Cat2) obj;
response.addResponseObj(cat2Object); // error: local variables referenced from a lambda expression must be final or effectively final
}
}
});
return Mono.just(response);
});
}
I then tried declaring the Response object inside the subscribe lambda and returning it as the value is received, but then I get the error: Void methods cannot return a value.
Below is the code:
@Override
public Mono<Response> getDetails(Mono<RequestDO> requestDO) {
return requestDO.flatMap(
request -> {
Mono<List<Object>> optionalMono = Mono.fromCallable(() -> {
return detailsHandler.fetchDetailsValue(request);
});
optionalMono.subscribeOn(Schedulers.boundedElastic())
.subscribe(result -> {
Response response = new Response(); // Added this inside subscribe lambda. But now getting - Void methods cannot return a value
Cat1 cat1Object = null;
Cat2 cat2Object = null;
for(Object obj : result) {
if (obj instanceof Cat1) {
cat1Object = (Cat1) obj;
response.addResponseObj(cat1Object);
}
if (obj instanceof Cat2) {
cat2Object = (Cat2) obj;
response.addResponseObj(cat2Object);
}
}
return Mono.just(response); // Added this inside subscribe lambda. But now getting - Void methods cannot return a value
});
});
}
UPDATE:
When I try it like below, I get errors. Please correct me if I am doing anything wrong.
public Mono<Response> getDetails(Mono<RequestDO> requestDO) {
return requestDO
.flatMap(request -> Mono.fromCallable(() -> detailsHandler.fetchDetailsValue(request)))
.map(result -> {
Response response = new Response();
for (Object obj : result) {
if (obj instanceof Cat1) {
response.addResponseObj((Cat1) obj);
}
if (obj instanceof Cat2) {
response.addResponseObj((Cat2) obj);
}
}
return response;
})
.map(result1 -> {
Response response = resultnew;
requestDO.flatMap(request -> Mono.fromCallable(() -> detailsHandler.fetchAdditionalValue(request, response)))
.map(result2 -> {
return result2;
});
}
You should not call subscribe inside your Reactor pipeline. subscribe should be considered a terminal operation that starts the pipeline asynchronously at an unknown time in the future, and it should only serve to connect the pipeline to some other part of your system.
What you want is to transform your List<Object> into a new Response using a simple synchronous function; the map operator is made for exactly this:
public Mono<Response> getDetails(Mono<RequestDO> requestDO) {
return requestDO
.flatMap(request -> Mono.fromCallable(() -> detailsHandler.fetchDetailsValue(request)))
.map(result -> {
Response response = new Response();
for (Object obj : result) {
if (obj instanceof Cat1) {
response.addResponseObj((Cat1) obj);
}
if (obj instanceof Cat2) {
response.addResponseObj((Cat2) obj);
}
}
return response;
});
}
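As a side note on the "do not subscribe yourself" point: if this sits behind Spring WebFlux (an assumption, the question does not say so), the Mono<Response> is simply returned from the controller and the framework subscribes to it. A hypothetical handler, with detailsService standing in for whatever class exposes getDetails:

@PostMapping("/details")
public Mono<Response> details(@RequestBody Mono<RequestDO> requestDO) {
    // No manual subscribe: WebFlux subscribes when it writes the HTTP response.
    return detailsService.getDetails(requestDO);
}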
Update
For your updated question, you want to use both request and response to call another Mono. You can do this by first pulling the map inside the flatMap and then adding another flatMap after it:
public Mono<Response> getDetails(Mono<RequestDO> requestDO) {
return requestDO
.flatMap(request -> Mono.fromCallable(() -> detailsHandler.fetchDetailsValue(request))
.map(result -> {
Response response = new Response();
for (Object obj : result) {
if (obj instanceof Cat1) {
response.addResponseObj((Cat1) obj);
}
if (obj instanceof Cat2) {
response.addResponseObj((Cat2) obj);
}
}
return response;
})
.flatMap(response -> Mono.fromCallable(() -> detailsHandler.fetchAdditionalValue(request, response))));
}
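One optional refinement, not part of the original question: since fetchDetailsValue is blocking (and fetchAdditionalValue presumably is too), each Mono.fromCallable can be moved onto Schedulers.boundedElastic with subscribeOn, so the blocking work never runs on an event-loop thread. A sketch, assuming as above that fetchAdditionalValue returns the final Response:

public Mono<Response> getDetails(Mono<RequestDO> requestDO) {
    return requestDO
        .flatMap(request -> Mono
            .fromCallable(() -> detailsHandler.fetchDetailsValue(request))
            .subscribeOn(Schedulers.boundedElastic())   // offload the blocking call
            .map(result -> {
                Response response = new Response();
                for (Object obj : result) {
                    if (obj instanceof Cat1) {
                        response.addResponseObj((Cat1) obj);
                    }
                    if (obj instanceof Cat2) {
                        response.addResponseObj((Cat2) obj);
                    }
                }
                return response;
            })
            .flatMap(response -> Mono
                .fromCallable(() -> detailsHandler.fetchAdditionalValue(request, response))
                .subscribeOn(Schedulers.boundedElastic())));
}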
I need some help passing a parameter from one compose to another. I want to pass the labelParameters from the second compose into the last compose, as shown in the code below.
public Future<JsonArray> startTest(int jobID, RoutingContext context) {
LOG.info("-----INside startTest() method-----");
return jobDao.getJob(jobID, context)
.compose(job -> productDao.getLabelParameters(job, context))
.compose(labelParameters -> jobDao.getJobParentChildForPrint(jobID, context, true, labelParameters))
.compose(parentChildSerials -> {
LOG.debug(" Future Returned Job Parent-Child to Print: ");
prepareSerialForPrint(parentChildSerials, labelParameters); //Pass Here
return Future.succeededFuture(parentChildSerials);
})
.onFailure(error -> {
LOG.debug("startTest() Failed: ", error);
})
.onSuccess(server -> {
LOG.debug("Finished startTest!!");
LOG.debug(server.encodePrettily());
});
}
You can create a context object that has setters/getters for the data that is passed around in the Futures.
compose guarantees serial execution of your Futures, so you can set the results and assume that they are present in your context object in the next compose section. Example:
public Future<JsonArray> startTest(int jobID, RoutingContext context) {
MyTestContext myTestCtx = new MyTestContext();
return jobDao.getJob(jobID, context)
.compose(job -> {
myTestCtx.setJob(job);
return productDao.getLabelParameters(job, context);
})
.compose(labelParameters -> {
myTestCtx.setLabelParameters(labelParameters);
return jobDao.getJobParentChildForPrint(jobID, context, true, labelParameters);
})
.compose(parentChildSerials -> {
var labelParameters = myTestCtx.getLabelParameters();
prepareSerialForPrint(parentChildSerials, labelParameters); //Pass Here
return Future.succeededFuture(parentChildSerials);
})
.onSuccess(server -> LOG.debug("Finished startTest!!"));
}
You can also make use of Java records for this.
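For instance, a record can replace the mutable context object, bundling the values that must travel together through the chain. A sketch with assumed field types (the real job and label parameter types are not shown in the question):

record PrintContext(JsonObject job, JsonArray labelParameters) { }

public Future<JsonArray> startTest(int jobID, RoutingContext context) {
    return jobDao.getJob(jobID, context)
            .compose(job -> productDao.getLabelParameters(job, context)
                    .map(labelParameters -> new PrintContext(job, labelParameters)))
            .compose(ctx -> jobDao.getJobParentChildForPrint(jobID, context, true, ctx.labelParameters())
                    .map(parentChildSerials -> {
                        prepareSerialForPrint(parentChildSerials, ctx.labelParameters());
                        return parentChildSerials;
                    }));
}

Records require Java 16+; on older versions a small immutable class works the same way.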
Alternatively you can nest the compose parts as follows:
public Future<JsonArray> startTest(int jobID, RoutingContext context) {
return jobDao.getJob(jobID, context)
.compose(job -> productDao.getLabelParameters(job, context))
.compose(labelParameters -> {
return jobDao.getJobParentChildForPrint(jobID, context, true, labelParameters)
.compose(parentChildSerials -> {
LOG.debug(" Future Returned Job Parent-Child to Print: ");
prepareSerialForPrint(parentChildSerials, labelParameters); //Pass Here
return Future.succeededFuture(parentChildSerials);
});
})
.onFailure(error -> LOG.debug("startTest() Failed: ", error))
.onSuccess(server -> LOG.debug(server.encodePrettily()));
}
How do I implement an Observable.concatEagerDelayError or an equivalent in RxJava2/RxKotlin2?
There is:
Observable.concatEager
Observable.concatDelayError
But not:
Observable.concatEagerDelayError
What I have:
fun getAll(): Observable<List<User>> = Observable.concatArrayDelayError(
// from db
userDAO
.selectAll()
.subscribeOn(ioScheduler),
// from api
userAPI
.getAll()
.doOnNext { lstUser -> Completable.concatArray(
userDAO.deleteAll().subscribeOn(ioScheduler),
userDAO.save(lstUser).subscribeOn(ioScheduler)
) }
.subscribeOn(ioScheduler)
)
I want the same behaviour, but eagerly, for selectAll() and getAll(), because there is no reason to wait for the db before launching the network call.
Use concatMapEagerDelayError:
Observable.fromIterable(sources)
.concatMapEagerDelayError(v -> v, true);
Observable.fromArray(source1, source2, source3)
.concatMapEagerDelayError(v -> v, true);
JavaDoc.
Edit:
fun getAll(): Observable<List<User>> = Observable.fromArray(
// from db
userDAO
.selectAll()
.subscribeOn(ioScheduler),
// from api
userAPI
.getAll()
// --- this makes no sense by the way -------------------
.doOnNext { lstUser -> Completable.concatArray(
userDAO.deleteAll().subscribeOn(ioScheduler),
userDAO.save(lstUser).subscribeOn(ioScheduler)
)}
// ------------------------------------------------------
.subscribeOn(ioScheduler)
)
.concatMapEagerDelayError({ v -> v }, true)
I have an API endpoint that can return a different result count based on request parameters. Parameters are page, per_page, query, and others.
fun getItems(params : Map<String, String>) : Single<ItemsResponse>
data class ItemsResponse(
val hasMore : Boolean,
val items : List<Items>
)
The API is not trustworthy and could return fewer items than per_page. I want to ensure that I always get the result count I need and cache the remainder for the next request cycle.
For example, something like this:
val page : Int = 1
fun fetchItems(requestedItems : Int = 20) : Single<List<Items>> {
...
.map { buildParams(page, perPage, query) }
.flatMap { api.getItems(it) }
.doOnSuccess { page++ }
.buffer(requestedItems)
}
fun buildParams(page: Int, perPage: Int, query : String) : Map<String, String> {
...
}
Example scenario:
Caller requests 20 items for the first time.
Call to api.getItems() with page: 1, per_page is always 20.
Call returns 16 items
Call to api.getItems() with page: 2
Call return 19 items
20 items were returned to caller and 15 remaining items were cached for next caller request.
Caller requests 20 items for 2nd time.
Call to api.getItems() with page: 3
Call returns 12 items
20 items were returned to caller (15 older ones and 5 from last response) and 7 remaining items were cached for next caller requests.
And so on and so forth.
This looks like the producer-consumer pattern, but is it doable in RxJava2?
Edit: based on the additional info
Requires: RxJava 2 Extensions library: compile "com.github.akarnokd:rxjava2-extensions:0.17.0"
import hu.akarnokd.rxjava2.expr.StatementObservable
import io.reactivex.Observable
import io.reactivex.functions.BooleanSupplier
import io.reactivex.subjects.PublishSubject
import java.util.concurrent.Callable
import java.util.concurrent.ConcurrentLinkedQueue
import java.util.concurrent.ThreadLocalRandom
var counter = 0;
fun service() : Observable<String> {
return Observable.defer(Callable {
val n = ThreadLocalRandom.current().nextInt(21)
val c = counter++;
Observable.range(1, n).map({ v -> "" + c + " | " + v })
})
}
fun getPage(pageSignal : Observable<Int>, pageSize: Int) : Observable<List<String>> {
return Observable.defer(Callable {
val queue = ConcurrentLinkedQueue<String>()
pageSignal.concatMap({ _ ->
StatementObservable.whileDo(
service()
.toList()
.doOnSuccess({ v -> v.forEach { queue.offer(it) }})
.toObservable()
, BooleanSupplier { queue.size < pageSize })
.ignoreElements()
.andThen(
Observable.range(1, pageSize)
.concatMap({ _ ->
val o = queue.poll();
if (o == null) {
Observable.empty()
} else {
Observable.just(o)
}
})
.toList()
.toObservable()
)
})
})
}
fun main(args: Array<String>) {
val pages = PublishSubject.create<Int>();
getPage(pages, 20)
.subscribe({ println(it) }, { it.printStackTrace() })
pages.onNext(1)
pages.onNext(2)
}