How to iterate through a list with RxJava and perform an initial process on the first item - rx-java2

I am new to RxJava and finding it very useful for network and database processing within my Android applications.
I have two use cases that I cannot implement completely in RxJava.
Use Case 1
Clear down my target database table Table A
Fetch a list of database records from Table B that contain a key field
For each row retrieved from Table B, call a Remote API and persist all the returned data into Table A
The closest I have managed is this code:
final AtomicInteger id = new AtomicInteger(0);
DatabaseController.deleteAll(TableA_DO.class);
DatabaseController.fetchTable_Bs()
.subscribeOn(Schedulers.io())
.toObservable()
.flatMapIterable(b -> b)
.flatMap(b_record -> NetworkController.getTable_A_data(b_record.getKey()))
.flatMap(network -> transformNetwork(id, network, NETWORK_B_MAPPER))
.doOnNext(DatabaseController::persistRealmObjects)
.doOnComplete(onComplete)
.doOnError(onError)
.doAfterTerminate(doAfterTerminate())
.doOnSubscribe(compositeDisposable::add)
.subscribe();
Use Case 2
Clear down my target database table Table X
Clear down my target database table Table Y
Fetch a list of database records from Table Z that contain a key field
For each row retrieved from Table Z, call a Remote API and persist some of the returned data into Table X; the remainder of the data should be persisted into Table Y
I have not managed to create any code for use case 2.
I have a number of questions regarding the use of RxJava for these use cases.
Is it possible to achieve both my use cases in RxJava?
Is it "Best Practice" to combine all these steps into an Rx "Stream"?
UPDATE
I ended up with this POC test code, which seems to work...
I am not sure if it's the optimum solution; however, my API calls return Single and my database operations return Completable, so this feels like the best fit for me.
public class UseCaseOneA {
public static void main(final String[] args) {
login()
.andThen(UseCaseOneA.deleteDatabaseTableA())
.andThen(UseCaseOneA.deleteDatabaseTableB())
.andThen(manufactureRecords())
.flatMapIterable(x -> x)
.flatMapSingle(record -> NetworkController.callApi(record.getPrimaryKey()))
.flatMapSingle(z -> transform(z))
.flatMapCompletable(p -> UseCaseOneA.insertDatabaseTableA(p))
.doOnComplete(() -> System.out.println("ON COMPLETE"))
.doFinally(() -> System.out.println("ON FINALLY"))
.subscribe();
}
private static Single<List<PayloadDO>> transform(final List<RemotePayload> payloads) {
return Single.create(new SingleOnSubscribe<List<PayloadDO>>() {
@Override
public void subscribe(final SingleEmitter<List<PayloadDO>> emitter) throws Exception {
System.out.println("transform - " + payloads.size());
final List<PayloadDO> payloadDOs = new ArrayList<>();
for (final RemotePayload remotePayload : payloads) {
payloadDOs.add(new PayloadDO(remotePayload.getPayload()));
}
emitter.onSuccess(payloadDOs);
}
});
}
private static Observable<List<Record>> manufactureRecords() {
final List<Record> records = new ArrayList<>();
records.add(new Record("111-111-111"));
records.add(new Record("222-222-222"));
records.add(new Record("3333-3333-3333"));
records.add(new Record("44-444-44444-44-4"));
records.add(new Record("5555-55-55-5-55-5555-5555"));
return Observable.just(records);
}
private static Completable deleteDatabaseTableA() {
return Completable.create(new CompletableOnSubscribe() {
@Override
public void subscribe(final CompletableEmitter emitter) throws Exception {
System.out.println("deleteDatabaseTableA");
emitter.onComplete();
}
});
}
private static Completable deleteDatabaseTableB() {
return Completable.create(new CompletableOnSubscribe() {
@Override
public void subscribe(final CompletableEmitter emitter) throws Exception {
System.out.println("deleteDatabaseTableB");
emitter.onComplete();
}
});
}
private static Completable insertDatabaseTableA(final List<PayloadDO> payloadDOs) {
return Completable.create(new CompletableOnSubscribe() {
@Override
public void subscribe(final CompletableEmitter emitter) throws Exception {
System.out.println("insertDatabaseTableA - " + payloadDOs);
emitter.onComplete();
}
});
}
private static Completable login() {
return Completable.complete();
}
}
This code doesn't address all my use case requirements, namely being able to transform the remote payload records into multiple database record types and insert each type into its own specific target database table.
I could just call the Remote API twice to get the same remote data items and transform first into one database type then secondly into the second database type, however that seems wasteful.
Is there an operator in RxJava where I can reuse the output from my API calls and transform it into another database type?

You have to index the items yourself in some manner, for example, via external counting:
Observable.defer(() -> {
AtomicInteger counter = new AtomicInteger();
return DatabaseController.fetchTable_Bs()
.subscribeOn(Schedulers.io())
.toObservable()
.flatMapIterable(b -> b)
.doOnNext(item -> {
if (counter.getAndIncrement() == 0) {
// this is the very first item
} else {
// these are the subsequent items
}
});
});
The defer is necessary to isolate the counter to the inner sequence, so that if the sequence is repeated or retried, each subscription gets a fresh counter.
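Regarding the follow-up question in the update: there is no need to call the remote API twice. One option is to flatMap each API response into a Completable that merges the two inserts, so the same payload feeds both target tables. A minimal sketch built on the POC code above; deleteDatabaseTableX/Y, transformToX/Y and insertTableX/Y are hypothetical helpers standing in for your own mapping and DAO methods:
// Sketch only: deleteDatabaseTableX/Y, transformToX/Y and insertTableX/Y are
// hypothetical helpers modelled on the POC methods above.
deleteDatabaseTableX()
        .andThen(deleteDatabaseTableY())
        .andThen(manufactureRecords())
        .flatMapIterable(records -> records)
        .flatMapSingle(record -> NetworkController.callApi(record.getPrimaryKey()))
        .flatMapCompletable(payloads ->
                Completable.mergeArray(
                        insertTableX(transformToX(payloads)),  // some of the payload goes to Table X
                        insertTableY(transformToY(payloads)))) // the remainder goes to Table Y
        .subscribe(
                () -> System.out.println("ON COMPLETE"),
                Throwable::printStackTrace);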

Related

AspNet Boilerplate Parallel DB Access through Entity Framework from an AppService

We are using ASP.NET Zero and are running into issues with parallel processing from an AppService. We know requests must be transactional, but unfortunately we need to break out to slow running APIs for numerous calls, so we have to do parallel processing.
As expected, we are running into a DbContext concurrency issue on the second database call we make:
System.InvalidOperationException: A second operation started on this context
before a previous operation completed. This is usually caused by different
threads using the same instance of DbContext, however instance members are
not guaranteed to be thread safe. This could also be caused by a nested query
being evaluated on the client, if this is the case rewrite the query avoiding
nested invocations.
We read that a new UOW is required, so we tried using both the method attribute and the explicit UowManager, but neither of the two worked.
We also tried creating instances of the referenced AppServices using the IocResolver, but we are still not able to get a unique DbContext per thread (please see below).
public List<InvoiceDto> CreateInvoices(List<InvoiceTemplateLineItemDto> templateLineItems)
{
List<InvoiceDto> invoices = new InvoiceDto[templateLineItems.Count].ToList();
ConcurrentQueue<Exception> exceptions = new ConcurrentQueue<Exception>();
Parallel.ForEach(templateLineItems, async (templateLineItem) =>
{
try
{
XAppService xAppService = _iocResolver.Resolve<XAppService>();
InvoiceDto invoice = await xAppService
.CreateInvoiceInvoiceItem();
invoices.Insert(templateLineItems.IndexOf(templateLineItem), invoice);
}
catch (Exception e)
{
exceptions.Enqueue(e);
}
});
if (exceptions.Count > 0) throw new AggregateException(exceptions);
return invoices;
}
How can we ensure that a new DbContext is available per thread?
I was able to replicate and resolve the problem with a generic version of ABP. I'm still experiencing the problem in my original solution, which is far more complex. I'll have to do some more digging to determine why it is failing there.
For others that come across this problem, which is exactly the same issue as referenced here, you can simply disable the UnitOfWork through an attribute, as illustrated in the code below.
public class InvoiceAppService : ApplicationService
{
private readonly InvoiceItemAppService _invoiceItemAppService;
public InvoiceAppService(InvoiceItemAppService invoiceItemAppService)
{
_invoiceItemAppService = invoiceItemAppService;
}
// Just add this attribute
[UnitOfWork(IsDisabled = true)]
public InvoiceDto GetInvoice(List<int> invoiceItemIds)
{
_invoiceItemAppService.Initialize();
ConcurrentQueue<InvoiceItemDto> invoiceItems =
new ConcurrentQueue<InvoiceItemDto>();
ConcurrentQueue<Exception> exceptions = new ConcurrentQueue<Exception>();
Parallel.ForEach(invoiceItemIds, (invoiceItemId) =>
{
try
{
InvoiceItemDto invoiceItemDto =
_invoiceItemAppService.CreateAsync(invoiceItemId).Result;
invoiceItems.Enqueue(invoiceItemDto);
}
catch (Exception e)
{
exceptions.Enqueue(e);
}
});
if (exceptions.Count > 0) {
AggregateException ex = new AggregateException(exceptions);
Logger.Error("Unable to get invoice", ex);
throw ex;
}
return new InvoiceDto {
Date = DateTime.Now,
InvoiceItems = invoiceItems.ToArray()
};
}
}
public class InvoiceItemAppService : ApplicationService
{
private readonly IRepository<InvoiceItem> _invoiceItemRepository;
private readonly IRepository<Token> _tokenRepository;
private readonly IRepository<Credential> _credentialRepository;
private Token _token;
private Credential _credential;
public InvoiceItemAppService(IRepository<InvoiceItem> invoiceItemRepository,
IRepository<Token> tokenRepository,
IRepository<Credential> credentialRepository)
{
_invoiceItemRepository = invoiceItemRepository;
_tokenRepository = tokenRepository;
_credentialRepository = credentialRepository;
}
public void Initialize()
{
_token = _tokenRepository.FirstOrDefault(x => x.Id == 1);
_credential = _credentialRepository.FirstOrDefault(x => x.Id == 1);
}
// Create an invoice item using info from an external API and some db records
public async Task<InvoiceItemDto> CreateAsync(int id)
{
// Get db record
InvoiceItem invoiceItem = await _invoiceItemRepository.GetAsync(id);
// Get price
decimal price = await GetPriceAsync(invoiceItem.Description);
return new InvoiceItemDto {
Id = id,
Description = invoiceItem.Description,
Amount = price
};
}
private async Task<decimal> GetPriceAsync(string description)
{
// Simulate a slow API call to get price using description
// We use the token and credentials here in the real deal
await Task.Delay(5000);
return 100.00M;
}
}

How to wait for a finite stream bulk result

I have a stream processing application built with Spring Cloud Stream & Kafka Streams.
This system takes logs from an application, compares them to observations made by another stream processor,
and produces a score; the log stream is then split by the score (above & below some threshold).
The topology:
The issue:
So my problem is how to properly implement the "Log best observation selector" processor.
There is a finite number of observations at the moment the log is processed, but there may be a lot of them.
So I came up with 2 solutions...
1. Group & window the log-scored-observations topic by log id and then reduce to get the highest score. (Problem: scoring all observations may take longer than the window.)
2. Emit a scoring-completed message after every scoring, join with log-relevant-observations, and use the log-scored-observations global table & an interactive query to check that every observation id is in the global table store; when all ids are in the store, map to the observation with the highest score. (Problem: the global table does not appear to work when it is only used for interactive queries.)
What would be the best way to achieve what I'm trying?
I'm hoping not to create any partition, disk or memory bottleneck.
Everything has unique ids and tuples of relevant ids when the value is joined from log & observation.
(Edit: replaced the text description of the topology with a diagram & changed the title)
Solution #2 seems to work, but it emitted warnings because interactive queries take some time to become ready - so I implemented the same solution with a Transformer:
@Slf4j
@Configuration
@RequiredArgsConstructor
@SuppressWarnings("unchecked")
public class LogBestObservationsSelectorProcessorConfig {
private String logScoredObservationsStore = "log-scored-observations-store";
private final Serde<LogEntryRelevantObservationIdTuple> logEntryRelevantObservationIdTupleSerde;
private final Serde<LogRelevantObservationIdsTuple> logRelevantObservationIdsTupleSerde;
private final Serde<LogEntryObservationMatchTuple> logEntryObservationMatchTupleSerde;
private final Serde<LogEntryObservationMatchIdsRelevantObservationsTuple> logEntryObservationMatchIdsRelevantObservationsTupleSerde;
@Bean
public Function<
GlobalKTable<LogEntryRelevantObservationIdTuple, LogEntryObservationMatchTuple>,
Function<
KStream<LogEntryRelevantObservationIdTuple, LogEntryRelevantObservationIdTuple>,
Function<
KTable<String, LogRelevantObservationIdsTuple>,
KStream<String, LogEntryObservationMatchTuple>
>
>
>
logBestObservationSelectorProcessor() {
return (GlobalKTable<LogEntryRelevantObservationIdTuple, LogEntryObservationMatchTuple> logScoredObservationsTable) ->
(KStream<LogEntryRelevantObservationIdTuple, LogEntryRelevantObservationIdTuple> logScoredObservationProcessedStream) ->
(KTable<String, LogRelevantObservationIdsTuple> logRelevantObservationIdsTable) -> {
return logScoredObservationProcessedStream
.selectKey((k, v) -> k.getLogId())
.leftJoin(
logRelevantObservationIdsTable,
LogEntryObservationMatchIdsRelevantObservationsTuple::new,
Joined.with(
Serdes.String(),
logEntryRelevantObservationIdTupleSerde,
logRelevantObservationIdsTupleSerde
)
)
.transform(() -> new LogEntryObservationMatchTransformer(logScoredObservationsStore))
.groupByKey(
Grouped.with(
Serdes.String(),
logEntryObservationMatchTupleSerde
)
)
.reduce(
(match1, match2) -> Double.compare(match1.getScore(), match2.getScore()) != -1 ? match1 : match2,
Materialized.with(
Serdes.String(),
logEntryObservationMatchTupleSerde
)
)
.toStream()
;
};
}
@RequiredArgsConstructor
private static class LogEntryObservationMatchTransformer implements Transformer<String, LogEntryObservationMatchIdsRelevantObservationsTuple, KeyValue<String, LogEntryObservationMatchTuple>> {
private final String stateStoreName;
private ProcessorContext context;
private TimestampedKeyValueStore<LogEntryRelevantObservationIdTuple, LogEntryObservationMatchTuple> kvStore;
@Override
public void init(ProcessorContext context) {
this.context = context;
this.kvStore = (TimestampedKeyValueStore<LogEntryRelevantObservationIdTuple, LogEntryObservationMatchTuple>) context.getStateStore(stateStoreName);
}
@Override
public KeyValue<String, LogEntryObservationMatchTuple> transform(String logId, LogEntryObservationMatchIdsRelevantObservationsTuple value) {
val observationIds = value.getLogEntryRelevantObservationsTuple().getRelevantObservations().getObservationIds();
val allObservationsProcessed = observationIds.stream()
.allMatch((observationId) -> {
val key = LogEntryRelevantObservationIdTuple.newBuilder()
.setLogId(logId)
.setRelevantObservationId(observationId)
.build();
return kvStore.get(key) != null;
});
if (!allObservationsProcessed) {
return null;
}
val observationId = value.getLogEntryRelevantObservationIdTuple().getObservationId();
val key = LogEntryRelevantObservationIdTuple.newBuilder()
.setLogId(logId)
.setRelevantObservationId(observationId)
.build();
ValueAndTimestamp<LogEntryObservationMatchTuple> observationMatchValueAndTimestamp = kvStore.get(key);
return new KeyValue<>(logId, observationMatchValueAndTimestamp.value());
}
@Override
public void close() {
}
}
}

Android Room with RxJava2; onNext() of emitter is not properly triggered

I am switching from AsyncTask to RxJava2 and have some issues testing my code.
I have a Room table of elements that each have a monetary amount. On a user control called DisplayCurrentBudget, a sum of all amounts should be displayed. This number must refresh every time a new element is inserted. I tackled the requirement in two ways, but both produce the same result: my code does not care whether the database is updated; it only updates when the fragment is recreated (onCreateView).
My first attempt was this:
//RxJava2 Test
Observable<ItemS> ItemObservable = Observable.create( emitter -> {
try {
List<ItemS> movies = oStandardModel.getItemsVanilla();
for (ItemS movie : movies) {
emitter.onNext(movie);
}
emitter.onComplete();
} catch (Exception e) {
emitter.onError(e);
}
});
DisposableObserver<ItemS> disposable = ItemObservable.
subscribeOn(Schedulers.io()).
observeOn(AndroidSchedulers.mainThread()).
subscribeWith(new DisposableObserver<ItemS>() {
public List<ItemS> BadFeelingAboutThis = new ArrayList<ItemS>();
@Override
public void onNext(ItemS movie) {
// Access your Movie object here
BadFeelingAboutThis.add(movie);
}
@Override
public void onError(Throwable e) {
// Show the user that an error has occurred
}
@Override
public void onComplete() {
// Show the user that the operation is complete
oBinding.DisplayCurrentBudget.setText(Manager.GetBigSum(BadFeelingAboutThis).toString());
}
});
I was already uncomfortable with that code. My second attempt produces the exact same result:
Observable<BigDecimal> ItemObservable2 = Observable.create( emitter -> {
try {
BigDecimal mySum = oStandardModel.getWholeBudget();
emitter.onNext(mySum);
emitter.onComplete();
} catch (Exception e) {
emitter.onError(e);
}
});
DisposableObserver<BigDecimal> disposable = ItemObservable2.
subscribeOn(Schedulers.io()).
observeOn(AndroidSchedulers.mainThread()).
subscribeWith(new DisposableObserver<BigDecimal>() {
@Override
public void onNext(BigDecimal sum) {
// Access your Movie object here
oBinding.DisplayCurrentBudget.setText(sum.toString());
}
@Override
public void onError(Throwable e) {
// Show the user that an error has occurred
}
@Override
public void onComplete() {
// Show the user that the operation is complete
}
});
Any obvious issues with my code?
Thanks for reading, much appreciate it!
Edit:
I was asked what Manager.GetBigSum does; it actually does not do much. It only adds the BigDecimal values of an item list.
public static BigDecimal GetBigSum(List<ItemS> ListP){
List<BigDecimal> bigDList = ListP.stream().map(ItemS::get_dAmount).collect(Collectors.toList());
return bigDList.stream()
.reduce(BigDecimal.ZERO, BigDecimal::add);
}
Further, I simplified the query. But it still does not care about DB updates, only about fragment recreation:
Single.fromCallable(() -> oStandardModel.getItemsVanilla())
.map(Manager::GetBigSum)
.subscribeOn(Schedulers.io())
.observeOn(AndroidSchedulers.mainThread())
.subscribe(
e -> oBinding.DisplayCurrentBudget.setText(e.toString())
);
Your Rx logic has no error; the problem is probably an internal error in your getWholeBudget.
But why write the Rx chain in such a complex way?
For your case, you can just write:
Single.fromCallable(() -> oStandardModel.getItemsVanilla())
.map(Manager::GetBigSum)
.subscribeOn(Schedulers.io())
.observeOn(AndroidSchedulers.mainThread())
.subscribe(
e -> oBinding.DisplayCurrentBudget.setText(e.toString()),
e -> log.error(e));
I solved it this way:
oStandardModel.getItemJointCatLive().observe(this, new Observer<List<ItemJointCat>>() {
@Override
public void onChanged(@Nullable final List<ItemJointCat> oItemSP) {
Single.fromCallable(() -> oStandardModel.getWholeBudget())
.subscribeOn(Schedulers.io())
.observeOn(AndroidSchedulers.mainThread())
.subscribe(
e -> oBinding.DisplayCurrentBudget.setText(e.toString())
);
}
});
My mistake was that I assumed RxJava2 does not need an onChanged event... now I just use the onChanged event of the LiveData observer to trigger a simple RxJava2 query.
Do you think there is anything wrong with that approach?
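If the goal is simply to have the sum refresh whenever the table changes, Room's RxJava2 support can do the observing for you: a DAO query can return a Flowable that re-emits each time the underlying table is modified, so no LiveData bridge is needed. A minimal sketch, assuming the Room RxJava2 artifact is on the classpath; ItemDao, observeItems() and the items table name are hypothetical:
// Room re-runs the query and pushes a new emission whenever the table changes.
@Dao
public interface ItemDao {
    @Query("SELECT * FROM items")   // table name is an assumption
    Flowable<List<ItemS>> observeItems();
}

// In the fragment:
Disposable disposable = itemDao.observeItems()
        .map(Manager::GetBigSum)
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(
                sum -> oBinding.DisplayCurrentBudget.setText(sum.toString()),
                throwable -> Log.e("Budget", "budget query failed", throwable));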

How to insert data and get the id as an out parameter in Android Room and RxJava 2?

Insert query
@Insert(onConflict = OnConflictStrategy.REPLACE)
long insertProduct(Product product); //product id is auto generated
View model
public Completable insertProduct(final String productName) {
return new CompletableFromAction(() -> {
Product newProduct = new Product();
newProduct.setProductName(productName);
mProductDataSource.insertOrUpdateProduct(newProduct);
});
}
In the activity where I call the above function, I use a CompositeDisposable.
CompositeDisposable mDisposable = new CompositeDisposable();
mDisposable.add(mViewModel.insertProduct(productName)
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(() -> {}, throwable -> Log.e(TAG, "Error msg", throwable)));
Am I implementing this in the wrong way?
According to the docs, if the @Insert method receives only one parameter, it can return a long, which is the new rowId for the inserted item. If the parameter is an array or a collection, it should return long[] or List<Long> instead.
Since you insert only one item, the method will return only one rowID.
So, try this
Single.fromCallable(new Callable<Long>() {
    @Override
    public Long call() throws Exception {
        return productDao.insertProduct(new Product());
    }
})
.subscribe(id -> {
    // id is the generated rowId of the inserted product
}, throwable -> Log.e(TAG, "Error msg", throwable));
You could use Observable or Maybe as well. But I think Single fits better since in your case the id is autogenerated and the insertion should always complete.
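One caveat with the snippet above: the callable runs on whatever thread subscribes, and Room throws an IllegalStateException if that is the main thread (unless allowMainThreadQueries() is enabled). A minimal usage sketch that adds the usual schedulers and parks the Disposable in the CompositeDisposable from the question:
Disposable disposable = Single.fromCallable(() -> productDao.insertProduct(new Product()))
        .subscribeOn(Schedulers.io())                 // run the insert off the main thread
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(
                id -> Log.d(TAG, "inserted row id: " + id),
                throwable -> Log.e(TAG, "Error msg", throwable));
mDisposable.add(disposable);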

Android Mobile Apps query from the Azure database returns last row only

There are more than 15 items in my Azure database table called Events.
I've tried to run most of the commands found at
https://learn.microsoft.com/en-us/azure/app-service-mobile/app-service-mobile-android-how-to-use-client-library such as:
List<Events> results = eventsTable.execute().get();
and
List<Events> results = eventsTable.select("Events").execute().get();
and
List<Events> results = eventsTable.top(20).execute().get();
to return all the row items in the table. The queries seem to run on the last row of the table only and return the last row, or nothing at all, when the query is executed.
Though the ToDoItem Quickstart from Azure works perfectly with all the queries - which is odd.
Here's some of the code
ArrayList<Events> events = new ArrayList<Events>();
private void EventsFromTable() {
AsyncTask<Void, Void, Void> task = new AsyncTask<Void, Void, Void>(){
@Override
protected Void doInBackground(Void... params) {
try {
final List<Events> results = EventsTable.execute().get();
runOnUiThread(new Runnable() {
@Override
public void run() {
for (Events event : results) {
Events ev = new Events(event.getName(), event.getVenue(), event.getDate());
events.add(ev);
System.out.println("size is " + events.size());
// <====== This returns "size is 1" ======>
}
}
});
} catch (final Exception e){
createAndShowDialogFromTask(e, "Error");
}
return null;
}
};
runAsyncTask(task);
}
Might anyone know what the matter is?
Thanks
According to your code, the variable events seems to be a public shared instance of ArrayList in your Android app, so I don't know whether there is a case in which multiple threads access it concurrently. The implementation of the ArrayList class is not synchronized; please see here.
So please use the code below instead of ArrayList<Events> events = new ArrayList<Events>(); when you share the variable between the UI thread and the async task thread.
List<Events> events = Collections.synchronizedList(new ArrayList<Events>());
And I think it's better to copy the data retrieved from the table via the addAll method, rather than calling add for each item, as in the code below.
@Override
public void run() {
events.addAll(results);
}
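Putting the two suggestions together, the fetch method from the question might look like this (a sketch that keeps the original structure and only swaps in the synchronized list and the single addAll call):
List<Events> events = Collections.synchronizedList(new ArrayList<Events>());

private void EventsFromTable() {
    AsyncTask<Void, Void, Void> task = new AsyncTask<Void, Void, Void>() {
        @Override
        protected Void doInBackground(Void... params) {
            try {
                final List<Events> results = EventsTable.execute().get();
                runOnUiThread(new Runnable() {
                    @Override
                    public void run() {
                        // copy everything in one call instead of element by element
                        events.addAll(results);
                    }
                });
            } catch (final Exception e) {
                createAndShowDialogFromTask(e, "Error");
            }
            return null;
        }
    };
    runAsyncTask(task);
}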