How to cache records in an RxJava stream and consume them later in the same stream? - rx-java2

I am investigating the use of RxJava in my current Android application.
I have the following use case and cannot see how to implement it with RxJava.
1). Read a set of database records as a Single<List<DBRecord>>
2). Transform each database record to an associated network model class
3). Call a remote Update API for each network object
4). When the remote Update API call is successful, update the specific database record.
The code I have so far is
login().andThen(DatabaseController.fetchDBRecord())
       .toObservable()
       .flatMapIterable(dbRecord -> dbRecord)
       .flatMapSingle(database -> transformDatabase(database, DB_RECORD_MAPPER))
       .flatMapSingle(NetworkController::UpdateCall)
       .flatMapCompletable(response -> DatabaseController.updateDBRecord(response.body()));
The issue I have is that the Update API response is a String value containing "SUCCESS" or "FAILURE", i.e. the response does not identify the current DBRecord.
Is there any way I can access the dbRecord from the .flatMapIterable(dbRecord -> dbRecord) stage when I am executing .flatMapCompletable(response -> DatabaseController.updateDBRecord(response.body())),
so that I can pass dbRecord into DatabaseController.updateDBRecord(dbRecord) like so:
login().andThen(DatabaseController.fetchDBRecord())
       .toObservable()
       .flatMapIterable(dbRecord -> dbRecord)
       .flatMapSingle(database -> transformDatabase(database, DB_RECORD_MAPPER))
       .flatMapSingle(NetworkController::UpdateCall)
       .flatMapCompletable(response -> DatabaseController.updateDBRecord(dbRecord));
UPDATED
I have realised my use case is more complex than originally stated:
1). Read a set of database records as a Single<List<DBRecord>>
2). Transform each database record to an associated network model class
3). Call a remote Update API for each network object
4). Only when the remote Update API call is successful, update the specific database record.
If I am using nested streams, how do I know that the nested API call returned successfully, so that the outer stream only updates the database on success?
login().andThen(DatabaseController.fetchDBRecord())
       .flattenAsObservable(dbRecord -> dbRecord)
       .flatMapCompletable(database -> transformDatabase(database, DB_RECORD_MAPPER)
               .flatMap(NetworkController::UpdateCall)
               .flatMapCompletable(response -> DatabaseController.updateDBRecord(database)));

You can use a nested stream. For example:
login().andThen(DatabaseController.fetchDBRecord())
       .flattenAsObservable(dbRecord -> dbRecord)
       .flatMapCompletable(database -> transformDatabase(database, DB_RECORD_MAPPER)
               .flatMap(NetworkController::UpdateCall)
               .flatMapCompletable(response -> {
                   if (isSuccess(response))
                       return DatabaseController.updateDBRecord(database);
                   else
                       return Completable.complete();
               }));
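isSuccess is not defined in the snippet above. Assuming NetworkController.UpdateCall yields a Retrofit-style Response<String> whose body carries the "SUCCESS"/"FAILURE" string described in the question, a minimal sketch of such a helper could be:
import retrofit2.Response;

// Hypothetical helper; adjust the check to whatever your API actually returns.
static boolean isSuccess(Response<String> response) {
    return response.isSuccessful() && "SUCCESS".equalsIgnoreCase(response.body());
}
Because the network call and the database update both happen inside the lambda passed to the outer flatMapCompletable, database stays in scope for the whole nested chain, which is what lets you pass it to DatabaseController.updateDBRecord(database).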

Related

How to get multiple datasets after calling db stored procedure in Mule

A Mule application is calling a stored procedure like this: call sp("email","ispresent","id"). The procedure returns two datasets: one for user details and another for college details. Below is an example of the response we need to get after calling the procedure:
{
  "userdetails": {
    "emailId": "abc#gmail.com",
    "useris": "20",
    "ispresent": 0,
    "collegedetails": {
      "collegeName": "VV",
      "collegeid": "12"
    }
  }
}

How does read-through work in Ignite

My cache is empty, so SQL queries return null.
Read-through means that if there is a cache miss, Ignite will automatically go down to the underlying DB (or persistent store) to load the corresponding data.
If new data is inserted into the underlying DB table, do I have to take the cache server down to load the newly inserted data from the DB table, or will it sync automatically?
Does it work the same as Spring's @Cacheable, or does it work differently?
It looks to me that the answer is no. Cache SQL queries don't work because there is no data in the cache, but when I tried cache.get I got the following results:
Case 1:
System.out.println("data == " + cache.get(new PersonKey("Manish", "Singh")).getPhones());
Result: data == 1235
Case 2:
PersonKey per = new PersonKey();
per.setFirstname("Manish");
System.out.println("data == " + cache.get(per).getPhones());
This throws an error (stack trace attached as images: error image, image2).
Read-through semantics can be applied when there is a known set of keys to read. This is not the case with SQL, so in case your data is in an arbitrary 3rd party store (RDBMS, Cassandra, HBase, ...), you have to preload the data into memory prior to running queries.
However, Ignite provides native persistence storage [1], which eliminates this limitation. It allows you to use any Ignite API without having anything in memory, and this includes SQL queries as well. Data will be fetched into memory on demand while you're using it.
[1] https://apacheignite.readme.io/docs/distributed-persistent-store
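For illustration, a minimal sketch of turning native persistence on (class and method names are from Ignite 2.x and may differ in other versions):
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

IgniteConfiguration cfg = new IgniteConfiguration();

// Enable persistence for the default data region.
DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
cfg.setDataStorageConfiguration(storageCfg);

Ignite ignite = Ignition.start(cfg);
// With persistence enabled the cluster starts inactive and must be activated once.
ignite.cluster().active(true);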
When you insert something into the database and it is not in the cache yet, get operations will retrieve the missing values from the DB if readThrough is enabled and a CacheStore is configured.
But currently it doesn't work this way for SQL queries executed on the cache. You should call loadCache first; then the values will appear in the cache and will be available for SQL.
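As a rough sketch of the readThrough plus CacheStore setup mentioned above (PersonCacheStore and the cache name are hypothetical placeholders):
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<PersonKey, Person> ccfg = new CacheConfiguration<>("personCache");
ccfg.setReadThrough(true);
// PersonCacheStore is an assumed CacheStore implementation that loads
// missing entries from the underlying database.
ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(PersonCacheStore.class));

// ignite is an already started Ignite instance.
IgniteCache<PersonKey, Person> cache = ignite.getOrCreateCache(ccfg);

// On a cache miss this get() falls through to PersonCacheStore.load():
Person p = cache.get(new PersonKey("Manish", "Singh"));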
When you perform your second get, the exact combination of firstname and lastname is sought in the DB. It is converted into a CQL query containing a lastname = null condition, and it fails because lastname cannot be null.
UPD:
To get all records that have the firstname column equal to 'Manish', you can first call loadCache with an appropriate predicate and then run an SQL query on the cache.
cache.loadCache((k, v) -> v.firstname.equals("Manish"));

SqlFieldsQuery qry = new SqlFieldsQuery("select firstname, lastname from Person where firstname='Manish'");

try (FieldsQueryCursor<List<?>> cursor = cache.query(qry)) {
    for (List<?> row : cursor)
        System.out.println("firstname: " + row.get(0) + ", lastname: " + row.get(1));
}
Note that loadCache is a heavy operation and requires running over all records in the DB, so it shouldn't be called too often. You can pass null as the predicate, in which case all records will be loaded from the database.
Also, to make SQL run fast on the cache, you should mark the firstname field as indexed in the QueryEntity configuration.
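For example, such an index could be declared when the cache is configured; the field list and cache name below are assumptions based on the query above:
import java.util.Collections;
import java.util.LinkedHashMap;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<PersonKey, Person> ccfg = new CacheConfiguration<>("personCache");

QueryEntity entity = new QueryEntity(PersonKey.class.getName(), Person.class.getName());

LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("firstname", String.class.getName());
fields.put("lastname", String.class.getName());
entity.setFields(fields);

// Index firstname so "where firstname='Manish'" does not scan the whole cache.
entity.setIndexes(Collections.singletonList(new QueryIndex("firstname")));

ccfg.setQueryEntities(Collections.singletonList(entity));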
In your case 2, have you tried specifying lastname as well? From your stack trace it's evident that Cassandra expects it to be non-null.
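Concretely, assuming PersonKey has a setLastname setter to match the setFirstname used in case 2, the lookup would become:
PersonKey per = new PersonKey();
per.setFirstname("Manish");
per.setLastname("Singh"); // assumed setter; fills the field Cassandra requires to be non-null
System.out.println("data == " + cache.get(per).getPhones());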

How to mass update in scala-activerecord

I want to update multiple records in a database with a single SQL query. There is a method forceUpdate, which can be used like this:
case class Document(size: Int, var status: String) extends ActiveRecord
object Document extends ActiveRecordCompanion[Document]
Document.forceUpdate(_.size < 100)(_.status := "small")
However, it bypasses validations and hooks like beforeSave(). I tried the underlying Squeryl:
Document.inTransaction {
  update(Document.table)(d =>
    where(d.size < 100)
    set(d.status := "small")
  )
}
But it also ignores hooks.
I can't seem to find a method that updates multiple documents at once, while using hooks and validations. Is there at least some workaround?
When you do a partial update, you are updating some unknown number of records that match your criteria without retrieving them. For the hooks to get triggered, though, the object being updated needs to be known (i.e. retrieved). The best alternative I can think of would be to retrieve all of the objects you are updating and then use a batch update rather than a partial update. This won't be as fast or efficient as the partial update you are doing, but unless you can register your hook in the database, I don't know what the alternative would be.

OrientDB Transactions Best Practices

I'm working on a REST API. I'm having all sorts of problems with transactions in Orientdb. In the current setup, we have a singleton that wraps around the ODatabaseDocumentPool. We retrieve all instances through this setup. Each api call starts by acquiring an instance from the pool and creating a new instance of OrientGraph using the ODatabaseDocumentTx instance. The code that follows uses methods from both ODatabaseDocumentTx and OrientGraph. At the end of the code, we call graph.commit() on write operations and graph.shutdown() on all operations.
I have a list of questions.
To verify: can I still use the ODatabaseDocumentTx instance I used to create the OrientGraph, or should I use OrientGraph.getRawGraph()?
What is the best way to do read operations when using OrientGraph? Even during read operations, I get OConcurrentModificationExceptions, lock exceptions, or errors retrieving records. Is this because the OrientGraph is transactional and versions are modified even when retrieving records? I should mention that I also use the index manager and iterate through the edges of a vertex in these read operations.
When I get a record through the Index Manager, does this update the version on the database?
Does graph.shutdown() release the ODatabaseDocumentTx instance back to the pool?
Does v1.78 still require us to lock records in transactions?
If I set autoStartTx to false on OrientGraph, do I have to start transactions manually, or do they start automatically when accessing the database?
Sample Code:
ODatabaseDocumentTx db = pool.acquire();

// READ
OrientGraph graph = new OrientGraph(db);
ODocument doc = (ODocument) oidentifiable.getRecord(); // I use the Java API to get a record from the index
if (((String) doc.field("field")).equals("name")) {
    // code
}
OrientVertex v = graph.getVertex(doc);
for (Vertex vv : v.getVertices(Direction.BOTH)) {
    // code
}

// OR WRITE
doc.field("d", val);
doc = doc.save();
OrientVertex v = graph.getVertex(doc);
graph.addEdge(null, v, otherVertex, "E"); // "E" is OrientDB's default edge class
graph.addEdge(null, v, anotherVertex, "E"); // do I have to reload the record in v?

// End Transaction
// if write
graph.commit();
// then
graph.shutdown();
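For reference, the acquire/commit/shutdown flow described above is commonly wrapped so that shutdown always runs; a minimal sketch, where doWork is a placeholder for the read/write logic:
ODatabaseDocumentTx db = pool.acquire();
OrientGraph graph = new OrientGraph(db);
try {
    doWork(graph);    // placeholder for the read/write code shown above
    graph.commit();   // only needed for write operations
} catch (Exception e) {
    graph.rollback();
    throw e;
} finally {
    graph.shutdown(); // closes the graph and frees the underlying database instance
}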

ETL: parallel lookup and insert in Scala

For our ETL, the fact data doesn't have an item_key, but it does have an item_number. During loading, if we can find the item_key for the item_number we just use it; if we cannot find one, we auto-create an item_key. Currently the process is not parallel; I am thinking about running it in parallel using Scala, since Scala has built-in concurrent collections.
To use a simple example:
val keys=1 to 1000
val items=keys map {num=>"Item"+num}
var itemMap=(items zip keys).toMap
and now we have millions of rows to load, whose item numbers are:
def g(v:String)=List.fill(5000)(v)
var fact = "Item2000" :: items.flatMap(x => g(x)).toList
Since the fact data has an item Item2000 which can't be found in the item master data in itemMap, we need to auto-create a mapping (Item2000, 2000) and add it to itemMap, so that if we encounter Item2000 again in the future we can use the same item key.
How can this be implemented using a concurrent collection? For each row of the fact data, if we can't find the item key we auto-create one, so we need a way to lock itemMap; otherwise multiple threads might try to insert auto-created data into itemMap.