I have the following code:
Single<Response<User>> single = service.registerUser();
single
    .subscribeOn(Schedulers.io())
    .observeOn(Schedulers.computation())
    .map(Response::body)
    .flatMap(parentsRepository::writeUser)
    .observeOn(AndroidSchedulers.mainThread())
    .flatMap(parentsRepository::getUser);
where parentsRepository is a repo wrapping my Realm database. The problems come when the server returns validation errors, however. So somewhere in my stream I want the equivalent of:
if (response.code() == 201) {
    // CONTINUE STREAM USING THE LOGIC THAT HANDLES SUCCESS
} else if (response.code() == 400) {
    // CONTINUE STREAM USING LOGIC TO HANDLE THE VALIDATION ERRORS
}
A solution I have previously implemented is as follows:
Observable<Response<User>> observable_from_api =
        service.attemptLogin(username, password)
                .share();

observable_from_api
        .filter(response -> response.code() == HttpStatus.HTTP_STATUS_200_OK)
        .//handle logic for success

observable_from_api
        .filter(response -> response.code() == HttpStatus.HTTP_STATUS_400_BAD_REQUEST)
        .//handle logic for validation errors
I don't like this solution for several reasons. The main one is that it just does not seem right. The second is that the .share() method is only available on an Observable. Since my network operation emits only one response I would much rather use Single, but the .share() method is not available there.
Excuse me if this is a duplicate question; I have done some digging around and only found the solution I mentioned. I want to either see the optimal solution or be told explicitly that this is in fact the optimal solution.
I think you need to define which kind of data you want your consumer to receive. I assume you want the consumer to receive a User object.
These are the signatures of the methods you should create:
Single<User> handleSuccess(Response<User> response)
Single<User> handleError(Response<User> response)
Then you create your stream this way:
service.registerUser()
        .flatMap(response -> {
            if (response.isSuccessful()) {
                return handleSuccess(response);
            } else {
                return handleError(response);
            }
        })
        .subscribe(user -> logd("user: " + user.name));
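For completeness, here is a rough sketch of what those two methods could look like. It assumes parentsRepository.writeUser returns a Single whose value getUser accepts, and ValidationException is a hypothetical Throwable you would define for the 400 case:

Single<User> handleSuccess(Response<User> response) {
    // 2xx: persist the body and read the stored user back from the repository
    return parentsRepository.writeUser(response.body())
            .flatMap(parentsRepository::getUser);
}

Single<User> handleError(Response<User> response) {
    // 400: surface the validation errors so the subscriber's onError can react to them
    return Single.error(new ValidationException(response.errorBody()));
}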
I am working on a solution where I am using vertx 3.8.4 and vertx-mysql-client 3.9.0 for asynchronous database calls.
Here is the scenario that I have been trying to resolve, in a proper reactive manner.
I have some master table records which are in an inactive state.
I run a query and get the list of records from the database, like this:
Future<List<Master>> locationMasters = getInactiveMasterTableRecords ();
locationMasters.onSuccess (locationMasterList -> {
if (locationMasterList.size () > 0) {
uploadTargetingDataForAllInactiveLocations(vertx, amazonS3Utility,
locationMasterList);
}
});
Now in the uploadTargetingDataForAllInactiveLocations method, I have a list of items.
I need to iterate over this list; for each item, I need to download a file from AWS, parse the file, and insert the data into the database.
I understand the way to do it using CompositeFuture.
Can someone from the Vert.x dev community help me with this, or point me to some available documentation?
I did not find good content on this by googling.
I'm answering this as I was searching for something similar and ended up spending some time before finding an answer; hopefully this might be useful to someone else in the future.
I believe you want to use CompositeFuture in Vert.x only if you want to synchronize multiple actions. That means you either want an action to execute once all of the actions your composite future is built upon succeed, or once at least one of them succeeds.
In the first case I would use CompositeFuture.all(List<Future> futures) and in the second case I would use CompositeFuture.any(List<Future> futures).
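As a minimal illustration of the difference (f1 and f2 are hypothetical futures):

CompositeFuture.all(f1, f2)
    .onComplete(ar -> { /* succeeds only once both futures succeed; fails as soon as either fails */ });
CompositeFuture.any(f1, f2)
    .onComplete(ar -> { /* succeeds as soon as either future succeeds; fails only if both fail */ });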
As per your question, below is sample code where we have a list of items; for each item we run an asynchronous operation (namely downloadAndProcessFile()) which returns a Future, and we want to execute an action doAction() once all the async actions have succeeded:
List<Future> futures = new ArrayList<>();
locationMasterList.forEach(elem -> {
    Promise<Void> promise = Promise.promise();
    futures.add(promise.future());
    Future<Boolean> processStatus = downloadAndProcessFile(); // doesn't need to be boolean
    processStatus.onComplete(asyncProcessStatus -> {
        if (asyncProcessStatus.succeeded()) {
            // eventually do stuff with the result
            promise.complete();
        } else {
            promise.fail("Error while processing file whatever");
        }
    });
});
CompositeFuture.all(futures).onComplete(compositeAsync -> {
    if (compositeAsync.succeeded()) {
        doAction(); // <-- here do what you want to do when all futures complete
    } else {
        // at least 1 future failed
    }
});
This solution is probably not perfect and I suppose it can be improved, but this is what I found works for me. Hopefully it will work for someone else too.
So here we go: given a Confluent.Kafka IConsumer<>, the code below wraps it into a dedicated async computation expression and consumes as long as cancellation hasn't been requested. It also defends itself against OperationCanceledException and runs a finally block to ensure graceful termination of the consumer.
let private consumeUntilCancelled callback (consumer: IConsumer<'key, 'value>) =
    async {
        let! ct = Async.CancellationToken
        try
            try
                while not ct.IsCancellationRequested do
                    let consumeResult = consumer.Consume(ct)
                    if not consumeResult.IsPartitionEOF then do! (callback consumeResult)
            with
            | :? OperationCanceledException -> return ()
        finally
            consumer.Close()
            consumer.Dispose()
    }
Question #1: is this code correct or am I abusing the async?
So far so good. In my app I have to deal with lots of consumers that must die altogether. So, assuming that consumers: seq<Async<unit>> represents them, the following code is what I came up with:
async {
    for consumer in consumers do
        do! (Async.StartChild consumer |> Async.Ignore)
}
I expect this code to chain the children to the parent's cancellation context, and once it is cancelled, the children will be cancelled as well.
Question #2: is my finally block guaranteed to run even when a child gets cancelled?
I have a few observations about your code:
Your use of Async.StartChild is correct - all child computations will inherit the same cancellation token and they will all get cancelled when the main token is cancelled.
The async workflow can be cancelled after you call consumer.Consume(ct) and before you call callback. I'm not sure what this means for your specific problem, but if it removes some data from a queue, the data could be lost before it is processed. If that's an issue, then I think you'll need to make callback non-asynchronous, or invoke it differently.
In your consumeUntilCancelled function, you do not explicitly need to check whether ct.IsCancellationRequested is true. The async workflow does this automatically in every do! or let!, so you can replace this with just a while true loop.
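For that last point, the loop inside consumeUntilCancelled could then be reduced to something like this (a sketch under the same assumptions as your snippet):

while true do
    let consumeResult = consumer.Consume(ct)
    if not consumeResult.IsPartitionEOF then do! (callback consumeResult)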
Here is a minimal stand-alone demo:
let consume s = async {
    try
        while true do
            do! Async.Sleep 1000
            printfn "%s did work" s
    finally
        printfn "%s finalized" s }

let work =
    async {
        for c in ["A"; "B"; "C"; "D"] do
            do! Async.StartChild (consume c) |> Async.Ignore }
Now we create the computation with a cancellation token:
// Run this in F# interactive
let ct = new System.Threading.CancellationTokenSource()
Async.Start(work, ct.Token)
// Run this sometime later
ct.Cancel()
Once you call ct.Cancel, all the finally blocks will be called and all the loops will stop.
I have a loop where I POST requests to the server:
for (traineeId, points) in traineePointsDict {
    // create a new point
    let parameters: NSDictionary = [
        "traineeId": "\(traineeId)",
        "numPoints": points,
        "exerciseId": "\(exerciseId)"
    ]

    DataManager.sharedInstance.api.points.request(.POST, json: parameters).success { data in
        if data.json["success"].int == 1 {
            self.pointCreated()
        } else {
            self.pointFailToCreate()
        }
    }.failure { error in
        self.pointFailToCreate()
    }
}
The problem is that for some reason the last request fails and I am guessing that this is due to posting too many requests to the server at the same time.
Is there a way to chain these requests so they wait for the one before to complete before executing the next?
I have been looking at PromiseKit, but I don't really know how to implement this and I am looking for a quick solution.
Siesta does not control how requests are queued or how many requests run concurrently. You have two choices:
control it on the app side, or
control it in the network layer.
I’d investigate option 2 first. It gives you less fine-grained control, but it gives you more robust options on the cheap and is less prone to mistakes. If you are using URLSession as your networking layer (which is Siesta’s default), then investigate whether the HTTPMaximumConnectionsPerHost property of URLSessionConfiguration does what you need. (Here are some examples of passing custom configuration to Siesta.)
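For example, if you stick with the default URLSession networking, the idea from those examples looks roughly like this (a sketch: the base URL is a placeholder, and passing a URLSessionConfiguration straight to Service is the Siesta convenience being referred to):

let configuration = URLSessionConfiguration.ephemeral
configuration.httpMaximumConnectionsPerHost = 1  // limit simultaneous connections to this host to one
let service = Service(baseURL: "https://api.example.com", networking: configuration)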
If that doesn’t work for you, a simple version of #1 is to use a completion handler to chain the requests:
func chainRequests(_ queue: [ThingsToRequest]) {
    guard let thing = queue.first else { return }
    let params = makeParamsFor(thing)
    resource.request(.POST, json: params)
        .onSuccess {
            ...
        }.onFailure {
            ...
        }.onCompletion { _ in
            chainRequests(Array(queue[1 ..< queue.count]))
        }
}
Note that you can attach multiple overlapping handlers to the same request, and they’re called in the order you attached them. Note also that Siesta guarantees that the completion block is always called, no matter the outcome. This means that each request will result in calls to either closures 1 & 3 or closures 2 & 3. That’s why this approach works.
I have a paged interface. Given a starting point, a request will produce a list of results and a continuation indicator.
I've created an observable that is built by constructing and flat-mapping an observable that reads the page. The result of this observable contains both the data for the page and a value to continue with. I pluck the data and flat-map it to the subscriber, producing a stream of values.
To handle the paging I've created a subject for the next page values. It's seeded with an initial value; then, each time I receive a response with a valid next page, I push to the pages subject and trigger another read, until there is nothing more to read.
Is there a more idiomatic way of doing this?
function records(start = 'LATEST', limit = 1000) {
    let pages = new rx.Subject();

    this.connect(start)
        .subscribe(page => pages.onNext(page));

    let records = pages
        .flatMap(page => {
            return this.read(page, limit)
                .doOnNext(result => {
                    let next = result.next;
                    if (next === undefined) {
                        pages.onCompleted();
                    } else {
                        pages.onNext(next);
                    }
                });
        })
        .pluck('data')
        .flatMap(data => data);

    return records;
}
That's a reasonable way to do it. It has a couple of potential flaws in it (that may or may not impact you depending upon your use case):
You provide no way to observe any errors that occur in this.connect(start)
Your observable is effectively hot. If the caller does not immediately subscribe to the observable (perhaps they store it and subscribe later), then they'll miss the completion of this.connect(start) and the observable will appear to never produce anything.
You provide no way to unsubscribe from the initial connect call if the caller changes its mind and unsubscribes early. Not a real big deal, but usually when one constructs an observable, one should try to chain the disposables together so it all cleans up properly if the caller unsubscribes.
Here's a modified version:
It passes errors from this.connect to the observer.
It uses Observable.create to create a cold observable that only starts its business when the caller actually subscribes, so there is no chance of missing the initial page value and stalling the stream.
It combines the this.connect subscription disposable with the overall subscription disposable.
Code:
function records(start = 'LATEST', limit = 1000) {
    return Rx.Observable.create(observer => {
        let pages = new Rx.Subject();
        let connectSub = new Rx.SingleAssignmentDisposable();
        let resultsSub = new Rx.SingleAssignmentDisposable();
        let sub = new Rx.CompositeDisposable(connectSub, resultsSub);

        // Make sure we subscribe to pages before we issue this.connect()
        // just in case this.connect() finishes synchronously (possible if it caches values or something?)
        let results = pages
            .flatMap(page => this.read(page, limit))
            .doOnNext(r => r.next !== undefined ? pages.onNext(r.next) : pages.onCompleted())
            .flatMap(r => r.data);
        resultsSub.setDisposable(results.subscribe(observer));

        // now query the first page
        connectSub.setDisposable(this.connect(start)
            .subscribe(p => pages.onNext(p), e => observer.onError(e)));

        return sub;
    });
}
Note: I've not used the ES6 syntax before, so hopefully I didn't mess anything up here.
I have two methods that both return an IObservable
IObservable<Something[]> QueryLocal();
and
IObservable<Something[]> QueryWeb();
QueryLocal is always successful. QueryWeb is susceptible to both a timeout and possible web errors.
I wish to implement a QueryLocalAndWeb() that calls both and combines their results.
So far I have:
IObservable<Something[]> QueryLocalAndWeb()
{
    var a = QueryLocal();
    var b = QueryWeb();
    var plan = a.And(b).Then((x, y) => x.Concat(y).ToArray());
    return Observable.When(plan).Timeout(TimeSpan.FromSeconds(10), a);
}
However, I'm not sure that it handles the case where QueryWeb yields an error.
In the future I might have a QueryWeb2() that also needs to be taken into account.
So, how do I combine the results from a number of IObservables ignoring the ones that throw errors (or time out)?
I guess OnErrorResumeNext should be able to handle this scenario:
From MSDN:
Continues an observable sequence that is terminated normally or by an
exception with the next observable sequence.
IObservable<Something[]> QueryLocalAndWeb()
{
    var a = QueryLocal();
    var b = QueryWeb().Timeout(TimeSpan.FromSeconds(10));
    return Observable.OnErrorResumeNext(b, a);
}
You can do the concat of the arrays by using aggregation on the returned observable.
I am assuming that both the local and web observables are cold, i.e. they start producing values only when someone subscribes to them.
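A sketch of that idea, reusing QueryLocal/QueryWeb from the question (the 10-second timeout mirrors the earlier answer and is an assumption):

IObservable<Something[]> QueryLocalAndWeb()
{
    var a = QueryLocal();
    var b = QueryWeb()
        .Timeout(TimeSpan.FromSeconds(10))
        .Catch(Observable.Empty<Something[]>()); // a web error or timeout simply contributes nothing

    return a.Merge(b)
        .Aggregate(new Something[0], (acc, batch) => acc.Concat(batch).ToArray());
}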
How about:
// a failed web call contributes an empty page instead of terminating the join
var plan = a.And(b.Catch(Observable.Return(new Something[0]))).Then((x, y) => x.Concat(y).ToArray());