I have a view model that makes some updates to an array of todos. I'm mapping a set of inputs to actions (modeled as cases of an enum) and merging them into a single observable:
let mergedActions = Observable<TodosAction>.merge([
    todosFromService.map { .fromService(todos: $0) },
    toggleFavoriteSubject.map { .toggleFavorite(identifier: $0) },
    toggleIsReadSubject.map { .toggleIsRead(identifier: $0) }
])
Then I'm using scan to "remember" the history of the updates.
todos = mergedActions
    .scan([]) { (lastTodos, new) -> [Todo] in
        switch new {
        case .fromService(let todos):
            return todos
        case .toggleFavorite(let identifier):
            return lastTodos.withFavoritedToggled(atId: identifier)
        case .toggleIsRead(let identifier):
            return lastTodos.withIsReadToggled(atId: identifier)
        }
    }
My problem is integrating the network requests into this approach. For example, I want the "optimistic update" where I assume success and update the todo in memory, but I also want to update it on the server and "roll back" the local update if the request fails.
I can't think of how to do this with the current structure of my Observables: the scan closure is no longer in the world of Observables, since it just returns a [Todo], so I can't make API requests with flatMap or anything.
How could this be amended or augmented to support API integration and roll back the corresponding local updates if the remote updates fail?
I would suggest aggregating state the way this framework does: https://github.com/maxvol/RaspSwift
It also relies heavily on the .scan() operator, so you will find the approach familiar.
You can either use it or implement a similar solution yourself.
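If you'd rather stay with plain RxSwift and keep the scan reducer, here is one rough sketch (apiClient.setFavorite and the RxSwift 6 catchAndReturn operator are my assumptions, not part of your code): let each toggle input emit its optimistic action immediately and, if the request fails, emit the same toggle again to undo it.

let toggleFavoriteActions: Observable<TodosAction> = toggleFavoriteSubject
    .flatMap { identifier -> Observable<TodosAction> in
        // Emit the optimistic update straight away...
        let optimistic = Observable.just(TodosAction.toggleFavorite(identifier: identifier))
        // ...then run the (hypothetical) request; on failure, emit the same toggle to undo it.
        // On RxSwift 5 use catchErrorJustReturn instead of catchAndReturn.
        let remote = apiClient.setFavorite(identifier: identifier)
            .flatMap { _ in Observable<TodosAction>.empty() }
            .catchAndReturn(.toggleFavorite(identifier: identifier))
        return optimistic.concat(remote)
    }

let mergedActions = Observable<TodosAction>.merge([
    todosFromService.map { .fromService(todos: $0) },
    toggleFavoriteActions,   // replaces the plain mapping of toggleFavoriteSubject
    toggleIsReadSubject.map { .toggleIsRead(identifier: $0) }
])

Because toggling is symmetric, replaying the action acts as the rollback, and the existing scan reducer stays untouched; the same wrapping would apply to toggleIsRead. If that feels fragile, you could add an explicit rollback case to TodosAction instead.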
I have a single reducer in the form of a Subscriber. The base class is like this:
class ResponseReducer<Input>: Subscriber {
    // Failure is Error, matching the completion type below.
    typealias Failure = Error

    func receive(subscription: Subscription) {
        subscription.request(.unlimited)
    }

    func receive(completion: Subscribers.Completion<Error>) {
    }

    func receive(_ input: Input) -> Subscribers.Demand {
        return .unlimited
    }
}
let reducer = ResponseReducer<Response>()
And I subscribe it to network responses from different places in the following way:
networkRequestPublisher
    .subscribe(reducer)
Multiple subscriptions may exist at the same time, and in such cases the underlying Subscription is not retained without extra effort. As a result, I only get the receive(subscription:) call inside ResponseReducer.
If it were only one subscription at a time, I would store a reference to it inside ResponseReducer and release it inside the receive(completion:) func. But I have multiple subscriptions, and there is no reference to the completing subscription inside receive(completion:) when it is called.
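For reference, the single-subscription variant would look roughly like this (a sketch only, not a solution to the multi-subscription case):

import Combine

final class SingleResponseReducer<Input>: Subscriber {
    typealias Failure = Error

    private var subscription: Subscription?

    func receive(subscription: Subscription) {
        self.subscription = subscription   // retain it so the upstream stays alive
        subscription.request(.unlimited)
    }

    func receive(_ input: Input) -> Subscribers.Demand {
        return .unlimited
    }

    func receive(completion: Subscribers.Completion<Error>) {
        subscription = nil                 // release it once the stream finishes
    }
}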
Experimentally, I've found a way to achieve what I need, but I'm not sure it is a reliable solution:
networkRequestPublisher
    .handleEvents()
    .subscribe(reducer)
The .handleEvents() operator does the job. It also works if I replace .handleEvents() with the .receive(on:) or .print() operators.
Can somebody explain why the solution with .handleEvents() works?
And what would be the best way to solve this task?
I am going through the tutorial:
https://marcosantadev.com/mvvmc-with-swift/
It talks about the MVVM-C design pattern. I have real trouble understanding how and why the .never() observable is used there (and, in general, why we would want to use .never() besides testing timeouts).
Could anyone give a reasonable example of .never() observable usage in Swift code (not in testing) and explain why it is necessary and what the alternatives are?
I route all the actions from the View to the ViewModel. The user taps a button? Good, the signal is delivered to the ViewModel. That is why I have multiple input observables in the ViewModel, and all of them are optional. They are optional because sometimes I write tests and don't want to provide all the fake observables just to test a single function, so I pass the other observables as nil. But working with nil is not very convenient, so I provide default behavior for all the optional observables like this:
private extension ViewModel {
    func observableNavigation() -> Observable<Navigation.Button> {
        return viewOutputFactory().observableNavigation ?? Observable.never()
    }

    func observableViewState() -> Observable<ViewState> {
        return viewOutputFactory().observableViewState ?? Observable.just(.didAppear)
    }
}
As you can see, if I pass nil for observableViewState I substitute it with just(.didAppear), because the ViewModel logic heavily depends on the state of the view. On the other hand, if I pass nil for observableNavigation I provide never(), because I assume that none of the navigation buttons will ever be triggered.
But this whole story is just my point of view. I bet you will find your own place to use this never operator.
Maybe your ViewModel has different configurations (or you have different view models under the same protocol), one of which does not need to send any updates to its observers. Instead of saying that the observable does not exist for this particular case (which you would implement as an optional), you might want to be able to define it as a .never(). This is, in my opinion, cleaner.
Disclaimer: I am not a user of RxSwift, but I am assuming never is similar to what it is in ReactiveSwift, i.e. a signal that never sends any value.
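Roughly, with made-up names (a sketch only, reusing the Navigation.Button type from the question's snippet, and with the caveat above that I'm not an RxSwift user):

import RxSwift

protocol ListViewModel {
    var navigationEvents: Observable<Navigation.Button> { get }
}

struct EditableListViewModel: ListViewModel {
    let navigationEvents: Observable<Navigation.Button>   // real taps from the view
}

struct ReadOnlyListViewModel: ListViewModel {
    // This configuration never navigates, but it still satisfies the protocol
    // without making the observable optional.
    let navigationEvents: Observable<Navigation.Button> = .never()
}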
It's an open-ended question, and there can be many answers, but I've found myself reaching for never in a number of cases. There are many ways to solve a problem, but recently I was simplifying some device-connection code that had a cascading failover, and I wanted to determine whether my last attempt to scan for devices yielded any results.
To do that, I wanted to create an observable that emitted a "no scan results" event only if it was disposed without having seen any results and, conversely, emitted nothing if it did.
I have pruned other details out of my code for the sake of brevity, but in essence:
func connect(scanDuration: TimeInterval) -> Observable<ConnectionEvent> {
    let scan = scan(for: scanDuration).share(replay: 1)

    let connection: Observable<ConnectionEvent> =
        Observable.concat(Observable.from(restorables ?? []),
                          connectedPeripherals(),
                          scan)
            .flatMapLatest { [retainedSelf = self] in retainedSelf.connect(to: $0) }

    let scanDetector = scan
        .toArray() // <-- sum all results as an array for final count
        .asObservable()
        .flatMap { results -> Observable<ConnectionEvent> in
            results.isEmpty                            // if no scan results
                ? Observable.just(.noDevicesAvailable) // emit event
                : Observable.never()                   // else, got results, no action needed
        }

    // fold source and stream detector into common observable
    return Observable.from([
        connection
            .filter { $0.isConnected }
            .flatMapLatest { [retained = self] event -> Observable<ConnectionEvent> in
                retained.didDisconnect(peripheral: event.connectedPeripheral!.peripheral)
                    .startWith(event)
            },
        scanDetector])
        .switchLatest()
}
As a counterpoint, I realized as I typed this up that there is still a simpler way to achieve my needs: add a final error-emitting observable to my concat. It fails over until it hits the final error case, so I don't need the later error-detection stream.
Observable.concat(Observable.from(restorables ?? []),
                  connectedPeripherals(),
                  scan,
                  hardFailureEmitNoScanResults())
That said, there are many cases where we may want to listen and filter downstream, where the concat technique is not available.
I'm an RxJava newcomer, and I'm having some trouble wrapping my head around how to do the following.
I'm using Retrofit to invoke a network request that returns me a Single<Foo>, which is the type I ultimately want to consume via my Subscriber instance (call it SingleFooSubscriber).
Foo has an internal property items typed as List<String>.
If Foo.items is not empty, I would like to invoke separate, concurrent network requests for each of its values. (The actual results of these requests are inconsequential for SingleFooSubscriber, as the results will be cached externally.)
SingleFooSubscriber.onComplete() should be invoked only when Foo and all Foo.items have been fetched.
fetchFooCall
    .subscribeOn(Schedulers.io())
    // Approach #1...
    // The idea here would be to "merge" the results of both streams into a single
    // reactive type, but I'm not sure how this would work given that the item emissions
    // could be far greater than one. Using zip here, I don't think it would ever
    // complete.
    .flatMap { foo ->
        if (foo.items.isNotEmpty()) {
            Observable.zip(
                Observable.fromIterable(foo.items),
                Observable.just(foo),
                { source1, source2 ->
                    // hmmmm...
                }
            ).toSingle()
        } else {
            Single.just(foo)
        }
    }
    // ...or Approach #2...
    // I think this would result in the streams for Foo and items being handled sequentially,
    // which is not really ideal because
    // 1) I think it would entail nested streams (I get the feeling I should be using flatMap
    //    instead)
    // 2) and I'm not sure SingleFooSubscriber.onComplete() would depend on the completion of
    //    the stream for items
    .doOnSuccess { data ->
        if (data.items.isNotEmpty()) {
            // hmmmm...
        }
    }
    .observeOn(AndroidSchedulers.mainThread())
    .subscribe(
        { data -> /* onSuccess() */ },
        { error -> /* onError() */ }
    )
Any thoughts on how to approach this would be greatly appreciated!
Bonus points: in trying to come up with a solution to this, I've begun to question the decision to use the Single reactive type vs. the Observable reactive type. Most (all, except this one Foo.items case?) of my streams actually revolve around consuming a single instance of something, so I leaned toward Single to represent my streams, as I thought it would add some semantic clarity to the code. Does anybody have any general guidance on when to use one vs. the other?
You need to nest flatMaps and then convert back to Single:
retrofit.getMainObject()
    .flatMap(v ->
        Flowable.fromIterable(v.items)
            .flatMap(w ->
                retrofit.getItem(w.id).doOnNext(x -> w.property = x)
            )
            .ignoreElements()
            .toSingleDefault(v)
    )
I'm building a Swift-based iOS application that uses PromiseKit to handle promises (although I'm open to switching promise libraries if it makes my problem easier to solve). There's a section of code designed to handle questions about overwriting files.
I have code that looks approximately like this:
let fileList = [list, of, files, could, be, any, length, ...]

for file in fileList {
    let overwrite = Promise<Bool> { fulfill, reject in
        if fileAlreadyExists {
            let alert = UIAlertController(title: nil,
                                          message: "Overwrite the file?",
                                          preferredStyle: .alert)
            alert.addAction(UIAlertAction(title: "Yes", style: .default, handler: { action in
                fulfill(true)
            }))
            alert.addAction(UIAlertAction(title: "No", style: .cancel, handler: { action in
                fulfill(false)
            }))
            present(alert, animated: true) // presented from the current view controller
        } else {
            fulfill(true)
        }
    }

    overwrite.then { result -> Promise<Void> in
        Promise<Void> { fulfill, reject in
            if result {
                // Overwrite the file
            } else {
                // Don't overwrite the file
            }
        }
    }
}
However, this doesn't have the desired effect; the for loop "completes" as quickly as it takes to iterate over the list, which means that UIAlertController gets confused as it tries to overlay one question on another. What I want is for the promises to chain, so that only once the user has selected "Yes" or "No" (and the subsequent "overwrite" or "don't overwrite" code has executed) does the next iteration of the for loop happen. Essentially, I want the whole sequence to be sequential.
How can I chain these promises, considering the array is of indeterminate length? I feel as if I'm missing something obvious.
Edit: one of the answers below suggests recursion. That sounds reasonable, although I'm not sure about the implications for Swift's stack (this is inside an iOS app) if the list grows long. It would be ideal if there were a construct to do this more naturally by chaining onto the promise.
One approach: create a function that takes a list of the objects remaining. Use that as the callback in the then. In pseudocode:
function promptOverwrite(objects) {
if (objects is empty)
return
let overwrite = [...] // same as your code
overwrite.then {
do positive or negative action
// Recur on the rest of the objects
promptOverwrite(objects[1:])
}
}
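In actual Swift with PromiseKit (4.x-style API, as in the question), that recursion might look roughly like this; confirmOverwrite(for:) and performOverwrite(of:skip:) are hypothetical helpers standing in for your alert promise and your file handling, and the files are assumed to be simple names/paths:

import PromiseKit

func promptOverwrite(_ files: [String]) -> Promise<Void> {
    guard let first = files.first else {
        return Promise(value: ())   // nothing left to ask about
    }
    return confirmOverwrite(for: first)   // hypothetical Promise<Bool> wrapping the alert
        .then { shouldOverwrite -> Promise<Void> in
            // Do the positive or negative action for this file first...
            return performOverwrite(of: first, skip: !shouldOverwrite)
        }
        .then { _ -> Promise<Void> in
            // ...and only then recur on the rest of the files.
            return promptOverwrite(Array(files.dropFirst()))
        }
}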
Now, we might also be interested in doing this without recursion, just to avoid blowing the call stack if we have tens of thousands of promises. (Suppose that the promises don't require user interaction, and that they all resolve on the order of a few milliseconds, so that the scenario is realistic).
Note first that the callback—in the then—happens in the context of a closure, so it can't interact with any of the outer control flow, as expected. If we don't want to use recursion, we'll likely have to take advantage of some other native features.
The reason you're using promises in the first place, presumably, is that you (wisely) don't want to block the main thread. Consider, then, spinning off a second thread whose sole purpose is to orchestrate these promises. If your library allows you to explicitly wait for a promise, just do something like
function promptOverwrite(objects) {
spawn an NSThread with target _promptOverwriteInternal(objects)
}
function _promptOverwriteInternal(objects) {
for obj in objects {
let overwrite = [...] // same as your code
overwrite.then(...) // same as your code
overwrite.awaitCompletion()
}
}
If your promises library doesn't let you do this, you could work around it by using a semaphore:
function _promptOverwriteInternal(objects) {
semaphore = createSemaphore(0)
for obj in objects {
let overwrite = [...] // same as your code
overwrite.then(...) // same as your code
overwrite.always {
semaphore.release(1)
}
semaphore.acquire(1) // wait for completion
}
}
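In concrete Swift, that semaphore-based variant could look roughly like this (a sketch; confirmAndHandle(_:) is a hypothetical helper returning the per-file Promise<Void>, and the loop must run off the main thread because it blocks):

import Dispatch
import PromiseKit

func promptOverwriteSequentially(_ files: [String]) {
    DispatchQueue.global(qos: .userInitiated).async {
        for file in files {
            let semaphore = DispatchSemaphore(value: 0)
            _ = confirmAndHandle(file)
                .always { semaphore.signal() }   // signal whether the promise succeeds or fails
            semaphore.wait()                     // block this background thread only
        }
    }
}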
BrightFutures is a nice implementation of "future" in the Swift language.
https://github.com/Thomvis/BrightFutures
I'd like to control the parallelism of a multicore CPU with it. Does anyone know a way to control the number of CPU cores/physical threads to be used?
All closures passed to BrightFutures are executed according to BF's default threading model. It seems like you want to diverge from the default model. This is possible by passing a custom execution context.
An execution context that limits the number of parallel tasks it executes could be created with the following function:
func executionContextWithControlledParallelism(p: Int) -> ExecutionContext {
    let s = Semaphore(value: p)
    let q = Queue.global.context
    return { task in
        s.wait()
        q {
            task()
            s.signal()
        }
    }
}
I tested this briefly using the following code:
let context = executionContextWithControlledParallelism(5)
for _ in 0..<100 {
    future(context: context) { () -> Int in
        return fibonacci(Int(arc4random_uniform(15)))
    }
}
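If downstream transformations should also be throttled, the same context can be passed to the context-taking overloads of map, flatMap, etc., roughly like this (a sketch; the exact parameter label may differ between BrightFutures versions):

let doubled = future(context: context) { () -> Int in
    fibonacci(Int(arc4random_uniform(15)))
}
.map(context) { value in
    value * 2   // also runs under the same limit of 5 parallel tasks
}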
You'll have to pass context to every map, flatMap, etc. that you want to limit the parallelism of. I'll admit that seems cumbersome. A better way (that is currently not supported by BrightFutures) would be to set the default threading model, like this:
let context = executionContextWithControlledParallelism(5)
// this is not supported right now:
BrightFutures.setDefaultThreadingModel(model: {
    return context
})
If you like this, please consider filing an issue to request this or (even better) create a pull request.