RxSwift, Share + retry mechanism - swift

I have a network request that can succeed or fail, and I have encapsulated it in an Observable.
I have 2 rules for the request:
1) There can never be more than 1 request at the same time
-> there is a share operator I can use for this
2) When the request has succeeded, I don't want to repeat the same request again, just return the latest value
-> I can use the shareReplay(1) operator for this
The problem arises when the request fails: shareReplay(1) will just replay the latest error and not restart the request.
The request should start again at the next subscription.
Does anyone have an idea how I can turn this into an Observable chain?
// scenario 1
let obs: Observable<Int> = request().shareReplay(1)
// outputs a value
obs.subscribe()
// does not start a new request but outputs the same value as before
obs.subscribe()
// scenario 2 - in case of an error
let obs: Observable<Int> = request().shareReplay(1)
// outputs an error
obs.subscribe()
// does not start a new request but replays the same error as before; in this case I want it to start a new request
obs.subscribe()
This seems to be doing exactly what I want, but it relies on keeping state outside the observable. Does anyone know how I can achieve this in a more Rx way?
enum Err: Swift.Error {
    case x
}

enum Result<T> {
    case value(val: T)
    case error(err: Swift.Error)
}

func sample() {
    var result: Result<Int>? = nil
    var i = 0
    let intSequence: Observable<Result<Int>> = Observable<Int>.create { observer in
        if let result = result {
            if case .value(let val) = result {
                return Observable<Int>.just(val).subscribe(observer)
            }
        }
        print("do work")
        delay(1) {
            if i == 0 {
                observer.onError(Err.x)
            } else {
                observer.onNext(1)
                observer.onCompleted()
            }
            i += 1
        }
        return Disposables.create {}
    }
    .map { value -> Result<Int> in Result.value(val: value) }
    .catchError { error -> Observable<Result<Int>> in
        return .just(.error(err: error))
    }
    .do(onNext: { result = $0 })
    .share()

    _ = intSequence
        .debug()
        .subscribe()

    delay(2) {
        _ = intSequence
            .debug()
            .subscribe()
        _ = intSequence
            .debug()
            .subscribe()
    }

    delay(4) {
        _ = intSequence
            .debug()
            .subscribe()
    }
}
sample()
It only generates work when we don't have anything cached, but then again we need to use side effects to achieve the desired output.

As mentioned earlier, RxSwift errors need to be treated as fatal errors. They are errors your stream usually cannot recover from, and usually errors that would not even be user facing.
For that reason, a stream that emits an .error or .completed event will immediately be disposed, and you won't receive any more events from it.
There are two approaches to tackling this:
Using a Result type like you just did
Using .materialize() (and .dematerialize() if needed). The first operator will turn your Observable<Element> into an Observable<Event<Element>>, meaning that instead of an error being emitted and the sequence terminated, you will get an element that tells you an error event occurred, without any termination (see the sketch below).
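A minimal sketch of the materialize() approach, assuming a request() that returns Observable<Int> as in the question (an illustration, not the answer's exact code):

// A sketch: a failure is delivered as an element rather than as a terminating
// error, so subscribers can react to it without the shared sequence erroring out.
let events: Observable<Event<Int>> = request()
    .materialize()
    .shareReplay(1)

_ = events.subscribe(onNext: { event in
    switch event {
    case .next(let value):
        print("value: \(value)")
    case .error(let error):
        print("request failed: \(error)")
    case .completed:
        break
    }
})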
You can read more about error handling in RxSwift in Adam Borek's great blog post about this: http://adamborek.com/how-to-handle-errors-in-rxswift/

If an Observable sequence emits an error, it can never emit another event. However, it is a fairly common practice to wrap an error-prone Observable inside of another Observable using flatMap and catch any errors before they are allowed to propagate through to the outer Observable. For example:
safeObservable
    .flatMap {
        Requestor
            .makeUnsafeObservable()
            .catchErrorJustReturn(0)
    }
    .shareReplay(1)
    .subscribe()
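One trade-off worth noting (a general observation, not part of the answer above): catchErrorJustReturn(0) erases the failure entirely, so subscribers cannot tell a failed request apart from a genuine 0. A common variant is to map to an optional (or a Result) so the failure stays visible while the outer stream still survives:

safeObservable
    .flatMap {
        Requestor
            .makeUnsafeObservable()
            .map { Optional($0) }          // success -> wrapped value
            .catchErrorJustReturn(nil)     // failure -> nil, outer stream stays alive
    }
    .shareReplay(1)
    .subscribe()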

Related

RxSwift, combining multiple observables, but only retrying one

I have a logic problem. I want to combine two sources of data and, in case of failure, run retry on only one observable. The flow goes like this: I try (and fail) to get local data from the StorageManager class, then I try to get the data from an API. If that API call fails, I would like to retry only the API call a certain number of times. Is there a nice way of doing this? By just running retry like in the code below, the local data observable gets re-triggered as well.
class func loginUser(user: User, replaySubject: ReplaySubject<User>) {
    let savedUserObs = StorageManager.readCachedModelData(requestType: .LOGIN, modelType: User.self)
    let apiUserObs = NetworkHelper.makeRequest(requestType: .LOGIN).map { response -> User in
        guard let user = response.user?.first else { throw TempErrors.wasNotAbleToExtractUser }
        return user
    }

    savedUserObs.concat(apiUserObs)
        .retry { errorObs in
            errorObs.scan(0) { attempt, error in
                let max = 5
                if attempt == max { throw TempErrors.tooManyRetries }
                return attempt + 1
            }
        }
        .subscribe { user in
            replaySubject.onNext(user)
            StorageManager.saveData(requestType: .LOGIN, data: user)
        } onError: { error in
            print("onerror error: \(error)")
        } onCompleted: {
            print("completed")
        } onDisposed: {
            print("disposed")
        }.disposed(by: disposeBag)
}
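One way this might be scoped (a sketch, not a verified fix, and assuming savedUserObs completes or has its error caught rather than terminating the concat): apply the retry to apiUserObs alone, before concatenating, so a failed API call never re-subscribes the cached-data observable.

// Hypothetical sketch: only the API observable is retried.
let retryingApiUserObs = apiUserObs
    .retry { errorObs in
        errorObs.scan(0) { attempt, _ -> Int in
            let maxAttempts = 5
            if attempt == maxAttempts { throw TempErrors.tooManyRetries }
            return attempt + 1
        }
    }

savedUserObs.concat(retryingApiUserObs)
    .subscribe(onNext: { user in
        replaySubject.onNext(user)
        StorageManager.saveData(requestType: .LOGIN, data: user)
    })
    .disposed(by: disposeBag)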

How to handle errors in async calls in Swift Combine?

I have two async calls to fetch data from the server, but I want to handle them as a single response, and I also want to handle the errors for each response.
For example, here I have two methods, m1() and m2(), each of which can fail with a different error type.
I need to wait for both responses and show an error message based on the error type. If there is no error, continue with the flow.
Which operator should I use? I tried Publishers.Zip and Publishers.Map but was not able to handle the errors.
enum Error1: Error {
    case e1
}

enum Error2: Error {
    case e2
}

func m1() -> Future<Bool, Error1> {
    return Future { promise in
        DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
            promise(.failure(.e1))
        }
    }
}

func m2() -> Future<String, Error2> {
    return Future { promise in
        DispatchQueue.main.asyncAfter(deadline: .now() + 3) {
            promise(.success("1"))
        }
    }
}
Help would be greatly appreciated! Thank you.
I think the problem you are running into is related to the fact that a Publisher can only emit one error type. So any time you try to combine m1 and m2, each of which has a different error type, you run into type conflict problems.
There are a couple of ways you might choose to solve this problem. I'm going to suggest one. In my solution, each of your requests (your Futures) will use an error type of Never, but the success or failure of an individual request will be carried in a Result. Here is the code in a Playground:
import Foundation
import Combine

enum Error1: Error {
    case e1
}

enum Error2: Error {
    case e2
}

func m1(shouldFail: Bool) -> Future<Result<Bool, Error1>, Never> {
    return Future { promise in
        DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
            if shouldFail {
                promise(.success(.failure(.e1)))
            } else {
                promise(.success(.success(true)))
            }
        }
    }
}

func m2(shouldFail: Bool) -> Future<Result<String, Error2>, Never> {
    return Future { promise in
        DispatchQueue.main.asyncAfter(deadline: .now() + 3) {
            if shouldFail {
                promise(.success(.failure(.e2)))
            } else {
                promise(.success(.success("1")))
            }
        }
    }
}

let subscribe = m1(shouldFail: false)
    .zip(m2(shouldFail: true))
    .sink { (m1: Result<Bool, Error1>, m2: Result<String, Error2>) in
        switch m1 {
        case .success(let boolResult):
            print("M1 succeeded with result \(boolResult)")
        case .failure(_):
            print("M1 failed")
        }
        switch m2 {
        case .success(let stringResult):
            print("M2 succeeded with result \(stringResult)")
        case .failure(_):
            print("M2 failed")
        }
    }
(Note that I added a shouldFail parameter to your m1 and m2 requests so that you can play with the different cases where one or the other request fails.)
Note that each of the Futures returns a type of Future<SomeKindOfResult, Never>. That means the futures entering the Combine pipeline have the same Error type (it happens to be Never). This leads to the very odd-looking construct of:
promise(.success(.failure(.e1)))
which is a bit bizarre, but it says that the Future agrees that the request completed, and the Result carries the success or failure of that request.
The pipeline uses zip to wait until both requests complete. The value coming out of the zip is a tuple (Result<Bool, Error1>, Result<String, Error2>). This tuple accurately represents the success or failure of each request, as well as carrying the appropriate value in each case.

ReactiveSwift pipeline flatMap body transform not executed

I have the following pipeline setup, and for some reason I can't understand, the second flatMap is skipped:
func letsDoThis() -> SignalProducer<(), MyError> {
    let logError: (MyError) -> Void = { error in
        print("Error: \(error); \((error as NSError).userInfo)")
    }

    return upload(uploads) // returns: SignalProducer<Signal<(), MyError>.Event, Never>
        .collect() // SignalProducer<[Signal<(), MyError>.Event], Never>
        .flatMap(.merge, { [uploadContext] values -> SignalProducer<[Signal<(), MyError>.Event], MyError> in
            return context.saveSignal() // SignalProducer<(), NSError>
                .map { values } // SignalProducer<[Signal<(), MyError>.Event], NSError>
                .mapError { MyError.saveFailed(error: $0) } // SignalProducer<[Signal<(), MyError>.Event], MyError>
        })
        .flatMap(.merge, { values -> SignalProducer<(), MyError> in
            if let error = values.first(where: { $0.error != nil })?.error {
                return SignalProducer(error: error)
            } else {
                return SignalProducer(value: ())
            }
        })
        .on(failed: logError)
}
See the transformations/signatures starting with the upload method.
When I say skipped, I mean that even if I add breakpoints or log statements, they are not executed.
Any idea how to debug or fix this?
Thanks.
EDIT: it most likely has something to do with the map within the first flatMap, but I'm not sure how to fix it yet.
See this link.
EDIT 2: versions
- ReactiveCocoa (10.1.0):
- ReactiveObjC (3.1.1)
- ReactiveObjCBridge (6.0.0):
- ReactiveSwift (6.1.0)
EDIT 3: I found the problem, which was due to my saveSignal method sending sendCompleted.
extension NSManagedObjectContext {
    func saveSignal() -> SignalProducer<(), NSError> {
        return SignalProducer { observer, disposable in
            self.perform {
                do {
                    try self.save()
                    observer.sendCompleted()
                }
                catch {
                    observer.send(error: error as NSError)
                }
            }
        }
    }
}
Sending completed makes sense, so I can't change that. Is there any way to change the flatMap to still do what I intended?
I think the reason your second flatMap is never executed is that saveSignal never sends a value; it just finishes with a completed event or an error event. That means map will never be called, and no values will ever be passed to your second flatMap. You can fix it by doing something like this:
context.saveSignal()
    .mapError { MyError.saveFailed(error: $0) }
    .then(SignalProducer(value: values))
Instead of using map (which does nothing because there are no values to map), you just create a new producer that sends the values after saveSignal completes successfully.
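Applied to the pipeline from the question (same types and names as above, so treat this as a sketch rather than verified code), the first flatMap would then look roughly like this:

.flatMap(.merge, { values -> SignalProducer<[Signal<(), MyError>.Event], MyError> in
    // re-emit the collected values only after the save completes successfully
    return context.saveSignal()
        .mapError { MyError.saveFailed(error: $0) }
        .then(SignalProducer(value: values))
})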

How to handle Errors on never ending chain with materialize?

Imagine the following chain where a user wants to save a list of some sort:
var saveChain = userTappedSaveListSubject
    .doOnNext { list -> Void in // create pdf version
        let pdfFactory = ArticleListPDFFactory()
        list.pdf = try pdfFactory.buildPDF(list)
        try database.save(list)
    }
    .flatMap { list in
        AuthorizedNetworking.shared.request(.createList(try ListRequestModel(list)))
            .filter(statusCode: 201)
            .map { _ in list }
    }
    .doOnNext { list in
        list.uploaded = true
        try database.save(list)
        try Printer().print(list)
    }
    .materialize()
    .share()
Errors can occur at every operator in the chain; an error would terminate the stream and the user would be unable to retry saving and printing the list (the whole chain gets disposed).
In the end the user should see either a "success" or "failure" screen by binding the observable to a textField:
Observable.of(
    saveChain.elements().map { _ in
        ("List saved!", subtitle: "Saving successfull")
    },
    saveChain.errors().map { error in
        ("Error!", subtitle: error.localizedDescription)
    })
    .merge()
How should the error be handled?
Here's the obvious fix:
let saveChain = userTappedSaveListSubject
    .flatMap { list in
        Observable.just(list)
            .do(onNext: { list -> Void in // create pdf version
                let pdfFactory = ArticleListPDFFactory()
                list.pdf = try pdfFactory.buildPDF(list)
                try database.save(list)
            })
            .flatMap { list in
                AuthorizedNetworking.shared.request(.createList(try ListRequestModel(list)))
                    .filter(statusCode: 201)
                    .map { _ in list }
            }
            .do(onNext: { list in
                list.uploaded = true
                try database.save(list)
                try Printer().print(list)
            })
            .materialize()
    }
    .share()
However, there are a number of problems with this code because of the mixed paradigms.
You are passing around a mutable class inside your Observables. This is problematic because Rx is a functional paradigm, so the system expects the contained type to be either a struct/enum or an immutable class.
Your reliance on side effects to load up said mutable class object again is quite odd and against the paradigm.
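To make that point concrete, here is a minimal sketch of the value-type idea. The ArticleList struct, its property types, and the helper name are all hypothetical, since the real model isn't shown in the question:

// Hypothetical sketch: model the list as a value type and return a new copy
// instead of mutating shared state inside the stream.
struct ArticleList {
    var pdf: Data? = nil        // assuming the PDF is held as Data
    var uploaded: Bool = false
}

// A pure step: takes a list and returns an updated copy (or throws).
func attachingPDF(to list: ArticleList, using factory: ArticleListPDFFactory) throws -> ArticleList {
    var copy = list
    copy.pdf = try factory.buildPDF(list)   // assuming buildPDF returns Data
    return copy
}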

RxSwift Skip Events Until Own Sequence has finished

I have one observable (we will call it trigger) that can emit many times in a short period of time. When it emits, I do a network request and store the result with the scan operator.
My problem is that I would like to wait until the request has finished before doing it again. (As it is now, if trigger emits twice it doesn't matter whether fetchData has finished or not; it will fire again.)
Bonus: I would also like to take only the first event every X seconds. (Debounce is not the solution because the trigger can keep emitting and I still want one event every X seconds; it isn't throttle either, because if the observable emits twice in quick succession I would get the first event and then the second delayed by X seconds.)
The code:
trigger.flatMap { [unowned self] _ in
    self.fetchData()
}.scan([], accumulator: { lastValue, newValue in
    return lastValue + newValue
})
and fetchData:
func fetchData() -> Observable<[ReusableCellVMContainer]>
trigger:
let trigger = Observable.of(input.viewIsLoaded, handle(input.isNearBottomEdge)).merge()
I'm sorry, I misunderstood what you were trying to accomplish in my answer below.
The operator that will achieve what you want is flatMapFirst. This will ignore events from the trigger until fetchData() has completed.
trigger
    .flatMapFirst { [unowned self] _ in
        self.fetchData()
    }
    .scan([], accumulator: { lastValue, newValue in
        return lastValue + newValue
    })
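As a side note on the design choice (a standard RxSwift distinction, not something stated in the answer): flatMapFirst drops trigger emissions that arrive while a fetchData() is still in flight, whereas flatMapLatest would instead cancel the in-flight request and start a new one. flatMapFirst fits here because the goal is to ignore triggers until the current request finishes.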
I'm leaving my previous answer below in case it helps (if anything, it has the "bonus" answer.)
The problem you are having is called "back pressure", which is when the observable produces values faster than the observer can handle them.
In this particular case, I recommend that you don't restrict the data fetch requests and instead map each request to a key and then emit the array in order:
trigger
    .enumerated()
    .flatMap { [unowned self] count, _ in
        Observable.combineLatest(Observable.just(count), self.fetchData())
    }
    .scan(into: [Int: Value](), accumulator: { lastValue, newValue in
        lastValue[newValue.0] = newValue.1
    })
    .map { $0.sorted(by: { $0.key < $1.key }).map { $0.value } }
To make the above work, you need this:
extension ObservableType {
    func enumerated() -> Observable<(Int, E)> {
        let shared = share()
        let counter = shared.scan(0, accumulator: { prev, _ in return prev + 1 })
        return Observable.zip(counter, shared)
    }
}
This way, your network requests start as soon as possible, but you aren't losing the order in which they were made.
For your "bonus", the buffer operator will do exactly what you want. Something like:
trigger
    .buffer(timeSpan: seconds, count: Int.max, scheduler: MainScheduler.instance)
    .map { $0.first }
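One small caveat (my assumption about the intended use, not part of the answer): $0.first is nil for any window in which the trigger emitted nothing, so in practice you would probably drop those empty windows, for example with compactMap (available in RxSwift 5 and later):

trigger
    .buffer(timeSpan: seconds, count: Int.max, scheduler: MainScheduler.instance)
    .compactMap { $0.first }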