I have one observable (we will call it trigger) that can emit many times in a short period of time. When it emits, I do a network request and then I store the result with the scan operator.
My problem is that I would like to wait until the request has finished before doing it again. (But as it is now, if trigger emits 2 events it doesn't matter whether fetchData has finished or not, it will do it again.)
Bonus: I would also like to take only the first event every X seconds. (Debounce is not the solution because the trigger can be emitting all the time and I want to get 1 event every X seconds; it isn't throttle either, because if the trigger emits 2 times really fast I will get the first and then the second delayed by X seconds.)
The code:
trigger.flatMap { [unowned self] _ in
self.fetchData()
}.scan([], accumulator: { lastValue, newValue in
return lastValue + newValue
})
and fetchData:
func fetchData() -> Observable<[ReusableCellVMContainer]>
trigger:
let trigger = Observable.of(input.viewIsLoaded, handle(input.isNearBottomEdge)).merge()
I'm sorry, I misunderstood what you were trying to accomplish in my answer below.
The operator that will achieve what you want is flatMapFirst. It will ignore events from the trigger until fetchData() has completed.
trigger
.flatMapFirst { [unowned self] _ in
self.fetchData()
}
.scan([], accumulator: { lastValue, newValue in
return lastValue + newValue
})
I'm leaving my previous answer below in case it helps (if anything, it has the "bonus" answer.)
The problem you are having is called "back pressure", which is when an observable produces values faster than the observer can handle them.
In this particular case, I recommend that you don't restrict the data fetch requests and instead map each request to a key and then emit the array in order:
trigger
.enumerated()
.flatMap { [unowned self] count, _ in
Observable.combineLatest(Observable.just(count), self.fetchData())
}
.scan(into: [Int: Value](), accumulator: { lastValue, newValue in
lastValue[newValue.0] = newValue.1
})
.map { $0.sorted(by: { $0.key < $1.key }).map { $0.value }}
To make the above work, you need this:
extension ObservableType {
func enumerated() -> Observable<(Int, E)> {
let shared = share()
let counter = shared.scan(0, accumulator: { prev, _ in return prev + 1 })
return Observable.zip(counter, shared)
}
}
This way, your network requests start as soon as possible, but you aren't losing the order in which they were made.
For your "bonus", the buffer operator will do exactly what you want. Something like:
trigger.buffer(timeSpan: seconds, count: Int.max, scheduler: MainScheduler.instance)
.map { $0.first }
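Note that $0.first is optional here, because a window can close without the trigger having emitted anything. A minimal sketch of how the bonus could be combined with the flatMapFirst answer (assuming RxSwift 5, where buffer takes an RxTimeInterval and compactMap is available, and a hypothetical 5-second window):
trigger
    .buffer(timeSpan: .seconds(5), count: Int.max, scheduler: MainScheduler.instance)
    .compactMap { $0.first }          // drop windows in which the trigger never fired
    .flatMapFirst { [unowned self] _ in
        self.fetchData()              // still at most one request in flight
    }
    .scan([], accumulator: { lastValue, newValue in
        return lastValue + newValue
    })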
Related
So I want to collect values until I see the last page, but if the last page never comes I want to send what we have, given a time limit.
I have a way of doing this but it seems rather wasteful. I'm going to be using this to make collections that may have hundreds of thousands of values, so a more space-efficient method would be preferred.
You can copy and paste the following into a playground:
import UIKit
import Foundation
import Combine
var subj = PassthroughSubject<String, Never>()
let queue = DispatchQueue(label: "Test")
let collectForTime = subj
.collect(.byTime(queue, .seconds(10)))
let collectUntilLast = subj
.scan([String]()) { $0 + [$1] }
.first { $0.last == "LastPage" }
// whichever happens first
let cancel = collectForTime.merge(with: collectUntilLast)
.first()
.sink {
print("complete1: \($0)")
} receiveValue: {
print("received1: \($0)")
}
print("start")
let strings = [
"!##$",
"ZXCV",
"LastPage", // comment this line to test to see what happens if no last page is sent
"ASDF",
"JKL:"
]
// if the last page is present then the items 0..<3 will be sent
// if there's no last page then send what we have
// the main thing is that the system is not just sitting there waiting for a last page that never comes.
for i in (0..<strings.count) {
DispatchQueue.main.asyncAfter(deadline: .now() + .seconds(i)) {
let s = strings[i]
print("sending \(s)")
subj.send(s)
}
}
UPDATE
After playing a bit more with the playground I think all you need is:
subj
.prefix { $0 != "LastPage" }
.append("LastPage")
.collect(.byTime(DispatchQueue.main, .seconds(10)))
I wouldn't use collect, because under the hood it is basically doing the same thing that scan is doing; you only need another condition in the first closure, e.g. .first { $0.last == "LastPage" || timedOut }, to emit the collected items in case of a timeout.
It's unfortunate that collect doesn't offer the API you need but we can create another version of it.
The idea is to combineLatest the scan output with a stream that emits a Bool after a deadline (in reality we also need to emit false initially so that combineLatest starts emitting) and || this additional value into the first condition.
Here is the code:
extension Publisher {
func collect<S: Scheduler>(
timeoutAfter interval: S.SchedulerTimeType.Stride,
scheduler: S,
orWhere predicate: @escaping ([Output]) -> Bool
) -> AnyPublisher<[Output], Failure> {
scan([Output]()) { $0 + [$1] }
.combineLatest(
Just(true)
.delay(for: interval, scheduler: scheduler)
.prepend(false)
.setFailureType(to: Failure.self)
)
.first { predicate($0) || $1 }
.map(\.0)
.eraseToAnyPublisher()
}
}
let subj = PassthroughSubject<String, Never>()
let cancel = subj
.collect(
timeoutAfter: .seconds(10),
scheduler: DispatchQueue.main,
orWhere: { $0.last == "LastPage" }
)
.print()
.sink { _ in }
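For completeness, a quick way to exercise this in a playground is to feed the subject a few values, just like in the question (the array below is only for illustration):
// If "LastPage" arrives within 10 seconds, the collected array is emitted right away;
// otherwise the timeout fires and whatever has been collected so far is emitted.
["!##$", "ZXCV", "LastPage"].forEach { subj.send($0) }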
I made a small change to your technique
import Foundation
import Combine
var subj = PassthroughSubject<String, Never>()
let lastOrTimeout = subj
.timeout(.seconds(10), scheduler: RunLoop.main )
.print("watchdog")
.first { $0 == "LastPage" }
.append(Just("Done"))
let cancel = subj
.prefix(untilOutputFrom: lastOrTimeout)
.print("main_publisher")
.collect()
.sink {
print("complete1: \($0)")
} receiveValue: {
print("received1: \($0)")
}
print("start")
let strings = [
"!##$",
"ZXCV",
"LastPage", // comment this line to test to see what happens if no last page is sent
"ASDF",
"JKL:"
]
// if the last page is present then the items 0..<3 will be sent
// if there's no last page then send what we have
// the main thing is that the system is not just sitting there waiting for a last page that never comes.
strings.enumerated().forEach { index, string in
DispatchQueue.main.asyncAfter(deadline: .now() + .seconds(index)) {
print("sending \(string)")
subj.send(string)
}
}
lastOrTimeout will emit a value when it sees LastPage, or when it finishes because of the timeout (and then emits Done).
The main pipeline keeps collecting values until the watchdog publisher emits, then delivers everything it has collected.
In this code I am expecting the Empty() publisher to send completion to the .sink subscriber, but no completion is sent.
func testEmpty () {
let x = XCTestExpectation()
let subject = PassthroughSubject<Int, Never>()
emptyOrSubjectPublisher(subject).sink(receiveCompletion: { completion in
dump(completion)
}, receiveValue: { value in
dump(value)
}).store(in: &cancellables)
subject.send(0)
wait(for: [x], timeout: 10.0)
}
func emptyOrSubjectPublisher (_ subject: PassthroughSubject<Int, Never>) -> AnyPublisher<Int, Never> {
subject
.flatMap { (i: Int) -> AnyPublisher<Int, Never> in
if i == 1 {
return subject.eraseToAnyPublisher()
} else {
return Empty().eraseToAnyPublisher()
}
}
.eraseToAnyPublisher()
}
Why does the emptyOrSubjectPublisher not receive the completion?
The Empty completes, but the overall pipeline does not, because the initial Subject has not completed. The inner pipeline in which the Empty is produced (the flatMap) has "swallowed" the completion. This is the expected behavior.
You can see this more easily by simply producing a Just in the flatMap, e.g. Just(100):
subject
.flatMap {_ in Just(100) }
.sink(receiveCompletion: { completion in
print(completion)
}, receiveValue: { value in
print(value)
}).store(in: &cancellables)
subject.send(1)
You know and I know that a Just emits once and completes. But although the value of the Just arrives down the pipeline, there is no completion.
And you can readily see why it works this way. It would be very wrong if we had a potential sequence of values from our publisher but some intermediate publisher, produced in a flatMap, had the power to complete the whole pipeline and end it prematurely.
(And see my https://www.apeth.com/UnderstandingCombine/operators/operatorsTransformersBlockers/operatorsflatmap.html where I make the same point.)
If the goal is to send a completion down the pipeline, it's the subject that needs to complete. For example, you could say
func emptyOrSubjectPublisher (_ subject: PassthroughSubject<Int, Never>) -> AnyPublisher<Int, Never> {
subject
.flatMap { (i: Int) -> AnyPublisher<Int, Never> in
if i == 1 {
return subject.eraseToAnyPublisher()
} else {
subject.send(completion: .finished) // <--
return Empty().eraseToAnyPublisher()
}
}
.eraseToAnyPublisher()
}
[Note, however, that your whole emptyOrSubjectPublisher is peculiar; it is unclear what purpose it is intended to serve. Returning subject when i is 1 is kind of pointless too, because subject has already published the 1 by the time we get here, and isn't going to publish anything more right now. Thus, if you send 1 at the start, you won't receive 1 as a value, because your flatMap has swallowed it and has produced a publisher that isn't going to publish.]
I have a cold observable that may get called multiple times. This observable does an expensive task (a network request) and then completes. I would like this observable to only ever make a single network call and if I need to call it again in the future I would like to get the last emitted value.
If an observable doesn't complete (i.e. it just sends a next value without a completed event) I can use the .share(replay: 1, scope: .whileConnected) operator to always get the last value. Unfortunately, this doesn't work with observables that complete at the end of the request. Below is an example:
let disposeBag = DisposeBag()
let refreshSubject = PublishSubject<Void>()
override func viewDidLoad() {
super.viewDidLoad()
let observable = Observable<String>.create { observer in
let seconds = 2.0
DispatchQueue.main.asyncAfter(deadline: .now() + seconds) {
observer.onNext("Hello, World")
observer.onCompleted() // <-- Works when commented out
}
return Disposables.create()
}
.share(replay: 1, scope: .whileConnected)
refreshSubject
.flatMap { _ in observable }
.subscribe(onNext: { response in
print("response: ", response)
})
.disposed(by: disposeBag)
}
#IBAction func refreshButtonHandler(_ sender: Any) {
refreshSubject.onNext(())
}
Every time the refreshSubject is triggered it takes 2 seconds for the Hello, World to be printed. If I remove the observer.onCompleted() line however, it only takes 2 seconds the first time and subsequently returns a cached response.
Obviously this is just an example, in the real world I would not have any control if the observable completes or not but I would like to always just replay the last value regardless.
So you don't want the cold observable to be re-subscribed to even when the refresh is triggered. In that case, this is the solution:
Observable.combineLatest(refreshSubject.startWith(()), yourColdObservable)
.map { $0.1 }
.subscribe(onNext: { val in
print("*** val: ", val)
})
Using flatMap means that the observable is re-subscribed to every time an event enters the flatMap. By using combineLatest instead, the cold observable will only be subscribed to once. The combineLatest operator will store the result of the observable internally and emit it again every time the refresh subject emits.
(No share is needed for this method.)
let yourColdObservable = Observable<String>.create { observer in
let seconds = 2.0
DispatchQueue.main.asyncAfter(deadline: .now() + seconds) {
observer.onNext("Hello, World")
observer.onCompleted()
}
return Disposables.create()
}
let cacheObservable = refreshButton.rx.tap
.startWith(())
.flatMapLatest { _ in yourColdObservable }
.share(replay: 1)
makeRequestWithCacheButton.rx.tap
.flatMapLatest { _ in cacheObservable }
.subscribe(onNext: { response in
print("response: ", response)
})
.disposed(by: disposeBag)
StartWith
emit a specified sequence of items before beginning to emit the items
from the source Observable
This acts like a fake tap on the refreshButton when the sequence starts.
FlatMapLatest
The FlatMap operator transforms an Observable by applying a function
that you specify to each item emitted by the source Observable, where
that function returns an Observable that itself emits items. FlatMap
then merges the emissions of these resulting Observables, emitting
these merged results as its own sequence.
FlatMapLatest is a special type of FlatMap: it cancels the previous Observable whenever refreshButton.rx.tap emits a new onNext event.
I am writing async unit tests for RxSwift. This is my code; I can't understand why the subscription only receives one value.
class TestViewModel: NSObject {
let result : Observable<Int>
init(input:Observable<Int>) {
result = input.flatMapLatest({ (value) -> Observable<Int> in
return Observable.create({ (observer) -> Disposable in
DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 1, execute: {
print("next"+" \(value)")
observer.onNext(value)
})
return Disposables.create()
})
})
}
}
func testCount() {
let expectation = XCTestExpectation(description: "async")
let input = scheduler.createHotObservable([.next(100, 1),.next(200, 10)])
let viewModel = TestViewModel.init(input: input.asObservable())
viewModel.result.subscribe(onNext: { (value) in
print("subscribe"+" \(value)")
}).disposed(by: disposeBag)
scheduler.start()
wait(for: [expectation], timeout: timeout)
}
Print output:
next 1
next 10
subscribe 10
I think the print output should be:
next 1
next 10
subscribe 1
subscribe 10
Can someone give me a suggestion? Thanks.
That's how the flatMapLatest operator works. It basically says "map events into observables, but only use the most recent observable's results". So, you map your events into two observables:
1: --1sec-> 1
10: --1sec-> 10
The most recent observable at that moment is the one for 10.
Try to use flatMap instead of flatMapLatest.
You should also avoid Observable.create if possible. In your particular case (to delay a value) you could use Observable.timer or Observable.just(...).delay(...).
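A minimal sketch of the view model rewritten that way (assuming RxSwift 5 and keeping the 1-second delay from the question; in a real test you would probably inject the scheduler rather than hard-code MainScheduler.instance):
class TestViewModel: NSObject {
    let result: Observable<Int>
    init(input: Observable<Int>) {
        // flatMap keeps every inner observable alive, so both 1 and 10 are delivered.
        result = input.flatMap { value -> Observable<Int> in
            Observable.just(value)
                .delay(.seconds(1), scheduler: MainScheduler.instance)
        }
    }
}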
I have a network request that can succeed or fail.
I have encapsulated it in an observable.
I have 2 rules for the request:
1) There can never be more than 1 request at the same time
-> there is a share operator I can use for this
2) When the request has succeeded, I don't want to repeat the same request again and just want to return the latest value
-> I can use the shareReplay(1) operator for this
The problem arises when the request fails: shareReplay(1) will just replay the latest error and not restart the request.
The request should start again at the next subscription.
Does anyone have an idea how I can turn this into an Observable chain?
// scenario 1
let obs: Observable<Int> = request().shareReplay(1)
// outputs a value
obs.subscribe()
// does not start a new request but outputs the same value as before
obs.subscribe()
// scenario 2 - in case of an error
let obs: Observable<Int> = request().shareReplay(1)
// outputs a error
obs.subscribe()
// does not start a new request but outputs the same value as before, but in this case i want it to start a new request
obs.subscribe()
This seems to do exactly what I want, but it relies on keeping state outside the observable. Does anyone know how I can achieve this in a more Rx way?
enum Err: Swift.Error {
case x
}
enum Result<T> {
case value(val: T)
case error(err: Swift.Error)
}
func sample() {
var result: Result<Int>? = nil
var i = 0
let intSequence: Observable<Result<Int>> = Observable<Int>.create { observer in
if let result = result {
if case .value(let val) = result {
return Observable<Int>.just(val).subscribe(observer)
}
}
print("do work")
delay(1) {
if i == 0 {
observer.onError(Err.x)
} else {
observer.onNext(1)
observer.onCompleted()
}
i += 1
}
return Disposables.create {}
}
.map { value -> Result<Int> in Result.value(val: value) }
.catchError { error -> Observable<Result<Int>> in
return .just(.error(err: error))
}
.do(onNext: { result = $0 })
.share()
_ = intSequence
.debug()
.subscribe()
delay(2) {
_ = intSequence
.debug()
.subscribe()
_ = intSequence
.debug()
.subscribe()
}
delay(4) {
_ = intSequence
.debug()
.subscribe()
}
}
sample()
It only generates work when we don't have anything cached, but then again we need to use side effects to achieve the desired output.
As mentioned earlier, RxSwift errors need to be treated as fatal errors. They are errors your stream usually cannot recover from, and usually errors that would not even be user facing.
For that reason, a stream that emits an .error or .completed event will immediately dispose, and you won't receive any more events from it.
There are two approaches to tackling this:
Using a Result type like you just did
Using .materialize() (and .dematerialize() if needed). The first operator will turn your Observable<Element> into an Observable<Event<Element>>, meaning that instead of an error being emitted and the sequence terminated, you will get an element that tells you an error event occurred, but without any termination.
You can read more about error handling in RxSwift in Adam Borek's great blog post about this: http://adamborek.com/how-to-handle-errors-in-rxswift/
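For illustration, a minimal sketch of what .materialize() does in isolation (assuming a recent RxSwift where materialize() and compactMap are in the core library, and request() is the error-prone observable from the question):
let shared = request()
    .materialize()              // errors arrive as .error elements instead of terminating the stream
    .share(replay: 1)

// Split the events without terminating the shared stream:
let values = shared.compactMap { $0.element }   // Observable<Int>
let errors = shared.compactMap { $0.error }     // Observable<Error>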
If an Observable sequence emits an error, it can never emit another event. However, it is a fairly common practice to wrap an error-prone Observable inside of another Observable using flatMap and catch any errors before they are allowed to propagate through to the outer Observable. For example:
safeObservable
.flatMap {
Requestor
.makeUnsafeObservable()
.catchErrorJustReturn(0)
}
.shareReplay(1)
.subscribe()