How does the shareReplay function work? - swift

I have read the RxSwift/ShareReplayScope.swift file, but it is a little difficult to understand.
public func share(replay: Int = 0, scope: SubjectLifetimeScope = .whileConnected)
    -> Observable<E> {
    switch scope {
    case .forever:
        switch replay {
        case 0: return self.multicast(PublishSubject()).refCount()
        default: return self.multicast(ReplaySubject.create(bufferSize: replay)).refCount()
        }
    case .whileConnected:
        switch replay {
        case 0: return ShareWhileConnected(source: self.asObservable())
        case 1: return ShareReplay1WhileConnected(source: self.asObservable())
        default: return self.multicast(makeSubject: { ReplaySubject.create(bufferSize: replay) }).refCount()
        }
    }
}
What is the difference between the 0, 1, and default cases? Why is 1 separated from default?
override func subscribe<O : ObserverType>(_ observer: O) -> Disposable where O.E == E {
    _lock.lock()
    let connection = _synchronized_subscribe(observer)
    let count = connection._observers.count
    let disposable = connection._synchronized_subscribe(observer)
    _lock.unlock()
    if count == 0 {
        connection.connect()
    }
    return disposable
}
How does the lock work? This function is the most difficult part for me: how do the shared observables connect to their observers correctly?

Below you get the information about shareReplay function.
share()
As you know, the share function shares a single subscription to the underlying Observable, so you don't need to create multiple observable instances every time.
Simply create one observable and share it; you can then perform further operations on this shared observable, i.e. filter(), subscribe(), etc.
But the problem with share() is that it does not provide values emitted before the subscription.
shareReplay()
shareReplay(), on the other hand, keeps a buffer of the last few emitted values and can provide them to new observers upon subscription.
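For illustration, here is a minimal, hypothetical sketch of the difference (the interval source, the 3-second delay, and the print labels are made up, not part of the question): a late subscriber to a plain share() only sees values emitted after it subscribes, while share(replay: 1) hands the most recent buffered value to new subscribers.
import Foundation
import RxSwift

let disposeBag = DisposeBag()

// One shared upstream subscription; the last value is kept in a replay buffer of size 1.
let shared = Observable<Int>
    .interval(.seconds(1), scheduler: MainScheduler.instance)
    .share(replay: 1, scope: .whileConnected)

shared
    .subscribe(onNext: { print("first subscriber:", $0) })
    .disposed(by: disposeBag)

DispatchQueue.main.asyncAfter(deadline: .now() + 3) {
    // With replay: 1 this late subscriber immediately receives the most recent value
    // from the buffer; with share() (replay: 0) it would only see values emitted from now on.
    shared
        .subscribe(onNext: { print("late subscriber:", $0) })
        .disposed(by: disposeBag)
}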

Related

Swift Combine: enqueue updates to one publisher behind another publisher

I have a situation where my code needs to make one network call to fetch a bunch of items, but while waiting for those to come down, another network call might fetch an update to those items. I'd love to be able to enqueue those secondary results until the first one has finished. Is there a way to accomplish that with Combine?
Importantly, I am not able to wait before making the second request. It’s actually a connection to a websocket that gets made at the same time as the first request, and the updates come over the websocket outside of my control.
Update
After examining Matt’s thorough book on Combine, I settled on .prepend(). But as Matt warned me in the comments, .prepend() doesn’t even subscribe to the other publisher until after the first one completes. This means I miss any signals sent prior to that. What I need is a Subject that enqueues values, but perhaps that’s not so hard to make. Anyway, this is where I got:
Initially I was going to use .append(), but I realized with .prepend() I could avoid keeping a reference to one of the publishers. So here’s a simplified version of what I’ve got. There might be syntax errors in this, as I’ve whittled it down from my (employer’s) code.
There’s the ItemFeed, which handles fetching a list of items and simultaneously handling item update events. The latter can arrive before the initial list of items, and thus must be sequenced via Combine to arrive after it. I attempt to do this by prepending the initial items source to the update PassthroughSubject.
Below that is an XCTestCase that simulates a lengthy initial item load, and adds an update before that load can complete. It attempts to subscribe to changes to the list of items, and tries to test that the first update is the initial 63 items, and the subsequent update is for 64 items (in this case, “update” results in adding an item).
Unfortunately, while the initial list is published, the update never arrives. I also tried removing the .output(at:) operators, but the two sinks are only called once.
After the test case sets up the delayed “fetch,” and subscribes to changes in feed.items, it calls feed.handleItemUpdatedEvent. This calls ItemFeed.updateItems.send(_:), but unfortunately that value is lost to oblivion.
class
ItemFeed
{
    typealias InitialItemsSource = Deferred<Future<[[String : Any]], Error>>

    let updateItems = PassthroughSubject<[Item], Error>()
    var funnel : AnyCancellable?

    @Published var items = [Item]()

    init(initialItemSource inSource: InitialItemsSource)
    {
        // Passthrough subject each time items are updated…
        var pub = self.updateItems.eraseToAnyPublisher()

        // Prepend the initial items we need to fetch…
        let initialItems = inSource.tryMap { try $0.map { try Item(object: $0) } }
        pub = pub.prepend(initialItems).eraseToAnyPublisher()

        // Sink on the funnel to add or update to self.items…
        self.funnel =
            pub.sink { inCompletion in
                // Handle errors
            }
            receiveValue: { inItems in
                self.update(items: inItems)
            }
    }

    func handleItemUpdatedEvent(_ inItem: Item) {
        self.updateItems.send([inItem])
    }

    func update(items inItems: [Item]) {
        // Update or add inItems to self.items
    }
}
class
ItemFeedTests : XCTestCase
{
    func
    testShouldUpdateItems()
        throws
    {
        // Set up a mock source of items…
        let source = fetchItems(named: "items", delay: 3.0) // 63 items
        let expectation = XCTestExpectation(description: "testShouldUpdateItems")
        expectation.expectedFulfillmentCount = 2

        let feed = ItemFeed(initialItemSource: source)

        let sub1 = feed.$items
            .output(at: 0)
            .receive(on: DispatchQueue.main)
            .sink { inItems in
                expectation.fulfill()
                debugLog("Got first items: \(inItems.count)")
                XCTAssertEqual(inItems.count, 63)
            }
        let sub2 = feed.$items
            .output(at: 1)
            .receive(on: DispatchQueue.main)
            .sink { inItems in
                expectation.fulfill()
                debugLog("Got second items: \(inItems.count)")
                XCTAssertEqual(inItems.count, 64)
            }

        // Send an update right away…
        let item = try loadItem(named: "Item3")
        feed.handleItemUpdatedEvent(item)
        XCTAssertEqual(feed.items.count, 0) // Should be no items yet

        // Wait for stuff to complete…
        wait(for: [expectation], timeout: 10.0)

        sub1.cancel() // Not necessary, but silence the compiler warning
        sub2.cancel()
    }
}
The accepted answer didn't actually have any code, but I was able to figure out a solution. I ended up creating a custom publisher that subscribed to the "gate publisher" and created a subscription that creates a sink for the upstream publisher. I buffer the values from upstream and emit gate publisher values based on demand until it completes, then I switch to sending the buffer downstream based on demand. The tricky part is keeping track of the upstream / gate publisher and sending demand to the right one.
After a fair bit of trial and error, I found a solution. I created a custom Publisher and Subscription that immediately subscribes to its upstream publisher and begins enqueuing elements (up to some specifiable capacity). It then waits for a subscriber to come along, provides that subscriber with all the values up until now, and then continues providing values.
I then use this in conjunction with .prepend() like so:
extension
Publisher
{
    func
    enqueue<P>(gatedBy inGate: P, capacity inCapacity: Int = .max)
        -> AnyPublisher<Self.Output, Self.Failure>
        where
            P : Publisher,
            P.Output == Output,
            P.Failure == Failure
    {
        let qp = Publishers.Queueing(upstream: self, capacity: inCapacity)
        let r = qp.prepend(inGate).eraseToAnyPublisher()
        return r
    }
}
And this is how you use it…
func
testShouldReturnAllItemsInOrder()
{
    let gate = PassthroughSubject<Int, Never>()
    let stream = PassthroughSubject<Int, Never>()

    var results = [Int]()

    let sub = stream.enqueue(gatedBy: gate)
        .sink
        { inElement in
            debugLog("element: \(inElement)")
            results.append(inElement)
        }

    stream.send(3)
    stream.send(4)
    stream.send(5)

    XCTAssertEqual(results.count, 0)

    gate.send(1)
    gate.send(2)
    gate.send(completion: .finished)

    XCTAssertEqual(results.count, 5)
    XCTAssertEqual(results, [1,2,3,4,5])

    sub.cancel()
}
This prints what you would expect:
element: 1
element: 2
element: 3
element: 4
element: 5
It works well because the .enqueue(gatedBy:) operator creates the queueing publisher qp, which immediately subscribes to stream and enqueues any values it sends. It then calls .prepend() on qp, which first subscribes to gate and waits for it to complete. When that finishes, it subscribes to qp, which immediately provides it with all the enqueued values, and then continues to provide it with values from the upstream publisher.
Here’s the code I finally ended up with.
//
//  QueuingPublisher.swift
//  Latency: Zero, LLC
//
//  Created by Rick Mann on 2021-06-03.
//

import Combine
import Foundation

extension
Publishers
{
    final
    class
    Queueing<Upstream: Publisher>: Publisher
    {
        typealias Output = Upstream.Output
        typealias Failure = Upstream.Failure

        private let upstream : Upstream
        private let capacity : Int
        private var queue : [Output] = [Output]()
        private var subscription : QueueingSubscription<Queueing<Upstream>, Upstream>?
        fileprivate var completion : Subscribers.Completion<Failure>? = nil

        init(upstream inUpstream: Upstream, capacity inCapacity: Int)
        {
            self.upstream = inUpstream
            self.capacity = inCapacity

            // Subscribe to the upstream right away so we can start
            // enqueueing values…

            let sink = AnySubscriber { $0.request(.unlimited) }
                receiveValue:
                { [weak self] (inValue: Output) -> Subscribers.Demand in
                    self?.relay(inValue)
                    return .none
                }
                receiveCompletion:
                { [weak self] (inCompletion: Subscribers.Completion<Failure>) in
                    self?.completion = inCompletion
                    self?.subscription?.complete(with: inCompletion)
                }
            inUpstream.subscribe(sink)
        }

        func
        receive<S: Subscriber>(subscriber inSubscriber: S)
            where
                Failure == S.Failure,
                Output == S.Input
        {
            let subscription = QueueingSubscription(publisher: self, subscriber: inSubscriber)
            self.subscription = subscription
            inSubscriber.receive(subscription: subscription)
        }

        /**
            Return up to inDemand values.
        */

        func
        request(_ inDemand: Subscribers.Demand)
            -> [Output]
        {
            let count = inDemand.min(self.queue.count)
            let elements = Array(self.queue[..<count])
            self.queue.removeFirst(count)
            return elements
        }

        private
        func
        relay(_ inValue: Output)
        {
            // TODO: The Wenderlich example code checks to see if the upstream has completed,
            //       but I feel like want to send all the values we've gotten first?

            // Save the new value…

            self.queue.append(inValue)

            // Discard the oldest if we’re over capacity…

            if self.queue.count > self.capacity
            {
                self.queue.removeFirst()
            }

            // Send the buffer to our subscriber…

            self.subscription?.dataAvailable()
        }

        final
        class
        QueueingSubscription<QP, Upstream> : Subscription
            where
                QP : Queueing<Upstream>
        {
            typealias Output = Upstream.Output
            typealias Failure = Upstream.Failure

            let publisher : QP
            var subscriber : AnySubscriber<Output,Failure>? = nil
            private var demand : Subscribers.Demand = .none

            init<S>(publisher inP: QP,
                    subscriber inS: S)
                where
                    S: Subscriber,
                    Failure == S.Failure,
                    Output == S.Input
            {
                self.publisher = inP
                self.subscriber = AnySubscriber(inS)
            }

            func
            request(_ inDemand: Subscribers.Demand)
            {
                self.demand += inDemand
                emitAsNeeded()
            }

            func
            cancel()
            {
                complete(with: .finished)
            }

            /**
                Called by our publisher to let us know new
                data has arrived.
            */

            func
            dataAvailable()
            {
                emitAsNeeded()
            }

            private
            func
            emitAsNeeded()
            {
                guard let subscriber = self.subscriber else { return }

                let newValues = self.publisher.request(self.demand)
                self.demand -= newValues.count
                newValues.forEach
                {
                    let nextDemand = subscriber.receive($0)
                    self.demand += nextDemand
                }

                if let completion = self.publisher.completion
                {
                    complete(with: completion)
                }
            }

            fileprivate
            func
            complete(with inCompletion: Subscribers.Completion<Failure>)
            {
                guard let subscriber = self.subscriber else { return }

                self.subscriber = nil
                subscriber.receive(completion: inCompletion)
            }
        }
    }
} // extension Publishers

extension
Publisher
{
    func
    enqueue<P>(gatedBy inGate: P, capacity inCapacity: Int = .max)
        -> AnyPublisher<Self.Output, Self.Failure>
        where
            P : Publisher,
            P.Output == Output,
            P.Failure == Failure
    {
        let qp = Publishers.Queueing(upstream: self, capacity: inCapacity)
        let r = qp.prepend(inGate).eraseToAnyPublisher()
        return r
    }
}

extension
Subscribers.Demand
{
    func
    min(_ inValue: Int)
        -> Int
    {
        if self == .unlimited
        {
            return inValue
        }
        return Swift.min(self.max!, inValue)
    }
}
extension
Publisher
{
func
enqueue<P>(gatedBy inGate: P, capacity inCapacity: Int = .max)
-> AnyPublisher<Self.Output, Self.Failure>
where
P : Publisher,
P.Output == Output,
P.Failure == Failure
{
let qp = Publishers.Queueing(upstream: self, capacity: inCapacity)
let r = qp.prepend(inGate).eraseToAnyPublisher()
return r
}
}
extension
Subscribers.Demand
{
func
min(_ inValue: Int)
-> Int
{
if self == .unlimited
{
return inValue
}
return Swift.min(self.max!, inValue)
}
}

Create a Publisher that notifies subscribers one by one, waiting for each other

I have this publisher and subscribers (example code):
import Combine

let publisher = PassthroughSubject<ComplexStructOrClass, Never>()

let sub1 = publisher.sink { (someString) in
    // Async work...
}
let sub2 = publisher.sink { (someString) in
    // Async work, but it has to wait until sub1 has finished its work
}
So the publisher constant has 2 subscribers. When I call send on the publisher constant, it should send the value first to sub1, and only after sub1 has finished processing (with a callback or something like that) should the publisher notify sub2.
In the comments it's stated that Combine is made for this. What publisher do I need to use? A PassthroughSubject may be the wrong choice.
Usecase
I need to publish values throughout the lifetime of my app to a dynamic number of subscribers, for a few different publishers (I hope I can make a protocol). So a subscriber can be added to and removed from a publisher at any given time. A subscriber looks as follows:
It has an NSPersistentContainer
A callback should be made by the publisher when a new value has arrived. That process looks like this:
the publisher creates a backgroundContext from the subscriber's container, because it knows the subscriber has a container
the publisher sends the context along with the newly published value to the subscriber
the publisher waits until it receives a callback from the subscriber. The subscriber shouldn't save the context, but the publisher must hold a reference to it. The subscriber calls back with an enum, which has an ok case and some error cases.
When a subscriber calls back with an error case, the publisher must roll back the contexts it created for each subscriber.
When a subscriber calls back with the ok case, the publisher repeats steps 1 to 5 for every subscriber
This step is only reached when no subscriber returned an error case or there are no subscribers. The publisher then saves all the contexts created for the subscribers.
Current code, no Combine
This is some code without using Combine:
// My publisher
protocol NotiPublisher {
// Type of message to send
associatedtype Notification
// List of subscribers for this publisher
static var listeners: Set<AnyNotiPublisher<Notification>> { get set }
}
// My subscriber
protocol NotificationListener: Hashable {
associatedtype NotificationType
var container: NSPersistentContainer { get }
// Identifier used to find this subscriber in the list of 'listeners' in the publisher
var identifier: Int32 { get }
var notify: ((_ notification: NotificationType, _ context: NSManagedObjectContext, #escaping CompletionHandlerAck) -> ()) { get }
}
// Type erased version of the NotificationListener and some convience methods here, can add them if desired
// In an extension of NotiPublisher, this method is here
static func notify(queue: DispatchQueue, notification: Notification, completionHandler: @escaping CompletionHandlerAck) throws {
    let dispatchGroup = DispatchGroup()
    var completionBlocks = [SomeCompletionHandler]()
    var contexts = [NSManagedObjectContext]()
    var didLoop = false

    for listener in listeners {
        if didLoop {
            dispatchGroup.wait()
        } else {
            didLoop = true
        }
        dispatchGroup.enter()
        listener.container.performBackgroundTask { (context) in
            contexts.append(context)
            listener.notify(notification, context, { (completion) in
                completionBlocks.append(completion)
                dispatchGroup.leave()
            })
        }
    }

    dispatchGroup.notify(queue: queue) {
        let err = completionBlocks.first(where: { element in
            // Check if an error has occurred
        })
        if err == nil {
            for context in contexts {
                context.performAndWait {
                    try! context.save()
                }
            }
        }
        completionHandler(err ?? .ok(true))
    }
}
This is pretty complex code; I am wondering if I can use the power of Combine to make it more readable.
I wrote the following to chain async operations from a publisher using flatMap, which allows you to return another publisher. I'm not a fan of it, and it might not meet your need to dynamically change the subscribers, but it might help someone:
let somePublisher = Just(12)

let anyCancellable = somePublisher.flatMap { num in
    // We can return a publisher from flatMap, so lets return a Future one because we want to do some async work
    return Future<Int, Never>({ promise in
        // do an async thing using dispatch
        DispatchQueue.main.asyncAfter(deadline: .now() + 3, execute: {
            print("flat map finished processing the number \(num)")
            // now just pass on the value
            promise(.success(num))
        })
    })
}.flatMap { num in
    // We can return a publisher from flatMap, so lets return a Future one because we want to do some async work
    return Future<Int, Never>({ promise in
        // do an async thing using dispatch
        DispatchQueue.main.asyncAfter(deadline: .now() + 3, execute: {
            print("second flat map finished processing the number \(num)")
            // now just pass on the value
            promise(.success(num))
        })
    })
}.sink { num in
    print("This sink runs after the async work in the flatMap/Future publishers")
}
You can try using a serial OperationQueue to receive values. It seems like it will wait until one sink completes before calling another.
let queue = OperationQueue()
queue.maxConcurrentOperationCount = 1

publisher
    .receive(on: queue)
    .sink { _ in }
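For what it's worth, here is a minimal, hypothetical sketch of that idea (the subject, the values, and the Thread.sleep are made up): the serial OperationQueue runs one delivery at a time, so a slow sink body delays the next one.
import Combine
import Foundation

let serialQueue = OperationQueue()
serialQueue.maxConcurrentOperationCount = 1 // serial: only one sink body runs at a time

let subject = PassthroughSubject<Int, Never>()

let first = subject
    .receive(on: serialQueue)
    .sink { value in
        // Simulated slow, synchronous work; the serial queue holds back later deliveries
        Thread.sleep(forTimeInterval: 1)
        print("first sink finished \(value)")
    }

let second = subject
    .receive(on: serialQueue)
    .sink { value in
        print("second sink got \(value)")
    }

subject.send(42)
Note that this only serialises the synchronous body of each sink; work that a sink kicks off asynchronously is not waited on.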

Given a list of timers, how to output if one of them completed, while also being able to reset the list?

I have an output signal that should fire when one of a given set of timers times out or completes, or when the entire list is reset.
enum DeviceActionStatus {
    case pending
    case completed
    case failed
}

struct DeviceAction {
    let start: Date
    let status: DeviceActionStatus
    func isTimedOut() -> Bool // if start is over 30 seconds ago
    let id: String
}
Output signal:
let pendingActionUpdated: Signal<[DeviceAction], NoError>
Inputs:
let completeAction: Signal<String, NoError>
let tick: Signal<Void, NoError> // runs every 1 second and should iterate to see if any DeviceAction is timed out
let addAction: Signal<DeviceAction, NoError>
let resetAllActions: Signal<Void, NoError>
It should output an array of all running device actions.
let output = Signal.combineLatest(
    addAction,
    resetAllActions,
    tick,
    Signal.merge(
        completeAction,
        tick.take(first: 1).map { _ in "InvalidActionId" }
    )) // make sure the combineLatest can fire initially
I've tried sending this into a .scan to accumulate every time addAction fires and to reset every time resetAllActions fires, but since there is no way of knowing which of them fired, I can't get the logic to work. How can I accumulate a growing list while also being able to run through it and reset it when I want?
This looks like a job for the merge/enum pattern. I'm more an RxSwift guy myself, but if you map each of your Signals into an enum and merge them, then you can receive them properly into your scan...
enum ActionEvent {
    case complete(String)
    case tick
    case add(DeviceAction)
    case reset
}

Signal.merge(
    completeAction.map { ActionEvent.complete($0) },
    tick.map { _ in ActionEvent.tick },
    addAction.map { ActionEvent.add($0) },
    resetAllActions.map { _ in ActionEvent.reset }
).scan([DeviceAction]()) { actions, event in
    switch event {
    case let .complete(id):
        return actions.filter { $0.id != id }
    case .tick:
        return actions.filter { $0.isTimedOut() == false }
    case let .add(action):
        return actions + [action]
    case .reset:
        let resetDate = Date()
        return actions.map { DeviceAction(start: resetDate, status: $0.status, id: $0.id) }
        // or
        // return []
        // depending on what "reset" means.
    }
}
It's a bit difficult to see the full use case here, so I will just describe how I would distinguish between addAction and resetAllActions being fired, leaving the rest of the design alone.
You can merge those two into one signal prior to the Signal.combineLatest. In order to do that, you will need to map them to the same type. An enum is perfect for this:
enum Action {
    case add(DeviceAction)
    case resetAll
}
Now you can map each signal and merge them into a single signal:
let action = Signal.merge(
    addAction.map { Action.add($0) },
    resetAllActions.map { _ in Action.resetAll })
Now you can switch on the value in your scan and determine whether it's a new action being added or a reset.
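For example, a minimal sketch of that scan (building on the action signal above; the pendingActions name is made up, and tick/completeAction are left out for brevity):
let pendingActions: Signal<[DeviceAction], NoError> = action
    .scan([DeviceAction]()) { actions, event in
        switch event {
        case let .add(deviceAction):
            // A new action was added: append it to the accumulated list
            return actions + [deviceAction]
        case .resetAll:
            // The whole list was reset: start over with an empty list
            return []
        }
    }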

RxSwift, Share + retry mechanism

I have a network request that can succeed or fail.
I have encapsulated it in an observable.
I have 2 rules for the request:
1) There can never be more than 1 request at the same time
-> there is a share operator I can use for this
2) When the request has succeeded, I don't want to repeat the same
request again, just return the latest value
-> I can use the shareReplay(1) operator for this
The problem arises when the request fails: shareReplay(1) will just replay the latest error and not restart the request.
The request should start again at the next subscription.
Does anyone have an idea how I can turn this into an Observable chain?
// scenario 1
let obs: Observable<Int> = request().shareReplay(1)
// outputs a value
obs.subscribe()
// does not start a new request but outputs the same value as before
obs.subscribe()
// scenario 2 - in case of an error
let obs: Observable<Int> = request().shareReplay(1)
// outputs an error
obs.subscribe()
// does not start a new request but outputs the same value as before; in this case I want it to start a new request
obs.subscribe()
This seems to be doing exactly what I want, but it relies on keeping state outside the observable. Does anyone know how I can achieve this in a more Rx way?
enum Err: Swift.Error {
    case x
}

enum Result<T> {
    case value(val: T)
    case error(err: Swift.Error)
}

func sample() {
    var result: Result<Int>? = nil
    var i = 0
    let intSequence: Observable<Result<Int>> = Observable<Int>.create { observer in
        if let result = result {
            if case .value(let val) = result {
                return Observable<Int>.just(val).subscribe(observer)
            }
        }
        print("do work")
        delay(1) {
            if i == 0 {
                observer.onError(Err.x)
            } else {
                observer.onNext(1)
                observer.onCompleted()
            }
            i += 1
        }
        return Disposables.create {}
    }
    .map { value -> Result<Int> in Result.value(val: value) }
    .catchError { error -> Observable<Result<Int>> in
        return .just(.error(err: error))
    }
    .do(onNext: { result = $0 })
    .share()

    _ = intSequence
        .debug()
        .subscribe()

    delay(2) {
        _ = intSequence
            .debug()
            .subscribe()
        _ = intSequence
            .debug()
            .subscribe()
    }

    delay(4) {
        _ = intSequence
            .debug()
            .subscribe()
    }
}

sample()
It only generates work when we don't have anything cached, but then again we need to use side effects to achieve the desired output.
As mentioned earlier, RxSwift errors need to be treated as fatal errors. They are errors your stream usually cannot recover from, and usually errors that would not even be user facing.
For that reason, a stream that emits an .error or .completed event will immediately be disposed, and you won't receive any more events from it.
There are two approaches to tackling this:
1) Using a Result type like you just did
2) Using .materialize() (and .dematerialize() if needed). The materialize operator turns your Observable<Element> into an Observable<Event<Element>>, meaning that instead of an error being emitted and the sequence terminated, you get an element that tells you an error event occurred, without any termination.
You can read more about error handling in RxSwift in Adam Borek's great blog post about this: http://adamborek.com/how-to-handle-errors-in-rxswift/
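As a rough, hypothetical sketch of that second approach (the RequestError type and the request() stub stand in for the question's error-prone network call, and the newer share(replay:scope:) spelling is used), materialize() delivers the error as an ordinary element instead of a terminating error event:
import RxSwift

enum RequestError: Error { case failed }

// Stand-in for the question's error-prone network request
func request() -> Observable<Int> {
    return Observable.error(RequestError.failed)
}

let disposeBag = DisposeBag()

let sharedEvents = request()
    .materialize() // Observable<Event<Int>>
    .share(replay: 1, scope: .whileConnected)

sharedEvents
    .subscribe(onNext: { event in
        switch event {
        case .next(let value):
            print("value:", value)
        case .error(let error):
            // The error arrives as an element rather than as a terminal error event
            print("request failed:", error)
        case .completed:
            print("request completed")
        }
    })
    .disposed(by: disposeBag)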
If an Observable sequence emits an error, it can never emit another event. However, it is a fairly common practice to wrap an error-prone Observable inside of another Observable using flatMap and catch any errors before they are allowed to propagate through to the outer Observable. For example:
safeObservable
    .flatMap { _ in
        Requestor
            .makeUnsafeObservable()
            .catchErrorJustReturn(0)
    }
    .shareReplay(1)
    .subscribe()

RxSwift repeated action

I'm switching from RAC and want to have a repeated network request, returning different result types depending on the API of the request.
I want to use an interval, but I don't know how to match the return types.
var loop: Observable<Element> {
    return Observable<Int>.interval(5.0, scheduler: MainScheduler.instance).map { _ in
        // Do network request and return Observable<Element>
    }
}
I need to invoke Observable.interval with type Int, but return Observable<Element>. How would I do that?
Use flatMap:
var loop: Observable<Element> {
    return Observable<Int>.interval(5.0, scheduler: MainScheduler.instance).flatMap { _ in
        return networkRequest() // returns Observable<Element>
    }
}