I'd like all publishers to execute unless they are explicitly cancelled. I don't mind an AnyCancellable going out of scope, but according to the docs it automatically calls cancel() on deinit, which is undesired.
I've tried using a cancellable bag, but the AnyCancellable instances kept piling up even after the publisher fired a completion.
Should I manage the bag manually? I was under the impression that store(in: inout Set<AnyCancellable>) was meant as a convenience for managing cancellable instances, but all it does is push the AnyCancellable into a set.
var cancelableSet = Set<AnyCancellable>()

func work(value: Int) -> AnyCancellable {
    return Just(value)
        .delay(for: .seconds(1), scheduler: DispatchQueue.global(qos: .default))
        .map { $0 + 1 }
        .sink(receiveValue: { value in
            print("Got value: \(value)")
        })
}

work(value: 1337).store(in: &cancelableSet)

DispatchQueue.main.asyncAfter(deadline: .now() + .seconds(5)) {
    print("\(cancelableSet)")
}
Here is what I came up with so far. It works fine, but it makes me wonder whether something is missing in the Combine framework or whether it simply was not meant to be used in this fashion:
class DisposeBag {
    private let lock = NSLock()
    private var cancellableSet = Set<AnyCancellable>()

    func store(_ cancellable: AnyCancellable) {
        print("Store cancellable: \(cancellable)")
        lock.lock()
        cancellableSet.insert(cancellable)
        lock.unlock()
    }

    func drop(_ cancellable: AnyCancellable) {
        print("Drop cancellable: \(cancellable)")
        lock.lock()
        cancellableSet.remove(cancellable)
        lock.unlock()
    }
}
extension Publisher {
    @discardableResult
    func autoDisposableSink(disposeBag: DisposeBag,
                            receiveCompletion: @escaping (Subscribers.Completion<Self.Failure>) -> Void,
                            receiveValue: @escaping (Self.Output) -> Void) -> AnyCancellable {
        var sharedCancellable: AnyCancellable?
        let disposeSubscriber = {
            if let sharedCancellable = sharedCancellable {
                disposeBag.drop(sharedCancellable)
            }
        }

        let cancellable = handleEvents(receiveCancel: {
            disposeSubscriber()
        }).sink(receiveCompletion: { completion in
            receiveCompletion(completion)
            disposeSubscriber()
        }, receiveValue: receiveValue)

        sharedCancellable = cancellable
        disposeBag.store(cancellable)
        return cancellable
    }
}
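For reference, a hypothetical usage of the extension above (the publisher and timings here are only illustrative):

let disposeBag = DisposeBag()   // keep alive for as long as the subscriptions should run

Just(1)
    .delay(for: .seconds(1), scheduler: DispatchQueue.global(qos: .default))
    .autoDisposableSink(disposeBag: disposeBag,
                        receiveCompletion: { _ in print("Completed") },
                        receiveValue: { print("Got value: \($0)") })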
Subscriptions in Apple's Combine are scoped in an RAII-compliant fashion: deinitialization of the AnyCancellable is equivalent to automatic disposal of the subscription. This is in contrast to RxSwift's Disposable, where that behavior is sometimes reproduced, but not strictly so.
Even in RxSwift, if you lose a DisposeBag your subscriptions will be disposed, and this is a feature. If you would like your subscription to outlive the scope, that means it belongs to an outer scope.
And neither framework actively removes the disposables/cancellables from the retention tree once the subscriptions have finished.
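A minimal sketch of that idea (the Worker type and timings are illustrative, not from the original post): keep the AnyCancellable in a set owned by an outer scope and the pipeline runs to completion; once the publisher finishes, the upstream resources are released even though the AnyCancellable value itself remains in the set.

import Combine
import Foundation

final class Worker {
    private var cancellables = Set<AnyCancellable>()

    func start() {
        Just(42)
            .delay(for: .seconds(1), scheduler: DispatchQueue.global(qos: .default))
            .sink { print("Got \($0)") }   // still fires after start() returns
            .store(in: &cancellables)      // the outer-scope set keeps the subscription alive
    }
}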
I'm playing with the concurrency features of Swift. I've created a helper function which returns an AsyncStream of the values published by an NSObject's KVO publisher. The code is below.
func asyncStreamFor<Root: NSObject, Value>(_ root: Root, keyPath: KeyPath<Root, Value>) -> AsyncStream<Value> {
    AsyncStream { continuation in
        let cancellable = root.publisher(for: keyPath, options: [.initial, .new])
            .sink {
                continuation.yield($0)
            }
        continuation.onTermination = { @Sendable _ in
            cancellable.cancel()
        }
    }
}
I'm trying to use it (and previously used the publisher directly) for purposes such as observing AVPlayer properties (rate, status). The Combine-based usage looks like this:
class Player {
    let avPlayer = AVPlayer()
    var cancellable = Set<AnyCancellable>()

    init() {
        avPlayer.publisher(for: \.status)
            .sink {
                print($0)
            }
            .store(in: &cancellable)
    }
}
Releasing the Player instance works properly (no reference-cycle problem).
For the AsyncStream, I tried to keep it simple and used the pattern below:
class Player {
    let avPlayer = AVPlayer()

    init() {
        Task {
            for await status in asyncStreamFor(avPlayer, keyPath: \.status) {
                print(status)
            }
        }
    }
}
Here, it seems like there's a reference cycle: the AsyncStream instance holds a reference to the cancellable (in the onTermination closure), which in turn holds a reference to avPlayer. The Player instance is not deinitialized when the last reference to it is dropped.
Any idea how to solve this without explicitly cancelling the Task before dropping the reference to Player?
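One possible workaround, offered only as a hedged sketch (it is not from the original post, and it still cancels the task, just from deinit rather than at the call site): build the stream outside the Task so the task closure captures only the stream and not self, then tear the task down when the Player goes away.

class Player {
    let avPlayer = AVPlayer()
    private var observationTask: Task<Void, Never>?

    init() {
        // Building the stream here means the Task closure captures only the local
        // `stream` value, not `self`, so the long-running Task does not keep Player alive.
        let stream = asyncStreamFor(avPlayer, keyPath: \.status)
        observationTask = Task {
            for await status in stream {
                print(status)
            }
        }
    }

    deinit {
        // Ends the for-await loop; AsyncStream's onTermination then cancels the KVO sink.
        observationTask?.cancel()
    }
}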
I'm playing about with writing a custom Combine publisher in order to better understand how I can turn various classes into publishers. Admittedly this is not something I expect to do a lot; I just want to understand how it could be done if I need to.
The scenario I'm working with is a class that generates values over time and potentially has multiple subscribers listening. It's not a case of the publisher generating values when requested, but of it pushing values whenever it chooses. This might occur (for example) when reading text, or with random input from a UI.
To test this out I've started with a simple integer generator that's something like this:
class IntPublisher {
    func generate() {
        DispatchQueue.global(qos: .background).async { [weak self] in
            self?.send(0)
            self?.send(1)
            self?.send(2)
            self?.complete()
        }
    }

    private func send(_ value: Int) {
        queueOnMain()
    }

    private func complete() {
        queueOnMain()
    }

    func queueOnMain() {
        Thread.sleep(forTimeInterval: 0.5)
        DispatchQueue.main.async { /* forward to listeners */ }
    }
}
And here's the generator as a Publisher and Subscription:
class IntPublisher: Publisher {
    typealias Output = Int
    typealias Failure = Never

    class Subscription: Combine.Subscription, Equatable {
        private var subscriber: AnySubscriber<Int, Never>?
        private var didFinish: ((Subscription) -> Void)?

        init<S>(subscriber: S, didFinish: @escaping (Subscription) -> Void) where S: Subscriber, S.Input == Output, S.Failure == Failure {
            self.subscriber = AnySubscriber(subscriber)
            self.didFinish = didFinish
        }

        func request(_ demand: Subscribers.Demand) {
            // Values are pushed by the publisher, so demand is ignored here.
        }

        func cancel() {
            finish()
        }

        func complete() {
            self.subscriber?.receive(completion: .finished)
            finish()
        }

        func finish() {
            didFinish?(self)
            subscriber = nil
            didFinish = nil
        }

        func send(_ value: Int) {
            _ = subscriber?.receive(value)
        }

        static func == (lhs: IntPublisher.Subscription, rhs: IntPublisher.Subscription) -> Bool {
            return lhs.subscriber?.combineIdentifier == rhs.subscriber?.combineIdentifier
        }
    }

    var subscriptions = [Subscription]()

    func receive<S>(subscriber: S) where S: Subscriber, S.Failure == Failure, S.Input == Output {
        let subscription = Subscription(subscriber: subscriber) { [weak self] subscription in
            self?.subscriptions.removeAll { $0 == subscription }
        }
        subscriptions.append(subscription)
        subscriber.receive(subscription: subscription)
    }

    func generate() {
        DispatchQueue.global(qos: .background).async { [weak self] in
            self?.send(0)
            self?.send(1)
            self?.send(2)
            self?.complete()
        }
    }

    private func send(_ value: Int) {
        queueOnMain { $0.send(value) }
    }

    private func complete() {
        queueOnMain { $0.complete() }
    }

    func queueOnMain(_ block: @escaping (Subscription) -> Void) {
        Thread.sleep(forTimeInterval: 0.5)
        DispatchQueue.main.async { self.subscriptions.forEach { block($0) } }
    }
}
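For context, a hypothetical usage of the publisher above, illustrating the multiple-subscriber scenario described earlier:

let publisher = IntPublisher()
let first = publisher.sink { print("first subscriber got \($0)") }    // keep the cancellables alive
let second = publisher.sink { print("second subscriber got \($0)") }
publisher.generate()   // both sinks receive 0, 1, 2 and then .finished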
My question revolves around the way I've had to track the subscriptions in the publisher. Because it's generating the values and needs to forward them to the subscriptions, I've had to set up an array and store the subscriptions in it. In turn, I had to find a way for the subscriptions to remove themselves from the publisher's array when they're cancelled or completed, because the array effectively forms a circular reference between the publisher and the subscriptions.
All the blogs I've read on custom publishers cover the scenario where a publisher waits around for subscribers to request values. The publisher doesn't need to store a reference to the subscriptions because it passes them closures which they can call to get a value. My use case is different because the publisher, not the subscribers, controls when values are sent.
So my question is this: is using an array a good way to handle this, or is there something in Combine I've missed?
As Apple suggests under "Creating Your Own Publishers", you should use a concrete subclass of Subject (such as PassthroughSubject), a CurrentValueSubject, or the @Published property wrapper.
For example:
func operation() -> AnyPublisher<String, Error> {
    let subject = PassthroughSubject<String, Error>()
    subject.send("A")
    subject.send("B")
    subject.send("C")
    return subject.eraseToAnyPublisher()
}
New Dev's idea has made a massive reduction in the code. I don't know why I didn't think of it ... oh wait, I do. I was so focused on the implementation that I clean forgot to consider the option of using a decorator pattern around a subject.
Anyway, here's the (much simpler) code:
class IntPublisher2: Publisher {
    typealias Output = Int
    typealias Failure = Never

    private let passThroughSubject = PassthroughSubject<Output, Failure>()

    func receive<S>(subscriber: S) where S: Subscriber, S.Failure == Failure, S.Input == Output {
        passThroughSubject.receive(subscriber: subscriber)
    }

    func generate() {
        DispatchQueue.global(qos: .background).async {
            Thread.sleep(forTimeInterval: 0.5)
            self.passThroughSubject.send(0)
            Thread.sleep(forTimeInterval: 0.5)
            self.passThroughSubject.send(1)
            Thread.sleep(forTimeInterval: 0.5)
            self.passThroughSubject.send(2)
            Thread.sleep(forTimeInterval: 0.5)
            self.passThroughSubject.send(completion: .finished)
        }
    }
}
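A hypothetical usage sketch: subscribe first, then call generate(), so the PassthroughSubject already has a downstream subscriber when the values are sent.

let publisher = IntPublisher2()
let cancellable = publisher.sink(
    receiveCompletion: { print("completed: \($0)") },
    receiveValue: { print("value: \($0)") }
)
publisher.generate()   // keep `cancellable` alive until the publisher finishes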
I have a function that returns a publisher. This publisher delivers the results of a background process. I only want to trigger the background process once the publisher is subscribed to, so that no results are lost. The background process can update its results many times, so a Future is not suitable.
private let passthroughSubject = PassthroughSubject<Data, Error>()

// This function will be used outside.
func fetchResults() -> AnyPublisher<Data, Error> {
    return passthroughSubject
        .eraseToAnyPublisher()
        .somehowTriggerTheBackgroundProcess()
}

extension MyModule: MyDelegate {
    func didUpdateResult(newResult: Data) {
        self.passthroughSubject.send(newResult)
    }
}
What have I tried?
Future:
Future<Data, Error> { [weak self] promise in
    guard let self = self else { return }
    self.passthroughSubject
        .sink(receiveCompletion: { completion in
            // My logic
        }, receiveValue: { value in
            // My logic
        })
        .store(in: &self.cancellableSet)
    self.triggerBackgroundProcess()
}.eraseToAnyPublisher()
This works the way I want, but the subscriber is called only once (which is logical for a Future).
Deferred:
Deferred<AnyPublisher<Data, Error>>(createPublisher: { [weak self] in
    defer {
        self?.triggerBackgroundProcess()
    }
    // Fall back to an empty publisher if self has already gone away.
    return self?.passthroughSubject.eraseToAnyPublisher()
        ?? Empty<Data, Error>().eraseToAnyPublisher()
})
The debugger shows that the order is correct (first the return, then the trigger), but the subscriber does not receive the first value.
receiveSubscription:
passthroughSubject
    .handleEvents(receiveSubscription: { [weak self] subscription in
        self?.triggerBackgroundProcess()
    })
    .eraseToAnyPublisher()
The same effect as with Deferred.
Is what I want to achieve even possible?
Or is it better to create a public publisher, subscribe to it, and receive the results from the background process there, with fetchResults() not returning anything?
Thanks in advance for your help.
You can write your own type that conforms to Publisher and wraps a PassthroughSubject. In your implementation, you can start the background process when you get a subscription.
public struct MyPublisher: Publisher {
    public typealias Output = Data
    public typealias Failure = Error

    public func receive<Downstream: Subscriber>(subscriber: Downstream)
        where Downstream.Input == Output, Downstream.Failure == Failure
    {
        let subject = PassthroughSubject<Output, Failure>()
        subject.subscribe(subscriber)
        startBackgroundProcess(subject: subject)
    }

    private func startBackgroundProcess(subject: PassthroughSubject<Output, Failure>) {
        DispatchQueue.global(qos: .utility).async {
            print("background process running")
            subject.send(Data())
            subject.send(completion: .finished)
        }
    }
}
Note that this publisher starts a new background process for each subscriber. That is a common implementation. For example URLSession.DataTaskPublisher issues a new request for each subscriber. If you want multiple subscribers to share the output of a single request, you can use the .multicast operator, add multiple subscribers, and then .connect() the multicast publisher to start the background process once:
let pub = MyPublisher().multicast { PassthroughSubject() }
pub.sink(...).store(in: &tickets) // first subscriber
pub.sink(...).store(in: &tickets) // second subscriber
pub.connect().store(in: &tickets) // start the background process
It seems to me that your last bit of code is a perfectly viable solution: don't trigger the background process until you detect the subscription. Example:
let subject = PassthroughSubject<String, Never>()
var storage = Set<AnyCancellable>()

func start() {
    self.subject
        .handleEvents(receiveSubscription: { _ in
            print("subscribed")
            DispatchQueue.main.async {
                self.doSomethingAsynchronous()
            }
        })
        .sink { print("got", $0) }
        .store(in: &storage)
}

func doSomethingAsynchronous() {
    DispatchQueue.global(qos: .userInitiated).async {
        DispatchQueue.main.async {
            self.subject.send("bingo")
        }
    }
}
Is there a good way to handle an array of AnyCancellable so that a stored AnyCancellable is removed when it's finished or cancelled?
Say I have this
import Combine
import Foundation

class Foo {
    private var cancellables = [AnyCancellable]()

    func startSomeTask() -> Future<Void, Never> {
        Future<Void, Never> { promise in
            DispatchQueue.main.asyncAfter(deadline: .now() + .seconds(2)) {
                promise(.success(()))
            }
        }
    }

    func taskCaller() {
        startSomeTask()
            .sink { print("Do your stuff") }
            .store(in: &cancellables)
    }
}
Every time taskCaller is called, an AnyCancellable is created and stored in the array.
I'd like to remove that instance from the array when it finishes in order to avoid memory waste.
I know I can do something like this instead of the array:
var taskCancellable: AnyCancellable?
And store the cancellable by doing:
taskCancellable = startSomeTask().sink { print("Do your stuff") }
But this ends up creating several individual cancellables and can pollute the code. I don't want a class like this:
class Bar {
    private var task1: AnyCancellable?
    private var task2: AnyCancellable?
    private var task3: AnyCancellable?
    private var task4: AnyCancellable?
    private var task5: AnyCancellable?
    private var task6: AnyCancellable?
}
I asked myself the same question while working on an app that generates a large number of cancellables, all of which end up stored in the same collection. For a long-lived app, that collection can become huge.
Even if the memory footprint is small, those are still objects that consume heap space, which can lead to heap fragmentation over time.
The solution I found is to remove the cancellable when the publisher finishes:
func consumePublisher() {
    var cancellable: AnyCancellable!
    cancellable = makePublisher()
        .sink(receiveCompletion: { [weak self] _ in self?.cancellables.remove(cancellable) },
              receiveValue: { doSomeWork() })
    // Assumes `cancellables` is a Set<AnyCancellable>, so remove(_:) is available.
    cancellable.store(in: &cancellables)
}
Indeed, the code is not that pretty, but at least there is no memory waste :)
Some higher-order functions can be used to make this pattern reusable in other places in the same class:
func cleanupCompletion<T: Error>(_ cancellable: @autoclosure @escaping () -> AnyCancellable) -> (Subscribers.Completion<T>) -> Void {
    // @autoclosure defers reading `cancellable` until the completion arrives,
    // by which time it has been assigned.
    return { [weak self] _ in self?.cancellables.remove(cancellable()) }
}

func consumePublisher() {
    var cancellable: AnyCancellable!
    cancellable = makePublisher()
        .sink(receiveCompletion: cleanupCompletion(cancellable),
              receiveValue: { doSomeWork() })
    cancellable.store(in: &cancellables)
}
Or, if you also need to do some work on completion:
func cleanupCompletion<T: Error>(_ cancellable: @autoclosure @escaping () -> AnyCancellable) -> (Subscribers.Completion<T>) -> Void {
    return { [weak self] _ in self?.cancellables.remove(cancellable()) }
}

func cleanupCompletion<T: Error>(_ cancellable: @autoclosure @escaping () -> AnyCancellable,
                                 completionWorker: @escaping (Subscribers.Completion<T>) -> Void) -> (Subscribers.Completion<T>) -> Void {
    return { [weak self] in
        self?.cancellables.remove(cancellable())
        completionWorker($0)
    }
}

func consumePublisher() {
    var cancellable: AnyCancellable!
    cancellable = makePublisher()
        .sink(receiveCompletion: cleanupCompletion(cancellable) { _ in doCompletionWork() },
              receiveValue: { doSomeWork() })
    cancellable.store(in: &cancellables)
}
It's a nice idea, but there is really nothing to remove. When the completion (finish or cancel) comes down the pipeline, everything up the pipeline is unsubscribed in good order, all classes (the Subscription objects) are deallocated, and so on. So the only thing that is still meaningfully "alive" after your Future has emitted a value or failure is the Sink at the end of the pipeline, and it is tiny.
To see this, run this code
for _ in 1...100 {
    self.taskCaller()
}
and use Instruments to track your allocations. Sure enough, afterwards there are 100 AnyCancellable objects, for a grand total of 3KB. There are no Futures; none of the other objects malloced in startSomeTask still exist, and they are so tiny (48 bytes) that it wouldn't matter if they did.
I've been trying to replicate flatMapLatest from RxSwift in Combine. I've read in a few places that the solution is to use .map(...).switchToLatest().
I'm finding some differences between the two, and I'm not sure whether it's my implementation or my understanding that is the problem.
In RxSwift, if the upstream observable emits a stop event (completed or error), the inner observables created in the flatMapLatest closure continue to emit events until they themselves emit a stop event:
let disposeBag = DisposeBag()

func flatMapLatestDemo() {
    let mockTrigger = PublishSubject<Void>()
    let mockDataTask = PublishSubject<Void>()

    mockTrigger
        .flatMapLatest { mockDataTask }
        .subscribe(onNext: { print("RECEIVED VALUE") })
        .disposed(by: disposeBag)

    mockTrigger.onNext(())
    mockTrigger.onCompleted()

    mockDataTask.onNext(()) // -> "RECEIVED VALUE" is printed
}
This same setup in Combine doesn't behave the same way:
var cancellables = Set<AnyCancellable>()

func switchToLatestDemo() {
    let mockTrigger = PassthroughSubject<Void, Never>()
    let mockDataTask = PassthroughSubject<Void, Never>()

    mockTrigger
        .map { mockDataTask }
        .switchToLatest()
        .sink { print("RECEIVED VALUE") }
        .store(in: &cancellables)

    mockTrigger.send(())
    mockTrigger.send(completion: .finished)

    mockDataTask.send(()) // -> Nothing is printed; if I comment out the .finished event above, then "RECEIVED VALUE" is printed
}
Is this intentional? If so, how do we replicate the behaviour of flatMapLatest in Combine?
If it's not intentional, file a radar I guess?
I've been using this Swift implementation by sergdort:
extension Publisher {
    func flatMapLatest<T: Publisher>(_ transform: @escaping (Self.Output) -> T) -> Publishers.SwitchToLatest<T, Publishers.Map<Self, T>> where T.Failure == Self.Failure {
        map(transform).switchToLatest()
    }
}