In this code I am expecting the Empty() publisher to send completion to the .sink subscriber, but no completion is sent.
func testEmpty () {
let x = XCTestExpectation()
let subject = PassthroughSubject<Int, Never>()
emptyOrSubjectPublisher(subject).sink(receiveCompletion: { completion in
dump(completion)
}, receiveValue: { value in
dump(value)
}).store(in: &cancellables)
subject.send(0)
wait(for: [x], timeout: 10.0)
}
func emptyOrSubjectPublisher (_ subject: PassthroughSubject<Int, Never>) -> AnyPublisher<Int, Never> {
subject
.flatMap { (i: Int) -> AnyPublisher<Int, Never> in
if i == 1 {
return subject.eraseToAnyPublisher()
} else {
return Empty().eraseToAnyPublisher()
}
}
.eraseToAnyPublisher()
}
Why does the emptyOrSubjectPublisher not receive the completion?
The Empty completes, but the overall pipeline does not, because the initial Subject has not completed. The inner pipeline in which the Empty is produced (the flatMap) has "swallowed" the completion. This is the expected behavior.
You can see this more easily by simply producing a Just in the flatMap, e.g. Just(100):
subject
.flatMap {_ in Just(100) }
.sink(receiveCompletion: { completion in
print(completion)
}, receiveValue: { value in
print(value)
}).store(in: &cancellables)
subject.send(1)
You know and I know that a Just emits once and completes. But although the value of the Just arrives down the pipeline, there is no completion.
And you can readily see why it works this way. It would be very wrong if we had a potential sequence of values from our publisher but some intermediate publisher, produced in a flatMap, had the power to complete the whole pipeline and end it prematurely.
(And see my https://www.apeth.com/UnderstandingCombine/operators/operatorsTransformersBlockers/operatorsflatmap.html where I make the same point.)
If the goal is to send a completion down the pipeline, it's the subject that needs to complete. For example, you could say
func emptyOrSubjectPublisher (_ subject: PassthroughSubject<Int, Never>) -> AnyPublisher<Int, Never> {
subject
.flatMap { (i: Int) -> AnyPublisher<Int, Never> in
if i == 1 {
return subject.eraseToAnyPublisher()
} else {
subject.send(completion: .finished) // <--
return Empty().eraseToAnyPublisher()
}
}
.eraseToAnyPublisher()
}
[Note, however, that your whole emptyOrSubjectPublisher is peculiar; it is unclear what purpose it is intended to serve. Returning subject when i is 1 is kind of pointless too, because subject has already published the 1 by the time we get here, and isn't going to publish anything more right now. Thus, if you send 1 at the start, you won't receive 1 as a value, because your flatMap has swallowed it and has produced a publisher that isn't going to publish.]
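For instance, here is a minimal sketch of that last point, using the original emptyOrSubjectPublisher from the question (and assuming the same cancellables set):
let subject = PassthroughSubject<Int, Never>()
emptyOrSubjectPublisher(subject)
    .sink(receiveCompletion: { print("completion:", $0) },
          receiveValue: { print("value:", $0) })
    .store(in: &cancellables)
subject.send(1) // nothing is printed: the flatMap swallowed the 1
subject.send(2) // prints "value: 2", delivered through the inner subject returned for the 1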
Related
I would like to wait for data from all elements and combine them into one result:
Item: AnyPublisher<Int, Swift.Error>
Array: AnyPublisher<[Result<Int, Swift.Error>], Never>
Can this be done somehow? I've tried using Zip and Merge - but I couldn't get the desired result.
Example:
func createItem(num: Int) -> AnyPublisher<Int, Swift.Error> {
Just(num)
.setFailureType(to: Swift.Error.self)
.eraseToAnyPublisher()
}
func createItems(nums: [Int]) -> AnyPublisher<[Result<Int, Swift.Error>], Never> {
Publishers.MergeMany(nums.map { self.createItem(num: $0) } )
.collect()
.eraseToAnyPublisher()
}
However, the createItems function does not work.
It looks like you want to generate an array of results (either a value or an error) emitted by each publisher for each array element.
MergeMany + collect() is the right approach here. MergeMany will complete only when all its input publishers complete, and .collect() will wait for that completion event before emitting.
Using your example:
func createItems(nums: [Int]) -> AnyPublisher<[Result<Int, Error>], Never> {
    let publishers = nums.map { v -> AnyPublisher<Result<Int, Error>, Never> in
        createItem(num: v)                                  // AnyPublisher<Int, Error>
            .first()                                        // ensure a single result
            .map { Result<Int, Error>.success($0) }         // map a value to a .success result
            .catch { Just(Result<Int, Error>.failure($0)) } // map an error to a .failure result
            .eraseToAnyPublisher()
    }
    return Publishers.MergeMany(publishers)
        .collect() // wait for MergeMany to complete
        .eraseToAnyPublisher()
}
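A quick usage sketch (assuming a cancellables set is available to store the subscription):
createItems(nums: [1, 2, 3])
    .sink { results in
        // e.g. [.success(1), .success(2), .success(3)]; MergeMany does not guarantee ordering in general
        print(results)
    }
    .store(in: &cancellables)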
I have the following pipeline setup, and for some reason I can't understand, the second flatMap is skipped:
func letsDoThis() -> SignalProducer<(), MyError> {
let logError: (MyError) -> Void = { error in
print("Error: \(error); \((error as NSError).userInfo)")
}
return upload(uploads) // returns: SignalProducer<Signal<(), MyError>.Event, Never>
.collect() // SignalProducer<[Signal<(), MyError>.Event], Never>
.flatMap(.merge, { [uploadContext] values -> SignalProducer<[Signal<(), MyError>.Event], MyError> in
return context.saveSignal() // SignalProducer<(), NSError>
.map { values } // SignalProducer<[Signal<(), MyError>.Event], NSError>
.mapError { MyError.saveFailed(error: $0) } // SignalProducer<[Signal<(), MyError>.Event], MyError>
})
.flatMap(.merge, { values -> SignalProducer<(), MyError> in
if let error = values.first(where: { $0.error != nil })?.error {
return SignalProducer(error: error)
} else {
return SignalProducer(value: ())
}
})
.on(failed: logError)
}
See the transformations/signatures starting with the upload method.
When I say skipped I mean even if I add breakpoints or log statements, they are not executed.
Any idea how to debug this or how to fix?
Thanks.
EDIT: it most likely has something to do with the map within the first flatMap, but I'm not sure how to fix it yet.
See this link.
EDIT 2: versions
- ReactiveCocoa (10.1.0):
- ReactiveObjC (3.1.1)
- ReactiveObjCBridge (6.0.0):
- ReactiveSwift (6.1.0)
EDIT 3: I found the problem, which was due to my saveSignal method calling sendCompleted.
extension NSManagedObjectContext {
    func saveSignal() -> SignalProducer<(), NSError> {
        return SignalProducer { observer, disposable in
            self.perform {
                do {
                    try self.save()
                    observer.sendCompleted()
                }
                catch {
                    observer.send(error: error as NSError)
                }
            }
        }
    }
}
Sending completed makes sense, so I can't change that. Is there any way to change the flatMap so it still does what I intended?
I think the reason your second flatMap is never executed is that saveSignal never sends a value; it just finishes with a completed event or an error event. That means map will never be called, and no values will ever be passed to your second flatMap. You can fix it by doing something like this:
context.saveSignal()
.mapError { MyError.saveFailed(error: $0) }
.then(SignalProducer(value: values))
Instead of using map (which does nothing because there are no values to map), you just create a new producer that sends the values after saveSignal completes successfully.
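Putting it together, the first flatMap might then look something like this (a sketch based on the code in the question, keeping the same context and error mapping):
.flatMap(.merge, { values -> SignalProducer<[Signal<(), MyError>.Event], MyError> in
    // Save first; once saveSignal completes successfully, re-emit the collected
    // upload events so the second flatMap can inspect them for errors.
    return context.saveSignal()
        .mapError { MyError.saveFailed(error: $0) }
        .then(SignalProducer(value: values))
})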
I may be going about this the wrong way, but I have a function with which I want to emit multiple values over time. But I don't want it to start emitting until something is subscribed to that object. I'm coming to Combine from RxSwift, so I'm basically trying to duplicate Observable.create() from the RxSwift world. The closest I have found is returning a Future, but Futures only succeed or fail (so they are basically like a Single in RxSwift).
Is there some fundamental thing I am missing here? My end goal is to make a function that processes a video file and emits progress events until it completes, then emits a URL for the completed file.
Generally you can use a PassthroughSubject to publish custom outputs. You can wrap a PassthroughSubject (or multiple PassthroughSubjects) in your own implementation of Publisher to ensure that only your process can send events through the subject.
Let's mock a VideoFrame type and some input frames for example purposes:
typealias VideoFrame = String
let inputFrames: [VideoFrame] = ["a", "b", "c"]
Now we want to write a function that synchronously processes these frames. Our function should report progress somehow, and at the end, it should return the output frames. To report progress, our function will take a PassthroughSubject<Double, Never>, and send its progress (as a fraction from 0 to 1) to the subject:
func process(_ inputFrames: [VideoFrame], progress: PassthroughSubject<Double, Never>) -> [VideoFrame] {
var outputFrames: [VideoFrame] = []
for input in inputFrames {
progress.send(Double(outputFrames.count) / Double(inputFrames.count))
outputFrames.append("output for \(input)")
}
return outputFrames
}
Okay, so now we want to turn this into a publisher. The publisher needs to output both progress and a final result. So we'll use this enum as its output:
public enum ProgressEvent<Value> {
case progress(Double)
case done(Value)
}
Now we can define our Publisher type. Let's call it SyncPublisher, because when it receives a Subscriber, it immediately (synchronously) performs its entire computation.
public struct SyncPublisher<Value>: Publisher {
public init(_ run: @escaping (PassthroughSubject<Double, Never>) throws -> Value) {
self.run = run
}
public var run: (PassthroughSubject<Double, Never>) throws -> Value
public typealias Output = ProgressEvent<Value>
public typealias Failure = Error
public func receive<Downstream: Subscriber>(subscriber: Downstream) where Downstream.Input == Output, Downstream.Failure == Failure {
let progressSubject = PassthroughSubject<Double, Never>()
let doneSubject = PassthroughSubject<ProgressEvent<Value>, Error>()
progressSubject
.setFailureType(to: Error.self)
.map { ProgressEvent<Value>.progress($0) }
.append(doneSubject)
.subscribe(subscriber)
do {
let value = try run(progressSubject)
progressSubject.send(completion: .finished)
doneSubject.send(.done(value))
doneSubject.send(completion: .finished)
} catch {
progressSubject.send(completion: .finished)
doneSubject.send(completion: .failure(error))
}
}
}
Now we can turn our process(_:progress:) function into a SyncPublisher like this:
let inputFrames: [VideoFrame] = ["a", "b", "c"]
let pub = SyncPublisher<[VideoFrame]> { process(inputFrames, progress: $0) }
The run closure is { process(inputFrames, progress: $0) }. Remember that $0 here is a PassthroughSubject<Double, Never>, exactly what process(_:progress:) wants as its second argument.
When we subscribe to this pub, it will first create two subjects. One subject is the progress subject and gets passed to the closure. We'll use the other subject to publish either the final result and a .finished completion, or just a .failure completion if the run closure throws an error.
The reason we use two separate subjects is that it ensures our publisher is well-behaved. If the run closure returns normally, the publisher publishes zero or more progress reports, followed by a single result, followed by .finished. If the run closure throws an error, the publisher publishes zero or more progress reports, followed by a .failure completion. There is no way for the run closure to make the publisher emit multiple results, or emit more progress reports after emitting the result.
At last, we can subscribe to pub to see if it works properly:
pub
.sink(
receiveCompletion: { print("completion: \($0)") },
receiveValue: { print("output: \($0)") })
Here's the output:
output: progress(0.0)
output: progress(0.3333333333333333)
output: progress(0.6666666666666666)
output: done(["output for a", "output for b", "output for c"])
completion: finished
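For completeness, here is a small sketch of the failure path (ProcessingError is a hypothetical error type introduced just for this example):
struct ProcessingError: Error {}
let failingPub = SyncPublisher<[VideoFrame]> { progress in
    progress.send(0.5) // one progress report before the failure
    throw ProcessingError()
}
failingPub
    .sink(
        receiveCompletion: { print("completion: \($0)") },
        receiveValue: { print("output: \($0)") })
This should print the progress event followed by a failure completion, matching the contract described above.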
The following extension to AnyPublisher replicates Observable.create semantics by composing a PassthroughSubject. This includes cancellation semantics.
AnyPublisher.create() Swift 5.6 Extension
public extension AnyPublisher {
    static func create<Output, Failure>(_ subscribe: @escaping (AnySubscriber<Output, Failure>) -> AnyCancellable) -> AnyPublisher<Output, Failure> {
        let passthroughSubject = PassthroughSubject<Output, Failure>()
        var cancellable: AnyCancellable?
        return passthroughSubject
            .handleEvents(receiveSubscription: { subscription in
                // On subscription, hand the caller an AnySubscriber that forwards
                // values and completion into the subject.
                let subscriber = AnySubscriber<Output, Failure> { subscription in
                } receiveValue: { input in
                    passthroughSubject.send(input)
                    return .unlimited
                } receiveCompletion: { completion in
                    passthroughSubject.send(completion: completion)
                }
                cancellable = subscribe(subscriber)
            }, receiveCompletion: { completion in
            }, receiveCancel: {
                // Propagate downstream cancellation to the caller's cancellable.
                cancellable?.cancel()
            })
            .eraseToAnyPublisher()
    }
}
Usage
func doSomething() -> AnyPublisher<Int, Error> {
return AnyPublisher<Int, Error>.create { subscriber in
// Imperative implementation of doing something can call subscriber as follows
_ = subscriber.receive(1)
subscriber.receive(completion: .finished)
// subscriber.receive(completion: .failure(myError))
return AnyCancellable {
// Imperative cancellation implementation
}
}
}
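For reference, a minimal sketch of subscribing to doSomething() (the cancellables set is assumed to exist in the caller):
var cancellables = Set<AnyCancellable>()
doSomething()
    .sink(
        receiveCompletion: { print("completion: \($0)") },
        receiveValue: { print("value: \($0)") })
    .store(in: &cancellables)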
Using Apple's new Combine framework I want to make multiple requests from each element in a list. Then I want a single result from a reduction of all the responses. Basically I want to go from a list of publishers to a single publisher that holds a list of responses.
I've tried making a list of publishers, but I don't know how to reduce that list into a single publisher. And I've tried making a publisher containing a list but I can't flat map a list of publishers.
Please look at the "createIngredients" function
func createIngredient(ingredient: Ingredient) -> AnyPublisher<CreateIngredientMutation.Data, Error> {
return apollo.performPub(mutation: CreateIngredientMutation(name: ingredient.name, optionalProduct: ingredient.productId, quantity: ingredient.quantity, unit: ingredient.unit))
.eraseToAnyPublisher()
}
func createIngredients(ingredients: [Ingredient]) -> AnyPublisher<[CreateIngredientMutation.Data], Error> {
// first attempt
let results = ingredients
.map(createIngredient)
// results = [AnyPublisher<CreateIngredientMutation.Data, Error>]
// second attempt
return Publishers.Just(ingredients)
.eraseToAnyPublisher()
.flatMap { (list: [Ingredient]) -> Publisher<[CreateIngredientMutation.Data], Error> in
return list.map(createIngredient) // [AnyPublisher<CreateIngredientMutation.Data, Error>]
}
}
I'm not sure how to take an array of publishers and convert that to a publisher containing an array.
Result value of type '[AnyPublisher]' does not conform to closure result type 'Publisher'
Essentially, in your specific situation you're looking at something like this:
func createIngredients(ingredients: [Ingredient]) -> AnyPublisher<[CreateIngredientMutation.Data], Error> {
Publishers.MergeMany(ingredients.map(createIngredient(ingredient:)))
.collect()
.eraseToAnyPublisher()
}
This 'collects' all the elements produced by the upstream publishers and – once they have all completed – produces an array with all the results and finally completes itself.
Bear in mind, if one of the upstream publishers fails – or produces more than one result – the number of collected elements may not match the number of input publishers, so you may need additional operators to mitigate this depending on your situation.
The more generic answer, with a way you can test it using the EntwineTest framework:
import XCTest
import Combine
import EntwineTest
final class MyTests: XCTestCase {
func testCreateArrayFromArrayOfPublishers() {
typealias SimplePublisher = Just<Int>
// we'll create our 'list of publishers' here. Each publisher emits a single
// Int and then completes successfully – using the `Just` publisher.
let publishers: [SimplePublisher] = [
SimplePublisher(1),
SimplePublisher(2),
SimplePublisher(3),
]
// we'll turn our array of publishers into a single merged publisher
let publisherOfPublishers = Publishers.MergeMany(publishers)
// Then we `collect` all the individual publisher elements results into
// a single array
let finalPublisher = publisherOfPublishers.collect()
// Let's test what we expect to happen, will happen.
// We'll create a scheduler to run our test on
let testScheduler = TestScheduler()
// Then we'll start a test. Our test will subscribe to our publisher
// at a virtual time of 200, and cancel the subscription at 900
let testableSubscriber = testScheduler.start { finalPublisher }
// we're expecting that, immediately upon subscription, our results will
// arrive. This is because we're using `just` type publishers which
// dispatch their contents as soon as they're subscribed to
XCTAssertEqual(testableSubscriber.recordedOutput, [
(200, .subscription), // we're expecting to subscribe at 200
(200, .input([1, 2, 3])), // then receive an array of results immediately
(200, .completion(.finished)), // the `collect` operator finishes immediately after completion
])
}
}
I think that Publishers.MergeMany could be of help here. In your example, you might use it like so:
func createIngredients(ingredients: [Ingredient]) -> AnyPublisher<CreateIngredientMutation.Data, Error> {
let publishers = ingredients.map(createIngredient(ingredient:))
return Publishers.MergeMany(publishers).eraseToAnyPublisher()
}
That will give you a publisher that sends you single values of the Output.
However, if you specifically want the Output in an array all at once at the end of all your publishers completing, you can use collect() with MergeMany:
func createIngredients(ingredients: [Ingredient]) -> AnyPublisher<[CreateIngredientMutation.Data], Error> {
let publishers = ingredients.map(createIngredient(ingredient:))
return Publishers.MergeMany(publishers).collect().eraseToAnyPublisher()
}
You could simplify either of the above examples into a single line if you prefer, i.e.:
func createIngredients(ingredients: [Ingredient]) -> AnyPublisher<CreateIngredientMutation.Data, Error> {
Publishers.MergeMany(ingredients.map(createIngredient(ingredient:))).eraseToAnyPublisher()
}
You could also define your own custom merge() extension method on Sequence and use that to simplify the code slightly:
extension Sequence where Element: Publisher {
func merge() -> Publishers.MergeMany<Element> {
Publishers.MergeMany(self)
}
}
func createIngredients(ingredients: [Ingredient]) -> AnyPublisher<CreateIngredientMutation.Data, Error> {
ingredients.map(createIngredient).merge().eraseToAnyPublisher()
}
To add to the answer by Tricky, here is a solution which retains the order of elements in the array.
It passes an index for each element through the whole chain, and sorts the collected array by the index.
Complexity should be O(n log n) because of the sorting.
import Combine
extension Publishers {
private struct EnumeratedElement<T> {
let index: Int
let element: T
init(index: Int, element: T) {
self.index = index
self.element = element
}
init(_ enumeratedSequence: EnumeratedSequence<[T]>.Iterator.Element) {
index = enumeratedSequence.offset
element = enumeratedSequence.element
}
}
static func mergeMappedRetainingOrder<InputType, OutputType>(
_ inputArray: [InputType],
mapTransform: (InputType) -> AnyPublisher<OutputType, Error>
) -> AnyPublisher<[OutputType], Error> {
let enumeratedInputArray = inputArray.enumerated().map(EnumeratedElement.init)
let enumeratedMapTransform: (EnumeratedElement<InputType>) -> AnyPublisher<EnumeratedElement<OutputType>, Error> = { enumeratedInput in
mapTransform(enumeratedInput.element)
.map { EnumeratedElement(index: enumeratedInput.index, element: $0)}
.eraseToAnyPublisher()
}
let sortEnumeratedOutputArrayByIndex: ([EnumeratedElement<OutputType>]) -> [EnumeratedElement<OutputType>] = { enumeratedOutputArray in
enumeratedOutputArray.sorted { $0.index < $1.index }
}
let transformToNonEnumeratedArray: ([EnumeratedElement<OutputType>]) -> [OutputType] = {
$0.map { $0.element }
}
return Publishers.MergeMany(enumeratedInputArray.map(enumeratedMapTransform))
.collect()
.map(sortEnumeratedOutputArrayByIndex)
.map(transformToNonEnumeratedArray)
.eraseToAnyPublisher()
}
}
Unit test for the solution:
import XCTest
import Combine
final class PublishersExtensionsTests: XCTestCase {
// MARK: - Private properties
private var cancellables = Set<AnyCancellable>()
// MARK: - Tests
func test_mergeMappedRetainingOrder() {
let expectation = expectation(description: "mergeMappedRetainingOrder publisher")
let numbers = (1...100).map { _ in Int.random(in: 1...3) }
let mapTransform: (Int) -> AnyPublisher<Int, Error> = {
let delayTimeInterval = RunLoop.SchedulerTimeType.Stride(Double($0))
return Just($0)
.delay(for: delayTimeInterval, scheduler: RunLoop.main)
.setFailureType(to: Error.self)
.eraseToAnyPublisher()
}
let resultNumbersPublisher = Publishers.mergeMappedRetainingOrder(numbers, mapTransform: mapTransform)
resultNumbersPublisher.sink(receiveCompletion: { _ in }, receiveValue: { resultNumbers in
XCTAssertTrue(numbers == resultNumbers)
expectation.fulfill()
}).store(in: &cancellables)
waitForExpectations(timeout: 5)
}
}
You can do it in one line:
.flatMap(Publishers.Sequence.init(sequence:))
I have one observable (we will call it trigger) that can emit many times in a short period of time. When it emits, I make a network request and then store the result with the scan operator.
My problem is that I would like to wait until the request has finished before making it again. (But as it is now, if trigger emits twice, it makes the request again regardless of whether fetchData has finished.)
Bonus: I also would like to take only the first emission every X seconds. (Debounce is not the solution, because the trigger can keep emitting indefinitely and I still want one value every X seconds; it isn't throttle either, because if the trigger emits twice in quick succession I would get the first value and then the second delayed by X seconds.)
The code:
trigger.flatMap { [unowned self] _ in
self.fetchData()
}.scan([], accumulator: { lastValue, newValue in
return lastValue + newValue
})
and fetchData:
func fetchData() -> Observable<[ReusableCellVMContainer]>
trigger:
let trigger = Observable.of(input.viewIsLoaded, handle(input.isNearBottomEdge)).merge()
I'm sorry, I misunderstood what you were trying to accomplish in my answer below.
The operator that will achieve what you want is flatMapFirst. This will ignore events from the trigger until the fetchData() is complete.
trigger
.flatMapFirst { [unowned self] _ in
self.fetchData()
}
.scan([], accumulator: { lastValue, newValue in
return lastValue + newValue
})
I'm leaving my previous answer below in case it helps (if anything, it has the "bonus" answer.)
The problem you are having is called "back pressure" which is when the observable is producing values faster than the observer can handle.
In this particular case, I recommend that you don't restrict the data fetch requests and instead map each request to a key and then emit the array in order:
trigger
.enumerated()
.flatMap { [unowned self] count, _ in
Observable.combineLatest(Observable.just(count), self.fetchData())
}
.scan(into: [Int: Value](), accumulator: { lastValue, newValue in
lastValue[newValue.0] = newValue.1
})
.map { $0.sorted(by: { $0.key < $1.key }).map { $0.value }}
To make the above work, you need this:
extension ObservableType {
func enumerated() -> Observable<(Int, E)> {
let shared = share()
let counter = shared.scan(0, accumulator: { prev, _ in return prev + 1 })
return Observable.zip(counter, shared)
}
}
This way, your network requests start as soon as possible, but you aren't losing the order in which they were made.
For your "bonus", the buffer operator will do exactly what you want. Something like:
trigger.buffer(timeSpan: seconds, count: Int.max, scheduler: MainScheduler.instance)
.map { $0.first }
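One small note on the bonus: $0.first is an Optional, so each empty buffer window emits nil. If empty windows should be skipped, a variant along these lines might work (assuming RxSwift 5's DispatchTimeInterval-based API, with seconds as a placeholder for your interval):
trigger
    .buffer(timeSpan: .seconds(seconds), count: Int.max, scheduler: MainScheduler.instance)
    .compactMap { $0.first } // drop windows in which the trigger did not emit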