concatMap / flatMap should run immediately, on the same scheduler

Given a Service object, I want to be sure that calls to the service don't interfere with each other. In my case, while function A is doing its work, nothing from function B should execute until the scheduler is free again.
Here's what this looks like:
import RxSwift

class Service {
    func handleJobA(input: String) -> Observable<String> {
        return Observable.just(input)
            .do(onNext: { (str) in
                print("Job A: \(str)")
            })
            .concatMap { input -> Observable<String> in
                return Observable.just("Job AA: \(input)")
                    .delay(2, scheduler: self.scheduler)
                    .do(onNext: { (str) in
                        print(str)
                    })
            }
            .subscribeOn(scheduler)
    }

    func handleJobB(input: String) -> Observable<String> {
        return Observable.just(input)
            .do(onNext: { (str) in
                print("Job B: \(str)")
            })
            .delay(1, scheduler: scheduler)
            .concatMap { input -> Observable<String> in
                return Observable.just("Job BB: \(input)")
                    .do(onNext: { (str) in
                        print(str)
                    })
            }
            .subscribeOn(scheduler)
    }

    let scheduler = SerialDispatchQueueScheduler(internalSerialQueueName: "Service")
}
let service = Service()

_ = Observable.from(["1","2","3"])
    .concatMap { service.handleJobA(input: $0) }
    .subscribe(onNext: {
        print($0 + " √")
    })

_ = Observable.from(["1","2","3"])
    .concatMap { service.handleJobB(input: $0) }
    .subscribe(onNext: {
        print($0 + " √")
    })

import PlaygroundSupport
PlaygroundPage.current.needsIndefiniteExecution = true
At the moment, the output is:
Job A: 1
Job B: 1
Job BB: 1
Job BB: 1 √
Job B: 2
Job AA: 1
Job AA: 1 √
Job A: 2
Job BB: 2
Job BB: 2 √
Job B: 3
Job BB: 3
Job BB: 3 √
Job AA: 2
Job AA: 2 √
Job A: 3
Job AA: 3
Job AA: 3 √
However, this shows the fundamental problem: the internal delays (which can come from anything, really: network, processing) cause the observable processing to get out of order.
What I want is this:
Job A: 1
Job AA: 1
Job AA: 1 √
Job B: 1
Job BB: 1
Job BB: 1 √
Job B: 2
Job BB: 2
Job BB: 2 √
Job B: 3
Job BB: 3
Job BB: 3 √
Job A: 2
Job AA: 2
Job AA: 2 √
Job A: 3
Job AA: 3
Job AA: 3 √
That means once a function has started processing a task, no one else gets access until it is done.
I received a very good answer previously. It's not totally applicable, as flatMap/concatMap (?) both seem to dislike the schedulers.
My theory is that the concatMap call indeed does the right job, but then schedules the child sequence emissions at the end of the scheduler's queue, whereas I would want them at the front, to be processed next.

I can't explain the scheduler's behaviour, but I can make a small proposal:
...once a function has started processing a task, no one else gets
access until it is done...
You can funnel all your handleJob calls through a single concatMap inside the Service to get the behaviour you require; the caller can then even use flatMap:
Observable
    .from([1,2,3,4,5,6])
    .flatMap({ (value) -> Observable<String> in
        switch value % 2 == 0 {
        case true:
            return service.handleJobA(input: "\(value)")
        case false:
            return service.handleJobB(input: "\(value)")
        }
    })
    .subscribe(onNext: {
        print($0 + " √")
    })
Service class example:
private class Service {
    private lazy var result = PublishSubject<(index: Int, result: String)>()
    private lazy var publish = PublishSubject<(index: Int, input: String, transformation: (String) -> String)>()
    private lazy var index: Int = 0
    private lazy var disposeBag = DisposeBag()

    init() {
        publish
            .asObservable()
            .concatMap({ (index, input, transformation) -> Observable<(index: Int, result: String)> in
                let dueTime = RxTimeInterval(arc4random_uniform(3) + 1)
                return Observable
                    .just((index: index, result: transformation(input)))
                    .delay(dueTime, scheduler: self.scheduler)
            })
            .bind(to: result)
            .disposed(by: disposeBag)
    }

    func handleJobA(input: String) -> Observable<String> {
        let transformation: (String) -> String = { string in
            return "Job A: \(string)"
        }
        return handleJob(input: input, transformation: transformation)
    }

    func handleJobB(input: String) -> Observable<String> {
        let transformation: (String) -> String = { string in
            return "Job B: \(string)"
        }
        return handleJob(input: input, transformation: transformation)
    }

    func handleJob(input: String, transformation: @escaping (String) -> String) -> Observable<String> {
        index += 1
        defer {
            publish.onNext((index, input, transformation))
        }
        return result
            .filter({ [expected = index] (index, result) -> Bool in
                return expected == index
            })
            .map({ $0.result })
            .take(1)
            .shareReplayLatestWhileConnected()
    }

    let scheduler = SerialDispatchQueueScheduler(internalSerialQueueName: "Service")
}

Related

Is there a higher-order function to convert a linked list to an array?

Imagine I have a simple linked list:
class Node {
    var parent: Node?

    init(parent: Node? = nil) {
        self.parent = parent
    }
}

// Create the chain: a <- b <- c
let a = Node()
let b = Node(parent: a)
let c = Node(parent: b)
Now I want to convert c into an array ([c, b, a]) so I can use other higher-order functions like map.
What is a method that produces an array from a linked list typically called?
Is there a way to use other higher-order functions to implement this and not use a loop?
The only implementation I could think of falls back to using a loop:
func chain<T>(_ initial: T, _ next: (T) -> T?) -> [T] {
    var result = [initial]
    while let n = next(result.last!) {
        result.append(n)
    }
    return result
}

chain(c) { $0.parent } // == [c, b, a]
I'm wondering if there is a built-in way to use functions like map/reduce/etc. to get the same results.
You can use sequence(first:next:) to make a Sequence and then Array() to turn that sequence into an array:
let result = Array(sequence(first: c, next: { $0.parent }))
or equivalently:
let result = Array(sequence(first: c, next: \.parent))
You could use it to implement chain:
func chain<T>(_ initial: T, _ next: @escaping (T) -> T?) -> [T] {
    Array(sequence(first: initial, next: next))
}
But I'd just use it directly.
Note: If you just want to call map, you don't need to turn the sequence into an Array. You can just apply .map to the sequence.
For example, here is a useless map that represents each node in the linked list with a 1:
let result = sequence(first: c, next: \.parent).map { _ in 1 }
You could make Node itself conform to Sequence; this automatically brings along all the higher-order functions: map, filter, reduce, flatMap, etc.
class Node {
    var parent: Node?
    var value: String

    init(parent: Node? = nil, value: String = "") {
        self.parent = parent
        self.value = value
    }
}

extension Node: Sequence {
    struct NodeIterator: IteratorProtocol {
        var node: Node?

        mutating func next() -> Node? {
            let result = node
            node = node?.parent
            return result
        }
    }

    func makeIterator() -> NodeIterator {
        NodeIterator(node: self)
    }
}
// Create the chain: a <- b <- c
let a = Node(value: "a")
let b = Node(parent: a, value: "b")
let c = Node(parent: b, value: "c")
// each node behaves like its own sequence
print(c.map { $0.value }) // ["c", "b", "a"]
print(b.map { $0.value }) // ["b", "a"]
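Since the conformance brings the whole Sequence API along, the other higher-order functions work the same way. As a small illustrative follow-up (not part of the original answer, reusing the same a, b, c nodes):
// reduce and filter also come for free from the Sequence conformance
print(c.reduce("") { $0 + $1.value })                // "cba"
print(c.filter { $0.value != "b" }.map { $0.value }) // ["c", "a"]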

How do you test if an observable is retried X times when failing?

I've got an observable that I need to retry a few times if it fails, and I'm currently trying to unit test that behaviour. So far I have tried the following, but it fails: the count always ends up as 1 instead of 11:
func testSetCreated_ShouldRetry10Times_BeforeStopping() throws {
    let setCreatedProvider: (String, String) -> Single<ResponseData> = { (_, _) in
        return .error(RxCocoaURLError.unknown)
    }
    let statusHandler = createConsultationHandler(setCreatedProvider: setCreatedProvider)
    var setCreatedEmitCount = 0
    statusHandler.setCreated(consultationId: .random(length: 24))
        .subscribe(onError: { _ in
            setCreatedEmitCount += 1
        })
        .disposed(by: disposeBag)
    sleep(10)
    XCTAssertEqual(11, setCreatedEmitCount)
}
So, how exactly can I test that this will be called max 11 times if failing? Thanks.
First, understand that an Observable will only ever emit a single error event. There is no way to get your test to pass as it stands.
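To see why the count stays at 1, here is a minimal illustration (not from the original answer; it reuses the RxCocoaURLError enum defined further down): retry resubscribes to the failing source, but the subscriber's onError closure still fires at most once, when the retries are exhausted.
// The error event terminates the sequence, so onError reaches the subscriber
// a single time even though retry(3) subscribed to the source three times.
_ = Observable<Int>
    .error(RxCocoaURLError.unknown)
    .retry(3)
    .subscribe(onError: { _ in
        print("onError fires exactly once")
    })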
However, the following will pass, although I'm not sure why the function is named 10Times when it checks for 11 attempts (the initial subscription plus 10 retries).
class rx_sandboxTests: XCTestCase {
    var disposeBag = DisposeBag()

    func testSetCreated_ShouldRetry10Times_BeforeStopping() throws {
        var setCreatedEmitCount = 0
        let setCreatedProvider: (String, String) -> Single<ResponseData> = { (_, _) in
            setCreatedEmitCount += 1
            return .error(RxCocoaURLError.unknown)
        }
        let statusHandler = createConsultationHandler(setCreatedProvider: setCreatedProvider)
        statusHandler.setCreated(consultationId: .random(length: 24))
            .subscribe()
            .disposed(by: disposeBag)
        XCTAssertEqual(11, setCreatedEmitCount)
    }
}
func createConsultationHandler(setCreatedProvider: @escaping (String, String) -> Single<ResponseData>) -> ConsultationHandler {
    return ConsultationHandler(setCreatedProvider: setCreatedProvider)
}

struct ConsultationHandler {
    private let createdProvider: (String, String) -> Single<ResponseData>

    init(setCreatedProvider: @escaping (String, String) -> Single<ResponseData>) {
        self.createdProvider = setCreatedProvider
    }

    func setCreated(consultationId: String) -> Observable<ResponseData> {
        return Observable.just(())
            .flatMap { [createdProvider] in createdProvider("hello", "world") }
            .retry(11)
    }
}

struct ResponseData { }
enum RxCocoaURLError: Error { case unknown }

extension String {
    static func random(length: Int) -> String {
        return ""
    }
}

How to write generic function in Swift using a protocol associatedtype

I'm trying to write a generic handler function which will receive resources and process them into ResourceModel instances:
func handleGeneric<R, M: ResourceModel>(resource: R, modelType: M.Type) {
I got stuck with Swift protocol usage, though.
This is the error I get:
Resource.playground:60:20: note: generic parameter 'R' of global function 'handleGeneric(resource:modelType:)' declared here
func handleGeneric<R, M: ResourceModel>(resource: R, modelType: M.Type) {
import UIKit

// Structs
struct ResourceA: Decodable {
    let id: Int
    let name: String
}

struct ResourceB: Decodable {
    let id: Int
    let number: Int
}

// Models
protocol ResourceModel {
    associatedtype R
    init(resource: R)
}

class ModelA: ResourceModel {
    var id: Int = 0
    var name: String = ""

    convenience required init(resource: ResourceA) {
        self.init()
        id = resource.id
        name = resource.name
    }
}

class ModelB: ResourceModel {
    var id: Int = 0
    var number: Int = 0

    convenience required init(resource: ResourceB) {
        self.init()
        id = resource.id
        number = resource.number
    }
}

// Save
func handleA(resource: ResourceA) {
    let m = ModelA(resource: resource)
    // ... save A ...
    print("handle A completed")
}

func handleB(resource: ResourceB) {
    let m = ModelB(resource: resource)
    // ... save B ...
    print("handle B completed")
}

// Generic handler
func handleGeneric<R, M: ResourceModel>(resource: R, modelType: M.Type) {
    let m = M.init(resource: resource)
    // ... save m ...
    print("handle generic completed")
}

// Download
func downloadA() {
    let r = ResourceA(id: 1, name: "A")
    handleA(resource: r)
}

func downloadB() {
    let r = ResourceB(id: 2, number: 10)
    handleB(resource: r)
}

downloadA()
downloadB()
The question is how can I implement the generic function I need? I.e.
handleGeneric<ResourceA, ModelA>(ResourceA(id: 1, name: "A"))
handleGeneric<ResourceB, ModelB>(ResourceB(id: 2, number: 10))
handleGeneric<ResourceC, ModelC>(ResourceC(id: 3, "foo": "bar"))
handleGeneric<ResourceD, ModelD>(ResourceD(id: 4, "egg": "spam"))
Update:
I tried
handleGeneric(resource: ResourceA(id: 1, name: "A"), modelType: ModelA.Type)
handleGeneric(resource: ResourceB(id: 2, number: 10), modelType: ModelB.Type)
But I get
Cannot convert value of type 'ResourceA' to expected argument type '_.R'
Don't put R in the <> -- use M.R in your signature.
func handleGeneric<M: ResourceModel>(resource: M.R, modelType: M.Type) {
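Putting the suggestion together, a minimal sketch of the corrected handler and its call sites (not from the original answer; note that the call sites pass the metatype with .self, not .Type):
// M.R ties the resource parameter's type to the model's associatedtype
func handleGeneric<M: ResourceModel>(resource: M.R, modelType: M.Type) {
    let m = M(resource: resource)
    // ... save m ...
    print("handle generic completed: \(m)")
}

handleGeneric(resource: ResourceA(id: 1, name: "A"), modelType: ModelA.self)
handleGeneric(resource: ResourceB(id: 2, number: 10), modelType: ModelB.self)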

How to combine two sequences cumulatively in RxSwift?

I have two sequences and I'd like to combine them so that any results coming into the second sequence would be cumulatively combined with the latest result from the first sequence.
A---------------B----------------------C------------- ...
-------1-2-----------3-------------------------------- ...
So that the result would be:
A-----A+1--A+1+2---B----B+3--------------C-------------
How might I do that in Rx? (I'm using RxSwift)
You can use combineLatest + bufferWhen
https://stackblitz.com/edit/typescript-s1pemu
import { bufferWhen } from 'rxjs/operators';
import { interval, combineLatest } from 'rxjs';

// timerOne$ emits once every 4s
const timerOne$ = interval(4000);
// timerTwo$ emits once every 1s
const timerTwo$ = interval(1000);

// whenever timerOne$ emits, flush the values buffered from timerTwo$
// and emit the latest value of each stream as an array
combineLatest(
  timerOne$,
  timerTwo$.pipe(bufferWhen(() => timerOne$)),
)
.subscribe(
  ([timerValOne, timerValTwo]) => {
    console.log(
      `Timer One Latest: ${timerValOne},
       Timer Two Latest: ${timerValTwo}`,
    );
    console.log('Total:', timerValOne + timerValTwo.reduce((acc, curr) => acc + curr, 0));
  }
);
Here you go. Hopefully, you can use this as a template on how to write a test to establish what you want and then write the code that produces it.
enum Action<A, B> {
    case a(A)
    case b(B)
}

func example<A, B>(_ a: Observable<A>, _ b: Observable<B>) -> Observable<(A?, [B])> {
    return Observable.merge(a.map(Action.a), b.map(Action.b))
        .scan((A?.none, [B]())) { current, next in
            switch next {
            case .a(let a):
                return (a, [])
            case .b(let b):
                return (current.0, current.1 + [b])
            }
        }
}
And here is a test to prove it works:
class RxSandboxTests: XCTestCase {
    func testExample() {
        let scheduler = TestScheduler(initialClock: 0)
        let a = scheduler.createColdObservable([.next(0, "A"), .next(16, "B"), .next(39, "C")])
        let b = scheduler.createColdObservable([.next(7, 1), .next(9, 2), .next(21, 3)])

        let result = scheduler.start {
            example(a.asObservable(), b.asObservable())
                .map { Result(a: $0.0, b: $0.1) }
        }

        XCTAssertEqual(
            result.events,
            [
                .next(200, Result(a: "A", b: [])),
                .next(207, Result(a: "A", b: [1])),
                .next(209, Result(a: "A", b: [1, 2])),
                .next(216, Result(a: "B", b: [])),
                .next(221, Result(a: "B", b: [3])),
                .next(239, Result(a: "C", b: []))
            ]
        )
    }
}

struct Result: Equatable {
    let a: String?
    let b: [Int]
}

Why does my Swift CLI code that uses GCD run at the same speed as the code that doesn't use concurrency?

So, I've written some code in Swift 3 as a CLI to practice using Grand Central Dispatch.
The idea is: there are three arrays, each filled with 100,000,000 values. I have a function that sums all the numbers in an array and prints the result, and then two more functions that time summing the three arrays. One runs the sum function on each of the three arrays sequentially; the other dispatches the sum of each array asynchronously onto a queue.
Here's the code:
import Foundation

func sum(array a: [Int]) {
    var suma = 0
    for n in a {
        suma += n
    }
    print(suma)
}

func gcd(a: [Int], b: [Int], c: [Int]) {
    let queue = DispatchQueue(label: "com.apple.queue")
    let group = DispatchGroup()
    let methodStart = Date()
    queue.async(group: group, execute: {
        sum(array: a)
    })
    queue.async(group: group, execute: {
        sum(array: b)
    })
    queue.async(group: group, execute: {
        sum(array: c)
    })
    group.notify(queue: .main) {
        let methodFinish = Date()
        let executionTime = methodFinish.timeIntervalSince(methodStart)
        print("GCD Execution Time: \(executionTime)")
    }
}

func non_gcd(a: [Int], b: [Int], c: [Int]) {
    let methodStart = Date()
    sum(array: a)
    sum(array: b)
    sum(array: c)
    let methodFinish = Date()
    let executionTime = methodFinish.timeIntervalSince(methodStart)
    print("Non_GCD Execution Time: \(executionTime)")
}

var a = [Int]()
var b = [Int]()
var c = [Int]()

// fill each array with 100,000,000 values
for i in 0..<100000000 {
    a.append(i)
    b.append(i+1)
    c.append(i+2)
}

non_gcd(a: a, b: b, c: c)
gcd(a: a, b: b, c: c)

dispatchMain()
And here's the output where you can see it runs about the same time:
4999999950000000
5000000050000000
5000000150000000
Non_GCD Execution Time: 1.15053302049637
4999999950000000
5000000050000000
5000000150000000
GCD Execution Time: 1.16769099235535
I'm curious as to why it's almost the same time?
I thought concurrent programming made things faster. I think I'm missing something important.
You are creating a serial queue, so your "gcd" code doesn't take any advantage of multi-threading.
Change:
let queue = DispatchQueue(label: "com.apple.queue")
to:
let queue = DispatchQueue(label: "com.apple.queue", attributes: .concurrent)
and then run your tests again. You should see an improvement since the three calls to async can now take advantage of multi-threading.
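For reference, a minimal sketch of the timing function with the concurrent attribute applied (gcdConcurrent is just an illustrative name; it otherwise mirrors the gcd function from the question and assumes the same sum(array:) helper):
func gcdConcurrent(a: [Int], b: [Int], c: [Int]) {
    // .concurrent lets the three dispatched work items run on separate threads
    let queue = DispatchQueue(label: "com.apple.queue", attributes: .concurrent)
    let group = DispatchGroup()
    let methodStart = Date()
    queue.async(group: group) { sum(array: a) }
    queue.async(group: group) { sum(array: b) }
    queue.async(group: group) { sum(array: c) }
    group.notify(queue: .main) {
        let executionTime = Date().timeIntervalSince(methodStart)
        print("Concurrent GCD Execution Time: \(executionTime)")
    }
}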