I'm building a Swift-based iOS application that uses PromiseKit to handle promises (although I'm open to switching promise libraries if that makes my problem easier to solve). There's a section of code designed to handle questions about overwriting files.
I have code that looks approximately like this:
let fileList = [list, of, files, could, be, any, length, ...]
for file in fileList {
    let overwrite = Promise<Bool> { fulfill, reject in
        if fileAlreadyExists {
            let alert = UIAlertController(title: nil, message: "Overwrite the file?", preferredStyle: .alert)
            alert.addAction(UIAlertAction(title: "Yes", style: .default, handler: { action in
                fulfill(true)
            }))
            alert.addAction(UIAlertAction(title: "No", style: .cancel, handler: { action in
                fulfill(false)
            }))
            self.present(alert, animated: true, completion: nil)
        } else {
            fulfill(true)
        }
    }
    overwrite.then { result -> Promise<Void> in
        Promise<Void> { fulfill, reject in
            if result {
                // Overwrite the file
            } else {
                // Don't overwrite the file
            }
        }
    }
}
However, this doesn't have the desired effect; the for loop "completes" as quickly as it takes to iterate over the list, which means that UIAlertController gets confused as it tries to overlay one question on another. What I want is for the promises to chain, so that only once the user has selected "Yes" or "No" (and the subsequent "overwrite" or "don't overwrite" code has executed) does the next iteration of the for loop happen. Essentially, I want the whole sequence to be sequential.
How can I chain these promises, considering the array is of indeterminate length? I feel as if I'm missing something obvious.
Edit: one of the answers below suggests recursion. That sounds reasonable, although I'm not sure about the implications for Swift's stack (this is inside an iOS app) if the list grows long. It would be ideal if there were a construct to do this more naturally by chaining onto the promise.
One approach: create a function that takes a list of the objects remaining. Use that as the callback in the then. In pseudocode:
function promptOverwrite(objects) {
    if (objects is empty)
        return
    let overwrite = [...] // same as your code
    overwrite.then {
        do positive or negative action
        // Recur on the rest of the objects
        promptOverwrite(objects[1:])
    }
}
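Concretely, the recursion might look something like this with PromiseKit (a minimal sketch assuming the PromiseKit 6 API; askOverwrite(for:) and handle(overwrite:file:) are hypothetical stand-ins for your alert promise and your file-handling code):

import Foundation
import PromiseKit

// Hypothetical stand-ins for your code:
// askOverwrite(for:) wraps the UIAlertController in a Promise<Bool>,
// handle(overwrite:file:) performs (or skips) the actual overwrite.
func askOverwrite(for file: URL) -> Promise<Bool> { return Promise.value(true) }
func handle(overwrite: Bool, file: URL) -> Promise<Void> { return Promise.value(()) }

func promptOverwrite(_ files: [URL]) -> Promise<Void> {
    // Base case: nothing left to ask about, so return an already-resolved promise.
    guard let file = files.first else { return Promise.value(()) }

    return askOverwrite(for: file)
        .then { shouldOverwrite -> Promise<Void> in
            handle(overwrite: shouldOverwrite, file: file)
        }
        .then { _ -> Promise<Void> in
            // Only recur on the rest once this file has been fully dealt with.
            promptOverwrite(Array(files.dropFirst()))
        }
}

Calling promptOverwrite(fileList) kicks off the whole sequence and yields a single Promise<Void> that resolves once every file has been handled.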
Now, we might also be interested in doing this without recursion, just to avoid blowing the call stack if we have tens of thousands of promises. (Suppose that the promises don't require user interaction, and that they all resolve on the order of a few milliseconds, so that the scenario is realistic).
Note first that the callback—in the then—happens in the context of a closure, so it can't interact with any of the outer control flow, as expected. If we don't want to use recursion, we'll likely have to take advantage of some other native features.
The reason you're using promises in the first place, presumably, is that you (wisely) don't want to block the main thread. Consider, then, spinning off a second thread whose sole purpose is to orchestrate these promises. If your library allows you to explicitly wait for a promise, just do something like:
function promptOverwrite(objects) {
    spawn an NSThread with target _promptOverwriteInternal(objects)
}

function _promptOverwriteInternal(objects) {
    for obj in objects {
        let overwrite = [...] // same as your code
        overwrite.then(...) // same as your code
        overwrite.awaitCompletion()
    }
}
If your promises library doesn't let you do this, you could work around it by using a lock:
function _promptOverwriteInternal(objects) {
    semaphore = createSemaphore(0)
    for obj in objects {
        let overwrite = [...] // same as your code
        overwrite.then(...) // same as your code
        overwrite.always {
            semaphore.release(1)
        }
        semaphore.acquire(1) // wait for completion
    }
}
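In Swift, the lock could be a DispatchSemaphore. A rough sketch of that workaround, assuming PromiseKit and reusing the hypothetical askOverwrite(for:) from the sketch above; the important detail is that the waiting must happen off the main thread, because the alert (and the promise callbacks) need the main thread in order to resolve:

import Foundation
import PromiseKit

func promptOverwriteSequentially(_ files: [URL]) {
    // Never block the main thread: the alerts are presented there.
    DispatchQueue.global(qos: .userInitiated).async {
        for file in files {
            let semaphore = DispatchSemaphore(value: 0)
            DispatchQueue.main.async {
                askOverwrite(for: file)                 // Promise<Bool> from the alert
                    .done { shouldOverwrite in
                        // overwrite or skip the file here
                    }
                    .catch { _ in
                        // treat a failed prompt as "don't overwrite"
                    }
                    .finally { semaphore.signal() }     // let the waiting loop continue
            }
            semaphore.wait() // block this background thread until the user has answered
        }
    }
}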
Coming from the JS world, I'm having a bit of a problem wrapping my head around PromiseKit's flavor of promises; I need a bit of help with the following.
Assume I have a function that returns a promise, say an API call. In some parent class I await that promise and then do some other action (potentially another network call). On that parent call I also have a catch block, in order to set some error flags for example, so in the end I have something close to this:
func apiCall() -> Promise<Void> {
    return Promise { seal in
        // some network code; at some point:
        seal.fulfill(())
    }
}

// in another class/object
func doApiCall() -> ? { // catch forces the return type to be PMKFinalizer
    return apiCall()
        .done {
            // do something funky here
        }
        .catch { error in
            print("Could not do first request")
        }
}
Now I'm trying to write some unit tests for this functionality. The response is mocked and I know it will not fail; I just need to await the call so I can verify the internal state of my class:
// on my test file
doApiCall().done {
    // test my code, but I get an error because I cannot pipe a promise that already has a `.catch`
}
How would one go about solving this problem? I could use finally to chain off the PMKFinalizer, but that feels wrong.
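That would look roughly like this in the test (a sketch assuming XCTest; doApiCall is the function above, and the mocked response means the catch never fires):

import PromiseKit
import XCTest

final class ApiTests: XCTestCase {
    func testDoApiCall() {
        let finished = expectation(description: "api call finished")
        // Chain the verification off the PMKFinalizer via finally.
        doApiCall().finally {
            // verify the internal state of my class here
            finished.fulfill()
        }
        wait(for: [finished], timeout: 1)
    }
}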
Another tangential question: is it possible to re-catch the error at a higher level, say in a UI component, so it can hold some temporary error state? As far as I can tell, there is no way to achieve this.
Many thanks 🙏
I am going through the tutorial:
https://marcosantadev.com/mvvmc-with-swift/
Which talks about the MVVM-C design pattern. I have real trouble understanding how and why the .never() observable is used there (and, in general, why we would want to use .never() besides testing timeouts).
Could anyone give a reasonable example of .never() observable usage in Swift code (not in testing) and explain why it is necessary and what the alternatives are?
I route all the actions from the View to the ViewModel. The user taps a button? Good, the signal is delivered to the ViewModel. That is why I have multiple input observables in the ViewModel. All of these observables are optional. They are optional because sometimes I write tests and don't want to provide all the fake observables just to test a single function, so I pass the other observables as nil. But working with nil is not very convenient, so I provide some default behavior for all the optional observables, like this:
private extension ViewModel {
    func observableNavigation() -> Observable<Navigation.Button> {
        return viewOutputFactory().observableNavigation ?? Observable.never()
    }

    func observableViewState() -> Observable<ViewState> {
        return viewOutputFactory().observableViewState ?? Observable.just(.didAppear)
    }
}
As you can see, if I pass nil for observableViewState I substitute it with just(.didAppear), because the ViewModel logic heavily depends on the state of the view. On the other hand, if I pass nil for observableNavigation I provide never(), because I assume that none of the navigation buttons will ever be triggered.
But this whole story is just my point of view. I bet you will find your own place to use this never operator.
Maybe your ViewModel has different configurations (or you have different ViewModels under the same protocol), one of which does not need to send any updates to its observers. Instead of saying that the observable does not exist for this particular case (which you would implement as an optional), you might want to be able to define the observable as a .never(). This is, in my opinion, cleaner.
Disclaimer - I am not a user of RxSwift, but I am assuming never behaves the same as in ReactiveSwift, i.e. a signal that never sends any value.
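As an illustration of that idea (a small sketch; RxSwift and the made-up protocol and types below are assumptions for the example, not anything from the tutorial):

import RxSwift

protocol ItemViewModel {
    var updates: Observable<String> { get }
}

struct LiveItemViewModel: ItemViewModel {
    // This configuration actually emits something.
    let updates: Observable<String> = Observable.just("refreshed")
}

struct StaticItemViewModel: ItemViewModel {
    // This configuration has nothing to report, but still satisfies the protocol
    // with a non-optional observable that simply never emits and never completes.
    let updates: Observable<String> = Observable<String>.never()
}

Consumers can always subscribe to updates without unwrapping an optional; the static variant just never calls them back.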
It's an open-ended question, and there can be many answers, but I've found myself reaching for never in a number of cases. There are many ways to solve a problem, but recently I was simplifying some device-connection code that had a cascading fail-over, and I wanted to determine whether my last attempt to scan for devices yielded any results.
To do that, I wanted to create an observable that emitted a "no scan results" event only if it was disposed without having seen any results, and, conversely, emitted nothing if it had.
I have pruned other details from my code for the sake of brevity, but in essence:
func connect(scanDuration: TimeInterval) -> Observable<ConnectionEvent> {
    let scan = scan(for: scanDuration).share(replay: 1)

    let connection: Observable<ConnectionEvent> =
        Observable.concat(Observable.from(restorables ?? []),
                          connectedPeripherals(),
                          scan)
            .flatMapLatest { [retainedSelf = self] in retainedSelf.connect(to: $0) }

    let scanDetector = scan
        .toArray()        // <-- sum all results as an array for the final count
        .asObservable()
        .flatMap { results -> Observable<ConnectionEvent> in
            results.isEmpty                             // if no scan results
                ? Observable.just(.noDevicesAvailable)  // emit event
                : Observable.never()                    // else, got results, no action needed
        }

    // fold the source and the stream detector into a common observable
    return Observable.from([
        connection
            .filter { $0.isConnected }
            .flatMapLatest { [retained = self] event -> Observable<ConnectionEvent> in
                retained.didDisconnect(peripheral: event.connectedPeripheral!.peripheral)
                    .startWith(event)
            },
        scanDetector])
        .switchLatest()
}
As a counterpoint, I realized as I typed this up that there is a still simpler way to achieve my needs: add a final error-emitting observable to my concat. It fails over until it hits the final error case, so I don't need the later error-detection stream.
Observable.concat(Observable.from(restorables ?? []),
                  connectedPeripherals(),
                  scan,
                  hardFailureEmitNoScanResults())
That said, there are many cases where we may want to listen and filter downstream, where the concat technique is not available.
I need to match items in two different arrays (one with imported items and another with local items that share some properties with the imported items) in order to sync two databases that are quite different. I need to use several criteria for the matching, to increase the robustness of finding the right local item for each imported item. I could check every criterion in the same loop, but that is too expensive, because the criteria are checked in descending order of their likelihood of success. So, in my first implementation I used a boolean flag called found to indicate that the remaining criteria should be skipped.
Using pseudocode:
// calling code for the matching
for item in importedItems {
    item.match()
}
In the imported item class:
match()
{
    var found = false
    for localItem in localItems
    {
        if (self.property == localItem.property)
        {
            // update the local item here
            found = true
            break
        }
    }
    // match with less likely 2nd property
    if (!found)
    {
        for localItem in localItems
        {
            if (self.property2 == localItem.property2)
            {
                // update the local item here
                found = true
                break
            }
        }
    }
}
The if !found {...} pattern is repeated two additional times with even less likely criteria.
After reviewing this code, it is clear that this can be optimized by returning instead of breaking when there is a match.
So, my question is: "are there any known side effects of leaving a loop early by using return instead of break in Swift?" I could not find any definitive answer here on SO, in the Swift documentation, or in blogs that discuss Swift control flow.
No, there are no side effects; quite the opposite, it's more efficient.
It's like short-circuit evaluation in a boolean expression.
But your code is a bad example, because found cannot be used outside the function.
This is a more practical example, returning a boolean value:
func match() -> Bool
{
    for localItem in localItems
    {
        if (self.property == localItem.property)
        {
            // update the local item here
            return true
        }
    }
    // ...
    return false
}
If you know for sure that you can return because nothing else has to be done after the loop, then there are no side effects of using return.
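To illustrate that caveat, here is a variant of the match() above where something must happen after the loop (a small sketch in the same item class; logMatchAttempt is a made-up stand-in for any post-loop work):

func match() -> Bool
{
    var found = false
    for localItem in localItems
    {
        if (self.property == localItem.property)
        {
            // update the local item here
            found = true
            break // `return true` here would skip the post-loop work below
        }
    }
    logMatchAttempt(found) // hypothetical: must run whether or not a match was found
    return found
}

Here break and return are not interchangeable; when nothing follows the loop, they are.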
BrightFutures is a nice implementation of "future" in the Swift language.
https://github.com/Thomvis/BrightFutures
I'd like to control the parallelism of a multicore CPU with it. Does anyone know a way to control the number of CPU cores/physical threads to be used?
All closures passed to BrightFutures are executed according to BF's default threading model. It seems like you want to diverge from the default model. This is possible by passing a custom execution context.
An execution context that limits the number of parallel tasks it executes could be created with the following function:
func executionContextWithControlledParallelism(p: Int) -> ExecutionContext {
    let s = Semaphore(value: p)   // allow at most `p` tasks to run at once
    let q = Queue.global.context
    return { task in
        s.wait()                  // block until one of the `p` slots frees up
        q {
            task()
            s.signal()            // hand the slot back once the task finishes
        }
    }
}
I tested this briefly using the following code:
let context = executionContextWithControlledParallelism(5)
for _ in 0..<100 {
    future(context: context) { () -> Int in
        return fibonacci(Int(arc4random_uniform(15)))
    }
}
You'll have to pass context to every map, flatMap, etc. that you want to limit the parallelism of. I'll admit that seems cumbersome. A better way (that is currently not supported by BrightFutures) would be to set the default threading model, like this:
let context = executionContextWithControlledParallelism(5)
// this is not supported right now:
BrightFutures.setDefaultThreadingModel(model: {
    return context
})
If you like this, please consider filing an issue to request this or (even better) create a pull request.
I like the way Catch has nested hierarchies of tests, and it works through the combinations. It feels more natural than the setup/teardown of xUnit frameworks.
I now have a set of tests. What I want to do, about halfway down, is insert a load/save serialization test, and then repeat all the tests below that point: first without the load/save, then again using the data loaded by the serialization process, i.e. to prove that the load/save was correct.
I cannot work out whether Catch has anything that can help with this. If it were phpUnit, I would be thinking about a string of @depends tests and a @dataProvider with a boolean input. A bit ugly.
(If that does not make sense, let me know, and I'll try to work out a minimal example)
The issue here is that Catch is designed to descend a tree-like organisation of tests and it automatically discovers all of the leaf-nodes of the structure and calls back into the test cases with untested code paths until they're all tested. The leaf nodes (tests, sections) are meant to be independent.
It sounds like you want to test a repository - something that can persist some data and then load it back in.
To repeat the exact same tests in two different scenarios (before serialisation, after serialisation) you'd need to put the same tests into some common place and call into that place. You can still use the same Catch macros in a non-test-case function, as long as you call it from a test case.
One possible way to do this is:
struct TestFixture {
    Data data;
    Repository repository;
    TestFixture() : data(), repository() { }
};

void fillUpData(Data& data) {
    // ...
}

void isDataAsExpected(Data& data) {
    // Verify that 'data' is what we expect it to be, whether we
    // loaded it or filled it up manually
    SECTION("Data has ...") {
        REQUIRE(data...);
    }
}

TEST_CASE_METHOD(TestFixture, "Test with raw data") {
    fillUpData(data);
    isDataAsExpected(data);
    REQUIRE(repository.save(data));
}

TEST_CASE_METHOD(TestFixture, "Operate on serialised data") {
    REQUIRE(repository.load(data));
    isDataAsExpected(data);
}
One possible alternative is to supply your own main and then use command-line arguments to control whether or not the data is first serialised.
There's a third way I can think of that uses a not-quite-ready-yet feature of Catch - Generators:
TEST_CASE("...") {
using Catch::Generators;
int iteration(GENERATE(values(0, 1)));
const bool doSave(iteration == 0);
const bool doLoad(iteration == 1);
Repository repository;
Data data;
if (doLoad) {
REQUIRE(repository.load(data));
} else {
// fill up data
}
REQUIRE(..data..test..);
if (doSave) {
REQUIRE(repository.save(data));
}
}
The advantage of this method is you can see the flow and the test runs twice (for the two values) but the major disadvantage is that Generators are incompatible with SECTIONs and BDD-style features.