Passing arguments to code executed via asyncAfter [closed] - swift

So I'm trying to execute multiple lines of code after a delay. asyncAfter in Grand Central Dispatch doesn't seem to provide a way to pass information to the code to be executed.
For example, I'd like to execute func(1) after 0.1 seconds, func(2) after 0.2 seconds, and func(3) after 0.3 seconds.
How do I do this?

You don’t have to “pass” the information. The closure will automatically “capture constants and variables from the surrounding context”:
for i in 0 ..< 1000 {
    DispatchQueue.main.asyncAfter(deadline: .now() + Double(i) / 10) {
        self.someFunc(i)   // `func` is a reserved word, so your method will need another name, e.g. someFunc
    }
}
Maybe you were just using this as an example, but it is worth noting that if you want to do something at some regular interval, asyncAfter has a few disadvantages:
If you might need to cancel this process (e.g., maybe the view in question was dismissed), canceling multiple, individually scheduled blocks dispatched with asyncAfter becomes a bit cumbersome and unwieldy (see the DispatchWorkItem sketch below).
When you schedule many blocks in the future, asyncAfter is subject to “timer coalescing” where dispatched blocks that are scheduled within 10% of each other will start firing at the same time. (This is a power saving feature.)
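For what it's worth, if you do stay with asyncAfter, one way to mitigate the cancelation problem is to keep references to DispatchWorkItem instances. This is a sketch, not from the original answer, and someFunc is a stand-in for your own method:

var workItems: [DispatchWorkItem] = []

for i in 0 ..< 1000 {
    let item = DispatchWorkItem { someFunc(i) }   // someFunc is a placeholder
    workItems.append(item)
    DispatchQueue.main.asyncAfter(deadline: .now() + Double(i) / 10, execute: item)
}

// later, e.g. when the view is dismissed:
workItems.forEach { $0.cancel() }
workItems.removeAll()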
So, if you want to schedule something to fire at some regular interval, we would use a repeating timer:
var i = 0

Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] timer in
    guard let self = self, i < 1000 else {
        timer.invalidate()
        return
    }
    self.someFunc(i)   // again, someFunc is a stand-in for your own method
    i += 1
}
But, again, we do not have to “pass” a variable to the Timer closure. We can just use that variable directly from within the closure.
But, if you really were only scheduling three calls, at 0.1, 0.2, and 0.3 seconds, respectively, then asyncAfter is fine. If you were planning on adding many of these calls, though (where cancelation and/or timer coalescing becomes an issue), then consider a Timer.

Related

Parallelism within concurrentPerform closure

I am looking to implement concurrency inside part of my app in order to speed up processing. The input array can be large, and I need to check multiple things related to it. Here is some sample code.
EDITED:
So this is helpful for looking at striding through the array, which was something else I was looking at doing, but I think the helpful answers are sliding away from the original question, because I already have a DispatchQueue.concurrentPerform present in the code.
Within the outer for loop, I was looking to implement additional for loops, because I have to re-examine the same data multiple times. The inputArray is an array of structs, so in the outer loop I am looking at one value in the struct, and then in the inner loops I am looking at a different value in the struct. In the change below I made the two inner for loops function calls to make the code a bit clearer. But in general, I am looking to make the two funcA and funcB calls and wait until they are both done before continuing in the main loop.
// assume the startValues and stop values will be within the bounds of the
// array and won't under/overflow

private func funcA(inputArray: [Int], startValue: Int, endValue: Int) -> Bool {
    for index in startValue...endValue {
        let dataValue = inputArray[index]
        if dataValue == 1_000_000 {
            return true
        }
    }
    return false
}

private func funcB(inputArray: [Int], startValue: Int, endValue: Int) -> Bool {
    for index in startValue...endValue {
        let dataValue = inputArray[index]
        if dataValue == 10 {
            return true
        }
    }
    return false
}
private func testFunc(inputArray: [Int]) {
    let dataIterationArray = Array(Set(inputArray))
    let syncQueue = DispatchQueue(label: "syncQueue")

    DispatchQueue.concurrentPerform(iterations: dataIterationArray.count) { index in
        // I want to do these two function calls starting roughly one after another,
        // to work them in parallel, but I want to wait until both are complete before
        // moving on. funcA is going to take much longer than funcB in this case,
        // just because there are more values to check.
        let funcAResult = funcA(inputArray: dataIterationArray, startValue: 10, endValue: 2_000_000)
        let funcBResult = funcB(inputArray: dataIterationArray, startValue: 5, endValue: 9)

        // Wait for both above to finish before continuing
        if funcAResult && funcBResult {
            print("Yup we are good!")
        } else {
            print("Nope")
        }

        // And then wait here until all of the loops are done before processing
    }
}
In your revised question, you contemplated a concurrentPerform loop where each iteration called funcA and then funcB, and suggested that you wanted “to work them in parallel”.
Unfortunately, that is not how concurrentPerform works. It runs the separate iterations in parallel, but the code within the closure should be synchronous and run sequentially. If the closure introduces additional parallelism, that will adversely affect how concurrentPerform can reason about how many worker threads it should use.
Before we consider some alternatives, let us see what will happen if funcA and funcB remain synchronous. In short, you will still enjoy parallel execution benefits.
Below, I logged this with “Points of Interest” intervals in Instruments, and you will see that funcA (in green) never runs concurrently with funcB (in purple) for the same iteration (i.e., for the same range of start and end indices). In this example, I am processing an array with 180 items, striding 10 items at a time, ending up with 18 iterations running on an iPhone 12 Pro Max with six cores:
But, as you can see, although funcB for a given range of indices will not start until funcA finishes for the same range of indices, it does not really matter, because we are still enjoying full parallelism on the device, taking advantage of all the CPU cores.
I contend that, given that we are enjoying parallelism, there is little benefit to making funcA and funcB run concurrently with respect to each other, too. Just let the individual iterations run parallel to each other, let A and B run sequentially within each iteration, and call it a day.
If you really want funcA and funcB to run in parallel with each other as well, you will need to consider a different pattern. The concurrentPerform function simply is not designed for launching parallel tasks that are, themselves, asynchronous. You could consider:
Have concurrentPerform launch, using my example, 36 iterations, half of which do funcA and half of which do funcB.
Or you might consider using OperationQueue with a reasonable maxConcurrentOperationCount (though you then do not enjoy concurrentPerform's dynamic limiting of the degree of concurrency to the device's CPU cores).
Or you might use an async-await task group, which will limit itself to the cooperative thread pool (see the sketch after this list).
But you will not want to have concurrentPerform have a closure that launches asynchronous tasks or introduces additional parallel execution.
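Purely for illustration, here is a minimal sketch of the task-group option, assuming the funcA(inputArray:startValue:endValue:) and funcB(inputArray:startValue:endValue:) from the question are available; it runs the two checks concurrently and awaits both results before continuing:

func checkBoth(inputArray: [Int]) async -> Bool {
    await withTaskGroup(of: Bool.self) { group in
        // run the two checks concurrently as child tasks
        group.addTask { funcA(inputArray: inputArray, startValue: 10, endValue: 2_000_000) }
        group.addTask { funcB(inputArray: inputArray, startValue: 5, endValue: 9) }

        // wait for both results before returning
        var allGood = true
        for await result in group { allGood = allGood && result }
        return allGood
    }
}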
And, as I discuss below, the example provided in the question is not a good candidate for parallel execution. Mere tests of equality are not computationally intensive enough to enjoy parallelism benefits. It will undoubtedly just be slower than the serial pattern.
My original answer, below, outlines the basic concurrentPerform considerations.
The basic idea is to “stride” through the values. So calculate how many “iterations” are needed and calculate the “start” and “end” index for each iteration:
private func testFunc(inputArray: [Int]) {
    DispatchQueue.global().async {
        let array = Array(Set(inputArray))
        let syncQueue = DispatchQueue(label: "syncQueue")

        // calculate how many iterations will be needed
        let count = array.count
        let stride = 10
        let (quotient, remainder) = count.quotientAndRemainder(dividingBy: stride)
        let iterations = remainder == 0 ? quotient : quotient + 1

        // now iterate
        DispatchQueue.concurrentPerform(iterations: iterations) { iteration in
            // calculate the `start` and `end` indices
            let start = stride * iteration
            let end = min(start + stride, count)

            // now loop through that range
            for index in start ..< end {
                let value = array[index]
                print("iteration =", iteration, "index =", index, "value =", value)
            }
        }

        // you won't get here until they're all done; obviously, if you
        // want to now update your UI or model, you may want to dispatch
        // back to the main queue, e.g.,
        //
        // DispatchQueue.main.async {
        //     ...
        // }
    }
}
Note, if something is so slow that it merits concurrentPerform, you probably want to dispatch the whole thing to a background queue, too. Hence the DispatchQueue.global().async {…} shown above. You would probably want to add a completion handler to this method, now that it runs asynchronously, but I will leave that to the reader.
Needless to say, there are quite a few additional considerations:
The stride should be large enough to ensure there is enough work on each iteration to offset the modest overhead introduced by multithreading. Some experimentation is often required to empirically determine the best striding value.
The work done in each thread must be significant (again, to justify the multithreading overhead). I.e., simply printing values is obviously not enough. (Worse, print statements compound the problem by introducing a hidden synchronization.) Even building a new array with some simple calculation will not be sufficient. This pattern really only works if you are doing something very computationally intensive.
You have a “sync” queue, which suggests that you understand that you need to synchronize the combination of the results of the various iterations. That is good. I will point out, though, that you will want to minimize the total number of synchronizations you do. E.g. let’s say you have 1000 values and you end up doing 10 iterations, each striding through 100 values. You generally want to have each iteration build a local result and do a single synchronization for each iteration. Using my example, you should strive to end up with only 10 total synchronizations, not 1000 of them, otherwise excessive synchronization can easily negate any performance gains.
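To make that concrete, here is a sketch (with illustrative names, not from the original answer) of this “one synchronization per iteration” pattern: each iteration builds a local result without any locking, then performs a single sync to merge it:

import Foundation

func doubledValues(of array: [Int]) -> [Int] {
    let stride = 100
    let count = array.count
    let iterations = (count + stride - 1) / stride      // round up
    let syncQueue = DispatchQueue(label: "syncQueue")
    var merged = [[Int]](repeating: [], count: iterations)

    DispatchQueue.concurrentPerform(iterations: iterations) { iteration in
        let start = iteration * stride
        let end = min(start + stride, count)
        var local = [Int]()                             // built without any locking
        local.reserveCapacity(end - start)
        for index in start ..< end {
            local.append(array[index] &* 2)             // stand-in for real, expensive work
        }
        syncQueue.sync { merged[iteration] = local }    // the one sync for this iteration
    }

    return merged.flatMap { $0 }                        // preserves the original order
}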
Bottom line, making a routine execute in parallel is complicated, and you can easily find that the process is actually slower than the serial rendition. Some processes simply don't lend themselves to parallel execution. We obviously cannot comment further without understanding what your processes entail. Sometimes other technologies, such as Accelerate or Metal, can achieve better results.
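As a taste of the Accelerate point, simple elementwise arithmetic like the examples above is often fastest as a single vectorized library call rather than hand-rolled threading; a minimal sketch using vDSP (available on macOS 10.15 / iOS 13 and later):

import Accelerate

let input = [Double](repeating: 0, count: 1_000_000)
let incremented = vDSP.add(1.0, input)   // SIMD-accelerated "add 1 to every element"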
I will explain it here, since a comment is too small, but I will delete this later if it doesn't answer the question.
Instead of looping over iterations: dataIterationArray.count, base the number of iterations on the number of desired parallel streams of work, not on the array size. For example, if you want to have three streams of work, then you should have three iterations, each processing an independent part of the work:
DispatchQueue.concurrentPerform(iterations: 3) { iteration in
    switch iteration {
    case 0:
        for i in 1...10 {
            print("i \(i)")
        }
    case 1:
        for j in 11...20 {
            print("j \(j)")
        }
    case 2:
        for k in 21...30 {
            print("k \(k)")
        }
    default:
        break   // required for exhaustiveness; unreachable with 3 iterations
    }
}
And the "And then wait here until all of the loops are done before processing" will happen automatically, this is what concurrentPerform guarantees.

Looped delay question in Swift, what's the difference?

So basically I've been trying to mess around with loops and delays in Swift. I've found multiple answers about how to implement them properly, but I still have one unanswered question.
Why does this delayed loop work:
for a in 1..<61 {
    DispatchQueue.main.asyncAfter(deadline: .now() + Double(a)) {
        print(a)
    }
}
while this doesn't have any delay besides the very first one:
for a in 1..<61 {
    DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
        print(a)
    }
}
Loops do not wait for the DispatchQueue. Using asyncAfter is like saying:
"I want this work to be moved over to another thread (in this case the main thread, so nothing actually changes) by some deadline (how long from now the work will happen)."
Since the loop itself does not wait, in the second example everything is executed after the same one-second delay.
In the first case, however, each delay is offset by a different amount: the first iteration fires after 1 second, the next after 2, then 3, and so on.
Note: Delays inside loops are not recommended. There are usually other solutions, such as using timers.
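A minimal sketch of that timer alternative (not from the original answer): print 1 through 60 at one-second intervals, without scheduling 60 separate blocks:

import Foundation

var a = 1
Timer.scheduledTimer(withTimeInterval: 1, repeats: true) { timer in
    print(a)
    a += 1
    if a >= 61 { timer.invalidate() }   // stop after 60 values
}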

Is returning a value and also having an escaping parameter in the same function considered bad practice? [closed]

Is this code considered bad practice?
Is there any benefit or downside to having both a return value and a completion handler in the same function?
For example:
func isPrintHelloWorld(status: Bool, completion: @escaping (Bool) -> ()) -> Bool {
    print("hello world")
    completion(false)
    return false
}
It's not a matter of good or bad practice. If that is what you need, in order to accomplish or communicate what you need to accomplish or communicate, then that is what you need. If it isn't, it isn't.
Here's a well-known method that both returns a value and takes a completion handler, and both the returned value and the completion handler are of great importance:
https://developer.apple.com/documentation/uikit/uiapplication/1623031-beginbackgroundtask
But it's probably fair to say that situations demanding that sort of thing are few and far between.
Also, this pattern is a lot rarer in Swift than in Objective-C. For example, this method takes a completion handler and also returns a BOOL in Objective-C:
https://developer.apple.com/documentation/photokit/phphotolibrary/1620747-performchangesandwait?language=objc
But the very same method, translated into Swift, doesn't return a value:
https://developer.apple.com/documentation/photokit/phphotolibrary/1620747-performchangesandwait
That's because if there's an issue, Swift can throw instead.
So, regardless of goodness or badness of practice, I'd say, if you're thinking of returning a value while also taking a completion handler, think again; there may be a Swiftier way.
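To illustrate the rare legitimate case, here is a hypothetical API sketched in the spirit of beginBackgroundTask(expirationHandler:), where the return value matters immediately and the handler matters later; the names and the 30-second delay are made up:

import Foundation

struct TaskToken { let id = UUID() }

func beginWork(expirationHandler: @escaping () -> Void) -> TaskToken {
    let token = TaskToken()
    // pretend the system may invoke the handler some time later
    DispatchQueue.main.asyncAfter(deadline: .now() + 30) {
        expirationHandler()
    }
    return token   // returned right away, long before the handler fires
}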

When should you use assertions and preconditions, and when can you use guard statements, forced unwrapping and error handling? [closed]

I've already read Difference between “precondition” and “assert” in swift, but I still can't draw a clear line between the different ways of unwrapping (i.e., guard and !) plus error handling on the one hand, and assertions on the other.
1. If I want my application to stop working, can't I just force unwrap something and use that as a substitute for a precondition?
2. Is it because we want to stop/exit the app and basically don't want any control flow or state changes, and therefore we use asserts/preconditions, which also happen to come with easy logging of human-readable messages (so we don't have to constantly write prints)?
3. The things we use asserts for are vital; guard statements are ultimately a control-flow mechanism, and your function returning early doesn't necessarily mean your app should crash.
4. And if it's anything beyond nils, like you want a String and the user is giving you an Int, then you can use error handling.
EDIT:
I'm not after opinions; I'm asking this only to understand what convenience assertions provide over the mentioned alternatives. The numbered list is the core of my question.
Errors are a form of flow control, on a par with if and while. In particular, they involve coherent message sending and early exit. The idea is to wind up the current scope immediately and return control to the caller, telling the caller "something went wrong".
Assertions are a way to crash immediately.
Thus they belong to completely different conceptual worlds. Errors are for things that can go wrong in real time, from which we need to recover coherently. Assertions are for things that should never go wrong, about which we feel so strongly that we don't want the program even to be released into the world under these circumstances; assertions can also be used in places where errors can't be used.
Example from my own code:
final class Board: NSObject, NSCoding, CALayerDelegate {
    // ...
    fileprivate var xct: Int { return self.grid.xct }
    fileprivate var yct: Int { return self.grid.yct }
    fileprivate var grid: Grid // can't live without a grid, but it is mutable
    // ...
    fileprivate lazy var pieceSize: CGSize = {
        assert((self.xct > 0 && self.yct > 0), "Meaningless to ask for piece size with no grid dimensions.")
        let pieceWidth: CGFloat = self.view.bounds.size.width / (CGFloat(self.xct) + OUTER + LEFTMARGIN + RIGHTMARGIN)
        let pieceHeight: CGFloat = self.view.bounds.size.height / (CGFloat(self.yct) + OUTER + TOPMARGIN + BOTTOMMARGIN)
        return CGSize(width: pieceWidth, height: pieceHeight)
    }()
    // ...
}
If pieceSize is ever called with a zero grid dimension, something is very wrong with my entire program. It's not a matter of testing for a runtime error; the program itself is based on faulty algorithms. That is what I want to detect.
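For contrast, a compact sketch (illustrative, not from the answer) of where each tool fits:

enum InputError: Error { case notANumber(String) }

// Error handling: bad *input* is expected at runtime; throw and let the caller recover.
func parseAge(_ text: String) throws -> Int {
    guard let age = Int(text) else { throw InputError.notANumber(text) }
    return age
}

// Precondition: a nonpositive column count means the *program* is wrong;
// checked in both debug and release builds.
func pieceWidth(totalWidth: Int, columns: Int) -> Int {
    precondition(columns > 0, "Meaningless to ask for piece width with no columns.")
    return totalWidth / columns
}

// Assertion: the same idea, but compiled out of release builds.
func logDimensions(xct: Int, yct: Int) {
    assert(xct > 0 && yct > 0, "Grid dimensions should already be set.")
    print("\(xct) x \(yct)")
}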

NSOperationQueue worse performance than single thread on computation task

My first question!
I am doing CPU-intensive image processing on a video feed, and I wanted to use OperationQueue. However, the results are absolutely horrible. Here's an example. Let's say I have a CPU-intensive operation:
var data = [Int].init(repeating: 0, count: 1_000_000)

func run() {
    let startTime = DispatchTime.now().uptimeNanoseconds
    for i in data.indices { data[i] = data[i] &+ 1 }
    NSLog("\(DispatchTime.now().uptimeNanoseconds - startTime)")
}
It takes about 40ms on my laptop to execute. I time a hundred runs:
(1...100).forEach { i in run(i) }
They average about 42ms each, for about 4200ms total. I have 4 physical cores, so I try to run it on an OperationQueue:
var q = OperationQueue()

(1...100).forEach { i in
    q.addOperation {
        run(i)
    }
}

q.waitUntilAllOperationsAreFinished()
Interesting things happen depending on q.maxConcurrentOperationCount:
concurrency   single operation   total
1             45ms               4500ms
2             100-250ms          8000ms
3             100-300ms          7200ms
4             250-450ms          9000ms
5             250-650ms          9800ms
6             600-800ms          11300ms
I use the default QoS of .background and can see that the thread priority is default (0.5). Looking at the CPU utilization with Instruments, I see a lot of wasted cycles (the first part is running it on main thread, the second is running with OperationQueue):
I wrote a simple thread queue in C and used that from Swift and it scales linearly with the cores, so I'm able to get my 4x speed increase. But what am I doing wrong with Swift?
Update: I think we have concluded that this is a legitimate bug in DispatchQueue. So the actual question is: what is the correct channel for asking about issues in DispatchQueue code?
You seem to measure the wall-clock time of each run execution. This does not seem to be the right metric. Parallelizing the problem does not signify that each run will execute faster... it just means that you can do several runs at once.
Anyhow, let me verify your results.
Your function run seems to take a parameter some of the time only. Let me define a similar function for clarity:
func increment(_ offset: Int) {
    for i in data.indices { data[i] = data[i] &+ offset }
}
On my test machine, in release mode, this code takes 0.68 ns per entry or about 2.3 cycles (at 3.4 GHz) per addition. Disabling bound checking helps a bit (down to 0.5 ns per entry).
Anyhow, let us next parallelize the problem as you seem to suggest:

var q = OperationQueue()

for i in 1...queues {   // `queues` is the author's operation count
    q.addOperation {
        increment(i)
    }
}

q.waitUntilAllOperationsAreFinished()
That does not seem particularly safe, but is it fast?
Well, it is faster... I hit 0.3 ns per entry.
Source code : https://github.com/lemire/Code-used-on-Daniel-Lemire-s-blog/tree/master/extra/swift/opqueue
.background will run the threads with the lowest priority. If you are looking for fast execution, consider .userInitiated, and make sure you are measuring performance with compiler optimizations turned on.
Also consider using DispatchQueue instead of OperationQueue. It might have less overhead and better performance.
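For reference, a minimal sketch of that suggestion (the values are illustrative, not from the original answer):

import Foundation

let queue = OperationQueue()
queue.qualityOfService = .userInitiated     // instead of .background
queue.maxConcurrentOperationCount = 4       // e.g., match the physical core count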
Update based on your comments: try this. It goes from 38s on my laptop to 14 or so.
Notable changes:
I made the queue explicitly concurrent
I run the thing in release mode
Replaced the inner loop calculation with a random number; the original got optimized out
QoS set to higher level: QoS now works as expected and .background runs forever
var data = [Int].init(repeating: 0, count: 1_000_000)

func run() {
    let startTime = DispatchTime.now().uptimeNanoseconds
    for i in data.indices { data[i] = Int(arc4random_uniform(1000)) }
    print("\((DispatchTime.now().uptimeNanoseconds - startTime) / 1_000_000)")
}

let startTime = DispatchTime.now().uptimeNanoseconds
let g = DispatchGroup()
let q = DispatchQueue(label: "myQueue", qos: .userInitiated, attributes: [.concurrent])

(1...100).forEach { _ in
    q.async(group: g) {
        run()
    }
}

g.wait()
print("\((DispatchTime.now().uptimeNanoseconds - startTime) / 1_000_000)")
Something is still wrong, though: the serial queue runs 3x faster even though it does not use all cores.
For the sake of future readers, two observations on multithreaded performance:
There is a modest overhead introduced by multithreading. You need to make sure that there is enough work on each thread to offset this overhead. As the old Concurrency Programming Guide says:
You should make sure that your task code does a reasonable amount of work through each iteration. As with any block or function you dispatch to a queue, there is overhead to scheduling that code for execution. If each iteration of your loop performs only a small amount of work, the overhead of scheduling the code may outweigh the performance benefits you might achieve from dispatching it to a queue. If you find this is true during your testing, you can use striding to increase the amount of work performed during each loop iteration. With striding, you group together multiple iterations of your original loop into a single block and reduce the iteration count proportionately. For example, if you perform 100 iterations initially but decide to use a stride of 4, you now perform 4 loop iterations from each block and your iteration count is 25.
And goes on to say:
Although dispatch queues have very low overhead, there are still costs to scheduling each loop iteration on a thread. Therefore, you should make sure your loop code does enough work to warrant the costs. Exactly how much work you need to do is something you have to measure using the performance tools.
A simple way to increase the amount of work in each loop iteration is to use striding. With striding, you rewrite your block code to perform more than one iteration of the original loop.
You should be wary of using either operations or GCD dispatches to achieve multithreaded algorithms, as this can lead to "thread explosion". Instead, use DispatchQueue.concurrentPerform (previously known as dispatch_apply), a mechanism for performing loops in parallel while ensuring that the degree of concurrency never exceeds the capabilities of the device.
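To close the loop on the original benchmark, here is a sketch of how one might apply that advice (an assumption, not code from the answer): replace the 100 queued operations with a single concurrentPerform over disjoint slices of the array:

import Foundation

var data = [Int](repeating: 0, count: 1_000_000)
let slices = 4                                    // e.g., one per physical core
let chunk = (data.count + slices - 1) / slices    // round up

data.withUnsafeMutableBufferPointer { buffer in
    DispatchQueue.concurrentPerform(iterations: slices) { slice in
        let start = slice * chunk
        let end = min(start + chunk, buffer.count)
        for i in start ..< end {
            buffer[i] = buffer[i] &+ 1            // slices are disjoint, so no data races
        }
    }
}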