action does not wait till function call ends - sprite-kit

I have two actions that I put in a sequence. In the first action I call a method to calculate the new waiting time for the next action. The next action is just a wait for this duration, but the second action always executes straight away, so the time must be 0. I debugged it, and in the method spawnFlowers I get the time returned as 3.5 seconds.
These are my two actions:
let spawnFlowerAction = SKAction.run {
    self.WaitTime = self.calculateWaitingTime()
}
let waitForNewFlower = SKAction.wait(forDuration: self.WaitTime)
I execute it this way:
let spawnSeq = SKAction.sequence([spawnFlowerAction, waitForNewFlower])
let spawnRepeat = SKAction.repeat(spawnSeq, count: 4)
self.run(spawnRepeat)
Result: all four spawns happen immediately without waiting, printing four different calculated times in the console from the calculateWaitingTime function (in which the spawning happens).
What is a good way to fix this?

The problem is that you are trying to dynamically change values used by SKActions after the actions have been created. When your WaitTime variable changes while spawnFlowerAction is running, the waitForNewFlower action's wait time doesn't change with it, because the action doesn't reference WaitTime. Its wait duration is whatever WaitTime was when you declared let waitForNewFlower = SKAction.wait(forDuration: self.WaitTime) (which I'm guessing was initially 0). The same concept applies to your other spawn actions.
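A quick way to see this in isolation (an illustrative sketch inside a node or scene, not code from the question): the duration is read once, when the wait action is built, so later changes to the variable have no effect on it.
var waitTime: TimeInterval = 0
let wait = SKAction.wait(forDuration: waitTime) // duration 0 is baked in right here
waitTime = 3.5                                  // changing the variable does nothing to `wait`
run(wait)                                       // still waits 0 seconds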
I usually use DispatchQueue for things like this, but to stick with SKActions, here's a function. Just call it once and pass in the number of times you want it to repeat.
func spawnRepeat(count: Int) {
    // Put whatever code to spawn the flower here
    print("SPAWN FLOWER")
    if count > 1 {
        // Recalculate WaitTime
        WaitTime = calculateWaitingTime()
        let waitAction = SKAction.wait(forDuration: WaitTime)
        run(waitAction, completion: { self.spawnRepeat(count: count - 1) })
    }
}
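To mirror the original four-cycle sequence you would call it once, for example from didMove(to:) (the count of 4 just matches the repeat count in the question):
override func didMove(to view: SKView) {
    super.didMove(to: view)
    spawnRepeat(count: 4) // kicks off spawn → wait → spawn → wait ... four times
}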

Related

Split up Task into multiple concurrent subtasks

I am trying to do some calculations on a large number of objects. The objects are saved in an array and the results of the operation should be saved in a new array. To speed up the processing, I'm trying to break up the task into multiple subtasks which can run concurrently on different threads. The simplified example code below replaces the actual operation with two seconds of wait.
I have tried multiple ways of solving this issue, using both DispatchQueues and Tasks.
Using DispatchQueue
The basic setup I used is the following:
import Foundation

class Main {
    let originalData = ["a", "b", "c"]
    var calculatedData = Set<String>()

    func doCalculation() {
        // calculate length of array slices
        let totalLength = originalData.count
        let sliceLength = Int(totalLength / 3)
        var start = 0
        var end = 0
        let myQueue = DispatchQueue(label: "Calculator", attributes: .concurrent)
        var allPartialResults = [Set<String>]()

        for i in 0..<3 {
            if i != 2 {
                start = sliceLength * i
                end = start + sliceLength - 1
            } else {
                start = totalLength - sliceLength * (i - 1)
                end = totalLength - 1
            }
            allPartialResults.append(Set<String>())
            myQueue.async {
                allPartialResults[i] = self.doPartialCalculation(data: Array(self.originalData[start...end]))
            }
        }

        myQueue.sync(flags: .barrier) {
            for result in allPartialResults {
                self.calculatedData.formUnion(result)
            }
        }
        // do further calculations with the data
    }

    func doPartialCalculation(data: [String]) -> Set<String> {
        print("began")
        sleep(2)
        let someResultSet: Set<String> = ["some result"]
        print("ended")
        return someResultSet
    }
}
As expected, the Console Log is the following (with all three "ended" appearing at once, two seconds after all three "began" appeared at once):
began
began
began
ended
ended
ended
When measuring performance using os_signpost (and using real data and calculations), this approach reduces the time needed for the entire doCalculation() function to run from 40ms to around 14ms.
Note that to avoid data races when appending the results to the final calculatedData Set, I created an array of partial result sets, of which each dispatched block only accesses one index (which is not a solution I like, and the main reason why I am not satisfied with this approach). What I would have liked to do is call DispatchQueue.main from within myQueue and add the new data to the calculatedData Set on the main thread; however, calling DispatchQueue.main.sync causes a deadlock, and using the async version leads to the barrier flag not working as intended.
Using Tasks
In a second attempt, I tried using Tasks to run code concurrently. As I understand it, there are two options for running code concurrently with Tasks: async let and withTaskGroup. For the purpose of retrieving a variable number of partial results from a variable number of concurrent tasks, I figured using withTaskGroup was the best option for me.
I modified the code to look like this:
class Main {
    let originalData = ["a", "b", "c"]
    var calculatedData = Set<String>()

    func doCalculation() async {
        // calculate length of array slices
        let totalLength = originalData.count
        let sliceLength = Int(totalLength / 3)
        var start = 0
        var end = 0

        await withTaskGroup(of: Set<String>.self) { group in
            for i in 0..<3 {
                if i != 2 {
                    start = sliceLength * i
                    end = start + sliceLength - 1
                } else {
                    start = totalLength - sliceLength * (i - 1)
                    end = totalLength - 1
                }
                group.addTask {
                    return await self.doPartialCalculation(data: Array(self.originalData[start...end]))
                }
            }
            for await newSet in group {
                calculatedData.formUnion(newSet)
            }
        }
        // do further calculations with the data
    }

    func doPartialCalculation(data: [String]) async -> Set<String> {
        print("began")
        try? await Task.sleep(nanoseconds: UInt64(1e9))
        let someResultSet: Set<String> = ["some result"]
        print("ended")
        return someResultSet
    }
}
However, the console log prints the following (with every "ended" coming 2 seconds after the preceding "began"):
began
ended
began
ended
began
ended
Measuring performance using os_signpost revealed that the operation takes 40ms to complete. Therefore it is not running concurrently.
With that being said, what is the best course of action for this problem?
Using DispatchQueue, how do you call the Main Queue to avoid data races from within a queue, while at the same time preserving a barrier flag later on in the code?
Using Tasks, how can you actually make them run concurrently?
EDIT
Running the code on a real device instead of the simulator, and changing the sleep function inside the Task from sleep() to Task.sleep(), I was able to achieve concurrent behavior in that the console prints the expected log. However, the operation time for the task remains upwards of 40-50ms and is highly variable, sometimes reaching 200ms or more. This problem remains after adding the .userInitiated priority to the Task.
Why does it take so much longer to run the same operation concurrently using Task compared to using DispatchQueue? Am I missing something?
A few observations:
One possible performance difference is that the simulator artificially constrains the “cooperative thread pool” used by async-await. See Maximum number of threads with async-await task groups. This is one cause of a lack of full concurrency (on the simulator).
In the async-await test, another factor that can affect concurrency is an actor. If an actor is enforcing serial execution, then consider declaring doPartialCalculation as nonisolated, so that it allows concurrent execution. Failure to do so can prevent any concurrent execution (with your sleep scenario, for example).
The fact that you saw a significant performance difference when you went from sleep to Task.sleep makes me wonder if you might have done this within an actor. Actors are "reentrant", and Task.sleep suspends execution, letting the actor switch to another task. So it allows concurrency for a series of async methods.
But Task.sleep is not analogous to a computationally intensive task that ties up the thread. Declaring the function as nonisolated, on the other hand, will achieve concurrent execution for computationally intensive processes, and that can achieve performance results nearly equivalent to what you achieved with a GCD implementation.
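As a minimal sketch of that suggestion (the Calculator actor and the uppercasing stand-in for the heavy work are my own illustration, not the question's code): marking the compute routine nonisolated lets the task group's child tasks run it in parallel on the cooperative pool, while the merging of results still happens under actor isolation.
actor Calculator {
    private(set) var calculatedData = Set<String>()

    // nonisolated: touches no actor state, so the task group's children can
    // run it concurrently instead of serializing on the actor.
    nonisolated func doPartialCalculation(data: [String]) -> Set<String> {
        Set(data.map { $0.uppercased() }) // stand-in for computationally intensive work
    }

    func doCalculation(slices: [[String]]) async {
        await withTaskGroup(of: Set<String>.self) { group in
            for slice in slices {
                group.addTask { self.doPartialCalculation(data: slice) }
            }
            for await partial in group {
                calculatedData.formUnion(partial) // already isolated to the actor here
            }
        }
    }
}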
That having been said, you might still find that async-await is a tiny bit slower than pure GCD implementations. Then again, Swift concurrency offers more native protections and compile-time warnings to ensure thread safety.
E.g., I benchmarked 100 compute-heavy tasks in both GCD and async-await, performed twice for each.
So, you simply have to ask yourself whether the benefits of async-await warrant the modest performance impact or not.
A few unrelated asides on the GCD implementation:
It should be noted that your GCD example is not thread-safe, so the comparison of your two code snippets is not entirely fair. You should make the GCD implementation thread-safe. (Perhaps consider temporarily testing with TSAN; see the "Detect Data Races Among Your App's Threads" section of Diagnosing Memory, Thread, and Crash Issues Early.) You should perform doPartialCalculation in parallel, but you must synchronize the update of allPartialResults (or any shared resource). You can use a GCD serial queue for this. Or, since you seem to be so concerned about performance, perhaps an NSLock or os_unfair_lock (though care must be taken with the latter). See the GCD example at the end of this answer.
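For illustration, a lock-based variant of that synchronization might look like the following sketch (my own rewording of the idea; the serial-queue version appears in the full example at the end of this answer, and partialCalculation stands in for the real per-slice work):
import Foundation

func calculateConcurrently(originalData: [String],
                           partialCalculation: ([String]) -> Set<String>) -> Set<String> {
    let iterations = 3
    let sliceLength = max(originalData.count / iterations, 1)
    let lock = NSLock()
    var allResults = Set<String>()

    // The heavy work runs in parallel; only the update of the shared set is
    // serialized, here with an NSLock instead of a serial queue.
    DispatchQueue.concurrentPerform(iterations: iterations) { i in
        let start = i * sliceLength
        let end = (i == iterations - 1) ? originalData.count : (start + sliceLength)
        guard start < end else { return }
        let result = partialCalculation(Array(originalData[start..<end]))
        lock.lock()
        allResults.formUnion(result)
        lock.unlock()
    }
    return allResults
}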
If your dispatched blocks are taking ~50 msec, that simply might not be enough work to justify the overhead of concurrency. You may even find that a simple, serial, rendition is faster!
Often, to maximize the amount of work done per thread, we would “stride” through our index (which is what you appear to be doing with your “slice” logic). But if, even after striding, the time per concurrent loop is still measured in milliseconds, then it may turn out that concurrency is unwarranted altogether. Some tasks are so trivial that they simply will not benefit from concurrent execution.
In your GCD example, you are dispatching to a concurrent queue, which if you have too many iterations, can lead to “thread explosion”, exhausting a very limited worker thread pool. You are only doing three iterations, so that’s not a problem now, but if the number of iterations grows, you would want to abandon that pattern, and adopt concurrentPerform (as seen here). It’s a great way to make full use of the hardware capabilities while avoiding the exhausting of the worker thread pool.
As an aside, I would be wary of using any of the sleep methods as a proxy for a time-consuming task; you actually want to keep the CPU busy. I personally use an inefficient π calculation as my general proxy for "do something slow". That is what I used here:
func performHeavyTask(iteration: Int) {
    let id = OSSignpostID(log: poi)
    os_signpost(.begin, log: poi, name: #function, signpostID: id, "%d", iteration)
    let pi = calculatePi(iterations: 100_000_000)
    os_signpost(.end, log: poi, name: #function, signpostID: id, "%f", pi)
}

// calculate pi using the Gregory-Leibniz series
func calculatePi(iterations: Int) -> Double {
    var result = 0.0
    var sign = 1.0
    for i in 0 ..< iterations {
        result += sign / Double(i * 2 + 1)
        sign *= -1
    }
    return result * 4
}
E.g. here is a GCD example which
uses concurrentPerform;
performs calculation in parallel but synchronizes array updates;
performs update of model on main thread;
uses Sequence<String> rather than [String] to eliminate expensive array creation:
func doCalculation() {
    DispatchQueue.global().async { [originalData] in // gives me the willies to see an asynchronous routine accessing a property, so I might capture it here in case it ever changes to a mutable property; or, better, it should be a parameter of `doCalculation`
        let totalLength = originalData.count
        let iterations = 3 // avoid the brittle pattern of repeating this number (or values based upon it) throughout
        let sliceLength = totalLength / iterations
        let queue = DispatchQueue(label: "Calculator") // serial queue for synchronization
        var allResults = Set<String>()

        DispatchQueue.concurrentPerform(iterations: iterations) { i in
            let start = i * sliceLength
            let end = i == iterations - 1 ? totalLength : start + sliceLength // last slice absorbs any remainder
            let result = self.doPartialCalculation(with: originalData[start..<end]) // do calculation in parallel
            queue.sync { allResults.formUnion(result) } // synchronize update
        }

        // Personally, I would not update a property from this method,
        // but rather would use a local var and supply the results in a completion
        // handler parameter, and let the caller update the model as it sees fit.
        //
        // But if you are going to do this, synchronize the update somehow,
        // e.g., do it on the main thread.

        DispatchQueue.main.async { // update on main thread
            self.calculatedData = allResults // or `self.calculatedData.formUnion(allResults)`, if that's what you really mean
        }
    }
}
// Note: rather than taking `[String]`, which requires us to create a new
// `Array` instance, let's change this to take `Sequence<String>` as
// input ... that way we can supply array slices directly.
func doPartialCalculation<S>(with data: S) -> Set<String> where S: Sequence, S.Element == String {
    print("began")
    sleep(2)
    let someResultSet: Set<String> = ["some result"]
    print("ended")
    return someResultSet
}
Or, alternatively, you could do the updates of the local var asynchronously and keep track of them with a DispatchGroup, performing the final update (or call to the completion handler) on the .main queue:
func doCalculation() {
    DispatchQueue.global().async { [originalData] in // gives me the willies to see an asynchronous routine accessing a property, so I might capture it here in case it ever changes to a mutable property; or, better, it should be a parameter of `doCalculation`
        let totalLength = originalData.count
        let iterations = 3 // avoid the brittle pattern of repeating this number (or values based upon it) throughout
        let sliceLength = totalLength / iterations
        let queue = DispatchQueue(label: "Calculator") // serial queue for synchronization
        let group = DispatchGroup()
        var allResults = Set<String>()

        DispatchQueue.concurrentPerform(iterations: iterations) { i in
            let start = i * sliceLength
            let end = i == iterations - 1 ? totalLength : start + sliceLength // last slice absorbs any remainder
            let result = self.doPartialCalculation(with: originalData[start..<end]) // do calculation in parallel
            queue.async(group: group) { allResults.formUnion(result) } // synchronize update
        }

        // Personally, I would not update a property from this method,
        // but rather would use a local var and supply the results in a completion
        // handler parameter, and let the caller update the model as it sees fit.
        //
        // But if you are going to do this, synchronize the update somehow,
        // e.g., do it on the main thread.

        group.notify(queue: .main) {
            self.calculatedData = allResults // or `self.calculatedData.formUnion(allResults)`, if that's what you really mean
        }
    }
}
You can benchmark this and see whether the asynchronous update has any material impact. It probably will not in this case, but the proof is in the pudding.
Your Task-based example looks like it should execute concurrently. I ran it and was able to get concurrent execution.
Probably the issue you're having is that Swift concurrency tries to limit Task concurrency to the number of available cores. And (I don't think this is well documented!) Swift playgrounds and the iOS simulators seem to execute in a single-core environment.
So if you run your code in a Swift playground, you'll get serial task execution. If you make a Mac app and run it in that, or on an iOS device, you should get parallel execution.
This WWDC talk from last year has a discussion of why it works that way: https://developer.apple.com/videos/play/wwdc2021/10254/?time=652
That's worth paying attention to. You'll of course be fine scheduling 3 blocks on a concurrent queue, but if your example is standing in for a real workload that might have hundreds or thousands, it's easy to cause thread explosion and create new, harder to understand performance issues.

How to stop a UIView animation in Swift

I'm working with Swift. I wrote an animation for a timer that fills up over 30 seconds. Now I want to stop this animation, but I do not know how! I also want to be able to start it again from the beginning.
Thanks if you have a solution or a method that helps me🙏
You can try
self.myView.layer.removeAllAnimations()
Two possible ways:
1) Store a variable for the total elapsed time somewhere, and increase it on every timer tick. When the total time reaches 30 seconds, remove the animation.
2) When you start the animation, schedule an action that gets executed after a specific time.
First possible way:
var time: Double = 0

@objc func timerChangedValue() {
    time += 1
    if time == 30 {
        view.layer.removeAllAnimations()
        view.layoutIfNeeded()
    }
}
Second possible way:
DispatchQueue.main.asyncAfter(deadline: .now() + 30) {
    self.view.layer.removeAllAnimations()
    self.view.layoutIfNeeded()
}
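If you also need to restart from the beginning (as the question asks), a rough sketch is to remove the in-flight animations, let the view snap back to its model values, and kick the 30-second animation off again; startTimerAnimation() below is a hypothetical stand-in for whatever method originally started the fill.
func restartTimerAnimation() {
    view.layer.removeAllAnimations() // stop whatever is currently animating
    view.layoutIfNeeded()            // snap back to the layer's model values
    startTimerAnimation()            // hypothetical: re-run the original 30-second fill
}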

Uneven typing in SpriteKit SKAction sequence-led typing animation

Hi, I know there are plenty of questions on here about timers, but I can't find anything about this specific issue. Thanks in advance for any help.
I am trying to use a sequence of SKActions to simulate a typing animation (in an SKLabelNode I've called actualLabel) that begins after a user touch.
I have the following:
var charArray = []
var labelText = ""
var calls = 0
And then, in touchesEnded:
if calls < charArray.count + 1 {
    let wait = SKAction.wait(forDuration: 1)
    let block = SKAction.run({
        self.redoLabelText()
    })
    let sequence = SKAction.sequence([wait, block])
    run(SKAction.repeatForever(sequence))
}
With my function as the following:
func redoLabelText() {
    if labelText.characters.count < charArray.count {
        labelText += charArray[calls]
        actualLabel.text = labelText
        calls += 1
    }
}
And then touchesBegan resets all the initial variables and the process starts again. Everything works fine, except the typing is really choppy: every time the user presses again, the typing gets faster. It's as if after one press it works fine, but after two or more presses there are extra characters being added to labelText at each one-second interval.
Thanks again; this has me baffled.
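(No answer is recorded for this one, but one plausible cause is that each touchesEnded adds another repeatForever sequence on top of the earlier ones, so several copies of redoLabelText fire every second. A sketch of a guard against that stacking, using an arbitrary key name chosen here for illustration:)
// In touchesEnded: remove any previous typing action before starting a new one,
// so repeated touches don't stack multiple repeatForever sequences.
removeAction(forKey: "typing")
if calls < charArray.count + 1 {
    let wait = SKAction.wait(forDuration: 1)
    let block = SKAction.run { self.redoLabelText() }
    run(SKAction.repeatForever(SKAction.sequence([wait, block])), withKey: "typing")
}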

How can I create delay inside while loop in Swift 2?

I need help with this while loop. What I'm trying to do is slow down the whole process of removing and adding new circles, with the radius changing each time this happens. I'm becoming really desperate: I've tried using both dispatch_after and sleep inside the loop (which I found online), but neither of them is suitable; they basically stop the whole app. If I put them in the while loop, nothing happens. Thanks in advance!
while radius < 100 {
    self.removeAllChildren()
    addCircle()
    radius++
    print(radius)
}
Basically you just need to do a few simple things:
Wait for a certain duration, then add a node to the scene.
Repeat this step forever (or a certain number of times).
Here is an example of how you can do it. The important part is the action sequence: you create the step above and repeat it forever. Each time, you check the radius value and, based on that, stop the action (remove it by its key). And that's it. You can change the spawning speed by changing the action's duration parameter.
Using NSTimer might seem like an easier solution, but NSTimer doesn't respect the node's (or scene's, or view's) paused state. So, imagine this situation:
You start spawning nodes.
The user receives a phone call and the app automatically goes to the background.
Because NSTimer is not automatically paused, the nodes will continue spawning, so you have to take the additional step of invalidating/restarting the timer yourself. When using SKAction, this is done automatically. There are some other flaws of using NSTimer in SpriteKit; search SO for more on that, there are posts covering all of this.
import SpriteKit

class GameScene: SKScene {

    var radius: UInt32 = 0

    override func didMoveToView(view: SKView) {
        startSpawning()
    }

    func startSpawning() {
        let wait = SKAction.waitForDuration(0.5)
        // let wait = SKAction.waitForDuration(1, withRange: 0.4) // randomize wait duration

        let addNode = SKAction.runBlock({
            [unowned self] in // http://stackoverflow.com/a/24320474/3402095 - read about strong reference cycles here

            if self.radius >= 30 {
                if self.actionForKey("spawning") != nil {
                    self.removeActionForKey("spawning")
                }
            }

            self.radius++

            let sprite = self.spawnNode()
            sprite.position = CGPoint(x: Int(arc4random()) % 300, y: Int(arc4random()) % 300) // change this to randomize the sprite's position to suit your needs
            self.addChild(sprite)
        })

        // wait & add node
        let sequence = SKAction.sequence([wait, addNode])

        // repeat forever
        runAction(SKAction.repeatActionForever(sequence), withKey: "spawning")
    }

    func spawnNode() -> SKSpriteNode {
        let sprite = SKSpriteNode(color: SKColor.purpleColor(), size: CGSize(width: 50, height: 50))
        // Do sprite initialization here
        return sprite
    }
}
The sleep stops the whole app because you are running the loop on the main thread.
You can solve this problem in one of two ways...
1) Use an NSTimer. Set it to the amount of time you want to delay. Put the body of the loop in the timer's trigger function and disable the timer when it has been called enough times. This is the best solution.
2) Wrap the loop in a dispatch async block to a background thread and put a sleep in the loop. If you do this though, you will have to wrap the UI code in another dispatch async block that comes back to the main thread.
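A rough sketch of option 2) in the Swift 2 GCD syntax the question is using (addCircle and radius are taken from the question; whether touching them from a background thread is acceptable depends on the rest of your scene code):
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
    while self.radius < 100 {
        dispatch_async(dispatch_get_main_queue()) {
            // scene/UI work stays on the main thread
            self.removeAllChildren()
            self.addCircle()
        }
        self.radius += 1
        print(self.radius)
        NSThread.sleepForTimeInterval(0.5) // delay without blocking the main thread
    }
}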

Swift/SpriteKit: Animate multiple buttons with small delay between each

I have several buttons (SKSpriteNodes) which I am trying to cycle through and animate, with a small delay between each. My code compiles, but when I run it I only get a white screen and a crash with this error: "Message from debugger: Terminated due to memory issue". Here is my code:
var sequence = SKAction.sequence([animationUp, animationDown])

runAction(SKAction.repeatActionForever(SKAction.sequence([
    SKAction.runBlock({
        button1.runAction(sequence)
        SKAction.waitForDuration(0.5)
        button2.runAction(sequence)
        SKAction.waitForDuration(0.5)
        button3.runAction(sequence)
        SKAction.waitForDuration(0.5)
    }),
])))
So what I am trying to accomplish is an up/down animation on every button I'm drawing to the screen, with a 0.5 second delay between each button. The animation should run forever until I change the current screen. I had no problem animating these buttons simultaneously but I'd really like to add a uniform delay so that they don't all animate at the same time. Any ideas?
let waitAction = SKAction.waitForDuration(0.5)
let movementAction = SKAction.sequence([animateUp, animateDown])

let button1Block = SKAction.runBlock({
    button1.runAction(movementAction)
})
let button2Block = SKAction.runBlock({
    button2.runAction(movementAction)
})
let button3Block = SKAction.runBlock({
    button3.runAction(movementAction)
})

let sequence = SKAction.sequence([button1Block, waitAction, button2Block, waitAction, button3Block, waitAction])

runAction(SKAction.repeatActionForever(sequence))
I'm not certain about how the SKAction.sequence is initiated/destroyed, but possibly your loop in .repeatActionForever(..) creates and destroys objects whose memory won't be released until the loop is allowed to complete.
Try to run your inner runAction block within an autoreleasepool { .. } block:
autoreleasepool {
    runAction ...
}
But there is one situation we do need to autorelease, that is we create a lot of objects in a method scope and want to release them sooner. It is extremely useful when these objects turn to a pressure on memory.
http://en.swifter.tips/autoreleasepool/