I have a nested for loop, and I am trying to make it so that the outer loop only continues once the inner loop and its code have completed, and also to add a 1-second delay before performing the next loop.
for _ in 0...3 {
    for player in 0...15 {
        // CODE ADDING MOVEMENTS TO QUEUE
    }
    updateBoardArray()
    printBoard()
    // NEED TO WAIT HERE FOR 1 SEC
}
So I want the 0...3 for loop to progress only once the inner loop and the update and print functions have completed their cycle, and also with a 1-second wait time.
At the moment it all happens at once and just prints all 4 boards instantly, even when I put that 1-second wait in using DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 1) {}.
I have tried other answers to similar questions but can't seem to get it to work.
As I understand it, what you tried to do is the following:
for _ in 0...3 {
    for player in 0...15 {
        // CODE ADDING MOVEMENTS TO QUEUE
    }
    updateBoardArray()
    printBoard()
    DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 1) {}
}
This will NOT work, since what you are doing is adding a task (that will do nothing) to the main queue to be triggered after 1 second; after adding it, the code continues with the for loop without waiting.
Solution
What you could do is simply use sleep(1), but bear in mind that this will freeze the main thread (if you are executing this in the main thread).
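For example, the simplest form would be to sleep inside the loop itself; a minimal sketch reusing the functions from the question (this blocks whichever thread runs the loop):
for _ in 0...3 {
    for player in 0...15 {
        // CODE ADDING MOVEMENTS TO QUEUE
    }
    updateBoardArray()
    printBoard()
    sleep(1) // blocks the current thread for 1 second (sleep comes from Darwin/Foundation)
}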
To avoid freezing the app, you could instead do this:
DispatchQueue.global(qos: .default).async {
    for _ in 0...3 {
        for player in 0...15 {
            // CODE ADDING MOVEMENTS TO QUEUE
        }
        DispatchQueue.main.async {
            updateBoardArray()
            printBoard()
        }
        sleep(1)
    }
}
Just keep in mind that any UI action you do must be done on the main thread.
The asyncAfter() function takes a DispatchTime. DispatchTime has nanosecond precision. The following is its prototype:
func asyncAfter(deadline: DispatchTime, execute: DispatchWorkItem)
The following is an extract of the docs from the Apple Developer webpage:
DispatchTime represents a point in time relative to the default clock with nanosecond precision. On Apple platforms, the default clock is based on the Mach absolute time unit.
Note, however, that the + operator on a DispatchTime treats a plain number as seconds, so a one-second delay is written as:
DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 1) {}
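If you prefer to spell the delay out in other units, you can add a DispatchTimeInterval instead; a minimal sketch (the closure body is just a placeholder comment):
DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(1000)) {
    // runs roughly one second later; note this does not pause the surrounding loop
}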
You can use scheduledTimer:
for i in 0..<3 {
    Timer.scheduledTimer(withTimeInterval: 0.2 * Double(i), repeats: false) { (timer) in
        //CODE
    }
}
More Info about scheduledTimer.
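Applied to the question's board example, that could look something like the following sketch (assuming a 1-second spacing and reusing the asker's updateBoardArray() and printBoard() functions):
for i in 0..<4 {
    // Each iteration is scheduled 1 second after the previous one,
    // so the timers fire at 0s, 1s, 2s and 3s.
    Timer.scheduledTimer(withTimeInterval: 1.0 * Double(i), repeats: false) { _ in
        for player in 0...15 {
            // CODE ADDING MOVEMENTS TO QUEUE
        }
        updateBoardArray()
        printBoard()
    }
}
Note that Timer.scheduledTimer must be called from a thread with a running run loop (typically the main thread).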
Related
I'm using the Amadeus API for flight search. The Amadeus API requires that requests be no more frequent than one per 100 ms. I use the following code to limit the request frequency:
for depDate in depDates {
    DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
        AmadeusHelper.sharedInstance().searchAFlight(depAirport: depAirport, arrAirport: arrAirport, depDate: depDate, airlineCode: airlineCode, flightNumber: flightNum) { result in
            switch result {
            case .failure(let err):
                print(err)
            case .success(let flightOffer):
                if let json = flightOffer.get() {
                    flightCandidate.offers?.append(json)
                }
            }
        }
    }
}
The above for loop runs 4 iterations. The searchAFlight function is essentially a wrapper around an HTTP request. The asyncAfter function will delay each request by 1 second. I thought this would be enough. However, I still get the tooManyRequests error. When I was debugging, I followed the code step by step, which should have taken much longer to send out the requests. The requests should spread out across several minutes, long enough to satisfy the 100 ms interval. Did I do something wrong here? Thanks.
===========Update=========
Based on Eric's comments, I changed the code to the following
var timeNow = DispatchTime.now() // each request is delayed with an increasing interval
for (index, depDate) in depDates.enumerated() {
    DispatchQueue.main.asyncAfter(deadline: timeNow + Double(1 * index)) {
        AmadeusHelper.sharedInstance().searchAFlight(depAirport: depAirport, arrAirport: arrAirport, depDate: depDate, airlineCode: airlineCode, flightNumber: flightNum) { result in
            switch result {
            case .failure(let err):
                print(err)
            case .success(let flightOffer):
                if let json = flightOffer.get() {
                    flightCandidate.offers?.append(json)
                }
            }
        }
    }
}
Unfortunately, I still get the tooManyRequests error.
Your first attempt was:
for depDate in depDates {
DispatchQueue.main.asyncAfter(deadline: .now() + 1) { ... }
}
That will not work because you are scheduling all of those iterations to start one second from now, not one second from each other.
You then attempted:
for (index, depDate) in depDates.enumerated() {
DispatchQueue.main.asyncAfter(deadline: .now() + Double(index)) { ... }
}
That is likely closer to what you want, but is subject to “timer coalescing”, a feature where the OS will start grouping/coalescing dispatched blocks together. This is a great power saving feature, but will circumvent the desire to have delays between the requests. Also, you’re going to have troubles if you ever want to cancel some of these subsequent requests (without complicating the code a bit, at least).
The simplest solution is to adopt a recursive pattern, where you trigger the next request in the completion handler of the prior one.
func searchFlight(at index: Int = 0) {
    guard index < depDates.count else { return }
    let depDate = depDates[index]
    AmadeusHelper.sharedInstance().searchAFlight(depDate: depDate, ...) { result in
        defer {
            if index < (depDates.count - 1) {
                DispatchQueue.main.asyncAfter(deadline: .now() + 1) { [weak self] in
                    self?.searchFlight(at: index + 1)
                }
            }
        }
        switch result { ... }
    }
}
There’s no coalescing of these calls. Also, this has the virtue that the delay for a subsequent request will be based upon when the prior one finished, not after the prior one was issued (which can be an issue if scheduling a bunch of these requests up-front).
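As a side note on the cancellation concern mentioned earlier: because each request is only scheduled from the previous completion handler, stopping the chain needs nothing more than a flag checked in the guard. A rough sketch (the isCancelled property is my own addition, not part of the answer above):
var isCancelled = false   // set to true to stop scheduling further requests

func searchFlight(at index: Int = 0) {
    guard !isCancelled, index < depDates.count else { return }
    let depDate = depDates[index]
    AmadeusHelper.sharedInstance().searchAFlight(depDate: depDate, ...) { result in
        // handle result here, then schedule the next request as above
        DispatchQueue.main.asyncAfter(deadline: .now() + 1) { [weak self] in
            self?.searchFlight(at: index + 1)
        }
    }
}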
That having been said, I would advise that you direct this inquiry to Amadeus. I wouldn’t be surprised if this 100ms limitation was introduced to prevent people from mining their database through their API. I also wouldn’t be surprised if they have other, undocumented, techniques for identifying excessive requests (e.g. number of requests per hour, per 24 hours, etc.). The question of “why am I receiving tooManyRequest error” is best directed to them.
In Java, we can do something like this:
synchronized(a) {
    while(condition == false) {
        a.wait(time);
    }
    //critical section ...
    //do something
}
The above is a conditional synchronized block that waits for a condition to become true before executing a critical section.
When a.wait is executed (for, say, 100 ms), the thread exits the critical section for that duration, and some other critical section synchronized by object a executes, which makes the condition true.
Once the condition becomes true, the next time the current thread enters the critical section and evaluates the condition, the loop exits and the code executes.
Important points to note:
1. Multiple critical sections are synchronized by the same object.
2. The thread is out of the critical section only for the duration of the wait. Once the wait returns, the thread is in the critical section again.
Is the below the proper way to do the same in Swift 4 using DispatchSemaphore?
while condition == false {
semaphore1.wait(duration)
}
semaphore1.wait()
//execute critical section
semaphore1.signal()
The condition could get modified by the time we enter the critical section.
So, we might have to do something like the below to achieve the Java behavior. Is there a simpler way to do this in Swift?
while true {
    //lock
    if condition == false {
        //unlock
        //sleep for some time to prevent frequent polling
        continue
    } else {
        //execute critical section
        //...
        //unlock
        break
    }
}
Semaphores
You can solve this problem with a DispatchSemaphore.
Let's look at this code.
Here we have a semaphore, a storage property of type String?, and a serial queue:
let semaphore = DispatchSemaphore(value: 0)
var storage: String? = nil
let serialQueue = DispatchQueue(label: "Serial queue")
Producer
func producer() {
    DispatchQueue.global().asyncAfter(deadline: .now() + 3) {
        storage = "Hello world!"
        semaphore.signal()
    }
}
Here we have a function that:
Waits for 3 seconds
Writes "Hello world" into storage
Sends a signal through the semaphore
Consumer
func consumer() {
    serialQueue.async {
        semaphore.wait()
        print(storage)
    }
}
Here we have a function that
Waits for a signal from the semaphore
Prints the content of storage
Test
Now I'm going to run the consumer BEFORE the producer function
consumer()
producer()
Result
Optional("Hello world!")
How does it work?
func consumer() {
    serialQueue.async {
        semaphore.wait()
        print(storage)
    }
}
The body of the consumer() function is executed asynchronously into the serial queue.
serialQueue.async {
...
}
This is the equivalent of your synchronized(a). In fact, by definition, a serial queue will run one closure at a time.
The first line inside the closure is
semaphore.wait()
So the execution of the closure is stopped, waiting for the green light from the semaphore.
This is happening on a different queue (not the main one) so we are not blocking the main thread.
func producer() {
    DispatchQueue.global().asyncAfter(deadline: .now() + 3) {
        storage = "Hello world!"
        semaphore.signal()
    }
}
Now producer() is executed. It waits for 3 seconds on a queue different from the main one, then populates storage and sends a signal via the semaphore.
Finally consumer() receives the signal and can run the last line
print(storage)
Playground
If you want to run this code in a Playground, remember to
import PlaygroundSupport
and to run this line
PlaygroundPage.current.needsIndefiniteExecution = true
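Putting the pieces above together, a complete Playground sketch could look like this (same names as above, with only the Playground boilerplate added):
import Foundation
import PlaygroundSupport

PlaygroundPage.current.needsIndefiniteExecution = true

let semaphore = DispatchSemaphore(value: 0)
var storage: String? = nil
let serialQueue = DispatchQueue(label: "Serial queue")

func producer() {
    DispatchQueue.global().asyncAfter(deadline: .now() + 3) {
        storage = "Hello world!"
        semaphore.signal()
    }
}

func consumer() {
    serialQueue.async {
        semaphore.wait()
        print(storage)   // Optional("Hello world!")
    }
}

consumer()
producer()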
Answering my own question.
I used an instance of NSLock to lock and unlock in the pseudocode below.
while true {
    //lock
    if condition == false {
        //unlock
        //sleep for some time to prevent frequent polling
        continue
    } else {
        //execute critical section
        //...
        //unlock
        break
    }
}
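As an aside, NSCondition maps more directly onto Java's synchronized/wait/notify and avoids the polling loop; a rough sketch (the ready flag and function names are illustrative, not from the code above):
let condition = NSCondition()
var ready = false   // the shared condition, protected by the lock

// Waiting side (equivalent of synchronized(a) { while (!condition) a.wait(); ... })
func waitAndRun() {
    condition.lock()
    while !ready {
        condition.wait()          // atomically releases the lock and sleeps
    }
    // critical section: `ready` is true and the lock is held
    condition.unlock()
}

// Signalling side (equivalent of the block that makes the condition true)
func makeReady() {
    condition.lock()
    ready = true
    condition.signal()            // wakes one waiter; use broadcast() for all
    condition.unlock()
}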
I'm trying to run a set of SKActions in a "for" loop, as in the code below. The sequence should run like this:
For the 1st superNode...
Run openingAction on all child subNodes concurrently; WHEN DONE...
Run variableDurationActions on all child spriteNodes concurrently; WHEN DONE...
Run closingAction on all child subNodes concurrently, AND AT THE SAME TIME...
For the 2nd superNode...
Run openingAction on all child subNodes concurrently; WHEN DONE...
And this three-step process repeats for each superNode.
The first problem is that, although I could write a closure to wait for one action to complete before starting another, I don't know how to go about doing so for several concurrent actions across "for" loops. This problem happens between Steps 1 and 2, and Steps 2 and 3. Between 2 and 3 it's even worse, because the durations of the actions at Step 2 can vary a lot.
The second problem is that I don't know how to write a completion handler, or some equivalent, to make closingAction and next iteration of openingAction run concurrently, but before the variableDurationActions in the next iteration.
Is what I'm trying to do even possible with SpriteKit's tools? Or should I use dispatch groups instead? I'm looking for elegant solutions, because my code is complicated enough as it is.
for superNode in scene.children {
    for subNode in superNode.children {
        // All openingActions should run first
        subNode.run(openingAction)
        for spriteNode in subNode.children {
            // These variableDurationActions should run concurrently, second
            spriteNode.run(variableDurationAction)
        }
        // All closingActions should run third AND
        // concurrently with the openingActions in the next iteration
        subNode.run(closingAction)
    }
}
I ultimately came up with a solution by separating the temporally distinct processing steps, creating a dispatch group with two check points, and having the main function (at the bottom, below) iterate through the superNodes by calling itself.
It seems a little clumsy to me, but here's how it looks:
let dispatchGroup = DispatchGroup()

// Step 1
func runOpeningActions(on superNode: SKNode) {
    for subNode in superNode.children {
        dispatchGroup.enter()
        // All openingActions should run concurrently, first
        subNode.run(openingAction) {
            dispatchGroup.leave()
        }
    }
}

// Step 2
func runVariableDurationActions(on superNode: SKNode) {
    for subNode in superNode.children {
        for spriteNode in subNode.children {
            // One enter per action, balanced by one leave in each completion
            dispatchGroup.enter()
            // These variableDurationActions should run concurrently, second
            spriteNode.run(variableDurationAction) {
                dispatchGroup.leave()
            }
        }
    }
}

// Step 3 – to run concurrently with the next Step 1
func runClosingActions(on superNode: SKNode) {
    for subNode in superNode.children {
        dispatchGroup.enter()
        // All closingActions should run concurrently, third AND
        // concurrently with the openingActions in the next iteration
        subNode.run(closingAction) {
            dispatchGroup.leave()
        }
    }
}

var nodeIndex = 0

// Main function – iterates over Steps 1, 2, and 3
func runAllActions() {
    let superNode = scene.children[nodeIndex]
    runOpeningActions(on: superNode)
    dispatchGroup.notify(queue: .main) {
        runVariableDurationActions(on: superNode)
        dispatchGroup.notify(queue: .main) {
            runClosingActions(on: superNode)
            nodeIndex += 1
            if nodeIndex < scene.children.count {
                runAllActions()
            } else {
                // No more superNodes to process; leave the function
            }
        }
    }
}

// Main function call
runAllActions()
I have a serial queue to which I am adding tasks synchronously. This is to prevent the same function from being called at the same time from multiple points.
This is my code:
var isProcessGoingOn = false
let serialQueue = DispatchQueue(label: "co.random.queue")

func funcA() {
    serialQueue.sync {
        if isProcessGoingOn {
            debugPrint("Returned")
            return
        } else {
            isProcessGoingOn = true
            debugPrint("Executing Code")
            // This is to mock the n/w call behaviour. In actual code, I would have a n/w hit in this place.
            serialQueue.asyncAfter(deadline: .now() + .seconds(2), execute: {
                debugPrint("Setting isProcessGoingOn false")
                isProcessGoingOn = false
                funcA()
                // There may be some cases, where I would need to call the funcA from here.
            })
        }
    }
}
Now, let's suppose the function is called two times:
funcA()
funcA()
And I am getting the following output:
"Executing Code"
"Returned"
Now I was expecting a third call to funcA, but I am not getting one.
Could anyone explain what the problem is here?
Thanks in advance.
As per your code, your first method call will execute debugPrint("Executing Code") and also set isProcessGoingOn = true. At the same time you call your method funcA() again, so it does not wait for your nested block, which is:
serialQueue.asyncAfter(deadline: .now() + .seconds(2), execute: {
    debugPrint("Setting isProcessGoingOn false")
    isProcessGoingOn = false
    funcA()
})
That block will not have executed yet because of the deadline: .now() + .seconds(2); it waits for 2 seconds, and before it runs, your second method call executes debugPrint("Returned") and returns from the method.
Solution:
You need to remove the 2-second delay from your nested block.
Cons:
This will affect the second call to your method; you may see its execution interrupted.
I hope this is helpful to you and that you find your answer.
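A further note of my own (not part of the answer above): calling funcA() from inside that nested block will deadlock, because the block runs on serialQueue and funcA() then calls serialQueue.sync on the same serial queue. A sketch of one way around it, assuming the re-entry really is needed, is to dispatch it from a different queue:
serialQueue.asyncAfter(deadline: .now() + .seconds(2)) {
    debugPrint("Setting isProcessGoingOn false")
    isProcessGoingOn = false
    // Re-enter from outside serialQueue so that funcA()'s serialQueue.sync
    // is not issued from a block already running on serialQueue.
    DispatchQueue.global().async {
        funcA()
    }
}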
In my code I have a simple for loop that loops 100 times with nested for loops to create a delay. After the delay, I am updating a progress view element in the UI through a dispatch_async. However, I cannot get the UI to update. Does anyone know why the UI is not updating? Note: The print statement below is used to verify that the for loop is looping correctly.
for i in 0..<100 {
    //Used to create a delay
    for var x = 0; x<100000; x++ {
        for var z = 0; z<1000; z++ {
        }
    }
    println(i)
    dispatch_async(dispatch_get_main_queue()) {
        // update some UI
        self.progressView.setProgress(Float(i), animated: true)
    }
}
Three observations, two basic, one a little more advanced:
Your loop will not be able to update the UI on the main thread unless the loop itself is running on another thread. So, you can dispatch it to some background queue. In Swift 3:
DispatchQueue.global(qos: .utility).async {
    for i in 0 ..< kNumberOfIterations {
        // do something time consuming here
        DispatchQueue.main.async {
            // now update UI on main thread
            self.progressView.setProgress(Float(i) / Float(kNumberOfIterations), animated: true)
        }
    }
}
In Swift 2:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
    for i in 0 ..< kNumberOfIterations {
        // do something time consuming here
        dispatch_async(dispatch_get_main_queue()) {
            // now update UI on main thread
            self.progressView.setProgress(Float(i) / Float(kNumberOfIterations), animated: true)
        }
    }
}
Also note that the progress is a number from 0.0 to 1.0, so you presumably want to divide by the maximum number of iterations for the loop.
If UI updates come more quickly from the background thread than the UI can handle them, the main thread can get backlogged with update requests (making it look much slower than it really is). To address this, one might consider using dispatch source to decouple the "update UI" task from the actual background updating process.
One can use a DispatchSourceUserDataAdd (in Swift 2, it's a dispatch_source_t of DISPATCH_SOURCE_TYPE_DATA_ADD), post add calls (dispatch_source_merge_data in Swift 2) from the background thread as frequently as desired, and the UI will process them as quickly as it can, but will coalesce them together when it calls data (dispatch_source_get_data in Swift 2) if the background updates come in more quickly than the UI can otherwise process them. This achieves maximum background performance with optimal UI updates, but more importantly, this ensures the UI won't become a bottleneck.
So, first declare some variable to keep track of the progress:
var progressCounter: UInt = 0
And now your loop can create a source, define what to do when the source is updated, and then launch the asynchronous loop which updates the source. In Swift 3 that is:
progressCounter = 0

// create dispatch source that will handle events on main queue
let source = DispatchSource.makeUserDataAddSource(queue: .main)

// tell it what to do when source events take place
source.setEventHandler() { [unowned self] in
    self.progressCounter += source.data
    self.progressView.setProgress(Float(self.progressCounter) / Float(kNumberOfIterations), animated: true)
}

// start the source
source.resume()

// now start loop in the background
DispatchQueue.global(qos: .utility).async {
    for i in 0 ..< kNumberOfIterations {
        // do something time consuming here

        // now update the dispatch source
        source.add(data: 1)
    }
}
In Swift 2:
progressCounter = 0

// create dispatch source that will handle events on main queue
let source = dispatch_source_create(DISPATCH_SOURCE_TYPE_DATA_ADD, 0, 0, dispatch_get_main_queue());

// tell it what to do when source events take place
dispatch_source_set_event_handler(source) { [unowned self] in
    self.progressCounter += dispatch_source_get_data(source)
    self.progressView.setProgress(Float(self.progressCounter) / Float(kNumberOfIterations), animated: true)
}

// start the source
dispatch_resume(source)

// now start loop in the background
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
    for i in 0 ..< kNumberOfIterations {
        // do something time consuming here

        // now update the dispatch source
        dispatch_source_merge_data(source, 1);
    }
}