Updating the UI Using dispatch_async in Swift

In my code I have a simple for loop that loops 100 times with nested for loops to create a delay. After the delay, I am updating a progress view element in the UI through a dispatch_async. However, I cannot get the UI to update. Does anyone know why the UI is not updating? Note: The print statement below is used to verify that the for loop is looping correctly.
for i in 0..<100 {
    // Used to create a delay
    for var x = 0; x < 100000; x++ {
        for var z = 0; z < 1000; z++ {
        }
    }
    println(i)
    dispatch_async(dispatch_get_main_queue()) {
        // update some UI
        self.progressView.setProgress(Float(i), animated: true)
    }
}

Three observations, two basic, one a little more advanced:
Your loop cannot update the UI while it is running on the main thread: the blocks you dispatch to the main queue won't get a chance to run until the loop itself finishes. So dispatch the loop to some background queue. In Swift 3:
DispatchQueue.global(qos: .utility).async {
    for i in 0 ..< kNumberOfIterations {
        // do something time consuming here
        DispatchQueue.main.async {
            // now update UI on main thread
            self.progressView.setProgress(Float(i) / Float(kNumberOfIterations), animated: true)
        }
    }
}
In Swift 2:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
    for i in 0 ..< kNumberOfIterations {
        // do something time consuming here
        dispatch_async(dispatch_get_main_queue()) {
            // now update UI on main thread
            self.progressView.setProgress(Float(i) / Float(kNumberOfIterations), animated: true)
        }
    }
}
Also note that the progress is a number from 0.0 to 1.0, so you presumably want to divide by the maximum number of iterations for the loop.
If UI updates come more quickly from the background thread than the UI can handle them, the main thread can get backlogged with update requests (making the process look much slower than it really is). To address this, consider using a dispatch source to decouple the "update UI" task from the actual background work.
You can use a DispatchSourceUserDataAdd (in Swift 2, a dispatch_source_t of type DISPATCH_SOURCE_TYPE_DATA_ADD), call add (dispatch_source_merge_data in Swift 2) from the background thread as frequently as you like, and the main queue will process those events as quickly as it can, coalescing them into a single value that the event handler reads via data (dispatch_source_get_data in Swift 2) whenever the background updates come in faster than the UI can process them. This achieves maximum background performance with sensible UI updates, and more importantly it ensures the UI won't become a bottleneck.
So, first declare some variable to keep track of the progress:
var progressCounter: UInt = 0
And now your loop can create a source, define what to do when the source is updated, and then launch the asynchronous loop which updates the source. In Swift 3 that is:
progressCounter = 0
// create dispatch source that will handle events on main queue
let source = DispatchSource.makeUserDataAddSource(queue: .main)
// tell it what to do when source events take place
source.setEventHandler { [unowned self] in
    self.progressCounter += source.data
    self.progressView.setProgress(Float(self.progressCounter) / Float(kNumberOfIterations), animated: true)
}
// start the source
source.resume()
// now start loop in the background
DispatchQueue.global(qos: .utility).async {
    for _ in 0 ..< kNumberOfIterations {
        // do something time consuming here
        // now update the dispatch source
        source.add(data: 1)
    }
}
In Swift 2:
progressCounter = 0
// create dispatch source that will handle events on main queue
let source = dispatch_source_create(DISPATCH_SOURCE_TYPE_DATA_ADD, 0, 0, dispatch_get_main_queue())
// tell it what to do when source events take place
dispatch_source_set_event_handler(source) { [unowned self] in
    self.progressCounter += dispatch_source_get_data(source)
    self.progressView.setProgress(Float(self.progressCounter) / Float(kNumberOfIterations), animated: true)
}
// start the source
dispatch_resume(source)
// now start loop in the background
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
    for _ in 0 ..< kNumberOfIterations {
        // do something time consuming here
        // now update the dispatch source
        dispatch_source_merge_data(source, 1)
    }
}

Related

Synchronize nested async network requests inside a while loop by using Semaphores

I have a func that gets a list of Players. When I fetch the players I only need to show those that belong to the current Team, so I display a filtered subset of the original list. I don't know in advance, before making the request, how many players belong to the Team selected by the user, so I may need to make additional requests until I can display at least 10 rows of Players in the TableView. By pulling up from the bottom of the TableView, the user can request more players to display. To do this I call a first async func request, which in turn calls, inside a while loop, another nested async func request. Here is some code to give you an idea of what I am trying to do:
let semaphore = DispatchSemaphore(value: 0)

func getTeamPlayersRequest() {
    service.getTeamPlayers(...) { (result) in
        switch result {
        case .success(let playersModel):
            if let validCurrentPage = currentPageTmp,
                let validTotalPages = totalPagesTmp,
                let validNextPage = self.getTeamPlayersListNextPage() {
                while self.playersToShowTemp.count < 10 && self.currentPage < validTotalPages {
                    self.currentPage = validNextPage // global var
                    self.fetchMorePlayers()
                    self.semaphore.wait() // global semaphore
                }
            }
        case .failure(let error):
            // some code...
        }
    }
}
private func fetchMorePlayers() {
    // Completion handler of the following function is never called..
    service.getTeamPlayers(requestedPage: currentPage, completion: { (result) in
        switch result {
        case .success(let playersModel):
            if let validPlayerList = playersList,
                let validPlayerListData = validPlayerList.data,
                let validTeamModel = self.teamPlayerModel,
                let validNextPage = self.getTeamPlayersListNextPage() {
                for player in validPlayerListData {
                    if validTeamModel.id == player.team?.id {
                        self.playersToShowTemp.append(player)
                    }
                }
                self.currentPage = validNextPage
            }
            self.semaphore.signal() // global semaphore
        case .failure(let error):
            // some code...
        }
    })
}
I have tried both DispatchGroup and a Semaphore, but I don't understand what I am doing wrong. I debugged the code and saw that the first async call gets executed on a different queue (not the main queue) and a different thread. The nested async call gets executed on a different thread, but I don't know whether it's the same concurrent queue as the first async call.
The completion handler of the nested call is never called. Does anyone know why? Is the self.semaphore.wait(), even though it gets executed after fetchMorePlayers() returns, blocking/preventing the nested async completion handler from being called?
I am noticing in the debugger that the completion() in the Xcode variables window has the note "swift partial apply forwarder for closure #1".
If we inline the function call in your loop, it looks something like this:
while self.playersToShowTemp.count < 10 && self.currentPage < validTotalPages {
    self.currentPage = validNextPage // global var
    nbaService.getTeamPlayers(requestedPage: currentPage, completion: { ... })
    self.semaphore.wait() // global semaphore
}
So nbaService.getTeamPlayers schedules a request, probably on the main DispatchQueue, and immediately returns. Then you call wait on your semaphore, which blocks, probably before GCD even tries to run the task scheduled by nbaService.getTeamPlayers.
That's a problem on DispatchQueue.main, which is a serial queue. It has to be a serial queue for UI updates to work. What normally happens is that on some iteration of the run loop you make a request and return; that bubbles back up to the run loop, which checks for more events and queued tasks. When your completion handler in getTeamPlayersRequest is waiting to be run, the run loop (via GCD) executes it on one of those iterations. But here you block the main thread, so the run loop can't continue. If you do need to block, always do it on a different DispatchQueue, preferably a concurrent one.
There is sometimes confusion about what .async does. It only means "run this later and right now return control back to the caller". That's all. It does not guarantee that your closure will run concurrently; it merely schedules it to be run later (possibly soon) on whatever DispatchQueue you called it on. If that queue is a serial queue, the closure will run in its turn on that queue. If it's a concurrent queue (i.e. one whose attributes you explicitly set to include .concurrent), it may run at the same time as other tasks on that same DispatchQueue.
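To make the distinction concrete, here is a small standalone sketch (none of these names come from the question's code; the queue labels and the semaphore are purely illustrative):
import Foundation

// Not from the question: a minimal sketch of serial vs. concurrent queues,
// and of why blocking is only tolerable off the main queue.
let serialQueue = DispatchQueue(label: "com.example.serial") // serial by default
let concurrentQueue = DispatchQueue(label: "com.example.concurrent", attributes: .concurrent)
let semaphore = DispatchSemaphore(value: 0)

serialQueue.async {
    print("tasks on a serial queue run one at a time, in submission order")
}

// Blocking one worker thread of a concurrent queue doesn't stall anything else:
concurrentQueue.async {
    semaphore.wait()                      // this worker thread blocks; main keeps running
    print("unblocked")
}
concurrentQueue.async {
    Thread.sleep(forTimeInterval: 1)      // stand-in for real work
    semaphore.signal()
}

// The very same wait() on DispatchQueue.main would freeze the run loop, which is exactly
// why a completion handler queued on the main queue never gets a chance to run.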
To avoid blocking at all, instead of using a loop you can use async chaining:
private func fetchMorePlayers(while condition: @autoclosure @escaping () -> Bool) {
    guard condition() else { return }
    nbaService.getTeamPlayers(requestedPage: currentPage, completion: { (result) in
        switch result {
        case .success(let playersModel):
            if let validPlayerList = playersList,
                let validPlayerListData = validPlayerList.data,
                let validTeamModel = self.teamPlayerModel,
                let validNextPage = self.getTeamPlayersListNextPage() {
                for player in validPlayerListData {
                    if validTeamModel.id == player.team?.id {
                        self.playersToShowTemp.append(player)
                    }
                }
                self.currentPage = validNextPage
            }
            // Chain to the next call
            self.fetchMorePlayers(while: condition())
        case .failure(let error):
            // some code...
        }
    })
}
Then in getTeamPlayersRequest you can do this:
func getTeamPlayersRequest() {
    service.getTeamPlayers(...) { (result) in
        switch result {
        case .success(let playersModel):
            if let validCurrentPage = currentPageTmp,
                let validTotalPages = totalPagesTmp,
                let validNextPage = self.getTeamPlayersListNextPage() {
                self.currentPage = validNextPage // global var
                self.fetchMorePlayers(while: self.playersToShowTemp.count < 10 && self.currentPage < validTotalPages)
            }
        case .failure(let error):
            // some code...
        }
    }
}
This avoids the need to block on a semaphore, because each subsequent request happens in the completion handler of the previously completed one. The only issue is if you need the completion handler in getTeamPlayersRequest to block while the fetchMorePlayers requests are being fetched; now it won't, so you can re-introduce the semaphore. In that case the guard statement in fetchMorePlayers becomes:
guard condition() else {
    self.semaphore.signal()
    return
}
That way it only signals on the last completion handler in the chain. You may need to block in a different DispatchQueue though. I think if you need to block, you probably have something about your design that needs to be reconsidered.
If you find yourself reaching for semaphores, it is almost always a mistake. Semaphores are inefficient at best, and introduce deadlock risks if misused. Semaphores should generally be avoided. (Don't get me wrong: Semaphores can be useful in some very narrow use cases, but this is not one of them.)
Use asynchronous patterns. One simple approach might be to recursively call the routine, calling the completion handler when done:
func startFetching(completion: @escaping () -> Void) {
    fetchPlayers(page: 0, completion: completion)
}

private func fetchPlayers(page: Int, completion: @escaping () -> Void) {
    // prepare request
    // now perform request
    performRequest(...) { ...
        if let error = error {
            completion()
            return
        }
        ...
        if doesNeedMorePlayers {
            fetchPlayers(page: page + 1, completion: completion)
        } else {
            completion()
        }
    }
}
Personally, I would probably add another closure to emit the players as they are retrieved, much like (if not actually) a Combine Publisher. Or, if you want to update the UI all at once at the very end, just pass the players retrieved thus far as an additional parameter in this recursive routine and return the whole array in the final completion handler. But avoid globals or other state properties.
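For instance, here is a minimal sketch of that accumulator variant; Player, performRequest(page:completion:) and its completion parameters are hypothetical stand-ins for whatever your model and networking layer actually provide:
// Sketch only: Player, performRequest(page:completion:) and doesNeedMorePlayers are
// placeholders for your real model and networking code.
func startFetching(completion: @escaping ([Player]) -> Void) {
    fetchPlayers(page: 0, accumulated: [], completion: completion)
}

private func fetchPlayers(page: Int, accumulated: [Player], completion: @escaping ([Player]) -> Void) {
    performRequest(page: page) { newPlayers, doesNeedMorePlayers, error in
        guard error == nil else {
            completion(accumulated)          // hand back whatever we have so far
            return
        }
        let players = accumulated + newPlayers
        if doesNeedMorePlayers {
            self.fetchPlayers(page: page + 1, accumulated: players, completion: completion)
        } else {
            completion(players)              // deliver the whole array once, at the end
        }
    }
}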
But the broader idea is to scrupulously avoid semaphores and instead embrace asynchronous patterns.

Why is a process suspended by another process behind it?

The code is simple: it only reads and parses an XML file into an array. I did not notice the problem until one day I tried to open a big XML file.
I added a blur view with an NSProgressIndicator while the data is being parsed, but the blur view did not show up until the parsing was completed.
self.addBlurView()
let file = HandleFile.shared.openFile(filePath)
self.removeBlurView()
guard let name = file.name, let path = file.path, let data = file.data else {
    return
}
So I tried delaying the parsing. The blur view then shows up, and is removed when parsing completes.
self.addBlurView()
DispatchQueue.main.asyncAfter(deadline: .now() + 0.3, execute: {
    let file = HandleFile.shared.openFile(filePath)
    self.removeBlurView()
    guard let name = file.name, let path = file.path, let data = file.data else {
        return
    }
})
I thought it might be a threading problem, so I tried the following in addBlurView(), but it failed. I also tried adding a counter in addBlurView(); it counted up to a certain number, paused, and continued counting after the data was parsed.
DispatchQueue.main.async {
    self.blurView.isHidden = false
    self.spinner.startAnimation(self)
}
I have no idea why this happens. Can anyone help solve this problem?
Thanks.
As I mentioned in the comments above, the main queue is a serial queue, and all tasks assigned to it are executed serially by the main thread. In general, you should not perform any heavy lifting (like loading a file into memory) on the main thread, because it blocks the main thread and renders the UI unresponsive.
Typically, heavy lifting such as loading a file into memory (anything that does not deal directly with UI rendering) should be delegated to a background dispatch queue. Try wrapping your openFile(filePath) call in a DispatchQueue:
self.addBlurView()
DispatchQueue.global(qos: .default).async {
    let file = HandleFile.shared.openFile(filePath)
}
Personally, I would expect the openFile function to have a completion block that is triggered on the main queue when it finishes loading the file, so that you can remove your blur view there. In your case it seems to be a synchronous call, so you can try:
self.addBlurView()
DispatchQueue.global(qos: .default).async {
    let file = HandleFile.shared.openFile(filePath)
    DispatchQueue.main.async {
        self.removeBlurView()
    }
}

How does the semaphore keep async loop in order?

I've set up this script to loop through a bunch of data in the background, and I've successfully set up a semaphore that keeps everything (the array that will populate the table) in order, but I cannot understand exactly how or why the semaphore keeps the array in order. The dispatchGroup is entered and the loop stops and waits until the image is downloaded; once the image has been fetched, the dispatchSemaphore is signaled (set to 1), the dispatchGroup is exited, and the semaphore immediately goes back to 0. The semaphore is toggled from 0 to 1 so fast that I don't understand how it keeps the array in order.
let dispatchQueue = DispatchQueue(label: "someTask")
let dispatchGroup = DispatchGroup()
let dispatchSemaphore = DispatchSemaphore(value: 0)

dispatchQueue.async {
    for doc in snapshot.documents {
        // create data object for array
        dispatchGroup.enter()
        // get image with asynchronous completion handler
        Storage.storage().reference(forURL: imageId).getData(maxSize: 1048576, completion: { (data, error) in
            defer {
                dispatchSemaphore.signal()
                dispatchGroup.leave()
            }
            if let imageData = data,
                error == nil {
                // add image to data object
                // append to array
            }
        })
        dispatchSemaphore.wait()
    }
    // do some extra stuff in background after loop is done
}

dispatchGroup.notify(queue: dispatchQueue) {
    DispatchQueue.main.async {
        self.tableView.reloadData()
    }
}
The answer is in your comment "get image with asynchronous completion handler". Without the semaphore, all image downloads would be started at the same time and race for completion, so the image that downloads fastest would be added to the array first.
So after you start your download you immediately wait on your semaphore. This will block until it is signaled in the callback closure from the getData method. Only then the loop can continue to the next document and download it. This way you download one file after another and block the current thread while the downloads are running.
Using a serial queue is not an option here, since this would only cause the downloads to start serially, but you can’t affect the order in which they finish.
This is rather inefficient, though. Your network layer can probably run faster if you give it multiple requests at the same time (think of parallel downloads and HTTP pipelining). You are also 'wasting' a thread that could do other work in the meantime; if there is more work to do at the same time, GCD will spawn another thread, which costs memory and other resources.
A better pattern would be to skip the semaphore, let the downloads run in parallel and store the image directly at the correct index in your array. This of course means you have to prepare an array of the appropriate size beforehand, and you have to think of a placeholder for missing or failed images. Optionals would do the trick nicely:
var images: [UIImage?] = Array(repeating: nil, count: snapshot.documents.count)
for (index, doc) in snapshot.documents.enumerated() {
    // create data object for array
    dispatchGroup.enter()
    // get image with asynchronous completion handler
    Storage.storage().reference(forURL: imageId).getData(maxSize: 1048576) { data, error in
        defer {
            dispatchGroup.leave()
        }
        if let imageData = data,
            error == nil {
            // store the image at its original position
            images[index] = UIImage(data: imageData)
        }
    }
}
The DispatchGroup isn't really doing anything here: you already have mutual exclusion via the DispatchSemaphore, and the ordering is simply provided by the iteration order of snapshot.documents.
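To illustrate: since the semaphore already serializes the loop, every download has finished by the time the for loop exits, so the same reload could be done without the group at all (a sketch reusing the names from the question):
// Sketch, reusing the question's names: the semaphore serializes the loop,
// so once the loop ends all images are in the array and the group adds nothing.
dispatchQueue.async {
    for doc in snapshot.documents {
        // start the download and dispatchSemaphore.wait(), exactly as above
    }
    // every download has completed by this point
    DispatchQueue.main.async {
        self.tableView.reloadData()
    }
}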

For Loop wait and delay - Swift 4

I have a nested for loop, and I am trying to make it so that the outer loop only continues once the inner loop and the code after it have completed, and also to add a 1-second delay before the next iteration.
for _ in 0...3 {
    for player in 0...15 {
        // CODE ADDING MOVEMENTS TO QUEUE
    }
    updateBoardArray()
    printBoard()
    // NEED TO WAIT HERE FOR 1 SEC
}
So I want the 0...3 for loop to progress only once the inner loop and the update and print functions have completed their cycle, with a 1-second wait between iterations.
At the moment it all happens at once and then just prints all 4 boards instantly, even when I put that 1 second wait in using DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 1) {}.
Have tried other answers to similar questions but can't seem to get it to work.
As I understand, what you tried to do is the following:
for _ in 0...3 {
    for player in 0...15 {
        // CODE ADDING MOVEMENTS TO QUEUE
    }
    updateBoardArray()
    printBoard()
    DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 1) {}
}
This will NOT work, because all you are doing is adding a task (which does nothing) to the main queue, to be triggered after 1 second; after adding it, the code continues with the for loop without waiting.
Solution
What you could do is simply use sleep(1), but bear in mind that this will freeze the main thread (if you are executing this in the main thread).
To avoid freezing the app, you could do this:
DispatchQueue.global(qos: .default).async {
    for _ in 0...3 {
        for player in 0...15 {
            // CODE ADDING MOVEMENTS TO QUEUE
        }
        DispatchQueue.main.async {
            updateBoardArray()
            printBoard()
        }
        sleep(1)
    }
}
Just keep in mind that any UI work must be done on the main thread.
The asyncAfter() function takes a DispatchTime. Its prototype is:
func asyncAfter(deadline: DispatchTime, execute: DispatchWorkItem)
The Apple Developer documentation says:
DispatchTime represents a point in time relative to the default clock with nanosecond precision. On Apple platforms, the default clock is based on the Mach absolute time unit.
Note, however, that the + operator on DispatchTime takes seconds, so DispatchTime.now() + 1 already means "one second from now". The real problem is the one described above: asyncAfter only schedules the (empty) closure and returns immediately, so it never pauses the loop.
You can use scheduledTimer:
for i in 0...3 {
    Timer.scheduledTimer(withTimeInterval: 1.0 * Double(i), repeats: false) { (timer) in
        // CODE — each iteration's work fires one second after the previous one
    }
}
See Apple's documentation for Timer.scheduledTimer(withTimeInterval:repeats:block:) for more info.

Multiple workers in Swift Command Line Tool

When writing a Command Line Tool (CLT) in Swift, I want to process a lot of data. I've determined that my code is CPU bound and performance could benefit from using multiple cores. Thus I want to parallelize parts of the code. Say I want to achieve the following pseudo-code:
Fetch items from database
Divide items in X chunks
Process chunks in parallel
Wait for chunks to finish
Do some other processing (single-thread)
Now I've been using GCD, and a naive approach would look like this:
let group = dispatch_group_create()
let queue = dispatch_queue_create("", DISPATCH_QUEUE_CONCURRENT)
for chunk in chunks {
    dispatch_group_async(group, queue) {
        worker(chunk)
    }
}
dispatch_group_wait(group, DISPATCH_TIME_FOREVER)
However GCD requires a run loop, so the code will hang as the group is never executed. The runloop can be started with dispatch_main(), but it never exits. It is also possible to run the NSRunLoop just a few seconds, however that doesn't feel like a solid solution. Regardless of GCD, how can this be achieved using Swift?
I mistakenly interpreted the blocked thread as a hanging program. The work executes just fine without a run loop: the code in the question runs correctly, blocking the main thread until the whole group has finished.
So, say chunks contains 4 items of workload; the following (Swift 3) code spins up 4 concurrent workers and then waits for all of them to finish:
let group = DispatchGroup()
let queue = DispatchQueue(label: "", attributes: .concurrent)
for chunk in chunks {
    queue.async(group: group, execute: DispatchWorkItem() {
        do_work(chunk)
    })
}
_ = group.wait(timeout: .distantFuture)
Just like with an Objective-C CLI, you can make your own run loop using NSRunLoop.
Here's one possible implementation, modeled from this gist:
class MainProcess {
    var shouldExit = false

    func start() {
        // do your stuff here
        // set shouldExit to true when you're done
    }
}

println("Hello, World!")

autoreleasepool {
    let runLoop = NSRunLoop.currentRunLoop()
    let process = MainProcess()
    process.start()
    while !process.shouldExit && runLoop.runMode(NSDefaultRunLoopMode, beforeDate: NSDate(timeIntervalSinceNow: 2)) {
        // do nothing
    }
}
As Martin points out, you can use NSDate.distantFuture() as NSDate instead of NSDate(timeIntervalSinceNow: 2). (The cast is necessary because the distantFuture() method signature indicates it returns AnyObject.)
If you need to access command-line arguments, they are available via Process.arguments (CommandLine.arguments in Swift 3 and later). You can also return exit codes using exit().
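For example, a minimal sketch of both, using the Swift 3 spelling; the <input-file> argument is just an illustrative placeholder:
import Foundation

// Minimal sketch of argument handling and exit codes in a command line tool.
let arguments = CommandLine.arguments              // arguments[0] is the executable path
guard arguments.count > 1 else {
    print("usage: \(arguments[0]) <input-file>")   // hypothetical usage message
    exit(EXIT_FAILURE)                             // non-zero exit status for the shell
}
print("processing \(arguments[1])")
exit(EXIT_SUCCESS)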
A Swift 3 minimal implementation of Aaron Brager's solution, which simply combines autoreleasepool and RunLoop.current.run(...) until you break out of the loop:
var shouldExit = false

doSomethingAsync() { _ in
    defer {
        shouldExit = true
    }
}

autoreleasepool {
    let runLoop = RunLoop.current
    while !shouldExit && runLoop.run(mode: .defaultRunLoopMode, before: Date.distantFuture) { }
}
I think CFRunLoop is much easier than NSRunLoop in this case:
func main() {
    /**** YOUR CODE START ****/
    let group = dispatch_group_create()
    let queue = dispatch_queue_create("", DISPATCH_QUEUE_CONCURRENT)
    for chunk in chunks {
        dispatch_group_async(group, queue) {
            worker(chunk)
        }
    }
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER)
    /**** END ****/
}

let runloop = CFRunLoopGetCurrent()
CFRunLoopPerformBlock(runloop, kCFRunLoopDefaultMode) { () -> Void in
    dispatch_async(dispatch_queue_create("main", nil)) {
        main()
        CFRunLoopStop(runloop)
    }
}
CFRunLoopRun()