How to implement an "Endless Task" in Swift

I'm new to Swift and don't have much experience with it. I'm running into high CPU usage (over 100% in the Xcode debug session) when I add the following code.
To implement an endless worker thread, I used GCD (the approach most recommended in many articles) with a while loop.
private let engineQueue = DispatchQueue(label: "engine queue", qos: .userInitiated)

public func startEngine() {
    engineQueue.async {
        while true {
            if self.captureManager.isSessionRunning {
                guard let frame = self.captureManager.frameQueue.popFrame(),
                      let buf = frame.getYPlanar() else {
                    continue
                }
                // process a frame
                self.engine.processNewFrame(buf: buf, width: Int32(frame.cols!), height: Int32(frame.rows!))
            }
        }
    }
}
I've since noticed that this is terrible practice (hence the high CPU usage), and there must be a much better way. So my questions are:
Do I have to use GCD rather than Thread in this case?
What is the best way to implement this?
Thanks,
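One common way to avoid the busy-wait is to block the worker until a frame actually arrives, rather than spinning. The following is only a sketch: `frameSemaphore` and `stopped` are hypothetical additions, and it assumes the capture callback signals the semaphore once per enqueued frame.

```swift
private let engineQueue = DispatchQueue(label: "engine queue", qos: .userInitiated)
private let frameSemaphore = DispatchSemaphore(value: 0) // hypothetical: signaled by the capture callback per frame
private var stopped = false                              // hypothetical stop flag

public func startEngine() {
    engineQueue.async {
        while !self.stopped {
            self.frameSemaphore.wait() // thread sleeps here; burns no CPU while idle
            guard let frame = self.captureManager.frameQueue.popFrame(),
                  let buf = frame.getYPlanar() else { continue }
            self.engine.processNewFrame(buf: buf,
                                        width: Int32(frame.cols!),
                                        height: Int32(frame.rows!))
        }
    }
}
```

The key difference from the spin loop is that `wait()` parks the thread until there is work, so the loop body only runs once per available frame.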

Related

What are the benefits of using async-await in the background thread?

I'm trying to understand the proper application of async-await in Swift. Let's say an async method, a non-IO method that doesn't make external calls, is called from a background thread to execute its process, such as some heavy image processing method.
func processImage(image: UIImage) async -> UIImage {
    // ...
}

Task {
    let result = await processImage(image: image)
}
What is happening when the code is paused and is waiting for the result? Since it's not making an external call, the process must be done somewhere from within the pool of threads. And since it's not being done in the very thread the method is called from, it must be being executed on another thread. Is a subtask created to execute the process? From what I understand, Task is a unit of concurrency and a single task contains no concurrency (except for async let), so this is a bit confusing to me. Is a task concurrent or not concurrent?
I understand that if this method is called from the main thread, the non-blocking aspect of the async method frees up the thread for the UI elements to run, thereby providing a seamless visual experience. But, what is the benefit of calling an async method from a background thread? I'm not referring to the syntactic sugar of being able to return the results or throw errors. Are there any benefits to the non-blocking aspect as opposed to using a synchronous method if the method is a non-IO method called from the background? In other words, what is it not blocking? If it's a parallel process, it's utilizing more resources to process multiple things efficiently, but I'm not sure how a concurrent process in this case is beneficial.
You need to stop thinking in terms of threads if you want to use async/await. It is useful, to some extent and for obvious reasons, to keep using phrases like "on the main thread" and "on a background thread", but these are almost metaphors.
You just need to accept that, no matter what "thread" something runs on, await has the magical power to say "hold my place" and to permit the computer to walk away and do something else entirely until the thing we're waiting for comes back to us. Nothing blocks, nothing spins. That is, indeed, a large part of the point of async/await.
(If you want to understand how it works under the hood, you need to find out what a "continuation" is. But in general it's really not worth worrying about; it's just a matter of getting your internal belief system in order.)
The overall advantage of async/await, however, is syntactic, not mechanical. Perhaps you could have done in effect everything you would do via async/await by using some other mechanism (Combine, DispatchQueue, Operation, whatever). But experience has shown that, especially in the case of DispatchQueue, beginners (and not-so-beginners) have great difficulty reasoning about the order in which lines of code are executed when code is asynchronous. With async/await, that problem goes away: code is executed in the order in which it appears, as if it were not asynchronous at all.
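For example, the following sketch (with hypothetical fetchUser, fetchAvatar, and show functions) reads exactly in the order it executes, even though each await may suspend:

```swift
// Hypothetical names, for illustration only: each line completes
// before the next begins, just as it reads on the page.
func loadProfile() async throws {
    let user = try await fetchUser()              // completes first
    let avatar = try await fetchAvatar(for: user) // starts only after fetchUser returns
    show(user: user, avatar: avatar)              // runs last
}
```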
And not just programmers; the compiler can't reason about the correctness of your DispatchQueue code. It can't help catch you in your blunders (and I'm sure you've made a few in your time; I certainly have). But async/await is not like that; just the opposite: the compiler can reason about your code and can help keep everything neat, safe, and correct.
As for the actual example that you pose, the correct implementation is to define an actor whose job it is to perform the time-consuming task. This, by definition, will not be the main actor; since you defined it, it will be what we may call a background actor; its methods will be called off the main thread, automatically, and everything else will follow thanks to the brilliance of the compiler.
Here is an example (from my book), doing just the sort of thing you ask about — a time-consuming calculation. This is a view which, when you call its public drawThatPuppy method, calculates a crude image of the Mandelbrot set off the main thread and then portrays that image within itself. The key thing to notice, for your purposes, is that in the lines
self.bitmapContext = await self.calc.drawThatPuppy(center: center, bounds: bounds)
self.setNeedsDisplay()
the phrases self.bitmapContext = and self.setNeedsDisplay are executed on the main thread, but the call to self.calc.drawThatPuppy is executed on a background thread, because calc is an actor. Yet the main thread is not blocked while self.calc.drawThatPuppy is executing; on the contrary, other main thread code is free to run during that time. It's a miracle!
// Mandelbrot drawing code based on https://github.com/ddeville/Mandelbrot-set-on-iPhone
import UIKit

extension CGRect {
    init(_ x: CGFloat, _ y: CGFloat, _ w: CGFloat, _ h: CGFloat) {
        self.init(x: x, y: y, width: w, height: h)
    }
}

/// View that displays the Mandelbrot set
class MyMandelbrotView: UIView {
    var bitmapContext: CGContext!
    var odd = false

    // the actor declaration puts us on a background thread
    private actor MyMandelbrotCalculator {
        private let MANDELBROT_STEPS = 200

        func drawThatPuppy(center: CGPoint, bounds: CGRect) -> CGContext {
            let bitmap = self.makeBitmapContext(size: bounds.size)
            self.draw(center: center, bounds: bounds, zoom: 1, context: bitmap)
            return bitmap
        }

        private func makeBitmapContext(size: CGSize) -> CGContext {
            var bitmapBytesPerRow = Int(size.width * 4)
            bitmapBytesPerRow += (16 - (bitmapBytesPerRow % 16)) % 16
            let colorSpace = CGColorSpaceCreateDeviceRGB()
            let prem = CGImageAlphaInfo.premultipliedLast.rawValue
            let context = CGContext(data: nil, width: Int(size.width), height: Int(size.height), bitsPerComponent: 8, bytesPerRow: bitmapBytesPerRow, space: colorSpace, bitmapInfo: prem)
            return context!
        }

        private func draw(center: CGPoint, bounds: CGRect, zoom: CGFloat, context: CGContext) {
            func isInMandelbrotSet(_ re: Float, _ im: Float) -> Bool {
                var fl = true
                var (x, y, nx, ny): (Float, Float, Float, Float) = (0, 0, 0, 0)
                for _ in 0 ..< MANDELBROT_STEPS {
                    nx = x*x - y*y + re
                    ny = 2*x*y + im
                    if nx*nx + ny*ny > 4 {
                        fl = false
                        break
                    }
                    x = nx
                    y = ny
                }
                return fl
            }
            context.setAllowsAntialiasing(false)
            context.setFillColor(red: 0, green: 0, blue: 0, alpha: 1)
            var re: CGFloat
            var im: CGFloat
            let maxi = Int(bounds.size.width)
            let maxj = Int(bounds.size.height)
            for i in 0 ..< maxi {
                for j in 0 ..< maxj {
                    re = (CGFloat(i) - 1.33 * center.x) / 160
                    im = (CGFloat(j) - 1.0 * center.y) / 160
                    re /= zoom
                    im /= zoom
                    if isInMandelbrotSet(Float(re), Float(im)) {
                        context.fill(CGRect(CGFloat(i), CGFloat(j), 1.0, 1.0))
                    }
                }
            }
        }
    }

    private let calc = MyMandelbrotCalculator()

    // jumping-off point: draw the Mandelbrot set
    func drawThatPuppy() async {
        let bounds = self.bounds
        let center = CGPoint(x: bounds.midX, y: bounds.midY)
        self.bitmapContext = await self.calc.drawThatPuppy(center: center, bounds: bounds)
        self.setNeedsDisplay()
    }

    // turn pixels of self.bitmapContext into a CGImage, draw into ourselves
    override func draw(_ rect: CGRect) {
        if self.bitmapContext != nil {
            let context = UIGraphicsGetCurrentContext()!
            context.setFillColor(self.odd ? UIColor.red.cgColor : UIColor.green.cgColor)
            self.odd.toggle()
            context.fill(self.bounds)
            let im = self.bitmapContext.makeImage()
            context.draw(im!, in: self.bounds)
        }
    }
}
The Swift Programming Language: Concurrency defines an asynchronous function as “a special kind of function or method that can be suspended while it’s partway through execution.”
So, this async designation on a function is designed for truly asynchronous routines, where the function will suspend/await the execution while the asynchronous process is underway. A typical example of this is the fetching of data with URLSession.
But this computationally intensive image processing is not an asynchronous task. It is inherently synchronous. So, it does not make sense to mark it as async. Furthermore, Task {…} is probably not the right pattern, either, as that creates a “new top-level task on behalf of the current actor”. But you probably do not want that slow, synchronous process running on the current actor (certainly, if that is the main actor). You may want a detached task. Or put it on its own actor.
The below code snippet illustrates how truly asynchronous methods (like the network request to fetch the data, fetchImage) differ from slow, synchronous methods (the processing of the image in processImage):
func processedImage(from url: URL) async throws -> UIImage {
    // fetch from network (calling `async` function)
    let image = try await fetchImage(from: url)

    // process synchronously, but do so off the current actor, so
    // we don’t block this actor
    return await Task.detached {
        self.processImage(image)
    }.value
}

// asynchronous method to fetch image
func fetchImage(from url: URL) async throws -> UIImage {
    let (data, _) = try await URLSession.shared.data(from: url)
    guard let image = UIImage(data: data) else { throw ImageError.notImage }
    return image
}

// slow, synchronous method to process image
func processImage(_ image: UIImage) -> UIImage {
    …
}

enum ImageError: Error {
    case notImage
}
For more information, see WWDC 2021 video Meet async/await in Swift. For insights about what await (i.e., a suspension point) really means within the broader threading model, Swift concurrency: Behind the scenes might be an interesting watch.
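The "put it on its own actor" alternative mentioned above could be sketched like this. `ImageProcessor` is a hypothetical name, and the body is a placeholder:

```swift
import UIKit

// Hypothetical actor: its methods run off the main actor automatically,
// so the slow, synchronous work never blocks the UI.
actor ImageProcessor {
    func process(_ image: UIImage) -> UIImage {
        // placeholder for the real, CPU-intensive processing
        return image
    }
}

// Usage from any async context:
// let processed = await ImageProcessor().process(image)
```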

Use NWPathMonitor with Swift Modern Concurrency (AsyncStream) vs GCD (DispatchQueue)

I have noticed that the start(queue:) method in NWPathMonitor requires a queue of type DispatchQueue. Is there a way to implement this using Swift Modern Concurrency, probably using AsyncStream?
Using Apple's documentation for AsyncStream, I have created the extension to NWPathMonitor below, but I cannot figure out how to start the monitor. Any suggestion will be appreciated, thanks.
extension NWPathMonitor {
    static var nwpath: AsyncStream<NWPath> {
        AsyncStream { continuation in
            let monitor = NWPathMonitor()
            monitor.pathUpdateHandler = { path in
                continuation.yield(path)
            }
            continuation.onTermination = { @Sendable _ in
                monitor.cancel()
            }
            // monitor.start(queue: )
        }
    }
}
If you are wrapping legacy APIs within some continuation pattern (whether with AsyncStream or withCheckedContinuation or whatever), that wrapped code will have to employ whatever pattern the legacy API requires.
So, in short, if an API wrapped by an AsyncStream requires a dispatch queue, then simply supply it a queue.
So:
extension NWPathMonitor {
    func paths() -> AsyncStream<NWPath> {
        AsyncStream { continuation in
            pathUpdateHandler = { path in
                continuation.yield(path)
            }
            continuation.onTermination = { [weak self] _ in
                self?.cancel()
            }
            start(queue: DispatchQueue(label: "NSPathMonitor.paths"))
        }
    }
}
Then you can do things like:
func startMonitoring() async {
    let monitor = NWPathMonitor()
    for await path in monitor.paths() {
        print(path.debugDescription)
    }
}
A few unrelated and stylistic recommendations, which I integrated in the above:
I did not make this static, as we generally want our extensions to be as flexible as possible. If this is in an extension, we want the application developer to create whatever NWPathMonitor they want (e.g., perhaps requiring or prohibiting certain interfaces) and then create the asynchronous sequence for the updates for whatever path monitor they want.
I made this a function, rather than a computed property, so that it is intuitive to an application developer that this will create a new sequence every time you call it. I would advise against hiding factories behind computed properties.
The concern with a computed property is that it is not at all obvious to an application developer unfamiliar with the underlying implementation that accessing the same property twice will yield two completely different objects. Using a method makes this a little more explicit.
Obviously, you are free to do whatever you want regarding these two observations, but I at least wanted to explain my rationale for the adjustments in the above code.

How to solve data race/ read and write problem with a help of semaphore/lock?

Is it possible to solve the read-write problem with the help of a semaphore or lock?
It is possible to make a solution with serial writes and serial reads, but is it possible to allow concurrent reads (multiple reads happening at the same time)?
Here is my simple implementation, but reads are not concurrent.
class ThreadSafeContainerSemaphore<T> {
    private var value: T
    private let semaphore = DispatchSemaphore(value: 1)

    func get() -> T {
        semaphore.wait()
        defer { semaphore.signal() }
        return value
    }

    func mutate(_ completion: (inout T) -> Void) {
        semaphore.wait()
        completion(&self.value)
        semaphore.signal()
    }

    init(value: T) {
        self.value = value
    }
}
You asked:
Is it possible to solve read and write problem with a help of semaphore or lock?
Yes. The approach that you have supplied should accomplish that.
It is possible to make the solution having serial write and serial read but is it possible to have concurrent reads (giving the possibility to have concurrent reads at one time)?
That’s more complicated. Semaphores and locks lend themselves to simple synchronization that prohibits any concurrent access (including concurrent reads).
The approach that allows concurrent reads is called the “reader-writer” pattern. But semaphores/locks do not naturally lend themselves to it without adding various state properties. We generally accomplish it with a concurrent GCD queue, performing reads concurrently, but performing writes with a barrier (to prevent any concurrent operations):
class ThreadSafeContainerGCD<Value> {
    private var value: Value
    private let queue = DispatchQueue(label: ..., attributes: .concurrent)

    func get() -> Value {
        queue.sync { value }
    }

    func mutate(_ block: @escaping (inout Value) -> Void) {
        queue.async(flags: .barrier) { block(&self.value) }
    }

    init(value: Value) {
        self.value = value
    }
}
A few observations:
Semaphores are relatively inefficient. In my benchmarks, a simple NSLock is much faster, and an unfair lock even more so.
The GCD reader-writer pattern, while more efficient than the semaphore pattern, is still not as quick as a simple lock approaches (even though the latter does not support concurrent reads). The GCD overhead outweighs the benefits achieved by concurrent reads and asynchronous writes.
But benchmark the various patterns in your use case and see which is best for you. See https://stackoverflow.com/a/58211849/1271826.
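As a point of comparison for the benchmarking note above, here is a minimal sketch of the simple-lock alternative: reads are serial (no concurrent reads), but the low overhead of NSLock often makes it faster overall than the semaphore or GCD reader-writer variants.

```swift
import Foundation

// Minimal sketch: every access, read or write, takes the same lock.
final class ThreadSafeContainerLock<Value> {
    private var value: Value
    private let lock = NSLock()

    init(value: Value) { self.value = value }

    func get() -> Value {
        lock.lock()
        defer { lock.unlock() }
        return value
    }

    func mutate(_ block: (inout Value) -> Void) {
        lock.lock()
        defer { lock.unlock() }
        block(&self.value)
    }
}
```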
Yes, you can probably solve your problem with a semaphore; it is made for exactly this kind of access to a shared resource.
Parallel reads are not a problem. However, if you want to allow even a single write, you need to be careful and handle it properly (even when a single write happens in parallel with a single read).

Swift DispatchQueue concurrentPerform OpenGL parallel rendering

I have a headless EGL renderer in C++ for Linux that I have wrapped with bindings to use in Swift. It works great – I can render in parallel, creating multiple contexts and rendering in separate threads, but I've run into a weird issue. First of all, I have wrapped all GL calls specific to a renderer and its context inside its own serial queue, like below.
func draw(data: Any) -> results {
    serial.sync {
        // All rendering code for this renderer is wrapped in a unique serial queue.
        bindGLContext()
        draw()
    }
}
To batch data between renderers I used DispatchQueue.concurrentPerform. It works correctly, but when I try creating a concurrent queue with a DispatchGroup something weird happens. Even though I have wrapped all GL calls in serial queues the GL contexts get messed up and all gl calls fail to allocate textures/buffers/etc.
So I am trying to understand the difference between these two and why one works and the other doesn't. Any ideas would be great!
// This works
DispatchQueue.concurrentPerform(iterations: renderers.count) { j in
    let batch = batches[j]
    let renderer = renderers[j]
    let _ = renderer.draw(data: batch)
}

// This fails – specifically, GL calls fail
let group = DispatchGroup()
let q = DispatchQueue(label: "queue.concurrent", attributes: .concurrent)
for (j, renderer) in renderers.enumerated() {
    q.async(group: group) {
        let batch = batches[j]
        let _ = renderer.draw(data: batch)
    }
}
group.wait()
Edit:
I would make sure the OpenGL wrapper is actually thread-safe. Each renderer having its own serial queue may not help if multiple renderers are making OpenGL calls simultaneously. It's possible the DispatchQueue.concurrentPerform version works because it is effectively running serially.
Original answer:
I suspect the OpenGL failures have to do with hitting memory constraints. When you dispatch many tasks to a concurrent queue, GCD doesn't do anything clever to rate-limit the number of tasks that are started. If a bunch of running tasks are blocked doing IO, it may just start more and more tasks before any of them finish, gobbling up more and more memory. Here's a detailed write-up from Mike Ash about the problem.
I would guess that DispatchQueue.concurrentPerform works because it has some kind of extra logic internally to avoid spawning too many threads, though it's not well documented and there may be platform-specific stuff happening here. I'm not sure why the function would even exist if all it was doing was dispatching to a concurrent queue.
If you want to dispatch a large number of items directly to a DispatchQueue, especially if those items have some non-CPU-bound component to them, you need to add some extra logic yourself to limit the number of tasks that get started. Here's an example from Soroush Khanlou's GCD Handbook:
class LimitedWorker {
    private let serialQueue = DispatchQueue(label: "com.khanlou.serial.queue")
    private let concurrentQueue = DispatchQueue(label: "com.khanlou.concurrent.queue", attributes: .concurrent)
    private let semaphore: DispatchSemaphore

    init(limit: Int) {
        semaphore = DispatchSemaphore(value: limit)
    }

    func enqueue(task: @escaping () -> ()) {
        serialQueue.async {
            self.semaphore.wait()
            self.concurrentQueue.async {
                task()
                self.semaphore.signal()
            }
        }
    }
}
It uses a semaphore to limit the number of concurrent tasks executing on the concurrent queue, and uses a serial queue to feed new tasks to the concurrent queue. Newly enqueued tasks block at self.semaphore.wait() if the maximum number of tasks is already scheduled on the concurrent queue.
You would use it like this:
let group = DispatchGroup()
let q = LimitedWorker(limit: 10) // Experiment with this number
for (j, renderer) in renderers.enumerated() {
    group.enter()
    q.enqueue {
        let batch = batches[j]
        let _ = renderer.draw(data: batch)
        group.leave()
    }
}
group.wait()
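An alternative sketch, if you prefer not to manage the semaphore bookkeeping yourself: OperationQueue has built-in rate limiting via maxConcurrentOperationCount. This assumes the same renderers and batches arrays from the question.

```swift
import Foundation

let queue = OperationQueue()
queue.maxConcurrentOperationCount = 10 // experiment with this number

for (j, renderer) in renderers.enumerated() {
    queue.addOperation {
        let batch = batches[j]
        let _ = renderer.draw(data: batch)
    }
}
// blocks the caller until every enqueued operation has finished
queue.waitUntilAllOperationsAreFinished()
```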

How do I dispatch_sync, dispatch_async, dispatch_after, etc in Swift 3, Swift 4, and beyond?

I have lots of code in Swift 2.x (or even 1.x) projects that looks like this:
// Move to a background thread to do some long running work
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
    let image = self.loadOrGenerateAnImage()
    // Bounce back to the main thread to update the UI
    dispatch_async(dispatch_get_main_queue()) {
        self.imageView.image = image
    }
}
Or stuff like this to delay execution:
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, Int64(0.5 * Double(NSEC_PER_SEC))), dispatch_get_main_queue()) {
    print("test")
}
Or any of all kinds of other uses of the Grand Central Dispatch API...
Now that I've opened my project in Xcode 8 (beta) for Swift 3, I get all kinds of errors. Some of them offer to fix my code, but not all of the fixes produce working code. What do I do about this?
Since the beginning, Swift has provided some facilities for making ObjC and C more Swifty, adding more with each version. Now, in Swift 3, the new "import as member" feature lets frameworks with certain styles of C API -- where you have a data type that works sort of like a class, and a bunch of global functions to work with it -- act more like Swift-native APIs. The data types import as Swift classes, their related global functions import as methods and properties on those classes, and some related things like sets of constants can become subtypes where appropriate.
In Xcode 8 / Swift 3 beta, Apple has applied this feature (along with a few others) to make the Dispatch framework much more Swifty. (And Core Graphics, too.) If you've been following the Swift open-source efforts, this isn't news, but now is the first time it's part of Xcode.
Your first step on moving any project to Swift 3 should be to open it in Xcode 8 and choose Edit > Convert > To Current Swift Syntax... in the menu. This will apply (with your review and approval) all of the changes at once needed for all the renamed APIs and other changes. (Often, a line of code is affected by more than one of these changes at once, so responding to error fix-its individually might not handle everything right.)
The result is that the common pattern for bouncing work to the background and back now looks like this:
// Move to a background thread to do some long running work
DispatchQueue.global(qos: .userInitiated).async {
    let image = self.loadOrGenerateAnImage()
    // Bounce back to the main thread to update the UI
    DispatchQueue.main.async {
        self.imageView.image = image
    }
}
Note we're using .userInitiated instead of one of the old DISPATCH_QUEUE_PRIORITY constants. Quality of Service (QoS) specifiers were introduced in OS X 10.10 / iOS 8.0, providing a clearer way for the system to prioritize work and deprecating the old priority specifiers. See Apple's docs on background work and energy efficiency for details.
By the way, if you're keeping your own queues to organize work, the way to get one now looks like this (queues are serial by default; pass a qos: at creation to prioritize their work, or attributes: [.concurrent] for a concurrent queue):
class Foo {
    let queue = DispatchQueue(label: "com.example.my-serial-queue",
                              qos: .utility)
    func doStuff() {
        queue.async {
            print("Hello World")
        }
    }
}
Using dispatch_after to do work later? That's a method on queues, too, and it takes a DispatchTime, which has operators for various numeric types so you can just add whole or fractional seconds:
DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) { // in half a second...
    print("Are we there yet?")
}
You can find your way around the new Dispatch API by opening its interface in Xcode 8 -- use Open Quickly to find the Dispatch module, or put a symbol (like DispatchQueue) in your Swift project/playground and command-click it, then browse around the module from there. (You can find the Swift Dispatch API in Apple's spiffy new API Reference website and in-Xcode doc viewer, but it looks like the doc content from the C version hasn't moved into it just yet.)
See the Migration Guide for more tips.
In Xcode 8 beta 4 the above did not work for me. Use:
DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
    print("Are we there yet?")
}
For async, there are two forms:
DispatchQueue.main.async {
    print("Async1")
}

DispatchQueue.main.async(execute: {
    print("Async2")
})
This is a good Swift 4 example of async:
DispatchQueue.global(qos: .background).async {
    // Background Thread
    DispatchQueue.main.async {
        // Run UI updates or call a completion block
    }
}
In Xcode 8, use:
DispatchQueue.global(qos: .userInitiated).async { }
Swift 5.2, 4 and later
Main and background queues:
let main = DispatchQueue.main
let background = DispatchQueue.global()
let helper = DispatchQueue(label: "another_thread")
Working with async and sync tasks:
background.async {
    // async tasks here
}
background.sync {
    // sync tasks here
}
Async tasks do not block the calling thread; they run alongside it. Sync tasks block the calling thread (for example, the main thread) until they finish.
Swift 4.1 and 5. We use queues in many places in our code, so I created a Threads class with all the queues. If you don't want to use the Threads class, you can copy the desired queue code from its class methods.
class Threads {
    static let concurrentQueue = DispatchQueue(label: "AppNameConcurrentQueue", attributes: .concurrent)
    static let serialQueue = DispatchQueue(label: "AppNameSerialQueue")

    // Main Queue
    class func performTaskInMainQueue(task: @escaping () -> ()) {
        DispatchQueue.main.async {
            task()
        }
    }

    // Background Queue
    class func performTaskInBackground(task: @escaping () throws -> ()) {
        DispatchQueue.global(qos: .background).async {
            do {
                try task()
            } catch let error as NSError {
                print("error in background thread: \(error.localizedDescription)")
            }
        }
    }

    // Concurrent Queue
    class func performTaskInConcurrentQueue(task: @escaping () throws -> ()) {
        concurrentQueue.async {
            do {
                try task()
            } catch let error as NSError {
                print("error in concurrent queue: \(error.localizedDescription)")
            }
        }
    }

    // Serial Queue
    class func performTaskInSerialQueue(task: @escaping () throws -> ()) {
        serialQueue.async {
            do {
                try task()
            } catch let error as NSError {
                print("error in serial queue: \(error.localizedDescription)")
            }
        }
    }

    // Perform task after delay
    class func performTaskAfterDelay(_ timeInterval: TimeInterval, _ task: @escaping () -> ()) {
        DispatchQueue.main.asyncAfter(deadline: .now() + timeInterval) {
            task()
        }
    }
}
Example showing use of the main queue:
override func viewDidLoad() {
    super.viewDidLoad()
    Threads.performTaskInMainQueue {
        // Update UI
    }
}