I have a function which controls some resources, for example:
var resource: Int?
func changeSomeResources() {
resource = 1
// rewriting keychain parameters
// working with UIApplication.shared
}
Then I dispatch this function to a global queue several times:
DispatchQueue.global(qos: .userInitiated).async {
changeSomeResources()
}
DispatchQueue.global(qos: .userInitiated).async {
changeSomeResources()
}
Can I run into any threading problems in this case besides a race condition?
For example, if both calls try to change a resource at the same time.
The global dispatch queues are concurrent, so that does not protect
your function from being called simultaneously from multiple threads.
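To see that concurrency directly, here is a small self-contained sketch (not from the question) in which two work items each wait for the other to have started; on a serial queue this would deadlock, but on the concurrent global queue both complete:

```swift
import Dispatch

let aStarted = DispatchSemaphore(value: 0)
let bStarted = DispatchSemaphore(value: 0)
let group = DispatchGroup()

// Each work item signals that it has started, then waits for the other.
// Both can only finish if the queue runs them at the same time.
DispatchQueue.global(qos: .userInitiated).async(group: group) {
    aStarted.signal()
    bStarted.wait()
}
DispatchQueue.global(qos: .userInitiated).async(group: group) {
    bStarted.signal()
    aStarted.wait()
}

// .success here shows the two items overlapped in time.
let result = group.wait(timeout: .now() + 5)
```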
If you want to serialize access to the resources then you have to
create a serial queue:
let myQueue = DispatchQueue(label: "myQueue", qos: .userInitiated)
Then all work items dispatched to this queue are executed sequentially:
myQueue.async {
changeSomeResources()
}
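As a sketch of how that serializes access (the resource here is just the Int from the question; reads use sync so the caller gets a value that is not mid-mutation):

```swift
import Dispatch

let myQueue = DispatchQueue(label: "myQueue", qos: .userInitiated)
var resource: Int?

// All writes are funneled through the serial queue...
for i in 1...100 {
    myQueue.async { resource = i }
}

// ...and reads go through the same queue with sync, so they cannot
// interleave with a write. FIFO order means all 100 writes run first.
let value = myQueue.sync { resource }
```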
Note also that UIApplication – as a UI related resource – must only
be accessed on the main thread:
DispatchQueue.main.async {
// working with UIApplication.shared
}
Xcode also has options “Thread Sanitizer” and “Main Thread Checker”
(in the “Diagnostics” pane of the scheme settings) which can help
to detect threading problems.
I have Realm notifications on a background thread created with the following code (taken from Realm's website)
class BackgroundWorker: NSObject {
private let name: String
private var thread: Thread!
private var block: (()->Void)!
init(name: String) {
self.name = name
}
@objc internal func runBlock() {
block()
}
internal func start(_ block: @escaping () -> Void) {
self.block = block
if thread == nil {
createThread()
}
perform(
#selector(runBlock),
on: thread,
with: nil,
waitUntilDone: false,
modes: [RunLoop.Mode.default.rawValue]
)
}
private func createThread() {
thread = Thread { [weak self] in
while (self != nil && !self!.thread.isCancelled) {
RunLoop.current.run(
mode: RunLoop.Mode.default,
before: Date.distantFuture)
}
Thread.exit()
}
thread.name = name
thread.start()
}
func stop() {
thread.cancel()
}
}
And using the background worker like this
struct RealmBackGroundWorker {
static var tokens: [NotificationToken] = []
static let backgroundWorker = BackgroundWorker(name: "RealmWorker")
static func start() {
backgroundWorker.start {
self.tokens = ...
}
}
}
The background notifications work great. But I often need to save data to realm without notifying these transactions. From what I have found, it does not look like there is a way to write data without notifying all tokens. You always have to specify the tokens you want to ignore.
How can I write data to the Realm without notifying these background tokens?
Let me preface this answer with a couple of things. The Realm article the OP got their code from is Realm Notifications on Background Threads with Swift, and part of the point of that code was not only to spin up a run loop on a background thread to handle Realm functions, but also to handle notifications on that same thread.
That code is pretty old (4+ years) and somewhat outdated. In essence, there are possibly better options. From Apple:
... newer technologies such as Grand Central Dispatch (GCD) provide a
more modern and efficient infrastructure for implementing concurrency
But to address the question: if an observer is added to a Realm Results on thread A, then all of the notifications will also occur on thread A, i.e. the token returned from the observe function is tied to that thread.
It appears the OP wants to write data without receiving notifications
I do not want to sync local changes to the server, so I would like to
call .write(withoutNotifying: RealmWorkBlock.tokens)
and
I want a way to write data to the realm database without notifying
these notifications.
Note that those notifications will occur on the same thread as the run loop. Here's the code we need to look at:
static func start() {
backgroundWorker.start {
self.tokens = ...
}
}
and in particular this line
self.tokens = ...
because the ... is the important part. That ... leads to this line (from the docs)
self?.token = files.observe { changes in
which is where the observer is added that generates the notifications. If no notifications are needed, then that code, starting with self?.token, can be completely removed, as its sole purpose is to generate notifications.
One thought is to add a different init to the background worker class to have a background worker with no notifications:
static func start() {
backgroundWorker.startWithoutNotifications()
}
Another thought is to take a more modern approach and leverage DispatchQueue with an autorelease pool, which eliminates the need for these classes completely, runs in the background freeing up the UI, and does not involve tokens or notifications.
DispatchQueue(label: "background").async {
autoreleasepool {
let realm = try! Realm()
let files = realm.objects(File.self).filter("localUrl = ''")
}
}
I have a headless EGL renderer in C++ for Linux that I have wrapped with bindings to use in Swift. It works great – I can render in parallel, creating multiple contexts and rendering in separate threads, but I've run into a weird issue. First of all, I have wrapped all GL calls specific to a renderer and its context inside its own serial queue, like below.
func draw(data:Any) -> results {
serial.sync {
//All rendering code for this renderer is wrapped in a unique serial queue.
bindGLContext()
draw()
}
}
To batch data between renderers I used DispatchQueue.concurrentPerform. It works correctly, but when I try creating a concurrent queue with a DispatchGroup something weird happens. Even though I have wrapped all GL calls in serial queues the GL contexts get messed up and all gl calls fail to allocate textures/buffers/etc.
So I am trying to understand the difference between these two and why one works and the other doesn't. Any ideas would be great!
//This works
DispatchQueue.concurrentPerform(iterations: renderers.count) { j in
let batch = batches[j]
let renderer = renderers[j]
let _ = renderer.draw(data:batch)
}
//This fails – specifically GL calls fail
let group = DispatchGroup()
let q = DispatchQueue(label: "queue.concurrent", attributes: .concurrent)
for (j, renderer) in renderers.enumerated() {
q.async(group: group) {
let batch = batches[j]
let _ = renderer.draw(data:batch)
}
}
group.wait()
Edit:
I would make sure the OpenGL wrapper is actually thread safe. Each renderer having its own serial queue may not help if multiple renderers are making OpenGL calls simultaneously. It's possible the DispatchQueue.concurrentPerform version works because it is effectively running serially.
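One way to rule that out is to share a single serial queue across all renderers, rather than giving each renderer its own. A minimal sketch with the GL work stubbed out (the Renderer class and the doubled "result" are hypothetical stand-ins, not the OP's wrapper):

```swift
import Dispatch

// One serial queue shared by every renderer, so GL calls can never
// overlap even when the draws are kicked off concurrently.
let glQueue = DispatchQueue(label: "gl.commands")

final class Renderer {
    func draw(data: Int) -> Int {
        glQueue.sync {
            // bindGLContext(); draw() -- the real GL work would go here
            return data * 2  // stand-in for a render result
        }
    }
}

let renderers = (0..<4).map { _ in Renderer() }
let batches = [1, 2, 3, 4]
var results = [Int](repeating: 0, count: batches.count)
// Writing through the buffer pointer keeps the concurrent writes to
// distinct indices well-defined.
results.withUnsafeMutableBufferPointer { buf in
    DispatchQueue.concurrentPerform(iterations: renderers.count) { j in
        buf[j] = renderers[j].draw(data: batches[j])
    }
}
```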
Original answer:
I suspect the OpenGL failures have to do with hitting memory constraints. When you dispatch many tasks to a concurrent queue, GCD doesn't do anything clever to rate-limit the number of tasks that are started. If a bunch of running tasks are blocked doing IO, it may just start more and more tasks before any of them finish, gobbling up more and more memory. Here's a detailed write-up from Mike Ash about the problem.
I would guess that DispatchQueue.concurrentPerform works because it has some kind of extra logic internally to avoid spawning too many threads, though it's not well documented and there may be platform-specific stuff happening here. I'm not sure why the function would even exist if all it was doing was dispatching to a concurrent queue.
If you want to dispatch a large number of items directly to a DispatchQueue, especially if those items have some non-CPU-bound component to them, you need to add some extra logic yourself to limit the number of tasks that get started. Here's an example from Soroush Khanlou's GCD Handbook:
class LimitedWorker {
private let serialQueue = DispatchQueue(label: "com.khanlou.serial.queue")
private let concurrentQueue = DispatchQueue(label: "com.khanlou.concurrent.queue", attributes: .concurrent)
private let semaphore: DispatchSemaphore
init(limit: Int) {
semaphore = DispatchSemaphore(value: limit)
}
func enqueue(task: @escaping () -> ()) {
serialQueue.async(execute: {
self.semaphore.wait()
self.concurrentQueue.async(execute: {
task()
self.semaphore.signal()
})
})
}
}
It uses a semaphore to limit the number of concurrent tasks executing on the concurrent queue, and uses a serial queue to feed new tasks to the concurrent queue. Newly enqueued tasks block at self.semaphore.wait() if the maximum number of tasks is already scheduled on the concurrent queue.
You would use it like this:
let group = DispatchGroup()
let q = LimitedWorker(limit: 10) // Experiment with this number
for (j, renderer) in renderers.enumerated() {
group.enter()
q.enqueue {
let batch = batches[j]
let _ = renderer.draw(data:batch)
group.leave()
}
}
group.wait()
(I wasn't able to find this information in the documentation, or I don't know where to look, so forgive me if this has already been explained somewhere; a link would be helpful.)
My app creates and uses NSThreads to interact with Realm. All threads have working Run Loops set up on them so the Realms created on them will autorefresh.
One thread, called ReadThread, is used by the app by different modules to set up notification tokens so that they can receive updates and do some processing without blocking the main thread.
Example (pseudocode):
ReadThread {
func performAsync(_ block: ()->Void) {
// execute block on run loop of the thread using self.perform(#selector(), on: self)
}
}
Singleton {
let readThread = ReadThread()
init {
self.readThread.start()
}
}
Main Thread:
Class A {
private var token: NotificationToken?
init {
Singleton.readThread.perform {
let token = realm.observe() { [weak self] (notification: Realm.Notification, realm) in
self?.doWork()
DispatchQueue.main.async { [weak self] in
self?.notifyUI()
}
}
DispatchQueue.main.async {[weak self] in self?.token = token }
}
}
}
The idea is that the token is created on the ReadThread, but the token is stored in an instance variable on a different thread (the main thread). Is the token thread-safe enough that the main thread objects can at least call invalidate() on the token, or, if the main thread object is deallocated, will the token be automatically invalidated?
Thanks for your help!
Learned the hard way, but the answer is no: the tokens must be invalidated on the same thread that they were generated on. Otherwise, a runtime exception will be thrown by Realm's verifyThread check.
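Realm aside, the general pattern for any thread-confined object is to hop back to the owning thread (or queue) to tear it down. A hedged sketch of that idea, with a plain serial queue standing in for the ReadThread and a hypothetical Token class standing in for NotificationToken:

```swift
import Dispatch

let readQueue = DispatchQueue(label: "read.thread")
let key = DispatchSpecificKey<Bool>()
readQueue.setSpecific(key: key, value: true)

final class Token {
    // Records whether invalidate() ran on the queue that owns the token.
    private(set) var invalidatedOnOwningQueue = false
    func invalidate() {
        invalidatedOnOwningQueue = DispatchQueue.getSpecific(key: key) == true
    }
}

let token = readQueue.sync { Token() }  // created on the owning queue

// Wrong: calling token.invalidate() directly from the main thread would
// trip a thread-confinement check in a real Realm token.
// Right: hop back to the owning queue first.
readQueue.sync { token.invalidate() }
```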
Here is the scenario: everything works, but I get hung up on the main queue. I have:
1. a singleton class to manage the API connection. Everything works (execution time aside....)
2. a number of view controllers calling GET APIs via the above singleton class to get the data
3. I normally call the above from either viewDidLoad or viewWillAppear
4. they all work BUT ....
5. if I call a couple of API methods implemented with Alamofire.request() with a closure (well, I need to know when it is time to reload!), one of the two gets hung waiting for the default (main) queue to give it a thread, and it can take up to 20 seconds
6. if I call only one, do my thing and then call a POST API, this latter one ends up in the same situation as (5); it takes a long time to grab a slot in the default queue.
I am not specifying a queue in Alamofire.request() and it sounds to me like I should, so I tried it. I added a custom concurrent queue to my singleton API class and I tried adding that to my Alamofire.request() ..... and that did absolutely nothing. Help please, I must be missing something obvious?!
Here is my singleton API manager (excerpt) class:
class APIManager {
// bunch of stuff here
static let sharedInstance = APIManager()
// more stuff here
let queue = DispatchQueue(label: "com.teammate.response-queue", qos: .utility, attributes: [.concurrent])
// more stuff
func loadSports(completion: @escaping (Error?) -> Void) {
let parameters: [String: Any?] = [:]
let headers = getAuthenticationHeaders()
let url = api_server+api_sport_list
Alamofire.request(url, method: .get, parameters: parameters, encoding: JSONEncoding.default, headers: headers).responseString (queue: queue) { response in
if let json = response.result.value {
if let r = JSONDeserializer<Response<[Sport]>>.deserializeFrom(json: json) {
if r.status == 200 {
switch r.content{
case let content as [Sport]:
self.sports = content
NSLog("sports loaded")
completion(nil)
default:
NSLog("could not read the sport list payload!")
completion(GenericError.reportError("could not read sports payload"))
}
}
else {
NSLog("sports not obtained, error %d %@", r.status, r.error)
completion(GenericError.reportError(r.error))
}
}
}
}
}
// more stuff
}
And this is how I call the methods from APIManager once I get the singleton:
api.loadSports(){ error in
if error != nil {
// something bad happened, more code here to handle the error
}
else {
self.someViewThingy.reloadData()
}
}
Again, it all works; it is just that if I make multiple Alamofire calls from the same UIViewController, the first is fast, and every other call sits forever to get a spot in the queue and run.
UI updates must happen on the main queue, so moving this stuff to a concurrent queue is only going to introduce problems. In fact, if you change the completion handler queue to your concurrent queue and neglect to dispatch UI updates back to the main queue, it's going to make it look much slower than it really is.
I actually suspect you misunderstand the purpose of the queue parameter of responseString. It isn't how the requests are processed (they already happen concurrently with respect to the main queue), but merely on which queue the completion handler will be run.
So, a couple of thoughts:
If you're going to use your own queue, make sure to dispatch UI updates to the main queue.
If you're going to use your own queue and you're going to update your model, make sure to synchronize those updates with any interaction you might be doing on the main queue. Either create a synchronization queue for that or, easier, dispatch all model updates back to the main queue.
I see nothing here that justifies the overhead and hassle of running the completion handler on anything other than the main queue. If you don't supply a queue to responseString, it will use the main queue for the completion handlers (but won't block anything, either), and it solves the prior two issues.
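To illustrate the point about the queue parameter, here is a self-contained sketch (fetchValue and all the names are hypothetical stand-ins, not Alamofire's API): the background work happens the same way regardless; the queue parameter only decides where the completion handler runs.

```swift
import Dispatch

// Hypothetical stand-in for an Alamofire-style call: the work runs on a
// global queue either way; completionQueue is only where the handler runs.
func fetchValue(completionQueue: DispatchQueue,
                completion: @escaping (Int) -> Void) {
    DispatchQueue.global().async {   // the "network" work
        let parsed = 200             // stand-in for a parsed response
        completionQueue.async { completion(parsed) }
    }
}

let key = DispatchSpecificKey<String>()
let responseQueue = DispatchQueue(label: "api.response")
responseQueue.setSpecific(key: key, value: "api.response")

let done = DispatchSemaphore(value: 0)
var status = 0
var handlerQueueLabel: String?
fetchValue(completionQueue: responseQueue) { value in
    status = value
    handlerQueueLabel = DispatchQueue.getSpecific(key: key)
    // In an app you would now hop to the main queue for UI updates:
    // DispatchQueue.main.async { self.someViewThingy.reloadData() }
    done.signal()
}
done.wait()
```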
I have lots of code in Swift 2.x (or even 1.x) projects that looks like this:
// Move to a background thread to do some long running work
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
let image = self.loadOrGenerateAnImage()
// Bounce back to the main thread to update the UI
dispatch_async(dispatch_get_main_queue()) {
self.imageView.image = image
}
}
Or stuff like this to delay execution:
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, Int64(0.5 * Double(NSEC_PER_SEC))), dispatch_get_main_queue()) {
print("test")
}
Or any of all kinds of other uses of the Grand Central Dispatch API...
Now that I've opened my project in Xcode 8 (beta) for Swift 3, I get all kinds of errors. Some of them offer to fix my code, but not all of the fixes produce working code. What do I do about this?
Since the beginning, Swift has provided some facilities for making ObjC and C more Swifty, adding more with each version. Now, in Swift 3, the new "import as member" feature lets frameworks with certain styles of C API -- where you have a data type that works sort of like a class, and a bunch of global functions to work with it -- act more like Swift-native APIs. The data types import as Swift classes, their related global functions import as methods and properties on those classes, and some related things like sets of constants can become subtypes where appropriate.
In Xcode 8 / Swift 3 beta, Apple has applied this feature (along with a few others) to make the Dispatch framework much more Swifty. (And Core Graphics, too.) If you've been following the Swift open-source efforts, this isn't news, but now is the first time it's part of Xcode.
Your first step on moving any project to Swift 3 should be to open it in Xcode 8 and choose Edit > Convert > To Current Swift Syntax... in the menu. This will apply (with your review and approval) all of the changes at once needed for all the renamed APIs and other changes. (Often, a line of code is affected by more than one of these changes at once, so responding to error fix-its individually might not handle everything right.)
The result is that the common pattern for bouncing work to the background and back now looks like this:
// Move to a background thread to do some long running work
DispatchQueue.global(qos: .userInitiated).async {
let image = self.loadOrGenerateAnImage()
// Bounce back to the main thread to update the UI
DispatchQueue.main.async {
self.imageView.image = image
}
}
Note we're using .userInitiated instead of one of the old DISPATCH_QUEUE_PRIORITY constants. Quality of Service (QoS) specifiers were introduced in OS X 10.10 / iOS 8.0, providing a clearer way for the system to prioritize work and deprecating the old priority specifiers. See Apple's docs on background work and energy efficiency for details.
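For reference, the rough correspondence between the deprecated priority constants and the QoS classes that replaced them can be sketched like this (the mapping in the comments reflects the commonly cited equivalences):

```swift
import Dispatch

// Rough mapping from the deprecated DISPATCH_QUEUE_PRIORITY_* constants
// to QoS classes:
let high       = DispatchQueue.global(qos: .userInitiated) // ...PRIORITY_HIGH
let normal     = DispatchQueue.global(qos: .default)       // ...PRIORITY_DEFAULT
let low        = DispatchQueue.global(qos: .utility)       // ...PRIORITY_LOW
let background = DispatchQueue.global(qos: .background)    // ...PRIORITY_BACKGROUND
```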
By the way, if you're keeping your own queues to organize work, the way to get one now looks like this (notice that DispatchQueueAttributes is an OptionSet, so you use collection-style literals to combine options):
class Foo {
let queue = DispatchQueue(label: "com.example.my-serial-queue",
attributes: [.serial, .qosUtility])
func doStuff() {
queue.async {
print("Hello World")
}
}
}
Using dispatch_after to do work later? That's a method on queues, too, and it takes a DispatchTime, which has operators for various numeric types so you can just add whole or fractional seconds:
DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) { // in half a second...
print("Are we there yet?")
}
You can find your way around the new Dispatch API by opening its interface in Xcode 8 -- use Open Quickly to find the Dispatch module, or put a symbol (like DispatchQueue) in your Swift project/playground and command-click it, then browse around the module from there. (You can find the Swift Dispatch API in Apple's spiffy new API Reference website and in-Xcode doc viewer, but it looks like the doc content from the C version hasn't moved into it just yet.)
See the Migration Guide for more tips.
This does not work in Xcode 8 beta 4...
Use:
DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
print("Are we there yet?")
}
Two ways to dispatch async:
DispatchQueue.main.async {
print("Async1")
}
DispatchQueue.main.async( execute: {
print("Async2")
})
Here is a good example of async in Swift 4:
DispatchQueue.global(qos: .background).async {
// Background Thread
DispatchQueue.main.async {
// Run UI Updates or call completion block
}
}
in Xcode 8 use:
DispatchQueue.global(qos: .userInitiated).async { }
Swift 5.2, 4 and later
Main and Background Queues
let main = DispatchQueue.main
let background = DispatchQueue.global()
let helper = DispatchQueue(label: "another_thread")
Working with async and sync threads!
background.async {
// async tasks here
}
background.sync {
// sync tasks here
}
Async tasks run alongside the calling thread.
Sync tasks block the calling thread (the main thread, if called from there) until they finish executing.
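A tiny sketch makes the difference observable (the log array is hypothetical bookkeeping): a sync call cannot return before its closure has run, while an async call lets the caller continue immediately.

```swift
import Dispatch

let background = DispatchQueue.global()
let lock = DispatchQueue(label: "log.lock")  // serializes appends to log
let group = DispatchGroup()
var log: [String] = []

// sync: the caller is blocked until this closure has finished
background.sync { lock.sync { log.append("sync task") } }
lock.sync { log.append("after sync") }

// async: the caller continues immediately; we wait on a group to observe it
background.async(group: group) { lock.sync { log.append("async task") } }
group.wait()
```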
Swift 4.1 and 5. We use queues in many places in our code, so I created a Threads class with all the queues. If you don't want to use the Threads class, you can copy the desired queue code from the class methods.
class Threads {
static let concurrentQueue = DispatchQueue(label: "AppNameConcurrentQueue", attributes: .concurrent)
static let serialQueue = DispatchQueue(label: "AppNameSerialQueue")
// Main Queue
class func performTaskInMainQueue(task: @escaping ()->()) {
DispatchQueue.main.async {
task()
}
}
// Background Queue
class func performTaskInBackground(task: @escaping () throws -> ()) {
DispatchQueue.global(qos: .background).async {
do {
try task()
} catch let error as NSError {
print("error in background thread:\(error.localizedDescription)")
}
}
}
// Concurrent Queue
class func performTaskInConcurrentQueue(task: @escaping () throws -> ()) {
concurrentQueue.async {
do {
try task()
} catch let error as NSError {
print("error in Concurrent Queue:\(error.localizedDescription)")
}
}
}
// Serial Queue
class func performTaskInSerialQueue(task: @escaping () throws -> ()) {
serialQueue.async {
do {
try task()
} catch let error as NSError {
print("error in Serial Queue:\(error.localizedDescription)")
}
}
}
// Perform task afterDelay
class func performTaskAfterDelay(_ timeInterval: TimeInterval, _ task: @escaping () -> ()) {
DispatchQueue.main.asyncAfter(deadline: (.now() + timeInterval)) {
task()
}
}
}
Example showing the use of main queue.
override func viewDidLoad() {
super.viewDidLoad()
Threads.performTaskInMainQueue {
//Update UI
}
}