I feel that I've always misunderstood when reference cycles are created. I used to think that almost anywhere you have a block and the compiler forces you to write .self, it's a sign that I'm creating a reference cycle and need to use [weak self] in.
But the following setup doesn't create a reference cycle.
import Foundation
import PlaygroundSupport

PlaygroundPage.current.needsIndefiniteExecution = true

class UsingQueue {
    var property: Int = 5
    var queue: DispatchQueue? = DispatchQueue(label: "myQueue")

    func enqueue3() {
        print("enqueued")
        queue?.asyncAfter(deadline: .now() + 3) {
            print(self.property)
        }
    }

    deinit {
        print("UsingQueue deinited")
    }
}
var u : UsingQueue? = UsingQueue()
u?.enqueue3()
u = nil
The block only retains self for 3 seconds, then releases it. If I use async instead of asyncAfter, the release is almost immediate.
From what I understand the setup here is:
self ---> queue
self <--- block
The queue is merely a shell/wrapper for the block. Which is why even if I nil the queue, the block will continue its execution. They’re independent.
So is there any setup that only uses queues and creates reference cycles?
From what I understand, [weak self] is only to be used for reasons other than reference cycles, i.e., to control the flow of the block. For example:
Do you want to retain the object, run your block, and then release it? A real scenario would be finishing a transaction even though the view has been removed from the screen.
Or do you want to use [weak self] in so that you can exit early if your object has been deallocated? e.g., some purely UI work, like stopping a loading spinner, is no longer needed.
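For instance, here is a minimal sketch of that early-exit pattern; the view controller and spinner are just illustrative stand-ins, not taken from the code above:

import UIKit

class SpinnerViewController: UIViewController {
    let spinner = UIActivityIndicatorView(style: .medium)

    func stopSpinnerLater() {
        DispatchQueue.global().asyncAfter(deadline: .now() + 2) { [weak self] in
            DispatchQueue.main.async {
                // Exit early: if the view controller was dismissed in the meantime,
                // there is no spinner left worth stopping.
                guard let self = self else { return }
                self.spinner.stopAnimating()
            }
        }
    }
}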
FWIW, I understand that if I store a closure then things are different, i.e., if I do:
import PlaygroundSupport
import Foundation

PlaygroundPage.current.needsIndefiniteExecution = true

class UsingClosure {
    var property: Int = 5
    var closure: (() -> Void)?

    func closing() {
        closure = {
            print(self.property)
        }
    }

    func execute() {
        closure!()
    }

    func release() {
        closure = nil
    }

    deinit {
        print("UsingClosure deinited")
    }
}
var cc : UsingClosure? = UsingClosure()
cc?.closing()
cc?.execute()
cc?.release() // Either this needs to be called or I need to use [weak self] for the closure otherwise there is a reference cycle
cc = nil
In the closure example the setup is more like:
self ----> block
self <--- block
Hence it's a reference cycle, and nothing deallocates unless I set the capturing closure to nil.
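For what it's worth, that cycle can also be broken with [weak self] instead of manually nilling the closure; a minimal sketch (UsingClosureWeak is just an illustrative variant of the class above):

class UsingClosureWeak {
    var property: Int = 5
    var closure: (() -> Void)?

    func closing() {
        closure = { [weak self] in
            // The stored closure no longer retains self, so no cycle forms.
            guard let self = self else { return }
            print(self.property)
        }
    }

    deinit {
        print("UsingClosureWeak deinited")
    }
}

var w: UsingClosureWeak? = UsingClosureWeak()
w?.closing()
w = nil   // prints "UsingClosureWeak deinited" without ever calling release()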
EDIT:
class C {
    var item: DispatchWorkItem!
    var name: String = "Alpha"

    func assignItem() {
        item = DispatchWorkItem { // Oops!
            print(self.name)
        }
    }

    func execute() {
        DispatchQueue.main.asyncAfter(deadline: .now() + 1, execute: item)
    }

    deinit {
        print("deinit hit!")
    }
}
With the above code, I was able to create a leak, i.e., in Xcode's memory graph I see a cycle, not a straight line, and I get the purple indicators. I think this setup is very much like how a stored closure creates leaks. And this is different from your two examples, where execution never finishes. In this example execution finishes, but because of the references it remains in memory.
I think the reference is something like this:
┌──────────┐─────────────self.item──────────────▶┌──────────┐
│   self   │                                      │ workItem │
└──────────┘◀────item = DispatchWorkItem {...}────└──────────┘
You say:
From what I understand the setup here is:
self ---> queue
self <--- block
The queue is merely a shell/wrapper for the block. Which is why even if I nil the queue, the block will continue its execution. They’re independent.
The fact that self happens to have a strong reference to the queue is inconsequential. A better way of thinking about it is that GCD, itself, keeps a reference to all dispatch queues on which there is anything queued. (It’s analogous to a custom URLSession instance that won’t be deallocated until all tasks on that session are done.)
So, GCD keeps a reference to any queue with dispatched tasks. The queue keeps a strong reference to the dispatched blocks/items. The queued blocks keep strong references to any reference types they capture. When a dispatched task finishes, it releases its strong references to any captured reference types and is removed from the queue (unless you keep your own reference to it elsewhere), generally thereby resolving any strong reference cycles.
Setting that aside, where the absence of [weak self] can get you into trouble is where GCD keeps a reference to the block for some reason, such as dispatch sources. The classic example is the repeating timer:
class Ticker {
    private var timer: DispatchSourceTimer?

    func startTicker() {
        let queue = DispatchQueue(label: Bundle.main.bundleIdentifier! + ".ticker")
        timer = DispatchSource.makeTimerSource(queue: queue)
        timer!.schedule(deadline: .now(), repeating: 1)
        timer!.setEventHandler { // whoops; missing `[weak self]`
            self.tick()
        }
        timer!.resume()
    }

    func tick() { ... }
}
Even if the view controller in which I started the above timer is dismissed, GCD keeps firing this timer and Ticker won’t be released. As the “Debug Memory Graph” feature shows, the block created in the startTicker routine keeps a persistent strong reference to the Ticker object.
This is obviously resolved if I use [weak self] in the block used as the event handler for the timer scheduled on that dispatch queue.
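For reference, a sketch of the corrected startTicker, where the weak capture is the only change from the code above (the body of tick is a stand-in):

import Foundation

class Ticker {
    private var timer: DispatchSourceTimer?

    func startTicker() {
        let queue = DispatchQueue(label: Bundle.main.bundleIdentifier! + ".ticker")
        timer = DispatchSource.makeTimerSource(queue: queue)
        timer!.schedule(deadline: .now(), repeating: 1)
        timer!.setEventHandler { [weak self] in   // weak capture; Ticker can now be released
            self?.tick()
        }
        timer!.resume()
    }

    func tick() { print("tick") }   // stand-in for the real work
}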
Other scenarios include a slow (or indefinite length) dispatched task, where you want to cancel it (e.g., in the deinit):
class Calculator {
    private var item: DispatchWorkItem!

    deinit {
        item?.cancel()
        item = nil
    }

    func startCalculation() {
        let queue = DispatchQueue(label: Bundle.main.bundleIdentifier! + ".calcs")
        item = DispatchWorkItem { // whoops; missing `[weak self]`
            while true {
                if self.item?.isCancelled ?? true { break }
                self.calculateNextDataPoint()
            }
            self.item = nil
        }
        queue.async(execute: item)
    }

    func calculateNextDataPoint() {
        // some intense calculation here
    }
}
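And, for completeness, a sketch of the weak-capture variant of that work item, in which deinit can actually run and cancel the work:

class Calculator {
    private var item: DispatchWorkItem!

    deinit {
        item?.cancel()
        item = nil
    }

    func startCalculation() {
        let queue = DispatchQueue(label: Bundle.main.bundleIdentifier! + ".calcs")
        item = DispatchWorkItem { [weak self] in
            while true {
                // A deallocated self is treated the same as a cancelled item.
                if self?.item?.isCancelled ?? true { break }
                self?.calculateNextDataPoint()
            }
            self?.item = nil
        }
        queue.async(execute: item)
    }

    func calculateNextDataPoint() {
        // some intense calculation here
    }
}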
All of that having been said, in the vast majority of GCD use cases, the choice of [weak self] is not about strong reference cycles, but merely about whether we mind the strong reference to self persisting until the task is done.
If we’re just going to update the UI when the task is done, there’s no need to keep the view controller and its views in the hierarchy waiting for some UI update if the view controller has been dismissed.
If we need to update the data store when the task is done, then we definitely don’t want to use [weak self] if we want to make sure that update happens.
Frequently, the dispatched tasks aren’t consequential enough to worry about the lifespan of self. For example, you might have a URLSession completion handler dispatch UI update back to the main queue when the request is done. Sure, we theoretically would want [weak self] (as there’s no reason to keep the view hierarchy around for a view controller that’s been dismissed), but then again that adds noise to our code, often with little material benefit.
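For instance, a typical sketch of that pattern; the view controller, label, and URL here are placeholders, not anything from the examples above:

import UIKit

class ProfileViewController: UIViewController {
    @IBOutlet weak var nameLabel: UILabel!

    func loadProfile() {
        let url = URL(string: "https://example.com/profile")!   // placeholder URL
        URLSession.shared.dataTask(with: url) { [weak self] data, _, _ in
            guard let data = data, let name = String(data: data, encoding: .utf8) else { return }
            DispatchQueue.main.async {
                // If the view controller was dismissed, self is nil and the UI update is skipped.
                self?.nameLabel.text = name
            }
        }.resume()
    }
}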
Unrelated, but playgrounds are a horrible place to test memory behavior because they have their own idiosyncrasies. It’s much better to do it in an actual app. Plus, in an actual app, you then have the “Debug Memory Graph” feature where you can see the actual strong references. See https://stackoverflow.com/a/30993476/1271826.
Related
I have a Swift class that contains an instance of AVAudioEngine and I and making use of the AVAudioEngineConfigurationChange notification like so:
import AVFoundation

class Demonstration: NSObject {
    var engine: AVAudioEngine? = AVAudioEngine()
    // ...

    override init() {
        super.init()
        // ...
        NotificationCenter.default.addObserver(self,
                                               selector: #selector(self.handleEngineConfigChange(_:)),
                                               name: .AVAudioEngineConfigurationChange,
                                               object: nil)
    }

    @objc func handleEngineConfigChange(_ notification: Notification) {
        // what can I wrap this code with in order to make it not dangerous?
        // DispatchQueue.main.sync?
        engine = nil
    }
}
In the docs it says:
Don’t deallocate the engine from within the client’s notification handler. The callback happens on an internal dispatch queue and can deadlock while trying to tear down the engine synchronously.
I don't even really know what they mean by "deallocate" -- whether it means calling some method like engine.reset() or engine.stop(), or setting the engine to nil, or whether it only applies to Objective-C.
At any rate, I would just like to know how to set up the method so that in the future I don't have to worry about breaking things.
You can move this to the next iteration of the run loop by using async (not sync) to dispatch to whatever queue you are managing your engine on (probably the main queue). The important thing is that you apply the changes after returning from this callback.
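Applied to the Demonstration class from the question, that might look like this (a sketch, assuming the engine is otherwise managed on the main queue):

import AVFoundation

class Demonstration: NSObject {
    var engine: AVAudioEngine? = AVAudioEngine()

    override init() {
        super.init()
        NotificationCenter.default.addObserver(self,
                                               selector: #selector(handleEngineConfigChange(_:)),
                                               name: .AVAudioEngineConfigurationChange,
                                               object: nil)
    }

    @objc func handleEngineConfigChange(_ notification: Notification) {
        // Hop off the engine's internal queue; the teardown happens on the
        // main queue only after this notification handler has returned.
        DispatchQueue.main.async { [weak self] in
            self?.engine = nil
        }
    }
}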
Could someone explain why I get this warning: Publishing changes from background threads is not allowed; make sure to publish values from the main thread (via operators like receive(on:)) on model updates.
I know that if I wrap the changes in DispatchQueue.main.async the problem goes away. Why does it happen with some view models and not others? I thought that since the variable has @Published it's automatically a publisher on the main thread?
class VM: ObservableObject {
    private let contactsRepo = ContactsCollection()
    @Published var mutuals: [String]?

    func fetch() {
        contactsRepo.findMutuals(uid: uid, otherUid: other_uid, limit: 4) { [weak self] mutuals in
            guard let self = self else { return }
            if mutuals != nil {
                self.mutuals = mutuals // warning...
            } else {
                self.mutuals = []
            }
        }
    }
}
Evidently, contactsRepo.findMutuals can call its completion handler on a background thread. You need to ward that off by getting back onto the main thread.
The @Published property wrapper creates a publisher of the declared type, nothing more. The documentation may be able to provide further clarity.
As for it happening on some view models and not others, we wouldn't be able to tell here as we don't have the code. However, it's always best practice to use a DispatchQueue.main.async block, or the .receive(on: DispatchQueue.main) operator for Combine, when updating your UI, as you've already figured out.
The chances are your other view model is already using the main thread, or its properties aren't being used to update the UI; again, without the code we'll never be sure.
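A minimal self-contained sketch of that fix; the findMutuals here is just a local stand-in for the repository call from the question:

import Combine
import Foundation

class VM: ObservableObject {
    @Published var mutuals: [String]?

    // Stand-in for a repository call that invokes its completion on a background queue.
    func findMutuals(_ completion: @escaping ([String]?) -> Void) {
        DispatchQueue.global().async { completion(["a", "b"]) }
    }

    func fetch() {
        findMutuals { [weak self] mutuals in
            DispatchQueue.main.async {
                // Publish from the main thread, as the runtime warning asks.
                self?.mutuals = mutuals ?? []
            }
        }
    }
}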
I have an observable inside a function.
The function happens in a certain queue, queueA, and the observable is subscribed to with observeOn(schedulerB). In onNext, I'm changing a class variable.
In another function, I'm changing the same class variable, from a different queue.
Here is some code to demonstrate my situation:
import RxSwift

class SomeClass {
    var commonResource: [String: String] = [:]
    var queueA = DispatchQueue(label: "A")
    var queueB = DispatchQueue(label: "B")
    lazy var schedulerB = ConcurrentDispatchQueueScheduler(queue: self.queueB)

    func writeToResourceInOnNext() {
        let obs: PublishSubject<String> = OtherClass.GetObservable()
        obs.observeOn(schedulerB)
            .subscribe(onNext: { [weak self] res in
                // this happens on queue B
                self?.commonResource["key"] = res
            })
    }

    func writeToResource() {
        // this happens on queue A
        commonResource["key"] = "otherValue"
    }
}
My question is, is it likely to have concurrency issues, if commonResource is modified in both places at the same time?
What is the common practice for writing/reading from class/global variables inside onNext in an observable with observeOn?
Thanks all!
Since your SomeClass has no control over when these functions will be called or on what threads, the answer is yes: you are set up to have concurrency issues in this code due to its passive nature.
The obvious solution here is to dispatch to queue B inside writeToResource() in order to avoid the race condition.
Another option would be to use an NSLock (or NSRecursiveLock) and lock it before you write to the resource and unlock it after.
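Roughly, those two options would look like this; SomeClassLocked is an illustrative trimmed-down version of the class above, and you would pick one approach rather than mixing them:

import Foundation

class SomeClassLocked {
    private var commonResource: [String: String] = [:]
    private let queueB = DispatchQueue(label: "B")
    private let lock = NSLock()

    // Option 1: funnel every write through queue B, so all mutations share one queue.
    func writeToResource() {
        queueB.async { [weak self] in
            self?.commonResource["key"] = "otherValue"
        }
    }

    // Option 2: guard the dictionary with a lock around every read and write.
    func writeToResourceLocked() {
        lock.lock()
        commonResource["key"] = "otherValue"
        lock.unlock()
    }
}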
The best practice is: when you have a side effect happening inside a subscribe function's closure (in this case, writing to commonResource), that closure should be the only place where the side effect occurs. This would mean doing away with the passive writeToResource() function and instead passing in an Observable generated by whatever code is currently calling that function.
I have a process which runs for a long time and which I would like the ability to interrupt.
func longProcess(shouldAbort: @escaping () -> Bool) {
    // Runs a long loop and periodically checks shouldAbort(),
    // returning early if shouldAbort() returns true
}
Here's my class which uses it:
class Example {
    private var abortFlag: NSObject? = .init()
    private var dispatchQueue: DispatchQueue = .init(label: "Example")

    func startProcess() {
        let shouldAbort: () -> Bool = { [weak abortFlag] in
            return abortFlag == nil
        }
        dispatchQueue.async {
            longProcess(shouldAbort: shouldAbort)
        }
    }

    func abortProcess() {
        self.abortFlag = nil
    }
}
The shouldAbort closure captures a weak reference to abortFlag, and checks whether that reference points to nil or to an NSObject. Since the reference is weak, if the original NSObject is deallocated then the reference that is captured by the closure will suddenly be nil and the closure will start returning true. The closure will be called repeatedly during the longProcess function, which is occurring on the private dispatchQueue. The abortProcess method on the Example class will be externally called from some other queue.
What if someone calls abortProcess(), thereby deallocating abortFlag, at the exact same time that longProcess is trying to perform the check to see if abortFlag has been deallocated yet? Is checking myWeakReference == nil a thread-safe operation?
You can create the dispatched task as a DispatchWorkItem, which already has a thread-safe isCancelled property. You can then dispatch that DispatchWorkItem to a queue and have it periodically check isCancelled, and simply cancel the work item at whatever point you want to stop it.
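Applied to the Example class from the question, that might look something like this; the sleep is just a stand-in for a slice of the long-running work:

import Foundation

class Example {
    private var item: DispatchWorkItem?
    private let dispatchQueue = DispatchQueue(label: "Example")

    func startProcess() {
        let newItem = DispatchWorkItem { [weak self] in
            // Keep working until the item is cancelled (or Example itself is gone).
            while !(self?.item?.isCancelled ?? true) {
                Thread.sleep(forTimeInterval: 0.1)   // stand-in for a slice of the long work
            }
        }
        item = newItem
        dispatchQueue.async(execute: newItem)
    }

    func abortProcess() {
        item?.cancel()
    }
}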
Alternatively, when trying to wrap some work in an object, we’d often use Operation, instead, which encapsulates the task in its own class quite nicely:
class SomeLongOperation: Operation {
    override func main() {
        // Runs a long loop and periodically checks `isCancelled`
        while !isCancelled {
            Thread.sleep(forTimeInterval: 0.1)
            print("tick")
        }
    }
}
And to create queue and add the operation to that queue:
let queue = OperationQueue()
let operation = SomeLongOperation()
queue.addOperation(operation)
And to cancel the operation:
operation.cancel()
Or
queue.cancelAllOperations()
Bottom line, whether you use Operation (which is, frankly, the “go-to” solution for wrapping some task in its own object) or roll your own with DispatchWorkItem, the idea is the same: you don’t need your own state property to detect cancellation of the task. Both dispatch queues and operation queues already have nice mechanisms to simplify this process for you.
I saw this bug (Weak properties are not thread safe when reading SR-192) indicating that weak reference reads weren't thread safe, but it has been fixed, which suggests that (absent any bugs in the runtime), weak reference reads are intended to be thread safe.
Also interesting: Friday Q&A 2017-09-22: Swift 4 Weak References by Mike Ash
I have this code to add an NSOperation instance to a queue:
let operation = NSBlockOperation()
operation.addExecutionBlock({
    self.asyncMethod() { (result, error) in
        if operation.cancelled {
            return
        }
        // etc
    }
})
operationQueue.addOperation(operation)
When the user leaves the view that triggered the above code, I cancel the operation by doing:
operationQueue.cancelAllOperations()
When testing cancellation, I'm 100% sure cancel is executing before the async method returns, so I expect operation.cancelled to be true. Unfortunately this is not happening, and I'm not able to figure out why.
I'm executing cancellation on viewWillDisappear
EDIT
asyncMethod contains a network operation that runs on a different thread. That's why the callback is there: to handle the network operation's result. The network operation is performed deep in the class hierarchy, but I want to handle NSOperations at the root level.
From the documentation for NSOperation's cancelled property:
Calling the cancel method of this object sets the value of this property to YES. Once canceled, an operation must move to the finished state.
Canceling an operation does not actively stop the receiver’s code from executing. An operation object is responsible for calling this method periodically and stopping itself if the method returns YES.
You should always check the value of this property before doing any work towards accomplishing the operation’s task, which typically means checking it at the beginning of your custom main method. It is possible for an operation to be cancelled before it begins executing or at any time while it is executing. Therefore, checking the value at the beginning of your main method (and periodically throughout that method) lets you exit as quickly as possible when an operation is cancelled.
import Foundation

let operation1 = NSBlockOperation()
let operation2 = NSBlockOperation()
let queue = NSOperationQueue()

operation1.addExecutionBlock { () -> Void in
    repeat {
        usleep(10000)
        print(".", terminator: "")
    } while !operation1.cancelled
}

operation2.addExecutionBlock { () -> Void in
    repeat {
        usleep(15000)
        print("-", terminator: "")
    } while !operation2.cancelled
}

queue.addOperation(operation1)
queue.addOperation(operation2)

sleep(1)
queue.cancelAllOperations()
Try this simple example in a playground.
If it is really important to run other asynchronous code, try this:
operation.addExecutionBlock({
    if operation.cancelled {
        return
    }
    self.asyncMethod() { (result, error) in
        // etc
    }
})
It's because you're doing the work in the wrong order: you cancel the operation after it has already executed. Check this code; the blocks are executed on a background thread. Cancelling an operation before its execution starts removes it from the queue, which is why only the second block runs below.
Swift 4
let operationQueue = OperationQueue()
operationQueue.qualityOfService = .background

let ob1 = BlockOperation {
    print("ExecutionBlock 1. Executed!")
}
let ob2 = BlockOperation {
    print("ExecutionBlock 2. Executed!")
}

operationQueue.addOperation(ob1)
operationQueue.addOperation(ob2)
ob1.cancel()

// ExecutionBlock 2. Executed!
Swift 2
let operationQueue = NSOperationQueue()
operationQueue.qualityOfService = .Background

let ob1 = NSBlockOperation()
ob1.addExecutionBlock {
    print("ExecutionBlock 1. Executed!")
}
let ob2 = NSBlockOperation()
ob2.addExecutionBlock {
    print("ExecutionBlock 2. Executed!")
}

operationQueue.addOperation(ob1)
operationQueue.addOperation(ob2)
ob1.cancel()

// ExecutionBlock 2. Executed!
The Operation does not wait for your asyncMethod to finish; it returns immediately after you add it to the queue, because you wrapped an asynchronous network call inside an NSOperation block that itself returns right away.
NSOperation is designed to give you more advanced background handling than just calling performSelectorInBackground. It is meant for moving complex and long-running operations into the background without blocking the main thread. A good article on typical NSOperation usage can be found here:
http://www.raywenderlich.com/19788/how-to-use-nsoperations-and-nsoperationqueues
For your particular use case, it does not make sense to use an NSOperation here; instead, you should just cancel your running network request.
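In other words, keep a reference to the URLSessionTask itself and cancel that; a rough sketch, with the view controller and URL as placeholders:

import UIKit

class ListViewController: UIViewController {
    private var task: URLSessionDataTask?

    func load() {
        let url = URL(string: "https://example.com/data")!   // placeholder URL
        task = URLSession.shared.dataTask(with: url) { [weak self] data, _, _ in
            guard let data = data else { return }   // a cancelled task completes with an error and no data
            DispatchQueue.main.async {
                self?.show(data)
            }
        }
        task?.resume()
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        task?.cancel()   // cancel the request itself rather than a wrapping NSOperation
    }

    func show(_ data: Data) {
        // update the UI with the received data
    }
}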
It does not make sense to put an asynchronous function into a block with NSBlockOperation. What you probably want is a proper subclass of NSOperation, set up as a concurrent operation, which executes an asynchronous workload. Subclassing NSOperation correctly is, however, not as easy as it should be.
You may take a look at this reusable subclass for NSOperation for an example implementation.
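To give a rough idea of what such a concurrent subclass involves (the KVO bookkeeping around isExecuting/isFinished is the fiddly part), here is a minimal sketch in modern Swift; AsyncNetworkOperation and the URL are illustrative, not taken from the linked example:

import Foundation

class AsyncNetworkOperation: Operation {
    private let url: URL
    private var task: URLSessionDataTask?

    private var _executing = false
    private var _finished = false

    override var isAsynchronous: Bool { return true }
    override var isExecuting: Bool { return _executing }
    override var isFinished: Bool { return _finished }

    init(url: URL) {
        self.url = url
        super.init()
    }

    override func start() {
        if isCancelled {
            finish()
            return
        }
        willChangeValue(forKey: "isExecuting")
        _executing = true
        didChangeValue(forKey: "isExecuting")

        task = URLSession.shared.dataTask(with: url) { [weak self] _, _, _ in
            // handle the response here, then mark the operation finished
            self?.finish()
        }
        task?.resume()
    }

    override func cancel() {
        super.cancel()
        task?.cancel()
    }

    private func finish() {
        willChangeValue(forKey: "isExecuting")
        willChangeValue(forKey: "isFinished")
        _executing = false
        _finished = true
        didChangeValue(forKey: "isExecuting")
        didChangeValue(forKey: "isFinished")
    }
}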
I am not 100% sure what you are looking for, but maybe what you need is to pass the operation, as a parameter, into asyncMethod() and test for the cancelled state in there?
operation.addExecutionBlock({
    asyncMethod(operation) { (result, error) in
        // Result code
    }
})
operationQueue.addOperation(operation)

func asyncMethod(operation: NSBlockOperation, fun: ((Any, Any) -> Void)) {
    // Do stuff...
    if operation.cancelled {
        // Do something...
        return // <- Or whatever makes sense
    }
}