How does retain count with synchronous dispatch work? - swift

I'm trying to explain ownership of objects and how GCD does its work.
These are the things I've learned:
a function will increase the retain count of the object it's called on
a dispatch block will increase the count unless it captures self weakly
after a dispatched block is executed, it releases the captured object, so the retain count of self should decrease
But that's not what I'm seeing here. Why is that?
class C {
    var name = "Adam"

    func foo() {
        print("inside func before sync", CFGetRetainCount(self)) // 3
        DispatchQueue.global().sync {
            print("inside func inside sync", CFGetRetainCount(self)) // 4
        }
        sleep(2)
        print("inside func after sync", CFGetRetainCount(self)) // 4 ?????? I thought this would go back to 3
    }
}
Usage:
var c: C? = C()
print("before func call", CFGetRetainCount(c)) // 2
c?.foo()
print("after func call", CFGetRetainCount(c)) // 2

A couple of thoughts:
If you ever have questions about precisely where ARC is retaining and releasing behind the scenes, add a breakpoint after “inside func after sync”, run, and when it stops use “Debug” » “Debug Workflow” » “Always Show Disassembly” to see precisely what's going on at the assembly level. I'd also suggest doing this with release/optimized builds.
Looking at the assembly, the releases are at the end of your foo method.
As you pointed out, if you change your DispatchQueue.global().sync call to be async, you see the behavior you’d expect.
Also, unsurprisingly, if you perform functional decomposition, moving the GCD sync call into a separate function, you’ll again see the behavior you were expecting.
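For reference, here is a sketch of that decomposition (the helper name performSync is made up for illustration); per the observation above, the extra retain taken for the sync closure is gone by the time the helper returns:
import Foundation

class C {
    var name = "Adam"

    func foo() {
        print("inside func before sync", CFGetRetainCount(self))
        performSync()   // the closure's extra retain is released by the time this returns
        sleep(2)
        print("inside func after sync", CFGetRetainCount(self))
    }

    private func performSync() {
        DispatchQueue.global().sync {
            print("inside func inside sync", CFGetRetainCount(self))
        }
    }
}
The exact numbers you see still depend on the build configuration, which is why inspecting the optimized assembly is the more reliable check.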
You said:
a function will increase the retain count of the object it's called on
Just to clarify what’s going on, I’d refer you to WWDC 2018 What’s New in Swift, about 12:43 into the video, in which they discuss where the compiler inserts the retain and release calls, and how it changed in Swift 4.2.
In Swift 4.1, it used the “Owned” calling convention where the caller would retain the object before calling the function, and the called function was responsible for performing the release before returning.
In 4.2, they implemented a “Guaranteed” calling convention, eliminating a lot of redundant retain and release calls.
This results, in optimized builds at least, in more efficient and more compact code. So, do a release build and look at the assembly, and you’ll see that in action.
Now we come to the root of your question: why the GCD sync call behaves differently, i.e. why its release call is inserted in a different place than in other scenarios with non-escaping closures.
It seems that this is potentially related to optimizations unique to GCD sync. Specifically, when you dispatch synchronously to a background queue, rather than blocking the current thread and running the code on one of the worker threads of the designated queue, GCD is smart enough to determine that the current thread would otherwise be idle, so it will just run the dispatched block on the current thread if it can. I can easily imagine that this GCD sync optimization introduced wrinkles in the logic for where the compiler inserts the release call.
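You can see that optimization at work by logging the current thread (a quick sketch; the exact thread you see is not guaranteed):
import Foundation

print("before sync:", Thread.current)
DispatchQueue.global().sync {
    // With a synchronous dispatch, GCD will often run the block
    // on the calling thread rather than on a worker thread.
    print("inside sync:", Thread.current)
}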
IMHO, the fact that the release is done at the end of the method as opposed to at the end of the closure is a somewhat academic matter. I’m assuming they had good reasons (or practical reasons at least), to defer this to the end of the function. What’s important is that when you return from foo, the retain count is what it should be.

Related

When to use [self] vs [weak self] in swift blocks?

[self] is a new term we can use in closures to avoid having to write the self keyword. How is this different from [weak self]? Does [self] take care of retain cycles?
I couldn't find much info on this, so any simple example with an explanation would be highly appreciated.
[self] indicates that self is intentionally held with a strong reference (and, in return, you can drop the self. prefix inside the closure). [weak self] indicates that self is held with a weak reference.
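As a minimal illustration (the type name ViewModel is made up here; [self] with implicit member access requires Swift 5.3 or later):
import Foundation

class ViewModel {
    let name = "example"

    func strongCapture() {
        // [self] captures a strong reference; members can be used without the self. prefix.
        DispatchQueue.global().async { [self] in
            print(name)
        }
    }

    func weakCapture() {
        // [weak self] captures an optional weak reference; the instance may already be gone when this runs.
        DispatchQueue.global().async { [weak self] in
            guard let self = self else { return }
            print(self.name)
        }
    }
}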
why would I use a strong capture [self] inside a block when there is a chance of a memory leak?
You would use this when you know there is no reference cycle, or when you wish there to be a temporary reference cycle. Capturing self does not by itself create a cycle. There has to be a cycle. You may know from your code that there isn't. For example, the thing holding the closure may be held by some other object (rather than self). Good composition (and decomposition of complex types into smaller types) can easily lead to this.
Alternatively, you may want a temporary cycle. The most common case of this is URLSessionTask. The docs are very valuable here (emphasis added):
After you create a task, you start it by calling its resume() method. The session then maintains a strong reference to the task until the request finishes or fails; you don’t need to maintain a reference to the task unless it’s useful for your app’s internal bookkeeping.
Another common example is DispatchQueue, which similarly holds onto a closure until it finishes. At that point, it releases it, killing the cycle and allowing everything to deallocate. This is useful and powerful (and common!), when used with intent. It's a source of bugs when used accidentally. So Swift requires you to state your intentions and tries to make the situation explicit.
When you build your own types that retain completion handlers, you should strongly consider this pattern, too. After calling the completion handler, set it to nil (or {_ in } for non-optionals) to release anything that completion handler might be referencing.
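A sketch of that pattern (the type and member names here are made up):
import Foundation

class Uploader {
    private var completion: ((Result<Data, Error>) -> Void)?

    func start(completion: @escaping (Result<Data, Error>) -> Void) {
        self.completion = completion   // temporary cycle if the caller captured self strongly
        // ... kick off the work ...
    }

    private func finish(with result: Result<Data, Error>) {
        completion?(result)
        completion = nil               // break the temporary cycle once the work is done
    }
}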
One frustrating effect of the current situation is that developers slap [weak self] onto closures without thought. That is the opposite of what was intended. Seeing self was supposed to cause developers to pause and think about the reference graph. I'm not certain it ever really achieved this, but as a Swift programmer you should understand that this is the intent. It's not just random syntax.

How do I interpret swift (or general) closure in call stack in assembly?

class Assembly {
    func getThis(value: Int) -> () -> () {
        var a = value
        return {
            a += 1
            print(a)
        }
    }
}
let assembly = Assembly()
let first = assembly.getThis(value: 5)
first()
first()
let second = assembly.getThis(value: 0)
second()
second()
first()
second()
prints:
6
7
1
2
8
3
I understand this is expected, and I just want to understand how it works. I know the basics of how the stack works in assembly. I think that when getThis is called, a new stack frame is pushed and the local variable a is allocated in it and assigned a value.
But does this stack frame get popped? It looks like it does, because returning a value means the function has finished. On the other hand, if the stack frame is popped, the local variables in that frame are gone, which means a would be deallocated, and obviously that isn't the case.
So how does this whole thing work?
There is a very large "gap" between Swift code and assembly code. The transformation between the two is vast, so I recommend you don't think "Oh wait, that wouldn't work in assembly!", because compilers are much smarter than you might think.
What basically happens here is that the closure returned by getThis captures the local variable a. This means that a has, in a sense, "gone inside" the closure. The closure now has state. This happens whenever you use self or a local variable in a closure.
How is this achieved at a low level of abstraction? The compiler sees that a is captured by an escaping closure and promotes it from the stack to a heap-allocated box; the closure holds a reference to that box. Therefore, your statement about a getting deallocated when getThis returns is not true: a stays in memory because the closure retains the box that holds it.
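One way to make the boxing visible is to compare a normal capture (which shares the box) with a capture-list capture (which copies the value at the moment the closure is created). A sketch that is a variation on the example above, not the original code:
class Assembly {
    func getThis(value: Int) -> () -> () {
        var a = value
        // `a` is promoted from the stack to a heap-allocated box,
        // because an escaping closure captures it.
        let shared = {
            a += 1                  // mutates the boxed `a`
            print("shared:", a)
        }
        // A capture list copies the value of `a` when the closure is
        // created, instead of sharing the box.
        let copied = { [a] in
            print("copied:", a)
        }
        shared()   // for getThis(value: 5): prints "shared: 6"
        shared()   // prints "shared: 7"
        copied()   // prints "copied: 5" (the value at capture time)
        return shared
    }
}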
Anyway, how else would you want first to behave? Undefined behaviour? That doesn't sound Swifty. Resetting a to 0 every time you call first? That's no different from declaring a new variable in first, is it? At the end of the day, these "capturing" semantics are a useful thing to have in the language, so Swift was designed this way.

deinit not called in specific case

I have the following test case: I expect deinit to be called at program termination but it never is. I'm new to Swift but would not think this is expected behaviour. (this is not in a playground)
class Test {
    init() {
        print("init")
    }
    deinit {
        print("deinit")
    }
}

print("Starting app")
var test = Test()
print("Ending App")
the output is:
Starting app
init
Ending App
Program ended with exit code: 0
If I place the code in a function and then call the function I get expected results
Starting app
init
Ending App
deinit
Program ended with exit code: 0
Shouldn't deinit of the object be called at program termination?
I expect deinit to be called at program termination
You should not expect that. Objects that exist at program termination are generally not deallocated. Memory cleanup is left to the operating system (which frees all of the program's memory). This is a long-existing optimization in Cocoa to speed up program termination.
deinit is intended only to release resources (such as freeing memory that is not under ARC). There is no equivalent of a C++ destructor in ObjC or Swift. (C++ and Objective-C++ objects are destroyed during program termination, since this is required by spec.)
If I place the code in a function and then call the function I get expected results
Yes, this works because the lifetime of the test variable is bounded by the scope of the function. When that scope ends (i.e., the function's frame is popped off the stack), the lifetime of every local variable ends, unless something else holds a strong reference to the object it refers to.
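For completeness, here is the function-scoped variant the asker describes, using the Test class from the question (a sketch; in a debug build, test is released at the end of run()'s scope):
func run() {
    print("Starting app")
    let test = Test()   // strong reference scoped to this function
    _ = test            // silence the "unused variable" warning
    print("Ending App")
}

run()   // "deinit" prints when test goes out of scope at the end of run()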
Shouldn't deinit of the object be called at program termination
There is no way for an app to close itself gracefully and programmatically on iOS the way there is on Android. The ways an app can be closed are:
by the user swiping it up (terminating it) from the recent apps list,
by a deliberate crash,
or by the OS killing the app, for example due to low memory.
All of your app's memory is cleaned up by the OS when the app terminates, so deinit will not be called, and we cannot expect it to be. Note the word termination: the program does not end in an orderly way, so we can't expect the OS to do the last honors.
There is a similar question. Apple doesn't clearly document how deinit works beyond saying that it is called by ARC when the instance is deallocated. I think that when the app terminates, a different mechanism is used than the regular runtime deallocation of the class (or maybe the main window retains your instance). Either way, if you want to run code when the app terminates, use the application delegate (e.g. applicationWillTerminate).
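For example, cleanup that must happen at termination belongs in the app delegate rather than in deinit. A minimal sketch for an iOS app:
import UIKit

class AppDelegate: UIResponder, UIApplicationDelegate {
    func applicationWillTerminate(_ application: UIApplication) {
        // Put any must-run cleanup here; deinit of objects that are still
        // alive will not run when the process is terminated.
        // Note: the system does not call this on every termination path
        // (for example, if the app is killed while suspended).
    }
}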

Safely locking variable in Swift 3 using GCD

How do I lock a variable and prevent different threads from changing it at the same time, which leads to errors?
I tried using
func lock(obj: AnyObject, blk: () -> ()) {
    objc_sync_enter(obj)
    blk()
    objc_sync_exit(obj)
}
but I still have multithreading issues.
Shared Value
If you have a shared value that you want to access in a thread-safe way, like this
var list:[Int] = []
DispatchQueue
You can create your own serial DispatchQueue.
let serialQueue = DispatchQueue(label: "SerialQueue")
Dispatch Sync
Now different threads can safely access list; you just need to put the code that touches it into a closure dispatched to your serial queue.
serialQueue.sync {
    // update list <---
}
// This line will always run AFTER the closure on the previous line 👆👆👆
Since the serial queue executes the closures one at a time, access to list will be safe.
Please note that the previous code will block the current thread until the closure is executed.
Dispatch Async
If you don't want to block the current thread until the closure is processed by the serial queue, you can dispatch the closure asynchronously
serialQueue.async {
    // update list <---
}
// This line can run BEFORE the closure on the previous line 👆👆👆
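Putting both together, a minimal sketch of a wrapper type that serializes all access to the list (the name SynchronizedList is made up here):
import Foundation

final class SynchronizedList<Element> {
    private var storage: [Element] = []
    private let queue = DispatchQueue(label: "SynchronizedList.serial")

    func append(_ element: Element) {
        // async: writes don't block the caller
        queue.async { self.storage.append(element) }
    }

    var snapshot: [Element] {
        // sync: block just long enough to read a consistent copy
        return queue.sync { storage }
    }
}
If reads vastly outnumber writes, a common refinement is a concurrent queue with sync reads and .barrier writes.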
Swift's concurrency support isn't there yet. It sounds like it might be developed a bit in Swift 5. An excellent article is Matt Gallagher's Mutexes and Closure Capture in Swift, which looks at various solutions but recommends pthread_mutex_t. The choice of approach depends on other aspects of what you're writing - there's much to consider with threading.
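If you prefer a lock to a queue, here is a sketch using Foundation's NSLock (a simpler-to-use alternative to the pthread_mutex_t that article recommends; the Counter type is made up):
import Foundation

final class Counter {
    private let lock = NSLock()
    private var value = 0

    func increment() {
        lock.lock()
        defer { lock.unlock() }
        value += 1
    }

    var current: Int {
        lock.lock()
        defer { lock.unlock() }
        return value
    }
}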
Could you provide a specific simple example that's failing you?

If a function returns an UnsafeMutablePointer is it our responsibility to destroy and dealloc?

For example if I were to write this code:
var t = time_t()
time(&t)
let x = localtime(&t) // returns UnsafeMutablePointer<tm>
println("\(x.memory.tm_hour): \(x.memory.tm_min): \(x.memory.tm_sec)")
...would it also be necessary to also do the following?
x.destroy()
x.dealloc(1)
Or did we not allocate the memory and so therefore don't need to dismiss it?
Update #1:
If we imagine a function that returns an UnsafeMutablePointer:
func point() -> UnsafeMutablePointer<String> {
    let a = UnsafeMutablePointer<String>.alloc(1)
    a.initialize("Hello, world!")
    return a
}
Calling this function would result in a pointer to an object that will never be destroyed unless we do the dirty work ourselves.
The question I'm asking here: Is a pointer received from a localtime() call any different?
The simulator and the playground both enable us to send one dealloc(1) call to the returned pointer, but should we be doing this or is the deallocation going to happen for a returned pointer by some other method at a later point?
At the moment I'm erring towards the assumption that we do need to destroy and dealloc.
Update #1.1:
The last assumption was wrong. I don't need to release it, because I didn't create the object.
Update #2:
I received some answers to the same query on the Apple dev forums.
In general, the answer to your question is yes. If you receive a pointer to memory which you would be responsible for freeing in C, then you are still responsible for freeing it when calling from swift ... [But] in this particular case you need do nothing. (JQ)
the routine itself maintains static memory for the result and you do not need to free them. (it would probably be a "bad thing" if you did) ... In general, you cannot know if you need to free up something pointed to by an UnsafePointer.... it depends on where that pointer obtains its value. (ST)
UnsafePointer's dealloc() is not compatible with free(). Pair alloc() with dealloc() and malloc and co. with free(). As pointed out previously, the function you're calling should tell you whether it's your response to free the result ... destroy() is only necessary if you have non-trivial content* in the memory referred to by the pointer, such as a strong reference or a Swift struct or enum. In general, if it came from C, you probably don't need to destroy() it. (In fact, you probably shouldn't destroy() it, because it wasn't initialized by Swift.) ... * "non-trivial content" is not an official Swift term. I'm using it by analogy with the C++ notion of "trivially copyable" (though not necessarily "trivial"). (STE)
Final Update:
I've now written a blogpost outlining my findings and assumptions with regard to the release of unsafe pointers taking onboard info from StackOverflow, Apple Dev Forums, Twitter and Apple's old documentation on allocating memory and releasing it, pre-ARC. See here.
From the Swift library documentation for UnsafeMutablePointer<T>:
A pointer to an object of type T. This type provides no automated memory management, and therefore the user must take care to allocate and free memory appropriately.
The pointer can be in one of the following states:
memory is not allocated (for example, the pointer is null, or the memory has been deallocated previously);
memory is allocated, but the value has not been initialized;
memory is allocated and the value is initialized.
struct UnsafeMutablePointer<T> : RandomAccessIndexType, Hashable, NilLiteralConvertible { /**/}
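For reference, the full lifecycle when you do own the allocation looks like this in current Swift, where allocate/initialize/deinitialize/deallocate are the modern spellings of the alloc/initialize/destroy/dealloc calls above; pointers handed back by C functions such as localtime do not go through this:
let p = UnsafeMutablePointer<String>.allocate(capacity: 1)
p.initialize(to: "Hello, world!")   // memory allocated and value initialized
print(p.pointee)                    // use the value
p.deinitialize(count: 1)            // destroy the String stored in the memory
p.deallocate()                      // give the memory back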