BAD_ACCESS during recursive calls in Swift

While toying with Swift, I've encountered a situation that crashes and I still have not figured out why.
Let's define:
class TestClass {
    var iteration: Int = 0
    func tick() {
        if iteration > 100000 {
            print("Done!")
            return
        }
        iteration += 1   // originally iteration++ (the ++ operator was removed in Swift 3)
        tick()
    }
}
The tick() function calls itself and each time increments iteration. Any call of the type
let test = TestClass()
test.tick()
makes the program crash after a rather small number of recursions (around 50000 on my iMac) with an EXC_BAD_ACCESS error.
If I define a similar struct instead of a class, there is no crash (at least not within these limits). Note that when it crashes, the program uses only a few MB of RAM.
I can't yet explain why this crashes. Does anybody have an explanation? The callbackStorage value seems suspicious, but I haven't found any pointers on this.

In your program, each thread has something called a stack. A stack is a last-in, first-out (LIFO) data structure with two main operations: push, which adds an element to the top of the stack, and pop, which removes the item from the top of the stack.
When your program calls a function, it pushes the return address (the location in the caller's code to resume from) onto the stack, sometimes along with some of the function's arguments, then jumps to that function's code. (The function's local variables are also stored on the stack.) When the function returns, it pops the return address off the stack and jumps to it.
However, the stack has a limited size. In your example, your function calls itself so many times that there isn't enough room in the stack for all of the return addresses, arguments, and local variables. This is called a stack overflow (which is where this website got its name from). The program tries to write past the end of the stack, causing a segfault.
The reason why the program doesn't crash when you use a struct is likely, as dans3itz said, because classes have more overhead than structs.
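The overflow can be avoided entirely by expressing the same logic iteratively, so that no new stack frame is created per step. A minimal sketch based on the code in the question:
class TestClass {
    var iteration: Int = 0
    func tick() {
        // Same behaviour as the recursive version, but as a loop:
        // advance the counter until it exceeds the limit, then print once.
        while iteration <= 100000 {
            iteration += 1
        }
        print("Done!")
    }
}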

The runtime error that you are experiencing here is a stack overflow. The fact that you do not experience it when modifying the definition to use a struct does not mean it will not occur. Increase the iteration depth just slightly and you will also be able to achieve the same runtime error with the struct implementation. It will get to this point sooner with the class implementation because of the implicit arguments being passed around.
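If deep recursion is genuinely needed, another option is to run it on a thread with a larger stack. A sketch using Foundation's Thread (the stack size is in bytes, must be a multiple of 4 KB, and must be set before start()):
import Foundation

let test = TestClass()
let thread = Thread {
    test.tick()
}
thread.stackSize = 16 * 1024 * 1024   // 16 MB, much more headroom than the default secondary-thread stack
thread.start()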

Related

How do I interpret a Swift (or general) closure in the call stack in assembly?

class Assembly {
    func getThis(value: Int) -> () -> () {
        var a = value
        return {
            a += 1
            print(a)
        }
    }
}
let assembly = Assembly()
let first = assembly.getThis(value: 5)
first()
first()
let second = assembly.getThis(value: 0)
second()
second()
first()
second()
prints:
6
7
1
2
8
3
I understand this is expected and I just want to understand how it works. I know the basics of how the stack works in assembly. I think that when getThis is called, a new stack frame is pushed and the local variable a is allocated and assigned a value.
But does this stack frame get popped? It looks like it does, because the function returns a value, which means it has finished. On the other hand, if the stack frame is popped, the local variables stored in it are gone, which means a would be deallocated, and that is obviously not the case.
So how does this whole thing work?
There is a very large "gap" between Swift code and assembly code. The transformation between these two languages is vast, so I recommend not thinking "Oh wait, that wouldn't work in assembly!", because compilers are much smarter than you might think.
What basically happens here is that the closure returned by getThis captures the local variable a. In a sense, a "moved inside" the closure: the closure now has state. This happens whenever you use self or a local variable in a closure.
How is this achieved at a low level of abstraction? The captured variable is promoted from the stack to heap storage (a small box object) that the closure retains. So your statement about a getting deallocated when getThis returns is not true: a stays in memory because it is kept alive by the closure.
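To make that concrete, here is roughly what the compiler does, written out by hand. This is only a sketch of the idea, not the actual generated code; Box and getThisDesugared are illustrative names:
// A hand-written approximation of closure capture.
final class Box {
    var value: Int          // heap storage for the captured variable
    init(_ value: Int) { self.value = value }
}

func getThisDesugared(value: Int) -> () -> () {
    let a = Box(value)      // `a` now lives on the heap, not in this function's stack frame
    return {                // the closure holds a strong reference to the box
        a.value += 1
        print(a.value)
    }
}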
Anyway, how else would you want first to behave? Undefined behaviour? That doesn't sound Swifty. Resetting a to 0 every time you call first? That's no different from declaring a new variable inside first, is it? At the end of the day, this "capturing" semantics is simply a useful thing to have in the language, so Swift was designed that way.

How does retain count with synchronous dispatch work?

I'm trying to explain ownership of objects and how GCD does its work.
These are the things I've learned:
a function call will increase the retain count of the object it is called on
a dispatch block, unless it captures self weakly, will increase the count
after a dispatched block is executed it releases the captured object, so the retain count of self should decrease. But that's not what I'm seeing here. Why is that?
class C {
    var name = "Adam"
    func foo() {
        print("inside func before sync", CFGetRetainCount(self)) // 3
        DispatchQueue.global().sync {
            print("inside func inside sync", CFGetRetainCount(self)) // 4
        }
        sleep(2)
        print("inside func after sync", CFGetRetainCount(self)) // 4 ?????? I thought this would go back to 3
    }
}
Usage:
var c: C? = C()
print("before func call", CFGetRetainCount(c)) // 2
c?.foo()
print("after func call", CFGetRetainCount(c)) // 2
A couple of thoughts:
If you ever have questions about precisely where ARC is retaining and releasing behind the scenes, just add a breakpoint after “inside func after sync”, run the app, and when it stops use “Debug” » “Debug Workflow” » “Always Show Disassembly” to see the assembly and precisely what's going on. I'd also suggest doing this with release/optimized builds.
Looking at the assembly, the releases are at the end of your foo method.
As you pointed out, if you change your DispatchQueue.global().sync call to be async, you see the behavior you’d expect.
Also, unsurprisingly, if you perform functional decomposition, moving the GCD sync call into a separate function, you’ll again see the behavior you were expecting.
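For example, a minimal sketch of that decomposition (the performSync helper name is made up for illustration):
import Foundation

class C {
    var name = "Adam"
    func foo() {
        print("inside func before sync", CFGetRetainCount(self))
        performSync()      // the GCD call now lives in its own function
        sleep(2)
        print("inside func after sync", CFGetRetainCount(self))  // back to the count you expected
    }
    private func performSync() {
        DispatchQueue.global().sync {
            print("inside func inside sync", CFGetRetainCount(self))
        }
        // The extra retain taken for the closure is balanced by the time this helper returns.
    }
}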
You said:
a function call will increase the retain count of the object it is called on
Just to clarify what’s going on, I’d refer you to WWDC 2018 What’s New in Swift, about 12:43 into the video, in which they discuss where the compiler inserts the retain and release calls, and how it changed in Swift 4.2.
In Swift 4.1, it used the “Owned” calling convention where the caller would retain the object before calling the function, and the called function was responsible for performing the release before returning.
In 4.2, they implemented a “Guaranteed” calling convention, eliminating a lot of redundant retain and release calls.
This results, in optimized builds at least, in more efficient and more compact code. So, do a release build and look at the assembly, and you’ll see that in action.
Now we come to the root of your question: why does the GCD sync call behave differently from other scenarios with non-escaping closures, i.e. why is its release call inserted in a different place?
It seems that this is potentially related to optimizations unique to GCD sync. Specifically, when you dispatch synchronously to some background queue, rather than blocking the current thread and running the code on one of the worker threads of the designated queue, GCD is smart enough to determine that the current thread would otherwise be idle and will simply run the dispatched code on the current thread if it can. I can easily imagine that this GCD sync optimization might have introduced wrinkles in the logic about where the compiler inserts the release call.
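You can observe that GCD behaviour directly with a small sketch (which thread actually runs the block is an implementation detail, so treat the output as illustrative):
import Foundation

print("before sync:", Thread.current)
DispatchQueue.global().sync {
    // With sync, GCD usually reuses the calling thread instead of hopping to a worker thread.
    print("inside sync:", Thread.current)
}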
IMHO, the fact that the release is done at the end of the method as opposed to at the end of the closure is a somewhat academic matter. I’m assuming they had good reasons (or practical reasons at least), to defer this to the end of the function. What’s important is that when you return from foo, the retain count is what it should be.

deinit not called in specific case

I have the following test case: I expect deinit to be called at program termination, but it never is. I'm new to Swift, but I would not think this is expected behaviour. (This is not in a playground.)
class Test {
    init() {
        print("init")
    }
    deinit {
        print("deinit")
    }
}
print("Starting app")
var test = Test()
print( "Ending App" )
the output is:
Starting app
init
Ending App
Program ended with exit code: 0
If I place the code in a function and then call the function, I get the expected results:
Starting app
init
Ending App
deinit
Program ended with exit code: 0
Shouldn't deinit of the object be called at program termination?
I expect deinit to be called at program termination
You should not expect that. Objects that exist at program termination are generally not deallocated. Memory cleanup is left to the operating system (which frees all of the program's memory). This is a long-existing optimization in Cocoa to speed up program termination.
deinit is intended only to release resources (such as freeing memory that is not under ARC). There is no equivalent of a C++ destructor in ObjC or Swift. (C++ and Objective-C++ objects are destroyed during program termination, since this is required by spec.)
If I place the code in a function and then call the function I get expected results
Yes, this works because the lifetime of the test variable is defined by the scope of the function. When that scope ends (i.e., the function's stack frame is popped), the lifetime of its local variables ends unless there are other strong references to the objects they refer to.
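For illustration, the function-wrapped version the question describes would look roughly like this (a sketch):
func run() {
    print("Starting app")
    let test = Test()
    print("Ending App")
}   // test goes out of scope here, so the instance is deallocated and "deinit" prints last
run()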
Shouldn't deinit of the object be called at program termination
Unlike Android, there is no way to programmatically close an iOS app gracefully. The ways an app can be closed are:
by swiping it away (terminating it) from the recent apps list,
by a deliberate crash,
or by letting the OS kill your app for low-memory reasons.
All of your app's memory is reclaimed by the OS when it terminates, so deinit will not be called; we cannot expect that either. Note the word termination: the program does not end in an orderly way, so we can't expect the OS to do the last honors.
There is a similar question.
Apple's documentation doesn't clearly explain how deinit works beyond saying that it is called by ARC when an instance is being deallocated. When the app terminates there seems to be a different mechanism than the regular runtime deinitialization, or maybe the main window retains your object. In any case, to run code when the app terminates you should use the application delegate (e.g. applicationWillTerminate).
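A minimal sketch of that approach, assuming a UIKit app delegate:
import UIKit

class AppDelegate: UIResponder, UIApplicationDelegate {
    func applicationWillTerminate(_ application: UIApplication) {
        // Put any cleanup you would otherwise have relied on deinit for here.
        print("applicationWillTerminate")
    }
}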

What is objc_msgSend and why does it take up so much processing time?

I have been time profiling my turn-based game app and I have run into an interesting (maybe) issue. Looking at the time profile, it seems objc_msgSend takes up almost a minute of my app's run time. What is this, and is it a sign of some poorly written code? Thanks!
As @user1118321 said above, objc_msgSend is basically the implementation of Objective-C's message dispatch. When you send a message such as [foo bar], objc_msgSend gets invoked, and it essentially does this:
Figure out what class foo is.
Look up the implementation of the bar selector for that class (an implementation is basically a C function that takes foo and the selector as its first two arguments). This lookup happens at runtime and can be intercepted and manipulated in lots of gnarly ways, which is very cool but also incurs a performance cost compared to a direct call.
Call the implementation from step 2.
If you want to get into the nitty-gritty of the inner workings of objc_msgSend, this article's pretty great: https://www.mikeash.com/pyblog/friday-qa-2012-11-16-lets-build-objc_msgsend.html
Anyway, all this is obviously not going to be as fast as a straight C function call, but for the majority of cases, the overhead from objc_msgSend isn't enough to be a major concern (the function itself is written in some pretty hard-core hand-optimized assembler and uses a bunch of caching techniques, so it's probably pretty close to the optimal performance that it could have given all that it does). There are, however, some cases where objc_msgSend's performance could be a concern, and for those cases you can try to minimize your use of Objective-C calls, as @user1118321 said. Alternatively, if you can narrow down the specific Objective-C method calls that are causing the problem, you can use a technique known as IMP caching, by which you can look up a method once and save its implementation as a C function pointer, which you can then call as repeatedly as you like, sending the object and selector as the first two arguments.
For example, if you have this object:
@interface Foo : NSObject
- (id)doSomethingWithString:(NSString *)bar;
@end
You can get its IMP like so:
Foo *foo = ...
SEL selector = @selector(doSomethingWithString:);
IMP imp = [foo methodForSelector:selector];
id (*funcVersion)(Foo *, SEL, NSString *) = (id (*)(Foo *, SEL, NSString *))imp;
You can store funcVersion and the selector somewhere, and then call it like this. You won't incur the cost of objc_msgSend in doing so, since you'll effectively be skipping the first two steps from the above list.
id returnValue = funcVersion(foo, selector, @"Baz");
objc_msgSend is the function that resolves a selector to a method implementation and jumps to it whenever you call a method in Objective-C. It's not an indication of poor programming in and of itself. But if it's taking the majority of your program's time, you may want to consider refactoring your code so that you don't need to call so many methods to do the work you're doing. Without any more information about your app it's impossible to tell you how to proceed.
I have hit this before. In my case, there was a method my app was calling to retrieve an NSDictionary. It turned out that the dictionary was static throughout the app's lifetime, yet the method was creating it from scratch every time I called it. So instead of repeatedly calling the method that created the dictionary, I called it once at the beginning and saved (retained) the result, eliminating all future calls to the method.
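In Swift terms, that kind of one-time caching might look like this (a sketch; the type and member names are made up for illustration):
final class GameConfig {
    // Built once, on first access, instead of on every call.
    static let dictionary: [String: Int] = makeDictionary()

    private static func makeDictionary() -> [String: Int] {
        // Imagine expensive construction here; it now runs only once.
        return ["boardSize": 8, "maxPlayers": 4]
    }
}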

If a function returns an UnsafeMutablePointer is it our responsibility to destroy and dealloc?

For example if I were to write this code:
var t = time_t()
time(&t)
let x = localtime(&t) // returns UnsafeMutablePointer<tm>
println("\(x.memory.tm_hour): \(x.memory.tm_min): \(x.memory.tm_sec)")
...would it also be necessary to do the following?
x.destroy()
x.dealloc(1)
Or did we not allocate the memory ourselves and therefore don't need to release it?
Update #1:
If we imagine a function that returns an UnsafeMutablePointer:
func point() -> UnsafeMutablePointer<String> {
    let a = UnsafeMutablePointer<String>.alloc(1)
    a.initialize("Hello, world!")
    return a
}
Calling this function would result in a pointer to an object that will never be destroyed unless we do the dirty work ourselves.
The question I'm asking here: Is a pointer received from a localtime() call any different?
The simulator and the playground both enable us to send one dealloc(1) call to the returned pointer, but should we be doing this or is the deallocation going to happen for a returned pointer by some other method at a later point?
At the moment I'm erring towards the assumption that we do need to destroy and dealloc.
Update #1.1:
That assumption was wrong. I don't need to release it, because I didn't create the object.
Update #2:
I received some answers to the same query on the Apple dev forums.
In general, the answer to your question is yes. If you receive a pointer to memory which you would be responsible for freeing in C, then you are still responsible for freeing it when calling from swift ... [But] in this particular case you need do nothing. (JQ)
the routine itself maintains static memory for the result and you do not need to free them. (it would probably be a "bad thing" if you did) ... In general, you cannot know if you need to free up something pointed to by an UnsafePointer.... it depends on where that pointer obtains its value. (ST)
UnsafePointer's dealloc() is not compatible with free(). Pair alloc() with dealloc() and malloc and co. with free(). As pointed out previously, the function you're calling should tell you whether it's your response to free the result ... destroy() is only necessary if you have non-trivial content* in the memory referred to by the pointer, such as a strong reference or a Swift struct or enum. In general, if it came from C, you probably don't need to destroy() it. (In fact, you probably shouldn't destroy() it, because it wasn't initialized by Swift.) ... * "non-trivial content" is not an official Swift term. I'm using it by analogy with the C++ notion of "trivially copyable" (though not necessarily "trivial"). (STE)
Final Update:
I've now written a blogpost outlining my findings and assumptions with regard to the release of unsafe pointers taking onboard info from StackOverflow, Apple Dev Forums, Twitter and Apple's old documentation on allocating memory and releasing it, pre-ARC. See here.
From Swift library UnsafeMutablePointer<T>
A pointer to an object of type T. This type provides no automated memory management, and therefore the user must take care to allocate and free memory appropriately.
The pointer can be in one of the following states:
memory is not allocated (for example, pointer is null, or memory has been deallocated previously);
memory is allocated, but value has not been initialized;
memory is allocated and value is initialized.
struct UnsafeMutablePointer<T> : RandomAccessIndexType, Hashable, NilLiteralConvertible { /**/}
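For reference, the examples above use the Swift 1.x pointer API; in current Swift the same allocate/initialize/deinitialize/deallocate cycle looks roughly like this (a sketch of the point() example):
func point() -> UnsafeMutablePointer<String> {
    let a = UnsafeMutablePointer<String>.allocate(capacity: 1)
    a.initialize(to: "Hello, world!")
    return a
}

let p = point()
print(p.pointee)          // "Hello, world!"
p.deinitialize(count: 1)  // needed because String is non-trivial (it holds managed storage)
p.deallocate()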