I have the following test case: I expect deinit to be called at program termination, but it never is. I'm new to Swift, but I would not think this is expected behaviour. (This is not in a playground.)
class Test {
    init() {
        print("init")
    }

    deinit {
        print("deinit")
    }
}
print("Starting app")
var test = Test()
print( "Ending App" )
The output is:
Starting app
init
Ending App
Program ended with exit code: 0
If I place the code in a function and then call the function, I get the expected results:
Starting app
init
Ending App
deinit
Program ended with exit code: 0
Shouldn't deinit of the object be called at program termination?
I expect deinit to be called at program termination
You should not expect that. Objects that still exist at program termination are generally not deallocated. Memory cleanup is left to the operating system, which frees all of the program's memory. This is a long-standing optimization in Cocoa to speed up program termination.
deinit is intended only to release resources (such as freeing memory that is not under ARC). There is no equivalent of a C++ destructor in ObjC or Swift. (C++ and Objective-C++ objects are destroyed during program termination, since the spec requires it.)
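To make that concrete, here is a minimal sketch (the Buffer type is hypothetical) of the kind of cleanup deinit is meant for: freeing memory that ARC does not manage.

// Hypothetical sketch: deinit frees memory that ARC does not track.
final class Buffer {
    private let pointer: UnsafeMutablePointer<UInt8>
    private let count: Int

    init(count: Int) {
        self.count = count
        pointer = UnsafeMutablePointer<UInt8>.allocate(capacity: count)
        pointer.initialize(repeating: 0, count: count)
    }

    deinit {
        // Release the manually allocated memory; relying on process exit
        // to run this is exactly what you should not count on.
        pointer.deinitialize(count: count)
        pointer.deallocate()
    }
}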
If I place the code in a function and then call the function I get
expected results
Yes, this works because the lifetime of the test variable is defined by the scope of the function. When that scope ends (i.e., the function's stack frame is popped), all of its local variables are released unless something else holds a strong reference to them.
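For reference, a minimal sketch of that second case, assuming the Test class from the question:

func run() {
    print("Starting app")
    let test = Test()   // local, so its lifetime ends with run()
    _ = test            // silence the unused-variable warning
    print("Ending App")
}

run()                   // "deinit" prints here, when test goes out of scope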
Shouldn't deinit of the object be called at program termination
On iOS there is no way to close an app gracefully from code, as there is on Android. The ways an app can be terminated are:
By the user swiping it away (terminating it) from the recent apps list.
By a deliberate crash.
By the OS killing the app for low-memory reasons.
All of your app's memory is reclaimed by the OS when it terminates, so deinit will not be called, and you cannot expect it to be. Note the word termination: the program does not end in an orderly way, so we can't expect the OS to do the last honors.
There is a similar question.
Apple does not clearly document how deinit works; it is simply called by ARC when the instance is being deallocated. When the app terminates, the mechanism appears to be different from a regular runtime deallocation, or perhaps the main window is retaining your object. In any case, to run code when the app terminates, use the application delegate (for example, applicationWillTerminate).
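As a sketch of that last suggestion, cleanup can go in the app delegate rather than in deinit (a minimal UIKit example; the body is a placeholder):

import UIKit

class AppDelegate: UIResponder, UIApplicationDelegate {
    func applicationWillTerminate(_ application: UIApplication) {
        // Called (when the OS allows it) shortly before termination.
        // Do explicit cleanup here instead of relying on deinit.
        print("applicationWillTerminate: flush caches, close files, etc.")
    }
}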
Related
I'm writing a generic Swift application (not for iOS; it will later run on Raspbian) and I noticed a constant increase in memory use. I checked for memory leaks, also with the inspector, and there are none.
To dig deeper, I created a blank macOS application and wrote just these lines of code, purely for testing:
var array = [Decimal]()
while true {
    array = [Decimal]()
    for i in 0..<10000 {
        array.append(Decimal(string: i.description)!)
    }
    sleep(1)
}
As far as I know, at the beginning of every iteration of the while loop, the entire array that was filled in the previous iteration should be released. But that does not seem to be happening: with these lines of code, the process memory rises indefinitely.
I also tried the same code in an iOS project, putting it in the application function (the one called first in the app delegate), and I noticed that in that case the memory remains constant and does not rise.
Am I missing something in the non-iOS project?
Decimal(string:) is creating autoreleased objects. Use an autoreleasepool to drain the pool periodically:
var array = [Decimal]()
while true {
    autoreleasepool {
        for i in 0..<10_000 {
            array.append(Decimal(string: i.description)!)
        }
        sleep(1)
        array = []
    }
}
Normally the autorelease pool is drained when you yield back to the run loop. But this while loop never yields back to the OS, so the pool never gets drained. Using your own autoreleasepool, as above, solves that problem.
FWIW, Apple has been slowly excising the use of autoreleased objects from the Foundation/Cocoa classes. (We used to run into this problem with far more Foundation objects/APIs than we do now.) Clearly Decimal(string:) is still creating autoreleased objects behind the scenes. In most practical cases this isn't a problem, but in your example you need to introduce your own autoreleasepool to drain them inside this while loop.
I'm trying to explain ownership of objects and how GCD does its work.
These are the things I've learned:
a function call will increase the retain count of the object it's called on
a dispatch block, unless it captures self weakly, will increase the count
after a dispatched block is executed, it releases the captured object, so the retain count of self should decrease. But that's not what I'm seeing here. Why is that?
class C {
    var name = "Adam"

    func foo() {
        print("inside func before sync", CFGetRetainCount(self)) // 3
        DispatchQueue.global().sync {
            print("inside func inside sync", CFGetRetainCount(self)) // 4
        }
        sleep(2)
        print("inside func after sync", CFGetRetainCount(self)) // 4 ?????? I thought this would go back to 3
    }
}
Usage:
var c: C? = C()
print("before func call", CFGetRetainCount(c)) // 2
c?.foo()
print("after func call", CFGetRetainCount(c)) // 2
A couple of thoughts:
If you ever have questions about precisely where ARC is retaining and releasing behind the scenes, just add a breakpoint after “inside func after sync”, run it, and when it stops use “Debug” » “Debug Workflow” » “Always Show Disassembly” to see the assembly and exactly what's going on. I’d also suggest doing this with release/optimized builds.
Looking at the assembly, the releases are at the end of your foo method.
As you pointed out, if you change your DispatchQueue.global().sync call to be async, you see the behavior you’d expect.
Also, unsurprisingly, if you perform functional decomposition, moving the GCD sync call into a separate function, you’ll again see the behavior you were expecting.
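For example, here is a minimal sketch of that decomposition, assuming the same C class from the question (the dispatchAndPrint and foo2 names are made up):

extension C {
    // Hypothetical helper: the sync dispatch lives in its own function,
    // so its retain/release bookkeeping ends when this function returns.
    func dispatchAndPrint() {
        DispatchQueue.global().sync {
            print("inside helper inside sync", CFGetRetainCount(self))
        }
    }

    func foo2() {
        print("before sync", CFGetRetainCount(self))
        dispatchAndPrint()
        print("after sync", CFGetRetainCount(self)) // back down, as expected
    }
}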
You said:
a function will increase the retain count of the object its calling against
Just to clarify what’s going on, I’d refer you to WWDC 2018 What’s New in Swift, about 12:43 into the video, in which they discuss where the compiler inserts the retain and release calls, and how it changed in Swift 4.2.
In Swift 4.1, it used the “Owned” calling convention where the caller would retain the object before calling the function, and the called function was responsible for performing the release before returning.
In 4.2, they implemented a “Guaranteed” calling convention, eliminating a lot of redundant retain and release calls (see the screen snapshot in that WWDC session).
This results, in optimized builds at least, in more efficient and more compact code. So, do a release build and look at the assembly, and you’ll see that in action.
Now we come to the root of your question: why does the GCD sync function behave differently from those other scenarios (i.e., why is its release call inserted in a different place than in other scenarios with non-escaping closures)?
It seems this is potentially related to optimizations unique to GCD sync. Specifically, when you dispatch synchronously to a background queue, rather than blocking the current thread and running the code on one of the worker threads of the designated queue, GCD is smart enough to recognize that the current thread would otherwise be idle and will simply run the dispatched code on the current thread when it can. I can easily imagine that this sync optimization introduced wrinkles into the logic of where the compiler inserts the release call.
IMHO, the fact that the release is done at the end of the method as opposed to at the end of the closure is a somewhat academic matter. I’m assuming they had good reasons (or practical reasons at least), to defer this to the end of the function. What’s important is that when you return from foo, the retain count is what it should be.
I'm using Swift 3 with ARC in an iOS app, and I want to manually retain an object.
I tried object.retain() but Xcode says that it's unavailable in ARC mode. Is there an alternative way to do this, to tell Xcode I know what I'm doing?
Long Version:
I have a LocationTracker class that registers itself as the delegate of a CLLocationManager. When the user's location changes, it updates a static variable named location. Other parts of my code that need the location access this static variable, without having or needing a reference to the LocationTracker instance.
The problem with this design is that delegates aren't retained, so the LocationTracker is deallocated by the time the CLLocationManager sends a message to it, causing a crash.
I would like to manually increment the refcount of the LocationTracker before setting it as a delegate. The object will never be deallocated anyway, since the location should be monitored as long as the app is running.
I found a workaround, which is to have a static variable 'instance' that keeps a reference to the LocationTracker. I consider this design inelegant, since I'm never going to use the 'instance' variable. Can I get rid of it and explicitly increment the refcount?
This question is not a duplicate, as was claimed, since the other question is about Objective-C, while this one is about Swift.
The solution turned out to be to re-enable retain() and release():
extension NSObjectProtocol {
    /// Same as retain(), which the compiler no longer lets us call:
    @discardableResult
    func retainMe() -> Self {
        _ = Unmanaged.passRetained(self)
        return self
    }

    /// Same as autorelease(), which the compiler no longer lets us call.
    ///
    /// This function does an autorelease() rather than release() to give you more flexibility.
    @discardableResult
    func releaseMe() -> Self {
        _ = Unmanaged.passUnretained(self).autorelease()
        return self
    }
}
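A rough usage sketch, with the question's LocationTracker standing in as the example type:

let tracker = LocationTracker()
tracker.retainMe()   // extra retain keeps it alive even if nothing else references it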
This is easily done with the withExtendedLifetime(_:_:) function. From the documentation:
Evaluates a closure while ensuring that the given instance is not destroyed before the closure returns.
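For instance, a minimal sketch (LocationTracker is the question's type; the doWork() call is a placeholder):

let tracker = LocationTracker()
withExtendedLifetime(tracker) {
    // tracker is guaranteed to stay alive until this closure returns.
    doWork()
}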
Cheers!
While toying with Swift, I've encountered a situation that crashes, and I still haven't figured out why.
Let's define:
class TestClass {
    var iteration: Int = 0

    func tick() {
        if iteration > 100000 {
            print("Done!")
            return
        }
        iteration += 1
        tick()
    }
}
The tick() function calls itself and each time increments iteration. Any call of the type
let test = TestClass()
test.tick()
makes the program crash after a rather small number of recursions (around 50,000 on my iMac) with an EXC_BAD_ACCESS error.
If I define a similar struct instead of a class, there is no crash (at least not within these limits). Note that when it crashes, the program is using only a few MB of RAM.
I can't yet explain why this crashes. Does anybody have an explanation? The callbackStorage value seems suspicious, but I haven't found any pointer on this.
In your program, each thread has something called a stack. A stack is a LIFO (last in, first out) data container with two main operations: push, which adds an element to the top of the stack, and pop, which removes the item from the top of the stack.
When your program calls a function, it pushes the return address (the location in the caller to resume from), and sometimes some of the function's arguments, onto the stack, then jumps to the function's code. (The function's local variables are also stored on the stack.) When the function returns, it pops the return address off the stack and jumps to it.
However, the stack has a limited size. In your example, the function calls itself so many times that there isn't enough room on the stack for all of the return addresses, arguments, and local variables. This is called a stack overflow (which is where this website got its name). The program tries to write past the end of the stack, causing a segfault.
The reason the program doesn't crash when you use a struct is likely, as dans3itz said, that classes have more overhead than structs.
The runtime error you are experiencing here is a stack overflow. The fact that you do not experience it when you change the definition to a struct does not mean it cannot occur: increase the iteration depth just slightly and you will hit the same runtime error with the struct implementation. The class implementation gets there sooner because of the implicit arguments being passed around.
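If the goal is simply to avoid the crash, one option (not from the answers above, just a sketch) is to rewrite the recursion as a loop, so the call depth stays constant:

class TestClass {
    var iteration: Int = 0

    func tick() {
        // Same counting behaviour as the recursive version, but with a
        // constant stack depth, so no stack overflow.
        while iteration <= 100000 {
            iteration += 1
        }
        print("Done!")
    }
}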
My app creates an object. That object continuously runs one of its methods, say batch-processing pictures.
If that method is running and I release the object so that it is deallocated, will iOS automatically deal with the object's method? For example, will it automatically stop running the method and avoid an EXC_BAD_ACCESS?
When you call release and the reference count reaches 0, your object's dealloc is called. That's it.
That means: if you're processing on one thread and your object is sent release from another thread (or from the same thread, for some other bad reason), you should expect undefined behavior (which will likely result in termination, by EXC_BAD_ACCESS or something equally pleasant). Something should hold on to a reference to the object in this case (e.g., an NSOperation subclass).
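As a rough Swift-era sketch of that idea, let whatever runs the work hold a strong reference for as long as it runs, for example by capturing the object strongly in the work closure (the BatchProcessor type is made up):

import Foundation

final class BatchProcessor {
    func processAllPictures() { /* long-running work */ }
}

var processor: BatchProcessor? = BatchProcessor()

// The closure captures the object strongly, so even after the original
// reference is cleared below, it stays alive until the work finishes.
DispatchQueue.global().async { [strongRef = processor!] in
    strongRef.processAllPictures()
}

processor = nil   // safe: the queue's closure still owns a reference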
If you release an object, that means the OS is free to reuse the memory and overwrite it with other data.
Thus any reference within that object's method, after the final release, to self, an ivar, a getter, a setter, or any other method that requires them (recursively), could crash or, worse, randomly corrupt memory being used elsewhere.
A method that uses only global or local variables (and needs no further initialization or assignment from the object) might be safe, but that's really just a C function or class method masquerading as an instance method.