GCD Memory Bloat with Swift on Linux

I'm working on a producer-consumer problem with an unbounded queue. The producer can put as many tasks into the processing queue as it wants; when the queue is empty, the consumer blocks its thread.
while true {
    do {
        guard let job = try self.queue.dequeue() else { return }
        job.perform()
    } catch {
        print(error)
    }
}
Normally I would put everything in the loop inside an autorelease pool; however, it's not available on Linux. It seems as though ARC is never releasing the objects created in the loop. How should I go about controlling memory usage?

I don't believe memory spikes due to autorelease pools should be a thing on Linux. It's possible that something else could be holding onto a reference to one of your objects, though. Try setting a breakpoint in the middle of the loop, then click on "Debug Memory Graph" in the debugger to see what objects have references to the objects that are piling up. This can help determine the cause of objects that stick around longer than they ought to.
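For what it's worth, one common culprit in this kind of setup is the queue itself keeping a reference to jobs it has already handed out. Your queue implementation isn't shown, so the following is only a hypothetical sketch (all names are illustrative) of a blocking queue whose dequeue drops its own reference to the job:

import Foundation

// Hypothetical blocking job queue; names are illustrative, not from the question.
final class BlockingQueue<Job> {
    private var jobs: [Job] = []
    private let condition = NSCondition()

    func enqueue(_ job: Job) {
        condition.lock()
        jobs.append(job)
        condition.signal()
        condition.unlock()
    }

    func dequeue() -> Job {
        condition.lock()
        defer { condition.unlock() }
        while jobs.isEmpty {
            condition.wait() // block the consumer until a job arrives
        }
        // removeFirst() drops the queue's reference, so once the consumer's
        // local variable goes out of scope ARC can free the job.
        return jobs.removeFirst()
    }
}

If your dequeue keeps the job in its backing storage (or an "in flight" list) after returning it, ARC will never get a chance to release it, which would look exactly like the build-up you're describing.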

Related

Memory leak in Swift's FileHandle.read or Data buffer on macOS

I am trying to read a file on macOS with Swift using the FileHandle class and the function
#available(macOS 10.15.4, iOS 13.4, watchOS 6.2, tvOS 13.4, *)
public func read(upToCount count: Int) throws -> Data?
This works fine in principle. However, I'm faced with a memory leak. The data allocated in the returned Data buffer is not freed. Is this possibly a bug in the underlying Swift wrapper?
Here is my test function, which reads a file block by block (and otherwise does nothing with the data). (Yes, I know, the function is stupid; its sole purpose is to demonstrate the memory leak issue.)
func readFullFile(filePath: String) throws {
    let blockSize = 32 * 1024
    guard let file = FileHandle(forReadingAtPath: filePath) else {
        fatalError("Failed to open file")
    }
    while true {
        guard let data = try file.read(upToCount: blockSize) else {
            break
        }
    }
}
If I call the function in a loop, and watch the program's memory consumption, I can see that the memory goes up in each loop step by the size of the read file and is never released.
Can anybody confirm this behavior or knows how to fix it?
My environment:
macOS 11.3.1 Big Sur
Xcode 12.5
Best,
Michael
The underlying Objective-C implementation uses autorelease. Objects are kept alive by the thread's autorelease pool. Typically this pool is drained on every iteration of the run loop.
But in your context, since you have a tight loop that runs multiple times within a single iteration of the run loop, you're accumulating a bunch of objects that are kept around temporarily.
Rest assured, they will eventually be deallocated when the run-loop iteration completes, assuming your app doesn't crash from running out of memory before that.
If the build-up of objects is an issue, you can manually drain the autorelease pool by placing the allocations inside an autorelease pool block.
for _ in something {
    autoreleasepool {
        // Do your work here
    }
}
Beware: if your work doesn't allocate much memory, this might actually make performance worse. It'll slow down your loop without much memory benefit.
Docs: ObjectiveC.autoreleasepool(invoking:)
To access this, you can import ObjectiveC, although more commonly you'll have it transitively imported when you import Foundation, AppKit, Cocoa, etc.
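Applied to the readFullFile function from the question, a minimal sketch might look like this (assuming the rethrowing, value-returning autoreleasepool(invoking:) overload mentioned above, so the pool is drained once per block read):

import Foundation

func readFullFile(filePath: String) throws {
    let blockSize = 32 * 1024
    guard let file = FileHandle(forReadingAtPath: filePath) else {
        fatalError("Failed to open file")
    }
    while true {
        // Drain an autorelease pool on every pass so each Data block
        // can be freed promptly instead of accumulating until the
        // thread's pool is drained.
        let reachedEnd = try autoreleasepool { () throws -> Bool in
            guard let data = try file.read(upToCount: blockSize) else {
                return true
            }
            _ = data // do something with the block here
            return false
        }
        if reachedEnd { break }
    }
}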

Memory continuously increasing in a Swift application

I'm coding a generic Swift application (not for iOS; it will later run on Raspbian) and I noticed a constant increase in memory usage. I also checked for memory leaks with the inspector, and there are none.
To dig deeper, I created a blank application for macOS, and I just wrote those lines of code, which are only for testing:
var array = [Decimal]()
while true {
    array = [Decimal]()
    for i in 0..<10000 {
        array.append(Decimal(string: i.description)!)
    }
    sleep(1)
}
As far as I know, at the beginning of every cycle of the while loop the entire array that was filled in the previous cycle should be released from memory. But it seems that this is not happening: with those lines of code the process memory rises indefinitely.
I also tried the same code in an iOS project, putting it in the application function (the one that is called at launch in the app delegate), and I noticed that in this case the memory remains constant and does not rise.
Am I missing something in the non-iOS project?
Decimal(string:) is creating autoreleased objects. Use an autoreleasepool to drain the pool periodically:
var array = [Decimal]()
while true {
    autoreleasepool {
        for i in 0..<10_000 {
            array.append(Decimal(string: i.description)!)
        }
        sleep(1)
        array = []
    }
}
Normally the autorelease pool is drained when you yield back to the runloop. But in this case, the while loop never yields back to the OS, and therefore the pool never gets drained. Using your own autoreleasepool, as above, solves that problem.
FWIW, Apple has been slowly excising the use of autoreleased objects from the Foundation/Cocoa classes. (We used to experience this problem with far more Foundation objects/APIs than we do now.) Clearly Decimal(string:) is still creating autoreleased objects behind the scenes. In most practical cases this isn't a problem, but in your example you will need to introduce your own autoreleasepool to mitigate this behavior in this while loop.
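If the string round-trip in this example was only incidental, another option (an aside, not part of the answer above) is to avoid the bridged string path altogether and build the Decimal straight from the integer, which leaves far less autoreleased garbage to accumulate in the first place:

import Foundation

var array = [Decimal]()
while true {
    autoreleasepool {
        for i in 0..<10_000 {
            // Decimal(i) skips the String conversion entirely, so there is
            // much less autorelease traffic on this path.
            array.append(Decimal(i))
        }
        sleep(1)
        array = []
    }
}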

Foundation.Data memory leak in Swift

To cut a long story short, here is a code snippet that will happily eat as much memory as it can until it's stopped. But why? When I wrap the body of the while loop in autoreleasepool, not a single byte is leaked. However, that only affects the current scope; if there are leaky function calls inside, the leakage continues. So is the answer just to wrap every leak-prone operation in autoreleasepool? That looks kind of ridiculous and non-Swifty.
import Foundation

while true {
    let _ = "Foo Bar".data(using: .ascii)
    usleep(100)
}
This is not unexpected. Until your while loop returns control to the run loop, the top-level autorelease pool will not be drained, and objects put into it will continue to accumulate.
I'm a little surprised that ARC doesn't destroy the Data instances immediately, however, since assigning them to the wildcard _ means they are in effect never in scope. There's no name by which you can ever refer to them, and no reason to keep them alive.
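For completeness, the workaround the asker alludes to is simply to drain a pool per iteration; a minimal sketch:

import Foundation

while true {
    autoreleasepool {
        // Any autoreleased temporaries created by data(using:) are released
        // when this pool drains instead of piling up in the top-level pool.
        _ = "Foo Bar".data(using: .ascii)
    }
    usleep(100)
}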

Is there a limit on the number of blocks that can be submitted to a serial queue in GCD?

I define a property that returns a serial dispatch queue using lazy instantiation, something like this:
@property (nonatomic, readonly) dispatch_queue_t queue;

- (dispatch_queue_t)queue
{
    if (!_queue) {
        _queue = dispatch_queue_create("com.example.MyQueue", NULL);
    }
    return _queue;
}
Then let's say that I define an action method for some button that adds a block to the queue:
- (IBAction)buttonTapped:(UIButton *)sender
{
    dispatch_async(self.queue, ^{
        printf("Do some work here.\n");
    });
}
The code for the actual method is more involved than a simple print statement, but this will do for the example.
So far so good. However, if I build and run the program, I can tap on the button 10 times and see the block run, but when I tap an eleventh time, the program hangs.
If I change the serial queue to a concurrent queue, no problems. I can dispatch as many blocks to the queue as I like.
Any idea what might be going on? Is there a limit to the number of blocks that can be posted to a serial queue?
In answer to the question on max blocks, I know of no practical limitation on what can be queued (other than available memory). Certainly, you should be able to queue far more than ten without incident.
But you have a typo in your queue getter method. You're setting _queue, but returning queue. You should return the same variable that you set. It looks like you must have two ivars defined; perhaps one that you defined manually and one that was synthesized? If you have a manually declared instance variable, you should just eliminate it and make sure your getter method is using the same instance variable, namely the one that was synthesized for you. Also, are you initializing this ivar in your init method?
If fixing this doesn't resolve the issue, then the problem probably rests in the particular code that you're dispatching to this queue and you should share that with us. Any synchronization code there? Any interaction with any shared resources?
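As an aside (and only a rough Swift rendering, not part of the Objective-C fix above): in Swift the lazily created serial queue collapses to a lazy stored property, which makes this kind of getter/ivar mismatch impossible. Class and method names here are illustrative:

import Dispatch

final class WorkController {
    // A lazy stored property replaces the hand-written getter and backing ivar;
    // DispatchQueue(label:) creates a serial queue by default.
    private lazy var queue = DispatchQueue(label: "com.example.MyQueue")

    func buttonTapped() {
        queue.async {
            print("Do some work here.")
        }
    }
}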
OK, I finally resolved this issue.
First, when I reported that concurrent queues worked fine, but serial queues did not, I was mistaken. Both types of queues failed. When I observed everything working, that was actually using the main queue. So, in that case, there really wasn't any concurrency.
That said, the problem was a deadlock issue having to do with logging information via the main thread while I was processing on a secondary thread -- from either a serial or concurrent queue. To make matters worse, my app uses Lumberjack for logging which introduces additional threading issues. To resolve the matter, I wrapped every call to a Lumberjack logging method as follows:
dispatch_async(dispatch_get_main_queue(), ^{
    // do logging here
});
That took care of the problem. Thanks for the comments. They ultimately led me to a solution.

Memory management for UIKit in tight loops under ARC

I'm interested in learning more about how best to handle memory management in tight loops under ARC. In particular, I've got an app I'm writing which has a while loop that runs for a really long time, and I've noticed that despite having implemented (what I believe to be) the best practices in ARC, the heap keeps growing boundlessly.
To illustrate the problem I'm having, I first set up the following test to fail on purpose:
while (true) {
    NSMutableArray *array = [NSMutableArray arrayWithObject:@"Foo"];
    [array addObject:@"bar"]; // do something with it to prevent compiler optimisations from skipping over it entirely
}
Running this code and profiling with the Allocations tool shows that the memory usage just endlessly increases. However, wrapping this in an @autoreleasepool as follows immediately resolves the issue and keeps the memory usage nice and low:
while (true) {
    @autoreleasepool {
        NSMutableArray *array = [NSMutableArray arrayWithObject:@"Foo"];
        [array addObject:@"bar"];
    }
}
Perfect! This all seems to work fine -- and it even works fine (as would be expected) for non-autoreleased instances created using [[... alloc] init]. Everything works fine until I start involving any UIKit classes.
For example, let's create a UIButton and see what happens:
while (true) {
    @autoreleasepool {
        UIButton *button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
        button.frame = CGRectZero;
    }
}
Now, the memory usage increases ad infinitum -- effectively, it appears that the @autoreleasepool is having no effect.
So the question is why does @autoreleasepool work fine for the NSMutableArray and keep the memory in check, but when applied to a UIButton the heap continues to grow?
Most importantly, how can I keep the heap from expanding endlessly when using UIKit classes in an endless loop like this, and what does this tell us about the best practices for ARC in while(true) or while(keepRunningForALongTime) style loops?
My gut feeling on this (and I could be totally wrong) is that it's perhaps something about how the while (true) keeps the runloop from cycling, which is keeping the UIKit instances in memory rather than releasing them... But clearly I'm missing something in my understanding of ARC!
(And to eliminate an obvious cause, NSZombieEnabled is not enabled.)
As to why the UI* objects grow without bounds? Internal implementation detail. Most likely, some kind of cache or runloop interaction that you are effectively disabling by blocking the main run loop.
Which brings me to the real answer:
Most importantly, how can I keep the heap from expanding endlessly when using UIKit classes in an endless loop like this, and what does this tell us about the best practices for ARC in while(true) or while(keepRunningForALongTime) style loops?
How to fix it? Do not ever use a tight loop on the main thread, and do not ever block the main run loop.
Even if you were to figure out and workaround the UI* induced heap growth, your program still wouldn't work if you were to use a while(...) loop on the main thread. The entire design of iOS applications -- and Cocoa applications -- is that the main thread has a main run loop and that main run loop must be free to run.
If not? Your app will not be responsive to user input (and will eventually be killed by the system), and your drawing code is unlikely to work as expected (since the run loop coalesces dirty regions and draws them on demand in conjunction with the main thread, often offloading to a secondary thread).
Speculation on my part here, but it could boil down to the fact that UI-related objects especially tend to use GCD or similar (e.g. performSelectorOnMainThread:…) to ensure some actions happen on the main thread. This is as you suspect - the enqueued block or other unit of execution maintains a reference to the instance, waiting for its time in the runloop to execute, and never getting it.
As a rule it's bad to block the runloop. Once upon a time it used to be relatively common - drag tracking was often done this way (or effectively so, by running the runloop only in a special mode while the drag progressed). But it leads to weird interactions and even deadlocks, because lots of code isn't designed with that possibility in mind - especially in a GCD world where asynchronicity is king.
Remember that you can run the runloop explicitly if you like, inside your while loop, and while that's not quite identical to letting it run naturally, it usually works.
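A minimal Swift sketch of that idea (the Objective-C equivalent would be [[NSRunLoop currentRunLoop] runUntilDate:[NSDate date]]); keepRunning and doOneUnitOfWork are placeholders, not from the question:

import Foundation

func doOneUnitOfWork() {
    // Placeholder for the real per-iteration work.
}

var keepRunning = true // placeholder termination condition

while keepRunning {
    autoreleasepool {
        doOneUnitOfWork()
    }
    // Give the run loop one pass so pending sources fire and autoreleased
    // temporaries can be cleaned up; as noted above, this is not identical
    // to letting the run loop run naturally.
    RunLoop.current.run(until: Date())
}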
Right, I have done some more thinking about this, together with the great contributions from bbum and Wade Tegaskis regarding blocking the runloop, and realised that the way to mitigate this sort of issue is to let the runloop cycle by using performSelector:withObject:afterDelay:, which lets the runloop continue while scheduling the loop to continue itself in the future.
For example, to return to my original example with the UIButton, this should now be rewritten as a method like this:
- (void)spawnButton {
    UIButton *button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
    button.frame = CGRectZero;
    [self performSelector:@selector(spawnButton) withObject:nil afterDelay:0];
}
This way, the method ends immediately and button is correctly released when it goes out of scope, but in the final line, spawnButton instructs the runloop to run spawnButton again in 0 seconds (i.e. as soon as possible), which in turn instructs the runloop to run... etc etc, you get the idea.
All you then need to do is call [self spawnButton] somewhere else in the code to get the cycle going.
This can also be solved similarly using GCD, with the following code which essentially does the same thing:
- (void)spawnButton {
    UIButton *button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
    button.frame = CGRectZero;
    dispatch_async(dispatch_get_main_queue(), ^{
        [self spawnButton];
    });
}
The only difference here is that the method call is dispatched (asynchronously) onto the main queue (main runloop) using GCD.
Profiling it in Instruments, I can now see that although the total allocations keep going up, the live memory remains low and static, showing that the runloop is cycling and the old UIButtons are being deallocated.
By thinking about runloops like this, and using performSelector:withObject:afterDelay: or GCD, there are actually a number of other situations (not just with UIKit) where this sort of approach can prevent unintentional "memory leaks" caused by runloop lockups. (I put that in quotation marks because in this case I was being a UIKit rogue, but there are other cases where this technique is useful.)