How can I manually zero out memory? - swift

Is it possible to manually clear out the contents of an object from memory?
In particular, I'm dealing with NSData. I've tried using data.length = 0 and data.setData(NSData).
I know ARC will come in and clean up once the object goes out of scope of whatever owns it, but is it possible to manually force this process when I want?

I think you have some misconceptions about ARC that I'd like to clear up. The goal of ARC is to ensure memory leaks don't occur. It's responsible for tracking an object over its life cycle and ensuring it's "freed" when no references to it remain.
It's important to note that the memory being "freed" does not imply "writing over it all with 0s".
It simply means that memory will be designated as unused. The freed memory becomes a candidate for allocation when the system needs to allocate memory to new objects.
There's no guarantee, however, that this reallocation will happen, so it's entirely possible for your freed memory to still contain your original data and never be overwritten.
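If what you actually want is to scrub sensitive bytes rather than force deallocation, you can overwrite the buffer yourself before letting go of it. A minimal sketch, assuming you hold the bytes in a mutable buffer you control (the literal below is purely illustrative):
import Foundation

// Keep the sensitive bytes in a mutable buffer you own.
let secret = NSMutableData(data: "p@ssw0rd".data(using: .utf8)!)

// ... use `secret` ...

// Overwrite every byte with zeroes before the object is released.
secret.resetBytes(in: NSRange(location: 0, length: secret.length))
Note that this only scrubs this one buffer: value-type copies of Data, bridged copies, or copies Foundation makes internally are not affected, so guaranteeing that no copy of the plaintext remains anywhere in memory is a much harder problem.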

Related

Swift - Risk in using autoreleasepool? CPU usage?

With the Xcode Profiler I have just spotted an unnecessary memory peak during JSON serialization. Apparently it's a known issue and I should wrap the call in an autoreleasepool, which helped:
// extension ...
var jsonData: Data? {
    return autoreleasepool {
        try? JSONSerialization.data(withJSONObject: self, options: [])
    }
}
I found a few other big chunks of allocations that were not really needed, so I applied my newly learned trick to other code as well, such as the following:
var protoArray = [Proto_Bit]()
for bit in data {
    autoreleasepool {
        if let str = bit.toJSONString() {
            if let proto = try? Proto_Bit(jsonString: str) {
                protoArray.append(proto)
            }
        }
    }
}
Now, before I wrap every single instruction of my code (or at least wherever I see fit) in this autoreleasepool thing, I would like to ask if there are any risks or drawbacks associated with it.
With these two wraps I was able to reduce my peak memory consumption from 500 MB to 170 MB. I am aware that Swift also does these kinds of things behind the scenes and probably has some guards in place; however, I would rather be safe than sorry.
Does autoreleasepool come with a CPU overhead? If it is 5%, I would be okay with that since it sounds like a good tradeoff; if it's more, I would have to investigate.
Can I mess up anything by using autoreleasepool? Null pointers, thread locking, etc.? The block structure looks a bit scary. Or is this just telling the hardware "at the end of the bracket, clean up and close the door behind you" without affecting other objects?
Autorelease Pools are a mechanism which comes from Objective-C for helping automate memory management and ensure that objects and resources are released "eventually", where that "eventually" comes when the pool is drained. That is, an autorelease pool, once created on a thread, captures (retains) all objects which are -autoreleased while the pool is active; when the pool is drained, all of those objects are released. (Note that this is a Foundation feature in conjunction with the Objective-C runtime, and is not directly integrated with hardware: it's way, way higher-level than that.)
As a short-hand for managing autorelease pools directly (and avoiding creating NSAutoreleasePool instances by hand), Objective-C introduced the @autoreleasepool language keyword, which effectively creates an autorelease pool at the beginning of the scope and drains it at the end:
@autoreleasepool { /* create an autorelease pool to capture autoreleased objects */
    // ... do stuff ...
} /* drain the pool, releasing all objects that were in it */
Introducing autorelease pools manually in this way grants you more control over when autoreleased objects are effectively cleaned up: if you know that a block of code creates many autoreleased objects that really don't need to outlive that block of code, that may be a good candidate for wrapping up in an @autoreleasepool.
Autorelease pools pre-date ARC, which automates reference counting in a deterministic way, and ARC's introduction made autorelease pools largely unnecessary in most code: if an object can be deterministically retained and released, there's no need to rely on autoreleasing it "at some point". (And in fact, along with regular memory management calls like -retain and -release themselves, ARC will not allow you to call -autorelease on objects directly either.)
Swift, following the ARC memory management model, also does not rely on autoreleasing objects: all objects are deterministically released after their last usage. However, Swift does still need to interoperate with Objective-C code, and notably, not all Objective-C code (including a lot of code in, e.g., Foundation) uses ARC. Many internal Apple frameworks still use Objective-C's manual memory management, and thus still rely on autoreleased objects.
On platforms where Swift might need to interoperate with Objective-C code, no work needs to be explicitly done in order to allow autoreleased objects to eventually be released: every Swift application on Darwin platforms has at least one implicit autorelease pool at the root of the process which captures autoreleased objects. However, as you note: this "eventual" release of Objective-C objects might keep memory usage high until the pool is drained. To help alleviate that high memory usage, Swift has autoreleasepool { ... } (matching Objective-C's #autoreleasepool { ... }), which allows you to explicitly and eagerly capture those autoreleased objects, and free them at the end of the scope.
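For instance, here is a minimal sketch of that eager-drain pattern (the file-reading helper and the NSString call are illustrative assumptions, not something from the question): giving each loop iteration its own pool releases the Objective-C temporaries per file instead of letting them pile up until the process-level pool drains.
import Foundation

func totalLineCount(ofFilesAt paths: [String]) -> Int {
    var total = 0
    for path in paths {
        // Each iteration gets its own pool: autoreleased temporaries created
        // while reading this file are released before the next file is read.
        total += autoreleasepool { () -> Int in
            guard let contents = try? NSString(contentsOfFile: path,
                                               encoding: String.Encoding.utf8.rawValue) else {
                return 0
            }
            return contents.components(separatedBy: "\n").count
        }
    }
    return total
}
Note that autoreleasepool returns its closure's value, which is why the jsonData example in the question can return the serialized Data straight out of the pool.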
To answer your questions directly, but in reverse order:
Can I mess up anything using autoreleasepool? For correctly-written code, no. All you're doing is helping the Objective-C runtime clean up these objects a little bit earlier than it would otherwise. And it's critical to note: the objects will only be released by the pool — if their retain count is still positive after the pool releases them, they must still be in use somewhere, and will not be deallocated until that other owner holding on to the object also releases them.
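As an illustration of that ownership point (the file path below is hypothetical), an object that something outside the pool still strongly references survives the drain:
import Foundation

var kept: NSData?
autoreleasepool {
    // NSData(contentsOfFile:) goes through Objective-C Foundation and may
    // hand back an autoreleased object.
    let data = NSData(contentsOfFile: "/tmp/example.bin")  // hypothetical path
    kept = data  // strong reference owned outside the pool
}
// Draining the pool dropped only the pool's own reference; `kept` still owns
// the object, so it remains alive and valid here.
print(kept?.length ?? 0)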
Is it possible that the introduction of an autoreleasepool will cause some unexpected behavior to occur which didn't before? Absolutely. Incorrectly-written code could have accidentally worked because an object was incidentally kept alive long enough to prevent unintentional behavior from occurring, and releasing the object sooner might trigger it. But this is both unlikely (given the minuscule amount of actual manual memory management outside of Apple frameworks) and not something you can rely on: if the code misbehaves inside of a newly-introduced autoreleasepool, it wasn't correct to begin with, and could have backfired on you some other way.
Does autoreleasepool come with a CPU overhead? Yes, and it is likely vanishingly small compared to the actual work an application performs. But, that doesn't mean that sprinkling autoreleasepool all over the place will be useful:
Given the decreasing number of autoreleased objects in a Swift project as increasing amounts of code transition away from Objective-C, it's becoming rarer to see large numbers of autoreleased objects which need to be eagerly cleaned up. You could sprinkle autoreleasepools everywhere, but it's entirely possible that those pools will be entirely empty, with nothing to clean up.
autoreleasepools don't affect native Swift allocations: only Objective-C objects can be autoreleased, which means that for a good portion of Swift code, autoreleasepools are entirely wasted.
So, when should you use autoreleasepools?
When you're working with code coming from Objective-C, which
you've measured and shown is contributing to high memory usage thanks to autoreleased objects, and where
you've also measured that those objects are cleaned up appropriately by the introduction of an autoreleasepool.
In other words, exactly what you've done here in your question. So, kudos.
However, try to avoid cargo-culting the insertion of autoreleasepools all over the place: it's highly unlikely to be effective without actual measurements and understanding what might be going on.
[An aside: how do you know when objects/code might be coming from Objective-C? You can't, very easily. A good rule of thumb is that many Apple frameworks are still written in Objective-C under the hood, or may at some layer return an Objective-C object bridged (or not) to Swift — so they may be a likely culprit to investigate if you've measured something actionable. 3rd-party libraries are also much less likely to contain Objective-C these days, but you may also have source access to them to confirm.]
Another note about optimizations and autoreleasepools: in general, you should not typically expect a Release configuration of a build to behave differently with regard to autoreleased objects as opposed to a Debug configuration.
Unlike ARC code (both in Swift and in Objective-C), where the compiler can insert memory management optimizations for code at compile time, autorelease pools are a runtime feature, and since any retain will necessarily keep an object instance alive, even a single insertion of an object into an autorelease pool will keep it alive until it is disposed of at runtime. So, even if the compiler can aggressively optimize the specific locations of retains and releases for most objects in a Release configuration, there's nothing to be done for an object that's autoreleased.
(Well, the ARC optimizer can do some amount of optimization around autoreleasing objects if it has enough visibility into all of the code using the object, the context of the autorelease pools it belongs to, etc., but this is usually very limited because the scope in which the object was originally -autoreleased is usually far from the scope in which the autorelease pool lives, by definition [otherwise it would be a candidate for regular memory management].)

Release memory after del in Jupyter notebook

I have deleted some variables in a Jupyter notebook using del list_of_df, but the contents still occupy memory. So we tried %reset list_of_df, but the previous variable names are already gone... Is there nothing we can do but restart the kernel to reclaim the memory? Thanks
Further:
More generally, I might have lost track of what I have deleted in a huge Jupyter notebook. Is it possible to check what has been deleted but is still occupying memory, and free it?
Python isn't like C (for example) in which you have to manually free memory. Instead, all memory allocation and deallocation tasks are handled automatically in the background by a garbage collection (GC) routine. GC uses lazy evaluation, which means that the memory probably won't be freed right away, but will instead be freed only when it "needs" to be (in the ideal case, anyway).
You shouldn't use this in production code, but if you really want to, after you del your list you can force GC to run using the gc module:
import gc
gc.collect()
It might not actually work/deallocate the memory, though, for many different reasons. In general, it's better to just let Python manage memory automatically and not interfere.

POSIX mq_timedsend what happens to msg_ptr?

I am trying to debug a potential memory leak. I can see that the msg_ptr is not freed manually after the call to mq_timedsend.
My question is does mq_timedsend free the message after sending it to the queue?
No, it does not free the message, and neither should it - for any number of reasons!
The object referenced may not have been dynamically allocated in the first instance.
It cannot safely assume that the caller is no longer using the object pointed to by msg_ptr.
It cannot know whether it is a pointer to a C++ object requiring a destructor to be called, rather than a memory block that can simply be freed.
In short, it would be inappropriate and dangerous for any library function to behave in the way you suggest. As a general principle, dynamically allocated memory should be deleted by its owner unless there is some clear and documented protocol for ceding ownership, which is not a common pattern.
In this case the data is copied to the message queue, so you are free to modify or release whatever msg_ptr references after sending.

Memory management dilemma, Objective-C

I have been testing different features of Objective-C and reached the topic of memory management. After reading a few documents, it seems memory management must be handled very strictly in order to build a well-functioning application.
Now, as per my understanding, when we allocate an object its retainCount becomes 1. However, something I wrote for learning purposes is giving me an abnormal retainCount.
It might only seem abnormal to me, but for people who know what happens under the hood: could you please explain how I got this retainCount, and what would be the best way to release it?
Code which has an abnormal retainCount:
Object name is: ...(UISlider *) greenSender...
-(IBAction)changeGreen:(UISlider *)greenSender {
    showHere.textColor = [UIColor colorWithRed:red.value green:greenSender.value blue:blue.value alpha:1.0];
    NSLog(@"retainCount %d", [greenSender retainCount]);
}
It logs an abnormally large retainCount just after executing this code.
A short explanation would give me a hint, and external reading resources would be appreciated.
Thanks
Do not rely on retain counts. They should only be used as a debugging tool. The reason is that if an object gets retained and autoreleased, its effective retain count has not changed, but its actual retain count has increased by one. It will be released at some point in the future when the autorelease pool drains. Therefore, you cannot rely on the retain count for knowing whether the object has been managed properly or not.
A large retain count such as 8 may indicate a programming bug (such as retaining it too many times), but it could also just be a sign that it has been retained and autoreleased a large number of times, which, although curious, could be perfectly valid.
Do not trust/rely on retainCount. Really.
From Apple:
Important: This method is typically of no value in debugging memory management issues. Because any number of framework objects may have retained an object in order to hold references to it, while at the same time autorelease pools may be holding any number of deferred releases on an object, it is very unlikely that you can get useful information from this method.

What can I do to find out what's causing my program to consume lots of memory over time?

I have an application using POE which has about 10 sessions doing various tasks. Over time, the app starts consuming more and more RAM and this usage doesn't go down even though the app is idle 80% of the time. My only solution at present is to restart the process often.
I'm not allowed to post my code here, so I realize it is difficult to get help, but maybe someone can tell me what I can do to find this out myself?
Don't expect the process size to decrease. Memory isn't released back to the OS until the process terminates.
That said, might you have reference loops in data structures somewhere? AFAIK, the perl garbage collector can't sort out reference loops.
Are you using any XS modules anywhere? There could be leaks hidden inside those.
A guess: your program executes a loop for as long as it is running; in this loop it may be that you allocate memory for a buffer (or more) each time some condition occurs; since the scope is never exited, the memory remains and will never be cleaned up. I suggest you check for something like this. If it is the case, place the allocating code in a sub that you call from the loop and where it will go out of scope, and get cleaned up, on return to the loop.
Looks like Test::Valgrind is a tool for searching for memory leaks. I've never used it myself though (but I used plain valgrind with C source).
One technique is to periodically dump the contents of $POE::Kernel::poe_kernel to a time- or sequence-named file. $poe_kernel is the root of a tree spanning all known sessions and the contents of their heaps. The snapshots should monotonically grow if the leaked memory is referenced. You'll be able to find out what's leaking by diff'ing an early snapshot with a later one.
You can export POE_ASSERT_DATA=1 to enable POE's internal data consistency checks. I don't expect it to surface problems, but if it does I'd be very happy to receive a bug report.
Perl cannot collect reference cycles. Either you have zombie processes (which you can detect via ps axl) or you have a memory leak (reference cycles).
There are a ton of tools to detect memory leaks:
strace, mtrace, Devel::LeakTrace::Fast, Devel::Cycle