Sometimes the project compiles, and sometimes it fails with
"Out of memory allocating 4072 bytes after a total of 0 bytes"
If the project does compile, then when it starts it immediately throws a bad-access exception when attempting to access the first (allocated and retained) object, or it reports "unable to access memory address xxxxxxxx", where xxxxxxxx is a valid memory address.
Has anyone seen similar symptoms and knows of workarounds?
Thanks in advance.
If compilation or linking is failing with an out of memory error like that, it is likely one of two issues.
First, does your boot drive, or the drive that you are building your source on, have free space (they may be the same drive)? If not, that error may arise when the VM subsystem tries to map in a file or, more likely if the boot drive is full, when it tries to allocate more disk space for swap.
Secondly, is your application just absolutely gigantic? That is, is it the linker that is failing as it tries to assemble something really, really large?
There is also the possibility that the system has some bad RAM in it. Unlikely, though, given that the symptoms are so consistent.
In any case, without more details, it is hard to give a more specific answer.
I've seen this; it is not usually an actual memory error in your code.
What is happening is that your Xcode target's build setting "Optimization Level" is set to Fast, Faster, or Fastest.
There appears to be a bug in there somewhere; set it to None, or try -Os or -O3 (I don't think Fastest is affected).
This will very likely solve the problem for someone who comes across this thread. Definitely try "None" first; that will confirm whether this is what is happening in your case.
I can tell that McPragma is having this problem, because he/she describes it appearing when switching from Debug to Release: Debug is already set to None, and Release is set to something else. When that is the case, it is almost certainly this particular build setting.
Related
I'm running into lots of instances with the latest version(s) of Xcode (I believe this has been happening since 4.2 or so) where the stack trace when an exception is thrown is woefully devoid of details.
The screenshot here illustrates one such case; in fact, this scenario at least has SOME context (it shows me that it happened in the JKArray class), whereas many times I get nothing but "internal" stack items (just 2-3 entries in the thread, none of which reside in user code or anything I can look at). Even in this example, I don't know where the JKArray was allocated or released, so I have no idea what instance is involved or where the problem is.
Thoughts:
I've tried adding a generic "on exception" breakpoint
There is some minor information available in the output; in this case: "malloc: *** error for object 0x10e18120: pointer being freed was not allocated *** set a breakpoint in malloc_error_break to debug". Doing this doesn't get me any further, though, since the breakpoint gets hit in the same stack as the exception...
I've tried switching to the other debugger
I've tried using my own custom exception handler
I've tried using the Profiler to look for leaks. There are no leaks that I can find.
No matter what I do, I can't seem to isolate the issues plaguing my app. Furthermore, the issues do not seem to be exactly the same each time, likely due to the high amount of concurrency in my app... so I'm left without a good way to fix the problems.
Edit: in the case of this particular exception, I did end up finding the cause. I had attempted to [release] an object that was [autorelease]d. However, all of this was happening within my code; I can't understand why Xcode would not give me a decent stack trace to help me find the problem, rather than forcing me to hunt through my whole app...
Sometimes it is impossible to pinpoint the real problem, because the issue that causes the symptom has happened much earlier.
Your issue provides a great example: when you release your object, Cocoa has no idea that you've done anything wrong: you are releasing an object that you own, which is precisely what you should be doing, so there are no red flags. The code that leads to the breakdown is executed well after your method has finished and given control back to the run loop. It is at this point that the run loop gets around to draining its autorelease pool, causing the second deallocation. All it knows at this point, however, is that the run loop has made an invalid deallocation; it cannot tie that back to your code. By the time the error happens the culprit is safely off the stack (and off the hook), so there is nothing else Xcode can report back to you.
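To make that concrete, here is a minimal sketch of the pattern (the property names are made up for illustration):

- (void)configureTitle {
    NSString *title = [NSString stringWithFormat:@"Item %d", 42]; // returned autoreleased
    self.titleLabel.text = title;
    [title release]; // wrong under manual reference counting: the autorelease pool will release it again later
}

The crash happens only when the run loop drains the pool, long after configureTitle has returned, so the stack trace points at the run loop rather than at this method.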
A solution to problems like this is to profile your memory use: the profiler should catch issues like that, and pinpoint the places where they happen.
It goes without saying that switching to automatic reference counting will save you a lot of trouble in the memory management department.
I work on an average-sized project (~20k lines of code, Objective-C mixed with C++), and I am fighting to hunt down an EXC_BAD_ACCESS error.
I have tried all the common techniques (like enabling NSZombie, guard edges, etc.). So far, I have ruled out access to a released object and a double free.
It seems that something writes to memory where it shouldn't. But, as with many memory errors, it doesn't happen all the time, and it doesn't always crash in the same place.
(Sometimes I receive the "object was modified after being freed" message).
Sometimes, the overwritten memory belongs to the allocator, and it crashes on malloc, or on free().
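(For illustration only, this is not code from my app: the kind of bug that produces these symptoms is an out-of-bounds write, e.g.:)

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

static void overrun_example(void) {
    size_t width = 640, height = 480;
    uint8_t *frame = malloc(width * height);   // buffer for one video plane
    memset(frame, 0, width * (height + 1));    // off-by-one on the row count: writes past the end of the block,
                                               // scribbling over allocator metadata or a neighbouring (possibly
                                               // freed) block; the crash surfaces much later, inside malloc()
                                               // or free(), far from the code that caused it
    free(frame);
}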
And, of course, some changes in the app may influence the bug's behaviour - if I try to comment out parts of the code, the error appears less often, so it's more difficult to find it.
Finally, I have been looking into using Valgrind, but it seems that everyone who has used it ran it on the simulator, while my code must run on the actual device (some of the code is ARM-specific).
Are there any general tips on how to debug such errors?
Note: The app involves video processing, so the amount of memory used is fairly large.
There are some special tools available in Xcode. You could try using them to analyse your code.
http://developer.apple.com/library/mac/#featuredarticles/StaticAnalysis/index.html
It will produce warnings in cases of invalid object usage, so it could help you find the problem.
If you feel that the C++ code is causing the issue, you could copy the C++ out of your iPhone project and create a Mac project. With that you could set up various stress tests, and you should be able to use Valgrind as well.
I'm wondering what would be the correct approach after executing a command that allocates memory in Objective C (I'm mainly referring to iOS apps).
My dilemma comes from the fact that checking for the success or failure of a memory allocation adds lots of lines of code, and I wonder whether it is at all useful.
Moreover, sometimes memory allocations are obvious, such as when using 'alloc', but sometimes they take place behind the scenes. And even if we check each and every allocation, when we find one has failed there isn't much we can actually do. So maybe the correct approach is to just let it fail and have the app crash?
Take a look at this code:
// Explicit memory allocation
NSArray *a1 = [[NSArray alloc] initWithObjects:someObj, nil];
if (!a1) {
// Should we make this check at all? Is there really what to do?
}
// Implicit memory allocation
NSArray *a2 = [NSArray arrayWithObjects:someObj, nil];
if (!a2) {
// should we make this check at all? Is there really what to do?
}
What in your opinion would be the correct approach? Check or not check for allocation failures? iOS developers out there - how have you handled it in your apps?
Fantasy: Every memory allocation would be checked and any failure would be reported to the user in a friendly fashion, the app would shut down cleanly, a bug report would be sent, you could fix it and the next version would be perfect [in that one case].
Reality: By the time something as trivial as arrayWithObjects: fails, your app was dead long, long, ago. There is no recovery in this case. It is quite likely that the frameworks have already failed an allocation and have already corrupted your app's state.
Furthermore, once something as basic as arrayWithObjects: has failed, you aren't going to be able to tell the user anyway. There is no way that you are going to be able to reliably put a dialog on screen without further allocations.
However, the failure set in long before your app actually failed an allocation. Namely, your app should have received a memory warning and should have responded by (a) persisting state so no customer data is lost and (b) freeing up as much memory as possible to avoid catastrophic failure.
Still, a memory warning is the last viable line of defense in the war on memory usage.
Your first assault on memory reduction is in the design and development process. You should consider memory use from the start of the application development process and you must optimize for memory use as you polish your application for publication. Use the Allocations Instrument (see this Heapshot analysis write-up I did a bit ago -- it is highly applicable) and justify the existence of every major consumer of memory.
iPhone apps should register for UIApplicationDidReceiveMemoryWarningNotification notifications. iOS will send these when available memory gets low. Google iphoneappprogrammingguide.pdf (dated 10/12/2011) for more information.
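As a rough sketch (the imageCache property and saveUnsavedState helper are made up for illustration), registering for and responding to the notification looks something like this:

- (void)startObservingMemoryWarnings {
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(handleMemoryWarning:)
                                                 name:UIApplicationDidReceiveMemoryWarningNotification
                                               object:nil];
}

- (void)handleMemoryWarning:(NSNotification *)note {
    [self.imageCache removeAllObjects]; // drop anything that can be rebuilt later
    [self saveUnsavedState];            // hypothetical helper: persist state so no customer data is lost
}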
That said, one general approach to the problem I've seen is to reserve a block of memory at app startup as a "cushion". In your code put in a test after each allocation. If an allocation fails, release the cushion so you have enough memory to display an error message and exit. The size of the cushion has to be large enough to accommodate allowing your hobbled app to shutdown nicely. You could determine the size of the cushion by using a memory stress tester.
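A minimal sketch of that cushion idea (the size and the names are illustrative, not a recommendation for a particular value):

#include <stdlib.h>

static void *gCushion = NULL;

void ReserveCushion(void) {
    gCushion = malloc(512 * 1024);   // reserved at startup, never otherwise used
}

void *CheckedMalloc(size_t size) {
    void *p = malloc(size);
    if (p == NULL && gCushion != NULL) {
        free(gCushion);              // give the error path enough memory to run
        gCushion = NULL;
        // show a friendly error, persist state, then exit cleanly
    }
    return p;
}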
This is really a tricky problem because it happens so rarely (for well-designed programs). In the PC/mini/mainframe world, virtual memory virtually eliminates the problem in all but the most pathological programs. In limited-memory systems (like smartphones), stress testing your app with a heap monitor tool should give you a good indication of its maximum memory usage. You could code a high-water-mark wrapper routine for alloc that does the same thing.
Check them, assert in debug (so you know where/why failures exist), and push the error to the client (in most cases). The client will typically have more context - do they retry with a smaller request? fail somehow? disable a feature? Display an alert to the user, etc, etc. The examples you have provided are not the end of the world, and you can gracefully recover from many - furthermore, you need to (or ought to) know when and where your programs fail.
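For instance (a sketch only, reusing someObj from the question), that advice might look like:

NSArray *items = [[NSArray alloc] initWithObjects:someObj, nil];
NSAssert(items != nil, @"allocation of items failed"); // loud in debug builds
if (items == nil) {
    return nil; // let the caller decide: retry with a smaller request, disable the feature, show an alert, ...
}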
With the power you have in an iPhone/smartphone, the time it takes to run a few tests is so small that it is hardly worth asking "is it really worth checking"; it is always good to test for and catch any failures in your code/allocations. (If you don't, it sounds more like you're too lazy to add a few extra lines to your code.)
Also, "letting the app crash" gives a REALLY poor impression of your application; the user sees the app close for no reason and thinks it's poor-quality software.
You should always add your tests, and if you can't do anything about the error, then at least display a message before the app closes (it makes the user less frustrated).
There are several options for tracking memory allocations, like catching exceptions, testing whether the returned pointer is nil, checking the size of the list, etc.
You should think of ways to let your application keep running in case an allocation fails:
If it is just one view of your interface, display a message saying that the particular view failed to load...
If it is the main and only view, close the application gracefully with a message.
...
I don't know what application you are working on, but if you are short on memory, you should consider creating a system that allocates and deallocates memory as you progress through your app, so that you always have the maximum memory available. It might be slightly slower than keeping everything cached, but your app's quality will improve if you avoid any forced close.
Is there a maximum size of executables or executables + shared objects on the iPhone? I've been developing an app that has been crashing on startup or early in execution with SIGSYS. Removing code from the program has helped, though structuring data so the code is simply not executed does not.
This could be memory corruption of some kind; however, when I compile with -Os rather than -O2 or -O3, the size of my executable goes down from 5.15 MB to 3.60 MB and the application runs perfectly. I also use a bunch of libraries, of course.
I'm wondering, is there a limit on the size of executable code on the iPhone? Or am I just 'getting lucky' and masking memory corruption when I use -Os?
If there is a maximum size, there's no way you are hitting it with a 5.15 or 3.60 MB app file. You have a different bug in your app.
You are getting lucky.
You're probably just running out of memory. Are you getting memory warnings?
There is a maximum executable size for delivery via the App Store, but the hardware itself imposes no unusual restrictions.
Your problem is most likely in one of your libraries. Which, based on your description of the complexity of your app, is not a helpful observation on my part.
Given the pattern with the compiler options, I'm going to take a wild, wild guess that you've got a library that has a problem, and the tighter compile causes that code to be excluded.
Under the heading of a long shot, you might also want to take a look at resources such as images. I've seen a couple of cases in the past couple of years in which seemingly innocuous resources triggered fatal errors when they loaded.
I have hit exactly the same problem. In some tens or hundreds of kloc of C++ I never had any problems. Now I have essentially copy-and-pasted a class and renamed two buttons, and I get an access violation at startup.
None of the new code is ever executed, just linked in because I take the address of a function. The newly linked compilation unit does not rely on a single static symbol from the outside that could cause background code execution.
The debugger does not point to a useful location. My executable is at around 3.6 MB.
I'm working on catching a seriously insidious bug that's happening in my code. The problem is, the bug is completely random and can happen either 9 minutes into the application's runtime or 30 minutes. I've gone ahead and added the fabulous PLCrashReporter to my project (http://code.google.com/p/plcrashreporter) and that works fine for trivial bugs. Also, when I'm in doubt, I will navigate to the crash logs found in ~/Library/Logs/CrashReporter/MobileDevice/ and run symbolicatecrash on the crash log. This + GDB will eventually catch any bug, except for the one I'm facing now.
Apparently the nature of this bug is preventing even Apple's crash logs from being properly written to storage. This shows up when I sync my iPhone or iPod Touch with iTunes and run symbolicatecrash on my app:
sf$ symbolicatecrash foo.crash
No crash report version in foo.crash at /usr/local/bin/symbolicatecrash line 741.
It might be that my application is not leaving a crash report at all and is exiting due to memory issues. I do indeed see applicationWillTerminate: in my app delegate execute its NSLog statement before exiting. However, after running the application through ObjectAlloc, my application never exceeds 2.08 MB of usage, although, if I'm reading the results properly, I did allocate over 28 MB of memory over the entire duration of my test run.
Thanks again for everything.
A couple of suggestions:
Make sure that you're not actually calling exit(), returning from main(), or otherwise cleanly exiting anywhere in your code. If your application is just quitting, and not crashing, that obviously won't leave a log.
I think that running the system very rapidly out of memory can sometimes cause your application to crash without leaving a crash log. Run it under Instruments and see what the memory usage over time looks like.
If you have a set of steps that "often" reproduces the problem, try running it under the debugger and poking at it until it does crash. That might be a half-hour well-spent.
Having eliminated the obvious/easy, it's on to the more-obscure. Chances are that you're corrupting your heap or stack somewhere along the way, via a buffer overrun, re-using an invalid pointer, etc, etc. Here are some things to try:
Try running with NSZombieEnabled=YES in the environment variables. This will help you find re-use of freed objects. It does have an enormous impact on memory usage though, so it may not be applicable for everyone. Here's an Apple article dealing with NSZombie (among other things).
When running in the iPhone Simulator, use the "Simulate Memory Warning" item in the Hardware menu to force a low-memory condition - this can flush out bugs in that code, which otherwise runs at unpredictable times.
Last but not least, do a search through your code for everywhere that you use low-level C memory manipulation functions - malloc, calloc, realloc, memcpy, strcpy, strncpy, etc. - and make absolutely sure that the buffer sizes are appropriate.
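As an illustration of what "appropriate" means here (the function and names are made up, not from the question's code):

#include <stdbool.h>
#include <string.h>

// Copy a C string only when the destination is known to be big enough.
static bool copy_bounded(char *dest, size_t destSize, const char *source) {
    size_t needed = strlen(source) + 1; // +1 for the terminating NUL
    if (needed > destSize) {
        return false;                   // caller decides how to handle oversized input
    }
    memcpy(dest, source, needed);
    return true;
}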