Terminated app due to memory pressure - iPhone

I have an app that takes images in burst mode, but once an image is taken and is about to appear in the preview, the app crashes with the error "Terminated app due to memory pressure".
I need to take a large number of images while the user holds the camera button, and after the button is released I need to show all the images as a slideshow. What do I have to do?
My code is:
- (void)longPress:(UILongPressGestureRecognizer *)gesture {
    if (gesture.state == UIGestureRecognizerStateBegan) {
        [[NSNotificationCenter defaultCenter] addObserver:self
                                               selector:@selector(takePictures)
                                                   name:AVCaptureSessionDidStartRunningNotification
                                                 object:nil];
    }
    else if (gesture.state == UIGestureRecognizerStateEnded)
    {
    }
}

- (void)takePictures
{
    [imagePicker takePicture];
}
Help me..

Storing images in RAM is costly because of their high resolution. What's more, your observer causes takePictures to be called many times - too many. As you take multiple pictures with the imagePicker, the images quickly consume RAM, and since iOS has no swap, you run out of it. Jetsam/memorystatus then kicks in and kills your app for having consumed so much memory.
Ways around this:
A) Take fewer pictures in burst mode. Use a counter, say j, increment it in takePictures, but only take the actual picture when j % 2 == 0 or j % 3 == 0 (you'll need to play around with the value).
B) Save at least some of the photos to storage, then release them from RAM (remove the references to them). A rough sketch of both ideas follows below.
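Something along these lines, assuming a UIImagePickerController as in your code; the counter, file names and paths are illustrative, not taken from the question:
static NSUInteger shotCounter = 0;

- (void)takePictures
{
    shotCounter++;
    if (shotCounter % 3 != 0) {   // option A: only actually capture every third call
        return;
    }
    [imagePicker takePicture];
}

// UIImagePickerControllerDelegate callback
- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = info[UIImagePickerControllerOriginalImage];

    // option B: write the JPEG to disk and drop the in-memory copy immediately
    NSString *dir = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                        NSUserDomainMask, YES)[0];
    NSString *path = [dir stringByAppendingPathComponent:
                         [NSString stringWithFormat:@"burst_%lu.jpg", (unsigned long)shotCounter]];
    [UIImageJPEGRepresentation(image, 0.8) writeToFile:path atomically:YES];

    // Keep only the file paths around and load the images lazily for the slideshow.
}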

Related

performance issues in sprite kit with efficient code

I read all the posts I found about improving performance, but my problem is a bit different.
I have a simple physics-based game, no super fancy stuff.
I've got at most 40 nodes on the screen and a few SKLabelNodes.
My code is efficient - every node that moves outside the screen gets removed from its parent, and I've got just 4 physics bodies at once. Textures are not too big - max 250x250 - and they are preloaded in an atlas.
The problem is that it works for a minute or so, then it starts to stutter for 4-5 seconds, and then it works fine again. The stuttering doesn't appear at a certain point; sometimes it happens on startup and sometimes after a few minutes.
I don't know what to do.
I load my ads on another thread, and otherwise there isn't anything to load.
EDIT:
No SKShapeNodes.
Xcode 7 + iPhone 6 Plus (iOS 9.3.2)
@Alessandro Ornano
My update method only handles the background images. I already tried without the backgrounds - same problem. I read about performance issues in iOS 9; do you know something about that?
- (void)update:(NSTimeInterval)currentTime {
    // Backgrounds
    if (self.background1.position.x >= self.size.width * 1.5) {
        self.background1.position = CGPointMake(self.background2.position.x - self.background1.size.width + 1, self.size.height / 2);
    }
    if (self.background2.position.x >= self.size.width * 1.5) {
        self.background2.position = CGPointMake(self.background1.position.x - self.background2.size.width + 1, self.size.height / 2);
    }
    // Background clouds
    if (self.backgroundClouds1.position.x >= self.size.width * 1.5) {
        self.backgroundClouds1.position = CGPointMake(self.backgroundClouds2.position.x - self.backgroundClouds1.size.width + 1, self.size.height / 2);
    }
    if (self.backgroundClouds2.position.x >= self.size.width * 1.5) {
        self.backgroundClouds2.position = CGPointMake(self.backgroundClouds1.position.x - self.backgroundClouds2.size.width + 1, self.size.height / 2);
    }
}

iOS - CMSampleBufferRef is not being released from captureOutput:didOutputSampleBuffer:fromConnection

I am capturing frames from the camera using the code:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    if (delegate && [delegate respondsToSelector:@selector(captureManagerCapturedFrame:withFrameImage:withFrameBuffer:)]) {
        [delegate captureManagerCapturedFrame:self withFrameImage:image withFrameBuffer:sampleBuffer];
    }
}
I am doing this because in the delegate method captureManagerCapturedFrame:withFrameImage:withFrameBuffer: I have a flag which tells the app to use either the returned UIImage or the returned sample buffer.
The delegate method is:
- (void)captureManagerCapturedFrame:(AVCamCaptureManager *)captureManager
                     withFrameImage:(UIImage *)image
                    withFrameBuffer:(CMSampleBufferRef)frameBuffer {
    if (_screen1) {
        NSLog(@"Only display camera image\n");
    }
    else if (_screen2) {
        // Enable IR
        NSLog(@"Display AND Process camera image\n");
        [self imageReconigitionProcessFrame:frameBuffer];
    }
}
where imageReconigitionProcessFrame: is:
- (void)imageReconigitionProcessFrame:(CMSampleBufferRef)frameBuffer {
    //CFRetain(frameBuffer);
    MSImage *qry = [[MSImage alloc] initWithBuffer:frameBuffer orientation:AVCaptureVideoOrientationPortrait]; // MEMORY LEAK HERE???
    qry = nil;
    //CFRelease(frameBuffer);
}
This code effectively works. But here is my problem: when it is run and profiled in Instruments, I see a rapid increase in the overall bytes used, but the allocations profiler doesn't appear to increase, nor do I see any leaks with the Leaks tool. Yet clearly there is a rapid memory gain each time imageReconigitionProcessFrame: is called, and the app crashes after a few seconds. When I set frameBuffer to nil, there is no increase in memory (of course I also don't have the frame buffer to do any processing with).
I have tried transferring ownership of frameBuffer using CFRetain and CFRelease (commented out in the code above), but these don't seem to do anything either.
Does anyone have any idea where I could be leaking memory inside this function?
The [[MSImage alloc] initWithBuffer:] method is from a third-party SDK (Moodstocks, which is an awesome image recognition SDK) and it works just fine in their demos, so I don't think the problem is inside this function.
First of all, thanks for mentioning Moodstocks (I work for them): we're happy that you find our SDK useful!
To answer your question, I guess your code does indeed contain a leak: at the end of the imageReconigitionProcessFrame method, you should call [qry release]. The rule in Obj-C is quite simple: whenever you manually call alloc on an object, it should also be manually released!
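Applied to the method in the question, that would look roughly like this (a sketch, assuming manual reference counting, as your retain/release experiments suggest):
- (void)imageReconigitionProcessFrame:(CMSampleBufferRef)frameBuffer {
    MSImage *qry = [[MSImage alloc] initWithBuffer:frameBuffer
                                       orientation:AVCaptureVideoOrientationPortrait];
    // ... process qry here ...
    [qry release]; // balances the alloc/init above under manual reference counting
}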
That's BTW what is done in the Moodstocks SDK wrapper: if you look at the [MSScannerSession session: didOutputSampleBuffer:] method, you'll see that we do manually release the MSImage object after it's been processed.
As to why the profiler doesn't find this leak, I guess that it's due to the fact that leaks are analyzed every 10 seconds by default: in this case, the memory leak is so heavy (1280x720 frames, at 15+ FPS if you're on an iPhone 5, for 10 seconds: at least 130 MB leaked) that the code must crash before the first 10 seconds are reached.
Hope this helps!

Load an Image from URL by many threads

I'm loading an image from a URL into my app. The image is large (around 1.5 MB). How can I use several threads (for example, 2 threads) to load this image and improve the speed? Using one thread, it takes around 5 seconds, and I want to reduce this duration.
You are correct that 1.5 MB is a big image, but the way to optimise is NOT to use many threads, although you are on the right track. The technique is called "slicing" and is heavily used on the web to load images faster. Take an image and slice it into 3 or 4 smaller pieces (and no more) on your server. When rendering, request these 4 images all at once; it will load faster than one big picture. This also lessens the "perceived" latency for the end user.
Also, when you slice up an image, it becomes easier to reduce the number of colors necessary to display each portion, which reduces the file size (sometimes fairly significantly).
As an example, Google used to do this for the logo on its main search page, splitting it into 4 separate images.
The downside of slicing is that it increases maintenance costs. Someone has to maintain these image slices and make sure nothing goes amiss as the app keeps changing.
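A rough sketch of the client side, fetching the slices concurrently with GCD and handing them back for display; the URL naming scheme and the method name are illustrative, and the server has to actually provide the slices:
- (void)loadSlicedImageWithBaseURL:(NSString *)baseURL
                        completion:(void (^)(NSArray *slices))completion
{
    dispatch_group_t group = dispatch_group_create();
    NSMutableArray *slices = [NSMutableArray arrayWithCapacity:4];
    for (NSUInteger i = 0; i < 4; i++) {
        [slices addObject:[NSNull null]];   // placeholder per slice
    }

    for (NSUInteger i = 0; i < 4; i++) {
        dispatch_group_async(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            // e.g. http://example.com/bigpic_slice0.jpg ... bigpic_slice3.jpg
            NSURL *url = [NSURL URLWithString:[NSString stringWithFormat:@"%@_slice%lu.jpg", baseURL, (unsigned long)i]];
            NSData *data = [NSData dataWithContentsOfURL:url];
            if (data) {
                @synchronized (slices) {    // NSMutableArray is not thread-safe
                    slices[i] = [UIImage imageWithData:data];
                }
            }
        });
    }

    // Once all four downloads finish, hand the slices to the main thread
    // so they can be drawn side by side in four UIImageViews.
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        completion(slices);
    });
}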
Please try the following code:
// In the .h file, declare the following:
IBOutlet UIImageView *imgTest;
- (IBAction)buttonTapped:(id)sender;
- (void)LoadImage:(NSString *)urlString;
- (void)setImage:(NSData *)imgData;

// In the .m file:
- (IBAction)buttonTapped:(id)sender
{
    // Kick the download off on a background thread so the UI stays responsive
    [self performSelectorInBackground:@selector(LoadImage:)
                           withObject:@"http://www.google.com/images/errors/logo_sm.gif"];
}

- (void)LoadImage:(NSString *)urlString
{
    NSURL *imgURL = [NSURL URLWithString:urlString];
    NSData *imgData = [NSData dataWithContentsOfURL:imgURL];
    // UIKit must only be touched on the main thread
    [self performSelectorOnMainThread:@selector(setImage:)
                           withObject:imgData
                        waitUntilDone:NO];
}

- (void)setImage:(NSData *)imgData
{
    imgTest.image = [UIImage imageWithData:imgData];
}
You can use an activity indicator while the image loads as well: start it in the buttonTapped method and stop it in the setImage method.
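For instance (a sketch; "spinner" is an assumed IBOutlet UIActivityIndicatorView, not part of the code above):
- (IBAction)buttonTapped:(id)sender
{
    [spinner startAnimating];   // show progress while the download runs
    [self performSelectorInBackground:@selector(LoadImage:)
                           withObject:@"http://www.google.com/images/errors/logo_sm.gif"];
}

- (void)setImage:(NSData *)imgData
{
    imgTest.image = [UIImage imageWithData:imgData];
    [spinner stopAnimating];    // called on the main thread, so this is safe
}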
I hope this will help you.

Get amount of memory used by app in iOS

I'm working on an upload app that splits files before upload. It splits the files to avoid being killed by iOS for using too much memory, since some of the files can be rather large. It would be great if, instead of setting a max "chunk" size, I could set the max memory usage and determine the chunk size from that.
Something like this
#define MAX_MEM_USAGE 20000000 // 20 MB
#define MIN_CHUNK_SIZE 5000    // 5 KB

- (void)uploadAsset:(ALAsset *)asset
{
    long totalBytesRead = 0;
    ALAssetRepresentation *representation = [asset defaultRepresentation];
    while (totalBytesRead < [representation size])
    {
        long chunkSize = MAX_MEM_USAGE - [self getCurrentMemUsage];
        // If I can't get 5 KB without getting killed then I'm going to get killed anyway
        chunkSize = MIN([representation size] - totalBytesRead, MAX(chunkSize, MIN_CHUNK_SIZE));
        uint8_t *buffer = malloc(chunkSize);
        // read a file chunk in here, adding the result to totalBytesRead
        // upload the chunk here
        free(buffer); // release the chunk before reading the next one
    }
}
That is essentially what I'm going for. I can't seem to find a way to get the current memory usage of my app specifically; I don't really care about the amount of system memory left.
The only way I've been able to think of is one I don't like much: grab the amount of free system memory on the first line of main in my app and store it in a static variable in a globals class. getCurrentMemUsage would then go something like this:
- (long)getCurrentMemUsage
{
    long sysUsage = [self getSystemMemoryUsed];
    return sysUsage - [Globals origSysUsage];
}
This has some serious drawbacks. The most obvious one to me is that another app might get killed in the middle of my upload, which could drop sysUsage below origSysUsage, producing a negative number even if my app is using 10 MB of memory; that could result in my app using 40 MB for a request rather than the 20 MB maximum. I could always clamp the value between MIN_CHUNK_SIZE and MAX_MEM_USAGE, but that would just be a workaround instead of an actual solution.
If there are any suggestions as to getting the amount of memory used by an app or even different methods for managing a dynamic chunk size I would appreciate either.
As with any virtual memory operating system, "memory used" is not very well defined and is notoriously difficult to calculate.
Fortunately, thanks to the virtual memory manager, your problem can be solved quite easily with the mmap() C function. It lets your app map the file into memory and treat it as if it were in RAM, while the data is actually paged in from storage as it is accessed and paged back out when iOS is low on memory.
This function is really easy to use in iOS with the Cocoa APIs for it:
- (void)uploadMyFile:(NSString *)fileName {
    NSData *fileData = [NSData dataWithContentsOfMappedFile:fileName];
    // Work with the data as with any NSData object. The iOS kernel
    // will take care of loading the file as needed.
}
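And if you still want a rough number for the getCurrentMemUsage helper in your question, one common approach is to ask Mach for the current task's resident size. This is only a sketch, and resident size is only an approximation of the footprint jetsam actually tracks:
#import <mach/mach.h>

- (long)getCurrentMemUsage
{
    struct task_basic_info info;
    mach_msg_type_number_t count = TASK_BASIC_INFO_COUNT;
    kern_return_t kr = task_info(mach_task_self(),
                                 TASK_BASIC_INFO,
                                 (task_info_t)&info,
                                 &count);
    if (kr != KERN_SUCCESS) {
        return -1; // query failed
    }
    return (long)info.resident_size; // bytes of physical memory currently used by this task
}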

Understanding iOS Instruments

I am creating an iPhone app. After running into memory issues, I started using Instruments to track down memory problems. I am running into some strange behavior that leads me to believe I am either misusing Instruments or misreading its data.
These are the LiveBytes values recorded when moving in and out of a location:
**Expensive location:**
World (12 MB)
Loc (27 MB)
World (13 MB)
Loc (28 MB)
World (14 MB)
- Crash
**Cheap location:**
World (12 MB)
Loc (23 MB)
World (13 MB)
Loc (24 MB)
World (14 MB)
- Crash
Notice how I still crash even though the cheap location's memory comes nowhere near the expensive location's memory. Could anyone help me out here?
I'm not sure if this is related to your problem, but I hope it helps. I was recently tracking the memory footprint of an app and noticed that, even though the dealloc message was being sent to a view controller after hitting "back" on the UINavigationController, I still had a few dozen live objects left over from the operation (you can see this in the 'Allocations' panel of the Instruments app). To solve this I used a mix of a few things:
First, I added the following three methods to NSLog the retain counts of my custom subviews (found here on SO at iOS4 - fast context switching):
#pragma mark - RETAIN DEBUG MAGIC
// -----------------------------------------------------------------------------
- (id)retain
{
    NSLog(@"retain \t%s \tretainCount: %lu", __PRETTY_FUNCTION__, (unsigned long)[self retainCount]);
    return [super retain];
}

- (void)release
{
    NSLog(@"release \t%s \tretainCount: %lu", __PRETTY_FUNCTION__, (unsigned long)[self retainCount]);
    [super release];
}

- (id)autorelease
{
    NSLog(@"autorelease \t%s \tretainCount: %lu", __PRETTY_FUNCTION__, (unsigned long)[self retainCount]);
    return [super autorelease];
}
Then, I isolated each one of the view building blocks, leaving only one simple task (for example, loading a UIButton as a subview), and went back to the Instruments app to track the live objects (under Product > Profile in Xcode), disabling all the objects with 'NS', 'CF' and 'Malloc' prefixes (you can do this by clicking on the little i button next to the 'Allocations' tab). After this, I selected "Call Trees" in the bottom-right pane and kept drilling until I found a few places where the object counter went up as I navigated back and forth.
Notice that you can double-click on a symbol to see the details of the calls made. Additionally, clicking on the little i icon will bring up a pop-up with the backtraces for the highlighted call.
When looking at the backtraces you will see that some of them have a small icon that depicts a person on a frame (the text next to these icons is significantly darker as a visual cue). Double clicking on these will take you to the line in your code responsible for this call.
Below are a few links that might give you a hand in understanding more about instruments:
http://www.raywenderlich.com/2696/how-to-debug-memory-leaks-with-xcode-and-instruments-tutorial
http://developer.apple.com/library/mac/#documentation/DeveloperTools/Conceptual/InstrumentsUserGuide/ViewingandAnalyzingData/ViewingandAnalyzingData.html
Note:
At the end of my journey, all I had to do was release my views after adding them to their superviews to ensure they would be dealloc'd, i.e.:
[[self view] addSubview:aButton];
[aButton release];