UIImageJPEGRepresentation - memory release issue - iPhone

On an iPhone app, I need to send a JPEG by mail with a maximum size of 300 KB (I don't know the maximum attachment size Mail.app can handle, but that's another problem). To do that, I'm trying to decrease the quality until I get an image under 300 KB.
To find the quality value (compressionLevel) that gives me a JPEG under 300 KB, I wrote the following loop.
It works, but on each pass through the loop, memory grows by the original size of my JPEG (700 KB), despite the [tmpImage release];.
float compressionLevel = 1.0f;
int size = 300001;
while (size > 300000) {
    UIImage *tmpImage = [[UIImage alloc] initWithContentsOfFile:
                         [self fullDocumentsPathForTheFile:@"imageToAnalyse.jpg"]];
    size = [UIImageJPEGRepresentation(tmpImage, compressionLevel) length];
    [tmpImage release];
    // The 0.001f decrement is commented out on purpose, just to test the memory growth
    //compressionLevel = compressionLevel - 0.001f;
    NSLog(@"Compression: %f", compressionLevel);
}
Any ideas about how I can get rid of this, or why it happens?
Thanks.

At the very least, there's no point in allocating and releasing the image on every trip through the loop. It shouldn't leak memory, but it's unnecessary, so move the alloc/init and release out of the loop.
Also, the data returned by UIImageJPEGRepresentation is autoreleased, so it'll hang around until the current autorelease pool drains (when you get back to the main event loop). Consider adding:
NSAutoreleasePool* p = [[NSAutoreleasePool alloc] init];
at the top of the loop, and
[p drain];
at the end. That way you won't be holding on to all of the intermediate memory between iterations.
And finally, doing a linear search for the optimal compression setting is probably pretty inefficient. Do a binary search instead.
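Putting both of those together, here is a rough sketch (untested): the image is loaded once, each compression attempt gets its own autorelease pool, and the quality is bisected instead of stepped linearly. It reuses the fullDocumentsPathForTheFile: helper from the question; the 8 iterations are an arbitrary choice.
UIImage *sourceImage = [[UIImage alloc] initWithContentsOfFile:
                        [self fullDocumentsPathForTheFile:@"imageToAnalyse.jpg"]];

CGFloat lo = 0.0f, hi = 1.0f, best = 0.0f;
for (int i = 0; i < 8; i++) {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    CGFloat mid = (lo + hi) / 2.0f;
    NSUInteger size = [UIImageJPEGRepresentation(sourceImage, mid) length];
    if (size > 300000) {
        hi = mid;        // too big: lower the quality
    } else {
        best = mid;      // fits: remember it and try a higher quality
        lo = mid;
    }
    [pool drain];        // frees the autoreleased NSData right away
}
NSLog(@"Best compression level under 300 KB: %f", best);
[sourceImage release];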

Related

iPhone run animation while moving object on the screen smoothly

Currently I have an image on the screen that is swapped out every 5 seconds with another image, using an animation to do so.
At the same time, there are objects on the screen that the user can pick up and drag around (using a pan gesture). During the 0.5-second animation, if I am moving an object around, the UI stutters. For example, I have a brush that I pick up and move around the screen. When the 5-second timer ends and the background image updates, the brush stutters while that animation runs. I moved the image loading off the UI thread and force it to load by using NSData.
Is there a way I can prevent this stutter while the animation that changes the image is running? Here is how I swap the image:
// Dispatch to the queue, and do not wait for it to complete
// Grab the image on a background thread in order to not block the UI as much as possible
dispatch_async(imageGrabbingQueue, ^{
    curPos++;
    if (curPos > (self.values.count - 1)) curPos = 0;

    NSDictionary *curValue = self.values[curPos];
    NSString *imageName = curValue[KEY_IMAGE_NAME];

    // This may cause lazy loading later and stutter the UI; convert to an NSData object
    // and force it into memory for faster processing
    UIImage *imageHolder = [UIImage imageNamed:imageName];

    // Load the image into NSData and recreate the image with the data.
    NSData *imageData = UIImagePNGRepresentation(imageHolder);
    UIImage *newImage = [[UIImage alloc] initWithData:imageData];

    dispatch_async(dispatch_get_main_queue(), ^{
        [UIView transitionWithView:self.view
                          duration:0.5
                           options:UIViewAnimationOptionTransitionCrossDissolve |
                                   UIViewAnimationOptionAllowUserInteraction |
                                   UIViewAnimationOptionAllowAnimatedContent
                        animations:^{
                            [self.image setImage:newImage];
                            // Temp clause to show ad logo
                            if (curPos != 0) [self.imagePromotion setAlpha:1.0];
                            else             [self.imagePromotion setAlpha:0];
                        }
                        completion:nil];
    });
});
Thanks,
DMan
The image-processing libraries on the iPhone are not magic; they take real CPU time to decode an image, and that is most likely what you are running into.

Calling [UIImage imageNamed:] will likely cache the image, but caches can always be flushed, so it does not force the system to keep the image in memory. The round trip through initWithData: is pointless, because the PNG still has to be decompressed into memory when it is drawn, and that decompression is the part causing the slowdown.

What you could do is render the image out as decoded pixels and save that into a file, then memory-map the file and wrap the mapped memory in a Core Graphics image. That avoids the decode step that is likely causing the stutter; anything else may not actually do what you are expecting. And you should not hold the decoded bytes in memory: decoded image data is typically so large that it takes up too much room in device memory.
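If the memory-mapped route feels like overkill, one commonly used alternative (sketched below, untested) is to force the decode on the background queue by drawing the image into an offscreen context, so the main thread only composites an already-decoded bitmap. Note that this keeps one decoded frame in memory at a time, which is the trade-off mentioned above, so it only suits modestly sized images. It reuses imageGrabbingQueue, imageName and self.image from the question.
dispatch_async(imageGrabbingQueue, ^{
    UIImage *imageHolder = [UIImage imageNamed:imageName];

    // Drawing the image forces the PNG to be decompressed here, off the main thread.
    UIGraphicsBeginImageContextWithOptions(imageHolder.size, NO, imageHolder.scale);
    [imageHolder drawAtPoint:CGPointZero];
    UIImage *decodedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    dispatch_async(dispatch_get_main_queue(), ^{
        [self.image setImage:decodedImage];
    });
});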

CoreGraphics memory warnings and crash; Instruments show no memory leak

UPDATE This piece of code is actually not where the problem is; commenting out all the CoreGraphics lines and returning the first image in the array as the result does not prevent the crashes from happening, so I must look farther upstream.
I am running this on a 75ms NSTimer. It works perfectly with 480x360 images, and will run all day long without crashing.
But when I send it images that are 1024x768, it will crash after about 20 seconds, having given several low memory warnings.
In both cases Instruments shows absolutely normal memory usage: a flat allocations graph, less than one megabyte of live bytes, no leaks the whole time.
So, what's going on? Is Core Graphics somehow using too much memory without showing it?
Also worth mentioning: there aren't that many images in (NSMutableArray *)imgs -- usually three, sometimes two or four. It crashes regardless, just slightly later when there are only two.
- (UIImage *)imagefromImages:(NSMutableArray *)imgs andFilterName:(NSString *)filterName {
    UIImage *tmpResultant = [imgs objectAtIndex:0];
    CGSize s = [tmpResultant size];

    UIGraphicsBeginImageContext(s);
    [tmpResultant drawInRect:CGRectMake(0, 0, s.width, s.height) blendMode:kCGBlendModeNormal alpha:1.0];
    for (int i = 1; i < [imgs count]; i++) {
        [[imgs objectAtIndex:i] drawInRect:CGRectMake(0, 0, s.width, s.height) blendMode:kCGBlendModeMultiply alpha:1.0];
    }
    tmpResultant = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return tmpResultant;
}
Sounds to me like the problem is outside of the code you have shown. Images that are displayed on screen have a backing store outside of your app's memory that is width*height*bytes_per_pixel. You also get memory warnings and app termination if you have too many backing stores.
You might need to optimize there, either by creating smaller versions of these images for display or by allowing the backing stores to be released. Turning on rasterization for non-changing layers can also help, as can setting a layer's contents directly to the CGImage instead of going through UIImage.
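For example (just a sketch, with a hypothetical imageView and backgroundImage standing in for your own views and images):
#import <QuartzCore/QuartzCore.h>   // needed for the CALayer properties below

// Hand the decoded CGImage straight to the layer instead of going through UIImage setters.
imageView.layer.contents = (id)backgroundImage.CGImage;

// Rasterize a layer that doesn't change so it isn't redrawn on every frame.
imageView.layer.shouldRasterize = YES;
imageView.layer.rasterizationScale = [[UIScreen mainScreen] scale];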
You should make a sample project that demonstrates the issue with no other code around it and see if you still run out of memory. I suspect you'll find that with just the code you have shown you will not be able to reproduce the issue, because it lies elsewhere.

Playing looping images sequence in UIView

I have a looping animation consisting of 120 frames at 512x512 resolution, saved as 32-bit PNG files. I want to play this sequence back in a UIView inside my application. Can anyone give me some pointers on how I might do this? I'd prefer to stick with the standard API if possible, but I could use Cocos2D if needed, or even OpenGL (though I am totally new to OpenGL at this point).
You can try this:
// Init a UIImageView
UIImageView *imageView = [[UIImageView alloc] initWithFrame:/* some frame */];

// Init an array of UIImage objects
NSArray *array = [NSArray arrayWithObjects:[UIImage imageNamed:@"image1.png"], [UIImage imageNamed:@"image2.png"], .. , nil];

// Set the UIImageView's animationImages property
imageView.animationImages = array;

// Set the total duration (number of images x 1/30 gives you 30 FPS)
imageView.animationDuration = /* number of images x 1/30 */;

// Set the repeat count (0 means infinite)
imageView.animationRepeatCount = 0;

// Start animating and add as a subview
[imageView startAnimating];
[self.view addSubview:imageView];
This is the easiest approach, but I can't say anything about the performance, since I haven't tried it. I think it should be fine though with the images that you have.
Uncompressed, that's roughly 120 MB of image data (512 x 512 x 4 bytes x 120 frames), and that is about what you're looking at once they're unpacked into UIImage form. Given the length of the animation and the size of the images, I highly recommend storing them in a compressed movie format instead. Take a look at the MediaPlayer framework reference; you can remove the playback controls, embed an MPMoviePlayerController within your own view hierarchy, and set playback to loop. Note that 640x480 is the upper supported limit for H.264 on older devices, so you might need to scale the video down anyway.
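A minimal sketch of that approach, assuming a hypothetical animation.m4v bundled with the app (keep a reference to the player, e.g. in an ivar, and release it when done, or it will be deallocated):
#import <MediaPlayer/MediaPlayer.h>

NSURL *movieURL = [[NSBundle mainBundle] URLForResource:@"animation" withExtension:@"m4v"];
MPMoviePlayerController *player = [[MPMoviePlayerController alloc] initWithContentURL:movieURL];
player.controlStyle = MPMovieControlStyleNone;   // no playback controls
player.repeatMode   = MPMovieRepeatModeOne;      // loop forever
player.view.frame   = self.view.bounds;          // embed in your own hierarchy
[self.view addSubview:player.view];
[player play];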
Do take note of the issues with looping video mentioned in this question: Smooth video looping in iOS.

Low FPS when accesing iPhone video output image buffer

I'm trying to do some image processing on iPhone. I'm using http://developer.apple.com/library/ios/#qa/qa2010/qa1702.html to capture the camera frames.
My problem is that when I try to access the captured buffer, the camera FPS drops from 30 to about 20. Does anybody know how I can fix it?
I use the lowest capture quality I could find (AVCaptureSessionPresetLow = 192x144) in the kCVPixelFormatType_32BGRA format. If anybody knows of a lower quality I could use, I'm willing to try it.
When I do the same image access on other platforms, like Symbian, it works OK.
Here is my code:
#pragma mark -
#pragma mark AVCaptureSession delegate

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    /* We create an autorelease pool because we are not on the main queue, so our code is
       not executed on the main thread. We therefore need an autorelease pool for the thread we are on. */
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the image buffer
    if (CVPixelBufferLockBaseAddress(imageBuffer, 0) == kCVReturnSuccess)
    {
        // Calculate FPS and display it using the main thread
        [self performSelectorOnMainThread:@selector(updateFps:) withObject:nil waitUntilDone:NO];

        UInt8 *base = (UInt8 *)CVPixelBufferGetBaseAddress(imageBuffer); // image buffer start address
        size_t width  = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);

        int size = (height * width);
        UInt8 *pRGBtmp = m_pRGBimage;

        /*
         Here is the problem: m_pRGBimage is the RGB image I want to process.
         In the 'for' loop I convert the image from BGRA to RGB. As a result, the FPS drops to 20.
        */
        for (int i = 0; i < size; i++)
        {
            pRGBtmp[0] = base[2];
            pRGBtmp[1] = base[1];
            pRGBtmp[2] = base[0];
            base    = base + 4;
            pRGBtmp = pRGBtmp + 3;
        }

        // Display received action
        [self performSelectorOnMainThread:@selector(displayAction:) withObject:nil waitUntilDone:NO];
        //[self displayAction:&eyePlayOutput];
        //saveFrame(imageBuffer);

        // Unlock the image buffer
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    }

    [pool drain];
}
As a follow-on to the answers: I need to process the image in real time while it is being displayed.
I noticed that when I use AVCaptureSessionPresetHigh, even the simplest thing I do, like:
for (int i = 0; i < size; i++)
    x = base[0];
causes the framerate to drop to 4-5 FPS. I guess it's because an image of that size is not cached.
Basically, I need a 96x48 image. Is there a simple, hardware-accelerated way to downscale the camera output image so I can work with the small one?
Anything that iterates over every pixel in an image will be fairly slow on all but the fastest iOS devices. For example, I benchmarked iterating over every pixel in a 640 x 480 video frame (307,200 pixels) with a simple per-pixel color test and found that this only runs at around 4 FPS on an iPhone 4.
You're looking at processing 27,648 pixels in your case, which should run fast enough to hit 30 FPS on an iPhone 4, but that's a much faster processor than what was in the original iPhone and iPhone 3G. The iPhone 3G will probably still struggle with this processing load. You also don't say how fast the processor was in your Symbian devices.
I'd suggest reworking your processing algorithm to avoid the colorspace conversion. There should be no need to reorder the color components in order to process them.
Additionally, you could selectively process only a few pixels by sampling at certain intervals within the rows and columns of the image.
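A sketch of what that could look like, working directly on the BGRA buffer from your captureOutput: method (here, base is the address returned by CVPixelBufferGetBaseAddress, before your loop advances it, and step is an arbitrary sampling interval):
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);  // rows may be padded
const int step = 4;                                             // sample 1 pixel in every 4 per axis

for (size_t y = 0; y < height; y += step) {
    const UInt8 *row = base + y * bytesPerRow;
    for (size_t x = 0; x < width; x += step) {
        const UInt8 *px  = row + x * 4;   // BGRA layout
        UInt8 blue  = px[0];
        UInt8 green = px[1];
        UInt8 red   = px[2];
        // ...do the per-pixel work on red/green/blue here, no RGB copy needed...
    }
}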
Finally, if you are targeting the newer iOS devices that have support for OpenGL ES 2.0 (iPhone 3G S and newer), you might want to look at using a GLSL fragment shader to process the video frame entirely on the GPU. I describe the process here, along with sample code for realtime color-based object tracking. The GPU can handle this kind of processing 14 - 28 times faster than the CPU, in my benchmarks.
disclaimer: THIS ANSWER IS A GUESS :)
You're doing quite a lot of work while the buffer is locked; is this holding up the thread that is capturing the image from the camera?
You could copy the data out of the buffer so you can unlock it ASAP, and then work on the copy, i.e. something like:
if (CVPixelBufferLockBaseAddress(imageBuffer, 0) == kCVReturnSuccess) {
    // Get the base address and size of the buffer
    UInt8 *buffer_base = (UInt8 *)CVPixelBufferGetBaseAddress(imageBuffer); // image buffer start address
    size_t height      = CVPixelBufferGetHeight(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);          // rows may be padded
    size_t size        = bytesPerRow * height;

    // Copy its contents out
    UInt8 *base = malloc(size);
    memcpy(base, buffer_base, size);

    // Unlock the buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // base now points to a copy of the buffer's data - do what you want with it . . .
    ...

    // remember to free base once you're done ;)
    free(base);
}
If it's the lock that's holding up the capture then this should help.
NB: you could speed this up further. If you know all the buffers will be the same size, call malloc just once to get the memory, reuse it for each frame, and only free it when you have finished processing all the buffers.
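Something like this, for instance (a sketch; a static is used here for brevity, but an ivar on your delegate would be the more usual home):
static UInt8 *copyBuffer = NULL;
static size_t copyBufferSize = 0;

size_t neededSize = CVPixelBufferGetBytesPerRow(imageBuffer) * CVPixelBufferGetHeight(imageBuffer);
if (neededSize > copyBufferSize) {
    free(copyBuffer);                    // free(NULL) is a no-op on the first frame
    copyBuffer = malloc(neededSize);
    copyBufferSize = neededSize;
}
memcpy(copyBuffer, CVPixelBufferGetBaseAddress(imageBuffer), neededSize);
// ...unlock the pixel buffer, work on copyBuffer, and free it when capture stops...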
Or, if that's not the problem, you could try lowering the priority of this thread:
[NSThread setThreadPriority:0.25];
Copy the contents of the camera frame into a dedicated buffer and operate on it from there. This results in a massive speed improvement in my experience. My best guess is that the region of memory where the camera frame is located has special protections that make reading/writing accesses slow.
Check out the memory address of the camera frame data. On my device the camera buffer is at 0x63ac000. That doesn't mean anything to me, except that the other heap objects are in addresses closer to 0x1300000. The lock suggestion did not solve my slowdown, but the memcpy did.

How to release the memory associated with CGImageCreateWithImageInRect

I am using CGImageCreateWithImageInRect() to generate a small image from a background image at runtime; it is displayed on every timer call (0.01 sec). Once I start showing part of the image through CGImageCreateWithImageInRect, the application starts consuming memory at a very high rate and crashes within seconds. Memory consumption reaches 20+ MB and it crashes; normally the application runs at around 2 MB.
Image1 = CGImageCreateWithImageInRect(imageRef, CGRectMake(x, y, w, h));
and after processing I do
CGImageRelease(Image1);
but it is not helping me.
I want to know how to release memory associated with CGImageCreateWithImageInRect.
The answer to your question is in fact CGImageRelease; the pair of lines you have excerpted from your code is correctly balanced. You must be leaking somewhere else.
Actually, that pair of lines is not balanced, since CGImageCreateWithImageInRect() retains the original image. It should be:
CGImageRef Image1 = CGImageCreateWithImageInRect(imageRef, CGRectMake(x, y, w, h));
CGImageRelease(imageRef);
...
CGImageRelease(Image1);
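For what it's worth, a balanced per-tick pattern looks something like this (a sketch with a hypothetical timerFired: callback and imageView; imageRef is the long-lived background image and is released elsewhere, once, when you are finished with it):
- (void)timerFired:(NSTimer *)timer {
    CGImageRef tile = CGImageCreateWithImageInRect(imageRef, CGRectMake(x, y, w, h));
    if (tile == NULL) return;

    UIImage *wrapper = [UIImage imageWithCGImage:tile];  // autoreleased wrapper
    self.imageView.image = wrapper;                      // the view retains what it needs

    CGImageRelease(tile);                                // balances the Create above
}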