How to release memory associated with CGImageCreateWithImageInRect - iPhone

I am using CGImageCreateWithImageInRect() to generate a small image from a background image at runtime; the crop is displayed on every thread callback (every 0.01 sec). Once I start showing part of the image through CGImageCreateWithImageInRect, the application starts consuming memory at a very high rate and crashes within seconds. Memory consumption climbs past 20 MB before the crash, whereas a normal run of the application uses about 2 MB.
Image1 = CGImageCreateWithImageInRect(imageRef, CGRectMake(x, y, w, h));
and after processing I do
CGImageRelease(Image1);
but it is not helping me.
I want to know how to release memory associated with CGImageCreateWithImageInRect.

The answer to your question is in fact CGImageRelease; the pair of lines you have excerpted from your code are correctly balanced. You must be leaking somewhere else.

Actually, that pair of lines is not balanced, since CGImageCreateWithImageInRect() retains the original image. It should be…
CGImageRef Image1 = CGImageCreateWithImageInRect(imageRef, CGRectMake(x, y, w, h));
CGImageRelease(imageRef);
...
CGImageRelease(Image1);

Related

CoreGraphics memory warnings and crash; Instruments show no memory leak

UPDATE This piece of code is actually not where the problem is; commenting out all the CoreGraphics lines and returning the first image in the array as the result does not prevent the crashes from happening, so I must look farther upstream.
I am running this on a 75ms NSTimer. It works perfectly with 480x360 images, and will run all day long without crashing.
But when I send it images that are 1024x768, it will crash after about 20 seconds, having given several low memory warnings.
In both cases Instruments shows absolutely normal memory usage: a flat allocations graph, less than one megabyte of live bytes, no leaks the whole time.
So, what's going on? Is Core Graphics somehow using too much memory without showing it?
Also worth mentioning: there aren't that many images in (NSMutableArray*)imgs -- usually three, sometimes two or four. It crashes regardless, though slightly less quickly when there are only two.
- (UIImage*) imagefromImages:(NSMutableArray*)imgs andFilterName:(NSString*)filterName {
    UIImage *tmpResultant = [imgs objectAtIndex:0];
    CGSize s = [tmpResultant size];
    UIGraphicsBeginImageContext(s);
    [tmpResultant drawInRect:CGRectMake(0, 0, s.width, s.height) blendMode:kCGBlendModeNormal alpha:1.0];
    for (int i = 1; i < [imgs count]; i++) {
        [[imgs objectAtIndex:i] drawInRect:CGRectMake(0, 0, s.width, s.height) blendMode:kCGBlendModeMultiply alpha:1.0];
    }
    tmpResultant = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return tmpResultant;
}
Sounds to me like the problem is outside of the code you have shown. Images that are displayed on screen have a backing store outside of your app's memory that is width*height*bytes_per_pixel. You also get memory warnings and app termination if you have too many backing stores.
You might need to optimize there, to either create smaller optimized versions of these images for display or to allow for the backing stores to be released. Also turning on rasterization for certain non-changing layers can help here as well as setting layer contents directly to the CGImage as opposed to working with UIImages.
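For that last point, here is a rough sketch, not from the question's code, of handing a CGImage straight to a layer and rasterizing it; the layer name, parent view, image variable, and sizes are placeholders:
#import <QuartzCore/QuartzCore.h>

// Hypothetical layer and CGImage; names are placeholders.
CALayer *imageLayer = [CALayer layer];
imageLayer.frame = CGRectMake(0, 0, 480, 360);

// Assign the layer contents directly to the CGImage instead of wrapping it
// in a UIImage/UIImageView, so no extra image copies hang around.
imageLayer.contents = (id)croppedImage;   // croppedImage is a CGImageRef you own

// For layers whose pixels never change, rasterize once and let Core Animation
// reuse the cached bitmap instead of redrawing.
imageLayer.shouldRasterize = YES;
imageLayer.rasterizationScale = [UIScreen mainScreen].scale;

[parentView.layer addSublayer:imageLayer];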
You should make a sample project that demonstrates the issue with no other code around it and see if you still run out of memory. I suspect you'll find that with just the code you have shown you will not be able to reproduce the issue, as it lies elsewhere.

Memory consumption spikes when resizing, rotating and cropping images on iPhone

I have an "ImageManipulator" class that performs some cropping, resizing and rotating of camera images on the iPhone.
At the moment, everything works as expected but I keep getting a few huge spikes in memory consumption which occasionally cause the app to crash.
I have managed to isolate the problem to a part of the code where I check for the current image orientation property and rotate it accordingly to UIImageOrientationUp. I then get the image from the bitmap context and save it to disk.
This is currently what I am doing:
CGAffineTransform transform = CGAffineTransformIdentity;
// Check for orientation and set transform accordingly...
transform = CGAffineTransformTranslate(transform, self.size.width, 0);
transform = CGAffineTransformScale(transform, -1, 1);
// Create a bitmap context with the image that was passed so we can perform the rotation
CGContextRef ctx = CGBitmapContextCreate(NULL, self.size.width, self.size.height,
                                         CGImageGetBitsPerComponent(self.CGImage), 0,
                                         CGImageGetColorSpace(self.CGImage),
                                         CGImageGetBitmapInfo(self.CGImage));
// Rotate the context
CGContextConcatCTM(ctx, transform);
// Draw the image into the context
CGContextDrawImage(ctx, CGRectMake(0,0,self.size.height,self.size.width), self.CGImage);
// Grab the bitmap context and save it to the disk...
Even after trying to scale the image down to half or even a quarter of the size, I am still seeing the spikes, so I am wondering if there is a different / more efficient way to get the rotation done than the above?
Thanks in advance for the replies.
Rog
If you are saving to JPEG, I guess an alternative approach is to save the image as-is and then set the rotation to whatever you'd like by manipulating the EXIF metadata? See for example this post. Simple but probably effective, even if you have to hold the image payload bytes in memory ;)
Things you can do:
Scale down the image even more (which you probably don't want)
Remember to release everything as soon as you finish with it
Live with it
I would choose options 2 and 3.
Image editing is very resource intensive, as it loads the entire raw, uncompressed image data into memory for processing. This is inevitable, since there is no way to modify an image other than to load the complete raw data into memory. Memory consumption spikes don't really matter unless the app receives a memory warning; in that case, quickly get rid of everything before it crashes. It is rare that you would get a memory warning, though; my app regularly loads a single >10 MB file into memory and I don't get a warning, even on older devices. So you'll be fine with the spikes.
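If you do want to react to a warning rather than just hope it never comes, a minimal sketch would be something like the following (imageCache here is a hypothetical array of intermediate images, not something from your code):
// In the view controller that does the editing. imageCache is a hypothetical
// NSMutableArray of intermediate UIImages kept around for reuse.
- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];

    // Drop anything that can be regenerated later, before the system
    // kills the app for using too much memory.
    [self.imageCache removeAllObjects];
}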
Have you tried checking for memory leaks and analyzing allocations?
If the image is still too big, try rotating the image in pieces instead of as a whole.
As Anomie mentioned, CGBitmapContextCreate creates a context. We should release that by using
CGContextRelease(ctx);
If you have any other objects created with a Create or Copy function, those should also be released. If it is a CFData, then
CFRelease(cfdata);
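Putting that together with the snippet from the question, a sketch of the full create/draw/extract/release cycle might look like this (the orientation check and the actual save to disk are elided, as in the question):
// Build the rotated bitmap, then release everything that was Create-d here.
CGContextRef ctx = CGBitmapContextCreate(NULL, self.size.width, self.size.height,
                                         CGImageGetBitsPerComponent(self.CGImage), 0,
                                         CGImageGetColorSpace(self.CGImage),
                                         CGImageGetBitmapInfo(self.CGImage));
CGContextConcatCTM(ctx, transform);
CGContextDrawImage(ctx, CGRectMake(0, 0, self.size.height, self.size.width), self.CGImage);

// The extracted CGImage comes from a Create function, so it must be released too.
CGImageRef rotated = CGBitmapContextCreateImage(ctx);
UIImage *result = [UIImage imageWithCGImage:rotated];

CGContextRelease(ctx);
CGImageRelease(rotated);

// ... write `result` out to disk; it is autoreleased, so nothing else to release here ...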

Program received signal EXC_BAD_ACCESS accessing array

I am using the routine for getting pixel colour (this one: http://www.markj.net/iphone-uiimage-pixel-color/ ) and am faced with frequent app crashes when using it. The relevant portion of the code:
unsigned char* data = CGBitmapContextGetData(cgctx);
if (data != NULL) {
    int offset = (some calculations here);
    int alpha = data[offset]; // <<<< crashes here
}
This code is hooked up to run on touchesBegan, touchesEnded and touchesMoved. It appears that the crashes occur during touchesEnded and touchesMoved events only, particularly when I start the touch on the target image but move it off the boundaries of the image in the process.
Is there any way to check what is the size of the data in the array pointed to by data object? What could be going wrong there?
Edit:
The calculation of offset:
int offset = 4*((w*round(point.y)*x)+round(point.x)*x);
Where point is the point where the touch occurs, w is the width of the image, and x is the scale of the image (for hi-res images on retina displays).
I don't see anything wrong with the cgctx either. Basically, I am running the code from the link above almost unmodified, the particular code snippet I have problems with is in the function (UIColor*) getPixelColorAtLocation:(CGPoint)point so if you want the details of what the code does, just read the source there.
Edit: another thing is that this never happens in the simulator, but often happens when testing on a device.
Edit: Ideally I'd want to do nothing if the finger is currently not over the image, but I am having trouble figuring out when that happens. It looks like the relevant methods in the SDK only show what view the touch originated in, not where it is now. How can I figure that out?
You didn't show all your work. Your offset calculation is likely returning either a negative number or a number well beyond the end of the buffer. Since CG* APIs often allocate rather large chunks of memory, often memory mapped, it is quite likely that the addresses before and after the allocation are unallocated/unmapped and, thus, access outside of the buffer leads to an immediate crash (as opposed to returning garbage).
Which is good. Easier to debug.
You did provide a clue:
move it off the boundaries of the image in the process
I'd guess you modified the offset calculation to take the location of the touch. And that location has moved beyond the bounds of the image and, thus, leads to a nonsense offset and a subsequent crash.
Fix that, and your app will stop crashing here.
Does your image exactly occupy the entire bounds of the item being touched? I.e. does the thing handling the touches*: events have a bounds whose width and height are exactly the same as the image?
If not, you need to make sure you are correctly translating the coordinates from whatever is handling the touches to coordinates within the image. Also note that the layout of bytes in an image is heavily dependent on exactly how the image was created and what internal color model it is using.
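As a minimal sketch of that guard (assuming point has already been converted into the image's coordinate space, e.g. with locationInView:, and that w and h are the bitmap's pixel width and height; the retina scale factor from the question is ignored here for simplicity):
// Bail out instead of indexing outside the bitmap buffer.
if (point.x < 0 || point.y < 0 || point.x >= w || point.y >= h) {
    return nil;   // the finger has moved off the image, nothing to sample
}

int offset = 4 * ((w * (int)round(point.y)) + (int)round(point.x));
int alpha = data[offset];
int red   = data[offset + 1];
int green = data[offset + 2];
int blue  = data[offset + 3];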

UIImageJPEGRepresentation - memory release issue

On an iPhone app, I need to send a JPEG by mail with a maximum size of 300 KB (I don't know the maximum size Mail.app can handle, but that's another problem). To do that, I'm trying to decrease the quality until I obtain an image under 300 KB.
In order to obtain the right quality value (compressionLevel) that gives me a JPEG under 300 KB, I have made the following loop.
It works, but each time the loop is executed, memory increases by the original size of my JPEG (700 KB), despite the "[tmpImage release];".
float compressionLevel = 1.0f;
int size = 300001;
while (size > 300000) {
    UIImage *tmpImage = [[UIImage alloc] initWithContentsOfFile:[self fullDocumentsPathForTheFile:@"imageToAnalyse.jpg"]];
    size = [UIImageJPEGRepresentation(tmpImage, compressionLevel) length];
    [tmpImage release];
    // In the following line, the 0.001f decrement is chosen just to test the increase in memory
    //compressionLevel = compressionLevel - 0.001f;
    NSLog(@"Compression: %f", compressionLevel);
}
Any ideas about how I can get rid of it, or why it happens?
thanks
At the very least, there's no point in allocating and releasing the image on every trip through the loop. It shouldn't leak memory, but it's unnecessary, so move the alloc/init and release out of the loop.
Also, the data returned by UIImageJPEGRepresentation is auto-released, so it'll hang around until the current release pool drains (when you get back to the main event loop). Consider adding:
NSAutoreleasePool* p = [[NSAutoreleasePool alloc] init];
at the top of the loop, and
[p drain]
at the end. That way you'll not be leaking all of the intermediate memory.
And finally, doing a linear search for the optimal compression setting is probably pretty inefficient. Do a binary search instead.
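A sketch of that combination (alloc/init hoisted out of the loop, an autorelease pool per iteration, and a binary search on compressionLevel; the file name and the 300 KB limit are taken from the question, everything else is illustrative):
UIImage *tmpImage = [[UIImage alloc] initWithContentsOfFile:
                        [self fullDocumentsPathForTheFile:@"imageToAnalyse.jpg"]];

float lo = 0.0f, hi = 1.0f;
NSData *best = nil;

// Eight halvings pin the quality down to about 1/256.
for (int i = 0; i < 8; i++) {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    float mid = (lo + hi) / 2.0f;
    NSData *candidate = UIImageJPEGRepresentation(tmpImage, mid);

    if ([candidate length] <= 300000) {
        // Small enough: keep it and try a higher quality next pass.
        [best release];
        best = [candidate retain];
        lo = mid;
    } else {
        hi = mid;
    }

    [pool drain];   // throw away this pass's intermediate JPEG data
}

[tmpImage release];
// best (if non-nil) is the largest representation found under 300 KB;
// the caller owns it and is responsible for releasing it.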

Why is scaling down a UIImage from the camera so slow?

Resizing a camera UIImage returned by the UIImagePickerController takes a ridiculously long time if you do it the usual way, as in this post.
[update: last call for creative ideas here! my next option is to go ask Apple, I guess.]
Yes, it's a lot of pixels, but the graphics hardware on the iPhone is perfectly capable of drawing lots of 1024x1024 textured quads onto the screen in 1/60th of a second, so there really should be a way of resizing a 2048x1536 image down to 640x480 in a lot less than 1.5 seconds.
So why is it so slow? Is the underlying image data the OS returns from the picker somehow not ready to be drawn, so that it has to be swizzled in some fashion that the GPU can't help with?
My best guess is that it needs to be converted from RGBA to ABGR or something like that; can anybody think of a way that it might be possible to convince the system to give me the data quickly, even if it's in the wrong format, and I'll deal with it myself later?
As far as I know, the iPhone doesn't have any dedicated "graphics" memory, so there shouldn't be a question of moving the image data from one place to another.
So, the question: is there some alternative drawing method besides just using CGBitmapContextCreate and CGContextDrawImage that takes more advantage of the GPU?
Something to investigate: if I start with a UIImage of the same size that's not from the image picker, is it just as slow? Apparently not...
Update: Matt Long found that it only takes 30 ms to resize the image you get back from the picker in [info objectForKey:@"UIImagePickerControllerEditedImage"], if you've enabled cropping with the manual camera controls. That isn't helpful for the case I care about, where I'm using takePicture to take pictures programmatically. I see that the edited image is kCGImageAlphaPremultipliedFirst but the original image is kCGImageAlphaNoneSkipFirst.
Further update: Jason Crawford suggested CGContextSetInterpolationQuality(context, kCGInterpolationLow), which does in fact cut the time from about 1.5 sec to 1.3 sec, at a cost in image quality--but that's still far from the speed the GPU should be capable of!
Last update before the week runs out: user refulgentis did some profiling which seems to indicate that the 1.5 seconds is spent writing the captured camera image out to disk as a JPEG and then reading it back in. If true, very bizarre.
Seems that you have made several assumptions here that may or may not be true. My experience is different than yours. This method seems to only take 20-30ms on my 3Gs when scaling a photo snapped from the camera to 0.31 of the original size with a call to:
CGImageRef scaled = CreateScaledCGImageFromCGImage([image CGImage], 0.31);
(I get 0.31 by taking the width scale, 640.0/2048.0, by the way)
I've checked to make sure the image is the same size you're working with. Here's my NSLog output:
2009-12-07 16:32:12.941 ImagePickerThing[8709:207] Info: {
UIImagePickerControllerCropRect = NSRect: {{0, 0}, {2048, 1536}};
UIImagePickerControllerEditedImage = <UIImage: 0x16c1e0>;
UIImagePickerControllerMediaType = "public.image";
UIImagePickerControllerOriginalImage = <UIImage: 0x184ca0>;
}
I'm not sure why the difference exists, and I can't answer your question as it relates to the GPU; however, I would consider 1.5 seconds vs. 30 ms a very significant difference. Maybe compare the code in that blog post to what you are using?
Best Regards.
Use Shark, profile it, figure out what's taking so long.
I have to work a lot with MediaPlayer.framework and when you get properties for songs on the iPod, the first property request is insanely slow compared to subsequent requests, because in the first property request MobileMediaPlayer packages up a dictionary with all the properties and passes it to my app.
I'd be willing to bet that there is a similar situation occurring here.
EDIT: I was able to do a time profile in Shark of both Matt Long's UIImagePickerControllerEditedImage situation and the generic UIImagePickerControllerOriginalImage situation.
In both cases, a majority of the time is taken up by CGContextDrawImage. In Matt Long's case, the UIImagePickerController takes care of this in between the user capturing the image and the image entering 'edit' mode.
Scaling so that the time taken by CGContextDrawImage = 100%: CGContextDelegateDrawImage then takes 100%, then ripc_DrawImage (from libRIP.A.dylib) takes 100%, and then ripc_AcquireImage (which appears to decompress the JPEG, spending most of its time in _cg_jpeg_idct_islow, vec_ycc_bgrx_convert, decompress_onepass and sep_upsample) takes 93% of the time. Only 7% of the time is actually spent in ripc_RenderImage, which I assume is the actual drawing.
I have had the same problem and banged my head against it for a long time. As far as I can tell, the first time you access the UIImage returned by the image picker, it's just slow. As an experiment, try timing any two operations with the UIImage--e.g., your scale-down, and then UIImageJPEGRepresentation or something. Then switch the order. When I've done this in the past, the first operation gets a time penalty. My best hypothesis is that the memory is still on the CCD somehow, and transferring it into main memory to do anything with it is slow.
When you set allowsImageEditing=YES, the image you get back is resized and cropped down to about 320x320. That makes it faster, but it's probably not what you want.
The best speedup I've found is:
CGContextSetInterpolationQuality(context, kCGInterpolationLow)
on the context you get back from CGBitmapContextCreate, before you do CGContextDrawImage.
The problem is that your scaled-down images might not look as good. However, if you're scaling down by an integer factor--e.g., 1600x1200 to 800x600--then it looks OK.
Here's a git project that I've used and it seems to work well. The usage is pretty clean as well - one line of code.
https://github.com/AliSoftware/UIImage-Resize
DO NOT USE CGBitmapContextCreate in this case! I spent almost a week in the same situation you are in. Performance will be absolutely terrible and it will eat up memory like crazy. Use UIGraphicsBeginImageContext instead:
// create a new CGImage of the desired size
UIGraphicsBeginImageContext(desiredImageSize);
CGContextRef c = UIGraphicsGetCurrentContext();
// clear the new image
CGContextClearRect(c, CGRectMake(0,0,desiredImageSize.width, desiredImageSize.height));
// draw in the image
CGContextDrawImage(c, rect, [image CGImage]);
// return the result to our parent controller
UIImage * result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
In the above example (from my own image resize code), "rect" is significantly smaller than the image. The code above runs very fast, and should do exactly what you need.
I'm not entirely sure why UIGraphicsBeginImageContext is so much faster, but I believe it has something to do with memory allocation. I've noticed that this approach requires significantly less memory, implying that the OS has already allocated space for an image context somewhere.