Memory issues with UIImage when loading 200 images - iPhone

I'm trying to run an animation by changing the images of my UIImageView. I need about 200 images of 24 KB each to create a 5-second animation. I am able to load all the images into memory (into an NSArray), but when I start the animation (switching the UIImage of the UIImageView), after about 60 images I get a memory warning, and if I continue displaying images the app crashes.

Just because your image files are 24 KB on disk doesn't mean that's the amount of memory they will take up.
If you have an image that is 480x960 with 1 byte per pixel, it may be only a small file on disk due to compression (JPEG, for example), but when it is in memory in your app it will be about 450 KB. Multiply that by 60 (the point at which you get the memory warning) and you will see that is approximately 27 MB.
If your images are larger, or have a greater colour depth, then obviously they will consume more memory. I think I read once that iOS gives you a memory warning when you hit 22 MB, but that includes other memory allocated to your app for other things as well.
And just because your app "loads" the images into the array doesn't mean the image data is actually loaded, or decompressed, into memory until it is really needed.
So, to calculate how much memory your images are going to use, don't look at the file size; work it out from the image dimensions instead.
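
For illustration (this is not from the original answer), here is a minimal sketch of that rule of thumb, assuming the usual 4 bytes per pixel for a decoded image:

#import <UIKit/UIKit.h>

// Rough estimate of the decompressed footprint of an image, based on its
// pixel dimensions rather than its file size on disk.
static size_t EstimatedDecodedSize(UIImage *image)
{
    CGImageRef cgImage = [image CGImage];
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    return width * height * 4;   // assumes 32-bit RGBA once decoded
}

// For example, 200 frames of 480x960 at 4 bytes per pixel come to roughly
// 350 MB decoded - far more than the sum of the 24 KB files on disk.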

Related

How to reduce image size captured from cam

The picture taken from the iPhone camera is nearly 2.5 MB. How do I reduce this size? I have tried UIImageJPEGRepresentation(image, 0.1f), but it does not affect the size.
You really can't reduce the size an image takes up in memory.
When an image is loaded into a UIImage object, its size will be width x height x 4 bytes. That is the size an uncompressed image takes up in memory.
Even though the files you load may be compressed, every image, once loaded into a UIImage, will be uncompressed.
If you really need to save some memory, save the image to disk and create a thumbnail which you use in your app. Then, when needed, you can load the larger image and use it.
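
As a rough illustration of the thumbnail idea (not part of the original answer), a sketch using a plain UIKit drawing context, where thumbnailSize is a hypothetical target size you pick for your UI:

// Draws the full image down into a small bitmap; keep the thumbnail in memory
// and write the original to disk, loading it again only when needed.
- (UIImage *)thumbnailForImage:(UIImage *)image size:(CGSize)thumbnailSize
{
    UIGraphicsBeginImageContextWithOptions(thumbnailSize, YES, 0.0);
    [image drawInRect:CGRectMake(0, 0, thumbnailSize.width, thumbnailSize.height)];
    UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return thumbnail;
}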
Try using the Resize method in UIImage+Resize.h
https://github.com/AliSoftware/UIImage-Resize
[aImgView setImage:[ImageObjectFromPicker resizedImageWithContentMode:UIViewContentModeScaleAspectFit bounds:YourSize interpolationQuality:kCGInterpolationHigh]];

Objective C (iPhone): CGContextDrawImage is too slow

I'm writing a program that does various types of image processing while capturing pictures at a rate of 15 FPS. When I comment out the code that draws the images and leave in only the processing, I find I can run at a maximum rate of 13/14 FPS.
However, upon calling CGContextDrawImage 6 times in a row (6 different images), my drawing rate drops to 6/7 FPS. I was wondering if anyone knows an alternative to CGContextDrawImage such that drawing the images takes minimal time.
Scale the images to the right size and/or render intermediates to an offscreen cached context (e.g. composite and merge multiple images), which can be copied cheaply - see the sketch below.
Make sure your images use an optimal layout, assuming you render them multiple times.
Only draw when needed.
Profile to see what takes the most time.
Determine what actually needs to be drawn: if you have 6 images and they overlap, do not draw the portions which are not visible.
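
As a sketch of the offscreen cached context suggestion (my own illustration, with hypothetical image array and size parameters): composite the six images once into a single bitmap, then draw only the cached result each frame.

// Builds one composite CGImage from several source images. The expensive
// CGContextDrawImage calls happen once here, not on every frame.
- (CGImageRef)newCompositeOfImages:(NSArray *)images size:(CGSize)size
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL,
                                             (size_t)size.width, (size_t)size.height,
                                             8, 0, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    for (UIImage *image in images) {
        CGContextDrawImage(ctx, CGRectMake(0, 0, size.width, size.height),
                           [image CGImage]);
    }
    CGImageRef composite = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return composite;   // caller releases; draw this single image per frame
}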

Major speed issues with imageWithContentsOfFile

In my application I'm creating a large image dynamically and then loading it up for display in my image explorer class. Because I can't add new images to the bundle at run time, it seems I have to use imageWithContentsOfFile - however, this gives me major speed issues further down the line.
The way my image explorer works is that it takes in an image, splits it up into tiles, caches those tiles and then only loads those tiles into memory for display that need to be shown on the screen. Using a bunch of NSLogs, I've managed to find out where all the slowdown is. It's not in the imageWithContentsOfFile function itself, it's when I try to call this line:
CGContextDrawImage(context_ref,
CGRectMake(0, 0, imgWidth, imgHeight), tileImage);
This is when I'm writing the tile to the cache file. tileImage is a CGImageRef that is returned from CGImageCreateWithImageInRect, which is how I get subsets of my larger image to save separately.
The odd thing is that splitting up a large image this way takes about 45 seconds (!), but when I split up an image from the bundle using imageNamed rather than imageWithContentsOfFile, it takes only about 2 seconds.
Anyone have any ideas? Thanks in advance :)
I think you should split up your image.
CGContextDrawImage works with the fully loaded tileImage: if your tileImage is 8 MB, your app must load 8 MB of data into memory.
That takes a long time, and it may create memory issues and so on.
If you want to use a single big image and you can wait for it to load, one solution is to do the loading on another thread.
That avoids locking the UI while the big image loads.
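
A minimal sketch of the background-thread idea, assuming GCD (iOS 4+), with a hypothetical path and imageView property:

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Slow load/decode happens off the main thread.
    UIImage *bigImage = [UIImage imageWithContentsOfFile:path];
    dispatch_async(dispatch_get_main_queue(), ^{
        // UIKit work stays on the main thread.
        self.imageView.image = bigImage;
    });
});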
An 8 MB JPEG image will use well over 8 MB of memory once loaded, because UIImage keeps an uncompressed representation for fast drawing.
imageNamed uses caching, and may reduce the amount of scaling.
UIImage is immutable. imageNamed can take advantage of this and return a reference to a cached image, rather than loading and creating a new image each time... wherever you load your image.
If you create the images yourself, you can set up your own (in-memory) caching scheme and pass references around in many cases, then purge the cache when you receive a memory warning (see the sketch below).
If you need to scale the image and the size is static, determine the size to draw and create a UIImage using imageWithCGImage:scale:orientation:, or you can approach the problem in a similar way using the CoreGraphics APIs directly.
Beyond that, hold onto and reuse what you need, and use a profiler to balance your allocations and to measure timings.
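
A sketch of the in-memory caching idea, using NSCache (iOS 4+); the class name and path-based keys here are hypothetical, not part of the original answer:

#import <UIKit/UIKit.h>

@interface TileCache : NSObject {
    NSCache *cache;
}
- (UIImage *)tileAtPath:(NSString *)path;
@end

@implementation TileCache
- (id)init
{
    if ((self = [super init])) {
        cache = [[NSCache alloc] init];
        // NSCache already evicts under pressure, but purge explicitly on warnings too.
        [[NSNotificationCenter defaultCenter]
            addObserver:self selector:@selector(purge)
                   name:UIApplicationDidReceiveMemoryWarningNotification object:nil];
    }
    return self;
}

- (UIImage *)tileAtPath:(NSString *)path
{
    UIImage *tile = [cache objectForKey:path];
    if (tile == nil) {
        tile = [UIImage imageWithContentsOfFile:path];   // load only on a cache miss
        if (tile) [cache setObject:tile forKey:path];
    }
    return tile;
}

- (void)purge
{
    [cache removeAllObjects];   // drop cached tiles when memory is tight
}
@end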

UIImage allocates more memory

In my view controller I created a UIImageView and assigned an image in Interface Builder. While checking in Instruments I see a malloc allocation of 600 KB, and the responsible library is ImageIO_Malloc. But the size of my image is 37 KB. I don't know why it allocates 600 KB.
I have also tried assigning the image in code with UIImage imageNamed. Still no good.
Do you have any idea about this?
600 KB is really not much to allocate for an image. Your 37 KB is probably just the size of the compressed image file. However, when that image needs to be displayed, the image view has to allocate a backing store for it so it can be represented internally in an uncompressed format.
An image with dimensions of 640x480 pixels comes to about 300,000 pixels, each of which needs an R, G, and B, and possibly an alpha value - meaning 3-4 bytes per pixel. So you can easily see allocations on the order of 600 KB, or more, for even fairly small images.
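
If you want to confirm this for your specific image (an illustrative sketch of mine, not from the answer), you can read the decoded bitmap's geometry straight from the backing CGImage:

#import <UIKit/UIKit.h>

// The uncompressed backing store is roughly bytesPerRow * height, which is
// what shows up in Instruments - not the 37 KB file size.
static size_t DecodedBitmapSize(UIImage *image)
{
    CGImageRef cg = [image CGImage];
    return CGImageGetBytesPerRow(cg) * CGImageGetHeight(cg);
}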

Image editing using iphone

I'm creating an image editing application for iPhone. I would like to enable the user to pick an image from the photo library, edit it (grayscale, sepia, etc.) and, if possible, save it back to the filesystem. I've done the image picking (the simplest part, as you know, using the image picker) and also the grayscale conversion. But I got stuck with sepia. I don't know how to implement that. Is it possible to get the value of each pixel of the image so that we can vary them to get the desired effects? Or are there any other possible methods? Please help.
The Apple image picker code will most likely be holding just the file names and some lower-res renderings of the images in RAM until the last moment, when a user selects an image.
When you ask for the full frame buffer of the image, the CPU suddenly has to do a lot more work decoding the image at full resolution, but it might even be as simple as this to trigger it:
CFDataRef CopyImagePixels(CGImageRef inImage)
{
    return CGDataProviderCopyData(CGImageGetDataProvider(inImage));
}

/* IN MAIN APPLICATION FLOW - but see EDIT 2 below */
const char* pixels = [[((NSData*)CopyImagePixels([myImage CGImage]))
    autorelease] bytes]; /* N.B. returned pixel buffer would be read-only */
This is just a guess as to how it works, really, but based on some experience with image processing in other contexts. To work out whether what I suggest makes sense and is good from a memory usage point of view, run Instruments.
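
Since the pixel buffer above is read-only, here is a hedged sketch of one alternative way to get writable pixels for a sepia effect: draw the image into your own RGBA bitmap context and rewrite the buffer in place. This is my own illustration (using commonly quoted sepia weights), not the code above, and for large images the memory caveats quoted below still apply.

UIImage *SepiaImage(UIImage *source)
{
    CGImageRef cg = [source CGImage];
    size_t width  = CGImageGetWidth(cg);
    size_t height = CGImageGetHeight(cg);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cg);

    unsigned char *pixels = CGBitmapContextGetData(ctx);   // writable RGBA buffer
    for (size_t i = 0; i < width * height; i++) {
        unsigned char *p = pixels + i * 4;
        float r = p[0], g = p[1], b = p[2];
        // Common sepia weights; clamp each channel to 255.
        p[0] = (unsigned char)MIN(255.0f, 0.393f * r + 0.769f * g + 0.189f * b);
        p[1] = (unsigned char)MIN(255.0f, 0.349f * r + 0.686f * g + 0.168f * b);
        p[2] = (unsigned char)MIN(255.0f, 0.272f * r + 0.534f * g + 0.131f * b);
    }

    CGImageRef outCG = CGBitmapContextCreateImage(ctx);
    UIImage *result = [UIImage imageWithCGImage:outCG];
    CGImageRelease(outCG);
    CGContextRelease(ctx);
    return result;
}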
The Apple docs say (related, may apply to you):
You should avoid creating UIImage objects that are greater than 1024 x 1024 in size. Besides the large amount of memory such an image would consume, you may run into problems when using the image as a texture in OpenGL ES or when drawing the image to a view or layer. This size restriction does not apply if you are performing code-based manipulations, such as resizing an image larger than 1024 x 1024 pixels by drawing it to a bitmap-backed graphics context. In fact, you may need to resize an image in this manner (or break it into several smaller images) in order to draw it to one of your views.
[ http://developer.apple.com/iphone/library/documentation/UIKit/Reference/UIImage_Class/Reference/Reference.html ]
AND
Note: Prior to iPhone OS 3.0, UIView instances may have a maximum height and width of 1024 x 1024. In iPhone OS 3.0 and later, views are no longer restricted to this maximum size but are still limited by the amount of memory they consume. Therefore, it is in your best interests to keep view sizes as small as possible. Regardless of which version of iPhone OS is running, you should consider using a CATiledLayer object if you need to create views larger than 1024 x 1024 in size.
[ http://developer.apple.com/iPhone/library/documentation/UIKit/Reference/UIView_Class/UIView/UIView.html ]
Also worth noting:-
(a) Official how-to
http://developer.apple.com/iphone/library/qa/qa2007/qa1509.html
(b) From http://cameras.about.com/od/cameraphonespdas/fr/apple-iphone.htm
"The image size uploaded to your computer is at 1600x1200, but if you email the photo directly from the iPhone, the size will be reduced to 640x480."
(c) Encoding large images with JPEG image compression requires large amounts of RAM, depending on the size, possibly larger amounts than are available to the application.
(d) It may be possible to use an alternate compression algorithm with (if necessary) its malloc rewired to use temporary memory mapped files. But consider the data privacy/security issues.
(e) From iPhone SDK: After a certain number of characters entered, the animation just won't load
"I thought it might be a layer size issue, as the iPhone has a 1024 x 1024 texture size limit (after which you need to use a CATiledLayer to back your UIView), but I was able to lay out text wider than 1024 pixels and still have this work."
Sometimes the 1024 pixel limit may appear to be a bit soft, but I would always suggest you program defensively and stay within the 1024 pixel limit if you can.
EDIT 1
Added extra line break in code.
EDIT 2
Oops! The code gets a read-only copy of the data (there is a difference between CFMutableDataRef and CFDataRef). Because of limitations on available RAM, you then have to either make a lower-res copy of it by smooth-scaling it down yourself, or copy it into a modifiable buffer; if the image is large, you may need to write it in bands to a temporary file, release the unmodifiable data block, and load the data back from the file. And only do this, of course, if having the data in a temporary file like this is acceptable. Painful.
EDIT 3
Here's perhaps a better idea: try using a destination bitmap context backed by a CFData block that is memory-mapped. Does that work? Again, only do this if you're happy with the data going via a temporary file.
EDIT 4
Oh no, it appears that memory-mapped read-write CFData is not available. Maybe try the mmap BSD APIs.
EDIT 5
Added "const char*" and "pixels read-only" comment to code.