For iPad, e.g. for a 10 x 10 pixel image, how much memory is used?
Some other context: a normal iPad usually starts with 190-200 MB of app-usable memory, and this number decreases if background processes or other apps are running.
Many thanks!
10 x 10 x 4 = 400 bytes for that image, i.e. 4 bytes per pixel (RGBA).
Also, it is normal for background apps to take some memory. iOS will free memory if it is needed by any app.
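As a rough illustration of that arithmetic (just a sketch, not an API; the function name is made up):

#include <stddef.h>

/* Uncompressed size of a 32-bit RGBA bitmap: width x height x 4 bytes. */
size_t ImageBytes(size_t widthPixels, size_t heightPixels)
{
    return widthPixels * heightPixels * 4;   /* 4 bytes per pixel (RGBA) */
}
/* ImageBytes(10, 10)    ->       400 bytes          */
/* ImageBytes(1024, 768) -> 3,145,728 bytes (~3 MB)  */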
I'm trying to run an animation by changing the images of my UIImageView. I need about 200 images of 24K to create a 5 sec animation. I am able to load all the images into memory (into an NSArray), but when I start the animation (switching the UIImage of the UIImageView), after about 60 images I get a memory warning, and if I continue displaying images the app crashes.
Just because your image files are 24 KB on disk doesn't mean that is the amount of memory they will take up.
If you have an image that is 480x960 with 1 byte per pixel, that may only be a small file size due to compression (jpeg, for example), but when it is in memory in your app, it will be 450KB. Multiply that by 60 (the point at which you get the memory warning) and you will see that is approx 27MB.
If your images are larger, or have a greater colour depth, then obviously they will consume more memory. I think I read once that iOS gives you a memory warning when you hit 22Mb, but that includes other memory allocated to your app for other things as well.
And just because your app "loads" the images into the array doesn't mean they are actually expanded into memory; an image may not be decoded until it is really needed.
So, to calculate how much memory your image is going to use, don't look at the file size, but instead work it out from the image dimensions.
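To make that concrete, here is a sketch of the same estimate (the figures are the ones from the question and answer above; the helper name is made up):

#include <stddef.h>

/* Rough in-memory footprint of an animation: per-frame bitmap size times
   the number of frames, independent of the compressed file sizes on disk. */
size_t AnimationBytes(size_t width, size_t height,
                      size_t bytesPerPixel, size_t frameCount)
{
    return width * height * bytesPerPixel * frameCount;
}
/* AnimationBytes(480, 960, 1, 60) -> 27,648,000 bytes (~27 MB) */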
I've gotten some graphics files for buttons etc. from the designer. Most of the retina files have one or both dimensions odd, like 29 x 30 or 79 x 61, and then the dimensions of the corresponding non-retina files will be 15 x 15 or 39 x 31, for example. The dimensions of the UIImageViews that hold each image exactly match the size of the non-retina files they hold, so on a non-retina phone there is no distortion and everything looks fine.
On a retina phone, these images (icons and such) only look fine when the images happen to be even dimensions (like 30 x 30 or 46 x 80); when there's an odd dimension to the image, it gets distorted slightly.
Should the pixel dimensions of a retina image always be twice the size of the non-retina dimensions, and of the dimensions of the frame that displays it?
As the name (@2x) implies, it is indeed assumed that the retina version is exactly twice the size of the non-retina version. Otherwise, as you have seen, there might be distortions.
On a side note, this is only indirectly related to the frame that displays the image; think of scroll views, for example.
Ask your designer to always design the UI (not necessarily the components themselves) for the non-retina version first, and then just double up the sizes for the retina version. This way, you won't run into distortion problems. If he designs retina-first and then scales all components back to half their size, he will likely end up with odd dimensions.
Oh, and give your designer this link:
http://www.smashingmagazine.com/2010/11/17/designing-for-iphone-4-retina-display-techniques-and-workflow/
Yes, image files that have @2x appended should be exactly double the size of the non-retina image, and thus should only have even dimensions.
It would appear so.
When you create a view which is 30 points by 30 points on a regular display the backing store (the data that gets drawn on the screen) will be created 30 pixels by 30 pixels.
On a retina display that backing store is simply multiplied by a scale factor. Currently that scale factor is 2 for iPhone 4 and iPhone 4s. This means that the backing stores on retina displays will always be a multiple of 2.
Your 30 point by 30 point view would have a 60 pixel by 60 pixel backing store. If your images aren't drawn properly for retina displays, it would seem that the @2x image needs to be the full size of the backing store, and hence exactly double the size of the view in points.
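A minimal sketch of that point/pixel relationship (the helper name is made up, and it assumes the image is meant to fill the view 1:1):

#import <UIKit/UIKit.h>

/* An image intended to fill a view exactly should have pixel dimensions
   equal to the view's size in points multiplied by the screen scale
   (2.0 on retina devices). */
static BOOL ImageMatchesBackingStore(UIImage *image, UIView *view)
{
    CGFloat scale = [UIScreen mainScreen].scale;            /* 1.0 or 2.0 */
    CGSize viewPixels = CGSizeMake(view.bounds.size.width * scale,
                                   view.bounds.size.height * scale);
    /* image.size is in points; multiply by image.scale to get pixels. */
    CGSize imagePixels = CGSizeMake(image.size.width * image.scale,
                                    image.size.height * image.scale);
    return CGSizeEqualToSize(viewPixels, imagePixels);
}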
Does iPhone automatically use bicubic resizing, or bicubic sharpening?
Say I have an image that will be 100 x 100 in the iPhone. I know I should make it 200 x 200 and append @2x.png so iPhone 4 high resolution screens can use that version. Assuming file size makes no difference (a huge assumption, I know)... would it matter if I just use an 800 x 800 image and let the iPhone do the resizing itself, rather than me manually resizing an 800 x 800 image to 200 x 200? What difference would it make?
It would consume unnecessary memory and processor cycles.
Addition: Perceived image quality is a subjective issue, so why don't you try it and see? Be sure to test it on older devices. If you make an exact pixel double, quadruple, or (in the case of 800 x 800 down to 100 x 100) octuple version, the downsizing is more straightforward and that may help. Frankly, there are few developers who don't care about all three of disk space, memory, and processor cycles, all of which relate to UI performance and thus end user experience. Exporting images at the correct sizes is not that difficult a step in professional development, so you're not likely to get many more answers here, as it isn't something most people would contemplate.
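If you do end up with only the oversized image at runtime, one option is to scale it down once yourself rather than letting every draw resample it. A minimal sketch (the helper name is made up):

#import <UIKit/UIKit.h>

/* Scale an image down once to the target size in points; passing 0.0 as the
   scale uses the device's screen scale, so the backing store is sized
   correctly for retina and non-retina displays. */
static UIImage *ScaledImage(UIImage *source, CGSize targetSizeInPoints)
{
    UIGraphicsBeginImageContextWithOptions(targetSizeInPoints, NO, 0.0);
    [source drawInRect:CGRectMake(0, 0, targetSizeInPoints.width,
                                        targetSizeInPoints.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}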
I have an image taken by my iPod touch 4 that's 720x960. In the simulator, calling CGImageGetBytesPerRow() on the image returns 2880 (720 * 4 bytes), which is what I expected. However, on the device, CGImageGetBytesPerRow() returns 3840, the "bytes per row" along the height. Does anyone know why the behavior is different, even though the image I'm calling CGImageGetBytesPerRow() on has a width of 720 and a height of 960 in both cases?
Thanks in advance.
Bytes per row can be anything as long as it is sufficient to hold the image bounds, so best not to make assumptions that it will be the minimum to fit the image.
I would guess that on the device, bytes per row is dictated by some or other optimisation or hardware consideration: perhaps an image buffer that does not have to be changed if the orientation is rotated, or the image sensor transfers extra bytes of dead data per row that are then ignored instead of doing a second transfer into a buffer with minimum bytes per row, or some other reason that would only make sense if we knew the inner workings of these devices.
It may be slightly different because of the internal memory allocation: "The number of bytes used in memory for each row of the specified bitmap image (or image mask)."
Consider using NSBitmapImageRep for some special tasks.
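To illustrate why you shouldn't assume that bytes per row equals width * 4, here's a minimal sketch of walking a 32-bit pixel buffer using the stride reported by CGImageGetBytesPerRow() (the function name is made up, and it assumes a 4-bytes-per-pixel format):

#include <CoreGraphics/CoreGraphics.h>

/* Walk every pixel of a 32-bit image, using the reported stride rather
   than assuming width * 4: each row may be padded. */
static void EnumeratePixels(CGImageRef image)
{
    CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(image));
    const UInt8 *base = CFDataGetBytePtr(data);
    size_t width = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    size_t bytesPerRow = CGImageGetBytesPerRow(image);  /* may include padding */

    for (size_t y = 0; y < height; y++) {
        const UInt8 *row = base + y * bytesPerRow;      /* stride, not width * 4 */
        for (size_t x = 0; x < width; x++) {
            const UInt8 *pixel = row + x * 4;           /* 4 bytes per pixel */
            /* pixel[0..3] are the channel values, in whatever order the
               image's bitmap info specifies. */
            (void)pixel;
        }
    }
    CFRelease(data);
}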
I'm creating an image editing application for iPhone. I would like to enable the user to pick an image from the photo library, edit it (grayscale, sepia, etc.) and, if possible, save it back to the filesystem. I've done the image picking (the simplest part, as you know, using the image picker) and also the grayscale conversion, but I got stuck with sepia. I don't know how to implement it. Is it possible to get the values of each pixel of the image so that I can vary them to get the desired effects? Or are there any other possible methods? Please help...
The Apple image picker code will most likely be holding just the file names and some lower-res renderings of the images in RAM until the last moment, when a user selects an image.
When you ask for the full frame buffer of the image, the CPU suddenly has to do a lot more work decoding the image at full resolution, but it might be as simple as this to trigger it:
/* Returns a copy of the image's raw pixel data. Per the Core Foundation
   "Copy" rule, the caller owns the returned CFDataRef. */
CFDataRef CopyImagePixels(CGImageRef inImage)
{
    return CGDataProviderCopyData(CGImageGetDataProvider(inImage));
}

/* IN MAIN APPLICATION FLOW - but see EDIT 2 below */
/* Toll-free bridge the CFDataRef to NSData, autorelease it (pre-ARC code),
   and grab a pointer to the raw bytes. */
const char* pixels = [[((NSData*)CopyImagePixels([myImage CGImage]))
    autorelease] bytes]; /* N.B. returned pixel buffer would be read-only */
This is just a guess as to how it works, really, but based on some experience with image processing in other contexts. To work out whether what I suggest makes sense and is good from a memory usage point of view, run Instruments.
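Since the buffer returned above is read-only, if you want to actually modify pixels (for the sepia effect asked about), one approach is to render the image into a bitmap context you own. This is my own sketch, not part of the original answer; it uses one commonly quoted sepia weighting and assumes an opaque RGBA image:

#import <UIKit/UIKit.h>

/* Draw the image into a bitmap context we own, so the pixel buffer is
   writable, then apply a sepia weighting to each pixel. */
static UIImage *SepiaImage(UIImage *source)
{
    CGImageRef cgImage = source.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

    UInt8 *pixels = CGBitmapContextGetData(ctx);
    size_t stride = CGBitmapContextGetBytesPerRow(ctx);   /* may be padded */
    for (size_t y = 0; y < height; y++) {
        UInt8 *row = pixels + y * stride;
        for (size_t x = 0; x < width; x++) {
            UInt8 *p = row + x * 4;                       /* RGBA */
            float r = p[0], g = p[1], b = p[2];
            p[0] = (UInt8)MIN(255.0f, 0.393f * r + 0.769f * g + 0.189f * b);
            p[1] = (UInt8)MIN(255.0f, 0.349f * r + 0.686f * g + 0.168f * b);
            p[2] = (UInt8)MIN(255.0f, 0.272f * r + 0.534f * g + 0.131f * b);
            /* alpha (p[3]) left untouched */
        }
    }

    CGImageRef outImage = CGBitmapContextCreateImage(ctx);
    UIImage *result = [UIImage imageWithCGImage:outImage];
    CGImageRelease(outImage);
    CGContextRelease(ctx);
    return result;
}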
The Apple docs say (related, may apply to you):
You should avoid creating UIImage objects that are greater than 1024 x 1024 in size. Besides the large amount of memory such an image would consume, you may run into problems when using the image as a texture in OpenGL ES or when drawing the image to a view or layer. This size restriction does not apply if you are performing code-based manipulations, such as resizing an image larger than 1024 x 1024 pixels by drawing it to a bitmap-backed graphics context. In fact, you may need to resize an image in this manner (or break it into several smaller images) in order to draw it to one of your views.
[ http://developer.apple.com/iphone/library/documentation/UIKit/Reference/UIImage_Class/Reference/Reference.html ]
AND
Note: Prior to iPhone OS 3.0, UIView instances may have a maximum height and width of 1024 x 1024. In iPhone OS 3.0 and later, views are no longer restricted to this maximum size but are still limited by the amount of memory they consume. Therefore, it is in your best interests to keep view sizes as small as possible. Regardless of which version of iPhone OS is running, you should consider using a CATiledLayer object if you need to create views larger than 1024 x 1024 in size.
[ http://developer.apple.com/iPhone/library/documentation/UIKit/Reference/UIView_Class/UIView/UIView.html ]
Also worth noting:-
(a) Official how-to
http://developer.apple.com/iphone/library/qa/qa2007/qa1509.html
(b) From http://cameras.about.com/od/cameraphonespdas/fr/apple-iphone.htm
"The image size uploaded to your computer is at 1600x1200, but if you email the photo directly from the iPhone, the size will be reduced to 640x480."
(c) Encoding large images with JPEG image compression requires large amounts of RAM, depending on the size, possibly larger amounts than are available to the application.
(d) It may be possible to use an alternate compression algorithm with (if necessary) its malloc rewired to use temporary memory mapped files. But consider the data privacy/security issues.
(e) From iPhone SDK: After a certain number of characters entered, the animation just won't load
"I thought it might be a layer size issue, as the iPhone has a 1024 x 1024 texture size limit (after which you need to use a CATiledLayer to back your UIView), but I was able to lay out text wider than 1024 pixels and still have this work."
Sometimes the 1024 pixel limit may appear to be a bit soft, but I would always suggest you program defensively and stay within the 1024 pixel limit if you can.
EDIT 1
Added extra line break in code.
EDIT 2
Oops! The code gets a read-only copy of the data (there is a difference between CFMutableDataRef and CFDataRef). Because of limitations on available RAM, you then have to make a lower-res copy of it by smooth-scaling it down yourself, or copy it into a modifiable buffer. If the image is large, you may need to write it in bands to a temporary file, release the unmodifiable data block, and load the data back from the file. And only do this, of course, if having the data in a temporary file like this is acceptable. Painful.
EDIT 3
Here's perhaps a better idea: maybe try using a destination bitmap context backed by a memory-mapped CFData block. Does that work? Again, only do this if you're happy with the data going via a temporary file.
EDIT 4
Oh no, it appears that memory-mapped read-write CFData is not available. Maybe try the mmap BSD APIs.
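A rough sketch of what the mmap route might look like (my own guess, not part of the original answer; the function name is made up and error handling is minimal): create a temporary file of the required size, map it read-write, and hand the mapping to CGBitmapContextCreate as the pixel buffer.

#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <CoreGraphics/CoreGraphics.h>

/* Back a bitmap context with a memory-mapped file so a large pixel buffer
   lives in the file system rather than entirely in RAM. */
static CGContextRef CreateFileBackedContext(const char *path,
                                            size_t width, size_t height)
{
    size_t bytesPerRow = width * 4;
    size_t length = bytesPerRow * height;

    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0) return NULL;
    if (ftruncate(fd, (off_t)length) != 0) { close(fd); return NULL; }

    void *buffer = mmap(NULL, length, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                        /* the mapping keeps the file alive */
    if (buffer == MAP_FAILED) return NULL;

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(buffer, width, height, 8,
                                             bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    /* Caller must eventually CGContextRelease(ctx) and munmap(buffer, length). */
    return ctx;
}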
EDIT 5
Added "const char*" and "pixels read-only" comment to code.