CGImageGetBytesPerRow() returns different values on iOS simulator and iOS Device - iphone

I have an image taken by my iPod touch 4 that's 720x960. In the simulator, calling CGImageGetBytesPerRow() on the image returns 2880 (720 * 4 bytes), which is what I expected. However, on the device CGImageGetBytesPerRow() returns 3840, which is the bytes-per-row value you would get along the height (960 * 4 bytes). Does anyone know why the behavior differs even though the image I'm calling CGImageGetBytesPerRow() on has a width of 720 and a height of 960 in both cases?
Thanks in advance.

Bytes per row can be anything as long as it is sufficient to hold the image bounds, so it is best not to assume it will be the minimum needed to fit the image.
I would guess that on the device, bytes per row is dictated by some optimisation or hardware consideration: perhaps an image buffer that does not have to change when the orientation is rotated, or an image sensor that transfers extra bytes of dead data per row which are then ignored rather than doing a second transfer into a buffer with the minimum bytes per row, or some other reason that would only make sense if we knew the inner workings of these devices.
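Either way, the safe pattern is to index rows using whatever CGImageGetBytesPerRow() actually returns, never width * 4. A minimal sketch, assuming an 8-bits-per-component RGBA image (the helper name is made up for illustration):

#import <CoreGraphics/CoreGraphics.h>
#include <stdint.h>
#include <string.h>

/* Copies the RGBA value at (x, y) out of the image, honouring its row stride. */
static void ReadPixelRGBA(CGImageRef image, size_t x, size_t y, uint8_t rgba[4])
{
    CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(image));
    const uint8_t *base = CFDataGetBytePtr(data);
    size_t bytesPerRow = CGImageGetBytesPerRow(image);        /* may exceed width * 4 */
    size_t bytesPerPixel = CGImageGetBitsPerPixel(image) / 8; /* assumed to be 4 here */

    memcpy(rgba, base + y * bytesPerRow + x * bytesPerPixel, 4);
    CFRelease(data);
}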

It may be slightly different because of the internal memory allocation; the documentation only promises "the number of bytes used in memory for each row of the specified bitmap image (or image mask)."
Consider using NSBitmapImageRep (AppKit, on the Mac) for some special tasks.

Related

BMP image header - biXPelsPerMeter

I have read a lot about the BMP file format structure, but I still cannot grasp the real meaning of the fields biXPelsPerMeter and biYPelsPerMeter. I mean, in a practical way: how are they used, or how can they be utilized? Any example or experience? Thanks a lot.
biXPelsPerMeter
Specifies the horizontal print resolution, in pixels per meter, of the target device for the bitmap.
biYPelsPerMeter
Specifies the vertical print resolution.
It's not very important. You can leave them at 2835; it's not going to ruin the image.
(72 DPI × 39.3701 inches per meter yields 2834.6472)
Think of it this way: The image bits within the BMP structure define the shape of the image using that much data (that much information describes the image), but that information must then be translated to a target device using a measuring system to indicate its applied resolution in practical use.
For example, if the BMP is 10,000 pixels wide, and 4,000 pixels high, that explains how much raw detail exists within the image bits. However, that image information must then be applied to some target. It uses the relationship to the dpi and its target to derive the applied resolution.
If it were printed at 1000 dpi then it's only going to give you an image with 10" x 4" but one with extremely high detail to the naked eye (more pixels per square inch). By contrast, if it's printed at only 100 dpi, then you'll get an image that's 100" x 40" with low detail (fewer pixels per square inch), but both of them have the same overall number of bits within. You can actually scale an image without scaling any of its internal image data by merely changing the dpi to non-standard values.
Also, using 72 dpi is a throwback to ancient printing techniques (https://en.wikipedia.org/wiki/Twip) which are not really relevant going forward (except to maintain compatibility with standards), as modern hardware devices often use other values as their fundamental relationship to image data. For video screens, for example, Macs use 72 dpi as the default and Windows uses 96 dpi; others are similar. In theory you can set it to whatever you want, but be warned that not all software honors the internal settings and will instead assume a particular size. This can affect the way images are scaled within the app, even though the actual image data within hasn't changed.
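For concreteness, here is a minimal sketch of the conversion behind the 2835 figure (plain C; the function names are made up for illustration):

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* BMP stores resolution as pixels per meter; 1 inch = 0.0254 m. */
static int32_t PelsPerMeterFromDPI(double dpi) { return (int32_t)lround(dpi / 0.0254); }
static double  DPIFromPelsPerMeter(int32_t ppm) { return ppm * 0.0254; }

int main(void)
{
    printf("%d\n",   PelsPerMeterFromDPI(72.0)); /* prints 2835 */
    printf("%.2f\n", DPIFromPelsPerMeter(2835)); /* prints roughly 72.01 */
    return 0;
}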

How can you repeat a background in cocos2d when HxW lengths are not a power of 2?

While trying to create a repeated tile overlay, I've found many questions (like this one) mentioning that repeated images in Cocos2d must have height and width dimensions that are powers of two.
This raises two questions. First, why is this a limitation? Second, and more importantly, how can I create a repeating, scrolling image that has dimensions that are not a power of two? What if I have a really wide background (say 4000 pixels) and I want it to repeat across the X axis? What should I do in that context? I can't believe the "correct" answer is to add an additional 96 pixels to the width and increase the height of the image to 4096 as well. That's wasted bytes!
This answer has excellent info on why power-of-two textures are needed:
Why do images for textures on the iPhone need to have power-of-two dimensions?
As for your second question, the texture does not have to be square; just the width and the height each have to be a power of 2. So you could have an image that is 4096x128 repeating as your background. Keep in mind also that textures, no matter what their size, are always stored in memory at an uncompressed power-of-two size, so an image with a width of 4000 and an image with a width of 4096 are actually using the same amount of memory.
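A hedged sketch of that setup, assuming the cocos2d-iphone (Objective-C) API, a layer's init method, and a hypothetical 4096x128 asset; GL_REPEAT only works here because both dimensions are powers of two:

/* Repeat a 4096x128 power-of-two texture three times across the X axis. */
CCSprite *background = [CCSprite spriteWithFile:@"background_4096x128.png"];
ccTexParams params = { GL_LINEAR, GL_LINEAR, GL_REPEAT, GL_CLAMP_TO_EDGE };
[background.texture setTexParameters:&params];
[background setTextureRect:CGRectMake(0, 0, 3 * 4096, 128)]; /* the S coordinate repeats */
[self addChild:background];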

Should the dimensions (height/width) of the retina images (@2x) always be multiples of two?

I've gotten some graphics files for buttons etc. from the designer. Most of the retina files have one or both dimensions odd, like 29 x 30 or 79 x 61, and then the dimensions of the corresponding non-retina files will be 15 x 15 or 39 x 31, for example. The dimensions of the UIImageViews that hold each image exactly match the size of the non-retina files they hold, so on a non-retina phone there is no distortion and everything looks fine.
On a retina phone, these images (icons and such) only look fine when the images happen to be even dimensions (like 30 x 30 or 46 x 80); when there's an odd dimension to the image, it gets distorted slightly.
Should the pixel dimensions of a retina image always be twice the size of the non-retina dimensions, and of the dimensions of the frame that displays it?
As the name (@2x) implies, it is indeed assumed that the retina version is exactly twice the size of the non-retina version. Otherwise, as you have seen, there might be distortions.
On a side note, this is only indirectly related to the frame that displays the image; think of scroll views, for example.
Ask your designer to always design the UI (not necessarily the components themselves) for the non-retina version first, and then just double up the sizes for the retina version. This way, you won't run into distortion problems. If he designs retina-first and then scales all components back to half their sizes, he will likely end up with odd dimensions.
Oh, and give your designer this link:
http://www.smashingmagazine.com/2010/11/17/designing-for-iphone-4-retina-display-techniques-and-workflow/
Yes, image files that have @2x appended should be exactly double the size of the non-retina image, and thus should only have even dimensions.
It would appear so.
When you create a view which is 30 points by 30 points on a regular display the backing store (the data that gets drawn on the screen) will be created 30 pixels by 30 pixels.
On a retina display that backing store is simply multiplied by a scale factor. Currently that scale factor is 2 for iPhone 4 and iPhone 4s. This means that the backing stores on retina displays will always be a multiple of 2.
Your 30 point by 30 point view would have a 60 pixel by 60 pixel backing store. If your images aren't drawn properly for retina displays, it would seem that the @2x image needs to be the full size of the backing store, and hence exactly double the size of the view in points.
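For illustration, a small snippet confirming the point-to-pixel math described above (assumes iOS 4 or later, where UIScreen exposes its scale):

CGFloat scale = [UIScreen mainScreen].scale;             /* 2.0 on iPhone 4 / 4S */
CGSize pointSize = CGSizeMake(30.0, 30.0);               /* size of the view in points */
CGSize pixelSize = CGSizeMake(pointSize.width * scale,
                              pointSize.height * scale); /* backing store in pixels */
NSLog(@"The @2x asset should be %.0f x %.0f pixels", pixelSize.width, pixelSize.height);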

Objective C (iPhone): CGContextDrawImage is too slow

I'm writing a program that does various types of image processing while getting pictures at a rate of 15 FPS. When I comment out the code that draws the images and leave only the processing, I find that I can run at a maximum speed of 13/14 FPS.
However, upon calling CGContextDrawImage 6 times in a row (6 different images), my drawing rate drops down to 6/7 FPS. I was wondering if anyone knows an alternative to CGContextDrawImage such that drawing the images takes minimal time.
Scale it to the right size and/or render intermediates to an offscreen cached context (e.g. composite and merge multiple images), which can be easily copied. Make sure that your image uses an optimal layout, assuming you render it multiple times. Only draw when needed. Profile to see what takes the most time. Determine what needs to be drawn: if you have 6 images and they overlap, do not draw the portions which are not visible.
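A hedged sketch of the offscreen-cache suggestion, assuming six UIImages that all cover the same canvas and are composited once up front (the function name is illustrative, not an SDK call):

/* Composite several images once into a bitmap context and return the result. */
static CGImageRef CreateCompositedImage(NSArray *images, CGSize size)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, size.width, size.height, 8, 0,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    for (UIImage *image in images) {
        /* Assumes every image covers the full canvas; adjust the rects as needed. */
        CGContextDrawImage(ctx, CGRectMake(0, 0, size.width, size.height), image.CGImage);
    }
    CGImageRef composited = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return composited; /* caller releases with CGImageRelease() */
}

Drawing this single cached image each frame replaces the six per-frame CGContextDrawImage calls with one.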

Image editing using iPhone

I'm creating an image editing application for iPhone. I would like to enable the user to pick an image from the photo library, edit it (grayscale, sepia, etc.) and, if possible, save it back to the filesystem. I've done it for picking an image (the simplest thing, as you know, using the image picker) and also for creating the grayscale image. But I got stuck with sepia; I don't know how to implement that. Is it possible to get the values of each pixel of the image so that we can vary them to get the desired effects? Or are there any other possible methods? Please help.
The Apple image picker code will most likely be holding just the file names and some lower-res renderings of the images in RAM until the last moment when a user selects an image.
When you ask for the full frame buffer of the image, the CPU suddenly has to do a lot more work decoding the image at full resolution, but triggering that might even be as simple as this:
/* Returns a read-only copy of the image's pixel data; the caller must release it. */
CFDataRef CopyImagePixels(CGImageRef inImage)
{
    return CGDataProviderCopyData(CGImageGetDataProvider(inImage));
}

/* IN MAIN APPLICATION FLOW - but see EDIT 2 below */
const char *pixels = [[((NSData *)CopyImagePixels([myImage CGImage]))
                          autorelease] bytes]; /* N.B. returned pixel buffer would be read-only */
This is just a guess as to how it works, really, but based on some experience with image processing in other contexts. To work out whether what I suggest makes sense and is good from a memory usage point of view, run Instruments.
The Apple docs say (related, may apply to you):
You should avoid creating UIImage objects that are greater than 1024 x 1024 in size. Besides the large amount of memory such an image would consume, you may run into problems when using the image as a texture in OpenGL ES or when drawing the image to a view or layer. This size restriction does not apply if you are performing code-based manipulations, such as resizing an image larger than 1024 x 1024 pixels by drawing it to a bitmap-backed graphics context. In fact, you may need to resize an image in this manner (or break it into several smaller images) in order to draw it to one of your views.
[ http://developer.apple.com/iphone/library/documentation/UIKit/Reference/UIImage_Class/Reference/Reference.html ]
AND
Note: Prior to iPhone OS 3.0, UIView instances may have a maximum height and width of 1024 x 1024. In iPhone OS 3.0 and later, views are no longer restricted to this maximum size but are still limited by the amount of memory they consume. Therefore, it is in your best interests to keep view sizes as small as possible. Regardless of which version of iPhone OS is running, you should consider using a CATiledLayer object if you need to create views larger than 1024 x 1024 in size.
[ http://developer.apple.com/iPhone/library/documentation/UIKit/Reference/UIView_Class/UIView/UIView.html ]
Also worth noting:-
(a) Official how-to
http://developer.apple.com/iphone/library/qa/qa2007/qa1509.html
(b) From http://cameras.about.com/od/cameraphonespdas/fr/apple-iphone.htm
"The image size uploaded to your computer is at 1600x1200, but if you email the photo directly from the iPhone, the size will be reduced to 640x480."
(c) Encoding large images with JPEG image compression requires large amounts of RAM, depending on the size, possibly larger amounts than are available to the application.
(d) It may be possible to use an alternate compression algorithm with (if necessary) its malloc rewired to use temporary memory mapped files. But consider the data privacy/security issues.
(e) From iPhone SDK: After a certain number of characters entered, the animation just won't load
"I thought it might be a layer size issue, as the iPhone has a 1024 x 1024 texture size limit (after which you need to use a CATiledLayer to back your UIView), but I was able to lay out text wider than 1024 pixels and still have this work."
Sometimes the 1024 pixel limit may appear to be a bit soft, but I would always suggest you program defensively and stay within the 1024 pixel limit if you can.
EDIT 1
Added extra line break in code.
EDIT 2
Oops! The code gets a read-only copy of the data (there is a difference between CFMutableDataRef and CFDataRef). Because of limitations on available RAM, you then have to either make a lower-res copy of it by smooth-scaling it down yourself, or copy it into a modifiable buffer. If the image is large, you may need to write it in bands to a temporary file, release the unmodifiable data block, and load the data back from the file; and only do this, of course, if having the data in a temporary file like this is acceptable. Painful.
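For images small enough to hold two copies in RAM, the "copy it into a modifiable buffer" option can be as simple as this sketch (it reuses CopyImagePixels and myImage from the code above; to display the edited pixels you would still need to build a new CGImage from the buffer):

CFDataRef readOnly = CopyImagePixels([myImage CGImage]);
CFMutableDataRef mutableCopy = CFDataCreateMutableCopy(kCFAllocatorDefault, 0, readOnly);
CFRelease(readOnly);
UInt8 *pixels = CFDataGetMutableBytePtr(mutableCopy);
/* ... modify pixels in place, stepping rows by CGImageGetBytesPerRow() ... */
CFRelease(mutableCopy);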
EDIT 3
Here's perhaps a better idea: maybe try using a destination bitmap context backed by a CFData block that is memory-mapped. Does that work? Again, only do this if you're happy with the data going via a temporary file.
EDIT 4
Oh no, it appears that memory-mapped read-write CFData is not available. Maybe try the BSD mmap APIs.
EDIT 5
Added "const char*" and "pixels read-only" comment to code.