Does iPhone automatically use bicubic resizing, or bicubic sharpening?
Say I have an image that will be 100 x 100 on the iPhone. I know I should make it 200 x 200 and append @2x.png so iPhone 4 high-resolution screens can use that version. Assuming file size makes no difference (a huge assumption, I know)... would it matter if I just used an 800 x 800 image and let the iPhone do the resizing itself, rather than manually resizing the 800 x 800 image to 200 x 200? What difference would it make?
It would consume unnecessary memory and processor cycles.
Addition: Perceived image quality is a subjective issue, so why not try it and see? Be sure to test it on older devices. If you make an exact pixel double, or quadruple, or (in the case of 800 x 800 down to 100 x 100) octuple version, the downsizing is more straightforward, and that may help. Frankly, there are few developers who don't care about all three of disk space, memory, and processor cycles, all of which affect UI performance and thus the end-user experience. Exporting images at the correct sizes is not a difficult step in professional development, so you're not likely to get many more answers here; it isn't something most people would contemplate.
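If you do go the runtime-resize route despite all that, here is a minimal sketch (the helper name is mine, and it assumes iOS 4 or later) of manual downscaling via a bitmap-backed graphics context; passing a scale of 0.0 adopts the device's screen scale, so requesting 100 x 100 points yields 200 x 200 pixels on a Retina display:

#import <UIKit/UIKit.h>

/* Hypothetical helper: downscale a UIImage to a target point size. */
UIImage *ResizedImage(UIImage *source, CGSize targetSize)
{
    /* Bitmap context at the screen's scale; NO preserves alpha. */
    UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
    [source drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
    UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resized;
}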
Related
I added 33 MB worth of sprite assets (they are large character illustrations), so I would expect the data folder to increase proportionally. However, the size actually increases by 2 GB (a 6000% increase!), raising the total data size by over 500% as well.
That doesn't make any sense to me. Is there a mistake in my import options? I use mip maps and bilinear/trilinear filters. Switching between truecolor and compressed doesn't change anything.
Additional info: it's about 10 files with 5-8 large sprites each. Another weird thing is that when it's compressed into a zip file, the size collapses to 142 MB (from about 2.3 GB), which is strange because that's too big a difference.
It's also very slow to start.
I believe this is related to how Unity handles image compression. The assets live in your project in compressed (JPG/PNG) form, but they get recompressed (or not) into a form that's fastest to decode on the target platform. Try playing with the compression settings in the asset import settings (available if you highlight your asset in the project window).
There are a few reasons why file sizes can get so big.
As @zambari said, PNG/JPEG are compressed formats, which compress much better than what Unity will produce. Because of that, you have to be careful with your image sizes, since they will be much bigger in-game.
Another issue I had was that my textures weren't sized properly: the compression method I was trying to use (DXT5) requires dimensions divisible by 4.
Another big issue was that I had large images I did not need. I used "generate mip maps" plus trilinear filtering, and that once again doubled the file sizes. The best thing you can do is use image sizes that reflect their use. Relying on Unity to do that for you via the max image size setting does not guarantee good quality (in fact it looked terrible). This was all in Unity 5.
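To see why the numbers blow up like the question describes, here is a back-of-the-envelope sketch (function name and the 2048 x 2048 example are mine) of the uncompressed footprint a truecolor sprite ends up occupying:

#include <stddef.h>

/* Rough uncompressed footprint of an RGBA32 texture, in bytes.
   A full mip chain adds roughly one third on top of the base level. */
size_t TextureBytes(size_t width, size_t height, int withMips)
{
    size_t base = width * height * 4;          /* 4 bytes per pixel */
    return withMips ? base + base / 3 : base;  /* + ~33% for mip maps */
}

/* Example: TextureBytes(2048, 2048, 1) is about 22 MB for one large
   sprite, so a few dozen such sprites reach gigabytes uncompressed,
   even though the source PNGs total only tens of megabytes. */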
I have read a lot about the BMP file format structure, but I still cannot work out the real meaning of the fields biXPelsPerMeter and biYPelsPerMeter. I mean, in a practical way, how are they used, or how can they be utilized? Any example or experience? Thanks a lot.
biXPelsPerMeter
Specifies the horizontal print resolution, in pixels per meter, of the target device for the bitmap.
biYPelsPerMeter
Specifies the vertical print resolution, likewise in pixels per meter.
It's not very important. You can leave them at 2835; it's not going to ruin the image.
(72 DPI × 39.3701 inches per meter yields 2834.6472, which rounds to 2835.)
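For reference, a tiny sketch of that conversion in both directions (the function names are mine):

/* BMP stores resolution in pixels per meter; DPI is pixels per inch.
   1 inch = 0.0254 m, so ppm = dpi / 0.0254 (i.e. dpi * 39.3701). */
long DpiToPelsPerMeter(double dpi) { return (long)(dpi / 0.0254 + 0.5); }
double PelsPerMeterToDpi(long ppm) { return ppm * 0.0254; }

/* DpiToPelsPerMeter(72.0) == 2835, the common default value. */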
Think of it this way: the image bits within the BMP structure define the shape of the image using that much data (that much information describes the image), but that information must then be translated for a target device using a measuring system that indicates its applied resolution in practical use.
For example, if the BMP is 10,000 pixels wide and 4,000 pixels high, that says how much raw detail exists within the image bits. That image information must then be applied to some target; its relationship to the target's dpi is what derives the applied physical size.
If it were printed at 1000 dpi, it's only going to give you a 10" x 4" image, but one with extremely high detail to the naked eye (more pixels per square inch). By contrast, if it's printed at only 100 dpi, you'll get a 100" x 40" image with low detail (fewer pixels per square inch), yet both contain the same overall number of bits. You can actually scale an image without touching any of its internal image data merely by changing the dpi to non-standard values.
Also, using 72 dpi is a throwback to ancient printing techniques (https://en.wikipedia.org/wiki/Twip) which are not really relevant going forward (except to maintain compatibility with standards), as modern hardware devices often use other values for their fundamental relationships to image data. For video screens, for example, Macs use 72 dpi as the default and Windows uses 96 dpi; others are similar. In theory you can set it to whatever you want, but be warned that not all software honors the internal setting; some will instead assume a particular size. This can affect the way images are scaled within an app, even though the actual image data hasn't changed.
While trying to create a repeated tile overlay, I've found many questions (like this one)
mentioning that repeated images in Cocos2d must have height and width dimensions that are powers of two.
This raises two questions. First, why does this limitation exist? Second, and more importantly, how can I create a repeating, scrolling image whose dimensions are not a power of two? What if I have a really wide background (say 4000 pixels) and I want it to repeat across the X axis? What should I do in that context? I can't believe the "correct" answer is to pad the width by an additional 96 pixels and increase the height to 4096 as well. That's wasted bytes!
This answer has excellent info on why power-of-two textures are needed:
Why do images for textures on the iPhone need to have power-of-two dimensions?
As for your second question, the texture does not have to be square; just the width and the height each have to be a power of two. So you could have a 4096 x 128 image repeating as your background. Keep in mind also that textures, no matter what their size, are always stored in memory at an uncompressed power-of-two size, so an image with a width of 4000 and an image with a width of 4096 actually use the same amount of memory.
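To make that padding concrete, here is a small sketch (function name mine) of rounding a dimension up the way the hardware does, assuming the uncompressed power-of-two storage described above:

/* Round a texture dimension up to the next power of two, as older
   GPUs (and Cocos2d on those devices) pad textures in memory. */
unsigned NextPowerOfTwo(unsigned v)
{
    unsigned p = 1;
    while (p < v)
        p <<= 1;
    return p;
}

/* NextPowerOfTwo(4000) == 4096 and NextPowerOfTwo(128) == 128, so a
   4000 x 128 image and a 4096 x 128 image occupy the same texture
   memory once padded. */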
For iPad: e.g., for a 10 x 10 pixel image, how much memory is used?
To provide some other context: usually a normal iPad starts with app-usable memory of 190-200 MB, and this number decreases if background processes or other apps are running.
Many thanks!
10 x 10 x 4 = 400 bytes for that image, since it's 4 bytes per pixel (RGBA).
Also, it is normal for background apps to take some memory; iOS will free memory if it's needed by any app.
I'm creating an image-editing application for iPhone. I would like to enable the user to pick an image from the photo library, edit it (grayscale, sepia, etc.) and, if possible, save it back to the filesystem. I've done the image picking (the simplest part, as you know, using the image picker) and also the grayscale conversion. But I got stuck on sepia; I don't know how to implement it. Is it possible to get the value of each pixel of the image so that we can vary it to get the desired effect? Or are there any other possible methods? Please help...
The Apple image picker code will most likely be holding just the file names and some lower-res renderings of the images in RAM until the moment the user selects an image.
When you ask for the full frame buffer of the image, the CPU suddenly has to do a lot more work decoding the image at full resolution, but it might be as simple as this to trigger it:
CFDataRef CopyImagePixels(CGImageRef inImage)
{
    return CGDataProviderCopyData(CGImageGetDataProvider(inImage));
}

/* IN MAIN APPLICATION FLOW - but see EDIT 2 below */
const char *pixels = [[((NSData *)CopyImagePixels([myImage CGImage]))
    autorelease] bytes]; /* N.B. returned pixel buffer would be read-only */
This is just a guess as to how it works, really, but it's based on some experience with image processing in other contexts. To work out whether what I suggest makes sense and is good from a memory-usage point of view, run Instruments.
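On the original sepia question: once you do have a writable pixel buffer (see EDIT 2 below for why the buffer above is read-only), the effect itself is just a fixed reweighting of the channels. A sketch, with a function name of my choosing, assuming 4 bytes per pixel in RGBA order with non-premultiplied alpha, and using the widely quoted sepia matrix coefficients:

#include <stddef.h>

void ApplySepia(unsigned char *rgba, size_t pixelCount)
{
    size_t i;
    for (i = 0; i < pixelCount; i++) {
        unsigned char *p = rgba + i * 4;
        double r = p[0], g = p[1], b = p[2];
        /* Classic sepia weighting of the input channels. */
        double sr = 0.393 * r + 0.769 * g + 0.189 * b;
        double sg = 0.349 * r + 0.686 * g + 0.168 * b;
        double sb = 0.272 * r + 0.534 * g + 0.131 * b;
        p[0] = sr > 255.0 ? 255 : (unsigned char)sr;  /* clamp to byte range */
        p[1] = sg > 255.0 ? 255 : (unsigned char)sg;
        p[2] = sb > 255.0 ? 255 : (unsigned char)sb;
        /* p[3] (alpha) is left untouched */
    }
}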
The Apple docs say (related, may apply to you):
You should avoid creating UIImage objects that are greater than 1024 x 1024 in size. Besides the large amount of memory such an image would consume, you may run into problems when using the image as a texture in OpenGL ES or when drawing the image to a view or layer. This size restriction does not apply if you are performing code-based manipulations, such as resizing an image larger than 1024 x 1024 pixels by drawing it to a bitmap-backed graphics context. In fact, you may need to resize an image in this manner (or break it into several smaller images) in order to draw it to one of your views.
[ http://developer.apple.com/iphone/library/documentation/UIKit/Reference/UIImage_Class/Reference/Reference.html ]
AND
Note: Prior to iPhone OS 3.0, UIView instances may have a maximum height and width of 1024 x 1024. In iPhone OS 3.0 and later, views are no longer restricted to this maximum size but are still limited by the amount of memory they consume. Therefore, it is in your best interests to keep view sizes as small as possible. Regardless of which version of iPhone OS is running, you should consider using a CATiledLayer object if you need to create views larger than 1024 x 1024 in size.
[ http://developer.apple.com/iPhone/library/documentation/UIKit/Reference/UIView_Class/UIView/UIView.html ]
Also worth noting:
(a) Official how-to
http://developer.apple.com/iphone/library/qa/qa2007/qa1509.html
(b) From http://cameras.about.com/od/cameraphonespdas/fr/apple-iphone.htm
"The image size uploaded to your computer is at 1600x1200, but if you email the photo directly from the iPhone, the size will be reduced to 640x480."
(c) Encoding large images with JPEG image compression requires large amounts of RAM, depending on the size; possibly larger amounts than are available to the application.
(d) It may be possible to use an alternative compression algorithm with (if necessary) its malloc rewired to use temporary memory-mapped files. But consider the data privacy/security issues.
(e) From iPhone SDK: After a certain number of characters entered, the animation just won't load
"I thought it might be a layer size issue, as the iPhone has a 1024 x 1024 texture size limit (after which you need to use a CATiledLayer to back your UIView), but I was able to lay out text wider than 1024 pixels and still have this work."
Sometimes the 1024-pixel limit may appear to be a bit soft, but I would always suggest you program defensively and stay within the 1024-pixel limit if you can.
EDIT 1
Added extra line break in code.
EDIT 2
Oops! The code gets a read-only copy of the data (there is a difference between CFMutableDataRef and CFDataRef). Because of limitations on available RAM, you then have to make a lower-res copy of it by smooth-scaling it down yourself, or copy it into a modifiable buffer; if the image is large, you may need to write it out in bands to a temporary file, release the unmodifiable data block, and load the data back from the file. And only do this, of course, if having the data in a temporary file like this is acceptable. Painful.
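For the "copy it into a modifiable buffer" route, one plausible sketch (the function name is mine; it assumes an RGBA8 layout and enough RAM for the full frame, so for very large images you would still work band by band) is to draw the CGImage into your own bitmap context, which hands you writable pixels:

#include <stdlib.h>
#include <CoreGraphics/CoreGraphics.h>

/* Draw the image into a caller-owned RGBA8 buffer; caller frees. */
void *CreateWritablePixels(CGImageRef image, size_t *outBytesPerRow)
{
    size_t w = CGImageGetWidth(image);
    size_t h = CGImageGetHeight(image);
    size_t bytesPerRow = w * 4;
    void *buf = malloc(bytesPerRow * h);
    if (!buf)
        return NULL;
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(buf, w, h, 8, bytesPerRow, cs,
                                             kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), image);
    CGContextRelease(ctx);
    CGColorSpaceRelease(cs);
    if (outBytesPerRow)
        *outBytesPerRow = bytesPerRow;
    return buf;
}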
EDIT 3
Here's perhaps a better idea: maybe try using a destination bitmap context that is backed by a memory-mapped CFData block. Does that work? Again, only do this if you're happy with the data going via a temporary file.
EDIT 4
Oh no, it appears that memory-mapped read-write CFData is not available. Maybe try the mmap BSD APIs.
EDIT 5
Added "const char*" and "pixels read-only" comment to code.