When creating a UIImage from a .png to be displayed on a button, view/cell background, etc. in a standard iPhone application, should the dimensions all be powers of 2 for optimization reasons?
As others have said, no - but you should generally use images with even dimensions. This is because when a view is positioned via its center property, an odd-dimensioned image lands at a half-pixel position, which makes it appear blurry.
As long as you're aware of this it shouldn't really cause you any problems, but it's still a good idea to use even sizes just to be on the safe side.
(This applies for UIKit, not necessarily OpenGL)
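If you do end up with an odd-sized image, a common workaround (a minimal sketch; icon45.png is a hypothetical 45 x 45 asset) is to round the frame origin back onto whole pixels after centering:

// Centering a 45x45 view at x = 160 puts its origin at 137.5,
// i.e. on a half-pixel boundary, which blurs the image.
UIImageView *imageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"icon45.png"]];
imageView.center = CGPointMake(160.0f, 100.0f);

// Snap the origin back to whole pixels without changing the size:
CGRect frame = imageView.frame;
frame.origin.x = roundf(frame.origin.x);
frame.origin.y = roundf(frame.origin.y);
imageView.frame = frame;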
Apple uses odd and arbitrary dimensions for all the images it adds to the interface on your behalf, such as system toolbar items. The best optimization you can do is anything that reduces compositing, which basically means setting the opaque property of views and layers whenever possible.
If you have the choice between a transparent png that will be composited over a static background and an opaque png with the background already included, you have a chance to optimize. When the images will be sliding around or the background will change, you have to composite, otherwise choose opaque.
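For example, here's a minimal sketch of that optimization (the view and image names are hypothetical):

// Marking a fully-covered view opaque lets the compositor skip blending it.
UIImageView *backgroundView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"background.png"]];
backgroundView.opaque = YES; // promise UIKit there are no transparent pixels
backgroundView.backgroundColor = [UIColor blackColor]; // opaque views should fill their bounds with solid color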
Here is an article on optimizing iPhone images -- it basically tells you why to use PNG files. The size shouldn't matter unless you are using OpenGL ES.
No, this will have little or no benefit. I usually just do my own optimization with Photoshop's "Save for Web & Devices" option.
Please see http://iphonedevelopment.blogspot.com/2008/10/iphone-optimized-pngs.html for a detailed explanation of the iPhone's pre-optimization of PNGs.
As I say in the title, I'm developing an iPhone app. I use nib files, not storyboards, and I know that for iPad I'll need to replace some of the controls I currently use on iPhone, since, for instance, popovers are more suitable in some places on iPad, among other considerations. But I'm not sure whether I need to create a separate iPad-targeted nib file for each nib file I have now for iPhone, whether that is merely convenient but not required, or whether I can keep just one nib file when the views are, for example, scroll views or table views, and simply resizing things would be enough...
What I want is some guidelines for avoiding redundant files and work when creating an iPad version of an existing iPhone app, and what the best practices are, since I can't find how to handle this, programmatically speaking, in Apple's docs or in posts...
Thanks in advance
EDIT. A question about dealing with icons and images: let's say I have an image view that is 50x50 on iPhone. I have two .png images for the iPhone version of this image: 50x50 and 100x100 for Retina display. Let's say I need this image to be 80x80 on iPad. What's the best way to deal with this: having four versions of the image (50x50, 100x100, 80x80, 160x160)? Or just having the largest versions (the 80x80 and 160x160 for iPad) and resizing them down for iPhone? In general, what is the best practice: one image file per size you need, or just the largest you need, scaled down to fit the smaller sizes?
In your xib files you should use Auto Layout with constraints, which was introduced in iOS 6.
The key (and the battle, won with patience and time) will be setting up those constraints correctly.
One xib can be used for both iPhone and iPad; two separate files aren't needed this way. In terms of development speed, though, two files are faster to work with, at least for me...
You can usually get away with not re-implementing a lot of .nib files to be iPad specific and just reuse your pre-existing ones. I have a lot of projects that do this.
That said... you usually have to reimplement the top level container .nib to be iPad specific and you really need to think about where you can take advantage of the larger screen size on iPad and adapt or create new .nibs as you see fit. While you're in any .nib... consider updating it to use AutoLayout if you can!!!
Using child view controllers (UIViewController has had childViewControllers since iOS 5.0) and other libs, for example SGBDrillDownController (you can find it on CocoaPods.org), might be things to take a look at.
On your images question... If the images scale well and don't look too anti-aliased when you go from the largest to the smallest rendition, then going that route can make your life a lot easier. I have found, however, that to get a decent-looking image it is best to use scalable vector art and then create a separate rendition for each size needed. Getting the best graphics you can is certainly a key to success.
According to Apple, the iPhone 4 has a new and better screen resolution:
3.5-inch (diagonal) widescreen Multi-Touch display
960-by-640-pixel resolution at 326 ppi
This little detail affects our apps in a heavy way. Most of the demo apps on the net have one thing in common: they position views in the belief that the screen has a fixed size of 320 x 480 pixels. So what most (if not all) developers do is design everything in such a way that a touchable area is (for example) 50 x 50 pixels big - just enough to tap it. Things have been positioned relative to the upper left, to reach a specific position on screen - let's say the center, or somewhere at the bottom.
When we develop high-resolution apps, they probably won't work on older devices. And if they do, they would suffer a lot from images four times the size, which have to be scaled down in memory.
According to Supporting High-Resolution Screens In Views, from the Apple docs:
On devices with high-resolution screens, the imageNamed:, imageWithContentsOfFile:, and initWithContentsOfFile: methods automatically look for a version of the requested image with the @2x modifier in its name. If it finds one, it loads that image instead. If you do not provide a high-resolution version of a given image, the image object still loads a standard-resolution image (if one exists) and scales it during drawing.

When it loads an image, a UIImage object automatically sets the size and scale properties to appropriate values based on the suffix of the image file. For standard resolution images, it sets the scale property to 1.0 and sets the size of the image to the image's pixel dimensions. For images with the @2x suffix in the filename, it sets the scale property to 2.0 and halves the width and height values to compensate for the scale factor. These halved values correlate correctly to the point-based dimensions you need to use in the logical coordinate space to render the image.
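A quick illustration of that behavior (assuming hypothetical Icon.png at 50 x 50 pixels and Icon@2x.png at 100 x 100 pixels in the bundle), run on a Retina device:

UIImage *icon = [UIImage imageNamed:@"Icon.png"]; // picks up Icon@2x.png here
NSLog(@"size: %@, scale: %.1f", NSStringFromCGSize(icon.size), icon.scale);
// Logs "size: {50, 50}, scale: 2.0" -- the size is reported in points,
// not pixels, so existing layout code keeps working unchanged.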
This is purely speculation, but if the resolution really is 960 x 640 - exactly twice the resolution of the current version - it would be trivially simple for the iPhone to check the app's build target, detect a legacy version of the app, and simply scale it by 2. You'd never notice the difference.
Engadget's reporting of the keynote included the following transcript from Steve Jobs
...It makes it so your apps run automatically on this, but it renders your text and controls in the higher resolution. Your apps look even better, but if you do a little bit of work, then they will look stunning. So we suggest that you do that.
So I infer from that, if you use existing APIs your app will get scaled up. If you take advantage of new iOS4 APIs, you can get all groovy with the new pixels.
It sounds like the display will be OK, but I'm concerned about the logic in my game. Will touchesBegan return positions in the new resolution? The screen bounds will be different; these kinds of things could potentially be problems for me.
Scaling to double resolution for display purposes is straightforward, but will this scaling apply to all APIs that take or return a screen coordinate? If not, things are going to break, aren't they?
Fair enough if it's been handled extensively throughout the framework... I would imagine there are a lot of potential APIs this affects.
For people who are coming to this thread looking for a solution to a mobile web interface, check out this post on the Webkit blog: http://webkit.org/blog/55/high-dpi-web-sites/
It seems that WebKit solved this problem four years ago.
Yes it is true.
According to WWDC, it appears that Apple has built in some form of automatic conversion so that the resolution of applications will not be completely off. Think of the up-conversion from DVD to HDTVs.
My guess would be that Apple knows what standards most developers have been using and will use those for an immediate conversion. Of course, if you program an application to take advantage of the new resolution, it will look much nicer than whatever the result of Apple's auto-conversion is.
All of your labels and system buttons will be at 326 dpi, but your images will still be pixel-doubled until you add the hi-res resources. I am currently updating my apps. If you build and run on the iPhone 4 sim it is presented at 50%; go to Window > Scale > 100% to see the real difference! Labels are smooth, my images look shocking!
I've run into the issue of using a UIBarButtonItem with a custom color. Everything out on the 'net seems to indicate that the only way around this lack of official API support revolves around the use of images. This is all fine and dandy when developing for pre-iOS 4 devices, except when using the new iPhone 4. Creating an image for iPad and pre-iOS 4 devices is straightforward enough, but the images developed for those devices look absolutely horrid on iPhone 4. I suspect that this problem will be exacerbated further with the introduction of next generation devices.
Consider the example below. Notice how the default colored button is nice and smooth, but the iPhone 3GS image looks terrible. It does not seem very scalable (pun intended) to have to include multiple images for different resolution devices.
In the absence of an official API for changing the color of a UIBarButtonItem, what strategies are out there for creating images that scale well against differing resolution devices? This problem is hardly unique to UIBarButtonItems, how is the community adapting to other UI elements that are bitmapped? Is there a better solution for this particular case than using an image (such as using Quartz to draw it)?
If at all possible, please offer concrete code examples.
You can list any image as Image@2x.png along with Image.png and the system will select the appropriate image at runtime.
If you look at the source for Three20 you can see how they draw custom buttons and shapes that will scale well, regardless of resolution.
Give Opacity (for Mac) a try. Draw your button in it with vector elements and effects, and it'll spit out the necessary Quartz code to reproduce it, drawing natively in your iOS application. You get Retina (@2x) support automatically.
Been over a year since I posted this question, but ran into a use case where I wanted to be able to do this, so instead of having to draw or otherwise create the buttons, I decided to write an open source application to create them. This application uses private APIs to change the colors of the UIBarButtonItem objects and then uses a graphics context to save them to a determined location on your computer's file system. This way you can have pixel perfect UIBarButtonItem images to use in your UIToolbars.
The app creates both the standard and @2x resolution images.
UIBarButtonItem-Generator @ GitHub
Any vector drawing app may work, but I would also consider povray, which allows you to create in a C-like 3D scripting language, then export at any pixel size you choose.
http://povray.org
I had the same problem with the navigation bar and solved it as follows:
First I subclassed my navigation bar.
Inside this class:
- (void)drawRect:(CGRect)rect
{
    // On Retina devices, imageNamed: automatically picks up MyImage@2x.png.
    UIImage *image = [UIImage imageNamed:@"MyImage.png"];
    self.frame = CGRectMake(0, 0, image.size.width, image.size.height);
    self.backgroundImage = image;
}
Finally, save the same image at double resolution with @2x at the end of the filename.
What is the best way to use the custom UI graphics on the iPhone?
I have come across CGContextDrawPDFPage and Panic's Shrinkit. Should I be using PDFs to store my vector UI graphics and load them using CGContextDrawPDFPage to draw them?
Previously I asked how Apple stores its UI graphics, and the answer was crushed PNGs. The options I can think of are below, but I'm also interested in any other techniques...
PNG (bitmapped image)
Custom UIView drawing code (generated from Opacity)
PDF (I've not used this method, is it with CGContextDrawPDFPage?)
This question is for vector graphics only (but I guess some people may only use bitmapped?). Looking for what is standard / most effective / most efficient.
Edit: Bounty added. I'm interested to hear the process of anyone who works with UI designers, or is themselves a UI designer. Also pointers on resolution independence, i.e. future-proofing for iPad / iPhone HD.
Many thanks
Ross
I can suggest 3 different ways, 2 of which you already mentioned:
Creating custom UIView.
Drawing in a CGLayer.
Loading from a PDF.
Each have their advantages, depending on what you want to do:
UIView vs CGLayer
In terms of performance (for one-time drawing) and ease of use there shouldn't be much difference between the two (there are minor differences, but nothing serious). Apparently Opacity can export source code for both (I haven't personally used it). That said, there are things you should consider before choosing:
If you have a fixed image (as your question suggests), use CGLayer. CGLayer objects will be cached on the graphics device, so re-using them is much faster. Even if the cache is cleared, you're still using the same object for redrawing, meaning there's no need to re-create it.
On the other hand, if you need to change your drawing as the user interacts with the app, UIView could be faster, as you have the flexibility of updating just one part of the image instead of the whole view.
CGLayer is independent of the UI. So the same code works fine for Mac/iPhone/iPad, or even for saving to files.
Conclusion: Use CGLayer, unless it's a special case.
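As a rough sketch of the CGLayer approach (cachedLayer is a hypothetical CGLayerRef instance variable on the view, and the ellipse stands in for your expensive drawing):

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    if (cachedLayer == NULL) {
        // The expensive drawing happens exactly once, into the layer...
        cachedLayer = CGLayerCreateWithContext(context, CGSizeMake(100.0f, 100.0f), NULL);
        CGContextRef layerContext = CGLayerGetContext(cachedLayer);
        CGContextSetFillColorWithColor(layerContext, [UIColor redColor].CGColor);
        CGContextFillEllipseInRect(layerContext, CGRectMake(0, 0, 100, 100));
    }
    // ...and every subsequent redraw is just a cheap cached blit.
    CGContextDrawLayerAtPoint(context, CGPointZero, cachedLayer);
}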
CGLayer (In code drawing) vs PDF (loading from file)
I don't have any benchmark for this, but I expect CGLayer to be slightly faster: (1) there's no need to read a file; (2) the PDF commands must be converted to the system's graphic elements, which is more or less the same cost as creating a CGLayer; (3) I'm not sure how PDF pages are cached, but I don't expect it to be faster than CGLayer. Anyway, none of this should make much difference unless you want to optimize down to the last millisecond. Again, the choice should be based on your use case:
CGLayer gives you more flexibility in the code. Your only access to a pdf page is through CGContextDrawPDFPage, which means even simple tasks such as scaling/transforming the drawing will be harder.
Using PDFs, on the other hand, is more flexible after the code is finished. You can simply update the PDF file with a new one whenever you want, load it from the web, etc.
Creating a pdf could be easier than coding the drawing. You can use any app you want, you don't need to worry about the API and system resources. After all, the code can output a pdf file, not the other way around.
Conclusion: If you don't need to do much with the drawing (you just want to show an icon or something), go with the PDF. If you need to work on it in the app, consider CGLayer.
Of course you could always mix the approaches as you see fit: e.g. load a PDF file, put it in a CGLayer to adjust it, then draw it in a UIView where you can put a badge on it!
I stumbled over this question because I have a question about PDFs too. I'm working together with a UI designer and we are successfully using PDFs to create UI elements. For example: for a button we have 3 PDFs, for ON, OFF, and the shadow. I wrote a piece of code that transforms a PDF into a UIImage. It can scale the resulting image and even colorize it, so one template serves many styles of buttons. It works pretty well :)
Our problem is that we can't scale up the vector graphics without quality loss. That's why we decided to use graphics that are big enough, so we only ever have to scale them down. But I still wonder if there's a way to scale up a PDF before drawing it to a context and creating a UIImage. Here's my post.
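For what it's worth, since PDF pages are vector data you can rasterize them at any target size without quality loss; the loss only happens once you scale an already-rendered UIImage. Here's a rough sketch of rendering page 1 of a bundled PDF into a UIImage at an arbitrary point size (the file name "button.pdf" and the 80 x 80 target are placeholders):

// Load the first page of a PDF from the app bundle.
NSURL *url = [[NSBundle mainBundle] URLForResource:@"button" withExtension:@"pdf"];
CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((CFURLRef)url);
CGPDFPageRef page = CGPDFDocumentGetPage(document, 1); // pages are 1-indexed

CGSize targetSize = CGSizeMake(80.0f, 80.0f);
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0); // scale 0 = device scale, so Retina comes free
CGContextRef context = UIGraphicsGetCurrentContext();

// PDF coordinates have a bottom-left origin, so flip the context,
// then scale the page's media box to fill the target size.
CGContextTranslateCTM(context, 0, targetSize.height);
CGContextScaleCTM(context, 1, -1);
CGRect box = CGPDFPageGetBoxRect(page, kCGPDFMediaBox);
CGContextScaleCTM(context, targetSize.width / box.size.width, targetSize.height / box.size.height);

CGContextDrawPDFPage(context, page);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGPDFDocumentRelease(document);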
When a PNG is added to an Xcode iPhone project, the compiler optimizes it using pngcrush. Once on the device, the image's rendering performance is very fast.
My problem is that my application downloads its PNGs from an external source at runtime (from Picasa Web albums, using the Google Data APIs). Unfortunately, these images' performance is quite bad. When I do custom rendering on top of such an image, it seems 100x slower than its internally stored counterpart. I strongly suspect this is because the downloaded images haven't been optimized.
Does anyone know how I can optimize an externally downloaded PNG at runtime on the iPhone? I'm hoping for a class that does this. I even considered adding pngcrush's source code to my app, which seems drastic. I haven't been able to find a decent answer myself. I'd be very grateful for any help.
Thanks!
Update:
Some folks have suggested that it may be due to the file's size, but it isn't. During my tests, I added a toggle button to switch between using the embedded version and the downloaded version of exactly the same PNG. The only difference is that the embedded one was optimized by 'pngcrush' during compilation. This does some byte-swapping (from RGBA to BGRA) and pre-multiplication of alpha. (http://iphonedevelopment.blogspot.com/2008/10/iphone-optimized-pngs.html)
Also, the performance I'm referring to isn't the downloading, but the rendering. I superimpose custom painting on top of the image (overriding the drawRect method of the UIView), and it's very choppy when the background is the downloaded version, and very smooth when it's the embedded (and therefore optimized) version. Again, it's exactly the same file. The only difference is the optimization, which I'm hoping I can perform on the image at runtime, on the device, after downloading it.
Thanks again for everyone's help!
That link you posted pretty much answers your question.
During the build process, Xcode pre-processes your PNGs so they're in a format that's friendlier to the graphics chip in the iPhone.
PNGs that have not been processed like this will likely use a slower rendering path, one that deals with the non-native format and the fact that the alpha must be computed separately for each color.
So you have two options:
Perform the same work that pngcrush does: swap the byte ordering and pre-multiply the alpha. The speed-up may be due to one or both of these.
After you have loaded your image, you can "create" a new image from it. This new image should be in the iPhone's native format and so should perform faster. The downside is it could potentially take up a bit more memory.
E.g.
// Redraw the downloaded image into a fresh bitmap context; the resulting
// UIImage is in the device's native format (byte-swapped, premultiplied alpha).
CGRect area = CGRectMake(0, 0, width, height);
CGSize size = area.size;
UIGraphicsBeginImageContext(size);
[oldImage drawInRect:area];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The fact that you say it "seems" 100x slower indicates that you have not performed any experimentation, but made a guess (it must be the PNG optimization), and are now going down a path based on a hunch.
You should spend time to confirm what the problem is before you try to solve it. My gut says that PNG optimization shouldn't be the issue: that mostly affects the loading of images, but once they are in memory it doesn't matter what file format they were originally in.
Anyway, you should try an A-B comparison, either get your code to load an optimized PNG from somewhere else and see how it compares, or make a test app that just does some drawing on the two PNG types. Once you've confirmed what the problem is, then you can figure out if you need to compile pngcrush into your app.
On the surface, it sounds like something else is at play here. Any additional image manipulation should only add time until it's displayed onscreen...
Would it be at all possible to get the server to gzip the images by sending the appropriate HTTP header? (If it even helps file size much, that is.)
Temporarily using the pngcrush source might be a good test as well, just to get some measurements.
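If you try the gzip route, it's just a request header (a minimal sketch; the URL is a placeholder):

NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:@"http://example.com/photo.png"]];
[request setValue:@"gzip" forHTTPHeaderField:@"Accept-Encoding"];
// PNG data is already deflate-compressed, so don't expect a big win.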
Are you storing the png at the original downloaded size? If it's a large image it'll take significantly longer to render.
Well, it seems that a good way to do it (since you can't run pngcrush on the iPhone and expect that to speed things up) would be to make your requests through a proxy that runs pngcrush. The proxy would have the horsepower to actually give you some gain over the 100x pain you feel.
Try pincrush to convert a normal PNG file into a crushed PNG file.
You say you are drawing on top of the image by overriding a UIView's drawRect: method. Are you trying to do some animation by repeatedly drawing the whole image with your custom stuff on top of it?
You might get better results if you put your custom stuff in a separate view or layer, and let the OS deal with compositing the result over the background. The OS will only update the parts of the screen that you actually change, and won't be repainting the entire image as often.
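A rough sketch of that layering (MyOverlayView is a hypothetical UIView subclass whose drawRect: paints only your custom annotations, and downloadedImage is a placeholder):

// The downloaded image lives in its own view and is composited once...
UIImageView *photoView = [[UIImageView alloc] initWithImage:downloadedImage];
// ...while the custom drawing sits in a transparent overlay on top.
MyOverlayView *overlay = [[MyOverlayView alloc] initWithFrame:photoView.bounds];
overlay.opaque = NO;
overlay.backgroundColor = [UIColor clearColor];
[photoView addSubview:overlay];

// When the annotations change, invalidate only the overlay;
// the photo underneath is never repainted.
[overlay setNeedsDisplay];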