iPhone – Best method to import/draw UI graphic elements? CGContextDrawPDFPage?

What is the best way to use the custom UI graphics on the iPhone?
I have come across CGContextDrawPDFPage and Panic's Shrinkit. Should I be using PDFs to store my vector UI graphics and loading them with CGContextDrawPDFPage to draw them?
Previously I asked how Apple stores its UI graphics, and the answer was crushed PNGs. The options I can think of are below, but I'm also interested in any other techniques...
PNG (bitmapped image)
Custom UIView drawing code (generated from Opacity)
PDF (I've not used this method; is it done with CGContextDrawPDFPage?)
This question is for vector graphics only (but I guess some people may only use bitmaps?). I'm looking for what is standard / most effective / most efficient.
Edit: Bounty added. I'm interested to hear about the process of anyone who works with UI designers, or who is a UI designer themselves. Any pointers on resolution independence, i.e. future-proofing for the iPad / iPhone HD, are also welcome.
Many thanks
Ross

I can suggest 3 different ways, 2 of which you already mentioned:
Creating a custom UIView.
Drawing in a CGLayer.
Loading from a PDF.
Each has its advantages, depending on what you want to do:
UIView vs CGLayer
In terms of performance (for one-time drawing) and ease of use there shouldn't be much difference between the two (there are minor differences, but nothing serious). Apparently Opacity can export source code for both (I haven't personally used it). That said, there are things you should consider before choosing:
If you have a fixed image (which your question suggests), use CGLayer. CGLayer objects will be cached on the graphics device, so re-using them is much faster. Even if the cache is cleared, you're still using the same object for redrawing, meaning there's no need to re-create it.
On the other hand, if you need to change your drawing as the user interacts with the app, UIView could be faster, as you have the flexibility of updating just one part of the image instead of the whole view.
CGLayer is independent of the UI. So the same code works fine for Mac/iPhone/iPad, or even for saving to files.
Conclusion: Use CGLayer, unless it's a special case.
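For illustration, here's a minimal sketch of the CGLayer pattern. The static cache and the ellipse are placeholders for your own fixed artwork, and sharing one layer across all instances is a simplification:

static CGLayerRef cachedLayer = NULL; // shared cache; simplification for the sketch

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    if (cachedLayer == NULL) {
        // Create the layer once, using the destination context as reference.
        cachedLayer = CGLayerCreateWithContext(context, self.bounds.size, NULL);
        CGContextRef layerContext = CGLayerGetContext(cachedLayer);
        // Placeholder artwork; your fixed drawing goes here.
        CGContextSetFillColorWithColor(layerContext, [UIColor redColor].CGColor);
        CGContextFillEllipseInRect(layerContext, self.bounds);
    }
    // Every subsequent redraw reuses the cached layer, which the
    // graphics device can keep resident.
    CGContextDrawLayerInRect(context, self.bounds, cachedLayer);
}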
CGLayer (in-code drawing) vs PDF (loading from file)
I don't have any benchmark for this, but I expect CGLayer to be slightly faster: (1) there's no need to read a file; (2) the PDF commands have to be converted to the system's graphics primitives, which is more or less the same work as creating a CGLayer; (3) I'm not sure how PDF pages are cached, but I don't expect it to be faster than CGLayer. Anyway, none of this should make much difference unless you want to optimize down to the last millisecond. Again, the choice should be based on your use case:
CGLayer gives you more flexibility in the code. Your only access to a PDF page is through CGContextDrawPDFPage, which means even simple tasks such as scaling/transforming the drawing will be harder.
Using PDFs, on the other hand, is more flexible after the code is finished. You can simply update the PDF file with a new one whenever you want, load it from the web, etc.
Creating a PDF could be easier than coding the drawing. You can use any app you want, and you don't need to worry about the API and system resources. After all, code can output a PDF file, but not the other way around.
Conclusion: If you don't need to do much with the drawing (you just want to show an icon or something), go with the PDF. If you need to work on it in the app, consider CGLayer.
Of course you could always mix the approaches as you see fit: e.g. load a PDF file, put it in a CGLayer to adjust it, then draw it with a UIView where you can put a badge on it!
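As a reference for the PDF route, here's a minimal sketch of CGContextDrawPDFPage inside a view's drawRect:. The file name is a placeholder, and a real app would cache the document instead of reopening it on every draw:

- (void)drawRect:(CGRect)rect {
    // Load page 1 of a bundled PDF ("artwork.pdf" is a placeholder name).
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"artwork" withExtension:@"pdf"];
    CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((__bridge CFURLRef)url);
    CGPDFPageRef page = CGPDFDocumentGetPage(document, 1); // pages are 1-indexed

    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    // PDF space is flipped relative to UIKit, so flip the context first.
    CGContextTranslateCTM(context, 0, self.bounds.size.height);
    CGContextScaleCTM(context, 1, -1);
    CGContextDrawPDFPage(context, page);
    CGContextRestoreGState(context);
    CGPDFDocumentRelease(document);
}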

I stumbled over this question because I have a question about PDFs too. I'm working together with a UI designer and we are successfully using PDFs to create UI elements. For example, for a button we have 3 PDFs for ON, OFF and the shadow. I wrote a piece of code that transforms a PDF into a UIImage. It can scale the resulting image and even colorize it, so we have one template for many styles of buttons. It works pretty well :)
Our problem is that we can't scale up the vector graphics without quality loss. That's why we decided to use graphics that are big enough so that we only ever have to scale them down. But I still wonder if there's a way to scale up a PDF before drawing it to a context and creating a UIImage. Here's my post.
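Regarding scaling up: since a PDF page is scaled in vector space before it is rasterized, rendering it into a larger context should give a sharp result at any size. A minimal sketch (assuming a one-page document whose media box starts at the origin):

// Render a PDF page into a UIImage at an arbitrary scale factor.
// The scale is applied before rasterization, so there is no quality loss.
UIImage *ImageFromPDFPage(CGPDFPageRef page, CGFloat scale) {
    CGRect box = CGPDFPageGetBoxRect(page, kCGPDFMediaBox);
    CGSize size = CGSizeMake(box.size.width * scale, box.size.height * scale);

    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Flip to PDF coordinates, then scale in vector space.
    CGContextTranslateCTM(context, 0, size.height);
    CGContextScaleCTM(context, scale, -scale);
    CGContextDrawPDFPage(context, page);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}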

Related

NSString drawInRect vs. Core Text

I've read in the documentation that the NSString category method drawInRect is good for small amounts of text, but is too much overhead for large amounts. The alternative in my case would be to use Core Text CTFrameDraw.
My question is, what is the "cut-off" point between using drawInRect vs Core Text? Is Core Text always faster than drawInRect? If the extra programming is not the issue, should I always just use Core Text instead of drawInRect, even for small strings?
Mostly I'll be drawing paragraphs, but sometimes they can be short lines too.
For me, it comes down to using Core Text when I have specific requirements from the graphic designer, for instance labels that need to mix colors, typefaces, font sizes, etc. drawInRect: is a method that draws something into a view's layer and is just a wrapper around Core Graphics functions; Core Text is a full framework for controlling how text is drawn: changing fonts, spacing, line heights, and so on.
What I mean is that it's not a fair comparison. A better question would be when to use Core Text instead of the common general-purpose UI text objects, and the answer comes down to your app's design and UI requirements. Hope this helps.
I would write some test code to benchmark the two methods under consideration, using your expected statistical distribution of string content. Maybe time a few tens of millions of string renders using each. Run the benchmarks on an actual iOS device, as relative performance may differ with the ARM-targeted frameworks.
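Something like this rough harness would settle it for your particular content. The sample string, font, rect, and iteration count are arbitrary placeholders (Core Text draws flipped in a UIKit context, but that doesn't affect the timing):

#import <CoreText/CoreText.h>

NSString *string = @"The quick brown fox jumps over the lazy dog.";
CGRect rect = CGRectMake(0, 0, 300, 200);
UIGraphicsBeginImageContextWithOptions(rect.size, YES, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();

NSDate *start = [NSDate date];
for (int i = 0; i < 10000; i++) {
    [string drawInRect:rect withFont:[UIFont systemFontOfSize:14]];
}
NSLog(@"drawInRect: %f s", -[start timeIntervalSinceNow]);

// Core Text setup is done once; only the actual drawing is timed.
NSAttributedString *text = [[NSAttributedString alloc] initWithString:string];
CTFramesetterRef setter =
    CTFramesetterCreateWithAttributedString((__bridge CFAttributedStringRef)text);
CGPathRef path = CGPathCreateWithRect(rect, NULL);
CTFrameRef frame = CTFramesetterCreateFrame(setter, CFRangeMake(0, 0), path, NULL);

start = [NSDate date];
for (int i = 0; i < 10000; i++) {
    CTFrameDraw(frame, context);
}
NSLog(@"CTFrameDraw: %f s", -[start timeIntervalSinceNow]);

CFRelease(frame);
CGPathRelease(path);
CFRelease(setter);
UIGraphicsEndImageContext();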
I wonder if using a UIWebView would get all the performance that's possible. iOS (like every modern OS) has WebKit constantly loaded and ready to go, and it's pretty well optimized. It would also offload the work.
Interesting to compare.

Create iphone interface from psd

For every developer, the day arrives when it's time to improve the user interface experience, because apps are evaluated mainly on the care put into their UI.
So I took a look around the web and found some PSDs to use as starting points for designing my apps.
My question is: how do I transform a PSD prototype into a well-working app?
I don't understand how a mockup can help a developer build a UI...
Can someone clarify the situation for me?
Well, I'd be careful to make a distinction between the graphics an app uses and the actual user interface. Certainly the graphics are part of the UI, but the UI is soooo much more than that. Depending on how they are done, Photoshop mock-ups range from simple graphics you can use in your interface to complex 'scenes' describing how the app functions. In the latter case, the mock-up can be useful for UI design; in the former case it just gives you pretty images to use (which can certainly be useful too).
But to more directly answer your question: most people take 'slices' (individual pieces) of the Photoshop image and export them as .png (or .jpg) images. If the .psd file doesn't already have the images 'sliced', look up 'photoshop image slicing' on Google. You can then import them into Xcode and use them as background images for the controls you want to use. Especially since iOS 5.0, images can be used for a lot of controls. Also, you'll probably want to make the image resizable with proper UIEdgeInsets, as sketched below. This lets the image resize without pixelation by defining an area within the image that can be tiled.
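For example, a sketch of the cap-inset approach; the image name, inset values, and button are placeholders for your own artwork:

// A sliced button background that stretches without pixelation.
// The insets mark the fixed borders; the middle area is tiled.
UIImage *background = [[UIImage imageNamed:@"button_bg"]
    resizableImageWithCapInsets:UIEdgeInsetsMake(10, 10, 10, 10)];
[myButton setBackgroundImage:background forState:UIControlStateNormal];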

UIImageView vs UIView w/ Image - efficiency

This is not really a question about addressing a specific problem but more a request to be pointed in the right direction.
I am making an app where I load several images (saved as JPGs) onto the screen at the same time, and it has to be this way; I can't unload any of them because they are all shown at once.
I tried loading 30 images at about 800px*600px resolution, naively thinking that only the 'compressed' size of the pictures gets loaded into memory (about 200 KB each, so a total of 6 MB?). Of course, several memory warnings later, I realised how stupid I was: decompressed, each 800×600 image takes roughly 800 × 600 × 4 bytes ≈ 1.9 MB, so 30 of them need nearly 60 MB. So I now re-size them to about 400px*300px each and my iPhone 4S just about copes with the memory requirements.
I originally used a UIView and drew the image myself in drawRect:, but changing over to a UIImageView improved the situation dramatically. The app is so much faster and more responsive.
I also found that switching off rasterization in my layer made a huge difference in performance.
These sorts of 'discoveries' have taken me hours to work out, and I was wondering whether there are particular design patterns or resources I can use to load my images to the screen as efficiently as possible, mainly with regard to using as little memory as possible. Are there good rules of thumb? Why did using UIImageView vs UIView w/ image make such a difference?
Would be very grateful if someone could help.
Thanks.
Implementing -drawRect: causes the system to allocate a bitmap image the same size as your view (after all, you need a buffer to draw into). If all you're doing is drawing an image you've already loaded, then in one fell swoop you've doubled the memory usage for that image (because you have the copy you loaded, plus the second copy you just drew).
Similarly, rasterizing layers requires allocating bitmap images the same size as the layer, so it has a buffer to rasterize into. So turning that on sucks up memory as well (proportional to the size of the layer).
The basic rule of thumb is, don't do extra work. Using -drawRect: to draw an image is extra work. Rasterizing a layer is extra work (though, depending on what the layer is, this may be a one-time performance cost (and a constant memory cost) in order to save on performance later, e.g. if it's a CAShapeLayer or if it's drawing shadows). Keeping large images in memory that you always scale down before rendering to screen is extra work (just scale it down once when you load the image, and keep the scaled copy around).
Another thing to keep in mind is, if your goal is to draw images, you should try to use UIImageView if you possibly can. It's generally the fastest and cheapest way to get an image to the screen, and it's reasonably flexible.
The design pattern is basically, use UIImageView. Apple has spent a lot of time making it fast, and Apple is allowed to use private APIs that you aren't.
That said, if you want to do it yourself, you should try using one CALayer per image. You just load an image and set it as the contents property of a CALayer. The CALayer can cache its contents in GPU memory, and may do other optimizations that you can't do with public APIs.
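A minimal sketch of that approach ("photo" is a placeholder image name):

// Display an image via a bare CALayer instead of a view hierarchy.
UIImage *image = [UIImage imageNamed:@"photo"];
CALayer *layer = [CALayer layer];
layer.frame = CGRectMake(0, 0, image.size.width, image.size.height);
layer.contents = (__bridge id)image.CGImage; // the layer caches this on the GPU
[self.view.layer addSublayer:layer];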
You can learn a lot about making your UI fast by watching Apple's development videos. They include a lot of tips and "inside info" that are either not in the written documentation, or hard to find/easy to overlook in the docs. The development videos are here: http://developer.apple.com/videos/. Some good ones relevant to your question:
iOS - "Understanding iOS View Compositing"
WWDC 2011 - "Understanding UIKit Rendering"
WWDC 2011 - "Practical Drawing for iOS Developers"
WWDC 2011 - "Core Animation Essentials"
WWDC 2010 - "Core Animation in Practice"

Should all .png files have power-of-two dimensions on the iPhone?

When creating a UIImage from a .png to be displayed on a button, view/cell background, etc. in a standard iPhone application, should the dimensions all be powers of 2 for optimization reasons?
As others have said, no, but you should generally use images with even dimensions. This is because when views are positioned via the center property, an odd-dimensioned image ends up at a half-pixel position, which makes it appear blurry.
As long as you're aware of this it shouldn't really cause you any problems, but it's still a good idea to use even sizes just to be on the safe side.
(This applies for UIKit, not necessarily OpenGL)
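One simple guard, regardless of image size, is to snap the frame back to whole pixels after centering. A sketch (imageView and superview are placeholder names):

// Center the view, then snap its frame to whole-pixel boundaries so an
// odd-sized image doesn't land on a half-pixel offset and blur.
imageView.center = CGPointMake(superview.bounds.size.width / 2,
                               superview.bounds.size.height / 2);
imageView.frame = CGRectIntegral(imageView.frame);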
Apple uses odd and arbitrary dimensions for all the images it adds to the interface on your behalf, such as system toolbar items. The best optimization you can do is anything that reduces compositing, which basically means setting the opaque property of views and layers whenever possible.
If you have the choice between a transparent png that will be composited over a static background and an opaque png with the background already included, you have a chance to optimize. When the images will be sliding around or the background will change, you have to composite, otherwise choose opaque.
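For example (a sketch; only valid when the image genuinely covers every pixel of the view):

// Mark the view as fully opaque so the compositor can skip blending it
// with whatever is underneath.
imageView.opaque = YES;
imageView.backgroundColor = [UIColor blackColor]; // any solid color; not clearColor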
Here is an article on the optimization of iPhone images; it basically tells you why to use PNG files. The size shouldn't matter unless you are using OpenGL ES.
No, this will have little or no benefit. I usually just do my own optimization using Photoshop's "Save for Web & Devices" option.
Please see http://iphonedevelopment.blogspot.com/2008/10/iphone-optimized-pngs.html for a detailed explanation of the iPhone's pre-optimization of PNGs.

Performing iPhone optimization on externally downloaded PNGs

When a PNG is added to an Xcode iPhone project, the build process optimizes it using pngcrush. Once on the device, the image's rendering performance is very fast.
My problem is that my application downloads its PNGs from an external source at runtime (from Picasa Web albums, using the Google Data APIs). Unfortunately, these images' performance is quite bad. When I do custom rendering on top of the image, it seems 100x slower than its internally stored counterparts. I strongly suspect this is because the downloaded images haven't been optimized.
Does anyone know how I can optimize an externally downloaded PNG at runtime on the iPhone? I'm hoping for a class that does this. I even considered adding pngcrush's source code to my app, which seems drastic. I haven't been able to find a decent answer myself. I'd be very grateful for any help.
Thanks!
Update:
Some folks have suggested that it may be due to the file's size, but it isn't. During my tests, I added a toggle button to switch between the embedded version and the downloaded version of exactly the same PNG. The only difference is that the embedded one was optimized by pngcrush during compilation. That does some byte-swapping (from RGBA to BGRA) and pre-multiplication of alpha. (http://iphonedevelopment.blogspot.com/2008/10/iphone-optimized-pngs.html)
Also, the performance I'm referring to isn't the downloading, but the rendering. I superimpose custom painting on top of the image (overriding the drawRect method of the UIView), and it's very choppy when the background is the downloaded version, and very smooth when it's the embedded (and therefore optimized) version. Again, it's exactly the same file. The only difference is the optimization, which I'm hoping I can perform on the image at runtime, on the device, after downloading it.
Thanks again for everyone's help!
That link you posted pretty much answers your question.
During the build process, Xcode pre-processes your PNG so it's in a format that's friendlier to the graphics chip in the iPhone.
PNGs that have not been processed like this will likely use a slower rendering path, one that deals with the non-native format and the fact that the alpha must be computed separately for each color.
So you have two options:
Perform the same work that pngcrush does and swap ordering/pre-multiply alpha. The speed up may be due to one or both of these.
After you have loaded your image, you can "create" a new image from it. This new image should be in the iPhone's native format and so should perform faster. The downside is it could potentially take up a bit more memory.
E.g.
// Redraw the downloaded image into a new bitmap context; the copy ends
// up in the device's native format, so later compositing is fast.
CGRect area = CGRectMake(0, 0, oldImage.size.width, oldImage.size.height);
UIGraphicsBeginImageContext(area.size);
[oldImage drawInRect:area];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The fact that you say it "seems" 100x slower indicates that you have not performed any experimentation, but made a guess (it must be the PNG optimization), and are now going down a path based on a hunch.
You should spend time to confirm what the problem is before you try to solve it. My gut says that PNG optimization shouldn't be the issue: that mostly affects the loading of images, but once they are in memory it doesn't matter what file format they were originally in.
Anyway, you should try an A-B comparison, either get your code to load an optimized PNG from somewhere else and see how it compares, or make a test app that just does some drawing on the two PNG types. Once you've confirmed what the problem is, then you can figure out if you need to compile pngcrush into your app.
On the surface, it sounds like something else is at play here. Any additional image manipulation should only add time until it's displayed onscreen...
Would it be at all possible to get the server to gzip the images by sending the appropriate HTTP header? (If it even helps file size much, that is.)
Temporarily using the pngcrush source might be a good test as well, just to get some measurements.
Are you storing the png at the original downloaded size? If it's a large image it'll take significantly longer to render.
Well, it seems that a good way to do it (since you can't run pngcrush on the iPhone and expect that to speed things up) would be to make your requests through a proxy that runs pngcrush. The proxy would have the horsepower to actually give you some gain over the 100x pain you're feeling.
Try pincrush to convert a normal PNG file into a crushed PNG file.
You say you are drawing on top of the image by overriding a UIView's drawRect: method. Are you trying to do some animation by repeatedly drawing the whole image with your custom stuff on top of it?
You might get better results if you put your custom stuff in a separate view or layer, and let the OS deal with compositing the result over the background. The OS will only update the parts of the screen that you actually change, and won't be repainting the entire image as often.
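A sketch of that separation (MyOverlayView is a hypothetical UIView subclass whose drawRect: paints only your custom annotations):

// Put the static photo in one view and the changing overlay in another,
// so only the overlay's -drawRect: runs when the annotations change.
UIImageView *photoView = [[UIImageView alloc] initWithImage:photo];
MyOverlayView *overlay = [[MyOverlayView alloc] initWithFrame:photoView.bounds];
overlay.opaque = NO;
overlay.backgroundColor = [UIColor clearColor]; // see through to the photo
[photoView addSubview:overlay];
// Later, when only the annotations change:
[overlay setNeedsDisplay]; // repaints the overlay, not the photo underneath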