To clarify the purpose of this question: I know HOW to create complicated views, both with subviews and by using drawRect. I'm trying to fully understand the whens and whys of using one over the other.
I also understand that it doesn't make sense to optimize prematurely, doing something the more difficult way before any profiling. Consider that I'm comfortable with both methods, and now really want a deeper understanding.
A lot of my confusion comes from learning how to make table view scroll performance really smooth and fast. Of course the original source of this method is from the author behind Twitter for iPhone (formerly Tweetie). Basically it says that to make table scrolling buttery smooth, the secret is NOT to use subviews, but instead to do all the drawing in one custom UIView. Essentially it seems that using lots of subviews slows rendering down because they have a lot of overhead and are constantly re-composited over their parent views.
To be fair, this was written when the 3GS was pretty brand spankin' new, and iDevices have gotten much faster since then. Still, this method is regularly suggested on the interwebs and elsewhere for high-performance tables. In fact it's a suggested method in Apple's table sample code, has been recommended in several WWDC videos (Practical Drawing for iOS Developers), and appears in many iOS programming books.
There are even awesome looking tools to design graphics and generate Core Graphics code for them.
So at first I'm led to believe "there's a reason why Core Graphics exists. It's FAST!"
But as soon as I think I get the idea ("favor Core Graphics when possible"), I start seeing that drawRect is often responsible for poor responsiveness in an app, is extremely expensive memory-wise, and really taxes the CPU. Basically, that I should "avoid overriding drawRect" (WWDC 2012, iOS App Performance: Graphics and Animations).
So I guess, like everything, it's complicated. Maybe you can help me and others understand the whens and whys of using drawRect?
I see a couple obvious situations to use Core Graphics:
You have dynamic data (Apple's Stock Chart example); see the sketch after this list
You have a flexible UI element that can't be executed with a simple resizable image
You are creating a dynamic graphic, that once rendered is used in multiple places
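For the dynamic-data case, a minimal sketch of what that can look like. The view and property names here are illustrative inventions, not from Apple's sample:

```objc
#import <UIKit/UIKit.h>

// Hypothetical view that draws a sparkline from dynamic data with
// Core Graphics. SparklineView and its values property are made-up
// names used only for illustration.
@interface SparklineView : UIView
@property (nonatomic, copy) NSArray *values; // NSNumbers between 0 and 1
@end

@implementation SparklineView

- (void)setValues:(NSArray *)values {
    _values = [values copy];
    [self setNeedsDisplay]; // only redraw when the data actually changes
}

- (void)drawRect:(CGRect)rect {
    if (self.values.count < 2) return;

    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetStrokeColorWithColor(ctx, [UIColor blueColor].CGColor);
    CGContextSetLineWidth(ctx, 2.0);

    CGFloat w = self.bounds.size.width;
    CGFloat h = self.bounds.size.height;
    CGFloat step = w / (self.values.count - 1);

    // Build one path through all the data points and stroke it once.
    CGContextMoveToPoint(ctx, 0, h * (1.0 - [self.values[0] floatValue]));
    for (NSUInteger i = 1; i < self.values.count; i++) {
        CGContextAddLineToPoint(ctx, i * step,
                                h * (1.0 - [self.values[i] floatValue]));
    }
    CGContextStrokePath(ctx);
}

@end
```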
I see situations to avoid Core Graphics:
Properties of your view need to be animated separately
You have a relatively small view hierarchy, so any perceived extra effort using CG isn't worth the gain
You want to update pieces of the view without redrawing the whole thing
The layout of your subviews needs to update when the parent view size changes
So bestow your knowledge. In what situations do you reach for drawRect/Core Graphics (that could also be accomplished with subviews)? What factors lead you to that decision? How/why is drawing in one custom view recommended for buttery smooth table cell scrolling, yet Apple advises against drawRect for performance reasons in general? What about simple background images (when do you create them with CG vs using a resizable png image)?
A deep understanding of this subject may not be needed to make worthwhile apps, but I don't love choosing between techniques without being able to explain why. My brain gets mad at me.
Question Update
Thanks for the information everyone. Some clarifying questions here:
If you are drawing something with core graphics, but can accomplish the same thing with UIImageViews and a pre-rendered png, should you always go that route?
A similar question: especially with badass tools like this, when should you consider drawing interface elements in Core Graphics? (Probably when the display of your element is variable, e.g. a button with 20 different color variations. Any other cases?)
Given my understanding in my answer below, could the same performance gains for a table cell possibly be achieved by capturing a snapshot bitmap of your cell after your complex UIView renders itself, displaying that while scrolling, and hiding your complex view? Obviously some pieces would have to be worked out. Just an interesting thought I had.
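To make that thought concrete, here is a minimal sketch of what I mean. SnapshotOfView is a hypothetical helper, and complexView/snapshotView are made-up cell properties:

```objc
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Renders any view's layer into a bitmap-backed UIImage.
UIImage *SnapshotOfView(UIView *view) {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshot;
}

// While the table scrolls, show the cached bitmap and hide the live view:
//   cell.snapshotView.image = SnapshotOfView(cell.complexView);
//   cell.complexView.hidden = YES;
//   cell.snapshotView.hidden = NO;
```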
Stick to UIKit and subviews whenever you can. You can be more productive, and take advantage of all the OO mechanisms that should make things easier to maintain. Use Core Graphics when you can't get the performance you need out of UIKit, or you know trying to hack together drawing effects in UIKit would be more complicated.
The general workflow should be to build the tableviews with subviews. Use Instruments to measure the frame rate on the oldest hardware your app will support. If you can't get 60fps, drop down to CoreGraphics. When you've done this for a while, you get a sense for when UIKit is probably a waste of time.
So, why is Core Graphics fast?
CoreGraphics isn't really fast. If it's being used all the time, you're probably going slow. It's a rich drawing API, which requires its work be done on the CPU, as opposed to a lot of UIKit work that is offloaded to the GPU. If you had to animate a ball moving across the screen, it would be a terrible idea to call setNeedsDisplay on a view 60 times per second. So, if you have sub-components of your view that need to be individually animated, each component should be a separate layer.
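For example, a sketch of moving that ball the cheap way: give the ball its own layer and hand the movement to Core Animation, instead of redrawing in drawRect 60 times per second. ballLayer is assumed to be an already-configured CALayer:

```objc
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

static void MoveBall(CALayer *ballLayer) {
    // The GPU translates the layer's cached bitmap each frame;
    // nothing is redrawn on the CPU.
    CABasicAnimation *move = [CABasicAnimation animationWithKeyPath:@"position"];
    move.fromValue = [NSValue valueWithCGPoint:CGPointMake(20, 200)];
    move.toValue   = [NSValue valueWithCGPoint:CGPointMake(300, 200)];
    move.duration  = 1.0;
    [ballLayer addAnimation:move forKey:@"move"];

    // Update the model value so the layer stays put when the animation ends.
    ballLayer.position = CGPointMake(300, 200);
}
```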
The other problem is that when you don't do custom drawing with drawRect, UIKit can optimize stock views so drawRect is a no-op, or it can take shortcuts with compositing. When you override drawRect, UIKit has to take the slow path because it has no idea what you're doing.
These two problems can be outweighed by benefits in the case of table view cells. After drawRect is called when a view first appears on screen, the contents are cached, and the scrolling is a simple translation performed by the GPU. Because you're dealing with a single view, rather than a complex hierarchy, UIKit's drawRect optimizations become less important. So the bottleneck becomes how much you can optimize your Core Graphics drawing.
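To make the table cell case concrete, here is a minimal sketch of the flat-cell pattern. The class and property names are illustrative, not from Apple's sample code:

```objc
#import <UIKit/UIKit.h>

// One flat view that draws everything itself, so the cell composites as
// a single GPU texture while scrolling.
@interface FlatCellView : UIView
@property (nonatomic, copy) NSString *title;
@property (nonatomic, copy) NSString *subtitle;
@end

@implementation FlatCellView

- (void)setTitle:(NSString *)title {
    _title = [title copy];
    [self setNeedsDisplay]; // redraw once when content changes
}

- (void)setSubtitle:(NSString *)subtitle {
    _subtitle = [subtitle copy];
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    // Opaque white background: cheapest thing for the GPU to composite.
    [[UIColor whiteColor] set];
    UIRectFill(self.bounds);

    [[UIColor blackColor] set];
    [self.title drawAtPoint:CGPointMake(10, 8)
                   withFont:[UIFont boldSystemFontOfSize:16]];

    [[UIColor grayColor] set];
    [self.subtitle drawAtPoint:CGPointMake(10, 30)
                      withFont:[UIFont systemFontOfSize:13]];
}

@end
```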
Whenever you can, use UIKit. Do the simplest implementation that works. Profile. When there's an incentive, optimize.
The difference is that UIView and CALayer essentially deal in fixed images. These images are uploaded to the graphics card (if you know OpenGL, think of an image as a texture, and a UIView/CALayer as a polygon showing such a texture). Once an image is on the GPU, it can be drawn very quickly, and even several times, and (with a slight performance penalty) even with varying levels of alpha transparency on top of other images.
CoreGraphics/Quartz is an API for generating images. It takes a pixel buffer (again, think OpenGL texture) and changes individual pixels inside it. This all happens in RAM and on the CPU, and only once Quartz is done does the image get "flushed" back to the GPU. This round-trip of getting an image from the GPU, changing it, then uploading the whole image (or at least a comparatively large chunk of it) back to the GPU is rather slow. Also, the actual drawing that Quartz does, while really fast for what you are doing, is way slower than what the GPU does.
That's obvious, considering the GPU is mostly moving around unchanged pixels in big chunks. Quartz does random-access of pixels and shares the CPU with networking, audio etc. Also, if you have several elements that you draw using Quartz at the same time, you have to re-draw all of them when one changes, then upload the whole chunk, while if you change one image and then let UIViews or CALayers paste it onto your other images, you can get away with uploading much smaller amounts of data to the GPU.
When you don't implement -drawRect:, most views can just be optimized away. They don't contain any pixels, so can't draw anything. Other views, like UIImageView, only draw a UIImage (which, again, is essentially a reference to a texture, which has probably already been loaded onto the GPU). So if you draw the same UIImage 5 times using a UIImageView, it is only uploaded to the GPU once, and then drawn to the display in 5 different locations, saving us time and CPU.
When you implement -drawRect:, this causes a new image to be created. You then draw into that on the CPU using Quartz. If you draw a UIImage in your drawRect, it likely downloads the image from the GPU, copies it into the image you're drawing to, and once you're done, uploads this second copy of the image back to the graphics card. So you're using twice the GPU memory on the device.
So the fastest way to draw is usually to keep static content separated from changing content (in separate UIViews/UIView subclasses/CALayers). Load static content as a UIImage and draw it using a UIImageView, and put content generated dynamically at runtime in a drawRect. If you have content that gets drawn repeatedly but doesn't itself change (e.g. 3 icons that get shown in the same slot to indicate some status), use UIImageView as well.
One caveat: There is such a thing as having too many UIViews. Particularly transparent areas take a bigger toll on the GPU to draw, because they need to be mixed with other pixels behind them when displayed. This is why you can mark a UIView as "opaque", to indicate to the GPU that it can just obliterate everything behind that image.
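In code that's just two lines, but it only helps when the view truly has no transparent pixels (here, view stands for whatever view you're configuring):

```objc
// Tell the compositor it never needs to blend this view with what's
// behind it. Only valid if the view really fills itself with opaque
// pixels; give it a solid backgroundColor to match.
view.opaque = YES;
view.backgroundColor = [UIColor whiteColor];
```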
If you have content that is generated dynamically at runtime but stays the same for the duration of the application's lifetime (e.g. a label containing the user name) it may actually make sense to just draw the whole thing once using Quartz, with the text, the button border etc., as part of the background. But that's usually an optimization that's not needed unless the Instruments app tells you differently.
I'm going to try and keep a summary of what I'm extrapolating from others' answers here, and ask clarifying questions in an update to the original question. But I encourage others to keep answers coming and vote up those who have provided good information.
General Approach
It's quite clear that the general approach, as Ben Sandofsky mentioned in his answer, should be "Whenever you can, use UIKit. Do the simplest implementation that works. Profile. When there's an incentive, optimize."
The Why
There are two main possible bottlenecks on an iDevice: the CPU and the GPU
CPU is responsible for the initial drawing/rendering of a view
GPU is responsible for a majority of animation (Core Animation), layer effects, compositing, etc.
UIView has a lot of optimizations, caching, etc, built in for handling complex view hierarchies
When overriding drawRect you miss out on a lot of the benefits UIViews provide, and it's generally slower than letting UIView handle the rendering.
Drawing a cell's contents in one flat UIView can greatly improve your FPS on scrolling tables.
Like I said above, the CPU and the GPU are the two possible bottlenecks. Since they generally handle different things, you have to pay attention to which bottleneck you are running up against. In the case of scrolling tables, the reason Core Graphics can greatly improve your FPS is not that it draws faster.
In fact, Core Graphics may very well be slower than a nested UIView hierarchy for the initial render. However, it seems the typical reason for choppy scrolling is you are bottlenecking the GPU, so you need to address that.
Why overriding drawRect (using core graphics) can help table scrolling:
From what I understand, the GPU is not responsible for the initial rendering of the views; instead it is handed textures, or bitmaps (sometimes with some layer properties), after they have been rendered. It is then responsible for compositing the bitmaps, rendering all those layer effects, and the majority of animation (Core Animation).
In the case of table view cells, the GPU can be bottlenecked by complex view hierarchies, because instead of animating one bitmap, it is animating the parent view, doing subview layout calculations, rendering layer effects, and compositing all the subviews. So instead of animating one bitmap, it is responsible for the relationships of a bunch of bitmaps, and how they interact, for the same pixel area.
So in summary, the reason drawing your cell in one view with core graphics can speed up your table scrolling is NOT because it's drawing faster, but because it is reducing the load on the GPU, which is the bottleneck giving you trouble in that particular scenario.
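A related knob, not mentioned in the answers above but worth comparing in Instruments, is asking Core Animation to flatten the hierarchy for you rather than rewriting the cell with Core Graphics. A sketch, assuming a UITableViewCell named cell:

```objc
#import <QuartzCore/QuartzCore.h>

// Rasterize the cell's whole layer tree into one cached bitmap. This
// trades GPU compositing work for extra memory, and the cache is thrown
// away whenever the contents change, so profile before committing to it.
cell.layer.shouldRasterize = YES;
cell.layer.rasterizationScale = [UIScreen mainScreen].scale;
```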
I am a game developer, and I was asking the same questions when my friend told me that my UIImageView-based view hierarchy was going to slow down my game and make it terrible. I then proceeded to research everything I could find about whether to use UIViews, CoreGraphics, OpenGL, or something third-party like Cocos2D. The consistent answer I got from friends, teachers, and Apple engineers at WWDC was that there won't be much of a difference in the end, because at some level they are all doing the same thing. Higher-level options like UIViews rely on the lower-level options like CoreGraphics and OpenGL; they're just wrapped in code to make them easier for you to use.
Don't use CoreGraphics if you are just going to end up re-writing the UIView. However, you can gain some speed from using CoreGraphics, as long as you do all your drawing in one view, but is it really worth it? The answer I have found is usually no. When I first started my game, I was working with the iPhone 3G. As my game grew in complexity, I began to see some lag, but with the newer devices it was completely unnoticeable. Now I have plenty of action going on, and the only lag seems to be a drop in 1-3 fps when playing in the most complex level on an iPhone 4.
Still I decided to use Instruments to find the functions that were taking up the most time. I found that the problems were not related to my use of UIViews. Instead, it was repeatedly calling CGRectMake for certain collision sensing calculations and loading image and audio files separately for certain classes that use the same images, rather than having them draw from one central storage class.
So in the end, you might be able to achieve a slight gain from using CoreGraphics, but usually it will not be worth it or may not have any effect at all. The only time I use CoreGraphics is when drawing geometric shapes rather than text and images.
I need to make a design decision about how to approach an app which needs to render a few 3D objects on top of an image texture.
The result needs to be rendered and saved as a UIImage.
The graphic design (from the client) expects standard UIKit controls to manipulate the 3D world.
The OpenGL view (a CAEAGLLayer-backed layer) needs to be inside a UIScrollView for a cheap and natural scroll and zoom implementation.
There are just a few extra controls to manipulate rotation, scale, and translation of the few 3D objects.
There are not many triangles expected (100-200 at most) and 2-3 textures (1 mask). It does not even have to refresh constantly, just when some transformation or zoom changes.
CAEAGLLayer does not need to be opaque.
I would go with a Core Animation solution, but rendering 3D-transformed CALayers into a CGContextRef is not supported (neither is masking).
What is the real performance cost of putting a CAEAGLLayer inside a UIScrollView and mixing it with a few UIKit views?
How many triangles per second can I expect to be rendered at a smooth frame rate (30fps will do), so I can make the best decision possible?
I know there are similar questions out there already, but none of the answers provides specific numbers, which could help with estimating expected rendering results.
Per Allan Schaffer, who gives the WWDC and WWDC on tour OpenGL speeches, OpenGL itself is not so much a special case — it's that anything that changes will cause everything on top of it to be recomposited. I spoke to him specifically about an app with a live updating video view underneath a live updating OpenGL view and he said that sort of thing is the pathological worst case. It's generally not that expensive to throw a few quite static views on top, such as using a UILabel to display the current score over an OpenGL game.
In your case, I think you can probably largely avoid the problem. Don't put the GL view inside the scroll view, but rather make the scroll view non-opaque and put the GL view behind it. Catch scrollViewDidScroll: (and the corresponding zoom messages) and make related OpenGL adjustments. I can speak from experience and say I've done exactly that on an iPad 1 with no performance issues. Particularly for the sort of model you're talking about I don't imagine a problem.
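A sketch of that arrangement, inside the hosting view controller. glView and its camera setters are hypothetical stand-ins for however your renderer accepts offsets:

```objc
#import <UIKit/UIKit.h>

// The scroll view is transparent and sits on top of the GL view; we use
// it purely as a source of scroll/zoom gestures.
- (void)viewDidLoad {
    [super viewDidLoad];
    self.scrollView.opaque = NO;
    self.scrollView.backgroundColor = [UIColor clearColor];
    self.scrollView.delegate = self;
    self.scrollView.contentSize = CGSizeMake(2048, 2048); // virtual canvas size
}

- (void)scrollViewDidScroll:(UIScrollView *)scrollView {
    // Move the GL camera instead of moving the GL view itself.
    [self.glView setCameraOffset:scrollView.contentOffset]; // hypothetical API
}

- (void)scrollViewDidZoom:(UIScrollView *)scrollView {
    [self.glView setCameraScale:scrollView.zoomScale]; // hypothetical API
}
```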
I am working on a sample in which I have placed two textures, one above the other. What I want is that whenever the user moves a finger on the screen, the view underneath is revealed along the path of the movement. Wiping away the front view to reveal the view underneath is what I am looking for.
I would like to hear some ideas/thoughts on implementing this feature using OpenGL ES. Any related pointers will be highly appreciated.
Thanks in advance.
This does not sound performance-intensive so simple code can trump complicated tuned operations.
You don't need to use OpenGL. You can simply have two images, front and back, with the front supporting an alpha channel. Each time you get a hit or a move, you clear a circular patch wherever the impact is, for some chosen radius or such.
And then queue up a redraw. The redraw draws the two bitmaps, back first, then front.
If possible, try to queue a redraw for just the area where you have updated the front since the last draw.
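Since it's so little code, a sketch of the non-OpenGL version. EraseCircle is a hypothetical helper, and the image views are assumed to be stacked back-to-front:

```objc
#import <UIKit/UIKit.h>

// Returns a copy of frontImage with a transparent circle punched out at p.
UIImage *EraseCircle(UIImage *frontImage, CGPoint p, CGFloat radius) {
    UIGraphicsBeginImageContextWithOptions(frontImage.size, NO, frontImage.scale);
    [frontImage drawAtPoint:CGPointZero];

    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(ctx, kCGBlendModeClear); // paint transparency
    CGContextFillEllipseInRect(ctx, CGRectMake(p.x - radius, p.y - radius,
                                               radius * 2, radius * 2));

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}

// In touchesMoved:, update the front image view; ordinary view
// compositing then shows the back image through the hole:
//   frontImageView.image = EraseCircle(frontImageView.image, touchPoint, 30);
```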
I am working on a 2D scrolling game for iPhone. I have a large background image, say 480×6000 pixels, of which only a part is visible (exactly one screen's worth, 480×320 pixels). What is the best way to get such a background on the screen?
Currently I have the background split into several textures (to get around the maximum texture size limit) and draw the whole background in each frame as a textured triangle strip. The scrolling is done by translating the modelview matrix. The scissor box is set to the window size, 480×320 pixels. This is not meant to be fast; I just wanted working code before I get to optimizing.
I thought that maybe the OpenGL implementation would be smart enough to discard the invisible portion of the background, but according to some measuring code I wrote, it looks like the background takes 7 ms to draw on average and 84 ms at maximum. (This is measured in the simulator.) This is about half of the whole render loop, i.e. quite slow for me.
Drawing the background should be as easy as copying some 480×320 pixels from one part of the VRAM to another, or, in other words, blazing fast. What is the best way to get closer to such performance?
That's the fast way of doing it. Things you can do to improve performance:
Try different texture formats. Presumably the SDK docs have details on the preferred format, and presumably smaller is better.
Cull out entirely offscreen tiles yourself
Split the image into smaller textures
I'm assuming you're drawing at a 1:1 zoom-level; is that the case?
Edit: Oops. Having read your question more carefully, I have to offer another piece of advice: Timings made on the simulator are worthless.
The quick solution:
Create a geometry matrix of tiles (quads preferably) so that there is at least one row/column of off-screen tiles on all sides of the viewable area.
Map textures to all those tiles.
As soon as one tile is outside the viewable area you can release this texture and bind a new one.
Move the tiles using a modulo of the tile width and tile height as the position (so that a tile repositions itself at its starting position once it has moved exactly one tile in length), as sketched below. Also remember to remap the textures during that operation. This allows you to have a very small grid/very little texture memory loaded at any given time, which I guess is especially important in GL ES.
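A sketch of the modulo step in isolation (all names are illustrative):

```objc
#include <math.h>

// scrollX: how far the world has scrolled; tileW: tile width;
// column: the tile's index in the on-screen grid.
static float TileScreenX(float scrollX, float tileW, int column) {
    // Wrap the offset so each tile snaps back to its starting column
    // after the grid has moved exactly one tile width.
    float offset = fmodf(scrollX, tileW);
    return column * tileW - offset;
}
// When a tile wraps around, rebind its texture to the image data for
// the new world column it now represents.
```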
If you have memory to spare and are still plagued by slow load speeds (although you shouldn't be for that amount of textures), you could build a texture streaming engine that preloads textures into faster memory (whatever that may be on your target device) when you reach a new area. Texture mapping will in that case read from that faster memory when needed. Just be sure that you can preload without using up all memory, and remember to release textures dynamically when they're not needed.
Here is a link to a GL (not ES) tile engine. I haven't used it myself so I cannot vouch for its functionality but it might be able to help you: http://www.mesa3d.org/brianp/TR.html
I'm working on a game where I need to let the player look at a plane (e.g., a wall) through a lens (e.g., a magnifying glass). The game is to run on the iPhone, so my choices are Core Animation or OpenGL ES.
My first idea (that I have not yet tried) is to do this using Core Animation.
Create the wall and objects on it using CALayers.
Use CALayer's renderInContext: method to create an image of the wall as a background layer.
Crop the image to the lens shape, scale it up, then draw it over the background.
Draw the lens frame and "shiny glass" layer on top of all that.
Notes:
I am a lot more familiar with Core Animation than OpenGL, so maybe there is a much better way to do this with OpenGL. (Please tell me!)
If I am using CALayers that are not attached to a view, do I have to manage all animations myself? Or is there a straightforward way to run them manually?
3D perspective is not important; I'm just magnifying a flat wall.
I'm concerned that doing all of the above will be too slow for smooth animation.
Before I commit a lot of code to writing this, my question is: do you see any pitfalls in the plan above, or can you recommend an easier way to do this?
I have implemented a magnifying glass on the iPhone using a UIView. CA was way too slow.
You can draw a CGImage into a UIView using its drawRect method. Here are the steps in my drawRect (a rough sketch follows the list):
get the current context
create a path for clipping the view (circle)
scale the current transformation matrix (CTM)
move the current transformation matrix
draw the CGImage
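Roughly, those steps as code. magnification, touchPoint, sourceRect and sourceImage are hypothetical properties of the magnifier view, not from any framework:

```objc
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();  // 1. current context

    CGContextAddEllipseInRect(ctx, self.bounds);       // 2. circular clip path
    CGContextClip(ctx);

    CGFloat m = self.magnification;                    // 3. scale the CTM
    CGContextScaleCTM(ctx, m, m);

    // 4. move the CTM so the touched point lands under the lens center
    CGContextTranslateCTM(ctx,
                          self.bounds.size.width / (2 * m) - self.touchPoint.x,
                          self.bounds.size.height / (2 * m) - self.touchPoint.y);

    // 5. draw the prerendered CGImage (note: CGContextDrawImage uses a
    // flipped coordinate system, so you may need an extra flip transform)
    CGContextDrawImage(ctx, self.sourceRect, self.sourceImage);
}
```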
You can have the CGImage prerendered, then it's in the graphics memory.
If you want something dynamic, draw it from scratch instead of drawing a CGImage.
Very fast, looks great.
That is how I'd do it, it sounds like a good plan.
Whether you choose OGL or CA the basic principle is the same so I would stick with what you're more comfortable with.
Identify the region you wish to magnify
Render this region to a separate surface
Render any border/overlay on top of the surface
Render your surface enlarged onto the main scene, clipping appropriately.
In terms of performance you will have to try it and see (just make sure you test on actual hardware, because the simulator is far faster than the hardware). If it IS too slow then you can look at doing steps 2/3 less frequently, e.g. every 2-3 frames. This will give some magnification lag, but it may be perfectly acceptable.
I suspect that performance between OGL and CA will be roughly equivalent. CA is built on top of the OGL libraries, but your cost is going to be the actual rendering, not the time spent in the layers.