To clarify the purpose of this question: I know HOW to create complicated views, both with subviews and with drawRect. I'm trying to fully understand the whens and whys of using one over the other.
I also understand that it doesn't make sense to optimize much ahead of time, or to do something the more difficult way before doing any profiling. Consider that I'm comfortable with both methods, and now really want a deeper understanding.
A lot of my confusion comes from learning how to make table view scrolling really smooth and fast. Of course the original source of this method is from the author behind Twitter for iPhone (formerly Tweetie). Basically it says that to make table scrolling buttery smooth, the secret is NOT to use subviews, but instead do all the drawing in one custom UIView. Essentially it seems that using lots of subviews slows rendering down because they have lots of overhead, and are constantly re-composited over their parent views.
To be fair, this was written when the 3GS was pretty brand spankin' new, and iDevices have gotten much faster since then. Still, this method is regularly suggested on the interwebs and elsewhere for high-performance tables. In fact, it's a suggested method in Apple's Table Sample Code, has been recommended in several WWDC videos (Practical Drawing for iOS Developers), and appears in many iOS programming books.
There are even awesome looking tools to design graphics and generate Core Graphics code for them.
So at first I'm led to believe "there's a reason why Core Graphics exists. It's FAST!"
But as soon as I think I get the idea "Favor Core Graphics when possible", I start seeing that drawRect is often responsible for poor responsiveness in an app, is extremely expensive memory wise, and really taxes the CPU. Basically, that I should "Avoid overriding drawRect" (WWDC 2012 iOS App Performance: Graphics and Animations)
So I guess, like everything, it's complicated. Maybe you can help me and others understand the whens and whys of using drawRect?
I see a couple obvious situations to use Core Graphics:
- You have dynamic data (Apple's Stock Chart example)
- You have a flexible UI element that can't be executed with a simple resizable image
- You are creating a dynamic graphic that, once rendered, is used in multiple places
I see situations to avoid Core Graphics:
- Properties of your view need to be animated separately
- You have a relatively small view hierarchy, so any perceived extra effort using CG isn't worth the gain
- You want to update pieces of the view without redrawing the whole thing
- The layout of your subviews needs to update when the parent view size changes
So bestow your knowledge. In what situations do you reach for drawRect/Core Graphics (when the same result could also be accomplished with subviews)? What factors lead you to that decision? How/why is drawing in one custom view recommended for buttery smooth table cell scrolling, yet Apple advises against drawRect for performance reasons in general? What about simple background images (when do you create them with CG vs. using a resizable PNG image)?
A deep understanding of this subject may not be needed to make worthwhile apps, but I don't love choosing between techniques without being able to explain why. My brain gets mad at me.
Question Update
Thanks for the information everyone. Some clarifying questions here:
If you are drawing something with core graphics, but can accomplish the same thing with UIImageViews and a pre-rendered png, should you always go that route?
A similar question: Especially with badass tools like this, when should you consider drawing interface elements in core graphics? (Probably when the display of your element is variable. e.g. a button with 20 different color variations. Any other cases?)
Given my understanding in my answer below, could the same performance gains for a table cell be achieved by effectively capturing a snapshot bitmap of your cell after your complex UIView renders itself, and displaying that while scrolling and hiding your complex view? Obviously some pieces would have to be worked out. Just an interesting thought I had.
Stick to UIKit and subviews whenever you can. You'll be more productive, and take advantage of all the OO mechanisms that should make things easier to maintain. Use Core Graphics when you can't get the performance you need out of UIKit, or you know that trying to hack together drawing effects in UIKit would be more complicated.
The general workflow should be to build the table views with subviews. Use Instruments to measure the frame rate on the oldest hardware your app will support. If you can't get 60 fps, drop down to Core Graphics. When you've done this for a while, you get a sense for when UIKit is probably a waste of time.
So, why is Core Graphics fast?
CoreGraphics isn't really fast. If it's being used all the time, you're probably going slow. It's a rich drawing API which requires its work to be done on the CPU, as opposed to a lot of UIKit work that is offloaded to the GPU. If you had to animate a ball moving across the screen, it would be a terrible idea to call setNeedsDisplay on a view 60 times per second. So if you have sub-components of your view that need to be individually animated, each component should be a separate layer.
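As a rough illustration of that last point (the view and image names here are made up, and some containerView with QuartzCore linked is assumed): give the moving piece its own layer whose contents are rendered once, and let Core Animation move the cached bitmap instead of redrawing every frame.

```objc
// Hypothetical sketch: the "ball" gets its own layer with a bitmap rendered once.
CALayer *ballLayer = [CALayer layer];
ballLayer.contents = (__bridge id)[UIImage imageNamed:@"ball"].CGImage;
ballLayer.frame = CGRectMake(0, 100, 40, 40);
[containerView.layer addSublayer:ballLayer];

// Core Animation slides the cached bitmap on the GPU; drawRect never runs again.
CABasicAnimation *move = [CABasicAnimation animationWithKeyPath:@"position.x"];
move.toValue = @(CGRectGetWidth(containerView.bounds));
move.duration = 1.0;
[ballLayer addAnimation:move forKey:@"moveBall"];
```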
The other problem is that when you don't do custom drawing with drawRect, UIKit can optimize stock views so drawRect is a no-op, or it can take shortcuts with compositing. When you override drawRect, UIKit has to take the slow path because it has no idea what you're doing.
These two problems can be outweighed by benefits in the case of table view cells. After drawRect is called when a view first appears on screen, the contents are cached, and the scrolling is a simple translation performed by the GPU. Because you're dealing with a single view, rather than a complex hierarchy, UIKit's drawRect optimizations become less important. So the bottleneck becomes how much you can optimize your Core Graphics drawing.
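To make that concrete, here's a minimal, hedged sketch of the flat-cell technique (the class and its properties are made up, not taken from any of the sources above): everything is drawn in a single drawRect:, and the resulting cached bitmap is what the GPU translates during scrolling.

```objc
#import <UIKit/UIKit.h>

// Hypothetical content view, added as the single subview of a cell's contentView.
@interface TweetContentView : UIView
@property (nonatomic, copy) NSString *username;
@property (nonatomic, copy) NSString *body;
@end

@implementation TweetContentView

- (void)setUsername:(NSString *)username {
    _username = [username copy];
    [self setNeedsDisplay];   // redraw once; the result is cached as a bitmap
}

- (void)drawRect:(CGRect)rect {
    [[UIColor whiteColor] setFill];
    UIRectFill(self.bounds);

    [self.username drawAtPoint:CGPointMake(10, 5)
                withAttributes:@{ NSFontAttributeName : [UIFont boldSystemFontOfSize:15] }];
    [self.body drawInRect:CGRectMake(10, 25, CGRectGetWidth(self.bounds) - 20, 60)
           withAttributes:@{ NSFontAttributeName : [UIFont systemFontOfSize:13] }];
}

@end
```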
Whenever you can, use UIKit. Do the simplest implementation that works. Profile. When there's an incentive, optimize.
The difference is that UIView and CALayer essentially deal in fixed images. These images are uploaded to the graphics card (if you know OpenGL, think of an image as a texture, and a UIView/CALayer as a polygon showing such a texture). Once an image is on the GPU, it can be drawn very quickly, and even several times, and (with a slight performance penalty) even with varying levels of alpha transparency on top of other images.
CoreGraphics/Quartz is an API for generating images. It takes a pixel buffer (again, think OpenGL texture) and changes individual pixels inside it. This all happens in RAM and on the CPU, and only once Quartz is done, does the image get "flushed" back to the GPU. This round-trip of getting an image from the GPU, changing it, then uploading the whole image (or at least a comparatively large chunk of it) back to the GPU is rather slow. Also, the actual drawing that Quartz does, while really fast for what you are doing, is way slower than what the GPU does.
That's obvious, considering the GPU is mostly moving around unchanged pixels in big chunks. Quartz does random-access of pixels and shares the CPU with networking, audio etc. Also, if you have several elements that you draw using Quartz at the same time, you have to re-draw all of them when one changes, then upload the whole chunk, while if you change one image and then let UIViews or CALayers paste it onto your other images, you can get away with uploading much smaller amounts of data to the GPU.
When you don't implement -drawRect:, most views can just be optimized away. They don't contain any pixels, so can't draw anything. Other views, like UIImageView, only draw a UIImage (which, again, is essentially a reference to a texture, which has probably already been loaded onto the GPU). So if you draw the same UIImage 5 times using a UIImageView, it is only uploaded to the GPU once, and then drawn to the display in 5 different locations, saving us time and CPU.
When you implement -drawRect:, this causes a new image to be created. You then draw into that on the CPU using Quartz. If you draw a UIImage in your drawRect, it likely downloads the image from the GPU, copies it into the image you're drawing to, and once you're done, uploads this second copy of the image back to the graphics card. So you're using twice the GPU memory on the device.
So the fastest way to draw is usually to keep static content separated from changing content (in separate UIViews/UIView subclasses/CALayers). Load static content as a UIImage and draw it using a UIImageView, and put content generated dynamically at runtime in a drawRect. If you have content that gets drawn repeatedly but by itself doesn't change (e.g. 3 icons that get shown in the same slot to indicate some status), use UIImageView as well.
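A small sketch of that split (ChartOverlayView is a hypothetical drawRect-based view, and the image name is made up): the static artwork lives in a UIImageView, and only the changing part redraws itself.

```objc
// Static artwork: a stretchable image the GPU can cache and composite cheaply.
UIImageView *background = [[UIImageView alloc] initWithFrame:container.bounds];
background.image = [[UIImage imageNamed:@"cardBackground"]
    resizableImageWithCapInsets:UIEdgeInsetsMake(8, 8, 8, 8)];
[container addSubview:background];

// Dynamic content: a hypothetical view whose -drawRect: renders only what changes.
ChartOverlayView *overlay = [[ChartOverlayView alloc] initWithFrame:container.bounds];
overlay.opaque = NO;   // it sits on top of the image, so it needs transparency
[container addSubview:overlay];
```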
One caveat: there is such a thing as having too many UIViews. Transparent areas in particular take a bigger toll on the GPU to draw, because they need to be blended with the pixels behind them when displayed. This is why you can mark a UIView as "opaque": to indicate to the GPU that it can just obliterate everything behind that image.
If you have content that is generated dynamically at runtime but stays the same for the duration of the application's lifetime (e.g. a label containing the user name) it may actually make sense to just draw the whole thing once using Quartz, with the text, the button border etc., as part of the background. But that's usually an optimization that's not needed unless the Instruments app tells you differently.
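If you do go that route, a hedged sketch of the one-time render might look like this (the method name and parameters are purely illustrative): draw the chrome and the user name into a bitmap once, hand the result to a UIImageView, and never touch drawRect again for it.

```objc
// Hypothetical one-time render of content that never changes after launch.
- (UIImage *)renderedHeaderForUserName:(NSString *)name size:(CGSize)size
{
    UIGraphicsBeginImageContextWithOptions(size, YES /* opaque */, 0 /* screen scale */);

    [[UIColor whiteColor] setFill];
    UIRectFill(CGRectMake(0, 0, size.width, size.height));

    [name drawAtPoint:CGPointMake(10, 10)
       withAttributes:@{ NSFontAttributeName : [UIFont boldSystemFontOfSize:17] }];

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;   // display with a UIImageView; it is uploaded to the GPU once
}
```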
I'm going to try and keep a summary of what I'm extrapolating from other's answers here, and ask clarifying questions in an update to the original question. But I encourage others to keep answers coming and vote up those who have provided good information.
General Approach
It's quite clear that the general approach, as Ben Sandofsky mentioned in his answer, should be "Whenever you can, use UIKit. Do the simplest implementation that works. Profile. When there's an incentive, optimize."
The Why
There are two main possible bottlenecks in an iDevice: the CPU and the GPU.
- The CPU is responsible for the initial drawing/rendering of a view
- The GPU is responsible for the majority of animation (Core Animation), layer effects, compositing, etc.
- UIView has a lot of optimizations, caching, etc. built in for handling complex view hierarchies
- When overriding drawRect you miss out on a lot of the benefits UIViews provide, and it's generally slower than letting UIView handle the rendering
- Drawing a cell's contents in one flat UIView can greatly improve your FPS on scrolling tables
Like I said above, the CPU and GPU are two possible bottlenecks. Since they generally handle different things, you have to pay attention to which bottleneck you are running up against. In the case of scrolling tables, it's not that Core Graphics draws faster; that's not why it can greatly improve your FPS.
In fact, Core Graphics may very well be slower than a nested UIView hierarchy for the initial render. However, it seems the typical reason for choppy scrolling is you are bottlenecking the GPU, so you need to address that.
Why overriding drawRect (using core graphics) can help table scrolling:
From what I understand, the GPU is not responsible for the initial rendering of the views; instead it is handed textures, or bitmaps, sometimes with some layer properties, after they have been rendered. It is then responsible for compositing the bitmaps, rendering all those layer effects, and the majority of animation (Core Animation).
In the case of table view cells, the GPU can be bottlenecked by complex view hierarchies, because instead of animating one bitmap, it is animating the parent view, doing subview layout calculations, rendering layer effects, and compositing all the subviews. So instead of animating one bitmap, it is responsible for the relationship of a bunch of bitmaps, and how they interact, for the same pixel area.
So in summary, the reason drawing your cell in one view with Core Graphics can speed up your table scrolling is NOT that it draws faster, but that it reduces the load on the GPU, which is the bottleneck giving you trouble in that particular scenario.
I am a game developer, and I was asking the same questions when my friend told me that my UIImageView-based view hierarchy was going to slow down my game and make it terrible. I then proceeded to research everything I could find about whether to use UIViews, CoreGraphics, OpenGL, or something third-party like Cocos2D. The consistent answer I got from friends, teachers, and Apple engineers at WWDC was that there won't be much of a difference in the end, because at some level they are all doing the same thing. Higher-level options like UIViews rely on the lower-level options like CoreGraphics and OpenGL; they're just wrapped in code to make them easier for you to use.
Don't use CoreGraphics if you are just going to end up re-writing UIView. However, you can gain some speed from using CoreGraphics, as long as you do all your drawing in one view, but is it really worth it? The answer I have found is usually no. When I first started my game, I was working with the iPhone 3G. As my game grew in complexity, I began to see some lag, but on newer devices it was completely unnoticeable. Now I have plenty of action going on, and the only lag seems to be a drop of 1-3 fps when playing the most complex level on an iPhone 4.
Still, I decided to use Instruments to find the functions that were taking up the most time. I found that the problems were not related to my use of UIViews. Instead, it was repeatedly calling CGRectMake for certain collision-sensing calculations, and loading images and audio files separately in classes that use the same assets, rather than having them draw from one central storage class.
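For what it's worth, a minimal sketch of that "central storage class" idea (this is my own illustration, not the author's code; names and structure are assumptions): one shared cache so classes that use the same artwork load it from disk only once.

```objc
#import <UIKit/UIKit.h>

// Illustrative shared image store; the class name and lookup policy are assumptions.
@interface ImageStore : NSObject
+ (UIImage *)imageNamed:(NSString *)name;
@end

@implementation ImageStore

+ (UIImage *)imageNamed:(NSString *)name
{
    static NSMutableDictionary *cache;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{ cache = [NSMutableDictionary dictionary]; });

    UIImage *image = cache[name];
    if (!image) {
        NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];
        image = [UIImage imageWithContentsOfFile:path];   // hits disk only once
        if (image) cache[name] = image;
    }
    return image;
}

@end
```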
So in the end, you might be able to achieve a slight gain from using CoreGraphics, but usually it will not be worth it or may not have any effect at all. The only time I use CoreGraphics is when drawing geometric shapes rather than text and images.
I have an infinite scrollview in which I add images as the user scrolls. Those images have varying heights and I've been trying to come up with the best way of finding a clear space inside the current bounds of the view that would allow me to add the image view.
Is there anything built-in that would make my search more efficient?
The problem is I want the images to be sort of glued to one another with no blank space between them. Making the search through 320x480 pixels tends to be quite a CPU hog. Does anyone know an efficient method to do it?
Thanks!
It seems that you're scrolling this thing vertically (you mentioned varying image heights).
There's nothing built into UIScrollView that will do this for you. You'll have to track your UIImageView subviews manually. You could simply maintain the max y coordinate occupied by your images as you add them.
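A minimal sketch of that bookkeeping (maxContentY is a hypothetical property, not anything from your code): append each image at the current bottom edge and grow the content size as you go.

```objc
// Hypothetical append: place the new image at the running bottom edge.
CGFloat nextY = self.maxContentY;   // bottom of the last image added

UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
imageView.frame = CGRectMake(0, nextY, image.size.width, image.size.height);
[self.scrollView addSubview:imageView];

self.maxContentY = CGRectGetMaxY(imageView.frame);
self.scrollView.contentSize = CGSizeMake(CGRectGetWidth(self.scrollView.bounds),
                                         self.maxContentY);
```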
You might consider using UITableView instead, and implementing a very customized tableView:heightForRowAtIndexPath: in your delegate. You would probably need to do something special with the actual cells as well, but it would seem to make your job a little easier.
Also, for what it's worth, you might find a way to avoid making your solution infinite. Be careful about your memory footprint! iOS will shut your app off if things get out of hand.
UPDATE
Ok, now I understand what you're going for. I had imagined that you were presenting photographs or something rectangular like that. If I were trying to cover a scroll view with UILeafs (wah wah) I would take a statistical approach. I would 'paint' leaves randomly along horizontal/vertical strips as the user scrolls. Perhaps that's what you're doing already? Whatever you're doing I think it looks good.
Now I guess that the reason you're asking is to prevent the little random white spots that show through - is that right? If I may suggest a different solution: try to color the background of your scroll view to something earthy that looks good if it shows through here and there.
Also, it occurred to me that you could use a larger template image -- something that already has a nice distribution of leaves -- with transparency all along the outside outline of the leaves but nowhere else. Then you could tile these, but with overlap, so that the alpha just shows through to the leaves below. You could have a number of these images so that it doesn't look obvious. This would take away all of the uncertainty and make your retiling very efficient.
Also, consider learning about Core Animation (CALayer in particular) and Core Graphics/Quartz 2D. Proper use of these libraries will probably yield great improvements in rendering speed.
UPDATE 2:
If your images are all 150px wide, then split your scrollview into columns and add/remove based on those (as discussed in chat).
Good luck!
I have a big Gantt chart that I want to be visible on an iPhone.
It is 7200 x 1800 px and consists of ~600 bars, each of which is a UILabel.
It is to look like this:
Now I've gotten it to work, and at ~100 bars I can make it run quite smoothly by simply adding them all to the scroll view. However, with the full 600 (or more eventually) it simply crashes when I instantiate all those UILabels and add them all to the scroll view as subviews.
So what I've done is create only the UILabels for the currently visible rows, and as the user scrolls up and down it removes the invisible UILabels and adds the newly visible ones.
However, this jerks quite noticeably as you scroll vertically and it crosses each row boundary, since it has to render another row and remove the old one.
Does anyone have any suggestions to solve this? Any idea what the slow part is? Instantiating the UILabels, adding them as subviews, or something else?
All help will be greatly appreciated.
Apple has some really good demo code that shows how to do this. Check out TiledScrollView.m especially the layoutSubviews method.
Other things you might consider:
If your labels are quite long horizontally you may need to break them into smaller chunks. "Quite long" in this context means wider than the screen.
Make sure your UILabels are opaque. Scrolling things that require compositing adds extra overhead which may account for some of your issues.
Looking at your screenshot, the row and column headers are not opaque and are using alpha. While this is a nice effect, it may be worth temporarily making them opaque too, just to see if this is contributing to your problems. I don't think it's contributing too much, though; the area being composited is quite small.
Just a thought, but could the issue be that even though you are caching and reusing the labels, the scroll view is still retaining them? So even though you may only have a few labels, each is being retained hundreds of times. If that is so, then I would think the scroll view is still effectively trying to manage those hundreds of rows.
So as @Nathon S asked: are you moving them? i.e. building a finite set of labels and just moving them around on the scroll view to match the viewing area. If you are hiding and re-adding them to the scroll view, then I would suspect a massive set of retains being the slowdown. I would think that with a moving-label design you would not need to do any hides and adds after the initial display, which should make it very fast and lightweight.
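A hedged sketch of that moving-label design (the pool, the helper methods, and the one-label-per-row simplification are all assumptions, not the asker's code): the labels are created once and only repositioned as the visible range changes.

```objc
// Hypothetical label pool: reposition existing labels instead of removing/re-adding.
- (void)scrollViewDidScroll:(UIScrollView *)scrollView
{
    NSRange visibleRows = [self rowRangeForBounds:scrollView.bounds];   // assumed helper
    for (NSUInteger row = visibleRows.location; row < NSMaxRange(visibleRows); row++) {
        // The pool is sized to at least the maximum number of visible rows.
        UILabel *label = self.labelPool[row % self.labelPool.count];
        label.frame = [self frameForRow:row];   // assumed helper
        label.text = [self textForRow:row];     // assumed helper
    }
}
```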
I have a custom UIView which is composed of many images; their positions change in response to the user's touch.
The view must track the user's touch, and I'm experiencing a performance bottleneck in the drawing of this view that prevents me from following the input in real time.
At the beginning I was drawing everything in the [UIView drawRect:] method, and of course it was way too slow because everything was redrawn even when not necessary.
Then I used multiple CALayers to update only the layer that was changing, and this gave me much better responsiveness.
But still, when I have to draw the same image many times on a layer it takes up to 500 ms.
Since the images are placed at fixed positions, is there a way to pre-draw them? Should I consider putting them in many CALayers and just hiding/showing them?
Also, I don't understand why [CALayer setNeedsDisplayInRect:] exists when the delegate has (apparently) no way to know what the invalid rect is in order to optimize the drawing.
Solution
Following the advice in the answer, I finally created many CALayers for the images and set the contents property the first time each layer was shown. This is a lazy-loading compromise: in a first attempt I set the contents of every layer at creation time, but this caused every possible image to be pre-drawn at program launch, freezing the application for seconds.
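A rough sketch of what that lazy-loading compromise can look like (helper and property names are illustrative, not from the actual project): the layer exists up front, but its bitmap is only assigned the first time it is shown.

```objc
// Sketch: layers are created eagerly, but each contents bitmap is set lazily.
- (void)showLayerAtIndex:(NSUInteger)index
{
    CALayer *layer = self.imageLayers[index];           // assumed array of layers
    if (layer.contents == nil) {
        UIImage *image = [self imageForIndex:index];    // assumed helper
        layer.contents = (__bridge id)image.CGImage;    // rendered/uploaded once
    }
    layer.hidden = NO;
}
```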
From the documentation for -[CALayer drawInContext:]:
Default implementation does nothing. The context may be clipped to protect valid layer content. Subclasses that wish to find the actual region to draw can call CGContextGetClipBoundingBox. Called by the display method when the contents property is being updated.
The default implementation of display calls drawInContext: on an automatically-created context; presumably setting the bounding box as well (which is presumably passed to drawRect:).
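So a layer that wants to honor setNeedsDisplayInRect: can do something like this (a minimal sketch, not a complete drawing routine): read the clip bounding box and only redraw what intersects it.

```objc
#import <QuartzCore/QuartzCore.h>

// Sketch: limit drawing to the invalidated region reported by the clip box.
@interface TileLayer : CALayer
@end

@implementation TileLayer

- (void)drawInContext:(CGContextRef)ctx
{
    CGRect dirtyRect = CGContextGetClipBoundingBox(ctx);

    CGContextSetFillColorWithColor(ctx, [UIColor lightGrayColor].CGColor);
    CGContextFillRect(ctx, dirtyRect);
    // ...only draw the elements that intersect dirtyRect...
}

@end
```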
If you're drawing several static images, I'd just stick each one in its own UIView; I don't think the overhead is that big (if it is, the CALayer overhead should be smaller). If they all animate, I'd definitely use UIView/CALayer. If some of them don't animate (much) and you notice significant slowness, you can pre-render those. It's a trade-off between rendering in drawRect: (or similar) and layer compositing on the GPU, but in general I'd assume that the latter is much faster.
I'm trying to make a level-of-detail line chart, where the user can zoom in/out horizontally using two fingers, growing the contentSize attribute of the UIScrollView. They can also scroll horizontally to shift left or right and see more of the chart (check any stock on Google Finance charts to get an idea of what I'm talking about). Potentially, the scroll view could grow to up to 100x its original size as the user zooms in.
My questions are:
- Has anyone had any experience with UIScrollViews that have such large contentSize restrictions? Will it work?
- The view for the scroll view could potentially be really huge, since the user is zooming in. How is this handled in memory?
- Just a thought, but would it be possible to use UITableViewCells, oriented to scroll horizontally, to page in/out the data?
This is kind of an open ended question right now - I'm still brainstorming myself. If anyone has any ideas or has implemented such a thing before, please respond with your experience. Thanks!
This is quite an old topic, but I still want to share some of my experiences.
Using such a large UIView (100x its original size) in a UIScrollView could cause memory warnings. You should avoid rendering the entire UIView at once.
A better way to implement this is to render only the area you can see, plus the area just around it, so the UIScrollView can scroll within that region smoothly. But what if the user scrolls out of the area that has been rendered? Use the delegate to get notified when the user scrolls out of the pre-rendered area, and render the new area that is about to be shown.
The basic idea behind this implementation is to use 9 UIViews (or more) to tile a bigger area. When the user scrolls (or moves) from the old position to a new one, just move some of the UIViews to new places, so that one UIView is the main view (the one mostly visible) and the other 8 UIViews surround it.
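A rough sketch of that 3x3 tiling (the tile size, the tiles array, and the rendering hook are assumptions): recenter the 9 reusable views around the visible rect and re-render only the tiles whose frames changed.

```objc
// Hypothetical 3x3 tiling: keep 9 reusable tile views centered on the visible area.
- (void)repositionTilesAroundVisibleRect:(CGRect)visibleRect
{
    CGSize tileSize = self.tileSize;   // e.g. roughly the screen size
    NSInteger centerCol = (NSInteger)floor(CGRectGetMidX(visibleRect) / tileSize.width);
    NSInteger centerRow = (NSInteger)floor(CGRectGetMidY(visibleRect) / tileSize.height);

    NSUInteger i = 0;
    for (NSInteger row = centerRow - 1; row <= centerRow + 1; row++) {
        for (NSInteger col = centerCol - 1; col <= centerCol + 1; col++) {
            UIView *tile = self.tiles[i++];   // the 9 reusable views
            CGRect frame = CGRectMake(col * tileSize.width, row * tileSize.height,
                                      tileSize.width, tileSize.height);
            if (!CGRectEqualToRect(tile.frame, frame)) {
                tile.frame = frame;
                [self renderTile:tile forRect:frame];   // assumed rendering hook
            }
        }
    }
}
```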
Hope it is useful.
I have something similar, although probably not at the size you're talking about. The UIScrollView isn't a problem. The problem is that if you're adding UIViews to it (rather than drawing the lines yourself), UIViews that are well, well off the screen continue to exist in memory. If you're actually drawing the lines by creating your own UIView and responding to drawRect, it's fine.
Assuming that you're a reasonably experienced programmer, getting a big scroll view working that draws parts of the chart is only a day's work, so my recommendation would be to create a prototype, run it under the object allocations tool, and see if that indicates any problems.
Sorry for the vagueness of my answer; it's a brainstorming question
But still, this approach (in the example above) is not good enough in some cases, because we only rendered a limited area of the UIScrollView.
Users can perform different gestures in a UIScrollView: drag or fling. With a drag, the 8 pre-rendered small UIViews are enough to cover the scrolling area in most cases. But with a fling, the UIScrollView can scroll over a very large area when the user makes a quick movement, and this area is totally blank while scrolling (because we didn't render it). Even though we can display the right content after the UIScrollView stops scrolling, the blank area during scrolling isn't very friendly to the user.
For some apps this is OK, for example Google Maps: since the data can't be downloaded immediately, waiting while it downloads is reasonable.
But if the data is local, we should eliminate this blank area as much as we can, so pre-rendering the area that is about to be scrolled into is crucial. Unlike UITableView, UIScrollView doesn't have the ability to tell us which cell is going to be displayed and which is going to be recycled, so we have to do it ourselves. The method [UIScrollViewDelegate scrollViewWillEndDragging:withVelocity:targetContentOffset:] is called when the UIScrollView starts decelerating (scrollViewWillBeginDecelerating is actually called before decelerating, but in that method we don't know anything about what content will be displayed or scrolled). So based on UIScrollView.contentOffset.x and the targetContentOffset parameter, we know exactly where the UIScrollView starts and where it will stop, and we can pre-render this area to make the scrolling smoother.
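A hedged sketch of that delegate callback (renderContentInRect: is an assumed pre-rendering method of your own, not API): compute the span between the current offset and where the deceleration will stop, and render it before the fling exposes blank space.

```objc
// Sketch: pre-render everything the fling is about to scroll past.
- (void)scrollViewWillEndDragging:(UIScrollView *)scrollView
                     withVelocity:(CGPoint)velocity
              targetContentOffset:(inout CGPoint *)targetContentOffset
{
    CGFloat startX = scrollView.contentOffset.x;
    CGFloat endX = targetContentOffset->x;

    CGRect areaToPrepare = CGRectMake(MIN(startX, endX), 0,
                                      fabs(endX - startX) + CGRectGetWidth(scrollView.bounds),
                                      CGRectGetHeight(scrollView.bounds));
    [self renderContentInRect:areaToPrepare];   // assumed pre-rendering method
}
```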