Fastest iPhone Blit Routine?

I have a UIView subclass onto which I need to blit a UIImage. There are several ways to skin this cat depending on which series of APIs you prefer to use, and I'm interested in the fastest. Would it be UIImage's drawAtPoint or drawRect? Or perhaps the C-based CoreGraphics routines, or something else? I have no qualms about altering my source image data format if it'll make the blitting that much faster.
To describe my situation: my app has anywhere from ~10 to ~200 small UIViews (64x64), a subset of which need to be redrawn based on user interaction. My current implementation is a call to drawAtPoint inside my UIView subclass's drawRect routine. If you can think of a better way to handle this kind of scenario, I'm all ears (well, eyes).
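For reference, the current implementation is roughly this (a sketch; tileImage is a stand-in for the actual image property):

    - (void)drawRect:(CGRect)rect {
        // Quartz draw of the 64x64 source image at the view's origin.
        [self.tileImage drawAtPoint:CGPointZero];
    }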

Using an OpenGL view may be fastest of all. Keep an age-based cache of images (or, if you know a better way to determine when certain images can be removed from the cache, by all means use that) and preload as many images as you can while the app is idle. It should be very quick, with almost no Objective-C calls involved (just -draw).

While not a "blit" at all, given the requirements of the problem (many small images with various state changes) I was able to keep the different states to redraw in their own separate UIImageView instances, and just showed/hid the appropriate instance given the state change.

Since CALayer is lightweight and fast, I would give it a try.
Thierry

The fastest blit implementation you are going to find is in my AVAnimator library; it contains an ARM asm implementation of a blit for a CoreGraphics buffer, so have a look at the source. The way you could make use of it would be to create a single graphics context the size of the whole screen, blit your specific image changes into that one context, then create a UIImage from it and set it as the image of a UIImageView. That would involve one GPU upload per refresh, so it will not depend on how many images you render into the buffer.
But you will likely not need to go that low-level. You should first try making each 64x64 image into a CALayer, then update each layer with the contents of an image that is the exact size of the layer (64x64). The only tricky thing is that you will want to decompress each of your original images if they come from PNG or JPEG files. You do that by creating another pixel buffer and rendering the original image into it, so that all the PNG or JPEG decompression is done before you start setting CALayer contents.
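A hedged sketch of that decompress-then-set-contents idea (this is not AVAnimator's asm path; the function name and the use of imageNamed: are illustrative):

    // Force PNG/JPEG decode up front by rendering into our own bitmap
    // context, then hand the decoded pixels straight to a CALayer.
    static CGImageRef CreateDecodedImage(UIImage *image) {
        size_t w = CGImageGetWidth(image.CGImage);
        size_t h = CGImageGetHeight(image.CGImage);
        CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(NULL, w, h, 8, w * 4, cs,
            kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
        CGColorSpaceRelease(cs);
        CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), image.CGImage);
        CGImageRef decoded = CGBitmapContextCreateImage(ctx);
        CGContextRelease(ctx);
        return decoded; // caller must CGImageRelease()
    }

    // Per 64x64 tile:
    CGImageRef decoded = CreateDecodedImage([UIImage imageNamed:@"tile.png"]);
    CALayer *tile = [CALayer layer];
    tile.frame = CGRectMake(0, 0, 64, 64); // position as needed
    tile.contents = (__bridge id)decoded;
    CGImageRelease(decoded);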

Related

to drawRect or not to drawRect (when should one use drawRect/Core Graphics vs subviews/images and why?)

To clarify the purpose of this question: I know HOW to create complicated views with both subviews and using drawRect. I'm trying to fully understand the when's and why's to use one over the other.
I also understand that it doesn't make sense to optimize that much ahead of time, and do something the more difficult way before doing any profiling. Consider that I'm comfortable with both methods, and now really want a deeper understanding.
A lot of my confusion comes from learning how to make table view scroll performance really smooth and fast. Of course, the original source of this method is from the author behind Twitter for iPhone (formerly Tweetie). Basically it says that to make table scrolling buttery smooth, the secret is to NOT use subviews, but instead do all the drawing in one custom UIView. Essentially it seems that using lots of subviews slows rendering down because they have lots of overhead, and are constantly re-composited over their parent views.
To be fair, this was written when the 3GS was brand spankin' new, and iDevices have gotten much faster since then. Still, this method is regularly suggested on the interwebs and elsewhere for high-performance tables. In fact it's a suggested method in Apple's Table Sample Code, has been suggested in several WWDC videos (Practical Drawing for iOS Developers), and appears in many iOS programming books.
There are even awesome looking tools to design graphics and generate Core Graphics code for them.
So at first I'm led to believe "there's a reason why Core Graphics exists. It's FAST!"
But as soon as I think I get the idea "favor Core Graphics when possible," I start seeing that drawRect is often responsible for poor responsiveness in an app, is extremely expensive memory-wise, and really taxes the CPU. Basically, that I should "avoid overriding drawRect" (WWDC 2012, iOS App Performance: Graphics and Animations).
So I guess, like everything, it's complicated. Maybe you can help me and others understand the when's and why's of using drawRect?
I see a couple obvious situations to use Core Graphics:
You have dynamic data (Apple's Stock Chart example)
You have a flexible UI element that can't be executed with a simple resizable image
You are creating a dynamic graphic, that once rendered is used in multiple places
I see situations to avoid Core Graphics:
Properties of your view need to be animated separately
You have a relatively small view hierarchy, so any perceived extra effort using CG isn't worth the gain
You want to update pieces of the view without redrawing the whole thing
The layout of your subviews needs to update when the parent view size changes
So bestow your knowledge. In what situations do you reach for drawRect/Core Graphics (that could also be accomplished with subviews)? What factors lead you to that decision? How/why is drawing in one custom view recommended for buttery smooth table cell scrolling, yet Apple advises against drawRect in general for performance reasons? What about simple background images (when do you create them with CG vs using a resizable PNG image)?
A deep understanding of this subject may not be needed to make worthwhile apps, but I don't love choosing between techniques without being able to explain why. My brain gets mad at me.
Question Update
Thanks for the information everyone. Some clarifying questions here:
If you are drawing something with core graphics, but can accomplish the same thing with UIImageViews and a pre-rendered png, should you always go that route?
A similar question: Especially with badass tools like this, when should you consider drawing interface elements in core graphics? (Probably when the display of your element is variable. e.g. a button with 20 different color variations. Any other cases?)
Given my understanding in my answer below, could the same performance gains for a table cell be achieved by capturing a snapshot bitmap of your cell after your complex UIView renders itself, and displaying that while scrolling, hiding your complex view? Obviously some pieces would have to be worked out. Just an interesting thought I had.
Stick to UIKit and subviews whenever you can. You can be more productive, and take advantage of all the OO mechanisms that make things easier to maintain. Use Core Graphics when you can't get the performance you need out of UIKit, or you know trying to hack together drawing effects in UIKit would be more complicated.
The general workflow should be to build the tableviews with subviews. Use Instruments to measure the frame rate on the oldest hardware your app will support. If you can't get 60fps, drop down to CoreGraphics. When you've done this for a while, you get a sense for when UIKit is probably a waste of time.
So, why is Core Graphics fast?
CoreGraphics isn't really fast. If it's being used all the time, you're probably going slow. It's a rich drawing API, which requires its work be done on the CPU, as opposed to a lot of UIKit work that is offloaded to the GPU. If you had to animate a ball moving across the screen, it would be a terrible idea to call setNeedsDisplay on a view 60 times per second. So, if you have sub-components of your view that need to be individually animated, each component should be a separate layer.
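For example, a rough sketch of the ball case: give it its own layer once, then animate the layer instead of calling setNeedsDisplay per frame (names and coordinates are made up):

    #import <QuartzCore/QuartzCore.h>

    // Upload the ball image to the GPU once, as its own layer.
    CALayer *ball = [CALayer layer];
    ball.frame = CGRectMake(0, 0, 40, 40);
    ball.contents = (__bridge id)[UIImage imageNamed:@"ball.png"].CGImage;
    [self.view.layer addSublayer:ball];

    // Moving it is a GPU-side transform of the cached texture;
    // drawRect is never involved.
    [CATransaction begin];
    [CATransaction setAnimationDuration:1.0];
    ball.position = CGPointMake(280, 400);
    [CATransaction commit];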
The other problem is that when you don't do custom drawing with drawRect, UIKit can optimize stock views so drawRect is a no-op, or it can take shortcuts with compositing. When you override drawRect, UIKit has to take the slow path because it has no idea what you're doing.
These two problems can be outweighed by benefits in the case of table view cells. After drawRect is called when a view first appears on screen, the contents are cached, and the scrolling is a simple translation performed by the GPU. Because you're dealing with a single view, rather than a complex hierarchy, UIKit's drawRect optimizations become less important. So the bottleneck becomes how much you can optimize your Core Graphics drawing.
Whenever you can, use UIKit. Do the simplest implementation that works. Profile. When there's an incentive, optimize.
The difference is that UIView and CALayer essentially deal in fixed images. These images are uploaded to the graphics card (if you know OpenGL, think of an image as a texture, and a UIView/CALayer as a polygon showing such a texture). Once an image is on the GPU, it can be drawn very quickly, and even several times, and (with a slight performance penalty) even with varying levels of alpha transparency on top of other images.
CoreGraphics/Quartz is an API for generating images. It takes a pixel buffer (again, think OpenGL texture) and changes individual pixels inside it. This all happens in RAM and on the CPU, and only once Quartz is done, does the image get "flushed" back to the GPU. This round-trip of getting an image from the GPU, changing it, then uploading the whole image (or at least a comparatively large chunk of it) back to the GPU is rather slow. Also, the actual drawing that Quartz does, while really fast for what you are doing, is way slower than what the GPU does.
That's obvious, considering the GPU is mostly moving around unchanged pixels in big chunks. Quartz does random-access of pixels and shares the CPU with networking, audio etc. Also, if you have several elements that you draw using Quartz at the same time, you have to re-draw all of them when one changes, then upload the whole chunk, while if you change one image and then let UIViews or CALayers paste it onto your other images, you can get away with uploading much smaller amounts of data to the GPU.
When you don't implement -drawRect:, most views can just be optimized away. They don't contain any pixels, so can't draw anything. Other views, like UIImageView, only draw a UIImage (which, again, is essentially a reference to a texture, which has probably already been loaded onto the GPU). So if you draw the same UIImage 5 times using a UIImageView, it is only uploaded to the GPU once, and then drawn to the display in 5 different locations, saving us time and CPU.
When you implement -drawRect:, this causes a new image to be created. You then draw into that on the CPU using Quartz. If you draw a UIImage in your drawRect, it likely downloads the image from the GPU, copies it into the image you're drawing to, and once you're done, uploads this second copy of the image back to the graphics card. So you're using twice the GPU memory on the device.
So the fastest way to draw is usually to keep static content separated from changing content (in separate UIViews/UIView subclasses/CALayers). Load static content as a UIImage and draw it using a UIImageView, and put content generated dynamically at runtime in a drawRect. If you have content that gets drawn repeatedly but by itself doesn't change (i.e., 3 icons that get shown in the same slot to indicate some status), use UIImageView as well.
One caveat: There is such a thing as having too many UIViews. Particularly transparent areas take a bigger toll on the GPU to draw, because they need to be mixed with other pixels behind them when displayed. This is why you can mark a UIView as "opaque", to indicate to the GPU that it can just obliterate everything behind that image.
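For instance (assuming the view really does fill its bounds with fully opaque pixels):

    // Tells the compositor it can skip blending anything behind this
    // view. Only safe when no pixel of the view is transparent.
    myView.opaque = YES;
    myView.backgroundColor = [UIColor whiteColor]; // any solid color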
If you have content that is generated dynamically at runtime but stays the same for the duration of the application's lifetime (e.g. a label containing the user name) it may actually make sense to just draw the whole thing once using Quartz, with the text, the button border etc., as part of the background. But that's usually an optimization that's not needed unless the Instruments app tells you differently.
I'm going to try and keep a summary of what I'm extrapolating from others' answers here, and ask clarifying questions in an update to the original question. But I encourage others to keep the answers coming and vote up those who have provided good information.
General Approach
It's quite clear that the general approach, as Ben Sandofsky mentioned in his answer, should be "Whenever you can, use UIKit. Do the simplest implementation that works. Profile. When there's an incentive, optimize."
The Why
There are two main possible bottlenecks in an iDevice: the CPU and the GPU
CPU is responsible for the initial drawing/rendering of a view
GPU is responsible for a majority of animation (Core Animation), layer effects, compositing, etc.
UIView has a lot of optimizations, caching, etc, built in for handling complex view hierarchies
When overriding drawRect you miss out on a lot of the benefits UIViews provide, and it's generally slower than letting UIView handle the rendering.
Drawing a cell's contents in one flat UIView can greatly improve your FPS on scrolling tables.
Like I said above, CPU and GPU are two possible bottlenecks. Since they generally handle different things, you have to pay attention to which bottleneck you are running up against. In the case of scrolling tables, it's not that Core Graphics is drawing faster; it can greatly improve your FPS because it relieves the GPU, as explained below.
In fact, Core Graphics may very well be slower than a nested UIView hierarchy for the initial render. However, it seems the typical reason for choppy scrolling is you are bottlenecking the GPU, so you need to address that.
Why overriding drawRect (using core graphics) can help table scrolling:
From what I understand, the GPU is not responsible for the initial rendering of the views, but is instead handed textures, or bitmaps, sometimes with some layer properties, after they have been rendered. It is then responsible for compositing the bitmaps, rendering all those layer effects, and the majority of animation (Core Animation).
In the case of table view cells, the GPU can be bottlenecked with complex view hierarchies, because instead of animating one bitmap, it is animating the parent view, doing subview layout calculations, rendering layer effects, and compositing all the subviews. So instead of animating one bitmap, it is responsible for the relationships of a bunch of bitmaps, and how they interact, for the same pixel area.
So in summary, the reason drawing your cell in one view with core graphics can speed up your table scrolling is NOT because it's drawing faster, but because it is reducing the load on the GPU, which is the bottleneck giving you trouble in that particular scenario.
I am a game developer, and I was asking the same questions when my friend told me that my UIImageView-based view hierarchy was going to slow down my game and make it terrible. I then proceeded to research everything I could find about whether to use UIViews, CoreGraphics, OpenGL, or something third-party like Cocos2D. The consistent answer I got from friends, teachers, and Apple engineers at WWDC was that there won't be much of a difference in the end, because at some level they are all doing the same thing. Higher-level options like UIViews rely on the lower-level options like CoreGraphics and OpenGL; they are just wrapped in code to make them easier for you to use.
Don't use CoreGraphics if you are just going to end up re-writing the UIView. However, you can gain some speed from using CoreGraphics, as long as you do all your drawing in one view, but is it really worth it? The answer I have found is usually no. When I first started my game, I was working with the iPhone 3G. As my game grew in complexity, I began to see some lag, but with the newer devices it was completely unnoticeable. Now I have plenty of action going on, and the only lag seems to be a drop in 1-3 fps when playing in the most complex level on an iPhone 4.
Still, I decided to use Instruments to find the functions that were taking up the most time. I found that the problems were not related to my use of UIViews. Instead, the culprits were repeated CGRectMake calls in certain collision-detection calculations, and classes loading image and audio files separately even though they use the same assets, rather than pulling them from one central storage class.
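As an illustration, a minimal version of such a central storage class might look like this (the class name, method, and contentsOfFile loading are my own, not from the original game):

    // Share one UIImage per resource so each file is read and decoded
    // once, instead of once per class that happens to use it.
    @implementation ImageStore

    + (UIImage *)imageForName:(NSString *)name {
        static NSMutableDictionary *cache = nil;
        if (cache == nil) cache = [[NSMutableDictionary alloc] init];
        UIImage *img = cache[name];
        if (img == nil) {
            NSString *path = [[NSBundle mainBundle] pathForResource:name
                                                             ofType:@"png"];
            img = [UIImage imageWithContentsOfFile:path];
            if (img) cache[name] = img;
        }
        return img;
    }

    @end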
So in the end, you might be able to achieve a slight gain from using CoreGraphics, but usually it will not be worth it or may not have any effect at all. The only time I use CoreGraphics is when drawing geometric shapes rather than text and images.

Need help optimizing my 2d drawing on iPhone

I'm writing a game that displays 56 hexagon pieces filling the screen in the shape of a board. I'm currently drawing each piece using a singleton rendering class: when called to draw a piece, it creates a path from 6 points based on the coordinate passed in. This path is filled with a solid color, and then a 59x59 PNG with an alpha-to-white gradient is overlaid on the drawing to give the piece a shiny look. Note I'm currently doing this in Core Graphics.
My first thought is that creating a path every time I draw is costly, and it seems like I could somehow do this once and then reuse it, but I'm not sure of the best approach. When I look at the bottlenecks with Shark, it looks like drawing the PNG is the most taxing part of the process. I've tried rendering just the PNG overlay, and just the path without the overlay, and both give me some frame gains, although removing the PNG overlay yields the most frames.
My current thought is that at startup, I should render 6 paths (1 for each color of piece I have), overlay them with the PNG, store an image of each piece, and then just redraw the pieces each time I need them. Is there an efficient mechanism for storing something you've drawn once and redrawing it? It kinda just sounds like I'd be running into the whole drawing-PNGs-too-often thing again, but maybe there's a less taxing method that does a similar thing...
Any suggestions are much appreciated.
Thanks!
You might try CGLayer or CALayer.
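For the CGLayer route, a sketch of pre-rendering one piece and stamping it later (the 59x59 size comes from the question; the rest is illustrative):

    // Build once at startup: fill the hex path, then composite the
    // shine overlay, all into an offscreen CGLayer.
    CGLayerRef MakePieceLayer(CGContextRef screenCtx, CGPathRef hexPath,
                              CGColorRef fillColor, CGImageRef shinePNG) {
        CGLayerRef layer = CGLayerCreateWithContext(screenCtx,
                                                    CGSizeMake(59, 59), NULL);
        CGContextRef ctx = CGLayerGetContext(layer);
        CGContextAddPath(ctx, hexPath);
        CGContextSetFillColorWithColor(ctx, fillColor);
        CGContextFillPath(ctx);
        CGContextDrawImage(ctx, CGRectMake(0, 0, 59, 59), shinePNG);
        return layer; // keep one per piece color; CGLayerRelease when done
    }

    // Each frame, drawing a piece is just a cheap stamp:
    CGContextDrawLayerAtPoint(currentCtx, piecePoint, pieceLayer);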
General thoughts:
Game programming on iPhone usually necessitates OpenGL. Core Graphics is a bit easier to work with, but OpenGL is optimized for speed.
Prerender this "shiny look" into the textures as much as is possible (as in: do it in Photoshop before you even insert them into your project). Alpha blending is hell on performance.
Maybe try PVRTC (also this tutorial) as it's a format used by iPhone's GPU's manufacturer. Then again, this could make things worse depending on where your bottleneck is.
If you really need speed you have to go the OpenGL route. Be careful if you want to mix OpenGL and Core Animation, they can conflict.
OpenGL is a pain if you haven't done much with it. It sounds like you could use Core Animation and make each tile a layer. CA doesn't redraw a layer unless you change something, so you should be able to just move that layer around without taking a big hit. Also note that CA keeps the layer's contents in texture memory, so it should be much faster.
Some others have mentioned that you should use OpenGL. Here's a nice introduction specifically for the iPhone: OpenGL ES from the Ground Up: Table of Contents
You might also want to look at cocos2d. It seems to be significantly faster than using CoreAnimation in my tests, and provides lots of useful stuff for games.

Performance-wise: A lot of small PNGs or one large PNG?

Developing a simple game for the iPhone, which gives better performance?
Using a lot of small (10x10 to 30x30 pixels) PNGs for my UIViews' backgrounds.
Using one large PNG and clipping to the bounds of my UIViews.
My thought is that the first technique requires less memory per individual UIView, but complicates how the iPhone handles the large number of images, as it tries to combine the images into a larger texture or switches between all the small textures a lot.
The second technique, on the other hand, gives the iPhone the opportunity to handle just one large PNG, but unnecessarily increases the image weight every UIView has to carry.
Am I right about the iPhone's attempts, handling the images the way I described it?
So, what is the way to go?
Seeing the answers thus far, there is still doubt. There seems to be a trade-off with two parameters: Complexity and CPU-intensive coding. What would be my tipping point for deciding what technique to use?
If you end up referring back to the same CGImageRef (for example by sharing a UIImage *), the image won't be loaded multiple times by the different views. This is the technique used by the video-wall Core Animation demo at the WWDC '07 keynote. That's OS X code, but UIViews are very similar to CALayers.
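A small sketch of that sharing pattern (the container, counts, and file name are illustrative):

    // One UIImage, many views: the backing CGImageRef is decoded and
    // uploaded once, no matter how many views display it.
    UIImage *shared = [UIImage imageNamed:@"tile.png"];
    for (int i = 0; i < 20; i++) {
        UIImageView *iv = [[UIImageView alloc] initWithImage:shared];
        iv.frame = CGRectMake((i % 5) * 30, (i / 5) * 30, 30, 30);
        [container addSubview:iv];
    }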
The way Core Graphics handles images (from my observation anyway) is heavily tweaked for just-in-time loading and aggressive releasing when memory is tight.
Using a large image you could end up loading the image at draw time if the memory for the decoded image that CGImageRef points to has been reclaimed by the system.
What makes a difference is not how many images you have, but how often the UIKit traverses your code.
Both UIViews and Core Animation CALayers will only repaint if you ask them to (-setNeedsDisplay), and the bottleneck usually is your code plus transferring the rendered content into a texture for the graphics chip.
So my advice is to arrange your UIView layout in a way that allows portions that change together to be updated at the same time, which turns into a single texture upload.
One large image mainly gives you more work and more headaches. It's harder to maintain, but is probably less RAM-intensive because there is only one structure + data in memory instead of many structures + data (though probably not enough to notice).
Looking at the contents of .app bundles on regular Mac OS, it seems the generally approved method of storage is one file/resource per image.
Of course, this is assuming you're not getting the image resources from the web, where the bottleneck would be in HTTP and its specified maximum of two concurrent requests.
One large image gives you better performance. (Of course, that's only if you would render all the pictures anyway.)
One large image will remove any overhead associated with opening and manipulating many images in memory.
I would say there is no authoritative answer to this question. A single large image cuts down on (slow) flash access and gets the decoding done in one go, but it's memory-hungry and you have to slice that image up or mask it; a lot of smaller images give you better control over what processing happens when.
You will have to implement one solution and test it. If it isn't fast enough and you can't optimise it, implement the other. I suggest starting with the version you personally find easier to imagine, because that will be the easiest to implement.

iPhone performance with Bitmaps

Pretty new to iPhone / Objective-C.
I have an application that has 15-100 small images (16x16 or 8x8 PNG) on the screen. For this example's sake, let's assume that I could create these images using CGContext if I needed to.
I would have to assume that the iPhone would perform better using that method rather than loading images (PNGs). However, the bitmap version is easier to develop and also has other advantages (like built-in touch events) that I need.
If performance is not the ultimate metric for this application, does placing 100 small images degrade performance/memory enough to even consider switching to the CGContext method. My instinct tells me that I will not see that much of a performance difference either way but I am too new to iPhone development to know enough about it to make a difference.
I suppose it depends on the complexity of your image generation algorithm.
It will also depend on your application: will you be drawing these images many times per second, like in an animation? If that's the case, use UIImageViews.
I think using 100 or so UIImageViews should be fine as long as you don't need to rapidly animate them or update them at the same time. You should avoid doing anything that would change the size of the views (like resizing the view that contains them all), and if you use Core Animation to animate them, perform all of the animations inside a single animation block. (Wrap everything with one [UIView beginAnimations:context:], [UIView commitAnimations] - not one for each view)
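In code, that single-block advice looks roughly like this (imageViews is a stand-in for your collection of views):

    // One animation block moves every view together; Core Animation
    // coalesces the work instead of handling one block per view.
    [UIView beginAnimations:@"moveAll" context:NULL];
    [UIView setAnimationDuration:0.3];
    for (UIImageView *iv in imageViews) {
        iv.center = CGPointMake(iv.center.x + 20.0f, iv.center.y);
    }
    [UIView commitAnimations];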
Good luck!
I'd try the bitmap version first, then the CGContext one if the bitmap is too slow.
THEN if it's still too slow, I'd put all the icons into a GL texture.

Image strategy in iPhone app

I'm writing a card game for the iPhone, and I'm not sure about the best strategy for displaying the cards. I have a basic prototype that creates a draggable UIImageView for each card, with a dummy image. I wanted to use one large UIImage that contains the faces of all of the cards, and then have each draggable UIImageView display a part of that image. I must be misunderstanding what setBounds is for - I thought that controlled which part of the underlying image is displayed. So, two questions:
Is this the right approach?
How do I display just a part of the image?
Depending on your resolution, this might not be the best approach.
From Apple:
You should avoid creating UIImage objects that are greater than 1024 x 1024 in size. Besides the large amount of memory such an image would consume, you may run into problems when using the image as a texture in OpenGL ES or when drawing the image to a view or layer. This size restriction does not apply if you are performing code-based manipulations, such as resizing an image larger than 1024 x 1024 pixels by drawing it to a bitmap-backed graphics context. In fact, you may need to resize an image in this manner (or break it into several smaller images) in order to draw it to one of your views.
Now, you are talking about breaking it up into several smaller pieces, but given UIImage's caching, I am not sure what happens to memory every time you access the image and copy a sub-rect out of it. I think the approach I would take is to have an array of images, instead of one big one.
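That said, to answer the second question ("How do I display just a part of the image?") directly: Core Graphics can copy a sub-rectangle out of the big image (a sketch; the card metrics are made up):

    // Extracts one card face from the atlas. This copies pixels, so do
    // it once per card up front and keep the resulting UIImages around.
    CGRect faceRect = CGRectMake(col * 79.0f, row * 123.0f, 79.0f, 123.0f);
    CGImageRef sub = CGImageCreateWithImageInRect(atlas.CGImage, faceRect);
    cardView.image = [UIImage imageWithCGImage:sub];
    CGImageRelease(sub);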