Large custom UIView - CABackingStoreUpdate Performance - iPhone

Why, on iOS 4.3.5, do 'large' (960 x 1380) custom UIViews perform CABackingStoreUpdate so inefficiently, and how can I improve the performance of drawing operations?
Not entirely sure what I mean? Read on...
Note:
As my understanding of this problem has evolved, so has this question. As a result the question itself is similar but the code example and underlying details/reasoning in the following body of text have changed significantly since the question was first asked.
Context
I have an incredibly basic application (code at the bottom) that draws a single ellipse in the drawRect: method of a custom UIView. The application demonstrates the difference in performance when the size of the ellipse being drawn remains the same but the size of the custom UIView gets larger:
I ran the application a number of times on both an iPod Touch 4th Gen running iOS 4.3.5 and an iPad 1st Gen running iOS 5.1.1, using custom UIViews of different sizes.
The following table displays the results taken from the time profiler instrument:
The following instrument traces display the details of the two extremes for each device:
iOS 5.1.1 - (Custom UIView size 320 x 460)
iOS 5.1.1 - (Custom UIView size 960 x 1380)
iOS 4.3.5 - (Custom UIView size 320 x 460)
iOS 4.3.5 - (Custom UIView size 960 x 1380)
As you can (hopefully) see, in 3 out of the 4 cases we get what we'd expect: the majority of time was spent performing the custom UIView's drawRect: method, and each case held 10 fps.
But the fourth case shows a plummet in performance, with the application struggling to hold 7 fps while drawing only a single shape. The majority of time was spent copying memory during the UIView's CALayer's display method, specifically:
[CALayer display] >
[CALayer _display] >
CABackingStoreUpdate >
CA::Render::ShmemBitmap::copy_pixels(CA::Render::ShmemBitmap const*, CGSRegionObject*) >
memcpy$VARIANT$CortexA8
Now, it doesn't take a genius to see from the figures that something is seriously wrong here. With a custom UIView of size 960 x 1380, iOS 4.3.5 spends over four times as long copying memory around as it does drawing the entire view's contents.
Question
Now, given the context, I ask my question again:
Why, on iOS 4.3.5, do 'large' (960 x 1380) custom UIViews perform CABackingStoreUpdate so inefficiently, and how can I improve the performance of drawing operations?
Any help is very much appreciated.
I have also posted this question on the Apple Developer forums.
The Real Deal
Now, obviously, I've reduced my real problem to the simplest reproducible case for the sake of this question. I'm actually attempting to animate a portion of a 960 x 1380 custom UIView that sits inside a UIScrollView.
Whilst I appreciate the temptation to steer anyone towards OpenGL ES when they're not achieving the level of performance they want through Quartz 2D, I ask that anyone who takes that route at least offer an explanation as to why Quartz 2D struggles to perform even the most basic drawing operations on iOS 4.3.5 when iOS 5.1.1 has no problem. As you can imagine, I'm not thrilled about the idea of re-writing everything for this corner case.
This also applies to people suggesting Core Animation. Although I've used an ellipse changing colour (a task perfectly suited to Core Animation) in the demo for the sake of simplicity, the drawing operations I'd actually like to perform are a large quantity of lines expanding over time, a drawing task Quartz 2D is ideal for (when it performs!). Plus, again, this would require a re-write and doesn't help explain this odd performance problem.
Code
TViewController.m (Implementation of a standard view controller)
#import "TViewController.h"
#import "TCustomView.h"
// VERSION 1 features the custom UIView the same size as the screen.
// VERSION 2 features the custom UIView nine times the size of the screen.
#define VERSION 2
#interface TViewController ()
#property (strong, nonatomic) TCustomView *customView;
#property (strong, nonatomic) NSTimer *animationTimer;
#end
#implementation TViewController
- (void)viewDidLoad
{
// Custom subview.
TCustomView *customView = [[TCustomView alloc] init];
customView.backgroundColor = [UIColor whiteColor];
#if VERSION == 1
customView.frame = CGRectMake(0.0f, 0.0f, 320.0f, 460.0f);
#else
customView.frame = CGRectMake(0.0f, 0.0f, 960.0f, 1380.0f);
#endif
[self.view addSubview:customView];
UITapGestureRecognizer *singleTap = [[UITapGestureRecognizer alloc] initWithTarget:self action:#selector(handleTap:)];
[customView addGestureRecognizer:singleTap];
self.customView = customView;
}
#pragma mark - Gesture Handling
- (void)handleTap:(UITapGestureRecognizer *)tapGesture
{
    self.customView.value = 0.0f;
    if (!self.animationTimer || !self.animationTimer.isValid) {
        self.animationTimer = [NSTimer scheduledTimerWithTimeInterval:0.1 target:self selector:@selector(animationLoop) userInfo:nil repeats:YES];
    }
}
#pragma mark - Timer Loop
- (void)animationLoop
{
    // Update the model here. For simplicity, increment a single value.
    self.customView.value += 0.01f;
    if (self.customView.value >= 1.0f)
    {
        self.customView.value = 1.0f;
        [self.animationTimer invalidate];
    }
    [self.customView setNeedsDisplayInRect:CGRectMake(0.0f, 0.0f, 320.0f, 460.0f)];
}

@end
-
TCustomView.h (Custom view header)
#import <UIKit/UIKit.h>

@interface TCustomView : UIView
@property (assign) CGFloat value;
@end
-
TCustomView.m (Custom view implementation)
#import "TCustomView.h"
#implementation TCustomView
- (void)drawRect:(CGRect)rect
{
CGContextRef context = UIGraphicsGetCurrentContext();
// Draw ellipses.
CGContextSetRGBFillColor(context, self.value, self.value, self.value, 1.0f);
CGContextFillEllipseInRect(context, rect);
// Draw value itself.
[[UIColor redColor] set];
NSString *value = [NSString stringWithFormat:#"%f", self.value];
[value drawAtPoint:rect.origin withFont:[UIFont fontWithName:#"Arial" size:15.0f]];
}
#end

Since both the iPod Touch 4th Gen and the iPad 1st Gen have similar hardware (the same amount of memory and the same GPU), it suggests that the problem you are seeing is due to an unoptimized code path in iOS 4.
If you look at the sizes of the views that cause the (negative) performance spike on iOS 4, they both have one side longer than 1024 pixels. Originally, 1024 x 1024 was the maximum size a UIView could be, and whilst this restriction has since been lifted, it is entirely likely that views larger than this only became efficient in iOS 5 and later.
I'd conjecture that the excess memory copying you are seeing in iOS 4 is due to UIKit using a full-size memory buffer for the large UIView but then having to copy appropriately sized tiles out of it before they can be composited, and that in iOS 5 and later Apple either removed any restriction on the size of the tiles that can be composited, or changed the way UIKit renders such large UIViews.
In terms of working around this bottleneck on iOS 4, you can try tiling the area you want to cover with smaller UIViews. If you structure it as:
Parent View - contains drawing and event related code
Tile View 1 - contains drawRect
...
Tile View n - contains drawRect
In each tile view, you can ask the parent view to render its contents after adjusting the graphics context's transform appropriately (see the sketch below). This means you don't have to change the drawing code; it will just be invoked multiple times (there is a small overhead for this, but remember that each invocation will be drawing only a portion of the whole view).
Note that it's important that the parent view does not have a drawRect: method; otherwise UIKit will think you want to draw into it directly, and it will create a backing store, putting you back in the same situation.
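A minimal sketch of what a tile's drawRect: could look like. The class names TTileView/TParentView and the drawTiledRect:inContext: hook are made up for illustration; they are not from the question's code:

// TTileView.m - each tile forwards its drawing to the parent view.
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Shift the context so the parent can keep drawing in its own
    // coordinate space; only this tile's (smaller) backing store is touched.
    CGContextTranslateCTM(context, -self.frame.origin.x, -self.frame.origin.y);

    // Ask the parent to draw the region this tile covers.
    CGRect parentRect = CGRectOffset(rect, self.frame.origin.x, self.frame.origin.y);
    [(TParentView *)self.superview drawTiledRect:parentRect inContext:context];
}

The parent keeps all the drawing logic in drawTiledRect:inContext: (not in drawRect:, for the reason above) and, when the model changes, invalidates whichever tiles intersect the dirty region rather than calling setNeedsDisplayInRect: on itself.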
There is also CATiledLayer that you could look into; this does the tiling for you, but asynchronously, meaning that your drawing code has to handle being executed from one or more background threads.

As you have observed, the time is mainly spent transferring data. I think that in iOS 4.3.5, Core Graphics does not use the GPU and graphics memory to implement primitive drawing functions like CGContextFillEllipseInRect, etc.
So each time you need to draw something, it is drawn into main memory, with the CPU calculating everything needed, and then copied to graphics memory. That takes a long time, of course, because the bus is quite slow.
I guess that since iOS 5.0 or 5.1, the primitive drawing functions invoke GPU shaders (programs that run inside the GPU), so all the heavy work is done there.
Then only a small amount of data (parameters and program code) is transferred from main RAM to graphics memory.

Related

SpriteKit reducing SKLabelNode draw calls

So I have a scene in my game which displays the levels. Like any other game with levels, I subclass SKSpriteNode to make a custom level button, and within this subclass I add an SKLabelNode to display the level title (level 1, level 2, ...). The problem now is that I have a lot of draw calls, because each SKLabelNode renders as its own texture instead of being combined into an atlas. I would like to know if someone can help me reduce these draw calls. I don't want to use Glyph Designer, because this game is going to be in a lot of different languages, like Japanese, Chinese and more.
Any advice?
- (void)setText:(NSString *)text
{
    _label = [SKLabelNode labelNodeWithFontNamed:@"CooperBlack"];
    _label.text = text;
    _label.fontColor = [UIColor blackColor];
    _label.fontSize = 11;
    _label.zPosition = 2;
    _label.verticalAlignmentMode = SKLabelVerticalAlignmentModeCenter;
    _label.position = CGPointMake(0, 0);
    [self addChild:_label];
}
Depending on what you're doing and when, you could render out the contents of the labels into textures at runtime (pre-loading / caching), and then manipulate them in any ways you'd like.
SKLabelNode *theThingToBecomeATexture;
// OR
SKSpriteNode *theThingToBecomeATexture;

// 'theView' here is your SKView; -textureFromNode: renders the node tree into a texture.
SKTexture *theTexture = [theView textureFromNode:theThingToBecomeATexture];
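For example, a hedged sketch of pre-rendering a label into a sprite ('levelButton' here stands in for your custom button node; it is not from the original code):

// Render the label once into a texture, then display a cheap sprite instead.
SKLabelNode *label = [SKLabelNode labelNodeWithFontNamed:@"CooperBlack"];
label.text = @"Level 1";

SKTexture *cached = [self.view textureFromNode:label]; // self.view is the SKView
SKSpriteNode *sprite = [SKSpriteNode spriteNodeWithTexture:cached];
[levelButton addChild:sprite];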
But my follow-up question or comment would be: I have a difficult time believing that you are running into performance problems by showing a few dozen label nodes on the screen. I can understand you hitting a load spike if you are trying to alloc and init a number of them all at the same time, in which case I would preload them, or alloc them off the main thread.

How could I trick [CALayer renderInContext:] to render only a section of the layer?

I'm well aware that there's no nice way to render a small part of a UIView to an image, besides rendering the whole view and cropping (VERY expensive on something like an iPad 3, where the resulting image is huge). See here for a description of the renderInContext: method (there are no alternatives).
The part of the view that I want to render can't be predetermined, meaning I can't set up the view hierarchy so the small section is its own UIView (and therefore CALayer).
My Idea...
I had an idea, but I need some direction if I'm going to succeed. What if I create a category on UIView (or CALayer) that adds a method along the lines of:
[UIView renderSubframe:(CGRect)frame];
How? Well, I'm thinking that if a dummy view the size of the sub-frame was created, the view could be temporarily shifted onto it. Then the dummy view's layer could call renderInContext:, resulting in an efficient and fast way of rendering part of a view.
So...
I'm really not that up to speed with CALayer/QuartzCore stuff... Will this have any chance of working? If not, what's the most efficient way to achieve the same result (or has anyone else already faced this problem and come up with a different solution)?
Here's the category I ended up writing. Not as hard as I first thought, thanks to a handy CGAffineTransform.
#import "UIView+RenderSubframe.h"
#import <QuartzCore/QuartzCore.h>
#implementation UIView (RenderSubframe)
- (UIImage *) renderWithBounds:(CGRect)frame {
CGSize imageSize = frame.size;
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0);
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextConcatCTM(c, CGAffineTransformMakeTranslation(-frame.origin.x, -frame.origin.y));
[self.layer renderInContext:c];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return screenshot;
}
#end
A couple of different ways you might do this:
clip your context with CGContextClipToRect before calling renderInContext:;
use:
setNeedsDisplayInRect:
Marks the region within the specified rectangle as needing to be updated.
- (void)setNeedsDisplayInRect:(CGRect)theRect
This would make the rendering happen only in the specified rect. I am not sure, though, whether this would work for your requirement.
I think that the first option should work seamlessly.
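A minimal sketch of that first option, written as a variation of the category above (the method name renderWithClippedBounds: is made up for illustration):

- (UIImage *)renderWithClippedBounds:(CGRect)frame
{
    UIGraphicsBeginImageContextWithOptions(frame.size, NO, 0.0);
    CGContextRef c = UIGraphicsGetCurrentContext();

    // Clip to the target area first...
    CGContextClipToRect(c, (CGRect){CGPointZero, frame.size});
    // ...then shift the context so 'frame' lands at the image's origin.
    CGContextConcatCTM(c, CGAffineTransformMakeTranslation(-frame.origin.x, -frame.origin.y));
    [self.layer renderInContext:c];

    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenshot;
}

Note that renderInContext: still walks the whole layer tree either way; the clip only limits which pixels actually get rasterized.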

Avoiding UIView to create very large drawing canvas

I derived a class from UIView, only to realize that there are real limitations on its size due to memory. I have this UIView inside a UIScrollView.
Is there a way for me to put something inside a scroll view that is not a UIView-derived class but into which I can still draw, and which can be very very large?
I don't mind having to respond to expose-rectangle events, like one does when using conventional windowing systems.
Thanks.
The things inside of a UIScrollView must be UIViews, which are size-restricted for memory reasons. UIView maintains a bitmapped backing store for performance reasons, so it has to allocate memory proportional to its size.
The usual way that you handle this is to generate several UIViews and swap them out as the user scrolls around. The other version of that is to use CATiledLayer. Neither of those give you the "giant canvas" drawing model, though. It's up to you to break things up and draw them as needed. This is the usual approach, though.
If you really want a giant canvas, my recommendation would be a CGPDFContext. There is rich existing support for these, particularly using UIWebView (remember, you can open data: URIs to avoid reading files from disk). And you can draw parts of them directly by applying affine transforms and then CGContextDrawPDFPage. CGBitmapContext is another approach, but it could require a lot more memory for a small amount of drawing.
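A hedged sketch of that last point, drawing part of a PDF page with Core Graphics ('url' and 'portion' are illustrative assumptions, not from the original answer):

// Draw the region 'portion' (in PDF points) of page 1 into the current UIKit context.
CGPDFDocumentRef doc = CGPDFDocumentCreateWithURL((__bridge CFURLRef)url);
CGPDFPageRef page = CGPDFDocumentGetPage(doc, 1); // pages are 1-indexed

CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);

// PDF pages use a bottom-left origin; flip to match UIKit's top-left origin.
CGContextTranslateCTM(ctx, 0.0, portion.size.height);
CGContextScaleCTM(ctx, 1.0, -1.0);

// Shift so the desired part of the page lands at the origin.
CGContextTranslateCTM(ctx, -portion.origin.x, -portion.origin.y);
CGContextDrawPDFPage(ctx, page);

CGContextRestoreGState(ctx);
CGPDFDocumentRelease(doc);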
So you have a UIView inside a UIScrollView, but you want your UIView to have very large bounds (i.e., so it matches your UIScrollView's contentSize). But you don't want to draw the entire UIView every time it needs displaying, nor can you fit its entire contents in memory at once.
Make your UIView use a CAScrollLayer as its backing layer, as follows:
// MyCustomUIView.m
+ (Class)layerClass
{
    return [CAScrollLayer class];
}
Add a method to update the scroll position when the user scrolls the UIScrollView containing your UIView:
// MyCustomUIView.m
- (void)setScrollOffset:(CGPoint)scrollOffset
{
    CAScrollLayer *scrollLayer = (CAScrollLayer *)self.layer;
    [scrollLayer scrollToPoint:scrollOffset];
}
Ensure that when you draw your UIView, you only draw the portions contained in the CGRect provided to you:
- (void)drawRect:(CGRect)rect
{
    // Only draw stuff that lies inside 'rect'.
    // CGRectIntersection might be handy here!
}
Now, in your UIScrollViewDelegate, you'll need to notify your CAScrollLayer-backed view when the parent UIScrollView updates:
// SomeUIScrollViewDelegate.m
- (void)scrollViewDidScroll:(UIScrollView *)scrollView
{
    // Offset myCustomView within the scroll view so that it is always visible.
    myCustomView.frame = CGRectMake(scrollView.contentOffset.x,
                                    scrollView.contentOffset.y,
                                    scrollView.bounds.size.width,
                                    scrollView.bounds.size.height);

    // "Scroll" myCustomView so that the correct portion is rendered.
    [myCustomView setScrollOffset:scrollView.contentOffset];

    // Tell it to update its display.
    [myCustomView setNeedsDisplay];
}
You can also use CATiledLayer, which is easier because you do not have to track the scroll position; instead, your drawRect: method is called for each tile as needed. However, this will cause your view to fade in tile by tile. It might be acceptable if you intend to cache parts of your view and don't mind the slow updates.
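A minimal sketch of the CATiledLayer route (the tile size is an illustrative choice, not a requirement):

// MyTiledView.m
+ (Class)layerClass
{
    return [CATiledLayer class];
}

- (id)initWithFrame:(CGRect)frame
{
    if ((self = [super initWithFrame:frame])) {
        CATiledLayer *tiledLayer = (CATiledLayer *)self.layer;
        tiledLayer.tileSize = CGSizeMake(256.0, 256.0);
    }
    return self;
}

// Called once per visible tile, potentially from background threads,
// with 'rect' set to that tile's region; draw only that region.
- (void)drawRect:(CGRect)rect
{
    // ... draw only the content intersecting 'rect' ...
}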

App with expanding animated line in iOS ... howto

The basic idea
is very easy. Simplified, you could say... a snake-like line, realized as a, let's say, 3px line, expands across the screen, collecting and interacting with different stuff, and can be steered through user input. Like a continuous line you would draw with a pen.
I've already started reading apple's documentation on Quartz/CG.
As I understand it now, I need to put my rendering code into a UIView's drawRect:.
Then I would need to set up a timer (I found some answers/posts here and there, nothing concrete though) that fires x times per second and calls setNeedsDisplay on the UIView.
Question:
How to realize the following:
Have the whole snake on UIView 1 (or a layer?), draw the new part on UIView 2, then merge them so that the new part gets appended to UIView 1 (or use CALayers instead of views?). I ask explicitly about this because I read that one shouldn't redraw the same content over and over again, but just the new/moving part.
I hope you can provide me with some sample code, or some details on which classes I should use and the strategy for how to use them / which calls to make.
Edit
OK, I see that my idea, and what I've read before about realizing this with Quartz and different views, is not so wise... (Daniel Bleisteiner)
So I've switched to OpenGL now; well, I'm looking into it, reading examples, Jeff LaMarche's OpenGL blog entries, etc.
I guess I would be able to draw my line. I think I would create classes for curves, straight lines / direction changes, etc., and then on user input I would create the related objects (depending on the steering input), store them in an array, and then recreate and redraw the whole line on each frame by reading the properties of the objects stored in the array. A simple line I would draw like this (code from Beginning iPhone Development):
glDisable(GL_TEXTURE_2D);

GLfloat vertices[4];

// Convert coordinates.
vertices[0] = start.x;
vertices[1] = start.y;
vertices[2] = end.x;
vertices[3] = end.y;

glLineWidth(3.0);
glVertexPointer(2, GL_FLOAT, 0, vertices);
glDrawArrays(GL_LINES, 0, 2);
Maybe I will even find a way to antialias it, but for
now I'm more curious whether my idea is good enough, or whether there are better established strategies for this.
And maybe someone could tell me how to separate the code for HUDs, the line drawing itself, and some menus that I will have to display, e.g. at the beginning, so that it's easy to transition from one "view" to another (like with a fade)? Any hints?
Maybe you have also read a book that explains how to solve this kind of problem?
Or you can point me to another good answer / example code, as I have huge problems finding "animated drawing" examples.
This got me a little further, but it is still a little vague for me.
I don't know how to realize "update the path that you draw (only with the needed points + one last point that moves)".
Don't try to merge different views... setNeedsDisplayInRect: has a rect parameter that tells the Core Graphics part that only a certain region of the screen needs to be rendered again. Respect this parameter in your drawRect: method and it should be enough for standard 2D games and tools.
If you intend to use intense graphics, there is no other option than to use OpenGL. The performance is multiple times better and you won't have to care about optimizations; OpenGL does much of the work in the background.
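A minimal sketch of the dirty-rect idea from the first paragraph (the TLineSegment class and 'segments' array are illustrative assumptions, not from the original answer):

// Invalidate only the area around the newest piece of the line.
- (void)appendSegment:(TLineSegment *)segment
{
    [self.segments addObject:segment];
    // Pad the dirty rect by the stroke width so edges aren't clipped.
    [self setNeedsDisplayInRect:CGRectInset(segment.frame, -3.0f, -3.0f)];
}

- (void)drawRect:(CGRect)rect
{
    for (TLineSegment *segment in self.segments) {
        // Skip segments that don't touch the dirty region.
        if (!CGRectIntersectsRect(segment.frame, rect)) continue;
        // ... stroke this segment with Quartz ...
    }
}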
Use OpenGL ES. Then what you will want to do is create a run loop function or method which gets called by a CADisplayLink 30 to 60 times per second. 60 is the maximum.
So how does this work?
1) Create an assign property for the CADisplayLink. Do NOT retain your CADisplayLink, because display links and timers retain their target; otherwise you would create a retain cycle, which may cause abandoned-memory issues (which are even worse than a leak and much harder to discover).
@property (nonatomic, assign) CADisplayLink *displayLink;
2) Create and schedule the CADisplayLink:
- (void)startRunloop
{
    if (!animating) {
        CADisplayLink *dl = [[UIScreen mainScreen] displayLinkWithTarget:self selector:@selector(drawFrame)];
        [dl setFrameInterval:1];
        [dl addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
        self.displayLink = dl;
        animating = YES;
    }
}
Some things to note here: setFrameInterval:1 tells the CADisplayLink not to skip any frames. This means you get the maximum fps. BUT: this can be bad if your code needs longer than 1/60 of a second to run. In that case it's better to set this to 2, for example; it makes your animation more fluid.
3) In your -drawFrame method, do your OpenGL ES drawing as usual. The only difference is that this code gets called multiple times per second. Just keep track of time and determine in your code what you want to draw and how you want it to be drawn. If you were to animate a rectangle moving from bottom left to top right with an animation duration of 1 second, you would simply interpolate the animation frames between start and end by applying a function which takes the time t as an argument. There are thousands of ways to do it; this is one of them, sketched below.
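A hedged sketch of such a function (the start/end points, the duration, and the 'animationStartTime' variable are illustrative):

// Linear interpolation between two points over 'duration' seconds.
// 'animationStartTime' is recorded (e.g. via CACurrentMediaTime()) when the animation begins.
- (CGPoint)positionAtTime:(CFTimeInterval)now
{
    CFTimeInterval duration = 1.0;
    CGFloat t = MIN((now - animationStartTime) / duration, 1.0); // clamp to [0, 1]

    CGPoint start = CGPointMake(0.0f, 480.0f);  // bottom left
    CGPoint end   = CGPointMake(320.0f, 0.0f);  // top right
    return CGPointMake(start.x + (end.x - start.x) * t,
                       start.y + (end.y - start.y) * t);
}

Call it from -drawFrame with the current time and draw the rectangle at the returned position.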
4) When you're done or want to halt OpenGL ES drawing, just invalidate or pause your CADisplayLink. Something like this:
- (void)stopRunloop
{
    if (animating) {
        [self.displayLink invalidate];
        self.displayLink = nil;
        animating = NO;
    }
}

UIView transparency shows how the sausages are made!

I have a UIView container that has two UIImageViews inside it, one partially obscuring the other (they're being composed like this to allow for occasional animation of one "layer" or another).
Sometimes I want to make this container 50% alpha, so that what the user sees fades. Here's the problem: setting my container view to 50% alpha makes all my subviews inherit this as well, and now you can see through the first subview into the second, which in my application has a weird X-ray effect that I'm not looking for.
What I'm after, of course, is for what the user currently sees to become 50% transparent: the equivalent of flattening the visible view into one bitmap, and then making that bitmap 50% alpha.
What are my best bets for accomplishing this? Ideally I'd like to avoid actually, dynamically flattening the views if I can help it, but best practices on that are welcome as well. Am I missing something obvious? Since most views have subviews and would run into this issue, I feel like there should be some obvious solution here.
Thanks!
EDIT: Thanks for the thoughts, folks. I'm just moving one image around on top of another image, which it only partially obscures. This pair of images has to move together sometimes as well. And sometimes I want to fade the whole thing out, wherever it is and whatever the state of the image pair is at that moment. Later, I want to bring it back and continue animating it.
Taking a snapshot of the container, either by rendering its layer (?) or by doing some other offscreen compositing on the fly before alpha'ing out the whole thing, is definitely possible, and I know there are a couple of ways to do it. But what if the animation should continue to happen while the whole thing is at 50% alpha, for example?
It sounds like there's no obvious solution to what I'm trying to do, which seems odd to me, but thank you all for the input.
Recently I had this same problem, where I needed to animate layers of content with a global transparency. Since my animation was quite complex, I discovered that flattening the UIView hierarchy made for a choppy animation.
The solution I found was using CALayers instead of UIViews, and setting the .shouldRasterize property to YES in the container layer, so that any sublayers would be flattened automatically prior to applying the opacity.
Here's what a UIView could look like:
#import <QuartzCore/QuartzCore.h> //< Needed to use CALayers

...

@interface MyView : UIView {
    CALayer *layer1;
    CALayer *layer2;
    CALayer *compositingLayer; //< Layer where compositing happens.
}

...

- (void)initialization
{
    UIImage *im1 = [UIImage imageNamed:@"image1.png"];
    UIImage *im2 = [UIImage imageNamed:@"image2.png"];

    /***** Setup the layers *****/
    layer1 = [CALayer layer];
    layer1.contents = (id)im1.CGImage;
    layer1.bounds = CGRectMake(0, 0, im1.size.width, im1.size.height);
    layer1.position = CGPointMake(100, 100);

    layer2 = [CALayer layer];
    layer2.contents = (id)im2.CGImage;
    layer2.bounds = CGRectMake(0, 0, im2.size.width, im2.size.height);
    layer2.position = CGPointMake(300, 300);

    compositingLayer = [CALayer layer];
    compositingLayer.shouldRasterize = YES; //< Here we turn this into a compositing layer.
    compositingLayer.frame = self.bounds;

    /***** Create the layer tree *****/
    [compositingLayer addSublayer:layer1]; //< Add first, so it's in back.
    [compositingLayer addSublayer:layer2]; //< Add second, so it's in front.

    // Don't mess with the UIView's own layer, it's picky; just add sublayers to it.
    [self.layer addSublayer:compositingLayer];
}

- (IBAction)animate:(id)sender
{
    /* Since we're using CALayers, we can use implicit animation
     * to move the layers and change the opacity.
     * Layer2 is over Layer1; the composite is partially transparent.
     */
    layer1.position = CGPointMake(200, 200);
    layer2.position = CGPointMake(200, 200);
    compositingLayer.opacity = 0.5;
}
I think that flattening the UIView into a UIImageView is your best bet if you have your heart set on providing this feature. Also, I don't think that flattening the image is going to be as complicated as you might think. Take a look at the answer provided in this question.
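A minimal sketch of that flattening step, using the same renderInContext: technique as the category earlier on this page ('containerView' stands in for your container):

// Flatten the container into a single UIImage, then fade that instead.
UIGraphicsBeginImageContextWithOptions(containerView.bounds.size, NO, 0.0);
[containerView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

UIImageView *flatView = [[UIImageView alloc] initWithImage:flattened];
flatView.frame = containerView.frame;
flatView.alpha = 0.5f; // fading one flat bitmap avoids the X-ray effect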
Set the bottom UIImageView to have .hidden = YES, then set .hidden = NO when you set up a cross-fade animation between the top and bottom UIImageViews.
When you need to fade the whole thing, you can either set .alpha = 0.5 on the container view or the top image view - it shouldn't matter. It may be computationally more efficient to set .alpha = 0.5 on the image view itself, but I don't know enough about the graphics pipeline on the iPhone to be sure about that.
The only downside to this approach is that you can't do a cross-fade when your top image is set to 50% opacity.
A way to do this would be to add the UIImageViews to the UIWindow (the container would be a fake one).