I have one image that I wish to redraw repeatedly over the screen. However, there are a large number of redraws per second, and drawing the image each time makes the app take a huge performance hit. Is there a way to cache the CGImageRef, or something similar, that would make CGContextDrawImage perform faster?
Try using UIImageViews and see if it's fast enough. You are allowed to have many UIImageViews. You should set all of their image properties to the same instance of UIImage.
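For example, a minimal sketch (the image name, count, and positions are placeholders):
UIImage *sharedImage = [UIImage imageNamed:@"Sprite.png"]; // hypothetical file name, loaded once

for (int i = 0; i < 20; i++) {
    // Every view points at the same UIImage instance, so the bitmap is only decoded once.
    UIImageView *imageView = [[UIImageView alloc] initWithImage:sharedImage];
    imageView.center = CGPointMake(20.0f + 30.0f * i, 100.0f); // arbitrary layout
    [self.view addSubview:imageView];
    [imageView release]; // omit under ARC
}
Moving an existing UIImageView around (changing its center or frame) is much cheaper than redrawing the image with CGContextDrawImage every frame.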
If it's for a game, you should just use a game engine (Unity, Cocos2D, etc.). They have already spent a lot of time figuring out how to make this stuff fast.
A CGLayerRef should be what you need.
From the Apple docs:
Layers are suited for the following:
High-quality offscreen rendering of drawing that you plan to reuse.
For example, you might be building a scene and plan to reuse the same background. Draw the background scene to a layer and then draw the layer whenever you need it. One added benefit is that you don’t need to know color space or device-dependent information to draw to a layer.
Repeated drawing. For example, you might want to create a pattern that consists of the same item drawn over and over. Draw the item to a layer and then repeatedly draw the layer, as shown in Figure 12-1. Any Quartz object that you draw repeatedly—including CGPath, CGShading, and CGPDFPage objects—benefits from improved performance if you draw it to a CGLayer. Note that a layer is not just for onscreen drawing; you can use it for graphics contexts that aren’t screen-oriented, such as a PDF graphics context.
https://developer.apple.com/library/mac/#documentation/graphicsimaging/Conceptual/drawingwithquartz2d/dq_layers/dq_layers.html#//apple_ref/doc/uid/TP30001066-CH219-TPXREF101
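A rough sketch of how that looks inside a view's drawRect: (cachedLayer and myImage are hypothetical ivars, and the sizes and positions are arbitrary):
CGContextRef context = UIGraphicsGetCurrentContext();

// Create the layer once and keep it around (e.g. in a CGLayerRef ivar).
if (cachedLayer == NULL) {
    cachedLayer = CGLayerCreateWithContext(context, CGSizeMake(64.0f, 64.0f), NULL);
    CGContextRef layerContext = CGLayerGetContext(cachedLayer);
    // The expensive image draw happens only once, into the offscreen layer.
    CGContextDrawImage(layerContext, CGRectMake(0.0f, 0.0f, 64.0f, 64.0f), myImage);
}

// Every frame, just stamp the cached layer; this is much cheaper than repeating CGContextDrawImage.
for (int i = 0; i < 100; i++) {
    CGContextDrawLayerAtPoint(context, CGPointMake(i * 10.0f, i * 5.0f), cachedLayer);
}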
Related
I've been working on custom drawing using drawRect in UIView subclasses. That's cool, but you have to wait until the end of the run loop for drawRect to be called, and I'm wondering how you can control frame-by-frame animations where the drawing changes over time, or whether this is possible at all. Perhaps Quartz isn't really designed for this type of animated graphics? I gather that it may be designed for static drawings that don't change so frequently.
Quartz by itself is not able to sustain a high frame rate, due to its need to redraw everything each time. But you can have Quartz work together with Core Animation to get Quartz-based animations. The idea is that you cache previously drawn content inside CALayer objects and then use Core Animation to create the continuous drawing effect.
A good example of this technique can be seen in the AccelerometerGraph sample code provided by Apple. Inside this sample the UIView subclass that uses the technique is the "GraphView" object. Basically this object draws from scratch only a portion of the graph (the newly generated segments), caches it in a dedicated layer and then animates the layers to provide the "scrolling graph" animation.
Clearly this technique works only when you have full control of the drawing elements and can manage this incremental way of adding objects to the screen. Things become much more complicated when you must redraw many different parts of the screen and need to modify previously generated layers.
Anyway, have a look at the mentioned code: it is quite interesting.
Your app should exit to the run loop before each frame. Do all your custom frame animation setup between each frame. So frame-by-frame drawing in drawRect should work just fine. This can work in iOS apps at a 60 Hz frame update rate, not just for static views, as long as all your methods between frame times, as well as your drawRects, are fast enough. Chop them up if needed.
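A common way to drive that is a CADisplayLink that marks the view dirty once per refresh (a sketch, assuming a UIView subclass with a custom drawRect:; the method names are just for illustration):
- (void)startAnimating {
    // Fires once per screen refresh (60 Hz on the iPhone).
    CADisplayLink *displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(tick:)];
    [displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
}

- (void)tick:(CADisplayLink *)link {
    // Update the animation state here, then mark the view dirty;
    // drawRect: will run before the next frame is displayed.
    [self setNeedsDisplay];
}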
My platform is iPhone - OpenGL ES 1.1
I'm looking for the tutorial about modifying or drawing to a texture.
For example:
I have a background texture (just a blank blue-to-white gradient image)
and an object texture.
I need to draw the object onto the background many times, so to optimize performance I want to bake it into the background texture.
Does anyone know the fastest way to do this?
Thanks a lot!
Do you want to draw it into the background texture and then keep that, or overlay it, or what? I'm not entirely sure what the question is.
To draw onto the background and then reuse that, you'll want to create another texture, or a pbuffer/fbo, and bind that. Draw a full-screen quad with your background image, then draw additional quads with the overlays as needed. The bound texture should then have the results, composited as necessary, and can be used as a texture or copied into a file. This is typically known as render-to-texture, and is commonly used to combine images or other dynamic image effects.
To optimize the performance here, you'll want to reuse the texture containing the final results. This will reduce the render cost from whatever it may have been (1 background + 4 faces) to a single background draw.
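A rough sketch of the FBO route on ES 1.1 (this uses the OES framebuffer extension that the iPhone exposes; the texture size and the defaultFramebuffer handle are placeholders for whatever your view already uses):
GLuint framebuffer, targetTexture;

// Create the texture that will hold the composited background.
glGenTextures(1, &targetTexture);
glBindTexture(GL_TEXTURE_2D, targetTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Attach it to a framebuffer object and render into it.
glGenFramebuffersOES(1, &framebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, targetTexture, 0);

// ...draw the full-screen background quad, then the object quads, as usual...

// Switch back to the normal framebuffer; targetTexture now holds the baked result
// and can be drawn as a single quad every frame.
glBindFramebufferOES(GL_FRAMEBUFFER_OES, defaultFramebuffer);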
Edit: This article seems to have a rather good breakdown of OpenGL ES RTT. Some good information in this one as well, though not ES-specific.
To overlay the decals, you simply need to draw them over the background. This is the same drawing method as in RTT, but without binding a texture as the render target. This will not persist, it exists only in the backbuffer, but will give the same effect.
To optimize this method, you'll want to batch the decal drawing as much as possible. Assuming they all have the same properties and source texture, this is pretty easy: bind the texture, set properties as needed, fill a chunk of memory with the quad corners, and draw all the quads at once. You can also draw them individually, one draw call per decal, but this is somewhat more expensive.
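A sketch of that batching with ES 1.1 vertex arrays (the decal cap, texture handle, and fill step are placeholders):
#define MAX_DECALS 64   // arbitrary cap for this sketch

GLfloat vertices[MAX_DECALS * 6 * 2];   // x,y per vertex, two triangles per decal
GLfloat texCoords[MAX_DECALS * 6 * 2];  // u,v per vertex
int decalCount = 0;
// ...fill vertices/texCoords for each visible decal and bump decalCount...

glBindTexture(GL_TEXTURE_2D, decalTexture);          // one shared texture for all decals
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glDrawArrays(GL_TRIANGLES, 0, decalCount * 6);       // every decal in a single draw call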
I would appreciate some advice on an iPhone game design. I want to display a background image with other images on top of it (buildings, characters, etc.). The background is going to be large (up to 10 times the size of the screen), so only a piece of the background file will be displayed at once. The idea is to replace this piece when the character gets close to the screen borders. I need to make this background transition a smooth animation. Also, I need a zoom in/out feature, preferably animated. Some images on the screen will be static (buildings) and some will require animation (a character walking).
What is the best design:
1) Use Core Graphics combined with "sprite" classes, displaying each sprite's UIImage with CGContextDrawImage.
2) Use UIKit: create a UIImageView to hold every image and add them as subviews in a single-view application.
3) Use an OpenGL ES project.
Option 1) turned out to be very slow. It seems like Core Graphics is not meant to display images in a game loop. But maybe there is a way to make it efficient? Maybe combine it with Core Animation somehow?
Option 2) is my current choice. I am hoping the views will cache the images they hold and thus be more efficient than CG. But will the animation provided by UIImageView be satisfactory? I think the views shouldn't be added all at once, but rather created and added (removed?) dynamically as the background moves. Is that a good idea?
Option 3) would probably give the best control over the images, but it seems like a lot of overhead. I only need to display images, not vector graphics. Plus, I'm new to Mac programming and I don't want to get stuck in some complex technology.
I appreciate any advice, thanks :)
I highly recommend Cocos2D; I've done my own development with it, which I've written about on my blog. It was really easy to do. I follow Ray Wenderlich's tutorials, and he provides great tools for doing everything you describe.
You asked "The backround is going to be large (up to 10 times the size of the screen) so only a piece of the background file will be displayed at once."
A tiled image system is very powerful and performs well. Google Maps is an example of tiled images: scroll off to a new area and blocks appear. In a local app you could take your image that is 10 times the size of the screen and cut it into tiles that are, say, 100px by 100px, and each screen will only load the tiles that are displayed. When the user moves, only the needed tiles are loaded. This saves memory and dramatically improves speed. It is the basic reason why table views can fly: only the cells on one screen are loaded, and as a cell scrolls off the screen its memory is reused for the next cell.
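A sketch of the bookkeeping (the tile size, file-naming scheme, and scrollOffset are assumptions):
// Work out which 100x100 tiles intersect the visible rect and load only those.
CGRect visible = CGRectMake(scrollOffset.x, scrollOffset.y, 320.0f, 480.0f);
int firstCol = (int)floorf(CGRectGetMinX(visible) / 100.0f);
int lastCol  = (int)floorf(CGRectGetMaxX(visible) / 100.0f);
int firstRow = (int)floorf(CGRectGetMinY(visible) / 100.0f);
int lastRow  = (int)floorf(CGRectGetMaxY(visible) / 100.0f);

for (int row = firstRow; row <= lastRow; row++) {
    for (int col = firstCol; col <= lastCol; col++) {
        // e.g. tiles exported as "bg_<col>_<row>.png" (naming made up here)
        NSString *name = [NSString stringWithFormat:@"bg_%d_%d.png", col, row];
        UIImage *tile = [UIImage imageNamed:name];
        // ...hand tile to a reusable UIImageView/CALayer positioned at (col*100, row*100)...
    }
}
Core Animation's CATiledLayer does essentially this bookkeeping for you if you'd rather not roll it by hand.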
If option 2 is sufficiently performant for your needs I would stick with that - it's as easy a system as you'll get on the iPhone and fine for very simple graphics. A related option that might buy you a little bit of speed is using CALayers to implement the graphics. CALayers are almost as easy to use, but are a bit more lightweight than UIViews (in some ways you can think of UIViews as just wrappers for CALayers with additional overhead for managing things like touch events, etc.)
If you're interested I would read the Core Animation Programming Guide (I would provide a link but I think my reputation is too low; Google should track it down for you). Core Animation is a big subject and can be pretty daunting but if you just use layers (i.e. not the animation parts of it) it's not so bad. Here's a quick example to give you a sense of what using layers looks like:
// NOTE: I haven't compiled this code so it may have typos/errors I haven't noticed
UIView* canvasView; // the view that will be the "canvas" for your game
... // initialize the canvas, etc.
CALayer* imageLayer = [CALayer layer];
UIImage* image = [UIImage imageNamed:@"MyImage.png"];
imageLayer.contents = (id)image.CGImage;
imageLayer.bounds = CGRectMake(0, 0, image.size.width, image.size.height);
imageLayer.position = CGPointMake(100, 100); // NOTE: Unlike a UIView's frame origin, a CALayer's position is its anchor point, which defaults to the center
[canvasView.layer addSublayer:imageLayer];
So basically it looks a lot like working with views but with some added performance (and occasional headache).
P.S. - One thing to keep in mind is that if you change a layer property that is animatable (e.g. position, opacity, etc.), Core Animation will implicitly animate it (e.g. if you write imageLayer.position = somePoint; the layer animates to that position rather than having its position set immediately). There are easy ways to work around that; one common one is sketched below.
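For instance, wrapping the change in a CATransaction with actions disabled suppresses the implicit animation (a minimal sketch):
[CATransaction begin];
[CATransaction setDisableActions:YES];    // no implicit animation for changes in this transaction
imageLayer.position = somePoint;          // takes effect immediately
[CATransaction commit];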
I am a newbie with Quartz and I am fighting to understand this stuff that Apple says is very easy and straightforward.
I have created two CGLayers: one for a fixed background and another one for a sprite. I want this sprite to move.
Both the background context and the sprite context are drawn offscreen and I would like both to be seen on the screen.
To do that - and I am not sure if this is the correct way - I did the following:
I have created a UIImageView
I have captured the layer's contents using
resultingImage = UIGraphicsGetImageFromCurrentImageContext();
myView.image = resultingImage;
This shows me on the screen the contents of both quartz layers.
Now I have two problems:
This approach is slow as hell
When I move the layer, I have to repeat the mentioned code, and even doing this, the layer is not moving!
So, iPhone gurus out there, please tell me if there's another way to do this with Quartz and what I have to do to see the sprite moving!
thanks for any help!
The main reason to use CALayers is to get the GPU to composite directly onto the screen memory. Using UIGraphicsGetImageFromCurrentImageContext() results in the image being composited on the GPU, transferred to main memory and then back again to draw it on the screen which will be really slow. What you should be doing instead is making your new layers sublayers of your view's layer:
[self.view.layer addSublayer:myNewLayer];
For a good example of how to use CALayers in a game see: GeekGameBoard
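As a sketch of that approach (the image name and sizes are placeholders), the sprite becomes a sublayer and moving it is just a property change that the GPU composites over the background:
CALayer *spriteLayer = [CALayer layer];
spriteLayer.contents = (id)[UIImage imageNamed:@"sprite.png"].CGImage;  // hypothetical image
spriteLayer.bounds = CGRectMake(0, 0, 32, 32);
spriteLayer.position = CGPointMake(60, 200);
[self.view.layer addSublayer:spriteLayer];

// Later, to move the sprite, just set a new position; nothing gets redrawn with Quartz,
// and Core Animation will animate the change by default.
spriteLayer.position = CGPointMake(200, 200);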
I'm working on a game where I need to let the player look at a plane (e.g., a wall) through a lens (e.g., a magnifying glass). The game is to run on the iPhone, so my choices are Core Animation or OpenGL ES.
My first idea (that I have not yet tried) is to do this using Core Animation.
Create the wall and objects on it using CALayers.
Use CALayer's renderInContext: method to create an image of the wall as a background layer.
Crop the image to the lens shape, scale it up, then draw it over the background.
Draw the lens frame and "shiny glass" layer on top of all that.
Notes:
I am a lot more familiar with Core Animation than OpenGL, so maybe there is a much better way to do this with OpenGL. (Please tell me!)
If I am using CALayers that are not attached to a view, do I have to manage all animations myself? Or is there a straightforward way to run them manually?
3D perspective is not important; I'm just magnifying a flat wall.
I'm concerned that doing all of the above will be too slow for smooth animation.
Before I commit a lot of code to writing this, my question is do you see any pitfalls in the plan above or can you recommend an easier way to do this?
I have implemented a magnifying glass on the iPhone using a UIView. CA was way too slow.
You can draw a CGImage into a UIView using its drawRect method. Here are the steps in my drawRect:
get the current context
create a path for clipping the view (circle)
scale the current transformation matrix (CTM)
move the current transformation matrix
draw the CGImage
You can have the CGImage prerendered, then it's in the graphics memory.
If you want something dynamic, draw it from scratch instead of drawing a CGImage.
Very fast, looks great.
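Put together, a minimal sketch of such a drawRect: (the magnification factor is arbitrary, and magnifiedCenter, wallImage, and wallImageRect are hypothetical ivars):
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();   // 1. current context

    // 2. clip to the lens shape (a circle filling this view)
    CGContextAddEllipseInRect(context, self.bounds);
    CGContextClip(context);

    // 3./4. scale and move the CTM so that magnifiedCenter (in image coordinates)
    // ends up in the middle of the lens at 2x
    CGPoint lensCenter = CGPointMake(CGRectGetMidX(self.bounds), CGRectGetMidY(self.bounds));
    CGContextTranslateCTM(context, lensCenter.x, lensCenter.y);
    CGContextScaleCTM(context, 2.0f, 2.0f);
    CGContextTranslateCTM(context, -magnifiedCenter.x, -magnifiedCenter.y);

    // 5. draw the prerendered CGImage; note that CGContextDrawImage uses a flipped
    // coordinate system relative to UIKit, so you may also need to flip the CTM
    // depending on how wallImage was produced
    CGContextDrawImage(context, wallImageRect, wallImage);
}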
That is how I'd do it; it sounds like a good plan.
Whether you choose OGL or CA, the basic principle is the same, so I would stick with whichever you're more comfortable with.
Identify the region you wish to magnify
Render this region to a separate surface
Render any border/overlay on top of the surface
Render your surface enlarged onto the main scene, clipping appropriately.
In terms of performance you will have to try it and see (just make sure you test on actual hardware, because the simulator is far faster than the hardware). If it IS too slow then you can look at doing steps 2/3 less frequently, e.g. every 2-3 frames. This will give some magnification lag but it may be perfectly acceptable.
I suspect that performance between OGL and CA will be roughly equivalent. CA is built on top of the OGL libraries, but your cost is going to be the actual rendering, not the time spent in the layers.
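If you go the CA route, steps 2 and 4 might look roughly like this (wallLayer and lensLayer are hypothetical CALayers, and using contentsRect for the enlargement is just one option):
// Step 2: render the wall layer tree into an offscreen image (repeat every 2-3 frames if needed).
UIGraphicsBeginImageContext(wallLayer.bounds.size);
[wallLayer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *wallSnapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Step 4: show a magnified, clipped portion of the snapshot in the lens layer.
lensLayer.contents = (id)wallSnapshot.CGImage;
lensLayer.contentsRect = CGRectMake(0.25, 0.25, 0.5, 0.5);      // the middle quarter of the snapshot, magnified to fill the lens
lensLayer.masksToBounds = YES;
lensLayer.cornerRadius = lensLayer.bounds.size.width / 2.0f;    // circular clip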