Continuously drawing into an iPhone bitmap object? What am I missing?

I am pretty sure this is a straightforward problem I must be confused about (I am a total newbie on the iPhone): I am trying to continuously draw shapes using the CG APIs into a bitmap object, and to always show an up-to-date version of that bitmap on the screen, inside a UIScrollView object so the user can pan, zoom, etc.
Using UIView with drawRect is no good because this is an ongoing thing -- I have a separate thread where all the drawing commands are issued, and I need them to be applied and accumulated on that same framebuffer (kind of like a web page rendering engine). What I keep seeing is that all the CGContext APIs related to images seem to CREATE a brand new image whenever you want it rendered to the screen. That sounds bad because it forces the creation of another copy of the image.
I guess I am looking for a way to create an off-screen bitmap, render to it as much as I want using Core Graphics, and whenever I want to, blit that image to the screen, but still retaining the ability to keep drawing to it and blit it to the screen again, later on.
Thanks in advance!

You could use a CGLayer. Create it once with CGLayerCreateWithContext() and later retrieve the layer's context with CGLayerGetContext() whenever you need to draw into it.
To draw the layer into another graphics context, call CGContextDrawLayerAtPoint() or CGContextDrawLayerInRect().
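A minimal sketch of that pattern, assuming a UIView subclass with a canvasLayer ivar (the 1024x1024 canvas size and the appendCircleAt: method are made up for illustration, and synchronization with your drawing thread is not shown):

    // Blit the accumulated layer to the screen whenever UIKit asks for a redraw.
    - (void)drawRect:(CGRect)rect {
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        if (canvasLayer == NULL) {
            // Create the layer once, based on the first context we see.
            canvasLayer = CGLayerCreateWithContext(ctx, CGSizeMake(1024.0, 1024.0), NULL);
        }
        CGContextDrawLayerAtPoint(ctx, CGPointZero, canvasLayer);
    }

    // Called whenever new drawing commands arrive; they accumulate in the layer.
    - (void)appendCircleAt:(CGPoint)p {
        if (canvasLayer == NULL) return;             // layer not created yet
        CGContextRef layerCtx = CGLayerGetContext(canvasLayer);
        CGContextSetRGBFillColor(layerCtx, 1.0, 0.0, 0.0, 1.0);
        CGContextFillEllipseInRect(layerCtx, CGRectMake(p.x, p.y, 20.0, 20.0));
        [self setNeedsDisplay];                      // re-blit the layer on screen
    }

The drawing accumulates in the layer between redraws, so you never have to rebuild an image from scratch just to get the latest state on screen.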

Redundant image drawing with CGContextDrawImage

I have one image that I wish to redraw repeatedly over the screen, however, there is a large number of redraws per second, and drawing the image each time makes the app take a huge performance hit. Is there a way to somehow cache the CGImageRef or something that would make CGContextDrawImage perform faster?
Try using UIImageViews and see if it's fast enough. You are allowed to have many UIImageViews. You should set all of their image properties to the same instance of UIImage.
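For example, a rough sketch of many views sharing one image (the file name and frames are placeholders):

    UIImage *sprite = [UIImage imageNamed:@"sprite.png"];             // decoded once
    for (int i = 0; i < 50; i++) {
        UIImageView *iv = [[UIImageView alloc] initWithImage:sprite]; // same UIImage instance
        iv.frame = CGRectMake(64.0 * (i % 10), 64.0 * (i / 10), 64.0, 64.0);
        [self.view addSubview:iv];
        [iv release];                                                 // pre-ARC, as in this era
    }

Because every view points at the same UIImage, the bitmap only has to be decoded once, and moving or hiding the views is handled by Core Animation rather than by your own drawing code.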
If it's for a game, you should just use a game engine (Unity, Cocos2D, etc.). They have already spent a lot of time figuring out how to make this stuff fast.
A CGLayerRef should be what you need.
From the Apple docs:
Layers are suited for the following:
High-quality offscreen rendering of drawing that you plan to reuse.
For example, you might be building a scene and plan to reuse the same background. Draw the background scene to a layer and then draw the layer whenever you need it. One added benefit is that you don’t need to know color space or device-dependent information to draw to a layer.
Repeated drawing. For example, you might want to create a pattern that consists of the same item drawn over and over. Draw the item to a layer and then repeatedly draw the layer, as shown in Figure 12-1. Any Quartz object that you draw repeatedly—including CGPath, CGShading, and CGPDFPage objects—benefits from improved performance if you draw it to a CGLayer. Note that a layer is not just for onscreen drawing; you can use it for graphics contexts that aren’t screen-oriented, such as a PDF graphics context.
https://developer.apple.com/library/mac/#documentation/graphicsimaging/Conceptual/drawingwithquartz2d/dq_layers/dq_layers.html#//apple_ref/doc/uid/TP30001066-CH219-TPXREF101
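As a rough illustration of the "repeated drawing" case described above (the circle stamp and the 320x320 area are just stand-ins for your own geometry):

    // Draw the expensive item once into a layer...
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGLayerRef stamp = CGLayerCreateWithContext(ctx, CGSizeMake(32.0, 32.0), NULL);
    CGContextRef stampCtx = CGLayerGetContext(stamp);
    CGContextSetRGBFillColor(stampCtx, 0.0, 0.5, 1.0, 1.0);
    CGContextFillEllipseInRect(stampCtx, CGRectMake(0.0, 0.0, 32.0, 32.0));

    // ...then stamp it repeatedly; Quartz can reuse the layer's cached rendering.
    for (CGFloat y = 0.0; y < 320.0; y += 32.0) {
        for (CGFloat x = 0.0; x < 320.0; x += 32.0) {
            CGContextDrawLayerAtPoint(ctx, CGPointMake(x, y), stamp);
        }
    }
    CGLayerRelease(stamp);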

OpenGL ES draw offscreen to provide the contents for a CALayer

Is it possible to use OpenGL ES to draw offscreen to create a CGImageRef to use as the contents of a CALayer?
I intend to alter the image only once. In detail, I'm looking for an efficient way to change only the hue of an image without changing the brightness as well. The other solution might be to create a pixel buffer and change the data directly, but that seems computationally expensive.
Although it's not something I've done, it should be possible.
If you check out the current OpenGL ES template in Xcode, especially EAGLView.m, you'll see that the parts that bind the OpenGL context in there to the screen are:
line 77, [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];, which tells the CAEAGLLayer to provide sufficient details to the framebuffer object being created there so that it can be displayed on screen.
line 128, success = [context presentRenderbuffer:GL_RENDERBUFFER];, which gives the CAEAGLLayer the nod that you've drawn a whole new frame and it should present that when possible.
What you should be able to do is dump the CAEAGLLayer connection entirely (and, therefore, you don't need to create a UIView subclass), use glRenderbufferStorage or glRenderbufferStorageMultisampleAPPLE to allocate a colour buffer for your framebuffer instead (so that it has storage, but wherever OpenGL feels like putting it), do all your drawing, then use glReadPixels to get the pixel contents back.
From there you can use CGDataProviderCreateWithData and CGImageCreate to convert the raw pixel data to a suitable CGImageRef.
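Roughly, that read-back and wrap-up might look like the following sketch (the 512x512 size is an assumption, error handling is omitted, and GL's bottom-left origin means the result is vertically flipped relative to UIKit):

    GLint width = 512, height = 512;        // whatever you allocated the renderbuffer with
    size_t bytesPerRow = width * 4;
    GLubyte *pixels = (GLubyte *)malloc(bytesPerRow * height);

    // Pull the rendered pixels back from the currently bound framebuffer.
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    // Wrap the raw bytes in a CGImageRef.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels,
                                                              bytesPerRow * height, NULL);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef image = CGImageCreate(width, height, 8, 32, bytesPerRow, colorSpace,
                                     kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast,
                                     provider, NULL, false, kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    // The NULL release callback above means you still own `pixels`;
    // free it yourself once the CGImage is no longer in use.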
The GPU stuff should be a lot faster than you can manage on the CPU normally, but your main costs are likely to be the upload and the download. If you don't actually need it as a CGImageRef other than to show it on screen, you'll be better off just using a CAEAGLLayer-toting UIView subclass. They act exactly like any other view, updating if and when you push new data and compositing in exactly the same way, so there's no additional complexity. The only disadvantage, if you're new, is that most tutorials and sample code on OpenGL tend to focus on setting things up to be full screen, updating 60 times a second, etc., that being what games want.

Draw calls take WAY longer when targeting an offscreen renderbuffer (iPhone GL ES)

I'm using OpenGL ES 1.1 to render a large view in an iPhone app. I have a "screenshot"/"save" function, which basically creates a new GL context offscreen, and then takes exactly the same geometry and renders it to the offscreen context. This produces the expected result.
Yet for reasons I don't understand, the amount of time (measured with CFAbsoluteTimeGetCurrent before and after) that the actual draw calls take when sending to the offscreen buffer is more than an order of magnitude longer than when drawing to the main framebuffer that backs an actual UIView. All of the GL state is the same for both, and the geometry list is the same, and the sequence of calls to draw is the same.
Note that there happens to be a LOT of geometry here-- the order of magnitude is clearly measurable and repeatable. Also note that I'm not timing the glReadPixels call, which is the thing that I believe actually pulls data back from the GPU. This is just a measure of the time spent in e.g. glDrawArrays.
I've tried:
Render that geometry to the screen again just after doing the offscreen render: takes the same quick time for the screen draw.
Render the offscreen thing twice in a row-- both times show the same slow draw speed.
Is this an inherent limitation of offscreen buffers? Or might I be missing something fundamental here?
Thanks for your insight/explanation!
Your best bet is probably to sample both your offscreen rendering and window system rendering each running in a tight loop with the CPU Sampler in Instruments and compare the results to see what differences there are.
Also, could you be a bit more clear about what exactly you mean by “render the offscreen thing twice in a row?” You mentioned at the beginning of the question that you “create a new GL context offscreen”—do you mean a new framebuffer and renderbuffer, or a completely new EAGLContext? Depending on how many new resources and objects you’re creating in order to do your offscreen rendering, the driver may need to do a lot of work to set up these resources the first time you use them in a draw call. If you’re just screenshotting the same content you were putting onscreen, you shouldn’t even need to do any of this—it should be sufficient to call glReadPixels before -[EAGLContext presentRenderbuffer:], since the backbuffer contents will still be defined at that point.
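In other words, the screenshot can piggyback on the normal onscreen pass. A rough sketch of where the read would go (the drawFrame method, the screenshotRequested flag, and the backing size/buffer variables are assumptions, roughly matching what the ES 1.1 template view keeps around):

    [self drawFrame];                                    // the usual scene render
    if (screenshotRequested) {
        // The backbuffer contents are still defined here, before present.
        glReadPixels(0, 0, backingWidth, backingHeight,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixelBuffer);
    }
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];   // ES 1.1 uses the _OES constant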
Could offscreen rendering be forcing the GPU to flush all its normal state, then do your render, flush the offscreen context, and have to reload all the normal stuff back in from CPU memory? That could take a lot longer than any rendering using data and frame buffers that stays completely on the GPU.
I'm not an expert on the issue, but from what I understand, graphics accelerators are built around sending data out to the screen, so normally the path is Code ---vertices---> Accelerator ---rendered-image---> Screen. In your case you are flushing the framebuffer back into main memory, which might be hitting a bandwidth bottleneck in the memory controller or elsewhere along that return path.

How would I implement undo in an OpenGL ES painting application on the iPhone?

I'm using Apple's sample application GLPaint as a basis for an OpenGL ES painting application, but I can't figure out how to implement undo functionality within it.
I don't want to take images of every stroke and store them. Is there any way of using different frame buffer objects to implement undo? Do you have other suggestions for better ways of doing this?
Use vertex buffer objects (VBO) to render your content. On every new stroke, copy the last VBO to some least recently used (LRU) list. If your LRU is full, delete the least recently used VBO. To restore (undo) the last stroke, just use the most recently used VBO of the LRU and render it. A rough sketch of this idea follows the links below.
VBO:
http://developer.apple.com/iphone/library/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/TechniquesforWorkingwithVertexData/TechniquesforWorkingwithVertexData.html
LRU:
http://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used
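One way to read that suggestion, sketched against OpenGL ES 1.1 (the per-stroke VBO layout, method names, and the undo depth of 16 are invented for the example; the real GLPaint geometry differs):

    NSMutableArray *strokeVBOs;                       // NSNumber-wrapped GLuint VBO names
    static const NSUInteger kMaxUndoDepth = 16;

    // Upload each finished stroke into its own VBO and push it onto the stack.
    - (void)finishStrokeWithVertices:(const GLfloat *)verts count:(GLsizei)count {
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, count * 2 * sizeof(GLfloat), verts, GL_STATIC_DRAW);
        [strokeVBOs addObject:[NSNumber numberWithUnsignedInt:vbo]];

        // Evict the least recently used stroke once the stack is full.
        if ([strokeVBOs count] > kMaxUndoDepth) {
            GLuint oldest = [[strokeVBOs objectAtIndex:0] unsignedIntValue];
            glDeleteBuffers(1, &oldest);
            [strokeVBOs removeObjectAtIndex:0];
        }
    }

    // Undo: drop the newest stroke's VBO and redraw the canvas from the rest.
    - (void)undo {
        if ([strokeVBOs count] == 0) return;
        GLuint newest = [[strokeVBOs lastObject] unsignedIntValue];
        glDeleteBuffers(1, &newest);
        [strokeVBOs removeLastObject];
        // ...then re-render the remaining VBOs to rebuild the picture.
    }

Note that once the oldest strokes have been evicted they can no longer be undone, so you would probably want to bake them into a background texture before deleting their VBOs.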
I would recommend using NSUndoManager to store a list of the actual drawing actions undertaken by the user (draw line from here to here using this paintbrush, etc.). If stored as a list of x, y coordinates for vector drawing, along with all other metadata required to recreate that part of the drawing, you won't be using anywhere near as much memory as storing images, vertex buffer objects, or framebuffer objects.
In fact, if you store these drawing steps in a Core Data database, you can almost get undo / redo for free. See my answer here for more.
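A rough sketch of the NSUndoManager approach (the Stroke model class, the strokes array, and redrawCanvas are hypothetical, and this assumes you have an NSUndoManager available, e.g. from the responder chain or one you create yourself):

    // Adding a stroke registers its own removal as the undo action.
    - (void)addStroke:(Stroke *)stroke {
        [self.strokes addObject:stroke];
        [[self.undoManager prepareWithInvocationTarget:self] removeStroke:stroke];
        [self.undoManager setActionName:@"Draw Stroke"];
        [self redrawCanvas];
    }

    // Removing a stroke registers its re-addition, which gives you redo for free.
    - (void)removeStroke:(Stroke *)stroke {
        [self.strokes removeObject:stroke];
        [[self.undoManager prepareWithInvocationTarget:self] addStroke:stroke];
        [self redrawCanvas];
    }

Because only the stroke metadata is stored, the memory cost per undo step is tiny compared with snapshotting images or buffers.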
To undo in a graphical application, you can use Core Data.
Here is a detailed blog post, and read this one as well.
Either you can use NSUndoManager, a class provided by iOS.
Or you can save the current state of the screen area, starting from the current graphics context:
CGContextRef current = UIGraphicsGetCurrentContext();
You can keep an array as a stack of screen image objects: push a snapshot onto the stack on each change, and pop a value off the stack on an undo action.
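If you go that snapshot route, a hedged sketch of the stack idea (canvasImage and undoStack are assumptions; note that a full-size UIImage per step can get expensive in memory):

    NSMutableArray *undoStack;                       // stack of UIImage snapshots

    // Before each change, push the current canvas state.
    - (void)pushSnapshot {
        UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
        [self.layer renderInContext:UIGraphicsGetCurrentContext()];
        [undoStack addObject:UIGraphicsGetImageFromCurrentImageContext()];
        UIGraphicsEndImageContext();
    }

    // On undo, pop the most recent snapshot and display it again.
    - (void)undo {
        if ([undoStack count] == 0) return;
        self.canvasImage = [undoStack lastObject];   // e.g. redrawn by drawRect:
        [undoStack removeLastObject];
        [self setNeedsDisplay];
    }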

Fastest iPhone Blit Routine?

I have a UIView subclass onto which I need to blit a UIImage. There are several ways to skin this cat depending on which series of APIs you prefer to use, and I'm interested in the fastest. Would it be UIImage's drawAtPoint or drawRect? Or perhaps the C-based CoreGraphics routines, or something else? I have no qualms about altering my source image data format if it'll make the blitting that much faster.
To describe my situation my app has anywhere from ~10 to ~200 small UIViews (64x64), a subset of which will need to be redrawn based on user interaction. My current implementation is a call to drawAtPoint inside my UIView subclass' drawRect routine. If you can think of a better way to handle this kind of scenario, I'm all ears (well, eyes).
Using an OpenGL view may be fastest of all. Keep an age cache of images (or if you know a better way to determine when certain images can be removed from the cache, by all means use that) and preload as many images as you can while the app is idle. It should be very quick, with almost no Objective-C calls involved (just -draw)
While it is not a "blit" at all, given the requirements of the problem (many small images with various state changes), I was able to keep the different states in their own separate UIImageView instances and just show/hide the appropriate instance when the state changed.
Since CALayer is lightweight and fast, I would give it a try.
Thierry
The fastest blit implementation you are going to find is in my AVAnimator library; it contains an ARM asm implementation of a blit into a CoreGraphics buffer, so have a look at the source. The way you could make use of it would be to create a single graphics context the size of the whole screen, blit your specific image changes into that single context, then create a UIImage from it and set it as the image of a UIImageView. That would involve one GPU upload per refresh, so it will not depend on how many images you render into the buffer. But you will likely not need to go that low-level. You should first try making each 64x64 image into a CALayer and then updating each layer with the contents of an image that is exactly the size of the layer (64x64). The only tricky thing is that you will want to decompress each of your original images if they come from PNG or JPEG files. You do that by creating another pixel buffer and rendering the original image into it, so that all the PNG or JPEG decompression is done before you start setting CALayer contents.
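A hedged sketch of that pre-decompression step (the file name, the 64x64 size, and the BGRA pixel format are assumptions):

    // Force the PNG/JPEG decode up front by redrawing into our own bitmap context,
    // then hand the already-decoded CGImage to the layer.
    UIImage *compressed = [UIImage imageNamed:@"tile.png"];
    CGSize size = CGSizeMake(64.0, 64.0);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmapCtx = CGBitmapContextCreate(NULL, (size_t)size.width, (size_t)size.height,
                                                   8, (size_t)size.width * 4, colorSpace,
                                                   kCGImageAlphaPremultipliedFirst |
                                                   kCGBitmapByteOrder32Little);
    CGContextDrawImage(bitmapCtx, CGRectMake(0.0, 0.0, size.width, size.height),
                       compressed.CGImage);
    CGImageRef decoded = CGBitmapContextCreateImage(bitmapCtx);

    CALayer *tileLayer = [CALayer layer];
    tileLayer.frame = CGRectMake(0.0, 0.0, size.width, size.height);
    tileLayer.contents = (id)decoded;         // no decode needed at display time

    CGImageRelease(decoded);
    CGContextRelease(bitmapCtx);
    CGColorSpaceRelease(colorSpace);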