OpenGL ES draw offscreen to provide the contents for a CALayer - iPhone

Is it possible to use OpenGL ES to draw offscreen and create a CGImageRef to use as the contents of a CALayer?
I intend to alter the image only once. Specifically, I'm looking for an efficient way to change only the hue of an image without also changing its brightness. The alternative would be to create a pixel buffer and change the data directly, but that seems computationally expensive.

Although it's not something I've done, it should be possible.
If you check out the current OpenGL ES template in Xcode, especially EAGLView.m, you'll see that the parts that bind the OpenGL context in there to the screen are:
line 77, [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];, which tells the CAEAGLLayer to provide sufficient details to the framebuffer object being created there so that it can be displayed on screen.
line 128, success = [context presentRenderbuffer:GL_RENDERBUFFER];, which gives the CAEAGLLayer the nod that you've drawn a whole new frame and it should present that when possible.
What you should be able to do is dump the CAEAGLLayer connection entirely (and, therefore, you don't need to create a UIView subclass), use glRenderbufferStorage or glRenderbufferStorageMultisampleAPPLE to allocate a colour buffer for your framebuffer instead (so that it has storage, but wherever OpenGL feels like putting it), do all your drawing, then use glReadPixels to get the pixel contents back.
From there you can use CGDataProviderCreateWithData and CGImageCreate to convert the raw pixel data to a suitable CGImageRef.
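As a rough sketch of that path (assuming an ES 2.0 EAGLContext is already current; the dimensions, pixel format, and alpha handling are illustrative and should be matched to your own rendering):

    #import <OpenGLES/ES2/gl.h>
    #import <OpenGLES/ES2/glext.h>
    #import <CoreGraphics/CoreGraphics.h>
    #include <stdlib.h>

    // The data provider takes ownership of the pixel buffer and frees it here.
    static void ReleasePixels(void *info, const void *data, size_t size)
    {
        free((void *)data);
    }

    // Render offscreen into a plain framebuffer object, read the pixels back,
    // and wrap them in a CGImageRef. No CAEAGLLayer is involved anywhere.
    static CGImageRef CreateImageFromOffscreenRender(GLsizei width, GLsizei height)
    {
        GLuint framebuffer, colorRenderbuffer;
        glGenFramebuffers(1, &framebuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

        glGenRenderbuffers(1, &colorRenderbuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
        // Plain storage instead of renderbufferStorage:fromDrawable:, so OpenGL
        // puts the colour buffer wherever it likes. GL_RGBA8_OES needs the
        // OES_rgb8_rgba8 extension, which iOS devices expose.
        glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, width, height);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_RENDERBUFFER, colorRenderbuffer);

        // ... do all of your drawing here (e.g. a textured quad with the hue shift) ...

        // Read the result back to the CPU.
        size_t bytesPerRow = (size_t)width * 4;
        GLubyte *pixels = (GLubyte *)malloc(bytesPerRow * height);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        // Wrap the raw pixels in a CGImage. Note the rows come back bottom-up
        // relative to UIKit coordinates.
        CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels,
            bytesPerRow * height, ReleasePixels);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGImageRef image = CGImageCreate(width, height, 8, 32, bytesPerRow, colorSpace,
            kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
            provider, NULL, false, kCGRenderingIntentDefault);
        CGColorSpaceRelease(colorSpace);
        CGDataProviderRelease(provider);

        glDeleteRenderbuffers(1, &colorRenderbuffer);
        glDeleteFramebuffers(1, &framebuffer);
        return image;
    }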
The GPU work should be a lot faster than you can normally manage on the CPU, but your main costs are likely to be the upload and the download. If you don't actually need the result as a CGImageRef other than to show it on screen, you'll be better off just using a CAEAGLLayer-toting UIView subclass. Those act exactly like any other view, updating if and when you push new data and compositing in exactly the same way, so there's no additional complexity. The only disadvantage, if you're new to this, is that most tutorials and sample code for OpenGL focus on setting things up to run full screen and update 60 times a second, since that's what games want.

Related

Redundant image drawing with CGContextDrawImage

I have one image that I wish to redraw repeatedly over the screen, however, there is a large number of redraws per second, and drawing the image each time makes the app take a huge performance hit. Is there a way to somehow cache the CGImageRef or something that would make CGContextDrawImage perform faster?
Try using UIImageViews and see if it's fast enough. You are allowed to have many UIImageViews. You should set all of their image properties to the same instance of UIImage.
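For instance, something like this (a minimal sketch; the image name, counts, and frames are placeholders, written pre-ARC to match the era):

    // Many UIImageViews all backed by one shared UIImage: the bitmap is decoded
    // once and each view simply composites it.
    UIImage *tileImage = [UIImage imageNamed:@"tile.png"];   // placeholder asset name
    for (NSUInteger i = 0; i < 50; i++) {
        UIImageView *tile = [[UIImageView alloc] initWithImage:tileImage];
        tile.frame = CGRectMake((i % 10) * 64.0, (i / 10) * 64.0, 64.0, 64.0);
        [self.view addSubview:tile];
        [tile release];   // the superview retains it
    }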
If it's for a game, you should just use a game engine (Unity, Cocos2D, etc.). They have already spent a lot of time figuring out how to make this stuff fast.
A CGLayerRef should be what you need.
From the Apple docs:
Layers are suited for the following:
High-quality offscreen rendering of drawing that you plan to reuse.
For example, you might be building a scene and plan to reuse the same background. Draw the background scene to a layer and then draw the layer whenever you need it. One added benefit is that you don’t need to know color space or device-dependent information to draw to a layer.
Repeated drawing. For example, you might want to create a pattern that consists of the same item drawn over and over. Draw the item to a layer and then repeatedly draw the layer, as shown in Figure 12-1. Any Quartz object that you draw repeatedly—including CGPath, CGShading, and CGPDFPage objects—benefits from improved performance if you draw it to a CGLayer. Note that a layer is not just for onscreen drawing; you can use it for graphics contexts that aren’t screen-oriented, such as a PDF graphics context.
https://developer.apple.com/library/mac/#documentation/graphicsimaging/Conceptual/drawingwithquartz2d/dq_layers/dq_layers.html#//apple_ref/doc/uid/TP30001066-CH219-TPXREF101
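A minimal sketch of that pattern inside a view's drawRect: (the item drawing and sizes are placeholders; for reuse across frames you would keep the CGLayerRef in an ivar rather than recreating it each time):

    - (void)drawRect:(CGRect)rect
    {
        CGContextRef context = UIGraphicsGetCurrentContext();

        // Draw the repeated item once into an offscreen CGLayer.
        CGLayerRef stamp = CGLayerCreateWithContext(context, CGSizeMake(64.0, 64.0), NULL);
        CGContextRef stampContext = CGLayerGetContext(stamp);
        CGContextSetRGBFillColor(stampContext, 0.0, 0.5, 1.0, 1.0);
        CGContextFillEllipseInRect(stampContext, CGRectMake(8.0, 8.0, 48.0, 48.0));

        // Stamp the layer repeatedly; this is much cheaper than re-running
        // the original drawing commands each time.
        for (int i = 0; i < 10; i++) {
            CGContextDrawLayerAtPoint(context, CGPointMake(i * 64.0, 0.0), stamp);
        }
        CGLayerRelease(stamp);
    }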

How to modify a bound texture in OpenGL ES 1.1

My platform is iPhone - OpenGL ES 1.1
I'm looking for a tutorial about modifying or drawing to a texture.
For example:
I have a background texture (just a blank blue-white gradient image) and an object texture.
I need to draw the object onto the background many times, so to optimize performance I want to draw it into the background texture once and reuse the result.
Does anyone know the fastest way to do this?
Thanks a lot!
Do you want to draw it into the background texture and then keep that, or just overlay it, or something else? I'm not entirely sure what the question is asking.
To draw onto the background and then reuse that, you'll want to create another texture, or a pbuffer/fbo, and bind that. Draw a full-screen quad with your background image, then draw additional quads with the overlays as needed. The bound texture should then have the results, composited as necessary, and can be used as a texture or copied into a file. This is typically known as render-to-texture, and is commonly used to combine images or other dynamic image effects.
To optimize the performance here, you'll want to reuse the texture containing the final results. This will reduce the render cost from whatever it may have been (1 background + 4 faces) to a single background draw.
Edit: This article seems to have a rather good breakdown of OpenGL ES RTT. Some good information in this one as well, though not ES-specific.
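Roughly, on ES 1.1 that looks like the following (a sketch using the GL_OES_framebuffer_object extension; the 512x512 size is illustrative, and onscreenFramebuffer stands in for whatever framebuffer your EAGLView already created):

    // Create a texture to render into. ES 1.1 hardware wants power-of-two sizes.
    GLuint fbo, targetTexture;
    glGenTextures(1, &targetTexture);
    glBindTexture(GL_TEXTURE_2D, targetTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    // Attach it as the colour target of an offscreen framebuffer.
    glGenFramebuffersOES(1, &fbo);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo);
    glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                              GL_TEXTURE_2D, targetTexture, 0);

    if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) == GL_FRAMEBUFFER_COMPLETE_OES) {
        glViewport(0, 0, 512, 512);
        // ... draw the background quad, then the object quads, into the texture ...
    }

    // Return to the on-screen framebuffer; targetTexture now holds the
    // pre-composited background and can be drawn with a single quad per frame.
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, onscreenFramebuffer);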
To overlay the decals, you simply need to draw them over the background. This is the same drawing method as in RTT, but without binding a texture as the render target. This will not persist (it exists only in the backbuffer), but it will give the same effect.
To optimize this method, you'll want to batch drawing the decals as much as possible. Assuming they all have the same properties and source texture, this is pretty easy. Bind all the textures and set properties as needed, fill a chunk of memory with the corners, and just draw a lot of quads. You can also draw them individually, in immediate mode, but this is somewhat more expensive.
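As a sketch of the batched path (ES 1.1 client-side arrays; MAX_DECALS, decalTexture, and FillDecalArrays are hypothetical placeholders for your own per-frame data):

    // Batch every decal that shares one source texture into a single draw call.
    enum { MAX_DECALS = 64 };
    GLfloat vertices[MAX_DECALS * 12];    // 6 vertices (two triangles) * (x, y) each
    GLfloat texCoords[MAX_DECALS * 12];   // 6 vertices * (u, v) each
    int decalCount = FillDecalArrays(vertices, texCoords);   // hypothetical helper

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, decalTexture);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glVertexPointer(2, GL_FLOAT, 0, vertices);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
    glDrawArrays(GL_TRIANGLES, 0, decalCount * 6);   // one submission for all decals

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);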

iOS, Quartz 2D: fastest way of drawing a bitmap context into the window context?

Hello, sorry for my weak English.
I am looking for the fastest possible way of redrawing a bitmap context (which holds a pointer to my raw bitmap data) onto the iPhone view's window context.
In the examples I have found on the net, people do this by making a CGImage from the bitmap context, then making a UIImage from that, and drawing it onto the view.
I am wondering whether that is the fastest way of doing it. Do I really need to create and then release a CGImage? The documentation says that creating a CGImage copies the data. Is it possible to send my bitmap context data straight to the window context without allocating, copying, and then releasing it in a CGImage? (That copy seems physically unnecessary.)
Well, I have done some measuring and here is what I have got: there is no need to worry about the CGImage and UIImage creation, because all of that only takes about 2 milliseconds; my own image processing routines take the most time (about 100 ms), and drawing the UIImage at a point takes 20 ms. There is also a third thing: when I receive the image buffer in my video-frame-ready delegate, I call setNeedsDisplay via performSelectorOnMainThread, and that operation sometimes takes 2 milliseconds and sometimes about 40 milliseconds. Does anybody know what is going on there, and can I speed it up? Thanks in advance.
I think I see what you are getting at. You have a pointer to the bitmap data and you just want the window to display that. On the old Mac OS (9 and earlier) you could write directly to video memory, but you can't do that anymore. Back then video memory was part of RAM; now it all lives on the graphics hardware.
At some level the bitmap data will have to be copied at least once. You can either do it directly, by creating an OpenGL texture from the data and drawing that in an OpenGL context, or you can use the UIImage approach. The UIImage approach will be slower and may involve two or more copies of the bitmap data: once into the UIImage and once again when rendering the UIImage.
In either case, you need to create and release the CGImage.
The copy is necessary. You first have to get the bitmap into the GPU, as only the GPU has access to composite any layer to the display window, and the GPU has to make a copy into its opaque (device-dependent) format. One way to do this is to create an image from your bitmap context (other alternatives include uploading an OpenGL texture, etc.).
Once you create an image you can draw it, or assign it to a visible CALayer's contents. The latter may be faster.
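For the CALayer route, it can be as short as this (a sketch; bitmapContext is assumed to be the CGBitmapContextRef you already maintain, and myLayer a visible CALayer in your hierarchy):

    // Snapshot the bitmap context and hand it to Core Animation, which does
    // the upload and compositing for you.
    CGImageRef frame = CGBitmapContextCreateImage(bitmapContext);
    myLayer.contents = (id)frame;
    CGImageRelease(frame);   // the layer keeps what it needs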

Continuously drawing into an iPhone bitmap object? What am I missing?

I am pretty sure this is a straightforward problem I must be confused about (I am a total newbie on the iPhone): I am trying to continuously draw shapes using the CG APIs into a bitmap object, and to always show an up-to-date version of that bitmap on the screen, inside a UIScrollView object so the user can pan, zoom, etc.
Using UIView with drawRect is no good because this is an ongoing thing; I have a separate thread where all the drawing commands are issued, and I need them to be applied to and accumulated on that same framebuffer (kind of like a web page rendering engine). What I keep seeing is that all the CGContext APIs related to images seem to CREATE a brand new image whenever you want the content rendered to the screen. That sounds bad because it forces the creation of another copy of the image.
I guess I am looking for a way to create an off-screen bitmap, render to it as much as I want using Core Graphics, and, whenever I want to, blit that image to the screen, while still retaining the ability to keep drawing into it and blit it to the screen again later on.
Thanks in advance!
You could use a CGLayer. Create it once with CGLayerCreateWithContext() and later retrieve the layer's context with CGLayerGetContext() whenever you need to draw into it.
To draw the layer into another graphics context, call CGContextDrawLayerAtPoint() or CGContextDrawLayerInRect().
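A rough sketch of that arrangement (offscreenLayer is assumed to be a CGLayerRef ivar created once; the shape drawing and the appendShapeAt: method name are placeholders, and locking between the worker and main thread is omitted):

    // Called from the worker thread as new drawing commands arrive:
    // accumulate into the layer's own context.
    - (void)appendShapeAt:(CGPoint)p
    {
        CGContextRef layerContext = CGLayerGetContext(offscreenLayer);
        CGContextSetRGBFillColor(layerContext, 1.0, 0.0, 0.0, 1.0);
        CGContextFillEllipseInRect(layerContext, CGRectMake(p.x, p.y, 10.0, 10.0));
        [self performSelectorOnMainThread:@selector(setNeedsDisplay)
                               withObject:nil
                            waitUntilDone:NO];
    }

    // On the main thread, drawRect: just blits the accumulated bitmap.
    - (void)drawRect:(CGRect)rect
    {
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextDrawLayerAtPoint(context, CGPointZero, offscreenLayer);
    }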

Fastest iPhone Blit Routine?

I have a UIView subclass onto which I need to blit a UIImage. There are several ways to skin this cat depending on which series of APIs you prefer to use, and I'm interested in the fastest. Would it be UIImage's drawAtPoint or drawRect? Or perhaps the C-based CoreGraphics routines, or something else? I have no qualms about altering my source image data format if it'll make the blitting that much faster.
To describe my situation my app has anywhere from ~10 to ~200 small UIViews (64x64), a subset of which will need to be redrawn based on user interaction. My current implementation is a call to drawAtPoint inside my UIView subclass' drawRect routine. If you can think of a better way to handle this kind of scenario, I'm all ears (well, eyes).
Using an OpenGL view may be fastest of all. Keep an age cache of images (or if you know a better way to determine when certain images can be removed from the cache, by all means use that) and preload as many images as you can while the app is idle. It should be very quick, with almost no Objective-C calls involved (just -draw).
While not a "blit" at all, given the requirements of the problem (many small images with various state changes) I was able to keep the different states to redraw in their own separate UIImageView instances, and just showed/hid the appropriate instance given the state change.
Since CALayer is lightweight and fast, I would give it a try.
The fastest blit implementation you are going to find is in my AVAnimator library; it contains an ARM asm implementation of a blit for a CoreGraphics buffer, so have a look at the source. The way you could make use of it would be to create a single graphics context the size of the whole screen, blit your specific image changes into that single graphics context, then create a UIImage from it and set it as the image of a UIImageView. That involves one GPU upload per refresh, so it does not depend on how many images you render into the buffer.
But you will likely not need to go that low level. You should first try making each 64x64 image into a CALayer and then updating each layer's contents with an image that is exactly the size of the layer, 64x64. The only tricky thing is that you will want to decompress each of your original images if they come from PNG or JPEG files. You do that by creating another pixel buffer and rendering the original image into the new pixel buffer; that way all of the PNG or JPEG decompression is done before you start setting CALayer contents.
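As a sketch of that second suggestion (the asset name and layer placement are placeholders; the redraw into a fresh image context is what forces the PNG/JPEG decode up front):

    // Force-decompress the source image once by redrawing it into a bitmap
    // context, then hand the decoded CGImage to a 64x64 CALayer.
    UIImage *source = [UIImage imageNamed:@"tile.png"];
    CGSize tileSize = CGSizeMake(64.0, 64.0);

    UIGraphicsBeginImageContextWithOptions(tileSize, NO, 0.0);
    [source drawInRect:CGRectMake(0.0, 0.0, tileSize.width, tileSize.height)];
    UIImage *decoded = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CALayer *tileLayer = [CALayer layer];
    tileLayer.frame = CGRectMake(0.0, 0.0, 64.0, 64.0);
    tileLayer.contents = (id)decoded.CGImage;   // no decode cost at display time
    [self.view.layer addSublayer:tileLayer];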