How to merge two FBOs? - iphone

OK so I have 4 buffers, 3 FBOs and a render buffer. Let me explain.
I have a view FBO, which will store the scene before I render it to the render buffer.
I have a background buffer, which contains the background of the scene.
I have a user buffer, which the user manipulates.
When the user makes some action I draw to the user buffer, using some blending.
Then, to redraw the whole scene, I want to clear the view buffer, draw the background buffer into the view buffer, change the blending, then draw the user buffer into the view buffer. Finally, render the view buffer to the render buffer.
However, I can't figure out how to draw an FBO into another FBO. What I essentially want to do is merge and blend two FBOs, but I can't figure out how! I'm very new to OpenGL ES, so thanks for all the help.

Set up your offscreen framebuffers to render directly to a texture. This link shows you how:
http://developer.apple.com/iphone/library/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/WorkingwithEAGLContexts/WorkingwithEAGLContexts.html#//apple_ref/doc/uid/TP40008793-CH103-SW7
Let me take a moment to describe framebuffers and renderbuffers, for my benefit and yours. A framebuffer is like a port that accepts OpenGL rendering commands. It has to be attached to a texture or a renderbuffer before you can see or use the rendering output. You can choose between attaching a texture using glFramebufferTexture2DOES or a renderbuffer using glFramebufferRenderbufferOES. A renderbuffer is like a raster image that holds the results of rendering; storage for the raster image is managed by OpenGL. If you want the image to appear on the screen instead of in an offscreen buffer, you use -[EAGLContext renderbufferStorage:fromDrawable:] to back the renderbuffer with the EAGLContext's storage. This code is in the OpenGL ES project template.
You probably don't need the view framebuffer, since after rendering the scene background and the user layer to textures, you can draw those textures into the renderbuffer (that is, into the framebuffer associated with the onscreen renderbuffer).
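For illustration, here is a minimal sketch (OpenGL ES 1.1 with the OES framebuffer extension) of an offscreen framebuffer that renders into a texture. The size and variable names are arbitrary, and error handling is reduced to a completeness check:

#import <Foundation/Foundation.h>
#import <OpenGLES/ES1/gl.h>
#import <OpenGLES/ES1/glext.h>

GLuint layerTexture, layerFBO;

// Create a power-of-two texture to receive the rendering.
glGenTextures(1, &layerTexture);
glBindTexture(GL_TEXTURE_2D, layerTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// Create the framebuffer and attach the texture as its colour buffer.
glGenFramebuffersOES(1, &layerFBO);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, layerFBO);
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                          GL_TEXTURE_2D, layerTexture, 0);

if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES) {
    NSLog(@"Failed to create texture-backed framebuffer");
}

// Everything drawn while layerFBO is bound now ends up in layerTexture.

Once the background and the user layer each have such a texture, you bind the onscreen framebuffer, draw the background texture as a quad, set your blending, then draw the user texture as a quad over it.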

zoom in the GLPaint sample code

I would like to make an app where you can paint like in the GLPaint sample code, but also zoom in to paint in more detail within your painting.
But I get the feeling that OpenGL ES 1.0, which is what the GLPaint app uses, is pretty difficult to learn and could be a bit of overkill for my needs.
If I change the main view's frame with setFrame to zoom via a gesture recognizer, the already painted lines get erased with every change of the frame's size.
So I tried to realize it with another idea: in touchesMoved I add UIImageViews with an image of the brush at "many" positions. It is slower than the GLPaint app and a bit of a memory-management mess, but I don't see another way to go.
Any suggestions? Should I learn OpenGL ES 1.0 or 2.0, or try to realize the last idea?
You can certainly achieve what you are after, however it will require some effort.
Usually zooming is quite straightforward, as most OpenGL scenes do not rely on accumulating previous frames the way the GLPaint sample code does.
If you just zoom the view in GLPaint, your new painting will be drawn at some adjusted scale over your original drawing - which is almost certainly not what you want.
A work-around is, instead of drawing directly to the presented screen buffer, to first render into a texture, then render that texture on a quad (or equivalent). That way the quad scene can be cleared and re-rendered on every frame refresh (at any scale you choose) while your paint texture retains the accumulated strokes.
This has been tested and works.
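A rough sketch of that work-around (paintTexture, onscreenFramebuffer and zoomScale are hypothetical names): the accumulated painting lives in a texture attached to an offscreen FBO, and on every refresh the screen is cleared and that texture is drawn on a quad at the current zoom.

static const GLfloat quadVertices[] = {
    -1.0f, -1.0f,   1.0f, -1.0f,
    -1.0f,  1.0f,   1.0f,  1.0f,
};
static const GLfloat quadTexCoords[] = {
    0.0f, 0.0f,   1.0f, 0.0f,
    0.0f, 1.0f,   1.0f, 1.0f,
};

// Present the accumulated paint texture, scaled by the current zoom.
glBindFramebufferOES(GL_FRAMEBUFFER_OES, onscreenFramebuffer);
glClear(GL_COLOR_BUFFER_BIT);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glScalef(zoomScale, zoomScale, 1.0f);   // zoom freely; the strokes stay in the texture

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, paintTexture);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, quadVertices);
glTexCoordPointer(2, GL_FLOAT, 0, quadTexCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);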
I am quite sure the image view method will be overkill after drawing for a few minutes... You can do all the zooming quite nicely with OpenGL and I suggest you do that. The best practice would be to create a canvas as large as possible, so when you zoom in you will not lose any resolution.
About zooming: do not try to resize the GL frame (or any frame, for that matter), because even if you manage to do it successfully you will lose resolution. You should use the standard matrices to translate and scale the scene, or just play around with glOrtho (set its values to the rect you are currently seeing). Once you have that working there are, sadly, two more things to do that require a bit of math: first, you will have to compute the new touch positions in the OpenGL scene, since locationInView knows nothing about your zooming and translating; second, you will probably need to scale the brush as well (make it smaller when the scene is zoomed in so you can draw details).
About the canvas: I do suggest you draw to an FBO rather than your main render buffer and present the resulting texture in your main render scene. Note that the FBO will have a texture attached, and the texture must have power-of-two dimensions (create 2048x2048, or 4096x4096 for newer devices), but you will probably use only part of it so it keeps the same aspect ratio as the screen (glViewport should do the job), so you will have to compute the texture coordinates. Overall the drawing mechanism doesn't change much.
So to sum this up: imagine you have a canvas (FBO) to which you apply a brush of a certain size and position on touch events; then you use that canvas as a texture and draw it in your main GL view.
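A sketch of the zooming math described above, assuming a hypothetical visibleRect that describes the portion of the canvas currently on screen (kept up to date by your gesture recognizers), plus hypothetical canvasWidth and kBaseBrushSize values:

// 1) Make the projection show only the visible part of the canvas.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(visibleRect.origin.x,
         visibleRect.origin.x + visibleRect.size.width,
         visibleRect.origin.y,
         visibleRect.origin.y + visibleRect.size.height,
         -1.0f, 1.0f);

// 2) Convert a touch point (view coordinates) into canvas coordinates,
//    flipping y because UIKit's origin is at the top left.
CGPoint p = [touch locationInView:self];
GLfloat canvasX = visibleRect.origin.x +
                  (p.x / self.bounds.size.width) * visibleRect.size.width;
GLfloat canvasY = visibleRect.origin.y +
                  (1.0f - p.y / self.bounds.size.height) * visibleRect.size.height;

// 3) Shrink the brush as you zoom in, so you can paint details.
GLfloat brushSize = kBaseBrushSize * (visibleRect.size.width / canvasWidth);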

Using renderbuffer only (without framebuffer) to draw offscreen content?

Do I have to generate and bind a framebuffer for every renderbuffer I create?
Or is there a chance to create the renderbuffer only (and map it to a texture or somehow submit it to the shaders)?
I just want to render to a one channel buffer to create some mask for later use. I think setting up a complete framebuffer would be overhead for this task.
Thanks.
A renderbuffer is just an image. You cannot bind one as a texture; if you want to create an image to use as a texture, then you need to create a texture. That's why we have both renderbuffers and textures: a renderbuffer is for rendering output you don't intend to read from.
Framebuffers are collections of images. You can't render to a renderbuffer or a texture directly; you render to the framebuffer, which must itself have renderbuffers and/or textures attached to it.
You can either render to the default framebuffer or to a framebuffer object. The images in the default framebuffer can't be used as textures. So if you want to render to a texture, you have to use a framebuffer object. That's how OpenGL works.
"setting up a complete framebuffer" may involve overhead, but you're going to have to do it if you want to render to a texture.
You could use a stencil buffer instead, and just disable the stencil test until you are ready to mask your output.
Edit:
Have a look at the following calls in the OpenGL docs:
glClearStencil
glClear(GL_STENCIL_BUFFER_BIT)
glEnable(GL_STENCIL_TEST)
glDisable(GL_STENCIL_TEST)
glStencilFunc
glStencilOp
http://www.opengl.org/sdk/docs/man/xhtml/glStencilFunc.xml
http://www.opengl.org/sdk/docs/man/xhtml/glStencilOp.xml
http://developer.nvidia.com/system/files/akamai/gamedev/docs/stencil.pdf?download=1
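A rough sketch of how those calls fit together, assuming your framebuffer was created with a stencil attachment (drawMaskShape and drawMaskedContent are hypothetical helpers):

// Pass 1: write 1s into the stencil buffer wherever the mask shape is drawn.
glEnable(GL_STENCIL_TEST);
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   // don't touch the colour buffer
drawMaskShape();

// Pass 2: draw normally, but only where the stencil buffer contains 1.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawMaskedContent();

glDisable(GL_STENCIL_TEST);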

How to modify a bound texture in OpenGL ES 1.1

My platform is iPhone - OpenGL ES 1.1
I'm looking for the tutorial about modifying or drawing to a texture.
For example:
I have a background texture (just a blank blue-white gradient image)
and an object texture.
I need to draw the object onto the background many times, so to optimize performance I want to draw it into the background texture, like this:
Does anyone know the fastest way to do this?
Thanks a lot!
Do you want to draw it into the background texture and then keep that, or just overlay it, or what? I'm not entirely sure what the question is.
To draw onto the background and then reuse that, you'll want to create another texture, or a pbuffer/fbo, and bind that. Draw a full-screen quad with your background image, then draw additional quads with the overlays as needed. The bound texture should then have the results, composited as necessary, and can be used as a texture or copied into a file. This is typically known as render-to-texture, and is commonly used to combine images or other dynamic image effects.
To optimize the performance here, you'll want to reuse the texture containing the final results. This will reduce the render cost from whatever it may have been (1 background + 4 faces) to a single background draw.
Edit: This article seems to have a rather good breakdown of OpenGL ES RTT. Some good information in this one as well, though not ES-specific.
To overlay the decals, you simply need to draw them over the background. This is the same drawing method as in RTT, but without binding a texture as the render target. This will not persist (it exists only in the back buffer), but it will give the same effect.
To optimize this method, you'll want to batch drawing the decals as much as possible. Assuming they all have the same properties and source texture, this is pretty easy. Bind all the textures and set properties as needed, fill a chunk of memory with the corners, and just draw a lot of quads. You can also draw them individually, in immediate mode, but this is somewhat more expensive.
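For example, a batched decal draw might look roughly like this (decals, decalCount and decalTexture are hypothetical; every decal is expanded into two triangles so the whole set goes out in one glDrawArrays call):

#define MAX_DECALS 64
GLfloat verts[MAX_DECALS * 12];      // 6 vertices * (x, y) per decal
GLfloat texcoords[MAX_DECALS * 12];  // 6 vertices * (u, v) per decal
int v = 0;

for (int i = 0; i < decalCount && i < MAX_DECALS; i++) {
    CGRect r = decals[i];            // decal rectangle in scene coordinates
    GLfloat x0 = r.origin.x, y0 = r.origin.y;
    GLfloat x1 = x0 + r.size.width, y1 = y0 + r.size.height;
    GLfloat quad[12] = { x0,y0, x1,y0, x0,y1,   x1,y0, x1,y1, x0,y1 };
    GLfloat uv[12]   = { 0,0, 1,0, 0,1,         1,0, 1,1, 0,1 };
    memcpy(&verts[v * 2], quad, sizeof(quad));
    memcpy(&texcoords[v * 2], uv, sizeof(uv));
    v += 6;
}

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, decalTexture);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, verts);
glTexCoordPointer(2, GL_FLOAT, 0, texcoords);
glDrawArrays(GL_TRIANGLES, 0, v);    // one draw call for all decals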

OpenGL ES: draw offscreen to provide the contents for a CALayer

Is it possible to use OpenGL ES to draw offscreen to create a CGImageRef to use as the contents for a CALayer?
I intend to alter the image only once. In detail, I'm looking for an efficient way to change only the hue of an image without also changing the brightness. The other solution might be to create a pixel buffer and change the data directly, but that seems computationally expensive.
Although it's not something I've done, it should be possible.
If you check out the current OpenGL ES template in Xcode, especially EAGLView.m, you'll see that the parts that bind the OpenGL context in there to the screen are:
line 77, [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];, which tells the CAEAGLLayer to provide sufficient details to the framebuffer object being created there so that it can be displayed on screen.
line 128, success = [context presentRenderbuffer:GL_RENDERBUFFER];, which gives the CAEAGLLayer the nod that you've drawn a whole new frame and it should present that when possible.
What you should be able to do is dump the CAEAGLLayer connection entirely (and, therefore, you don't need to create a UIView subclass), use glRenderbufferStorage or glRenderbufferStorageMultisampleAPPLE to allocate a colour buffer for your framebuffer instead (so that it has storage, but wherever OpenGL feels like putting it), do all your drawing, then use glReadPixels to get the pixel contents back.
From there you can use CGDataProviderCreateWithData and CGImageCreate to convert the raw pixel data to a suitable CGImageRef.
The GPU stuff should be a lot faster than you can normally manage on the CPU, but your main costs are likely to be the upload and the download. If you don't actually need it as a CGImageRef other than to show it on screen, you'll be better off just using a CAEAGLLayer-toting UIView subclass. Those act exactly like any other view, updating if and when you push new data and compositing in exactly the same way, so there's no additional complexity. The only disadvantage, if you're new to this, is that most tutorials and sample code for OpenGL tend to focus on setting things up to be full screen, updating 60 times a second, etc., that being what games want.
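To make the offscreen path above concrete, here is a rough sketch; the size is arbitrary, the actual drawing is elided, and the release callback for the pixel buffer is skipped for brevity:

const int w = 256, h = 256;
GLuint fbo, colorRenderbuffer;

// A framebuffer whose colour buffer is a plain renderbuffer: storage is
// allocated by GL, not by a CAEAGLLayer, so nothing ever reaches the screen.
glGenFramebuffersOES(1, &fbo);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo);
glGenRenderbuffersOES(1, &colorRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGBA8_OES, w, h);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                             GL_RENDERBUFFER_OES, colorRenderbuffer);

// ... do the hue-adjusting drawing here ...

// Read the pixels back and wrap them in a CGImage.
void *pixels = malloc(w * h * 4);
glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, w * h * 4, NULL);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef image = CGImageCreate(w, h, 8, 32, w * 4, colorSpace,
                                 kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast,
                                 provider, NULL, false, kCGRenderingIntentDefault);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);

// Note: glReadPixels returns rows bottom-up, so the CGImage is vertically
// flipped relative to the GL drawing unless you compensate; and the pixel
// buffer must outlive the image (or be freed via a provider release callback).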

Is there a way that I can make a program like GLPaint using CGContext?

I want to make a program similar to GLPaint, using CGContext, that is very smooth and makes it easy to put images behind the canvas. I understand that GLPaint makes no allowance for putting an image behind the painting canvas rather than just a black one.
You can very simply use an image behind the painting canvas.
Four basic steps:
load your image into a texture (for example 256x256)
enable GL_TEXTURE_2D and bind the texture id you loaded
draw a rectangle with that texture enabled, using a texture-coordinate pointer (an array of u,v points)
loop on your screen touch events to overlay points, as in GLPaint (without clearing your buffer), to keep the old points and the background image; render your buffer after drawing the points (brush)
Do you need more precision or sample code ?
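For example, a minimal sketch of steps 2-4 above (backgroundTexture is the texture id from step 1; the quad covers the whole screen with the default identity projection):

static const GLfloat bgVertices[]  = { -1,-1,   1,-1,   -1,1,   1,1 };
static const GLfloat bgTexCoords[] = {  0, 0,   1, 0,    0,1,   1,1 };

// Step 2: enable texturing and bind the loaded background texture.
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, backgroundTexture);

// Step 3: draw a full-screen rectangle with that texture.
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, bgVertices);
glTexCoordPointer(2, GL_FLOAT, 0, bgTexCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

// Step 4: in touchesMoved, draw brush points on top without clearing the
// buffer (exactly as GLPaint does), then present the renderbuffer.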