Using a renderbuffer only (without a framebuffer) to draw offscreen content? - iphone

Do I have to generate and bind a framebuffer for every renderbuffer I create?
Or is there a way to create only a renderbuffer (and map it to a texture or somehow pass it to the shaders)?
I just want to render to a single-channel buffer to create a mask for later use. Setting up a complete framebuffer seems like overhead for this task.
Thanks.

A renderbuffer is just an image. You cannot bind one as a texture; if you want to create an image to use as a texture, then you need to create a texture. That's why we have both renderbuffers and textures: renderbuffers are for images you don't intend to read from later.
Framebuffers are collections of images. You can't render to a renderbuffer or a texture directly; you render to the framebuffer, which itself must have renderbuffers and/or textures attached to it.
You can either render to the default framebuffer or to a framebuffer object. The images in the default framebuffer can't be used as textures. So if you want to render to a texture, you have to use a framebuffer object. That's how OpenGL works.
"setting up a complete framebuffer" may involve overhead, but you're going to have to do it if you want to render to a texture.

You could use a stencil buffer instead, and just disable the stencil test until you are ready to mask your output.
Edit:
Have a look at the following calls in the OpenGL docs (a minimal sketch of using them follows the links below):
glClearStencil
glClear(GL_STENCIL_BUFFER_BIT)
glEnable(GL_STENCIL_TEST)
glDisable(GL_STENCIL_TEST)
glStencilFunc
glStencilOp
http://www.opengl.org/sdk/docs/man/xhtml/glStencilFunc.xml
http://www.opengl.org/sdk/docs/man/xhtml/glStencilOp.xml
http://developer.nvidia.com/system/files/akamai/gamedev/docs/stencil.pdf?download=1
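A rough sketch of that flow (GL ES calls; the two passes and the reference value 1 are illustrative, and the framebuffer is assumed to have a stencil attachment): first write the mask shape into the stencil buffer, then enable the test while drawing the content that should be masked.

glEnable(GL_STENCIL_TEST);

// Pass 1: write 1 into the stencil buffer wherever the mask geometry lands.
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // stencil only, no color writes
// ... draw the mask geometry ...

// Pass 2: draw the scene only where the stencil value equals 1.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
// ... draw the masked content ...

glDisable(GL_STENCIL_TEST);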

Related

Copy A Texture to PixelBuffer (CVPixelBufferRef)

I am using an API which only gives me the integer id of the texture object, and I need to pass that texture's data to AVAssetWriter to create the video.
I know how to create CVOpenGLESTexture object from pixel buffer (CVPixelBufferRef), but in my case I have to somehow copy the data of a texture of which only the id is available.
In other words, I need to copy an OpenGL texture into my pixel-buffer-backed texture object. Is it possible? If yes, then how?
In my sample code I have something like:
void encodeFrame(GLuint textureOb)
{
    CVPixelBufferPoolCreatePixelBuffer(NULL, [assetWriterPixelBufferAdaptor pixelBufferPool], &pixelBuffer[0]);
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache, pixelBuffer[0],
                                                 NULL,              // texture attributes
                                                 GL_TEXTURE_2D,
                                                 GL_RGBA,           // opengl format
                                                 (int)FRAME_WIDTH,
                                                 (int)FRAME_HEIGHT,
                                                 GL_BGRA,           // native iOS format
                                                 GL_UNSIGNED_BYTE,
                                                 0,
                                                 &renderTexture[0]);
    CVPixelBufferLockBaseAddress(pixelBuffer[0], 0);

    // Creation of textureOb is not under my control.
    // All I have is the id of the texture.
    // Here the data of textureOb somehow needs to be appended as a video frame,
    // either by copying the data into pixelBuffer or by passing textureOb to the adaptor.

    [assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer[0] withPresentationTime:presentationTime];
}
Thanks for tips and answers.
P.S. glGetTexImage isn't available on iOS.
Update:
@Dr. Larson, I can't set the texture ID for the API. Actually, I can't dictate that the 3rd-party API use a texture object I created myself.
After going through the answers, what I understood is that I need to:
1. Attach the pixel-buffer-associated texture object as the color attachment of a texFBO.
Then, for each frame:
2. Bind the texture obtained from the API.
3. Bind texFBO and call drawElements.
What am I doing wrong in this code?
P.S. I'm not familiar with shaders yet, so it is difficult for me to make use of them right now.
Update 2:
With the help of Brad Larson's answer and the correct shaders, the problem was solved. I had to use shaders, which are an essential requirement of OpenGL ES 2.0.
For reading back data from OpenGL ES on iOS, you basically have two routes: using glReadPixels(), or using the texture caches (iOS 5.0+ only).
The fact that you just have a texture ID and access to nothing else is a little odd, and limits your choices here. If you have no way of setting what texture to use in this third-party API, you're going to need to re-render that texture to an offscreen framebuffer to extract the pixels for it either using glReadPixels() or the texture caches. To do this, you'd use an FBO sized to the same dimensions as your texture, a simple quad (two triangles making up a rectangle), and a passthrough shader that will just display each texel of your texture in the output framebuffer.
At that point, you can just use glReadPixels() to pull your bytes back into the internal byte array of your CVPixelBufferRef, or preferably use the texture caches to eliminate the need for that read. I describe how to set up the caching for that approach in this answer, as well as how to feed that into an AVAssetWriter. You'll need to set your offscreen FBO to use the CVPixelBufferRef's associated texture as a render target for this to work.
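For the glReadPixels() route, the readback might look roughly like this (a sketch that reuses FRAME_WIDTH, FRAME_HEIGHT, pixelBuffer, and the adaptor from the question's code, and assumes the offscreen FBO containing the re-rendered texture is still bound):

CVPixelBufferLockBaseAddress(pixelBuffer[0], 0);
GLubyte *destination = (GLubyte *)CVPixelBufferGetBaseAddress(pixelBuffer[0]);

// glReadPixels always supports GL_RGBA / GL_UNSIGNED_BYTE; if the pixel buffer is
// BGRA (the usual iOS case), swap red and blue either here or by swizzling in the
// passthrough shader. This also assumes the buffer's bytes-per-row is FRAME_WIDTH * 4.
glReadPixels(0, 0, (GLsizei)FRAME_WIDTH, (GLsizei)FRAME_HEIGHT,
             GL_RGBA, GL_UNSIGNED_BYTE, destination);

CVPixelBufferUnlockBaseAddress(pixelBuffer[0], 0);
[assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer[0] withPresentationTime:presentationTime];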
However, if you have the means of setting what ID to use for this rendered texture, you can avoid having to re-render it to grab its pixel values. Set up the texture caching like I describe in the above-linked answer and pass the texture ID for that pixel buffer into the third-party API you're using. It will then render into the texture that's associated with the pixel buffer, and you can record from that directly. This is what I use to accelerate the recording of video from OpenGL ES in my GPUImage framework (with the glReadPixels() approach as a fallback for iOS 4.x).
Yeah, it's rather unfortunate that glGetTexImage isn't available on iOS. I struggled with that when I implemented my CCMutableTexture2D class for cocos2d.
Caching the image before pushing to the GPU
If you take a look into the source, you'll notice that in the end I kept the pixel buffer of the image cached in my CCMutableTexture2D class instead of taking the normal route of discarding it after it's pushed to the GPU.
http://www.cocos2d-iphone.org/forum/topic/2449
Using FBOs and glReadPixels
Sadly, I think this approach might not be appropriate for you, since you're creating some kind of video with the texture data, and holding onto every pixel buffer we've cached eats up a lot of memory. Another approach could be to create an FBO on the fly and use glReadPixels to populate your pixel buffer. I'm not too sure how successful that approach would be, but a good example was posted here:
Read texture bytes with glReadPixels?

Is a simple texture cheaper than an FBO with a texture attachment?

I'm manipulating image masks at runtime.
Currently I'm using a separate FBO to render the masked image to a texture, then drawing the result to the screen.
I'm just wondering whether this other approach would be cheaper (in terms of memory):
Draw the image with a shader that uses two texture slots, one for the image and another for the mask.
Could I save any memory this way?
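For reference, the two-texture approach would amount to a fragment shader along these lines (a sketch; the uniform and varying names are illustrative, and the mask is assumed to live in the red channel):

static const char *kMaskFragmentShader =
    "precision mediump float;                                       \n"
    "varying vec2 vTexCoord;          // from the vertex shader     \n"
    "uniform sampler2D uImageTexture; // texture unit 0             \n"
    "uniform sampler2D uMaskTexture;  // texture unit 1             \n"
    "void main()                                                    \n"
    "{                                                              \n"
    "    vec4 color = texture2D(uImageTexture, vTexCoord);          \n"
    "    float mask = texture2D(uMaskTexture, vTexCoord).r;         \n"
    "    gl_FragColor = vec4(color.rgb, color.a * mask);            \n"
    "}                                                              \n";

This avoids allocating the intermediate render-target texture; the trade-off is one extra texture fetch per fragment.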

How to modify a bound texture in OpenGL ES 1.1

My platform is iPhone - OpenGL ES 1.1
I'm looking for a tutorial on modifying or drawing to a texture.
For example:
I have a background texture: (just a blank blue-white gradient image)
and an object texture:
I need to draw the object onto the background many times, so to optimize performance I want to draw it into the background texture, like this:
Does anyone know the fastest way to do this?
Thanks a lot!
Do you want to draw it into the background texture and then keep that, or overlay it, or what? I'm not entirely sure what the question is.
To draw onto the background and then reuse that, you'll want to create another texture, or a pbuffer/fbo, and bind that. Draw a full-screen quad with your background image, then draw additional quads with the overlays as needed. The bound texture should then have the results, composited as necessary, and can be used as a texture or copied into a file. This is typically known as render-to-texture, and is commonly used to combine images or other dynamic image effects.
To optimize the performance here, you'll want to reuse the texture containing the final results. This will reduce the render cost from whatever it may have been (1 background + 4 faces) to a single background draw.
Edit: This article seems to have a rather good breakdown of OpenGL ES RTT. Some good information in this one as well, though not ES-specific.
To overlay the decals, you simply need to draw them over the background. This is the same drawing method as in RTT, but without binding a texture as the render target. This will not persist, it exists only in the backbuffer, but will give the same effect.
To optimize this method, you'll want to batch drawing the decals as much as possible. Assuming they all have the same properties and source texture, this is pretty easy. Bind all the textures and set properties as needed, fill a chunk of memory with the corners, and just draw a lot of quads. You can also draw them individually, in immediate mode, but this is somewhat more expensive.
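A sketch of batching those decal quads in ES 1.1 (the DecalVertex struct, MAX_DECALS, decalTexture, and the way the array gets filled are all illustrative):

typedef struct {
    GLfloat x, y;   // position
    GLfloat u, v;   // texture coordinates
} DecalVertex;

// Six vertices per decal (two triangles), refilled each frame from the decal positions.
DecalVertex vertices[MAX_DECALS * 6];
GLsizei vertexCount = 0;
// ... fill vertices[] for every visible decal and bump vertexCount ...

glBindTexture(GL_TEXTURE_2D, decalTexture);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, sizeof(DecalVertex), &vertices[0].x);
glTexCoordPointer(2, GL_FLOAT, sizeof(DecalVertex), &vertices[0].u);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);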

OpenGL ES: draw offscreen to provide the contents for a CALayer

Is it possible to use OpenGL ES to draw offscreen and create a CGImageRef to use as the contents of a CALayer?
I intend to alter the image only once. Specifically, I'm looking for an efficient way to change only the hue of an image without also changing its brightness. The other solution might be to create a pixel buffer and change the data directly, but that seems computationally expensive.
Although it's not something I've done, it should be possible.
If you check out the current OpenGL ES template in Xcode, especially EAGLView.m, you'll see that the parts that bind the OpenGL context in there to the screen are:
line 77, [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];, which tells the CAEAGLLayer to provide sufficient details to the framebuffer object there being created so that it can be displayed on screen.
line 128, success = [context presentRenderbuffer:GL_RENDERBUFFER];, which gives the CAEAGLLayer the nod that you've drawn a whole new frame and it should present that when possible.
What you should be able to do is dump the CAEAGLLayer connection entirely (and, therefore, you don't need to create a UIView subclass), use glRenderbufferStorage or glRenderbufferStorageMultisampleAPPLE to allocate a colour buffer for your framebuffer instead (so that it has storage, but wherever OpenGL feels like putting it), do all your drawing, then use glReadPixels to get the pixel contents back.
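A sketch of that offscreen setup (ES 2.0-style calls matching the template the answer references; width and height are placeholders, and GL_RGBA8_OES relies on the OES_rgb8_rgba8 extension that iOS exposes):

GLuint framebuffer, colorRenderbuffer;

glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

glGenRenderbuffers(1, &colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, width, height); // offscreen storage, no CAEAGLLayer
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, colorRenderbuffer);

// ... draw the scene ...

GLubyte *pixels = (GLubyte *)malloc(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);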
From there you can use CGDataProviderCreateWithData and CGImageCreate to convert the raw pixel data to a suitable CGImageRef.
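The conversion might then look like this (a sketch; pixels, width, and height come from the readback above, and glReadPixels returns the image bottom-up, so you may need to flip it before use):

// With a NULL release callback, pixels must outlive the CGImageRef.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, width * height * 4, NULL);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef image = CGImageCreate(width, height,
                                 8,              // bits per component
                                 32,             // bits per pixel
                                 width * 4,      // bytes per row
                                 colorSpace,
                                 kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast,
                                 provider,
                                 NULL,           // decode array
                                 false,          // should interpolate
                                 kCGRenderingIntentDefault);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
// image can now be set as a CALayer's contents: layer.contents = (id)image;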
The GPU work should be a lot faster than you could normally manage on the CPU, but your main costs are likely to be the upload and the download. If you don't actually need it as a CGImageRef other than to show it on screen, you'll be better off just using a CAEAGLLayer-toting UIView subclass. Those act exactly like any other view, updating if and when you push new data and compositing in exactly the same way, so there's no additional complexity. The only disadvantage, if you're new to this, is that most tutorials and sample code for OpenGL tend to focus on setting things up to run full screen, updating 60 times a second, and so on, that being what games want.

How to merge two FBOs?

OK, so I have 4 buffers: 3 FBOs and a renderbuffer. Let me explain.
I have a view FBO, which will store the scene before I render it to the render buffer.
I have a background buffer, which contains the background of the scene.
I have a user buffer, which the user manipulates.
When the user makes some action I draw to the user buffer, using some blending.
Then to redraw the whole scene what I want to do is clear the view buffer, draw the background buffer to the view buffer, change the blending, then draw the user buffer to the view buffer. Finally render the view buffer to the render buffer.
However, I can't figure out how to draw an FBO into another FBO. What I want to do is essentially merge and blend two FBOs, but I can't figure out how! I'm very new to OpenGL ES, so thanks for all the help.
Set up your offscreen framebuffers to render directly to a texture. This link shows you how:
http://developer.apple.com/iphone/library/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/WorkingwithEAGLContexts/WorkingwithEAGLContexts.html#//apple_ref/doc/uid/TP40008793-CH103-SW7
Let me take a moment to describe framebuffers and renderbuffers, for my benefit and yours. A framebuffer is like a port that accepts OpenGL rendering commands. It has to be attached to a texture or a renderbuffer before you can see or use the rendering output. You can choose between attaching a texture using glFramebufferTexture2DOES or a renderbuffer using glFramebufferRenderbufferOES. A renderbuffer is like a raster image that holds the results of rendering. Storage for the raster image is managed by OpenGL. If you want the image to appear on the screen instead of an offscreen buffer, you use -[EAGLContext renderBufferStorage:fromDrawable:] to use the EAGLContext's storage with the renderbuffer. This code is in the OpenGL ES project template.
You probably don't need the view framebuffer, since after rendering the scene background and the user layer to textures, you can draw those textures into the renderbuffer (that is, into the framebuffer associated with the onscreen renderbuffer).
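A rough per-frame sketch of that (ES 1.1-style OES calls; viewFramebuffer, backgroundTexture, userTexture, and drawFullScreenTexturedQuad() are placeholder names for things your code would provide):

// Both layers have already been rendered into their textures via their own FBOs.
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer); // the onscreen framebuffer
glClear(GL_COLOR_BUFFER_BIT);

// Background layer, drawn opaque.
glDisable(GL_BLEND);
glBindTexture(GL_TEXTURE_2D, backgroundTexture);
drawFullScreenTexturedQuad();

// User layer, blended on top.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBindTexture(GL_TEXTURE_2D, userTexture);
drawFullScreenTexturedQuad();

[context presentRenderbuffer:GL_RENDERBUFFER_OES];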