OpenGL ES 2.x: How to Discard the Depth Buffer with glDiscardFramebufferEXT? (iPhone)

I read in iOS OpenGL ES Logical Buffer Loads that a performance gain can be reached by "discarding" your depth buffer after each draw cycle. I tried this, but now my game engine no longer renders anything. I am getting glError 1286 (GL_INVALID_FRAMEBUFFER_OPERATION_EXT) when I try to render the next cycle.
I get the feeling I need to initialize or set up the depth buffer each cycle if I'm going to discard it, but I can't seem to find any information on this. Here is how I init the depth buffer (all buffers, actually):
// ---- GENERAL INIT ---- //
// Extract width and height.
int bufferWidth, bufferHeight;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &bufferWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &bufferHeight);
// Create a depth buffer that has the same size as the color buffer.
glGenRenderbuffers(1, &m_depthRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, m_depthRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24_OES, GAMESTATE->GetViewportSize().x, GAMESTATE->GetViewportSize().y);
// Create the framebuffer object.
GLuint framebuffer;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, m_colorRenderbuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, m_depthRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, m_colorRenderbuffer);
And here is what I'm trying to do to discard the depth buffer at the end of each draw cycle:
// Discard the depth buffer
const GLenum discards[] = {GL_DEPTH_ATTACHMENT, GL_COLOR_ATTACHMENT0};
glBindFramebuffer(GL_FRAMEBUFFER, m_depthRenderbuffer);
glDiscardFramebufferEXT(GL_FRAMEBUFFER,1,discards);
I call that immediately following all of my draw calls and...
[m_context presentRenderbuffer:GL_RENDERBUFFER];
Any ideas? Any info someone could point me to? I tried reading through Apple's guide on the subject (where I got the original idea), http://developer.apple.com/library/ios/#documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/WorkingwithEAGLContexts/WorkingwithEAGLContexts.html, but it doesn't seem to work quite right for me.

Your call to glDiscardFramebufferEXT(GL_FRAMEBUFFER,1,discards) says that you are discarding just one framebuffer attachment, but your discards array includes two: GL_DEPTH_ATTACHMENT and GL_COLOR_ATTACHMENT0.
Try changing it to:
glDiscardFramebufferEXT(GL_FRAMEBUFFER, 2, discards);
In fact, you say that you are discarding these framebuffer attachments at the end of the draw cycle, but directly before [m_context presentRenderbuffer:GL_RENDERBUFFER];. You are discarding the colour renderbuffer attachment that you need in order to present the renderbuffer - perhaps try just discarding the depth attachment, as this is no longer needed at this point.
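For instance, a minimal sketch of discarding only the depth attachment right before presenting. This assumes a member m_framebuffer holding the framebuffer object created in the init code above (note that the snippet in the question binds m_depthRenderbuffer with glBindFramebuffer, which passes a renderbuffer name where a framebuffer name is expected):
// Bind the framebuffer object itself, not the depth renderbuffer.
glBindFramebuffer(GL_FRAMEBUFFER, m_framebuffer);
// Hint that only the depth contents can be thrown away; the colour buffer is still needed for presentation.
const GLenum depthDiscard[] = { GL_DEPTH_ATTACHMENT };
glDiscardFramebufferEXT(GL_FRAMEBUFFER, 1, depthDiscard);
// Present the colour renderbuffer as usual.
glBindRenderbuffer(GL_RENDERBUFFER, m_colorRenderbuffer);
[m_context presentRenderbuffer:GL_RENDERBUFFER];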
You only need to initialise your buffers once, not every draw cycle. The glDiscardFramebufferEXT() doesn't actually delete your framebuffer attachment - it is simply a hint to the API to say that the contents of the renderbuffer are not needed in that draw cycle after the discard command completes. From Apple's OpenGL ES Programming Guide for iOS:
A discard operation is defined by the EXT_discard_framebuffer extension and is available on iOS 4.0 and later. Discard operations should be omitted when your application is running on earlier versions of iOS, but included whenever they are available. A discard is a performance hint to OpenGL ES; it tells OpenGL ES that the contents of one or more renderbuffers are not used by your application after the discard command completes. By hinting to OpenGL ES that your application does not need the contents of a renderbuffer, the data in the buffers can be discarded or expensive tasks to keep the contents of those buffers updated can be avoided.
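Following that advice, one way to guard the discard at runtime is a simple extension-string check. This is only a sketch; the strstr match (from <string.h>) is crude but adequate for this extension name:
// Only issue discards when EXT_discard_framebuffer is actually present (iOS 4.0 and later).
const char *extensions = (const char *)glGetString(GL_EXTENSIONS);
BOOL supportsDiscard = (extensions != NULL &&
                        strstr(extensions, "GL_EXT_discard_framebuffer") != NULL);
if (supportsDiscard) {
    const GLenum depthDiscard[] = { GL_DEPTH_ATTACHMENT };
    glDiscardFramebufferEXT(GL_FRAMEBUFFER, 1, depthDiscard);
}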

Related

Fighting Z-fighting in OpenGL-ES

I have a 3D iPhone game done with OpenGL ES.
It's a big world but with some tiny, first-person-view bits I need to paint up close, so I can't reduce the depth range (zNear vs zFar) that glFrustumf() takes any further.
When surfaces meet for a Z-fight, I paint them slightly apart to stop them flickering. I'm also making the camera's distance determine how far apart I adjust them, in cases where this is useful and needed.
It's mostly OK, but there are some things whose perspective suffers by the separation, and making the separation smaller causes flicker. I'd love to paint surfaces closer together.
Is there any way to increase the depth buffer precision, so surfaces can be closer together without a narrower depth range?
If not, is there any other way around this?
I'm still using OpenGL ES 1.1 in the app, but am willing to upgrade if it's worth it.
Thanks for your help.
Here's how I create the depth buffer...
In init method:
// Create default framebuffer object. The backing will be allocated for the current layer in -resizeFromLayer
glGenFramebuffersOES(1, &defaultFramebuffer);
glGenRenderbuffersOES(1, &colorRenderbuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, defaultFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, colorRenderbuffer);
//Added depth buffer
glGenRenderbuffersOES(1, &depthRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);
In resizeFromLayer method:
// Allocate color buffer backing based on the current layer size
glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
[context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:layer];
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
//Added depth buffer
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, backingWidth, backingHeight);
Here's how I create the frustum...
const GLfloat zNear = 2.2;
const GLfloat zFar = 30000;
const GLfloat fieldOfView = 60.0;
GLfloat size = zNear * tanf(degreesToRadian(fieldOfView) / 2.0);
if (LANDSCAPE) { //for landscape clip & aspect ratio.
//parameters are: left, right, bottom, top, near, far
glFrustumf(-size/(backingWidth/backingHeight),
size/(backingWidth/backingHeight),
-size, size,
zNear, zFar);
}
What worked for me was to adjust the near and far values. The difference between the far and near values determines how precise your depth buffer is.
For example, say you have a far of 10000 and a near of 500. That gives a total depth range of 9500.
With a 16-bit depth buffer you have 65536 possible depth values. (How these values are distributed across the range varies with the GPU and the OpenGL implementation.)
That gives you roughly 65536 / 9500 ≈ 7 distinct depth values per unit of space, i.e. a depth precision of about 1/7 ≈ 0.14 units. If your objects are separated by 0.14 units or less, you'll probably get z-fighting.
In real life this is more complex, but the idea is the same.
Maybe your far value is larger than you need. Also, increasing the near value helps with z-fighting for objects that are closer to the camera (the ones that are most visible).
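As a rough back-of-the-envelope check, here is that arithmetic spelled out. It uses only the simplified linear model from the example above with hypothetical near/far values; a real perspective depth buffer distributes precision non-linearly:
#include <stdio.h>

int main(void) {
    const float nearPlane  = 500.0f;
    const float farPlane   = 10000.0f;
    const float range      = farPlane - nearPlane;    // 9500 units of depth range
    const float depthSteps = 65536.0f;                // a 16-bit depth buffer has 65536 values
    float stepsPerUnit = depthSteps / range;          // roughly 6.9 depth values per unit of space
    float precision    = 1.0f / stepsPerUnit;         // roughly 0.14 units between representable depths
    // Surfaces closer together than 'precision' are likely to z-fight under this model.
    printf("steps per unit: %.2f, precision: %.3f units\n", stepsPerUnit, precision);
    return 0;
}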
Apparently 32-bit depth buffers aren't supported in OpenGL ES 1.x.
Also, it seems that 16-bit depth buffers aren't supported on iOS, so using GL_DEPTH_COMPONENT16_OES was just behaving as 24-bit, which is why I didn't see any improvement when I used GL_DEPTH_COMPONENT24_OES instead!
I confirmed this by checking GL_DEPTH_BITS after trying to set the depth buffer to 16 bit:
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, backingWidth, backingHeight);
GLint depthBufferBits;
glGetIntegerv(GL_DEPTH_BITS, &depthBufferBits );
NSLog(#"Depth buffer bits: %d", depthBufferBits );
Outputs:
Depth buffer bits: 24
Oh well, at least now I know. Hope this helps someone else.
Standard answers revolve around use of glPolygonOffset. What that does is add an offset to the polygon depth values before comparing to those already in the buffer. The offset is calculated allowing for screen depth and angle, so it's independent of the size of your world and it doesn't affect the identities of the pixels to be painted.
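A minimal sketch of how glPolygonOffset is typically applied in ES 1.1; the factor/units values are arbitrary starting points and the two draw helpers are hypothetical, not from the question:
// Draw the base surface normally.
drawGroundPlane();   // hypothetical helper

// Nudge the depth of the coplanar/decal surface so it reliably wins the depth test.
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(-1.0f, -1.0f);   // negative values pull fragments slightly toward the camera
drawDecalOnGround(); // hypothetical helper

// Restore state so other geometry is unaffected.
glDisable(GL_POLYGON_OFFSET_FILL);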
The issue is deciding when to use it. If your scene is, say, lots of discrete objects with no unifying broad data structure (like a quadtree or a BSP tree), then you're probably going to have to use something like a bucket system to spot when objects are very close (relative to their distance) and give a bump to the closer one. If the problem is internal to individual meshes and you have no higher-level structures, then obviously the problem is more complicated.
At the other end, if your scene is entirely or overwhelmingly static, then a structure like a BSP tree that can do most of the drawing without even needing a depth buffer might be an advantage. At the desperate end you could render back to front with depth writing but no comparisons, then draw the moving objects as an extra layer. In practice that will give you massive overdraw (though a PVS solution would help) compared with front-to-back rendering and modern early depth culling, especially on a deferred tile-based renderer like the PowerVR, so again it's not an easy win.
As a separate idea, is there any way you can simplify distant geometry?

Using CVOpenGLESTextureRef as a render target?

I am trying to figure out how to use CVOpenGLESTextureRef and CVOpenGLESTextureCacheRef to replace using glReadPixels.
I understand how to use them to create a texture from incoming camera images as shown in the RosyWriter and CameraRipple demos. But I can't figure out how to use them to go the other way.
Comments in the header file for the function CVOpenGLESTextureCacheCreateTextureFromImage give the following example:
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
pixelBuffer, NULL, GL_RENDERBUFFER, GL_RGBA8_OES, width, height,
GL_RGBA, GL_UNSIGNED_BYTE, 0, &outTexture);
and this is all the information I can find. How do you use this?
Currently I am doing the following to create my offscreen Frame Buffer Object at the start of the app.
glGenFramebuffersOES(1, &frameBufferHandle);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, frameBufferHandle);
glGenRenderbuffersOES(1, &colorBufferHandle);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorBufferHandle);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGBA8_OES, width, height);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES,
GL_COLOR_ATTACHMENT0_OES,GL_RENDERBUFFER_OES, colorBufferHandle);
Then when I need to write it out to disk I bind it and call glReadPixels.
How would I use CVOpenGLESTextureRef and CVOpenGLESTextureCacheRef instead of the above?
How would I use it on the main render buffer, and not on an offscreen FBO?
Other information that may be pertinent:
I am using OpenGL ES 1.1 on 2.0 capable devices (moving to 2.0 is not an option).
I am already using CVOpenGLESTextureRef and CVOpenGLESTextureCacheRef to display the camera video image on screen.
I am writing out to video using CVPixelBufferRefs and BGRA format.
If I use the main renderbuffer I can call glReadPixels with GL_BGRA_EXT.
If I use an offscreen FBO (for a smaller video size) I have to use RGBA format and do bit swizzling.

OpenGL to video on iPhone

I'm currently working on a project to convert a physics simulation to a video on the iPhone itself.
To do this, I'm presently using two different loops. The first loop runs in the block where the AVAssetWriterInput object polls the EAGLView for more images. The EAGLView provides the images from an array where they are stored.
The other loop is the actual simulation. I've turned off the simulation timer, and am calling the tick myself with a pre-specified time difference every time. Each time a tick gets called, I create a new image in EAGLView's swap-buffers method after the buffers have been swapped. This image is then placed in the array that AVAssetWriter polls.
There is also some miscellaneous code to make sure the array doesn't get too big.
All of this works fine, but is very very slow.
Is there something I'm doing that is, conceptually, causing the entire process to be slower than it could be? Also, does anyone know of a faster way to get an image out of OpenGL than glReadPixels?
Video memory is designed so that it's fast to write and slow to read. That's why I render to a texture instead. Here is the entire method I've created for rendering the scene to a texture (there are some custom containers, but I think it's pretty straightforward to replace them with your own):
-(TextureInf*) makeSceneSnapshot {
// create texture frame buffer
GLuint textureFrameBuffer, sceneRenderTexture;
glGenFramebuffersOES(1, &textureFrameBuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, textureFrameBuffer);
// create texture to render scene to
glGenTextures(1, &sceneRenderTexture);
glBindTexture(GL_TEXTURE_2D, sceneRenderTexture);
// create TextureInf object
TextureInf* new_texture = new TextureInf();
new_texture->setTextureID(sceneRenderTexture);
new_texture->real_width = [self viewportWidth];
new_texture->real_height = [self viewportHeight];
//make sure the texture dimensions are power of 2
new_texture->width = cast_to_power(new_texture->real_width, 2);
new_texture->height = cast_to_power(new_texture->real_height, 2);
//AABB2 = axis aligned bounding box (2D)
AABB2 tex_box;
tex_box.p1.x = 1 - (GLfloat)new_texture->real_width / (GLfloat)new_texture->width;
tex_box.p1.y = 0;
tex_box.p2.x = 1;
tex_box.p2.y = (GLfloat)new_texture->real_height / (GLfloat)new_texture->height;
new_texture->setTextureBox(tex_box);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, new_texture->width, new_texture->height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, sceneRenderTexture, 0);
// check for completeness
if(glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES) {
new_texture->release();
@throw [NSException exceptionWithName: EXCEPTION_NAME
reason: [NSString stringWithFormat: @"failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES)]
userInfo: nil];
new_texture = nil;
} else {
// render to texture
[self renderOneFrame];
}
glDeleteFramebuffersOES(1, &textureFrameBuffer);
//restore default frame and render buffers
glBindFramebufferOES(GL_FRAMEBUFFER_OES, _defaultFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
glEnable(GL_BLEND);
[self updateViewport];
glMatrixMode(GL_MODELVIEW);
return new_texture;
}
Of course, if you're doing snapshots all the time, then you'd better create texture frame and render buffers only once (and allocate memory for them).
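For instance, a sketch of that one-time setup; snapshotFramebuffer, snapshotTexture, texWidth and texHeight are assumed instance variables, not names from the method above:
// One-time setup (e.g. in init), reused for every snapshot:
glGenFramebuffersOES(1, &snapshotFramebuffer);
glGenTextures(1, &snapshotTexture);
glBindTexture(GL_TEXTURE_2D, snapshotTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, snapshotFramebuffer);
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, snapshotTexture, 0);

// Per snapshot: just bind, render, and restore the defaults; no allocation needed.
glBindFramebufferOES(GL_FRAMEBUFFER_OES, snapshotFramebuffer);
[self renderOneFrame];
glBindFramebufferOES(GL_FRAMEBUFFER_OES, _defaultFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);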
One thing to remember is that the GPU is running asynchronously from the CPU, so if you try to do glReadPixels immediately after you finish rendering, you'll have to wait for commands to be flushed to the GPU and rendered before you can read them back.
Instead of waiting synchronously, render snapshots into a queue of textures (using FBOs like Max mentioned). Wait until you've rendered a couple more frames before you dequeue one of the previous frames. I don't know if the iPhone supports fences or sync objects, but if so you could check those to see if rendering has finished before reading the pixels.
You could try using a CADisplayLink object to ensure that your drawing rate and your capture rate correspond to the device's screen refresh rate. You might be slowing down the execution time of the run loop by refreshing and capturing too many times per device screen refresh.
Depending on your app's goals, it might not be necessary for you to capture every frame that you present, so in your selector, you could choose whether or not to capture the current frame.
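For illustration, a minimal sketch of driving rendering (and optional capture) from a CADisplayLink. The selector name, the drawFrame/capture methods, and the shouldCaptureThisFrame flag are assumptions, not code from the question:
// Somewhere in setup, create a display link tied to the screen refresh:
CADisplayLink *displayLink = [CADisplayLink displayLinkWithTarget:self
                                                         selector:@selector(renderAndMaybeCapture:)];
displayLink.frameInterval = 2;   // e.g. draw/capture on every other vsync to reduce load
[displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];

// The selector decides per frame whether a capture is needed:
- (void)renderAndMaybeCapture:(CADisplayLink *)link {
    [self drawFrame];                        // hypothetical existing render method
    if (self.shouldCaptureThisFrame) {       // hypothetical flag: skip frames you don't need
        [self captureCurrentFrameForVideo];  // hypothetical capture method
    }
}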
While the question isn't new, it's not answered yet so I thought I'd pitch in.
glReadPixels is indeed very slow, and therefore cannot be used to record video from an OpenGL application without adversely affecting performance.
We did find a workaround, and have created a free SDK called Everyplay that can record OpenGL-based graphics to a video file, without performance loss. You can check it out at https://developers.everyplay.com/

Configuring an offscreen framebuffer fails the completeness test

I'm trying to create an offscreen framebuffer into which I can do some OpenGL drawing, and then pull the bits out manually. I'm following the instructions here, but in step 4, status is 0 instead of GL_FRAMEBUFFER_COMPLETE_OES.
If I insert a call to glGetError() after every gl call, it returns 0 (GL_NO_ERROR) every time. But, the values of variables do not change during the call. E.g.,
GLuint framebuffer;
glGenFramebuffersOES(1, &framebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
the value of framebuffer does not get altered at all (even when I change it to some arbitrary value and re-execute). It's almost like the GL calls are not actually being made. I'm linking against the OpenGLES framework, and get no compile, link, or run-time errors (or warnings).
I'm at a loss as to what to do next. I've tried continuing on with my drawing, but do not see the results I expect, but at this point I can't tell whether it's because of the above error, or the conversion to a UIImage.
This sounds like you don't have an active GL context when you try to create the FBO.
Edit: For the sake of completeness, here's how to create an OpenGL ES 1.1 context and activate it:
EAGLContext* myContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
[EAGLContext setCurrentContext: myContext];
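With a current context, the framebuffer calls from the question should then behave as expected, and completeness can be verified. A rough sketch (the attachment step is elided here):
GLuint framebuffer = 0;
glGenFramebuffersOES(1, &framebuffer);        // framebuffer should now come back non-zero
glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
// ... attach a colour renderbuffer or texture here ...
GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
if (status != GL_FRAMEBUFFER_COMPLETE_OES) {
    NSLog(@"Framebuffer incomplete: %x", status);
}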

OpenGL ES Simple Undo Last Drawing

I am trying to figure out how to implement a simple "undo" of last drawing action on the iPhone screen. I draw by first preparing the frame buffer:
[EAGLContext setCurrentContext:context];
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
I then prepare the vertex array and draw this way:
glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
glDrawArrays(GL_POINTS, 0, vertexCount);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
How do I simply undo this last action? There has to be a way to save the previous state, or a built-in OpenGL ES function, I would think.
Thanks
Late answer, I know, but in case anyone else comes upon this, I'll post it anyway.
You also have the option of storing the points in an array at every touchesBegan and touchesMoved call, as in:
[currentStroke addObject:[NSValue valueWithCGPoint:point]];
And when touchesEnded, you can move this to another mutable array, such as:
[allPoints addObject:currentStroke];
Then, you can iterate through the allPoints array, passing each subarray to the rendering function (see the sketch below). This method has advantages and disadvantages over the method of storing images. First, it saves on hard drive space... however at a cost of memory. Using GL_POINTS, as you are, you will notice it will take time to redraw your image after you hit undo... however you can undo all the way back to the first touch easily! So, it depends whether you want speed or flexibility... If anyone has a better method to undo, please let me know!
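A rough sketch of the undo-then-redraw idea; renderStroke: is a hypothetical method that turns one stroke's stored points back into a vertex buffer and draws it with GL_POINTS, as in the question:
// Undo: drop the most recent stroke, then redraw everything that remains.
[allPoints removeLastObject];

glClear(GL_COLOR_BUFFER_BIT);
for (NSArray *stroke in allPoints) {
    [self renderStroke:stroke];   // hypothetical helper
}
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];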
You can grab an image from the OpenGL ES context every time you draw something and save it as an image file (in the app's sandbox, e.g. the Documents directory, since the bundle itself is read-only). This saves on the application's run-time memory.
When undo is pressed, you just draw the previously saved image into the context and that's it.
How to grab image from context you can find here: http://www.iphonedevsdk.com/forum/iphone-sdk-development/35281-grab-image-opengl-es-context.html