I'm trying to create an offscreen framebuffer into which I can do some OpenGL drawing, and then pull the bits out manually. I'm following the instructions here, but in step 4, status is 0 instead of GL_FRAMEBUFFER_COMPLETE_OES.
If I insert a call to glGetError() after every GL call, it returns 0 (GL_NO_ERROR) every time. But the values of my variables do not change during the calls. E.g., after
GLuint framebuffer;
glGenFramebuffersOES(1, &framebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
the value of framebuffer does not get altered at all (even when I change it to some arbitrary value and re-execute). It's almost as though the GL calls are not actually being made. I'm linking against the OpenGLES framework and get no compile-time, link-time, or run-time errors (or warnings).
I'm at a loss as to what to do next. I've tried continuing on with my drawing, but I don't see the results I expect, and at this point I can't tell whether that's because of the error above or because of the conversion to a UIImage.
This sounds like you don't have an active GL context when you try to create the FBO.
Edit: For the sake of completeness, here's how to create an OpenGL ES 1.1 context and activate it:
EAGLContext* myContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
[EAGLContext setCurrentContext: myContext];
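Putting that together with the code from the question, a minimal sketch (assuming the OpenGL ES 1.1 OES entry points used above; not tested against the original project) looks like this:
EAGLContext* myContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
[EAGLContext setCurrentContext:myContext];

// Only once a context is current do the glGen*/glBind* calls have any effect.
GLuint framebuffer;
glGenFramebuffersOES(1, &framebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);

// ... attach a color renderbuffer or texture here ...

GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
if (status != GL_FRAMEBUFFER_COMPLETE_OES) {
    NSLog(@"Framebuffer incomplete: 0x%x", status);
}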
I have two UIViews, each with its own separate renderBuffer and frameBuffer. They belong to different ViewControllers. I already have them connected via NSNotificationCenter, so that is all set.
Basically, I just need to render the texture in ClassAView's frameBuffer into ClassBView's frameBuffer. This seems like it should be pretty easy... I tried passing in the texture I have bound in ClassAView:
glBindTexture(GL_TEXTURE_2D, myClassATexture);
Then, after (say) tapping the screen, I try passing the texture over to ClassBView:
// in ClassA:
[classBView addTexture:myClassATexture];
// In ClassB's addTexture: method:
myClassBTexture = newTexture;
// glClear, glBindTexture, etc...
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
[self presentFramebuffer];
But it's resulting in a black screen. I tried to NSLog myClassATexture but it's printing out "0."
Anyway, how would I go about effectively passing along the already rendered texture in ClassA to ClassB?
I'm targeting iOS 5.0 so if there's an easy way to do it that requires iOS 5 I'm all ears. :)
Thanks a bunch!
A texture ID of 0 indicates no texture, so you need to make sure you're properly copying the texture ID to use later.
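As a rough sketch of what that might look like (the method signature is an assumption based on the snippets above, not the original code), ClassB only needs to hold onto the GLuint and bind it when it draws. Keep in mind that a texture name is only valid in the EAGLContext that created it, or in contexts created with the same sharegroup:
// In ClassB (GLuint ivar assumed):
- (void)addTexture:(GLuint)newTexture
{
    myClassBTexture = newTexture;   // should now be non-zero
}

// Later, in ClassB's drawing code:
glBindTexture(GL_TEXTURE_2D, myClassBTexture);
// ... set up vertex/texcoord arrays ...
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
[self presentFramebuffer];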
Is there any faster way to access the frame buffer than using glReadPixels? I need read-only access to a small rectangular rendering area of the frame buffer so I can process the data further on the CPU. Performance is important because I have to perform this operation repeatedly. I have searched the web and found approaches such as using a Pixel Buffer Object and glMapBuffer, but it seems that OpenGL ES 2.0 does not support them.
As of iOS 5.0, there is now a faster way to grab data from OpenGL ES. It isn't readily apparent, but it turns out that the texture cache support added in iOS 5.0 doesn't just work for fast upload of camera frames to OpenGL ES, but it can be used in reverse to get quick access to the raw pixels within an OpenGL ES texture.
You can take advantage of this to grab the pixels for an OpenGL ES rendering by using a framebuffer object (FBO) with an attached texture, with that texture having been supplied from the texture cache. Once you render your scene into that FBO, the BGRA pixels for that scene will be contained within your CVPixelBufferRef, so there will be no need to pull them down using glReadPixels().
This is much, much faster than using glReadPixels() in my benchmarks. I found that on my iPhone 4, glReadPixels() was the bottleneck in reading 720p video frames for encoding to disk. It limited the encoding from taking place at anything more than 8-9 FPS. Replacing this with the fast texture cache reads allows me to encode 720p video at 20 FPS now, and the bottleneck has moved from the pixel reading to the OpenGL ES processing and actual movie encoding parts of the pipeline. On an iPhone 4S, this allows you to write 1080p video at a full 30 FPS.
My implementation can be found within the GPUImageMovieWriter class within my open source GPUImage framework, but it was inspired by Dennis Muhlestein's article on the subject and Apple's ChromaKey sample application (which was only made available at WWDC 2011).
I start by configuring my AVAssetWriter, adding an input, and configuring a pixel buffer input. The following code is used to set up the pixel buffer input:
NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
    [NSNumber numberWithInt:videoSize.width], kCVPixelBufferWidthKey,
    [NSNumber numberWithInt:videoSize.height], kCVPixelBufferHeightKey,
    nil];

assetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
Once I have that, I configure the FBO that I'll be rendering my video frames to, using the following code:
if ([GPUImageOpenGLESContext supportsFastTextureUpload])
{
    CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)[[GPUImageOpenGLESContext sharedImageProcessingOpenGLESContext] context], NULL, &coreVideoTextureCache);
    if (err)
    {
        NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d", err);
    }

    CVPixelBufferPoolCreatePixelBuffer(NULL, [assetWriterPixelBufferInput pixelBufferPool], &renderTarget);

    CVOpenGLESTextureRef renderTexture;
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache, renderTarget,
                                                 NULL, // texture attributes
                                                 GL_TEXTURE_2D,
                                                 GL_RGBA, // opengl format
                                                 (int)videoSize.width,
                                                 (int)videoSize.height,
                                                 GL_BGRA, // native iOS format
                                                 GL_UNSIGNED_BYTE,
                                                 0,
                                                 &renderTexture);

    glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);
}
This pulls a pixel buffer from the pool associated with my asset writer input, creates and associates a texture with it, and uses that texture as a target for my FBO.
Once I've rendered a frame, I lock the base address of the pixel buffer:
CVPixelBufferLockBaseAddress(pixel_buffer, 0);
and then simply feed it into my asset writer to be encoded:
CMTime currentTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime], 120);

if (![assetWriterPixelBufferInput appendPixelBuffer:pixel_buffer withPresentationTime:currentTime])
{
    NSLog(@"Problem appending pixel buffer at time: %lld", currentTime.value);
}
else
{
    // NSLog(@"Recorded pixel buffer at time: %lld", currentTime.value);
}
CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);
if (![GPUImageOpenGLESContext supportsFastTextureUpload])
{
CVPixelBufferRelease(pixel_buffer);
}
Note that at no point here am I reading anything manually. Also, the textures are natively in BGRA format, which is what AVAssetWriters are optimized to use when encoding video, so there's no need to do any color swizzling here. The raw BGRA pixels are just fed into the encoder to make the movie.
Aside from the use of this in an AVAssetWriter, I have some code in this answer that I've used for raw pixel extraction. It also experiences a significant speedup in practice when compared to using glReadPixels(), although less than I see with the pixel buffer pool I use with AVAssetWriter.
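The linked code isn't reproduced here, but the general shape of a CPU read-back from the texture-cache-backed FBO above (a sketch, with error handling omitted) is:
// Make sure the GPU has finished rendering into the texture before touching its bytes.
glFinish();

// The BGRA pixels now live in the CVPixelBufferRef that backs the FBO's texture.
CVPixelBufferLockBaseAddress(renderTarget, 0);
GLubyte *rawPixels = (GLubyte *)CVPixelBufferGetBaseAddress(renderTarget);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(renderTarget);

// ... process rawPixels on the CPU; each row is bytesPerRow bytes wide ...

CVPixelBufferUnlockBaseAddress(renderTarget, 0);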
It's a shame that none of this is documented anywhere, because it provides a huge boost to video capture performance.
Regarding what atisman mentioned about the black screen, I had that issue as well. Really do make sure everything is right with your texture and the rest of your setup. I was trying to capture AIR's OpenGL layer, which I did in the end; the problem was that when I accidentally didn't set "depthAndStencil" to true in the app's manifest, my FBO texture was only half the expected height (the screen was divided in half and mirrored, I guess because of the texture wrap parameters), and my video was black.
That was pretty frustrating, because based on what Brad posted it should have just worked once I had some data in the texture. Unfortunately, that's not the case; everything has to be "right" for it to work, and data in the texture is no guarantee of seeing the same data in the video. Once I added depthAndStencil, my texture came out at full height and I started recording video straight from AIR's OpenGL layer, no glReadPixels or anything :)
So yes, what Brad describes really DOES work without the need to recreate the buffers on every frame; you just need to make sure your setup is right. If you're getting a black screen, try playing with the video/texture sizes or other settings (the setup of your FBO?).
I read in iOS OpenGL ES Logical Buffer Loads that a performance gain can be had by "discarding" your depth buffer after each draw cycle. I tried this, but it's as if my game engine is not rendering any longer. I get glError 1286, or GL_INVALID_FRAMEBUFFER_OPERATION_EXT, when I try to render the next cycle.
I get the feeling I need to initialize or setup the depth buffer each cycle if I'm going to discard it, but I can't seem to find any information on this. Here is how I init the depth buffer (all buffers, actually):
// ---- GENERAL INIT ---- //
// Extract width and height.
int bufferWidth, bufferHeight;
glGetRenderbufferParameteriv(GL_RENDERBUFFER,
GL_RENDERBUFFER_WIDTH, &bufferWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER,
GL_RENDERBUFFER_HEIGHT, &bufferHeight);
// Create a depth buffer that has the same size as the color buffer.
glGenRenderbuffers(1, &m_depthRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, m_depthRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24_OES, GAMESTATE->GetViewportSize().x, GAMESTATE->GetViewportSize().y);
// Create the framebuffer object.
GLuint framebuffer;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_RENDERBUFFER, m_colorRenderbuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
GL_RENDERBUFFER, m_depthRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, m_colorRenderbuffer);
And here is what I'm trying to do to discard the depth buffer at the end of each draw cycle:
// Discard the depth buffer
const GLenum discards[] = {GL_DEPTH_ATTACHMENT, GL_COLOR_ATTACHMENT0};
glBindFramebuffer(GL_FRAMEBUFFER, m_depthRenderbuffer);
glDiscardFramebufferEXT(GL_FRAMEBUFFER,1,discards);
I call that immediately following all of my draw calls and...
[m_context presentRenderbuffer:GL_RENDERBUFFER];
Any ideas? Any info someone could point me to? I tried reading through Apple's guide on the subject (where I got the original idea), http://developer.apple.com/library/ios/#documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/WorkingwithEAGLContexts/WorkingwithEAGLContexts.html, but it doesn't seem to work quite right for me.
Your call to glDiscardFramebufferEXT(GL_FRAMEBUFFER,1,discards) says that you are discarding just one framebuffer attachment; however, your discards array includes two: GL_DEPTH_ATTACHMENT and GL_COLOR_ATTACHMENT0.
Try changing it to:
glDiscardFramebufferEXT(GL_FRAMEBUFFER, 2, discards);
In fact, you say that you are discarding these framebuffer attachments at the end of the draw cycle, but directly before [m_context presentRenderbuffer:GL_RENDERBUFFER];. You are discarding the colour renderbuffer attachment that you need in order to present the renderbuffer - perhaps try just discarding the depth attachment, as this is no longer needed at this point.
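In code, that suggestion looks roughly like this (a sketch reusing the names from the question):
// Discard only the depth attachment; the colour attachment is still needed for presentation.
const GLenum discards[] = { GL_DEPTH_ATTACHMENT };
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);            // bind the framebuffer object itself, not a renderbuffer
glDiscardFramebufferEXT(GL_FRAMEBUFFER, 1, discards);
glBindRenderbuffer(GL_RENDERBUFFER, m_colorRenderbuffer);  // present the (still intact) colour buffer
[m_context presentRenderbuffer:GL_RENDERBUFFER];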
You only need to initialise your buffers once, not every draw cycle. The glDiscardFramebufferEXT() doesn't actually delete your framebuffer attachment - it is simply a hint to the API to say that the contents of the renderbuffer are not needed in that draw cycle after the discard command completes. From Apple's OpenGL ES Programming Guide for iOS:
A discard operation is defined by the EXT_discard_framebuffer extension and is available on iOS 4.0 and later. Discard operations should be omitted when your application is running on earlier versions of iOS, but included whenever they are available. A discard is a performance hint to OpenGL ES; it tells OpenGL ES that the contents of one or more renderbuffers are not used by your application after the discard command completes. By hinting to OpenGL ES that your application does not need the contents of a renderbuffer, the data in the buffers can be discarded or expensive tasks to keep the contents of those buffers updated can be avoided.
I'm currently working on a project to convert a physics simulation to a video on the iPhone itself.
To do this, I'm presently using two different loops. The first loop runs in the block where the AVAssetWriterInput object polls the EAGLView for more images. The EAGLView provides the images from an array where they are stored.
The other loop is the actual simulation. I've turned off the simulation timer and am calling the tick myself with a pre-specified time difference each time. Every time a tick is called, I create a new image in EAGLView's swap-buffers method after the buffers have been swapped. This image is then placed in the array that the AVAssetWriter polls.
There is also some miscellaneous code to make sure the array doesn't get too big.
All of this works fine, but it is very, very slow.
Is there something I'm doing that is, conceptually, causing the entire process to be slower than it could be? Also, does anyone know of a faster way to get an image out of OpenGL than glReadPixels?
Video memory is designed so that it's fast to write and slow to read. That's why I render to a texture instead. Here is the entire method that I've created for rendering the scene to a texture (there are some custom containers, but I think it's pretty straightforward to replace them with your own):
-(TextureInf*) makeSceneSnapshot {
    // create texture frame buffer
    GLuint textureFrameBuffer, sceneRenderTexture;

    glGenFramebuffersOES(1, &textureFrameBuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, textureFrameBuffer);

    // create texture to render scene to
    glGenTextures(1, &sceneRenderTexture);
    glBindTexture(GL_TEXTURE_2D, sceneRenderTexture);

    // create TextureInf object
    TextureInf* new_texture = new TextureInf();
    new_texture->setTextureID(sceneRenderTexture);
    new_texture->real_width = [self viewportWidth];
    new_texture->real_height = [self viewportHeight];

    // make sure the texture dimensions are power of 2
    new_texture->width = cast_to_power(new_texture->real_width, 2);
    new_texture->height = cast_to_power(new_texture->real_height, 2);

    // AABB2 = axis aligned bounding box (2D)
    AABB2 tex_box;
    tex_box.p1.x = 1 - (GLfloat)new_texture->real_width / (GLfloat)new_texture->width;
    tex_box.p1.y = 0;
    tex_box.p2.x = 1;
    tex_box.p2.y = (GLfloat)new_texture->real_height / (GLfloat)new_texture->height;
    new_texture->setTextureBox(tex_box);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, new_texture->width, new_texture->height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, sceneRenderTexture, 0);

    // check for completeness
    if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES) {
        new_texture->release();
        @throw [NSException exceptionWithName: EXCEPTION_NAME
                                       reason: [NSString stringWithFormat: @"failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES)]
                                     userInfo: nil];
        new_texture = nil;
    } else {
        // render to texture
        [self renderOneFrame];
    }

    glDeleteFramebuffersOES(1, &textureFrameBuffer);

    // restore default frame and render buffers
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, _defaultFramebuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
    glEnable(GL_BLEND);
    [self updateViewport];
    glMatrixMode(GL_MODELVIEW);

    return new_texture;
}
Of course, if you're taking snapshots all the time, then you'd be better off creating the texture, framebuffer, and renderbuffers only once (and allocating their memory only once), for example as sketched below.
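Something along these lines (a sketch only; the ivars and the textureWidth/textureHeight variables are assumptions, not part of the method above) creates the objects lazily and reuses them:
// Hypothetical ivars, created once: GLuint _snapshotFramebuffer, _snapshotTexture;
if (_snapshotFramebuffer == 0) {
    glGenFramebuffersOES(1, &_snapshotFramebuffer);
    glGenTextures(1, &_snapshotTexture);
    glBindTexture(GL_TEXTURE_2D, _snapshotTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // textureWidth/textureHeight: your (power-of-two) snapshot size
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, _snapshotFramebuffer);
    glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, _snapshotTexture, 0);
} else {
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, _snapshotFramebuffer);
}
[self renderOneFrame];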
One thing to remember is that the GPU is running asynchronously from the CPU, so if you try to do glReadPixels immediately after you finish rendering, you'll have to wait for commands to be flushed to the GPU and rendered before you can read them back.
Instead of waiting synchronously, render snapshots into a queue of textures (using FBOs as Max mentioned). Wait until you've rendered a couple more frames before you dequeue one of the previous frames. I don't know if the iPhone supports fences or sync objects, but if so, you could check those to see whether rendering has finished before reading the pixels.
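For what it's worth, iOS does expose the APPLE_fence extension (check the GL_EXTENSIONS string to be sure on your device), so the "check before reading" idea can be sketched like this (the fence, size, and buffer names here are illustrative, not from the question):
// Right after rendering a snapshot into its FBO, drop a fence into the command stream:
GLuint snapshotFence;
glGenFencesAPPLE(1, &snapshotFence);
glSetFenceAPPLE(snapshotFence);

// A few frames later, read back only if the GPU has already passed the fence:
if (glTestFenceAPPLE(snapshotFence)) {
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);   // width/height/pixels are yours
    glDeleteFencesAPPLE(1, &snapshotFence);
}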
You could try using a CADisplayLink object to ensure that your drawing rate and your capture rate correspond to the device's screen refresh rate. You might be slowing down the run loop by refreshing and capturing too many times per screen refresh.
Depending on your app's goals, it might not be necessary for you to capture every frame that you present, so in your selector, you could choose whether or not to capture the current frame.
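A sketch of that setup (the render/capture method names and the frameCount ivar are placeholders, not from the question):
// In your view or view controller setup:
CADisplayLink *displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(drawFrame:)];
displayLink.frameInterval = 1;   // fire once per screen refresh
[displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];

// The selector, elsewhere in the same class:
- (void)drawFrame:(CADisplayLink *)link
{
    [self renderScene];                  // placeholder for your drawing code
    if (frameCount++ % 2 == 0) {         // e.g. capture only every other presented frame
        [self captureFrame];             // placeholder for your glReadPixels/snapshot code
    }
}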
While the question isn't new, it's not answered yet so I thought I'd pitch in.
glReadPixels is indeed very slow, and therefore cannot be used to record video from an OpenGL application without adversely affecting performance.
We did find a workaround, and have created a free SDK called Everyplay that can record OpenGL-based graphics to a video file, without performance loss. You can check it out at https://developers.everyplay.com/
I am trying to figure out how to implement a simple "undo" of last drawing action on the iPhone screen. I draw by first preparing the frame buffer:
[EAGLContext setCurrentContext:context];
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
I then prepare the vertex array and draw this way:
glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
glDrawArrays(GL_POINTS, 0, vertexCount);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
How do I simply undo this last action? There has to be a way to save the previous state, or a built-in OpenGL ES function for this, I would think.
Thanks
Late answer, I know, but in case anyone else comes upon this, I'll post it anyway.
You also have the option of storing the points in an array at every touchesBegan and touchesMoved call, as in:
[currentStroke addObject:[NSValue valueWithCGPoint:point]];
And when touchesEnded is called, you can move this stroke into another mutable array, such as:
[allPoints addObject:currentStroke];
Then you can iterate through the allPoints array, passing each subarray to the rendering function. This method has advantages and disadvantages compared with storing images: it saves hard drive space, but at the cost of memory. Using GL_POINTS, as you are, you will notice that it takes time to redraw your image after you hit undo, but you can easily undo all the way back to the first touch. So it depends on whether you want speed or flexibility. If anyone has a better way to undo, please let me know!
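The undo step itself is then just dropping the last stroke and redrawing everything, roughly like this (renderStroke: is a placeholder for your existing point-rendering code):
- (void)undo
{
    if ([allPoints count] > 0) {
        [allPoints removeLastObject];            // forget the most recent stroke
    }
    glClear(GL_COLOR_BUFFER_BIT);                // back to a blank canvas
    for (NSArray *stroke in allPoints) {
        [self renderStroke:stroke];              // rebuilds the vertex buffer and calls glDrawArrays
    }
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}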
You can grab an image from the OpenGL ES context every time you draw something and save it to disk (e.g. in the app's Documents directory) as an image file. This saves on the application's runtime memory.
When undo is pressed, you just draw the previously saved image into the context, and that's it.
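The bookkeeping can be sketched like this (the snapshot and draw-back helpers are placeholders; grabbing the image itself is covered by the link below):
// After each completed stroke, snapshot the context and push the file path onto a stack:
UIImage *snapshot = [self imageFromCurrentContext];   // placeholder, see the linked post
NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
NSString *path = [docs stringByAppendingPathComponent:
                  [NSString stringWithFormat:@"stroke_%lu.png", (unsigned long)[undoStack count]]];
[UIImagePNGRepresentation(snapshot) writeToFile:path atomically:YES];
[undoStack addObject:path];                            // undoStack is an NSMutableArray ivar

// When undo is pressed, pop the latest snapshot and redraw the previous one:
[undoStack removeLastObject];
UIImage *previous = [undoStack count] ? [UIImage imageWithContentsOfFile:[undoStack lastObject]] : nil;
[self drawImageIntoContext:previous];                  // placeholder: draws the image as a textured quad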
How to grab image from context you can find here: http://www.iphonedevsdk.com/forum/iphone-sdk-development/35281-grab-image-opengl-es-context.html