I'm currently working on a camera app for the iPhone in which I take the camera input, convert that to an OpenGL texture and then map it onto a 3D Object (currently a plane in perspective projection, for the sake of simplicity).
After mapping the camera input onto this 3D plane, I render the 3D scene to a texture, which is then used as the texture for a plane in orthographic space (to apply additional filters in my fragment shader).
As long as I keep everything in orthographic projection, the resolution of my render texture is pretty high. But from the moment I put my plane in perspective projection the resolution of my render texture is very low.
Comparison:
As you can see, the last image has a very low resolution compared to the other two. So I'm guessing I'm doing something wrong.
I'm currently not using multisampling on any of my framebuffers, and I doubt I need it to fix this problem, since the orthographic scene works perfectly.
The textures I render into are 2048x2048 (they will eventually be output as images to the iPhone camera roll).
Here are some parts of my source code that I think might be relevant:
Code to create the framebuffer that gets outputted to the screen:
// Color renderbuffer
glGenRenderbuffers(1, &colorRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderBuffer);
[context renderbufferStorage:GL_RENDERBUFFER
                fromDrawable:(CAEAGLLayer*)glView.layer];

// Depth renderbuffer
glGenRenderbuffers(1, &depthRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);

// Framebuffer
glGenFramebuffers(1, &defaultFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, defaultFrameBuffer);

// Associate renderbuffers with framebuffer
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, colorRenderBuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRenderbuffer);
TextureRenderTarget class:
void TextureRenderTarget::init()
{
    // Color renderbuffer
    glGenRenderbuffers(1, &colorRenderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderBuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8_OES,
                          width, height);

    // Depth renderbuffer
    glGenRenderbuffers(1, &depthRenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16,
                          width, height);

    // Framebuffer
    glGenFramebuffers(1, &framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

    // Associate renderbuffers with framebuffer
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, colorRenderBuffer);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRenderbuffer);

    // Texture and associate with framebuffer
    texture = new RenderTexture(width, height);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, texture->getHandle(), 0);

    // Check for errors
    checkStatus();
}
void TextureRenderTarget::bind() const
{
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderBuffer);
}

void TextureRenderTarget::unbind() const
{
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glBindRenderbuffer(GL_RENDERBUFFER, 0);
}
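For context, a typical frame with this class looks roughly like the following sketch (the draw call is a placeholder, not part of the original code; width and height are the members used in init()):

// Hypothetical usage of TextureRenderTarget: render the perspective
// scene offscreen, then sample the result in the orthographic pass.
renderTarget->bind();
glViewport(0, 0, width, height);                     // match the FBO attachments
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawPerspectivePlane();                              // placeholder draw call
renderTarget->unbind();
// The target's texture is then bound and drawn on the orthographic plane.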
And finally, a snippet on how I create the render texture and fill it with pixels:
void Texture::generate()
{
    // Create texture to render into
    glActiveTexture(unit);
    glGenTextures(1, &handle);
    glBindTexture(GL_TEXTURE_2D, handle);

    // Configure texture
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}

void Texture::setPixels(const GLvoid* pixels)
{
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA,
                 GL_UNSIGNED_BYTE, pixels);
    updateMipMaps();
}

void Texture::updateMipMaps() const
{
    glBindTexture(GL_TEXTURE_2D, handle);
    glGenerateMipmap(GL_TEXTURE_2D);
}

void Texture::bind(GLenum unit)
{
    this->unit = unit;
    if(unit != -1)
    {
        glActiveTexture(unit);
        glBindTexture(GL_TEXTURE_2D, handle);
    }
    else
    {
        cout << "Texture::bind -> Couldn't activate unit -1" << endl;
    }
}

void Texture::unbind()
{
    glBindTexture(GL_TEXTURE_2D, 0);
}
I would assume that texture mapping is not exact with perspective projection.
Could you replace the camera roll image with a checkerboard (a chess grid with a 1px cell size)? Then compare the rendered checkers in the orthographic and perspective projections: the grid should not be blurred. If it is, then the problem is in the projection matrix, which needs some bias for direct texel-to-pixel mapping.
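As a sketch, such a test texture could be generated like this (the Texture wrapper and its setPixels method are the ones shown above; the buffer handling is illustrative):

// Illustrative sketch: fill a buffer with a 1px black/white checkerboard
// and upload it through the Texture class shown above.
const int width = 2048, height = 2048;          // matches the render texture size
GLubyte* pixels = (GLubyte*)malloc(width * height * 4);
for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
        GLubyte c = ((x + y) & 1) ? 255 : 0;    // alternate every pixel
        GLubyte* p = &pixels[(y * width + x) * 4];
        p[0] = p[1] = p[2] = c;                 // greyscale value
        p[3] = 255;                             // opaque alpha
    }
}
texture->setPixels(pixels);                     // Texture::setPixels from above
free(pixels);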
If you have a device, you can step through the rendering with the OpenGL ES frame capture feature in Xcode; there you will see exactly when the image becomes blurred.
As for mipmapping, it's not a good idea to use it for textures created on the fly.
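If you drop the mipmaps, the sampler state has to match; a minimal sketch, based on the Texture class above (handle is its texture name):

// Render-to-texture without mipmaps: use a non-mipmapped min filter so
// the texture stays complete, and skip glGenerateMipmap after each render.
glBindTexture(GL_TEXTURE_2D, handle);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);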
The blurring may be caused by the plane being positioned at half pixels in screen coordinates. Since going from orthographic to perspective transform changes the position of the plane, the plane will likely not be positioned at the same screen coordinate between the two transforms.
Similar blurring occurs when you move a UIImageView from frame origin (0.0,0.0) to (0.5,0.5) on a standard-res display, or to (0.25,0.25) on a retina display.
The fact that your texture is very high-res may not help in this case, since the number of pixels actually sampled is bounded.
Try moving the plane a small distance in screen x,y coordinates and see if the blurring disappears.
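As a hypothetical test (assuming fixed-function ES 1.x transforms; with shaders the same offset would be folded into the modelview matrix):

// Nudge the plane slightly before drawing it and check whether the
// blurring changes; a change points to pixel-alignment problems.
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef(0.01f, 0.01f, 0.0f);   // small offset in x and y
drawPlane();                        // placeholder for the plane draw call
glPopMatrix();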
I finally solved my problem by merging the first and second step of my rendering process.
The first step cropped and flipped the camera texture and rendered the result to a new texture. That texture was then mapped onto a 3D plane, and the result was rendered to yet another texture.
I merged these two steps by changing the texture coordinates of my 3D plane so that I can use the original camera texture directly onto this plane.
I don't know the exact reason for the quality loss between the two rendered textures, but as a hint for the future: don't render to a texture and then reuse that result for another render-to-texture pass. Merging everything into a single pass is better for performance and also avoids color-shifting issues.
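As an illustration of the merged approach, the crop and flip can live entirely in the plane's texture coordinates (the crop bounds here are hypothetical, normalized values):

// Sample the camera texture directly on the 3D plane: the crop and the
// vertical flip are expressed in the texture coordinates instead of a
// separate render pass. Vertex order assumed: bottom-left, bottom-right,
// top-left, top-right (a GL_TRIANGLE_STRIP quad).
GLfloat cropX = 0.0f, cropY = 0.0f;     // hypothetical crop origin
GLfloat cropW = 1.0f, cropH = 0.75f;    // hypothetical crop size
GLfloat texCoords[] = {
    cropX,         cropY + cropH,       // bottom-left samples the image top (flip)
    cropX + cropW, cropY + cropH,
    cropX,         cropY,
    cropX + cropW, cropY
};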
Related
I am attempting to render to a texture in ios with multisampling enabled and then use that texture in my final output. Is this possible?
So far I have only gotten black textures or aliased images. The code I am using is:
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA4, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glGenRenderbuffers(1, &colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
//glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, width, height);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_RGBA4, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
glGenRenderbuffers(1, &depthRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
//glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, self.view.bounds.size.height);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT16, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderbuffer);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if(status != GL_FRAMEBUFFER_COMPLETE) {
    NSLog(@"failed to make complete framebuffer object %x", status);
}
// Render my scene
glBindFramebuffer( GL_FRAMEBUFFER, framebuffer );
glViewport(0,0,width,height);
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
// Draw scene
// Then bind default framebuffer
glBindFramebuffer( GL_FRAMEBUFFER, 1 );
// Draw other things
// Now resolve the multisampling and draw texture
glResolveMultisampleFramebufferAPPLE();
glBindTexture( GL_TEXTURE_2D, texture );
// Draw with texture
This code does not work. It fails if I make the depth renderbuffer multisampled. If I just use a non-multisampled depth buffer then it works, but produces an aliased image.
Anyone know where I am going wrong?
Thanks!
Yes! I found what I was doing wrong. I wrongly thought that I could have the following:

Framebuffer:
- Multisampled colour renderbuffer attached to a texture
- Multisampled depth renderbuffer

But you cannot do this. D: You have to have the following:

Multisampled framebuffer:
- Multisampled colour renderbuffer (not attached to a texture)
- Multisampled depth renderbuffer

Normal framebuffer:
- Colour renderbuffer attached to a texture. This is what will be written to by glResolveMultisampleFramebufferAPPLE() and what we will use to render the result.
- No depth buffer.

I.e. you have to copy the results of the multisampled render to a whole new framebuffer.
Some code:
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA4, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGenFramebuffers(1, &resolved_framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, resolved_framebuffer);
glGenRenderbuffers(1, &resolvedColorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, resolvedColorRenderbuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glGenRenderbuffers(1, &colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_RGBA8_OES, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);
glGenRenderbuffers(1, &depthRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT16, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderbuffer);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if(status != GL_FRAMEBUFFER_COMPLETE) {
    NSLog(@"failed to make complete framebuffer object %x", status);
}
// Render my scene
glBindFramebuffer( GL_FRAMEBUFFER, framebuffer );
glViewport(0,0,width,height);
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
// Draw scene
// Then bind default framebuffer
glBindFramebuffer( GL_FRAMEBUFFER, 1 );
// Draw other things
// Now resolve the multisampling into the other fbo
glBindFramebuffer( GL_READ_FRAMEBUFFER_APPLE, framebuffer );
glBindFramebuffer( GL_DRAW_FRAMEBUFFER_APPLE, resolved_framebuffer );
glResolveMultisampleFramebufferAPPLE();
glBindTexture( GL_TEXTURE_2D, texture );
// Draw with texture
Thanks Goz, you got me in the right direction!
I assume you've been working from the sample on this page?
Remove the glFramebufferTexture2D call, as this may be causing the multisample renderbuffer to detach, and hence you have a multisampled back buffer and a single-sampled renderbuffer. Furthermore, creating a single-sampled depth buffer will solve your issues, as it will not be paired with that single-sampled renderbuffer.
Edit: When do you get the error? On the line creating the renderbuffer? If so you may be best off trying it exactly as in the link I posted (which I assume you are working from).
i.e.
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_RGBA8_OES, width, height);
When I try to attach a texture to a framebuffer, glCheckFramebufferStatus reports GL_FRAMEBUFFER_UNSUPPORTED for certain texture sizes. I've tested on both a 2nd and 4th generation iPod Touch. The sizes of texture that fail are not identical between the two models.
Here are some interesting results:
2nd generation - 8x8 failed, 16x8 failed, but 8x16 succeeded!
4th generation - 8x8 succeeded, 8x16 succeeded, but 16x8 failed!
Here's some code I used to test attaching textures of different sizes:
void TestFBOTextureSize(int width, int height)
{
    GLuint framebuffer, texture;

    // Create framebuffer
    glGenFramebuffersOES(1, &framebuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);

    // Create texture
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glBindTexture(GL_TEXTURE_2D, 0);

    // Attach texture to framebuffer
    glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, texture, 0);
    GLenum error = glGetError();
    GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
    if (status == GL_FRAMEBUFFER_COMPLETE_OES)
        NSLog(@"%dx%d Succeeded!", width, height);
    else
        NSLog(@"%dx%d Failed: %x %x %d %d", width, height, status, error, texture, framebuffer);

    // Cleanup
    glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, 0, 0);
    glDeleteTextures(1, &texture);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);
    glDeleteFramebuffersOES(1, &framebuffer);
}
void TestFBOTextureSizes()
{
    int width, height;
    for (width = 1; width <= 1024; width <<= 1)
    {
        for (height = 1; height <= 1024; height <<= 1)
            TestFBOTextureSize(width, height);
    }
}
It seems that as long as both dimensions are at least 16 pixels then everything works ok on both devices. The thing that bothers me, though, is that I haven't seen anything written about texture size requirements for attaching to a framebuffer object. One solution, for now, would be to restrict my texture sizes to be at least 16 pixels, but might this break in the future or already be broken on some device I haven't tried? I could also perform this test code at startup in order to dynamically figure out which texture sizes are allowed, but that seems a bit hokey.
I have experienced a similar problem when trying to render to a texture of size 480x320 (full screen without resolution scaling) on an iPod touch 4. When I call glCheckFramebufferStatus() it returns GL_FRAMEBUFFER_UNSUPPORTED. My code:
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 480, 320, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, 0);
glBindTexture(GL_TEXTURE_2D, 0);
glGenFramebuffers(1, &frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    // report error
}
Investigating this problem, I found that the texture has to be a valid OpenGL ES texture object for use in the render-to-texture mechanism. This means the texture should be ready to be bound and used. So to fix the error I had to set some texture parameters. Because I use a non-POT texture, I had to set GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T to GL_CLAMP_TO_EDGE (the default is GL_REPEAT), and GL_TEXTURE_MIN_FILTER to GL_NEAREST or GL_LINEAR (the default is GL_NEAREST_MIPMAP_LINEAR).
I couldn't find what the problem with 16x8 is, but 16x9 and 17x8 work fine once these parameters are set. I hope this information is helpful for you.
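Concretely, the fix amounts to something like this (texture is the handle from the snippet above):

// Make the NPOT texture complete before attaching it to the FBO:
// clamp both wrap modes and avoid the mipmapped default min filter.
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_2D, 0);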
I am writing a GLPaint-esque drawing application for the iPad, however I have hit a stumbling block. Specifically, I am trying to implement two things at the moment:
1) A background image that can be drawn onto.
2) The ability to draw temporary shapes, e.g. you might draw a line, but the final shape would only be committed once the finger has lifted.
For the background image, I understand the idea is to draw the image into a VBO and draw it right before every line drawing. This is fine, but now I need to add the ability to draw temporary shapes... with kEAGLDrawablePropertyRetainedBacking set to YES (as in GLPaint) the temporary shapes are obviously not temporary! Turning the retained backing property to NO works great for the temporary objects, but now my previous freehand lines aren't kept.
What is the best approach here? Should I be looking to use more than one EAGLLayer? All the documentation and tutorials I've found seem to suggest that most things should be possible with a single layer. They also say that retained backing should pretty much always be set to NO. Is there a way to work my application in such a configuration? I tried storing every drawing point into a continually expanding vertex array to be redrawn each frame, but due to the sheer number of sprites being drawn this isn't working.
I would really appreciate any help on this one, as I've scoured online and found nothing!
I've since found the solution to this problem. The best way appears to be to use custom framebuffer objects and render-to-texture. I hadn't heard of this before asking the question, but it looks like an incredibly useful tool for the OpenGLer's toolkit!
For those that may be wanting to do something similar, the idea is that you create a FBO and attach a texture to it (instead of a renderbuffer). You can then bind this FBO and draw to it like any other, the only difference being that the drawings are rendered off-screen. Then all you need to do to display the texture is to bind the main FBO and draw the texture to it (using a quad).
So for my implementation, I used two different FBOs with a texture attached to each - one for the "retained" image (for freehand drawing), and the other for the "scratch" image (for temporary drawings). Each time a frame is rendered, I first draw a background texture (in my case I just used the Texture2D class), then draw the retained texture, and finally the scratch texture if required. When drawing a temporary shape everything is rendered to the scratch texture, and this is cleared at the start of every frame. Once it is finished, the scratch texture is drawn to the retained texture.
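The commit step itself isn't shown in the snippets below, but it's essentially one more textured-quad draw targeting the retained FBO; a sketch, with scratchTexture as an assumed handle:

// Hypothetical sketch: once the shape is final, draw the scratch
// texture into the retained FBO, then clear the scratch texture.
glBindFramebufferOES(GL_FRAMEBUFFER_OES, retainedFramebuffer);
glViewport(0, 0, 1024, 1024);                  // retained texture size
glBindTexture(GL_TEXTURE_2D, scratchTexture);  // assumed scratch texture name
// ... draw a full-size textured quad here, as in drawRetainedTexture below ...
glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);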
Here are a few snippets of code that might be of use to somebody:
1) Create the framebuffers (I have only shown a couple here to save space!):
// ---------- DEFAULT FRAMEBUFFER ---------- //
// Create framebuffer.
glGenFramebuffersOES(1, &viewFramebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
// Create renderbuffer.
glGenRenderbuffersOES(1, &viewRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
// Get renderbuffer storage and attach to framebuffer.
[context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:layer];
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, viewRenderbuffer);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
// Check for completeness.
status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
if (status != GL_FRAMEBUFFER_COMPLETE_OES) {
    NSLog(@"Failed to make complete framebuffer object %x", status);
    return NO;
}
// Unbind framebuffer.
glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);
// ---------- RETAINED FRAMEBUFFER ---------- //
// Create framebuffer.
glGenFramebuffersOES(1, &retainedFramebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, retainedFramebuffer);
// Create the texture.
glColor4f(0.0f, 0.0f, 0.0f, 0.0f);
glGenTextures(1, &retainedTexture);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, retainedTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 1024, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_2D, 0);
// Attach the texture to the framebuffer's color attachment point.
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, retainedTexture, 0);
// Check for completeness.
status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
if (status != GL_FRAMEBUFFER_COMPLETE_OES) {
    NSLog(@"Failed to make complete framebuffer object %x", status);
    return NO;
}
// Unbind framebuffer.
glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);
2) Draw to the render-to-texture FBO:
// Ensure that we are drawing to the current context.
[EAGLContext setCurrentContext:context];
glBindFramebufferOES(GL_FRAMEBUFFER_OES, retainedFramebuffer);
glViewport(0, 0, 1024, 1024);
// DRAWING CODE HERE
3) Render the various textures to the main FBO, and present:
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
glViewport(0, 0, backingWidth, backingHeight);
glClearColor(1.0f, 1.0f, 1.0f, 1.0f); // Clear to white.
glClear(GL_COLOR_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
[self drawBackgroundTexture];
[self drawRetainedTexture];
[self drawScratchTexture];
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
For example, drawing the retained texture using [self drawRetainedTexture] would use the following code:
// Bind the texture.
glBindTexture(GL_TEXTURE_2D, retainedTexture);
// Destination coords.
GLfloat retainedVertices[] = {
    0.0,          backingHeight, 0.0,
    backingWidth, backingHeight, 0.0,
    0.0,          0.0,           0.0,
    backingWidth, 0.0,           0.0
};

// Source coords.
GLfloat retainedTexCoords[] = {
    0.0, 1.0,
    1.0, 1.0,
    0.0, 0.0,
    1.0, 0.0
};
// Draw the texture.
glVertexPointer(3, GL_FLOAT, 0, retainedVertices);
glTexCoordPointer(2, GL_FLOAT, 0, retainedTexCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Unbind the texture.
glBindTexture(GL_TEXTURE_2D, 0);
A lot of code, but I hope that helps somebody. It certainly had me stumped for a while!
I'm trying to render to a texture, then draw that texture to the screen using OpenGL ES on the iPhone. I'm using this question as a starting point, and doing the drawing in a subclass of Apple's demo EAGLView.
Instance variables:
GLuint textureFrameBuffer;
Texture2D * texture;
To initialize the frame buffer and texture, I'm doing this:
GLint oldFBO;
glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, &oldFBO);
// initWithData results in a white image on the device (works fine in the simulator)
texture = [[Texture2D alloc] initWithImage:[UIImage imageNamed:@"blank320.png"]];
// create framebuffer
glGenFramebuffersOES(1, &textureFrameBuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, textureFrameBuffer);
// attach renderbuffer
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, texture.name, 0);
if(glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES)
    NSLog(@"incomplete");
glBindFramebufferOES(GL_FRAMEBUFFER_OES, oldFBO);
Now, if I simply draw my scene to the screen as usual, it works fine:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// draw some triangles, complete with vertex normals
[contentDelegate draw];
[self swapBuffers];
But, if I render to 'textureFrameBuffer', then draw 'texture' to the screen, the resulting image is upside down and "inside out". That is, it looks as though the normals of the 3d objects are pointing inward rather than outward -- the frontmost face of each object is transparent, and I can see the inside of the back face. Here's the code:
GLint oldFBO;
glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, &oldFBO);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, textureFrameBuffer);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// draw some polygons
[contentDelegate draw];
glBindFramebufferOES(GL_FRAMEBUFFER_OES, oldFBO);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
glColor4f(1, 1, 1, 1);
[texture drawInRect:CGRectMake(0, 0, 320, 480)];
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_2D);
[self swapBuffers];
I can flip the image right-side up easily enough by reordering the (glTexCoordPointer) coordinates accordingly (in Texture2D's drawInRect method), but that doesn't solve the "inside-out" issue.
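For reference, the flip just swaps the V coordinates; assuming a triangle-strip quad ordered bottom-left, bottom-right, top-left, top-right (my assumption about drawInRect's vertex order):

// Flipped V coordinates: FBO rendering places the origin at the
// bottom-left, so inverting V draws the offscreen image upright.
GLfloat flippedTexCoords[] = {
    0.0f, 1.0f,   // this vertex samples the top row of the texture
    1.0f, 1.0f,
    0.0f, 0.0f,
    1.0f, 0.0f
};
glTexCoordPointer(2, GL_FLOAT, 0, flippedTexCoords);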
I tried replacing the Texture2D texture with a manually created OpenGL texture, and the result was the same. Drawing a Texture2D loaded from a PNG image works fine.
As for drawing the objects, each vertex has a unit normal specified, and GL_NORMALIZE is enabled.
glVertexPointer(3, GL_FLOAT, 0, myVerts);
glNormalPointer(GL_FLOAT, 0, myNormals);
glDrawArrays(GL_TRIANGLES, 0, numVerts);
Everything draws fine when it's rendered to the screen; GL_DEPTH_TEST is enabled and is working great.
Any suggestions as to how to fix this? Thanks!
The interesting part of this is that you're seeing a different result when drawing directly to the backbuffer. Since you're on the iPhone platform, you're always drawing to an FBO, even when you're drawing to the backbuffer.
Make sure that you have a depth buffer attached to your offscreen FBO. In your initialization code, you might want to add the following snippet right after the glBindFramebufferOES(...).
// attach depth buffer
GLuint depthRenderbuffer;
glGenRenderbuffersOES(1, &depthRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, width, height);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);
Using a couple of posts here in StackOverflow, I created what is supposed to be a simple render-to-texture using a framebuffer.
The problem here is that it's not working. Something is broken in the mix, as my final texture is just a white square. I am not getting any gl errors whatsoever. Here is my code.
Declare instance variables.
GLuint texture;
GLuint textureFrameBuffer;
Generate the texture and framebuffer.
glGetError();
//Generate the texture that we will draw to (saves us a lot of processor).
glEnable(GL_TEXTURE_2D);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// Use OpenGL ES to generate a name for the texture.
// Pass by reference so that our texture variable gets set.
glGenTextures(1, &texture);
// Bind the texture name.
glBindTexture(GL_TEXTURE_2D, texture);
// Specify a 2D texture image, providing a pointer to the image data in memory.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 128, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
//Create a frame buffer to draw to. This will allow us to directly edit the texture.
GLint oldFBO;
glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, &oldFBO);
glGenFramebuffersOES(1, &textureFrameBuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, textureFrameBuffer);
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, texture, 0);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, oldFBO);
GLenum err = glGetError();
if (err != GL_NO_ERROR)
{
    NSLog(@"Error on framebuffer init. glError: 0x%04X", err);
}
Draw a big string into the framebuffer.
glGetError();
GLint oldFBO;
glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, &oldFBO);
//Bind our frame buffer.
glBindFramebufferOES(GL_FRAMEBUFFER_OES, textureFrameBuffer);
//Clear out the texture.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//Draw the letters to the frame buffer. (calls glDrawArrays a bunch of times, binds various textures, etc.) Does everything in 2D.
[self renderDialog:displayString withSprite:displaySprite withName:displaySpriteName];
//Unbind the frame buffer.
glBindFramebufferOES(GL_FRAMEBUFFER_OES, oldFBO);
GLenum err = glGetError();
if (err != GL_NO_ERROR)
{
    NSLog(@"Error on string creation. glError: 0x%04X", err);
}
Draw it.
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
glGetError();
//Draw the text.
[EAGLView enable2D];
//Push the matrix so we can keep it as it was previously.
glPushMatrix();
//Store the coordinates/dimensions from the rectangle.
float x = 0;
float y = [Globals getPlayableHeight] - dialogRect.size.height;
float w = [Globals getPlayableWidth];
float h = dialogRect.size.height;
// Set up an array of values to use as the sprite vertices.
GLfloat vertices[] =
{
    x,     y,
    x,     y + h,
    x + w, y + h,
    x + w, y
};
// Set up an array of values for the texture coordinates.
GLfloat texcoords[] =
{
    0,       0,
    0,       h / 128,
    w / 512, h / 128,
    w / 512, 0
};
//Render the vertices by pointing to the arrays.
glVertexPointer(2, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, texcoords);
// Set the texture parameters to use a linear filter when minifying.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
//Allow transparency and blending.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
//Enable 2D textures.
glEnable(GL_TEXTURE_2D);
//Bind this texture.
[EAGLView bindTexture:texture];
//Finally draw the arrays.
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
//Restore the model view matrix to prevent contamination.
glPopMatrix();
GLenum err = glGetError();
if (err != GL_NO_ERROR)
{
    NSLog(@"Error on draw. glError: 0x%04X", err);
}
Any external things I called work just fine in other contexts. Any ideas? I know almost nothing about framebuffers, so any help troubleshooting would be great.
Texture parameters are set on a per-texture basis. The code you posted appears to be setting GL_TEXTURE_MIN_FILTER before the texture you’re rendering to has been created or bound. If you’re not setting the filter anywhere else, and you haven’t specified texture images for the remaining levels, your texture is likely incomplete, which is why you’re getting white.
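A minimal sketch of the fix, assuming the filter state should be set right after the render target texture is created and bound:

// Set complete sampler state on the texture being rendered to; without
// mipmaps, the min filter must not be a MIPMAP variant.
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);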
For future reference, the absence of GL errors after framebuffer setup does not mean that the framebuffer is usable for rendering. You should also check that the framebuffer is complete by calling glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) and verifying that GL_FRAMEBUFFER_COMPLETE_OES is returned.
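For example, while the FBO from the setup code above is still bound:

// Verify the framebuffer is complete before rendering into it.
GLenum fboStatus = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
if (fboStatus != GL_FRAMEBUFFER_COMPLETE_OES)
{
    NSLog(@"Framebuffer incomplete: 0x%04X", fboStatus);
}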