I am working with some larger vertex values which I have parsed from a DAE file, e.g.:
{-79.6536, -2230.43, -213.8},{-79.6536, 2377.36, -213.8},{79.6536, 2377.36, -213.8},{79.6536, -2230.43, -213.8},{-79.6536, -2230.43, 958.953},{79.6536, -2230.43, 958.953},{79.6536, 2377.36, 958.953},{-79.6536, 2377.36, 958.953},...
My question is: what changes do I need to make to the setup of my viewport in order to accommodate these larger vertices? I currently have the following:
- (void)setupView
{
    // Set up the window that we will view the scene through
    glViewport(0, 0, backingWidth, backingHeight);

    // switch to the projection matrix and set up our 'camera lens'
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);

    // switch to model mode and set our background color
    glMatrixMode(GL_MODELVIEW);
    glClearColor(0.5f, 0.5f, 0.5f, 1.0f);
}
However, when I run the code I just get a screen filled with white - I presume this is because my object is zoomed in to an extreme degree.
Thanks for any advice in advance.
Use glScalef(max_s, max_s, max_s);
Where
max_s = 2.0 / max(max(Xi) - min(Xi), max(Yi) - min(Yi), max(Zi) - min(Zi))
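A minimal sketch of that computation against the parsed data (the names verts and count are hypothetical stand-ins for your DAE arrays, and the centering glTranslatef at the end is my addition, not part of this answer):

// Sketch: fit an arbitrary model into the existing [-1, 1] ortho volume.
// Assumes `verts` is a flat GLfloat array of `count` xyz triples, and that
// <float.h> / <math.h> are available for FLT_MAX / fmaxf.
GLfloat minV[3] = {  FLT_MAX,  FLT_MAX,  FLT_MAX };
GLfloat maxV[3] = { -FLT_MAX, -FLT_MAX, -FLT_MAX };
for (int i = 0; i < count; i++) {
    for (int a = 0; a < 3; a++) {
        GLfloat v = verts[i * 3 + a];
        if (v < minV[a]) minV[a] = v;
        if (v > maxV[a]) maxV[a] = v;
    }
}
GLfloat max_s = 2.0f / fmaxf(maxV[0] - minV[0],
                       fmaxf(maxV[1] - minV[1], maxV[2] - minV[2]));

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glScalef(max_s, max_s, max_s);
// Centering the model first is my addition; it moves the scaled model into
// the clip volume rather than just shrinking it in place.
glTranslatef(-(minV[0] + maxV[0]) / 2.0f,
             -(minV[1] + maxV[1]) / 2.0f,
             -(minV[2] + maxV[2]) / 2.0f);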
I'm making an iPhone game that involves using GL_POINTS to render a point. However, when the center of the point is off screen, I still want to draw whatever portion of the point is still onscreen. Here is the code I'm using to render the point:
-(void)render {
    if (!fill || !outline || !active || dead)
        return;

    NSLog(@"rendering");

    glPushMatrix();
    glLoadIdentity();
    glMultMatrixf(matrix);

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_SMOOTH);
    glEnable(GL_POINT_SMOOTH);

    glPointSize(scale.x * 2);
    [outline render];

    glPointSize(2 * (scale.x - kLineWidth));
    [fill render];

    glPopMatrix();
}
Note that it logs "rendering" when it should be rendering, so this method is being called properly. The [outline render] and [fill render] methods look like this:
-(void)render {
    // load arrays into the engine
    glVertexPointer(vertexSize, GL_FLOAT, 0, vertexes);
    glEnableClientState(GL_VERTEX_ARRAY);
    glColorPointer(colorSize, GL_FLOAT, 0, colors);
    glEnableClientState(GL_COLOR_ARRAY);

    // render
    glDrawArrays(renderStyle, 0, vertexCount);

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
}
and I'm applying a "panning" effect using this code:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(-kScreenWidth/2.0 + xPan, kScreenWidth/2.0 + xPan, -kScreenHeight/2.0 + yPan, kScreenHeight/2.0 + yPan, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
but when the point's center is not on the screen (after panning with glOrthof), the whole point is not drawn. How can I have the point still render even when the center is not on the screen?
I don't believe there is anything you can do for an easy fix. Primitives are clipped before rasterization, so if that point lies outside the view frustum, it's not going to be rasterized, even if the rasterization would create fragments that do lie inside the view frustum.
Either switch to real quads built from GL_TRIANGLES (OpenGL ES has no GL_QUADS), or if you really don't want to do that, you can render your points to an offscreen buffer slightly larger than the viewport, and then blit the center of that image back onto the main frame.
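A minimal sketch of the first option, drawing the point as a screen-aligned quad via GL_TRIANGLE_STRIP (cx, cy, and r are hypothetical stand-ins for the point's center and half its point size; a round appearance would additionally need a circle texture or fragment-level masking, omitted here):

// Unlike points, triangles are clipped geometrically, so the on-screen part
// still rasterizes even when the center (cx, cy) is outside the frustum.
GLfloat cx = 10.0f, cy = -500.0f, r = 16.0f; // example values
const GLfloat quad[] = {
    cx - r, cy - r,
    cx + r, cy - r,
    cx - r, cy + r,
    cx + r, cy + r,
};

glVertexPointer(2, GL_FLOAT, 0, quad);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);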
I'm new to Xcode programming and I'm trying to create an iPhone game using OpenGL with support for retina display at 60 FPS, but it runs way too slow. I based it on the GLSprite example at developer.apple.com. I've already optimized it the best I could, but it keeps running under 30 FPS on the Simulator (I haven't tested it on a real device yet - maybe it's faster there?).

The bottleneck appears to be drawing the polygons. I've used really small textures (256x256 PNG) and pixel formats (RGBA4444); I've disabled blending; I've moved all transformation code to the load phase hoping for better performance - everything to no avail. I'm keeping a vertex array that stores everything for each step, then drawing with GL_TRIANGLES in one function call, because I think that's faster than calling glDrawArrays multiple times. It starts lagging when I reach about 120 vertexes (6 for each rectangular sprite), but in many places I've read the iPhone can handle even millions of vertexes.

What's wrong with the code below? Is OpenGL the fastest way to render graphics on the iPhone? If not, what else should I use?
OpenGL loading code, called just once, at the beginning:
glViewport(0, 0, backingWidth, backingHeight);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
glMatrixMode(GL_MODELVIEW);
glClear(GL_COLOR_BUFFER_BIT);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glBindTexture(GL_TEXTURE_2D,texture[0]); //Binds a texture loaded previously with the code given below
glVertexPointer(3, GL_FLOAT, 0, vertexes); //The array holding the vertexes
glEnableClientState(GL_VERTEX_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, uvCoord); //The array holding the uv coordinates
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
The texture loading method:
- (void)loadSprite:(NSString*)filename intoPos:(int)pos { // Loads a texture within the bundle, at the given position in an array storing all textures (but I actually just use one at a time)
    CGImageRef spriteImage;
    CGContextRef spriteContext;
    GLubyte *spriteData;
    size_t width, height;

    // Sets up matrices and transforms for OpenGL ES
    glViewport(0, 0, backingWidth, backingHeight);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
    glMatrixMode(GL_MODELVIEW);

    // Clears the view with black
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);

    // Sets up pointers and enables states needed for using vertex arrays and textures
    glVertexPointer(2, GL_FLOAT, 0, spriteVertices);
    glEnableClientState(GL_VERTEX_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, spriteTexcoords);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    // Creates a Core Graphics image from an image file
    spriteImage = [UIImage imageNamed:filename].CGImage;

    // Get the width and height of the image
    width = CGImageGetWidth(spriteImage);
    height = CGImageGetHeight(spriteImage);
    textureWidth[pos] = width;
    textureHeight[pos] = height;
    NSLog(@"Width %lu; Height %lu", width, height);

    // Texture dimensions must be a power of 2. If you write an application that allows users to supply an image,
    // you'll want to add code that checks the dimensions and takes appropriate action if they are not a power of 2.
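    // (Sketch, not in the original post: the guard the comment above suggests.
    // Bails out on non-power-of-two images instead of resizing them.)
    if ((width & (width - 1)) != 0 || (height & (height - 1)) != 0) {
        NSLog(@"Texture %@ is not power-of-two sized (%lu x %lu)", filename, width, height);
        return;
    }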
    if (spriteImage) {
        // Allocate the memory needed for the bitmap context
        spriteData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
        // Uses the bitmap creation function provided by the Core Graphics framework.
        spriteContext = CGBitmapContextCreate(spriteData, width, height, 8, width * 4, CGImageGetColorSpace(spriteImage), kCGImageAlphaPremultipliedLast);
        // After you create the context, you can draw the sprite image to the context.
        CGContextDrawImage(spriteContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), spriteImage);
        // You don't need the context at this point, so release it to avoid memory leaks.
        CGContextRelease(spriteContext);
        // Use OpenGL ES to generate a name for the texture.
        glGenTextures(1, &texture[pos]);
        // Bind the texture name.
        glBindTexture(GL_TEXTURE_2D, texture[pos]);
        curTexture = pos;

        if (1) { // This should convert the pixel format
            NSLog(@"convert to 4444");
            void* tempData;
            unsigned int* inPixel32;
            unsigned short* outPixel16;
            tempData = malloc(height * width * 2);
            inPixel32 = (unsigned int*)spriteData;
            outPixel16 = (unsigned short*)tempData;
            NSUInteger i;
            for (i = 0; i < width * height; ++i, ++inPixel32)
                *outPixel16++ = ((((*inPixel32 >> 0) & 0xFF) >> 4) << 12) | ((((*inPixel32 >> 8) & 0xFF) >> 4) << 8) | ((((*inPixel32 >> 16) & 0xFF) >> 4) << 4) | ((((*inPixel32 >> 24) & 0xFF) >> 4) << 0);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, tempData);
            free(tempData);
        } else {
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
        }

        // Set the texture parameters to use nearest-neighbour filtering for both minification and magnification
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        // Specify a 2D texture image, providing a pointer to the image data in memory
        //glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
        // Release the image data
        free(spriteData);
        // Enable use of the texture
        glEnable(GL_TEXTURE_2D);
        // Set a blending function to use
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        // Enable blending
        glEnable(GL_BLEND);
    }
}
The actual drawing code that is called every game loop:
glDrawArrays(GL_TRIANGLES, 0, vertexIndex); //vertexIndex is the maximum number of vertexes at this loop
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
According to the OpenGL ES Programming Guide for iOS:

"Important: Rendering performance of OpenGL ES in Simulator has no relation to the performance of OpenGL ES on an actual device. Simulator provides an optimized software rasterizer that takes advantage of the vector processing capabilities of your Macintosh computer. As a result, your OpenGL ES code may run faster or slower in iOS simulator (depending on your computer and what you are drawing) than on an actual device. Always profile and optimize your drawing code on a real device and never assume that Simulator reflects real-world performance."
The Simulator is not reliable for profiling the performance of OpenGL applications. You'll need to run and profile on the real hardware.
"It starts lagging when I reach about 120 vertexes (6 for each rectangular sprite), but in many places I've read the iPhone can handle even millions of vertexes."
To elaborate a bit on this comment of yours: the number of vertices is not the only variable impacting OpenGL performance. For example, with only a single triangle (3 vertices), you can cover every pixel on the screen, and this obviously needs more computation than drawing a small triangle covering only a few pixels. The metric describing the capacity to draw many pixels is known as fill rate.

If your vertices represent large triangles on screen, it is probable that fill rate is your performance bottleneck, not vertex transform. As the iOS Simulator uses a software rasterizer, albeit an optimized one, it is probably slower than actual specialized hardware.

You should profile your application to find your actual performance bottleneck before optimizing; this document can help you.
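Before reaching for Instruments, a crude first measurement can be taken on the device itself. A sketch, assuming a view controller whose per-frame render call is a hypothetical drawFrame method (CADisplayLink requires QuartzCore):

// Rough per-frame timing: a CADisplayLink drives rendering at (up to) 60 Hz
// and the timestamp delta shows the real cost of each frame on the device.
- (void)startAnimation
{
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(tick:)];
    [link addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
}

- (void)tick:(CADisplayLink *)link
{
    static CFTimeInterval last = 0;
    if (last > 0)
        NSLog(@"frame time: %.1f ms", (link.timestamp - last) * 1000.0);
    last = link.timestamp;
    [self drawFrame]; // hypothetical render entry point
}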
I'm trying to draw a simple circle using OpenGL ES. The problem is that the circle is stretched vertically - it looks more like an ellipse than a circle. Could someone point out where things are going wrong?

I played around with glViewport to fix this but was not successful. As someone else suggested here on Stack Overflow, I also tried loading a different matrix instead of the identity matrix, and that doesn't work either...
Here's the code of drawFrame:
- (void)drawFrame
{
    [(EAGLView *)self.view setFramebuffer];

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    GLfloat vertices[720];

    for (int i = 0; i < 720; i += 2)
    {
        vertices[i] = (cos(degreesToRadians(i)) * 1);
        vertices[i+1] = (sin(degreesToRadians(i)) * 1);
    }

    glVertexPointer(2, GL_FLOAT, 0, vertices);
    glEnableClientState(GL_VERTEX_ARRAY);
    glColor4f(1.0f, 0.0f, 0.0f, 1.0f);
    glDrawArrays(GL_TRIANGLE_FAN, 0, 360);

    [(EAGLView *)self.view presentFramebuffer];
}
The code you are showing will draw a perfect circle in world co-ordinates. What you need to consider is how those world co-ordinates transform into window co-ordinates i.e. pixels.
If the glViewport is set to always match the window then it's the aspect ratio of the window that will determine what you see using the code sample you have shown. If the window is square it will work i.e. you will see a perfect circle. If the window is taller than it is wide then the circle will be stretched vertically.
To preserve the perfect circle you can use a projection matrix that gives you a viewing volume of the same aspect ratio as the viewport/window. I noticed that before your first edit you had a call to glOrthof in there. Set the aspect ratio to match there and that will do the job for you. If you want a perspective projection instead of an orthographic projection then use glFrustum.
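For example, a sketch of an aspect-correct orthographic setup (assuming backingWidth/backingHeight hold the framebuffer size, as in the other snippets here):

// Match the ortho volume's aspect ratio to the viewport so one world unit
// is the same length on both axes (portrait case shown).
GLfloat aspect = (GLfloat)backingHeight / (GLfloat)backingWidth;

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(-1.0f, 1.0f, -aspect, aspect, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);

With this in place, the unit-radius circle from drawFrame should come out round regardless of the window's shape; for a landscape viewport, scale the x extents instead.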
To define what I'm trying to do: I want to be able to take an arbitrary 'sprite' image from a power-of-two-sized PNG, and display just the pixels of interest at a given x/y position on screen.
My results are the problem - major distortion - it looks awful! (Note these screenshots are from the iPhone Simulator, but on a real retina device they appear the same: junky.) Here is a screenshot of the source PNG in Preview, which looks wonderful (any variation on rendering that I describe in this question looks almost exactly like the junky one).

Previously, I asked a question about displaying a non-power-of-2 texture as a sprite using OpenGL ES 2.0 (although this applies to any OpenGL). I'm close, but I have some issues I can't resolve. I think there are probably multiple bugs - I suspect I'm basically aliasing what I'm displaying by rendering large and then squashing by 2x (or vice versa), but I can't see it. Additionally, there are off-by-one errors, and I cannot get a handle on them. I can't visually identify them occurring, but I know for sure they're there.

I'm working in 960 x 640 landscape (on the iPhone 4 retina display). So I expect 0->959 to move left to right and 0->639 to move bottom to top. (And I think I'm seeing the opposite of this - but that's not what this question is about.)

To make things easy, what I'm trying to achieve in this test case is a FULL SCREEN 960x640 display of a PNG file. Just one of them. I display a red background first so that it's obvious whether I'm covering the screen or not.

Update: I realized the glViewport inside the setFramebuffer call was setting my width and height backwards. I noticed this because when I set my geometry to draw from 0,0 to 100,100 it drew a rectangle, not a square. When I swapped these, that call drew a square. However, using that same call, my entire screen fills with a vertex range of 0,0 -> 480,320 (half 'resolution') - I don't understand that. And no matter where I push on from this, I'm still not getting a good-looking result.
Here's my vertex shader:
attribute vec4 a_position;
attribute vec2 a_texCoord;
varying vec2 v_texCoord;

// Gives 'landscape' full screen..
mat4 projectionMatrix = mat4( 2.0/640.0, 0.0,       0.0, -1.0,
                              0.0,       2.0/960.0, 0.0, -1.0,
                              0.0,       0.0,      -1.0,  0.0,
                              0.0,       0.0,       0.0,  1.0);

// Gives a 1/4 of screen.. (not doing 2.0/.. was suggested in previous SO Q)
/*mat4 projectionMatrix = mat4( 1.0/640.0, 0.0,       0.0, -1.0,
                                0.0,       1.0/960.0, 0.0, -1.0,
                                0.0,       0.0,      -1.0,  0.0,
                                0.0,       0.0,       0.0,  1.0); */

// Apply the projection matrix to the position and pass the texCoord along
void main()
{
    gl_Position = a_position;
    gl_Position *= projectionMatrix;
    v_texCoord = a_texCoord;
}
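One aside about this shader (my observation, not from the original post): GLSL's mat4 constructor is column-major, so the literal above actually places the -1 translation terms in the bottom row, and the shader only works because gl_Position *= projectionMatrix multiplies with the vector on the left (v * M). A sketch of the conventional column-major spelling of the same projection, with declarations as in the shader above:

// Each group of four values in a mat4 constructor is one COLUMN, so here
// the (-1, -1) translation sits in the fourth column and the vector is
// multiplied on the right as usual.
const mat4 projection = mat4(2.0/640.0, 0.0,       0.0,  0.0,
                             0.0,       2.0/960.0, 0.0,  0.0,
                             0.0,       0.0,      -1.0,  0.0,
                            -1.0,      -1.0,       0.0,  1.0);

void main()
{
    gl_Position = projection * a_position;
    v_texCoord = a_texCoord;
}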
Here's my fragment shader:
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D s_texture;

void main()
{
    gl_FragColor = texture2D(s_texture, v_texCoord);
}
Here's my draw code:
#define MYWIDTH 960.0f
#define MYHEIGHT 640.0f
// I have to refer to 'X' as height although I'd assume I'd use 'Y' here..
// I think my X and Y throughout this whole block of code are screwed up.
// But I have experimented with flipping them all and verified that if they
// are swapped from the way they're set now, things end up turned the wrong
// way. So this is a mess, but it's unlikely to be my problem.
#define BG_X_ORIGIN 0.0f
// ALSO NOTE HERE: I have to put my 'dest' at 640.0f.. --- see note [1] below
#define BG_X_DEST 640.0f
#define BG_Y_ORIGIN 0.0f
// --- see note [1] below
#define BG_Y_DEST 960.0f
// These are texturing coordinates, I texture starting at '0' px and then
// I calculate a percentage of the texture to use based on how many pixels I use
// divided by the actual size of the image (1024x1024)
#define BG_X_ZERO 0.0f
#define BG_Y_USEPERCENTAGE BG_X_DEST / 1023.0f
#define BG_Y_ZERO 0.0f
#define BG_X_USEPERCENTAGE BG_Y_DEST / 1023.0f
// glViewport(0, 0, MYWIDTH, MYHEIGHT);
// See note 2.. it sets glViewport basically, provided by Xcode project template
[(EAGLView *)self.view setFramebuffer];
// Big hack just to get things going - like I said before, these could be backwards
// w/respect to X and Y
static const GLfloat backgroundVertices[] = {
    BG_X_ORIGIN, BG_Y_ORIGIN,
    BG_X_DEST,   BG_Y_ORIGIN,
    BG_X_ORIGIN, BG_Y_DEST,
    BG_X_DEST,   BG_Y_DEST
};

static const GLfloat backgroundTexCoords[] = {
    BG_X_ZERO,          BG_Y_USEPERCENTAGE,
    BG_X_USEPERCENTAGE, BG_Y_USEPERCENTAGE,
    BG_X_ZERO,          BG_Y_ZERO,
    BG_X_USEPERCENTAGE, BG_Y_ZERO
};
// Turn on texturing
glEnable(GL_TEXTURE_2D);
// Clear to RED so that it's obvious when I'm not drawing my sprite on screen
glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// Texturing parameters - these make sense.. don't think they are the issue
glActiveTexture(GL_TEXTURE0);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);//GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);//GL_LINEAR);
// Update attribute values.
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, backgroundVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, 0, 0, backgroundTexCoords);
glEnableVertexAttribArray(ATTRIB_TEXCOORD);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, background->textureId);
// The sampler uniform tells texture2D() which texture unit to read from;
// note that sampler uniforms must be set with glUniform1i, not glUniform1f.
glUniform1i(uniforms[UNIFORM_SAMPLERLOC], 0);
// Draw the geometry...
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// present the framebuffer see note [3]
[(EAGLView *)self.view presentFramebuffer];
Note [1]: If I set BG_X_DEST to 639.0f I do not get full coverage of the 640 pixels - I get red showing through on the right hand side. But this doesn't make sense to me: I'm aiming for pixel perfect, and I have to draw my sprite geometry from 0 to 640, which is 641 pixels when I only have 640! (Screenshot: red line appearing with 639f instead of 640f.)

And if I set BG_Y_DEST to 959.0f I do not get the red line showing through. (Screenshot: red line top bug appearing with 958f instead of 960 or 959f.)

This may be a good clue as to what bug(s) I have going on.
Note: [2] - included in the OpenGL ES 2 framework by Xcode
- (void)setFramebuffer
{
    if (context)
    {
        [EAGLContext setCurrentContext:context];

        if (!defaultFramebuffer)
            [self createFramebuffer];

        glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
        glViewport(0, 0, framebufferWidth, framebufferHeight);
    }
}
Note [3]: - included in the OpenGL ES 2 framework by Xcode
- (BOOL)presentFramebuffer
{
    BOOL success = FALSE;

    if (context)
    {
        [EAGLContext setCurrentContext:context];
        glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
        success = [context presentRenderbuffer:GL_RENDERBUFFER];
    }

    return success;
}
Note [4] - relevant image loading code. (I have used PNGs with and without an alpha channel, and it doesn't seem to make any difference. I also tried changing my code to read ARGB instead of RGBA, and that's wrong - since A = 1.0 everywhere, I get a very RED image, which makes me think the RGBA is in fact valid and this code is right.) Update: I have switched this texture loading to a completely different setup using CG/ImageIO calls and it looks identical, so I assume it's not some kind of aliasing or color compression done by the image libraries (unless they both go to the same fundamental calls, which is possible).
// Otherwise it isn't already loaded
glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); //GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); //GL_LINEAR);

// TODO Next 2 can prob go later on..
glGenTextures(1, &(newTexture->textureId)); // generate Texture
// Use this before 'drawing' the texture to the memory...
glBindTexture(GL_TEXTURE_2D, newTexture->textureId);

NSString *path = [[NSBundle mainBundle]
    pathForResource:[NSString stringWithUTF8String:newTexture->filename.c_str()] ofType:@"png"];
NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
UIImage *image = [[UIImage alloc] initWithData:texData];
if (image == nil)
    NSLog(@"Do real error checking here");

newTexture->width = CGImageGetWidth(image.CGImage);
newTexture->height = CGImageGetHeight(image.CGImage);

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc(newTexture->height * newTexture->width * 4);
CGContextRef myContext = CGBitmapContextCreate
    (imageData, newTexture->width, newTexture->height, 8, 4 * newTexture->width, colorSpace,
     kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextClearRect(myContext, CGRectMake(0, 0, newTexture->width, newTexture->height));
CGContextDrawImage(myContext, CGRectMake(0, 0, newTexture->width, newTexture->height), image.CGImage);

// Texture is created!
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newTexture->width, newTexture->height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, imageData);

CGContextRelease(myContext);
free(imageData);
[image release];
[texData release];
[(EAGLView *)self.view setContentScaleFactor:2.0f];
By default, iPhone views are scaled to reach their high-resolution retina modes, and that scaling was destroying my image quality.

Thanks for all the help, folks.
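A slightly more general form of that fix (a sketch; querying the screen scale rather than hard-coding 2.0f is my addition, not from the post above):

// Match the layer's backing store to the screen's physical pixels so UIKit
// doesn't rescale (and blur) the GL output.
CGFloat screenScale = [UIScreen mainScreen].scale;
[(EAGLView *)self.view setContentScaleFactor:screenScale];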
I'm trying to render to a texture, then draw that texture to the screen using OpenGL ES on the iPhone. I'm using this question as a starting point, and doing the drawing in a subclass of Apple's demo EAGLView.
Instance variables:
GLuint textureFrameBuffer;
Texture2D * texture;
To initialize the frame buffer and texture, I'm doing this:
GLint oldFBO;
glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, &oldFBO);
// initWithData results in a white image on the device (works fine in the simulator)
texture = [[Texture2D alloc] initWithImage:[UIImage imageNamed:@"blank320.png"]];
// create framebuffer
glGenFramebuffersOES(1, &textureFrameBuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, textureFrameBuffer);
// attach renderbuffer
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, texture.name, 0);
if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES)
    NSLog(@"incomplete");
glBindFramebufferOES(GL_FRAMEBUFFER_OES, oldFBO);
Now, if I simply draw my scene to the screen as usual, it works fine:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// draw some triangles, complete with vertex normals
[contentDelegate draw];
[self swapBuffers];
But, if I render to 'textureFrameBuffer', then draw 'texture' to the screen, the resulting image is upside down and "inside out". That is, it looks as though the normals of the 3d objects are pointing inward rather than outward -- the frontmost face of each object is transparent, and I can see the inside of the back face. Here's the code:
GLint oldFBO;
glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, &oldFBO);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, textureFrameBuffer);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// draw some polygons
[contentDelegate draw];
glBindFramebufferOES(GL_FRAMEBUFFER_OES, oldFBO);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
glColor4f(1, 1, 1, 1);
[texture drawInRect:CGRectMake(0, 0, 320, 480)];
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_2D);
[self swapBuffers];
I can flip the image right side up easily enough by reordering the (glTexCoordPointer) coordinates accordingly (in Texture2D's drawInRect method), but that doesn't solve the "inside-out" issue.
I tried replacing the Texture2D texture with a manually created OpenGL texture, and the result was the same. Drawing a Texture2D loaded from a PNG image works fine.
As for drawing the objects, each vertex has a unit normal specified, and GL_NORMALIZE is enabled.
glVertexPointer(3, GL_FLOAT, 0, myVerts);
glNormalPointer(GL_FLOAT, 0, myNormals);
glDrawArrays(GL_TRIANGLES, 0, numVerts);
Everything draws fine when it's rendered to the screen; GL_DEPTH_TEST is enabled and is working great.
Any suggestions as to how to fix this? Thanks!
The interesting part of this is that you're seeing a different result when drawing directly to the backbuffer. Since you're on the iPhone platform, you're always drawing to an FBO, even when you're drawing to the backbuffer.
Make sure that you have a depth buffer attached to your offscreen FBO. In your initialization code, you might want to add the following snippet right after the glBindFramebufferOES(...).
// attach depth buffer
GLuint depthRenderbuffer;
glGenRenderbuffersOES(1, &depthRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, width, height);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);
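After attaching the depth renderbuffer, it may be worth repeating the completeness check from the initialization code above (width and height are assumed to match the texture's dimensions):

// Re-validate the FBO now that both the color texture and the depth
// renderbuffer are attached.
if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES)
    NSLog(@"offscreen framebuffer incomplete after attaching depth");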