Draw Square with OpenGL ES for iOS - iPhone

I am trying to draw a rectangle using the GLPaint example project provided by Apple. I have tried modifying the vertices, but I cannot get a rectangle to appear on the screen. The finger painting works perfectly. Am I missing something in my renderRect method?
- (void)renderRect {
[EAGLContext setCurrentContext:context];
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
// Replace the implementation of this method to do your own custom drawing.
static const GLfloat squareVertices[] = {
-0.5f, -0.33f,
0.5f, -0.33f,
-0.5f, 0.33f,
0.5f, 0.33f,
};
static float transY = 0.0f;
glTranslatef(0.0f, (GLfloat)(sinf(transY)/2.0f), 0.0f);
// Render the vertex array
glVertexPointer(2, GL_FLOAT, 0, squareVertices);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Display the buffer
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
The rest of the project is the stock setup that allows drawing on the screen, but for reference these are the GL settings that get applied.
// Set the view's scale factor
self.contentScaleFactor = 1.0;
// Setup OpenGL states
glMatrixMode(GL_PROJECTION);
CGRect frame = self.bounds;
CGFloat scale = self.contentScaleFactor;
// Setup the view port in Pixels
glOrthof(0, frame.size.width * scale, 0, frame.size.height * scale, -1, 1);
glViewport(0, 0, frame.size.width * scale, frame.size.height * scale);
glMatrixMode(GL_MODELVIEW);
glDisable(GL_DITHER);
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnable(GL_BLEND);
// Set a blending function appropriate for premultiplied alpha pixel data
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_POINT_SPRITE_OES);
glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glPointSize(width / brushScale);

static const GLfloat squareVertices[] = {
30.0f, 300.0f,//-0.5f, -0.33f,
280.0f, 300.0f,//0.5f, -0.33f,
30.0f, 170.0f,//-0.5f, 0.33f,
280.0f, 170.0f,//0.5f, 0.33f,
};
That's definitely too much. OpenGL uses normalized screen coordinates in the range [-1, 1], so you have to convert device coordinates to normalized ones.
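If the projection really were the GL default (identity, so clip space runs from -1 to 1), a conversion helper could look like this sketch; pointToNDC and the viewWidth/viewHeight parameters are illustrative names, not part of the project:
// Sketch: convert a position given in view points (origin top-left, y down)
// to normalized device coordinates in [-1, 1]. All names here are illustrative.
static inline void pointToNDC(CGFloat px, CGFloat py,
                              CGFloat viewWidth, CGFloat viewHeight,
                              GLfloat *outX, GLfloat *outY)
{
    *outX = (GLfloat)(px / viewWidth) * 2.0f - 1.0f;
    *outY = 1.0f - (GLfloat)(py / viewHeight) * 2.0f;  // flip y: UIKit is y-down, GL is y-up
}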

Issues are:
(1) the following code:
glMatrixMode(GL_PROJECTION);
CGRect frame = self.bounds;
CGFloat scale = self.contentScaleFactor;
// Setup the view port in Pixels
glOrthof(0, frame.size.width * scale, 0, frame.size.height * scale, -1, 1);
glViewport(0, 0, frame.size.width * scale, frame.size.height * scale);
Establishes that the on-screen coordinates range from (0, 0) in the lower left to frame.size in the upper right. In other words, one OpenGL unit is one iPhone point. So your array of:
static const GLfloat squareVertices[] = {
-0.5f, -0.33f,
0.5f, -0.33f,
-0.5f, 0.33f,
0.5f, 0.33f,
};
Is less than 1 pixel in size.
(2) you have the following in the setup:
brushImage = [UIImage imageNamed:@"Particle.png"].CGImage;
/* ...brushImage eventually becomes the current texture... */
glEnable(GL_TEXTURE_2D);
You subsequently fail to supply texture coordinates for your quad. Probably you want to disable GL_TEXTURE_2D.
So the following:
static const GLfloat squareVertices[] = {
0.0f, 0.0f,
0.0, 10.0f,
90.0, 0.0f,
90.0f, 10.0f,
};
glDisable(GL_TEXTURE_2D);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
// Render the vertex array
glVertexPointer(2, GL_FLOAT, 0, squareVertices);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Will produce a white quad 90 points wide and 10 points tall in the lower left of the screen.
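Putting both points together, a revised renderRect along these lines (a sketch only, reusing the question's context, viewFramebuffer and viewRenderbuffer ivars and its 30..280 x 170..300 rectangle) should show a solid quad:
- (void)renderRect {
    [EAGLContext setCurrentContext:context];
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
    // Vertices in the same units as the glOrthof projection above (points/pixels)
    static const GLfloat squareVertices[] = {
         30.0f, 300.0f,
        280.0f, 300.0f,
         30.0f, 170.0f,
        280.0f, 170.0f,
    };
    glDisable(GL_TEXTURE_2D);            // no texture coordinates are supplied for this quad
    glColor4f(1.0f, 1.0f, 1.0f, 1.0f);   // opaque white (restore the brush colour afterwards if needed)
    glVertexPointer(2, GL_FLOAT, 0, squareVertices);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glEnable(GL_TEXTURE_2D);             // restore state for the brush painting
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}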

Related

Why does my iPhone OpenGL ES texture not cover the viewport?

I have a square image of size 320x320, from which I create an OpenGL texture. I use the most basic vertex and fragment shaders, and I want to display the texture across the entire view. The view (an EAGLView derived from UIView, as found in many OpenGL iOS samples) is also 320x320.
The problem is that the image is drawn in the top-left corner, covering only around 50% of the view rather than 100% of it. I don't know why.
Here is my code:
position = glGetAttribLocation(m_shaderProgram, "position");
inputTextureCoordinate = glGetAttribLocation(m_shaderProgram, "inputTextureCoordinate");
inputImageTexture = glGetUniformLocation(m_shaderProgram, "inputImageTexture");
static const GLfloat textureCoordinates[] = {
0.0f, 1.0f,
1.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
};
static const GLfloat imageVertices[] = {
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, 1.0f,
};
[EAGLContext setCurrentContext:context];
glViewport(0, 0, backingWidth, backingHeight); // These are 320, 320
glUseProgram(m_shaderProgram);
glClearColor(0, 0, 0, 1);
glClear(GL_COLOR_BUFFER_BIT);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, sourceTextureID); // The texture is also of size 320x320
glUniform1i(inputImageTexture, 2);
glVertexAttribPointer(position, 2, GL_FLOAT, 0, 0, imageVertices);
glVertexAttribPointer(inputTextureCoordinate, 2, GL_FLOAT, 0, 0, textureCoordinates);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
Vertex Shader.
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;
void main()
{
gl_Position = position;
textureCoordinate = inputTextureCoordinate.xy;
}
Fragment Shader.
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
gl_FragColor = texture2D(inputImageTexture, textureCoordinate);
}
It turned out the problem was that the texture's dimensions were not a power of two, so the texture coordinates need to be scaled accordingly. Inserting the following lines solved the problem:
GLfloat textureCoordinates[] = {
0.0f, 1.0f,
1.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
};
// Find the nearest power of two that contains the backing width,
// then scale the s coordinates so they only cover the image portion.
float nearest2sPower = 2;
while (nearest2sPower < backingWidth) {
nearest2sPower *= 2;
}
textureCoordinates[2] = backingWidth / nearest2sPower;
textureCoordinates[6] = backingWidth / nearest2sPower;
// Do the same for the height and the t coordinates.
nearest2sPower = 2;
while (nearest2sPower < backingHeight) {
nearest2sPower *= 2;
}
textureCoordinates[1] = backingHeight / nearest2sPower;
textureCoordinates[3] = backingHeight / nearest2sPower;
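As an aside, since this question is using ES 2.0 shaders: if the texture really is created at its native 320x320 size, OpenGL ES 2.0 can sample non-power-of-two textures directly as long as the wrap mode is GL_CLAMP_TO_EDGE and no mipmaps are used, which sidesteps the coordinate scaling. A minimal sketch, reusing the question's sourceTextureID:
// Sketch: parameters that make a non-power-of-two texture legal to sample
// on OpenGL ES 2.0 (clamp-to-edge wrapping, no mipmapped minification).
glBindTexture(GL_TEXTURE_2D, sourceTextureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);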

iPhone OpenGL ES Paint App Brush Effect

I'm developing a painting app for iPhone and iPad, using the GLPaint sample app as a reference.
I am working on the brush effect. I want a brush effect for my paint app like the one shown in Image 1.
So far I've got a brush stroke similar to Image 2.
I am using the following code for the brush texture:
CGImageRef brushImage;
CGContextRef brushContext;
GLubyte *brushData;
size_t width, height;
if(UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad)
{
brushImage = [UIImage imageNamed:#"flower#2x.png"].CGImage;
}
else {
brushImage = [UIImage imageNamed:#"flower.png"].CGImage;
}
width = CGImageGetWidth(brushImage) ;
height = CGImageGetHeight(brushImage) ;
if(brushImage) {
brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4, CGImageGetColorSpace(brushImage),kCGImageAlphaPremultipliedLast);
CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage);
CGContextRelease(brushContext);
glGenTextures(1, &brushTexture);
glBindTexture(GL_TEXTURE_2D, brushTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
free(brushData);
}
CGFloat scale;
scale = self.contentScaleFactor;
glMatrixMode(GL_PROJECTION);
CGRect frame = self.bounds;
glLoadIdentity();
glOrthof(0, (frame.size.width) * scale, 0, (frame.size.height) * scale, -1, 1);
glViewport(0, 0, (frame.size.width) * scale, (frame.size.height) * scale);
glMatrixMode(GL_MODELVIEW);
glDisable(GL_DITHER);
glEnable(GL_BLEND);
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_POINT_SPRITE_OES);
glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glPointSize(width / kBrushScale);
// Define a starting color
HSL2RGB((CGFloat) 0.0 / (CGFloat)kPaletteSize, kSaturation, kLuminosity, &components[0], &components[1], &components[2]);
glColor4f(components[0] * kBrushOpacity, components[1] * kBrushOpacity, components[2] * kBrushOpacity, kBrushOpacity);
I've been searching for code related to different paint brush strokes, but I cannot find any. Please help me get the desired brush stroke, similar to Image 1.
Do not use GL_POINT_SPRITE_OES. Draw the sprites as standard triangles instead, bind the sprite texture coordinates to the target output coordinates, and make the sprite texture repeatable.
Assume the sprite texture is 32x32. The default texture coordinates cover the rect {{0, 0}, {1.0, 1.0}}. To draw a sprite at position {x, y}, use texture coordinates based on the rect {{(x % 32) / 32.0, (y % 32) / 32.0}, {1.0, 1.0}}. This keeps the sprite content from smudging.
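A rough sketch of that scheme (all names here are illustrative; it assumes a 32x32 brush texture with GL_REPEAT wrapping and a quad drawn at the texture's native size):
// Sketch: one brush stamp drawn as a textured quad whose texture
// coordinates are anchored to screen space, so overlapping stamps
// sample the same texels and don't smudge. spriteX/spriteY are the
// stamp's position; texSize is the brush texture size (32).
const GLfloat texSize = 32.0f;
GLfloat s0 = fmodf(spriteX, texSize) / texSize;
GLfloat t0 = fmodf(spriteY, texSize) / texSize;
GLfloat texCoords[] = {
    s0,        t0,
    s0 + 1.0f, t0,
    s0,        t0 + 1.0f,
    s0 + 1.0f, t0 + 1.0f,
};
GLfloat quadVertices[] = {
    spriteX,           spriteY,
    spriteX + texSize, spriteY,
    spriteX,           spriteY + texSize,
    spriteX + texSize, spriteY + texSize,
};
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);  // needed because coords exceed [0, 1]
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, quadVertices);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);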

OpenGL ES How to Correctly Combine Orthof and Frustum

I am beginning to learn OpenGL ES 1.1 for iPhone and would like to draw a 2D image in orthographic projection behind a few 3D objects. Using Jeff Lamarche's tutorials and the book Pro OpenGL ES for iPhone, I've come up with the following couple of methods to attempt this. If I disable the call to drawSunInRect, the 3D objects are rendered just fine and I can move them with touch controls etc. If I uncomment that call and try to draw the sun image, the image appears in the CGRect I supply, but I cannot see any of my other 3D objects - the rest of the screen is black. I've tried disabling/enabling depth testing in various places, passing different parameters to glOrthof(), and moving the rectangle around, but I keep getting only the sun image whenever drawSunInRect is called. I'm assuming it is covering my 3D objects.
// Draw Sun in Rect and with Depth
- (void)drawSunInRect:(CGRect)rect withDepth:(float)depth {
// Get screen bounds
CGRect frame = [[UIScreen mainScreen] bounds];
// Calculate vertices from passed CGRect and depth
GLfloat vertices[] =
{
rect.origin.x, rect.origin.y, depth,
rect.origin.x + rect.size.width , rect.origin.y, depth,
rect.origin.x, rect.size.height+rect.origin.y , depth,
rect.origin.x + rect.size.width , rect.size.height+rect.origin.y ,depth
};
// Map the texture coords - no repeating
static GLfloat textureCoords[] =
{
0.0, 0.0,
1.0, 0.0,
0.0, 1.0,
1.0, 1.0
};
// Disable DEPTH test and setup Ortho
glDisable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrthof(0,frame.size.width,frame.size.height,0,0.1f,1000.0);
// Enable blending and configure
glEnable(GL_BLEND);
glBlendFunc(GL_ONE,GL_ONE_MINUS_SRC_ALPHA);
// Enable VERTEX and TEXTURE client states
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
// Enable Textures
glEnable(GL_TEXTURE_2D);
// Projection Matrix Mode for ortho and reset
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
// No lighting or Depth Mask for 2D - no Culling
glDisable(GL_LIGHTING);
glDepthMask(GL_FALSE);
glDisable(GL_CULL_FACE);
glDisableClientState(GL_COLOR_ARRAY);
// Define ortho
glOrthof(0,frame.size.width,frame.size.height,0,0.1f,1000);
// From Jeff Lamarche Tutorials
// glOrthof(-1.0, // Left
// 1.0, // Right
// -1.0 / (rect.size.width / rect.size.height), // Bottom
// 1.0 / (rect.size.width / rect.size.height), // Top
// 0.01, // Near
// 10000.0); // Far
// Setup Model View Matrix
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
// Bind and draw the Texture
glBindTexture(GL_TEXTURE_2D,sunInt);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT,0,textureCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
// Re-enable lighting
glEnable(GL_LIGHTING);
glDisable(GL_TEXTURE_2D);
glDisable(GL_BLEND);
glDisable(GL_DEPTH_TEST);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDepthMask(GL_TRUE);
}
// Override the draw in rect function
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
// Initialize and init the rotation stuff for the object
// Identity Matrix
glLoadIdentity();
static GLfloat rot = 0.0;
// Clear any remnants in the drawing buffers
// and fill the background with black
glClearColor(0.0f,0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Render the Sun - 2D Sun Image in CGRect
//[self drawSunInRect:CGRectMake(100, 100, 50, 50) withDepth:-30.0]; // 2D Sun Image
// Render the BattleCruiser - 3D Battlecruiser
[self renderTheBattleCruiser];
// Calculate a time interval to use to rotate the cruiser lives
static NSTimeInterval lastDrawTime;
if (lastDrawTime)
{
NSTimeInterval timeSinceLastDraw = [NSDate timeIntervalSinceReferenceDate] - lastDrawTime;
rot += 75 * timeSinceLastDraw;
}
lastDrawTime = [NSDate timeIntervalSinceReferenceDate];
}
UPDATE
I modified the draw-sun method to have only one push and one pop for GL_PROJECTION. I am still having the same issue: the sun image appears, but I cannot see my 3D objects. I have tried rearranging the calls to the render methods so the 3D objects are rendered first, but I get the same results. I would appreciate other ideas on how to see my orthographic texture and 3D objects together.
// Draw Sun in Rect and with Depth
- (void)drawSunInRect:(CGRect)rect withDepth:(float)depth {
// Get screen bounds
CGRect frame = [[UIScreen mainScreen] bounds];
// Calculate vertices from passed CGRect and depth
GLfloat vertices[] =
{
rect.origin.x, rect.origin.y, depth,
rect.origin.x + rect.size.width , rect.origin.y, depth,
rect.origin.x, rect.size.height+rect.origin.y , depth,
rect.origin.x + rect.size.width , rect.size.height+rect.origin.y ,depth
};
// Map the texture coords - no repeating
static GLfloat textureCoords[] =
{
0.0, 0.0,
1.0, 0.0,
0.0, 1.0,
1.0, 1.0
};
glEnable(GL_BLEND);
glBlendFunc(GL_ONE,GL_ONE_MINUS_SRC_ALPHA);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glDisable(GL_LIGHTING);
glEnable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);
glDisableClientState(GL_COLOR_ARRAY);
glOrthof(0,frame.size.width,frame.size.height,0,0,1000);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glColor4f(1,1,1,1);
glBindTexture(GL_TEXTURE_2D,sunInt);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT,0,textureCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glEnable(GL_LIGHTING);
glDisable(GL_TEXTURE_2D);
glDisable(GL_BLEND);
glDisable(GL_DEPTH_TEST);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
I realized I was calling setClipping only once, in my view controller's viewDidLoad. I moved it into my glkView method, and now I get both my sun image and my 3D object.
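In other words, something like this sketch, where setClipping is the questioner's own projection-setup method and the rotation bookkeeping is omitted:
// Sketch: re-establish the projection each frame rather than once in viewDidLoad.
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    [self setClipping];                       // sets up glFrustumf / glViewport
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    [self renderTheBattleCruiser];            // 3D content
    [self drawSunInRect:CGRectMake(100, 100, 50, 50) withDepth:-30.0];  // 2D sun overlay
}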

OpenGL ES; rendering texture created from CGBitmapContext

I am executing the following, which I have derived from a few different tutorials (just a single render pass; initialisation code not shown, but it works fine for untextured primitives):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0, xSize, 0, ySize, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
glEnable(GL_TEXTURE_2D);
glBlendFunc(GL_ONE, GL_SRC_COLOR);
GLuint texture[1];
glGenTextures(1, &texture[0]);
glBindTexture(GL_TEXTURE_2D, texture[0]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
int width = 50;
int height = 50;
void* textureData = malloc(width * height * 4);
CGColorSpaceRef cSp = CGColorSpaceCreateDeviceRGB();
CGContextRef ct = CGBitmapContextCreate(textureData, width, height, 8, width*4, cSp, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextSetRGBFillColor(ct, 0, 1, 0, 1);
CGContextFillRect(ct, CGRectMake(0, 0, 50, 50));
CGContextRelease(ct);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
float verts[] = {
0.0f, 0.0f, 0.0f,
50.0f, 0.0f, 0.0f,
0.0f, 50.0f, 0.0f,
50.0f, 50.0f, 0.0f
};
float texCords[] = {
0.0f, 0.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f
};
glVertexPointer(3, GL_FLOAT, 0, verts);
glTexCoordPointer(2, GL_FLOAT, 0, texCords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisable(GL_TEXTURE_2D);
The result is a white square, not the green one as intended. Can anyone spot the error(s) in my code that cause it to fail to render?
I hope to get this working and then move on to text rendering.
The problem is that width and height are not powers of two. There are two solutions:
Use the texture rectangle extension. Set the texture target to GL_TEXTURE_RECTANGLE_ARB instead of GL_TEXTURE_2D. You will have to enable this extension before using it. Note that rectangle textures do not support mipmaps.
Use powers of two for texture dimensions.
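For the power-of-two route, a sketch of how the bitmap allocation and texture coordinates might change (nextPOT is a hypothetical helper; cSp is the color space already created above):
// Sketch: round the bitmap up to power-of-two dimensions, draw the 50x50
// green square into it, and shrink the texture coordinates so only the
// drawn region is sampled.
static size_t nextPOT(size_t n) {
    size_t p = 1;
    while (p < n) p <<= 1;
    return p;
}

size_t texWidth  = nextPOT(width);    // 50 -> 64
size_t texHeight = nextPOT(height);   // 50 -> 64
void *textureData = calloc(texWidth * texHeight * 4, 1);   // zeroed padding
CGContextRef ct = CGBitmapContextCreate(textureData, texWidth, texHeight, 8,
                                        texWidth * 4, cSp,
                                        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextSetRGBFillColor(ct, 0, 1, 0, 1);
CGContextFillRect(ct, CGRectMake(0, 0, width, height));    // only the 50x50 area
CGContextRelease(ct);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, textureData);
free(textureData);
// The vertices stay 50x50; the texture coordinates stop at the drawn region.
float maxS = (float)width  / texWidth;     // 50/64 = 0.78125
float maxT = (float)height / texHeight;
float texCords[] = {
    0.0f, 0.0f,
    maxS, 0.0f,
    0.0f, maxT,
    maxS, maxT
};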

glFrustumf displays only clear color, glOrthof displays as expected (OpenGL ES)

I'm new to OpenGL, so I'm sure this is a dumb mistake, but I've read every post and reviewed sample code, and I can't find the difference that explains why glFrustum won't display as I'd like it to.
I initialize OpenGL like:
- (void) initOpenGL{
glEnable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
//glLoadIdentity();
//glOrthof(0.0f, self.bounds.size.width, self.bounds.size.height, 0.0f, -10.0f, 10.0f);
const GLfloat zNear = -0.1, zFar = 1000.0, fieldOfView = 60.0;
GLfloat size;
size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
// This gives us the size of the iPhone display
CGRect rect = self.bounds;
glFrustumf(-size, size, -size / (rect.size.width / rect.size.height), size / (rect.size.width / rect.size.height), zNear, zFar);
glViewport(0, 0, rect.size.width, rect.size.height);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
#if 0
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
#endif
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
#if TARGET_IPHONE_SIMULATOR
glColor4f(0.0, 0.0, 0.0, 0.0f);
#else
glColor4f(0.0, 0.0, 0.0, 0.0f);
#endif
[[Texture2D alloc] initWithImage:[UIImage imageNamed:@"GreenLineTex.png"] filter:GL_LINEAR];
glInitialised = YES;
}
And my drawing is done like this:
- (void)drawView {
if(!glInitialised) {
[self initOpenGL];
}
[EAGLContext setCurrentContext:context];
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glEnable(GL_TEXTURE_2D);
static const GLfloat texCoords[] = {
0.0, 0.0,
1.0, 0.0,
0.0, 1.0,
1.0, 1.0
};
// draw the edges
glEnableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glBindTexture(GL_TEXTURE_2D, 1);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);
for (int i = 0; i < connectionNumber; i++){
Vertex2DSet(&vertices[0], connectionLines[i].lineVertexBeginPoint.x, connectionLines[i].lineVertexBeginPoint.y);
Vertex2DSet(&vertices[1], connectionLines[i].lineVertexBeginPoint.x+connectionLines[i].normalVector.x, connectionLines[i].lineVertexBeginPoint.y+connectionLines[i].normalVector.y);
Vertex2DSet(&vertices[2], connectionLines[i].lineVertexEndPoint.x, connectionLines[i].lineVertexEndPoint.y);
Vertex2DSet(&vertices[3], connectionLines[i].lineVertexEndPoint.x+connectionLines[i].normalVector.x, connectionLines[i].lineVertexEndPoint.y+connectionLines[i].normalVector.y);
glVertexPointer(2, GL_FLOAT, 0, vertices);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
Where the block in the for loop is a set of vertices that make up some triangle strips.
If I uncomment the glOrthof() line, then I can see my display; however, it's orthographic, and I'd like to move the camera in and out to change the scaling of the whole scene.
What have I done incorrectly that causes glFrustumf() to display only the clear color?
Short answer: you are looking in the wrong direction.
Long answer:
Your frustum is symmetric while your orthographic matrix isn't. So if your model is set up to be visible in the glOrtho case, it may not be visible with your glFrustum.
Also you shouldn't use glOrtho AND glFrustum together, because the matrices are multiplied and will surely yield a funny projection matrix.
You can use Nate Robins' GL tutors at http://www.xmission.com/~nate/tutors.html to experiment with glFrustum and glOrtho (in the "projection" application).
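For reference, a perspective setup sketch that avoids both pitfalls (the projection is reset with glLoadIdentity rather than combined with the ortho matrix, zNear is positive, and the model-view pushes the scene out in front of the camera); it reuses the question's DEGREES_TO_RADIANS macro, and the -5.0 translation is just an example distance:
// Sketch: a valid glFrustumf setup. glFrustum requires a positive zNear,
// and geometry must end up at negative z in eye space to be visible.
const GLfloat zNear = 0.1f, zFar = 1000.0f, fieldOfView = 60.0f;
GLfloat size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0f);
CGRect rect = self.bounds;
GLfloat aspect = rect.size.width / rect.size.height;

glMatrixMode(GL_PROJECTION);
glLoadIdentity();                      // start clean; don't multiply onto a previous glOrthof
glFrustumf(-size, size, -size / aspect, size / aspect, zNear, zFar);
glViewport(0, 0, rect.size.width, rect.size.height);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -5.0f);       // move the scene in front of the camera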