glBindTexture Problem with OpenGL ES on iPhone

I'm new to OpenGL, and am having a curious problem with my textures - looking for a nudge in the right direction.
I have an app which uses a render-to-texture technique to accomplish a certain effect - it's working marvelously. I draw to an offscreen buffer whenever I need to, and am able to use this as a texture in my render loop.
This texture is only updated when necessary - most frames it's just drawn to the screen as-is.
I also have some toolbars which are drawn with OpenGL on top of this surface, using a texture atlas and blending.
I have recently begun trying to incorporate a particle system into the app, but whenever I try to render the particle system graphics, I "lose" the texture I rendered in the first step - i.e. its contents disappear.
I have traced this to the call to glBindTexture that binds the texture of the particles.
EDIT: I can reproduce this in my simple toolbar drawing routine, code below. This is a crude routine that animates toolbar graphics on and off screen.
When I uncomment the first two lines in drawToolBar(), my rendered-in-memory texture disappears, i.e. the glDrawArrays call in my render loop renders nothing to the screen. Through testing, I have determined that the glBindTexture call is what triggers this. (For example, I can render colored quads over my texture, just not textured ones.)
However, everything is fine if I allow drawToolBar() to run as below - the only difference is that the eventual call to drawTools() is wrapped in glPushMatrix/glPopMatrix and is translated.
Note that the toolbar rendering always works - there is some unintended side effect or bug here that causes my background texture to disappear.
Any ideas are welcome - this is driving me nuts.
The code:
void drawTools()
{
    //*Texture Coordinate Stuff Snipped*//
    glBindTexture(GL_TEXTURE_2D, _buttontexture);
    glEnable(GL_TEXTURE_2D);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, bottomToolQuads);
    glTexCoordPointer(2, GL_FLOAT, 0, texc);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glDrawArrays(GL_TRIANGLE_STRIP, 4, 4);
    glDrawArrays(GL_TRIANGLE_STRIP, 8, 4);
    glDrawArrays(GL_TRIANGLE_STRIP, 12, 4);
    glDrawArrays(GL_TRIANGLE_STRIP, 16, 4);
    glDrawArrays(GL_TRIANGLE_STRIP, 20, 4);
    glDisable(GL_TEXTURE_2D);
}
void drawToolBar()
{
    //drawTools();
    //return;
    if (_toolbarState == 0)
    {
        drawTools();
    }
    else if (_toolbarState == 2) // hiding
    {
        _toolbarVisiblePct -= TOOLINC;
        if (_toolbarVisiblePct <= 0.0)
        {
            _toolbarState = 1;
            _toolbarVisiblePct = 0.0;
        }
        else
        {
            glPushMatrix();
            glTranslatef(0.0, -(1 - _toolbarVisiblePct) * 50, 0);
            drawTools();
            glPopMatrix();
        }
    }
    else if (_toolbarState == 3) // showing
    {
        _toolbarVisiblePct += TOOLINC;
        if (_toolbarVisiblePct >= 1.0)
        {
            _toolbarState = 0;
            _toolbarVisiblePct = 1.0;
            drawTools();
        }
        else
        {
            glPushMatrix();
            glTranslatef(0.0, -(1 - _toolbarVisiblePct) * 50, 0);
            drawTools();
            glPopMatrix();
        }
    }
}

It looks like you're disabling texturing at the end of the drawTools() function with glDisable(GL_TEXTURE_2D). OpenGL is a state machine: if you disable a state it stays disabled until you enable it again, so re-enable GL_TEXTURE_2D (and rebind the texture you want) before you draw your background quad.
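For illustration, a minimal sketch of one way to structure the frame under that assumption - renderFrame, _backgroundTexture, backgroundQuad and backgroundTexCoords are placeholder names, not from the original code:
#include <OpenGLES/ES1/gl.h>

// Placeholders for whatever the app actually uses for its render-to-texture background.
extern GLuint _backgroundTexture;
extern const GLfloat backgroundQuad[8];
extern const GLfloat backgroundTexCoords[8];
extern void drawToolBar(void);

static void renderFrame(void)
{
    // Re-establish the state the background quad needs every frame, instead of
    // relying on whatever drawTools() or the particle code left behind.
    glEnable(GL_TEXTURE_2D);                          // drawTools() disabled this
    glBindTexture(GL_TEXTURE_2D, _backgroundTexture); // drawTools() left _buttontexture bound
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, backgroundQuad);
    glTexCoordPointer(2, GL_FLOAT, 0, backgroundTexCoords);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    // The toolbar (and later the particles) can now change state freely,
    // because the next frame sets up everything it needs again before drawing.
    drawToolBar();
}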

Related

GL_POINT not rendering when center is off screen

I'm making an iPhone game that involves the use of GL_POINT to render a point. However, when the center of the point is off screen, I still want to draw whatever portion of the point that is still onscreen. Here is my code that I'm using to render the point.
-(void)render {
    if (!fill || !outline || !active || dead)
        return;
    NSLog(@"rendering");
    glPushMatrix();
    glLoadIdentity();
    glMultMatrixf(matrix);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_SMOOTH);
    glEnable(GL_POINT_SMOOTH);
    glPointSize(scale.x * 2);
    [outline render];
    glPointSize(2 * (scale.x - kLineWidth));
    [fill render];
    glPopMatrix();
}
Note that it logs "rendering" when it should be rendering, so this method is getting called properly.
The [outline render] and [fill render] methods look like this:
-(void)render {
    // load arrays into the engine
    glVertexPointer(vertexSize, GL_FLOAT, 0, vertexes);
    glEnableClientState(GL_VERTEX_ARRAY);
    glColorPointer(colorSize, GL_FLOAT, 0, colors);
    glEnableClientState(GL_COLOR_ARRAY);
    // render
    glDrawArrays(renderStyle, 0, vertexCount);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
}
I'm applying a "panning" effect using this code:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(-kScreenWidth/2.0 + xPan, kScreenWidth/2.0 + xPan, -kScreenHeight/2.0 + yPan, kScreenHeight/2.0 + yPan, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
but when the point's center is not on the screen (after panning with glOrthof), the whole point is not drawn. How can I have the point still render even when the center is not on the screen?
I don't believe there is anything you can do for an easy fix. Primitives are clipped before rasterization, so if that point lies outside the view frustum, it's not going to be rasterized, even if the rasterization would create fragments that do lie inside the view frustum.
Either switch to real quads built from GL_TRIANGLES (OpenGL ES has no GL_QUADS), or, if you really don't want to do that, render your points to an offscreen buffer slightly larger than the viewport and then blit the center of that image back onto the main frame.
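If you go the quad route, a minimal sketch under that assumption - drawPointAsQuad, cx, cy and radius are hypothetical names, not from the question's code:
#include <OpenGLES/ES1/gl.h>

// Draws what used to be a GL_POINTS point as a small screen-space quad.
// cx, cy: the point's center in the same coordinate space as the rest of the
// scene; radius: half the desired point size.
static void drawPointAsQuad(GLfloat cx, GLfloat cy, GLfloat radius)
{
    const GLfloat quad[8] = {
        cx - radius, cy - radius,
        cx + radius, cy - radius,
        cx - radius, cy + radius,
        cx + radius, cy + radius,
    };

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, quad);
    // Triangles are clipped per fragment, so the on-screen portion still renders
    // even when (cx, cy) falls outside the view frustum.
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}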

iOS OpenGL too slow

I'm new to Xcode programming and I'm trying to create an iPhone game using OpenGL with support for retina display at 60 FPS, but it runs way too slow. I based it on the GLSprite example at developer.apple. I've already optimized it the best I could, but it keeps running < 30 FPS on the Simulator (I haven't tested it on a real device yet - maybe it's faster?).
The bottleneck appears to be drawing the polygons. I've used really small textures (256x256 PNG) and pixel formats (RGBA4444); I've disabled blending; I've moved all transformation code to the load phase hoping for better performance; everything to no success. I'm keeping a vertex array that stores everything for that step, then drawing with GL_TRIANGLES in one function call, because I think that's faster than calling glDrawArrays multiple times.
It starts lagging when I reach about 120 vertexes (6 for each rectangular sprite), but in many places I've read the iPhone can handle even millions of vertexes. What's wrong with the code below? Is OpenGL the fastest way to render graphics on the iPhone? If not, what else should I use?
OpenGL loading code, called just once, at the beginning:
glViewport(0, 0, backingWidth, backingHeight);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
glMatrixMode(GL_MODELVIEW);
glClear(GL_COLOR_BUFFER_BIT);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glBindTexture(GL_TEXTURE_2D,texture[0]); //Binds a texture loaded previously with the code given below
glVertexPointer(3, GL_FLOAT, 0, vertexes); //The array holding the vertexes
glEnableClientState(GL_VERTEX_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, uvCoord); //The array holding the uv coordinates
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
The texture loading method:
- (void)loadSprite:(NSString*)filename intoPos:(int)pos { //Loads a texture within the bundle, at the given position in an array storing all textures (but I actually just use one at a time)
    CGImageRef spriteImage;
    CGContextRef spriteContext;
    GLubyte *spriteData;
    size_t width, height;
    // Sets up matrices and transforms for OpenGL ES
    glViewport(0, 0, backingWidth, backingHeight);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
    glMatrixMode(GL_MODELVIEW);
    // Clears the view with black
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    // Sets up pointers and enables states needed for using vertex arrays and textures
    glVertexPointer(2, GL_FLOAT, 0, spriteVertices);
    glEnableClientState(GL_VERTEX_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, spriteTexcoords);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    // Creates a Core Graphics image from an image file
    spriteImage = [UIImage imageNamed:filename].CGImage;
    // Get the width and height of the image
    width = CGImageGetWidth(spriteImage);
    height = CGImageGetHeight(spriteImage);
    textureWidth[pos] = width;
    textureHeight[pos] = height;
    NSLog(@"Width %lu; Height %lu", width, height);
    // Texture dimensions must be a power of 2. If you write an application that allows users to supply an image,
    // you'll want to add code that checks the dimensions and takes appropriate action if they are not a power of 2.
    if (spriteImage) {
        // Allocate the memory needed for the bitmap context
        spriteData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
        // Uses the bitmap creation function provided by the Core Graphics framework.
        spriteContext = CGBitmapContextCreate(spriteData, width, height, 8, width * 4, CGImageGetColorSpace(spriteImage), kCGImageAlphaPremultipliedLast);
        // After you create the context, you can draw the sprite image to the context.
        CGContextDrawImage(spriteContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), spriteImage);
        // You don't need the context at this point, so you need to release it to avoid memory leaks.
        CGContextRelease(spriteContext);
        // Use OpenGL ES to generate a name for the texture.
        glGenTextures(1, &texture[pos]);
        // Bind the texture name.
        glBindTexture(GL_TEXTURE_2D, texture[pos]);
        curTexture = pos;
        if (1) { // This should convert the pixel format
            NSLog(@"convert to 4444");
            void* tempData;
            unsigned int* inPixel32;
            unsigned short* outPixel16;
            tempData = malloc(height * width * 2);
            inPixel32 = (unsigned int*)spriteData;
            outPixel16 = (unsigned short*)tempData;
            NSUInteger i;
            for(i = 0; i < width * height; ++i, ++inPixel32)
                *outPixel16++ = ((((*inPixel32 >> 0) & 0xFF) >> 4) << 12) | ((((*inPixel32 >> 8) & 0xFF) >> 4) << 8) | ((((*inPixel32 >> 16) & 0xFF) >> 4) << 4) | ((((*inPixel32 >> 24) & 0xFF) >> 4) << 0);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, tempData);
            free(tempData);
        } else {
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
        }
        // Set the minification and magnification filters (nearest-neighbor here)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        // Specify a 2D texture image, providing a pointer to the image data in memory
        //glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
        // Release the image data
        free(spriteData);
        // Enable use of the texture
        glEnable(GL_TEXTURE_2D);
        // Set a blending function to use
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        // Enable blending
        glEnable(GL_BLEND);
    }
}
The actual drawing code that is called every game loop:
glDrawArrays(GL_TRIANGLES, 0, vertexIndex); //vertexIndex is the maximum number of vertexes at this loop
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
According to the OpenGL programming guide for iOS:
Important: Rendering performance of OpenGL ES in Simulator has no relation to the performance of OpenGL ES on an actual device. Simulator provides an optimized software rasterizer that takes advantage of the vector processing capabilities of your Macintosh computer. As a result, your OpenGL ES code may run faster or slower in iOS Simulator (depending on your computer and what you are drawing) than on an actual device. Always profile and optimize your drawing code on a real device and never assume that Simulator reflects real-world performance.
The Simulator is not a reliable way to profile the performance of OpenGL applications. You'll need to run and profile on the real hardware.
It starts lagging when I reach about 120 vertexes (6 for each rectangular sprite), but in many places I've read the iPhone can handle even millions of vertexes.
To elaborate a bit on this comment of yours: the number of vertices is not the only variable impacting OpenGL performance. For example, with only a single triangle (3 vertices) you can draw pixels over the whole screen, which obviously needs more computation than drawing a small triangle covering only a few pixels. The metric representing the capacity to draw many pixels is known as fill rate.
If your vertices represent large triangles on screen, it is probable that fill rate is your performance bottleneck, not vertex transform. As the iOS Simulator uses a software rasterizer, albeit an optimized one, it is probably slower than actual specialized hardware.
You should profile your application to find out what your actual performance bottleneck is before optimizing; this document can help you.
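If it helps, here is a rough sketch of timing the CPU side of a frame once you are on the device - drawFrame is a placeholder for the existing glDrawArrays/presentRenderbuffer code, and this measures CPU time only, not GPU time (use Instruments for that):
#include <mach/mach_time.h>
#include <stdint.h>
#include <stdio.h>

extern void drawFrame(void);   // placeholder for the existing draw + present code

// Converts a mach_absolute_time() interval to milliseconds.
static double elapsedMs(uint64_t start, uint64_t end)
{
    static mach_timebase_info_data_t timebase;
    if (timebase.denom == 0)
        mach_timebase_info(&timebase);
    return (double)(end - start) * timebase.numer / timebase.denom / 1.0e6;
}

static void drawFrameTimed(void)
{
    uint64_t t0 = mach_absolute_time();
    drawFrame();
    uint64_t t1 = mach_absolute_time();
    // GL calls are queued asynchronously, so this is only the CPU cost of
    // submitting the frame; GPU-side numbers come from Instruments on the device.
    printf("frame: %.2f ms\n", elapsedMs(t0, t1));
}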

Want to add color to a rectangle which I draw using an OpenGL function

I need to know how to add color to the rectangle drawn with the method below (which I took from a sample here). It's done by setting the OpenGL color to some color, but I don't know how to do it. Some help would be appreciated.
-(void) ccDrawFilledRect
{
    HelloWorld *gs = [(swipeAppDelegate*)[[UIApplication sharedApplication] delegate] gameScene];
    CGPoint poli[] = {gs.StartPoint, CGPointMake(gs.StartPoint.x, gs.EndPoint.y), gs.EndPoint, CGPointMake(gs.EndPoint.x, gs.StartPoint.y)};
    glDisable(GL_TEXTURE_2D);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, poli);
    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
    NSLog(@"openGL rectangles drawn !!");
    // restore default state
    glEnableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnable(GL_TEXTURE_2D);
}
NeHe has a lot of easy OpenGL tutorials. This one is on adding color: http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=03 . Make sure you have colors and color buffers enabled.
Have you tried calling glColor4f(1.0f, 0.0f, 0.0f, 1.0f) before glVertexPointer? (OpenGL ES doesn't have glColor3d.)
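For instance, a minimal sketch of the same draw with a flat red fill, assuming it sits inside the ccDrawFilledRect method above (the color values are only an example):
// With GL_COLOR_ARRAY disabled, the current color set by glColor4f
// applies to every vertex of the fan.
glColor4f(1.0f, 0.0f, 0.0f, 1.0f);   // opaque red
glVertexPointer(2, GL_FLOAT, 0, poli);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);   // restore white so later textured draws aren't tinted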

Transparency/Blending issues with OpenGL ES/iPhone

I have a simple 16x16 particle that goes from being opaque to transparent. Unfortunately it appears different in my iPhone port and I can't see where the differences in the code are. Most of the code is essentially the same.
I've uploaded an image to here to show the problem
The particle on the left is the incorrectly rendered iPhone version and the right is how it appears on Mac and Windows. It's just a simple RGBA .png file.
I've tried numerous blend functions and glTexEnv setting but I can't seem to make them the same.
Just to be thorough, my Texture loading code on the iPhone looks like the following
GLuint TextureLoader::LoadTexture(const char *path)
{
    NSString *macPath = [NSString stringWithCString:path length:strlen(path)];
    GLuint texture = 0;
    CGImageRef textureImage = [UIImage imageNamed:macPath].CGImage;
    if (textureImage == nil)
    {
        NSLog(@"Failed to load texture image");
        return 0;
    }
    NSInteger texWidth = CGImageGetWidth(textureImage);
    NSInteger texHeight = CGImageGetHeight(textureImage);
    GLubyte *textureData = new GLubyte[texWidth * texHeight * 4];
    memset(textureData, 0, texWidth * texHeight * 4);
    CGContextRef textureContext = CGBitmapContextCreate(textureData, texWidth, texHeight, 8, texWidth * 4, CGImageGetColorSpace(textureImage), kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(textureContext, CGRectMake(0.0, 0.0, (float)texWidth, (float)texHeight), textureImage);
    CGContextRelease(textureContext);
    // Make a texture ID, bind it, create it
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
    delete[] textureData;
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return texture;
}
The blend function I use is glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
I'll try any ideas people throw at me, because this has been a bit of a mystery to me.
Cheers.
This looks like the standard "textures are converted to premultiplied alpha" problem.
You can use
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
or you can write custom loading code to avoid the premultiplication.
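For reference, a minimal sketch of that blend setup for the premultiplied data that CGBitmapContextCreate produces (standard GL ES 1.1 calls, nothing specific to this project):
// kCGImageAlphaPremultipliedLast stores r*a, g*a, b*a, a, so the source color is
// already scaled by alpha - don't multiply by GL_SRC_ALPHA a second time.
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);   // premultiplied-alpha "over" blending
// If you also tint with glColor4f under this blend mode, premultiply the tint too,
// e.g. glColor4f(r * a, g * a, b * a, a).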
Call me naive, but seeing that premultiplying an image requires (a*r, a*g, a*b, a), I figured I'd just divide the rgb values by a.
Of course as soon as the alpha value is larger than the r, g, b components, the particle texture became black. Oh well. Unless I can find a different image loader to the one above, then I'll just make all the rgb components 0xff (white). This is a good temporary solution for me because I either need a white particle or just colourise it in the application. Later on I might just make raw rgba files and read them in, because this is mainly for very small 16x16 and smaller particle textures.
I can't use Premultiplied textures for the particle system because overlapping multiple particle textures saturates the colours way too much.

Animating a texture across a surface in OpenGL

I'm working with the iPhone OpenGLES implementation and I wish to endlessly scroll a texture across a simple surface (two triangles making up a rectangle). This should be straightforward, but it's not something I've done before and I must be missing something. I can rotate the texture fine, but translate does not work at all. Do I have a minor implementation issue or am I doing something fundamentally wrong?
// move texture
glMatrixMode(GL_TEXTURE);
glPushMatrix();
glLoadIdentity();
// increment offset - no reset for demo purposes
wallOffset += 1.0;
// move the texture - this does not work
glTranslatef(wallOffset,wallOffset,0.0);
// rotate the texture - this does work
//glRotatef(wallOffset, 1.0, 0.0, 0.0);
glMatrixMode(GL_MODELVIEW);
glBindTexture(GL_TEXTURE_2D, WallTexture.name);
glTexCoordPointer(2, GL_FLOAT, 0, coordinates);
// simple drawing code
glNormalPointer(GL_FLOAT, 0, normals);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// push matrix back
glMatrixMode(GL_TEXTURE);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
You're incrementing your texture offset by 1.0f, but texture coordinates are considered in the range [0, 1], so you're not actually changing the texture coordinates (assuming you've enabled some sort of wrapping).
Try changing that increment (try 0.01f, or something that depends on the framerate) and see if it works. If not, it may have something to do with the texture parameters you've got enabled.
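A minimal sketch of that setup, assuming a power-of-two texture - scrollWallTexture is a hypothetical helper and the 0.01f speed is only an example; WallTexture.name and wallOffset are from the question:
#include <math.h>
#include <OpenGLES/ES1/gl.h>

// Called once per frame before drawing the wall rectangle.
// textureName is WallTexture.name; *offset is the persistent wallOffset.
static void scrollWallTexture(GLuint textureName, GLfloat *offset)
{
    // Texture coordinates outside [0, 1] only tile if wrapping is enabled
    // (GL_REPEAT requires a power-of-two texture in OpenGL ES 1.1).
    glBindTexture(GL_TEXTURE_2D, textureName);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

    // Advance by a fraction of the texture per frame, not a whole tile.
    *offset = fmodf(*offset + 0.01f, 1.0f);   // example speed: 1% of the texture per frame

    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glTranslatef(*offset, 0.0f, 0.0f);        // horizontal scroll only
    glMatrixMode(GL_MODELVIEW);
    // ...then draw the rectangle exactly as in the question.
}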