Why is glClear() so slow with point sprites on iPhone?

I am trying to draw point sprites with OpenGL ES on iPhone. There could be very many of them (around 1000), each up to 64 pixels wide (maybe that's my problem right there - is there a limit, or could I be using too much memory?).
I am using CADisplayLink to time the frames. What happens is that the first gl drawing function tends to delay or stall when either the point count is too high or when the point size is too big. In my example below, glClear() is the first drawing function, and it can take anywhere from 0.02 seconds to 0.2 seconds to run. If I simply comment out glClear, glDrawArrays becomes the slow function (it runs very fast otherwise).
This example is what I've stripped my code down to in order to isolate the problem. It simply draws a bunch of point sprites, with no texture, all in the same spot. I am using VBOs to store all the sprite data (position, color, size). It may seem like overkill for this example, but of course I intend to modify that data later.
This is the view's init function (minus the boilerplate gl setup):
glDisable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE);
glDisable(GL_LIGHTING);
glDisable(GL_FOG);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendEquationOES(GL_FUNC_ADD_OES);
glClearColor(0.0, 0.0, 0.0, 0.0);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glTexEnvi(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glEnable(GL_POINT_SPRITE_OES);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_POINT_SIZE_ARRAY_OES);
glEnableClientState(GL_COLOR_ARRAY);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
glEnable(GL_POINT_SMOOTH);
glGenBuffers(1, &vbo); // vbo is an instance variable
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glMatrixMode(GL_PROJECTION);
glOrthof(0.0, [self frame].size.width, 0.0, [self frame].size.height, 1.0f, -1.0f);
glViewport(0, 0, [self frame].size.width, [self frame].size.height);
glMatrixMode(GL_MODELVIEW);
glTranslatef(0.0f, [self frame].size.height, 0.0f);
glScalef(1.0f, -1.0f, 1.0f);
And this is the rendering function:
- (void)render
{
    glClear(GL_COLOR_BUFFER_BIT); // This function runs slowly!

    int pointCount = 1000;

    // fyi...
    // typedef struct {
    //     CGPoint point;
    //     CFTimeInterval time;
    //     GLubyte r, g, b, a;
    //     GLfloat size;
    // } MyPoint;

    glBufferData(GL_ARRAY_BUFFER, sizeof(MyPoint)*pointCount, NULL, GL_DYNAMIC_DRAW);
    MyPoint * vboBuffer = (MyPoint *)glMapBufferOES(GL_ARRAY_BUFFER, GL_WRITE_ONLY_OES);

    for (int i = 0; i < pointCount; i++) {
        vboBuffer[i].a = (GLubyte)0xFF;
        vboBuffer[i].r = (GLubyte)0xFF;
        vboBuffer[i].g = (GLubyte)0xFF;
        vboBuffer[i].b = (GLubyte)0xFF;
        vboBuffer[i].size = 64.0;
        vboBuffer[i].point = CGPointMake(200.0, 200.0);
    }

    glUnmapBufferOES(GL_ARRAY_BUFFER);

    glPointSizePointerOES(GL_FLOAT, sizeof(MyPoint), (void *)offsetof(MyPoint, size));
    glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(MyPoint), (void *)offsetof(MyPoint, r));
    glVertexPointer(2, GL_FLOAT, sizeof(MyPoint), (void *)offsetof(MyPoint, point));

    glDrawArrays(GL_POINTS, 0, pointCount);

    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
Why is the glClear function stalling? It doesn't just delay by random amounts - depending on the point count or size, it tends to delay in the same intervals (e.g. 0.015 sec, 0.030 sec, 0.045 sec, etc.). Also, something strange I noticed: if I switch to glBlendFunc(GL_ZERO, GL_ONE), it runs just fine (although that will not be the visual effect I'm after). Other glBlendFunc arguments change the speed as well - usually for the better. That makes me think it is not a memory issue, because blending has nothing to do with the VBO (right?).
I admit I am a bit new at OpenGL and may be misunderstanding basic concepts about VBOs or other things. Any help or guidance is greatly appreciated!

If glClear() is slow you might try drawing a large blank quad that completely covers the viewport area.
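For instance, here is a minimal sketch of that idea, assuming the same ortho projection and state as the init code above (w and h are placeholders for [self frame].size.width and .height, and vbo is the question's instance variable):

// Hedged sketch: "clear" by drawing an opaque fullscreen quad instead of glClear().
const GLfloat clearQuad[] = {
    0.0f, 0.0f,
    w,    0.0f,
    0.0f, h,
    w,    h,
};
glDisable(GL_BLEND);                   // opaque fill, no read-modify-write
glDisable(GL_TEXTURE_2D);              // no texture sampling for the fill
glDisableClientState(GL_COLOR_ARRAY);  // use a constant color instead
glColor4f(0.0f, 0.0f, 0.0f, 1.0f);     // same as the clear color
glBindBuffer(GL_ARRAY_BUFFER, 0);      // source vertices from client memory
glVertexPointer(2, GL_FLOAT, 0, clearQuad);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Restore the state the point sprites expect.
glEnable(GL_BLEND);
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_COLOR_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, vbo);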

Are you using VSync (or is it enabled)? The delay you're seeing might be related to the fact that the CPU and GPU run in parallel, so measuring the time of individual GL calls is meaningless.
If you're using VSync (or the GPU is heavily loaded), there might be some latency in the buffer-swap call, since some drivers busy-loop waiting for the vertical blank.
But first, consider that you should NOT time individual GL calls at all: most GL calls just set some state on the GPU or write to a command buffer, and the actual execution happens asynchronously.
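If you do want a meaningful per-frame number, drain the pipeline before stopping the clock. A minimal sketch, assuming CADisplayLink drives render as in the question:

// Hedged sketch: time the whole frame rather than one call. glFinish()
// blocks until every queued command has executed, so the elapsed time
// reflects the real GPU work (use it for measurement only - it is slow).
CFTimeInterval start = CACurrentMediaTime();
[self render];   // issues glClear, glDrawArrays, presentRenderbuffer...
glFinish();      // force all queued GL commands to complete
NSLog(@"frame took %.1f ms", (CACurrentMediaTime() - start) * 1000.0);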

Related

GL_POINT not rendering when center is off screen

I'm making an iPhone game that involves the use of GL_POINT to render a point. However, when the center of the point is off screen, I still want to draw whatever portion of the point that is still onscreen. Here is my code that I'm using to render the point.
-(void)render {
    if (!fill || !outline || !active || dead)
        return;
    NSLog(@"rendering");
    glPushMatrix();
    glLoadIdentity();
    glMultMatrixf(matrix);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_SMOOTH);
    glEnable(GL_POINT_SMOOTH);
    glPointSize(scale.x*2);
    [outline render];
    glPointSize(2*(scale.x-kLineWidth));
    [fill render];
    glPopMatrix();
}
note that it logs "rendering" when it should be rendering, so this method is getting called properly.
and the [outline render] and [fill render] methods look like this
-(void)render {
    // load arrays into the engine
    glVertexPointer(vertexSize, GL_FLOAT, 0, vertexes);
    glEnableClientState(GL_VERTEX_ARRAY);
    glColorPointer(colorSize, GL_FLOAT, 0, colors);
    glEnableClientState(GL_COLOR_ARRAY);
    //render
    glDrawArrays(renderStyle, 0, vertexCount);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
}
and I'm using a "panning" effect using this code
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(-kScreenWidth/2.0 + xPan, kScreenWidth/2.0 + xPan, -kScreenHeight/2.0 + yPan, kScreenHeight/2.0 + yPan, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
but when the point's center is not on the screen (after panning with glOrthof), the whole point is not drawn. How can I have the point still render even when the center is not on the screen?
I don't believe there is anything you can do for an easy fix. Primitives are clipped before rasterization, so if that point lies outside the view frustum, it's not going to be rasterized, even if the rasterization would create fragments that do lie inside the view frustum.
Either switch to real quads built from GL_TRIANGLES (OpenGL ES has no GL_QUADS), or, if you really don't want to do that, you can render your points to an offscreen buffer slightly larger than the viewport, and then blit the center of that image back onto the main frame.
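A minimal sketch of the first option, using ES 1.1 client arrays as in the question; cx, cy (the point center) and r (half the point size) are hypothetical values, and a round-dot texture is assumed to be bound:

// Hedged sketch: emulate a point sprite with two triangles, so a partially
// off-screen point still rasterizes the fragments that remain on screen.
const GLfloat quadVerts[] = {
    cx - r, cy - r,
    cx + r, cy - r,
    cx - r, cy + r,
    cx + r, cy + r,
};
const GLfloat quadTexCoords[] = {  // stretch the full texture across the quad
    0.0f, 0.0f,
    1.0f, 0.0f,
    0.0f, 1.0f,
    1.0f, 1.0f,
};
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, quadVerts);
glTexCoordPointer(2, GL_FLOAT, 0, quadTexCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);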

iOS OpenGL too slow

I'm new to Xcode programming and I'm trying to create an iPhone game using OpenGL, with support for retina display at 60 FPS, but it runs way too slow. I based it on the GLSprite example at developer.apple.com. I've already optimized it as best I could, but it keeps running < 30 FPS in the Simulator (I haven't tested it on a real device yet - maybe it's faster there?). The bottleneck appears to be drawing the polygons. I've used really small textures (256x256 PNG) and pixel formats (RGBA4444); I've disabled blending; I've moved all transformation code to the load phase hoping for better performance; all to no avail. I'm keeping a vertex array that stores everything for that step, then drawing with a single GL_TRIANGLES call - because I think that's faster than making multiple glDrawArrays calls. It starts lagging when I reach about 120 vertexes (6 for each rectangular sprite), but in many places I've read the iPhone can handle even millions of vertexes. What's wrong with the code below? Is OpenGL the fastest way to render graphics on the iPhone? If not, what else should I use?
OpenGL loading code, called just once, at the beginning:
glViewport(0, 0, backingWidth, backingHeight);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
glMatrixMode(GL_MODELVIEW);
glClear(GL_COLOR_BUFFER_BIT);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glBindTexture(GL_TEXTURE_2D,texture[0]); //Binds a texture loaded previously with the code given below
glVertexPointer(3, GL_FLOAT, 0, vertexes); //The array holding the vertexes
glEnableClientState(GL_VERTEX_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, uvCoord); //The array holding the uv coordinates
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
The texture loading method:
- (void)loadSprite:(NSString*)filename intoPos:(int)pos { // Loads a texture within the bundle, at the given position in an array storing all textures (but I actually just use one at a time)
    CGImageRef spriteImage;
    CGContextRef spriteContext;
    GLubyte *spriteData;
    size_t width, height;

    // Sets up matrices and transforms for OpenGL ES
    glViewport(0, 0, backingWidth, backingHeight);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
    glMatrixMode(GL_MODELVIEW);

    // Clears the view with black
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);

    // Sets up pointers and enables states needed for using vertex arrays and textures
    glVertexPointer(2, GL_FLOAT, 0, spriteVertices);
    glEnableClientState(GL_VERTEX_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, spriteTexcoords);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    // Creates a Core Graphics image from an image file
    spriteImage = [UIImage imageNamed:filename].CGImage;

    // Get the width and height of the image
    width = CGImageGetWidth(spriteImage);
    height = CGImageGetHeight(spriteImage);
    textureWidth[pos] = width;
    textureHeight[pos] = height;
    NSLog(@"Width %lu; Height %lu", width, height);

    // Texture dimensions must be a power of 2. If you write an application that allows users to supply an image,
    // you'll want to add code that checks the dimensions and takes appropriate action if they are not a power of 2.
    if (spriteImage) {
        // Allocate the memory needed for the bitmap context
        spriteData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
        // Uses the bitmap creation function provided by the Core Graphics framework.
        spriteContext = CGBitmapContextCreate(spriteData, width, height, 8, width * 4, CGImageGetColorSpace(spriteImage), kCGImageAlphaPremultipliedLast);
        // After you create the context, you can draw the sprite image to the context.
        CGContextDrawImage(spriteContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), spriteImage);
        // You don't need the context at this point, so you need to release it to avoid memory leaks.
        CGContextRelease(spriteContext);
        // Use OpenGL ES to generate a name for the texture.
        glGenTextures(1, &texture[pos]);
        // Bind the texture name.
        glBindTexture(GL_TEXTURE_2D, texture[pos]);
        curTexture = pos;
        if (1) { // This converts the pixel format from RGBA8888 to RGBA4444
            NSLog(@"convert to 4444");
            void* tempData;
            unsigned int* inPixel32;
            unsigned short* outPixel16;
            tempData = malloc(height * width * 2);
            inPixel32 = (unsigned int*)spriteData;
            outPixel16 = (unsigned short*)tempData;
            NSUInteger i;
            // Keep the top 4 bits of each 8-bit channel and repack as RGBA4444
            for (i = 0; i < width * height; ++i, ++inPixel32)
                *outPixel16++ = ((((*inPixel32 >> 0)  & 0xFF) >> 4) << 12) | // R
                                ((((*inPixel32 >> 8)  & 0xFF) >> 4) << 8)  | // G
                                ((((*inPixel32 >> 16) & 0xFF) >> 4) << 4)  | // B
                                ((((*inPixel32 >> 24) & 0xFF) >> 4) << 0);   // A
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, tempData);
            free(tempData);
        } else {
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
        }
        // Set the texture parameters to use a minifying filter and a linear filter (weighted average)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        // Specify a 2D texture image, providing a pointer to the image data in memory
        //glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
        // Release the image data
        free(spriteData);
        // Enable use of the texture
        glEnable(GL_TEXTURE_2D);
        // Set a blending function to use
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        // Enable blending
        glEnable(GL_BLEND);
    }
}
The actual drawing code that is called every game loop:
glDrawArrays(GL_TRIANGLES, 0, vertexIndex); //vertexIndex is the maximum number of vertexes at this loop
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
According to the OpenGL ES Programming Guide for iOS:
Important: Rendering performance of OpenGL ES in Simulator has no relation to the performance of OpenGL ES on an actual device. Simulator provides an optimized software rasterizer that takes advantage of the vector processing capabilities of your Macintosh computer. As a result, your OpenGL ES code may run faster or slower in iOS simulator (depending on your computer and what you are drawing) than on an actual device. Always profile and optimize your drawing code on a real device and never assume that Simulator reflects real-world performance.
The simulator is not reliable to profile performance of OpenGL applications. You'll need to run/profile on the real hardware.
It starts lagging when I reach about 120 vertexes (6 for each rectangular sprite), but in many places I've read the iPhone can handle even millions of vertexes.
To elaborate a bit on this comment of yours: the number of vertices is not the only variable impacting OpenGL performance. For example, with only a single triangle (3 vertices) you can draw pixels covering the whole screen, and that obviously needs more computation than drawing a small triangle covering only a few pixels. The metric representing the capacity to draw many pixels is known as fill rate.
If your vertices represent large triangles on screen, it is probable that fill rate is your performance bottleneck, not vertex transform. And since the iOS simulator uses a software rasterizer, albeit an optimized one, it is probably slower than actual specialized hardware.
You should profile your application to find out what your actual performance bottleneck is before optimizing; this document can help you.

Distortion with 'pixel accurate' OpenGL rendering of sprites

To define what I'm trying to do: I want to be able to take an arbitrary 'sprite' image from a power-of-two-sized PNG, and display just the pixels of interest at a given x/y position on screen.
My results are the problem - major distortion - it looks awful! (Note: these screenshots are from the iPhone simulator, but they look just as junky on a real retina device.) Here is a screenshot of the source PNG in Preview, which looks wonderful (any variation on rendering that I describe in this question looks almost exactly like the junky one).
Previously, I've asked a question about displaying a non-power-of-2 texture as a sprite using OpenGL ES 2.0 (although this applies to any OpenGL). I'm close, but I have some issues that I can't resolve. I think there are probably multiple bugs. I suspect I'm basically aliasing what I'm displaying by rendering large and then squashing by 2x, or vice versa, but I can't see it. Additionally, there are off-by-one errors, and I cannot get a handle on them. I can't visually identify them occurring, but I know for sure they're there.
I'm working in 960 x 640 landscape (on iPhone4 retina display). So I expect 0->959 moves left to right, 0->639 moves bottom to top. (And I think I'm seeing opposite of this - but that's not what this question is about)
To make things easy what I'm trying to achieve in this test case is a FULL SCREEN 960x640 display of a PNG file. Just one of them. I display a red background first so that it's obvious if I'm covering the screen or not.
Update: I realized the glViewport call inside setFramebuffer was passing my width and height backwards. I noticed this because when I set my geometry to draw from 0,0 to 100,100 it drew a rectangle, not a square. When I swapped them, that call does draw a square. However, using that same call, my entire screen fills with a vertex range of 0,0 -> 480,320 (half 'resolution').. I don't understand that. But no matter where I push on from this, I'm still not getting a good-looking result.
Here's my vertex shader:
attribute vec4 a_position;
attribute vec2 a_texCoord;
varying vec2 v_texCoord;

// Gives 'landscape' full screen..
mat4 projectionMatrix = mat4(2.0/640.0, 0.0,       0.0, -1.0,
                             0.0,       2.0/960.0, 0.0, -1.0,
                             0.0,       0.0,      -1.0,  0.0,
                             0.0,       0.0,       0.0,  1.0);

// Gives a 1/4 of screen.. (not doing 2.0/.. was suggested in previous SO Q)
/*mat4 projectionMatrix = mat4(1.0/640.0, 0.0,       0.0, -1.0,
                               0.0,       1.0/960.0, 0.0, -1.0,
                               0.0,       0.0,      -1.0,  0.0,
                               0.0,       0.0,       0.0,  1.0); */

// Apply the projection matrix to the position and pass the texCoord
void main()
{
    gl_Position = a_position;
    gl_Position *= projectionMatrix;
    v_texCoord = a_texCoord;
}
Here's my fragment shader:
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D s_texture;

void main()
{
    gl_FragColor = texture2D(s_texture, v_texCoord);
}
Here's my draw code:
#define MYWIDTH 960.0f
#define MYHEIGHT 640.0f
// I have to refer to 'X' as height although I'd assume I use 'Y' here..
// I think my X and Y throughout this whole block of code is screwed up
// But, I have experimented flipping them all and verifying that if they
// Are taken from the way they're set now to swapping X and Y that things
// end up being turned the wrong way. So this is a mess, but unlikely my problem
#define BG_X_ORIGIN 0.0f
// ALSO NOTE HERE: I have to put my 'dest' at 640.0f.. --- see note [1] below
#define BG_X_DEST 640.0f
#define BG_Y_ORIGIN 0.0f
// --- see note [1] below
#define BG_Y_DEST 960.0f
// These are texturing coordinates, I texture starting at '0' px and then
// I calculate a percentage of the texture to use based on how many pixels I use
// divided by the actual size of the image (1024x1024)
#define BG_X_ZERO 0.0f
#define BG_Y_USEPERCENTAGE BG_X_DEST / 1023.0f
#define BG_Y_ZERO 0.0f
#define BG_X_USEPERCENTAGE BG_Y_DEST / 1023.0f
// glViewport(0, 0, MYWIDTH, MYHEIGHT);
// See note 2.. it sets glViewport basically, provided by Xcode project template
[(EAGLView *)self.view setFramebuffer];
// Big hack just to get things going - like I said before, these could be backwards
// w/respect to X and Y
static const GLfloat backgroundVertices[] = {
    BG_X_ORIGIN, BG_Y_ORIGIN,
    BG_X_DEST,   BG_Y_ORIGIN,
    BG_X_ORIGIN, BG_Y_DEST,
    BG_X_DEST,   BG_Y_DEST
};
static const GLfloat backgroundTexCoords[] = {
    BG_X_ZERO,          BG_Y_USEPERCENTAGE,
    BG_X_USEPERCENTAGE, BG_Y_USEPERCENTAGE,
    BG_X_ZERO,          BG_Y_ZERO,
    BG_X_USEPERCENTAGE, BG_Y_ZERO
};
// Turn on texturing
glEnable(GL_TEXTURE_2D);
// Clear to RED so that it's obvious when I'm not drawing my sprite on screen
glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// Texturing parameters - these make sense.. don't think they are the issue
glActiveTexture(GL_TEXTURE0);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);//GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);//GL_LINEAR);
// Update attribute values.
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, backgroundVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, 0, 0, backgroundTexCoords);
glEnableVertexAttribArray(ATTRIB_TEXCOORD);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, background->textureId);
// I don't understand what this uniform does in the texture2D call in shader.
glUniform1f(uniforms[UNIFORM_SAMPLERLOC], 0);
// Draw the geometry...
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// present the framebuffer see note [3]
[(EAGLView *)self.view presentFramebuffer];
Note [1]:
If I set BG_X_DEST to 639.0f I do not get full coverage of the 640 pixels - I get red showing through on the right-hand side. But this doesn't make sense to me: I'm aiming for pixel perfect, and I have to draw my sprite geometry from 0 to 640, which is 641 pixels, when I only have 640! (Screenshot: red line appearing with 639f instead of 640f.)
And if I set BG_Y_DEST to 959.0f I do not get the red line showing through.
(Screenshot: red line top bug appearing with 958f instead of 960 or 959f.)
This may be a good clue as to what bug(s) I have going on.
Note: [2] - included in the OpenGL ES 2 framework by Xcode
- (void)setFramebuffer
{
    if (context)
    {
        [EAGLContext setCurrentContext:context];
        if (!defaultFramebuffer)
            [self createFramebuffer];
        glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
        glViewport(0, 0, framebufferWidth, framebufferHeight);
    }
}
Note [3]: - included in the OpenGL ES 2 framework by Xcode
- (BOOL)presentFramebuffer
{
    BOOL success = FALSE;
    if (context)
    {
        [EAGLContext setCurrentContext:context];
        glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
        success = [context presentRenderbuffer:GL_RENDERBUFFER];
    }
    return success;
}
Note [4] - relevant image loading code (I have used PNG with and without alpha channel and actually it doesn't seem to make any difference... I also have tried to change my code up to be ARGB instead of RGBA and that's wrong - since A = 1.0 everywhere, I get a very RED image, which makes me think the RGBA is in fact valid and this code is right.): update: I have switched this texture loading to a completely different setup using CG/ImageIO calls and it looks identical to this so I assume it's not some kind of aliasing or color compression done by the image libraries (unless they both go to the same fundamental calls, which is possible..)
// Otherwise it isn't already loaded
glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);//GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);//GL_LINEAR);
// TODO Next 2 can prob go later on..
glGenTextures(1, &(newTexture->textureId)); // generate Texture
// Use this before 'drawing' the texture to the memory...
glBindTexture(GL_TEXTURE_2D, newTexture->textureId);

NSString *path = [[NSBundle mainBundle] pathForResource:[NSString stringWithUTF8String:newTexture->filename.c_str()] ofType:@"png"];
NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
UIImage *image = [[UIImage alloc] initWithData:texData];
if (image == nil)
    NSLog(@"Do real error checking here");

newTexture->width = CGImageGetWidth(image.CGImage);
newTexture->height = CGImageGetHeight(image.CGImage);

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc(newTexture->height * newTexture->width * 4);
CGContextRef myContext = CGBitmapContextCreate(imageData, newTexture->width, newTexture->height, 8,
                                               4 * newTexture->width, colorSpace,
                                               kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextClearRect(myContext, CGRectMake(0, 0, newTexture->width, newTexture->height));
CGContextDrawImage(myContext, CGRectMake(0, 0, newTexture->width, newTexture->height), image.CGImage);

// Texture is created!
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newTexture->width, newTexture->height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, imageData);

CGContextRelease(myContext);
free(imageData);
[image release];
[texData release];
[(EAGLView *)self.view setContentScaleFactor:2.0f];
By default, iPhone views are scaled to reach their high-resolution (retina) mode, which was destroying my image quality.
Thanks for all the help, folks.
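For context, a minimal sketch of where that call typically lives, assuming an EAGLView subclass like the one in the Xcode template (the initializer shown is an assumption, not the poster's actual code):

// Hedged sketch: opt the GL view into native retina resolution before the
// framebuffer is created, so the renderbuffer matches the 960x640 pixel grid.
- (id)initWithCoder:(NSCoder *)aDecoder
{
    if ((self = [super initWithCoder:aDecoder])) {
        // Without this, a retina iPhone backs the layer at 480x320 points
        // and upscales it, blurring 'pixel accurate' sprites.
        self.contentScaleFactor = [[UIScreen mainScreen] scale];
    }
    return self;
}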

glBindTexture Problem with OpenGL ES on iPhone

I'm new to OpenGL, and am having a curious problem with my textures - looking for a nudge in the right direction.
I have an app which uses a Render to texture technique for accomplishing a certain effect - it's working marvelously. I draw to an offscreen buffer every time I need to, and am able to use this as a texture in my render loop.
This texture is only updated when necessary - it's drawn to the screen as is most frames.
I have some toolbars which are also drawn using OpenGL, on top of this surface using a texture atlas, and using blending.
I have recently begun trying to incorporate a particle system into the app, but whenever I try to render the particle system's graphics, I "lose" the texture I rendered in the first step - i.e. its contents disappear.
I have traced this to the call to glBindTexture that binds the texture of the particles.
EDIT: I can reproduce this in my simple toolbar drawing routine, code below. This is a crude routine that animates toolbar graphics on and off screen.
When I uncomment the first two lines in drawToolBar(), my rendered-in-memory texture disappears, i.e. the glDrawArrays call in my render loop renders nothing to the screen. Through testing, I have determined that the glBindTexture call is what triggers this. (For example, I can render colored quads over my texture, just not textured ones.)
However, everything is fine if I allow drawToolbar() to run as below - the only difference is that the eventual call to drawTools() is wrapped in glPush/Pop, and is translated.
Note that the toolbar rendering always works - there is some unintended side effect or state issue going on here that causes my background texture to disappear.
Any ideas are welcome - this is driving me nuts.
The code:
void drawTools()
{
    //*Texture Coordinate Stuff Snipped*//
    glBindTexture(GL_TEXTURE_2D, _buttontexture);
    glEnable(GL_TEXTURE_2D);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, bottomToolQuads);
    glTexCoordPointer(2, GL_FLOAT, 0, texc);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glDrawArrays(GL_TRIANGLE_STRIP, 4, 4);
    glDrawArrays(GL_TRIANGLE_STRIP, 8, 4);
    glDrawArrays(GL_TRIANGLE_STRIP, 12, 4);
    glDrawArrays(GL_TRIANGLE_STRIP, 16, 4);
    glDrawArrays(GL_TRIANGLE_STRIP, 20, 4);
    glDisable(GL_TEXTURE_2D);
}
void drawToolBar()
{
    //drawTools();
    //return;
    if (_toolbarState == 0)
    {
        drawTools();
    }
    else if (_toolbarState == 2) // hiding
    {
        _toolbarVisiblePct -= TOOLINC;
        if (_toolbarVisiblePct <= 0.0)
        {
            _toolbarState = 1;
            _toolbarVisiblePct = 0.0;
        }
        else
        {
            glPushMatrix();
            glTranslatef(0.0, -(1 - _toolbarVisiblePct) * 50, 0);
            drawTools();
            glPopMatrix();
        }
    }
    else if (_toolbarState == 3) // showing
    {
        _toolbarVisiblePct += TOOLINC;
        if (_toolbarVisiblePct >= 1.0)
        {
            _toolbarState = 0;
            _toolbarVisiblePct = 1.0;
            drawTools();
        }
        else
        {
            glPushMatrix();
            glTranslatef(0.0, -(1 - _toolbarVisiblePct) * 50, 0);
            drawTools();
            glPopMatrix();
        }
    }
}
Looks like you're disabling texture rendering at the end of the drawTools method. OpenGL is a state machine, if you disable a state it will stay disabled until you enable it again.
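A minimal sketch of the fix, under the assumption that the render loop draws the cached render-to-texture surface right after the toolbar (_backgroundTexture and the two arrays are hypothetical names):

// Hedged sketch: re-enable texturing and rebind before each textured draw,
// because drawTools() left GL_TEXTURE_2D disabled.
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, _backgroundTexture);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, backgroundQuad);
glTexCoordPointer(2, GL_FLOAT, 0, backgroundTexCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);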

Animating a texture across a surface in OpenGL

I'm working with the iPhone OpenGLES implementation and I wish to endlessly scroll a texture across a simple surface (two triangles making up a rectangle). This should be straightforward, but it's not something I've done before and I must be missing something. I can rotate the texture fine, but translate does not work at all. Do I have a minor implementation issue or am I doing something fundamentally wrong?
// move texture
glMatrixMode(GL_TEXTURE);
glPushMatrix();
glLoadIdentity();
// increment offset - no reset for demo purposes
wallOffset += 1.0;
// move the texture - this does not work
glTranslatef(wallOffset,wallOffset,0.0);
// rotate the texture - this does work
//glRotatef(wallOffset, 1.0, 0.0, 0.0);
glMatrixMode(GL_MODELVIEW);
glBindTexture(GL_TEXTURE_2D, WallTexture.name);
glTexCoordPointer(2, GL_FLOAT, 0, coordinates);
// simple drawing code
glNormalPointer(GL_FLOAT, 0, normals);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// push matrix back
glMatrixMode(GL_TEXTURE);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
You're incrementing your texture offset by 1.0f, but texture coordinates are considered in the range [0, 1]: shifting by a whole unit leaves the sampled texels unchanged (assuming you've enabled some sort of wrapping), so you're not actually changing what's drawn.
Try changing that increment (try .01f, or maybe something depending on the framerate) and see if it works. If not, then it may have something to do with the texture parameters you've got enabled.
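A minimal sketch of both suggestions together (the 0.002f step is an arbitrary example value; GL_REPEAT is the wrap mode that makes coordinates outside [0, 1] tile):

// Hedged sketch: make sure the texture wraps, then scroll by a small step.
glBindTexture(GL_TEXTURE_2D, WallTexture.name);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

glMatrixMode(GL_TEXTURE);
glLoadIdentity();
wallOffset += 0.002f;                        // a small fraction of [0, 1]
if (wallOffset > 1.0f) wallOffset -= 1.0f;   // keep the offset bounded
glTranslatef(wallOffset, 0.0f, 0.0f);        // scroll horizontally only
glMatrixMode(GL_MODELVIEW);

(Wrap modes are per-texture state, so they can also be set once at load time. Keep in mind that OpenGL ES 1.1 on the iPhone requires power-of-two texture dimensions, which GL_REPEAT tiling depends on.)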