iOS OpenGL ES 1.0, failed to render Texture2D - iPhone

I use OpenGL ES 1.0. After decoding the stream data, it is converted to RGBA bytes, and I pass those RGBA bytes to the render method as a parameter.
The render method is called once per frame, because the RGBA bytes change every time.
But it doesn't draw any picture frames. Only a white rectangle on a gray background is displayed. What is the problem?
[initialize]
- (id<ESRenderer>)init
{
    if (self = [super init])
    {
        context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
        if (!context || ![EAGLContext setCurrentContext:context])
        {
            [self release];
            return nil;
        }

        // Create default framebuffer object. The backing will be allocated
        // for the current layer in -resizeFromLayer.
        glGenFramebuffersOES(1, &defaultFramebuffer);
        glGenRenderbuffersOES(1, &colorRenderbuffer);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, defaultFramebuffer);
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);

        glEnable(GL_TEXTURE_2D);
        glGenTextures(1, &frameTexture);
        glBindTexture(GL_TEXTURE_2D, frameTexture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, frameTexture, 0);
        glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, colorRenderbuffer);
    }
    return self;
}
[Render method, called from outside once per frame]
static const GLfloat verticesForGL_TRIANGLE_STRIP[] = {
    -0.8,  0.8, 0.0,  // v1
     0.0,  1.0,       // UV1
    -0.8, -0.8, 0.0,  // v2
     0.0,  0.0,       // UV2
     0.8,  0.8, 0.0,  // v3
     1.0,  1.0,       // UV3
     0.8, -0.8, 0.0,  // v4
     1.0,  0.0,       // UV4
};
- (void)render:(uint8_t *)data
{
    if ([EAGLContext currentContext] != context)
        [EAGLContext setCurrentContext:context];

    glClearColor(0.4f, 0.4f, 0.4f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glViewport(0, 0, backingWidth, backingHeight);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrthof(-1.1f, 1.1f, -1.1f, 1.1f, -2.0f, 2.0f);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1280, 1024, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, frameTexture);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(GLfloat) * 5, verticesForGL_TRIANGLE_STRIP);
    glTexCoordPointer(2, GL_FLOAT, sizeof(GLfloat) * 5, &verticesForGL_TRIANGLE_STRIP[0] + 3);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    glDisable(GL_TEXTURE_2D);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
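For reference, the interleaved array above packs five floats per vertex (3 position + 2 UV), which is why both client-state pointers use a stride of sizeof(GLfloat)*5 and the texcoord pointer starts 3 floats in. A minimal sketch of that layout arithmetic (the helper names are mine, not from the original code):

```c
#include <stddef.h>

/* Each vertex in the strip above: 3 position floats followed by 2 UV floats. */
#define FLOATS_PER_VERTEX 5
#define TEXCOORD_OFFSET_FLOATS 3

/* Byte stride between consecutive vertices, as passed to glVertexPointer
   and glTexCoordPointer in the render method. */
static size_t vertex_stride_bytes(void)
{
    return sizeof(float) * FLOATS_PER_VERTEX;
}

/* Byte offset of the first texcoord relative to the array start,
   matching &verticesForGL_TRIANGLE_STRIP[0] + 3. */
static size_t texcoord_offset_bytes(void)
{
    return sizeof(float) * TEXCOORD_OFFSET_FLOATS;
}
```

If either stride were wrong, the UVs would be read from position data and the texture would appear scrambled rather than absent, so this part of the question's code is consistent.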

It seems to me you are attempting to load a non-power-of-two texture. This is not supported on all iOS devices under OpenGL ES 1.0/1.1. You can check whether the device supports the relevant extension:
const char *extensions = (const char *)glGetString(GL_EXTENSIONS);
bool npot = strstr(extensions, "GL_APPLE_texture_2D_limited_npot") != NULL;
if (!npot)
    NSLog(@"Non-power-of-two textures not supported.");
Also, try loading a power of two square texture and see if that works.
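This matches the question: 1024 is a power of two, but 1280 is not. If the extension is missing, one common workaround is to round each dimension up to the next power of two, allocate a texture that size, and upload the image into a sub-rectangle. A small helper for the rounding (a sketch; the function name is mine):

```c
#include <stdint.h>

/* Round v up to the next power of two (v > 0), using the classic
   bit-smearing trick: e.g. 1280 -> 2048, 1024 -> 1024. */
static uint32_t next_pow2(uint32_t v)
{
    v--;
    v |= v >> 1;
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    return v + 1;
}
```

With this, a 1280x1024 frame would live in a 2048x1024 texture, and the quad's UVs would be scaled to 1280/2048 horizontally.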

Oh, I've just solved it.
I added this line below the glTexImage2D call. (Sorry, I'm a beginner with OpenGL; it seems to link the framebuffer with the texture.)
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_FRAMEBUFFER_ATTACHMENT_TEXTURE_LEVEL_OES, GL_TEXTURE_2D, frameTexture, 0);
I use an iPhone 5 (armv7s). It is fast enough to decode and resize 1280x1024 pictures at 30 fps, but the rendering needed to move from the CPU to the GPU.
So now I get better performance (23 fps -> 30 fps).
Thanks.

Related

iPhone OpenGL texture not completely transparent

I tried to paint a transparent texture over a sphere, but the transparent areas are not completely transparent. A vivid shade of gray remains. I tried to load a Photoshop generated PNG then paint it on sphere using the code below:
My code to load textures:
- (void)loadPNGTexture:(int)index Name:(NSString *)name {
    CGImageRef imageRef = [UIImage imageNamed:[NSString stringWithFormat:@"%@.png", name]].CGImage;
    GLsizei width = CGImageGetWidth(imageRef);
    GLsizei height = CGImageGetHeight(imageRef);
    GLubyte *data = malloc(width * 4 * height);
    if (!data)
        NSLog(@"error allocating memory for texture loading!");
    else
        NSLog(@"Memory allocated for %@", name);
    NSLog(@"Width : %d, Height : %d", width, height);
    CGContextRef cg_context = CGBitmapContextCreate(data, width, height, 8, 4 * width, CGImageGetColorSpace(imageRef), kCGImageAlphaPremultipliedLast);
    CGContextSetBlendMode(cg_context, kCGBlendModeCopy);
    CGContextTranslateCTM(cg_context, 0, height);
    CGContextScaleCTM(cg_context, 1, -1);
    CGContextDrawImage(cg_context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(cg_context);
    glGenTextures(2, m_texture[index]);
    glBindTexture(GL_TEXTURE_2D, m_texture[index][0]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    free(data);
}
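One thing worth knowing about the loader above: kCGImageAlphaPremultipliedLast hands you pixels whose color channels have already been multiplied by their alpha, which interacts with the blend function chosen later. A scalar sketch of what premultiplication does to one 8-bit channel (the helper name is mine; the round-to-nearest detail is an assumption about the bitmap context):

```c
#include <stdint.h>

/* Premultiply one 8-bit color channel by an 8-bit alpha, rounding to
   nearest. CoreGraphics stores kCGImageAlphaPremultipliedLast bitmaps
   with channels already scaled this way. */
static uint8_t premultiply(uint8_t channel, uint8_t alpha)
{
    return (uint8_t)(((unsigned)channel * alpha + 127) / 255);
}
```

Because the data is premultiplied, the matching GL blend pair is glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); using GL_SRC_ALPHA as the source factor multiplies by alpha a second time, which darkens partially transparent texels toward gray.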
Drawing clouds:
glPushMatrix();
glTranslatef(0, 0, 3 );
glScalef(3.1, 3.1, 3.1);
glRotatef(-1, 0, 0, 1);
glRotatef(90, -1, 0, 0);
glDisable(GL_LIGHTING);
//Load Texture for left side of globe
glBindTexture(GL_TEXTURE_2D, m_texture[CLOUD_TEXTURE][0]);
glVertexPointer(3, GL_FLOAT, sizeof(TexturedVertexData3D), &VertexData[0].vertex);
glNormalPointer(GL_FLOAT, sizeof(TexturedVertexData3D), &VertexData[0].normal);
glTexCoordPointer(2, GL_FLOAT, sizeof(TexturedVertexData3D), &VertexData[0].texCoord);
// draw the sphere
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_COPY);
glDrawArrays(GL_TRIANGLES, 0, 11520);
glEnable(GL_LIGHTING);
glPopMatrix();
The first thing that stands out in your code is the line:
glBlendFunc(GL_SRC_ALPHA, GL_COPY);
The second argument (GL_COPY) is not a valid argument for glBlendFunc.
You might want to change it to something along the lines of
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
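For intuition, that blend pair computes the classic "over" operator per channel: result = src * alpha + dst * (1 - alpha). A scalar sketch of the equation (function name is mine):

```c
/* "Over" blend for one color channel, values in [0, 1]:
   result = src * alpha + dst * (1 - alpha),
   i.e. glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). */
static float blend_over(float src, float dst, float src_alpha)
{
    return src * src_alpha + dst * (1.0f - src_alpha);
}
```

Note that if the texture data is premultiplied (as CoreGraphics bitmaps typically are), glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) is the matching pair instead, since the src term already carries the alpha factor.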

OpenGL OES iPhone glCopyTexImage2D

I am new to OpenGL ES on the iPhone and have a memory issue with glCopyTexImage2D. As far as I understand, this function should copy the current framebuffer into the bound texture. But for some reason it always allocates new memory, which I can see in Instruments when checking allocations.
My goal is to read texture images and draw on them; after drawing, I want to save the new texture so I can scroll through the painting. So here is my code:
1) Initialize OpenGL and the framebuffer:
context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
if (!context || ![EAGLContext setCurrentContext:context]) {
    [self release];
    return nil;
}
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_SRC_COLOR);
// Setup OpenGL states
glMatrixMode(GL_PROJECTION);
CGRect frame = self.bounds;
CGFloat scale = self.contentScaleFactor;
// Setup the view port in Pixels
glOrthof(0, frame.size.width * scale, 0, frame.size.height * scale, -1, 1);
glViewport(0, 0, frame.size.width, frame.size.height * scale);
glDisable(GL_DEPTH_TEST);
glDisable(GL_DITHER);
glMatrixMode(GL_MODELVIEW);
glEnableClientState(GL_VERTEX_ARRAY);
// Set a blending function appropriate for premultiplied alpha pixel data
glEnable(GL_POINT_SPRITE_OES);
glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glPointSize(64 / kBrushScale);
2) Now I load the saved images into textures:
if ([[NSFileManager defaultManager] fileExistsAtPath:path]) {
    // load texture
    NSData *data = [[NSData alloc] initWithContentsOfFile:path];
    glGenTextures(1, &drawBoardTextures[i]);
    glBindTexture(GL_TEXTURE_2D, drawBoardTextures[i]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 1024, 0, GL_RGBA, GL_UNSIGNED_BYTE, [data bytes]);
    // free memory
    [data release];
}
3) And finally, render the textures:
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
int width = 1024;
GLfloat quad[] = {0.0,1024.0,1024.0,1024.0,0.0,0.0,1024.0,0.0};
GLfloat quadTex[] = {0.0,1.0,1.0,1.0,0.0,0.0,1.0,0.0};
for (int i = 0; i < 10; i++) {
    quad[0] = width * i;
    quad[2] = quad[0] + width;
    quad[4] = quad[0];
    quad[6] = quad[2];
    glBindTexture(GL_TEXTURE_2D, drawBoardTextures[i]);
    glVertexPointer(2, GL_FLOAT, 0, quad);
    glTexCoordPointer(2, GL_FLOAT, 0, quadTex);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glBindTexture(GL_TEXTURE_2D, 0);
}
4) So far everything works fine: with glTranslatef I can scroll through the textures, and no allocations show up in Instruments yet. Now I draw into the current window and want to save the result as follows:
int texIndex = offset.x/1024;
float diff = offset.x - (1024*texIndex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_2D, drawBoardTextures[texIndex]);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, diff, 0, 0, 0, 1024-diff, 1024);
glBindTexture(GL_TEXTURE_2D, drawBoardTextures[texIndex + 1]);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1024-diff, 0, diff, 1024);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glFlush();
Now the problems start. Instead of writing directly into the generated textures, it writes into client memory: every copied texture uses ~4 MB of RAM, though recopying the same texture doesn't allocate any more. I really don't know what I did wrong.
Does anyone know what the problem is? Thanks a lot for your help.
cheers
chris
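As an aside, the ~4 MB per texture reported above is exactly the cost of one 1024x1024 RGBA8 texture level, so the allocations Instruments shows are plausibly just the driver lazily creating backing storage for each texture the first time it is written, rather than a leak. The arithmetic:

```c
#include <stddef.h>

/* Size in bytes of a single RGBA8 (4 bytes per texel) texture level. */
static size_t rgba8_texture_bytes(size_t width, size_t height)
{
    return width * height * 4;
}
```

1024 * 1024 * 4 = 4,194,304 bytes, i.e. 4 MiB, which matches the observation that each texture costs ~4 MB once and recopies cost nothing further.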

how to draw a semi-transparent triangle to back-buffer then render it?

I'm trying to draw a line with a semi-transparent color into an offscreen framebuffer, copy it into a texture with glCopyTexSubImage2D, and then draw that texture to the onscreen framebuffer.
I tried many configurations but only ever got an opaque line.
For more information, this is how I set up my OpenGL:
First, I subclass EAGLView, then add an offscreen framebuffer:
glGenFramebuffersOES(1, &offscreenFramebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, offscreenFramebuffer);
glGenRenderbuffersOES(1, &offscreenRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, offscreenRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGBA8_OES, kTextureOriginalSize, kTextureOriginalSize);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, offscreenRenderbuffer);
glGenRenderbuffersOES(1, &offscreenDepthBuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, offscreenDepthBuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, kTextureOriginalSize, kTextureOriginalSize);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, offscreenDepthBuffer);
// Offscreen framebuffer texture target
glGenTextures(1, &offscreenRenderTexture);
glBindTexture(GL_TEXTURE_2D, offscreenRenderTexture);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
unsigned char * data = (unsigned char *)malloc( kTextureOriginalSize * kTextureOriginalSize * 4 );
memset( data,0xff, kTextureSizeWidth * kTextureSizeHeight * 4 );
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, kTextureOriginalSize, kTextureOriginalSize, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, offscreenRenderTexture, 0);
glEnable( GL_TEXTURE_2D);
glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glEnableClientState(GL_VERTEX_ARRAY);
glEnable(GL_BLEND);
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
And copy data from offscreen framebuffer to texture
glBindTexture( GL_TEXTURE_2D, offscreenRenderTexture);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, kTextureOriginalSize, kTextureOriginalSize);
I used glColor4f to change the alpha of the color before drawing.
Thanks for your help!
I found the answer:
First, I need to set the alpha very small, like 0.1, to see the effect.
Second, glBlendFunc() has effect only on the active framebuffer, so I need to call glBlendFunc() for both the offscreen and onscreen framebuffers. Or actually, what I do is disable GL_BLEND when rendering the texture to the onscreen framebuffer.
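One way to see why disabling GL_BLEND for the onscreen pass helps: if blending runs both when drawing into the offscreen texture and again when compositing that texture onscreen, the source alpha is effectively applied twice, so the line comes out fainter (or, with other factors, wrong) compared to a single blend. A simplified scalar sketch over a black background (function names are mine, and this ignores how the texture's stored alpha feeds the second pass):

```c
/* One "over" blend step: dst = src * a + dst * (1 - a). */
static float over(float src, float dst, float a)
{
    return src * a + dst * (1.0f - a);
}

/* Applying the same alpha in two passes attenuates the color twice:
   a white pixel (1.0) at alpha 0.5 over black ends at 0.25, not 0.5. */
static float blended_twice(float src, float a)
{
    float offscreen = over(src, 0.0f, a); /* pass 1: into the texture  */
    return over(offscreen, 0.0f, a);      /* pass 2: onto the screen   */
}
```

Disabling GL_BLEND for the onscreen quad makes the second pass a plain copy, so only the intended single blend from the offscreen pass survives.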

Draw a line on top of triangles

I created a new iPhone OpenGL Project in Xcode. I filled my background with triangles and gave them a texture, see below:
CGImageRef spriteImage;
CGContextRef spriteContext;
GLubyte *spriteData;
size_t width, height;
// Sets up matrices and transforms for OpenGL ES
glViewport(0, 0, backingWidth, backingHeight);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
//glRotatef(-90,0,0,1);
glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
// Clears the view with black
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// Sets up pointers and enables states needed for using vertex arrays and textures
glVertexPointer(2, GL_FLOAT, 0, vertices);
glEnableClientState(GL_VERTEX_ARRAY);
//glColorPointer(4, GL_FLOAT, 0, triangleColors);
//glColor4f(0.0f,1.0f,0.0f,1.0f);
//glEnableClientState(GL_COLOR_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, spriteTexcoords);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
// Creates a Core Graphics image from an image file
spriteImage = [UIImage imageNamed:@"Bild.png"].CGImage;
// Get the width and height of the image
width = CGImageGetWidth(spriteImage);
height = CGImageGetHeight(spriteImage);
// Texture dimensions must be a power of 2. If you write an application that allows users to supply an image,
// you'll want to add code that checks the dimensions and takes appropriate action if they are not a power of 2.
if (spriteImage) {
    // Allocate the memory needed for the bitmap context
    spriteData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
    // Uses the bitmap creation function provided by the Core Graphics framework.
    spriteContext = CGBitmapContextCreate(spriteData, width, height, 8, width * 4, CGImageGetColorSpace(spriteImage), kCGImageAlphaPremultipliedLast);
    // After you create the context, you can draw the sprite image to the context.
    CGContextDrawImage(spriteContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), spriteImage);
    // You don't need the context at this point, so release it to avoid memory leaks.
    CGContextRelease(spriteContext);
    // Use OpenGL ES to generate a name for the texture.
    glGenTextures(1, &spriteTexture);
    // Bind the texture name.
    glBindTexture(GL_TEXTURE_2D, spriteTexture);
    // Set the texture parameters to use a minifying filter and a linear filter (weighted average)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // Specify a 2D texture image, providing a pointer to the image data in memory
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
    // Release the image data
    free(spriteData);
    // Enable use of the texture
    glEnable(GL_TEXTURE_2D);
    // Set a blending function to use
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    // Enable blending
    glEnable(GL_BLEND);
I have two questions, because I am not so familiar with OpenGL.
I want to write a method that takes two points as parameters and draws a line between them on top of my triangles (the background).
- (void)drawLineFromPoint1:(CGPoint)point1 toPoint2:(CGPoint)point2 {
    GLfloat triangle[] = { // Just example points
        0.0f, 0.0f,
        0.1f, 0.0f,
        0.1f, 0.0f,
        0.1f, 0.1f
    };
    GLfloat triangleColors[] = {
        0.5f, 0.5f, 0.5f, 1.0f
    };
    // now draw the triangle
}
Something like that. Then I want a second method that erases this line (and not the background).
My drawing method looks like this:
- (void)drawView
{
    // Make sure that you are drawing to the current context
    [EAGLContext setCurrentContext:context];
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
    glClear(GL_COLOR_BUFFER_BIT);
    glDrawElements(GL_TRIANGLES, number_vertices, GL_UNSIGNED_SHORT, indices);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
It would be great if you could give me some hints/help,
cheers
The conventional approach would be to redraw everything whenever you move or erase a line.
Well, I got it to work. I had simply forgotten to set the vertex pointer to my triangles in drawView. This now works:
- (void)drawView
{
    [EAGLContext setCurrentContext:context];
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
    glClear(GL_COLOR_BUFFER_BIT);
    glVertexPointer(2, GL_FLOAT, 0, vertices);
    glDrawElements(GL_TRIANGLES, number_vertices, GL_UNSIGNED_SHORT, indices);
    [self drawLines];
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
- (void)drawLines
{
    glDisable(GL_TEXTURE_2D);
    GLfloat points[4];
    for (Dataset *data in buttons) {
        CGPoint s = [data screenPosition];
        CGPoint p = [data slot];
        points[0] = (GLfloat)(768 - s.y);
        points[1] = (GLfloat)(1024 - s.x);
        points[2] = (GLfloat)(768 - p.y);
        points[3] = (GLfloat)(1024 - p.x);
        glVertexPointer(2, GL_FLOAT, 0, points);
        glDrawArrays(GL_LINE_STRIP, 0, 2);
    }
    glEnable(GL_TEXTURE_2D);
}
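The coordinate flips in drawLines can be factored into a small helper; a sketch assuming the same 768x1024 screen dimensions used above (the struct and function names are mine):

```c
/* A minimal point type standing in for CGPoint. */
typedef struct { float x, y; } Point2;

/* Map a UIKit-style point into the GL coordinates drawLines uses,
   assuming a 768x1024 screen: x' = 768 - y, y' = 1024 - x. */
static Point2 to_gl_coords(Point2 p)
{
    Point2 r;
    r.x = 768.0f - p.y;
    r.y = 1024.0f - p.x;
    return r;
}
```

Keeping the mapping in one place makes it easier to change later, for example if the screen dimensions stop being hard-coded.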

Draw to offscreen renderbuffer in OpenGL ES (iPhone)

I'm trying to create an offscreen render buffer in OpenGL ES on the iPhone. I've created the buffer like this:
glGenFramebuffersOES(1, &offscreenFramebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, offscreenFramebuffer);
glGenRenderbuffersOES(1, &offscreenRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, offscreenRenderbuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, offscreenRenderbuffer);
But I'm confused about how to create the storage. Apple's documentation says to use EAGLContext's renderbufferStorage:fromDrawable: method, but this seems to work only for one render buffer (the main one being displayed). If I use the normal OpenGL function glRenderbufferStorageOES, I can't seem to get it to display. Here's the code:
// this is in the initialization section:
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGB8_OES, backingWidth, backingHeight);
// and this is when I'm trying to draw to it and display it:
glBindFramebufferOES(GL_FRAMEBUFFER_OES, offscreenFramebuffer);
GLfloat vc[] = {
0.0f, 0.0f, 0.0f,
10.0f, 10.0f, 10.0f,
0.0f, 0.0f, 0.0f,
-10.0f, -10.0f, -10.0f,
};
glLoadIdentity();
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vc);
glDrawArrays(GL_LINES, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, offscreenRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
Doing it this way, nothing is displayed on the screen. However, if I switch the references from the offscreen buffers to the buffers created with renderbufferStorage:fromDrawable:, it works fine.
Any suggestions?
Since you can't use presentRenderbuffer: with an offscreen FBO, you should attach a texture object to it using glFramebufferTexture2DOES, then render a textured full-screen quad.
@david: good idea. What you need to do is what @prideout said: create a texture and render to it, then use that texture on a quad every time. Make sure you draw to the texture only once, as in your case things are persistent.
- (void)setUpTextureBuffer
{
    glGenFramebuffersOES(1, &texturebuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, texturebuffer);
    // create the texture
    glGenTextures(1, &canvastexture);
    glBindTexture(GL_TEXTURE_2D, canvastexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, canvastexture, 0);
    GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
    if (status != GL_FRAMEBUFFER_COMPLETE_OES) {
        NSLog(@"failed to make complete framebuffer object %x", status);
    }
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glClearColor(1.0, 1.0, 1.0, 1.0);
    glViewport(0, 0, 512, 512);
    glClear(GL_COLOR_BUFFER_BIT);
}
//setTargetToTexture() function
glBindFramebufferOES(GL_FRAMEBUFFER_OES, tbuffer);
glBindTexture(GL_TEXTURE_2D, allbrushes);
glViewport(0, 0, 512, 512);
//reset pointers after finishing drawing to textures
glViewport(0, 0, BWIDTH, BHEIGHT);
glVertexPointer(2, GL_FLOAT, 0, canvas); //canvas vertices
glTexCoordPointer(2, GL_FLOAT, 0, texels);
glBindTexture(GL_TEXTURE_2D, boundtexture); //bind to the texture which is the special render target
glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbuffer); //back to normal framebuffer
You cannot present a normal renderbuffer (one created with glRenderbufferStorage); it is always offscreen. presentRenderbuffer: can only be used for renderbuffers that were created with renderbufferStorage:fromDrawable:. If you check the return value of presentRenderbuffer:, you should see it failing.
What are you trying to do?