I am trying to create a GLKView to which I add cubes and draw them. The problem is that each cube is an NSObject with its own vertex and texture buffers, but I want to draw them all in a single context. To do this, I followed some WWDC videos and created two contexts, one for rendering and one for texture loading, and put both into the same sharegroup. Code-wise, I added a property called renderContext to my GLKView, in which I want all cubes to be drawn, and I also set up a loaderContext property, in which I want to load textures. However, nothing is drawn at all, and sometimes I get a crash with GL ERROR 0x0500. It used to work, and the model-view matrix should be set up correctly; the introduction of the asynchronous loading and the two shared contexts caused the problem.
Here is the code:
This is the GLKView container (the view that holds the cubes):
- (void)setupGL {
self.renderContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
self.loaderContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2 sharegroup:self.renderContext.sharegroup];
glGenFramebuffers(1, &defaultFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, defaultFrameBuffer);
glGenRenderbuffers(1, &depthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, self.bounds.size.width, self.bounds.size.height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthBuffer);
glEnable(GL_DEPTH_TEST);
}
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
self.opaque = NO;
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
[EAGLContext setCurrentContext:self.renderContext];
for(Cube *cube in self.cubes){
[cube draw];
}
}
Each individual cube is set up like this:
-(id)init {
self = [super init];
if(self){
self.effect = [[GLKBaseEffect alloc]init];
self.effect.transform.projectionMatrix = GLKMatrix4MakePerspective(45.0f,0.95f, 0.1f, 2.0f);
self.effect.transform.projectionMatrix = GLKMatrix4Translate(self.effect.transform.projectionMatrix, 0, 0.0, 0.0);
self.effect.transform.modelviewMatrix = GLKMatrix4Translate(self.effect.transform.modelviewMatrix,0,0,-1.3);
glGenBuffers(1, &vertexArray);
glBindBuffer(GL_ARRAY_BUFFER, vertexArray);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
glVertexAttribPointer(GLKVertexAttribPosition,3,GL_FLOAT,GL_FALSE,0,0);
glGenBuffers(1, &texArray);
glBindBuffer(GL_ARRAY_BUFFER, texArray);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glBufferData(GL_ARRAY_BUFFER, sizeof(TexCoords), TexCoords, GL_STATIC_DRAW);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 0,0);
}
return self;
}
And has a draw method:
-(void)draw{
[self.effect prepareToDraw];
self.effect.texture2d0.enabled = YES;
for(int i=0;i<6;i++){
if(i==0)glBindTexture(GL_TEXTURE_2D, frontTexture.name);
if(i==1)glBindTexture(GL_TEXTURE_2D, rightTexture.name);
if(i==2)glBindTexture(GL_TEXTURE_2D, backTexture.name);
if(i==3)glBindTexture(GL_TEXTURE_2D, leftTexture.name);
if(i==4)glBindTexture(GL_TEXTURE_2D, bottomTexture.name);
if(i==5)glBindTexture(GL_TEXTURE_2D, topTexture.name);
glDrawArrays(GL_TRIANGLES, i*6, 6);
}
}
Here is how I try to asynchronously load textures:
Note: the GLKView (the container) is the parent of each individual cube; I retrieve its loaderContext, which is in the renderContext's sharegroup, so textures should be drawn correctly, right?
-(void)loadTextureForTexture:(GLKTextureInfo*)texN withView:(CubeView *)cV{
__block GLKTextureInfo *texName = texN;
EAGLContext *loaderContext = self.parent.loaderContext;
self.textureLoader = [[GLKTextureLoader alloc]initWithSharegroup:loaderContext.sharegroup];
[EAGLContext setCurrentContext:loaderContext];
NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:GLKTextureLoaderOriginBottomLeft];
dispatch_queue_t loaderQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_queue_t mainQueue = dispatch_get_main_queue();
[self.textureLoader textureWithCGImage:[self imageWithView:cV].CGImage options:options queue:loaderQueue completionHandler:^(GLKTextureInfo *tex, NSError *err){
texName = tex;
if(err)
NSLog(#"%#", err);
else
NSLog(#"no error");
dispatch_async(mainQueue, ^{
[self display];
});
}];
}
It looks like you’re doing more context management than is necessary:
GLKTextureLoader only needs to know which EAGLSharegroup to create its textures in. You don’t have to create an EAGLContext on its behalf, and the one you’re creating in your code already isn’t being passed into any GLKTextureLoader methods.
You should not need to manually manage the current EAGLContext; in fact, the documentation for GLKView specifically mentions that you should not change the current context from inside your drawing methods.
The net result of this is that other than extracting the EAGLSharegroup from your existing context for GLKTextureLoader creation, you should have no new context management code.
Additionally, it looks like the result of loading the texture never makes it out of loadTextureForTexture:withView:. Your texN variable is not passed into the function by reference, so texName is only visible to loadTextureForTexture:withView: and your texture loading completion block. Once loadTextureForTexture:withView: returns and your completion block is invoked, the data is gone. It seems like there should be some kind of CubeView setter that needs to be called with the GLKTextureInfo * you’ve received.
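A minimal sketch of how the loading method could look along those lines, assuming the cube view exposes a hypothetical setter such as setFrontTexture: and keeping the [self display] call from the original (the setter name and the simplified method signature are assumptions, not code from the question):
-(void)loadTextureForView:(CubeView *)cV {
    // The loader only needs the sharegroup of the context the cubes are drawn in.
    self.textureLoader = [[GLKTextureLoader alloc] initWithSharegroup:self.parent.renderContext.sharegroup];
    NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                                         forKey:GLKTextureLoaderOriginBottomLeft];
    // Passing NULL for the queue means the completion handler runs on the main queue.
    [self.textureLoader textureWithCGImage:[self imageWithView:cV].CGImage
                                   options:options
                                     queue:NULL
                         completionHandler:^(GLKTextureInfo *tex, NSError *err) {
        if (err) {
            NSLog(@"%@", err);
            return;
        }
        [cV setFrontTexture:tex];   // hypothetical setter so the loaded texture survives this method
        [self display];
    }];
}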
The first problem I see in your code is the absence of a color attachment on the framebuffer; you only attach a depth buffer, which will not draw anything:
glGenRenderbuffers(1, &colorBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4, self.bounds.size.width, self.bounds.size.height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorBuffer);
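If the goal is for this framebuffer to end up on screen rather than stay offscreen, the color storage would typically come from the view's CAEAGLLayer instead; a sketch, assuming renderContext is current and the view is layer-backed by a CAEAGLLayer:
glGenRenderbuffers(1, &colorBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorBuffer);
[self.renderContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorBuffer);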
For my non-app store app, I've been using the private framework Core Surface to draw directly to the screen of the iPhone. However, this can be rather slow on older devices because it heavily uses the CPU to do its drawing. To fix this, I've decided to try to use OpenGLES to render the pixels to the screen.
Currently (and I have no way of changing this), I have a reference to an unsigned short * variable called BaseAddress, and essential 3rd party code accesses BaseAddress and updates it with the new pixel data.
I've set up a GLKViewController, and implemented the viewDidLoad as follows:
- (void)viewDidLoad {
[super viewDidLoad];
self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
if (!self.context) {
NSLog(#"Failed to create ES context");
}
[EAGLContext setCurrentContext:self.context];
GLKView *view = (GLKView *)self.view;
view.context = self.context;
glGenBuffers(1, &screenBuffer);
glBindBuffer(GL_ARRAY_BUFFER, screenBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(BaseAddress), BaseAddress, GL_DYNAMIC_DRAW);
}
where screenBuffer is an instance variable. In the glkView:drawInRect: method I have the following:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
glDrawElements(GL_ARRAY_BUFFER, sizeof(BaseAddress)/sizeof(BaseAddress[0]), GL_UNSIGNED_SHORT, BaseAddress);
}
Unfortunately, only a black screen appears when I run the app. If I go back to using Core Surface, the app works fine. So basically, how can I draw the pixels to the screen using OpenGLES?
I think that it might be best to use a texture and for your case I'd try to find some older ES1 template for iOS devices. Basically what you need is a frame buffer and a color buffer made from your UIView layer:
glGenFramebuffers(1, &viewFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);
glGenRenderbuffers(1, &viewColorBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, viewColorBuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer*)self.layer];
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, viewColorBuffer);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);
As for the projection matrix, I suggest you use glOrthof(.0f, backingWidth, backingHeight, .0f, -1.0f, 1.0f); that will make your GL coordinates the same as your view coordinates.
Next, during initialization, generate a texture, bind it, give it power-of-two dimensions (textureWidth = 1; while(textureWidth < backingWidth) textureWidth = textureWidth << 1;) and pass NULL as the data pointer (all done through glTexImage2D).
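A sketch of that initialization, assuming backingWidth/backingHeight are already known, a GLuint screenTexture variable exists, and the BaseAddress pixels are 16-bit RGB565 (the variable names and pixel format are assumptions):
GLsizei textureWidth = 1, textureHeight = 1;
while (textureWidth  < backingWidth)  textureWidth  = textureWidth  << 1;   // round up to a power of two
while (textureHeight < backingHeight) textureHeight = textureHeight << 1;

glGenTextures(1, &screenTexture);
glBindTexture(GL_TEXTURE_2D, screenTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Allocate storage only; the pixel data is uploaded later with glTexSubImage2D.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, textureWidth, textureHeight, 0,
             GL_RGB, GL_UNSIGNED_SHORT_5_6_5, NULL);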
Then generate a vertex array for a quad the same size as the texture, from (0,0) to (textureWidth, textureHeight), with texture coordinates from (0,0) to (1,1).
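Sketched out, with the quad sized to the texture so that one texel maps to one pixel under the glOrthof projection above (the array names are assumptions):
const GLfloat screenQuad[] = {
    0.0f,                   0.0f,
    (GLfloat)textureWidth,  0.0f,
    0.0f,                   (GLfloat)textureHeight,
    (GLfloat)textureWidth,  (GLfloat)textureHeight,
};
const GLfloat screenTexCoords[] = {
    0.0f, 0.0f,
    1.0f, 0.0f,
    0.0f, 1.0f,
    1.0f, 1.0f,
};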
When the data arrives at your pointer and is ready to be pushed to the texture, use glTexSubImage2D to update it. You can update only a segment of the texture if you get data for just part of it, or update the whole screen using the rect (0, 0, screenWidth, screenHeight).
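A sketch of that per-frame update, under the same assumptions (BaseAddress holding backingWidth x backingHeight RGB565 pixels):
glBindTexture(GL_TEXTURE_2D, screenTexture);
// Replace the backingWidth x backingHeight region starting at (0, 0) with the new pixels.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, backingWidth, backingHeight,
                GL_RGB, GL_UNSIGNED_SHORT_5_6_5, BaseAddress);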
Now just draw those two triangles with your texture.
Note that there is a limit on texture size: most active iOS devices support up to 1<<11 (2048), while the iPad 3 supports 1<<12 (4096).
Do not forget to set texture parameters (glTexParameteri) when creating the texture.
Do check for Retina displays and set the content scale of the CAEAGLLayer to 2 if needed.
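For the Retina check, something along these lines would do (a sketch; setting the view's contentScaleFactor is one way to get the scale onto the backing CAEAGLLayer):
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)] &&
    [UIScreen mainScreen].scale == 2.0) {
    self.contentScaleFactor = 2.0;   // the backing CAEAGLLayer picks this scale up
}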
Finally Mr. Chris helped me to draw a line using GLKView and the drawInRect: method. It is working fine, but I need some clarifications.
How are we assigning values to the GLfloat array?
const GLfloat line[] =
{
-1.0f, -1.5f, //point A: what are -1.0f and -1.5f? Are these x and y, or something else?
1.5f, -1.0f, //point B: what are 1.5f and -1.0f? Are these x and y, or something else?
};
I am confused about how to set the values here. How does it take x, y, or a length? If this is a silly question, please accept my apologies. Please clarify my doubts on this.
How do I set a background image for a GLKViewController? I have an image to use as the background, but I don't know where I need to set it.
Sample viewDidLoad code
- (void)viewDidLoad
{
[super viewDidLoad];
self.view.backgroundColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"GameDesign03.png"]];
self.context = [[[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2] autorelease];
if (!self.context) {
NSLog(#"Failed to create ES context");
}
GLKView *view = (GLKView *)self.view;
view.context = self.context;
view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
[self setupGL];
}
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
glClearColor(0.65f, 0.65f, 0.65f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
[self.effect prepareToDraw]; // Prepare the effect for rendering
const GLfloat line[]=
{
-1.0f, -1.5f,
1.5f, -1.0f
};
GLuint bufferObjectNameArray; //Create an handle for a buffer object array
glGenBuffers(1, &bufferObjectNameArray); //Have OpenGL generate a buffer name and store it in the buffer object array
glBindBuffer(GL_ARRAY_BUFFER, bufferObjectNameArray); //Bind the buffer object array to the GL_ARRAY_BUFFER target buffer
//Send the line data over to the target buffer in GPU RAM
glBufferData(GL_ARRAY_BUFFER, sizeof(line), line, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition); //Enable vertex data to be fed down the graphics pipeline to be drawn
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, 2, NULL); //Specify how the GPU looks up the data
glDrawArrays(GL_LINES, 0, 2); // render
}
Please help on this. Thanks in advance.
So to answer your first question, on this line:
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, 2, NULL); //Specify how the GPU looks up the data
you're setting the size argument to 2, which tells OpenGL that there are 2 components per vertex: an x and a y coordinate. The two coordinate pairs in the array are the endpoints of the line.
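Annotated, the array from the question reads like this (with GLKBaseEffect's default identity transforms these are OpenGL clip-space coordinates, where (0, 0) is the center of the view and the visible range runs from -1 to 1 on each axis):
const GLfloat line[] =
{
    -1.0f, -1.5f, //point A: x = -1.0, y = -1.5
     1.5f, -1.0f, //point B: x =  1.5, y = -1.0
};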
And to answer your second question, you need to draw the background image before drawing the lines, as mentioned in my comment above.
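A minimal sketch of that idea, assuming the image is loaded once into a hypothetical backgroundTexture property (a GLKTextureInfo *) and drawn as a textured quad before the line; the property name and the use of GLKTextureLoader here are assumptions, not code from the question:
// Load once, e.g. in viewDidLoad, after [EAGLContext setCurrentContext:self.context]:
NSError *error = nil;
NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                                    forKey:GLKTextureLoaderOriginBottomLeft];
self.backgroundTexture = [GLKTextureLoader textureWithCGImage:[UIImage imageNamed:@"GameDesign03.png"].CGImage
                                                      options:options
                                                        error:&error];

// In glkView:drawInRect:, after glClear and before drawing the line:
self.effect.texture2d0.name = self.backgroundTexture.name;
self.effect.texture2d0.enabled = YES;
[self.effect prepareToDraw];

const GLfloat quad[]    = { -1, -1,   1, -1,   -1, 1,   1, 1 };  // full-screen quad in clip space
const GLfloat quadTex[] = {  0,  0,   1,  0,    0, 1,   1, 1 };  // matching texture coordinates

glBindBuffer(GL_ARRAY_BUFFER, 0); // use client-side arrays for this quad
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, 0, quad);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, quadTex);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

// Turn texturing back off before drawing the untextured line:
self.effect.texture2d0.enabled = NO;
[self.effect prepareToDraw];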
Finally, I tried to draw a line using the OpenGL ES framework in Xcode 4.2 for a simple iPhone game app. I studied something about GLKView and GLKViewController to draw a line on the iPhone. Here is the sample code that I tried in my project:
@synthesize context = _context;
@synthesize effect = _effect;
- (void)viewDidLoad
{
[super viewDidLoad];
self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
if (!self.context) {
NSLog(#"Failed to create ES context");
}
GLKView *view = (GLKView *)self.view;
view.context = self.context;
view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
[EAGLContext setCurrentContext:self.context];
//[self setupGL];
}
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
NSLog(#"DrawInRect Method");
[EAGLContext setCurrentContext:self.context];
// This method is calling multiple times....
glClearColor(0.65f, 0.65f, 0.65f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
const GLfloat line[] =
{
-0.5f, -0.5f, //point A
0.5f, -0.5f, //point B
};
glVertexPointer(2, GL_FLOAT, 0, line);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_LINES, 0, 2);
}
When I run the project, only the gray color appears on the screen; the line is not showing. Also, the - (void)glkView:(GLKView *)view drawInRect:(CGRect)rect delegate method is being called an infinite number of times. Please guide me on where I am going wrong. Why is the line not appearing or being drawn? Can you please help? I have been trying this for 2 days. Thanks in advance.
I'm rather a student of OpenGL ES 2.0 right now myself. I recommend first starting a new project in Xcode with the "OpenGL Game" template Apple provides.
Among other things, the Apple template code will include creation of a GLKBaseEffect, which provides some Shader functionality that seems to be required in order to be able to draw with OpenGL ES 2.0. (Without the GLKBaseEffect, you would need to use GLSL. The template provides an example of both with and without explicit GLSL Shader code.)
The template creates a "setupGL" function, which I modified to look like this:
- (void)setupGL
{
[EAGLContext setCurrentContext:self.context];
self.effect = [[[GLKBaseEffect alloc] init] autorelease];
// Let's color the line
self.effect.useConstantColor = GL_TRUE;
// Make the line a cyan color
self.effect.constantColor = GLKVector4Make(
0.0f, // Red
1.0f, // Green
1.0f, // Blue
1.0f);// Alpha
}
I was able to get the line to draw by including a few more steps. It all involves sending data over to the GPU to be processed. Here's my glkView:drawInRect function:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
glClearColor(0.65f, 0.65f, 0.65f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Prepare the effect for rendering
[self.effect prepareToDraw];
const GLfloat line[] =
{
-1.0f, -1.5f, //point A
1.5f, -1.0f, //point B
};
// Create an handle for a buffer object array
GLuint bufferObjectNameArray;
// Have OpenGL generate a buffer name and store it in the buffer object array
glGenBuffers(1, &bufferObjectNameArray);
// Bind the buffer object array to the GL_ARRAY_BUFFER target buffer
glBindBuffer(GL_ARRAY_BUFFER, bufferObjectNameArray);
// Send the line data over to the target buffer in GPU RAM
glBufferData(
GL_ARRAY_BUFFER, // the target buffer
sizeof(line), // the number of bytes to put into the buffer
line, // a pointer to the data being copied
GL_STATIC_DRAW); // the usage pattern of the data
// Enable vertex data to be fed down the graphics pipeline to be drawn
glEnableVertexAttribArray(GLKVertexAttribPosition);
// Specify how the GPU looks up the data
glVertexAttribPointer(
GLKVertexAttribPosition, // the currently bound buffer holds the data
2, // number of coordinates per vertex
GL_FLOAT, // the data type of each component
GL_FALSE, // can the data be scaled
2*4, // how many bytes per vertex (2 floats per vertex)
NULL); // offset to the first coordinate, in this case 0
glDrawArrays(GL_LINES, 0, 2); // render
}
Btw, I've been going through Learning OpenGL ES for iOS by Erik Buck, which you can buy in "Rough Cut" form through O'Reilly (an early form of the book since it is not going to be fully published until the end of the year). The book has a fair number of typos at this stage and no pictures, but I've still found it quite useful. The source code for the book seems to be very far along, which you can grab at his blog. The author also wrote the excellent book Cocoa Design Patterns.
I am making a simple game (using the book "iPhone and iPad Game Development for Dummies), but I cannot get it to work. I am making the sample application that is used in the book, so I think I have the correct code. Here is the problem.
I put in the code for OpenGL ES, but I am getting a lot of warnings. Here is the code.
.h file:
#import <UIKit/UIKit.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>
#import <QuartzCore/QuartzCore.h>
#import <CoreGraphics/CoreGraphics.h>
@interface OpenGL : UIView {
EAGLContext *context;
GLuint *framebuffer;
GLuint *colorRenderBuffer;
GLuint *depthBuffer;
}
- (void) prepareOpenGL;
- (void) render;
@end
.m file:
#import "OpenGL.h"
@implementation OpenGL
- (void) awakeFromNib; {
[self prepareOpenGL];
[self render];
}
- (void) prepareOpenGL; {
context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:context];
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_RENDERBUFFER, framebuffer);
glGenRenderbuffers(1, &colorRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderBuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer*)self.layer];
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderBuffer);
GLint height, width;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);
glGenRenderbuffers(1, &depthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthBuffer);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
NSLog(#"Failed to create a complete render buffer!");
}
}
- (void) render; {
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glViewport(0, 0, self.bounds.size.width, self.bounds.size.height);
glClearColor(0.5, 0.0, 0.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderBuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];
}
+ (Class) layerClass; {
return [CAEAGLLayer class];
}
These are the types of warnings I am getting (I cannot put them all in because there are a lot of them):
Passing argument 2 of 'glGenFramebuffers' from incompatible pointer type.
Passing argument 2 of 'glBindFramebuffers' from incompatible pointer type.
Passing argument 2 of 'glGenRenderbuffers' from incompatible pointer type.
Passing argument 2 of 'glBindRenderbuffers' from incompatible pointer type.
So, if all of this came out of the book, why am I getting these warnings? Am I not importing the right files? I really cannot give you much more information than this because I have no idea about what caused this, and I am totally new to OpenGL ES. Thanks for any and all help!
EDIT: One more thing. I get the warnings wherever things like glGenFramebuffers are used.
The glGen* functions take a size and a pointer to a GLuint, and you're passing the address of a pointer to a GLuint (a GLuint **), which is what triggers the warning.
Just pass the pointer directly, like this:
glGenFramebuffers(1, framebuffer);
glGenRenderbuffers(1, colorRenderBuffer);
glGenRenderbuffers(1, depthBuffer);
Also don't forget to allocate memory before passing it to OpenGL.
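For instance, keeping the GLuint * instance variables from the question, a sketch might look like this (the malloc calls are an assumption about how that memory gets allocated; the other common fix is to declare the handles as plain GLuint values, in which case the original & calls are already correct):
// Allocate storage for the handles before asking OpenGL to fill them in.
framebuffer       = malloc(sizeof(GLuint));
colorRenderBuffer = malloc(sizeof(GLuint));
depthBuffer       = malloc(sizeof(GLuint));

glGenFramebuffers(1, framebuffer);
glGenRenderbuffers(1, colorRenderBuffer);
glGenRenderbuffers(1, depthBuffer);

// Later calls then dereference the pointers, e.g.:
glBindFramebuffer(GL_FRAMEBUFFER, *framebuffer);
glBindRenderbuffer(GL_RENDERBUFFER, *colorRenderBuffer);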
I am writing a GLPaint-esque drawing application for the iPad, however I have hit a stumbling block. Specifically, I am trying to implement two things at the moment:
1) A background image that can be drawn onto.
2) The ability to draw temporary shapes, e.g. you might draw a line, but the final shape would only be committed once the finger has lifted.
For the background image, I understand the idea is to draw the image into a VBO and draw it right before every line drawing. This is fine, but now I need to add the ability to draw temporary shapes... with kEAGLDrawablePropertyRetainedBacking set to YES (as in GLPaint) the temporary shapes are obviously not temporary! Turning the retained backing property to NO works great for the temporary objects, but now my previous freehand lines aren't kept.
What is the best approach here? Should I be looking to use more than one EAGLLayer? All the documentation and tutorials I've found seem to suggest that most things should be possible with a single layer. They also say that retained backing should pretty much always be set to NO. Is there a way to work my application in such a configuration? I tried storing every drawing point into a continually expanding vertex array to be redrawn each frame, but due to the sheer number of sprites being drawn this isn't working.
I would really appreciate any help on this one, as I've scoured online and found nothing!
I've since found the solution to this problem. The best way appears to be to use custom framebuffer objects and render-to-texture. I hadn't heard of this before asking the question, but it looks like an incredibly useful tool for the OpenGLer's toolkit!
For those that may be wanting to do something similar, the idea is that you create a FBO and attach a texture to it (instead of a renderbuffer). You can then bind this FBO and draw to it like any other, the only difference being that the drawings are rendered off-screen. Then all you need to do to display the texture is to bind the main FBO and draw the texture to it (using a quad).
So for my implementation, I used two different FBOs with a texture attached to each - one for the "retained" image (for freehand drawing), and the other for the "scratch" image (for temporary drawings). Each time a frame is rendered, I first draw a background texture (in my case I just used the Texture2D class), then draw the retained texture, and finally the scratch texture if required. When drawing a temporary shape everything is rendered to the scratch texture, and this is cleared at the start of every frame. Once it is finished, the scratch texture is drawn to the retained texture.
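A minimal sketch of that commit step, reusing the buffer and helper names from the snippets below (and assuming the retained FBO is 1024x1024, as in the setup code):
// Commit the finished temporary shape: draw the scratch texture into the retained FBO.
glBindFramebufferOES(GL_FRAMEBUFFER_OES, retainedFramebuffer);
glViewport(0, 0, 1024, 1024);
[self drawScratchTexture];   // the same kind of textured quad as drawRetainedTexture below
glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);
// The scratch texture is then cleared at the start of the next frame.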
Here are a few snippets of code that might be of use to somebody:
1) Create the framebuffers (I have only shown a couple here to save space!):
// ---------- DEFAULT FRAMEBUFFER ---------- //
// Create framebuffer.
glGenFramebuffersOES(1, &viewFramebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
// Create renderbuffer.
glGenRenderbuffersOES(1, &viewRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
// Get renderbuffer storage and attach to framebuffer.
[context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:layer];
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, viewRenderbuffer);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
// Check for completeness.
status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
if (status != GL_FRAMEBUFFER_COMPLETE_OES) {
NSLog(#"Failed to make complete framebuffer object %x", status);
return NO;
}
// Unbind framebuffer.
glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);
// ---------- RETAINED FRAMEBUFFER ---------- //
// Create framebuffer.
glGenFramebuffersOES(1, &retainedFramebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, retainedFramebuffer);
// Create the texture.
glColor4f(0.0f, 0.0f, 0.0f, 0.0f);
glGenTextures(1, &retainedTexture);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, retainedTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 1024, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_2D, 0);
// Attach the texture as the framebuffer's color attachment.
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, retainedTexture, 0);
// Check for completeness.
status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
if (status != GL_FRAMEBUFFER_COMPLETE_OES) {
NSLog(#"Failed to make complete framebuffer object %x", status);
return NO;
}
// Unbind framebuffer.
glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);
2) Draw to the render-to-texture FBO:
// Ensure that we are drawing to the current context.
[EAGLContext setCurrentContext:context];
glBindFramebufferOES(GL_FRAMEBUFFER_OES, retainedFramebuffer);
glViewport(0, 0, 1024, 1024);
// DRAWING CODE HERE
3) Render the various textures to the main FBO, and present:
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
glViewport(0, 0, backingWidth, backingHeight);
glClearColor(1.0f, 1.0f, 1.0f, 1.0f); // Clear to white.
glClear(GL_COLOR_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
[self drawBackgroundTexture];
[self drawRetainedTexture];
[self drawScratchTexture];
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
For example, drawing the retained texture using [self drawRetainedTexture] would use the following code:
// Bind the texture.
glBindTexture(GL_TEXTURE_2D, retainedTexture);
// Destination coords.
GLfloat retainedVertices[] = {
0.0, backingHeight, 0.0,
backingWidth, backingHeight, 0.0,
0.0, 0.0, 0.0,
backingWidth, 0.0, 0.0
};
// Source coords.
GLfloat retainedTexCoords[] = {
0.0, 1.0,
1.0, 1.0,
0.0, 0.0,
1.0, 0.0
};
// Draw the texture.
glVertexPointer(3, GL_FLOAT, 0, retainedVertices);
glTexCoordPointer(2, GL_FLOAT, 0, retainedTexCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Unbind the texture.
glBindTexture(GL_TEXTURE_2D, 0);
A lot of code, but I hope that helps somebody. It certainly had me stumped for a while!