Basic Drawing of Pixels From an Unsigned Short * Using OpenGL ES on iOS - iPhone

For my non-app store app, I've been using the private framework Core Surface to draw directly to the screen of the iPhone. However, this can be rather slow on older devices because it heavily uses the CPU to do its drawing. To fix this, I've decided to try to use OpenGLES to render the pixels to the screen.
Currently (and I have no way of changing this), I have a reference to an unsigned short * variable called BaseAddress, and essential third-party code accesses BaseAddress and updates it with new pixel data.
I've set up a GLKViewController, and implemented the viewDidLoad as follows:
- (void)viewDidLoad {
    [super viewDidLoad];
    self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    if (!self.context) {
        NSLog(@"Failed to create ES context");
    }
    [EAGLContext setCurrentContext:self.context];
    GLKView *view = (GLKView *)self.view;
    view.context = self.context;
    glGenBuffers(1, &screenBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, screenBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(BaseAddress), BaseAddress, GL_DYNAMIC_DRAW);
}
where screenBuffer is an instance variable. In the glkView:drawInRect: method I have the following:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    glDrawElements(GL_ARRAY_BUFFER, sizeof(BaseAddress)/sizeof(BaseAddress[0]), GL_UNSIGNED_SHORT, BaseAddress);
}
Unfortunately, only a black screen appears when I run the app. If I go back to using Core Surface, the app works fine. So basically: how can I draw the pixels to the screen using OpenGL ES?

I think it might be best to use a texture, and for your case I'd try to find some older ES1 template for iOS devices. Basically, what you need is a framebuffer and a color renderbuffer made from your UIView's layer:
glGenFramebuffers(1, &viewFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);
glGenRenderbuffers(1, &viewColorBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, viewColorBuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer*)self.layer];
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, viewColorBuffer);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);
As for the projection matrix, I suggest you use glOrthof(0.0f, backingWidth, backingHeight, 0.0f, -1.0f, 1.0f); that will make your GL coordinates match your view coordinates.
Next, during initialization, generate a texture, bind it, give it power-of-two dimensions (textureWidth = 1; while (textureWidth < backingWidth) textureWidth <<= 1;), and pass NULL as the data pointer (all in the function glTexImage2D).
Then generate a vertex array for a quad the same size as the texture, from (0,0) to (textureWidth, textureHeight), with texture coordinates from (0,0) to (1,1).
When the data arrives at your pointer and is ready to be pushed to the texture, use glTexSubImage2D to update it. You can update only a segment of the texture if that is all the data you have, or update the whole screen with the rect (0, 0, screenWidth, screenHeight).
Now just draw those two triangles with your texture.
Note that there is a limit on texture size: most active iOS devices support 1<<11 (2048); the iPad 3 supports 1<<12 (4096).
Do not forget to set texture parameters (glTexParameteri) when creating the texture.
Do check for a Retina display and set the content scale of the CAEAGLLayer to 2 if needed.
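Putting those steps together, a minimal ES1 sketch might look like this. It is a sketch rather than drop-in code: it assumes BaseAddress holds tightly packed RGB565 pixels of backingWidth x backingHeight (one unsigned short per pixel), and that backingWidth/backingHeight came from the renderbuffer query above; adjust the format to whatever your pixel data actually is.
// One-time setup, after the framebuffer code above
GLuint screenTexture;
int textureWidth = 1, textureHeight = 1;
while (textureWidth < backingWidth) textureWidth <<= 1;
while (textureHeight < backingHeight) textureHeight <<= 1;

glGenTextures(1, &screenTexture);
glBindTexture(GL_TEXTURE_2D, screenTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Allocate storage only; pixels are uploaded each frame below
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, textureWidth, textureHeight, 0,
             GL_RGB, GL_UNSIGNED_SHORT_5_6_5, NULL);

// Make GL coordinates match view coordinates
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0.0f, backingWidth, backingHeight, 0.0f, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

// Per frame: push the new pixels, then draw a textured quad
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, backingWidth, backingHeight,
                GL_RGB, GL_UNSIGNED_SHORT_5_6_5, BaseAddress);

const GLfloat quad[] = {
    0.0f,                  0.0f,
    (GLfloat)textureWidth, 0.0f,
    0.0f,                  (GLfloat)textureHeight,
    (GLfloat)textureWidth, (GLfloat)textureHeight
};
const GLfloat texCoords[] = { 0,0, 1,0, 0,1, 1,1 };

glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, quad);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); // the two triangles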

Related

Draw a straight line using OpenGL ES in iPhone?

Finally, I tried to draw a line using the OpenGL ES framework in Xcode 4.2 for a simple iPhone game app. I studied a bit about GLKView and GLKViewController to draw a line on the iPhone. Here is the sample code that I tried in my project:
@synthesize context = _context;
@synthesize effect = _effect;

- (void)viewDidLoad
{
    [super viewDidLoad];
    self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    if (!self.context) {
        NSLog(@"Failed to create ES context");
    }
    GLKView *view = (GLKView *)self.view;
    view.context = self.context;
    view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
    [EAGLContext setCurrentContext:self.context];
    //[self setupGL];
}
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    NSLog(@"DrawInRect Method");
    [EAGLContext setCurrentContext:self.context];
    // This method is called multiple times....
    glClearColor(0.65f, 0.65f, 0.65f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    const GLfloat line[] =
    {
        -0.5f, -0.5f, // point A
         0.5f, -0.5f, // point B
    };
    glVertexPointer(2, GL_FLOAT, 0, line);
    glEnableClientState(GL_VERTEX_ARRAY);
    glDrawArrays(GL_LINES, 0, 2);
}
When I run the project, only the gray color appears on the screen; the line is not showing. Also, the - (void)glkView:(GLKView *)view drawInRect:(CGRect)rect delegate method is called endlessly. Please guide me as to where I am going wrong. Why is the line not appearing or drawing? Can you please help? I have been trying for 2 days. Thanks in advance.
I'm rather a student of OpenGL ES 2.0 right now myself. I recommend first starting a new project in Xcode with the "OpenGL Game" template Apple provides.
Among other things, the Apple template code includes the creation of a GLKBaseEffect, which provides the shader functionality that seems to be required in order to draw with OpenGL ES 2.0. (Without the GLKBaseEffect, you would need to write GLSL yourself. The template provides examples both with and without explicit GLSL shader code.)
The template creates a "setupGL" function, which I modified to look like this:
- (void)setupGL
{
    [EAGLContext setCurrentContext:self.context];
    self.effect = [[[GLKBaseEffect alloc] init] autorelease];
    // Let's color the line
    self.effect.useConstantColor = GL_TRUE;
    // Make the line a cyan color
    self.effect.constantColor = GLKVector4Make(
        0.0f,  // Red
        1.0f,  // Green
        1.0f,  // Blue
        1.0f); // Alpha
}
I was able to get the line to draw by including a few more steps. It all involves sending data over to the GPU to be processed. Here's my glkView:drawInRect: method:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClearColor(0.65f, 0.65f, 0.65f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // Prepare the effect for rendering
    [self.effect prepareToDraw];
    const GLfloat line[] =
    {
        -1.0f, -1.5f, // point A
         1.5f, -1.0f, // point B
    };
    // Create a handle for a buffer object array
    GLuint bufferObjectNameArray;
    // Have OpenGL generate a buffer name and store it in the buffer object array
    glGenBuffers(1, &bufferObjectNameArray);
    // Bind the buffer object array to the GL_ARRAY_BUFFER target buffer
    glBindBuffer(GL_ARRAY_BUFFER, bufferObjectNameArray);
    // Send the line data over to the target buffer in GPU RAM
    glBufferData(
        GL_ARRAY_BUFFER,  // the target buffer
        sizeof(line),     // the number of bytes to put into the buffer
        line,             // a pointer to the data being copied
        GL_STATIC_DRAW);  // the usage pattern of the data
    // Enable vertex data to be fed down the graphics pipeline to be drawn
    glEnableVertexAttribArray(GLKVertexAttribPosition);
    // Specify how the GPU looks up the data
    glVertexAttribPointer(
        GLKVertexAttribPosition, // the currently bound buffer holds the data
        2,                       // number of coordinates per vertex
        GL_FLOAT,                // the data type of each component
        GL_FALSE,                // can the data be scaled
        2 * 4,                   // how many bytes per vertex (two floats per vertex)
        NULL);                   // offset to the first coordinate, in this case 0
    glDrawArrays(GL_LINES, 0, 2); // render
}
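One caveat with the code above: glGenBuffers runs on every draw, so each frame leaks one buffer object. A safer pattern is to create the VBO once in setupGL and only bind it here; at minimum, release it at the end of the method after glDrawArrays (a small addition, assuming nothing else holds onto the buffer):
// After glDrawArrays, unbind and delete the per-frame VBO so it is not leaked
glBindBuffer(GL_ARRAY_BUFFER, 0);
glDeleteBuffers(1, &bufferObjectNameArray);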
By the way, I've been going through Learning OpenGL ES for iOS by Erik Buck, which you can buy in "Rough Cut" form through O'Reilly (an early form of the book, since it is not going to be fully published until the end of the year). The book has a fair number of typos at this stage and no pictures, but I've still found it quite useful. The source code for the book, which you can grab from his blog, seems to be very far along. The author also wrote the excellent book Cocoa Design Patterns.

CVOpenGLESTextureCacheCreateTextureFromImage from uint8_t buffer

I'm developing a video player for the iPhone. I'm using the ffmpeg libraries to decode frames of video, and I'm using OpenGL ES 2.0 to render the frames to the screen.
But my render method is very slow.
A user told me:
iOS 5 includes a new way to do this fast. The trick is to use AVFoundation and link a Core Video pixel buffer directly to an OpenGL texture.
My problem now is that my video player hands my render method a uint8_t*, which I then use with glTexSubImage2D.
But if I want to use CVOpenGLESTextureCacheCreateTextureFromImage, I need a CVImageBufferRef containing the frame.
The question is: how can I create a CVImageBufferRef from a uint8_t buffer?
This is my render method:
- (void)render:(uint8_t *)buffer
{
    NSLog(@"render");
    [EAGLContext setCurrentContext:context];
    glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
    glViewport(0, 0, backingWidth, backingHeight);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // OpenGL loads textures lazily so accessing the buffer is deferred until draw; notify
    // the movie player that we're done with the texture after glDrawArrays.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, mFrameW, mFrameH, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, buffer);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    [moviePlayerDelegate bufferDone];
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER];
}
Thanks.
I am trying to do something similar.
Apparently, you need to create a CVPixelBufferRef and substitute it for the CVImageBufferRef (a CVPixelBufferRef is in fact a CVImageBufferRef). I.e., you first create a CVPixelBufferRef, as described here (for download), and then get access to the pixel buffer:
CVPixelBufferLockBaseAddress(renderTarget, 0);
_rawBytesForImage = (GLubyte *)CVPixelBufferGetBaseAddress(renderTarget);
(Code not mine).
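For a rough sketch of the wrapping itself (untested here; frameWidth, frameHeight, bytesPerRow, and textureCache are assumed names, with textureCache created once via CVOpenGLESTextureCacheCreate), it might look like this. One caveat: buffers made with CVPixelBufferCreateWithBytes are not IOSurface-backed, so the texture cache may reject them or lose its zero-copy advantage; creating the buffer with CVPixelBufferCreate (passing kCVPixelBufferIOSurfacePropertiesKey in the attributes) and copying the frame into it is the more reliable route.
// Requires CoreVideo: #import <CoreVideo/CoreVideo.h>
CVPixelBufferRef pixelBuffer = NULL;
// Wrap the decoder's bytes without copying; 32BGRA is an assumption,
// convert first if your decoder emits a different format
CVReturn err = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                            frameWidth, frameHeight,
                                            kCVPixelFormatType_32BGRA,
                                            buffer, bytesPerRow,
                                            NULL, NULL, // no release callback
                                            NULL, &pixelBuffer);
if (err == kCVReturnSuccess) {
    CVOpenGLESTextureRef cvTexture = NULL;
    err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                       textureCache,
                                                       pixelBuffer,
                                                       NULL,            // texture attributes
                                                       GL_TEXTURE_2D,
                                                       GL_RGBA,         // internal format
                                                       frameWidth, frameHeight,
                                                       GL_BGRA,         // source format
                                                       GL_UNSIGNED_BYTE,
                                                       0,               // plane index
                                                       &cvTexture);
    if (err == kCVReturnSuccess) {
        glBindTexture(CVOpenGLESTextureGetTarget(cvTexture),
                      CVOpenGLESTextureGetName(cvTexture));
        // ... draw, then CFRelease(cvTexture) when finished with it
    }
    CVPixelBufferRelease(pixelBuffer);
}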
For an actual working example that shows how video data can be passed directly to an OpenGL view, see my blog post on the subject. The problem with looking at a series of "code pieces" around online is that you will not find actual complete working examples for iOS.

iPhone 4 OpenGL ES gluProject returning wrong y-coordinate

I have a baffling problem with an iPhone 4 OpenGL ES app that I have been trying to tackle on and off for a couple of months now and have hit a dead end despite some really useful and tantalising tips and suggestions on this site.
I am writing a 3d game which simply draws blocks and allows the user to move them around into various arrangements and the bulk of the app is written in C++.
My problem is that I am trying to use gluUnProject, for which I found source code here:
http://webcvs.freedesktop.org/mesa/Mesa/src/glu/mesa/project.c?view=markup
to interpret the 3D point (and hence the block) selected by the user in order to move and rotate it; I converted the code to single-precision floating point rather than double precision.
Please note that I have compared this source to other versions on the net and it appears to be consistent.
I use the following code to get the ray vector:
Ray RenderingEngine::GetRayVector( vec2 winPos ) const
{
    // Get the last matrices used
    glGetFloatv( GL_MODELVIEW_MATRIX, __modelview );
    glGetFloatv( GL_PROJECTION_MATRIX, __projection );
    glGetIntegerv( GL_VIEWPORT, __viewport );
    // Flip the y coordinate
    winPos.y = (float)__viewport[3] - winPos.y;
    // Create vectors to be set
    vec3 nearPoint;
    vec3 farPoint;
    Ray rayVector;
    // Retrieve the position projected on the near plane
    gluUnProject( winPos.x, winPos.y, 0,
                  __modelview, __projection, __viewport,
                  &nearPoint.x, &nearPoint.y, &nearPoint.z );
    // Retrieve the position projected on the far plane
    gluUnProject( winPos.x, winPos.y, 1,
                  __modelview, __projection, __viewport,
                  &farPoint.x, &farPoint.y, &farPoint.z );
    rayVector.nearPoint = nearPoint;
    rayVector.farPoint = farPoint;
    // Return the ray vector
    return rayVector;
}
The vector code for tracing the returned ray from the near plane to the far plane is straightforward. I find that blocks near the bottom of the screen are correctly identified, but as one moves up the screen there is an increasing discrepancy between the reported y-values and the expected y-values for the selected points.
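For reference, the intersection step itself can stay simple; a sketch using the question's Ray/vec3 types (vector operator overloads assumed) for picking against a horizontal plane would be:
// Intersect the pick ray with the horizontal plane y = planeY;
// returns false if the ray is parallel or the hit lies outside near..far.
// fabsf comes from <math.h>
bool IntersectRayWithPlaneY( const Ray& ray, float planeY, vec3& hit )
{
    vec3 dir = ray.farPoint - ray.nearPoint;
    if ( fabsf( dir.y ) < 1e-6f )
        return false;
    float t = ( planeY - ray.nearPoint.y ) / dir.y;
    if ( t < 0.0f || t > 1.0f )
        return false;
    hit = ray.nearPoint + dir * t;
    return true;
}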
I have also tried using gluProject to manually check which screen coordinates are generated for my world coordinates, as follows:
vec3 RenderingEngine::GetScreenCoordinates( vec3 objectPos ) const
{
    // Get the last matrices used
    glGetFloatv( GL_MODELVIEW_MATRIX, __modelview );
    glGetFloatv( GL_PROJECTION_MATRIX, __projection );
    glGetIntegerv( GL_VIEWPORT, __viewport );
    vec3 winPos;
    gluProject( objectPos.x, objectPos.y, objectPos.z,
                __modelview, __projection, __viewport,
                &winPos.x, &winPos.y, &winPos.z );
    // Swap the y value
    winPos.y = (float)__viewport[3] - winPos.y;
    return winPos;
}
Again, the results are consistent with the ray-tracing approach in that the y coordinate returned by gluProject gets increasingly wrong as the user clicks higher up the screen.
For example, when the clicked position directly reported by the touchesBegan event is (246,190) the calculated position is (246, 215), a y discrepancy of 25.
When the clicked position directly reported by the touchesBegan event is (246,398) the calculated position is (246, 405), a y discrepancy of 7.
The x coordinate seems to be spot on.
I notice that the layer.bounds.size.height is reported as 436 when the viewport height is set to 480 (the full screen height). The layer bounds width is reported as 320 which is also the width of the viewport.
The value of 436 seems to be fixed no matter what viewport size I use or whether I display the status screen at the top of the window.
I have tried setting the bounds.size.height to 480 before the following call:
[my_context renderbufferStorage:GL_RENDERBUFFER
                   fromDrawable:eaglLayer];
But this seems to be ignored and the height is later reported as 436 in the call:
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES,
GL_RENDERBUFFER_HEIGHT_OES, &height);
I have seen some discussion of the difference between points and pixels and the possible need for scaling, but I have struggled to use that information, since it hinted that the difference was due to the Retina display resolution of the iPhone 4 and that different scaling would be required for the simulator and the actual device. However, as far as I can tell, the simulator and the device behave consistently.
30-Aug-2011: Since I am not getting any feedback on this one, is there more information I can supply to make the question more tractable?
31-Aug-2011: OpenGL setup and display code follows:
- (id)initWithCoder:(NSCoder *)coder
{
    if ((self = [super initWithCoder:coder]))
    {
        // Create an OpenGL-friendly layer to draw in
        CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
        eaglLayer.opaque = YES;
        // eaglLayer.bounds.size.width and eaglLayer.bounds.size.height are
        // always 320 and 436 at this point
        EAGLRenderingAPI api = kEAGLRenderingAPIOpenGLES1;
        m_context = [[EAGLContext alloc] initWithAPI:api];
        // Check we have a context
        if (!m_context || ![EAGLContext setCurrentContext:m_context]) {
            [self release];
            return nil;
        }
        glGenRenderbuffersOES(1, &m_colorRenderbuffer);
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_colorRenderbuffer);
        [m_context renderbufferStorage:GL_RENDERBUFFER
                          fromDrawable:eaglLayer];
        UIScreen *scr = [UIScreen mainScreen];
        CGRect rect = scr.applicationFrame;
        int width = CGRectGetWidth(rect);   // Always 320
        int height = CGRectGetHeight(rect); // Always 480 (status bar not displayed)
        // Initialise the main code
        m_applicationEngine->Initialise(width, height);
        // This is the key C++ code invoked in the Initialise call, shown here indented
            // Setup viewport
            LowerLeft = ivec2(0, 0);
            ViewportSize = ivec2(width, height);
            // Code to create vertex and index buffers not shown here
            // …
            // Extract width and height from the color buffer.
            int width, height;
            glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES,
                                            GL_RENDERBUFFER_WIDTH_OES, &width);
            glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES,
                                            GL_RENDERBUFFER_HEIGHT_OES, &height);
            // Create a depth buffer that has the same size as the color buffer.
            glGenRenderbuffersOES(1, &m_depthRenderbuffer);
            glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_depthRenderbuffer);
            glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES,
                                     width, height);
            // Create the framebuffer object.
            GLuint framebuffer;
            glGenFramebuffersOES(1, &framebuffer);
            glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
            glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                                         GL_RENDERBUFFER_OES, m_colorRenderbuffer);
            glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES,
                                         GL_RENDERBUFFER_OES, m_depthRenderbuffer);
            glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_colorRenderbuffer);
            // Set up various GL states.
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_NORMAL_ARRAY);
            glEnable(GL_LIGHTING);
            glEnable(GL_LIGHT0);
            glEnable(GL_DEPTH_TEST);
        // ...Back in initWithCoder
        // Do those things which need to happen when the main code is reset
        m_applicationEngine->Reset();
        // This is the key C++ code invoked in the Reset call, shown here indented
            // Set initial camera position where
            // eye=(0.7,8,-8), m_target=(0,4,0), CAMERA_UP=(0,-1,0)
            m_main_camera = mat4::LookAt(eye, m_target, CAMERA_UP);
        // ...Back in initWithCoder
        [self drawView:nil];
        m_timestamp = CACurrentMediaTime();
        // Create a timer object that allows the application to synchronise its
        // drawing to the refresh rate of the display.
        CADisplayLink *displayLink;
        displayLink = [CADisplayLink displayLinkWithTarget:self
                                                  selector:@selector(drawView:)];
        [displayLink addToRunLoop:[NSRunLoop currentRunLoop]
                          forMode:NSDefaultRunLoopMode];
    }
    return self;
}
- (void)drawView:(CADisplayLink *)displayLink
{
    if (displayLink != nil) {
        // Invoke main rendering code
        m_applicationEngine->Render();
        // This is the key C++ code invoked in the Render call, shown here indented
            // Do the background
            glClearColor(1.0f, 1.0f, 1.0f, 1);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            // A set of objects is provided to this method;
            // for each one (called visual below) do the following:
            // Set the viewport transform.
            glViewport(LowerLeft.x, LowerLeft.y, ViewportSize.x, ViewportSize.y);
            // Set the model view and projection transforms
            // Frustum(T left, T right, T bottom, T top, T near, T far)
            float h = 4.0f * size.y / size.x;
            mat4 modelview = visual->Rotation * visual->Translation * m_main_camera;
            mat4 projection = mat4::Frustum(-1.5, 1.5, h / 2, -h / 2, 4, 14);
            // Load the model view matrix and initialise
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glLoadMatrixf(modelview.Pointer());
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glLoadMatrixf(projection.Pointer());
            // Draw the surface - code not shown
            // …
        // ...Back in drawView
        [m_context presentRenderbuffer:GL_RENDERBUFFER];
    }
}
When the view that holds the renderer is resized, it is notified in this way:
- (void)layoutSubviews
{
    [renderer resizeFromLayer:(CAEAGLLayer *)self.layer];
    [self drawView:nil];
}

- (BOOL)resizeFromLayer:(CAEAGLLayer *)layer
{
    // Allocate color buffer backing based on the current layer size
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderBuffer);
    [context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:layer];
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
    if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES)
    {
        NSLog(@"Failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES));
        return NO;
    }
    [self recreatePerspectiveProjectionMatrix];
    return YES;
}
Notice that the perspective matrix should be properly recreated because the viewport size has changed; this will influence the unproject result.
Related to scale issues:
Inside the view's initialization, get the scale factor:
CGFloat scale = 1;
if ([self respondsToSelector:@selector(contentScaleFactor)])
{
    self.contentScaleFactor = [[UIScreen mainScreen] scale];
    scale = self.contentScaleFactor;
}
The view size is virtually the same on a standard and a Retina display, 320 points wide, but the rendering layer will be doubled in size on the Retina display, 640 pixels wide. When converting between OpenGL renderer space and view space, the scale factor has to be taken into account, as in the sketch below.
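For example (a sketch; the wiring to your engine is assumed), a touch location in points would be scaled before being unprojected:
// Convert a touch location (points) into renderbuffer coordinates (pixels)
UITouch *touch = [touches anyObject];
CGPoint p = [touch locationInView:self];
vec2 winPos( p.x * self.contentScaleFactor,
             p.y * self.contentScaleFactor );
Ray ray = m_renderingEngine->GetRayVector( winPos ); // engine pointer name assumed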
Added:
Try changing the order in which the width and height parameters are obtained and set inside the initialization code.
Instead of this:
int width = CGRectGetWidth(rect); // Always 320
int height = CGRectGetHeight(rect); // Always 480 (status bar not displayed)
// Initialise the main code
m_applicationEngine->Initialise(width, height);
// This is the key c++ code invoked in Initialise call shown here indented
// Setup viewport
LowerLeft = ivec2(0,0);
ViewportSize = ivec2(width,height);
// Code to create vertex and index buffers not shown here
// …
// Extract width and height from the color buffer.
int width, height;
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES,
GL_RENDERBUFFER_WIDTH_OES, &width);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES,
GL_RENDERBUFFER_HEIGHT_OES, &height);
try this order (don't use the size from the view):
// Code to create vertex and index buffers not shown here
// …
// Extract width and height from the color buffer.
int width, height;
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES,
GL_RENDERBUFFER_WIDTH_OES, &width);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES,
GL_RENDERBUFFER_HEIGHT_OES, &height);
// Initialise the main code
m_applicationEngine->Initialise(width, height);
// This is the key c++ code invoked in Initialise call shown here indented
// Setup viewport
LowerLeft = ivec2(0,0);
ViewportSize = ivec2(width,height);
Also make sure you have set the UIViewController parameter for full-screen layout:
self.wantsFullScreenLayout = YES;
After that, on the iPhone 4 the width and height should be exactly 640x960, and contentScaleFactor should be 2.
Note also that layoutSubviews is a standard UIView method, and it is the only place where I get the screen size and adjust the projection (frustum) matrix.
Well... I am feeling somewhat foolish now...
The problem was that the view I was using was actually 436 pixels high, which I had set eons ago when experimenting with leaving room for a common navigation bar on the main window, which I no longer use.
Setting this back to 480 solved the problem.
Apologies to those who looked at this and particularly those who responded.
I am now going to go and put myself out of my misery after months of frustration!

How do you render OpenGL-ES to an external screen using the VGA out adapter?

I've been developing a 3D program for the iPad and iPhone and would like to be able to render it to an external screen. According to my understanding, you have to do something similar to the code below to implement it (found at Sunsetlakesoftware.com):
if ([[UIScreen screens] count] > 1)
{
    // External screen attached
}
else
{
    // Only local screen present
}

CGRect externalBounds = [externalScreen bounds];
externalWindow = [[UIWindow alloc] initWithFrame:externalBounds];
UIView *backgroundView = [[UIView alloc] initWithFrame:externalBounds];
backgroundView.backgroundColor = [UIColor whiteColor];
[externalWindow addSubview:backgroundView];
[backgroundView release];
externalWindow.screen = externalScreen;
[externalWindow makeKeyAndVisible];
However, I'm not sure what to change to do this in an OpenGL project. Does anyone know what you would do to implement this in the default OpenGL project for iPad or iPhone in Xcode?
All you need to do to render OpenGL ES content on the external display is to either create a UIView that is backed by a CAEAGLLayer and add it as a subview of the backgroundView above, or take such a view and move it to be a subview of backgroundView.
In fact, you can remove the backgroundView if you want and just place your OpenGL-hosting view directly on the externalWindow UIWindow instance. That window is attached to the UIScreen instance that represents the external display, so anything placed on it will show on that display. This includes OpenGL ES content.
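As a minimal sketch of that wiring (glView is assumed to be your CAEAGLLayer-backed view, and context and colorRenderbuffer your existing GL objects; error handling omitted):
UIScreen *externalScreen = [[UIScreen screens] objectAtIndex:1];
CGRect externalBounds = [externalScreen bounds];
externalWindow = [[UIWindow alloc] initWithFrame:externalBounds];
externalWindow.screen = externalScreen;

glView.frame = externalBounds;
[externalWindow addSubview:glView]; // re-parents glView onto the external window
[externalWindow makeKeyAndVisible];

// The layer size changed, so re-allocate the color renderbuffer to match
glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
[context renderbufferStorage:GL_RENDERBUFFER_OES
                fromDrawable:(CAEAGLLayer *)glView.layer];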
There does appear to be an issue with particular types of OpenGL ES content, as you can see in the experimental support I've tried to add to my Molecules application. If you look in the source code there, I attempt to migrate my rendering view to an external display, but it never appears. I have done the same with other OpenGL ES applications and had their content render fine, so I believe there might be an issue with the depth buffer on the external display. I'm still working to track that down.
I've figured out how to get ANY OpenGL ES content to render onto an external display. It's actually really straightforward. You just copy your renderbuffer to a UIImage and then display that UIImage on the external screen's view. The code to take a snapshot of your renderbuffer is below:
- (UIImage *)snapshot:(UIView *)eaglview
{
    // Get the size of the backing CAEAGLLayer
    GLint backingWidth, backingHeight;
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, defaultFramebuffer);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));
    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel;
    // otherwise, use kCGImageAlphaPremultipliedLast
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);
    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        CGFloat scale = eaglview.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to UIGraphicsBeginImageContext
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }
    CGContextRef cgcontext = UIGraphicsGetCurrentContext();
    // The UIKit coordinate system is upside down relative to the GL/Quartz coordinate system;
    // flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);
    return image;
}
Although, for some reason, I've never been able to get glGetRenderbufferParameterivOES to return the proper backingWidth and backingHeight, so I've had to use my own function to calculate those. Just pop this into your rendering implementation and place the result onto the external screen using a timer, as sketched below. If anyone can make any improvements to this method, please let me know.
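A sketch of the timer-driven display (externalImageView is assumed to be a UIImageView already added to a window on the external UIScreen, and glView the EAGL view being mirrored):
- (void)startMirroring
{
    // 10-15 fps is about as fast as this CPU-bound copy can comfortably go
    [NSTimer scheduledTimerWithTimeInterval:(1.0 / 15.0)
                                     target:self
                                   selector:@selector(updateExternalScreen:)
                                   userInfo:nil
                                    repeats:YES];
}

- (void)updateExternalScreen:(NSTimer *)timer
{
    externalImageView.image = [self snapshot:glView];
}
Bear in mind that round-tripping through glReadPixels and a UIImage every frame is expensive, so rendering directly to a view on the external window (as described above) remains the faster path when it works.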

OpenGL ES render to texture, then draw texture

I'm trying to render to a texture, then draw that texture to the screen using OpenGL ES on the iPhone. I'm using this question as a starting point, and doing the drawing in a subclass of Apple's demo EAGLView.
Instance variables:
GLuint textureFrameBuffer;
Texture2D * texture;
To initialize the frame buffer and texture, I'm doing this:
GLint oldFBO;
glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, &oldFBO);
// initWithData results in a white image on the device (works fine in the simulator)
texture = [[Texture2D alloc] initWithImage:[UIImage imageNamed:@"blank320.png"]];
// create framebuffer
glGenFramebuffersOES(1, &textureFrameBuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, textureFrameBuffer);
// attach the texture as the color attachment
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, texture.name, 0);
if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES)
    NSLog(@"incomplete");
glBindFramebufferOES(GL_FRAMEBUFFER_OES, oldFBO);
Now, if I simply draw my scene to the screen as usual, it works fine:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// draw some triangles, complete with vertex normals
[contentDelegate draw];
[self swapBuffers];
But, if I render to 'textureFrameBuffer', then draw 'texture' to the screen, the resulting image is upside down and "inside out". That is, it looks as though the normals of the 3d objects are pointing inward rather than outward -- the frontmost face of each object is transparent, and I can see the inside of the back face. Here's the code:
GLint oldFBO;
glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, &oldFBO);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, textureFrameBuffer);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// draw some polygons
[contentDelegate draw];
glBindFramebufferOES(GL_FRAMEBUFFER_OES, oldFBO);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
glColor4f(1, 1, 1, 1);
[texture drawInRect:CGRectMake(0, 0, 320, 480)];
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_2D);
[self swapBuffers];
I can flip the image right-side up easily enough by reordering the (glTexCoordPointer) coordinates accordingly (in Texture2D's drawInRect method), but that doesn't solve the "inside-out" issue.
I tried replacing the Texture2D texture with a manually created OpenGL texture, and the result was the same. Drawing a Texture2D loaded from a PNG image works fine.
As for drawing the objects, each vertex has a unit normal specified, and GL_NORMALIZE is enabled.
glVertexPointer(3, GL_FLOAT, 0, myVerts);
glNormalPointer(GL_FLOAT, 0, myNormals);
glDrawArrays(GL_TRIANGLES, 0, numVerts);
Everything draws fine when it's rendered to the screen; GL_DEPTH_TEST is enabled and is working great.
Any suggestions as to how to fix this? Thanks!
The interesting part of this is that you're seeing a different result when drawing directly to the backbuffer. Since you're on the iPhone platform, you're always drawing to an FBO, even when you're drawing to the backbuffer.
Make sure that you have a depth buffer attached to your offscreen FBO. In your initialization code, you might want to add the following snippet right after the glBindFramebufferOES(...).
// attach depth buffer
GLuint depthRenderbuffer;
glGenRenderbuffersOES(1, &depthRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, width, height);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);
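After attaching the depth buffer, it is worth re-checking completeness, since a size mismatch between the texture and the new depth renderbuffer (width and height here must equal the texture's dimensions) will quietly break the FBO:
if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES)
    NSLog(@"Offscreen framebuffer incomplete after attaching depth buffer");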