OpenGL ES 2.0 FBO creation goes wrong with unknown error - iPhone

I've been struggling with this for a while now, and this code crashes for reasons unknown to me. I'm creating an FBO and binding a texture, and then the very first glDrawArrays() crashes with "EXC_BAD_ACCESS" in the iPhone Simulator.
Here's the code I use to create the FBO (and bind the texture, etc.):
glGenFramebuffers(1, &lastFrameBuffer);
glGenRenderbuffers(1, &lastFrameDepthBuffer);
glGenTextures(1, &lastFrameTexture);
glBindTexture(GL_TEXTURE1, lastFrameTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 768, 1029, 0, GL_RGBA, GL_UNSIGNED_SHORT_5_6_5, NULL);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
//Bind/alloc depthbuf
glBindRenderbuffer(GL_RENDERBUFFER, lastFrameDepthBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, 768, 1029);
glBindFramebuffer(GL_FRAMEBUFFER, lastFrameBuffer);
//binding the texture to the FBO :D
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, lastFrameTexture, 0);
// attach the renderbuffer to depth attachment point
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, lastFrameDepthBuffer);
[self checkFramebufferStatus];
As you can see, this lives inside an object; checkFramebufferStatus looks like this:
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
switch(status)
{
    case GL_FRAMEBUFFER_COMPLETE:
        JNLogString(@"Framebuffer complete.");
        return TRUE;
    case GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT:
        JNLogString(@"[ERROR] Framebuffer incomplete: Attachment is NOT complete.");
        return FALSE;
    case GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT:
        JNLogString(@"[ERROR] Framebuffer incomplete: No image is attached to FBO.");
        return FALSE;
    case GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS:
        JNLogString(@"[ERROR] Framebuffer incomplete: Attached images have different dimensions.");
        return FALSE;
    case GL_FRAMEBUFFER_UNSUPPORTED:
        JNLogString(@"[ERROR] Unsupported by FBO implementation.");
        return FALSE;
    default:
        JNLogString(@"[ERROR] Unknown error.");
        return FALSE;
}
JNLogString is just an NSLog, and in this case it gives me:
2010-04-03 02:46:54.854 Bubbleeh[6634:207] ES2Renderer.m:372 [ERROR] Unknown error.
When I call it right there.
So it crashes, the diagnostic tells me there's an unknown error, and I'm kind of stuck. I basically copied the code from the OpenGL ES 2.0 Programming Guide...
What am I doing wrong?

glBindTexture(GL_TEXTURE1, lastFrameTexture);
That's not allowed. I was trying to bind the texture to unit 1 (GL_TEXTURE1), but selecting the unit is the job of glActiveTexture(), not glBindTexture(), which expects the texture target (GL_TEXTURE_2D, GL_TEXTURE_3D, etc.), not a texture unit. To place the texture in texture unit 1 I now have the following code, which I believe is correct:
//Bind 2D Texture lastFrameTexture to texture unit 1
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, lastFrameTexture);
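A detail worth knowing here: the GL_TEXTUREi enums are consecutive integers, so unit i can always be selected as GL_TEXTURE0 + i. The small sketch below restates the constants from gl2.h so it compiles without GL headers; the helper name is illustrative, not from the original code.

```c
#include <assert.h>

/* The texture-unit enums are consecutive integers (GL_TEXTURE0 is 0x84C0
   in gl2.h), so unit i can be selected as GL_TEXTURE0 + i. Constants are
   restated here so the snippet compiles without GL headers. */
#define GL_TEXTURE0 0x84C0
#define GL_TEXTURE1 0x84C1

static unsigned texture_unit(int i)
{
    return GL_TEXTURE0 + i;   /* pass this value to glActiveTexture() */
}
```

This is handy when looping over several textures: glActiveTexture(GL_TEXTURE0 + i) followed by glBindTexture(GL_TEXTURE_2D, handles[i]).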

Try calling glGetError() after each GL call, especially after glTexImage2D().

If you use Xcode, add a breakpoint for OpenGL errors in the Breakpoint Navigator ([+] -> [Add OpenGL ES Error Breakpoint]). All OpenGL problems will then be flagged on the line where they occur.

Related

Offscreen FBO on iPhone

I'm working on an iPhone 4, iOS 6.0.1. I'm creating an offscreen FBO with a different size than the main FBO. I use the following code to do that, where cxFBO and cyFBO are twice the size of the main FBO:
glGenRenderbuffers(1, &_colorRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA, cxFBO, cyFBO);
glGenFramebuffers(1, &_frameBufferID);
glBindFramebuffer(GL_FRAMEBUFFER, _frameBufferID);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _colorRenderBuffer);
glGenTextures(1, &_fboTextureID);
glBindTexture(GL_TEXTURE_2D, _fboTextureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, cxFBO, cyFBO, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _fboTextureID, 0);
I'm then drawing to it after calling glBindFramebuffer(GL_FRAMEBUFFER, _frameBufferID). I don't want to post the entire drawing routine with shaders. I think I'm not doing anything special, but if you think it matters, I will post a short example. The problem is that only one quarter of the texture is filled with vertices, the other 3 quarters are the color set by glClearColor().
What am I doing wrong? I'm especially a bit confused about the formats, i.e. GL_RGBA. Should I rather use GL_RGBA_OES? Everywhere?
The problem was that I forgot to call glViewport(). Kind of obvious if you think about it ;-)
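The "one quarter filled" symptom follows directly from the viewport transform. glViewport(x, y, w, h) maps NDC coordinates in [-1, 1] to window coordinates as (ndc + 1) * size / 2 + origin (per the ES 2.0 spec), so if the viewport is still the main FBO's size (half of cxFBO/cyFBO in each dimension), NDC +1 lands at only half the texture width. A small sketch of that mapping, as plain arithmetic:

```c
#include <assert.h>

/* The viewport transform that glViewport(x, y, w, h) establishes:
   NDC coordinates in [-1, 1] map to window coordinates as
   (ndc + 1) * size / 2 + origin. With a stale, half-sized viewport,
   the whole scene lands in the lower-left quarter of the FBO texture. */
static float ndc_to_window(float ndc, float origin, float size)
{
    return (ndc + 1.0f) * size / 2.0f + origin;
}
```

For example, with a 1024-wide FBO but a stale 512-wide viewport, NDC x = 1 maps to window x = 512, i.e. only half the texture width is ever covered.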

Low resolution render to texture result when using perspective projection

I'm currently working on a camera app for the iPhone in which I take the camera input, convert that to an OpenGL texture and then map it onto a 3D Object (currently a plane in perspective projection, for the sake of simplicity).
After mapping the camera input to this 3D plane I then render this 3D scene to a texture which is then used as a new texture for a plane in orthographic space (to apply additional filters in my fragment shader).
As long as I keep everything in orthographic projection, the resolution of my render texture is pretty high. But from the moment I put my plane in perspective projection the resolution of my render texture is very low.
Comparison (screenshots omitted): as you can see, the last image has a very low resolution compared to the other two, so I'm guessing I'm doing something wrong.
I'm currently not using multisampling on any of my framebuffers and I'm in doubt if I will need it anyway to fix my problem since the orthographic scene works perfectly.
The textures I render into are 2048x2048 (will eventually be outputted as an image to the iPhone camera roll).
Here are some parts of my source code that I think might be relevant:
Code to create the framebuffer that gets outputted to the screen:
// Color renderbuffer
glGenRenderbuffers(1, &colorRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderBuffer);
[context renderbufferStorage:GL_RENDERBUFFER
fromDrawable:(CAEAGLLayer*)glView.layer];
// Depth renderbuffer
glGenRenderbuffers(1, &depthRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
// Framebuffer
glGenFramebuffers(1, &defaultFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, defaultFrameBuffer);
// Associate renderbuffers with framebuffer
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_RENDERBUFFER, colorRenderBuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
GL_RENDERBUFFER, depthRenderbuffer);
TextureRenderTarget class:
void TextureRenderTarget::init()
{
// Color renderbuffer
glGenRenderbuffers(1, &colorRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8_OES,
width, height);
// Depth renderbuffer
glGenRenderbuffers(1, &depthRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16,
width, height);
// Framebuffer
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
// Associate renderbuffers with framebuffer
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_RENDERBUFFER, colorRenderBuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
GL_RENDERBUFFER, depthRenderbuffer);
// Texture and associate with framebuffer
texture = new RenderTexture(width, height);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_TEXTURE_2D, texture->getHandle(), 0);
// Check for errors
checkStatus();
}
void TextureRenderTarget::bind() const
{
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderBuffer);
}
void TextureRenderTarget::unbind() const
{
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
}
And finally, a snippet on how I create the render texture and fill it with pixels:
void Texture::generate()
{
// Create texture to render into
glActiveTexture(unit);
glGenTextures(1, &handle);
glBindTexture(GL_TEXTURE_2D, handle);
// Configure texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}
void Texture::setPixels(const GLvoid* pixels)
{
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA,
GL_UNSIGNED_BYTE, pixels);
updateMipMaps();
}
void Texture::updateMipMaps() const
{
glBindTexture(GL_TEXTURE_2D, handle);
glGenerateMipmap(GL_TEXTURE_2D);
}
void Texture::bind(GLenum unit)
{
this->unit = unit;
if(unit != -1)
{
glActiveTexture(unit);
glBindTexture(GL_TEXTURE_2D, handle);
}
else
{
cout << "Texture::bind -> Couldn't activate unit -1" << endl;
}
}
void Texture::unbind()
{
glBindTexture(GL_TEXTURE_2D, 0);
}
I would assume that texture mapping is not exact with perspective projection.
Could you replace the camera image with a checkerboard (a chess grid with a 1 px cell size)? Then compare the rendered checkers in the orthographic and perspective projections: the grid should not be blurred. If it is, the problem is in the projection matrix, which needs some bias for direct texel-to-pixel mapping.
If you have a device, you can inspect the rendering steps with the OpenGL ES Frame Capture feature in Xcode; there you will see exactly where the image becomes blurred.
As for mipmapping: it's not a good idea for textures created on the fly.
The blurring may be caused by the plane being positioned at half pixels in screen coordinates. Since going from orthographic to perspective transform changes the position of the plane, the plane will likely not be positioned at the same screen coordinate between the two transforms.
Similar blurring occurs when you move a UIImageView from a frame origin of (0.0, 0.0) to (0.5, 0.5) on a standard-res display, or to (0.25, 0.25) on Retina displays.
The fact that your texture is very high-res may not help in this case, since the number of pixels actually sampled is bounded.
Try moving the plane a small distance in screen x,y coordinates and see if the blurring disappears.
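To make that experiment concrete: the viewport maps [-1, 1] onto `size` pixels, so one pixel spans 2.0/size in NDC and half a pixel spans 1.0/size. A tiny helper (illustrative, not from the original post) for nudging the plane's clip-space position by half a pixel:

```c
#include <assert.h>

/* A half-pixel offset expressed in NDC: one pixel is 2.0/size wide in
   clip space, so half a pixel is 1.0/size. Add this to the plane's
   clip-space x or y to test whether the blur is a half-pixel sampling
   issue. (Hypothetical helper for the experiment suggested above.) */
static float half_pixel_ndc(float viewport_size)
{
    return 1.0f / viewport_size;
}
```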
I finally solved my problem by merging the first and second step of my rendering process.
The first step used to crop and flip the camera texture and render it to a new texture; that newly rendered texture was then mapped onto a 3D plane and the result rendered to yet another texture.
I merged these two steps by changing the texture coordinates of my 3D plane so that I can use the original camera texture directly onto this plane.
I don't know exactly what was causing this loss of quality between the two rendered textures, but as a hint for the future: don't render to a texture and then reuse that result for another render to texture. Merging it all together is better for performance and also avoids color-shifting issues.
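The "merge the passes" idea above can be sketched as plain texture-coordinate arithmetic: instead of rendering a cropped, flipped copy of the camera texture first, bake the crop and vertical flip into the plane's UVs. The crop values here are normalized [0, 1] numbers, and all names are illustrative, not taken from the original code:

```c
#include <assert.h>

/* Map the plane's input UV (0..1 across the quad) into a cropped,
   vertically flipped region of the camera texture, so one render pass
   replaces the crop/flip pre-pass. crop_x/crop_y/crop_w/crop_h are
   normalized [0, 1] values. (Illustrative sketch, not the poster's code.) */
static void cropped_flipped_uv(float u_in, float v_in,
                               float crop_x, float crop_y,
                               float crop_w, float crop_h,
                               float *u_out, float *v_out)
{
    *u_out = crop_x + u_in * crop_w;
    /* flip vertically inside the crop rectangle */
    *v_out = crop_y + (1.0f - v_in) * crop_h;
}
```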

How can I rewrite the GLCameraRipple sample without using iOS 5.0-specific features?

How can I rewrite Apple's GLCameraRipple example so that it doesn't require iOS 5.0?
I need to have it run on iOS 4.x, so I cannot use CVOpenGLESTextureCacheCreateTextureFromImage. What should I do?
As a follow-on, I'm using the code below to provide YUV data rather than RGB, but the picture is not right: the screen is green. It seems the UV plane doesn't work.
CVPixelBufferLockBaseAddress(cameraFrame, 0);
int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
int bufferWidth = CVPixelBufferGetWidth(cameraFrame);
// Create a new texture from the camera frame data, display that using the shaders
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &_lumaTexture);
glBindTexture(GL_TEXTURE_2D, _lumaTexture);
glUniform1i(UNIFORM[Y], 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// This is necessary for non-power-of-two textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, bufferWidth, bufferHeight, 0, GL_LUMINANCE,
GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0));
glActiveTexture(GL_TEXTURE1);
glGenTextures(1, &_chromaTexture);
glBindTexture(GL_TEXTURE_2D, _chromaTexture);
glUniform1i(UNIFORM[UV], 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// This is necessary for non-power-of-two textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Using BGRA extension to pull in video frame data directly
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, bufferWidth/2, bufferHeight/2, 0, GL_LUMINANCE_ALPHA,
GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 1));
[self drawFrame];
glDeleteTextures(1, &_lumaTexture);
glDeleteTextures(1, &_chromaTexture);
CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
How can I fix this?
If you switch the pixel format from kCVPixelFormatType_420YpCbCr8BiPlanarFullRange to kCVPixelFormatType_32BGRA (at line 315 of RippleViewController) then captureOutput:didOutputSampleBuffer:fromConnection: will receive a sample buffer in which the image buffer can be uploaded straight to OpenGL via glTexImage2D (or glTexSubImage2D if you want to keep your texture sized as a power of two). That works because all iOS devices to date support the GL_APPLE_texture_format_BGRA8888 extension, allowing you to specify an otherwise non-standard format of GL_BGRA.
So you'd create a texture somewhere in advance with glGenTextures and replace line 235 with something like:
glBindTexture(GL_TEXTURE_2D, myTexture);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
glTexSubImage2D(GL_TEXTURE_2D, 0,
                0, 0,
                CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer),
                GL_BGRA, GL_UNSIGNED_BYTE,
                CVPixelBufferGetBaseAddress(pixelBuffer));
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
You may want to check that the result of CVPixelBufferGetBytesPerRow is four times the result of CVPixelBufferGetWidth; I'm uncertain from the documentation whether it's guaranteed always to be (which, pragmatically, probably means that it isn't), but as long as it's a multiple of four you can just supply CVPixelBufferGetBytesPerRow divided by four as your pretend width, given that you're uploading a sub image anyway.
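The "pretend width" trick can be written out as plain arithmetic: for a BGRA buffer, each texel is four bytes, so if CVPixelBufferGetBytesPerRow reports more than width * 4 (row padding), you upload bytesPerRow / 4 texels per row and let the sub-image upload cover only the real width. A sketch, with the function name being illustrative:

```c
#include <assert.h>
#include <stddef.h>

/* The "pretend width" for a padded BGRA row: upload bytes_per_row / 4
   texels per row so the stride matches, and let glTexSubImage2D's
   width argument ignore the padding. Returns 0 when the row stride is
   not a multiple of four bytes, in which case the trick does not apply. */
static size_t pretend_width(size_t bytes_per_row)
{
    if (bytes_per_row % 4 != 0)
        return 0;
    return bytes_per_row / 4;
}
```

For example, a 612-pixel-wide frame is often delivered with a 2560-byte row stride, which gives a pretend width of 640 texels.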
EDIT: in response to the follow-on question posted below as a comment: if you want to stick with receiving frames and making them available to the GPU as YUV, the code becomes visually ugly, because what you're handed is a structure pointing to the various channel components, but you'd want something like this:
// lock the base address, pull out the struct that'll show us where the Y
// and CbCr information is actually held
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CVPlanarPixelBufferInfo_YCbCrBiPlanar *info = CVPixelBufferGetBaseAddress(pixelBuffer);
// okay, upload Y. You'll want to communicate this texture to the
// SamplerY uniform within the fragment shader.
glBindTexture(GL_TEXTURE_2D, yTexture);
uint8_t *yBaseAddress = (uint8_t *)info + EndianU32_BtoN(info->componentInfoY.offset);
uint32_t yRowBytes = EndianU32_BtoN(info->componentInfoY.rowBytes);
/* TODO: check that yRowBytes is equal to CVPixelBufferGetWidth(pixelBuffer);
otherwise you'll need to shuffle memory a little */
glTexSubImage2D(GL_TEXTURE_2D, 0,
0, 0,
CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer),
GL_LUMINANCE, GL_UNSIGNED_BYTE,
yBaseAddress);
// we'll also need to upload the CbCr part of the buffer, as a two-channel
// (ie, luminance + alpha) texture. This texture should be supplied to
// the shader for the SamplerUV uniform.
glBindTexture(GL_TEXTURE_2D, uvTexture);
uint8_t *uvBaseAddress = (uint8_t *)info + EndianU32_BtoN(info->componentInfoCbCr.offset);
uint32_t uvRowBytes = EndianU32_BtoN(info->componentInfoCbCr.rowBytes);
/* TODO: a check on uvRowBytes, as above */
glTexSubImage2D(GL_TEXTURE_2D, 0,
0, 0,
CVPixelBufferGetWidth(pixelBuffer)/2, CVPixelBufferGetHeight(pixelBuffer)/2,
GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE,
uvBaseAddress);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
The iOS 5.0 fast texture upload capabilities can make for very fast uploading of camera frames and extraction of texture data, which is why Apple uses them in their latest sample code. For camera data, I've seen 640x480 frame upload times go from 9 ms to 1.8 ms using these iOS 5.0 texture caches on an iPhone 4S, and for movie capturing I've seen more than a fourfold improvement when switching to them.
That said, you still might want to provide a fallback for stragglers who have not yet updated to iOS 5.x. I do this in my open source image processing framework by using a runtime check for the texture upload capability:
+ (BOOL)supportsFastTextureUpload;
{
return (CVOpenGLESTextureCacheCreate != NULL);
}
If this returns NO, I use the standard upload process that we have had since iOS 4.0:
CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
int bufferWidth = CVPixelBufferGetWidth(cameraFrame);
int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
CVPixelBufferLockBaseAddress(cameraFrame, 0);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));
// Do your OpenGL ES rendering here
CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
GLCameraRipple has one quirk in its upload process, and that is the fact that it uses YUV planar frames (split into Y and UV images), instead of one BGRA image. I get pretty good performance from my BGRA uploads, so I haven't seen the need to work with YUV data myself. You could either modify GLCameraRipple to use BGRA frames and the above code, or rework what I have above into YUV planar data uploads.
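For completeness, here is what the fragment shader does with the two planes, sketched for a single pixel in plain C. This uses the full-range BT.601 equations; the exact matrix depends on the pixel format you request (full range vs. video range), so treat the coefficients as illustrative:

```c
#include <assert.h>

/* Recombine one pixel from the Y plane and the interleaved CbCr plane
   using full-range BT.601:
     R = Y + 1.402 (Cr - 128)
     G = Y - 0.344 (Cb - 128) - 0.714 (Cr - 128)
     B = Y + 1.772 (Cb - 128)
   The coefficients vary with the requested pixel format, so this is a
   sketch of the idea rather than the exact shader math. */
static void ycbcr_to_rgb(int y, int cb, int cr, float *r, float *g, float *b)
{
    *r = y + 1.402f * (cr - 128);
    *g = y - 0.344f * (cb - 128) - 0.714f * (cr - 128);
    *b = y + 1.772f * (cb - 128);
}
```

This also explains the "green screen" symptom in the question above: if the CbCr texture samples as zero, both chroma terms read as -128, which pushes red and blue down and leaves a greenish image.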

glFramebufferTexture2D fails on iPhone for certain texture sizes

When I try to attach a texture to a framebuffer, glCheckFramebufferStatus reports GL_FRAMEBUFFER_UNSUPPORTED for certain texture sizes. I've tested on both a 2nd and 4th generation iPod Touch. The sizes of texture that fail are not identical between the two models.
Here are some interesting results:
2nd generation - 8x8 failed, 16x8 failed, but 8x16 succeeded!
4th generation - 8x8 succeeded, 8x16 succeeded, but 16x8 failed!
Here's some code I used to test attaching textures of different sizes:
void TestFBOTextureSize(int width, int height)
{
GLuint framebuffer, texture;
// Create framebuffer
glGenFramebuffersOES(1, &framebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
// Create texture
glGenTextures(1,&texture);
glBindTexture(GL_TEXTURE_2D,texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D,0);
// Attach texture to framebuffer
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, texture, 0);
GLenum error = glGetError();
GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
if (status == GL_FRAMEBUFFER_COMPLETE_OES)
    NSLog(@"%dx%d Succeeded!", width, height);
else
    NSLog(@"%dx%d Failed: %x %x %d %d", width, height, status, error, texture, framebuffer);
// Cleanup
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, 0, 0);
glDeleteTextures(1, &texture);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);
glDeleteFramebuffersOES(1, &framebuffer);
}
void TestFBOTextureSizes()
{
int width,height;
for (width=1; width<=1024; width<<=1)
{
for (height=1; height<=1024; height<<=1)
TestFBOTextureSize(width,height);
}
}
It seems that as long as both dimensions are at least 16 pixels then everything works ok on both devices. The thing that bothers me, though, is that I haven't seen anything written about texture size requirements for attaching to a framebuffer object. One solution, for now, would be to restrict my texture sizes to be at least 16 pixels, but might this break in the future or already be broken on some device I haven't tried? I could also perform this test code at startup in order to dynamically figure out which texture sizes are allowed, but that seems a bit hokey.
I have experienced a similar problem when trying to render to a texture of size 480x320 (full screen without resolution scaling) on an iPod touch 4. When I call glCheckFramebufferStatus() it returns GL_FRAMEBUFFER_UNSUPPORTED. My code:
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 480, 320, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, 0);
glBindTexture(GL_TEXTURE_2D, 0);
glGenFramebuffers(1, &frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
// report error
}
Investigating this problem, I found that the texture has to be a complete, usable OpenGL ES object before it can serve as a render target; it should be ready to be bound and sampled. So to fix the error I had to set some texture parameters. Because I use a non-power-of-two texture, I have to set GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T to GL_CLAMP_TO_EDGE (the default is GL_REPEAT) and GL_TEXTURE_MIN_FILTER to GL_NEAREST or GL_LINEAR (the default is GL_NEAREST_MIPMAP_LINEAR) for the texture to be usable.
I couldn't find out what the problem with 16x8 is, but 16x9 and 17x8 work fine once these parameters are set. I hope this information is helpful for you.
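Since the non-power-of-two restriction comes up in both answers (ES 2.0 only guarantees NPOT textures with CLAMP_TO_EDGE wrapping and no mipmaps), a cheap power-of-two check is useful when deciding which sizes are safe to attach, or whether the relaxed parameters are needed. A standard bit-trick version:

```c
#include <assert.h>

/* OpenGL ES 2.0 only guarantees non-power-of-two textures when wrapping
   is GL_CLAMP_TO_EDGE and no mipmaps are used. A power of two has exactly
   one bit set, so n & (n - 1) clears it and leaves zero. */
static int is_power_of_two(unsigned n)
{
    return n != 0 && (n & (n - 1)) == 0;
}
```

For example, the 1029-pixel-tall texture in the first question on this page is NPOT, so it needs the clamped, non-mipmapped parameters described here.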

Troubleshooting OpenGL ES 1.1 textures on the iPhone

I'm trying to draw a texture into an offscreen framebuffer, and its renderbuffer always ends up completely blank (black). The weird thing is, I know the context is set up, and I'm checking for errors using glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) and glGetError(), but neither function says anything is wrong. Are there any other error-checking functions I can call which might shed some light on what's happening?
It's difficult to give you a precise answer without more information. Perhaps you could post some code showing your setup and usage of the renderbuffer?
In the meantime, here is some info about how to properly setup an offscreen framebuffer:
// Remember the FBO being used for the display framebuffer
glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, (GLint *)&SystemFBO);
// Create the texture and the FBO for offscreen frame buffer
glGenTextures(1, &ResultTexture);
glBindTexture(GL_TEXTURE_2D, ResultTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGenFramebuffersOES(1, &ResultFBO);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, ResultFBO);
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, ResultTexture, 0);
// do your rendering to offscreen framebuffer
...
// restore original frame buffer object
glBindFramebufferOES(GL_FRAMEBUFFER_OES, SystemFBO);
// use ResultTexture as usual
glBindTexture(GL_TEXTURE_2D, ResultTexture);
Hope this helps...