Rendering to non-power-of-two texture on iPhone

Is it possible to render to texture with OpenGL ES 1.1 on the iPhone (2G and older)? If I attach a texture to a framebuffer as its color buffer, the texture has to match the size of the renderbuffer, which isn't POT-sized. But OpenGL ES 1.1 requires textures to be POT.
Maybe it can't be done on ES 1.1?

While OpenGL ES 1.1 does not support non-power-of-two textures, newer iOS device models have the extension GL_APPLE_texture_2D_limited_npot, which states:
Conventional OpenGL ES 1.X texturing is limited to images with power-of-two (POT) dimensions. The APPLE_texture_2D_limited_npot extension relaxes these size restrictions for 2D textures. The restrictions remain in place for cube map and 3D textures, if supported. There is no additional procedural or enumerant API introduced by this extension except that an implementation which exports the extension string will allow an application to pass in 2D texture dimensions that may or may not be a power of two.

In the absence of OES_texture_npot, which lifts these restrictions, neither mipmapping nor wrap modes other than CLAMP_TO_EDGE are supported in conjunction with NPOT 2D textures. A NPOT 2D texture with a wrap mode that is not CLAMP_TO_EDGE or a minfilter that is not NEAREST or LINEAR is considered incomplete. If such a texture is bound to a texture unit, it is as if texture mapping were disabled for that texture unit.
You can use the following code to determine if this extension is supported on your device (drawn from Philip Rideout's excellent iPhone 3D Programming book):
const char *extensions = (const char *)glGetString(GL_EXTENSIONS);
bool npot = strstr(extensions, "GL_APPLE_texture_2D_limited_npot") != 0;
On these devices, you should then be able to use non-power-of-two textures as long as you set the proper texture wrapping:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
Unfortunately, the example application I have that renders to a non-power-of-two texture uses OpenGL ES 2.0, so I'm not sure it will help you in this case.

It can be done: all you need to do is get the next power of two that is at least as big as the non-POT size.
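A minimal helper for that rounding might look like this (a sketch; the name NextPowerOfTwo is mine, not an existing API):
// Round n up to the next power of two (leaves n unchanged if it already is one).
static unsigned int NextPowerOfTwo(unsigned int n) {
    unsigned int pot = 1;
    while (pot < n)
        pot <<= 1;
    return pot;
}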
Then generate a framebuffer:
GLuint aFramebuffer;
glGenFramebuffersOES(1, &aFramebuffer);
And a texture:
GLuint aTexturebuffer;
glGenTextures(1, &aTexturebuffer);
Then set up the texture the same way you normally would:
glBindTexture(GL_TEXTURE_2D, aTexturebuffer);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glColor4ub(0, 0, 0, 255);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
int area[] = {0, 0, renderWidth, renderHeight};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, area);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, aFramebuffer);
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, aTexturebuffer, 0);
Here I used the draw_texture extension.
textureWidth and textureHeight are the rounded-up power-of-two sizes, and renderWidth and renderHeight are the renderer's width and height.
Then when you bind aFramebuffer, everything you draw goes into the texture.
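Drawing that texture back to the screen with the draw_texture extension then looks roughly like this (a sketch; systemFramebuffer stands for whatever FBO your view normally renders into):
// Switch back to the on-screen framebuffer and draw the cropped texture
// region at the origin using GL_OES_draw_texture.
glBindFramebufferOES(GL_FRAMEBUFFER_OES, systemFramebuffer);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, aTexturebuffer);
glDrawTexiOES(0, 0, 0, renderWidth, renderHeight);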

How can I rewrite the GLCameraRipple sample without using iOS 5.0-specific features?

How can I rewrite Apple's GLCameraRipple example so that it doesn't require iOS 5.0?
I need to have it run on iOS 4.x, so I cannot use CVOpenGLESTextureCacheCreateTextureFromImage. What should I do?
As a follow-on, I'm using the code below to provide YUV data rather than RGB, but the picture is not right: the screen is green, and the UV plane doesn't seem to work.
CVPixelBufferLockBaseAddress(cameraFrame, 0);
int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
int bufferWidth = CVPixelBufferGetWidth(cameraFrame);
// Create a new texture from the camera frame data, display that using the shaders
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &_lumaTexture);
glBindTexture(GL_TEXTURE_2D, _lumaTexture);
glUniform1i(UNIFORM[Y], 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// This is necessary for non-power-of-two textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, bufferWidth, bufferHeight, 0, GL_LUMINANCE,
GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0));
glActiveTexture(GL_TEXTURE1);
glGenTextures(1, &_chromaTexture);
glBindTexture(GL_TEXTURE_2D, _chromaTexture);
glUniform1i(UNIFORM[UV], 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// This is necessary for non-power-of-two textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Using BGRA extension to pull in video frame data directly
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, bufferWidth/2, bufferHeight/2, 0, GL_LUMINANCE_ALPHA,
GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 1));
[self drawFrame];
glDeleteTextures(1, &_lumaTexture);
glDeleteTextures(1, &_chromaTexture);
CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
How can I fix this?
If you switch the pixel format from kCVPixelFormatType_420YpCbCr8BiPlanarFullRange to kCVPixelFormatType_32BGRA (at line 315 of RippleViewController) then captureOutput:didOutputSampleBuffer:fromConnection: will receive a sample buffer in which the image buffer can be uploaded straight to OpenGL via glTexImage2D (or glTexSubImage2D if you want to keep your texture sized as a power of two). That works because all iOS devices to date support the GL_APPLE_texture_format_BGRA8888 extension, allowing you to specify an otherwise non-standard format of GL_BGRA.
So you'd create a texture somewhere in advance with glGenTextures and replace line 235 with something like:
glBindTexture(GL_TEXTURE_2D, myTexture);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
glTexSubImage2D(GL_TEXTURE_2D, 0,
0, 0,
CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer),
GL_BGRA, GL_UNSIGNED_BYTE,
CVPixelBufferGetBaseAddress(pixelBuffer));
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
You may want to check that the result of CVPixelBufferGetBytesPerRow is four times the result of CVPixelBufferGetWidth. The documentation doesn't seem to guarantee that it always is (which, pragmatically, probably means it isn't), but as long as it's a multiple of four you can supply CVPixelBufferGetBytesPerRow divided by four as your pretend width, given that you're uploading a sub-image anyway.
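That check might look like this (a sketch, continuing from the snippet above):
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
if (bytesPerRow != width * 4) {
    // Rows are padded; as long as the stride is a multiple of four, upload
    // bytesPerRow / 4 pixels per row and crop the padding with texture coords.
    NSAssert(bytesPerRow % 4 == 0, @"Unexpected row stride");
    width = bytesPerRow / 4;
}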
EDIT: in response to the follow-on question posted below as a comment: if you want to stick with receiving frames and making them available to the GPU in YUV, the code becomes visually ugly, because what you get back is a structure pointing to the various channel components. You'd want something like this:
// lock the base address, pull out the struct that'll show us where the Y
// and CbCr information is actually held
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CVPlanarPixelBufferInfo_YCbCrBiPlanar *info = CVPixelBufferGetBaseAddress(pixelBuffer);
// okay, upload Y. You'll want to communicate this texture to the
// SamplerY uniform within the fragment shader.
glBindTexture(GL_TEXTURE_2D, yTexture);
uint8_t *yBaseAddress = (uint8_t *)info + EndianU32_BtoN(info->componentInfoY.offset);
uint32_t yRowBytes = EndianU32_BtoN(info->componentInfoY.rowBytes);
/* TODO: check that yRowBytes is equal to CVPixelBufferGetWidth(pixelBuffer);
otherwise you'll need to shuffle memory a little */
glTexSubImage2D(GL_TEXTURE_2D, 0,
0, 0,
CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer),
GL_LUMINANCE, GL_UNSIGNED_BYTE,
yBaseAddress);
// we'll also need to upload the CbCr part of the buffer, as a two-channel
// (ie, luminance + alpha) texture. This texture should be supplied to
// the shader for the SamplerUV uniform.
glBindTexture(GL_TEXTURE_2D, uvTexture);
uint8_t *uvBaseAddress = (uint8_t *)info + EndianU32_BtoN(info->componentInfoCbCr.offset);
uint32_t uvRowBytes = EndianU32_BtoN(info->componentInfoCbCr.rowBytes);
/* TODO: a check on uvRowBytes, as above */
glTexSubImage2D(GL_TEXTURE_2D, 0,
0, 0,
CVPixelBufferGetWidth(pixelBuffer)/2, CVPixelBufferGetHeight(pixelBuffer)/2,
GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE,
uvBaseAddress);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
The iOS 5.0 fast texture upload capabilities can make for very fast uploading of camera frames and extraction of texture data, which is why Apple uses them in their latest sample code. For camera data, I've seen 640x480 frame upload times go from 9 ms to 1.8 ms using these iOS 5.0 texture caches on an iPhone 4S, and for movie capturing I've seen more than a fourfold improvement when switching to them.
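For reference, the iOS 5.0 path looks roughly like this (a sketch with error handling omitted; context is your EAGLContext, and names like textureCache are mine):
// One-time setup: create a texture cache tied to the GL context.
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, context, NULL, &textureCache);

// Per frame: wrap the BGRA pixel buffer in a GL texture without a manual copy.
CVOpenGLESTextureRef texture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
    cameraFrame, NULL, GL_TEXTURE_2D, GL_RGBA,
    bufferWidth, bufferHeight, GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);
glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
// ... render with the texture ...
CFRelease(texture);
CVOpenGLESTextureCacheFlush(textureCache, 0);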
That said, you still might want to provide a fallback for stragglers who have not yet updated to iOS 5.x. I do this in my open source image processing framework by using a runtime check for the texture upload capability:
+ (BOOL)supportsFastTextureUpload;
{
return (CVOpenGLESTextureCacheCreate != NULL);
}
If this returns NO, I use the standard upload process that we have had since iOS 4.0:
CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
int bufferWidth = CVPixelBufferGetWidth(cameraFrame);
int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
CVPixelBufferLockBaseAddress(cameraFrame, 0);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));
// Do your OpenGL ES rendering here
CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
GLCameraRipple has one quirk in its upload process, and that is the fact that it uses YUV planar frames (split into Y and UV images), instead of one BGRA image. I get pretty good performance from my BGRA uploads, so I haven't seen the need to work with YUV data myself. You could either modify GLCameraRipple to use BGRA frames and the above code, or rework what I have above into YUV planar data uploads.

Troubleshooting OpenGL ES 1.1 textures on the iPhone

I'm trying to draw a texture into an offscreen framebuffer, and its renderbuffer always ends up completely blank (black). The weird thing is, I know the context is set up, and I'm checking for errors using glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) and glGetError(), but neither function says anything is wrong. Are there any other error-checking functions I can call which might shed some light on what's happening?
Difficult to give you a precise answer without more information. Could you perhaps post some code showing your setup and usage of the render buffer?
In the meantime, here is some info about how to properly setup an offscreen framebuffer:
// Remember the FBO being used for the display framebuffer
glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, (GLint *)&SystemFBO);
// Create the texture and the FBO for offscreen frame buffer
glGenTextures(1, &ResultTexture);
glBindTexture(GL_TEXTURE_2D, ResultTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGenFramebuffersOES(1, &ResultFBO);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, ResultFBO);
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, ResultTexture, 0);
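// (My addition, not part of the original answer: verify completeness
// right after attaching the texture, using the same OES suffixes.)
GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
if (status != GL_FRAMEBUFFER_COMPLETE_OES)
    NSLog(@"Framebuffer incomplete: 0x%04X", status);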
// do your rendering to offscreen framebuffer
...
// restore original frame buffer object
glBindFramebufferOES(GL_FRAMEBUFFER_OES, SystemFBO);
// use ResultTexture as usual
glBindTexture(GL_TEXTURE_2D, ResultTexture);
Hope this helps...

OpenGL ES 2.0 FBO creation goes wrong with unknown error

I've been struggling with this for a while now, and this code crashes for reasons unknown to me. I'm creating an FBO, binding a texture, and then the very first glDrawArrays() crashes with an EXC_BAD_ACCESS in the iPhone Simulator.
Here's the code I use to create the FBO (and bind the texture, etc.):
glGenFramebuffers(1, &lastFrameBuffer);
glGenRenderbuffers(1, &lastFrameDepthBuffer);
glGenTextures(1, &lastFrameTexture);
glBindTexture(GL_TEXTURE1, lastFrameTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 768, 1029, 0, GL_RGBA, GL_UNSIGNED_SHORT_5_6_5, NULL);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
//Bind/alloc depthbuf
glBindRenderbuffer(GL_RENDERBUFFER, lastFrameDepthBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, 768, 1029);
glBindFramebuffer(GL_FRAMEBUFFER, lastFrameBuffer);
//binding the texture to the FBO :D
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, lastFrameTexture, 0);
// attach the renderbuffer to depth attachment point
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, lastFrameDepthBuffer);
[self checkFramebufferStatus];
As you can see this takes part in an object, checkFrameBufferStatus looks like this:
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
switch (status)
{
    case GL_FRAMEBUFFER_COMPLETE:
        JNLogString(@"Framebuffer complete.");
        return TRUE;
    case GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT:
        JNLogString(@"[ERROR] Framebuffer incomplete: Attachment is NOT complete.");
        return FALSE;
    case GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT:
        JNLogString(@"[ERROR] Framebuffer incomplete: No image is attached to FBO.");
        return FALSE;
    case GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS:
        JNLogString(@"[ERROR] Framebuffer incomplete: Attached images have different dimensions.");
        return FALSE;
    case GL_FRAMEBUFFER_UNSUPPORTED:
        JNLogString(@"[ERROR] Unsupported by FBO implementation.");
        return FALSE;
    default:
        JNLogString(@"[ERROR] Unknown error.");
        return FALSE;
}
JNLogString is just an NSLog, and in this case it gives me:
2010-04-03 02:46:54.854 Bubbleeh[6634:207] ES2Renderer.m:372 [ERROR] Unknown error.
When I call it right there.
So, it crashes, the diagnostic tells me there's an unknown error, and I'm kinda stuck. I basically copied the code from the OpenGL ES 2.0 Programming Guide...
What am I doing wrong?
glBindTexture(GL_TEXTURE1, lastFrameTexture);
That's not allowed. I was trying to bind the texture to texture unit one (GL_TEXTURE1), but that should be done with glActiveTexture(), not with glBindTexture(), which wants to know the type of the texture (GL_TEXTURE_2D, GL_TEXTURE_3D, etc.), not the texture unit. To place a texture in texture unit 1, I now have the following code, which I think is correct:
//Bind 2D Texture lastFrameTexture to texture unit 1
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, lastFrameTexture);
Try using glGetError() after each GL call, especially after the glTexImage2D.
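One way to do that without littering the code is a small macro (a sketch; CHECK_GL_ERROR is my own name):
// Log any pending GL error together with the call site.
#define CHECK_GL_ERROR() do { \
    GLenum err = glGetError(); \
    if (err != GL_NO_ERROR) \
        NSLog(@"glError 0x%04X at %s:%d", err, __FILE__, __LINE__); \
} while (0)

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 768, 1029, 0, GL_RGBA, GL_UNSIGNED_SHORT_5_6_5, NULL);
CHECK_GL_ERROR(); // in ES 2.0, format must match internalFormat, and
                  // GL_UNSIGNED_SHORT_5_6_5 is only valid with GL_RGB,
                  // so this call is a likely culprit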
If you use Xcode, add a breakpoint for OpenGL errors in the Breakpoint Navigator ([+] -> [Add OpenGL ES Error Breakpoint]). Then every OpenGL problem will be flagged on the exact line where it appears.

Strange colors when loading some textures with OpenGL on iPhone

I'm writing a 2D game for iPhone which uses textures as sprites. I'm getting colored noise around some of the images I render (but the noise never appears over the texture itself, only in the transparent portion around it). The problem does not happen with the rest of my textures. This is the code I use to load textures:
- (void)loadTexture:(NSString *)nombre {
    CGImageRef textureImage = [UIImage imageNamed:nombre].CGImage;
    if (textureImage == nil) {
        NSLog(@"Failed to load texture image");
        return;
    }
    textureWidth = NextPowerOfTwo(CGImageGetWidth(textureImage));
    textureHeight = NextPowerOfTwo(CGImageGetHeight(textureImage));
    imageSizeX = CGImageGetWidth(textureImage);
    imageSizeY = CGImageGetHeight(textureImage);
    GLubyte *textureData = (GLubyte *)malloc(textureWidth * textureHeight * 4);
    CGContextRef textureContext = CGBitmapContextCreate(textureData, textureWidth, textureHeight, 8, textureWidth * 4, CGImageGetColorSpace(textureImage), kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(textureContext, CGRectMake(0.0, 0.0, (float)textureWidth, (float)textureHeight), textureImage);
    CGContextRelease(textureContext);
    glGenTextures(1, &textures[0]);
    glBindTexture(GL_TEXTURE_2D, textures[0]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
    free(textureData);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_BLEND);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
Your memory is not zeroed out before you draw the image into it. The funny pixels you see are old data in the transparent regions of your image. Use calloc instead of malloc (calloc returns zeroed memory).
You can also use CGContextSetBlendMode(textureContext, kCGBlendModeCopy); before drawing the image.
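Concretely, against the loadTexture code above, either of these would do (sketches):
// Option 1: zeroed allocation, so transparent regions start out as (0,0,0,0)
GLubyte *textureData = (GLubyte *)calloc(textureWidth * textureHeight * 4, 1);

// Option 2: make Quartz overwrite the buffer instead of compositing onto it
CGContextSetBlendMode(textureContext, kCGBlendModeCopy);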
If you want to know the whole story:
This problem exists only for small images, because malloc has different code paths for small allocation sizes: it returns a small block from a pool it manages in user space. If the requested size is larger than a certain threshold (16K, I believe), malloc gets the memory from the kernel, and those new pages are of course zeroed out.
It took me a while to figure that out.

glError: 0x0501 when loading a large texture with OpenGL ES on the iPhone?

Here's the code I use to load a texture. image is a CGImageRef. After loading the image with this code, I eventually draw the image with glDrawArrays().
size_t imageW = CGImageGetWidth(image);
size_t imageH = CGImageGetHeight(image);
size_t picSize = pow2roundup((imageW > imageH) ? imageW : imageH);
GLubyte *textureData = (GLubyte *)malloc(picSize * picSize << 2);
CGContextRef imageContext = CGBitmapContextCreate(textureData, picSize, picSize, 8, picSize << 2, CGImageGetColorSpace(image), kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Big);
if (imageContext != NULL) {
    CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, (CGFloat)imageW, (CGFloat)imageH), image);
    glGenTextures(1, &textureId);
    glBindTexture(GL_TEXTURE_2D, textureId);
    // when texture area is small, bilinear filter the closest mipmap
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    // when texture area is large, bilinear filter the original
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // the texture wraps over at the edges (repeat)
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, picSize, picSize, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
    GLenum err = glGetError();
    if (err != GL_NO_ERROR)
        NSLog(@"Error uploading texture. glError: 0x%04X", err);
    CGContextRelease(imageContext);
}
free(textureData);
This seems to work fine when the image is 320x480, but it fails when the image is larger (for example, picSize = 2048).
Here's what I get in the Debugger Console:
Error uploading texture. glError: 0x0501
What's the meaning of this error? What's the best workaround?
Aren't you simply hitting the maximum texture size limit? The error code is GL_INVALID_VALUE; see the glTexImage2D docs:
GL_INVALID_VALUE is generated if width or height is less than 0 or greater than 2 + GL_MAX_TEXTURE_SIZE, or if either cannot be represented as 2^k + 2(border) for some integer value of k.
The iPhone (up through the 3G) does not support textures larger than 1024×1024 pixels. The workaround is to split the image into several textures.
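You can query the actual limit at runtime instead of hard-coding it (a sketch):
GLint maxTextureSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
// 1024 on the original iPhone and 3G; 2048 on the 3GS and later.
NSLog(@"Max texture size: %d", maxTextureSize);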
Maybe you are simply running out of memory - did you verify the value returned from malloc() is not NULL?
(this could explain getting 0x0501, which is GL_INVALID_VALUE).
I also experienced the same kind of issue with GLKBaseEffect; it seemed to be a memory issue (as Hexagon proposed). To fix this, I had to add manual texture release calls:
GLuint name = self.texture.name;
glDeleteTextures(1, &name);
See this thread for more: Release textures (GLKTextureInfo objects) allocated by GLKTextureLoader.