Texture mapping in GLKit works in the simulator but not on the device - iPhone

I used the code in this link to map textures onto human faces. The code uses GLKit to render the images. It works fine in the simulator, but the same code does not work when I run it on a device. Below are screenshots of where it works (simulator) and where it doesn't (my iPad).
Code I used to load the texture:
- (void)loadTexture:(UIImage *)textureImage
{
    glGenTextures(1, &_texture);
    glBindTexture(GL_TEXTURE_2D, _texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    CGImageRef cgImage = textureImage.CGImage;
    float imageWidth = CGImageGetWidth(cgImage);
    float imageHeight = CGImageGetHeight(cgImage);
    CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imageWidth, imageHeight, 0, GL_RGBA,
                 GL_UNSIGNED_BYTE, CFDataGetBytePtr(data));
}
Screenshot from the simulator:
The same code on the device gives the following output:
Is there something I'm missing?

Use Apple's built-in one-line method to load the texture -- don't write your own (broken) implementation:
https://developer.apple.com/library/mac/#documentation/GLkit/Reference/GLKTextureLoader_ClassRef/Reference/Reference.html#//apple_ref/occ/clm/GLKTextureLoader/textureWithContentsOfFile:options:error:
Or, if you must do it yourself, you have two suspicious parts:
1. Get rid of this line:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
2. This line uses an out-of-scope variable, which isn't great:
glGenTextures(1, &_texture);
If for some reason you cannot use Apple's code (e.g. you want to load raw data from in-memory image), here's a copy/paste of working code:
NSData* imageData = ... // fill this yourself
CGSize imageSize = ...  // fill this yourself
GLuint integerForOpenGLToFill;
glGenTextures(1, &integerForOpenGLToFill);
glBindTexture(GL_TEXTURE_2D, integerForOpenGLToFill);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
unsigned char* pixelData = (unsigned char*) malloc( [imageData length] * sizeof(unsigned char) );
[imageData getBytes:pixelData length:[imageData length]];
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imageSize.width, imageSize.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData);
free(pixelData); // the GL driver keeps its own copy after glTexImage2D

I finally caved in and switched to GLKTextureLoader. As Adam mentions, it's pretty sturdy.
Here's an implementation which might work for you:
- (void)loadTexture:(NSString *)fileName
{
    NSDictionary* options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES],
                             GLKTextureLoaderOriginBottomLeft,
                             nil];
    NSError* error;
    NSString* path = [[NSBundle mainBundle] pathForResource:fileName ofType:nil];
    GLKTextureInfo* texture = [GLKTextureLoader textureWithContentsOfFile:path options:options error:&error];
    if (texture == nil)
    {
        NSLog(@"Error loading file: %@", [error localizedDescription]);
    }
    glBindTexture(GL_TEXTURE_2D, texture.name);
}
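One thing to watch: the synchronous GLKTextureLoader class methods need an OpenGL ES context set as current on the calling thread before you call them. A rough call site (the file name is just a placeholder):

[EAGLContext setCurrentContext:self.context]; // the context that will use the texture
[self loadTexture:@"face_texture.png"];       // hypothetical texture in the app bundle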
I'll change it for the mtl2opengl project on GitHub soon...

Looking at the screenshots ... I wonder if the problem is that you're going from iPhone to iPad?
i.e. retina to non-retina?
Looks to me like your image loading might be ignoring the Retina scale, so that the texture map is "2x too big" on the iPad.
Using Apple's method (my other answer) should fix that automatically, but if not: you could try making two copies of the image, one at the current size (named "image@2x.png") and one at half size (resized in Photoshop / Preview.app / etc.) (named "image.png").
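If you stick with manual loading, one way to check for a scale mismatch (a sketch, not from the original post; the file name is hypothetical) is to compare the UIImage point size with the CGImage pixel size:

UIImage *textureImage = [UIImage imageNamed:@"face.png"];   // hypothetical file
CGFloat scale = textureImage.scale;                         // 2.0 for an @2x (Retina) image
CGFloat pointWidth = textureImage.size.width;               // in points
size_t pixelWidth = CGImageGetWidth(textureImage.CGImage);  // in pixels
// pixelWidth == pointWidth * scale; mixing points and pixels is a common
// cause of texture maps coming out "2x too big" on one class of device.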


iOS CVImageBuffer distorted from AVCaptureSessionDataOutput with AVCaptureSessionPresetPhoto

At a high level, I created an app that lets a user point his or her iPhone camera around and see video frames that have been processed with visual effects. Additionally, the user can tap a button to take a freeze-frame of the current preview as a high-resolution photo that is saved in their iPhone library.
To do this, the app follows this procedure:
1) Create an AVCaptureSession
captureSession = [[AVCaptureSession alloc] init];
[captureSession setSessionPreset:AVCaptureSessionPreset640x480];
2) Hook up an AVCaptureDeviceInput using the back-facing camera.
videoInput = [[[AVCaptureDeviceInput alloc] initWithDevice:backFacingCamera error:&error] autorelease];
[captureSession addInput:videoInput];
3) Hook up an AVCaptureStillImageOutput to the session to be able to capture still frames at Photo resolution.
stillOutput = [[AVCaptureStillImageOutput alloc] init];
[stillOutput setOutputSettings:[NSDictionary
dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[captureSession addOutput:stillOutput];
4) Hook up an AVCaptureVideoDataOutput to the session to be able to capture individual video frames (CVImageBuffers) at a lower resolution
videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[captureSession addOutput:videoOutput];
5) As video frames are captured, the delegate's method is called with each new frame as a CVImageBuffer:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    [self.delegate processNewCameraFrame:pixelBuffer];
}
6) Then the delegate processes/draws them:
- (void)processNewCameraFrame:(CVImageBufferRef)cameraFrame {
    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
    int bufferWidth = CVPixelBufferGetWidth(cameraFrame);
    glClear(GL_COLOR_BUFFER_BIT);
    glGenTextures(1, &videoFrameTexture_);
    glBindTexture(GL_TEXTURE_2D, videoFrameTexture_);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));
    glBindBuffer(GL_ARRAY_BUFFER, [self vertexBuffer]);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, [self indexBuffer]);
    glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    [[self context] presentRenderbuffer:GL_RENDERBUFFER];
    glDeleteTextures(1, &videoFrameTexture_);
    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}
This all works and leads to the correct results. I can see a video preview of 640x480 processed through OpenGL. It looks like this:
However, if I capture a still image from this session, its resolution will also be 640x480. I want it to be high resolution, so in step one I change the preset line to:
[captureSession setSessionPreset:AVCaptureSessionPresetPhoto];
This correctly captures still images at the highest resolution for the iPhone 4 (2592x1936).
However, the video preview (as received by the delegate in steps 5 and 6) now looks like this:
I've confirmed that every other preset (High, Medium, Low, 640x480, and 1280x720) previews as intended. However, the Photo preset seems to send buffer data in a different format.
I've also confirmed that the data being sent to the buffer at the Photo preset is actually valid image data by taking the buffer and creating a UIImage out of it instead of sending it to openGL:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(cameraFrame), bufferWidth, bufferHeight, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *anImage = [UIImage imageWithCGImage:cgImage];
This shows an undistorted video frame.
I've done a bunch of searching and can't seem to fix it. My hunch is that it's a data format issue. That is, I believe that the buffer is being set correctly, but with a format that this line doesn't understand:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));
My hunch was that changing the external format from GL_BGRA to something else would help, but it doesn't... and through various means it looks like the buffer is actually in GL_BGRA.
Does anyone know what's going on here? Or do you have any tips on how I might go about debugging why this is happening? (What's super weird is that this happens on an iPhone 4 but not on an iPhone 3GS ... both running iOS 4.3.)
This was a doozy.
As Lio Ben-Kereth pointed out, the padding is 48 bytes, as you can see in the debugger:
(gdb) po pixelBuffer
<CVPixelBuffer 0x2934d0 width=852 height=640 bytesPerRow=3456 pixelFormat=BGRA
# => 3456 - 852 * 4 = 48
OpenGL can compensate for this, but OpenGL ES cannot (more info here openGL SubTexturing)
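(For comparison only, since it isn't available in OpenGL ES 1.1/2.0: desktop OpenGL, and later OpenGL ES 3.0, can skip the padding with the unpack row length, roughly:)

glPixelStorei(GL_UNPACK_ROW_LENGTH, (GLint)(bytesPerRow / 4)); // row length in pixels, not bytes
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, frameWidth, frameHeight, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(pixelBuffer));
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);                        // reset to the default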
So here is how I'm doing it in OpenGL ES:
(CVImageBufferRef)pixelBuffer // pixelBuffer containing the raw image data is passed in
/* ... */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture_);
int frameWidth = CVPixelBufferGetWidth(pixelBuffer);
int frameHeight = CVPixelBufferGetHeight(pixelBuffer);
size_t bytesPerRow, extraBytes;
bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
extraBytes = bytesPerRow - frameWidth*4;
GLubyte *pixelBufferAddr = CVPixelBufferGetBaseAddress(pixelBuffer);
if ( [[captureSession sessionPreset] isEqualToString:@"AVCaptureSessionPresetPhoto"] )
{
    glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, frameWidth, frameHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL );
    for( int h = 0; h < frameHeight; h++ )
    {
        GLubyte *row = pixelBufferAddr + h * (frameWidth * 4 + extraBytes);
        glTexSubImage2D( GL_TEXTURE_2D, 0, 0, h, frameWidth, 1, GL_BGRA, GL_UNSIGNED_BYTE, row );
    }
}
else
{
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, frameWidth, frameHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixelBufferAddr);
}
Before, I was using AVCaptureSessionPresetMedium and getting 30fps. In AVCaptureSessionPresetPhoto I'm getting 16fps on an iPhone 4. The looping for the sub-texture does not seem to affect the frame rate.
I'm using an iPhone 4 on iOS 5.
Just draw like this.
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
int frameHeight = CVPixelBufferGetHeight(pixelBuffer);
GLubyte *pixelBufferAddr = CVPixelBufferGetBaseAddress(pixelBuffer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)bytesPerRow / 4, (GLsizei)frameHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixelBufferAddr);
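One caveat with this shortcut (my note, not part of the original answer): the padding bytes become extra columns on the right of the texture, so the horizontal texture coordinate of the image's right edge has to be scaled down, roughly:

size_t frameWidth = CVPixelBufferGetWidth(pixelBuffer);
GLfloat maxS = (GLfloat)frameWidth / (GLfloat)(bytesPerRow / 4); // use maxS instead of 1.0 for the right edge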
Good point Mats.
But as a matter of fact the padding is larger, it's:
bytesPerRow = 4 * bufferWidth + 48;
It works great on the iPhone 4 back camera, and solves the issue sotangochips reported.
AVCaptureSessionPresetPhoto is the preset for capturing a photo at the highest quality. When we use AVCaptureStillImageOutput with the Photo preset, the frame captured from the video stream always has exactly the resolution of the iPad or iPhone screen. I had the same problem with the iPad Pro 12.9 inch, which has a 2732 x 2048 resolution: the frame I captured from the video stream was 2732 x 2048, but it was always distorted and shifted. I tried the solutions mentioned above, but they did not solve my problem. Finally, I realised that the frame width should always be divisible by 8, which 2732 is not (2732 / 8 = 341.5). So I take the width modulo 8, and if it is not zero I pad the width up to the next multiple of 8: here 2732 % 8 = 4, so I pad up to 2736. I then use that width in CVPixelBufferCreate when initialising my pixel buffer (CVPixelBufferRef).
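A sketch of that rounding, generalised so it works for any width (the alignment of 8 comes from the observation above, not from Apple documentation; frameWidth and frameHeight are placeholders):

size_t alignment = 8;
size_t paddedWidth = (frameWidth + alignment - 1) / alignment * alignment; // 2732 -> 2736

CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault,
                    paddedWidth,
                    frameHeight,
                    kCVPixelFormatType_32BGRA,
                    NULL,              // pixel buffer attributes
                    &pixelBuffer);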
Dex, thanks for the excellent answer. To make your code more generic, I would replace:
if ( [[captureSession sessionPreset] isEqualToString:@"AVCaptureSessionPresetPhoto"] )
with
if ( extraBytes > 0 )
I think I found your answer, and I'm sorry, because it's not good news.
You can check this link: http://developer.apple.com/library/mac/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/04_MediaCapture.html
Configuring a Session
Symbol: AVCaptureSessionPresetPhoto
Resolution: Photo.
Comments: Full photo resolution. This is not supported for video output.
The image buffer you get seems to contain some padding at the end. E.g.
bytesPerRow = 4 * bufferWidth + 12;
This is often done so each pixel row starts at a 16 byte offset.
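Rather than hard-coding the padding, it can be read off the buffer itself (a small sketch using the variable names from the question):

size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
size_t bufferWidth = CVPixelBufferGetWidth(cameraFrame);
size_t paddingBytes = bytesPerRow - bufferWidth * 4; // per-row padding for 32-bit BGRA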
Use this size everywhere in your code:
int width_16 = (int)yourImage.size.width - (int)yourImage.size.width%16;
int height_ = (int)(yourImage.size.height/yourImage.size.width * width_16) ;
CGSize video_size_ = CGSizeMake(width_16, height_);

How to unload iPhone images to free up memory

I have an iPhone game that uses images to display things such as buttons on screen to control the game. The images aren't huge, but I want to load them "lazily" and unload them from memory when they are not being used.
I have a function that loads images as follows:
void loadImageRef(NSString *name, GLuint location, CGImageRef *textureImage)
{
    *textureImage = [UIImage imageNamed:name].CGImage;
    if (*textureImage == nil) {
        NSLog(@"Failed to load texture image");
        return;
    }
    NSInteger texWidth = CGImageGetWidth(*textureImage);
    NSInteger texHeight = CGImageGetHeight(*textureImage);
    GLubyte *textureData = (GLubyte *)malloc(texWidth * texHeight * 4);
    CGContextRef textureContext = CGBitmapContextCreate(textureData,
                                                        texWidth, texHeight,
                                                        8, texWidth * 4,
                                                        CGImageGetColorSpace(*textureImage),
                                                        kCGImageAlphaPremultipliedLast);
    CGContextTranslateCTM(textureContext, 0, texHeight);
    CGContextScaleCTM(textureContext, 1.0, -1.0);
    CGContextDrawImage(textureContext, CGRectMake(0.0, 0.0, (float)texWidth, (float)texHeight), *textureImage);
    CGContextRelease(textureContext);
    glBindTexture(GL_TEXTURE_2D, location);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
    free(textureData);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
The reason I pass the CGImageRef to the function is so that I can free it later, but I don't know if this is the correct thing to do.
I call the function as follows (for example):
loadImageRef(#"buttonConfig.png", textures[IMG_CONFIG], &imagesRef[IMG_CONFIG]);
And when I'm finished with it I unload it as follows:
unloadImage(&imagesRef[IMG_CONFIG]);
which calls the function:
void unloadImage(CGImageRef *image)
{
    CGImageRelease(*image);
}
But when I run the app, I get an EXC_BAD_ACCESS error, either when unloading or when loading for the second time (i.e. when the image is needed again). I don't see anything else that helps determine the real cause of the problem.
Perhaps I am attacking this issue from the wrong angle. What is the usual method of lazy loading of images, and what is the usual way to unload the image to free up the memory?
Many thanks in advance,
Sam
You should avoid using the [UIImage imageNamed:name] method at all costs if you want to ensure things don't stay cached. imageNamed caches all images it pulls through. You want to use
UIImage *image = [[UIImage alloc] initWithContentsOfFile:[[NSBundle mainBundle] pathForResource:fileNameWithoutExtension ofType:fileNameExtension]];
With that, you can simply release the object when you are done with it and free the memory.
Apple has improved the caching in [UIImage imageNamed:], but you still can't release those cached images on demand.
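A rough sketch of the load/unload cycle with that approach (manual reference counting, matching the rest of the question's code; the file name comes from the question):

// Load the image without going through the imageNamed: cache.
NSString *path = [[NSBundle mainBundle] pathForResource:@"buttonConfig" ofType:@"png"];
UIImage *image = [[UIImage alloc] initWithContentsOfFile:path];

// ... build the OpenGL texture from image.CGImage as before ...

// Once glTexImage2D has copied the pixels, the UIImage can go away;
// the GL texture owns its own copy of the data.
[image release];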

Transparency/Blending issues with OpenGL ES/iPhone

I have a simple 16x16 particle that goes from opaque to transparent. Unfortunately it appears different in my iPhone port, and I can't see where the differences in the code are. Most of the code is essentially the same.
I've uploaded an image here to show the problem.
The particle on the left is the incorrectly rendered iPhone version, and the one on the right is how it appears on Mac and Windows. It's just a simple RGBA .png file.
I've tried numerous blend functions and glTexEnv settings, but I can't seem to make them match.
Just to be thorough, my texture loading code on the iPhone looks like the following:
GLuint TextureLoader::LoadTexture(const char *path)
{
    NSString *macPath = [NSString stringWithCString:path length:strlen(path)];
    GLuint texture = 0;
    CGImageRef textureImage = [UIImage imageNamed:macPath].CGImage;
    if (textureImage == nil)
    {
        NSLog(@"Failed to load texture image");
        return 0;
    }
    NSInteger texWidth = CGImageGetWidth(textureImage);
    NSInteger texHeight = CGImageGetHeight(textureImage);
    GLubyte *textureData = new GLubyte[texWidth * texHeight * 4];
    memset(textureData, 0, texWidth * texHeight * 4);
    CGContextRef textureContext = CGBitmapContextCreate(textureData, texWidth, texHeight, 8, texWidth * 4, CGImageGetColorSpace(textureImage), kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(textureContext, CGRectMake(0.0, 0.0, (float)texWidth, (float)texHeight), textureImage);
    CGContextRelease(textureContext);
    // Make a texture ID, bind it, create it
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
    delete[] textureData;
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return texture;
}
The blend function I use is glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
I'll try any ideas people throw at me, because this has been a bit of a mystery to me.
Cheers.
This looks like the standard "textures are converted to premultiplied alpha" problem.
you can use
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
or you can write custom loading code to avoid the premultiplication.
Call me naive, but seeing that premultiplying an image gives (a*r, a*g, a*b, a), I figured I'd just divide the rgb values by a.
Of course, as soon as the alpha value is larger than the r, g, b components, the particle texture became black. Oh well. Unless I can find a different image loader from the one above, I'll just make all the rgb components 0xff (white). This is a good temporary solution for me because I either need a white particle or I colourise it in the application. Later on I might just make raw rgba files and read them in, because this is mainly for very small 16x16 and smaller particle textures.
I can't use premultiplied textures for the particle system because overlapping multiple particle textures saturates the colours way too much.
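For what it's worth, reading such a raw RGBA dump back in is only a few lines (a sketch; the file name is a placeholder, and the file is assumed to contain tightly packed, non-premultiplied 16x16 RGBA bytes with no header):

NSString *path = [[NSBundle mainBundle] pathForResource:@"particle" ofType:@"rgba"];
NSData *rawData = [NSData dataWithContentsOfFile:path];
const GLsizei texWidth = 16, texHeight = 16;   // known in advance for a raw file

GLuint texture = 0;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, [rawData bytes]);
// Nothing premultiplied the data, so the usual blend function applies:
// glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);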

Strange colors when loading some textures with openGL for iphone

I'm writing a 2D game for iPhone which uses textures as sprites. I'm getting colored noise around some of the images I render (but the noise never appears over the texture itself, only in the transparent area around it). The problem does not happen with the rest of my textures. This is the code I use to load textures:
- (void)loadTexture:(NSString*)nombre {
    CGImageRef textureImage = [UIImage imageNamed:nombre].CGImage;
    if (textureImage == nil) {
        NSLog(@"Failed to load texture image");
        return;
    }
    textureWidth = NextPowerOfTwo(CGImageGetWidth(textureImage));
    textureHeight = NextPowerOfTwo(CGImageGetHeight(textureImage));
    imageSizeX = CGImageGetWidth(textureImage);
    imageSizeY = CGImageGetHeight(textureImage);
    GLubyte *textureData = (GLubyte *)malloc(textureWidth * textureHeight * 4);
    CGContextRef textureContext = CGBitmapContextCreate(textureData, textureWidth, textureHeight, 8, textureWidth * 4, CGImageGetColorSpace(textureImage), kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(textureContext, CGRectMake(0.0, 0.0, (float)textureWidth, (float)textureHeight), textureImage);
    CGContextRelease(textureContext);
    glGenTextures(1, &textures[0]);
    glBindTexture(GL_TEXTURE_2D, textures[0]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
    free(textureData);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_BLEND);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
Your memory is not zeroed out before you draw the image into it. The funny pixels you see are old data in the transparent regions of your image. Use calloc instead of malloc (calloc returns zeroed memory).
You can also use CGContextSetBlendMode(textureContext, kCGBlendModeCopy); before drawing the image.
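Applied to the loading code in the question, that looks roughly like this (a sketch showing both suggestions; either one alone is enough, and the variable names are the ones from the question):

GLubyte *textureData = (GLubyte *)calloc(textureWidth * textureHeight * 4, 1); // zeroed, unlike malloc
CGContextRef textureContext = CGBitmapContextCreate(textureData, textureWidth, textureHeight, 8, textureWidth * 4, CGImageGetColorSpace(textureImage), kCGImageAlphaPremultipliedLast);
CGContextSetBlendMode(textureContext, kCGBlendModeCopy); // copy instead of blending onto whatever is in the buffer
CGContextDrawImage(textureContext, CGRectMake(0.0, 0.0, (float)textureWidth, (float)textureHeight), textureImage);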
If you want to know the whole story:
This problem exists only for small images due to the fact that malloc has different code paths for small allocation sizes. It returns a small block from a pool it manages in user space. If the requested size is larger than a certain threshold (16K, I believe), malloc gets the memory from the kernel. The new pages are of course zeroed out.
It took me a while to figure that out.

glError: 0x0501 when loading a large texture with OpenGL ES on the iPhone?

Here's the code I use to load a texture. image is a CGImageRef. After loading the image with this code, I eventually draw the image with glDrawArrays().
size_t imageW = CGImageGetWidth(image);
size_t imageH = CGImageGetHeight(image);
size_t picSize = pow2roundup((imageW > imageH) ? imageW : imageH);
GLubyte *textureData = (GLubyte *) malloc(picSize * picSize << 2);
CGContextRef imageContext = CGBitmapContextCreate( textureData, picSize, picSize, 8, picSize << 2, CGImageGetColorSpace(image), kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Big );
if (imageContext != NULL) {
    CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, (CGFloat)imageW, (CGFloat)imageH), image);
    glGenTextures(1, &textureId);
    glBindTexture(GL_TEXTURE_2D, textureId);
    // when texture area is small, bilinear filter the closest mipmap
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    // when texture area is large, bilinear filter the original
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // the texture wraps over at the edges (repeat)
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, picSize, picSize, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
    GLenum err = glGetError();
    if (err != GL_NO_ERROR)
        NSLog(@"Error uploading texture. glError: 0x%04X", err);
    CGContextRelease(imageContext);
}
free(textureData);
This seems to work fine when the image is 320x480, but it fails when the image is larger (for example, picSize = 2048).
Here's what I get in the Debugger Console:
Error uploading texture. glError: 0x0501
What's the meaning of this error? What's the best workaround?
Aren’t you simply hitting the maximum texture size limit? The error code is GL_INVALID_VALUE, see the glTexImage2D docs:
GL_INVALID_VALUE is generated if width or height is less than 0 or greater than 2 + GL_MAX_TEXTURE_SIZE, or if either cannot be represented as 2^k + 2(border) for some integer value of k.
iPhone does not support textures larger than 1024 pixels. The workaround is to split the image into several textures.
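Rather than hard-coding 1024, you can query the limit of the device you are running on (a small sketch; picSize is the variable from the question, and newer devices report larger values):

GLint maxTextureSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
if (picSize > (size_t)maxTextureSize) {
    NSLog(@"%zu px exceeds GL_MAX_TEXTURE_SIZE (%d): split or downscale the image.",
          picSize, maxTextureSize);
}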
Maybe you are simply running out of memory - did you verify the value returned from malloc() is not NULL?
(this could explain getting 0x0501, which is GL_INVALID_VALUE).
I also experienced the same kind of issue with GLKBaseEffect; it seemed to be a memory issue (as Hexagon proposed). To fix this, I had to add manual texture release calls:
GLuint name = self.texture.name;
glDeleteTextures(1, &name);
See this thread for more: Release textures (GLKTextureInfo objects) allocated by GLKTextureLoader.