iOS CVImageBuffer distorted from AVCaptureVideoDataOutput with AVCaptureSessionPresetPhoto - iPhone

At a high level, I created an app that lets a user point his or her iPhone camera around and see video frames that have been processed with visual effects. Additionally, the user can tap a button to take a freeze-frame of the current preview as a high-resolution photo that is saved in their iPhone library.
To do this, the app follows this procedure:
1) Create an AVCaptureSession
captureSession = [[AVCaptureSession alloc] init];
[captureSession setSessionPreset:AVCaptureSessionPreset640x480];
2) Hook up an AVCaptureDeviceInput using the back-facing camera.
videoInput = [[[AVCaptureDeviceInput alloc] initWithDevice:backFacingCamera error:&error] autorelease];
[captureSession addInput:videoInput];
3) Hook up an AVCaptureStillImageOutput to the session to be able to capture still frames at Photo resolution.
stillOutput = [[AVCaptureStillImageOutput alloc] init];
[stillOutput setOutputSettings:[NSDictionary
dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[captureSession addOutput:stillOutput];
4) Hook up an AVCaptureVideoDataOutput to the session to be able to capture individual video frames (CVImageBuffers) at a lower resolution
videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[captureSession addOutput:videoOutput];
5) As video frames are captured, the delegate's method is called with each new frame as a CVImageBuffer:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
[self.delegate processNewCameraFrame:pixelBuffer];
}
6) Then the delegate processes/draws them:
- (void)processNewCameraFrame:(CVImageBufferRef)cameraFrame {
CVPixelBufferLockBaseAddress(cameraFrame, 0);
int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
int bufferWidth = CVPixelBufferGetWidth(cameraFrame);
glClear(GL_COLOR_BUFFER_BIT);
glGenTextures(1, &videoFrameTexture_);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture_);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));
glBindBuffer(GL_ARRAY_BUFFER, [self vertexBuffer]);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, [self indexBuffer]);
glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
[[self context] presentRenderbuffer:GL_RENDERBUFFER];
glDeleteTextures(1, &videoFrameTexture_);
CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}
This all works and leads to the correct results. I can see a video preview of 640x480 processed through OpenGL. It looks like this:
However, if I capture a still image from this session, its resolution will also be 640x480. I want it to be high resolution, so in step one I change the preset line to:
[captureSession setSessionPreset:AVCaptureSessionPresetPhoto];
This correctly captures still images at the highest resolution for the iPhone4 (2592x1936).
However, the video preview (as received by the delegate in steps 5 and 6) now looks like this:
I've confirmed that every other preset (High, Medium, Low, 640x480, and 1280x720) previews as intended. However, the Photo preset seems to send buffer data in a different format.
I've also confirmed that the data being sent to the buffer at the Photo preset is actually valid image data by taking the buffer and creating a UIImage out of it instead of sending it to openGL:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(cameraFrame), bufferWidth, bufferHeight, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *anImage = [UIImage imageWithCGImage:cgImage];
This shows an undistorted video frame.
I've done a bunch of searching and can't seem to fix it. My hunch is that it's a data format issue. That is, I believe that the buffer is being set correctly, but with a format that this line doesn't understand:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));
My hunch was that changing the external format from GL_BGRA to something else would help, but it doesn't... and through various means it looks like the buffer is actually in GL_BGRA.
Does anyone know what's going on here? Or do you have any tips on how I might go about debugging why this is happening? (What's super weird is that this happens on an iPhone 4 but not on an iPhone 3GS, both running iOS 4.3.)

This was a doozy.
As Lio Ben-Kereth pointed out, the padding is 48 as you can see from the debugger
(gdb) po pixelBuffer
<CVPixelBuffer 0x2934d0 width=852 height=640 bytesPerRow=3456 pixelFormat=BGRA
# => 3456 - 852 * 4 = 48
Desktop OpenGL can compensate for this, but OpenGL ES cannot (more info here: openGL SubTexturing).
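For reference, on desktop OpenGL that compensation is a single pixel-store setting, roughly as sketched below; GL_UNPACK_ROW_LENGTH is not available in OpenGL ES 1.1/2.0, which is why the per-row upload further down is needed.
glPixelStorei(GL_UNPACK_ROW_LENGTH, (GLint)(bytesPerRow / 4)); // row stride in pixels, e.g. 3456 / 4 = 864
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, frameWidth, frameHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(pixelBuffer));
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0); // restore the default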
So here is how I'm doing it in OpenGL ES:
(CVImageBufferRef)pixelBuffer // pixelBuffer containing the raw image data is passed in
/* ... */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture_);
int frameWidth = CVPixelBufferGetWidth(pixelBuffer);
int frameHeight = CVPixelBufferGetHeight(pixelBuffer);
size_t bytesPerRow, extraBytes;
bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
extraBytes = bytesPerRow - frameWidth*4;
GLubyte *pixelBufferAddr = CVPixelBufferGetBaseAddress(pixelBuffer);
if ( [[captureSession sessionPreset] isEqualToString:@"AVCaptureSessionPresetPhoto"] )
{
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, frameWidth, frameHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL );
for( int h = 0; h < frameHeight; h++ )
{
GLubyte *row = pixelBufferAddr + h * (frameWidth * 4 + extraBytes);
glTexSubImage2D( GL_TEXTURE_2D, 0, 0, h, frameWidth, 1, GL_BGRA, GL_UNSIGNED_BYTE, row );
}
}
else
{
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, frameWidth, frameHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixelBufferAddr);
}
Before, I was using AVCaptureSessionPresetMedium and getting 30fps. In AVCaptureSessionPresetPhoto I'm getting 16fps on an iPhone 4. The looping for the sub-texture does not seem to affect the frame rate.
I'm using an iPhone 4 on iOS 5.
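An equivalent approach is to repack each padded row into a contiguous scratch buffer and upload it with a single call; a rough sketch reusing frameWidth, frameHeight, bytesPerRow, and pixelBufferAddr from above (whether this beats the per-row glTexSubImage2D loop likely depends on the device):
static GLubyte *packedBuffer = NULL;
if (packedBuffer == NULL)
packedBuffer = (GLubyte *)malloc(frameWidth * frameHeight * 4);
for (int h = 0; h < frameHeight; h++)
{
// copy only the frameWidth * 4 payload bytes of each row, skipping the per-row padding
memcpy(packedBuffer + h * frameWidth * 4, pixelBufferAddr + h * bytesPerRow, frameWidth * 4);
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, frameWidth, frameHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, packedBuffer);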

Just draw like this.
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
int frameHeight = CVPixelBufferGetHeight(pixelBuffer);
GLubyte *pixelBufferAddr = CVPixelBufferGetBaseAddress(pixelBuffer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)bytesPerRow / 4, (GLsizei)frameHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixelBufferAddr);
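Note that this uploads the padding columns at the right edge of each row as well, so the last few texels of every row contain garbage. One way to hide them, assuming your quad's S texture coordinates normally run from 0 to 1, is to scale the right-hand S coordinate (a small sketch):
int frameWidth = (int)CVPixelBufferGetWidth(pixelBuffer);
GLfloat maxS = (GLfloat)frameWidth / (GLfloat)(bytesPerRow / 4); // e.g. 852 / 864 when bytesPerRow is 3456
// use maxS instead of 1.0f for the right-hand texture coordinates of the quad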

Good point, Mats. But as a matter of fact the padding is larger; it's:
bytesPerRow = 4 * bufferWidth + 48;
This works great on the iPhone 4 back camera, and solves the issue sotangochips reported.

AVCaptureSessionPresetPhoto is the setting for capturing a photo with the highest quality. When we use AVCaptureStillImageOutput with the Photo preset, the frame captured from the video stream always has exactly the resolution of the iPad or iPhone screen. I had the same problem with the 12.9-inch iPad Pro, which has a 2732 x 2048 resolution: the frame I captured from the video stream was 2732 x 2048, but it was always distorted and shifted. I tried the solutions mentioned above, but they did not solve my problem. Finally, I realised that the width of the frame should always be divisible by 8, which 2732 is not (2732 / 8 = 341.5). So I calculate the width modulo 8; if the remainder is not zero, I add it to the width. In this case 2732 % 8 = 4, so I get 2732 + 4 = 2736, and I set this frame width in CVPixelBufferCreate when initialising my pixel buffer (CVPixelBufferRef).
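In code, rounding the width up before creating the buffer might look like this (a sketch based on the observation above; the divisible-by-8 requirement is what I saw on the iPad Pro and may differ on other devices):
size_t width = 2732; // native width reported for the 12.9-inch iPad Pro
size_t paddedWidth = (width % 8 == 0) ? width : width + (8 - width % 8); // 2732 -> 2736
CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, paddedWidth, 2048, kCVPixelFormatType_32BGRA, NULL, &pixelBuffer);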

Dex, thanks for the excellent answer. To make your code more generic, I would replace:
if ( [[captureSession sessionPreset] isEqualToString:@"AVCaptureSessionPresetPhoto"] )
with
if ( extraBytes > 0 )

I think I found your answer, and I'm sorry, because it's not good news.
You can check this link: http://developer.apple.com/library/mac/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/04_MediaCapture.html
Configuring a Session
Symbol: AVCaptureSessionPresetPhoto
Resolution: Photo.
Comments: Full photo resolution. This is not supported for video output.

The image buffer you get seems to contain some padding at the end of each row, e.g.:
bytesPerRow = 4 * bufferWidth + 12;
This is often done so that each pixel row starts at a 16-byte boundary.
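A quick way to check this on a given device is to compare the buffer's stride with the packed row size (a small sketch; the 48-byte figure comes from the iPhone 4 example above and may differ elsewhere):
size_t width = CVPixelBufferGetWidth(cameraFrame);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
NSLog(@"padding per row = %zu bytes", bytesPerRow - width * 4); // 0 when packed, 48 on the iPhone 4 Photo preset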

Use this size everywhere in your code:
int width_16 = (int)yourImage.size.width - (int)yourImage.size.width%16;
int height_ = (int)(yourImage.size.height/yourImage.size.width * width_16) ;
CGSize video_size_ = CGSizeMake(width_16, height_);

Related

Displaying a screen shot generated UIImage is not displaying in UIImageView (for device only)

I am trying to save an OpenGL buffer (what's currently displayed in the view) to the device's photo library. The code snippet below works fine on the simulator, but it crashes on the actual device. I believe there could be a problem with the way I'm creating the UIImage captured from the screen.
This operation is initiated via an IBAction event handler method.
The function I use to save the image is UIImageWriteToSavedPhotosAlbum (I recently changed this to ALAssetsLibrary's writeImageToSavedPhotosAlbum).
I have ensured that my app is authorized to access the Photos library.
I also made sure that my CGImageRef is globally defined (defined at the top of the file) and my UIImage is a (nonatomic, retain) property.
Can somebody help me fix this issue? I'd like to have a valid UIImage reference that was generated from the glReadPixels data.
Below is the relevant code snippet (call to save to photo library):
-(void)TakeImageBufferSnapshot:(CGSize)dimensions
{
NSLog(#"TakeSnapShot 1 : (%f, %f)", dimensions.width, dimensions.height);
NSInteger size = dimensions.width * dimensions.height * 4;
GLubyte *buffer = (GLubyte *) malloc(size);
glReadPixels(0, 0, dimensions.width, dimensions.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
GLubyte *buffer2 = (GLubyte *) malloc(size);
int height = (int)dimensions.height - 1;
int width = (int)dimensions.width;
for(int y = 0; y < dimensions.height; y++)
{
for(int x = 0; x < dimensions.width * 4; x++)
{
buffer2[(height - 1 - y) * width * 4 + x] = buffer[y * 4 * width + x];
}
}
NSLog(#"TakeSnapShot 2");
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, size, NULL);
if (buffer) free(buffer);
if (buffer2) free(buffer2);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * self.view.bounds.size.width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
NSLog(#"TakeSnapShot 3");
// make the cgimage
g_savePhotoImageRef = CGImageCreate(dimensions.width, dimensions.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
NSLog(#"TakeSnapShot 4");
// then make the uiimage from that
self.savePhotoImage = [UIImage imageWithCGImage:g_savePhotoImageRef];
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);
}
-(void)SaveToPhotoAlbum
{
ALAuthorizationStatus status = [ALAssetsLibrary authorizationStatus];
NSLog(#"Authorization status: %d", status);
if (status == ALAuthorizationStatusAuthorized)
{
[self TakeImageBufferSnapshot:self.view.bounds.size];
// UPDATED - DO NOT proceed to save to album below.
// Instead, set the created image to a UIImageView IBOutlet.
// On the simulator this shows the screen/buffer captured image (as expected) -
// but on the device (iPad) this doesn't show anything and the app crashes.
self.testImageView.image = self.savePhotoImage;
return;
NSLog(#"Saving to photo album...");
UIImageWriteToSavedPhotosAlbum(self.savePhotoImage,
self,
#selector(photoAlbumImageSave:didFinishSavingWithError:contextInfo:),
nil);
}
else
{
UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Access is denied"
message:@"Allow access to your Photos library to save this image."
delegate:nil
cancelButtonTitle:@"Close"
otherButtonTitles:nil, nil];
[alert show];
}
}
- (void)photoAlbumImageSave:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)context
{
self.savePhotoImage = nil;
CGImageRelease(g_savePhotoImageRef);
if (error)
{
NSLog(#"Error saving photo to albums: %#", error.description);
}
else
{
NSLog(#"Saved to albums!");
}
}
Update:
I think I've managed to narrow down my issue. I started doing trial and error, running the app (on the device) after commenting out lines of code, to narrow things down. It looks like I may have a problem with the TakeImageBufferSnapshot function, which takes the screen buffer (using glReadPixels) and creates a CGImageRef. Now, when I try to create a UIImage out of this (using the [UIImage imageWithCGImage:] method), this seems to be why the app crashes. If I comment this line out there seems to be no issue (other than the fact that I don't have a UIImage reference).
I basically need a valid UIImage reference so that I can save it to the photo library (which seems to work just fine using test images).
First, I should point out that glReadPixels() may not behave the way you expect. If you try to use it to read from the screen after -presentRenderbuffer: has been called, the results are undefined. On iOS 6.0+, this returns a black image, for example. You need to either use glReadPixels() right before the content is presented to the screen (my recommendation) or enable retained backing for your OpenGL ES context (which has adverse performance consequences).
Second, there's no need for the two buffers. You can capture directly into one and use that to create your CGImageRef.
To your core issue, the problem is that you are deallocating your raw image byte buffer while your CGImageRef / UIImage is still relying on it. This pulls the rug out from underneath your UIImage and will lead to the image corruption / crashing you are seeing. To account for this, you need to put in place a callback function to be triggered on the deallocation of your CGDataProvider. This is how I do this within my GPUImage framework:
rawImagePixels = (GLubyte *)malloc(totalBytesForImage);
glReadPixels(0, 0, (int)currentFBOSize.width, (int)currentFBOSize.height, GL_RGBA, GL_UNSIGNED_BYTE, rawImagePixels);
dataProvider = CGDataProviderCreateWithData(NULL, rawImagePixels, totalBytesForImage, dataProviderReleaseCallback);
cgImageFromBytes = CGImageCreate((int)currentFBOSize.width, (int)currentFBOSize.height, 8, 32, 4 * (int)currentFBOSize.width, defaultRGBColorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaLast, dataProvider, NULL, NO, kCGRenderingIntentDefault);
CGDataProviderRelease(dataProvider);
The callback function takes this form:
void dataProviderReleaseCallback (void *info, const void *data, size_t size)
{
free((void *)data);
}
This function will be called only when the UIImage containing your CGImageRef (and by extension the CGDataProvider) is deallocated. Until that point, the buffer containing your image bytes remains.
You can examine how I do this within GPUImage, as a functional example. Take a look at the GPUImageFilter class for how I extract images from an OpenGL ES frame, including a faster method using texture caches instead of glReadPixels().
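To illustrate the first point, the read-back has to happen while the framebuffer still holds the rendered frame, before it is presented. A rough sketch (renderScene, colorRenderbuffer, and context are placeholder names for your own rendering code and EAGL objects):
[EAGLContext setCurrentContext:context];
[self renderScene]; // draw the frame as usual
glReadPixels(0, 0, (int)currentFBOSize.width, (int)currentFBOSize.height, GL_RGBA, GL_UNSIGNED_BYTE, rawImagePixels); // read while the content is still there
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER]; // present only after the read-back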
Well, from my experience you cannot just grab the pixels that are in the buffer right now; you need to re-establish the right context, draw, and grab them THEN, before finally releasing the context. This is mainly true on the device, and on iOS 6 in particular:
EAGLContext* previousContext = [EAGLContext currentContext];
[EAGLContext setCurrentContext: self.context];
[self fillBuffer:sender];
//GRAB the pixels here
[EAGLContext setCurrentContext:previousContext];
Alternatively (that's how I do it), create a new framebuffer, fill THAT, and grab pixels from THERE:
GLuint rttFramebuffer;
glGenFramebuffers(1, &rttFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, rttFramebuffer);
[self fillBuffer:self.displayLink];
size_t size = viewportHeight * viewportWidth * 4;
GLubyte *pixels = malloc(size*sizeof(GLubyte));
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(0, 0, viewportWidth, viewportHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// Restore the original framebuffer binding
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glDeleteFramebuffers(1, &rttFramebuffer);
size_t bitsPerComponent = 8;
size_t bitsPerPixel = 32;
size_t bytesPerRow = viewportWidth * bitsPerPixel / bitsPerComponent;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, size, ImageProviderReleaseData);
CGImageRef cgImage = CGImageCreate(viewportWidth, viewportHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpace, bitmapInfo, provider, NULL, true, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
UIImage *image = [UIImage imageWithCGImage:cgImage scale:self.contentScaleFactor orientation:UIImageOrientationDownMirrored];
CGImageRelease(cgImage);
CGColorSpaceRelease(colorSpace);
Edit: removed call to presentBuffer

Texture mapping in GLKit is not working only in devices

I used the code in this link to map textures onto human faces. The code uses GLKit to render the images. It works fine in the simulator, but the same code does not work when I run it on a device. Below are screenshots of where it works (the simulator) and where it does not (my iPad).
Code I used to Load Texture:
- (void)loadTexture:(UIImage *)textureImage
{
glGenTextures(1, &_texture);
glBindTexture(GL_TEXTURE_2D, _texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
CGImageRef cgImage = textureImage.CGImage;
float imageWidth = CGImageGetWidth(cgImage);
float imageHeight = CGImageGetHeight(cgImage);
CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imageWidth, imageHeight, 0, GL_RGBA,
GL_UNSIGNED_BYTE, CFDataGetBytePtr(data));
}
Image from the simulator:
The same code on the device gives the following output:
Is there something I'm missing?
Use Apple's built-in 1-line method to load -- don't write your own (broken) implementation!
https://developer.apple.com/library/mac/#documentation/GLkit/Reference/GLKTextureLoader_ClassRef/Reference/Reference.html#//apple_ref/occ/clm/GLKTextureLoader/textureWithContentsOfFile:options:error:
Or, if you must do it yourself, you have two suspicious parts:
2.1. Get rid of this line:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
2.2. This line uses an out-of-scope number, which isn't great:
glGenTextures(1, &_texture);
If for some reason you cannot use Apple's code (e.g. you want to load raw data from an in-memory image), here's a copy/paste of working code:
NSData* imageData = ... // fill this yourself
CGSize imageSize = ... // fill this yourself
GLuint integerForOpenGLToFill;
glGenTextures(1, &integerForOpenGLToFill);
glBindTexture( GL_TEXTURE_2D, integerForOpenGLToFill);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
unsigned char* pixelData = (unsigned char*) malloc( [imageData length] * sizeof(unsigned char) );
[imageData getBytes:pixelData];
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imageSize.width, imageSize.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData);
I finally caved in and switched to GLKTextureLoader. As Adam mentions, it's pretty sturdy.
Here's an implementation which might work for you:
- (void)loadTexture:(NSString *)fileName
{
NSDictionary* options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES],
GLKTextureLoaderOriginBottomLeft,
nil];
NSError* error;
NSString* path = [[NSBundle mainBundle] pathForResource:fileName ofType:nil];
GLKTextureInfo* texture = [GLKTextureLoader textureWithContentsOfFile:path options:options error:&error];
if(texture == nil)
{
NSLog(#"Error loading file: %#", [error localizedDescription]);
}
glBindTexture(GL_TEXTURE_2D, texture.name);
}
I'll change it for the mtl2opengl project on GitHub soon...
Looking at the screenshots ... I wonder if the problem is that you're going from iPhone to iPad?
i.e. retina to non-retina?
Looks to me like your image loading might be ignoring the Retina scale, so that the texture map is "2x too big" on the iPad.
Using Apple's method (my other answer) should fix that automatically, but if not: you could try making two copies of the image, one at current size (named "image@2x.png") and one at half size (resized in Photoshop / Preview.app / etc.) (named "image.png").

How To Load and Unload Images In The Correct Manner

I am having difficulty with images in my iPhone app. I load an image as follows:
UIImage *image = [[UIImage alloc] initWithContentsOfFile:[[NSBundle mainBundle] pathForResource:name ofType:@""]];
CGImageRef imageRef = image.CGImage;
NSInteger texWidth = CGImageGetWidth(*imageRef);
NSInteger texHeight = CGImageGetHeight(*imageRef);
GLubyte *textureData = (GLubyte *)malloc(texWidth * texHeight * 4);
CGContextRef textureContext = CGBitmapContextCreate(textureData, texWidth, texHeight, 8,
texWidth * 4, CGImageGetColorSpace(*imageRef),
kCGImageAlphaPremultipliedLast);
CGContextDrawImage(textureContext, CGRectMake(0.0, 0.0, (float)texWidth,
(float)texHeight), *imageRef);
CGContextRelease(textureContext);
glBindTexture(GL_TEXTURE_2D, location);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0, GL_RGBA,
GL_UNSIGNED_BYTE, textureData);
free(textureData);
and I want to free all the memory that the image has used. I attempt to do that as follows:
CGImageRelease(imageRef);
free(image);
However, the memory does not appear to be freed according to Instruments. Am I missing something? Is it being cached by one of these functions? Furthermore, when I reload the image later, it appears to be loaded twice, i.e. the alpha channel is twice as much as it was the first time. And again, when I (attempt to) free the memory and come to reload it for a third time, the alpha channel is three times what it was the first time. The parts that are supposed to be almost transparent are becoming more and more opaque.
The issue with the transparency was not noticed in iOS 3, only iOS 4. But the issue with the memory not being freed was noticed in both.
Please can someone help me with this? I've spent so much time messing around with images! Also, I don't think I use a stupid number of images: I am making a game that uses about 40 images, most of which are 128x128 PNGs. How come I run out of memory so quickly? What is the "common" practice for this scenario? Am I right in trying to free the memory when the image isn't needed?
Replies will be greatly appreciated! Thank you.
The answer to freeing up memory when using OpenGL is simple: http://www.opengl.org/sdk/docs/man/xhtml/glDeleteTextures.xml
Freeing the memory used by UIImage is barely noticeable compared to freeing the texture used by OpenGL. The problem lies in the following code:
GLuint textures[100]; // allows total of 100 textures
location = textures[imageNo];
glBindTexture(GL_TEXTURE_2D, location);
And it is solved by:
void unloadImage(GLuint imageNo)
{
[textureImages[imageNo] release];
glDeleteTextures(1, &textures[imageNo]);
}
I really hope this becomes useful to someone else using OpenGL textures on the iPhone, because it's had me going round in circles for ages!
And thanks to @sergio who helped with my issue!
I think that you should do:
[image release];
instead of
CGImageRelease(imageRef);
In fact, the object that you are allocating is image; then you get a reference to one of its members. You do not need to release the reference to the member, but rather the original object you allocated. And you do not need to call free(image);
EDIT:
All your usage of *image and *imageRef does not seem correct to me; you should instead use image and imageRef:
CGImageRef imageRef = image.CGImage;
NSInteger texWidth = CGImageGetWidth(imageRef);
NSInteger texHeight = CGImageGetHeight(imageRef);
GLubyte *textureData = (GLubyte *)malloc(texWidth * texHeight * 4);
CGContextRef textureContext = CGBitmapContextCreate(textureData, texWidth, texHeight, 8,
texWidth * 4, CGImageGetColorSpace(imageRef),
kCGImageAlphaPremultipliedLast);
CGContextDrawImage(textureContext, CGRectMake(0.0, 0.0, (float)texWidth,
(float)texHeight), imageRef);
CGContextRelease(textureContext);
glBindTexture(GL_TEXTURE_2D, location);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0, GL_RGBA,
GL_UNSIGNED_BYTE, textureData);
free(textureData);
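When the texture is no longer needed, the matching cleanup (combining this with the glDeleteTextures point from the answer above) would be roughly:
glDeleteTextures(1, &location); // frees the GPU copy created by glTexImage2D
[image release]; // releases the UIImage (and, with it, the CGImage it owns)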
About memory consumption: I think that using the OpenGLES framework might exact a fixed penalty, i.e. increase the overall memory footprint of your app (due to loading the framework and instantiating I don't know which objects); but you should not experience an unexpected memory increase each time that you draw a texture.

How to unload iPhone images to free up memory

I have an iPhone game that uses images to display things such as buttons on screen to control the game. The images aren't huge, but I want to load them "lazily" and unload them from memory when they are not being used.
I have a function that loads images as follows:
void loadImageRef(NSString *name, GLuint location, CGImageRef *textureImage)
{
*textureImage = [UIImage imageNamed:name].CGImage;
if (*textureImage == nil) {
NSLog(#"Failed to load texture image");
return;
}
NSInteger texWidth = CGImageGetWidth(*textureImage);
NSInteger texHeight = CGImageGetHeight(*textureImage);
GLubyte *textureData = (GLubyte *)malloc(texWidth * texHeight * 4);
CGContextRef textureContext = CGBitmapContextCreate(textureData,
texWidth, texHeight,
8, texWidth * 4,
CGImageGetColorSpace(*textureImage),
kCGImageAlphaPremultipliedLast);
CGContextTranslateCTM(textureContext, 0, texHeight);
CGContextScaleCTM(textureContext, 1.0, -1.0);
CGContextDrawImage(textureContext, CGRectMake(0.0, 0.0, (float)texWidth, (float)texHeight), *textureImage);
CGContextRelease(textureContext);
glBindTexture(GL_TEXTURE_2D, location);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
free(textureData);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
The reason I pass the CGImageRef to the function is so that I can free it later. But I don't know if this is the correct thing to do?
I call the function as follows (for example):
loadImageRef(#"buttonConfig.png", textures[IMG_CONFIG], &imagesRef[IMG_CONFIG]);
And when I'm finished with it I unload it as follows:
unloadImage(&imagesRef[IMG_CONFIG]);
which calls the function:
void unloadImage(CGImageRef *image)
{
CGImageRelease(*image);
}
But when I run the app, I get an EXC_BAD_ACCESS error, either when unloading or when loading for the second time (i.e. when the image is needed again). I don't see anything else that would help determine the real cause of the problem.
Perhaps I am attacking this issue from the wrong angle. What is the usual method of lazy loading of images, and what is the usual way to unload the image to free up the memory?
Many thanks in advance,
Sam
You should avoid using the [UIImage imageNamed:name] method at all costs if you want to ensure things don't stay cached. imageNamed caches all images it pulls through. You want to use
UIImage *image = [[UIImage alloc] initWithContentsOfFile:[[NSBundle mainBundle] pathForResource:fileNameWithoutExtension ofType:fileNameExtension]];
With that, you can simply release the object when you are done with it and free the memory.
Apple has improved the caching behaviour of [UIImage imageNamed:], but it still doesn't release images on demand.
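Putting the two answers together, a minimal lazy load/unload pair could look like this (a sketch; loadTextureFromImage is a hypothetical helper that does the CGBitmapContext/glTexImage2D work from your loadImageRef function, and textures[IMG_CONFIG] is the texture name from the question):
// Load on demand: read the file directly (no imageNamed: caching), upload it to GL, then release the UIImage.
UIImage *image = [[UIImage alloc] initWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"buttonConfig" ofType:@"png"]];
loadTextureFromImage(image, textures[IMG_CONFIG]); // hypothetical helper: CGBitmapContextCreate + glTexImage2D as above
[image release]; // the GL texture now holds the only copy of the pixels
// Unload when the image is not needed: deleting the GL texture is what actually frees the memory.
glDeleteTextures(1, &textures[IMG_CONFIG]);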

Transparency/Blending issues with OpenGL ES/iPhone

I have a simple 16x16 particle that goes from being opaque to transparent. Unfortunately it appears different in my iPhone port, and I can't see where the differences in the code are. Most of the code is essentially the same.
I've uploaded an image here to show the problem:
The particle on the left is the incorrectly rendered iPhone version and the right is how it appears on Mac and Windows. It's just a simple RGBA .png file.
I've tried numerous blend functions and glTexEnv settings, but I can't seem to make them the same.
Just to be thorough, my texture loading code on the iPhone looks like the following:
GLuint TextureLoader::LoadTexture(const char *path)
{
NSString *macPath = [NSString stringWithCString:path length:strlen(path)];
GLuint texture = 0;
CGImageRef textureImage = [UIImage imageNamed:macPath].CGImage;
if (textureImage == nil)
{
NSLog(#"Failed to load texture image");
return 0;
}
NSInteger texWidth = CGImageGetWidth(textureImage);
NSInteger texHeight = CGImageGetHeight(textureImage);
GLubyte *textureData = new GLubyte[texWidth * texHeight * 4];
memset(textureData, 0, texWidth * texHeight * 4);
CGContextRef textureContext = CGBitmapContextCreate(textureData, texWidth, texHeight, 8, texWidth * 4, CGImageGetColorSpace(textureImage), kCGImageAlphaPremultipliedLast);
CGContextDrawImage(textureContext, CGRectMake(0.0, 0.0, (float)texWidth, (float)texHeight), textureImage);
CGContextRelease(textureContext);
//Make a texture ID, bind it, create it
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
delete[] textureData;
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
return texture;
}
The blend function I use is glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
I'll try any ideas people throw at me, because this has been a bit of a mystery to me.
Cheers.
This looks like the standard "textures are converted to premultiplied alpha" problem.
You can use:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
or you can write custom loading code to avoid the premultiplication.
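Given the CGBitmapContext-based loader in the question (which always produces premultiplied pixels), one way to write that custom loading path is to un-premultiply the buffer right after CGContextDrawImage; a rough sketch using the textureData / texWidth / texHeight names from the question (precision is lost where alpha is small):
for (int i = 0; i < texWidth * texHeight; i++)
{
GLubyte *px = textureData + i * 4;
GLubyte a = px[3];
if (a != 0 && a != 255)
{
// undo the premultiplication performed by CGContextDrawImage
int r = px[0] * 255 / a; px[0] = (GLubyte)(r > 255 ? 255 : r);
int g = px[1] * 255 / a; px[1] = (GLubyte)(g > 255 ? 255 : g);
int b = px[2] * 255 / a; px[2] = (GLubyte)(b > 255 ? 255 : b);
}
}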
Call me naive, but seeing that premultiplying an image requires (a*r, a*g, a*b, a), I figured I'd just divide the RGB values by a.
Of course, as soon as the alpha value is larger than the r, g, b components, the particle texture became black. Oh well. Unless I can find a different image loader from the one above, I'll just make all the RGB components 0xff (white). This is a good temporary solution for me because I either need a white particle or can just colourise it in the application. Later on I might just make raw RGBA files and read them in, because this is mainly for very small 16x16 and smaller particle textures.
I can't use premultiplied textures for the particle system because overlapping multiple particle textures saturates the colours way too much.