Saving imageRef from GLPaint creates completely black image - iphone

Hi, I am trying out a drawing app and have a problem when it comes to saving the image that is drawn. I'm still very early in learning this, but I have added the code from
How to get UIImage from EAGLView? to save the image that was drawn.
I have created a new app and displayed a view controller that I created. In IB I added a view whose class is PaintingView, with a UIImageView behind it.
The only modification I have made to the PaintingView so far is to set its background to clear so that I can display an image behind it. The drawing works great; my only problem is saving.
- (void)saveImageFromGLView:(UIView *)glView {
    if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES)
    {
        /// This IS being activated with code 0
        NSLog(@"failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES));
    }

    int s = 1;
    UIScreen *screen = [UIScreen mainScreen];
    if ([screen respondsToSelector:@selector(scale)])
        s = (int)[screen scale];

    const int w = self.frame.size.width;
    const int h = self.frame.size.height;
    const NSInteger myDataLength = w * h * 4 * s * s;

    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, w*s, h*s, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for (int y = 0; y < h*s; y++)
    {
        memcpy(buffer2 + (h*s - 1 - y) * w * 4 * s, buffer + (y * 4 * w * s), w * 4 * s);
    }
    free(buffer); // work with the flipped buffer, so get rid of the original one.

    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * w * s;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // make the cgimage
    CGImageRef imageRef = CGImageCreate(w*s, h*s, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef scale:s orientation:UIImageOrientationUp];
    UIImageWriteToSavedPhotosAlbum(myImage, nil, nil, nil);

    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    free(buffer2);
}
Adding the above code to the sample app works fine; the problem is doing it in my new app. The only difference I can see is that I have not included PaintingWindow - could that be the problem?
It's as if the saveImage method isn't seeing the drawing data.

The save method should be called within the scope of the OpenGL context.
To fix this, you can move the method into the same rendering .m file and call it from outside.
You also need to consider the OpenGL clear color.
(See the comments for a more detailed explanation.)

I found that changing the CGBitmapInfo to:
CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast;
results in a transparent background.

Ah, you have to do this at the beginning.
[EAGLContext setCurrentContext:drawContext];
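Put together, the call site might look something like this (a minimal sketch - `self.context` and `viewFramebuffer` stand in for whatever your PaintingView actually names its EAGLContext and framebuffer):

EAGLContext *previousContext = [EAGLContext currentContext];
[EAGLContext setCurrentContext:self.context];               // the PaintingView's context
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);  // the framebuffer the strokes were drawn into
[self saveImageFromGLView:self];                            // glReadPixels now sees the drawing
[EAGLContext setCurrentContext:previousContext];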

Related

Displaying a screen shot generated UIImage is not displaying in UIImageView (for device only)

I am trying to save an OpenGL buffer (what's currently displayed in the view) to the device's photo library. The code snippet below works fine on the simulator, but on the actual device it crashes. I believe there could be a problem with the way I'm creating the UIImage captured from the screen.
This operation is initiated via an IBAction event handler method.
The function I use to save the image is UIImageWriteToSavedPhotosAlbum (I recently changed this to ALAssetsLibrary's writeImageToSavedPhotosAlbum).
I have ensured that my app is authorized to access the Photos library.
I also made sure that my CGImageRef is globally defined (at the top of the file) and my UIImage is a (nonatomic, retain) property.
Can somebody help me fix this issue? I'd like to have a valid UIImage reference generated from the glReadPixels data.
Below is the relevant code snippet (call to save to photo library):
-(void)TakeImageBufferSnapshot:(CGSize)dimensions
{
    NSLog(@"TakeSnapShot 1 : (%f, %f)", dimensions.width, dimensions.height);
    NSInteger size = dimensions.width * dimensions.height * 4;

    GLubyte *buffer = (GLubyte *) malloc(size);
    glReadPixels(0, 0, dimensions.width, dimensions.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    GLubyte *buffer2 = (GLubyte *) malloc(size);
    int height = (int)dimensions.height - 1;
    int width = (int)dimensions.width;

    for (int y = 0; y < dimensions.height; y++)
    {
        for (int x = 0; x < dimensions.width * 4; x++)
        {
            buffer2[(height - 1 - y) * width * 4 + x] = buffer[y * 4 * width + x];
        }
    }
    NSLog(@"TakeSnapShot 2");

    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, size, NULL);

    if (buffer) free(buffer);
    if (buffer2) free(buffer2);

    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * self.view.bounds.size.width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    NSLog(@"TakeSnapShot 3");

    // make the cgimage
    g_savePhotoImageRef = CGImageCreate(dimensions.width, dimensions.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    NSLog(@"TakeSnapShot 4");

    // then make the uiimage from that
    self.savePhotoImage = [UIImage imageWithCGImage:g_savePhotoImageRef];

    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
}
-(void)SaveToPhotoAlbum
{
    ALAuthorizationStatus status = [ALAssetsLibrary authorizationStatus];
    NSLog(@"Authorization status: %d", status);

    if (status == ALAuthorizationStatusAuthorized)
    {
        [self TakeImageBufferSnapshot:self.view.bounds.size];

        // UPDATED - DO NOT proceed to save to album below.
        // Instead, set the created image to a UIImageView IBOutlet.
        // On the simulator this shows the screen/buffer captured image (as expected) -
        // but on the device (ipad) this doesnt show anything and the app crashes.
        self.testImageView.image = self.savePhotoImage;
        return;

        NSLog(@"Saving to photo album...");
        UIImageWriteToSavedPhotosAlbum(self.savePhotoImage,
                                       self,
                                       @selector(photoAlbumImageSave:didFinishSavingWithError:contextInfo:),
                                       nil);
    }
    else
    {
        UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Access is denied"
                                                        message:@"Allow access to your Photos library to save this image."
                                                       delegate:nil
                                              cancelButtonTitle:@"Close"
                                              otherButtonTitles:nil, nil];
        [alert show];
    }
}
- (void)photoAlbumImageSave:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)context
{
    self.savePhotoImage = nil;
    CGImageRelease(g_savePhotoImageRef);

    if (error)
    {
        NSLog(@"Error saving photo to albums: %@", error.description);
    }
    else
    {
        NSLog(@"Saved to albums!");
    }
}
* Update *
I think I've managed to narrow down my issue. I started doing trial and error, running the app (on the device) after commenting out lines of code to narrow things down. It looks like I may have a problem with the TakeImageBufferSnapshot function, which takes the screen buffer (using glReadPixels) and creates a CGImageRef. When I try to create a UIImage out of this (using the [UIImage imageWithCGImage:] method), that seems to be why the app crashes. If I comment this line out, there is no issue (other than the fact that I don't have a UIImage reference).
I basically need a valid UIImage reference so that I can save it to the photo library (which seems to work just fine using test images).
First, I should point out that glReadPixels() may not behave the way you expect. If you try to use it to read from the screen after -presentRenderbuffer: has been called, the results are undefined. On iOS 6.0+, this returns a black image, for example. You need to either use glReadPixels() right before the content is presented to the screen (my recommendation) or enable retained backing for your OpenGL ES context (which has adverse performance consequences).
Second, there's no need for the two buffers. You can capture directly into one and use that to create your CGImageRef.
To your core issue, the problem is that you are deallocating your raw image byte buffer while your CGImageRef / UIImage is still relying on it. This pulls the rug out from underneath your UIImage and will lead to the image corruption / crashing you are seeing. To account for this, you need to put in place a callback function to be triggered on the deallocation of your CGDataProvider. This is how I do this within my GPUImage framework:
rawImagePixels = (GLubyte *)malloc(totalBytesForImage);
glReadPixels(0, 0, (int)currentFBOSize.width, (int)currentFBOSize.height, GL_RGBA, GL_UNSIGNED_BYTE, rawImagePixels);
dataProvider = CGDataProviderCreateWithData(NULL, rawImagePixels, totalBytesForImage, dataProviderReleaseCallback);
cgImageFromBytes = CGImageCreate((int)currentFBOSize.width, (int)currentFBOSize.height, 8, 32, 4 * (int)currentFBOSize.width, defaultRGBColorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaLast, dataProvider, NULL, NO, kCGRenderingIntentDefault);
CGDataProviderRelease(dataProvider);
The callback function takes this form:
void dataProviderReleaseCallback (void *info, const void *data, size_t size)
{
free((void *)data);
}
This function will be called only when the UIImage containing your CGImageRef (and by extension the CGDataProvider) is deallocated. Until that point, the buffer containing your image bytes remains.
You can examine how I do this within GPUImage, as a functional example. Take a look at the GPUImageFilter class for how I extract images from an OpenGL ES frame, including a faster method using texture caches instead of glReadPixels().
Well - from my experience you cannot just grab the pixels that happen to be in the buffer right now.
You need to re-establish the right context, draw, and grab the pixels THEN, before finally releasing the context.
=> This is mainly true for the device, and iOS 6 in particular.
EAGLContext* previousContext = [EAGLContext currentContext];
[EAGLContext setCurrentContext: self.context];
[self fillBuffer:sender];
//GRAB the pixels here
[EAGLContext setCurrentContext:previousContext];
Alternatively (that's how I do it), create a new framebuffer, fill THAT, and grab the pixels from THERE:
GLuint rttFramebuffer;
glGenFramebuffers(1, &rttFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, rttFramebuffer);
[self fillBuffer:self.displayLink];
size_t size = viewportHeight * viewportWidth * 4;
GLubyte *pixels = malloc(size*sizeof(GLubyte));
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(0, 0, viewportWidth, viewportHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// Restore the original framebuffer binding
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glDeleteFramebuffers(1, &rttFramebuffer);
size_t bitsPerComponent = 8;
size_t bitsPerPixel = 32;
size_t bytesPerRow = viewportWidth * bitsPerPixel / bitsPerComponent;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, size, ImageProviderReleaseData);
CGImageRef cgImage = CGImageCreate(viewportWidth, viewportHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpace, bitmapInfo, provider, NULL, true, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
UIImage *image = [UIImage imageWithCGImage:cgImage scale:self.contentScaleFactor orientation:UIImageOrientationDownMirrored];
CGImageRelease(cgImage);
CGColorSpaceRelease(colorSpace);
Edit: removed call to presentBuffer
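One detail the snippet above appears to gloss over: a freshly generated framebuffer has no color attachment, so there is nothing for fillBuffer / glReadPixels to render into or read from. A minimal sketch of what could go right after the glBindFramebuffer call (OpenGL ES 2.0 names; GL_RGBA8_OES assumes the GL_OES_rgb8_rgba8 extension, which iOS devices expose):

GLuint rttColorbuffer;
glGenRenderbuffers(1, &rttColorbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, rttColorbuffer);
// Allocate storage matching the viewport, then attach it to the offscreen FBO.
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, viewportWidth, viewportHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rttColorbuffer);
// ... [self fillBuffer:...] and glReadPixels as above, then clean up:
glDeleteRenderbuffers(1, &rttColorbuffer);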

Bad access on CGContextClearRect

I am getting the error EXC_BAD_ACCESS on the line CGContextClearRect(c, self.view.bounds) below. I can't seem to figure out why. This is in a UIViewController class. Here is the function I am in during the crash.
- (void)level0Func {
    printf("level0Func\n");
    frameStart = [NSDate date];

    UIImage *img;
    CGContextRef c = startContext(self.view.bounds.size);
    printf("address of context: %x\n", c);
    /* drawing/updating code */ {
        CGContextClearRect(c, self.view.bounds); // crash occurs here
        CGContextSetFillColorWithColor(c, [UIColor greenColor].CGColor);
        CGContextFillRect(c, self.view.bounds);
        CGImageRef cgImg = CGBitmapContextCreateImage(c);
        img = [UIImage imageWithCGImage:cgImg]; // this sets the image to be passed to the view for drawing
        // CGImageRelease(cgImg);
    }
    endContext(c);
}
Here are my startContext() and endContext(), along with the createContext() helper they use:
CGContextRef createContext(int width, int height) {
    CGContextRef r = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int byteCount;
    int bytesPerRow;

    bytesPerRow = width * 4;
    byteCount = width * height;

    colorSpace = CGColorSpaceCreateDeviceRGB();

    printf("allocating %i bytes for bitmap data\n", byteCount);
    bitmapData = malloc(byteCount);
    if (bitmapData == NULL) {
        fprintf(stderr, "could not allocate memory when creating context");
        //free(bitmapData);
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }

    r = CGBitmapContextCreate(bitmapData, width, height, 8, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    return r;
}

CGContextRef startContext(CGSize size) {
    CGContextRef r = createContext(size.width, size.height);
    // UIGraphicsPushContext(r); wait a second, we dont need to push anything b/c we can draw to an offscreen context
    return r;
}

void endContext(CGContextRef c) {
    free(CGBitmapContextGetData(c));
    CGContextRelease(c);
}
What I am basically trying to do is draw to a context that I am not pushing onto the stack so I can create a UIImage out of it. Here is my output:
wait_fences: failed to receive reply: 10004003
level0Func
allocating 153600 bytes for bitmap data
address of context: 68a7ce0
Any help would be appreciated. I am stumped.
You're not allocating enough memory. Here are the relevant lines from your code:
bytesPerRow = width * 4;
byteCount = width * height;
bitmapData = malloc(byteCount);
When you compute bytesPerRow, you (correctly) multiply the width by 4 because each pixel requires 4 bytes. But when you compute byteCount, you do not multiply by 4, so you act as though each pixel only requires 1 byte.
Change it to this:
bytesPerRow = width * 4;
byteCount = bytesPerRow * height;
bitmapData = malloc(byteCount);
OR, don't allocate any memory and Quartz will allocate the correct amount for you, and free it for you. Just pass NULL as the first argument of CGBitmapContextCreate:
r = CGBitmapContextCreate(NULL, width, height, 8, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
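If you go the NULL route, one adjustment follows from the endContext() shown in the question (a sketch; assuming nothing else holds on to the bitmap data): Quartz now owns the backing store, so endContext() should no longer free it.

void endContext(CGContextRef c) {
    // We passed NULL to CGBitmapContextCreate, so Quartz allocated the bitmap
    // data and will release it together with the context - freeing it here
    // would be a double free.
    CGContextRelease(c);
}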
Check that the values for self.view.bounds.size are valid. This method could be called before the view size is properly setup.
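For example, the kind of guard meant here (just a sketch, placed at the top of level0Func):

CGSize size = self.view.bounds.size;
if (size.width < 1 || size.height < 1) {
    // The view has not been laid out yet; creating a bitmap context with a
    // zero size (or clearing a zero-sized rect) will not end well.
    NSLog(@"view bounds not ready yet: %@", NSStringFromCGSize(size));
    return;
}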

Take a screenshot programmatically of UIView + GL view

I have a GL view inside my UIView, and now I need to take a screenshot of the combined view (the UIView plus the GL view).
I googled a lot but didn't find anything useful. I do know how to take a screenshot of the GL view:
int width = glView.frame.size.width;
int height = glView.frame.size.height;
NSInteger myDataLength = width * height * 4;

// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for (int y = 0; y < height; y++)
{
    for (int x = 0; x < width * 4; x++)
    {
        buffer2[((height - 1) - y) * width * 4 + x] = buffer[y * 4 * width + x];
    }
}

// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

// make the cgimage
CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

// then make the uiimage from that
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
return myImage;
It seems like it's pretty tricky to get a screenshot nowadays, especially when you're mixing UIKit and OpenGL ES: there used to be UIGetScreenImage(), but Apple made it private again and is rejecting apps that use it.
Instead, there are two "solutions" to replace it: Screen capture in UIKit applications and OpenGL ES View Snapshot. The former does not capture OpenGL ES or video content, while the latter is only for OpenGL ES.
There is another technical note How do I take a screenshot of my app that contains both UIKit and Camera elements?, and here they essentially say: You need to first capture the camera picture and then when rendering the view hierarchy, draw that image in the context.
The very same would apply for OpenGL ES: you would first need to render a snapshot of your OpenGL ES view, then render the UIKit view hierarchy into an image context and draw the image of your OpenGL ES view on top of it. Very ugly, and depending on your view hierarchy it might actually not be what you're seeing on screen (e.g. if there are views in front of your OpenGL view).
Inspired by DarkDust, I was successful in implementing a screen capture of a mix of a UIView and an OpenGL view (a Cocos2d 2.0 view). I've sanitized the code a bit and pasted it below; hopefully it's helpful for others.
To help explain the setup, my app screen has 4 view layers: at the back is a background UIView with background images ("backgroundView"); in the middle are 2 layers of the Cocos2d GL view ("glLayer1" and "glLayer2"); and at the front is another UIView layer with a few native UI controls (e.g. UIButtons) ("frontView").
Here's the code:
+ (UIImage *) grabScreenshot
{
    // Get the 2 layers in the middle of cocos2d glview and store it as UIImage
    [CCDirector sharedDirector].nextDeltaTimeZero = YES;
    CGSize winSize = [CCDirector sharedDirector].winSize;
    CCRenderTexture *rtx =
        [CCRenderTexture renderTextureWithWidth:winSize.width
                                         height:winSize.height];
    [rtx begin];
    [glLayer1 visit];
    [glLayer2 visit];
    [rtx end];
    UIImage *openglImage = [rtx getUIImage];

    UIGraphicsBeginImageContext(winSize);

    // Capture the bottom layer
    [backgroundView.layer renderInContext:UIGraphicsGetCurrentContext()];

    // Draw the captured glLayers image into the image context
    [openglImage drawInRect:CGRectMake(0, 0, openglImage.size.width, openglImage.size.height)];

    [frontView.layer renderInContext:UIGraphicsGetCurrentContext()];

    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return viewImage;
}
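Calling it is then straightforward - for example from a button action (ScreenshotHelper below is a placeholder for whatever class hosts the method above):

- (IBAction)shareButtonTapped:(id)sender
{
    UIImage *screenshot = [ScreenshotHelper grabScreenshot]; // hypothetical host class
    UIImageWriteToSavedPhotosAlbum(screenshot, nil, nil, nil);
}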

split UIImage by colors and create 2 images

I have looked through replacing colors in an image but cannot get it to work the way I need, because I am trying to do it with every color but one, as well as with transparency.
What I am looking for is a way to take in an image and split out one color (say, all the pure black) from it, then take that split-out portion and make a new image with a transparent background plus the split-out portion.
(Here is an example of the idea: say I want to take a screenshot of this page, make every color other than pure black transparent, and save that new image to the library or put it into a UIImageView.)
I have looked into CGImageCreateWithMaskingColors but can't seem to do what I need with the transparent portion, and I don't really understand the colorMasking input other than that you can provide it with a {Rmin,Rmax,Gmin,Gmax,Bmin,Bmax} color mask - but when I do, it colors everything. Any ideas or input would be great.
Sounds like you're going to have to get access to the underlying bytes and write code to process them directly. You can use CGImageGetDataProvider() to get access to the data of an image, but there's no guarantee that the format will be something you know how to handle. Alternately you can create a new CGContextRef using a specific format you know how to handle, then draw the original image into your new context, then process the underlying data. Here's a quick attempt at doing what you want (uncompiled):
- (UIImage *)imageWithBlackPixels:(UIImage *)image {
    CGImageRef cgImage = image.CGImage;

    // create a premultiplied ARGB context with 32bpp
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t bpc = 8; // bits per component
    size_t bpp = bpc * 4 / 8; // bytes per pixel
    size_t bytesPerRow = bpp * width;
    uint8_t *data = malloc(bytesPerRow * height); // uint8_t (not void *) so individual bytes can be indexed
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host;
    CGContextRef ctx = CGBitmapContextCreate(data, width, height, bpc, bytesPerRow, colorspace, bitmapInfo);
    CGColorSpaceRelease(colorspace);
    if (ctx == NULL) {
        // couldn't create the context - double-check the parameters?
        free(data);
        return nil;
    }

    // draw the image into the context
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

    // replace all non-black pixels with transparent
    // preserve existing transparency on black pixels
    for (size_t y = 0; y < height; y++) {
        size_t rowStart = bytesPerRow * y;
        for (size_t x = 0; x < width; x++) {
            size_t pixelOffset = rowStart + x*bpp;
            // check the RGB components of the pixel
            if (data[pixelOffset+1] != 0 || data[pixelOffset+2] != 0 || data[pixelOffset+3] != 0) {
                // this pixel contains non-black. zero it out
                memset(&data[pixelOffset], 0, 4);
            }
        }
    }

    // create our new image and release the context data
    CGImageRef newCGImage = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    free(data);

    UIImage *newImage = [UIImage imageWithCGImage:newCGImage scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(newCGImage);
    return newImage;
}
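A hypothetical call site, with sourceImage and previewImageView standing in for whatever UIImage and UIImageView you actually use:

UIImage *blackOnly = [self imageWithBlackPixels:sourceImage];
if (blackOnly) {
    self.previewImageView.image = blackOnly;                   // show it in a UIImageView...
    UIImageWriteToSavedPhotosAlbum(blackOnly, nil, nil, nil);  // ...or save it to the library
}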

free() call works on simulator, makes iPad angry. iPad smash

My app is running out of memory. To resolve this, I free up two very large arrays used in a function that writes a framebuffer to an image. The method looks like this:
-(UIImage *) glToUIImage {
    NSInteger myDataLength = 768 * 1024 * 4;

    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, 768, 1024, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for (int y = 0; y < 1024; y++)
    {
        for (int x = 0; x < 768 * 4; x++)
        {
            buffer2[(1023 - y) * 768 * 4 + x] = buffer[y * 4 * 768 + x];
        }
    }

    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * 768;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // make the cgimage
    CGImageRef imageRef = CGImageCreate(768, 1024, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    //free(buffer);
    //free(buffer2);
    return myImage;
}
Note the two calls to free(buffer) and free(buffer2) at the end there? Those work fine on the iPad simulator, removing the memory problem and allowing me to generate with impunity. However, they kill the iPad instantly - like, the first time the code executes. If I remove the free() calls it runs fine, it just runs out of memory after a minute or two. So why are the free() calls crashing the device?
Note - it's not the call to free() that explicitly crashes the device; it crashes later. But that seems to be the root cause.
EDIT - Someone's asked where exactly it crashes. The flow goes on to return the image to another object, which writes it to a file. When calling the UIImageJPEGRepresentation method, it generates an EXC_BAD_ACCESS message. I assume this is because the UIImage I'm passing it to write to the file is corrupt, null, or something else. But this only happens when I free those two buffers.
I'd understand if the memory was somehow related to the UIImage, but it really shouldn't be, especially as it works on the simulator. I wondered if it is down to how the iPad handles free() calls...
From reading the docs, I believe CGDataProviderCreateWithData will only reference the memory pointed to by buffer2, not copy it. You should keep it allocated until the image is released.
Try this:
static void _glToUIImageRelease (void *info, const void *data, size_t size) {
    free((void *)data);
}
-(UIImage *) glToUIImage {
    NSInteger myDataLength = 768 * 1024 * 4;

    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, 768, 1024, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // ... flip buffer into buffer2 exactly as in the question, then free(buffer) ...

    // make the data provider, passing the release callback so that buffer2 is
    // freed only when the image is released - note: no free(buffer2) anywhere.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, _glToUIImageRelease);

    // ... create the CGImageRef and UIImage exactly as in the question ...
    return myImage;
}
The rest of the function is unchanged from the question; the only differences are the release callback passed to CGDataProviderCreateWithData and the removal of the free(buffer2) call.
First things first, you really should check whether malloc failed and returned NULL. However, if that doesn't solve your problem, use a debugger and step through your program to see exactly where it fails (or at least to get a stack trace). From my experience, odd failures like crashing in unsuspected areas are almost always buffer overflows corrupting arbitrary data some time before.
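For the glToUIImage code above, that check would look something like this (a sketch only):

GLubyte *buffer = (GLubyte *) malloc(myDataLength);
if (buffer == NULL) {
    // malloc failed - bail out instead of handing glReadPixels a NULL pointer
    NSLog(@"could not allocate %ld bytes for the snapshot buffer", (long)myDataLength);
    return nil;
}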
Buffer(s) are undersized? Look at the loops.
for(int y = 0; y <1024; y++)
{
for(int x = 0; x <768 * 4; x++)
{
buffer2[(1023 - y) * 768 * 4 + x] = buffer[y * 4 * 768 + x];
}
}
let y == 0 and x == (768*4)-1, the index of buffer2 exceeds allocated size.
Probably outside range before that?