BAD_ACCESS when trying to free data after converting a UIView to a UIImage - iPhone

I intended to take a snapshot of a UIView that is drawn by OpenGL ES and turn it into a UIImage; here is the code I used:
if (context) {
    [EAGLContext setCurrentContext:context];
    glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
    NSInteger dataLength = framebufferWidth * framebufferHeight * 4;
    GLubyte* data = (GLubyte*)malloc(dataLength);
    glReadPixels(0, 0, framebufferWidth, framebufferHeight, GL_RGBA, GL_UNSIGNED_BYTE, data);
    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * framebufferWidth;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // make the cgimage
    CGImageRef result = CGImageCreate(framebufferWidth, framebufferHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    // then make the uiimage from that
    UIImage *image = [UIImage imageWithCGImage:result];
    CGImageRelease(result);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    //free(data);
    return image;
}
I intended to call free(data); before the return, but that raises a BAD_ACCESS runtime error.
So my question is: how do I free the buffer properly? Or should I free it after the image is released?

You should use CGDataProviderCreateWithData's CGDataProviderReleaseDataCallback as described here.
Simply add the freeData function callback:
void freeData(void *info, const void *data, size_t size)
{
    free((void*)data);
}
and pass it to CGDataProviderCreateWithData:
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data, dataLength, freeData);
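Core Graphics invokes the callback once it no longer needs the bytes, i.e. after the provider and every image built on it have been released, so the buffer must not also be freed manually.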

This is my current code and it works just fine.
if (context) {
    [EAGLContext setCurrentContext:context];
    glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
    NSInteger dataLength = width * height * 4;
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * width;
    GLubyte* data = (GLubyte*)malloc(dataLength);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
    // flip the image vertically (OpenGL's origin is bottom-left)
    GLubyte* swp = (GLubyte*)malloc(width * 4);
    for (int h = 0; h < height / 2; h++) {
        memcpy(swp, data + (height - 1 - h) * bytesPerRow, bytesPerRow);
        memcpy(data + (height - 1 - h) * bytesPerRow, data + h * bytesPerRow, bytesPerRow);
        memcpy(data + h * bytesPerRow, swp, bytesPerRow);
    }
    free(swp); // release the temporary row buffer
    NSData* nsData = [NSData dataWithBytesNoCopy:data length:dataLength freeWhenDone:YES];
    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)nsData);
    // prep the ingredients
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // make the cgimage
    CGImageRef result = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    // then make the uiimage from that
    UIImage *image = [UIImage imageWithCGImage:result];
    CGImageRelease(result);
    return image;
}
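Here the NSData created with dataWithBytesNoCopy:freeWhenDone:YES takes ownership of the malloc'd buffer, and the data provider retains that data, so the pixels are freed automatically once nothing references the image any more; no manual free is needed.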

Related

Why is this GLView Screenshot code returning a blank/black UIImage?

I am using the following code to take a screenshot of the pixels in a GLView. The problem is that it returns a completely black UIImage. This code is being called in LineDrawer.m, which is the heart of the GLView code, so it is being called from the right .m file. How can I save the actual screenshot and not a black image?
- (UIImage*) getGLScreenshot {
    NSLog(@"1");
    float scale = 0.0;
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
    {
        // scale value should be 1.0 on 3G and 3GS, and 2.0 on iPhone 4.
        scale = [[UIScreen mainScreen] scale];
    }
    // these are swapped since the screen is rotated
    float h = 768 * scale;
    float w = 924 * scale;
    NSInteger myDataLength = w * h * 4;
    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for (int y = 0; y < h; y++)
    {
        for (int x = 0; x < w * 4; x++)
        {
            buffer2[(((int)h - 1) - y) * (int)w * 4 + x] = buffer[y * 4 * (int)w + x];
        }
    }
    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * w;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // make the cgimage
    CGImageRef imageRef = CGImageCreate(w, h, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    return myImage;
}

- (void)saveGLScreenshotToPhotosAlbum {
    UIImageWriteToSavedPhotosAlbum([self getGLScreenshot], nil, nil, nil);
}
I had to do something similar in the Sparrow Framework a while back; you should be able to pull the parts you need out of the code in this forum reply:
http://forum.sparrow-framework.org/topic/spdisplayobjectscreenshot
EDIT: Also this post http://forum.sparrow-framework.org/topic/taking-screenshots
Change your drawable properties, setting kEAGLDrawablePropertyRetainedBacking to YES:
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                                   [NSNumber numberWithBool:YES],
                                   kEAGLDrawablePropertyRetainedBacking,
                                   kEAGLColorFormatRGB565,
                                   kEAGLDrawablePropertyColorFormat, nil];
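With retained backing set to NO (the default), the renderbuffer contents are undefined after -presentRenderbuffer:, so a later glReadPixels can return blank pixels; retaining the backing keeps the last presented frame readable.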
Try this. I went through a lot of things and finally found a solution.
-(UIImage*)renderImg {
    GLint backingWidth = 0;
    GLint backingHeight = 0;
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);
    GLubyte *buffer = (GLubyte *) malloc(backingWidth * backingHeight * 4);
    GLubyte *buffer2 = (GLubyte *) malloc(backingWidth * backingHeight * 4);
    glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE,
                 (GLvoid *)buffer);
    // flip the rows: OpenGL's origin is bottom-left, CGImage's is top-left
    for (int y = 0; y < backingHeight; y++) {
        for (int x = 0; x < backingWidth * 4; x++) {
            buffer2[y * 4 * backingWidth + x] =
                buffer[(backingHeight - y - 1) * backingWidth * 4 + x];
        }
    }
    free(buffer);
    // buffer2 is freed by the myProviderReleaseData callback
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2,
                                                              backingWidth * backingHeight * 4,
                                                              myProviderReleaseData);
    // set up for CGImage creation
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * backingWidth;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    // use kCGImageAlphaPremultipliedLast instead to retain alpha
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(backingWidth, backingHeight,
                                        bitsPerComponent, bitsPerPixel,
                                        bytesPerRow, colorSpaceRef,
                                        bitmapInfo, provider,
                                        NULL, NO,
                                        renderingIntent);
    // this contains our final image.
    UIImage *newUIImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    return newUIImage;
}
I think it should work perfectly.
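Note that myProviderReleaseData is not defined in this snippet. The matching release callback, the same one shown in an answer further down this page, is:

static void myProviderReleaseData (void *info, const void *data, size_t size)
{
    free((void*)data);
}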

Create UIImage from CMSampleBufferRef using video format kCVPixelFormatType_420YpCbCr8BiPlanarFullRange

I'm capturing from the iPhone camera using kCVPixelFormatType_420YpCbCr8BiPlanarFullRange for faster processing of the grayscale plane (plane 0).
I am storing a number of frames in memory for later video creation. Previously I was creating the video in grayscale, so I was storing only the plane that contains the luminance (plane 0).
Now I have to store both planes and also create a color video from them. For storing the frames I use something like this:
bytesInFrame = width * height * 2; // 2 bytes per pixel, is that correct?
frameData = (unsigned char*) malloc(bytesInFrame * numberOfFrames);
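(Editor's aside, a hedged sketch rather than part of the original question: for this biplanar 4:2:0 format the chroma plane is half-resolution in both dimensions, so a frame is roughly width * height * 1.5 bytes rather than 2 per pixel. Querying the CVPixelBuffer avoids hard-coding the layout; pixelBuffer below is assumed to come from CMSampleBufferGetImageBuffer.)

CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
// plane 0: luminance, full resolution; plane 1: interleaved CbCr, half resolution
size_t lumaBytes   = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
                   * CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);
size_t chromaBytes = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1)
                   * CVPixelBufferGetHeightOfPlane(pixelBuffer, 1);
size_t bytesInFrame = lumaBytes + chromaBytes;
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);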
The function I was using for creating the image from the grayscale buffer:
- (UIImage *) convertBitmapGrayScaleToUIImage:(unsigned char *) buffer
                                    withWidth:(int) width
                                   withHeight:(int) height {
    size_t bufferLength = width * height * 1;
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);
    size_t bitsPerComponent = 8;
    size_t bitsPerPixel = 8;
    size_t bytesPerRow = 1 * width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceGray();
    if (colorSpaceRef == NULL) {
        DLog(@"Error allocating color space");
        CGDataProviderRelease(provider);
        return nil;
    }
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef iref = CGImageCreate(width,
                                    height,
                                    bitsPerComponent,
                                    bitsPerPixel,
                                    bytesPerRow,
                                    colorSpaceRef,
                                    bitmapInfo,
                                    provider,   // data provider
                                    NULL,       // decode
                                    YES,        // should interpolate
                                    renderingIntent);
    uint32_t* pixels = (uint32_t*)malloc(bufferLength);
    if (pixels == NULL) {
        DLog(@"Error: Memory not allocated for bitmap");
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpaceRef);
        CGImageRelease(iref);
        return nil;
    }
    CGContextRef context = CGBitmapContextCreate(pixels,
                                                 width,
                                                 height,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpaceRef,
                                                 bitmapInfo);
    if (context == NULL) {
        DLog(@"Error context not created");
        free(pixels);
        pixels = NULL; // prevent the second free() below
    }
    UIImage *image = nil;
    if (context) {
        CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
        CGImageRef imageRef = CGBitmapContextCreateImage(context);
        image = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef);
        CGContextRelease(context);
    }
    CGColorSpaceRelease(colorSpaceRef);
    CGImageRelease(iref);
    CGDataProviderRelease(provider);
    if (pixels) {
        free(pixels);
    }
    return image;
}
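For reference, a typical call for one stored frame might look like this (frameIndex, and the assumption that plane 0 sits at the start of each frame slot in frameData, are hypothetical):

unsigned char *plane0 = frameData + frameIndex * bytesInFrame;
UIImage *grayFrame = [self convertBitmapGrayScaleToUIImage:plane0 withWidth:width withHeight:height];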
I have seen that this question is similar to what I want to achieve:
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange frame to UIImage conversion
But I think this would add an extra step to my conversion.
Is there any way of creating a color UIImage from the buffer directly?
I would appreciate some indications.
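(Editor's aside, a hedged sketch that is not from the original thread: if the frames can be kept as CVPixelBufferRefs rather than raw planes, Core Image can convert the biplanar YCbCr buffer to RGB in one step.)

CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer]; // pixelBuffer: a CVPixelBufferRef (assumed)
CIContext *ciContext = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:[ciImage extent]];
UIImage *colorImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);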

iPhone UIImage from texture has gray instead of alpha

I'm using the following code to get a UIImage from the screen:
CGSize s = [self getSize];
int tx = s.width;
int ty = s.height;
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerPixel = (bitsPerComponent * 4) / 8;
int bytesPerRow = bytesPerPixel * tx;
NSInteger myDataLength = bytesPerRow * ty;
NSMutableData *buffer = [[NSMutableData alloc] initWithCapacity:myDataLength];
NSMutableData *pixels = [[NSMutableData alloc] initWithCapacity:myDataLength];
if (!(buffer && pixels)) {
    [buffer release];
    [pixels release];
    return nil;
}
glReadPixels(0, 0, tx, ty, GL_RGBA, GL_UNSIGNED_BYTE, [buffer mutableBytes]);
// make data provider with data.
CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, [buffer mutableBytes], myDataLength, NULL);
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(tx, ty,
                                bitsPerComponent, bitsPerPixel, bytesPerRow,
                                colorSpaceRef, bitmapInfo, provider,
                                NULL, false,
                                kCGRenderingIntentDefault);
CGContextRef context = CGBitmapContextCreate([pixels mutableBytes], tx,
                                             ty, CGImageGetBitsPerComponent(iref),
                                             CGImageGetBytesPerRow(iref), CGImageGetColorSpace(iref),
                                             bitmapInfo);
CGContextTranslateCTM(context, 0.0f, ty);
CGContextScaleCTM(context, 1.0f, -1.0f);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, tx, ty), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context);
UIImage* image = [[UIImage alloc] initWithCGImage:outputRef];
CGImageRelease(iref);
CGContextRelease(context);
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);
CGImageRelease(outputRef);
[pixels release];
[buffer release];
return [image autorelease];
What I get is an image where, wherever there should be transparency (and there is transparency in OpenGL), there is just the color (255, 255, 255, 226). How can I fix this?
CGContextDrawImage doesn't just copy data over, it draws the source image onto the destination image. So at that point it'll be using your alpha channel as an alpha channel rather than merely copying it across.
To be honest, I'm a bit confused as to the point of context and image if you don't actually want to modify the data that comes back from OpenGL. I'd recommend you chop it down to just:
CGImageRef iref = CGImageCreate(tx, ty,
                                bitsPerComponent, bitsPerPixel, bytesPerRow,
                                colorSpaceRef, bitmapInfo, provider,
                                NULL, false,
                                kCGRenderingIntentDefault);

/* ... lots of stuff cut out here ... */

UIImage* image = [[UIImage alloc] initWithCGImage:iref];
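One caveat (editor's note): the translate/scale CTM calls in the cut-out code were also what flipped the image vertically, since glReadPixels returns rows bottom-up; without some equivalent flip the resulting UIImage will be upside down.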

How to capture an OpenGL view as a UIImage?

When I try to use this method to convert an OpenGL view to a UIImage, only the view's background is returned, but not the GLView content. How can I convert OpenGL's context into a UIImage?
It depends on which OpenGL view you are using. Since iOS 5 you can make use of GLKit and the corresponding GLKView, which greatly simplifies the process of rendering to a UIImage.
GLKView* v = (GLKView*) _previewViewController.view;
UIImage* thumbnail = [v snapshot];
http://developer.apple.com/library/ios/#documentation/GLkit/Reference/GLKView_ClassReference/Reference/Reference.html
Use the code below to convert your OpenGL view into a UIImage.
GLubyte *buffer = (GLubyte *) malloc(backingWidth * backingHeight * 4);
GLubyte *buffer2 = (GLubyte *) malloc(backingWidth * backingHeight * 4);
glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE,
             (GLvoid *)buffer);
for (int y = 0; y < backingHeight; y++) {
    for (int x = 0; x < backingWidth * 4; x++) {
        buffer2[y * 4 * backingWidth + x] =
            buffer[(backingHeight - y - 1) * backingWidth * 4 + x];
    }
}
// freeImageData is the data-release callback that frees buffer2
// (see the release-callback pattern in the reply below)
CGDataProviderRef provider;
provider = CGDataProviderCreateWithData(NULL, buffer2,
                                        backingWidth * backingHeight * 4,
                                        freeImageData);
// set up for CGImage creation
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * backingWidth;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
// Use this to retain alpha
//CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef imageRef = CGImageCreate(backingWidth, backingHeight,
                                    bitsPerComponent, bitsPerPixel,
                                    bytesPerRow, colorSpaceRef,
                                    bitmapInfo, provider,
                                    NULL, NO,
                                    renderingIntent);
// this contains our final image.
UIImage *newUIImage = [UIImage imageWithCGImage:imageRef];
Taken from
Thanks for the code. I find it useful, but I had to add some extra code in order to properly release the memory allocated for the buffer, buffer2, imageRef, colorSpaceRef and provider pointers. Note that buffer2 is released by the provider's release function.
static void myProviderReleaseData (void *info, const void *data, size_t size)
{
    free((void*)data);
}

- (UIImage*)renderToImage
{
    // The image size should be grabbed from your ESRenderer class.
    // That parameter is set in the renderer's resize function:
    // - (BOOL) resizeFromLayer:(CAEAGLLayer *)layer {
    //     glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    GLint backingWidth = renderer.backingWidth;
    GLint backingHeight = renderer.backingHeight;
    GLubyte *buffer = (GLubyte *) malloc(backingWidth * backingHeight * 4);
    GLubyte *buffer2 = (GLubyte *) malloc(backingWidth * backingHeight * 4);
    glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE,
                 (GLvoid *)buffer);
    for (int y = 0; y < backingHeight; y++) {
        for (int x = 0; x < backingWidth * 4; x++) {
            buffer2[y * 4 * backingWidth + x] =
                buffer[(backingHeight - y - 1) * backingWidth * 4 + x];
        }
    }
    free(buffer);
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2,
                                                              backingWidth * backingHeight * 4,
                                                              myProviderReleaseData);
    // set up for CGImage creation
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * backingWidth;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    // Use this to retain alpha
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(backingWidth, backingHeight,
                                        bitsPerComponent, bitsPerPixel,
                                        bytesPerRow, colorSpaceRef,
                                        bitmapInfo, provider,
                                        NULL, NO,
                                        renderingIntent);
    // this contains our final image.
    UIImage *newUIImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    return newUIImage;
}
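One timing caveat (editor's note): glReadPixels reads the currently bound framebuffer, so call this after rendering a frame but before presenting the renderbuffer, unless kEAGLDrawablePropertyRetainedBacking is YES (see the drawable-properties answer above).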

Image processing Glamour filter in iphone

I want to create an app in which I want to do some image processing, so I would like to know if there is any open-source image processing library available. I would also like to create a filter like this Glamour Filter; any help regarding this would be very much appreciated. If someone already has source code for sepia, black and white, rotate, and scale effects, please send it. Thanks
Here is the code for a sepia image:
-(UIImage*)makeSepiaScale:(UIImage*)image
{
    CGImageRef cgImage = [image CGImage];
    CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
    CFDataRef bitmapData = CGDataProviderCopyData(provider);
    UInt8* data = (UInt8*)CFDataGetBytePtr(bitmapData);
    int width = image.size.width;
    int height = image.size.height;
    NSInteger myDataLength = width * height * 4;
    for (int i = 0; i < myDataLength; i += 4)
    {
        UInt8 r_pixel = data[i];
        UInt8 g_pixel = data[i+1];
        UInt8 b_pixel = data[i+2];
        int outputRed = (r_pixel * .393) + (g_pixel * .769) + (b_pixel * .189);
        int outputGreen = (r_pixel * .349) + (g_pixel * .686) + (b_pixel * .168);
        int outputBlue = (r_pixel * .272) + (g_pixel * .534) + (b_pixel * .131);
        if (outputRed > 255) outputRed = 255;
        if (outputGreen > 255) outputGreen = 255;
        if (outputBlue > 255) outputBlue = 255;
        data[i] = outputRed;
        data[i+1] = outputGreen;
        data[i+2] = outputBlue;
    }
    CGDataProviderRef provider2 = CGDataProviderCreateWithData(NULL, data, myDataLength, NULL);
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider2, NULL, NO, renderingIntent);
    CGColorSpaceRelease(colorSpaceRef); // YOU CAN RELEASE THIS NOW
    CGDataProviderRelease(provider2); // YOU CAN RELEASE THIS NOW
    CFRelease(bitmapData); // note: provider2 wraps these bytes without copying them;
                           // strictly, a release callback (as in the first answer above) is safer
    UIImage *sepiaImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef); // YOU CAN RELEASE THIS NOW
    return sepiaImage;
}
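The weights in the loop are the classic sepia-tone matrix: each output channel is a weighted sum of the input red, green and blue values, clamped to 255.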
Here is the code for the black & white effect:
- (UIImage*) createGrayCopy:(UIImage*) source {
    int width = source.size.width;
    int height = source.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(nil, width,
                                                 height,
                                                 8, // bits per component
                                                 0,
                                                 colorSpace,
                                                 kCGImageAlphaNone);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        return nil;
    }
    CGContextDrawImage(context,
                       CGRectMake(0, 0, width, height), source.CGImage);
    // keep the CGImage in a variable so it can be released afterwards
    CGImageRef grayImageRef = CGBitmapContextCreateImage(context);
    UIImage *grayImage = [UIImage imageWithCGImage:grayImageRef];
    CGImageRelease(grayImageRef);
    CGContextRelease(context);
    return grayImage;
}
Search for OpenCV.