Try to save image in background - iPhone

I'm trying to call UIImagePNGRepresentation on a thread other than the main thread, but all I get is an EXC_BAD_ACCESS exception.
Here is the code:
UIImage *imageToSave = [UIImage imageWithCGImage:imageRef];
dispatch_queue_t taskQ = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(taskQ, ^{
    NSData *binaryImage = UIImagePNGRepresentation(imageToSave);
    if (![binaryImage writeToFile:destinationFilePath atomically:YES]) {
        SCASSERT(assCodeCannotCreateFile, @"Write of image file failed!");
    }
});
The exception occurs when the block accesses the imageToSave variable.
EDIT:
I want to save an OpenGL scene to a PNG file. Here is the code of the whole function:
- (void)saveImage:(NSString *)destinationFilePath {
    NSUInteger width = (uint)self.frame.size.width;
    NSUInteger height = (uint)self.frame.size.height;
    NSUInteger myDataLength = width * height * 4;

    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *)malloc(myDataLength);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width * 4; x++) {
            buffer2[((int)height - 1 - y) * (int)width * 4 + x] = buffer[y * 4 * (int)width + x];
        }
    }
    JFFree(buffer);

    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // make the cgimage
    CGImageRef imageRef = CGImageCreate(width,
                                        height,
                                        bitsPerComponent,
                                        bitsPerPixel,
                                        bytesPerRow,
                                        colorSpaceRef,
                                        bitmapInfo,
                                        provider,
                                        NULL,
                                        NO,
                                        renderingIntent);

    UIImage *imageToSave = [UIImage imageWithCGImage:imageRef];

    dispatch_queue_t taskQ = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_async(taskQ, ^{
        NSData *binaryImage = UIImagePNGRepresentation(imageToSave);
        if (![binaryImage writeToFile:destinationFilePath atomically:YES]) {
            SCASSERT(assCodeCannotCreateFile, @"Write of image file failed!");
        }
    });

    // release resources
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    CGImageRelease(imageRef);
    JFFree(buffer2);
}

Leave out the CGImageRelease call, because the UIImage doesn't retain the CGImageRef; see imageWithCGImage and memory.
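Another thing worth guarding against here (an assumption about the crash, not something the answer states): CGDataProviderCreateWithData does not copy the bytes, so when buffer2 is freed at the end of the method while the provider holds a NULL release callback, the background block can end up reading freed pixels through imageToSave. A minimal sketch of one way around that is to hand ownership of buffer2 to the provider via a release callback:

// sketch: let the data provider own the pixel buffer and free it once
// Core Graphics is done with it, so the async block can't outlive the data
static void releasePixelData(void *info, const void *data, size_t size)
{
    free((void *)data); // assumes buffer2 was allocated with plain malloc
}

// ... in saveImage:, instead of passing a NULL release callback:
CGDataProviderRef provider =
    CGDataProviderCreateWithData(NULL, buffer2, myDataLength, releasePixelData);
// ... create imageRef and imageToSave and dispatch the block as before,
// but drop the JFFree(buffer2) at the end; the provider owns the buffer now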

Related

Convert UIImage to CMSampleBufferRef

I am doing video recording using AVFoundation. I have to crop the video to 320x280. I am getting the CMSampleBufferRef and converting it to a UIImage using the code below:
CGImageRef _cgImage = [self imageFromSampleBuffer:sampleBuffer];
UIImage *_uiImage = [UIImage imageWithCGImage:_cgImage];
CGImageRelease(_cgImage);
_uiImage = [_uiImage resizedImageWithSize:CGSizeMake(320, 280)];
CMSampleBufferRef croppedBuffer = /* NEED HELP WITH THIS */
[_videoInput appendSampleBuffer:sampleBuffer];
// _videoInput is a AVAssetWriterInput
The imageFromSampleBuffer: method looks like this:
// Create a CGImageRef from sample buffer data
- (CGImageRef)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0); // Lock the image buffer

    // Get information about the image
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
                                                    colorSpace,
                                                    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    /* CVBufferRelease(imageBuffer); */ // do not call this!
    return newImage;
}
Now I have to convert the resized image back to CMSampleBufferRef to write in AVAssetWriterInput.
How do I convert UIImage to CMSampleBufferRef?
Thanks everyone!
While you could create your own Core Media sample buffers from scratch, it's probably easier to use an AVAssetWriterInputPixelBufferAdaptor.
You describe the source pixel buffer format in the inputSettings dictionary and pass that to the adaptor initializer:
NSMutableDictionary* inputSettingsDict = [NSMutableDictionary dictionary];
[inputSettingsDict setObject:[NSNumber numberWithInt:pixelFormat] forKey:(NSString*)kCVPixelBufferPixelFormatTypeKey];
[inputSettingsDict setObject:[NSNumber numberWithUnsignedInteger:(NSUInteger)(image.uncompressedSize/image.rect.size.height)] forKey:(NSString*)kCVPixelBufferBytesPerRowAlignmentKey];
[inputSettingsDict setObject:[NSNumber numberWithDouble:image.rect.size.width] forKey:(NSString*)kCVPixelBufferWidthKey];
[inputSettingsDict setObject:[NSNumber numberWithDouble:image.rect.size.height] forKey:(NSString*)kCVPixelBufferHeightKey];
[inputSettingsDict setObject:[NSNumber numberWithBool:YES] forKey:(NSString*)kCVPixelBufferCGImageCompatibilityKey];
[inputSettingsDict setObject:[NSNumber numberWithBool:YES] forKey:(NSString*)kCVPixelBufferCGBitmapContextCompatibilityKey];
AVAssetWriterInputPixelBufferAdaptor* pixelBufferAdapter = [[AVAssetWriterInputPixelBufferAdaptor alloc] initWithAssetWriterInput:assetWriterInput sourcePixelBufferAttributes:inputSettingsDict];
You can then append CVPixelBuffers to your adaptor:
[pixelBufferAdapter appendPixelBuffer:completePixelBuffer withPresentationTime:pixelBufferTime];
The pixel buffer adaptor accepts CVPixelBuffers, so you have to convert your UIImages to pixel buffers, which is described here: https://stackoverflow.com/a/3742212/100848
Pass the CGImage property of your UIImage to newPixelBufferFromCGImage.
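Tying those pieces together, a hedged sketch (pixelBufferFromCGImage: stands in for whatever helper you adapt from the linked answer; _uiImage, sampleBuffer, and pixelBufferAdapter are the names used above):

CVPixelBufferRef pixelBuffer = [self pixelBufferFromCGImage:_uiImage.CGImage];
// reuse the source frame's timestamp so the output timeline matches the input
CMTime frameTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
if (![pixelBufferAdapter appendPixelBuffer:pixelBuffer withPresentationTime:frameTime]) {
    // the adaptor was not ready for more data, or the writer failed
}
CVPixelBufferRelease(pixelBuffer);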
This is a function that I use in my GPUImage framework to resize an incoming CMSampleBufferRef and place the scaled results within a CVPixelBufferRef that you provide:
void GPUImageCreateResizedSampleBuffer(CVPixelBufferRef cameraFrame, CGSize finalSize, CMSampleBufferRef *sampleBuffer)
{
    // CVPixelBufferCreateWithPlanarBytes for YUV input
    CGSize originalSize = CGSizeMake(CVPixelBufferGetWidth(cameraFrame), CVPixelBufferGetHeight(cameraFrame));

    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    GLubyte *sourceImageBytes = CVPixelBufferGetBaseAddress(cameraFrame);
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, sourceImageBytes,
                                                                  CVPixelBufferGetBytesPerRow(cameraFrame) * originalSize.height, NULL);
    CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef cgImageFromBytes = CGImageCreate((int)originalSize.width, (int)originalSize.height, 8, 32,
                                                CVPixelBufferGetBytesPerRow(cameraFrame), genericRGBColorspace,
                                                kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst,
                                                dataProvider, NULL, NO, kCGRenderingIntentDefault);

    GLubyte *imageData = (GLubyte *)calloc(1, (int)finalSize.width * (int)finalSize.height * 4);
    CGContextRef imageContext = CGBitmapContextCreate(imageData, (int)finalSize.width, (int)finalSize.height, 8,
                                                      (int)finalSize.width * 4, genericRGBColorspace,
                                                      kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, finalSize.width, finalSize.height), cgImageFromBytes);
    CGImageRelease(cgImageFromBytes);
    CGContextRelease(imageContext);
    CGColorSpaceRelease(genericRGBColorspace);
    CGDataProviderRelease(dataProvider);

    CVPixelBufferRef pixel_buffer = NULL;
    CVPixelBufferCreateWithBytes(kCFAllocatorDefault, finalSize.width, finalSize.height, kCVPixelFormatType_32BGRA,
                                 imageData, finalSize.width * 4, stillImageDataReleaseCallback, NULL, NULL, &pixel_buffer);
    CMVideoFormatDescriptionRef videoInfo = NULL;
    CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixel_buffer, &videoInfo);

    CMTime frameTime = CMTimeMake(1, 30);
    CMSampleTimingInfo timing = {frameTime, frameTime, kCMTimeInvalid};

    CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixel_buffer, YES, NULL, NULL, videoInfo, &timing, sampleBuffer);
    CFRelease(videoInfo);
    CVPixelBufferRelease(pixel_buffer);
}
It doesn't take you all the way to creating a CMSampleBufferRef, but as weichsel points out, you only need the CVPixelBufferRef for encoding the video.
However, if what you really want to do here is crop video and record it, going to and from a UIImage is going to be a very slow way to do this. Instead, may I recommend looking into using something like GPUImage to capture video with a GPUImageVideoCamera input (or GPUImageMovie if cropping a previously recorded movie), feeding that into a GPUImageCropFilter, and taking the result to a GPUImageMovieWriter. That way, the video never touches Core Graphics and hardware acceleration is used as much as possible. It will be a lot faster than what you describe above.
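A hedged sketch of that pipeline (GPUImage 1.x API; the crop region is in normalized 0..1 coordinates, and outputURL is an assumed NSURL for the output movie):

// camera -> crop -> movie writer, staying on the GPU the whole way
GPUImageVideoCamera *camera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
GPUImageCropFilter *crop =
    [[GPUImageCropFilter alloc] initWithCropRegion:CGRectMake(0.0, 0.0, 1.0, 280.0 / 480.0)];
GPUImageMovieWriter *writer =
    [[GPUImageMovieWriter alloc] initWithMovieURL:outputURL
                                             size:CGSizeMake(320.0, 280.0)];
[camera addTarget:crop];
[crop addTarget:writer];
camera.audioEncodingTarget = writer;

[camera startCameraCapture];
[writer startRecording];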
- (CVPixelBufferRef)CVPixelBufferRefFromUiImage:(UIImage *)img {
    CGSize size = img.size;
    CGImageRef image = [img CGImage];

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height,
                                          kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options, &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8, 4 * size.width,
                                                 rgbColorSpace, kCGImageAlphaPremultipliedFirst);
    NSParameterAssert(context);

    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
(img is a UIImage)
CVPixelBufferRef pxbuffer = NULL;
CGImageRef image = [img CGImage];

// Initialize the CVPixelBuffer
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image), CGImageGetHeight(image),
                                      kCVPixelFormatType_32ARGB, NULL, &pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pxbuffer),
                                             CGImageGetWidth(image), CGImageGetHeight(image),
                                             CGImageGetBitsPerComponent(image),
                                             CVPixelBufferGetBytesPerRow(pxbuffer),
                                             CGColorSpaceCreateDeviceRGB(), kCGImageAlphaNoneSkipFirst);
Please make sure that bitsPerComponent and bytesPerRow are fetched from the CGImageRef and the CVPixelBufferRef respectively:
CGImageGetBitsPerComponent(image)
CVPixelBufferGetBytesPerRow(pxbuffer)
In many places I have seen people use hard-coded constants; if they are not correct, you get a distorted image.
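The snippet above stops at creating the context; a hedged completion follows (lock the buffer before touching its base address, draw, then unlock; note the pixel buffer should be created with the CG-compatibility options shown in the first function for its base address to be usable this way):

CVPixelBufferLockBaseAddress(pxbuffer, 0);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pxbuffer),
                                             CGImageGetWidth(image), CGImageGetHeight(image),
                                             CGImageGetBitsPerComponent(image),
                                             CVPixelBufferGetBytesPerRow(pxbuffer),
                                             colorSpace, kCGImageAlphaNoneSkipFirst);
CGContextDrawImage(context,
                   CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)),
                   image);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);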

Crash: UIImage style crash with code for iOS 6?

Here is my code for styling an image. On iOS 4.3 and above the code works fine, but on iOS 6 it crashes.
- (UIImage *)grayImage:(UIImage *)image
{
    CGImageRef img = image.CGImage; //imageSelected.CGImage;//self.originalPhoto.CGImage;
    CFDataRef dataref = CGDataProviderCopyData(CGImageGetDataProvider(img));
    int length = CFDataGetLength(dataref);
    UInt8 *data = (UInt8 *)CFDataGetBytePtr(dataref);
    for (int index = 0; index < length; index += 4) {
        Byte grayScale =
            (Byte)(data[index + 3] * .11 +
                   data[index + 2] * .59 +
                   data[index + 1] * .3);
        // set the new image's pixel to the grayscale version
        data[index + 1] = grayScale; // code crashes here with EXC_BAD_ACCESS (SIGABRT)
        data[index + 2] = grayScale;
        data[index + 3] = grayScale;
    }

    // .. take image attributes
    size_t width = CGImageGetWidth(img);
    size_t height = CGImageGetHeight(img);
    size_t bitsPerComponent = CGImageGetBitsPerComponent(img);
    size_t bitsPerPixel = CGImageGetBitsPerPixel(img);
    size_t bytesPerRow = CGImageGetBytesPerRow(img);

    // .. do the pixel manipulation
    CGColorSpaceRef colorspace = CGImageGetColorSpace(img);
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(img);
    CFDataRef newData = CFDataCreate(NULL, data, length);
    CGDataProviderRef provider = CGDataProviderCreateWithCFData(newData);

    // .. get the image out of this raw data
    CGImageRef newImg = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow,
                                      colorspace, bitmapInfo, provider, NULL, true, kCGRenderingIntentDefault);

    // .. prepare the image from raw data
    UIImage *rawImage = [[UIImage alloc] initWithCGImage:newImg];

    // .. done with all, so release the references
    CFRelease(newData);
    CGImageRelease(newImg);
    CGDataProviderRelease(provider);
    CFRelease(dataref);
    return rawImage;
}
What is wrong with this code?
Please use a CFMutableDataRef in place of the CFDataRef, as below:
// instead of this:
CFDataRef m_DataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
UInt8 *m_PixelBuf = (UInt8 *)CFDataGetBytePtr(m_DataRef);

// write this:
CFDataRef copiedData = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
CFMutableDataRef m_DataRef = CFDataCreateMutableCopy(kCFAllocatorDefault, 0, copiedData);
CFRelease(copiedData); // avoid leaking the intermediate immutable copy
UInt8 *m_PixelBuf = (UInt8 *)CFDataGetMutableBytePtr(m_DataRef);

How to save the image on the photo album after perform the 3D Transform?

How to save the 3D transformed image to the photo album? I am using CATransform3DRotate to change the transform, but I am not able to save the image. Here is my image-saving code:
UIImageWriteToSavedPhotosAlbum(newImage, nil, nil, nil);
Is it possible to save the 3D transformed image? Please help me. Thanks in advance.
- (UIImage *)glToUIImage {
    NSInteger myDataLength = 320 * 480 * 4;

    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *)malloc(myDataLength);
    glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
    for (int y = 0; y < 480; y++) {
        for (int x = 0; x < 320 * 4; x++) {
            buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
        }
    }
    free(buffer); // the original buffer is no longer needed once flipped

    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * 320;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // make the cgimage
    CGImageRef imageRef = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow,
                                        colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    return myImage;
}
- (void)captureToPhotoAlbum {
    UIImage *image = [self glToUIImage];
    UIImageWriteToSavedPhotosAlbum(image, self, nil, nil);
}
Also see the full tutorial for saving an OpenGL image to the photo album at this link, and see my blog post on this: captureimagescreenshot-of-view.
2. You can also use ALAssetsLibrary to save the image:
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library writeImageToSavedPhotosAlbum:[image CGImage]
                          orientation:(ALAssetOrientation)[image imageOrientation]
                      completionBlock:^(NSURL *assetURL, NSError *error) {
    if (error) {
        // TODO: error handling
    } else {
        // TODO: success handling
    }
}];
[library release];
UIImageWriteToSavedPhotosAlbum(UIImage *image, id completionTarget, SEL completionSelector, void *contextInfo);
You only need completionTarget, completionSelector and contextInfo if you want to be notified when the image is done saving, otherwise you can pass in nil.
OK, then try it like this:
UIGraphicsBeginImageContext(YOUR_VIEW.frame.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);
It will capture your view, like a screenshot, and save it to the photo album.
YOUR_VIEW = pass your edited image's superview.

NSData -> UIImage -> NSData

I have an NSData object which contains the RGB values for an image. I want to turn that into a UIImage (given the width and the height), then convert that UIImage back into an NSData object identical to the one I started with. Please help; I've been trying for hours now.
Here are some things I've looked at/tried, though I probably didn't do them right, because it didn't work:
CGImageCreate
CGBitmapContextCreateWithData
CGBitmapContextGetData
CGDataProviderCopyData(CGImageGetDataProvider(imageRef))
Thanks!
Here is my current code:
NSMutableData *rgb; // made earlier
double len = (double)[rgb length];
len /= 3;
len += 0.5;
len = (int)len;
int diff = len * 3 - [rgb length];
NSString *str = @"a";
NSData *a = [str dataUsingEncoding:NSUTF8StringEncoding];
for (int i = 0; i < diff; i++) {
    [toEncode appendData:a]; // so if my data is RGBRGBR it will turn into RGBRGBR(97)(97)
}

size_t width = (size_t)len;
size_t height = 1;
CGContextRef ctx;
CFDataRef m_DataRef;
m_DataRef = (__bridge CFDataRef)toEncode;
UInt8 *m_PixelBuf = (UInt8 *)CFDataGetBytePtr(m_DataRef);

vImage_Buffer src;
src.data = m_PixelBuf;
src.width = width;
src.height = height;
src.rowBytes = 3 * width;

vImage_Buffer dst;
dst.width = width;
dst.height = height;
dst.rowBytes = 4 * width;

vImageConvert_RGB888toARGB8888(&src, NULL, 0, &dst, NO, kvImageNoFlags);
// free(m_PixelBuf);
// m_PixelBuf = dst.data;
// NSUInteger lenB = len * (4/3);
/*
UInt8 *m_Pixel = malloc(sizeof(UInt8) * lenB);
int z = 0;
for (int i = 0; i < lenB; i++) {
    if (i % 4 == 0) {
        m_Pixel[i] = 0;
    } else {
        m_Pixel[i] = m_PixelBuf[z];
        z++;
    }
}
*/
// Byte tmpByte;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
/*
ctx = CGBitmapContextCreate(m_PixelBuf,
                            width,
                            height,
                            8,
                            4 * width,
                            colorSpace,
                            kCGImageAlphaPremultipliedFirst);
*/
size_t w = (size_t)len;
ctx = CGBitmapContextCreate(dst.data,
                            w,
                            height,
                            8,
                            4 * width,
                            colorSpace,
                            kCGImageAlphaNoneSkipFirst);
CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
UIImage *rawImage = [UIImage imageWithCGImage:imageRef];
CGContextRelease(ctx);
I get this error:
<Error>: CGBitmapContextCreate: invalid data bytes/row: should be at least 324 for 8 integer bits/component, 3 components, kCGImageAlphaNoneSkipFirst.
To get a UIImage from NSData:
[UIImage imageWithData:]
More on UIImage
To get NSData from a UIImage:
UIImage *img = [UIImage imageNamed:@"some.png"];
NSData *dataObj = UIImageJPEGRepresentation(img, 1.0);
More on UIImageJPEGRepresentation()
The basic procedure would be to create a bitmap context using CGBitmapContextCreateWithData and then creating a CGImageRef from that with CGBitmapContextCreateImage. The parameters for creating the bitmap context depend on how your raw data is laid out in memory. Not all kinds of raw data are supported by Quartz.
The documentation on CGBitmapContextCreateWithData is quite detailed, and this is the most challenging part, getting the CGImageRef from the context and wrapping that in a UIImage (imageWithCGImage:) is trivial afterwards.
If your data is in RGB format, you will want to create a bitmap using CGBitmapContextCreate with CGColorSpaceCreateDeviceRGB and using kCGImageAlphaNone.
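For tightly packed 24-bit RGB data there is a wrinkle: bitmap contexts don't accept 24-bit RGB with kCGImageAlphaNone, so a round trip can instead go through CGImageCreate, which does. A minimal sketch, assuming rgbData, width, and height are already known:

// sketch: NSData of packed RGB bytes -> UIImage
CGDataProviderRef provider =
    CGDataProviderCreateWithCFData((__bridge CFDataRef)rgbData);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef cgImage = CGImageCreate(width, height,
                                   8,          // bits per component
                                   24,         // bits per pixel: RGB, no alpha
                                   3 * width,  // bytes per row, no padding
                                   colorSpace, kCGImageAlphaNone,
                                   provider, NULL, false, kCGRenderingIntentDefault);
UIImage *image = [UIImage imageWithCGImage:cgImage];

// and back again: the image keeps the provider's bytes as-is,
// so copying them out should return data identical to the input
NSData *roundTrip = (__bridge_transfer NSData *)
    CGDataProviderCopyData(CGImageGetDataProvider(cgImage));

CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
CGImageRelease(cgImage);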
The error states that rowBytes needs to be at least 324; dividing that by 4 gives 81, which implies that width is smaller than w and that w = 81. The two values should match.
Try replacing width and w with a small number like 5 to validate this. I would also note that you should be allocating dst.data via malloc prior to calling vImageConvert_RGB888toARGB8888.
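That missing allocation would look something like this (a sketch, matching the buffer setup in the question's code):

vImage_Buffer dst;
dst.width = width;
dst.height = height;
dst.rowBytes = 4 * width;
dst.data = malloc(dst.rowBytes * dst.height); // must exist before the convert writes into it
vImageConvert_RGB888toARGB8888(&src, NULL, 0, &dst, NO, kvImageNoFlags);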
Consider using CGImageCreate() instead of creating a bitmap context:
// this will automatically free() dst.data when destData is deallocated
NSData *destData = [NSData dataWithBytesNoCopy:dst.data length:4*width*height];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)destData);
CGImageRef imageRef = CGImageCreate(width,
height,
8, //bits per component
8*4, //bits per pixel
4*width, //bytesPerRow
colorSpace, //colorspace
kCGImageAlphaNoneSkipFirst,
provider, //CGDataProviderRef
NULL, //decode
false, //should interpolate
kCGRenderingIntentDefault //intent
);

Catch ImageIO: <ERROR>

Is it possible to catch the error "ImageIO: JPEGMaximum supported image dimension is 65500 pixels"?
I have written a try/catch block around the corresponding method; however, the catch block doesn't catch the ImageIO error. The catch block in the code below will never catch it. Any help on this is appreciated.
- (void)captureToPhotoAlbum {
    @try {
        UIImage *image = [self glToUIImage]; // to get an image
        if (image != nil) {
            UIImageWriteToSavedPhotosAlbum(image, self, nil, nil);
            [self eraseView];
            [self showAnAlert:@"Saved Successfully" Message:@""];
        }
        else {
            [self showAnAlert:@"Can't Save!!" Message:@"An error occurred"];
        }
    }
    @catch (NSException *ex) {
        [self showAnAlert:@"Can't Save!!" Message:@"An error occurred"];
    }
}
- (UIImage *)glToUIImage {
    UIImage *myImage = nil;
    @try {
        NSInteger myDataLength = 800 * 960 * 4;

        // allocate array and read pixels into it.
        GLubyte *buffer = (GLubyte *)malloc(myDataLength);
        glReadPixels(0, 0, 800, 960, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

        // gl renders "upside down" so swap top to bottom into new array.
        // there's gotta be a better way, but this works.
        GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
        for (int y = 0; y < 960; y++) {
            for (int x = 0; x < 800 * 4; x++) {
                buffer2[(959 - y) * 800 * 4 + x] = buffer[y * 4 * 800 + x];
            }
        }

        // make data provider with data.
        CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

        // prep the ingredients
        int bitsPerComponent = 8;
        int bitsPerPixel = 32;
        int bytesPerRow = 4 * 800;
        CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
        CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
        CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

        // make the cgimage (note the enormous height passed here)
        CGImageRef imageRef = CGImageCreate(800, 444444444, bitsPerComponent, bitsPerPixel, bytesPerRow,
                                            colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

        // then make the uiimage from that
        myImage = [UIImage imageWithCGImage:imageRef];
    }
    @catch (NSException *ex) {
        @throw;
    }
    @finally {
        return myImage;
    }
}
The method does not throw an exception, so there is nothing to catch. If you want to receive an error message, you should pass a callback as the third argument:
UIImageWriteToSavedPhotosAlbum(image, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
Then you can implement the following method, which will be called once the image has been saved:
- (void)image:(UIImage *)image
    didFinishSavingWithError:(NSError *)error
                 contextInfo:(void *)contextInfo
{
    if (error) { /* the save failed; inspect the NSError */ }
}