Received Memory Warning in drawRect: Method - iPhone

I am developing an app that records the main screen, driving redraws with setNeedsDisplay.
The problem is that it uses a lot of memory, even when I am not recording the screen.
I want to reduce the memory usage of the code below.
Any solution to this?
Thanks in advance.
- (void)drawRect:(CGRect)rect
{
    NSDate *start = [NSDate date];
    CGContextRef context = [self createBitmapContextOfSize:self.frame.size];
    //NSLog(@"context value %@", context);
    //not sure why this is necessary...image renders upside-down and mirrored
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, self.frame.size.height);
    CGContextConcatCTM(context, flipVertical);
    [self.layer renderInContext:context];
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *background = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    self.currentScreen = background;
    if (_recording) {
        float millisElapsed = [[NSDate date] timeIntervalSinceDate:startedAt] * 1000.0;
        [self writeVideoFrameAtTime:CMTimeMake((int)millisElapsed, 1000)];
    }
    float processingSeconds = [[NSDate date] timeIntervalSinceDate:start];
    delayRemaining = (1.0 / self.frameRate) - processingSeconds;
    //redraw at the specified framerate
    [self performSelector:@selector(setNeedsDisplay) withObject:nil afterDelay:delayRemaining > 0.0 ? delayRemaining : 0.01];
}
- (CGContextRef)createBitmapContextOfSize:(CGSize)size
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    bitmapBytesPerRow = (size.width * 4);
    bitmapByteCount = (bitmapBytesPerRow * size.height);
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (bitmapData != NULL) {
        free(bitmapData);
    }
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL) {
        CGColorSpaceRelease(colorSpace);
        fprintf(stderr, "Memory not allocated!");
        return NULL;
    }
    context = CGBitmapContextCreate(bitmapData,
                                    size.width,
                                    size.height,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaNoneSkipFirst);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
        return NULL;
    }
    // Only configure the context after checking it for NULL.
    CGContextSetAllowsAntialiasing(context, NO);
    return context;
}

You create a context every frame but never release it. Try adding this at the end of the method:
CGContextRelease(context);
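For reference, a minimal sketch of how the end of the drawRect: above might look with that fix in place (abbreviated; the flip and recording logic stay as in the question):

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = [self createBitmapContextOfSize:self.frame.size];
    // ... flip the CTM, renderInContext:, grab the frame as in the question ...
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    self.currentScreen = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    // Balance the CGBitmapContextCreate in createBitmapContextOfSize:.
    // Without this, one context leaks per frame.
    CGContextRelease(context);
}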

Related

iOS - Where should I be releasing my CGImageRef?

I have a method which returns a rotated image:
- (CGImageRef)rotateImage:(CGImageRef)original degrees:(float)degrees {
    if (degrees == 0.0f) {
        return original;
    } else {
        double radians = degrees * M_PI / 180;
#if TARGET_OS_EMBEDDED || TARGET_IPHONE_SIMULATOR
        radians = -1 * radians;
#endif
        size_t _width = CGImageGetWidth(original);
        size_t _height = CGImageGetHeight(original);
        CGRect imgRect = CGRectMake(0, 0, _width, _height);
        CGAffineTransform _transform = CGAffineTransformMakeRotation(radians);
        CGRect rotatedRect = CGRectApplyAffineTransform(imgRect, _transform);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(NULL,
                                                     rotatedRect.size.width,
                                                     rotatedRect.size.height,
                                                     CGImageGetBitsPerComponent(original),
                                                     0,
                                                     colorSpace,
                                                     kCGImageAlphaPremultipliedFirst);
        CGContextSetAllowsAntialiasing(context, FALSE);
        CGContextSetInterpolationQuality(context, kCGInterpolationNone);
        CGColorSpaceRelease(colorSpace);
        CGContextTranslateCTM(context,
                              +(rotatedRect.size.width/2),
                              +(rotatedRect.size.height/2));
        CGContextRotateCTM(context, radians);
        CGContextDrawImage(context, CGRectMake(-imgRect.size.width/2,
                                               -imgRect.size.height/2,
                                               imgRect.size.width,
                                               imgRect.size.height),
                           original);
        CGImageRef rotatedImage = CGBitmapContextCreateImage(context);
        CFRelease(context);
        return rotatedImage;
    }
}
Yet, Instruments is telling me that the image returned from rotateImage: is not being released. I'm running this method a lot, so the memory builds up. Should I be releasing it in the parent method which calls rotateImage:? Or should I release it before rotateImage: passes it back?
Thanks!
I would suggest releasing the CGImageRef in the parent method that calls rotateImage:.
Moreover, in that case you should follow the Core Foundation naming convention and rename rotateImage: to createRotateImage: for clarity.
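To illustrate, a sketch of a call site under that convention (not the poster's actual code; imageView is a placeholder). Note that the degrees == 0.0f branch would then need to return CGImageRetain(original) so that ownership is consistent on every path:

// Caller owns the CGImageRef because the method name starts with "create".
CGImageRef rotated = [self createRotateImage:original degrees:90.0f];
if (rotated) {
    imageView.image = [UIImage imageWithCGImage:rotated];
    CGImageRelease(rotated); // balance the create
}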

Merging two UIImages faster than CGContextDrawImage

I'm merging two UIImages into one context. It works, but it performs pretty slowly and I need a faster solution. With my current approach, the mergeImage:withImage: call takes about 400 ms on an iPad 1G.
Here's what I do:
- (CGContextRef)mergeImage:(UIImage *)img1 withImage:(UIImage *)img2
{
    CGSize size = [ImageToolbox getScreenSize];
    CGContextRef context = [ImageToolbox createARGBBitmapContextFromImageSize:CGSizeMake(size.width, size.height)];
    CGContextSetRenderingIntent(context, kCGRenderingIntentSaturation);
    CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), img1.CGImage);
    CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), img2.CGImage);
    return context;
}
And here are the class methods from the ImageToolbox class:
static CGRect screenRect;

+ (CGContextRef)createARGBBitmapContextFromImageSize:(CGSize)imageSize
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;
    size_t pixelsWide = imageSize.width;
    size_t pixelsHigh = imageSize.height;

    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }
    CGColorSpaceRelease(colorSpace);
    return context;
}

+ (CGSize)getScreenSize
{
    if (screenRect.size.width == 0 && screenRect.size.height == 0)
    {
        screenRect = [[UIScreen mainScreen] bounds];
    }
    return CGSizeMake(screenRect.size.height, screenRect.size.width - 20);
}
Any suggestions to increase the performance?
I would definitely recommend using Instruments to profile which call is taking the most time so you can really break it down. Also, I have written a couple of methods which I think should do the same thing with a lot less code, though you may need everything written out the way you have it to keep things customizable. Here they are anyway:
- (CGContextRef)mergeImage:(UIImage *)img1 withImage:(UIImage *)img2
{
    CGSize size = [ImageToolbox getScreenSize];
    CGRect rect = CGRectMake(0, 0, size.width, size.height);
    UIGraphicsBeginImageContextWithOptions(size, YES, 1.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRenderingIntent(context, kCGRenderingIntentSaturation);
    [img1 drawInRect:rect];
    [img2 drawInRect:rect];
    UIGraphicsEndImageContext();
    // Caution: the context belongs to the image-context stack and is no
    // longer valid after UIGraphicsEndImageContext(); use the variant
    // below if you need the merged result.
    return context;
}
Or if you wanted the combined image right away:
- (UIImage *)mergeImage:(UIImage *)img1 withImage:(UIImage *)img2
{
    CGSize size = [ImageToolbox getScreenSize];
    CGRect rect = CGRectMake(0, 0, size.width, size.height);
    UIGraphicsBeginImageContextWithOptions(size, YES, 1.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRenderingIntent(context, kCGRenderingIntentSaturation);
    [img1 drawInRect:rect];
    [img2 drawInRect:rect];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
I have no idea whether they would be faster or not, but I really wouldn't know how to speed up what you have without an Instruments profile breakdown.
In any case, I hope this helps.
I did not manage to find a faster way of merging the images, so I reduced the image sizes to make the operation faster, as sketched below.
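For anyone curious what that size reduction might look like, a minimal sketch (the helper name and the scale factor are illustrative, not from the original code); merging two half-size images touches a quarter of the pixels, which is where the speedup comes from:

- (UIImage *)downscaledImage:(UIImage *)image byFactor:(CGFloat)factor
{
    CGSize target = CGSizeMake(image.size.width * factor, image.size.height * factor);
    UIGraphicsBeginImageContextWithOptions(target, YES, 1.0);
    [image drawInRect:CGRectMake(0, 0, target.width, target.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaled; // fewer pixels means less work for each CGContextDrawImage
}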

iPhone take augmented reality screenshot with AVCaptureVideoPreviewLayer

I have a small augmented reality app that I'm developing, and I would like to know how to save a screenshot of what the user sees with a tap of a button or on a timer.
The app works by overlaying a live camera feed on top of another UIView. I can save screenshots by pressing the power and home buttons together; these are saved to the camera roll. However, Apple will not render the AVCaptureVideoPreviewLayer, even if I ask the window to save itself. It leaves a transparent piece of canvas where the preview layer is.
What's the proper way for an augmented reality app to save screenshots, including transparency and subviews?
//displaying a live preview on one of the views
- (void)startCapture
{
    captureSession = [[AVCaptureSession alloc] init];
    AVCaptureDevice *videoCaptureDevice = nil; // holds the chosen video device
    // AVCaptureDevice *videoCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in videoDevices) {
        if (useFrontCamera) {
            if (device.position == AVCaptureDevicePositionFront) {
                //FRONT-FACING CAMERA EXISTS
                videoCaptureDevice = device;
                break;
            }
        } else {
            if (device.position == AVCaptureDevicePositionBack) {
                //REAR-FACING CAMERA EXISTS
                videoCaptureDevice = device;
                break;
            }
        }
    }
    NSError *error = nil;
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoCaptureDevice error:&error];
    if (videoInput) {
        [captureSession addInput:videoInput];
    }
    else {
        // Handle the failure.
    }
    // Create and configure the output before checking whether the session can add it.
    captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    [captureOutput setAlwaysDiscardsLateVideoFrames:YES];
    [captureOutput setSampleBufferDelegate:self queue:queue];
    [captureOutput setVideoSettings:videoSettings];
    dispatch_release(queue);
    if ([captureSession canAddOutput:captureOutput]) {
        [captureSession addOutput:captureOutput];
    } else {
        //handle failure
    }
    previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
    UIView *aView = arOverlayView;
    previewLayer.frame = CGRectMake(0, 0, arOverlayView.frame.size.width, arOverlayView.frame.size.height); // Assume you want the preview layer to fill the view.
    [aView.layer addSublayer:previewLayer];
    [captureSession startRunning];
}
//ask the entire window to draw itself in a graphics context. This call will not render
//the AVCaptureVideoPreviewLayer. It has to be replaced with a UIImageView or GL-based view.
//see the following code for creating a dynamically updating UIImageView
- (void)saveScreenshot
{
    UIGraphicsBeginImageContext(appDelegate.window.bounds.size);
    [appDelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(screenshot, self,
                                   @selector(image:didFinishSavingWithError:contextInfo:), nil);
}

//image saved to camera roll callback
- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error
  contextInfo:(void *)contextInfo
{
    // Was there an error?
    if (error != nil)
    {
        // Show error message...
        NSLog(@"save failed");
    }
    else // No errors
    {
        NSLog(@"save successful");
        // Show message image successfully saved
    }
}
Here's the code for creating the image:
//you need to add your view controller as a delegate to the camera output to be notified of buffered data
- (void)activateCameraFeed
{
    //this is the code responsible for capturing the feed for still image processing
    dispatch_queue_t queue = dispatch_queue_create("com.AugmentedRealityGlamour.ImageCaptureQueue", NULL);
    captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    [captureOutput setAlwaysDiscardsLateVideoFrames:YES];
    [captureOutput setSampleBufferDelegate:self queue:queue];
    [captureOutput setVideoSettings:videoSettings];
    dispatch_release(queue);
    //......configure audio feed, add inputs and outputs
}
//buffer delegate callback
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    if ( ignoreImageStream )
        return;
    [self performImageCaptureFrom:sampleBuffer];
}
Create a UIImage:
- (void)performImageCaptureFrom:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer;
    if ( CMSampleBufferGetNumSamples(sampleBuffer) != 1 )
        return;
    if ( !CMSampleBufferIsValid(sampleBuffer) )
        return;
    if ( !CMSampleBufferDataIsReady(sampleBuffer) )
        return;
    imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if ( CVPixelBufferGetPixelFormatType(imageBuffer) != kCVPixelFormatType_32BGRA )
        return;
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGImageRef newImage = nil;
    if ( cameraDeviceSetting == CameraDeviceSetting640x480 )
    {
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        newImage = CGBitmapContextCreateImage(newContext);
        CGColorSpaceRelease( colorSpace );
        CGContextRelease(newContext);
    }
    else
    {
        uint8_t *tempAddress = malloc( 640 * 4 * 480 );
        memcpy( tempAddress, baseAddress, bytesPerRow * height );
        baseAddress = tempAddress;
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst);
        newImage = CGBitmapContextCreateImage(newContext);
        CGContextRelease(newContext);
        newContext = CGBitmapContextCreate(baseAddress, 640, 480, 8, 640*4, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGContextScaleCTM( newContext, (CGFloat)640/(CGFloat)width, (CGFloat)480/(CGFloat)height );
        CGContextDrawImage(newContext, CGRectMake(0, 0, 640, 480), newImage);
        CGImageRelease(newImage);
        newImage = CGBitmapContextCreateImage(newContext);
        CGColorSpaceRelease( colorSpace );
        CGContextRelease(newContext);
        free( tempAddress );
    }
    if ( newImage != nil )
    {
        //modified for iOS5.0 with ARC
        tempImage = [[UIImage alloc] initWithCGImage:newImage scale:(CGFloat)1.0 orientation:cameraImageOrientation];
        CGImageRelease(newImage);
        //this call creates the illusion of a preview layer, while we are actively switching images created with this method
        [self performSelectorOnMainThread:@selector(newCameraImageNotification:) withObject:tempImage waitUntilDone:YES];
    }
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
Update the interface with a UIView that can actually be rendered in a graphics context:
- (void)newCameraImageNotification:(UIImage *)newImage
{
    if ( newImage == nil )
        return;
    [arOverlayView setImage:newImage];
    //or do more advanced processing of the image
}
If you want a snapshot of what's on screen, this is what I'm doing in one of my camera apps. I haven't touched this code in a long time, so there might be a better 5.0 way now, but it is solid with over 1 million downloads. There is one function for grabbing a UIView-based screen and one for grabbing an OpenGL ES 1 screen:
//
// ScreenCapture.m
// LiveEffectsCam
//
// Created by John Carter on 10/8/10.
//
#import "ScreenCapture.h"
#import <QuartzCore/CABase.h>
#import <QuartzCore/CATransform3D.h>
#import <QuartzCore/CALayer.h>
#import <QuartzCore/CAScrollLayer.h>
#import <OpenGLES/EAGL.h>
#import <OpenGLES/ES1/gl.h>
#import <OpenGLES/ES1/glext.h>
#import <QuartzCore/QuartzCore.h>
#import <OpenGLES/EAGLDrawable.h>
@implementation ScreenCapture

+ (UIImage *)GLViewToImage:(GLView *)glView
{
    UIImage *glImage = [GLView snapshot:glView]; // returns an autoreleased image
    return glImage;
}

+ (UIImage *)GLViewToImage:(GLView *)glView withOverlayImage:(UIImage *)overlayImage
{
    UIImage *glImage = [GLView snapshot:glView]; // returns an autoreleased image

    // Merge Image and Overlay
    //
    CGRect imageRect = CGRectMake((CGFloat)0.0, (CGFloat)0.0, glImage.size.width*glImage.scale, glImage.size.height*glImage.scale);
    CGImageRef overlayCopy = CGImageCreateCopy( overlayImage.CGImage );
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, (int)glImage.size.width*glImage.scale, (int)glImage.size.height*glImage.scale, 8, (int)glImage.size.width*4*glImage.scale, colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context, imageRect, glImage.CGImage);
    CGContextDrawImage(context, imageRect, overlayCopy);
    CGImageRef newImage = CGBitmapContextCreateImage(context);
    UIImage *combinedViewImage = [[[UIImage alloc] initWithCGImage:newImage] autorelease];
    CGImageRelease(newImage);
    CGImageRelease(overlayCopy);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return combinedViewImage;
}
+ (UIImage *)UIViewToImage:(UIView *)view withOverlayImage:(UIImage *)overlayImage
{
    UIImage *viewImage = [ScreenCapture UIViewToImage:view]; // returns an autoreleased image

    // Merge Image and Overlay
    //
    CGRect imageRect = CGRectMake((CGFloat)0.0, (CGFloat)0.0, viewImage.size.width*viewImage.scale, viewImage.size.height*viewImage.scale);
    CGImageRef overlayCopy = CGImageCreateCopy( overlayImage.CGImage );
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, (int)viewImage.size.width*viewImage.scale, (int)viewImage.size.height*viewImage.scale, 8, (int)viewImage.size.width*4*viewImage.scale, colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context, imageRect, viewImage.CGImage);
    CGContextDrawImage(context, imageRect, overlayCopy);
    CGImageRef newImage = CGBitmapContextCreateImage(context);
    UIImage *combinedViewImage = [[[UIImage alloc] initWithCGImage:newImage] autorelease];
    CGImageRelease(newImage);
    CGImageRelease(overlayCopy);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return combinedViewImage;
}
+ (UIImage *)UIViewToImage:(UIView *)view
{
    // Create a graphics context with the target size
    // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
    // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
    //
    // CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    CGSize imageSize = CGSizeMake( (CGFloat)480.0, (CGFloat)640.0 ); // camera image size
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(imageSize);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Start with the view...
    //
    CGContextSaveGState(context);
    CGContextTranslateCTM(context, [view center].x, [view center].y);
    CGContextConcatCTM(context, [view transform]);
    CGContextTranslateCTM(context, -[view bounds].size.width * [[view layer] anchorPoint].x, -[view bounds].size.height * [[view layer] anchorPoint].y);
    [[view layer] renderInContext:context];
    CGContextRestoreGState(context);

    // ...then repeat for every subview from back to front
    //
    for (UIView *subView in [view subviews])
    {
        if ( [subView respondsToSelector:@selector(screen)] )
            if ( [(UIWindow *)subView screen] == [UIScreen mainScreen] )
                continue;
        CGContextSaveGState(context);
        CGContextTranslateCTM(context, [subView center].x, [subView center].y);
        CGContextConcatCTM(context, [subView transform]);
        CGContextTranslateCTM(context, -[subView bounds].size.width * [[subView layer] anchorPoint].x, -[subView bounds].size.height * [[subView layer] anchorPoint].y);
        [[subView layer] renderInContext:context];
        CGContextRestoreGState(context);
    }
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext(); // autoreleased image
    UIGraphicsEndImageContext();
    return image;
}
+ (UIImage *)snapshot:(GLView *)eaglview
{
    NSInteger x = 0;
    NSInteger y = 0;
    NSInteger width = [eaglview backingWidth];
    NSInteger height = [eaglview backingHeight];
    NSInteger dataLength = width * height * 4;
    NSUInteger i;
    for ( i = 0; i < 100; i++ )
    {
        glFlush();
        CFRunLoopRunInMode(kCFRunLoopDefaultMode, (float)1.0/(float)60.0, FALSE);
    }
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    //
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
    // otherwise, use kCGImageAlphaPremultipliedLast
    //
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast, ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    //
    NSInteger widthInPoints;
    NSInteger heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
    {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        //
        CGFloat scale = eaglview.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else
    {
        // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
        //
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }
    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // UIKit coordinate system is upside down to GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    //
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext(); // autoreleased image
    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);
    return image;
}
@end
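A typical call site for the helpers above might look like this (a sketch; someView, glPaintView, and overlay are placeholder variables):

// UIKit hierarchy with an overlay image composited on top
UIImage *uiShot = [ScreenCapture UIViewToImage:someView withOverlayImage:overlay];
// OpenGL ES 1 view with the same overlay
UIImage *glShot = [ScreenCapture GLViewToImage:glPaintView withOverlayImage:overlay];
UIImageWriteToSavedPhotosAlbum(uiShot, nil, nil, nil);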

High Resolution-Retina display screenshot of OpenGL ES [CAEAGLLayer] content

I'm using the GLPaint example for my Paint app, and I need to take a screenshot of the OpenGL ES [CAEAGLLayer] rendered content. I am using this function:
- (UIImage *)snapUIImage
{
    int s = 2;
    const int w = self.frame.size.width;
    const int h = self.frame.size.height;
    const NSInteger myDataLength = w * h * 4 * s * s;
    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *)malloc(myDataLength);
    glReadPixels(0, 0, w*s, h*s, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
    for (int y = 0; y < h*s; y++)
    {
        memcpy( buffer2 + (h*s - 1 - y) * w * 4 * s, buffer + (y * 4 * w * s), w * 4 * s );
    }
    free(buffer); // work with the flipped buffer, so get rid of the original one.
    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * w * s;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaLast;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // make the cgimage
    CGImageRef imageRef = CGImageCreate(w*s, h*s, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef scale:s orientation:UIImageOrientationUp];
    CGImageRelease( imageRef );
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    // note: buffer2 is never freed here; the data provider was created with a
    // NULL release callback, so it leaks unless a callback frees it.
    return myImage;
}

- (void)captureToPhotoAlbum
{
    UIImage *image = [self snapUIImage];
    UIImageWriteToSavedPhotosAlbum(image, self, nil, nil);
}
The above code works and I get an image, but it is not a high-resolution [Retina display] image. Please help.
Thanks in advance.
Working code solution
- (UIImage *)getGLScreenshot
{
    int myWidth = self.frame.size.width*2;
    int myHeight = self.frame.size.height*2;
    int myY = 0;
    int myX = 0;
    int bufferLength = (myWidth*myHeight*4);
    //unsigned char buffer[bufferLength];
    unsigned char *buffer = (unsigned char *)malloc(bufferLength);
    glReadPixels(myX, myY, myWidth, myHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); // keep a reference so it can be released below
    CGImageRef iref = CGImageCreate(myWidth, myHeight, 8, 32, myWidth*4, colorSpace,
                                    kCGBitmapByteOrderDefault, ref, NULL, true, kCGRenderingIntentDefault);
    uint32_t *pixels = (uint32_t *)malloc(bufferLength);
    CGContextRef context = CGBitmapContextCreate(pixels, myWidth, myHeight, 8, myWidth*4, CGImageGetColorSpace(iref),
                                                 kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
    CGContextTranslateCTM(context, 0.0, myHeight);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, CGRectMake(0.0, 0.0, myWidth, myHeight), iref);
    CGImageRef outputRef = CGBitmapContextCreateImage(context);
    UIImage *image = nil;
    if (regardOrientation) {
        UIDeviceOrientation deviceOrientation = [UIDevice currentDevice].orientation;
        if (deviceOrientation == UIDeviceOrientationPortraitUpsideDown) {
            image = [UIImage imageWithCGImage:outputRef scale:1 orientation:UIImageOrientationDown];
        } else if (deviceOrientation == UIDeviceOrientationLandscapeLeft) {
            image = [UIImage imageWithCGImage:outputRef scale:1 orientation:UIImageOrientationLeft];
        } else if (deviceOrientation == UIDeviceOrientationLandscapeRight) {
            image = [UIImage imageWithCGImage:outputRef scale:1 orientation:UIImageOrientationRight];
        } else {
            image = [UIImage imageWithCGImage:outputRef scale:1 orientation:UIImageOrientationUp];
        }
    } else {
        image = [UIImage imageWithCGImage:outputRef scale:1 orientation:UIImageOrientationUp];
    }
    CGImageRelease(iref);
    CGImageRelease(outputRef);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(ref);
    free(buffer);
    free(pixels);
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
    return image;
}
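One caveat: the hard-coded factor of 2 assumes a Retina screen. A more robust variant derives the factor from the view itself (a sketch, assuming self is the CAEAGLLayer-backed paint view):

CGFloat scale = self.contentScaleFactor; // 2.0 on Retina displays, 1.0 otherwise
int myWidth  = (int)(self.frame.size.width  * scale);
int myHeight = (int)(self.frame.size.height * scale);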

Reproduce Springboard's icon shine with WebKit

Does anyone have any ideas on how to reproduce the gloss of iPhone app icons using WebKit and CSS3 and/or a transparent overlay image? Is this even possible?
iPhone OS uses the following images to compose the icon:
AppIconMask.png
AppIconShadow.png
AppIconOverlay.png (optional)
This is how I have implemented the above in my app, using only AppIconOverlay.png. It achieves the effect I was after.
It doesn't have the shadow, but I'm sure that if you really wanted it you could modify the code to suit your needs. As for AppIconMask.png, I couldn't see any need for it, since I use the QuartzCore framework (#import <QuartzCore/QuartzCore.h>) to achieve the desired effect with layer.masksToBounds and layer.cornerRadius.
I hope this works for anyone interested in achieving the Apple Springboard overlay effect. Oh, and thank you to rpetrich for providing those images.
I apologise for the lack of comments in the code. It's a culmination of code from similar implementations scattered all over the internet, so I'd like to thank all of those people for providing the bits and pieces used here as well.
- (UIImage *)getIconOfSize:(CGSize)size icon:(UIImage *)iconImage withOverlay:(UIImage *)overlayImage {
    UIImage *icon = [self scaleImage:iconImage toResolution:size.width];
    CGRect iconBoundingBox = CGRectMake(0, 0, size.width, size.height);
    CGRect overlayBoundingBox = CGRectMake(0, 0, size.width, size.height);
    CGContextRef myBitmapContext = [self createBitmapContextOfSize:size];
    CGContextSetRGBFillColor(myBitmapContext, 1, 1, 1, 1);
    CGContextFillRect(myBitmapContext, iconBoundingBox);
    CGContextDrawImage(myBitmapContext, iconBoundingBox, icon.CGImage);
    CGContextDrawImage(myBitmapContext, overlayBoundingBox, overlayImage.CGImage);
    // Hold on to the CGImage so it can be released; passing the create call
    // directly to imageWithCGImage: would leak it.
    CGImageRef resultRef = CGBitmapContextCreateImage(myBitmapContext);
    UIImage *result = [UIImage imageWithCGImage:resultRef];
    CGImageRelease(resultRef);
    CGContextRelease(myBitmapContext);
    return result;
}
- (CGContextRef)createBitmapContextOfSize:(CGSize)size {
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    bitmapBytesPerRow = (size.width * 4);
    bitmapByteCount = (bitmapBytesPerRow * size.height);
    colorSpace = CGColorSpaceCreateDeviceRGB();
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL) {
        CGColorSpaceRelease(colorSpace);
        fprintf(stderr, "Memory not allocated!");
        return NULL;
    }
    context = CGBitmapContextCreate(bitmapData,
                                    size.width,
                                    size.height,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
        return NULL;
    }
    // Only configure the context after checking it for NULL.
    CGContextSetAllowsAntialiasing(context, NO);
    return context;
}
- (UIImage *)scaleImage:(UIImage *)image toResolution:(int)resolution {
    CGFloat width = image.size.width;
    CGFloat height = image.size.height;
    CGRect bounds = CGRectMake(0, 0, width, height);

    // If already at or below the target resolution, return the original image; otherwise scale.
    if (width <= resolution && height <= resolution) {
        return image;
    } else {
        CGFloat ratio = width/height;
        if (ratio > 1) {
            bounds.size.width = resolution;
            bounds.size.height = bounds.size.width / ratio;
        } else {
            bounds.size.height = resolution;
            bounds.size.width = bounds.size.height * ratio;
        }
    }
    UIGraphicsBeginImageContext(bounds.size);
    [image drawInRect:CGRectMake(0.0, 0.0, bounds.size.width, bounds.size.height)];
    UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return imageCopy;
}
Usage:
UIImage *overlayImage = [UIImage imageNamed:@"AppIconOverlay.png"];
UIImage *profileImage = [Helper getIconOfSize:CGSizeMake(59, 60) icon:image withOverlay:overlayImage];
[profilePictureImageView setImage:profileImage];
profilePictureImageView.layer.masksToBounds = YES;
profilePictureImageView.layer.cornerRadius = 10.0;
profilePictureImageView.layer.borderColor = [[UIColor grayColor] CGColor];
profilePictureImageView.layer.borderWidth = 1.0;
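If you do want the mask baked into the bitmap rather than relying on layer.cornerRadius, one option is to clip the bitmap context with AppIconMask.png before drawing (a sketch, assuming the mask image is sized to match the icon):

CGContextSaveGState(myBitmapContext);
CGRect iconBox = CGRectMake(0, 0, size.width, size.height);
CGContextClipToMask(myBitmapContext, iconBox, [UIImage imageNamed:@"AppIconMask.png"].CGImage);
CGContextDrawImage(myBitmapContext, iconBox, icon.CGImage);
CGContextRestoreGState(myBitmapContext);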