iOS app crash when receiving memory warning - iPhone

I use two methods to test whether pixels of a UIImage are transparent. They are called many times while I draw a line on the screen. When the methods start being called, the app gets a memory warning and crashes. I've already added @autoreleasepool and CGContextFlush(), but it still doesn't work. How can I solve this problem? Has anyone seen the same issue?
-(BOOL)isTransparency:(CGPoint)point {

    @autoreleasepool {

        UIGraphicsBeginImageContext(self.imageView.superview.bounds.size);
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSaveGState(context);
        CGContextSetBlendMode(context, kCGBlendModeCopy);
        CGContextTranslateCTM(context, [self.imageView center].x, [self.imageView center].y);
        CGContextConcatCTM(context, [self.imageView transform]);
        CGContextTranslateCTM(context,
                              -[self.imageView bounds].size.width * [[self.imageView layer] anchorPoint].x,
                              -[self.imageView bounds].size.height * [[self.imageView layer] anchorPoint].y);

        [[self.imageView image] drawInRect:[self.imageView bounds]];

        CGContextRestoreGState(context);

        CGImageRef image = CGBitmapContextCreateImage(context);

        CGContextRelease(context);

        return [self isTransparentPixel:point image:image];
    }
}
-(BOOL)isTransparentPixel:(CGPoint)point image:(CGImageRef)cgim {

    unsigned char pixel[1] = {0};

    CGContextRef context = CGBitmapContextCreate(pixel,
                                                 1, 1, 8, 1, NULL,
                                                 kCGImageAlphaOnly);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextDrawImage(context, CGRectMake(-point.x,
                                           -(self.imageView.superview.frame.size.height - point.y),
                                           CGImageGetWidth(cgim),
                                           CGImageGetHeight(cgim)), cgim);

    CGContextRelease(context);

    CGFloat alpha = pixel[0] / 255.0;

    return alpha < 0.01;
}

It seems that the image created by CGBitmapContextCreateImage() is never released.
You should release it with CGImageRelease().
By the way, you should not release a context retrieved from UIGraphicsGetCurrentContext(), because you don't own it. Replace the end of isTransparency: with:
CGImageRef image = CGBitmapContextCreateImage(context);
BOOL isTransparent = [self isTransparentPixel:point image:image];
CGImageRelease(image);
return isTransparent;
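
Putting it together, here is a minimal sketch of the corrected method (the drawing code is unchanged from the question). Note that UIGraphicsBeginImageContext() should also be balanced by UIGraphicsEndImageContext(); without it, every call leaks a full-size image context, which matches the memory-warning symptom:

-(BOOL)isTransparency:(CGPoint)point {
    @autoreleasepool {
        UIGraphicsBeginImageContext(self.imageView.superview.bounds.size);
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSaveGState(context);
        CGContextSetBlendMode(context, kCGBlendModeCopy);
        CGContextTranslateCTM(context, [self.imageView center].x, [self.imageView center].y);
        CGContextConcatCTM(context, [self.imageView transform]);
        CGContextTranslateCTM(context,
                              -[self.imageView bounds].size.width * [[self.imageView layer] anchorPoint].x,
                              -[self.imageView bounds].size.height * [[self.imageView layer] anchorPoint].y);
        [[self.imageView image] drawInRect:[self.imageView bounds]];
        CGContextRestoreGState(context);

        CGImageRef image = CGBitmapContextCreateImage(context);
        // Do NOT call CGContextRelease(context); UIKit owns this context.
        UIGraphicsEndImageContext(); // balances UIGraphicsBeginImageContext()

        BOOL isTransparent = [self isTransparentPixel:point image:image];
        CGImageRelease(image); // release the CGImage we created
        return isTransparent;
    }
}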

Related

Screenshot shows black image on device - cocos2d-iPhone

The code below generates a screenshot in my game. It works fine in the Simulator, but on a device it produces a black image.
Here is my code to generate the screenshot:
CCDirector *director = (CCDirector *) [CCDirector sharedDirector];
screenshootimage = [director screenshotUIImage];
appDelegate.screenshootimage = [Utility getImageFromView];
imgsprite = [CCSprite spriteWithCGImage:screenshootimage.CGImage key:@"s"];
- (UIImage*) screenshotUIImage
{
    CGSize displaySize = [self displaySizeInPixels];
    CGSize winSize = [self winSizeInPixels];

    //Create buffer for pixels
    GLuint bufferLength = displaySize.width * displaySize.height * 4;
    GLubyte* buffer = (GLubyte*)malloc(bufferLength);

    //Read pixels from OpenGL
    glReadPixels(0, 0, displaySize.width, displaySize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    //Make data provider with data
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);

    //Configure image
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * displaySize.width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef iref = CGImageCreate(displaySize.width, displaySize.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    uint32_t* pixels = (uint32_t*)malloc(bufferLength);
    CGContextRef context = CGBitmapContextCreate(pixels, winSize.width, winSize.height, 8, winSize.width * 4, CGImageGetColorSpace(iref), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

    CGContextTranslateCTM(context, 0, displaySize.height);
    CGContextScaleCTM(context, 1.0f, -1.0f);

    switch (deviceOrientation_)
    {
        case CCDeviceOrientationPortrait:
            break;
        case CCDeviceOrientationPortraitUpsideDown:
            CGContextRotateCTM(context, CC_DEGREES_TO_RADIANS(180));
            CGContextTranslateCTM(context, -displaySize.width, -displaySize.height);
            break;
        case CCDeviceOrientationLandscapeLeft:
            CGContextRotateCTM(context, CC_DEGREES_TO_RADIANS(-90));
            CGContextTranslateCTM(context, -displaySize.height, 0);
            break;
        case CCDeviceOrientationLandscapeRight:
            CGContextRotateCTM(context, CC_DEGREES_TO_RADIANS(90));
            CGContextTranslateCTM(context, displaySize.height - displaySize.width, -displaySize.height);
            break;
    }

    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, displaySize.width, displaySize.height), iref);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *outputImage = [[[UIImage alloc] initWithCGImage:imageRef] autorelease];

    //Dealloc
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGImageRelease(iref);
    CGColorSpaceRelease(colorSpaceRef);
    CGContextRelease(context);
    free(buffer);
    free(pixels);

    return outputImage;
}
+ (UIImage*)getImageFromView
{
    // Create a graphics context with the target size
    // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
    // On iOS prior to 4, fall back to UIGraphicsBeginImageContext
    //CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    CGSize newImageSize = [[UIScreen mainScreen] bounds].size;
    CGSize imageSize = CGSizeMake(newImageSize.height, newImageSize.width);
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 1.0);
    else
        UIGraphicsBeginImageContext(imageSize);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Iterate over every window from back to front
    for (UIWindow *window in [[UIApplication sharedApplication] windows])
    {
        if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
        {
            // -renderInContext: renders in the coordinate space of the layer,
            // so we must first apply the layer's geometry to the graphics context
            CGContextSaveGState(context);
            // Center the context around the window's anchor point
            CGContextTranslateCTM(context, [window center].y, [window center].x);
            CGContextRotateCTM(context, degreesToRadian(-90));
            // Apply the window's transform about the anchor point
            CGContextConcatCTM(context, [window transform]);
            // Offset by the portion of the bounds left of and above the anchor point
            CGContextTranslateCTM(context,
                                  -imageSize.height * [[window layer] anchorPoint].y,
                                  -imageSize.width * [[window layer] anchorPoint].x);
            // Render the layer hierarchy to the current context
            [[window layer] renderInContext:context];
            // Restore the context
            CGContextRestoreGState(context);
        }
    }

    // Retrieve the screenshot image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Any help?
NOTE: It also works fine on an iPad device; the problem only occurs on iPhone devices.

UIBezierPath: Incorrect retrieval of view screenshot

I'm taking the screenshot in the following way:
- (UIImage*)screenshot
{
    UIWindow *keyWindow = [[UIApplication sharedApplication] keyWindow];
    CGRect rect = [keyWindow bounds];
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [keyWindow.layer renderInContext:context];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
On the UIView I'm drawing paths using UIBezierPath.
The screenshots I'm getting are incorrect: the upper part is cut off and blank. Any ideas why? On the UIView itself, all drawing is displayed correctly.
UPDATE: This happens while I'm drawing a long path with UIBezierPath; once I release my brush and take the screenshot, it comes out correctly.
I saw the other thread you posted.
According to the Apple Technical Q&A, you need to adjust the geometry of the context first.
So, before rendering the window's layer, do the following:
// -renderInContext: renders in the coordinate space of the layer,
// so we must first apply the layer's geometry to the graphics context
CGContextSaveGState(context);
// Center the context around the window's anchor point
CGContextTranslateCTM(context, [window center].x, [window center].y);
// Apply the window's transform about the anchor point
CGContextConcatCTM(context, [window transform]);
// Offset by the portion of the bounds left of and above the anchor point
CGContextTranslateCTM(context,
                      -[window bounds].size.width * [[window layer] anchorPoint].x,
                      -[window bounds].size.height * [[window layer] anchorPoint].y);
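Combined into a complete method, a sketch might look like this (window here stands for the key window being captured, as in the question's code):

- (UIImage*)screenshot
{
    UIWindow *window = [[UIApplication sharedApplication] keyWindow];
    UIGraphicsBeginImageContext([window bounds].size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGContextSaveGState(context);
    // apply the window's geometry before rendering its layer
    CGContextTranslateCTM(context, [window center].x, [window center].y);
    CGContextConcatCTM(context, [window transform]);
    CGContextTranslateCTM(context,
                          -[window bounds].size.width * [[window layer] anchorPoint].x,
                          -[window bounds].size.height * [[window layer] anchorPoint].y);
    [[window layer] renderInContext:context];
    CGContextRestoreGState(context);

    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}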
- (UIImage*)screenshot:(UIView*)vw
{
    CGRect rect = [vw bounds];
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [vw.layer renderInContext:context];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
Use this one:
-(IBAction)takeScreenShot:(id)sender
{
    UIGraphicsBeginImageContext(self.view.bounds.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);
}

iPhone take augmented reality screenshot with AVCaptureVideoPreviewLayer

I have a small augmented reality app that I'm developing and would like to know how to save a screenshot of what the user sees with a tap of a button or a timer.
The app works by overlaying a live camera feed above another UIView. I can save screenshots by pressing the power button + home button; these are saved to the camera roll. However, Apple will not render the AVCaptureVideoPreviewLayer, even if I ask the window to save itself: it leaves a transparent piece of canvas where the preview layer is.
What's the proper way for an augmented reality app to save screenshots, including transparency and subviews?
//displaying a live preview on one of the views
-(void)startCapture
{
    captureSession = [[AVCaptureSession alloc] init];
    AVCaptureDevice *audioCaptureDevice = nil;
    // AVCaptureDevice *audioCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in videoDevices) {
        if (useFrontCamera) {
            if (device.position == AVCaptureDevicePositionFront) {
                //FRONT-FACING CAMERA EXISTS
                audioCaptureDevice = device;
                break;
            }
        } else {
            if (device.position == AVCaptureDevicePositionBack) {
                //REAR-FACING CAMERA EXISTS
                audioCaptureDevice = device;
                break;
            }
        }
    }
    NSError *error = nil;
    AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioCaptureDevice error:&error];
    if (audioInput) {
        [captureSession addInput:audioInput];
    }
    else {
        // Handle the failure.
    }
    if ([captureSession canAddOutput:captureOutput]) {
        captureOutput = [[AVCaptureVideoDataOutput alloc] init];
        [captureOutput setAlwaysDiscardsLateVideoFrames:YES];
        [captureOutput setSampleBufferDelegate:self queue:queue];
        [captureOutput setVideoSettings:videoSettings];
        dispatch_release(queue);
    } else {
        //handle failure
    }
    previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
    UIView *aView = arOverlayView;
    previewLayer.frame = CGRectMake(0, 0, arOverlayView.frame.size.width, arOverlayView.frame.size.height); // Assume you want the preview layer to fill the view.
    [aView.layer addSublayer:previewLayer];
    [captureSession startRunning];
}
//ask the entire window to draw itself in a graphics context. This call will not render
//the AVCaptureVideoPreviewLayer. It has to be replaced with a UIImageView or GL-based view.
//see the following code for creating a dynamically updating UIImageView
-(void)saveScreenshot
{
    UIGraphicsBeginImageContext(appDelegate.window.bounds.size);
    [appDelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(screenshot, self,
                                   @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
//image saved to camera roll callback
- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error
  contextInfo:(void *)contextInfo
{
    // Was there an error?
    if (error != NULL)
    {
        // Show error message...
        NSLog(@"save failed");
    }
    else // No errors
    {
        NSLog(@"save successful");
        // Show message image successfully saved
    }
}
Here's the code for creating the image:
//you need to add your view controller as a delegate to the camera output to be notified of buffered data
-(void)activateCameraFeed
{
    //this is the code responsible for capturing the feed for still image processing
    dispatch_queue_t queue = dispatch_queue_create("com.AugmentedRealityGlamour.ImageCaptureQueue", NULL);
    captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    [captureOutput setAlwaysDiscardsLateVideoFrames:YES];
    [captureOutput setSampleBufferDelegate:self queue:queue];
    [captureOutput setVideoSettings:videoSettings];
    dispatch_release(queue);
    //......configure audio feed, add inputs and outputs
}
//buffer delegate callback
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    if (ignoreImageStream)
        return;
    [self performImageCaptureFrom:sampleBuffer];
}
Create a UIImage:
- (void)performImageCaptureFrom:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer;
    if (CMSampleBufferGetNumSamples(sampleBuffer) != 1)
        return;
    if (!CMSampleBufferIsValid(sampleBuffer))
        return;
    if (!CMSampleBufferDataIsReady(sampleBuffer))
        return;
    imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (CVPixelBufferGetPixelFormatType(imageBuffer) != kCVPixelFormatType_32BGRA)
        return;
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGImageRef newImage = nil;
    if (cameraDeviceSetting == CameraDeviceSetting640x480)
    {
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        newImage = CGBitmapContextCreateImage(newContext);
        CGColorSpaceRelease(colorSpace);
        CGContextRelease(newContext);
    }
    else
    {
        uint8_t *tempAddress = malloc(640 * 4 * 480);
        memcpy(tempAddress, baseAddress, bytesPerRow * height);
        baseAddress = tempAddress;
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst);
        newImage = CGBitmapContextCreateImage(newContext);
        CGContextRelease(newContext);
        newContext = CGBitmapContextCreate(baseAddress, 640, 480, 8, 640 * 4, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGContextScaleCTM(newContext, (CGFloat)640 / (CGFloat)width, (CGFloat)480 / (CGFloat)height);
        CGContextDrawImage(newContext, CGRectMake(0, 0, 640, 480), newImage);
        CGImageRelease(newImage);
        newImage = CGBitmapContextCreateImage(newContext);
        CGColorSpaceRelease(colorSpace);
        CGContextRelease(newContext);
        free(tempAddress);
    }
    if (newImage != nil)
    {
        //modified for iOS 5.0 with ARC
        tempImage = [[UIImage alloc] initWithCGImage:newImage scale:(CGFloat)1.0 orientation:cameraImageOrientation];
        CGImageRelease(newImage);
        //this call creates the illusion of a preview layer, while we are actively switching images created with this method
        [self performSelectorOnMainThread:@selector(newCameraImageNotification:) withObject:tempImage waitUntilDone:YES];
    }
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
Update the interface with a UIView that can actually be rendered in a graphics context:
- (void)newCameraImageNotification:(UIImage*)newImage
{
    if (newImage == nil)
        return;
    [arOverlayView setImage:newImage];
    //or do more advanced processing of the image
}
If you want a snapshot of what's on screen, this is what I do in one of my camera apps. I haven't touched this code in a long time, so there might be a better iOS 5.0 way by now, but it has been solid across over a million downloads. There is one function for grabbing a UIView-based screen and one for grabbing an OpenGL ES 1 screen:
//
// ScreenCapture.m
// LiveEffectsCam
//
// Created by John Carter on 10/8/10.
//
#import "ScreenCapture.h"
#import <QuartzCore/CABase.h>
#import <QuartzCore/CATransform3D.h>
#import <QuartzCore/CALayer.h>
#import <QuartzCore/CAScrollLayer.h>
#import <OpenGLES/EAGL.h>
#import <OpenGLES/ES1/gl.h>
#import <OpenGLES/ES1/glext.h>
#import <QuartzCore/QuartzCore.h>
#import <OpenGLES/EAGLDrawable.h>
@implementation ScreenCapture

+ (UIImage *) GLViewToImage:(GLView *)glView
{
    UIImage *glImage = [GLView snapshot:glView]; // returns an autoreleased image
    return glImage;
}

+ (UIImage *) GLViewToImage:(GLView *)glView withOverlayImage:(UIImage *)overlayImage
{
    UIImage *glImage = [GLView snapshot:glView]; // returns an autoreleased image

    // Merge Image and Overlay
    //
    CGRect imageRect = CGRectMake((CGFloat)0.0, (CGFloat)0.0, glImage.size.width * glImage.scale, glImage.size.height * glImage.scale);
    CGImageRef overlayCopy = CGImageCreateCopy(overlayImage.CGImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, (int)glImage.size.width * glImage.scale, (int)glImage.size.height * glImage.scale, 8, (int)glImage.size.width * 4 * glImage.scale, colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context, imageRect, glImage.CGImage);
    CGContextDrawImage(context, imageRect, overlayCopy);
    CGImageRef newImage = CGBitmapContextCreateImage(context);
    UIImage *combinedViewImage = [[[UIImage alloc] initWithCGImage:newImage] autorelease];
    CGImageRelease(newImage);
    CGImageRelease(overlayCopy);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return combinedViewImage;
}
+ (UIImage *) UIViewToImage:(UIView *)view withOverlayImage:(UIImage *)overlayImage
{
    UIImage *viewImage = [ScreenCapture UIViewToImage:view]; // returns an autoreleased image

    // Merge Image and Overlay
    //
    CGRect imageRect = CGRectMake((CGFloat)0.0, (CGFloat)0.0, viewImage.size.width * viewImage.scale, viewImage.size.height * viewImage.scale);
    CGImageRef overlayCopy = CGImageCreateCopy(overlayImage.CGImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, (int)viewImage.size.width * viewImage.scale, (int)viewImage.size.height * viewImage.scale, 8, (int)viewImage.size.width * 4 * viewImage.scale, colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context, imageRect, viewImage.CGImage);
    CGContextDrawImage(context, imageRect, overlayCopy);
    CGImageRef newImage = CGBitmapContextCreateImage(context);
    UIImage *combinedViewImage = [[[UIImage alloc] initWithCGImage:newImage] autorelease];
    CGImageRelease(newImage);
    CGImageRelease(overlayCopy);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return combinedViewImage;
}
+ (UIImage *) UIViewToImage:(UIView *)view
{
    // Create a graphics context with the target size
    // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
    // On iOS prior to 4, fall back to UIGraphicsBeginImageContext
    //
    // CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    CGSize imageSize = CGSizeMake((CGFloat)480.0, (CGFloat)640.0); // camera image size
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(imageSize);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Start with the view...
    //
    CGContextSaveGState(context);
    CGContextTranslateCTM(context, [view center].x, [view center].y);
    CGContextConcatCTM(context, [view transform]);
    CGContextTranslateCTM(context, -[view bounds].size.width * [[view layer] anchorPoint].x, -[view bounds].size.height * [[view layer] anchorPoint].y);
    [[view layer] renderInContext:context];
    CGContextRestoreGState(context);

    // ...then repeat for every subview from back to front
    //
    for (UIView *subView in [view subviews])
    {
        if ([subView respondsToSelector:@selector(screen)])
            if ([(UIWindow *)subView screen] == [UIScreen mainScreen])
                continue;
        CGContextSaveGState(context);
        CGContextTranslateCTM(context, [subView center].x, [subView center].y);
        CGContextConcatCTM(context, [subView transform]);
        CGContextTranslateCTM(context, -[subView bounds].size.width * [[subView layer] anchorPoint].x, -[subView bounds].size.height * [[subView layer] anchorPoint].y);
        [[subView layer] renderInContext:context];
        CGContextRestoreGState(context);
    }

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext(); // autoreleased image
    UIGraphicsEndImageContext();
    return image;
}
+ (UIImage *) snapshot:(GLView *)eaglview
{
    NSInteger x = 0;
    NSInteger y = 0;
    NSInteger width = [eaglview backingWidth];
    NSInteger height = [eaglview backingHeight];
    NSInteger dataLength = width * height * 4;

    NSUInteger i;
    for (i = 0; i < 100; i++)
    {
        glFlush();
        CFRunLoopRunInMode(kCFRunLoopDefaultMode, (float)1.0 / (float)60.0, FALSE);
    }

    GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    //
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
    // otherwise, use kCGImageAlphaPremultipliedLast
    //
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast, ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    //
    NSInteger widthInPoints;
    NSInteger heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
    {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        //
        CGFloat scale = eaglview.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else
    {
        // On iOS prior to 4, fall back to UIGraphicsBeginImageContext
        //
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // UIKit coordinate system is upside down to GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    //
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext(); // autoreleased image
    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    return image;
}

@end
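
For reference, a hypothetical call site for these helpers might look like the following (glView and hudView are placeholder names for your own views, not part of the original code):

// grab the OpenGL ES 1 content
UIImage *glShot = [ScreenCapture GLViewToImage:glView];

// render a UIView hierarchy, then composite an overlay image on top of it
UIImage *combined = [ScreenCapture UIViewToImage:hudView withOverlayImage:glShot];
UIImageWriteToSavedPhotosAlbum(combined, nil, nil, nil);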

iPhone: Create a screenshot programmatically and then add it as a subview

I would like to call a method which takes a screenshot and then load this screenshot as a subview. I am using Apple's sample code for taking a screenshot (see below) and was trying to use the resulting image in my code. However, I don't really know how to get the image from the method into my code. This is what I tried; it's obviously wrong, but it's all I could come up with:
// Test Screenshot:
screenShot = [UIImage screenshot]; // THIS DOESN'T WORK
screenShotView = [[UIImageView alloc] initWithImage:screenShot];
[screenShotView setFrame:CGRectMake(0, 0, 320, 480)];
[self.view addSubview:screenShotView];
And this is Apple's sample code for the method:
- (UIImage*)screenshot
{
    NSLog(@"Shot");
    // Create a graphics context with the target size
    // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
    // On iOS prior to 4, fall back to UIGraphicsBeginImageContext
    CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(imageSize);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Iterate over every window from back to front
    for (UIWindow *window in [[UIApplication sharedApplication] windows])
    {
        if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
        {
            // -renderInContext: renders in the coordinate space of the layer,
            // so we must first apply the layer's geometry to the graphics context
            CGContextSaveGState(context);
            // Center the context around the window's anchor point
            CGContextTranslateCTM(context, [window center].x, [window center].y);
            // Apply the window's transform about the anchor point
            CGContextConcatCTM(context, [window transform]);
            // Offset by the portion of the bounds left of and above the anchor point
            CGContextTranslateCTM(context,
                                  -[window bounds].size.width * [[window layer] anchorPoint].x,
                                  -[window bounds].size.height * [[window layer] anchorPoint].y);
            // Render the layer hierarchy to the current context
            [[window layer] renderInContext:context];
            // Restore the context
            CGContextRestoreGState(context);
        }
    }

    // Retrieve the screenshot image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Any help would be very much appreciated! Thanks.
EDIT: This is the viewDidLoad method in which I create a TextView and then try to capture a screenshot of it:
- (void)viewDidLoad {
    // Setup TextView:
    NSString *someText = @"Some Text";
    CGRect frameText = CGRectMake(0, 0, 320, 480);
    aTextView = [[UITextView alloc] initWithFrame:frameText];
    aTextView.text = someText;
    [self.view addSubview:aTextView];
    // Test Screenshot:
    screenShotView = [[UIImageView alloc] initWithImage:[self screenshot]];
    [screenShotView setFrame:CGRectMake(10, 10, 200, 200)];
    [self.view addSubview:screenShotView];
    [self.view bringSubviewToFront:screenShotView];
    [super viewDidLoad];
}
To use the image, just change this line:
screenShot = [UIImage screenshot];
to:
screenShot = [self screenshot];
Edit: Check whether [self screenshot] returns a valid image or nil.
- (void)viewDidLoad {
    // Setup TextView:
    NSString *someText = @"Some Text";
    CGRect frameText = CGRectMake(0, 0, 320, 480);
    aTextView = [[UITextView alloc] initWithFrame:frameText];
    aTextView.text = someText;
    [self.view addSubview:aTextView];
    // Test Screenshot:
    UIImage *screenShotImage = [self screenshot];
    if (screenShotImage) {
        screenShotView = [[UIImageView alloc] initWithImage:screenShotImage];
        [screenShotView setFrame:CGRectMake(10, 10, 200, 200)];
        [self.view addSubview:screenShotView];
        [self.view bringSubviewToFront:screenShotView];
    } else {
        NSLog(@"Something went wrong in the screenshot method, the image is nil");
    }
    [super viewDidLoad];
}

drawRect causing memory issues

Currently I am using a UIView instead of a UIImageView because of memory consumption with large images. The following is the code I am using:
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextClearRect(context, rect);
    [myImage drawInRect:rect];
}

-(void) SetImage:(UIImage*) aImage
{
    if(!aImage)
        return;
    if(myImage)
    {
        [myImage release];
        myImage = nil;
    }
    myImage = [[[UIImage alloc] initWithCGImage:aImage.CGImage] retain];
    [self setNeedsDisplay];
}
This now causes a memory leak of 8 MB (checked with Instruments) every time I update and set the same image again. If I comment out
[self setNeedsDisplay];
there is no leak. Can anyone tell me if I am doing something wrong? Or can anyone help me subclass UIImageView to handle large images?
// Calling functions
-(void) FitToCardStart
{
    UIImage *temp = ScaleImage([iImageBgView GetImage]);
    [iImageBgView SetImage:temp];
    [temp release];
    temp = nil;
}

// ScaleImage
UIImage* ScaleImage(UIImage* image)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    int kMaxResolution = 1800;
    CGImageRef imgRef = image.CGImage;
    CGFloat width = CGImageGetWidth(imgRef);
    CGFloat height = CGImageGetHeight(imgRef);
    CGAffineTransform transform = CGAffineTransformIdentity;
    CGRect bounds = CGRectMake(0, 0, width, height);
    if (width < kMaxResolution || height < kMaxResolution)
    {
        CGFloat ratio = width / height;
        if (ratio > 1)
        {
            bounds.size.width = kMaxResolution;
            bounds.size.height = bounds.size.width / ratio;
        }
        else
        {
            bounds.size.height = kMaxResolution;
            bounds.size.width = bounds.size.height * ratio;
        }
    }
    CGFloat scaleRatio = bounds.size.width / width;
    CGSize imageSize = CGSizeMake(CGImageGetWidth(imgRef), CGImageGetHeight(imgRef));
    UIImageOrientation orient = image.imageOrientation;
    switch (orient)
    {
        case UIImageOrientationUp: //default
            transform = CGAffineTransformIdentity;
            break;
        default:
            [NSException raise:NSInternalInconsistencyException format:@"Invalid image orientation"];
    }
    UIGraphicsBeginImageContext(bounds.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextScaleCTM(context, scaleRatio, -scaleRatio);
    CGContextTranslateCTM(context, 0, -height);
    CGContextConcatCTM(context, transform);
    CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, width, height), imgRef);
    UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
    UIImage *temp = [[[UIImage alloc] initWithCGImage:imageCopy.CGImage] retain];
    CGImageRelease(imgRef);
    CGContextRelease(context);
    [pool release];
    return temp;
}
Thanks,
Sagar
Your problem is this line:
myImage = [[[UIImage alloc]initWithCGImage:aImage.CGImage] retain];
alloc already gives you a retain count of 1; with the additional retain you end up with a retain count of 2, which is too high. Remove the retain and everything will be fine.
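In other words, the line simply becomes:
myImage = [[UIImage alloc] initWithCGImage:aImage.CGImage]; // alloc/init already returns a retained object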
myImage = [[[UIImage alloc] initWithCGImage:aImage.CGImage] retain];
There's a redundant retain in this line: since you're already allocating a new UIImage object with +alloc, you don't need to retain it again.
Edit: the ScaleImage method has the same problem with a redundant retain:
// remove the extra retain here
UIImage *temp = [[[UIImage alloc] initWithCGImage:imageCopy.CGImage] retain];
// should be
UIImage *temp = [[UIImage alloc] initWithCGImage:imageCopy.CGImage];
And a code-style comment: it is better to indicate in your method names what memory-management behavior is required for returned objects. Since the image returned by your method needs to be released by the caller, the method name should contain something like "new", "alloc", "copy", or "create".
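For example, a renamed variant (the name CreateScaledImage is only an illustration, not from the original code) would signal the ownership transfer to callers:

// "Create" in the name tells the caller it owns the returned image
// and is responsible for releasing it
UIImage* CreateScaledImage(UIImage* image); // caller must [result release]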
I suggest not creating a new image, but just retaining the aImage instance:
myImage = [aImage retain];
If you absolutely must make it a new instance, you are doing it in a very roundabout way.
Copying would be a much better alternative:
myImage = [aImage copy];
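
Putting the suggestions together, a minimal sketch of the setter under manual reference counting might be:

-(void) SetImage:(UIImage*) aImage
{
    if(!aImage || aImage == myImage)
        return;
    [myImage release];          // releasing nil is a no-op, so no nil check needed
    myImage = [aImage retain];  // or [aImage copy] if an independent instance is required
    [self setNeedsDisplay];
}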