How could I get a CGContextRef from a UIImage? - iPhone

CGContextRef context = ??; // how do I get this from UIImage *oldImage?
CGContextSaveGState(context);
CGRect drawRect = CGRectMake(0, 0, 320, 480);
CGContextConcatCTM(context, transform); // transform is a CGAffineTransform I already have
[oldImage drawInRect:drawRect]; // draw the old image with the transform applied
CGContextRestoreGState(context);
UIImage *newImage = ??; // ...and how do I get the transformed image back out?
In short, what I want is [UIImage -> apply some transform -> transformed UIImage].
Any ideas? Big thanks!!

I think what you need is this:
UIGraphicsBeginImageContextWithOptions(newImageSize, YES, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
// drawing commands go here
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Note that UIGraphicsGetImageFromCurrentImageContext() returns an autoreleased UIImage instance.
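Applied to your case, a minimal sketch could look like this (assuming oldImage and transform are the variables from your snippet, and that a fixed 320x480 output is really what you want):
CGSize newImageSize = CGSizeMake(320, 480);
UIGraphicsBeginImageContextWithOptions(newImageSize, YES, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextConcatCTM(context, transform); // apply your transform to everything drawn below
[oldImage drawInRect:CGRectMake(0, 0, newImageSize.width, newImageSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Using -drawInRect: instead of CGContextDrawImage() avoids the upside-down result you would otherwise get from Core Graphics' flipped coordinate system.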

Related

Capture the screen as it appears

I'm working on an application that has an image view showing an image, plus an overlay view with some clear holes in it, so the background image shows through the holes. My problem: I want to capture the whole screen (the image view together with the holes view). I'm using the code below, but it's not working.
- (UIImage *)captureView:(UIView *)yourView {
    CGRect rect = [[UIScreen mainScreen] bounds];
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [yourView.layer renderInContext:context];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
This is how to do it:
UIGraphicsBeginImageContextWithOptions(yourView.bounds.size, yourView.opaque, 0.0);
[yourView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *lastImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Passing 0.0 as the scale makes the context match the screen scale, so the capture stays sharp on retina displays.
Try this to take your screenshot:
- (UIImage *)getScreenShot
{
    CGSize screenSize = [[UIScreen mainScreen] applicationFrame].size;
    //CGSize screenSize = CGSizeMake(1024, 768);
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, screenSize.width, screenSize.height, 8,
                                             4 * (size_t)screenSize.width, colorSpaceRef,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpaceRef); // the context retains the color space
    // Flip the context so the rendered layer comes out upright.
    CGContextTranslateCTM(ctx, 0.0, screenSize.height);
    CGContextScaleCTM(ctx, 1.0, -1.0);
    [self.yourView.layer renderInContext:ctx];
    CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(ctx);
    return image;
}
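If you don't need the manual bitmap setup, a simpler variant (a sketch, assuming the same self.yourView as above) lets UIKit create the context and picks up the screen scale automatically:
- (UIImage *)getScreenShot
{
    CGSize screenSize = [[UIScreen mainScreen] applicationFrame].size;
    UIGraphicsBeginImageContextWithOptions(screenSize, NO, 0); // 0 = use the device's screen scale
    [self.yourView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Here renderInContext: already draws in UIKit orientation, so no flip transform is needed.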

Splitting a UIImage into two different UIImageViews in Core Graphics - iPhone

Problem: I have three UIImageViews. One holds a UIImage, and the other two are named leftImageView and rightImageView. leftImageView should show the left half of the image and rightImageView the right half. I am trying to achieve this with Core Graphics, i.e. by drawing the image into the two image views.
But the right image view isn't showing up. See the code below:
UIImage *image = imgView.image;
CGSize sz = [image size];
CGRect leftRect = CGRectMake(0, 0, imgView.frame.size.width/2, imgView.frame.size.height);
CGRect rightRect = CGRectMake(imgView.frame.size.width/2, 0, imgView.frame.size.width/2, imgView.frame.size.height);
CGImageRef leftReference = CGImageCreateWithImageInRect([image CGImage], leftRect);
CGImageRef rightReference = CGImageCreateWithImageInRect([image CGImage], rightRect);
// Left Image ...
UIGraphicsBeginImageContextWithOptions(CGSizeMake(leftRect.size.width, leftRect.size.height), NO, 0);
CGContextRef con = UIGraphicsGetCurrentContext();
CGContextDrawImage(con, leftRect,flip(leftReference));
imgViewLeft = [[UIImageView alloc] initWithImage:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
[self.view addSubview:imgViewLeft];
// Right Image ...
UIGraphicsBeginImageContextWithOptions(CGSizeMake(rightRect.size.width, rightRect.size.height), NO, 0);
con = UIGraphicsGetCurrentContext();
CGContextDrawImage(con, rightRect,flip(rightReference));
imgViewRight = [[UIImageView alloc] initWithImage:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
[self.view addSubview:imgViewRight];
I think you can use the following code:
UIImage *image = [UIImage imageNamed:@"xxx.png"]; // You can change this line.
BOOL isLeft = YES; // isLeft = YES : left piece, NO : right piece.
UIGraphicsBeginImageContext(CGSizeMake(image.size.width / 2, image.size.height));
CGContextRef context = UIGraphicsGetCurrentContext();
// Flip the context: CGContextDrawImage uses Core Graphics coordinates,
// which are upside-down relative to UIKit.
CGContextTranslateCTM(context, 0, image.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
if (isLeft)
    CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);
else
    CGContextDrawImage(context, CGRectMake(-image.size.width / 2, 0, image.size.width, image.size.height), image.CGImage);
UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return imageCopy; // this snippet is meant to live inside a method returning a UIImage *
I couldn't run this code just now, but I think it's almost correct. Please try it.
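If you wrap the snippet above in a helper (the name halfOfImage:left: here is hypothetical), you can populate both image views like this:
// halfOfImage:left: is a hypothetical method containing the snippet above
imgViewLeft.image = [self halfOfImage:image left:YES];
imgViewRight.image = [self halfOfImage:image left:NO];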
Edit:
In your code:
// Right Image ...
UIGraphicsBeginImageContextWithOptions(CGSizeMake(rightRect.size.width, rightRect.size.height), NO, 0);
con = UIGraphicsGetCurrentContext();
CGContextDrawImage(con, rightRect,flip(rightReference));
imgViewRight = [[UIImageView alloc] initWithImage:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
The third line is wrong:
CGContextDrawImage(con, rightRect,flip(rightReference));
It should be
CGContextDrawImage(con, leftRect, flip(rightReference));
because rightReference has already been cropped to the right half of the image; the new context's origin is (0, 0), so drawing at rightRect's x-offset pushes the image off the canvas.
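Put differently, the destination rect must start at the new context's own origin. A minimal corrected version of the right-image block (a sketch, keeping your flip() helper) might be:
// Right image: the context is only rightRect.size large, so draw at (0, 0).
UIGraphicsBeginImageContextWithOptions(rightRect.size, NO, 0);
con = UIGraphicsGetCurrentContext();
CGContextDrawImage(con, CGRectMake(0, 0, rightRect.size.width, rightRect.size.height), flip(rightReference));
imgViewRight = [[UIImageView alloc] initWithImage:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();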

How to combine UIImage and UILabel into one image and save

I have two UILabels and two images that I need to merge into a single UIImage to save.
I know I could do it with screenshots, but my main image has rounded corners, so a rectangular capture would still show the sharp edges.
I can do this to combine the images :
//CGSize newImageSize = CGSizeMake(cropImage.frame.size.width, cropImage.frame.size.height);
CGSize newImageSize = CGSizeMake(480, 320);
NSLog(@"CGSize %@", NSStringFromCGSize(newImageSize));
UIGraphicsBeginImageContextWithOptions(newImageSize, NO, 0.0); // retina resolution
[self.viewForImg.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
NSData *imgData = UIImageJPEGRepresentation(image, 0.9); // get JPEG representation
UIImage *imagePNG = [UIImage imageWithData:imgData]; // wrap a UIImage around the JPEG data
UIGraphicsEndImageContext();
return imagePNG;
but not sure how to add in the UILabel.
Any reply is much appreciated.
Use [myLabel.layer renderInContext:UIGraphicsGetCurrentContext()]; to draw into the current context.
For example:
UIGraphicsBeginImageContextWithOptions(newImageSize, NO, 0.0); // retina resolution
[self.viewForImg.layer renderInContext:UIGraphicsGetCurrentContext()];
[myLabel.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Based on your comments, if you want to draw the label in a particular frame, do it as follows:
[myLabel drawTextInRect:CGRectMake(0.0f, 0.0f, 100.0f, 50.0f)];
If you want to color the background, try this,
CGRect drawRect = CGRectMake(rect.origin.x, rect.origin.y,rect.size.width, rect.size.height);
CGContextSetRGBFillColor(context, 100.0f/255.0f, 100.0f/255.0f, 100.0f/255.0f, 1.0f);
CGContextFillRect(context, drawRect);
or you can check this question Setting A CGContext Transparent Background.
Another variant renders the view with a one-point transparent inset around it:
UIEdgeInsets insets = UIEdgeInsetsMake(1, 1, 1, 1);
CGSize imageSizeWithBorder = CGSizeMake(view.frame.size.width + insets.left + insets.right, view.frame.size.height + insets.top + insets.bottom);
UIGraphicsBeginImageContextWithOptions(imageSizeWithBorder, UIEdgeInsetsEqualToEdgeInsets(insets, UIEdgeInsetsZero), 0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextClipToRect(context, (CGRect){{insets.left, insets.top}, view.frame.size});
CGContextTranslateCTM(context, -view.frame.origin.x + insets.left, -view.frame.origin.y + insets.top);
[view.layer renderInContext:context];
UIImage *viewCopy = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Try this!
UIGraphicsBeginImageContextWithOptions(newImageSize, NO, scale); // retina resolution
[COGI.layer renderInContext:UIGraphicsGetCurrentContext()];
[COGI.image drawInRect:CGRectMake(0, 0, 248, 290)];
[iconI.image drawInRect:CGRectMake(4, 20, 240, 240)];
[stampI.image drawInRect:CGRectMake(0, -5, 248, 290)];
[headerL drawTextInRect:CGRectMake(14, 35, 220, 40)];
[detailL drawTextInRect:CGRectMake(16, 200, 215, 65)];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *imgData = UIImageJPEGRepresentation(image, 1.0); // get JPEG representation
UIImage *imagePNG = [UIImage imageWithData:imgData]; // wrap a UIImage around the JPEG data
return imagePNG;

UIImageJPEGRepresentation giving 2x images on retina display

I have this code, which creates an image, and then adds some effects to it and sizes it down to make largeThumbnail.
UIImage *originalImage = [UIImage imageWithData:self.originalImage];
thumbnail = createLargeThumbnailFromImage(originalImage);
NSLog(@"thumbnail: %f", thumbnail.size.height);
NSData *thumbnailData = UIImageJPEGRepresentation(thumbnail, 1.0);
Later on:
UIImage *image = [UIImage imageWithData:self.largeThumbnail];
NSLog(@"thumbnail 2: %f", image.size.height);
The NSLog output is:
thumbnail: 289.000000
thumbnail 2: 578.000000
As you can see, when the image is converted back from data, it comes out at twice the size. Any ideas why this might be happening?
Large thumbnail code:
UIImage *createLargeThumbnailFromImage(UIImage *image) {
UIImage *resizedImage;
resizedImage = [image imageScaledToFitSize:LARGE_THUMBNAIL_SIZE];
CGRect largeThumbnailRect = CGRectMake(0, 0, resizedImage.size.width, resizedImage.size.height);
UIGraphicsBeginImageContextWithOptions(resizedImage.size, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
//Image
CGContextTranslateCTM(context, 0, resizedImage.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextDrawImage(context, largeThumbnailRect, resizedImage.CGImage);
//Border
CGContextSaveGState(context);
CGRect innerRect = rectForRectWithInset(largeThumbnailRect, 1.5);
CGMutablePathRef borderPath = createRoundedRectForRect(innerRect, 0);
CGContextSetStrokeColorWithColor(context, [[UIColor whiteColor] CGColor]);
CGContextSetLineWidth(context, 3);
CGContextAddPath(context, borderPath);
CGContextStrokePath(context);
CGPathRelease(borderPath); // createRoundedRectForRect returns a +1 path
CGContextRestoreGState(context);
UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return thumbnail;
}
Try replacing the part where you load the second image:
UIImage *image = [UIImage imageWithData:self.largeThumbnail];
with this one:
UIImage *jpegImage = [UIImage imageWithData:self.largeThumbnail];
UIImage *image = [UIImage imageWithCGImage:jpegImage.CGImage scale:originalImage.scale orientation:jpegImage.imageOrientation];
What happens here is that +imageWithData: always assumes a scale of 1.0, so on a retina device the decoded image reports double the point dimensions. Rewrapping the CGImage with the correct scale fixes that.
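If you know the thumbnail was rendered at the device's screen scale (which is what UIGraphicsBeginImageContextWithOptions with a scale of 0 produces), an equivalent sketch uses the screen scale directly:
UIImage *jpegImage = [UIImage imageWithData:self.largeThumbnail];
UIImage *image = [UIImage imageWithCGImage:jpegImage.CGImage
                                     scale:[UIScreen mainScreen].scale
                               orientation:jpegImage.imageOrientation];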

How to scale up and crop a UIImage?

Here's the code I have but it's crashing ... any ideas?
UIImage *tempImage = [[UIImage alloc] initWithData:imageData];
CGImageRef imgRef = [tempImage CGImage];
[tempImage release];
CGFloat width = CGImageGetWidth(imgRef);
CGFloat height = CGImageGetHeight(imgRef);
CGRect bounds = CGRectMake(0, 0, width, height);
CGSize size = bounds.size;
CGAffineTransform transform = CGAffineTransformMakeScale(4.0, 4.0);
UIGraphicsBeginImageContext(size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextConcatCTM(context, transform);
CGContextDrawImage(context, bounds, imgRef);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
What am I missing here? Basically just trying to scale image up and crop it to be same size as original.
Thanks
The problem is this line:
CGImageRef imgRef = [tempImage CGImage];
Or more precisely, its direct follow-up:
[tempImage release];
You are getting a CF object here, the CGImageRef. Core Foundation objects only have retain/release memory management; there are no autoreleased objects. So when you release the UIImage in the second line, the CGImageRef is deallocated along with it, and the behavior is undefined when you try to draw it later.
I can think of three fixes:
use autorelease to delay the release of the image: [tempImage autorelease];
move the release to the very bottom of your method (see the sketch after this list)
retain and release the image yourself using CGImageRetain and CGImageRelease.
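For example, fix 2 applied to your code might look like this (a sketch, keeping the original non-ARC style and assuming imageData is as before):
UIImage *tempImage = [[UIImage alloc] initWithData:imageData];
CGImageRef imgRef = [tempImage CGImage];
CGFloat width = CGImageGetWidth(imgRef);
CGFloat height = CGImageGetHeight(imgRef);
CGRect bounds = CGRectMake(0, 0, width, height);
UIGraphicsBeginImageContext(bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextConcatCTM(context, CGAffineTransformMakeScale(4.0, 4.0));
CGContextDrawImage(context, bounds, imgRef);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[tempImage release]; // safe to release now: imgRef is no longer needed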
Try this one:
- (CGImageRef)imageCapture
{
    UIGraphicsBeginImageContext(self.view.bounds.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGRect rect = CGRectMake(0, 0, 320, 480);
    // Note: the caller owns the returned CGImageRef and must CGImageRelease() it.
    CGImageRef imageRef = CGImageCreateWithImageInRect([viewImage CGImage], rect);
    return imageRef;
}
Use the lines below whenever you want to capture the screen:
CGImageRef capturedRef = [self imageCapture];
UIImage *captureImg = [[UIImage alloc] initWithCGImage:capturedRef];
CGImageRelease(capturedRef); // release the +1 reference from imageCapture