How to scale up and crop a UIImage? - iphone

Here's the code I have but it's crashing ... any ideas?
UIImage *tempImage = [[UIImage alloc] initWithData:imageData];
CGImageRef imgRef = [tempImage CGImage];
[tempImage release];
CGFloat width = CGImageGetWidth(imgRef);
CGFloat height = CGImageGetHeight(imgRef);
CGRect bounds = CGRectMake(0, 0, width, height);
CGSize size = bounds.size;
CGAffineTransform transform = CGAffineTransformMakeScale(4.0, 4.0);
UIGraphicsBeginImageContext(size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextConcatCTM(context, transform);
CGContextDrawImage(context, bounds, imgRef);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
What am I missing here? Basically I'm just trying to scale the image up and crop it to the same size as the original.
Thanks

The problem is this line:
CGImageRef imgRef = [tempImage CGImage];
Or, more precisely, the line that directly follows it:
[tempImage release];
You are getting a Core Foundation object here, the CGImageRef. Core Foundation objects only have retain/release memory management; there are no autoreleased objects. Hence, when you release the UIImage on the next line, the CGImageRef is deleted along with it, which means the behavior is undefined when you try to draw it further down.
I can think of three fixes:
use autorelease to delay the release of the image: [tempImage autorelease];
move the release to the very bottom of your method (see the sketch after this list)
retain and release the image using CFRetain and CFRelease.
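For example, a minimal sketch of the second fix, keeping the UIImage alive until after drawing:
UIImage *tempImage = [[UIImage alloc] initWithData:imageData];
CGImageRef imgRef = [tempImage CGImage];
CGRect bounds = CGRectMake(0, 0, CGImageGetWidth(imgRef), CGImageGetHeight(imgRef));
UIGraphicsBeginImageContext(bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextConcatCTM(context, CGAffineTransformMakeScale(4.0, 4.0));
CGContextDrawImage(context, bounds, imgRef);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[tempImage release]; // release only after the CGImageRef is no longer needed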

Try this one:
-(CGImageRef)imageCapture
{
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGRect rect = CGRectMake(0, 0, 320, 480);
CGImageRef imageRef = CGImageCreateWithImageInRect([viewImage CGImage], rect);
return imageRef;
}
Use the line below whenever you want to capture the screen:
UIImage *captureImg=[[UIImage alloc] initWithCGImage:[self imageCapture]];
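Note that CGImageCreateWithImageInRect follows Core Graphics' Create Rule, so the CGImageRef returned by imageCapture is owned by the caller, and the one-liner above leaks it. A leak-free usage sketch:
CGImageRef capturedRef = [self imageCapture];
UIImage *captureImg = [[UIImage alloc] initWithCGImage:capturedRef];
CGImageRelease(capturedRef); // balances the +1 reference from CGImageCreateWithImageInRect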

Capture the screen as it looks

I'm working on an application in which I have an image view showing an image, plus an overlay view with some clear holes, so the background image shows through the holes. My problem is that I want to capture the whole screen (the image view together with the holes view). I'm using the code below, but it's not working.
- (UIImage*)captureView:(UIView *)yourView {
CGRect rect = [[UIScreen mainScreen] bounds];
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[yourView.layer renderInContext:context];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
This is how to do it (passing 0.0 as the scale makes the context use the device's screen scale, so the capture stays sharp on retina displays):
UIGraphicsBeginImageContextWithOptions(yourView.bounds.size, yourView.opaque, 0.0);
[yourView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *LastImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
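Wrapped into a reusable helper, it might look like this sketch (yourView stands in for whichever view you want to capture):
- (UIImage *)snapshotOfView:(UIView *)yourView
{
UIGraphicsBeginImageContextWithOptions(yourView.bounds.size, yourView.opaque, 0.0);
[yourView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return snapshot;
}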
Try this to take your screenshot:
-(UIImage*)getScreenShot
{
CGSize screenSize = [[UIScreen mainScreen] applicationFrame].size;
//CGSize screenSize = CGSizeMake(1024, 768);
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(nil, screenSize.width, screenSize.height, 8, 4*(int)screenSize.width, colorSpaceRef, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpaceRef); // the context retains the color space
// flip the coordinate system so the layer renders right side up
CGContextTranslateCTM(ctx, 0.0, screenSize.height);
CGContextScaleCTM(ctx, 1.0, -1.0);
[self.yourView.layer renderInContext:ctx];
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(ctx);
return image;
}

UIImageJPEGRepresentation giving 2x images on retina display

I have this code, which creates an image, and then adds some effects to it and sizes it down to make largeThumbnail.
UIImage *originalImage = [UIImage imageWithData:self.originalImage];
thumbnail = createLargeThumbnailFromImage(originalImage);
NSLog(#"thumbnail: %f", thumbnail.size.height);
NSData *thumbnailData = UIImageJPEGRepresentation(thumbnail, 1.0);
Later on:
UIImage *image = [UIImage imageWithData:self.largeThumbnail];
NSLog(#"thumbnail 2: %f", image.size.height);
NSLog returns:
thumbnail: 289.000000
thumbnail 2: 578.000000
As you can see, when it converts the image back from data, it makes it 2x the size. Any ideas why this may be happening?
Large thumbnail code:
UIImage *createLargeThumbnailFromImage(UIImage *image) {
UIImage *resizedImage;
resizedImage = [image imageScaledToFitSize:LARGE_THUMBNAIL_SIZE];
CGRect largeThumbnailRect = CGRectMake(0, 0, resizedImage.size.width, resizedImage.size.height);
UIGraphicsBeginImageContextWithOptions(resizedImage.size, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
//Image
CGContextTranslateCTM(context, 0, resizedImage.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextDrawImage(context, largeThumbnailRect, resizedImage.CGImage);
//Border
CGContextSaveGState(context);
CGRect innerRect = rectForRectWithInset(largeThumbnailRect, 1.5);
CGMutablePathRef borderPath = createRoundedRectForRect(innerRect, 0);
CGContextSetStrokeColorWithColor(context, [[UIColor whiteColor] CGColor]);
CGContextSetLineWidth(context, 3);
CGContextAddPath(context, borderPath);
CGContextStrokePath(context);
CGPathRelease(borderPath); // borderPath came from a create function, so it must be released
CGContextRestoreGState(context);
UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return thumbnail;
}
Try replacing the part where you load the second image:
UIImage *image = [UIImage imageWithData:self.largeThumbnail];
with this one:
UIImage *jpegImage = [UIImage imageWithData:self.largeThumbnail];
UIImage *image = [UIImage imageWithCGImage:jpegImage.CGImage scale:originalImage.scale orientation:jpegImage.imageOrientation];
What happens here is that NSData stores no scale factor, so the image decoded from it comes back with a scale of 1.0 and therefore double the point dimensions on a retina display; rewrapping the CGImage with the original scale fixes that.
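More generally, a small helper along these lines (a sketch; ImageWithScale is a made-up name) rewraps any freshly decoded image at a known scale:
static UIImage *ImageWithScale(UIImage *image, CGFloat scale)
{
// imageWithCGImage:scale:orientation: keeps the pixel data and only changes the point size
return [UIImage imageWithCGImage:image.CGImage scale:scale orientation:image.imageOrientation];
}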

UIGraphicsBeginImageContextWithOptions and UIImageJPEGRepresentation not working well together

So I have this code to create a UIImage:
UIGraphicsBeginImageContextWithOptions(border.frame.size, YES, 0);
[border.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *thumbnailImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
At this point, the size of the image is correct: 80x100.
Then it runs this code:
NSData *fullImageData = UIImageJPEGRepresentation(thumbnailImage, 1.0f);
And the NSData of the image returns an image at the size 160x200 - twice as large as it should be.
It became clear that the reason for this is the line:
UIGraphicsBeginImageContextWithOptions(border.frame.size, YES, 0);
The 0 at the end is the scale; because it's 0, it uses the device's scale factor. I keep it this way to maintain a sharp image. However, when I set the scale to 1, the image stays the size it should be, but it doesn't come out in retina quality. What I want is to keep retina quality but also keep the right size. Is there a way to do this?
Try resizing the UIImage before calling UIImageJPEGRepresentation:
- (UIImage *)resizeImage:(UIImage*)image newSize:(CGSize)newSize {
CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
CGImageRef imageRef = image.CGImage;
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
// Set the quality level to use when rescaling
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
CGContextConcatCTM(context, flipVertical);
// Draw into the context; this scales the image
CGContextDrawImage(context, newRect, imageRef);
// Get the resized image from the context and a UIImage
CGImageRef newImageRef = CGBitmapContextCreateImage(context);
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
CGImageRelease(newImageRef);
UIGraphicsEndImageContext();
return newImage;
}
if ([UIScreen mainScreen].scale > 1)
{
thumbnailImage = [self resizeImage:thumbnailImage newSize:CGSizeMake(thumbnailImage.size.width / [UIScreen mainScreen].scale, thumbnailImage.size.height / [UIScreen mainScreen].scale)];
}
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
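For completeness, a hypothetical usage of the helper above; as with the first answer, dividing by the screen scale keeps the encoded JPEG at the intended pixel size:
CGFloat screenScale = [UIScreen mainScreen].scale;
UIImage *scaled = [self imageWithImage:thumbnailImage scaledToSize:CGSizeMake(thumbnailImage.size.width / screenScale, thumbnailImage.size.height / screenScale)];
NSData *fullImageData = UIImageJPEGRepresentation(scaled, 1.0f);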

Resizing UIImage in UIImageView

I'm trying to create a UIPickerView with some images in it, but I can't seem to figure out how to get the images to fit in the view (right now they're too large and are overlapping each other).
I'm trying to use a function to resize each image when it's drawn, but I'm getting errors when the function is called, although the program compiles and runs fine (with the exception of the image not resizing). The resizing function and initialization functions are:
-(UIImage *)resizeImage:(UIImage *)image width:(int)width height:(int)height {
NSLog(#"resizing");
CGImageRef imageRef = [image CGImage];
CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
//if (alphaInfo == kCGImageAlphaNone)
alphaInfo = kCGImageAlphaNoneSkipLast;
CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height, CGImageGetBitsPerComponent(imageRef),
4 * width, CGImageGetColorSpace(imageRef), alphaInfo);
CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), imageRef);
CGImageRef ref = CGBitmapContextCreateImage(bitmap);
UIImage *result = [UIImage imageWithCGImage:ref];
CGContextRelease(bitmap);
CGImageRelease(ref);
return result;
}
- (void)viewDidLoad {
UIImage *h1 = [UIImage imageNamed:@"h1.png"];
h1 = [self resizeImage:h1 width:50 height: 50];
UIImageView *h1View = [[UIImageView alloc] initWithImage:h1];
NSArray *imageViewArray = [[NSArray alloc] initWithObjects:
h1View, nil];
NSString *fieldName = [[NSString alloc] initWithFormat:@"column1"];
[self setValue:imageViewArray forKey:fieldName];
[fieldName release];
[imageViewArray release];
[h1View release];
}
Console Output:
TabTemplate[29322:207] resizing
TabTemplate[29322] : CGBitmapContextCreate: unsupported colorspace
TabTemplate[29322] : CGContextDrawImage: invalid context
TabTemplate[29322] : CGBitmapContextCreateImage: invalid context
I can't figure out what's going wrong. Any help is greatly appreciated.
You don't need to resize your UIImage if you use the contentMode property of UIImageView:
myImageView.contentMode = UIViewContentModeScaleAspectFit;
Or if you still want to resize your UIImage, have a look at these SO posts:
resizing a UIImage without loading it entirely into memory?
UIImage: Resize, then Crop
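If you do want to keep the bitmap-context approach from the question, note that the unsupported colorspace error usually means CGImageGetColorSpace returned a color space (an indexed one, for example) that CGBitmapContextCreate cannot render into. A sketch of a common workaround, forcing device RGB (width and height as in the question's resizeImage:):
CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height, 8, 4 * width, rgb, kCGImageAlphaNoneSkipLast);
CGColorSpaceRelease(rgb); // the context retains the color space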
Use the code below to scale the image preserving its aspect ratio, then clip it to the image view's bounds:
imageView.contentMode = UIViewContentModeScaleAspectFill;
imageView.clipsToBounds = YES;
In Swift:
imageView.contentMode = .ScaleAspectFill
imageView.clipsToBounds = true
UIImage *image = [UIImage imageNamed:@"myImage"];
UIGraphicsBeginImageContextWithOptions(destinationRect.size, NO, 0.0);
[image drawInRect:destinationRect];
UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(thumbnail, nil, nil, nil);

UIImage created from CGImageRef fails with UIImagePNGRepresentation

I'm using the following code to crop and create a new UIImage out of a bigger one. I've isolated the issue to the function CGImageCreateWithImageInRect(), which seems not to set some CGImage property the way I want. :-) The problem is that a call to UIImagePNGRepresentation() fails, returning nil.
CGImageRef origRef = [stillView.image CGImage];
CGImageRef cgCrop = CGImageCreateWithImageInRect( origRef, theRect);
UIImage *imgCrop = [UIImage imageWithCGImage:cgCrop];
...
NSData *data = UIImagePNGRepresentation ( imgCrop);
-- libpng error: No IDATs written into file
Any idea what might wrong or alternative for cropping a rect out of UIImage?
I had the same problem, but only when testing compatibility on iOS 3.2. On 4.2 it works fine.
In the end I found this http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/ which works on both, albeit a little more verbose!
I converted this into a category on UIImage:
UIImage+Crop.h
@interface UIImage (Crop)
- (UIImage*) imageByCroppingToRect:(CGRect)rect;
@end
UIImage+Crop.m
@implementation UIImage (Crop)
- (UIImage*) imageByCroppingToRect:(CGRect)rect
{
//create a context to do our clipping in
UIGraphicsBeginImageContext(rect.size);
CGContextRef currentContext = UIGraphicsGetCurrentContext();
//create a rect with the size we want to crop the image to
//the X and Y here are zero so we start at the beginning of our
//newly created context
CGRect clippedRect = CGRectMake(0, 0, rect.size.width, rect.size.height);
CGContextClipToRect( currentContext, clippedRect);
//create a rect equivalent to the full size of the image
//offset the rect by the X and Y we want to start the crop
//from in order to cut off anything before them
CGRect drawRect = CGRectMake(rect.origin.x * -1,
rect.origin.y * -1,
self.size.width,
self.size.height);
//draw the image to our clipped context using our offset rect
CGContextTranslateCTM(currentContext, 0.0, rect.size.height);
CGContextScaleCTM(currentContext, 1.0, -1.0);
CGContextDrawImage(currentContext, drawRect, self.CGImage);
//pull the image from our cropped context
UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
//pop the context to get back to the default
UIGraphicsEndImageContext();
//Note: this is autoreleased
return cropped;
}
@end
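A quick usage sketch, assuming the category above is imported (the crop rect is arbitrary):
UIImage *cropped = [stillView.image imageByCroppingToRect:CGRectMake(10, 10, 100, 100)];
NSData *data = UIImagePNGRepresentation(cropped); // no longer returns nil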
In a PNG there are various chunks present: some contain palette info, some the actual image data, and some other information; it's a very interesting standard. The IDAT chunk is the part that actually contains the image data. If there's no "IDAT written into file", then libpng has had some issue creating a PNG from the input data.
I don't know exactly what your stillView.image is, but what happens when you pass your code a CGImageRef that is certainly valid? What are the actual values in theRect? If your theRect is beyond the bounds of the image then the cgCrop you're trying to use to make the UIImage could easily be nil - or not nil, but containing no image or an image with width and height 0, giving libpng nothing to work with.
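A minimal sanity check along those lines, using stillView and theRect from the question:
CGImageRef origRef = [stillView.image CGImage];
CGRect imageBounds = CGRectMake(0, 0, CGImageGetWidth(origRef), CGImageGetHeight(origRef));
NSAssert(CGRectContainsRect(imageBounds, theRect), @"theRect lies outside the image");
CGImageRef cgCrop = CGImageCreateWithImageInRect(origRef, theRect);
NSAssert(cgCrop != NULL, @"CGImageCreateWithImageInRect returned NULL");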
It seems the solution you are trying should work, but I recommend using this:
CGImageRef image = [stillView.image CGImage];
CGRect cropZone; // set this to the rect you want to crop
size_t cWidth = cropZone.size.width;
size_t cHeight = cropZone.size.height;
size_t bitsPerComponent = CGImageGetBitsPerComponent(image);
size_t bytesPerRow = CGImageGetBytesPerRow(image) / CGImageGetWidth(image) * cWidth;
//Now we build a context with those dimensions.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(nil, cWidth, cHeight, bitsPerComponent, bytesPerRow, colorSpace, CGImageGetBitmapInfo(image));
CGColorSpaceRelease(colorSpace); // the context retains the color space
CGContextDrawImage(context, cropZone, image);
CGImageRef result = CGBitmapContextCreateImage(context);
UIImage *cropUIImage = [[UIImage alloc] initWithCGImage:result];
CGContextRelease(context);
CGImageRelease(result);
NSData *imgData = UIImagePNGRepresentation(cropUIImage);
UIImage *croppedImage = [self imageByCropping:yourImageView.image toRect:yourRect]; // yourRect: define your crop rect here
CGSize size = CGSizeMake(croppedImage.size.width, croppedImage.size.height);
UIGraphicsBeginImageContext(size);
[croppedImage drawAtPoint:CGPointMake(0, 0)];
[[UIImage imageNamed:yourImageName] drawInRect:CGRectMake(0, 532, 150, 80)]; // define your rectangle here
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
croppedImage = result;
yourCropImageView.image = croppedImage; // the image view retains its image; no extra retain is needed