I've looked through tons of questions here on Stack Overflow, and none of them solved my problem.
So, I'm using Apple's AVCam sample:
http://developer.apple.com/library/ios/#samplecode/AVCam/Introduction/Intro.html
When I take a picture and save it to the image library, everything is fine. But when I crop it to show it on screen, and when I send it to a server using
NSData *pictureData = UIImageJPEGRepresentation(self.snappedPictureView.image, 0.9);
the image arrives rotated 90 degrees!
Here is the code where I crop it:
UIImage *cropped = [image imageByCroppingRect:CGRectMake(0, 0,
                                                         (image.size.width * 300) / self.view.frame.size.width,
                                                         (image.size.height * 300) / self.view.frame.size.height)];
imageByCroppingRect is:
- (UIImage *)imageByCroppingRect:(CGRect)area
{
    UIImage *croppedImage;
    CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], area);
    // or use the UIImage wherever you like
    croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return croppedImage;
}
When you crop the image, you are losing the metadata that tells it the proper way to rotate.
Instead, your code should preserve the original orientation of the image, like this:
- (UIImage *)imageByCroppingRect:(CGRect)rect {
    CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef scale:self.scale orientation:self.imageOrientation];
    CGImageRelease(imageRef);
    return result;
}
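If this lives in a UIImage category, the call site from the question stays the same, but the JPEG now encodes the right way up. A minimal sketch of the wiring (the category name here is just an example):

@interface UIImage (Cropping)
- (UIImage *)imageByCroppingRect:(CGRect)rect;
@end

// Usage: the crop keeps the original scale and orientation, so
// UIImageJPEGRepresentation() no longer produces a rotated image.
// cropRect is computed exactly as in the question.
UIImage *cropped = [self.snappedPictureView.image imageByCroppingRect:cropRect];
NSData *pictureData = UIImageJPEGRepresentation(cropped, 0.9);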
Related
I want to crop a UIImage with the following code:
- (UIImage *)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect
{
    CGImageRef imageRef = CGImageCreateWithImageInRect([imageToCrop CGImage], rect);
    // or use the UIImage wherever you like
    UIImage *img = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return img;
}
This code works fine in the simulator but gives unusual results on the device.
Create a UIImage category and try adding this.
@implementation UIImage (Crop)

- (UIImage *)crop:(CGRect)cropRect {
    // Convert the rect from points to pixels so the crop is correct on Retina.
    cropRect = CGRectMake(cropRect.origin.x * self.scale,
                          cropRect.origin.y * self.scale,
                          cropRect.size.width * self.scale,
                          cropRect.size.height * self.scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], cropRect);
    UIImage *result = [UIImage imageWithCGImage:imageRef
                                          scale:self.scale
                                    orientation:self.imageOrientation];
    CGImageRelease(imageRef);
    return result;
}

@end
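Assuming the category header is named something like UIImage+Crop.h (the file name is just an example), usage might look like this. Note that the rect is passed in points; the method converts to pixels via the scale:

#import "UIImage+Crop.h" // hypothetical header for the category above

UIImage *photo = [UIImage imageNamed:@"photo"];
// The same 100x100-point rect works on Retina and non-Retina devices,
// because the method multiplies it by photo.scale internally.
UIImage *thumb = [photo crop:CGRectMake(0, 0, 100, 100)];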
Try this one:
- (UIImage *)cropImage:(UIImage *)oldImage {
    CGSize imageSize = oldImage.size;
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(imageSize.width, imageSize.height - 150), NO, 0.0);
    [oldImage drawAtPoint:CGPointMake(0, -80) blendMode:kCGBlendModeCopy alpha:1.0];
    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return croppedImage;
}
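The hard-coded 150 and -80 fit only one particular layout. A parameterized sketch of the same draw-and-capture idea (not from the original answer) might look like this:

// Crop by drawing the source image offset inside a smaller context.
// cropRect is in points, in the source image's coordinate space.
- (UIImage *)cropImage:(UIImage *)oldImage toRect:(CGRect)cropRect {
    UIGraphicsBeginImageContextWithOptions(cropRect.size, NO, oldImage.scale);
    // Shift the drawing so the desired region lands at the context origin.
    [oldImage drawAtPoint:CGPointMake(-cropRect.origin.x, -cropRect.origin.y)
                blendMode:kCGBlendModeCopy
                    alpha:1.0];
    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return croppedImage;
}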
Hi, so I currently have a small image (about 100x160) stored as an NSData attribute in my Core Data model.
I display all entities in a table view. The UIImageView in a single cell is only 50x80, and just dropping the image into that frame looks a bit jagged.
What would be the best way to display this image in my table view cell? Resize it on the fly in cellForRowAtIndexPath:? That will probably make my table view laggy.
Or resize it on creation and save it in my Core Data entity (or perhaps on disk)?
Thank you! Please leave a comment if something is unclear.
For that you have to crop or resize the image. The following code crops the image to the required frame.
- (void)viewDidLoad
{
    [super viewDidLoad];
    // do something......
    UIImage *img = [UIImage imageWithData:imageData]; // imageData is the NSData attribute you stored

    // To crop the image
    UIImage *croppedImage = [self imageByCropping:img toRect:CGRectMake(10, 10, 50, 80)];

    // To resize the image
    UIImage *resizedImage = [self resizeImage:img width:50 height:80];
}
Crop Image:
- (UIImage *)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect
{
    CGImageRef imageRef = CGImageCreateWithImageInRect([imageToCrop CGImage], rect);
    UIImage *cropped = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef); // release the CGImage to avoid a leak
    return cropped;
}
Resize Image:
- (UIImage *)resizeImage:(UIImage *)image width:(int)width height:(int)height
{
    CGImageRef imageRef = [image CGImage];
    // Force an alpha layout that CGBitmapContextCreate accepts.
    CGImageAlphaInfo alphaInfo = kCGImageAlphaNoneSkipLast;
    CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height,
                                                CGImageGetBitsPerComponent(imageRef),
                                                4 * width,
                                                CGImageGetColorSpace(imageRef),
                                                alphaInfo);
    CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage *result = [UIImage imageWithCGImage:ref];
    CGContextRelease(bitmap); // release the context to avoid a leak
    CGImageRelease(ref);      // release the CGImage to avoid a leak
    return result;
}
You can go with either approach.
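As an aside, a UIKit-only resize is often enough for thumbnails and sidesteps the CGBitmapContext bookkeeping. A minimal sketch, assuming you resize once when the record is created and store the result:

- (UIImage *)resizeImage:(UIImage *)image toSize:(CGSize)size {
    // A scale of 0.0 means "use the device's screen scale", so the
    // 50x80 thumbnail stays sharp on Retina displays.
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resized;
}

Resizing once at save time keeps cellForRowAtIndexPath: cheap, which addresses the scrolling-performance worry in the question.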
I want to crop a UIImage to get a new aspect ratio...
bounds is a CGRect with (0, 0, newWidth, newHeight)...
- (UIImage *)croppedImage:(UIImage *)myImage :(CGRect)bounds {
    CGImageRef imageRef = CGImageCreateWithImageInRect(myImage.CGImage, bounds);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGSize asd = croppedImage.size;
    return croppedImage;
}
with the call:
[workImage croppedImage:workImage :CGRectMake(0, 0, newWidth, newHeight)];
After that, "workImage" has the same size as before...
What could be wrong?
Regards
Well, you aren't altering the current image; this appears to be a category method on UIImage, so you are creating a new image and returning it. What will work is this:
workImage = [workImage croppedImage:workImage :CGRectMake(0, 0, newWidth, newHeight)];
However, I think the method is better named like this (assuming it is a category method on UIImage):
- (UIImage *)croppedImageWithRect:(CGRect)bounds {
    CGImageRef imageRef = CGImageCreateWithImageInRect(self.CGImage, bounds);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return croppedImage;
}
This way you will call it like this:
workImage = [workImage croppedImageWithRect:CGRectMake(0, 0, newWidth, newHeight)];
And as a side note, don't use method names like croppedImage::. It is better to name all parameters, e.g. croppedImage:rect:.
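For example (illustrative signatures only):

// Hard to read at the call site -- the second parameter is anonymous:
- (UIImage *)croppedImage:(UIImage *)image :(CGRect)rect;
// Self-documenting:
- (UIImage *)croppedImage:(UIImage *)image rect:(CGRect)rect;
// Call site: [self croppedImage:workImage rect:CGRectMake(0, 0, w, h)];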
I created a masked image using a function from an iPhone blog:
UIImage *imgToSave = [self maskImage:[UIImage imageNamed:@"pic.jpg"] withMask:[UIImage imageNamed:@"sd-face-mask.png"]];
It looks good in a UIImageView:
UIImageView *imgView = [[UIImageView alloc] initWithImage:imgToSave];
imgView.center = CGPointMake(160.0f, 140.0f);
[self.view addSubview:imgView];
I use UIImagePNGRepresentation to save it to disk:
[UIImagePNGRepresentation(imgToSave) writeToFile:[self findUniqueSavePath] atomically:YES];
UIImagePNGRepresentation returns NSData for an image that looks different: the output is the inverse of the image mask. The area that was cut out in the app is visible in the file, and the area that was visible in the app is removed; visibility is the opposite. My mask is designed to remove everything but the face area in the picture. The UIImage looks right in the app, but after I save it to disk the file looks the opposite: the face is removed, but everything else is there.
Please let me know if you can help!
In Quartz you can mask either with an image mask (black lets pixels through and white blocks them) or with a normal image used as a mask (white lets pixels through and black blocks them), which is the opposite. It seems that, for some reason, saving treats the image mask as a normal masking image. One thought is to render to a bitmap context and then create the image to be saved from that.
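A minimal sketch of that idea, reusing imgToSave and findUniqueSavePath from the question: redraw the masked image into a fresh image context so the saved PNG contains plain flattened pixels rather than a mask.

// Flatten the masked image into ordinary pixels before saving.
UIGraphicsBeginImageContextWithOptions(imgToSave.size, NO, imgToSave.scale);
[imgToSave drawInRect:CGRectMake(0, 0, imgToSave.size.width, imgToSave.size.height)];
UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[UIImagePNGRepresentation(flattened) writeToFile:[self findUniqueSavePath] atomically:YES];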
I had the exact same issue: when I saved the file it was one way, but the image returned in memory was the exact opposite.
The culprit and the solution was UIImagePNGRepresentation(). It fixes the in-app image before it is saved to disk, so I just inserted that function as the last step in creating the masked image.
This may not be the most elegant solution, but it works. I copied some code from my app and condensed it; I'm not sure the code below works as is, but if not, it's close... maybe just some typos.
Enjoy. :)
// MyImageHelperObj.h
@interface MyImageHelperObj : NSObject
+ (UIImage *)createGrayScaleImage:(UIImage *)originalImage;
+ (UIImage *)createMaskedImageWithSize:(CGSize)newSize sourceImage:(UIImage *)sourceImage maskImage:(UIImage *)maskImage;
@end
// MyImageHelperObj.m
#import <QuartzCore/QuartzCore.h>
#import "MyImageHelperObj.h"
@implementation MyImageHelperObj
+ (UIImage *)createMaskedImageWithSize:(CGSize)newSize sourceImage:(UIImage *)sourceImage maskImage:(UIImage *)maskImage
{
    // create image size rect
    CGRect newRect = CGRectZero;
    newRect.size = newSize;

    // draw source image
    UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0f);
    [sourceImage drawInRect:newRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();

    // draw mask image
    [maskImage drawInRect:newRect blendMode:kCGBlendModeNormal alpha:1.0f];
    maskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // create grayscale version of mask image to make the "image mask"
    UIImage *grayScaleMaskImage = [MyImageHelperObj createGrayScaleImage:maskImage];
    size_t width = CGImageGetWidth(grayScaleMaskImage.CGImage);
    size_t height = CGImageGetHeight(grayScaleMaskImage.CGImage);
    size_t bitsPerPixel = CGImageGetBitsPerPixel(grayScaleMaskImage.CGImage);
    size_t bytesPerRow = CGImageGetBytesPerRow(grayScaleMaskImage.CGImage);
    CGDataProviderRef providerRef = CGImageGetDataProvider(grayScaleMaskImage.CGImage);
    CGImageRef imageMask = CGImageMaskCreate(width, height, 8, bitsPerPixel, bytesPerRow, providerRef, NULL, false);
    CGImageRef maskedImage = CGImageCreateWithMask(newImage.CGImage, imageMask);
    CGImageRelease(imageMask);
    newImage = [UIImage imageWithCGImage:maskedImage];
    CGImageRelease(maskedImage);

    // the PNG round-trip is the fix: it normalizes the masked image so the
    // saved file matches what is displayed in the app
    return [UIImage imageWithData:UIImagePNGRepresentation(newImage)];
}
+ (UIImage *)createGrayScaleImage:(UIImage *)originalImage
{
    // create gray device colorspace.
    CGColorSpaceRef space = CGColorSpaceCreateDeviceGray();
    // create 8-bit bitmap context without an alpha channel.
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL, originalImage.size.width, originalImage.size.height, 8, 0, space, kCGImageAlphaNone);
    CGColorSpaceRelease(space);
    // draw the image.
    CGRect bounds = CGRectMake(0.0, 0.0, originalImage.size.width, originalImage.size.height);
    CGContextDrawImage(bitmapContext, bounds, originalImage.CGImage);
    // get an image from the bitmap context.
    CGImageRef grayScaleImage = CGBitmapContextCreateImage(bitmapContext);
    CGContextRelease(bitmapContext);
    // note: UIImage may invert the orientation when converting the CGImage to a UIImage.
    UIImage *image = [UIImage imageWithCGImage:grayScaleImage];
    CGImageRelease(grayScaleImage);
    return image;
}
@end
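Usage, with the image names taken from the original question:

#import "MyImageHelperObj.h"

UIImage *face = [UIImage imageNamed:@"pic.jpg"];
UIImage *mask = [UIImage imageNamed:@"sd-face-mask.png"];
UIImage *imgToSave = [MyImageHelperObj createMaskedImageWithSize:face.size
                                                     sourceImage:face
                                                       maskImage:mask];
// Because the helper already round-trips through PNG data, the file
// on disk now matches what the UIImageView shows.
[UIImagePNGRepresentation(imgToSave) writeToFile:[self findUniqueSavePath] atomically:YES];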
I use the following code to get images from a sprite sheet, and it works fine everywhere except on the iPhone 4 (the HD version).
- (UIImage *)croppedImage:(CGRect)rect {
    CGImageRef image = CGImageCreateWithImageInRect([self CGImage], rect);
    UIImage *result = [UIImage imageWithCGImage:image];
    CGImageRelease(image);
    return result;
}
The iPhone 4 automatically loads the HD version of the image (sprite@2x.png) instead of sprite.png. The original image has a scale of 2, but the resulting image has a scale of 1 and the wrong size.
How do I handle this behavior, taking into account the different scales on the iPhone 3G[S] and the iPhone 4?
I have read this document, but it says nothing about using CGImageCreateWithImageInRect.
From what I can tell, CGImageCreateWithImageInRect will do the right thing. What you need to change is the UIImage initialization:
http://developer.apple.com/iphone/library/documentation/uikit/reference/UIImage_Class/Reference/Reference.html#//apple_ref/occ/clm/UIImage/imageWithCGImage:scale:orientation:
Change it to [UIImage imageWithCGImage:image scale:self.scale orientation:self.imageOrientation] and it should work just fine. (This assumes the method is a category on UIImage, which it looks like it is.)
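Applied to the method from the question, the change looks like this:

- (UIImage *)croppedImage:(CGRect)rect {
    CGImageRef image = CGImageCreateWithImageInRect([self CGImage], rect);
    // Propagate scale and orientation so an @2x source yields an @2x crop.
    UIImage *result = [UIImage imageWithCGImage:image
                                          scale:self.scale
                                    orientation:self.imageOrientation];
    CGImageRelease(image);
    return result;
}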
You should multiply the crop rect by the image scale. In my experience, it's unnecessary to use any different image initialization.
- (UIImage *)_cropImage:(UIImage *)image withRect:(CGRect)cropRect
{
    cropRect = CGRectMake(cropRect.origin.x * image.scale,
                          cropRect.origin.y * image.scale,
                          cropRect.size.width * image.scale,
                          cropRect.size.height * image.scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropRect);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return croppedImage;
}
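For example (a hypothetical sprite sheet named sprite.png / sprite@2x.png):

UIImage *sprite = [UIImage imageNamed:@"sprite"]; // loads sprite@2x.png on Retina
// A 50x50-point rect; on an @2x sprite this reads a 100x100-pixel region.
UIImage *frame = [self _cropImage:sprite withRect:CGRectMake(0, 0, 50, 50)];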