Weird behavior when rotating AVFoundation stills on iOS

OK, so the following code works, but I don't get why. I am capturing still images from the Front camera using AVFoundation. I have this code before initiating capture:
if ([connection isVideoOrientationSupported]) {
    AVCaptureVideoOrientation orientation;
    switch ([UIDevice currentDevice].orientation) {
        case UIDeviceOrientationPortraitUpsideDown:
            orientation = AVCaptureVideoOrientationPortraitUpsideDown;
            break;
        case UIDeviceOrientationLandscapeLeft:
            orientation = AVCaptureVideoOrientationLandscapeRight;
            break;
        case UIDeviceOrientationLandscapeRight:
            orientation = AVCaptureVideoOrientationLandscapeLeft;
            break;
        default:
            orientation = AVCaptureVideoOrientationPortrait;
            break;
    }
    [connection setVideoOrientation:orientation];
}
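A detail worth calling out in the switch above: the two landscape cases are intentionally crossed, because UIDeviceOrientation describes how the device is held, while AVCaptureVideoOrientation describes the resulting frame. A language-agnostic sketch of the mapping (Python, with the enum cases reduced to strings purely for illustration):

```python
# Sketch of the device-to-capture orientation mapping from the switch above.
# The two landscape cases are deliberately swapped: the device rotating one
# way means the captured frame must rotate the other way.
DEVICE_TO_CAPTURE = {
    "PortraitUpsideDown": "PortraitUpsideDown",
    "LandscapeLeft":      "LandscapeRight",   # swapped on purpose
    "LandscapeRight":     "LandscapeLeft",    # swapped on purpose
}

def capture_orientation(device_orientation):
    # Anything else (Portrait, FaceUp, FaceDown, Unknown) falls back to
    # Portrait, mirroring the `default:` branch of the switch.
    return DEVICE_TO_CAPTURE.get(device_orientation, "Portrait")
```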
and then this in the captureStillImageAsynchronouslyFromConnection:completionHandler: to store the image:
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *i = [UIImage imageWithData:imageData];
UIGraphicsBeginImageContext(i.size);
[i drawAtPoint:CGPointMake(0.0, 0.0)];
image.image = UIGraphicsGetImageFromCurrentImageContext();
As you can see, I don't rotate the image or anything; I just draw it into the context and save it. But as soon as I try to use i, it is always rotated by 90 degrees. If I try to rotate it using
UIImage *rotated = [[UIImage alloc] initWithCGImage:i.CGImage scale:1.0f orientation:i.imageOrientation];
it doesn't work (no change from just using i).
I understand that UIImage might just draw the image into the context using the right orientation automatically, but WTF?

You can check the device orientation like this:
if (deviceOrientation == UIInterfaceOrientationLandscapeLeft && position == AVCaptureDevicePositionBack)
{
    // ...
}
and if the above condition (or whatever your condition is) is satisfied, you can rotate your image using the code below:
// Assumes this lives in a UIImage category, since it is called later as
// [image imageRotatedByDegrees:90.0]
#define DegreesToRadians(x) ((x) * M_PI / 180.0)

- (UIImage *)imageRotatedByDegrees:(CGFloat)degrees {
    if (degrees == 0.0) {
        return self;
    }
    // Calculate the size of the rotated view's containing box for our drawing space
    UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0, 0, self.size.width, self.size.height)];
    CGAffineTransform t = CGAffineTransformMakeRotation(DegreesToRadians(degrees));
    rotatedViewBox.transform = t;
    CGSize rotatedSize = rotatedViewBox.frame.size;
    [rotatedViewBox release];
    // Create the bitmap context
    UIGraphicsBeginImageContext(rotatedSize);
    CGContextRef bitmap = UIGraphicsGetCurrentContext();
    // Move the origin to the middle of the image so we rotate and scale around the center
    CGContextTranslateCTM(bitmap, rotatedSize.width / 2, rotatedSize.height / 2);
    // Rotate the image context
    CGContextRotateCTM(bitmap, DegreesToRadians(degrees));
    // Now draw the rotated/scaled image into the context
    CGContextScaleCTM(bitmap, 1.0, -1.0);
    CGContextDrawImage(bitmap, CGRectMake(-self.size.width / 2, -self.size.height / 2, self.size.width, self.size.height), [self CGImage]);
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
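The throwaway rotatedViewBox view in the snippet above is just a convenient way to compute the bounding box of a rotated rectangle. The same number can be computed directly; a small sketch in Python (function name is my own):

```python
import math

def rotated_bounding_size(w, h, degrees):
    """Size of the axis-aligned box containing a w x h rect rotated by
    `degrees` -- the same quantity the Objective-C code reads off the
    throwaway `rotatedViewBox` view's frame."""
    t = math.radians(degrees)
    return (abs(w * math.cos(t)) + abs(h * math.sin(t)),
            abs(w * math.sin(t)) + abs(h * math.cos(t)))
```

For a quarter turn this simply swaps width and height, which is why the drawing space must be resized before rotating.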


How to perform square cut to the photos in camera roll?

I would like to try some image filter features on iPhone, like Instagram does.
I use imagePickerController to get a photo from the camera roll. I understand that the image returned by imagePickerController is reduced to save memory, and that it's not wise to load the original image into a UIImage. But how can I process the image and then save it back at its original pixel dimensions?
I use iPhone 4S as my development device.
The original photo in the camera roll is 3264 * 2448.
The image returned by UIImagePickerControllerOriginalImage is 1920 * 1440.
The image returned by UIImagePickerControllerEditedImage is 640 * 640.
imageViewOld (using UIImagePickerControllerCropRect [80,216,1280,1280] to crop the image returned by UIImagePickerControllerOriginalImage) is 1280 * 1224.
imageViewNew (using the double-sized UIImagePickerControllerCropRect [80,216,2560,2560] to crop the image returned by UIImagePickerControllerOriginalImage) is 1840 * 1224.
I checked the same photo processed by Instagram: it is 1280 * 1280.
My questions are:
Why does UIImagePickerControllerOriginalImage not return the "original" photo? Why is it reduced to 1920 * 1440?
Why does UIImagePickerControllerEditedImage not return a 1280 * 1280 image, when the
UIImagePickerControllerCropRect shows it was cut to a 1280 * 1280 square?
How can I do a square cut to original photo to be a 2448 * 2448 image?
Thanks in advance.
Below is my code:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    NSString *mediaType = [info objectForKey:UIImagePickerControllerMediaType];
    if ([mediaType isEqualToString:@"public.image"])
    {
        UIImage *imageEdited = [info objectForKey:UIImagePickerControllerEditedImage];
        UIImage *imagePicked = [info objectForKey:UIImagePickerControllerOriginalImage];
        CGRect cropRect;
        cropRect = [[info valueForKey:@"UIImagePickerControllerCropRect"] CGRectValue];
        NSLog(@"Original width = %f height= %f ", imagePicked.size.width, imagePicked.size.height);
        //Original width = 1440.000000 height= 1920.000000
        NSLog(@"imageEdited width = %f height = %f", imageEdited.size.width, imageEdited.size.height);
        //imageEdited width = 640.000000 height = 640.000000
        NSLog(@"cropRect %f %f %f %f", cropRect.origin.x, cropRect.origin.y, cropRect.size.width, cropRect.size.height);
        //cropRect 80.000000 216.000000 1280.000000 1280.000000
        CGRect rectNew = CGRectMake(cropRect.origin.x, cropRect.origin.y, cropRect.size.width*2, cropRect.size.height*2);
        CGRect rectOld = CGRectMake(cropRect.origin.x, cropRect.origin.y, cropRect.size.width, cropRect.size.height);
        CGImageRef imageRefNew = CGImageCreateWithImageInRect([imagePicked CGImage], rectNew);
        CGImageRef imageRefOld = CGImageCreateWithImageInRect([imagePicked CGImage], rectOld);
        UIImageView *imageViewNew = [[UIImageView alloc] initWithImage:[UIImage imageWithCGImage:imageRefNew]];
        CGImageRelease(imageRefNew);
        UIImageView *imageViewOld = [[UIImageView alloc] initWithImage:[UIImage imageWithCGImage:imageRefOld]];
        CGImageRelease(imageRefOld);
        NSLog(@"imageViewNew width = %f height = %f", imageViewNew.image.size.width, imageViewNew.image.size.height);
        //imageViewNew width = 1840.000000 height = 1224.000000
        NSLog(@"imageViewOld width = %f height = %f", imageViewOld.image.size.width, imageViewOld.image.size.height);
        //imageViewOld width = 1280.000000 height = 1224.000000
        UIImageWriteToSavedPhotosAlbum(imageEdited, nil, nil, NULL);
        UIImageWriteToSavedPhotosAlbum([imageViewNew.image imageRotatedByDegrees:90.0], nil, nil, NULL);
        UIImageWriteToSavedPhotosAlbum([imageViewOld.image imageRotatedByDegrees:90.0], nil, nil, NULL);
        //assign the image to a UIImage control
        self.imageV.contentMode = UIViewContentModeScaleAspectFit;
        self.imageV.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.width);
        self.imageV.image = imageEdited;
    }
    [self dismissModalViewControllerAnimated:YES];
}
As you have observed, UIImagePickerController will return a scaled-down edited image, sometimes 640x640 and sometimes 320x320 (device dependent).
Your question:
How can I do a square cut to original photo to be a 2448 * 2448 image?
To do this, you first need to use the UIImagePickerControllerCropRect to create a new image from the original image obtained via the UIImagePickerControllerOriginalImage key of the info dictionary. Using the Quartz function CGImageCreateWithImageInRect, you can create a new image that contains only the pixels bounded by the passed rect; in this case, the crop rect. You will need to take orientation into account for this to work properly. Then you need only scale the image to your desired size. It's important to note that the crop rect is relative to the original image after it has been oriented correctly, not as it comes out of the camera or photo library. This is why we need to transform the crop rect to match the orientation before we start using Quartz methods to create new images.
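To make the transform step concrete: for UIImageOrientationRight (EXIF 6) the code below translates by (0, width) and rotates by 270 degrees, which maps an upright-coordinate point (x, y) to (y, width - x) in raw-bitmap coordinates. A Python sketch of that mapping, using the asker's portrait numbers (helper names are my own):

```python
def apply_affine(rect, m):
    """Apply a CoreGraphics-style affine transform (a, b, c, d, tx, ty)
    to a rect (x, y, w, h) and return the axis-aligned bounding box of
    the transformed corners -- what CGRectApplyAffineTransform does."""
    a, b, c, d, tx, ty = m
    x, y, w, h = rect
    corners = [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]
    pts = [(a * px + c * py + tx, b * px + d * py + ty) for px, py in corners]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))

def transform_crop_rect_for_right(crop, upright_size):
    """UIImageOrientationRight (EXIF 6): translate by (0, width), then
    rotate by 270 degrees, i.e. (x, y) -> (y, width - x)."""
    w, _h = upright_size
    # translate(0, w) composed with rotate(270 deg) = matrix (0, -1, 1, 0, 0, w)
    return apply_affine(crop, (0.0, -1.0, 1.0, 0.0, 0.0, w))
```

With the upright size 1440x1920 and crop rect (80, 216, 1280, 1280), this yields (216, 80, 1280, 1280) in the raw (landscape) bitmap, which is the rect CGImageCreateWithImageInRect actually needs.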
I took your code above and set it up to create a 1280x1280 image from the original image based on the crop rect. There are still some edge cases here, i.e. taking into account that the crop rect can sometimes have negative values, (the code assumes a square cropping rect) that have not been addressed.
First, transform the crop rect to take into account the orientation and size of the incoming image. (The transformCGRectForUIImageOrientation function below is from NiftyBean.)
Next, create an image that is cropped to the transformed cropping rect.
Then scale (and rotate) the image to the desired size, i.e. 1280x1280.
Finally, create a UIImage from the CGImage with the correct scale and orientation.
Here is your code with the changes (UPDATE: new code that should take care of the missing cases has been added further below):
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    NSString *mediaType = [info objectForKey:UIImagePickerControllerMediaType];
    if ([mediaType isEqualToString:@"public.image"])
    {
        UIImage *imageEdited = [info objectForKey:UIImagePickerControllerEditedImage];
        UIImage *imagePicked = [info objectForKey:UIImagePickerControllerOriginalImage];
        CGRect cropRect;
        cropRect = [[info valueForKey:@"UIImagePickerControllerCropRect"] CGRectValue];
        NSLog(@"Original width = %f height= %f ", imagePicked.size.width, imagePicked.size.height);
        //Original width = 1440.000000 height= 1920.000000
        NSLog(@"imageEdited width = %f height = %f", imageEdited.size.width, imageEdited.size.height);
        //imageEdited width = 640.000000 height = 640.000000
        NSLog(@"cropRect %@", NSStringFromCGRect(cropRect));
        //cropRect 80.000000 216.000000 1280.000000 1280.000000
        CGSize finalSize = CGSizeMake(1280, 1280);
        CGImageRef imagePickedRef = imagePicked.CGImage;
        CGRect transformedRect = transformCGRectForUIImageOrientation(cropRect, imagePicked.imageOrientation, imagePicked.size);
        CGImageRef cropRectImage = CGImageCreateWithImageInRect(imagePickedRef, transformedRect);
        CGColorSpaceRef colorspace = CGImageGetColorSpace(imagePickedRef);
        CGContextRef context = CGBitmapContextCreate(NULL,
                                                     finalSize.width,
                                                     finalSize.height,
                                                     CGImageGetBitsPerComponent(imagePickedRef),
                                                     0, // let Quartz compute bytes-per-row for the new width
                                                     colorspace,
                                                     CGImageGetAlphaInfo(imagePickedRef));
        CGContextSetInterpolationQuality(context, kCGInterpolationHigh); //Give the context a hint that we want high quality during the scale
        CGContextDrawImage(context, CGRectMake(0, 0, finalSize.width, finalSize.height), cropRectImage);
        CGImageRelease(cropRectImage);
        CGImageRef instaImage = CGBitmapContextCreateImage(context);
        CGContextRelease(context);
        //assign the image to a UIImage control
        UIImage *image = [UIImage imageWithCGImage:instaImage scale:imagePicked.scale orientation:imagePicked.imageOrientation];
        self.imageView.contentMode = UIViewContentModeScaleAspectFit;
        self.imageView.image = image;
        CGImageRelease(instaImage);
        UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
    }
    [self dismissModalViewControllerAnimated:YES];
}
CGRect transformCGRectForUIImageOrientation(CGRect source, UIImageOrientation orientation, CGSize imageSize) {
    switch (orientation) {
        case UIImageOrientationLeft: { // EXIF #8
            CGAffineTransform txTranslate = CGAffineTransformMakeTranslation(imageSize.height, 0.0);
            CGAffineTransform txCompound = CGAffineTransformRotate(txTranslate, M_PI_2);
            return CGRectApplyAffineTransform(source, txCompound);
        }
        case UIImageOrientationDown: { // EXIF #3
            CGAffineTransform txTranslate = CGAffineTransformMakeTranslation(imageSize.width, imageSize.height);
            CGAffineTransform txCompound = CGAffineTransformRotate(txTranslate, M_PI);
            return CGRectApplyAffineTransform(source, txCompound);
        }
        case UIImageOrientationRight: { // EXIF #6
            CGAffineTransform txTranslate = CGAffineTransformMakeTranslation(0.0, imageSize.width);
            CGAffineTransform txCompound = CGAffineTransformRotate(txTranslate, M_PI + M_PI_2);
            return CGRectApplyAffineTransform(source, txCompound);
        }
        case UIImageOrientationUp: // EXIF #1 - do nothing
        default: // EXIF 2,4,5,7 - ignore
            return source;
    }
}
UPDATE I have made a couple of methods that will take care of the rest of the cases.
The steps are basically the same, with a couple of modifications. The first modification is to correctly transform and scale the context to handle the orientation of the incoming image; the second is to support the non-square crops you can get from the UIImagePickerController. In those cases the square image is filled with a color of your choosing.
New Code
// CropRect is assumed to be in UIImageOrientationUp, as it is delivered this way from the UIImagePickerController when AllowsImageEditing is on.
// The sourceImage can be in any orientation; the crop will be transformed to match.
// The output image bounds define the final size of the image; the image will be scaled to fit (AspectFit) the bounds, and the fill color will be
// used for areas that are not covered by the scaled image.
- (UIImage *)cropImage:(UIImage *)sourceImage cropRect:(CGRect)cropRect aspectFitBounds:(CGSize)finalImageSize fillColor:(UIColor *)fillColor {
    CGImageRef sourceImageRef = sourceImage.CGImage;
    //Since the crop rect is in UIImageOrientationUp we need to transform it to match the source image.
    CGAffineTransform rectTransform = [self transformSize:sourceImage.size orientation:sourceImage.imageOrientation];
    CGRect transformedRect = CGRectApplyAffineTransform(cropRect, rectTransform);
    //Now we get just the region of the source image that we are interested in.
    CGImageRef cropRectImage = CGImageCreateWithImageInRect(sourceImageRef, transformedRect);
    //Figure out which dimension fits within our final size and calculate the aspect-correct rect that will fit in our new bounds
    CGFloat horizontalRatio = finalImageSize.width / CGImageGetWidth(cropRectImage);
    CGFloat verticalRatio = finalImageSize.height / CGImageGetHeight(cropRectImage);
    CGFloat ratio = MIN(horizontalRatio, verticalRatio); //Aspect Fit
    CGSize aspectFitSize = CGSizeMake(CGImageGetWidth(cropRectImage) * ratio, CGImageGetHeight(cropRectImage) * ratio);
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 finalImageSize.width,
                                                 finalImageSize.height,
                                                 CGImageGetBitsPerComponent(cropRectImage),
                                                 0,
                                                 CGImageGetColorSpace(cropRectImage),
                                                 CGImageGetBitmapInfo(cropRectImage));
    if (context == NULL) {
        NSLog(@"NULL CONTEXT!");
    }
    //Fill with our background color
    CGContextSetFillColorWithColor(context, fillColor.CGColor);
    CGContextFillRect(context, CGRectMake(0, 0, finalImageSize.width, finalImageSize.height));
    //We need to rotate and transform the context based on the orientation of the source image.
    CGAffineTransform contextTransform = [self transformSize:finalImageSize orientation:sourceImage.imageOrientation];
    CGContextConcatCTM(context, contextTransform);
    //Give the context a hint that we want high quality during the scale
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    //Draw our image centered vertically and horizontally in our context.
    CGContextDrawImage(context, CGRectMake((finalImageSize.width - aspectFitSize.width) / 2, (finalImageSize.height - aspectFitSize.height) / 2, aspectFitSize.width, aspectFitSize.height), cropRectImage);
    //Start cleaning up...
    CGImageRelease(cropRectImage);
    CGImageRef finalImageRef = CGBitmapContextCreateImage(context);
    UIImage *finalImage = [UIImage imageWithCGImage:finalImageRef];
    CGContextRelease(context);
    CGImageRelease(finalImageRef);
    return finalImage;
}
//Creates a transform that will correctly rotate and translate for the passed orientation.
//Based on code from niftyBean.com
- (CGAffineTransform)transformSize:(CGSize)imageSize orientation:(UIImageOrientation)orientation {
    CGAffineTransform transform = CGAffineTransformIdentity;
    switch (orientation) {
        case UIImageOrientationLeft: { // EXIF #8
            CGAffineTransform txTranslate = CGAffineTransformMakeTranslation(imageSize.height, 0.0);
            CGAffineTransform txCompound = CGAffineTransformRotate(txTranslate, M_PI_2);
            transform = txCompound;
            break;
        }
        case UIImageOrientationDown: { // EXIF #3
            CGAffineTransform txTranslate = CGAffineTransformMakeTranslation(imageSize.width, imageSize.height);
            CGAffineTransform txCompound = CGAffineTransformRotate(txTranslate, M_PI);
            transform = txCompound;
            break;
        }
        case UIImageOrientationRight: { // EXIF #6
            CGAffineTransform txTranslate = CGAffineTransformMakeTranslation(0.0, imageSize.width);
            CGAffineTransform txCompound = CGAffineTransformRotate(txTranslate, -M_PI_2);
            transform = txCompound;
            break;
        }
        case UIImageOrientationUp: // EXIF #1 - do nothing
        default: // EXIF 2,4,5,7 - ignore
            break;
    }
    return transform;
}
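The aspect-fit arithmetic inside cropImage:cropRect:aspectFitBounds:fillColor: is worth seeing in isolation: scale by the smaller of the two ratios, then centre the result so the fillColor shows in the leftover margins. A Python sketch of just that step (function name is my own):

```python
def aspect_fit(src_w, src_h, bounds_w, bounds_h):
    """Scale (src_w, src_h) uniformly so it fits inside
    (bounds_w, bounds_h), as the method above does with
    MIN(horizontalRatio, verticalRatio). Returns (origin, size) of the
    centred draw rect."""
    ratio = min(bounds_w / src_w, bounds_h / src_h)
    fit_w, fit_h = src_w * ratio, src_h * ratio
    # Centre the draw rect; the uncovered margins get the fill color.
    origin = ((bounds_w - fit_w) / 2, (bounds_h - fit_h) / 2)
    return origin, (fit_w, fit_h)
```

For a square crop in square bounds the margins vanish; for the non-square crops the UPDATE handles, one pair of margins is filled.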

iOS: Image get rotated 90 degree after saved as PNG representation data

I have researched this a lot but am not able to fix it. After taking a picture with the camera, as long as I keep the image as a UIImage it's fine, but as soon as I store it as its PNG representation, it gets rotated 90 degrees.
Following is my code and all things I tried:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    NSString *mediaType = [info valueForKey:UIImagePickerControllerMediaType];
    if ([mediaType isEqualToString:(NSString *)kUTTypeImage])
    {
        AppDelegate *delegate = (AppDelegate *)[[UIApplication sharedApplication] delegate];
        delegate.originalPhoto = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
        NSLog(@"Saving photo");
        [self saveImage];
        NSLog(@"Fixing orientation");
        delegate.fixOrientationPhoto = [self fixOrientation:[UIImage imageWithContentsOfFile:[delegate filePath:imageName]]];
        NSLog(@"Scaling photo");
        delegate.scaledAndRotatedPhoto = [self scaleAndRotateImage:[UIImage imageWithContentsOfFile:[delegate filePath:imageName]]];
    }
    [picker dismissModalViewControllerAnimated:YES];
    [picker release];
}

- (void)saveImage
{
    AppDelegate *delegate = (AppDelegate *)[[UIApplication sharedApplication] delegate];
    NSData *imageData = UIImagePNGRepresentation(delegate.originalPhoto);
    [imageData writeToFile:[delegate filePath:imageName] atomically:YES];
}
The fixOrientation and scaleAndRotateImage functions are taken from here and here respectively. They work fine and rotate the image when I apply them to a UIImage, but they don't work if I save the image as a PNG representation and then apply them.
Please refer to the following picture after executing the above functions:
Starting with iOS 4.0, when the camera takes a photo it does not rotate it before saving; it simply sets a rotation flag in the EXIF data of the JPEG. If you save a UIImage as a JPEG, it will set the rotation flag. PNGs do not support a rotation flag, so if you save a UIImage as a PNG, it will be rotated incorrectly and have no flag set to fix it. So if you want PNG images you must rotate them yourself; for that, check this link.
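For reference, the EXIF orientation values corresponding to the UIImageOrientation cases used throughout this thread (matching the "EXIF #n" comments in the crop-rect code earlier) can be tabulated; a small Python sketch, with the cases reduced to strings:

```python
# UIImageOrientation -> EXIF orientation tag value, per the
# "EXIF #n" comments used in the answers above.
UIIMAGE_TO_EXIF = {
    "up":    1,  # pixels already upright; no flag needed
    "down":  3,  # rotated 180 degrees
    "right": 6,  # the usual portrait still from the camera
    "left":  8,
}

def needs_physical_rotation_for_png(ui_orientation):
    """PNG has no rotation flag, so anything but 'up' must be
    physically redrawn upright before saving as PNG."""
    return UIIMAGE_TO_EXIF.get(ui_orientation, 1) != 1
```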
Swift 4.2
Add the following as UIImage extension,
extension UIImage {
    func fixOrientation() -> UIImage? {
        if self.imageOrientation == UIImage.Orientation.up {
            return self
        }
        UIGraphicsBeginImageContext(self.size)
        self.draw(in: CGRect(origin: .zero, size: self.size))
        let normalizedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return normalizedImage
    }
}
Example usage:
let cameraImage = //image captured from camera
let orientationFixedImage = cameraImage.fixOrientation()
Explanation:
The magic happens when you call UIImage's draw(in:) function, which redraws the image respecting the originally captured orientation settings.
Make sure to call UIGraphicsBeginImageContext with image's size before calling draw to let draw rewrite the UIImage in current context's space.
UIGraphicsGetImageFromCurrentImageContext lets you capture whatever is the result of the redrawn image, & UIGraphicsEndImageContext ends and frees up the graphics context.
Swift 3.1 version of the UIImage extension posted by Rao:
extension UIImage {
    func fixOrientation() -> UIImage {
        if self.imageOrientation == UIImageOrientation.up {
            return self
        }
        UIGraphicsBeginImageContextWithOptions(self.size, false, self.scale)
        self.draw(in: CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height))
        if let normalizedImage: UIImage = UIGraphicsGetImageFromCurrentImageContext() {
            UIGraphicsEndImageContext()
            return normalizedImage
        } else {
            UIGraphicsEndImageContext() // end the context in this branch too
            return self
        }
    }
}
Usage:
let cameraImage = //image captured from camera
let orientationFixedImage = cameraImage.fixOrientation()
Swift 4/5:
UIImageOrientation.up has been renamed to UIImage.Orientation.up
I have found the following tips to be hugely useful:
1. The natural output is landscape.
2. .width / .height ARE affected by .imageOrientation.
3. Use the short dimension rather than .width.
(1) The 'natural' output of the camera for stills IS LANDSCAPE.
This is counterintuitive; portrait is the only mode offered by UIImagePickerController etc. But you will get UIImageOrientationRight as the "normal" orientation when using the "normal" portrait camera.
(The way I remember this: the natural output for video is, of course, landscape; so stills are the same, even though the iPhone is all about portrait.)
(2) .width and .height are indeed affected by the .imageOrientation!
Be sure to check this, and try it both ways on your iPhone:
NSLog(@"fromImage.imageOrientation is %d", fromImage.imageOrientation);
NSLog(@"fromImage.size.width %f fromImage.size.height %f",
      fromImage.size.width, fromImage.size.height);
you'll see that the .height and .width swap, "even though" the real pixels are landscape.
(3) simply using the "short dimension" rather than .width, can often solve many problems
I found this to be incredibly helpful. Say you want maybe the top square of the image:
CGRect topRectOfOriginal = CGRectMake(0,0, im.size.width,im.size.width);
that actually won't work, you'll get a squished image, when the camera is ("really") being held landscape.
however if you very simply do this
float shortDimension = fminf(im.size.width, im.size.height);
CGRect topRectOfOriginal = CGRectMake(0,0, shortDimension,shortDimension);
then "everything is fixed" and you actually "do not need to worry about" the orientation flag. Again point (3) is not a cure-all, but it very often does solve all problems.
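Tip (3) can be sanity-checked with a few lines of arithmetic (a Python sketch; the 3264x2448 numbers are the 4S sizes quoted earlier in this thread):

```python
def rect_fits(rect, w, h):
    """True if rect = (x, y, rw, rh) lies inside a w x h image."""
    x, y, rw, rh = rect
    return x >= 0 and y >= 0 and x + rw <= w and y + rh <= h

def top_square_naive(w, h):
    """Top square using .width -- breaks when the raw pixels are landscape."""
    return (0, 0, w, w)

def top_square_safe(w, h):
    """Top square using the short dimension -- valid either way the
    width/height pair was swapped by .imageOrientation."""
    s = min(w, h)
    return (0, 0, s, s)
```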
Hope it helps someone save some time.
I found this code here, and it actually fixed it for me. In my app, I took a picture and saved it, and every time I loaded it, it would have the annoying rotation attached (I looked it up, and it's apparently something to do with EXIF and the way the iPhone takes and stores images). This code fixed it for me. It was originally written as an addition to a class (an extension/category; you can find the original at the link), but I used it as a simple method, as I didn't really want to make a whole class or category for just this. I only used portrait, but I think the code works for any orientation; I'm not sure though.
Rant over, here's the code:
- (UIImage *)fixOrientationForImage:(UIImage *)neededImage {
    // No-op if the orientation is already correct
    if (neededImage.imageOrientation == UIImageOrientationUp) return neededImage;
    // We need to calculate the proper transformation to make the image upright.
    // We do it in 2 steps: Rotate if Left/Right/Down, and then flip if Mirrored.
    CGAffineTransform transform = CGAffineTransformIdentity;
    switch (neededImage.imageOrientation) {
        case UIImageOrientationDown:
        case UIImageOrientationDownMirrored:
            transform = CGAffineTransformTranslate(transform, neededImage.size.width, neededImage.size.height);
            transform = CGAffineTransformRotate(transform, M_PI);
            break;
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
            transform = CGAffineTransformTranslate(transform, neededImage.size.width, 0);
            transform = CGAffineTransformRotate(transform, M_PI_2);
            break;
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            transform = CGAffineTransformTranslate(transform, 0, neededImage.size.height);
            transform = CGAffineTransformRotate(transform, -M_PI_2);
            break;
        case UIImageOrientationUp:
        case UIImageOrientationUpMirrored:
            break;
    }
    switch (neededImage.imageOrientation) {
        case UIImageOrientationUpMirrored:
        case UIImageOrientationDownMirrored:
            transform = CGAffineTransformTranslate(transform, neededImage.size.width, 0);
            transform = CGAffineTransformScale(transform, -1, 1);
            break;
        case UIImageOrientationLeftMirrored:
        case UIImageOrientationRightMirrored:
            transform = CGAffineTransformTranslate(transform, neededImage.size.height, 0);
            transform = CGAffineTransformScale(transform, -1, 1);
            break;
        case UIImageOrientationUp:
        case UIImageOrientationDown:
        case UIImageOrientationLeft:
        case UIImageOrientationRight:
            break;
    }
    // Now we draw the underlying CGImage into a new context, applying the transform
    // calculated above.
    CGContextRef ctx = CGBitmapContextCreate(NULL, neededImage.size.width, neededImage.size.height,
                                             CGImageGetBitsPerComponent(neededImage.CGImage), 0,
                                             CGImageGetColorSpace(neededImage.CGImage),
                                             CGImageGetBitmapInfo(neededImage.CGImage));
    CGContextConcatCTM(ctx, transform);
    switch (neededImage.imageOrientation) {
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            // Grr...
            CGContextDrawImage(ctx, CGRectMake(0, 0, neededImage.size.height, neededImage.size.width), neededImage.CGImage);
            break;
        default:
            CGContextDrawImage(ctx, CGRectMake(0, 0, neededImage.size.width, neededImage.size.height), neededImage.CGImage);
            break;
    }
    // And now we just create a new UIImage from the drawing context
    CGImageRef cgimg = CGBitmapContextCreateImage(ctx);
    UIImage *img = [UIImage imageWithCGImage:cgimg];
    CGContextRelease(ctx);
    CGImageRelease(cgimg);
    return img;
}
I'm not sure how useful this will be for you, but I hope it helped :)
Try this,
You can use
NSData *somenewImageData = UIImageJPEGRepresentation(newimg,1.0);
instead of
NSData *somenewImageData = UIImagePNGRepresentation(newimg);
Please try the following code:
UIImage *sourceImage = ... // Our image
CGRect selectionRect = CGRectMake(100.0, 100.0, 300.0, 400.0);
CGImageRef resultImageRef = CGImageCreateWithImageInRect(sourceImage.CGImage,
selectionRect);
UIImage *resultImage = [[UIImage alloc] initWithCGImage:resultImageRef];
And
CGRect TransformCGRectForUIImageOrientation(CGRect source, UIImageOrientation orientation, CGSize imageSize) {
    switch (orientation) {
        case UIImageOrientationLeft: { // EXIF #8
            CGAffineTransform txTranslate = CGAffineTransformMakeTranslation(imageSize.height, 0.0);
            CGAffineTransform txCompound = CGAffineTransformRotate(txTranslate, M_PI_2);
            return CGRectApplyAffineTransform(source, txCompound);
        }
        case UIImageOrientationDown: { // EXIF #3
            CGAffineTransform txTranslate = CGAffineTransformMakeTranslation(imageSize.width, imageSize.height);
            CGAffineTransform txCompound = CGAffineTransformRotate(txTranslate, M_PI);
            return CGRectApplyAffineTransform(source, txCompound);
        }
        case UIImageOrientationRight: { // EXIF #6
            CGAffineTransform txTranslate = CGAffineTransformMakeTranslation(0.0, imageSize.width);
            CGAffineTransform txCompound = CGAffineTransformRotate(txTranslate, M_PI + M_PI_2);
            return CGRectApplyAffineTransform(source, txCompound);
        }
        case UIImageOrientationUp: // EXIF #1 - do nothing
        default: // EXIF 2,4,5,7 - ignore
            return source;
    }
}
...
UIImage *sourceImage = ... // Our image
CGRect selectionRect = CGRectMake(100.0, 100.0, 300.0, 400.0);
CGRect transformedRect = TransformCGRectForUIImageOrientation(selectionRect, sourceImage.imageOrientation, sourceImage.size);
CGImageRef resultImageRef = CGImageCreateWithImageInRect(sourceImage.CGImage, transformedRect);
UIImage *resultImage = [[UIImage alloc] initWithCGImage:resultImageRef];
I have referenced the following link; have a look for more detail.
Best Regards :-)
Try this code:
NSData *imageData = UIImageJPEGRepresentation(delegate.originalPhoto, 1.0); // compression quality is in the range 0.0 to 1.0

Rotate UIImage: can't make it work

This is part of a big project, but my problem can be simplified as:
I have a simple view with an ImageView and a "Rotate" button. Whenever I press the button, the image inside the ImageView will rotate 90 degree. From much of what I've found on StackOverflow and other sites, this should work (please note that my image is a square image, which has width and height equal to 464):
- (UIImage *)getRotatedImageFromImage:(UIImage *)image :(int)rotate_index
{
    UIImage *rotatedImage;
    // Get image width, height of the bounding rectangle
    CGRect boundingRect = CGRectMake(0, 0, image.size.width, image.size.height);
    NSLog(@"width = %f, height = %f", boundingRect.size.width, boundingRect.size.height);
    NSLog(@"rotate index = %d", rotate_index);
    // Create a graphics context the size of the bounding rectangle
    UIGraphicsBeginImageContext(boundingRect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextRotateCTM(context, rotate_index * M_PI / 2);
    // Draw the image into the context
    [image drawInRect:boundingRect];
    // Get an image from the context
    rotatedImage = UIGraphicsGetImageFromCurrentImageContext();
    // Clean up
    UIGraphicsEndImageContext();
    NSLog(@"rotatedImage size = (%f, %f)", rotatedImage.size.width, rotatedImage.size.height);
    return rotatedImage;
}
- (IBAction)rotateImage
{
    NSLog(@"rotate image");
    rotateIndex++;
    if (rotateIndex >= 4)
        rotateIndex = 0;
    [imageView setImage:[self getRotatedImageFromImage:imageToSubmit :rotateIndex]];
}
But it doesn't work for some reasons. What I have is: when pressing the button, the image only appears when rotateIndex gets to 0, and the image is the same as the original image (which is expected). When rotateIndex is 1, 2, 3, the imageView displays nothing, even though the size of rotatedImage printed out is correct (i.e. 464 x 464) .
Could anyone tell me what's going wrong?
Thanks.
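One way to see what is likely going wrong: CGContextRotateCTM rotates about the context's origin (a corner), so after a quarter or half turn the entire 464x464 image lands outside the visible 464x464 area unless the CTM is first translated to the centre, as the category method earlier in this thread does before rotating. A quick Python check of where the corners go (mathematical sign convention; with the opposite sign the square leaves the box on the other axis):

```python
def rotate_corners_about_origin(w, h, quarter_turns):
    """Where the corners of a (0, 0, w, h) rect land after rotating
    about the ORIGIN by quarter turns, using (x, y) -> (-y, x)."""
    corners = [(0, 0), (w, 0), (0, h), (w, h)]
    for _ in range(quarter_turns % 4):
        corners = [(-y, x) for x, y in corners]
    return corners
```

After one quarter turn, every corner has x <= 0, i.e. the whole image sits to the left of the drawable region, which is consistent with the blank imageView for rotateIndex 1, 2, 3.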
//create rect
UIImageView *myImageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"image.png"]];
//set point of rotation
myImageView.center = CGPointMake(100.0, 100.0);
//rotate rect
myImageView.transform = CGAffineTransformMakeRotation(3.14159265 / 2); //rotation in radians
I was able to solve my problem. Here is my solution, using CGImageRef and [UIImage imageWithCGImage:scale:orientation:]:
CGImageRef cgImageRef = CGBitmapContextCreateImage(context);
UIImageOrientation imageOrientation;
switch (rotate_index) {
    case 0: imageOrientation = UIImageOrientationUp; break;
    case 1: imageOrientation = UIImageOrientationLeft; break;
    //2 more cases for rotate_index = 2 and 3
}
rotatedImage = [UIImage imageWithCGImage:cgImageRef scale:1.0 orientation:imageOrientation];

How to create a flexible frame for photo in iPhone?

I am trying to create a flexible photo frame in my iPhone app using some small pictures. CustomView inherits from UIView and overrides its setFrame: method. In setFrame:, I call [self setNeedsDisplay]. Every time I scale the photo, the frame does redisplay and change, but something does not work very well. Code and effect below:
//rotate to get a mirror image of originImage; isHorization means the mirror direction
- (UIImage *)fetchMirrorImage:(UIImage *)originImage direction:(BOOL)isHorization {
    CGSize imageSize = originImage.size;
    UIGraphicsBeginImageContext(imageSize);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGAffineTransform transform = CGAffineTransformIdentity;
    if (isHorization) {
        transform = CGAffineTransformMakeTranslation(imageSize.width, 0.0);
        transform = CGAffineTransformScale(transform, -1.0, 1.0);
    } else {
        transform = CGAffineTransformMakeTranslation(0.0, imageSize.height);
        transform = CGAffineTransformScale(transform, 1.0, -1.0);
    }
    CGContextScaleCTM(context, 1, -1);
    CGContextTranslateCTM(context, 0, -imageSize.height);
    CGContextConcatCTM(context, transform);
    CGContextDrawImage(context, CGRectMake(0, 0, imageSize.width, imageSize.height), originImage.CGImage);
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
- (UIImage *)fetchPattern:(PatternType)pattern {
    if (!self.patternImage) {
        return nil;
    }
    UIImage *tmpPattern = nil;
    CGRect fetchRect = CGRectZero;
    CGSize imageSize = self.patternImage.size;
    switch (pattern) {
        case kTopPattern:
            fetchRect = CGRectMake(self.insetSize.width, 0, imageSize.width - self.insetSize.width, self.insetSize.height);
            break;
        case kTopRightPattern:
            fetchRect = CGRectMake(0, 0, self.insetSize.width, self.insetSize.height);
            break;
        case kRightPattern:
            break;
        case kRightBottomPattern:
            break;
        case kBottomPattern:
            break;
        case kLeftBottomPattern:
            break;
        case kLeftPattern:
            fetchRect = CGRectMake(0, self.insetSize.height, self.insetSize.width, imageSize.height - self.insetSize.height);
            break;
        case kLeftTopPattern:
            fetchRect = CGRectMake(0, 0, self.insetSize.width, self.insetSize.height);
            break;
        default:
            break;
    }
    if (!CGRectIsEmpty(fetchRect)) {
        // Release the intermediate CGImage; it was leaked in the original code
        CGImageRef patternRef = CGImageCreateWithImageInRect(self.patternImage.CGImage, fetchRect);
        tmpPattern = [UIImage imageWithCGImage:patternRef];
        CGImageRelease(patternRef);
    }
    return tmpPattern;
}
// Only override drawRect: if you perform custom drawing.
// An empty implementation adversely affects performance during animation.
- (void)drawRect:(CGRect)rect {
    if (self.patternImage == nil) {
        return;
    }
    // Drawing code.
    UIImage *conLeftImage = [self fetchPattern:kLeftTopPattern];
    [conLeftImage drawAtPoint:CGPointMake(0, 0)];
    UIImage *topImage = [self fetchPattern:kTopPattern];
    [topImage drawAsPatternInRect:CGRectMake(self.insetSize.width, 0, self.frame.size.width - self.insetSize.width * 2, self.insetSize.height)];
    UIImage *leftImage = [self fetchPattern:kLeftPattern];
    [leftImage drawAsPatternInRect:CGRectMake(0, self.insetSize.height, self.insetSize.width, self.frame.size.height - self.insetSize.height * 2)];
    UIImage *conRightImage = [self fetchMirrorImage:conLeftImage direction:YES];
    [conRightImage drawAtPoint:CGPointMake(self.frame.size.width - self.insetSize.width, 0)];
    UIImage *rightImage = [self fetchMirrorImage:leftImage direction:YES];
    CGRect rectRight = CGRectMake(self.frame.size.width - self.insetSize.width, self.insetSize.height, self.insetSize.width, self.frame.size.height - self.insetSize.height * 2);
    [rightImage drawAsPatternInRect:rectRight];
    UIImage *botRightImage = [self fetchMirrorImage:conRightImage direction:NO];
    [botRightImage drawAtPoint:CGPointMake(self.frame.size.width - self.insetSize.width, self.frame.size.height - self.insetSize.height)];
    UIImage *bottomImage = [self fetchMirrorImage:topImage direction:NO];
    CGRect bottomRect = CGRectMake(self.insetSize.width, self.frame.size.height - self.insetSize.height, self.frame.size.width - self.insetSize.width * 2, self.insetSize.height);
    [bottomImage drawAsPatternInRect:bottomRect];
    UIImage *botLeftImage = [self fetchMirrorImage:conLeftImage direction:NO];
    [botLeftImage drawAtPoint:CGPointMake(0, self.frame.size.height - self.insetSize.height)];
    [super drawRect:rect];
}

- (void)setFrame:(CGRect)frame {
    [super setFrame:frame];
    [self setNeedsDisplay];
}
Yeah, I made it!
The only thing I needed to do was set the pattern phase of the current context:
CGContextSetPatternPhase(context, CGSizeMake(x, y));
where x, y is the point where I want the pattern to start being drawn.
My mistake was in how I was using the pattern: if you don't set the phase, the pattern is always tiled from the top-left of the context (0, 0), regardless of the rect you fill.
CGColorRef color = NULL;
UIImage *leftImage = [UIImage imageNamed:@"like_left.png"];
color = [UIColor colorWithPatternImage:leftImage].CGColor;
CGContextSetFillColorWithColor(context, color);
CGContextFillRect(context, CGRectMake(0, 0, FRAME_WIDTH, rect.size.height));

UIImage *topImage = [UIImage imageNamed:@"like_top.png"];
color = [UIColor colorWithPatternImage:topImage].CGColor;
CGContextSetFillColorWithColor(context, color);
CGContextFillRect(context, CGRectMake(0, 0, rect.size.width, FRAME_WIDTH));

UIImage *bottomImage = [UIImage imageNamed:@"like_bom.png"];
CGContextSetPatternPhase(context, CGSizeMake(0, rect.size.height - FRAME_WIDTH));
color = [UIColor colorWithPatternImage:bottomImage].CGColor;
CGContextSetFillColorWithColor(context, color);
CGContextFillRect(context, CGRectMake(0, rect.size.height - FRAME_WIDTH, rect.size.width, FRAME_WIDTH));

UIImage *rightImage = [UIImage imageNamed:@"like_right.png"];
CGContextSetPatternPhase(context, CGSizeMake(rect.size.width - FRAME_WIDTH, 0));
color = [UIColor colorWithPatternImage:rightImage].CGColor;
CGContextSetFillColorWithColor(context, color);
CGContextFillRect(context, CGRectMake(rect.size.width - FRAME_WIDTH, 0, FRAME_WIDTH, rect.size.height));
First, I'll say this: I don't have a real answer right now, but I have a hint that I think will be useful.
According to the tutorial posted by Larmarche, you can easily customize the borders of a UIView by overriding drawRect:. And if you scroll down to the comments below the tutorial, you can see the following one:
I discovered one other glitch. When you change the RoundRect's bounds,
it will not be updated -- so the corners sometimes get all screwy
(because they're scaled automatically).
The quick fix is to add the following to both init methods:
self.contentMode = UIViewContentModeRedraw;
Setting the contentMode to UIViewContentModeRedraw saves you the trouble of calling setNeedsDisplay yourself: you can change the frame of the UIView, instantly or animated, and the UIView will redraw automatically.
I will try to take a better look at your problem later and actually implement it.

Image captured on iPhone in portrait mode gets rotated by 90 degrees

I am uploading images taken on the iPhone in both landscape and portrait mode.
The landscape images upload fine, but the portrait ones get rotated by 90 degrees.
Other portrait images (not taken on the iPhone) work fine.
Any idea why this happens?
In your delegate:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
After you get your UIImage from the "info" dictionary for the key UIImagePickerControllerOriginalImage, you can check the image orientation via the imageOrientation property. If it is not what you want, just rotate the image before you upload it.
imageOrientation
The orientation of the receiver’s image. (read-only)
@property(nonatomic, readonly) UIImageOrientation imageOrientation
Discussion
Image orientation affects the way the image data is displayed when drawn. By default, images are displayed in the “up” orientation. If the image has associated metadata (such as EXIF information), however, this property contains the orientation indicated by that metadata. For a list of possible values for this property, see “UIImageOrientation.”
Availability
Available in iOS 2.0 and later.
Declared In
UIImage.h
UIImage Class Reference
UIImagePickerController Class Reference
UIImagePickerControllerDelegate Protocol Reference
Second option:
Allow the user to edit the image and use the edited result instead:
Set the UIImagePickerController's allowsEditing property to YES.
In your delegate, get the UIImage from the "info" dictionary for the UIImagePickerControllerEditedImage key.
Good luck.
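For reference, the numeric Orientation values that the answers here switch on are the standard EXIF/TIFF orientation codes. A plain-C sketch of that mapping (the mirrored codes 2, 4, 5 and 7 are omitted, just as the answers omit them):

```c
/* Map the common EXIF/TIFF Orientation codes to the clockwise rotation
 * (in degrees) needed to display the pixels upright. Returns -1 for the
 * mirrored codes and anything else not handled here. */
static int exif_orientation_to_cw_degrees(int orientation) {
    switch (orientation) {
        case 1: return 0;    /* already upright (UIImageOrientationUp)    */
        case 3: return 180;  /* upside down     (UIImageOrientationDown)  */
        case 6: return 90;   /* rotate right    (UIImageOrientationRight) */
        case 8: return 270;  /* rotate left     (UIImageOrientationLeft)  */
        default: return -1;
    }
}
```

This is why the code below rotates left twice for case 3, right once for case 6, and left once for case 8.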
I've wrestled with this problem quite a bit. I was working on a project where I needed to actually rotate the image, as in re-arrange the pixels, so that I could upload it.
First you need to determine the orientation, then strip off that pesky metadata, then rotate the image.
So put this inside your didFinishPickingMediaWithInfo: delegate method:
UIImage *img = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
if ([info objectForKey:@"UIImagePickerControllerMediaMetadata"]) {
    // Rotate based on the EXIF orientation
    switch ([[[info objectForKey:@"UIImagePickerControllerMediaMetadata"] objectForKey:@"Orientation"] intValue]) {
        case 3:
            // Rotate the image to the left twice.
            img = [UIImage imageWithCGImage:[img CGImage]]; // Strip off that pesky metadata!
            img = [rotateImage rotateImage:[rotateImage rotateImage:img withRotationType:rotateLeft] withRotationType:rotateLeft];
            break;
        case 6:
            img = [UIImage imageWithCGImage:[img CGImage]];
            img = [rotateImage rotateImage:img withRotationType:rotateRight];
            break;
        case 8:
            img = [UIImage imageWithCGImage:[img CGImage]];
            img = [rotateImage rotateImage:img withRotationType:rotateLeft];
            break;
        default:
            break;
    }
}
And here is the rotation function:
// Assumes a degrees-to-radians helper is defined elsewhere, e.g.:
// #define radians(deg) ((deg) * M_PI / 180.0)
+ (UIImage *)rotateImage:(UIImage *)image withRotationType:(rotationType)rotation {
    CGImageRef imageRef = [image CGImage];
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
    CGColorSpaceRef colorSpaceInfo = CGColorSpaceCreateDeviceRGB();
    if (alphaInfo == kCGImageAlphaNone)
        alphaInfo = kCGImageAlphaNoneSkipLast;
    // Width and height are swapped because the output is rotated 90 degrees
    CGContextRef bitmap = CGBitmapContextCreate(NULL, image.size.height, image.size.width, CGImageGetBitsPerComponent(imageRef), 4 * image.size.height /*CGImageGetBytesPerRow(imageRef)*/, colorSpaceInfo, alphaInfo);
    CGColorSpaceRelease(colorSpaceInfo);
    if (rotation == rotateLeft) {
        CGContextTranslateCTM(bitmap, image.size.height, 0);
        CGContextRotateCTM(bitmap, radians(90));
    } else {
        CGContextTranslateCTM(bitmap, 0, image.size.width);
        CGContextRotateCTM(bitmap, radians(-90));
    }
    CGContextDrawImage(bitmap, CGRectMake(0, 0, image.size.width, image.size.height), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage *result = [UIImage imageWithCGImage:ref];
    CGImageRelease(ref);
    CGContextRelease(bitmap);
    return result;
}
The img variable now contains a properly rotated image.
Ok, a cleaner version would be:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *img = [info valueForKey:UIImagePickerControllerOriginalImage];
    img = [UIImage imageWithCGImage:[img CGImage]];
    UIImageOrientation requiredOrientation = UIImageOrientationUp;
    switch ([[[info objectForKey:@"UIImagePickerControllerMediaMetadata"] objectForKey:@"Orientation"] intValue])
    {
        case 3:
            requiredOrientation = UIImageOrientationDown;
            break;
        case 6:
            requiredOrientation = UIImageOrientationRight;
            break;
        case 8:
            requiredOrientation = UIImageOrientationLeft;
            break;
        default:
            break;
    }
    UIImage *portraitImage = [[UIImage alloc] initWithCGImage:img.CGImage scale:1.0 orientation:requiredOrientation];
}