I am trying to crop an image taken from AVCaptureStillImageOutput, but I am unable to crop it to the correct rectangle.
My camera video preview is a 320x458 frame, and the cropping rectangle sits inside this preview frame with coordinates and size CGRectMake(60, 20, 200, 420).
After taking the picture, I receive the image from:
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];
UIImage *finalImage = [self cropImage:image];
Afterwards I try to crop the actual captured image, which is 1080x1920, with the function below, but the clipping is wayward and the resulting image falls well outside the intended rectangle.
- (UIImage *)cropImage:(UIImage *)oldImage {
CGRect rect = CGRectMake(60, 20, 200, 420);
CGSize boundSize = CGSizeMake(320, 458);
float widthFactor = rect.size.width * (oldImage.size.width/boundSize.width);
float heightFactor = rect.size.height * (oldImage.size.height/boundSize.height);
float factorX = rect.origin.x * (oldImage.size.width/boundSize.width);
float factorY = rect.origin.y * (oldImage.size.height/boundSize.height);
CGRect factoredRect = CGRectMake(factorX,factorY,widthFactor,heightFactor);
UIImage *croppedImage = [UIImage imageWithCGImage:CGImageCreateWithImageInRect([oldImage CGImage], factoredRect) scale:oldImage.scale orientation:oldImage.imageOrientation];
return croppedImage;
}
In the attached picture, I am trying to crop the coffee mug, but what I get is not the correct cropped image!
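For reference, one plausible cause of a wayward crop here: the 320x458 preview and the 1080x1920 capture have different aspect ratios, so if the preview layer uses AVLayerVideoGravityResizeAspectFill, the preview shows only a centered slice of the capture, and independent x/y scale factors (as in the code above) point at the wrong pixels. A plain-C sketch of the aspect-fill mapping (the `Rect` struct and function name are illustrative stand-ins, not actual Core Graphics API):

```c
typedef struct { double x, y, w, h; } Rect;

/* Map a rect given in preview-layer coordinates to pixel coordinates of
 * the captured image, assuming AVLayerVideoGravityResizeAspectFill:
 * the image is scaled by a single factor (the larger of the two) and
 * centered, so the overflow on each axis is hidden equally on both sides. */
Rect mapPreviewRectToImage(Rect r, double previewW, double previewH,
                           double imageW, double imageH) {
    double scale = previewW / imageW;
    double alt = previewH / imageH;
    if (alt > scale) scale = alt;        /* aspect-fill uses the larger factor */
    /* overflow hidden outside the preview, in preview points, per side */
    double offsetX = (imageW * scale - previewW) / 2.0;
    double offsetY = (imageH * scale - previewH) / 2.0;
    Rect out = { (r.x + offsetX) / scale, (r.y + offsetY) / scale,
                 r.w / scale, r.h / scale };
    return out;
}
```

With the numbers above, the preview rect (60, 20, 200, 420) maps to roughly (202.5, 254.6, 675, 1417.5) in image pixels.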
I think you should use the PEPhotoCropEditor sample code for cropping. It's easy to use. You can download the source code from https://www.cocoacontrols.com/controls/pephotocropeditor
I am also having a hard time with this. I think what you have to do is write the image to a file and then crop that image, or try to rule out an orientation issue.
Possible Duplicate:
Cropping a UIImage
I have seen many tutorials about cropping images and have been trying my luck, but with no success.
I need to crop an image from AVFoundation. Since the image taken from the camera is rotated left from portrait mode, I also need to rotate it right, and my x and y are swapped. The problem is that when I pass the frame where I would like the image to reside, there seems to be no correlation between the size of the image and the rectangle.
The code is:
@property (weak, nonatomic) IBOutlet UIView *videoPreviewView;
...
...
int width = videoPreviewView.frame.size.width;
int height = videoPreviewView.frame.size.height;
int x = videoPreviewView.frame.origin.x;
int y = videoPreviewView.frame.origin.y;
CGRect croprect = CGRectMake(y, x,height,width);
// Draw new image in current graphics context
CGImageRef imageRef = CGImageCreateWithImageInRect([sourceImage CGImage], croprect);
// Create new cropped UIImage
UIImage *resultImage = [UIImage imageWithCGImage:imageRef scale:[sourceImage scale] orientation:UIImageOrientationRight];
When I print the size of the frame I get:
(4.5,69.5,310,310)
and the image size is:
(720,1280)
How can I perform the cropping at any image resolution?
I tried multiplying the values by image.scale; however, the scale is 1.00.
Try this one. It will definitely help you out: https://github.com/barrettj/BJImageCropper
To resize an image, try this:
UIImage *image = [UIImage imageNamed:@"image.png"];
CGSize itemSize = CGSizeMake((your Width), (your Height));
UIGraphicsBeginImageContext(itemSize);
CGRect imageRect = CGRectMake(0.0, 0.0, itemSize.width, itemSize.height);
[image drawInRect:imageRect];
UIImage * yourCroppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
To crop:
// Create new image context
UIGraphicsBeginImageContext(SIZE);
// Offset the draw rect so the region starting at (x, y) lands at the context origin
CGRect newRect = CGRectMake(-x, -y, ORIGINALIMAGE.size.width, ORIGINALIMAGE.size.height);
// Draw the image into the context; only the SIZE area is kept
[ORIGINALIMAGE drawInRect:newRect];
// Saving the image, ending image context
UIImage * newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
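Mechanically, cropping by drawing at an offset amounts to copying a sub-block of pixel rows. That can be simulated in plain C on a raw buffer (an illustrative helper, not any UIKit API):

```c
#include <string.h>

/* Crop a w-pixels-wide single-channel image stored row-major into a
 * cw*ch destination buffer, taking the region whose top-left corner
 * is (cx, cy) -- the same effect as drawing the image at (-cx, -cy). */
void cropBuffer(const unsigned char *src, int w,
                unsigned char *dst, int cx, int cy, int cw, int ch) {
    for (int row = 0; row < ch; row++) {
        memcpy(dst + row * cw, src + (cy + row) * w + cx, (size_t)cw);
    }
}
```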
I want to crop the image according to the red view. There are some points to keep in mind:
1. The image can be scrolled and zoomed.
2. The red image view is created dynamically according to the image.
UIImage* whole = [UIImage imageNamed:@"9.png"]; // I use this image
CGImageRef cgImg = CGImageCreateWithImageInRect(whole.CGImage, CGRectMake(x, y, incX, incY));
UIImage* part = [UIImage imageWithCGImage:cgImg];
I just want to know how to find the values of
x, y, incX, incY
Thanks...
Scenario 1: Normal (Not Scrolled)
Expected Result (Ignore Black Border On Top and Bottom)
Scenario 2: Scrolled
Expected Result (Ignore Black Border On Top and Bottom)
Scenario 3: Zoomed
And same Expected Result for the Zoomed One.
In all cases I want the respective Images Inside the Red Rectangle.
For all of these I am using this code:
-(void)cropClicked:(UIButton*)sender
{
float zoomScale = 1.0 / [mainScrollView zoomScale];
CGRect rect;
rect.size.width = [redImageView bounds].size.width * zoomScale ;
rect.size.height = [redImageView bounds].size.height * zoomScale ;
rect.origin.x = ([mainScrollView bounds].origin.x + redImageView.frame.origin.x );
rect.origin.y = ([mainScrollView bounds].origin.y + redImageView.frame.origin.y );
CGImageRef cr = CGImageCreateWithImageInRect([[mainImageView image] CGImage], rect);
UIImage *cropped = [UIImage imageWithCGImage:cr];
mainImageView.image=cropped;
UIImageWriteToSavedPhotosAlbum(cropped, nil, nil, nil);
CGImageRelease(cr);
}
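For what it's worth, the origin handling above looks suspect: in a zoomed UIScrollView the scroll offset usually comes from contentOffset (bounds.origin is typically zero), and the origins, not just the sizes, must be divided by the zoom scale. A plain-C sketch of that mapping (hypothetical `Rect`/`Point` structs in place of `CGRect`/`CGPoint`):

```c
typedef struct { double x, y; } Point;
typedef struct { double x, y, w, h; } Rect;

/* Map the red overlay's frame (in scroll-view coordinates) to the
 * coordinate space of the full-size content shown in the scroll view.
 * contentOffset accounts for scrolling; zoomScale for pinch zoom. */
Rect overlayRectInContent(Rect redFrame, Point contentOffset, double zoomScale) {
    Rect r = { (contentOffset.x + redFrame.x) / zoomScale,
               (contentOffset.y + redFrame.y) / zoomScale,
               redFrame.w / zoomScale,
               redFrame.h / zoomScale };
    return r;
}
```

For example, a 200x200 overlay at (50, 50) with contentOffset (100, 40) and zoomScale 2.0 maps to (75, 45, 100, 100) in content coordinates.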
Well, as @HDdeveloper rightly said, you can use CGImageCreateWithImageInRect. It takes two parameters: the first is the whole image, the second is the frame you want to crop (so probably the frame of your red image view).
The problem arises if you're targeting both retina and non-retina devices: if your whole image is an @2x image and you want to crop it with the red image view's frame, you have to double the frame to get the right region.
So you can try with this method:
//Define the screen type:
#define isRetinaDisplay [[UIScreen mainScreen] respondsToSelector:@selector(displayLinkWithTarget:selector:)] && ([UIScreen mainScreen].scale == 2.0)
- (UIImage*)cropInnerImage:(CGRect)rect {
//Take a screenshot of the whole image
UIGraphicsBeginImageContextWithOptions(self.view.frame.size, NO, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* ret = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGRect rct;
//Double the frame if you're in retina display
if (isRetinaDisplay) {
rct=CGRectMake(rect.origin.x*2, rect.origin.y*2, rect.size.width*2, rect.size.height*2);
} else {
rct=rect;
}
//Crop the image from the screenshot
CGImageRef imageRef = CGImageCreateWithImageInRect([ret CGImage], rct);
UIImage *result = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
//Save and open the result images with Preview.app
[UIImagePNGRepresentation(result) writeToFile: @"/tmp/testCrop.png" atomically: YES];
system("open /tmp/testCrop.png");
[UIImagePNGRepresentation(ret) writeToFile: @"/tmp/testRet.png" atomically: YES];
system("open /tmp/testRet.png");
//
return result;
}
Here the rect parameter must be your red image view's frame, and self.view.frame must be equal to the whole image view's frame. You can skip the last four lines; they are just there to inspect what you're cropping on your Mac.
PS: I use this method to crop an image and set it as the background of a UIView; this is the reason I have to double the frame.
You can use CGImageRef.
Pass your rect within the whole image, then call this on the button click:
UIImage* whole = [UIImage imageNamed:@"9.png"]; // I use this image
CGImageRef cgImg = CGImageCreateWithImageInRect(whole.CGImage, CGRectMake(x, y, incX, incY));
UIImage* part = [UIImage imageWithCGImage:cgImg];
I have a large image (2048*2048 px) that is shown as 320*320 on the iPhone screen. I want to do this:
In my app the user can open a large image (e.g. 2048*2048), which is shown as 320*320 on the iPhone screen with a rectangle over it. The user can move the rectangle anywhere within the on-screen image, e.g. rectangle (100, 100, 300, 200); I then want to clip the original-sized image within that rectangle's area, in scale.
I tried many approaches, e.g.:
UIImageView *originalImageView = [[UIImageView alloc] initWithImage:originalImage];
CGRect rect = CGRectMake(100, 100, 300, 200);
UIImage *cropImage = [UIImage imageWithCGImage:CGImageCreateWithImageInRect([originalImageView.image CGImage], rect)];
But the cropImage I get is just a 300*200 image; it is not scaled properly.
How about doing this? It will preserve the original image quality.
CGSize bounds = CGSizeMake(320, 320); // considering the image is shown at 320*320
CGRect rect = CGRectMake(100, 100, 220, 200); //rectangle area to be cropped
float widthFactor = rect.size.width * (originalImage.size.width/bounds.size.width);
float heightFactor = rect.size.height * (originalImage.size.height/bounds.size.height);
float factorX = rect.origin.x * (originalImage.size.width/bounds.size.width);
float factorY = rect.origin.y * (originalImage.size.height/bounds.size.height);
CGRect factoredRect = CGRectMake(factorX,factorY,widthFactor,heightFactor);
UIImage *cropImage = [UIImage imageWithCGImage:CGImageCreateWithImageInRect([originalImage CGImage], factoredRect)];
And most importantly, if you want to crop the image that imagePickerController returns, this can be done with the built-in option below:
imagePickerController.allowsEditing = YES;
First resize the image to 320*320 using this method:
+(UIImage *)resizeImage:(UIImage *)image width:(float)width height:(float)height
{
CGSize newSize;
newSize.width = width;
newSize.height = height;
UIGraphicsBeginImageContext(newSize);
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
Now set the resized image on the image view:
UIImage *resizeImage = [YourControllerName resizeImage:originalImage width:320 height:320];
UIImageView *originalImageView = [[UIImageView alloc] initWithImage:resizeImage];
You can now crop
CGRect rect = CGRectMake(100, 100, 300, 200);
UIImage *cropImage = [UIImage imageWithCGImage:CGImageCreateWithImageInRect([originalImageView.image CGImage], rect)];
Why not calculate the scale factor (e.g. originalImageWidth/smallImageWidth)?
If the rectangle is (100, 100, 300, 200) in your small image, you should clip your large image at (100*factor, 100*factor, 300*factor, 200*factor).
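That scale-factor arithmetic can be sketched in plain C (hypothetical `Rect` struct; this assumes the on-screen image preserves the original aspect ratio so a single factor serves both axes):

```c
typedef struct { double x, y, w, h; } Rect;

/* Scale a crop rect chosen on the on-screen image up to the
 * coordinate space of the original full-size image. */
Rect scaleCropRect(Rect r, double originalWidth, double displayedWidth) {
    double factor = originalWidth / displayedWidth;   /* e.g. 2048/320 = 6.4 */
    Rect out = { r.x * factor, r.y * factor, r.w * factor, r.h * factor };
    return out;
}
```

For a 2048-wide original shown at 320 points, the factor is 6.4, so (100, 100, 300, 200) becomes (640, 640, 1920, 1280).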
I'm writing an iPhone App which uses AVFoundation to take a photo and crop it.
The App is similar to a QR code reader: It uses a AVCaptureVideoPreviewLayer with an overlay.
The overlay has a square. I want to crop the image so the cropped image is exactly what the user has placed inside the square.
The preview layer has gravity AVLayerVideoGravityResizeAspectFill.
It looks like what the camera actually captures is not exactly what the user sees in the preview layer. This means that I need to move from the preview coordinate system to the captured-image coordinate system so I can crop the image. For this I think I need the following parameters:
1. The ratio between the view size and the captured image size.
2. Information about which part of the captured image matches what is displayed in the preview layer.
Does anybody know how I can obtain this info, or if there is a different approach to crop the image.
(P.S. Capturing a screenshot of the preview is not an option, as I understand it might result in the app being rejected.)
Thank you in advance
Hope this meets your requirements
- (UIImage *)cropImage:(UIImage *)image to:(CGRect)cropRect andScaleTo:(CGSize)size {
UIGraphicsBeginImageContext(size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGImageRef subImage = CGImageCreateWithImageInRect([image CGImage], cropRect);
NSLog(@"---------");
NSLog(@"*cropRect.origin.y=%f",cropRect.origin.y);
NSLog(@"*cropRect.origin.x=%f",cropRect.origin.x);
NSLog(@"*cropRect.size.width=%f",cropRect.size.width);
NSLog(@"*cropRect.size.height=%f",cropRect.size.height);
NSLog(@"---------");
NSLog(@"*size.width=%f",size.width);
NSLog(@"*size.height=%f",size.height);
CGRect myRect = CGRectMake(0.0f, 0.0f, size.width, size.height);
CGContextScaleCTM(context, 1.0f, -1.0f);
CGContextTranslateCTM(context, 0.0f, -size.height);
CGContextDrawImage(context, myRect, subImage);
UIImage* croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGImageRelease(subImage);
return croppedImage;
}
You can use this API from AVFoundation: AVMakeRectWithAspectRatioInsideRect.
It returns the crop region for an image within a bounding region; the Apple documentation is here:
https://developer.apple.com/library/ios/Documentation/AVFoundation/Reference/AVFoundation_Functions/Reference/reference.html
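The geometry that function computes is the largest centered rect of a given aspect ratio that fits inside a bounding rect. A plain-C sketch of the same idea (hypothetical `Rect` struct, not the real AVFoundation signature):

```c
typedef struct { double x, y, w, h; } Rect;

/* Largest rect with the given aspect ratio that fits inside bounds,
 * centered -- the same idea as AVMakeRectWithAspectRatioInsideRect. */
Rect aspectFitRect(double aspectW, double aspectH, Rect bounds) {
    double scale = bounds.w / aspectW;
    double alt = bounds.h / aspectH;
    if (alt < scale) scale = alt;        /* aspect-fit uses the smaller factor */
    double w = aspectW * scale, h = aspectH * scale;
    Rect r = { bounds.x + (bounds.w - w) / 2.0,
               bounds.y + (bounds.h - h) / 2.0, w, h };
    return r;
}
```

Fitting a 1080x1920 aspect ratio into a (0, 0, 320, 458) bounds gives roughly (31.2, 0, 257.6, 458).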
I think it is as simple as this:
- (CGRect)computeCropRect:(CGImageRef)cgImageRef
{
static CGFloat cgWidth = 0;
static CGFloat cgHeight = 0;
static CGFloat viewWidth = 320;
if(cgWidth == 0)
cgWidth = CGImageGetWidth(cgImageRef);
if(cgHeight == 0)
cgHeight = CGImageGetHeight(cgImageRef);
CGRect cropRect;
// Only based on width
cropRect.origin.x = cropRect.origin.y = kMargin * cgWidth / viewWidth;
cropRect.size.width = cropRect.size.height = kSquareSize * cgWidth / viewWidth;
return cropRect;
}
where kMargin and kSquareSize (20 points and 280 points in my case) are the margin and the scanning area, respectively.
Then perform the cropping:
CGRect cropRect = [self computeCropRect:cgCapturedImageRef];
CGImageRef croppedImageRef = CGImageCreateWithImageInRect(cgCapturedImageRef, cropRect);
I'm currently using two images for a menu that I've built. I was using this code a while ago on non-retina displays and it was working fine; with a retina display I'm having some issues with CGImageRef producing the right masked image on a button press for the background display. I've tried importing the retina images using the image naming conventions. The images are supplied using:
[UIImage imageNamed:@"filename.png"]
I've provided both a standard and a retina image, named filename.png and filename@2x.png.
The problem comes when choosing the mask for the selected area. The code works fine with lower-resolution resources and a high-resolution main resource, but when I use
CGImageCreateWithImageInRect
and specify the rect I want to create the image within, the image's scale is increased: the main button's resolution is fine, but the image that is returned and superimposed on the button press is not at the correct resolution, being oddly scaled to twice the pixel density, which looks terrible.
I've tried both
UIImage *img2 = [UIImage imageWithCGImage:cgImg scale:[img scale] orientation:[img imageOrientation]];
UIImage *scaledImage = [UIImage imageWithCGImage:[img2 CGImage] scale:4.0 orientation:UIImageOrientationUp];
and I seem to be getting nowhere when I take the image and drawInRect:(selected rect).
I have been tearing my hair out for about two hours now and can't seem to find a decent solution. Does anyone have any ideas?
I figured out what needs to be done in this instance. I created a helper method that takes the scale of the image into account when building the pressed-state image, scaling the CGRect by the image scale like so:
- (UIImage *)imageFromImage:(UIImage *)image inRect:(CGRect)rect {
rect.size.height = rect.size.height * [image scale];
rect.size.width = rect.size.width * [image scale];
rect.origin.x = rect.origin.x * [image scale];
rect.origin.y = rect.origin.y * [image scale];
CGImageRef sourceImageRef = [image CGImage];
CGImageRef newImageRef = CGImageCreateWithImageInRect(sourceImageRef, rect);
UIImage *newImage = [UIImage imageWithCGImage:newImageRef scale:[image scale] orientation:[image imageOrientation]];
CGImageRelease(newImageRef);
return newImage;
}
That should fix things for anyone having similar issues with rect mapping.