I am developing a camera application for iPhone/iPad.
We are using an overlay to display the camera controls on top of the viewfinder.
Currently I am trying to save the zoomed image. We are able to zoom the image in the viewfinder, but when we save the image it is saved at its original size.
To solve this we are scaling the zoomed image using the following code:
UIImageView *v = [[UIImageView alloc] initWithImage:image];
UIGraphicsBeginImageContext(v.bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGContextScaleCTM(context, zoomvalue, zoomvalue);
// -drawRect: should never be called directly; render the view's layer instead
[v.layer renderInContext:context];
CGContextRestoreGState(context);
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Using the above code we are able to get the zoomed image.
However, we also need to crop the center portion of the scaled image and save that, and we have not been able to get this working.
Kindly help.
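For reference, one way a center crop could be taken from the scaled image is with CGImageCreateWithImageInRect; this is only a sketch, and cropSize here is a hypothetical target size (it ignores screen scale and orientation):

```objc
CGSize cropSize = CGSizeMake(320, 480); // hypothetical target size
CGRect cropRect = CGRectMake((image.size.width  - cropSize.width)  / 2.0,
                             (image.size.height - cropSize.height) / 2.0,
                             cropSize.width, cropSize.height);
// CGImageCreateWithImageInRect returns an owned CGImageRef that must be released
CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef);
```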
If you are using UIImagePickerController to capture the image and you zoom the image there, then you can get the edited image from the picker's info dictionary via the UIImagePickerControllerEditedImage key, something like this:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    img = [[info objectForKey:UIImagePickerControllerEditedImage] retain];
    [picker dismissModalViewControllerAnimated:YES];
}
Related
I am using a UIImagePickerController with the property allowsEditing set to YES.
When the user finishes picking an image I want to know whether the user edited the image he selected (e.g. whether he scaled it). This method:
UIImage *editedImage = [info objectForKey:UIImagePickerControllerEditedImage];
always returns an object, even if the user left the picture as it was. Is there any way to check whether the user edited the image? For example, can I check whether UIImagePickerControllerEditedImage and UIImagePickerControllerOriginalImage differ somehow?
Try this in didFinishPickingMediaWithInfo: (I am not sure about it):
UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
UIImage *editedimage = [info objectForKey:UIImagePickerControllerEditedImage];
if ([UIImagePNGRepresentation(image) isEqualToData:UIImagePNGRepresentation(editedimage)]) {
    // not edited
} else {
    // edited
}
UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
UIImage *editedimage = [info objectForKey:UIImagePickerControllerEditedImage];
// UIImage has no length property; just check whether the key was present
if (editedimage != nil) {
    // then got the edited image
}
Could you not just get and compare the CGSize of the image?
BOOL sizeChanged = FALSE;
// get the size of the original image
CGSize originalSize = [originalImage size];
// after the user has finished editing, get the size of the edited image
CGSize currentSize = [editedImage size];
// if the dimensions differ, the image has been edited
if (originalSize.width != currentSize.width ||
    originalSize.height != currentSize.height) {
    sizeChanged = TRUE;
} else {
    sizeChanged = FALSE;
}
Check this out:
http://developer.apple.com/library/ios/#documentation/uikit/reference/UIImagePickerControllerDelegate_Protocol/UIImagePickerControllerDelegate/UIImagePickerControllerDelegate.html#//apple_ref/doc/uid/TP40007069
This is the documentation for the image picker delegate. As you can see, when the user picks an image this is called:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
info is a dictionary that contains data about what happened and what was picked. If allowsEditing is set to YES, then info contains both the original image and the edited one. Check the link I gave you for the
Editing Information Keys
There are a bunch of constants there that can give you the data you seek!
Start from here to see the whole mechanics:
http://developer.apple.com/library/ios/documentation/uikit/reference/UIImagePickerController_Class/UIImagePickerController/UIImagePickerController.html#//apple_ref/occ/instp/UIImagePickerController/allowsEditing
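As a sketch of pulling the documented editing keys out of the info dictionary (these key names are the real UIKit constants):

```objc
UIImage *originalImage = [info objectForKey:UIImagePickerControllerOriginalImage];
UIImage *editedImage   = [info objectForKey:UIImagePickerControllerEditedImage];
// the crop rect applied to the original image, wrapped in an NSValue
NSValue *cropRectValue = [info objectForKey:UIImagePickerControllerCropRect];
CGRect cropRect = [cropRectValue CGRectValue];
```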
I know that this is a very old question with no activity in a while, but this is what comes up in a Google search, and as far as I can tell, the question remains unanswered satisfactorily.
Anyway, the way to tell if the image has been edited or not is this:
In didFinishPickingMediaWithInfo: you can inspect the width of the CropRect and the width of the original image. If CropRect.width == originalImage.width + 1, then the image has not been edited. The reason this holds is that to edit the image the user must pinch and zoom, which scales the image and changes the size of the CropRect. Simply moving the image around has no effect, as it bounces back unless it is scaled.
NSValue *pickerCropRect = info[UIImagePickerControllerCropRect];
CGRect theCropRect = pickerCropRect.CGRectValue;
UIImage *originalImage = info[UIImagePickerControllerOriginalImage];
CGSize originalImageSize = originalImage.size;
if (theCropRect.size.width == originalImageSize.width + 1) {
    NSLog(@"Image was NOT edited.");
} else {
    NSLog(@"Image was edited.");
}
As far as I can tell this works in iOS 9 on the 6S and 6+. I see no real reason it shouldn't work elsewhere.
I'm attempting to implement an iOS camera view that takes pictures that are square in shape (similar to Instagram). My code appears below. The first part, where the frame height is set to be equal to the frame width, is working as expected and the user is given a view that is square. The problem occurs later when I attempt to apply the frame (which is a CGRect property) to the image data using CGImageCreateWithImageInRect. I pass the frame rect to this method with the image. But the results are not cropped to be square. Instead the image retains the original default dimensions from the iOS camera. Can someone please tell me what I've done wrong? My understanding from the Apple documentation is that CGImageCreateWithImageInRect should select an image area of shape Rect from some starting x/y coordinate. But that doesn't seem to be happening.
//Set the frame size to be square shaped
UIView *view = imagePicker.view;
frame = view.frame;
frame.size.height = frame.size.width;
view.frame = frame;
//Crop the image to the frame dimensions using CGImageCreateWithImageInRect
- (void)imagePickerController:(UIImagePickerController *)picker
        didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    [self.popoverController dismissPopoverAnimated:true];
    NSString *mediaType = [info objectForKey:UIImagePickerControllerMediaType];
    [self dismissModalViewControllerAnimated:YES];
    if ([mediaType isEqualToString:(NSString *)kUTTypeImage]) {
        UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
        // CGImageCreateWithImageInRect returns a CGImageRef, not a UIImage,
        // so it must be wrapped (and released), not bridge-cast
        CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, frame);
        croppedImage = [UIImage imageWithCGImage:croppedRef];
        CGImageRelease(croppedRef);
        imageView.image = croppedImage;
    }
    else if ([mediaType isEqualToString:(NSString *)kUTTypeMovie]) {
        // Code here to support video if enabled
    }
}
You are doing it right. The only thing is that I think you are setting the frame property to the same size as the picker view, so the final size is the same as the original size.
Try setting the frame smaller than pickerView.view.frame, not equal to it.
Check this out
Cropping an UIImage
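One thing worth checking (a sketch, with illustrative names): CGImageCreateWithImageInRect works in the image's pixel coordinates, while the view's frame is in screen points, so the crop rect usually needs to be scaled up to the image's dimensions first:

```objc
// Map a crop rect given in view points to image pixels.
// viewSize is assumed to be the size of the on-screen view the rect was measured in.
CGFloat sx = image.size.width  / viewSize.width;
CGFloat sy = image.size.height / viewSize.height;
CGRect pixelRect = CGRectMake(frame.origin.x * sx,
                              frame.origin.y * sy,
                              frame.size.width * sx,
                              frame.size.height * sy);
CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, pixelRect);
```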
You are setting the frame wrong.
I suggest you take a look at this sample code from Apple, on how to create what you are trying to:
https://developer.apple.com/library/mac/#samplecode/VirtualScanner/Listings/Sources_VirtualScanner_m.html#//apple_ref/doc/uid/DTS40011006-Sources_VirtualScanner_m-DontLinkElementID_9
Look at the:
- (ICAError)startScanningWithParams:(ICD_ScannerStartPB*)pb
function
I'm trying to merge two UIImageViews. The first UIImageView (theimageView) is the background, and the second UIImageView (Birdie) is an image overlaying the first UIImageView. You can load the first UIImageView from a map or take a picture. After this you can drag, rotate and scale the second UIImageView over the first one. I want the output (saved image) to look the same as what I see on the screen.
I got that working, but the output has borders, and the quality and size are poor. I want the size to be the same as that of the chosen image, and the quality to be good. Also, I get a crash if I save a second time right after the first.
Here is my current code:
// save the actual design to the photo library
- (void)captureScreen {
    UIImage *myImage = [self addImage:theImageView ToImage:Birdie];
    [myImage retain];
    UIImageWriteToSavedPhotosAlbum(myImage, self, @selector(imageSavedToPhotosAlbum:didFinishSavingWithError:contextInfo:), self);
}

- (UIImage *)addImage:(UIImage *)theimageView toImage:(UIImage *)Birdie {
    // note: width and height were swapped here in the original code
    CGSize size = CGSizeMake(theimageView.size.width, theimageView.size.height);
    UIGraphicsBeginImageContext(size);
    CGPoint pointImg1 = CGPointMake(0, 0);
    [theimageView drawAtPoint:pointImg1];
    CGPoint pointImg2 = CGPointMake(0, 0);
    [Birdie drawAtPoint:pointImg2];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
But I only get errors with this code!
Thanks in advance!
Take a look at Drawing a PNG Image Into a Graphics Context for Blending Mode Manipulation
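A minimal sketch of the draw-into-context approach that link describes, sized to the background image so the output matches its dimensions and no borders appear (background, overlay, and overlayFrame are illustrative names, not from the question):

```objc
// Use the background's size and scale so resolution is preserved
UIGraphicsBeginImageContextWithOptions(background.size, NO, background.scale);
[background drawInRect:CGRectMake(0, 0, background.size.width, background.size.height)];
// overlayFrame: where the overlay sits, expressed in the background's coordinates
[overlay drawInRect:overlayFrame];
UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```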
I am working on an application where I pick images from the iPhone library using a UIImagePickerController. I am using the following code to pick an image:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    NSLog(@"image picked:");
    UIImage *image = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
    //UIImageWriteToSavedPhotosAlbum(image, self, nil, nil);
    CGRect cropRect;
    cropRect = [[info valueForKey:@"UIImagePickerControllerCropRect"] CGRectValue];
}
The above method works perfectly when I capture an image from the camera, but when I pick an image from the iPhone library, the cropRect gives me incorrect values. It is always set to x = 43 or greater, even if I pick the rectangle from the extreme left of the screen. As a result I get a vertical black strip on the left side of the image.
Thanks in advance.
In my application I am using the following code to crop the captured image:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
#ifdef _DEBUG
    NSLog(@"frmSkinImage-imagePickerController-Start");
#endif
    imageView.image = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
    //=======================================
    UIImage *image = imageView.image;
    CGRect cropRect = CGRectMake(100, 100, 125, 128);
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropRect);
    [imageView setImage:[UIImage imageWithCGImage:imageRef]];
    CGImageRelease(imageRef);
    //===================================================
    //imgglobal = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
    // for saving image to photo album
    //UIImageWriteToSavedPhotosAlbum(imageView.image, self, @selector(image:didFinishSavingWithError:contextInfo:), self);
    [picker dismissModalViewControllerAnimated:YES];
#ifdef _DEBUG
    NSLog(@"frmSkinImage-imagePickerController-End");
#endif
}
But my problem is that when I use the camera to take the photo, cropping rotates the image 90 degrees to the right, whereas when I use the photo library it works perfectly.
Can you examine my code above and see where I am going wrong?
A CGImage is a UIImage without the metadata, so it loses the orientation information. I'd suggest that you get the orientation of the original via [UIImage imageOrientation], store it, and then apply it to the final image.
If that doesn't work, try applying a CGAffineTransformMakeRotation(M_PI_2) (i.e. 90 degrees in radians) to the final image, according to the orientation of the original.
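Sketching the first suggestion: UIImage can re-attach the original orientation (and scale) when wrapping the cropped CGImage, via +imageWithCGImage:scale:orientation:

```objc
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropRect);
// carry the original image's scale and orientation over to the cropped result
UIImage *cropped = [UIImage imageWithCGImage:imageRef
                                       scale:image.scale
                                 orientation:image.imageOrientation];
CGImageRelease(imageRef);
```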
The fastest and easiest way is to have a look at NYXImagesKit. With its UIImage+Resizing category you can crop images. From the documentation:
UIImage+Resizing
This category can be used to crop or scale images.
Cropping
You can crop your image in 9 different ways:
Top left, Top center, Top right, Bottom left, Bottom center, Bottom right,
Left center, Right center, Center
UIImage *cropped = [myImage cropToSize:(CGSize){width, height}
                             usingMode:NYXCropModeCenter];
NYXCropMode is an enum type which can be found in the header file; it is used to represent the 9 modes above.