How to fix distorted thumbnail images retrieved with AVAssetImageGenerator? (iPhone)

The camera roll in the Camera app shows the correct aspect ratio for all images. But when I retrieve thumbnails with AVAssetImageGenerator, some of them look very distorted, roughly 10 times wider than they are tall. It is always the same images that come out distorted while the others look correct. All of them were taken with the Camera app, and inside its camera roll they all look fine, including the ones that AVAssetImageGenerator distorts.
I retrieve the thumbnails like this:
if (!_imageGenerator) {
    self.imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:self.avAsset];
    _imageGenerator.appliesPreferredTrackTransform = YES;
}
_imageGenerator.maximumSize = maxSize;
__weak typeof(self) weakSelf = self;
[_imageGenerator generateCGImagesAsynchronouslyForTimes:[NSArray arrayWithObject:[NSValue valueWithCMTime:kCMTimeZero]]
                                      completionHandler:^(CMTime requestedTime,
                                                          CGImageRef image,
                                                          CMTime actualTime,
                                                          AVAssetImageGeneratorResult result,
                                                          NSError *error) {
    if (result == AVAssetImageGeneratorSucceeded) {
        weakSelf.thumbImage = [UIImage imageWithCGImage:image];
        dispatch_async(dispatch_get_main_queue(), ^{
            NSLog(@"Loaded thumbnail");
            completionHandler(weakSelf.thumbImage);
        });
    }
    else if (result == AVAssetImageGeneratorFailed) {
        // retry
    }
}];
Tested with iOS 7 on an iPhone 5. I also tested times other than zero; the same videos always deliver distorted thumbnails while others are fine. What could the problem be, or is there a workaround?
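One thing worth checking (this is a diagnostic sketch, not a confirmed fix) is whether the affected videos report an odd naturalSize or preferredTransform on their video track, since the generator derives the thumbnail geometry from those values:
AVAssetTrack *track = [[self.avAsset tracksWithMediaType:AVMediaTypeVideo] firstObject];
CGSize natural = track.naturalSize;
CGSize transformed = CGSizeApplyAffineTransform(natural, track.preferredTransform);
// A distorted asset may report a transformed size whose aspect ratio
// differs wildly from what the Camera app shows.
NSLog(@"natural %@ -> transformed %@", NSStringFromCGSize(natural),
      NSStringFromCGSize(CGSizeMake(fabs(transformed.width), fabs(transformed.height))));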

Related

Issues taking pictures with ZBarReader in landscape orientation and flat iPhone position

I'm using ZBar to detect codes, but I also want to allow taking pictures from the same screen. I noticed odd behaviour when taking pictures in landscape orientation: if I hold the phone upright in landscape the image comes out fine, but if I lay the iPhone flat while still in landscape, the picture comes out upside down. I checked the UIImage metadata, and the image orientation has different values even though the device orientation is the same in both cases.
Any idea why this happens?
My solution is to change the image orientation metadata in the wrong cases:
-(void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    [....]
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
    if (image) {
        // This fixes a bug in ZBarReader when taking a picture in landscape
        // orientation with the device in the flat position.
        NSLog(@"Image: %d, Device: %d", (int)image.imageOrientation, (int)self.interfaceOrientation);
        UIImageOrientation imgOrientation = image.imageOrientation;
        UIInterfaceOrientation interfaceOrientation = self.interfaceOrientation;
        if (interfaceOrientation == UIInterfaceOrientationLandscapeLeft && imgOrientation == UIImageOrientationUp) {
            image = [UIImage imageWithCGImage:[image CGImage] scale:1.0 orientation:UIImageOrientationDown];
        } else if (interfaceOrientation == UIInterfaceOrientationLandscapeRight && imgOrientation == UIImageOrientationDown) {
            image = [UIImage imageWithCGImage:[image CGImage] scale:1.0 orientation:UIImageOrientationUp];
        }
        [self hideScanner];
        [self performSegueWithIdentifier:@"mySegue" sender:image];
    }
}

iOS 6 Face Detection Not Working

I have used the following code to detect faces on iOS 5:
CIImage *cIImage = [CIImage imageWithCGImage:image.CGImage];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                              forKey:CIDetectorAccuracy]];
NSArray *features = [detector featuresInImage:cIImage];
if ([features count] == 0)
{
    // Retry with an orientation hint (EXIF orientation 6, a 90-degree rotated image)
    NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6]
                                                             forKey:CIDetectorImageOrientation];
    features = [detector featuresInImage:cIImage options:imageOptions];
}
With this code I am able to detect faces on iOS 5, but recently we upgraded to Xcode 4.4 and iOS 6, and now face detection is not working properly.
What changes do I need to make to detect faces on iOS 6?
Any kind of help is highly appreciated.
I have noticed that face detection on iOS 6 is not as good as on iOS 5.
Try it with a selection of images. You will likely find that it works OK on iOS 6 with a lot of the images, but not all of them.
I have been testing the same set of images on:
1. The simulator running iOS 6.
2. An iPhone 5 (iOS 6).
3. An iPhone 3GS (iOS 5).
The 3GS detects more faces than the other two options.
Here's the code; it works on both, just not as well on iOS 6:
- (void)analyseFaces:(UIImage *)facePicture {
    // Create a CIImage of the face picture
    CIImage *image = [CIImage imageWithCGImage:facePicture.CGImage];
    // Create a face detector with high accuracy
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                                  forKey:CIDetectorAccuracy]];
    // Create an array of detected faces
    NSArray *features = [detector featuresInImage:image];
    // Read through the faces and add each face image to the facesFound mutable array (an ivar)
    for (CIFaceFeature *faceFeature in features)
    {
        CGSize parentSize = facePicture.size;
        CGRect origRect = faceFeature.bounds;
        CGRect flipRect = origRect;
        // Core Image uses a bottom-left origin, so flip the y coordinate for UIKit
        flipRect.origin.y = parentSize.height - (origRect.origin.y + origRect.size.height);
        CGImageRef imageRef = CGImageCreateWithImageInRect([facePicture CGImage], flipRect);
        UIImage *faceImage = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef);
        if (faceImage)
            [facesFound addObject:faceImage];
    }
}
I hope this is helpful.
Add CoreImage.framework to your project, then:
A first, simple version that runs detection synchronously:
-(void)faceDetector
{
    // Load the picture for face detection
    UIImageView *image = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"facedetectionpic.jpg"]];
    // Draw the face detection image
    [self.window addSubview:image];
    // Execute the method used to mark faces
    [self markFaces:image];
}
And a version that runs detection in the background and fixes the coordinate system:
-(void)faceDetector
{
    // Load the picture for face detection
    UIImageView *image = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"facedetectionpic.jpg"]];
    // Draw the face detection image
    [self.window addSubview:image];
    // Execute the method used to mark faces in the background
    [self performSelectorInBackground:@selector(markFaces:) withObject:image];
    // Flip the image on the y-axis to match the coordinate system used by Core Image
    [image setTransform:CGAffineTransformMakeScale(1, -1)];
    // Flip the entire window to make everything right side up
    [self.window setTransform:CGAffineTransformMakeScale(1, -1)];
}
Also check these two links:
http://maniacdev.com/2011/11/tutorial-easy-face-detection-with-core-image-in-ios-5/
http://i.ndigo.com.br/2012/01/ios-facial-recognition/
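The markFaces: method called above isn't shown in this answer; here is a hypothetical sketch of it along the lines of the first linked tutorial (the detector options and the red-box marking are assumptions, not part of the original answer; the layer properties require QuartzCore):
-(void)markFaces:(UIImageView *)facePicture
{
    // Build a CIImage from the displayed picture
    CIImage *image = [CIImage imageWithCGImage:facePicture.image.CGImage];
    // Create the face detector with high accuracy
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                                  forKey:CIDetectorAccuracy]];
    NSArray *features = [detector featuresInImage:image];
    for (CIFaceFeature *faceFeature in features) {
        // Draw a red box around each detected face
        UIView *faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
        faceView.layer.borderWidth = 1;
        faceView.layer.borderColor = [UIColor redColor].CGColor;
        [self.window addSubview:faceView];
    }
}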

iOS PNG Image rotated 90 degrees

In the iOS application I'm writing, I deal with PNGs because I work with the alpha channel. For some reason, I can load a PNG into my imageView just fine, but when it comes time to either copy the image out of my application (onto the pasteboard) or save the image to my camera roll, the image rotates 90 degrees.
I've searched everywhere on this, and one of the things I learned is that if I used JPEGs, I supposedly wouldn't have this problem, due to the EXIF information.
My app has full copy/paste functionality, and here's the kicker (I'll write this in steps so it is easier to follow):
1. Go to my camera roll and copy an image
2. Go into my app and press "Paste"; the image pastes just fine, and I can do that all day
3. Click the copy function I implemented, then click "Paste": the image pastes but is rotated.
I am 100% sure my copy and paste code isn't what is wrong here, because if I go back to step 2 above and click "save", the photo saves to my library but it is rotated 90 degrees!
What is even more strange is that it seems to work fine with images downloaded from the internet, but it's very hit or miss with images I took with the phone myself. It works for some but not others.
Does anybody have any thoughts on this? Any possible workarounds I can use? I'm pretty confident in the code, given that it works for about 75% of my images. I can post the code upon request, though.
For those who want a Swift solution, create an extension of UIImage and add the following method:
func correctlyOrientedImage() -> UIImage {
    // Already upright: nothing to do
    if self.imageOrientation == .up {
        return self
    }
    // Redraw into a new context so the pixel data itself is reoriented
    UIGraphicsBeginImageContextWithOptions(size, false, scale)
    draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
    let normalizedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return normalizedImage ?? self
}
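Then, assuming image is the UIImage you are about to copy or save, export image.correctlyOrientedImage() instead of the original, and the receiving app will see the pixels in the expected orientation.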
If you're having trouble due to the existing image imageOrientation property, you can construct an otherwise identical image with different orientation like this:
CGImageRef imageRef = [sourceImage CGImage];
UIImage *rotatedImage = [UIImage imageWithCGImage:imageRef scale:1.0 orientation:UIImageOrientationUp];
You may need to experiment with just what orientation to set on your replacement images, possibly switching based on the orientation you started with.
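For example, a sketch of that experiment (these particular mappings are assumptions to verify against your own images, not known-correct values):
UIImageOrientation target;
switch (sourceImage.imageOrientation) {
    case UIImageOrientationRight: target = UIImageOrientationUp; break;
    case UIImageOrientationDown:  target = UIImageOrientationLeft; break;
    default: target = sourceImage.imageOrientation; break;
}
UIImage *replacement = [UIImage imageWithCGImage:sourceImage.CGImage
                                           scale:sourceImage.scale
                                     orientation:target];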
Also keep an eye on your memory usage. Photography apps often run out, and this will double your storage per picture, until you release the source image.
It took a few days, but I finally figured it out thanks to the answer @Dondragmer posted. I figured I'd post my full solution.
So basically I had to write a method to intelligently auto-rotate my images. The downside is that I have to call this method everywhere throughout my code, and it is kind of processor-intensive, especially on mobile devices, but the plus side is that I can take, copy, paste, and save images and they all rotate properly. Here's the code I ended up using (the method isn't 100% complete yet; I still need to fix memory leaks and whatnot).
I ended up learning that the very first time an image was inserted into my application (whether because the user pressed "take image", "paste image", or "select image"), for some reason it inserted just fine without auto-rotating. At that point, I stored whatever the rotation value was in a global variable called imageOrientationWhenAddedToScreen. This made my life easier, because when it came time to manipulate the image and save it out of the program, I simply checked this cached global variable to determine whether I needed to rotate the image.
- (UIImage *)rotateImageAppropriately:(UIImage *)imageToRotate {
    // This method will properly rotate our image; we need to make sure
    // we call it pretty much everywhere.
    CGImageRef imageRef = [imageToRotate CGImage];
    // Default to the unrotated image so we never return an uninitialized value
    UIImage *properlyRotatedImage = imageToRotate;
    if (imageOrientationWhenAddedToScreen == 3) {
        // We need to rotate the image back to a 3 (UIImageOrientationRight)
        properlyRotatedImage = [UIImage imageWithCGImage:imageRef scale:1.0 orientation:(UIImageOrientation)3];
    } else if (imageOrientationWhenAddedToScreen == 1) {
        // We need to rotate the image back to a 1 (UIImageOrientationDown)
        properlyRotatedImage = [UIImage imageWithCGImage:imageRef scale:1.0 orientation:(UIImageOrientation)1];
    }
    return properlyRotatedImage;
}
I am still not 100% sure why Apple has this weird image-rotation behavior. (Try this: turn your phone upside down and take a picture; you'll notice that the final picture turns out right side up. Perhaps this is why Apple built it this way?)
I know I spent a great deal of time figuring this out, so I hope it helps other people!
This "weird rotation" behavior is really not that weird at all. It is smart, and by smart I mean memory efficient. When you rotate an iOS device the camera hardware rotates with it. When you take a picture that picture will be captured however the camera is oriented. The UIImage is able to use this raw picture data without copying by just keeping track of the orientation it should be in. When you use UIImagePNGRepresentation() you lose this orientation data and get a PNG of the underlying image as it was taken by the camera. To fix this instead of rotating you can tell the original image to draw itself to a new context and get the properly oriented UIImage from that context.
UIImage *image = ...;
// Have the image draw itself in the correct orientation if necessary
if (!(image.imageOrientation == UIImageOrientationUp ||
      image.imageOrientation == UIImageOrientationUpMirrored))
{
    CGSize imgsize = image.size;
    UIGraphicsBeginImageContext(imgsize);
    [image drawInRect:CGRectMake(0.0, 0.0, imgsize.width, imgsize.height)];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
NSData *png = UIImagePNGRepresentation(image);
Here is one more way to achieve that:
@IBAction func rightRotateAction(sender: AnyObject) {
    let imgToRotate = CIImage(CGImage: sourceImageView.image!.CGImage)
    // Rotate a quarter turn
    let transform = CGAffineTransformMakeRotation(CGFloat(M_PI_2))
    let rotatedImage = imgToRotate.imageByApplyingTransform(transform)
    let extent = rotatedImage.extent()
    let contex = CIContext(options: [kCIContextUseSoftwareRenderer: false])
    let cgImage = contex.createCGImage(rotatedImage, fromRect: extent)
    adjustedImage = UIImage(CGImage: cgImage)!
    UIView.transitionWithView(sourceImageView, duration: 0.5, options: UIViewAnimationOptions.TransitionCrossDissolve, animations: {
        self.sourceImageView.image = self.adjustedImage
    }, completion: nil)
}
You can use Image I/O to save a PNG image to a file (or to NSMutableData) while preserving the orientation of the image. In the example below, I save the PNG image to a file at path.
- (BOOL)savePngFile:(UIImage *)image toPath:(NSString *)path {
    NSData *data = UIImagePNGRepresentation(image);
    int exifOrientation = [UIImage cc_iOSOrientationToExifOrientation:image.imageOrientation];
    NSDictionary *metadata = @{(__bridge id)kCGImagePropertyOrientation: @(exifOrientation)};
    NSURL *url = [NSURL fileURLWithPath:path];
    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)data, NULL);
    if (!source) {
        return NO;
    }
    CFStringRef UTI = CGImageSourceGetType(source);
    CGImageDestinationRef destination = CGImageDestinationCreateWithURL((__bridge CFURLRef)url, UTI, 1, NULL);
    if (!destination) {
        CFRelease(source);
        return NO;
    }
    CGImageDestinationAddImageFromSource(destination, source, 0, (__bridge CFDictionaryRef)metadata);
    BOOL success = CGImageDestinationFinalize(destination);
    CFRelease(destination);
    CFRelease(source);
    return success;
}
cc_iOSOrientationToExifOrientation: is a method in a UIImage category:
+ (int)cc_iOSOrientationToExifOrientation:(UIImageOrientation)iOSOrientation {
    int exifOrientation = -1;
    switch (iOSOrientation) {
        case UIImageOrientationUp:
            exifOrientation = 1;
            break;
        case UIImageOrientationDown:
            exifOrientation = 3;
            break;
        case UIImageOrientationLeft:
            exifOrientation = 8;
            break;
        case UIImageOrientationRight:
            exifOrientation = 6;
            break;
        case UIImageOrientationUpMirrored:
            exifOrientation = 2;
            break;
        case UIImageOrientationDownMirrored:
            exifOrientation = 4;
            break;
        case UIImageOrientationLeftMirrored:
            exifOrientation = 5;
            break;
        case UIImageOrientationRightMirrored:
            exifOrientation = 7;
            break;
        default:
            exifOrientation = -1;
    }
    return exifOrientation;
}
You can alternatively save the image to NSData by using CGImageDestinationCreateWithData, passing NSMutableData instead of an NSURL.
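A minimal sketch of that variant, reusing the source and metadata variables from savePngFile:toPath: above (the output name is an assumption):
NSMutableData *output = [NSMutableData data];
CGImageDestinationRef dataDestination = CGImageDestinationCreateWithData((__bridge CFMutableDataRef)output, UTI, 1, NULL);
// Same flow as before: copy the image across with the orientation metadata
CGImageDestinationAddImageFromSource(dataDestination, source, 0, (__bridge CFDictionaryRef)metadata);
BOOL ok = CGImageDestinationFinalize(dataDestination);
CFRelease(dataDestination);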

Resizing an ALAsset photo takes a long time. Any way around this?

I have a blog application that I'm making. To compose a new entry, there is a "Compose Entry" view where the user can select a photo and input text. For the photo, there is a UIImageView placeholder and upon clicking this, a custom ImagePicker comes up where the user can select up to 3 photos.
This is where the problem comes in. I don't need the full resolution photo from the ALAsset, but at the same time, the thumbnail is too low resolution for me to use.
So what I'm doing at this point is resizing the full-resolution photos down. However, this takes some time, especially when resizing up to 3 of them.
Here is a code snippet to show what I'm doing:
ALAssetRepresentation *rep = [[dict objectForKey:@"assetObject"] defaultRepresentation];
CGImageRef iref = [rep fullResolutionImage];
if (iref)
{
    CGRect screenBounds = [[UIScreen mainScreen] bounds];
    UIImage *previewImage;
    UIImage *largeImage;
    if ([rep orientation] == ALAssetOrientationUp) // landscape image
    {
        largeImage = [[UIImage imageWithCGImage:iref] scaledToWidth:screenBounds.size.width];
        previewImage = [[UIImage imageWithCGImage:iref] scaledToWidth:300];
    }
    else // portrait image
    {
        previewImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:300] imageRotatedByDegrees:90];
        largeImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:screenBounds.size.height] imageRotatedByDegrees:90];
    }
}
Here, from the fullresolution image, I am creating two images: a preview image (max 300px on the long end) and a large image (max 960px or 640px on the long end). The preview image is what is shown on the app itself in the "new entry" preview. The large image is what will be used when uploading to the server.
The actual code I'm using to resize was grabbed from somewhere on here:
-(UIImage *)scaledToWidth:(float)i_width
{
    float oldWidth = self.size.width;
    float scaleFactor = i_width / oldWidth;
    float newHeight = self.size.height * scaleFactor;
    float newWidth = oldWidth * scaleFactor;
    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight));
    [self drawInRect:CGRectMake(0, 0, newWidth, newHeight)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Am I doing things wrong here? As it stands, the ALAsset thumbnail is too low clarity, but at the same time, I don't need the entire full resolution. It's all working now, but the resizing takes some time. Is this just a necessary consequence?
Thanks!
It is a necessary consequence of resizing your image that it will take some amount of time. How much depends on the device, the resolution of the asset, and the format of the asset, and you don't have any control over that. What you do have control over is where the resizing takes place. I suspect that right now you are resizing the image on your main thread, which will cause the UI to grind to a halt during the resize. Process enough images and your app will appear hung long enough that the user will go off and do something else (perhaps to check out competing apps in the App Store).
What you should be doing is performing the resizing off the main thread. With iOS 4 and later, this has become much simpler because you can use Grand Central Dispatch to do the resizing. You can take your original block of code from above and wrap it in a block like this:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
    ALAssetRepresentation *rep = [[dict objectForKey:@"assetObject"] defaultRepresentation];
    CGImageRef iref = [rep fullResolutionImage];
    if (iref)
    {
        CGRect screenBounds = [[UIScreen mainScreen] bounds];
        __block UIImage *previewImage;
        __block UIImage *largeImage;
        if ([rep orientation] == ALAssetOrientationUp) // landscape image
        {
            largeImage = [[UIImage imageWithCGImage:iref] scaledToWidth:screenBounds.size.width];
            previewImage = [[UIImage imageWithCGImage:iref] scaledToWidth:300];
        }
        else // portrait image
        {
            previewImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:300] imageRotatedByDegrees:90];
            largeImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:screenBounds.size.height] imageRotatedByDegrees:90];
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            // Do whatever you need to do on the main thread here once your image is resized.
            // This is going to be things like setting the UIImageViews to show your new images
            // or adding new views to your view hierarchy.
        });
    }
});
You'll have to think about things a little differently this way. For example, what used to be a single step is now multiple steps. Code that used to run after this will now run before the image resize is complete, or before you actually do anything with the images, so you need to make sure you don't have any dependencies on those images or you'll likely crash.
A late answer, but for those stumbling on this question: you might want to consider using fullScreenImage rather than fullResolutionImage of the defaultRepresentation. It's usually much smaller, but still large enough to maintain good quality for larger thumbnails.
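For example (a minimal sketch, where asset stands for your ALAsset; note that fullScreenImage is documented as already adjusted for orientation, so the manual rotation above becomes unnecessary):
ALAssetRepresentation *rep = [asset defaultRepresentation];
CGImageRef iref = [rep fullScreenImage];
if (iref) {
    // Roughly screen-sized and already rotated correctly
    UIImage *screenSizedImage = [UIImage imageWithCGImage:iref];
}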

Rendering MKMapView to UIImage with real resolution

I am using this function for rendering an MKMapView instance into an image:
@implementation UIView (Ext)
- (UIImage *)renderToImage
{
    UIGraphicsBeginImageContext(self.frame.size);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
@end
This works fine, but on an iPhone 4 the rendered image doesn't have the same resolution as the view does on the device. On the device the map view renders at 640x920 quality, while the rendered image comes out at 320x460.
Then I doubled the size passed to UIGraphicsBeginImageContext(), but that only filled the top-left part of the image.
Question: Is there any way to render the map to an image at the full 640x920 resolution?
Try using UIGraphicsBeginImageContextWithOptions instead of UIGraphicsBeginImageContext:
UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
See QA1703 for more details. It says:
Note: Starting from iOS 4, UIGraphicsBeginImageContextWithOptions allows you to provide a scale factor. A scale factor of zero sets it to the scale factor of the device's main screen. This enables you to get the sharpest, highest-resolution snapshot of the display, including a Retina display.
iOS 7 introduced a new way to generate screenshots of an MKMapView. It is now possible to use the new MKMapSnapshotter API as follows:
MKMapView *mapView = [..your mapview..];
MKMapSnapshotOptions *options = [[MKMapSnapshotOptions alloc] init];
options.region = mapView.region;
options.mapType = MKMapTypeStandard;
options.showsBuildings = NO;
options.showsPointsOfInterest = NO;
options.size = CGSizeMake(1000, 500);
MKMapSnapshotter *snapshotter = [[MKMapSnapshotter alloc] initWithOptions:options];
[snapshotter startWithQueue:dispatch_get_main_queue() completionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
    if (error) {
        NSLog(@"An error occurred: %@", error);
    } else {
        [UIImagePNGRepresentation(snapshot.image) writeToFile:@"/Users/<yourAccountName>/map.png" atomically:YES];
    }
}];
Note that overlays and annotations are currently not rendered into the snapshot; you have to render them onto the resulting image yourself afterwards. The provided MKMapSnapshot object has a handy helper method to map coordinates to image points:
CGPoint point = [snapshot pointForCoordinate:locationCoordinate2D];
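For example, a hedged sketch of stamping a pin onto the snapshot (pinImage and annotation are placeholders of your own, not part of the API):
UIGraphicsBeginImageContextWithOptions(snapshot.image.size, YES, snapshot.image.scale);
[snapshot.image drawAtPoint:CGPointZero];
// Convert the annotation's coordinate into the snapshot image's space
CGPoint point = [snapshot pointForCoordinate:annotation.coordinate];
[pinImage drawAtPoint:point];
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();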