iOS 6 Face Detection Not Working - iPhone

I have used the following code to detect faces on iOS 5:
CIImage *cIImage = [CIImage imageWithCGImage:image.CGImage];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];
NSArray *features = [detector featuresInImage:cIImage];
if ([features count] == 0)
{
    // Retry with an explicit EXIF orientation hint (6 = needs a 90° CW rotation to be upright)
    NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6] forKey:CIDetectorImageOrientation];
    features = [detector featuresInImage:cIImage options:imageOptions];
}
With this code I was able to detect faces on iOS 5, but we recently upgraded to Xcode 4.4 and iOS 6, and now face detection no longer works properly.
What changes do I need to make to detect faces on iOS 6?
Any kind of help is highly appreciated.
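One thing worth checking first: CIDetectorImageOrientation takes an EXIF orientation value (1-8), and hard-coding 6 only covers a single rotation. As a hedged sketch (the helper name is mine), you can derive the value from the UIImage's own imageOrientation so the detector is told how the pixels are actually rotated:
// Sketch: map UIImageOrientation to the EXIF orientation value (1-8)
// expected by CIDetectorImageOrientation. This is the standard EXIF
// mapping; verify it against your own test images.
static int exifOrientationForImage(UIImage *image)
{
    switch (image.imageOrientation) {
        case UIImageOrientationUp:            return 1;
        case UIImageOrientationDown:          return 3;
        case UIImageOrientationLeft:          return 8;
        case UIImageOrientationRight:         return 6;
        case UIImageOrientationUpMirrored:    return 2;
        case UIImageOrientationDownMirrored:  return 4;
        case UIImageOrientationLeftMirrored:  return 5;
        case UIImageOrientationRightMirrored: return 7;
    }
    return 1;
}
// Usage: pass the derived orientation on the first detection attempt.
NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:exifOrientationForImage(image)] forKey:CIDetectorImageOrientation];
features = [detector featuresInImage:cIImage options:imageOptions];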

I have noticed that face detection in iOS 6 is not as good as in iOS 5.
Try it with a selection of images. You will likely find that it works fine on iOS 6 with most of them, but not all.
I have been testing the same set of images in:
1. The simulator running iOS 6.
2. iPhone 5 (iOS 6).
3. iPhone 3GS (iOS 5).
The 3GS detects more faces than the other two options.
Here's the code; it works on both versions, just not as well on iOS 6:
- (void)analyseFaces:(UIImage *)facePicture {
    // Create a CIImage from the face picture
    CIImage *image = [CIImage imageWithCGImage:facePicture.CGImage];
    // Create a face detector with high accuracy
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];
    // Detect the faces
    NSArray *features = [detector featuresInImage:image];
    // Crop each detected face and add it to the facesFound mutable array
    for (CIFaceFeature *faceFeature in features)
    {
        CGSize parentSize = facePicture.size;
        CGRect origRect = faceFeature.bounds;
        // Core Image uses a bottom-left origin; flip the rect into UIKit's top-left coordinates
        CGRect flipRect = origRect;
        flipRect.origin.y = parentSize.height - (origRect.origin.y + origRect.size.height);
        CGImageRef imageRef = CGImageCreateWithImageInRect([facePicture CGImage], flipRect);
        UIImage *faceImage = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef);
        if (faceImage)
            [facesFound addObject:faceImage];
    }
}
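Usage is straightforward; facesFound is assumed here to be an NSMutableArray property initialized before the call (the property and image name are mine):
// Sketch: run the analysis, then inspect the cropped face images.
self.facesFound = [NSMutableArray array];
[self analyseFaces:[UIImage imageNamed:@"group_photo.jpg"]];
NSLog(@"Found %lu face(s)", (unsigned long)[self.facesFound count]);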

I hope this is helpful for you.
Add CoreImage.framework to your target, then:
-(void)faceDetector
{
    // Load the picture for face detection
    UIImageView *image = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"facedetectionpic.jpg"]];
    // Draw the face detection image
    [self.window addSubview:image];
    // Execute the markFaces method in the background
    [self performSelectorInBackground:@selector(markFaces:) withObject:image];
    // Flip the image on the y-axis to match the coordinate system used by Core Image
    [image setTransform:CGAffineTransformMakeScale(1, -1)];
    // Flip the entire window to make everything right side up
    [self.window setTransform:CGAffineTransformMakeScale(1, -1)];
}
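markFaces: itself isn't shown above; here is a minimal sketch of what it might look like, following the linked tutorial's approach (the red-border styling is illustrative, and QuartzCore must be imported for the layer properties):
// Sketch: detect faces in the image view's picture and outline each one.
// The boxes are added in Core Image's flipped coordinate space, which the
// transforms in faceDetector above compensate for.
-(void)markFaces:(UIImageView *)facePicture
{
    CIImage *image = [CIImage imageWithCGImage:facePicture.image.CGImage];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];
    NSArray *features = [detector featuresInImage:image];
    for (CIFaceFeature *faceFeature in features) {
        UIView *faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
        faceView.layer.borderWidth = 1.0f;
        faceView.layer.borderColor = [UIColor redColor].CGColor;
        [self.window addSubview:faceView];
    }
}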
Also check these two links:
http://maniacdev.com/2011/11/tutorial-easy-face-detection-with-core-image-in-ios-5/
http://i.ndigo.com.br/2012/01/ios-facial-recognition/

Related

MKMapView to UIImage iOS 7

The code to render an MKMapView to a UIImage no longer works in iOS 7. It returns an empty image with nothing but the word "Legal" at the bottom and a black compass in the top right; the map itself is missing. Below is my code:
UIGraphicsBeginImageContext(map.bounds.size);
[map.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
map is an IBOutlet that points to an MKMapView. Is there any way to render an MKMapView correctly in iOS 7?
From this SO post:
You can use MKMapSnapshotter and grab the image from the resulting MKMapSnapshot. See the discussion of it in the WWDC 2013 session video, Putting Map Kit in Perspective.
For example:
MKMapSnapshotOptions *options = [[MKMapSnapshotOptions alloc] init];
options.region = self.mapView.region;
options.scale = [UIScreen mainScreen].scale;
options.size = self.mapView.frame.size;
MKMapSnapshotter *snapshotter = [[MKMapSnapshotter alloc] initWithOptions:options];
[snapshotter startWithCompletionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
    UIImage *image = snapshot.image;
    NSData *data = UIImagePNGRepresentation(image);
    [data writeToFile:[self snapshotFilename] atomically:YES];
}];
Having said that, the renderInContext solution still works for me. There are notes that on iOS 7 you should only do this on the main queue, but it still seems to work. Still, MKMapSnapshotter seems like the more appropriate solution for iOS 7.
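If you stick with renderInContext, a hedged sketch that keeps it on the main queue and respects the screen scale (self.map mirrors the map outlet from the question; see the UIGraphicsBeginImageContextWithOptions discussion further down):
// Sketch: render on the main queue with the device's scale factor.
dispatch_async(dispatch_get_main_queue(), ^{
    UIGraphicsBeginImageContextWithOptions(self.map.bounds.size, NO, 0.0);
    [self.map.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // use viewImage ...
});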

How to define UIImageView size to match UIImage resolution?

I have a scenario in which I am getting images from a web service, and the images come in different resolutions. My requirement is to find the resolution of each image and use it to size the UIImageView, so I can prevent my images from getting blurred.
For example, if the image resolution is 326 pixels/inch, the image view should be sized so that the image can be shown fully without any blur.
UIImage *img = [UIImage imageNamed:@"foo.png"];
CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
UIImageView *imgView = [[UIImageView alloc] initWithFrame:rect];
[imgView setImage:img];
An image's size IS its resolution.
Your problem might be the Retina display!
Check for a Retina display, and if present halve the UIImageView's width/height (so that each point of the view is backed by four image pixels on a Retina display).
How to check for retina display:
https://stackoverflow.com/a/7607087/894671
How to check image size (without actually loading image in memory):
NSString *mFullPath = [[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject] stringByAppendingPathComponent:@"imageName.png"];
NSURL *imageFileURL = [NSURL fileURLWithPath:mFullPath];
CGImageSourceRef imageSource = CGImageSourceCreateWithURL((__bridge CFURLRef)imageFileURL, NULL);
if (imageSource == NULL)
{
    // Error loading image ...
}
// kCGImageSourceShouldCache = NO keeps the decoded pixel data out of memory
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:NO], (__bridge NSString *)kCGImageSourceShouldCache, nil];
CFDictionaryRef imageProperties = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, (__bridge CFDictionaryRef)options);
NSNumber *mImgWidth;
NSNumber *mImgHeight;
if (imageProperties)
{
    // loaded image width
    mImgWidth = (__bridge NSNumber *)CFDictionaryGetValue(imageProperties, kCGImagePropertyPixelWidth);
    // loaded image height
    mImgHeight = (__bridge NSNumber *)CFDictionaryGetValue(imageProperties, kCGImagePropertyPixelHeight);
    CFRelease(imageProperties);
}
if (imageSource != NULL)
{
    CFRelease(imageSource);
}
So - for example:
UIImageView *mImgView = [[UIImageView alloc] init];
[mImgView setImage:[UIImage imageNamed:@"imageName.png"]];
[[self view] addSubview:mImgView];
if ([UIScreen instancesRespondToSelector:@selector(scale)])
{
    CGFloat scale = [[UIScreen mainScreen] scale];
    if (scale > 1.0)
    {
        // iPhone Retina screen: halve the pixel dimensions to get points
        [mImgView setFrame:CGRectMake(0, 0, [mImgWidth intValue] / 2, [mImgHeight intValue] / 2)];
    }
    else
    {
        // non-Retina iPhone screen
        [mImgView setFrame:CGRectMake(0, 0, [mImgWidth intValue], [mImgHeight intValue])];
    }
}
Hope that helps!
You can get the image size using the following code. First calculate the downloaded image's size, then make the image view match it.
UIImage *yourImage = [UIImage imageNamed:@"image.png"];
CGFloat width = yourImage.size.width;
CGFloat height = yourImage.size.height;
Hope this will help you.
UIImage *oldimage = [UIImage imageWithContentsOfFile:imagePath]; // or you can set from url with NSURL
CGSize imgSize = [oldimage size];
imgview.frame = CGRectMake(10, 10, imgSize.width,imgSize.height);
[imgview setImage:oldimage];
100% working ....
To solve this problem, we need to take the device's display resolution into account.
For example, say you have an image with a resolution of 326 ppi, which matches the iPhone 4, iPhone 4S, and iPod touch 4th gen. Then you can simply use the solutions suggested by @Nit and @Peko. But for other devices (or for images with a different resolution on those devices) you will need some math to calculate a size that displays well.
Now suppose you have a 260 ppi image (with dimensions W x H) and you wish to display it on an iPhone 4S. Since the image carries less information per inch than the display can show, we need to shrink it by the factor 260/326, so the size to use for the image view is
imageViewWidth = W*(260/326);
imageViewHeight = H*(260/326);
In general:
resizeFactor = imageResolution/deviceDisplayResolution;
imageViewWidth = W*resizeFactor;
imageViewHeight = H*resizeFactor;
Here I am assuming that when we set an image in the image view and resize it, no pixels are added to or removed from the image; the sketch below shows the same arithmetic in code.
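A minimal sketch (all names are illustrative; 260 and 326 are the example ppi values above):
// Sketch: shrink the image view so a lower-ppi image is not stretched
// past its native pixel density.
CGFloat imageResolution = 260.0f;   // ppi of the downloaded image (assumed known)
CGFloat displayResolution = 326.0f; // ppi of the device's display
CGFloat resizeFactor = imageResolution / displayResolution;
CGRect frame = imageView.frame;
frame.size.width = image.size.width * resizeFactor;
frame.size.height = image.size.height * resizeFactor;
imageView.frame = frame;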
Let the UIImageView do the work by using its contentMode property to handle image resizing for you.
You probably want to display your UIImageView with a static size (the frame property) representing the maximum area an image should occupy, and let each image resize within that frame according to its own size and aspect ratio. The UIImageView can do the heavy lifting of dealing with differently sized images once you master the contentMode property. It has many settings, one of which is UIViewContentModeScaleAspectFit, which scales the image to fit within the UIImageView while preserving its aspect ratio. Play with the settings to get the results you want.
Note that with this approach, there is nothing special you need to do to deal with scaling issues associated with a Retina display.
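For example (a minimal sketch; the 300x200 frame and the downloadedImage variable are arbitrary):
// Sketch: fixed frame, let contentMode scale each image to fit.
UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 300, 200)];
imageView.contentMode = UIViewContentModeScaleAspectFit;
imageView.image = downloadedImage; // any resolution; aspect ratio is preserved
[self.view addSubview:imageView];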
As per the requirement you stated in the question body, I believe you need not change the UIImageView size at all.
The image can be displayed fully, without any blur, using this one line of code:
imageView.contentMode = UIViewContentModeScaleAspectFit;

iPhone UIView replace image color programmatically, like in Photoshop or Illustrator

I have a set of graphics that I would like to reuse within my apps. They are buttons and images with transparency. Rather than having dozens of hard-coded button images, I would love to have a single button image and be able to assign a color to it dynamically, like the navigation bar and its tint color.
Is it possible to replace/shift the hue of a UIView?
Alternatively:
I know that if I overlay a transparent image on another UIView, I can change the background color and the two will blend together, but the transparent regions take on the hue of the background.
Is there a way to keep transparent regions transparent while blending the two images together? I have heard of clipping masks but have never worked with them.
Thank you!
Update:
This should work, but it returns a nil image; I've seen someone else report the same issue. Maybe it will work in a future release.
-(void)doHueAdjustFilter
{
    // Does not work; returns a nil image. Likely cause: the inputs are set on the
    // filter's read-only attributes dictionary instead of on the filter itself
    // (compare the working version below, which calls setValue:forKey: on the filter).
    CIImage *inputImage = [CIImage imageWithCGImage:[[UIImage imageNamed:@"button.jpg"] CGImage]];
    CIFilter *myFilter;
    NSDictionary *myFilterAttributes;
    myFilter = [CIFilter filterWithName:@"CIHueAdjust"];
    [myFilter setDefaults];
    myFilterAttributes = [myFilter attributes];
    [myFilterAttributes setValue:inputImage forKey:@"inputImage"];
    [myFilterAttributes setValue:[NSNumber numberWithFloat:0.9] forKey:@"inputAngle"];
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *ciimage = [myFilter outputImage];
    CGImageRef cgimg = [context createCGImage:ciimage fromRect:[ciimage extent]];
    UIImage *uimage = [UIImage imageWithCGImage:cgimg scale:1.0f orientation:UIImageOrientationUp];
    [caduceusImageView setImage:uimage];
    CGImageRelease(cgimg);
}
I'm super excited about this code snippet. It completely changes the hue of graphics or buttons; vector graphics look as if I had changed the hue in Illustrator. This saves a great amount of time, because I no longer need to drop my work, switch to Illustrator, and start drawing buttons with different hues. One button works for all! Note that, unlike the broken snippet above, this version sets its inputs on the filter itself via setValue:forKey:, rather than on the attributes dictionary.
Works on iOS 5:
-(UIImage *)doHueAdjustFilterWithBaseImageName:(NSString *)baseImageName hueAdjust:(CGFloat)hueAdjust
{
    CIImage *inputImage = [[CIImage alloc] initWithImage:[UIImage imageNamed:baseImageName]];
    CIFilter *controlsFilter = [CIFilter filterWithName:@"CIHueAdjust"];
    // Set the inputs on the filter itself, not on its attributes dictionary
    [controlsFilter setValue:inputImage forKey:kCIInputImageKey];
    [controlsFilter setValue:[NSNumber numberWithFloat:hueAdjust] forKey:@"inputAngle"];
    NSLog(@"%@", controlsFilter.attributes);
    CIImage *displayImage = controlsFilter.outputImage;
    if (displayImage == nil) {
        // We did not get an output image; fall back to the original.
        return [UIImage imageNamed:baseImageName];
    }
    // Render through a CIContext so the result is backed by a real CGImage
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:displayImage fromRect:displayImage.extent];
    UIImage *finalImage = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage); // avoid leaking the CGImageRef
    return finalImage;
}
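Usage might look like this (hedged: the image name, button, and angle are mine; inputAngle is in radians):
// Sketch: reuse one base graphic at several hues.
UIImage *shifted = [self doHueAdjustFilterWithBaseImageName:@"button.png" hueAdjust:M_PI_2]; // ~90 degree hue shift
[myButton setImage:shifted forState:UIControlStateNormal];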

Orientation does not behave correctly with Photo in ALAsset

I currently have an app that uses ALAssetsLibrary to fetch photos. I place the photo in an image view and am able to upload it to the server. When I tested on a real device after taking some photos, I found that photos that were supposed to be taken in portrait came out in landscape.
Therefore, I switched to a different call to get the CGImage, like this:
UIImage *image = [UIImage imageWithCGImage:[representation fullResolutionImage] scale:1.0 orientation:(UIImageOrientation)[representation orientation]];
Originally I had used this:
UIImage *image = [UIImage imageWithCGImage:[representation fullResolutionImage]];
I thought the version with scale and orientation would give me the orientation the photo was taken in, but it didn't.
Am I missing something needed to generate the correct photo orientation?
The correct orientation handling depends on the iOS version you are using.
On iOS 4 and iOS 5 the thumbnail is already correctly rotated, so you can initialize your UIImage without specifying any rotation parameters.
For fullScreenImage, however, the behavior differs by version: on iOS 5 the image is already rotated, on iOS 4 it is not.
So on iOS 4 you should use:
ALAssetRepresentation *defaultRep = [asset defaultRepresentation];
UIImage *_image = [UIImage imageWithCGImage:[defaultRep fullScreenImage]
                                      scale:[defaultRep scale]
                                orientation:(UIImageOrientation)[defaultRep orientation]];
On iOS 5 the following should work correctly:
ALAssetRepresentation *defaultRep = [asset defaultRepresentation];
UIImage *_image = [UIImage imageWithCGImage:[defaultRep fullScreenImage] scale:[defaultRep scale] orientation:UIImageOrientationUp];
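If you have to support both versions in one build, a hedged sketch of branching at runtime (the systemVersion string comparison is the usual pre-NSProcessInfo approach):
// Sketch: pick the rotation behavior based on the running iOS version.
ALAssetRepresentation *rep = [asset defaultRepresentation];
UIImage *image;
if ([[[UIDevice currentDevice] systemVersion] floatValue] >= 5.0) {
    // iOS 5: fullScreenImage is already rotated correctly.
    image = [UIImage imageWithCGImage:[rep fullScreenImage] scale:[rep scale] orientation:UIImageOrientationUp];
} else {
    // iOS 4: apply the asset's own orientation.
    image = [UIImage imageWithCGImage:[rep fullScreenImage] scale:[rep scale] orientation:(UIImageOrientation)[rep orientation]];
}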
Cheers,
Hendrik
Try this code:
UIImage* img = [UIImage imageWithCGImage:asset.thumbnail];
img = [UIImage imageWithCGImage:img.CGImage scale:1.0 orientation:UIImageOrientationUp];
This may help you.
My experience is limited to iOS 5.x, but I can tell you that the thumbnail and fullscreen images are oriented properly. It's the fullResolutionImage that comes out horizontal when shot vertically. My solution is to use a category on UIImage that I got from here:
http://www.catamount.com/forums/viewtopic.php?f=21&t=967&start=0
It provides a nice rotation method on UIImage, used like this:
UIImage *tmp = [UIImage imageWithCGImage:startingFullResolutionImage];
startingFullResolutionImage = [[tmp imageRotatedByDegrees:-90.0f] CGImage];
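For reference, a minimal sketch of what such a category method could look like (the linked post's implementation differs; this just shows the idea of redrawing into a rotated context):
// Sketch: rotate a UIImage by drawing it into a rotated bitmap context.
@implementation UIImage (Rotation)
- (UIImage *)imageRotatedByDegrees:(CGFloat)degrees
{
    CGFloat radians = degrees * M_PI / 180.0f;
    // Bounding box of the rotated image.
    CGRect rotated = CGRectApplyAffineTransform((CGRect){CGPointZero, self.size}, CGAffineTransformMakeRotation(radians));
    UIGraphicsBeginImageContextWithOptions(rotated.size, NO, self.scale);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Rotate around the center of the new canvas, then draw centered.
    CGContextTranslateCTM(ctx, rotated.size.width / 2.0f, rotated.size.height / 2.0f);
    CGContextRotateCTM(ctx, radians);
    [self drawInRect:CGRectMake(-self.size.width / 2.0f, -self.size.height / 2.0f, self.size.width, self.size.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
@end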
For fullResolutionImage, I'd like to offer the following solution:
ALAssetRepresentation *rep = [asset defaultRepresentation];
// First, bake the orientation (the EXIF value) into the UIImage.
UIImage *image = [UIImage imageWithCGImage:[rep fullResolutionImage] scale:rep.scale orientation:(UIImageOrientation)rep.orientation];
// Second, normalize the orientation by redrawing, which drops the EXIF flag.
if (image.imageOrientation != UIImageOrientationUp) {
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:(CGRect){0, 0, image.size}];
    UIImage *normalizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    image = normalizedImage;
}
// Third, compress to JPEG.
NSData *imageData = UIImageJPEGRepresentation(image, 1.0);
imageData is what you want; just upload it to your photo server.
By the way, if you think the EXIF data is useful, you can add it back to normalizedImage as you wish.

Rendering MKMapView to UIImage with real resolution

I am using this function to render an MKMapView instance into an image:
@implementation UIView (Ext)
- (UIImage *)renderToImage
{
    UIGraphicsBeginImageContext(self.frame.size);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
@end
This works fine, but on an iPhone 4 the rendered image doesn't have the same resolution as the view on the device. On the device the map view renders at 640x920, while the rendered image is 320x460.
I then doubled the size passed to UIGraphicsBeginImageContext(), but that filled only the top-left part of the image.
Question: Is there any way to render the map to an image at the full 640x920 resolution?
Try using UIGraphicsBeginImageContextWithOptions instead of UIGraphicsBeginImageContext:
UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
See QA1703 for more details. It says:
Note: Starting from iOS 4, UIGraphicsBeginImageContextWithOptions allows you to provide a scale factor. A scale factor of zero sets it to the scale factor of the device's main screen. This enables you to get the sharpest, highest-resolution snapshot of the display, including a Retina display.
iOS 7 introduced a new way to generate screenshots of an MKMapView. You can now use the MKMapSnapshotter API as follows:
MKMapView *mapView = [..your mapview..]
MKMapSnapshotOptions *options = [[MKMapSnapshotOptions alloc] init];
options.region = mapView.region;
options.mapType = MKMapTypeStandard;
options.showsBuildings = NO;
options.showsPointsOfInterest = NO;
options.size = CGSizeMake(1000, 500);
MKMapSnapshotter *snapshotter = [[MKMapSnapshotter alloc] initWithOptions:options];
[snapshotter startWithQueue:dispatch_get_main_queue() completionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
    if (error) {
        NSLog(@"An error occurred: %@", error);
    } else {
        [UIImagePNGRepresentation(snapshot.image) writeToFile:@"/Users/<yourAccountName>/map.png" atomically:YES];
    }
}];
Currently, overlays and annotations are not rendered into the snapshot; you have to draw them onto the resulting image yourself. The MKMapSnapshot object has a handy helper method to map coordinates to image points:
CGPoint point = [snapshot pointForCoordinate:locationCoordinate2D];
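A hedged sketch of that post-processing step for the default pins (whether MKPinAnnotationView exposes its artwork via the image property should be verified on your target iOS version):
// Sketch: composite pin images onto the snapshot at each annotation's
// projected point, inside the startWithQueue: completion handler.
MKAnnotationView *pinView = [[MKPinAnnotationView alloc] initWithAnnotation:nil reuseIdentifier:nil];
UIImage *pinImage = pinView.image;
UIGraphicsBeginImageContextWithOptions(snapshot.image.size, YES, snapshot.image.scale);
[snapshot.image drawAtPoint:CGPointZero];
for (id<MKAnnotation> annotation in mapView.annotations) {
    CGPoint point = [snapshot pointForCoordinate:annotation.coordinate];
    // Offset so the pin's anchor, not the image's top-left corner, sits on the point.
    point.x += pinView.centerOffset.x - pinImage.size.width / 2.0f;
    point.y += pinView.centerOffset.y - pinImage.size.height / 2.0f;
    [pinImage drawAtPoint:point];
}
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();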