I am trying to develop an iPhone app for face recognition/detection. In my app I want the iPhone camera to auto-focus on the face and auto-capture.
How do I recognize a face from an iPhone app?
Is it possible to auto-focus on the face and auto-capture in an iPhone app? If it is possible, can anyone please help me do this? I just want suggestions/ideas and tutorials about that.
Can you please help me? Thanks in advance.
Core Image has a face detector, CIDetector with CIDetectorTypeFace (new in iOS 5), that can detect faces in real time; you can start with these examples to get an overview:
SquareCam
iOS Facial Recognition
Easy Face detection with Core Image
Check this code. You have to import the following:
#import <CoreImage/CoreImage.h>
and after that use the code:
-(void)markFaces:(UIImageView *)facePicture
{
// draw a CI image with the previously loaded face detection picture
CIImage* image = [CIImage imageWithCGImage:facePicture.image.CGImage];
// create a face detector - since speed is not an issue we'll use a high accuracy
// detector
CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
context:nil options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];
// create an array containing all the detected faces from the detector
NSArray* features = [detector featuresInImage:image];
// we'll iterate through every detected face. CIFaceFeature provides us
// with the bounds of the entire face, and the coordinates of each eye
// and the mouth if detected. It also provides BOOLs for the eyes and
// mouth so we can check whether they were detected.
for(CIFaceFeature* faceFeature in features)
{
// get the width of the face
CGFloat faceWidth = faceFeature.bounds.size.width;
// create a UIView using the bounds of the face
UIView* faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
// add a border around the newly created UIView
faceView.layer.borderWidth = 1;
faceView.layer.borderColor = [[UIColor redColor] CGColor];
// add the new view to create a box around the face
[self.window addSubview:faceView];
if(faceFeature.hasLeftEyePosition)
{
// create a UIView with a size based on the width of the face
UIView* leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.leftEyePosition.x-faceWidth*0.15, faceFeature.leftEyePosition.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)];
// change the background color of the eye view
[leftEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
// set the position of the leftEyeView based on the face
[leftEyeView setCenter:faceFeature.leftEyePosition];
// round the corners
leftEyeView.layer.cornerRadius = faceWidth*0.15;
// add the view to the window
[self.window addSubview:leftEyeView];
}
if(faceFeature.hasRightEyePosition)
{
// create a UIView with a size based on the width of the face
UIView* rightEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.rightEyePosition.x-faceWidth*0.15, faceFeature.rightEyePosition.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)];
// change the background color of the eye view
[rightEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
// set the position of the rightEyeView based on the face
[rightEyeView setCenter:faceFeature.rightEyePosition];
// round the corners
rightEyeView.layer.cornerRadius = faceWidth*0.15;
// add the new view to the window
[self.window addSubview:rightEyeView];
}
if(faceFeature.hasMouthPosition)
{
// create a UIView with a size based on the width of the face
UIView* mouth = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.mouthPosition.x-faceWidth*0.2, faceFeature.mouthPosition.y-faceWidth*0.2, faceWidth*0.4, faceWidth*0.4)];
// change the background color for the mouth to green
[mouth setBackgroundColor:[[UIColor greenColor] colorWithAlphaComponent:0.3]];
// set the position of the mouthView based on the face
[mouth setCenter:faceFeature.mouthPosition];
// round the corners
mouth.layer.cornerRadius = faceWidth*0.2;
// add the new view to the window
[self.window addSubview:mouth];
}
}
}
-(void)faceDetector
{
// Load the picture for face detection
UIImageView* image = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"facedetectionpic.jpg"]];
// Draw the face detection image
[self.window addSubview:image];
// Execute the method used to markFaces in background
[self performSelectorInBackground:@selector(markFaces:) withObject:image];
// flip image on y-axis to match coordinate system used by core image
[image setTransform:CGAffineTransformMakeScale(1, -1)];
// flip the entire window to make everything right side up
[self.window setTransform:CGAffineTransformMakeScale(1, -1)];
}
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
// Override point for customization after application launch.
self.viewController = [[ViewController alloc] initWithNibName:@"ViewController" bundle:nil];
self.window.rootViewController = self.viewController;
[self.window makeKeyAndVisible];
[self faceDetector]; // execute the faceDetector code
return YES;
}
Hope it helps, thanks :)
Related
I am making a dummy project based on image editing. In this project I want to calculate the number of people present in a particular picture, either taken with the camera or chosen from the iPhone image gallery. Does anybody know how this can be done? I don't have any idea. Any help would be appreciated.
Check this code. You have to import <CoreImage/CoreImage.h> and then use the same markFaces: and faceDetector code shown in the answer above; the number of detected faces (and therefore people) in the picture is simply [features count].
For this there are a couple of classes that you need to implement ...
Face Wrapper.
This is part of face detection, so you should research in that area. Hope this small tidbit helps!
You can easily implement face detection using Core Image in iOS.
Face detection tutorial
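For the counting use case in the question above, here is a minimal sketch built on the same Core Image API; the method name countFacesInImage: is only an illustrative helper, not a framework method:
#import <UIKit/UIKit.h>
#import <CoreImage/CoreImage.h>
// Sketch: count how many faces Core Image finds in a still image.
// Each CIFaceFeature corresponds to one detected face, i.e. one person.
- (NSUInteger)countFacesInImage:(UIImage *)picture
{
    CIImage *ciImage = [CIImage imageWithCGImage:picture.CGImage];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                               context:nil
                                               options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                                    forKey:CIDetectorAccuracy]];
    NSArray *features = [detector featuresInImage:ciImage];
    return [features count];
}
The picture can come from the camera (UIImagePickerController) or the photo library; the count works the same either way.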
I have used the following code to detect faces on iOS 5:
CIImage *cIImage = [CIImage imageWithCGImage:image.CGImage];
CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];
NSArray *features = nil;
features = [detector featuresInImage:cIImage];
if ([features count] == 0)
{
NSDictionary* imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6] forKey:CIDetectorImageOrientation];
features = [detector featuresInImage:cIImage options:imageOptions];
}
With this code, I am able to detect faces on iOS 5. But recently we upgraded to Xcode 4.4 and iOS 6, and now face detection is not working properly.
What changes do I need to make to detect faces on iOS 6?
Any kind of help is highly appreciated.
I have noticed that face detection in iOS 6 is not as good as in iOS 5.
Try it with a selection of images. You will likely find that it works OK on iOS 6 with a lot of the images, but not all of them.
I have been testing the same set of images on:
1. The Simulator running iOS 6.
2. iPhone 5 (iOS 6).
3. iPhone 3GS (iOS 5).
The 3GS detects more faces than the other two options.
Here's the code; it works on both, but just not as well on iOS 6:
- (void)analyseFaces:(UIImage *)facePicture {
// Create CI image of the face picture
CIImage* image = [CIImage imageWithCGImage:facePicture.CGImage];
// Create face detector with high accuracy
CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
context:nil options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];
// Create array of detected faces
NSArray* features = [detector featuresInImage:image];
// Read through the faces and add each face image to the facesFound mutable array (an NSMutableArray ivar)
for(CIFaceFeature* faceFeature in features)
{
CGSize parentSize = facePicture.size;
CGRect origRect = faceFeature.bounds;
CGRect flipRect = origRect;
flipRect.origin.y = parentSize.height - (origRect.origin.y + origRect.size.height);
CGImageRef imageRef = CGImageCreateWithImageInRect([facePicture CGImage], flipRect);
UIImage *faceImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
if (faceImage)
[facesFound addObject:faceImage];
}
}
I hope this is helpful for you...
Add the CoreImage.framework.
-(void)faceDetector
{
// Load the picture for face detection
UIImageView* image = [[UIImageView alloc] initWithImage:
[UIImage imageNamed:@"facedetectionpic.jpg"]];
// Draw the face detection image
[self.window addSubview:image];
// Call the markFaces: method on this image
[self markFaces:image];
}
-(void)faceDetector
{
// Load the picture for face detection
UIImageView* image = [[UIImageView alloc] initWithImage:[UIImage
imageNamed:@"facedetectionpic.jpg"]];
// Draw the face detection image
[self.window addSubview:image];
// Execute the method used to markFaces in background
[self performSelectorInBackground:@selector(markFaces:) withObject:image];
// flip image on y-axis to match coordinate system used by core image
[image setTransform:CGAffineTransformMakeScale(1, -1)];
// flip the entire window to make everything right side up
[self.window setTransform:CGAffineTransformMakeScale(1, -1)];
}
and check these two links also:
http://maniacdev.com/2011/11/tutorial-easy-face-detection-with-core-image-in-ios-5/
http://i.ndigo.com.br/2012/01/ios-facial-recognition/
I have followed a tutorial to detect a face within an image, and it works. It creates a red rectangle around the face using a UIView *faceView. Now I am trying to obtain the coordinates of the detected face, but the results returned are slightly off on the y-axis. How can I fix this? Where am I going wrong?
This is what I have attempted:
CGRect newBounds = CGRectMake(faceFeature.bounds.origin.x,
imageView.bounds.size.height - faceFeature.bounds.origin.y - faceFeature.bounds.size.height,
faceFeature.bounds.size.width,
faceFeature.bounds.size.height);
This is the source code for the detection:
-(void)markFaces:(UIImageView *)facePicture
{
// draw a CI image with the previously loaded face detection picture
CIImage* image = [CIImage imageWithCGImage:facePicture.image.CGImage];
// create a face detector - since speed is not an issue we'll use a high accuracy
// detector
CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
context:nil options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];
// create an array containing all the detected faces from the detector
NSArray* features = [detector featuresInImage:image];
// we'll iterate through every detected face. CIFaceFeature provides us
// with the bounds of the entire face, and the coordinates of each eye
// and the mouth if detected. It also provides BOOLs for the eyes and
// mouth so we can check whether they were detected.
for(CIFaceFeature* faceFeature in features)
{
// get the width of the face
CGFloat faceWidth = faceFeature.bounds.size.width;
// create a UIView using the bounds of the face
UIView* faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
// add a border around the newly created UIView
faceView.layer.borderWidth = 1;
faceView.layer.borderColor = [[UIColor redColor] CGColor];
CGRect newBounds = CGRectMake(faceFeature.bounds.origin.x,
imageView.bounds.size.height - faceFeature.bounds.origin.y - faceFeature.bounds.size.height,
faceFeature.bounds.size.width,
faceFeature.bounds.size.height);
NSLog(@"My view frame: %@", NSStringFromCGRect(newBounds));
[self.view addSubview:faceView];
if(faceFeature.hasLeftEyePosition)
{
}
if(faceFeature.hasRightEyePosition)
{
}
if(faceFeature.hasMouthPosition)
{
}
}
}
-(void)faceDetector
{
// Load the picture for face detection
UIImageView* image = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"jolie.jpg"]];
// Draw the face detection image
[self.view addSubview:image];
// flip image on y-axis to match coordinate system used by core image
[image setTransform:CGAffineTransformMakeScale(1, -1)];
// flip the entire window to make everything right side up
[self.view setTransform:CGAffineTransformMakeScale(1, -1)];
// Execute the method used to markFaces in background
[self performSelectorInBackground:@selector(markFaces:) withObject:image];
}
The Core Image coordinate system and the UIKit coordinate system are quite different. CIFaceFeature provides coordinates in the Core Image coordinate system, and for your purposes you need to convert them into UIKit coordinates:
// CoreImage coordinate system origin is at the bottom left corner and UIKit is at the top left corner
// So we need to translate features positions before drawing them to screen
// In order to do so we make an affine transform
// **Note**
// Its better to convert CoreImage coordinates to UIKit coordinates and
// not the other way around because doing so could affect other drawings
// i.e. in the original sample project you see the image at the bottom; isn't that weird?
CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
transform = CGAffineTransformTranslate(transform, 0, -_pickerImageView.bounds.size.height);
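// Note: EYE_SIZE_RATE is used below but not defined in this snippet; it is assumed
// to be a constant defined elsewhere in the original answer, e.g. #define EYE_SIZE_RATE 0.3f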
for(CIFaceFeature* faceFeature in features)
{
// Translate CoreImage coordinates to UIKit coordinates
const CGRect faceRect = CGRectApplyAffineTransform(faceFeature.bounds, transform);
// create a UIView using the bounds of the face
UIView* faceView = [[UIView alloc] initWithFrame:faceRect];
faceView.layer.borderWidth = 1;
faceView.layer.borderColor = [[UIColor redColor] CGColor];
// get the width of the face
CGFloat faceWidth = faceFeature.bounds.size.width;
// add the new view to create a box around the face
[_pickerImageView addSubview:faceView];
if(faceFeature.hasLeftEyePosition)
{
// Get the left eye position: Translate CoreImage coordinates to UIKit coordinates
const CGPoint leftEyePos = CGPointApplyAffineTransform(faceFeature.leftEyePosition, transform);
// Note1:
// If you want to add this to the faceView instead of the imageView we need to translate its
// coordinates a bit more {-x, -y} in other words: {-faceFeature.bounds.origin.x, -faceFeature.bounds.origin.y}
// You could do the same for the other eye and the mouth too.
// Create an UIView to represent the left eye, its size depend on the width of the face.
UIView* leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(leftEyePos.x - faceWidth*EYE_SIZE_RATE*0.5f /*- faceFeature.bounds.origin.x*/, // See Note1
leftEyePos.y - faceWidth*EYE_SIZE_RATE*0.5f /*- faceFeature.bounds.origin.y*/, // See Note1
faceWidth*EYE_SIZE_RATE,
faceWidth*EYE_SIZE_RATE)];
leftEyeView.backgroundColor = [[UIColor magentaColor] colorWithAlphaComponent:0.3];
leftEyeView.layer.cornerRadius = faceWidth*EYE_SIZE_RATE*0.5;
//[faceView addSubview:leftEyeView]; // See Note1
[_pickerImageView addSubview:leftEyeView];
}
}
The task is to draw paths at runtime on a custom map that I am using inside a UIScrollView, and then to keep drawing paths whenever the location coordinates (lat, long) update. The problem I am trying to solve: I have made a class 'graphics', a subclass of UIView, in which I do the drawing in the 'drawRect:' method. When I add the graphics view as a subview of the scroll view over the image, the line draws, but I need to keep extending the line as new points arrive, i.e. keep updating the (x, y) points passed to 'CGContextStrokeLineSegments'. The code:
ViewController:
- (void)loadView {
[[UIApplication sharedApplication] setStatusBarHidden:YES withAnimation:UIStatusBarAnimationNone];
CGRect fullScreenRect=[[UIScreen mainScreen] applicationFrame];
scrollView=[[UIScrollView alloc] initWithFrame:fullScreenRect];
graph = [[graphics alloc] initWithFrame:fullScreenRect];
scrollView.contentSize=CGSizeMake(320,480);
UIImageView *tempImageView2 = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"fortuneCenter.png"]];
self.view=scrollView;
[scrollView addSubview:tempImageView2];
scrollView.userInteractionEnabled = YES;
scrollView.bounces = NO;
[scrollView addSubview:graph];
}
Graphics.m:
- (id)initWithFrame:(CGRect)frame
{
self = [super initWithFrame:frame];
if (self) {
// Initialization code
self.backgroundColor = [UIColor clearColor];
}
return self;
}
- (void)drawRect:(CGRect)rect
{
CGContextRef context = UIGraphicsGetCurrentContext();
CGPoint point [2] = { CGPointMake(160, 100), CGPointMake(160,300)};
CGContextSetRGBStrokeColor(context, 1.0, 0, 1.0, 1.0); // RGBA components are in the 0.0-1.0 range
CGContextStrokeLineSegments(context, point, 2);
}
So how can I draw the lines at runtime? I am just simulating right now, so I am not using real-time data (coordinates); I just want to simulate with dummy (x, y) coordinates. Let's say I have a button: whenever I press it, it updates the coordinates so the path extends.
The easiest way would be to add an instance variable representing the points to the UIView subclass.
Then, every time the path changes, update the ivar appropriately and call -setNeedsDisplay or -setNeedsDisplayInRect: on the custom UIView (or even on its superview). The runtime will then redraw the new path.
You just need to make CGPoint point[] dynamically resizable, from the looks of it.
You can use malloc, a std::vector, or even NSMutableData to store the points you add. Then you pass that array to CGContextStrokeLineSegments.
If 2 points is all you will need, move CGPoint point[2] to an ivar so you may store the positions, then (as Rich noted) invalidate rects appropriately when these values (or the array) are changed.
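Putting the two suggestions above together, here is a minimal sketch (assuming ARC) of what the asker's graphics view could look like with a dynamically growing point store; the pathPoints ivar and the addPoint: method are illustrative names, not part of any framework:
#import <UIKit/UIKit.h>
@interface graphics : UIView
// Append a point and trigger a redraw of the path
- (void)addPoint:(CGPoint)point;
@end
@implementation graphics
{
    NSMutableArray *pathPoints; // boxed CGPoints (NSValue), grows at runtime
}
- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        self.backgroundColor = [UIColor clearColor];
        pathPoints = [NSMutableArray array];
    }
    return self;
}
- (void)addPoint:(CGPoint)point
{
    [pathPoints addObject:[NSValue valueWithCGPoint:point]];
    // Ask UIKit to call drawRect: again so the extended path appears
    [self setNeedsDisplay];
}
- (void)drawRect:(CGRect)rect
{
    if ([pathPoints count] < 2) return;
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRGBStrokeColor(context, 1.0, 0, 1.0, 1.0);
    CGContextSetLineWidth(context, 2.0);
    // Build one connected path through all stored points
    CGPoint first = [[pathPoints objectAtIndex:0] CGPointValue];
    CGContextBeginPath(context);
    CGContextMoveToPoint(context, first.x, first.y);
    for (NSUInteger i = 1; i < [pathPoints count]; i++) {
        CGPoint p = [[pathPoints objectAtIndex:i] CGPointValue];
        CGContextAddLineToPoint(context, p.x, p.y);
    }
    CGContextStrokePath(context);
}
@end
A button handler (or a real location update) would then just call [graph addPoint:CGPointMake(x, y)] and the view redraws itself with the extended path.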
This subject comes up every now and then, so I created a longer blog post on the general concepts involved with one potential solution, creating and using your own graphics context, here: http://www.musingpaw.com/2012/04/drawing-in-ios-apps.html
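For completeness, a minimal hedged sketch of the "own graphics context" idea that post describes, i.e. rendering the segments into an offscreen image context instead of (or in addition to) drawRect:; the sizes and variable names here are illustrative only:
// Draw into an offscreen bitmap context and keep the result as a UIImage
UIGraphicsBeginImageContextWithOptions(CGSizeMake(320, 480), NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetRGBStrokeColor(ctx, 1.0, 0, 1.0, 1.0);
CGPoint segment[2] = { CGPointMake(160, 100), CGPointMake(160, 300) };
CGContextStrokeLineSegments(ctx, segment, 2);
UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// 'rendered' can now be displayed in a UIImageView or composited later.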
I want to add a magnifier to a cocos2d game. Here is what I found online:
http://coffeeshopped.com/2010/03/a-simpler-magnifying-glass-loupe-view-for-the-iphone
I've changed the code a bit (since I don't want the loupe to follow the touch):
- (id)initWithFrame:(CGRect)frame {
if ((self = [super initWithFrame:magnifier_rect])) {
// make the circle-shape outline with a nice border.
self.layer.borderColor = [[UIColor lightGrayColor] CGColor];
self.layer.borderWidth = 3;
self.layer.cornerRadius = 250;
self.layer.masksToBounds = YES;
touchPoint = CGPointMake(CGRectGetMidX(magnifier_rect), CGRectGetMidY(magnifier_rect));
}
return self;
}
Then I want to add it in one of my scene's init methods:
loop = [[MagnifierView alloc] init];
[loop setNeedsDisplay];
loop.viewToMagnify = [CCDirector sharedDirector].openGLView;
[[CCDirector sharedDirector].openGLView.superview addSubview:loop];
But the result is that the area inside the loupe is black.
Also, this loupe magnifies the image at a uniform scale; how can I change it to magnify more near the center and less near the edge, just like a real magnifier?
Thank you!
Here I assume that you want to magnify the center of the screen.
You will have to adjust the position, radius, grid, and duration values dynamically to suit your app's needs.
CGSize size = [[CCDirector sharedDirector] winSize];
id lens = [CCLens3D actionWithPosition:ccp(size.width/2,size.height/2) radius:240 grid:ccg(15,10) duration:0.0f];
[self runAction:lens];
Cocos2d draws using OpenGL, not Core Animation/Quartz. The CALayer you are drawing into is empty, so you see nothing. You will either have to use OpenGL graphics code to perform the loupe effect, or sample the pixels and alter them appropriately to achieve the magnification effect, as was done in the Christmann article referenced from the article you linked to. That code also relies on Core Animation/Quartz, so you will need to work out another way to get your hands on the image data you wish to magnify.
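One hedged way to get at that image data from a cocos2d/OpenGL ES view is to read the framebuffer back with glReadPixels. This is only a sketch: it assumes the GL context of the cocos2d view is current on the calling thread, that width/height are the view size in pixels, and none of these helper names come from cocos2d itself:
#import <UIKit/UIKit.h>
#import <OpenGLES/ES1/gl.h>
// Sketch: read the current OpenGL ES framebuffer into a CGImage so the pixels
// can then be sampled/magnified with Core Graphics.
static void ReleaseGLPixels(void *info, const void *data, size_t size)
{
    free((void *)data);
}
static CGImageRef CreateSnapshotOfGLView(GLint width, GLint height)
{
    size_t byteCount = (size_t)width * (size_t)height * 4;
    GLubyte *pixels = (GLubyte *)malloc(byteCount);
    // RGBA read-back; the OpenGL origin is the bottom-left corner, so the
    // resulting image is vertically flipped relative to UIKit.
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, byteCount, ReleaseGLPixels);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef image = CGImageCreate(width, height, 8, 32, width * 4, colorSpace,
                                     kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast,
                                     provider, NULL, false, kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    return image; // caller is responsible for CGImageRelease()
}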