Which library should I use for face tracking on a captured image? - iPhone

I am creating an application with face processing. How can I create face animation?
Is there any library I can use?
How do I track a face once I have captured an image of it?
Please help me.

As far as I am aware, there is no completely free library to track facial expressions, which I think is what you need to produce animation.
However, there is a commercial library for iOS (and other platforms) here: http://www.image-metrics.com/livedriver/overview/
It is available under a trial license and also a free educational license. I believe it will do what you want.
Your other option is to develop your own facial feature tracking system using something like OpenCV: http://opencv.org/
That's going to be a challenge, though.

Face detection is already included in Core Image (CI); see
http://www.bobmccune.com/2012/03/22/ios-5-face-detection-with-core-image/
If you want face recognition, you have to do something on your own, but there are some tutorials available, most of them using OpenCV.

Related

EasyAR: How to create a reliably trackable image target?

I'm using EasyAR Sense for Unity to develop an app that tracks a target image.
I'm using https://www.easyar.com/targetcode.html to test my target images, but I have not yet understood the marker requirements. Are the colors of the image important? Or is recognition based on outlines and edges?
Also, are there any suggested guidelines, and how do they apply to arbitrary images? (Just to clarify, I'm not interested in using markers, only digitally drawn pictures.)
EDIT: We found out that the ways in which Vuforia and EasyAR recognize their targets are pretty far apart from each other. An image that rates fairly low on Vuforia can score high on EasyAR's site, and vice versa.
As far as we know now, yes, Vuforia bases its recognition methods on high contrasts and sharp edges.
That said, Vuforia as a solution is not feasible for our purposes, as it doesn't support the front-facing camera. We had to look for alternative solutions and stumbled across EasyAR, which seems powerful but has really slim documentation on the programming side and no design guideline documentation at all.
As we understand it, chaotic patterns are recognized best by EasyAR's engine, but it doesn't state how much chaos defines "a rich texture".
We are in dire need of simplicity in the images we're using in the application, since it's targeted at kids with comprehension disorders, and a messy approach to the images may be counterproductive.
In my experience with EasyAR and Vuforia (they detect 2D images similarly), the more complex the image, the better it is for recognition. For example:
Contrast between clearly delimited color areas is detected very well.
Lines with sharp edges are detected better.
Go into Vuforia and try its star-rating system when checking a target; that usually tells me which images will work well on EasyAR.
[Example images were shown here: one with good detection, one with bad detection.]
In my experience with EasyAR and Vuforia, color is important, and uniform images aren't suitable as targets.
Furthermore, in Vuforia, under
developer panel >> Target Manager >> your database >> your target >> rating,
you can see your target's rating.
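To make the intuition above concrete, here is a rough, pure-Python sketch of a "feature richness" score. This is not Vuforia's or EasyAR's actual rating algorithm (neither is public); it only illustrates why targets with many strong local gradients (sharp edges, high contrast between delimited areas) rate well, while uniform images rate poorly.

```python
def richness_score(img, threshold=30):
    """Fraction of pixels whose horizontal or vertical gradient exceeds
    `threshold` -- a crude proxy for trackable detail. `img` is a 2D list
    of 0-255 grayscale values."""
    h, w = len(img), len(img[0])
    strong = 0
    for y in range(h - 1):
        for x in range(w - 1):
            gx = abs(img[y][x + 1] - img[y][x])   # horizontal gradient
            gy = abs(img[y + 1][x] - img[y][x])   # vertical gradient
            if max(gx, gy) > threshold:
                strong += 1
    return strong / ((h - 1) * (w - 1))

# A flat gray image has no edges at all...
flat = [[128] * 8 for _ in range(8)]
# ...while a checkerboard is full of sharp transitions.
checker = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]

print(richness_score(flat))     # 0.0
print(richness_score(checker))  # 1.0
```

A heuristic like this would rank the chaotic patterns mentioned above highly; a real tracker additionally cares about how distinctive and well-distributed the features are, not just how many there are.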

iOS 5 'Facial Recognition' for objects

I'm trying to figure out a way to use the facial recognition software within iOS 5 to detect objects. Currently, I'm using Xcode 4.2 and have a sample of code from here: http://maniacdev.com/2011/11/tutorial-easy-face-detection-with-core-image-in-ios-5/
I would like to redefine what "eyes" and "mouth" are, to allow the app to distinguish other objects.
Can anyone help me out with this problem?
Thank You!
Face detection algorithms typically work by searching for image features (e.g., certain patterns of gradients) that are specific to the human face. They cannot be used to detect other, arbitrary objects.
There are no public APIs for modifying the face detector. You will have to find other image processing software to detect your non-face objects.
Check out OpenCV. It should be able to do object recognition. Here is a post about building it for the iPhone: http://lambdajive.wordpress.com/2008/12/20/cross-compiling-for-iphone/
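To illustrate the difference between a face-specific detector and generic object detection, here is a toy sliding-window template matcher in pure Python. Real applications would use OpenCV's cv2.matchTemplate or a trained cascade classifier instead; this sketch only shows the basic idea of scanning an image for a patch that resembles a known object.

```python
def find_template(img, tpl):
    """Return the (row, col) where the template's sum of absolute
    differences (SAD) against the image patch is smallest. Both inputs
    are 2D lists of 0-255 grayscale values."""
    th, tw = len(tpl), len(tpl[0])
    best, best_pos = None, None
    for y in range(len(img) - th + 1):
        for x in range(len(img[0]) - tw + 1):
            sad = sum(abs(img[y + j][x + i] - tpl[j][i])
                      for j in range(th) for i in range(tw))
            if best is None or sad < best:
                best, best_pos = sad, (y, x)
    return best_pos

# A dark 2x2 "object" embedded in a bright background at row 2, col 3.
img = [[200] * 6 for _ in range(5)]
for y, x in [(2, 3), (2, 4), (3, 3), (3, 4)]:
    img[y][x] = 20
tpl = [[20, 20], [20, 20]]
print(find_template(img, tpl))  # (2, 3)
```

Template matching is brittle under rotation, scale, and lighting changes, which is why trained detectors (Haar cascades, feature-based methods) are the practical choice.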

Smile Detection (Any alternative other than OpenCV ?)

Is there any library, as an alternative to OpenCV, that detects smiles?
I don't want to use OpenCV, as it sometimes fails to detect faces because of the background.
Does anyone know of another library, other than OpenCV?
I would recommend having a look at The Machine Perception Toolbox (MPT Library).
I had a chance to play with it a bit at an openFrameworks/OpenCV workshop at Goldsmiths, and there is a C++ smile detection sample available.
I imagine you can try the MPT Library for iPhone with openFrameworks, or simply link to the library from an iPhone project.
"sometimes fails to detect faces due to background."
An ideal lighting setup will guarantee better results, but given that you want to use this on a mobile device, you should inform your users that smile detection might fail under extreme conditions (bad lighting).
HTH
How are you doing smile detection? I can't see a smile-specific Haar dataset in the default OpenCV face detection cascades. I suspect your problem is training data rather than OpenCV itself.
Egawer is a good starting point if you need a working app to begin with.
https://github.com/Atrac613/egawer-iOS
I checked the training images of smileD_haarcascade_v0.05 and found that they include the full face. So it seems to be a "smiling face" detector rather than a smile detector alone. While this seems easier, it can also be less accurate.
The best option is to create your own Haar cascade XML file, but admittedly most of us developers don't have time for that. You can improve the results considerably by equalizing the brightness of the image.
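The brightness-equalization step suggested above is usually histogram equalization. Here is a pure-Python sketch of the idea; in practice you would simply call OpenCV's cv2.equalizeHist on the grayscale frame before running the cascade.

```python
def equalize(pixels):
    """Equalize a flat list of 0-255 grayscale values via the
    cumulative distribution function (CDF), spreading a narrow
    brightness range across the full 0-255 scale."""
    n = len(pixels)
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    cdf, total = [0] * 256, 0
    for v in range(256):
        total += hist[v]
        cdf[v] = total
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:          # constant image: nothing to stretch
        return pixels[:]
    return [round((cdf[p] - cdf_min) * 255 / (n - cdf_min)) for p in pixels]

# A murky, low-contrast patch (values huddled around 100-110)...
dim = [100, 102, 104, 106, 108, 110]
print(equalize(dim))  # [0, 51, 102, 153, 204, 255]
```

Haar cascades compare sums of pixel intensities between rectangular regions, so restoring contrast in dim or washed-out frames often makes the difference between a hit and a miss.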
iOS 7 now has native support for smile detection in Core Image. Here is the API diff in iOS 7 Beta 2:
CoreImage
CIDetector.h
Added CIDetectorEyeBlink
Added CIDetectorSmile

Can I use iPhone face recognition in apps?

I want to develop an application for iPhone in Xcode and integrate face recognition into it to support other application functions, but I do not know how to use face recognition in my application. Any ideas?
Check out this Wikipedia page. It has a lot of references to algorithms, applications, etc.
Check out OpenCV. Here's a blog post that should help get you started:
http://niw.at/articles/2009/03/14/using-opencv-on-iphone/en
See the Apple-published iOS sample code that implements face detection, called SquareCam:
"Integrating with Core Image's new CIFaceDetector to find faces in a real-time VideoDataOutput, as well as in a captured still image. Found faces are indicated with a red square."

An iPhone library for shape recognition via the camera

I hope this falls within the "programming question" category.
I'm lightheaded from Googling (and reading every post on here about) the subject of computer vision, but I'm getting more confused than enlightened.
I have six abstract shapes printed on a piece of paper, and I would like the camera on the iPhone to identify these shapes (from different angles, lighting, etc.).
I used OpenCV a while back (in Java), and I have looked at other libraries out there. The caveat is that they seem either to rely on a jailbroken iPhone or to be so experimental and hard to use that I would probably end up spending days learning libraries only to figure out they didn't work.
I have thought of taking 1,000+ images of my shapes and training a Haar classifier. But again, if there is anything out there that is a bit easier to work with, I would really appreciate the advice and suggestions of people with a bit of experience.
Thank you for any suggestions or pieces of advice you might have :)
Have a look at OpenCV's SURF feature extraction (they also have a demo which uses it to detect objects).
SURF features are salient image features that are invariant to rotation and scale. Many algorithms detect objects by extracting such features from an image and then using a simple "bag of words" classification, comparing the set of extracted image features to the features of your "shapes". Even without referring to their spatial alignment, you can get good detection rates if you only have six shapes.
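The bag-of-words step described above can be sketched with toy data. The vocabulary, descriptors, and reference histograms below are entirely made up; a real pipeline would extract SURF descriptors, quantize them against a vocabulary of "visual words" learned offline (e.g., by k-means), and compare the resulting histograms.

```python
vocab = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]  # 3 toy "visual words"

def nearest_word(d):
    """Index of the vocabulary word closest to descriptor d (squared L2)."""
    return min(range(len(vocab)),
               key=lambda i: (d[0] - vocab[i][0]) ** 2 + (d[1] - vocab[i][1]) ** 2)

def bow_histogram(descriptors):
    """Quantize each descriptor to its nearest word; normalize the counts
    so the histogram is independent of how many features were found."""
    hist = [0] * len(vocab)
    for d in descriptors:
        hist[nearest_word(d)] += 1
    n = len(descriptors)
    return [c / n for c in hist]

def classify(descriptors, references):
    """Pick the reference shape whose histogram is closest (L1 distance)."""
    q = bow_histogram(descriptors)
    return min(references, key=lambda name: sum(
        abs(a - b) for a, b in zip(q, references[name])))

# Reference histograms for two known "shapes" (normally built offline).
refs = {"circle": [0.8, 0.1, 0.1], "square": [0.1, 0.8, 0.1]}

# A query whose descriptors mostly quantize to word 1 -> matches "square".
query = [(0.9, 1.1), (1.0, 0.9), (1.1, 1.0), (0.1, 0.0)]
print(classify(query, refs))  # square
```

With only six distinct shapes, even this orderless comparison can separate them well, which is why the answer suggests you may not need spatial verification at all.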
While not a library, Chris Greening explains how iPhone Sudoku Grab does its image recognition of puzzles in his post here. He does seem to recommend OpenCV, and not just for jailbroken devices.
Also, Glen Low talks a bit about how Instaviz does its shape recognition in an interview for the Mobile Orchard podcast.
I do shape recognition in my iPhone app Instaviz, and the routines are actually packaged into a library I call "Recog". The only problem is that it is meant for finger or mouse gesture recognition rather than image recognition. You pass the routines a set of points representing the gesture, and it tells you whether it's a square, circle, etc.
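The point-set approach described above can be sketched very simply. The Recog library's actual method is not public; this is only an illustration of one way to tell a circle from a square given a closed path of points: a circle's points sit at a near-constant distance from the centroid, while a square's corners stick out relative to its edge midpoints.

```python
import math

def classify_shape(points):
    """Classify a closed path of (x, y) points by how much the
    point-to-centroid radius varies."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    mean = sum(radii) / len(radii)
    spread = (max(radii) - min(radii)) / mean  # relative radius variation
    return "circle" if spread < 0.2 else "square"

# 16 points evenly spaced on the unit circle: constant radius.
circle = [(math.cos(t / 16 * 2 * math.pi), math.sin(t / 16 * 2 * math.pi))
          for t in range(16)]
# 8 points on the perimeter of a square: corners are farther out than edges.
square = [(x, y) for x in (-1, 0, 1) for y in (-1, 0, 1) if (x, y) != (0, 0)]

print(classify_shape(circle))  # circle
print(classify_shape(square))  # square
```

A production gesture recognizer would resample the stroke, normalize scale and rotation, and use more robust features, but the radius-variation idea captures why point-path recognition is a much easier problem than image recognition.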
I haven't yet decided on a licensing model but probably use a minimal per-seat royalty.