iOS image comparison - iPhone

I am just doing some research into image processing and would appreciate it if someone could point me in the right direction. I want to compare image 'A', which is a picture of a person's face, with images stored in a database (B, C, D, E, etc.) which are also pictures of faces. I want to compare them to see if person 'A' is already in the database.
Several questions:
1. How is face recognition comparison usually done? (Do you extract features, e.g. eyes/mouth, and compare them to other images?)
2. Are there prebuilt libraries that are able to do a comparison between images, or do I need to write my own algorithm?
3. Where can I start with this? (I would appreciate some references/reading material.)

Yes, you identify, extract and quantify various aspects of human faces, such as the distance between the pupils, the width of the mouth, the percentage of head height at which the tip of the nose sits, etc.
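As a toy illustration of that kind of comparison (Python/NumPy; the landmark names and the choice of measurements are placeholders of mine, not any particular library's output):

```python
import numpy as np

def face_signature(lm):
    """Build a crude, scale-normalised feature vector from facial landmarks.
    `lm` is a dict of (x, y) points assumed to come from whatever detector
    you use; the key names here are hypothetical."""
    lm = {k: np.asarray(v, dtype=float) for k, v in lm.items()}
    head_height = np.linalg.norm(lm["chin"] - lm["forehead"])  # normalising length
    return np.array([
        np.linalg.norm(lm["left_pupil"] - lm["right_pupil"]) / head_height,  # pupil distance
        np.linalg.norm(lm["mouth_left"] - lm["mouth_right"]) / head_height,  # mouth width
        (lm["nose_tip"][1] - lm["forehead"][1]) / head_height,               # nose-tip height ratio
    ])

def face_distance(lm_a, lm_b):
    """Smaller distance = more similar faces, by these crude measures."""
    return np.linalg.norm(face_signature(lm_a) - face_signature(lm_b))
```

In a real system you would compare against every database entry and accept a match only below some tuned threshold.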
There is a company, Luxand, which makes software to do this, and I think they license it. Last time I looked (2009?) they didn't have an Objective-C library. They do have an app that claims to merge faces from photographs, so you can see what the offspring of any two people would look like, but it is very cheesy, with lots of hard-coded faces. (If you cross a dog with a teapot, you get the same baby face as from crossing two real faces.)
AFAIK, there is nothing in the iOS SDK that does this.
I would just Google "face recognition" and start reading. Good luck.

I would go with compiling OpenCV for the iPhone (http://computer-vision-talks.com/2011/02/building-opencv-for-iphone-in-one-click/) and then implementing one of the classical approaches to face recognition, like eigenfaces (http://www.shervinemami.info/faceRecognition.html).
But don't expect miracles: the accuracy will be low, and the app will be slow.
Also, when you say face recognition is difficult, doesn't the first link show how easy it is to detect faces in a picture?
The face detection from the first link only detects the face. It just tells you whether (and where) there is a face in the image, which you can then pass as input to the recognition algorithm.
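To make the recognition step concrete, here is a minimal eigenfaces sketch in Python/NumPy (easier to prototype on the desktop before porting to OpenCV on the phone); the file names, working resolution and number of components are placeholder assumptions:

```python
import numpy as np
import cv2

SIZE = (64, 64)  # placeholder working resolution

def load_face(path):
    """Load, grayscale, resize and flatten a pre-cropped face (detection happens beforehand)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.resize(img, SIZE).astype(np.float64).ravel()

# Hypothetical database of already-detected, cropped faces B, C, D, E...
db_paths = ["B.png", "C.png", "D.png", "E.png"]
db = np.stack([load_face(p) for p in db_paths])            # one flattened face per row

mean = db.mean(axis=0)
# PCA via SVD of the mean-centred data: the rows of Vt are the "eigenfaces"
_, _, Vt = np.linalg.svd(db - mean, full_matrices=False)
eigenfaces = Vt[:3]                                         # keep a few components

def project(face):
    return eigenfaces @ (face - mean)

db_coords = np.array([project(f) for f in db])

# Compare query face A against the database: nearest neighbour in eigenface space
query = project(load_face("A.png"))
dists = np.linalg.norm(db_coords - query, axis=1)
best = int(np.argmin(dists))
print("Closest match:", db_paths[best], "distance:", round(float(dists[best]), 1))
# In practice you would also threshold the distance to decide "not in the database at all".
```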

1. Face recognition is very difficult; you need to extract some kind of "features" and perform some measurements on them, and iPhone hardware isn't very well suited to this job.
2. Yes, you can check here
http://maniacdev.com/2011/11/tutorial-easy-face-detection-with-core-image-in-ios-5/
for a tutorial and here
http://maniacdev.com/2011/12/open-source-library-for-adding-easy-face-to-your-ios-app-with-the-free-face-com-api/
for a free web service (a minimal OpenCV sketch of the detection step is shown after this list for comparison).
3. I suggest Google Scholar (http://scholar.google.it/scholar?q=face+recognition&hl=it&btnG=Cerca&lr=), but I think that if you want to write your own algorithm you will need a lot of spare time :)
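Regarding point 2, the detection step can also be prototyped outside the app in a few lines; this is a hedged sketch using OpenCV's bundled Haar cascade (the photo path is a placeholder), doing roughly what the Core Image tutorial above does natively on iOS 5:

```python
import cv2

# Haar cascade shipped with OpenCV. This is detection only: it tells you *where*
# faces are, not *whose* they are - recognition is the separate step discussed above.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("photo.jpg")                     # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    face_crop = img[y:y + h, x:x + w]             # pass this crop to the recogniser
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
```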

Related

Find image within image (template matching?)

I need to find the location of an image that the user provides within an image that I provide.
It is safe to assume, at the time of the analysis, that the user-provided image is certain to be contained within the image it is compared against.
I've looked through, and even have some experience with, Core ML and Vision image classification; however, I am struggling to convince myself that it is the correct way to approach this problem. I feel like the way "feature values" are handled in Vision is almost the reverse of what I'm looking for.
My question: Is there a feature of Core ML or Vision that tackles this particular problem head on?
Other information that may be needed:
It is not safe to assume that the images provided are pixel-to-pixel perfect, due to possible resolution differences.
They may also be provided in any shape, although it is possible to crop them to a standardised shape before analysis.
Rotation will also need to be accounted for.
There would not be cases where the image appears in the other image twice.
Take a look at some of the feature detection and matching algorithms.
For example, you could use SIFT (the scale-invariant feature transform algorithm) with RANSAC (the random sample consensus algorithm) to do exactly what you described.
If you are using OpenCV there are plenty of such algorithms which you can easily use. (FAST, Shi-Tomasi, etc.)
I think you need something like this example in OpenCV.
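A rough sketch of that SIFT + RANSAC pipeline in Python/OpenCV (prototype it on the desktop first, then move it into the app via the OpenCV framework); the file names and thresholds are placeholders:

```python
import cv2
import numpy as np

template = cv2.imread("user_image.png", cv2.IMREAD_GRAYSCALE)   # image to find
scene = cv2.imread("my_image.png", cv2.IMREAD_GRAYSCALE)        # image to search in

sift = cv2.SIFT_create()
kp_t, des_t = sift.detectAndCompute(template, None)
kp_s, des_s = sift.detectAndCompute(scene, None)

# Match descriptors and keep only distinctive matches (Lowe's ratio test)
matches = cv2.BFMatcher().knnMatch(des_t, des_s, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Estimate where the template sits in the scene, rejecting outliers with RANSAC.
# The homography absorbs the scale and rotation differences mentioned in the question.
src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_s[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Project the template's corners into the scene to get its location
h, w = template.shape
corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
location = cv2.perspectiveTransform(corners, H)
print(location.reshape(-1, 2))
```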

Modeling a Physical Place inside iPhone Application

I need to find a way to model a physical place inside an iPhone application. For example, I want to be able to take images of a restaurant and then use some tool or programming API to model the restaurant as a 3D place and let the user navigate and explore the place and its rooms.
I have thought about HTML5 inside a web view, but I don't think WebGL is compatible with the iPhone web view (Safari engine).
Can you please recommend a method, API, Commercial Library or anything to help me achieve this task?
First, you need to be able to display 3D models on the iPhone. One of the most popular 3D engines is Unity3D:
http://unity3d.com/
It is extremely easy to start playing with Unity3D. You even have a free license with limited features:
http://unity3d.com/unity/licenses
Then you need to reconstruct a 3D model from pictures. This is not a trivial problem, so it is better if you know some computer vision. You can try to play with OpenCV:
http://opencv.willowgarage.com/wiki/
Best regards.
Actually, Nuke from The Foundry is a decent start toward the future of creating computer models from images.
Basically it takes a high-contrast point and tracks it through successive frames. Given hundreds and thousands of tracked points, the next step is to calculate the perspective change between points.
Say two points are a known pixel distance apart at time zero, and a certain time later they are a different distance apart. That change could just be a bad tracking point. But assuming the two points are tracking perfectly, the distance change could be caused by the camera moving laterally or rotationally, and in real space a point further away from you shifts in perspective differently than a closer point. This perspective change is a mathematical certainty.
Initially the tracking is typically used to stabilize a piece of footage. But the data the software produces while analyzing the footage can be saved; it is often called a point cloud. A cluster of many nearby points that track very closely together usually means the points are parts of the same surface, so a model can be built.
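To make the "tracked points + camera motion -> point cloud" idea concrete, here is a hedged two-view sketch using OpenCV's geometry functions (Nuke's solver is far more sophisticated); `pts1`/`pts2` are the same tracked features in two frames and the intrinsic matrix `K` is assumed known:

```python
import cv2
import numpy as np

def two_view_point_cloud(pts1, pts2, K):
    """pts1, pts2: Nx2 float32 arrays of the same tracked points in two frames.
    K: 3x3 camera intrinsic matrix (assumed known here)."""
    # Estimate the camera motion between the two frames from the correspondences
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

    # Projection matrices: first camera at the origin, second displaced by (R, t)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])

    # Triangulate: each tracked pair becomes one 3D point of the "point cloud"
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T          # Nx3 Euclidean points
```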
But my friend, we do not yet have the speed and software that can do that perfectly. Otherwise all the CG artists out there would have nothing to model in Maya except fantasy monsters and spaceships that don't exist yet...

iPhone Photo Smile Manipulation Applications

I'm trying to create an iPhone application that allows a user to take a photo of their smile, then drag and drop new smiles from a small predefined list.
I know there are a lot of photo manipulation apps, and I have seen similar concepts that allow smile manipulation, but not quite what I am looking for. The problem is knowing where to start. How can I create this effect? Would the OpenCV iPhone port be the best way to go? Or perhaps something using OpenGL? I'm willing to do some research, but I find that experience often goes a long way, so any advice or insight would be much appreciated.
That's a pretty cool application. I'd recommend some type of dense optical flow alignment method, which would strike a balance between global consistency and local consistency.
In short, you'll want some generic mouth shapes in a gallery. Then, you can crop the user's mouth region and warp it to the gallery shape to show what their smile in that shape will look like.
Ce Liu's Optical Flow implementation might be an interesting starting point. You should be able to port that reasonably easily.
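As a rough illustration of the crop-and-warp idea, using OpenCV's built-in Farnebäck dense flow rather than Ce Liu's code (the mouth crops are assumed to be already extracted, and the file names are placeholders):

```python
import cv2
import numpy as np

gallery = cv2.imread("gallery_smile.png")      # one of the predefined smile shapes
user = cv2.imread("user_mouth_crop.png")       # mouth region cropped from the user's photo
user = cv2.resize(user, (gallery.shape[1], gallery.shape[0]))

g_gray = cv2.cvtColor(gallery, cv2.COLOR_BGR2GRAY)
u_gray = cv2.cvtColor(user, cv2.COLOR_BGR2GRAY)

# Dense flow from the gallery shape to the user's mouth: for each gallery pixel
# it says where to sample in the user image.
# Positional args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
flow = cv2.calcOpticalFlowFarneback(g_gray, u_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)

h, w = g_gray.shape
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
map_x = (grid_x + flow[..., 0]).astype(np.float32)
map_y = (grid_y + flow[..., 1]).astype(np.float32)

# The user's mouth texture warped into the gallery smile shape
warped = cv2.remap(user, map_x, map_y, interpolation=cv2.INTER_LINEAR)
cv2.imwrite("user_smile_in_gallery_shape.png", warped)
```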

Is there an imaging library that can make you look thinner?

Very odd question, I know, but this is a problem a potential client handed me today.
We assume we have a full-length photo of a person. We want to generate a thinner image of that person. Obviously, one way would be to just compress the width of the image, but that would result in various distortions that wouldn't be realistic.
I'd like to keep this an open-source implementation so if anybody knows of a library that can identify certain parts of the body and slim each in a way that is most realistic, I'd like to know.
This is obviously something that could be done by hand but we need a solution that works without user interaction.
You should look into seam-carving algorithms. The algorithm is very simple to implement, and there are many implementations online. It seems ImageMagick has it too, called "Liquid Rescale".
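A bare-bones vertical-seam-removal sketch in Python/OpenCV, just to show the mechanics (ImageMagick's Liquid Rescale is the production-quality version of the same idea):

```python
import numpy as np
import cv2

def energy(img):
    """Simple gradient-magnitude energy map."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float64)
    return np.abs(cv2.Sobel(gray, cv2.CV_64F, 1, 0)) + np.abs(cv2.Sobel(gray, cv2.CV_64F, 0, 1))

def remove_one_vertical_seam(img):
    h, w = img.shape[:2]
    M = energy(img)

    # Dynamic programming: cumulative minimum energy from top to bottom
    for i in range(1, h):
        left = np.r_[np.inf, M[i - 1, :-1]]
        up = M[i - 1]
        right = np.r_[M[i - 1, 1:], np.inf]
        M[i] += np.minimum(np.minimum(left, up), right)

    # Backtrack the lowest-energy seam and drop one pixel per row
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(M[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(M[i, lo:hi]))

    keep = np.ones((h, w), dtype=bool)
    keep[np.arange(h), seam] = False
    return img[keep].reshape(h, w - 1, 3)

def slim(img, pixels):
    """Remove `pixels` low-energy vertical seams (slow; illustration only)."""
    for _ in range(pixels):
        img = remove_one_vertical_seam(img)
    return img
```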
I assume that even the detection of body parts in photos is already too hard a challenge for algorithms, unless the photos are all very similar (e.g. same background, same pose, etc.).
I once played around with developing algorithms for skin smoothing. I was able to detect skin areas pretty well by converting colors to the LAB space and selecting pixels similar to skin sample colors learnt with a support vector machine from various sample images. Once you have that, you could run something like a liquify/contract algorithm for slimming.
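A rough sketch of that LAB + SVM skin-detection step (the sample patch files and the use of scikit-learn are my assumptions, not what was actually used back then):

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def lab_pixels(path, step=25):
    """Return a subsampled Nx3 array of LAB pixel values from an image file."""
    img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2LAB)
    return img.reshape(-1, 3)[::step].astype(np.float64)

# Hypothetical cropped sample patches: pure skin vs. background/clothing
skin = np.vstack([lab_pixels(p) for p in ["skin1.png", "skin2.png"]])
other = np.vstack([lab_pixels(p) for p in ["bg1.png", "bg2.png"]])

X = np.vstack([skin, other])
y = np.r_[np.ones(len(skin)), np.zeros(len(other))]
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)    # learn what "skin-like" LAB colours look like

# Classify every pixel of the full-length photo into a skin mask
# (per-pixel SVM prediction is slow; fine for an offline prototype)
photo = cv2.imread("person.jpg")
lab = cv2.cvtColor(photo, cv2.COLOR_BGR2LAB).reshape(-1, 3).astype(np.float64)
mask = (clf.predict(lab).reshape(photo.shape[:2]) * 255).astype(np.uint8)
cv2.imwrite("skin_mask.png", mask)
```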
I wouldn't expect satisfying results though unless you spend huge amounts of time on this.

Is there a library that can do raster to vector conversion, for the iPhone?

I am trying to take an image and extract handwritten text so that it can be read easily and zoomed in on. I would like to convert the text to vector paths.
I am not aware of any libraries that would make this as painless as possible. Any help is greatly appreciated. Examples are nice too :)
Simple iPhone Image Processing (on Google Code) contains all the primitive tools you will need:
Canny edge detection
Histogram equalisation
Skeletonisation
Thresholding (adaptive and global)
Gaussian blur (used as a preprocessing step for Canny edge detection)
Brightness normalisation
Connected region extraction
Resizing (uses interpolation)
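Those primitives map onto a fairly direct prototype pipeline: threshold the page, trace the ink outlines, and simplify them into polygon paths you can redraw at any zoom level. A hedged sketch in Python/OpenCV (the input path and the epsilon/area values are placeholders):

```python
import cv2

img = cv2.imread("handwriting.jpg", cv2.IMREAD_GRAYSCALE)

# Adaptive thresholding copes with uneven lighting on the page; ink becomes white
binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 25, 10)

# Each connected ink region yields one or more closed contours...
contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

# ...which we simplify into short polygon paths. These point lists are the vector
# data; on the iPhone you would draw them as UIBezierPath/CGPath segments so they
# stay sharp at any zoom level.
paths = [cv2.approxPolyDP(c, 1.5, True).reshape(-1, 2)
         for c in contours if cv2.contourArea(c) > 5]

print(len(paths), "vector paths extracted")
```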
The only program I know of for the iPhone that does handwriting recognition is HWPEN. Unfortunately, it's not a library but a full application and (to make matters worse) it requires a Jailbroken phone.
I fear you must either try to get the source for HWPEN or reverse engineer it to obtain the code you need.
Barring that, you may want to write your own. There are several studies on handwriting recognition that may help.