I would like to know the best approach to begin a project that performs graphical recognition of people. In other words, the computer will parse an image file and, through some heuristic, figure out whether it sees the shape of a person.
Are there any APIs or open-source libraries available, or is this too far ahead of the times?
Thanks
Are you searching for face detection or people detection?
If face-detection:
OpenCV comes with samples for face detection, and OpenCV 2.4-beta has samples for face recognition as well. Check here: http://github.com/Itseez/opencv/tree/master/samples/cpp
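For a quick start before digging into the C++ samples, a minimal face-detection sketch in Python might look like this (a sketch assuming the opencv-python package and an input image "face.jpg"; the cascade XML file ships with OpenCV):

```python
# Minimal Haar-cascade face detection sketch (assumes opencv-python is installed
# and that "face.jpg" exists; the cascade file is bundled with OpenCV).
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("face.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns one (x, y, w, h) rectangle per detected face.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces_out.jpg", img)
```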
If people-detection:
OpenCV comes with a sample for people-detection using HOG descriptors. Link
This is the result I obtained with the above code.
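If you want to experiment quickly before working through the C++ sample, a rough Python equivalent using OpenCV's built-in HOG people detector might look like this (the image path is an assumption):

```python
# People detection with OpenCV's default HOG + linear SVM people detector.
# Assumes opencv-python and an input image "street.jpg".
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

img = cv2.imread("street.jpg")

# detectMultiScale returns bounding boxes and confidence weights.
rects, weights = hog.detectMultiScale(img, winStride=(8, 8),
                                      padding=(8, 8), scale=1.05)
for (x, y, w, h) in rects:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imwrite("people_out.jpg", img)
```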
I have tried building my own Haar cascade for segmenting out lips, but I have not achieved very good results. Is there a Haar cascade for lips segmentation, just like the one we have for eyes? I searched the internet for such a cascade but couldn't find one.
My aim is to determine whether my mouth is wide open or closed.
If no one has built such a cascade, I will build one and open-source it.
If there are no Haar cascades in the OpenCV distribution that currently segment out lips, consider building your own. Take a look at the OpenCV project's guide to training Haar cascades for more details:
http://docs.opencv.org/trunk/doc/user_guide/ug_traincascade.html
I am assuming you already have positive and negative examples of what you want to classify, and so you can build your own cascades using your own ground truth data. The above guide will get you started on creating your own Haar cascades.
NB: I am usually against deferring people to external links without some sort of closure in my posts, but the process to do this is quite involved, and I can't invest the effort in repeating that information here.
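Once you have trained a cascade (the guide's training tool produces an XML file), using it is the same as using the bundled cascades. A minimal Python sketch, assuming your trained file is called lips_cascade.xml and you have an image "face.jpg" (both names are hypothetical):

```python
# Using a custom-trained Haar cascade for the mouth/lips region.
# "lips_cascade.xml" and "face.jpg" are assumptions; substitute your own files.
import cv2

lips_cascade = cv2.CascadeClassifier("lips_cascade.xml")
gray = cv2.cvtColor(cv2.imread("face.jpg"), cv2.COLOR_BGR2GRAY)

# Restricting the search to the lower half of a detected face (not shown here)
# usually reduces false positives.
lips = lips_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in lips:
    # The aspect ratio h / w of the detected region is one crude cue for
    # deciding whether the mouth is open or closed.
    print(x, y, w, h, "aspect ratio:", h / float(w))
```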
This will help you in MATLAB: use the Viola-Jones algorithm.
vision.CascadeObjectDetector System object: detects objects using the Viola-Jones algorithm.
The cascade object detector uses the Viola-Jones algorithm to detect people's faces, noses, eyes, mouths, or upper bodies. You can also use the Training Image Labeler to train a custom classifier to use with this System object. For details on how the function works, see Train a Cascade Object Detector.
I am working on a crowd-controlled sound system for a music festival. Music would be controlled by individuals and by the crowd as a whole, roughly 500 people.
While searching for crowd tracking techniques, I stumbled upon this one: http://www.mikelrodriguez.com/crowd-analysis/#density; Matlab code and a dataset are enclosed. Are you aware of similar techniques, maybe simpler ones, based e.g. on blob detection? Do you have an idea of how well this one would perform in a real-time scenario? Is there a known way to do this with e.g. OpenCV?
One of my former colleagues implemented something similar (controlling a few motors according to crowd movement) using optical flow. You can analyze the frames of video from a camera, calculate optical flow between frames, and use the values to estimate the crowd movement.
OpenCV has support for the above tasks, and comes with good code samples. A desktop should be able to do this in real-time (you might have to tweak the image resolution).
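A minimal sketch of the optical-flow part in Python with OpenCV (my colleague's version was in C++; the camera index and the use of mean flow magnitude as the "crowd movement" value are assumptions):

```python
# Dense optical flow (Farneback) on a live camera feed; the mean flow magnitude
# is a crude estimate of overall crowd movement. Camera index 0 is an assumption.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # flow[y, x] = (dx, dy) displacement between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    print("mean crowd motion:", magnitude.mean())  # feed this value to the sound system

    prev_gray = gray
```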
I am not exactly sure how to interface between a C++ program and a sound system. Pure Data (PD) is an alternative, but it might not have much support for motion analysis.
I'm looking for a good library for camera calibration. I'm aware of the Camera Calibration Toolbox for Matlab and of OpenCV. The problem with the toolbox is that it is in Matlab and not very friendly for modifications. OpenCV, on the other hand, seems to be less precise (see Suriansky).
So are there any alternatives?
The paper you cite is rubbish: whoever wrote it did not bother to actually read the code.
The Matlab toolbox uses exactly the same calibration algorithms as the OpenCV code: Zhang's for the initial estimation, followed by a round of bundle adjustment. The reason they are very similar is that the author of the original implementation of the Matlab toolbox worked for a while with the Intel team that produced the calibration code in the very first release of OpenCV.
Any differences among the results they produce are most likely due to different configurations of the control parameters.
I don't understand what you mean by "not very friendly for modification". If you have Matlab, and your application can use it (it's slow), J.Y. Bouguet's code is quite easy to read and modify. On the other hand, I always found the OpenCV codebase somewhat annoyingly low-level (but understandably so, given the stress on performance).
One alternative is the camera calibration functionality in the Computer Vision System Toolbox for MATLAB. Specifically, check out the Camera Calibrator and the Stereo Camera Calibrator apps.
As mentioned by Francesco, both Matlab and OpenCV use the Zhang method. As of 2022, OpenCV offers a larger range of distortion parameters, thus offering more precision in the pose estimation results. However, not all parameters may be required at once. More details on these parameters are provided in the documentation:
https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html#ga3207604e4b1a1758aa66acb6ed5aa65d
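For what it's worth, a hedged sketch of the basic OpenCV calibration call in Python (the checkerboard size and image list are assumptions; the CALIB_RATIONAL_MODEL flag enables some of the extra distortion coefficients mentioned above):

```python
# Zhang-style calibration from checkerboard images. The board size (9x6 inner
# corners) and the image file list are assumptions; replace with your own.
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners per row and column
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.jpg"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# CALIB_RATIONAL_MODEL adds the k4..k6 radial terms to the distortion vector.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None,
    flags=cv2.CALIB_RATIONAL_MODEL)
print("RMS reprojection error:", rms)
```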
As an alternative solution to both OpenCV and Matlab, I would definitely recommend CalibPro, a web platform that allows you to upload your data and get your calibration parameters in a matter of minutes without a single line of code. CalibPro is fully compatible with OpenCV camera parameters. The platform is rapidly developing and will provide more than the pinhole camera model in the near future.
[Disclaimer]: I am the founder of CalibPro. I am happy to take any feedback on our platform or help people with their calibration.
I am just doing some research into image processing and would appreciate it if someone could point me in the right direction. I want to compare image 'A', which is a picture of a person's face, with images stored in a database (B, C, D, E, etc.) which are also pictures of faces, to see whether person 'A' is already in the database.
Several questions:
1. How is face recognition comparison usually done? (Do you extract features, e.g. eyes/mouth, and compare them to other images?)
2. Are there prebuilt libraries that are able to do a comparison between images, or do I need to write my own algorithm?
3. Where can I start with this? (I would appreciate some references/reading material.)
Yes, you identify, extract, and quantify various aspects of human faces, such as the distance between pupils, the width of the mouth, the percentage of head height at which the tip of the nose sits, etc.
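As a toy illustration of the "measure features" idea, here is a hedged Python/OpenCV sketch that estimates the distance between the two eyes with the bundled eye cascade (the file name is an assumption, and real recognition systems use far richer features than this):

```python
# Rough inter-eye distance measurement using OpenCV's bundled eye cascade.
# "face.jpg" is an assumption; real recognition pipelines use many more features.
import cv2
import numpy as np

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
gray = cv2.cvtColor(cv2.imread("face.jpg"), cv2.COLOR_BGR2GRAY)

eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
if len(eyes) >= 2:
    # Use the centres of the first two detected eye regions as crude pupil estimates.
    centres = [(x + w / 2.0, y + h / 2.0) for (x, y, w, h) in eyes[:2]]
    dist = np.linalg.norm(np.subtract(centres[0], centres[1]))
    print("approximate inter-eye distance in pixels:", dist)
```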
There is a company, Luxand, which makes software to do this, and I think they license it. Last time I looked (2009?) they didn't have an Objective-C library. They do have an app that claims to merge faces from photographs, so you can see what the offspring of any two people would look like, but it is very cheesy, with lots of hard-coded faces. (If you cross a dog with a teapot, you get the same baby face as from crossing two real faces.)
AFAIK, there is nothing in the iOS SDK that does this.
I would just Google "face recognition" and start reading. Good luck.
I would go with compiling OpenCV for the iPhone ( http://computer-vision-talks.com/2011/02/building-opencv-for-iphone-in-one-click/ ), and then implementing one of the classical approaches to face recognition, such as eigenfaces ( http://www.shervinemami.info/faceRecognition.html ).
But don't expect miracles: the accuracy will be low, and the app will be slow.
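A hedged sketch of the eigenfaces step using OpenCV's contrib "face" module in Python (on the iPhone you would call the equivalent C++ API; the training images, labels, and image size below are assumptions):

```python
# Eigenfaces recognition sketch using the OpenCV contrib "face" module
# (pip install opencv-contrib-python). Image paths and labels are assumptions.
import cv2
import numpy as np

def load_gray(path, size=(100, 100)):
    # Eigenfaces requires all images to be grayscale and the same size.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.resize(img, size)

# One label per known person in the database (B, C, D, ...).
train_images = [load_gray("db/personB.jpg"), load_gray("db/personC.jpg")]
train_labels = np.array([0, 1])

model = cv2.face.EigenFaceRecognizer_create()
model.train(train_images, train_labels)

# Compare the query face A against the database; a lower distance means a closer match.
label, confidence = model.predict(load_gray("query_A.jpg"))
print("closest database identity:", label, "distance:", confidence)
```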
Also, when you say face recognition is difficult, doesn't the first link show how easy it is to detect faces in a picture?
The face detection from the first link only detects the face, i.e. it just checks whether there is a face in the image, which you can then pass as input to the recognition algorithm.
Face recognition is very difficult: you need to extract some kind of "features" and perform some measurements... iPhone hardware isn't very appropriate for this job.
Yes, you can check here
http://maniacdev.com/2011/11/tutorial-easy-face-detection-with-core-image-in-ios-5/
for a tutorial, and here
http://maniacdev.com/2011/12/open-source-library-for-adding-easy-face-to-your-ios-app-with-the-free-face-com-api/
for a free web service.
3. I suggest Google Scholar (http://scholar.google.it/scholar?q=face+recognition&hl=it&btnG=Cerca&lr=), but I think that if you want to write your own algorithm you will need a lot of spare time :)
I am trying to take an image and extract handwritten text so that it can be read easily and zoomed in on. I would like to convert the text to vector paths.
I am not aware of any libraries that would make this as painless as possible. Any help is greatly appreciated. Examples are nice too :)
Simple iPhone Image Processing (on Google code) contains all the primitive tools you will need:
Canny edge detection
Histogram equalisation
Skeletonisation
Thresholding (adaptive and global)
Gaussian blur (used as a preprocessing step for Canny edge detection)
Brightness normalisation
Connected region extraction
Resizing (uses interpolation)
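The same primitives exist in OpenCV, so if you prototype on a desktop first, a hedged Python sketch of the blur, threshold, and edge-detection steps might look like this (the file name and parameter values are assumptions; tune them for your images):

```python
# Preprocessing pipeline for isolating handwritten strokes:
# grayscale -> Gaussian blur -> adaptive threshold -> Canny edges.
# "note.jpg" and the parameter values are assumptions; this sketch targets OpenCV 4.x.
import cv2

gray = cv2.cvtColor(cv2.imread("note.jpg"), cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# Adaptive thresholding copes better with uneven lighting than a global threshold.
binary = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 31, 10)

edges = cv2.Canny(blurred, 50, 150)

# Contours of the binary image are a starting point for tracing vector paths.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print("found", len(contours), "candidate strokes")
```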
The only program I know of for the iPhone that does handwriting recognition is HWPEN. Unfortunately, it's not a library but a full application and (to make matters worse) it requires a Jailbroken phone.
I fear you must either try to get the source for HWPEN or reverse engineer it to obtain the code you need.
Barring that, you may want to write your own. There are several studies on handwriting recognition that may help.