I am building an ancient coin recognition system using MATLAB. What I have done so far is:
convert to grayscale
remove noise using a Gaussian filter
contrast enhancement
edge detection using the Canny edge detector
Now I want to extract features for classification. The features I am thinking of selecting are roundness, area, colour, SIFT and SURF. My problem is how I can apply the SIFT and SURF algorithms to my project. I couldn't find built-in functions for either.
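As a minimal sketch, the pipeline described above (plus area and a simple roundness measure via regionprops) might look like this in MATLAB; the file name, the Gaussian sigma and the assumption that the coin is brighter than the background are placeholders, and it assumes a reasonably recent Image Processing Toolbox (imgaussfilt, imbinarize, bwareafilt):

    I     = imread('coin.jpg');          % hypothetical input image
    gray  = rgb2gray(I);                 % 1) convert to grayscale
    den   = imgaussfilt(gray, 2);        % 2) remove noise with a Gaussian filter (sigma = 2)
    enh   = imadjust(den);               % 3) contrast enhancement
    edges = edge(enh, 'canny');          % 4) Canny edge detection

    % Area and roundness (circularity) of the coin region
    bw    = imbinarize(enh);                     % assumes the coin is brighter than the background
    bw    = bwareafilt(imfill(bw, 'holes'), 1);  % fill holes, keep the largest blob
    stats = regionprops(bw, 'Area', 'Perimeter');
    roundness = 4*pi*stats.Area / stats.Perimeter^2;   % equals 1 for a perfect circle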
You can find SIFT as a C implementation with MATLAB bindings at: http://www.vlfeat.org/index.html
For anyone else coming across this thread as I did, I found that the implementation at http://www.vlfeat.org/index.html was far more than I required and also fairly hard to integrate with my code.
The following link, http://robwhess.github.io/opensift/, has an implementation of just the SIFT algorithm, accompanied by an example executable, with the source code available (unlike http://www.cs.ubc.ca/~lowe/keypoints/, which only provides the SIFT binary executable).
You can find a MATLAB implementation of SIFT features here: http://www.cs.ubc.ca/~lowe/keypoints/
SIFT is an important and useful algorithm in computer vision, but it seems that it is not part of MATLAB or any of its toolboxes.
Why? Does MATLAB offer something better or equivalent?
MATLAB has SURF available as part of the Computer Vision Toolbox, but not SIFT: http://www.mathworks.com/help/vision/ref/surfpoints-class.html. The two algorithms are broadly similar, with some minor (but crucial) differences, such as SURF's use of integral images and a fast Hessian detector. I won't go into those differences in further detail, but you can certainly read up on the work here: http://www.vision.ee.ethz.ch/~surf/eccv06.pdf. As to why MathWorks decided to implement SURF instead of SIFT, it could be any reason really; AFAIK there is no official explanation, and both algorithms are subject to patents.
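As a minimal sketch, detecting and describing SURF features with the Computer Vision Toolbox looks roughly like this (the image file name is a placeholder):

    I = rgb2gray(imread('coin.jpg'));        % hypothetical grayscale input
    points = detectSURFFeatures(I);          % fast-Hessian interest points
    [features, validPoints] = extractFeatures(I, points);   % SURF descriptors

    % Show the 50 strongest detections
    imshow(I); hold on;
    plot(validPoints.selectStrongest(50));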
However, if you want to use SIFT within MATLAB, one suggestion I have is to use the VLFeat toolbox, which provides an open-source C and MATLAB implementation of the SIFT keypoint detection and matching framework. It also has a variety of other nice computer vision algorithms implemented, and VLFeat is one of the libraries I know of that manages to compute SIFT as accurately as the original patented algorithm.
If you're dead set on using SIFT, check VLFeat out! Specifically, check out the official VLFeat tutorial on SIFT using the MATLAB wrappers: http://www.vlfeat.org/overview/sift.html
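Following that tutorial, a minimal sketch looks roughly like this (it assumes VLFeat is installed and vl_setup has been run to put its MATLAB wrappers on the path):

    I = single(rgb2gray(imread('coin.jpg')));   % vl_sift expects a single-precision grayscale image
    [frames, descriptors] = vl_sift(I);
    % frames is 4-by-N: [x; y; scale; orientation] for each keypoint
    % descriptors is 128-by-N: the corresponding SIFT descriptors

    % Matching between two images can then be done with vl_ubcmatch:
    % [matches, scores] = vl_ubcmatch(descriptors1, descriptors2);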
I calibrated a camera with a checkerboard pattern using OpenCV and MATLAB. I got 0.489 and 0.187 for the mean re-projection errors in OpenCV and MATLAB respectively. From the looks of it, MATLAB is more precise. But my adviser feels that both MATLAB and OpenCV use the same Bouguet algorithm and should report the same error (or close to it). Is that so? Can someone explain the difference between the MATLAB and OpenCV camera calibration methods?
Thanks!
Your adviser is correct in that both MATLAB and OpenCV use essentially the same calibration algorithm. However, MATLAB uses the Levenberg-Marquardt non-linear least squares algorithm for the optimization (see documentation), whereas OpenCV uses gradient descent. I would guess that this accounts for most of the difference in the reprojection errors.
Additionally, MATLAB and OpenCV use different algorithms for checkerboard detection.
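If you want to compare the two yourself, a minimal sketch of the MATLAB side looks roughly like this (the file names and square size are placeholders); also keep in mind that the two tools do not necessarily report the error the same way, e.g. OpenCV's calibrateCamera returns an RMS re-projection error:

    % Hypothetical list of checkerboard images and square size (mm).
    imageFiles = {'calib01.jpg', 'calib02.jpg', 'calib03.jpg'};
    squareSize = 25;

    [imagePoints, boardSize] = detectCheckerboardPoints(imageFiles);
    worldPoints = generateCheckerboardPoints(boardSize, squareSize);
    params = estimateCameraParameters(imagePoints, worldPoints);

    params.MeanReprojectionError     % the number MATLAB reports
    showReprojectionErrors(params);  % per-image breakdown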
I have created a cascade.xml for detecting face images using the opencv_traincascade utility. I am using LBP or HOG feature-based cascades since they are much faster, and I do all my testing in MATLAB using vision.CascadeObjectDetector. But I am unsure whether MATLAB is capable of understanding and computing LBP/HOG features for a given cascade.xml file.
Is this the correct approach for testing a cascade detector? If not, what platform should I be using for testing?
Yes, vision.CascadeObjectDetector supports LBP and HOG, as well as Haar features, as of version R2013a.
Furthermore, you can now train your detector using the trainCascadeObjectDetector function, which is easier to use than opencv_traincascade. There is also the trainingImageLabeler app, which gives you a nice GUI to label objects of interest in your images.
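A minimal sketch of running your OpenCV-trained cascade.xml through vision.CascadeObjectDetector (the file names are placeholders):

    % Load a cascade trained with opencv_traincascade (or trainCascadeObjectDetector).
    detector = vision.CascadeObjectDetector('cascade.xml');

    I = imread('test_face.jpg');    % hypothetical test image
    bboxes = step(detector, I);     % one [x y width height] row per detection

    annotated = insertObjectAnnotation(I, 'rectangle', bboxes, 'face');
    imshow(annotated);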
I want to find the region of interest (ROI) of a color image in MATLAB.
Is it a good idea to use OpenCV inside MATLAB to find the ROI?
Please help me with how to achieve this.
MATLAB and OpenCV are two completely different things. OpenCV is a vision library that you can use from C++ or Python, whereas MATLAB is a programming language with its own libraries (toolboxes); among those, you may be interested in the Image Processing Toolbox and the Computer Vision System Toolbox.
If you wish to manually mark an ROI in MATLAB, there are a few ways to do it. The easiest is to use
BW = roipoly
See the roipoly documentation.
To the best of my knowledge, no generic function similar to MATLAB's roipoly exists in OpenCV.
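A minimal sketch of marking an ROI interactively on a colour image and applying the resulting mask (the file name is a placeholder):

    RGB = imread('image.jpg');   % hypothetical colour image
    imshow(RGB);
    BW = roipoly;                % draw a polygon on the displayed image; BW is a logical mask

    % Keep only the pixels inside the polygon, in all three channels.
    masked = RGB;
    masked(~repmat(BW, [1 1 3])) = 0;
    imshow(masked);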
I'm trying to assess the correctness of my SURF descriptor implementation with the de facto standard framework by Mikolajczyk et al. I'm using OpenCV to detect and describe SURF features, and I use the same feature positions as input to my descriptor implementation.
To evaluate descriptor performance, the framework requires evaluating detector repeatability first. Unfortunately, the repeatability test expects a list of feature positions along with ellipse parameters defining the size and orientation of an image region around each feature, whereas OpenCV's SURF detector only provides feature position, scale and orientation.
The related paper proposes to compute those ellipse parameters iteratively from the eigenvalues of the second moment matrix. Is this the only way? As far as I can see, this would require some fiddling with OpenCV. Is there no way to compute those ellipse parameters afterwards (e.g. in Matlab) from the feature list and the input image?
Has anyone ever worked with this framework and could assist me with some insights or pointers?
You can use the file evaluation.cpp from OpenCV. It is in the directory OpenCV/modules/features2d/src. In this file you can use the class EllipticKeyPoint, which has a function to convert a KeyPoint to an EllipticKeyPoint.
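Alternatively, if you would rather compute the region parameters outside OpenCV, here is a rough MATLAB sketch under the simplifying assumption that each SURF feature is a circular region of radius r = k * scale (k is an assumption you have to match to your detector's region definition). The repeatability framework stores each region as u v a b c with a(x-u)^2 + 2b(x-u)(y-v) + c(y-v)^2 = 1, so a circle becomes a = c = 1/r^2, b = 0; double-check the header lines against the reference region files.

    % Save as write_vgg_regions.m. keypoints is an N-by-3 matrix [x y scale]
    % exported from OpenCV; k scales the SURF scale to a region radius (assumption).
    function write_vgg_regions(filename, keypoints, k)
        r = k * keypoints(:,3);                 % region radius per feature
        a = 1 ./ (r.^2);                        % circle: a = c = 1/r^2, b = 0
        regions = [keypoints(:,1:2), a, zeros(size(r)), a];

        fid = fopen(filename, 'w');
        fprintf(fid, '1.0\n');                       % header: descriptor dimension (no descriptor here)
        fprintf(fid, '%d\n', size(regions, 1));      % header: number of regions
        fprintf(fid, '%f %f %f %f %f\n', regions');  % one "u v a b c" line per region
        fclose(fid);
    end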
Honestly, I have never worked with this framework, but I think you should look at this paper about a performance evaluation of local descriptors.