Combine detectors in bag of visual words - MATLAB

I am using Bag of visual words for classification.
I have quantized the SIFT descriptors into 100 visual words per image, encoded each image as a histogram, and completed the classification.
Now I want to combine two different detectors and descriptors, i.e. SIFT and SURF, which means neither the number of keypoints nor the descriptor dimensionality will match (SIFT is 128-D, SURF is 64-D).
What will be the easiest way to combine them?
If, for each image, I encode one histogram for SIFT (which will be 100x1) and another for SURF (another 100x1) and then stack them together into a 200x1 histogram, will that be correct?
Any other way?
Thanks a lot in advance.

In bag of words, the number of keypoints and the descriptor size are irrelevant: once you generate the codebook, you get a histogram whose dimensions depend only on your codebook size. The histogram is also normalized, so it does not depend on the number of features detected per image. If you have both SIFT and SURF features, all you need to do is generate 2 codebooks (one per descriptor type), build one histogram per codebook for each image, and concatenate the two histograms to get a single feature vector.
A brief overview of the method is mentioned here:
http://en.wikipedia.org/wiki/Bag-of-words_model_in_computer_vision
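The concatenation step described above can be sketched as follows. This is only an illustration in NumPy (the thread itself is MATLAB-based); the function name is mine, and the 100-word codebook size comes from the question:

```python
import numpy as np

def combined_bow_vector(hist_sift, hist_surf):
    """Concatenate two bag-of-words histograms (e.g. 100 SIFT words and
    100 SURF words) into one feature vector.

    Each histogram is L1-normalized first, so the combined vector does not
    depend on how many keypoints each detector happened to fire."""
    h1 = np.asarray(hist_sift, dtype=float)
    h2 = np.asarray(hist_surf, dtype=float)
    h1 = h1 / max(h1.sum(), 1e-12)   # guard against an empty histogram
    h2 = h2 / max(h2.sum(), 1e-12)
    return np.concatenate([h1, h2])  # 100 + 100 -> 200-dimensional vector
```

Normalizing each half separately keeps one detector from dominating the vector when it fires many more keypoints than the other.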

Related

How can I use SURF features in hierarchical clustering?

First, is it valid to extract SURF features and use them for clustering? I want to cluster similar objects in images (each image contains one object).
If yes, how can it be done?
I extract features like this:
I = imread('cameraman.tif');
points = detectSURFFeatures(I);
[features, valid_points] = extractFeatures(I, points);
features is a matrix, not a vector. Also, the number of points extracted by 'detectSURFFeatures' differs between images.
How should the features be used?
First, you are detecting SURF features, not SIFT features (although they serve the same basic purpose). The reason you get multiple SURF features is that SURF is a local image feature, i.e. it describes only a small portion of the image. In general, multiple features will be detected in a single image. You probably want to find a way to combine these features into a single image descriptor before clustering.
A common method for combining these features is Bag-of-words.
Since it seems you are doing unsupervised learning, you will first need to learn a codebook. A popular method is to use k-means clustering on all the SURF features you extracted in all of your images.
Use these clusters to generate a k-dimensional image descriptor by creating a histogram of "codeword appearances" for each image. In this case there is one "codeword" per cluster. A codeword is said to "appear" each time a SURF feature belongs to its associated cluster.
The histogram of codewords will act as an image descriptor for each image. You can then apply clustering on the image descriptors to find similar images, I recommend either normalizing the image descriptors to have constant norm or using the cosine similarity metric (kmeans(X,k,'Distance','cosine') in MATLAB if you use k-means clustering).
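The codebook-then-histogram pipeline described above can be sketched end to end. This is a toy NumPy version for illustration (in MATLAB you would typically use kmeans and a histogram; the helper names and the plain Lloyd's iteration here are my own simplification):

```python
import numpy as np

def kmeans(X, k, iters=25, seed=0):
    """Plain k-means (Lloyd's algorithm) on row-vector descriptors X.
    Returns the k cluster centroids, i.e. the codebook."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest centroid
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):   # skip empty clusters
                centroids[j] = X[labels == j].mean(0)
    return centroids

def bow_histogram(descriptors, centroids):
    """Normalized histogram of codeword occurrences for one image:
    each descriptor votes for its nearest codeword."""
    d = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    counts = np.bincount(d.argmin(1), minlength=len(centroids))
    return counts / counts.sum()
```

The codebook is learned once from the descriptors of all images pooled together; bow_histogram is then applied per image, and the resulting fixed-length histograms are what you cluster or classify.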
That said, a solution which is likely to work better is to extract a deep feature using a convolutional neural network that was trained on a very large, diverse dataset (like ImageNet) then use those as image descriptors.

How to ensure consistency in SIFT features?

I am working with a classification algorithm that requires the size of the feature vector of all samples in training and testing to be the same.
I am also using the SIFT feature extractor. This is causing problems, as the feature vector of every image comes out as a matrix of a different size. I know that SIFT detects a variable number of keypoints in each image, but is there a way to ensure that the size of the SIFT features is consistent, so that I do not get a dimension-mismatch error?
I have tried rootSIFT as a workaround:
[~, features] = vl_sift(single(images{i}));
double_features = double(features);
root_it = sqrt( double_features/sum(double_features) ); %root-sift
feats{i} = root_it;
This gives me a consistent 128 x 1 vector for every image, but it is not working for me: the values in each vector are now very small and I am getting a lot of NaNs in my classification results.
Is there any way to solve this?
Using SIFT, there are 2 steps you need to perform in general.

1. Extract SIFT features. These keypoints (the first output argument of your function, a 4xNP matrix holding x, y, scale and orientation) are scale invariant and should in theory be present in each different image of the same object. This is not completely true: often points are unique to each frame (image). Each point is described by a 128-dimensional descriptor (the second output argument of your function).
2. Match points. Each time you compute features of a different image, the number of points is different! Many of them should correspond to the same point as in the previous image, but many of them WON'T. You will have new points, and old points may not be present any more. This is why you should perform a feature-matching step, to link those points across different images. Usually this is done with kNN matching or RANSAC; you can Google how to perform this task and you'll find tons of examples.

After the second step, you should have a fixed number of points for the whole set of images (assuming they are images of the same object). The number of points will be significantly smaller than in each single image (sometimes around 30 times fewer). Then do whatever you want with them!
Hint for matching: http://www.vlfeat.org/matlab/vl_ubcmatch.html
UPDATE:
You seem to be trying to train some kind of OCR. You would probably need to match SIFT features independently for each character.
How to use vl_ubcmatch:
[~, features1] = vl_sift(single(I1));  % vl_sift expects a single-precision image
[~, features2] = vl_sift(single(I2));
matches = vl_ubcmatch(features1, features2);
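vl_ubcmatch implements Lowe's ratio test: a candidate match is kept only when the nearest descriptor in the other image is clearly closer than the second nearest. A rough NumPy sketch of the same idea (the function name and the 0.8 threshold are illustrative, not VLFeat's defaults):

```python
import numpy as np

def ratio_match(desc1, desc2, ratio=0.8):
    """Lowe-style ratio-test matching between two descriptor sets.

    desc1: (N1, D) array, desc2: (N2, D) array with N2 >= 2.
    Returns (i, j) pairs: descriptor i in image 1 matched to j in image 2."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # accept only if the best match is clearly better than the runner-up
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

A brute-force double loop like this is fine for a few thousand descriptors; for larger sets you would use a k-d tree or approximate nearest-neighbor search.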
You can apply a dense SIFT to the image. This way you have more control over from where you get the feature descriptors. I haven't used vlfeat, but looking at the documentation I see there's a function to extract dense SIFT features called vl_dsift. With vl_sift, I see there's a way to bypass the detector and extract the descriptors from points of your choice using the 'frames' option. Either way it seems you can get a fixed number of descriptors.
If you are using images of the same size, dense SIFT or the 'frames' option is fine. There's another approach you can take, called the bag-of-features model (similar to the bag-of-words model), in which you cluster the features extracted from your images to generate codewords and feed those into a classifier.

How to extract useful features from a graph?

Things are like this:
I have some graphs like the pictures above, and I am trying to classify them into different kinds so that the shape of a character can be recognized. Here is what I've done:
I applied a 2-D FFT to the graphs, so I can get a spectral analysis of each graph. Here are some results:
S after 2-D FFT
T after 2-D FFT
I have found that graphs of the same letter share the same pattern in the magnitude graph after the FFT, and I want to use this feature to cluster the letters. But there is a problem: I want the features of interest to be representable in a 2-D plane, i.e. in the form (x, y), but the feature here is actually a whole graph, with about 600x400 elements. The only thing I am interested in is the shape of the graph (S is a dot in the middle, and T looks like a cross). So what can I do to reduce the dimension of the magnitude graph?
I am not sure I am clear about my question here, but thanks in advance.
You can use dimensionality-reduction or classification methods such as
k-means clustering
SVM
PCA
MDS
Each of these methods can take 2-dimensional arrays and work out the best coordinate frame to distinguish/represent your letters.
One way to start would be to reduce your 240,000-dimensional space (600x400) to a 26-dimensional space using any of these methods.
This would give you an 'amplitude' for each of the possible letters.
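For instance, PCA on the flattened magnitude spectra can be sketched in a few lines (NumPy used for illustration here; in MATLAB the pca function does the same job):

```python
import numpy as np

def pca_reduce(X, k):
    """Project each row of X (one flattened magnitude spectrum per
    letter image) onto the top-k principal components."""
    Xc = X - X.mean(axis=0)          # center the data
    # rows of Vt are the principal directions, ordered by decreasing variance
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

With k=2 you get exactly the (x, y) plane asked for in the question; whether two dimensions are enough to separate all 26 letters is something you would have to check on the data.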
But as @jucestain says, neural-network classifiers are great for letter recognition.

Ideas for extracting features of an object using keypoints of image

I'd appreciate it if you could help me create a feature vector for a simple object using keypoints. For now, I use the ETH-80 dataset; the objects have an almost-blue background and the pictures are taken from different views. Like this:
After creating a feature vector, I want to train a neural network with this vector and use that network to recognize an input image of an object. I don't want to make it complex; the input images will be as simple as the training images.
I asked similar questions before, and someone suggested using the average value of a 20x20 neighborhood around each keypoint. I tried it, but it does not seem to work with the ETH-80 images because of the different views. That's why I am asking another question.
SURF or SIFT. Look for interest point detectors. A MATLAB SIFT implementation is freely available.
Update: Object Recognition from Local Scale-Invariant Features
SIFT and SURF features consist of two parts, the detector and the descriptor. The detector finds a point in some n-dimensional space (4D for SIFT); the descriptor is used to robustly describe the surroundings of said points. The latter is increasingly used for image categorization and identification in what is commonly known as the "bag of words" or "visual words" approach. In the simplest form, one can collect all descriptors from all images and cluster them, for example using k-means. Every original image then has descriptors that contribute to a number of clusters; the centroids of these clusters, i.e. the visual words, can be used to build a new descriptor for the image. The VLFeat website contains a nice demo of this approach, classifying the Caltech-101 dataset:
http://www.vlfeat.org/applications/apps.html#apps.caltech-101

How to use Homography to recognize two images while having SIFT descriptors and locations?

I am using the SIFT algorithm and trying to use Kovesi's functions to compute a homography. However, I cannot get it to work. I only have the descriptors and locations from the SIFT algorithm.
You need to perform descriptor matching between two frames. So you find the SIFT descriptors in frame A and then frame B.
Then you see which ones correspond between the two images. To do that you need to compare the distance between each descriptor in each frame and then create a matched pair from the best score and repeat for all the points.
Now, you can use RANSAC, which basically means: take 4 random matched pairs, calculate the homography using the DLT, then project the points through the homography in both directions (using its inverse for one direction). Measure the error, and repeat this a number of times until you get a set of pairs that produces a homography with an error you're happy with.
Now use the chosen homography to project all the points between the images and remove all the outlying matches whose error is above a threshold you define. Then recalculate the homography based on the inliers. OpenCV is a vision library that is useful for all of these things. And you don't have to use SIFT: you could use SURF, which has a nice implementation in OpenCV.
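The DLT and reprojection-error steps described above can be sketched as follows. This is a NumPy illustration of the plain 4-point DLT (no coordinate normalization; function names are mine), which a real pipeline would wrap inside the RANSAC loop:

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate H with dst ~ H * src (homogeneous coordinates) from >= 4
    point correspondences via the Direct Linear Transform.
    src, dst: (N, 2) arrays of matched points."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # solution is the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the scale ambiguity

def project(H, pts):
    """Apply homography H to (N, 2) points and dehomogenize."""
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:3]

def inlier_mask(H, src, dst, thresh=3.0):
    """Reprojection-error test used to score one RANSAC hypothesis."""
    err = np.linalg.norm(project(H, src) - dst, axis=1)
    return err < thresh
```

In practice you would call dlt_homography on random 4-pair samples, keep the hypothesis with the most inliers, and refit on those inliers; OpenCV's findHomography with the RANSAC flag packages this whole loop.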