Implementing Naive Bayes Nearest Neighbor (NBNN) in MATLAB

I posted this question on CV SO a few days ago but it has gone basically unobserved. I'm trying to implement NBNN in MATLAB to do image classification on the CIFAR-10 dataset. The algorithm is pretty simple, and I'm confident in its correctness; however, I'm getting terrible accuracy rates of 22-28%. I'm unsure why, and I'm hoping someone with experience in image classification or the algorithm could point me in the right direction. The algorithm learns from SIFT image descriptors, which could be one of the reasons it's under-performing. MATLAB only has a SURF feature detector, but from what I've read, SURF and SIFT are basically equivalent. I've been using the features (nDescriptors x 64) obtained from the code below to train my model.
points = detectSURFFeatures(rgb2gray(image));
[features, valid_points] = extractFeatures(rgb2gray(image), points); % features is nDescriptors x 64
Another possible issue with the CIFAR dataset and this approach is the small size of the images. Each image is only 32 x 32 pixels, which, I believe, makes feature detection very difficult. I've played around with the different octave settings in detectSURFFeatures(), but nothing has brought my accuracy above 28%.
The annotated code for the approach is below. It's difficult to understand but still might be helpful.
Hopefully someone can help me out.
predLabel = []; % collect predicted labels for the test images
for i = 3001:4000 % Train on the first 3000 instances and test on the remaining 1000.
    closeness = [];
    for j = 1:10 % loop over the 10 categories
        class = train(trainLabel==j,:);   % Pull out all descriptors that belong to class j
        descriptor = test(test_id==i,:);  % Pull out all descriptors for image i
        [idx,dist] = knnsearch(class,descriptor,'K',1); % Distance from each descriptor to its nearest labeled descriptor in class j
        total = sum(dist);                % Sum up the descriptor-to-class distances
        closeness = [closeness, total];   % Append to the vector of image-to-class distances
    end
    [val,cat] = min(closeness);           % Choose the class with the lowest summed distance
    predLabel = [predLabel; cat];         % Append to the vector of predicted labels
    disp(i)                               % progress indicator
end

If your images are 32x32 pixels, then trying to detect interest points is not a good idea. As you have observed, you would get very few features, if any. Upsampling the images is one option. Another option is to use a global descriptor like HOG (extractHOGFeatures).
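For reference, here is a minimal sketch of the HOG route, assuming the Computer Vision Toolbox is available; the file name and the 4x4 cell size are illustrative placeholders, not tuned values.
% One global HOG descriptor per 32x32 CIFAR-10 image (sketch).
img = imread('cifar_example.png');   % placeholder file name for a 32x32x3 image
grayImg = rgb2gray(img);
% A small cell size keeps some spatial detail on such tiny images.
hogFeature = extractHOGFeatures(grayImg, 'CellSize', [4 4]);
% hogFeature is a single row vector, so each image contributes exactly one
% descriptor to the classifier instead of a variable number of SURF points.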


How to use distance to extract features and compare images

I was trying to write code for feature extraction from two images which are actually similar. I tried to extract the intersection points from both of the images and calculated the distance from each intersection point to all other points. This procedure was iterated over all points in both images.
Then I compared the distances between points in both images, but I found that even for dissimilar images I am getting the same kind of distances and am not able to distinguish them.
Is there any way to improve this method, or is there any other way to find the similarity?
I = bwmorph(I,'skel',Inf);
II = bwmorph(II,'skel',Inf);
[i,j] = ind2sub(size(I),find(bwmorph(bwmorph(I,'thin',Inf),'branchpoint') == 1));
[i1,j1] = ind2sub(size(II),find(bwmorph(bwmorph(II,'thin',Inf),'branchpoint') == 1));
figure,imshow(I); hold on; plot(j,i,'rx');
figure,imshow(II); hold on; plot(j1,i1,'rx')
m=size(i,1);
n=size(j,1);
m1=size(i1,1);
n1=size(j1,1);
for x = 1:m
    for y = 1:n
        d1(y,x) = round(sqrt((i(y,1)-i(x,1)).^2 + (j(y,1)-j(x,1)).^2));
    end
end
for x1 = 1:m1
    for y1 = 1:n1
        dd1(y1,x1) = round(sqrt((i1(y1,1)-i1(x1,1)).^2 + (j1(y1,1)-j1(x1,1)).^2));
    end
end
size(d1);
k1=reshape(d1,1,m*n);
k=sort(k1);
k=unique(k);
size(dd1);
k2=reshape(dd1,1,m1*n1);
k2=sort(k2);
k2=unique(k2);
z = intersect(k,k2)
length(z);
if length(z) > 20
    disp('similar images');
else
    disp('dissimilar images');
end
This is a part of my code where I tried to extract features.
[Images: input1, input2, skel1, skel2]
I think your code is not the problem. Instead, it seems that either your feature descriptor is not powerful enough or your comparison method is not powerful enough, or a combination of the two. This gives us several options for how to explore solutions to the problem.
Feature Descriptor
You are constructing an image feature consisting of the distances between skeleton intersection points. This is an unusual approach and a very interesting one. It reminds me of peak constellations, a feature used by Shazam to audio-fingerprint songs. If you are interested in exploring that more sophisticated technique, take a look at "An Industrial Strength Audio Search Algorithm" by Avery Li-Chun Wang. I believe you could adapt their feature descriptor to your application.
However, if you want a simpler solution there are some other options as well. Your current descriptor uses unique to find a set of unique distances between the skeleton intersection points. Take a look at the following images of a line and an equilateral triangle both with 5 unit line lengths. If we use the unique distances between vertices to make the feature, the two images have identical features, but we can also count the number of lines of each length in a histogram.
The histogram preserves more of the image structure as part of the feature. Using a histogram might help distinguish better between your similar and dissimilar cases.
Here's some demo code for histogram features using the Matlab demo images pears.png and peppers.png. I had difficulty extracting the skeleton from your provided images, but you should be able to adapt this code easily to your application.
I1 = im2bw(imread('peppers.png'));
I2 = im2bw(imread('pears.png'));
I1_skel = bwmorph(I1,'skel',Inf);
I2_skel = bwmorph(I2,'skel',Inf);
[i1,j1] = ind2sub(size(I1_skel),find(bwmorph(bwmorph(I1_skel,'thin',Inf),'branchpoint') == 1));
[i2,j2] = ind2sub(size(I2_skel),find(bwmorph(bwmorph(I2_skel,'thin',Inf),'branchpoint') == 1));
%You used a for loop to find the distance between each pair of
%intersections. There is a function for this.
d1 = round(pdist2([i1, j1], [i1, j1]));
d2 = round(pdist2([i2, j2], [i2, j2]));
%Choose a number of bins for the histogram.
%This will be the length of the feature.
%More bins will preserve more structure.
%Fewer bins will help generalize between similar but not identical images.
num_bins = 100;
%Instead of using `unique` to remove repetitions, use `histcounts` (R2014b and later)
%feature1 = histcounts(d1(:), num_bins);
%feature2 = histcounts(d2(:), num_bins);
%Use `hist` on pre-R2014b MATLAB versions
feature1 = hist(d1(:), num_bins);
feature2 = hist(d2(:), num_bins);
%Normalize the features
feature1 = feature1 ./ norm(feature1);
feature2 = feature2 ./ norm(feature2);
figure; bar([feature1; feature2]');
title('Features'); legend({'Feature 1', 'Feature 2'});
xlim([0, num_bins]);
Here are the detected intersection points in each image:
Here are the resulting features. You can see the clear differences between images.
Feature Comparison
The second part to consider is how you compare your features. Currently, you are simply looking for >20 similar distances. With the 'peppers.png' and 'pears.png' test images distributed with Matlab, I find more than 2000 intersection points in one image and 260 in the other. With so many points, it is trivial to have an overlap of >20 similar distances. In your images, the number of intersection points is much smaller. You could carefully adjust the threshold of similar distances, but I think this metric is probably too simplistic.
In Machine Learning, a simple way to compare two feature vectors is vector similarity or distance. There are multiple distance metrics you could explore. Common ones include
Cosine Distance
score_cosine = feature1 * feature2'; %Cosine similarity between the normalized feature vectors
%Set a threshold for cosine similarity [0, 1] where 1 is identical and 0 is perpendicular
cosine_threshold = .9;
disp('Cosine Compare')
disp(score_cosine)
if score_cosine > cosine_threshold
disp('similar images');
else
disp('dissimilar images');
end
Euclidean Distance
score_euclidean = pdist2(feature1, feature2);
%Set a threshold for Euclidean distance, where smaller is more similar
euclidean_threshold = 0.1;
disp('Euclidean Compare')
disp(score_euclidean)
if score_euclidean < euclidean_threshold
disp('similar images');
else
disp('dissimilar images');
end
If these don't work, you may need to train a classifier to find a more complicated function to distinguish between similar and dissimilar images.
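As one possible direction, here is a hedged sketch of that classifier idea using fitcsvm; pairFeatures, pairLabels, feature1 and feature2 are assumed to exist as described in the comments and are not part of the code above.
% Learn a similar/dissimilar decision from labeled pairs (sketch).
% pairFeatures: each row is the absolute difference of two normalized
% histogram features; pairLabels: 1 for similar pairs, 0 for dissimilar.
% Requires fitcsvm (Statistics and Machine Learning Toolbox, R2014a+).
svmModel = fitcsvm(pairFeatures, pairLabels, 'KernelFunction', 'rbf', ...
                   'Standardize', true);
% Score a new pair of features built the same way as the training pairs.
newPair = abs(feature1 - feature2);
predictedLabel = predict(svmModel, newPair);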

Clustering an image using Gaussian mixture models

I want to use GMM (Gaussian mixture models) for clustering a binary image and also want to plot the cluster centroids on the binary image itself.
I am using this as my reference:
http://in.mathworks.com/help/stats/gaussian-mixture-models.html
This is my initial code
I=im2double(imread('sil10001.pbm'));
K = I(:);
mu=mean(K);
sigma=std(K);
P=normpdf(K, mu, sigma);
Z = norminv(P,mu,sigma);
X = mvnrnd(mu,sigma,1110);
X=reshape(X,111,10);
scatter(X(:,1),X(:,2),10,'ko');
options = statset('Display','final');
gm = fitgmdist(X,2,'Options',options);
idx = cluster(gm,X);
cluster1 = (idx == 1);
cluster2 = (idx == 2);
scatter(X(cluster1,1),X(cluster1,2),10,'r+');
hold on
scatter(X(cluster2,1),X(cluster2,2),10,'bo');
hold off
legend('Cluster 1','Cluster 2','Location','NW')
P = posterior(gm,X);
scatter(X(cluster1,1),X(cluster1,2),10,P(cluster1,1),'+')
hold on
scatter(X(cluster2,1),X(cluster2,2),10,P(cluster2,1),'o')
hold off
legend('Cluster 1','Cluster 2','Location','NW')
clrmap = jet(80); colormap(clrmap(9:72,:))
ylabel(colorbar,'Component 1 Posterior Probability')
But the problem is that I am unable to plot the cluster centroids obtained from the GMM on the original binary image. How do I do this?
**Now suppose I have 10 such images in a sequence, and I want to store their mean positions in two cell arrays. How do I do that? This is my code for my new question:**
images=load('gait2go.mat');%load the matrix file
for i=1:10
I{i}=images.result{i};
I{i}=im2double(I{i});
%determine 'white' pixels, size of image can be [M N], [M N 3] or [M N 4]
Idims=size(I{i});
whites=true(Idims(1),Idims(2));
df=I{i};
%we add up the various color channels
for colori=1:size(df,3)
whites=whites & df(:,:,colori)>0.5;
end
%choose indices of 'white' pixels as coordinates of data
[datax datay]=find(whites);
%cluster data into 10 clumps
K = 10; % number of mixtures/clusters
cInd = kmeans([datax datay], K, 'EmptyAction','singleton',...
'maxiter',1000,'start','cluster');
%get clusterwise means
meanx=zeros(K,1);
meany=zeros(K,1);
for i=1:K
meanx(i)=mean(datax(cInd==i));
meany(i)=mean(datay(cInd==i));
end
xc{i}=meanx(i); %cell array containing the position of the mean for the 10 images
xb{i}=meany(i);
figure;
gscatter(datay,-datax,cInd); %funky coordinates for plotting according to image
axis equal;
hold on;
scatter(meany,-meanx,20,'+'); %same funky coordinates
end
I am able to get 10 segmented images, but the values of the means are not stored in the cell arrays xc and xb. They are only storing [] in place of the mean values.
I decided to post an answer to your question (where your question was determined by a maximum-likelihood guess:P), but I wrote an extensive introduction. Please read carefully, as I think you have difficulties understanding the methods you want to use, and you have difficulties understanding why others can't help you with your usual approach of asking questions. There are several problems with your question, both code-related and conceptual. Let's start with the latter.
The problem with the problem
You say that you want to cluster your image with Gaussian mixture modelling. While I'm generally not familiar with clustering, after a look through your reference and the wonderful SO answer you cited elsewhere (and a quick 101 from #rayryeng) I think you are on the wrong track altogether.
Gaussian mixture modelling, as its name suggests, models your data set with a mixture of Gaussian (i.e. normal) distributions. The reason for the popularity of this method is that when you do measurements of all sorts of quantities, in many cases you will find that your data is mostly distributed like a normal distribution (which is actually the reason why it's called normal). The reason behind this is the central limit theorem, which implies that the sum of reasonably independent random variables tends to be normal in many cases.
Now, clustering, on the other hand, simply means separating your data set into disjoint smaller bunches based on some criteria. The main criterion is usually (some kind of) distance, so you want to find "close lumps of data" in your larger data set. You usually need to cluster your data before performing a GMM, because it's already hard enough to find the Gaussians underlying your data without having to guess the clusters too. I'm not familiar enough with the procedures involved to tell how well GMM algorithms can work if you just let them work on your raw data (but I expect that many implementations start with a clustering step anyway).
To get closer to your question: I guess you want to do some kind of image recognition. Looking at the picture, you want to get more strongly correlated lumps. This is clustering. If you look at a picture of a zoo, you'll see, say, an elephant and a snake. Both have their distinct shapes, and they are well separated from one another. If you cluster your image (and the snake is not riding the elephant, neither did it eat it), you'll find two lumps: one lump elephant-shaped, and one lump snake-shaped. Now, it wouldn't make sense to use GMM on these data sets: elephants, and especially snakes, are not shaped like multivariate Gaussian distributions. But you don't need this in the first place, if you just want to know where the distinct animals are located in your picture.
Still staying with the example, you should make sure that you cluster your data into an appropriate number of subsets. If you try to cluster your zoo picture into 3 clusters, you might get a second, spurious snake: the nose of the elephant. With an increasing number of clusters your partitioning might make less and less sense.
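For illustration, here is a minimal sketch of one way to sanity-check the cluster count, assuming evalclusters (R2013b or later) is available; X stands for an N-by-2 matrix of point coordinates and the candidate range is arbitrary.
% Compare candidate cluster counts with a silhouette criterion (sketch).
candidateK = 2:15;
evaluation = evalclusters(X, 'kmeans', 'silhouette', 'KList', candidateK);
bestK = evaluation.OptimalK;   % the K with the best average silhouette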
Your approach
Your code doesn't give you anything reasonable, and there's a very good reason for that: it doesn't make sense from the start. Look at the beginning:
I=im2double(imread('sil10001.pbm'));
K = I(:);
mu=mean(K);
sigma=std(K);
X = mvnrnd(mu,sigma,1110);
X=reshape(X,111,10);
You read your binary image, convert it to double, then stretch it out into a vector and compute the mean and deviation of that vector. You basically smear your entire image into 2 values: an average intensity and a deviation. And THEN you generate 111*10 standard normal points with these parameters, and try to do GMM on the first two sets of 111, which are both independently normal with the same parameters. So you probably get two overlapping Gaussians around the same mean with the same deviation.
I think the examples you found online confused you. When you do GMM, you already have your data, so no pseudo-normal numbers should be involved. But when people post examples, they also try to provide reproducible inputs (well, some of them do, nudge nudge wink wink). A simple method for this is to generate a union of simple Gaussians, which can then be fed into GMM.
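For example, here is a toy sketch of that pattern; the means, covariances and sample sizes are arbitrary illustration values.
rng(1);                                   % reproducible toy data
X = [mvnrnd([0 0],   eye(2), 500);        % first Gaussian blob
     mvnrnd([5 5], 0.5*eye(2), 500)];     % second, tighter blob
gm  = fitgmdist(X, 2);                    % fit a 2-component mixture
idx = cluster(gm, X);                     % hard assignment per point
gscatter(X(:,1), X(:,2), idx); hold on;
plot(gm.mu(:,1), gm.mu(:,2), 'kx', 'MarkerSize', 12, 'LineWidth', 2);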
So, my point is that you don't have to generate random numbers; you have to use the image data itself as input to your procedure. And you probably just want to cluster your image, instead of actually using GMM to draw potatoes over your clusters, since you want to cluster body parts in an image of a human. Most body parts are not shaped like multivariate Gaussians (with a few distinct exceptions for men and women).
What I think you should do
If you really want to cluster your image, like in the figure you added to your question, then you should use a method like k-means. But then again, you already have a program that does that, don't you? So I don't really think I can answer the question saying "How can I cluster my image with GMM?". Instead, here's an answer to "How can I cluster my image?" with k-means, but at least there will be a piece of code here.
%set infile to what your image file will be
infile='sil10001.pbm';
%read file
I=im2double(imread(infile));
%determine 'white' pixels, size of image can be [M N], [M N 3] or [M N 4]
Idims=size(I);
whites=true(Idims(1),Idims(2));
%we add up the various color channels
for colori=1:size(I,3) %size(I,3) is 1 for a grayscale [M N] image, so this handles all cases
whites=whites & I(:,:,colori)>0.5;
end
%choose indices of 'white' pixels as coordinates of data
[datax datay]=find(whites);
%cluster data into 10 clumps
K = 10; % number of mixtures/clusters
cInd = kmeans([datax datay], K, 'EmptyAction','singleton',...
'maxiter',1000,'start','cluster');
%get clusterwise means
meanx=zeros(K,1);
meany=zeros(K,1);
for i=1:K
meanx(i)=mean(datax(cInd==i));
meany(i)=mean(datay(cInd==i));
end
figure;
gscatter(datay,-datax,cInd); %funky coordinates for plotting according to image
axis equal;
hold on;
scatter(meany,-meanx,20,'ko'); %same funky coordinates
Here's what this does. It first reads your image as double, like your code did. Then it tries to determine the "white" pixels by checking that each color channel (of which there can be 1, 3 or 4) is brighter than 0.5. The input data points for the clustering are then the x and y "coordinates" (i.e. indices) of your white pixels.
Next it does the clustering via kmeans. This part of the code is loosely based on the already cited answer of Amro. I had to set a large maximal number of iterations, as the problem is ill-posed in the sense that there aren't 10 clear clusters in the picture. Then we compute the mean for each cluster, and plot the clusters with gscatter, and the means with scatter. Note that in order to have the picture facing in the right directions in a scatter plot you have to shift around the input coordinates. Alternatively you could define datax and datay correspondingly at the beginning.
And here's my output, run with the already processed figure you provided in your question:
I do believe you must have made a naive mistake in the plot, and that's why you see just a straight line: you are plotting only the x values.
In my opinion, the second argument in the scatter command should be X(cluster1,2) or X(cluster2,2) depending on which scatter command is being used in the code.
The code can be made simpler:
%read file
I=im2double(imread('sil10340.pbm'));
%choose indices of 'white' pixels as coordinates of data
[datax datay]=find(I);
%cluster data into 10 clumps
K = 10; % number of mixtures/clusters
[cInd, c] = kmeans([datax datay], K, 'EmptyAction','singleton',...
'maxiter',1000,'start','cluster');
figure;
gscatter(datay,-datax,cInd); %funky coordinates for plotting according to image
axis equal;
hold on;
scatter(c(:,2),-c(:,1),20,'ko'); %same funky coordinates
I don't think there is any need for the loop, as c itself is returned as a 10x2 double array which contains the positions of the means.

Image Classification using gist and SVM training

I'd like to begin by saying that I'm really new to CV, and there may be some obvious things I didn't think about, so don't hesitate to mention anything of that category.
I am trying to achieve scene classification, currently between indoor and outdoor images for simplicity.
My idea to achieve this is to use a gist descriptor, which creates a vector with certain parameters of the scene.
In order to obtain reliable classification, I used indoor and outdoor images, 100 samples each, computed a gist descriptor for each, created a training matrix out of them, and used 'svmtrain' on it. Here's some pretty simple code that shows how I trained on the gist vectors:
train_label = zeros(200,1); % one label per training image
train_label(1:100,1) = 0; % 0 = indoor
train_label(101:200,1) = 1; % 1 = outdoor
training_mat(1:100,:) = gist_indoor1;
training_mat(101:200,:) = gist_outdoor1;
test_mat = gist_test;
SVMStruct = svmtrain(training_mat ,train_label, 'kernel_function', 'rbf', 'rbf_sigma', 0.6);
Group = svmclassify(SVMStruct, test_mat);
The problem is that the results are pretty bad.
I read that optimizing the constraint and gamma parameters of the 'rbf' kernel should improve the classification, but:
I'm not sure how to optimize with multidimensional data vectors (the optimization example given on the MathWorks site is in 2D while mine is 512-dimensional); any suggestion how to begin?
I might be completely in the wrong direction, please indicate if it is so.
Edit:
Thanks Darkmoor! I'll try calibrating using this toolbox, and maybe try to improve my feature extraction.
Hopefully when I have a working classification, I'll post it here.
Edit 2: Forgot to update: by obtaining gist descriptors of indoor and urban outdoor images from the SUN database, and training with optimized parameters using the libsvm toolbox, I managed to achieve a classification rate of 95% when testing the model on pictures from my apartment and the street outside.
I did the same with urban outdoor scenes and natural scenes from the database, and achieved similar accuracy when testing on various scenes from my country.
The code I used to create the data matrices is taken from here, with very minor modifications:
% GIST Parameters:
clear param
param.imageSize = [256 256]; % set a normalized image size
param.orientationsPerScale = [8 8 8 8]; % number of orientations per scale (from HF to LF)
param.numberBlocks = 4;
param.fc_prefilt = 4;
%Obtain images from folders
sdirectory = 'C:\Documents and Settings\yotam\My Documents\Scene_Recognition\test_set\indoor&outdoor_test';
jpegfiles = dir([sdirectory '/*.jpg']);
% Pre-allocate gist:
Nfeatures = sum(param.orientationsPerScale)*param.numberBlocks^2;
gist = zeros([length(jpegfiles) Nfeatures]);
% Load first image and compute gist:
filename = [sdirectory '/' jpegfiles(1).name];
img = imresize(imread(filename),param.imageSize);
[gist(1, :), param] = LMgist(img, '', param); % first call
% Loop:
for i = 2:length(jpegfiles)
filename = [sdirectory '/' jpegfiles(i).name];
img = imresize(imread(filename),param.imageSize);
gist(i, :) = LMgist(img, '', param); % the next calls will be faster
end
I suggest you use libsvm; it is very efficient. There is a relevant post on cross-validation with libsvm. The same logic can be used for the MATLAB library you mention.
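To make the cross-validation idea concrete, here is a hedged sketch of a grid search; it uses fitcsvm and kfoldLoss rather than the older svmtrain, and the grids and fold count are arbitrary starting values, not recommendations.
% Grid search over (BoxConstraint, KernelScale) with 5-fold cross-validation.
% training_mat and train_label are the 200x512 GIST matrix and labels above.
boxGrid   = 2.^(-3:2:9);
sigmaGrid = 2.^(-5:2:5);
bestLoss = Inf;
for C = boxGrid
    for s = sigmaGrid
        cvModel = fitcsvm(training_mat, train_label, ...
                          'KernelFunction', 'rbf', ...
                          'BoxConstraint', C, 'KernelScale', s, ...
                          'KFold', 5);
        loss = kfoldLoss(cvModel);        % 5-fold misclassification rate
        if loss < bestLoss
            bestLoss = loss; bestC = C; bestSigma = s;
        end
    end
end
% With libsvm, the same loop would call svmtrain with the '-v 5' option.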
Your logic is correct: extract features and try to classify them. In any case, do not expect that calibrating your classifier will produce huge differences. The key to large differences in your results is the feature extraction, in combination, of course, with your classifier calibration ;).
Good luck.

How to generate this shape in Matlab?

In MATLAB, how can I generate two clusters of random points like in the following graph? Can you show me the script/code?
If you want to generate such data points, you will need to have their probability distribution to be able to generate the points.
In your case, I do not have the real distributions, so I can only give an approximation. From your figure I see that both clusters lie approximately on a circle, with a random radius and a limited span for the angle. I assume those angles and radii are uniformly distributed over certain ranges, which seems like a pretty good starting point.
Therefore it also makes sense to generate the random data in polar coordinates (i.e. angle and radius) instead of the cartesian ones (i.e. horizontal and vertical), and transform them to allow plotting.
C1 = [0 0]; % center of the circle
C2 = [-5 7.5];
R1 = [8 10]; % range of radii
R2 = [8 10];
A1 = [1 3]*pi/2; % [rad] range of allowed angles
A2 = [-1 1]*pi/2;
nPoints = 500;
urand = @(nPoints,limits)(limits(1) + rand(nPoints,1)*diff(limits));
randomCircle = @(n,r,a)(pol2cart(urand(n,a),urand(n,r)));
[P1x,P1y] = randomCircle(nPoints,R1,A1);
P1x = P1x + C1(1);
P1y = P1y + C1(2);
[P2x,P2y] = randomCircle(nPoints,R2,A2);
P2x = P2x + C2(1);
P2y = P2y + C2(2);
figure
plot(P1x,P1y,'or'); hold on;
plot(P2x,P2y,'sb'); hold on;
axis square
This yields:
This method works relatively well when you deal with distributions that you can transform easily and when you can easily describe the possible locations of the points. If you cannot, there are other methods, such as inverse transform sampling, which offer a recipe for generating the data instead of the manual variable transformations I did here.
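As a tiny illustration of inverse transform sampling (the exponential distribution is chosen purely as an example, unrelated to the circle data above): draw uniform samples and push them through the inverse CDF of the target distribution.
lambda = 2;                         % rate of the target exponential
u = rand(10000, 1);                 % uniform(0,1) samples
x = -log(1 - u) / lambda;           % inverse CDF of the exponential
histogram(x, 50);                   % shape should follow the exponential pdf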
K-means is not going to give you what you want.
For K-means, vectors are classified based on their nearest cluster center. I can only think of two ways you could get the non-convex assignment shown in the picture:
Your input data is actually higher-dimensional, and your sample image is just a 2-d projection.
You're using a distance metric with different scaling across the dimensions.
To achieve your aim:
Use a non-linear clustering algorithm.
Apply a non-linear transform to your input data. (Probably not feasible).
You can find a list of non-linear clustering algorithms here. Specifically, look at this reference on the MST clustering page. Your exact shape appears on the fourth page of the PDF, together with a comparison of what happens with K-Means.
For existing MATLAB code, you could try this Kernel K-Means implementation. Also, check out the Clustering Toolbox.
I am assuming that you really want to do the clustering operation on existing data, as opposed to generating the data itself. Since you have a plot of some data, it seems logical that you already know how to generate it! If I am wrong in this assumption, then you should word your questions more carefully in the future.
The human brain is quite good at seeing patterns in things like this; writing code for a computer to do the same will often take some serious effort.
As has been said already, traditional clustering tools such as k-means will fail. Luckily, the image processing toolbox has good tools for these purposes already written. I might suggest converting the plot into an image, using filled in dots to plot the points. Make sure the dots are large enough that they touch each other within a cluster, with some overlap. Then use dilation/erosion tools if necessary to make sure that any small cracks are filled in, but don't go so far as to cause the clusters to merge. Finally, use region segmentation tools to pick out the clusters. Once done, transform back from pixel units in the image into your spatial units, and you have accomplished your task.
For the image processing approach to work, you will need sufficient separation between the clusters compared to the coarseness within a cluster. But that seems obvious for any method to succeed.
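A rough sketch of that rasterize-and-segment pipeline, assuming the Image Processing Toolbox and reusing the P1x/P1y/P2x/P2y points generated above; the grid size and dilation radius are guesses that would need tuning for real data.
pts = [P1x P1y; P2x P2y];
gridSize = 200;
toGrid = @(v) round((v - min(v)) ./ (max(v) - min(v)) * (gridSize - 1)) + 1;
xi = toGrid(pts(:,1));
yi = toGrid(pts(:,2));
bw = false(gridSize);
bw(sub2ind(size(bw), yi, xi)) = true;  % rasterize the points
bw = imdilate(bw, strel('disk', 4));   % merge nearby dots into blobs
labels = bwlabel(bw);                  % one label per connected blob
clusterOfPoint = labels(sub2ind(size(labels), yi, xi));  % map back to points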

How to use SIFT algorithm to compute how similar two images are?

I have used the SIFT implementation of Andrea Vedaldi, to calculate the sift descriptors of two similar images (the second image is actually a zoomed in picture of the same object from a different angle).
Now I am not able to figure out how to compare the descriptors to tell how similar the images are?
I know that this question is not answerable unless you have actually played with these sort of things before, but I thought that somebody who has done this before might know this, so I posted the question.
Here is the little I did to generate the descriptors:
>> i=imread('p1.jpg');
>> j=imread('p2.jpg');
>> i=rgb2gray(i);
>> j=rgb2gray(j);
>> [a, b]=sift(i); % a has the frames and b has the descriptors
>> [c, d]=sift(j);
First, aren't you supposed to be using vl_sift instead of sift?
Second, you can use SIFT feature matching to find correspondences in the two images. Here's some sample code:
I = imread('p1.jpg');
J = imread('p2.jpg');
I = single(rgb2gray(I)); % Conversion to single is recommended
J = single(rgb2gray(J)); % in the documentation
[F1 D1] = vl_sift(I);
[F2 D2] = vl_sift(J);
% Where 1.5 = ratio between euclidean distance of NN2/NN1
[matches score] = vl_ubcmatch(D1,D2,1.5);
subplot(1,2,1);
imshow(uint8(I));
hold on;
plot(F1(1,matches(1,:)),F1(2,matches(1,:)),'b*');
subplot(1,2,2);
imshow(uint8(J));
hold on;
plot(F2(1,matches(2,:)),F2(2,matches(2,:)),'r*');
vl_ubcmatch() essentially does the following:
Suppose you have a point P in F1 and you want to find the "best" match in F2. One way to do that is to compare the descriptor of P in F1 to all the descriptors in D2. By compare, I mean find the Euclidean distance (or the L2-norm of the difference of the two descriptors).
Then, I find two points in F2, say U & V which have the lowest and second-lowest distance (say, Du and Dv) from P respectively.
Here's what Lowe recommended: if Dv/Du >= threshold (I used 1.5 in the sample code), then this match is acceptable; otherwise, it's ambiguously matched and is rejected as a correspondence and we don't match any point in F2 to P. Essentially, if there's a big difference between the best and second-best matches, you can expect this to be a quality match.
This is important since there's a lot of scope for ambiguous matches in an image: imagine matching points in a lake or a building with several windows, the descriptors can look very similar but the correspondence is obviously wrong.
You can do the matching in any number of ways. You can do it yourself very easily with MATLAB, or you can speed it up by using a k-d tree or an approximate nearest neighbour search like FLANN, which has been implemented in OpenCV.
EDIT: Also, there are several kd-tree implementations in MATLAB.
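For completeness, here is a hedged sketch of doing the matching yourself in plain MATLAB (brute-force distances plus Lowe's ratio test); it assumes pdist2 from the Statistics Toolbox and the D1/D2 descriptor matrices returned by vl_sift above.
% vl_sift returns 128-by-N uint8 descriptors, so transpose and cast first.
dists = pdist2(double(D1)', double(D2)');      % all pairwise L2 distances
[sortedDists, nnIdx] = sort(dists, 2);         % nearest neighbours first
ratio = sortedDists(:,1) ./ sortedDists(:,2);  % best vs. second-best distance
keep = ratio < 1/1.5;                          % same acceptance rule as the 1.5 threshold
matches = [find(keep), nnIdx(keep, 1)];        % [index into D1, index into D2]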
You should read David Lowe's paper, which talks about how to do exactly that. It should be sufficient, if you want to compare images of the exact same object. If you want to match images of different objects of the same category (e.g. cars or airplanes) you may want to look at the Pyramid Match Kernel by Grauman and Darrell.
Try to compare each descriptor from the first image with descriptors from the second one situated in a close vicinity (using the Euclidean distance). Thus, you assign a score to each descriptor from the first image based on the degree of similarity between it and the most similar neighbor descriptor from the second image. A statistical measure (sum, mean, dispersion, mean error, etc) of all these scores gives you an estimate of how similar the images are. Experiment with different combinations of vicinity size and statistical measure to give you the best answer.
If you just want to compare a zoomed and rotated image with a known center of rotation, you can use phase correlation in log-polar coordinates. From the sharpness of the peak and the histogram of the phase correlation you can judge how close the images are. You can also use the Euclidean distance on the absolute values of the Fourier coefficients.
If you want to compare SIFT descriptors, besides the Euclidean distance you can also use a "diffuse distance": compute the descriptor at progressively coarser scales and concatenate these with the original descriptor. That way "large scale" feature similarity carries more weight.
If you want to do matching between the images, you should use vl_ubcmatch (in case you have not used it). You can interpret the output 'scores' to see how close the features are; each score is the squared Euclidean distance between the two matching feature descriptors. You can also vary the threshold between the best match and the 2nd-best match as an input.