Alternative to spatial histograms in Bag of Words approach using vlfeat - matlab

The phow_caltech101 demo app in vlfeat creates a complete Bag of Words process for image classification on the Caltech101 dataset, roughly put:
Feature Extraction
Visual Vocabulary building
Spatial Histograms computation
SVM training
SVM testing and evaluation,
obtaining a model that can be used to later classify new, unclassified instances.
The only problem is that the computed histograms are spatial histograms. This means that with a visual vocabulary of size n, I would have expected the histograms to have size n x (size_collection), containing the occurrences of each visual word in each training instance.
The spatial histograms, however, are stored in a structure according to the specified model. By default the model has two spatial arguments, spatialX and spatialY, which results in a structure of size spatialX * spatialY * (size_vocabulary); this is later normalized and used to train the SVM.
Now, what if I want to use the normal histogram (normalized or not), the one that gives me a 1-1 correspondence between visual word counts and each image, or obtain this information from the spatial histogram? Also, how much more effective is the spatial histogram than the classical one I picture when I think of the Bag of Words process?
Any help appreciated.
UPDATE:
Here is the part of the code where the histograms are computed; you can see how, instead of ending up with a histogram vector of size (number_visual_words), you end up with one of size (spatialX * spatialY * number_visual_words). To clarify, in this case the model is defined with spatialX = [2 4] and spatialY = [2 4].
for i = 1:length(model.numSpatialX)
    binsx = vl_binsearch(linspace(1,width,model.numSpatialX(i)+1), frames(1,:)) ;
    binsy = vl_binsearch(linspace(1,height,model.numSpatialY(i)+1), frames(2,:)) ;
    % combined quantization: binsa holds the visual word index of each
    % feature, binsx/binsy its spatial cell
    bins = sub2ind([model.numSpatialY(i), model.numSpatialX(i), numWords], ...
                   binsy, binsx, binsa) ;
    hist = zeros(model.numSpatialY(i) * model.numSpatialX(i) * numWords, 1) ;
    hist = vl_binsum(hist, ones(size(bins)), bins) ;
    hists{i} = single(hist / sum(hist)) ;
end
hist = cat(1, hists{:}) ;
hist = hist / sum(hist) ;
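For illustration, here is a minimal sketch of the plain histogram I mean, reusing binsa (the visual word assigned to each feature) from the loop above; this is my own addition, not part of the demo:
% Plain bag-of-words histogram of size numWords, ignoring spatial position
hist_bow = zeros(numWords, 1) ;
hist_bow = vl_binsum(hist_bow, ones(size(binsa)), binsa) ;
hist_bow = single(hist_bow / sum(hist_bow)) ;
% Equivalently, recover it from one level of the spatial histogram by
% summing out the spatial cells
h = reshape(hists{i}, model.numSpatialY(i), model.numSpatialX(i), numWords) ;
hist_bow2 = squeeze(sum(sum(h, 1), 2)) ;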
And part of the problem is that I haven't worked with spatial histograms either, so I'm not sure how much better they are than "normal" histograms. Maybe someone who has worked with this kind of histogram before could give a more helpful insight.

Related

Feedforward neural network classification in Matlab

I have two Gaussian distribution samples: one Gaussian contains 10,000 samples and the other Gaussian also contains 10,000 samples. I would like to train a feed-forward neural network with these samples, but I don't know how many samples I have to take in order to get an optimal decision boundary.
Here is the code, but I don't know exactly what the solution is, and the outputs are weird.
x1 = -49:1:50;
x2 = -49:1:50;
[X1, X2] = meshgrid(x1, x2);
Gaussian1 = mvnpdf([X1(:) X2(:)], mean1, var1); % for class A
Gaussian2 = mvnpdf([X1(:) X2(:)], mean2, var2); % for class B
net = feedforwardnet(10);
G1 = reshape(Gaussian1, 10000,1);
G2 = reshape(Gaussian2, 10000,1);
input = [G1, G2];
output = [0, 1];
net = train(net, input, output);
When I run the code it gives me weird results.
If the code is not correct, can someone please suggest a fix so that I can get a decision boundary for these two distributions?
I'm pretty sure that the input must be the Gaussian distribution values (and not the x coordinates). The NN has to learn the relationship between the phenomena you are interested in (the Gaussian distributions) and the output labels, not between the space containing the phenomena and the labels. Moreover, if you choose the x coordinates, the NN will try to learn some relationship between them and the output labels, but the x values are potentially constant: the input data might even be all the same, because you can have very different Gaussian distributions over the same range of x coordinates just by varying the mean and the variance. The NN would then end up confused, because the same input data would map to more than one output label (and you don't want that to happen!).
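In code terms, here is a minimal sketch of that idea with the shapes fixed so that train receives one column per sample and one label per sample. The means and covariances are assumptions, since mean1 and var1 were never shown:
x1 = -49:1:50;
x2 = -49:1:50;
[X1, X2] = meshgrid(x1, x2);
mean1 = [0 0]; var1 = eye(2); % assumed parameters for class A
mean2 = [10 10]; var2 = 4*eye(2); % assumed parameters for class B
Gaussian1 = mvnpdf([X1(:) X2(:)], mean1, var1); % 10000 x 1 density values, class A
Gaussian2 = mvnpdf([X1(:) X2(:)], mean2, var2); % 10000 x 1 density values, class B
inputs = [Gaussian1' Gaussian2']; % 1 x 20000, one column per sample
targets = [zeros(1, numel(Gaussian1)) ones(1, numel(Gaussian2))]; % one label per column
net = feedforwardnet(10);
net = train(net, inputs, targets);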
I hope I was helpful.
P.S.: to be safe, I have to tell you that the NN doesn't fit the data very well if you have a small training set. Also, don't forget to validate your model using cross-validation (a good rule of thumb is to use 20% of your training set for the cross-validation set and another 20% of the same training set for the test set, and thus only the remaining 60% of your training set to train your model).
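For reference, a small sketch of that 60/20/20 split using the toolbox's built-in data division (this assumes the default 'dividerand' division function):
net.divideFcn = 'dividerand'; % random division, the feedforwardnet default
net.divideParam.trainRatio = 0.6; % 60% of the data to train the model
net.divideParam.valRatio = 0.2; % 20% for the (cross-)validation set
net.divideParam.testRatio = 0.2; % 20% for the test set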

How to use distance to extract features and compare images: matlab

I was trying to write code for feature extraction from two images which are actually similar. I tried to extract the intersection points from both of the images and calculated the distance from each intersection point to all other points. This procedure was iterated for all points and in both images.
Then I compared the distances between points in both images, but I found that even for dissimilar images I am getting the same kind of distances and am not able to distinguish them.
Is there any way to improve this method, or is there any other way to find the similarity?
I = bwmorph(I,'skel',Inf);
II = bwmorph(II,'skel',Inf);
[i,j] = ind2sub(size(I),find(bwmorph(bwmorph(I,'thin',Inf),'branchpoint') == 1));
[i1,j1] = ind2sub(size(II),find(bwmorph(bwmorph(II,'thin',Inf),'branchpoint') == 1));
figure, imshow(I); hold on; plot(j,i,'rx');
figure, imshow(II); hold on; plot(j1,i1,'rx')
m = size(i,1);
n = size(j,1);
m1 = size(i1,1);
n1 = size(j1,1);
% pairwise distances between intersection points in the first image
for x = 1:m
    for y = 1:n
        d1(y,x) = round(sqrt((i(y,1)-i(x,1)).^2 + (j(y,1)-j(x,1)).^2));
    end
end
% pairwise distances between intersection points in the second image
for x1 = 1:m1
    for y1 = 1:n1
        dd1(y1,x1) = round(sqrt((i1(y1,1)-i1(x1,1)).^2 + (j1(y1,1)-j1(x1,1)).^2));
    end
end
size(d1);
k1 = reshape(d1,1,m*n);
k = sort(k1);
k = unique(k);
size(dd1);
k2 = reshape(dd1,1,m1*n1);
k2 = sort(k2);
k2 = unique(k2);
z = intersect(k,k2)
length(z);
if length(z) > 20
    disp('similar images');
else
    disp('dissimilar images');
end
This is a part of my code where I tried to extract features.
(Attached images: input1, input2, skel 1, skel2.)
I think your code is not the problem. Instead, it seems that either your feature descriptor is not powerful enough or your comparison method is not powerful enough, or a combination of the two. This gives us several options for how to explore solutions to the problem.
Feature Descriptor
You are constructing an image feature consisting of the distances between skeleton intersection points. This is an unusual approach and a very interesting one. It reminds me of peak constellations, a feature used by Shazam to audio-fingerprint songs. If you are interested in exploring that more sophisticated technique, take a look at "An Industrial Strength Audio Search Algorithm" by Avery Li-Chun Wang. I believe you could adapt their feature descriptor to your application.
However, if you want a simpler solution, there are some other options as well. Your current descriptor uses unique to find a set of unique distances between the skeleton intersection points. Take a look at the following images of a line and an equilateral triangle, both with 5 unit line lengths. If we use the unique distances between vertices to make the feature, the two images have identical features; but we can instead count the number of lines of each length in a histogram.
The histogram preserves more of the image structure as part of the feature. Using a histogram might help distinguish better between your similar and dissimilar cases.
Here's some demo code for histogram features using the Matlab demo images pears.png and peppers.png. I had difficulty extracting the skeleton from your provided images, but you should be able to adapt this code easily to your application.
I1 = im2bw(imread('peppers.png'));
I2 = im2bw(imread('pears.png'));
I1_skel = bwmorph(I1,'skel',Inf);
I2_skel = bwmorph(I2,'skel',Inf);
[i1,j1] = ind2sub(size(I1_skel),find(bwmorph(bwmorph(I1_skel,'thin',Inf),'branchpoint') == 1));
[i2,j2] = ind2sub(size(I2_skel),find(bwmorph(bwmorph(I2_skel,'thin',Inf),'branchpoint') == 1));
%You used a for loop to find the distance between each pair of
%intersections. There is a function for this.
d1 = round(pdist2([i1, j1], [i1, j1]));
d2 = round(pdist2([i2, j2], [i2, j2]));
%Choose a number of bins for the histogram.
%This will be the length of the feature.
%More bins will preserve more structure.
%Fewer bins will help generalize between similar but not identical images.
num_bins = 100;
%Instead of using `unique` to remove repetitions use `histcounts` in R2014b
%feature1 = histcounts(d1(:), num_bins);
%feature2 = histcounts(d2(:), num_bins);
%Use `hist` for pre R2014b Matlab versions
feature1 = hist(d1(:), num_bins);
feature2 = hist(d2(:), num_bins);
%Normalize the features
feature1 = feature1 ./ norm(feature1);
feature2 = feature2 ./ norm(feature2);
figure; bar([feature1; feature2]');
title('Features'); legend({'Feature 1', 'Feature 2'});
xlim([0, num_bins]);
Here are the detected intersection points in each image.
Here are the resulting features. You can see the clear differences between images.
Feature Comparison
The second part to consider is how you compare your features. Currently, you are simply looking for >20 similar distances. With the 'peppers.png' and 'pears.png' test images distributed with Matlab, I find more than 2000 intersection points in one image and 260 in the other. With so many points, it is trivial to have an overlap of >20 similar distances. In your images, the number of intersection points is much smaller. You could carefully adjust the threshold of similar distances, but I think this metric is probably too simplistic.
In Machine Learning, a simple way to compare two feature vectors is vector similarity or distance. There are multiple distance metrics you could explore. Common ones include
Cosine Distance
score_cosine = feature1 * feature2'; %Cosine similarity between the unit-normalized vectors
%Set a threshold for cosine similarity [0, 1] where 1 is identical and 0 is perpendicular
cosine_threshold = .9;
disp('Cosine Compare')
disp(score_cosine)
if score_cosine > cosine_threshold
disp('similar images');
else
disp('dissimilar images');
end
Euclidean Distance
score_euclidean = pdist2(feature1, feature2);
%Set a threshold for euclidean similarity where smaller is more similar
euclidean_threshold = 0.1;
disp('Euclidean Compare')
disp(score_euclidean)
if score_euclidean < euclidean_threshold
disp('similar images');
else
disp('dissimilar images');
end
If these don't work, you may need to train a classifier to find a more complicated function to distinguish between similar and dissimilar images.
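As a rough illustration of that last idea, here is a hedged sketch that learns a similar/dissimilar decision with fitcsvm; pairFeaturesA, pairFeaturesB, and pairLabels are hypothetical training data you would have to collect from labelled image pairs:
%Hypothetical training data: one row per labelled image pair
X = abs(pairFeaturesA - pairFeaturesB); %element-wise differences of the histogram features
y = pairLabels; %1 = similar, 0 = dissimilar
mdl = fitcsvm(X, y, 'KernelFunction', 'rbf', 'Standardize', true);
%Classify a new pair using the histogram features computed above
isSimilar = predict(mdl, abs(feature1 - feature2));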

how to find accuracy using multiple values of k in knn classifier (matlab)

I use a knn classifier to classify images according to their writers (a writer recognition problem). I worked on a given database that contains 150 images, with 100 images for training and 50 images for testing.
I use this code to find the accuracy of the classifier (k = 1):
load('testdirection.mat')
load('traindirection.mat')
load('testlabels.mat')
load('trainlabels.mat')
class = knnclassify(testdirection,traindirection, trainlabels);
cp = classperf(testlabels,class);
cp.CorrectRate
fprintf('KNN Classifier Accuracy: %.2f%%\n',100*cp.CorrectRate )
I want to find the accuracy for different values of k [1..25] and save the results in a matrix. I also want to plot the results to see how the accuracy varies with the value of k.
Please help me to change this code, and thanks in advance.
knnclassify has an optional fourth argument k, which is the number of nearest neighbors. You can just put knnclassify in a for loop and iterate through all values of k.
load('testdirection.mat')
load('traindirection.mat')
load('testlabels.mat')
load('trainlabels.mat')
% counting down from 25 implicitly preallocates correctRate on the first pass
for k = 25:-1:1
    class = knnclassify(testdirection, traindirection, trainlabels, k);
    cp = classperf(testlabels, class);
    correctRate(k) = cp.CorrectRate;
end
You can plot the result e.g. using stem or plot
stem(1:25,correctRate);
PS: note that according to the MATLAB documentation, knnclassify will be removed in a future release; you should use fitcknn instead.
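For newer Matlab versions, here is a minimal sketch of the same loop using fitcknn; it assumes the labels are numeric or categorical (use strcmp instead of == for cell arrays of strings):
for k = 25:-1:1
    mdl = fitcknn(traindirection, trainlabels, 'NumNeighbors', k);
    pred = predict(mdl, testdirection);
    correctRate(k) = mean(pred == testlabels); % fraction classified correctly
end
stem(1:25, correctRate);
xlabel('k'); ylabel('accuracy');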

Measuring the entropy of a transition probability matrix in matlab

I'm working on a project which requires analyzing certain graph properties of transition probability matrices which are constructed as weighted directed graphs.
One of the properties of interest is the entropy of these graphs, which I have yet to find a proper way to measure. The general idea is that I need some sort of measure which allows me to quantify the extent to which a certain graph is "ordered", in order to ascertain the predictive value of the nodes within the graph (i.e. if all the nodes have the exact same connection patterns, then effectively their predictive value is zero, though this is a very simplistic explanation as there are many other factors contributing to a node's predictive power).
I've experimented with certain built-in matlab commands:
entropy - generally used to determine the entropy of an image
wentropy - to be honest I do not fully understand the proper use of this function, but I've tried using it with the 'shannon' and 'log energy' types, and have produced some inconsistent results
This is a very basic script I whipped up to do some testing, which produces two matrices:
a 20*20 matrix constructed with values drawn entirely from a uniform distribution, intended to produce a matrix with a relatively low degree of order - unordgraph
a 20*20 matrix constructed with 4 5*5 "patches" in which the values are integers drawn from a uniform distribution over a range significantly larger than one, while the rest of the values are drawn from a uniform distribution on the range 0-1 (as in the previous matrix); this graph is more "ordered" than the previous one - ordgraph
When I run the code:
clear all;
n = 50;
gsize = 20;
orderedrange = [100 200];
enttype = 'shannon';
for i = 1:n
    unordgraph = rand(gsize);
    % entvec(1,i) = entropy(unordgraph);
    entvec(1,i) = wentropy(unordgraph,enttype);
    % ordgraph = reshape(1:gsize^2,gsize,gsize);
    ordgraph = rand(gsize);
    ordgraph(1:5,1:5) = randi(orderedrange,5);
    ordgraph(6:10,6:10) = randi(orderedrange,5);
    ordgraph(11:15,11:15) = randi(orderedrange,5);
    ordgraph(16:20,16:20) = randi(orderedrange,5);
    % entvec(2,i) = entropy(ordgraph);
    entvec(2,i) = wentropy(ordgraph,enttype);
end
fprintf('the mean entropy of the unordered graph is: %.4f\n',mean(entvec(1,:)));
fprintf('the mean entropy of the ordered graph is: %.4f\n',mean(entvec(2,:)));
fprintf('the mean entropy of the unordered graph is: %.4f\n',mean(entvec(1,:)));
fprintf('the mean entropy of the ordered graph is: %.4f\n',mean(entvec(2,:)));
I get outputs such as:
the mean entropy of the unordered graph is: 88.8871
the mean entropy of the ordered graph is: -23936552.0113
I'm not really sure about the meaning of such negative values, as running the same script on a matrix comprised entirely of zeros or ones (and hence maximally ordered) produces a mean entropy of 0.
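For what it's worth, wentropy's 'shannon' type computes -sum(x.^2.*log(x.^2)) over the raw matrix entries, so entries much larger than 1 drive the result strongly negative. Here is a minimal sketch of an alternative measure, my own suggestion rather than a built-in, treating each row of the transition matrix as a probability distribution:
P = bsxfun(@rdivide, ordgraph, sum(ordgraph, 2)); % normalize each row to sum to 1
rowH = -sum(P .* log2(P + eps), 2); % Shannon entropy of each row, in bits
meanH = mean(rowH); % average over all nodes of the graph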
I have a pretty rudimentary background in graph theory, making this task that much more difficult, and I would be really grateful for any help, whether theoretical or algorithmic.
Thanks in advance,
Ron

Improve the accuracy performance on SVM

I am working on people detection using two different features, HOG and LBP. I used an SVM to train on the positive and negative samples. Here, I want to ask how to improve the accuracy of the SVM itself. Every time I add more positive and negative samples, the accuracy decreases. Currently I have 1500 positive samples and 700 negative samples.
%extract features
[fpos,fneg] = features(pathPos, pathNeg);
%train SVM
HOG_featV = loadingV(fpos,fneg); % loading and labeling each training example
fprintf('Training SVM..\n');
%L = ones(length(SV),1);
T = cell2mat(HOG_featV(2,:));
HOGtP = HOG_featV(3,:)';
C = cell2mat(HOGtP); % each row of C corresponds to a training example
%extract features from LBP
[LBPpos,LBPneg] = LBPfeatures(pathPos, pathNeg);
LBP_featV = loadingV(LBPpos, LBPneg);
LBPlabel = cell2mat(LBP_featV(2,:));
LBPtP = LBP_featV(3,:);
M = cell2mat(LBPtP)'; % each row of M corresponds to a training example
featureVector = [C M];
model = svmlearn(featureVector, T','-t 2 -g 0.3 -c 0.5');
Does anyone know how to find the best C and gamma values for improving SVM accuracy?
Thank you,
To find the best C and gamma values for improving SVM accuracy you typically perform cross-validation. In short, you leave out one sample (leave-one-out) and test the SVM on that sample for each parameter combination (the 2 parameters define a 2d grid). Typically you would test each decade of the parameters over a certain range, for example C = [0.01, 0.1, 1, ..., 10^9] and gamma = [10^-5, 10^-4, ..., 1000]. This should improve your SVM accuracy by optimizing the hyper-parameters.
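As an illustration, here is a rough grid-search sketch over those decades; evaluateFold is a hypothetical helper into which you would plug your own svmlearn training call and compute the accuracy on the held-out data:
Cs = 10.^(-2:9); % decades of C, as suggested above
gammas = 10.^(-5:3); % decades of gamma, 10^-5 ... 10^3
bestAcc = -inf;
for C = Cs
    for g = gammas
        opts = sprintf('-t 2 -g %g -c %g', g, C); % same flag style as the call above
        acc = evaluateFold(featureVector, T', opts); % hypothetical cross-validation helper
        if acc > bestAcc
            bestAcc = acc; bestC = C; bestG = g;
        end
    end
end
fprintf('best C = %g, best gamma = %g (accuracy %.2f%%)\n', bestC, bestG, 100*bestAcc);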
Looking again at your question, it seems you are using svmlearn from the machine learning toolbox (statistics toolbox) of Matlab. Therefore you already have built-in functions for cross-validation. Take a look at: http://www.mathworks.co.uk/help/stats/support-vector-machines-svm.html
I followed ASantosRibeiro's method to optimize the parameters before and it works well.
In addition, you could try adding more negative samples until the ratio of negative to positive samples reaches 2:1. The reason is that when you implement a real-time application, you scan the whole image step by step, and commonly the extracted negative samples will far outnumber the people-containing samples.
Thus, adding more negative training samples is a quite straightforward but effective way to improve overall accuracy (both false positives and true negatives).