SIFT Fisher Vector GMM - matlab

I am trying to extract SIFT features with the VLFeat implementation in MATLAB and then compute the GMM model as well as the Fisher Vector. I have two subsets, train and test images, from the DTD dataset. The pipeline is:
1. Run vl_sift on each split (train & test) and save the 128xN features.
2. Pass the cell array, where each cell holds the 128xN features of one image, to vl_gmm to obtain [mean, covariance, weight], and then feed the features together with the computed GMM parameters to vl_fisher.
3. Apply PCA.
4. Put everything into an SVM.
My problem is that in step 2 I don't know how to transform the feature values of each image so that they fit into vl_gmm and vl_fisher.
Here is my code:
%% SIFT Feature Extraction
FV_train = cell(size(train_name, 1), 1);
FV_test = cell(size(test_name, 1), 1);
parfor_progress(size(train_name, 1));
% note: this loop assumes train_name and test_name contain the same number of images
parfor n = 1:size(train_name, 1)
    % read, grayscale, resize to 512x512, equalize, then extract 128xN SIFT descriptors
    [~, FV_train{n}] = vl_sift(single(histeq(imresize(rgb2gray(imread(strcat(pwd, '/DTD/images', '/', train_name{n}))), [512 512]))));
    [~, FV_test{n}] = vl_sift(single(histeq(imresize(rgb2gray(imread(strcat(pwd, '/DTD/images', '/', test_name{n}))), [512 512]))));
    parfor_progress;
end
parfor_progress(0);
% drop images for which SIFT returned no descriptors
FV_train = FV_train(~cellfun('isempty',FV_train));
FV_test = FV_test(~cellfun('isempty',FV_test));
% zero-pad all descriptor matrices to a common size (see problem 2 below)
FV_train = adaptFV(FV_train);
FV_test = adaptFV(FV_test);
parfor n = 1:size(FV_train, 1)
    % flatten each padded 128xN matrix into a single row vector
    FV_train{n} = double(reshape(FV_train{n},1,size(FV_train{n},2)*size(FV_train{n},1)));
    FV_test{n} = double(reshape(FV_test{n},1,size(FV_test{n},2)*size(FV_test{n},1)));
end
There are two other problems:
One is that SIFT fails on some images, so I rejected those images.
The other is that, because the SIFT features differ in dimensionality from image to image, I took the longest one and padded the others with zeros to get a 1xN feature vector for every image.
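For reference, the standard Fisher Vector pipeline does not pad descriptors to a common length. Instead, the GMM is trained once on all training descriptors pooled column-wise, and vl_fisher then turns each image's own 128xN descriptor matrix into a fixed-length encoding. A minimal sketch along those lines, assuming FV_train/FV_test still hold the raw 128xN matrices (before adaptFV and the reshape) and with numClusters as an arbitrary choice:

numClusters = 64;                                  % assumed number of GMM components
allDescr = single(cat(2, FV_train{:}));            % pool all training descriptors: 128 x totalCount
[means, covariances, priors] = vl_gmm(allDescr, numClusters);
% every image is encoded with the same GMM; each encoding has length 2*128*numClusters
enc_train = cellfun(@(d) vl_fisher(single(d), means, covariances, priors), ...
    FV_train, 'UniformOutput', false);
enc_test = cellfun(@(d) vl_fisher(single(d), means, covariances, priors), ...
    FV_test, 'UniformOutput', false);

Since every encoding then has the same length, the zero-padding step is no longer needed before PCA and the SVM.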

Related

Problem with implementing a 4-D Gaussian Processes Regression through GPML

I followed the link https://stats.stackexchange.com/questions/105516/how-to-implement-a-2-d-gaussian-processes-regression-through-gpml-matlab and created a 2-d Gaussian Process regression. I now want to create a 4-d Gaussian Process regression, but 'meshgrid' only allows 3 inputs ([X,Y,Z] = meshgrid(x,y,z)); how do I add another input to meshgrid?
The 3-d code is like:
X1train = linspace(-4.5,4.5,10);
X2train = linspace(-4.5,4.5,10);
X3train = linspace(-4.5,4.5,10);
X = [X1train' X2train' X3train'];
Y = [X1train + X2train + X3train]';
%Testdata
[Xtest1, Xtest2, Xtest3] = meshgrid(-4.5:0.1:4.5, -4.5:0.1:4.5, -4.5:0.1:4.5);
Xtest = [Xtest1(:) Xtest2(:) Xtest3(:)];
% implement regression
[ymu, ys2, fmu, fs2] = gp(hyp, @infExact, [], covfunc, likfunc, X, Y, Xtest);
If I create an X4train, that means I need an Xtest4, how do I add Xtest4 into meshgrid?
The GPML code is from http://www.gaussianprocess.org/gpml/code/matlab/doc/
You can create n-dimensional grids using ndgrid, but keep in mind that it does not directly produce the same output as meshgrid; you have to convert it first. (How to do that is also explained in the documentation.)
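For example, a 4-D test grid could look like the sketch below (the 0.5 step is only an assumption to keep the grid size manageable). If you only need the flattened list of test points, the ndgrid/meshgrid ordering difference does not matter, because reshaping each grid with (:) yields the same set of rows, just in a different order:

step = 0.5;                                        % assumed step size; 0.1 in 4-D gets very large
[Xt1, Xt2, Xt3, Xt4] = ndgrid(-4.5:step:4.5, -4.5:step:4.5, -4.5:step:4.5, -4.5:step:4.5);
Xtest = [Xt1(:) Xt2(:) Xt3(:) Xt4(:)];             % one 4-D test point per row
[ymu, ys2, fmu, fs2] = gp(hyp, @infExact, [], covfunc, likfunc, X, Y, Xtest);

The ordering only matters if you later reshape the predictions (e.g. ymu) back into a grid; in that case keep using the ndgrid convention consistently.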

How to use MATLAB crossval to do the computation for only one of the partitions (in k-fold cross-validation)

I'm trying to apply cross-validated LDA using MATLAB's cross-validation facilities. To do this I put crossval() in a loop, and in each iteration I extract the corresponding train and test labels and feature matrices (trFV, tsFV). It is similar to the example presented for MATLAB's cvpartition class:
cvp = cvpartition(labelCell, 'KFold', kFoldCV);
for i = 1:cvp.NumTestSets
    trFV = f(cvp.training(i), :);
    tsFV = f(cvp.test(i), :);
    % calculate LDA projection matrix:
    [~, W] = LDA(trFV, featureMat(cvp.training(i), end));
    % apply W to both train and test:
    trFVW = trFV * W(:, 1:numel(classes)-1);
    tsFVW = tsFV * W(:, 1:numel(classes)-1);
    fW = [trFVW; tsFVW];
    labels = [labelCell(cvp.training(i)); labelCell(cvp.test(i))];
    Mdl = fitcecoc(fW, labels, 'Coding', 'onevsall', ...
        'Learners', learnerTemplate, ...
        'ClassNames', classes);
    CVMdl = crossval(Mdl, 'CVPartition', cvp);
    % Other stuff
end
This implementation is very inefficient, since I only need the result for one fold per iteration (not all folds). I already process each fold once in my loop, while crossval processes every fold on every iteration. Hence the current implementation performs the cross-validation cvp.NumTestSets^2 times instead of cvp.NumTestSets times. I need something like this:
CVMdl = crossval(Mdl, 'CVPartition', cvp, 'compute just for partition i and not all partitions');
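As far as I know crossval has no option like that, so one workaround is a sketch like the following: drop the crossval call inside the loop and fit/evaluate fold i by hand, training only on the projected training rows and predicting on the projected test rows. This reuses trFVW, tsFVW, learnerTemplate, and classes from the loop above, and assumes labelCell is a cell array of class names:

% fit an ECOC model on fold i's training data only
Mdl_i = fitcecoc(trFVW, labelCell(cvp.training(i)), 'Coding', 'onevsall', ...
    'Learners', learnerTemplate, 'ClassNames', classes);
% evaluate on fold i's held-out data
predLabels = predict(Mdl_i, tsFVW);
foldErr = mean(~strcmp(predLabels, labelCell(cvp.test(i))));

This computes the test error for the single partition i without re-running all cvp.NumTestSets folds.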
Update
My original loop above has some problems regarding the cross-validation. However, I'm still interested in whether MATLAB's built-in LDA (linear discriminant analysis) could be used to reduce the dimensionality.

Matlab SVM linear binary classification failure

I'm trying to implement a simple linear binary SVM classification in MATLAB, but I get strange results.
I have two classes g={-1;1} defined by two predictors varX and varY. In fact, varY alone is enough to split the dataset into two distinct classes (around varY=0.38), but I will keep varX as a random variable since I will need it for other work.
Using the code below (adapted from MATLAB examples) I get a wrong classifier. The linear classifier should be close to a horizontal line around varY=0.38, as we can see by plotting the 2D points.
The line that should separate the two classes is not displayed.
What am I doing wrong?
g(1:14,1)=1;
g(15:26,1)=-1;
m3(:,1)=rand(26,1); %varX
m3(:,2)=[0.4008; 0.3984; 0.4054; 0.4048; 0.4052; 0.4071; 0.4088; 0.4113; 0.4189;
0.4220; 0.4265; 0.4353; 0.4361; 0.4288; 0.3458; 0.3415; 0.3528;
0.3481; 0.3564; 0.3374; 0.3610; 0.3241; 0.3593; 0.3434; 0.3361; 0.3201]; %varY
SVMmodel_testm = fitcsvm(m3,g,'KernelFunction','Linear');
d = 0.005; % Step size of the grid
[x1Grid,x2Grid] = meshgrid(min(m3(:,1)):d:max(m3(:,1)),...
min(m3(:,2)):d:max(m3(:,2)));
xGrid = [x1Grid(:),x2Grid(:)]; % The grid
[~,scores2] = predict(SVMmodel_testm,xGrid); % The scores
figure();
h(1:2)=gscatter(m3(:,1), m3(:,2), g,'br','ox');
hold on
% Support vectors
h(3) = plot(m3(SVMmodel_testm.IsSupportVector,1),m3(SVMmodel_testm.IsSupportVector,2),'ko','MarkerSize',10);
% Decision boundary
contour(x1Grid,x2Grid,reshape(scores2(:,1),size(x1Grid)),[0 0],'k');
xlabel('varX'); ylabel('varY');
set(gca,'Color',[0.5 0.5 0.5]);
hold off
A common problem with SVMs, or any classification method for that matter, is unnormalized data. You have one dimension that spans from 0 to 1 and another that spans from about 0.3 to 0.4. This causes an imbalance between the features. Common practice is to normalize the features somehow, for example by the standard deviation. Try this code:
g(1:14,1)=1;
g(15:26,1)=-1;
m3(:,1)=rand(26,1); %varX
m3(:,2)=[0.4008; 0.3984; 0.4054; 0.4048; 0.4052; 0.4071; 0.4088; 0.4113; 0.4189;
0.4220; 0.4265; 0.4353; 0.4361; 0.4288; 0.3458; 0.3415; 0.3528;
0.3481; 0.3564; 0.3374; 0.3610; 0.3241; 0.3593; 0.3434; 0.3361; 0.3201]; %varY
m3(:,2) = m3(:,2)./std(m3(:,2));
SVMmodel_testm = fitcsvm(m3,g,'KernelFunction','Linear');
Notice the line before the last one (the division by the standard deviation).
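An equivalent, slightly more general option (assuming the Statistics and Machine Learning Toolbox is available) is to standardize both predictors, either explicitly with zscore or by letting fitcsvm do it internally:

m3 = zscore(m3);                                   % standardize both columns
% or, without changing m3:
SVMmodel_testm = fitcsvm(m3, g, 'KernelFunction', 'linear', 'Standardize', true);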

MATLAB: Naive Bayes with Univariate Gaussian

I am trying to implement a Naive Bayes classifier using a dataset published by the UCI machine learning team. I am new to machine learning and trying to understand techniques to use for my work-related problems, so I thought it would be better to understand the theory first.
I am using the Pima dataset (Link to Data - UCI-ML), and my goal is to build a Naive Bayes univariate Gaussian classifier for a K-class problem (the data is only there for K=2). I have split the data and calculated the mean, standard deviation, and prior for each class, but after this I am stuck because I am not sure what I should do next or how. I have a feeling that I should be calculating the posterior probability.
Here is my code. I am using percent as a vector because I want to see the behavior as I increase the training data size within the 80:20 split. Basically, if you pass [10 20 30 40] it will take those percentages of the 80:20 split, e.g. use 10% of the 80% as training.
function [classMean] = naivebayes(file, iter, percent)
dm = load(file);
for i = 1:iter
    idx = randperm(size(dm.data,1));
    % Using same idx for data and labels
    shuffledMatrix_data = dm.data(idx,:);
    shuffledMatrix_label = dm.labels(idx,:);
    percent_data_80 = round((0.8) * length(shuffledMatrix_data));
    % Doing 80-20 split
    train = shuffledMatrix_data(1:percent_data_80,:);
    test = shuffledMatrix_data(percent_data_80+1:length(shuffledMatrix_data),:);
    train_labels = shuffledMatrix_label(1:percent_data_80,:);
    test_labels = shuffledMatrix_label(percent_data_80+1:length(shuffledMatrix_data),:);
    % Getting the array of percents
    for pRows = 1:length(percent)
        percentOfRows = round((percent(pRows)/100) * length(train));
        new_train = train(1:percentOfRows,:);
        new_trin_label = shuffledMatrix_label(1:percentOfRows);
        % get unique labels in training
        numClasses = size(unique(new_trin_label),1);
        classMean = zeros(numClasses, size(new_train,2));
        for kclass = 1:numClasses
            % per-class mean, standard deviation, and prior
            classMean(kclass,:) = mean(new_train(new_trin_label == kclass,:));
            classStd(kclass,:) = std(new_train(new_trin_label == kclass,:));
            priorClassforK = length(new_train(new_trin_label == kclass))/length(new_train);
            priorClassforK_1 = 1 - priorClassforK;
        end
    end
end
end
First, compute the probability of every class label based on frequency counts. Then, for a given sample and a given class, compute the conditional probability of every feature. After that, multiply the conditional probabilities of all features in the sample with each other and with the probability of the considered class label. Finally, compare the values across all class labels and choose the label with the maximum probability (Bayes classification rule).
For computing the conditional probability, you can simply use the Normal (Gaussian) density function.
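As a minimal sketch of that rule in MATLAB, assuming one test sample x (a row vector), with the per-class means and standard deviations stored in classMean and classStd as in the code above, and the priors collected into a vector classPrior (a hypothetical name, e.g. filled from the priorClassforK values):

posterior = zeros(numClasses, 1);
for kclass = 1:numClasses
    % univariate Gaussian likelihood of each feature, given class kclass
    likelihoods = normpdf(x, classMean(kclass,:), classStd(kclass,:));
    % naive Bayes: multiply the feature likelihoods and the class prior
    posterior(kclass) = classPrior(kclass) * prod(likelihoods);
end
[~, predictedClass] = max(posterior);              % pick the most probable class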

using precomputed kernels with libsvm

I'm currently working on classifying images with different image descriptors. Since they have their own metrics, I am using precomputed kernels. Given these NxN kernel matrices (for a total of N images), I want to train and test an SVM. I'm not very experienced with SVMs though.
What confuses me is how to enter the input for training. Using an MxM subset of the kernel (M being the number of training images) trains the SVM with M features. However, if I understood it correctly, this limits me to test data with the same number of features. Trying to use a sub-kernel of size MxN causes infinite loops during training; consequently, using more features when testing gives poor results.
This leaves me with equally sized training and test sets, which give reasonable results. But if I only wanted to classify, say, one image, or to train with a given number of images per class and test with the rest, it doesn't work at all.
How can I remove the dependency between the number of training images and the number of features, so that I can test with any number of images?
I'm using libsvm for MATLAB; the kernels are distance matrices with values in [0,1].
You seem to already have figured out the problem... According to the README file included in the MATLAB package:
To use precomputed kernel, you must include sample serial number as
the first column of the training and testing data.
Let me illustrate with an example:
%# read dataset
[dataClass, data] = libsvmread('./heart_scale');
%# split into train/test datasets
trainData = data(1:150,:);
testData = data(151:270,:);
trainClass = dataClass(1:150,:);
testClass = dataClass(151:270,:);
numTrain = size(trainData,1);
numTest = size(testData,1);
%# radial basis function: exp(-gamma*|u-v|^2)
sigma = 2e-3;
rbfKernel = @(X,Y) exp(-sigma .* pdist2(X,Y,'euclidean').^2);
%# compute kernel matrices between every pairs of (train,train) and
%# (test,train) instances and include sample serial number as first column
K = [ (1:numTrain)' , rbfKernel(trainData,trainData) ];
KK = [ (1:numTest)' , rbfKernel(testData,trainData) ];
%# train and test
model = svmtrain(trainClass, K, '-t 4');
[predClass, acc, decVals] = svmpredict(testClass, KK, model);
%# confusion matrix
C = confusionmat(testClass,predClass)
The output:
*
optimization finished, #iter = 70
nu = 0.933333
obj = -117.027620, rho = 0.183062
nSV = 140, nBSV = 140
Total nSV = 140
Accuracy = 85.8333% (103/120) (classification)
C =
65 5
12 38