I have fairly simple code for binary classification (see below). When I re-run it in Matlab (by manually pressing the "run" button), each run gives slightly different accuracies for each of the 14 subjects. However, if I loop over the code nrPermute times, every iteration of the loop gives EXACTLY the same accuracy for the respective subject. Why is that? In other words, in the first code mean(accuracy) differs between runs, whereas in the second code it is identical across iterations. Both codes are below.
Code where only one 10-fold crossvalidation is done for each subject:
%% SVM-Classification
nrFolds = 10; %number of folds of crossvalidation, 10 is standard
kernel = 'linear'; % 'linear', 'rbf' or 'polynomial'
C = 1;
solver = 'L1QP';
cvFolds = crossvalind('Kfold', labels, nrFolds);
for k = 1:14
for i = 1:nrFolds % iterate through each fold
testIdx = (cvFolds == i); % indices of test instances
trainIdx = ~testIdx; % indices training instances
% train the SVM
cl = fitcsvm(features(trainIdx,:), ...
labels(trainIdx),'KernelFunction',kernel,'Standardize',true,...
'BoxConstraint',C,'ClassNames',[0,1],'Solver',solver);
[label,scores] = predict(cl, features(testIdx,:));
eq = sum(label==labels(testIdx));
accuracy(i) = eq/numel(labels(testIdx));
end
crossValAcc(k) = mean(accuracy);
end
Code where each 10-fold crossvalidation is repeated nrPermute times:
%% SVM-Classification
nrFolds = 10; %number of folds of crossvalidation, 10 is standard
kernel = 'linear'; % 'linear', 'rbf' or 'polynomial'
C = 1;
solver = 'L1QP';
cvFolds = crossvalind('Kfold', labels, nrFolds);
nrPermute = 5;
for k = 1:14
for p = 1:nrPermute
for i = 1:nrFolds % iterate through each fold
testIdx = (cvFolds == i); % indices of test instances
trainIdx = ~testIdx; % indices training instances
% train the SVM
cl = fitcsvm(features(trainIdx,:), ...
labels(trainIdx),'KernelFunction',kernel,'Standardize',true,...
'BoxConstraint',C,'ClassNames',[0,1],'Solver',solver);
[label,scores] = predict(cl, features(testIdx,:));
eq = sum(label==labels(testIdx));
accuracy(i) = eq/numel(labels(testIdx));
end
accSubj(p) = mean(accuracy); % accuracy of each permutation
end
crossValAcc(k) = mean(accSubj);
end
In case it is useful for someone else: I figured it out. The call cvFolds = crossvalind('Kfold', labels, nrFolds); has to go inside the permutation loop, so that the assignment of observations to folds is re-shuffled on every permutation!
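For reference, a minimal sketch of the corrected structure (same variable names as above, nothing else changed): moving the crossvalind call inside the permutation loop makes every repetition draw a fresh fold assignment, so accSubj(p) now varies across permutations.
nrPermute = 5;
for p = 1:nrPermute
    cvFolds = crossvalind('Kfold', labels, nrFolds); % re-shuffled on every permutation
    for i = 1:nrFolds
        testIdx  = (cvFolds == i);
        trainIdx = ~testIdx;
        cl = fitcsvm(features(trainIdx,:), labels(trainIdx), ...
            'KernelFunction',kernel, 'Standardize',true, ...
            'BoxConstraint',C, 'ClassNames',[0,1], 'Solver',solver);
        label = predict(cl, features(testIdx,:));
        accuracy(i) = mean(label == labels(testIdx));
    end
    accSubj(p) = mean(accuracy); % no longer identical across p
end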
I'm trying to run SVM-RFE with the libsvm library on a gene expression dataset. My algorithm is written in Matlab. This dataset achieves 80+% classification accuracy under 5-fold CV without feature selection. When I apply SVM-RFE to it (same SVM parameter settings, same 5-fold CV), the classification result gets worse: I can only achieve 60+% accuracy.
Here is my Matlab code; I'd appreciate it if anyone can shed some light on what's wrong with it. Thank you in advance.
[label, data] = libsvmread('libsvm_data.scale');
[N D] = size(data);
numfold = 5;
kernel = 0; cost = 1; gamma = 1/D; % SVM settings used in the RFE loop below (-t 0 = linear kernel)
indices = crossvalind ('Kfold',label, numfold);
cp = classperf(label);
for i= 1:numfold
disp(strcat('Fold-',int2str(i)));
testix = (indices == i); trainix = ~testix;
test_data = data(testix,:); test_label = label(testix);
train_data = data(trainix,:); train_label = label(trainix);
model = svmtrain(train_label, train_data, '-s 0 -t 0');
s = 1:D;
r = [];
iter = 1;
while ~isempty(s)
X = train_data(:,s);
fs_model = svmtrain(train_label, X, sprintf('-s 0 -t %d -c %f -g %f -b 1', kernel, cost, gamma));
w = fs_model.SVs' * fs_model.sv_coef; % weight vector of the linear SVM
c = w.^2;
[c_minvalue, f] = min(c);
r = [s(f),r];
ind = [1:f-1, f+1:length(s)];
s = s(ind);
iter = iter + 1;
end
predefined = 100;
important_feat = r(1:predefined); % r(1) was eliminated last, i.e. it is the most important feature
testdata = test_data(:, important_feat); % select the same columns from the test split
[predict_label_itest, accuracy_itest, prob_values] = svmpredict(test_label, testdata, model,'-b 1');
acc_itest_fs (:,i) = accuracy_itest(1);
clear testdata;
end
Mean_itest_fs = mean(acc_itest_fs, 2);
After applying RFE to the training data, you get a subset of the training features. So when you train the model you later use for prediction, I think you should train it on that feature subset (and select the same columns from the test data), not on the full feature set.
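To make that concrete, here is a minimal sketch of the change inside the fold loop (same variable names and linear-kernel settings as the question; the retrained fs_final model and the matching column selection on the test split are the additions):
important_feat = r(1:predefined);           % top-ranked features from RFE
fs_train = train_data(:, important_feat);   % training data, selected columns only
fs_test  = test_data(:,  important_feat);   % the SAME columns for the test split
fs_final = svmtrain(train_label, fs_train, '-s 0 -t 0 -b 1'); % retrain on the subset
[predict_label, accuracy_itest, prob_values] = ...
    svmpredict(test_label, fs_test, fs_final, '-b 1');
acc_itest_fs(:,i) = accuracy_itest(1);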
I am new to classification and am trying to do multiclass classification with libsvm. I have song features and want to classify them into 3 classes. The code works, but the output (pred) gives only one label for all test data, for example pred = [2;2;2;2].
Any change I make has no effect on the output, and I can't see where I am going wrong. Can anyone help me out, please? The code is below.
load 113gender
s=cellstr(types);
[~,~,labels] = unique(s);
data = allfeature;
numInst = size(data,1);
numLabels = max(labels);
numTest = size(alltest,1);
trainData = data;
trainLabel = labels;
testData = alltest;
%testLabel = ones(22,1);
%testLabel = [1;1;1;1;1;1;2;2;2;2;2;2;2;2;3;3;3;3;3;3;3;3];
testLabel = [3;3;3;3;3;3;2;2;2;2;2;2;2;2;1;1;1;1;1;1;1;1];
%# train one-against-all models
model = cell(numLabels,1);
for k=1:numLabels
model{k} = svmtrain(double(trainLabel==k), trainData, '-c 1 -g 0.2 -b 1');
end
%# get probability estimates of test instances using each model
prob = zeros(numTest,numLabels);
for k=1:numLabels
[~,~,p] = svmpredict(double(testLabel==k), testData, model{k}, '-b 1');
prob(:,k) = p(:,model{k}.Label==1); %# probability of class==k
end
%# predict the class with the highest probability
[~,pred] = max(prob,[],2);
acc = sum(pred == testLabel) ./ numel(testLabel)
C = confusionmat(testLabel, pred)
I need to keep track of the F1-scores while tuning C and sigma in an SVM.
For example, the following code keeps track of the accuracy; I need to change it to the F1-score, but I was not able to do that.
%# read some training data
[labels,data] = libsvmread('./heart_scale');
%# grid of parameters
folds = 5;
[C,gamma] = meshgrid(-5:2:15, -15:2:3);
%# grid search, and cross-validation
cv_acc = zeros(numel(C),1);
for i=1:numel(C)
cv_acc(i) = svmtrain(labels, data, ...
sprintf('-c %f -g %f -v %d', 2^C(i), 2^gamma(i), folds));
end
%# pair (C,gamma) with best accuracy
[~,idx] = max(cv_acc);
%# now you can train your model using best_C and best_gamma
best_C = 2^C(idx);
best_gamma = 2^gamma(idx);
%# ...
I have seen the following two links
Retraining after Cross Validation with libsvm
10 fold cross-validation in one-against-all SVM (using LibSVM)
I understand that I first have to find the best C and gamma/sigma parameters over the training data, and then use those two values in a LEAVE-ONE-OUT cross-validation classification experiment.
So what I want now is to do a grid search for tuning C and sigma.
Please note that I would prefer to use MATLAB's SVM functions and not LIBSVM.
Below is my code for LEAVE-ONE-OUT cross-validation classification.
clc
clear all
close all
a = load('V1.csv');
X = double(a(:,1:12));
Y = double(a(:,13));
% train data
datall=[X,Y];
A=datall;
n = 40;
ordering = randperm(n);
B = A(ordering, :);
good=B;
input=good(:,1:12);
target=good(:,13);
CVO = cvpartition(target,'LeaveOut');
cp = classperf(target); %# init performance tracker
svmModel=[];
for i = 1:CVO.NumTestSets %# for each fold
trIdx = CVO.training(i);
teIdx = CVO.test(i);
%# train an SVM model over training instances
svmModel = svmtrain(input(trIdx,:), target(trIdx), ...
'Autoscale',true, 'Showplot',false, 'Method','ls', ...
'BoxConstraint',0.1, 'Kernel_Function','rbf', 'RBF_Sigma',0.1);
%# test using test instances
pred = svmclassify(svmModel, input(teIdx,:), 'Showplot',false);
%# evaluate and update performance object
cp = classperf(cp, pred, teIdx);
end
%# get accuracy
accuracy=cp.CorrectRate*100
sensitivity=cp.Sensitivity*100
specificity=cp.Specificity*100
PPV=cp.PositivePredictiveValue*100
NPV=cp.NegativePredictiveValue*100
%# get confusion matrix
%# columns:actual, rows:predicted, last-row: unclassified instances
cp.CountingMatrix
recallP = sensitivity;
recallN = specificity;
precisionP = PPV;
precisionN = NPV;
f1P = 2*((precisionP*recallP)/(precisionP + recallP));
f1N = 2*((precisionN*recallN)/(precisionN + recallN));
aF1 = ((f1P+f1N)/2);
I have changed the code,
but I am making some mistakes and getting errors:
a = load('V1.csv');
X = double(a(:,1:12));
Y = double(a(:,13));
% train data
datall=[X,Y];
A=datall;
n = 40;
ordering = randperm(n);
B = A(ordering, :);
good=B;
inpt=good(:,1:12);
target=good(:,13);
k=10;
cvFolds = crossvalind('Kfold', target, k); %# get indices of 10-fold CV
cp = classperf(target); %# init performance tracker
svmModel=[];
for i = 1:k
testIdx = (cvFolds == i); %# get indices of test instances
trainIdx = ~testIdx;
C = 0.1:0.1:1;
S = 0.1:0.1:1;
fscores = zeros(numel(C), numel(S)); %// Pre-allocation
for c = 1:numel(C)
for s = 1:numel(S)
vals = crossval(@(XTRAIN, YTRAIN, XVAL, YVAL) fun(XTRAIN, YTRAIN, XVAL, YVAL, C(c), S(s)), inpt(trainIdx,:), target(trainIdx));
fscores(c,s) = mean(vals);
end
end
end
[cbest, sbest] = find(fscores == max(fscores(:)));
C_final = C(cbest);
S_final = S(sbest);
...
and the function:
function fscore = fun(XTRAIN, YTRAIN, XVAL, YVAL, C, S)
svmModel = svmtrain(XTRAIN, YTRAIN, ...
'Autoscale',true, 'Showplot',false, 'Method','ls', ...
'BoxConstraint', C, 'Kernel_Function','rbf', 'RBF_Sigma', S);
pred = svmclassify(svmModel, XVAL, 'Showplot',false);
cp = classperf(YVAL, pred)
%# get accuracy
accuracy=cp.CorrectRate*100
sensitivity=cp.Sensitivity*100
specificity=cp.Specificity*100
PPV=cp.PositivePredictiveValue*100
NPV=cp.NegativePredictiveValue*100
%# get confusion matrix
%# columns:actual, rows:predicted, last-row: unclassified instances
cp.CountingMatrix
recallP = sensitivity;
recallN = specificity;
precisionP = PPV;
precisionN = NPV;
f1P = 2*((precisionP*recallP)/(precisionP + recallP));
f1N = 2*((precisionN*recallN)/(precisionN + recallN));
fscore = ((f1P+f1N)/2);
end
So basically you want to take this line of yours:
svmModel = svmtrain(input(trIdx,:), target(trIdx), ...
'Autoscale',true, 'Showplot',false, 'Method','ls', ...
'BoxConstraint',0.1, 'Kernel_Function','rbf', 'RBF_Sigma',0.1);
put it in a loop that varies your 'BoxConstraint' and 'RBF_Sigma' parameters, and then use crossval to output the f1-score for that iteration's combination of parameters.
You can use a single for-loop exactly like in your libsvm code example (i.e. using meshgrid and 1:numel(); that is probably faster) or a nested for-loop. I'll use a nested loop so that you have both approaches:
C = [0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300] %// you must choose your own set of values for the parameters that you want to test. You can either do it this way by explicitly typing out a list
S = 0:0.1:1 %// or you can do it this way using the : operator
fscores = zeros(numel(C), numel(S)); %// Pre-allocation
for c = 1:numel(C)
for s = 1:numel(S)
vals = crossval(@(XTRAIN, YTRAIN, XVAL, YVAL) fun(XTRAIN, YTRAIN, XVAL, YVAL, C(c), S(s)), input(trainIdx,:), target(trainIdx));
fscores(c,s) = mean(vals);
end
end
%// Then establish the C and S that gave you the best f-score. Don't forget that cbest and sbest are just indexes though!
[cbest, sbest] = find(fscores == max(fscores(:)));
C_final = C(cbest);
S_final = S(sbest);
Now we just have to define the function fun. The docs have this to say about fun:
fun is a function handle to a function with two inputs, the training subset of X, XTRAIN, and the test subset of X, XTEST, as follows:
testval = fun(XTRAIN,XTEST)
Each time it is called, fun should use XTRAIN to fit a model, then return some criterion testval computed on XTEST using that fitted model.
So fun needs to:
output a single f-score
take as input a training and a testing set for X and Y. Note that these are both subsets of your actual training set! Think of them more like a training and a validation subset of your training set. Also note that crossval will split these sets up for you!
train a classifier on the training subset (using your current C and S parameters from your loop)
run your new classifier on the test (or rather validation) subset
compute and output a performance metric (in your case you want the f1-score)
You'll notice that fun can't take any extra parameters, which is why I've wrapped it in an anonymous function so that we can pass the current C and S values in (i.e. all that @(...) fun(...) stuff above). That's just a trick to "convert" our six-parameter fun into the four-parameter one required by crossval.
function fscore = fun(XTRAIN, YTRAIN, XVAL, YVAL, C, S)
svmModel = svmtrain(XTRAIN, YTRAIN, ...
'Autoscale',true, 'Showplot',false, 'Method','ls', ...
'BoxConstraint', C, 'Kernel_Function','rbf', 'RBF_Sigma', S);
pred = svmclassify(svmModel, XVAL, 'Showplot',false);
CP = classperf(YVAL, pred)
fscore = ... %// You can do this bit the same way you did earlier
end
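For completeness, one way to fill in that last line, mirroring the macro-averaged F1 computation from the question's earlier code (CP is the classperf object created just above; a sketch of the same calculation):
%// replacing the "fscore = ..." line inside fun:
recallP    = CP.Sensitivity;
recallN    = CP.Specificity;
precisionP = CP.PositivePredictiveValue;
precisionN = CP.NegativePredictiveValue;
f1P = 2*(precisionP*recallP)/(precisionP + recallP);
f1N = 2*(precisionN*recallN)/(precisionN + recallN);
fscore = (f1P + f1N)/2; %// macro-average of the two per-class F1 scores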
I found only one problem, with target(trainIdx): it was a row vector, so I just replaced target(trainIdx) with target(trainIdx)', which is a column vector.
After getting my test labels and training labels, I ran an SVM with libsvm and got an accuracy of 97.4359% (c = 1 and g = 0.00375):
model = svmtrain(TrainLabel, TrainVec, '-c 1 -g 0.00375');
[predict_label, accuracy, dec_values] = svmpredict(TestLabel, TestVec, model);
Then I searched for the best c and g:
bestcv = 0;
for log2c = -1:3,
for log2g = -4:1,
cmd = ['-v 5 -c ', num2str(2^log2c), ' -g ', num2str(2^log2g)];
cv = svmtrain(TrainLabel,TrainVec, cmd);
if (cv >= bestcv),
bestcv = cv; bestc = 2^log2c; bestg = 2^log2g;
end
fprintf('%g %g %g (best c=%g, g=%g, rate=%g)\n', log2c, log2g, cv, bestc, bestg, bestcv);
end
end
This gives c = 8 and g = 0.125.
I then train the model again:
model = svmtrain(TrainLabel, TrainVec, '-c 8 -g 0.125');
[predict_label, accuracy, dec_values] = svmpredict(TestLabel, TestVec, model);
Now I get an accuracy of 82.0513%.
How is it possible for the accuracy to decrease? Shouldn't it increase? Or am I making a mistake?
The accuracies that you were getting during parameter tuning are biased upwards because you were predicting on the same data that you were training on. This is often fine for parameter tuning.
However, if you want those accuracies to be accurate estimates of the true generalization error on your final test set, then you have to add an additional wrapper of cross-validation or another resampling scheme.
Here is a very clear paper that outlines the general issue (in the similar context of feature selection): http://www.pnas.org/content/99/10/6562.abstract
EDIT:
I usually add cross-validation like this:
n = 95;     % total number of observations
nfold = 10; % desired number of folds
% Set up CV folds: repeat the fold labels enough times, then assign them at random
inds = repmat(1:nfold, 1, ceil(n/nfold));
inds = inds(randperm(n));
% Loop over folds
for i = 1:nfold
datapart = data(inds ~= i, :);
% do some stuff
% save results
end
% combine results
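To make the "additional wrapper" concrete, here is a minimal nested cross-validation sketch under the same assumptions (libsvm's svmtrain/svmpredict, with data and a matching labels vector in memory, and inds from above): the inner -v 5 call tunes (c, g) on the training part of each outer fold only, and the held-out outer fold measures generalization.
for i = 1:nfold
    trainData  = data(inds ~= i, :);  trainLabel = labels(inds ~= i);
    testData   = data(inds == i, :);  testLabel  = labels(inds == i);
    % inner loop: tune (c, g) by 5-fold CV on the training part only
    bestcv = 0;
    for log2c = -1:3
        for log2g = -4:1
            cv = svmtrain(trainLabel, trainData, ...
                sprintf('-v 5 -c %g -g %g', 2^log2c, 2^log2g));
            if cv >= bestcv
                bestcv = cv; bestc = 2^log2c; bestg = 2^log2g;
            end
        end
    end
    % outer fold: retrain with the tuned parameters, test on the held-out fold
    model = svmtrain(trainLabel, trainData, sprintf('-c %g -g %g', bestc, bestg));
    [~, acc, ~] = svmpredict(testLabel, testData, model);
    outerAcc(i) = acc(1); % acc(1) is accuracy in percent
end
mean(outerAcc) % estimate of generalization accuracy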
To do cross-validation properly, you are supposed to split your training data. Here you test on your training data to find your best set of parameters; that is not a good measure. You should use the following pseudo-code:
for param = set of parameters to test
    [trainTrain, trainVal] = randomly split(trainSet); %%% you can repeat this several times and take the mean accuracy
    model = svmtrain(trainTrain, param);
    acc = svmpredict(trainVal, model);
    if acc is the best so far
        bestParam = param;
    end
end
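A concrete version of that pseudo-code using libsvm's MATLAB interface (a sketch only: TrainLabel and TrainVec are the variables from the question, and the 80/20 split ratio is an arbitrary choice):
bestAcc = 0;
n   = numel(TrainLabel);
idx = randperm(n);
nTr = round(0.8*n);          % 80/20 train/validation split
tr  = idx(1:nTr);
va  = idx(nTr+1:end);
for log2c = -1:3
    for log2g = -4:1
        model = svmtrain(TrainLabel(tr), TrainVec(tr,:), ...
            sprintf('-c %g -g %g', 2^log2c, 2^log2g));
        [~, acc, ~] = svmpredict(TrainLabel(va), TrainVec(va,:), model);
        if acc(1) >= bestAcc % acc(1) is accuracy in percent
            bestAcc = acc(1); bestc = 2^log2c; bestg = 2^log2g;
        end
    end
end
% finally, retrain on ALL training data with (bestc, bestg)
% and evaluate once on the untouched test set
finalModel = svmtrain(TrainLabel, TrainVec, sprintf('-c %g -g %g', bestc, bestg));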