LIBSVM - no probability estimates - matlab

I'm using LIBSVM for MATLAB. When I use a regression SVM, the probability estimates it outputs are an empty matrix, whereas this feature works fine for classification. Is this normal behavior? The LIBSVM README says:
-b probability_estimates: whether to train a SVC or SVR model for probability estimates,
0 or 1 (default 0)

[~,~,P] = svmpredict(x,y,model,'-b 1');
The output P is the probability that y belongs to class 1 and class -1 respectively (an m*2 array), and it only makes sense for classification problems.
For regression problems, the pairwise probability information is stored in your trained model as model.ProbA.
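As a minimal sketch (assuming x and y are your feature matrix and targets, and LIBSVM's MATLAB interface is on the path), an epsilon-SVR trained with -b 1 returns an empty third output from svmpredict, while the probability parameter lands in the model struct:

```matlab
% Train an epsilon-SVR (-s 3) with probability information (-b 1).
model = svmtrain(y, x, '-s 3 -b 1');

% For regression, P comes back empty; the probability (Laplace)
% parameter is stored in the trained model instead.
[predicted, ~, P] = svmpredict(y, x, model, '-b 1');
disp(isempty(P));        % P is empty for SVR
disp(model.ProbA);       % probability information for the SVR model
```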


Matlab-libsvm - reproducing the decision values from the primal weight vector, linear kernel

I'm trying to compare the decision values provided by libsvm's svmpredict with those generated by projection of the data on the primal weight vector w (I'm considering the linear case). For debugging purposes I'm using the same data for training and testing.
w is computed according to the libsvm FAQ. Then I'm calculating the decision values by z=X*w+b.
When the data is separable (n=300, p=1000), the decision values produced by both methods are differently scaled, correlated but not identical (the predicted labels are also not exactly the same):
When the data is inseparable (n=300, p=10), there's a very weak relation between the values:
I suspect I've missed something fundamental. Any ideas? Matlab's fitcsvm object does not produce this discrepancy.
Code:
%% generate some random data
n=300;
p=1000;
labels=mod(randperm(n)',2)*2-1;
X=randn(n,p);
%% train model
model = svmtrain(labels, X, '-q -b 0');
%% produce primal w (libsvm faq)
w = model.SVs' * model.sv_coef;
b = -model.rho;
if model.Label(1) == -1
w = -w;
b = -b;
end
primal_decision_values=(X*w+b); %??
%% svmpredict decision values
[predicted_label, accuracy, libsvm_decision_values] = svmpredict(labels, X, model, '-q -b 0');
%% comparison
fprintf('label agreement: %g\n',mean(sign(predicted_label)==sign(primal_decision_values)))
scatter(primal_decision_values,libsvm_decision_values); xlabel('primal decision values'); ylabel('libsvm decision values');
The default kernel in LibSVM is the RBF Kernel, as described in the documentation:
-t kernel_type : set type of kernel function (default 2)
0 -- linear: u'*v
1 -- polynomial: (gamma*u'*v + coef0)^degree
2 -- radial basis function: exp(-gamma*|u-v|^2)
3 -- sigmoid: tanh(gamma*u'*v + coef0)
4 -- precomputed kernel (kernel values in training_set_file)
With the training command model = svmtrain(labels, X, '-q -b 0'); you will thus train an RBF support vector machine. A prediction using the primal w via X*w+b, however, is only valid for the linear kernel.
When training a SVM with a linear kernel:
model = svmtrain(labels, X,'-t 0 -q -b 0');
you will get a beautiful identity function when comparing the LibSVM function and the prediction with X*w+b (with all code except the svmtrain identical to your MWE):
(It also took me quite a while to figure out that the default is an RBF (2), and not a linear (0) kernel. Who sets such counterintuitive default values?!?)
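To make the agreement concrete, here is a sketch of the comparison with the linear kernel, reusing X and labels from the MWE above; with -t 0 the two sets of decision values should match up to floating-point error:

```matlab
% Train with an explicitly linear kernel (-t 0).
model = svmtrain(labels, X, '-t 0 -q');

% Primal weight vector per the LIBSVM FAQ.
w = model.SVs' * model.sv_coef;
b = -model.rho;
if model.Label(1) == -1
    w = -w;
    b = -b;
end

% Compare primal decision values against svmpredict's.
[~, ~, dv] = svmpredict(labels, X, model, '-q');
fprintf('max abs difference: %g\n', max(abs(dv - (X*w + b))));
```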

Multiclass classification and the sigmoid function

Say we have a training set Y:
1,0,1,0
0,1,1,0
0,0,1,1
0,0,1,0
And the sigmoid function is defined as g(z) = 1 / (1 + e^(-z)).
As the sigmoid function outputs a value between 0 and 1, does this mean that the training data and the values we are trying to predict should also fall between 0 and 1?
Is it also correct to use the sigmoid function for making predictions when the training set values are not between 0 and 1?:
1,4,3,0
2,1,1,0
7,2,6,1
3,0,5,0
Yes, it is perfectly valid to have non-binary features.
The output falls between 0 and 1 because of the nature of the sigmoid function; nothing stops you from having a non-binary feature set.
Do the predictions have to be binary?
Yes, you can have multiclass logistic classification as well.
The simplest way of doing that is solving a one-vs-all classification problem, wherein you train one binary logistic classifier for each of the labels.
For example, if your prediction space spans (1, 2, 3, 4), you can have 4 logistic classifiers.
Given any point in the test set, you can give it the label corresponding to the classifier which is most confident (i.e. has the highest score for that test point).
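The one-vs-all scheme can be sketched as follows, here using MATLAB's glmfit/glmval (Statistics Toolbox) as the binary logistic trainer; X is an n-by-p training matrix, y holds labels from classes, and Xtest is the test matrix (all assumed to exist):

```matlab
% One-vs-all logistic classification: one binary logistic classifier
% per class, then pick the most confident one per test point.
classes = unique(y);
scores = zeros(size(Xtest, 1), numel(classes));

for k = 1:numel(classes)
    yk = double(y == classes(k));             % binary target for class k
    B = glmfit(X, yk, 'binomial');            % logistic regression fit
    scores(:, k) = glmval(B, Xtest, 'logit'); % sigmoid outputs in (0,1)
end

[~, idx] = max(scores, [], 2);                % highest-scoring classifier
predicted = classes(idx);
```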

Matlab predict function not working

I am trying to train a linear SVM on a data which has 100 dimensions. I have 80 instances for training. I train the SVM using fitcsvm function in MATLAB and check the function using predict on the training data. When I classify the training data with the SVM all the data points are being classified into only one class.
SVM = fitcsvm(votes,b,'ClassNames',unique(b)');
predict(SVM,votes);
This outputs all 0's, which corresponds to class 0. b contains 1's and 0's indicating the class to which each data point belongs.
The data used, i.e. the matrix votes and the vector b, are given in the following link
Make sure you use a non-linear kernel, such as a Gaussian kernel, and that the kernel's parameters are tweaked. Just as a starting point:
SVM = fitcsvm(votes,b,'KernelFunction','RBF', 'KernelScale','auto');
bp = predict(SVM,votes);
That said, you should split your data into a training set and a test set; otherwise you risk overfitting.
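A sketch of such a split, using cvpartition on the same votes/b data (a 70/30 hold-out is an arbitrary choice here):

```matlab
% Hold out 30% of the data for testing, stratified by class label.
cv = cvpartition(b, 'HoldOut', 0.3);

SVM = fitcsvm(votes(cv.training, :), b(cv.training), ...
    'KernelFunction', 'RBF', 'KernelScale', 'auto');

% Evaluate on the held-out test set, not the training data.
bp = predict(SVM, votes(cv.test, :));
testErr = mean(bp ~= b(cv.test));
fprintf('test error: %g\n', testErr);
```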

Compute the training error and test error in libsvm + MATLAB

I would like to draw learning curves for a given SVM classifier. Thus, in order to do this, I would like to compute the training, cross-validation and test error, and then plot them while varying some parameter (e.g., number of instances m).
How can I compute the training, cross-validation and test error with libsvm when used with MATLAB?
I have seen other answers (see example) that suggest solutions for other languages.
Isn't there a compact way of doing it?
Given a set of instances described by:
a set of features featureVector;
their corresponding labels (e.g., either 0 or 1),
if a model was previously trained via libsvm, the mean squared error (MSE) can be computed as follows:
[predictedLabels, accuracy, ~] = svmpredict(labels, featureVectors, model,'-q');
MSE = accuracy(2);
Notice that predictedLabels contains the labels that were predicted by the classifier for the given instances.
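Building on that, a learning curve can be sketched by training on growing subsets and recording the error on both the training subset and a fixed test set (featureVectors/labels and testVectors/testLabels are assumed to exist; accuracy(2) is the MSE as above):

```matlab
% Learning curve: vary the number of training instances m.
sizes = 10:10:numel(labels);
trainErr = zeros(size(sizes));
testErr  = zeros(size(sizes));

for i = 1:numel(sizes)
    m = sizes(i);
    model = svmtrain(labels(1:m), featureVectors(1:m, :), '-q');
    [~, accTr, ~] = svmpredict(labels(1:m), featureVectors(1:m, :), model, '-q');
    [~, accTe, ~] = svmpredict(testLabels, testVectors, model, '-q');
    trainErr(i) = accTr(2);   % accuracy(2) is the MSE
    testErr(i)  = accTe(2);
end

plot(sizes, trainErr, sizes, testErr);
legend('training error', 'test error');
xlabel('training set size m'); ylabel('MSE');
```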

Labeling one class for cross validation in libsvm matlab

I want to use one-class classification using LibSVM in MATLAB.
I want to train data and use cross validation, but I don't know what I have to do to label the outliers.
If for example I have this data:
trainData = [1,1,1; 1,1,2; 1,1,1.5; 1,1.5,1; 20,2,3; 2,20,2; 2,20,5; 20,2,2];
labelTrainData = [-1 -1 -1 -1 0 0 0 0];
(The first four are examples of the 1 class, the other four are examples of outliers, just for the cross validation)
And I train the model using this:
model = svmtrain(labelTrainData, trainData , '-s 2 -t 0 -d 3 -g 2.0 -r 2.0 -n 0.5 -m 40.0 -c 0.0 -e 0.0010 -p 0.1 -v 2' );
I'm not sure which value to use to label the one-class data and which to use for the outliers. Does someone know how to do this?
Thanks in advance.
-Jessica
According to http://www.joint-research.org/wp-content/uploads/2011/07/lukashevich2009Using-One-class-SVM-Outliers-Detection.pdf, "Due to the lack of class labels in the one-class SVM, it is not possible to optimize the kernel parameters using cross-validation".
However, according to the LIBSVM FAQ that is not quite correct:
Q: How do I choose parameters for one-class SVM as training data are in only one class?
You have pre-specified true positive rate in mind and then search for parameters which achieve similar cross-validation accuracy.
Furthermore the README for the libsvm source says of the input data:
"For classification, label is an integer indicating the class label ... For one-class SVM, it's not used so can be any number."
I think your outliers should not be included in the training data; libsvm will ignore the training labels anyway. What you are trying to do is find a hypersphere that contains the good data but not the outliers. If you train with outliers in the data, LIBSVM will try to find a hypersphere that includes the outliers, which is exactly what you don't want. So you will need a training dataset without outliers, a validation dataset with outliers for choosing parameters, and a final test dataset to see whether your model generalizes.
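A sketch of that workflow with the trainData from the question: train on the inliers only, then score a validation set that does contain outliers (the labels there are used only for evaluation; a one-class model predicts +1 for inliers and -1 for outliers):

```matlab
trainData = [1,1,1; 1,1,2; 1,1,1.5; 1,1.5,1; 20,2,3; 2,20,2; 2,20,5; 20,2,2];

% Train the one-class SVM on the clean examples only;
% the labels passed to svmtrain are ignored for -s 2.
cleanData = trainData(1:4, :);
model = svmtrain(ones(4, 1), cleanData, '-s 2 -t 2 -n 0.5 -q');

% Validate on all points, including the known outliers.
valLabels = [1; 1; 1; 1; -1; -1; -1; -1];   % +1 inlier, -1 outlier
[pred, ~, ~] = svmpredict(valLabels, trainData, model, '-q');
fprintf('validation accuracy: %g\n', mean(pred == valLabels));
```

Repeat this over a grid of -n and -g values and keep the parameters that best match your target true-positive rate, then confirm on a separate test set.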