I am classifying some data in a dummy test against a zero vector, using a Support Vector Machine (SVM), as follows:
kernel = 'linear'; C = 1;
class1 = double(data(labels==1,:));
class2 = zeros([size(class1,1),size(class1,2)]);
data = [class1;class2];
theclass = [ones(size(class1,1),1); -1*ones(size(class2,1),1)];
%Train the SVM Classifier
cl = fitcsvm(data,theclass,'KernelFunction',kernel,...
'BoxConstraint',C,'ClassNames',[-1,1]);
% Cross-validation of the trained SVM
CVSVMModel = crossval(cl)
Where can I retrieve the performance of these classifications, for instance the classification accuracy, from crossval?
Edit: I am also wondering how this kind of cross-validation works, since it is applied to an already fully trained SVM. Does it take the full dataset, partition it into (e.g.) 10 folds and train new classifiers? Or does it only predict on the 10 test sets?
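For reference, my current guess (which may well be wrong) is that the cross-validated error can be read off the partitioned model with kfoldLoss, and the accuracy would then be one minus that error:
% My guess, not verified: kfoldLoss should return the cross-validated
% misclassification rate, so accuracy = 1 - error.
cvError = kfoldLoss(CVSVMModel);   % mean misclassification rate over the 10 folds
cvAccuracy = 1 - cvError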
Related
In MATLAB, we train a random forest by using the TreeBagger() method. One of the parameters of this method is the number of trees. I am using random forests for a classification approach. How can I determine the number of trees of the random forest?
If you have trained this model yourself, you should know the number of trees used in the model, because it must be set as an input to TreeBagger().
Anyway, for a learned model such as RFmodel, you can call compact(RFmodel) and read the NTrees property to determine the number of trees.
This is a regression example based on the MATLAB documentation:
load imports-85;
Y = X(:,1);
X = X(:,2:end);
isCat = [zeros(15,1); ones(size(X,2)-15,1)]; % categorical variable flag
rng(1945,'twister')
UnknownNumberofTrees = 100;
RFmodel = TreeBagger(UnknownNumberofTrees, X, Y, 'Method', 'R', 'OOBPred', 'On', ...
    'Cat', find(isCat == 1), 'MinLeaf', 5);
RFmodelObject = compact(RFmodel);
RFmodelObject.NTrees
% ans =
%     100
Keeping all parameters constant, I get different Mean Absolute Percentage Errors (MAPE) on my test data when I retrain the neural network. Why is this? Aren't all components of the neural network training process deterministic? Sometimes I see a difference of up to 1% between successive trainings.
The training code is below
netFeb = newfit(trainX', trainY', networkConfigFeb);
netFeb.performFcn = 'mae';
netFeb = trainlm(netFeb, trainX', trainY');
% Index for the testing Data
startingInd = find(trainInd == 0, 1, 'first');
endingInd = startingInd + daysInMonth('Feb') - 1 ;
% Testing Data
testX = X(startingInd:endingInd,:);
testY = dailyPeakLoad(startingInd:endingInd,:);
actualLoadFeb = testY;
% Calculate the Forecast Load and the Mean Absolute Percentage Error
forecastLoadFeb = sim(netFeb, testX')';
errFeb = testY - forecastLoadFeb;
errpct = abs(errFeb)./testY*100;
MAPEFeb = mean(errpct(~isinf(errpct)));
As A. Donda hinted, since neural networks initialize their weights randomly, they generate a different network after every training run and will thus give you different performance. While the training process itself is deterministic, the initial weight values are not! You may end up in different local minima as a result, or stop in different places.
If you wish to see why, take a look at Why should weights of Neural Networks be initialized to random numbers?
Edit 1:
Notes
Since the user is defining the testing/training data manually, there is no randomization of the training data sets selected.
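If you want reproducible runs, a minimal sketch (assuming the weight initialization is the only remaining source of randomness) is to fix the seed of the global random number generator before the network is created:
% Sketch only: fixing the seed should make the random weight initialization,
% and therefore the trained network, repeatable across runs.
rng(0, 'twister');                                    % fix the global RNG
netFeb = newfit(trainX', trainY', networkConfigFeb);  % weights now initialized deterministically
netFeb.performFcn = 'mae';
netFeb = trainlm(netFeb, trainX', trainY');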
I'm new to machine learning (and to Stack Overflow as well) and I want to do some classification tasks. I performed two-group classifications on my data set (field of speech acoustics) with LIBSVM and MATLAB's Pattern Recognition Tool from the Neural Network Toolbox to create a simple network with one hidden layer. In the hope of higher classification results I want to try Deep Neural Networks, and I found this code: http://www.mathworks.com/matlabcentral/fileexchange/42853-deep-neural-network
I have some difficulty understanding it.
My data consists of 127 samples of 19 parameters, so my input number is 19. I want to classify them into two groups, 0 and 1, so my output number is 1. The values in my data set are normalized between 0 and 1.
My code is the following:
clear all
clc
addpath('..');
load('data.mat')
inputdata = inputs;
outputdata = outputs;
datanum = 127;
outputnum = 1;
hiddennum = 3;
inputnum = 19;
% rbm = randRBM(inputnum, outputnum);
% rbm = pretrainRBM( rbm, inputdata );
dbn = randDBN([inputnum, hiddennum, outputnum]);
dbn = pretrainDBN( dbn, inputdata );
dbn = SetLinearMapping( dbn, inputdata, outputdata );
dbn = trainDBN( dbn, inputdata, outputdata );
estimate = v2h( dbn, inputdata )
[rmse AveErrNum] = CalcRmse(dbn, inputdata, outputdata)
The code runs. The rmse is 0.4183 and the AveErrNum is 0.1969. What I need is the classification accuracy between my targets (stored in outputdata) and the network's predictions (Accuracy = data classified correctly / all data).
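I would expect the computation to look roughly like the snippet below (binarizing the raw outputs with a 0.5 threshold is my own assumption, I don't know whether the toolbox intends it differently):
% My assumption: threshold the raw DBN output at 0.5 to get class predictions.
predictions = double(estimate > 0.5);                          % binarized network output
accuracy = sum(predictions == outputdata) / numel(outputdata)  % correctly classified / all data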
Where do I find the network's predictions after binarization?
Do I use the right type of network for my classification?
Don't I need to divide my data into Training, Validation and Testing samples (like in the case of a simple neural network with one hidden layer)?
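(If a manual split is needed, I assume something simple like the following would do for a train/test split, though I don't know whether the DBN code is meant to handle this itself:)
% My assumption: rows are samples, so split by shuffled row index (80/20 here).
rng(1);                                      % reproducible shuffle
n = size(inputdata, 1);                      % 127 samples
idx = randperm(n);
nTrain = round(0.8 * n);
trainX = inputdata(idx(1:nTrain), :);        trainY = outputdata(idx(1:nTrain), :);
testX  = inputdata(idx(nTrain+1:end), :);    testY = outputdata(idx(nTrain+1:end), :);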
Thanks in advance for any help!
Say I create a neural network to separate classes:
X1; %Some data in Class 1 100x2
X2; %Some data in Class 2 100x2
classInput = [X1;X2];
negative = zeros(N, 1);
positive = ones(N,1);
classTarget = [positive negative; negative positive];
net = feedforwardnet(20);
net = configure(net, classInput, classTarget);
net = train(net, classInput, classTarget);
%output of training data
output = net(classInput);
I can plot the classes and they are correctly separated:
figure();
hold on
style = {'ro' 'bx'};
for i=1:(2*N)
plot(classInput(i,1),classInput(i,2), style{round(output(i,1))+1});
end
However, how can I apply the network that has just been trained to unseen data? There must be a model generated by the network that can be applied to new data?
EDIT: Using sim:
Once the network is trained, if I use sim on the training data:
[Z,Xf,Af] = sim(net,classInput);
The result is as expected. But this only works if the input is of the same size. If, for example, I want to evaluate an individual data point:
[Z1,Xf,Af] = sim(net,[1,2]);
size(Z) == size(Z1), but this clearly doesn't make sense. Surely I can evaluate a single data point?
I'm the OP.
I had assumed that the rows of the input matrices were the data samples and the columns were the "categories", but it is the other way around. Transposing the matrices before passing them to the train() function fixes this.
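For anyone hitting the same problem, a minimal sketch of what the corrected calls look like (same variable names as in my snippet above; the single test point is only an example):
% Samples must be columns and features rows, so transpose before training.
net = feedforwardnet(20);
net = train(net, classInput', classTarget');   % 2 x 2N inputs, 2 x 2N targets
% A single unseen point is then a 2 x 1 column vector:
newPoint = [1; 2];                             % example point
Z1 = net(newPoint);                            % or: Z1 = sim(net, newPoint)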
I am trying to implement a Naive Bayes classifier using a dataset published by the UCI machine learning team. I am new to machine learning and trying to understand techniques to use for my work-related problems, so I thought it's better to get the theory understood first.
I am using the Pima dataset (Link to Data - UCI-ML), and my goal is to build a Naive Bayes univariate Gaussian classifier for the K-class problem (the data is only there for K=2). I have split the data and calculated the mean, standard deviation and prior for each class, but after this I am stuck because I am not sure what I should do next, or how. I have a feeling that I should be calculating the posterior probability.
Here is my code. I am using percent as a vector because I want to see the behavior as I increase the training data size within the 80:20 split. Basically, if you pass [10 20 30 40] it will take those percentages of the 80% training portion, i.e. first use 10% of the 80% as training data.
function [classMean] = naivebayes(file, iter, percent)
dm = load(file);
for i = 1:iter
    idx = randperm(size(dm.data,1));
    % Use the same idx for data and labels
    shuffledMatrix_data = dm.data(idx,:);
    shuffledMatrix_label = dm.labels(idx,:);
    percent_data_80 = round(0.8 * size(shuffledMatrix_data,1));
    % Doing the 80-20 split
    train = shuffledMatrix_data(1:percent_data_80,:);
    test = shuffledMatrix_data(percent_data_80+1:end,:);
    train_labels = shuffledMatrix_label(1:percent_data_80,:);
    test_labels = shuffledMatrix_label(percent_data_80+1:end,:);
    % Loop over the array of percents
    for pRows = 1:length(percent)
        percentOfRows = round((percent(pRows)/100) * size(train,1));
        new_train = train(1:percentOfRows,:);
        new_train_label = train_labels(1:percentOfRows);
        % Get the unique labels present in the training subset
        classes = unique(new_train_label);
        numClasses = numel(classes);
        classMean  = zeros(numClasses, size(new_train,2));
        classStd   = zeros(numClasses, size(new_train,2));
        classPrior = zeros(numClasses, 1);
        for kclass = 1:numClasses
            inClass = (new_train_label == classes(kclass));
            classMean(kclass,:) = mean(new_train(inClass,:));
            classStd(kclass,:)  = std(new_train(inClass,:));
            classPrior(kclass)  = sum(inClass) / size(new_train,1);
        end
    end
end
end
First, compute the prior probability of every class label based on frequency counts. Then, for a given sample and a given class in your data set, compute the conditional probability of every feature. After that, multiply the conditional probabilities of all features in the sample together and by the prior probability of the considered class label. Finally, compare the resulting values across all class labels and choose the label of the class with the maximum value (the Bayes classification rule).
For computing the conditional probability of a feature, you can simply use the normal probability density function with the per-class mean and standard deviation you already computed.
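A rough sketch of scoring one test sample, assuming classMean, classStd and classPrior hold the per-class statistics computed on the training data as in the question's inner loop (working in log space to avoid underflow):
% x is one test sample (1 x nFeatures); classes as returned by unique() above.
logPost = zeros(numClasses, 1);
for kclass = 1:numClasses
    likelihoods = normpdf(x, classMean(kclass,:), classStd(kclass,:));  % per-feature Gaussians
    logPost(kclass) = sum(log(likelihoods)) + log(classPrior(kclass));  % naive independence assumption
end
[~, bestIdx] = max(logPost);          % Bayes rule: pick the most probable class
predictedLabel = classes(bestIdx)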