KNN Matlab Train Test Cross-validation - matlab

So I want to use the data that I defined below (it has two labels) with KNN for training and testing, and also cross-validation. I could not find useful MATLAB tutorials, so I would appreciate it if you could help me.
Imagine I have
Data=rand(2000,2);
Labels=[ones(1000,1);-1*ones(1000,1)];
I want to use KNN and have:
50% of the data for training
25% cross-validation
25% testing

The data you gave is not such a good data set, since there is no separation between the two classes. You should use something like
Data = [rand(1000,2)+delta;rand(1000,2)-delta];
The larger the delta, the easier it is to classify.
The idea behind kNN is that you don't need any training.
Suppose you have a dataset with N labeled values. Now suppose you have an entry which you wish to classify.
If you consider the 1-NN classifier, you calculate the distance between the input and each of the N labeled training examples. The input is classified with the label of the example at the shortest distance.
In the k-NN classifier, you look at the labels of the k examples with the shortest distances. The class with the largest number of nearest neighbours wins.
In MATLAB you can use either knnsearch to find the nearest k indices, or just use knnclassify (replaced by fitcknn in newer releases) to get the label directly.
Here is an example using knnsearch:
delta = 0.3;
N1 = 50;
N2 = 50;
Data1 = rand(1000,2)+delta;
Data2 = rand(1000,2)-delta;
train = [Data1(1:N1,:);Data2(1:N2,:)]; % create a training set
labels = [ones(N1,1);-1*ones(N2,1)]; % create labels for the training
plot(train(1:N1,1),train(1:N1,2),'xb',train(N1+1:end,1),train(N1+1:end,2),'or')
k = 7; % use an odd k to avoid ties in the vote
idx = knnsearch(train,Data1(N1+1:end,:),'K',k); % classify the rest of Data1
res1 = 0;
for i=1:size(idx,1)
    if sum(labels(idx(i,:))) < 0
        res1 = res1 + 0; % wrong answer
    else
        res1 = res1 + 1; % correct answer
    end
end
idx2 = knnsearch(train,Data2(N2+1:end,:),'K',k); % classify the rest of Data2
res2 = 0;
for i=1:size(idx2,1)
    if sum(labels(idx2(i,:))) > 0
        res2 = res2 + 0; % wrong answer
    else
        res2 = res2 + 1; % correct answer
    end
end
corr = res1+res2;
tot = size(idx2,1)+size(idx,1);
fprintf('Classified %d right out of %d. %.2f%% correct\n',corr,tot,corr / tot * 100)
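For the 50/25/25 split the question asks about, here is a minimal sketch (my own illustration, not part of the original answer) using fitcknn from the Statistics and Machine Learning Toolbox, with the validation set used to pick k:
Data   = [rand(1000,2)+0.3; rand(1000,2)-0.3];
Labels = [ones(1000,1); -1*ones(1000,1)];
N = size(Data,1);
p = randperm(N);                               % shuffle before splitting
iTrain = p(1:round(0.50*N));                   % 50% training
iVal   = p(round(0.50*N)+1:round(0.75*N));     % 25% validation
iTest  = p(round(0.75*N)+1:end);               % 25% testing
bestK = 1; bestAcc = 0;
for k = 1:2:15                                 % choose k on the validation set
    mdl = fitcknn(Data(iTrain,:), Labels(iTrain), 'NumNeighbors', k);
    acc = mean(predict(mdl, Data(iVal,:)) == Labels(iVal));
    if acc > bestAcc, bestAcc = acc; bestK = k; end
end
mdl     = fitcknn(Data(iTrain,:), Labels(iTrain), 'NumNeighbors', bestK);
testAcc = mean(predict(mdl, Data(iTest,:)) == Labels(iTest));
fprintf('best k = %d, test accuracy = %.2f%%\n', bestK, 100*testAcc);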

Related

Multivariate Linear Regression prediction in Matlab

I am trying to predict the energy output (y), based on two predictors (X).
I have a total sample of 7034 samples (Xtot and ytot), corresponding to nearly 73 days of records.
I selected a week period within the data.
Then, I used fitlm to create the MLR model.
I proceeded to the prediction.
Is this right? Is this the way it should be used to obtain a 48-steps-ahead prediction?
Thank you!
Xtot = dadosPVPREV(2:3,:);%predictors
ytot = dadosPVPREV(1,:);%variable to be predicted
Xtot = Xtot';
ytot = ytot';
X = Xtot(1:720,:);%period into consideration - predictors
y = ytot(1:720,:);%period into consideration - variable to be predicted
lmModel = fitlm(X, y, 'linear', 'RobustOpts', 'on'); %MLR fit
Xnew = Xtot(720:769,:); %new predictors of the y
ypred = predict(lmModel, Xnew); %predicted values of y
yreal = ytot(720:769); %real values of the variable to be predicted
RMSE = sqrt(mean((yreal-ypred).^2)); %calculation of the error between the predicted and real values
figure; plot(ypred); hold on; plot(yreal)
I see that over the past few days you have been struggling to train a prediction model. The following is an example of training such a model using linear regression. In this example, the values of the previous few steps are used to predict 5 steps ahead. The Mackey-Glass function is used as a data set to train the model.
close all; clc; clear variables;
load mgdata.dat; % importing Mackey-Glass dataset
T = mgdata(:, 1); % time steps
X1 = mgdata(:, 2); % 1st predictor
X2 = flipud(mgdata(:, 2)); % 2nd predictor
Y = ((sin(X1).^2).*(cos(X2).^2)).^.5; % response
to_x = [-21 -13 -8 -5 -3 -2 -1 0]; % time offsets in the past, used for predictors
to_y = +3; % time offset in the future, used for response
T_trn = ((max(-to_x)+1):700)'; % time slice used to train model
i_x_trn = bsxfun(@plus, T_trn, to_x); % indices of steps used to construct train data
X_trn = [X1(i_x_trn) X2(i_x_trn)]; % train data set
Y_trn = Y(T_trn+to_y); % train responses
T_tst = (701:(max(T)-to_y))'; % time slice used to test model
i_x_tst = bsxfun(@plus, T_tst, to_x); % indices of steps used to construct test data
X_tst = [X1(i_x_tst) X2(i_x_tst)]; % test data set
Y_tst = Y(T_tst+to_y); % test responses
mdl = fitlm(X_trn, Y_trn) % training model
Y2_trn = feval(mdl, X_trn); % evaluating train responses
Y2_tst = feval(mdl, X_tst); % evaluating test responses
e_trn = mse(Y_trn, Y2_trn) % train error
e_tst = mse(Y_tst, Y2_tst) % test error
Also, using a data transformation technique to generate new features can, in some models, reduce the prediction error:
featGen = @(x) [x x.^2 sin(x) exp(x) log(x)]; % feature generator
mdl = fitlm(featGen(X_trn), Y_trn)
Y2_trn = feval(mdl, featGen(X_trn)); % evaluating train responses
Y2_tst = feval(mdl, featGen(X_tst)); % evaluating test responses
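To check whether the generated features actually help, the train and test errors of the transformed model can be recomputed and compared with the earlier values (a small follow-up sketch, not part of the original answer; a plain mean-squared-error expression is used here):
e_trn2 = mean((Y_trn - Y2_trn).^2) % train error with generated features
e_tst2 = mean((Y_tst - Y2_tst).^2) % test error with generated features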

Is my Matlab code correct for applying PCA to data?

I have the following code for calculating PCA in MATLAB:
train_out = train';
test_out = test';
% subtract off the mean for each dimension
mn = mean(train_out,2);
train_out = train_out - repmat(mn,1,train_size);
test_out = test_out - repmat(mn,1,test_size);
% calculate the covariance matrix
covariance = 1 / (train_size-1) * train_out * train_out';
% find the eigenvectors and eigenvalues
[PC, V] = eig(covariance);
% extract diagonal of matrix as vector
V = diag(V);
% sort the variances in decreasing order
[junk, rindices] = sort(-1*V);
V = V(rindices);
PC = PC(:,rindices);
% project the original data set
out = PC' * train_out;
train_out = out';
out = PC' * test_out;
test_out = out';
The train and test matrices have observations in rows and feature variables in columns. When I perform classification on the original data (without PCA) I get much better results than with PCA, even when I keep all dimensions. When I tried doing PCA directly on the whole dataset (train + test), I noticed that the correlations between these new principal components and the previous ones are either near 1 or near -1, which I find strange. I am probably doing something wrong but just can't figure it out.
The code is correct; however, using the princomp function may be easier (note that in newer MATLAB releases princomp has been replaced by pca):
train_out=train; % save original data
test_out=test;
mn = mean(train_out);
train_out = bsxfun(@minus,train_out,mn); % subtract mean
test_out = bsxfun(@minus,test_out,mn);
[coefs,scores,variances] = princomp(train_out,'econ'); % PCA
pervar = cumsum(variances) / sum(variances);
dims = max(find(pervar < var_frac)); % var_frac - e.g. 0.99 - fraction of variance explained
train_out = train_out*coefs(:,1:dims); % dims - keep this many dimensions
test_out = test_out*coefs(:,1:dims); % result is in train_out and test_out
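For newer MATLAB versions, here is a minimal equivalent sketch using the pca function instead of princomp (my own illustration; it reuses the train/test variables from above and assumes a value for var_frac):
var_frac = 0.99;                                      % fraction of variance to keep (assumed)
mn = mean(train);
train_out = bsxfun(@minus, train, mn);                % center with the training mean
test_out  = bsxfun(@minus, test,  mn);
[coefs, scores, variances] = pca(train_out);          % PCA on the training data
pervar = cumsum(variances) / sum(variances);
dims = find(pervar >= var_frac, 1, 'first');          % keep enough PCs to explain var_frac
train_out = train_out * coefs(:,1:dims);
test_out  = test_out  * coefs(:,1:dims);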

Matrices kernelpca

We are working on a project and trying to get some results with KPCA.
We have a dataset (handwritten digits) and have taken the first 200 digits of each number, so our complete traindata matrix is 2000x784 (784 being the dimensions).
When we do KPCA we get a matrix with the new low-dimensionality dataset, e.g. 2000x100. However, we don't understand the result. Shouldn't we get other matrices, such as we do when we do SVD for PCA? The code we use for KPCA is the following:
function data_out = kernelpca(data_in,num_dim)
%% Checking to ensure output dimensions are lower than the input dimension.
if num_dim > size(data_in,1)
    fprintf('\nDimensions of output data have to be lower than the dimensions of input data\n');
    fprintf('Closing program\n');
    return
end
%% Using the Gaussian kernel to construct the kernel matrix K
% K(x,y) = exp(-(x-y)^2/sigma^2)
% K is a symmetric kernel
K = zeros(size(data_in,2),size(data_in,2));
for row = 1:size(data_in,2)
    for col = 1:row
        temp = sum(((data_in(:,row) - data_in(:,col)).^2));
        K(row,col) = exp(-temp); % sigma = 1
    end
end
K = K + K';
% Dividing the diagonal element by 2 since it has been added to itself
for row = 1:size(data_in,2)
    K(row,row) = K(row,row)/2;
end
% We know that for PCA the data has to be centered. Even if the input data
% set 'X' is centered, there is no guarantee that the data, when mapped
% into the feature space [phi(x)], is also centered. Since we never actually
% work in the feature space we cannot center the data there. To include this
% correction a pseudo-centering is done using the kernel.
one_mat = ones(size(K))./size(data_in,2); % NxN matrix with every entry 1/N
K_center = K - one_mat*K - K*one_mat + one_mat*K*one_mat;
clear K
%% Obtaining the low dimensional projection
% The following equation needs to be satisfied for K
% N*lamda*K*alpha = K*alpha
% Thus the lamdas have to be normalized by the number of points
opts.issym = 1;
opts.disp = 0;
opts.isreal = 1;
neigs = 30;
[eigvec, eigval] = eigs(K_center,[],neigs,'lm',opts);
eig_val = eigval./size(data_in,2); % lamda = eigenvalue of K divided by N
% Again 1 = lamda*(alpha.alpha)
% Here '.' indicates the dot product
for col = 1:size(eigvec,2)
    eigvec(:,col) = eigvec(:,col)./(sqrt(eig_val(col,col)));
end
[~, index] = sort(diag(eig_val),'descend');
eigvec = eigvec(:,index);
%% Projecting the data in lower dimensions
data_out = zeros(num_dim,size(data_in,2));
for count = 1:num_dim
    data_out(count,:) = eigvec(:,count)'*K_center';
end
We have read lots of papers but still cannot get the hang of KPCA's logic!
Any help would be appreciated!
PCA Algorithm:
1. Take the PCA data samples x_1, ..., x_N.
2. Compute the mean: mu = (1/N) * sum_i x_i.
3. Compute the covariance: C = (1/N) * sum_i (x_i - mu)*(x_i - mu)'.
4. Solve the eigenvalue problem C*v = lambda*v, where
   C: covariance matrix,
   v: eigenvectors of the covariance matrix,
   lambda: eigenvalues of the covariance matrix.
With the first n eigenvectors you reduce the dimensionality of your data to n dimensions. You can use this code for the PCA; it has an integrated example and it is simple.
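As an illustration of those steps, here is a small self-contained sketch (my own example data and variable names, not the linked code):
X  = randn(100, 5);                  % N = 100 samples in rows, 5 features
mu = mean(X, 1);                     % mean of each feature
Xc = bsxfun(@minus, X, mu);          % centered data
C  = (Xc' * Xc) / (size(X,1) - 1);   % covariance matrix
[V, D] = eig(C);                     % eigenvectors / eigenvalues of C
[lambda, order] = sort(diag(D), 'descend'); % sort by decreasing eigenvalue
V = V(:, order);
n = 2;                               % keep the first n eigenvectors
X_reduced = Xc * V(:, 1:n);          % data reduced to n dimensions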
KPCA Algorithm:
We choose a kernel function; in your code this is specified by:
K(x,y) = exp(-(x-y)^2/sigma^2)
in order to represent your data in a higher-dimensional space, hoping that in this space your data will be better represented for further purposes like classification or clustering, tasks which could be harder to solve in the initial feature space. This is also known as the "kernel trick".
[Step 1] Construct the Gram matrix
K = zeros(size(data_in,2),size(data_in,2));
for row = 1:size(data_in,2)
    for col = 1:row
        temp = sum(((data_in(:,row) - data_in(:,col)).^2));
        K(row,col) = exp(-temp); % sigma = 1
    end
end
K = K + K';
% Dividing the diagonal element by 2 since it has been added to itself
for row = 1:size(data_in,2)
    K(row,row) = K(row,row)/2;
end
Here, because the Gram matrix is symmetric, only half of the values are computed, and the final result is obtained by adding the Gram matrix computed so far to its transpose. Finally, the diagonal is divided by 2, as the comments mention.
[Step 2] Normalize (center) the kernel matrix
This is done by this part of your code:
K_center = K - one_mat*K - K*one_mat + one_mat*K*one_mat;
As the comments mention, a pseudo-centering procedure must be done. For an idea about the proof, look here.
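Written out, the centering that this line implements is the standard one (N being the number of samples):
K_centered = K - 1_N*K - K*1_N + 1_N*K*1_N, where 1_N is the N-by-N matrix with every entry equal to 1/N.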
[Step3] Solve the eigenvalue problem
This part of the code is responsible for this task:
%% Obtaining the low dimensional projection
% The following equation needs to be satisfied for K
% N*lamda*K*alpha = K*alpha
% Thus the lamdas have to be normalized by the number of points
opts.issym = 1;
opts.disp = 0;
opts.isreal = 1;
neigs = 30;
[eigvec, eigval] = eigs(K_center,[],neigs,'lm',opts);
eig_val = eigval./size(data_in,2); % lamda = eigenvalue of K divided by N
% Again 1 = lamda*(alpha.alpha)
% Here '.' indicates the dot product
for col = 1:size(eigvec,2)
    eigvec(:,col) = eigvec(:,col)./(sqrt(eig_val(col,col)));
end
[~, index] = sort(diag(eig_val),'descend');
eigvec = eigvec(:,index);
[Step 4] Change the representation of each data point
This part of the code is responsible for this task:
%% Projecting the data in lower dimensions
data_out = zeros(num_dim,size(data_in,2));
for count = 1:num_dim
    data_out(count,:) = eigvec(:,count)'*K_center';
end
See the details here.
PS: I encourage you to use the code written by this author, which contains intuitive examples.
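As a hypothetical usage sketch for the digits example (my own illustration): the function above treats columns as samples, so the 2000x784 matrix has to be transposed before the call; also note that sigma is hard-coded to 1 and num_dim cannot exceed the hard-coded neigs = 30:
traindata = rand(2000, 784);           % placeholder for the real digits matrix
low_dim   = kernelpca(traindata', 20); % returns a 20 x 2000 matrix of projections
low_dim   = low_dim';                  % 2000 x 20: one row of new features per digit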

10 fold cross-validation in one-against-all SVM (using LibSVM)

I want to do a 10-fold cross-validation in my one-against-all support vector machine classification in MATLAB.
I tried to somehow mix these two related answers:
Multi-class classification in libsvm
Example of 10-fold SVM classification in MATLAB
But as I'm new to MATLAB and its syntax, I haven't managed to make it work so far.
On the other hand, I saw just the following few lines about cross validation in the LibSVM README files and I couldn't find any related example there:
option -v randomly splits the data into n parts and calculates cross
validation accuracy/mean squared error on them.
See libsvm FAQ for the meaning of outputs.
Could anyone provide me with an example of 10-fold cross-validation and one-against-all classification?
Mainly there are two reasons we do cross-validation:
as a testing method which gives us a nearly unbiased estimate of the generalization power of our model (by avoiding overfitting)
as a way of model selection (e.g. finding the best C and gamma parameters over the training data; see this post for an example)
For the first case, which is the one we are interested in, the process involves training k models (one per fold) and then training one final model over the entire training set.
We report the average accuracy over the k folds.
Now, since we are using the one-vs-all approach to handle the multi-class problem, each model consists of N support vector machines (one for each class).
The following are wrapper functions implementing the one-vs-all approach:
function mdl = libsvmtrain_ova(y, X, opts)
if nargin < 3, opts = ''; end
%# classes
labels = unique(y);
numLabels = numel(labels);
%# train one-against-all models
models = cell(numLabels,1);
for k=1:numLabels
    models{k} = libsvmtrain(double(y==labels(k)), X, strcat(opts,' -b 1 -q'));
end
mdl = struct('models',{models}, 'labels',labels);
end

function [pred,acc,prob] = libsvmpredict_ova(y, X, mdl)
%# classes
labels = mdl.labels;
numLabels = numel(labels);
%# get probability estimates of test instances using each 1-vs-all model
prob = zeros(size(X,1), numLabels);
for k=1:numLabels
    [~,~,p] = libsvmpredict(double(y==labels(k)), X, mdl.models{k}, '-b 1 -q');
    prob(:,k) = p(:, mdl.models{k}.Label==1);
end
%# predict the class with the highest probability
[~,pred] = max(prob, [], 2);
%# compute classification accuracy
acc = mean(pred == y);
end
And here are functions to support cross-validation:
function acc = libsvmcrossval_ova(y, X, opts, nfold, indices)
if nargin < 3, opts = ''; end
if nargin < 4, nfold = 10; end
if nargin < 5, indices = crossvalidation(y, nfold); end
%# N-fold cross-validation testing
acc = zeros(nfold,1);
for i=1:nfold
    testIdx = (indices == i); trainIdx = ~testIdx;
    mdl = libsvmtrain_ova(y(trainIdx), X(trainIdx,:), opts);
    [~,acc(i)] = libsvmpredict_ova(y(testIdx), X(testIdx,:), mdl);
end
acc = mean(acc); %# average accuracy
end

function indices = crossvalidation(y, nfold)
%# stratified n-fold cross-validation
%#indices = crossvalind('Kfold', y, nfold); %# Bioinformatics toolbox
cv = cvpartition(y, 'kfold',nfold); %# Statistics toolbox
indices = zeros(size(y));
for i=1:nfold
    indices(cv.test(i)) = i;
end
end
Finally, here is a simple demo to illustrate the usage:
%# load dataset
S = load('fisheriris');
data = zscore(S.meas);
labels = grp2idx(S.species);
%# cross-validate using one-vs-all approach
opts = '-s 0 -t 2 -c 1 -g 0.25'; %# libsvm training options
nfold = 10;
acc = libsvmcrossval_ova(labels, data, opts, nfold);
fprintf('Cross Validation Accuracy = %.4f%%\n', 100*mean(acc));
%# compute final model over the entire dataset
mdl = libsvmtrain_ova(labels, data, opts);
Compare that against the one-vs-one approach which is used by default by libsvm:
acc = libsvmtrain(labels, data, sprintf('%s -v %d -q',opts,nfold));
model = libsvmtrain(labels, data, strcat(opts,' -q'));
It may be confusing that one of the two linked questions is not about LIBSVM. You should try to adapt that answer and ignore the other one.
You should select the folds, and do the rest exactly as in the linked question. Assume the data has been loaded into data and the labels into labels:
n = size(data,1);
ns = floor(n/10);
numLabels = numel(unique(labels)); % number of classes (defined here since the loop below uses it)
for fold = 1:10
    if fold == 1
        testindices = ((fold-1)*ns+1):fold*ns;
        trainindices = fold*ns+1:n;
    elseif fold == 10
        testindices = ((fold-1)*ns+1):n;
        trainindices = 1:(fold-1)*ns;
    else
        testindices = ((fold-1)*ns+1):fold*ns;
        trainindices = [1:(fold-1)*ns, fold*ns+1:n];
    end
    % use testindices only for testing and trainindices only for training
    trainLabel = labels(trainindices);
    trainData = data(trainindices,:);
    testLabel = labels(testindices);
    testData = data(testindices,:);
    %# train one-against-all models
    model = cell(numLabels,1);
    for k=1:numLabels
        model{k} = svmtrain(double(trainLabel==k), trainData, '-c 1 -g 0.2 -b 1');
    end
    %# get probability estimates of test instances using each model
    prob = zeros(size(testData,1),numLabels);
    for k=1:numLabels
        [~,~,p] = svmpredict(double(testLabel==k), testData, model{k}, '-b 1');
        prob(:,k) = p(:,model{k}.Label==1); %# probability of class==k
    end
    %# predict the class with the highest probability
    [~,pred] = max(prob,[],2);
    acc = sum(pred == testLabel) ./ numel(testLabel) %# accuracy
    C = confusionmat(testLabel, pred) %# confusion matrix
end

Monte Carlo integration on an R^5 hypercube in MATLAB

I need to write MATLAB code that will integrate over an R^5 hypercube using Monte Carlo. I have a basic algorithm that works when I have a generic function. But the function I need to integrate is:
∫dA
A is an element of R^5.
If I had ∫f(x)dA then I think my algorithm would work.
Here is the algorithm:
% Written by Jerome W Lindsey III
clear;
n = 10000;
% Make a matrix of the same dimension
% as the problem. Each row is a dimension
A = rand(5,n);
% Vector to contain the solution
B = zeros(1,n);
for k = 1:n
    % insert the integrand here
    % I don't know how to enter a function {f(1,n), f(2,n), … f(5,n)} that
    % will give me the proper solution
    % I threw in a constant that will spit out 1/5!,
    % because that is the correct solution.
    B(k) = 1 / (2 * 3 * 4 * 5);
end
mean(B)
In any case, I think I understand what the intent here is, although it does seem like somewhat of a contrived exercise. Consider the problem of trying to find the area of a circle via MC, as discussed here. Here samples are being drawn from a unit square, and the function takes on the value 1 inside the circle and 0 outside. To find the volume of a cube in R^5, we could sample from something else that contains the cube and use an analogous procedure to compute the desired volume. Hopefully this is enough of a hint to make the rest of the implementation straightforward.
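For reference, a minimal sketch of that circle example (my own illustration, not from the linked post): sample uniformly from the unit square and average an indicator that is 1 inside the circle inscribed in the square:
n = 1e6;
P = rand(n, 2);                          % uniform samples in the unit square
inside = sum((P - 0.5).^2, 2) <= 0.25;   % 1 inside the circle of radius 0.5 centred at (0.5,0.5)
areaEstimate = mean(inside);             % should approach pi/4 ~ 0.7854
fprintf('Estimated area %.4f (exact %.4f)\n', areaEstimate, pi/4);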
I'm guessing here a bit, since the numbers you give as the "correct" answer don't match how you state the exercise (the volume of the unit hypercube is 1).
Given that the result should be 1/120, could it be that you are supposed to integrate over the standard simplex inside the hypercube?
Then your function would be clear: f(x) = 1 if sum(x) < 1; 0 otherwise.
%Question 2, problem set 1
% Written by Jerome W Lindsey III
clear;
n = 10000;
% Make a matrix of the same dimension
% as the problem. Each row is a dimension
A = rand(5,n);
% Vector to contain the solution
B = zeros(1,n);
for k = 1:n
    % insert the integrand here
    % this bit of code works as the integrand
    if sum(A(:,k)) < 1
        B(k) = 1;
    end
end
clear k;
clear A;
% Begin error estimation calculations
std_mc = std(B);
clear n;
clear B;
% using the estimated error, calculate a new random
% vector of the correct length
N_new = round(std_mc ^ 2 * 3.291 ^ 2 * 1000000);
A_new = rand(5, N_new);
B_new = zeros(1,N_new);
clear std_mc;
for k = 1:N_new
    if sum(A_new(:,k)) < 1
        B_new(k) = 1;
    end
end
clear k;
clear A_new;
% collect descriptive statistics
M_new = mean(B_new);
std_new = std(B_new);
MC_new_error_999 = std_new * 3.291 / sqrt(N_new); % 3.291 is the z-value for 99.9% confidence
clear N_new;
clear B_new;
clear std_new;
% Display Results
disp('Integral in question #2 is');
disp(M_new);
disp(' ');
disp('Monte Carlo Error');
disp(MC_new_error_999);
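As a quick sanity check (an added note, based on the expected result quoted above): the exact volume of the standard simplex in R^5 is 1/5!, so M_new should come out near that value:
exact = 1 / factorial(5); % = 1/120 = 0.0083...
fprintf('Exact value: %.6f\n', exact);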