Repeated classification accuracies in a loop always the same - matlab

I have fairly simple code for binary classification (see below). When I re-run it in Matlab (simply by pressing the "Run" button again), each run gives me slightly different accuracies for each of the 14 subjects. However, if I loop over the code nrPermute times, every iteration of the loop gives EXACTLY the same accuracy for the respective subject. Why is that? In the first snippet, mean(accuracy) differs between runs, whereas in the second it is identical across iterations. Both snippets are below.
Code where only one 10-fold cross-validation is done for each subject:
%% SVM-Classification
nrFolds = 10;        % number of folds of cross-validation, 10 is standard
kernel = 'linear';   % 'linear', 'rbf' or 'polynomial'
C = 1;
solver = 'L1QP';
cvFolds = crossvalind('Kfold', labels, nrFolds);
for k = 1:14
    for i = 1:nrFolds                 % iterate through each fold
        testIdx = (cvFolds == i);     % indices of test instances
        trainIdx = ~testIdx;          % indices of training instances
        % train the SVM
        cl = fitcsvm(features(trainIdx,:), labels(trainIdx), ...
            'KernelFunction',kernel, 'Standardize',true, ...
            'BoxConstraint',C, 'ClassNames',[0,1], 'Solver',solver);
        [label,scores] = predict(cl, features(testIdx,:));
        eq = sum(label==labels(testIdx));
        accuracy(i) = eq/numel(labels(testIdx));
    end
    crossValAcc(k) = mean(accuracy);
end
Code where each 10-fold cross-validation is repeated nrPermute times:
%% SVM-Classification
nrFolds = 10;        % number of folds of cross-validation, 10 is standard
kernel = 'linear';   % 'linear', 'rbf' or 'polynomial'
C = 1;
solver = 'L1QP';
cvFolds = crossvalind('Kfold', labels, nrFolds);
nrPermute = 5;
for k = 1:14
    for p = 1:nrPermute
        for i = 1:nrFolds                 % iterate through each fold
            testIdx = (cvFolds == i);     % indices of test instances
            trainIdx = ~testIdx;          % indices of training instances
            % train the SVM
            cl = fitcsvm(features(trainIdx,:), labels(trainIdx), ...
                'KernelFunction',kernel, 'Standardize',true, ...
                'BoxConstraint',C, 'ClassNames',[0,1], 'Solver',solver);
            [label,scores] = predict(cl, features(testIdx,:));
            eq = sum(label==labels(testIdx));
            accuracy(i) = eq/numel(labels(testIdx));
        end
        accSubj(p) = mean(accuracy);      % accuracy of each permutation
    end
    crossValAcc(k) = mean(accSubj);
end

In case this is useful for someone else, I figured it out: the call cvFolds = crossvalind('Kfold', labels, nrFolds); has to sit inside the permutation loop, so that the assignment of observations to folds is re-shuffled on every permutation. With crossvalind outside the loop, every permutation reuses exactly the same folds, which is why the accuracies are identical.
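A minimal sketch of the corrected structure (same variables as above); moving crossvalind inside the permutation loop gives every permutation a fresh fold assignment:
nrPermute = 5;
for k = 1:14
    for p = 1:nrPermute
        cvFolds = crossvalind('Kfold', labels, nrFolds); % new random folds per permutation
        for i = 1:nrFolds
            testIdx = (cvFolds == i);
            trainIdx = ~testIdx;
            cl = fitcsvm(features(trainIdx,:), labels(trainIdx), ...
                'KernelFunction',kernel, 'Standardize',true, ...
                'BoxConstraint',C, 'ClassNames',[0,1], 'Solver',solver);
            label = predict(cl, features(testIdx,:));
            accuracy(i) = sum(label==labels(testIdx))/numel(labels(testIdx));
        end
        accSubj(p) = mean(accuracy);  % accuracy of this permutation
    end
    crossValAcc(k) = mean(accSubj);
end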

Related

MATLAB to Scilab conversion: mfile2sci error "File contains no instruction"

I am very new to Scilab, but so far have not been able to find an answer (either here or via google) to my question. I'm sure it's a simple solution, but I'm at a loss. I have a lot of MATLAB scripts I wrote in grad school, but now that I'm out of school, I no longer have access to MATLAB (and can't justify the cost). Scilab looked like the best open alternative. I'm trying to convert my .m files to Scilab compatible versions using mfile2sci, but when running the mfile2sci GUI, I get the error/message shown below. Attached is the original code from the M-file, in case it's relevant.
I have searched Stack Overflow and companion sites, Google, and the Scilab documentation.
The M-file code follows (it's a super basic MATLAB script as part of an old homework question -- I chose it as it's the shortest, most straightforward M-file I had):
Mmax = 15;
N = 20;
T = 2000;
%define upper limit for sparsity of signal
smax = 15;
mNE = zeros(smax,Mmax);
mESR= zeros(smax,Mmax);
for M = 1:Mmax
aNormErr = zeros(smax,1);
aSz = zeros(smax,1);
ESR = zeros(smax,1);
for s=1:smax % for-loop to loop script smax times
normErr = zeros(1,T);
vESR = zeros(1,T);
sz = zeros(1,T);
for t=1:T %for-loop to carry out 2000 trials per s-value
esr = 0;
A = randn(M,N); % generate random MxN matrix
[M,N] = size(A);
An = zeros(M,N); % initialize normalized matrix
for h = 1:size(A,2) % normalize columns of matrix A
V = A(:,h)/norm(A(:,h));
An(:,h) = V;
end
A = An; % replace A with its column-normalized counterpart
c = randperm(N,s); % create random support vector with s entries
x = zeros(N,1); % initialize vector x
for i = 1:size(c,2)
val = (10-1)*rand + 1;% generate interval [1,10]
neg = mod(randi(10),2); % include [-10,-1]
if neg~=0
val = -1*val;
end
x(c(i)) = val; %replace c(i)th value of x with the nonzero value
end
y = A*x; % generate measurement vector (y)
R = y;
S = []; % initialize array to store selected columns of A
indx = []; % vector to store indices of selected columns
coeff = zeros(1,s); % vector to store coefficients of approx.
stop = 10; % init. stop condition
in = 0; % index variable
esr = 0;
xhat = zeros(N,1); % initialize estimated x signal
while (stop>0.5 && size(S,2)<smax)
%MAX = abs(A(:,1)'*R);
maxV = zeros(1,N);
for i = 1:size(A,2)
maxV(i) = abs(A(:,i)'*R);
end
in = find(maxV == max(maxV));
indx = [indx in];
S = [S A(:,in)];
coeff = [coeff R'*S(:,size(S,2))]; % update coefficient vector
for w=1:size(S,2)
r = y - ((R'*S(:,w))*S(:,w)); % update residuals
if norm(r)<norm(R)
index = w;
end
R = r;
stop = norm(R); % update stop condition
end
for j=1:size(S,2) % place coefficients into xhat at correct indices
xhat(indx(j))=coeff(j);
end
nE = norm(x-xhat)/norm(x); % calculate normalized error for this estimate
%esr = 0;
indx = sort(indx);
c = sort(c);
if isequal(indx,c)
esr = esr+1;
end
end
vESR(t) = esr;
sz(t) = size(S,2);
normErr(t) = nE;
end
%avsz = sum(sz)/T;
aSz(s) = sum(sz)/T;
%aESR = sum(vESR)/T;
ESR(s) = sum(vESR)/T;
%avnormErr = sum(normErr)/T; % produce average normalized error for these run
aNormErr(s) = sum(normErr)/T; % add new avnormErr to vector of all av norm errors
end
% just put this here to view the vector
mNE(:,M) = aNormErr;
mESR(:,M) = ESR;
end
mNE  % reshape(mNE,[],Mmax)
mESR % reshape(mESR,[],Mmax)
figure
dimx = [1 Mmax];
dimy = [1 smax];
imagesc(dimx,dimy,mESR)
colormap gray
strESR = sprintf('Average ESR, N=%d',N);
title(strESR);
xlabel('M');
ylabel('s');
strNE = sprintf('Average Normed Error, N=%d',N);
figure
imagesc(dimx,dimy,mNE)
colormap gray
title(strNE)
xlabel('M');
ylabel('s');
The command used (and results) follow:
--> mfile2sci
ans =
[]
****** Beginning of mfile2sci() session ******
File to convert: C:/Users/User/Downloads/WTF_new.m
Result file path: C:/Users/User/DOWNLO~1/
Recursive mode: OFF
Only double values used in M-file: NO
Verbose mode: 3
Generate formatted code: NO
M-file reading...
M-file reading: Done
Syntax modification...
Syntax modification: Done
File contains no instruction, no translation made...
****** End of mfile2sci() session ******
To convert the foo.m file one has to enter
mfile2sci <path>/foo.m
where <path> stands for the path of the directory where foo.m is. The result is written to <path>/foo.sci.
Remove the ```` at the beginning of each line and the conversion will proceed normally. However, don't expect to obtain a working .sci file, as the m2sci converter is (to me) still an experimental tool!
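For illustration, the same conversion can also be run as a function call from the Scilab console; a minimal sketch using the file path from the session above (the second argument, the output directory, is optional):
mfile2sci('C:/Users/User/Downloads/WTF_new.m', 'C:/Users/User/Downloads/');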

parfor doesn't consider information about vectors which are used in it

This is part of my code in Matlab. I tried to make it parallel, but I get the error:
The variable gax in a parfor cannot be classified.
I know why the error occurs: I should tell Matlab that v is an increasing vector which doesn't contain repeated elements. Could anyone help me use this information to parallelize the code?
v=[1,3,6,8];
ggx=5.*ones(15,14);
gax=ones(15,14);
for m=v
    if m > 1
        parfor j=1:m-1
            gax(j,m-1) = ggx(j,m-1);
        end
    end
    if m<nn
        parfor jo=m+1:15
            gax(jo,m) = ggx(jo,m);
        end
    end
end
Optimizing code should be closely tied to its purpose, especially when you use parfor. The code you wrote in the question can be written in a much more efficient way and definitely does not need to be parallelized.
However, I understand that you simplified the problem just to get the idea of how to slice your variables, so here is a fixed version that can run with parfor. But this is surely not the way to write this code:
v = [1,3,6,8];
ggx = 5.*ones(15,14);
gax = ones(15,14);
nn = 5;
for m = v
    if m > 1
        temp_end = m-1;
        temp = ggx(:,temp_end);
        parfor ja = 1:temp_end
            gax(ja,temp_end) = temp(ja);
        end
    end
    if m < nn
        temp = ggx(:,m);
        parfor jo = m+1:15
            gax(jo,m) = temp(jo);
        end
    end
end
A vectorized implementation will look like this:
v = [1,3,6,8];
ggx = 5.*ones(15,14);
gax = ones(15,14);
nn = 5;
m1 = v>1; % first condition with logical indexing
temp = v(m1)-1; % get the values from v
r = ones(1,sum(temp)); % generate a vector of indices
r(cumsum(temp)) = -temp+1; % place the resetting locations
r = cumsum(r); % calculate the indices
r(cumsum(temp)) = temp; % place the ending points
c = repelem(temp,temp); % create an indices vector for the columns
inds1 = sub2ind(size(gax),r,c); % convert the indices to linear
mnn = v<nn; % second condition with logical indexing
temp = v(mnn)+1; % get the values from v
r_max = size(gax,1); % get the height of gax
r_count = r_max-temp+1; % calculate no. of rows per value in v
r = ones(1,sum(r_count)); % generate a vector of indices
r([1 r_count(1:end-1)+1]) = temp; % set the starting indices
r(cumsum(r_count)+1) = -(r_count-temp)+1; % place the resetting locations
r = cumsum(r(1:end-1)); % calculate the indices
c = repelem(temp-1,r_count); % create an indices vector for the columns
inds2 = sub2ind(size(gax),r,c); % convert the indices to linear
gax([inds1 inds2]) = ggx([inds1 inds2]); % assign the relevant values
This is indeed quite complicated, and not always necessary. A good thing to remember, though, is that nested for-loops are much slower than a single loop, so in some cases (depending on the size of the output) this may well be the fastest solution:
for m = v
    if m > 1
        gax(1:m-1,m-1) = ggx(1:m-1,m-1);
    end
    if m < nn
        gax(m+1:15,m) = ggx(m+1:15,m);
    end
end
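As a quick sanity check (a sketch; gax_loop is a throwaway copy introduced here only for the comparison), the simple loop and the vectorized version should fill in exactly the same entries:
gax_loop = ones(15,14);               % fresh copy, filled by the simple loop
for m = v
    if m > 1
        gax_loop(1:m-1,m-1) = ggx(1:m-1,m-1);
    end
    if m < nn
        gax_loop(m+1:15,m) = ggx(m+1:15,m);
    end
end
isequal(gax, gax_loop)                % expected: logical 1 (true)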

I get this code for LARS but the variable seems undefined?

I got this code for LARS, but when I run it, it says X is undefined. I can't understand what X is. Why is there an error?
function [beta, A, mu, C, c, gamma] = lars(X, Y, option, t, standardize)
% Least Angle Regression (LAR) algorithm.
% Ref: Efron et. al. (2004) Least angle regression. Annals of Statistics.
% option = 'lar' implements the vanilla LAR algorithm (default);
% option = 'lasso' solves the lasso path with a modified LAR algorithm.
% t -- a vector of increasing positive real numbers. If given, LARS
% returns the solution at t.
%
% Output:
% A -- a sequence of indices that indicate the order of variable
% beta: history of estimated LARS coefficients;
% mu -- history of estimated mean vector;
% C -- history of maximal current absolute correlations;
% c -- history of current correlations;
% gamma: history of LARS step size.
% Note: history is traced by rows. If t is given, beta is just the
% estimated coefficient vector at the constraint ||beta||_1 = t.
%
% Remarks:
% 1. LARS is originally proposed to estimate a sparse coefficient vector in
% a noisy over-determined linear system. LARS outputs estimates for all
% shrinkage/constraint parameters (homotopy).
%
% 2. LARS is well suited for Basis Pursuit (BP) purposes in the real domain. It
% automatically terminates when the current correlations for the inactive set are
% all zeros. The recovered coefficient vector is the last column of beta
% with the *lasso* option. Hence, this function provides a fast and
% efficient solution for the ell_1 minimization problem.
% Ref: Donoho and Tsaig (2006). Fast solution of ell_1 norm minimization problems when the solution may be sparse.
if nargin < 5, standardize = true; end
if nargin < 4, t = Inf; end
if nargin < 3, option = 'lar'; end
if strcmpi(option, 'lasso'), lasso = 1; else, lasso = 0; end
eps = 1e-10; % Effective zero
[n,p] = size(X);
if standardize,
X = normalize(X);
Y = Y-mean(Y);
end
m = min(p,n-1); % Maximal number of variables in the final active set
T = length(t);
beta = zeros(1,p);
mu = zeros(n,1); % Mean vector
gamma = []; % LARS step lengths
A = [];
Ac = 1:p;
nVars = 0;
signOK = 1;
i = 0;
mu_old = zeros(n,1);
t_prev = 0;
beta_t = zeros(T,p);
ii = 1;
tt = t;
% LARS loop
while nVars < m,
i = i+1;
c = X'*(Y-mu); % Current correlation
C = max(abs(c)); % Maximal current absolute correlation
if C < eps || isempty(t), break; end % Early stopping criteria
if 1 == i, addVar = find(C==abs(c)); end
if signOK,
A = [A,addVar]; % Add one variable to active set
nVars = nVars+1;
end
s_A = sign(c(A));
Ac = setdiff(1:p,A); % Inactive set
nZeros = length(Ac);
X_A = X(:,A);
G_A = X_A'*X_A; % Gram matrix
invG_A = inv(G_A);
L_A = 1/sqrt(s_A'*invG_A*s_A);
w_A = L_A*invG_A*s_A; % Coefficients of equiangular vector u_A
u_A = X_A*w_A; % Equiangular vector
a = X'*u_A; % Angles between x_j and u_A
beta_tmp = zeros(p,1);
gammaTest = zeros(nZeros,2);
if nVars == m,
gamma(i) = C/L_A; % Move to the least squares projection
else
for j = 1:nZeros,
jj = Ac(j);
gammaTest(j,:) = [(C-c(jj))/(L_A-a(jj)), (C+c(jj))/(L_A+a(jj))];
end
[gamma(i) min_i min_j] = minplus(gammaTest);
addVar = unique(Ac(min_i));
end
beta_tmp(A) = beta(i,A)' + gamma(i)*w_A; % Update coefficient estimates
% Check the sign feasibility of lasso
if lasso,
signOK = 1;
gammaTest = -beta(i,A)'./w_A;
[gamma2 min_i min_j] = minplus(gammaTest);
if gamma2 < gamma(i), % The case when sign consistency gets violated
gamma(i) = gamma2;
beta_tmp(A) = beta(i,A)' + gamma(i)*w_A; % Correct the coefficients
beta_tmp(A(unique(min_i))) = 0;
A(unique(min_i)) = []; % Delete the zero-crossing variable (keep the ordering)
nVars = nVars-1;
signOK = 0;
end
end
if Inf ~= t(1),
t_now = norm(beta_tmp(A),1);
if t_prev < t(1) && t_now >= t(1),
beta_t(ii,A) = beta(i,A) + L_A*(t(1)-t_prev)*w_A'; % Compute coefficient estimates corresponding to a specific t
t(1) = [];
ii = ii+1;
end
t_prev = t_now;
end
mu = mu_old + gamma(i)*u_A; % Update mean vector
mu_old = mu;
beta = [beta; beta_tmp'];
end
if 1 < ii,
noCons = (tt > norm(beta_tmp,1));
if 0 < sum(noCons),
beta_t(noCons,:) = repmat(beta_tmp',sum(noCons),1);
end
beta = beta_t;
end
% Normalize columns of X to have mean zero and length one.
function sX = normalize(X)
[n,p] = size(X);
sX = X-repmat(mean(X),n,1);
sX = sX*diag(1./sqrt(ones(1,n)*sX.^2));
% Find the minimum and its index over the (strictly) positive part of X
% matrix
function [m, I, J] = minplus(X)
% Remove complex elements and reset to Inf
[I,J] = find(0~=imag(X));
for i = 1:length(I),
X(I(i),J(i)) = Inf;
end
X(X<=0) = Inf;
m = min(min(X));
[I,J] = find(X==m);
You can find more information in the related paper:
Efron, Bradley; Hastie, Trevor; Johnstone, Iain; Tibshirani, Robert. Least angle regression. Ann. Statist. 32 (2004), no. 2, 407--499. doi:10.1214/009053604000000067.
http://projecteuclid.org/euclid.aos/1083178935.
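As for the "undefined X" error itself: X is not defined anywhere inside the file because it is the first input argument of the function (the n-by-p predictor matrix), and Y is the response vector, so lars has to be called with data you supply. A minimal sketch with synthetic data (the sizes and sparsity pattern below are arbitrary, purely for illustration; it assumes lars.m is on the MATLAB path):
n = 100;  p = 20;                             % 100 observations, 20 predictors
X = randn(n, p);                              % predictor (design) matrix
w = zeros(p, 1);  w([2 7 11]) = [3; -2; 1.5]; % sparse "true" coefficients
Y = X*w + 0.1*randn(n, 1);                    % noisy response
[beta, A] = lars(X, Y, 'lasso');              % trace the lasso path
plot(sum(abs(beta), 2), beta);                % coefficient paths vs. their L1 norm
xlabel('||beta||_1'); ylabel('coefficients');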

Selecting SVM parameters using cross validation and F1-scores

I need to keep track of the F1-scores while tuning C and Sigma in SVM.
For example, the following code keeps track of the accuracy; I need to change it to the F1-score, but I was not able to do that.
%# read some training data
[labels,data] = libsvmread('./heart_scale');
%# grid of parameters
folds = 5;
[C,gamma] = meshgrid(-5:2:15, -15:2:3);
%# grid search, and cross-validation
cv_acc = zeros(numel(C),1);
for i=1:numel(C)
cv_acc(i) = svmtrain(labels, data, ...
sprintf('-c %f -g %f -v %d', 2^C(i), 2^gamma(i), folds));
end
%# pair (C,gamma) with best accuracy
[~,idx] = max(cv_acc);
%# now you can train you model using best_C and best_gamma
best_C = 2^C(idx);
best_gamma = 2^gamma(idx);
%# ...
I have seen the following two links
Retraining after Cross Validation with libsvm
10 fold cross-validation in one-against-all SVM (using LibSVM)
I do understand that I have to first find the best C and gamma/sigma parameters over the training data, and then use these two values in a leave-one-out cross-validation classification experiment.
So what I want now is to first do a grid search for tuning C and sigma.
I would prefer to use MATLAB's SVM functions and not LIBSVM.
Below is my code for leave-one-out cross-validation classification.
clc
clear all
close all
a = load('V1.csv');
X = double(a(:,1:12));
Y = double(a(:,13));
% train data
datall=[X,Y];
A=datall;
n = 40;
ordering = randperm(n);
B = A(ordering, :);
good=B;
input=good(:,1:12);
target=good(:,13);
CVO = cvpartition(target,'leaveout',1);
cp = classperf(target); %# init performance tracker
svmModel=[];
for i = 1:CVO.NumTestSets %# for each fold
trIdx = CVO.training(i);
teIdx = CVO.test(i);
%# train an SVM model over training instances
svmModel = svmtrain(input(trIdx,:), target(trIdx), ...
'Autoscale',true, 'Showplot',false, 'Method','ls', ...
'BoxConstraint',0.1, 'Kernel_Function','rbf', 'RBF_Sigma',0.1);
%# test using test instances
pred = svmclassify(svmModel, input(teIdx,:), 'Showplot',false);
%# evaluate and update performance object
cp = classperf(cp, pred, teIdx);
end
%# get accuracy
accuracy=cp.CorrectRate*100
sensitivity=cp.Sensitivity*100
specificity=cp.Specificity*100
PPV=cp.PositivePredictiveValue*100
NPV=cp.NegativePredictiveValue*100
%# get confusion matrix
%# columns:actual, rows:predicted, last-row: unclassified instances
cp.CountingMatrix
recallP = sensitivity;
recallN = specificity;
precisionP = PPV;
precisionN = NPV;
f1P = 2*((precisionP*recallP)/(precisionP + recallP));
f1N = 2*((precisionN*recallN)/(precisionN + recallN));
aF1 = ((f1P+f1N)/2);
I have changed the code, but I am making some mistakes and I am getting errors:
a = load('V1.csv');
X = double(a(:,1:12));
Y = double(a(:,13));
% train data
datall=[X,Y];
A=datall;
n = 40;
ordering = randperm(n);
B = A(ordering, :);
good=B;
inpt=good(:,1:12);
target=good(:,13);
k=10;
cvFolds = crossvalind('Kfold', target, k); %# get indices of 10-fold CV
cp = classperf(target); %# init performance tracker
svmModel=[];
for i = 1:k
    testIdx = (cvFolds == i); %# get indices of test instances
    trainIdx = ~testIdx;
    C = 0.1:0.1:1;
    S = 0.1:0.1:1;
    fscores = zeros(numel(C), numel(S)); %// Pre-allocation
    for c = 1:numel(C)
        for s = 1:numel(S)
            vals = crossval(@(XTRAIN, YTRAIN, XVAL, YVAL)(fun(XTRAIN, YTRAIN, XVAL, YVAL, C(c), S(c))),inpt(trainIdx,:),target(trainIdx));
            fscores(c,s) = mean(vals);
        end
    end
end
[cbest, sbest] = find(fscores == max(fscores(:)));
C_final = C(cbest);
S_final = S(sbest);
...
and the function:
function fscore = fun(XTRAIN, YTRAIN, XVAL, YVAL, C, S)
svmModel = svmtrain(XTRAIN, YTRAIN, ...
'Autoscale',true, 'Showplot',false, 'Method','ls', ...
'BoxConstraint', C, 'Kernel_Function','rbf', 'RBF_Sigma', S);
pred = svmclassify(svmModel, XVAL, 'Showplot',false);
cp = classperf(YVAL, pred)
%# get accuracy
accuracy=cp.CorrectRate*100
sensitivity=cp.Sensitivity*100
specificity=cp.Specificity*100
PPV=cp.PositivePredictiveValue*100
NPV=cp.NegativePredictiveValue*100
%# get confusion matrix
%# columns:actual, rows:predicted, last-row: unclassified instances
cp.CountingMatrix
recallP = sensitivity;
recallN = specificity;
precisionP = PPV;
precisionN = NPV;
f1P = 2*((precisionP*recallP)/(precisionP + recallP));
f1N = 2*((precisionN*recallN)/(precisionN + recallN));
fscore = ((f1P+f1N)/2);
end
So basically you want to take this line of yours:
svmModel = svmtrain(input(trIdx,:), target(trIdx), ...
'Autoscale',true, 'Showplot',false, 'Method','ls', ...
'BoxConstraint',0.1, 'Kernel_Function','rbf', 'RBF_Sigma',0.1);
put it in a loop that varies your 'BoxConstraint' and 'RBF_Sigma' parameters, and then use crossval to output the F1-score for each iteration's combination of parameters.
You can use a single for-loop exactly like in your libsvm code example (i.e. using meshgrid and 1:numel(), this is probably faster) or a nested for-loop. I'll use a nested loop so that you have both approaches:
C = [0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300] %// you must choose your own set of values for the parameters that you want to test. You can either do it this way by explicitly typing out a list
S = 0:0.1:1 %// or you can do it this way using the : operator
fscores = zeros(numel(C), numel(S)); %// Pre-allocation
for c = 1:numel(C)
    for s = 1:numel(S)
        vals = crossval(@(XTRAIN, YTRAIN, XVAL, YVAL) fun(XTRAIN, YTRAIN, XVAL, YVAL, C(c), S(s)), input(trIdx,:), target(trIdx));
        fscores(c,s) = mean(vals);
    end
end
%// Then establish the C and S that gave you the best f-score. Don't forget that cbest and sbest are just indexes though!
[cbest, sbest] = find(fscores == max(fscores(:)));
C_final = C(cbest);
S_final = S(sbest);
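For completeness, the single-loop variant mentioned above would look roughly like this (a sketch that reuses fun, input, target and trIdx from the surrounding code; Cgrid, Sgrid and idx are new names introduced here):
[Cgrid, Sgrid] = meshgrid(C, S);     %// every (BoxConstraint, RBF_Sigma) pair
fscores = zeros(numel(Cgrid), 1);    %// Pre-allocation
for i = 1:numel(Cgrid)
    vals = crossval(@(XTRAIN, YTRAIN, XVAL, YVAL) ...
        fun(XTRAIN, YTRAIN, XVAL, YVAL, Cgrid(i), Sgrid(i)), ...
        input(trIdx,:), target(trIdx));
    fscores(i) = mean(vals);
end
[~, idx] = max(fscores);             %// linear index of the best (C, S) pair
C_final = Cgrid(idx);
S_final = Sgrid(idx);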
Now we just have to define the function fun. The docs have this to say about fun:
fun is a function handle to a function with two inputs, the training
subset of X, XTRAIN, and the test subset of X, XTEST, as follows:
testval = fun(XTRAIN,XTEST) Each time it is called, fun should use
XTRAIN to fit a model, then return some criterion testval computed on
XTEST using that fitted model.
So fun needs to:
output a single f-score
take as input a training and testing set for X and Y. Note that these are both subsets of your actual training set! Think of them more like a training and validation SUBSET of your training set. Also note that crossval will split these sets up for you!
Train a classifier on the training subset (using your current C and S parameters from your loop)
RUN your new classifier on the test (or validation rather) subset
Compute and output a performance metric (in your case you want the f1-score)
You'll notice that fun can't take any extra parameters, which is why I've wrapped it in an anonymous function so that we can pass the current C and S values in (i.e. all that @(...)fun(...) stuff above). That's just a trick to "convert" our six-parameter fun into the four-parameter one required by crossval.
function fscore = fun(XTRAIN, YTRAIN, XVAL, YVAL, C, S)
svmModel = svmtrain(XTRAIN, YTRAIN, ...
'Autoscale',true, 'Showplot',false, 'Method','ls', ...
'BoxConstraint', C, 'Kernel_Function','rbf', 'RBF_Sigma', S);
pred = svmclassify(svmModel, XVAL, 'Showplot',false);
CP = classperf(YVAL, pred)
fscore = ... %// You can do this bit the same way you did earlier
end
The only problem I found was with target(trainIdx): it is a row vector, so I just replaced target(trainIdx) with target(trainIdx)', which is a column vector.

MATLAB: One Step Ahead Neural Network Timeseries Forecast

Intro: I'm using MATLAB's Neural Network Toolbox in an attempt to forecast time series one step into the future. Currently I'm just trying to forecast a simple sinusoidal function, but hopefully I will be able to move on to something a bit more complex after I obtain satisfactory results.
Problem: Everything seems to work fine, however the predicted forecast tends to be lagged by one period. Neural network forecasting isn't much use if it just outputs the series delayed by one unit of time, right?
Code:
t = -50:0.2:100;
noise = rand(1,length(t));
y = sin(t)+1/2*sin(t+pi/3);
split = floor(0.9*length(t));
forperiod = length(t)-split;
numinputs = 5;
forecasted = [];
msg = '';
for j = 1:forperiod
fprintf(repmat('\b',1,numel(msg)));
msg = sprintf('forecasting iteration %g/%g...\n',j,forperiod);
fprintf('%s',msg);
estdata = y(1:split+j-1);
estdatalen = size(estdata,2);
signal = estdata;
last = signal(end);
[signal,low,high] = preprocess(signal'); % pre-process
signal = signal';
inputs = signal(rowshiftmat(length(signal),numinputs));
targets = signal(numinputs+1:end);
%% NARNET METHOD
feedbackDelays = 1:4;
hiddenLayerSize = 10;
net = narnet(feedbackDelays,[hiddenLayerSize hiddenLayerSize]);
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
signalcells = mat2cell(signal,[1],ones(1,length(signal)));
[inputs,inputStates,layerStates,targets] = preparets(net,{},{},signalcells);
net.trainParam.showWindow = false;
net.trainparam.showCommandLine = false;
net.trainFcn = 'trainlm'; % Levenberg-Marquardt
net.performFcn = 'mse'; % Mean squared error
[net,tr] = train(net,inputs,targets,inputStates,layerStates);
next = net(inputs(end),inputStates,layerStates);
next = postprocess(next{1}, low, high); % post-process
next = (next+1)*last;
forecasted = [forecasted next];
end
figure(1);
plot(1:forperiod, forecasted, 'b', 1:forperiod, y(end-forperiod+1:end), 'r');
grid on;
Note:
The function 'preprocess' simply converts the data into logged % differences and 'postprocess' converts the logged % differences back for plotting. (Check EDIT for preprocess and postprocess code)
Results:
BLUE: Forecasted Values
RED: Actual Values
Can anyone tell me what I'm doing wrong here? Or perhaps recommend another method to achieve the desired results (lagless prediction of sinusoidal function, and eventually more chaotic timeseries)? Your help is very much appreciated.
EDIT:
It's been a few days now and I hope everyone has enjoyed their weekend. Since no solutions have emerged, I've decided to post the code for the helper functions 'postprocess.m', 'preprocess.m', and their helper function 'normalize.m'. Maybe this will help get the ball rolling.
postprocess.m:
function data = postprocess(x, low, high)
% denormalize
logdata = (x+1)/2*(high-low)+low;
% inverse log data
sign = logdata./abs(logdata);
data = sign.*(exp(abs(logdata))-1);
end
preprocess.m:
function [y, low, high] = preprocess(x)
% differencing
diffs = diff(x);
% calc % changes
chngs = diffs./x(1:end-1,:);
% log data
sign = chngs./abs(chngs);
logdata = sign.*log(abs(chngs)+1);
% normalize logrets
high = max(max(logdata));
low = min(min(logdata));
y=[];
for i = 1:size(logdata,2)
y = [y normalize(logdata(:,i), -1, 1)];
end
end
normalize.m:
function Y = normalize(X,low,high)
%NORMALIZE Linear normalization of X between low and high values.
if length(X) <= 1
error('Length of X input vector must be greater than 1.');
end
mi = min(X);
ma = max(X);
Y = (X-mi)/(ma-mi)*(high-low)+low;
end
I didn't check your code, but I made a similar test to predict sin() with a neural network. The result seems reasonable, without a lag. I think your bug is somewhere in the synchronization of the predicted values with the actual values.
Here is the code:
%% init & params
t = (-50 : 0.2 : 100)';
y = sin(t) + 0.5 * sin(t + pi / 3);
sigma = 0.2;
n_lags = 12;
hidden_layer_size = 15;
%% create net
net = fitnet(hidden_layer_size);
%% train
noise = sigma * randn(size(t));
y_train = y + noise;
out = circshift(y_train, -1);
out(end) = nan;
in = lagged_input(y_train, n_lags);
net = train(net, in', out');
%% test
noise = sigma * randn(size(t)); % new noise
y_test = y + noise;
in_test = lagged_input(y_test, n_lags);
out_test = net(in_test')';
y_test_predicted = circshift(out_test, 1); % sync with actual value
y_test_predicted(1) = nan;
%% plot
figure,
plot(t, [y, y_test, y_test_predicted], 'linewidth', 2);
grid minor; legend('orig', 'noised', 'predicted')
and the lagged_input() function:
function in = lagged_input(in, n_lags)
    for k = 2 : n_lags
        in = cat(2, in, circshift(in(:, end), 1));
        in(1, k) = nan;
    end
end