Matlab neural network for regression

I have implemented three functions for neural network regression:
1) a forward-propagation function that, given the training inputs and the net structure, calculates the predicted output:
function [y_predicted] = forwardProp(Theta,Baias,Inputs,NumberOfLayers,RegressionSwitch)
    for i = 1:size(Inputs{1},2)
        Activation = (Inputs{1}(:,i))';
        for j = 2:NumberOfLayers - RegressionSwitch
            Activation = 1./(1+exp(-(Activation*Theta{j-1} + Baias{j-1})));
        end
        if RegressionSwitch == 1
            y_predicted(:,i) = Activation*Theta{end} + Baias{end};
        else
            y_predicted(:,i) = Activation;
        end
    end
end
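For reference, the same forward pass can be vectorized over all samples at once. A minimal sketch, assuming the same cell-array layout (implicit expansion, available since R2016b, broadcasts the 1-by-nodes bias row across all sample rows):

% Vectorized forward pass: one row per training sample.
A = Inputs{1}';                                  % samples x input nodes
for j = 2:NumberOfLayers - RegressionSwitch
    A = 1./(1 + exp(-(A*Theta{j-1} + Baias{j-1})));  % sigmoid hidden layers
end
if RegressionSwitch == 1
    y_predicted = (A*Theta{end} + Baias{end})';  % linear output, outputs x samples
else
    y_predicted = A';
end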
2) a cost function that, given the predicted and the desired outputs, calculates the cost of the network:
function [Cost] = costFunction(y_predicted, y, Theta, Baias, Lambda)
    Cost = 0;
    for j = 1:size(y,2)
        for i = 1:size(y,1)
            Cost = Cost + (((y(i,j) - y_predicted(i,j))^2)/size(y,2));
        end
    end
    Reg = 0;
    for i = 1:size(Theta, 2)
        for j = 1:size(Theta{i}, 1)
            for k = 1:size(Theta{i}, 2)
                Reg = Reg + (Theta{i}(j,k))^2;
            end
        end
    end
    for i = 1:size(Baias, 2)
        for j = 1:length(Baias{i})
            Reg = Reg + (Baias{i}(j))^2;
        end
    end
    Cost = Cost + (Lambda/(2*size(y,2)))*Reg;
end
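The loops amount to the sum of squared errors divided by the number of samples m, plus an L2 penalty on every weight and bias. A vectorized equivalent, as a sketch under the same cell-array layout:

m = size(y,2);
Cost = sum(sum((y - y_predicted).^2))/m;   % squared error over all outputs and samples
Reg = 0;
for i = 1:numel(Theta)
    Reg = Reg + sum(Theta{i}(:).^2) + sum(Baias{i}(:).^2);
end
Cost = Cost + (Lambda/(2*m))*Reg;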
3) a back-propagation function that numerically estimates the partial derivative of the cost function with respect to each weight in the network:
function [dTheta, dBaias] = Deltas(Theta,Baias,Inputs,NumberOfLayers,RegressionSwitch, Epsilon, Lambda, y)
    for i = 1:size(Theta,2)
        for j = 1:size(Theta{i},1)
            for k = 1:size(Theta{i},2)
                dTp = Theta;
                dTm = Theta;
                dTp{i}(j,k) = dTp{i}(j,k) + Epsilon;
                dTm{i}(j,k) = dTm{i}(j,k) - Epsilon;
                y_predicted_p = forwardProp(dTp,Baias,Inputs,NumberOfLayers,RegressionSwitch);
                y_predicted_m = forwardProp(dTm,Baias,Inputs,NumberOfLayers,RegressionSwitch);
                Cost_p = costFunction(y_predicted_p, y, dTp, Baias, Lambda);
                Cost_m = costFunction(y_predicted_m, y, dTm, Baias, Lambda);
                dTheta{i}(j,k) = (Cost_p - Cost_m)/(2*Epsilon);
            end
        end
    end
    for i = 1:size(Baias,2)
        for j = 1:length(Baias{i})
            dBp = Baias;
            dBm = Baias;
            dBp{i}(j) = dTp{i}(j) + Epsilon;
            dBm{i}(j) = dTm{i}(j) - Epsilon;
            y_predicted_p = forwardProp(Theta,dBp,Inputs,NumberOfLayers,RegressionSwitch);
            y_predicted_m = forwardProp(Theta,dBm,Inputs,NumberOfLayers,RegressionSwitch);
            Cost_p = costFunction(y_predicted_p, y, Theta, dBp, Lambda);
            Cost_m = costFunction(y_predicted_m, y, Theta, dBm, Lambda);
            dBaias{i}(j) = (Cost_p - Cost_m)/(2*Epsilon);
        end
    end
end
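Deltas applies the standard central-difference approximation dJ/dw ~ (J(w+Epsilon) - J(w-Epsilon))/(2*Epsilon), one weight at a time. A tiny self-contained sanity check of the idea on a scalar function:

% Central differences on f(w) = (w-3)^2; analytic gradient at w=1 is 2*(1-3) = -4.
f = @(w) (w - 3).^2;
w = 1; Epsilon = 1e-5;
numGrad = (f(w + Epsilon) - f(w - Epsilon))/(2*Epsilon)   % ~ -4.0000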
I train the neural network with data generated from an exact mathematical function of the inputs.
Gradient descent seems to work, as the cost decreases on each iteration, but when I test the trained network the regression is terrible.
The functions are not meant to be efficient, but they should work, so I am really frustrated to see that they don't... The main function and the data are OK, so the problem should be here. Can you please help me spot it?
Here is the "main":
clear;
clc;
Nodes_X = 5;
Training_Data = 1000;
x = rand(Nodes_X, Training_Data)*3;
y = zeros(2,Training_Data);
for j = 1:Nodes_X
    for i = 1:Training_Data
        y(1,i) = (x(1,i)^2)+x(2,i)-x(3,i)+2*x(4,i)/x(5,i);
        y(2,i) = (x(5,i)^2)+x(2,i)-x(3,i)+2*x(4,i)/x(1,i);
    end
end
vx = rand(Nodes_X, Training_Data)*3;
vy = zeros(2,Training_Data);
for j = 1:Nodes_X
    for i = 1:Training_Data
        vy(1,i) = (vx(1,i)^2)+vx(2,i)-vx(3,i)+2*vx(4,i)/vx(5,i);
        vy(2,i) = (vx(5,i)^2)+vx(2,i)-vx(3,i)+2*vx(4,i)/vx(1,i);
    end
end
%%%%%%%%%%%%%%%%%%%%%%ASSIGN NODES TO EACH LAYER%%%%%%%%%%%%%%%%%%%%%%%%%%%
NumberOfLayers = 4;
Nodes(1) = 5;
Nodes(2) = 10;
Nodes(3) = 10;
Nodes(4) = 2;
if length(Nodes) ~= NumberOfLayers || (Nodes(1)) ~= size(x, 1)
    WARNING = msgbox('Nodes assigned incorrectly!');
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%INITIALIZATION%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
for i = 1:NumberOfLayers-1
    Theta{i} = rand(Nodes(i),Nodes(i+1));
    Baias{i} = rand(1,Nodes(i+1));
end
Inputs{1} = x;
Outputs{1} = y;
RegressionSwitch = 1;
Lambda = 10;
Epsilon = 0.00001;
Alpha = 0.01;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%TRAINING%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Epoch = 0;
figure;
hold on;
while Epoch <= 20
    %%%%%%%%%%%%%%%%%%%%FORWARD PROPAGATION%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    y_predicted = forwardProp(Theta,Baias,Inputs,NumberOfLayers,RegressionSwitch);
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%COST%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    Cost = costFunction(y_predicted, y, Theta, Baias, Lambda);
    scatter(Epoch,Cost);
    pause(0.01);
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%BACK PROPAGATION%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    [dTheta, dBaias] = Deltas(Theta,Baias,Inputs,NumberOfLayers,RegressionSwitch, Epsilon, Lambda, y);
    %%%%%%%%%%%%%%%GRADIENT DESCENT%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    for i = 1:size(Theta,2)
        Theta{i} = Theta{i}-Alpha*dTheta{i};
    end
    for i = 1:size(Baias,2)
        Baias{i} = Baias{i}-Alpha*dBaias{i};
    end
    Epoch = Epoch + 1;
end
hold off;
V_Inputs{1} = vx;
V_y_predicted = forwardProp(Theta,Baias,V_Inputs,NumberOfLayers,RegressionSwitch);
figure;
hold on;
for i = 1:size(vy,2)
    scatter(vy(1,i),V_y_predicted(1,i));
    pause(0.01);
end
hold off;
figure;
hold on;
for i = 1:size(vy,2)
    scatter(vy(2,i),V_y_predicted(2,i));
    pause(0.01);
end
hold off;

Related

How to remove marginal points in Gaussian power curve?

I'm trying to generate a power curve which is Gaussian, but in the plot generated I need to remove the marginal values. Could someone please guide me on how? Thanks.
Following is the code I've written for the power curve:
function [xgrid,ygrid,Z] = biVariateContourPlotsGMMCopula(givenData,gmmObject,~,numMeshPoints,x_dim,y_dim)
    d = 2;
    if nargin < 5
        x_dim = 1;
        y_dim = 2;
    end
    if x_dim == y_dim
        hist(givenData(:,x_dim),10);
        return;
    end
    numMeshPoints = min(numMeshPoints,256);
    givenData = givenData(:,[x_dim y_dim]);
    alpha = gmmObject.alpha;
    mu = gmmObject.mu(:,[x_dim y_dim]);
    sigma = gmmObject.sigma([x_dim y_dim],[x_dim y_dim],:) + 0.005*repmat(eye(d),[1 1 numel(alpha)]);
    gmmObject = gmdistribution(mu,sigma,alpha);
    bin_num = 256;
    for j = 1:2
        l_limit = min(gmmObject.mu(:,j))-3*(max(gmmObject.Sigma(j,j,:))^0.5);
        u_limit = max(gmmObject.mu(:,j))+3*(max(gmmObject.Sigma(j,j,:))^0.5);
        xmesh_inverse_space{j} = (l_limit:(u_limit-l_limit)/(bin_num-1):u_limit);
    end
    %if isempty(xmesh)||isempty(pdensity)||isempty(cdensity)
    % The following loop does the non-parametric estimation of the marginal
    % densities if they are not provided
    for i = 1:d
        currentVar = givenData(:,i);
        [bandwidth,pdensity{i},xmesh{i}] = kde(currentVar,numMeshPoints);
        pdensity{i}(pdensity{i} < 0) = 0;   % clip negative density estimates
        cdensity{i} = cumsum(pdensity{i});
        cdensity{i} = (cdensity{i}-min(cdensity{i}))/(max(cdensity{i})-min(cdensity{i}));
    end
    [xgrid,ygrid] = meshgrid(xmesh{1}(2:end-1),xmesh{2}(2:end-1));
    for k = 1:d
        marginalLogLikelihood_grid{k} = log(pdensity{k}(2:end-1)+eps);
        marginalCDFValues_grid{k} = cdensity{k}(2:end-1);
    end
    [marg1,marg2] = meshgrid(marginalLogLikelihood_grid{1},marginalLogLikelihood_grid{2});
    [xg,yg] = meshgrid(marginalCDFValues_grid{1},marginalCDFValues_grid{2});
    inputMatrix = [reshape(xg,numel(xg),1) reshape(yg,numel(yg),1)];
    clear xg yg;
    copulaLogLikelihoodVals = gmmCopulaPDF(inputMatrix,gmmObject,xmesh_inverse_space);
    Z = reshape(copulaLogLikelihoodVals,size(marg1,1),size(marg1,2));
    Z = Z + marg1 + marg2;
    Z = exp(Z);
    plot(givenData(:,1),givenData(:,2),'k.','MarkerSize',3); hold on;
    contour(xgrid,ygrid,Z,40);
    %title_string = ['GMCM fit (Log-Likelihood = ',num2str(logLikelihoodVal), ')'];
    %title(title_string,'FontSize',12,'FontWeight','demi');
    axis tight;
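One hedged reading of "remove the marginal values": mask the near-zero tails of Z before contouring, so the flat margins are not drawn at all. A sketch, where the threshold is an assumption to tune:

Zplot = Z;
Zplot(Zplot < 1e-6*max(Zplot(:))) = NaN;   % NaN cells are skipped by contour
contour(xgrid, ygrid, Zplot, 40);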

Steepest Descent using Armijo rule

I want to determine the steepest descent of the Rosenbrock function using the Armijo step length, with initial column vector x = [-1.2, 1]'.
The problem is that the code has been running for a long time. I think an infinite loop is created here, but I could not understand where the problem is.
Could anyone help me?
n = input('enter the number of variables n ');
% Armijo stepsize rule parameters
x = [-1.2 1]';
s = 10;
m = 0;
sigma = .1;
beta = .5;
obj = func(x);
g = grad(x);
k_max = 10^5;
k = 0;   % k = # iterations
nf = 1;  % nf = # function evaluations
x_new = zeros([],1);  % empty vector which can be filled when the length is not known
[X,Y] = meshgrid(-2:0.5:2);
fx = 100*(X.^2 - Y).^2 + (X-1).^2;
contour(X, Y, fx, 20)
while (norm(g) > 10^(-3)) && (k < k_max)
    d = -g./abs(g);  % steepest descent direction
    s = 1;
    newobj = func(x + beta.^m*s*d);
    m = m + 1;
    if obj > newobj - (sigma*beta.^m*s*g'*d)
        t = beta^m*s;
        x = x + t*d;
        m_new = m;
        newobj = func(x + t*d);
        nf = nf + 1;
    else
        m = m + 1;
    end
    obj = newobj;
    g = grad(x);
    k = k + 1;
    x_new = [x_new, x];
end
% Output x and k
x_new, k, nf
fprintf('Optimal Solution x = [%f, %f]\n', x(1), x(2))
plot(x_new)
function y = func(x)
    y = 100*(x(1)^2 - x(2))^2 + (x(1)-1)^2;
end
function y = grad(x)
    y(1) = 100*(2*(x(1)^2-x(2))*2*x(1)) + 2*(x(1)-1);
end
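For comparison, a minimal sketch of the textbook Armijo backtracking loop on the same function. Two assumptions about the intent are baked in: the gradient has both components (grad above only fills y(1)), and the backtracking counter m resets on every outer iteration instead of growing monotonically:

% func2/grad2 are local stand-ins for the Rosenbrock function above;
% grad2 returns the full 2-component gradient as a column vector.
func2 = @(x) 100*(x(1)^2 - x(2))^2 + (x(1)-1)^2;
grad2 = @(x) [400*x(1)*(x(1)^2 - x(2)) + 2*(x(1)-1); -200*(x(1)^2 - x(2))];
x = [-1.2; 1]; sigma = 0.1; beta = 0.5; s = 1;
for k = 1:1e5
    g = grad2(x);
    if norm(g) <= 1e-3, break; end
    d = -g;                     % steepest descent direction
    m = 0;                      % reset backtracking on every outer iteration
    while func2(x + beta^m*s*d) > func2(x) + sigma*beta^m*s*(g'*d)
        m = m + 1;              % shrink the step until the Armijo condition holds
    end
    x = x + beta^m*s*d;
end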

How to correct grid search?

I am trying to find the optimal hyperparameters for my SVM model using a grid search, but it simply returns 1 for both hyperparameters.
function evaluations = inner_kfold_trainer(C,q,k,features_xy,labels)
    features_xy_flds = kdivide(features_xy, k);
    labels_flds = kdivide(labels, k);
    evaluations = zeros(k,3);
    for i = 1:k
        fprintf('Fold %i of %i\n',i,k);
        train_data = cell2mat(features_xy_flds(1:end ~= i));
        train_labels = cell2mat(labels_flds(1:end ~= i));
        test_data = cell2mat(features_xy_flds(i));
        test_labels = cell2mat(labels_flds(i));
        % AU1
        train_labels = train_labels(:,1);
        test_labels = test_labels(:,1);
        [k,~] = size(test_labels);
        % train
        sv = fitcsvm(train_data,train_labels, 'KernelFunction','polynomial', 'PolynomialOrder',q,'BoxConstraint',C);
        % calculate evaluative measures
        sv_predictions = sv.predict(test_data);
        [precision,recall,F1] = evaluation(sv_predictions,test_labels);
        evaluations(i,1) = precision;
        evaluations(i,2) = recall;
        evaluations(i,3) = F1;
    end
    save('eval.mat', 'evaluations');
end
This is an inner-fold cross-validation function; below is the grid-search function, where something seems to be going wrong:
function [q,C] = grid_search(features_xy,labels,k)
    % n x n grid
    n = 3;
    q_grid = linspace(1,19,n);
    C_grid = linspace(1,59,n);
    tic
    evals = zeros(n,n,3);
    for i = 1:n
        for j = 1:n
            fprintf('## i=%i, j=%i ##\n', i, j);
            svm_results = inner_kfold_trainer(C_grid(i), q_grid(j),k,features_xy,labels);
            evals(i,j,:) = mean(svm_results(:,:));
            % precision only
            %evals(i,j,:) = max(svm_results(:,1));
            toc
        end
    end
    f = evals;
    % retrieving the best value of the hyperparameters, to use in the outer fold
    [M1,I1] = max(f);
    [~,I2] = max(M1(1,1,:));
    index = I1(:,:,I2);
    C = C_grid(index(1))
    q = q_grid(index(2))
end
When I run grid_search(features_xy,labels,8), for example, I get C = 1 and q = 1 for any value of k (the number of folds). Also, features_xy is a 500*98 matrix.
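For what it's worth, a hedged sketch of retrieving the argmax from the 2-D grid without the chained max calls, assuming the goal is to maximise the mean F1 stored in the third slice of evals:

F1mean = evals(:,:,3);                      % n x n surface of mean F1 scores
[~, linIdx] = max(F1mean(:));               % argmax over the whole grid at once
[iBest, jBest] = ind2sub(size(F1mean), linIdx);
C = C_grid(iBest);
q = q_grid(jBest);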

Matlab: how to calculate the Pseudo Zernike moments?

The code below is defined as Algorithm 1, which computes the Pseudo Zernike radial polynomials:
function R = pseudo_zernike_radial_polynomials(n,r)
    if any(r > 1 | r < 0 | n < 0)
        error('zernike_radial_polynomials: r must be between 0 and 1 and n must not be less than 0.')
    end
    if n == 0
        R = ones(n+1, length(r));
        return;
    end
    R = ones(n+1, length(r));
    rSQRT = sqrt(r);
    r0 = ~logical(rSQRT.^(2*n+1));  % if any low r exist with high n, treat as 0
    if any(r0)
        m = n:-1:mod(n,2); ss = 1:sum(r0);
        R0(m+1, ss) = 0;
        R0(0+1, ss) = 1;
        R(:,r0) = R0;
    end
    if any(~r0)
        rSQRT = rSQRT(~r0);
        R1 = zernike_radial_polynomials(2*n+1, rSQRT);
        m = 2:2:2*n+1+1;
        R1 = R1(m,:);
        for m = 1:size(R1,1)
            R1(m,:) = R1(m,:)./rSQRT';
        end
        R(:,~r0) = R1;
    end
Then, this is Algorithm 2, which calculates the moments; I translated it into code as follows:
clear all
% input: 2D image f, Nmax = order.
f = rgb2gray(imread('Oval_45.png'));
prompt = ('Input PZM order Nmax:');
Nmax = input(prompt);
Pzm = 0;
l = size(f,1);
for x = 1:l
    for y = x
        for n = 0:Nmax
            [X,Y] = meshgrid(x,y);
            R = sqrt((2.*X-l-1).^2+(2.*Y-l-1).^2)/l;
            theta = atan2((l-1-2.*Y+2),(2.*X-l+1-2));
            R = (R<=1).*R;
            rad = pseudo_zernike_radial_polynomials(n, R);
            for m = 0:n
                % find psi
                if mod(m,2) == 0
                    % m is even
                    newd1 = f(x,y)+f(x,y);
                    newd2 = f(y,x)+f(y,x);
                    newd3 = f(x,y)+f(x,y);
                    newd4 = f(y,x)+f(y,x);
                    x1 = newd1;
                    y1 = (-1)^m/2*newd2;
                    x2 = newd3;
                    y2 = (-1)^m/2*newd4;
                    psi = cos(m*theta)*(x1+y1+x2+y2)-(1i)*sin(m*theta)*(x1+y1-x2-y2);
                else
                    newd1 = f(x,y)-f(x,y);
                    newd2 = f(y,x)-f(y,x);
                    newd3 = f(x,y)-f(x,y);
                    newd4 = f(y,x)-f(y,x);
                    x1 = newd1;
                    y1 = (-1)^m/2*newd2;
                    x2 = newd3;
                    y2 = (-1)^m/2*newd4;
                    psi = cos(m*theta)*(x1+x2)+sin(m*theta)*(y1-y2)+(1i)*(cos(m*theta)*(y1+y2)-sin(m*theta)*(x1-x2));
                end
                Pzm = Pzm+rad*psi;
            end
        end
    end
end
However, it gives me this error:
Error using *
Integers can only be combined with integers of the same class, or scalar doubles.
Error in main_pzm (line 44)
Pzm = Pzm+rad*psi;
The details of the calculation can be seen here.
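A likely cause, given the quoted error: rgb2gray returns a uint8 image, so psi ends up integer-typed, and MATLAB refuses to multiply an integer array by the non-scalar double matrix rad. A hedged one-line fix is to cast the image to double when it is read:

f = double(rgb2gray(imread('Oval_45.png')));   % work in double from the start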

Matlab Iris Classification input size mistmach

I am very new to Matlab. What I am trying to do is classify the iris dataset using cross-validation (that means I have to split the dataset in 3: training set, validation set, and test set). In my mind everything I write here is OK (being a beginner is hard sometimes), so I could use a little help...
This is the code that splits the data (the first 35 samples of each class, 70% of the data, are the training set; the next 15% are the validation set; and the last 15% I will use later for the test set):
close all; clear;
load fisheriris;
for i = 1:35
    for j = 1:4
        trainSeto(i,j) = meas(i,j);
    end
end
for i = 51:85
    for j = 1:4
        trainVers(i-50,j) = meas(i,j);
    end
end
for i = 101:135
    for j = 1:4
        trainVirg(i-100,j) = meas(i,j);
    end
end
for i = 36:43
    for j = 1:4
        valSeto(i-35,j) = meas(i,j);
    end
end
for i = 86:93
    for j = 1:4
        valVers(i-85,j) = meas(i,j);
    end
end
for i = 136:143
    for j = 1:4
        valVirg(i-135,j) = meas(i,j);
    end
end
for i = 44:50
    for j = 1:4
        testSeto(i-43,j) = meas(i,j);
    end
end
for i = 94:100
    for j = 1:4
        testVers(i-93,j) = meas(i,j);
    end
end
for i = 144:150
    for j = 1:4
        testVirg(i-143,j) = meas(i,j);
    end
end
And this is the main script:
close all; clear;
%% the 3 types of iris
run divinp
% the representation of the 3 classes (their coding)
a = [-1 -1 +1]';
b = [-1 +1 -1]';
c = [+1 -1 -1]';
% training set
trainInp = [trainSeto trainVers trainVirg];
% the targets
T = [repmat(a,1,length(trainSeto)) repmat(b,1,length(trainVers)) repmat(c,1,length(trainVirg))];
%% the training
trainCor = zeros(10,10);
valCor = zeros(10,10);
Xn = zeros(1,10);
Yn = zeros(1,10);
for k = 1:10
    Yn(1,k) = k;
    for n = 1:10
        Xn(1,n) = n;
        net = newff(trainInp,T,[k n],{},'trainbfg');
        net = init(net);
        net.divideParam.trainRatio = 1;
        net.divideParam.valRatio = 0;
        net.divideParam.testRatio = 0;
        net.trainParam.max_fail = 2;
        valInp = [valSeto valVers valVirg];
        valT = [repmat(a,1,length(valSeto)) repmat(b,1,length(valVers)) repmat(c,1,length(valVirg))];
        [net,tr] = train(net,trainInp,T);
        Y = sim(net,trainInp);
        [Yval,Pfval,Afval,Eval,perfval] = sim(net,valInp,[],[],valT);
        % calculate [%] of correct classifications
        trainCor(k,n) = 100 * length(find(T.*Y > 0)) / length(T);
        valCor(k,n) = 100 * length(find(valT.*Yval > 0)) / length(valT);
    end
end
figure
surf(Xn,Yn,trainCor/3);
view(2)
figure
surf(Xn,Yn,valCor/3);
view(2)
I get this error:
Error using trainbfg (line 120)
Inputs and targets have different numbers of samples.
Error in network/train (line 106)
[net,tr] = feval(net.trainFcn,net,X,T,Xi,Ai,EW,net.trainParam);
Error in ClassIris (line 38)
[net,tr] = train(net,trainInp,T);
close all; clear ;
load fisheriris;
trainSetoIndx = 1:35;
trainVersIndx = 51:85; % or: trainVersIndx = trainSetoIndx + 50;
trainVirgIndx = 101:135;
colIndx = 1:4;
trainSeto = meas(trainSetoIndx, colIndx);
trainVers = meas(trainVersIndx, colIndx);
trainVirg = meas(trainVirgIndx, colIndx);
valSetoIndx = 36:43;
valVersIndx = 86:93;
valVirgIndx = 136:143;
valSeto = meas(valSetoIndx, colIndx);
valVers = meas(valVersIndx, colIndx);
valVirg = meas(valVirgIndx, colIndx);
testSetoIndx = 44:50;
testVersIndx = 94:100;
testVirgIndx = 144:150;
testSeto = meas(testSetoIndx, colIndx);
testVers = meas(testVersIndx, colIndx);
testVirg = meas(testVirgIndx, colIndx);
I have written it with ":" as well and still have the same problem; it's something with repmat... I don't know how to use it properly, or newff :D
Just to get you started, you can rewrite your code loops as follows:
trainSetoIndx = 1:35;
trainVersIndx = 51:85;   % or: trainVersIndx = trainSetoIndx + 50;
trainVirgIndx = 101:135; % or: trainVirgIndx = trainSetoIndx + 100;
colIndx = 1:4;           % can't tell if this is all the columns in meas
trainSeto = meas(trainSetoIndx, colIndx);
trainVers = meas(trainVersIndx, colIndx);
trainVirg = meas(trainVirgIndx, colIndx);
Then do the same thing for all the others:
valSetoIndx = 36:43;
etc.
Next, simply type whos at the command prompt and you will see the sizes of all the arrays you have created. See whether the ones that need to be the same size have, in fact, the same dimensions.
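And a hedged guess at the size mismatch itself: train expects one column per sample, but [trainSeto trainVers trainVirg] concatenates three 35x4 blocks side by side into a 35x12 matrix, while T is 3x105. Stacking the blocks vertically and transposing gives the expected 4x105 input:

trainInp = [trainSeto; trainVers; trainVirg]';   % 4 x 105: one column per sample
valInp   = [valSeto;   valVers;   valVirg]';     % 4 x 24, same layout for validation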