Problems coding a deep neural network in MATLAB

I was trying to build a 5-layer neural network to classify a dataset with 3 classes, 178 instances and 13 features. Basically I was following the guideline given here. I have written my own code in MATLAB and it runs without errors. However, the training result turns out to be very bad: the model keeps predicting the same class for every input. I could not find what is wrong with my code, or whether the model simply doesn't fit the data. Could someone help me find where the problem is? Thank you very much.
My MATLAB training code is shown below:
%% Initialization
numclass = 3; % num of class
c = 13; % num of feature
% for each layer, initialize each parameter w and each b to a small random value near zero
w1 = normrnd(0,0.01,[c,10]); % Input layer -> layer 2 (10 nodes)
b1 = normrnd(0,0.01,[1,10]);
w2 = normrnd(0,0.01,[10,6]); % layer 2 -> layer 3 (6 nodes)
b2 = normrnd(0,0.01,[1,6]);
w3 = normrnd(0,0.01,[6,4]); % layer 3 -> layer 4 (4 nodes)
b3 = normrnd(0,0.01,[1,4]);
w4 = normrnd(0,0.01,[4,numclass]); % layer 4 -> Output layer (3 nodes/class label)
b4 = normrnd(0,0.01,[1,numclass]);
Iter = 0;
lambda = 0.5; % regularization coefficient
%% Batch Training
while Iter < 200
Iter = Iter+1
d_w1 = 0; d_w2 = 0; d_w3 = 0; d_b1 = 0; d_b2 = 0; d_b3 = 0;
d_w4 = 0; d_b4 = 0;
for i = 1:r % r = number of training instances, e.g. r = size(X0,1)
% Forward propagation
a1 = X0(i,:); % X0 is the training data; each row represents an instance with 13 features
% Input layer -> Layer 2
z2 = a1*w1+b1;
a2 = sigmoid(z2);
% Layer 2 -> Layer 3
z3 = a2*w2+b2;
a3 = sigmoid(z3);
% Layer 3 -> Layer 4
z4 = a3*w3+b3;
a4 = sigmoid(z4);
% Layer 4 -> Output Layer
z5 = a4*w4+b4;
a5 = sigmoid(z5);
% Backward propagation
y = zeros(1,numclass);
y(Y0(i)) = 1; % Y0 holds the training labels ({1,2,3} in this case); each element indicates which class the instance belongs to
% Error at the Output Layer (layer 5)
delta5 = (-(y-a5).*d_sigmoid(z5))';
% Output Layer -> Layer 4
delta4 = (w4*delta5).*d_sigmoid(z4');
% Layer 4 -> Layer 3
delta3 = (w3*delta4).*d_sigmoid(z3');
% Layer 3 -> Layer 2
delta2 = (w2*delta3).*d_sigmoid(z2');
% Compute the desired partial derivatives
d_w1 = d_w1 + (delta2*a1)';
d_b1 = d_b1 + delta2';
d_w2 = d_w2 + (delta3*a2)';
d_b2 = d_b2 + delta3';
d_w3 = d_w3 + (delta4*a3)';
d_b3 = d_b3 + delta4';
d_w4 = d_w4 + (delta5*a4)';
d_b4 = d_b4 + delta5';
end
eta = 0.8; % learning rate
% weights and bias updating
w1 = w1 - eta*((1/r*d_w1)+ lambda*w1);
b1 = b1 - eta*1/r*d_b1;
w2 = w2 - eta*((1/r*d_w2)+ lambda*w2);
b2 = b2 - eta*1/r*d_b2;
w3 = w3 - eta*((1/r*d_w3)+ lambda*w3);
b3 = b3 - eta*1/r*d_b3;
w4 = w4 - eta*((1/r*d_w4)+ lambda*w4);
b4 = b4 - eta*1/r*d_b4;
end
The sigmoid and d_sigmoid functions are shown below:
function y = sigmoid(x)
L=1;
k=10;
x0=0;
y = L./(1+exp(-k*(x-x0)));
end
function y = d_sigmoid(x)
tmp = sigmoid(x);
y = tmp.*(1-tmp);
end
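Since a2 through a5 already hold sigmoid(z2) through sigmoid(z5), the value of d_sigmoid can also be computed directly from the stored activations instead of re-evaluating the sigmoid. A minimal sketch (the helper name d_sigmoid_from_a is mine, not part of the original code):
function y = d_sigmoid_from_a(a)
% same value as d_sigmoid(z) when a = sigmoid(z), without re-evaluating the sigmoid
y = a.*(1-a);
end
With it, delta5 could be written as (-(y-a5).*d_sigmoid_from_a(a5))', and likewise for the other deltas using a2, a3 and a4 (transposed where needed).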
The prediction code is shown below:
%% Prediction: X1 is testing data, and Y1 is a vector of testing label
[r,c] = size(X1);
for i = 1:r
A1 = X1(i,:);
% Input layer -> Layer 2
Z2 = A1*w1+b1;
A2 = sigmoid(Z2);
% Layer 2 -> Layer 3
Z3 = A2*w2+b2;
A3 = sigmoid(Z3);
% Layer 3 -> Layer 4
Z4 = A3*w3+b3;
A4 = sigmoid(Z4);
% Layer 4 -> Output Layer
Z5 = A4*w4+b4;
A5 = sigmoid(Z5);
[~, pred(i)] = max(A5); % predicted class = index of the largest output
end
num_errors = nnz(pred' ~= Y1) % number of misclassified test instances ("error" would shadow a built-in function)
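Since the symptom is that the model keeps predicting the same class, one quick diagnostic is to compare how often each class is predicted with how often it actually occurs in the test labels. A minimal sketch, assuming pred and Y1 from the prediction code above and numclass = 3 as in the training script:
% per-class counts: predicted vs. actual
for k = 1:numclass
    fprintf('class %d: predicted %d times, actual %d times\n', k, sum(pred == k), sum(Y1 == k));
end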

Related

MATLAB Backpropagation Algorithm not functioning as expected

I am attempting to write a multi-layer perceptron network in MATLAB to help me better understand the calculus required for backpropagation.
The aim is to provide the network with XOR data (where upper-right and lower-left quadrant data is class 1 and the remaining quadrants are class 0), train the network on this data, and then test it on new data.
My problem is that my loss curve looks very, very strange:
It appears to bounce between very low error and very high error, converging in the middle to a fairly poor error.
I was wondering if someone could check that I have correctly implemented the chain rule in MATLAB syntax.
The MLP network is structured as follows: the input layer has 2 neurons, there is 1 hidden layer with 2 neurons, and 1 output neuron.
Here is the MATLAB code:
%Create XOR Dataset
x1pos = rand(500,1);
x1neg = -rand(500,1);
x1 = [x1pos; x1neg];
p = randperm(length(x1));
x1 = x1(p);
x2pos = rand(500,1);
x2neg = -rand(500,1);
x2 = [x2pos; x2neg];
p = randperm(length(x2));
x2 = x2(p);
Data = [x1 x2];
TrainingData = Data(1:800,:);
TestData = Data(801:length(Data),:);
T = gt((Data(:,1).*Data(:,2)),0); %Create class label for data and assign to matrix T
%Neural Net
%Training
W1 = rand(2,2); %Initialize random weights
W2 = rand(1,2); %Initialize random weights
B1 = rand(2,1); %Initialize random biases
B2 = rand(1,1); %Initialize random biases
n = 0.05; %Set Learning Rate
for i = 1:800
%Fwd Pass
x1 = Data(i,1);
x2 = Data(i,2);
X = [x1; x2];
A1 = W1*X + B1;
H1 = sigmoid(A1);
A2 = W2*H1 + B2;
Y = sigmoid(A2);
%Loss
Loss = (Y-T(i))*(Y-T(i));
scatter(i, Loss)
hold on;
%Backpropagation
dEdY = 2*(Y-T(i)); %The partial derivative of the loss with respect to the output
dYdA2 = Y*(1-Y); %The partial derivative of the output Y with respect to the output pre-activation A2
dA2dH1 = W2.'; %The partial derivative of the output pre-activation A2 with respect to the hidden layer output H1
dH1dA1 = H1.*(1-H1); %The partial derivative of the hidden layer output H1 with respect to the hidden pre-activation A1
%Chain Rule
dEdW2 = dEdY.*dYdA2.*W2.';
dEdW1 = dEdY.*dYdA2.*dA2dH1.*dH1dA1.*W1.';
dEdB2 = dEdY.*dYdA2;
dEdB1 = dEdY.*dYdA2.*dA2dH1.*dH1dA1;
%Update Weights
W2 = (W2.' - n.*dEdW2).';
W1 = (W1.' - n.*dEdW1).';
%Update Biases
B2 = B2 - n.*dEdB2;
B1 = B1 - n.*dEdB1;
%Next training loop
end
%Testing
for i = 801:1000
x1 = Data(i,1);
x2 = Data(i,2);
X = [x1; x2];
A1 = W1*X + B1;
H1 = sigmoid(A1);
A2 = W2*H1 + B2;
Y = sigmoid(A2);
end
function o = sigmoid(input)
% element-wise logistic sigmoid
o = 1./(1 + exp(-input));
end
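The testing loop above computes Y for each test point but never records it. A minimal sketch of turning those outputs into predictions and an accuracy figure, assuming the trained W1, W2, B1, B2 and the label vector T defined earlier (thresholding the sigmoid output at 0.5 is my assumption):
correct = 0;
for i = 801:1000
    X = Data(i,:).';               % column input, as in the training loop
    H1 = sigmoid(W1*X + B1);       % hidden layer activations
    Y = sigmoid(W2*H1 + B2);       % network output in (0,1)
    correct = correct + ((Y > 0.5) == T(i));
end
accuracy = correct / 200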

Convolutional Neural Networks in MATLAB. Adding one more layer

What I'm trying to do is verify that if I add one more layer to the CNN, the accuracy goes up.
The code is below.
This code is from https://github.com/lhoang29/DigitRecognition/blob/master/cnnload.m
I'm at the beginner stage with CNNs and am trying to add one more layer, including a convolution and pooling stage. I tried several ways, but it doesn't seem to work. Could someone show me how to add one more layer?
Thank you. Below is the code.
Code for main function:
clear all; close all; clc;
maxtrain = 10000;
iter = 10;
eta = 0.01;
%% Data Load
trlblid = fopen('train-labels-idx1-ubyte');
trimgid = fopen('train-images-idx3-ubyte');
tslblid = fopen('t10k-labels-idx1-ubyte');
tsimgid = fopen('t10k-images-idx3-ubyte');
% read train labels
fread(trlblid, 4);
numtrlbls = toint(fread(trlblid, 4));
trainlabels = fread(trlblid, numtrlbls);
% read train data
fread(trimgid, 4);
numtrimg = toint(fread(trimgid, 4));
trimgh = toint(fread(trimgid, 4));
trimgw = toint(fread(trimgid, 4));
trainimages = permute(reshape(fread(trimgid,trimgh*trimgw*numtrimg),trimgh,trimgw,numtrimg), [2 1 3]);
% read test labels
fread(tslblid, 4);
numtslbls = toint(fread(tslblid, 4));
testlabels = fread(tslblid, numtslbls);
% read test data
fread(tsimgid, 4);
numtsimg = toint(fread(tsimgid, 4));
tsimgh = toint(fread(tsimgid, 4));
tsimgw = toint(fread(tsimgid, 4));
testimages = permute(reshape(fread(tsimgid, tsimgh*tsimgw*numtsimg),tsimgh,tsimgw,numtsimg), [2 1 3]);
%% CNN Training
[missimages, misslabels] = cnntrain(trainlabels,trainimages,testlabels,testimages,maxtrain,iter,eta);
%% CNN Testing
showmiss(missimages,misslabels,testimages,testlabels,25,2);
Code for training:
function [missimages, misslabels] = cnntrain(trainlabels,trainimages,testlabels,testimages,maxtrain,iter,eta)
fn = 5; % number of kernels for layer 1
ks = 5; % size of kernel
[h,w,n] = size(trainimages);
n = min(n,maxtrain);
% normalize data to [-1,1] range
nitrain = (trainimages / 255) * 2 - 1;
nitest = (testimages / 255) * 2 - 1;
% train with backprop
h1 = h-ks+1;
w1 = w-ks+1;
A1 = zeros(h1,w1,fn);
h2 = h1/2;
w2 = w1/2;
I2 = zeros(h2,w2,fn);
A2 = zeros(h2,w2,fn);
A3 = zeros(10,1);
% kernels for layer 1
W1 = randn(ks,ks,fn) * .01;
B1 = ones(1,fn);
% scale parameter and bias for layer 2
S2 = randn(1,fn) * .01;
B2 = ones(1,fn);
% weights and bias parameters for fully-connected output layer
W3 = randn(h2,w2,fn,10) * .01;
B3 = ones(10,1);
% true outputs
Y = eye(10)*2-1;
for it=1:iter
err = 0;
for im=1:n
%------------ FORWARD PROP ------------%
% Layer 1: convolution with bias followed by sigmoidal squashing
for fm=1:fn
A1(:,:,fm) = convn(nitrain(:,:,im),W1(end:-1:1,end:-1:1,fm),'valid') + B1(fm);
end
Z1 = tanh(A1);
% Layer 2: average/subsample with scaling and bias
for fm=1:fn
I2(:,:,fm) = avgpool(Z1(:,:,fm));
A2(:,:,fm) = I2(:,:,fm) * S2(fm) + B2(fm);
end
Z2 = tanh(A2);
% Layer 3: fully connected
for cl=1:10
A3(cl) = convn(Z2,W3(end:-1:1,end:-1:1,end:-1:1,cl),'valid') + B3(cl);
end
Z3 = tanh(A3); % Final output
err = err + .5 * norm(Z3 - Y(:,trainlabels(im)+1),2)^2;
%------------ BACK PROP ------------%
% Compute error at output layer
Del3 = (1 - Z3.^2) .* (Z3 - Y(:,trainlabels(im)+1));
% Compute error at layer 2
Del2 = zeros(size(Z2));
for cl=1:10
Del2 = Del2 + Del3(cl) * W3(:,:,:,cl);
end
Del2 = Del2 .* (1 - Z2.^2);
% Compute error at layer 1
Del1 = zeros(size(Z1));
for fm=1:fn
Del1(:,:,fm) = (S2(fm)/4)*(1 - Z1(:,:,fm).^2);
for ih=1:h1
for iw=1:w1
Del1(ih,iw,fm) = Del1(ih,iw,fm) * Del2(floor((ih+1)/2),floor((iw+1)/2),fm);
end
end
end
% Update bias at layer 3
DB3 = Del3; % gradient w.r.t bias
B3 = B3 - eta*DB3;
% Update weights at layer 3
for cl=1:10
DW3 = DB3(cl) * Z2; % gradient w.r.t weights
W3(:,:,:,cl) = W3(:,:,:,cl) - eta * DW3;
end
% Update scale and bias parameters at layer 2
for fm=1:fn
DS2 = convn(Del2(:,:,fm),I2(end:-1:1,end:-1:1,fm),'valid');
S2(fm) = S2(fm) - eta * DS2;
DB2 = sum(sum(Del2(:,:,fm)));
B2(fm) = B2(fm) - eta * DB2;
end
% Update kernel weights and bias parameters at layer 1
for fm=1:fn
DW1 = convn(nitrain(:,:,im),Del1(end:-1:1,end:-1:1,fm),'valid');
W1(:,:,fm) = W1(:,:,fm) - eta * DW1;
DB1 = sum(sum(Del1(:,:,fm)));
B1(fm) = B1(fm) - eta * DB1;
end
end
disp(['Error: ' num2str(err) ' at iteration ' num2str(it)]);
end
miss = 0;
numtest=size(testimages,3);
missimages = zeros(1,numtest);
misslabels = zeros(1,numtest);
for im=1:numtest
for fm=1:fn
A1(:,:,fm) = convn(nitest(:,:,im),W1(end:-1:1,end:-1:1,fm),'valid') + B1(fm);
end
Z1 = tanh(A1);
% Layer 2: average/subsample with scaling and bias
for fm=1:fn
I2(:,:,fm) = avgpool(Z1(:,:,fm));
A2(:,:,fm) = I2(:,:,fm) * S2(fm) + B2(fm);
end
Z2 = tanh(A2);
% Layer 3: fully connected
for cl=1:10
A3(cl) = convn(Z2,W3(end:-1:1,end:-1:1,end:-1:1,cl),'valid') + B3(cl);
end
Z3 = tanh(A3); % Final output
[pm,pl] = max(Z3);
if pl ~= testlabels(im)+1
miss = miss + 1;
missimages(miss) = im;
misslabels(miss) = pl - 1;
end
end
disp(['Miss: ' num2str(miss) ' out of ' num2str(numtest)]);
end
function [pr] = avgpool(img)
pr = zeros(size(img)/2);
for r=1:2:size(img,1)
for c=1:2:size(img,2)
pr((r+1)/2,(c+1)/2) = (img(r,c)+img(r+1,c)+img(r,c+1)+img(r+1,c+1))/4;
end
end
end
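For reference, the 2x2 average pooling above can also be written without the explicit double loop. A minimal sketch, assuming even image dimensions as in the loop version (the name avgpool2 is mine):
function pr = avgpool2(img)
% 2x2 non-overlapping average pooling, equivalent to avgpool above
s = conv2(img, ones(2)/4, 'valid'); % mean over every 2x2 window
pr = s(1:2:end, 1:2:end);           % keep only the non-overlapping windows
end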
Code for showing accuracy:
function [] = showmiss(missim,misslab,testimages,testlabels,numshow,numpages)
nummiss = nnz(missim);
page = 1;
showsize = floor(sqrt(numshow));
for f=1:numshow:nummiss
figure(floor(f/numshow) + 1);
for m=f:min(nummiss,f+numshow-1)
subplot(showsize,showsize,m-f+1);
imshow(testimages(:,:,missim(m)));
title(strcat(num2str(testlabels(missim(m))), ':', num2str(misslab(m))));
end
page = page + 1;
if page > numpages
break;
end
end
end
Function toint:
function [x] = toint(b)
x = b(1)*16777216 + b(2)*65536 + b(3)*256 + b(4);
end
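As an aside, the 4-byte big-endian header fields could also be read directly by passing a machine format to fread instead of assembling the bytes by hand; a sketch, assuming the same file identifier as above:
% read one big-endian 32-bit header field (alternative to fread(trlblid,4) followed by toint)
numtrlbls = fread(trlblid, 1, 'uint32', 0, 'ieee-be');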

Neural Network Backpropagation Algorithm Implementation

I implemented a neural network backpropagation algorithm in MATLAB, however it is not training correctly. The training data is a matrix X = [x1, x2] of dimension 2 x 200, and I have a target matrix T = [target1, target2] of dimension 2 x 200. The first 100 columns of T are [1; -1] for class 1, and the second 100 columns are [-1; 1] for class 2.
theta = 0.1; % criterion to stop
eta = 0.1; % step size
Nh = 10; % number of hidden nodes
For some reason the total training error is always 1.000; it never gets close to theta, so it runs forever.
I used the standard backpropagation update formulas; the total training error J is the mean squared error averaged over all training samples, as computed at the end of the while loop below.
The code is well documented below. I would appreciate any help.
clear;
close all;
clc;
%%('---------------------')
%%('Generating dummy data')
%%('---------------------')
d11 = [2;2]*ones(1,70)+2.*randn(2,70);
d12 = [-2;-2]*ones(1,30)+randn(2,30);
d1 = [d11,d12];
d21 = [3;-3]*ones(1,50)+randn([2,50]);
d22 = [-3;3]*ones(1,50)+randn([2,50]);
d2 = [d21,d22];
hw5_1 = d1;
hw5_2 = d2;
save hw5.mat hw5_1 hw5_2
x1 = hw5_1;
x2 = hw5_2;
% step 1: Construct training data matrix X=[x1,x2], dimension 2x200
training_data = [x1, x2];
% step 2: Construct target matrix T=[target1, target2], dimension 2x200
target1 = repmat([1; -1], 1, 100); % class 1
target2 = repmat([-1; 1], 1, 100); % class 2
T = [target1, target2];
% step 3: normalize training data
training_data = training_data - mean(training_data(:));
training_data = training_data / std(training_data(:));
% step 4: specify parameters
theta = 0.1; % criterion to stop
eta = 0.1; % step size
Nh = 10; % number of hidden nodes; the actual number of hidden nodes is 11 (including a bias)
Ni = 2; % dimension of input vector = number of input nodes; the actual number of input nodes is 3 (including a bias)
No = 2; % number of classes = number of output nodes
% step 5: Initialize the weights
a = -1/sqrt(No);
b = +1/sqrt(No);
inputLayerToHiddenLayerWeight = (b-a).*rand(Ni, Nh) + a
hiddenLayerToOutputLayerWeight = (b-a).*rand(Nh, No) + a
J = inf;
p = 1;
% activation function
% f(net) = a*tanh(b*net),
% f'(net) = a*b*sech^2(b*net)
a = 1.716;
b = 2/3;
while J > theta
% step 6: randomly choose one training sample vector from X,
% together with its target vector
k = randi([1, size(training_data, 2)]);
input_X = training_data(:,k);
input_T = T(:,k);
% step 7: Calculate net_j values for hidden nodes in layer 1
% hidden layer output before activation function applied
netj = inputLayerToHiddenLayerWeight' * input_X;
% step 8: Calculate hidden node output Y using activation function
% apply activation function to hidden layer neurons
Y = a*tanh(b*netj);
% step 9: Calculate net_k values for output nodes in layer 2
% output layer output before activation function applied
netk = hiddenLayerToOutputLayerWeight' * Y;
% step 10: Calculate output node output Z using the activation function
% apply activation function to the output layer neurons
Z = a*tanh(b*netk);
% step 11: Calculate sensitivity delta_k = (target - Z) * f'(Z)
% find the error between the expected_output and the neuron output
% we got using the weights
% delta_k = (expected - output) * activation(output)
delta_k = [];
for i=1:size(Z)
yi = Z(i,:);
expected_output = input_T(i,:);
delta_k = [delta_k; (expected_output - yi) ...
* a*b*(sech(b*yi)).^2];
end
% step 12: Calculate sensitivity
% delta_j = Sum_k(delta_k * hidden-to-out weights) * f'(net_j)
% error = (weight_k * error_j) * activation(output)
delta_j = [];
for j=1:size(Y)
yi = Y(j,:);
error = 0;
for k=1:size(delta_k)
error = error + delta_k(k,:)*hiddenLayerToOutputLayerWeight(j, k);
end
delta_j = [delta_j; error * (a*b*(sech(b*yi)).^2)];
end
% step 13: update weights
%2x10
inputLayerToHiddenLayerWeight = [];
for i=1:size(input_X)
xi = input_X(i,:);
wji = [];
for j=1:size(delta_j)
wji = [wji, eta * xi * delta_j(j,:)];
end
inputLayerToHiddenLayerWeight = [inputLayerToHiddenLayerWeight; wji];
end
inputLayerToHiddenLayerWeight
%10x2
hiddenLayerToOutputLayerWeight = [];
for j=1:size(Y)
yi = Y(j,:);
wjk = [];
for k=1:size(delta_k)
wjk = [wjk, eta * delta_k(k,:) * yi];
end
hiddenLayerToOutputLayerWeight = [hiddenLayerToOutputLayerWeight; wjk];
end
hiddenLayerToOutputLayerWeight
% Mean Square Error
J = 0;
for j=1:size(training_data, 2)
X = training_data(:,j);
t = T(:,j);
netj = inputLayerToHiddenLayerWeight' * X;
Y = a*tanh(b*netj);
netk = hiddenLayerToOutputLayerWeight' * Y;
Z = a*tanh(b*netk);
J = J + immse(t, Z);
end
J = J/size(training_data, 2)
p = p + 1;
if p == 4
break;
end
end
% testing neural network using the inputs
test_data = [[2; -2], [-3; -3], [-2; 5], [3; -4]];
for i=1:size(test_data, 2)
end
Weight decay isn't essential for Neural Network training.
What I did notice was that your feature normalization wasn't correct.
The correct algorithm for scaling data to the range of 0 to 1 is
(x - min) / (max - min)
Note: you apply this to every element of the array (or vector). Data inputs for a NN need to be within the range [0,1]. (Technically they can be a little outside of that, roughly [-3,3], but values further from 0 make training difficult.)
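A minimal MATLAB sketch of this per-feature min-max scaling, assuming a data matrix X laid out like training_data above (one sample per column; Xscaled is a name introduced here for illustration):
% scale every feature (row of X) to the range [0,1]
Xmin = min(X, [], 2);                                             % per-feature minimum
Xmax = max(X, [], 2);                                             % per-feature maximum
Xscaled = bsxfun(@rdivide, bsxfun(@minus, X, Xmin), Xmax - Xmin); % (x - min) / (max - min)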
EDIT:
I am unaware of this activation function
a = 1.716;
b = 2/3;
% f(net) = a*tanh(b*net),
% f'(net) = a*b*sech^2(b*net)
It seems like a variation on tanh.
Could you elaborate on what it is?
If your net still doesn't work, give me an update and I'll look at your code more closely.

Return vector from ODE45 has shorter length than initial conditions vector, while trying to solve coupled diffEQ

I am trying to use ODE45 to find the solution for 2 rotating bars, rotating in a vertical plane, that have a torsional spring which creates a moment on the bars only when the angle between them differs from 90 degrees. I am just using a1-b4 as the constants in the differential equations and am putting their values into a vector before sending it into the function. I keep getting back an error saying that I am sending 6 initial conditions but only getting 5 back from the ODE45 function. Any ideas on how to fix this?
%system1.m
function [dx] = system1(t,x,parameters)
dx = zeros(4,1);
a1 = parameters(1);
a2 = parameters(2);
a3 = parameters(3);
a4 = parameters(4);
b1 = parameters(5);
b2 = parameters(6);
b3 = parameters(7);
b4 = parameters(8);
dx(1) = x(2); %dtheta1 = angular velocity1
dx(2) = x(3); %d(angular velocity1) = angular acceleration1
dx(4) = x(5); %dtheta2 = angular velocity2
dx(5) = x(6); %d(angular velocity2) = angular acceleration2
dx(2) = a1*x(1)+a2*x(4)+a3*x(2)+a4*x(5); %motion equation 1
dx(5) = b1*x(1)+b2*x(4)+b3*x(2)+b4*x(5); %motion equation 2
%CA2Lou.m
%set parameters
clear;
a1 = -12;
a2 = 12;
a3 = 0;
a4 = 0;
b1 = 4;
b2 = -4;
b3 = 0;
b4 = 0;
parameters = [a1 a2 a3 a4 b1 b2 b3 b4];
%set final time
tf = .5;
options = odeset('MaxStep',.05);
%setting initial conditions
InitialConditions = [90 0 0 0 0 0];
[t_sol,x_sol] = ode45(@system1,[0 tf],InitialConditions,[],parameters);
Your size and indexing for dx don't match x. You initialize dx to 4 elements, even though x has 6. Then you assign values to 4 indices of dx (specifically, [1 2 4 5]) which results in a new size for dx of 5 elements, still one less than the 6 it expects.
You probably need to initialize dx like so:
dx = zeros(6, 1);
Then, your first and second motion equations should probably (I'm guessing) be placed in indices 3 and 6:
dx(3) = a1*x(1)+a2*x(4)+a3*x(2)+a4*x(5); %motion equation 1
dx(6) = b1*x(1)+b2*x(4)+b3*x(2)+b4*x(5); %motion equation 2
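Putting those two fixes together, the derivative function might look like the sketch below (which indices hold the motion equations is the guess described above, so the physical meaning of x(3) and x(6) is an assumption):
function dx = system1(t,x,parameters)
% x is the 6-element state vector passed by ode45
dx = zeros(6,1);
a1 = parameters(1); a2 = parameters(2); a3 = parameters(3); a4 = parameters(4);
b1 = parameters(5); b2 = parameters(6); b3 = parameters(7); b4 = parameters(8);
dx(1) = x(2);                            %dtheta1 = angular velocity1
dx(2) = x(3);                            %d(angular velocity1) = angular acceleration1
dx(3) = a1*x(1)+a2*x(4)+a3*x(2)+a4*x(5); %motion equation 1
dx(4) = x(5);                            %dtheta2 = angular velocity2
dx(5) = x(6);                            %d(angular velocity2) = angular acceleration2
dx(6) = b1*x(1)+b2*x(4)+b3*x(2)+b4*x(5); %motion equation 2
end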

Application of Neural Network in MATLAB

I asked a question a few days ago, but I guess it was a little too complicated and I don't expect to get any answer.
My problem is that I need to use an ANN for classification. I've read that a much better cost function (or loss function, as some books call it) is the cross-entropy, that is J(w) = -1/m * sum_i( y_i*ln(h_w(x_i)) + (1-y_i)*ln(1 - h_w(x_i)) ), where i indexes the examples in the training matrix X. I tried to apply it in MATLAB but I find it really difficult. There are a couple of things I don't know:
should I sum the outputs over all training data (i = 1, ..., N, where N is the number of training examples)
is the gradient calculated correctly
is the numerical gradient (gradApprox) calculated correctly
I have the following MATLAB code. I realise I may be asking about something trivial, but I hope someone can give me some clues on how to find the problem. I suspect the problem is in the gradient calculation.
Many thanks.
Main script:
close all
clear all
L = @(x) (1 + exp(-x)).^(-1);
NN = @(x,theta) theta{2}*[ones(1,size(x,1));L(theta{1}*[ones(size(x,1),1) x]')];
% theta = [10 -30 -30];
x = [0 0; 0 1; 1 0; 1 1];
y = [0.9 0.1 0.1 0.1]';
theta0 = 2*rand(9,1)-1;
options = optimset('gradObj','on','Display','iter');
thetaVec = fminunc(@costFunction,theta0,options,x,y);
theta = cell(2,1);
theta{1} = reshape(thetaVec(1:6),[2 3]);
theta{2} = reshape(thetaVec(7:9),[1 3]);
NN(x,theta)'
Cost function:
function [jVal,gradVal,gradApprox] = costFunction(thetaVec,x,y)
persistent index;
% 1 x x
% 1 x x
% 1 x x
% x = 1 x x
% 1 x x
% 1 x x
% 1 x x
m = size(x,1);
if isempty(index) || index > size(x,1)
index = 1;
end
L = @(x) (1 + exp(-x)).^(-1);
NN = @(x,theta) theta{2}*[ones(1,size(x,1));L(theta{1}*[ones(size(x,1),1) x]')];
theta = cell(2,1);
theta{1} = reshape(thetaVec(1:6),[2 3]);
theta{2} = reshape(thetaVec(7:9),[1 3]);
Dew = cell(2,1);
DewApprox = cell(2,1);
% Forward propagation
a0 = x(index,:)';
z1 = theta{1}*[1;a0];
a1 = L(z1);
z2 = theta{2}*[1;a1];
a2 = L(z2);
% Back propagation
d2 = 1/m*(a2 - y(index))*L(z2)*(1-L(z2));
Dew{2} = [1;a1]*d2;
d1 = [1;a1].*(1 - [1;a1]).*theta{2}'*d2;
Dew{1} = [1;a0]*d1(2:end)';
% NNRes = NN(x,theta)';
% jVal = -1/m*sum(NNRes-y)*NNRes*(1-NNRes);
jVal = -1/m*(a2 - y(index))*a2*(1-a2);
gradVal = [Dew{1}(:);Dew{2}(:)];
gradApprox = CalcGradApprox(0.0001);
index = index + 1;
function output = CalcGradApprox(epsilon)
output = zeros(size(gradVal));
for n=1:length(thetaVec)
thetaVecMin = thetaVec;
thetaVecMax = thetaVec;
thetaVecMin(n) = thetaVec(n) - epsilon;
thetaVecMax(n) = thetaVec(n) + epsilon;
thetaMin = cell(2,1);
thetaMax = cell(2,1);
thetaMin{1} = reshape(thetaVecMin(1:6),[2 3]);
thetaMin{2} = reshape(thetaVecMin(7:9),[1 3]);
thetaMax{1} = reshape(thetaVecMax(1:6),[2 3]);
thetaMax{2} = reshape(thetaVecMax(7:9),[1 3]);
a2min = NN(x(index,:),thetaMin)';
a2max = NN(x(index,:),thetaMax)';
jValMin = -1/m*(a2min-y(index))*a2min*(1-a2min);
jValMax = -1/m*(a2max-y(index))*a2max*(1-a2max);
output(n) = (jValMax - jValMin)/2/epsilon;
end
end
end
EDIT:
Below I present the correct version of my costFunction for those who may be interested.
function [jVal,gradVal,gradApprox] = costFunction(thetaVec,x,y)
m = size(x,1);
L = @(x) (1 + exp(-x)).^(-1);
NN = @(x,theta) L(theta{2}*[ones(1,size(x,1));L(theta{1}*[ones(size(x,1),1) x]')]);
theta = cell(2,1);
theta{1} = reshape(thetaVec(1:6),[2 3]);
theta{2} = reshape(thetaVec(7:9),[1 3]);
Delta = cell(2,1);
Delta{1} = zeros(size(theta{1}));
Delta{2} = zeros(size(theta{2}));
D = cell(2,1);
D{1} = zeros(size(theta{1}));
D{2} = zeros(size(theta{2}));
jVal = 0;
for in = 1:size(x,1)
% Forward propagation
a1 = [1;x(in,:)']; % added bias to a0
z2 = theta{1}*a1;
a2 = [1;L(z2)]; % added bias to a1
z3 = theta{2}*a2;
a3 = L(z3);
% Back propagation
d3 = a3 - y(in);
d2 = theta{2}'*d3.*a2.*(1 - a2);
Delta{2} = Delta{2} + d3*a2';
Delta{1} = Delta{1} + d2(2:end)*a1';
jVal = jVal + sum( y(in)*log(a3) + (1-y(in))*log(1-a3) );
end
D{1} = 1/m*Delta{1};
D{2} = 1/m*Delta{2};
jVal = -1/m*jVal;
gradVal = [D{1}(:);D{2}(:)];
gradApprox = CalcGradApprox(x(in,:),0.0001);
% Nested function to calculate gradApprox
function output = CalcGradApprox(x,epsilon)
output = zeros(size(thetaVec));
for n=1:length(thetaVec)
thetaVecMin = thetaVec;
thetaVecMax = thetaVec;
thetaVecMin(n) = thetaVec(n) - epsilon;
thetaVecMax(n) = thetaVec(n) + epsilon;
thetaMin = cell(2,1);
thetaMax = cell(2,1);
thetaMin{1} = reshape(thetaVecMin(1:6),[2 3]);
thetaMin{2} = reshape(thetaVecMin(7:9),[1 3]);
thetaMax{1} = reshape(thetaVecMax(1:6),[2 3]);
thetaMax{2} = reshape(thetaVecMax(7:9),[1 3]);
a3min = NN(x,thetaMin)';
a3max = NN(x,thetaMax)';
jValMin = 0;
jValMax = 0;
for inn=1:size(x,1)
jValMin = jValMin + sum( y(inn)*log(a3min) + (1-y(inn))*log(1-a3min) );
jValMax = jValMax + sum( y(inn)*log(a3max) + (1-y(inn))*log(1-a3max) );
end
jValMin = 1/m*jValMin;
jValMax = 1/m*jValMax;
output(n) = (jValMax - jValMin)/2/epsilon;
end
end
end
I've only had a quick eyeball over your code. Here are some pointers.
Q1
should I sum each outputs given all training data (i = 1, ... N, where
N is number of inputs for training)
If you are talking in relation to the cost function, it is normal to sum and normalise by the number of training examples in order to allow comparison between datasets of different sizes.
I can't tell from the code whether you have a vectorised implementation, which will change the answer. Note that the sum function will only sum a single dimension at a time, meaning that if you have an (M by N) array, sum will produce a 1 by N array.
The cost function should have a scalar output.
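As an illustration of that point, a minimal sketch of a scalar cross-entropy cost summed over all m training examples and normalised by m, assuming a column vector h of network outputs in (0,1) (one entry per example, e.g. h = NN(x,theta)' with a sigmoid on the output layer) and the labels y from the main script:
m = numel(y);
% scalar cross-entropy cost over all examples
jVal = -1/m * sum( y.*log(h) + (1-y).*log(1-h) );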
Q2
is the gradient calculated correctly
The gradient is not calculated correctly - specifically, the deltas look wrong. Try following Andrew Ng's notes [PDF]; they are very good.
Q3
is the numerical gradient (gradAapprox) calculated correctly.
This line looks a bit suspect. Does this make more sense?
output(n) = (jValMax - jValMin)/(2*epsilon);
EDIT: I actually can't make heads or tails of your gradient approximation. You should only use forward propagation and small tweaks in the parameters to compute the gradient. Good luck!
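For completeness, a central-difference gradient check only needs a handle to a scalar cost and a parameter vector; a minimal, generic sketch (the name numericalGradient and the costFun handle are hypothetical, e.g. an anonymous wrapper around costFunction that returns only jVal):
function gradApprox = numericalGradient(costFun, thetaVec, epsilon)
% central-difference approximation of d(cost)/d(theta), one parameter at a time
gradApprox = zeros(size(thetaVec));
for n = 1:numel(thetaVec)
    thetaPlus = thetaVec;   thetaPlus(n) = thetaVec(n) + epsilon;
    thetaMinus = thetaVec;  thetaMinus(n) = thetaVec(n) - epsilon;
    gradApprox(n) = (costFun(thetaPlus) - costFun(thetaMinus)) / (2*epsilon);
end
end
The result can then be compared element by element against gradVal from costFunction.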