Neural network Cost function in Andrew Ng's Lecture - matlab

I am having trouble with my code that is meant to compute the cost function for my neural network. The cost function (J) is defined as in the assignment. Given sample inputs, my implementation returns a negative value that is about five times smaller than the expected value. I have worked on this issue for a few hours but still cannot get the desired value.
Thanks in advance.
function [J grad] = nnCostFunction(nn_params, ...
input_layer_size, ...
hidden_layer_size, ...
num_labels, ...
X, y, lambda)
%NNCOSTFUNCTION Implements the neural network cost function for a two layer
%neural network which performs classification
% [J grad] = NNCOSTFUNCTON(nn_params, hidden_layer_size, num_labels, ...
% X, y, lambda) computes the cost and gradient of the neural network. The
% parameters for the neural network are "unrolled" into the vector
% nn_params and need to be converted back into the weight matrices.
%
% The returned parameter grad should be an "unrolled" vector of the
% partial derivatives of the neural network.
%
% Reshape nn_params back into the parameters Theta1 and Theta2, the weight matrices
% for our 2 layer neural network
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
num_labels, (hidden_layer_size + 1));
% Setup some useful variables
m = size(X, 1);
% You need to return the following variables correctly
J = 0;
Theta1_grad = zeros(size(Theta1));
Theta2_grad = zeros(size(Theta2));
% ====================== YOUR CODE HERE ======================
% Instructions: You should complete the code by working through the
% following parts.
%
% Part 1: Feedforward the neural network and return the cost in the variable J.
one=ones(1,10);
temp=one;
one=transpose(temp);
sizeTheta1=size(Theta1);
sizeTheta2=size(Theta2);
for i=1:m
if y(i)==1
yVec=[1,0,0,0,0,0,0,0,0,0];
end
if y(i)==2
yVec=[0,1,0,0,0,0,0,0,0,0];
end
if y(i)==3
yVec=[0,0,1,0,0,0,0,0,0,0];
end
if y(i)==4
yVec=[0,0,0,1,0,0,0,0,0,0];
end
if y(i)==5
yVec=[0,0,0,0,1,0,0,0,0,0];
end
if y(i)==6
yVec=[0,0,0,0,0,1,0,0,0,0];
end
if y(i)==7
yVec=[1,0,0,0,0,0,1,0,0,0];
end
if y(i)==8
yVec=[1,0,0,0,0,0,0,1,0,0];
end
if y(i)==9
yVec=[0,0,0,0,0,0,0,0,1,0];
end
if y(i)==10
yVec=[0,0,0,0,0,0,0,0,0,1];
end
xVec=transpose(X(i,:));
term1=transpose(-yVec).*(log(sigmoid(Theta2(1:10,1:sizeTheta2(2)-1))*(sigmoid(Theta1(1:25,1:sizeTheta1(2)-1)*xVec))));
term2=(one-transpose(yVec)).*(log(one-(sigmoid(Theta2(1:10,1:sizeTheta2(2)-1)*(sigmoid(Theta1(1:25,1:sizeTheta1(2)-1)*xVec))))));
J=J+(term1-term2);
end
regTheta1=0;
regTheta2=0;
J=sum(sum(J))*(1/m);
regTheta1=(sum(sum(Theta1.*Theta1)));
regTheta2=(sum(sum(Theta2.*Theta2)));
J=J+((lambda)*(regTheta1+regTheta2))/(2*m);
% -------------------------------------------------------------
% =========================================================================
% Unroll gradients
grad = [Theta1_grad(:) ; Theta2_grad(:)];
end

Related

Octave Treats a 5000*10 Matrix As a 16*4 Matrix

I'm trying to use Octave to submit an assignment written in MATLAB. The h_theta2 matrix is a 5000*10 matrix in MATLAB (please see the attached screenshot), and the code works fine in MATLAB. But when I try to submit the assignment in Octave it returns the following error:
Submission failed: operator -: nonconformant arguments (op1 is 16x4, op2 is 5000x10)
LineNumber: 98 (which refers to delta3=h_theta2-y_2 in the attached screenshot).
This (I'm guessing) means that Octave is treating h_theta2 as a 16*4 matrix.
The code is supposed to estimate the cost function and gradient of a neural network. X, y, Theta1 and Theta2 are given in the assignment.
function [J grad] = nnCostFunction(nn_params, ...
input_layer_size, ...
hidden_layer_size, ...
num_labels, ...
X, y, lambda)
NNCOSTFUNCTION Implements the neural network cost function for a two-layer neural network which performs classification.
[J grad] = NNCOSTFUNCTON(nn_params, hidden_layer_size, num_labels, ..., X, y, lambda) computes the cost and gradient of the neural network. The parameters for the neural network are "unrolled" into the vector nn_params and need to be converted back into the weight matrices.
The returned parameter grad should be an "unrolled" vector of the partial derivatives of the neural network.
Reshape nn_params back into the parameters Theta1 and Theta2, the weight matrices for our 2-layer neural network:
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
num_labels, (hidden_layer_size + 1));
m = size(X, 1);
I need to return the following variables correctly:
J = 0;
Theta1_grad = zeros(size(Theta1));
Theta2_grad = zeros(size(Theta2));
The sigmoid function is defined in another file and is called here to calculate h_theta1 and h_theta2.
%Sigmoid function:
function g = sigmoid(z)
%SIGMOID Compute sigmoid function
% J = SIGMOID(z) computes the sigmoid of z.
g = 1.0 ./ (1.0 + exp(-z));
end
Feedforward the neural network and return the cost in the variable J:
X = [ones(m, 1) X];
h_theta1=sigmoid(X*Theta1');
h_theta1=[ones(m,1) h_theta1];
h_theta2=sigmoid(h_theta1*Theta2');
y_2=zeros(5000,10);
for k=1:10
condition=y(:,1)==k;
y_2(condition,k)=1;
end
for i=1:m
for k=1:num_labels
e(i,k)=-y_2(i,k)'*log(h_theta2(i,k))-(1-y_2(i,k)')*log(1-h_theta2(i,k));
end
end
J=(1/m)*sum(e);
J=sum(J);
Theta_1=Theta1;
Theta_2=Theta2;
Theta_1(:,1)=[];
Theta_2(:,1)=[];
%Regularized cost function:
J=J+(lambda/(2*m))*(sum(sum(Theta_1.*Theta_1))+sum(sum(Theta_2.*Theta_2)));
%Gradient calculation
delta3=h_theta2-y_2;
delta2=(delta3*Theta2).*h_theta1.*(1-h_theta1);
Theta2_grad=Theta2_grad+delta3'*h_theta1;
Theta2_grad=(1/m)*Theta2_grad;
delta_2=delta2;
delta_2(:,1)=[];
Theta1_grad=Theta1_grad+delta_2'*X;
Theta1_grad=(1/m)*Theta1_grad;
I then submit the above code using a submit() function in Octave. The code works for J calculation but then gives the following error:
octave:80> submit()
== Submitting solutions | Neural Networks Learning...
Use token from last successful submission? (Y/n): Y
!! Submission failed: operator -: nonconformant arguments
(op1 is 16x4, op2 is 5000x10)
Function: nnCostFunction
LineNumber: 98
Please correct your code and resubmit.
Any help would be much appreciated.
I figured out where the problem was. The grader tests my answer with a completely different dataset, and I had created y_2 with fixed dimensions. What I should have done instead was to create y_2 as follows:
y_2=zeros(m,num_labels);
for k=1:num_labels
condition=y(:,1)==k;
y_2(condition,k)=1;
end
This makes the code work for any value of m and num_labels.
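As a side note (not part of the original fix), the same one-hot matrix can be built without the loop by indexing into an identity matrix. A minimal sketch, assuming y is a column vector of integer labels from 1 to num_labels:
I = eye(num_labels);   % identity matrix, one row per class
y_2 = I(y, :);         % row i is the one-hot encoding of label y(i)
Like the loop above, this keeps the dimensions tied to m and num_labels rather than hard-coding 5000 and 10.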

Cost function computation for neural network

I am in week 5 of Andrew Ng's Machine Learning Course on Coursera. I am working through the programming assignment in Matlab for this week, and I chose to use a for loop implementation to compute the cost J. Here is my function.
function [J grad] = nnCostFunction(nn_params, ...
input_layer_size, ...
hidden_layer_size, ...
num_labels, ...
X, y, lambda)
%NNCOSTFUNCTION Implements the neural network cost function for a two layer
%neural network which performs classification
% [J grad] = NNCOSTFUNCTON(nn_params, hidden_layer_size, num_labels, ...
% X, y, lambda) computes the cost and gradient of the neural network. The
% parameters for the neural network are "unrolled" into the vector
% nn_params and need to be converted back into the weight matrices.
% Reshape nn_params back into the parameters Theta1 and Theta2, the weight matrices
% for our 2 layer neural network
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
num_labels, (hidden_layer_size + 1));
% Setup some useful variables
m = size(X, 1);
% add bias to X to create 5000x401 matrix
X = [ones(m, 1) X];
% You need to return the following variables correctly
J = 0;
Theta1_grad = zeros(size(Theta1));
Theta2_grad = zeros(size(Theta2));
% initialize summing terms used in cost expression
sum_i = 0.0;
% loop through each sample to calculate the cost
for i = 1:m
% logical vector output for 1 example
y_i = zeros(num_labels, 1);
class = y(m);
y_i(class) = 1;
% first layer just equals features in one example 1x401
a1 = X(i, :);
% compute z2, a 25x1 vector
z2 = Theta1*a1';
% compute activation of z2
a2 = sigmoid(z2);
% add bias to a2 to create a 26x1 vector
a2 = [1; a2];
% compute z3, a 10x1 vector
z3 = Theta2*a2;
%compute activation of z3. returns output vector of size 10x1
a3 = sigmoid(z3);
h = a3;
% loop through each class k to sum cost over each class
for k = 1:num_labels
% sum_i returns cost summed over each class
sum_i = sum_i + ((-1*y_i(k) * log(h(k))) - ((1 - y_i(k)) * log(1 - h(k))));
end
end
J = sum_i/m;
I understand that a vectorized implementation of this would be easier, but I do not understand why this implementation is wrong. When num_labels = 10, this function outputs J = 8.47, but the expected cost is 0.287629. I computed J from this formula. Am I misunderstanding the computation? My understanding is that each training example's cost for each of the 10 classes is computed, and then the costs for all 10 classes of each example are summed together. Is that incorrect? Or did I not implement this properly in my code? Thanks in advance.
The problem is in the formula you are implementing.
The expression ((-1*y_i(k) * log(h(k))) - ((1 - y_i(k)) * log(1 - h(k)))) represents the loss for binary classification, because there you simply have 2 classes, so either:
y_i is 0, so (1 - y_i) = 1
y_i is 1, so (1 - y_i) = 0
so you basically take into account only the probability of the target class.
However, in the case of 10 labels as you mention, it is not necessarily true that one of (y_i) and (1 - y_i) is 0 and the other is 1.
You should correct the loss function implementation so that it takes into account the probability of the target class only, not all the other classes.
My problem was with indexing. Rather than class = y(m), it should be class = y(i), since i is the loop index and m is 5000, the number of rows in the training data.
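For completeness, a minimal sketch of that fix inside the loop, using the question's own variable names:
% one-hot target for the i-th example: index y with the loop variable i, not m
y_i = zeros(num_labels, 1);
class = y(i);      % label of the i-th training example
y_i(class) = 1;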

MATLAB: vectorised backpropagation (no loop over training examples)

In MATLAB/Octave, how do I implement backpropagation without any loops over the training examples?
This answer talks about the theory of parallelism, but how would this be implemented in actual Octave code?
For me, the final piece of the puzzle came from computing the sum of outer products.
Here is what I came up with:
% X is a {# of training examples} x {# of features} matrix
% Y is a {# of training examples} x {# of output neurons} matrix
% Theta is a cell matrix containing Theta{1}...Theta{n}
% Number of training examples
m = size(X, 1);
% Get h(X) and z (non-activated output of all neurons in network)
[hX, z, activation] = predict(Theta, X);
% Get error of output layer
layers = 1 + length(Theta);
d{layers} = hX - Y;
% Propagate errors backwards through hidden layers
for layer = layers-1 : -1 : 2
d{layer} = d{layer+1} * Theta{layer};
d{layer} = d{layer}(:, 2:end); % Remove "error" for constant bias term
d{layer} .*= sigmoidGradient(z{layer});
end
% Calculate Theta gradients
for l = 1:layers-1
Theta_grad{l} = zeros(size(Theta{l}));
% Sum of outer products
Theta_grad{l} += d{l+1}' * [ones(m,1) activation{l}];
% Add regularisation term
Theta_grad{l}(:, 2:end) += lambda * Theta{l}(:, 2:end);
Theta_grad{l} /= m;
end
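If the optimiser expects the gradients in unrolled form, as nnCostFunction returns [Theta1_grad(:) ; Theta2_grad(:)] elsewhere on this page, a minimal sketch (not part of the original answer) using the cell array above would be:
% Unroll every gradient matrix, column-wise, into a single vector
grad = [];
for l = 1:layers-1
    grad = [grad; Theta_grad{l}(:)];
end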

Matlab Regularized Logistic Regression - how to compute gradient

I am currently taking Machine Learning on the Coursera platform and I am trying to implement Logistic Regression. To implement Logistic Regression, I am using gradient descent to minimize the cost function and I am to write a function called costFunctionReg.m that returns both the cost and the gradient of each parameter evaluated at the current set of parameters.
The problem is better described below:
My cost function is working, but the gradient function is not. Please note that I would prefer to implement this with loops, rather than with vectorized operations.
I am computing theta[0] (in MATLAB, theta(1)) separately as it is not being regularized, i.e. we do not use the first term (with lambda).
function [J, grad] = costFunctionReg(theta, X, y, lambda)
%COSTFUNCTIONREG Compute cost and gradient for logistic regression with regularization
% J = COSTFUNCTIONREG(theta, X, y, lambda) computes the cost of using
% theta as the parameter for regularized logistic regression and the
% gradient of the cost w.r.t. to the parameters.
% Initialize some useful values
m = length(y); % number of training examples
n = length(theta); %number of parameters (features)
% You need to return the following variables correctly
J = 0;
grad = zeros(size(theta));
% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta.
% You should set J to the cost.
% Compute the partial derivatives and set grad to the partial
% derivatives of the cost w.r.t. each parameter in theta
% ----------------------1. Compute the cost-------------------
%hypothesis
h = sigmoid(X * theta);
for i = 1 : m
% The cost for the ith term before regularization
J = J - ( y(i) * log(h(i)) ) - ( (1 - y(i)) * log(1 - h(i)) );
% Adding regularization term
for j = 2 : n
J = J + (lambda / (2*m) ) * ( theta(j) )^2;
end
end
J = J/m;
% ----------------------2. Compute the gradients-------------------
%not regularizing theta[0] i.e. theta(1) in matlab
j = 1;
for i = 1 : m
grad(j) = grad(j) + ( h(i) - y(i) ) * X(i,j);
end
for j = 2 : n
for i = 1 : m
grad(j) = grad(j) + ( h(i) - y(i) ) * X(i,j) + lambda * theta(j);
end
end
grad = (1/m) * grad;
% =============================================================
end
What am I doing wrong?
The way you are applying regularization is incorrect. You should add the regularization term after you sum over all training examples, but instead you are adding it for each example inside the loop. If you leave your code as it is, you inadvertently make the gradient step larger and will eventually overshoot the solution. This overshooting accumulates and will inevitably give you a gradient vector of Inf or -Inf for all components (except for the bias term).
Simply put, place your lambda * theta(j) statement after the inner for loop (over i) terminates:
for j = 2 : n
for i = 1 : m
grad(j) = grad(j) + ( h(i) - y(i) ) * X(i,j); % Change
end
grad(j) = grad(j) + lambda * theta(j); % Change
end
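If you later want to drop the loops entirely, here is a minimal vectorized sketch of the same regularized gradient, assuming the same X, y, theta, lambda, m and sigmoid as above:
h = sigmoid(X * theta);                                    % m x 1 vector of predictions
grad = (1/m) * (X' * (h - y));                             % unregularized gradient for every theta(j)
grad(2:end) = grad(2:end) + (lambda/m) * theta(2:end);     % regularize all parameters except theta(1)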

Implementing a neural network to figure out its cost

Cost function
I am trying to code the above expression in Matlab. Unfortunately I seem to be getting a cost of 10.441460 instead of 0.287629 so I'm out by a factor of over 36!
As for each of the symbols:
m is the number of training examples. [a scalar number]
K is the number of output nodes. [a scalar number]
y is the vector of training outputs [an m by 1 vector]
y^{(i)}_{k} is the ith training output (target) for the kth output node. [a scalar number]
x^{(i)} is the ith training input. [a column vector for all the input nodes]
h_{\theta}(x^{(i)})_{k} is the value of the hypothesis at output k, with weights theta, and training input i. [a scalar number]
note: h_{\theta}(x^{(i)}) will be a column vector with K rows.
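Putting those symbols together, the expression in question (the unregularized cost over all m examples and K output nodes) is presumably:
J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K} \Big[ -\,y^{(i)}_{k} \log\big(h_{\theta}(x^{(i)})_{k}\big) - \big(1 - y^{(i)}_{k}\big) \log\big(1 - h_{\theta}(x^{(i)})_{k}\big) \Big]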
My attempt for the cost function:
Theta1 = [ones(1,size(Theta1,2));Theta1];
X = [ones(m,1) , X]; %Add a column of 1's to X
R=zeros(m,1);
for i = 1:m
a = y(i) == [10 1:9];
R(i) = -(a*(log(sigmoid(Theta2*(sigmoid(Theta1*X(i,:)'))))) + (1-a)*(log(1-sigmoid(Theta2*(sigmoid(Theta1*X(i,:)'))))))/m;
end
J = sum(R);
This will probably be useful for reference:
function [J grad] = nnCostFunction(nn_params, ...
input_layer_size, ...
hidden_layer_size, ...
num_labels, ...
X, y, lambda)
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
num_labels, (hidden_layer_size + 1));
% Setup some useful variables
m = size(X, 1);
% ====================== YOUR CODE HERE ======================
% Instructions: You should complete the code by working through the
% following parts.
%
% Part 1: Feedforward the neural network and return the cost in the
% variable J. After implementing Part 1, you can verify that your
% cost function computation is correct by verifying the cost
% computed in ex4.m
%
% =========================================================================
end