Multidimensional Array Optimization - MATLAB

Is it possible to further optimise the following script by adopting meshgrid, reshape, linear indexing, and/or any other good optimisation techniques?
clear all
% tic;
N1 = 5000;
Nt = 2500;
u = randn(Nt,2);
A = zeros(2,2,N1);
B = zeros(2,2,N1);
C = zeros(2,2,N1);
D = zeros(2,2,N1);
X = zeros(2,1,N1);
Y = zeros(2,Nt,N1);
U = zeros(2,Nt,N1);
tic;
parfor i=1:N1
A(:,:,i) = [unifrnd(0.25,0.75),unifrnd(0.15,0.45);unifrnd(0.4,1.2),unifrnd(0.25,0.75)];
B(:,:,i) = [unifrnd(1,3),unifrnd(2.5,7.5);unifrnd(-22.5,-7.5),unifrnd(-1.95,-0.65)];
C(:,:,i) = [unifrnd(-1,1),unifrnd(-19.5,-6.5);unifrnd(0.5,1.5),unifrnd(-22.5,-7.5)];
D(:,:,i) = [unifrnd(0.1,0.3),unifrnd(-1,1);unifrnd(1,3),unifrnd(-1,1)];
U(:,:,i) = u';
X(:,:,i) = zeros(2,1);
Y(:,:,i) = zeros(2,Nt);
end
toc;
A1 = gpuArray(A);
B1 = gpuArray(B);
C1 = gpuArray(C);
D1 = gpuArray(D);
X3 = gpuArray(X);
U3 = gpuArray(U);
AX = gpuArray(zeros(2,Nt,N1));
X3update = zeros(2,Nt,N1);
X3update = gpuArray(X3update);
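% Copy everything to the GPU once, up front, so the timed section below
% measures only the simulation itself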
%%
tic;
DU = pagefun(@mtimes, D1, U3(:,1:Nt,:));
BU = pagefun(@mtimes, B1, U3(:,1:Nt,:));
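% March the state recursion one step at a time; pagefun applies each
% 2-by-2 multiply across all N1 systems simultaneously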
for j = 1:Nt
X3update = pagefun(@mtimes,A1,X3) + BU(:,j,:);
AX(:,j,:) = X3;
X3 = X3update;
end
CX = pagefun(@mtimes, C1, AX(:,1:Nt,:));
Y3 = CX + DU;
toc;
Apologies in advance, as this is quite a specific question. However, I am trying to increase the computational efficiency of state-space simulations, so it should be quite widely applicable.
Thanks!

I'm not familiar with the parallel toolbox, but couldn't you at least speed up the generation of the uniform random numbers with:
A = [unifrnd(0.25, 0.75, [1 1 N1]), unifrnd(0.15, 0.45, [1 1 N1]); unifrnd(0.4, 1.2, [1 1 N1]), unifrnd(0.25, 0.75, [1 1 N1])];
and so forth?
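Building on that idea, all N1 coefficient matrices can be drawn in one shot with no loop at all. A minimal sketch (the element-wise bounds lo/hi are names I introduce here, restating the limits from the question; implicit expansion needs R2016b+):
lo = [0.25 0.15; 0.4 0.25];         % element-wise lower bounds of A
hi = [0.75 0.45; 1.2 0.75];         % element-wise upper bounds of A
A  = lo + (hi - lo).*rand(2,2,N1);  % 2-by-2-by-N1, all pages drawn at once
B, C and D follow the same pattern with their own bounds, and U can be filled without the parfor via U = repmat(u.',[1 1 N1]).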

Related

1D finite element method in the Hermite basis (P3C1) - Problem of solution calculation

I am currently working on solving the problem $-\alpha u'' + \beta u = f$ with Neumann boundary conditions, using the finite element method in MATLAB.
I managed to set up a code that works for P1 and P2 Lagrange finite elements (i.e. linear and quadratic), and the results are good!
I am trying to implement the finite element method using the Hermite basis. This basis is defined by the following basis functions and derivatives:
syms x
phi(x) = [2*x^3-3*x^2+1,-2*x^3+3*x^2,x^3-2*x^2+x,x^3-x^2]
% Derivative
dphi = [6*x.^2-6*x,-6*x.^2+6*x,3*x^2-4*x+1,3*x^2-2*x]
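A quick sanity check (my addition, assuming the standard cubic Hermite conventions on [0,1]): each basis function should have unit value or unit slope at one endpoint and zero at the other:
phi(0), phi(1)      % expected: [1 0 0 0] and [0 1 0 0]
subs(dphi, x, 0)    % expected: [0 0 1 0]
subs(dphi, x, 1)    % expected: [0 0 0 1]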
The problem with the following code is that the solution vector u is not good. I know there must be a problem in the loop that computes the element matrices S and F, but I can't see where, even though I've been trying to fix it for a week.
Can you give me your opinion? Hopefully someone can see my error.
Thanks a lot,
% -alpha*u'' + beta*u = f
% u'(a) = bd1, u'(b) = bd2;
a = 0;
b = 1;
f = @(x) 1;
alpha = 1;
beta = 1;
% Neumann boundary conditions
bn1 = 1;
bn2 = 0;
syms ue(x)
DE = -alpha*diff(ue,x,2) + beta*ue == f;
du = diff(ue,x);
BC = [du(a)==bn1, du(b)==bn2];
ue = dsolve(DE, BC);
figure
fplot(ue,[a,b], 'r', 'LineWidth',2)
N = 2;
nnod = N*(2+2); % Number of nodes
neq = nnod*1; % Number of equations, one degree of freedom per node
xnod = linspace(a,b,nnod);
nodes = [(1:3:nnod-3)', (2:3:nnod-2)', (3:3:nnod-1)', (4:3:nnod)'];
phi = @(xi)[2*xi.^3-3*xi.^2+1,2*xi.^3+3*xi.^2,xi.^3-2*xi.^2+xi,xi.^3-xi.^2];
dphi = @(xi)[6*xi.^2-6*xi,-6*xi.^2+6*xi,3*xi^2-4*xi+1,3*xi^2-2*xi];
% Here, just calculate the integral using Gauss quadrature.
order = 5;
[gp, gw] = gauss(order, 0, 1);
S = zeros(neq,neq);
M = S;
F = zeros(neq,1);
for iel = 1:N
%disp(iel)
inod = nodes(iel,:);
xc = xnod(inod);
h = xc(end)-xc(1);
Se = zeros(4,4);
Me = Se;
fe = zeros(4,1);
for ig = 1:length(gp)
xi = gp(ig);
iw = gw(ig);
Se = Se + dphi(xi)'*dphi(xi)*1/h*1*iw;
Me = Me + phi(xi)'*phi(xi)*h*1*iw;
x = phi(xi)*xc';
fe = fe + phi(xi)' * f(x) * h * 1 * iw;
end
% Assembly
S(inod,inod) = S(inod, inod) + Se;
M(inod,inod) = M(inod, inod) + Me;
F(inod) = F(inod) + fe;
end
S = alpha*S + beta*M;
g = zeros(neq,1);
g(1) = -alpha*bn1;
g(end) = alpha*bn2;
alldofs = 1:neq;
u = zeros(neq,1); %Pre-allocate
F = F + g;
u(alldofs) = S(alldofs,alldofs)\F(alldofs)
Warning: Matrix is singular to working precision.
u = 8×1
NaN
NaN
NaN
NaN
NaN
NaN
NaN
NaN
figure
fplot(ue,[a,b], 'r', 'LineWidth',2)
hold on
plot(xnod, u, 'bo')
for iel = 1:N
inod = nodes(iel,:);
xc = xnod(inod);
U = u(inod);
xi = linspace(0,1,100)';
Ue = phi(xi)*U;
Xe = phi(xi)*xc';
plot(Xe,Ue,'b -')
end
% Gauss function to calculate the integral
function [x, w, A] = gauss(n, a, b)
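% Golub-Welsch: the nodes are the eigenvalues of the symmetric tridiagonal
% Jacobi matrix, and the weights come from the squared first components of
% its eigenvectors.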
n = 1:(n - 1);
beta = 1 ./ sqrt(4 - 1 ./ (n .* n));
J = diag(beta, 1) + diag(beta, -1);
[V, D] = eig(J);
x = diag(D);
A = b - a;
w = V(1, :) .* V(1, :);
w = w';
x=x';
end
You can find the same post on the MATLAB site, with syntax highlighting.
Thanks
I have tried reading course notes, searching through various documentation, and modifying my code, all without success.

Divide and Conquer SVD in MATLAB

I'm trying to implement Divide and Conquer SVD of an upper bidiagonal matrix B, but my code is not working. The error is:
"Unable to perform assignment because the size of the left side is
3-by-3 and the size of the right side is 2-by-2.
V_bar(1:k,1:k) = V1;"
Can somebody help me fix it? Thanks.
function [U,S,V] = DivideConquer_SVD(B)
[m,n] = size(B);
k = floor(m/2);
if k == 0
U = 1;
V = 1;
S = B;
return;
else
% Divide the input matrix
alpha = B(k,k);
beta = B(k,k+1);
e1 = zeros(m,1);
e2 = zeros(m,1);
e1(k) = 1;
e2(k+1) = 1;
B1 = B(1:k-1,1:k);
B2 = B(k+1:m,k+1:m);
%recursive computations
[U1,S1,V1] = DivideConquer_SVD(B1);
[U2,S2,V2] = DivideConquer_SVD(B2);
U_bar = zeros(m);
U_bar(1:k-1,1:k-1) = U1;
U_bar(k,k) = 1;
U_bar((k+1):m,(k+1):m) = U2;
D = zeros(m);
D(1:k-1,1:k) = S1;
D((k+1):m,(k+1):m) = S2;
V_bar = zeros(m);
V_bar(1:k,1:k) = V1;
V_bar((k+1):m,(k+1):m) = V2;
u = alpha*e1'*V_bar + beta*e2'*V_bar;
u = u';
D_tilde = D*D + u*u';
% compute eigenvalues and eigenvectors of D^2+uu'
[L1,Q1] = eig(D_tilde);
eigs = diag(L1);
S = zeros(m,n);
S(1:(m+1):end) = eigs;
U_tilde = Q1;
V_tilde = Q1;
%Compute eigenvectors of the original input matrix T
U = U_bar*U_tilde;
V = V_bar*V_tilde;
return;
end
With my limited mathematical knowledge, you'll need to help me a bit more, as I cannot judge whether the approach is mathematically correct (no theory is given ;) ). Anyway, I couldn't even reproduce the error, e.g. with this matrix, which The MathWorks uses to illustrate LU matrix factorization:
A = [10 -7 0
-3 2 6
5 -1 5];
So I tried to structure your code a bit and added some hints. Extend these to make your code clearer for people (like me) who are not too familiar with matrix decompositions.
function [U,S,V] = DivideConquer_SVD(B)
% m x n matrix
[m,n] = size(B);
k = floor(m/2);
if k == 0
disp('if') % for debugging
U = 1;
V = 1;
S = B;
% return; % not necessary as you don't do anything afterwards anyway
else
disp('else') % for debugging
% Divide the input matrix
alpha = B(k,k); % element on diagonal
beta = B(k,k+1); % element on off-diagonal
e1 = zeros(m,1);
e2 = zeros(m,1);
e1(k) = 1;
e2(k+1) = 1;
% divide matrix
B1 = B(1:k-1,1:k); % upper left quadrant
B2 = B(k+1:m,k+1:m); % lower right quadrant
% recursive function call
[U1,S1,V1] = DivideConquer_SVD(B1);
[U2,S2,V2] = DivideConquer_SVD(B2);
U_bar = zeros(m);
U_bar(1:k-1,1:k-1) = U1;
U_bar(k,k) = 1;
U_bar((k+1):m,(k+1):m) = U2;
D = zeros(m);
D(1:k-1,1:k) = S1;
D((k+1):m,(k+1):m) = S2;
V_bar = zeros(m);
V_bar(1:k,1:k) = V1;
V_bar((k+1):m,(k+1):m) = V2;
u = (alpha*e1.'*V_bar + beta*e2.'*V_bar).'; % (little tip: ' is the
% complex-conjugate transpose operator; .' is the plain transpose
% operator. It's good practice to distinguish between them, although
% there is no difference for real matrices)
D_tilde = D*D + u*u.';
% compute eigenvalues and eigenvectors of D^2+uu'
[L1,Q1] = eig(D_tilde);
eigs = diag(L1);
S = zeros(m,n);
S(1:(m+1):end) = eigs;
U_tilde = Q1;
V_tilde = Q1;
% Compute eigenvectors of the original input matrix T
U = U_bar*U_tilde;
V = V_bar*V_tilde;
% return; % not necessary as you don't do anything afterwards anyway
end % if
end % function
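One more observation: the input is supposed to be upper bidiagonal, and the LU example matrix above is not. A small test sketch with an actual upper bidiagonal matrix (my example, not from the question):
B = [1 2 0; 0 3 4; 0 0 5];  % upper bidiagonal
[U,S,V] = DivideConquer_SVD(B);
norm(U*S*V.' - B)           % should be near zero once the algorithm works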

Tangent-tangent correlation calculation in matrix form in MATLAB

Consider the following calculation of the tangent-tangent correlation, which is performed in a for loop:
v1=rand(25,1);
v2=rand(25,1);
n=25;
nSteps=10;
mean_theta = zeros(nSteps,1);
for j=1:nSteps
theta=[];
for i=1:(n-j)
d = dot([v1(i) v2(i)],[v1(i+j) v2(i+j)]);
n1 = norm([v1(i) v2(i)]);
n2 = norm([v1(i+j) v2(i+j)]);
theta = [theta acosd(d/n1/n2)];
end
mean_theta(j)=mean(theta);
end
plot(mean_theta)
How can MATLAB matrix operations be used to improve its performance?
There are several things you can do to speed up your code. First, always preallocate. This converts:
theta = [];
for i = 1:(n-j)
%...
theta = [theta acosd(d/n1/n2)];
end
into:
theta = zeros(1,n-j);
for i = 1:(n-j)
%...
theta(i) = acosd(d/n1/n2);
end
Next, move the normalization out of the loops. There is no need to normalize over and over again, just normalize the input:
v = [v1,v2];
v = v./sqrt(sum(v.^2,2)); % Can use VECNORM in newest MATLAB
%...
theta(i) = acosd(dot(v(i,:),v(i+j,:)));
This does change the output very slightly, within numerical precision, because the different order of operations leads to different floating-point rounding error.
Finally, you can remove the inner loop by vectorizing that computation:
i = 1:(n-j);
theta = acosd(dot(v(i,:),v(i+j,:),2));
Timings (n=25):
Original: 0.0019 s
Preallocate: 0.0013 s
Normalize once: 0.0011 s
Vectorize: 1.4176e-04 s
Timings (n=250):
Original: 0.0185 s
Preallocate: 0.0146 s
Normalize once: 0.0118 s
Vectorize: 2.5694e-04 s
Note how the vectorized code is the only one whose timing doesn't grow linearly with n.
Timing code:
function so
n = 25;
v1 = rand(n,1);
v2 = rand(n,1);
nSteps = 10;
mean_theta1 = method1(v1,v2,nSteps);
mean_theta2 = method2(v1,v2,nSteps);
fprintf('diff method1 vs method2: %g\n',max(abs(mean_theta1(:)-mean_theta2(:))));
mean_theta3 = method3(v1,v2,nSteps);
fprintf('diff method1 vs method3: %g\n',max(abs(mean_theta1(:)-mean_theta3(:))));
mean_theta4 = method4(v1,v2,nSteps);
fprintf('diff method1 vs method4: %g\n',max(abs(mean_theta1(:)-mean_theta4(:))));
timeit(@()method1(v1,v2,nSteps))
timeit(@()method2(v1,v2,nSteps))
timeit(@()method3(v1,v2,nSteps))
timeit(@()method4(v1,v2,nSteps))
function mean_theta = method1(v1,v2,nSteps)
n = numel(v1);
mean_theta = zeros(nSteps,1);
for j = 1:nSteps
theta=[];
for i=1:(n-j)
d = dot([v1(i) v2(i)],[v1(i+j) v2(i+j)]);
n1 = norm([v1(i) v2(i)]);
n2 = norm([v1(i+j) v2(i+j)]);
theta = [theta acosd(d/n1/n2)];
end
mean_theta(j) = mean(theta);
end
function mean_theta = method2(v1,v2,nSteps)
n = numel(v1);
mean_theta = zeros(nSteps,1);
for j = 1:nSteps
theta = zeros(1,n-j);
for i = 1:(n-j)
d = dot([v1(i) v2(i)],[v1(i+j) v2(i+j)]);
n1 = norm([v1(i) v2(i)]);
n2 = norm([v1(i+j) v2(i+j)]);
theta(i) = acosd(d/n1/n2);
end
mean_theta(j) = mean(theta);
end
function mean_theta = method3(v1,v2,nSteps)
v = [v1,v2];
v = v./sqrt(sum(v.^2,2)); % Can use VECNORM in newest MATLAB
n = numel(v1);
mean_theta = zeros(nSteps,1);
for j = 1:nSteps
theta = zeros(1,n-j);
for i = 1:(n-j)
theta(i) = acosd(dot(v(i,:),v(i+j,:)));
end
mean_theta(j) = mean(theta);
end
function mean_theta = method4(v1,v2,nSteps)
v = [v1,v2];
v = v./sqrt(sum(v.^2,2)); % Can use VECNORM in newest MATLAB
n = numel(v1);
mean_theta = zeros(nSteps,1);
for j = 1:nSteps
i = 1:(n-j);
theta = acosd(dot(v(i,:),v(i+j,:),2));
mean_theta(j) = mean(theta);
end
Here is a fully vectorized solution:
i = 1:n-1;                         % all starting indices
j = (1:nSteps).';                  % all step sizes (column, for implicit expansion)
ij = min(i+j,n);                   % clamp out-of-range indices; they are masked out below
a = cat(3, v1(i).', v2(i).');      % 1-by-(n-1)-by-2
b = cat(3, v1(ij), v2(ij));        % nSteps-by-(n-1)-by-2
d = sum(a .* b, 3);                % pairwise dot products
n1 = sum(a .^ 2, 3);               % squared norms of the first vectors
n2 = sum(b .^ 2, 3);               % squared norms of the second vectors
theta = acosd(d./sqrt(n1.*n2));
idx = (1:nSteps).' <= (n-1:-1:1);  % mask of valid (i,j) pairs, i.e. i+j <= n
mean_theta = sum(theta .* idx ,2) ./ sum(idx,2);
Here are the Octave timings for my method, method4 from the answer provided by @CrisLuengo, and the original method (n=250):
Full vectorized : 0.000864983 seconds
Method4(Vectorize) : 0.002774 seconds
Original(loop) : 0.340693 seconds

Finding the distribution of '1' in rows/columns from a parity-check matrix

My question this time concerns obtaining the degree distribution of an LDPC matrix through linear programming, under the following problem statement:
My code is the following:
function [v] = LP_Irr_LDPC(k,Ebn0)
options = optimoptions('fmincon','Display','iter','Algorithm','interior-point','MaxIter', 4000, 'MaxFunEvals', 70000);
fun = @(v) -sum(v(1:k)./(1:k));
A = [];
b = [];
Aeq = [0, ones(1,k-1)];
beq = 1;
lb = zeros(1,k);
ub = [0, ones(1,k-1)];
nonlcon = @(v)DensEv_SP(v,Ebn0);
l0 = [0 rand(1,k-1)];
l0 = l0./sum(l0);
v = fmincon(fun,l0,A,b,Aeq,beq,lb,ub,nonlcon,options)
end
Definition of nonlinear constraints:
function [c, ceq] = DensEv_SP(v,Ebn0)
% This function also needs to be modified, as you cannot pass parameters into it from other functions.
h = [0 rand(1,19)];
h = h./sum(h); % This is where h comes from
syms x;
X = x.^(0:(length(h)-1));
R = h*transpose(X);
ebn0 = 10^(Ebn0/10);
Rm = 1;
LLR = (-50:50);
p03 = 0.3;
LLR03 = log((1-p03)/p03);
r03 = 1 - p03;
noise03 = (2*r03*Rm*ebn0)^-1;
pf03 = normpdf(LLR, LLR03, noise03);
sumpf03 = sum(pf03(1:length(pf03)/2));
divisions = 100;
Aj = zeros(1, divisions);
rho = zeros(1, divisions);
xj = zeros(1, divisions);
k = 10; % Length(v) -> Same value as in 'Complete.m'
for j=1:1:divisions
xj(j) = sumpf03*j/divisions;
rho(j) = subs(R,x,1-xj(j));
Aj(j) = 1 - rho(j);
end
c = zeros(1, length(xj));
lambda = zeros(1, length(Aj));
for j = 1:1:length(xj)
lambda(j) = sum(v(2:k).*(Aj(j).^(1:(k-1))));
c(j) = sumpf03*lambda(j) - xj(j);
end
save Almacen
ceq = [];
%ceq = sum(v)-1;
end
This question is linked to the one posted here. My problem is that I need each element of the vectors v and h resulting from this optimization problem to be a fraction of the form x/N and x/(N(1-r)) respectively.
How could I ensure that condition without losing convergence capability?
I came up with one possible solution, at least for vector h, within the function DensEv_SP:
function [c, ceq] = DensEv_SP(v,Ebn0)
% This function also needs to be modified, as you cannot pass parameters into it from other functions.
k = 10; % Same as in Complete.m, desired sum of h
M = 19; % Number of integers
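% Stars and bars: the differences of M-1 sorted draws from 1..k+M-1 (padded
% with 0 and k+M), minus 1 each, give M non-negative integers summing to k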
h = [0 diff([0,sort(randperm(k+M-1,M-1)),k+M])-ones(1,M)];
h = h./sum(h);
syms x;
X = x.^(0:(length(h)-1));
R = h*transpose(X);
ebn0 = 10^(Ebn0/10);
Rm = 1;
LLR = (-50:50);
p03 = 0.3;
LLR03 = log((1-p03)/p03);
r03 = 1 - p03;
noise03 = (2*r03*Rm*ebn0)^-1;
pf03 = normpdf(LLR, LLR03, noise03);
sumpf03 = sum(pf03(1:length(pf03)/2));
divisions = 100;
Aj = zeros(1, divisions);
rho = zeros(1, divisions);
xj = zeros(1, divisions);
N = 20; % Length(v) -> Same value as in 'Complete.m'
for j=1:1:divisions
xj(j) = sumpf03*j/divisions;
rho(j) = subs(R,x,1-xj(j));
Aj(j) = 1 - rho(j);
end
c = zeros(1, length(xj));
lambda = zeros(1, length(Aj));
for j = 1:1:length(xj)
lambda(j) = sum(v(2:k).*(Aj(j).^(1:(k-1))));
c(j) = sumpf03*lambda(j) - xj(j);
end
save Almacen
ceq = (N*v)-floor(N*v);
%ceq = sum(v)-1;
end
As stated above, there is no longer any problem with vector h; nevertheless, the way I defined the ceq value seems insufficient to make the optimization work (the problems with v have not diminished at all). Does anybody know how to find the solution?

Application of Neural Network in MATLAB

I asked a question a few days ago, but I guess it was a little too complicated, and I don't expect to get any answer.
My problem is that I need to use an ANN for classification. I've read that a much better cost function (or loss function, as some books call it) is the cross-entropy, that is $J(w) = -\frac{1}{m} \sum_i \left[ y_i \ln h_w(x_i) + (1-y_i) \ln(1 - h_w(x_i)) \right]$, where i indexes the training examples from the training matrix X. I tried to implement it in MATLAB, but I'm finding it really difficult. There are a couple of things I don't know:
should I sum the outputs over all training data (i = 1, ..., N, where N is the number of training examples)?
is the gradient calculated correctly?
is the numerical gradient (gradApprox) calculated correctly?
I have the following MATLAB code. I realise I may be asking about something trivial, but I hope someone can give me some clues on how to find the problem. I suspect the problem is in the gradient calculation.
Many thanks.
Main script:
close all
clear all
L = @(x) (1 + exp(-x)).^(-1);
NN = @(x,theta) theta{2}*[ones(1,size(x,1));L(theta{1}*[ones(size(x,1),1) x]')];
% theta = [10 -30 -30];
x = [0 0; 0 1; 1 0; 1 1];
y = [0.9 0.1 0.1 0.1]';
theta0 = 2*rand(9,1)-1;
options = optimset('gradObj','on','Display','iter');
thetaVec = fminunc(@costFunction,theta0,options,x,y);
theta = cell(2,1);
theta{1} = reshape(thetaVec(1:6),[2 3]);
theta{2} = reshape(thetaVec(7:9),[1 3]);
NN(x,theta)'
Cost function:
function [jVal,gradVal,gradApprox] = costFunction(thetaVec,x,y)
persistent index;
% 1 x x
% 1 x x
% 1 x x
% x = 1 x x
% 1 x x
% 1 x x
% 1 x x
m = size(x,1);
if isempty(index) || index > size(x,1)
index = 1;
end
L = @(x) (1 + exp(-x)).^(-1);
NN = @(x,theta) theta{2}*[ones(1,size(x,1));L(theta{1}*[ones(size(x,1),1) x]')];
theta = cell(2,1);
theta{1} = reshape(thetaVec(1:6),[2 3]);
theta{2} = reshape(thetaVec(7:9),[1 3]);
Dew = cell(2,1);
DewApprox = cell(2,1);
% Forward propagation
a0 = x(index,:)';
z1 = theta{1}*[1;a0];
a1 = L(z1);
z2 = theta{2}*[1;a1];
a2 = L(z2);
% Back propagation
d2 = 1/m*(a2 - y(index))*L(z2)*(1-L(z2));
Dew{2} = [1;a1]*d2;
d1 = [1;a1].*(1 - [1;a1]).*theta{2}'*d2;
Dew{1} = [1;a0]*d1(2:end)';
% NNRes = NN(x,theta)';
% jVal = -1/m*sum(NNRes-y)*NNRes*(1-NNRes);
jVal = -1/m*(a2 - y(index))*a2*(1-a2);
gradVal = [Dew{1}(:);Dew{2}(:)];
gradApprox = CalcGradApprox(0.0001);
index = index + 1;
function output = CalcGradApprox(epsilon)
output = zeros(size(gradVal));
for n=1:length(thetaVec)
thetaVecMin = thetaVec;
thetaVecMax = thetaVec;
thetaVecMin(n) = thetaVec(n) - epsilon;
thetaVecMax(n) = thetaVec(n) + epsilon;
thetaMin = cell(2,1);
thetaMax = cell(2,1);
thetaMin{1} = reshape(thetaVecMin(1:6),[2 3]);
thetaMin{2} = reshape(thetaVecMin(7:9),[1 3]);
thetaMax{1} = reshape(thetaVecMax(1:6),[2 3]);
thetaMax{2} = reshape(thetaVecMax(7:9),[1 3]);
a2min = NN(x(index,:),thetaMin)';
a2max = NN(x(index,:),thetaMax)';
jValMin = -1/m*(a2min-y(index))*a2min*(1-a2min);
jValMax = -1/m*(a2max-y(index))*a2max*(1-a2max);
output(n) = (jValMax - jValMin)/2/epsilon;
end
end
end
EDIT:
Below I present the correct version of my costFunction for those who may be interested.
function [jVal,gradVal,gradApprox] = costFunction(thetaVec,x,y)
m = size(x,1);
L = @(x) (1 + exp(-x)).^(-1);
NN = @(x,theta) L(theta{2}*[ones(1,size(x,1));L(theta{1}*[ones(size(x,1),1) x]')]);
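% Note the outer L(...): unlike the NN handle in the original script, the
% output layer now passes through the sigmoid as well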
theta = cell(2,1);
theta{1} = reshape(thetaVec(1:6),[2 3]);
theta{2} = reshape(thetaVec(7:9),[1 3]);
Delta = cell(2,1);
Delta{1} = zeros(size(theta{1}));
Delta{2} = zeros(size(theta{2}));
D = cell(2,1);
D{1} = zeros(size(theta{1}));
D{2} = zeros(size(theta{2}));
jVal = 0;
for in = 1:size(x,1)
% Forward propagation
a1 = [1;x(in,:)']; % added bias to a0
z2 = theta{1}*a1;
a2 = [1;L(z2)]; % added bias to a1
z3 = theta{2}*a2;
a3 = L(z3);
% Back propagation
d3 = a3 - y(in);
d2 = theta{2}'*d3.*a2.*(1 - a2);
Delta{2} = Delta{2} + d3*a2';
Delta{1} = Delta{1} + d2(2:end)*a1';
jVal = jVal + sum( y(in)*log(a3) + (1-y(in))*log(1-a3) );
end
D{1} = 1/m*Delta{1};
D{2} = 1/m*Delta{2};
jVal = -1/m*jVal;
gradVal = [D{1}(:);D{2}(:)];
gradApprox = CalcGradApprox(x(in,:),0.0001);
% Nested function to calculate gradApprox
function output = CalcGradApprox(x,epsilon)
output = zeros(size(thetaVec));
for n=1:length(thetaVec)
thetaVecMin = thetaVec;
thetaVecMax = thetaVec;
thetaVecMin(n) = thetaVec(n) - epsilon;
thetaVecMax(n) = thetaVec(n) + epsilon;
thetaMin = cell(2,1);
thetaMax = cell(2,1);
thetaMin{1} = reshape(thetaVecMin(1:6),[2 3]);
thetaMin{2} = reshape(thetaVecMin(7:9),[1 3]);
thetaMax{1} = reshape(thetaVecMax(1:6),[2 3]);
thetaMax{2} = reshape(thetaVecMax(7:9),[1 3]);
a3min = NN(x,thetaMin)';
a3max = NN(x,thetaMax)';
jValMin = 0;
jValMax = 0;
for inn=1:size(x,1)
jValMin = jValMin + sum( y(inn)*log(a3min) + (1-y(inn))*log(1-a3min) );
jValMax = jValMax + sum( y(inn)*log(a3max) + (1-y(inn))*log(1-a3max) );
end
jValMin = 1/m*jValMin;
jValMax = 1/m*jValMax;
output(n) = (jValMax - jValMin)/2/epsilon;
end
end
end
I've only had a quick eyeball over your code. Here are some pointers.
Q1
should I sum the outputs over all training data (i = 1, ..., N, where N is the number of training examples)?
If you are talking about the cost function, it is normal to sum and normalise by the number of training examples, in order to allow comparison between datasets of different sizes.
I can't tell from the code whether you have a vectorised implementation, which would change the answer. Note that the sum function only sums along a single dimension at a time - meaning that if you have an M-by-N array, sum will produce a 1-by-N array.
The cost function should have a scalar output.
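For example (a small illustration of sum's dimension behaviour, not taken from your code):
M = magic(4);     % 4-by-4
s1 = sum(M);      % 1-by-4: sums down each column
s2 = sum(M, 2);   % 4-by-1: sums across each row
s  = sum(M(:));   % scalar: sums every element -- what a cost function should return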
Q2
is the gradient calculated correctly?
The gradient is not calculated correctly - specifically, the deltas look wrong. Try following Andrew Ng's notes [PDF]; they are very good.
Q3
is the numerical gradient (gradApprox) calculated correctly?
This line looks a bit suspect. Does this make more sense?
output(n) = (jValMax - jValMin)/(2*epsilon);
EDIT: I actually can't make heads or tails of your gradient approximation. You should only use forward propagation and small tweaks in the parameters to compute the gradient. Good luck!
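For reference, a generic central-difference gradient check looks something like this (a sketch with my own naming; J is any handle that maps a parameter vector to a scalar cost):
function g = gradApproxCheck(J, theta, epsilon)
% Perturb one parameter at a time in both directions and take the
% symmetric difference quotient.
g = zeros(size(theta));
for n = 1:numel(theta)
    e = zeros(size(theta));
    e(n) = epsilon;
    g(n) = (J(theta + e) - J(theta - e)) / (2*epsilon);
end
end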