My approximate entropy script for MATLAB isn't working

This is my approximate entropy calculator in MATLAB (see https://en.wikipedia.org/wiki/Approximate_entropy).
I'm not sure why it isn't working: it's returning a negative value. Can anyone help me with this? R1 is the data.
FindSize = size(R1);
N = FindSize(1);
% N = input ('insert number of data values');
% if you want to put your own N in, take away the % from the line above
% and insert the % before the N = FindSize(1)
%m = input ('insert m: integer representing length of data, embedding dimension ');
m = 2;
%r = input ('insert r: positive real number for filtering, threshold ');
r = 0.2*std(R1);
for x1 = R1(1:N-m+1,1)
    D1 = pdist2(x1,x1);
    C11 = (D1 <= r)/(N-m+1);
    c1 = C11(1);
end
for i1 = 1:N-m+1
    s1 = sum(log(c1));
end
phi1 = (s1/(N-m+1));
for x2 = R1(1:N-m+2,1)
    D2 = pdist2(x2,x2);
    C21 = (D2 <= r)/(N-m+2);
    c2 = C21(1);
end
for i2 = 1:N-m+2
    s2 = sum(log(c2));
end
phi2 = (s2/(N-m+2));
Ap = phi1 - phi2;
Apen = Ap(1)

Following the documentation provided by the Wikipedia article, I developed this small function that calculates the approximate entropy:
function res = approximate_entropy(U,m,r)
    N = numel(U);
    res = zeros(1,2);
    for i = [1 2]
        off = m + i - 1;               % window length: m, then m+1
        off_N = N - off;
        off_N1 = off_N + 1;
        % build the matrix of overlapping subsequences, one per row
        x = zeros(off_N1,off);
        for j = 1:off
            x(:,j) = U(j:off_N+j);
        end
        % for each subsequence, count neighbours within tolerance r
        C = zeros(off_N1,1);
        for j = 1:off_N1
            dist = abs(x - repmat(x(j,:),off_N1,1));
            C(j) = sum(~any((dist > r),2)) / off_N1;
        end
        res(i) = sum(log(C)) / off_N1;  % phi(m) and phi(m+1)
    end
    res = res(1) - res(2);
end
I first tried to replicate the computation shown in the article, and the result I obtain matches the result shown in the example:
U = repmat([85 80 89],1,17);
approximate_entropy(U,2,3)
ans =
-1.09965411068114e-05
Then I created another example that shows a case in which approximate entropy produces a meaningful result (the entropy of the first sample is always less than the entropy of the second one):
% starting variables...
s1 = repmat([10 20],1,10);
s1_m = mean(s1);
s1_s = std(s1);
s2_m = 0;
s2_s = 0;
% datasample will not always return a perfect M and S match,
% so let's repeat this until equality is achieved (note the ||:
% keep sampling while either the mean or the std still differs)
while ((s1_m ~= s2_m) || (s1_s ~= s2_s))
    s2 = datasample([10 20],20,'Replace',true,'Weights',[0.5 0.5]);
    s2_m = mean(s2);
    s2_s = std(s2);
end
m = 2;
r = 3;
ae1 = approximate_entropy(s1,m,r)
ae2 = approximate_entropy(s2,m,r)
ae1 =
0.00138568170752751
ae2 =
0.680090884817465
Finally, I tried with your sample data:
fid = fopen('O1.txt','r');
U = cell2mat(textscan(fid,'%f'));
fclose(fid);
m = 2;
r = 0.2 * std(U);
approximate_entropy(U,m,r)
ans =
1.08567461184858
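As a side note, the neighbour counting can also be vectorised with pdist2 (which the question already uses) and the Chebyshev metric. This is only a sketch of the same phi computation, assuming the Statistics and Machine Learning Toolbox is available and U is a vector:
function ae = apen_pdist2(U,m,r)
    % Sketch: phi for window lengths m and m+1, then their difference.
    phi = zeros(1,2);
    for w = [m, m+1]
        K = numel(U) - w + 1;
        X = zeros(K,w);                  % overlapping windows, one per row
        for j = 1:w
            X(:,j) = U(j:j+K-1);
        end
        D = pdist2(X,X,'chebychev');     % max-norm distance between windows
        C = sum(D <= r,2) / K;           % fraction of windows within tolerance r
        phi(w - m + 1) = sum(log(C)) / K;
    end
    ae = phi(1) - phi(2);
end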

Related

How to correct grid search?

I am trying to find the optimal hyperparameters for my SVM model using a grid search, but it simply returns 1 for both hyperparameters.
function evaluations = inner_kfold_trainer(C,q,k,features_xy,labels)
    features_xy_flds = kdivide(features_xy, k);
    labels_flds = kdivide(labels, k);
    evaluations = zeros(k,3);
    for i = 1:k
        fprintf('Fold %i of %i\n',i,k);
        train_data = cell2mat(features_xy_flds(1:end ~= i));
        train_labels = cell2mat(labels_flds(1:end ~= i));
        test_data = cell2mat(features_xy_flds(i));
        test_labels = cell2mat(labels_flds(i));
        %AU1
        train_labels = train_labels(:,1);
        test_labels = test_labels(:,1);
        [k,~] = size(test_labels); % note: this shadows the fold count k
        %train
        sv = fitcsvm(train_data,train_labels, 'KernelFunction','polynomial', 'PolynomialOrder',q,'BoxConstraint',C);
        %Calculate evaluative measures
        %svm_outputs = zeros(k,1);
        sv_predictions = sv.predict(test_data);
        [precision,recall,F1] = evaluation(sv_predictions,test_labels);
        evaluations(i,1) = precision;
        evaluations(i,2) = recall;
        evaluations(i,3) = F1;
    end
    save('eval.mat', 'evaluations');
end
This is an inner-fold cross-validation function; below is the grid function, where something seems to be going wrong:
function [q,C] = grid_search(features_xy,labels,k)
    % n x n grid
    n = 3;
    q_grid = linspace(1,19,n);
    C_grid = linspace(1,59,n);
    tic
    evals = zeros(n,n,3);
    for i = 1:n
        for j = 1:n
            fprintf('## i=%i, j=%i ##\n', i, j);
            svm_results = inner_kfold_trainer(C_grid(i), q_grid(j),k,features_xy,labels);
            evals(i,j,:) = mean(svm_results(:,:));
            % precision only
            %evals(i,j,:) = max(svm_results(:,1));
            toc
        end
    end
    f = evals;
    % retrieving the best value of the hyperparameters, to use in the outer
    % fold
    [M1,I1] = max(f);
    [~,I2] = max(M1(1,1,:));
    index = I1(:,:,I2);
    C = C_grid(index(1))
    q = q_grid(index(2))
end
When I run grid_search(features_xy,labels,8), for example, I get C = 1 and q = 1 for any value of k (the number of folds). Also, features_xy is a 500×98 matrix.
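For what it's worth, one likely culprit is the index retrieval at the end of grid_search: max(f) on an n-by-n-by-3 array reduces along the first dimension only, so I1 holds row indices per column and per metric, not grid coordinates. A hedged sketch of retrieving the best grid cell for a single metric (assuming evals(:,:,3) holds the mean F1 per (C, q) cell):
% Sketch: argmax over a 2-D metric slice via linear indexing.
F1_slice = evals(:,:,3);
[~, lin] = max(F1_slice(:));                     % best cell, linear index
[iBest, jBest] = ind2sub(size(F1_slice), lin);   % back to grid coordinates
C = C_grid(iBest);
q = q_grid(jBest);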

Creating a table from a variable inside a for loop

I am writing a for loop to calculate the value of four different variables. The first variable is M. M increases from 10^2 to 10^5,
M = [10^2,10^3,10^4,10^5];
The other three variables needed for the table are shown in the code below.
confmc
confcv
confmcSize/confcvSize
I first create a for loop to iterate through the four different values of M. I then create the table outside of the for loop.
How could I adjust the implementation so that the table displays all four values of M?
randn('state',100)
%%%%%% Problem and method parameters %%%%%%%%%
S = 5; E = 6; sigma = 0.3; r = 0.05; T = 1;
Dt = 1e-2; N = T/Dt; M = [10^2,10^3,10^4,10^5];
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
for k = 1:numel(M)
    %%%%%%%%% Geom Asian exact mean %%%%%%%%%%%%
    sigsqT = sigma^2*T*(N+1)*(2*N+1)/(6*N*N);
    muT = 0.5*sigsqT + (r - 0.5*sigma^2)*T*(N+1)/(2*N);
    d1 = (log(S/E) + (muT + 0.5*sigsqT))/(sqrt(sigsqT));
    d2 = d1 - sqrt(sigsqT);
    N1 = 0.5*(1+erf(d1/sqrt(2)));
    N2 = 0.5*(1+erf(d2/sqrt(2)));
    geo = exp(-r*T)*( S*exp(muT)*N1 - E*N2 );
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    Spath = S*cumprod(exp((r-0.5*sigma^2)*Dt+sigma*sqrt(Dt)*randn(M(k),N)),2);
    % Standard Monte Carlo
    arithave = mean(Spath,2);
    Parith = exp(-r*T)*max(arithave-E,0); % payoffs
    Pmean = mean(Parith);
    Pstd = std(Parith);
    confmc = [Pmean-1.96*Pstd/sqrt(M(k)), Pmean+1.96*Pstd/sqrt(M(k))];
    confmcSize = (Pmean+1.96*Pstd/sqrt(M(k)))-(Pmean-1.96*Pstd/sqrt(M(k)));
    % Control Variate
    geoave = exp((1/N)*sum(log(Spath),2));
    Pgeo = exp(-r*T)*max(geoave-E,0); % geo payoffs
    Z = Parith + geo - Pgeo; % control variate version
    Zmean = mean(Z);
    Zstd = std(Z);
    confcv = [Zmean-1.96*Zstd/sqrt(M(k)), Zmean+1.96*Zstd/sqrt(M(k))];
    confcvSize = (Zmean+1.96*Zstd/sqrt(M(k)))-(Zmean-1.96*Zstd/sqrt(M(k)));
end
T = table(M,confmc,confcv,confmcSize/confcvSize)
The current code returns
T =

  1×4 table

      M            confmc                  confcv           Var4
    _____    ____________________    ____________________   ______

    1e+05    0.096756      0.1007    0.097306    0.097789   8.1622
How could I change my implementation so that all four values of M are computed?
I just modified a few things. Take a look at the following code.
randn('state',100)
%%%%%% Problem and method parameters %%%%%%%%%
S = 5; E = 6; sigma = 0.3; r = 0.05; T = 1;
Dt = 1e-2; N = T/Dt; M = [10^2,10^3,10^4,10^5];
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
confmc = zeros(numel(M), 2);
confcv = zeros(numel(M), 2);
confmcSize = zeros(numel(M), 1);
confcvSize = zeros(numel(M), 1);
for k = 1:numel(M)
    %%%%%%%%% Geom Asian exact mean %%%%%%%%%%%%
    sigsqT = sigma^2*T*(N+1)*(2*N+1)/(6*N*N);
    muT = 0.5*sigsqT + (r - 0.5*sigma^2)*T*(N+1)/(2*N);
    d1 = (log(S/E) + (muT + 0.5*sigsqT))/(sqrt(sigsqT));
    d2 = d1 - sqrt(sigsqT);
    N1 = 0.5*(1+erf(d1/sqrt(2)));
    N2 = 0.5*(1+erf(d2/sqrt(2)));
    geo = exp(-r*T)*( S*exp(muT)*N1 - E*N2 );
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    Spath = S*cumprod(exp((r-0.5*sigma^2)*Dt+sigma*sqrt(Dt)*randn(M(k),N)),2);
    % Standard Monte Carlo
    arithave = mean(Spath,2);
    Parith = exp(-r*T)*max(arithave-E,0); % payoffs
    Pmean = mean(Parith);
    Pstd = std(Parith);
    confmc(k,:) = [Pmean-1.96*Pstd/sqrt(M(k)), Pmean+1.96*Pstd/sqrt(M(k))];
    confmcSize(k,1) = (Pmean+1.96*Pstd/sqrt(M(k)))-(Pmean-1.96*Pstd/sqrt(M(k)));
    % Control Variate
    geoave = exp((1/N)*sum(log(Spath),2));
    Pgeo = exp(-r*T)*max(geoave-E,0); % geo payoffs
    Z = Parith + geo - Pgeo; % control variate version
    Zmean = mean(Z);
    Zstd = std(Z);
    confcv(k,:) = [Zmean-1.96*Zstd/sqrt(M(k)), Zmean+1.96*Zstd/sqrt(M(k))];
    confcvSize(k,1) = (Zmean+1.96*Zstd/sqrt(M(k)))-(Zmean-1.96*Zstd/sqrt(M(k)));
end
T = table(M',confmc,confcv,confmcSize./confcvSize)
In short, I stored the results in preallocated matrices instead of a vector or scalar for each table variable. In your code, the variables (confmc, confcv, confmcSize, confcvSize) were being overwritten on every iteration, so only the results for the last M survived.
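As a side note (my addition, not part of the original answer), table also accepts a 'VariableNames' argument if you want the ratio column to have a readable header instead of the automatic Var4; 'WidthRatio' below is just an illustrative name:
% Same table, with named columns.
T = table(M', confmc, confcv, confmcSize./confcvSize, ...
    'VariableNames', {'M','confmc','confcv','WidthRatio'})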

Weird phenomenon when converting RGB to HSV manually in Matlab

I have written a small Matlab function which takes an RGB image and converts it to HSV according to the conversion formulas found here.
The problem is that when I apply it to a color spectrum there is a cut in the spectrum and some values are wrong; see the images below (to make the comparison easier I have used the built-in hsv2rgb() function to convert back to RGB). This does not happen with Matlab's own rgb2hsv(), but I cannot find what I have done wrong.
This is my function:
function [ I_HSV ] = RGB2HSV( I_RGB )
    % Convert an RGB image (values in [0,1] or [0,255]) to HSV
    [MAX, ind] = max(I_RGB,[],3);
    if max(max(MAX)) > 1
        % input appears to be 0-255; rescale to [0,1]
        I_r = I_RGB(:,:,1)/255;
        I_g = I_RGB(:,:,2)/255;
        I_b = I_RGB(:,:,3)/255;
        MAX = max(cat(3,I_r, I_g, I_b),[],3);
    else
        I_r = I_RGB(:,:,1);
        I_g = I_RGB(:,:,2);
        I_b = I_RGB(:,:,3);
    end
    MIN = min(cat(3,I_r, I_g, I_b),[],3);
    d = MAX - MIN;
    I_V = MAX;
    I_S = (MAX - MIN) ./ MAX;
    I_H = zeros(size(I_V));
    a = 1/6*mod(((I_g - I_b) ./ d),1);
    b = 1/6*(I_b - I_r) ./ d + 1/3;
    c = 1/6*(I_r - I_g) ./ d + 2/3;
    H = cat(3, a, b, c);
    for m = 1:size(H,1)
        for n = 1:size(H,2)
            if d(m,n) == 0
                I_H(m,n) = 0;
            else
                % pick the hue branch that matches the maximal channel
                I_H(m,n) = H(m,n,ind(m,n));
            end
        end
    end
    I_HSV = cat(3,I_H,I_S,I_V);
end
(Images: the original spectrum, and the converted spectrum showing the cut.)
The error was in my simplification of the calculations of a, b, and c. Changing them to the following solved the problem:
function [ I_HSV ] = RGB2HSV( I_RGB )
    % Convert an RGB image (values in [0,1] or [0,255]) to HSV
    [MAX, ind] = max(I_RGB,[],3);
    if max(max(MAX)) > 1
        % input appears to be 0-255; rescale to [0,1]
        I_r = I_RGB(:,:,1)/255;
        I_g = I_RGB(:,:,2)/255;
        I_b = I_RGB(:,:,3)/255;
        MAX = max(cat(3,I_r, I_g, I_b),[],3);
    else
        I_r = I_RGB(:,:,1);
        I_g = I_RGB(:,:,2);
        I_b = I_RGB(:,:,3);
    end
    MIN = min(cat(3,I_r, I_g, I_b),[],3);
    D = MAX - MIN;
    I_V = MAX;
    I_S = D ./ MAX;
    I_H = zeros(size(I_V));
    % the three hue branches of the standard formula, each scaled by 1/6
    a = 1/6*mod(((I_g - I_b) ./ D),6);
    b = 1/6*((I_b - I_r) ./ D + 2);
    c = 1/6*((I_r - I_g) ./ D + 4);
    H = cat(3, a, b, c);
    for m = 1:size(H,1)
        for n = 1:size(H,2)
            if D(m,n) == 0
                I_H(m,n) = 0;
            else
                I_H(m,n) = H(m,n,ind(m,n));
            end
            if MAX(m,n) == 0
                I_S(m,n) = 0; % guard against division by zero (was S(m,n) in the original post)
            end
        end
    end
    I_HSV = cat(3,I_H,I_S,I_V);
end
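A quick way to validate the fix (my suggestion, not part of the original post) is to compare against MATLAB's built-in rgb2hsv on random data; the two should agree up to floating-point noise:
% Sanity check on a random image with values in [0,1].
rgb = rand(64,64,3);
err = max(abs(reshape(rgb2hsv(rgb) - RGB2HSV(rgb), [], 1)))  % expect ~1e-16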

Matlab: how to calculate the Pseudo Zernike moments?

The code below is Algorithm 1, which computes the pseudo-Zernike radial polynomials:
function R = pseudo_zernike_radial_polynomials(n,r)
    if any( r>1 | r<0 | n<0)
        error('zernike_radial_polynomials: r must be between 0 and 1, and n must not be less than 0.')
    end
    if n==0
        R = ones(n+1, length(r));
        return;
    end
    R = ones(n+1, length(r));
    rSQRT = sqrt(r);
    r0 = ~logical(rSQRT.^(2*n+1)); % if any low r exist, and high n, then treat as 0
    if any(r0)
        m = n:-1:mod(n,2); ss = 1:sum(r0);
        R0(m+1, ss) = 0;
        R0(0+1, ss) = 1;
        R(:,r0) = R0;
    end
    if any(~r0)
        rSQRT = rSQRT(~r0);
        R1 = zernike_radial_polynomials(2*n+1, rSQRT);
        m = 2:2:2*n+1+1;
        R1 = R1(m,:);
        for m = 1:size(R1,1)
            R1(m,:) = R1(m,:)./rSQRT';
        end
        R(:,~r0) = R1;
    end
Then, this is Algorithm 2, which calculates the moments, and I translated it into the following code:
clear all
% input: 2D image f, Nmax = order
f = rgb2gray(imread('Oval_45.png'));
prompt = ('Input PZM order Nmax:');
Nmax = input(prompt);
Pzm = 0;
l = size(f,1);
for x = 1:l
    for y = x
        for n = 0:Nmax
            [X,Y] = meshgrid(x,y);
            R = sqrt((2.*X-l-1).^2+(2.*Y-l-1).^2)/l;
            theta = atan2((l-1-2.*Y+2),(2.*X-l+1-2));
            R = (R<=1).*R;
            rad = pseudo_zernike_radial_polynomials(n, R);
            for m = 0:n
                % find psi
                if mod(m,2)==0
                    % m is even
                    newd1 = f(x,y)+f(x,y);
                    newd2 = f(y,x)+f(y,x);
                    newd3 = f(x,y)+f(x,y);
                    newd4 = f(y,x)+f(y,x);
                    x1 = newd1;
                    y1 = (-1)^m/2*newd2;
                    x2 = newd3;
                    y2 = (-1)^m/2*newd4;
                    psi = cos(m*theta)*(x1+y1+x2+y2)-(1i)*sin(m*theta)*(x1+y1-x2-y2);
                else
                    newd1 = f(x,y)-f(x,y);
                    newd2 = f(y,x)-f(y,x);
                    newd3 = f(x,y)-f(x,y);
                    newd4 = f(y,x)-f(y,x);
                    x1 = newd1;
                    y1 = (-1)^m/2*newd2;
                    x2 = newd3;
                    y2 = (-1)^m/2*newd4;
                    psi = cos(m*theta)*(x1+x2)+sin(m*theta)*(y1-y2)+(1i)*(cos(m*theta)*(y1+y2)-sin(m*theta)*(x1-x2));
                end
                Pzm = Pzm+rad*psi;
            end
        end
    end
end
However, it gives me this error:
Error using *
Integers can only be combined with integers of the same class, or scalar doubles.
Error in main_pzm (line 44)
Pzm = Pzm+rad*psi;
The details of the calculation can be seen here.
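For what it's worth, that error message usually points at a class mismatch: rgb2gray(imread(...)) returns a uint8 image, and uint8 arrays can only be multiplied by scalar doubles, while rad*psi involves a non-scalar double matrix. A likely fix, assuming the rest of the computation expects floating-point pixel values:
% Cast the image to double before mixing it with double-valued
% polynomial terms; this avoids the "Integers can only be combined
% with integers of the same class, or scalar doubles" error.
f = double(rgb2gray(imread('Oval_45.png')));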

Application of Neural Network in MATLAB

I asked a question a few days ago, but I guess it was a little too complicated and I don't expect to get any answer.
My problem is that I need to use an ANN for classification. I've read that a much better cost function (or loss function, as some books call it) is the cross-entropy, that is J(w) = -1/m * sum_i( y_i*ln(h_w(x_i)) + (1 - y_i)*ln(1 - h_w(x_i)) ), where i indexes the examples in the training matrix X. I tried to apply it in MATLAB but I find it really difficult. There are a couple of things I don't know (a vectorised sketch of the cost follows the list):
should I sum the outputs over all training data (i = 1, ..., N, where N is the number of training examples)?
is the gradient calculated correctly?
is the numerical gradient (gradApprox) calculated correctly?
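For reference, a minimal vectorised sketch of that cost, assuming h is the m-by-1 vector of network outputs over all examples and y the m-by-1 vector of targets:
% Vectorised cross-entropy: one scalar for the whole training set.
J = -(1/m) * sum( y .* log(h) + (1 - y) .* log(1 - h) );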
I have the following MATLAB code. I realise I may be asking about something trivial, but I hope someone can give me some clues on how to find the problem. I suspect the problem is in calculating the gradients.
Many thanks.
Main script:
close all
clear all
L = @(x) (1 + exp(-x)).^(-1);
NN = @(x,theta) theta{2}*[ones(1,size(x,1));L(theta{1}*[ones(size(x,1),1) x]')];
% theta = [10 -30 -30];
x = [0 0; 0 1; 1 0; 1 1];
y = [0.9 0.1 0.1 0.1]';
theta0 = 2*rand(9,1)-1;
options = optimset('gradObj','on','Display','iter');
thetaVec = fminunc(@costFunction,theta0,options,x,y);
theta = cell(2,1);
theta{1} = reshape(thetaVec(1:6),[2 3]);
theta{2} = reshape(thetaVec(7:9),[1 3]);
NN(x,theta)'
Cost function:
function [jVal,gradVal,gradApprox] = costFunction(thetaVec,x,y)
    persistent index;
    %     1 x x
    %     1 x x
    %     1 x x
    % x = 1 x x
    %     1 x x
    %     1 x x
    %     1 x x
    m = size(x,1);
    if isempty(index) || index > size(x,1)
        index = 1;
    end
    L = @(x) (1 + exp(-x)).^(-1);
    NN = @(x,theta) theta{2}*[ones(1,size(x,1));L(theta{1}*[ones(size(x,1),1) x]')];
    theta = cell(2,1);
    theta{1} = reshape(thetaVec(1:6),[2 3]);
    theta{2} = reshape(thetaVec(7:9),[1 3]);
    Dew = cell(2,1);
    DewApprox = cell(2,1);
    % Forward propagation
    a0 = x(index,:)';
    z1 = theta{1}*[1;a0];
    a1 = L(z1);
    z2 = theta{2}*[1;a1];
    a2 = L(z2);
    % Back propagation
    d2 = 1/m*(a2 - y(index))*L(z2)*(1-L(z2));
    Dew{2} = [1;a1]*d2;
    d1 = [1;a1].*(1 - [1;a1]).*theta{2}'*d2;
    Dew{1} = [1;a0]*d1(2:end)';
    % NNRes = NN(x,theta)';
    % jVal = -1/m*sum(NNRes-y)*NNRes*(1-NNRes);
    jVal = -1/m*(a2 - y(index))*a2*(1-a2);
    gradVal = [Dew{1}(:);Dew{2}(:)];
    gradApprox = CalcGradApprox(0.0001);
    index = index + 1;

    function output = CalcGradApprox(epsilon)
        output = zeros(size(gradVal));
        for n = 1:length(thetaVec)
            thetaVecMin = thetaVec;
            thetaVecMax = thetaVec;
            thetaVecMin(n) = thetaVec(n) - epsilon;
            thetaVecMax(n) = thetaVec(n) + epsilon;
            thetaMin = cell(2,1);
            thetaMax = cell(2,1);
            thetaMin{1} = reshape(thetaVecMin(1:6),[2 3]);
            thetaMin{2} = reshape(thetaVecMin(7:9),[1 3]);
            thetaMax{1} = reshape(thetaVecMax(1:6),[2 3]);
            thetaMax{2} = reshape(thetaVecMax(7:9),[1 3]);
            a2min = NN(x(index,:),thetaMin)';
            a2max = NN(x(index,:),thetaMax)';
            jValMin = -1/m*(a2min-y(index))*a2min*(1-a2min);
            jValMax = -1/m*(a2max-y(index))*a2max*(1-a2max);
            output(n) = (jValMax - jValMin)/2/epsilon;
        end
    end
end
EDIT:
Below I present the correct version of my costFunction for those who may be interested.
function [jVal,gradVal,gradApprox] = costFunction(thetaVec,x,y)
    m = size(x,1);
    L = @(x) (1 + exp(-x)).^(-1);
    NN = @(x,theta) L(theta{2}*[ones(1,size(x,1));L(theta{1}*[ones(size(x,1),1) x]')]);
    theta = cell(2,1);
    theta{1} = reshape(thetaVec(1:6),[2 3]);
    theta{2} = reshape(thetaVec(7:9),[1 3]);
    Delta = cell(2,1);
    Delta{1} = zeros(size(theta{1}));
    Delta{2} = zeros(size(theta{2}));
    D = cell(2,1);
    D{1} = zeros(size(theta{1}));
    D{2} = zeros(size(theta{2}));
    jVal = 0;
    for in = 1:size(x,1)
        % Forward propagation
        a1 = [1;x(in,:)']; % added bias to a0
        z2 = theta{1}*a1;
        a2 = [1;L(z2)]; % added bias to a1
        z3 = theta{2}*a2;
        a3 = L(z3);
        % Back propagation
        d3 = a3 - y(in);
        d2 = theta{2}'*d3.*a2.*(1 - a2);
        Delta{2} = Delta{2} + d3*a2';
        Delta{1} = Delta{1} + d2(2:end)*a1';
        jVal = jVal + sum( y(in)*log(a3) + (1-y(in))*log(1-a3) );
    end
    D{1} = 1/m*Delta{1};
    D{2} = 1/m*Delta{2};
    jVal = -1/m*jVal;
    gradVal = [D{1}(:);D{2}(:)];
    gradApprox = CalcGradApprox(x(in,:),0.0001);

    % Nested function to calculate gradApprox
    function output = CalcGradApprox(x,epsilon)
        output = zeros(size(thetaVec));
        for n = 1:length(thetaVec)
            thetaVecMin = thetaVec;
            thetaVecMax = thetaVec;
            thetaVecMin(n) = thetaVec(n) - epsilon;
            thetaVecMax(n) = thetaVec(n) + epsilon;
            thetaMin = cell(2,1);
            thetaMax = cell(2,1);
            thetaMin{1} = reshape(thetaVecMin(1:6),[2 3]);
            thetaMin{2} = reshape(thetaVecMin(7:9),[1 3]);
            thetaMax{1} = reshape(thetaVecMax(1:6),[2 3]);
            thetaMax{2} = reshape(thetaVecMax(7:9),[1 3]);
            a3min = NN(x,thetaMin)';
            a3max = NN(x,thetaMax)';
            jValMin = 0;
            jValMax = 0;
            for inn = 1:size(x,1)
                jValMin = jValMin + sum( y(inn)*log(a3min) + (1-y(inn))*log(1-a3min) );
                jValMax = jValMax + sum( y(inn)*log(a3max) + (1-y(inn))*log(1-a3max) );
            end
            jValMin = 1/m*jValMin;
            jValMax = 1/m*jValMax;
            output(n) = (jValMax - jValMin)/2/epsilon;
        end
    end
end
I've only had a quick eyeball over your code. Here are some pointers.
Q1
should I sum the outputs over all training data (i = 1, ..., N, where N is the number of training examples)?
If you are talking in relation to the cost function, it is normal to sum and normalise by the number of training examples in order to allow comparison between training sets of different sizes.
I can't tell from the code whether you have a vectorised implementation, which would change the answer. Note that the sum function only sums along a single dimension at a time: if you have an M-by-N array, sum will return a 1-by-N array.
The cost function should have a scalar output; see the example below.
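For example, to reduce a matrix to the single scalar the cost function needs:
A = rand(4,3);
total = sum(A(:));   % sums over all elements, giving a scalar
% equivalently: sum(sum(A))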
Q2
is the gradient calculated correctly
The gradient is not calculated correctly - specifically, the deltas look wrong. Try following Andrew Ng's notes [PDF]; they are very good.
Q3
is the numerical gradient (gradApprox) calculated correctly?
This line looks a bit suspect. Does this make more sense?
output(n) = (jValMax - jValMin)/(2*epsilon);
EDIT: I actually can't make heads or tails of your gradient approximation. You should only use forward propagation and small tweaks in the parameters to compute the gradient. Good luck!
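To illustrate that last pointer, here is a generic central-difference gradient check (a sketch of the standard technique, not the poster's code; costFn is assumed to be any handle mapping a parameter vector to a scalar cost):
function g = numerical_gradient(costFn, theta, epsilon)
    % Perturb one parameter at a time and evaluate the full scalar cost
    % by forward propagation only; no backpropagation involved.
    g = zeros(size(theta));
    for n = 1:numel(theta)
        e = zeros(size(theta));
        e(n) = epsilon;
        g(n) = (costFn(theta + e) - costFn(theta - e)) / (2*epsilon);
    end
end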