MATLAB filter simulation error

I made a Wiener filter simulation in MATLAB, but it seems I made a mistake in the code, as the result is incorrect. I would be grateful if somebody glanced at the code and pointed out the error.
Here is the code:
k = 1:100;
sigma = 0.1;
s1 = randn(1,100);
n1 = randn(1,100) * sigma;
Pnum = [1 0.1];
Pdenum = [1 0.9];
Gnum = [1 0.9];
Gdenum = [1 0.1];
n = filter(Pnum, Pdenum, n1);
s = filter(Gnum, Gdenum, s1);
x = n + s;
rxx = xcorr(x);
rxs = xcorr(x,s);
toeplitz_rxx = toeplitz(rxx);
wopt = inv(toeplitz_rxx) * transpose(rxs);
s_hat = filter(wopt,1,x);
error = x - s_hat; %this is where the wrong result is being calculated
plot(k,error, 'r',k,s_hat, 'b', k, x, 'g');
The problem is that the calculated error is exactly a mirror image of s_hat.
Surely this can't be right.
Thanks in advance.

I don't have a completely clear idea of what you are trying to do. But if the goal is to combine a signal and noise at some signal-to-noise ratio, and to mimic those effects through first-order linear systems, then your P and G polynomials are wrong and need correcting. Type help filter and look at how the B and A coefficient vectors are defined.
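In MATLAB, filter(b, a, x) implements a(1)*y(n) = b(1)*x(n) + b(2)*x(n-1) + ... - a(2)*y(n-1) - ..., so, for example, a first-order system y(n) = 0.9*y(n-1) + 0.1*x(n-1) is written with b = [0 0.1] and a = [1 -0.9]:
y = filter([0 0.1], [1 -0.9], x);   % note the sign flip on the a(2) coefficient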
This is the correct code:
k = 1:100;
sigma = 0.1;
s1 = randn(1,100);
n1 = randn(1,100) * sigma;
Pnum = [0 0.1];
Pdenum = [1 -0.9];
Gnum = [0 0.9];
Gdenum = [1 -0.1];
n = filter(Pnum, Pdenum, n1);
s = filter(Gnum, Gdenum, s1);
x = n + s;
rxx = xcorr(x);
rxs = xcorr(x,s);
toeplitz_rxx = toeplitz(rxx);
wopt = inv(toeplitz_rxx) * transpose(rxs);
s_hat = filter(wopt,1,x);
error = x - s_hat; %this is where the wrong result is being calculated
plot(k,error, 'r',k,s_hat, 'b', k, x, 'g');
There is still an ill-conditioning problem with the Toeplitz matrix. I leave it that way so you still have some fun dealing with it; it is easy to solve too.
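One way around it (a sketch, with an assumed filter length of M = 10, not the only possible fix) is to restrict the filter to a finite length and build the Wiener-Hopf normal equations from only the correlation lags that are actually needed:
M = 10;                              % assumed filter length
rxx = xcorr(x, M-1, 'biased');       % autocorrelation, lags -(M-1)..M-1
rsx = xcorr(s, x, M-1, 'biased');    % cross-correlation of desired signal with input
R = toeplitz(rxx(M:end));            % M-by-M Toeplitz autocorrelation matrix
p = rsx(M:end).';                    % lags 0..M-1 as a column
wopt = R \ p;                        % solve R*w = p instead of inverting the full matrix
s_hat = filter(wopt, 1, x);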
The other posters are right: Stack Overflow is NOT for solving your homework, but for helping you find the light.

Related

1D finite element method in the Hermite basis (P3C1) - Problem of solution calculation

I am currently working on solving the problem $-\alpha u'' + \beta u = f$ with Neumann conditions on the boundary, using the finite element method in MATLAB.
I managed to set up code that works for P1 and P2 Lagrange finite elements (i.e., linear and quadratic), and the results are good!
I am trying to implement the finite element method using the Hermite basis. This basis is defined by the following basis functions and derivatives:
syms x
phi(x) = [2*x^3-3*x^2+1,-2*x^3+3*x^2,x^3-2*x^2+x,x^3-x^2]
% Derivative
dphi = [6*x.^2-6*x,-6*x.^2+6*x,3*x^2-4*x+1,3*x^2-2*x]
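As a quick sanity check (not part of the original code), these cubic Hermite functions should reproduce the nodal values and slopes at x = 0 and x = 1:
% Hermite interpolation conditions on [0,1]
double([phi(0); phi(1)])                     % expect [1 0 0 0; 0 1 0 0]
double([subs(dphi,x,0); subs(dphi,x,1)])     % expect [0 0 1 0; 0 0 0 1]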
The problem with the following code is that the solution vector u is wrong. I know there must be a problem in the loop that assembles the S and F element matrices, but I can't see where, even though I've been trying to fix it for a week.
Can you give me your opinion? Hopefully someone can see my error.
Thanks a lot,
% -alpha*u'' + beta*u = f
% u'(a) = bd1, u'(b) = bd2;
a = 0;
b = 1;
f = @(x) (1);
alpha = 1;
beta = 1;
% Neumann boundary conditions
bn1 = 1;
bn2 = 0;
syms ue(x)
DE = -alpha*diff(ue,x,2) + beta*ue == f;
du = diff(ue,x);
BC = [du(a)==bn1, du(b)==bn2];
ue = dsolve(DE, BC);
figure
fplot(ue,[a,b], 'r', 'LineWidth',2)
N = 2;
nnod = N*(2+2); % Number of nodes
neq = nnod*1; % Number of equations, one degree of freedom per node
xnod = linspace(a,b,nnod);
nodes = [(1:3:nnod-3)', (2:3:nnod-2)', (3:3:nnod-1)', (4:3:nnod)'];
phi = @(xi)[2*xi.^3-3*xi.^2+1,2*xi.^3+3*xi.^2,xi.^3-2*xi.^2+xi,xi.^3-xi.^2];
dphi = @(xi)[6*xi.^2-6*xi,-6*xi.^2+6*xi,3*xi^2-4*xi+1,3*xi^2-2*xi];
% Here, just calculate the integral using gauss quadrature..
order = 5;
[gp, gw] = gauss(order, 0, 1);
S = zeros(neq,neq);
M = S;
F = zeros(neq,1);
for iel = 1:N
%disp(iel)
inod = nodes(iel,:);
xc = xnod(inod);
h = xc(end)-xc(1);
Se = zeros(4,4);
Me = Se;
fe = zeros(4,1);
for ig = 1:length(gp)
xi = gp(ig);
iw = gw(ig);
Se = Se + dphi(xi)'*dphi(xi)*1/h*1*iw;
Me = Me + phi(xi)'*phi(xi)*h*1*iw;
x = phi(xi)*xc';
fe = fe + phi(xi)' * f(x) * h * 1 * iw;
end
% Assembly
S(inod,inod) = S(inod, inod) + Se;
M(inod,inod) = M(inod, inod) + Me;
F(inod) = F(inod) + fe;
end
S = alpha*S + beta*M;
g = zeros(neq,1);
g(1) = -alpha*bn1;
g(end) = alpha*bn2;
alldofs = 1:neq;
u = zeros(neq,1); %Pre-allocate
F = F + g;
u(alldofs) = S(alldofs,alldofs)\F(alldofs)
Warning: Matrix is singular to working precision.
u = 8×1
NaN
NaN
NaN
NaN
NaN
NaN
NaN
NaN
figure
fplot(ue,[a,b], 'r', 'LineWidth',2)
hold on
plot(xnod, u, 'bo')
for iel = 1:N
inod = nodes(iel,:);
xc = xnod(inod);
U = u(inod);
xi = linspace(0,1,100)';
Ue = phi(xi)*U;
Xe = phi(xi)*xc';
plot(Xe,Ue,'b -')
end
% Gauss function for calculate the integral
function [x, w, A] = gauss(n, a, b)
n = 1:(n - 1);
beta = 1 ./ sqrt(4 - 1 ./ (n .* n));
J = diag(beta, 1) + diag(beta, -1);
[V, D] = eig(J);
x = diag(D);
A = b - a;
w = V(1, :) .* V(1, :);
w = w';
x=x';
end
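As an aside (not in the original post), a quick way to sanity-check the nodes and weights returned by such a routine is to integrate simple monomials whose values on [0,1] are known:
[gp, gw] = gauss(5, 0, 1);
sum(gw)                 % should be 1 (the length of [0,1])
sum(gw(:).*gp(:))       % should be 0.5 (the integral of x over [0,1])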
You can find the same post on the MATLAB site with syntax highlighting.
Thanks.
I have tried reading course material, searching through various documentation, and modifying my code, without success.

Numerical Analysis help in MATLAB

I am in a numerical analysis class, and I am working on a homework question. It comes from Timothy Sauer's Numerical Analysis, in the second suggested activity section. I have been talking with my professor about this code, and it seems the error and the approximation are wrong, but neither of us is able to figure out why. The following code is what I am using, in MATLAB. Does anyone know enough about Euler–Bernoulli beams and MATLAB to help out?
function ebbeamerror %This is for part three
disp(['table of errors at x=L for each n'])
disp([' n ',' Approx ',' Actual value',' Error'])
disp(['======================================================='])
format bank
for k = 1:11
n = 10*(2^k);
D = sparse(1:n,1:n,6*ones(1,n),n,n);
G = sparse(2:n,1:n-1,-4*ones(1,n-1),n,n);
F = sparse(3:n,1:n-2,ones(1,n-2),n,n);
S = G+D+F+G'+F';
S(1,1) = 16;
S(1,2) = -9;
S(1,3) = 8/3;
S(1,4) = -1/4;
S(2,1) = -4;
S(2,2) = 6;
S(2,3) = -4;
S(2,4) = 1;
S(n-1,n-3)=16/17;
S(n-1,n-2)=-60/17;
S(n-1,n-1)=72/17;
S(n-1,n)=-28/17;
S(n,n-3)=-12/17;
S(n,n-2)=96/17;
S(n,n-1)=-156/17;
S(n,n)=72/17;
E = 1.3e10;
w = 0.3;
d = 0.03;
I = w*d^3/12;
g = -9.81;
f = 480*d*g*w;
h = 2/10;
L = 2;
x = (h^4)*f/(E*I);
x1 = ones(n ,1);
b = x*x1;
size (S);
size(b);
pause
y = S\b;
x=2;
a = (f/(24*E*I))*(x^2)*(x^2-4*L*x+6*L^2);
disp([n y(n) a abs(y(n)-a)])
end
end
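As a general aside (not specific to this code): once the errors are collected in a vector, second-order convergence shows up as error ratios of roughly 4 when n is doubled. A hypothetical sketch, assuming err(k) holds abs(y(n)-a) for n = 10*2^k:
ratios = err(1:end-1) ./ err(2:end);   % err is a hypothetical vector of the tabulated errors
disp(ratios)                           % values near 4 suggest O(h^2) convergence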

Solving ODE with unknown limits

I'm trying to solve a system of ordinary differential equations from y_1 to y_2, say
G' = D
D' = f(y,G,D)
with the initial conditions
G(y_1) = 0
D(y_1) = 0
My problem is that I do not know y_1 and y_2. To account for this I of course need two additional equations, which are
F_1(y_1,y_2,G(y_2)) = 0
F_2(y_1,y_2,G(y_2)) = 0
So far I have tried to implement it using fsolve to find the zeros; from the functions F_1 and F_2 I call ode45 to solve from y_1 to y_2 in order to evaluate them. However, it does not work and I cannot seem to find any mistakes. Therefore I looked for other methods/ideas and found bvp4c, but I'm not sure whether I can use it in my case. Does anyone have experience with bvp4c and know whether and how I should use it, or do you have other ideas?
Any help is appreciated.
Code for reference:
function [Fval,sol,t,G] = EntrepreneurialFinanceNondiversifiableRisk
global sigma rf tau b epsilon gamma theta1 theta2 alpha ya K I taug mu eta...
omega
sigma=0.2236; rf=0.03; tau=0.1129; b=0.85; epsilon=0.2; gamma=2;
theta1 = -0.704; alpha = 0.6; theta2=1.704; ya=0.1438; K=27; I = 10;
taug = 0.1; mu = 0.04; eta = 0.4; omega = 0.1;
tau = 0;
option = optimset('Display','iter');
sol = fsolve(@f,[0.1483,2.8],option);
Fval = f(sol);
[t,G] = solvediff(sol(1),sol(2));
function F = f(x)
yd = x(1);
yu = x(2);
global rf tau b theta1 alpha theta2 K I taug
Vstar = @(y) (1-tau+tau*(1-theta1-(1-alpha)*(1-tau)* theta1/tau)^...
(1/theta1))*y/rf;
VstarPrime = (1-tau+tau*(1-theta1-(1-alpha)*(1-tau)...
*theta1/tau)^(1/theta1))*1/rf;
qbar = @(yd,yu) (yd^theta1-yd^theta2)/(yu^theta2*yd^theta1-yu^theta1*...
yd^theta2);
qunderbar = @(yd,yu) (yu^theta2-yu^theta1)/(yu^theta2*yd^theta1-...
yu^theta1*yd^theta2);
A = @(y) (1-tau)*(y/rf);
V = @(y,yd) A(y) + (tau * b)/rf * (1 - (y/yd)^(theta2)) - (1-alpha)*A(yd)*...
(y/yd)^(theta2);
F0 = @(yd,yu) b/rf-(b/rf-alpha*A(yd))*qunderbar(yd,yu)/(1-qbar(yd,yu));
[t,G] = solvediff(yd,yu);
F = zeros(2,1);
F(1) = Vstar(yu)-F0(yd,yu)-K-taug*(Vstar(yu)-K-I)-G(end,2);
F(2) = (1-taug)*VstarPrime-G(end,2);
function [t,G] = solvediff(yd,yu)
[t,G] = ode45(@diff,[yd,yu],[0,0]);
Assuming the domain of definition is big enough and you have good initial values, a Newton implementation with divided differences for the construction of the Jacobian looks like this:
h = 1e-5;
for k = 1:10
    Fval  = F(yd, yu);
    dFdyd = (F(yd+h, yu) - F(yd-h, yu))/(2*h);
    dFdyu = (F(yd, yu+h) - F(yd, yu-h))/(2*h);
    dF    = [dFdyd dFdyu];          % 2-by-2 Jacobian from divided differences
    ynext = [yd; yu] - dF\Fval;     % Newton step
    yd = ynext(1); yu = ynext(2);
end
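If you do want to try bvp4c, a common trick for unknown endpoints is to rescale the problem to a fixed interval s in [0,1] with y = y_1 + s*(y_2 - y_1) and treat y_1 and y_2 as unknown parameters; the two extra conditions F_1 and F_2 then become boundary conditions. A minimal sketch, assuming f, F1 and F2 exist as function handles (these names are placeholders, not taken from the code above):
% u = [G; D], p = [y1; y2]; the chain rule brings in the factor (y2 - y1)
odefun = @(s,u,p) (p(2)-p(1)) * [u(2); f(p(1)+s*(p(2)-p(1)), u(1), u(2))];
% 2 ODEs + 2 unknown parameters -> 4 boundary conditions
bcfun = @(ua,ub,p) [ua(1); ua(2); F1(p(1),p(2),ub(1)); F2(p(1),p(2),ub(1))];
solinit = bvpinit(linspace(0,1,10), [0; 0], [0.1483; 2.8]);  % same initial guess as the fsolve call
sol = bvp4c(odefun, bcfun, solinit);
y1 = sol.parameters(1);  y2 = sol.parameters(2);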

Vectorizing in MATLAB

I am searching for a way to get rid of the following loop (over theta):
for i=1:1:length(theta)
V2_ = kV2*cos(theta(i));
X = X0+V2_;
Y = Y0-V2_*(k1-k2);
Z = sqrt(X.^2-Z0-4*V2_.*(k.^2*D1+k1));
pktheta(:,i)=exp(-t/2*V2_).*(cosh(t/2*Z)+...
Y./((k1+k2)*Z).*sinh(t/2*Z));
end
where X0, Y0, Z0 and kV2 depend on the vector k (and have the same size), while t, D1, k1 and k2 are scalars. Since I have to go through this loop several times, how can I speed it up?
Thanks
Try this -
N = numel(theta);
V2_ = kV2*cos(theta(1:N));
X0 = repmat(X0,[1 N]);
Y0 = repmat(Y0,[1 N]);
Z0 = repmat(Z0,[1 N]);
X = X0 + V2_;
Y = Y0-V2_*(k1-k2);
Z = sqrt(X.^2 - Z0 - 4*V2_ .* repmat(k.^2*D1 + k1, [1 N]));  % use the original vector k here, not 1:N
pktheta = exp(-t/2*V2_).*(cosh(t/2*Z) + Y./((k1+k2)*Z).*sinh(t/2*Z));
bsxfun would most likely be faster still, if someone could post a version using it.
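For reference, a bsxfun-based sketch (assuming X0, Y0, Z0, kV2 and k are column vectors and theta is a row vector) could look like this:
V2_ = bsxfun(@times, kV2, cos(theta(:).'));   % n-by-N, one column per theta value
X = bsxfun(@plus, X0, V2_);
Y = bsxfun(@minus, Y0, (k1-k2)*V2_);
Z = sqrt(bsxfun(@minus, X.^2, Z0) - 4*bsxfun(@times, V2_, k.^2*D1 + k1));
pktheta = exp(-t/2*V2_).*(cosh(t/2*Z) + Y./((k1+k2)*Z).*sinh(t/2*Z));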

Application of Neural Network in MATLAB

I asked a question a few days ago, but I guess it was a little too complicated and I don't expect to get any answer.
My problem is that I need to use an ANN for classification. I've read that a much better cost function (or loss function, as some books call it) is the cross-entropy, that is J(w) = -1/m * sum_i( y_i*ln(h_w(x_i)) + (1-y_i)*ln(1 - h_w(x_i)) ), where i indexes the rows of the training matrix X. I tried to apply it in MATLAB but I find it really difficult. There are a couple of things I don't know:
should I sum the outputs over all training data (i = 1, ..., N, where N is the number of training examples)?
is the gradient calculated correctly?
is the numerical gradient (gradApprox) calculated correctly?
I have the following MATLAB code. I realise I may be asking about something trivial, but I hope someone can give me some clues on how to find the problem. I suspect the problem is in calculating the gradients.
Many thanks.
Main script:
close all
clear all
L = @(x) (1 + exp(-x)).^(-1);
NN = @(x,theta) theta{2}*[ones(1,size(x,1));L(theta{1}*[ones(size(x,1),1) x]')];
% theta = [10 -30 -30];
x = [0 0; 0 1; 1 0; 1 1];
y = [0.9 0.1 0.1 0.1]';
theta0 = 2*rand(9,1)-1;
options = optimset('gradObj','on','Display','iter');
thetaVec = fminunc(@costFunction,theta0,options,x,y);
theta = cell(2,1);
theta{1} = reshape(thetaVec(1:6),[2 3]);
theta{2} = reshape(thetaVec(7:9),[1 3]);
NN(x,theta)'
Cost function:
function [jVal,gradVal,gradApprox] = costFunction(thetaVec,x,y)
persistent index;
% 1 x x
% 1 x x
% 1 x x
% x = 1 x x
% 1 x x
% 1 x x
% 1 x x
m = size(x,1);
if isempty(index) || index > size(x,1)
index = 1;
end
L = @(x) (1 + exp(-x)).^(-1);
NN = @(x,theta) theta{2}*[ones(1,size(x,1));L(theta{1}*[ones(size(x,1),1) x]')];
theta = cell(2,1);
theta{1} = reshape(thetaVec(1:6),[2 3]);
theta{2} = reshape(thetaVec(7:9),[1 3]);
Dew = cell(2,1);
DewApprox = cell(2,1);
% Forward propagation
a0 = x(index,:)';
z1 = theta{1}*[1;a0];
a1 = L(z1);
z2 = theta{2}*[1;a1];
a2 = L(z2);
% Back propagation
d2 = 1/m*(a2 - y(index))*L(z2)*(1-L(z2));
Dew{2} = [1;a1]*d2;
d1 = [1;a1].*(1 - [1;a1]).*theta{2}'*d2;
Dew{1} = [1;a0]*d1(2:end)';
% NNRes = NN(x,theta)';
% jVal = -1/m*sum(NNRes-y)*NNRes*(1-NNRes);
jVal = -1/m*(a2 - y(index))*a2*(1-a2);
gradVal = [Dew{1}(:);Dew{2}(:)];
gradApprox = CalcGradApprox(0.0001);
index = index + 1;
function output = CalcGradApprox(epsilon)
output = zeros(size(gradVal));
for n=1:length(thetaVec)
thetaVecMin = thetaVec;
thetaVecMax = thetaVec;
thetaVecMin(n) = thetaVec(n) - epsilon;
thetaVecMax(n) = thetaVec(n) + epsilon;
thetaMin = cell(2,1);
thetaMax = cell(2,1);
thetaMin{1} = reshape(thetaVecMin(1:6),[2 3]);
thetaMin{2} = reshape(thetaVecMin(7:9),[1 3]);
thetaMax{1} = reshape(thetaVecMax(1:6),[2 3]);
thetaMax{2} = reshape(thetaVecMax(7:9),[1 3]);
a2min = NN(x(index,:),thetaMin)';
a2max = NN(x(index,:),thetaMax)';
jValMin = -1/m*(a2min-y(index))*a2min*(1-a2min);
jValMax = -1/m*(a2max-y(index))*a2max*(1-a2max);
output(n) = (jValMax - jValMin)/2/epsilon;
end
end
end
EDIT:
Below I present the corrected version of my costFunction for those who may be interested.
function [jVal,gradVal,gradApprox] = costFunction(thetaVec,x,y)
m = size(x,1);
L = @(x) (1 + exp(-x)).^(-1);
NN = @(x,theta) L(theta{2}*[ones(1,size(x,1));L(theta{1}*[ones(size(x,1),1) x]')]);
theta = cell(2,1);
theta{1} = reshape(thetaVec(1:6),[2 3]);
theta{2} = reshape(thetaVec(7:9),[1 3]);
Delta = cell(2,1);
Delta{1} = zeros(size(theta{1}));
Delta{2} = zeros(size(theta{2}));
D = cell(2,1);
D{1} = zeros(size(theta{1}));
D{2} = zeros(size(theta{2}));
jVal = 0;
for in = 1:size(x,1)
% Forward propagation
a1 = [1;x(in,:)']; % added bias to a0
z2 = theta{1}*a1;
a2 = [1;L(z2)]; % added bias to a1
z3 = theta{2}*a2;
a3 = L(z3);
% Back propagation
d3 = a3 - y(in);
d2 = theta{2}'*d3.*a2.*(1 - a2);
Delta{2} = Delta{2} + d3*a2';
Delta{1} = Delta{1} + d2(2:end)*a1';
jVal = jVal + sum( y(in)*log(a3) + (1-y(in))*log(1-a3) );
end
D{1} = 1/m*Delta{1};
D{2} = 1/m*Delta{2};
jVal = -1/m*jVal;
gradVal = [D{1}(:);D{2}(:)];
gradApprox = CalcGradApprox(x(in,:),0.0001);
% Nested function to calculate gradApprox
function output = CalcGradApprox(x,epsilon)
output = zeros(size(thetaVec));
for n=1:length(thetaVec)
thetaVecMin = thetaVec;
thetaVecMax = thetaVec;
thetaVecMin(n) = thetaVec(n) - epsilon;
thetaVecMax(n) = thetaVec(n) + epsilon;
thetaMin = cell(2,1);
thetaMax = cell(2,1);
thetaMin{1} = reshape(thetaVecMin(1:6),[2 3]);
thetaMin{2} = reshape(thetaVecMin(7:9),[1 3]);
thetaMax{1} = reshape(thetaVecMax(1:6),[2 3]);
thetaMax{2} = reshape(thetaVecMax(7:9),[1 3]);
a3min = NN(x,thetaMin)';
a3max = NN(x,thetaMax)';
jValMin = 0;
jValMax = 0;
for inn=1:size(x,1)
jValMin = jValMin + sum( y(inn)*log(a3min) + (1-y(inn))*log(1-a3min) );
jValMax = jValMax + sum( y(inn)*log(a3max) + (1-y(inn))*log(1-a3max) );
end
jValMin = 1/m*jValMin;
jValMax = 1/m*jValMax;
output(n) = (jValMax - jValMin)/2/epsilon;
end
end
end
I've only had a quick eyeball over your code. Here are some pointers.
Q1
should I sum the outputs over all training data (i = 1, ..., N, where N is the number of training examples)?
If you are talking in relation to the cost function, it is normal to sum and normalise by the number of training examples in order to allow comparison between datasets of different sizes.
I can't tell from the code whether you have a vectorised implementation, which will change the answer. Note that the sum function only sums along a single dimension at a time, meaning that if you have an M-by-N array, sum will return a 1-by-N array.
The cost function should have a scalar output.
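For illustration, a vectorised cross-entropy cost with a scalar output might look like this (a sketch, assuming h is a 1-by-m vector of network outputs over all m training examples and y the matching targets):
% hedged sketch: h and y are assumed to be 1-by-m vectors
J = -(1/m) * sum( y .* log(h) + (1 - y) .* log(1 - h) );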
Q2
is the gradient calculated correctly?
The gradient is not calculated correctly; specifically, the deltas look wrong. Try following Andrew Ng's notes [PDF]; they are very good.
Q3
is the numerical gradient (gradApprox) calculated correctly?
This line looks a bit suspect. Does this make more sense?
output(n) = (jValMax - jValMin)/(2*epsilon);
EDIT: I actually can't make heads or tails of your gradient approximation. You should only use forward propagation and small tweaks in the parameters to compute the gradient. Good luck!
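For what it's worth, a generic central-difference gradient check can be written like this (a sketch; costFun is a placeholder for any function that maps the parameter vector to a scalar cost):
epsilon = 1e-4;
gradApprox = zeros(size(thetaVec));
for n = 1:numel(thetaVec)
    e = zeros(size(thetaVec));
    e(n) = epsilon;                     % perturb one parameter at a time
    gradApprox(n) = (costFun(thetaVec + e) - costFun(thetaVec - e)) / (2*epsilon);
end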