I am trying to understand why the dimensions of this calculation are wrong - MATLAB

I am solving the Poisson equation and want to plot the error relative to the exact solution vs. the number of grid points. My code is:
function [Ntot,err] = poisson(N)
nx = N; % Number of steps in space(x)
ny = N; % Number of steps in space(y)
Ntot = nx*ny;
niter = 1000; % Number of iterations
dx = 2/(nx-1); % Width of space step(x)
dy = 2/(ny-1); % Width of space step(y)
x = -1:dx:1; % Range of x(-1,1)
y = -1:dy:1; % Range of y(-1,1)
b = zeros(nx,ny);
dn = zeros(nx,ny);
% Initial Conditions
d = zeros(nx,ny);
u = zeros(nx,ny);
% Boundary conditions
d(:,1) = 0;
d(:,ny) = 0;
d(1,:) = 0;
d(nx,:) = 0;
% Source term
b(round(ny/4),round(nx/4)) = 3000;
b(round(ny*3/4),round(nx*3/4)) = -3000;
i = 2:nx-1;
j = 2:ny-1;
% 5-point difference (Explicit)
for it = 1:niter
    dn = d;
    d(i,j) = ((dy^2*(dn(i + 1,j) + dn(i - 1,j))) + (dx^2*(dn(i,j + 1) + dn(i,j - 1))) - (b(i,j)*dx^2*dy*2))/(2*(dx^2 + dy^2));
    u(i,j) = 2*pi*pi*sin(pi*i).*sin(pi*j);
    % Boundary conditions
    d(:,1) = 0;
    d(:,ny) = 0;
    d(1,:) = 0;
    d(nx,:) = 0;
end
%
%
% err = abs(u - d);
The error I get is:
Subscripted assignment dimension mismatch.
Error in poisson (line 39)
u(i,j) = 2*pi*pi*sin(pi*i).*sin(pi*j);
I am not sure why it is not calculating u at every grid point. I tried taking it out of the for loop but that did not help. Any ideas would be appreciated.

This is because i and j are both 1-by-(N-2) vectors, so u(i, j) is an (N-2)-by-(N-2) matrix. However, the expression 2*pi*pi*sin(pi*i).*sin(pi*j) is a 1-by-(N-2) vector.
The dimensions obviously don't match, hence the error.
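To make the mismatch concrete, here is a quick size check (a small sketch, using N = 5 purely for illustration):
N = 5;
u = zeros(N, N);
i = 2:N-1;                              % 1-by-3
j = 2:N-1;                              % 1-by-3
size(u(i, j))                           % returns 3 3, i.e. (N-2)-by-(N-2)
size(2*pi*pi*sin(pi*i).*sin(pi*j))      % returns 1 3, i.e. 1-by-(N-2)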
I'm not sure, but I'm guessing that you meant to do the following:
u(i,j) = 2 * pi * pi * bsxfun(@times, sin(pi * i), sin(pi * j)');
Alternatively, you can use basic matrix multiplication to produce an (N-2)-by-(N-2) like so:
u(i, j) = 2 * pi * pi * sin(pi * i') * sin(pi * j); %// Note the transpose
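As a side note, on MATLAB R2016b or newer, implicit expansion lets you drop bsxfun entirely (same idea, sketched below). Also, sin(pi*i) evaluates the sine at the integer grid indices rather than at the grid coordinates, so if you actually want the exact solution on the grid you probably meant sin(pi*x(i)) and sin(pi*y(j)) there:
u(i, j) = 2 * pi * pi * sin(pi * i.') .* sin(pi * j); %// column .* row expands to an (N-2)-by-(N-2) matrix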
P.S.: it is recommended not to use i and j as variable names, since they shadow MATLAB's built-in imaginary units.

Related

MATLAB: Linear Poisson Solver using iterative method

I am in the process of writing a 2D non-linear Poisson solver. One intermediate test I performed is using my non-linear solver to solve a "linear" Poisson equation. Unfortunately, my non-linear solver is giving me incorrect results, unlike when I solve the same equation directly in MATLAB using the backslash operator.
Non-linear solver (iterative) code, which gives the incorrect results:
clearvars; clc; close all;
Nx = 20;
Ny = 20;
Lx = 2*pi;
x = (0:Nx-1)/Nx*2*pi; % x coordinate in Fourier, equally spaced grid
kx = fftshift(-Nx/2:Nx/2-1); % wave number vector
kx(kx==0) = 1; %helps with error: matrix ill scaled because of 0s
ygl = -cos(pi*(0:Ny)/Ny)'; %Gauss-Lobatto chebyshev points
%make mesh
[X,Y] = meshgrid(x,ygl);
%Chebyshev matrix:
VGL = cos(acos(ygl(:))*(0:Ny));
dVGL = diag(1./sqrt(1-ygl.^2))*sin(acos(ygl)*(0:Ny))*diag(0:Ny);
dVGL(1,:) = (-1).^(1:Ny+1).*(0:Ny).^2;
dVGL(Ny+1,:) = (0:Ny).^2;
%Diferentiation matrix for Gauss-Lobatto points
Dgl = dVGL/VGL;
D = Dgl; %first-order derivative matrix
D2 = Dgl*Dgl;
%linear Poisson solved iteratively
Igl = speye(Ny+1);
Ig = speye(Ny);
ZNy = diag([0 ones(1,Ny-1) 0]);
div_x_act_on_grad_x = -Igl; % must be multiplied with kx(m)^2 for each Fourier mode
div_y_act_on_grad_y = D * ZNy *D;
u = Y.^2 .* sin( (2*pi / Lx) * X);
uh = fft(u,[],2);
uold = ones(size(u));
uoldk = fft(uold,[],2);
max_iter = 500;
err_max = 1e-5; %change to 1e-8;
for iterations = 1:max_iter
for m = 1:length(kx)
L = div_x_act_on_grad_x * (kx(m)^2) + div_y_act_on_grad_y;
d2xk = div_x_act_on_grad_x * (kx(m)^2) * uoldk;
d2x = real(ifft(d2xk,[],2));
d2yk = div_y_act_on_grad_y *uoldk;
d2y = real(ifft(d2yk,[],2));
ffh = d2xk + d2yk;
phikmax_old = max(max(abs(uoldk)));
unewh(:,m) = L\(ffh(:,m));
end
phikmax = max(max(abs(unewh)));
if phikmax == 0 %norm(unewh,inf) == 0
it_error = err_max /2;
else
it_error = abs( phikmax - phikmax_old) / phikmax;
end
if it_error < err_max
break;
end
end
unew = real(ifft(unewh,[],2));
DEsol = unew - u;
figure
surf(X, Y, unew);
colorbar;
title('Numerical solution of \nabla^2 u = f');
figure
surf(X, Y, u);
colorbar;
title('Exact solution of \nabla^2 u = f');
Direct solver
ubar = Y.^2 .* sin( (2*pi / Lx) * X);
uh = fft(ubar,[],2);
for m = 1:length(kx)
L = div_x_act_on_grad_x * (kx(m)^2) + div_y_act_on_grad_y;
d2xk = div_x_act_on_grad_x * (kx(m)^2) * uh;
d2x = real(ifft(d2xk,[],2));
d2yk = div_y_act_on_grad_y *uh;
d2y = real(ifft(d2yk,[],2));
ffh = d2xk + d2yk;
%----------------
unewh(:,m) = L\(ffh(:,m));
end
How can I fix code 1 to get the same results as code 2?

Berlekamp Massey Algorithm for BCH simplified binary version

I am trying to follow Lin and Costello's explanation of the simplified BM algorithm for the binary case on page 210 of chapter 6, with no success in finding the error locator polynomial.
I'm trying to implement it in MATLAB like this:
function [locator_polynom] = compute_error_locator(syndrome, t, m, field, alpha_powers)
%
% Initial conditions for the BM algorithm
polynom_length = 2*t;
syndrome = [syndrome; zeros(3, 1)];
% Delta matrix storing the powers of alpha in the corresponding place
delta_rho = uint32(zeros(polynom_length, 1)); delta_rho(1)=1;
delta_next = uint32(zeros(polynom_length, 1));
% Preliminary values
n_max = uint32(2^m - 1);
% Initialize step mu = 1
delta_next(1) = 1; delta_next(2) = syndrome(1); % 1 + S1*X
% The discrepancy is stored in polynomial representation as uint32 numbers
value = gf_mul_elements(delta_next(2), syndrome(2), field, alpha_powers, n_max);
discrepancy_next = bitxor(syndrome(3), value);
% The degree of the locator polynomial
locator_degree_rho = 0;
locator_degree_next = 1;
% Update all values
locator_polynom = delta_next;
delta_current = delta_next;
discrepancy_rho = syndrome(1);
discrepancy_current = discrepancy_next;
locator_degree_current = locator_degree_next;
rho = 0; % The row with the maximum value of 2mu - l starts at 1
for i = 1:t % Only the even steps are needed (so make t out of 2*t)
if discrepancy_current ~= 0
% Compute the correction factor
correction_factor = uint32(zeros(polynom_length, 1));
x_exponent = 2*(i - rho);
if (discrepancy_current == 1 || discrepancy_rho == 1)
d_mu_times_rho = discrepancy_current * discrepancy_rho;
else
alpha_discrepancy_mu = alpha_powers(discrepancy_current);
alpha_discrepancy_rho = alpha_powers(discrepancy_rho);
alpha_inver_discrepancy_rho = n_max - alpha_discrepancy_rho;
% The alpha power for dmu * drho^-1 is
alpha_d_mu_times_rho = alpha_discrepancy_mu + alpha_inver_discrepancy_rho;
% Equivalent to aux mod(2^m - 1)
alpha_d_mu_times_rho = alpha_d_mu_times_rho - ...
n_max * uint32(alpha_d_mu_times_rho > n_max);
d_mu_times_rho = field(alpha_d_mu_times_rho);
end
correction_factor(x_exponent+1) = d_mu_times_rho;
correction_factor = gf_mul_polynoms(correction_factor,...
delta_rho,...
field, alpha_powers, n_max);
% Finally we add the correction factor to get the new delta
delta_next = bitxor(delta_current, correction_factor(1:polynom_length));
% Update used data
l = polynom_length;
while delta_next(l) == 0 && l>0
l = l - 1;
end
locator_degree_next = l-1;
% Update previous maximum if the degree is higher than recorded
if (2*i - locator_degree_current) > (2*rho - locator_degree_rho)
locator_degree_rho = locator_degree_current;
delta_rho = delta_current;
discrepancy_rho = discrepancy_current;
rho = i;
end
else
% If the discrepancy is 0, the locator polynomial for this step
% is passed to the next one. It satisfies all of Newton's equations
% up to this point.
delta_next = delta_current;
end
% Compute the discrepancy for the next step
syndrome_start_index = 2 * i + 3;
discrepancy_next = syndrome(syndrome_start_index); % First value
for k = 1:locator_degree_next
value = gf_mul_elements(delta_next(k + 1), ...
syndrome(syndrome_start_index - k), ...
field, alpha_powers, n_max);
discrepancy_next = bitxor(discrepancy_next, value);
end
% Update all values
locator_polynom = delta_next;
delta_current = delta_next;
discrepancy_current = discrepancy_next;
locator_degree_current = locator_degree_next;
end
end
I'm trying to see what's wrong but I can't. It works for the examples in the book, but not always. As an aside: to compute the discrepancy, S_(2mu+3) is needed, but when I have only 24 syndrome coefficients, how is it computed at step 11, where 2*11 + 3 = 25?
Thanks in advance!
It turns out the code is OK. I made a different implementation based on Error Correction and Coding: Mathematical Methods, and it gives the same result. My problem is in the Chien search.
Code for the interested:
function [c] = compute_error_locator_v2(syndrome, m, field, alpha_powers)
%
% Initial conditions for the BM algorithm
% Preliminary values
N = length(syndrome);
n_max = uint32(2^m - 1);
polynom_length = N/2 + 1;
L = 0; % The current length of the LFSR
% The current connection polynomial
c = uint32(zeros(polynom_length, 1)); c(1) = 1;
% The connection polynomial before last length change
p = uint32(zeros(polynom_length, 1)); p(1) = 1;
l = 1; % l is k - m, the amount of shift in update
dm = 1; % The previous discrepancy
for k = 1:2:N % For k = 1 to N in steps of 2
% ========= Compute discrepancy ==========
d = syndrome(k);
for i = 1:L
aux = gf_mul_elements(c(i+1), syndrome(k-i), field, alpha_powers, n_max);
d = bitxor(d, aux);
end
if d == 0 % No change in polynomial
l = l + 1;
else
% ======== Update c ========
t = c;
% Compute the correction factor
correction_factor = uint32(zeros(polynom_length, 1));
% This is d * dm^-1
dd_sum = modulo(alpha_powers(d) + n_max - alpha_powers(dm), n_max);
for i = 0:polynom_length - 1
if p(i+1) ~= 0
% Here we compute d*d^-1*p(x_i)
ddp_sum = modulo(dd_sum + alpha_powers(p(i+1)), n_max);
if ddp_sum == 0
correction_factor(i + l + 1) = 1;
else
correction_factor(i + l + 1) = field(ddp_sum);
end
end
end
% Finally we add the correction factor to get the new locator
c = bitxor(c, correction_factor);
if (2*L >= k) % No length change in update
l = l + 1;
else
p = t;
L = k - L;
dm = d;
l = 1;
end
end
l = l + 1;
end
end
The code comes from this implementation of the Massey algorithm

Index exceeds matrix dimensions error in Runge-Kutta method: Matlab

I'm trying to make a time stepping code using the 4th order Runge-Kutta method but am running into issues indexing one of my values properly. My code is:
clc;
clear all;
L = 32; M = 32; N = 32; % No. of elements
Lx = 2; Ly = 2; Lz = 2; % Size of each element
dx = Lx/L; dy = Ly/M; dz = Lz/N; % Step size
Tt = 1;
t0 = 0; % Initial condition
T = 50; % Final time
dt = (Tt-t0)/T; % Determining time step interval
% Wave characteristics
H = 2; % Wave height
a = H/2; % Amplitude
Te = 6; % Period
omega = 2*pi/Te; % Wave rotational frequency
d = 25; % Water depth
x = 0; % Location of cylinder axis
u0(1:L,1:M,1:N,1) = 0; % Setting up solution space matrix (u values)
v0(1:L,1:M,1:N,1) = 0; % Setting up solution space matrix (v values)
w0(1:L,1:M,1:N,1) = 0; % Setting up solution space matrix (w values)
[k,L] = disp(d,omega); % Solving for k and wavelength using Newton-Raphson function
%u = zeros(1,50);
%v = zeros(1,50);
%w = zeros(1,50);
time = 1:1:50;
for t = 1:T
for i = 1:L
for j = 1:M
for k = 1:N
eta(i,j,k,t) = a*cos(omega*time(i,j,k,t));
u(i,j,k,1) = u0(i,j,k,1);
v(i,j,k,1) = v0(i,j,k,1);
w(i,j,k,1) = w0(i,j,k,1);
umag(i,j,k,t) = a*omega*(cosh(k*(d+eta(i,j,k,t))))/sinh(k*d);
vmag(i,j,k,t) = 0;
wmag(i,j,k,t) = -a*omega*(sinh(k*(d+eta(i,j,k,t))))/sinh(k*d);
uRHS(i,j,k,t) = umag(i,j,k,t)*cos(k*x-omega*t);
vRHS(i,j,k,t) = vmag(i,j,k,t)*sin(k*x-omega*t);
wRHS(i,j,k,t) = wmag(i,j,k,t)*sin(k*x-omega*t);
k1x(i,j,k,t) = dt*uRHS(i,j,k,t);
k2x(i,j,k,t) = dt*(0.5*k1x(i,j,k,t) + dt*uRHS(i,j,k,t));
k3x(i,j,k,t) = dt*(0.5*k2x(i,j,k,t) + dt*uRHS(i,j,k,t));
k4x(i,j,k,t) = dt*(k3x(i,j,k,t) + dt*uRHS(i,j,k,t));
u(i,j,k,t+1) = u(i,j,k,t) + (1/6)*(k1x(i,j,k,t) + 2*k2x(i,j,k,t) + 2*k3x(i,j,k,t) + k4x(i,j,k,t));
k1y(i,j,k,t) = dt*vRHS(i,j,k,t);
k2y(i,j,k,t) = dt*(0.5*k1y(i,j,k,t) + dt*vRHS(i,j,k,t));
k3y(i,j,k,t) = dt*(0.5*k2y(i,j,k,t) + dt*vRHS(i,j,k,t));
k4y(i,j,k,t) = dt*(k3y(i,j,k,t) + dt*vRHS(i,j,k,t));
v(i,j,k,t+1) = v(i,j,k,t) + (1/6)*(k1y(i,j,k,t) + 2*k2y(i,j,k,t) + 2*k3y(i,j,k,t) + k4y(i,j,k,t));
k1z(i,j,k,t) = dt*wRHS(i,j,k,t);
k2z(i,j,k,t) = dt*(0.5*k1z(i,j,k,t) + dt*wRHS(i,j,k,t));
k3z(i,j,k,t) = dt*(0.5*k2z(i,j,k,t) + dt*wRHS(i,j,k,t));
k4z(i,j,k,t) = dt*(k3z(i,j,k,t) + dt*wRHS(i,j,k,t));
w(i,j,k,t+1) = w(i,j,k,t) + (1/6)*(k1z(i,j,k,t) + 2*k2z(i,j,k,t) + 2*k3z(i,j,k,t) + k4z(i,j,k,t));
a(i,j,k,t+1) = ((u(i,j,k,t+1))^2 + (v(i,j,k,t+1))^2 + (w(i,j,k,t+1))^2)^0.5;
end
end
end
end
At the moment, the values seem to be fine for the first iteration, but then I get the error "Index exceeds matrix dimensions" on the line calculating eta. I understand that I am not correctly indexing the eta value but am not sure how to correct this.
My goal is to update the value of eta for each loop of t and then use that new eta value for the rest of the calculations.
I'm still quite new to programming and am trying to understand indexing, especially in 3 or 4 dimensional matrices and would really appreciate any advice in correctly calculating this value.
Thanks in advance for any advice!
You declare
time = 1:1:50;
which is just a row vector, but you access it here
eta(i,j,k,t) = a*cos(omega*time(i,j,k,t));
as if it were an array with 4 dimensions.
To correctly access element x of time, you need to use the syntax
time(1,x);
(as it is a 1-by-50 array).
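So, assuming the intention is simply to evaluate the elevation at time step t, the offending line would become something like:
eta(i,j,k,t) = a*cos(omega*time(1,t)); % or just time(t), since linear indexing also works on a row vector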

Matlab: How to fix Least Mean square algorithm code

I am studying the Least Mean Squares (LMS) algorithm and saw this code. Based on the algorithm steps, the calculation of the error and the weight updates looks alright. However, it fails to give the correct output. Can somebody please help me fix the problem? The code has been taken from:
http://www.mathworks.com/matlabcentral/fileexchange/35670-lms-algorithm-implementation/content/lms.m
clc
close all
clear all
N=input('length of sequence N = ');
t=[0:N-1];
w0=0.001; phi=0.1;
d=sin(2*pi*[1:N]*w0+phi);
x=d+randn(1,N)*0.5;
w=zeros(1,N);
mu=input('mu = ');
for i=1:N
e(i) = d(i) - w(i)' * x(i);
w(i+1) = w(i) + mu * e(i) * x(i);
end
for i=1:N
yd(i) = sum(w(i)' * x(i));
end
subplot(221),plot(t,d),ylabel('Desired Signal'),
subplot(222),plot(t,x),ylabel('Input Signal+Noise'),
subplot(223),plot(t,e),ylabel('Error'),
subplot(224),plot(t,yd),ylabel('Adaptive Desired output')
EDIT
The code from the answer:
N = 200;
M = 5;
w=zeros(M,N);
mu=0.2;%input('mu = ');
y(1) = 0.0;
y(2) = 0.0;
for j = 3:N
y(j) = 0.95*y(j-1) - 0.195*y(j-2);
end
x = y+randn(1,N)*0.5;
%x= y;
d = y;
for i=(M+1):N
e(i) = d(i) - x((i-(M)+1):i)*w(:,i);
w(:,i+1) = w(:,i) + mu * e(i) * x((i-(M)+1):i)';
end
for i=(M+1):N
yd(i) = x((i-(M)+1):i)*w(:,i);
end
The weight matrix w, which stores the coefficients, is all zeros, meaning that the LMS equations are not working correctly.
I also do not find any mistake in your code. But I doubt that this algorithm is suitable for this kind of noise. You will get better results when using a filter of higher order (M in this case):
M = 5;
w=zeros(M,N);
mu=0.2;%input('mu = ');
for i=(M+1):N
e(i) = d(i) - x((i-(M)+1):i)*w(:,i);
w(:,i+1) = w(:,i) + mu * e(i) * x((i-(M)+1):i)';
end
for i=(M+1):N
yd(i) = x((i-(M)+1):i)*w(:,i);
end
N=input('length of sequence N = ');
t=[0:N-1];
w0=0.001; phi=0.1;
d=sin(2*pi*[1:N]*w0+phi);
x=d+randn(1,N)*0.5;
w=zeros(1,N);
mu=input('mu = ');
for i=1:N
yd(i)=w*x';
e(i) = d(i) - w * x';
for m=1:N
w(m) = w(m) + mu * e(i) * x(m);
end
end
subplot(221),plot(t,d),ylabel('Desired Signal'),
subplot(222),plot(t,x),ylabel('Input Signal+Noise'),
subplot(223),plot(t,e),ylabel('Error'),
subplot(224),plot(t,yd),ylabel('Adaptive Desired output')
What you were missing was multiplying the error term in each iteration by every sample of the input, and updating each weight separately.
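Equivalently, the inner loop over m can be collapsed into a single vectorized update that does exactly the same arithmetic:
w = w + mu * e(i) * x; % update all N weights at once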

Recomposing vector input to algorithm from matrix output

I've written some code to implement an algorithm that takes as input a vector q of real numbers, and returns as an output a complex matrix R. The Matlab code below produces a plot showing the input vector q and the output matrix R.
Given only the complex matrix output R, I would like to obtain the input vector q. Can I do this using least-squares optimization? Since there is a recursive running sum in the code (rs_r and rs_i), the calculation for a column of the output matrix is dependent on the calculation of the previous column.
Perhaps a non-linear optimization can be set up to recompose the input vector q from the output matrix R?
Looking at this in another way, I've used an algorithm to compute a matrix R. I want to run the algorithm "in reverse," to get the input vector q from the output matrix R.
If there is no way to recompose the starting values from the output, thereby treating the problem as a "black box," then perhaps the mathematics of the model itself can be used in the optimization? The program evaluates the following equation:
The Utilde(tau, omega) is the output matrix R. The tau (time) variable comprises the columns of the response matrix R, whereas the omega (frequency) variable comprises the rows of the response matrix R. The integration is performed as a recursive running sum from tau = 0 up to the current tau timestep.
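The equation image from the original post is not reproduced here. As a hedged reconstruction read directly from the get_response code below (forward case, ginv = 0), the per-column quantity appears to be

\tilde{U}(\tau,\omega) \approx \frac{1}{\Lambda(\tau,\omega)} \exp\!\left( i \int_0^{\tau} \omega \left[ \left( \tfrac{\omega}{\omega_h} \right)^{\gamma(\tau')} - 1 \right] d\tau' \right), \qquad \gamma(\tau) = \frac{1}{\pi Q(\tau)},

where \Lambda = (\beta + \sigma^2)/(\beta^2 + \sigma^2) is a stabilized amplitude term, \beta(\tau,\omega) = \exp\!\left( -\int_0^{\tau} \tfrac{\omega}{2 Q(\tau')} \left( \tfrac{\omega}{\omega_h} \right)^{-\gamma(\tau')} d\tau' \right), and both integrals are evaluated as the trapezoidal running sums rs_i and rs_r in the code.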
The program posted below produces two plots: the input vector q and the magnitude of the output matrix R.
Here is the full program code:
N = 1001;
q = zeros(N, 1); % here is the input
q(1:200) = 55;
q(201:300) = 120;
q(301:400) = 70;
q(401:600) = 40;
q(601:800) = 100;
q(801:1001) = 70;
dt = 0.0042;
fs = 1 / dt;
wSize = 101;
Glim = 20;
ginv = 0;
R = get_response(N, q, dt, wSize, Glim, ginv); % R is output matrix
rows = wSize;
cols = N;
figure; plot(q); title('q value input as vector');
ylim([0 200]); xlim([0 1001])
figure; imagesc(abs(R)); title('Matrix output of algorithm')
colorbar
Here is the function that performs the calculation:
function response = get_response(N, Q, dt, wSize, Glim, ginv)
fs = 1 / dt;
Npad = wSize - 1;
N1 = wSize + Npad;
N2 = floor(N1 / 2 + 1);
f = (fs/2)*linspace(0,1,N2);
omega = 2 * pi .* f';
omegah = 2 * pi * f(end);
sigma2 = exp(-(0.23*Glim + 1.63));
sign = 1;
if(ginv == 1)
sign = -1;
end
ratio = omega ./ omegah;
rs_r = zeros(N2, 1);
rs_i = zeros(N2, 1);
termr = zeros(N2, 1);
termi = zeros(N2, 1);
termr_sub1 = zeros(N2, 1);
termi_sub1 = zeros(N2, 1);
response = zeros(N2, N);
% cycle over cols of matrix
for ti = 1:N
term0 = omega ./ (2 .* Q(ti));
gamma = 1 / (pi * Q(ti));
% calculate for the real part
if(ti == 1)
Lambda = ones(N2, 1);
termr_sub1(1) = 0;
termr_sub1(2:end) = term0(2:end) .* (ratio(2:end).^-gamma);
else
termr(1) = 0;
termr(2:end) = term0(2:end) .* (ratio(2:end).^-gamma);
rs_r = rs_r - dt.*(termr + termr_sub1);
termr_sub1 = termr;
Beta = exp( -1 .* -0.5 .* rs_r );
Lambda = (Beta + sigma2) ./ (Beta.^2 + sigma2); % vector
end
% calculate for the complex part
if(ginv == 1)
termi(1) = 0;
termi(2:end) = (ratio(2:end).^(sign .* gamma) - 1) .* omega(2:end);
else
termi = (ratio.^(sign .* gamma) - 1) .* omega;
end
rs_i = rs_i - dt.*(termi + termi_sub1);
termi_sub1 = termi;
integrand = exp( 1i .* -0.5 .* rs_i );
if(ginv == 1)
response(:,ti) = Lambda .* integrand;
else
response(:,ti) = (1 ./ Lambda) .* integrand;
end
end % ti loop
No, you cannot do so unless you know the "model" itself for this process. If you intend to treat the process as a complete black box, then it is impossible in general, although in any specific instance, anything can happen.
Even if you DO know the underlying process, then it may still not work, as any least squares estimator is dependent on the starting values, so if you do not have a good guess there, it may converge to the wrong set of parameters.
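Just to illustrate the point about starting values: if the forward model is available as a function, a generic least-squares setup would look roughly like the sketch below. This is hypothetical, not the poster's code: model_response stands in for a forward model such as get_response with everything except q held fixed, and lsqnonlin requires the Optimization Toolbox.
q0 = 60 * ones(N, 1);                                            % initial guess; the solution found depends on it
resfun = @(q) reshape(abs(model_response(q)) - abs(R), [], 1);   % residual vector against the observed matrix
q_est = lsqnonlin(resfun, q0);                                   % converges to a local minimum near q0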
It turns out that by using the mathematics of the model, the input can be estimated. This is not true in general, but for my problem it seems to work. The cumulative integral is eliminated by a partial derivative.
N = 1001;
q = zeros(N, 1);
q(1:200) = 55;
q(201:300) = 120;
q(301:400) = 70;
q(401:600) = 40;
q(601:800) = 100;
q(801:1001) = 70;
dt = 0.0042;
fs = 1 / dt;
wSize = 101;
Glim = 20;
ginv = 0;
R = get_response(N, q, dt, wSize, Glim, ginv);
rows = wSize;
cols = N;
cut_val = 200;
imagLogR = imag(log(R));
Mderiv = zeros(rows, cols-2);
for k = 1:rows
val = deriv_3pt(imagLogR(k,:), dt);
val(val > cut_val) = 0;
Mderiv(k,:) = val(1:end-1);
end
disp('Running iteration');
q0 = 10;
q1 = 500;
NN = cols - 2;
qout = zeros(NN, 1);
for k = 1:NN
data = Mderiv(:,k);
qout(k) = fminbnd(@(q) curve_fit_to_get_q(q, dt, rows, data), q0, q1);
end
figure; plot(q); title('q value input as vector');
ylim([0 200]); xlim([0 1001])
figure;
plot(qout); title('Reconstructed q')
ylim([0 200]); xlim([0 1001])
Here are the supporting functions:
function output = deriv_3pt(x, dt)
% Function to compute dx/dt using the 3pt symmetrical rule
% dt is the timestep
N = length(x);
N0 = N - 1;
output = zeros(N0, 1);
denom = 2 * dt;
for k = 2:N0
output(k - 1) = (x(k+1) - x(k-1)) / denom;
end
function sse = curve_fit_to_get_q(q, dt, rows, data)
fs = 1 / dt;
N2 = rows;
f = (fs/2)*linspace(0,1,N2); % vector for frequency along cols
omega = 2 * pi .* f';
omegah = 2 * pi * f(end);
ratio = omega ./ omegah;
gamma = 1 / (pi * q);
termi = ((ratio.^(gamma)) - 1) .* omega;
Error_Vector = termi - data;
sse = sum(Error_Vector.^2);