Mel-frequency function: error with matrix dimensions - matlab

I'm trying to make a prototype audio recognition system by following this link: http://www.ifp.illinois.edu/~minhdo/teaching/speaker_recognition/. It is quite straightforward, so there is almost nothing to worry about, but my problem is with the mel-frequency function. Here is the code as provided on the website:
function m = melfb(p, n, fs)
% MELFB Determine matrix for a mel-spaced filterbank
%
% Inputs: p number of filters in filterbank
% n length of fft
% fs sample rate in Hz
%
% Outputs: x a (sparse) matrix containing the filterbank amplitudes
% size(x) = [p, 1+floor(n/2)]
%
% Usage: For example, to compute the mel-scale spectrum of a
% column-vector signal s, with length n and sample rate fs:
%
% f = fft(s);
% m = melfb(p, n, fs);
% n2 = 1 + floor(n/2);
% z = m * abs(f(1:n2)).^2;
%
% z would contain p samples of the desired mel-scale spectrum
%
% To plot filterbanks e.g.:
%
% plot(linspace(0, (12500/2), 129), melfb(20, 256, 12500)'),
% title('Mel-spaced filterbank'), xlabel('Frequency (Hz)');
f0 = 700 / fs;
fn2 = floor(n/2);
lr = log(1 + 0.5/f0) / (p+1);
% convert to fft bin numbers with 0 for DC term
bl = n * (f0 * (exp([0 1 p p+1] * lr) - 1));
b1 = floor(bl(1)) + 1;
b2 = ceil(bl(2));
b3 = floor(bl(3));
b4 = min(fn2, ceil(bl(4))) - 1;
pf = log(1 + (b1:b4)/n/f0) / lr;
fp = floor(pf);
pm = pf - fp;
r = [fp(b2:b4) 1+fp(1:b3)];
c = [b2:b4 1:b3] + 1;
v = 2 * [1-pm(b2:b4) pm(1:b3)];
m = sparse(r, c, v, p, 1+fn2);
But it gave me an error:
Error using * Inner matrix dimensions must agree.
Error in MFFC (line 17) z = m * abs(f(1:n2)).^2;
When I include these 2 lines just before line 17:
size(m)
size(abs(f(1:n2)).^2)
It gave me:
ans =
20 65
ans =
1 65
So should I transpose the second matrix? Or should I interpret this as a row-wise multiplication and modify the code?
Edit: Here is the main function (I simply run MFCC()):
function result = MFFC()
[y Fs] = audioread('s1.wav');
% sound(y,Fs)
Frames = Frame_Blocking(y,128);
Windowed = Windowing(Frames);
spectrum = FFT_After_Windowing(Windowed);
%imagesc(mag2db(abs(spectrum)))
p = 20;
S = size(spectrum);
n = S(2);
f = spectrum;
m = melfb(p, n, Fs);
n2 = 1 + floor(n/2);
size(m)
size(abs(f(1:n2)).^2)
z = m * abs(f(1:n2)).^2;
result = z;
And here are the auxiliary functions:
function f = Frame_Blocking(y,N)
% Parameters: M = 100, N = 256
% Default : M = 100; N = 256;
M = fix(N/3);
Frames = [];
first = 1; last = N;
len = length(y);
while last <= len
Frames = [Frames; y(first:last)'];
first = first + M;
last = last + M;
end;
if last < len
first = first + M;
Frames = [Frames; y(first : len)];
end
f = Frames;
function f = Windowing(Frames)
S = size(Frames);
N = S(2);
M = S(1);
Windowed = zeros(M,N);
nn = 1:N;
wn = 0.54 - 0.46*cos(2*pi/(N-1)*(nn-1));
for ii = 1:M
Windowed(ii,:) = Frames(ii,:).*wn;
end;
f = Windowed;
function f = FFT_After_Windowing(Windowed)
spectrum = fft(Windowed);
f = spectrum;

Transpose s or transpose the resulting f (it's just a matter of convention).
There is nothing wrong with the melfb function you are using, merely with the dimensions of the signal in the example you are trying to run (the commented usage example, reproduced below).
% f = fft(s);
% m = melfb(p, n, fs);
% n2 = 1 + floor(n/2);
% z = m * abs(f(1:n2)).^2;
The example assumes that you are using a "column-vector signal s". From the size of your Fourier-transformed f (computed via fft, which preserves the input signal's orientation), your input signal s is a row vector.
The part that gives you the error is the actual filtering operation, which requires multiplying a p x n2 matrix with an n2 x 1 column vector (i.e., each filter's response is multiplied pointwise with the Fourier transform of the input signal). Since your input s is 1 x n, your f will be 1 x n, and the final matrix-to-vector multiplication for z will give an error.
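A minimal sketch of two equivalent fixes for the usage example (assuming p, n, fs, and a signal s as in the melfb help text; names match that example):
% Option 1: force the signal to be a column vector before the FFT
f = fft(s(:));
m = melfb(p, n, fs);
n2 = 1 + floor(n/2);
z = m * abs(f(1:n2)).^2;
% Option 2: keep a row-vector f and transpose the power-spectrum slice instead
% z = m * (abs(f(1:n2)).^2).';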

Thanks to gevang's answer, I was able to find my mistake. Here is how I modified the code:
function result = MFFC()
[y Fs] = audioread('s2.wav');
% sound(y,Fs)
Frames = Frame_Blocking(y,128);
Windowed = Windowing(Frames);
%spectrum = FFT_After_Windowing(Windowed');
%imagesc(mag2db(abs(spectrum)))
p = 20;
%S = size(spectrum);
%n = S(2);
%f = spectrum;
S1 = size(Windowed);
n = S1(2);
n2 = 1 + floor(n/2);
%z = zeros(S1(1),n2);
z = zeros(20,S1(1));
for ii=1: S1(1)
s = (FFT_After_Windowing(Windowed(ii,:)'));
f = fft(s);
m = melfb(p,n,Fs);
% n2 = 1 + floor(n/2);
z(:,ii) = m * abs(f(1:n2)).^2;
end;
%f = FFT_After_Windowing(Windowed');
%S = size(f);
%n = S(2);
%size(f)
%m = melfb(p, n, Fs);
%n2 = 1 + floor(n/2);
%size(m)
%size(abs(f(1:n2)).^2)
%z = m * abs(f(1:n2)).^2;
result = z;
As you can see, I naively assumed that the function deals with row-wise signals, but in fact it expects column vectors (and maybe column-wise matrices). So I iterate through each frame (row) of the input matrix, transpose it into a column vector, and then combine the results.
But I don't think this is efficient, vectorized code. I also still can't figure out how to apply the operation to all frames of the input matrix (Windowed, after the windowing step) at once instead of using a loop.
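For reference, a hedged vectorized sketch (assuming Windowed is a frames-by-samples matrix, as produced by Frame_Blocking and Windowing above): fft operates column-wise on a matrix, so transposing once lets the filterbank be applied to every frame in a single matrix product, with no loop.
p = 20;
[numFrames, n] = size(Windowed);
n2 = 1 + floor(n/2);
m = melfb(p, n, Fs);            % p x n2 filterbank
F = fft(Windowed.');            % n x numFrames, one FFT per frame (column)
z = m * abs(F(1:n2, :)).^2;     % p x numFrames mel-scale spectra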


1D finite element method in the Hermite basis (P3C1) - Problem of solution calculation

I am currently working on solving the problem $-\alpha u'' + \beta u = f$ with Neumann conditions on the boundary, using the finite element method in MATLAB.
I managed to set up code that works for P1 and P2 Lagrange finite elements (i.e., linear and quadratic), and the results are good!
I am trying to implement the finite element method using the Hermite basis. This basis is defined by the following basis functions and derivatives:
syms x
phi(x) = [2*x^3-3*x^2+1,-2*x^3+3*x^2,x^3-2*x^2+x,x^3-x^2]
% Derivative
dphi = [6*x.^2-6*x,-6*x.^2+6*x,3*x^2-4*x+1,3*x^2-2*x]
The problem with the following code is that the solution vector u is not correct. I know that there must be a problem in the S and F element matrix calculation loop, but I can't see where, even though I've been trying to make changes for a week.
Can you give me your opinion? Hopefully someone can see my error.
Thanks a lot,
% -alpha*u'' + beta*u = f
% u'(a) = bd1, u'(b) = bd2;
a = 0;
b = 1;
f = @(x) (1);
alpha = 1;
beta = 1;
% Neumann boundary conditions
bn1 = 1;
bn2 = 0;
syms ue(x)
DE = -alpha*diff(ue,x,2) + beta*ue == f;
du = diff(ue,x);
BC = [du(a)==bn1, du(b)==bn2];
ue = dsolve(DE, BC);
figure
fplot(ue,[a,b], 'r', 'LineWidth',2)
N = 2;
nnod = N*(2+2); % Number of nodes
neq = nnod*1; % Number of equations, one degree of freedom per node
xnod = linspace(a,b,nnod);
nodes = [(1:3:nnod-3)', (2:3:nnod-2)', (3:3:nnod-1)', (4:3:nnod)'];
phi = @(xi)[2*xi.^3-3*xi.^2+1,2*xi.^3+3*xi.^2,xi.^3-2*xi.^2+xi,xi.^3-xi.^2];
dphi = @(xi)[6*xi.^2-6*xi,-6*xi.^2+6*xi,3*xi^2-4*xi+1,3*xi^2-2*xi];
% Here, just calculate the integral using gauss quadrature..
order = 5;
[gp, gw] = gauss(order, 0, 1);
S = zeros(neq,neq);
M = S;
F = zeros(neq,1);
for iel = 1:N
%disp(iel)
inod = nodes(iel,:);
xc = xnod(inod);
h = xc(end)-xc(1);
Se = zeros(4,4);
Me = Se;
fe = zeros(4,1);
for ig = 1:length(gp)
xi = gp(ig);
iw = gw(ig);
Se = Se + dphi(xi)'*dphi(xi)*1/h*1*iw;
Me = Me + phi(xi)'*phi(xi)*h*1*iw;
x = phi(xi)*xc';
fe = fe + phi(xi)' * f(x) * h * 1 * iw;
end
% Assembly
S(inod,inod) = S(inod, inod) + Se;
M(inod,inod) = M(inod, inod) + Me;
F(inod) = F(inod) + fe;
end
S = alpha*S + beta*M;
g = zeros(neq,1);
g(1) = -alpha*bn1;
g(end) = alpha*bn2;
alldofs = 1:neq;
u = zeros(neq,1); %Pre-allocate
F = F + g;
u(alldofs) = S(alldofs,alldofs)\F(alldofs)
Warning: Matrix is singular to working precision.
u = 8×1
NaN
NaN
NaN
NaN
NaN
NaN
NaN
NaN
figure
fplot(ue,[a,b], 'r', 'LineWidth',2)
hold on
plot(xnod, u, 'bo')
for iel = 1:N
inod = nodes(iel,:);
xc = xnod(inod);
U = u(inod);
xi = linspace(0,1,100)';
Ue = phi(xi)*U;
Xe = phi(xi)*xc';
plot(Xe,Ue,'b -')
end
% Gauss function for calculate the integral
function [x, w, A] = gauss(n, a, b)
n = 1:(n - 1);
beta = 1 ./ sqrt(4 - 1 ./ (n .* n));
J = diag(beta, 1) + diag(beta, -1);
[V, D] = eig(J);
x = diag(D);
A = b - a;
w = V(1, :) .* V(1, :);
w = w';
x=x';
end
You can find the same post on the MATLAB site, where it has syntax highlighting.
Thanks
I have tried reading course notes, searching different documentation, and modifying my code, without success.

Error using feval Undefined function or variable 'Sfun'

I have always used R, so I am quite new to MATLAB and am running into some troubleshooting issues. I am running some code for a tensor factorization method (available here: https://github.com/caobokai/tBNE). To start, I tried to run the demo code, which generates simulated data to run the method with; this results in the following errors:
Error using feval
Undefined function or variable 'Sfun'.
Error in OptStiefelGBB (line 199)
[F, G] = feval(fun, X , varargin{:}); out.nfe = 1;
Error in tbne_demo>tBNE_fun (line 124)
S, @Sfun, opts, B, P, X, L, D, W, Y, alpha, beta);
Here is the block of code I am running:
clear
clc
addpath(genpath('./tensor_toolbox'));
addpath(genpath('./FOptM'));
rng(5489, 'twister');
m = 10;
n = 10;
k = 10; % rank for tensor
[X, Z, Y] = tBNE_data(m, n, k); % generate the tensor, guidance and label
[T, W] = tBNE_fun(X, Z, Y, k);
[~, y1] = max(Y, [], 2);
[~, y2] = max(T{3} * W, [], 2);
fprintf('accuracy %3.2e\n', sum(y1 == y2) / n);
function [X, Z, Y] = tBNE_data(m, n, k)
B = randn(m, k);
S = randn(n, k);
A = {B, B, S};
X = ktensor(A);
Z = randn(n, 4);
Y = zeros(n, 2);
l = ceil(n / 2);
Y(1 : l, 1) = 1;
Y(l + 1 : end, 2) = 1;
X = tensor(X);
end
function [T, W] = tBNE_fun(X, Z, Y, k)
% t-BNE computes brain network embedding based on constrained tensor factorization
%
% INPUT
% X: brain networks stacked in a 3-way tensor
% Z: side information
% Y: label information
% k: rank of CP factorization
%
% OUTPUT
% T is the factor tensor containing
% vertex factor matrix B = T{1} and
% subject factor matrix S = T{3}
% W is the weight matrix
%
% Example: see tBNE_demo.m
%
% Reference:
% Bokai Cao, Lifang He, Xiaokai Wei, Mengqi Xing, Philip S. Yu,
% Heide Klumpp and Alex D. Leow. t-BNE: Tensor-based Brain Network Embedding.
% In SDM 2017.
%
% Dependency:
% [1] Matlab tensor toolbox v 2.6
% Brett W. Bader, Tamara G. Kolda and others
% http://www.sandia.gov/~tgkolda/TensorToolbox
% [2] A feasible method for optimization with orthogonality constraints
% Zaiwen Wen and Wotao Yin
% http://www.math.ucla.edu/~wotaoyin/papers/feasible_method_matrix_manifold.html
%% set algorithm parameters
printitn = 10;
maxiter = 200;
fitchangetol = 1e-4;
alpha = 0.1; % weight for guidance
beta = 0.1; % weight for classification loss
gamma = 0.1; % weight for regularization
u = 1e-6;
umax = 1e6;
rho = 1.15;
opts.record = 0;
opts.mxitr = 20;
opts.xtol = 1e-5;
opts.gtol = 1e-5;
opts.ftol = 1e-8;
%% compute statistics
dim = size(X);
normX = norm(X);
numClass = size(Y, 2);
m = dim(1);
n = dim(3);
l = size(Y, 1);
D = [eye(l), zeros(l, n - l)];
L = diag(sum(Z * Z')) - Z * Z';
%% initialization
B = randn(m, k);
P = B;
S = randn(n, k);
S = orth(S);
W = randn(k, numClass);
U = zeros(m, k); % Lagrange multipliers
%% main loop
fit = 0;
for iter = 1 : maxiter
fitold = fit;
% update B
ete = (S' * S) .* (P' * P); % compute E'E
b = 2 * ete + u * eye(k);
c = 2 * mttkrp(X, {B, P, S}, 1) + u * P + U;
B = c / b;
% update P
ftf = (S' * S) .* (B' * B); % compute F'F
b = 2 * ftf + u * eye(k);
c = 2 * mttkrp(X, {B, P, S}, 2) + u * B - U;
P = c / b;
% update U
U = U + u * (P - B);
% update u
u = min(rho * u, umax);
% update S
tic;
[S, out] = OptStiefelGBB(...
S, @Sfun, opts, B, P, X, L, D, W, Y, alpha, beta);
tsolve = toc;
fprintf(...
['[S]: obj val %7.6e, cpu %f, #func eval %d, ', ...
'itr %d, |ST*S-I| %3.2e\n'], ...
out.fval, tsolve, out.nfe, out.itr, norm(S' * S - eye(k), 'fro'));
% update W
H = D * S;
W = (H' * H + gamma * eye(k)) \ H' * Y;
% compute the fit
T = ktensor({B, P, S});
normresidual = sqrt(normX ^ 2 + norm(T) ^ 2 - 2 * innerprod(X, T));
fit = 1 - (normresidual / normX);
fitchange = abs(fitold - fit);
if mod(iter, printitn) == 0
fprintf(' Iter %2d: fitdelta = %7.1e\n', iter, fitchange);
end
% check for convergence
if (iter > 1) && (fitchange < fitchangetol)
break;
end
end
%% clean up final results
T = arrange(T); % columns are normalized
fprintf('factorization error %3.2e\n', fit);
end
I know that there is little context here, but my suspicion is that I need to have Simulink, as Sfun is a Simulink-related function(?). The script requires two toolboxes: tensor_toolbox and FOptM.
Available at:
https://www.sandia.gov/~tgkolda/TensorToolbox/index-2.6.html
https://github.com/andland/FOptM
Thank you so much for your help,
Paul
Although SFun is an often-used abbreviation for a Simulink S-function, in this case the error has nothing to do with Simulink; the name is a coincidence. (There is no Simulink-related function specifically called Sfun; it is just a general term.)
Your error message has @Sfun in it, which is MATLAB's way of creating a function handle to an (m-code) function called Sfun. I'd surmise from the code you've shown that this is a cost function used in the optimization.
If you look at the code that your code is based on (tBNE_fun.m) you'll see that there is a function at the end of the file called Sfun. It is this that you are missing.
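For illustration only, a hypothetical skeleton of that local function (the real objective and gradient are the ones defined at the end of the original tBNE_fun.m): OptStiefelGBB evaluates the handle as [F, G] = feval(fun, X, varargin{:}), so the only structural requirement is a two-output function of the current iterate plus the extra parameters passed after opts.
function [F, G] = Sfun(S, B, P, X, L, D, W, Y, alpha, beta)
% Placeholder objective and gradient with respect to S; replace with the
% expressions from the original file.
F = 0.5 * norm(S, 'fro')^2;
G = S;
end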

Implementing a Kalman Filter in MATLAB using 'ss'

I am trying to implement a Kalman Filter for estimating the state 'x' (displacement and velocity) of an oscillator. The code is below and should be simple to follow.
clear; clc; close all;
% io = csvread('sim.csv');
% u = io(:, 1);
% y = io(:, 2);
% clear io;
% Estimation of state of a single degree-of-freedom oscillator using
% the Kalman filter
% x[n + 1] = A x[n] + B u[n] + w[n]
% y[n] = C x[n] + D u[n] + v[n]
% Here, x[n] is 2 x 1, u[n] is 1 x 1
% A is 2 x 2, B is 2 x 1, C is 1 x 2, D is 1 x 1
%% Code begins here
N = 1000;
u = randn(N, 1); % Synthetic input
y = randn(N, 1); % Synthetic output
%% Definitions
dt = 0.005; % Time step in seconds
T = 1.50; % Oscillator period
zeta = 0.05; % Damping ratio
sv0 = max(abs(u)) * dt;
sd0 = sv0 * dt;
Q = [sd0 ^ 2 0.0; 0.0 sv0 ^ 2]; % Prediction error covariance matrix
smeas = 0.001 * max(abs(u));
R = smeas ^ 2; % Measurement error (co)variance scalar
wn = 2. * pi / T;
c = 2.0 * zeta * wn;
k = wn ^ 2;
A = [0. 1.; -k -c];
Ad = expm(A * dt);
Bd = A \ (Ad - eye(2));
Bd = Bd(:, 2);
C = [-k -c];
D = -1.0;
%% State-space model and Kalman filter
sys = ss(Ad, Bd, C, D, dt, 'inputname', 'u', 'outputname', 'y');
[kest,L,P] = kalman(sys, Q, R, []);
Here is my problem. I get the error: 'In the "kalman(SYS,QN,RN,NN,...)" command, QN must be a real square matrix with at most 1 rows.'.
I thought that QN = Q = const and should be 2 x 2, but it is asking for a scalar. Perhaps I don't understand the difference between Q and QN in MATLAB's 'kalman' help description. Any insights?
Thanks.
MATLAB assumes the process noise is a single stochastic variable, not the two variables that your Q represents.
So you have to add the G and H matrices to your sys like so:
G = eye(2);
H = [0,0];
sys = ss(Ad, [Bd, G], C, [D, H], dt, 'inputname', 'u', 'outputname', 'y');
Just as a reminder, using MATLAB's syntax:
x[n+1] = A x[n] + B u[n] + G w[n]
y[n] = C x[n] + D u[n] + H w[n] + v[n]
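With the plant augmented that way, a hedged sketch of the follow-up call (assuming the Q and R already defined in the question): kalman treats the trailing inputs of sys as the noise channels, so the 2 x 2 Q and the scalar R can now be passed directly.
[kest, L, P] = kalman(sys, Q, R);   % QN = Q (2 x 2), RN = R (scalar)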

Matlab NaN and Inf issue

So, I'm implementing the EM algorithm in Matlab, but my matrices quickly end up contaminated by NaN and Inf values. I think it might be caused by matrix inversions, but I'm not sure that's the only reason.
Here is the code:
function [F, Q, R, x_T, P_T] = em_algo(y, G)
% y_t = G_t' * x_t + v_t     (1x1 = 1xp * px1)
% x_t = F * x_t-1 + w_t      (px1 = pxp * px1)
% G is T*p
p = size(G,2); % p = nb assets ; G = T*p
q = size(y,2); % q = nb observations ; y = T*q
T = size(y,1); % y is T*1
F = eye(p); % = Transition matrix p*p
Q = eye(p); % innovation (v) covariance matrix p*p
R = eye(q); % noise (w) covariance matrix q x q
x_T_old = zeros(p,T);
mu0 = zeros(p,1);
Sigma = eye(p); % Initial state covariance matrix p*p
converged = 0;
i = 0;
max_iter = 60; % only for testing purposes
while ~converged
if i > max_iter
break;
end
% E step = smoothing
fprintf('Iteration %d\n',i);
[x_T,P_T,P_Tm2] = smoother(G,F,Q,R,mu0,Sigma,y);
%x_T
% M step
A = zeros(p,p);
B = zeros(p,p);
C = zeros(p,p);
R = eye(q);
for t = 2:T % eq (9) in EM paper
A = A + (P_T(:,:,t-1) + (x_T(:,t-1)*x_T(:,t-1)'));
end
for t = 2:T % eq (10)
%B = B + (P_Tm2(:,:,t-1) + (x_T(:,t)*x_T(:,t-1)'));
B = B + (P_Tm2(:,:,t) + (x_T(:,t)*x_T(:,t-1)'));
end
for t = 1:T %eq (11)
C = C + (P_T(:,:,t) + (x_T(:,t)*x_T(:,t)'));
end
F = B*inv(A); %eq (12)
Q = (1/T)*(C - (B*inv(A)*B')); % eq (13) pxp
for t = 1:T
bias = y(t) - (G(t,:)*x_T(:,t));
R = R + ((bias*bias') + (G(t,:)*P_T(:,:,t)*G(t,:)'));
end
R = (1/T)*R;
if i>1
err = norm(x_T-x_T_old)/norm(x_T_old);
if err < 1e-4
converged = 1;
end
end
x_T_old = x_T;
i = i+1;
end
fprintf('EM algorithm iterated %d times\n',i);
end
This iterates until convergence (which never happens due to my issue) and calls smoother.m at each iteration:
function [x_T, P_T, P_Tm2] = smoother(G,F,Q,R,mu0,Sigma,y)
% G is T*p
p = size(mu0,1); % mu0 is p*1
T = size(y,1); % y is T*1
J = zeros(p,p,T);
K = zeros(p,T); % gain matrix
x = zeros(p,T);
x(:,1) = mu0;
x_m1 = zeros(p,T);
x_T = zeros(p,T); % x values when we know all the data
% Notation : x = xt given t ; x_m1 = xt given t-1 (m1 stands for minus
% one)
P = zeros(p,p,T);% array of cov(xt|y1...yt), eq (6) in Shumway & Stoffer 1982
P(:,:,1) = Sigma;
P_m1 = zeros(p,p,T); % Same notation ; = cov(xt, xt-1|y1...yt) , eq (7)
P_T = zeros(p,p,T);
P_Tm2 = zeros(p,p,T); % cov(xT, xT-1|y1...yT)
for t = 2:T %starts at t = 2 because at each time t we need info about t-1
x_m1(:,t) = F*x(:,t-1); % eq A3 ; pxp * px1 = px1
P_m1(:,:,t) = (F*P(:,:,t-1)*F') + Q; % A4 ; pxp * pxp = pxp
if nnz(isnan(P_m1(:,:,t)))
error('NaNs in P_m1 at time t = %d',t);
end
if nnz(isinf(P_m1(:,:,t)))
error('Infs in P_m1 at time t = %d',t);
end
K(:,t) = P_m1(:,:,t)*G(t,:)'*pinv((G(t,:)*P_m1(:,:,t)*G(t,:)') + R); %A5 ; pxp * px1 * 1*1 = p*1
%K(:,t) = P_m1(:,:,t)*G(t,:)'/((G(t,:)*P_m1(:,:,t)*G(t,:)') + R); %A5 ; pxp * px1 * 1*1 = p*1
% The matrix inversion seems to generate NaN values which quickly
% contaminate all the other matrices. There is no warning about
% (close to) singular matrices or whatever. The use of pinv()
% instead of inv() seems to solve the problem... but I don't think
% it's the appropriate way to deal with it, there must be something
% wrong elsewhere
if nnz(isnan(K(:,t)))
error('NaNs in K at time t = %d',t);
end
x(:,t) = x_m1(:,t) + (K(:,t)*(y(t)-(G(t,:)*x_m1(:,t)))); %A6
P(:,:,t) = P_m1(:,:,t) - (K(:,t)*G(t,:)*P_m1(:,:,t)); %A7
end
x_T(:,T) = x(:,T);
P_T(:,:,T) = P(:,:,T);
for t = T:-1:2 % we stop at 2 since we need to use t-1
%P_m1 seem to get really huge (x10^22...), might lead to "Inf"
%values which in turn might screw pinv()
%% inv() caused NaN value to appear, pinv seems to solve the issue
J(:,:,t-1) = P(:,:,t-1)*F'*pinv(P_m1(:,:,t)); % A8 pxp * pxp * pxp
%J(:,:,t-1) = P(:,:,t-1)*F'/(P_m1(:,:,t)); % A8 pxp * pxp * pxp
x_T(:,t-1) = x(:,t-1) + J(:,:,t-1)*(x_T(:,t)-(F*x(:,t-1))); %A9 % Becomes NaN during 8th iteration!
P_T(:,:,t-1) = P(:,:,t-1) + J(:,:,t-1)*(P_T(:,:,t)-P_m1(:,:,t))*J(:,:,t-1)'; %A10
nans = [nnz(isnan(J)) nnz(isnan(P_m1)) nnz(isnan(F)) nnz(isnan(x_T)) nnz(isnan(x_m1))];
if nnz(nans)
error('NaN invasion at time t = %d',t);
end
end
P_Tm2(:,:,T) = (eye(p) - K(:,T)*G(T,:))*F*P(:,:,T-1); % %A12
for t = T:-1:3 % stop at 3 because use of t-2
P_Tm2(:,:,t-1) = P_m1(:,:,t-1)*J(:,:,t-2)' + J(:,:,t-1)*(P_Tm2(:,:,t)-F*P(:,:,t-1))*J(:,:,t-2)'; % A11
end
end
The NaNs and Infs start popping around the ~8th iteration.
I guess in there somewhere I'm doing something unholy with my matrices, but I really have no clue about what's wrong. I trust your expertise.
Thanks in advance for the help.
Rody: Here is how I generate the data (it's not "real world" data yet, just some test data generated to check that nothing goes wrong):
T = 500;
nbassets = 3;
G = .1 + randn(T,nbassets); % random walk trajectories
y = (1:T).';
y = 1.01.^y; % 1 * T % Exponential 1% returns curve
Dan: You're right. I indeed lack the math background to really understand how the formulas are derived. I know it doesn't help, but I'm not sure I can remedy that for the time being. :/
Rody: Yes indeed, I arrived at the same conclusion. But I really have no clue what makes it go wrong like that.
Here is a link to the paper :
http://www.stat.pitt.edu/stoffer/em.pdf
The formulas for the smoother are all at the very end, in the appendix. Thanks for your time so far.
As the user appears to have inserted the answer into his question, I will post it here:
As mentioned by @Rody, the cause of the problem was that the use of inv created NaN or Inf values.
The user 'solved' this by using pinv instead.
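A hedged alternative sketch (not part of the original answer): instead of forming an explicit inverse at all, the M-step updates can use MATLAB's matrix division, which solves the linear system directly and issues a warning when A is badly conditioned; the commented-out lines in the smoother already do the same for the gain K and for J.
F = B / A;                       % eq (12), equivalent to B*inv(A) but solved directly
Q = (1/T) * (C - (B / A) * B');  % eq (13) without an explicit inverse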

Recomposing vector input to algorithm from matrix output

I've written some code to implement an algorithm that takes as input a vector q of real numbers, and returns as an output a complex matrix R. The Matlab code below produces a plot showing the input vector q and the output matrix R.
Given only the complex matrix output R, I would like to obtain the input vector q. Can I do this using least-squares optimization? Since there is a recursive running sum in the code (rs_r and rs_i), the calculation for a column of the output matrix is dependent on the calculation of the previous column.
Perhaps a non-linear optimization can be set up to recompose the input vector q from the output matrix R?
Looking at this in another way, I've used an algorithm to compute a matrix R. I want to run the algorithm "in reverse," to get the input vector q from the output matrix R.
If there is no way to recompose the starting values from the output, thereby treating the problem as a "black box," then perhaps the mathematics of the model itself can be used in the optimization? The program evaluates a quantity $\tilde{U}(\tau, \omega)$, which is the output matrix R. The $\tau$ (time) variable indexes the columns of the response matrix R, whereas the $\omega$ (frequency) variable indexes its rows. The integration is performed as a recursive running sum from $\tau = 0$ up to the current $\tau$ timestep.
The program posted below creates the plots of the input q and of abs(R). Here is the full program code:
N = 1001;
q = zeros(N, 1); % here is the input
q(1:200) = 55;
q(201:300) = 120;
q(301:400) = 70;
q(401:600) = 40;
q(601:800) = 100;
q(801:1001) = 70;
dt = 0.0042;
fs = 1 / dt;
wSize = 101;
Glim = 20;
ginv = 0;
R = get_response(N, q, dt, wSize, Glim, ginv); % R is output matrix
rows = wSize;
cols = N;
figure; plot(q); title('q value input as vector');
ylim([0 200]); xlim([0 1001])
figure; imagesc(abs(R)); title('Matrix output of algorithm')
colorbar
Here is the function that performs the calculation:
function response = get_response(N, Q, dt, wSize, Glim, ginv)
fs = 1 / dt;
Npad = wSize - 1;
N1 = wSize + Npad;
N2 = floor(N1 / 2 + 1);
f = (fs/2)*linspace(0,1,N2);
omega = 2 * pi .* f';
omegah = 2 * pi * f(end);
sigma2 = exp(-(0.23*Glim + 1.63));
sign = 1;
if(ginv == 1)
sign = -1;
end
ratio = omega ./ omegah;
rs_r = zeros(N2, 1);
rs_i = zeros(N2, 1);
termr = zeros(N2, 1);
termi = zeros(N2, 1);
termr_sub1 = zeros(N2, 1);
termi_sub1 = zeros(N2, 1);
response = zeros(N2, N);
% cycle over cols of matrix
for ti = 1:N
term0 = omega ./ (2 .* Q(ti));
gamma = 1 / (pi * Q(ti));
% calculate for the real part
if(ti == 1)
Lambda = ones(N2, 1);
termr_sub1(1) = 0;
termr_sub1(2:end) = term0(2:end) .* (ratio(2:end).^-gamma);
else
termr(1) = 0;
termr(2:end) = term0(2:end) .* (ratio(2:end).^-gamma);
rs_r = rs_r - dt.*(termr + termr_sub1);
termr_sub1 = termr;
Beta = exp( -1 .* -0.5 .* rs_r );
Lambda = (Beta + sigma2) ./ (Beta.^2 + sigma2); % vector
end
% calculate for the complex part
if(ginv == 1)
termi(1) = 0;
termi(2:end) = (ratio(2:end).^(sign .* gamma) - 1) .* omega(2:end);
else
termi = (ratio.^(sign .* gamma) - 1) .* omega;
end
rs_i = rs_i - dt.*(termi + termi_sub1);
termi_sub1 = termi;
integrand = exp( 1i .* -0.5 .* rs_i );
if(ginv == 1)
response(:,ti) = Lambda .* integrand;
else
response(:,ti) = (1 ./ Lambda) .* integrand;
end
end % ti loop
No, you cannot do so unless you know the "model" itself for this process. If you intend to treat the process as a complete black box, then it is impossible in general, although in any specific instance, anything can happen.
Even if you DO know the underlying process, it may still not work: any least-squares estimator depends on the starting values, so if you do not have a good guess there, it may converge to the wrong set of parameters.
It turns out that by using the mathematics of the model, the input can be estimated. This is not true in general, but for my problem it seems to work. The cumulative integral is eliminated by a partial derivative.
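A short sketch of the reasoning, with the constants read off the code below (so treat the exact factors as assumptions rather than a definitive derivation): for the ginv = 0 branch, the phase of each column satisfies $\operatorname{imag}(\log \tilde{U}(\tau, \omega)) \approx \int_0^{\tau} \left( (\omega/\omega_h)^{1/(\pi q(\tau'))} - 1 \right) \omega \, d\tau'$, so differentiating along $\tau$ (the deriv_3pt step) removes the running sum and leaves a term that depends only on $q$ at that single timestep; curve_fit_to_get_q then fits $q$ column by column against that derivative.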
N = 1001;
q = zeros(N, 1);
q(1:200) = 55;
q(201:300) = 120;
q(301:400) = 70;
q(401:600) = 40;
q(601:800) = 100;
q(801:1001) = 70;
dt = 0.0042;
fs = 1 / dt;
wSize = 101;
Glim = 20;
ginv = 0;
R = get_response(N, q, dt, wSize, Glim, ginv);
rows = wSize;
cols = N;
cut_val = 200;
imagLogR = imag(log(R));
Mderiv = zeros(rows, cols-2);
for k = 1:rows
val = deriv_3pt(imagLogR(k,:), dt);
val(val > cut_val) = 0;
Mderiv(k,:) = val(1:end-1);
end
disp('Running iteration');
q0 = 10;
q1 = 500;
NN = cols - 2;
qout = zeros(NN, 1);
for k = 1:NN
data = Mderiv(:,k);
qout(k) = fminbnd(@(q) curve_fit_to_get_q(q, dt, rows, data), q0, q1);
end
figure; plot(q); title('q value input as vector');
ylim([0 200]); xlim([0 1001])
figure;
plot(qout); title('Reconstructed q')
ylim([0 200]); xlim([0 1001])
Here are the supporting functions:
function output = deriv_3pt(x, dt)
% Function to compute dx/dt using the 3pt symmetrical rule
% dt is the timestep
N = length(x);
N0 = N - 1;
output = zeros(N0, 1);
denom = 2 * dt;
for k = 2:N0
output(k - 1) = (x(k+1) - x(k-1)) / denom;
end
function sse = curve_fit_to_get_q(q, dt, rows, data)
fs = 1 / dt;
N2 = rows;
f = (fs/2)*linspace(0,1,N2); % vector for frequency along cols
omega = 2 * pi .* f';
omegah = 2 * pi * f(end);
ratio = omega ./ omegah;
gamma = 1 / (pi * q);
termi = ((ratio.^(gamma)) - 1) .* omega;
Error_Vector = termi - data;
sse = sum(Error_Vector.^2);