Matlab: Error when dividing the covariance matrix

I have an expression which is part of the log-likelihood expression for a Gaussian state space model
express =
\sum_{t=2}^{T} \frac{(x(t) - A\,x(t-1))^2}{2Q}
where T = 5 is the number of samples/observations; x is a 2-by-T matrix; Q is the covariance matrix of the process noise, initialized with eye(2).
x =
0.7311 -1.7152 0.2476 3.6643 -1.2870
0.4360 0.3554 0.1981 0.4168 0.2643
A =
0.1950 -0.9500
1.0000 0
Q =
1 0
0 1
I am getting this error:
Error using /
Matrix dimensions must agree.
This is how I have implemented it:
numerator = sum((x(:,2:T)-(A*x(:,1:(T-1)))).^2)
numerator =
2.0732 3.0349 3.2291 1.5365
express = numerator / diag(2*diag(Q))
Should I be taking the diagonal or determinant of Q? Please help in correcting this part. Thank you.

You are squaring the term too early. The (') symbol denotes the conjugate transpose: you need to transpose the term before multiplying it by the inverse of Q and then by the term again. I believe you are trying to calculate
\frac{1}{2} \sum_{t=2}^{T} (x(t) - A\,x(t-1))^\top Q^{-1} (x(t) - A\,x(t-1))
in which case the term you want to sum over is the following,
term = x(:,2:T) - (A*x(:,1:(T-1)));
result = term' * inv(Q) * term
the result of which is a 4x4 matrix. You can then sum this (over both directions, I presume). From equation (7) in the link you mention, you will need to follow this same procedure three times (for R, Q, and V).
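For concreteness, here is a minimal sketch of the whole computation using the question's variables. Note that the per-time-step quadratic forms sit on the diagonal of that 4x4 matrix, so summing the diagonal (the trace) recovers the sum over t; the off-diagonal entries are cross terms between different time steps:
term = x(:,2:T) - A*x(:,1:(T-1));  % 2-by-(T-1): one residual column per time step
M = term' * (Q \ term);            % (T-1)-by-(T-1); Q\term avoids forming inv(Q) explicitly
express = trace(M) / 2             % sum of the per-t quadratic forms, halved as in the question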

Related

Why is the pinv answer not equal to the SVD-method answer in Matlab?

a = [1 2 3
2 4 6
3 6 9];
b = pinv(a);
[U,S,V] = svd(a);
T = S;
T(find(S~=0)) = 1./S(find(S~=0));
svda = V * T' * U';
I found that the pinv method in Matlab uses the SVD decomposition to calculate the pseudo-inverse, so I tried it myself on the matrix a.
As shown above, b should theoretically be equal to svda, but the Matlab results say they are totally different. Why?
b is
0.00510204081632653 0.0102040816326531 0.0153061224489796
0.0102040816326531 0.0204081632653061 0.0306122448979592
0.0153061224489796 0.0306122448979592 0.0459183673469388
svda is
-2.25000000000000 -5.69876639328585e+15 3.79917759552390e+15
-2.14021397132170e+15 1.33712246709292e+16 -8.20074512351222e+15
1.42680931421447e+15 -7.01456098285751e+15 4.20077088383351e+15
How does pinv get to its result?
REASON:
Thanks to Cris, I checked my S, and it does have two nearly zero singular values, which are the source of this strange result.
S:
14.0000000000000 0 0
0 1.00758232556386e-15 0
0 0 5.23113446604828e-17
With the pinv method (and Cris's method), these two latter values should be set to 0, which I didn't do. So that is the reason.
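Indeed, inverting those two tiny singular values directly is exactly what produces the huge entries seen in svda, as a quick check shows:
1 / 1.00758232556386e-15   % ans ~ 9.9248e+14
1 / 5.23113446604828e-17   % ans ~ 1.9116e+16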
pinv doesn't just invert all the non-zero values of S, it also removes all the nearly zero values first. From the documentation:
pinv(A,TOL) treats all singular values of A that are less than TOL as zero. By default, TOL = max(size(A)) * eps(norm(A)).
pinv more or less does this:
[U,S,V] = svd(a);
I = find(abs(S) > max(size(a)) * eps(norm(a)));  % singular values above the default tolerance
T = zeros(size(S));
T(I) = 1 ./ S(I);                                % invert only those; the rest stay zero
svda = V * T.' * U';
On my machine, isequal(svda,b) is true, which is a bit of a coincidence because the operations we're doing here are not exactly the same as those done by pinv, and you could expect rounding errors to be different. You can see what pinv does exactly by typing edit pinv in MATLAB. It's a pretty short function.
Note that I used T.', not T'. The former is the transpose, the latter is the complex conjugate transpose (Hermitian transpose). We're dealing with real-valued matrices here, so it doesn't make a difference in this case, but it's important to use the right operator.
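As a quick sanity check (a sketch reusing the a from the question), you can count how many singular values survive the default tolerance; only the 14 does, so a is effectively treated as rank 1:
s = svd(a);
tol = max(size(a)) * eps(norm(a));  % the default TOL from the documentation
nnz(s > tol)                        % 1: only the singular value 14 gets inverted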

Why do I get negative values if I divide two positive ones?

I'm using the following code to find a positive factor:
[U S V] = svd(image, 'econ'); % calculate the SVD of the image
level = 4;
factorJND = jnd(image, level); % calculate the JND values of the image
f = factorJND / abs(U*V) % divide the JND values by the product of the U and V matrices (they have the same size)
Knowing that factorJND and abs(U*V) are both positive, it gives me positive and negative numbers!! I don't know why!
f = -7.2851 6.4520
-7.7509 5.5236
-7.3374 4.1684
-5.6905 5.0915
I even tried:
f = abs(factorJND) / abs(U*V)
But it still gives me the same result, while it should be all positive values!
You are using matrix right division (/) rather than element-wise division (./). Because of this, the result can contain negative values even when both inputs contain only positive values. You likely want element-wise division instead:
f = factorJND ./ abs(U*V);
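A minimal sketch of the difference, using made-up 2-by-2 positive matrices:
A = [1 2; 3 4];
B = [4 3; 2 1];
A / B    % matrix right division solves X*B = A:
         % [1.5 -2.5; 2.5 -3.5] -- negatives despite all-positive inputs
A ./ B   % element-wise division: [0.25 0.6667; 1.5 4], all positive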

Why does my function return two values when I only return one?

So I'm trying to implement the Simpson method in Matlab, this is my code:
function q = simpson(x,f)
n = size(x);
%subtracting the last value of the x vector with the first one
ba = x(n) - x(1);
%adding all the values of the f vector which are in even places starting from f(2)
a = 2*f(2:2:end-1);
%adding all the values of the f vector which are in odd places starting from 1
b = 4*f(1:2:end-1);
%the result is the Simpson approximation of the values given
q = ((ba)/3*n)*(f(1) + f(n) + a + b);
This is the error I'm getting:
Error using ==> mtimes
Inner matrix dimensions must agree.
For some reason even if I set q to be
q = f(n)
As a result I get:
q =
0 1
Instead of
q =
0
When I set q to be
q = f(1)
I get:
q =
0
q =
0
I can't explain this behavior, that's probably why I get the error mentioned above. So why does q have two values instead of one?
edit: x = linspace(0,pi/2,12);
f = sin(x);
size(x) returns the size of the array: a vector containing all the dimensions of the matrix, of which there are always at least two.
In your case n = size(x) will give n = [1, N], not just the length of the array as you intended. Indexing with it, f(n) is f([1, N]) = [f(1), f(N)], which is exactly the two values [0 1] you saw, and ba = x(n) - x(1) will likewise have 2 elements.
You can fix this by using length(x), which returns the longest dimension (or numel(x), or size(x, 1) or size(x, 2) depending on how x is defined, which return only the requested dimension).
Also, you want to sum over the elements in a and b, whereas now you just create vectors containing those elements. Try changing it to a = 2*sum(f(2:2:end-1)) and similarly for b.
The error occurs because you are doing matrix multiplication of two vectors with different dimensions, which isn't allowed. With these changes all the values are scalars, so it should work.
To get the correct answer, (3*n) should also be in brackets, as MATLAB gives / and * equal precedence and evaluates them left to right (http://uk.mathworks.com/help/matlab/matlab_prog/operator-precedence.html). Your version computes (ba/3)*n, which is wrong.
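Putting those fixes together, here is a sketch of a corrected implementation. It uses the standard composite Simpson weighting (1, 4, 2, ..., 4, 1) rather than the question's original indexing, and it assumes uniformly spaced points and an odd number of them, i.e. an even number of subintervals, which the question's 12-point example does not satisfy:
function q = simpson(x, f)
% Composite Simpson rule for uniformly spaced x with an odd number of points.
n = length(x);               % number of points, not size(x)
h = (x(n) - x(1)) / (n - 1); % uniform step width
q = (h/3) * (f(1) + f(n) + 4*sum(f(2:2:n-1)) + 2*sum(f(3:2:n-2)));
end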

What is the Haskell / hmatrix equivalent of the MATLAB pos function?

I'm translating some MATLAB code to Haskell using the hmatrix library. It's going well, but
I'm stumbling on the pos function, because I don't know what it does or what its Haskell equivalent will be.
The MATLAB code looks like this:
[U,S,V] = svd(Y,0);
diagS = diag(S);
...
A = U * diag(pos(diagS-tau)) * V';
E = sign(Y) .* pos( abs(Y) - lambda*tau );
M = D - A - E;
My Haskell translation so far:
(u,s,v) = svd y
diagS = diag s
a = u `multiply` (diagS - tau) `multiply` v
This actually type checks ok, but of course, I'm missing the "pos" call, and it throws the error:
inconsistent dimensions in matrix product (3,3) x (4,4)
So I'm guessing pos does something with matrix size? Googling "matlab pos function" didn't turn up anything useful, so any pointers are very much appreciated! (Obviously I don't know much MATLAB)
Incidentally this is for the TILT algorithm to recover low rank textures from a noisy, warped image. I'm very excited about it, even if the math is way beyond me!
Looks like the pos function is defined in a different MATLAB file:
function P = pos(A)
P = A .* double( A > 0 );
I can't quite decipher what this is doing. Assuming that boolean values cast to doubles, where true == 1.0 and false == 0.0,
does it turn negative values to zero and leave positive values unchanged?
It looks as though pos finds the positive part of a matrix. You could implement this directly with mapMatrix
pos :: (Storable a, Num a, Ord a) => Matrix a -> Matrix a
pos = mapMatrix go where
  go x | x > 0     = x
       | otherwise = 0
Note, though, that Matlab makes no distinction between Matrix and Vector, unlike Haskell.
But it's worth analyzing that Matlab fragment more. Per http://www.mathworks.com/help/matlab/ref/svd.html the first line computes the "economy-sized" Singular Value Decomposition of Y, i.e. three matrices such that
U * S * V' = Y
where, assuming Y is m x n with m >= n, U is m x n, S is n x n and diagonal, and V is n x n. Further, both U and V should be orthonormal. In linear algebraic terms this separates the linear transformation Y into two "rotation" components and the central singular-value scaling component.
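A quick MATLAB check of those shapes and the reconstruction (Y here is a made-up 4-by-3 example):
Y = rand(4,3);
[U,S,V] = svd(Y,0);   % economy-sized SVD
size(U)               % 4 3
size(S)               % 3 3
size(V)               % 3 3
norm(U*S*V' - Y)      % ~0: the factors reconstruct Y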
Since S is diagonal, we extract that diagonal as a vector using diag(S) and then subtract a term tau, which must also be a vector. This might produce a diagonal containing negative values, which cannot be properly interpreted as singular values, so pos is there to trim out the negative values, setting them to 0. We then use diag to convert the resulting vector back into a diagonal matrix and multiply the pieces back together to get A, a modified form of Y.
Note that we can skip some steps in Haskell, as svd (and its "economy-sized" partner thinSVD) returns the singular values as a vector instead of a mostly-zero diagonal matrix.
(u, s, v) = thinSVD y
-- note the trans here, that was the ' in Matlab
a = u `multiply` diag (fmap (max 0) s) `multiply` trans v
Above, fmap maps max 0 over the Vector of singular values s, and then diag (from Numeric.Container) reinflates the Vector into a Matrix prior to the multiplys. With a little thought it's easy to see that max 0 is just pos applied to a single element.
(A > 0) returns a matrix marking the positions of the elements of A which are larger than zero,
so for example, if you have
A = [ -1 2 -3 4
5 6 -7 -8 ]
then B = (A > 0) returns
B = [ 0 1 0 1
1 1 0 0]
Note that we have a one corresponding to each element of A which is larger than zero, and a 0 otherwise.
Now if you multiply this elementwise with A using the .* notation, then you are multiplying each element of A that is larger than zero by 1, and by zero otherwise. That is, A .* B means
[ -1*0 2*1 -3*0 4*1
5*1 6*1 -7*0 -8*0 ]
giving finally,
[ 0 2 0 4
5 6 0 0 ]
So you need to write your own function that returns positive values intact and sets negative values to zero.
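A minimal MATLAB sketch of such a function, equivalent to the pos shown above but written with max:
function P = pos(A)
% Positive part: keep elements > 0, set the rest to 0
P = max(A, 0);
end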
Also, u and v do not match in dimension for a general SVD decomposition, so you actually need to re-diagonalize pos(diagS - tau), so that u * diag(pos(diagS - tau)) conforms with v'.

How do I put in two variables in a matlab function (ERROR: inner matrix dimensions must agree)?

When I try to plot a function h in MATLAB, using a variable omega which is defined as its own function, I get an "Inner matrix dimensions must agree. Error using *" response from the console.
The function works when I use a + between the separate function components of h; it does not work when I try multiplying the two inner functions in h, which is, I guess, what causes the matrix dimension error.
function h = freqp(omega)
k = (1:1024-1);
hh = (1:1024-1);
omega = zeros(length(k),1);
omega = (k-1)*((2*pi)/1024);
hh = 2*exp((-3j)*omega)*cos(omega); % This works for ...omega) + cos(...
% but not for ...omega) * cos(, why?
y = fft(hh);
stem(real(y), omega);
How can I solve this? I read the info on mathworks but it only gives a solution for e.g. loading a file. Any help would be greatly appreciated!
Since omega is a vector, the element-wise addition works. But * between two vectors is a matrix multiplication, whose inner dimensions must agree. You can modify
hh = 2*exp((-3j)*omega)*cos(omega);
as
hh = 2*exp((-3j)*omega)*(cos(omega))';
or, if you are looking for element-wise multiplication,
use
hh = 2*exp((-3j)*omega).*cos(omega);
The part exp((-3j)*omega) worked fine because -3j is a complex scalar and omega a vector. Thus, MATLAB multiplies each element of omega with -3j. However, that result is a vector itself. Also cos(omega) is a vector, and both are row vectors.
In this case, with two vectors, the *-operator means a dot product, but that is calculated between a row vector and a column vector, not two row vectors. So [1 2 3] * [4 5 6] will raise the same error you are reporting, but [1 2 3] * [4 5 6]' yields 32.
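A quick illustration of the three cases, with plain made-up vectors:
a = [1 2 3];
b = [4 5 6];
% a * b    % error: inner dimensions of (1x3)*(1x3) do not agree
a * b'     % dot product of row and column: 32
a .* b     % element-wise product: [4 10 18]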
From invoking fft on hh, however, your code looks as if you never intended to calculate a dot product (a scalar) but instead were looking for element-wise multiplication. The operator for element-wise multiplication is .*, so your expression would instead be
hh = 2*exp((-3j)*omega).*cos(omega);