applying the same solver for a complex matrix - matlab

Let us assume an equation A*x = b, where A is a real n-by-n matrix, b is a real-valued vector of length n, and x is the solution vector of this linear system.
We can find x by computing the inverse of A:
B = inv(A)
and therefore, since x = A^(-1)*b,
x = B*b
Can I apply the same solver if A and b are complex?
EDIT: I'm looking for an explanation of why it should work. Thanks :)

You can do that, but in Matlab x = A\b is better: it will usually get the answer faster and more accurately.

In short, yes, it works for matrices over any field (well, at least the real and complex fields work).
I think your question is whether the inverse B exists or not. It does as long as the determinant of A is non-zero, and vice versa.
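The reason it works is that Gaussian elimination (and matrix inversion) only uses the field operations +, -, *, /, which behave identically over the complex numbers. A quick sanity check, sketched here in Python/NumPy with a made-up complex system:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical complex system A*x = b.
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
b = rng.standard_normal(4) + 1j * rng.standard_normal(4)

x_solve = np.linalg.solve(A, b)   # analogue of Matlab's A\b
x_inv = np.linalg.inv(A) @ b      # analogue of inv(A)*b

residual = np.linalg.norm(A @ x_solve - b)   # how well A*x = b holds
agreement = np.linalg.norm(x_solve - x_inv)  # both methods agree
```

Both residuals are at machine-precision level, exactly as they would be for a real system.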

Related

Diagonalizing Matrix in Matlab Gives "Wrong" Linear Combination of Eigenvectors

In Matlab, I'm trying to solve for the energies and eigenstates of a Hamiltonian matrix which has a highly degenerate set of eigenvectors. The matrix is a 55x55 hermitian matrix, and when I call either eig or schur to do the diagonalization I find that some (but not all) of the eigenvectors are the "wrong" linear combinations within each degenerate subspace. What I mean by "wrong" is that there are additional constraints in the problem. In this case, there is a good quantum number, M, which I want to preserve by not allowing states with different M values to be mixed--- but that mixing is exactly what I see when I run the code. Is there a way to tell Matlab to diagonalize the matrix while simultaneously maintaining the eigenvectors of another operator?
You can use [eig_vect, eig_val] = eig(A) to get the eigenvectors and eigenvalues; diag(eig_val) extracts the eigenvalues as a vector.
I don't know Matlab well enough to know whether there is a routine for this, but here's how to do it algorithmically:
First diagonalise H, as you do now. Then for each degenerate eigenspace V, diagonalise the restriction of C to V, and use this diagonalisation to compute a simultaneous diagonalisation of C and H.
In more detail:
I assume you have an operator C that commutes with your Hamiltonian H. If V is the eigenspace of H for a particular (degenerate) eigenvalue, and you have a basis x[1] .. x[n] of V, then for each i, C*x[i] must be in V, and so we can expand C*x[i] in terms of the x[j] and get a matrix representation C^ of the restriction of C to V. That is, we compute
C^[k,j] = <x[k]|C*x[j]> k,j =1 .. n
Diagonalise the matrix C^, getting
C^ = U D U^†   (U unitary, D diagonal)
Then for each column u = (u[1], .., u[n]) of U we can form
chi = Sum_j u[j]*x[j]
A little algebra shows that this is an eigenvector of C, and also of H
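A minimal numerical sketch of this procedure, in Python/NumPy for illustration (H and C are made-up commuting operators sharing a common eigenbasis Q, with H doubly degenerate):

```python
import numpy as np

rng = np.random.default_rng(0)
# Commuting pair: same eigenbasis Q; H has a degenerate eigenvalue 1.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
H = Q @ np.diag([1.0, 1.0, 2.0, 3.0]) @ Q.T
C = Q @ np.diag([5.0, 6.0, 7.0, 8.0]) @ Q.T

w, V = np.linalg.eigh(H)
# Degenerate eigenspace of H for eigenvalue 1: first two columns.
X = V[:, :2]                 # basis x[1] .. x[n] of V
Chat = X.T @ C @ X           # C^, restriction of C to V
d, U = np.linalg.eigh(Chat)  # diagonalise the restriction
chi = X @ U                  # simultaneous eigenvectors (columns)

# Each column of chi is an eigenvector of H (eigenvalue 1) AND of C.
res_H = np.linalg.norm(H @ chi - chi * 1.0)
res_C = np.linalg.norm(C @ chi - chi * d)
```

Since C maps V into itself, C*X = X*C^, so C*(X*u) = X*C^*u = lambda*X*u for each eigenvector u of C^, which is the "little algebra" mentioned above.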

Solve Matrix with Multiple RHS

I have a problem where I need to solve linear equations. The matrix is the same, but the rhs changes (during an iterative procedure). The only way I could see to do this without repeating the matrix factorization is like:
[L,U] = lu(A);
x = U\(L\b);
This seems clunky. Is there a better way? Can I use LU factors that are stored in a single array? TIA
Given a system
LU*x = b
where L is a lower triangular matrix and U is an upper triangular matrix.
From your question I understand that the vector b is changing constantly and you would like to keep the LU factorization constant for all possible vectors b.
In other words, if you have n different b's you would like n different solutions with the same LU decomposition of A.
I would use forward substitution where you solve a system
L*m = b
and then backwards substitution with
U*x = m
This way you keep the factorization constant for every vector b.
And concerning the other question: yes, it is possible to store the L and U matrices in a single matrix. Just note that the unit diagonal of L does not need to be stored, because it is always 1.
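The packed storage and the reuse of the factors for many right-hand sides can be sketched as follows in Python/NumPy (a Doolittle LU without pivoting, so it assumes A needs no row exchanges, e.g. a diagonally dominant A; all names are illustrative):

```python
import numpy as np

def lu_compact(A):
    """In-place Doolittle LU without pivoting. L (unit diagonal, not
    stored) sits below the diagonal, U on and above it, in one array."""
    LU = A.astype(float).copy()
    n = LU.shape[0]
    for k in range(n):
        LU[k+1:, k] /= LU[k, k]                            # multipliers -> L
        LU[k+1:, k+1:] -= np.outer(LU[k+1:, k], LU[k, k+1:])  # update U part
    return LU

def lu_solve(LU, b):
    """Forward substitution L*m = b, then backward substitution U*x = m."""
    n = LU.shape[0]
    m = b.astype(float).copy()
    for i in range(n):                   # forward: unit diagonal of L
        m[i] -= LU[i, :i] @ m[:i]
    x = m
    for i in range(n - 1, -1, -1):       # backward
        x[i] = (x[i] - LU[i, i+1:] @ x[i+1:]) / LU[i, i]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)   # diagonally dominant
LU = lu_compact(A)                                 # factor once...
errs = [np.linalg.norm(A @ lu_solve(LU, b) - b)
        for b in rng.standard_normal((3, 5))]      # ...reuse for each b
```

The expensive O(n^3) factorization happens once; each new right-hand side costs only O(n^2).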

How to calculate 'half' of an affine transformation matrix in MATLAB

I am looking to find 'half' of an affine transformation matrix using MATLAB. Yes I understand, 'half' a matrix isn't really correct, but exactly what I'm looking for was actually explained very well here: stackexchange mathematics
So I'm looking for an affine transformation matrix (B) which when applied twice to my image, will give the same result as when applying my initial matrix (A) once.
Reflection will not be part of A, otherwise it would be impossible to find B.
My initial matrix (A) is calculated using A = estimateGeometricTransform(movingPoints,fixedPoints,'affine'), which gives me an affine2d object.
If there is no way to find the 'half' matrix from the initial matrix, maybe the arrays of matched points can be manipulated in a way to find B from them.
Cheers
I think there is a possibility to find the half matrix that you speak of. It is called the matrix square root. Suppose you have the matrix A. In Matlab you can just do B = sqrtm(A), where the m stands for matrix. Then you get a matrix B where norm(B*B - A) is very small, if the matrix A was well behaved.
If I understand correctly, you want half of an affine transformation aff = @(x) A*x + b. This can be done using homogeneous coordinates. Every such transformation aff can be represented by the matrix
M = [A b; zeros(1,length(b)) 1]
together with
normalize = @(y) y(1:end-1)/y(end);
affhom = @(x) normalize(M*[x; 1]);
Note that aff and affhom do exactly the same thing. Here we can use what I was talking about earlier: half of affhom can be represented using
affhomhalf = @(x) normalize(sqrtm(M)*[x; 1])
where
affhomhalf(affhomhalf(y)) - aff(y)
is small for all y, if A and b were well behaved.
I'm not sure about this, but I think you can even decompose sqrtm(M) into a linear and a translational part.
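Sketched in Python/NumPy (the map A, b is made up; the principal square root is computed through an eigendecomposition, standing in for Matlab's sqrtm, which is valid when M is diagonalizable with no eigenvalues on the negative real axis):

```python
import numpy as np

# Hypothetical affine map: scale-and-rotate A, translation b.
theta = 0.6
A = 1.5 * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
b = np.array([2.0, -1.0])

# Homogeneous form M = [A b; 0 0 1]
M = np.eye(3)
M[:2, :2] = A
M[:2, 2] = b

# Principal matrix square root via eigendecomposition.
w, V = np.linalg.eig(M)
S = (V @ np.diag(np.sqrt(w)) @ np.linalg.inv(V)).real

halferr = np.linalg.norm(S @ S - M)     # S*S recovers M

# Applying S twice to a point matches applying M once.
p = np.array([3.0, 4.0, 1.0])           # homogeneous point (x, y, 1)
pterr = np.linalg.norm(S @ (S @ p) - M @ p)
```

The no-reflection condition from the question is exactly what keeps the eigenvalues off the negative real axis, so the principal square root exists and is real.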

fft matrix-vector multiplication

I have to solve in MATLAB a linear system of equations A*x=B where A is symmetric and its elements depend on the difference of the indices: Aij = f(i-j).
I use iterative solvers because the size of A is, say, 40000x40000. The iterative solvers require the product A*x, where x is the trial solution. This product turns out to be a convolution and can therefore be evaluated by means of fast Fourier transforms (CPU time ~ N*log(N) instead of N^2). I have the following questions about this problem:
is this convolution circular? Because if it is circular I think that I have to use a specific indexing for the new matrices to take the fft. Is that right?
I find difficult to program the routine for the fft because I cannot understand the indexing I should use. Is there any ready routine which I can use to evaluate by fft directly the product A*x and not the convolution? Actually, the matrix A is constructed of 3x3 blocks and is symmetric. A ready routine for the product A*x would be the best solution for me.
In case that there is no ready routine, could you give me an idea by example how I could construct this routine to evaluate a matrix-vector product by fft?
Thank you in advance,
Panos
Very good and interesting question! :)
For certain special matrix structures, the Ax = b problem can be solved very quickly.
Circulant matrices.
Matrices corresponding to cyclic convolution A*x = h*x (where * in h*x denotes convolution) are diagonalized in the Fourier domain, and the system can be solved by:
x = ifft(fft(b)./fft(h));
Triangular and banded.
Triangular matrices and diagonally-dominant banded matrices are solved
efficiently by sparse LU factorization:
[L,U] = lu(sparse(A)); x = U\(L\b);
Poisson problem.
If A is a finite difference approximation of the Laplacian, the problem is efficiently solved by multigrid methods (e.g., web search for "matlab multigrid").
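For the circulant case, the one-line FFT solve is easy to verify against a dense solve; a Python/NumPy sketch with a made-up kernel h:

```python
import numpy as np

# Hypothetical cyclic convolution kernel (first column of circulant A)
# and right-hand side.
h = np.array([4.0, 1.0, 0.5, 0.25, 1.0])
b = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# Dense circulant A for comparison: A[i, j] = h[(i - j) mod n]
n = len(h)
A = np.array([[h[(i - j) % n] for j in range(n)] for i in range(n)])

# FFT solve, as in the answer: circulants are diagonalized by the DFT,
# with eigenvalues fft(h).
x = np.fft.ifft(np.fft.fft(b) / np.fft.fft(h)).real

err = np.linalg.norm(A @ x - b)
```

The dominant diagonal entry in h keeps all the DFT eigenvalues away from zero, so the division is safe here.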
Interesting question!
The convolution is not circular in your case, unless you impose additional conditions: for a circulant matrix the rows must wrap around cyclically, e.g. A(2,1) would have to equal A(1,4) in the 4x4 example below, etc.
You could do it with conv (retaining only the non-zero-padded part with the option 'valid'); note that Matlab's conv multiplies in the time domain, so this is O(N^2) rather than N*log(N), but it sidesteps the FFT indexing issues. For example, let
A = [a b c d
e a b c
f e a b
g f e a];
Then A*x is the same as
conv(fliplr([g f e a b c d]),x,'valid').'
Or more generally, A*x is the same as
conv(fliplr([A(end,1:end-1) A(1,:)]),x,'valid').'
I'd like to add some comments on Pio_Koon's answer.
First of all, I wouldn't advise to follow the suggestion for triangular and banded matrices. The time taken by a call to Matlab's lu() procedure on a large sparse matrix massively overshadows any benefits gained by solving the linear system as x=U\(L\b).
Second, in the Poisson problem you end up with a circulant matrix, therefore you can solve it using the FFT as described. In this specific case, your convolution mask h is a Laplacian, i.e., h=[0 -0.25 0; -0.25 1 -0.25; 0 -0.25 0].

MATLAB Matrix Problem

I have a system of equations (5 in total) with 5 unknowns. I've set these out into matrices to try solve, but I'm not sure if this comes out right. Basically the setup is AX = B, where A,X, and B are matrices. A is a 5x5, X is a 1x5 and B is a 5x1.
When I use MATLAB to solve for X using the formula X = A\B, it gives me a warning:
Matrix is singular to working precision.
and gives me 0 for all 5 unknowns in X, but if I say X = B\A it doesn't, and gives me values for the 5 unknowns.
Anyone know what I'm doing wrong? In case this is important, this is what my X matrix looks like:
X= [1/C3; 1/P1; 1/P2; 1/P3; 1/P4]
Where C3, P1, P2, P3, P4 are my unknowns.
Your matrix is singular, which means its determinant is 0. Such a system of equations does not give you enough information to find a unique solution.

One odd thing I see in your question is that X is 1x5 while B is 5x1. This is not a correct way of posing the problem: both X and B must be 5x1. In case you're wondering, this is not a Matlab thing - it's a linear algebra thing. [5x5]*[1x5] is illegal; [5x5]*[5x1] produces a [5x1] result; [1x5]*[5x5] produces a [1x5] result. Check your algebra first, and then check whether the determinant (the det function in Matlab) is 0.
So, the next thing is to figure out why A is singular. (Note that it's possible that you'd want to solve
A x = b
in cases with square and singular A, but they'd only be in cases where b is in the range space of A.)
Maybe you can write your matrix A and vector b out (since it's only 5x5)? Or explain how you create it. That might give a clue as to why A isn't full rank or as to why b isn't in the range space of A.
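To make the range-space point concrete, here is a small Python/NumPy sketch with a made-up rank-deficient A: when b lies in the range space of A a (non-unique) solution still exists, and when it doesn't there is no exact solution at all:

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 4.0]])   # rank-deficient: row 2 = 2*row 1
b_good = np.array([1.0, 2.0])            # IS in the range space of A
b_bad = np.array([1.0, 3.0])             # is NOT in the range space

detA = np.linalg.det(A)                  # ~0: A is singular
rank = np.linalg.matrix_rank(A)          # 1 < 2

# Least squares still returns a solution when b is in the range space...
x, *_ = np.linalg.lstsq(A, b_good, rcond=None)
ok_err = np.linalg.norm(A @ x - b_good)

# ...but for b_bad no exact solution exists: the residual stays large.
x2, *_ = np.linalg.lstsq(A, b_bad, rcond=None)
bad_err = np.linalg.norm(A @ x2 - b_bad)
```

This is why a backslash solve on a singular matrix warns and produces useless numbers: there is either no solution or infinitely many, depending on b.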