Multiplication of matrices involving inverse operation: getting infinity - matlab

In my earlier question asked here: Matlab: How to compute the inverse of a matrix, I wanted to know how to perform the inverse operation:
A = [1/2, (1j/2), 0;
     1/2, (-1j/2), 0;
     0, 0, 1]
T = A.*1
Tinv = inv(T)
The output is
Tinv =
   1.0000        1.0000        0
   0 - 1.0000i   0 + 1.0000i   0
   0             0             1.0000
which is the same as in the second picture. The first picture shows the matrix A.
However, for a larger matrix, say 5 by 5, if I don't use the identity and instead perform element-wise multiplication, I get infinity values. Here is an example:
A = [1/2, (1j/2), 1/2, (1j/2), 0;
     1/2, (-1j/2), 1/2, (-1j/2), 0;
     1/2, (1j/2), 1/2, (1j/2), 0;
     1/2, (-1j/2), 1/2, (-1j/2), 0;
     0, 0, 0, 0, 1.00];
T = A.*1
Tinv = inv(T)
Tinv =
Inf Inf Inf Inf Inf
Inf Inf Inf Inf Inf
Inf Inf Inf Inf Inf
Inf Inf Inf Inf Inf
Inf Inf Inf Inf Inf
So I tried T = A.*I where I = eye(5), then took the inverse. Even though I no longer get infinity values, I now get entries equal to 2, which do not appear in the picture for the 3-by-3 case. Here is the result:
Tinv =
2.0000 0 0 0 0
0 0 + 2.0000i 0 0 0
0 0 2.0000 0 0
0 0 0 0 + 2.0000i 0
0 0 0 0 1.0000
If, in the 3-by-3 case, I use I = eye(3), then again I get entries equal to 2.
Tinv =
2.0000 0 0
0 0 + 2.0000i 0
0 0 1.0000
What is the proper method?
Question: In the general case, for any m-by-m matrix, should I multiply using I = eye(m)? Using I prevents infinity values but introduces new entries equal to 2. I am really confused. Please help.
UPDATE: Here is the full image. Theta is a vector of 3 unknowns, Theta1, Theta1*, and Theta2, which are scalar-valued parameters. Theta1 is a complex-valued number, so we represent it in two parts, Theta1 and Theta1*, while Theta2 is a real-valued number. g is a complex-valued function. The expression for the derivative of a complex-valued function with respect to Theta evaluates to T^H. Since there are 3 unknowns, the matrix T should be of size 3 by 3.

Your problem is slightly different than you think. The symbols (I, 0) in the matrices in the images are not necessarily scalars (only for n = 1); they are actually square matrices: I is an identity matrix and 0 is a matrix of zeros. If you treat them as such, you will get the expected answers:
n = 2;          % size of the sub-matrices
I = eye(n);     % identity matrix
Z = zeros(n);   % matrix of zeros
% your T matrix
T = [1/2*I, (1j/2)*I, Z;
     1/2*I, (-1j/2)*I, Z;
     Z, Z, I];
% inverse of T
Tinv1 = inv(T);
% expected result
Tinv2 = [I, I, Z;
         -1j*I, 1j*I, Z;
         Z, Z, I];
% max difference between computed and expected
maxDist = max(abs(Tinv1(:) - Tinv2(:)))

First you should know whether you need
T = A.*eye(...)
or
T = A.*1 %// which actually does nothing
These are completely different things. Be sure what you need, then think about the code.
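A tiny illustration of the difference, using magic(3) as stand-in data (not from the question):
A = magic(3);
A .* 1        % unchanged: every element is multiplied by 1
A .* eye(3)   % keeps only the diagonal; off-diagonal entries become 0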
The reason why you get all Inf is that the determinant of your matrix is zero:
det(T) == 0
So from a mathematical point of view your result is correct: building the inverse requires dividing by det(T), so your matrix cannot be inverted. (Your 5-by-5 A is singular because rows 1 and 3 and rows 2 and 4 are identical.) If it should be invertible, the error is in your input matrix, or again in your understanding of the actual underlying problem to solve.
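A minimal check of this, reusing the 5-by-5 T defined above:
det(T)    % 0 (up to floating point): the matrix is singular
rank(T)   % 3, less than 5, so T has no inverse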
Edit
After your question update, it feels like you're actually looking for ctranspose (the complex conjugate transpose, ') instead of inv.
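For comparison, a minimal sketch of the two operations on the 3-by-3 T from the question:
T = [1/2, 1j/2, 0;
     1/2, -1j/2, 0;
     0, 0, 1];
Tct = T'        % conjugate transpose, same as ctranspose(T)
Tinv = inv(T)   % the inverse computed in the question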

Related

Wrong matrix multiplication causes number of elements not being the same

I have this matrix
mpsim =
1.0e+04 *
-2.2331
-0.4261
1.3810
3.1880
4.9951
6.8022
8.6092
this matrix
fvsim =
NaN NaN NaN NaN NaN NaN NaN
NaN NaN NaN NaN NaN NaN NaN
0 0 0.9000 0.1000 0 0 0
0 0 0 0.7500 0 0.2500 0
0 0 0 0 0 1.0000 0
0 0 0 0 0.5000 0 0.5000
0 0 0 0 0 0 1.0000
and this matrix
lingsim =
3
3
3
3
3
3
3
3
3
3
4
4
4
4
6
5
6
7
7
I'm trying to use this code but I get an error:
sizeA = size(mpsim,1);
sizeB = size(fvsim,1);
sizeC = size(lingsim,1);
outputsim = zeros(size(lingsim));
for i = 1:sizeC
    if lingsim(i) <= sizeB
        outputsim(i) = sum(mpsim * fvsim(lingsim(i), :));
    else
        outputsim(i) = lingsim(i);
    end
end
outputsim
In an assignment A(I) = B, the number of elements in B and I must be the same.
Error in ftskutes (line 131)
outputsim(i)=sum(mpsim * fvsim(lingsim(i), :));
How can I fix this? I assumed that sum(mpsim * fvsim(lingsim(i), :)) would be 1x1, but when I check, it is 1x7.
The problem is that sum() only works along one dimension: mpsim * fvsim(lingsim(i), :) produces a 7x7 matrix, of which it then takes the column sums, resulting in a 1x7 vector.
To get the sum of all elements, you can use
if lingsim(i) <= sizeB
    outputsim(i) = sum(sum(mpsim * fvsim(lingsim(i), :)));
else
    outputsim(i) = lingsim(i);
end
Edit:
I assumed you took the outer product on purpose. If, however, you wanted to multiply the vectors element by element, you have to replace * with .* and transpose one of the two vectors:
outputsim(i) = sum(mpsim' .* fvsim(lingsim(i), :));
When you multiply two vectors, you should be sure to perform the operation you want:
Outer product: (n x 1) * (1 x n) == (n x n)
Inner (dot) product: (1 x n) * (n x 1) == (1 x 1)
Elementwise product: (n x 1) .* (n x 1) == (n x 1)
mpsim is a column vector, i.e. n x 1, and fvsim(lingsim(i), :) is a row vector, i.e. 1 x n. Therefore you are calculating the outer product.
If this is not what you want, you can take the transpose (.') or use the builtin function dot to calculate the dot product independent of the orientation of your vectors.
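A minimal sketch contrasting the three products, with small stand-in vectors (not your actual data):
a = (1:7).';           % 7x1 column vector, like mpsim
b = ones(1,7) / 7;     % 1x7 row vector, like fvsim(lingsim(i), :)
outerProd = a * b;     % 7x7 matrix
innerProd = b * a;     % 1x1 scalar, same as dot(a, b)
elemProd = a.' .* b;   % 1x7 vector, elementwise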

Matlab Not Returning Orthonormal Matrix of Eigenvectors

When I try to find the eigendecomposition of a matrix in MATLAB that has a repeated eigenvalue but is NOT defective, it does not return an orthonormal matrix of eigenvectors. For example:
k = 5;
repeats = 1;
% First generate a random orthonormal matrix of eigenvectors
V = orth(rand(k));
% Now generate a vector of eigenvalues with the given number of repeats
D = rand(k,1);
for i = 1:repeats
    % Put one random value into another (note this sometimes results in
    % fewer than the given number of repeats if we ever pick the same
    % index)
    D(ceil(k*rand())) = D(ceil(k*rand()));
end
A = V'*diag(D)*V;
% Now test the eigenvector matrix of A
[V_A, D_A] = eig(A);
disp(V_A*V_A' - eye(k))
I find that my matrix of eigenvectors V_A is not orthogonal, i.e. V_A*V_A' does not equal the identity matrix (taking rounding errors into account).
I was under the impression that if my matrix is real and symmetric then MATLAB would return an orthogonal matrix of eigenvectors, so what is the issue here?
This seems to be a numerical precision issue.
The eigenvectors of a real symmetric matrix are orthogonal. But your input matrix A is not exactly symmetric. The differences are on the order of eps, as expected from numerical errors.
>> A-A.'
ans =
1.0e-16 *
0 -0.2082 -0.2776 0 0.1388
0.2082 0 0 -0.1388 0
0.2776 0 0 -0.2776 0
0 0.1388 0.2776 0 -0.5551
-0.1388 0 0 0.5551 0
If you force A to be exactly symmetric you'll get an orthogonal V_A, up to numerical errors on the order of eps:
>> A = (A+A.')/2;
>> A-A.'
ans =
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
>> [V_A, D_A] = eig(A);
>> disp(V_A*V_A' - eye(k))
1.0e-15 *
-0.3331 0.2220 0.0755 0.1804 0
0.2220 -0.2220 0.0572 -0.1665 0.1110
0.0755 0.0572 -0.8882 -0.0590 -0.0763
0.1804 -0.1665 -0.0590 0 -0.0555
0 0.1110 -0.0763 -0.0555 0
Still, it's surprising that such wildly different results are obtained for V_A when A is symmetric and when A is nearly symmetric. This is my bet as to what's happening: as noted by @ArturoMagidin,
(1) Eigenvectors corresponding to distinct eigenvalues of a symmetric matrix must be orthogonal to each other. Eigenvectors corresponding to the same eigenvalue need not be orthogonal to each other.
(2) However, since every subspace has an orthonormal basis, you can find orthonormal bases for each eigenspace, so you can find an orthonormal basis of eigenvectors.
Matlab is probably taking route (2) (thus forcing V_A to be orthogonal) only if A is symmetric. For A not exactly symmetric it probably takes route (1) and gives you a basis of each eigenspace, but not necessarily with orthogonal vectors.
The eigenvectors of a real matrix will be orthogonal if and only if AA' = A'A (i.e. the matrix is normal) and the eigenvalues are distinct. If the eigenvalues are not distinct, MATLAB chooses an orthogonal system of vectors. In the above example, AA' ~= A'A. Besides, you have to consider round-off and numerical errors.
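As a practical takeaway, a minimal sketch of the symmetrize-then-check workflow, reusing A and k from the question:
A = (A + A.')/2;                     % remove the eps-level asymmetry
[V_A, D_A] = eig(A);                 % exactly symmetric input -> orthogonal V_A
orthoErr = norm(V_A*V_A' - eye(k))   % should be on the order of eps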

Power Method in MATLAB

I would like to implement the Power Method for determining the dominant eigenvalue and eigenvector of a matrix in MATLAB.
Here's what I wrote so far:
% function to implement the power method to compute the dominant
% eigenvalue/eigenvector
function [m, y_final] = power_method(A, x)
    m = 0;
    n = length(x);
    y_final = zeros(n,1);
    y_final = x;
    tol = 1e-3;
    while (1)
        mold = m;
        y_final = A*y_final;
        m = max(y_final);
        y_final = y_final/m;
        if (m - mold) < tol
            break;
        end
    end
end
With the above code, here is a numerical example:
A=[1 1 -2;-1 2 1; 0 1 -1]
A =
1 1 -2
-1 2 1
0 1 -1
>> x=[1 1 1];
>> x=x';
>> [m,y_final]=power_method(A,x);
>> A*x
ans =
0
2
0
When comparing with the eigenvalues and eigenvectors of the above matrix in MATLAB, I did:
[V,D]=eig(A)
V =
0.3015 -0.8018 0.7071
0.9045 -0.5345 0.0000
0.3015 -0.2673 0.7071
D =
2.0000 0 0
0 1.0000 0
0 0 -1.0000
The eigenvalue coincides, but the eigenvector should approach [1/3 1 1/3]. Here, I get:
y_final
y_final =
0.5000
1.0000
0.5000
Is this level of inaccuracy acceptable, or am I making some mistake?
You have the correct implementation, but you're only checking the eigenvalue for convergence, not the eigenvector. The power method estimates both the dominant eigenvector and eigenvalue, so it's a good idea to check that both have converged. When I did that, I got [1/3 1 1/3]. Here is how I modified your code to facilitate this:
function [m, y_final] = power_method(A, x)
    m = 0;
    n = length(x);
    y_final = x;
    tol = 1e-10; %// Change - make the tolerance smaller to ensure convergence
    while (1)
        mold = m;
        y_old = y_final; %// Change - save the old eigenvector
        y_final = A*y_final;
        m = max(y_final);
        y_final = y_final/m;
        if abs(m - mold) < tol && norm(y_final - y_old, 2) < tol %// Change - check both
            break;
        end
    end
end
When I run the above code with your example input, I get:
>> [m,y_final]=power_method(A,x)
m =
2
y_final =
0.3333
1.0000
0.3333
On a side note with regards to eig, MATLAB most likely scaled that eigenvector using another norm. Remember that eigenvectors are not unique and are defined only up to scale. If you want to be sure, simply take the first column of V, which corresponds to the dominant eigenvector, and divide by its largest absolute value so that one component is normalized to 1, just like the Power Method:
>> [V,D] = eig(A);
>> V(:,1) / max(abs(V(:,1)))
ans =
0.3333
1.0000
0.3333
This agrees with what you have observed.

Truncated gaussian kernel implementation Matlab, right?

I have the definition of a truncated Gaussian kernel as:
So I am confused about which is the correct implementation of the truncated Gaussian kernel. Consider the two cases below and let me know which is right. Thank you so much.
Case 1:
G_truncated=fspecial('gaussian',round(2*sigma)*2 + 1,sigma); % kernel
Case 2:
G=fspecial('gaussian',round(2*sigma)*2 + 1,sigma); % normal distribution kernel
B = ones(round(2*sigma)*2 + 1,round(2*sigma)*2 + 1);
G_truncated=G.*B;
G_truncated = G_truncated/sum(G_truncated(:)); %normalized for sum=1
To add on to the previous post, there is a question of how to implement the kernel. You could use fspecial, truncate the kernel so that anything outside of the radius is zero, then renormalize it, but I'm assuming you'll want to do this from first principles... so let's figure that out. First, you need to generate a spatial map of distances from the centre of the mask. In conjunction, you use this to figure out what the (un-normalized) Gaussian values would be. You zero out the values in the un-normalized mask based on the spatial map of distances, then normalize what is left. As such, given your standard deviation tau and your radius rho, you can do this:
%// Find grid of points
[X,Y] = meshgrid(-rho : rho, -rho : rho)
dists = (X.^2 + Y.^2); %// Find distances from the centre (Euclidean distance squared)
gaussVal = exp(-dists / (2*tau*tau)); %// Find unnormalized Gaussian values
%// Filter out those locations that are outside radius and set to 0
gaussVal(dists > rho^2) = 0;
%// Now normalize
gaussMask = gaussVal / (sum(gaussVal(:)));
Here is an example with using rho = 2 and tau = 2 with the outputs at each stage:
Step #1 - Find grid coordinates
>> X
X =
-2 -1 0 1 2
-2 -1 0 1 2
-2 -1 0 1 2
-2 -1 0 1 2
-2 -1 0 1 2
>> Y
Y =
-2 -2 -2 -2 -2
-1 -1 -1 -1 -1
0 0 0 0 0
1 1 1 1 1
2 2 2 2 2
Step #2 - Find distances from centre and unnormalized Gaussian values
>> dists
dists =
8 5 4 5 8
5 2 1 2 5
4 1 0 1 4
5 2 1 2 5
8 5 4 5 8
>> gaussVal
gaussVal =
0.3679 0.5353 0.6065 0.5353 0.3679
0.5353 0.7788 0.8825 0.7788 0.5353
0.6065 0.8825 1.0000 0.8825 0.6065
0.5353 0.7788 0.8825 0.7788 0.5353
0.3679 0.5353 0.6065 0.5353 0.3679
Step #3 - Filter out locations that don't belong within the radius and set to 0
gaussVal =
0 0 0.6065 0 0
0 0.7788 0.8825 0.7788 0
0.6065 0.8825 1.0000 0.8825 0.6065
0 0.7788 0.8825 0.7788 0
0 0 0.6065 0 0
Step #4 - Normalize so sum is equal to 1
gaussMask =
0 0 0.0602 0 0
0 0.0773 0.0876 0.0773 0
0.0602 0.0876 0.0993 0.0876 0.0602
0 0.0773 0.0876 0.0773 0
0 0 0.0602 0 0
To verify that the mask sums to 1, just do sum(gaussMask(:)) and you'll see it's equal to 1... more or less :)
Your definition of truncated gaussian kernel is different than how MATLAB truncates filter kernels, though it generally won't matter in practice for sizable d.
fspecial already returns a truncated AND normalized filter, so the second case is redundant: it generates exactly the same result as case 1.
From MATLAB help:
H = fspecial('gaussian',HSIZE,SIGMA) returns a rotationally
symmetric Gaussian lowpass filter of size HSIZE with standard
deviation SIGMA (positive). HSIZE can be a vector specifying the
number of rows and columns in H or a scalar, in which case H is a
square matrix.
The default HSIZE is [3 3], the default SIGMA is 0.5.
You can use fspecial('gaussian',1,sigma) to generate a 1x1 filter and see that it is indeed normalized.
To generate a filter kernel that fits your definition, you need to make B in your second case a matrix that has ones in a circular area. A less strict (but in practice equally redundant) solution is to use fspecial('disk',size) to truncate your Gaussian kernel. Don't forget to normalize it in either case.
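A minimal sketch of that circular-mask idea (the value of sigma is illustrative; the mask radius round(2*sigma) matches the kernel half-width used in the question):
sigma = 2;
r = round(2*sigma);                         % kernel half-width and mask radius
G = fspecial('gaussian', 2*r + 1, sigma);   % square, normalized Gaussian
B = double(fspecial('disk', r) > 0);        % ones inside a circle of radius r
G_trunc = G .* B;                           % zero the corners outside the circle
G_trunc = G_trunc / sum(G_trunc(:));        % renormalize so the kernel sums to 1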
The answer of rayryeng is very useful for me. I only extend the Gaussian kernel to a ball kernel. The ball kernel is defined as:
So, based on rayryeng's answer, we can do it by
sigma = 2;
rho = sigma;
tau = sigma;
%// Find grid of points
[X,Y] = meshgrid(-rho : rho, -rho : rho)
dists = (X.^2 + Y.^2); %// Squared Euclidean distances from the centre
ballVal = dists;
ballVal(dists > sigma^2) = 0;  %// compare squared distances against sigma^2
ballVal(dists <= sigma^2) = 1;
%// Now normalize
ballMask = ballVal / (sum(ballVal(:)));
Let me know if it has any errors or problems. Thank you.

problem in finding eigenvectors of a matrix in MATLAB

I have a symmetric matrix with the elements A=[8.8191,0,1.0261; 0,3,0; 1.0261,0,3.1809];
I used the eig(A) function in MATLAB; the eigenvalues and eigenvectors returned are:
eigvect =
0.1736 0 0.9848
0 -1.0000 0
-0.9848 0 0.1736
eigval =
3.0000 0 0
0 3.0000 0
0 0 9.0000
The eigenvalues are correct but the eigenvectors are not what I expect, because I think two of them should be equal. Does MATLAB calculate the eigenvectors correctly?
The definition of an eigenvalue can be found anywhere on the web:
A*v = lam*v
where v is the eigenvector and lam its corresponding eigenvalue.
So test your results:
i = 1;
A*eigvect(:,i) - eigval(i,i)*eigvect(:,i) % should be approximately [0;0;0]
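A short sketch extending that check to every eigenpair at once (reusing A, eigvect, and eigval from above):
for i = 1:size(A,1)
    residual = norm(A*eigvect(:,i) - eigval(i,i)*eigvect(:,i));
    fprintf('eigenpair %d: residual %g\n', i, residual);
end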
It is not necessary that each repetition of an eigenvalue has its own (independent) associated eigenvector. This means an n-by-n matrix with an eigenvalue repeated more than once has n or fewer linearly independent eigenvectors.
Example 1: the matrix
[2 0;
 0 2]
has eigenvalue 2 (repeated twice), but it has two linearly independent eigenvectors associated with eigenvalue 2.
Example 2: the matrix
A = [1 1 1 -2;
     0 1 0 -1;
     0 0 1 1;
     0 0 0 1]
has eigenvalue 1 (repeated four times), but it has only two linearly independent eigenvectors associated with eigenvalue 1.
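A quick numerical way to verify Example 2 (a sketch; the eigenspace dimension for an eigenvalue lam is n - rank(A - lam*I)):
A = [1 1 1 -2; 0 1 0 -1; 0 0 1 1; 0 0 0 1];
n = size(A,1);
geoMult = n - rank(A - 1*eye(n))   % returns 2: only two independent eigenvectors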