Ways around pinv([inf])=NaN in Octave/Matlab

I am using Octave 3.8.1, a Matlab-like program. I'd like to generalize 1/x to the case where x may be a scalar or a matrix. Replacing 1/x with inv(x) or pinv(x) works for most x, except:
octave:1> 1/inf
ans = 0
octave:2> pinv([inf])
ans = NaN
octave:3> inv([inf])
warning: inverse: matrix singular to machine precision, rcond = 0
ans = Inf
Should I convert NaN to 0 afterwards to get this to work? Or have I missed something? Thanks!

The Moore–Penrose pseudoinverse, which is the basis for Matlab and Octave's pinv, is implemented via a completely different algorithm than the inv function. More specifically, singular value decomposition is used, which requires finite-valued matrices (they also can't be sparse). You didn't say if your matrices are square or not. The real use of pinv is for solving non-square systems (over- or underdetermined).
However, you shouldn't be using pinv or inv for your application, no matter the dimensions of your matrices. Instead, you should use mldivide (Octave, Matlab), i.e., the backslash operator, \. This is much more efficient and numerically robust.
A1 = 3;
A2 = [1 2 1;2 4 6;1 1 3];
A1inv = A1\1
A2inv = A2\eye(size(A2))
The mldivide function handles rectangular matrices too, but you will get different answers for underdetermined systems compared to pinv because the two use different methods to choose the solution.
A3 = [1 2 1;2 4 6]; % Underdetermined
A4 = [1 2;2 4;1 1]; % Overdetermined
A3inv = A3\eye(min(size(A3))) % Compare to pinv(A3), different answer
A4inv = A4\eye(max(size(A4))) % Compare to pinv(A4), same answer
If you run the code above, you'll see that you get a slightly different result for A3inv as compared to what is returned by pinv(A3). However, both are valid solutions.
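If you still want a single drop-in replacement for 1/x that accepts either a scalar or a matrix, a minimal sketch along these lines may help (the helper name geninv is hypothetical, and you should test the Inf-containing matrix case against mldivide's singularity warnings for your data):
function y = geninv(x)
  % Hypothetical helper: generalized reciprocal of x.
  if isscalar(x)
    y = 1/x;              % scalar division: 1/inf gives 0, as in the question
  else
    y = x\eye(size(x,1)); % solve x*y = I (least squares if rectangular)
  end
end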

Related

Matrix access by x/y coordinates without linear indexing and looping in Matlab/Octave

I'm operating on very big, 2D, sparse matrices in Octave. I have hit the linear indexing limit of 2^31 and need to go bigger.
The problem is I have two same-sized vectors of X and Y coordinates and would like to modify respective points without a loop.
I have already tried looping over one dimension, and arrayfun - both work, but suffer from serious performance issues.
Is there any way to do this without recompiling Octave for 64bit linear indexing?
Example of what I would like to achieve:
A = [1 2 3; 4 5 6; 7 8 9];
x = [1 3]; y = [3 2];
B = getxy/setxy(A, x, y) % [7 6]
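For reference, the conventional vectorized form of this uses sub2ind, sketched below; note that it still computes linear indices internally, so on its own it does not get around the 2^31 limit:
A = [1 2 3; 4 5 6; 7 8 9];
x = [1 3]; y = [3 2];
B = A(sub2ind(size(A), y, x))  % the getxy part: returns [7 6]
A(sub2ind(size(A), y, x)) = 0; % the setxy counterpart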

Compare three big matrices - Best way to get a meaningful and easy to understand indicator of the relation between the matrices?

I have 3 matrices (55000x3 double) and want to compare them.
I'm taking the arithmetic mean of the value at each position and additionally want to provide an indicator of how the three matrices correlate.
The values at one position of the matrices are, for example:
Matrix1 pos(1,1): 3.679
Matrix2 pos(1,1): 3.721
Matrix3 pos(1,1): 3.554
Since I cannot just give the standard deviation of each value (it would be too much information), I'm looking for a way to make a meaningful statement about the correlation without providing too much detail.
What's the best way to do this?
I think you want the correlation coefficient. You can reshape each of your matrices into a vector (using (:)), and then compute the correlation coefficient for each pair of vectors (originally matrices) using corrcoef.
For example, let:
Matrix1 = [ 1 2; 3 4; 5 6 ];
Matrix2 = -2*[ 1 2; 3 4; 5 6 ];
Matrix3 = [ 1.1 2.3; 3.4 4.1; 4.9 6.3 ];
Then
C = corrcoef([Matrix1(:) Matrix2(:) Matrix3(:)]);
gives
C =
1.0000 -1.0000 0.9952
-1.0000 1.0000 -0.9952
0.9952 -0.9952 1.0000
This tells you that, in this case,
Each of the three matrices is totally correlated with itself (C(1,1), C(2,2) and C(3,3) equal 1). This is obvious.
Matrices 1 and 2 have correlation coefficient C(1,2) equal to -1. This was expected, because matrix 2 is a negative multiple of matrix 1.
Matrices 1 and 3 are highly correlated (C(1,3) is 0.9952). This is because matrix 3 was defined as matrix 1 with some random "noise".
Matrices 2 and 3 are also highly correlated but with negative sign (C(2,3) is -0.9952), as should be clear from the above.
Have you tried representing your data using boxplot?
boxplot([data(:,1), data(:,2), data(:,3)]); % one box per column

Recovering original matrix from Eigenvalue Decomposition

According to Wikipedia, the eigenvalue decomposition should work as follows:
http://en.wikipedia.org/wiki/Square_root_of_a_matrix
(see the section "Computational methods: By diagonalization")
So if matrix A is decomposed such that it has eigenvector matrix V and eigenvalue matrix D, then A=VDV'.
A=[1 2; 3 4];
[V,D]=eig(A);
RepA=V*D*V';
However, in Matlab, A and RepA are not equal.
Why is this?
In general, the formula is:
RepA = V*D*inv(V);
or, written for better numeric accuracy in MATLAB,
RepA = V*D/V;
When A is symmetric, the V matrix will turn out to be orthogonal, which makes inv(V) = V.'. Your A is NOT symmetric, so you need the actual inverse.
Try it:
A=[1 2; 2 3]; % Symmetric
[V,D]=eig(A);
RepA = V*D*V';
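For the original non-symmetric A from the question, a quick check of the general formula shows the matrix is recovered to machine precision:
A = [1 2; 3 4];  % Not symmetric
[V,D] = eig(A);
RepA = V*D/V;    % general formula from above
norm(A - RepA)   % on the order of 1e-15, i.e., A and RepA agree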

singular value decomposition and low rank tensorial approximation

According to this article
http://www.wseas.us/e-library/conferences/2012/Vouliagmeni/MMAS/MMAS-07.pdf
a matrix can be approximated by rank-one matrices using a tensorial approximation. I know that in MATLAB the Kronecker product plays the same role as the tensorial product (the function is kron). Now let us suppose that we have the following matrix:
a=[2 1 3;4 3 5]
a =
2 1 3
4 3 5
The SVD of this matrix is
[U E V]=svd(a)
U =
-0.4641 -0.8858
-0.8858 0.4641
E =
7.9764 0 0
0 0.6142 0
V =
-0.5606 0.1382 -0.8165
-0.3913 0.8247 0.4082
-0.7298 -0.5484 0.4082
Please help me implement an algorithm that reconstructs the original matrix using the tensorial approximation in the MATLAB language. How can I apply the tensorial product? Like this:
X=kron(U(:,1),V(:,1));
or some other way? Thanks in advance.
I'm not quite sure about the tensorial interpretation, but the closest rank-1 approximation to the matrix is essentially the outer product of the two dominant singular vectors, scaled by the largest singular value.
In simple words, if [U E V] = svd(X), then the closest rank-1 approximation to X is the outer product of the first left and right singular vectors multiplied by the first singular value.
In MATLAB, you could do this as:
U(:,1)*E(1,1)*V(:,1)'
Which yields:
ans =
2.0752 1.4487 2.7017
3.9606 2.7649 5.1563
Also, mathematically speaking, the Kronecker product of a row vector and a column vector is essentially their outer product. So, you could do the same thing using Kronecker products as:
(kron(U(:,1)',V(:,1))*E(1,1))'
Which yields the same answer.
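To reconstruct the original matrix exactly, you can sum all the rank-1 terms, one per nonzero singular value. A minimal sketch using the question's a:
a = [2 1 3; 4 3 5];
[U, E, V] = svd(a);
rec = zeros(size(a));
for k = 1:rank(a)                          % rank(a) = 2 here
  rec = rec + E(k,k) * U(:,k) * V(:,k)';   % add the k-th rank-1 term
end
rec                                        % equals a up to rounding error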

How to multiply two vectors with different lengths

Let's say I have two vectors:
A = [1 2 3];
B = [1 2];
And that I need a function similar to multiplication of A*B to produce the following output:
[
1 2 3
2 4 6
]
It seems that things like A*B, A*B' or A.*B are not allowed as the number of elements is not the same.
The only way I managed to do this (I am quite new at MATLAB) is using ndgrid to make two matrices with the same number of elements like this:
[B1,A1] = ndgrid(B, A);
B1.*A1
ans =
1 2 3
2 4 6
Would this have good performance if the number of elements were large?
Is there a better way to do this in MATLAB?
Actually I am trying to solve the following problem with MATLAB:
t = [1 2 3]
y(t) = sum_{i=1}^{n} pi*t*i, with n = 2
Nevertheless, even if there is a better way to solve the actual problem, it would be interesting to know the answer to my first question.
You are talking about an outer product. If A and B are both row vectors, then you can use:
A'*B
If they are both column vectors, then you can use
A*B'
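With the question's row vectors, for example, a quick check of this outer product:
A = [1 2 3];
B = [1 2];
A'*B
ans =
1 2
2 4
3 6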
The * operator in MATLAB represents matrix multiplication. The most basic rule of matrix multiplication is that the number of columns of the first matrix must match the number of rows of the second. Let's say that I have two matrices, A and B, with dimensions MxN and UxV respectively. Then I can only perform matrix multiplication under the following conditions:
A = rand(M,N);
B = rand(U,V);
A*B % Only valid if N==U (result is a MxV matrix)
A'*B % Only valid if M==U (result is a NxV matrix)
A*B' % Only valid if N==V (result is a MxU matrix)
A'*B' % Only valid if V==M (result is a NxU matrix)
There are four more possible cases, but they are just the transposes of the cases shown. Now, since vectors are just matrices with only one non-singleton dimension, the same rules apply:
A = [1 2 3]; % (A is a 1x3 matrix)
B = [1 2]; % (B is a 1x2 matrix)
A*B % Not valid!
A'*B % Valid. (result is a 3x2 matrix)
A*B' % Not valid!
A'*B' % Not valid!
Again, there are four other possible cases, but the only one that is valid is B'*A, which is the transpose of A'*B and results in a 2x3 matrix (exactly the output requested in the question).
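As an aside, the summation from the question, y(t) = sum_{i=1}^{n} pi*t*i with n = 2, can be evaluated with the same outer-product pattern. A minimal sketch, assuming the expression is exactly as stated:
t = [1 2 3];
n = 2;
terms = pi * (1:n)' * t;  % n-by-3 outer product: row i holds pi*t*i
y = sum(terms, 1)         % column sums give y(t); for n = 2 this equals 3*pi*t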