Computing only necessary rows of a matrix product - matlab

Suppose I have a large (but possibly sparse) matrix A, which is K-by-K in dimension. I have another K-by-1 vector, b.
Let Ax=b. If I am only interested in the first n rows, where n < K, of x, then one way of dealing with this in MATLAB is to calculate x=A\b and take the first n elements.
If the dimension K is so large that the entire computation is infeasible, is there any other way to get these elements?

I guess one way would be to rearrange the columns of A and rows of x so that the elements you are interested in occur at the end of x. Then you would reduce [A,b] to row echelon form. Finally, to get the components you are after, you take the lower-right n-by-n submatrix of the modified A (let's call it An) and you solve the reduced system An * xn = bn, where xn denotes the subvector of x that you are interested in, and bn denotes the last n rows of b after the row echelon reduction.
I mean, the conversion here to echelon form is still expensive, but you don't need to solve for the rest of the components in x, which can save you time.
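Here is a minimal sketch of that idea, using an LU factorization as the elimination step (the permutation perm, the sizes, and the comparison against A\b are only for illustration; as noted, the elimination itself is still the expensive part):
K = 100; n = 10;
A = randn(K,K); b = randn(K,1);
perm = [n+1:K, 1:n];                     % push the wanted unknowns x(1:n) to the end
[L,U,P] = lu(A(:,perm));                 % elimination to upper (echelon) form
c = L \ (P*b);                           % forward substitution on the right-hand side
xn = U(K-n+1:K, K-n+1:K) \ c(K-n+1:K);   % back-solve only the trailing n-by-n block
xref = A \ b;
disp(norm(xn - xref(1:n)))               % should be near zero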

Just an idea: you could try to use block matrix inversion. If you partition your matrix into blocks A = [A11, A12; A21, A22], where A11 is n x n, you can compute the blocks of its inverse B = inv(A) = [B11, B12; B21, B22] via block matrix inversion. There are different versions of it; you could use the one where the Schur complement is only of size n x n. I'm not quite sure whether it is possible to avoid any inversion that scales with K, but you could look into it.
Your solution is then x(1:n) = [B11, B12]*b, which saves you from ever computing B21 and B22. Still, I'm not sure if it is worth it; it depends on the dimensions, I guess.
Here is one version, though this still needs the inverse of A22 which is (K-n)x(K-n):
K = 100;
n = 10;
A = randn(K,K);
b = randn(K,1);
% reference version: full inverse
xfull = inv(A)*b;
% blocks of A
A11 = A(1:n,1:n);
A12 = A(1:n,n+1:K);
A21 = A(n+1:K,1:n);
A22 = A(n+1:K,n+1:K);
% blocks of inverse
A22i = inv(A22); % not sure if this can be avoided
B11 = inv(A11 - A12*A22i*A21);
B12 = -B11*A12*A22i;
% solution
x_n = [B11,B12]*b;
disp(x_n - xfull(1:n))
edit: Of course, this computes the inverse "explicitly", and as such is probably much slower than just solving the linear system. It could be worth it if you had several vectors b to solve for with a fixed A.
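For instance, with B11 and B12 precomputed as above, several right-hand sides (here a hypothetical K-by-5 matrix Bmat whose columns are the individual b vectors) are handled with a single multiplication:
Bmat = randn(K, 5);        % several right-hand sides as columns
Xn = [B11, B12] * Bmat;    % n-by-5: the first n entries of each solution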

Related

How to solve a linear system for only one component in MATLAB

I need to solve the linear system
A x = b
which can be done efficiently by
x = A \ b
But now A is very large and I actually only need one component, say x(1). Is there a way to solve this more efficiently than to compute all components of x?
A is not sparse. Here, efficiency is actually an issue because this is done for many b.
Also, storing the inverse of A and multiplying only its first row by b is not possible because A is badly conditioned. Using the \ operator employs the LDL solver in this case, and accuracy is lost when the inverse is used explicitly.
I don't think you'd technically get a speed-up over the very optimized MATLAB routine, but if you understand how the system is solved, you can solve for just one part of x. For example, a traditional QR solve uses back substitution, while an LU solve uses both forward and back substitution. Take LU: unfortunately, the back substitution starts at the last component because of the order in which the system is solved, so x(1) is the last thing you get. The same is true for LDL, which would employ both. That doesn't preclude the fact that there may be more efficient ways of solving whatever system you have.
function [Q,R] = qrcgs(A)
% Classical Gram-Schmidt QR factorization of an m x n matrix
[m,n] = size(A);
% Preallocate the Q, R matrices
Q = zeros(m,n);
R = zeros(n,n);
for k = 1:n
    % Vector to be orthogonalized and normalized
    w = A(:,k);
    for j = 1:k-1
        % Get the R entries
        R(j,k) = Q(:,j)'*w;
    end
    for j = 1:k-1
        % Subtract off the orthogonal projections
        w = w - R(j,k)*Q(:,j);
    end
    % Normalize
    R(k,k) = norm(w);
    Q(:,k) = w./R(k,k);
end
end
function x = backsub(R,b)
% Back substitution for an upper triangular matrix.
[m,n] = size(R);
p = min(m,n);
x = zeros(n,1);
for i = p:-1:1
    % Work from the bottom row up, assigning one component at a time
    r = b(i);
    for j = (i+1):p
        % Subtract off the already-solved components
        r = r - R(i,j)*x(j);
    end
    x(i) = r/R(i,i);
end
end
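For completeness, here is one way the two helpers above could be combined to solve a small system via QR (a quick sketch on random data, just to show the flow):
A = randn(6); b = randn(6,1);
[Q, R] = qrcgs(A);
x = backsub(R, Q'*b);      % A*x = b  <=>  R*x = Q'*b, solved from the bottom up
disp(norm(A*x - b))        % should be close to zero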
The method mldivide, generally represented as \ accepts solving many systems with the same A at once.
x = A\[b1 b2 b3 b4] % where the bi are column vectors with n rows
This solves the system for each b and returns an n-by-4 matrix, where each column is the solution for the corresponding b. Calling mldivide like this should improve efficiency because the decomposition is only done once.
As in many decompositions like LU or LDL' (and in the one you are interested in, in particular) the matrix multiplying x is upper triangular, so the first value to be solved is x(n). However, since you have to do the LDL' decomposition anyway, a simple backward-substitution algorithm won't be the bottleneck of the code. Therefore, the decomposition can be saved in order to avoid repeating the calculation for every bi. Thus, the code would look similar to this:
[LA,DA] = ldl(A);
DA = sparse(DA);
% LA = sparse(LA); %LA can also be converted to sparse matrix
% loop over bi
xi = LA'\(DA\(LA\bi));
% end loop
As you can see in the documentation of mldivide (Algorithms section), it performs some checks on the input matrices, and with LA defined as full and DA as sparse, it should go directly for a triangular solver and a tridiagonal solver. If LA were converted to sparse, it would use a triangular solver too; I don't know whether the conversion to sparse would represent any improvement.

transformation matrix of reduced row echelon form

I can compute the reduced row echelon form R of a matrix C in Matlab using the command R = rref(C).
However, I would also like to keep track of the performed steps, that is, to obtain the transformation matrix T that gives me TC = R. This matrix should, to the best of my knowledge, be implicitly computed when using Gauss-Jordan elimination.
Is there a way to get T? Maybe a workaround? In the MATLAB documentation, I couldn't find any information. Are there maybe rref functions in other programming languages that would return T?
You can use the fact that elementary row operations are equivalent to multiplying with an elementary matrix on the left. Let c be a matrix of size m x n:
z= rref([c eye(m)]); % [c I] is multiplied by some matrix T
% the result is [rref(c) T]
r= z(:,1:n); % the reduced row echelon form of c
t= z(:,n+1:end); % now we have T
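A quick check of the idea on a small example (the matrix is arbitrary, chosen just for illustration):
c = [1 2 3; 4 5 6; 7 8 10];
m = size(c,1); n = size(c,2);
z = rref([c eye(m)]);
r = z(:,1:n);              % rref(c)
t = z(:,n+1:end);          % the transformation matrix T
disp(norm(t*c - r))        % should be (numerically) zero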

Calculating the essential matrix from two sets of corresponding points

I'm trying to reconstruct a 3d image from two calibrated cameras. One of the steps involved is to calculate the 3x3 essential matrix E, from two sets of corresponding (homogeneous) points (more than the 8 required) P_a_orig and P_b_orig and the two camera's 3x3 internal calibration matrices K_a and K_b.
We start off by normalizing our points with
P_a = inv(K_a) * P_a_orig
and
P_b = inv(K_b) * P_b_orig
We also know the constraint
P_b' * E * P_a = 0
I follow it this far, but how do you actually solve that last problem, i.e. finding the nine values of the E matrix? I've read several different lecture notes on this subject, but they all leave out that crucial last step, likely because it is supposedly trivial math, but I can't remember when I last did this and I haven't been able to find a solution yet.
This equation is actually pretty common in geometry algorithms; essentially, you are trying to calculate the matrix X from an equation of the form A*X*B = 0. To solve this, you vectorise the equation, which means
vec(A*X*B) = (B' ⊗ A) * vec(X) = 0
Here vec() means the vectorised form of a matrix, i.e., you simply stack the columns of the matrix one over the other to produce a single column vector. If you don't know the meaning of the scary-looking symbol ⊗, it's called the Kronecker product (kron in MATLAB); it's easy, trust me :-)
Now, say I call the matrix obtained by the Kronecker product of B^T and A as C.
Then vec(X) is the null vector of the matrix C, and the way to obtain it is by doing the SVD decomposition of C^T*C (C transpose multiplied by C) and taking the last column of the matrix V. This last column is nothing but your vec(X). Reshape X to a 3-by-3 matrix. This is your essential matrix.
In case you find this maths too daunting to code, simply use the following code by Y.Ma et.al:
% p are homogeneous coordinates of the first image, of size 3 by n
% q are homogeneous coordinates of the second image, of size 3 by n
function [E] = essentialDiscrete(p,q)
n = size(p);
NPOINTS = n(2);
% set up matrix A such that A*[v1,v2,v3,s1,s2,s3,s4,s5,s6]' = 0
A = zeros(NPOINTS, 9);
if NPOINTS < 9
    error('Too few measurements')
end
for i = 1:NPOINTS
    A(i,:) = kron(p(:,i),q(:,i))';
end
r = rank(A);
if r < 8
    warning('Measurement matrix rank deficient')
    T0 = 0; R = [];
end
[U,S,V] = svd(A);
% pick the right singular vector corresponding to the smallest singular value
e = V(:,9);
e = (round(1.0e+10*e))*(1.0e-10);
% essential matrix
E = reshape(e, 3, 3);
You can do several things:
The Essential matrix can be estimated using the 8-point algorithm, which you can implement yourself.
You can use the estimateFundamentalMatrix function from the Computer Vision System Toolbox, and then get the Essential matrix from the Fundamental matrix.
Alternatively, you can calibrate your stereo camera system using the estimateCameraParameters function in the Computer Vision System Toolbox, which will compute the Essential matrix for you.
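As a rough sketch of the second option (double-check the calling convention against the estimateFundamentalMatrix documentation; P_a_orig, P_b_orig, K_a and K_b are the quantities from the question, assuming the third homogeneous coordinate is 1):
pts_a = P_a_orig(1:2,:)';    % N-by-2 pixel coordinates, as the function expects
pts_b = P_b_orig(1:2,:)';
F = estimateFundamentalMatrix(pts_a, pts_b, 'Method', 'Norm8Point');
E = K_b' * F * K_a;          % essential matrix from the fundamental matrix and the intrinsics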

How to speed up multiple vector convolution in MATLAB?

I'm having a problem with finding a faster way to convolve multiple vectors. All the vectors have the same length M, so these vectors can be combined as a matrix (A) with the size (N, M). N is the number of vectors.
Now I am using the below code to convolve all these vectors:
B = 1;
for i = 1:N
    B = conv(B, A(i,:));
end
I found that this piece of code becomes a speed-limiting step in my program since it is frequently called. My question is: is there a way to make this calculation faster? Note that M is a small number (say 2).
It should be quite a lot faster if you implement your convolution as multiplication in the frequency domain.
Look at the way fftfilt is implemented. You can't get optimal performance using fftfilt, because you want to only convert back to time domain after all convolutions are complete, but it nicely illustrates the method.
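A minimal sketch of that idea applied to the loop from the question (A is the N-by-M matrix of kernels; real() just strips the round-off imaginary parts):
[N, M] = size(A);
Lout = N*(M-1) + 1;              % length of the convolution of all N kernels
F = ones(1, Lout);               % spectrum of the identity (delta) kernel
for ii = 1:N
    F = F .* fft(A(ii,:), Lout); % multiply spectra instead of convolving
end
B = real(ifft(F));               % convert back to the time domain only once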
Convolution is associative. Combine the small kernels, convolve once with the data.
Test data:
M = 2; N = 5; L = 100;
A = rand(N,M);
Bsrc = rand(1,L);
Reference (convolve each kernel with data):
B = Bsrc;
for i = 1:N
    B = conv(B, A(i,:));
end
Combined kernels:
A0 = 1;
for ii = 1:N
    A0 = conv(A0, A(ii,:));
end
B0 = conv(Bsrc,A0);
Compare:
>> max(abs(B-B0))
ans =
2.2204e-16
If you perform this convolution often, precompute A0 so you can just do one convolution (B0 = conv(Bsrc,A0);).

How do I create a simliarity matrix in MATLAB?

I am working towards comparing multiple images. I have these image data as column vectors of a matrix called "images". I want to assess the similarity of images by first computing their Euclidean distance. I then want to create a matrix over which I can execute multiple random walks. Right now, my code is as follows:
% clear
% clc
% close all
%
% load tea.mat;
images = Input.X;
M = zeros(size(images, 2), size(images, 2));
for i = 1:size(images, 2)
    for j = 1:size(images, 2)
        normImageTemp = sqrt((sum((images(:, i) - images(:, j))./256).^2));
        % Need to accurately select the value of gamma_i
        gamma_i = 1/10;
        M(i, j) = exp(-gamma_i.*normImageTemp);
    end
end
My matrix M however, ends up having a value of 1 along its main diagonal and zeros elsewhere. I'm expecting "large" values for the first few elements of each row and "small" values for elements with column index > 4. Could someone please explain what is wrong? Any advice is appreciated.
Since you're trying to compute a Euclidean distance, it looks like you have an error in where your parentheses are placed when you compute normImageTemp. You have this:
normImageTemp = sqrt((sum((...)./256).^2));
%# ^--- Note that this parenthesis...
But you actually want to do this:
normImageTemp = sqrt(sum(((...)./256).^2));
%# ^--- ...should be here
In other words, you need to perform the element-wise squaring, then the summation, then the square root. What you are doing now is summing elements first, then squaring and taking the square root of the summation, which essentially cancel each other out (or are actually the equivalent of just taking the absolute value).
Incidentally, you can actually use the function NORM to perform this operation for you, like so:
normImageTemp = norm((images(:, i) - images(:, j))./256);
The results you're getting seem reasonable. Recall the behavior of exp(-x): when x is zero, exp(-x) is 1; when x is large, exp(-x) approaches zero.
Perhaps if you make M(i,j) = normImageTemp; you'd see what you expect to see.
Consider this solution:
I = Input.X;
D = squareform( pdist(I') );   % euclidean distance between columns of I
M = exp(-(1/10) * D);          % similarity matrix between columns of I
PDIST and SQUAREFORM are functions from the Statistics Toolbox.
Otherwise consider this equivalent vectorized code (using only built-in functions):
%# we know that: ||u-v||^2 = ||u||^2 + ||v||^2 - 2*u.v
X = sum(I.^2,1);
D = real( sqrt(bsxfun(@plus,X,X')-2*(I'*I)) );
M = exp(-(1/10) * D);
As was explained in the other answers, D is the distance matrix, while exp(-D) is the similarity matrix (which is why you get ones on the diagonal).
There is an already implemented function, pdist: if you have a matrix A (one observation per row; use A' if, as in the question, your observations are stored as columns), you can directly do
Sim = squareform(pdist(A))