Calculating Covariance Matrix in Matlab

I am implementing a PCA algorithm in MATLAB. I see two different approaches to calculating the covariance matrix:
C = sampleMat.' * sampleMat ./ nSamples;
and
C = cov(data);
What is the difference between these two methods?
PS 1: When I use cov(data), is this mean-removal step unnecessary?
meanSample = mean(data,1);
data = data - repmat(meanSample, nSamples, 1);
PS 2:
In the first approach, should I use nSamples or nSamples - 1?

In short: cov mainly just adds convenience to the bare formula.
If you type
edit cov
you'll see a lot of code, with these lines all the way at the bottom:
xc = bsxfun(@minus, x, sum(x,1)/m);  % Remove mean
if flag
    xy = (xc' * xc) / m;
else
    xy = (xc' * xc) / (m-1);  % DEFAULT
end
which is essentially the same as your first line, save for the subtraction of the column-means.
Read the Wikipedia article on the sample covariance (Bessel's correction) to see why there is a minus-one in the default path.
Note however that your first line uses the plain transpose (.'), whereas the cov version uses the conjugate transpose ('). This makes the output of cov different for complex-valued data.
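A quick illustration of the difference (a sketch with made-up complex data):
z = [1+2i; 3-1i];
s1 = z.' * z   % plain transpose: sum of z.^2, complex in general (here 5-2i)
s2 = z' * z    % conjugate transpose: sum of abs(z).^2, real and nonnegative (here 15)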
Also note that cov is not a built-in function but a regular M-file. That means there may be a (possibly severe) performance penalty when calling cov in a loop; MATLAB's JIT compiler cannot accelerate non-built-in functions.
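As a quick sanity check (a sketch, assuming real-valued data), the bare formula with the mean removed and the m-1 normalization reproduces cov:
data = randn(100, 3);                            % 100 samples, 3 variables
nSamples = size(data, 1);
centered = data - repmat(mean(data,1), nSamples, 1);
C1 = (centered' * centered) / (nSamples - 1);    % manual, default normalization
C2 = cov(data);                                  % built-in
max(abs(C1(:) - C2(:)))                          % should be on the order of eps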

Related

Vectorizing the solution of a linear equation system in MATLAB

Summary: This question deals with the improvement of an algorithm for the computation of linear regression.
I have a 3D array (dlMAT) representing monochrome photographs of the same scene taken at different exposure times (the vector IT). Mathematically, every vector along the 3rd dimension of dlMAT represents a separate linear regression problem that needs to be solved. The equation whose coefficients need to be estimated is of the form:
DL = R*IT^P, where DL and IT are obtained experimentally and R and P must be estimated.
The above equation can be transformed into a simple linear model after applying a logarithm:
log(DL) = log(R) + P*log(IT) => y = a + b*x
Presented below is the most "naive" way to solve this system of equations, which essentially involves iterating over all "3rd dimension vectors" and fitting a polynomial of order 1 to (IT, DL(ind1,ind2,:)):
%// Define some nominal values:
R = 0.3;
IT = 600:600:3000;
P = 0.97;
%// Impose some believable spatial variations:
pMAT = 0.01*randn(3)+P;
rMAT = 0.1*randn(3)+R;
%// Generate "fake" observation data:
dlMAT = bsxfun(@times,rMAT,bsxfun(@power,permute(IT,[3,1,2]),pMAT));
%// Regression:
sol = cell(size(rMAT)); %// preallocation
for ind1 = 1:size(dlMAT,1)
    for ind2 = 1:size(dlMAT,2)
        sol{ind1,ind2} = polyfit(log(IT(:)),log(squeeze(dlMAT(ind1,ind2,:))),1);
    end
end
fittedP = cellfun(@(x)x(1),sol); %// Estimate of pMAT
fittedR = cellfun(@(x)exp(x(2)),sol); %// Estimate of rMAT
The above approach seems like a good candidate for vectorization, since it does not leverage MATLAB's main strength: matrix operations. For this reason, it does not scale well and takes much longer to execute than I think it should.
There exist alternative ways to perform this computation based on matrix division, as demonstrated here and here, which involve something like this:
sol = [ones(size(x)),log(x)]\log(y);
That is, appending a vector of 1s to the observations, followed by mldivide to solve the equation system.
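For illustration, a minimal sketch of that idea on a single made-up regression problem (the variables here are hypothetical stand-ins for log(IT) and log(DL)):
x = log((600:600:3000).');            % predictor, as in the problem
y = 2 + 0.5*x + 0.01*randn(size(x));  % fake observations of y = a + b*x
coeffs = [ones(size(x)), x] \ y;      % least-squares fit: coeffs = [a; b]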
The main challenge I'm facing is how to adapt my data to the algorithm (or vice versa).
Question #1: How can the matrix-division-based solution be extended to solve the problem presented above (and potentially replace the loops I am using)?
Question #2 (bonus): What is the principle behind this matrix-division-based solution?
The secret ingredient behind the solution that includes matrix division is the Vandermonde matrix. The question discusses a linear problem (linear regression), and those can always be formulated as a matrix problem, which \ (mldivide) can solve in a mean-square error sense. Such an algorithm, solving a similar problem, is demonstrated and explained in this answer.
Below is benchmarking code that compares the original solution with two alternatives suggested in chat:
function regressionBenchmark(numEl)
clc
if nargin<1, numEl=10; end
%// Define some nominal values:
R = 5;
IT = 600:600:3000;
P = 0.97;
%// Impose some believable spatial variations:
pMAT = 0.01*randn(numEl)+P;
rMAT = 0.1*randn(numEl)+R;
%// Generate "fake" measurement data using the relation "DL = R*IT.^P"
dlMAT = bsxfun(@times,rMAT,bsxfun(@power,permute(IT,[3,1,2]),pMAT));
%% // Method1: loops + polyfit
disp('-------------------------------Method 1: loops + polyfit')
tic; [fR,fP] = method1(IT,dlMAT); toc;
fprintf(1,'Regression performance:\nR: %d\nP: %d\n',norm(fR-rMAT,1),norm(fP-pMAT,1));
%% // Method2: loops + Vandermonde
disp('-------------------------------Method 2: loops + Vandermonde')
tic; [fR,fP] = method2(IT,dlMAT); toc;
fprintf(1,'Regression performance:\nR: %d\nP: %d\n',norm(fR-rMAT,1),norm(fP-pMAT,1));
%% // Method3: vectorized Vandermonde
disp('-------------------------------Method 3: vectorized Vandermonde')
tic; [fR,fP] = method3(IT,dlMAT); toc;
fprintf(1,'Regression performance:\nR: %d\nP: %d\n',norm(fR-rMAT,1),norm(fP-pMAT,1));

function [fittedR,fittedP] = method1(IT,dlMAT)
sol = cell(size(dlMAT,1),size(dlMAT,2));
for ind1 = 1:size(dlMAT,1)
    for ind2 = 1:size(dlMAT,2)
        sol{ind1,ind2} = polyfit(log(IT(:)),log(squeeze(dlMAT(ind1,ind2,:))),1);
    end
end
fittedR = cellfun(@(x)exp(x(2)),sol);
fittedP = cellfun(@(x)x(1),sol);

function [fittedR,fittedP] = method2(IT,dlMAT)
sol = cell(size(dlMAT,1),size(dlMAT,2));
for ind1 = 1:size(dlMAT,1)
    for ind2 = 1:size(dlMAT,2)
        sol{ind1,ind2} = flipud([ones(numel(IT),1) log(IT(:))]\log(squeeze(dlMAT(ind1,ind2,:)))).';
    end
end
fittedR = cellfun(@(x)exp(x(2)),sol);
fittedP = cellfun(@(x)x(1),sol);

function [fittedR,fittedP] = method3(IT,dlMAT)
N = 1; %// Degree of polynomial
VM = bsxfun(@power, log(IT(:)), 0:N); %// Vandermonde matrix
result = fliplr((VM\log(reshape(dlMAT,[],size(dlMAT,3)).')).');
%// Compressed version:
%// result = fliplr(([ones(numel(IT),1) log(IT(:))]\log(reshape(dlMAT,[],size(dlMAT,3)).')).');
fittedR = exp(real(reshape(result(:,2),size(dlMAT,1),size(dlMAT,2))));
fittedP = real(reshape(result(:,1),size(dlMAT,1),size(dlMAT,2)));
The reason why method 2 can be vectorized into method 3 is essentially that matrix multiplication can be separated by the columns of the second matrix. If A*B produces matrix X, then by definition A*B(:,n) gives X(:,n) for any n. Moving A to the right-hand side with mldivide, this means that the divisions A\X(:,n) can be done in one go for all n with A\X. The same holds for an overdetermined system (a linear regression problem), in which there is in general no exact solution, and mldivide finds the solution that minimizes the mean-square error. In this case too, the operations A\X(:,n) (method 2) can be done in one go for all n with A\X (method 3).
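A minimal sketch demonstrating this equivalence on random data:
A = randn(10, 2);                 % overdetermined system: 10 equations, 2 unknowns
X = randn(10, 5);                 % 5 right-hand sides
solBatch = A \ X;                 % all 5 least-squares problems in one go
solLoop = zeros(2, 5);
for n = 1:5
    solLoop(:, n) = A \ X(:, n);  % one column at a time
end
max(abs(solBatch(:) - solLoop(:)))    % ~eps: identical results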
The implications of improving the algorithm when increasing the size of dlMAT can be seen from the benchmark results (plot not reproduced here):
For the case of 500*500 (or 2.5E5) elements, the speedup from Method 1 to Method 3 is about x3500!
It is also interesting to observe the output of profile (here, for the case of 500*500; the profiler screenshots for Methods 1, 2 and 3 are not reproduced here).
From the above it is seen that rearranging the elements via squeeze and flipud takes up about half (!) of the runtime of Method 2. It is also seen that some time is lost on the conversion of the solution from cells to matrices.
Since the 3rd solution avoids all of these pitfalls, as well as the loops altogether (and with them the per-iteration overhead), it unsurprisingly results in a considerable speedup.
Notes:
There was very little performance difference between the "compressed" and the "explicit" versions of Method 3 (slightly in favor of the "explicit" one), so the "compressed" version was not included in the comparison.
A solution was attempted where the inputs to Method 3 were gpuArray-ed. This did not improve performance (and even degraded it somewhat), possibly due to a wrong implementation, or the overhead associated with copying matrices back and forth between RAM and VRAM.

MATLAB short way to find closest vector?

In my application, I need to find the "closest" vector (minimum Euclidean distance) to an input vector, among a set of vectors (i.e., a matrix).
Therefore every single time I have to do this:
function [match_col] = find_closest_column(input_vector, vectors)
cmin = inf; % current minimum distance
match_col = -1;
width = size(vectors, 2);
for col = 1:width
    candidate_vector = vectors(:,col); % structure of the input is not important
    dist = norm(input_vector - candidate_vector);
    if dist < cmin
        cmin = dist;
        match_col = col;
    end
end
end
Is there a built-in MATLAB function that does this kind of thing easily (with a short amount of code) for me?
Thanks for any help!
Use pdist2. Assuming (from your code) that your vectors are columns, transposition is needed because pdist2 works with rows:
[cmin, match_col] = min(pdist2(vectors.', input_vector.' ,'euclidean'));
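pdist2 can also return the minimum directly via its 'Smallest' option (Statistics Toolbox); a sketch:
[cmin, match_col] = pdist2(vectors.', input_vector.', 'euclidean', 'Smallest', 1);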
It can also be done with bsxfun (in this case it's easier to work directly with columns):
[cmin, match_col] = min(sum(bsxfun(@minus, vectors, input_vector).^2));
cmin = sqrt(cmin); %// to save operations, apply sqrt only to the minimizer
norm can't be directly applied to every column or row of a matrix, so you can use arrayfun:
dist = arrayfun(@(col) norm(input_vector - vectors(:,col)), 1:size(vectors,2));
[cmin, match_col] = min(dist);
This solution was also given here.
HOWEVER, this solution is much, much slower than the direct computation using bsxfun (as in Luis Mendo's answer), so it should be avoided. arrayfun should be reserved for more complex functions, where a vectorized approach is harder to come by.
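On newer MATLAB releases, implicit expansion (R2016b+) and vecnorm (R2017a+) allow an even shorter variant of the bsxfun approach; a sketch:
dist = vecnorm(vectors - input_vector);   % Euclidean norm of each column
[cmin, match_col] = min(dist);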

Integrating matrix minors without a loop in Matlab

I am trying to integrate over all the 2x2 minors A(i-1:i,j-1:j) in Matlab without using a loop. Right now I am doing it in a loop but it is extremely slow. The code is shown below:
A = rand(100);
t = linspace(0,1,100);
for i = 2:length(A)
    for j = 2:length(A)
        A_minor = A(i-1:i,j-1:j);
        B(i,j) = trapz(t(j-1:j), trapz(t(i-1:i), A_minor));
    end
end
I'd like to do this without using loops to speed up computation.
If you have the Matlab Image Processing Toolbox, you may be able to use blockproc to do what you want.
http://www.mathworks.com/help/images/ref/blockproc.html
To use blockproc, you need to define a function to be executed on each block of the matrix. Note that the way you are using trapz makes things a little trickier, because you pass in the x-values (if you can get away without them, the code simplifies); here I run trapz without the x-values and rescale the result afterwards.
% Data
foo = rand(100);
t = linspace(0,1,100);
% Execute blockproc on the indexes
fooproc = blockproc(foo, [2, 2], @(x) trapz(trapz(x.data)));
fooproc = fooproc * (t(2)-t(1))^2; % re-scale by the square of the step size
If you need to pass the x values to trapz, the solution gets a bit trickier.
As trapz is a simple function (especially on a 2x2 matrix), you can just compute the result directly, without calling a function:
t = linspace(0,1,100); % Note that this is a step size of 0.010101
A = rand(100);
B = nan(size(A));
Atmp = (A(1:end-1,:) + A(2:end,:))/2;
Atmp = (Atmp(:,1:end-1) + Atmp(:,2:end))/2;
B(2:end,2:end) = Atmp * (t(2)-t(1))^2;
This should give you the exact same result as your for loop, but much faster.
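To convince yourself the two agree, a quick check (a sketch reusing A, t and B from above) against the original double loop:
Bloop = nan(size(A));
for i = 2:size(A,1)
    for j = 2:size(A,2)
        Bloop(i,j) = trapz(t(j-1:j), trapz(t(i-1:i), A(i-1:i,j-1:j)));
    end
end
max(max(abs(B(2:end,2:end) - Bloop(2:end,2:end))))   % should be ~eps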

Implementing iterative solution of integral equation in Matlab

We have an equation similar to the Fredholm integral equation of the second kind.
To solve this equation we have been given an iterative solution that is guaranteed to converge for our specific equation. Now our only problem consists in implementing this iterative procedure in MATLAB.
For now, the problematic part of our code looks like this:
function delta = delta(x,a,P,H,E,c,c0,w)
    delt = @(x) delta_a(x,a,P,H,E,c0,w);
    for i = 1:500
        delt = @(x) delt(x) - 1/E.*integral(@(xi)((c(1)-c(2)*delt(xi))*ms(xi,x,a,P,H,w)),0,a-0.001);
    end
    delta = delt;
end
delta_a is a function of x and represents the initial value of the iteration. ms is a function of x and xi.
As you can see, we want delt to depend on both x (before the integral) and xi (inside the integral) in the iteration. Unfortunately this way of writing the code (with the function handle) does not give us a numerical value, as we wish. We can't write delt as two different functions either, one of x and one of xi, since xi is not defined until integral defines it. So, how can we make sure that delt depends on xi inside the integral, and still get a numerical value out of the iteration?
Do any of you have any suggestions to how we might solve this?
Using numerical integration
Explanation of the input parameters: x is a vector of numerical values, all the rest are constants. A problem with my code is that the input parameter x is not being used (I guess this means that x is being treated as a symbol).
It looks like you can nest anonymous functions in MATLAB:
>> f = @(x) 2*x
f =
    @(x)2*x
>> ff = @(x) f(f(x))
ff =
    @(x)f(f(x))
>> ff(2)
ans =
     8
>> f = ff;
>> f(2)
ans =
     8
It is also possible to rebind a function handle to a new function.
Thus, you can set up your iteration like
delta_old = @(x) delta_a(x);
for i = 1:500
    delta_new = @(x) delta_old(x) - integral(@(xi) delta_old(xi), 0, a-0.001);
    delta_old = delta_new;
end
plus the inclusion of your parameters...
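For instance, a hedged sketch with the question's parameters filled in (delta_a and ms are the asker's own functions, assumed to accept vector inputs; note that integral evaluates its integrand on vectors of xi, so element-wise operators are needed, and each iteration nests the handles one level deeper, which becomes very slow long before 500 iterations):
delt_old = @(x) delta_a(x, a, P, H, E, c0, w);
for it = 1:500
    delt_new = @(x) delt_old(x) - (1/E) .* integral(@(xi) (c(1) - c(2).*delt_old(xi)) .* ms(xi, x, a, P, H, w), 0, a-0.001);
    delt_old = delt_new;   % rebind: the new handle captures the previous one by value
end
delta_val = delt_old(x);   % evaluate the handle to obtain numerical values (scalar x assumed)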
You may want to consider solving a discretized version of your problem.
Let K be the matrix which discretizes your Fredholm kernel k(t,s), e.g.
K(i,j) = int_a^b k(x_i, s) l_j(s) ds
where l_j(s) is, for instance, the j-th Lagrange interpolant associated to the interpolation nodes (x_i) = x_1,x_2,...,x_n.
Then, solving your Picard iterations is as simple as doing
phi_{n+1} = f + K*phi_n
i.e.
for i = 1:N
    phi = f + K*phi;
end
where phi_n and f are the nodal values of phi and f on the (x_i).
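A minimal sketch of this discretization, assuming a hypothetical kernel and forcing term, and simple trapezoidal quadrature weights in place of Lagrange interpolants:
n = 50;
x = linspace(0, 1, n).';                        % nodes on [a,b] = [0,1]
w = (x(2)-x(1)) * [0.5; ones(n-2,1); 0.5];      % trapezoidal quadrature weights
k = @(t,s) 0.5*exp(-abs(t - s));                % hypothetical kernel (contractive, so Picard converges)
f = sin(pi*x);                                  % hypothetical forcing term, nodal values
[T, S] = ndgrid(x, x);
K = k(T, S) .* repmat(w.', n, 1);               % K(i,j) ~ k(x_i, x_j) * w_j
phi = f;                                        % initial guess
for it = 1:100
    phi = f + K*phi;                            % Picard iteration on nodal values
end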

Cross characteristics of a non-linear equation in Matlab

I'd like to create a Matlab plot of propeller angular velocity in terms of applied current. The point is, this requires combining two interdependent sets of data.
Firstly, drag coefficient c_d depends on angular velocity omega (I have no formula, just data) as seen on the plot below - the characteristics c_d(omega) could be easily linearised as c_d(omega) = p*omega + p_0.
Secondly, omega depends not only on applied current i, but also on the drag coefficient c_d(omega).
A script that solves the case where c_d is constant is shown below. It must somehow be possible to join those two using Matlab commands. Thanks for any help.
%%Lookup table for drag coefficient c_d
c_d_lookup = [248.9188579 0.036688351; %[\omega c_d]
280.2300647 0.037199094;
308.6091183 0.037199094;
338.6636881 0.03779496;
365.8908244 0.038305703;
393.9557188 0.039156941;
421.9158934 0.039667683;
452.2846224 0.040348674;
480.663676 0.041199911;
511.032405 0.042051149;
538.9925796 0.042561892;
567.2669135 0.043242882;
598.4734005 0.043668501;
624.1297405 0.044264368;
651.9851954 0.044604863;
683.6105614 0.045200729];
subplot(2,1,1)
plot(c_d_lookup(:,1), c_d_lookup(:,2))
title('This is how c_d depends on \omega')
ylabel('c_d')
xlabel('\omega [rad/s]')
%%Calculate propeller angular speed in terms of applied current. omega
%%depends on c_d, which in turn depends on omega. The formula is:
% omega(i) = sqrt(a*i / (b * c_d(omega)))
% Where:
% i - applied current
% omega - propeller angular velocity
% a,b - coefficients
i = [1:15];
a = 0.0718;
b = 3.8589e-005;
%If c_d was constant, I'd do:
omega_i = sqrt(a .* i / (b * 0.042));
subplot(2,1,2)
plot(i, omega_i)
ylabel({'Propeller ang. vel.', '\omega [rad/s]'})
xlabel('Applied current i[A]')
title('Propeller angular velocity in terms of applied current')
EDIT:
Trying to follow bdecaf's solution. So I created a function c_d_find, like so:
function c_d = c_d_find(omega, c_d_lookup)
c_d = interp1(c_d_lookup(:,1), c_d_lookup(:,2), omega, 'linear', 'extrap');
end
I don't know anything about Matlab function handles, but seem to understand the idea... In Matlab command window I typed:
f = @(omega) omega - sqrt(a .* i / (b * c_d_find(omega, c_d_lookup)))
which I hope created the correct function handle. What do I do next? Executing the below doesn't work:
>> omega_consistent = fzero(f,0)
??? Operands to the || and && operators must be convertible to logical scalar
values.
Error in ==> fzero at 333
elseif ~isfinite(fx) || ~isreal(fx)
hmmm...
I wonder if I understand correctly, but it looks like you are looking for a consistent solution.
Your equations don't look too complicated; I would outline the solution like this:
Write a function c_d = c_d_find(omega) that does some interpolation or so.
Make a function handle like f = @(omega) omega - sqrt(a .* i / (b * c_d_find(omega))); this is zero for a consistent omega.
Calculate a consistent omega with omega_consistent = fzero(f, omega_0). A sketch putting these steps together follows below.
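A sketch of these steps combined (note: fzero needs a scalar-valued function, which is why the vector-valued handle in the question's EDIT, where i is the whole vector 1:15, triggers the error; here each current is solved separately, and the starting guess of 400 rad/s is an assumption based on the lookup table's range):
omega_consistent = zeros(size(i));
for k = 1:numel(i)
    f = @(omega) omega - sqrt(a * i(k) / (b * c_d_find(omega, c_d_lookup)));
    omega_consistent(k) = fzero(f, 400);   % consistent omega for current i(k)
end
plot(i, omega_consistent)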