I'm looking for a way to speed up some simple two-port matrix calculations. See the code example below for what I'm doing currently. In essence, I first create an [Nx1] frequency vector. I then loop through the frequency vector and create the [2x2] matrices H1 and H2 (both functions of f). After a bit of simple matrix math, including a matrix left division '\', I get my result pb as an [Nx1] vector. The problem is the loop: it takes a long time to calculate, and I'm looking for a way to improve the efficiency of the calculations. I tried assembling the problem using [2x2xN] transfer matrices, but the mtimes operation cannot handle 3-D multiplications.
Can anybody please give me an idea of how I can approach such a calculation without the need to loop through f?
Many thanks: svenr
% calculate frequency and wave number vector (k and the scalars rho, c, crad, mp are defined elsewhere)
f = linspace(20,200,400);
w = 2.*pi.*f;
% calculation for each frequency w
for i=1:length(w)
H1(i,1) = {[1, rho*c*k(i)^2 / (crad*pi); 0,1]};
H2(i,1) = {[1, 1i.*w(i).*mp; 0, 1]};
HZin(i,1) = {H1{i,1}*H2{i,1}};
temp_mat = HZin{i,1}*[1; 0];
Zin(i,1) = temp_mat(1,1)/temp_mat(2,1);
temp_mat= H1{i,1}\[1; 1/Zin(i,1)];
pb(i,1) = temp_mat(1,1); Ub(i,:) = temp_mat(2,1);
end
Assuming that length(w) == length(k) returns true, that rho, c, crad, and mp are all scalars, and that the last line is meant to be Ub(i,1) = temp_mat(2,1) rather than Ub(i,:) = temp_mat(2,1):
temp1 = repmat(eye(2),[1 1 length(w)]);   % [2x2xN] stack of identity matrices
temp2 = temp1;
temp1(1,2,:) = rho*c*(k.^2)/crad/pi;
temp2(1,2,:) = (1i.*w)*mp;
H1 = permute(num2cell(temp1,[1 2]),[3 2 1]);
H2 = permute(num2cell(temp2,[1 2]),[3 2 1]);
HZin = cellfun(@(a,b)(a*b),H1,H2,'UniformOutput',0);
temp_cell = cellfun(@(a,b)(a*b),HZin,repmat({[1;0]},length(w),1),'UniformOutput',0);
Zin_cell = cellfun(@(a)(a(1,1)/a(2,1)),temp_cell,'UniformOutput',0);
Zin = cell2mat(Zin_cell);
temp2_cell = cellfun(@(a)([1;1/a]),Zin_cell,'UniformOutput',0);
temp3_cell = cellfun(@(a,b)(pinv(a)*b),H1,temp2_cell,'UniformOutput',0);
temp4 = cell2mat(temp3_cell);
pb(:,1) = temp4(1:2:end-1);
Ub(:,1) = temp4(2:2:end);
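Alternatively, since the matrices are only 2x2, you can skip cell arrays entirely and write the page-wise products out element by element (a minimal sketch, assuming rho, c, crad, mp, k and w are defined as in the question; on R2020b or newer, pagemtimes(H1,H2) computes the same product in one call):

N = length(w);
H1 = repmat(eye(2),[1 1 N]);   % [2x2xN] identity stacks
H2 = H1;
H1(1,2,:) = reshape(rho*c*k.^2/(crad*pi), 1, 1, N);
H2(1,2,:) = reshape(1i.*w.*mp, 1, 1, N);
% page-wise product HZin = H1*H2, written out entry by entry:
HZin = zeros(2,2,N);
HZin(1,1,:) = H1(1,1,:).*H2(1,1,:) + H1(1,2,:).*H2(2,1,:);
HZin(1,2,:) = H1(1,1,:).*H2(1,2,:) + H1(1,2,:).*H2(2,2,:);
HZin(2,1,:) = H1(2,1,:).*H2(1,1,:) + H1(2,2,:).*H2(2,1,:);
HZin(2,2,:) = H1(2,1,:).*H2(1,2,:) + H1(2,2,:).*H2(2,2,:);
Zin = squeeze(HZin(1,1,:) ./ HZin(2,1,:));   % same as temp_mat(1,1)/temp_mat(2,1) in the loop

The remaining quantities (pb, Ub) can be recovered the same way, since the inverse of a 2x2 matrix can also be written out explicitly.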
I have a MATLAB function which calculates a Wavelet Bispectrum. In it, a matrix is initialised, and complex values are put into it after they are calculated in nested for loops. However, when I time the entire function, I see that a third of the time (~1000 s) is spent adding these values to the matrix.
The matrix size is 1025 x 1025, so about a million values are input. If I generate random numbers and put them into a matrix in a similar way, it takes less than 1 second. So why is it taking so much time in my code?
Is anyone able to help me out?
I've included a sample of the code below. Putting the million or so values into the Bisp matrix is what takes over 1000 s. WT is a matrix containing complex values, and freq is a vector with one value for each row (1st dimension) of the WT matrix.
nfreq = 1025;
Bisp = nan(nfreq, nfreq);
Norm = nan(nfreq, nfreq);
for j = 1 : nfreq
for k = 1 : nfreq
f3 = freq(j) + freq(k);
if f3 <= freq(end)
idx3 = find(freq <= f3);
idx3 = idx3(end);
x = (WT(j, :) .* WT(k, :)) .* conj(WT(idx3, :));
bb = nansum(x);
norm1 = nansum(abs(WT(j, :) .* WT(k, :)).^2);
norm2 = nansum(abs(WT(idx3, :).^2));
a = sqrt(norm1 * norm2);
Norm(j, k) = a;
Bisp(j, k) = bb; % this line takes about 1000 seconds in total
end
end
end
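One thing worth checking, independent of the assignment itself (a hedged suggestion, assuming freq is sorted in ascending order): the find over the whole freq vector runs roughly a million times and may dominate the timing. It can be reduced to a single 'last' lookup:

% inside the inner loop, assuming freq is sorted ascending:
idx3 = find(freq <= f3, 1, 'last');   % replaces: idx3 = find(freq <= f3); idx3 = idx3(end)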
I asked this question on Math StackExchange, but it didn't get enough attention there, so I am asking it here. https://math.stackexchange.com/questions/1729946/why-do-we-say-svd-can-handle-singular-matrx-when-doing-least-square-comparison?noredirect=1#comment3530971_1729946
I learned from some tutorials that SVD should be more stable than QR decomposition when solving least-squares problems, and that it is able to handle singular matrices. But the following example, which I wrote in MATLAB, seems to support the opposite conclusion. I don't have a deep understanding of SVD, so if you could look at my questions in the old post on Math StackExchange and explain them to me, I would appreciate it a lot.
I use a matrix that has a large condition number (~1e+13). The result shows SVD getting a much larger error (0.8) than QR (~1e-27).
% we do a linear regression between Y and X
data= [
47.667483331 -122.1070832;
47.667483331001 -122.1070832
];
X = data(:,1);
Y = data(:,2);
X_1 = [ones(length(X),1),X];
%%
%SVD method
[U,D,V] = svd(X_1,'econ');
beta_svd = V*diag(1./diag(D))*U'*Y;
%% QR method (one can also use the "\" operator here, which gives the same result, as I tested; I wrote out the backward substitution to educate myself)
[Q,R] = qr(X_1);
% now do backward substitution
[nr, nc] = size(R);
beta_qr = [];
Y_1 = Q'*Y;
for i = nc:-1:1
    s = Y_1(i);
    for j = nc:-1:i+1
        s = s - R(i,j)*beta_qr(j);
    end
    beta_qr(i) = s/R(i,i);
end
svd_error = 0;
qr_error = 0;
for i=1:length(X)
svd_error = svd_error + (Y(i) - beta_svd(1) - beta_svd(2) * X(i))^2;
qr_error = qr_error + (Y(i) - beta_qr(1) - beta_qr(2) * X(i))^2;
end
Your SVD-based approach is basically the same as the pinv function in MATLAB (see Pseudo-inverse and SVD). What you are missing, though (for numerical reasons), is a tolerance value such that any singular values less than this tolerance are treated as zero.
If you refer to edit pinv.m, you can see something like the following (I won't post the exact code here because the file is copyrighted by MathWorks):
[U,S,V] = svd(A,'econ');
s = diag(S);
tol = max(size(A)) * eps(norm(s,inf));
% .. use above tolerance to truncate singular values
invS = diag(1./s);
out = V*invS*U';
In fact, pinv has a second syntax, pinv(A,tol), where you can explicitly specify the tolerance value if the default one is not suitable...
So when solving a least-squares problem of the form minimize norm(A*x-b), you should understand that the pinv and mldivide solutions have different properties:
x = pinv(A)*b is characterized by the fact that norm(x) is smaller than the norm of any other solution.
x = A\b has the fewest possible nonzero components (i.e., it is sparse).
Using your example (note that rcond(A) is very small, near machine epsilon):
data = [
47.667483331 -122.1070832;
47.667483331001 -122.1070832
];
A = [ones(size(data,1),1), data(:,1)];
b = data(:,2);
Let's compare the two solutions:
x1 = A\b;
x2 = pinv(A)*b;
First you can see how mldivide returns a solution x1 with one zero component (this is obviously a valid solution because both equations can be satisfied by setting the second coefficient to zero, as in b + a*0 = b):
>> sol = [x1 x2]
sol =
-122.1071 -0.0537
0 -2.5605
Next you see how pinv returns a solution x2 with a smaller norm:
>> nrm = [norm(x1) norm(x2)]
nrm =
122.1071 2.5611
Here is the error of both solutions, which is acceptably small:
>> err = [norm(A*x1-b) norm(A*x2-b)]
err =
1.0e-11 *
0 0.1819
Note that using mldivide, linsolve, or qr will give pretty much the same results:
>> x3 = linsolve(A,b)
Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND = 2.159326e-16.
x3 =
-122.1071
0
>> [Q,R] = qr(A); x4 = R\(Q'*b)
x4 =
-122.1071
0
SVD can handle rank deficiency. The diagonal matrix D has a near-zero element in your code, so you need to use the pseudoinverse with SVD, i.e. set the 2nd element of 1./diag(D) to 0 instead of the huge value (~1e14). You should find that SVD and QR have equally good accuracy in your example. For more information, see this document: http://www.cs.princeton.edu/courses/archive/fall11/cos323/notes/cos323_f11_lecture09_svd.pdf
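A minimal sketch of that truncation, reusing the question's variables (treating the pinv-style default quoted earlier as the right cutoff here is an assumption):

[U,D,V] = svd(X_1,'econ');
s = diag(D);
tol = max(size(X_1)) * eps(max(s));    % pinv-style default tolerance (assumed)
sInv = zeros(size(s));
sInv(s > tol) = 1./s(s > tol);         % zero out the near-singular directions
beta_svd = V * diag(sInv) * (U' * Y);  % truncated-SVD least-squares solution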
Try this SVD version called block SVD - you just set the number of iterations according to the accuracy you want - usually 1 is enough. If you want all the factors (a default number is selected here for factor reduction), then edit the k = ... line to size(A,2), if I recall my MATLAB correctly.
A= randn(100,5000);
A=corr(A);
% A is your correlation matrix
tic
k = 1000; % number of factors to extract
bsize = k +50;
block = randn(size(A,2),bsize);
iter = 2; % could set via tolerance
[block,R] = qr(A*block,0);
for i=1:iter
[block,R] = qr(A*(A'*block),0);
end
M = block'*A;
% Economy size dense SVD.
[U,S] = svd(M,0);
U = block*U(:,1:k);
S = S(1:k,1:k);
% Note SVD of a symmetric matrix is:
% A = U*S*U' since V=U in this case, S=eigenvalues, U=eigenvectors
V=real(U*sqrt(S)); %scaling matrix for simulation
toc
% reduced randomized matrix for simulation
sims = 2000;
randnums = randn(k,sims);
corrrandnums = V*randnums;
est_corr_matrix = corr(corrrandnums');
total_corrmatrix_difference = sum(sum(est_corr_matrix-A))
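If you want to sanity-check the factors, you can compare the leading singular values against MATLAB's built-in partial SVD, svds (a quick check under the defaults above, not part of the timed code):

s_builtin = svds(A, 10);           % ten largest singular values of A
s_block   = diag(S(1:10,1:10));    % the same quantities from the block SVD above
max(abs(s_builtin - s_block))      % should be small once iter is large enough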
I have a set of three vectors (stored in a 3xN matrix) which are 'entangled' (e.g. some values in the second row should be in the third row and vice versa). This 'entanglement' can be seen in the figure in which alpha2 is plotted. To separate the vectors I use a difference-based approach, where I compare each value with the three values at the next index (e.g. comparing (1,i) with (:,i+1)). Then I take the minimum and store that. The method works for separating two of the three vectors, but not the last.
I was wondering if you can share your ideas on how to solve this problem (if possible). I have added my code below.
Thanks in advance!
Problem in figures: (images omitted; the plots can be reproduced with the code below)
clear all; close all; clc;
%%
alpha2 = [-23.32 -23.05 -22.24 -20.91 -19.06 -16.70 -13.83 -10.49 -6.70;
-0.46 -0.33 0.19 2.38 5.44 9.36 14.15 19.80 26.32;
-1.58 -1.13 0.06 0.70 1.61 2.78 4.23 5.99 8.09];
%%% Original
figure()
hold on
plot(alpha2(1,:))
plot(alpha2(2,:))
plot(alpha2(3,:))
%%% Store start values
store1(1,1) = alpha2(1,1);
store2(1,1) = alpha2(2,1);
store3(1,1) = alpha2(3,1);
for i=1:size(alpha2,2)-1
for j=1:size(alpha2,1)
Alpha1(j,i) = abs(store1(1,i)-alpha2(j,i+1));
Alpha2(j,i) = abs(store2(1,i)-alpha2(j,i+1));
Alpha3(j,i) = abs(store3(1,i)-alpha2(j,i+1));
[~, I] = min(Alpha1(:,i));
store1(1,i+1) = alpha2(I,i+1);
[~, I] = min(Alpha2(:,i));
store2(1,i+1) = alpha2(I,i+1);
[~, I] = min(Alpha3(:,i));
store3(1,i+1) = alpha2(I,i+1);
end
end
%%% Plot to see if separation worked
figure()
hold on
plot(store1)
plot(store2)
plot(store3)
Solution using extrapolation via polyfit:
The idea is pretty simple: iterate over all positions i and use polyfit to fit polynomials of degree d to the d+1 values from F(:,i-d) up to F(:,i). Use those polynomials to extrapolate the function values F(:,i+1). Then compute the permutation of the real values F(:,i+1) that fits those extrapolations best. This should work quite well if there are only a few functions involved. There is certainly some room for improvement, but for your simple setting it should suffice.
function F = untangle(F, maxExtrapolationDegree)
%// UNTANGLE(F) untangles the functions F(i,:) via extrapolation.
if nargin<2
maxExtrapolationDegree = 4;
end
extrapolate = @(f) polyval(polyfit(1:length(f),f,length(f)-1),length(f)+1);
extrapolateAll = @(F) cellfun(extrapolate, num2cell(F,2));
fitCriterion = @(X,Y) norm(X(:)-Y(:),1);
nFuncs = size(F,1);
nPoints = size(F,2);
swaps = perms(1:nFuncs);
errorOfFit = zeros(1,size(swaps,1));
for i = 1:nPoints-1
nextValues = extrapolateAll(F(:,max(1,i-(maxExtrapolationDegree+1)):i));
for j = 1:size(swaps,1)
errorOfFit(j) = fitCriterion(nextValues, F(swaps(j,:),i+1));
end
[~,j_bestSwap] = min(errorOfFit);
F(:,i+1) = F(swaps(j_bestSwap,:),i+1);
end
Initial solution (not that pretty - skip this part):
This is a similar solution that tries to minimize the sum of the derivatives, up to some degree, of the vector-valued function F = @(j) alpha2(:,j). It does so by stepping through the positions i and checking all possible permutations of the coordinates at position i to get a minimal seminorm of the function F(:,1:i).
(I'm actually wondering right now if there is any canonical mathematical way to define the seminorm so that we get the expected results... I initially went for the H^1 and H^2 seminorms, but they didn't quite work...)
function F = untangle(F)
nFuncs = size(F,1);
nPoints = size(F,2);
seminorm = @(x,i) sum(sum(abs(diff(x(:,1:i),1,2)))) + ...
sum(sum(abs(diff(x(:,1:i),2,2)))) + ...
sum(sum(abs(diff(x(:,1:i),3,2)))) + ...
sum(sum(abs(diff(x(:,1:i),4,2))));
doSwap = @(x,swap,i) [x(:,1:i-1), x(swap,i:end)];
swaps = perms(1:nFuncs);
normOfSwap = zeros(1,size(swaps,1));
for i = 2:nPoints
for j = 1:size(swaps,1)
normOfSwap(j) = seminorm(doSwap(F,swaps(j,:),i),i);
end
[~,j_bestSwap] = min(normOfSwap);
F = doSwap(F,swaps(j_bestSwap,:),i);
end
Usage:
The command alpha2 = untangle(alpha2); will untangle your functions:
It should even work for more complicated data, like these shuffled sine-waves:
nPoints = 100;
nFuncs = 5;
t = linspace(0, 2*pi, nPoints);
F = bsxfun(@(a,b) sin(a*b), (1:nFuncs).', t);
for i = 1:nPoints
F(:,i) = F(randperm(nFuncs),i);
end
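A minimal usage sketch for this test data (the plotting is just an assumption about how you would inspect the result):

F = untangle(F);        % with the default maxExtrapolationDegree
figure; plot(t, F.');   % each row should now be one smooth sine wave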
Remark: I guess that if you already know your functions will be quadratic or of some other special form, RANSAC would be a better idea for larger numbers of functions. It could also be useful if the functions are not given with the same x-value spacing.
I present my simple working Matlab code and will ask questions:
tic
nrand1 = 10000;
nrand2 = 20000;
% Location matrix 1: [longitude, latitude, w1]
lmat1=[rand(nrand1,1)-75 rand(nrand1,1)+39 round(rand(nrand1,1)*1000)+1];
% Location matrix 2: [longitude, latitude, w2]
lmat2=[rand(nrand2,1)-75 rand(nrand2,1)+39 round(rand(nrand2,1)*100)+1];
% The number of rows in each matrix (the number of pairs is nrand1 X nrand2)
nobs1 = size(lmat1,1);
nobs2 = size(lmat2,1);
% The number of pair-wise distances
% between L1 locations X L2 locations
ndist = nobs1*nobs2;
% Initialization: Distance vector and weight vector
hdist = zeros(ndist,1);
weight = zeros(ndist,1);
% Double for loop -- for calculating the pair-wise distances and weights
k=1;
for i=1:nobs1
for j=1:nobs2
% distances in kilometers.
lonH = sin(0.5*(lmat1(i,1)-lmat2(j,1))*pi/180.0)^2;
latH = sin(0.5*(lmat1(i,2)-lmat2(j,2))*pi/180.0)^2;
hdist(k) = 0.001*6372797.560856*2 ...
*asin(sqrt(latH+(cos(lmat1(i,2)*pi/180.0) ...
*cos(lmat2(j,2)*pi/180.0))*lonH));
weight(k) = lmat1(i,3)*lmat2(j,3);
k=k+1;
end
end
toc
The code calculates 10000 X 20000 distances and weights.
Elapsed time is 67.124844 seconds.
Is there a way to vectorize the double-loop processing, or to perform the computation in parallel? If there is no room for performance improvement in MATLAB, I may have to write the double loop in C and call it from MATLAB. I don't know how to call C from MATLAB, so I will ask that as a separate question. Thanks!
Using bsxfun, you can eliminate the for loops and the need for calculating matrices for each combination (this should reduce memory usage). The following is about six times faster than your original code on my computer using R2014b:
nrand1 = 10000;
nrand2 = 20000;
% Location matrix 1: [longitude, latitude, w1]
lmat1=[rand(nrand1,1)-75 rand(nrand1,1)+39 round(rand(nrand1,1)*1000)+1];
% Location matrix 2: [longitude, latitude, w2]
lmat2=[rand(nrand2,1)-75 rand(nrand2,1)+39 round(rand(nrand2,1)*100)+1];
p180 = pi/180;
lonH = sin(0.5*bsxfun(@minus,lmat1(:,1).',lmat2(:,1))*p180).^2;
latH = sin(0.5*bsxfun(@minus,lmat1(:,2).',lmat2(:,2))*p180).^2;
hdist = 0.001*6372797.560856*2*asin(sqrt(latH+bsxfun(@times,cos(lmat1(:,2).'*p180),cos(lmat2(:,2)*p180)).*lonH));
hdist1 = hdist(:);
weight1 = bsxfun(@times,lmat1(:,3).',lmat2(:,3));
weight1 = weight1(:);
Note that by using the variable p180, the math is changed slightly so you won't get precisely the same values, but they will be very close.
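On R2016b or newer, implicit expansion makes the bsxfun calls unnecessary; a sketch of the equivalent of the three computation lines above:

lonH  = sin(0.5*(lmat1(:,1).' - lmat2(:,1))*p180).^2;
latH  = sin(0.5*(lmat1(:,2).' - lmat2(:,2))*p180).^2;
hdist = 0.001*6372797.560856*2*asin(sqrt(latH + ...
        cos(lmat1(:,2).'*p180).*cos(lmat2(:,2)*p180).*lonH));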
The solution is that your inputs (lmat1 and lmat2) do not need to be matrices as you have them. Each one is really three vectors. Once you've broken out the vectors, you can create arrays that contain every combination of the lmat1 and lmat2 entries (which is what your double loop is doing). At that point, you can perform your math as single, fully vectorized operations...
%make your vectors
lmat1A = rand(nrand1,1)-75;
lmat1B = rand(nrand1,1)+39;
lmat1C = round(rand(nrand1,1)*1000)+1;
lmat2A = rand(nrand2,1)-75;
lmat2B = rand(nrand2,1)+39;
lmat2C = round(rand(nrand2,1)*100)+1;
%make every combination
lmat1A = lmat1A(:)*ones(1,nrand2);
lmat1B = lmat1B(:)*ones(1,nrand2);
lmat1C = lmat1C(:)*ones(1,nrand2);
lmat2A = ones(nrand1,1)*(lmat2A(:)');
lmat2B = ones(nrand1,1)*(lmat2B(:)');
lmat2C = ones(nrand1,1)*(lmat2C(:)');
%do your math
lonH = sin(0.5*(lmat1A-lmat2A)*pi/180.0).^2;
latH = sin(0.5*(lmat1B-lmat2B)*pi/180.0).^2;
hdist = 0.001*6372797.560856*2 ...
.*asin(sqrt(latH+(cos(lmat1B*pi/180.0) ...
.*cos(lmat2B*pi/180.0)).*lonH)); %use element-wise multiplication
weight = lmat1C.*lmat2C;
%reshape your output into vectors (not arrays), which is what your original code does
lonH = lonH(:);
latH = latH(:);
hdist = hdist(:);
weight = weight(:);
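The question also asked about parallel computing. Here is a hedged parfor variant of the original double loop (it requires the Parallel Computing Toolbox; the row-sliced output shape is my restructuring, so reshape with (:) at the end if you need column vectors):

hdist = zeros(nobs1, nobs2);          % one row per lmat1 location
parfor i = 1:nobs1
    lonH = sin(0.5*(lmat1(i,1) - lmat2(:,1))*pi/180).^2;
    latH = sin(0.5*(lmat1(i,2) - lmat2(:,2))*pi/180).^2;
    hdist(i,:) = (0.001*6372797.560856*2) * ...
        asin(sqrt(latH + cos(lmat1(i,2)*pi/180).*cos(lmat2(:,2)*pi/180).*lonH)).';
end
weight = lmat1(:,3) * lmat2(:,3).';   % outer product of the weights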
I need to perform some elementary histogram matching on 2 sets of 3D data. This is part of a larger algorithm.
My goal is to perform this by minimising the following cost function:
|| cumpdf(f(A)) - cumpdf(B) ||^2
where:
cumpdf is the cumulative histogram,
f() is the linear transformation a*A + b, where a and b are the affine coefficients to be determined,
A is the image to be transformed and B is the image to be matched.
I am using lsqcurvefit; however, I have run into some trouble and really need some help.
A(maskA==0)=0;
B(maskB==0)=0;
[na,~] = hist(A(maskA~=0),500);
na = na ./ numel(A(maskA~=0));
x_data = cumsum(na);
[nb,~] = hist(B(maskB~=0),500);
nb = nb ./ numel(B(maskB~=0));
y_data = cumsum(nb);
xo = [1.5 -200];
[coeff,~] = lsqcurvefit(@cost,xo,x_data,y_data);
function F = cost(x,xc)
C = x(1).*A + x(2);               % transformed image f(A); A is taken from the enclosing workspace
[nc,~] = hist(C(maskA~=0),500);
nc = nc / numel(C(maskA~=0));
F = cumsum(nc);                   % cumulative histogram of f(A)
maskA and maskB just represent some indexing I need to do.
My question is: I know that the above is wrong. However, I think it best represents what I want to do, regarding the cost function and the goal. Some help would be much appreciated!
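One hedged way to restructure the fit (the shared bin centres and the capture of A, B, maskA, maskB through anonymous functions are my assumptions, not a verified solution): compare both cumulative histograms on one fixed set of bin centres, so that the residual lsqcurvefit minimises is well defined:

valsA = A(maskA~=0);
valsB = B(maskB~=0);
centers = linspace(min(valsB), max(valsB), 500);          % common support for both histograms
y_data  = cumsum(hist(valsB, centers)) / numel(valsB);    % cumpdf(B)
model   = @(x, xdata) cumsum(hist(x(1)*valsA + x(2), centers)) / numel(valsA);  % cumpdf(f(A))
coeff   = lsqcurvefit(model, [1.5 -200], [], y_data);     % xdata is unused, so [] is passed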