Matlab: Euclidean norm (or difference) between two vectors

I'd like to calculate the Euclidean distance between a vector G and each row of an array C, dividing each element of the difference by the corresponding entry of a vector GSD. What I've done seems very inefficient. What's my biggest overhead?
Could I speed it up?
m=1E7;
G=1E5*rand(1,8);
C=1E5*[zeros(m,1),rand(m,8)];
GSD=10*rand(1,8);
%I've taken the log10 of the values because G and C are very large in magnitude.
%Don't know if it's worth it.
for i=1:m
dG(i,1)=norm((log10(G)-log10(C(i,2:end)))/log10(GSD));
end
Using the examples from the answers below, they don't all give the same answer; in fact none of them agree (the original post included a figure comparing the following variants):
dG = pdist2(log10(G),log10(C(:,2:end)),'mahalanobis',diag(log10(GSD))); %(1)
dG = sqrt(sum((log10(G)-log10(C(:,2:end))./log10(GSD)).^2,2));
tmp=bsxfun(@rdivide,bsxfun(@minus,log10(G),log10(C(:,2:end))),log10(GSD)); %(4)
dG = sqrt(sum(tmp.^2,2));

You can use pdist2(x,y) to calculate the pairwise distance between all rows of x and y, so your example would be something like
dG = pdist2(log10(G),log10(C(:,2:end)),'mahalanobis',diag(log10(GSD)).^2);
where the pair 'mahalanobis',diag(log10(GSD)).^2 puts log10(GSD) as weights on the Euclidean distance, which is known as the Mahalanobis distance.
Note that the Mahalanobis distance was originally intended for normalising data, so it is the "covariance" that has to be passed as the fourth input; MATLAB then takes its Cholesky decomposition internally (an element-wise square root when the matrix is diagonal, as here).
Implicit expansion
In newer MATLAB releases (R2016b and later), one can also just use implicit expansion, since the first operand is a single row vector.
dG = sqrt(sum(((log10(G)-log10(C(:,2:9)))./log10(GSD)).^2,2));
which is probably a tad faster; I do, however, prefer the pdist2 solution as I find it clearer.
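A quick way to convince yourself that the two forms agree is a small comparison (a sketch, assuming the Statistics and Machine Learning Toolbox for pdist2; w stands in for log10(GSD) and must be nonzero):
g = rand(1,8);                                  % stands in for log10(G)
X = rand(5,8);                                  % stands in for log10(C(:,2:end))
w = 1 + rand(1,8);                              % positive weights, stand in for log10(GSD)
d1 = pdist2(g, X, 'mahalanobis', diag(w.^2));   % 1-by-5 row vector
d2 = sqrt(sum(((g - X)./w).^2, 2));             % 5-by-1 column vector, R2016b and later
max(abs(d1(:) - d2(:)))                         % essentially zero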

Floating point arithmetic should handle the large magnitude of the input data: up to a certain point with single-precision data, and for any reasonable value with double-precision data:
realmax('single')
ans =
3.4028e+38
realmax('double')
ans =
1.7977e+308
With 1e7 values in the +/- 1e5 range, you may expect the square of a Euclidean distance to be at most around 1e17 (each squared term is about 1e10, and there are at most 1e7 of them: 5+5+7), which both formats handle with ease.
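A one-line back-of-the-envelope check of that estimate (a sketch):
sum((1e5*rand(1e7,1)).^2)   % roughly 3e16, comfortably below realmax('double')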
In any case, you should vectorize the code to remove the loop (which MATLAB has a history of handling inefficiently, especially in older versions).
With new versions (2016b and later), simply use:
tmp=(log10(G)-log10(C(:,2:end)))./log10(GSD);
dG = sqrt(sum(tmp.^2,2)); %row-by-row norm
Note that you have to use ./ which is element-wise division, not / which is matrix right division.
The following code will work in any version:
tmp=bsxfun(@rdivide,bsxfun(@minus,log10(G),log10(C(:,2:end))),log10(GSD));
dG = sqrt(sum(tmp.^2,2)); %row-by-row norm
I however believe that the use of log10 is a mathematical error: the result dG will not be the Euclidean norm you are after. You should stick with the square root of the sum of squares of the weighted difference:
dG = sqrt(sum(bsxfun(@rdivide,bsxfun(@minus,G,C(:,2:end)),GSD).^2,2)); % all versions
dG = sqrt(sum(((G-C(:,2:end))./GSD).^2,2)); %R2016b and later
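Since the question asks where the overhead is, here is a rough timing sketch (row count reduced from 1e7 so it runs quickly; the loop below preallocates dG, unlike the original code):
m = 1e5;
G   = 1e5*rand(1,8);
C   = 1e5*[zeros(m,1), rand(m,8)];
GSD = 10*rand(1,8);
tic
dG1 = zeros(m,1);                  % preallocation alone removes a large overhead
for i = 1:m
    dG1(i) = norm((G - C(i,2:end))./GSD);
end
toc
tic
dG2 = sqrt(sum(((G - C(:,2:end))./GSD).^2, 2));   % R2016b and later
toc
max(abs(dG1 - dG2))   % essentially zero (round-off only)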

Related

Speed up calculation in Physics simulation in Matlab

I am working on an MR physics simulation written in MATLAB which simulates the Bloch equations on a defined object. The magnetisation in the object is updated every time step with the following function:
function Mt = evolveMtrans(gamma, delta_B, G, T2, Mt0, delta_t)
% this function calculates precession and relaxation of the
% transversal component, Mt, of M
delta_phi = gamma*(delta_B + G)*delta_t;
Mt = Mt0 .* exp(-delta_t*1./T2 - 1i*delta_phi);
end
This function is a very small part of the entire code but it is called up to 250,000 times, which slows down the entire simulation. I have thought about how I can speed up the calculation but haven't come up with a good solution. There is one line that is VERY time consuming and accounts for approximately 50%-60% of the overall simulation time. This is the line:
Mt = Mt0 .* exp(-delta_t*1./T2 - 1i*delta_phi);
where
Mt0 = 512x512 matrix
delta_t = a scalar
T2 = 512x512 matrix
delta_phi = 512x512 matrix
I would be very grateful for any suggestion to speed up this calculation.
More info below,
The function evolveMtrans is called every time step during the simulation.
The parameters that are used for calling the function are,
gamma = a constant (the gyromagnetic ratio)
delta_B = the magnetic field value
G = gradient strength
T2 = a 512x512 matrix with T2-values for the object
Mstart.r = a 512x512 matrix with the values M.r had at the last time step
delta_t = a scalar with the difference in time since the last calculated M.r
The only parameters of these that change during the simulation are G, Mstart.r and delta_t; the rest keep their values throughout.
Below is the part of the main code that calls the function.
% update phase and relaxation to calcTime
delta_t = calcTime - Mstart_t;
delta_B = (d-d0)*B0;
G = Sq.Gx*Sq.xGxref + Sq.Gz*Sq.zGzref;
% Precession around B0 (z-axis) and B1 (+-x-axis or +-y-axis)
% is defined clock-wise in a right hand system x, y, z and
% x', y', z (see the Bloch equation, Bloch 1946 and Levitt
% 1997). The x-axis has angle zero and the y-axis has angle 90.
% For flipping/precession around B1 in the xy-plane, z-axis has
% angle zero.
% For testing of precession direction:
% delta_phi = gamma*((ones(size(d)))*1e-6*B0)*delta_t;
M.r = evolveMtrans(gamma, delta_B, G, T2, Mstart.r, delta_t);
M.l = evolveMlong(T1, M0.l, Mstart.l, delta_t);
This is not a surprise.
That "single line" is a matrix equation. With 512x512 matrices it is really 262,144 simultaneous element-wise equations.
Per Jannick, that first term means element-wise division, so "delta_t/T2[i,j]". Multiplying a matrix by a scalar is O(N^2). Matrix addition is O(N^2). Evaluating the element-wise exponential of an NxN matrix is also O(N^2).
I'm not sure if I saw a complex argument in there as well. Does that mean complex matrices with real and imaginary entries? Does your equation simplify to real and imaginary parts? That means twice the number of computations.
Your best hope is to exploit symmetry as much as possible. If all your matrices are symmetric, you cut your calculations roughly in half.
Use parallelization if you can.
Algorithm choice can make a big difference, too. If you're using explicit Euler integration, you may have time-step limitations due to stability concerns. Is that why you have 250,000 steps? Maybe a larger time step is possible with a more stable integration scheme. Think about a higher-order adaptive scheme with error control, like 5th-order Runge-Kutta.
There are several possibilities to improve the speed of the code, but all of them come with a caveat.
Numerical ODE integration
The first possibility would be to replace your analytical solution with a numerical differential-equation solver. This has several advantages:
The analytical solution includes the complex exponential function, which is costly to calculate, while the differential equation contains only multiplication and addition (du/dt = -a*u  =>  u = exp(-a*t)).
There are plenty of built-in solvers available in MATLAB and they are typically pretty fast (e.g. ode45). The built-ins, however, all use a variable step size; this improves speed and accuracy but would be a problem if you really need a fixed, equally spaced grid of time points. There are also unofficial fixed-step solvers available.
As a start you could also try a single forward Euler step by replacing
M.r = evolveMtrans(gamma, delta_B, G, T2, Mstart.r, delta_t);
by
delta_phi = gamma*(delta_B + G)*t_step;
M.r = M.r .* (1 - t_step*1./T2 - 1i*delta_phi); % forward Euler update
You can then further improve that by precalculating all constant values, e.g. one_over_T2 = 1./T2, and by moving the constant part of delta_phi out of the loop.
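A hedged sketch of that idea, with dummy values standing in for the real simulation inputs (all sizes and values here are assumptions, not the original code):
gamma   = 2.675e8;                  % dummy gyromagnetic ratio [rad/(s*T)]
delta_B = 1e-6*ones(512);           % dummy field-offset map
T2      = 0.05 + 0.05*rand(512);    % dummy T2 map [s]
Mr      = ones(512);                % dummy starting transverse magnetisation
t_step  = 1e-5;
n_steps = 100;
one_over_T2 = 1./T2;                % computed once instead of every step
phi_const   = gamma*delta_B*t_step; % constant part of delta_phi
for step = 1:n_steps
    G  = 1e-3*sin(step);            % dummy value standing in for the real gradient
    delta_phi = phi_const + gamma*G*t_step;
    Mr = Mr .* (1 - t_step*one_over_T2 - 1i*delta_phi);   % forward Euler update
end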
Caveat:
You are bound to a sufficiently small step size or the accuracy suffers. Therefore this is only a good idea if your time spacing is quite fine.
Fewer points in time
You should carefully analyze whether you really need so many points in time. It seems somewhat puzzling to me that you need so many. As you know the full analytical solution, you can freely choose how to sample the time and maybe use this to your advantage.
Going Fortran
This might seem like a big step, but in my experience basic MATLAB code (simple loops, matrix operations etc.) can be translated to Fortran line by line relatively easily. This would be especially helpful in addition to my first point. If you still want to use the full analytical solution, there is probably not much to gain here, because exp is already pretty fast in MATLAB.

Generate random samples from arbitrary discrete probability density function in Matlab

I've got an arbitrary probability density function discretized as a matrix in MATLAB; that means that for every pair (x,y) the probability is stored in the matrix:
A(x,y) = probability
This is a 100x100 matrix, and I would like to be able to generate random samples of two dimensions (x,y) out of this matrix and also, if possible, to be able to calculate the mean and other moments of the PDF. I want to do this because after resampling, I want to fit the samples to an approximated Gaussian Mixture Model.
I've been looking everywhere but I haven't found anything as specific as this. I hope you may be able to help me.
Thank you.
If you really have a discrete probability density function defined by A (as opposed to a continuous probability density function that is merely described by A), you can "cheat" by turning your 2D problem into a 1D problem.
%define the possible values for the (x,y) pair
row_vals = [1:size(A,1)]'*ones(1,size(A,2)); %all x values
col_vals = ones(size(A,1),1)*[1:size(A,2)]; %all y values
%convert your 2D problem into a 1D problem
A = A(:);
row_vals = row_vals(:);
col_vals = col_vals(:);
%calculate your fake 1D CDF, assumes sum(A(:))==1
CDF = cumsum(A); %remember, the first term out of cumsum is not zero
%because of the operation we're doing below (interp1 followed by ceil)
%we need the CDF to start at zero
CDF = [0; CDF(:)];
%generate random values
N_vals = 1000; %give me 1000 values
rand_vals = rand(N_vals,1); %spans zero to one
%look into CDF to see which index the rand val corresponds to
out_val = interp1(CDF,[0:1/(length(CDF)-1):1],rand_vals); %spans zero to one
ind = ceil(out_val*length(A));
%using the inds, you can lookup each pair of values
xy_values = [row_vals(ind) col_vals(ind)];
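As a quick sanity check (a sketch, assuming the code above has run and sum(A(:))==1), you can histogram the drawn linear indices and compare them with the target probabilities:
counts    = accumarray(ind, 1, [numel(A) 1]);   % how often each cell was drawn
empirical = counts / N_vals;
max(abs(empirical - A))                         % should shrink as N_vals grows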
I hope that this helps!
Chip
I don't believe MATLAB has built-in functionality for generating multivariate random variables with an arbitrary distribution. As a matter of fact, the same is true for univariate random numbers. But while the latter can easily be generated from the cumulative distribution function, no single invertible CDF exists for multivariate distributions, so generating such numbers is much messier (the main problem being that two or more variables can be correlated). So this part of your question is far beyond the scope of this site.
Since half an answer is better than no answer, here's how you can compute the mean and higher moments numerically using MATLAB:
%generate some dummy input
xv=linspace(-50,50,101);
yv=linspace(-30,30,100);
[x y]=meshgrid(xv,yv);
%define a discretized two-hump Gaussian distribution
A=floor(15*exp(-((x-10).^2+y.^2)/100)+15*exp(-((x+25).^2+y.^2)/100));
A=A/sum(A(:)); %normalized to sum to 1
%plot it if you like
%figure;
%surf(x,y,A)
%actual half-answer starts here
%get normalized pdf
weight=trapz(xv,trapz(yv,A));
A=A/weight; %A normalized to 1 according to trapz^2
%mean
mean_x=trapz(xv,trapz(yv,A.*x));
mean_y=trapz(xv,trapz(yv,A.*y));
So, the point is that you can perform a double integral on a rectangular mesh using two consecutive calls to trapz. This allows you to compute the integral of any quantity that has the same shape as your mesh, but a drawback is that vector components have to be computed independently. If you only wish to compute things which can be parametrized with x and y (which are naturally the same size as your mesh), then you can get along without any additional thinking.
You could also define a function for the integration:
function res=trapz2(xv,yv,A,arg)
if ~isscalar(arg) && any(size(arg)~=size(A))
error('Size of A and arg must be the same!')
end
res=trapz(xv,trapz(yv,A.*arg));
end
This way you can compute stuff like
weight=trapz2(xv,yv,A,1);
mean_x=trapz2(xv,yv,A,x);
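The same pattern extends to second moments (a sketch, reusing x, y, A, mean_x and mean_y from above):
var_x  = trapz2(xv,yv,A,(x-mean_x).^2);          % variance in x
var_y  = trapz2(xv,yv,A,(y-mean_y).^2);          % variance in y
cov_xy = trapz2(xv,yv,A,(x-mean_x).*(y-mean_y)); % covariance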
NOTE: the reason I used a 101x100 mesh in the example is that the double call to trapz should be performed in the proper order. If you interchange xv and yv in the calls, you get the wrong answer due to inconsistency with the definition of A, but this will not be evident if A is square. I suggest avoiding symmetric quantities during the development stage.

Find approximation of sine using least squares

I am doing a project where I find an approximation of the sine function using the least squares method. I can also use 12 values of my own choice. Since I couldn't figure out how to solve it, I thought of using the Taylor series for sine and then solving it as a polynomial of order 5. Here is my code:
%% Find the sine of the 12 known values
x=[0,pi/8,pi/4,7*pi/2,3*pi/4,pi,4*pi/11,3*pi/2,2*pi,5*pi/4,3*pi/8,12*pi/20];
y=zeros(12,1);
for i=1:12
y=sin(x);
end
n=12;
j=5;
%% Find the sums to populate the matrix A and matrix B
s1=sum(x);s2=sum(x.^2);
s3=sum(x.^3);s4=sum(x.^4);
s5=sum(x.^5);s6=sum(x.^6);
s7=sum(x.^7);s8=sum(x.^8);
s9=sum(x.^9);s10=sum(x.^10);
sy=sum(y);
sxy=sum(x.*y);
sxy2=sum( (x.^2).*y);
sxy3=sum( (x.^3).*y);
sxy4=sum( (x.^4).*y);
sxy5=sum( (x.^5).*y);
A=[n,s1,s2,s3,s4,s5;s1,s2,s3,s4,s5,s6;s2,s3,s4,s5,s6,s7;
s3,s4,s5,s6,s7,s8;s4,s5,s6,s7,s8,s9;s5,s6,s7,s8,s9,s10];
B=[sy;sxy;sxy2;sxy3;sxy4;sxy5];
Then in MATLAB I get this result:
>> a=A^-1*B
a =
-0.0248
1.2203
-0.2351
-0.1408
0.0364
-0.0021
However, when I try to substitute the values of a into the Taylor series and evaluate it at, e.g., t=pi/2, I get wrong results:
>> t=pi/2;
fun=t-t^3*a(4)+a(6)*t^5
fun =
2.0967
Am I doing something wrong when I substitute the values of a into the Taylor series, or is my initial approach flawed?
Note: I can't use any built-in function.
If you need a least-squares approximation, simply decide on a fixed interval that you want to approximate on and generate some x abscissae on that interval (possibly equally spaced abscissae using linspace - or non-uniformly spaced as you have in your example). Then evaluate your sine function at each point such that you have
y = sin(x)
Then simply use the polyfit function to obtain the least-squares parameters
b = polyfit(x,y,n)
where n is the degree of the polynomial you want to fit. You can then use polyval to evaluate your approximation at other values of x.
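For example (a sketch of that route on one period of the sine):
x = linspace(0, 2*pi, 12);    % 12 sample points of my own choice
y = sin(x);
b = polyfit(x, y, 5);         % degree-5 least-squares fit
polyval(b, pi/2)              % close to sin(pi/2) = 1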
EDIT: As you can't use polyfit you can generate the Vandermonde matrix for the least-squares approximation directly (the below assumes x is a row vector).
A = ones(length(x),1);
x = x';
for i=1:n
A = [A x.^i];
end
then simply obtain the least-squares parameters using
b = A\y(:); % y(:) forces a column vector so the dimensions match
You can clearly optimise the clumsy Vandermonde generation loop above; I have just written it that way to illustrate the concept. For better numerical stability you would also do better to use a nice orthogonal polynomial system, like Chebyshev polynomials of the first kind. If you are not even allowed to use the matrix divide \ operator, then you will need to code up your own implementation of a QR factorisation and solve the system that way (or some other numerically stable method).
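Once b has been obtained this way, the fitted polynomial can be evaluated without polyval, e.g. with Horner's scheme (a sketch; here b(1) is the constant term, as built above):
t = pi/2;
p = b(end);                    % coefficient of the highest power
for k = length(b)-1:-1:1
    p = p*t + b(k);            % Horner step
end
p                              % should be close to sin(pi/2) = 1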

svds not working for some matrices - wrong result

Here is my testing function:
function diff = svdtester()
y = rand(500,20);
[U,S,V] = svd(y);
%{
y = sprand(500,20,.1);
[U,S,V] = svds(y);
%}
diff_mat = y - U*S*V';
diff = mean(abs(diff_mat(:)));
end
There are two very similar parts: one finds the SVD of a random matrix, the other finds the SVD of a random sparse matrix. Regardless of which one you choose to comment (right now the second one is commented-out), we compute the difference between the original matrix and the product of its SVD components and return that average absolute difference.
When using rand/svd, the typical return (mean error) value is around 8.8e-16, basically zero. When using sprand/svds, the typical return value is around 0.07, which is fairly terrible considering the sparse matrix is 90% zeros to start with.
Am I misunderstanding how SVD should work for sparse matrices, or is something wrong with these functions?
Yes, the behavior of svds is a little bit different from svd. According to MATLAB's documentation:
[U,S,V] = svds(A,...) returns three output arguments, and if A is m-by-n:
U is m-by-k with orthonormal columns
S is k-by-k diagonal
V is n-by-k with orthonormal columns
U*S*V' is the closest rank k approximation to A
In fact, k defaults to 6, so you will get a rather rough approximation. To get a more exact approximation, specify k to be min(size(y)):
[U, S, V] = svds(y, min(size(y)))
and you will get an error of the same order of magnitude as in the case of svd.
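Applied to the sparse branch of the tester, that looks like this (a sketch):
y = sprand(500,20,0.1);
[U,S,V] = svds(y, min(size(y)));   % request all 20 singular values
diff_mat = y - U*S*V';
mean(abs(diff_mat(:)))             % ~1e-16, comparable to the dense svd case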
P.S. Also, MATLAB's documentation says:
Note svds is best used to find a few singular values of a large, sparse matrix. To find all the singular values of such a matrix, svd(full(A)) will usually perform better than svds(A,min(size(A))).

Duplicating a 2d matrix in matlab along a 3rd axis MANY times

I'm looking to duplicate a 784x784 matrix in MATLAB along a 3rd axis. The following code seems to work:
mat = reshape(repmat(mat, 1,10000),784,784,10000);
Unfortunately, it takes so long to run that it's worthless (changing the 10,000 to 1000 makes it take a few minutes, and using 10,000 practically freezes my whole machine). Is there a faster way to do this?
For reference, I'm looking to use mvnpdf on 10,000 vectors each of length 784, using the same covariance matrix for each. So my final call looks like
mvnpdf(X,mu,mat)
%size(X) = (10000,784), size(mu) = (10000,784), size(mat) = 784,784,10000
If there's a way to do this that's not repeating the covariance matrix 10,000 times, that'd be helpful too. Thanks!
For replication in more than 2 dimensions, you need to supply the replication counts as an array:
out = repmat(mat,[1,1,10000])
Creating 10,000 copies of a 784x784 matrix isn't going to take advantage of MATLAB's vectorization; that approach is more useful for small arrays. Avoiding a for loop also won't help much here, given the following:
The main speedup you can gain here is by computing the inverse of the covariance matrix once, and then computing the pdf yourself. Inverting sigma takes O(n^3), and you are needlessly doing that 10,000 times. (Also, the square root of the determinant can be precomputed.) For reference, the PDF of the multivariate normal distribution is computed as follows:
http://en.wikipedia.org/wiki/Multivariate_normal_distribution#Properties
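Written out (so the code below is easier to follow), the density is
p(x) = (2*pi)^(-k/2) * det(Sigma)^(-1/2) * exp(-0.5*(x-mu)'*inv(Sigma)*(x-mu))
so its logarithm is a constant (depending only on k and Sigma) minus half of the quadratic form (x-mu)'*inv(Sigma)*(x-mu).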
Better to just compute the inverse once, then compute z = x - mu for each value, then compute z'*S*z for each pdf value, and finally apply a simple function and a constant. But wait! You can vectorize that, too.
I don't have MATLAB in front of me, but this is basically what you need to do, and it'll run in an instant.
s = inv(sigma);
k = size(x,2);                              % dimension, 784 here
c = 0.5*log(det(s)) - (k/2)*log(2*pi);      % log of the normalising constant
z = x - mu;                                 % 10000 x 784 matrix
p = exp( c - 0.5 .* dot(z*s, z, 2) );       % 10000 x 1 vector of pdf values
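A small-scale check (a sketch, assuming mvnpdf from the Statistics and Machine Learning Toolbox; sizes reduced so mvnpdf itself is cheap):
k     = 5;
n     = 1000;
x     = randn(n,k);
mu    = zeros(n,k);
sigma = cov(randn(200,k));            % some positive-definite covariance
s = inv(sigma);
c = 0.5*log(det(s)) - (k/2)*log(2*pi);
z = x - mu;
p = exp(c - 0.5*dot(z*s, z, 2));
max(abs(p - mvnpdf(x, mu, sigma)))    % agrees up to floating point error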