I wish to plot the heat map of a bivariate (independent) Gaussian. To plot it over a 2D square, I did
joint_pdf = @(m, s) normpdf(m, 1, 1)*normpdf(s, 1, 1);
[x, y] = meshgrid(0:0.1:10, 0:0.1:10);
prob_map = zeros(numel(x), numel(y));
for idx1 = 1:size(prob_map, 1)
for idx2 = 1:size(prob_map, 2)
prob_map(idx1, idx2) = joint_pdf(x(idx1), y(idx2));
end
end
image(prob_map);
This is very very slow. Is there a way of avoiding the looping?
One can hack into normpdf.m and compute all the elements of prob_map in a vectorized manner, avoiding all those function calls, which makes it much more efficient. I like to call this hacked approach the "raw version" of normpdf's implementation. Here's the final code:
%// Define arrays for inputting into meshgrid
array1 = 0:0.1:10;
array2 = 0:0.1:10;
[x, y] = meshgrid(array1, array2);
%// Define parameters for normpdf
mu = 1;
sigma = 1;
%// Use "raw version" of normpdf to calculate all prob_map elements in one go
dim1 = exp(-0.5 * ((x(:) - mu)./sigma).^2) ./ (sqrt(2*pi) .* sigma);
dim2 = exp(-0.5 * ((y(:) - mu)./sigma).^2) ./ (sqrt(2*pi) .* sigma);
prob_map = bsxfun(@times,dim1,dim2.');
If you are interested in speeding it up further, you can pre-calculate a few more of the terms surrounding x(:) and y(:) in dim1 and dim2, respectively, as sketched below.
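One way to read that hint (a sketch of my own, not part of the original answer): hoist the mean shift, the scaling, and the normalization constant out so each is computed only once.
%// Sketch: pre-calculate the shifted/scaled grids and the constant once
xs = (x(:) - mu) ./ sigma;
ys = (y(:) - mu) ./ sigma;
c = 1 / (sqrt(2*pi) * sigma);
dim1 = c .* exp(-0.5 * xs.^2);
dim2 = c .* exp(-0.5 * ys.^2);
prob_map = bsxfun(@times, dim1, dim2.');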
TL;DR: I am trying to optimize the following short code in Matlab. Because it involves loops over large matrices, it is too slow.
for i = 1:sz,
for j = 1:sz,
if X(j) == Q(i) && Y(j) == R(i),
S(i) = Z(j);
break
end
end
end
Specifics: Basically, I started with three vectors of x, y and z data that I wanted to plot as a surface. I generated a mesh of the x and y data and then made a matrix for the corresponding z values using
[X, Y] = meshgrid(x, y);
Z = griddata(x, y, z, X, Y);
Because the data is collected in random order, when generating the surface plot the connections are all wrong and the plot looks all triangulated like the following example.
So, to make sure Matlab was connecting the right dots, I then reorganized the X and Y matrices using
[R, R_indx] = sort(Y);
[Q, Q_indx] = sort(X, 2);
From here I thought it would be a simple problem of reorganizing the matrix Z based on the indices of the sorting for matrix X and Y. But I run into trouble because no matter how I use the indices, I cannot produce the correct matrix. For example, I tried
S = Z(R_indx); % to match the rearrangement of Y
S = S(Q_indx); % to match the rearrangement of X
and I got this barcode...
Running the first block of code gives me the "desired" result pictured here. However, this takes far too long as it is a double loop over a very large matrix.
Question: How can I optimize this rearrangement of the matrix Z without for loops?
Please have a look at the following solutions, and test both with your matrices. Do they perform faster? The array indexing solution does what you asked for, i.e. the re-arrangement of the matrices. The vector indexing might be even better, since it sorts your original vectors instead of the matrices and generates the output directly from there.
% Parameters.
dim = 4;
% Test input.
x = [2, -2, 5, 4];
y = [1, -4, 6, -2];
z = rand(dim);
[X, Y] = meshgrid(x, y);
Z = griddata(x, y, z, X, Y);
[R, R_indx] = sort(Y);
[Q, Q_indx] = sort(X, 2);
% Initialize output.
S = zeros(dim);
% Provided solution using loop.
for i = 1:numel(z)
for j = 1:numel(z)
if (X(j) == Q(i) && Y(j) == R(i))
S(i) = Z(j);
break
end
end
end
% Output.
S
% Solution using array indexing; output.
S_array = reshape(((X(:) == Q(:).') & (Y(:) == R(:).')).' * Z(:), dim, dim)
% Solution using vector indexing; output.
[r, r_indx] = sort(y);
[q, q_indx] = sort(x);
[X, Y] = meshgrid(q, r);
Z = griddata(q, r, z, X, Y);
idx = (ones(dim, 1) * ((q_indx - 1) * dim) + r_indx.' * ones(1, dim));
S_vector = Z(idx)
Example output:
S =
0.371424 0.744220 0.777214 0.778058
0.580353 0.686495 0.356647 0.577780
0.436699 0.217288 0.883900 0.800133
0.594355 0.405309 0.544806 0.085540
S_array =
0.371424 0.744220 0.777214 0.778058
0.580353 0.686495 0.356647 0.577780
0.436699 0.217288 0.883900 0.800133
0.594355 0.405309 0.544806 0.085540
S_vector =
0.371424 0.744220 0.777214 0.778058
0.580353 0.686495 0.356647 0.577780
0.436699 0.217288 0.883900 0.800133
0.594355 0.405309 0.544806 0.085540
Given a system of the form y' = A*y(t) with solution y(t) = e^(tA)*y(0), where e^A is the matrix exponential (i.e. sum from n=0 to infinity of A^n/n!), how would I use matlab to compute the solution given the values of matrix A and the initial values for y?
That is, given A = [-2.1, 1.6; -3.1, 2.6], y(0) = [1;2], how would I solve for y(t) = [y1; y2] on t = [0:5] in matlab?
I try to use something like
t = 0:5
[y1; y2] = expm(A.*t).*[1;2]
and I'm finding errors in computing the multiplication due to dimensions not agreeing.
Please note that the matrix exponential is defined for square matrices. Your attempt to multiply the attenuation coefficients by the time vector doesn't give you what you'd want (which would be a 3D array that should be exponentiated slice by slice).
One of the simple ways would be this:
A = [-2.1, 1.6; -3.1, 2.6];
t = 0:5;
n = numel(t); % number of samples
y = NaN(2, n);
y(:,1) = [1;2];
for k =2:n
y(:,k) = expm(t(k)*A) * y(:,1);
end;
figure();
plot(t, y(1,:), t, y(2,:));
Please note that in MATLAB arrays are indexed from 1.
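A small follow-up sketch of my own (not part of the answer above), valid because the samples in t are uniformly spaced: expm(t(k)*A) equals expm(dt*A)^(k-1), so a single matrix exponential suffices and each step becomes a matrix-vector product.
dt = t(2) - t(1);            % uniform step (here dt = 1)
P = expm(dt * A);            % one matrix exponential for the whole run
y2 = NaN(2, n);
y2(:,1) = [1; 2];
for k = 2:n
    y2(:,k) = P * y2(:,k-1); % y(t_k) = expm(dt*A)^(k-1) * y(0)
end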
I'm trying to get Matlab to take this as a function of x_1 through x_n and y_1 through y_n, where k_i and r_i are all constants.
So far my idea was to take n from the user and make two 1×n vectors called x and y, and for the x_i just pull out x(i). But I don't know how to make an arbitrary sum in MATLAB.
I also need to get the gradient of this function, which I don't know how to do either. I was thinking maybe I could make a loop and add that to the function each time, but MATLAB doesn't like that.
I don't believe a loop is necessary for this calculation. MATLAB excels at vectorized operations, so would something like this work for you?
l = 10; % how large these vectors are
k = rand(l,1); % random junk values to work with
r = rand(l,1);
x = rand(l,1);
y = rand(l,1);
vals = k(1:end-1) .* (sqrt(diff(x).^2 + diff(y).^2) - r(1:end-1)).^2;
sum(vals)
EDIT: Thanks to @Amro for correcting the formula and simplifying it with diff.
You can solve for the gradient symbolically with:
n = 10;
k = sym('k',[1 n]); % Create n variables k1, k2, ..., kn
x = sym('x',[1 n]); % Create n variables x1, x2, ..., xn
y = sym('y',[1 n]); % Create n variables y1, y2, ..., yn
r = sym('r',[1 n]); % Create n variables r1, r2, ..., rn
% Symbolically sum equation
s = sum((k(1:end-1).*sqrt((x(2:end)-x(1:end-1)).^2+(y(2:end)-y(1:end-1)).^2)-r(1:end-1)).^2)
grad_x = gradient(s,x) % Gradient with respect to x vector
grad_y = gradient(s,y) % Gradient with respect to y vector
The symbolic sum and gradients can be evaluated and converted to floating point with:
% n random data values for k, x, y, and r
K = rand(1,n);
X = rand(1,n);
Y = rand(1,n);
R = rand(1,n);
% Substitute in data for symbolic variables
S = double(subs(s,{[k,x,y,r]},{[K,X,Y,R]}))
GRAD_X = double(subs(grad_x,{[k,x,y,r]},{[K,X,Y,R]}))
GRAD_Y = double(subs(grad_y,{[k,x,y,r]},{[K,X,Y,R]}))
The gradient function is the one overloaded for symbolic variables (type help sym/gradient, or see the more detailed documentation online).
Yes, you could indeed do this with a loop, considering that x, y, k, and r are already defined.
n = length(x);
s = 0;
for j = 2 : n
s = s + k(j-1) * (sqrt((x(j) - x(j-1)).^2 + (y(j) - y(j-1)).^2) - r(j-1)).^2
end
You should derive the gradient analytically and then plug in numbers. It should not be too hard to expand these terms and then find derivatives of the resulting expression.
A vectorized solution is something like this (I wonder why you use sqrt(...).^2):
is = 2:n;
result = sum( k(is - 1) .* abs((x(is) - x(is-1)).^2 + (y(is) - y(is-1)).^2 - r(is-1)));
You can either compute the gradient symbolically, or rewrite this code as a function and make a standard +/-eps (finite-difference) calculation. If you need a gradient to run an optimization (your code looks like a fitness function), you could use algorithms that handle this for you; for example, fminsearch does not require a gradient at all.
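To make the "derive the gradient analytically" suggestion concrete, here is a minimal sketch of my own. It assumes the objective from the first answer, sum(k(1:end-1) .* (sqrt(diff(x).^2 + diff(y).^2) - r(1:end-1)).^2), and that no two consecutive points coincide (so the segment lengths are nonzero):
d = sqrt(diff(x).^2 + diff(y).^2);            % segment lengths
c = 2 * k(1:end-1) .* (d - r(1:end-1)) ./ d;  % common factor per segment
gx = zeros(size(x));                          % gradient w.r.t. x
gy = zeros(size(y));                          % gradient w.r.t. y
gx(2:end)   = gx(2:end)   + c .* diff(x);     % d/dx_j of segment (j-1, j)
gx(1:end-1) = gx(1:end-1) - c .* diff(x);     % d/dx_(j-1) of the same segment
gy(2:end)   = gy(2:end)   + c .* diff(y);
gy(1:end-1) = gy(1:end-1) - c .* diff(y);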
I have a double summation over m = 1:M and n = 1:N for a polar point with coordinates rho, phi, z:
I have written a vectorized version of it:
N = 10;
M = 10;
n = 1:N;
m = 1:M;
rho = 1;
phi = 1;
z = 1;
summ = cos (n*z) * besselj(m'-1, n*rho) * cos(m*phi)';
Now I need to rewrite this function so it accepts vectors (columns) of coordinates rho, phi, z. I tried arrayfun, cellfun, and a simple for-loop; they are too slow for me. I know about "MATLAB array manipulation tips and tricks", but as a MATLAB beginner I can't understand repmat and the other functions.
Can anybody suggest vectorized solution?
I think your code is already well vectorized (for n and m). If you want this function to also accept an array of rho/phi/z values, I suggest you simply process the values in a for-loop, as I doubt any further vectorization will bring significant improvements (plus the code will be harder to read).
Having said that, in the code below, I tried to vectorize the part where you compute the various components being multiplied {row N} * { matrix N*M } * {col M} = {scalar}, by making a single call to the BESSELJ and COS functions (I place each of the row/matrix/column in the third dimension). Their multiplication is still done in a loop (ARRAYFUN to be exact):
%# parameters
N = 10; M = 10;
n = 1:N; m = 1:M;
num = 50;
rho = 1:num; phi = 1:num; z = 1:num;
%# straightforward FOR-loop
tic
result1 = zeros(1,num);
for i=1:num
result1(i) = cos(n*z(i)) * besselj(m'-1, n*rho(i)) * cos(m*phi(i))';
end
toc
%# vectorized computation of the components
tic
a = cos( bsxfun(@times, n, permute(z(:),[3 2 1])) );
b = besselj(m'-1, reshape(bsxfun(@times,n,rho(:))',[],1)');
b = permute(reshape(b',[length(m) length(n) length(rho)]), [2 1 3]);
c = cos( bsxfun(@times, m, permute(phi(:),[3 2 1])) );
result2 = arrayfun(@(i) a(:,:,i)*b(:,:,i)*c(:,:,i)', 1:num);
toc
%# make sure the two results are the same
assert( isequal(result1,result2) )
I did another benchmark test using the TIMEIT function (which gives fairer timings). The results agree with the previous ones:
0.0062407 # elapsed time (seconds) for my solution
0.015677 # elapsed time (seconds) for the FOR-loop solution
Note that as you increase the size of the input vectors, the two methods will start to have similar timings (the FOR-loop even wins on some occasions).
You need to create two matrices, say m_ and n_ so that by selecting element i,j of each matrix you get the desired index for both m and n.
Most MATLAB functions accept matrices and vectors and compute their results element by element. So to produce a double sum, you compute all elements of the sum in parallel by f(m_, n_) and sum them.
In your case (note that the .* operator performs element-wise multiplication of matrices)
N = 10;
M = 10;
n = 1:N;
m = 1:M;
rho = 1;
phi = 1;
z = 1;
% N rows x M columns for each matrix
% n_ - all columns are identical
% m_ - all rows are identical
n_ = repmat(n', 1, M);
m_ = repmat(m , N, 1);
element_nm = cos (n_*z) .* besselj(m_-1, n_*rho) .* cos(m_*phi);
sum_all = sum( element_nm(:) );
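As a side note on building n_ and m_: the same pair of index matrices can be produced in a single call to ndgrid, which some may find easier to read than the two repmat calls:
[n_, m_] = ndgrid(n, m);   % identical to the repmat construction above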
I want to normalise each column of a matrix in Matlab. I have tried two implementations:
Option A:
mx=max(x);
mn=min(x);
mmd=mx-mn;
for i=1:size(x,1)
xn(i,:)=((x(i,:)-mn+(mmd==0))./(mmd+(mmd==0)*2))*2-1;
end
Option B:
mn=mean(x);
sdx=std(x);
for i=1:size(x,1)
xn(i,:)=(x(i,:)-mn)./(sdx+(sdx==0));
end
However, these options take too much time for my data, e.g. 3-4 seconds on a 5000x53 matrix. Thus, is there any better solution?
Use bsxfun instead of the loop. This may be a bit faster; however, it may also use more memory (which may be an issue in your case; if you're paging, everything'll be really slow).
To normalize with mean and std, you'd write
mn = mean(x);
sd = std(x);
sd(sd==0) = 1;
xn = bsxfun(@minus,x,mn);
xn = bsxfun(@rdivide,xn,sd);
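On MATLAB R2016b or newer (and in Octave), implicit expansion lets you drop the bsxfun calls; the following is equivalent:
xn = (x - mn) ./ sd;   % implicit expansion, same result as the bsxfun version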
Remember, in MATLAB, vectorizing = speed.
If A is an M x N matrix,
A = rand(m,n);
minA = repmat(min(A), [size(A, 1), 1]);
normA = max(A) - min(A); % this is a vector
normA = repmat(normA, [size(A, 1) 1]); % this makes it a matrix
% of the same size as A
normalizedA = (A - minA)./normA; % your normalized matrix
Note: I am not providing a brand-new answer, but rather comparing the proposed answers.
Option A: Using bsxfun()
function xn = normalizeBsxfun(x)
mn = mean(x);
sd = std(x);
sd(sd==0) = eps;
xn = bsxfun(@minus,x,mn);
xn = bsxfun(@rdivide,xn,sd);
end
Option B: Using a for-loop
function xn = normalizeLoop(x)
xn = zeros(size(x));
for ii=1:size(x,2)
xaux = x(:,ii);
xn(:,ii) = (xaux - mean(xaux))./std(xaux);
end
end
We compare both implementations for different matrix sizes:
expList = 2:0.5:5;
for ii=1:numel(expList)
expNum = round(10^expList(ii));
x = rand(expNum,expNum);
tic;
xn = normalizeBsxfun(x);
ts(ii) = toc;
tic;
xn = normalizeLoop(x);
tl(ii) = toc;
end
figure;
hold on;
plot(round(10.^expList),ts,'b');
plot(round(10.^expList),tl,'r');
legend('bsxfun','loop');
set(gca,'YScale','log')
The results show that for small matrices, bsxfun is faster. But the difference is negligible for higher dimensions, as was also found in the other post.
The x-axis is the square root of the number of matrix elements, while the y-axis is the computation time in seconds.
Let X be an m x n matrix that you want to normalize column-wise.
The following MATLAB code does it:
XMean = repmat(mean(X), size(X, 1), 1);
XStd = repmat(std(X), size(X, 1), 1);
X_norm = (X - XMean)./(XStd);
The element wise ./ operator is explained here: http://www.mathworks.in/help/matlab/ref/arithmeticoperators.html
Note: As the OP mentioned, this is simply a faster solution that performs the same task as looping through the matrix. The underlying implementation of these built-in functions makes it work faster.
Note: This code works in Octave and MATLAB versions R2016b or higher.
function X_norm = normalizeMatrix(X)
mu = mean(X); %mean
sigma = std(X); %standard deviation
X_norm = (X - mu)./sigma;
end
How about using
normc(X)
that would normalize the matrix X column-wise (each column scaled to unit length). You need to include the Neural Network Toolbox in your install, though.
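If you don't have that toolbox, a minimal stand-in (my own sketch; normc scales each column to unit Euclidean norm, and this assumes no column is all zeros) is:
Xn = bsxfun(@rdivide, X, sqrt(sum(X.^2, 1)));   % each column scaled to unit 2-norm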
How about this?
A = [7, 2, 6; 3, 8, 4]; % a 2x3 matrix
Asum = sum(A); % sum the columns
Anorm = A./Asum(ones(size(A, 1), 1), :); % normalise the columns