Is bsxfun still optimal in MATLAB?

I bumped into this question while searching for the topic, but that one seems to be outdated.
Reading https://blogs.mathworks.com/loren/2016/10/24/matlab-arithmetic-expands-in-r2016b , implicit expansion was introduced in R2016b, but I can still find reference code in papers using bsxfun for arithmetic expansion. So I assume there are circumstances that make bsxfun preferable to other methods.
I compared the speed of bsxfun, repmat, and implicit expansion (I used the code of Jonas from the link).
The comparison of calculation times, measured with tic/toc (plot omitted here), shows that implicit expansion is clearly faster than bsxfun or repmat. Is there any reason to use bsxfun nowadays?
Here is the code I used to compare the speed:
n = 300;
k = 100;               %# k=100 for the second graph
a = ones(10,1);
rr = zeros(n,1);
bb = zeros(n,1);
gg = zeros(n,1);
ntt = 100;
tt = zeros(ntt,1);
for i = 1:n
    r = rand(1, i*k);
    % bsxfun
    for it = 1:ntt
        tic
        x = bsxfun(@plus, a, r);
        tt(it) = toc;
    end
    bb(i) = median(tt);
    % repmat
    for it = 1:ntt
        tic
        y = repmat(a, 1, i*k) + repmat(r, 10, 1);
        tt(it) = toc;
    end
    rr(i) = median(tt);
    % implicit expansion
    for it = 1:ntt
        tic
        z = a + r;
        tt(it) = toc;
    end
    gg(i) = median(tt);
end
figure;
plot(bb, 'b')
hold on
plot(rr, 'r')
plot(gg, 'g')
legend({'bsxfun', 'repmat', 'implicit'})

All bsxfun does is Binary Singleton eXpansion, and it's more typing than the now-usual implicit expansion. I'd guess The MathWorks keeps bsxfun around for backwards compatibility but no longer works on it; internally it might even just map to implicit expansion.
The documentation on bsxfun states:
It is recommended that you replace most uses of bsxfun with direct calls to the functions and operators that support implicit expansion. Compared to using bsxfun, implicit expansion offers faster speed of execution, better memory usage, and improved readability of code. For more information, see Compatible Array Sizes for Basic Operations.
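As a minimal sketch of that migration (the arrays here are illustrative, not from the original post):
A = rand(5, 3);
v = rand(5, 1);
% pre-R2016b style
y1 = bsxfun(@minus, A, v);              % subtract v from every column
y2 = bsxfun(@rdivide, A, sum(A, 1));    % normalize every column by its sum
% implicit-expansion style (R2016b and later)
z1 = A - v;
z2 = A ./ sum(A, 1);
isequal(y1, z1) && isequal(y2, z2)      % true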
Additionally, implicit expansion seems to have internal optimisations beyond what bsxfun does; see this question of mine.
More helpful links can be found in this answer by nirvana-msu, amongst others to blogs by MathWorks employees discussing this.
So I'd say the only reason to use bsxfun instead of implicit expansion is if you need the code to run on a pre-R2016b version of MATLAB.
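If the same code has to run on both sides of that boundary, one hedged option is a version guard (R2016b corresponds to MATLAB 9.1):
% Sketch: choose the expansion style based on the running version
if verLessThan('matlab', '9.1')   % older than R2016b
    x = bsxfun(@plus, a, r);
else
    x = a + r;                    % implicit expansion
end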

Related

why does a*b*a take longer than (a'*(a*b)')' when using gpuArray in Matlab scripts?

The code below performs the same operation on gpuArrays a and b in two different ways. The first part computes (a'*(a*b)')', while the second part computes a*b*a. The results are then verified to be the same.
%function test
clear
rng('default'); rng(1);
a = sprand(3000, 3000, 0.1);
b = rand(3000, 3000);
a = gpuArray(a);
b = gpuArray(b);
tic;
c1 = gather(transpose(transpose(a)*transpose(a*b)));
disp(['time for (a''*(a*b)'')'': ', num2str(toc), 's'])

clearvars -except c1
rng('default'); rng(1)
a = sprand(3000, 3000, 0.1);
b = rand(3000, 3000);
a = gpuArray(a);
b = gpuArray(b);
tic;
c2 = gather(a*b*a);
disp(['time for a*b*a: ', num2str(toc), 's'])
disp(['error = ', num2str(max(max(abs(c1-c2))))])
%end
However, computing (a'*(a*b)')' is roughly 4 times faster than computing a*b*a. Here is the output of the above script in R2018a on an Nvidia K20 (I've tried different versions and different GPUs with similar behaviour).
>> test
time for (a'*(a*b)')': 0.43234s
time for a*b*a: 1.7175s
error = 2.0009e-11
Even more strangely, if the first and last lines of the above script are uncommented (turning it into a function), then both computations take the longer time (~1.7s instead of ~0.4s). Below is the output for this case:
>> test
time for (a'*(a*b)')': 1.717s
time for a*b*a: 1.7153s
error = 1.0914e-11
I'd like to know what is causing this behaviour, and how to perform a*b*a or (a'*(a*b)')' (or both) in the shorter time (i.e. ~0.4s rather than ~1.7s) inside a MATLAB function rather than inside a script.
There seems to be an issue with multiplying two sparse matrices on the GPU: sparse-by-full multiplication is more than 1000 times faster than sparse-by-sparse. A simple example:
str = {'sparse*sparse', 'sparse*full'};
for ii = 1:2
    rng(1);
    a = sprand(3000, 3000, 0.1);
    b = sprand(3000, 3000, 0.1);
    if ii == 2
        b = full(b);
    end
    a = gpuArray(a);
    b = gpuArray(b);
    tic
    c = a*b;
    disp(['time for ', str{ii}, ': ', num2str(toc), 's'])
end
In your context, it is the last multiplication that does it. To demonstrate, I replace the final a with a duplicate c and multiply by it twice, once as a sparse and once as a full matrix.
str = {'a*b*a', 'a*b*full(a)'};
for ii = 1:2
    rng(1)
    a = sprand(3000, 3000, 0.1);
    b = rand(3000, 3000);
    rng(1)
    c = sprand(3000, 3000, 0.1);
    if ii == 2
        c = full(c);
    end
    a = gpuArray(a);
    b = gpuArray(b);
    c = gpuArray(c);
    tic;
    c1{ii} = a*b*c;
    disp(['time for ', str{ii}, ': ', num2str(toc), 's'])
end
disp(['error = ',num2str(max(max(abs(c1{1}-c1{2}))))])
I may be wrong, but my conclusion is that a*b*a involves multiplying two sparse matrices (a and a again), which is not handled well, whereas the transpose() approach splits the process into two multiplications, neither of which involves two sparse matrices.
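If that conclusion holds, a hedged practical fix for the original expression is to densify the second occurrence of a before the final product (assuming full is supported for sparse gpuArrays in your release, and at the cost of extra memory):
c2 = gather(a * b * full(a));   % avoid the dense-times-sparse product at the end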
I got in touch with MathWorks tech support, and Rylan finally shed some light on this issue (thanks, Rylan!). His full response is below. The function-vs-script issue appears to be related to certain optimizations that MATLAB applies automatically to functions (but not scripts) not working as expected.
Rylan's response:
Thank you for your patience on this issue. I have consulted with the MATLAB GPU computing developers to understand this better.
This issue is caused by internal optimizations done by MATLAB when encountering some specific operations like matrix-matrix multiplication and transpose. Some of these optimizations may be enabled specifically when executing a MATLAB function (or anonymous function) rather than a script.
When your initial code was being executed from a script, a particular matrix transpose optimization is not performed, which results in the 'res2' expression being faster than the 'res1' expression:
n = 2000;
a=gpuArray(sprand(n,n,0.01));
b=gpuArray(rand(n));
tic;res1=a*b*a;wait(gpuDevice);toc % Elapsed time is 0.884099 seconds.
tic;res2=transpose(transpose(a)*transpose(a*b));wait(gpuDevice);toc % Elapsed time is 0.068855 seconds.
However when the above code is placed in a MATLAB function file, an additional matrix transpose-times optimization is done which causes the 'res2' expression to go through a different code path (and different CUDA library function call) compared to the same line being called from a script. Therefore this optimization generates slower results for the 'res2' line when called from a function file.
To avoid this issue from occurring in a function file, the transpose and multiply operations would need to be split in a manner that stops MATLAB from applying this optimization. Separating each clause within the 'res2' statement seems to be sufficient for this:
tic;i1=transpose(a);i2=transpose(a*b);res3=transpose(i1*i2);wait(gpuDevice);toc % Elapsed time is 0.066446 seconds.
In the above line, 'res3' is being generated from two intermediate matrices: 'i1' and 'i2'. The performance (on my system) seems to be on par with that of the 'res2' expression when executed from a script; in addition the 'res3' expression also shows similar performance when executed from a MATLAB function file. Note however that additional memory may be used to store the transposed copy of the initial array. Please let me know if you see different performance behavior on your system, and I can investigate this further.
Additionally, the 'res3' operation shows faster performance when measured with the 'gputimeit' function too. Please refer to the attached 'testscript2.m' file for more information on this. I have also attached 'test_v2.m' which is a modification of the 'test.m' function in your Stack Overflow post.
Thank you for reporting this issue to me. I would like to apologize for any inconvenience caused by this issue. I have created an internal bug report to notify the MATLAB developers about this behavior. They may provide a fix for this in a future release of MATLAB.
Since you had an additional question about comparing the performance of GPU code using 'gputimeit' vs. using 'tic' and 'toc', I just wanted to provide one suggestion which the MATLAB GPU computing developers had mentioned earlier. It is generally good to also call 'wait(gpuDevice)' before the 'tic' statements to ensure that GPU operations from the previous lines don't overlap in the measurement for the next line. For example, in the following lines:
b=gpuArray(rand(n));
tic; res1=a*b*a; wait(gpuDevice); toc
if the 'wait(gpuDevice)' is not called before the 'tic', some of the time taken to construct the 'b' array from the previous line may overlap and get counted in the time taken to execute the 'res1' expression. This would be preferred instead:
b=gpuArray(rand(n));
wait(gpuDevice); tic; res1=a*b*a; wait(gpuDevice); toc
Apart from this, I am not seeing any specific issues in the way that you are using the 'tic' and 'toc' functions. However note that using 'gputimeit' is generally recommended over using 'tic' and 'toc' directly for GPU-related profiling.
I will go ahead and close this case for now, but please let me know if you have any further questions about this.
%testscript2.m
n = 2000;
a = gpuArray(sprand(n, n, 0.01));
b = gpuArray(rand(n));
gputimeit(@() transpose_mult_fun(a, b))
gputimeit(@() transpose_mult_fun_2(a, b))

function out = transpose_mult_fun(in1, in2)
i1 = transpose(in1);
i2 = transpose(in1*in2);
out = transpose(i1*i2);
end

function out = transpose_mult_fun_2(in1, in2)
out = transpose(transpose(in1)*transpose(in1*in2));
end
%test_v2.m
function test_v2
clear
%% transposed expression
n = 2000;
rng('default'); rng(1);
a = sprand(n, n, 0.1);
b = rand(n, n);
a = gpuArray(a);
b = gpuArray(b);
tic;
c1 = gather(transpose( transpose(a) * transpose(a * b) ));
disp(['time for (a''*(a*b)'')'': ', num2str(toc), 's'])
clearvars -except c1

%% non-transposed expression
rng('default'); rng(1)
n = 2000;
a = sprand(n, n, 0.1);
b = rand(n, n);
a = gpuArray(a);
b = gpuArray(b);
tic;
c2 = gather(a * b * a);
disp(['time for a*b*a: ', num2str(toc), 's'])
disp(['error = ', num2str(max(max(abs(c1-c2))))])

%% split equivalent
rng('default'); rng(1)
n = 2000;
a = sprand(n, n, 0.1);
b = rand(n, n);
a = gpuArray(a);
b = gpuArray(b);
tic;
intermediate1 = transpose(a);
intermediate2 = transpose(a * b);
c3 = gather(transpose( intermediate1 * intermediate2 ));
disp(['time for split equivalent: ', num2str(toc), 's'])
disp(['error = ', num2str(max(max(abs(c1-c3))))])
end
EDIT 2: I might have been right, see this other answer.
EDIT: They use MAGMA, which is column major. My answer does not hold; however, I will leave it here for a while in case it can help crack this strange behaviour.
The answer below is wrong.
This is my guess; I can't tell you with 100% certainty without knowing the code under MATLAB's hood.
Hypothesis: MATLAB's parallel computing code uses CUDA libraries, not its own.
Important information
MATLAB is column major and CUDA is row major.
There is no such thing as a 2D matrix in memory, only a 1D array addressed with two indices.
Why does this matter? Because CUDA is highly optimized code that uses memory structure to maximize cache hits per kernel (the slowest operation on GPUs is reading memory). This means a standard CUDA matrix-multiplication code will exploit the order of memory reads to make sure they are adjacent. However, what is adjacent memory in row-major order is not adjacent in column-major order.
So, as someone writing software, there are two solutions to this:
Write your own column-major algebra libraries in CUDA.
Take every input/output from MATLAB and transpose it (i.e. convert from column-major to row-major).
They have done point 2, and assuming that there is a smart JIT compiler for the MATLAB Parallel Computing Toolbox (a reasonable assumption), in the second case it takes a and b, transposes them, does the maths, and transposes the output when you gather.
In the first case, however, you already do not need to transpose the output, as it is internally already transposed and the JIT catches this, so instead of calling gather(transpose( XX )) it just skips the output transposition. The same goes for transpose(a*b). Note that transpose(a*b) = transpose(b)*transpose(a), so suddenly no transposes are needed at all (they are all skipped internally). A transposition is a costly operation.
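The transpose identity relied on here is standard linear algebra and easy to sanity-check numerically; a throwaway sketch:
% (A*B)' equals B'*A' up to floating-point round-off
rng(1);
A = rand(4); B = rand(4);
disp(max(max(abs(transpose(A*B) - transpose(B)*transpose(A)))))   % ~1e-16 or exactly 0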
Indeed, there is a weird thing here: making the code a function suddenly makes it slow. My best guess is that because the JIT behaves differently in different situations, it doesn't catch all this transpose logic inside a function and just performs all the operations anyway, losing the speed-up.
Interesting observation: on my PC, a*b*a takes the same time on the CPU as on the GPU.

Matlab symbolic

I am trying to compare two simple expressions using the MATLAB Symbolic Math Toolbox. For some reason, the code returns 0. Any idea?
syms a b c
A = (a/b)^c
B = a^c/b^c
isequal(A,B)
It seems like MATLAB has a hard time telling that two expressions are the same when (potentially) fractional exponents are involved.
So, one solution, as suggested by Mikhail, is to restrict the values of c to be integers only, although, as discussed in the Math.SE question jodag posted, there is nothing wrong with fractional exponents in this case.
Hence, since this restriction to integers is not necessary for the statement to be true, another solution is to use the simplify function on the expression for B, allowing it to run more simplification steps in order to get the most simplified expression.
syms a b c
A = (a/b)^c
B = a^c/b^c
isequal(A, simplify(B, 'Steps', 4))
Four steps was actually the smallest number that worked for me, but that could vary across MATLAB versions, I'm assuming. To be sure, I would include more, but for really large expressions this could become computationally intensive, so some judgment is necessary. Note that you could also use the 'Seconds' option to limit the amount of time allowed for simplification.
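As a hedged sketch of that 'Seconds' variant (whether one second is enough will depend on the machine and release):
syms a b c
A = (a/b)^c;
B = a^c/b^c;
isequal(A, simplify(B, 'Seconds', 1))   % time budget instead of a step count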
In general, what you wrote isn't true; under the right "assumptions" it becomes true. For example, by assuming that c is an integer you can trick MATLAB into expanding A:
clc; clear all;
syms a
syms b
syms c integer
A = (a/b)^c;
B = simplify((a^c)/(b^c));
disp(isequal(A,B));
disp(A);
disp(B);
which outputs:
1
a^c/b^c
a^c/b^c

MATLAB function passing by reference

I have a class with properties in it (let's say the class file is named inputvar).
I use it as the input argument for two different functions, which perform exactly the same calculation but with slightly different code, which I'll explain below.
For the first function (let's say its name is myfun1), I wrote the input argument like this:
f = myfun1(inputvar)
So every time I want to use variables from the class inside the function, I have to call inputvar.var1, inputvar.var2, etc.
For the second function (myfun2), I wrote each variable from the class as a separate input argument, so it looks like this:
f = myfun2(inputvar.var1, inputvar.var2, ... etc)
Inside the function, I just use var1, var2, etc., without having to include the name of the class.
After running both functions, I found that myfun2 runs a lot faster than myfun1, by about 60% (I used tic/toc).
Can someone explain exactly why that is?
Quoting the documentation:
MATLAB uses a system commonly called "copy-on-write" to avoid making a copy of the input argument inside the function workspace until or unless you modify the input argument. If you do not modify the input argument, MATLAB will avoid making a copy. For instance, in this code:
function y = functionOfLargeMatrix(x)
y = x(1);
MATLAB will not make a copy of the input in the workspace of functionOfLargeMatrix, as x is not being changed in that function. If, on the other hand, you called this function:
function y = functionOfLargeMatrix2(x)
x(2) = 2;
y = x(1);
then x is being modified inside the workspace of functionOfLargeMatrix2, and so a copy must be made.
According to the statement above, when you pass a class object directly and change any member of it, a copy of the whole object is made.
If, on the other hand, you pass the class members as separate arguments, the copy applies only to the members actually modified in the function, resulting in faster execution.
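A hedged sketch that makes the copy-on-write cost visible (an R2016b+ script with local functions; the names and sizes are illustrative):
x = rand(5e3);                        % ~200 MB of doubles
t_read  = timeit(@() readOnly(x));
t_write = timeit(@() writeOne(x));
fprintf('read: %.4g s, write: %.4g s\n', t_read, t_write);

function y = readOnly(m)
y = m(1);                             % no modification: the memory is shared
end
function y = writeOne(m)
m(2) = 2;                             % first write: a full copy of m is made
y = m(1);
end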
I found that accessing properties is very slow in Matlab. I have not found a way around it, but some basic ideas are given here: http://blogs.mathworks.com/loren/2012/03/26/considering-performance-in-object-oriented-matlab-code/
That article, however, only talks about avoiding horrible, abysmal performance. Even with the simplest properties, performance is mediocre at best.
Take the example class from the MathWorks article. I wrote a small test script:
clear all
clc
n = 1e5;

%% OOP way - abysmal
result = zeros(1, n);
tic
for i = 1:n
    cyl = SimpleCylinder();
    cyl.R = i;
    cyl.Height = 10;
    result(i) = cyl.volume();
end
toc

%% OOP vectorized - fair
clear result
tic
cyl = SimpleCylinder();
cyl.R = 1:n;
cyl.Height = 10;
result = cyl.volume();
toc

%% for loop without objects - good
result = zeros(1, n);
tic
for i = 1:n
    result(i) = pi .* i.^2 .* 10;
end
toc

%% Vectorized without objects - excellent
clear result
tic
R = 1:n;
result = pi .* R.^2 .* 10;
toc
With these results:
Elapsed time is 6.141445 seconds.
Elapsed time is 0.006245 seconds.
Elapsed time is 0.002116 seconds.
Elapsed time is 0.000478 seconds.
As you can see, every property access slows things down. Try to vectorize (as always), but even the simple for-loop outperforms the vectorized OOP solution for small n. (On my PC, they break even at n = 1e7.)
Essential message: OOP in Matlab is slow! You pay the price for every property access.
To your question: when you call
myfun2(inputvar.var1, inputvar.var2, ... etc)
the values are copied. Within the function, you are no longer dealing with classes, and access to the variables is fast. However, if you pass the whole class, every access to a property is slow. You can circumvent this by caching all properties in local variables and using those, as in the sketch below.
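A hedged sketch of that caching pattern (var1/var2 stand in for whatever numeric properties the function touches; the body is illustrative):
function f = myfun1_cached(inputvar)
% Copy each property to a local variable once, outside the hot loop
v1 = inputvar.var1;
v2 = inputvar.var2;
f = zeros(size(v1));
for k = 1:numel(v1)
    f(k) = v1(k).^2 + v2(k);          % only fast local access in the loop
end
end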
If you modify the class to inherit from handle, everything gets a bit faster, but the difference is negligible.
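For reference, a minimal handle variant of the cylinder class might look like this (a sketch, not the article's exact code):
classdef SimpleCylinderH < handle
    properties
        R
        Height
    end
    methods
        function vol = volume(obj)
            vol = pi .* obj.R.^2 .* obj.Height;
        end
    end
end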

MATLAB matrix multiplication vs for loop for each column

When multiplying two matrices, I tried the following two options:
1)
res = X*A;
2)
for i = 1:size(A,2)
    res(:,i) = X*A(:,i);
end
I preallocated memory for res in both cases. And surprisingly, I found option 2 to be faster.
Can someone explain how this is so?
edit:
I tried
K = 10000;
clear t1 t2
t1 = zeros(K,1);
t2 = zeros(K,1);
for k = 1:K
    clear res
    x = rand(100,100);
    a = rand(100,100);
    tic
    res = x*a;
    t1(k) = toc;
end
for k = 1:K
    clear res2
    res2 = zeros(100,100);
    x = rand(100,100);
    a = rand(100,100);
    tic
    for i = 1:100
        res2(:,i) = x*a(:,i);
    end
    t2(k) = toc;
end
I ran both pieces of code in a loop 1000 times. On average (but not always), the first, vectorized code was 3-4 times faster. I cleared the result variables and preallocated before starting the timer.
x = rand(100,100);
a = rand(100,100);
K = 1000;
clear t1 t2
t1 = zeros(K,1);
t2 = zeros(K,1);
for k = 1:K
    clear res
    tic
    res = x*a;
    t1(k) = toc;
end
for k = 1:K
    clear res2
    res2 = zeros(100,100);
    tic
    for i = 1:100
        res2(:,i) = x*a(:,i);
    end
    t2(k) = toc;
end
So: never draw a timing conclusion based on a single run.
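On releases that have it (R2013b and later), timeit is a hedged alternative that handles warm-up and repetition for you (colLoop is an illustrative helper; put it in its own file on releases without local functions in scripts):
x = rand(100);
a = rand(100);
t1 = timeit(@() x*a);                 % vectorized multiply
t2 = timeit(@() colLoop(x, a));       % column-by-column loop
fprintf('vectorized: %.3g s, loop: %.3g s\n', t1, t2);

function res = colLoop(x, a)
res = zeros(size(x,1), size(a,2));
for i = 1:size(a,2)
    res(:,i) = x*a(:,i);
end
end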
I believe I can chime in on the variation in timings between the two methods, as well as why people are getting different relative speeds.
Before MATLAB version R2008a (or a version near that release), for loops took a major hit in any MATLAB code because the interpreter (a layer between the very readable script and a lower-level implementation of the code) had to re-interpret the code on every pass through the loop.
Since that release, the interpreter has gotten progressively better, so when running a modern version of MATLAB, the interpreter can look at your code and say "Aha! I know what he is doing, let me optimize it a bit", avoiding the hit it would otherwise take by reinterpreting the code.
I would expect the two ways of performing the matrix multiply to evaluate in roughly the same amount of time; why the for-loop implementation runs faster comes down to some detail of the interpreter's optimizations that we mere mortals are not privy to.
One broad lesson to take from this is that not all versions are equal. I work on a couple of bleeding-edge cases using two MATLAB add-ons, the SimBiology and Parallel Computing Toolboxes, both of which (especially if you want them to work together) are version-dependent in execution speed and, from time to time, in other stability issues. As such, I keep the three most recent releases of MATLAB, test that I get the same answers out of each version, and occasionally roll back to an earlier release if I find issues with some features. This is probably overkill for most people, but it gives you an idea of version differences.
Hope this helps.
Edits:
To clarify, code vectorization is still important. But given a script like:
x_slow = zeros(1,1e5);
x_fast = zeros(1,1e5);
tic;
for i = 1:1e5
    x_slow(i) = log(i);
end
time_slow = toc; % evaluates for me in .0132 seconds
tic;
x_fast = log(1:1e5);
time_fast = toc; % evaluates for me in .0055 seconds
The disparity between time_slow and time_fast has shrunk over the past several versions thanks to improvements in the interpreter. The example I saw was, I believe, on 2000a vs. 2008b, but that's subject to my recollection.
There is something else that might be going on, which was addressed by Oli and Yuk: there is often a difference between time_1 and time_2 in:
tic; x = log(1:1e5); time_1 = toc
tic; x = log(1:1e5); time_2 = toc
So a test of one million evaluations vs. one evaluation is valuable, depending on where x sits in memory (in cache or not).
Hope this helps again.
This may well be a caching effect: a is already in the cache by the time you run the second version, so it has an advantage. Try creating an independent set of inputs to make it fair. Also, it's probably better to measure the time of, e.g., a million iterations, in order to eliminate typical variations due to outside effects.
It looks to me like you are not multiplying the matrices properly: you need to sum all the products of the i-th row of the X matrix and the j-th column of the A matrix. That might be the reason.
Look here to see how it's done.

vectorizing loops in Matlab - performance issues

This question is related to these two:
Introduction to vectorizing in MATLAB - any good tutorials?
filter that uses elements from two arrays at the same time
Based on the tutorials I read, I was trying to vectorize a procedure that takes a really long time.
I've rewritten this:
function B = bfltGray(A, w, sigma_r)
dim = size(A);
B = zeros(dim);
for i = 1:dim(1)
    for j = 1:dim(2)
        % Extract local region.
        iMin = max(i-w, 1);
        iMax = min(i+w, dim(1));
        jMin = max(j-w, 1);
        jMax = min(j+w, dim(2));
        I = A(iMin:iMax, jMin:jMax);
        % Compute Gaussian intensity weights.
        F = exp(-0.5*(abs(I-A(i,j))/sigma_r).^2);
        B(i,j) = sum(F(:).*I(:))/sum(F(:));
    end
end
into this:
function B = rngVect(A, w, sigma)
W = 2*w+1;
I = padarray(A, [w,w],'symmetric');
I = im2col(I, [W,W]);
H = exp(-0.5*(abs(I-repmat(A(:)', size(I,1),1))/sigma).^2);
B = reshape(sum(H.*I,1)./sum(H,1), size(A, 1), []);
where
A is a 512x512 matrix,
w is half of the window size, usually equal to 5,
sigma is a parameter in the range [0 1] (usually one of: 0.1, 0.2 or 0.3).
So the I matrix would have 512x512x121 = 31719424 elements
But this version seems to be as slow as the first one, and in addition it uses a lot of memory and sometimes causes memory problems.
I suppose I've done something wrong; probably some logic mistake regarding the vectorization. In fact I'm not surprised - this method creates really big matrices, and the computations are probably proportionally longer.
I have also tried to write it using nlfilter (similar to the second solution given by Jonas), but it seems to be hard since I use Matlab 6.5 (R13), where no sophisticated function handles are available.
So once again, I'm asking not for a ready solution but for ideas that would help me solve this in reasonable time. Maybe you can point out what I did wrong.
Edit:
As Mikhail suggested, the results of profiling are as follows:
65% of the time was spent in the line H = exp(...),
25% of the time was used by im2col.
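For reference, this breakdown came from the MATLAB profiler; a sketch of how to reproduce it, using the function and inputs defined above (on very old releases like R13, profile report replaces profile viewer):
profile on
B = rngVect(A, 5, 0.2);
profile viewer   % per-line timings show where the time goes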
How big are I and H (i.e. numel(I)*8 bytes)? If you start paging, then the performance of your second solution is going to be affected very badly.
To test whether you really have a problem due to too-large arrays, you can measure the speed of the calculation using tic and toc for arrays A of increasing size. If the execution time increases faster than the square of the size of A, or if the execution time jumps at some size of A, you can try to split the padded I into a number of sub-arrays and perform the calculations that way, as sketched below.
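A hedged sketch of that splitting idea: the big H matrix is built blk pixels at a time (blk is a tunable chunk size I am introducing for illustration; I is still materialized in full, so this roughly halves peak memory rather than eliminating it):
function B = rngVectChunked(A, w, sigma, blk)
% Same computation as rngVect, but H is built blk pixels at a time
W = 2*w + 1;
I = padarray(A, [w, w], 'symmetric');
I = im2col(I, [W, W]);                 % one column per pixel of A
Avec = A(:)';
B = zeros(1, numel(A));
for p = 1:blk:numel(A)
    q = min(p + blk - 1, numel(A));
    Ic = I(:, p:q);
    H = exp(-0.5*(abs(Ic - repmat(Avec(p:q), size(Ic,1), 1))/sigma).^2);
    B(p:q) = sum(H.*Ic, 1) ./ sum(H, 1);
end
B = reshape(B, size(A));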
Otherwise, I don't see any obvious places where you could be losing lots of time. Well, maybe you could skip the reshape by replacing B with A in your function (which saves a little memory as well) and writing:
A(:) = sum(H.*I,1)./sum(H,1);
You may also want to look into upgrading to a more recent version of Matlab - they've worked hard on improving performance.