Initialize vector with function in MATLAB

I just started out with MATLAB and am having some trouble finding a solution for the following task:
I am trying to initialize a vector of 1000 different values using a function that doesn't take any arguments. I can do this with a for loop, but haven't found out how to do it without one.
What I expected to work:
z = zeros(1,1000)
result = arrayfun(functionname, z)
This, however, gives an error saying that the first input must be a function handle.
My function is a simple implementation of a Monte Carlo method to calculate pi:
function Result = mcm()
    clear
    N = 1000;
    M = 0;
    for j = 1:N
        p = [2*rand-1; 2*rand-1];
        if p'*p < 1
            M = M+1;
        end
    end
    Result = 4*M/N

One way to actually vectorize your given function mcm would be -
N = 1000; %// Number of data points
P = [2*rand(1,N)-1; 2*rand(1,N)-1]; %// OR 2*rand(2,N)-1
out = 4*sum(sum(P.^2,1)<1)/N
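If you'd like to keep this as a drop-in replacement for mcm, a minimal sketch (the name mcm_vec is mine, not from the original post) could look like:
function Result = mcm_vec()
N = 1000;                            % number of random points
P = 2*rand(2,N) - 1;                 % points uniform in [-1,1]^2
Result = 4*sum(sum(P.^2,1) < 1)/N;   % fraction inside the unit circle, times 4
end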
Runtime tests
Code -
N = 1000000; %// Number of data points
disp('---------------- With Original Approach')
tic
M = 0;
for j = 1:N
    P = [2*rand-1; 2*rand-1];
    if P'*P < 1
        M = M+1;
    end
end
Result = 4*M/N;
toc
disp('---------------- With Proposed Approach')
tic
P = 2*rand(2,N)-1;
out = 4*sum(sum(P.^2,1)<1)/N;
toc
Timings & Outputs -
---------------- With Original Approach
Elapsed time is 3.952998 seconds.
---------------- With Proposed Approach
Elapsed time is 0.089590 seconds.
>> Result
Result =
3.1422
>> out
out =
3.1428

Since your function takes no arguments, you can't pass it to arrayfun directly; arrayfun applies the function to each element of the input array.
Instead use this:
z = ones(1,1000) * mcm;
A side benefit is that mcm will only run once, so this will be faster than calling the function 1000 times in a loop.
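If the goal really is 1000 independent estimates (rather than 1000 copies of one run), arrayfun can still be coaxed into it by wrapping mcm in an anonymous function that ignores its input. A minimal sketch; it simply calls mcm 1000 times, much like the loop would:
results = arrayfun(@(~) mcm(), 1:1000); % each element is one independent pi estimate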

Related

MATLAB cellfun vectorization slow when using function handle

I encountered a weird bug in cell vectorization (MATLAB version R2019b).
Please consider the following minimal example, say we generate a cell array with variable length vector in each cell:
N = 10000;
rng(1);
result = cell(N,1);
numConnect = randi(10, [N,1]); % randomly generated number of connected nodes
for i = 1:N
    result{i} = randi(N, [1, numConnect(i)]);
end
Now we want to retrospectively retrieve numConnect, i.e., the length of each cell, and we can use cellfun for that. According to the documentation, in backward-compatibility mode you can pass a string as the func argument instead of a function handle. However, there is a drastic difference in performance locally.
tic;
nC1 = cellfun('length', result);
toc;
This one usually produces something like
Elapsed time is 0.038531 seconds.
If I change it to a function handle:
tic;
nC2 = cellfun(@length, result);
toc;
Then
Elapsed time is 1.041925 seconds.
is normal. There is a 30x difference!
I wonder: is this performance difference a bug on my local machine, or a "feature" of MATLAB's cellfun?
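One way to check whether the gap is specific to a given machine is to time both forms with timeit (a sketch, reusing the result cell built above):
t1 = timeit(@() cellfun('length', result));  % backward-compatibility string form
t2 = timeit(@() cellfun(@length, result));   % function-handle form
fprintf('string: %.4f s, handle: %.4f s, ratio: %.1fx\n', t1, t2, t2/t1);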

Quickly Evaluating MANY matlabFunctions

This post builds on my post about quickly evaluating an analytic Jacobian in MATLAB:
fast evaluation of analytical jacobian in MATLAB
The key difference is that now I am working with the Hessian, and I have to evaluate close to 700 matlabFunctions (instead of 1 matlabFunction, as I did for the Jacobian) each time the Hessian is evaluated. So there is an opportunity to do things a little differently.
I have tried to do this in two ways so far, I am thinking about implementing a third, and I was wondering if anyone has any other suggestions. I will go through each method with a toy example, but first some preprocessing to generate these matlabFunctions:
PreProcessing:
% This part of the code is calculated once, it is not the issue
dvs = 5;
X = sym('X',[dvs,1]);
num = dvs - 1; % number of constraints
% multiple functions
for k = 1:num
    f1(X(k+1),X(k)) = (X(k+1)^3 - X(k)^2*k^2);
    c(k) = f1;
end
gradc = jacobian(c,X).'; % .' performs transpose
parfor k = 1:num
    hessc{k} = jacobian(gradc(:,k),X);
end
parfor k = 1:num
    hess_name = strcat('hessian_',num2str(k));
    matlabFunction(hessc{k},'file',hess_name,'vars',X);
end
METHOD #1: Evaluate functions in series
%% Now we use the functions to run an "optimization." Just for an example, the "optimization" is a for loop.
fprintf('This is test A, where the functions are evaluated in series!\n');
tic
for q = 1:10
    x_dv = rand(dvs,1); % these are the design variables
    lambda = rand(num,1); % these are the Lagrange multipliers
    x_dv_cell = num2cell(x_dv); % for passing large design variables
    for k = 1:num
        hess_name = strcat('hessian_',num2str(k));
        function_handle = str2func(hess_name);
        H_temp(:,:,k) = lambda(k)*function_handle(x_dv_cell{:});
    end
    H = sum(H_temp,3);
end
fprintf('The time for test A was:\n')
toc
METHOD #2: Evaluate functions in parallel
%% Try to run a parfor loop
fprintf('This is test B, where the functions are evaluated in parallel!\n');
tic
for q = 1:10
    x_dv = rand(dvs,1); % these are the design variables
    lambda = rand(num,1); % these are the Lagrange multipliers
    x_dv_cell = num2cell(x_dv); % for passing large design variables
    parfor k = 1:num
        hess_name = strcat('hessian_',num2str(k));
        function_handle = str2func(hess_name);
        H_temp(:,:,k) = lambda(k)*function_handle(x_dv_cell{:});
    end
    H = sum(H_temp,3);
end
fprintf('The time for test B was:\n')
toc
RESULTS:
METHOD #1 = 0.008691 seconds
METHOD #2 = 0.464786 seconds
DISCUSSION of RESULTS
This result makes sense because the functions evaluate very quickly, and running them in parallel wastes a lot of time setting up and sending the jobs out to the different MATLAB workers (and then getting the data back from them). I see the same result on my actual problem.
METHOD #3: Evaluating the functions using the GPU
I have not tried this yet, but I am interested to see what the performance difference is. I am not yet familiar with doing this in MATLAB and will add it once I am done.
Any other thoughts? Comments? Thanks!
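One small thing worth trying (my sketch, not part of the original post): resolve the str2func handles once, up front, and reuse them inside the optimization loop, since str2func is currently called num times on every Hessian evaluation:
handles = cell(num,1);
for k = 1:num
    handles{k} = str2func(strcat('hessian_', num2str(k))); % resolve each generated file to a handle once
end
% ... then inside the optimization loop:
for k = 1:num
    H_temp(:,:,k) = lambda(k)*handles{k}(x_dv_cell{:});
end
H = sum(H_temp,3);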

Call multiple functions from cells in MATLAB

I store some functions in a cell array, e.g. f = {@sin, @cos, @(x)x+4}.
Is it possible to call all those functions at the same time (with the same input)? I mean something more efficient than using a loop.
As constructed, the *fun family of functions exists for this purpose (e.g., cellfun is the pertinent one here). There are other questions on the use and performance of these functions.
However, if you construct f as a function that constructs a cell array as
f = @(x) {sin(x), cos(x), x+4};
then you can call the function more naturally: f([1,2,3]) for example.
This method also avoids the need for the ('UniformOutput',false) option pair that cellfun requires for non-scalar outputs.
You can also use regular double arrays, but then you need to be wary of input shape for concatenation purposes: @(x) [sin(x), cos(x), x+4] vs. @(x) [sin(x); cos(x); x+4].
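For example, with a column input the two layouts come out shaped differently:
x = (1:3).';                 % column vector input
A = [sin(x), cos(x), x+4];   % 3x3: one column per function
B = [sin(x); cos(x); x+4];   % 9x1: the three results stacked vertically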
I'm just posting these benchmarking results here to illustrate that loops are not necessarily slower than other approaches:
f = {@sin, @cos, @(x)x+4};
x = 1:100;
tic
for ii = 1:1000
    for jj = 1:numel(f)
        res{jj} = f{jj}(x);
    end
end
toc
tic
for ii = 1:1000
    res = cellfun(@(arg) arg(x), f, 'uni', 0);
end
toc
Elapsed time is 0.042201 seconds.
Elapsed time is 0.179229 seconds.
Troy's answer is almost twice as fast as the loop approach:
tic
for ii = 1:1000
    res = f((1:100).');
end
toc
Elapsed time is 0.025378 seconds.
This might do the trick
functions = {@(arg) sin(arg), @(arg) sqrt(arg)}
x = 5;
cellfun(@(arg) arg(x), functions)
hope this helps.
Adrien.

MATLAB Bootstrap without for loop

Yesterday I implemented my first bootstrap in MATLAB (and yes, I know, for loops are evil):
% data is an m-by-n matrix; the data should be sampled per column, but there can be NaN elements
% from the array (a column of data), n values are sampled nReps times
function result = bootstrap_std(data, n, nReps, quantil)
result = zeros(1,size(data,2));
for i = 1:size(data,2)
    bootstrap_data = zeros(n,nReps);
    values = find(~isnan(data(:,i)));
    if isempty(values)
        bootstrap_data(:,:) = NaN;
    else
        for k = 1:nReps
            bootstrap_data(:,k) = datasample(data(values,i),n);
        end
    end
    stat = zeros(1,nReps);
    for k = 1:nReps
        stat(k) = nanstd(bootstrap_data(:,k));
    end
    sort(stat);
    result(i) = quantile(stat,quantil);
end
end
As one can see, this version works column-wise. The algorithm does what it should but is really slow when the data size increases. My question now is: is it possible to implement this logic without using for loops? My problem is that I could not find a version of datasample that does the sampling column-wise. Or is there a better function to use?
I am happy for any hint or idea on how I can speed up this implementation.
Thanks and best regards!
stephan
The bottlenecks in your implementation are:
1. The function spends a lot of time inside nanstd, which is unnecessary since you exclude NaN values from your sample anyway.
2. There are a lot of functions that operate column-wise, but you spend time looping over the columns and calling them many times.
3. You make many calls to datasample, which is a relatively slow function. It's much faster to create a random array of indices using randi and use that instead.
Here's how I would write the function (actually I probably wouldn't put in this many comments, and I wouldn't use so many temp variables, but I'm doing it now so you can see what all the steps of the computation are).
function result = bootstrap_std_new(data, n, nRep, quantil)
result = zeros(1, size(data,2));
for i = 1:size(data,2)
    isbad = isnan(data(:,i)); %// Logical vector flagging NaN values
    if all(isbad)
        result(i) = NaN;
    else
        data0 = data(~isbad, i); %// Temp copy of this column for indexing
        index = randi(size(data0,1), n, nRep); %// Create the n-by-nRep indexing array
        bootstrapdata = data0(index); %// Sample the data
        stdevs = std(bootstrapdata); %// Stdev of each column of sampled data
        result(i) = quantile(stdevs, quantil); %// Find the correct quantile
    end
end
end
Here are some timings
>> data = randn(100,10);
>> data(randi(1000, 50, 1)) = NaN;
>> tic, bootstrap_std(data, 50, 1000, 0.5); toc
Elapsed time is 1.359529 seconds.
>> tic, bootstrap_std_new(data, 50, 1000, 0.5); toc
Elapsed time is 0.038558 seconds.
So this gives you about a 35x speedup.
Your main issue seems to be that you may have varying numbers/positions of NaN in each column, so you can't work on the full matrix unless you're okay with also sampling NaNs. However, some of the inner loops could be simplified.
for k = 1:nReps
    bootstrap_data(:,k) = datasample(data(values,i),n);
end
Since you're sampling with replacement, you should be able to just do:
bootstrap_data = datasample(data(values,i), n*nReps);
bootstrap_data = reshape(bootstrap_data, [n nReps]);
Also nanstd can work on a full matrix so no need to loop:
stat = nanstd(bootstrap_data); % or nanstd(x,0,2) to change dimension
It would also be worth just looking over your code with profile to see where the bottlenecks are.
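For example (a minimal sketch):
profile on
bootstrap_std(data, 50, 1000, 0.5);
profile viewer   % per-line timings; expect the datasample and nanstd calls to dominate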

Compute speed in a for loop in MATLAB

I have a problem when I compute over a matrix; the problem is about the speed of computation.
I have a binary image matrix (f), and I find connected components with bwlabel in MATLAB: [L num] = bwlabel(f);
After that, based on some property, I found a vector p that includes some values of L that I need to remove. This is my code and explanation:
function [f,L] = clear_nontext(f,L,nontext)
% p is a vector that includes the values we need to remove
p = find(nontext(:)~=0);
% example: p = [1 2 9 10 100 ...], meaning we need to find where the L matrix
% takes the values 1, 2, 9, 10, 100, ... and remove those pixels
[a b] = size(L);
g = zeros(a,b);
for u = 1:length(p)
    for i = 1:a
        for j = 1:b
            if L(i,j) == p(u)
                g(i,j) = 1;
                %L(i,j)=500000;
                f(i,j) = 0;
            end
        end
    end
end
end
When I use this approach, the program runs but it is very slow, because for each value of p we need to check every element of the matrix f (or L) again. So I need another way to run it faster. Could you help me?
Thank you so much
Generally, MATLAB performs matrix operations (or index operations) faster than loops.
You can try the following:
g(ismember(L,p)) = 1;
f(ismember(L,p)) = 0;
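If you prefer, you can also compute the mask once so that ismember only runs one time (a small sketch):
mask = ismember(L, p);  % logical matrix: true where L equals any value in p
g(mask) = 1;
f(mask) = 0;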
EDIT:
I was curious so I ran a little test:
L = round(20*randn(10000,10000));
f = L;
p = 1:5;
[a b] = size(L);
g = zeros(a,b);
tic;
for u = 1:length(p)
    for i = 1:a
        for j = 1:b
            if L(i,j) == p(u)
                g(i,j) = 1;
                f(i,j) = 0;
            end
        end
    end
end
toc
toc
for which I got:
Elapsed time is 38.960842 seconds.
When I tried the following:
tic;
g(ismember(L,p)) = 1;
f(ismember(L,p)) = 0;
toc
I got
Elapsed time is 5.735137 seconds.