Evaluate a changing function in loop - matlab

I am writing code that generates a function f in a loop. This function f changes on every iteration, for example from f = x + 2x to f = 3x^2 + 1 (randomly), and I want to evaluate f at different points on every iteration. I have tried subs, eval, matlabFunction etc., but it is still running slowly. How would you tackle a problem like this in the most efficient way?
This is as fast as I have been able to make it; matlabFunction and subs are both slower than this.
The code below is my solution and shows a single iteration. In my larger code the function f and the point x0 change on every iteration, so you can imagine why I want this to go as fast as possible. I would greatly appreciate it if someone could go through this and give me any pointers. If my coding is crap feel free to tell me :D
x = sym('x',[2,1]);
f = [x(1)-x(1)*cos(x(2)), x(2)-3*x(2)^2*cos(x(1))];
J = jacobian(f,x);
x0 = [2,1];
N=length(x0); % Number of equations
%% Transform into string
fstr = map2mat(char(f));
Jstr = map2mat(char(J));
% replace every occurence of 'xi' with 'x(i)'
Jstr = addPar(Jstr,N);
fstr = addPar(fstr,N);
x = x0;
phi0 = eval(fstr)
J = eval(Jstr)
function str = addPar(str,N)
% pstr = addPar(str,N)
% Transforms every occurence of xi in str into x(i)
% N is the maximum value of i
% replace every occurence of xi with x(i)
% note that we do this backwards to avoid x10 being
% replaced with x(1)0
for i=N:-1:1
is = num2str(i);
xis = ['x' is];
xpis = ['x(' is ')'];
str = strrep(str,xis,xpis);
end
function r = map2mat(r)
% MAP2MAT Maple to MATLAB string conversion.
% Lifted from the symbolic toolbox source code
% MAP2MAT(r) converts the Maple string r containing
% matrix, vector, or array to a valid MATLAB string.
%
% Examples: map2mat(matrix([[a,b], [c,d]]) returns
% [a,b;c,d]
% map2mat(array([[a,b], [c,d]]) returns
% [a,b;c,d]
% map2mat(vector([[a,b,c,d]]) returns
% [a,b,c,d]
% Deblank.
r(findstr(r,' ')) = [];
% Special case of the empty matrix or vector
if strcmp(r,'vector([])') | strcmp(r,'matrix([])') | ...
strcmp(r,'array([])')
r = [];
else
% Remove matrix, vector, or array from the string.
r = strrep(r,'matrix([[','['); r = strrep(r,'array([[','[');
r = strrep(r,'vector([','['); r = strrep(r,'],[',';');
r = strrep(r,']])',']'); r = strrep(r,'])',']');
end

There are several ways to get huge boosts in speed for this sort of problem:
The java GUI front end slows everything down. Go back to version 2010a or earlier. Go back to when it was based on C or fortran. The MATLAB script runs as fast as if you had put it into the MATLAB "compiler".
If you have MatLab compiler (or builder, I forget which) but not the coder, then you can process your code and have it run a few times faster without modifying the code.
Write the generated expression to a file, then call it as a function (a sketch using matlabFunction's file output is given at the end of this answer). I have done this for changing finite-element expressions, i.e. large ugly math that makes $y = 3x^2 +1$ look simple. In that case it gave me a solid speed increase.
Vectorize, vectorize, vectorize. It used to reliably give a 10x to 12x speed increase; pull work out of loops wherever you can. The Java layer, I think, obscures this somewhat by making everything slower.
Have you "profiled" your function to make sure that "eval" or such really is the problem? If you fix "eval" but your bottleneck is elsewhere, you will still have the same problems.
If you have the choice between eval and subs, stick with eval. subs gives you a symbolic solution, not a numeric one.
If there is a clean way to have multiple instances of MatLab running, especially if you have a decently core-rich cpu that MatLab does not fully utilize, then get several of them going. If you are at an educational institution you might try running several different versions (2010a, 2010b, 2009a,...) on the same system. I (fuzzily) recall they didn't collide when I did it. Running more than about 8 started slowing things down more than it improved them. Make sure they aren't colliding on file access if you are using files to share control.
You could write your program in LabVIEW (not MathScript, not MatLab) and because it is a compiled language, there are times that code can run 1000x faster.
You could go all numeric and make it a matrix activity. This depends on your code, but if you can put the random coefficients into the rows of a matrix and multiply that by a matrix of powers $ \left[ 1, x, x^{2}, ...\right] $, it would likely be several hundreds or thousands of times faster than your current level of equation handling and still in MATLAB (see the sketch just below).
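For example, a minimal sketch of that idea (the sizes and variable names here are made up): each random polynomial is stored as a row of coefficients, and all of them are evaluated at all points with a single matrix product, with no symbolic objects in the loop at all.
maxDeg = 4;                        % highest power of x used
nFuncs = 1000;                     % number of random polynomials per iteration
coeffs = rand(nFuncs, maxDeg+1);   % row k holds [c0 c1 ... c4] of the k-th polynomial
xPts   = linspace(-2, 2, 50);      % evaluation points
V      = bsxfun(@power, xPts, (0:maxDeg)');  % powers matrix: row i+1 is xPts.^i
fVals  = coeffs * V;               % fVals(k,j) = f_k(xPts(j)), one matrix multiply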
About your coding:
don't redeclare "x" as a symbol every loop, that is expensive.
what is this "map2mat" then "addPar" stuff?
the string handling functions are horrible for runtime. Stick to one language. The symbolic toolbox IS Maple, and you don't need goofy hand-made parsing to make it work with the rest of MATLAB.
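To make the "write it to a file" point concrete, here is one possible sketch (not necessarily what the answer above had in mind) using matlabFunction's 'File' option; the file name fJfile is arbitrary, and regenerating the file has its own cost, so this pays off mainly when each generated function is evaluated many times.
x = sym('x', [2,1]);
f = [x(1)-x(1)*cos(x(2)), x(2)-3*x(2)^2*cos(x(1))];
J = jacobian(f, x);
matlabFunction(f, J, 'File', 'fJfile', 'Vars', {x});  % generate a plain M-file once
% later, evaluation is an ordinary (JIT-compiled) function call, no eval needed:
x0 = [2; 1];
[phi0, J0] = fJfile(x0);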

Related

Interpolation with pre-lookup

I need to perform many (thousands) of look-up operations, where the break-points in the look-up do not change. A simple example would be,
% create some dummy data
% In practice
% - the grid will be denser, and not as regular
% - each page of v will be different
% - v will have thousands of pages
[x,y] = ndgrid(-5:0.1:5);
n_surfaces = 10;
v = repmat(sin(x.^2 + y.^2) ./ (x.^2 + y.^2),1,1,n_surfaces);
[xq,yq] = ndgrid(-5:0.2:5);
vq = nan([size(xq),n_surfaces]);
for idx = 1:n_surfaces
F = griddedInterpolant(x,y,v(:,:,idx));
vq(:,:,idx) = F(xq,yq);
end
Note that the above code can be sped up slightly by doing,
F = griddedInterpolant(x,y,v(:,:,1));
for idx = 1:n_surfaces
F.Values = v(:,:,idx);
vq(:,:,idx) = F(xq,yq);
end
However, in general interpolation is a two step process,
Determining the index and interval fraction for each new point
Performing the interpolation to obtain the new value
and in the above code both of these steps are being performed during every loop. However Step 1 will be identical in every loop and hence performing it thousands of times is inefficient. I'm wondering if anyone has a workaround to split the two steps and only perform the first step once, and just perform the second step in the loop?
(For those familiar with Simulink, this is equivalent to using the Prelookup in conjunction with multiple Interpolation Using Prelookup blocks.)
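To make the two steps concrete, here is a minimal 1-D sketch of the split I am after (made-up data; the real problem is 2-D, and the queries are assumed to lie inside the breakpoint range): the breakpoint search is done once, and only the weighted sum is repeated per page.
xb = 0:0.1:10;                    % fixed breakpoints
xq = 10*rand(1, 500);             % fixed query points
% Step 1 (once): index and interval fraction for every query point
idx  = discretize(xq, xb);                        % interval index of each xq
frac = (xq - xb(idx)) ./ (xb(idx+1) - xb(idx));   % fractional position in the interval
% Step 2 (per page): only the weighted sum changes
for p = 1:1000
    v  = sin(xb + p/100);                         % stand-in for page p of the data
    vq = (1-frac).*v(idx) + frac.*v(idx+1);       % linear interpolation
end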
Edit:
The question linked in the comment by @rahnema1 (Precompute weights for multidimensional linear interpolation) is pretty much what I am looking for. However, on converting that code to run on the CPU (rather than a GPU), and using double arithmetic, it is about 3 times slower than using the m-code at the start of my question. That timing seems to hold irrespective of the number of surfaces being interpolated (I have tried values from 10 through to 1000.)
The problem is in performing the indexing operation V(I) used in the linked code. Even when the complete operation sum(W.*V(I),2) is implemented in a mex file, the execution times are slower than the above m-code.

why does a*b*a take longer than (a'*(a*b)')' when using gpuArray in Matlab scripts?

The code below performs the same operation on gpuArrays a and b in two different ways. The first part computes (a'*(a*b)')', while the second part computes a*b*a. The results are then verified to be the same.
%function test
clear
rng('default');rng(1);
a=sprand(3000,3000,0.1);
b=rand(3000,3000);
a=gpuArray(a);
b=gpuArray(b);
tic;
c1=gather(transpose(transpose(a)*transpose(a*b)));
disp(['time for (a''*(a*b)'')'': ' , num2str(toc),'s'])
clearvars -except c1
rng('default');
rng(1)
a=sprand(3000,3000,0.1);
b=rand(3000,3000);
a=gpuArray(a);
b=gpuArray(b);
tic;
c2=gather(a*b*a);
disp(['time for a*b*a: ' , num2str(toc),'s'])
disp(['error = ',num2str(max(max(abs(c1-c2))))])
%end
However, computing (a'*(a*b)')' is roughly 4 times faster than computing a*b*a. Here is the output of the above script in R2018a on an Nvidia K20 (I've tried different versions and different GPUs with similar behaviour).
>> test
time for (a'*(a*b)')': 0.43234s
time for a*b*a: 1.7175s
error = 2.0009e-11
Even more strangely, if the first and last lines of the above script are uncommented (to turn it into a function), then both take the longer amount of time (~1.7s instead of ~0.4s). Below is the output for this case:
>> test
time for (a'*(a*b)')': 1.717s
time for a*b*a: 1.7153s
error = 1.0914e-11
I'd like to know what is causing this behaviour, and how to perform a*b*a or (a'*(a*b)')' or both in the shorter amount of time (i.e. ~0.4s rather than ~1.7s) inside a matlab function rather than inside a script.
There seems to be an issue with multiplication of two sparse matrices on the GPU: sparse-by-full multiplication is more than 1000 times faster than sparse-by-sparse. A simple example:
str={'sparse*sparse','sparse*full'};
for ii=1:2
rng(1);
a=sprand(3000,3000,0.1);
b=sprand(3000,3000,0.1);
if ii==2
b=full(b);
end
a=gpuArray(a);
b=gpuArray(b);
tic
c=a*b;
disp(['time for ',str{ii},': ' , num2str(toc),'s'])
end
In your context, it is the last multiplication that does it. To demonstrate, I replace a with a duplicate c and multiply by it twice, once as a sparse and once as a full matrix.
str={'a*b*a','a*b*full(a)'};
for ii=1:2
%rng('default');
rng(1)
a=sprand(3000,3000,0.1);
b=rand(3000,3000);
rng(1)
c=sprand(3000,3000,0.1);
if ii==2
c=full(c);
end
a=gpuArray(a);
b=gpuArray(b);
c=gpuArray(c);
tic;
c1{ii}=a*b*c;
disp(['time for ',str{ii},': ' , num2str(toc),'s'])
end
disp(['error = ',num2str(max(max(abs(c1{1}-c1{2}))))])
I may be wrong, but my conclusion is that a * b * a involves a multiplication of two sparse matrices (a and a again) which is not handled well, while the transpose() approach splits the process into two multiplication stages, neither of which involves two sparse matrices.
I got in touch with Mathworks tech support and Rylan finally shed some light on this issue. (Thanks Rylan!) His full response is below. The function vs script issue appears to be related to certain optimizations matlab applies automatically to functions (but not scripts) not working as expected.
Rylan's response:
Thank you for your patience on this issue. I have consulted with the MATLAB GPU computing developers to understand this better.
This issue is caused by internal optimizations done by MATLAB when encountering some specific operations like matrix-matrix multiplication and transpose. Some of these optimizations may be enabled specifically when executing a MATLAB function (or anonymous function) rather than a script.
When your initial code was being executed from a script, a particular matrix transpose optimization is not performed, which results in the 'res2' expression being faster than the 'res1' expression:
n = 2000;
a=gpuArray(sprand(n,n,0.01));
b=gpuArray(rand(n));
tic;res1=a*b*a;wait(gpuDevice);toc % Elapsed time is 0.884099 seconds.
tic;res2=transpose(transpose(a)*transpose(a*b));wait(gpuDevice);toc % Elapsed time is 0.068855 seconds.
However when the above code is placed in a MATLAB function file, an additional matrix transpose-times optimization is done which causes the 'res2' expression to go through a different code path (and different CUDA library function call) compared to the same line being called from a script. Therefore this optimization generates slower results for the 'res2' line when called from a function file.
To avoid this issue from occurring in a function file, the transpose and multiply operations would need to be split in a manner that stops MATLAB from applying this optimization. Separating each clause within the 'res2' statement seems to be sufficient for this:
tic;i1=transpose(a);i2=transpose(a*b);res3=transpose(i1*i2);wait(gpuDevice);toc % Elapsed time is 0.066446 seconds.
In the above line, 'res3' is being generated from two intermediate matrices: 'i1' and 'i2'. The performance (on my system) seems to be on par with that of the 'res2' expression when executed from a script; in addition the 'res3' expression also shows similar performance when executed from a MATLAB function file. Note however that additional memory may be used to store the transposed copy of the initial array. Please let me know if you see different performance behavior on your system, and I can investigate this further.
Additionally, the 'res3' operation shows faster performance when measured with the 'gputimeit' function too. Please refer to the attached 'testscript2.m' file for more information on this. I have also attached 'test_v2.m' which is a modification of the 'test.m' function in your Stack Overflow post.
Thank you for reporting this issue to me. I would like to apologize for any inconvenience caused by this issue. I have created an internal bug report to notify the MATLAB developers about this behavior. They may provide a fix for this in a future release of MATLAB.
Since you had an additional question about comparing the performance of GPU code using 'gputimeit' vs. using 'tic' and 'toc', I just wanted to provide one suggestion which the MATLAB GPU computing developers had mentioned earlier. It is generally good to also call 'wait(gpuDevice)' before the 'tic' statements to ensure that GPU operations from the previous lines don't overlap in the measurement for the next line. For example, in the following lines:
b=gpuArray(rand(n));
tic; res1=a*b*a; wait(gpuDevice); toc
if the 'wait(gpuDevice)' is not called before the 'tic', some of the time taken to construct the 'b' array from the previous line may overlap and get counted in the time taken to execute the 'res1' expression. This would be preferred instead:
b=gpuArray(rand(n));
wait(gpuDevice); tic; res1=a*b*a; wait(gpuDevice); toc
Apart from this, I am not seeing any specific issues in the way that you are using the 'tic' and 'toc' functions. However note that using 'gputimeit' is generally recommended over using 'tic' and 'toc' directly for GPU-related profiling.
I will go ahead and close this case for now, but please let me know if you have any further questions about this.
%testscript2.m
n = 2000;
a = gpuArray(sprand(n, n, 0.01));
b = gpuArray(rand(n));
gputimeit(@()transpose_mult_fun(a, b))
gputimeit(@()transpose_mult_fun_2(a, b))
function out = transpose_mult_fun(in1, in2)
i1 = transpose(in1);
i2 = transpose(in1*in2);
out = transpose(i1*i2);
end
function out = transpose_mult_fun_2(in1, in2)
out = transpose(transpose(in1)*transpose(in1*in2));
end
%test_v2.m
function test_v2
clear
%% transposed expression
n = 2000;
rng('default');rng(1);
a = sprand(n, n, 0.1);
b = rand(n, n);
a = gpuArray(a);
b = gpuArray(b);
tic;
c1 = gather(transpose( transpose(a) * transpose(a * b) ));
disp(['time for (a''*(a*b)'')'': ' , num2str(toc),'s'])
clearvars -except c1
%% non-transposed expression
rng('default');
rng(1)
n = 2000;
a = sprand(n, n, 0.1);
b = rand(n, n);
a = gpuArray(a);
b = gpuArray(b);
tic;
c2 = gather(a * b * a);
disp(['time for a*b*a: ' , num2str(toc),'s'])
disp(['error = ',num2str(max(max(abs(c1-c2))))])
%% sliced equivalent
rng('default');
rng(1)
n = 2000;
a = sprand(n, n, 0.1);
b = rand(n, n);
a = gpuArray(a);
b = gpuArray(b);
tic;
intermediate1 = transpose(a);
intermediate2 = transpose(a * b);
c3 = gather(transpose( intermediate1 * intermediate2 ));
disp(['time for split equivalent: ' , num2str(toc),'s'])
disp(['error = ',num2str(max(max(abs(c1-c3))))])
end
EDIT 2 I might have been right, see this other answer
EDIT: They use MAGMA, which is column major. My answer does not hold, however I will leave it here for a while in case it can help crack this strange behavior.
The below answer is wrong
This is my guess, I can not 100% tell you without knowing the code under MATLAB's hood.
Hypothesis: MATLAB's parallel computing code uses CUDA libraries, not their own.
Important information
MATLAB is column major and CUDA is row major.
There is no such thing as a 2D matrix in memory, only 1D arrays with 2 indices
Why does this matter? Well because CUDA is highly optimized code that uses memory structure to maximize cache hits per kernel (the slowest operation on GPUs is reading memory). This means a standard CUDA matrix multiplication code will exploit the order of memory reads to make sure they are adjacent. However, what is adjacent memory in row-major is not in column-major.
So, there are 2 solutions to this as someone writing software
Write your own column-major algebra libraries in CUDA
Take every input/output from MATLAB and transpose it (i.e. convert from column-major to row major)
They have done point 2, and assuming that there is a smart JIT compiler for MATLAB parallel processing toolbox (reasonable assumption), for the second case, it takes a and b, transposes them, does the maths, and transposes the output when you gather.
In the first case, however, you already do not need to transpose the output, as it is internally already transposed and the JIT catches this, so instead of calling gather(transpose( XX )) it just skips the output transposition inside. The same goes for transpose(a*b). Note that transpose(a*b)=transpose(b)*transpose(a), so suddenly no transposes are needed (they are all skipped internally). A transposition is a costly operation.
Indeed there is a weird thing here: making the code a function suddenly makes it slow. My best guess is that because the JIT behaves differently in different situations, it doesn't catch all this transpose stuff inside and just does all the operations anyway, losing the speed up.
Interesting observation: on my PC it takes the same time on the CPU as on the GPU to do a*b*a.

Is it possible to run two functions using a parfor loop?

Suppose I have two functions written in different files, say function1.m and function2.m. The computations in the two functions are independent (some inputs may be the same, say function1(x,y) and function2(x,z) for example). However, running them sequentially, say ret1 = function1(x,y); ret2 = function2(x,z);, may be time consuming. I wonder if it is possible to run them in a parfor loop:
parfor i = 1:2
ret(i) = run(['function' num2str(i)]); % if i=1,ret(1)=function1 and i=2, ret(2)=function2
end
Is it possible to write it in parfor loop?
Your idea is correct, but the implementation is wrong.
Matlab won't let you use run within parfor as it can't make sure it's a valid way to use parfor (i.e. no dependencies between iterations). The proper way to do that is to use functions (and not scripts) and an if statement to choose between them:
ret = zeros(2,1);
parfor k = 1:2
if k==1, ret(k) = f1(x,y); end
if k==2, ret(k) = f2(x,z); end
end
Here f1 and f2 are functions that return a scalar value (so the result fits in ret(k)), and each iteration of the loop calls a different if statement.
You can read here more about how to convert scripts to functions.
The rule of thumb for a parfor loop is that each iteration must be standalone. More accurately,
The body of the parfor-loop must be independent. One loop iteration cannot depend on a previous iteration, because the iterations are executed in a nondeterministic order.
That means that every iteration must be one which can be performed on its own and produce the correct result.
Therefore, if you have code that says, for instance,
parfor (i = 1:2)
function1(i,someNumber);
function2(i,someNumber);
end
there should be no issue with applying parfor.
However, if you have code that says, for instance,
persistentValue = 0;
parfor (i = 1:2)
persistentValue = persistentValue + function1(i,someNumber);
function2(i,persistentValue);
end
it would not be usable.
Yes. It is possible.
Here's an example:
ret = zeros(2,1);
fHandles = {@min, @max};
x = 1:10;
parfor i=1:2
ret(i) = fHandles{i}(x);
end
ret % show the results.
Whether this is a good idea or not, I don't know. There is overhead to setting up the parallel processing that may or may not make it worthwhile for you.
Typically, the more iterations you have to compute, the more value you get from setting up a parfor loop, as the iterations are sliced up and sent non-deterministically to the separate cores for processing. So you're only getting use of 2 cores right now, but if you have many functions this may improve things.
The order in which the iterations are run is not guaranteed (it could be that one core gets assigned a range of values for i, but we do not know whether those values are taken in order or randomly), so your code can't depend on other iterations of the loop.
In general, the MATLAB editor is pretty good at flagging these issues ahead of time.
EDIT
Here's a proof of concept for a variable number of arguments to your different functions
ret = cell(2,1);
fHandles = {@min, @max};
x = 1:10; % x is a 1x10 vector
y = rand(20); % y is a 20x20 matrix
z = 1; % z is a scalar value
fArgs = {{x};
{y,z}}; %wrap each function's arguments up in a cell
parfor i=1:2
ret{i} = fHandles{i}(fArgs{i}{:}); %expands each cell into a comma-separated argument list
end
ret % show the output
Again, this is just proof-of-concept. There are big warnings showing up in MATLAB about having to broadcast fArgs across all of the cores.

Create vector-valued function with arbitrary components

Good evening everyone,
I want to create a function
f(x) = [f1(x), f2(x), ... , fn(x)]
in MatLab, with an arbitrary form and number for the fi. In my current case they are meant to be basis elements for a finite-dimensional function space, so for example a number of multivariable polynomials. I want to be able to set the form (e.g. Hermite/Lagrange polynomials, ...) and the number via arguments in some sort of "function creating" function, so I would like to solve this for arbitrary functions fi.
Assume for now that the fi are fi:R^d -> R, so vector input to scalar output. This means the result from f should be an n-dimensional vector containing the output of all n functions. The number of functions n could be fairly large, as there is permutation involved. I also need to evaluate the resulting function very often, so I hope to do it as efficiently as possible.
Currently I see two ways to do this:
Create a cell with each fi using a loop, using something like
funcell{i}=matlabFunction(createpoly(degree, x),'vars',{x})
and one of the functions from the symbolic toolbox and a symbolic x (vector). It is then possible to create the desired function with cellfun, e.g.
f=@(x) cellfun(@(v) v(x), funcell)
This is relatively short, easy and what can be found when doing searches. It even allows extension to vector output using 'UniformOutput',false and cell2mat. On the downside it is very inefficient, first during creation because of matlabFunction and then during evaluation because of cellfun.
The other idea I had is to create a string and use eval. One way to do this would be
stringcell{i}=[char(createpoly(degree, x)),';']
and then use strjoin. In theory this should yield an efficient function. There are two problems however. The first is the use of eval (mostly on principle), the second is inserting the correct arguments. The symbolic toolbox does not allow symbols of the form x(i), so the resulting string will not contain them either. The only remedy I have so far is some sort of string replacement on the xi that are allowed, but this is also far from elegant.
So I do have ways to do what I need right now, but I would appreciate any ideas for a better solution.
From my understanding of the problem, you could do the straightforward:
Initialization step:
my_fns = cell(n, 1); %where n is number of functions
my_fns{1} = @f1; % Assuming f1 is defined in f1.m etc...
my_fns{2} = @f2;
Evaluation at x:
z = zeros(n, 1);
for i=1:n,
z(i) = my_fns{i}(x)
end
For example if you put it in my_evaluate.m:
function z = my_evaluate(my_fns, x)
n = numel(my_fns);
z = zeros(n, 1);
for i=1:n,
z(i) = my_fns{i}(x);
end
How might this possibly be sped up?
That depends on whether you have special structure that can be exploited.
Are there calculations common to some subset of f1 through fn that need not be repeated with each function call? E.g. if the common calculation step is costly, you could do y = f_helper(x) and z(i) = fi(x, y) (see the sketch at the end of this answer).
Can the functions f1...fn be vector / matrix friendly, allowing evaluation of multiple points with each function call?
The big issue is how fast your function calls f1 through fn are, not how you collect the results from those calls in a vector.
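As a sketch of the shared-computation idea (f_helper, f1 and f2 are hypothetical names, just to show the shape of it):
f_helper = @(x) x.'*x;            % expensive quantity shared by all components, computed once
f1 = @(x, y) y + sum(x);          % each fi reuses the precomputed y
f2 = @(x, y) 3*y^2 + 1;
fns = {f1, f2};
x = rand(5, 1);
y = f_helper(x);                  % the shared step
z = zeros(numel(fns), 1);
for i = 1:numel(fns)
    z(i) = fns{i}(x, y);
end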

What is your favourite MATLAB/Octave programming trick? [closed]

I think everyone would agree that the MATLAB language is not pretty, or particularly consistent. But nevermind! We still have to use it to get things done.
What are your favourite tricks for making things easier? Let's have one per answer so people can vote them up if they agree. Also, try to illustrate your answer with an example.
Using the built-in profiler to see where the hot parts of my code are:
profile on
% some lines of code
profile off
profile viewer
or just using the built in tic and toc to get quick timings:
tic;
% some lines of code
toc;
Directly extracting the elements of a matrix that satisfy a particular condition, using logical arrays:
x = rand(1,50) .* 100;
xpart = x( x > 20 & x < 35);
Now xpart contains only those elements of x which lie in the specified range.
Provide quick access to other function documentation by adding a "SEE ALSO" line to the help comments. First, you must include the name of the function in all caps as the first comment line. Do your usual comment header stuff, then put SEE ALSO with a comma separated list of other related functions.
function y = transmog(x)
%TRANSMOG Transmogrifies a matrix X using reverse orthogonal eigenvectors
%
% Usage:
% y = transmog(x)
%
% SEE ALSO
% UNTRANSMOG, TRANSMOG2
When you type "help transmog" at the command line, you will see all the comments in this comment header, with hyperlinks to the comment headers for the other functions listed.
Turn a matrix into a vector using a single colon.
x = rand(4,4);
x(:)
Vectorizing loops. There are lots of ways to do this, and it is entertaining to look for loops in your code and see how they can be vectorized. The performance is astonishingly faster with vector operations!
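A small before/after sketch (the computation itself is made up):
x = rand(1, 1e6);
% loop version
y = zeros(size(x));
for k = 1:numel(x)
    y(k) = 3*x(k)^2 + 2*x(k) + 7;
end
% vectorized version: one line, no loop, and usually much faster
y = 3*x.^2 + 2*x + 7;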
Anonymous functions, for a few reasons:
to make a quick function for one-off uses, like 3x^2+2x+7. (see listing below) This is useful for functions like quad and fminbnd that take functions as arguments. It's also convenient in scripts (.m files that don't start with a function header) since unlike true functions you can't include subfunctions.
for closures -- although anonymous functions are a little limiting as there doesn't seem to be a way to have assignment within them to mutate state.
% quick functions
f = @(x) 3*x.^2 + 2*x + 7;
t = (0:0.001:1);
plot(t,f(t),t,f(2*t),t,f(3*t));
% closures (linfunc below is a function that returns a function,
% and the outer function's arguments are held for the lifetime
% of the returned function.)
linfunc = @(m,b) @(x) m*x+b;
C2F = linfunc(9/5, 32);
F2C = linfunc(5/9, -32*5/9);
Matlab's bsxfun, arrayfun, cellfun, and structfun are quite interesting and often save a loop.
M = rand(1000, 1000);
v = rand(1000, 1);
c = bsxfun(@plus, M, v);
This code, for instance, adds column-vector v to each column of matrix M.
Though, in performance critical parts of your application you should benchmark these functions versus the trivial for-loop because often loops are still faster.
LaTeX mode for formulas in graphs: In one of the recent releases (R2006?) you add the additional arguments ,'Interpreter','latex' at the end of a function call and it will use LaTeX rendering. Here's an example:
t=(0:0.001:1);
plot(t,sin(2*pi*[t ; t+0.25]));
xlabel('t');
ylabel('$\hat{y}_k=sin 2\pi (t+{k \over 4})$','Interpreter','latex');
legend({'$\hat{y}_0$','$\hat{y}_1$'},'Interpreter','latex');
Not sure when they added it, but it works with R2006b in the text(), title(), xlabel(), ylabel(), zlabel(), and even legend() functions. Just make sure the syntax you are using is not ambiguous (so with legend() you need to specify the strings as a cell array).
Using xlim and ylim to draw vertical and horizontal lines. Examples:
Draw a horizontal line at y=10:
line(xlim, [10 10])
Draw vertical line at x=5:
line([5 5], ylim)
I find the comma separated list syntax quite useful for building function calls. Here's a quick example:
% Build a list of args, like so:
args = {'a', 1, 'b', 2};
% Then expand this into arguments:
output = func(args{:})
Here's a bunch of nonobvious functions that are useful from time to time:
mfilename (returns the name of the currently running MATLAB script)
dbstack (gives you access to the names & line numbers of the matlab function stack)
keyboard (stops execution and yields control to the debugging prompt; this is why there's a K in the debug prompt K>>)
dbstop if error (automatically puts you in debug mode, stopped at the line that triggers an error)
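A minimal sketch of how these fit together in a debugging session:
dbstop if error     % break into the debugger wherever an uncaught error is raised
% ... run the failing code; then at the K>> prompt:
%     dbstack       % show the function names and line numbers on the stack
%     dbup, dbdown  % move between stack frames
%     dbquit        % leave the debugger
dbclear if error    % remove the automatic breakpoint when you are done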
I like using function handles for lots of reasons. For one, they are the closest thing I've found in MATLAB to pointers, so you can create reference-like behavior for objects. There are a few neat (and simpler) things you can do with them, too. For example, replacing a switch statement:
switch number,
case 1,
outargs = fcn1(inargs);
case 2,
outargs = fcn2(inargs);
...
end
%
%can be turned into
%
fcnArray = {@fcn1, @fcn2, ...};
outargs = fcnArray{number}(inargs);
I just think little things like that are cool.
Using nargin to set default values for optional arguments and using nargout to set optional output arguments. Quick example
function hLine=myplot(x,y,plotColor,markerType)
% set defaults for optional parameters
if nargin<4, markerType='none'; end
if nargin<3, plotColor='k'; end
hL = plot(x,y,'LineStyle','-', ...
'color',plotColor, ...
'marker',markerType, ...
'markerFaceColor',plotColor,'markerEdgeColor',plotColor);
% return handle of plot object if required
if nargout>0, hLine = hL; end
Invoking Java code from Matlab
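For example (a small sketch; any class on the static Java class path works the same way):
list = java.util.ArrayList;    % construct a Java object directly
list.add('alpha');             % call its methods like ordinary MATLAB functions
list.add('beta');
n = list.size();               % n is 2
s = char(list.get(0));         % convert the returned java.lang.String back to char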
cellfun and arrayfun for automated for loops.
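For example (a quick sketch):
squares = arrayfun(@(k) k^2, 1:5);                             % [1 4 9 16 25]
lens    = cellfun(@numel, {'a','bb','ccc'});                   % [1 2 3]
caps    = cellfun(@upper, {'a','bb'}, 'UniformOutput', false); % {'A','BB'}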
Oh, and reverse an array
v = 1:10;
v_reverse = v(length(v):-1:1);
Logical (conditional) indexing on the left-hand side of an assignment:
t = (0:0.005:10)';
x = sin(2*pi*t);
x(x>0.5 & t<5) = 0.5;
% This limits all values of x to a maximum of 0.5, where t<5
plot(t,x);
Know your axis properties! There are all sorts of things you can set to tweak the default plotting properties to do what you want:
set(gca,'fontsize',8,'linestyleorder','-','linewidth',0.3,'xtick',1:2:9);
(as an example, sets the fontsize to 8pt, linestyles of all new lines to all be solid and their width 0.3pt, and the xtick points to be [1 3 5 7 9])
Line and figure properties are also useful, but I find myself using axis properties the most.
Be strict with specifying dimensions when using aggregation functions like min, max, mean, diff, sum, any, all,...
For instance the line:
reldiff = diff(a) ./ a(1:end-1)
might work well to compute relative differences of elements in a vector; however, if the vector degenerates to just one element, the computation fails:
>> a=rand(1,7);
>> diff(a) ./ a(1:end-1)
ans =
-0.5822 -0.9935 224.2015 0.2708 -0.3328 0.0458
>> a=1;
>> diff(a) ./ a(1:end-1)
??? Error using ==> rdivide
Matrix dimensions must agree.
If you specify the correct dimensions to your functions, this line returns an empty 1-by-0 matrix, which is correct:
>> diff(a, [], 2) ./ a(1, 1:end-1)
ans =
Empty matrix: 1-by-0
>>
The same goes for min, which usually computes minimums over the columns of a matrix, until the matrix consists of only one row. Then it will return the minimum over the row unless the dimension parameter states otherwise, and probably break your application.
I can almost guarantee you that consistently setting the dimensions of these aggregation functions will save you quite some debugging work later on.
At least that would have been the case for me. :)
The colon operator for the manipulation of arrays.
@ScottieT812 mentions one, flattening an array, but there are all the other variants of selecting bits of an array:
x=rand(10,10);
flattened=x(:);
Acolumn=x(:,10);
Arow=x(10,:);
y=rand(100);
firstSix=y(1:6);
lastSix=y(end-5:end);
alternate=y(1:2:end);
In order to be able to quickly test a function, I use nargin like so:
function result = multiply(a, b)
if nargin == 0 %no inputs provided, run using defaults for a and b
clc;
disp('RUNNING IN TEST MODE')
a = 1;
b = 2;
end
result = a*b;
Later on, I add a unit test script to test the function for different input conditions.
Using ismember() to merge data organized by text identifiers. Useful when you are analyzing differing periods when entries, in my case company symbols, come and go.
%Merge B into A based on Text identifiers
UniverseA = {'A','B','C','D'};
UniverseB = {'A','C','D'};
DataA = [20 40 60 80];
DataB = [30 50 70];
MergeData = NaN(length(UniverseA),2);
MergeData(:,1) = DataA;
[tf, loc] = ismember(UniverseA, UniverseB);
MergeData(tf,2) = DataB(loc(tf));
MergeData =
20 30
40 NaN
60 50
80 70
Asking 'why' (useful for jarring me out of a Matlab runtime-fail debugging trance at 3am...)
Executing a Simulink model directly from a script (rather than interactively) using the sim command. You can do things like take parameters from a workspace variable, and repeatedly run sim in a loop to simulate something while varying the parameter to see how the behavior changes, and graph the results with whatever graphical commands you like. Much easier than trying to do this interactively, and it gives you much more flexibility than the Simulink "oscilloscope" blocks when visualizing the results. (although you can't use it to see what's going on in realtime while the simulation is running)
A really important thing to know is the DstWorkspace and SrcWorkspace options of the simset command. These control where the "To Workspace" and "From Workspace" blocks get and put their results. Dstworkspace defaults to the current workspace (e.g. if you call sim from inside a function the "To Workspace" blocks will show up as variables accessible from within that same function) but SrcWorkspace defaults to the base workspace and if you want to encapsulate your call to sim you'll want to set SrcWorkspace to current so there is a clean interface to providing/retrieving simulation input parameters and outputs. For example:
function Y=run_my_sim(t,input1,params)
% runs "my_sim.mdl"
% with a From Workspace block referencing I1 as an input signal
% and parameters referenced as fields of the "params" structure
% and output retrieved from a To Workspace block with name O1.
opt = simset('SrcWorkspace','current','DstWorkspace','current');
I1 = struct('time',t,'signals',struct('values',input1,'dimensions',1));
Y = struct;
Y.t = sim('my_sim',t,opt);
Y.output1 = O1.signals.values;
Contour plots with [c,h]=contour and clabel(c,h,'fontsize',fontsize). I usually use the fontsize parameter to reduce the font size so the numbers don't run into each other. This is great for viewing the value of 2-D functions without having to muck around with 3D graphs.
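For example (a quick sketch using the built-in peaks demo data):
[c, h] = contour(peaks(50));
clabel(c, h, 'FontSize', 7);   % smaller labels keep the contour values readable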
Vectorization:
function iNeedle = findClosest(hay,needle)
%FINDCLOSEST find the indices of the closest elements in an array.
% Given two vectors [A,B], findClosest will find the indices of the values
% in vector A closest to the values in vector B.
[hay iOrgHay] = sort(hay(:)'); %#ok must have row vector
% Use histogram to find indices of elements in hay closest to elements in
% needle. The bins are centered on values in hay, with the edges on the
% midpoint between elements.
[iNeedle iNeedle] = histc(needle,[-inf hay+[diff(hay)/2 inf]]); %#ok
% Reversing the sorting.
iNeedle = iOrgHay(iNeedle);
Using persistent (static) variables when running an online algorithm. It may speed up the code in areas like Bayesian machine learning where the model is trained iteratively for the new samples. For example, for computing the independent loglikelihoods, I compute the loglikelihood initially from scratch and update it by summing this previously computed loglikelihood and the additional loglikelihood.
Instead of giving a more specialized machine learning problem, let me give a general online averaging code which I took from here:
function av = runningAverage(x)
% The number of values entered so far - declared persistent.
persistent n;
% The sum of values entered so far - declared persistent.
persistent sumOfX;
if ischar(x) && strcmp(x, 'reset') % Initialise the persistent variables.
n = 0;
sumOfX = 0;
av = 0;
else % A data value has been added.
n = n + 1;
sumOfX = sumOfX + x;
av = sumOfX / n; % Update the running average.
end
Then, the calls will give the following results
runningAverage('reset')
ans = 0
>> runningAverage(5)
ans = 5
>> runningAverage(10)
ans = 7.5000
>> runningAverage(3)
ans = 6
>> runningAverage('reset')
ans = 0
>> runningAverage(8)
ans = 8
I'm surprised that while people mentioned the logical array approach of indexing an array, nobody mentioned the find command.
e.g. if x is an NxMxO array
x(x>20) works by generating an NxMxO logical array and using it to index x (which can be bad if you have large arrays and are looking for a small subset).
x(find(x>20)) works by generating a list (i.e. 1xwhatever) of indices of x that satisfy x>20, and indexing x by it. "find" should be used more than it is, in my experience.
More what I would call 'tricks'
you can grow/append to arrays and cell arrays if you don't know the size you'll need, by using end + 1 (works with higher dimensions too, so long as the dimensions of the slice match -- so you'll have to initialize x to something other than [] in that case). Not good for numerics but for small dynamic lists of things (or cell arrays), e.g. parsing files.
e.g.
>> x=[1,2,3]
x = 1 2 3
>> x(end+1)=4
x = 1 2 3 4
Another thing many people don't know is that for works on any row (dim 1) array, so to continue the example
>> for n = x;disp(n);end
1
2
3
4
Which means if all you need is the members of x you don't need to index them.
This also works with cell arrays but it's a bit annoying because as it walks them the element is still wrapped in a cell:
>> for el = {1,2,3,4};disp(el);end
[1]
[2]
[3]
[4]
So to get at the elements you have to subscript them
>> for el = {1,2,3,4};disp(el{1});end
1
2
3
4
I can't remember if there is a nicer way around that.
-You can make a Matlab shortcut to an initialization file called startup.m. Here, I define formatting, precision of the output, and plot parameters for my Matlab session (for example, I use a larger plot axis/font size so that .fig's can be seen plainly when I put them in presentations.) See a good blog post from one of the developers about it http://blogs.mathworks.com/loren/2009/03/03/whats-in-your-startupm/ .
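For example, a startup.m might contain something like the following (the particular settings are just personal preference):
format compact                          % tighter command-window output
format long g                           % more digits without forced exponents
set(0, 'DefaultAxesFontSize', 14);      % larger fonts so figures read well in slides
set(0, 'DefaultLineLineWidth', 1.5);    % thicker plot lines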
-You can load an entire numerical ascii file using the "load" function. This isn't particularly fast, but gets the job done quickly for prototyping (shouldn't that be the Matlab motto?)
-As mentioned, the colon operator and vectorization are lifesavers. Screw loops.
x=repmat([1:10],3,1); % say, x is an example array of data
l=x>=3; % l is a logical array (1s/0s) highlighting the elements of x that meet a certain condition.
N=sum(sum(l));% N is the number of elements that meet that given condition.
cheers -- happy scripting!