I need to create a queue in matlab that holds structs which are very large. I don't know how large this queue will get. Matlab doesn't have linked lists, and I'm worried that repeated allocation and copying is really going to slow down this code which must be run thousands of times. I need some sort of way to use a growable data structure. I've found a couple of entries for linked lists in the matlab help but I can't understand what's going on. Can someone help me with this problem?
I posted a solution a while back to a similar problem. The way I did it was to allocate the array with an initial size of BLOCK_SIZE, and then keep growing it by BLOCK_SIZE as needed (whenever fewer than 10%*BLOCK_SIZE free slots remain).
Note that with an adequate block size, performance is comparable to pre-allocating the entire array from the beginning. Please see the other post for a simple benchmark I did.
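A minimal sketch of that strategy, adapted to a queue of structs (the field name, block size, and data source below are placeholders for illustration, not taken from the original post):
BLOCK_SIZE = 1000;                           % tune to your workload
queue(BLOCK_SIZE) = struct('payload', []);   % initial allocation of one block
count = 0;
for k = 1:25000                              % stand-in for "while items keep arriving"
    count = count + 1;
    queue(count).payload = rand(100);        % enqueue one large struct
    % grow by another block when fewer than 10%*BLOCK_SIZE free slots remain
    if numel(queue) - count < 0.1 * BLOCK_SIZE
        queue(numel(queue) + BLOCK_SIZE).payload = [];   % extends the array in one step
    end
end
queue = queue(1:count);                      % trim the unused tail when done
The growth itself still copies the existing elements, but it happens only once per block instead of once per insertion, so the cost is amortized.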
Just create an array of structs and double the size of the array when it hits the limit. This scales well.
If you're worried that repeated allocation and copying is going to slow the code down, try it. It may in fact be very slow, but you may be pleasantly surprised.
Beware of premature optimization.
Well, I found the easy answer:
L = java.util.LinkedList;
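For reference, a quick sketch of how that Java list can be used as a FIFO queue from MATLAB. Note this is only a sketch: values are converted to Java types on insertion, so numeric arrays and strings work directly, but a plain MATLAB struct may not convert and might need to be serialized or wrapped first.
L = java.util.LinkedList;
L.addLast(magic(20));        % enqueue at the tail
L.addLast(rand(50));
while L.size() > 0
    item = L.removeFirst();  % dequeue from the head
    % depending on what was stored, item may come back as a Java array;
    % double(item) converts it to a MATLAB matrix if needed
end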
I think the built-in cell structure would be suitable for storing growable structures.
I made a comparison among:
Dynamic size cell, size of the cell changes every loop
Pre-allocated cell
Java LinkedList
Code:
clear;
scale = 1000;
% dynamic size cell
tic;
dynamic_cell = cell(0);
for ii = 1:scale
dynamic_cell{end + 1} = magic(20);
end
toc
% preallocated cell
tic;
fixed_cell = cell(1, scale);
for ii = 1:scale
fixed_cell{ii} = magic(20);
end
toc
% java linked list
tic;
linked_list = java.util.LinkedList;
for ii = 1:scale
linked_list.add(magic(20));
end
toc;
Results:
Elapsed time is 0.102684 seconds. % dynamic
Elapsed time is 0.091507 seconds. % pre-allocated
Elapsed time is 0.189757 seconds. % Java LinkedList
I changed scale and magic(20) and found that the dynamic and pre-allocated versions are very close in speed. Perhaps a cell array only stores pointer-like references to its contents, which makes resizing relatively cheap.
The Java approach is slower, and I found it occasionally unstable (it crashed my MATLAB when the scale was very large).
I have noticed many individual questions on SO but no single good guide to MATLAB optimization.
Common Questions:
Optimize this code for me
How do I vectorize this?
I don't think that these questions will stop, but I'm hoping that the ideas presented here will give them something centralized to refer to.
Optimizing Matlab code is kind of a black art; there is always a better way to do it. And sometimes it is straight-up impossible to vectorize your code.
So my question is: when vectorization is impossible or extremely complicated, what are some of your tips and tricks to optimize MATLAB code? Also if you have any common vectorization tricks I wouldn't mind seeing them either.
Preface
All of these tests are performed on a machine that is shared with others, so it is not a perfectly clean environment. Between each test I clear the workspace to free up memory.
Please don't pay attention to the individual numbers, just look at the differences between the before and after optimisation times.
Note: The tic and toc calls I have placed in the code are to show where I am measuring the time taken.
Pre-allocation
The simple act of pre-allocating arrays in Matlab can give a huge speed advantage.
tic;
for i = 1:100000
my_array(i) = 5 * i;
end
toc;
This takes 47 seconds
tic;
length = 100000;
my_array = zeros(1, length);
for i = 1:length
my_array(i) = 5 * i;
end
toc;
This takes 0.1018 seconds
47 seconds to 0.1 seconds for a single added line of code is an amazing improvement. Obviously in this simple example you could vectorize it to my_array = 5 * (1:100000) (which took 0.000423 seconds), but I am trying to represent the more complicated cases where vectorization isn't an option.
I recently found that the zeros function (and others of the same nature) is not as fast at pre-allocating as simply setting the last value to 0:
tic;
length = 100000;
my_array(length) = 0;
for i = 1:length
my_array(i) = 5 * i;
end
toc;
This takes 0.0991 seconds
Now obviously this tiny difference doesn't prove much, but you'll have to believe me: over a large file with many of these optimisations the difference becomes a lot more apparent.
Why does this work?
The pre-allocation methods reserve a single chunk of memory for you to work with. This memory is contiguous and can be pre-fetched, just like an array in C++ or Java. If you do not pre-allocate, then every time the array outgrows the memory currently reserved for it, MATLAB has to find a larger block somewhere else and copy all of the existing elements into it.
All of that repeated reallocation and copying is what makes writing into a growing array so slow (47 seconds!). If you absolutely CAN'T pre-allocate, it can still be worth copying the fully grown matrix into a new pre-allocated one before you start working with it heavily.
What if I don't know how much space to allocate?
This is a common question and there are a few different solutions:
Overestimation - It is better to grossly overestimate the size of your matrix and allocate too much space than it is to under-allocate (see the small sketch after this list).
Deal with it and fix later - I have seen this a lot where the developer has put up with the slow population time, and then copied the matrix into a new pre-allocated space. Usually this is saved as a .mat file or similar so that it could be read quickly at a later date.
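As a small illustration of the overestimation approach (the sizes and the loop below are made up for illustration): allocate more than you expect to need, count how much you actually use, and trim at the end.
max_expected = 500000;               % deliberately generous guess
my_array = zeros(1, max_expected);
used = 0;
for i = 1:100000                     % the real loop might stop earlier or run longer
    used = used + 1;
    my_array(used) = 5 * i;
end
my_array = my_array(1:used);         % discard the unused tail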
How do I pre-allocate a complicated structure?
Pre-allocating space for simple data-types is easy, as we have already seen, but what if it is a very complex data type such as a struct of structs?
I could never work out to explicitly pre-allocate these (I am hoping someone can suggest a better method) so I came up with this simple hack:
tic;
length = 100000;
% Grow the struct array one element at a time
for i = 1:length
    complicated_structure(i) = read_from_file(i);
end
toc;
This takes 1.5 minutes
tic;
length = 100000;
% Reverse the for-loop to start from the last element;
% the first assignment then allocates the whole array at once
for i = length:-1:1
    complicated_structure(i) = read_from_file(i);
end
toc;
This takes 6 seconds
This is obviously not perfect pre-allocation, but the time improvements speak for themselves. I'm hoping someone has a better way to do this, but this is a pretty good hack in the meantime.
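For explicitly pre-allocating a struct array, one option is to replicate a template struct (a sketch, assuming the field names are known up front; the names below are invented, and the struct returned by read_from_file must have exactly these fields for the assignment to work):
n = 100000;
template = struct('data', [], 'label', '');      % placeholder field names
complicated_structure = repmat(template, 1, n);  % allocate all n elements at once
for i = 1:n
    complicated_structure(i) = read_from_file(i);
end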
Data Structures
In terms of memory usage, an Array of Structs is considerably worse than a Struct of Arrays:
% Array of Structs
a(1).a = 1;
a(1).b = 2;
a(2).a = 3;
a(2).b = 4;
Uses 624 Bytes
% Struct of Arrays
a.a(1) = 1;
a.b(1) = 2;
a.a(2) = 3;
a.b(2) = 4;
Uses 384 Bytes
As you can see, even in this simple/small example the Array of Structs uses a lot more memory than the Struct of Arrays. Also the Struct of Arrays is in a more useful format if you want to plot the data.
Each struct has a large header, and as you can see an array of structs repeats this header multiple times, whereas the struct of arrays has only the one header and therefore uses less space. This difference becomes more pronounced with larger arrays.
File Reads
The fewer freads (or any system calls, for that matter) you have in your code, the better.
tic;
for i = 1:100
fread(fid, 1, '*int32');
end
toc;
The previous code is a lot slower than the following:
tic;
fread(fid, 100, '*int32');
toc;
You might think that's obvious, but the same principle can be applied to more complicated cases:
tic;
for i = 1:100
val1(i) = fread(fid, 1, '*float32');
val2(i) = fread(fid, 1, '*float32');
end
toc;
This problem is no longer as simple, because in the file the values are interleaved like this:
val1 val2 val1 val2 etc.
However you can use the skip value of fread to achieve the same optimizations as before:
tic;
% Get the current position in the file
initial_position = ftell(fid);
% Read 100 float32 values, and skip 4 bytes after each one
val1 = fread(fid, 100, '*float32', 4);
% Set the file position back to the start (plus the size of the initial float32)
fseek(fid, initial_position + 4, 'bof');
% Read 100 float32 values, and skip 4 bytes after each one
val2 = fread(fid, 100, '*float32', 4);
toc;
So this file read was accomplished using two freads instead of 200, a massive improvement.
Function Calls
I recently worked on some code that used many function calls, all of which were located in separate files. So let's say there were 100 separate files, all calling each other. By "inlining" this code into one function I saw a 20% improvement in execution speed over the original 9 seconds.
Obviously you would not do this at the expense of re-usability, but in my case the functions were automatically generated and not reused at all. But we can still learn from this and avoid excessive function calls where they are not really needed.
External MEX functions incur an overhead for being called. Therefore one call to a large MEX function is a lot more efficient than many calls to smaller MEX functions.
Plotting Many Disconnected Lines
When plotting disconnected data such as a set of vertical lines, the traditional way to do this in Matlab is to loop over the data, issuing repeated calls to line or plot with hold on. However, if you have a large number of individual lines to plot, this becomes very slow.
The technique I have found uses the fact that you can introduce NaN values into the data being plotted, which causes a break in the line.
The below contrived example converts a set of x_values, y1_values, and y2_values (where the line is from [x, y1] to [x, y2]) to a format appropriate for a single call to plot.
For example:
% Where x is 1:1000, draw vertical lines from 5 to 10.
x_values = 1:1000;
y1_values = ones(1, 1000) * 5;
y2_values = ones(1, 1000) * 10;
% Set x_plot_values to [1, 1, NaN, 2, 2, NaN, ...];
x_plot_values = zeros(1, length(x_values) * 3);
x_plot_values(1:3:end) = x_values;
x_plot_values(2:3:end) = x_values;
x_plot_values(3:3:end) = NaN;
% Set y_plot_values to [5, 10, NaN, 5, 10, NaN, ...];
y_plot_values = zeros(1, length(x_values) * 3);
y_plot_values(1:3:end) = y1_values;
y_plot_values(2:3:end) = y2_values;
y_plot_values(3:3:end) = NaN;
figure; plot(x_plot_values, y_plot_values);
I have used this method to plot thousands of tiny lines and the performance improvements were immense. Not only was the initial plot faster, but the performance of subsequent manipulations such as zoom and pan operations improved as well.
This is more a question to understand a behavior than to solve a specific problem.
Mathworks states that numerical arrays are stored contiguously, which makes preallocation important. This is not the case for cell arrays.
Are they something similar to a vector or array of pointers in C++?
This would mean that preallocation is not so important, since a pointer is half the size of a double (according to whos - but there surely is overhead somewhere to store the datatype of the mxArray).
Running this code:
clear all
n = 1e6;
tic
A = [];
for i=1:n
A(end + 1) = 1;
end
fprintf('Numerical without preallocation %f s\n',toc)
clear A
tic
A = zeros(1,n);
for i=1:n
A(i) = 1;
end
fprintf('Numerical with preallocation %f s\n',toc)
clear A
tic
A = cell(0);
for i=1:n
A{end + 1} = 1;
end
fprintf('Cell without preallocation %f s\n',toc)
tic
A = cell(1,n);
for i=1:n
A{i} = 1;
end
fprintf('Cell with preallocation %f s\n',toc)
returns:
Numerical without preallocation 0.429240 s
Numerical with preallocation 0.025236 s
Cell without preallocation 4.960297 s
Cell with preallocation 0.554257 s
There is no surprise for the numerical values. But the cell results did surprise me, since only the container of pointers, and not the data itself, should need reallocation. That should (since a pointer is smaller than a double) lead to a difference of less than 0.2 s. Where does this overhead come from?
A related question: what would be a good data container for heterogeneous data in Matlab (where preallocation is not possible since the final size is not known at the beginning)? I think handle classes are not a good fit since they also have huge overhead.
Already looking forward to learning something,
magu_
Edit:
I tried out the linked list proposed by Eitan T, but I think the overhead from MATLAB is still rather big. I tried something with a double array as data (rand(200000,1)).
I made a little plot to illustrate:
Code for the graph (I used the dlnode class from the MATLAB homepage, as stated in the answering post):
D = rand(200000,1);
s = linspace(10,20000,50);
nC = zeros(50,1);
nL = zeros(50,1);
for i = 1:50
a = cell(0);
tic
for ii = 1:s(i)
a{end + 1} = D;
end
nC(i) = toc;
a = list([]);
tic
for ii = 1:s(i)
a.insertAfter(list(D));
end
nL(i) = toc;
end
figure
plot(s,nC,'r',s,nL,'g')
xlabel('#iter')
ylabel('time (s)')
legend({'cell' 'list'})
Don't get me wrong, I love the idea of linked lists, since they are rather flexible, but I think the overhead might be too big.
Are cell arrays something similar to a vector or an array of pointers in C++?
Cell arrays allow storing data of different types and sizes indeed, but each cell also adds a constant overhead of 112 bytes (see this other answer of mine). This is far more than an 8-byte double, and this is non-negligible, especially when dealing with large cell arrays as in your example.
It is reasonable to assume that a cell array is implemented as a contiguous array of pointers, each pointing to the actual content of the cell.
This means that you can modify the content of each cell individually without actually resizing the cell array container itself. However, this also means that adding new cells to the cell array requires dynamic storage allocation and this is why preallocating memory for a cell array improves performance.
A related question would be, if I would like to make a data container for heterogeneous data in Matlab (preallocation is not possible since the final size is not known in the beginning)
Not knowing the final size may indeed be a problem, but you could always preallocate a cell array with the maximum size you could possibly need (if such a bound exists), and remove the empty cells at the end. I also suggest that you look into implementing linked lists in MATLAB.
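A minimal sketch of that approach (the upper bound and getNextItem are placeholders, not part of the original question):
max_items = 1e5;                  % assumed upper bound on how many items can arrive
A = cell(1, max_items);
k = 0;
for i = 1:max_items
    item = getNextItem();         % hypothetical data source
    if isempty(item), break; end  % stop when the stream runs dry
    k = k + 1;
    A{k} = item;                  % heterogeneous payloads of any size or type
end
A = A(1:k);                       % remove the empty cells at the end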
Is there a way to rewrite my code to make it faster?
for i = 2:length(ECG)
u(i) = max([a*abs(ECG(i)) b*u(i-1)]);
end;
My problem is the length of ECG.
You should pre-allocate u like this
>> u = zeros(size(ECG));
or possibly like this
>> u = NaN(size(ECG));
or maybe even like this
>> u = -Inf(size(ECG));
depending on what behaviour you want.
When you pre-allocate a vector, MATLAB knows how big the vector is going to be and reserves an appropriately sized block of memory.
If you don't pre-allocate, then MATLAB has no way of knowing how large the final vector is going to be. Initially it will allocate a short block of memory. If you run out of space in that block, then it has to find a bigger block of memory somewhere, and copy all the old values into the new memory block. This happens every time you run out of space in the allocated block (which may not be every time you grow the array, because the MATLAB runtime is probably smart enough to ask for a bit more memory than it needs, but it is still more than necessary). All this unnecessary reallocating and copying is what takes a long time.
There are several ways to optimize this for loop, but, surprisingly, memory pre-allocation is not the part that saves the most time. By far. You're using max to find the largest element of a 1-by-2 vector, and on each iteration you build this vector. However, all you're doing is comparing two scalars. Using the two-argument form of max and passing it two scalars is MUCH faster: 75+ times faster on my machine for large ECG vectors!
% Set the parameters and create a vector with million elements
a = 2;
b = 3;
n = 1e6;
ECG = randn(1,n);
ECG2 = a*abs(ECG); % This can be done outside the loop if you have the memory
u(1,n) = 0; % Fast zero allocation
for i = 2:length(ECG)
u(i) = max(ECG2(i),b*u(i-1)); % Compare two scalars
end
For the single input form of max (not including creation of random ECG data):
Elapsed time is 1.314308 seconds.
For my code above:
Elapsed time is 0.017174 seconds.
FYI, the code above assumes u(1) = 0. If that's not true, then u(1) should be set to its value after preallocation.
According to this question, I should try to use preallocation in MATLAB.
Now I have a situation where I cannot calculate the exact size of the matrix to preallocate. I can guess the size.
Suppose the actual size of the matrix is 100, but I don't know it in advance.
Which scenario is more efficient:
Should I be lavish? I could guess a large matrix and remove the extra rows at the end.
Should I be stingy? I could guess a small size and, if it turns out to be wrong, add new rows.
Thanks.
In my opinion, the answer is a bit more complex than portrayed by @natan.
I think there are two factors his answer does not take into account:
Possible necessary copies of memory: when you under-estimate a matrix size and re-allocate it, all its old values must be copied to the newly allocated location.
Continuity of memory chunks: sometimes Matlab is able to allocate new memory contiguously at the end of the old matrix. In principle, in such a scenario the old values need not be copied to the new location, since it is the same as the old one, just bigger. However, if you add rows to a 2D matrix, the content needs to be copied even in this scenario, since Matlab stores matrices in a column-major fashion in memory (see the small sketch right after this list).
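A small illustration of that last point (a sketch; absolute timings are machine-dependent):
A = rand(4000);
tic; A(:, end + 1) = 1; toc   % add a column: the column-major layout is preserved
A = rand(4000);
tic; A(end + 1, :) = 1; toc   % add a row: every column must be re-laid out, so all elements move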
So, my answer is this:
First of all, what exactly don't you know about the size of the matrix? If you know one dimension, make it the number of rows of your matrix, so you'll only need to change the number of columns. This way, if your already stored data needs to be copied, it will be copied in larger chunks.
Second, it depends on how much free RAM you have at your disposal.
If you are not short on RAM, then there's nothing wrong with overestimating.
However, if you are short on RAM, consider underestimating. BUT when you re-allocate, increase the new block size at each iteration:
BASIC_SIZE = X; % first estimate
NEW_SIZE = Y; % if need more, add this amount
factor = 2;
arr = zeros( m, BASIC_SIZE ); % first allocation, assuming we know number of rows
while someCondition
    % process arr ...
    if needMoreCols
        arr(:, size(arr,2) + (1:NEW_SIZE) ) = 0;  % allocate another block of columns
        NEW_SIZE = round(NEW_SIZE * factor);      % we seem to be off in our estimate, so try a larger chunk next time (factor should be > 1)
    end
end
arr = arr(:, 1:actualNumOfCols);  % resize to actual size, discard unnecessary columns
+1 for the interesting question.
EDITED Answer:
From a little experimental study it at first seemed better to add rows later, but it now seems more efficient to overestimate and preallocate again once you have the information about the correct size. I started with a matrix of size 3000 and assumed a 10% error in the size estimate; see below:
clear all
clc
guess_size=3000;
m=zeros(guess_size);
%1. oops, overestimated, take out rows
tic
m(end-300:end,:)=[];
toc
%1b. oops, overestimated, preallocate again
tic
m=zeros(guess_size-300,guess_size);
toc
%2. oops, overestimated, take out cols
m=zeros(guess_size);
tic
m(:,end-300:end)=[];
toc
%2b. oops, overestimated, preallocate again
m=zeros(guess_size);
tic
m=zeros(guess_size,guess_size-300);
toc
%3. oops, underestimated, add rows
m=zeros(guess_size);
tic
m=zeros(guess_size+300,guess_size);
toc
%4. oops, underestimated, add cols
m=zeros(guess_size);
tic
m=zeros(guess_size,guess_size+300);
toc
Elapsed time is 0.041893 seconds.
Elapsed time is 0.026925 seconds.
Elapsed time is 0.041818 seconds.
Elapsed time is 0.023425 seconds.
Elapsed time is 0.027523 seconds.
Elapsed time is 0.029509 seconds.
Options 2b and 1b are slightly faster than underestimating, so if you can, it is better to overestimate and then preallocate again. It is never efficient to delete rows from an array. Also, adding columns seems slightly more efficient, but this is just a quick and dirty job. See @Shai's detailed answer for the inner workings...
In addition to the other educational answers, the short version:
There are three cases:
The size of the array is relatively small (up to thousands of bytes) -> it doesn't really matter.
The array is big, but you are not bounded by the amount of memory your system has -> overestimate.
The array is big, and you are bounded by the amount of memory your system has -> do what Shai suggested.
Why is indexing into a dataset array so slow? A peek into the dataset.subsref function shows that all the columns of the dataset are stored in a cell array. However, plain cell indexing is much, much faster than dataset indexing, which is just indexing into a cell array under the hood. My guess is that this has to do with some overhead of MATLAB OOP. Any ideas on how to speed this up?
%% Using R2011a, PCWIN64
feature accel off; % turn off JIT
dat = (1:1e6)';
dat2 = repmat({'abc'}, 1e6, 1);
celldat = {dat dat2};
ds = dataset(dat, dat2);
N = 1e2;
tic;
for j = 1:N
tmp = celldat{2};
end
toc;
tic;
for j = 1:N
tmp2 = ds.dat2; % 2.778sec spent on line 262 of dataset.subsref
end
toc;
feature accel on; % turn JIT back on
Elapsed time is 0.000165 seconds.
Elapsed time is 2.778995 seconds.
EDIT: I've updated the example to be more like the problem I'm seeing. A huge amount of time is spent on line 262 of dataset.subsref - "b = a.data{varIndex};". It's very strange to me since it is a simple cell dereference. I'm wondering if there is an OOP trick that will allow me to index into "a.data" without the strange overhead.
EDIT2: As per Andrew's suggestion, I've submitted this as a bug to MathWorks. Will update if I hear anything from them.
EDIT3: MathWorks responded and said they are aware of the problem and will fix it in a future release. They noted that the problem is specific to cell arrays and suggested avoiding them if possible.
Yes, you are most likely seeing the overhead of Matlab OOP method calls. They are expensive compared to cell indexing, or to method calls in some other languages. Your .513872 seconds / 1e4 ~= 51 microseconds per call, which is the approximate cost of a few MCOS method calls; they're ~5-15 microseconds each on machines I've seen. So that looks like the method overhead of the subsref() call itself plus other methods and property accesses it calls in turn.
For some details and discussion, see: Is MATLAB OOP slow or am I doing something wrong?
I don't know of a way to make this faster, aside from structuring your code to minimize calls to "ds.dat" or other methods. If possible, when working with the data set, call "ds.dat" once, keep it in a local variable and work with it there, and then push it back into the ds object.
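In the spirit of that advice, a small sketch based on the question's own variables (purely illustrative):
tmp2 = ds.dat2;              % pay the expensive subsref once instead of N times
count = 0;
for j = 1:N
    count = count + sum(strcmp(tmp2, 'abc'));   % any work happens on the local copy
end
% ds.dat2 = tmp2;            % push it back only if the local copy was modified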
Caveat: I don't know what "feature accel" does or how it could affect these timings.
Edit: I threw it in the profiler like Richie suggested. On my R2009b, looks like about half the time is method call overhead, and the rest in find(), strcmp(), and other operations inside subsref; subsref doesn't call any other methods in turn.
Edit 2: The revised example is showing much higher timings. Method call overhead doesn't account for all that.