averaging over 3rd dim, within a cell array, as efficiently as possible - matlab

I haven't found a good solution for this problem; it takes forever, and as far as I can tell that is mainly due to the data being stored in a cell array.
I process movie data in this format:
[data{1:4}] = deal(int16(randi([0 255],200,400,100))); % 200 x 400 px, 100 frames, 4 similar movies
Where data represents 4 different, but similar movies, in a cell array. Now I would like to take the average of the 4 variables data{1:4}, frame by frame. This is what I came up with:
for frame = 1:size(data{1},3)
    tmp = zeros(200,400,'int16');
    for ind = 1:4
        tmp = tmp + data{ind}(:,:,frame);
    end
    data_avg(:,:,frame) = tmp./4;
end
Is there a more efficient way to do this (faster, without doubling the RAM usage)? I haven't found any.

The fastest approach will be to do:
data_avg= (data{1}+data{2}+data{3}+data{4})/4;
No need for a for loop.
This is slower: mean(double(cat(4,data{:})),4), because MATLAB's mean has some overhead (and the conversion to double copies the data). Your for loop is in between.
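If you want to check this on your own machine, a rough timing sketch (illustrative only; exact numbers depend on hardware and MATLAB version) could look like this:
[data{1:4}] = deal(int16(randi([0 255],200,400,100))); % same test data as in the question
tic; avg_sum = (data{1}+data{2}+data{3}+data{4})/4;   toc % direct int16 sum
tic; avg_cat = int16(mean(double(cat(4,data{:})),4)); toc % cat + mean, with an extra double copy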

Related

How to execute a for loop faster in Matlab

I have to execute this for loop:
load('Y');
X_test = ...;
Y_test = ...;
X_train = ...;
Y_train = ...;
for i=1:length(Y.Y)
if Y.Y(i,1) == l
current_test_data = [current_test_data; X_test(i,:)];
current_test_labes = [current_test_labes; Y_test(i,:)];
else
current_train_data = [current_train_data; X_train(i,:)];
current_train_labes = [current_train_labes; Y_train(i,:)];
end
end
But length(Y.Y) is 2300250, so this execution takes a long time. Is there a faster way to do that?
What you are doing is indeed not great in terms of performance.
The first issue is the loop. Matlab does not handle loops very fast; when possible, vectorized operations should be preferred, as they are well optimized. For instance, it is much faster to execute A=B.*C than for ii=1:length(B), A(ii)=B(ii)*C(ii); end
The second issue is that you are concatenating arrays within the loop. current_test_data starts as a small array that grows over time. Each time some data is appended, memory needs to be reallocated, and the data may have to be moved to another place in memory. Since Matlab stores data in column-major order, adding an extra row also means that nearly all existing elements have to be shifted (while adding an extra column just appends data at the end). All of this conspires to produce terrible performance. With small arrays this may not be noticeable, but once you start moving megabytes or more of data in memory at each iteration, performance will plummet.
The usual solution, when the final size of the array is known, is to pre-allocate arrays, for instance current_test_data = zeros(expected_rows,expected_columns);, and put the data straight away where it belongs: current_test_data(jj,:) = some_matrix(ii,:);. No more memory allocation, no more memory moves, no more shuffling-samples-around.
But in your specific case, the fix lies with the first point: vectorized notation is the way to go. It will allocate the arrays at the right size and copy the data efficiently.
sel = Y.Y(:,1)==1; % Builds a logical vector
% Selects data based on logical vector
current_test_data = X_test(sel,:);
current_test_labes = Y_test(sel,:);
current_train_data = X_train(~sel,:);
current_train_labes = Y_train(~sel,:);
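For completeness, a hedged sketch of the pre-allocation route from the second point (assuming X_test, Y_test, X_train, Y_train exist as in the question); the vectorized version above is still the better choice here:
% Sketch of the pre-allocated loop, for comparison only
nTest  = nnz(Y.Y(:,1) == 1);                 % final sizes are known up front
nTrain = size(Y.Y,1) - nTest;
current_test_data   = zeros(nTest,  size(X_test,2));
current_test_labes  = zeros(nTest,  size(Y_test,2));
current_train_data  = zeros(nTrain, size(X_train,2));
current_train_labes = zeros(nTrain, size(Y_train,2));
jt = 0; jr = 0;
for ii = 1:size(Y.Y,1)
    if Y.Y(ii,1) == 1
        jt = jt + 1;
        current_test_data(jt,:)  = X_test(ii,:);
        current_test_labes(jt,:) = Y_test(ii,:);
    else
        jr = jr + 1;
        current_train_data(jr,:)  = X_train(ii,:);
        current_train_labes(jr,:) = Y_train(ii,:);
    end
end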

How to do a median projection of a large image stack in Matlab

I have a large stack of 800 16-bit grayscale images of 2048x2048 px each. They are read from a single BigTIFF file, and the whole stack barely fits into my RAM (8GB).
Now I need to do a median projection. That means I want to compute the median of each pixel across all 800 frames. The Matlab median function fails because there is not enough memory left to make a copy of the whole array for the function call. What would be an efficient way to compute the median?
I have tried using a for loop to compute the median one pixel at a time, but this is still terribly slow.
Iterating over blocks, as @Shai suggests, may be the most straightforward solution. If you have this problem frequently, you may want to consider converting the image to a mat-file, so that you can access the pixels as an n-d array directly from disk.
%# convert to mat file
matObj = matfile('dest.mat','Writable',true);
matObj.data(2048,2048,numSlices) = 0;
for t = 1:numSlices
matObj.data(:,:,t) = imread(tiffFile,'index',t);
end
%# load a block of the matfile to take median (run as part of a loop)
medianOfBlock = median(matObj.data(1:128,1:128,:),3);
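To assemble the full projection, that last line would go inside a loop over block coordinates, for example (a sketch reusing the matObj/numSlices placeholders from above, and assuming 2048 is a multiple of the block size):
%# assemble the full median projection block by block
blk = 128;
medianProj = zeros(2048,2048);
for r = 1:blk:2048
    for c = 1:blk:2048
        medianProj(r:r+blk-1, c:c+blk-1) = ...
            median(matObj.data(r:r+blk-1, c:c+blk-1, :), 3);
    end
end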
I bet that the distributions of the individual pixel values over the stack (i.e. the histograms of the pixel jets) are sparse.
If that's the case, the amount of memory needed to keep all the pixel histograms is much less than 2K x 2K x 64k: you can use a compact hash map to represent each histogram, updating them as you load the images one at a time. When all updates are done, you go through your histograms and compute the median of each.
If you have access to the Image Processing Toolbox, Matlab has a tool for processing large images block by block, called blockproc.
From the docs:
To avoid these problems, you can process large images incrementally: reading, processing, and finally writing the results back to disk, one region at a time. The blockproc function helps you with this process.
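A minimal sketch of the blockproc pattern (the file name 'huge.tif' and the per-block operation are placeholders; this illustrates the block-wise workflow rather than the stack median itself):
fun = @(bs) medfilt2(bs.data);               % bs.data holds the current block
out = blockproc('huge.tif', [512 512], fun); % reads and processes 512x512 tiles
% if the result is also too large for memory, blockproc accepts a 'Destination' file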
I will try my best to help, even though I have neither an 800-frame TIFF stack nor an 8GB computer to test with, but I want to see whether my thoughts can form a solution.
First, 800*2048*2048 pixels at 8 bit = 3.2GB (double that for the 16-bit data you describe), not including the headers. With your 8GB RAM it should not be too difficult to store it at once; there may simply be too many programs running, fragmenting the contiguous memory. Anyway, let's treat the problem as if Matlab can't load it as a whole into memory.
As Jonas suggests, imread supports loading a TIFF image by index. It also supports a PixelRegion parameter, so you can also consider accessing parts of the image by this parameter if you want to utilize Shai's idea.
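For instance, a hedged sketch of Shai's block idea using only imread (tiffFile and numSlices are placeholders): the median projection of one 128x128 tile is obtained by reading just that region from every slice, and you would slide the tile over the whole image:
rows = [1 128]; cols = [1 128];                 % one tile; loop these over the image
tile = zeros(128, 128, numSlices, 'uint16');
for t = 1:numSlices
    tile(:,:,t) = imread(tiffFile, 'Index', t, 'PixelRegion', {rows, cols});
end
tileMedian = median(tile, 3);                   % median projection of this tile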
I came up with a median algorithm that doesn't need all the data at the same time; it merely scans through a sequence of unordered data, one value at a time, but it does keep a memory of 256 counters.
data = randi([0,255], 1, 800);
bins = num2cell(zeros(256,1,'uint16'));
for ii = 1:800
bins{data(ii)+1} = bins{data(ii)+1} + 1;
end
% clearvars data
s = cumsum(cell2mat(bins));
if any(s==400) % the 400th and 401st smallest values straddle a bin boundary
med = ( find(s==400, 1, 'first') + ...
find(s>400, 1, 'first') ) /2 - 1;
else
med = find(s>400, 1, 'first') - 1;
end
It's not very efficient, not least because it uses a for loop. But the benefit is that instead of keeping 800 raw values in memory, only 256 counters are kept; the counters need uint16, so they are roughly equivalent to 512 raw (8-bit) values. If you are confident that for any pixel no single grayscale level occurs more than 255 times among the 800 samples, you can choose uint8 instead and halve that memory.
The above code is for one pixel. I'm still thinking about how to expand it to a 2048x2048 version, such as
for ii = 1:800
img_data = randi([0,255], 2048, 2048);
(do stats stuff)
end
By doing so, for each iteration, you only need these kept in memory:
One frame of image;
A set of counters;
A few supplemental variables, with size comparable to one frame of image.
I use a cell array to store the counters. According to this post, a cell array can be pre-allocated while its elements can still be stored in memory non-contiguously. That means the 256 counters (512*2048*2048 bytes) can be stored separately, which is quite reasonable for your 8GB RAM. But obviously my sample code does not make use of it, since it starts from bins = num2cell(zeros(....
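For what it's worth, here is one hypothetical way to flesh out the 2048x2048 version sketched above. It assumes 8-bit data, counts that fit in uint16, and that a contiguous ~2GB counter array can be allocated (otherwise the cell-based storage just discussed, or row-wise blocks of counters, could be substituted); it also returns the lower of the two middle values when the frame count is even, instead of averaging them like the one-pixel code. tiffFile and numSlices are placeholders as before.
counts = zeros(2048, 2048, 256, 'uint16');                  % ~2GB of counters
for t = 1:numSlices
    frame = imread(tiffFile, 'Index', t);                   % one frame in memory at a time
    idx = (1:2048*2048)' + uint32(frame(:)) * (2048*2048);  % pixel index + gray-level offset
    counts(idx) = counts(idx) + 1;
end
medianProj = zeros(2048, 2048);
for r = 1:2048                                              % finish one row of pixels at a time
    csum = cumsum(squeeze(counts(r,:,:)), 2);               % 2048x256 cumulative histograms
    [~, medPlus1] = max(csum >= ceil(numSlices/2), [], 2);  % first bin reaching half the stack
    medianProj(r,:) = (medPlus1 - 1).';                     % back to 0-based gray levels
end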

Recursive loop optimization

Is there a way to rewrite my code to make it faster?
for i = 2:length(ECG)
u(i) = max([a*abs(ECG(i)) b*u(i-1)]);
end;
My problem is the length of ECG.
You should pre-allocate u like this
>> u = zeros(size(ECG));
or possibly like this
>> u = NaN(size(ECG));
or maybe even like this
>> u = -Inf(size(ECG));
depending on what behaviour you want.
When you pre-allocate a vector, MATLAB knows how big the vector is going to be and reserves an appropriately sized block of memory.
If you don't pre-allocate, then MATLAB has no way of knowing how large the final vector is going to be. Initially it will allocate a short block of memory. If you run out of space in that block, then it has to find a bigger block of memory somewhere and copy all the old values into the new block. This happens every time you run out of space in the allocated block (which may not be every time you grow the array, because the MATLAB runtime is probably smart enough to ask for a bit more memory than it needs, but it still happens far more often than necessary). All this unnecessary reallocating and copying is what takes a long time.
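A tiny demo you can run to see the effect (timings are indicative only; newer MATLAB versions grow arrays more cleverly, but the pre-allocated loop should still win):
n = 1e6;
tic; u1 = []; for i = 1:n, u1(i) = i; end; toc           % grown inside the loop
tic; u2 = zeros(1,n); for i = 1:n, u2(i) = i; end; toc   % pre-allocated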
There are several ways to optimize this for loop, but, surprisingly, memory pre-allocation is not, by far, the part that saves the most time. You're using max to find the largest element of a 1-by-2 vector that you build on each iteration. However, all you're doing is comparing two scalars. Using the two-argument form of max and passing it two scalars is MUCH faster: 75+ times faster on my machine for large ECG vectors!
% Set the parameters and create a vector with million elements
a = 2;
b = 3;
n = 1e6;
ECG = randn(1,n);
ECG2 = a*abs(ECG); % This can be done outside the loop if you have the memory
u(1,n) = 0; % Fast zero allocation
for i = 2:length(ECG)
u(i) = max(ECG2(i),b*u(i-1)); % Compare two scalars
end
For the single input form of max (not including creation of random ECG data):
Elapsed time is 1.314308 seconds.
For my code above:
Elapsed time is 0.017174 seconds.
FYI, the code above assumes u(1) = 0. If that's not true, then u(1) should be set to its value after preallocation.

What function or loop do I have to use to average the matrices?

I want to find the average of all the matrices:
Data = (Data{1} + ... + Data{n}) / n
where each Data{i} is an m*n matrix.
Thank you sooo much
First, you convert your cell array into a 3D array, then you can take the average, like this:
tmp = cat(3,Data{:}); %# concatenates the data into an m*n*z array, z = numel(Data) (or m*1*z for column-vector cells)
averageData = mean(tmp,3); %# takes average along 3rd dimension
Note: if you get memory problems this way, and if you don't need to keep the variable Data around, you can replace tmp with Data and all will work fine.
Alternatively, if Data is simply an m*n numeric array:
averageData = mean(Data,2);
If your cell array is really big, you might want to keep away from the above solution because of its memory usage. I'd then suggest using the utility mtimesx which is available from Matlab Central, here.
N = length(Data);
b = cell(N,1);
b(:) = {1};
averageData = mtimesx(Data,b)/N;
In the above example, I assumed that Data is a line-shaped cell array. I have personally never used mtimesx; this solution comes from there, where timing issues are also discussed.
Hope this helps.
A.

vectorizing loops in Matlab - performance issues

This question is related to these two:
Introduction to vectorizing in MATLAB - any good tutorials?
filter that uses elements from two arrays at the same time
Based on the tutorials I read, I was trying to vectorize a procedure that takes a really long time.
I've rewritten this:
function B = bfltGray(A,w,sigma_r)
dim = size(A);
B = zeros(dim);
for i = 1:dim(1)
for j = 1:dim(2)
% Extract local region.
iMin = max(i-w,1);
iMax = min(i+w,dim(1));
jMin = max(j-w,1);
jMax = min(j+w,dim(2));
I = A(iMin:iMax,jMin:jMax);
% Compute Gaussian intensity weights.
F = exp(-0.5*(abs(I-A(i,j))/sigma_r).^2);
B(i,j) = sum(F(:).*I(:))/sum(F(:));
end
end
into this:
function B = rngVect(A, w, sigma)
W = 2*w+1;
I = padarray(A, [w,w],'symmetric');
I = im2col(I, [W,W]);
H = exp(-0.5*(abs(I-repmat(A(:)', size(I,1),1))/sigma).^2);
B = reshape(sum(H.*I,1)./sum(H,1), size(A, 1), []);
Where
A is a 512x512 matrix
w is half of the window size, usually equal to 5
sigma is a parameter in the range [0 1] (usually one of 0.1, 0.2 or 0.3)
So the I matrix would have 512x512x121 = 31719424 elements
But this version seems to be as slow as the first one, and in addition it uses a lot of memory and sometimes causes memory problems.
I suppose I've done something wrong, probably some logic mistake regarding vectorization. Well, in fact I'm not surprised: this method creates really big matrices, and the computations probably take proportionally longer.
I have also tried to write it using nlfilter (similar to the second solution given by Jonas), but that seems hard since I use Matlab 6.5 (R13), where no sophisticated function handles are available.
So once again, I'm asking not for ready solution, but for some ideas that would help me to solve this in reasonable time. Maybe you will point me what I did wrong.
Edit:
As Mikhail suggested, the results of profiling are as follows:
65% of time was spent in the line H= exp(...)
25% of time was used by im2col
How big are I and H (i.e. numel(I)*8 bytes)? If you start paging, then the performance of your second solution is going to be affected very badly.
To test whether you really have a problem due to too large arrays, you can try and measure the speed of the calculation using tic and toc for arrays A of increasing size. If the execution time increases faster than by the square of the size of A, or if the execution time jumps at some size of A, you can try and split the padded I into a number of sub-arrays and perform the calculations like that.
Otherwise, I don't see any obvious places where you could be losing lots of time. Well, maybe you could skip the reshape, by replacing B with A in your function (saves a little memory as well), and writing
A(:) = sum(H.*I,1)./sum(H,1);
You may also want to look into upgrading to a more recent version of Matlab - they've worked hard on improving performance.
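If splitting does turn out to be necessary, a hedged sketch of a column-chunked variant of rngVect (chunkCols is a tuning parameter you would have to pick) could look like this; it should give the same result as the all-at-once version while keeping I and H at a fraction of their size:
function B = rngVectChunked(A, w, sigma, chunkCols)
% Process chunkCols columns of A at a time so that I and H stay small
W = 2*w + 1;
Apad = padarray(A, [w, w], 'symmetric');
B = zeros(size(A));
for j0 = 1:chunkCols:size(A,2)
    j1 = min(j0 + chunkCols - 1, size(A,2));
    I = im2col(Apad(:, j0:j1+2*w), [W, W]);       % neighbourhoods of A(:, j0:j1) only
    centers = A(:, j0:j1);
    H = exp(-0.5*(abs(I - repmat(centers(:)', size(I,1), 1))/sigma).^2);
    B(:, j0:j1) = reshape(sum(H.*I,1)./sum(H,1), size(A,1), []);
end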