MATLAB: sum of one row

I am trying to streamline my code. I have a 2-column array from which I'd like to extract the averages of the columns and store them as X and Y.
I tried using the following code:
[x y] = mean(theArray);
...However, this returns
??? Error using ==> mean
Too many output arguments.
For now, I have settled on these three lines:
coords = mean(theArray);
x = coords(1);
y = coords(2);
I'm sure there must be a much simpler way of doing this in less than three lines. My code is running an eye-tracking device at 1000 Hz and I want to avoid any unnecessary processing...
Any wisdom gratefully received

In two lines:
x = mean(theArray(:,1));
y = mean(theArray(:,2));

Your code is already pretty darn simple. You can do it in a one-liner, using this or similar inline array-rearranging code.
[x,y] = deal(mean(theArray(:,1)), mean(theArray(:,2)));
But in efficiency terms your original three-liner is probably better. Splitting the array out before the mean call allocates more memory and costs an extra mean() call. You could get it down to two lines without the extra memory and mean().
tmp = mean(theArray);
[x,y] = deal(tmp(1), tmp(2));
But that really just accomplishes the same thing as your original code, paying an additional function call at run time to save a line on paper.
Throw your code in the MATLAB profiler with profile on and see if you actually have a problem before trying to optimize. I'll bet none of these are distinguishable in practice, in which case you can stick with whatever's most readable.
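A minimal profiling session might look like this (a sketch; substitute your actual per-sample processing inside the loop):
profile on
for trial = 1:10000
    coords = mean(theArray); % code under test
    x = coords(1);
    y = coords(2);
end
profile viewer % inspect per-call and per-line times; profile off stops collection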

Related

Applying (with as few loops as possible) a function to given elements/voxels (x,y,z) taken from subfields of multiple structs (nifti's) in MATLAB?

I have a dataset of n nifti (.nii) images. Ideally, I'd like to be able to get the value of the same voxel/element from each image, and apply a function to the n data points. I'd like to do this for each voxel/element across the whole image, so that I can reconvert the result back into .nii format.
I've used the Tools for NIfTI and ANALYZE image toolbox to load my images:
data(1)=load_nii('C:\file1.nii');
data(2)=load_nii('C:\file2.nii');
...
data(n)=load_nii('C:\filen.nii');
From which I obtain a struct object with each sub-field containing one loaded nifti. Each of these has a subfield 'img' corresponding to the image data I want to work on. The problem comes from trying to select a given xyz within each img field of data(1) to data(n). As I discovered, it isn't possible to select in this way:
data(:).img(x,y,z)
or
data(1:n).img(x,y,z)
because MATLAB doesn't support it: the index in the first set of parentheses has to be scalar for the call to work. The solution from Googling around seems to be a loop that creates a temporary variable:
for z = 1:nz
    for x = 1:nx
        for y = 1:ny
            for i = 1:n
                points(i) = data(i).img(x,y,z);
            end
            [p1(x,y,z,:),~,p2(x,y,z)] = fit_data(a,points,b);
        end
    end
end
which works, but takes too long (several days) for a single set of images given the size of nx, ny, nz (several hundred each).
I've been looking for a way to speed up the code, which I believe depends on removing those loops by vectorisation: preselecting the img fields (via getfield?), concatenating them, and applying something like arrayfun/cellfun/structfun. But I'm frankly a bit lost on how to do it. I can only think of ways to pre-select which themselves require loops, which seems to defeat the purpose of the exercise (though a solution with fewer loops, or at least fewer nested loops, might do it), or which run into the same problem that calls like data(:).img(x,y,z) don't work. Googling around again throws up ways to select and concatenate fields within a struct, or a given field across multiple structs, but I can't find anything for my problem: selecting an element from a non-scalar sub-field in a sub-struct of a struct object (with the minimum of loops). Finally, I need the output to be in the form of a matrix that the toolbox above can turn back into a nifti.
Any and all suggestions, clues, hints and help greatly appreciated!
You can concatenate the images into a 4D array and use linear indexing to speed up the calculation:
img = cat(4,data.img); % concatenate the n images along a 4th dimension
p1 = zeros(nx,ny,nz,n);
p2 = zeros(nx,ny,nz);
sz = ny*nx*nz; % number of voxels in one volume
for k = 1:sz
    points = img(k:sz:end); % voxel k from each of the n images
    [p1(k:sz:end),~,p2(k)] = fit_data(a,points,b);
end
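To see why the stride-sz indexing works, here is a small sanity check (illustrative only; x, y, z are assumed to be valid subscripts into one volume):
% linear index of voxel (x,y,z) in the first volume
k = sub2ind([nx ny nz], x, y, z);
% stepping by sz visits the same voxel in each of the n volumes
isequal(img(k:sz:end), squeeze(img(x,y,z,:))') % returns true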

Vectorising a MATLAB function

I'm writing a function to get the cosine of a given array. It works but I'm presently using a loop in order to iterate over each value in the array whereas I'm assured that it can be vectorised.
Presently the code is:
for i = 1:numel(x)
    cos(i) = (sum(((-1).^(0:n)).*(x(i).^(2*(0:n)))./(factorial(2*(0:n)))));
end
and I can't for the life of me think how it vectorises, so any help would be appreciated.
EDIT: Here is the full function http://pastebin.com/n1DG6nUv
2nd EDIT: link updated with new code that doesn't overwrite cos.
Here's one way using bsxfun and gamma:
v = 0:n;
fcos = zeros(size(x));
fcos(:) = sum(bsxfun(@times,bsxfun(@power,x(:),2*v),(-1).^v./gamma(2*v+1)),2)
In the spirit of learning, note that you have several issues with the code in your question. First, you don't preallocate memory. Second, you're overwriting the cos function, which is probably not a good idea. Also, I believe that using gamma(n+1) instead of factorial(n) will be faster. Finally, there are many unnecessary parentheses that make the code hard to read.
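As a quick sanity check of the vectorized version (a sketch; with enough terms the truncated series should agree with the builtin to near machine precision for moderate |x|):
x = linspace(-pi, pi, 7);
n = 20; % number of series terms
v = 0:n;
fcos = zeros(size(x));
fcos(:) = sum(bsxfun(@times, bsxfun(@power, x(:), 2*v), (-1).^v./gamma(2*v+1)), 2);
max(abs(fcos - cos(x))) % on the order of 1e-16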

Detect signal jumps relative to local activity

In MATLAB, is it possible to measure the local variation of a signal across its entire length without using for loops? I.e., can I implement the following:
window_length = <something>
for n = 1:(length_of_signal - window_length + 1)
    global_variance(n) = var(my_signal(n:n+window_length-1));
end
in a vectorized format?
If you have the image processing toolbox, you can use STDFILT:
global_std = stdfilt(my_signal(:),ones(window_length,1));
% square to get the variance
global_variance = global_std.^2;
You could create a 2D array where each row is shifted one w.r.t. the row above, and with the number of rows equal to the window width; then computing the variance is trivial. This doesn't require any toolboxes. Not sure if it's much faster than the for loop though:
longSignal = repmat(mySignal(:), [1 window_length+1]);
longSignal = reshape(longSignal(1:((length_of_signal+1)*window_length)), [length_of_signal+1, window_length])';
% this gives the sum of squares per window; subtract the squared window
% mean and normalize to turn it into a variance
global_variance = sum(longSignal.*longSignal, 1);
global_variance = global_variance(1:length_of_signal-window_length);
Note that each column of the reshaped array is shifted by one sample relative to the previous one, so the blocks of data we want to operate on sit in the rows; taking the transpose puts each block in its own column, and summing over the first dimension then gives a row vector with the results you want. However, there is a bit of wrapping of data going on at the end, so we have to limit ourselves to the "good" values.
I don't have MATLAB handy right now (I'm at home), so I was unable to test the above - but I think the general idea should work. It's vectorized - I can't guarantee it's fast...
Check the "moving window standard deviation" function at Matlab Central. Your code would be:
movingstd(my_signal, window_length, 'forward').^2
There's also moving variance code, but it seems to be broken.
The idea is to use the filter function.
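For reference, a minimal sketch of that filter-based idea (my interpretation, not the File Exchange author's code; it uses the identity var = E[x^2] - E[x]^2 with the biased 1/N normalization):
kernel = ones(window_length,1) / window_length;
mov_mean = filter(kernel, 1, my_signal(:)); % running mean over the window
mov_meansq = filter(kernel, 1, my_signal(:).^2); % running mean of the squares
global_variance = mov_meansq - mov_mean.^2; % scale by W/(W-1) to match var()
% note: the first window_length-1 samples are startup transient and not valid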

Vectorize MATLAB code to map nearest values in two arrays

I have two lists of timestamps and I'm trying to create a map between them that uses the imu_ts as the true time and tries to find the nearest vicon_ts value to it. The output is a 3xd matrix where the first row is the imu_ts index, the third row is the unix time at that index, and the second row is the index of the closest vicon_ts value above the timestamp in the same column.
Here's my code so far and it works, but it's really slow. I'm not sure how to vectorize it.
function tmap = sync_times(imu_ts, vicon_ts)
tstart = max(vicon_ts(1), imu_ts(1));
tstop = min(vicon_ts(end), imu_ts(end));
% trim imu data to the overlapping time range
tmap(1,:) = find(imu_ts >= tstart & imu_ts <= tstop);
tmap(3,:) = imu_ts(tmap(1,:)); % use imu_ts as ground truth
% find nearest indices in vicon data and map
vic_t = 1;
for i = 1:size(tmap,2)
    % advance until the vicon stamp is at or above this imu stamp
    while vicon_ts(vic_t) < tmap(3,i)
        vic_t = vic_t + 1;
    end
    tmap(2,i) = vic_t;
end
The timestamps are already sorted in ascending order, so this is essentially an O(n) operation but because it's looped it runs slowly. Any vectorized ways to do the same thing?
Edit
It appears to be running faster than I expected or first measured, so this is no longer a critical issue. But I would be interested to see if there are any good solutions to this problem.
Have a look at knnsearch in MATLAB. Use cityblock distance and also put an additional constraint that the data point in vicon_ts should be less than its neighbour in imu_ts. If it is not then take the next index. This is required because cityblock takes absolute distance. Another option (and preferred) is to write your custom distance function.
I believe that your current method is sound, and I would not try and vectorize any further. Vectorization can actually be harmful when you are trying to optimize some inner loops, especially when you know more about the context of your data (e.g. it is sorted) than the Mathworks engineers can know.
Things that I typically look for when I need to optimize some piece of code like this are:
All arrays are pre-allocated (this is the biggest driver of performance)
Fast inner loops use simple code (MATLAB does pretty effective JIT on basic commands, but must interpret others.)
Take advantage of any special features of your data, e.g. use algorithms that exploit sorted input and early-exit conditions in some loops.
You're already doing all this. I recommend no change.
A good start might be to get rid of the while loop; try something like:
for i = 1:size(tmap,2)
    C = vicon_ts - tmap(3,i); % signed distance to every vicon stamp
    C(C < 0) = Inf;           % ignore stamps below this imu time
    tmap(2,i) = find(C == min(C), 1);
end
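If you want to avoid the loop entirely, here is one possible fully vectorized sketch (not from the original answers; it reproduces the while loop's result - the first vicon index at or above each imu stamp - at the cost of an intermediate numel(vicon_ts)-by-size(tmap,2) logical matrix):
vt = vicon_ts(:); % column vector of vicon stamps
ti = tmap(3,:); % row vector of imu stamps
% count the vicon stamps strictly below each imu time; the next index is the match
tmap(2,:) = sum(bsxfun(@lt, vt, ti), 1) + 1;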

Replacement for repmat in MATLAB

I have a function which does the following loop many, many times:
for cluster = 1:max(bins) % bins is a list in the same format as kmeans() IDX output
    select = bins==cluster; % find group of values
    means(select,:) = repmat_fast_spec(meanOneIn(x(select,:)), sum(select), 1);
    % (*, above) for each point, write the mean of all points in x that
    % share its label in bins to the equivalent row of means
    delta_x(select,:) = x(select,:) - means(select,:);
    % subtract out the mean from each point
end
Noting that repmat_fast_spec and meanOneIn are stripped-down versions of repmat() and mean(), respectively, I'm wondering if there's a way to do the assignment in the line labeled (*) that avoids repmat entirely.
Any other thoughts on how to squeeze performance out of this thing would also be welcome.
Here is a possible improvement to avoid REPMAT:
x = rand(20,4);
bins = randi(3,[20 1]);
d = zeros(size(x));
for i = 1:max(bins)
    idx = (bins==i);
    d(idx,:) = bsxfun(@minus, x(idx,:), mean(x(idx,:)));
end
Another possibility:
x = rand(20,4);
bins = randi(3,[20 1]);
m = zeros(max(bins),size(x,2));
for i = 1:max(bins)
    m(i,:) = mean( x(bins==i,:) );
end
dd = x - m(bins,:);
One obvious way to speed up calculation in MATLAB is to make a MEX file. You can compile C code and perform any operations you want. If you're searching for the fastest-possible performance, turning the operation into a custom MEX file would likely be the way to go.
You may be able to get some improvement by using ACCUMARRAY.
%# gather array sizes
[nPts,nDims] = size(x);
nBins = max(bins);
%# calculate means. Not sure whether it might be faster to loop over nDims
meansCell = accumarray(bins,1:nPts,[nBins,1],@(idx){mean(x(idx,:),1)},{NaN(1,nDims)});
means = cell2mat(meansCell);
%# subtract cluster means from x - this is how you can avoid repmat in your code, btw.
%# all you need is the array with cluster means.
delta_x = x - means(bins,:);
First of all: format your code properly; surround operators and assignments with whitespace. I find your code very hard to comprehend, as it looks like one big blob of characters.
Next, you could follow the other responses and convert the code to C (MEX) or Java, automatically or manually, but in my humble opinion this is a last resort. You should only do such things when your performance falls short by only a small margin. On the other hand, your algorithm doesn't show obvious flaws.
But the first thing you should do when trying to improve performance is profile. Use the MATLAB profiler to determine which part of your code is causing your problems. How much would you need to improve to meet your expectations? If you don't know, determine this boundary first; otherwise you will be looking for a needle in a haystack which might not even be there in the first place. MATLAB will never be the fastest kid on the block with respect to runtime, but it might be the fastest with respect to development time for certain kinds of operations. In that respect, it might prove useful to sacrifice the clarity of MATLAB for the execution speed of other languages (C or even Java). But by the same token, you might as well code everything in assembler to squeeze all of the performance out of the code.
Another obvious way to speed up calculation in MATLAB is to make a Java library (similar to @aardvarkk's answer), since MATLAB ships with a Java virtual machine and has very good integration with user Java libraries.
Java's easier to interface and compile than C. It might be slower than C in some cases, but the just-in-time (JIT) compiler in the Java virtual machine generally speeds things up very well.
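As a trivial illustration of that integration (a sketch using a standard JDK class; no compilation or glue code needed):
list = java.util.ArrayList; % construct a Java object directly from MATLAB
list.add('hello'); % char arrays convert to java.lang.String automatically
list.add('world');
n = list.size() % call Java methods with ordinary MATLAB syntax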