MATLAB: need some help with a seemingly simple vectorization of an operation

I would like to optimize this piece of MATLAB code but so far I have failed. I have tried different combinations of repmat, sum, and cumsum, but none of my attempts gives the correct result. I would appreciate some expert guidance on this tough problem.
S=1000; T=10;
X=rand(T,S);
X=sort(X,1,'ascend');
Result=zeros(S,1);
for c=1:T-1
    for cc=c+1:T
        d=(X(cc,:)-X(c,:))-(cc-c)/T;
        Result=Result+abs(d');
    end
end
Basically I create 1000 vectors of 10 random numbers each, and for each vector I calculate, for each pair of values (say the mth and the nth, with n > m), the difference between them minus (n-m)/T. I sum the absolute values of these quantities over all possible pairs and return the result for every vector.
I hope this explanation is clear,
Thanks a lot in advance.

It is at least easy to vectorize your inner loop:
Result=zeros(S,1);
for c=1:T-1
    d=(X(c+1:T,:)-X(c,:))-((c+1:T)'-c)./T;
    Result=Result+sum(abs(d),1)';
end
Here, I'm using the new automatic singleton expansion (implicit expansion, introduced in R2016b). If you have an older version of MATLAB you'll need to use bsxfun for two of the subtraction operations. For example, X(c+1:T,:)-X(c,:) is the same as bsxfun(@minus,X(c+1:T,:),X(c,:)).
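For older releases, the whole vectorized loop might then look like this (a sketch of the bsxfun equivalent; I have not run it on a pre-R2016b installation):
Result=zeros(S,1);
for c=1:T-1
    % both subtractions need bsxfun when implicit expansion is unavailable
    d=bsxfun(@minus,bsxfun(@minus,X(c+1:T,:),X(c,:)),((c+1:T)'-c)./T);
    Result=Result+sum(abs(d),1)';
end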
What is happening in the vectorized code is that instead of looping over cc=c+1:T, we take all of those indices at once; I simply replaced cc with c+1:T. d is then a matrix with multiple rows (9 in the first iteration, and one fewer in each subsequent iteration).
Surprisingly, this is slower than the double loop, and similar in speed to Jodag's answer.
Next, we can try to improve indexing. Note that the code above extracts data row-wise from the matrix. MATLAB stores data column-wise. So it's more efficient to extract a column than a row from a matrix. Let's transpose X:
X=X';
Result=zeros(S,1);
for c=1:T-1
    d=(X(:,c+1:T)-X(:,c))-((c+1:T)-c)./T;
    Result=Result+sum(abs(d),2);
end
This is more than twice as fast as the code that indexes row-wise.
But of course the same trick can be applied to the code in the question, speeding it up by about 50%:
X=X';
Result=zeros(S,1);
for c=1:T-1
    for cc=c+1:T
        d=(X(:,cc)-X(:,c))-(cc-c)/T;
        Result=Result+abs(d);
    end
end
My takeaway message from this exercise is that MATLAB's JIT compiler has improved things a lot. Back in the day any sort of loop would grind code to a halt. Today it's not necessarily the worst approach, especially if all you do is call built-in functions.
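For reference, timings like these can be gathered with timeit, which handles warm-up and averaging. A minimal sketch (the two wrapper function names are placeholders I made up, standing for the code blocks above):
S = 1000; T = 10;
X = sort(rand(T,S),1,'ascend');
% wrap each variant in a function so timeit can call it repeatedly
tLoop  = timeit(@() loopVersion(X,S,T));          % original double loop
tTrans = timeit(@() transposedVersion(X',S,T));   % column-wise variant
fprintf('loop: %.3g s, transposed: %.3g s\n', tLoop, tTrans);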

The nchoosek(v,k) function generates all combinations of the elements in v taken k at a time. We can use this to generate all possible pairs of indices and thereby vectorize the loops. It appears that in this case the vectorization doesn't actually improve performance (at least on my machine with 2017a). Maybe someone will come up with a more efficient approach.
idx = nchoosek(1:T,2);
d = bsxfun(@minus,(X(idx(:,2),:) - X(idx(:,1),:)),(idx(:,2)-idx(:,1))/T);
Result = sum(abs(d),1)';
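To see what the index matrix looks like, here is the output for a small case, T = 4:
idx = nchoosek(1:4,2)
% idx =
%      1     2
%      1     3
%      1     4
%      2     3
%      2     4
%      3     4
Each row is one (m,n) pair with m < n, so the pairwise differences for all columns of X can be formed in a single indexing operation.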

Update: I timed the running times of the different proposals (10^5 trials each).
So it looks like transposing the matrix is the most effective intervention, and, amazingly, my original double-loop implementation beats the vectorized versions. However, in my hands (2017a) the improvement is only 16.6% over the original using the mean (18.2% using the median).
Maybe there is still room for improvement?

Related

Test for Duplicate Quickly in Matlab Array

I have two matrices S and T which have n columns and a row vector v of length n. By my construction, I know that S does not have any duplicates. What I'm looking for is a fast way to find out whether or not the row vector v appears as one of the rows of S. Currently I'm using the test
if min([sum(abs(S - repmat(v,size(S,1),1)),2); sum(abs(T - repmat(v,size(T,1),1)),2)]) ~= 0 ...
When I first wrote it, I had a for loop testing each row (I knew this would be slow; I was just making sure the whole thing worked first). I then changed this to defining a matrix of differences from the two components above and then summing, but this was slightly slower than the above.
All the advice I've found online says to use the function unique. However, this is very slow, as it sorts my matrix in the process. I don't need this, and it's a massive waste of time (it makes the whole thing really slow). This is a bottleneck in my code, taking nearly 90% of the run time. If anyone has any advice on how to speed this up, I'd be most appreciative!
I imagine there's a fairly straightforward way, but I'm not that experienced with MATLAB (somewhat, just not extensively). I know how to use the basic stuff, but not some of the more specialist functions.
Thanks!
To clarify, following Sardar_Usama's comment: I want this to work for a matrix with any number of rows and a single vector. I'd also forgotten to mention that the elements are all in the set {0,1,...,q-1}; I don't know whether that helps to make it faster!
You may want this:
ismember(v,S,'rows')
Swap the arguments to get logical indices of the duplicate rows of S instead:
ismember(S,v,'rows')
Alternatively, to test whether v is a member of S:
any(all(bsxfun(@eq,S,v),2))
while this returns logical indices of all duplicates:
all(bsxfun(@eq,S,v),2)
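Applied to the test in the question, a minimal sketch (S and T are the matrices and v the row vector from the question):
% true if v duplicates a row of S or a row of T
isDuplicate = ismember(v,S,'rows') || ismember(v,T,'rows');
The || short-circuits, so the check against T is skipped when a duplicate has already been found in S.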

vectorizing "for" loop with bidirectionally related variables

Last week I asked the following:
https://stackoverflow.com/questions/32658199/vectorizing-gibbs-sampler-in-matlab
Perhaps it was not clear what I want to do, so this might be clearer.
I would like to vectorize a "for" loop in MATLAB where some variables inside the loop are bidirectionally related. Here is an example:
A=2;
B=3;
for i=1:10000
    A=3*B;
    B=exp(A*(-1/2));
end
Thank you once again for your time.
A quick Excel calculation indicates that this quickly converges to 0.483908 (after far fewer than 10000 loops, so one way of speeding it up would be to check for convergence). If A and B always start at 2 and 3 respectively, you could just replace the loop with this value.
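A minimal sketch of that convergence check (the tolerance and the combined update step are my own choices, not from the question):
B = 3;
tol = 1e-12;
for i = 1:10000
    Bold = B;
    B = exp(-3*Bold/2);      % collapses A=3*B and B=exp(A*(-1/2)) into one step
    if abs(B-Bold) < tol
        break                % converges long before 10000 iterations
    end
end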
Alternatively, using some series analysis you might be able to come up with an analytical expression for B when i is large - although with the nested exponents deriving this is a bit beyond my own abilities!
Edit
A bit of googling reveals this: Wikipedia states that for a tetration of x to infinity (i.e. x^x^x^x^x...), the solution y satisfies y = x^y. In your case x = e^(-3/2), and indeed 0.483908 = (e^(-3/2))^0.483908, so 0.483908 is a solution. Not sure how you would exploit this though.
Wikipedia also gives a convergence condition, which might be of use to you: x must lie between e^(-e) and e^(1/e).
Final Edit (?)
It turns out you need the Lambert W function to solve equations of the form y = x^y. There seems to be no native function for this, but there are implementations on the File Exchange - see here and here.
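If all you need is the limit, you can also solve the fixed-point equation directly with base MATLAB's fzero, no Lambert W implementation required (a sketch; the starting guess is my own choice):
f = @(y) y - exp(-3*y/2);    % a root of this is the limit of the iteration
B = fzero(f,0.5)             % returns approximately 0.483908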

How to sort the solution vector in ascending order in each iteration with an ODE solver?

I've got an ODE system working perfectly. But now I want to sort the solution vector in ascending order in each iteration. I've tried many ways but I could not do it. Does anyone know how?
Here is a simplified code:
function dtemp = tanque1(t,temp)
for i=1:N
    if i==1
        dtemp(i)=(((-k(i)*At*(temp(i)-temp(i+1)))/(y))-(U*As(i)*(temp(i)-Tamb)))/(ro(i)*vol_nodo*cp(i));
    end
    if i>1 && i<N
        dtemp(i)=(((k(i)*At*(temp(i-1)-temp(i)))/(y))-((k(i)*At*(temp(i)-temp(i+1)))/(y))-(U*As(i)*(temp(i)-Tamb)))/(ro(i)*vol_nodo*cp(i));
    end
    if i==N
        dtemp(i)=(((k(i)*At*(temp(i-1)-temp(i)))/(y))-(U*As(i)*(temp(i)-Tamb)))/(ro(i)*vol_nodo*cp(i));
    end
end
end
Test Script:
inicial=343.15*ones(200,1);
[t,temp]=ode45(@tanque1,0:360:18000,inicial);
It looks like you have three different sets of differential equations depending on the index i of the solution vector. I don't think you mean "sort," but rather a more efficient way to implement what you've already done - basically vectorization. Provided I haven't accidentally made any typos (you should check), the following should do what you need:
function dtemp = tanque1(t,temp)
dtemp(1) = (-k(1)*At*(temp(1)-temp(2))/y-U*As(1)*(temp(1)-Tamb))/(ro(1)*vol_nodo*cp(1));
dtemp(2:N-1) = (k(2:N-1).*(diff(temp(2:N))-diff(temp(1:N-1)))*At/y-U*As(2:N-1).*(temp(2:N-1)-Tamb))./(vol_nodo*ro(2:N-1).*cp(2:N-1));
dtemp(N) = (k(N)*At*(temp(N-1)-temp(N))/y-U*As(N)*(temp(N)-Tamb))/(ro(N)*vol_nodo*cp(N));
You'll still need to define N and the other parameters, and ensure that dtemp is returned as a column vector. You could also try replacing N with the end keyword, which might be faster. The two uses of diff make the code shorter and, depending on the value of N, may also speed up the calculation; they could be replaced with temp(3:N)-temp(2:N-1) and temp(2:N-1)-temp(1:N-2), respectively. It may be possible to collapse these down to a single vectorized equation, but I'll leave that as an exercise for you to attempt if you like.
Note that I also removed a great many unnecessary parentheses for clarity. As you learn MATLAB you'll get used to the order of operations and figure out when parentheses are needed.
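Since you've been advised to check for typos, here is a quick equivalence test between the loop and the vectorized interior nodes (every constant below is an arbitrary made-up value, chosen only so the snippet runs on its own):
N = 10; At = 1; y = 0.1; U = 2; Tamb = 300; vol_nodo = 1e-3;
k = rand(N,1); As = rand(N,1); ro = 1000*ones(N,1); cp = 4186*ones(N,1);
temp = 343.15 + rand(N,1);
% interior nodes, loop version from the question
dtemp1 = zeros(N,1);
for i = 2:N-1
    dtemp1(i) = (k(i)*At*(temp(i-1)-temp(i))/y - k(i)*At*(temp(i)-temp(i+1))/y - U*As(i)*(temp(i)-Tamb))/(ro(i)*vol_nodo*cp(i));
end
% interior nodes, vectorized version from the answer
dtemp2 = zeros(N,1);
dtemp2(2:N-1) = (k(2:N-1).*(diff(temp(2:N))-diff(temp(1:N-1)))*At/y - U*As(2:N-1).*(temp(2:N-1)-Tamb))./(vol_nodo*ro(2:N-1).*cp(2:N-1));
max(abs(dtemp1(2:N-1)-dtemp2(2:N-1)))   % should be on the order of machine epsilon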

Vectorize matlab code to map nearest values in two arrays

I have two lists of timestamps and I'm trying to create a map between them that uses imu_ts as the true time and tries to find the nearest vicon_ts value to it. The output is a 3-by-d matrix where the first row is the imu_ts index, the third row is the Unix time at that index, and the second row is the index of the closest vicon_ts value at or above the timestamp in the same column.
Here's my code so far and it works, but it's really slow. I'm not sure how to vectorize it.
function tmap = sync_times(imu_ts, vicon_ts)
tstart = max(vicon_ts(1), imu_ts(1));
tstop = min(vicon_ts(end), imu_ts(end));
%Trim imu data to the overlapping time range
tmap(1,:) = find(imu_ts >= tstart & imu_ts <= tstop);
tmap(3,:) = imu_ts(tmap(1,:)); %Use imu_ts as ground truth
%Find nearest indices in vicon data and map
vic_t = 1;
for i = 1:size(tmap,2)
    while(vicon_ts(vic_t) < tmap(3,i))
        vic_t = vic_t + 1;
    end
    tmap(2,i) = vic_t;
end
The timestamps are already sorted in ascending order, so this is essentially an O(n) operation but because it's looped it runs slowly. Any vectorized ways to do the same thing?
Edit
It appears to be running faster than I expected or first measured, so this is no longer a critical issue. But I would be interested to see if there are any good solutions to this problem.
Have a look at knnsearch in MATLAB. Use the cityblock distance, and add one check: because cityblock uses absolute distance, the matched point in vicon_ts can fall below the imu_ts timestamp, in which case you take the next index. Another option (and the preferred one) is to write your own custom distance function.
I believe that your current method is sound, and I would not try to vectorize any further. Vectorization can actually be harmful when you are trying to optimize inner loops, especially when you know more about the context of your data (e.g. that it is sorted) than the MathWorks engineers can.
Things that I typically look for when I need to optimize a piece of code like this are:
All arrays are pre-allocated (this is the biggest driver of performance).
Fast inner loops use simple code (MATLAB does pretty effective JIT compilation on basic commands, but must interpret others).
Take advantage of any special features of your data, e.g. use algorithms suited to sorted data and exit loops early where possible.
You're already doing all this. I recommend no change.
A good start might be to get rid of the while; try something like:
for i = 1:size(tmap,2)
    C = vicon_ts - tmap(3,i);
    C(C < 0) = Inf;          %ignore vicon timestamps below the imu timestamp
    [~, tmap(2,i)] = min(C); %index of the first vicon timestamp at or above it
end

Parallelize or vectorize all-against-all operation on a large number of matrices?

I have approximately 5,000 matrices with the same number of rows and varying numbers of columns (20 x ~200). Each of these matrices must be compared against every other in a dynamic programming algorithm.
In this question, I asked how to perform the comparison quickly and was given an excellent answer involving a 2D convolution. Serially, iteratively applying that method, like so
list = who('data_matrix_prefix*');
H = cell(numel(list),numel(list));
for i=1:numel(list)
    for j=1:numel(list)
        if i ~= j
            eval(['H{i,j} = compare(' char(list(i)) ',' char(list(j)) ');']);
        end
    end
end
is fast for small subsets of the data (e.g. for 9 matrices, 9*9 - 9 = 72 calls are made in ~1 s, 870 calls in ~2.5 s).
However, operating on all the data requires almost 25 million calls.
I have also tried using deal() to make a cell array composed entirely of the next element in data, so I could use cellfun() in a single loop:
% who(), load() and struct2cell() calls place k data matrices in a 1D cell array called data.
nextData = cell(k,1);
for i=1:k
    [nextData{:}] = deal(data{i});
    H(:,i) = cellfun(@compare,data,nextData,'UniformOutput',false);
end
Unfortunately, this is not really any faster, because all the time is in compare(). Both of these code examples seem ill-suited for parallelization. I'm having trouble figuring out how to make my variables sliced.
compare() is totally vectorized; it uses matrix multiplication and conv2() exclusively (I am under the impression that all of these operations, including the cellfun(), should be multithreaded in MATLAB?).
Does anyone see a (explicitly) parallelized solution or better vectorization of the problem?
Note
I realize both my examples are inefficient: the first would be twice as fast if it computed only a triangular cell array, and the second still computes the self-comparisons as well. But the time savings from a good parallelization would be more like a factor of 16 (or 72 if I install MATLAB on everyone's machines).
Aside
There is also a memory issue. I used a couple of evals to save each column of H into its own file, with names like H1, H2, etc., and then clear Hi. Unfortunately, the saves are very slow...
Do the following hold?
compare(a,b) == compare(b,a)
compare(a,a) == 1
If so, change your loop
for i=1:numel(list)
    for j=1:numel(list)
        ...
    end
end
to
for i=1:numel(list)
    for j=i+1:numel(list)
        ...
    end
end
and deal with the symmetry and identity cases separately. This will cut your calculation time in half.
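Concretely, the symmetric fill might look like this (a sketch built on the eval-based loop from the question, assuming compare is symmetric):
for i=1:numel(list)
    for j=i+1:numel(list)
        eval(['H{i,j} = compare(' char(list(i)) ',' char(list(j)) ');']);
        H{j,i} = H{i,j};   % symmetry: compare(a,b) == compare(b,a)
    end
end
The diagonal entries can then be set directly, e.g. H{i,i} = 1, if compare(a,a) == 1 holds.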
The second example can easily be sliced for use with the Parallel Computing Toolbox. This toolbox distributes iterations of your code among up to 8 local workers. If you want to run the code on a cluster, you also need MATLAB Distributed Computing Server.
%# who(), load() and struct2cell() calls place k data matrices in a 1D cell array called data.
parfor i=1:k-1 %# this will run the loop in parallel with the Parallel Computing Toolbox
    %# only make the necessary comparisons
    H(i+1:k,i) = cellfun(@compare,data(i+1:k),repmat(data(i),k-i,1),'UniformOutput',false);
    %# if the above doesn't work, try this
    hSlice = cell(k,1);
    hSlice(i+1:k) = cellfun(@compare,data(i+1:k),repmat(data(i),k-i,1),'UniformOutput',false);
    H(:,i) = hSlice;
end
If I understand correctly, you have to perform 5000^2 matrix comparisons? Rather than trying to parallelise the compare function, perhaps you should think of your problem as being composed of 5000^2 tasks. The MATLAB Parallel Computing Toolbox supports task-based parallelism. Unfortunately my experience with PCT is with parallelisation of large linear-algebra-type problems, so I can't really tell you much more than that. The documentation will undoubtedly help you more.
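In newer MATLAB releases, one way to express that task-based formulation is parfeval, which queues each comparison as an independent task on the pool. A rough sketch (assuming data is a k-by-1 cell array of the matrices and a parallel pool is open; for the full 5000-matrix problem you would batch the ~12.5 million pairs rather than queue them all at once):
idx = nchoosek(1:k,2);            % all unordered pairs (i,j) with i < j
nTasks = size(idx,1);
for t = 1:nTasks
    f(t) = parfeval(@compare, 1, data{idx(t,1)}, data{idx(t,2)});
end
H = cell(k,k);
for t = 1:nTasks
    [tid, result] = fetchNext(f); % collect results as they finish
    H{idx(tid,1), idx(tid,2)} = result;
end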