I am working on a big Matlab testbench with thousands of lines of code, and I am trying to optimize the most time-consuming routines, determined via the profiler in Matlab.
I noticed that one of those most time-consuming operations is the following:
list = list((list(:,1) >= condxMin) & (list(:,1) <= condxMax) & (list(:,2) >= condyMin) & (list(:,2) <= condyMax),:);
Concretely, I have a big list of coordinates (50000 x 2 at least) and I want to restrict this list so as to keep only the points that satisfy both of these conditions:
list(:,1) must be within [condxMin, condxMax] and list(:,2) within [condyMin, condyMax].
I was wondering if there was a more efficient way to do it, considering that this line of code is already vectorized.
Also, I am wondering whether Matlab does short-circuiting here. If it doesn't, I don't see a way to exploit short-circuiting without breaking the vectorization and using a loop instead, where I would write something like this:
j = 1;
for i = 1:size(list,1)
    if list(i,1) >= condxMin && list(i,1) <= condxMax && ...
       list(i,2) >= condyMin && list(i,2) <= condyMax
        newlist(j,1:2) = list(i,1:2);
        j = j + 1;
    end
end
Thank you in advance for your answer :)
Looks like the original vectorized version is the fastest way I can find, barring any really clever ideas. Matlab does do short-circuiting, but not for matrices. The loop implementation you showed would be very slow, since you're not pre-allocating (nor are you able to pre-allocate the full matrix).
I tried a couple of variations on this, including a for loop which used a short-circuited && to determine whether each index was bad or not, but no such luck. On the plus side, the vectorized version you've got runs at 0.21s for a 5 million element coordinate list.
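For reference, here is a minimal timing sketch along those lines; the data and bounds are made up, only the filtering expression is the one from the question (timeit requires R2013b or later):
list = rand(5e6, 2); % 5 million random coordinates
condxMin = 0.25; condxMax = 0.75;
condyMin = 0.25; condyMax = 0.75;
f = @() list((list(:,1) >= condxMin) & (list(:,1) <= condxMax) & ...
             (list(:,2) >= condyMin) & (list(:,2) <= condyMax), :);
timeit(f) % the vectorized filter took about 0.21 s in the tests above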
I have two matrices S and T which have n columns and a row vector v of length n. By my construction, I know that S does not have any duplicates. What I'm looking for is a fast way to find out whether or not the row vector v appears as one of the rows of S. Currently I'm using the test
if min([sum(abs(S - repmat(v,size(S,1),1)),2); sum(abs(T - repmat(v,size(T,1),1)),2)]) ~= 0 ...
When I first wrote it, I had a for loop testing each row (I knew this would be slow; I was just making sure the whole thing worked first). I then changed this to build a difference matrix from the two components above and then sum it, but that was slightly slower than the test above.
All the stuff I've found online says to use the function unique. However, this is very slow because it also sorts my matrix. I don't need that, and it's a massive waste of time (it makes the process really slow). This is a bottleneck in my code -- taking nearly 90% of the run time. If anyone has any advice on how to speed this up, I'd be most appreciative!
I imagine there's a fairly straightforward way, but I'm not that experienced with Matlab (I know how to use the basic stuff, just not some of the more specialist functions).
Thanks!
To clarify, following Sardar_Usama's comment: I want this to work for a matrix with any number of rows and a single vector. I'd also forgotten to mention that the elements are all in the set {0,1,...,q-1}. I don't know whether that helps to make it faster!
You may want this:
ismember(v,S,'rows')
and swap the arguments S and v to get the logical indices of the duplicate rows:
ismember(S,v,'rows')
Or, to test whether v is a member of S without ismember:
any(all(bsxfun(@eq,S,v),2))
while this returns the logical indices of all duplicates:
all(bsxfun(@eq,S,v),2)
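As a quick sanity check on a toy example (values made up):
S = [0 1 2;
     1 0 2;
     2 2 0];
v = [1 0 2];
ismember(v, S, 'rows')          % logical 1: v matches row 2 of S
find(all(bsxfun(@eq, S, v), 2)) % returns 2, the index of that row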
I've got an ODE system working perfectly. But now I want to sort the solution vector in ascending order at each iteration. I've tried many ways but I could not do it. Does anyone know how?
Here is a simplified code:
function dtemp = tanque1(t,temp)
for i=1:N
    if i==1
        dtemp(i)=(((-k(i)*At*(temp(i)-temp(i+1)))/(y))-(U*As(i)*(temp(i)-Tamb)))/(ro(i)*vol_nodo*cp(i));
    end
    if i>1 && i<N
        dtemp(i)=(((k(i)*At*(temp(i-1)-temp(i)))/(y))-((k(i)*At*(temp(i)-temp(i+1)))/(y))-(U*As(i)*(temp(i)-Tamb)))/(ro(i)*vol_nodo*cp(i));
    end
    if i==N
        dtemp(i)=(((k(i)*At*(temp(i-1)-temp(i)))/(y))-(U*As(i)*(temp(i)-Tamb)))/(ro(i)*vol_nodo*cp(i));
    end
end
end
Test Script:
inicial = 343.15*ones(200,1);
[t, temp] = ode45(@tanque1, 0:360:18000, inicial);
It looks like you have three different sets of differential equations depending on the index i of the solution vector. I don't think you mean "sort," but rather a more efficient way to implement what you've already done - basically vectorization. Provided I haven't accidentally made any typos (you should check), the following should do what you need:
function dtemp = tanque1(t,temp)
dtemp = zeros(N,1); % preallocate; ode45 expects a column vector back
dtemp(1) = (-k(1)*At*(temp(1)-temp(2))/y-U*As(1)*(temp(1)-Tamb))/(ro(1)*vol_nodo*cp(1));
dtemp(2:N-1) = (k(2:N-1).*(diff(temp(2:N))-diff(temp(1:N-1)))*At/y-U*As(2:N-1).*(temp(2:N-1)-Tamb))./(vol_nodo*ro(2:N-1).*cp(2:N-1));
dtemp(N) = (k(N)*At*(temp(N-1)-temp(N))/y-U*As(N)*(temp(N)-Tamb))/(ro(N)*vol_nodo*cp(N));
You'll still need to define N and the other parameters; the zeros(N,1) preallocation ensures that dtemp is returned as the column vector ode45 expects. You could also try replacing N with the end keyword, which might be faster. The two uses of diff make the code shorter and, depending on the value of N, they may also speed up the calculation. The whole parenthesized difference term could equivalently be written out as (temp(1:N-2)-temp(2:N-1))-(temp(2:N-1)-temp(3:N)). It may be possible to collapse these down to a single vectorized equation, but I'll leave that as an exercise for you to attempt if you like.
Note that I also removed a great many unnecessary parentheses for clarity. As you learn Matlab you'll get used to the order of operations and figure out when parentheses are needed.
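In case it helps, here is one hypothetical way to supply the missing parameters without globals: let tanque1 accept them as extra inputs and bind them with an anonymous function. Every value below is a placeholder of mine, not from the question:
% Assumes the signature is extended to
% function dtemp = tanque1(t, temp, N, k, At, y, U, As, Tamb, ro, vol_nodo, cp)
N = 200;
k = 0.6*ones(N,1); ro = 1000*ones(N,1); cp = 4186*ones(N,1);
As = 0.02*ones(N,1); At = 0.05; y = 0.01; U = 10;
Tamb = 298.15; vol_nodo = 1e-4;
odefun = @(t, temp) tanque1(t, temp, N, k, At, y, U, As, Tamb, ro, vol_nodo, cp);
inicial = 343.15*ones(N, 1);
[t, temp] = ode45(odefun, 0:360:18000, inicial);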
I have two lists of timestamps and I'm trying to create a map between them that uses the imu_ts as the true time and tries to find the nearest vicon_ts value to it. The output is a 3xd matrix where the first row is the imu_ts index, the third row is the unix time at that index, and the second row is the index of the closest vicon_ts value above the timestamp in the same column.
Here's my code so far and it works, but it's really slow. I'm not sure how to vectorize it.
function tmap = sync_times(imu_ts, vicon_ts)
tstart = max(vicon_ts(1), imu_ts(1));
tstop = min(vicon_ts(end), imu_ts(end));
%Trim imu data to the overlapping time range
tmap(1,:) = find(imu_ts >= tstart & imu_ts <= tstop);
tmap(3,:) = imu_ts(tmap(1,:)); %Use imu_ts as ground truth
%Find nearest indices in vicon data and map
vic_t = 1;
for i = 1:size(tmap,2)
    %Advance to the first vicon timestamp at or above this imu time
    while vicon_ts(vic_t) < tmap(3,i)
        vic_t = vic_t + 1;
    end
    tmap(2,i) = vic_t;
end
The timestamps are already sorted in ascending order, so this is essentially an O(n) operation but because it's looped it runs slowly. Any vectorized ways to do the same thing?
Edit
It appears to be running faster than I expected or first measured, so this is no longer a critical issue. But I would be interested to see if there are any good solutions to this problem.
Have a look at knnsearch in MATLAB. Use cityblock distance, and also add the constraint that the matched point in vicon_ts must not be below its neighbour in imu_ts; if it is, take the next index. This is required because cityblock takes absolute distance. Another (preferred) option is to write your own custom distance function.
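A rough sketch of that suggestion (my code, not the answerer's; knnsearch needs the Statistics Toolbox, and in 1-D cityblock distance is just the absolute time difference):
vts = vicon_ts(:); % work with a column copy
idx = knnsearch(vts, tmap(3,:)', 'Distance', 'cityblock'); % nearest in either direction
below = vts(idx) < tmap(3,:)'; % neighbour fell below the imu time
idx(below) = idx(below) + 1;   % bump those to the next timestamp
tmap(2,:) = idx;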
I believe that your current method is sound, and I would not try and vectorize any further. Vectorization can actually be harmful when you are trying to optimize some inner loops, especially when you know more about the context of your data (e.g. it is sorted) than the Mathworks engineers can know.
Things that I typically look for when I need to optimize a piece of code like this are:
All arrays are pre-allocated (this is the biggest driver of performance)
Fast inner loops use simple code (Matlab does pretty effective JIT on basic commands, but must interpret others.)
Take advantage of any special data features that you have, e.g. use algorithms appropriate for sorted data and early exit conditions in some loops.
You're already doing all this. I recommend no change.
A good start might be to get rid of the while loop; try something like:
for i = 1:size(tmap,2)
    C = vicon_ts - tmap(3,i); % signed distance to every vicon timestamp
    C(C < 0) = Inf;           % ignore timestamps below the imu time
    [~, tmap(2,i)] = min(C);  % index of the nearest one at or above
end
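Since both lists are already sorted, the whole mapping can also be done in one vectorized call. This is my own sketch (interp1's 'next' method requires R2013a or later, and assumes strictly increasing timestamps); it works because the imu times have already been trimmed to lie inside the vicon range:
% For each imu time, return the index of the first vicon_ts at or above it.
n = numel(vicon_ts);
tmap(2,:) = interp1(vicon_ts(:), (1:n)', tmap(3,:), 'next');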
Instead of concatenating results like this, is there another way to do the following? I mean, the loop will persist, but can vector=[vector,sum(othervector)]; be written in some other way?
vector = [];
while a - b ~= 0
    othervector = sum(something'); % returns a vector like [1; 3]
    vector = [vector, sum(othervector)];
    ...
end
vector = vector./100;
Well, this really depends on what you are trying to do. Starting from this code, you might need to think about the actions you are performing and whether you can change that behavior. Since the snippet you present shows few dependencies (i.e. how a, b, something, and vector are related), I think we can only present vague solutions.
I suspect you want to get rid of this code to avoid the cost of constantly moving the array around as you concatenate new results onto it.
First of all, just make sure that the slowest portion of your application is actually caused by this line. Take a look at the Matlab profiler. If that portion of your code is not a major time hog, don't bother spending a lot of time on improving it (and just tell mlint to ignore that line of code).
If you can analyse your code enough to ensure that you have a constant number of iterations, you can preallocate your variables and prevent any performance penalty (i.e. write a for loop in the worst case, or better yet truly vectorized code). It may also help if you can "factor out" some variables, i.e. move any loop invariants outside the loop. The result might look something like this:
vector = zeros(1,100);
iIteration = 1;
while a - b ~= 0
    othervector = sum(something);
    vector(iIteration) = sum(othervector);
    iIteration = iIteration + 1;
end
If the nature of your code doesn't allow this (e.g. you are iterating to attain convergence; in that case, beware of checking equality of doubles and always include a tolerance), there are some tricks you can perform to improve performance, but most of them are just rules of thumb or ways to make the best of a bad situation. In this last case, you can add some maintenance code to get slightly better performance (but what you gain in time consumption, you lose in memory usage).
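For instance, the loop condition could be made robust against floating-point equality like this (the tolerance value is purely illustrative):
tol = 1e-9; % pick a tolerance that suits your problem
while abs(a - b) > tol
    % loop body as before
end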
Let's say you expect the code to run about 100*n iterations most of the time; then you might try something like this:
iIteration = 0;
expectedIterations = 100;
vector = [];
while a - b ~= 0
    if mod(iIteration, expectedIterations) == 0
        vector = [vector zeros(1,expectedIterations)];
    end
    iIteration = iIteration + 1;
    vector(iIteration) = sum(sum(something));
    ...
end
vector = vector(1:iIteration); % throw away uninitialized elements
vector = vector/100;
It might not look pretty, but instead of resizing the array every iteration, the array only gets resized every 100th iteration. I haven't run this piece of code, but I've used very similar code in a former project.
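A common variant of the same trick (a sketch of mine, in the same spirit) is to double the capacity whenever it runs out, so the number of resizes grows only logarithmically with the iteration count:
capacity = 100;
vector = zeros(1, capacity);
iIteration = 0;
while a - b ~= 0
    iIteration = iIteration + 1;
    if iIteration > capacity
        capacity = 2*capacity;
        vector(capacity) = 0; % grows the array in one step
    end
    vector(iIteration) = sum(sum(something));
end
vector = vector(1:iIteration)./100;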
If you want to optimize for speed, you should preallocate the vector and have a counter for the index as @Egon answered already.
If you just want to have a different way of writing vector=[vector,sum(othervector)];, you could use vector(end + 1) = sum(othervector); instead.
I have been happily using MATLAB to solve some project Euler problems. Yesterday, I wrote some code to solve one of these problems (14). When I write code containing long loops I always test the code by running it with short loops. If it runs fine and it does what it's supposed to do I assume this will also be the case when the length of the loop is longer.
This assumption turned out to be wrong. While executing the code below, MATLAB ran out of memory somewhere around the 75000th iteration.
c=1;
e=1000000;
for s=c:e
    n=s;
    t=1;
    while n>1
        a(s,t)=n;
        if mod(n,2) == 0
            n=n/2;
        else
            n=3*n+1;
        end
        a(s,t+1)=n;
        t=t+1;
    end
end
What can I do to prevent this from happening? Do I need to clear variables or free up memory somewhere in the process? Will saving the resulting matrix a to the hard drive help?
Here is the solution, staying as close as possible to your code; the main difference is that you only need a 1D array:
c=1;
e=1000000;
a=zeros(e,1);
for s=c:e
    n=s;
    t=1;
    while n>1
        if mod(n,2) == 0
            n=n/2;
        else
            n=3*n+1;
        end
        t=t+1;
    end
    a(s)=t;
end
[f, g] = max(a);
This takes a few seconds (note the preallocation), and the result g unlocks the Euler 14 door.
Simply put, there's not enough memory to hold the matrix a.
Why are you making a two-dimensional matrix here anyway? You're storing information that you can recompute just as fast as you can look it up.
There's a much better thing to memoize here.
EDIT: Looking again, you're not even using the stuff you put in that matrix! Why are you bothering to create it?
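To make the memoization idea concrete, here is one possible sketch (mine, not the answerer's): store the chain length of every seed already solved, and stop each new chain as soon as it drops below its starting seed:
e = 1000000;
lens = zeros(e,1); % lens(s) is the Collatz chain length starting at s
lens(1) = 1;
for s = 2:e
    n = s;
    steps = 0;
    while n >= s % iterate until we land on a smaller, already-solved seed
        if mod(n,2) == 0
            n = n/2;
        else
            n = 3*n+1;
        end
        steps = steps + 1;
    end
    lens(s) = lens(n) + steps;
end
[bestLen, bestSeed] = max(lens);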
The code appears to be storing every sequence in a different row of a matrix. The number of columns of that matrix will be equal to the length of the longest sequence currently found. This means that a sequence of two numbers will be padded with a bunch of right hand zeros.
I am sure you can see how incredibly inefficient this is. That may be the point of the exercise, or it will soon become the point for you with this implementation.
Better is to keep a variable like "seed of longest sequence found", which would store the seed of the longest solution, along with a "length of longest sequence found" variable to store its length. As you try each new seed, if it wins the title of longest, update those two variables.
This will keep only what you need in memory.
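Relative to the array-based answer above, the only change needed is to swap the array for two scalars (a sketch, reusing that answer's chain-length loop):
bestLen = 0;
bestSeed = 0;
for s = c:e
    % ... compute the chain length t of seed s exactly as above ...
    if t > bestLen
        bestLen = t;
        bestSeed = s;
    end
end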
Short Answer: Use a 2D sparse matrix instead.
Long Answer: http://www.mathworks.com/access/helpdesk/help/techdoc/ref/sparse.html