Faster finding of rows in MATLAB

I have a very long list of x,y,z coordinates of items (200K-800K rows x 3 columns) that I need to search through to find the items nearest to a particular point. The final list always has at least 1 item and usually fewer than 10.
I've tried a few simple search methods to build this list but I've hit a bit of a limit - here are my best methods to date:
Method 1 - find Indexing
xInd = find(PositionsList(:,1) > (searchPoint(i,1) - searchRad) & PositionsList(:,1) < (searchPoint(i,1) + searchRad));
yInd = find(PositionsList(xInd,2) > (searchPoint(i,2) - searchRad) & PositionsList(xInd,2) < (searchPoint(i,2) + searchRad));
xyInd = xInd(yInd);
zInd = find(PositionsList(xyInd,3) > (searchPoint(i,3) - searchRad) & PositionsList(xyInd,3) < (searchPoint(i,3) + searchRad));
xyzInd = xyInd(zInd);
Method 2 - Brute force distance search
neighbours = sqrt(sum(bsxfun(@minus,searchPoint(i,:),PositionsList).^2,2)) <= searchRad;
xyzInd = find(neighbours);
Method 3 - logical Indexing
xInd = PositionsList(:,1) > (searchPoint(i,1) - searchRad) & PositionsList(:,1) < (searchPoint(i,1) + searchRad);
newlist = PositionsList(xInd,:);
yzInd = newlist(:,2) > (searchPoint(i,2) - searchRad) & newlist(:,2) < (searchPoint(i,2) + searchRad)...
    & newlist(:,3) > (searchPoint(i,3) - searchRad) & newlist(:,3) < (searchPoint(i,3) + searchRad);
xyzInd = newlist(yzInd,:);
For my data method 1 is much quicker - for a small list of 20,000 particles it runs in about 25 s, whereas method 2 runs in about 170 s. Method 2 is slightly more accurate, though: because it tests true Euclidean distance (a sphere) rather than a bounding box, it picks up dubious neighbours (outliers at the edge of the search area) much less often.
My code calls this search several thousand times, so I'm keen to save as much time on it as possible - it currently makes up about 85% of my run-time. I've read that a MEX implementation may be much quicker, but I'm not familiar with MEX. I've also tried a third method using logical indexing rather than find (method 3 above), but it is slower at 35 s.
Can someone help make this search faster? Maybe a MEX function?

Following on from obchardon's suggestion I had a look at k-d trees for searching, found kd-tree for matlab on the File Exchange, and began testing with my data.
Now I can complete a search in under 5 s for 500K coordinates, compared with my previous best of 25 s for 20K coordinates. A huge improvement.
The only downside is that the order in which it returns the neighbours seems to be random, which means I have some slight "massaging" of the results to do, but with the time saved this is more than acceptable.
Thanks for the great suggestion!!
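For anyone following along, here is a minimal sketch of the same kind of range query using MATLAB's built-in KDTreeSearcher (this assumes the Statistics and Machine Learning Toolbox is available; the File Exchange submission mentioned above has its own, different API), with an explicit sort to put the neighbours in distance order:
tree = KDTreeSearcher(PositionsList);            % build the tree once, reuse for every query
[idx, dist] = rangesearch(tree, searchPoint(i,:), searchRad);
% idx{1} holds the neighbour indices for this query point; sort them by
% distance explicitly, since the return order is not guaranteed
[~, order] = sort(dist{1});
xyzInd = idx{1}(order);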

Related

Random pivot selection for quicksort not working

I am trying to choose a random index for quicksort, but for some reason the array is not sorting. In fact, the algorithm sometimes returns a different array (e.g. input [2,1,4] comes out as [1,1,4]). Any help would be much appreciated. This algorithm works if, instead of choosing a random index, I always choose the first element of the array as the pivot.
from random import randint

def quicksort(array):
    if len(array) < 2:
        return array
    else:
        random_pivot_index = randint(0, len(array) - 1)
        pivot = array[random_pivot_index]
        less = [i for i in array[1:] if i <= pivot]
        greater = [i for i in array[1:] if i > pivot]
        return quicksort(less) + [pivot] + quicksort(greater)
less = [i for i in array[1:] if i <= pivot]
You're including elements equal to the pivot value in less here.
But here, you also include the pivot value explicitly in the result:
return quicksort(less) + [pivot] + quicksort(greater)
Your own example shows the effect: with [2,1,4], if the pivot 1 is chosen at index 1, less is built from array[1:] = [1,4], giving [1], so the result is [1] + [1] + [4] = [1,1,4] - the 1 is duplicated, and the 2 skipped by array[1:] is lost.
Instead try it with just:
return quicksort(less) + quicksort(greater)
Incidentally, though this does divide-and-conquer in the same way QuickSort does, it's not really an implementation of that algorithm: actual QuickSort sorts the elements in place, so your version will suffer from the run-time overhead associated with allocating and concatenating the auxiliary arrays.
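For completeness, here is a sketch of one variant that keeps the [pivot] term but removes exactly the chosen pivot element by index before partitioning, so the pivot is neither duplicated nor lost (one possible fix among several):
from random import randint

def quicksort(array):
    if len(array) < 2:
        return array
    pivot_index = randint(0, len(array) - 1)
    pivot = array[pivot_index]
    # partition everything except the single chosen pivot element
    rest = array[:pivot_index] + array[pivot_index + 1:]
    less = [i for i in rest if i <= pivot]
    greater = [i for i in rest if i > pivot]
    return quicksort(less) + [pivot] + quicksort(greater)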

Looking for advice on improving a custom function in AnyLogic

I'm estimating last-mile delivery costs in a large urban network using by-route distances. I have over 8000 customer agents and over 100 retail store agents plotted on a GIS map using lat/long coordinates. Each customer receives deliveries from its nearest store (by route). The goal is to get two distance measures in this network for each store:
d0_bar: the average distance from a store to all of its assigned customers
d1_bar: the average distance between all customers common to a single store
I've written a startup function with a simple foreach loop to assign each customer to a store based on by-route distance (customers have a parameter, "customer.pStore" of Store type). This function also adds, in turn, each customer to the store agent's collection of customers ("store.colCusts"; it's an array list with Customer type elements).
Next, I have a function that iterates through the store agent population and calculates the two average distance measures above (d0_bar & d1_bar) and writes the results to a txt file (see code below). The code works, fortunately. However, the problem is that with such a massive dataset, the process of iterating through all customers/stores and retrieving distances via the openstreetmap.org API takes forever. It's been initializing ("Please wait...") for about 12 hours. What can I do to make this code more efficient? Or, is there a better way in AnyLogic of getting these two distance measures for each store in my network?
Thanks in advance.
//for each store, record all customers assigned to it
for (Store store : stores)
{
    distancesStore.print(store.storeCode + "," + store.colCusts.size() + ","
            + store.colCusts.size()*(store.colCusts.size()-1)/2 + ",");

    //calculates average distance from store j to customer nodes that belong to store j
    double sumFirstDistByStore = 0.0;
    int h = 0;
    while (h < store.colCusts.size())
    {
        sumFirstDistByStore += store.distanceByRoute(store.colCusts.get(h));
        h++;
    }
    distancesStore.print((sumFirstDistByStore/store.colCusts.size())/1609.34 + ",");

    //calculates average of distances between all customer nodes belonging to store j
    double custDistSumPerStore = 0.0;
    int loopLimit = store.colCusts.size();
    int i = 0;
    while (i < loopLimit - 1)
    {
        int j = 1;
        while (j < loopLimit)
        {
            custDistSumPerStore += store.colCusts.get(i).distanceByRoute(store.colCusts.get(j));
            j++;
        }
        i++;
    }
    distancesStore.print((custDistSumPerStore/(loopLimit*(loopLimit-1)/2))/1609.34);
    distancesStore.println();
}
Firstly a few simple comments:
Have you tried timing a single distanceByRoute call? E.g. can you try running store.distanceByRoute(store.colCusts.get(0)); just to see how long a single call takes on your system. Routing is generally pretty slow, but it would be good to know what the speed limit is.
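For example, a rough way to time one call (this assumes the store has at least one assigned customer; traceln writes to the AnyLogic model log):
long t0 = System.nanoTime();
double d = store.distanceByRoute(store.colCusts.get(0));
traceln("one distanceByRoute call took " + (System.nanoTime() - t0) / 1e6 + " ms");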
The first simple change is to use Java parallelism. Instead of using this:
for (Store store : stores)
{ ...
use this:
stores.parallelStream().forEach(store -> {
...
});
This will process the stores in parallel using the standard Java Streams API.
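One caveat worth a sketch (it reuses your identifiers and assumes distanceByRoute is safe to call concurrently): the shared distancesStore writer should only be touched from one thread, so compute in parallel first and write the lines sequentially afterwards:
// expensive routing work happens in parallel; file output stays single-threaded
java.util.List<String> lines = stores.parallelStream()
    .map(store -> {
        double sum = 0.0;
        for (Customer c : store.colCusts) {
            sum += store.distanceByRoute(c);
        }
        double d0Bar = (sum / store.colCusts.size()) / 1609.34; // metres to miles
        return store.storeCode + "," + d0Bar;
    })
    .collect(java.util.stream.Collectors.toList());
// write sequentially, from one thread only
for (String line : lines) {
    distancesStore.print(line);
    distancesStore.println();
}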
It also looks like the second loop - where the average distance between customers is calculated - doesn't take account of mirroring; that is to say, the distance a->b is equal to b->a. Hence, for example, 4 customers require only 6 calculations: 1->2, 1->3, 1->4, 2->3, 2->4, 3->4. In the case of 4 customers, your second while loop instead performs 9 calculations (i=0, j in {1,2,3}; i=1, j in {1,2,3}; i=2, j in {1,2,3}), which double-counts pairs and even includes self-distances such as i=1, j=1 - that seems wrong unless I am misunderstanding your intention.
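Using your identifiers, a sketch of the pair loop with that fixed - the inner index starts at i + 1, so each unordered pair is routed exactly once and no self-distances are computed:
double custDistSumPerStore = 0.0;
int loopLimit = store.colCusts.size();
for (int i = 0; i < loopLimit - 1; i++) {
    for (int j = i + 1; j < loopLimit; j++) {
        // each unordered pair (i, j) is visited exactly once
        custDistSumPerStore += store.colCusts.get(i).distanceByRoute(store.colCusts.get(j));
    }
}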
Generally, for long running operations it is a good idea to include some traceln to show progress with associated timing.
Please have a look at the above and post results. With more information, additional performance improvements may be possible.

limit random complex number to a given range

I can get the real part of a random number to stay within a given range, but the imaginary part doesn't stay within the range I set. See the MATLAB/Octave code below.
xmin=-.5
xmax=1
n=3
x=xmin+rand(1,n)*(xmax-xmin)+(rand(1,n)-(xmax-xmin))*1i
x=x(:)
The real part works but the imaginary part isn't limited to -0.5 to 1:
0.2419028288441536 - 0.6579427654754871i
0.2712527227134944 - 1.451964497492678i
0.3245051849394858 - 1.107556052779179i
You have two mistakes:
x=xmin+rand(1,n)*(xmax-xmin)+(xmin + rand(1,n)*(xmax-xmin))*1i
You should add xmin to the sum and change - to * in the second part.
I've added some spaces to your code so the difference is more obvious:
x = xmin+rand(1,n)*(xmax-xmin) + ( rand(1,n)-(xmax-xmin) )*1i
    ^^^ correct                  ^^^ not correct: missing `xmin+`
                                     (and as OmG noted, also a `-` instead of a `*`)
One good way to reduce the number of bugs is by avoiding code duplication. You could for example write:
rand_sequence = @(n,xmin,xmax) xmin+rand(1,n)*(xmax-xmin);
x = rand_sequence(n,xmin,xmax) + 1i*rand_sequence(n,xmin,xmax)
(This looks like more code, but the more complicated code logic is not duplicated.)
Or like this:
x = xmin + (rand(1,n)+1i*rand(1,n)) * (xmax-xmin);
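Whichever variant you use, a quick sanity check is easy to add (a minimal sketch; both parts should now lie in [xmin, xmax]):
x = xmin + (rand(1,n) + 1i*rand(1,n)) * (xmax - xmin);
assert(all(real(x) >= xmin & real(x) <= xmax))   % real part in range
assert(all(imag(x) >= xmin & imag(x) <= xmax))   % imaginary part in range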

Why is while loop much slower than for loop in Swift?

I'm trying to evaluate the performance of these two loop constructs; I tried the numbers from 0 to 99999 using a for-in loop and a while loop.
for i in 0..<s.count - 9 {
    print("\(i)")
}

var j = 0
while j < s.count - 9 {
    print("\(j)")
    j = j + 1
}
Both loops print the current number and increment it by 1 until it reaches 99999.
It turns out the for-in loop takes about 0.91 seconds to go through every number, while the while loop takes much, much longer (around 80.8 seconds).
I searched on Internet and documents, but cannot figure out why.
What cause this huge performance difference?

How to select the last column of numbers from a table created by FoldList in Mathematica

I am new to Mathematica and I am having difficulties with one thing. I have this Table that generates 13 numbers (12 numbers plus 1 starting number) 10,000 times. I need to create a Histogram from all 10,000 of the 13th numbers. I hope it's quite clear; it's quite tricky to explain.
This is the table:
F = Table[(Xi = RandomVariate[NormalDistribution[], 12];
    Mu = -0.00644131;
    Sigma = 0.0562005;
    t = 1/12; s = 0.6416;
    FoldList[(#1*Exp[(Mu - Sigma^2/2)*t + Sigma*Sqrt[t]*#2]) &, s, Xi]),
  {SeedRandom[2]; 10000}]
The result for the following histogram could be a table that gathers all the 13th numbers into one table - then it would be quite easy to create a histogram. Maybe with Select? Or maybe you know other ways to solve this.
You can access different parts of a list using Part or (depending on what parts you need) some of the more specialised commands, such as First, Rest, Most and (the one you need) Last. As noted in comments, Histogram[Last /@ F] or Histogram[F[[All, -1]]] will work fine.
Although it wasn't part of your question, I would like to note some things you could do for your specific problem that will speed it up enormously. You are defining Mu, Sigma etc. 10,000 times, because they are inside the Table command. You are also recalculating (Mu - Sigma^2/2)*t + Sigma*Sqrt[t] 120,000 times, even though it is a constant, because you have it inside the FoldList inside the Table.
On my machine:
F = Table[(Xi = RandomVariate[NormalDistribution[], 12];
    Mu = -0.00644131;
    Sigma = 0.0562005;
    t = 1/12; s = 0.6416;
    FoldList[(#1*Exp[(Mu - Sigma^2/2)*t + Sigma*Sqrt[t]*#2]) &, s, Xi]),
  {SeedRandom[2]; 10000}]; // Timing
{4.19049, Null}
This alternative is ten times faster:
F = Module[{Xi, beta},
   With[{Mu = -0.00644131, Sigma = 0.0562005, t = 1/12, s = 0.6416},
    beta = (Mu - Sigma^2/2)*t + Sigma*Sqrt[t];
    Table[(Xi = RandomVariate[NormalDistribution[], 12];
      FoldList[(#1*Exp[beta*#2]) &, s, Xi]),
     {SeedRandom[2]; 10000}]]]; // Timing
{0.403365, Null}
I use With for the local constants and Module for the things that are either redefined within the Table (Xi) or are calculations based on the local constants (beta). This question on the Mathematica StackExchange will help explain when to use Module versus Block versus With. (I encourage you to explore the Mathematica StackExchange further, as this is where most of the Mathematica experts are hanging out now.)
For your specific code, the use of Part isn't really required. Instead of using FoldList, just use Fold. It only retains the final number in the folding, which is identical to the last number in the output of FoldList. So you could try:
FF = Module[{Xi, beta},
   With[{Mu = -0.00644131, Sigma = 0.0562005, t = 1/12, s = 0.6416},
    beta = (Mu - Sigma^2/2)*t + Sigma*Sqrt[t];
    Table[(Xi = RandomVariate[NormalDistribution[], 12];
      Fold[(#1*Exp[beta*#2]) &, s, Xi]),
     {SeedRandom[2]; 10000}]]];
Histogram[FF]
Calculating FF in this way is even a little faster than the previous version. On my system Timing reports 0.377 seconds - but such a difference from 0.4 seconds is hardly worth worrying about.
Because you are setting the seed with SeedRandom, it is easy to verify that all three code examples produce exactly the same results.
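For example, after evaluating the second and third versions above, the following comparison should return True:
FF === F[[All, -1]]  (* final Fold values match the last column of the FoldList table *)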
Making my comment an answer:
Histogram[Last /@ F]