finding distances within unique ID values (tracks) of a (huge) matrix, Matlab

I have had a search around on this, but being new to matlab I could do with more specific help with my issue.
I have a huge matrix <2182824x9 double> where each row represents a particle being tracked over time, with columns including (numerical) TrackID, Time, Lat and Long.
What I need to do is, for each unique TrackID, take the row where Time = 0 and call that the start position, and for every other row within that TrackID (where Time is not 0) find the distance (as the crow flies) from the start lat/long. This is in order to find the maximum distance achieved from the start point, which is not necessarily the end point of the track.
To further complicate this, I have a non-standard radius for the Earth, so I need a method which will allow me to stipulate this radius (6371.001 km).
I don't really know where to start on this, and am worried about computational effort given the size of my matrix (I have many more such matrices to do the same thing to) - any suggestions would be much appreciated!
Many thanks for your time and attention,
All the best,
Bex
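Not MATLAB, but a minimal sketch of the logic being asked for (group the rows by TrackID, compute the haversine distance from each track's Time = 0 row using the stipulated Earth radius, and keep the per-track maximum), written in Python/NumPy purely for illustration; the column positions and function names are assumptions, and the same approach translates directly to MATLAB:

import numpy as np

R_EARTH = 6371.001  # non-standard Earth radius in km, as given in the question

def haversine_km(lat1, lon1, lat2, lon2, radius=R_EARTH):
    # Great-circle ("as the crow flies") distance in km between points given in degrees.
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * radius * np.arcsin(np.sqrt(a))

def max_distance_per_track(data):
    # data: array whose columns are assumed to be [TrackID, Time, Lat, Long, ...].
    # Returns a dict mapping TrackID -> maximum distance from that track's Time == 0 row.
    result = {}
    for track_id in np.unique(data[:, 0]):
        rows = data[data[:, 0] == track_id]
        start = rows[rows[:, 1] == 0][0]      # assumes every track has a Time == 0 row
        others = rows[rows[:, 1] != 0]
        d = haversine_km(start[2], start[3], others[:, 2], others[:, 3])
        result[track_id] = d.max() if d.size else 0.0
    return result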

Related

scipy.interpolate.griddata slow due to unnecessary data

I have a map with a 600*600 equidistant x,y grid with associated scalar values.
I have around 1000 x,y coordinates at which I would like to get the bilinearly interpolated map values. Those are randomly placed in an inner central area of the map of around 400*400 in size.
I decided to go with the griddata function with method linear. My understanding is that with linear interpolation I would only need the three nearest grid positions around each coordinate to get well-defined interpolated values. So I would require around 3000 data points of the map to perform the interpolation. The 360k data points are largely unnecessary for this task.
Naively throwing in the complete map results in a long execution time of about half a minute. Since it's easy to narrow the map down to the area of interest, I could reduce the execution time to nearly 20%.
I am now wondering if I overlooked something in my assumption that I only need the three nearest neighbours for my task. And if not, whether there is a fast way to filter those 3000 out of the 360k. I assume looping 3000 times over the 360k lines will take longer than just throwing in the inner map.
Edit: I also had a look at the comparison of the results between the 600*600 grid and the reduced data points. I am actually surprised and concerned by the observation that the interpolation results partly differ significantly.
So I found out that RegularGridInterpolator is the way to go for me. It's fast and I have a regular grid already.
I tried to sort out my findings about the differences in interpolation values and found griddata to show unexpected behaviour for me.
Check out the issue I created for details.
https://github.com/scipy/scipy/issues/17378
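For reference, a minimal sketch of the RegularGridInterpolator approach mentioned above, with synthetic data and made-up sizes standing in for the real map:

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# A 600*600 regular grid of scalar values (synthetic data for illustration).
x = np.linspace(0.0, 1.0, 600)
y = np.linspace(0.0, 1.0, 600)
values = np.sin(10 * x[:, None]) * np.cos(10 * y[None, :])

interp = RegularGridInterpolator((x, y), values, method="linear")

# ~1000 query points scattered in the inner area of the map.
pts = 0.2 + 0.6 * np.random.rand(1000, 2)
interpolated = interp(pts)  # one interpolated value per query point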

More Efficient Way of Calculating Population from Data Grid and overlapping Polygon?

Folks! Apologies if this is a duplicate question; I've done some research on the topic but don't know if I'm heading in the right direction.
I have converted gridded data of population density to a MongoDB collection using a geometry object defining the population density cell as a five node polygon (the fifth node matching the first) and a float value consisting of the population in that geographic region. Even though the database is huge in size, I can quickly retrieve the "records" of the population regions as they are indexed as a 2D Sphere when it intersects a geo-polygon indicating some type of weather event or other geofence polygon.
The issue comes when I try to add all of the boxes up. It takes an exceedingly long time, especially if the polygon covers a significant geographic area. The population data I have are 1 km^2 cells. Adding up the data can take several seconds or, in the worst-case scenario, minutes!
I had the thought of creating a type of quadtree structure in the database, with a lower-resolution node set stored as a separate collection, and so on. Then, when calculating population, I could start with the lowest-resolution set and work my way down the node "tree" by making several database calls until there are no more matches. While I'd increase my database calls significantly, I'd reduce the sheer number of elements that I need to add up at the end - which is what takes the most computational time.
I could try to create these data using bottom-up neighbor finding whilst adding up the four population values that would make up the next lower-resolution node set. This, of course, will explode the database size and will increase the number of queries to the database for a single population request.
I haven't seen too much of this done with databases. I'd like to have it in a database (it could also be PostgreSQL) since that gives me the ability to quickly geo-query by point or area. And I'm returning the result as an API call, so time efficiency is of the essence!
Any advice or places to research would be greatly appreciated!!!
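A rough sketch of the coarse-to-fine pre-aggregation idea described above (plain Python with made-up grid keys; in practice each level would live in its own collection and carry its own geometry):

# Build a lower-resolution "parent" level by summing each 2x2 block of child cells,
# so a query can use one coarse sum wherever a polygon fully covers a parent cell.
def build_parent_level(cells):
    # cells: dict mapping (ix, iy) grid index -> population of that cell
    parents = {}
    for (ix, iy), pop in cells.items():
        key = (ix // 2, iy // 2)
        parents[key] = parents.get(key, 0.0) + pop
    return parents

# Example pyramid, finest level first.
level0 = {(0, 0): 10.0, (0, 1): 5.0, (1, 0): 2.0, (1, 1): 3.0, (2, 2): 7.0}
level1 = build_parent_level(level0)  # {(0, 0): 20.0, (1, 1): 7.0}
level2 = build_parent_level(level1)  # {(0, 0): 27.0}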

compare all data in database at same time (real time)

I have a problem with my Android app. I have a value x (whatever it is) and I have data in the database, and I want to compare the value of x with all the data in the database at the same time, in real time.
The app is using SQLite.
I used a loop, but when the database is large my app lags while comparing all the data. My code is:
public void Check_Distance(Location Current_Location, ArrayList<Location> LocationArrayList1)
{
    double Distance;
    // Compare the current location against every stored location.
    for (int i = 0; i < LocationArrayList1.size(); i++)
    {
        Distance = distanceBetween(Current_Location, LocationArrayList1.get(i));
        if (Distance <= 0.1 * 1000) { // if the distance is less than 100 m, play a sound
            Notification_Sound();
        }
    }
}
You can't look at every record in the database at the exact same time. That's called quantum computing, and is an active research area where people far smarter than you or I are spending millions of dollars trying to create a machine that can do this kind of parallel processing.
That being said, you can make your algorithm more efficient, though that takes some effort. Both of the approaches below are based on the idea of quickly eliminating the majority of the locations that are obviously too far away, and performing more in-depth checks on those that could be in range.
One method is to sort the locations in ascending order in two arrays - one by North/South and the other by East/West. Find all entries within a given distance of the current position in each list, then combine the results to get a list of points within a box of X distance from the location. This box will have a much smaller number of points within it that you can then apply an iterative, circular, distance based approach to.
Another is to create a quadtree. This would subdivide the map area into a set of bounding volumes, where each volume would have a set of points, or additional bounding volumes. You can then place down your search area and find all the quadtree volumes that intersect with your circular search area, greatly minimizing the number of locations you need to do a true distance check on.
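A minimal sketch of the first idea (prefiltering with two sorted arrays and binary search to get everything inside a lat/lon box), in Python for brevity; the names and the metres-per-degree conversion are illustrative assumptions:

import bisect
import math

def candidates_within_box(lats_sorted, lons_sorted, lat, lon, radius_m):
    # lats_sorted / lons_sorted: pre-sorted lists of (value, point_index) pairs.
    # Returns the indices of points whose latitude AND longitude fall inside a box
    # of +/- radius_m around (lat, lon); only these need a true distance check.
    dlat = radius_m / 111320.0                                        # approx. metres per degree of latitude
    dlon = radius_m / (111320.0 * max(math.cos(math.radians(lat)), 1e-6))

    def in_range(sorted_pairs, lo, hi):
        i = bisect.bisect_left(sorted_pairs, (lo, -1))
        j = bisect.bisect_right(sorted_pairs, (hi, float("inf")))
        return {idx for _, idx in sorted_pairs[i:j]}

    by_lat = in_range(lats_sorted, lat - dlat, lat + dlat)
    by_lon = in_range(lons_sorted, lon - dlon, lon + dlon)
    return by_lat & by_lon  # intersection = points inside the box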

What's the most efficient way to use large data from Excel in my C# code?

I ran a computer simulation for my pendulum, to measure the time taken to reach the lowest point, for every velocity and every angle.
As you can imagine there is a lot of data: thousands of lines for all angles and velocities.
On every frame, I will be measuring the velocity and angle of the pendulum, and will look for the closest data in my Excel spreadsheet.
How can I go about this to make sure it's not too CPU-intensive?
Should I create a massive array where every element corresponds to a certain angle: for example, myArray[30] would hold all velocities and times for all my data between 30.0 and 30.999 degrees? (That way I would avoid lots of if statements.)
Or should I keep everything in my Excel spreadsheet?
Any suggestion?
The best approach in my opinion would be to divide your data into intervals based on its distribution, since you have to access that data every frame. Then, when you measure the velocity and angle, you can look up the interval and access only that part of your data.
I would find the maximum and minimum of your data points while importing into Unity and then divide that range based on (maximum - minimum) / NumOfIntervals. Let's say your interval size is 5 for each angle. When you get an angle of 17 you can do (int)(17 / 5) = 3 (assuming indexes start from zero) and go to the item at index 3 in your structure. This can be a dictionary or an array of instances of an arbitrary class, depending on your data.
I can try to help further if you can share the structure of your data. But in my opinion an even distribution of data across the intervals is important.
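A small sketch of the bucketing idea (Python for brevity; the field layout and interval size are illustrative assumptions):

# Pre-bucket the simulation rows by angle so each frame only searches one bucket.
INTERVAL = 5.0  # degrees per bucket (illustrative choice)

def build_buckets(rows):
    # rows: iterable of (angle_deg, velocity, time) tuples from the simulation
    buckets = {}
    for angle, velocity, time in rows:
        buckets.setdefault(int(angle // INTERVAL), []).append((angle, velocity, time))
    return buckets

def lookup(buckets, angle, velocity):
    # Return the row in the matching bucket closest to the measured (angle, velocity).
    candidates = buckets.get(int(angle // INTERVAL), [])
    return min(candidates,
               key=lambda r: (r[0] - angle) ** 2 + (r[1] - velocity) ** 2,
               default=None)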

Finding the closest point to a given point

I have searched all over for this, but I can't seem to find the best approach. I have about 22,000 lat/lon points and I want to find the closest ones to the current location of the iPhone. I've seen people ask about quadtrees, Dijkstra's algorithm, and spatial databases. Which is the best for the iPhone? Spatial databases seem easiest, but I am not sure.
EDIT: there are actually over 20,000 points. Do you think iterating through all of them is the way to do it? But thanks for your input.
Thanks.
Actually, it is best to use the Haversine (great-circle) calculation for lat/long points, otherwise increasingly large distances will be wrong, especially if you use simple trig as in Jherico's answer.
A quick search provides this JavaScript example:
var R = 6371; // Earth's radius in km
// toRad() is not built in, so define the degrees-to-radians conversion first
Number.prototype.toRad = function () { return this * Math.PI / 180; };
var dLat = (lat2 - lat1).toRad();
var dLon = (lon2 - lon1).toRad();
var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
        Math.cos(lat1.toRad()) * Math.cos(lat2.toRad()) *
        Math.sin(dLon / 2) * Math.sin(dLon / 2);
var c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a)); // angular distance in radians
var d = R * c; // great-circle distance in km
In terms of the data structure, Geohash is worth looking at.
If you need better than O(N), you can only get that if you first pay N lg N to build a spatial hash of some sort (a quadtree, octree, hash grid, or similar). Then each test will be approximately O(lg N), and typically much better if you cache the last location you checked, when there's a lot of coherence (generally, there is).
I would probably build an octree in Euclidean (geocentric, XYZ) space, because that allows me to get the "true" distance, not a "warped" lat/lon distance. However, in practice, a quadtree in lat/lon space will probably work well enough. Once you have a hit, you hold on to that tree node (assuming the tree isn't rebuilt at runtime), and the next query starts walking from that tree node, only needing to worry about nodes that may be closer if the point has moved further away from the previous answer.
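For illustration, a small sketch (Python, assuming a spherical Earth) of mapping lat/lon to geocentric XYZ so that plain Euclidean maths can be used inside such a spatial structure:

import math

EARTH_RADIUS_KM = 6371.0  # spherical approximation

def to_xyz(lat_deg, lon_deg):
    # Map a lat/lon pair (degrees) to geocentric XYZ coordinates in km.
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (EARTH_RADIUS_KM * math.cos(lat) * math.cos(lon),
            EARTH_RADIUS_KM * math.cos(lat) * math.sin(lon),
            EARTH_RADIUS_KM * math.sin(lat))

def chord_distance_km(p, q):
    # Straight-line (chord) distance between two XYZ points; it increases
    # monotonically with great-circle distance, so it works for nearest-neighbour comparisons.
    return math.dist(p, q)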
As you are on iPhone, you can use Core Location to compute the geographic distance, using CLLocation's -getDistanceFrom: method.
I would be tempted to use a brute-force linear search through all 2k points and, if that isn't fast enough, switch to something like Geohash to store metadata against your points for searching.
Why not tile the globe into regions? (Think hexes.) Then, either when you add points to your list, or in one big pre-processing loop, store for each point the region it is in.
Then, when searching for points near point A in hex X, you only need to check points in hex X and a maximum of 3 neighbouring hexes.
If this is still too many points to check, add subregions.
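A sketch of the tiling idea, using square tiles instead of hexes to keep the code short (a hex grid differs only in how a point maps to a region key); the tile size and names are illustrative:

from collections import defaultdict

TILE_DEG = 1.0  # tile size in degrees (illustrative)

def tile_key(lat, lon):
    # Map a point to the key of the region (tile) containing it.
    return (int(lat // TILE_DEG), int(lon // TILE_DEG))

def build_index(points):
    # points: list of (lat, lon) pairs; group them by tile in one pre-processing pass.
    index = defaultdict(list)
    for lat, lon in points:
        index[tile_key(lat, lon)].append((lat, lon))
    return index

def nearby_candidates(index, lat, lon):
    # Only the query tile and its 8 neighbours need a true distance check.
    ki, kj = tile_key(lat, lon)
    out = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out.extend(index.get((ki + di, kj + dj), []))
    return out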
You must consider that to use Dijkstra you must know your node's position in the graph, which is exactly the problem you're trying to solve; you're not in the graph, but you want to know the closest point to you.
So simply, as Chaos already told you, you must calculate all distances between your position and all 20,000 points, then sort them.
I think this algorithm works:
Create an array sorted by latitude.
Create an array sorted by longitude.
To find the closest point, first find the closest by latitude by doing a binary search in the latitude array. Do the same for the longitude array. Now you have two points: one closest by latitude, the other closest by longitude.
Compute the distance to each point via the Pythagorean theorem. The closest point wins.
Roger