Find Overlapping Trajectories - mongodb

Consider the following case:
I have stored trips/trajectories in MongoDB as LineStrings with a 2dsphere index.
According to the image provided, Trip 1 is the trajectory a user wants to search for, and Trips 2-6 are trips already stored in MongoDB.
Given a maxDistance on $near, Trip 1 should be "matched" with Trips 3 and 4 as shown.
However, $geoIntersects seems to accept only a Polygon or MultiPolygon as its $geometry type, and $near seems to accept only a Point.
Is there any time-efficient way to implement this scenario with Mongo queries?
Thanks!
EDIT: I changed the geometry to Polygon, as Alex Blex suggested.
Visualisation of the data (Trip 1 is the search trip; Trips 2-3 are stored in the db).
So we have the following documents stored in Mongo:
Trip2
tripData: Object
{
    type: "Polygon",
    coordinates: [
        [ [8,2], [7,3], [7,4], [8,2] ]
    ]
}
Trip3
tripData: Object
{
    type: "Polygon",
    coordinates: [
        [ [3,1], [4,1], [4,1.9999], [3,1] ]
    ]
}
Trip 1 is the trip we search for:
tripData: Object
{
    type: "Polygon",
    coordinates: [
        [ [2,2], [1,4], [3,5], [4,2], [2,2] ]
    ]
}
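For reference, a minimal mongo-shell sketch of this setup (the trips collection matches the query below; the 2dsphere index is the one mentioned at the top of the question; the name field is illustrative):

// create the 2dsphere index on the trip geometry field
db.trips.createIndex({ tripData: "2dsphere" });

// store a trip (Trip2 from above)
db.trips.insert({
    name: "Trip2",
    tripData: {
        type: "Polygon",
        coordinates: [ [ [8,2], [7,3], [7,4], [8,2] ] ]
    }
});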
The query I run is the following:
db.trips.find({ tripData: { $geoIntersects: { $geometry: trip1 } } })
Nothing is returned from this query, as expected, because the trips do not intersect, as you can see in the visualisation. How can I modify the query to match Trip1 with Trip3 using the $near operator?

$geoIntersects requires a Polygon or MultiPolygon in the query, i.e. Trip1 in the question. Trips 2-6 are LineStrings stored in the documents, which is perfectly fine. So the only extra thing to do is to convert Trip1 to a polygon using an offset, shown as the lime area in the question.
Let's consider a straight line first. The function to convert the line [[x1,y1],[x2,y2]] to a polygon with offset d can be as simple as:
function LineToPolyWithFalsePositive(line, d) {
    // angle of the line, measured from the y-axis
    var teta = Math.atan2(line[1][0] - line[0][0], line[1][1] - line[0][1]);
    var s = Math.sin(teta);
    var c = Math.cos(teta);
    // rectangle extended by d beyond both endpoints and to both sides;
    // the first vertex is repeated at the end to close the ring, as GeoJSON requires
    return [
        [line[0][0] - d*s - d*c, line[0][1] - d*c + d*s],
        [line[1][0] + d*s - d*c, line[1][1] + d*c + d*s],
        [line[1][0] + d*s + d*c, line[1][1] + d*c - d*s],
        [line[0][0] - d*s + d*c, line[0][1] - d*c - d*s],
        [line[0][0] - d*s - d*c, line[0][1] - d*c + d*s]
    ];
}
or
function LineToPolyWithFalseNegative(line, d) {
    // angle of the line, measured from the y-axis
    var teta = Math.atan2(line[1][0] - line[0][0], line[1][1] - line[0][1]);
    var s = Math.sin(teta);
    var c = Math.cos(teta);
    // hexagon offset by d to both sides and extended by d beyond both endpoints;
    // the first vertex is repeated at the end to close the ring, as GeoJSON requires
    return [
        [line[0][0] - d*s, line[0][1] - d*c],
        [line[0][0] - d*c, line[0][1] + d*s],
        [line[1][0] - d*c, line[1][1] + d*s],
        [line[1][0] + d*s, line[1][1] + d*c],
        [line[1][0] + d*c, line[1][1] - d*s],
        [line[0][0] + d*c, line[0][1] - d*s],
        [line[0][0] - d*s, line[0][1] - d*c]
    ];
}
These produce the lime polygons shown in the image below:
The returned ring can be used as the $geoIntersects query geometry against documents with LineString locations.
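For example, a minimal sketch of such a query (the two-point line and the offset value are made-up sample values; trips is the collection from the question):

// build the offset polygon around a straight search line and query with it
var ring = LineToPolyWithFalsePositive([[2,2], [4,2]], 0.5);
db.trips.find({
    tripData: {
        $geoIntersects: {
            $geometry: { type: "Polygon", coordinates: [ring] }
        }
    }
});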
The problematic areas are highlighted in red. The first polygon covers a distance of more than d at the corner cases, and the second polygon covers less than d in the same cases.
If that were the only problem, I would go with the false-negative approach and run 2 more $near queries for the Points [x1,y1] and [x2,y2] to check for any missed documents in the highlighted areas.
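A sketch of those two extra queries, with illustrative endpoints and distance ($near here relies on the 2dsphere index and takes $maxDistance in meters):

// catch documents near the terminal points that the false-negative polygon misses
db.trips.find({
    tripData: {
        $near: {
            $geometry: { type: "Point", coordinates: [2, 2] },
            $maxDistance: 50000 // meters, illustrative
        }
    }
});
db.trips.find({
    tripData: {
        $near: {
            $geometry: { type: "Point", coordinates: [4, 2] },
            $maxDistance: 50000 // meters, illustrative
        }
    }
});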
If Trip1 is a complex LineString, many more calculations are needed to convert it to a polygon. See the image:
Apart from the edge cases for the first and last points, there are similar problems at the start and end of each segment. Basically, you will need to calculate the angle between each pair of segments to work out the corresponding vertices of the polygon. Still doable, though. In the false-negative version of the polygon, the vertices circled in red should be cut, again considering the angle between segments.
If the Trip1 line in the query has many segments, this may be quite expensive, as you will need to run a $near query for each vertex plus 2 for the terminal points.
As a pragmatic approach, if it is acceptable, the false-positive version may work quite fast, as it is a single query.

Related

Why does my perspective implementation fail in displaying my cube's faces?

I wrote a program that takes as input some points, expressed in 3D coordinates, that must be drawn on a 2D canvas. I use perspective projection, homogeneous coordinates and similar triangles to do that. However, my program does not work and I actually don't know why.
I followed two tutorials. I really understood the geometrical definitions and properties I have read. However, my implementation fails... I will put references to both courses little by little, to make your reading more comfortable :).
Overview: geometrical reminders
The perspective projection is done following this workflow (cf. these 2 courses; I put the pertinent links (HTML anchors) further down in this post):
1. Definition of the points to draw, expressed in the world coordinate system;
2. Definition of the projection matrix, a transformation matrix that converts a point expressed in the world coordinate system into a point expressed in the camera's coordinate system (NB: I believe this matrix can be understood as being the 3D object "camera");
3. Product of these points with this matrix (as defined in the adequate part below in this document): multiplying the world-expressed points converts them to the camera's coordinate system. Note that the points and the matrix are expressed in 4D (the concept of homogeneous coordinates);
4. Use of the similar-triangles concept to project the camera-expressed points onto the canvas, using their 4D coordinates (only computation is done at this step). After this operation the points are expressed in 3D: the third coordinate is computed but not actually used on the canvas (it would only serve to handle z-fighting, which I don't want to do), and the 4th coordinate is removed because it is no longer useful;
5. Last step: rasterization, to actually draw the pixels on the canvas (more computation AND the displaying are done at this step).
First, the problem
Well, I want to draw a cube, but the perspective doesn't work. The projected points seem to be drawn without perspective.
What result I should expect
The result I'm expecting is the cube displayed in the "Image" part of the PNG below:
What I'm outputting
The faces of my cube are odd, as if perspective weren't applied correctly.
I guess I know why I'm having this problem...
I think my projection matrix (i.e.: the camera) doesn't have the right coefficients. I'm using a very simple projection matrix, without the concepts of fov and near and far clipping planes (as you can see below).
Indeed, to get the expected result (as previously defined), the camera should be placed, if I'm not mistaken, at the center (on the x and y axes) of the cube expressed in the world coordinate system, and at the center (on the x and y axes) of the canvas, which is (I make this assumption) placed 1 z-unit in front of the camera.
The Scastie (snippet)
NB: since X11 is not activated on Scastie, the window I want to create won't be shown.
https://scastie.scala-lang.org/N95TE2nHTgSlqCxRHwYnxA
Inputs
Perhaps the problem comes from the inputs? Well, here they are.
Cube's points
Ref. : myself
val world_cube_points : Seq[Seq[Double]] = Seq(
  Seq(100, 300, -4, 1), // top left
  Seq(100, 300, -1, 1), // top left z+1
  Seq(100, 0, -4, 1), // bottom left
  Seq(100, 0, -1, 1), // bottom left z+1
  Seq(400, 300, -4, 1), // top right
  Seq(400, 300, -1, 1), // top right z+1
  Seq(400, 0, -4, 1), // bottom right
  Seq(400, 0, -1, 1) // bottom right z+1
)
Transformation (Projection) matrix
Ref. : https://www.scratchapixel.com/lessons/3d-basic-rendering/perspective-and-orthographic-projection-matrix/building-basic-perspective-projection-matrix , End of the Part. "A Simple Perspective Matrix"
Note that I'm using the simplest perspective projection matrix : I don't use concept of fov, near and far clipping planes.
new Matrix(Seq(
  Seq(1, 0, 0, 0),
  Seq(0, 1, 0, 0),
  Seq(0, 0, -1, 0),
  Seq(0, 0, -1, 0)
))
Consequence of this matrix: each point P(x;y;z;w) multiplied by this matrix becomes P'(x;y;-z;-z).
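As a quick sanity check (my own arithmetic, using the first cube point defined above): P(100 ; 300 ; -4 ; 1) multiplied by this matrix gives P'(100 ; 300 ; 4 ; 4), and the divide by the 4th coordinate performed below then gives (25 ; 75 ; 1).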
Second, the first operation my program does: a simple product of a point with a matrix.
Ref. : https://github.com/ssloy/tinyrenderer/wiki/Lesson-4:-Perspective-projection#homogeneous-coordinates
/**
 * Matrix in the shape of (use of homogeneous coordinates):
 * c00 c01 c02 c03
 * c10 c11 c12 c13
 * c20 c21 c22 c23
 * 0   0   0   1
 *
 * @param content the content of the matrix
 */
class Matrix(val content : Seq[Seq[Double]]) {
  /**
   * Computes the product between a point P(x ; y ; z ; w) and the matrix.
   *
   * @param point a point P(x ; y ; z ; 1)
   * @return a new point P'(
   *   x * c00 + y * c01 + z * c02 + w * c03
   *   ;
   *   x * c10 + y * c11 + z * c12 + w * c13
   *   ;
   *   x * c20 + y * c21 + z * c22 + w * c23
   *   ;
   *   x * c30 + y * c31 + z * c32 + w * c33
   * )
   */
  def product(point : Seq[Double]) : Seq[Double] = {
    (0 to 3).map(
      i => content(i).zip(point).map(couple2 => couple2._1 * couple2._2).sum
    )
  }
}
Then, use of similar triangles
Ref. 1/2 : Part "Of the Importance of Converting Points to Camera Space" of https://www.scratchapixel.com/lessons/3d-basic-rendering/computing-pixel-coordinates-of-3d-point/mathematics-computing-2d-coordinates-of-3d-points
Ref. 2/2 : https://github.com/ssloy/tinyrenderer/wiki/Lesson-4:-Perspective-projection#time-to-work-in-full-3d
NB: at this step, the inputs are points expressed in the camera's coordinate system (i.e.: they are the result of the previously defined product with the previously defined matrix).
class Projector {
  /**
   * Computes the coordinates of the projection of the points P on the canvas.
   * The canvas is assumed to be 1 unit in front of the camera.
   * The computation uses the definition of similar triangles.
   *
   * @param points the points P we want to project on the canvas. Their coordinates must be expressed in the
   *               coordinate system of the camera before using this function.
   * @return the points P', projections of P.
   */
  def drawPointsOnCanvas(points : Seq[Seq[Double]]) : Seq[Seq[Double]] = {
    points.map(point => {
      point.map(coordinate => {
        coordinate / point(3)
      }).dropRight(1)
    })
  }
}
Finally, the drawing of the projected points onto the canvas.
Ref. : Part. "From Screen Space to Raster Space" of https://www.scratchapixel.com/lessons/3d-basic-rendering/computing-pixel-coordinates-of-3d-point/mathematics-computing-2d-coordinates-of-3d-points
import java.awt.Graphics
import javax.swing.JFrame

/**
 * Assumed to be 1 unit in front of the camera.
 * Contains the drawn points.
 */
class Canvas(val drawn_points : Seq[Seq[Double]]) extends JFrame {
  val CANVAS_WIDTH = 820
  val CANVAS_HEIGHT = 820
  val IMAGE_WIDTH = 900
  val IMAGE_HEIGHT = 900

  def display = {
    setTitle("Perlin")
    setSize(IMAGE_WIDTH, IMAGE_HEIGHT)
    setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE)
    setVisible(true)
  }

  override def paint(graphics : Graphics): Unit = {
    super.paint(graphics)
    drawn_points.foreach(point => {
      // a point is drawable only if BOTH coordinates fit in the canvas
      if (!(Math.abs(point.head) <= CANVAS_WIDTH / 2 && Math.abs(point(1)) <= CANVAS_HEIGHT / 2)) {
        println("WARNING : the point (" + point.head + " ; " + point(1) + ") can't be drawn in this canvas.")
      } else {
        val normalized_drawn_point = Seq((point.head + (CANVAS_WIDTH / 2)) / CANVAS_WIDTH, (point(1) + (CANVAS_HEIGHT / 2)) / CANVAS_HEIGHT)
        graphics.fillRect((normalized_drawn_point.head * IMAGE_WIDTH).toInt, ((1 - normalized_drawn_point(1)) * IMAGE_HEIGHT).toInt, 5, 5)
        graphics.drawString(
          "P(" + (normalized_drawn_point.head * IMAGE_WIDTH).toInt + " ; "
            + ((1 - normalized_drawn_point(1)) * IMAGE_HEIGHT).toInt + ")",
          (normalized_drawn_point.head * IMAGE_WIDTH).toInt - 50, ((1 - normalized_drawn_point(1)) * IMAGE_HEIGHT).toInt - 10
        )
      }
    })
  }
}
Question
What's wrong with my program? I understood the geometrical concepts explained by both of these tutorials, which I read carefully. I'm pretty sure my product works. I think either the rasterization or the inputs (the matrix) could be wrong...
Note that I'm using the simplest perspective projection matrix: I don't use the concepts of fov and near and far clipping planes.
I think that your projection matrix is too simple. By dropping the near and far clipping planes, you are dropping perspective projection entirely.
You do not have to perform the z-clipping step, but you need to define a view frustum to get perspective to work. I believe that your projection matrix defines a cubic "view frustum", hence no perspective.
See http://www.songho.ca/opengl/gl_projectionmatrix.html for a discussion of how the projection matrix works.
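For reference, the symmetric-frustum perspective matrix derived on that page has the following form (my transcription; n and f are the near and far plane distances, r and t the right and top extents of the near plane):

n/r  0    0             0
0    n/t  0             0
0    0   -(f+n)/(f-n)  -2fn/(f-n)
0    0   -1             0

Note how the third row encodes the near and far planes, which the simple matrix in the question drops.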
Quoting the Scratchapixel page:
... If we substitute these numbers in the above equation, we get:
y' = y / -z
where y' is the y coordinate of P'. This is probably one of the simplest and most fundamental relations in computer graphics, known as the z or perspective divide. The exact same principle applies to the x coordinate. ...
And in your code:
def drawPointsOnCanvas(points : Seq[Seq[Double]]) : Seq[Seq[Double]] = {
  points.map(point => {
    point.map(coordinate => {
      coordinate / point(3)
                   ^^^^^^^^
...
The (3) index is the 4th component of point, i.e. its W-coordinate, not its Z-coordinate. Perhaps you meant coordinate / point(2)?

Manually building Hexagonal Torus

I am interested in building a hexagonal torus using a mesh of points.
I think I can start with a 2D polygon and then iterate 360 times (1 deg resolution) to build a complete solid.
Is this the best way to do this? What I'm really after is building wing profiles with variable cross-section geometry over their span.
Going your way, you can do this with polyhedron(). Add an appropriate number of points per profile, in a defined order, to a vector "points", define the faces by the indices of the points in a second vector "faces", and pass both vectors as parameters to polyhedron(); see the documentation. You can control the quality of the surface by the number of points per profile and the distance between the profiles (sections in the torus).
Here is some example code:
// parameter:
r1 = 20; // radius of torus
r2 = 4; // radius of polygon/ thickness of torus
s = 360; // sections per 360 deg
p = 6; // points on polygon
a = 30; // angle of the first point on Polygon
// points on cross-section
// angle = 360*i/p + startangle, x = r2*cos(angle), y = 0, z = r2*sin(angle)
function cs_point(i) = [r1 + r2*cos(360*i/p + a), 0, r2*sin(360*i/p + a)];
// maps an index in the points vector to [section number, number of the point on this section]
function point_index(i) = [floor(i/p), i - p*floor(i/p)];
// returns the point's x-, y-, z-coordinates by rotating the corresponding cross-section point around the z-axis
function iterate_cs(i) = [cs[point_index(i)[1]][0]*cos(360*floor(i/p)/s), cs[point_index(i)[1]][0]*sin(360*floor(i/p)/s), cs[point_index(i)[1]][2]];
// for every point, find its neighbour points to build faces (+ p: point on the next cross-section); points ordered clockwise
// to connect point on last section to corresponding points on first section
function item_add1(i) = i >= (s - 1)*p ? -(s)*p : 0;
// to connect last point on section to first points on the same and the next section
function item_add2(i) = i - p*floor(i/p) >= p-1 ? -p : 0;
// build faces
function find_neighbours1(i) = [i, i + 1 + item_add2(i), i + 1 + item_add2(i) + p + item_add1(i)];
function find_neighbours2(i) = [i, i + 1 + item_add2(i) + p + item_add1(i), i + p + item_add1(i)];
cs = [for (i = [0:p-1]) cs_point(i)];
points = [for (i = [0:s*p - 1]) iterate_cs(i)];
faces1 = [for (i = [0:s*p - 1]) find_neighbours1(i)];
faces2 = [for (i = [0:s*p - 1]) find_neighbours2(i)];
faces = concat(faces1, faces2);
polyhedron(points = points, faces = faces);
Here is the result:
Since OpenSCAD 2015.03, faces can have more than 3 points if all points of the face are on the same plane, so in this case the faces could be built in one step too.
Are you building something like NACA airfoils? https://en.wikipedia.org/wiki/NACA_airfoil
There are a few OpenSCAD designs for those floating around; see e.g. https://www.thingiverse.com/thing:898554

Best Way for Search GPS points in a NOSQL Database

My question is about the best way to handle GPS point data in a database (in my case the NoSQL DB MongoDB) in order to return only the closest points.
I have a collections of users in my database.
Now I need to create a new "table" which associates users with GPS points (a user can have several points). For example:
User,lat,long
ALFA,40,50
ALFA,30,50
BETA,42,33
...
The server should make available a function that, given a position as input, returns a list of users associated with points near the input.
For example:
function nearestUsers(lat, lon) {
    var maxDist = 10000;
    // query for MongoDB that returns all records of the new table
    var users = getAllRecordsFromDataBase();
    // keep only the users whose point lies within maxDist
    return users.filter(function (u) {
        return distance(lat, lon, u.lat, u.lon) <= maxDist;
    });
}
The distance function is the following:
function Deg2Rad(deg) {
    return deg * Math.PI / 180;
}

// equirectangular approximation of the great-circle distance, in km
function distance(lat1, lon1, lat2, lon2) {
    lat1 = Deg2Rad(lat1);
    lat2 = Deg2Rad(lat2);
    lon1 = Deg2Rad(lon1);
    lon2 = Deg2Rad(lon2);
    var R = 6371; // Earth's radius in km
    var x = (lon2 - lon1) * Math.cos((lat1 + lat2) / 2);
    var y = (lat2 - lat1);
    return Math.sqrt(x * x + y * y) * R;
}
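For example, with the two ALFA rows from the table above, distance(40, 50, 30, 50) ≈ 1112 km (10 degrees of latitude, no longitude difference).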
I'm afraid that, for a big amount of data, this approach will be slow. What is the best way to make the algorithm more scalable? Any suggestions?
Considering that this functionality lives inside my Node.js server backed by MongoDB, can I implement this function directly as a query, or with some special structure in my database?
You can use MongoDB geospatial indexes and queries. Just store your points as GeoJSON Points and perform queries using GeoJSON polygons as bboxes with $geoWithin.
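A minimal sketch of that approach in the mongo shell (collection and field names are illustrative; GeoJSON coordinates are in [longitude, latitude] order):

// store each user's points as GeoJSON Points under a 2dsphere index
db.userPoints.createIndex({ loc: "2dsphere" });
db.userPoints.insert({ user: "ALFA", loc: { type: "Point", coordinates: [50, 40] } });

// users with a point within 10 km of the input position, nearest first
db.userPoints.find({
    loc: {
        $near: {
            $geometry: { type: "Point", coordinates: [50, 40] },
            $maxDistance: 10000 // meters
        }
    }
});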

Find circles which the user is in, according to each circle's relative radius

Here's the pitch. I have a collection of Circles which have mainly two attributes: location, a Point, and radius, a distance in meters.
Users also have a profile.location Point attribute.
In my publications, I want to find all the Circles that the user is "in", i.e. the ones he or she is near enough to according to each Circle's radius attribute. To sum it up, here's how it would look:
Meteor.publish('circles', function() {
    var curUser = Meteor.users.findOne({_id: this.userId});
    if (curUser) {
        return Circles.find({
            location: {
                $near: {
                    $geometry: curUser.profile.location,
                    $maxDistance: self.radius // HERE
                }
            }
        });
    }
    this.ready();
});
Except self.radius is a completely made-up term on my behalf. But is it possible to achieve something like this?
POST-SOLVING edit:
Thanks to Electric Jesus, I have my matching working perfectly with polygons, since circles are not a GeoJSON type as of yet (therefore they are not single attributes that can be queried, sort of). So I converted my circles into polygons! Here is a JavaScript function to do this:
function generateGeoJSONCircle(center, radius, numSides) {
    var points = [];
    var earthRadius = 6371; // km
    var halfsides = numSides / 2;
    // angular distance covered on earth's surface, in radians
    var d = (radius / 1000) / earthRadius;
    var lat = (center.coordinates[1] * Math.PI) / 180;
    var lon = (center.coordinates[0] * Math.PI) / 180;
    for (var i = 0; i < numSides; i++) {
        var gpos = {};
        var bearing = i * Math.PI / halfsides; // rad
        gpos.latitude = Math.asin(Math.sin(lat) * Math.cos(d) + Math.cos(lat) * Math.sin(d) * Math.cos(bearing));
        gpos.longitude = ((lon + Math.atan2(Math.sin(bearing) * Math.sin(d) * Math.cos(lat), Math.cos(d) - Math.sin(lat) * Math.sin(gpos.latitude))) * 180) / Math.PI;
        gpos.latitude = (gpos.latitude * 180) / Math.PI;
        points.push([gpos.longitude, gpos.latitude]);
    }
    // close the ring, as GeoJSON requires
    points.push(points[0]);
    return {
        type: 'Polygon',
        coordinates: [ points ]
    };
}
Here you go. I don't really know how many sides I should use, so I left an argument for that too. From there, you can use Electric Jesus's answer to get where I was going. Don't forget to put a 2dsphere index on your polygon!
Circles._ensureIndex({'polygonConversion': "2dsphere"});
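For instance, a hypothetical usage tying it together (the polygonConversion field matches the index above; the center, radius and number of sides are illustrative):

var center = { type: "Point", coordinates: [-1.85549, 52.9445] };
Circles.insert({
    name: "My Circle 1",
    radius: 500, // meters, kept for reference
    polygonConversion: generateGeoJSONCircle(center, 500, 32)
});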
No, the geo indexes will never work in a way that is dynamic according to a document's radius. You must convert your circles into Polygon geometries and use a $geoIntersects query to find which Circle (Polygon geometry) intersects with your current location / location parameter (Point geometry).
var location = {
    longitude: -1.85549,
    latitude: 52.9445
};

// create a circle as a Polygon geometry, with converted coordinates
Circles.insert({
    name: "My Circle 1",
    loc: {
        type: "Polygon",
        coordinates: [[[ ... ]]]
    }
});

// find one or more Circles that intersect the current location
Circles.find({
    loc: {
        $geoIntersects: {
            $geometry: {
                type: "Point",
                coordinates: [location.longitude, location.latitude]
            }
        }
    }
});
Mongo's geospatial operators include $centerSphere, which returns documents that are within the bounds of a circle:
Entities.find({
    location: {
        $geoWithin: {
            $centerSphere: [
                [ curUser.profile.location.lng, curUser.profile.location.lat ],
                radius / 6378100 // convert meters to radians by dividing by the Earth's radius in meters
            ]
        }
    }
});
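For example, a 500 meter radius gives 500 / 6378100 ≈ 0.0000784 radians.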
You can try an additively weighted Voronoi diagram. The distance function is the Euclidean distance minus the weight. Sites near other sites with bigger weights get sorted into the same cell.

MongoDB - latitude/longitude - signs

Reading here
http://docs.mongodb.org/manual/tutorial/query-a-2dsphere-index/
I find the following:
The following example queries grid coordinates and returns all documents
within a 10 mile radius of longitude 88 W and latitude 30 N. The example
converts the distance, 10 miles, to radians by dividing by the approximate
radius of the earth, 3959 miles:
db.places.find( { loc :
{ $geoWithin :
{ $centerSphere :
[ [ 88 , 30 ] , 10 / 3959 ]
} } } )
I think the "standard" notation is:
East is + (plus) and West is - (minus),
North is + (plus) and South is - (minus).
So why is West + (plus) in this example
on the MongoDB documentation site?
Is it really that way in MongoDB?
In fact, is there any standard which defines if West
maps to + or to - and the same for East, North, South?
See also:
Wikipedia - Latitude and longitude of the Earth
Wikipedia - Geographic coordinate system