Wrong distance calculation with MongoDB

I am executing the following raw query with MongoDB:
qry = {"position" : SON([("$near", [52.497309,13.39385]), ("$maxDistance", distance/111.12 )])}
locations = Locations.objects(__raw__=qry)
The position in the database is set to [52.473266, 13.45494].
I get a result once I set the distance to 7.3 or higher, so it seems the two locations must be at least 7.3 kilometers apart.
When I calculate the distance between those two geo locations with Google Maps (for example, going by car), it tells me they are only 5.2 kilometers apart.
I tested it with loads of different locations and there is always a big difference between the distance calculations of Google and MongoDB.
Am I missing something, or can somebody please explain where this difference comes from?
I already checked this answer but it's not working for me...

MongoDB assumes that coordinates are in (long, lat) format. If you compute the distance by hand using the great-circle distance formula, you'll see what is going on:
> from math import acos, sin, cos, radians
>
> long_x, lat_x = [radians(y) for y in [52.473266, 13.45494]]
> long_y, lat_y = [radians(y) for y in [52.497309, 13.39385]]
>
> acos(sin(lat_x) * sin(lat_y) + cos(lat_x) * cos(lat_y) * cos(long_x - long_y)) * 6371.0
7.27362435031
Google takes coordinates in (lat, long) format, so if you provide the same input, Google's interpretation will look like this:
> acos(sin(long_x) * sin(long_y) + cos(long_x) * cos(long_y) * cos(lat_x - lat_y)) * 6371.0
4.92535867182
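So the fix is to keep everything in [longitude, latitude] order, in both the stored documents and the query. Below is a minimal, untested sketch of that idea; the `near_query` helper and the 5.5 km example distance are mine, `Locations` is the MongoEngine model from the question, and the /111.12 factor (km to degrees) is the same one you already use:

# Sketch of the fix: build the query with longitude first, and re-save the
# stored `position` values in [longitude, latitude] order as well.
from bson import SON

def near_query(lat, lng, max_distance_km):
    """Build a $near query, converting a human-readable (lat, lng) pair
    into MongoDB's expected [lng, lat] order."""
    return {"position": SON([
        ("$near", [lng, lat]),                       # longitude first
        ("$maxDistance", max_distance_km / 111.12),  # km -> degrees (~111.12 km per degree)
    ])}

qry = near_query(52.497309, 13.39385, 5.5)
# locations = Locations.objects(__raw__=qry)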

Related

Detect self intersection of a polygon with n sides?

I am using the Google Maps SDK to allow a user to draw a polygon on the map by tapping. Everything works perfectly if the user draws the polygon by following a path and continues along that path without crossing over lines. In that case, this result is produced:
However, if the user makes an error and crosses over or changes the direction of their "tapping" path, this happens:
I need to either:
A) alert the user that they have created an invalid polygon, and must undo that action, or
B) correct the polygon shape to form a complete polygon.
With the research I have done, option A seems much more feasible and simple, since option B would require rearranging the path of the polygon points.
I have done research and found algorithms and formulas to detect line intersection, but I have yet to find any piece of a solution in Swift that recognizes whether a polygon self-intersects based on its points (in this case, latitude and longitude). I don't need to know the point, just TRUE or FALSE to the question, "Does this polygon self-intersect?" The polygon will typically have fewer than 20 sides.
Perhaps there is a solution built into the Google Maps SDK, but I have yet to find it. Also, I understand that there are already algorithms for problems such as these; I am just having trouble implementing them in Swift 2 or 3. Any help is appreciated, thanks!
I'm guessing that you're trying to plot out the quickest way to get from point to point as the crow flies. You'll probably want to consider road direction too, which I won't here.
Both your options are possible. It's easy enough to iterate over every existing line when a new line is added and determine whether they've crossed over. But your user would definitely rather not be told that they've screwed up; your app should just fix it for them. This is where it gets fun.
I am certain algorithms exist for finding the minimal polygon containing all points, but I didn't look them up, because where's the fun in that.
Here's how I would do it. In pseudocode:
if (line has intersected existing line)
    find mean point (sum x / n, sum y / n)
    find nearest point to centre by:
        taking min of: points.map(sqrt((x - centrex)^2 + (y - centrey)^2))
    from the line between centre and nearest point, determine angle to every other line:
        points.remove(nearest)
        angles = points.map(cosine law(nearest to centre, centre, this point))
        <- make sure to check if it crossed pi, at which point you must add pi
    sort angles so minimum is first
    starting at nearest point, add line to next point in the array of minimal-angle points
I'm sorry I haven't put this into Swift. I will update tomorrow with proper Swift 3.
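In the meantime, here is a rough Python sketch of that idea, simplified to a plain radial sort of the tapped points around their mean point (the `repair_polygon` helper is mine, not part of any SDK); porting it to Swift is mostly mechanical:

# Re-order the tapped points by their angle around the mean point. The
# resulting polygon cannot cross itself as long as the mean point lies
# inside the shape, which is usually the case for tapped outlines.
import math

def repair_polygon(points):
    """points: list of (x, y) tuples; returns them re-ordered by angle."""
    cx = sum(p[0] for p in points) / len(points)   # mean point
    cy = sum(p[1] for p in points) / len(points)
    return sorted(points, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))

# A "bow-tie" tap order gets untangled into a plain square:
taps = [(0, 0), (1, 1), (1, 0), (0, 1)]
print(repair_polygon(taps))   # [(0, 0), (1, 0), (1, 1), (0, 1)]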
This seems to be working pretty well for what I need. Adapted from Rob's answer here.
func intersectionBetweenSegmentsCL(p0: CLLocationCoordinate2D, _ p1: CLLocationCoordinate2D, _ p2: CLLocationCoordinate2D, _ p3: CLLocationCoordinate2D) -> CLLocationCoordinate2D? {
    var denominator = (p3.longitude - p2.longitude) * (p1.latitude - p0.latitude) - (p3.latitude - p2.latitude) * (p1.longitude - p0.longitude)
    var ua = (p3.latitude - p2.latitude) * (p0.longitude - p2.longitude) - (p3.longitude - p2.longitude) * (p0.latitude - p2.latitude)
    var ub = (p1.latitude - p0.latitude) * (p0.longitude - p2.longitude) - (p1.longitude - p0.longitude) * (p0.latitude - p2.latitude)
    if denominator < 0 {
        ua = -ua; ub = -ub; denominator = -denominator
    }
    if ua >= 0.0 && ua <= denominator && ub >= 0.0 && ub <= denominator && denominator != 0 {
        print("INTERSECT")
        return CLLocationCoordinate2D(latitude: p0.latitude + ua / denominator * (p1.latitude - p0.latitude),
                                      longitude: p0.longitude + ua / denominator * (p1.longitude - p0.longitude))
    }
    return nil
}
I then implemented it like this:
if coordArray.count > 2 {
    let n = coordArray.count - 1
    for i in 1 ..< n {
        for j in 0 ..< i - 1 {
            if let intersection = intersectionBetweenSegmentsCL(coordArray[i], coordArray[i+1], coordArray[j], coordArray[j+1]) {
                // do whatever you want with `intersection`
                print("Error: Intersection # \(intersection)")
            }
        }
    }
}

Cartesian Coordinate System in Perspective Projection

I'm still implementing a perspective projection for my augmented reality application. I've already asked some questions about the viewport calculation and other camera matters, which Aldream explains in this thread.
However, I don't get any useful values at the moment, and I think this depends on my calculation of the Cartesian coordinate space.
I have tried several different ways to transform latitude, longitude and altitude into a Cartesian coordinate space, but none of them seems to work properly. Currently I'm using ECEF (earth-centered), but I also tried different calculations like a combination of the haversine formula and trigonometry (to calculate x and y from the distance and the bearing between two points).
So my question is:
How does the Cartesian coordinate space affect my perspective projection? Where do I have to "compensate" for my units (when I'm using meters or centimeters, for example)?
Let's say I'm using ECEF; then I get values in meters, so for example my camera is at (0, 0, 2 m height) and my point is at (10, 10, 0). Now I can easily use the function mentioned on Wikipedia and afterwards use the conversion of dx, dy, dz explained in my other thread (mentioned above). What I still don't get: how does this projection "know" what the units in my coordinate system are? I think this is the mistake I'm currently making. I don't handle the units of my coordinate system and therefore cannot get any good values from my projection.
When I'm using a coordinate system with centimeters as the unit, all of the values from my perspective projection increase. Where do I have to "resolve" this unit problem? Do I have to "transform" my camera width and camera height from pixels to meters? Do I have to convert the coordinate system to pixels? Which coordinate system should be used to handle this situation? I hope you can understand my problem.
Edit: I solved it myself.
I've changed my coordinate system from ECEF to my own system (using haversine and bearing and then calculating x, y, z) and now I get good values! :)
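For anyone curious, here is a rough sketch of that kind of conversion (a simplification, assuming distances small enough to treat the local area as flat; the `to_local_xyz` helper is illustrative only, not my exact code):

# Project each point onto a flat local plane using its haversine distance
# and bearing from a reference coordinate. Fine for AR-scale distances.
from math import radians, sin, cos, asin, atan2, sqrt

EARTH_RADIUS_M = 6371000.0

def to_local_xyz(ref_lat, ref_lon, ref_alt, lat, lon, alt):
    """Return (x_east, y_north, z_up) of (lat, lon, alt) relative to the reference."""
    phi1, phi2 = radians(ref_lat), radians(lat)
    dphi, dlmb = radians(lat - ref_lat), radians(lon - ref_lon)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    d = 2 * EARTH_RADIUS_M * asin(sqrt(a))                 # haversine distance in meters
    brg = atan2(sin(dlmb) * cos(phi2),
                cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(dlmb))  # bearing
    return d * sin(brg), d * cos(brg), alt - ref_alt

print(to_local_xyz(52.0, 13.0, 0.0, 52.001, 13.001, 2.0))  # ~ (68.4 m east, 111.2 m north, 2.0 m up)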
I'll try another way to explain it here then. :)
The short answer is: the unit of your cartesian positions doesn't matter as long as you keep it homogeneous, ie as long as you apply this unit both to your scene and to your camera.
For the longer answer, let's go back to the formula you used...
With:
d the relative Cartesian coordinates
s the size of your printable surface
r the size of your "sensor" / recording surface (ie r_x and r_y the size of the sensor and r_z its focal length)
b the position on your printable surface
... and do the pseudo dimensional analysis. We have:
[PIXEL] = (([LENGTH] x [PIXEL]) / ([LENGTH] * [LENGTH])) * [LENGTH]
Whatever you use as the unit for LENGTH, it will be homogenized, ie only the proportion is kept.
Ex:
[PIXEL] = (([MilliMeter] x [PIXEL]) / ([MilliMeter] * [MilliMeter])) * [MilliMeter]
        = (([Meter/1000] x [PIXEL]) / ([Meter/1000] * [Meter/1000])) * [Meter/1000]
        = 1000 * 1000 / 1000 / 1000 * (([Meter] x [PIXEL]) / ([Meter] * [Meter])) * [Meter]
        = (([Meter] x [PIXEL]) / ([Meter] * [Meter])) * [Meter]
Back to my explanations on your other thread:
If we use those notations to express b_x:
b_x = (d_x * s_x) / (d_z * r_x) * r_z
= (d_x * w) / (d_z * 2 * f * tan(α)) * f
= (d_x * w) / (d_z * 2 * tan(α)) // with w in px
Whether you use (d_x, d_y, d_z) = (X, Y, Z) or (d_x, d_y, d_z) = (1000*X, 1000*Y, 1000*Z), the ratio d_x / d_z won't change.
Now, for the reasons behind your problem: you should maybe check whether you also apply the correct unit to the position of your camera / to its distance to the scene. Check also your α, or the unit of the focal length, depending on which one you use.
I think the latter suggestion is the most likely. It can be easy to forget to also apply the right unit to the characteristics of your camera.
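As a quick numeric check of that homogeneity argument, here is a small sketch using the b_x formula above with made-up camera numbers (the `project_bx` name and all values are illustrative); meters and millimeters give the same pixel value:

# b_x = (d_x * s_x) / (d_z * r_x) * r_z, with s_x in pixels and every other
# quantity in the same length unit.
def project_bx(d_x, d_z, s_x_px, r_x, r_z):
    return (d_x * s_x_px) / (d_z * r_x) * r_z

# Same scene, once in meters, once in millimeters:
print(project_bx(d_x=10.0,    d_z=50.0,    s_x_px=640, r_x=0.036, r_z=0.05))
print(project_bx(d_x=10000.0, d_z=50000.0, s_x_px=640, r_x=36.0,  r_z=50.0))
# Both print 177.77..., because only ratios of lengths enter the result.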

Distance between two coordinates in php using haversine

I've looked around and seen mention of the haversine formula to determine distance between two coordinates (lat1, lng1) and (lat2, lng2).
I've implemented this code:
function haversineGreatCircleDistance(
    $latitudeFrom, $longitudeFrom, $latitudeTo, $longitudeTo, $earthRadius = 6371000)
{
    // convert from degrees to radians
    $latFrom = deg2rad($latitudeFrom);
    $lonFrom = deg2rad($longitudeFrom);
    $latTo = deg2rad($latitudeTo);
    $lonTo = deg2rad($longitudeTo);

    $latDelta = $latTo - $latFrom;
    $lonDelta = $lonTo - $lonFrom;

    $angle = 2 * asin(sqrt(pow(sin($latDelta / 2), 2) +
        cos($latFrom) * cos($latTo) * pow(sin($lonDelta / 2), 2)));
    return $angle * $earthRadius;
}
And I am trying to determine:
1) What units is this returning? (the goal being feet)
2) Is this equation written the right way?
For example what should be the distance between these two points?
(32.8940695525,-96.7926336453) and (33.0642604502, -96.8064332754)?
I'm getting 18968.0903312 from the formula above.
Thanks!
1) What units is this returning? (the goal being feet)
Whatever units you supply the Earth's radius in; with the default of 6371000, the result is in meters.
2) Is this equation written the right way?
Test it. You can compare your results with an existing Haversine formula implementation, like this one.
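If it helps, here is a Python version of the same formula to compare against (the `haversine` function name is mine); passing the Earth's radius in feet gives the result in feet:

# Reference haversine for cross-checking. The return value is in whatever
# unit earth_radius is supplied in.
from math import radians, sin, cos, asin, sqrt

def haversine(lat1, lon1, lat2, lon2, earth_radius=6371000.0):
    phi1, phi2 = radians(lat1), radians(lat2)
    a = sin(radians(lat2 - lat1) / 2) ** 2 + \
        cos(phi1) * cos(phi2) * sin(radians(lon2 - lon1) / 2) ** 2
    return 2 * earth_radius * asin(sqrt(a))

meters = haversine(32.8940695525, -96.7926336453, 33.0642604502, -96.8064332754)
feet   = haversine(32.8940695525, -96.7926336453, 33.0642604502, -96.8064332754,
                   earth_radius=6371000.0 * 3.28084)   # radius converted to feet
print(meters, feet)   # ~18968 m and ~62231 ft; matches the value reported in the question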

Figuring out distance and course between two coordinates

I have 2 coordinates and would like to do something seemingly straightforward. I want to figure out, given:
1) Coordinate A
2) Course provided by Core Location
3) Coordinate B
the following:
1) Distance between A and B (can currently be done using distanceFromLocation) so ok on that one.
2) The course that should be taken to get from A to B (different from course currently traveling)
Is there a simple way to accomplish this, any third party or built in API?
Apple doesn't seem to provide this but I could be wrong.
Thanks,
~Arash
EDIT:
Thanks for the fast responses. I believe there may have been some confusion: I am looking to get the course (the bearing from point A to point B) in degrees, so that 0 degrees = north and 90 degrees = east, similar to the course value returned by CLLocation. I am not trying to compute actual turn-by-turn directions.
I have some code on github that does that. Take a look at headingInRadians here. It is based on the Spherical Law of Cosines. I derived the code from the algorithm on this page.
/*-------------------------------------------------------------------------
* Given two lat/lon points on earth, calculates the heading
* from lat1/lon1 to lat2/lon2.
*
* lat/lon params in radians
* result in radians
*-------------------------------------------------------------------------*/
double headingInRadians(double lat1, double lon1, double lat2, double lon2)
{
    //-------------------------------------------------------------------------
    // Algorithm found at http://www.movable-type.co.uk/scripts/latlong.html
    //
    // Spherical Law of Cosines
    //
    // Formula: θ = atan2( sin(Δlong) * cos(lat2),
    //                     cos(lat1) * sin(lat2) − sin(lat1) * cos(lat2) * cos(Δlong) )
    // JavaScript:
    //
    // var y = Math.sin(dLon) * Math.cos(lat2);
    // var x = Math.cos(lat1) * Math.sin(lat2) - Math.sin(lat1) * Math.cos(lat2) * Math.cos(dLon);
    // var brng = Math.atan2(y, x).toDeg();
    //-------------------------------------------------------------------------
    double dLon = lon2 - lon1;
    double y = sin(dLon) * cos(lat2);
    double x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dLon);
    return atan2(y, x);
}
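Since you're after degrees with 0 = north and 90 = east, here is a small sketch of the same formula with the degree conversion and normalization added (Python for brevity, `bearing_degrees` is just an illustrative name; porting it to Objective-C or Swift is direct):

# Same bearing formula, returning degrees in [0, 360) with 0 = north, 90 = east.
from math import radians, degrees, sin, cos, atan2

def bearing_degrees(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2; inputs and output in degrees."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dlmb = radians(lon2 - lon1)
    y = sin(dlmb) * cos(phi2)
    x = cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(dlmb)
    return (degrees(atan2(y, x)) + 360.0) % 360.0

print(bearing_degrees(51.5074, -0.1278, 48.8566, 2.3522))   # London -> Paris, roughly 148 degrees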
See How to get angle between two POI?
Depending on how much work you want to put into this, I would suggest looking at tree-traversal algorithms, things like A* (A-star), which you can use to find your way from one point to another, even if obstacles are in between.
If I understand you correctly, you have the current location and you have some other location. You want to find the distance (as the crow flies) between the two points, and to find a walking path between the points.
To answer your first question: distanceFromLocation will find the distance across the Earth's surface between two points, that is, it follows the curvature of the Earth, and it gives you the distance as the crow flies rather than a street route. So I think you're right about that.
The second question is much harder. What you want to do is called path-finding. Path-finding requires not only a search algorithm that will decide on the path, but also data about the possible paths. That is to say, if you want to find a path through the streets, the computer has to know how the streets are connected to each other. Furthermore, if you're trying to make a pathfinder that takes traffic and the time differences between two possible paths into account, you will need a whole lot more data. It is for this reason that we usually leave these kinds of tasks up to big companies with lots of resources, like Google and Yahoo.
However, if you're still interested in doing it, check this out:
http://www.youtube.com/watch?v=DoamZwkEDK0

What is the depth image received from Kinect

When I ran this Matlab code to get the depth image, the result I got was a 480x640 matrix. The minimum element value is 0 and the maximum element value is 2711. What does 2711 mean? Is it the distance from the camera to the farthest part of the image? And what is the unit of 2711: meters, feet, or something else?
I don't know what the Matlab code does to the depth exactly, but it probably does some processing on it, because the depth sent by the Kinect is an 11-bit value, so it shouldn't be higher than 2048. Try to find out what it does, or get access to the raw data sent by the Kinect.
The data sent by the Kinect is not a proper distance (it's a "disparity"), so you have to do some math to convert it to useful units.
From the OpenKinect project wiki (which contains useful information about the Kinect) :
From their data, a basic first-order approximation for converting the raw 11-bit disparity value to a depth value in centimeters is: 100 / (-0.00307 * rawDisparity + 3.33). This approximation is about 10 cm off at 4 m away, and less than 2 cm off within 2.5 m.
A better approximation is given by Stéphane Magnenat in this post: distance = 0.1236 * tan(rawDisparity / 2842.5 + 1.1863) in meters. Adding a final offset term of -0.037 centers the original ROS data. The tan approximation has a sum-squared difference of 0.33 cm, while the 1/x approximation is about 1.7 cm.
Once you have the distance using the measurement above, a good approximation for converting (i, j, z) to (x, y, z) is:
x = (i - w / 2) * (z + minDistance) * scaleFactor * (w / h)
y = (j - h / 2) * (z + minDistance) * scaleFactor
z = z
where
minDistance = -10
scaleFactor = 0.0021
These values were found by hand.
You can find more details about the Kinect's depth camera and its calibration on the ROS website (and many others !).
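For reference, here is a small Python sketch of the conversions quoted above; the function names are mine, and the constants are the wiki's hand-tuned values, so treat them as rough defaults rather than a calibration for your sensor:

# The two disparity-to-depth approximations quoted above, plus the (i, j, z)
# to (x, y, z) conversion with the quoted constants.
import math

def depth_cm_first_order(raw_disparity):
    """First-order approximation, in centimeters."""
    return 100.0 / (-0.00307 * raw_disparity + 3.33)

def depth_m_tan(raw_disparity):
    """Stephane Magnenat's tan fit, in meters, including the -0.037 offset."""
    return 0.1236 * math.tan(raw_disparity / 2842.5 + 1.1863) - 0.037

def to_xyz(i, j, z, w=640, h=480, min_distance=-10, scale_factor=0.0021):
    """Convert pixel (i, j) and depth z to (x, y, z) using the quoted constants."""
    x = (i - w / 2) * (z + min_distance) * scale_factor * (w / h)
    y = (j - h / 2) * (z + min_distance) * scale_factor
    return x, y, z

print(depth_cm_first_order(700), depth_m_tan(700))   # ~84.7 cm and ~0.85 m, consistent with each other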
If you map the data to a meter scale it compresses the depth image slightly. I found this was an issue when I was trying to look for planes in the mapped data.