How to get the angle between two POIs? - iPhone

How do I calculate the angle in degrees between the coordinates of two POIs (points of interest) on an iPhone map application?

I'm guessing you want to calculate the angle in degrees between the coordinates of two points of interest (POIs).
Calculating the length of a great-circle arc (i.e. the distance between the two points, in km):
+(float) greatCircleFrom:(CLLocation*)first
                      to:(CLLocation*)second {
    int radius = 6371; // radius of the earth in km

    // convert everything to radians before using sin/cos
    float lat1 = first.coordinate.latitude * M_PI / 180;
    float lat2 = second.coordinate.latitude * M_PI / 180;
    float dLat = lat2 - lat1;
    float dLon = (second.coordinate.longitude - first.coordinate.longitude) * M_PI / 180;

    float a = pow(sin(dLat/2), 2) + cos(lat1) * cos(lat2) * pow(sin(dLon/2), 2);
    float c = 2 * atan2(sqrt(a), sqrt(1-a));
    float d = radius * c; // distance in km
    return d;
}
Another option is to pretend you are working in Cartesian coordinates (faster, but increasingly inaccurate over long distances):
+(float)angleFromCoordinate:(CLLocationCoordinate2D)first
               toCoordinate:(CLLocationCoordinate2D)second {
    float deltaLongitude = second.longitude - first.longitude;
    float deltaLatitude = second.latitude - first.latitude;
    // angle measured clockwise from north (0 = north, M_PI/2 = east), in radians
    float angle = (M_PI * .5f) - atan(deltaLatitude / deltaLongitude);

    if (deltaLongitude > 0)      return angle;
    else if (deltaLongitude < 0) return angle + M_PI;
    else if (deltaLatitude < 0)  return M_PI;

    return 0.0f;
}
If you want the result in degrees instead of radians, apply the following conversion:
#define RADIANS_TO_DEGREES(radians) ((radians) * 180.0 / M_PI)
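For example, here is a plain-C sketch (not from the original answer) that applies that macro and then wraps the result into the usual 0-360 compass range:
#include <math.h>  /* for fmod and M_PI; assumes the RADIANS_TO_DEGREES macro above */

/* Convert an angle in radians to a compass-style heading in [0, 360) degrees. */
static double headingInDegrees(double radians) {
    double degrees = RADIANS_TO_DEGREES(radians);  /* radians -> degrees */
    return fmod(degrees + 360.0, 360.0);           /* wrap into [0, 360) */
}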

You are calculating the 'bearing' from one point to another here. There is a whole collection of formulas for that, and for lots of other geographic quantities such as distance and cross-track error, on this web page:
http://www.movable-type.co.uk/scripts/latlong.html
The formulas are given in several forms, so you can easily convert them to whatever language you need for your iPhone. There are also JavaScript calculators, so you can check that your code gives the same answers as theirs.

If the other solutions don't work for you, try this:
- (int)getInitialBearingFrom:(CLLocation *)first
                          to:(CLLocation *)second
{
    float lat1 = [self degreesToRad:first.coordinate.latitude];
    float lat2 = [self degreesToRad:second.coordinate.latitude];
    float lon1 = [self degreesToRad:first.coordinate.longitude];
    float lon2 = [self degreesToRad:second.coordinate.longitude];
    float dLon = lon2 - lon1;

    float y = sin(dLon) * cos(lat2);
    float x1 = cos(lat1) * sin(lat2);
    float x2 = sin(lat1) * cos(lat2) * cos(dLon);
    float x = x1 - x2;

    float bearingRadRaw = atan2f(y, x);
    float bearingDegRaw = bearingRadRaw * 180 / M_PI;
    int bearing = ((int)bearingDegRaw + 360) % 360; // map -180..+180 deg into 0..360 deg
    return bearing;
}
For final bearing, simply take the initial bearing from the end point to the start point and reverse it (using θ = (θ+180) % 360).
You need these 2 helpers:
-(float)radToDegrees:(float)radians
{
    return radians * 180 / M_PI;
}

-(float)degreesToRad:(float)degrees
{
    return degrees * M_PI / 180;
}
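For the final-bearing rule above, here is a plain-C sketch of the same math (the names initialBearingDegrees/finalBearingDegrees are hypothetical, but the formula mirrors the Objective-C method):
#include <math.h>

/* Initial bearing from point 1 to point 2, in degrees clockwise from north. */
static double initialBearingDegrees(double lat1Deg, double lon1Deg,
                                    double lat2Deg, double lon2Deg)
{
    double lat1 = lat1Deg * M_PI / 180.0;
    double lat2 = lat2Deg * M_PI / 180.0;
    double dLon = (lon2Deg - lon1Deg) * M_PI / 180.0;
    double y = sin(dLon) * cos(lat2);
    double x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dLon);
    return fmod(atan2(y, x) * 180.0 / M_PI + 360.0, 360.0);  /* wrap into [0, 360) */
}

/* Final bearing: the initial bearing from the end point back to the start, reversed. */
static double finalBearingDegrees(double lat1Deg, double lon1Deg,
                                  double lat2Deg, double lon2Deg)
{
    double reverse = initialBearingDegrees(lat2Deg, lon2Deg, lat1Deg, lon1Deg);
    return fmod(reverse + 180.0, 360.0);  /* theta = (theta + 180) % 360 */
}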

Related

Get angle to land ballistic arc on target with fixed velocity projectile

So I was trying to follow the code in this question to get a turret that can fire ballistic projectiles with a fixed starting velocity and no drag to a given point on a 3D surface.
Find an angle to launch the projectile at to reach a specific point
But it's not quite working. The turret ends up aiming too high when the target is close, and too low when the target is further away. There is, of course, a specific distance at which it does hit the target, but that distance is arbitrary, so it's not at all helpful to me.
The way the error scales makes me think I have a multiplication mistake, or am missing some multiplication or division, but I can't for the life of me figure out where I am going wrong. Can anyone point me in the right direction?
Code Below:
float CalculateAngle(float velocity)
{
    float gravity = -Physics.gravity.y;

    Vector3 modPos = target.position;
    if (modPos.x < 0) modPos.x -= 2 * modPos.x;
    if (modPos.y < 0) modPos.y -= 2 * modPos.y;
    if (modPos.z < 0) modPos.z -= 2 * modPos.z;
    modPos.x /= 10;
    modPos.y /= 10;
    modPos.z /= 10;

    float deltaX = modPos.x - FirePoint.position.x;
    float deltaZ = modPos.z - FirePoint.position.z;
    float deltaY = modPos.y - FirePoint.position.y;
    float horzDelta = Mathf.Sqrt(deltaX * deltaX + deltaZ * deltaZ);

    float RHSFirstPart = (velocity * velocity) / (gravity * horzDelta);
    float RHSSecondPart = Mathf.Sqrt(((velocity * velocity) * ((velocity * velocity) - (2 * gravity * deltaY)) / (gravity * gravity * horzDelta * horzDelta)) - 1);

    float tanθ = RHSFirstPart - RHSSecondPart;
    float angle = Mathf.Atan2(tanθ, 1) * Mathf.Rad2Deg;
    if (angle < 0) return angle;
    return -angle;
}
Edit 1:
Still struggling heavily with this. I just can't get the math to work. I went back to the original source of the formula, https://physics.stackexchange.com/questions/56265/how-to-get-the-angle-needed-for-a-projectile-to-pass-through-a-given-point-for-t, and wrote a function that implements the exact equation given in the answers, copying the input values and everything. Except when I run it, it fails: one of the values that needs to be square-rooted is negative, which produces a NaN. I assume I am going wrong somewhere in my equation, but I've gone over it a hundred times and I am not spotting the error. My code:
float CalculateAngle3(float velocity)
{
    float deltaX = 500;
    float deltaY = 20;
    float v = 100;
    float vSqr = v * v;
    float g = 9.81f * 9.81f;

    float a = vSqr * (vSqr - 2 * g * deltaY);
    float b = (g * g) * (deltaX * deltaX);
    float c = a / b - 1;
    float d = Mathf.Sqrt(c); // c is negative, causing a NaN
    float e = vSqr / g * deltaX;
    float tanθ = e - d;
    return tanθ;
}
Edit 2:
Gave up. This guy solved it, so I am just going to use his logic instead :P
https://www.forrestthewoods.com/blog/solving_ballistic_trajectories/
Using it like so:
Vector3 s0;
Vector3 s1;
if (fts.solve_ballistic_arc(FirePoint.position, bomb.StartingVelocity.z, target.position, -Physics.gravity.y, out s0, out s1) > 0)
{
    targetPosition = transform.position + s1;
    SafetyEnabled = false;
}
else
{
    // Don't fire if we don't have a solution
    SafetyEnabled = true;
}
I'm going to leave the question open for now since it's still technically not answered. I still don't know why the original implementation wasn't working.
It is possible your quadratic formula is incorrect. (I do not know why you did not write a separate small function that solves the quadratic equation for any three given coefficients; that would make your code more readable and less prone to errors.)
float RHSFirstPart = (velocity * velocity) / (gravity * horzDelta);
float RHSSecondPart = Mathf.Sqrt(RHSFirstPart * RHSFirstPart - 2 * RHSFirstPart * deltaY / horzDelta - 1);
float tanθ = RHSFirstPart - RHSSecondPart;
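If you do want such a helper, a minimal plain-C sketch might look like this (illustrative only, not part of the original project). For the launch angle, the quadratic is in tan θ, with a = g·x²/(2v²), b = -x and c = g·x²/(2v²) + y:
#include <math.h>

/* Solve a*t^2 + b*t + c = 0 for real t. Returns the number of real roots
   (0, 1 or 2) and writes them to t0 and t1. */
static int solveQuadratic(float a, float b, float c, float *t0, float *t1)
{
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f) return 0;           /* no real solution: target unreachable */
    float s = sqrtf(disc);
    *t0 = (-b - s) / (2.0f * a);
    *t1 = (-b + s) / (2.0f * a);
    return (disc == 0.0f) ? 1 : 2;
}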
A comment: in most applications we do not really need the actual angle, but rather the values of cos(angle) and sin(angle), because these are the components of the unit vector that is usually what's sought (just as in your case). So there is no need to use inverse trigonometry to find an actual angle, which slows down calculations and may introduce unnecessary round-off errors.
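Following that comment, a small plain-C sketch (hypothetical helper, assuming the launch angle lies in (-90°, 90°)) that recovers cos(θ) and sin(θ) directly from tan θ, with no inverse trigonometry:
#include <math.h>

/* Given t = tan(theta), recover the components of the unit launch direction.
   Uses cos(theta) = 1/sqrt(1 + tan^2(theta)), valid for |theta| < 90 degrees. */
static void directionFromTan(float t, float *cosTheta, float *sinTheta)
{
    float inv = 1.0f / sqrtf(1.0f + t * t);
    *cosTheta = inv;
    *sinTheta = t * inv;
}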

Given a point (latitude, longitude), distance and bearing, how to get the new latitude and longitude

I found a piece of code on the web. It calculates the minimum bounding rectangle for a given lat/lon point and a distance.
private static void GetlatLon(double LAT, double LON, double distance, double angle, out double newLon, out double newLat)
{
    double dx = distance * 1000 * Math.Sin(angle * Math.PI / 180.0);
    double dy = distance * 1000 * Math.Cos(angle * Math.PI / 180.0);
    double ec = 6356725 + 21412 * (90.0 - LAT) / 90.0;
    double ed = ec * Math.Cos(LAT * Math.PI / 180);
    newLon = (dx / ed + LON * Math.PI / 180.0) * 180.0 / Math.PI;
    newLat = (dy / ec + LAT * Math.PI / 180.0) * 180.0 / Math.PI;
}

public static void GetRectRange(double centorlatitude, double centorLogitude, double distance,
    out double maxLatitude, out double minLatitude, out double maxLongitude, out double minLongitude)
{
    double temp; // discarded output
    GetlatLon(centorlatitude, centorLogitude, distance, 0, out temp, out maxLatitude);
    GetlatLon(centorlatitude, centorLogitude, distance, 180, out temp, out minLatitude);
    GetlatLon(centorlatitude, centorLogitude, distance, 90, out minLongitude, out temp);
    GetlatLon(centorlatitude, centorLogitude, distance, 270, out maxLongitude, out temp);
}
double ec = 6356725 + 21412 * (90.0 - LAT) / 90.0; //why?
double ed = ec * Math.Cos(LAT * Math.PI / 180); // why?
dx / ed //why?
dy / ec //why?
6378137 is the equatorial radius, 6356725 is the polar radius, and 21412 = 6378137 - 6356725.
From the link, I understand a little of what these mean, but I don't understand why these four lines are what they are. Could you please give more information and explain the derivation of the formula?
The link, in the section "Destination point given distance and bearing from start point", also gives another formula for the result. What is the derivation of that formula?
From this link, I know the derivation of the haversine formula; it's very informative. I don't think the formula in the "Destination point given distance and bearing from start point" section is just a simple inversion of the haversine.
Thanks a lot!
This is a prime example of why commenting your code makes it more readable and maintainable. Mathematically you are looking at the following:
double ec = 6356725 + 21412 * (90.0 - LAT) / 90.0; //why?
This is a measure of eccentricity to account for the equatorial bulge in some fashion. 21412 is, as you know, the difference in earth radius between the equator and pole. 6356725 is the polar radius. (90.0 - LAT) / 90.0 is 1 at the equator, and 0 at the pole. The formula simply estimates how much bulge is present at any given latitude.
double ed = ec * Math.Cos(LAT * Math.PI / 180); // why?
(LAT * Math.PI / 180) is a conversion of latitude from degrees to radians. cos(0) = 1 and cos(90°) = 0, so at the equator you are applying the full amount of the correction while at the pole you are applying none. Similar to the preceding line.
dx / ed //why?
dy / ec //why?
The above divide the east-west and north-south offsets (in metres) by the local radii, turning them into angular offsets in radians; these are then added to the starting longitude and latitude in the newLon/newLat computation to arrive at the new location.
I haven't done any research into the code snippet you found, but mathematically, this is what is taking place. Hopefully that will steer you in the right direction.
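As for the "Destination point given distance and bearing from start point" formula the question asks about, here is the standard spherical version as a plain-C sketch (a simple sphere of radius 6371 km, i.e. without the bulge correction used in the snippet above). It comes from the spherical law of cosines applied to the triangle formed by the north pole, the start point and the destination, not from inverting the haversine formula:
#include <math.h>

#define DEG2RAD (M_PI / 180.0)
#define EARTH_RADIUS_KM 6371.0

/* Destination point, given a start point, an initial bearing (degrees clockwise
   from north) and a distance (km), on a spherical earth. */
static void destinationPoint(double latDeg, double lonDeg,
                             double bearingDeg, double distanceKm,
                             double *outLatDeg, double *outLonDeg)
{
    double lat1 = latDeg * DEG2RAD;
    double lon1 = lonDeg * DEG2RAD;
    double brng = bearingDeg * DEG2RAD;
    double dR   = distanceKm / EARTH_RADIUS_KM;   /* angular distance */

    double lat2 = asin(sin(lat1) * cos(dR) + cos(lat1) * sin(dR) * cos(brng));
    double lon2 = lon1 + atan2(sin(brng) * sin(dR) * cos(lat1),
                               cos(dR) - sin(lat1) * sin(lat2));

    *outLatDeg = lat2 / DEG2RAD;
    *outLonDeg = lon2 / DEG2RAD;
}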
Haversine Example in C
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

double m2ft (double l) { /* convert meters to feet */
    return l / (1200.0 / 3937.0);
}

double ft2smi (double l) { /* convert feet to statute miles */
    return l / 5280.0;
}

double km2smi (double l) { /* convert km to statute miles */
    return ft2smi (m2ft (l * 1000.0));
}

static const double deg2rad = 0.017453292519943295769236907684886;
static const double earth_rad_m = 6372797.560856;

typedef struct pointd {
    double lat;
    double lon;
} pointd;

/* Computes the arc, in radians, between two WGS-84 positions.
   The result is equal to Distance(from,to)/earth_rad_m
       = 2*asin(sqrt(h(d/earth_rad_m)))
   where:
       d is the distance in meters between 'from' and 'to' positions.
       h is the haversine function: h(x) = sin²(x/2)
   The haversine formula gives:
       h(d/R) = h(to.lat - from.lat) + cos(from.lat)*cos(to.lat)*h(to.lon - from.lon)
   http://en.wikipedia.org/wiki/Law_of_haversines
*/
double arcradians (const pointd *from, const pointd *to)
{
    double latitudeArc = (from->lat - to->lat) * deg2rad;
    double longitudeArc = (from->lon - to->lon) * deg2rad;

    double latitudeH = sin (latitudeArc * 0.5);
    latitudeH *= latitudeH;

    double longitudeH = sin (longitudeArc * 0.5);
    longitudeH *= longitudeH;

    double tmp = cos (from->lat * deg2rad) * cos (to->lat * deg2rad);
    return 2.0 * asin (sqrt (latitudeH + tmp * longitudeH));
}

/* Computes the distance, in meters, between two WGS-84 positions.
   The result is equal to earth_rad_m*arcradians(from,to)
*/
double dist_m (const pointd *from, const pointd *to) {
    return earth_rad_m * arcradians (from, to);
}

int main (int argc, char **argv) {
    if (argc < 5) {
        fprintf (stderr, "Error: insufficient input, usage: %s lat1 lon1 lat2 lon2\n", argv[0]);
        return 1;
    }

    pointd points[2];
    points[0].lat = strtod (argv[1], NULL);
    points[0].lon = strtod (argv[2], NULL);
    points[1].lat = strtod (argv[3], NULL);
    points[1].lon = strtod (argv[4], NULL);

    printf ("\nThe distance from point 1 to point 2: %lf statute miles\n\n",
            km2smi (dist_m (&points[0], &points[1]) / 1000.0));

    return 0;
}

/* Example:
   ./bin/gce 31.77 -94.61 31.44 -94.698
   The distance from point 1 to point 2: 23.387997 statute miles
   (Nacogdoches to Lufkin, Texas)
*/
I assume 6356725 has something to do with the radius of the earth. Check out this answer, and also take a look at the Haversine Formula.

Angle between 2 GPS Coordinates

I'm working on another iPhone app that uses AR, and I'm creating my own framework, but I'm having trouble trying to get the angle of a second coordinate relative to the current device position. Does anyone know a good resource that could help me with this?
Thanks in advance!
If the two points are close enough together, and well away from the poles, you can use some simple trig:
float dy = lat2 - lat1;
float dx = cosf(M_PI/180*lat1)*(long2 - long1);
float angle = atan2f(dy, dx);
EDIT: I forgot to mention that latN and longN — and therefore dx and dy — can be in degrees or radians, so long as you don't mix units. angle, however, will always come back in radians. Of course, you can get it back to degrees if you multiply by 180/M_PI.
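A minimal C usage sketch of the snippet above (the coordinates are just illustrative values in degrees):
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Two nearby example points, in degrees. */
    float lat1 = 48.8584f, long1 = 2.2945f;
    float lat2 = 48.8606f, long2 = 2.3376f;

    float dy = lat2 - lat1;
    float dx = cosf(M_PI / 180 * lat1) * (long2 - long1);
    float angle = atan2f(dy, dx);   /* radians, measured from east, counter-clockwise */

    printf("angle = %f degrees\n", angle * 180 / M_PI);
    return 0;
}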
Here is the Android version of this code:
import com.google.android.maps.GeoPoint;

public double calculateAngle(GeoPoint startPoint, GeoPoint endPoint) {
    double lat1 = startPoint.getLatitudeE6() / 1E6;
    double lat2 = endPoint.getLatitudeE6() / 1E6;
    double long1 = startPoint.getLongitudeE6() / 1E6;
    double long2 = endPoint.getLongitudeE6() / 1E6;

    double dy = lat2 - lat1;
    double dx = Math.cos(Math.PI / 180 * lat1) * (long2 - long1);
    double angle = Math.atan2(dy, dx);
    return angle;
}

Box2d Calculating Trajectory

I'm trying to make physics bodies generated at a random position with a random velocity hit a target. I found this code on the web (originally written for Chipmunk) and slightly modified it to run in Box2D:
+ (CGPoint) calculateShotForTarget:(CGPoint)target from:(CGPoint)launchPos with:(float)velocity
{
    float xp = target.x - launchPos.x;
    float y = target.y - launchPos.y;
    float g = 20;
    float v = velocity;
    float angle1, angle2;
    float tmp = pow(v, 4) - g * (g * pow(xp, 2) + 2 * y * pow(v, 2));
    if (tmp < 0) {
        NSLog(@"No Firing Solution");
    } else {
        angle1 = atan2(pow(v, 2) + sqrt(tmp), g * xp);
        angle2 = atan2(pow(v, 2) - sqrt(tmp), g * xp);
    }
    CGPoint direction = CGPointMake(cosf(angle1), sinf(angle1));
    CGPoint force = CGPointMake(direction.x * v, direction.y * v);
    NSLog(@"force = %@", NSStringFromCGPoint(force));
    NSLog(@"direction = %@", NSStringFromCGPoint(direction));
    return force;
}
The problem is I don't know how to apply this to my program. I have a gravity of -20 for y, but putting 20 for g and a lower velocity like 10 for v gets me nothing but "No Firing Solution".
What am I doing wrong?
A lower velocity of 10 is never going to work; the projectile doesn't have enough power to travel the distance.
The error in the calculation is that everything is in meters except for the distance calculations, which are in pixels!
Changing the code to this fixed the crazy velocities I was getting:
+ (CGPoint) calculateShotForTarget:(CGPoint)target from:(CGPoint)launchPos with:(float)velocity
{
    float xp = (target.x - launchPos.x) / PTM_RATIO;
    float y = (target.y - launchPos.y) / PTM_RATIO;
    float g = 20;
    float v = velocity;
    float angle1, angle2;
    float tmp = pow(v, 4) - g * (g * pow(xp, 2) + 2 * y * pow(v, 2));
    if (tmp < 0) {
        NSLog(@"No Firing Solution");
    } else {
        angle1 = atan2(pow(v, 2) + sqrt(tmp), g * xp);
        angle2 = atan2(pow(v, 2) - sqrt(tmp), g * xp);
    }
    CGPoint direction = CGPointMake(cosf(angle1), sinf(angle1));
    CGPoint force = CGPointMake(direction.x * v, direction.y * v);
    NSLog(@"force = %@", NSStringFromCGPoint(force));
    NSLog(@"direction = %@", NSStringFromCGPoint(direction));
    return force;
}

2d collision between line and a point

I'm trying to understand collision detection in a 2D world. I recently found this tutorial: http://www.gotoandplay.it/_articles/2003/12/bezierCollision.php. One thing puzzles me a lot: in the Flash demo, the ball drops through without responding if I swap the starting and end points.
Can someone explain to me how the simulation works?
I have modified the sample code. It works perfectly until the start and end points are swapped. Here is the same code in Objective-C:
Thanks in advance.
-(void)render:(ccTime)dt {
    if (renderer)
    {
        CGPoint b = ball.position;
        float bvx = ball.vx;
        float bvy = ball.vy;
        bvx += .02;
        bvy -= .2;
        b.x += bvx;
        b.y += bvy;
        float br = ball.contentSize.width/2;

        for (int p = 0; p < [map count]; p++) {
            line *l = [map objectAtIndex:p];
            CGPoint p0 = l.end;
            CGPoint p1 = l.start;
            float p0x = p0.x, p0y = p0.y, p1x = p1.x, p1y = p1.y;

            // get Angle //
            float dx = p0x - p1x;
            float dy = p0y - p1y;
            float angle = atan2( dy, dx );
            float _sin = sin( angle );
            float _cos = cos( angle );

            // rotate p1 ( need only 'x' ) //
            float p1rx = dy * _sin + dx * _cos + p0x;

            // rotate ball //
            float px = p0x - b.x;
            float py = p0y - b.y;
            float brx = py * _sin + px * _cos + p0x;
            float bry = py * _cos - px * _sin + p0y;

            float cp = ( b.x - p0x ) * ( p1y - p0y ) - ( b.y - p0y ) * ( p1x - p0x );

            if ( bry > p0y - br && brx > p0x && brx < p1rx && cp > 0 ) {
                // calc new Vector //
                float vx = bvy * _sin + bvx * _cos;
                float vy = bvy * _cos - bvx * _sin;
                vy *= -.8;
                vx *= .98;
                float __sin = sin( -angle );
                float __cos = cos( -angle );
                bvx = vy * __sin + vx * __cos;
                bvy = vy * __cos - vx * __sin;

                // calc new Position //
                bry = p0y - br;
                dx = p0x - brx;
                dy = p0y - bry;
                b.x = dy * __sin + dx * __cos + p0x;
                b.y = dy * __cos - dx * __sin + p0y;
            }
        }

        ball.position = b;
        ball.vx = bvx;
        ball.vy = bvy;

        if ( b.y < 42)
        {
            ball.position = ccp(50, size.height - 42);
            ball.vx = .0f;
            ball.vy = .0f;
        }
    }
}
The order of the points defines an orientation on the curve. If the start point is on the left and the end point on the right, then the curve is oriented so that "up" points above the curve. However, if you swap the start/end points the curve is oppositely oriented, so now "up" actually points below the curve.
When your code detects a collision and then corrects the velocity it is using the curve's orientation. That is why when the ball drops on the curve with the start/end points swapped it appears to jump through the curve.
To correct this your collision resolution code should check which side of the curve the ball is on (with respect to the curve's orientation), and adjust accordingly.
If you swap l.end and l.start, the check ends up serving the line without the segment (l.start, l.end). This is because all the values here are signed.
The algorithm rotates the plane so that the line is horizontal and one of the segment's ends doesn't move. After that it is easy to tell whether the ball touches the line, and if it does, its velocity changes: in the rotated plane it simply reverses the y component, and then the velocity is rotated back into the original (non-horizontal) frame.
In fact, this is not a very good implementation. All of this can be done without sin and cos, just with vectors.
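To illustrate that last remark, here is a rough plain-C sketch (a hypothetical helper, not the original tutorial code) of the same bounce expressed with vectors only: a cross product decides which side of the line the ball is on, and dot products split the velocity into tangential and normal parts before reflecting it. The segment-bounds test and the position correction are left out; the point is only that the sin/cos rotation can be replaced by dot and cross products.
#include <math.h>

typedef struct { float x, y; } vec2;

/* Reflect (and dampen) a velocity off the line p0-p1 using only vector algebra.
   Returns 1 and updates *v if the ball is on the line's "solid" side (the side
   picked by the sign of the cross product, which is exactly the orientation
   that flips when you swap the endpoints); returns 0 otherwise. */
static int bounceOffLine(vec2 p0, vec2 p1, vec2 ball, vec2 *v,
                         float restitution, float friction)
{
    vec2 d = { p1.x - p0.x, p1.y - p0.y };             /* line direction */
    float len = sqrtf(d.x * d.x + d.y * d.y);
    if (len == 0.0f) return 0;

    vec2 t = { d.x / len, d.y / len };                 /* unit tangent */
    vec2 n = { -t.y, t.x };                            /* unit normal */

    float side = (ball.x - p0.x) * d.y - (ball.y - p0.y) * d.x;
    if (side <= 0.0f) return 0;                        /* wrong side of the line */

    float vn = v->x * n.x + v->y * n.y;                /* normal speed */
    float vt = v->x * t.x + v->y * t.y;                /* tangential speed */
    vn *= -restitution;                                /* bounce (0.8 in the original) */
    vt *= friction;                                    /* slide damping (0.98 in the original) */

    v->x = vt * t.x + vn * n.x;
    v->y = vt * t.y + vn * n.y;
    return 1;
}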