iOS - Calculating distance, azimuth, elevation and relative position (Augmented Reality)

I am starting to build an augmented reality app where you can place an image on the screen in your augmented reality camera view, and it stays anchored to that position on Earth, so that someone else who comes by with their camera view can see it in the same place. For this I know I need to calculate some sort of distance factor along with azimuth and elevation.
I have already figured out how to send the object's graphics up to a server and retrieve them, but how can I place the object back in its original position relative to Earth? I know I need to calculate its:
Altitude
Coordinates
Azimuth
Elevation
Distance
But how would I calculate these values and piece them together? I hope you understand what I mean.
To refine your understanding let me give you a short demo of the app:
A man is in his house and decides to place an image of a painting on one of his walls. He opens the app, which defaults to the augmented reality screen, presses the plus button, and adds an image from his photo library. Behind the scenes, the app saves the location and positional data to a server. Later, someone else with the app comes by with the augmented reality screen open; the app queries the server, finds images saved nearby, downloads the image, and places it on the wall so the second man can see it through his phone as he moves it around.
What approach should I take to achieve this? Any outline, links, resources, tutorials, thoughts, or experience would be appreciated. Thanks! This was a generally hard question to write down; I hope you can understand. If not, please tell me and I will reword.
Rohan

I'm working on two AR iOS apps which do the following: convert azimuth (compass, horizontal angle) and elevation (gyroscope, vertical angle) to a position in 3D space (e.g. spherical to cartesian).
The frameworks you need are:
CoreLocation
CoreMotion
Getting the geolocation (latitude, longitude, and altitude) is pretty straightforward. You can easily find this information in several online sources, but this is the main callback you need from the CLLocationManagerDelegate after you call startUpdatingLocation:
- (void)locationManager:(CLLocationManager *)manager didUpdateLocations:(NSArray *)locations
{
    // Use the most recent location from the updates array
    CLLocation *location = [locations lastObject];
    latitude  = (float) location.coordinate.latitude;
    longitude = (float) location.coordinate.longitude;
    altitude  = (float) location.altitude;
}
Getting the azimuth angle is also pretty straightforward, using the same delegate as the location after calling startUpdatingHeading:
- (void)locationManager:(CLLocationManager *)manager didUpdateHeading:(CLHeading *)newHeading
{
    azimuth = (float) newHeading.magneticHeading;
}
Elevation is extracted from the gyroscope, which doesn't have a delegate but is also easy to set up. The call looks something like this (note: this works for my app running in landscape mode, check yours):
elevation = fabsf(self.motionManager.deviceMotion.attitude.roll);
Finally, you can convert your orientation coordinates into a 3D point like so:
- (GLKVector3)sphericalToCartesian:(float)radius azimuth:(float)theta elevation:(float)phi
{
    // Convert coordinates: spherical to cartesian
    // Spherical: radial distance (r), azimuth (θ), elevation (φ)
    // Cartesian: x, y, z
    float x = radius * sinf(phi) * sinf(theta);
    float y = radius * cosf(phi);
    float z = radius * sinf(phi) * cosf(theta);
    return GLKVector3Make(x, y, z);
}
For this last part be very wary of angle and axis naming conventions as they vary wildly from source to source. In my system, θ is the angle on the horizontal plane, φ is the angle on the vertical plane, x is left-right, y is down-up, and z is back-front.
As for distance, I'm not sure you really need it, but if you do, just pass it in as the radius.
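If it helps, the same conversion can be checked numerically with a plain Python sketch (not iOS code; the function name is mine). Note that, per the convention above, φ is measured such that y = r·cos(φ):

```python
import math

def spherical_to_cartesian(radius, theta, phi):
    """Convert spherical coordinates (radius, azimuth theta, elevation phi)
    to cartesian (x, y, z), matching the axis convention above:
    x is left-right, y is down-up, z is back-front."""
    x = radius * math.sin(phi) * math.sin(theta)
    y = radius * math.cos(phi)
    z = radius * math.sin(phi) * math.cos(theta)
    return (x, y, z)
```

With θ = 0 and φ = π/2 the point lands one radius straight ahead on the z axis, which is a quick way to confirm the convention matches your renderer.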
Hope that helps

Swift 3
Gyroscope code update:
import CoreMotion
...
motionManager.deviceMotionUpdateInterval = 0.1
motionManager.startDeviceMotionUpdates(to: OperationQueue.current!) { deviceManager, error in
    guard let dm = deviceManager else { return }
    let roll = dm.attitude.roll
    let pitch = dm.attitude.pitch
    let yaw = dm.attitude.yaw
    print("r: \(roll), p: \(pitch), y: \(yaw)")
}

Related

Reconstruct near plane ray intersection with camera

I am trying to reconstruct the point where the ray of the camera rendering the current pixel intersects the near plane.
I need the coordinates of the intersection point in the local coordinates of the object being rendered.
This is my current implementation:
float4 nearClipLS = mul(inv_modelViewProjectionMatrix,
                        float4(i.vertex.x / i.vertex.w, i.vertex.y / i.vertex.w, -1., 1.));
nearClipLS /= nearClipLS.w;
There's got to be a more efficient way to do it, but the following should, in theory, work.
Find the offset vector from the camera to the pixel:
float3 cam2pos = v.worldPos - _WorldSpaceCameraPos;
Get the camera's forward vector:
float3 camFwd = UNITY_MATRIX_IT_MV[2].xyz;
Get the dot product of the two to determine how far the point projects in the direction of the camera's forward axis:
float projDist = dot(cam2pos, camFwd);
Then, you should be able to use that data to re-project the point onto the near clip plane:
float nearClipZ = _ProjectionParams.y;
float3 nearPos = _WorldSpaceCameraPos + (cam2pos * (nearClipZ / projDist));
This solution doesn't address edge cases (such as the point being level with or behind the camera, which would cause problems), so you may want to check those once you get it working.
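The projection math in the steps above can be sketched in plain Python (tuples standing in for float3; the names are mine, not Unity's), including the behind-the-camera edge case just mentioned:

```python
def reproject_to_near_plane(cam_pos, cam_fwd, near_clip_z, world_pos):
    """Scale the camera-to-point vector so the result lies on the near plane.
    cam_fwd must be a normalized forward vector; all vectors are (x, y, z)."""
    # Offset vector from the camera to the point
    cam2pos = tuple(p - c for p, c in zip(world_pos, cam_pos))
    # Dot product: how far the point projects along the camera's forward axis
    proj_dist = sum(a * b for a, b in zip(cam2pos, cam_fwd))
    if proj_dist <= 1e-6:
        return None  # point level with or behind the camera: no valid projection
    s = near_clip_z / proj_dist
    return tuple(c + v * s for c, v in zip(cam_pos, cam2pos))
```

For a camera at the origin looking down +z with a near plane at z = 1, a point at (2, 0, 4) re-projects to (0.5, 0, 1), which is easy to verify by hand.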

Longitude and Latitude to location on sphere in Unity

Hi all,
I'm trying to transform locations based upon longitude and latitude to a vector3 location, which will be placed on a sphere in Unity. However, the location seems to be constantly off (compared to the actual location).
I use the following code at the moment:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class testPosLatLong : MonoBehaviour {

    public float longi;
    public float lati;
    public float radius;
    public Transform marker;

    // Use this for initialization
    void Start () {
        // Convert from degrees to radians
        float templongi = longi * Mathf.PI / 180;
        float templati = lati * Mathf.PI / 180;

        float Xpos = radius * Mathf.Cos(templati) * Mathf.Cos(templongi);
        float Ypos = radius * Mathf.Cos(templati) * Mathf.Sin(templongi);
        float Zpos = radius * Mathf.Sin(templati);

        Debug.Log ("X, Y, Z: " + Xpos + " " + Ypos + " " + Zpos);

        // Set the X, Y, Z position from the longitude and latitude (Unity is y-up)
        Instantiate(marker);
        marker.position = new Vector3 (Xpos, Zpos, Ypos);
    }

    // Update is called once per frame
    void Update () {
    }
}
I've tried setting the longitude and latitude to zero, which appears to give the right position on the sphere. But when I try the longitude and latitude of Amsterdam, for example, the marker ends up in the wrong place compared to the actual location.
Am I missing something or what is going wrong here? I tried googling a lot, but couldn't find anything that might explain my current problem. The project itself can be found here: https://wetransfer.com/downloads/31303bd353fd9fde874e92338e68573120171205170107/1208ac%3E
Hope somebody can help.
I think your globe is what's wrong. It's tough to see from your images, but to me the equator looks like it's in slightly the wrong spot, and the North Pole looks to be too crowded by Greenland. I suspect it has to do with the projection you're using to paste the globe image onto the sphere.
Typically there are very complex ways of projecting 2D maps/images onto 3D surfaces. There's a whole field of geospatial analysis on how to do this accurately. It's a tough problem.
Also, the actual Earth isn't exactly a sphere, so this could introduce errors as well, but I would guess those would be much smaller than what you describe. Accurate maps represent the 3D Earth as a "geoid". To get locally accurate maps, people typically project data differently in small areas to maximize accuracy at a local scale; otherwise you see large distortions. So as you zoom into a global map, the projections actually change significantly.
One possible way to test where the issue is coming from is to set up control points of known coordinates, both on your map image and on the globe, and then test whether they line up. For example, put a spike sticking out of the world where Amsterdam SHOULD be on the globe, and another where it is on the actual image. If the two don't line up, you can be pretty sure where your problem lies. This is called "georeferencing".
I think for 3D projections you'd likely need at least 4 such "control points". You might take a look at the geospatial community on stackexchange for more detailed info on projecting maps and swapping between coordinate systems.
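As a quick numeric sanity check for the control-point idea, the question's mapping can be evaluated offline. This Python sketch (the function name is mine) mirrors the C# formula including its axis swap, so known control points can be tested without Unity: lat/lon (0, 0) should land on the +x axis and the North Pole on +y:

```python
import math

def lat_lon_to_unity(lat_deg, lon_deg, radius):
    """Map latitude/longitude (degrees) to a Unity-style y-up position,
    using the same formula and axis swap as the question's C# code."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    x = radius * math.cos(lat) * math.cos(lon)
    y = radius * math.sin(lat)                   # Zpos in the C# code, Unity's up axis
    z = radius * math.cos(lat) * math.sin(lon)   # Ypos in the C# code
    return (x, y, z)
```

If these control points are correct but the marker still looks wrong in the scene, the problem is in how the texture is projected onto the sphere, not in the math.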

Calculating coordinates from reference points

I'm working on a game in Unity where you can walk around in a city that also exists in real life.
In the game you should be able to enter real-world coordinates, or use your phone's GPS, and you'll be transported to the in-game position of those coordinates.
For this, I'd need to somehow convert the game coordinates to latitude and longitude coordinates. I have the coordinates of some specific buildings, and I figured I might be able to write a script to determine the game coordinates from those reference points.
I've been searching Google for a bit, and though I have probably come across the right solutions occasionally, I've been unable to understand them well enough to use them in my code.
If someone has experience with this, or knows how I could do this, I'd appreciate it if you could help me understand it :)
Edit: Forgot to mention that previous programmers have already placed the world at some position and rotation they felt like using, which unfortunately I can't simply change without breaking things.
Tim Falken
This is simple linear math. The main issue you'll come across is that your game coordinate system will probably be reversed along one or more axes; you'll likely need to reverse the direction along the latitude (Y) axis of your app. Aside from that it is just a simple conversion of scales. Since you say this is the map of a real place, you should be able to easily figure out the min/max lon/lat which your map covers. Take the absolute value of the difference between these two values and divide by the width/height of your map in each direction. This gives you the change in latitude (or longitude) per map unit. Store this value and it should be easy to convert both ways between the two units. Make functions that abstract the details and you should have no problems calculating this either way.
I assume that you have been able to retrieve the GPS coordinates OK.
EDIT:
By simple linear math I mean something like this (this is C++-style pseudo-code and completely untested; in a real-world example the constants would all be member variables instead):
define('MAP_WIDTH', 1000);
define('MAP_HEIGHT', 1000);
define('MIN_LON', 25.333);
define('MIN_LAT', 20.333);
define('MAX_LON', 27.25);
define('MAX_LAT', 20.50);

class CoordConversion {
    // Degrees per map unit in each direction
    float XScale = abs(MAX_LON - MIN_LON) / MAP_WIDTH;
    float YScale = abs(MAX_LAT - MIN_LAT) / MAP_HEIGHT;
    int LonDir = MIN_LON < MAX_LON ? 1 : -1;
    int LatDir = MIN_LAT < MAX_LAT ? 1 : -1;

    public float GetXFromLon(float lon) {
        return (this.LonDir > 0 ? (lon - MIN_LON) : (lon - MAX_LON)) / this.XScale;
    }
    public float GetYFromLat(float lat) {
        return (this.LatDir > 0 ? (lat - MIN_LAT) : (lat - MAX_LAT)) / this.YScale;
    }
    public float GetLonFromX(float x) {
        return (this.LonDir > 0 ? MIN_LON : MAX_LON) + (x * this.XScale);
    }
    public float GetLatFromY(float y) {
        return (this.LatDir > 0 ? MIN_LAT : MAX_LAT) + (y * this.YScale);
    }
}
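Since the pseudo-code above is untested, here is the same linear mapping as a small, runnable Python sketch (class and method names are mine). Expressing the scale as map units per degree, and letting it go negative for a reversed axis, removes the need for separate direction flags:

```python
class CoordConversion:
    """Linear mapping between lon/lat and map x/y, assuming an
    axis-aligned (unrotated) map. Scales are map units per degree;
    a negative scale handles a reversed axis automatically."""

    def __init__(self, map_w, map_h, min_lon, min_lat, max_lon, max_lat):
        self.min_lon, self.min_lat = min_lon, min_lat
        self.x_scale = map_w / (max_lon - min_lon)
        self.y_scale = map_h / (max_lat - min_lat)

    def x_from_lon(self, lon):
        return (lon - self.min_lon) * self.x_scale

    def y_from_lat(self, lat):
        return (lat - self.min_lat) * self.y_scale

    def lon_from_x(self, x):
        return self.min_lon + x / self.x_scale

    def lat_from_y(self, y):
        return self.min_lat + y / self.y_scale
```

A round trip (lon → x → lon) should return the input exactly, which is a cheap test to run with your own map bounds.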
EDIT2: If the map is rotated, you'll want to use the minimum and maximum lon/lat actually shown on the map. You'll also need to rotate each point after the conversion. I'm not even going to attempt to get this right off the top of my head, but I can give you the code you'll need:
POINT rotate_point(float cx, float cy, float angle, POINT p)
{
    float s = sin(angle);
    float c = cos(angle);

    // translate point back to origin:
    p.x -= cx;
    p.y -= cy;

    // rotate point
    float xnew = p.x * c - p.y * s;
    float ynew = p.x * s + p.y * c;

    // translate point back:
    p.x = xnew + cx;
    p.y = ynew + cy;
    return p;
}
This will need to be done when returning a game point, and in reverse before converting a game point back to a lat/lon point.
EDIT3: More help on getting the coordinates of your map. First find the city (or whatever it is) on Google Maps. Then you can right-click the northernmost point shown on your map and read off the maximum latitude. Repeat this for all four cardinal directions (min/max latitude and longitude) and you should be set.
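The rotation helper above, translated to a runnable Python sketch (angle in radians, counter-clockwise positive for a y-up system):

```python
import math

def rotate_point(cx, cy, angle, px, py):
    """Rotate point (px, py) around center (cx, cy) by `angle` radians."""
    s, c = math.sin(angle), math.cos(angle)
    # translate to the origin, rotate, translate back
    px, py = px - cx, py - cy
    return (px * c - py * s + cx, px * s + py * c + cy)
```

Rotating (1, 0) around the origin by π/2 gives (0, 1), which is an easy sanity check before wiring this into the map conversion.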

Detecting touch position on 3D objects in openGL

I have created a 3D object in OpenGL for one of my applications. The object is something like a human body and can be rotated by touch. How can I detect the position of a touch on this 3D object? That is, if the user touches the head, I have to detect that it is the head; if the touch is on the hand, then that has to be identified. It should work even if the object is rotated to some other orientation. I think the coordinates of the touch on the 3D object are required.
This is the method where I am getting the position of touch on the view.
- (void) touchesBegan: (NSSet*) touches withEvent: (UIEvent*) event
{
    UITouch* touch = [touches anyObject];
    CGPoint location = [touch locationInView: self];
    m_applicationEngine->OnFingerDown(ivec2(location.x, location.y));
}
Can anyone help? Thanks in advance!
Forget about ray tracing and other top-notch algorithms. We used a simple trick for one of our applications (Iyan 3D) on the App Store, though this technique needs one extra render pass every time you finish rotating the scene to a new angle. Render the different objects (head, hand, leg etc.) in different colors (not their actual colors, but unique ones). Read the color in the rendered image at the touched screen position; you can then identify the object from its color.
With this method you can also change the rendered image resolution to balance accuracy against performance.
To determine the 3D location of the object I would suggest ray tracing.
Assuming the model is in worldspace coordinates you'll also need to know the worldspace coordinates of the eye location and the worldspace coordinates of the image plane. Using those two points you can calculate a ray which you will use to intersect with the model, which I assume consists of triangles.
Then you can use a ray-triangle test to determine the 3D location of the touch, by finding the triangle whose intersection is closest to the image plane. If you want to know which triangle was touched, you will also want to save that information when you do the intersection tests.
This page gives an example of how to do ray triangle intersection tests: http://www.scratchapixel.com/lessons/3d-basic-lessons/lesson-9-ray-triangle-intersection/ray-triangle-intersection-geometric-solution/
Edit:
Updated with some sample code. It's slightly modified code taken from a C++ ray-tracing project I did a while ago, so you'll need to adapt it a bit to get it working for iOS. Also, in its current form the code returns whether the ray intersects the triangle rather than the actual intersection point.
// d is the direction the ray is heading in
// o is the origin of the ray
// verts is the 3 vertices of the triangle
// faceNorm is the normal of the triangle surface
bool
Triangle::intersect(Vector3 d, Vector3 o, Vector3* verts, Vector3 faceNorm)
{
    // Check for line parallel to plane
    float r_dot_n = dot(d, faceNorm);

    // If r_dot_n == 0, then the line and plane are parallel, but we need to
    // do the range check due to floating point precision
    if (r_dot_n > -0.001f && r_dot_n < 0.001f)
        return false;

    // Then we calculate the distance of the ray origin to the triangle plane
    float t = dot(faceNorm, (verts[0] - o)) / r_dot_n;
    if (t < 0.0)
        return false;

    // We can now calculate the barycentric coords of the intersection
    Vector3 ba_ca = cross(verts[1] - verts[0], verts[2] - verts[0]);

    float denom = dot(-d, ba_ca);
    float dist = dot(o - verts[0], ba_ca) / denom;
    float b = dot(-d, cross(o - verts[0], verts[2] - verts[0])) / denom;
    float c = dot(-d, cross(verts[1] - verts[0], o - verts[0])) / denom;

    // Check if in tri or if b & c have NaN values
    if (b < 0 || c < 0 || b + c > 1 || b != b || c != c)
        return false;

    // Use barycentric coordinates to calculate the intersection point
    Vector3 P = (1.f - b - c) * verts[0] + b * verts[1] + c * verts[2];
    return true;
}
The actual intersection point you would be interested in is P.
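For comparison, here is a self-contained Python sketch of a ray/triangle test, using the standard Möller–Trumbore formulation rather than the plane-then-barycentric version above, that does return the intersection point:

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def ray_triangle_intersect(o, d, v0, v1, v2, eps=1e-6):
    """Möller–Trumbore ray/triangle intersection.
    Returns the intersection point as an (x, y, z) tuple, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(d, e2)
    a = dot(e1, h)
    if -eps < a < eps:              # ray parallel to the triangle plane
        return None
    f = 1.0 / a
    s = sub(o, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:          # outside along edge v0 -> v1
        return None
    q = cross(s, e1)
    v = f * dot(d, q)
    if v < 0.0 or u + v > 1.0:      # outside the barycentric range
        return None
    t = f * dot(e2, q)
    if t < eps:                     # intersection behind the ray origin
        return None
    return (o[0] + t * d[0], o[1] + t * d[1], o[2] + t * d[2])
```

For picking, run this against every triangle and keep the hit with the smallest t (closest to the camera).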
Ray tracing is an option, and is used in many applications for doing just that (picking). The problem with ray tracing is that it's a lot of work to get a pretty simple, basic feature working. Ray tracing can also be slow, but if you have only one ray to trace (say, the location of your finger), it should be okay.
OpenGL's API also provides a picking technique. I suggest you look at, for instance: http://www.lighthouse3d.com/opengl/picking/
Finally, a last option would be to project the vertices of the object into screen space and use simple 2D techniques to find out which faces of the object your finger overlaps.
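For that last option, the 2D hit test itself is simple. A minimal Python sketch of a point-in-triangle test on projected screen-space vertices (the projection step itself is omitted; the function name is mine):

```python
def point_in_triangle_2d(p, a, b, c):
    """2D hit test using signed edge cross products. After projecting the
    mesh's triangles to screen space, test the touch point against each face.
    Works for either winding order: the point is inside when all three
    signed areas share a sign (or are zero, i.e. on an edge)."""
    def edge(p0, p1, q):
        # z component of the cross product (p1 - p0) x (q - p0)
        return (p1[0] - p0[0]) * (q[1] - p0[1]) - (p1[1] - p0[1]) * (q[0] - p0[0])

    d1, d2, d3 = edge(a, b, p), edge(b, c, p), edge(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)
```

When several faces overlap the touch point, pick the one with the smallest projected depth, just as in the ray-tracing approach.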

How to draw real world coordinates rotated relative to the device around a center coordinate?

I'm working on a simple location-aware game where the current location of the user is shown on a game map, as well as the locations of other players around him. It's not using MKMapView but a custom game map with no streets.
How can I translate the lat/long coordinates of the other players into CGPoint values to represent them on the world-scale game map, with a fixed scale like 50 meters = 50 points on screen, and orient all the points so that the user can see in which direction he would have to go to reach another player?
The key goal is to generate CGPoint values for lat/long coordinates for a flat top-down view, but orient the points around the users current location similar to the orient map feature (the arrow) of Google Maps so you know where is what.
Are there frameworks which do the calculations?
First you have to transform lon/lat to cartesian x, y in meters.
Next is the direction to your other players. The direction components are dy and dx, where dy = player2.y - me.y, and likewise for dx. Normalize dy and dx by dividing by the distance between player2 and me.
You receive:
ny = dy / sqrt(dx*dx + dy*dy)
nx = dx / sqrt(dx*dx + dy*dy)
Multiply by 50. Now you have a point 50 m in the direction of player2:
comp2x = 50 * nx;
comp2y = 50 * ny;
Now center the map on me.x/me.y and apply the screen-to-meter scale.
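The normalization steps above amount to this small Python sketch (the function name is mine; positions are assumed to already be in a local cartesian frame in meters):

```python
import math

def point_toward(me, other, dist):
    """Return the point `dist` meters from `me` in the direction of `other`.
    Both inputs are (x, y) tuples in a local cartesian frame in meters."""
    dx, dy = other[0] - me[0], other[1] - me[1]
    length = math.hypot(dx, dy)  # sqrt(dx*dx + dy*dy)
    if length == 0:
        return me  # same position: direction is undefined
    # normalize the offset, then scale it to the requested distance
    return (me[0] + dist * dx / length, me[1] + dist * dy / length)
```

For example, a player at (3, 4) relative to you maps to a marker at (30, 40) on a 50 m circle, since the normalized direction is (0.6, 0.8).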
You want MKMapPointForCoordinate from MapKit. This converts from latitude-longitude pairs to a flat surface defined by an x and y. Take a look at the documentation for MKMapPoint which describes the projection. You can then scale and rotate those x,y pairs into CGPoints as needed for your display. (You'll have to experiment to see what scaling factors work for your game.)
To center the points around your user, just subtract the value of their x and y position (in MKMapPoints) from the points of all other objects. Something like:
MKMapPoint userPoint = MKMapPointForCoordinate(userCoordinate);
MKMapPoint otherObjectPoint = MKMapPointForCoordinate(otherCoordinate);
otherObjectPoint.x -= userPoint.x; // center around your user
otherObjectPoint.y -= userPoint.y;
CGPoint otherObjectCenter = CGPointMake(otherObjectPoint.x * 0.001, otherObjectPoint.y * 0.001);
// Using (50, 50) as an example for where your user view is placed.
userView.center = CGPointMake(50, 50);
otherView.center = CGPointMake(50 + otherObjectCenter.x, 50 + otherObjectCenter.y);