Moving around the surface of an Earth-shaped spheroid in Unity

I'm trying to make a Unity game that allows the user to explore the surface of an Earth shaped spheroid, based on WGS84.
The project so far is on Github, and there's a YouTube video of this behaviour.
A shape the size of Earth is way too big for Unity, so I just spawn tiles near the user, offset so that the first tile is at Unity's origin point. This bit works.
The issue is moving around. I've been using an approach where I get the user's position in ECEF coordinates, normalise that to get the player's global orientation, and then translate the player forward based on that orientation and their rotation.
The issue with this is that normalising the ECEF coordinate means the player is moving along a sphere, but the WGS84 spheroid is not perfectly spherical. So the player sinks into the floor or flies up as you go south or north, respectively.
My question is, how can I allow the user to move around the surface of the spheroid by way of translation? I feel like there might be some way of taking the major/minor axis of the spheroid into account as the player moves, but I'm not sure how to do that.

I have no experience with Unity or computer graphics, I'm approaching it purely from the navigation point of view.
Let's look at the real world.
We want to travel either by walking/driving on the surface or flying at some altitude. When we do, we move in the local coordinate system (North-South, East-West, Up-Down) and can't see any curvature. We assume the Earth is flat.
The problem arises when we try to do it on a computer, which is ruthlessly precise and knows the shape of the Earth. We can't assume the Earth is flat, we can't assume the Earth is a sphere. The Earth is a geoid. Fortunately for some purposes we can simplify things and assume the Earth is an ellipsoid. You chose WGS84. Good!
So how to move around an ellipsoid? Solving the problem analytically is a nightmare. We have to cheat ;)
We should assume the Earth is flat for a moment, make a move in a chosen direction in the local coordinate system, note the altitude of the new position, calculate the global geodetic coordinates (Lat, Long, Alt) of that new point, and then replace the altitude with the one obtained in the local coordinate system. In other words: each time we move forward along a perfectly straight line and diverge from the ellipsoid (just a tiny bit), we force the altitude not to change in relation to the ellipsoid.
Implementation.
You need to be able to freely translate coordinates between geodetic (Lat, Long, Alt) and ECEF. Going from geodetic to ECEF is easy. Finding geodetic coordinates for a given ECEF position is much more complex; there are many different algorithms, and you should be able to find a ready-to-use implementation somewhere.
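If it helps, here is a minimal Python sketch of both directions for WGS84. The geodetic-to-ECEF direction is the standard closed form; the reverse uses a simple fixed-point iteration, which is plenty for game purposes (it is not robust exactly at the poles, where cos(lat) vanishes):

    import math

    WGS84_A = 6378137.0                    # semi-major axis [m]
    WGS84_F = 1.0 / 298.257223563          # flattening
    WGS84_E2 = WGS84_F * (2.0 - WGS84_F)   # first eccentricity squared

    def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
        lat, lon = math.radians(lat_deg), math.radians(lon_deg)
        # N: prime-vertical radius of curvature at this latitude
        n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
        x = (n + alt_m) * math.cos(lat) * math.cos(lon)
        y = (n + alt_m) * math.cos(lat) * math.sin(lon)
        z = (n * (1.0 - WGS84_E2) + alt_m) * math.sin(lat)
        return x, y, z

    def ecef_to_geodetic(x, y, z, iterations=5):
        lon = math.atan2(y, x)
        p = math.hypot(x, y)
        lat = math.atan2(z, p * (1.0 - WGS84_E2))   # initial guess
        for _ in range(iterations):
            n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
            alt = p / math.cos(lat) - n
            lat = math.atan2(z, p * (1.0 - WGS84_E2 * n / (n + alt)))
        return math.degrees(lat), math.degrees(lon), alt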
What you also need is a Local Tangent Plane, and to be precise, you are going to use NED (North-East-Down) coordinates.
Let's assume your object is initially at some geodetic position. You write down the altitude (relative to the ellipsoid). Then you create a local NED coordinate system with its origin at that point. Then you move the object in that local coordinate system. You write down how much the altitude (or rather the Down coordinate) changed. Then you must calculate the ECEF coordinates of that new position and transform it to geodetic (Lat, Long, Alt). You have the old altitude, you have the altitude change in the NED coordinates, which means you know the new altitude. You then apply that altitude to your new geodetic coordinates (brutally replace the Alt in Lat/Long/Alt with a new value).
Then you make another move in the NED coordinates defined for that new position. And so on...
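If Python pseudocode helps, here is a minimal sketch of that loop. I'm leaning on the pymap3d package (pip install pymap3d) for the NED-to-geodetic conversion rather than hand-rolling it, and the starting position and step size are made-up example values:

    import pymap3d as pm

    lat, lon, alt = 51.4778, -0.0015, 10.0   # start 10 m above the ellipsoid

    def step(lat, lon, alt, north_m, east_m, down_m):
        """One move in the local NED frame anchored at (lat, lon, alt)."""
        new_lat, new_lon, new_alt = pm.ned2geodetic(north_m, east_m, down_m,
                                                    lat, lon, alt)
        # new_alt drifts slightly upward because the straight NED step leaves
        # the curved ellipsoid; brutally replace it with the altitude implied
        # by the local move (old altitude minus the commanded Down change).
        return new_lat, new_lon, alt - down_m

    for _ in range(5):                        # walk 100 m north per step
        lat, lon, alt = step(lat, lon, alt, 100.0, 0.0, 0.0)
        print(f"{lat:.6f} {lon:.6f} {alt:.3f}")  # altitude stays 10.000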
I'm not sure if this is clear; the process is quite complicated. If you can't understand - shout :)

Related

Can the coordinate of any point that a person looks at be determined from the photo of his eyes?

I am planning to develop a program in MATLAB/Python that can detect the coordinates of any point in space that a person is looking at. I was hoping to do it by placing a headset with a camera mounted in front of the eyes and tracking the pupils of both eyes. I still don't know if it is possible. That's why I need help.
This is a challenging task.
To some extent, you can determine the outline of the pupils, and a little of the iris, by image processing. Fitting an ellipse to each outline, plus a little geometry on the ellipse axes, gives you an approximate direction of gaze for each eye. Casting a ray from each eye along these directions then gives you a point of convergence.
The coordinates you will obtain will be relative to the image plane, i.e. to the position of the camera, which will move with the head.
In any case, the ellipse outlines will be inexact, the rays will fail to cross exactly, corneal refraction and perspective will distort the measurements, the glasses will shift a little, and so on.
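Because the rays won't cross exactly, the usual move is to take the midpoint of the shortest segment between them. A minimal numpy sketch of that convergence step, assuming you already have an origin and gaze direction per eye (the positions and directions below are made-up example values):

    import numpy as np

    def closest_point_between_rays(o1, d1, o2, d2):
        """Midpoint of the shortest segment between two rays o + t*d."""
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        w = o1 - o2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        p, q = d1 @ w, d2 @ w
        denom = a * c - b * b
        if abs(denom) < 1e-9:            # rays are (nearly) parallel
            return None
        t1 = (b * q - c * p) / denom     # parameter along ray 1
        t2 = (a * q - b * p) / denom     # parameter along ray 2
        return (o1 + t1 * d1 + o2 + t2 * d2) / 2.0   # gaze point estimate

    left_eye  = np.array([-0.03, 0.0, 0.0])   # ~6 cm interpupillary distance
    right_eye = np.array([ 0.03, 0.0, 0.0])
    gaze_l = np.array([ 0.03, 0.0, 1.0])      # both eyes converge ~1 m ahead
    gaze_r = np.array([-0.03, 0.0, 1.0])
    print(closest_point_between_rays(left_eye, gaze_l, right_eye, gaze_r))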

How do I think about rotation in the physical and virtual world?

I have a couple of IMUs (rotation devices); I'm trying to attach them to fingers and make a glove.
I get confused because I don't have a good way to think about rotation in the physical and virtual world.
Right now, the way I'm thinking about it is that, in the physical world, there is a world coordinate (XYZ). The IMU's rotation quaternion is probably based on that.
I'm also thinking that each IMU has a slightly different physical-world coordinate system from the others (because they give slightly different quaternion values when pointed in the same direction).
In the virtual world, each IMU has another virtual world coordinate system.
I'm wondering whether it's really just one coordinate system that can't be separated into physical and virtual. I'm not sure how to think about all of this, and it makes it hard to ask questions.
Please tell me which concepts/terminologies I need to understand to gain intuition about how to think about rotations in the physical and virtual world.
The motivation? I am trying to calibrate my glove (each finger has one IMU, the palm has one IMU) so that it drives a correct hand model in the Unity scene. But it's hard to wrap my head around it.
The main concept to understand about how Unity handles the rotation of your object is that every element has a World rotation and a Local rotation.
The local rotation is the rotation of the object relative to its parent, and the world rotation is the rotation of the object relative to the root of your scene.
Local rotation: https://docs.unity3d.com/ScriptReference/Transform-localRotation.html
World rotation: https://docs.unity3d.com/ScriptReference/Transform-rotation.html
These variables are stored as Quaternions, but if you want to work effectively with them, you will probably want to use Vector3 values. This is what eulerAngles are for:
https://docs.unity3d.com/ScriptReference/Transform-eulerAngles.html
https://docs.unity3d.com/ScriptReference/Quaternion-eulerAngles.html
The Euler angles of your quaternion are actually the XYZ rotation values you mentioned in your post.
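Outside Unity you can play with the same parent/local relationship using scipy, which is essentially what an IMU-glove calibration has to invert. A small sketch (the angles are arbitrary example values):

    from scipy.spatial.transform import Rotation as R

    parent_world = R.from_euler("xyz", [0, 45, 0], degrees=True)
    child_local  = R.from_euler("xyz", [30, 0, 0], degrees=True)

    # world = parent_world * local  (apply the local rotation first,
    # then the parent's world rotation)
    child_world = parent_world * child_local

    # and the inverse: recover a local rotation from two world rotations
    recovered_local = parent_world.inv() * child_world

    print(child_world.as_quat())     # (x, y, z, w), same layout as Unity
    print(recovered_local.as_euler("xyz", degrees=True))  # ~[30, 0, 0]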

Finding real world coordinate(3D) from 2D coordinates

I have the x, y coordinates of a feature from a single photograph. I know the camera parameters. How can I get the 3D coordinates of that feature (in MATLAB)? Please help me.
Have a look at this: http://www.cim.mcgill.ca/~langer/558/4-cameramodel.pdf
Systems like this that I've seen before require that you know where the camera is (latitude and longitude) and in which direction (azimuth and elevation) it is pointing, together with the field of view. Then you project this onto the geodata of the environment, and from there you can do all kinds of things, like finding the 3D position of an object based on its location in the photograph.
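For intuition, here is a minimal numpy sketch of that pipeline with flat ground standing in for real geodata: turn a pixel into a ray via the intrinsics, rotate it into the world frame, and intersect with the z = 0 plane. Everything here (focal length, camera pose) is a made-up example:

    import numpy as np

    K = np.array([[800.0,   0.0, 320.0],     # fx,  0, cx
                  [  0.0, 800.0, 240.0],     #  0, fy, cy
                  [  0.0,   0.0,   1.0]])
    cam_pos = np.array([0.0, 0.0, 10.0])     # camera 10 m above the ground
    R_wc = np.array([[1.0,  0.0,  0.0],      # camera looking straight down:
                     [0.0, -1.0,  0.0],      # rotation from camera frame
                     [0.0,  0.0, -1.0]])     # to world frame

    def pixel_to_ground(u, v):
        ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray, camera frame
        ray_world = R_wc @ ray_cam
        t = -cam_pos[2] / ray_world[2]       # solve cam_pos.z + t*ray.z = 0
        return cam_pos + t * ray_world

    print(pixel_to_ground(320.0, 240.0))     # principal point: straight below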

iOS is it possible to convert CLLocation into some sort of XYZ metric coordinate system?

I'm building an augmented reality game, and working with CLLocation is rather cumbersome.
Is there some way to locally approximate CLLocation as an XYZ coordinate, expressed in meters, with the origin at some arbitrary point (for example the initial position when the game was started)?
Let's say I'm working within a 1 mile radius and do not really care about the curvature of the earth. Is it possible to approximate or somehow simplify the location-based calculations for local position tracking?
Alternatively, is there a coordinate system that can be used with CLLocation that also incorporates the roll, pitch, yaw of the CMAttitude as well as compass orientation?
Clarification: As far as I understand, the problem with latitude and longitude is that their units vary in size, depending on the position on the globe. I should've specified that X,Y,Z should be in standard units, like meters or feet.
Thank you!
The Haversine formula may be useful.
I found a good article on it at http://www.jaimerios.com/?p=39 with code examples.
You could get the initial point at the app's launch and calculate the relative points based on the user coordinates as he or she moves. Admittedly, this is not super elegant, but if you are just trying to do some simple comparisons based on the user's location relative to an arbitrary origin, this should work. For the Z, Alex Stone's suggestion of calculating it based on the altitude should be fine.
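A minimal Python sketch of both pieces: haversine for distances, and an equirectangular flat-earth approximation for the local X/Y, which is plenty accurate within a ~1 mile radius (the origin coordinates are made-up example values):

    import math

    EARTH_R = 6371000.0  # mean Earth radius [m]

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two lat/lon points, in meters."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = p2 - p1, math.radians(lon2 - lon1)
        a = (math.sin(dp / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
        return 2 * EARTH_R * math.asin(math.sqrt(a))

    def to_local_xy(origin_lat, origin_lon, lat, lon):
        """Equirectangular approximation: meters east (x) and north (y)."""
        x = (math.radians(lon - origin_lon)
             * EARTH_R * math.cos(math.radians(origin_lat)))
        y = math.radians(lat - origin_lat) * EARTH_R
        return x, y

    origin = (37.3318, -122.0312)            # example: app launch point
    print(to_local_xy(*origin, 37.3418, -122.0312))   # ~1112 m north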

Open GL - ES 2.0 : Touch detection

Hi guys, I am doing some work on iOS that requires OpenGL ES. Right now I have a bunch of squares, cubes and triangles on the screen, and some of these geometries might overlap. Any ideas/approaches for touch detection?
Regards
To follow up on the answer already given, squares, cubes and triangles are convex shapes so you can perform ray-object intersection quite easily, even directly from the geometry rather than from the mathematical description of the perfect object.
You're going to need to be able to calculate the distance of a point from the plane and the intersection of a ray with the plane. As a simple test you can implement yourself very quickly, for each polygon on the convex shape work out the intersection between the ray and the plane. Then check whether that point is behind all the planes defined by polygons that share an edge with the one you just tested. If so then the hit is on the surface of the object — though you should be careful about coplanar adjoining polygons and rounding errors.
Once you've found a collision you can easily get the length of the ray to the point of collision. The object with the shortest distance is the one that's in front.
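Here is a minimal numpy sketch of that test for a convex solid. Rather than checking the planes adjacent to each polygon, it uses the equivalent standard trick of clipping the ray's parameter interval against every face's half-space; the unit cube is just example geometry:

    import numpy as np

    def ray_hits_convex(origin, direction, planes, eps=1e-9):
        """planes: list of (normal, d), inside defined by n.p <= d.
        Returns the smallest hit parameter t, or None on a miss."""
        t_min, t_max = 0.0, np.inf
        for n, d in planes:
            denom = n @ direction
            dist = n @ origin - d            # signed distance from the plane
            if abs(denom) < eps:
                if dist > 0:                 # parallel and outside: miss
                    return None
                continue
            t = -dist / denom
            if denom < 0:                    # entering this half-space
                t_min = max(t_min, t)
            else:                            # leaving this half-space
                t_max = min(t_max, t)
            if t_min > t_max:
                return None
        return t_min

    # axis-aligned unit cube centred on the origin, as six half-spaces
    cube = [(np.array(n, float), 0.5) for n in
            [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]]
    print(ray_hits_convex(np.array([0.0, 0.0, -5.0]),
                          np.array([0.0, 0.0, 1.0]), cube))  # -> 4.5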
If that's fast enough then great; otherwise you'll probably want to look into partitioning the world or breaking objects down to their silhouettes. Convex objects are really simple: consider all the edges that run between one polygon and the next. If exactly one of those two polygons is front-facing then the edge is part of the silhouette. All the silhouette edges together can be projected to a convex 2D shape on the view plane. You can then test touches by performing a 2D point-in-polygon test against that.
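That final point-in-polygon test is a few lines; a sketch of the standard even-odd (crossing-number) version:

    def point_in_polygon(px, py, verts):
        """verts: list of (x, y) polygon vertices in order."""
        inside = False
        j = len(verts) - 1
        for i in range(len(verts)):
            xi, yi = verts[i]
            xj, yj = verts[j]
            # does edge (j -> i) cross the horizontal ray from (px, py)?
            if (yi > py) != (yj > py):
                x_cross = xi + (py - yi) * (xj - xi) / (yj - yi)
                if px < x_cross:
                    inside = not inside
            j = i
        return inside

    square = [(0, 0), (2, 0), (2, 2), (0, 2)]
    print(point_in_polygon(1, 1, square),
          point_in_polygon(3, 1, square))   # True False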
A further common alternative that eliminates most of the maths is picking. You'd render the scene to an invisible buffer with each object appearing as a solid blob in a suitably unique colour. To test for touch, you'd just do a glReadPixels and inspect the colour.
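The GL side of that is an offscreen render plus a glReadPixels call; the only bookkeeping is mapping object ids to unique colours and back, which might look like this:

    def id_to_rgb(obj_id):
        """Pack a 24-bit object id into an (r, g, b) byte triple."""
        return ((obj_id >> 16) & 0xFF, (obj_id >> 8) & 0xFF, obj_id & 0xFF)

    def rgb_to_id(r, g, b):
        """Recover the object id from the pixel glReadPixels returned."""
        return (r << 16) | (g << 8) | b

    assert rgb_to_id(*id_to_rgb(123456)) == 123456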
For the purposes of glu on the iPhone, you can grab SGI's implementation (as used by MESA). I've used its tessellator in a shipping, production project before.
I had that problem in the past. What I have used is an implementation of glu unproject that you can find on google (it uses the inverse of the model view projection matrix and the viewport size). This allows you to map the 2D screen coordinates to a 3D vector into the world. Then, you can use this vector to intersect with your objects and see which one intersects (or comes really close to doing so).
I do hope there are better ways of doing this, so I look forward to other answers as well!
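For reference, the unproject itself is short if you already have the combined matrix. A numpy sketch, where the matrix and viewport are assumed inputs, and depths 0 and 1 give the near and far points of the pick ray:

    import numpy as np

    def unproject(win_x, win_y, depth, mvp, viewport):
        """depth in [0, 1]; viewport = (x, y, width, height)."""
        vx, vy, vw, vh = viewport
        # window coords -> normalised device coords in [-1, 1]
        ndc = np.array([2.0 * (win_x - vx) / vw - 1.0,
                        2.0 * (win_y - vy) / vh - 1.0,
                        2.0 * depth - 1.0,
                        1.0])
        world = np.linalg.inv(mvp) @ ndc
        return world[:3] / world[3]          # perspective divide

    def pick_ray(win_x, win_y, mvp, viewport):
        near = unproject(win_x, win_y, 0.0, mvp, viewport)
        far  = unproject(win_x, win_y, 1.0, mvp, viewport)
        direction = far - near
        return near, direction / np.linalg.norm(direction)

    # trivial check with an identity matrix: ray points straight down -z to +z
    print(pick_ray(160, 120, np.eye(4), (0, 0, 320, 240)))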
Once you get the inverse modelview and cast your ray (vector), you still need to know if the ray intersects your geometry. One approach would be to grab the depth (z in the view coordinate system) of the object's center and extend (stretch) your vector just that far. Then see if the vector's "head" ends within the volume of your object or not (you need the object's center and e.g. its radius, if it's a sphere).