OpenGL ES units - iPhone

I am developing an application for iPhone.
I am using OpenGL ES to display a 3D object on the screen, with the camera view as background.
I'd like to know how I can change the OpenGL ES units to centimeters/meters.
How can I do that?

You don't.
Thought experiment time!
First, imagine you have a sphere 1 centimeter in diameter and a camera 10 centimeters away. You would see a small sphere in the center of the frame.
Now imagine you have a sphere 1 kilometer in diameter and a camera 10 kilometers away. How would you expect the image to be different?
The correct answer is that you would not expect the image to change at all. All that really matters is the relative sizes of things. So the unit you attribute to the numbers only matters to the programmer, and not to the program.
So you simply mentally declare that one unit is equal to one centimeter and create your objects and world according to that scale. There is no code level change to make this happen. It's merely a convention that you use to help you build things in correct dimensions relative to each other.
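For illustration, a minimal sketch of what that convention looks like in practice (the constants and the geometry here are made up, and mean nothing to OpenGL itself):
#include <OpenGLES/ES1/gl.h>

/* Hypothetical convention: 1 OpenGL unit == 1 centimeter.
   These constants exist only for the programmer; OpenGL never interprets them. */
static const GLfloat CM = 1.0f;
static const GLfloat M  = 100.0f * CM;

/* A sign post 10 cm wide and 2 m tall, expressed in that convention. */
static const GLfloat signQuad[] = {
    -5.0f * CM, 0.0f * M, 0.0f,
     5.0f * CM, 0.0f * M, 0.0f,
     5.0f * CM, 2.0f * M, 0.0f,
    -5.0f * CM, 2.0f * M, 0.0f,
};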

OpenGL does not have a notion of units. It just uses unit-less values; what those values mean is up to you. They just have to be consistent. So if your object coordinates and viewing parameters are all specified with meters in mind and you have one object whose coordinates are in centimeters, just scale that object by a factor of 0.01.
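For example, a sketch using the OpenGL ES 1.x fixed-function matrix stack (drawMesh is a hypothetical routine that draws the centimeter-scaled model):
#include <OpenGLES/ES1/gl.h>

extern void drawMesh(void);   /* hypothetical: submits the centimeter-scaled geometry */

/* The scene is modeled in meters; this particular mesh was authored in centimeters,
   so scale it by 0.01 (1 cm = 0.01 m) before drawing it. */
void drawCentimeterMesh(void)
{
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glScalef(0.01f, 0.01f, 0.01f);   /* centimeters -> meters */
    drawMesh();
    glPopMatrix();
}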

Related

Mapbox Unity SDK: storing and displaying short relative distances

I was wondering how I can go about storing and displaying small but geographically accurate distances in the Mapbox Unity SDK?
I'm storing radii around markers on a map. I get the value in meters (from ~0.5 m to 10 m), and then, adaptively with the zoom level, I want to accurately display those meters in Unity world space (draw an ellipse) using the stored values. The problem is that, from my understanding, the Mapbox API only lets you convert lat/long to Unity world coordinates, and I'm running into precision errors. I can get adequate precision when using the CheapRuler class and meters, but as soon as I use the _map.GeoToWorld(latlon) method the precision is lost.
How would I go about keeping adequate precision? Is there a way I can use the marker as the reference point and the radius as the offset, and get the relative Unity world-coordinate distance (of the radius) that way? I know you can also store scale relative to the Mapbox tiles, but I'm not sure how I can convert that back to a Unity world distance. I'm operating on very small distances, so any warping due to lat/long being a Mercator projection can probably be ignored.
I figured out a roundabout solution.
First I convert the meters into unity world space using whatever IMapScalingStrategy Mapbox is currently using.
Then I convert from world to the view space of whatever camera I want to scale to the given bounds.
After that, I find the scale of the bounds, solving for:
UnityRelativeScaleChange = 2^(MapZoomLevelChange)
which (by my estimation) is the relationship between Unity scale and Mapbox zoom levels.
This solution works great as long as you don't have to zoom in/out too much; otherwise you'll run into precision problems, since the functions rely on the relative, view-based size of the given bounds for their calculations, which leads to unstable results when the bounds initially take up a tiny portion of the screen.
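A rough sketch of that zoom/scale relationship (the names are hypothetical, and the factor assumes the Unity scale of map content doubles per Mapbox zoom level, as estimated above):
#include <cmath>

// referenceZoom/referenceScale are whatever zoom and Unity scale you measured
// when the ellipse (or other geometry) was created.
float scaleForZoom(float currentZoom, float referenceZoom, float referenceScale)
{
    float zoomDelta = currentZoom - referenceZoom;
    return referenceScale * std::pow(2.0f, zoomDelta);   // scale doubles per zoom level
}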

Transformation in Unity 3D

I'm learning Unity from the book "Unity Game Development in 24 Hours". The book says:
Translation: Translation is an inert transformation. That means any changes applied after it won't be affected.
Scaling: Scaling effectively changes the size of the local coordinate grid. Basically, when you scale an object to be larger, you are really scaling the local coordinate system to be larger. This causes the object to seem to grow. This change is multiplicative. For example, if an object is scaled to 1 (its natural, default size) and then translated 5 units along the x axis, the object appears to move 5 units to the right. If the same object were to be scaled to 2, however, then translating 5 units on the x axis would result in the object appearing to move 10 units to the right. This is because the local coordinate system is now double the size and 5 times 2 equals 10. Inversely, if the object were scaled to .5 and then moved, it would appear to only move 2.5 units (.5 x 5 = 2.5)
I tried to experiment with these two effects, but they didn't work that way. For translation, I can apply changes after it just fine. And for scaling, it scaled the local coordinate system multiplicatively, but it didn't multiply the effect of the translation. Am I understanding this wrong, or is it the book?
Translating (using the Transform.Translate method) means moving the object's transform by some vector. Simple as that.
Local scale is a little bit more complicated. It scales not only the object itself but also all objects that are children of it. And the distance moved is relative: if you have a cube that's 1x1x1 in size and you move it by 1 unit, it moves its full length. If, however, you scale it by 2 and then move it by 1 unit, it moves only half its size.
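To make the distinction concrete, a tiny standalone sketch (plain arithmetic, not Unity's API): translating by a fixed vector moves the object the same world distance regardless of its scale; only the distance relative to the object's own size changes.
#include <cstdio>

int main()
{
    float moveDistance = 1.0f;          // the translation applied in both cases

    float unscaledSize = 1.0f;          // 1x1x1 cube
    float scaledSize   = 2.0f;          // the same cube with localScale = 2

    // 1.00: the unscaled cube travels its full length.
    std::printf("unscaled: moved %.2f of its length\n", moveDistance / unscaledSize);
    // 0.50: the scaled cube travels only half of its (doubled) length.
    std::printf("scaled:   moved %.2f of its length\n", moveDistance / scaledSize);
    return 0;
}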
Judging by what you wrote, the book is probably a really bad source for learning Unity3D. Try the official tutorials instead; they are good and explain the basics well. This one is pretty good, and this one as well. And remember, anytime you are in doubt with Unity, search their documentation first; it's really good.

Swift - how can I correct a sky map according to current time and location?

I'm new to Swift, and I'm trying to build a sky map app like the application "Star Chart".
I already have a sky map image from NASA and have mapped it onto an SCNSphere, and I've set a camera node at the center of the sphere so that it looks like a 360-degree view. Furthermore, I used the accelerometer to check which direction the camera is looking.
I know that a sky map app like "Star Chart" doesn't need the internet to update its data. So now the biggest problem is that I don't know how to correct the position of my sky map according to the user's current time and location.
Any good advice and help? Thanks in advance! I've tried very hard to find related information but have been stuck here for three weeks.
You just need to rotate your map by time + longitude around Earth's rotation axis, and by latitude around the axis at longitude = 90 degrees, with Earth placed at the center of your sphere. For stars the offset does not matter, so you can ignore the Sun-Earth distance and the Earth's radius as well.
The time rotation must combine the daily and yearly rotations. On top of that, you have to apply precession and nutation if you want higher precision.
Of course the stars are moving too, so if you need really high precision and/or a long time interval to cover (hundreds or thousands of years), then this approach is not good and you should use a stellar catalog with stellar motions implemented.
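As a starting point, here is a rough sketch of the angles involved, using a common low-precision approximation of Greenwich sidereal time (no precession or nutation; the names are made up):
#include <cmath>

// daysSinceJ2000 = Julian date - 2451545.0 (days since 2000-01-01 12:00 UT).
// Low-precision approximation of the local sidereal time in degrees.
double localSiderealTimeDeg(double daysSinceJ2000, double eastLongitudeDeg)
{
    double gmst = 280.46061837 + 360.98564736629 * daysSinceJ2000;  // Greenwich mean sidereal time
    double lst  = std::fmod(gmst + eastLongitudeDeg, 360.0);
    return lst < 0.0 ? lst + 360.0 : lst;
}

// Orientation for the sky sphere: spin it by -LST around the celestial pole,
// and tilt the pole so its altitude above the horizon equals the latitude.
struct SkyOrientation { double spinAroundPoleDeg; double poleAltitudeDeg; };

SkyOrientation skyOrientation(double daysSinceJ2000, double eastLongitudeDeg, double latitudeDeg)
{
    SkyOrientation o;
    o.spinAroundPoleDeg = -localSiderealTimeDeg(daysSinceJ2000, eastLongitudeDeg);
    o.poleAltitudeDeg   = latitudeDeg;
    return o;
}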
For more info see related:
How to draw sky chart
Plotting a star chart efficiently
If you want to use a catalog and real colors then you will also need
Star B-V color index to apparent RGB color
simplified atmospheric scattering GLSL shader
And finally, here are some hints for such applications:
Is it possible to make realistic n-body solar system simulation in matter of size and mass?

3D reconstruction from 2 views

I'm studying 3D reconstruction from two views with a fixed, known camera focal length. Something that is unclear to me: does triangulation give us the real-world scale of an object, or is the scale of the result different from the actual one? If the scale is different from the actual size, how can I find the depth of points from it? I was wondering if there is more information that I need in order to recover the real-world scale of the object.
Scale is arbitrary in SfM tasks so the result may be different in every reconstruction since points are initially projected on a random depth value.
You need at least one known distance in your scene to recover the absolute (real-world) scale. You can include one object with known size in your scene so you will be able to convert your scale afterwards.
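A minimal sketch of that rescaling step (the types and names here are made up):
#include <cmath>
#include <vector>

struct Point3 { double x, y, z; };

static double distance(const Point3& a, const Point3& b)
{
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Rescale the reconstruction so the distance between two triangulated points
// matches the real-world distance measured between them (e.g. the known size
// of an object placed in the scene). Camera translations need the same factor.
void applyAbsoluteScale(std::vector<Point3>& cloud,
                        const Point3& p0, const Point3& p1,
                        double realWorldDistance)
{
    double s = realWorldDistance / distance(p0, p1);
    for (Point3& p : cloud) {
        p.x *= s;
        p.y *= s;
        p.z *= s;
    }
}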

Screen-to-world coordinate conversion in OpenGL ES: an easy task?

The Screen-to-world problem on the iPhone
I have a 3D model (a cube) rendered in an EAGLView and I want to be able to detect when I am touching the center of a given face (from any orientation angle) of the cube. Sounds pretty easy but it is not...
The problem:
How do I accurately relate screen coordinates (the touch point) to world coordinates (a location in OpenGL 3D space)? Sure, converting a given point into a 'percentage' of the screen/world axis might seem the logical fix, but problems arise when I need to zoom or rotate the 3D space. Note: rotating and zooming in and out of the 3D space will change the relationship of the 2D screen coords to the 3D world coords... Also, you'd have to allow for the 'distance' between the viewpoint and objects in 3D space. At first this might seem like an 'easy task', but that changes when you actually examine the requirements. And I've found no examples of people doing this on the iPhone. How is this normally done?
An 'easy' task?:
Sure, one might undertake the task of writing an API to act as a go-between for screen and world, but creating such a framework would require some serious design and would likely take time to do -- NOT something that can be one-manned in 4 hours... And 4 hours happens to be my deadline.
The question:
What are some of the simplest ways to know if I touched specific locations in 3D space in the iPhone OpenGL ES world?
You can now find gluUnProject in http://code.google.com/p/iphone-glu/. I've no association with the iphone-glu project and haven't tried it yet myself, just wanted to share the link.
How would you use such a function? This PDF mentions that:
The Utility Library routine gluUnProject() performs this reversal of the transformations. Given the three-dimensional window coordinates for a location and all the transformations that affected them, gluUnProject() returns the world coordinates from where it originated.
int gluUnProject(GLdouble winx, GLdouble winy, GLdouble winz,
                 const GLdouble modelMatrix[16], const GLdouble projMatrix[16],
                 const GLint viewport[4],
                 GLdouble *objx, GLdouble *objy, GLdouble *objz);
Map the specified window coordinates (winx, winy, winz) into object coordinates, using transformations defined by a modelview matrix (modelMatrix), projection matrix (projMatrix), and viewport (viewport). The resulting object coordinates are returned in objx, objy, and objz. The function returns GL_TRUE, indicating success, or GL_FALSE, indicating failure (such as a noninvertible matrix). This operation does not attempt to clip the coordinates to the viewport or eliminate depth values that fall outside of glDepthRange().
There are inherent difficulties in trying to reverse the transformation process. A two-dimensional screen location could have originated from anywhere on an entire line in three-dimensional space. To disambiguate the result, gluUnProject() requires that a window depth coordinate (winz) be provided and that winz be specified in terms of glDepthRange(). For the default values of glDepthRange(), winz at 0.0 will request the world coordinates of the transformed point at the near clipping plane, while winz at 1.0 will request the point at the far clipping plane.
Example 3-8 (again, see the PDF) demonstrates gluUnProject() by reading the mouse position and determining the three-dimensional points at the near and far clipping planes from which it was transformed. The computed world coordinates are printed to standard output, but the rendered window itself is just black.
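For illustration, a sketch of the usual near/far unprojection pair described above (desktop-style GL calls, as in the PDF; with the iphone-glu port you would supply matrices that you track yourself):
#include <GL/gl.h>
#include <GL/glu.h>

/* Build a pick ray: unproject the touch/mouse point at the near (winz = 0.0)
   and far (winz = 1.0) clipping planes. */
void pickRay(int mouseX, int mouseY, GLdouble nearPt[3], GLdouble farPt[3])
{
    GLdouble model[16], proj[16];
    GLint viewport[4];

    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);

    /* Window coordinates have a bottom-left origin, so flip Y. */
    GLdouble winX = (GLdouble)mouseX;
    GLdouble winY = (GLdouble)(viewport[3] - mouseY);

    gluUnProject(winX, winY, 0.0, model, proj, viewport, &nearPt[0], &nearPt[1], &nearPt[2]);
    gluUnProject(winX, winY, 1.0, model, proj, viewport, &farPt[0],  &farPt[1],  &farPt[2]);
    /* The segment nearPt -> farPt is the line under the touch point. */
}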
In terms of performance, I found this quickly via Google as an example of what you might not want to do using gluUnProject, with a link to what might lead to a better alternative. I have absolutely no idea how applicable it is to the iPhone, as I'm still a newb with OpenGL ES. Ask me again in a month. ;-)
You need the OpenGL projection and modelview matrices. Multiply them to get the modelview-projection matrix. Invert this matrix to get a matrix that transforms clip-space coordinates into world coordinates. Transform your touch point so it corresponds to clip coordinates: the center of the screen should be zero, while the edges should be +1/-1 in X and Y respectively.
Construct two points, one at (0,0,0) and one at (touch_x, touch_y, -1), and transform both by the inverse modelview-projection matrix.
Do the inverse of a perspective divide.
You should get two points describing a line from the center of the camera into "the far distance" (the far plane).
Do picking based on simplified bounding boxes of your models. You should be able to find ray/box intersection algorithms aplenty on the web.
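For reference, a minimal sketch of the classic slab test for that ray/box step (plain C++, no engine assumed; the ray is given by an origin and a direction):
#include <algorithm>
#include <limits>

struct Vec3 { float x, y, z; };

// Returns true if the ray origin + t*dir (t >= 0) hits the axis-aligned box
// [boxMin, boxMax]. Zero direction components produce infinities, which the
// comparisons handle for the common cases.
bool rayHitsAABB(const Vec3& origin, const Vec3& dir, const Vec3& boxMin, const Vec3& boxMax)
{
    float tmin = 0.0f;
    float tmax = std::numeric_limits<float>::max();

    const float o[3]  = { origin.x, origin.y, origin.z };
    const float d[3]  = { dir.x,    dir.y,    dir.z    };
    const float lo[3] = { boxMin.x, boxMin.y, boxMin.z };
    const float hi[3] = { boxMax.x, boxMax.y, boxMax.z };

    for (int i = 0; i < 3; ++i) {
        float t0 = (lo[i] - o[i]) / d[i];
        float t1 = (hi[i] - o[i]) / d[i];
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmin > tmax) return false;   // slabs no longer overlap: miss
    }
    return true;
}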
Another solution is to paint each of the models in a slightly different color into an offscreen buffer and read the color at the touch point from there, telling you which object was touched.
Here's the source for a cursor I wrote for a little project using Bullet physics:
// Convert the touch/mouse position to normalized device coordinates (-1..1, Y flipped).
float x = ((float)mpos.x / screensize.x) *  2.0f - 1.0f;
float y = ((float)mpos.y / screensize.y) * -2.0f + 1.0f;

// Unproject the point on the far plane (z = 1) back into world space.
p2 = renderer->camera.unProject(vec4(x, y, 1.0f, 1));
p2 /= p2.w;   // perspective divide

// Ray start: the camera position, nudged slightly towards the far point.
vec4 pos = activecam.GetView().col_t;
p1 = pos + (((vec3)p2 - (vec3)pos) / 2048.0f * 0.1f);
p1.w = 1.0f;

// Cast the ray through the Bullet world and highlight whatever it hits.
btCollisionWorld::ClosestRayResultCallback rayCallback(btVector3(p1.x, p1.y, p1.z),
                                                       btVector3(p2.x, p2.y, p2.z));
game.dynamicsWorld->rayTest(btVector3(p1.x, p1.y, p1.z),
                            btVector3(p2.x, p2.y, p2.z), rayCallback);

if (rayCallback.hasHit())
{
    btRigidBody* body = btRigidBody::upcast(rayCallback.m_collisionObject);
    if (body == game.worldBody)
    {
        renderer->setHighlight(0);   // the ray hit the world body itself: clear the highlight
    }
    else if (body)
    {
        Entity* ent = (Entity*)body->getUserPointer();
        if (ent)
        {
            renderer->setHighlight(dynamic_cast<ModelEntity*>(ent));
            //cerr << "hit " << ent->getName() << endl;
        }
    }
}
Imagine a line that extends from the viewer's eye through the screen touch point into your 3D model space. If that line intersects any of the cube's faces, then the user has touched the cube.
Two solutions present themselves. Both of them should achieve the end goal, albeit by a different means: rather than answering "what world coordinate is under the mouse?", they answer the question "what object is rendered under the mouse?".
One is to draw a simplified version of your model to an off-screen buffer, rendering the center of each face using a distinct color (and adjusting the lighting so color is preserved identically). You can then detect those colors in the buffer (e.g. pixmap), and map mouse locations to them.
The other is to use OpenGL picking. There's a decent-looking tutorial here. The basic idea is to put OpenGL in select mode, restrict the viewport to a small (perhaps 3x3 or 5x5) window around the point of interest, and then render the scene (or a simplified version of it) using OpenGL "names" (integer identifiers) to identify the components making up each face. At the end of this process, OpenGL can give you a list of the names that were rendered in the selection viewport. Mapping these identifiers back to original objects will let you determine what object is under the mouse cursor.
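For the first, color-ID approach, here is a minimal sketch using OpenGL ES 1.x calls (drawFace is a hypothetical routine that submits one face's geometry; the pass must render into a buffer you can read back before presenting anything to the screen):
#include <OpenGLES/ES1/gl.h>

extern void drawFace(int faceIndex);   /* hypothetical: submits one face of the cube */

/* Render each face in a unique flat color, then read back the pixel under the
   touch point. Returns the face index that was hit, or -1 for the background. */
int pickFace(int touchX, int touchY, int viewHeight)
{
    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    for (int face = 0; face < 6; ++face) {
        /* Encode the face index in the red channel: face 0 -> 1, ..., face 5 -> 6. */
        glColor4f((face + 1) / 255.0f, 0.0f, 0.0f, 1.0f);
        drawFace(face);
    }

    GLubyte pixel[4];
    /* glReadPixels uses a bottom-left origin, so flip the Y coordinate. */
    glReadPixels(touchX, viewHeight - touchY, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
    return pixel[0] == 0 ? -1 : (int)pixel[0] - 1;
}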
Google for opengl screen to world (for example, there's a thread on GameDev.net where somebody wants to do exactly what you are looking for). There is a gluUnProject function that does precisely this, but it's not available on iPhone, so you have to port it (see this source from the Mesa project). Or maybe there's already some publicly available source somewhere?