OpenGL ES: Rotating a 3D model around itself - iPhone

I'm playing with OpenGL ES on the iPhone and I'm trying to rotate a model by panning with a finger. I discovered the open source app Molecules that lets you do that, and I'm looking at its code, but when it comes to rotating a model of mine I can only rotate it around a point distant in space (as if the model were a satellite in orbit and I were the fixed planet).
Any suggestions on what could be wrong?
I can post the code later, on demand (it's many lines).
For the most part, refer to Molecules; you can find it here: MOLECULES

If my memory serves me correctly, I think you need to translate the model to the origin, rotate, and then translate back to its starting position to get the effect you are after.
There is a glTranslate() function. Say the object is at (1, 0, 0): you would translate by (-1, 0, 0) to move it to the origin, that is, translate by the vector going from the center of the object to the origin.
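As a sketch, assuming the object's center sits at (1, 0, 0), angle holds whatever rotation you have accumulated, and drawObject() stands in for your own draw call, the classic pattern looks like this (the call nearest the draw is applied to vertices first):
glPushMatrix();
glTranslatef(1.0f, 0.0f, 0.0f);     /* applied to vertices last: move the object back into place */
glRotatef(angle, 0.0f, 1.0f, 0.0f); /* applied second: rotate about the origin */
glTranslatef(-1.0f, 0.0f, 0.0f);    /* applied first: bring the object's center to the origin */
drawObject();
glPopMatrix();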

The draw code probably looks roughly like this:
glLoadIdentity();
glTranslatef(0, 0, -10);
glRotatef(...);
drawMolecule();
Now it's important to realize that these transformations are applied in reverse order. If, in drawMolecule, we specify a vertex, then this vertex will first be rotated about the axis given to glRotate (which by definition passes through the local origin of the molecule), and then be translated 10 units in the −z direction.
This makes sense, because glTranslate essentially means: "translate everything that comes after this". This includes the glRotate call itself, so the result of the rotation also gets translated. Had the calls been reversed, then the result of the translation would have been rotated, which results in a rotation about an axis that does not pass through the origin anymore.
Bottom line: to rotate an object about its local origin, put the glRotate call last.
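For contrast, here is a sketch of the reversed order (angle being a made-up variable), which produces exactly the orbiting behavior described above: each vertex is first pushed 10 units away and then swung around the origin.
glLoadIdentity();
glRotatef(angle, 0.0f, 1.0f, 0.0f);
glTranslatef(0.0f, 0.0f, -10.0f);
drawMolecule();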

Related

Unity: Create AnimationClip With World Scale AnimationCurves

I've been looking for a solution to this for quite a while now (meaning several days) and I haven't found anything yet. Maybe I'm thinking about it wrong and there isn't a way, but let's try!
I'm recording hand-data on a Hololens (via the Unity Hololens Input Simulation for now). This essentially gives me one float AnimationCurve for each hand joint, for each of transform.position.x to z and rotation.x to w. Now my goal is to put these curves into an AnimationClip, add it to an AnimatorController (via an AnimatorOverrideController) that animates a hand rig, and replay the recordings. Everything so far works!
However, the recorded hand-data from the Hololens is in world scale, not in local scale (which makes sense, since you usually want absolute coordinates when you want to know where the hand is). But to animate the hand, it seems I'm only able to set local coordinates, which I don't have.
Example:
clip.SetCurve("", typeof(Transform), "localPosition.x", curve.PositionX);
Here, the clip takes the x-coordinates from some hand joint and puts them into localPosition.x of the corresponding hand rig joint. The problem: curve.PositionX is world-scale (absolute coordinates), but localPosition.x expects local-scale values (coordinates relative to its parent).
I can't simply change "localPosition.x" to "position.x", like so:
clip.SetCurve("", typeof(Transform), "position.x", curve.PositionX);
even though the Transform class has both properties, and position is the object's world-scale position. I'm not sure why this doesn't work, but it gives me the following error:
Cannot bind generic curve on Transform component, only position, rotation and scale curve are supported.
I'm aware that it doesn't make much sense to use absolute coordinates for an animation, but I simply don't have anything else.
Does anyone have an approach how I can deal with this in a sensible, not-too-cumbersome way? It seems I have all the important parts, I just can't figure out how to put them together. Thanks so much already! :)
From my basic understanding, it seems like you are using the input animation recording service provided by MRTK. Unfortunately, MRTK does not provide a localPosition version of the curve data. However, you can modify the data in the recordingBuffer after the InputRecordingService stops recording.
So this is a method worth trying: the handJointCurves dictionary property of the recordingBuffer field stores a set of pose curves for each joint. Then, based on this table (Joint pose curves), subtract the position value of the None key from the position value of every other joint at each keyframe, so that you obtain a localPosition relative to the None key.
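To make the arithmetic concrete, here is a conceptual sketch in C rather than the MRTK or Unity API; in Unity you would apply the same per-keyframe subtraction to the keys of each AnimationCurve before calling SetCurve. (Note this handles translation only; compensating for the reference joint's rotation would take additional work.)
#include <stddef.h>

typedef struct { float x, y, z; } Vec3;

/* worldPos[j][k]: world-space position of joint j at keyframe k.
   refPos[k]: position of the reference ("None") joint at keyframe k.
   Afterwards, worldPos holds positions relative to that reference joint. */
static void makeRelativeToReference(Vec3 **worldPos, const Vec3 *refPos,
                                    size_t jointCount, size_t keyCount)
{
    for (size_t j = 0; j < jointCount; j++) {
        for (size_t k = 0; k < keyCount; k++) {
            worldPos[j][k].x -= refPos[k].x;
            worldPos[j][k].y -= refPos[k].y;
            worldPos[j][k].z -= refPos[k].z;
        }
    }
}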

OpenGL ES rotation about a point

I'm attempting to get an object to rotate about the origin point (0, 0, 0).
I'm following some guidelines from this blog and was able to get the basic rotation about the Z axis; it makes a very tight circle about the Z axis.
When I change it to the X or Y axis, the triangle I made goes behind me and then shows up from the other side.
The basic effect I'm hoping to achieve is to have it spin right in front of the camera.
I understand that I would have to rotate it by the amount I want and then translate it back to the origin, but I'm not quite sure how to figure out how much to translate it by.
Can someone give me a push in the right direction on this, especially the formula I would need to use to translate it properly?
Hard to answer without seeing your code, but it sounds like you want to first translate the center of the triangle to the origin, rotate, then translate back to the triangle's original position. glRotate() rotates around the origin, not an arbitrary point.
So, effectively,
glTranslatef(centerX, centerY, centerZ);
glRotatef(angle, 0, 0, 1);
glTranslatef(-centerX, -centerY, -centerZ);
Remember that OpenGL transformations are applied in the reverse of the order in which they appear in the code, so the above translates by -(centerX, centerY, centerZ), then rotates, then translates back by (centerX, centerY, centerZ).
Check out Chapter 3 of the OpenGL Programming Guide for more information.

Rotating an object in OpenGL ES for iPhone [translate to origin --> rotate --> translate back is not working]

I recently started working with OpenGL ES for the iPhone, and I am having a bit of trouble with it. I want to be able to rotate an object with your fingers. My problem is that I have my object placed at (0, 0, -3), and I would like to rotate it about its center. I know that I need to translate back to the origin, rotate, and then bring it back to the original place. I think I am facing a problem because I am using a matrix to keep track of all of my rotations/translations/scaling etc., and I think it may be combining the operations in a way that order is not even considered (so the two translations would cancel each other out). I just started learning OpenGL a day ago and am a complete newbie, so my assumption may be wrong.
Here is the part of drawView that I am having trouble with:
GLfloat matrix[16];
glGetFloatv(GL_MODELVIEW_MATRIX, matrix);
glLoadIdentity();
glTranslatef(0, 0, 3); // bring to origin
glRotatef(self.angle, self.dy, self.dx, 0); // rotate
glTranslatef(0, 0, -3); // put it back in place
glMultMatrixf(matrix); // save the transformations performed
Help would be much appreciated, thank you!
Retrieving the modelview matrix and then multiplying it back on after your rotation seems fishy, but it depends on what your other transforms are and what coordinate space things are supposed to be in. Your comment on the glMultMatrix line doesn't correspond with what you're doing.
Normally you would just do the translate+rotate+translate as the most local actions on the object, just before you render it. Also note that this only applies if your object is at (0, 0, -3) in object space. If it's at that location in world space, then the rotation will already rotate the object around its own center, provided you have previously made a series of transform calls (translate, rotate, etc.) to move the object to its intended position in the world.
Transform order is one of the tricky parts of learning OpenGL. As a general rule of thumb, your operations start with the outermost and progress to the innermost. So a typical simple set of transforms would be: the inverse of the camera transform to move the world to match up with the camera, then the object's translation to move it to its world-space position, then the rotation to set its intended orientation. The PushMatrix/PopMatrix stack functions let you save and undo part of that series of transforms so that groups of objects can share portions of the chain.
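As a sketch of one common fix (not the poster's exact code; drawModel() is a stand-in for the actual draw call), accumulate only the rotations in a matrix of their own, and apply the fixed translation outside of them every frame:
static GLfloat rotationMatrix[16] = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };

/* accumulate the newest incremental rotation, about the origin only */
glLoadIdentity();
glRotatef(self.angle, self.dy, self.dx, 0);
glMultMatrixf(rotationMatrix);
glGetFloatv(GL_MODELVIEW_MATRIX, rotationMatrix);

/* then build the full modelview: translation outermost, rotations innermost */
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -3.0f);
glMultMatrixf(rotationMatrix);
drawModel();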

Fill a touch drawn path of CGPoints using OpenGL or CoreGraphics

I have an NSArray of points that make up a path. I can detect when it self-intersects. When this happens, I try to fill the path.
First I used CoreGraphics, and now I'm using OpenGL to draw a triangle array. It doesn't work well, as you can see in the image.
How do I fill only the circular area while leaving the "tail" alone? I was thinking of a reverse flood fill, but I don't think CG has any API functions for this...
Maybe instead of actually drawing the path you can just approximate the diameter of the path and draw a circle with your approximation.
Here is some code to detect a circle gesture on the iPhone:
http://www.mobileorchard.com/iphone-circle-gesture-detection/
Record all of the points in a doubly-linked list. When it comes time to fill, walk the list from the start and find the point that's closest to the end. Then, lineto that point, then lineto each point in reverse order, stopping with the second point in the list. The fill will implicitly close the path, which will jump from where you left off (the second point) back to the start (first) point.
This is just off the top of my head; you can play with a couple of variations on this to see what works best. You might record the closest previous node in each node, but this could get expensive for many nodes.
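Here is a rough CoreGraphics sketch of that idea, assuming the points live in a plain C array rather than a linked list, ctx is the current graphics context, and the margin constant is an arbitrary guess to keep the search from just matching the end point's immediate neighbors:
#include <CoreGraphics/CoreGraphics.h>

/* Fill only the loop of a self-intersecting stroke: walk from the start to
   find the point closest to the end, then fill the subpath from there on. */
static void fillLoop(CGContextRef ctx, const CGPoint *pts, size_t n)
{
    const size_t margin = 8;                 /* skip the tail's immediate neighbors */
    if (n < margin + 3) return;
    CGPoint last = pts[n - 1];
    size_t start = 0;
    CGFloat best = -1;
    for (size_t i = 0; i + margin < n; i++) {
        CGFloat dx = pts[i].x - last.x, dy = pts[i].y - last.y;
        CGFloat d = dx * dx + dy * dy;
        if (best < 0 || d < best) { best = d; start = i; }
    }
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, pts[start].x, pts[start].y);
    for (size_t i = start + 1; i < n; i++)
        CGPathAddLineToPoint(path, NULL, pts[i].x, pts[i].y);
    CGPathCloseSubpath(path);                /* the fill implicitly closes the loop */
    CGContextAddPath(ctx, path);
    CGContextFillPath(ctx);
    CGPathRelease(path);
}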

Screen-to-world coordinate conversion in OpenGL ES: an easy task?

The Screen-to-world problem on the iPhone
I have a 3D model (CUBE) rendered in an EAGLView, and I want to be able to detect when I am touching the center of a given face of the cube (from any orientation angle). Sounds pretty easy, but it is not...
The problem:
How do I accurately relate screen coordinates (the touch point) to world coordinates (a location in OpenGL 3D space)? Sure, converting a given point into a 'percentage' of the screen/world axis might seem the logical fix, but problems arise when I need to zoom or rotate the 3D space. Note: rotating and zooming in and out of the 3D space changes the relationship of the 2D screen coords to the 3D world coords... Also, you have to allow for 'distance' between the viewpoint and objects in 3D space. At first this might seem like an 'easy task', but that changes when you actually examine the requirements. And I've found no examples of people doing this on the iPhone. How is this normally done?
An 'easy' task?:
Sure, one might undertake the task of writing an API to act as a go-between between screen and world, but creating such a framework would require some serious design and would likely take 'time' to do -- NOT something that can be one-manned in 4 hours... And 4 hours happens to be my deadline.
The question:
What are some of the simplest ways to know if I touched specific locations in 3D space in the iPhone OpenGL ES world?
You can now find gluUnProject in http://code.google.com/p/iphone-glu/. I've no association with the iphone-glu project and haven't tried it yet myself, just wanted to share the link.
How would you use such a function? This PDF mentions that:
The Utility Library routine gluUnProject() performs this reversal of the transformations. Given the three-dimensional window coordinates for a location and all the transformations that affected them, gluUnProject() returns the world coordinates from where it originated.
int gluUnProject(GLdouble winx, GLdouble winy, GLdouble winz,
                 const GLdouble modelMatrix[16], const GLdouble projMatrix[16],
                 const GLint viewport[4], GLdouble *objx, GLdouble *objy, GLdouble *objz);
Map the specified window coordinates (winx, winy, winz) into object coordinates, using transformations defined by a modelview matrix (modelMatrix), projection matrix (projMatrix), and viewport (viewport). The resulting object coordinates are returned in objx, objy, and objz. The function returns GL_TRUE, indicating success, or GL_FALSE, indicating failure (such as a noninvertible matrix). This operation does not attempt to clip the coordinates to the viewport or eliminate depth values that fall outside of glDepthRange().
There are inherent difficulties in trying to reverse the transformation process. A two-dimensional screen location could have originated from anywhere on an entire line in three-dimensional space. To disambiguate the result, gluUnProject() requires that a window depth coordinate (winz) be provided and that winz be specified in terms of glDepthRange(). For the default values of glDepthRange(), winz at 0.0 will request the world coordinates of the transformed point at the near clipping plane, while winz at 1.0 will request the point at the far clipping plane.
Example 3-8 (again, see the PDF) demonstrates gluUnProject() by reading the mouse position and determining the three-dimensional points at the near and far clipping planes from which it was transformed. The computed world coordinates are printed to standard output, but the rendered window itself is just black.
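For illustration, a typical call looks like the following sketch. It assumes desktop-GL style matrix queries (glGetDoublev/glGetIntegerv); with the iphone-glu port you would instead supply matrices you track yourself. winX and winY stand in for the touch coordinates:
GLdouble model[16], proj[16];
GLint viewport[4];
glGetDoublev(GL_MODELVIEW_MATRIX, model);
glGetDoublev(GL_PROJECTION_MATRIX, proj);
glGetIntegerv(GL_VIEWPORT, viewport);

GLdouble nearX, nearY, nearZ, farX, farY, farZ;
/* window y runs bottom-up in OpenGL, so flip the touch's y coordinate */
gluUnProject(winX, viewport[3] - winY, 0.0, model, proj, viewport, &nearX, &nearY, &nearZ);
gluUnProject(winX, viewport[3] - winY, 1.0, model, proj, viewport, &farX, &farY, &farZ);
/* (nearX, nearY, nearZ) and (farX, farY, farZ) now span the pick ray */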
In terms of performance, I found this quickly via Google as an example of what you might not want to do using gluUnProject, with a link to what might lead to a better alternative. I have absolutely no idea how applicable it is to the iPhone, as I'm still a newb with OpenGL ES. Ask me again in a month. ;-)
You need to have the OpenGL projection and modelview matrices. Multiply them to obtain the modelview-projection matrix. Invert this matrix to get a matrix that transforms clip-space coordinates into world coordinates. Transform your touch point so it corresponds to clip coordinates: the center of the screen should be zero, while the edges should be +1/-1 in X and Y respectively.
Construct two points, one at (0, 0, 0) and one at (touch_x, touch_y, -1), and transform both by the inverse modelview-projection matrix.
Do the inverse of the perspective divide.
You should get two points describing a line from the center of the camera into "the far distance" (the far plane).
Do picking based on simplified bounding boxes of your models. You should be able to find ray/box intersection algorithms aplenty on the web.
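To make that concrete, here is a minimal C sketch; the 4x4 matrix inversion itself is omitted, and invMVP is assumed to already hold the inverted modelview-projection matrix in OpenGL's column-major layout:
typedef struct { float x, y, z, w; } vec4;

/* multiply a column-major 4x4 matrix by a vector, as OpenGL stores matrices */
static vec4 mat4MulVec4(const float m[16], vec4 v)
{
    vec4 r;
    r.x = m[0]*v.x + m[4]*v.y + m[8]*v.z  + m[12]*v.w;
    r.y = m[1]*v.x + m[5]*v.y + m[9]*v.z  + m[13]*v.w;
    r.z = m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14]*v.w;
    r.w = m[3]*v.x + m[7]*v.y + m[11]*v.z + m[15]*v.w;
    return r;
}

/* map a clip-space point back to world space, undoing the perspective divide */
static vec4 unprojectPoint(const float invMVP[16], float cx, float cy, float cz)
{
    vec4 p = { cx, cy, cz, 1.0f };
    p = mat4MulVec4(invMVP, p);
    p.x /= p.w; p.y /= p.w; p.z /= p.w; p.w = 1.0f;
    return p;
}
The two ray endpoints above would then be unprojectPoint(invMVP, 0, 0, 0) and unprojectPoint(invMVP, touch_x, touch_y, -1).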
Another solution is to paint each of the models in a slightly different color into an offscreen buffer and read back the color at the touch point from there, telling you which object was touched.
Here's source for a cursor I wrote for a little project using bullet physics:
float x = ((float)mpos.x / screensize.x) *  2.0f - 1.0f;  // touch position to clip-space X
float y = ((float)mpos.y / screensize.y) * -2.0f + 1.0f;  // clip-space Y; screen y runs top-down, so flip it
p2 = renderer->camera.unProject(vec4(x, y, 1.0f, 1));     // unproject a point on the far plane
p2 /= p2.w;                                               // undo the perspective divide
vec4 pos = activecam.GetView().col_t;                     // translation column of the camera transform, used as the camera position
p1 = pos + (((vec3)p2 - (vec3)pos) / 2048.0f * 0.1f);     // a point just in front of the camera, along the pick ray
p1.w = 1.0f;
btCollisionWorld::ClosestRayResultCallback rayCallback(btVector3(p1.x, p1.y, p1.z), btVector3(p2.x, p2.y, p2.z));
game.dynamicsWorld->rayTest(btVector3(p1.x, p1.y, p1.z), btVector3(p2.x, p2.y, p2.z), rayCallback); // cast the ray through the physics world
if (rayCallback.hasHit())
{
    btRigidBody* body = btRigidBody::upcast(rayCallback.m_collisionObject);
    if (body == game.worldBody)
    {
        renderer->setHighlight(0);
    }
    else if (body)
    {
        Entity* ent = (Entity*)body->getUserPointer();
        if (ent)
        {
            renderer->setHighlight(dynamic_cast<ModelEntity*>(ent));
            //cerr << "hit ";
            //cerr << ent->getName() << endl;
        }
    }
}
Imagine a line that extends from the viewer's eye through the screen touch point into your 3D model space. If that line intersects any of the cube's faces, then the user has touched the cube.
Two solutions present themselves. Both of them should achieve the end goal, albeit by a different means: rather than answering "what world coordinate is under the mouse?", they answer the question "what object is rendered under the mouse?".
One is to draw a simplified version of your model to an off-screen buffer, rendering the center of each face using a distinct color (and adjusting the lighting so color is preserved identically). You can then detect those colors in the buffer (e.g. pixmap), and map mouse locations to them.
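A minimal sketch of the read-back step, assuming the scene has just been rendered to an offscreen buffer with flat per-face colors and lighting disabled, touchX/touchY stand in for the touch coordinates, and encoding the face index in the red channel is just one possible scheme:
GLubyte pixel[4];
/* OpenGL's window y axis starts at the bottom, so flip the touch's y */
glReadPixels(touchX, viewportHeight - touchY, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
int touchedFace = pixel[0];   /* recover the face index from the red channel */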
The other is to use OpenGL picking. There's a decent-looking tutorial here. The basic idea is to put OpenGL in select mode, restrict the viewport to a small (perhaps 3x3 or 5x5) window around the point of interest, and then render the scene (or a simplified version of it) using OpenGL "names" (integer identifiers) to identify the components making up each face. At the end of this process, OpenGL can give you a list of the names that were rendered in the selection viewport. Mapping these identifiers back to the original objects will let you determine what object is under the mouse cursor. (Note, though, that select-mode picking belongs to desktop OpenGL; OpenGL ES does not support GL_SELECT, so on the iPhone the color-buffer approach is the more practical of the two.)
Google for opengl screen to world (for example, there's a thread where somebody wants to do exactly what you are looking for on GameDev.net). There is a gluUnProject function that does precisely this, but it's not available on the iPhone, so you have to port it (see this source from the Mesa project). Or maybe there's already some publicly available source somewhere?