Convert left-handed Y-up (e.g. Unity) coordinates & rotations to right-handed Z-up (3ds Max) - unity3d

We are developing an application that uses models exported from 3ds Max as FBX and loaded into Unity (this seems to work), changes them, and then communicates the changes back to 3ds Max for a clean render.
We rotate the model's pivot in Max in such a way that it is shown correctly in Unity after the export.
What we have so far:
Position:
x(max) = x(unity)
y(max) = z(unity)
z(max) = y(unity)
Rotation:
x(max) = x(unity)
y(max) = -y(unity)
z(max) = z(unity)
Simple rotations seem to work; complex ones do not. I suspect we did not properly account for the mirroring when going from left-handed to right-handed, or for the different rotation multiplication order. How is the mapping done correctly?
There is a related question with no answers:
Unity rotation convertion

The issue was the different rotation order of Unity (XYZ) and Max (ZYX). That explains why single rotations work but composed ones do not. If you apply the transformation from the question above and then perform each rotation consecutively in the same order in Unity, it works.
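As a hedged sketch of that fix (the class name, helper, and exact composition order below are assumptions to adapt to your own exporter, not code from the question), the conversion in Unity C# might look like this:
using UnityEngine;
// Minimal sketch: apply the per-axis mapping from the question, then compose the
// single-axis rotations one after another instead of handing Unity one Euler triple
// (which would be interpreted with Unity's own rotation order).
public static class MaxToUnityRotation
{
    // maxEuler: the rotation exported from 3ds Max, in degrees.
    public static Quaternion Convert(Vector3 maxEuler)
    {
        // Per-axis mapping from the question: x -> x, y -> -y, z -> z.
        float x = maxEuler.x, y = -maxEuler.y, z = maxEuler.z;
        // Compose in Max's evaluation order (ZYX according to this answer); if your
        // exporter reports the angles the other way around, reverse this product.
        return Quaternion.AngleAxis(z, Vector3.forward)
             * Quaternion.AngleAxis(y, Vector3.up)
             * Quaternion.AngleAxis(x, Vector3.right);
    }
}
Assigning the result to transform.rotation should then reproduce the Max orientation for composed rotations as well, assuming the pivot fix-up described in the question is still in place.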

Related

Unity: Create AnimationClip With World Scale AnimationCurves

I've been looking for a solution to this for quite a while now (meaning several days) and I haven't found anything yet. Maybe I'm thinking about it wrong and there isn't a way, but let's try!
I'm recording hand-data on a Hololens (the Unity Hololens Input Simulation for now). This essentially gives me one float AnimationCurve for each hand joint for each transform.position.x to z and rotation.x to w. Now my goal is to put these curves into an AnimationClip and add it to an AnimatorController (via an AnimatorOverrideController) that animates a hand rig and replay the recordings. Everything so far works!
However, the recorded hand data from the Hololens is in world space, not local space (which makes sense, since you usually want absolute coordinates when you want to know where the hand is). But to animate the hand, it seems I'm only able to set local coordinates, which I don't have.
Example:
clip.SetCurve("", typeof(Transform), "localPosition.x", curve.PositionX);
Here, the clip takes the x-coordinates from some hand joint and puts them into the localPosition.x of the corresponding hand rig joint. The problem: curve.PositionX is in world space (absolute coordinates), but localPosition.x expects local space (coordinates relative to its parent).
I can't simply change "localPosition.x" to "position.x", like so:
clip.SetCurve("", typeof(Transform), "position.x", curve.PositionX);
even though the Transform class has both properties and position is the object's world-space position. I'm not sure why this doesn't work, but it gives me the following error:
Cannot bind generic curve on Transform component, only position, rotation and scale curve are supported.
I'm aware that it doesn't make much sense to use absolute coordinates for an animation, but I simply don't have anything else.
Does anyone have an approach how I can deal with this in a sensible, not-too-cumbersome way? It seems I have all the important parts, I just can't figure out how to put them together. Thanks so much already! :)
From my basic understanding, it seems like you are using the input animation recording service provided by MRTK. Unfortunately, MRTK does not provide a localPosition version of the curve data. However, you can modify the data in the recordingBuffer after the InputRecordingService stops recording.
So this is a method worth trying: in the handJointCurves dictionary property of the recordingBuffer field, a set of pose curves is stored for each joint. Then, based on that table of joint pose curves, subtract the position value of the key None from the position value of every other joint at each keyframe, so that you obtain a localPosition relative to the None key.
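A rough C# sketch of that subtraction step (the class and method names are made up for illustration; it assumes you have already pulled the per-joint position curves out of the buffer and that the curves share keyframe times, rather than reproducing MRTK's internal types):
using UnityEngine;
public static class JointCurveUtil
{
    // Returns a new curve where, at every keyframe, the reference ("None") joint's
    // value is subtracted from the joint's value, giving a root-relative position.
    public static AnimationCurve MakeRelative(AnimationCurve jointCurve, AnimationCurve rootCurve)
    {
        var relative = new AnimationCurve();
        foreach (Keyframe key in jointCurve.keys)
        {
            float rootValue = rootCurve.Evaluate(key.time);
            relative.AddKey(key.time, key.value - rootValue);
        }
        return relative;
    }
}
Note that this only removes the root translation; if the rig root is also rotated relative to the recording space, you would additionally need to rotate the per-joint offsets, which a plain per-axis subtraction like this does not capture.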

Move 3d model forwards based on rotation

I have a 3D model in Xcode using SceneKit that can rotate around itself, and I would like it to move forwards based on its rotation. For example, if it is rotated 236 degrees on the z axis, it wouldn't go straight along x or y, but a bit of both, so that it moves forwards. Is this possible? Do I have to get any plugins?
No plugins required.
You can do this in two main ways:
Move the object's position relative to its rotation by changing its transform over time (see the sketch after this list).
Apply forces (and/or impulses) over time (or instantly) in the direction you'd like your entity to travel.
Within these two approaches there are a LOT of other considerations regarding scene size, resistance, speed, immediacy, etc.
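The question is about SceneKit, but the vector math behind the first approach is engine-agnostic. A minimal, assumed sketch in Unity-style C# (not SceneKit API; SceneKit exposes the same direction via the node's orientation, or its worldFront on recent versions):
using UnityEngine;
// Sketch of "move forwards based on rotation": rotate a unit forward axis by the
// object's current rotation and step the position along that direction each frame.
public class MoveForward : MonoBehaviour
{
    public float speed = 2.0f;   // units per second (arbitrary example value)
    void Update()
    {
        // transform.rotation * Vector3.forward is the object's forward axis in world space.
        Vector3 forward = transform.rotation * Vector3.forward;
        transform.position += forward * speed * Time.deltaTime;
    }
}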

Maths behind iPhone AR ToolKit

I'm using the iPhone ARToolkit and I'm wondering how it works.
I want to know how, given a destination location, the user's location and a compass reading, this toolkit can tell whether the user is looking towards that destination.
How can I learn the maths behind these calculations?
The maths that AR ToolKit uses is basic trigonometry. It doesn't use the technique that Thomas describes, which I think would be a better approach (apart from step 5; see below).
Overview of the steps involved.
The iPhone's GPS supplies the device's location and you already have the coordinates of the location you want to look at.
First it calculates the difference between the latitude and longitude values of the two points. These two differences mean you can construct a right-angled triangle and calculate the angle from your current position to the other position. This is the relevant code:
- (float)angleFromCoordinate:(CLLocationCoordinate2D)first toCoordinate:(CLLocationCoordinate2D)second {
    float longitudinalDifference = second.longitude - first.longitude;
    float latitudinalDifference = second.latitude - first.latitude;
    // 90 degrees minus the angle of the right-angled triangle formed by the
    // latitude/longitude differences, measured clockwise from north.
    float possibleAzimuth = (M_PI * .5f) - atan(latitudinalDifference / longitudinalDifference);
    if (longitudinalDifference > 0)
        return possibleAzimuth;            // destination lies to the east
    else if (longitudinalDifference < 0)
        return possibleAzimuth + M_PI;     // destination lies to the west
    else if (latitudinalDifference < 0)
        return M_PI;                       // due south
    return 0.0f;                           // due north
}
At this point you can then read the compass value from the phone and determine what specific compass angle (azimuth) your device is pointing at. The reading from the compass will be the angle directly in the center of the camera's view. The AR ToolKit then calculates the full range of angles currently displayed on screen, since the iPhone's field of view is known.
In particular it does this by calculating what the angle of the leftmost part of the view is showing:
double leftAzimuth = centerAzimuth - VIEWPORT_WIDTH_RADIANS / 2.0;
if (leftAzimuth < 0.0) {
    leftAzimuth = 2 * M_PI + leftAzimuth;
}
And then calculates the rightmost:
double rightAzimuth = centerAzimuth + VIEWPORT_WIDTH_RADIANS / 2.0;
if (rightAzimuth > 2 * M_PI) {
    rightAzimuth = rightAzimuth - 2 * M_PI;
}
We now have:
The angle relative to our current position of something we want to display
A range of angles which are currently visible on the screen
This is enough to plot a marker on the screen in the correct position (kind of... see the problems section below); a small interpolation sketch follows this list.
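As an illustration of that last step (this is a guessed C# sketch of the interpolation, not the toolkit's actual code), mapping the destination azimuth into the visible range gives a horizontal screen coordinate:
// Angles are in radians, screenWidth in pixels; wrap-around at 2*PI is handled by
// normalising the offset from the left edge of the view into [0, 2*PI).
static float? ScreenXForAzimuth(double targetAzimuth, double leftAzimuth,
                                double viewportWidthRadians, float screenWidth)
{
    double offset = targetAzimuth - leftAzimuth;
    if (offset < 0)
        offset += 2 * System.Math.PI;      // unwrap across north
    if (offset > viewportWidthRadians)
        return null;                       // not currently in view
    return (float)(offset / viewportWidthRadians * screenWidth);
}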
It also does similar calculations for the device's inclination, so if you look at the sky you hopefully won't see a city marker up there, and if you point it at your feet you should in theory see cities on the opposite side of the planet. There are problems with these calculations in this toolkit, however.
The problems...
Device orientation is not perfect
The calculation I've just described assumes you're holding the device in an exact orientation relative to the earth, i.e. perfectly landscape or portrait. Your user probably won't always be doing that. If you tilt the device slightly, your horizon line will no longer be horizontal on screen.
The earth is actually 3D!
The earth is 3-dimensional, and few of the calculations in the toolkit account for that. The calculations it performs are only really accurate when you're pointing the device towards the horizon.
For example, if you try to plot a point on the opposite side of the globe (directly under your feet), this toolkit behaves very strangely. The approach used to calculate the azimuth range on screen is only valid when looking at the horizon. If you point your camera at the floor you can actually see every single compass point, but the toolkit thinks you're still only looking at compass reading ± (width of view / 2). If you rotate on the spot you'll see your marker move to the edge of the screen, disappear and then reappear on the other side, when what you would expect is for the marker to stay on screen as you rotate.
The solution
I've recently implemented an AR app for which I initially hoped AR Toolkit would do the heavy lifting. I came across the problems just described, which aren't acceptable for my app, so I had to roll my own.
Thomas's approach is a good method up to point 5, which, as I explained above, only works when pointing towards the horizon. If you need to plot anything outside of that it breaks down; in my case I have to plot objects that are overhead, so it's completely unsuitable.
I addressed this by using OpenGL ES to plot my markers where they actually are in 3D space and move the OpenGL viewport around according to readings from the gyroscope while continuously re-calibrating against the compass. The 3D engine handles all the hard work of determining what's on screen.
Hope that's enough to get you started. I wish I could provide more detail than that but short of posting a lot of hacky code I can't. This approach however did address both problems described above. I hope to open source that part of my code at some point but it's very rough and coupled to my problem domain at the moment.
That is all the information needed. With the iPhone's location and the destination location you can calculate the destination angle (with respect to true north).
The only missing piece is knowing where the iPhone is currently pointing, which is returned by the compass (magnetic north + current location -> true north).
edit: Calculations (this is just an idea; there may be a better solution without a lot of coordinate transformations):
Convert the current and destination locations to ECEF coordinates.
Transform the destination ECEF coordinate into ENU (east, north, up) local coordinates, with the current location as the reference location. You can also use this.
Ignore the height value and use the ENU coordinate to get the direction: atan2(deast, dnorth).
The compass already returns the angle the iPhone is looking at.
Display the destination on the screen if dest_angle - 10° <= compass_angle <= dest_angle + 10°,
with respect to the cyclic angle space. The constant of 10° is just a guessed value; you should either try some values to find a useful one, or analyse some properties of the iPhone camera. A sketch of steps 1-3 follows below.
The coordinate transformation equations become much simpler if you assume that the earth is a sphere and not an ellipsoid. Most of the links I have posted assume a WGS-84 ellipsoid, because GPS does too, AFAIK.
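A minimal C# sketch of steps 1-3, assuming a spherical earth as suggested above (the names and the radius constant are mine, not from any particular library):
using System;
static class BearingMath
{
    const double EarthRadius = 6371000.0; // metres, spherical approximation
    // Latitude/longitude in degrees -> ECEF coordinates in metres.
    static double[] ToEcef(double latDeg, double lonDeg)
    {
        double lat = latDeg * Math.PI / 180.0, lon = lonDeg * Math.PI / 180.0;
        return new[]
        {
            EarthRadius * Math.Cos(lat) * Math.Cos(lon),
            EarthRadius * Math.Cos(lat) * Math.Sin(lon),
            EarthRadius * Math.Sin(lat)
        };
    }
    // Bearing from (lat0, lon0) to (lat1, lon1), in radians from true north,
    // ready to compare against the (true-north-corrected) compass heading.
    public static double BearingToDestination(double lat0, double lon0, double lat1, double lon1)
    {
        double[] p0 = ToEcef(lat0, lon0);
        double[] p1 = ToEcef(lat1, lon1);
        double dx = p1[0] - p0[0], dy = p1[1] - p0[1], dz = p1[2] - p0[2];
        double lat = lat0 * Math.PI / 180.0, lon = lon0 * Math.PI / 180.0;
        // ECEF -> ENU (east, north, up) with the current location as the reference.
        double east  = -Math.Sin(lon) * dx + Math.Cos(lon) * dy;
        double north = -Math.Sin(lat) * Math.Cos(lon) * dx
                       - Math.Sin(lat) * Math.Sin(lon) * dy
                       + Math.Cos(lat) * dz;
        // Ignore the up component and take atan2(east, north).
        return Math.Atan2(east, north);
    }
}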

Iphone OpenGL : glOrthof vs glFrustumf. is glOrthof not 3D?

Having a bad coding day.
Right, I need to make a 3D cube that spins around etc. via user interaction. Hey, no biggy.
All the examples for making a 3D cube seem to use glOrthof, and when I demo one to people they say it's not 3D.
The problem is that glFrustumf seems to put me inside the cube instead of in front of it. I can't move it back using glTranslatef because the code re-uses the ModelView matrix (I even tried manually modifying that):
/* save current rotation state */
GLfloat matrix[16];
glGetFloatv(GL_MODELVIEW_MATRIX, matrix);
/* re-center cube, apply new rotation */
glLoadIdentity();
glRotatef(self.angle, self.dy,self.dx,0);
/* reapply other rotations so far */
glMultMatrixf(matrix);
So my questions are:
To do a 3D cube must I use glFrustumf? And if so, how the hell do I step back 5 units but still re-use the model matrix (it keeps the cube spinning in whatever direction the user moves it)?
I'm not sure what you mean by glOrthof() "not being 3-D". The rotating cube example I have here (using both OpenGL ES 1.1 and 2.0 for rendering of the textured cube) seems to work in 3-D, and I use glOrthof() in the OpenGL ES 1.1 side of the renderer. Shading and other effects can be applied independently of the glOrthof() usage.
In that example, I don't read back the model view matrix to manipulate the cube. Instead, I keep a copy of the matrix locally and modify that using some Core Animation helper functions. In addition to the CATransform3DRotate() that I perform on the cube, you should be able to throw in a CATransform3DTranslate() to displace it in a certain direction, while still being able to spin it.
I keep a local copy of the model view matrix for performance (reading back the model view matrix halts the rendering pipeline on OpenGL ES 1.1), and for compatibility with 2.0 (where you need to send the matrix as a uniform to the shaders).
Also, in an answer to your later question (which might get closed), you can't just arbitrarily change values within the model view matrix and expect to see linear displacements from that. You need to get the math right, and matrix math was never one of my strong points. I find it best to let a transform operation (like those provided in Core Animation) do the math for you when manipulating matrices.
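For what it's worth, here is a hedged, framework-neutral C# sketch of that idea (System.Numerics, row-vector convention; not Core Animation and not the GL code from the question): keep the accumulated spin in a local matrix and compose the "step back 5" translation outside of it.
using System.Numerics;
class CubeTransform
{
    Matrix4x4 accumulatedRotation = Matrix4x4.Identity;
    public void AddSpin(float radiansAboutX, float radiansAboutY)
    {
        // Post-multiplying (row-vector convention) applies the new rotation after the
        // accumulated one, i.e. about the fixed view axes -- the same effect as the
        // glLoadIdentity/glRotatef/glMultMatrixf pattern in the question.
        accumulatedRotation = accumulatedRotation
                              * Matrix4x4.CreateRotationX(radiansAboutX)
                              * Matrix4x4.CreateRotationY(radiansAboutY);
    }
    // Model-view matrix for the renderer: rotate first, then move back 5 units,
    // so the cube spins about its own centre while sitting in front of the camera.
    public Matrix4x4 ModelView =>
        accumulatedRotation * Matrix4x4.CreateTranslation(0f, 0f, -5f);
}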

Using iPhone/iPod Touch acceleration to rotate a 3D object

I'm trying to use an iPhone/iPod's acceleration to directly manipulate a 3D object.
For that I've been searching through lots of material (Euler angles, quaternions, etc.).
I'm using OpenSG, where I have a 3D environment and want to manipulate a certain object (just rotating it through all the degrees of freedom the iPhone/iPod allows, using only the accelerometer).
So I tried to work out a solution to this problem, but it still doesn't give the expected result, and I get some weird rotations at some angles.
Can someone tell me what I'm doing wrong? Or, is there a better way of doing this without using quaternions?
The acceleration variable is a Vec3f containing the accelerometer values from iPhone/iPod filtered with a low-pass filter.
acceleration.normalize();
// Reference "up" direction: gravity along z when the device lies flat.
Vec3f reference = OSG::Vec3f(0, 0, 1);
// Axis and angle that rotate the reference vector onto the measured gravity vector.
OSG::Vec3f axis = acceleration.cross( reference );
angle = acos( acceleration.dot( reference ) );
OSG::Quaternion quat;
quat.setValueAsAxisRad(axis, angle);
After this code, I update my scene node using quaternion quat.
I wanted to do the exact same thing and just tried it; I hadn't played around with an accelerometer before and it seemed like it should be possible.
The problem is that if you set your iPhone on a table and then slowly spin it around while observing the output of the accelerometer, it basically doesn't change (one gravity pointing down). If you tilt it up/down on any of the four edges, you will see the output change.
In other words, you know that your table top is tilting top/bottom or left/right, but you can't tell that you are spinning it. So you can map this tilt to two rotations of a 3D object.
You could probably use the compass for the horizontal rotation; I couldn't try it because I was prototyping in the Unity game engine and it doesn't seem to support the compass yet.
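As a hedged C# sketch of that point (axis conventions and signs vary by device, so treat this as illustrative only): from the low-pass-filtered acceleration vector you can recover the two tilt angles relative to gravity, but not the spin about the vertical axis.
using System;
static class Tilt
{
    // Input: filtered gravity/acceleration components. Output: tilt about the device's
    // x and y axes in radians; rotation about the gravity axis stays unobservable.
    public static (double tiltX, double tiltY) FromGravity(double ax, double ay, double az)
    {
        double tiltX = Math.Atan2(ay, az);
        double tiltY = Math.Atan2(ax, Math.Sqrt(ay * ay + az * az));
        return (tiltX, tiltY);
    }
}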
The ever-wonderful Brad Larson posted an excellent description of his initial experiences with a 3D viewer while writing his Molecules app.
His method for rotations was as follows:
GLfloat currentModelViewMatrix[16];
// Read back the current model view matrix and rotate about one of the view-space
// axes, expressed in the model's local coordinates (a row of the upper-left 3x3).
glGetFloatv(GL_MODELVIEW_MATRIX, currentModelViewMatrix);
glRotatef(xRotation, currentModelViewMatrix[1], currentModelViewMatrix[5], currentModelViewMatrix[9]);
// The same again for the other axis, using the freshly updated matrix.
glGetFloatv(GL_MODELVIEW_MATRIX, currentModelViewMatrix);
glRotatef(yRotation, currentModelViewMatrix[0], currentModelViewMatrix[4], currentModelViewMatrix[8]);
Whether or not this is directly helpful, I can't recommend his blog entry "Brad learns a lesson or two" enough.
Edit: I may have misread the question, but I'll keep this post here as it will likely help people searching with similar keywords.