What is a "Break Rotator" and "Make Rotator" in Unreal Engine 4? - unreal-engine4

So, I am a beginner in Unreal Engine 4. I am having trouble understanding the "Make Rotator" and "Break Rotator" nodes on the Movement Input of the character blueprint class. The definition in the UE4 documentation has made me even more confused. Can anyone explain this in a simple way?

An FRotator (or just Rotator in BP) is Unreal's way of storing rotations.
Usually, there are two main ways rotations are represented in programs:
As three separate per-axis rotation values, known as Euler angles: one for how much you rotate around X, one for Y, one for Z. This is the "older" approach and has its issues, mainly gimbal lock. Also, there's no standard order: different programs apply the rotations in different orders (some do ZXY: Z first, X second, Y third, while others do XYZ, and so on).
To solve the issues with Euler rotations, another representation is used, with four separate values: three values denote the axis of rotation, and the fourth denotes the angle. This axis-and-angle value is called a quaternion (because it has four values; quattuor is Latin for four). Since any rotation can mathematically be expressed as an axis and an angle, quaternion rotations are free of gimbal lock and don't depend on any application order.
Unreal internally uses quaternions (FQuat), but they're a bit harder to explain and understand than three plain X, Y, Z rotations. Because of that, FQuat is only exposed in C++, and at the Blueprint level the editor simply shows the rotation as three axes. That's what the Rotator is for.
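To make the four values concrete (this is the standard axis-angle form of a quaternion, not anything Unreal-specific): a rotation by angle theta around a unit axis (ax, ay, az) is stored as
(x, y, z, w) = (ax*sin(theta/2), ay*sin(theta/2), az*sin(theta/2), cos(theta/2))
so, for example, 90 degrees around the Z axis gives roughly (0, 0, 0.707, 0.707).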
When you break a Rotator, you just separate out its three float values (Roll, Pitch and Yaw, i.e. the rotations around X, Y and Z). When you make a Rotator out of three float values, you build the rotation that results from them.
These Break and Make nodes exist for many other types as well, such as FVector (shown in BP as just Vector) and FLinearColor (Linear Color), so you can easily build a more complicated thing, like a color or a rotation, out of simple float values.
Having said this, since the underlying rotation is really an axis and an angle, it is often better to use the Rotator from Axis and Angle node rather than Make Rotator.
Rotators are quite convenient because Unreal has many functions for working with them, such as Lerp (Rotator) and others.

Break Rotator: allows access to the individual elements of a rotator.
Make Rotator: makes a rotator from three values.

Related

Unity: Create AnimationClip With World Scale AnimationCurves

I've been looking for a solution to this for quite a while now (meaning several days) and I haven't found anything yet. Maybe I'm thinking about it wrong and there isn't a way, but let's try!
I'm recording hand-data on a Hololens (the Unity Hololens Input Simulation for now). This essentially gives me one float AnimationCurve for each hand joint for each transform.position.x to z and rotation.x to w. Now my goal is to put these curves into an AnimationClip and add it to an AnimatorController (via an AnimatorOverrideController) that animates a hand rig and replay the recordings. Everything so far works!
However, the recorded hand-data from the Hololens is in world scale, not in local scale. (which makes sense, since you usually want absolute coordinates when you want to know where the hand is.) But to animate the hand, it seems I'm only able to set local coordinates, which I don't have.
Example:
clip.SetCurve("", typeof(Transform), "localPosition.x", curve.PositionX);
Here, the clip takes the x-coordinates from some hand joint and puts them into the localPosition.x of the corresponding hand rig joint. The problem: curve.PositionX is world-scale (absolute coordinates), but localPosition.x takes local-scale values (coordinates relative to its parent).
I can't simply change "localPosition.x" to "position.x", like so:
clip.SetCurve("", typeof(Transform), "position.x", curve.PositionX);
even though the Transform class has both properties, and position is the object's world-space position. I'm not sure why this doesn't work, but it gives me the following error:
Cannot bind generic curve on Transform component, only position, rotation and scale curve are supported.
I'm aware that it doesn't make much sense to use absolute coordinates for an animation, but I simply don't have anything else.
Does anyone have an approach how I can deal with this in a sensible, not-too-cumbersome way? It seems I have all the important parts, I just can't figure out how to put them together. Thanks so much already! :)
From my basic understanding, it seems like you are using the input animation recording service provided by MRTK. Unfortunately, MRTK does not provide a localPosition version of the curve data. However, you can modify the data in the recordingBuffer after the InputRecordingService stops recording.
So, here is a method worth trying: the handJointCurves dictionary property of the recordingBuffer field stores a set of pose curves for each joint. Then, based on that table of joint pose curves, subtract the position value of the None joint from the position value of every other joint at each keyframe, so that you obtain positions local to the None joint.
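A rough sketch of that idea, not MRTK's actual API: it assumes you can pull a world-space position curve per joint out of the recording buffer as a plain AnimationCurve (how you access it depends on your setup), and builds a relative curve by subtracting the reference joint's value at each keyframe.

using UnityEngine;

public static class CurveUtils
{
    // Builds a curve of (joint - reference) values, i.e. positions relative to the
    // reference joint (the "None" joint in MRTK's table), sampled at the joint
    // curve's keyframe times.
    public static AnimationCurve MakeRelativeCurve(AnimationCurve jointWorld, AnimationCurve referenceWorld)
    {
        var relative = new AnimationCurve();
        foreach (Keyframe key in jointWorld.keys)
        {
            float value = key.value - referenceWorld.Evaluate(key.time);
            relative.AddKey(key.time, value);
        }
        return relative;
    }
}

You would do this for the x, y and z position curves of every joint before calling clip.SetCurve with "localPosition.x" and so on. Note that this only removes the reference joint's translation; if the hand root also rotates during the recording, you would additionally need to rotate each offset by the inverse of the root rotation at that keyframe.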

Why do Quaternions have four variables?

The official documentation for the Unity Engine doesn't explain this, and I'm not far enough into my math/physics studies to have come across quaternions, but I understand they have to do with rotation. What I don't understand is why a Quaternion has four variables, w, x, y, z, when there are only three axes of rotation in Unity.
"A quaternion is basically an axis in 3D space with a angle of rotation around the axis. Four values make up a quaternion, namely x, y, z and w. Three of the values are used to represent the axis in vector format, and the forth value would be the angle of rotation around the axis."
http://www.real3dtutorials.com/tut00011.php
So, in simple terms, you could think of it as an axis to rotate around plus the amount of rotation around that axis.
As Hellium noted in the comments below, Unity recommends that you do not fiddle with Quaternions directly if you don't know exactly what you're doing. As Hellium also points out, whatever you want to accomplish, you probably want to use the static methods of the Quaternion class. They are very useful, easy to use, and can accomplish most things you want to do with rotations.
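A small illustration, assuming Unity's standard Quaternion API: build a 90-degree rotation around the Y axis and look at the four stored components.

using UnityEngine;

public class QuaternionDemo : MonoBehaviour
{
    void Start()
    {
        // x, y, z hold the rotation axis scaled by sin(angle / 2); w holds cos(angle / 2).
        Quaternion q = Quaternion.AngleAxis(90f, Vector3.up);

        // For 90 degrees around Y this prints roughly (0, 0.707, 0, 0.707).
        Debug.Log($"x={q.x} y={q.y} z={q.z} w={q.w}");

        // In practice you almost always go through helpers like AngleAxis, Euler,
        // LookRotation or Slerp instead of touching the components directly.
        transform.rotation = q;
    }
}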

Getting the axis pointing up in a rotation (Unity)

I have an object in unity which has a rotation described as the following:
x, y, z, where each is a rotation ranging from 0 to 360 degrees around its respective axis.
Now I'm trying to find out which of the vectors points up the most. Essentially I have a 6-sided die, on which I use physics to emulate a dice throw. I now want to find out which of the 6 faces of the die points upwards. I can imagine some rather involved if statements revolving around checking the rotations individually, but I'd like to know if there is a better way to do this.
You can get the face directions with:
transform.up
-transform.up
transform.right
-transform.right
transform.forward
-transform.forward
You need to associate each direction with the appropriate face value. The side facing up will be the one whose direction has the greatest dot product with Vector3.up (the world "up" direction). A dot product of 1 means a face is pointing directly up. Note that this only works because all of these directions are unit vectors.
Vector3.Dot(Vector3.up, transform.up);
Given that it's only 6 (or 3, if you are clever) if statements to find the maximum, that's probably the best way. If you are considering the general case, i.e. supporting any die shape and number of faces, you could store a list of structs, each with a lambda expression giving the face direction plus the face value, and then use LINQ's Max().
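A minimal sketch of that approach, assuming the value printed on each face is known from how the model was authored (the mapping below is just an example, with opposite faces summing to 7):

using UnityEngine;

public class DieFaceReader : MonoBehaviour
{
    public int ReadUpFace()
    {
        // Pair each face value with the local direction that face points in.
        (int value, Vector3 direction)[] faces =
        {
            (1,  transform.up),
            (6, -transform.up),
            (2,  transform.right),
            (5, -transform.right),
            (3,  transform.forward),
            (4, -transform.forward),
        };

        int bestValue = 0;
        float bestDot = float.NegativeInfinity;
        foreach (var face in faces)
        {
            // The face whose direction is most aligned with world up is the one showing.
            float dot = Vector3.Dot(Vector3.up, face.direction);
            if (dot > bestDot)
            {
                bestDot = dot;
                bestValue = face.value;
            }
        }
        return bestValue;
    }
}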
You could, as you say, check the rotations manually.
Here's an alternative collider-based approach: put invisible child game objects with individual trigger colliders on each face of the die; whenever one of them touches the table surface, record the value of the opposite face.
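A hedged sketch of that trigger idea (the "Table" tag is an assumption; use whatever identifies your table surface):

using UnityEngine;

public class DieFaceTrigger : MonoBehaviour
{
    // Value printed on the face opposite this one; it's the face showing up
    // when this face rests on the table.
    public int oppositeFaceValue;

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Table"))
        {
            Debug.Log($"Die shows {oppositeFaceValue}");
        }
    }
}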

What repulsion direction do we use for Smoothed Particle Hydrodynamics when radius is 0?

When doing SPH, the paper by Kelager recommends using a particular kernel for pressure-induced forces between particles. The kernel it recommends, when the distance is within the kernel radius, is:
(15/(pi*h^9)) * (h - r)^3
where h is the kernel radius, and r is the distance at which we are evaluating the kernel.
The paper then states that the gradient of this function is
(-45/(pi*h^9))*((r_vec)/r)*(h-r)^2
where r_vec is now the vector from the center of the kernel to the point we are interested in. As the length of r_vec goes to 0 from the positive direction, the paper states that this gradient approaches:
(-45/(pi*h^6))
But this is a scalar, not a vector. In order for there to be a repulsion between the two particles we're interested in, there needs to be a direction to repel in.
What direction should we use for when two particles are right next to each other?
I assume that the first expression is meant to be a potential. The negative gradient (derivative with respect to r) is then the force. This gradient is a vector, always pointing toward or away from the center, which appears correct for the second expression.
r_vec is, according to what you say, a vector pointing away from the origin to a point at some distance r away. (r_vec/r) is then a unit vector to specify direction. This works at every point except the origin itself, where it can be declared undefined, or declared to be zero. Zero is the average value of (r_vec/r) over all "nearby" points. This means zero force.
Normally in particle simulations with pair-wise forces, we ignore forces of a particle on itself, and of two particles at the exact same position. What about two particles very close together, when you have a force law that goes like 1/r, 1/(r^2), or similar? Nobody wants a divide-by-zero fault. Usually there's a small radius below which the potential is held constant, matching the given potential formula at the boundary of that radius. Particles too close together feel zero force, just so that the simulation won't crash. It may seem unphysical for the force to suddenly cease just inside that boundary when it is fiercely strong just outside it, but we strive to avoid such situations. Keep count of such incidences, and if there are too many, the simulation has gone bad; maybe a smaller time step is needed.
Luckily you don't have a 1/r type of force, but you still have that nasty r_vec/r, whose direction can swing wildly. The same technique of making the force zero below a certain tiny radius will help.
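Written out in the notation above (epsilon here is a small cutoff you choose yourself; it is not from the paper):
gradient = (-45/(pi*h^9)) * (r_vec/r) * (h - r)^2   for epsilon <= r <= h
gradient = 0 (the zero vector)                      for r < epsilon
where r is the length of r_vec.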
But that third expression bothers me. If it's the force at r=0, then starting with the force law in the second expression, I'm not sure how the third expression comes about. The problem that it looks like a scalar, while a force should be a vector, could be resolved by reading it as the radial component of a force vector: just multiply the expression by (r_vec/r), the familiar unit-magnitude vector. On the other hand, at r=0 that direction is undefined, so the expression is nonsense there.
A better overall solution: start with a new potential function, one that smoothly levels off and is flat right at r=0, like exp(-r^2) or 1/(1+r^2). The given potential peaks sharply; you want something that flattens out at the origin instead. Then, rather than declaring the force zero inside some small zone, the force would just naturally be zero at r=0. Find a flat-at-origin potential that approximates the given one well outside some small radius.

iPhone iOS: is it possible to create a rangefinder with 2 laser pointers and an iPhone?

I'm working on an iPhone robot that would be moving around. One of the challenges is estimating distance to objects: I don't want the robot to run into things. I saw some very expensive (~$1000) laser rangefinders, and would like to emulate one using the iPhone.
I have one or two camera feeds and two laser pointers. The laser pointers are mounted about 6 inches apart, at an angle. The angle of the lasers in relation to the cameras is known, and the angle of the cameras to each other is known.
The lasers point ahead of the cameras, creating 2 dots in the camera feed. Is it possible to estimate the distance to the dots by looking at the distance between the dots in the camera image?
The two laser beams and the wall form a trapezoid, with the laser mount as one parallel side and the wall (carrying the two dots) as the other. As the laser mount gets closer to the wall, the dots should move further away from each other.
Is what I'm talking about feasible? Has anyone done something like that?
Would I need one or two cameras for such calculation?
If you just don't want to run into things, rather than needing an accurate idea of the distance to them, then you could go "dambusters" on it and simply detect when the two dots become one; that happens at a known, fixed distance from the object.
For actual calculation, it is probably cheaper to have four lasers instead, in two pairs, each pair at a different angle, one pair above the other. Then a comparison between the relative spacings of the dots would probably let you work out a reasonably accurate distance. That part is a question for Math Overflow, though.
In theory, yes, something like this can work. Google "light striping" or "structured light depth measurement" for some good discussions of using this sort of idea on a larger scale.
In practice, your measurements are likely to be crude. There are a number of factors to consider: the camera's intrinsic parameters (focal length, etc.) and extrinsic parameters will affect how the dots appear in the image frame.
With only two sample points (note that structured-light methods use lines, etc.), the environment will present difficulties for distance measurement. Surfaces that are directly perpendicular to the floor (and to the direction of travel) can be handled reasonably well. Slopes and off-angle walls may be detectable, but you will find many situations that give ambiguous or incorrect distance measures.
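For a feel of the math, here is a rough sketch of the classic single-laser triangulation idea (your two-laser rig is essentially this applied twice). It assumes the laser is mounted a known distance from the camera and aimed parallel to the optical axis, and that radiansPerPixel comes from calibrating the camera; all names are illustrative.

using System;

public static class LaserRangefinder
{
    // Estimates distance to the surface the laser dot lands on.
    // pixelsFromCenter: how far the dot appears from the image center, in pixels.
    // radiansPerPixel: angular size of one pixel, from camera calibration.
    // laserOffsetMeters: distance between the laser and the camera's optical axis.
    public static double EstimateDistance(double pixelsFromCenter, double radiansPerPixel, double laserOffsetMeters)
    {
        // Angle between the optical axis and the ray through the laser dot.
        double theta = pixelsFromCenter * radiansPerPixel;

        // Similar triangles: laserOffset / distance = tan(theta).
        return laserOffsetMeters / Math.Tan(theta);
    }
}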