I cannot understand the extrinsic and intrinsic rotation order for the `scipy` library

After some research, my understanding so far is that scipy uses a right-handed coordinate system and, it seemed to me, left-handed rotations.
For example:
```python
import numpy as np
from scipy.spatial.transform import Rotation as R

np.array([0, 1, 0]) @ R.from_euler("XYZ", [0, 0, 30], degrees=True).as_matrix()
# should be [0.5, sqrt(3)/2, 0]
```
But I cannot figure out the difference between extrinsic and intrinsic rotations. As I understand it, extrinsic rotations use the fixed world axes, while intrinsic rotations use axes that rotate along with the body. So, if I understand this correctly:
Here "XYZ" means intrinsic rotation, you can find it at the official doc
```python
np.array([0, 1, 1]) @ R.from_euler("YZX", [180, 30, 0], degrees=True).as_matrix()
# should be [-0.5, sqrt(3)/2, -1]
# But it is [0.5, sqrt(3)/2, -1], which looks like the `extrinsic rotation`

np.array([0, 1, 1]) @ R.from_euler("yzx", [180, 30, 0], degrees=True).as_matrix()
# should be [0.5, sqrt(3)/2, -1]
# But it is [-0.5, sqrt(3)/2, -1], which looks like the `intrinsic rotation`
```
Am I misunderstanding this?

Sorry, this was a silly question. My fault: the rotation matrix should multiply the vector from the left (column-vector convention):
```python
R.from_euler("YZX", [180, 30, 0], degrees=True).as_matrix() @ np.array([0, 1, 1])
```

What is a "Break Rotator" and "Make Rotator" in Unreal Engine 4?

So, I am a beginner in Unreal Engine 4. I am having trouble understanding "Make Rotator" and "Break Rotator" on the Movement Input of the character blueprint class. The original definition in the UE4 documentation has made me even more confused. Can anyone explain this in a simple way?
A FRotator (or just Rotator in BP) is Unreal's way of storing rotations.
Usually, there are two main ways rotations are represented in programs:
As three separate per-axis rotation values, which is called Euler rotation: one for how much you rotate around X, one for Y, one for Z. This is the "older" approach, and it has its issues, mainly gimbal lock. Also, there's no standard, and different programs apply them in different orders (some programs do ZXY: Z first, X second, Y third, but others do XYZ, etc.; see the short sketch after this answer).
To solve the issues with Euler rotations, another representation is used, with four separate values: three values denote the axis of rotation, and the fourth denotes the angle. This axis-and-angle value is called a quaternion (because it has four values; quattuor is Latin for four). Since any rotation can mathematically be expressed as an axis and an angle, quaternion rotations are free of gimbal lock and aren't dependent on any order.
Unreal internally uses quaternions (FQuat), but they're a bit harder to explain and understand than just three X, Y, Z rotations. Because of that, FQuat is only exposed in C++, and at the Blueprint level the editor simply shows the rotation as three axes. That's what the Rotator is for.
When you break a rotator, you just separate out the three float values for the XYZ rotations. When you make a rotator out of three float values, it makes the rotation that would result from them.
These break and make nodes exist for many other types, such as FVector (shown in BP as just Vector) and FLinearColor (Linear Color), to easily make a more complicated thing, like a color or rotation, out of simple float values.
Having said this, since the underlying rotation really is an axis and an angle, it's often better not to use the Make Rotator node but the Rotator from Axis and Angle node.
Rotators are quite convenient, because there are many functions in Unreal to work with them, such as Lerp (Rotator) and others.
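To make the order problem concrete, here is a minimal scipy sketch (Python rather than Blueprint, purely for illustration): the same three per-axis angles produce different rotations under a ZXY order and an XYZ order.
```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# The same per-axis angles (x=30, y=45, z=60 degrees) applied in two orders:
zxy = R.from_euler("zxy", [60, 30, 45], degrees=True)  # Z first, then X, then Y
xyz = R.from_euler("xyz", [30, 45, 60], degrees=True)  # X first, then Y, then Z

# The resulting rotations differ, so "X=30, Y=45, Z=60" is ambiguous unless
# everyone also agrees on the application order.
print(np.allclose(zxy.as_matrix(), xyz.as_matrix()))   # False
```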
Break Rotator: gives access to the individual components of a rotator.
Make Rotator: builds a rotator out of three values.

(UNITY) Plane not rotating to normal vector of three points?

I am trying to get a stretched-out cube (which we can call a plane for the sake of discussion) to orient itself to the normal vector of a plane described by three points. I wrote a script to find the normal of three points, and then used transform.LookAt to make the planes align. However, I am finding that this script is not working at all how it is intended to, and despite my best efforts I cannot figure out why.
- Drastic movements of the individual points hardly affect the plane's rotation.
- The rotation of the object when using the existing points in the script should be 0,0,0 in the inspector. However, it is always off by a few degrees and, as I said, it does not align itself when I move the points around.
This is the script. I can also post photos showing the behavior or share a small Unity package.
First of all, Transform.LookAt takes a position as a parameter, not a direction! And then it

"Rotates the transform so the forward vector points at worldPosition."

which doesn't sound like what you are trying to achieve.
If you want your object to look with its forward vector in the given normal direction (assuming you are calculating the normal correctly), then you could rather use Quaternion.LookRotation:
```csharp
transform.rotation = Quaternion.LookRotation(doNormal(cpit, cmit, ctht));
```
Alternatively, you can also simply assign the corresponding direction vector directly, e.g.
```csharp
transform.forward = doNormal(cpit, cmit, ctht);
```
or
```csharp
transform.up = doNormal(cpit, cmit, ctht);
```
depending on your needs.
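For reference, here is a minimal sketch of the three-point normal computation the answer assumes (numpy for brevity; plane_normal is a hypothetical stand-in for the asker's doNormal, and the Unity equivalent would use Vector3.Cross):
```python
import numpy as np

def plane_normal(a, b, c):
    """Unit normal of the plane through points a, b, c (right-hand rule)."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    n = np.cross(b - a, c - a)       # perpendicular to both edge vectors
    return n / np.linalg.norm(n)     # zero norm here means collinear points

print(plane_normal([0, 0, 0], [1, 0, 0], [0, 1, 0]))  # -> [0. 0. 1.]
```
Note that flipping the order of the points flips the sign of the normal, which is a common reason the object ends up facing the wrong way.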

Why do Quaternions have four variables?

The official documentation for the Unity Engine doesn't contain this, and I'm not far enough into my math/physics studies to have come across a quaternion, but I understand it has to do with rotation. What I don't understand is why quaternions have four variables, w, x, y, z, when there are only three axes of rotation in Unity.
"A quaternion is basically an axis in 3D space with a angle of rotation around the axis. Four values make up a quaternion, namely x, y, z and w. Three of the values are used to represent the axis in vector format, and the forth value would be the angle of rotation around the axis."
http://www.real3dtutorials.com/tut00011.php
So you could think of it as an axis plus a rotation around that axis, in simple terms!
As Hellium noted in the comments below, Unity recommends that you do not fiddle with quaternion components directly if you don't know exactly what you're doing. As Hellium also points out, whatever you want to accomplish, you probably want to use the static methods of the Quaternion class. They are very useful, easy to use, and can accomplish most things you want to do with rotations.
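For intuition, a small numpy sketch of how the four values relate to the axis and the angle (plain Python, not Unity's API; strictly, x, y, z store the axis scaled by sin(angle/2) and w stores cos(angle/2)):
```python
import numpy as np

def quat_from_axis_angle(axis, angle_rad):
    """Unit quaternion (x, y, z, w) for a rotation of angle_rad around axis."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)      # the axis must be a unit vector
    half = angle_rad / 2.0
    x, y, z = axis * np.sin(half)     # axis scaled by sin of the half-angle
    w = np.cos(half)
    return np.array([x, y, z, w])

# 90 degrees around the Y (up) axis:
print(quat_from_axis_angle([0, 1, 0], np.pi / 2))
# -> [0.         0.70710678 0.         0.70710678]
```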

Image processing: Rotational alignment of an object

I have a stack of images with a bar close to the center. As the stack progresses the bar pivots around one end and the entire stack contains images with the bar rotated at many different angles up to 45 degrees above or below horizontal.
As shown here:
I'm looking for a way to rotate the bar and/or entire image and align everything horizontally before I do my other processing. Ideally this would be done in MATLAB / ImageJ / ImageMagick. I'm currently trying to work out a method using first Canny edge detection, followed by a Hough transform, followed by an image rotation, but I'm hoping this is a specific case of a more general problem which has already been solved.
If you have the image processing toolbox you can use regionprops with the 'Orientation' property to find the angle.
http://www.mathworks.com/help/images/ref/regionprops.html#bqkf8ji
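If you end up in Python instead of MATLAB, a rough scikit-image equivalent would be the following sketch (the file name is hypothetical, and it assumes the bar is darker than the white background):
```python
import numpy as np
from skimage import io, measure

img = io.imread("bar_slice.png", as_gray=True)   # one slice of the stack
mask = img < 0.5                                 # threshold bar vs. background
regions = measure.regionprops(measure.label(mask))
bar = max(regions, key=lambda r: r.area)         # keep the largest blob
print(np.degrees(bar.orientation))               # major-axis angle of the bar
```
skimage.transform.rotate (or imrotate in MATLAB) can then level the image by that angle; watch the sign convention of the orientation value.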
The problem you are solving is known as image registration or image alignment.
- The first thing you need to do is to threshold the image, so you end up with a black-and-white image. This will simplify the process.
- Then you need to calculate the mass center of the images and translate them to match each other's centers.
- Then you need to rotate the images to match each other. This could be done using the principal axis measure. The principal axes explain most of the variance in the population, which will basically give you a vector showing which way your bar is pointing. Then all you need to do is rotate the bars in the same direction (see the sketch below).
- After the principal axis transformation you can try rotating the pictures a little bit more in each direction to try and optimise the rotation.
All the way through your translation and rotation you need a measure showing how good a fit your transformation is. This measure can be many things. If the picture is black and white, a simple subtraction of the pictures is enough. Otherwise you can use measures like mutual information.
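Here is a minimal numpy sketch of the principal-axis step above; it is just a PCA on the foreground pixel coordinates, and `mask` is assumed to be the thresholded black-and-white image:
```python
import numpy as np

def principal_axis_angle(mask):
    """Angle (degrees) of the direction of largest variance of the blob."""
    rows, cols = np.nonzero(mask)                 # foreground coordinates
    coords = np.column_stack((cols, rows)).astype(float)
    coords -= coords.mean(axis=0)                 # center on the mass center
    eigvals, eigvecs = np.linalg.eigh(np.cov(coords.T))
    major = eigvecs[:, np.argmax(eigvals)]        # principal axis
    return np.degrees(np.arctan2(major[1], major[0]))
```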
You can also look at Procrustes analysis; see this link for a MATLAB function: http://www.google.dk/search?q=gpa+image+analysis&oq=gpa+image+analysis&sugexp=chrome,mod=9&sourceid=chrome&ie=UTF-8#hl=da&tbo=d&sclient=psy-ab&q=matlab+procrustes+analysis&oq=matlab+proanalysis&gs_l=serp.3.1.0i7i30l4.5399.5883.2.9481.3.3.0.0.0.0.105.253.2j1.3.0...0.0...1c.1.5UpjL3-8aC0&pbx=1&bav=on.2,or.r_gc.r_pw.r_qf.&bvm=bv.1355534169,d.Yms&fp=afcd637d8ae07bde&bpcl=40096503&biw=1600&bih=767
You might want to look into the SIFT transform.
You should take as your image the rectangle that represents a worst-case guess for your bar and determine the rotation matrix for that.
See http://www.vlfeat.org/overview/sift.html
Use the StackReg plugin of ImageJ. I'm not 100% sure but I think it already comes installed with FIJI (FIJI Is Just ImageJ).
EDIT: I think I misread your question: it is not a stack of images you are trying to fix, right? In that case, a simple approach (probably not the most efficient, but it definitely works) is the following algorithm:
- threshold the image (seems easy, since your background is always white)
- take a long horizontal line as a structuring element and dilate the image with it
- rotate the structuring element and keep dilating the image, measuring the size of the dilation
- the angle that maximizes it is the rotation angle you'll need to fix your image (a rough sketch follows)
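A small scipy sketch of that search, with one liberty: it uses erosion instead of dilation, since a long line survives erosion only where it fits inside the bar, which makes the maximum easy to read off (`mask` is the thresholded image):
```python
import numpy as np
from scipy import ndimage

def bar_angle(mask, length=51):
    """Search -45..45 degrees for the line angle that best fits the bar."""
    best_angle, best_score = 0.0, -1
    for angle in np.arange(-45.0, 45.5, 0.5):
        se = np.zeros((length, length))
        se[length // 2, :] = 1.0                       # horizontal line
        se = ndimage.rotate(se, angle, reshape=False) > 0.5
        score = ndimage.binary_erosion(mask, structure=se).sum()
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle
```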
There are several approaches to this problem, as suggested by other answers. One approach, possibly similar to what you are already trying, is to use the Hough transform. The Hough transform is good at detecting line orientations. Combining this with morphological processing and image rotation after detecting the angle, you can create a system that corrects for angular variations. The basic steps would be:
- Use morphological operations to make the bar a single line blob.
- Use the Hough transform on this image.
- Find the maximum in the transform output and use that to find the orientation angle.
- Use the angle to fix the original image.
A full example of this method comes with the Computer Vision System Toolbox; see
http://www.mathworks.com/help/vision/examples/rotation-correction-1.html
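In Python, steps 2 and 3 might look like this with scikit-image (file name hypothetical; peak handling simplified to the single strongest line):
```python
import numpy as np
from skimage import io
from skimage.feature import canny
from skimage.transform import hough_line, hough_line_peaks

img = io.imread("bar_slice.png", as_gray=True)
edges = canny(img)                                      # edge map
h, thetas, dists = hough_line(edges)                    # step 2
_, peak_angles, _ = hough_line_peaks(h, thetas, dists)  # step 3
# theta is the angle of the line's normal; the bar itself is 90 deg away
print(np.degrees(peak_angles[0]) - 90)
```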
You can try the Givens or Householder transform; I prefer Givens.
It requires an angle, using cos(angle) and sin(angle) to build the Givens matrix.
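For reference, a minimal numpy sketch of the 2D Givens rotation built from such an angle (the usual embedding into a larger identity matrix is omitted):
```python
import numpy as np

def givens(theta):
    """2x2 Givens rotation: rotates vectors counterclockwise by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

print(givens(np.pi / 6) @ np.array([1.0, 0.0]))  # -> [0.866 0.5]
```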

How to determine absolute orientation

I have an XYZ accelerometer and magnetometer. Now I want to determine the orientation of the device using both. The problem I see is that, depending on the device orientation, I'd need to use the sensors in a different order.
Let me give an example. If I have the device facing me, then changes in both the roll and pitch can be determined with the accelerometer. For yaw I use the magnetometer.
But if I put the device horizontally (i.e. turn it 90º, facing the ceiling), then any change in the up vector (now horizontal) isn't noticed, as the accelerometer doesn't detect any change. This can now be detected with the magnetometer.
So the question is, how to determine when to use one or the other. Is this enough with both sensors or do I need something else?
Thanks
The key is to use the cross product of the two vectors, gravity and the magnetometer reading. The cross product gives a new vector perpendicular to them both. That means it is horizontal (perpendicular to down) and 90 degrees away from north. The raw pair is a little ugly because gravity and the magnetic field are not perpendicular to each other, but that is easy to fix: cross this new vector back with the gravity vector, which gives a third vector perpendicular to both the gravity vector and the first cross product. Now you have three mutually perpendicular vectors which define your 3D orientation coordinate system: the original accelerometer (gravity) vector defines Z (up/down), and the two cross-product vectors define the east/west and north/south components of the orientation.
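A numpy sketch of that construction (assuming `gravity` points down in the device frame; sensor sign conventions vary, and the math degenerates where the two vectors are parallel, e.g. near the magnetic poles):
```python
import numpy as np

def enu_frame(gravity, mag):
    """East/north/up unit vectors, expressed in the device frame."""
    down = gravity / np.linalg.norm(gravity)
    east = np.cross(down, mag)       # horizontal, 90 degrees from north
    east /= np.linalg.norm(east)     # zero length when gravity and mag align
    north = np.cross(east, down)     # horizontal, toward magnetic north
    return east, north, -down
```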
Here is some documentation that walks through this project. As is clear from other answers, the math can be tricky.
http://www.freescale.com/files/sensors/doc/app_note/AN4248.pdf
I think the question "how to determine when to use one or the other" is misguided: you should always use both sensors for orientation. There are cases where one of them is useless, but these are edge cases.
If I understand you correctly, you'll need something to detect pitch (tilting) and orientation according to the cardinal points (North, East, South and West).
The pitch can be read from the accelerometer.
The orientation according to the cardinal points can be read from a compass.
Combining the output from these two sensors correctly with the right math in your software will most likely give you the absolute orientation.
I think it's doable that way.
Good luck.
In the event you still need absolute orientation, you can check out this breakout board from Adafruit: https://www.adafruit.com/products/2472. The nice thing about this board is that it has an ARM Cortex-M0 processor to do all of the calculations for you.