I've set up an animation of a tugboat [from the VRML library] using the Virtual Reality Animation objects, but I am having trouble viewing the rotation of the boat.
To be more specific: I have a simulator running in which I calculate the boat's trajectory in time from rigid body dynamics. That is, I have x, y, z, phi, theta, psi vs. time. I associate the translations and rotations with the node corresponding to the boat. When I press play, I can see the translation and rotation, but they are not as expected.
I'm not sure what the problem could be. I tried adding one Transform in the .wrl file for each rotational degree of freedom, but the result is odd: when I apply a rotation about one axis, I see the object rotating and translating about other axes as well.
Any help is most welcome.
Related
I am well aware of the existence of this question, but mine differs. I also know that this approach can accumulate significant errors, but I want to understand the setup theoretically as well.
I have some basic questions which I find hard to answer for myself clearly. There is a lot of information about accelerometers and gyroscopes but I still haven't found an explanation "from first principles" of some basic properties.
So I have a plate sensor that contains an accelerometer and a gyroscope. There is also a magnetometer, which I will skip for now.
At each time t, the accelerometer reports the instantaneous acceleration vector a = (ax, ay, az) in m/s^2, expressed in the coordinate system fixed to the sensor.
The gyroscope reports a 3D vector in deg/s giving the instantaneous rate of rotation about the three axes (Ox, Oy and Oz). From this information, one can get a rotation matrix R that corresponds to an infinitesimal rotation of the coordinate system relative to the previous moment. Here is some explanation of how to obtain a quaternion that represents R.
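Since this step trips many people up, here is a minimal Python sketch of one common way to turn gyro rates into an orientation quaternion. The first-order update and all names are my own illustration, not taken from any particular library:

    import numpy as np

    def quat_multiply(q, p):
        """Hamilton product of quaternions stored as [w, x, y, z]."""
        w1, x1, y1, z1 = q
        w2, x2, y2, z2 = p
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    def integrate_gyro(q, omega_deg_s, dt):
        """First-order update of the orientation quaternion q (body -> world)
        from a gyro reading omega_deg_s = [wx, wy, wz] in deg/s."""
        omega = np.radians(omega_deg_s)                    # deg/s -> rad/s
        q_dot = 0.5 * quat_multiply(q, np.array([0.0, *omega]))
        q_new = q + q_dot * dt
        return q_new / np.linalg.norm(q_new)               # renormalize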
So we know that the infinitesimal movement can be calculated using the fact that acceleration is the second derivative of position.
Imagine that your sensor is attached to your hand or leg. At the first moment we can take its point in 3D space as (0,0,0), with the initial coordinate system also attached at this physical point. So for the very first time step we will have
r(1) = 0.5a(0)dt^2
where r is the infinitesimal movement vector, a(0) is the acceleration vector.
In each of the following steps we will use the calculations
r(t+1) = 0.5a(t)dt^2 + v(t)dt + r(t)
where v(t) is the speed vector which will be estimated in some way, for example as (r(t)-r(t-1)) / dt.
Also, after each infinitesimal movement we will have to take into account the data from the gyroscope. We will use the rotation matrix to rotate the vector r(t+1).
In this way, probably with tremendous error, I will get some trajectory expressed in the initial coordinate system.
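A minimal Python sketch of the loop just described, reusing the quat_multiply and integrate_gyro helpers from the earlier sketch (the structure is my own illustration of the algorithm in the question, not a reference implementation):

    import numpy as np

    def rotate_by_quat(q, v):
        """Rotate vector v by unit quaternion q = [w, x, y, z]."""
        q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
        return quat_multiply(quat_multiply(q, np.array([0.0, *v])), q_conj)[1:]

    def dead_reckon(accels, gyros, dt):
        """Naive dead reckoning: keep the orientation up to date with the gyro,
        rotate each accelerometer sample into the initial frame, integrate twice.
        As the question anticipates, the error grows very quickly."""
        q = np.array([1.0, 0.0, 0.0, 0.0])    # initial orientation
        v = np.zeros(3)                        # assumes zero initial velocity!
        r = np.zeros(3)
        trajectory = [r.copy()]
        for a_body, w_body in zip(accels, gyros):
            q = integrate_gyro(q, w_body, dt)
            a_world = rotate_by_quat(q, a_body)  # accel in the initial frame
            # NOTE: a real implementation must subtract gravity from a_world.
            r = r + v * dt + 0.5 * a_world * dt**2
            v = v + a_world * dt
            trajectory.append(r.copy())
        return np.array(trajectory)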
My queries are:
Am I principally correct with this algorithm? If not, where am I wrong?
I would very much appreciate some resources with a working example where the first principles are not skipped.
How should I proceed with a Kalman filter to obtain a better trajectory? In what way exactly do I pass all the IMU data (accelerometer, gyroscope and magnetometer) to the Kalman filter?
Your conceptual framework is correct, but the equations need some work. The acceleration is measured in the platform frame, which can rotate very quickly, so it is not advisable to integrate acceleration in the platform frame and then rotate the position change. Rather, the accelerations are transformed into a relatively slowly rotating frame, and the integration to velocity change and position change is done there. Typically this is a locally-level frame (e.g. North-East-Down or Wander Azimuth) or an Earth-centered frame (ECEF or ECI). Gravity and the Coriolis force must be included in the acceleration.
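As a sketch of what this looks like, here is one simplified strapdown update in a North-East-Down frame, written in Python; it deliberately drops the Coriolis and transport-rate terms that a real mechanization (such as those in Chapter 3 of the reference below) includes, and all names are my own:

    import numpy as np

    GRAVITY_NED = np.array([0.0, 0.0, 9.81])   # "down" is positive in NED

    def nav_update(r, v, C_bn, f_body, dt):
        """One simplified strapdown step in a locally-level (NED) frame.
        C_bn   : 3x3 body-to-NED rotation matrix from the attitude integration
        f_body : specific force from the accelerometer, in the body frame
        Coriolis and transport-rate terms are omitted for brevity."""
        a_ned = C_bn @ f_body + GRAVITY_NED    # transform, then add gravity
        r_new = r + v * dt + 0.5 * a_ned * dt**2
        v_new = v + a_ned * dt
        return r_new, v_new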
Derivations from first principles can be found in many references; one of my favorites is Strapdown Inertial Navigation Technology by Titterton and Weston. Derivations of the inertial navigation equations in locally-level and Earth-fixed frames are given in Chapter 3.
As you've recognized in your question, the initial velocity is an unknown constant of integration. Without some estimate of the initial velocity, the trajectory resulting from integrating the inertial data can be wildly wrong.
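The effect is easy to quantify: an initial-velocity error alone produces a position error that grows linearly with time. A one-line Python sanity check (the numbers are made up for illustration):

    # A 0.1 m/s error in initial velocity gives 6 m of drift after 60 s,
    # before any accelerometer or gyro error is even considered.
    v0_error, t = 0.1, 60.0        # m/s, s
    print(v0_error * t)            # -> 6.0 (meters of position error)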
I am leveraging SimMechanics, SimElectronics, and Simulink to model a quadcopter system for an embedded systems class project (files here). I have generated a 2nd Generation SimMechanics model of an F450 quadcopter frame, including the motors and propellers. We were hoping to develop a model of a quadcopter with only a single rotational degree of freedom, around either the x or the y axis. I was hoping to model this with a revolute joint connecting the quadcopter frame to the "world frame". However, the "revolute joint" block in SimMechanics only acts around the z-axis. How can I change the axis of rotation for a revolute joint?
It appears that another individual has asked the same question, but no one has yet responded to his question.
See Assembling Multibody Models in the SimMechanics documentation, in particular the section on "orienting joints":
To obtain the motion expected in a model, you must align its various joint motion axes properly. This means aligning the joints themselves as observed or anticipated in the real system. Misaligning the joint axes may lead to unexpected motion but it often leads to something more serious, such as a failure to assemble and simulate.

You can specify and change joint alignment by rotating the connection frames local to the adjoining body subsystems. For this purpose, you specify rotation transforms using Rigid Transform blocks, either by adding new blocks to the body subsystems or, if appropriate, by changing the rotation transforms in existing blocks within the subsystems.

Why change the orientation of joints through body subsystem frames? The primitives in a Joint block each have a predetermined motion axis, such as x or z. The axis definition is fixed and cannot be changed. Realigning the connection frames local to the adjoining body subsystems provides a natural way to reorient joints while avoiding confusion over which axis a particular joint uses.

For an example of how to rotate joint connection frames, see Model Mount.
So the answer is to use a Rigid Transform block to change the orientation of the frames; you cannot change the axis of the revolute joint itself.
I think you should change it in your CAD file: change your propeller axis to align with the z-axis. But change only the propeller axis, not the whole body.
I have a CAD model of a bar 25cm x 5cm x 2cm imported into SimMechanics.
On one of the sides, I have a small "hole", around which I have to apply a certain torque, to make the bar spin.
I have applied said torque through a revolute joint, but SimMechanics assumes the axis of rotation to lie along one of the edges, giving a "lopsided" rotation.
How can I shift the position of the torque to this specific point on the bar?
To answer my own question, the way I solved it was to add a Rigid Transform "after" the revolute joint.
What happened was that adding the Rigid Transform after the revolute joint essentially "shifted" the bar to the imaginary axis of rotation of the revolute joint, which was what I was looking for.
Still need the math: I am trying to calculate the yxy rotation sequence given a quaternion transformation. I can easily do this using Matlab's quat2angle function. However, I need to calculate this by hand in a Python script.
This part is solved: please look at this awesome presentation, which helped me resolve the issues below:
http://www.udel.edu/biology/rosewc/kaap686/reserve/shoulder/shoulder/BluePresentation.ppt
Also, with Matlab, I am seeing strange results with the way they calculate yxy. I have a quaternion transformation of [1.0000, -0.0002, -0.0011, -0.0006] and I get y = 112.4291, x = -0.0719, y1 = -112.5506 (in degrees).
I don't expect to see any rotations here (my sensors aren't rotating). Why is Matlab showing me rotation? And when I try to move only in the x rotation, I see y and y1 also rotate; however, I don't expect y or y1 to be rotating. Any thoughts?
UPDATE:
When I add y + y1, I seem to get the value for the first y (when doing a simple rotation around the first y), and this smooths out the data. However, when I combine the three rotations of the shoulder, the data doesn't make sense. I am trying to define shoulder movement based on plane of elevation, elevation, and rotation (yxy) in a way that's easy to interpret. When I rotate around x and then the second y, I get "clipping" (the data goes to 180 then -180, following a positive trend for y1, and the opposite happens for y), even though I start my sensors at the zero position. Also, if I try to rotate only around the second y, I see rotation in the x. That doesn't make any sense either. Any additional thoughts?
Note:
I am using 2 IMU sensors, taring them in the same orientation, holding one constant and rotating the other, calculating the relative rotation between them using quaternions, and then calculating the yxy rotation sequence angles.
In case anyone is interested in quaternion calculations and transformations: I solved it using this transformations library:
http://www.lfd.uci.edu/~gohlke/code/transformations.py.html
It has several functions for working with matrices, quaternions, and Euler rotations, and you can convert quaternions to several different Euler rotation sequences. Give thanks to the person who created this script.
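For anyone who still wants the math by hand, here is a minimal Python sketch of a yxy extraction from a unit quaternion. The conventions (intrinsic y-x-y, quaternion stored as [w, x, y, z], matrices rotating column vectors) are my assumptions; MATLAB's quat2angle uses a different convention, so signs can differ:

    import numpy as np

    def quat_to_yxy(q):
        """Intrinsic y-x-y Euler angles (y, x, y1) in radians from a unit
        quaternion q = [w, x, y, z], using R = Ry(y) @ Rx(x) @ Ry(y1)."""
        w, x, y, z = q
        # Only five entries of the rotation matrix are needed.
        r01 = 2.0 * (x*y - w*z)
        r10 = 2.0 * (x*y + w*z)
        r11 = 1.0 - 2.0 * (x*x + z*z)
        r12 = 2.0 * (y*z - w*x)
        r21 = 2.0 * (y*z + w*x)
        theta_x = np.arccos(np.clip(r11, -1.0, 1.0))
        if abs(np.sin(theta_x)) < 1e-9:
            # Degenerate (gimbal-lock) case at x ~ 0: only y + y1 is
            # determined, so put everything into the first y.
            r00 = 1.0 - 2.0 * (y*y + z*z)
            r02 = 2.0 * (x*z + w*y)
            return np.arctan2(r02, r00), theta_x, 0.0
        theta_y  = np.arctan2(r01, r21)
        theta_y1 = np.arctan2(r10, -r12)
        return theta_y, theta_x, theta_y1

Note that the near-identity quaternion from the question is almost exactly this degenerate case: the middle x rotation is tiny, so y and y1 individually are ill-conditioned and come out as large, nearly opposite angles, while their sum stays near zero. That is consistent with the y + y1 observation in the update above.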
I want to use the iPhone's accelerometer to detect motions while driving. I'm a bit confused about what the accelerometer actually measures, especially when driving through a curve.
As you can see in the picture, a car driving a curve causes two forces. One is the centripetal force and one is the velocity. Imagine the iPhone is placed on the dashboard with +y-axis is pointing to the front, +x-axis to the right and +z-axis to the top.
My question now is: what acceleration will be measured when the car drives through this curve? Will the g-force be measured on the -x-axis, or will it appear on the +y-axis?
Thanks for helping!
UPDATE:
For those interested: as one of the answers suggested, it measures both. The accelerometer is affected by the centrifugal force and by changes in speed, resulting in an acceleration vector that is a combination of these two.
I think it will measure both. But don't forget that the sensor will measure gravity as well, so when your car is not moving you will still get accelerometer readings. Here is a nice talk on sensors in smartphones: http://www.youtube.com/watch?v=C7JQ7Rpwn2k&feature=results_main&playnext=1&list=PL29AD66D8C4372129 (it's about Android, but the same type of sensors is used in the iPhone).
An accelerometer measures the acceleration due to the resultant force applied to it (velocity is not a force, by the way). In this case the force is F = g + w + c, i.e. the vector sum of gravity, the centrifugal force (the reaction to the steering centripetal force, pointing away from the center of the turn), and the car's acceleration force (the force changing the absolute value of the instantaneous velocity, pointing along the velocity vector). Provided the Z axis of the accelerometer always points along the gravity vector (rarely the case in an actual car), the g, w, and c accelerations can be read from the Z, X, and Y coordinates respectively.
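A toy Python sketch of that decomposition for the dashboard mounting in the question (+y forward, +x right, +z up). The speed and turn radius are invented for illustration, and per-platform sign conventions (including the iPhone's) may flip individual axes:

    import numpy as np

    G = 9.81  # m/s^2

    def dashboard_reading(speed_mps, turn_radius_m, throttle_accel_mps2):
        """Idealized specific-force reading on level ground during a steady
        LEFT turn, with +y forward, +x right, +z up."""
        centripetal = speed_mps**2 / turn_radius_m   # points at turn center
        ax = -centripetal             # turn center is to the left (-x)
        ay = throttle_accel_mps2      # speeding up shows on the +y axis
        az = G                        # gravity reaction: +1 g when at rest
        return np.array([ax, ay, az])

    # 15 m/s (54 km/h) through a 50 m radius left curve, constant speed:
    print(dashboard_reading(15.0, 50.0, 0.0))   # -> [-4.5, 0.0, 9.81]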
Unless you are in free fall, the g-force (gravity) is always measured. If I understand your setup correctly, the g-force will appear on the z-axis, the axis that is vertical in the Earth frame of reference. I cannot tell whether it will be +z or -z; it is partly convention, so you will have to check it for yourself.
UPDATE: If the car is also going up/downhill then you have to take the rotation into account. In other words, there are two frames of reference: the iPhone's frame of reference and the Earth frame of reference. If you would like to deal with this situation, then please ask a new question.