Forearm posture prediction based on arm and hand angles - unity3d

I have 2 sensors, one on the upper arm and one on the hand; each gives me a quaternion (or Euler angles) describing its orientation in global 3D space. Do you have any ideas how I can find the forearm position?
For example, I need this value to solve the following issue:
when I rest my hand on a table and raise my upper arm (like a wing), the program thinks that my hand moves up together with the upper arm.

The situation you describe sounds like an inverse kinematics problem: in effect, the problem of determining the required angles (and by extension, positions) of the intermediate joints in a limb (or other jointed object), given the desired position of its free end (the end effector).
Rather than tackling this difficult mathematics question on your own, you may want to look at Unity's own inverse kinematics implementation. However, this might not be feasible if you can't adapt your in-game object(s) to an Animator.
You could also code a solution on your own. Creating a generic solution to this problem is extremely mathematically complex - but you've got some extra information available to you: the actual angle and positioning of the hand and upper arm. If you specify either the forearm length or upper arm length (you can even derive this mathematically, up to you!), then some light linear algebra should be enough for your purposes (provided your sensor readings are correct).
If you specify a forearm length FL, then you need to find the point (the elbow) along the upper arm's direction vector that is at distance FL from the hand. Alternatively, if you specify an upper arm length UL, you just need to move UL/2 down the upper arm's direction vector (since the sensor's reported position is at the middle of the upper arm) to get the elbow position.
In both cases, once you have the elbow position, it's trivial to get the forearm position and angle.
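Here is a minimal numpy sketch of the second approach (the UL/2 case). Beyond what the question states, it assumes the upper-arm sensor also reports its position, that it sits at the middle of the upper arm, and that the sensor's local +X axis points from shoulder to elbow; in Unity the same math maps directly onto Quaternion and Vector3 operations.

import numpy as np
from scipy.spatial.transform import Rotation as R

def forearm_pose(upper_pos, upper_quat, hand_pos, upper_len):
    # Sensor quaternion is [x, y, z, w]; local +X points "down the arm".
    arm_axis = R.from_quat(upper_quat).apply([1.0, 0.0, 0.0])
    # The sensor sits at the middle of the upper arm, so the elbow is
    # UL/2 further along the arm's direction vector.
    elbow = np.asarray(upper_pos, dtype=float) + arm_axis * (upper_len / 2.0)
    forearm_vec = np.asarray(hand_pos, dtype=float) - elbow
    forearm_dir = forearm_vec / np.linalg.norm(forearm_vec)
    forearm_mid = elbow + forearm_vec / 2.0  # center of the forearm
    return elbow, forearm_dir, forearm_mid

Because the forearm direction is anchored at the measured hand position, raising the upper arm (the "wing" motion) now only moves the elbow, not the hand.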


Oriented Bounding Box algorithm, Need some understanding/clarification of a few lines of existing (working) code

I am reviewing some MATLAB code that is publicly available at the following location:
https://github.com/mattools/matGeom/blob/master/matGeom/geom2d/orientedBox.m
This is an implementation of the rotating calipers algorithm on the convex hull of a set of points, in order to compute an oriented bounding box. My goal in reviewing it was to understand intuitively how the algorithm works; however, I seek clarification on certain lines within the file that I am confused about.
On line 44: hull = bsxfun(@minus, hull, center);. This appears to translate all the points within the convex hull set so that the calculated centroid is at (0,0). Is there any particular reason why this is performed? My only guess would be that it allows straightforward rotational transforms later on in the code, as rotating about the real origin would cause significant problems.
On lines 71 and 74: indA2 = mod(indA, nV) + 1; and indB2 = mod(indB, nV) + 1;. Is this a trick to prevent the access index from going out of bounds? My guess is that, to prevent out-of-bounds access, it rolls the index over upon reaching the end.
On line 125: y2 = - x * sit + y * cot;. This is the correct transformation, as the code behaves properly, but I am not sure why it is used here and how it differs from the other rotational transforms done later and also prior (with the calls to rotateVector). My best guess is that I am simply not visualizing the required rotation correctly in my head.
Side note: The external function calls vectorAngle, rotateVector, createLine, and distancePointLine can all be found under the same repository, in files named after the function name (as per MATLAB standard). They are relatively uninteresting and do what you would expect aside from the fact that there is normalization of vector angles going on.
I'm the author of the above piece of code, so I can give some explanations about it:
First of all, the algorithm is indeed a rotating calipers algorithm. In the current implementation, only the width is tested (I did not check the west and east vertices). Actually, the two results seem to correspond most of the time.
Line 44 -> the goal of translating to the origin was to improve numerical accuracy. When a polygon is located far away from the origin, coordinates may be large and close together. Many computations involve products of coordinates. By translating the polygon to be centered on the origin, the coordinates are smaller, and the precision of the resulting products is expected to improve. Well, to be honest, I did not evidence this effect directly; this is more a careful way of coding than a fix…
Lines 71 and 74 -> yes. The idea is to find the index of the next vertex along the polygon. If the current vertex is the last vertex of the polygon, then the next vertex index should be 1. mod(ind, nV) rescales the index to the range 0 to nV-1, and the +1 brings it back to MATLAB's 1-based indexing, so the two lines ensure correct cyclic iteration.
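A quick illustration of the wrap-around, reproduced here in Python with the same 1-based formula:

nV = 4
for ind in range(1, nV + 1):
    print(ind, "->", ind % nV + 1)  # prints 1 -> 2, 2 -> 3, 3 -> 4, 4 -> 1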
Line 125 -> there are several transformations involved. Using the rotateVector() function, one simply computes the minimal width for a given edge. On line 125, one rotates the points (of the convex hull) to align with the "best" direction (the one that minimizes the width). The last change of coordinates (lines 132->140) is due to the fact that the center of the oriented box is different from the centroid of the polygon. Then we add a shift, which is corrected by the rotation.
I did not really look at the code; this is an explanation of how rotating calipers work.
A fundamental property is that the tightest bounding box is such that one of its sides overlaps an edge of the hull. So what you do is essentially
try every edge in turn;
for a given edge, seen as being horizontal, south, find the farthest vertices north, west and east;
evaluate the area or the perimeter of the rectangle that they define;
remember the best area.
It is important to note that when you switch from one edge to the next, the N/W/E vertices can only move forward, and are readily found by looking for the next decrease of the relevant coordinate. This is why the total processing time is linear in the number of edges (the search for the initial N/E/W vertices takes 3(N-3) comparisons, then the updates take 3(N-1)+Nn+Nw+Ne comparisons, where Nn, Nw, Ne are the numbers of moves from a vertex to the next; obviously Nn+Nw+Ne = 3N in total).
The modulos are there to implement the cyclic indexing of the edges and vertices.
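For reference, here is a naive O(N^2) Python sketch of the steps listed above: for every hull edge, rotate the hull so that edge is horizontal, take the axis-aligned box, and keep the smallest area. The linear-time update of the N/W/E vertices that makes true rotating calipers O(N) is omitted for clarity; hull is assumed to be an (N, 2) array of convex-hull vertices in order.

import numpy as np

def oriented_box_naive(hull):
    best = None
    n = len(hull)
    for i in range(n):
        edge = hull[(i + 1) % n] - hull[i]  # cyclic next vertex
        theta = np.arctan2(edge[1], edge[0])
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, s], [-s, c]])   # rotation by -theta
        pts = hull @ rot.T                  # the edge is now horizontal
        w = np.ptp(pts[:, 0])               # east-west extent
        h = np.ptp(pts[:, 1])               # north-south extent
        if best is None or w * h < best[0]:
            best = (w * h, theta, w, h)
    return best  # (area, angle of the best edge, width, height)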

Measuring objects in a photo taken by calibrated cameras, knowing the size of a reference object in the photo

I am writing a program that captures real-time images of a scene with two calibrated cameras (so the internal parameters of the cameras are known to us). Using two-view geometry, I can find the essential matrix and use OpenCV or MATLAB to find the relative position and orientation of one camera with respect to the other. Having the essential matrix, it is shown in Hartley and Zisserman's Multiple View Geometry that one can reconstruct the scene using triangulation, up to scale. Now I want to use a reference length to determine the scale of the reconstruction and resolve the ambiguity.
I know the height of the front wall and I want to use it for determining the scale of reconstruction to measure other objects and their dimensions or their distance from the center of my first camera. How can it be done in practice?
Thanks in advance.
Edit: To add more information, I have already done linear triangulation (minimizing the algebraic error), but I am not sure how useful it is, because there is still a scale ambiguity that I don't know how to get rid of. My ultimate goal is to recognize an object (like a Pepsi can) and separate it into a rectangular area (which is going to be written as a separate module by someone else), and then find the distance of each pixel in this rectangular area, i.e. the region of interest, to the camera. The distance from the camera to the object will then be the minimum of the distances from the camera to the 3D coordinates of the pixels in the region of interest.
Might be a bit late, but this is at least for someone struggling with the same stuff.
As far as I remember, it is actually a linear problem. You have the essential matrix, which gives you the rotation matrix and a normalized translation vector specifying the relative position of the cameras. If you followed Hartley and Zisserman, you probably chose one of the cameras as the origin of the world coordinate system, meaning all your triangulated points are at a normalized distance from this origin. What is important is that the direction of every triangulated point is correct.
If you have some reference in the scene (let's say the height of the wall), then you just have to find this reference (2 points are enough - say, opposite ends of the wall) and calculate a "normalization coefficient" (sorry for the terminology) as
coeff = realWorldDistanceOf2Points / distanceOfTriangulatedPoints
Once you have this coeff, just multiply all your triangulated points by it and you get real-world points.
Example:
You know that opposite corners of the wall are 5 m from each other. You find these corners in both images, triangulate them (let's call the triangulated points c1 and c2), calculate their distance in the "normalized" world as ||c1 - c2||, and get the
coeff = 5 / ||c1 - c2||
and you get the real 3D world points as triangulatedPoint*coeff.
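A minimal numpy sketch of this scaling step, assuming points is an (N, 3) array of triangulated (up-to-scale) points and c1, c2 are the triangulated wall corners:

import numpy as np

def rescale_reconstruction(points, c1, c2, real_distance=5.0):
    coeff = real_distance / np.linalg.norm(c1 - c2)  # normalization coefficient
    return points * coeff  # real-world coordinates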
Maybe an easier option is to have both cameras in a fixed relative position and calibrate them together with the stereoCalibrate OpenCV/MATLAB function (there is actually a pretty nice GUI in MATLAB for that) - it returns not just the intrinsic parameters but also the extrinsics. But I don't know if this is your case.
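If it is, a hedged sketch of the OpenCV route might look like the following; all argument names are placeholders, with obj_pts holding per-view pattern points in real units (e.g. chessboard corners) and img_pts1/img_pts2 the matching detections in each camera. Because the pattern's real dimensions are known, the returned translation T comes out in real units, which fixes the scale directly.

import cv2

def calibrate_stereo_pair(obj_pts, img_pts1, img_pts2,
                          K1, d1, K2, d2, image_size):
    ret, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, img_pts1, img_pts2, K1, d1, K2, d2, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)  # keep the known intrinsics fixed
    return R, T  # rotation and metric translation of camera 2 w.r.t. camera 1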

Align 2 sets of 3D points in Unity by using Singular Value Decomposition method (SVD)

I want to translate a set of reference points on a contour to a set of corresponding target points. There are 8 points in total on each contour.
In order to calculate the rotation & translation vector, I was using the Math.NET Numerics library to perform the SVD calculation - the idea came from this URL (pages 3-7):
But somehow I noticed that the transformation done using the result of the SVD calculation seems inaccurate. The result is shown below:
The transform is supposed to move the reference points as close as possible to the target points, but as highlighted, it moves far away from the target point.
In addition, I also did a simple test whereby I calculated the centroid of each contour and performed a subtraction: (TargetCentroid - RefCentroid = translation vector). The final transformation result is the same as when going through the SVD.
Did I do something wrong? Can anyone suggest a better solution to transform the reference points to the target points?
Edit:
1. Garment transformation from reference model to various target models
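For reference, here is a numpy sketch of the standard SVD-based rigid alignment (the Kabsch algorithm) that the Math.NET code is presumably implementing; it can serve as a cross-check for the Unity result. ref_pts and tgt_pts are assumed to be (N, 3) arrays of corresponding points (N = 8 here).

import numpy as np

def kabsch(ref_pts, tgt_pts):
    # Returns rotation R and translation t with tgt ~ R @ ref + t.
    ref_c = ref_pts.mean(axis=0)
    tgt_c = tgt_pts.mean(axis=0)
    H = (ref_pts - ref_c).T @ (tgt_pts - tgt_c)  # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ ref_c
    return R, t

Note that this rigid fit has no scale term, so if the two contours differ in size, a residual error like the one described is expected.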
This seems like an overcomplicated solution to the problem.
If you have the target points, you can just Lerp the given points to their corresponding target points.
Or if the target is the same mesh but at a different scale and rotation, as in the picture, you can just Lerp the transform values - scale and rotation, respectively - without the need to go over all the points individually.
Using Vector3.Lerp
Edit:
Additionally, lerping will cause all the points to reach their targets at the same time, which is, in most cases, the desired behavior.
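A tiny sketch of the idea (plain numpy here; in Unity this is Vector3.Lerp applied per point or to the transform values): with a shared parameter t, every point arrives at its target exactly at t = 1, regardless of distance.

import numpy as np

def lerp(points, targets, t):
    t = np.clip(t, 0.0, 1.0)  # clamp, as Vector3.Lerp does
    return points + (targets - points) * t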

Azimuth and Elevation from one moving object to another, relative to first object's orientation

I have two moving objects with position and orientation data (Euler angles, quaternions) relative to the ECI coordinate frame. I would like to calculate AZ/EL from what I'm guessing is the "body frame" of the first object. I have attempted to convert both objects into the body frame through rotation matrices (X-Y-Z and Z-Y-X rotation sequences) and calculate a target vector AZ/EL this way, but have not had success. I've also tried to get body-frame positions and calculate the body axis/angles and convert back to Eulers (relative to the body frame). I'm never sure how the coordinate system axes I'm supposedly creating are aligned along my object.
I have found a couple of other questions like this answered with quaternion solutions, so that may be the best path to take; however, my quaternion background is weak at best, and I fail to see how the given computations result in a target vector.
Any advice would be much appreciated and I'm happy to share my progress/experiences going forward.
get the current NEH transform matrix for the moving object
You must know the position and at least two of the directions North, East, Height (Up or Altitude) of the moving object, otherwise your problem is unsolvable no matter what. This matrix/frame is called NEH (X=North, Y=East, Z=Height) or sometimes also ENU (X=East, Y=North, Z=Up). Look here: transform matrix anatomy, and here: Earth's NEH construction, and change the position and radius to match your moving object.
convert point P0 from GCS (global coordinate system) to NEH
Simply: P1=Inverse(NEH)*P0, where P1 is now in the NEH LCS (local coordinate system). Both P0 and P1 are in homogeneous coordinates { x,y,z,w=1 } to allow multiplication with a 4x4 matrix, so you can compute azimuth and elevation directly from it:
Azimut=atanxy(P1.x,P1.y);
Elevation=atan(P1.z/sqrt((P1.x*P1.x)+(P1.y*P1.y)));
where atanxy is my atan2 (4-quadrant atan) with dx as the first argument, then dy. I think atan2 in MATLAB has them in reverse order.
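A numpy sketch of the same computation; neh is assumed to be the observer's 4x4 NEH frame matrix (axes in the columns, position in the fourth column) and p0 the target position in the global frame. Note that numpy's arctan2 takes (y, x), i.e. the reverse of atanxy above.

import numpy as np

def azimuth_elevation(neh, p0):
    p1 = np.linalg.inv(neh) @ np.append(p0, 1.0)  # to local NEH, homogeneous
    az = np.arctan2(p1[1], p1[0])                 # atanxy(x, y) == arctan2(y, x)
    el = np.arctan2(p1[2], np.hypot(p1[0], p1[1]))
    return az, el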
[Notes]
Always visually check all frames (especially NEH). Just draw the 3 axes as lines of some length to validate that the result is correct. It should look like the image, just with a different color for each axis. You can move on to the next point only if the NEH frame is OK!
Check the atan2/atanxy operand order, and also check the units of the trigonometric functions (rad, deg) to avoid confusion.

Force Calculation at a Point Within a Vector Field, and then Reacting to that Force

So, this is going to be pretty hard for me to explain, or try to detail out since I only think I know what I'm asking, but I could be asking it with bad wording, so please bear with me and ask questions if need-be.
Currently I have a 3D vector field that's being plotted which corresponds to 40 levels of wind vectors in a 3D space (obviously). These are plotted in 3D levels and then stacked on top of each other using a dummy altitude for now (we're debating how to go about pressure altitude conversion most accurately--not to worry here). The goal is to start at a point within the vector space, modeling that point as a particle that can experience physics, and iteratively go through the vector field reacting to the forces, thus creating a trajectory of sorts through the vector field.
Currently what I'm trying to do is whip up code that would allow me to start at a point within this field, calculate the forces that the particle would feel at that point, and then establish a resultant force vector that would indicate the next path of movement through the vector space.
Right now I'm stuck on the theoretical aspects of the code, as I'm trying to think through how the particle would feel vectors at a distance.
Any suggestions on ways to attack this problem within MatLab or relevant equations to use?
In order to run my code, you'll need read_grib.r4 and to compile that mex file; here is a link to a zip with the code and the required files:
https://www.dropbox.com/s/uodvixdff764frq/WindSim_StackOverflow_Files.zip
I would try to interpolate the wind vector from the adjacent ones. You seem to have a regular grid, so that should be no problem. (You can use interp3 for this.)
Afterwards, you can use any differential-equation solver for your problem, as you basically have a field of gradients and an initial value. Forward Euler would be the simplest one, but it needs a small step size. (N.B.: your field should be a gradient field.)
You may read about this in Wikipedia: http://en.wikipedia.org/wiki/Vector_field#Flow_curves
In response to comment #1:
Yes. In a regular grid, any (arbitrarily chosen) point will have eight neighbors. interp3 will do a trilinear interpolation to determine an interpolated gradient vector.
If you use forward Euler, you will then move a small distance in that direction. There you interpolate a gradient again, take a small step in this new direction, and so on (see the sketch after this list). What happens are two things:
You get a series of points that lie on a streamline and thus form the trajectory of a particle moving along the field.
You get large errors the further you move and the larger the step size is. Use a small step size or a better solver (Runge-Kutta comes to mind).
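A minimal scipy/numpy sketch of this loop (the MATLAB equivalent would call interp3 inside the loop); u, v, w are assumed to be 3D arrays of wind components on a regular grid with axes xs, ys, zs.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

def trace_trajectory(xs, ys, zs, u, v, w, p0, dt=1.0, n_steps=1000):
    # Trilinear interpolator over the vector field; zero wind outside the grid.
    interp = RegularGridInterpolator(
        (xs, ys, zs), np.stack([u, v, w], axis=-1),
        bounds_error=False, fill_value=0.0)
    path = [np.asarray(p0, dtype=float)]
    for _ in range(n_steps):
        wind = interp(path[-1][np.newaxis, :])[0]  # wind at the current point
        path.append(path[-1] + wind * dt)          # one forward-Euler step
    return np.array(path)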
If all you want is plotting, then the streamline function might help.