Units of ipod/iphone gyroscope data? - iphone

These things are not clear to me:
1. What are the units of the data reported by CMGyroData (the x, y, and z values)?
2. What are the minimum and maximum values for one axis (for example, the x axis)?
3. Does the x value represent rotation (or swing) around the x axis?

The place to look is the documentation: http://developer.apple.com/library/ios/#documentation/CoreMotion/Reference/CMGyroData_Class/Reference/Reference.html.
x
The X-axis rotation rate in radians per second. The sign follows the
right hand rule: If the right hand is wrapped around the X axis such
that the tip of the thumb points toward positive X, a positive
rotation is one toward the tips of the other four fingers.
etc.
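So the rates are radians per second: a reading of 2π rad/s means one full revolution per second. A minimal Python sketch of what the units imply (illustration only; CMGyroData itself is an Objective-C Core Motion class):

```python
import math

def rad_per_sec_to_deg_per_sec(rate):
    """Convert a rotation rate in rad/s (the CMGyroData unit) to deg/s."""
    return math.degrees(rate)

def integrate_rotation(rates, dt):
    """Naively integrate successive rotation-rate samples (rad/s),
    taken dt seconds apart, into a total rotation angle in radians."""
    return sum(r * dt for r in rates)

# pi rad/s is 180 deg/s; ten 0.01 s samples at pi rad/s integrate
# to roughly 0.1 * pi radians of total rotation.
```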

Related

Is it possible to use OpenPose output of x and y coordinates to detect rotating movement?

I did some research on OpenPose, and its output is x and y coordinates with a confidence score. The x and y coordinates are good for detecting up, down, left, and right movements. I was wondering whether it is possible to detect turning movements, which usually happen around the z axis. Is there a way to tell that a body part has rotated 180 degrees using x and y coordinates alone?
I had a few ideas, like calculating the slope of the hand line. The slope tells us whether the hand is tilted. If the slope is very high or very low, the hand has rotated; the same concept applies to the other body parts. But I don't think that will work in all cases. Please check the 2nd figure here https://cmu-perceptual-computing-lab.github.io/openpose/web/html/doc/md_doc_02_output.html to understand the output of OpenPose.
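The slope idea above can be made more robust with atan2, which handles vertical segments and preserves the quadrant. A hedged Python sketch (the keypoint pairs and the 30-degree tolerance are illustrative assumptions, not part of OpenPose; this only detects in-plane rotation, not turning toward the camera):

```python
import math

def segment_angle_deg(p1, p2):
    """Angle of the segment from keypoint p1 to keypoint p2, in degrees,
    measured from the +x axis. atan2 avoids the undefined slope of a
    vertical segment."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def has_flipped(p1, p2, q1, q2, tol_deg=30.0):
    """Rough check: has the segment rotated by ~180 degrees between two
    frames? The change in angle is wrapped into [-180, 180) first."""
    delta = segment_angle_deg(q1, q2) - segment_angle_deg(p1, p2)
    delta = (delta + 180.0) % 360.0 - 180.0
    return abs(abs(delta) - 180.0) < tol_deg
```

A rotation about the body's own vertical axis mostly shows up as foreshortening (the segment getting shorter), which x/y coordinates alone capture only indirectly.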

MATLAB flip definition of angles (or alternative angular metric)

I am doing work on symmetric images where I would like to define a symmetric (polar) coordinate space. Basically for the left image, I want 0 degrees to be defined along the right horizontal axis (as is the default). However, for the right image, I want 0 degrees to be defined along the left horizontal axis.
I know a phase shift of pi would do the trick. However, for comparison purposes, I am trying to keep the range of angles the same, [-pi, pi).
In the above color plot of the rotations in an object, note that they are both defined in the same direction. Ideally I'd like to see the colors of the right object flipped across its vertical axis.
I should note that these angles are calculated by taking the arctan(y/x) of the perimeter coordinates when measured from the centroid. Is there a different trig function that may result in the proper symmetry? I couldn't seem to come up with one while still claiming it was representative of direction.
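One trig-level option: negate x before taking the arctangent. Reflecting the coordinate across the vertical axis puts 0 along the left horizontal axis without a phase shift. A small Python sketch (the same works with MATLAB's atan2; note atan2 returns values in (-pi, pi], so one endpoint differs from the requested [-pi, pi)):

```python
import math

def angle_left(x, y, cx, cy):
    """Standard polar angle from the centroid (cx, cy): 0 along the
    +x (right) axis, increasing counterclockwise, range (-pi, pi]."""
    return math.atan2(y - cy, x - cx)

def angle_right(x, y, cx, cy):
    """Mirrored angle for the right image: negating x reflects across
    the vertical axis, so 0 now lies along the -x (left) axis while
    the vertical axis keeps the same angle."""
    return math.atan2(y - cy, -(x - cx))
```

Using atan2 (rather than arctan(y/x)) also keeps the correct quadrant for perimeter points left of the centroid.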

Onscreen angle of 3D vector

My math is too rusty to figure this out. I want to derive the onscreen angle (the angle as seen on the 2d screen) of a 3d vector.
Given the x and y rotation of a vector (the z rotation is zero and doesn't matter), what does the angle on screen look like?
We know that when y is zero and x is positive, the angle is 90. When y is zero and x is negative, the angle is -90. When y is 90, for any value of x, the angle is 180. When y is -90, for any value of x, the angle is 0.
So what is the formula here, so I can derive the angle for the other values of x and y rotation?
The problem, as stated, doesn't quite make sense. If you're holding the z rotation at zero, you've already reduced a 3D problem to 2D. Also, it seems the angle you're measuring is from the y-axis, which is fine but will change the final formula; normally the angle is measured from the x-axis, and trigonometric functions assume that. Finally, if you're using Cartesian coordinates, holding y constant will not keep the angle constant (and given the behavior you described for x, the angle would lie in the range -90 to 90, exclusive of the endpoints).
The arctangent function mentioned above assumes an angle measured from the x-axis.
The angle can be calculated using the inverse tangent of the y/x ratio. In Unity3D's coordinate system (left-handed) you can get the angle with:
angle = Mathf.Rad2Deg * Mathf.Atan2(y, x); // Atan2 handles x == 0 and preserves the quadrant, unlike Atan(y/x)
Your question is really asking what a 3D vector will look like when projected onto the 2D screen.
(Edited after the post added perspective info.)
If you are viewing it isometrically (orthographically) along the z-axis, the z value of the vector does not matter.
(Assuming a starting point of (0, 0, 0):)
(1, 1, 2) looks the same as (1, 1, 3); in general, (x, y, z1) looks the same as (x, y, z2) for any values of z1 and z2.
You could create the illusion that something is coming "out of the page" by drawing higher values of z bigger. That would not change the angle, but it would give a visual hint of the z value.
Lastly, you can use Dinal24's method: apply the same technique twice, once for x/y, and then again with z.
This page may be helpful: http://www.mathopenref.com/trigprobslantangle.html
Rather than code this up yourself, try to find a library that already does it, like https://processing.org/reference/PVector.html
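The orthographic argument above can be sketched directly: drop z, then measure the angle from the +y axis so that +x maps to 90 and -x to -90, matching the question's examples. A minimal Python sketch under that orthographic-view assumption:

```python
import math

def onscreen_angle_deg(x, y, z):
    """Angle of a 3D vector as seen on screen under an orthographic
    view down the z axis: z is simply dropped, and the angle is
    measured from the +y axis (so +x gives 90, -x gives -90)."""
    return math.degrees(math.atan2(x, y))
```

Swapping the atan2 arguments (x first, y second) is what moves the reference axis from +x to +y.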

OpenNI range of returned coordinates

I am using the HandsGenerator class of OpenNI, and I want to use it to track the users' movements.
I've registered my own callback for getting the updated position of the hand, and everything works fine, except I can't find information about the coordinate system etc. of the returned XnPoint3D. Is there a spec somewhere that precisely specifies the X,Y,Z ranges, and perhaps scaling information (so that I would know that say a change of 100 in the XnPoint3D's X corresponds to a movement of 10 centimeters, or something).
The HandsGenerator returns real world coordinates in millimeters from the sensor. This means that depth points that are right in the middle of the depthmap will have an X and Y of 0.
A change of 100 (in X, Y, or Z) is indeed a change of 10 centimeters (100mm = 10cm).
The range of the X and Y values depends on the Z value of the hand point. Assuming you have a hand point at the top left of the depthmap (or (0, 0) in projective coordinates), the possible X and Y values depend on how far away the hand is: the closer the hand, the smaller X and Y. To get the maximum range your hand positions can cover, choose an arbitrary max Z value and then find the X and Y values of the corners of the depthmap at that distance. In other words, convert the projective coordinates (0, 0, maxZ) and (DepthmapWidth, DepthmapHeight, maxZ) to real-world coordinates. All hand points with a Z value less than maxZ will fall between those two real-world coordinates.
Note that you can convert projective coordinates to real world using DepthGenerator::ConvertProjectiveToRealWorld.
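The corner-conversion recipe above can be sketched with an equivalent pinhole model. This is an illustration only: the FOV constants below are placeholder assumptions, not values from the OpenNI spec, and in real code you would call ConvertProjectiveToRealWorld itself.

```python
import math

# Hypothetical sensor parameters -- substitute what your device reports.
H_FOV_DEG, V_FOV_DEG = 58.0, 45.0
WIDTH, HEIGHT = 640, 480

def projective_to_real_world(u, v, z_mm):
    """Pinhole-model sketch of what ConvertProjectiveToRealWorld does:
    map a depthmap pixel (u, v) at depth z_mm to millimetres, with
    (0, 0) at the image centre (matching the answer above)."""
    fx = (WIDTH / 2.0) / math.tan(math.radians(H_FOV_DEG) / 2.0)
    fy = (HEIGHT / 2.0) / math.tan(math.radians(V_FOV_DEG) / 2.0)
    x_mm = (u - WIDTH / 2.0) * z_mm / fx
    y_mm = (HEIGHT / 2.0 - v) * z_mm / fy  # image v grows downward
    return x_mm, y_mm, z_mm

# The max X/Y range at a chosen max depth is just the two corners:
# projective_to_real_world(0, 0, max_z) and
# projective_to_real_world(WIDTH, HEIGHT, max_z).
```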

Kinect - Calculating Surface Area

I'd like to be able to calculate the surface area of objects seen by the depth camera. Is there an easy way to do this? For example, if the Kinect sees a player, I need to calculate how much surface area the player covers.
If no such function exists, I can calculate it by creating multiple squares with coordinates (x, y), (x+1, y), (x, y+1), (x+1, y+1), taking the z value into consideration. But I'm not sure how to get the distance in mm or cm between adjacent pixels along the x or y axis.
Thanks
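Under a pinhole model, the metric footprint of one depth pixel grows linearly with depth, which answers the mm-per-pixel question. A hedged Python sketch of the squares idea (the FOV constants are illustrative placeholders for your sensor's calibration, and summing flat per-pixel patches ignores surface slant, so this is a lower-bound estimate):

```python
import math

# Hypothetical sensor parameters -- substitute your device's calibration.
H_FOV_DEG, V_FOV_DEG = 58.0, 45.0
WIDTH, HEIGHT = 640, 480

def pixel_size_mm(z_mm):
    """Approximate width and height, in mm, covered by one depth
    pixel at depth z_mm (pinhole model)."""
    fx = (WIDTH / 2.0) / math.tan(math.radians(H_FOV_DEG) / 2.0)
    fy = (HEIGHT / 2.0) / math.tan(math.radians(V_FOV_DEG) / 2.0)
    return z_mm / fx, z_mm / fy

def surface_area_cm2(depth_mm):
    """Sum per-pixel patch areas over a 2D depth map in mm
    (0 = no data). Slant is ignored, so this underestimates."""
    area_mm2 = 0.0
    for row in depth_mm:
        for z in row:
            if z > 0:
                w, h = pixel_size_mm(z)
                area_mm2 += w * h
    return area_mm2 / 100.0  # 100 mm^2 per cm^2
```

In practice you would feed in only the pixels belonging to the player (e.g. from the user/player segmentation mask).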