OpenNI range of returned coordinates

I am using the HandsGenerator class of OpenNI, and I want to use it to track the users' movements.
I've registered my own callback for getting the updated position of the hand, and everything works fine, except I can't find information about the coordinate system etc. of the returned XnPoint3D. Is there a spec somewhere that precisely specifies the X, Y, Z ranges, and perhaps scaling information (so that I would know that, say, a change of 100 in the XnPoint3D's X corresponds to a movement of 10 centimeters)?

The HandsGenerator returns real world coordinates in millimeters from the sensor. This means that depth points that are right in the middle of the depthmap will have an X and Y of 0.
A change of 100 (in X, Y, or Z) is indeed a change of 10 centimeters (100mm = 10cm).
The range of the X and Y values depends on the Z value of the hand point. Assuming you have a hand point at the top left of the depth map (or (0,0) in projective coordinates), the possible X and Y values depend on how far away the hand is: the closer the hand, the smaller the X and Y. To get the maximum range your hand positions can cover, choose an arbitrary maximum Z value and then find the X and Y values of the corners of the depth map at that distance. In other words, convert the projective coordinates (0, 0, maxZ) and (DepthmapWidth, DepthmapHeight, maxZ) to real world coordinates. All hand points with a Z value less than maxZ will fall between those two real world coordinates.
Note that you can convert projective coordinates to real world using DepthGenerator::ConvertProjectiveToRealWorld.
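For example, a minimal sketch of the corner conversion described above (assuming an initialized xn::DepthGenerator and a 640x480 depth map; adjust to your actual output mode):

#include <cstdio>
#include <XnCppWrapper.h>

void PrintHandRange(xn::DepthGenerator& depth, XnFloat maxZ /* mm */)
{
    // Two opposite corners of the depth map at the chosen cutoff distance.
    XnPoint3D proj[2];
    proj[0].X = 0;   proj[0].Y = 0;   proj[0].Z = maxZ; // top-left
    proj[1].X = 640; proj[1].Y = 480; proj[1].Z = maxZ; // bottom-right

    XnPoint3D real[2];
    depth.ConvertProjectiveToRealWorld(2, proj, real);

    // Every hand point closer than maxZ falls between these two corners.
    printf("corner 1: X=%.0f Y=%.0f (mm)\n", real[0].X, real[0].Y);
    printf("corner 2: X=%.0f Y=%.0f (mm)\n", real[1].X, real[1].Y);
}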

Related

Best way to find the top-most-left coordinate point in a set of Cartesian coordinates?

I have a set of cartesian coordinates. What is the best way to find the top most left coordinate?
My approach is to find the max Y and then the lowest corresponding X coordinate (since there could be multiple points with the same max Y). This works fine, but I wonder if there are other cute methods?
If your x and y coordinates are not sorted, then you have to check all the input coordinates: first the y values (for topmost, the higher the better), and then, among the points sharing that maximal y, look for the minimal x. A single pass works, as in the sketch below.
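A single-pass sketch of this (in C++, with a hypothetical Point struct):

#include <vector>

struct Point { double x, y; };

// Keep the highest y; among ties, the smallest x.
Point topMostLeft(const std::vector<Point>& pts)
{
    Point best = pts.front();   // assumes pts is non-empty
    for (const Point& p : pts)
        if (p.y > best.y || (p.y == best.y && p.x < best.x))
            best = p;
    return best;
}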

Onscreen angle of 3D vector

My math is too rusty to figure this out. I want to derive the onscreen angle (the angle as seen on the 2d screen) of a 3d vector.
Given the x and y rotation of a vector (z rotation is zero and doesn't matter), what does the angle on screen look like?
We know when y is zero and x is positive, the angle is 90. When y is zero and x is negative the angle is -90. When y is 90, for any value of x, the angle is 180. When y is -90, for any value of x, the angle is 0.
So what is the formula here, so I can derive the angle for the other values of x and y rotation?
The problem, as stated, doesn't make sense. If you're holding the z rotation to zero, you've converted a 3D problem to 2D already. Also, it seems the angle you're measuring is from the y-axis, which is fine but will change the ultimate formula. Normally the angle is measured from the x-axis, and trigonometric functions will assume that. Finally, if using Cartesian coordinates, holding y constant will not keep the angle constant (and from the system you described for x, the angle would be in the range from -90 to 90, but exclusive of the endpoints).
The arctangent function mentioned above assumes an angle measured from the x-axis.
The angle can be calculated using the inverse tangent of the y/x ratio. In Unity's coordinate system (left-handed) you can get the angle with:
angle = Mathf.Rad2Deg * Mathf.Atan2(y, x);
(Mathf.Atan2 handles x = 0 and returns the correct quadrant; Mathf.Atan(y/x) alone would not.)
Your question is really about what a 3-D vector looks like when projected onto the 2-D screen.
(Edit after the post added perspective info.)
If you are looking at it isometrically from the z-axis, it would not matter what the z value of the vector is.
(Assuming a starting point of 0,0,0)
1,1,2 looks the same as 1,1,3.
Any (x, y, z1) looks the same as (x, y, z2), for any values of z1 and z2.
You could create the illusion that something is coming "out of the page" by drawing higher values of z bigger. It would not change the angle, but it would be a visual hint of the z value.
Lastly, you can use Dinal24's method. You would apply the same technique twice, once for x/y, and then again with the z.
This page may be helpful: http://www.mathopenref.com/trigprobslantangle.html
Rather than code this up yourself, try to find a library that already does it, like https://processing.org/reference/PVector.html
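To make the isometric case concrete, here is a minimal sketch (C++; it assumes the on-screen angle is measured from the +x axis, as atan2 does, rather than from the y-axis as in the question):

#include <cmath>

// Orthographic view down the z-axis: the z component is simply dropped,
// so (1,1,2) and (1,1,3) map to the same on-screen direction.
double onScreenAngleDeg(double x, double y, double /*z*/)
{
    return std::atan2(y, x) * 180.0 / M_PI; // degrees from the +x axis
}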

Units of ipod/iphone gyroscope data?

A few things are not clear to me:
1. What are the units of the x, y, and z values given by CMGyroData?
2. What are the minimum and maximum values for one axis (e.g. the x axis)?
3. Does the x value represent the rotation (or swing) around the x axis?
The place to look is the documentation: http://developer.apple.com/library/ios/#documentation/CoreMotion/Reference/CMGyroData_Class/Reference/Reference.html.
x
The X-axis rotation rate in radians per second. The sign follows the
right hand rule: If the right hand is wrapped around the X axis such
that the tip of the thumb points toward positive X, a positive
rotation is one toward the tips of the other four fingers.
etc.

How to get real world coordinates (x, y, z) from a distinct object using a Kinect

I have to get the real world coordinates (x, y, z) using Kinect. Actually, I want the x, y, z distance (in meters) from Kinect.
I have to get these coordinates from a unique object (e.g. a little yellow box) in the scenario, colored in a distinct color.
Here you can see an example of the scenario
I want the distance (x, y, z in meters) of the yellow object in the shelf.
Note that a person (skeleton) is not required in the scenario.
First of all, I would like to know whether this is possible and simple to do.
I would appreciate it if you could send some links/code that could help me with this task.
You would need to use both the Color Stream and the Depth Stream.
First, using the Color Stream you would need to collect an array of pixels that match the color you are looking for, and then look up the depth data from the Depth Stream for those pixels to get an average distance from the camera. That gives you the Z.
To get the X and Y you would use the math from this answer.
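A rough sketch of that math (assuming the Kinect v1 depth camera's nominal field of view of roughly 57 degrees by 43 degrees and a 640x480 depth image; check your sensor's actual specs):

#include <cmath>

// Convert a depth pixel (px, py) with depth z (in meters) to real-world
// X/Y offsets from the camera axis, also in meters.
void pixelToWorld(int px, int py, double z, double& x, double& y)
{
    const double halfFovX = 57.0 / 2.0 * M_PI / 180.0;
    const double halfFovY = 43.0 / 2.0 * M_PI / 180.0;
    const int width = 640, height = 480;

    // Normalized offsets from the image center, in [-1, 1].
    double nx = (px - width / 2.0) / (width / 2.0);
    double ny = (height / 2.0 - py) / (height / 2.0); // image y grows downward

    x = nx * z * std::tan(halfFovX);
    y = ny * z * std::tan(halfFovY);
}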
The Z distance (from the object to the Kinect) you get from the Position.Z of a specific Joint, so there is no problem getting it.
For X and Y, it depends whether you want the distance from joint to joint or from joint to the Kinect. You can calculate it using the math above: you need the Kinect's angle of view and the distance from it.

biot savart matlab (couldn't find a matlab forum)

I want to calculate the magnetic field from a given image using the Biot-Savart law. For example, if I have a picture of a triangle, I say that this triangle forms a closed wire carrying current. Using image derivatives I can get the coordinates and direction of the current (normals included). I am struggling to implement this and need a bit of help with the logic too. Here is what I have:
Img = imread('littletriangle.bmp');
Img = Img(:,:,1);            % keep a single channel
Img = double(Img > 0);       % binarize: wire pixels = 1, background = 0
[rows, cols] = size(Img);    % note: size returns [rows, cols], not [x, y]
[Ix, Iy] = gradient(Img);    % nonzero only along the wire outline
The Biot-Savart equation is:
B = mu0/(4*pi) * sum( I * dl x r_hat / r^2 )
where mu0/(4*pi) is a constant, I is the current magnitude, dl is the direction of the current element, r_hat is the unit vector from the current element to the field point, and r^2 is the squared magnitude of the displacement between them.
So just to start off, I read the image in, convert it to binary, and then take the image gradient. This gives me the location and orientation of the 'current'. I now need to calculate the magnetic field from this 'current' at every pixel in the image. I am only interested in the magnetic field in the x-y plane. Anything just to start me off would be brilliant!
For an infinite straight wire:
B = mu0 * I / (2*pi*r)
B is a vector; its direction is perpendicular to the line between the wire and the point of interest. The fastest way to rotate a 2-D vector by 90 degrees is to swap its components and negate one, so (x, y) becomes (-y, x).
What about the current? If the current is homogeneous inside the wire (which can be your triangle), then the I above is just the total current normalized per wire point.
So how to do this?
Get the current per pixel (total current / number of wire pixels in the shape).
For each point, calculate B as the sum of the contributions of all the mini wires (the wire pixels), using the equation above with r obtained via Pythagoras. B is a vector and has a direction, so keep track of it as (x, y) components.
A 100*100 picture will yield (100*100)*(100*100) evaluations of the B equation, or somewhat fewer if you do not calculate the field from empty space.
At the end, B is no longer just mu0 * I / (2*pi*r) but a sum over all the wire elements, and I becomes dI.
You do not need to apply any derivatives, just integration (a sum). A sketch of this double loop follows.
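A minimal sketch of that per-pixel sum (written in C++ for concreteness; the same double loop translates directly to MATLAB). It uses the dl x r_hat / r^2 form from the question; the wire elements and their unit current directions would come from the gradient step, and dI is the current per pixel:

#include <cmath>
#include <vector>

struct Elem { double x, y, dlx, dly; }; // wire pixel + unit current direction

// dl and r both lie in the x-y plane, so dl x r_hat points along z:
// in the plane, the field has only a z component.
double fieldAt(double px, double py, const std::vector<Elem>& wire, double dI)
{
    const double mu0Over4Pi = 1e-7; // SI units
    double bz = 0.0;
    for (const Elem& e : wire) {
        double rx = px - e.x, ry = py - e.y;
        double r2 = rx * rx + ry * ry;
        if (r2 == 0.0) continue;            // skip the element's own pixel
        double r = std::sqrt(r2);
        // z component of (dl x r_hat) / r^2:
        bz += dI * (e.dlx * ry - e.dly * rx) / (r * r2);
    }
    return mu0Over4Pi * bz;
}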