Hi everyone.
I'm trying to record a person's movement frame to frame using the Microsoft Kinect API. For that I'm saving all the joints' positions, and I would also like to get the direction vector of each joint's motion. I've seen that the API has something about joint orientation with quaternions, but I don't know how to use it to get the direction. Or should I simply calculate the direction from the coordinates?
Thanks
Thanks to the answer from Carmine Si - MSFT (Microsoft):
"To determine the direction a joint is travelling, then you should just calculate the vector based on the point locations from frame to frame. Typically the other values are for mapping your skeletion from different coordiante spaces so you can do things like the Avateering sample."
Related
I am working on taking two HoloLens 2 users' gaze data and comparing it to verify they are tracking the same hologram's trajectory. I have access to the GazeProvider data, no issues there. However, the GazeProvider.GazeDirection data throws me off. For instance, I've referenced the API at:
GazeDirection API Data
But I don't really understand what the Vector3 it returns means. Are the X, Y, Z relative motion? If not, can I use Vector3.Angle to compute relative motion vectors between two points?
The vector returned by the GazeDirection property uses three coordinate parameters to point in the direction the user's eyes are looking. The origin is located between the user's eyes. The Vector3.Angle method you mentioned can help you compute the angle between the two eye gaze directions.
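For example, a quick sketch of that angle computation in plain Python/NumPy rather than Unity's Vector3.Angle (the direction values are made up):

```python
import numpy as np

def angle_between(dir_a, dir_b):
    """Angle in degrees between two gaze direction vectors."""
    a = np.asarray(dir_a, dtype=float)
    b = np.asarray(dir_b, dtype=float)
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# hypothetical gaze directions reported for two users
user1_dir = [0.05, -0.10, 0.99]
user2_dir = [0.02, -0.12, 0.99]
print(angle_between(user1_dir, user2_dir))   # small angle -> similar gaze direction
```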
I have just started to dig into gaze from a different scenario, but one suggestion I would make is that you also take a look at the gaze origin API.
Each user occupies a different location in space and is gazing into the world in a "gaze direction" from their own location, which would be their "gaze origin".
Basically you need to reconcile the different spatial coordinate systems.
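To make that concrete: once both users' gaze origins and gaze directions are expressed in the same shared frame (however you choose to align them), you can check whether the two gaze rays come close to a common point. A rough Python/NumPy sketch under that assumption (the sample origins and directions are made up):

```python
import numpy as np

def closest_points_between_rays(p1, d1, p2, d2):
    """Closest points on two rays p1 + t*d1 and p2 + s*d2."""
    p1, d1, p2, d2 = (np.asarray(v, dtype=float) for v in (p1, d1, p2, d2))
    r = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ r, d2 @ r
    denom = a * c - b * b
    if abs(denom) < 1e-9:                 # rays are (nearly) parallel
        t, s = 0.0, e / c
    else:
        t = (b * e - c * d) / denom
        s = (a * e - b * d) / denom
    return p1 + t * d1, p2 + s * d2

# hypothetical gaze origins/directions already expressed in one shared frame
q1, q2 = closest_points_between_rays([0, 1.6, 0], [0.1, -0.05, 1.0],
                                     [1, 1.5, 0], [-0.3, -0.05, 1.0])
print(np.linalg.norm(q1 - q2))            # small distance -> likely looking at the same spot
```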
I was visualising my wind velocity using glyphs, together with cloud water content. However, I noticed that the direction in which the clouds move does not match the direction the glyphs are pointing.
Below are the steps I used to create the output:
The data is a NetCDF file with the wind variable arrays "ua" (eastward_wind_speed), "va" (northward_wind_speed), and "wa" (wind_vertical_velocity).
I used a Cell Data to Point Data filter to convert them into point data.
Then I combined these three arrays using a ParaView Calculator with the expression iHat*ua + jHat*va + kHat*wa.
Then I applied a Glyph filter to visualise the wind velocity (a rough pvpython sketch of this pipeline follows below).
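For reference, here is a rough sketch of those steps in ParaView's Python interface (pvpython). The file name is made up, and the exact reader/filter property names can vary between ParaView versions, so treat this as an outline rather than a drop-in script:

```python
from paraview.simple import *

reader = OpenDataFile('wind.nc')          # hypothetical NetCDF file with ua, va, wa

# 1. convert cell data to point data
points = CellDatatoPointData(Input=reader)

# 2. combine the three components into a single vector array
calc = Calculator(Input=points)
calc.Function = 'iHat*ua + jHat*va + kHat*wa'
calc.ResultArrayName = 'velocity'

# 3. glyph the resulting vector field
glyph = Glyph(Input=calc, GlyphType='Arrow')
glyph.OrientationArray = ['POINTS', 'velocity']   # older versions use 'Vectors' instead
glyph.ScaleArray = ['POINTS', 'velocity']

Show(glyph)
Render()
```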
The problem is that the clouds are moving to the left (east), which does not match where the glyphs are pointing (south).
What would be the possible reason for this error?
TIA
Update:
For anyone who might have the same problem:
I just solved it, and the glyphs make much more sense now.
Switched off the spherical coordinates option.
Applied a Transform filter to scale down the vertical component.
Then applied the Contour filter and Glyph filter as usual (sketched below).
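A hedged sketch of that fix in pvpython: the spherical-coordinates property name, the scale factor, and the cloud-water array name are all assumptions, and property names differ between ParaView versions.

```python
from paraview.simple import *

# same start as the earlier sketch, but with spherical coordinates switched off
# (the property name on the NetCDF reader may differ between versions)
reader = OpenDataFile('wind.nc')
reader.SphericalCoordinates = 0

points = CellDatatoPointData(Input=reader)
calc = Calculator(Input=points)
calc.Function = 'iHat*ua + jHat*va + kHat*wa'
calc.ResultArrayName = 'velocity'

# scale down the vertical axis with a Transform filter (factor is an assumption)
transform = Transform(Input=calc)
transform.Transform.Scale = [1.0, 1.0, 0.01]

# contour the (hypothetical) cloud-water array and glyph the velocity as usual
contour = Contour(Input=transform)
contour.ContourBy = ['POINTS', 'cloud_water']
contour.Isosurfaces = [1e-5]

glyph = Glyph(Input=transform, GlyphType='Arrow')
glyph.OrientationArray = ['POINTS', 'velocity']
glyph.ScaleArray = ['POINTS', 'velocity']

Show(contour)
Show(glyph)
Render()
```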
There are two things to consider.
Some weather agencies use a convention in which the wind direction is the direction from which the wind is blowing. However, other agencies report the direction to which it is blowing.
Probably you are not using the wind direction at the same height as the clouds.
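To make the convention point concrete, here is a small sketch that converts a wind reported as "blowing from" a compass direction into eastward/northward components (standard meteorological convention; the numbers are made up):

```python
import math

def wind_components(speed, direction_from_deg):
    """u (eastward) and v (northward) components for a wind reported as
    'blowing FROM direction_from_deg' (meteorological convention)."""
    rad = math.radians(direction_from_deg)
    u = -speed * math.sin(rad)
    v = -speed * math.cos(rad)
    return u, v

# a 10 m/s wind reported as coming FROM the west (270 deg) blows TOWARD the east
print(wind_components(10.0, 270.0))   # approximately (10.0, 0.0)
```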
I have the x, y coordinates of a feature from a single photograph. I know the camera parameters. How can I get the 3D coordinates of that feature (in MATLAB)? Please help me.
Take a look at this: http://www.cim.mcgill.ca/~langer/558/4-cameramodel.pdf
Systems like this that I've seen before require that you know where the camera is (latitude and longitude) and in which direction (azimuth and elevation) the camera is pointing, together with the field of view. Then you project this onto the geodata of the environment, and from there you can do all kinds of things, like finding the 3D position of an object based on its location in the photograph.
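As a rough illustration of that idea (in Python/NumPy here; the question mentions MATLAB, but the math is identical): back-project the pixel through assumed camera intrinsics and pose to get a ray, then intersect the ray with known geometry, here a flat ground plane, since a single image only gives you a ray, not a depth. All parameter values below are hypothetical:

```python
import numpy as np

def pixel_to_ground_point(x, y, fx, fy, cx, cy, R_wc, cam_center, ground_z=0.0):
    """Back-project pixel (x, y) to a ray and intersect it with the plane z = ground_z.
    R_wc rotates camera-frame directions into the world frame; cam_center is the
    camera position in world coordinates."""
    ray_cam = np.array([(x - cx) / fx, (y - cy) / fy, 1.0])
    ray_world = R_wc @ ray_cam
    t = (ground_z - cam_center[2]) / ray_world[2]   # ray parameter where it hits the plane
    if t <= 0:
        raise ValueError("Ray does not hit the ground in front of the camera")
    return cam_center + t * ray_world

# hypothetical camera: looking straight down from 10 m above the origin
R_wc = np.array([[1.0,  0.0,  0.0],
                 [0.0, -1.0,  0.0],
                 [0.0,  0.0, -1.0]])        # camera z-axis points toward world -z (down)
cam_center = np.array([0.0, 0.0, 10.0])
print(pixel_to_ground_point(700, 400, fx=1000, fy=1000, cx=640, cy=360,
                            R_wc=R_wc, cam_center=cam_center))
```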
I am creating a 2D side-scroller game. I have a point in space (where the mouse is) and I need the weapon to look at and "follow" that point.
Does anyone know where to begin?
wikihow: How to Find the Angle Between Two Vectors
After you have the angle, you can appropriately rotate the thing to be rotated.
JavaScript also has an atan2 function (Math.atan2(y, x)), which can be used to get the angle.
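A minimal sketch of that aiming math (written in Python for illustration; in the game you would feed the resulting angle into whatever rotation call your engine or canvas API provides):

```python
import math

def aim_angle(weapon_x, weapon_y, mouse_x, mouse_y):
    """Angle in radians from the weapon to the mouse, measured from the +x axis."""
    # note: if your screen's y axis points down, the sign convention flips
    return math.atan2(mouse_y - weapon_y, mouse_x - weapon_x)

# hypothetical positions: weapon at (100, 200), mouse at (180, 120)
angle = aim_angle(100, 200, 180, 120)
print(math.degrees(angle))   # rotate the weapon sprite by this angle each frame
```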
I have an XYZ accelerometer and magnetometer. Now I want to determine the orientation of the device using both. The problem I see is that, depending on the device orientation, I'd need to use the sensors in a different order.
Let me give an example. If I have the device facing me then changes in both the roll and pitch can be determined with the accelerometer. For yaw I use the magnetometer.
But if I put the device horizontally (i.e. turn it 90º so it faces the ceiling), then any change of the up vector (now horizontal) isn't noticed, as the accelerometer doesn't detect any change. This can now be detected with the magnetometer.
So the question is: how do I determine when to use one or the other? Are the two sensors enough, or do I need something else?
Thanks
The key is to use the cross product of the two vectors, gravity and the magnetometer reading. The cross product gives a new vector perpendicular to them both. That means it is horizontal (perpendicular to down) and 90 degrees away from north. Now you have three vectors that define orientation, but it is a little ugly because they are not all mutually perpendicular. That is easy to fix: if you cross this new vector with the gravity vector, you get a third vector perpendicular to both the gravity vector and the gravity-magnetometer cross product. Now you have three mutually perpendicular vectors that define your 3D orientation coordinate system. The original accelerometer (gravity) vector defines Z (up/down), and the two cross-product vectors define the east/west and north/south components of the orientation.
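Here is a small NumPy sketch of that construction (the axis ordering and signs depend on your sensor conventions, so treat the specifics as assumptions; the readings are made up):

```python
import numpy as np

def orientation_basis(accel, mag):
    """Build three mutually perpendicular axes (east, north, up) from an
    accelerometer reading (gravity) and a magnetometer reading."""
    up = np.asarray(accel, dtype=float)
    up /= np.linalg.norm(up)          # at rest the accelerometer reading points "up"

    east = np.cross(mag, up)          # perpendicular to both: horizontal, 90 deg from north
    east /= np.linalg.norm(east)

    north = np.cross(up, east)        # cross back to get the third perpendicular axis
    return east, north, up

# hypothetical readings: device roughly level, magnetic field dipping downward
east, north, up = orientation_basis(accel=[0.0, 0.0, 9.8],
                                    mag=[0.0, 20.0, -40.0])
print(east, north, up)
```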
Here is some documentation that walks through this process. As is clear from other answers, the math can be tricky.
http://www.freescale.com/files/sensors/doc/app_note/AN4248.pdf
I think the question "how to determine when to use one or the other" is misguided. You should always use both sensors for orientation. There are cases where one of them is useless. However, these are edge cases.
If I understand you correctly, you'll need something to detect pitch (tilting) and orientation according to the cardinal points (North, East, South and West).
The pitch can be read from the accelerometer.
The orientation according to the cardinal points can be read from a compass.
Combining the output from these two sensors correctly, with the right math in your software, will most likely give you the absolute orientation (a rough sketch follows below).
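For what it's worth, here is a hedged sketch of that combination using the standard tilt-compensated compass math (along the lines of the app note linked in another answer); the axis convention and sample readings are assumptions:

```python
import math

def pitch_roll_heading(ax, ay, az, mx, my, mz):
    """Pitch and roll from the accelerometer, then a tilt-compensated heading
    from the magnetometer (aircraft-style x-forward, y-right, z-down axes)."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))

    # rotate the magnetic field back into the horizontal plane
    mx_h = (mx * math.cos(pitch)
            + my * math.sin(pitch) * math.sin(roll)
            + mz * math.sin(pitch) * math.cos(roll))
    my_h = my * math.cos(roll) - mz * math.sin(roll)

    heading = math.atan2(-my_h, mx_h)
    return tuple(math.degrees(v) for v in (pitch, roll, heading))

# hypothetical readings: device level and pointing magnetic north
# (convention here: a level device reads accel = (0, 0, +g); the field dips downward)
print(pitch_roll_heading(0.0, 0.0, 9.8, 20.0, 0.0, 40.0))   # -> roughly (0, 0, 0)
```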
I think it's doable that way.
Good luck.
In the event you still need absolute orientation, you can check out this breakout board from Adafruit: https://www.adafruit.com/products/2472. The nice thing about this board is that it has an ARM Cortex-M0 processor to do all of the calculations for you.