The question is illustrated in the picture below.
I have quaternion data for Q1, Q2, and Q3. How can I calculate Q4?
Thanks in advance for your help.
To make it clearer:
I want to develop a 3D indoor navigation app that lets the user upload a route and reproduce that route at the same location.
When creating and uploading a path:
First, the user opens the camera; the initial location is O. He then stores the translation and rotation data of X, which is a predefined location. Finally, he creates his path by recording another point, such as Y, relative to O.
When reproducing the path created before, the user opens the camera again and gets another initial location, O'. He can go to the predefined location again and get the translation and rotation of X again. Then I can calculate Y relative to O' and reproduce the point.
From my point of view, I can calculate the translation with vector arithmetic: O'X - OX + OY = O'Y.
But I'm not familiar with quaternions and the rotation part. I tried the same method, but it didn't come out correctly, so I simplified the question to the model described in the diagram and am asking for some help.
I think Quaternion.LookRotation suits your problem: it converts a forward vector (and an up vector) into a quaternion, and the up vector for a person is usually Vector3.up, so:
var Q4 = Quaternion.LookRotation(/* vector Y - O' here */, Vector3.up);
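If you would rather compose the rotations directly, the analogue of O'Y = O'X - OX + OY for orientations is a quaternion product. Below is a minimal sketch in Python/scipy rather than Unity; the quaternion values are made up, and q_OX, q_OY, q_OpX are hypothetical names for the orientations measured at X and Y in each session:

from scipy.spatial.transform import Rotation as R

# Hypothetical orientations, as (x, y, z, w) quaternions (scipy's order)
q_OX  = R.from_quat([0.0, 0.259, 0.0, 0.966])  # X measured in session O
q_OY  = R.from_quat([0.0, 0.500, 0.0, 0.866])  # Y measured in session O
q_OpX = R.from_quat([0.0, 0.707, 0.0, 0.707])  # X measured in session O'

# Rotation taking the old session frame to the new one: q_OpX = delta * q_OX
delta = q_OpX * q_OX.inv()

# Apply the same frame change to Y's stored orientation
q_OpY = delta * q_OY
print(q_OpY.as_quat())

Whether delta multiplies on the left or the right depends on your library's composition convention, so verify against a known rotation first.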
Whenever I rotate the object, it squishes along either the x or z direction. I'm using these equations for the rotations.
x = x2·cosθ − y2·sinθ and y = x2·sinθ + y2·cosθ, where x2 = x·cosθ + y·sinθ and y2 = −x·sinθ + y·cosθ.
θ is the angle you are setting it to.
I tried making a second list for each coordinate line, but it gave me problems like possible 4D rotations.
Never mind, I got it working. Also sorry for not responding; I don't have notifications available as an option. Here's the project link though: https://scratch.mit.edu/projects/790020886/
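For anyone who hits the same squish: a common cause is overwriting x first and then using the new x to compute y. A minimal sketch in Python (not Scratch) of a rotation that avoids this by using temporaries:

import math

def rotate(points, theta_degrees):
    # Rotate (x, y) points about the origin; the new_x/new_y temporaries
    # prevent the overwritten x from leaking into the y calculation.
    t = math.radians(theta_degrees)
    out = []
    for x, y in points:
        new_x = x * math.cos(t) - y * math.sin(t)
        new_y = x * math.sin(t) + y * math.cos(t)
        out.append((new_x, new_y))
    return out

print(rotate([(1.0, 0.0)], 90))  # approximately [(0.0, 1.0)]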
I have a player object that controls like the ship in Asteroids, using speed and direction. This object is fixed in the middle of the screen, but can rotate. Movement of this object is a visual illusion as other objects move past it.
I need to get x and y coordinates of this player object, from an origin of (0, 0) at room start. x and y do not provide this info as the object does not move. Does anyone know how I can get 'fake coordinates', based on the speed and direction?
One thing to make sure of is that you're not just reading x and y on their own, as that will get the current object's x and y position. Instead, make sure to reference the object you're trying to read from. For example:
// Read the position from the ship instance, not the calling instance
var objectX = myShip.x;
var objectY = myShip.y;
show_debug_message("x: " + string(objectX));
show_debug_message("y: " + string(objectY));
I think you are thinking about it wrong. You do not need "fake coordinates". Real coordinates are fine. Give the ship and asteroids/enemies whatever coordinates and velocity vectors you want; randomly generate them if the game is like Asteroids.
The coordinates do not have to be fake; it is just that when you render in your game loop, you render a particular frame of reference. If the origin is the center of the screen, when you paint an object at (x,y) paint it as though it were at (x - ship_x, y - ship_y) -- including the ship, which will be at (0,0). If you wanted to make rotation relative to the ship too, you could do the same thing with rotation.
Now, you have your question tagged game-maker. I have no idea if GameMaker lets you control how sprites are painted like this. If not, then you need to maintain the real coordinates as separate properties of objects and let the official (x, y) coordinates be relative to the ship. The trouble with this is that you will have to update all of the objects every time the ship moves. But like I said, I don't know how GameMaker works -- if it is a problem, maybe ask a more GameMaker-specific question.
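To illustrate the frame-of-reference idea, here is a minimal Python sketch (the object list and screen size are assumptions, not GameMaker API):

# Keep real world coordinates and subtract the ship's position only when
# drawing, so the ship always renders at the screen centre.
objects = [
    {"name": "ship",     "x": 300.0, "y": 180.0},
    {"name": "asteroid", "x": 120.0, "y": -40.0},
]
ship = objects[0]
screen_cx, screen_cy = 320, 240  # centre of an assumed 640x480 screen

for obj in objects:
    draw_x = obj["x"] - ship["x"] + screen_cx
    draw_y = obj["y"] - ship["y"] + screen_cy
    print(obj["name"], draw_x, draw_y)  # the ship comes out at (320, 240)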
You'll need to think about what code you would normally use to move the ship around, and then apply that code to different variables.
Normally you would update x or y to move the ship, but since you're not going to do that, simply use custom variables that stand in for x and y (like posx and posy), and update them in the code that would otherwise move the ship around, as in the sketch below.
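A sketch of that bookkeeping (Python rather than GML; posx/posy and the angle convention are assumptions, though GameMaker does measure direction in degrees with screen y growing downward):

import math

posx, posy = 0.0, 0.0  # "fake" coordinates, origin at room start

def step(posx, posy, speed, direction_degrees):
    # Integrate one frame of movement from speed and heading
    rad = math.radians(direction_degrees)
    posx += speed * math.cos(rad)
    posy -= speed * math.sin(rad)  # minus: screen y points down
    return posx, posy

for _ in range(10):  # e.g. 10 frames at speed 4, heading 90 (up)
    posx, posy = step(posx, posy, 4, 90)
print(posx, posy)    # approximately (0, -40)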
I am working on taking two HoloLens 2 users' gaze data and comparing them to verify they are tracking the same hologram's trajectory. I have access to the GazeProvider data, no issues there. However, the GazeProvider.GazeDirection data throws me off. For instance, I've referenced the API at:
GazeDirection API Data
But I don't really understand what the Vector3 it returns means. Are X, Y, and Z relative motion? If not, can I use Vector3.Angle to compute relative motion vectors between two points?
The vector returned by the GazeDirection property uses three coordinate components to describe the direction the user's eyes are looking. The origin is located between the user's eyes. The Vector3.Angle method you mentioned can help you compute the angle between two eye-gaze directions.
I have just started to dig into gaze for a different scenario, but one suggestion I would make is that you also take a look at the gaze origin API.
Each user occupies a different location in space and is gazing into the world in a "gaze direction" from their location in space which would be their "gaze origin".
Basically you need to reconcile the different spatial coordinate systems.
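As a sketch of the direction comparison once both samples are expressed in one shared frame (the vectors below are made up; you would pair each direction with its gaze origin if you also want to intersect the rays):

import numpy as np

# Hypothetical gaze directions, already transformed into a shared world frame
dir_a = np.array([0.10, -0.05, 1.0])
dir_b = np.array([-0.30, -0.05, 0.9])

def angle_between(u, v):
    # Angle in degrees between two direction vectors
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

print(angle_between(dir_a, dir_b))  # angular difference of the two gaze rays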
I am currently trying to reconstruct a 3D trajectory of a falling object like a ball or a rock out of a sequence of images taken from an iPhone video.
Where should I start looking? I know I have to calibrate the camera (I think I'll use the MATLAB calibration toolbox by Jean-Yves Bouguet) and then find the vanishing point from the same sequence, but then I'm really stuck.
Read this: http://www.cs.auckland.ac.nz/courses/compsci773s1c/lectures/773-GG/lectA-773.htm
It explains 3D reconstruction using two cameras. For a simple summary, look at the figure from that site:
You only know pr and pl, the image points. By tracing a line from the respective focal points Or and Ol through them, you get two lines (Pr and Pl) that both contain the point P. Because you know the two cameras' origins and orientations, you can construct 3D equations for these lines. Their intersection is the 3D point; voila, it's that simple.
But when you discard one camera (say, the left one), you only know the line Pr for sure. What's missing is depth. Luckily, you know the radius of your ball, and this extra information can give you the missing depth. See the next figure (don't mind my paint skills):
Now you know the depth, using the intercept theorem.
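Concretely, for a pinhole camera the intercept theorem (similar triangles) gives r_px = f_px * R / Z, so Z = f_px * R / r_px. A tiny sketch with assumed numbers:

# f_px: focal length in pixels (from calibration), R: real ball radius,
# r_px: ball radius measured in the image. All values here are made up.
f_px = 1400.0
R = 0.11      # metres
r_px = 24.0

Z = f_px * R / r_px
print(Z)  # about 6.42 metres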
I see one last issue: the shape of the ball changes when projected at an angle (i.e. not perpendicular to your capture plane). However, you do know the angle, so compensation is possible, but I leave that up to you :p
edit: @ripkars' comment (comment box was too small)
1) ok
2) Aha, the correspondence problem :D It's typically solved by correlation analysis or by matching features (mostly matching followed by tracking, in a video); other methods exist too.
I haven't used the image/vision toolbox myself, but there should definitely be some things to help you on the way.
3) = calibration of your cameras. Normally you should only do this once, when installing the cameras (and again whenever you change their relative pose).
4) Yes, just put the Longuet-Higgins equations to work, i.e. solve
P = C1 + mu1*R1*K1^(-1)*p1
P = C2 + mu2*R2*K2^(-1)*p2
with
P = 3D point to find
C = camera center (vector)
R = rotation matrix expressing the orientation of the camera in the world frame.
K = calibration matrix of the camera (containing internal parameters of the camera, not to be confused with the external parameters contained by R and C)
p1 and p2 = the image points
mu = parameter expressing the position of P on the projection line from the camera center C to P (if I'm correct, R*K^(-1)*p is a direction vector pointing from C toward P)
These are 6 equations in 5 unknowns: mu1, mu2, and the three components of P.
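A small sketch of solving that system in a least-squares sense (numpy; the helper names are mine, and the inputs are assumed to be calibrated already):

import numpy as np

def back_project(K, R, p):
    # Direction of the ray from the camera centre through image point p
    p_h = np.array([p[0], p[1], 1.0])
    return R @ np.linalg.inv(K) @ p_h

def triangulate(C1, d1, C2, d2):
    # Stack P = C1 + mu1*d1 and P = C2 + mu2*d2 as 6 linear equations
    # in the 5 unknowns (P, mu1, mu2) and solve by least squares.
    A = np.zeros((6, 5)); b = np.zeros(6)
    A[:3, :3] = np.eye(3); A[:3, 3] = -d1; b[:3] = C1
    A[3:, :3] = np.eye(3); A[3:, 4] = -d2; b[3:] = C2
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3]  # the 3D point P

# Example with assumed identity intrinsics and axis-aligned cameras:
K = np.eye(3)
C1, R1 = np.zeros(3), np.eye(3)
C2, R2 = np.array([1.0, 0.0, 0.0]), np.eye(3)
d1 = back_project(K, R1, [0.0, 0.0])    # ray through the image centre
d2 = back_project(K, R2, [-0.25, 0.0])
print(triangulate(C1, d1, C2, d2))      # approximately [0, 0, 4]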
edit: @ripkars' comment (comment box too small once again)
The only computer vision library that pops into my mind is OpenCV (http://opencv.willowgarage.com/wiki). But that's a C library, not MATLAB... I guess Google is your friend ;)
About the calibration: yes, if those two images contain enough information to match some features. If you change the relative pose of the cameras, you'll have to recalibrate, of course.
The choice of the world frame is arbitrary; it only becomes important when you want to analyze the retrieved 3D data afterwards. For example, you could align one of the world planes with the plane of motion -> a simplified motion equation, if you want to fit one.
This world frame is just a reference frame, changeable with a 'change of reference frame' transformation (a translation and/or rotation).
Unless you have a stereo camera, you will never be able to know the position for sure, even with a calibrated camera, because you don't know whether the ball is small and close or large and far away.
There are other single-camera methods, based on a series of images with different focus, but I doubt you can control your phone's camera in that way.
Edit(1):
as @GuntherStruyf correctly points out, you can know the position if one of your inputs is the size of the ball.
I am developing an app which uses Lucas-Kanade (LK) for tracking and POSIT for pose estimation. I successfully get the rotation matrix and the projection matrix, and tracking works perfectly, but my problem is that I am not able to translate the 3D object properly: it does not fit into the place where it should.
Can someone help me with this?
Check these links; they may give you some ideas.
http://computer-vision-talks.com/2011/11/pose-estimation-problem/
http://www.morethantechnical.com/2010/11/10/20-lines-ar-in-opencv-wcode/
You must also check whether the intrinsic camera parameters are correct. Even a small error in estimating the field of view can cause trouble when trying to reconstruct 3D space, and from your details it seems that the problem is bad field-of-view (FOV) angles.
You can try to measure them, or feed the half or double value to your algorithm.
There are two conventions for FOV: half-angle (from the image center to the top or left edge) and full-angle (from bottom to top, or from left to right, respectively). Maybe you just mixed them up, using the full angle instead of the half angle, or vice versa.
Maybe you can show us how you build a transformation matrix from R and T components?
Remember that the cv::solvePnP function returns the inverse transformation (e.g. camera in world): it finds the object pose in a 3D space where the camera is at (0, 0, 0). In almost all cases you need to invert it to get the correct result: {R^T; -R^T*T}.
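For reference, a minimal sketch of building the 4x4 transform from R and T and inverting it as a rigid motion (numpy; R and T here are placeholders for your estimated pose):

import numpy as np

def to_homogeneous(R, T):
    # Pack a 3x3 rotation and a 3-vector translation into a 4x4 matrix
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = np.ravel(T)
    return M

def invert_rigid(M):
    # inv([R T; 0 1]) = [R^T  -R^T@T; 0 1], no general inverse needed
    R, T = M[:3, :3], M[:3, 3]
    Mi = np.eye(4)
    Mi[:3, :3] = R.T
    Mi[:3, 3] = -R.T @ T
    return Mi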