Calculate camera up vector after gluLookAt()? - iPhone

I'm just starting out teaching myself OpenGL and am now adding OpenAL to the mix.
I have some planets scattered around in 3D space, and when I touch the screen I assign a sound to a random planet and then slowly and smoothly fly the "camera" over to look at it and listen to it. The animation/tweening part is working perfectly, but the OpenAL piece isn't quite right. I move the camera around by doing a tiny translate() and gluLookAt() every frame to keep things smooth (tweening the camera position and lookAt coords). The trouble seems to be with the stereo image I'm getting out of the headphones: it seems like the left/right/up/down is mixed up sometimes after the camera rolls or spins. I'm pretty sure the trouble is here:
ALfloat listenerPos[] = {camera->currentX, camera->currentY, camera->currentZ};
ALfloat listenerOri[] = {camera->currentLookX,
                         camera->currentLookY,
                         camera->currentLookZ,
                         0.0,  // Camera Up X <--- here
                         0.1,  // Camera Up Y <--- here
                         0.0}; // Camera Up Z <--- and here
alListenerfv(AL_POSITION, listenerPos);
alListenerfv(AL_ORIENTATION, listenerOri);
I'm wondering if I need to recompute the UP vector for the camera after each gluLookAt() to straighten out the audio positioning problem. That seems like it could be the missing ingredient, but the math involved seems so advanced I'm not even sure where to begin.
1) Is it correct that I'll need to recalculate the up vector after each gluLookAt()?
2) Could someone teach me how to calculate an up vector?
So would this be a correct way to get the up vector of the camera after the gluLookAt()?
gluLookAt(cam->currentX,
          cam->currentY,
          cam->currentZ,
          cam->currentLookX,
          cam->currentLookY,
          cam->currentLookZ,
          cam->upX,
          cam->upY,
          cam->upZ);
// Get the up vector
glGetFloatv(GL_MODELVIEW_MATRIX, cam->modelViewMatrix);
cam->upX = cam->modelViewMatrix[4];
cam->upY = cam->modelViewMatrix[5];
cam->upZ = cam->modelViewMatrix[6];

In a Y-up world, the global "up" vector is the local "Y" vector. Or, put another way, if you put "0, 1, 0" into the transform, it will come out pointing "up."
In an OpenGL matrix this means that the second column is your "up" vector. You can extract it as follows:
float *myMatrix = ...;
myUpVector = Vector3(myMatrix[4], myMatrix[5], myMatrix[6]);
One caveat: that holds for a forward (model/camera) transform. The GL_MODELVIEW_MATRIX you read back after gluLookAt() is the inverse, world-to-eye transform, so (assuming nothing else is on the modelview stack) the camera's world-space up vector is the second row instead: myMatrix[1], myMatrix[5], myMatrix[9].
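For reference, the rotation part of the matrix gluLookAt() builds (as documented on its man page) is shown below, with f = normalize(center - eye), s = normalize(f x up) and u = s x f, followed by a translation by -eye. In OpenGL's column-major storage, u occupies elements 1, 5 and 9:

M =
\begin{pmatrix}
 s_x &  s_y &  s_z & 0 \\
 u_x &  u_y &  u_z & 0 \\
-f_x & -f_y & -f_z & 0 \\
 0   &  0   &  0   & 1
\end{pmatrix}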

Related

(UNITY) Plane not rotating to normal vector of three points?

I am trying to get a stretched-out cube (which we can call a plane for the sake of discussion) to orient itself to the normal vector of a plane described by three points. I wrote a script to find the normal of the three points, and then used transform.LookAt to have the planes align. However, I am finding that this script is not working at all how it is intended to, and despite my best efforts I cannot figure out why.
Drastic movements of the individual points hardly affect the plane's rotation.
The rotation of the object when using the existing points in the script should be 0,0,0 in the inspector. However, it is always off by a few degrees and, as I said, it does not align itself when I move the points around.
This is the script. I can also post photos showing the behavior or share a small Unity package.
First of all, Transform.LookAt takes a position as a parameter, not a direction!
And then it
Rotates the transform so the forward vector points at worldPosition.
That doesn't sound like what you are trying to achieve.
If you want your object to look with its forward vector in the given normal direction (assuming you are calculating the normal correctly), then you could rather use Quaternion.LookRotation:
transform.rotation = Quaternion.LookRotation(doNormal(cpit, cmit, ctht));
Alternatively, you can also simply assign the corresponding vector directly, e.g.
transform.forward = doNormal(cpit, cmit, ctht);
or
transform.up = doNormal(cpit, cmit, ctht);
depending on your needs.
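For completeness, a minimal sketch of the whole thing, assuming doNormal is the usual cross-product plane normal and that cpit, cmit and ctht are the three points from the question (the field types and wiring here are my assumptions):

using UnityEngine;

public class AlignToPlaneNormal : MonoBehaviour
{
    public Transform cpit, cmit, ctht; // the three points defining the plane

    static Vector3 DoNormal(Vector3 a, Vector3 b, Vector3 c)
    {
        // Plane normal = cross product of two edge vectors, normalized.
        return Vector3.Cross(b - a, c - a).normalized;
    }

    void Update()
    {
        Vector3 normal = DoNormal(cpit.position, cmit.position, ctht.position);
        // Point the forward axis along the normal...
        transform.rotation = Quaternion.LookRotation(normal);
        // ...or, to lay the flattened cube onto the plane, align its up axis instead:
        // transform.up = normal;
    }
}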

Explanation of how to calculate transforms in Unity

I am getting started with Unity and am just trying to get my head around the units. What are these units? It seems they are their own 'quantity', and 2 units are simply treated as twice the value of 1 unit.
Anyway, I am trying to work out how to calculate transforms optimally so that objects sit exactly where I want them to.
In my scene I have a terrain and a cylinder, like so:
As you can see my cylinder is floating. I want the cylinder to sit perfectly on top of the terrain.
My terrain is at the following transform: 0,0,0 and scale 0,0,0 (not sure how to tell its dimensions yet).
My cylinder is part of a new object, like so:
My FirstPersonPlayer is at transform: 85.9,2.165,51.8 and scale 1,1,1. My Cylinder is at 'localposition' 0,0,0 and local scale 1.2,1.8,1.2
Now - the transform of FirstPersonPlayer on the y axis appears to be what I need to correct.
Currently it is set to 2.165 and is floating a bit above the terrain.
Through manually shifting it, around 1.85 looks about right - but I want to know how to calculate that, rather than doing a finger in the air 'that looks about right'.
Can anyone help me? (Before you suggest using gravity etc., I actually am, but I don't want the player falling as soon as they start, however slight that may look or feel.)
Many thanks,
As per @Nikola Dimitroff, the answer is:
You don't have to compute anything, hold Shift + Control and drag the object. Every game engine ever made calls this "Snap to Ground"
I appreciate and agree with the other comments.
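If you do want to calculate that Y value rather than snap, a minimal sketch (assuming a standard Unity Terrain in the scene and a Collider on the object; the class name is mine) is to sample the terrain height under the object and raise it by half the collider's height:

using UnityEngine;

// Sketch only: assumes the scene has an active Terrain and this object has a Collider.
public class PlaceOnTerrain : MonoBehaviour
{
    void Start()
    {
        Terrain terrain = Terrain.activeTerrain;
        Collider col = GetComponent<Collider>();

        Vector3 pos = transform.position;
        // World-space height of the terrain surface under the object.
        float groundY = terrain.SampleHeight(pos) + terrain.transform.position.y;
        // Raise by half the collider's world-space height so the bottom touches the surface.
        float halfHeight = col.bounds.extents.y;

        transform.position = new Vector3(pos.x, groundY + halfHeight, pos.z);
    }
}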

Automatically calculating new position of camera after we increase our chessboard size but want it still to stay in shot

Say my camera is rotated 60 degrees around the X axis and is looking down on a 9x9-block chess board. As we adjust the board size, I want to zoom the camera out. Say for argument's sake the camera's position is (4,20,-7), and like this the whole board is visible and takes up the full screen.
If I adjust my board size to say 11x11 blocks I will now need to zoom out the camera. Say I want to maintain the same 60 degree angle and want the board to fill as much of the screen as it did before. What should the camera's new position be and how do you calculate it?
The X part is easy, since you simply give the camera the same X position as the middle of the board. I'm not sure how to calculate the new Y and Z positions, though.
Any advice appreciated. Thanks.
Edit: And if I wanted to change the angle of the camera as well as zoom out, is that possible to calculate? This is less important since I'll probably stick with the same angle, but I'm interested to know the maths behind it anyway.
The Transform.Translate() method moves the transform relative to its own rotation by default, so you don't have to worry about which direction your camera is looking in; just
yourCamera.transform.Translate(Vector3.forward * moveAmount);
will move your camera forward, which means zooming in. If you want to zoom out, just make the value negative.
Before I knew this, I used Mathf.Sin() and Mathf.Cos() to calculate the y and z world coordinates separately, which sucks.
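To answer the original question of actually computing the new position: if you scale the whole setup (board plus camera offset) uniformly about the board centre, the on-screen image stays the same, so with a fixed angle and field of view you can simply multiply the camera's offset from the board centre by newSize / oldSize. A rough sketch of that idea (the class and field names are mine):

using UnityEngine;

// Sketch: keeps the same viewing angle and scales the camera's offset from the
// board centre by the ratio of the new board size to the reference size.
public class BoardCamera : MonoBehaviour
{
    public Vector3 referenceOffset;       // camera position minus board centre for the 9x9 board
    public float referenceBoardSize = 9f;

    public void Frame(Vector3 boardCentre, float boardSize)
    {
        // With a fixed FOV and fixed angle, the distance required to keep the
        // board filling the screen grows linearly with the board size.
        float scale = boardSize / referenceBoardSize;
        transform.position = boardCentre + referenceOffset * scale;
    }
}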

Unity Shader Graph: Combining texture rotation and offset

I'm making a water simulation, and I'm trying to visualize the velocity field. In this demo I continuously add water in the centre of the image, where the blue 'raindrop' things are.
I have a texture where rg is the X and Y direction of the velocity, and ba is the total movement of water through it (i.e. every step ba = ba + rg * delta_time).
I'm working in Unity Shader Graph.
I want to rotate a 'ripple' texture in the direction of the velocity, and then translate it in that direction as well. To prevent the shader from jumping around when the velocity changes, I thought of using the ba channels (which were previously unused) to keep a kind of total velocity, as described above.
Both the rotation (based on the velocity alone) and the translation (based on the 'total velocity') work fine on their own. But when I sum them together, it looks like the translation is also rotated. I'm not sure why this happens.
Here's what I do:
First part: rotating my water texture in the direction of the velocity, and that looks fine:
The shader itself looks like this:
So basically I discretize the uv (custom function on the right), get the angle of the velocity (using arctan2), and then rotate each discrete block using the Rotate block. This works as expected.
Second part: translating the texture based on the total velocity (in the ba channels), also works as expected:
The shader itself looks like this:
Again I used the discretized uv, now I translate each block based on the ba channels, which contain the total of the velocity (ba = ba + rg * delta_time each time step). As you can see this shows the textures flowing away from the centre (where water is added constantly). This is what I would expect to happen.
Now, when I combine them, it goes wrong:
The one I circled in red shows the problem best (though all blocks seem to have it to some degree, depending on how much they were rotated). The arrow points to the bottom-right, which seems to be correct; however, the flow now goes towards the top.
The shader:
So here I add the rotated discrete block to the translation. But it looks like the translation part is now also rotated, even though I add them together after the Rotate block. So while the translation isn't rotated, it looks like it is.
Why is this happening? And how can I fix it?
I hope I explained it adequately, since it's not easy to show in just pictures and gifs.
Thanks!
So I fixed my problem by storing just the total distance moved in the b channel (thus b += length(rg)), rather than the x and y of the offset in the b and a channels.
Then I'm using float2(0, b) as the offset.
This also gets rotated for some reason, and visually it works the way I wanted.
However, I still don't really see why; sometimes I think I get it, and then I think some more and I don't any more.
So if anyone knows why this happens and can explain, I'm happy to accept that answer.
However, for now it is solved.

3D trajectory reconstruction from video (taken by a single camera)

I am currently trying to reconstruct the 3D trajectory of a falling object, like a ball or a rock, from a sequence of images taken from an iPhone video.
Where should I start looking? I know I have to calibrate the camera (I think I'll use the MATLAB calibration toolbox by Jean-Yves Bouguet) and then find the vanishing point from the same sequence, but then I'm really stuck.
Read this: http://www.cs.auckland.ac.nz/courses/compsci773s1c/lectures/773-GG/lectA-773.htm
It explains 3D reconstruction using two cameras. For a simple summary, look at the figure from that site:
You only know pr/pl, the image points. By tracing a line from their respective focal points Or/Ol you get two lines (Pr/Pl) that both contain the point P. Because you know the two cameras' origins and orientations, you can construct 3D equations for these lines. Their intersection is thus the 3D point; voilà, it's that simple.
But when you discard one camera (let's say the left one), you only know the line Pr for sure. What's missing is depth. Luckily, you know the radius of your ball; this extra information can give you the missing depth. See the next figure (don't mind my paint skills):
Now you know the depth using the intercept theorem.
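As a worked version of that step (my notation, assuming a simple pinhole model): if the ball's real radius is R_ball, its radius in the image is r_img, and f is the focal length (with r_img and f in the same units, e.g. pixels), the similar triangles of the intercept theorem give

\frac{R_\mathrm{ball}}{Z} = \frac{r_\mathrm{img}}{f}
\quad\Longrightarrow\quad
Z \approx f \, \frac{R_\mathrm{ball}}{r_\mathrm{img}}

where Z is the depth of the ball's centre along the viewing ray.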
I see one last issue: the shape of the ball changes when projected at an angle (i.e. not perpendicular to your capture plane). However, you do know the angle, so compensation is possible, but I leave that up to you :p
Edit: @ripkars' comment (the comment box was too small)
1) OK
2) Aha, the correspondence problem :D It is typically solved by correlation analysis or feature matching (mostly matching followed by tracking in a video); other methods exist too.
I haven't used the image/vision toolbox myself, but there should definitely be some things to help you on the way.
3) = calibration of your cameras. Normally you only do this once, when installing the cameras (and again every time you change their relative pose).
4) Yes, just put the Longuet-Higgins equation to work, i.e. solve
P = C1 + mu1*R1*K1^(-1)*p1
P = C2 + mu2*R2*K2^(-1)*p2
with
P = the 3D point to find
C = camera centre (vector)
R = rotation matrix expressing the orientation of the camera in the world frame
K = calibration matrix of the camera (containing the internal parameters of the camera, not to be confused with the external parameters contained in R and C)
p1 and p2 = the image points
mu = parameter expressing the position of P on the projection line from the camera centre C to P (if I'm correct, R*K^(-1)*p is a vector pointing from C towards P)
These are 6 equations in 5 unknowns: mu1, mu2 and the 3 coordinates of P.
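To make that concrete, here is a small sketch of how you could solve those two line equations in the least-squares sense (mine, not from the thread; plain C# with System.Numerics, but the same few lines port to MATLAB easily). It assumes you have already turned each image point into a world-space ray direction d = R*K^(-1)*p, and it returns the midpoint of the shortest segment between the two rays:

using System.Numerics;

static class Triangulation
{
    // Closest point to two (possibly skew) 3D lines C1 + mu1*d1 and C2 + mu2*d2.
    // The midpoint of the shortest segment between them is the least-squares
    // solution of the 6 equations in 5 unknowns above.
    public static Vector3 Intersect(Vector3 c1, Vector3 d1, Vector3 c2, Vector3 d2)
    {
        Vector3 w0 = c1 - c2;
        float a = Vector3.Dot(d1, d1);
        float b = Vector3.Dot(d1, d2);
        float c = Vector3.Dot(d2, d2);
        float d = Vector3.Dot(d1, w0);
        float e = Vector3.Dot(d2, w0);

        float denom = a * c - b * b;   // approaches 0 when the rays are (nearly) parallel
        float mu1 = (b * e - c * d) / denom;
        float mu2 = (a * e - b * d) / denom;

        Vector3 p1 = c1 + mu1 * d1;    // closest point on ray 1
        Vector3 p2 = c2 + mu2 * d2;    // closest point on ray 2
        return (p1 + p2) * 0.5f;       // estimated 3D point P
    }
}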
Edit: @ripkars' comment (the comment box was too small once again)
The only computer vision library that pops into my mind is OpenCV (http://opencv.willowgarage.com/wiki). But that's a C library, not MATLAB... I guess Google is your friend ;)
About the calibration: yes, if those two images contain enough information to match some features. If you change the relative pose of the cameras, you'll have to recalibrate of course.
The choice of the world frame is arbitrary; it only becomes important when you want to analyze the retrieved 3D data afterwards: for example, you could align one of the world planes with the plane of motion, which simplifies the motion equation if you want to fit one.
This world frame is just a reference frame, changeable with a 'change of reference frame' transformation (a translation and/or rotation).
Unless you have a stereo camera, you will never be able to know the position for sure, even with a calibrated camera, because you don't know whether the ball is small and close or large and far away.
There are other methods with a single camera, based on a series of images with different focus. But I doubt that you can control the camera of your cell phone in that way.
Edit (1):
As @GuntherStruyf correctly points out, you can know the position if one of your inputs is the size of the ball.