Camera Bounds in perspective mode in Unity 3D

How can I find the extreme left, right, top, and bottom points of a perspective camera in Unity 3D? I am trying to implement zooming and panning, and I need those points to check whether I am going out of my bounds. Is there any other way to find them?

This Unity manual entry directly answers your question:
FrustumSizeAtDistance
To summarize, the height of the view frustum at a given distance can be calculated like so:
var frustumHeight = 2.0f * distance * Mathf.Tan(camera.fieldOfView * 0.5f * Mathf.Deg2Rad);
If we already know the frustumHeight we can calculate the corresponding distance to the camera:
var distance = frustumHeight * 0.5f / Mathf.Tan(camera.fieldOfView * 0.5f * Mathf.Deg2Rad);
Once we have the frustumHeight at a given distance, we can calculate its width using the camera's aspect ratio:
var frustumWidth = frustumHeight * camera.aspect;
This can also be reversed like so:
var frustumHeight = frustumWidth / camera.aspect;
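Putting these formulas together, here is a minimal sketch that computes the four extreme points the question asks about. It assumes the bounds plane is perpendicular to the camera's forward axis at the given distance, and the helper name is illustrative:
Vector3 GetExtremePoint(Camera cam, float distance, Vector2 direction)
{
    // Frustum size at the given distance, using the formulas above
    float frustumHeight = 2.0f * distance * Mathf.Tan(cam.fieldOfView * 0.5f * Mathf.Deg2Rad);
    float frustumWidth = frustumHeight * cam.aspect;
    // Start at the center of the view plane, then step half the frustum size sideways/up
    Vector3 center = cam.transform.position + cam.transform.forward * distance;
    return center + cam.transform.right * (direction.x * frustumWidth * 0.5f)
                  + cam.transform.up * (direction.y * frustumHeight * 0.5f);
}
GetExtremePoint(cam, distance, Vector2.left) then gives the extreme left point, and Vector2.right, Vector2.up and Vector2.down give the other three.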

Camera.ScreenToWorldPoint will assist you.
For instance, to find the bottom-left point of the screen projected onto the world, use this:
camera.ScreenToWorldPoint(new Vector3(0, 0, distance_from_camera));
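The remaining corners follow the same pattern; a short sketch, with illustrative variable names:
Camera cam = Camera.main;
float distanceFromCamera = 10f; // distance to your bounds plane (example value)
Vector3 bottomLeft = cam.ScreenToWorldPoint(new Vector3(0, 0, distanceFromCamera));
Vector3 bottomRight = cam.ScreenToWorldPoint(new Vector3(Screen.width, 0, distanceFromCamera));
Vector3 topLeft = cam.ScreenToWorldPoint(new Vector3(0, Screen.height, distanceFromCamera));
Vector3 topRight = cam.ScreenToWorldPoint(new Vector3(Screen.width, Screen.height, distanceFromCamera));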

Related

convert 2d location on 360 image to 3d location (in sphere)

I have a bunch of 360° equirectangular images, and on each image I want to place a point of interest. To make this easy, I just want to determine the 2D location of this point on the image. See the image below for clarification:
Let's say that the blue point has a pixel location of X: 3000 and Y: 1300, and that the total dimensions of the image are 4096x2048.
Now I want to convert this point to a spherical location and then to a 3D location. I try to do this in the following way:
Vector3 PlaceMenu(Vector2 loc2d)
{
    var phi = 2 * Mathf.PI * (loc2d.x / imageDimensions.x);
    var theta = (loc2d.y / imageDimensions.y) * Mathf.PI;
    var pos = new Vector3(Mathf.Cos(phi) * Mathf.Sin(theta), Mathf.Sin(phi) * Mathf.Sin(theta), Mathf.Cos(theta));
    pos *= offsetRadius;
    return pos;
}
In this case, offsetRadius is the radius of the sphere.
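(For reference, plugging the example point into these formulas gives phi = 2π * (3000 / 4096) ≈ 4.60 rad and theta = π * (1300 / 2048) ≈ 1.99 rad.)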
But the results I am getting with this code are weird, because the blue points appear at locations other than the ones specified by the 2D location.
What am I doing wrong here?
If any more explanation is needed, I am happy to provide it!

Quaternion * Vector3 * Distance

Could someone please help me understand the result of the following multiplications?
In the Unity VR samples project, the following two lines are used:
Quaternion headRotation = InputTracking.GetLocalRotation(VRNode.Head);
TargetMarker.position = Camera.position + (headRotation * Vector3.forward) * DistanceFromCamera;
I can understand the first line - how the user's head rotation is calculated and stored in headRotation, which is a Quaternion.
I can also understand that the TargetMarker's position should be calculated by adding the Camera's position to something. What is this something?
Most importantly, how is the result of (headRotation * Vector3.forward) * DistanceFromCamera a position?
headRotation * Vector3.forward returns a Vector3 pointing in the forward direction of your Quaternion headRotation (so, the direction you are facing).
As Vector3.forward is the normalized vector (0, 0, 1), when you multiply it by your Quaternion you get a vector of length 1 pointing in the same direction as your head.
Then, when you multiply it by the distance between your marker and your camera, you get a vector with the same length and direction as the one between your camera and your marker.
Add it to your current camera position and you now have the position of your marker.
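The same computation broken into steps (a sketch; only the intermediate names are new):
// Step 1: rotate the unit forward vector by the head rotation.
// The result still has length 1, it just points where the head points.
Vector3 direction = headRotation * Vector3.forward;
// Step 2: scale that unit direction to the desired length.
Vector3 offset = direction * DistanceFromCamera;
// Step 3: treat the offset as a displacement from the camera position.
TargetMarker.position = Camera.position + offset;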

Game programming difficult mathematical issue

The question I am about to ask could be somewhat challenging. I will try to make this as clear and cohesive as possible.
I am currently making a game, in which I have a 'laser ring,' as shown here:
This laser ring, when prompted, will fire a 'grappling hook', which is simply the image shown below. This image's frame.width property is adjusted to make it fire (lengthen) and retract (shorten). It starts at a width of 0, and as the frames progress, it lengthens until reaching the desired point.
This grappling hook, when fired, should line up with the ring so that they appear to be one item. Refer to the image below for clarity:
*Note that the grappling hook's width changes almost every frame, so a constant width cannot be assumed.
Something else to note is that, for reasons that are difficult to explain, I can only access the frame.center property of the grappling hook and not the frame.origin property.
So, my question to you all is this: how can I, accessing only the frame.center.x and frame.center.y properties of the grappling hook, place it around the laser ring in such a way that it appears to extend seamlessly from the ring as shown in the above image? Presumably this is calculated from the angle and the width of the grappling hook at any given frame.
Any help is immensely appreciated.
OK, I've done this exact same thing in my own app.
The trick I used to make it easier was a function that calculates the "unitVector" of the line, i.e. the vector change along the line for a line length of 1.
It just uses simple Pythagoras...
- (CGSize)unitVectorFromPoint:(CGPoint)start toPoint:(CGPoint)end
{
    // distance between start and end
    float dX = end.x - start.x;
    float dY = end.y - start.y;
    float distance = sqrtf(dX * dX + dY * dY); // simple Pythagoras
    // the unit vector is just the difference divided by the distance
    CGSize unitVector = CGSizeMake(dX / distance, dY / distance);
    return unitVector;
}
Note... for the distance it doesn't matter which way round the start and end are, as squaring the numbers only gives positive values.
Now you can use this vector to get to any point along the line between the two points (centre of the circle and target).
So, the start of the line is ...
CGPoint center = ...; // center of the circle
CGPoint target = ...; // the target
float radius = ...;   // radius of the circle
float dX = center.x - target.x;
float dY = center.y - target.y;
float distance = sqrtf(dX * dX + dY * dY);
CGSize unitVector = [self unitVectorFromPoint:center toPoint:target];
// note that CGSize stores its components as width/height
CGPoint startOfLaser = CGPointMake(center.x + unitVector.width * radius, center.y + unitVector.height * radius);
CGPoint midPointOfLaser = CGPointMake(center.x + unitVector.width * distance * 0.5f, center.y + unitVector.height * distance * 0.5f);
This just multiplies the unit vector by how far you want to go (radius) to get to the point on the line at that distance.
Hope this helps :D
If you want a different point between the two points, just change "radius" to whatever distance you want along the line (half the total distance gives the midpoint, and so on).
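One way to apply this to the original constraint (only frame.center being settable): the hook extends outward from the ring's edge, so its center sits at distance r + w/2 along the unit vector, where r is the ring radius and w is the hook's current width:
hookCenter = ringCenter + unitVector * (r + w / 2)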

OpenGL-ES change angle of vision in frustum

Let's see if I can explain myself.
When you set up the glFrustum view it will give the perspective effect. Near things near & big... far things far & small. Everything looks like it shrinks along its Z axis to create this effect.
Is there a way to make it NOT shrink that much?
To push the perspective view toward an orthographic view... but not so far that perspective is lost completely?
Thanks
The angle is determined by two parameters: the height of the near clipping plane (set by the top and bottom parameters) and the distance of the near clipping plane (set by zNear).
To make a perspective matrix that doesn't shrink the image too much, you can set a smaller height or move the near clipping plane further away.
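Explicitly, for a symmetric frustum the vertical field of view follows from exactly those two parameters:
theta = 2 * atan((top - bottom) / (2 * zNear))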
The thing to understand is that an orthographic view is a view with a FOV of zero and a camera position at infinity. So you can approach an orthographic view by reducing the FOV and moving the camera far away.
I can suggest the following code, which computes a near-orthographic projection from a given theta FOV value. I use it in a personal project, though that project uses custom matrix classes rather than glOrtho and glFrustum, so this translation might be incorrect. I hope it gives a good general idea, though.
void SetFov(int width, int height, float theta)
{
    float near = -(width + height);
    float far = width + height;

    /* Set the projection matrix */
    if (theta < 1e-4f)
    {
        /* The easy way: purely orthographic projection. */
        glOrtho(0, width, 0, height, near, far);
        return;
    }

    /* Compute a view that approximates the glOrtho view when theta
     * approaches zero. This view ensures that the z=0 plane fills
     * the screen. */
    float t1 = tanf(theta / 2);
    float t2 = t1 * width / height;
    float dist = width / (2.0f * t1);

    near += dist;
    far += dist;
    if (near <= 0.0f)
    {
        far -= (near - 1.0f);
        near = 1.0f;
    }

    glTranslatef(-0.5f * width, -0.5f * height, -dist);
    glFrustum(-near * t1, near * t1, -near * t2, near * t2, near, far);
}

How do I use the gravity vector to correctly transform scene for augmented reality?

I'm trying to figure out how to get an OpenGL object to be displayed correctly according to the device orientation (i.e. according to the gravity vector from the accelerometer, and the heading from the compass).
The GLGravity sample project has an example which is almost like this (despite ignoring heading), but it has some glitches. For example, the teapot jumps 180° as the device viewing angle crosses the horizon, and it also rotates spuriously if you tilt the device from portrait into landscape. This is fine for the context of this app, as it just shows off an object and it doesn't matter that it does these things. But it means that the code just doesn't work when you attempt to emulate real-life viewing of an OpenGL object according to the device's orientation. What happens is that it almost works, but the heading rotation you apply from the compass gets "corrupted" by the spurious additional rotations seen in the GLGravity example project.
Can anyone provide sample code that shows how to adjust correctly for the device orientation (i.e. the gravity vector), or how to fix the GLGravity example so that it doesn't include spurious heading changes?
//Clear matrix to be used to rotate from the current referential to one based on the gravity vector
bzero(matrix, sizeof(matrix));
matrix[3][3] = 1.0;
//Setup first matrix column as the gravity vector
//(length is assumed to be the magnitude of the accel vector, computed earlier in the GLGravity sample)
matrix[0][0] = accel[0] / length;
matrix[0][1] = accel[1] / length;
matrix[0][2] = accel[2] / length;
//Setup second matrix column as an arbitrary vector in the plane perpendicular to the gravity vector {Gx, Gy, Gz}, defined by the equation "Gx * x + Gy * y + Gz * z = 0", in which we arbitrarily set x=0 and y=1
matrix[1][0] = 0.0;
matrix[1][1] = 1.0;
matrix[1][2] = -accel[1] / accel[2];
length = sqrtf(matrix[1][0] * matrix[1][0] + matrix[1][1] * matrix[1][1] + matrix[1][2] * matrix[1][2]);
matrix[1][0] /= length;
matrix[1][1] /= length;
matrix[1][2] /= length;
//Setup third matrix column as the cross product of the first two
matrix[2][0] = matrix[0][1] * matrix[1][2] - matrix[0][2] * matrix[1][1];
matrix[2][1] = matrix[1][0] * matrix[0][2] - matrix[1][2] * matrix[0][0];
matrix[2][2] = matrix[0][0] * matrix[1][1] - matrix[0][1] * matrix[1][0];
//Finally load matrix
glMultMatrixf((GLfloat*)matrix);
Here's a clarification showing how to get the elevation and tilt that are needed for the gluLookAt solution shown in my other answer:
// elevation comes from z component (0 = facing horizon)
elevationRadians = asin(gravityVector.z / Vector3DMagnitude(gravityVector));
// tilt is how far screen is from vertical, looking along z axis
tiltRadians = atan2(-gravityVector.y, -gravityVector.x) - M_PI_2;
Following up on Chris's suggestion: I'm not sure if I've got this all correct due to differing conventions of row/column order and heading cw or ccw. However, the following code is what I came up with:
Vector3D forward = Vector3DMake(0.0f, 0.0f, -1.0f);
// Multiply it by current rotation matrix to get teapot direction
Vector3D direction;
direction.x = matrix[0][0] * forward.x + matrix[1][0] * forward.y + matrix[2][0] * forward.z;
direction.y = matrix[0][1] * forward.x + matrix[1][1] * forward.y + matrix[2][1] * forward.z;
direction.z = matrix[0][2] * forward.x + matrix[1][2] * forward.y + matrix[2][2] * forward.z;
heading = atan2(direction.z, direction.x) * 180 / M_PI;
// Use this heading to adjust the teapot direction back to keep it fixed
// Rotate about vertical axis (Y), as it is a heading adjustment
glRotatef(heading, 0.0, 1.0, 0.0);
When I run this code, the teapot behaviour has apparently "improved": for example, the heading no longer flips 180° when the device screen (in portrait view) is pitched forward/back through upright. However, it still makes major jumps in heading when the device (in landscape view) is pitched forward/back. So something's not right; it suggests that the above calculation of the actual heading is incorrect...
I finally found a solution that works. :-)
I dropped the rotation matrix approach and instead adopted gluLookAt. To make this work you need to know the device "elevation" (viewing angle relative to the horizon, i.e. 0 on the horizon, +90 overhead) and the camera's "tilt" (how far the device is from vertical in its x/y plane, i.e. 0 when vertical/portrait, +/-90 when horizontal/landscape), both of which are obtained from the device gravity vector components.
Vector3D eye, scene, up;
CGFloat distanceFromScene = 0.8;
// Adjust eye position for elevation (y/z)
eye.x = 0;
eye.y = distanceFromScene * -sin(elevationRadians); // eye position goes down as elevation angle goes up
eye.z = distanceFromScene * cos(elevationRadians); // z position is maximum when elevation is zero
// Lookat point is origin
scene = Vector3DMake(0, 0, 0); // Scene is at origin
// Camera tilt - involves x/y plane only - arbitrary vector length
up.x = sin(tiltRadians);
up.y = cos(tiltRadians);
up.z = 0;
Then you just apply the gluLookAt transformation, and also rotate the scene according to the device heading.
// Adjust view for device orientation
gluLookAt(eye.x, eye.y, eye.z, scene.x, scene.y, scene.z, up.x, up.y, up.z);
// Apply device heading to scene
glRotatef(currentHeadingDegrees, 0.0, 1.0, 0.0);
Try rotating the object depending upon the iPhone acceleration values:
float angle = -atan2(accelX, accelY) * 180.0f / M_PI; // convert to degrees, since glRotatef expects degrees
glPushMatrix();
glTranslatef(centerPoint.x, centerPoint.y, 0);
glRotatef(angle, 0, 0, 1);
glTranslatef(-centerPoint.x, -centerPoint.y, 0);
// ... draw the object here, now rotated about centerPoint ...
glPopMatrix();
where centerPoint is the middle point of the object.
Ooh, nice. GLGravity seems to get everything right except for the yaw. Here's what I would try: do everything GLGravity does, and then this:
Project a vector in the direction you want the teapot to face, using the compass or whatever you so choose. Then multiply a "forward" vector by the teapot's current rotation matrix, which will give you the direction the teapot is facing. Flatten the two vectors to the horizontal plane and take the angle between them.
This angle is your corrective yaw. Then just glRotatef by it.
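For the "flatten and take the angle" step, one standard way to get a signed angle (in radians) between two direction vectors a and b after dropping their vertical (y) components is:
angle = atan2(a.x * b.z - a.z * b.x, a.x * b.x + a.z * b.z)
(This is the usual 2D signed-angle formula, not code from the thread.)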
Whether or not the 3GS's compass is reliable and robust enough for this to work is another thing. Normal compasses don't work when the north vector is perpendicular to their face. But I just tried the Maps app on my workmate's 3GS and it seems to cope, so maybe they have got a mechanical solution in there. Knowing what the device is actually doing will help interpret the results it gives.
Make sure to test your app at the north and south poles once you're done. :-)
A much more stable gravity-based reference can now be obtained using CMMotionManager.
When starting motion updates with startDeviceMotionUpdates(), you can specify a reference frame.
This fuses the accelerometer, gyroscope and, optionally (depending on the chosen reference frame), magnetometer data. Raw accelerometer data is pretty noisy and bouncy (any acceleration of the device temporarily tilts the apparent gravity vector), and alone it doesn't make a good reference.
I've been low-pass filtering the accelerometer data, which helps a bit but makes the system slow.
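For reference, the usual low-pass filter here is exponential smoothing of each incoming sample:
filtered = k * raw + (1 - k) * previousFiltered
where k is a small constant (0.1 is a typical choice; no value is given above). A smaller k gives a steadier gravity estimate but responds more slowly, which is exactly the trade-off described.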