OpenGL-ES change angle of vision in frustum - iphone

Let's see if I can explain myself.
When you set up the glFrustum view it will give the perspective effect. Near things near & big... far things far & small. Everything looks like it shrinks along its Z axis to create this effect.
Is there a way to make it NOT shrink that much?
Is there a way to bring the perspective view closer to an orthographic view... but not so close that it loses perspective completely?
Thanks

The angle is determined by two parameters: the height of the near clipping plane (set by the top and bottom parameters) and the distance of the near clipping plane (set by zNear).
To make a perspective matrix that doesn't shrink the image so much, you can set a smaller height or push the near clipping plane further away.
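In other words, for a symmetric frustum the implied vertical field of view follows directly from those two parameters. As a one-line sketch, using the glFrustum argument names:
// Vertical FOV implied by a symmetric glFrustum(left, right, bottom, top, zNear, zFar)
float fovy = 2.0f * atanf(top / zNear);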

The key thing to understand is that an orthographic view is a perspective view with a FOV of zero and a camera at infinity. So you can approach an orthographic view by reducing the FOV while moving the camera further away.
I can suggest the following code, which computes a near-orthographic projection from a given FOV value theta. I use it in a personal project, though that project uses custom matrix classes rather than glOrtho and glFrustum, so this adaptation might not be exactly right. I hope it gives a good general idea though.
void SetFov(int width, int height, float theta)
{
    float near = -(width + height);
    float far = width + height;

    /* Set the projection matrix */
    if (theta < 1e-4f)
    {
        /* The easy way: a purely orthographic projection. */
        glOrtho(0, width, 0, height, near, far);
        return;
    }

    /* Compute a view that approximates the glOrtho view when theta
     * approaches zero. This view ensures that the z = 0 plane fills
     * the screen. */
    float t1 = tanf(theta / 2);       /* horizontal half-FOV tangent */
    float t2 = t1 * height / width;   /* vertical half-FOV tangent */
    float dist = width / (2.0f * t1); /* distance at which the view is
                                         exactly `width` wide */
    near += dist;
    far += dist;
    if (near <= 0.0f)
    {
        far -= (near - 1.0f);
        near = 1.0f;
    }

    /* glFrustum first, then glTranslatef: OpenGL post-multiplies, so the
     * combined matrix is frustum * translate and vertices are centred on
     * the screen before being projected. */
    glFrustum(-near * t1, near * t1, -near * t2, near * t2, near, far);
    glTranslatef(-0.5f * width, -0.5f * height, -dist);
}
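For what it's worth, a call site might look like the following. This is my assumption of how it slots into a fixed-function GL setup, and viewportWidth/viewportHeight are illustrative names:
/* Rebuild the projection whenever theta changes; a small theta gives a
 * near-orthographic view, and theta < 1e-4f falls back to glOrtho. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
SetFov(viewportWidth, viewportHeight, 0.1f);
glMatrixMode(GL_MODELVIEW);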

Related

How to reposition a circle to be outside of circumference of two other circles?

This is a question for Unity people or Math geniuses.
I'm making a game where I have a circle object that I can move, but I don't want it to intersect or pass into other (static) circles in the world (Unity's physics system isn't good enough to simply use for this, btw).
It's in a 3D world, but the circles only ever move on 2 axes.
I was able to get this working perfectly if circle hits only 1 other circle, but not 2 or more.
FYI: All circles are the same size.
Here's my working formula for 1 circle to move it to the edge of the colliding circle if intersecting:
newPosition = PositionOfStaticCircleThatWasJustIntersected + ((positionCircleWasMovedTo - PositionOfStaticCircleThatWasJustIntersected).normalized * circleSize);
But I can't figure out a formula if the moving circle hits 2 (or more) static circles at the same time.
One of the things that confuses me the most is the direction issue, depending on how all the circles are positioned and what direction the moving circle is coming from.
Here's an example image of what I'm trying to do.
Since we're operating in a 2D space, let's approach this with some geometry. Taking a close look at your desired outcome, a particular shape becomes apparent:
There's a triangle here! And since all circles are the same radius, we know even more: this is an isosceles triangle, where two sides are the same length. With that information in hand, the problem basically boils down to:
We know what d is, since it's the distance between the two circles being collided with. And we know what a is, since it's the radius of all the circles. With that information, we can figure out where to place the moved circle. We need to move it d/2 between the two circles (since the point will be equidistant between them), and h away from them.
Calculating the height h is straightforward, since this is a right-angle triangle. According to the Pythagorean theorem:
// By Pythagoras: h^2 + (d/2)^2 = (2a)^2, rewritten as:
// h = root((2a)^2 - (d/2)^2)
float h = Mathf.Sqrt(Mathf.Pow(2 * a, 2) - Mathf.Pow(d / 2, 2));
Now we need to turn these scalar quantities into vectors within our game space. For the vector between the two circles, that's easy:
Vector3 betweenVector = circle2Position - circle1Position;
But what about the height vector along the h direction? Since all movement is in a 2D plane, find a direction that your circles don't move along and use it to get the cross product (the perpendicular vector) with the betweenVector using Vector3.Cross(). For example, if the circles only move laterally:
Vector3 heightVector = Vector3.Cross(betweenVector, Vector3.up);
Bringing this all together, you might have a method like:
Vector3 GetNewPosition(Vector3 movingCirclePosition, Vector3 circle1Position,
    Vector3 circle2Position, float radius)
{
    // Assumes the two static circles are close enough that the moving
    // circle can touch both (halfDistance <= 2 * radius); otherwise the
    // Sqrt argument goes negative.
    float halfDistance = Vector3.Distance(circle1Position, circle2Position) / 2;
    float height = Mathf.Sqrt(Mathf.Pow(2 * radius, 2) - Mathf.Pow(halfDistance, 2));
    Vector3 betweenVector = circle2Position - circle1Position;
    Vector3 heightVector = Vector3.Cross(betweenVector, Vector3.up);

    // Two possible positions, one on either side of betweenVector
    Vector3 candidatePosition1 = circle1Position
        + betweenVector.normalized * halfDistance
        + heightVector.normalized * height;
    Vector3 candidatePosition2 = circle1Position
        + betweenVector.normalized * halfDistance
        - heightVector.normalized * height;

    // Absent any other information, assume the closer position is correct
    float distToCandidate1 = Vector3.Distance(movingCirclePosition, candidatePosition1);
    float distToCandidate2 = Vector3.Distance(movingCirclePosition, candidatePosition2);
    if (distToCandidate1 < distToCandidate2)
    {
        return candidatePosition1;
    }
    else
    {
        return candidatePosition2;
    }
}
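As a cross-check of the math, here is the same construction as a plain C++ sketch, independent of Unity. All names are illustrative, and it returns false in the degenerate case where the static circles are too far apart for the moving circle to touch both:

#include <cmath>

struct Vec2 { float x, y; };

// Place the moving circle so it touches both static circles c1 and c2
// (all circles share the same radius), on the side it approached from.
bool PlaceBetween(Vec2 c1, Vec2 c2, float radius, Vec2 moving, Vec2& out)
{
    float dx = c2.x - c1.x, dy = c2.y - c1.y;
    float d = std::sqrt(dx * dx + dy * dy);            // distance between centres
    float hSq = 4.0f * radius * radius - (d / 2.0f) * (d / 2.0f);
    if (d == 0.0f || hSq < 0.0f)
        return false;                                  // coincident or too far apart
    float h = std::sqrt(hSq);                          // Pythagoras, as above
    Vec2 mid = { c1.x + dx / 2.0f, c1.y + dy / 2.0f }; // midpoint of the base
    Vec2 perp = { -dy / d, dx / d };                   // unit perpendicular to the base
    // Flip to the side the moving circle is coming from.
    if ((moving.x - mid.x) * perp.x + (moving.y - mid.y) * perp.y < 0.0f)
    {
        perp.x = -perp.x;
        perp.y = -perp.y;
    }
    out = { mid.x + perp.x * h, mid.y + perp.y * h };
    return true;
}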

Camera Bounds in perspective mode in Unity 3D

How can I find the extreme left, right, top and bottom points of a perspective camera in Unity 3D? I am trying to do zooming and panning, and I need those points to check whether I am going out of my bounds. Is there any other way to find them?
This Unity manual entry directly answers your question:
FrustumSizeAtDistance
So to summarize:
The height of the view frustum at a given distance can be calculated like so:
var frustumHeight = 2.0f * distance * Mathf.Tan(camera.fieldOfView * 0.5f * Mathf.Deg2Rad);
If we already know the frustumHeight we can calculate the corresponding distance to the camera:
var distance = frustumHeight * 0.5f / Mathf.Tan(camera.fieldOfView * 0.5f * Mathf.Deg2Rad);
Once we have the frustumHeight at a given distance, we can calculate its width using the camera's aspect:
var frustumWidth = frustumHeight * camera.aspect;
This can also be reversed like so:
var frustumHeight = frustumWidth / camera.aspect;
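Putting those together, here is the same arithmetic as a small standalone sketch (plain C++ rather than Unity C#, just to show the math; fovDeg, aspect and distance are assumed inputs):

#include <cmath>

// Frustum extents at a given distance in front of a perspective camera.
// fovDeg is the vertical field of view in degrees; aspect = width / height.
void FrustumSizeAtDistance(float fovDeg, float aspect, float distance,
                           float& outWidth, float& outHeight)
{
    const float kDegToRad = 3.14159265f / 180.0f;
    outHeight = 2.0f * distance * std::tan(fovDeg * 0.5f * kDegToRad);
    outWidth = outHeight * aspect;
}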
Camera.ScreenToWorldPoint will assist you.
For instance, to find bottom-left point of the screen projected onto the world, use this:
camera.ScreenToWorldPoint(new Vector3(0, 0, distance_from_camera));

picking in 3D with ray-tracing using NinevehGL or OpenGL i-phone

I couldn't find a correct and understandable explanation of picking in 3D using ray tracing. Has anyone implemented this algorithm in any language? Please share working code directly; since pseudocode cannot be compiled, it is generally written with parts missing.
What you have is a position in 2D on the screen. The first thing to do is convert that point from pixels to normalized device coordinates (-1 to 1). Then you need to find the line in 3D space that the point represents. For this, you need the transformation matrices that your 3D app uses to create the projection and camera.
Typically you have 3 matrices: projection, view and model. When you specify vertices for an object, they're in "object space". Multiplying by the model matrix gives the vertices in "world space". Multiplying again by the view matrix gives "eye/camera space". Multiplying again by the projection matrix gives "clip space". Clip space has non-linear depth. Adding a Z component to your mouse coordinates puts them in clip space. You can perform the line/object intersection tests in any linear space, so you must at least move the mouse coordinates to eye space, but it's more convenient to perform the intersection tests in world space (or object space, depending on your scene graph).
To move the mouse coordinates from clip space to world space, add a Z component and multiply by the inverse projection matrix and then the inverse camera/view matrix. To create a line, two points along Z will be computed: from and to.
In the following example, I have a list of objects, each with a position and bounding radius. The intersections of course never match perfectly but it works well enough for now. This isn't pseudocode, but it uses my own vector/matrix library. You'll have to substitute your own in places.
vec2f mouse = (vec2f(mousePosition) / vec2f(windowSize)) * 2.0f - 1.0f;
mouse.y = -mouse.y; //origin is top-left and +y mouse is down

mat44 toWorld = (camera.projection * camera.transform).inverse();
//equivalent to camera.transform.inverse() * camera.projection.inverse() but faster

vec4f from = toWorld * vec4f(mouse, -1.0f, 1.0f);
vec4f to = toWorld * vec4f(mouse, 1.0f, 1.0f);

from /= from.w; //perspective divide ("normalize" homogeneous coordinates)
to /= to.w;

int clickedObject = -1;
float minDist = 99999.0f;
vec3f direction = to.xyz() - from.xyz(); //the ray direction is the same for every object

for (size_t i = 0; i < objects.size(); ++i)
{
    float t1, t2;
    if (intersectSphere(from.xyz(), direction, objects[i].position, objects[i].radius, t1, t2))
    {
        //object i has been clicked. probably best to find the minimum t1 (front-most object)
        if (t1 < minDist)
        {
            minDist = t1;
            clickedObject = (int)i;
        }
    }
}
//clicked object is objects[clickedObject]
Instead of intersectSphere, you could use a bounding box or other implicit geometry, or intersect a mesh's triangles (this may require building a kd-tree for performance reasons).
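For reference, if you swap the spheres for axis-aligned bounding boxes, the usual choice is a slab test. Here is a sketch of one, using a minimal vector struct of my own since the vector library above isn't shown:

#include <cmath>
#include <limits>

struct vec3 { float x, y, z; };

// Ray/AABB slab test: returns true on a hit, writing entry/exit times to
// t1/t2. Dividing by a zero direction component yields +/-infinity, which
// the min/max logic handles (except the 0/0 case of an origin exactly on
// a slab boundary).
bool intersectAABB(const vec3& p, const vec3& d,
                   const vec3& boxMin, const vec3& boxMax,
                   float& t1, float& t2)
{
    float tmin = -std::numeric_limits<float>::infinity();
    float tmax = std::numeric_limits<float>::infinity();
    const float po[] = { p.x, p.y, p.z };
    const float dir[] = { d.x, d.y, d.z };
    const float lo[] = { boxMin.x, boxMin.y, boxMin.z };
    const float hi[] = { boxMax.x, boxMax.y, boxMax.z };
    for (int i = 0; i < 3; ++i)
    {
        float a = (lo[i] - po[i]) / dir[i];
        float b = (hi[i] - po[i]) / dir[i];
        tmin = std::fmax(tmin, std::fmin(a, b));
        tmax = std::fmin(tmax, std::fmax(a, b));
    }
    if (tmax < tmin || tmax < 0.0f)
        return false;
    t1 = tmin;
    t2 = tmax;
    return true;
}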
[EDIT]
Here's an implementation of the line/sphere intersection (based on the link in the code comment below). It assumes the sphere is at the origin, so instead of passing from.xyz() as p, pass from.xyz() - objects[i].position.
//ray at position p with direction d intersects sphere at (0,0,0) with radius r.
//returns intersection times along the ray in t1 and t2
bool intersectSphere(const vec3f& p, const vec3f& d, float r, float& t1, float& t2)
{
    //http://wiki.cgsociety.org/index.php/Ray_Sphere_Intersection
    float A = d.dot(d);
    float B = 2.0f * d.dot(p);
    float C = p.dot(p) - r * r;
    float dis = B * B - 4.0f * A * C;
    if (dis < 0.0f)
        return false;
    float S = sqrt(dis);
    t1 = (-B - S) / (2.0f * A);
    t2 = (-B + S) / (2.0f * A);
    return true;
}
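With this origin-centred version, the call in the earlier loop becomes (same variable names as before):
if (intersectSphere(from.xyz() - objects[i].position, direction,
                    objects[i].radius, t1, t2))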
vec4f from = toWorld * vec4f(mouse, -1.0f, 1.0f);
vec4f to = toWorld * vec4f(mouse, 1.0f, 1.0f);
I'm assuming that 'from' is the position of the mouse cursor? If so, then why is its z negative one, if we are assuming OpenGL coordinates?
Also, in this way do we assume that the depth at this point is -1 to +1, rather than the depth of our frustum?

Is CGContextAddArc really that slow (compared to a circle drawn with a few lines)?

Folks,
While coding up a few dials and sliders (e.g. like a big volume button one can rotate around) - I found that the standard CGContextAddArc() used like:
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGColorSpaceRef rgbColorspace = CGColorSpaceCreateDeviceRGB();

    CGContextSetLineWidth(ctx, radius * (KE-KR)+8);
    CGContextSetStrokeColorWithColor(ctx, self.foregroundColor.CGColor);
    .... more some colour/width/etc settings
    ...
    CGContextAddArc(ctx, dx, dy, radius, 0, 2*M_PI, 0);
to be unbelievably slow.
On an iPad - with a handful of filled/stroked circles - I got less than some 10 clean [self setNeedsDisplay] updates/second during a drag. A very quick hack with a hand-drawn circle (shown below) was several orders of magnitude faster. The same applies to the emulator.
Why is this? It seems to be the case for both a normal fill and various gradient fills. What am I doing wrong?
Dw.
// Stupid replacement for CGContextAddArc() which seems to be very slow.
//
void CGContextAddCircle(CGContextRef ctx, float ox, float oy, float radius)
{
    double len = 2 * M_PI * radius;
    double step = 1.8 / len; // over the top :)

    // translating/scaling would be more efficient, etc.
    //
    float x = ox + radius;
    float y = oy;

    // stupid hack - should just do a quadrant and mirror it twice.
    //
    CGContextMoveToPoint(ctx, x, y);
    for (double a = step; a < 2.0 * M_PI - step; a += step) {
        x = ox + radius * cos(a);
        y = oy + radius * sin(a);
        CGContextAddLineToPoint(ctx, x, y);
    }
    CGContextClosePath(ctx);
}
The vector drawing operations of Quartz 2D can be slow, which is why it is a good idea to redraw only when needed.
In your case, I would suggest drawing your volume button once, then transforming the UIView or CALayer into which you've drawn the button using a rotational transform. By simply moving, rotating, or scaling a view, you do not trigger an expensive redraw. The content is already cached as a texture, and the GPU can quickly manipulate and composite this rasterized content on top of your other views.
You'll find that avoiding redrawing in this manner will yield much improved performance.
Issue partly (mostly) resolved.
Extensive benchmarking does show that AddArc is indeed slow compared to drawing a complete circle with a vector/straight-line path, for circles in the 100-200 pixel radius range. For partial circles the effect is much less pronounced; I am wondering if this is tied to the number of Béziers.
BUT:
The code below did not compile as one would read it; M_PI was not the 3.14etc actually expected, but had been set to (3.14... * ((EVP_ARM7_ADJUST[(PLTF)])) by an included fixed-point DSP library (set to x100).
Hence the end angle of the arc was specified as a double a factor of 256 too large.
And it was the latter that made the issue so noticeable (evidently the underlying implementation just keeps going round and round and round...).
So the issue is now understood (and I will keep an optimized/benchmarked version).
Thanks for the help!

How do I use the gravity vector to correctly transform scene for augmented reality?

I'm trying to figure out how to get an OpenGL-specified object to be displayed correctly according to the device orientation (i.e. according to the gravity vector from the accelerometer, and the heading from the compass).
The GLGravity sample project has an example which is almost like this (despite ignoring heading), but it has some glitches. For example, the teapot jumps 180deg as the device viewing angle crosses the horizon, and it also rotates spuriously if you tilt the device from portrait into landscape. This is fine for the context of this app, as it just shows off an object and it doesn't matter that it does these things. But it means that the code just doesn't work when you attempt to emulate real life viewing of an OpenGL object according to the device's orientation. What happens is that it almost works, but the heading rotation you apply from the compass gets "corrupted" by the spurious additional rotations seen in the GLGravity example project.
Can anyone provide sample code that shows how to adjust correctly for the device orientation (ie. gravity vector), or to fix the GLGravity example so that it doesn't include spurious heading changes?
//Clear matrix to be used to rotate from the current referential to one based on the gravity vector
bzero(matrix, sizeof(matrix));
matrix[3][3] = 1.0;
//Setup first matrix column as gravity vector
matrix[0][0] = accel[0] / length;
matrix[0][1] = accel[1] / length;
matrix[0][2] = accel[2] / length;
//Setup second matrix column as an arbitrary vector in the plane perpendicular to the gravity vector {Gx, Gy, Gz}, defined by the equation "Gx * x + Gy * y + Gz * z = 0", in which we arbitrarily set x=0 and y=1 (note this divides by zero when the device is exactly horizontal, i.e. accel[2] == 0)
matrix[1][0] = 0.0;
matrix[1][1] = 1.0;
matrix[1][2] = -accel[1] / accel[2];
length = sqrtf(matrix[1][0] * matrix[1][0] + matrix[1][1] * matrix[1][1] + matrix[1][2] * matrix[1][2]);
matrix[1][0] /= length;
matrix[1][1] /= length;
matrix[1][2] /= length;
//Setup third matrix column as the cross product of the first two
matrix[2][0] = matrix[0][1] * matrix[1][2] - matrix[0][2] * matrix[1][1];
matrix[2][1] = matrix[1][0] * matrix[0][2] - matrix[1][2] * matrix[0][0];
matrix[2][2] = matrix[0][0] * matrix[1][1] - matrix[0][1] * matrix[1][0];
//Finally load matrix
glMultMatrixf((GLfloat*)matrix);
Here's a clarification showing how to get the elevation and tilt that are needed for gluLookAt solution as shown in my last answer:
// elevation comes from z component (0 = facing horizon)
elevationRadians = asin(gravityVector.z / Vector3DMagnitude(gravityVector));
// tilt is how far screen is from vertical, looking along z axis
tiltRadians = atan2(-gravityVector.y, -gravityVector.x) - M_PI_2;
Following up on Chris's suggestion: I'm not sure if I've got this all correct due to differing conventions of row/column order and heading cw or ccw. However, the following code is what I came up with:
Vector3D forward = Vector3DMake(0.0f, 0.0f, -1.0f);
// Multiply it by current rotation matrix to get teapot direction
Vector3D direction;
direction.x = matrix[0][0] * forward.x + matrix[1][0] * forward.y + matrix[2][0] * forward.z;
direction.y = matrix[0][1] * forward.x + matrix[1][1] * forward.y + matrix[2][1] * forward.z;
direction.z = matrix[0][2] * forward.x + matrix[1][2] * forward.y + matrix[2][2] * forward.z;
heading = atan2(direction.z, direction.x) * 180 / M_PI;
// Use this heading to adjust the teapot direction back to keep it fixed
// Rotate about vertical axis (Y), as it is a heading adjustment
glRotatef(heading, 0.0, 1.0, 0.0);
When I run this code, the teapot behaviour has apparently "improved", e.g. the heading no longer flips 180deg when the device screen (in portrait view) is pitched forward/back through upright. However, it still makes major jumps in heading when the device (in landscape view) is pitched forward/back, so something's not right. It suggests that the above calculation of the actual heading is incorrect...
I finally found a solution that works. :-)
I dropped the rotation matrix approach and instead adopted gluLookAt. To make this work you need to know the device "elevation" (viewing angle relative to the horizon, i.e. 0 on the horizon, +90 overhead), and the camera's "tilt" (how far the device is rotated from vertical in its x/y plane, i.e. 0 when vertical/portrait, +/-90 when horizontal/landscape), both of which are obtained from the device gravity vector components.
Vector3D eye, scene, up;
CGFloat distanceFromScene = 0.8;
// Adjust eye position for elevation (y/z)
eye.x = 0;
eye.y = distanceFromScene * -sin(elevationRadians); // eye position goes down as elevation angle goes up
eye.z = distanceFromScene * cos(elevationRadians); // z position is maximum when elevation is zero
// Lookat point is origin
scene = Vector3DMake(0, 0, 0); // Scene is at origin
// Camera tilt - involves x/y plane only - arbitrary vector length
up.x = sin(tiltRadians);
up.y = cos(tiltRadians);
up.z = 0;
Then you just apply the gluLookAt transformation, and also rotate the scene according to the device heading.
// Adjust view for device orientation
gluLookAt(eye.x, eye.y, eye.z, scene.x, scene.y, scene.z, up.x, up.y, up.z);
// Apply device heading to scene
glRotatef(currentHeadingDegrees, 0.0, 1.0, 0.0);
Try rotating the object depending upon the iPhone acceleration values.
float angle = -atan2(accelX, accelY) * 180 / M_PI; // glRotatef expects degrees, not radians
glPushMatrix();
glTranslatef(centerPoint.x, centerPoint.y, 0);
glRotatef(angle, 0, 0, 1);
glTranslatef(-centerPoint.x, -centerPoint.y, 0);
// draw the object (in its usual coordinates) here, before glPopMatrix
// discards the rotation
glPopMatrix();
Where centerPoint is the middle point of the object.
oo, nice.
GLGravity seems to get everything right except for the yaw. Here's what I would try. Do everything GLGravity does, and then this:
Project a vector in the direction you want the teapot to face, using the compass or whatever you so choose. Then multiply a "forward" vector by the teapot's current rotation matrix, which will give you the direction the teapot is facing. Flatten the two vectors to the horizontal plane and take the angle between them.
This angle is your corrective yaw. Then just glRotatef by it.
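As a sketch of that flatten-and-measure step (plain C++, names illustrative; drop the y components and compare headings in the horizontal x/z plane):

#include <cmath>

// Signed angle in degrees between a desired facing direction and the
// teapot's current facing direction, both flattened onto the x/z plane.
// Assumes neither direction is purely vertical.
float CorrectiveYawDegrees(float desiredX, float desiredZ,
                           float facingX, float facingZ)
{
    const float kPi = 3.14159265f;
    float a = std::atan2(desiredZ, desiredX) - std::atan2(facingZ, facingX);
    return a * 180.0f / kPi;
}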
Whether or not the 3GS's compass is reliable and robust enough for this to work is another thing. Normal compasses don't work when the north vector is perpendicular to their face. But I just tried the Maps app on my workmate's 3GS and it seems to cope, so maybe they have got a mechanical solution in there. Knowing what the device is actually doing will help interpret the results it gives.
Make sure to test your app at the north and south poles once you're done. :-)
Getting a much more stable gravity-based reference can now be done using CMMotionManager.
When starting motion updates with startDeviceMotionUpdates(), you can specify a reference frame.
This fuses the accelerometer, gyroscope and, optionally (depending on the chosen reference frame), magnetometer data. Accelerometer data is pretty noisy and bouncy (any sideways motion of the device temporarily tilts the apparent gravity vector by the device's acceleration) and alone doesn't make a good reference.
I've been low-pass filtering the accelerometer data, which helps a bit but makes the system slow.
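For reference, the filter in question is just an exponential moving average, something like this sketch (the smoothing constant is an assumed tunable, not a value from any sample code):

// One-pole low-pass filter over raw accelerometer samples. A small alpha
// smooths heavily (stable but laggy, hence "slow"); a larger alpha tracks
// quickly but stays noisy.
struct GravityFilter
{
    float alpha = 0.1f;            // assumed tuning value
    float gx = 0, gy = 0, gz = 0;  // filtered gravity estimate

    void update(float ax, float ay, float az)
    {
        gx = alpha * ax + (1.0f - alpha) * gx;
        gy = alpha * ay + (1.0f - alpha) * gy;
        gz = alpha * az + (1.0f - alpha) * gz;
    }
};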