Implementation of gluLookAt and gluPerspective - iPhone

I've written a small 2D engine in OpenGL in the process of making a game. I'm using OpenGL ES 2 and the code compiles and runs on iOS and Mac OS X.
Now I'm extending it to support 3D, and I'm having a problem setting up the camera.
I've checked the code a hundred times and I can't find where the problem is, so maybe someone with experience in this can give me an idea.
This is the code I have. I'm posting the part where I think the problem might be, but if something else is needed just ask.
Matrix4 _getFrustumMatrix(float left, float right, float bottom, float top, float near, float far){
    Matrix4 res = Matrix4(2.0 * near / (right - left), 0, 0, 0,
                          0, 2.0 * near / (top - bottom), 0, 0,
                          (right + left) / (right - left), (top + bottom) / (top - bottom), -(far + near) / (far - near), -1.0,
                          0, 0, -2.0 * far * near / (far - near), 0);
    return res;
}
Matrix4 _getPerspectiveMatrix(float near, float far, float angleOfView){
    static float aspectRatio = float(SCREENW) / float(SCREENH);
    float top = near * tan(angleOfView * 3.1415927 / 360.0); // tan(fov/2), fov in degrees
    float bottom = -top;
    float left = bottom * aspectRatio;
    float right = top * aspectRatio;
    return _getFrustumMatrix(left, right, bottom, top, near, far);
}
Matrix4 _getLookAtMatrix(Vector3 eye, Vector3 at, Vector3 up){
    Vector3 forward, side;
    forward = at - eye;
    forward.normalize();
    side = forward ^ up;   // '^' is the cross product
    side.normalize();
    up = side ^ forward;
    Matrix4 res = Matrix4(side.x, up.x, -forward.x, 0,
                          side.y, up.y, -forward.y, 0,
                          side.z, up.z, -forward.z, 0,
                          0, 0, 0, 1);
    res.translate(Vector3(0 - eye));
    return res;
}
void Scene3D::_deepRender(){
    cameraEye = Vector3(10, 0, 40);
    cameraAt = Vector3(0, 0, 0);
    cameraUp = Vector3(0, 1, 0);

    MatrixStack::push();
    Matrix4 projection = _getPerspectiveMatrix(1, 100, 45);
    Matrix4 view = _getLookAtMatrix(cameraEye, cameraAt, cameraUp);
    MatrixStack::set(projection * view);
    Space3D::_deepRender();
    MatrixStack::pop();
}
The drawn object is a representation of the axes, where x = red, y = green, z = blue, and it's located at (0,0,0).
If I put the eye at (0,0,40), everything looks as expected.
If I put the eye at (10,0,40), then the object is not drawn in the middle of the screen as it should be.
This is the Matrix4::translate method:
void Matrix4::translate(const Vector3& v) {
    // Post-multiplies by a translation (M = M * T), so it modifies
    // the last column of the matrix (a14, a24, a34, a44).
    a14 += a11 * v.x + a12 * v.y + a13 * v.z;
    a24 += a21 * v.x + a22 * v.y + a23 * v.z;
    a34 += a31 * v.x + a32 * v.y + a33 * v.z;
    a44 += a41 * v.x + a42 * v.y + a43 * v.z;
}
EDIT: To add some information:
Using _getLookAtMatrix() with these parameters:
cameraEye = Vector3(40,40,40);
cameraAt = Vector3(0,0,0);
cameraUp = Vector3(0,1,0);
shouldn't that give me a matrix equivalent to this one?
Matrix4 view;
view.setIdentity();
view.translate(Vector3(0,0,-69.2820323)); // 69.2820323 is the length of Vector3(40,40,40)
view.rotate(45, Vector3(1,0,0));
view.rotate(-45, Vector3(0,1,0));
At least those transformations make sense to me, and the resulting image looks like what I'd expect.
But this matrix and the one I get using _getLookAtMatrix() are very different:
view:
0.707106769, -0.49999997, 0.49999997, 0,
0, 0.707106769, 0.707106769, 0,
-0.707106769, -0.49999997, 0.49999997, 0,
0, 0, -69.2820358, 1
_getLookAtMatrix(cameraEye, cameraAt, cameraUp):
0.707106769, 0, -0.707106769, 0,
-0.408248276, 0.816496551, -0.408248276, 0,
0.577350259, 0.577350259, 0.577350259, 0,
-35.0483475, -55.7538719, 21.520195, 1

You seem to have some serious ordering inconsistencies in your matrix class.
For example, I assumed your Matrix4 constructor takes its arguments (the matrix elements) in column-major order, since otherwise your functions wouldn't match the reference implementations of glFrustum and gluLookAt and you would get completely screwed results.
The code of your translate function also looks correct, since it has to modify the last column of the matrix, which consists of the elements (a14, a24, a34 and a44).
But your printout of the view matrix suggests that translate actually modifies the last row, unless you print the matrix in column-major format and therefore transposed. In that case, though, the printout of _getLookAtMatrix suggests that the Matrix4 constructor takes its arguments in row-major order, which in turn invalidates other things.
Of course all of this also depends on how you send the matrices to OpenGL and how you use them in the vertex shader (I assume ES 2.0, otherwise there would be no need for your own matrix library). If you actually use ES 1, then you need to send the matrix elements to OpenGL in column-major order, with the translation in the last column and not the last row.
No matter which convention you use, there is definitely a severe inconsistency inside your matrix code. But without seeing the whole Matrix4 class, the vertex shader, and the code where you upload the matrices to OpenGL, it is hard to tell where this inconsistency lies.
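To make the convention concrete, here is a minimal sketch (a hypothetical Mat4, not your Matrix4 class) of one fully self-consistent column-major setup, matching what glUniformMatrix4fv expects when transpose is GL_FALSE (which ES 2.0 requires anyway):
// Hypothetical Mat4 for illustration only: column-major storage,
// i.e. element (row r, column c) lives at m[c * 4 + r].
struct Mat4 {
    float m[16];

    // Post-multiply by a translation: M = M * T.
    // With column-major storage this touches the LAST COLUMN, m[12..15],
    // which is where gluLookAt-style matrices keep the translation.
    void translate(float x, float y, float z) {
        m[12] += m[0] * x + m[4] * y + m[8]  * z;
        m[13] += m[1] * x + m[5] * y + m[9]  * z;
        m[14] += m[2] * x + m[6] * y + m[10] * z;
        m[15] += m[3] * x + m[7] * y + m[11] * z;
    }
};

// Upload without transposing:
// glUniformMatrix4fv(viewLoc, 1, GL_FALSE, view.m);
Note that if a print routine walks such an array four floats per line, the matrix prints transposed, which is exactly the kind of red herring your two dumps suggest.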

Related

Smoothed-particle Hydrodynamics Using Niagara in Unreal Engine

This is going to be quite a long post, sorry, but I think it's worth it, because it's quite complicated and I imagine a lot of other people would really like to be able to achieve this effect. There are a few other questions on here about SPH, but none of them relate to a Niagara implementation. I've also posted this question on Unreal Engine Answers.
I've been attempting to replicate the fluid simulation in Niagara as shown by Asher Zhu here: The Art of Illusion - Niagara Simulation Framework Overview. Skip to 20:25 for the effect I'm after.
Seeing as he shows none of the Niagara system at all apart from some of the bits for rendering it (a stage I've yet to get to), I've followed the article here: link.
Now, I have it looking more or less like a fluid. However, it doesn't really look anything like Asher's. It's rather unstable: it will sit for a few seconds with a region of higher density, then explode, and then settle down. It also never develops any depth; all the particles, unless they're flying about erratically, sit on the floor. The other problem is collision: I can't see how Asher has managed to get such clean collisions with the environment. My signed distance fields are big, round and uneven, and the particles never get anywhere near the walls.
The fourth image below shows it exploding just after it got to the third image, and the fifth image shows what it looks like after it finally settles down (as well as how far away from the walls the particles end up). The last image shows that it's completely flat (this isn't an issue with the volume of the box; I've tested that).
It's difficult to show everything in the Niagara system on here but the crucial bit is the HLSL code:
OutVelocity = Velocity;
OutPosition = Position;
Density = 0;
float Pressure = 0;
float smoothingRadius = 1.0f;
float restDensity = 0.2f;
float viscosity = 0.018f;
float gas = 500.0f;
const float3 gravity = float3(0, 0, -98); // note: not applied in the integration below
float pi = 3.141593;
int numParticles;
DirectReads.GetNumParticles(numParticles);
const float Poly6_constant = (315 / (64 * pi * pow(smoothingRadius, 9)));
const float Spiky_constant = (-45 / (pi * pow(smoothingRadius, 6)));
float3 forcePressure = float3(0, 0, 0);
float3 forceViscosity = float3(0, 0, 0);
#if GPU_SIMULATION
//Calculate the density of this particle based on the proximity of the other particles.
for (int i = 0; i < numParticles; ++i)
{
    bool myBool; //Temporary bool used to catch valid/invalid results for direct reads.
    float OtherMass;
    DirectReads.GetFloatByIndex<Attribute="Mass">(i, myBool, OtherMass);
    float3 OtherPosition;
    DirectReads.GetVectorByIndex<Attribute="Position">(i, myBool, OtherPosition);

    // Calculate the distance between the other particle and this one.
    float distanceBetween = distance(OtherPosition, OutPosition);
    if (distanceBetween < smoothingRadius)
    {
        Density += OtherMass * Poly6_constant * pow(smoothingRadius - distanceBetween, 3);
    }
}

//Avoid negative pressure by clamping density to the reference value.
Density = max(restDensity, Density);

//Calculate pressure.
Pressure = gas * (Density - restDensity);

//Calculate the forces.
for (int i = 0; i < numParticles; ++i)
{
    if (i != InstanceId) //Only calculate the pressure-based force and Laplacian smoothing function if the other particle is not the current particle.
    {
        bool myBool; //Temporary bool used to catch valid/invalid results for direct reads.
        float OtherMass;
        DirectReads.GetFloatByIndex<Attribute="Mass">(i, myBool, OtherMass);
        float OtherDensity;
        DirectReads.GetFloatByIndex<Attribute="Density">(i, myBool, OtherDensity);
        float3 OtherPosition;
        DirectReads.GetVectorByIndex<Attribute="Position">(i, myBool, OtherPosition);
        float3 OtherVelocity;
        DirectReads.GetVectorByIndex<Attribute="Velocity">(i, myBool, OtherVelocity);

        float3 direction = OutPosition - OtherPosition;
        float3 normalisedVector = normalize(direction);
        float distanceBetween = distance(OtherPosition, OutPosition);

        if (distanceBetween > 0 && distanceBetween < smoothingRadius) //distanceBetween must be > 0 to avoid a div0 error.
        {
            float OtherPressure = gas * (OtherDensity - restDensity);

            //Calculate the pressure-based force.
            forcePressure += -1 * Mass * normalisedVector * (Pressure + OtherPressure) / (2 * Density * OtherDensity) * Spiky_constant * pow(smoothingRadius - distanceBetween, 2);

            //Viscosity-based force computation with Laplacian smoothing function (W).
            const float W = -(pow(distanceBetween, 3) / (2 * pow(smoothingRadius, 3))) + (pow(distanceBetween, 2) / pow(smoothingRadius, 2)) + (smoothingRadius / (2 * distanceBetween)) - 1;
            forceViscosity += viscosity * (OtherMass / Mass) * (1 / OtherDensity) * (OtherVelocity - Velocity) * W * normalisedVector;
            //forceViscosity += viscosity * (OtherMass / Mass) * (1 / OtherDensity) * (OtherVelocity - Velocity) * (45 / (pi * pow(smoothingRadius, 6))) * (smoothingRadius - distanceBetween);
        }
    }
}

OutVelocity += DeltaTime * ((forcePressure + forceViscosity) / Density);
OutPosition += DeltaTime * OutVelocity;
#endif
This code does two loops over all the other particles in the system: one to accumulate the density (from which the pressure is derived) and one to calculate the forces. Then it outputs the velocity and position, just like the article I linked to above and some other implementations I've seen. Yet it simply doesn't behave as shown in those resources.
I haven't applied any grid-based optimisation; to do this I'll just apply the grid optimisation used in the PBD example in UE's Content Examples project, but for now it's an added complication that isn't really needed, since it runs fine with thousands of particles even without it. (A rough sketch of the grid idea follows below.)
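For reference, here is a hedged CPU-side C++ sketch of that uniform-grid idea (illustration only, not the Niagara/PBD implementation, which does the equivalent on the GPU): bucket particles into cells the size of the smoothing radius, then only test the 3x3x3 block of cells around each particle.
#include <cmath>
#include <unordered_map>
#include <vector>

// Hypothetical spatial hash for neighbour search.
struct SpatialHash {
    float cellSize; // set this to the smoothing radius
    std::unordered_map<long long, std::vector<int>> cells;

    long long key(long long ix, long long iy, long long iz) const {
        // Three large primes spread the cell coordinates across the hash space.
        return (ix * 73856093LL) ^ (iy * 19349663LL) ^ (iz * 83492791LL);
    }

    void insert(int particle, float x, float y, float z) {
        cells[key((long long)std::floor(x / cellSize),
                  (long long)std::floor(y / cellSize),
                  (long long)std::floor(z / cellSize))].push_back(particle);
    }

    // Collect candidate neighbours of (x, y, z) from the surrounding cells.
    std::vector<int> neighbours(float x, float y, float z) const {
        std::vector<int> result;
        long long cx = (long long)std::floor(x / cellSize);
        long long cy = (long long)std::floor(y / cellSize);
        long long cz = (long long)std::floor(z / cellSize);
        for (long long dx = -1; dx <= 1; ++dx)
            for (long long dy = -1; dy <= 1; ++dy)
                for (long long dz = -1; dz <= 1; ++dz) {
                    auto it = cells.find(key(cx + dx, cy + dy, cz + dz));
                    if (it != cells.end())
                        result.insert(result.end(), it->second.begin(), it->second.end());
                }
        return result;
    }
};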
I've looked at a few resources (articles, videos and academic research papers) and I've spent a fortnight experimenting, including trial and error on the values at the top of the code. I'm obviously missing something crucial. What can it be? I'm so frustrated now that any help would be much appreciated.

Vertex position relative to normal

In a surface shader, given the world's up axis (and the others too), a world-space position, and a normal in world space, how can we rotate the world-space position into the space of the normal?
That is, given an up vector and a non-orthogonal target up vector, how can we transform the position by rotating its up vector?
I need this so I can get the vertex position affected only by the object's rotation matrix, which I don't have access to.
Here's a graphical visualization of what I want to do:
Up is the world up vector
Target is the world space normal
Pos is arbitrary
The diagram is bidimensional, but I need to solve this for a 3D space.
It looks like you're trying to rotate pos by the same rotation that would transform up into new_up.
Using the rotation matrix found here, we can rotate pos with the following code. This will work either in the surface function or in a supplementary vertex function, depending on your application:
// Our 3 vectors
float3 pos;
float3 new_up;
float3 up = float3(0, 1, 0);

// Build the rotation matrix using the notation from the link above
float3 v = cross(up, new_up);
float s = length(v); // Sine of the angle
float c = dot(up, new_up); // Cosine of the angle
float3x3 VX = float3x3(
    0, -1 * v.z, v.y,
    v.z, 0, -1 * v.x,
    -1 * v.y, v.x, 0
); // This is the skew-symmetric cross-product matrix of v
float3x3 I = float3x3(
    1, 0, 0,
    0, 1, 0,
    0, 0, 1
); // The identity matrix
float3x3 R = I + VX + mul(VX, VX) * (1 - c) / pow(s, 2); // The rotation matrix! YAY!
// (Note: this breaks down when s == 0, i.e. up and new_up are parallel.)

// Finally we rotate
float3 new_pos = mul(R, pos);
This assumes that new_up is normalized.
If the "target up normal" is a constant, the calculation of R could (and should) happen only once per frame. I'd recommend doing it on the CPU side and passing it into the shader as a variable, since calculating it for every vertex/fragment is costly; consider what it is you actually need.
If your pos is a 4-component vector, just apply the above to the first three elements; the fourth element can remain unchanged (it doesn't really mean anything in this context anyway).
I'm away from a machine where I can run shader code, so if I made any syntactic mistakes in the above, please forgive me.
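For the CPU-side route mentioned above, a C++ sketch might look like this (Vec3 is a hypothetical stand-in for your math library's vector type). It folds (1 - c)/s^2 into 1/(1 + c), which is algebraically equal and avoids computing s at all:
#include <cmath>

// Hypothetical minimal vector type; substitute your math library's.
struct Vec3 { float x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Writes into R (row-major 3x3) the rotation taking `up` onto `newUp`.
// Both inputs must be unit length. The anti-parallel case (c near -1)
// is left unhandled here, just as in the shader version above.
void buildRotation(Vec3 up, Vec3 newUp, float R[9]) {
    Vec3 v = cross(up, newUp);    // rotation axis scaled by the sine
    float c = dot(up, newUp);     // cosine of the angle
    float k = 1.0f / (1.0f + c);  // equals (1 - c) / s^2
    R[0] = c + v.x * v.x * k;   R[1] = -v.z + v.x * v.y * k; R[2] = v.y + v.x * v.z * k;
    R[3] = v.z + v.x * v.y * k; R[4] = c + v.y * v.y * k;    R[5] = -v.x + v.y * v.z * k;
    R[6] = -v.y + v.x * v.z * k; R[7] = v.x + v.y * v.z * k; R[8] = c + v.z * v.z * k;
}
You would then upload R once per frame as a shader uniform instead of rebuilding it per vertex.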
Not tested, but you should be able to input a starting point and an axis. Then all you do is change procession, which is a normalized (0-1) float along the circumference, and your point will update accordingly.
using UnityEngine;
using System.Collections;

public class Follower : MonoBehaviour {
    Vector3 point;
    Vector3 origin = Vector3.zero;
    Vector3 axis = Vector3.forward;
    float distance;
    Vector3 direction;
    float procession = 0f; // < normalized

    void Update() {
        Vector3 offset = point - origin;
        distance = offset.magnitude;
        direction = offset.normalized;
        float angle = (procession % 1f) * 2f * Mathf.PI; // fraction of a full turn, in radians
        direction = Quaternion.AngleAxis(Mathf.Rad2Deg * angle, axis) * direction;
        Ray ray = new Ray(origin, direction);
        point = ray.GetPoint(distance);
    }
}

picking in 3D with ray-tracing using NinevehGL or OpenGL iPhone

I couldn't find a correct and understandable explanation of picking in 3D with the ray-tracing method. Has anyone implemented this algorithm in any language? Please share directly working code, because pseudocode can't be compiled and is generally written with parts missing.
What you have is a position in 2D on the screen. The first thing to do is convert that point from pixels to normalized device coordinates (-1 to 1). Then you need to find the line in 3D space that the point represents. For this, you need the transformation matrix/matrices that your 3D app uses to create a projection and camera.
Typically you have 3 matrices: projection, view and model. When you specify vertices for an object, they're in "object space". Multiplying by the model matrix gives the vertices in "world space". Multiplying again by the view matrix gives "eye/camera space". Multiplying again by the projection matrix gives "clip space". Clip space has non-linear depth; adding a Z component to your mouse coordinates puts them in clip space. You can perform the line/object intersection tests in any linear space, so you must at least move the mouse coordinates to eye space, but it's more convenient to perform the intersection tests in world space (or object space, depending on your scene graph).
To move the mouse coordinates from clip space to world space, add a Z component and multiply by the inverse projection matrix and then the inverse camera/view matrix. To create a line, compute two points along Z: from and to.
In the following example, I have a list of objects, each with a position and bounding radius. The intersections of course never match perfectly but it works well enough for now. This isn't pseudocode, but it uses my own vector/matrix library. You'll have to substitute your own in places.
vec2f mouse = (vec2f(mousePosition) / vec2f(windowSize)) * 2.0f - 1.0f;
mouse.y = -mouse.y; //origin is top-left and +y mouse is down
mat44 toWorld = (camera.projection * camera.transform).inverse();
//equivalent to camera.transform.inverse() * camera.projection.inverse() but faster
vec4f from = toWorld * vec4f(mouse, -1.0f, 1.0f);
vec4f to = toWorld * vec4f(mouse, 1.0f, 1.0f);
from /= from.w; //perspective divide ("normalize" homogeneous coordinates)
to /= to.w;
int clickedObject = -1;
float minDist = 99999.0f;
for (size_t i = 0; i < objects.size(); ++i)
{
    float t1, t2;
    vec3f direction = to.xyz() - from.xyz();
    if (intersectSphere(from.xyz(), direction, objects[i].position, objects[i].radius, t1, t2))
    {
        //object i has been clicked. probably best to keep the minimum t1 (front-most object)
        if (t1 < minDist)
        {
            minDist = t1;
            clickedObject = (int)i;
        }
    }
}
//clicked object is objects[clickedObject]
Instead of intersectSphere, you could use a bounding box or other implicit geometry, or intersect a mesh's triangles (this may require building a kd-tree for performance reasons).
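As an example of the bounding-box variant, a slab-method ray/AABB test in the same calling style might look like this (a sketch only; it assumes your vec3f exposes x/y/z and relies on IEEE infinities when a direction component is zero):
//ray at position p with direction d intersects the box [boxMin, boxMax].
//returns intersection times along the ray, t1 (entry) and t2 (exit)
bool intersectAABB(const vec3f& p, const vec3f& d,
                   const vec3f& boxMin, const vec3f& boxMax,
                   float& t1, float& t2)
{
    float tmin = -1e30f, tmax = 1e30f;
    const float po[3] = { p.x, p.y, p.z };
    const float di[3] = { d.x, d.y, d.z };
    const float mn[3] = { boxMin.x, boxMin.y, boxMin.z };
    const float mx[3] = { boxMax.x, boxMax.y, boxMax.z };
    for (int a = 0; a < 3; ++a)
    {
        float ta = (mn[a] - po[a]) / di[a];
        float tb = (mx[a] - po[a]) / di[a];
        if (ta > tb) { float tmp = ta; ta = tb; tb = tmp; }
        if (ta > tmin) tmin = ta;
        if (tb < tmax) tmax = tb;
        if (tmin > tmax) return false;
    }
    t1 = tmin;
    t2 = tmax;
    return true;
}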
[EDIT]
Here's an implementation of the line/sphere intersection (based on the link above). It assumes the sphere is at the origin, so instead of passing from.xyz() as p, give from.xyz() - objects[i].position.
//ray at position p with direction d intersects sphere at (0,0,0) with radius r. returns intersection times along ray t1 and t2
bool intersectSphere(const vec3f& p, const vec3f& d, float r, float& t1, float& t2)
{
    //http://wiki.cgsociety.org/index.php/Ray_Sphere_Intersection
    float A = d.dot(d);
    float B = 2.0f * d.dot(p);
    float C = p.dot(p) - r * r;
    float dis = B * B - 4.0f * A * C;
    if (dis < 0.0f)
        return false;
    float S = sqrt(dis);
    t1 = (-B - S) / (2.0f * A);
    t2 = (-B + S) / (2.0f * A);
    return true;
}
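With that signature, the call in the earlier loop becomes (recentring the ray on each sphere, as described above):
float t1, t2;
vec3f direction = to.xyz() - from.xyz();
if (intersectSphere(from.xyz() - objects[i].position, direction, objects[i].radius, t1, t2))
{
    //same front-most-object handling as before
}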
vec4f from = toWorld * vec4f(mouse, -1.0f, 1.0f);
vec4f to = toWorld * vec4f(mouse, 1.0f, 1.0f);
I'm assuming that 'from' is the position of the mouse cursor? If so, why is its z equal to negative one, if we are assuming OpenGL coordinates?
Also, in this way do we assume that the depth at this point is -1 to +1, rather than the depth of our frustum?

How to draw anti-aliased circle with iPhone OpenGL ES

There are three main ways I know of to draw a simple circle in OpenGL ES, as provided by the iPhone. They are all based on a simple algorithm (the VBO version is below).
void circleBufferData(GLenum target, float radius, GLsizei count, GLenum usage) {
    const int segments = count - 2;
    const float coefficient = 2.0f * (float) M_PI / segments;
    float *vertices = new float[2 * (segments + 2)];
    vertices[0] = 0;
    vertices[1] = 0;
    for (int i = 0; i <= segments; ++i) {
        float radians = i * coefficient;
        float j = radius * cosf(radians);
        float k = radius * sinf(radians);
        vertices[(i + 1) * 2] = j;
        vertices[(i + 1) * 2 + 1] = k;
    }
    glBufferData(target, sizeof(float) * 2 * (segments + 2), vertices, usage);
    glVertexPointer(2, GL_FLOAT, 0, 0);
    delete[] vertices;
}
The three ways that I know of to draw a simple circle are: using glDrawArrays with an array of vertices held by the application; using glDrawArrays with a vertex buffer; and drawing to a texture on initialization and drawing the texture when rendering is requested. I know the first two methods fairly well (though I have not been able to get anti-aliasing to work). What code is involved for the last option? (I am very new to OpenGL as a whole, so a detailed explanation would be very helpful.) Which is most efficient?
Antialiasing in the iOS OpenGL ES implementation is severely limited. You won't be able to draw antialiased circles using traditional methods.
However, if the circles you're drawing aren't that large, and are filled, you could take a look at using GL_POINT_SMOOTH. It's what I used for my game, Pizarro, which involves a lot of circles. Here's a detailed writeup of my experience with drawing antialiased circles on iOS:
http://sveinbjorn.org/drawing_antialiased_circles_opengl_iphone
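For reference, the basic setup looks something like this under OpenGL ES 1.1 (a sketch, not the code from the writeup; the maximum smoothed point size is device-dependent, so large circles may be clamped):
//Draw a filled, antialiased circle as a single smoothed point (ES 1.1).
GLfloat radius = 24.0f;                  //hypothetical radius, in pixels
GLfloat center[2] = { 160.0f, 240.0f };  //hypothetical centre
glEnable(GL_POINT_SMOOTH);
glHint(GL_POINT_SMOOTH_HINT, GL_NICEST);
glEnable(GL_BLEND);                      //smoothing needs blending for soft edges
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glPointSize(2.0f * radius);              //point size is the diameter in pixels
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, center);
glDrawArrays(GL_POINTS, 0, 1);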

OpenGL-ES change angle of vision in frustum

Let's see if I can explain myself.
When you set up the glFrustum view, you get the perspective effect: near things look near and big, far things look far and small. Everything appears to shrink along its Z axis to create this effect.
Is there a way to make it NOT shrink that much? That is, to bring the perspective view closer to an orthographic view, but not so close that perspective is lost completely?
Thanks
The field-of-view angle is determined by two parameters: the height of the near clipping plane (set by the top and bottom parameters) and the distance to the near clipping plane (set by zNear); concretely, the vertical FOV is 2 * atan(top / zNear).
To build a perspective matrix that doesn't shrink the image as much, you can set a smaller height or move the near clipping plane further away.
The thing to understand is that an orthographic view is a perspective view with a FOV of zero and a camera at infinity. So you can approach the orthographic view by reducing the FOV while moving the camera further away.
I can suggest the following code, which computes a near-orthographic projection from a given FOV value theta. I use it in a personal project, though with custom matrix classes rather than glOrtho and glFrustum, so it might be incorrect. I hope it gives a good general idea, though.
void SetFov(int width, int height, float theta)
{
    float near = -(width + height);
    float far = width + height;

    /* Set the projection matrix */
    if (theta < 1e-4f)
    {
        /* The easy way: purely orthographic projection. */
        glOrtho(0, width, 0, height, near, far);
        return;
    }

    /* Compute a view that approximates the glOrtho view when theta
     * approaches zero. This view ensures that the z=0 plane fills
     * the screen. */
    float t1 = tanf(theta / 2);
    float t2 = t1 * width / height;
    float dist = width / (2.0f * t1);

    near += dist;
    far += dist;
    if (near <= 0.0f)
    {
        far -= (near - 1.0f);
        near = 1.0f;
    }

    glTranslatef(-0.5f * width, -0.5f * height, -dist);
    glFrustum(-near * t1, near * t1, -near * t2, near * t2, near, far);
}
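A hypothetical usage (assuming M_PI is available): animating theta towards zero glides the view from ordinary perspective towards the orthographic one while the z=0 plane keeps filling the screen.
/* Hypothetical usage: */
SetFov(800, 600, 60.0f * (float)M_PI / 180.0f); /* ordinary perspective */
SetFov(800, 600, 0.0f);                         /* pure glOrtho path */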