Why does my perspective implementation fail to display my cube's faces? - scala

I wrote a program that takes as input some points, expressed in 3D coordinates, and draws them on a 2D canvas. I use perspective projection, homogeneous coordinates and similar triangles to do that. However, my program does not work and I don't know why.
I followed two tutorials. I really understood the geometrical definitions and properties they explain, yet my implementation fails... I reference both courses little by little throughout this post, to make your reading more comfortable :).
Overview: geometrical reminders
Perspective projection is done following this workflow (cf. these 2 courses - I link the relevant parts (HTML anchors) further down in this post):
Definition of the points to draw, expressed in the world's coordinate system; definition of the projection matrix, a transformation matrix that "converts" a point expressed in the world's coordinate system into a point expressed in the camera's coordinate system (NB: I believe this matrix can be understood as being the "camera" 3D object).
Product of these points with this matrix (as defined in the relevant part below, in this document): multiplying the world-expressed points converts them into the camera's coordinate system. Note that both the points and the matrix are expressed in 4D (concept of homogeneous coordinates).
Use of the similar triangles concept to project the camera-expressed points onto the canvas (only computation is done at this step), using their 4D coordinates. After this operation, the points are expressed in 3D (the third coordinate is computed but not actually used on the canvas); the 4th coordinate is dropped because it is no longer useful. Note that the 3rd coordinate would only be useful to handle z-fighting (which I don't want to do).
Last step: rasterization, to actually draw the pixels on the canvas (further computation AND display are done at this step). A compact code sketch of these steps follows this list.
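To make the workflow concrete, here is a minimal self-contained sketch of these steps applied to a single point, using the same simple matrix that appears later in this post (illustrative only, not the actual program):

// A world-space point in homogeneous coordinates (x, y, z, w).
val point = Seq(100.0, 300.0, -4.0, 1.0)

// The projection matrix (the same simple one as in the "Transformation (Projection) matrix" section).
val matrix = Seq(
  Seq(1.0, 0.0,  0.0, 0.0),
  Seq(0.0, 1.0,  0.0, 0.0),
  Seq(0.0, 0.0, -1.0, 0.0),
  Seq(0.0, 0.0, -1.0, 0.0)
)

// Product of the matrix with the point: each output coordinate is a row dotted with the point.
val cameraSpace = matrix.map(row => row.zip(point).map { case (m, p) => m * p }.sum)

// Perspective divide (similar triangles): divide by the 4th coordinate, then drop it.
val projected = cameraSpace.map(_ / cameraSpace(3)).dropRight(1)

// Rasterization would then map projected(0) and projected(1) to pixel coordinates on the canvas.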
First, the problem
Well, I want to draw a cube but the perspective doesn't work. The projected points seem to be drawn without perspective.
What result I should expect
The result I'm expecting is the cube displayed in the "Image" part of the PNG below:
What I'm outputting
The faces of my cube are odd, as if perspective weren't applied correctly.
I guess I know why I'm having this problem...
I think my projection matrix (i.e. the camera) doesn't have the right coefficients. I'm using a very simple projection matrix, without the concepts of fov and near/far clipping planes (as you can see below).
Indeed, to get the expected result (as defined previously), the camera should be placed, if I'm not mistaken, at the center (on the x and y axes) of the cube expressed in the world coordinate system, and at the center (on the x and y axes) of the canvas, which is (I make this assumption) placed 1 unit in front of the camera along z.
The Scastie (snippet)
NB : since X11 is not activated on Scastie, the window I want to create won't be shown.
https://scastie.scala-lang.org/N95TE2nHTgSlqCxRHwYnxA
Inputs
Perhaps the problem comes from the inputs? Well, here they are.
Cube's points
Ref. : myself
val world_cube_points : Seq[Seq[Double]] = Seq(
  Seq(100, 300, -4, 1), // top left
  Seq(100, 300, -1, 1), // top left z+1
  Seq(100, 0, -4, 1),   // bottom left
  Seq(100, 0, -1, 1),   // bottom left z+1
  Seq(400, 300, -4, 1), // top right
  Seq(400, 300, -1, 1), // top right z+1
  Seq(400, 0, -4, 1),   // bottom right
  Seq(400, 0, -1, 1)    // bottom right z+1
)
Transformation (Projection) matrix
Ref. : https://www.scratchapixel.com/lessons/3d-basic-rendering/perspective-and-orthographic-projection-matrix/building-basic-perspective-projection-matrix , End of the Part. "A Simple Perspective Matrix"
Note that I'm using the simplest perspective projection matrix: I don't use the concepts of fov, near and far clipping planes.
new Matrix(Seq(
  Seq(1, 0, 0, 0),
  Seq(0, 1, 0, 0),
  Seq(0, 0, -1, 0),
  Seq(0, 0, -1, 0)
))
Consequence of this matrix: each point P(x;y;z;w) multiplied by this matrix becomes P'(x;y;-z;-z).
Second, the first operation my program does: a simple product of a point with a matrix.
Ref. : https://github.com/ssloy/tinyrenderer/wiki/Lesson-4:-Perspective-projection#homogeneous-coordinates
/**
 * Matrix in the shape of (use of homogeneous coordinates):
 * c00 c01 c02 c03
 * c10 c11 c12 c13
 * c20 c21 c22 c23
 *  0   0   0   1
 *
 * @param content the content of the matrix
 */
class Matrix(val content : Seq[Seq[Double]]) {
  /**
   * Computes the product between a point P(x ; y ; z ; w) and the matrix.
   *
   * @param point a point P(x ; y ; z ; 1)
   * @return a new point P' whose i-th coordinate is the dot product of the matrix's i-th row with P, i.e.
   *         P'(i) = x * ci0 + y * ci1 + z * ci2 + w * ci3
   */
  def product(point : Seq[Double]) : Seq[Double] = {
    (0 to 3).map(
      i => content(i).zip(point).map(couple2 => couple2._1 * couple2._2).sum
    )
  }
}
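To illustrate, here is what this product gives for the first cube point and the matrix above (a small usage check, not part of the program itself):

val projection = new Matrix(Seq(
  Seq(1, 0, 0, 0),
  Seq(0, 1, 0, 0),
  Seq(0, 0, -1, 0),
  Seq(0, 0, -1, 0)
))
val p = Seq(100.0, 300.0, -4.0, 1.0) // the "top left" corner of the cube
val pCamera = projection.product(p)  // Seq(100.0, 300.0, 4.0, 4.0), i.e. (x, y, -z, -z)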
Then, use of similar triangles
Ref. 1/2 : Part. "Of the Importance of Converting Points to Camera Space" of https://www.scratchapixel.com/lessons/3d-basic-rendering/computing-pixel-coordinates-of-3d-point/mathematics-computing-2d-coordinates-of-3d-points
Ref. 2/2 : https://github.com/ssloy/tinyrenderer/wiki/Lesson-4:-Perspective-projection#time-to-work-in-full-3d
NB: at this step, the inputs are points expressed in the camera's coordinate system (i.e. they are the result of the previously defined product with the previously defined matrix).
class Projector {
  /**
   * Computes the coordinates of the projections of the points P on the canvas.
   * The canvas is assumed to be 1 unit in front of the camera.
   * The computation uses the definition of similar triangles.
   *
   * @param points the points P we want to project on the canvas. Their coordinates must be expressed in the
   *               camera's coordinate system before using this function.
   * @return the points P', projections of the points P.
   */
  def drawPointsOnCanvas(points : Seq[Seq[Double]]) : Seq[Seq[Double]] = {
    points.map(point => {
      point.map(coordinate => {
        coordinate / point(3)
      }).dropRight(1)
    })
  }
}
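Continuing the same example, this is what the division does to that camera-space point (again just an illustration):

val projector = new Projector
// (100, 300, 4, 4) -> each coordinate divided by 4, then the 4th one dropped.
val onCanvas = projector.drawPointsOnCanvas(Seq(Seq(100.0, 300.0, 4.0, 4.0)))
// onCanvas == Seq(Seq(25.0, 75.0, 1.0))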
Finally, the drawing of the projected points onto the canvas.
Ref. : Part. "From Screen Space to Raster Space" of https://www.scratchapixel.com/lessons/3d-basic-rendering/computing-pixel-coordinates-of-3d-point/mathematics-computing-2d-coordinates-of-3d-points
import java.awt.Graphics
import javax.swing.JFrame

/**
 * Assumed to be 1 unit in front of the camera.
 * Contains the drawn points.
 */
class Canvas(val drawn_points : Seq[Seq[Double]]) extends JFrame {
  val CANVAS_WIDTH = 820
  val CANVAS_HEIGHT = 820
  val IMAGE_WIDTH = 900
  val IMAGE_HEIGHT = 900

  def display = {
    setTitle("Perlin")
    setSize(IMAGE_WIDTH, IMAGE_HEIGHT)
    setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE)
    setVisible(true)
  }

  override def paint(graphics : Graphics): Unit = {
    super.paint(graphics)
    drawn_points.foreach(point => {
      if(!(Math.abs(point.head) <= CANVAS_WIDTH / 2 || Math.abs(point(1)) <= CANVAS_HEIGHT / 2)) {
        println("WARNING : the point (" + point.head + " ; " + point(1) + ") can't be drawn in this canvas.")
      } else {
        val normalized_drawn_point = Seq((point.head + (CANVAS_WIDTH / 2)) / CANVAS_WIDTH, (point(1) + (CANVAS_HEIGHT / 2)) / CANVAS_HEIGHT)
        graphics.fillRect((normalized_drawn_point.head * IMAGE_WIDTH).toInt, ((1 - normalized_drawn_point(1)) * IMAGE_HEIGHT).toInt, 5, 5)
        graphics.drawString(
          "P(" + (normalized_drawn_point.head * IMAGE_WIDTH).toInt + " ; "
            + ((1 - normalized_drawn_point(1)) * IMAGE_HEIGHT).toInt + ")",
          (normalized_drawn_point.head * IMAGE_WIDTH).toInt - 50, ((1 - normalized_drawn_point(1)) * IMAGE_HEIGHT).toInt - 10
        )
      }
    })
  }
}
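For completeness, here is roughly how the pieces above fit together (a sketch of the driver code; the actual wiring lives in the Scastie, which isn't reproduced here):

object Main extends App {
  // The projection matrix from the "Transformation (Projection) matrix" section.
  val projection = new Matrix(Seq(
    Seq(1, 0, 0, 0),
    Seq(0, 1, 0, 0),
    Seq(0, 0, -1, 0),
    Seq(0, 0, -1, 0)
  ))

  // world_cube_points is the Seq defined in the "Cube's points" section.
  val cameraSpacePoints = world_cube_points.map(projection.product)
  val projectedPoints = new Projector().drawPointsOnCanvas(cameraSpacePoints)

  new Canvas(projectedPoints).display
}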
Question
What's wrong with my program? I understood the geometrical concepts explained by these two tutorials, which I read carefully. I'm pretty sure my product works. I think either the rasterization or the inputs (the matrix) could be wrong...

Note that I'm using the simplest perspective projection matrix: I don't use the concepts of fov, near and far clipping planes.
I think that your projection matrix is too simple. By dropping the near and far clipping planes, you are dropping perspective projection entirely.
You do not have to perform the z-clipping step, but you need to define a view frustum to get perspective to work. I believe that your projection matrix defines a cubic "view frustum", hence no perspective.
See http://www.songho.ca/opengl/gl_projectionmatrix.html for a discussion of how the projection matrix works.
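For reference, a conventional perspective matrix built from a vertical field of view, an aspect ratio and near/far planes (the form discussed on the songho.ca page) looks like this, written against your Matrix class and its column-vector convention. This is a sketch of the standard matrix, not a tested drop-in fix for your program; perspectiveMatrix, fovY, aspect, near and far are just illustrative names:

// fovY is the vertical field of view in radians; aspect = width / height;
// near and far are the (positive) distances of the clipping planes.
def perspectiveMatrix(fovY: Double, aspect: Double, near: Double, far: Double): Matrix = {
  val f = 1.0 / math.tan(fovY / 2.0) // focal length derived from the field of view
  new Matrix(Seq(
    Seq(f / aspect, 0, 0, 0),
    Seq(0, f, 0, 0),
    Seq(0, 0, (far + near) / (near - far), 2 * far * near / (near - far)),
    Seq(0, 0, -1, 0)
  ))
}

After multiplying a camera-space point by this matrix and dividing by the resulting w (which equals -z), x and y land in [-1, 1] for points inside the frustum.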

Quoting the Scratchapixel page:
... If we substitute these numbers in the above equation, we get:
y' / y = 1 / -z
Where y' is the y coordinate of P'. Thus:
y' = y / -z
This is probably one of the simplest and most fundamental relations in computer graphics, known as the z or perspective divide. The exact same principle applies to the x coordinate. ...
And in your code:
def drawPointsOnCanvas(points : Seq[Seq[Double]]) : Seq[Seq[Double]] = {
points.map(point => {
point.map(coordinate => {
coordinate / point(3)
^^^^^^^^
...
The (3) index is the 4th component of point, i.e. its W-coordinate, not its Z-coordinate. Perhaps you meant coordinate / point(2)?

Related

Need help with Perlin noise in Ursina to make a plain land

I understood how to use Perlin noise to make a pattern with cubes, but what I want is random terrain / plain land such as in the game called Muck (which was developed in Unity).
I've tried doing:
# Imports assumed for this snippet:
from math import floor
from random import randint
from perlin_noise import PerlinNoise
from ursina import Entity, color

noise = PerlinNoise(octaves=3, seed=2007)
amp = 3
freq = 24
width = 30
for po in range(width*width):
    s = randint(200, 255)
    q = Entity(model="cube", collider="box", texture="white_cube",
               color=color.rgb(s, s, s))
    q.x = floor(po/width)
    q.z = floor(po % width)
    q.y = floor(noise([q.x/freq, q.z/freq]) * amp)
So this gives an almost perfect random terrain, but I want my terrain to look more realistic than cubic.
Thanks in advance ;)
Unsurprisingly, your landscape looks cubic because you're constructing it from cubes. So to make it more realistic, use a more flexible Mesh for it. You will have to assign each point of the surface separately and connect it to its surroundings via triangles:
level_parent = Entity(model=Mesh(vertices=[], uvs=[]), color=color.white, texture='white_cube')
for x in range(1, width):
    for z in range(1, width):
        # add two triangles for each new point
        y00 = noise([x/freq, z/freq]) * amp
        y10 = noise([(x-1)/freq, z/freq]) * amp
        y11 = noise([(x-1)/freq, (z-1)/freq]) * amp
        y01 = noise([x/freq, (z-1)/freq]) * amp
        level_parent.model.vertices += (
            # first triangle
            (x, y00, z),
            (x-1, y10, z),
            (x-1, y11, z-1),
            # second triangle
            (x, y00, z),
            (x-1, y11, z-1),
            (x, y01, z-1)
        )

level_parent.model.generate()
level_parent.model.project_uvs()       # for texture
level_parent.model.generate_normals()  # for lighting
level_parent.collider = 'mesh'         # for collision
However, this will be quite slow to generate, mostly due to the Perlin noise implementation used (I'm assuming this one). A faster option can be found in this answer.

Find pixel coordinate of world/geographic coordinate in tile

I'm trying to use Mapbox Terrain RGB to get elevation for specific points in space. I used mercantile.tile to get the coordinates of the tile containing my point at zoom level 15, which for -43º, -22º (for simplicity sake) is 12454, 18527, then mercantile.xy to get the corresponding world coordinates: -4806237.7150042495, -2621281.2257876047.
Shouldn't the integer part of -4806237.7150042495 / 256 (tile size) equal the x coordinate of the tile containing the point, that is, 12454? If this calculation checked out, I'd figure that I'm looking for the pixel column (x axis) corresponding to the decimal part of the result, like column 127 (256 * 0.5) for 12454.5. However, the division results in -18774.366 (which is curiously close to the tile y coordinate, but that looks like a coincidence). What am I missing here?
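For context, this is my understanding of how a longitude/latitude maps to a tile and to a pixel inside that tile (just a sketch of the standard slippy-map formulas, in Scala, not the code I'm actually running):

// Illustrative sketch: tile indices and in-tile pixel position for a lng/lat at a given zoom.
def lngLatToTilePixel(lng: Double, lat: Double, zoom: Int, tileSize: Int = 256) = {
  val n = math.pow(2, zoom)                         // number of tiles per axis at this zoom
  val latRad = math.toRadians(lat)
  val xNorm = (lng + 180.0) / 360.0                 // 0..1 across the world, west to east
  val yNorm = (1.0 - math.log(math.tan(latRad) + 1.0 / math.cos(latRad)) / math.Pi) / 2.0 // 0..1, north to south
  val globalPx = xNorm * n * tileSize               // global pixel position at this zoom
  val globalPy = yNorm * n * tileSize
  val (tileX, tileY) = (math.floor(globalPx / tileSize).toInt, math.floor(globalPy / tileSize).toInt)
  val (pixelCol, pixelRow) = (globalPx - tileX * tileSize, globalPy - tileY * tileSize)
  (tileX, tileY, pixelCol, pixelRow)
}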
As an alternative, I thought of using mercantile.bounds, assigning the first and last pixel columns to the westmost and eastmost longitudes, and finding my position with interpolation, but I wanted to check if I'm doing this the right/recommended way. I'm interested in point elevations, so everything said here goes for the Y axis as well.
Here's what I got so far:
def correct_altitude_mode(kml):
    with open(kml, "r+") as f:
        txt = f.read()
        if re.search("(?<=<altitudeMode>)relative(?=<\/altitudeMode>)", txt):
            lat = round(float(find_with_re("latitude", txt)), 5)
            lng = round(float(find_with_re("longitude", txt)), 5)
            alt = round(float(find_with_re("altitude", txt)), 5)
            z = 15
            tile = mercantile.tile(lng, lat, z)
            westmost, southmost, eastmost, northmost = mercantile.bounds(tile)
            pixel_column = np.interp(lng, [westmost, eastmost], [0, 256])
            pixel_row = np.interp(lat, [southmost, northmost], [256, 0])
            response = requests.get(f"https://api.mapbox.com/v4/mapbox.terrain-rgb/{z}/{tile.x}/{tile.y}.pngraw?access_token=pk.eyJ1IjoibWFydGltcGFzc29zIiwiYSI6ImNra3pmN2QxajBiYWUycW55N3E1dG1tcTEifQ.JFKSI85oP7M2gbeUTaUfQQ")
            buffer = BytesIO(response.content)
            tile_img = png.read_png_int(buffer)
            _, R, G, B = (tile_img[int(pixel_row), int(pixel_column)])
            print(tile_img[int(pixel_row), int(pixel_column)])
            height = -10000 + ((R * 256 * 256 + G * 256 + B) * 0.1)
            print(f"R:{R},G:{G},B:{B}\n{height}")
            plt.hlines(pixel_row, 0.0, 256.0, colors="r")
            plt.vlines(pixel_column, 0.0, 256.0, colors="r")
            plt.imshow(tile_img)

Make ring of vectors "flat" relative to world space

I am trying to simulate liquid conformity in a container. The container is a Unity cylinder and so is the liquid. I track current volume and max volume and use them to determine the coordinates of the center of where the surface should be. When the container is tilted, each vertex in the upper ring of the cylinder should maintain its current local x and z values but have a new local y value that is the same height in global space as the surface center.
In my closest attempt, the surface is flat relative to the world space but the liquid does not touch the walls of the container.
Vector3 v = verts[i];
Vector3 newV = new Vector3(v.x, globalSurfaceCenter.y, v.z);
verts[i] = transform.InverseTransformPoint(newV);
(I understand that inversing the point after using v.x and v.z changes them, but if I change them after the fact the surface is no longer flat...)
I have tried many different approaches and I always end up at this same point or a stranger one.
Also, I'm not looking for any fundamentally different approach to the problem. It's important that I alter the vertices of a cylinder.
EDIT
Thank you, everyone, for your feedback. It helped me make progress with this problem but I've reached another roadblock. I made my code more presentable and took some screenshots of some results as well as a graph model to help you visualize what's happening and give variable names to refer to.
In the following images, colored cubes are instantiated and given the coordinates of some of the different vectors I am using to get my results.
F(red) A(green) B(blue)
H(green) E(blue)
Graphed Model
NOTE: when I refer to capital A and B, I'm referring to the Vector3's in my code.
The cylinders in the images have the following rotations (left to right):
(0,0,45) (45,45,0) (45,0,20)
As you can see from image 1, F is correct when only one dimension of rotation is applied. When two or more are applied, the surface is flat, but not oriented correctly.
If I adjust the rotation of the cylinder after generating these results, I can get the orientation of the surface to make sense, but the numbers are not what you might expect.
For example: cylinder 3 (on the right side), adjusted to have a surface flat to the world space, would need a rotation of about (42.2, 0, 27.8).
Not sure if that's helpful but it is something that increases my confusion.
My code: (refer to graph model for variable names)
Vector3 v = verts[iter];
Vector3 D = globalSurfaceCenter;
Vector3 E = transform.TransformPoint(new Vector3(v.x, surfaceHeight, v.z));
Vector3 H = new Vector3(gsc.x, E.y, gsc.z);
float a = Vector3.Distance(H, D);
float b = Vector3.Distance(H, E);
float i = (a / b) * a;
Vector3 A = H - D;
Vector3 B = H - E;
Vector3 F = ((A + B)) + ((A + B) * i);
Instantiate(greenPrefab, transform).transform.position = H;
Instantiate(bluePrefab, transform).transform.position = E;
//Instantiate(redPrefab, transform).transform.position = transform.TransformPoint(F);
//Instantiate(greenPrefab, transform).transform.position = transform.TransformPoint(A);
//Instantiate(bluePrefab, transform).transform.position = transform.TransformPoint(B);
Some of the variables in my code and in the graphed model may not be necessary in the end, but my hope is it gives you more to work with.
Bear in mind that I am less than proficient in geometry and math in general. Please use layman's terms. Thank you!
And thanks again for taking the time to help me.
As a first step, we can calculate the normal of the upper cylinder surface in the cylinder's local coordinate system. Given the world transform of your cylinder transform, this is simply:
localNormal = inverse(transform) * (0, 1, 0, 0)
Using this normal and the cylinder height h, we can define the plane of the upper cylinder in normal form as
dot(localNormal, (x, y, z) - (0, h / 2, 0)) = 0
I am assuming that your cylinder is centered around the origin.
Using this, we can calculate the y-coordinate for any x/z pair as
y = h / 2 - (localNormal.x * x + localNormal.z * z) / localNormal.y
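To make that last formula concrete, here is a tiny sketch of it outside of Unity (plain code; localNormal stands for the world up-vector transformed into the cylinder's local space, as described above, and the cylinder is assumed centered around the origin):

case class Vec3(x: Double, y: Double, z: Double)

// y-coordinate of the tilted upper surface for a rim vertex at local (x, z).
def surfaceY(localNormal: Vec3, h: Double, x: Double, z: Double): Double =
  h / 2 - (localNormal.x * x + localNormal.z * z) / localNormal.y

// Sanity check: with no rotation, localNormal == Vec3(0, 1, 0) and every rim vertex stays at y == h / 2.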

How to create Bezier curves from B-Splines in Sympy?

I need to draw a smooth curve through some points, which I then want to show as an SVG path. So I create a B-Spline with scipy.interpolate, and can access some arrays that I suppose fully define it. Does someone know a reasonably simple way to create Bezier curves from these arrays?
import numpy as np
from scipy import interpolate
x = np.array([-1, 0, 2])
y = np.array([ 0, 2, 0])
x = np.r_[x, x[0]]
y = np.r_[y, y[0]]
tck, u = interpolate.splprep([x, y], s=0, per=True)
cx = tck[1][0]
cy = tck[1][1]
print( 'knots: ', list(tck[0]) )
print( 'coefficients x: ', list(cx) )
print( 'coefficients y: ', list(cy) )
print( 'degree: ', tck[2] )
print( 'parameter: ', list(u) )
The red points are the 3 initial points in x and y. The green points are the 6 coefficients in cx and cy. (Their values repeat after the 3rd, so each green point has two green index numbers.)
Return values tck and u are described in the scipy.interpolate.splprep documentation.
knots: [-1.0, -0.722, -0.372, 0.0, 0.277, 0.627, 1.0, 1.277, 1.627, 2.0]
# 0 1 2 3 4 5
coefficients x: [ 3.719, -2.137, -0.053, 3.719, -2.137, -0.053]
coefficients y: [-0.752, -0.930, 3.336, -0.752, -0.930, 3.336]
degree: 3
parameter: [0.0, 0.277, 0.627, 1.0]
Not sure starting with a B-Spline makes sense: form a Catmull-Rom curve through the points (with the virtual "before first" and "after last" points overlaid on real points) and then convert that to a Bezier curve using a relatively trivial transform. E.g. given your points p0, p1, and p2, the Catmull-Rom quadruple {p2,p0,p1,p2} yields the segment p0--p1, {p0,p1,p2,p0} yields p1--p2, and {p1,p2,p0,p1} yields p2--p0. Then you trivially convert those and now you have your SVG path.
As demonstrator, hit up https://editor.p5js.org/ and paste in the following code:
var points = [{x:150, y:100 },{x:50, y:300 },{x:300, y:300 }];

// add virtual points:
points = points.concat(points);

function setup() {
  createCanvas(400, 400);
  tension = createSlider(1, 200, 100);
}

function draw() {
  background(220);
  points.forEach(p => ellipse(p.x, p.y, 4));
  for (let n=0; n<3; n++) {
    let [c1, c2, c3, c4] = points.slice(n,n+4);
    let t = 0.06 * tension.value();
    bezier(
      // on-curve start point
      c2.x, c2.y,
      // control point 1
      c2.x + (c3.x - c1.x)/t,
      c2.y + (c3.y - c1.y)/t,
      // control point 2
      c3.x - (c4.x - c2.x)/t,
      c3.y - (c4.y - c2.y)/t,
      // on-curve end point
      c3.x, c3.y
    );
  }
}
Which will look like this:
Converting that to Python code should be an almost effortless exercise: there is barely any code for us to write =)
And, of course, now you're left with creating the SVG path, but that's hardly an issue: you know all the Bezier points now, so just start building your <path d=...> string while you iterate.
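For reference, the control point construction from that demo can be pulled out into a standalone function (sketched in Scala here, but the formula is language-agnostic). With the slider at its default, t = 6, which is the standard divisor for turning a uniform Catmull-Rom segment into a cubic Bezier:

case class Pt(x: Double, y: Double)

// Given four consecutive Catmull-Rom points c1..c4, return the cubic Bezier
// (start, control 1, control 2, end) for the segment from c2 to c3.
def catmullRomToBezier(c1: Pt, c2: Pt, c3: Pt, c4: Pt, t: Double = 6.0): (Pt, Pt, Pt, Pt) =
  (
    c2,
    Pt(c2.x + (c3.x - c1.x) / t, c2.y + (c3.y - c1.y) / t),
    Pt(c3.x - (c4.x - c2.x) / t, c3.y - (c4.y - c2.y) / t),
    c3
  )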
A B-spline curve is just a collection of Bezier curves joined together. Therefore, it is certainly possible to convert it back to multiple Bezier curves without any loss of shape fidelity. The algorithm involved is called "knot insertion", and there are different ways to do this, with the two most famous algorithms being Boehm's algorithm and the Oslo algorithm. You can refer to this link for more details.
Here is an almost direct answer to your question (but for the non-periodic case):
import aggdraw
import numpy as np
import scipy.interpolate as si
from PIL import Image

# from https://stackoverflow.com/a/35007804/2849934
def scipy_bspline(cv, degree=3):
    """ cv:      Array of control vertices
        degree:  Curve degree
    """
    count = cv.shape[0]
    degree = np.clip(degree, 1, count-1)
    kv = np.clip(np.arange(count+degree+1)-degree, 0, count-degree)
    max_param = count - degree  # non-periodic case
    spline = si.BSpline(kv, cv, degree)
    return spline, max_param

# based on https://math.stackexchange.com/a/421572/396192
def bspline_to_bezier(cv):
    cv_len = cv.shape[0]
    assert cv_len >= 4, "Provide at least 4 control vertices"
    spline, max_param = scipy_bspline(cv, degree=3)
    for i in range(1, max_param):
        spline = si.insert(i, spline, 2)
    return spline.c[:3 * max_param + 1]

def draw_bezier(d, bezier):
    path = aggdraw.Path()
    path.moveto(*bezier[0])
    for i in range(1, len(bezier) - 1, 3):
        v1, v2, v = bezier[i:i+3]
        path.curveto(*v1, *v2, *v)
    d.path(path, aggdraw.Pen("black", 2))

cv = np.array([[ 40., 148.], [ 40.,  48.],
               [244.,  24.], [160., 120.],
               [240., 144.], [210., 260.],
               [110., 250.]])

im = Image.fromarray(np.ones((400, 400, 3), dtype=np.uint8) * 255)
bezier = bspline_to_bezier(cv)
d = aggdraw.Draw(im)
draw_bezier(d, bezier)
d.flush()
# show/save im
I didn't look much into the periodic case, but hopefully it's not too difficult.

Picking in 3D with ray-tracing using NinevehGL or OpenGL on iPhone

I couldn't find a correct and understandable explanation of picking in 3D with the ray-tracing method. Has anyone implemented this algorithm in any language? Please share directly working code, because pseudocode can't be compiled and is generally written with parts missing.
What you have is a position in 2D on the screen. The first thing to do is convert that point from pixels to normalized device coordinates — -1 to 1. Then you need to find the line in 3D space that the point represents. For this, you need the transformation matrix/ces that your 3D app uses to create a projection and camera.
Typically you have 3 matrices: projection, view and model. When you specify vertices for an object, they're in "object space". Multiplying by the model matrix gives the vertices in "world space". Multiplying again by the view matrix gives "eye/camera space". Multiplying again by the projection gives "clip space". Clip space has non-linear depth. Adding a Z component to your mouse coordinates puts them in clip space. You can perform the line/object intersection tests in any linear space, so you must at least move the mouse coordinates to eye space, but it's more convenient to perform the intersection tests in world space (or object space depending on your scene graph).
To move the mouse coordinates from clip space to world space, add a Z-component and multiply by the inverse projection matrix and then the inverse camera/view matrix. To create a line, two points along Z will be computed — from and to.
In the following example, I have a list of objects, each with a position and bounding radius. The intersections of course never match perfectly but it works well enough for now. This isn't pseudocode, but it uses my own vector/matrix library. You'll have to substitute your own in places.
vec2f mouse = (vec2f(mousePosition) / vec2f(windowSize)) * 2.0f - 1.0f;
mouse.y = -mouse.y; //origin is top-left and +y mouse is down

mat44 toWorld = (camera.projection * camera.transform).inverse();
//equivalent to camera.transform.inverse() * camera.projection.inverse() but faster

vec4f from = toWorld * vec4f(mouse, -1.0f, 1.0f);
vec4f to = toWorld * vec4f(mouse, 1.0f, 1.0f);

from /= from.w; //perspective divide ("normalize" homogeneous coordinates)
to /= to.w;

int clickedObject = -1;
float minDist = 99999.0f;

for (size_t i = 0; i < objects.size(); ++i)
{
    float t1, t2;
    vec3f direction = to.xyz() - from.xyz();
    if (intersectSphere(from.xyz(), direction, objects[i].position, objects[i].radius, t1, t2))
    {
        //object i has been clicked. probably best to find the minimum t1 (front-most object)
        if (t1 < minDist)
        {
            minDist = t1;
            clickedObject = (int)i;
        }
    }
}

//clicked object is objects[clickedObject]
Instead of intersectSphere, you could use a bounding box or other implicit geometry, or intersect a mesh's triangles (this may require building a kd-tree for performance reasons).
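If you go with the bounding box option, a slab-test ray/AABB intersection looks roughly like this (a sketch in Scala rather than my vector library; origin, dir, boxMin and boxMax are illustrative names, each a 3-element sequence of x, y, z):

// Returns Some((tNear, tFar)) if the ray origin + t * dir hits the axis-aligned box, None otherwise.
def intersectAABB(origin: Seq[Double], dir: Seq[Double],
                  boxMin: Seq[Double], boxMax: Seq[Double]): Option[(Double, Double)] = {
  var tNear = Double.NegativeInfinity
  var tFar = Double.PositiveInfinity
  for (i <- 0 until 3) {
    // If dir(i) is 0 these become +/-Infinity, which min/max still handle for origins outside the slab.
    val t1 = (boxMin(i) - origin(i)) / dir(i)
    val t2 = (boxMax(i) - origin(i)) / dir(i)
    tNear = math.max(tNear, math.min(t1, t2))
    tFar = math.min(tFar, math.max(t1, t2))
  }
  if (tNear <= tFar && tFar >= 0) Some((tNear, tFar)) else None
}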
[EDIT]
Here's an implementation of the line/sphere intersect (based off the link above). It assumes the sphere is at the origin, so instead of passing from.xyz() as p, give from.xyz() - objects[i].position.
//ray at position p with direction d intersects sphere at (0,0,0) with radius r. returns intersection times along ray t1 and t2
bool intersectSphere(const vec3f& p, const vec3f& d, float r, float& t1, float& t2)
{
    //http://wiki.cgsociety.org/index.php/Ray_Sphere_Intersection
    float A = d.dot(d);
    float B = 2.0f * d.dot(p);
    float C = p.dot(p) - r * r;
    float dis = B * B - 4.0f * A * C;
    if (dis < 0.0f)
        return false;
    float S = sqrt(dis);
    t1 = (-B - S) / (2.0f * A);
    t2 = (-B + S) / (2.0f * A);
    return true;
}
vec4f from = toWorld * vec4f(mouse, -1.0f, 1.0f);
vec4f to = toWorld * vec4f(mouse, 1.0f, 1.0f);
I'm assuming that 'from' is the position of the mouse cursor? If so, then why is its z negative one, if we are assuming OpenGL coordinates?
Also, in this way do we assume that the depth at this point is -1 to +1, rather than the depth of our frustum?