Triangulate set of points on arbitrary plane in 3D space - unity3d

I have a set of points in 3D space. With a maximum error of 10^-5 I can fit a plane through them (the error is the distance from a point to the plane).
Is there a way to triangulate these points on this arbitrary plane? I have tried Bowyer-Watson, but it only works when the error is 0. With anything else it either won't triangulate or I get a bad triangulation (overlapping triangles).
Edit
I think I found the problem. At certain angles the Bowyer-Watson algorithm fails because my calculation of the circumcenter is off. How can I calculate the circumcenter of a triangle in 3D?

Since I know the points on the plane, I can calculate a vector that lies on the plane. Next I calculate the center of mass of the points.
Using this vector and the center of mass, I can create a large enclosing triangle on the plane:
Vertex p1 = new Vertex(dir * 3000 + center); // dir lies in the plane, center is the center of mass
Vertex p2 = new Vertex(Quaternion.AngleAxis(120, plane.normal) * dir * 3000 + center); // dir rotated 120 degrees about the plane normal
Vertex p3 = new Vertex(Quaternion.AngleAxis(240, plane.normal) * dir * 3000 + center); // dir rotated 240 degrees about the plane normal
Now that I have the enclosing triangle, I can just use Bowyer-Watson.
For the circumcenter in 3D I use:
Vector3 ac = p3 - p1;
Vector3 ab = p2 - p1;
Vector3 abXac = Vector3.Cross(ab, ac);
circumcenter = p1 + (Vector3.Cross(abXac, ab) * ac.sqrMagnitude + Vector3.Cross(ac, abXac) * ab.sqrMagnitude) / (2 * abXac.sqrMagnitude);
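As a side note, here is a minimal sketch (my own addition, not from the original post) of how this 3D circumcenter feeds the in-circumcircle test that Bowyer-Watson needs; the helper name InCircumcircle is hypothetical. Because all points lie on (nearly) the same plane, comparing squared 3D distances against the squared circumradius is equivalent to the usual 2D in-circle check.
bool InCircumcircle(Vector3 p, Vector3 p1, Vector3 p2, Vector3 p3)
{
    Vector3 ac = p3 - p1;
    Vector3 ab = p2 - p1;
    Vector3 abXac = Vector3.Cross(ab, ac);
    // Same circumcenter formula as above.
    Vector3 cc = p1 + (Vector3.Cross(abXac, ab) * ac.sqrMagnitude + Vector3.Cross(ac, abXac) * ab.sqrMagnitude) / (2f * abXac.sqrMagnitude);
    // Inside the circumcircle if p is closer to the circumcenter than the triangle's vertices are.
    // A small relative tolerance may be needed because of the ~1e-5 planarity error.
    return (p - cc).sqrMagnitude < (p1 - cc).sqrMagnitude;
}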
And now I have a triangulated set of points on an arbitrary plane in 3D.

Related

Depth to world registration hololens2 unity

I'm working on a program using HoloLens 2 research mode in Unity. The HoloLens gives us a depth image: for every pixel, the distance from the depth sensor to the object in front of it.
For every pixel I project it onto the image plane, then back-project it according to the depth captured by the sensor, which gives me the xyz in the depth sensor's coordinate frame. This coordinate then needs to be transformed into the global coordinate system. To do so, I get the camera pose from Unity via cam_pose = Camera.main.transform, and on the other hand I have the saved depth sensor extrinsic matrix.
From these two matrices I create depth_to_world = cam_pose @ inv(extrinsic). Then for every xyz from the depth image I compute global_xyz = depth_to_world @ xyz to get the point in the real world. The problem is that the result is off by 10-15 cm. What am I doing wrong? (The code is in Python.)
x = self.us[Depth_i, Depth_j] # projection from pixels to image plane
y = self.vs[Depth_i, Depth_j] # projection from pixels to image plane
D = distance_img[Depth_i, Depth_j] #distance_img is depth image
distance = 1000 * float(D) / np.sqrt(x * x + y * y + 1)  # distance along the ray (spherical image plane); D is in millimeters
depth_to_world = cam_pose @ np.linalg.inv(Constants.camera_extrinsic)
X = np.array([x * distance, y * distance, 1.0 * distance, 1]).reshape(4, 1)
point = (depth_to_world @ X)[0:3, 0]
I got it! Following https://github.com/petergu684/HoloLens2-ResearchMode-Unity, I first pass the Unity world origin to a WinRT plugin, and depth_to_world should be depth_to_world = inv(extrinsic) * cam_pose, where cam_pose is given by TryLocateAtTimeStamp. The other point is that Unity's coordinate system is left-handed (surprisingly!), so we should multiply z by -1 (z <- -z).
My original depth_to_world transformation was close, but not correct.
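For reference, a minimal Unity-side sketch of the corrected mapping described above; camPose and sensorExtrinsic are placeholder names for the matrices mentioned (a sketch under those assumptions, not the actual research-mode API):
// camPose: camera pose (e.g. from TryLocateAtTimeStamp), sensorExtrinsic: saved depth sensor extrinsic.
Vector3 DepthToWorld(Matrix4x4 camPose, Matrix4x4 sensorExtrinsic, Vector3 xyzInSensorFrame)
{
    // depth_to_world = inv(extrinsic) * cam_pose, as in the fix above.
    Matrix4x4 depthToWorld = sensorExtrinsic.inverse * camPose;
    Vector3 p = depthToWorld.MultiplyPoint3x4(xyzInSensorFrame);
    p.z = -p.z; // Unity is left-handed, so flip z (z <- -z); where exactly to apply this depends on your conventions
    return p;
}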

Draw elliptical arc between 2 points using bezier curve

I need a bezier curve to join the end points of 2 arbitrary lines smoothly. The lines are all either perpendicular or parallel. By "smoothly" I mean I want the curve's tangent at the end points to have the same slope as the lines.
I'm going to be using MATLAB (Octave, actually) to write the XML for an SVG. So I need a formula that outputs the positions of the bezier curve's control points based on the positions of the endpoints.
Any help?
If the lines are parallel but offset, you cannot connect them smoothly with a single arc, so a cubic Bezier curve (which can take an S-shape) is more suitable.
Suppose we have the first endpoint P0 with unit direction vector T0, and the second endpoint P3 with unit direction vector T3. The control points of the cubic Bezier lie on the continuations of the lines. To make the curve smooth, we have to choose the distance of the control points from the endpoints; an empirical value is about half the distance between the endpoints. The method also works for perpendicular lines.
Dist = Sqrt((P0.X - P3.X)^2 + (P0.Y - P3.Y)^2)
Control1.X = P0.X + T0.X * Dist / 2
Control1.Y = P0.Y + T0.Y * Dist / 2
Control2.X = P3.X - T3.X * Dist / 2 //account for T3 direction here
Control2.Y = P3.Y - T3.Y * Dist / 2
Example of curve generated with described approach:
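A rough C# rendering of the rule above (the asker is in Octave, so this is just an illustration; p0/p3 are the endpoints and t0/t3 the unit direction vectors as defined earlier, and the method name is my own):
void CubicControlPoints(Vector2 p0, Vector2 t0, Vector2 p3, Vector2 t3, out Vector2 c1, out Vector2 c2)
{
    float dist = Vector2.Distance(p0, p3);
    c1 = p0 + t0 * (dist / 2f); // half the endpoint distance along the first line
    c2 = p3 - t3 * (dist / 2f); // minus sign accounts for T3's direction, as noted above
}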

Approximating relative angle between two line segments on sphere surface

I am in need of an idea! I want to model the vascular network of the eye in 3D. I have gathered statistics on the branching behaviour in relation to vessel diameter, length, etc. What I am stuck on right now is the visualization:
The eye is approximated as a sphere E with center at the origin, C = [0, 0, 0], and radius r.
What I want to achieve is that based on the following input parameters, it should be able to draw a segment on the surface/perimeter of E:
Input:
Cartesian position of previous segment ending: P_0 = [x_0, y_0, z_0]
Segment length: L
Segment diameter: d
Desired angle relative to the previous segment: a (1)
Output:
Cartesian position of resulting segment ending: P_1 = [x_1, y_1, z_1]
What I do now, is the following:
From P_0, generate a sphere with radius L, representing all the points we could possibly draw to with the correct length. This set is called pool.
Limit pool to only include points with a distance to C between r*0.95 and r, so only the points around the perimeter of the eye are included.
Select only the point that would generate a relative angle (2) closest to the desired angle a.
The problem is that whatever angle a I desire is not actually what the dot product measures. Say I want an angle of 0 (i.e. the new segment follows the same direction as the previous one); what I actually get is an angle of around 30 degrees, because of the curvature of the sphere. I guess what I really want is the 2D angle seen when looking at the branching point from a direction orthogonal to the sphere. Please take a look at the screenshots below for a visualization.
Any ideas?
(1) The reason for this is that the child node with the greatest diameter usually follows the path of the previous segment, whereas smaller child nodes tend to branch off at different angles.
(2) Calculated by acos(dot(v1/norm(v1), v2/norm(v2)))
Screenshots explaining the problem:
Yellow line: previous segment
Red line: "new" segment to one of the points (not neccesarily the correct one)
Blue x'es: Pool (text=angle in radians)
I will restate the problem with my own notation:
Given two points P and Q on the surface of a sphere centered at C with radius r, find a new point T such that the angle of the turn from PQ to QT is A and the length of QT is L.
Because the segments are small in relation to the sphere, we will use a locally-planar approximation of the sphere at the pivot point Q. (If this isn't an okay assumption, you need to be more explicit in your question.)
You can then compute T as follows.
// First compute an aligned orthonormal basis {U,V,W}.
// - {U,V} should be a basis for the plane tangent at Q.
// - W should be normal to the plane tangent at Q.
// - U should be in the direction PQ in the plane tangent at Q
W = normalize(Q - C)
U = normalize(Q - P)
U = normalize(U - W * dotprod(W, U))
V = normalize(crossprod(W, U))
// Next compute the next point S in the plane tangent at Q.
// In a regular plane, the parametric equation of a unit circle
// centered at the origin is:
// f(A) = (cos A, sin A) = (1,0) cos A + (0,1) sin A
// We just do the same thing, but with the {U,V} basis instead
// of the standard basis {(1,0),(0,1)}.
S = Q + L * (U cos A + V sin A)
// Finally project S onto the sphere, obtaining the segment QT.
T = C + r * normalize(S - C)
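A direct C# (Unity Vector3) transcription of the pseudocode above, for anyone who wants to drop it in; the method name is my own.
Vector3 NextSegmentEnd(Vector3 C, float r, Vector3 P, Vector3 Q, float A, float L)
{
    // Orthonormal basis {U, V, W} aligned with the tangent plane at Q.
    Vector3 W = (Q - C).normalized;                     // normal to the tangent plane at Q
    Vector3 U = (Q - P).normalized;
    U = (U - W * Vector3.Dot(W, U)).normalized;         // PQ direction projected into the tangent plane
    Vector3 V = Vector3.Cross(W, U);                    // already unit length, since W and U are orthonormal
    // Step of length L at angle A in the tangent plane, then project back onto the sphere.
    Vector3 S = Q + L * (U * Mathf.Cos(A) + V * Mathf.Sin(A));
    return C + r * (S - C).normalized;
}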

Incorrect angle detected between two planes

I want to calculate the angle between two planes, a reference plane and Plane1. When I feed the X, Y, Z coordinates of the point cloud to the function plane_fit.m (by Kevin Mattheus Moerman), I get the coefficients:
reference_plane_coeff: [-0.13766204 -0.070385590 130.69409]
Plane1_coeff: [0.0044337390 -0.0013548643 95.890228]
Next, I intersect each plane with the XZ plane and get a line equation, ref_line_XZ and plane1_line_XZ respectively. For this, I set the second coefficient to 0. (Is this right?)
Aref = reference_plane_coeff(1);
Cref = reference_plane_coeff(3);
ref_line_XZ = [Aref Cref];
Arun = Plane1_coeff(1);
Crun = Plane1_coeff(3);
plane1_line_XZ = [Arun Crun];
angle_XZ = acos( dot(ref_line_XZ,plane1_line_XZ ) / (norm(ref_line_XZ) * norm(plane1_line_XZ )) )
I get angle_XZ = 0.0012 rad, i.e. 0.0685 degrees.
When I plot these planes on a graph and view them, the angle seems much larger than that. I'm talking about the angle between the two lines obtained by intersecting each plane with the XZ plane.
What am I doing wrong?
Also, when I tried to find the angle between their normals, using:
angle_beta_deg = acosd( dot(reference_plane_coeff,Plane1_coeff) / (norm(reference_plane_coeff) * norm(Plane1_coeff)) )
I got the angle as 0.0713 degrees.
On visual inspection of both planes' plots and manually calculating from the plot, angle_XZ should be around 9 degrees.
plane_fit.m (by Kevin Mattheus Moerman)
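As a sanity check, here is a small sketch of the angle-between-normals computation used above; note it assumes the three coefficients really are the components of each plane's normal vector, which depends on what plane_fit.m actually returns.
float AngleBetweenPlanesDeg(Vector3 nRef, Vector3 n1)
{
    float cosTheta = Vector3.Dot(nRef.normalized, n1.normalized);
    float deg = Mathf.Acos(Mathf.Clamp(cosTheta, -1f, 1f)) * Mathf.Rad2Deg; // clamp guards against rounding error
    return Mathf.Min(deg, 180f - deg); // n and -n describe the same plane, so fold into [0, 90]
}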

How can I detect if a point is inside a cone or not, in 3D space?

How is it possible to detect whether a 3D point is inside a cone or not?
Apex of the cone = (x1, y1, h1)
Cone angle = alpha
Height of the cone = H
Cone radius = R
Coordinates of a point inside the cone = P1 (x2, y2, h2)
Coordinates of a point outside the cone = P2 (x3, y3, h3)
Result for point1 = true
Result for point2 = false
To expand on Ignacio's answer:
Let
x = the tip of the cone
dir = the normalized axis vector, pointing from the tip to the base
h = height
r = base radius
p = point to test
So you project p onto dir to find the point's distance along the axis:
cone_dist = dot(p - x, dir)
At this point, you can reject values outside 0 <= cone_dist <= h.
Then you calculate the cone radius at that point along the axis:
cone_radius = (cone_dist / h) * r
And finally calculate the point's orthogonal distance from the axis to compare against the cone radius:
orth_distance = length((p - x) - cone_dist * dir)
is_point_inside_cone = (orth_distance < cone_radius)
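Put together as a small C# (Unity Vector3) helper, under the same definitions (tip, normalized dir, h, r); the method name is my own:
bool IsPointInsideCone(Vector3 p, Vector3 tip, Vector3 dir, float h, float r)
{
    float coneDist = Vector3.Dot(p - tip, dir);       // distance of p along the axis
    if (coneDist < 0f || coneDist > h) return false;  // reject values outside 0 <= cone_dist <= h
    float coneRadius = (coneDist / h) * r;            // cone radius at that axial distance
    float orthDistance = ((p - tip) - coneDist * dir).magnitude;
    return orthDistance < coneRadius;
}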
The language-agnostic answer:
Find the equation of the line defining the main axis of your cone.
Compute the distance from the 3D point to the line, along with the intersection point along the line where the distance is perpendicular to the line.
Find the radius of your cone at the intersection point and check to see if the distance between the line and your 3D point is greater than (outside) or less than (inside) that radius.
A cone is simply an infinite number of circles whose radius is given by a linear function of the distance from the apex. Simply check whether the point is inside the circle of the appropriate size.
Wouldn't it be easier to compute the angle between the cone's axis vector (from the apex to the center of the base) and the vector from the apex to the point under evaluation? If you also project the point's vector onto the axis and the projected length is shorter than the axis vector, then between the angle and the length you know whether you are inside the cone.
https://en.wikipedia.org/wiki/Vector_projection