Patch on a sphere of varying size - unity3d

Imagine a patch glued to a sphere. How would I manage to make the patch keep its center position and surface area as the sphere is scaled up or down? Normally, only the curvature of the patch should change, as it is "glued" to the sphere. Assume the patch is described as a set of (latitude, longitude) coordinates.
One possible solution would consist of converting the geographical coordinates of the patch into gnomonic coordinates (patch viewed perpendicularly from directly above), thereby making a 2D texture, which is then scaled up or down as the sphere changes size. But I am unsure whether this is the right approach and how close to the desired effect this would be.
I am a newbie, so perhaps Unity can do this simply with the right set of options when applying a texture. In that case, which input map projection should be used for the texture? Or maybe I should use a 3D surface and "nail" it somehow to the sphere.
Thank you!!
EDIT
I'm adding an illustration to show how the patch should be deformed as the sphere is scaled up or down. On a very small sphere, the patch would eventually wrap around, whereas on a larger sphere it would be almost flat. The deformation of the patch could be thought of as similar to gluing the same sticker to spheres of different sizes.
The geometry of the patch could be any polygonal surface, and as previously mentioned must preserve its center position and surface area when the sphere is scaled up or down.

Assume you have a sphere of radius R1 centered at the origin of the standard coordinate system O e1 e2 e3. Then the sphere is given by all points x = [x[0], x[1], x[2]] in 3D that satisfy the equation x[0]^2 + x[1]^2 + x[2]^2 = R1^2. On this sphere you have a patch and the patch has a center c = [c[0], c[1], c[2]].
First, rotate the patch so that its center c goes to the north pole; then project it onto a plane using an area-preserving map for the sphere of radius R1; then map it back using the analogous area-preserving map for the sphere of radius R2; and finally rotate the north pole back to the scaled position of the center.
Functions you may need to define:
Function 1: Define spherical coordinates
x = sc(u, v, R):
return
x[0] = R*sin(u)*sin(v)
x[1] = R*sin(u)*cos(v)
x[2] = R*cos(u)
where
0 <= u <= pi and 0 <= v < 2*pi
Function 2: Define inverse spherical coordinates:
[u, v] = inv_sc(x, R):
return
    u = arccos( x[2] / R )
    if x[1] > 0
        v = arccot( x[0] / x[1] )
    else if x[1] < 0
        v = 2*pi - arccot( x[1] / x[0] )
    else if x[1] = 0 and x[0] > 0
        v = 0
    else if x[1] = 0 and x[0] < 0
        v = pi
where x[0]^2 + x[1]^2 + x[2]^2 = R^2
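In practice, the whole case analysis above can be collapsed into a single atan2 call, which handles all four quadrants and the x[1] = 0 cases at once. A minimal numpy sketch (my own, using the same x[0] = R*sin(u)*sin(v), x[1] = R*sin(u)*cos(v) convention as Function 1):

import numpy as np

def inv_sc(x, R):
    # u: angle from the north pole, in [0, pi]
    u = np.arccos(x[2] / R)
    # with x[0] = R*sin(u)*sin(v) and x[1] = R*sin(u)*cos(v),
    # atan2(x[0], x[1]) recovers v; wrap into [0, 2*pi)
    v = np.arctan2(x[0], x[1]) % (2 * np.pi)
    return u, v

At the poles sin(u) = 0 and the longitude is undefined; atan2(0, 0) returns 0, which is a harmless convention here.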
Function 3: Rotation matrix that rotates the center c to the north pole:
Assume the center c is given in spherical coordinates [uc, vc]. Then apply function 1:
c = [c[0], c[1], c[2]] = sc(uc, vc, R1)
Then find the index i for which abs(c[i]) = min( abs(c[0]), abs(c[1]), abs(c[2]) ). Say i = 2, and take the coordinate vector e2 = [0, 1, 0].
Calculate the cross-product vectors cross(c, e2) and cross(cross(c, e2), c), think of them as row vectors, and form the 3 by 3 rotation matrix
A3 = c / norm(c)
A2 = cross(c, e2) / norm(cross(c, e2))
A1 = cross(A2, A3)
A = [ A1,
      A2,
      A3 ]
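As a quick sanity check (my own sketch, not part of the recipe above), the rows built this way are orthonormal, and the matrix sends c / norm(c) to the north pole e3 = [0, 0, 1]:

import numpy as np

def rot_mtrx(c):
    c = np.asarray(c, dtype=float)
    # pick the basis vector least aligned with c for a stable cross product
    e = np.zeros(3)
    e[np.argmin(np.abs(c))] = 1.0
    A3 = c / np.linalg.norm(c)
    A2 = np.cross(A3, e)
    A2 /= np.linalg.norm(A2)
    A1 = np.cross(A2, A3)
    return np.array([A1, A2, A3])  # rows A1, A2, A3

c = np.array([0.3, -0.8, 0.52])
A = rot_mtrx(c)
print(np.allclose(A @ A.T, np.eye(3)))  # True: orthonormal rows
print(A @ (c / np.linalg.norm(c)))      # ~ [0, 0, 1]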
Function 4: Area-preserving change of latitude:
[w, z] = area_pres(u, v, R1, R2):
return
    w = arccos( 1 - (R1/R2)^2 * (1 - cos(u)) )
    z = v
Note that the arccos argument must stay in [-1, 1], i.e. (R1/R2)^2 * (1 - cos(u)) <= 2; when shrinking the sphere (R2 < R1), the patch must be small enough to fit on the new sphere.
Now if you re-scale the sphere from radius R1 to radius R2 then any point x from the patch on the sphere with radius R1 gets transformed to the point y on the sphere of radius R2 by the following chain of transformations:
If x is given in spherical coordinates `[ux, vx]`, first apply
x = [x[0], x[1], x[2]] = sc(ux, vx, R1)
Then rotate with the matrix A:
x = matrix_times_vector(A, x)
Then apply the chain of transformations:
[u,v] = inv_sc(x, R1)
[w,z] = area_pres(u,v,R1,R2)
y = sc(w,z,R2)
Now y is on the R2 sphere.
Finally,
y = matrix_times_vector(transpose(A), y)
As a result, all of these points y fill in the corresponding transformed patch on the sphere of radius R2, and the patch area on R2 equals the patch area of the original patch on sphere R1. Moreover, the center point c just gets scaled up or down along a ray emanating from the center of the sphere.
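Putting the whole chain together for a single point, here is a minimal numpy sketch (my own illustration; it reuses the inv_sc and rot_mtrx sketches above, and sc follows the convention of Function 1):

import numpy as np

def sc(u, v, R):
    return R * np.array([np.sin(u) * np.sin(v),
                         np.sin(u) * np.cos(v),
                         np.cos(u)])

def area_pres(u, v, R1, R2):
    return np.arccos(1 - (R1 / R2) ** 2 * (1 - np.cos(u))), v

R1, R2 = 1.0, 2.0
c = sc(0.6, 1.1, R1)         # patch center on the R1 sphere
A = rot_mtrx(c)              # from the sketch above
x = sc(0.7, 1.0, R1)         # some point of the patch

x_rot = A @ x                # center now sits at the north pole
u, v = inv_sc(x_rot, R1)     # from the sketch above
w, z = area_pres(u, v, R1, R2)
y = A.T @ sc(w, z, R2)       # rotate back to the original orientation
print(y, np.linalg.norm(y))  # y lies on the R2 sphere (norm = 2.0)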
The general idea behind this approach is that the area element of the R1 sphere is R1^2*sin(u) du dv, and we can look for a transformation of the latitude-longitude coordinates [u,v] of the R1 sphere into latitude-longitude coordinates [w,z] of the R2 sphere, given by functions w = w(u,v) and z = z(u,v), such that
R2^2*sin(w) dw dz = R1^2*sin(u) du dv
When you expand the derivatives of [w,z] with respect to [u,v], you get
dw = dw/du(u,v) du + dw/dv(u,v) dv
dz = dz/du(u,v) du + dz/dv(u,v) dv
Plug them in the first formula, and you get
R2^2*sin(w) dw dz = R2^2*sin(w) * ( dw/du(u,v) du + dw/dv(u,v) dv ) wedge ( dz/du(u,v) du + dz/dv(u,v) dv )
= R1^2*sin(u) du dv
which simplifies to the equation
R2^2*sin(w) * ( dw/du(u,v) dz/dv(u,v) - dw/dv(u,v) dz/du(u,v) ) du dv = R1^2*sin(u) du dv
So the general differential equation that guarantees the area preserving property of the transformation between the spherical patch on R1 and its image on R2 is
R2^2*sin(w) * ( dw/du(u,v) dz/dv(u,v) - dw/dv(u,v) dz/du(u,v) ) = R1^2*sin(u)
Now, recall that the center of the patch has been rotated to the north pole of the R1 sphere, so you can think of the center of the patch as the north pole. Suppose you want a nice transformation of the patch that is somewhat homogeneous and isotropic as seen from the patch's center, i.e. when standing at the center c of the patch (c = north pole) you see the patch deformed so that longitudes (great circles passing through c) are preserved (all points of a given longitude get mapped to points of the same longitude). Then a point [u, v] gets transformed to a new point [w, z] on the same longitude, i.e. z = v. Therefore such a longitude-preserving transformation should look like this:
w = w(u,v)
z = v
Consequently, the area-preserving equation simplifies to the following partial differential equation
R2^2*sin(w) * dw/du(u,v) = R1^2*sin(u)
because dz/dv = 1 and dz/du = 0.
To solve it, first fix the variable v, and you get the ordinary differential equation
R2^2*sin(w) * dw = R1^2*sin(u) du
whose solution is
R2^2*(1 - cos(w)) = R1^2*(1 - cos(u)) + const
Therefore, when you let v vary, the general solution for the partial differential equation
R2^2*sin(w) * dw/du(u,v) = R1^2*sin(u)
in implicit form (equation that links the variables w, u, v) should look like
R2^2*(1 - cos(w)) = R1^2*(1 - cos(u)) + f(v)
for any function f(v)
However, let us not forget that the north pole stays fixed during this transformation, i.e. we have the restriction that w = 0 whenever u = 0. Plug this condition into the equation above and you get the restriction for the function f(v):
R2^2*(1 - cos(0)) = R1^2*(1 - cos(0)) + f(v)
R2^2*(1 - 1) = R1^2*(1 - 1) + f(v)
0 = f(v)
for every longitude v
Therefore, as soon as you impose longitudes to be transformed to the same longitudes and the north pole to be preserved, the only option you are left with is the equation
R2^2*(1 - cos(w)) = R1^2*(1 - cos(u))
which means that when you solve for w you get
w = arccos( 1 - (R1/R2)^2 * (1 - cos(u)) )
and thus, the corresponding area preserving transformation between the patch on sphere R1 and the patch on sphere R2 with the same area, fixed center and a uniform deformation at the center so that longitudes are transformed to the same longitudes, is
w = arccos( 1 - (R1/R2)^2 * (1 - cos(u)) )
z = v
Here I implemented some of these functions in Python and ran a simple simulation:
import numpy as np
import math
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt

def trig(uv):
    return np.cos(uv), np.sin(uv)

def sc_trig(cos_uv, sin_uv, R):
    n, dim = cos_uv.shape
    x = np.empty((n,3), dtype=float)
    x[:,0] = sin_uv[:,0]*cos_uv[:,1]  # sin_u*cos_v
    x[:,1] = sin_uv[:,0]*sin_uv[:,1]  # sin_u*sin_v
    x[:,2] = cos_uv[:,0]              # cos_u
    return R*x

def sc(uv, R):
    cos_uv, sin_uv = trig(uv)
    return sc_trig(cos_uv, sin_uv, R)

def inv_sc_trig(x):
    n, dim = x.shape
    cos_uv = np.empty((n,2), dtype=float)
    sin_uv = np.empty((n,2), dtype=float)
    Rad = np.sqrt(x[:,0]**2 + x[:,1]**2 + x[:,2]**2)
    r_xy = np.sqrt(x[:,0]**2 + x[:,1]**2)
    cos_uv[:,0] = x[:,2]/Rad   # cos_u
    sin_uv[:,0] = r_xy/Rad     # sin_u
    cos_uv[:,1] = x[:,0]/r_xy  # cos_v
    sin_uv[:,1] = x[:,1]/r_xy  # sin_v
    return cos_uv, sin_uv

def center_x(x, R):
    n, dim = x.shape
    c = np.sum(x, axis=0)/n
    return R*c/math.sqrt(c.dot(c))

def center_uv(uv, R):
    x = sc(uv, R)
    return center_x(x, R)

def center_trig(cos_uv, sin_uv, R):
    x = sc_trig(cos_uv, sin_uv, R)
    return center_x(x, R)

def rot_mtrx(c):
    i = np.argmin(np.abs(c))  # component of smallest magnitude, per the text
    e_i = np.zeros(3)
    e_i[i] = 1
    A = np.empty((3,3), dtype=float)
    A[2,:] = c/math.sqrt(c.dot(c))
    A[1,:] = np.cross(A[2,:], e_i)
    A[1,:] = A[1,:]/math.sqrt(A[1,:].dot(A[1,:]))
    A[0,:] = np.cross(A[1,:], A[2,:])
    return A.T  # ready to apply to an n x 3 matrix of points from the right

def area_pres(cos_uv, sin_uv, R1, R2):
    cos_wz = np.empty(cos_uv.shape, dtype=float)
    sin_wz = np.empty(sin_uv.shape, dtype=float)
    cos_wz[:,0] = 1 - (R1/R2)**2 * (1 - cos_uv[:,0])
    cos_wz[:,1] = cos_uv[:,1]
    sin_wz[:,0] = np.sqrt(1 - cos_wz[:,0]**2)  # valid while the patch fits on the R2 sphere
    sin_wz[:,1] = sin_uv[:,1]
    return cos_wz, sin_wz

def sym_patch_0(n, m):
    u = math.pi/2 + np.linspace(-math.pi/3, math.pi/3, num=n)
    v = math.pi/2 + np.linspace(-math.pi/3, math.pi/3, num=m)
    uv = np.empty((n, m, 2), dtype=float)
    uv[:,:,0] = u[:, np.newaxis]
    uv[:,:,1] = v[np.newaxis,:]
    uv = np.reshape(uv, (n*m, 2), order='F')
    return uv, u, v

uv, u, v = sym_patch_0(18, 18)

r1 = 1
r2 = 2/3
r3 = 2
limits = max(r1, r2, r3)
p = math.pi

x = sc(uv, r1)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x[:,0], x[:,1], x[:,2])
ax.set_xlim(-limits, limits)
ax.set_ylim(-limits, limits)
ax.set_zlim(-limits, limits)
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
plt.show()

B = rot_mtrx(center_x(x, r1))
x = x.dot(B)
cs, sn = inv_sc_trig(x)

cs1, sn1 = area_pres(cs, sn, r1, r2)
y = sc_trig(cs1, sn1, r2)
y = y.dot(B.T)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(y[:,0], y[:,1], y[:,2])
ax.set_xlim(-limits, limits)
ax.set_ylim(-limits, limits)
ax.set_zlim(-limits, limits)
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
plt.show()

cs1, sn1 = area_pres(cs, sn, r1, r3)
y = sc_trig(cs1, sn1, r3)
y = y.dot(B.T)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(y[:,0], y[:,1], y[:,2])
ax.set_xlim(-limits, limits)
ax.set_ylim(-limits, limits)
ax.set_zlim(-limits, limits)
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
plt.show()
uv, u, v = sym_patch_0(18,18)
r1 = 1
r2 = 2/3
r3 = 2
limits = max(r1,r2,r3)
p = math.pi
x = sc(uv,r1)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x[:,0], x[:,1], x[:,2])
ax.set_xlim(-limits, limits)
ax.set_ylim(-limits, limits)
ax.set_zlim(-limits, limits)
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
plt.show()
B = rot_mtrx(center_x(x,r1))
x = x.dot(B)
cs, sn = inv_sc_trig(x)
cs1, sn1 = area_pres(cs, sn, r1, r2)
y = sc_trig(cs1, sn1, r2)
y = y.dot(B.T)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(y[:,0], y[:,1], y[:,2])
ax.set_xlim(-limits, limits)
ax.set_ylim(-limits, limits)
ax.set_zlim(-limits, limits)
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
plt.show()
cs1, sn1 = area_pres(cs, sn, r1, r3)
y = sc_trig(cs1, sn1, r3)
y = y.dot(B.T)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(y[:,0], y[:,1], y[:,2])
ax.set_xlim(-limits, limits)
ax.set_ylim(-limits, limits)
ax.set_zlim(-limits, limits)
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
plt.show()
One can see three figures of how a patch gets deformed when the radius of the sphere changes from radius 2/3, through radius 1, and finally to radius 2. The patch's area doesn't change, and the transformation of the patch is homogeneous in all directions with no excessive deformation.

You could e.g. do something like
public class Example : MonoBehaviour
{
    public Transform sphere;
    public float latitude;
    public float longitude;

    private void Update()
    {
        transform.position = sphere.position
            + Quaternion.AngleAxis(longitude, -Vector3.up)
            * Quaternion.AngleAxis(latitude, -Vector3.right)
            * sphere.forward * sphere.lossyScale.x / 2f;
        transform.LookAt(sphere);
        transform.Rotate(90, 0, 0);
    }
}
The pin would not be a child of the sphere. It would result in a pin (in red) like:
Alternatively, as said, you could make the pin a child of the sphere, in a structure like
Sphere
|--PinAnchor
   |--Pin
So in order to change the Pin position you would rotate the PinAnchor. The Pin itself would update its own scale so that it always keeps a certain target scale, e.g. like
public class Example : MonoBehaviour
{
    public float targetScale;

    private void Update()
    {
        var scale = transform.parent.lossyScale;
        var invertScale = new Vector3(1 / scale.x, 1 / scale.y, 1 / scale.z);
        if (float.IsNaN(invertScale.x)) invertScale.x = 0;
        if (float.IsNaN(invertScale.y)) invertScale.y = 0;
        if (float.IsNaN(invertScale.z)) invertScale.z = 0;
        transform.localScale = invertScale * targetScale;
    }
}
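For readers outside Unity, the placement math of the first snippet can be written out directly. A rough numpy sketch of the same idea (my own illustration, assuming Y up, the sphere's forward axis along +Z, and right-handed rotation matrices, so signs may differ from Unity's left-handed convention):

import numpy as np

def rot_x(deg):
    a = np.radians(deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def rot_y(deg):
    a = np.radians(deg)
    return np.array([[ np.cos(a), 0, np.sin(a)],
                     [ 0,         1, 0        ],
                     [-np.sin(a), 0, np.cos(a)]])

def pin_position(center, radius, latitude, longitude):
    # rotate the sphere's forward direction by -latitude about X,
    # then by -longitude about Y, and push out to the surface
    forward = np.array([0.0, 0.0, 1.0])
    return center + rot_y(-longitude) @ rot_x(-latitude) @ forward * radius

print(pin_position(np.zeros(3), 0.5, 45.0, 30.0))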

I am going to add another answer, because you may decide that different properties are important for your patch transformation, more specifically having minimal (in some sense) distortion, while exact area preservation of the patch is not as important.
Assume you want to create a transformation from a patch (an open subset of the sphere with relatively well-behaved boundary, e.g. piecewise smooth or even piecewise geodesic boundary) on a sphere of radius R1 to a corresponding patch on a sphere of radius R2. However, you want the transformation to not distort the original patch on R1 when mapping it to R2. Assume the patch on R1 has a distinguished point c, called the center. This could be its geometric center, i.e. its center of mass (barycenter), or a point selected in another way.
For this discussion, let us assume the center c is at the north pole of the sphere R1. If it is not, we can simply rotate it to the north pole (see my previous post for one way to rotate the center), so that the standard spherical coordinates [u, v] (latitude and longitude) naturally apply, i.e.
for sphere R1:
x[0] = R1*sin(u)*cos(v)
x[1] = R1*sin(u)*sin(v)
x[2] = R1*cos(u)
for sphere R2:
y[0] = R2*sin(w)*cos(z)
y[1] = R2*sin(w)*sin(z)
y[2] = R2*cos(w)
with the point c having coordinates [0, 0] (or any [0, v] for that matter, as these coordinates have a singularity at the pole). Ideally, if you can construct an isometric transformation between the two patches (an isometry is a transformation that preserves distances, angles and consequently area), then you are done. The two spheres, however, have different radii R1 and R2, so they have different intrinsic curvature, and hence there can be no isometry between the patches. Nevertheless, let us see what an isometry would have done: an isometry is a transformation that transforms the metric tensor (the line element, the way we measure distances on the sphere) of the first sphere into the metric tensor of the second, i.e.
Metric tensor of R1:
R1^2 * ( du^2 + (sin(u))^2 dv^2 )
Metric tensor of R2:
R2^2 * ( dw^2 + (sin(w))^2 dz^2 )
An isometry: [u,v] --> [w,z] so that
R1^2 * ( du^2 + (sin(u))^2 dv^2 ) = R2^2 * ( dw^2 + (sin(w))^2 dz^2 )
What would an isometry do? First, it would send spherical geodesics (great circles) to spherical geodesics, so in particular longitudinal circles of R1 should be mapped to longitudinal circles of R2, because we want the north pole of R1 to be mapped to the north pole of R2. Also, an isometry would preserve angles, in particular the angles between longitudinal circles. Since the angle between the zero longitudinal circle and the longitudinal circle of longitude v is equal to v (up to translation by a constant, if a global rotation of the sphere around the north pole is added, but we don't want that), v should be preserved by an isometry (i.e. the isometry should preserve the bearing at the north pole). That implies that the desired isometric map between the patches should have the form
Map between patch on R1 and patch on R2,
which maps the north pole of R1 to the north pole of R2:
w = w(u, v)
z = v
Furthermore, since the sphere looks the same at any point and in any direction (it is homogeneous and isotropic everywhere), this is in particular true at the north pole, and therefore an isometry should act identically in all directions when looking from the north pole (the technical phrasing is that "isometric transformations should commute with the group of isometric automorphisms of the surfaces"), which yields that w = w(u, v) should not depend on the variable v:
Map between patch on R1 and patch on R2,
which maps the north pole of R1 to the north pole of R2:
w = w(u)
z = v
The final step towards finding an isometric transformation between the patches on R1 and R2 is to make sure that the metric tensors before and after the transformation are equal, i.e.:
R2^2 * ( dw^2 + (sin(w))^2 dz^2 ) = R1^2 * ( du^2 + (sin(u))^2 dv^2 )
dw = (dw/du(u)) du and dz = dv
R2^2 * ( (dw/du(u))^2 du^2 + (sin( w(u) ))^2 dv^2 ) = R1^2 * ( du^2 + (sin(u))^2 dv^2 )
set K = R1/R2
( dw/du(u) )^2 du^2 + (sin( w(u) ))^2 dv^2 = K^2 du^2 + K^2*(sin(u))^2 dv^2
For the latter equation to hold, we need the function w = w(u) to satisfy the following two restrictions
dw/du(u) = K
sin(w(u)) = K * sin(u)
However, we have only one function w(u) and two equations, which can be satisfied simultaneously only when K = 1 (i.e. R1 = R2), which is not the case. This is where the isometric conditions break down, and that is why there is no isometric transformation between a patch on sphere R1 and a patch on sphere R2 when R1 != R2. One thing we can try to do is find a transformation that, in some reasonable sense, minimizes the discrepancy between the metric tensors (i.e. we would like to somehow minimize the degree of non-isometricity of the transformation [w = w(u), z = v]). To that end, we can define a Lagrangian discrepancy function (yes, exactly like in physics) and try to minimize it:
Lagrangian:
L(u, w, dw/du) = ( dw/du - K )^2 + ( sin(w) - K*sin(u) )^2
minimize the action:
S[w] = integral_0^u2 L(u, w(u), dw/du(u))du
or more explicitly, find the function `w(u)` that makes
the sum (integral) of all discrepancies:
S[w] = integral_0^u2 ( ( dw/du(u) - K )^2 + ( sin(w(u)) - K*sin(u) )^2 )du
minimal
In order to find the function w(u) that minimizes the discrepancy integral S[w] above, one needs to derive the Euler-Lagrange equation associated to the Lagrangian L(u, w, dw/du) and solve it. In this case there is a single Euler-Lagrange equation, and it is of second order:
d^2w/du^2 = sin(w)*cos(w) - K*sin(u)*cos(w)
w(0) = 0
dw/du(0) = K
or using alternative notation:
w''(u) = sin(w(u))*cos(w(u)) - K*sin(u)*cos(w(u))
w(0) = 0
w'(0) = K
The reason for the condition w'(0) = K comes from imposing the isometric identity
( dw/du(u) )^2 du^2 + (sin( w(u) ))^2 dv^2 = K^2 du^2 + K^2*(sin(u))^2 dv^2
When u = 0, we already know w(0) = 0 because we want the north pole to be mapped to the north pole and so the latter identity simplifies to
( dw/du(0) )^2 du^2 + (sin(0))^2 dv^2 = K^2 du^2 + K^2*(sin(0))^2 dv^2
( dw/du(0) )^2 du^2 = K^2 du^2
( dw/du(0) )^2 = K^2
which holds when
dw/du(0) = w'(0) = K
Now, to obtain a north-pole-respecting transformation between circular patches on two spheres of radii R1 and R2 respectively, with as little distortion as possible (with respect to the error Lagrangian), we have to solve the non-linear initial value problem
d^2w/du^2 = sin(w)*cos(w) - K*sin(u)*cos(w)
w(0) = 0
dw/du(0) = K
or written as a system of two first-order differential equations (Hamiltonian form):
dw/du = p
dp/du = sin(w)*cos(w) - K*sin(u)*cos(w)
w(0) = 0
p(0) = K
I seriously doubt that this is an exactly solvable (integrable) system of ordinary differential equations, but a numerical integration with a reasonably small integration step can give an excellent discrete solution, which combined with a good interpolation scheme, like cubic splines, can give you a very accurate solution.
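For instance, here is a minimal scipy sketch of that numerical integration (my own illustration; u2 is an assumed angular radius of the patch, and w is integrated as a function of u):

import numpy as np
from scipy.integrate import solve_ivp

R1, R2 = 1.0, 2.0
K = R1 / R2
u2 = np.pi / 3  # angular radius of the patch on the R1 sphere

def rhs(u, y):
    w, p = y  # y = [w, dw/du]
    return [p, np.sin(w) * np.cos(w) - K * np.sin(u) * np.cos(w)]

sol = solve_ivp(rhs, (0.0, u2), [0.0, K], dense_output=True, rtol=1e-9, atol=1e-12)
u = np.linspace(0.0, u2, 7)
print(sol.sol(u)[0])  # w(u) sampled at a few latitudes

The dense_output=True flag gives a continuous interpolant sol.sol, which plays the role of the interpolation scheme mentioned above.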
Now, if you do not care too much about exactly equal areas between the patches, but only reasonably close areas, and would actually prefer the smallest possible (in some sense) geometric deformation, you can simply use this model and stop here. However, if you really insist on equal area between the two patches, you can continue further by splitting your original patch (call it D1) on sphere R1 into a subpatch C1 inside D1 with the same center as D1, such that the difference D1 \ C1 is a narrow frame surrounding C1. Let the image of C1 under the map w = w(u), z = v defined above be denoted by C2. Then, to find a transformation (a map) from the patch D1 onto a patch D2 on the sphere R2 which has the same area as D1 and includes C2, you can piece together one map from two submaps:
w = w(u)
z = v
for [u,v] from C1 ---> [w,z] from C2
w = w_ext(u, v)
z = v
for [u,v] from D1 \ C1 ---> [w,z] from D2 \ C2
The question is how to find the extension transformation w_ext(u, v). For the area of D2 to be equal to the area of D1, you need to choose w_ext so that
integral_(D1 \ C1) sin(w_ext(u)) dw_ext/du(u) du dv = (R1/R2)^2 Area(D1) - Area(C2)   (the areas on the right are constants)
Now, pick a suitable function f(u) (you can start with a constant if you want), say a polynomial with adjustable coefficients, so that
integral_(D1 \ C1) f(u) du dv = (R1/R2)^2 Area(D1) - Area(C2)
e.g.
f(u) = L (constant) such that
integral_(D1 \ C1) L du dv = (R1/R2)^2 Area(D1) - Area(C2)
i.e.
L = ( (R1/R2)^2 Area(D1) - Area(C2) ) / integral_(D1 \ C1) du dv
Then solve the differential equation
sin(w) dw/du = f(u)
e.g.
sin(w) dw/du = L
w(u) = arccos(L*u + a)
But in this case it is important to glue this solution to the previous one, so the initial condition of w_ext matters, possibly depending on the direction v, i.e.
w_ext(u, v) = arccos(L*u + a(v))
So there exists this somewhat more laborious approach, but it has a lot of details and is more complicated.

Related

Calculate 3D coordinates with camera matrix and known distance

I have been struggling with this quiz question. It was part of the FSG 2022 registration quiz, and I can't figure out how to solve it.
At first I thought that I could use the extrinsic and intrinsic parameters to calculate 3D coordinates using the equations described by Mathworks or in this article. Later I realized that the distance to the object is provided in the camera frame, which means that it could be treated as a depth camera, converting depth info into 3D space as described in a medium.com article.
That article uses the formula shown below to calculate the x and y coordinates, and is very similar to this question, yet I can't get the correct solution.
One of my Matlab scripts attempting to solve it:
rot = eul2rotm(deg2rad([102 0 90]));
trans = [500 160 1140]' / 1000; % mm to m
t = [rot trans];
u = 795; % here was typo as pointed out by solstad.
v = 467;
cx = 636;
cy = 548;
fx = 241;
fy = 238;
z = 2100 / 1000 % mm to m
tmp_x = (u - cx) * z / fx;
tmp_y = (v - cy) * z / fy;
% attempt 1
tmp_cords = [tmp_x; tmp_y; z; 1]
linsolve(t', tmp_cords)'
% result is: 1.8913 1.8319 -0.4292
% attempt 2
tmp_cords = [tmp_x; tmp_y; z]
rot * tmp_cords + trans
% result is: 2.2661 1.9518 0.4253
If possible, I would like to see the calculation process, not any kind of Python code.
The correct solution provided by the organisers is 2.030, 1.272, 0.228 m.
The task states that the object's euclidean (straight-line) distance is 2.1 m. That doesn't mean its distance along z is 2.1 m. The two only coincide if the object's translation relative to the camera frame has no x or y component.
The z component of the object's translation will be less than 2.1 meters.
You need to take a ray/vector for the screen space coordinates (normalized) and multiply that by the euclidean distance.
v_x = (u - cx) / fx;
v_y = (v - cy) / fy;
v_z = 1;
v = [v_x; v_y; v_z];
dist = 2.1;
tmp = v / norm(v) * dist;
The rotation may be an issue. Roll happens around X, then pitch happens around Y, and then yaw happens around Z. These operations are applied in that order, i.e. inner to outer.
R_Z * R_Y * R_X * v
My rotation matrix is
[[ 0.       0.20791  0.97815]
 [ 1.       0.       0.     ]
 [ 0.       0.97815 -0.20791]]
That camera, taking the usual (X right, Y down, Z far) frame, would be looking, upside down, out the windshield, and slightly down.
Make sure that eul2rotm() does the right thing (specify axis order as 'XYZ') or that you use something else.
You can use rotvec2mat3d() to build individual rotation matrices from an axis-angle encoding.
Perhaps also review different MATLAB conventions regarding matrix multiplication: https://www.mathworks.com/help/images/migrate-geometric-transformations-to-premultiply-convention.html
I used Python and scipy.spatial.transform.Rotation.from_euler('xyz', [R_roll, R_pitch, R_yaw], degrees=True).as_matrix() to arrive at the sample solution.
Properly, the task should have specified a frame conversion step between vehicle and camera because the differing views are quite confusing, with a car having +X being forward and a camera having +Z being forward...
In addition to Christoph Rackwitz's answer, which is correct and should get all the credit, here is a working Matlab script:
rot = eul2rotm(deg2rad([90 0 102]));
trans = [500 160 1140]' / 1000; % mm to m
u = 795;
v = 467;
cx = 636;
cy = 548;
fx = 241;
fy = 238;
v_x = (u - cx) / fx;
v_y = (v - cy) / fy;
v_z = 1;
v = [v_x; v_y; v_z];
dist = 2.1;
tmp = v / norm(v) * dist;
rot * tmp + trans
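For reference, here is a numpy/scipy transliteration of the same computation (my own sketch; it reproduces the organisers' 2.030, 1.272, 0.228 m):

import numpy as np
from scipy.spatial.transform import Rotation

rot = Rotation.from_euler('xyz', [102, 0, 90], degrees=True).as_matrix()
trans = np.array([500, 160, 1140]) / 1000  # mm to m

u, v = 795, 467
cx, cy = 636, 548
fx, fy = 241, 238

ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])  # normalized image ray
point_cam = ray / np.linalg.norm(ray) * 2.1          # scale to the euclidean distance
print(rot @ point_cam + trans)                       # ~ [2.030, 1.272, 0.228]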

How to find the Delaunay neighbours of points in two different sets using scipy's Delaunay?

See https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.Delaunay.html
Consider two sets of points. For each point in X_, I would like to find the nearest Delaunay neighbours in "points". I think a slow way is to form a Delaunay triangulation of "points" plus one point from X_ at a time and then do the neighbour lookup somehow. Is there a more efficient way of doing this using scipy (or another tool)?
from pylab import *
from scipy.spatial import Delaunay

np.random.seed(1)
points = np.random.randn(10, 2)
X_ = np.random.randn(4, 2)

tri = Delaunay(points, incremental=True)
plt.triplot(points[:,0], points[:,1], tri.simplices, alpha=0.5)
plot(X_[:, 0], X_[:, 1], 'ro', alpha=0.5)
for i, (x, y) in enumerate(points):
    text(x, y, i)
for i, (x, y) in enumerate(X_):
    text(x, y, i, color='g')
tri.points, tri.points.shape
Ideally, the simplest solution would probably look something like this:
Generate the Delaunay triangulation of "points"
For each point X in X_:
    Add point X to the triangulation
    Get the neighbors of X in the augmented triangulation
    Remove point X from the triangulation
Using scipy.spatial.Delaunay, you have the ability to incrementally add points to the triangulation, but there isn't a mechanism to remove them. So that approach cannot be applied directly.
We can avoid actually adding the new point to the triangulation and instead just determine which points it would be connected to without actually changing the triangulation. Basically, we need to identify the cavity in the Bowyer-Watson algorithm and collect the vertices of the triangles in the cavity. A Delaunay triangle is part of the cavity if the proposed point lies inside its circumcircle. This can be built incrementally / locally starting from the triangle containing the point and then only testing neighboring triangles.
Here is the code to perform this algorithm (for a single test point).
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import Delaunay

num_tri_points = 50
rng = np.random.default_rng(1)
# points in the Delaunay triangulation
points = rng.random((num_tri_points, 2))
# test point for identifying neighbors in the combined triangulation
X = rng.random((1, 2))*.8 + .1

tri = Delaunay(points)

# determine if point lies in the circumcircle of simplex of Delaunay triangulation tri...
# note: this code is not robust to numerical errors and could be improved with exact predicates
def in_circumcircle(tri, simplex, point):
    coords = tri.points[tri.simplices[simplex]]
    ax = coords[0][0]
    ay = coords[0][1]
    bx = coords[1][0]
    by = coords[1][1]
    cx = coords[2][0]
    cy = coords[2][1]
    # circumcenter (ux, uy) of the triangle abc
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay) + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx) + (cx * cx + cy * cy) * (bx - ax)) / d
    rad_sq = (ax-ux)*(ax-ux)+(ay-uy)*(ay-uy)
    point_dist_sq = (point[0]-ux)*(point[0]-ux)+(point[1]-uy)*(point[1]-uy)
    return point_dist_sq < rad_sq

# find the triangle containing point X
starting_tri = tri.find_simplex(X)

# remember triangles that have been tested so we don't repeat
tris_tested = set()
# collect the set of neighboring vertices in the potential combined triangulation
neighbor_vertices = set()
# queue triangles for performing the incircle check
tris_to_process = [ starting_tri[0] ]
while len(tris_to_process):
    # get the next triangle
    tri_to_process = tris_to_process.pop()
    # remember that we have checked this triangle
    tris_tested.add(tri_to_process)
    # is the proposed point inside the circumcircle of the triangle
    if in_circumcircle(tri, tri_to_process, X[0,:]):
        # if so, add the vertices of this triangle as neighbors in the combined triangulation
        neighbor_vertices.update(tri.simplices[tri_to_process].flatten())
        # queue the neighboring triangles for processing
        for nbr in tri.neighbors[tri_to_process]:
            # skip the -1 marker scipy uses for "no neighbor" at the hull boundary
            if nbr != -1 and nbr not in tris_tested:
                tris_to_process.append(nbr)

# plot the results
plt.triplot(points[:,0], points[:,1], tri.simplices, alpha=0.7)
plt.plot(X[0, 0], X[0, 1], 'ro', alpha=0.5)
plt.plot(tri.points[list(neighbor_vertices)][:,0], tri.points[list(neighbor_vertices)][:,1], 'bo', alpha=0.7)
plt.show()
Running the code gives the following plot containing the original triangulation, the test point (red) and the identified neighboring vertices (blue).
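To handle a whole set X_ of query points, the cavity search can be wrapped in a function and called once per point. A possible sketch (my own, reusing in_circumcircle and tri from the code above):

def delaunay_neighbors(tri, point):
    # vertices that would connect to `point` if it were inserted
    start = int(tri.find_simplex(point.reshape(1, -1))[0])
    if start == -1:
        return set()  # point outside the convex hull; not handled here
    tested, verts, queue = set(), set(), [start]
    while queue:
        t = queue.pop()
        tested.add(t)
        if in_circumcircle(tri, t, point):
            verts.update(tri.simplices[t].flatten())
            for nbr in tri.neighbors[t]:
                if nbr != -1 and nbr not in tested:
                    queue.append(nbr)
    return verts

X_ = rng.random((4, 2))*.8 + .1
for q in X_:
    print(q, sorted(delaunay_neighbors(tri, q)))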

Rotation and Translation from Essential Matrix incorrect

I currently have a stereo camera setup. I have calibrated both cameras and have the intrinsic matrix for both cameras K1 and K2.
K1 = [2297.311, 0, 319.498;
0, 2297.313, 239.499;
0, 0, 1];
K2 = [2297.304, 0, 319.508;
0, 2297.301, 239.514;
0, 0, 1];
I have also determined the Fundamental matrix F between the two cameras using findFundamentalMat() from OpenCV. I have tested the Epipolar constraint using a pair of corresponding points x1 and x2 (in pixel coordinates) and it is very close to 0.
F = [5.672563368940768e-10, 6.265600996978877e-06, -0.00150188302445251;
6.766518121363063e-06, 4.758206104804563e-08, 0.05516598334827842;
-0.001627120880791009, -0.05934224611334332, 1];
x1 = [133; 75; 1]
x2 = [124.661; 67.6607; 1]
transpose(x2)*F*x1 = -0.0020
From F I am able to obtain the Essential Matrix E as E = K2'*F*K1. I decompose E using the MATLAB SVD function to get the 4 possibilites of rotation and translation of K2 with respect to K1.
E = transpose(K2)*F*K1;
[U,S,V] = svd(E);
diag_110 = [1 0 0; 0 1 0; 0 0 0];
newE = U*diag_110*transpose(V);
[U,S,V] = svd(newE); % perform second decomposition to get S = diag(1,1,0)
W = [0 -1 0; 1 0 0; 0 0 1];
R1 = U*W*transpose(V);
R2 = U*transpose(W)*transpose(V);
t1 = U(:,3);  % norm = 1
t2 = -U(:,3); % norm = 1
Let's say that K1 is used as the coordinate frame for which we make all measurements. Therefore, the center of K1 is at C1 = (0,0,0). With this it should be possible to apply the correct rotation R and translation t such that C2 = R*(0,0,0)+t (i.e. the center of K2 is measured with respect to the center of K1)
Now, using my corresponding pair x1 and x2: if I know the length of each pixel in both my cameras, and since I know the focal length from the intrinsic matrix, I should be able to determine two vectors v1 and v2 for both cameras that intersect at the same point, as seen below.
pixel_length = 7.4e-6; % in meters
focal_length = 17e-3;  % in meters
dx1 = (133-319.5)*pixel_length;     % x-distance from principal point of 640x480 image
dy1 = (75-239.5)*pixel_length;      % y-distance from principal point of 640x480 image
v1 = [dx1; dy1; focal_length];      % vector from the camera center (origin) to the corresponding image point on the image plane
dx2 = (124.661-319.5)*pixel_length; % same idea
dy2 = (67.6607-239.5)*pixel_length; % same idea
v2 = R * [dx2; dy2; focal_length] + t; % apply R and t to measure v2 with respect to the K1 frame
With this vector, and knowing the line equation in parametric form, we can then equate the two lines to triangulate, solving for the two scalar quantities s and t with MATLAB's left-hand divide applied to the system of equations.
C1 + s*v1 = C2 + t*v2
C1 - C2 = transpose([v2 v1])*transpose([s t]) % solve Ax = B form system to find s and t
With s and t determined, we can find the triangulated point by plugging back into the line equation. However, my process has not been successful, as I cannot find a single R and t solution in which the point is in front of both cameras and where both cameras point forwards.
Is there something wrong with my pipeline or thought process? Is it at all possible to obtain each individual pixel ray?
When you decompose the essential matrix into R and t you get 4 different solutions. Three of them project the points behind one or both cameras, and one of them is correct. You have to test which one is correct by triangulating some sample points.
There is a function in the Computer Vision System Toolbox in MATLAB called cameraPose, which will do that for you.
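For readers without the toolbox, here is a minimal numpy sketch of that test (my own illustration, not toolbox code): linearly triangulate one correspondence under each of the four (R, t) candidates and keep the candidate that yields positive depth in both cameras (this assumes each candidate R has already been sign-corrected to det(R) = +1):

import numpy as np

def triangulate(P1, P2, x1, x2):
    # linear (DLT) triangulation of one correspondence (pixel coordinates)
    A = np.vstack([x1[0]*P1[2] - P1[0],
                   x1[1]*P1[2] - P1[1],
                   x2[0]*P2[2] - P2[0],
                   x2[1]*P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def pick_pose(K1, K2, Rs, ts, x1, x2):
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
    for R in Rs:
        for t in ts:
            P2 = K2 @ np.hstack([R, t.reshape(3, 1)])
            X = triangulate(P1, P2, x1, x2)
            if X[2] > 0 and (R @ X + t)[2] > 0:  # in front of both cameras
                return R, t, X
    return None

Here Rs = [R1, R2] and ts = [t1, t2] would be the four combinations from the SVD decomposition above.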
Shouldn't it be C1 - C2 = transpose([v2 -v1]) * transpose([t s])? This works.
I checked your code and found that the determinants of both R1 and R2 are -1, which is incorrect, because a rotation matrix R should have a determinant equal to 1. Just take R = -R and try again.

Is this rotation matrix (angle about vector) limited to certain orientations?

From a couple of references (i.e., http://en.wikipedia.org/wiki/Rotation_matrix "Rotation matrix from axis and angle", and exercise 5.15 in "Computer Graphics - Principles and Practice" by Foley et al, 2nd edition in C), I've seen this definition of a rotation matrix (implemented below in Octave) that rotates points by a specified angle about a specified vector. Although I have used it before, I'm now seeing rotation problems that appear to be related to orientation. The problem is recreated in the following Octave code that
takes two unit vectors: src (green in figures) and dst (red in figures),
calculates the angle between them: theta,
calculates the vector normal to both: pivot (blue in figures),
and finally attempts to rotate src into dst by rotating it about vector pivot by angle theta.
% This test fails: rotated unit vector is not at expected location and is no longer normalized.
s = [-0.49647; -0.82397; -0.27311]
d = [ 0.43726; -0.85770; -0.27048]
test_rotation(s, d, 1);
% Determine rotation matrix that rotates the source and normal vectors to the x and z axes, respectively.
normal = cross(s, d);
normal /= norm(normal);
R = zeros(3,3);
R(1,:) = s;
R(2,:) = cross(normal, s);
R(3,:) = normal;
R
% After rotation of the source and destination vectors, this test passes.
s2 = R * s
d2 = R * d
test_rotation(s2, d2, 2);
function test_rotation(src, dst, iFig)

  norm_src = norm(src)
  norm_dst = norm(dst)

  % Determine rotation axis (i.e., normal to two vectors) and rotation angle.
  pivot = cross(src, dst);
  theta = asin(norm(pivot))
  theta_degrees = theta * 180 / pi
  pivot /= norm(pivot)

  % Initialize matrix to rotate by an angle theta about pivot vector.
  ct = cos(theta);
  st = sin(theta);
  omct = 1 - ct;
  M(1,1) = ct - pivot(1)*pivot(1)*omct;
  M(1,2) = pivot(1)*pivot(2)*omct - pivot(3)*st;
  M(1,3) = pivot(1)*pivot(3)*omct + pivot(2)*st;
  M(2,1) = pivot(1)*pivot(2)*omct + pivot(3)*st;
  M(2,2) = ct - pivot(2)*pivot(2)*omct;
  M(2,3) = pivot(2)*pivot(3)*omct - pivot(1)*st;
  M(3,1) = pivot(1)*pivot(3)*omct - pivot(2)*st;
  M(3,2) = pivot(2)*pivot(3)*omct + pivot(1)*st;
  M(3,3) = ct - pivot(3)*pivot(3)*omct;

  % Rotate src about pivot by angle theta ... and check the result.
  dst2 = M * src
  dot_dst_dst2 = dot(dst, dst2)
  if (dot_dst_dst2 >= 0.99999)
    "success"
  else
    "FAIL"
  end

  % Draw the vectors: green is source, red is destination, blue is normal.
  figure(iFig);
  x(1) = y(1) = z(1) = 0;
  ubounds = [-1.25 1.25 -1.25 1.25 -1.25 1.25];
  x(2)=src(1); y(2)=src(2); z(2)=src(3);
  plot3(x,y,z,'g-o');
  hold on
  x(2)=dst(1); y(2)=dst(2); z(2)=dst(3);
  plot3(x,y,z,'r-o');
  x(2)=pivot(1); y(2)=pivot(2); z(2)=pivot(3);
  plot3(x,y,z,'b-o');
  x(2)=dst2(1); y(2)=dst2(2); z(2)=dst2(3);
  plot3(x,y,z,'k.o');
  axis(ubounds, 'square');
  view(45,45);
  xlabel("xd");
  ylabel("yd");
  zlabel("zd");
  hold off
end
Here are the resulting figures. Figure 1 shows an orientation that doesn't work. Figure 2 shows an orientation that works: the same src and dst vectors but rotated into the first quadrant.
I was expecting the src vector to always rotate onto the dst vector, as shown in Figure 2 by the black circle covering the red circle, for all vector orientations. However Figure 1 shows an orientation where the src vector does not rotate onto the dst vector (i.e., the black circle is not on top of the red circle, and is not even on the unit sphere).
For what it's worth, the references that defined the rotation matrix did not mention orientation limitations, and I derived (in a few hours and a few pages) the rotation matrix equation and didn't spot any orientation limitations there. I'm hoping the problem is an implementation error on my part, but I haven't been able to find it yet in either of my implementations: C and Octave. Have you experienced orientation limitations when implementing this rotation matrix? If so, how did you work around them? I would prefer to avoid the extra translation into the first quadrant if it isn't necessary.
Thanks,
Greg
Seems two minus signs have escaped:
M(1,1) = ct - P(1)*P(1)*omct;
M(1,2) = P(1)*P(2)*omct - P(3)*st;
M(1,3) = P(1)*P(3)*omct + P(2)*st;
M(2,1) = P(1)*P(2)*omct + P(3)*st;
M(2,2) = ct + P(2)*P(2)*omct; %% ERR HERE; THIS IS THE CORRECT SIGN
M(2,3) = P(2)*P(3)*omct - P(1)*st;
M(3,1) = P(1)*P(3)*omct - P(2)*st;
M(3,2) = P(2)*P(3)*omct + P(1)*st;
M(3,3) = ct + P(3)*P(3)*omct; %% ERR HERE; THIS IS THE CORRECT SIGN
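A quick numpy check of the corrected signs (my own sketch): with the two plus signs restored, the matrix is exactly Rodrigues' rotation formula, and it carries the failing src onto dst:

import numpy as np

s = np.array([-0.49647, -0.82397, -0.27311])
d = np.array([ 0.43726, -0.85770, -0.27048])

pivot = np.cross(s, d)
theta = np.arcsin(np.linalg.norm(pivot))  # fine here: the angle is acute
k = pivot / np.linalg.norm(pivot)

# Rodrigues' formula: v*cos(t) + (k x v)*sin(t) + k*(k.v)*(1 - cos(t))
d2 = s*np.cos(theta) + np.cross(k, s)*np.sin(theta) + k*np.dot(k, s)*(1 - np.cos(theta))
print(d2, np.dot(d, d2))  # d2 ~ d, dot ~ 1

Note that theta = asin(norm(cross)) folds angles past 90 degrees back into [0, 90], which is a separate caveat of the original code for obtuse configurations.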
Here is a version that is much more compact, faster, and also based on Rodrigues' rotation formula:
function test
    % first test: pass
    s = [-0.49647; -0.82397; -0.27311];
    d = [ 0.43726; -0.85770; -0.27048]
    d2 = axis_angle_rotation(s, d)

    % Determine rotation matrix that rotates the source and normal vectors to the x and z axes, respectively.
    normal = cross(s, d);
    normal = normal/norm(normal);
    R(1,:) = s;
    R(2,:) = cross(normal, s);
    R(3,:) = normal;

    % Second test: pass
    s2 = R * s;
    d2 = R * d
    d3 = axis_angle_rotation(s2, d2)
end

function vec = axis_angle_rotation(vec, dst)
    % These following commands are just here for the function to act
    % the same as your original function. Eventually, the function is
    % probably best defined as
    %
    %   vec = axis_angle_rotation(vec, axs, angle)
    %
    % or even
    %
    %   vec = axis_angle_rotation(vec, axs)
    %
    % where the length of axs defines the angle.
    axs = cross(vec, dst);
    theta = asin(norm(axs));

    % some preparations
    aa = axs.'*axs;
    ra = vec.'*axs;
    % location of the circle center
    c = ra.*axs./aa;
    % first coordinate axis in the circle's plane
    u = vec - c;
    % second coordinate axis in the circle's plane
    v = [axs(2)*vec(3)-axs(3)*vec(2)
         axs(3)*vec(1)-axs(1)*vec(3)
         axs(1)*vec(2)-axs(2)*vec(1)]./sqrt(aa);
    % the output vector
    vec = c + u*cos(theta) + v*sin(theta);
end

How to find the bisector of an angle in MATLAB

I have a question connected to this code:
t = -20:0.1:20;
plot3(zeros(size(t)),t,-t.^2);
grid on
hold on
i = 1;
h = plot3([0 0],[0 t(i)],[0 -t(i)^2],'r');
h1 = plot3([-1 0],[0 0],[-400 -200],'g');
for i = 2:length(t)
    set(h,'xdata',[-1 0],'ydata',[0 t(i)],'zdata',[-400 -t(i)^2]);
    pause(0.01);
end
In this code, I plot two intersecting lines, H1 and H2. H1 is fixed; H2 moves as a function of time. H2 happens to trace a parabola in this example, but its movement could be arbitrary.
How can I calculate and draw the bisector of the angle between these two intersecting lines for every position of the line H2? I would like to see in the plot the bisector and the line H2 moving at the same time.
Solving this problem for one position of H2 is sufficient, since it will be the same procedure for all orientations of H2 relative to H1.
I am not a geometry genius, so there is likely an easier way to do this. As of now no one has responded, though, so this will be something.
You have three points in three-space:
Let A be the common vertex of the two line segments.
Let B and C be two known points on the two line segments.
Choose an arbitrary distance r where
r <= distance from A to B
r <= distance from A to C
Measure from A along line segment AB a distance of r. This is point RB.
Measure from A along line segment AC a distance of r. This is point RC.
Find the midpoint of the line segment connecting RB and RC. This is point M.
Line segment AM is the angular bisector of angle CAB.
Each of these steps should be relatively easy to accomplish.
Here is basically MatlabDoug's method with some improvement on the determination of the point he calls M.
t = -20:0.1:20;
plot3(zeros(size(t)),t,-t.^2);
grid on
hold on
v1 = [1 0 200];
v1 = v1/norm(v1);
i = 1;
h = plot3([-1 0],[0 t(i)],[-400 -t(i)^2],'r');
h1 = plot3([-1 0],[0 0],[-400 -200],'g');
l = norm([1 t(i) -t(i)^2+400]);
p = l*v1 + [-1 0 -400];
v2 = (p + [0 t(i) -t(i)^2])/2 - [-1 0 -400];
p2 = [-1 0 -400] + v2/v2(1);
h2 = plot3([-1 p2(1)],[0 p2(2)],[-400 p2(3)],'m');
pause(0.1)
for i = 2:length(t)
    l = norm([1 t(i) -t(i)^2+400]);
    p = l*v1 + [-1 0 -400];
    v2 = (p + [0 t(i) -t(i)^2])/2 - [-1 0 -400];
    p2 = [-1 0 -400] + v2/v2(1);
    set(h,'xdata',[-1 0],'ydata',[0 t(i)],'zdata',[-400 -t(i)^2]);
    set(h2,'xdata',[-1 p2(1)],'ydata',[0 p2(2)],'zdata',[-400 p2(3)]);
    pause;
end
I just use the following:
Find the normalized vectors AB and AC, where A is the common point of the segments.
V = (AB + AC) * 0.5 % produces the direction vector that bisects AB and AC
Normalize V, then do A + V * length to get a line segment of the desired length that starts at the common point.
(Note that this method does not work on 3 points along a line to produce a perpendicular bisector; it will yield a vector with no length in that case. A sketch handling this case follows.)
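A compact sketch of that recipe (my own, in numpy), including the degenerate collinear case just mentioned:

import numpy as np

def bisector(A, B, C):
    # unit direction bisecting angle BAC; undefined if AB and AC are opposite
    ab = (B - A) / np.linalg.norm(B - A)
    ac = (C - A) / np.linalg.norm(C - A)
    v = 0.5 * (ab + ac)
    n = np.linalg.norm(v)
    if n < 1e-12:
        return None  # collinear and opposite: no unique bisector direction
    return v / n

A = np.array([0.0, 0.0, 0.0])
B = np.array([1.0, 0.0, 0.0])
C = np.array([0.0, 1.0, 0.0])
print(bisector(A, B, C))  # ~ [0.7071, 0.7071, 0.0]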
I have added a C# implementation (in the XZ plane, using the Unity 3D Vector3 struct) that handles perpendicular and reflex bisectors, in case someone who knows MATLAB would like to translate it.
public Vector3 GetBisector(Vector3 center, Vector3 first, Vector3 second)
{
    Vector3 firstDir = (first - center).normalized;
    Vector3 secondDir = (second - center).normalized;
    Vector3 result = ((firstDir + secondDir) * 0.5f).normalized;

    if (IsGreaterThan180(-firstDir, secondDir))
    {
        // make into a reflex vector
        (result.x, result.z) = (-result.x, -result.z);
    }

    if (result.sqrMagnitude < 0.99f)
    {
        // we have a colinear set of lines.
        // return the perpendicular bisector.
        result = Vector3.Cross(Vector3.up, -firstDir).normalized;
    }

    return result;
}

bool IsGreaterThan180(Vector3 dir, Vector3 dir2)
{
    // < 0.0 for clockwise ordering
    return (dir2.x * dir.z - dir2.z * dir.x) < 0.0f;
}
Also note that the returned bisector is a vector of unit length. Using "center + bisector * length" will place it in world space.