I'm not sure if this would be better asked on Mathoverflow, but I thought I would check here first. I have tried to be as clear and concise as possible; if there is anything that needs clearing up please let me know.
Background
I have two sets of points in R3 that are distributed in the form of (more-or-less) arbitrarily oriented ellipsoids. I wish to interpolate a tubular structure between these two ellipsoids. I also have coordinates of the desired centre line of this tubular structure.
I approximate the ellipsoids at either end with a minimum-volume enclosing ellipsoid, using the Khachiyan algorithm implemented in Matlab [1], which returns the coordinates of the centre of the ellipsoid (C) and the matrix of the ellipsoid in centre form (A), such that:
(x - C)' * A * (x - C) = 1
I then extract the ellipsoid's axes lengths (a,b,c) and the rotation matrix (V) using singular value decomposition:
[U,D,V] = svd(A);        % A is symmetric, so this is its eigendecomposition A = V*D*V'
a = 1/sqrt(D(1,1));      % semi-axis lengths follow from the singular values
b = 1/sqrt(D(2,2));
c = 1/sqrt(D(3,3));
I can easily interpolate the axes length parameters (e.g. linear, spline). To interpolate between the orientations, I first convert the rotation matrices to quaternion representation. Then for each point along the centre line, I use spherical linear interpolation (SLERP) implemented in another Matlab file [2]:
for iPoint = 1 : nPoints
    t = iPoint / (nPoints + 2);                   % interpolation parameter in (0,1)
    quat = slerp(startQuat, endQuat, t, 0.001);   % interpolated orientation as a quaternion
    R = quat2rot(quat);                           % rotation matrix for this centre-line point
end
This is where I get stuck.
Unfortunately, even though SLERP "gives a straightest and shortest path between its quaternion endpoints," [3] the resulting interpolated ellipsoids are sometimes rotating in the "wrong" direction. That is, rather than resulting in a smooth tube, the interpolation results in a sort of twisted elliptical cylinder (see attached image, below).
I have tried checking to see if the dot product of the two quaternions is negative and if so, inverting one of them using quatinv. However, inverting results in something completely incorrect (see second attached image, below).
My question is: why is this happening, and what can I do to correct for this behavior? That is, how can I interpolate along the "true" shortest path between the two ellipsoid orientations?
Any suggestions would be greatly appreciated!
UPDATE
I have created a minimum working example and a required data file. I have also attached a screenshot of the result. I've zipped these up and uploaded them to Dropbox. [4]
[1] http://www.mathworks.com/matlabcentral/fileexchange/9542-minimum-volume-enclosing-ellipsoid/content/MinVolEllipse.m
[2] http://www.mathworks.com/matlabcentral/fileexchange/11827-slerp/content/slerp.m
[3] https://en.wikipedia.org/wiki/Slerp
[4] https://dl.dropboxusercontent.com/u/38218/ellipsoidInterpolation.zip
The solution was to rotate everything by the inverse of the rotation matrix of one of the reference ellipsoids, so that this reference ellipsoid became axis-aligned (i.e. had no rotation). Then, after interpolating each ellipsoid, rotate it back to the original reference frame by multiplying by the original rotation matrix.
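For concreteness, here is a minimal sketch of that fix, assuming R1 and R2 are the rotation matrices of the two reference ellipsoids, and rot2quat is a matrix-to-quaternion conversion paired with the quat2rot used above (all names are illustrative):
Rrel = R1' * R2;                    % ellipsoid 2's orientation in the frame where ellipsoid 1 is axis-aligned
startQuat = rot2quat(eye(3));       % identity rotation
endQuat = rot2quat(Rrel);
for iPoint = 1 : nPoints
    t = iPoint / (nPoints + 2);
    quat = slerp(startQuat, endQuat, t, 0.001);
    R = R1 * quat2rot(quat);        % rotate back into the original reference frame
end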
I've attached a screenshot of the result:
Update
Apparently this does not work in every case. I have posted a new question here.
Related
Explanation of the problem:
I have points with (x,y,z) coordinates at two or more distinct times. For convenience, they can be imagined as irregularly spaced points along the surface of an inverted paraboloid.
There is some minimal thickness to the paraboloid. The paraboloid changes shape slightly as time proceeds (like a balloon inflating) and when it does so, all of the points move.
By subtracting the coordinates at time 1 from those at time 2 (time2 - time1), I can get the displacement vector at each point.
It is important to note (and I suspect this might be the source of the problem) that at the first time point, the x and y coordinates range from 0 to 2000, and the z coordinates are all within a narrower range - say 350 to 450. During the deformation, each point has an x component of displacement, y component, and z component.
The x and y components are small (~50 at most), while the z component is the largest (goes up to 400 near the center, much less near the edges).
Using weighted moving least squares at the location of each point, I am trying to fit the components of displacements to a second degree polynomial surface in terms of the original x,y,z coordinates of the point: eg.
displacement_x = a*x^2 + b*x*y + c*x + d*y^2 + ... + h*z^2 + i*z + j
I use the lsqr function in MATLAB, like so, looping through each point for each time interval:
Ux = displacements{k,1}(:,1);   % x-components of displacement for interval k
Cx = lsqr((adjust_B_matrix'*W*adjust_B_matrix),(adjust_B_matrix'*W*Ux),1e-7,10000);   % weighted normal equations
W is the weight matrix, and adjust_B_matrix is the matrix of all (x,y,z) coordinates at time 1, shifted so that they're all centered around the point at which I'm trying to fit the function.
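For reference, a minimal sketch of one such local fit, assuming xyz is an m-by-3 matrix of coordinates at time 1, u is the m-by-1 x-component of displacement, p0 is the 1-by-3 point the fit is centred on, and h is a weighting bandwidth (all of these names, and the Gaussian weight, are illustrative):
d = xyz - p0;                                 % centre the coordinates on p0
x = d(:,1); y = d(:,2); z = d(:,3);
B = [x.^2, x.*y, x.*z, y.^2, y.*z, z.^2, x, y, z, ones(size(x))];   % full quadratic basis
w = exp(-sum(d.^2, 2) / (2*h^2));             % e.g. Gaussian weights by distance to p0
W = diag(w);
Cx = lsqr(B'*W*B, B'*W*u, 1e-7, 10000);       % coefficients of the local polynomial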
What is going wrong?
It's just not working. Once I have the fitted functions, they're re-centred around the actual coordinates of the points. But when I plot the resulting points (initial x + displacement_x, initial y + displacement_y, initial z + displacement_z), obtained by plugging the time-1 coordinates into the fitted functions, I just get a surface that looks like the surface at time 1.
What might be going wrong? Things I have tried:
It's not an issue with the code itself: I generated 'fake' data using a grid of points and it worked perfectly. The predicted locations were superimposed on the actual coordinates, and I was able to recover the function I started with. But in that trial example, I used x, y, z from 0 to 5, evenly spaced.
Global fitting works (but I need local fitting...).
I tried MATLAB's Curve Fitting Toolbox, fitting one of the displacements to only the x and y coordinates, globally. It worked perfectly.
I don't think I should have a singular-matrix issue, because each calculation uses points within a large radius (around 75-80 points), somewhat dispersed in 3D space.
Suspicions:
I think it has to do with the uneven distribution of initial (x,y,z) coordinates, but I don't know why or how to fix the issue, or even what method I can use.
If you read this far, thank you so much. Any advice would be greatly appreciated.
Figure for reference: green = predicted points at time 2, mostly overlapping red, the actual coordinates of the points at time 1; blue = the correct coordinates of the points at time 2 (where the green points should be if things were working).
Updated link for files:
http://a.tmp.ninja/eWfkNmFZyTFk.zip
Contents - code, sample data (please load the .mat files).
I can't actually access the code you posted, so here are some general suggestions.
It does look like the Curve Fitting Toolbox has tools that do exactly what you are looking for; check out the bottom of this page: https://www.mathworks.com/help/curvefit/polynomial.html#bt9ykh.
It looks like, whatever the cause, your learned displacement function is just very small or zero everywhere. I suspect the issue is a minor typo or error somewhere in your pipeline; translating what you have to work with the fit function may well reveal it.
This really shouldn't be the issue, but in the future, if you have much more unbalanced data, you could normalize it all before fitting (x_norm = (x - x_mu)/x_std).
Also, I don't think this is your problem either, but you can check whether your matrix is close to singular by computing its condition number with the cond() function, i.e. cond(adjust_B_matrix'*W*adjust_B_matrix). Second, if you check the documentation for lsqr, there is an option to get a return flag for debugging; that is worth checking too.
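A minimal sketch of both checks, using adjust_B_matrix, W, and Ux from the question:
A = adjust_B_matrix' * W * adjust_B_matrix;
fprintf('condition number: %g\n', cond(A));        % very large values indicate a near-singular system
[Cx, flag, relres] = lsqr(A, adjust_B_matrix'*W*Ux, 1e-7, 10000);
fprintf('lsqr flag: %d, relative residual: %g\n', flag, relres);   % a nonzero flag means lsqr did not converge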
I am trying to compute the 3D coordinates from several pair of two view points.
First, I used the matlab function estimateFundamentalMatrix() to get the F of the matched points (Number > 8) which is:
F1 =[-0.000000221102386 0.000000127212463 -0.003908602702784
-0.000000703461004 -0.000000008125894 -0.010618266198273
0.003811584026121 0.012887141181108 0.999845683961494]
And my camera, which took these two pictures, was pre-calibrated with the intrinsic matrix:
K = [12636.6659110566, 0, 2541.60550098958
0, 12643.3249022486, 1952.06628069233
0, 0, 1]
From this information I then computed the essential matrix using:
E = K'*F*K
By decomposing E with SVD, I finally got the projective transformation matrices:
P1 = K*[ I | 0 ]
and
P2 = K*[ R | t ]
Where R and t are:
R = [ 0.657061402787646 -0.419110137500056 -0.626591577992727
-0.352566614260743 -0.905543541110692 0.235982367268031
-0.666308558758964 0.0658603659069099 -0.742761951588233]
t = [-0.940150699101422
0.320030970080146
0.117033504470591]
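For reference, the R and t candidates come out of the SVD of E like this (a sketch of the classic Hartley and Zisserman decomposition, with E as computed above):
[U, ~, V] = svd(E);
W = [0 -1 0; 1 0 0; 0 0 1];
R1 = U * W * V'; R2 = U * W' * V';   % the two rotation candidates
if det(R1) < 0, R1 = -R1; end        % enforce proper rotations (det = +1)
if det(R2) < 0, R2 = -R2; end
t = U(:,3);                          % translation, up to sign and overall scale
% The four candidate poses are (R1,t), (R1,-t), (R2,t), (R2,-t); the correct
% one places triangulated points in front of both cameras (cheirality check).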
I know there should be 4 possible solutions; however, my computed 3D coordinates did not seem to be correct.
I used the camera to take pictures of a FLAT object with marked points. I matched the points by hand (which means there should be no obvious mistakes in the raw material). But the result turned out to be a surface with a slight bend.
I guess this might be because the pictures were not corrected for distortion (but I actually remember that I did correct them).
I just want to know whether this method of solving the 3D reconstruction problem is right, especially when we already know the camera intrinsic matrix.
Edit by JCraft at Aug. 4: I have redone the process and got some pictures showing the problem. I will write another question with details, then post the link.
Edit by JCraft at Aug. 4: I have posted a new question: Calibrated camera get matched points for 3D reconstruction, ideal test failed. And @Schorsch, I really appreciate your help formatting my question. I will try to learn how to format posts on SO and also try to improve my grammar. Thanks!
If you only have the fundamental matrix and the intrinsics, you can only get a reconstruction up to scale. That is, your translation vector t is in some unknown units. You can get the 3D points in real units in several ways:
You need to have some reference points in the world with known distances between them. This way you can compute their coordinates in your unknown units and calculate the scale factor to convert your unknown units into real units.
You need to know the extrinsics of each camera relative to a common coordinate system. For example, you can have a checkerboard calibration pattern somewhere in your scene that you can detect and compute extrinsics from. See this example. By the way, if you know the extrinsics, you can compute the Fundamental matrix and the camera projection matrices directly, without having to match points.
You can do stereo calibration to estimate the R and the t between the cameras, which would also give you the Fundamental and the Essential matrices. See this example.
Flat objects are critical surfaces; it is not possible to achieve your goal from them. Try adding two (or more) points off the plane (see Hartley and Zisserman, or another text on the matter, if you are still interested).
I am implementing the algorithm for Photometric Stereo where I have already calculated the normals from a set of images with different light directions.
How can I plot the normal vector field in matlab? I have a matrix of normals of size (N x 3).
I'm afraid you have left out a step. You need to retrieve the depth map from the surface normals, and then you can start plotting. To see how to do this, you can check out section 4 of the following paper:
http://www.wisdom.weizmann.ac.il/~vision/photostereo/Photometric%20Stereo%20with%20General%20Unknown%20Lighting%20-%20BasriJacobsKemelmacher_ijcv06.pdf
There are other resources on the web too; I don't know of any built-in function in any Matlab library, but I don't have the Computer Vision toolbox, so who knows?
I suspect you are looking for quiver3.
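For example, a minimal sketch, assuming N is your N-by-3 matrix of normals and P is a matching N-by-3 matrix of the 3D points they are attached to (P is an assumption; quiver3 needs anchor positions):
quiver3(P(:,1), P(:,2), P(:,3), N(:,1), N(:,2), N(:,3));   % one arrow per normal
axis equal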
You need to present the normals field as a gradient field; then you can use Matlab's quiver function. In a gradient field, the previously normalized triple {pn, qn, rn} of the data is presented in such a way as to render its third component always equal to one (at least in theory).
I mean that with rn = 1, or, say, R = 1, you actually need only the {P, Q} components to present the contents of the gradient field with the ordinary 2D quiver function. Thus the gradient vector is something quite different and distinct from the normals field, because, pointwise:
P = pn/sqrt(pn^2 + qn^2 + rn^2), and Q = qn/sqrt(pn^2 + qn^2 + rn^2).
However, you don't need to bother with double for loops running over the X and Y directions, because the pointwise calculation of the gradient field from the normals vectorizes directly:
P = pn./(pn.^2 + qn.^2 + rn.^2).^(1/2); and Q = qn./(pn.^2 + qn.^2 + rn.^2).^(1/2);
You can see as well:
http://www.mathworks.com/matlabcentral/fileexchange/authors/126090/
Briefly, the gradient field always represents the slopes in the X and Y directions while descending exactly one height unit along the Z axis of the 3D surface retrieved with, for instance, a Photometric Stereo algorithm. That is why the third component in the quiver visualization is always equal to one (i.e. R = 1) and practically irrelevant.
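As a concrete illustration, here is a minimal sketch of one common convention for this conversion, assuming nx, ny, nz are 2D arrays of normal components on an (X, Y) grid (the names and the sign convention are assumptions; dividing by the z-component is what makes the third component exactly one):
P = nx ./ nz;       % x-slope; the third component is now 1 by construction
Q = ny ./ nz;       % y-slope (for the gradient of z = f(x,y) itself, flip the signs)
quiver(X, Y, P, Q);
axis equal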
Last month I posted some code for the simplest Photometric Stereo methods on the MathWorks web pages, having finally had some spare time to tidy up my Matlab code.
I have two ellipsoids in R3 described in terms of their centre points (P), their axes lengths (a,b,c), and their rotation vector (R). I wish to interpolate a tubular structure between these two ellipsoids along a given centre line. This is done by creating an ellipsoid centred at each point along the centre line. Its axes lengths are interpolated linearly between those at the two endpoints, and the rotation is obtained as a quaternion using spherical linear interpolation, or SLERP.
I previously asked a similar question on this problem here. I have since isolated the issue a little further, and thought it warranted a new post. The difference here is that before doing SLERP, I first rotate the two reference ellipsoids by the inverse of the rotation matrix that describes one of them, such that one of them is now axis-aligned (i.e. has no rotation). Previously this appeared to solve the problem, but I have encountered an example where this fix does not work.
The source code to reproduce this issue is available here. The relevant function is ellipsoidSLERP and the functions it calls. Here is a screenshot of the output:
What you are seeing is an interpolation of ellipsoid volumes (blue) between two reference ellipsoid volumes at either end (green) along a centreline (cyan).
Problem Statement
The interpolation on the left works correctly, resulting in a smooth tubular structure. The interpolation on the right does not work correctly, and results in a twist.
What is causing this behaviour, and how can I correct it?
Please let me know if there's anything I can do to clarify.
I have a convex polygon in 3D. For simplicity, let it be a square with vertices (0,0,0), (1,1,0), (1,1,1), (0,0,1). I need to arrange these vertices in counter-clockwise order. I found a solution here. It suggests determining the angle at the centre of the polygon and sorting by it, but I am not clear on how that would work. Does anyone have a solution? I need one that is robust and works even when the vertices get very close together.
A sample MATLAB code would be much appreciated!
This is actually quite a tedious problem, so instead of actually doing it I am just going to explain how I would do it. First, find the equation of the plane (you only need to use 3 points for this), and then find your rotation matrix. Then find your vectors in the new, rotated space. After that is all said and done, find which quadrant each point is in; if n > 1 points fall in a particular quadrant, then you must find the angle of each point (theta = arctan(y/x)). Then simply sort each quadrant by angle. (Arguably, you can just separate by pi instead of quadrants: sort the points by whether their y-component, post-rotation, is greater than zero.)
Sorry, I don't have time to actually test this, but give it a go, and feel free to post your code; I can help debug it if you like.
Luckily you have a convex polygon, so you can use the angle trick: find a point in the interior (e.g., the midpoint of two non-adjacent vertices), and draw vectors to all the vertices. Choose one vector as a base, calculate the angles to the other vectors, and order them. You can calculate the angles using the dot product: A · B = |A||B| cos θ.
Below are the steps I followed.
The 3D planar polygon can be rotated to 2D plane using the known formulas. Use the one under the section Rotation matrix from axis and angle.
Then, as indicated by @Glenn, an internal point needs to be calculated in order to find the angles. I take that internal point to be the mean of the vertex locations.
Using the x-axis as the reference axis, the angle, on a 0 to 2pi scale, for each vertex can be calculated using atan2 function as explained here.
The non-negative angle measured counterclockwise from vector a to vector b, in the range [0,2pi], if a = [x1,y1] and b = [x2,y2], is given by:
angle = mod(atan2(y2-y1,x2-x1),2*pi);
Finally, sort the angles, [~,XI] = sort(angle);.
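Putting these steps together, a minimal sketch, assuming V is an n-by-3 matrix of vertices of a planar convex polygon (all names are illustrative):
n = cross(V(2,:) - V(1,:), V(3,:) - V(1,:));   % plane normal from three vertices
n = n / norm(n);
ax = cross(n, [0 0 1]);                        % axis that rotates n onto the z-axis
if norm(ax) < eps
    R = eye(3);                                % plane already parallel to the xy-plane
else
    ax = ax / norm(ax);
    th = acos(n(3));                           % angle between n and the z-axis
    K = [0 -ax(3) ax(2); ax(3) 0 -ax(1); -ax(2) ax(1) 0];
    R = eye(3) + sin(th)*K + (1 - cos(th))*K^2;   % rotation matrix from axis and angle
end
V2 = (R * V')';                                % vertices rotated into a z = const plane
c = mean(V2(:,1:2), 1);                        % internal point: mean of the vertex locations
ang = mod(atan2(V2(:,2) - c(2), V2(:,1) - c(1)), 2*pi);
[~, XI] = sort(ang);                           % counter-clockwise order about the plane normal
Vsorted = V(XI, :);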
It's a long time since I used this, so I might be wrong, but I believe the command convhull does what you need: it returns the convex hull of a set of points (which, since you say your points form a convex set, should be the points themselves), arranged in counter-clockwise order.
Note that MathWorks recently delivered a new class, DelaunayTri, which is intended to supersede the functionality of convhull and other older computational geometry functions. I believe it's more accurate, especially when the points get very close together; however, I haven't tried it.
Hope that helps!
So here's another answer, if you want to use convhull. You can easily project your polygon onto a coordinate plane by setting one coordinate to zero. For example, for (0,0,0), (1,1,0), (1,1,1), (0,0,1), set y = 0 to get (0,0), (1,0), (1,1), (0,1). Now your problem is 2D.
You might have to do some work to pick the right coordinate to drop: if your polygon's plane is orthogonal to some axis, pick that axis. The criterion is to make sure that your projected points don't end up on a line.
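A minimal sketch of that criterion combined with convhull, assuming V is an n-by-3 matrix of vertices (names are illustrative):
n = cross(V(2,:) - V(1,:), V(3,:) - V(1,:));   % plane normal
[~, k] = max(abs(n));                          % drop the coordinate the plane is most orthogonal to
keep = setdiff(1:3, k);
idx = convhull(V(:,keep(1)), V(:,keep(2)));    % 2D hull, counter-clockwise in the projected plane
Vsorted = V(idx(1:end-1), :);                  % convhull repeats the first vertex; drop the duplicate
% Note: if n(k) < 0, the projected order is reversed relative to the plane's own orientation.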