Matlab: Calculate intersection point of vector and analytical surface

I would like to compute the intersection point in R^3 of a line given by
p + alpha * n, where p is a spatial vector, n is a direction vector and alpha is a scalar to be determined.
The surface is given in analytical form by the formulation
f(x,y) = [x, y, z(x,y)], where z(x,y) can be an arbitrary nonlinear surface description.
I set up a linearization:
[ n1   n2   n3          ]   [d_alpha]   [p1 + alpha*n1 - x     ]
[ -1    0   -dz(x,y)/dx ]   [d_x    ] = [p2 + alpha*n2 - y     ]
[  0   -1   -dz(x,y)/dy ]   [d_y    ]   [p3 + alpha*n3 - z(x,y)]
and iterate with starting values for alpha, x and y.
However, I can't seem to get this to converge. Any idea where my mistake is?
Thanks in advance

You can write your equations as
x_line(a) = p1 + a * n1
y_line(a) = p2 + a * n2
z_line(a) = p3 + a * n3
z_plane(x, y) = fun(x, y)
Assuming that your problem has a unique solution, the height dz of the line above the surface in the z-direction, as a function of a, is then
dz(a) = z_line(a) - fun(x_line(a), y_line(a))
= p3 + a * n3 - fun(p1 + a * n1, p2 + a * n2)
To find the intersection of the line with the plane, you simply have to find the value of a for which dz is zero. This can be done in Matlab using an anonymous function and fzero like so:
dz = @(a) p3 + a * n3 - fun(p1 + a * n1, p2 + a * n2);
a_intersect = fzero(dz, a0);
where a0 is some (arbitrary) starting guess for a.
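For a complete, self-contained illustration, here is a minimal sketch; the surface fun and the numeric values for p, n and a0 below are only example assumptions:
% Minimal sketch: intersect a line p + a*n with an example surface z = fun(x,y)
p = [0; 0; 5];                        % point on the line (example values)
n = [0.1; 0.2; -1];                   % direction of the line (example values)
fun = @(x, y) x.^2 + y.^2;            % example surface z(x,y), an assumption
dz = @(a) p(3) + a*n(3) - fun(p(1) + a*n(1), p(2) + a*n(2));
a0 = 0;                               % arbitrary starting guess
a_intersect = fzero(dz, a0);
P_intersect = p + a_intersect * n     % intersection point in R^3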
You might want to read a bit about optical ray tracing; I guess you can find some introductory university notes online. This is a pretty standard problem, e.g. for finding the intersection of an optical ray and a curved lens or a parabolic mirror.

Related

Coordinate from 2 vectors

All,
Suppose I have two vectors U and V, of 2 units and 1 unit length respectively, as shown in the sketch. The vector U is rotated by angle theta.
There are at least two possible cases, whereby vector U can go "up" or "down" as shown in the sketch.
My question is: given the above data, is it possible to have a generic formula that can be transferred into Matlab to get the coordinates of point M?
The lengths of the vectors U and V and the angle theta are arbitrary.
Thank you!
There is a more efficient solution.
The coordinates of the endpoint of U (writing U for its length) are given by:
(U * cos(theta), U * sin(theta))
For any vector (x, y) the clockwise perpendicular direction (i.e. "down" in the second diagram) is (y, -x), and the anti-clockwise direction is minus that. Therefore the coordinates of M (writing V for the length of V) are given by:
Anti-clockwise ("up"): (U * cos(theta) - V * sin(theta), U * sin(theta) + V * cos(theta))
Clockwise ("down"): (U * cos(theta) + V * sin(theta), U * sin(theta) - V * cos(theta))
No need for calls to arctan or sqrt which are both very costly. Also you can compute sin/cos just once and use the results for both components.
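For illustration, a minimal MATLAB sketch of this approach (the lengths and angle below are just example assumptions):
% Perpendicular-vector approach; U_len, V_len and theta are example values
U_len = 2; V_len = 1; theta = pi/6;
c = cos(theta); s = sin(theta);                     % compute sin/cos only once
M_up   = [U_len*c - V_len*s, U_len*s + V_len*c];    % anti-clockwise ("up")
M_down = [U_len*c + V_len*s, U_len*s - V_len*c];    % clockwise ("down")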
From Pythagoras we know that
M = sqrt(U^2 + V^2)
The angle between M and U is
alpha = arctan(V/U)
So then you know that the x- and y-coordinates for M are:
the "up" case:
M = (sqrt(U^2 + V^2)*cos(theta + sign(cos(theta))*arctan(V/U)), sqrt(U^2 + V^2)*sin(theta + sign(cos(theta))*arctan(V/U)))
the "down" case:
M = (sqrt(U^2 + V^2)*cos(theta - sign(cos(theta))*arctan(V/U)), sqrt(U^2 + V^2)*sin(theta - sign(cos(theta))*arctan(V/U)))
A second way to calculate this is to look at the lengths of U and V in the x and y directions, and sum them.
The coordinates of the endpoint of U are:
(U*cos(theta), U*sin(theta))
To these coordinates we must add/subtract the x- and y-coordinates of V. The lengths of V along x and y are:
(V*abs(sin(theta)), V*abs(cos(theta)))
Whether one should add or subtract these from U depends on theta. In general we can write Vup and Vdown as
Vup = (V*sign(-cos(theta))*sin(theta), V*sign(cos(theta))*cos(theta))
Vdown = (V*sign(cos(theta))*sin(theta), V*sign(-cos(theta))*cos(theta))
Then we can always add U to Vup and Vdown. Finally
Mup = U + Vup
Mdown = U + Vdown
Just another compact solution
theta = 30;
L = 2; % norm of U vector
U = L*[cosd(theta) ; sind(theta)];
Vup = [-U(2) ; U(1)] / L; % Normal vectors, unit length
Vdown = [U(2) ; -U(1)] / L;
Mup = U + Vup; % Two possible values of M
Mdown = U + Vdown;
% Bonus plot
figure
plot([0 U(1)] , [0 U(2)] , 'k-')
hold on; axis equal;
plot([0 Vup(1)]+U(1) , [0 Vup(2)]+U(2) , 'r-')
plot([0 Vdown(1)]+U(1) , [0 Vdown(2)]+U(2) , 'r-')
text(Mup(1),Mup(2),'M_u_p')
text(Mdown(1),Mdown(2),'M_d_o_w_n')
You can exploit the properties of the cross product of Uinit and Urot. The sign of the product will inform you on the orientation of the resulting vector.
Supposing that the origin is O(0,0), your initial vector is Uinit(x1,y1) and your final vector is Urot(x2,y2). Also M(x,y) can be calculated easily.
If you want to filter the rotated vectors Urot that ended up 'above' or 'below' M compared to the previous orientation of your triangle, you can take the following cross products:
M cross Uinit and M cross Urot.
If their signs are the same, then the rotated vector did not cross the line OM; if the signs differ, it did.
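As a minimal sketch (in 2D the "cross product" reduces to its scalar z-component; the vectors below are just example values):
Uinit = [2; 0];  Urot = [1.8; 0.9];  M = [2; 1];    % example 2D vectors (assumptions)
cross2 = @(a, b) a(1)*b(2) - a(2)*b(1);             % scalar 2D cross product
sameSide = sign(cross2(M, Uinit)) == sign(cross2(M, Urot))
% sameSide is true when Urot did not cross the line OM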

Fit plane to N dimensional points in MATLAB

I have a set of N points in k dimensions as a matrix of size N X k.
How can I find the best fitting line through these points? The "line" will be a plane (hyperplane) in k dimensions. It will have k coefficients and one bias term.
Existing functions like fit seem to be usable only for points in 2 or 3 dimensions.
You can fit a hyperplane (or any lower dimensional affine space) to a set of D dimensional data using Principal Component Analysis. Here's an example of fitting a plane to a set of 3D data. This is explained in more detail in the MATLAB documentation but I tried to construct the simplest example I could.
% generate some random correlated data
D = 3;
mu = zeros(1,D);
sqrt_sig = randn(D);
sigma = sqrt_sig'*sqrt_sig;
% generate 50 points in a D x 50 matrix
X = mvnrnd(mu, sigma, 50)';
% perform PCA
coeff = pca(X');
% The last principal component is normal to the best fit plane and plane goes through mean of X
a = coeff(:,D);
b = -mean(X,2)'*a;
% plane defined by a'*x + b = 0
dist = abs(a'*X+b) / norm(a);
mse = mean(dist.^2)
Edit: Added example plot of results for D = 3. I take advantage of the orthogonality of the other principal components here. Ignore the code if you want; it's just to demonstrate that the plane does in fact fit the data pretty well.
% plot in 3D
X0 = bsxfun(@minus,X,mean(X,2));
b1 = coeff(:,1); b2 = coeff(:,2);
y1 = b1'*X0; y2 = b2'*X0;
y1_min = min(y1); y1_max = max(y1);
y1_span = y1_max - y1_min;
y2_min = min(y2); y2_max = max(y2);
y2_span = y2_max - y2_min;
pad = 0.2;
y1_min = y1_min - pad*y1_span;
y1_max = y1_max + pad*y1_span;
y2_min = y2_min - pad*y2_span;
y2_max = y2_max + pad*y2_span;
[y1_m,y2_m] = meshgrid(linspace(y1_min,y1_max,5), linspace(y2_min,y2_max,5));
grid = bsxfun(@plus, bsxfun(@times,y1_m(:)',b1) + bsxfun(@times,y2_m(:)',b2), mean(X,2));
x = reshape(grid(1,:),size(y1_m));
y = reshape(grid(2,:),size(y1_m));
z = reshape(grid(3,:),size(y1_m));
figure(1); clf(1);
surf(x,y,z,'FaceColor','black','FaceAlpha',0.3,'EdgeAlpha',0.6);
hold on;
plot3(X(1,:),X(2,:),X(3,:),' .');
axis equal;
axis vis3d;
Edit2: When I say "principal component" I'm being a bit sloppy (or just plain wrong) with the wording. I'm actually referring to the orthogonal basis vectors that the principal components are expressed in.
Here's a simpler solution that just uses MATLAB's \ operator. We start by defining a plane in k dimensions:
% 0 = a + x(1) * b(1) + x(2) * b(2) + ... + x(k) * 1
k = 8;
a = randn(1);
b = randn(k-1,1);
(Note that we assume b(k) = 1; you can always multiply the plane parameters by any value without changing the plane.)
Next we generate N random points within this plane:
N = 1000;
x = rand(N,k-1);
x(:,k) = -(a + x * b);
...sorry, it's not the best way to generate random points on the plane, but it's good enough for the demonstration here. Add noise to the points:
x = x + 0.05*randn(size(x));
To find the parameters of the plane, we solve the system of equations
% a + x(1:k-1) * b == -x(k)
in the least-squares sense. a and b are the unknowns there. We can rewrite the left-hand side as [1,x(1:k-1)] * [a;b]. If we have a matrix equation M*p=v we can solve for p by writing p=M\v:
p = [ones(N,1),x(:,1:k-1)]\(-x(:,k));
disp(['ground truth: [a,b,1] = ',mat2str([a,b',1],3)]);
disp(['estimated : [a,b,1] = ',mat2str([p',1],3)]);
This gives as output:
ground truth: [a,b,1] = [-1.35 -1.44 -1.48 1.17 0.226 -0.214 0.234 -1.59 1]
estimated : [a,b,1] = [-1.41 -1.38 -1.43 1.14 0.219 -0.195 0.221 -1.54 1]
The less noise or the more points in the dataset, the smaller the error will be of course!

helix-shaped movement of an object in Matlab

I have found this equation in a paper which represents a helix-shaped movement of an object:
When I plotted the S vector in Matlab I got a different result, not helix shape.
Where is the problem, is it in the equation or in the code?
l is a random number in [-1,1]
r is a random vector in [0,1]
b is a constant for defining the shape of the logarithmic spiral.
Matlab code:
dim = 3;
Max_iter = 10;
X_star = zeros(1,dim);
ub = 100;
lb = -100;
X = rand(1,dim).*(ub-lb) + lb;
S = [];
t = 0;
while t < Max_iter
    a = -1 + t*((-1)/Max_iter);
    r = rand();
    b = 1;
    l = (a-1)*rand + 1;
    for j = 1:size(X,2)
        D = abs(X_star(j) - X(1,j));
        X(1,j) = D * exp(b.*l).* cos(l.*2*pi) + X_star(j);
    end
    X_star = X(1,:);
    S = [S X];
    plot(S);
    t = t + 1;
end
That equation does not look like a helix at all. Let's do this in 3D. First some definitions:
p0, p1 - 3D points at the centers of the start and end of the helix
r - scalar radius of the helix
m - scalar number of screws (if not an integer, then the screws will not start and end at the same angular position)
We first need TBN vectors:
n = p1-p0
t = (1,0,0)
if (|dot(n/|n|,t)|>0.9) t = (0,1,0)
b = cross(n,t)
b = r*b / |b|
t = cross(b,n)
t = r*t / |t|
And now we can do the helix using TBN as basis vectors:
// x' y' z'
p(a) = p0 + t*cos(2*Pi*m*a) + b*sin(2*Pi*m*a) + n*a
Where a = <0.0,1.0> is the parameter selecting which point on the helix you want. If you move out of this interval you will extrapolate the helix before p0 and after p1. You can think of a as the time...
As you can see, the equation is just a 2D parametric circle (x',y') plus a linear movement (z'), transformed by the basis vectors into the p0,p1 direction and position.
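As a rough MATLAB sketch of the above (the numbers for p0, p1, r and m are just example assumptions; the implicit expansion in the last lines needs MATLAB R2016b or newer, or Octave):
p0 = [0; 0; 0];  p1 = [0; 0; 10];        % start / end centers (example values)
r  = 2;                                   % radius
m  = 5;                                   % number of screws
n  = p1 - p0;
t  = [1; 0; 0];
if abs(dot(n/norm(n), t)) > 0.9, t = [0; 1; 0]; end
b  = cross(n, t);  b = r*b/norm(b);
t  = cross(b, n);  t = r*t/norm(t);
a  = linspace(0, 1, 500);                 % helix parameter <0,1>
P  = p0 + t*cos(2*pi*m*a) + b*sin(2*pi*m*a) + n*a;   % 3 x 500 helix points
plot3(P(1,:), P(2,:), P(3,:)); axis equal; grid on;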
If you need equations/code for the dot, cross and absolute value of 3D vectors, see:
Understanding 4x4 homogenous transform matrices
They are at the bottom of that answer of mine.

FFT in Octave/Matlab, Plot cos(x) and approximate with

For an exercise I have to plot cos(x) in Octave and interpolate it by
I plotted cos(x) with
fplot("[cos(x)]", [0, 2*pi])
There are n equidistant supporting points on [0, 2*pi] that I calculate by
x = zeros(n,1);
for i = 1:n
    x(i,1) = (-1) + (i-1/2)*(2/n);
end
How do I plot the term to approximate it?
I am also a bit confused about what the question actually is. I guess you want to plot that formula.
Have a row vector k = 0:(n/2 - 1) (suppose n even).
Then you need your d coefficients as a row vector of the same length. (I don't know where you get them from, though.)
Then define your column vector x (a column vector is made from a row vector by transposing, e.g. x = x.')
Left term of the function is then
leftterm = sum(d .* exp(i * x * k), 2)
Right term:
rightterm = sum( fliplr(d) .* exp(- i * x * (k + 1)), 2)
Alltogether they give:
f = sum(d .* exp(i * x * k) + fliplr(d) .* exp(- i * x * (k + 1)), 2)
and you plot it with:
plot(x,f)
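Putting it together as a minimal sketch (the coefficients d below are made-up example values, not the ones from the exercise; the broadcasting needs Octave or MATLAB R2016b+):
n = 8;                            % number of supporting points (assumed even)
k = 0:(n/2 - 1);                  % row vector of frequencies
d = [0 0.5 0 0];                  % example coefficients, an assumption
x = linspace(0, 2*pi, 200).';     % column vector of evaluation points
f = sum(d .* exp(1i * x * k) + fliplr(d) .* exp(-1i * x * (k + 1)), 2);
plot(x, real(f))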

Projection matrix from Fundamental matrix

I have obtained the fundamental matrix between two cameras. I also have their internal parameters in 3 x 3 matrices, which I obtained earlier through a chessboard calibration. Using the fundamental matrix, I have obtained P1 and P2 by
P1 = [I | 0] and P2 = [ [e']x * F | e']
These projection matrices are not really useful in getting the exact 3D location.
Since I have the internal parameters K1 and K2, I changed P1 and P2 to
P1 = K1 * [I | 0] and P2 = K2 * [ [e']x * F | e']
Is this the right way to get the real projection matrices which give the actual relation between the 3D world and the image?
If not, please help me understand the right way and where I have gone wrong.
If this is the right approach, how do I verify these matrices?
A good reference book is "Multiple View Geometry in Computer Vision" from Hartley and Zisserman.
First, your formula for P is wrong. If you want the formula with K inside, it is rather
P = K * [R | t]
or
P = [ [e']x * F | e']
but not a mix of both.
If you computed F with the 8-point algorithm, then you can only recover the projective geometry, up to a 3D homography (i.e. a 4x4 transformation).
To upgrade to Euclidean space, there are 2 possibilities, both starting by computing the essential matrix.
First possibility is to compute the essential matrix from F: E = transpose(K2)*F*K1.
The second possibility is to estimate the essential matrix directly for these 2 views:
Normalize your 2D points by pre-multiplying with the inverse of K for each image ("normalized image coordinates")
Apply the same 8-point algorithm (as for F) on these normalized points
Enforce the fact that the essential matrix has its first two singular values equal (to 1) and the last one equal to 0, by taking the SVD and forcing the diagonal values; see the sketch below.
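A minimal MATLAB sketch of this step, assuming F, K1 and K2 are already known 3 x 3 matrices:
E = K2' * F * K1;                 % essential matrix from F (first possibility)
[U, S, V] = svd(E);
E = U * diag([1 1 0]) * V';       % enforce singular values (1, 1, 0)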
Once you have the essential matrix, we can compute the projection matrix in the form
P = K * [R | t]
R and t can be found thanks to the elements of the SVD of E (cf the previously mentioned book).
However, you will have 4 possibilities. Only one of them projects points in front of both cameras, so you should test a point (one you are sure of) to remove the ambiguity among the 4.
And in this case you will be able to place the camera and its orientation (with R and t of the projection) in your 3D scene.
Not so obvious, indeed...
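For reference, a hedged sketch of that decomposition (following Hartley & Zisserman, Sec. 9.6.2; E and K2 are assumed to be known from the steps above):
[U, ~, V] = svd(E);
W = [0 -1 0; 1 0 0; 0 0 1];
R1 = U * W  * V';  if det(R1) < 0, R1 = -R1; end    % two candidate rotations
R2 = U * W' * V';  if det(R2) < 0, R2 = -R2; end
t  = U(:, 3);                                       % translation, up to sign
% Four candidate camera matrices; only one places a test point in front of
% both cameras:
P2_candidates = {K2*[R1, t], K2*[R1, -t], K2*[R2, t], K2*[R2, -t]};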
Just came across this question and want to give a more direct answer to the question.
If P1 = [I, 0] is your first projection matrix, but it should really be P1 = K1 * [I, 0], then your "world" is distorted by the 4x4 matrix M = [K1, 0; 0, 1]. Any point X in the world projects to x1 = P1 * X = (P1 * M) * (M^-1 * X) = P1' * X', where X' is now the point in the "undistorted world" (note that X = M * X' is again the point in the "distorted world") and P1' = P1 * M = [I, 0] * [K1, 0; 0, 1] = K1 * [I, 0] is the projection matrix in the undistorted world.
Analogously, P2' = P2 * M is the projection matrix in the undistorted world and has the form P2' = [ [e']x * F | e'] * [K1, 0; 0, 1] = [ [e']x * F * K1 | e'].
Note that P2 = [ [e']x * F | e'] is just one possible projection matrix; in general it has the form P2 = [ [e']x * F + e' * v^T | s * e'] for some real s and a 3-vector v. Note further that if you want to find a projection matrix of the form P2' ~ K2 * [R, t] for some rotation matrix R, you might be better off using the algorithm based on the essential matrix outlined by Damien and described in Hartley & Zisserman (2nd ed.), Sec. 9.6.2.