Is this rotation matrix (angle about vector) limited to certain orientations? - matlab

From a couple of references (e.g., http://en.wikipedia.org/wiki/Rotation_matrix "Rotation matrix from axis and angle", and exercise 5.15 in "Computer Graphics - Principles and Practice" by Foley et al., 2nd edition in C), I've seen this definition of a rotation matrix (implemented below in Octave) that rotates points by a specified angle about a specified vector. Although I have used it before, I'm now seeing rotation problems that appear to be related to orientation. The problem is recreated in the following Octave code, which:
takes two unit vectors: src (green in figures) and dst (red in figures),
calculates the angle between them: theta,
calculates the vector normal to both: pivot (blue in figures),
and finally attempts to rotate src into dst by rotating it about vector pivot by angle theta.
% This test fails: rotated unit vector is not at expected location and is no longer normalized.
s = [-0.49647; -0.82397; -0.27311]
d = [ 0.43726; -0.85770; -0.27048]
test_rotation(s, d, 1);
% Determine rotation matrix that rotates the source and normal vectors to the x and z axes, respectively.
normal = cross(s, d);
normal /= norm(normal);
R = zeros(3,3);
R(1,:) = s;
R(2,:) = cross(normal, s);
R(3,:) = normal;
R
% After rotation of the source and destination vectors, this test passes.
s2 = R * s
d2 = R * d
test_rotation(s2, d2, 2);
function test_rotation(src, dst, iFig)
norm_src = norm(src)
norm_dst = norm(dst)
% Determine rotation axis (i.e., normal to two vectors) and rotation angle.
pivot = cross(src, dst);
theta = asin(norm(pivot))
theta_degrees = theta * 180 / pi
pivot /= norm(pivot)
% Initialize matrix to rotate by an angle theta about pivot vector.
ct = cos(theta);
st = sin(theta);
omct = 1 - ct;
M(1,1) = ct + pivot(1)*pivot(1)*omct;
M(1,2) = pivot(1)*pivot(2)*omct - pivot(3)*st;
M(1,3) = pivot(1)*pivot(3)*omct + pivot(2)*st;
M(2,1) = pivot(1)*pivot(2)*omct + pivot(3)*st;
M(2,2) = ct - pivot(2)*pivot(2)*omct;
M(2,3) = pivot(2)*pivot(3)*omct - pivot(1)*st;
M(3,1) = pivot(1)*pivot(3)*omct - pivot(2)*st;
M(3,2) = pivot(2)*pivot(3)*omct + pivot(1)*st;
M(3,3) = ct - pivot(3)*pivot(3)*omct;
% Rotate src about pivot by angle theta ... and check the result.
dst2 = M * src
dot_dst_dst2 = dot(dst, dst2)
if (dot_dst_dst2 >= 0.99999)
"success"
else
"FAIL"
end
% Draw the vectors: green is source, red is destination, blue is normal.
figure(iFig);
x(1) = y(1) = z(1) = 0;
ubounds = [-1.25 1.25 -1.25 1.25 -1.25 1.25];
x(2)=src(1); y(2)=src(2); z(2)=src(3);
plot3(x,y,z,'g-o');
hold on
x(2)=dst(1); y(2)=dst(2); z(2)=dst(3);
plot3(x,y,z,'r-o');
x(2)=pivot(1); y(2)=pivot(2); z(2)=pivot(3);
plot3(x,y,z,'b-o');
x(2)=dst2(1); y(2)=dst2(2); z(2)=dst2(3);
plot3(x,y,z,'k.o');
axis(ubounds, 'square');
view(45,45);
xlabel("xd");
ylabel("yd");
zlabel("zd");
hold off
end
Here are the resulting figures. Figure 1 shows an orientation that doesn't work. Figure 2 shows an orientation that works: the same src and dst vectors but rotated into the first quadrant.
I was expecting the src vector to always rotate onto the dst vector, as shown in Figure 2 by the black circle covering the red circle, for all vector orientations. However Figure 1 shows an orientation where the src vector does not rotate onto the dst vector (i.e., the black circle is not on top of the red circle, and is not even on the unit sphere).
For what it's worth, the references that defined the rotation matrix did not mention orientation limitations, and I derived (in a few hours and a few pages) the rotation matrix equation and didn't spot any orientation limitations there. I'm hoping the problem is an implementation error on my part, but I haven't been able to find it yet in either of my implementations: C and Octave. Have you experienced orientation limitations when implementing this rotation matrix? If so, how did you work around them? I would prefer to avoid the extra translation into the first quadrant if it isn't necessary.
Thanks,
Greg

Seems two minus signs have escaped:
M(1,1) = ct + P(1)*P(1)*omct;
M(1,2) = P(1)*P(2)*omct - P(3)*st;
M(1,3) = P(1)*P(3)*omct + P(2)*st;
M(2,1) = P(1)*P(2)*omct + P(3)*st;
M(2,2) = ct + P(2)*P(2)*omct; %% the error was here; this is the correct sign
M(2,3) = P(2)*P(3)*omct - P(1)*st;
M(3,1) = P(1)*P(3)*omct - P(2)*st;
M(3,2) = P(2)*P(3)*omct + P(1)*st;
M(3,3) = ct + P(3)*P(3)*omct; %% the error was here; this is the correct sign
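As a cross-check, the whole matrix can also be built in one vectorized expression, which makes stray signs easier to spot (a sketch; assumes a unit-length pivot P and angle theta, as above):
% Rodrigues' formula in matrix form: M = I*cos(theta) + K*sin(theta) + (1-cos(theta))*P*P'
K = [ 0     -P(3)   P(2);
      P(3)   0     -P(1);
     -P(2)   P(1)   0   ];   % skew-symmetric cross-product matrix of P
M = cos(theta)*eye(3) + sin(theta)*K + (1 - cos(theta))*(P*P');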
Here is a version that is much more compact, faster, and also based on Rodrigues' rotation formula:
function test
% first test: pass
s = [-0.49647; -0.82397; -0.27311];
d = [ 0.43726; -0.85770; -0.27048]
d2 = axis_angle_rotation(s, d)
% Determine rotation matrix that rotates the source and normal vectors to the x and z axes, respectively.
normal = cross(s, d);
normal = normal/norm(normal);
R(1,:) = s;
R(2,:) = cross(normal, s);
R(3,:) = normal;
% Second test: pass
s2 = R * s;
d2 = R * d
d3 = axis_angle_rotation(s2, d2)
end
function vec = axis_angle_rotation(vec, dst)
% The following commands are just here so that the function acts
% the same as your original function. Eventually, the function is
% probably best defined as
%
% vec = axis_angle_rotation(vec, axs, angle)
%
% or even
%
% vec = axis_angle_rotation(vec, axs)
%
% where the length of axs defines the angle.
%
axs = cross(vec, dst);
theta = asin(norm(axs));
% some preparations
aa = axs.'*axs;
ra = vec.'*axs;
% location of circle centers
c = ra.*axs./aa;
% first coordinate axis on the circle's plane
u = vec-c;
% second coordinate axis on the circle's plane
v = [axs(2)*vec(3)-axs(3)*vec(2)
axs(3)*vec(1)-axs(1)*vec(3)
axs(1)*vec(2)-axs(2)*vec(1)]./sqrt(aa);
% the output vector
vec = c + u*cos(theta) + v*sin(theta);
end
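For what it's worth, the alternative signature suggested in the comments above could look like this (a sketch; the function name and the choice to accept a non-unit axis are my assumptions):
function vec = axis_angle_rotation3(vec, axs, angle)
% Rotate vec about the axis axs by the given angle in radians; axs need not be unit length.
aa = axs.'*axs;                 % squared length of the axis
c  = (vec.'*axs)./aa .* axs;    % centre of the rotation circle (projection of vec onto the axis)
u  = vec - c;                   % first coordinate axis in the circle's plane
v  = cross(axs, vec)./sqrt(aa); % second coordinate axis, same length as u
vec = c + u*cos(angle) + v*sin(angle);
end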

Related

Calculate 3D coordinates with camera matrix and known distance

I have been struggling with this quiz question. It was part of the FSG 2022 registration quiz and I can't figure out how to solve it.
At first I thought I could use the extrinsic and intrinsic parameters to calculate the 3D coordinates using the equations described by Mathworks or in this article. Later I realized that the distance to the object is provided in the camera frame, which means it could be treated as a depth camera, converting the depth info into 3D space as described in the medium.com article.
That article uses the formula shown below to calculate the x and y coordinates and is very similar to this question, yet I can't get the correct solution.
One of my Matlab scripts attempting to solve it:
rot = eul2rotm(deg2rad([102 0 90]));
trans = [500 160 1140]' / 1000; % mm to m
t = [rot trans];
u = 795; % there was a typo here, as pointed out by solstad.
v = 467;
cx = 636;
cy = 548;
fx = 241;
fy = 238;
z = 2100 / 1000 % mm to m
tmp_x = (u - cx) * z / fx;
tmp_y = (v - cy) * z / fy;
% attempt 1
tmp_cords = [tmp_x; tmp_y; z; 1]
linsolve(t', tmp_cords)'
% result is: 1.8913 1.8319 -0.4292
% attempt 2
tmp_cords = [tmp_x; tmp_y; z]
rot * tmp_cords + trans
% result is: 2.2661 1.9518 0.4253
If possible I would like to see the calculation process, not any kind of Python code.
The correct solution provided by the organisers is 2.030, 1.272, 0.228 m.
The task states that the object's euclidean (straight-line) distance is 2.1 m. That doesn't mean its distance along z is 2.1 m. Those two only coincide if there is no x or y component in the object's translation to the camera frame.
The z component of the object's translation will be less than 2.1 meters.
You need to take a ray/vector for the screen space coordinates (normalized) and multiply that by the euclidean distance.
v_x = (u - cx) / fx;
v_y = (v - cy) / fy;
v_z = 1;
v = [v_x; v_y; v_z];
dist = 2.1;
tmp = v / norm(v) * dist;
The rotation may be an issue. Roll happens around X, then pitch happens around Y, and then yaw happens around Z. These operations are applied in that order, i.e. inner to outer.
R_Z * R_Y * R_X * v
My rotation matrix is
[[ 0. 0.20791 0.97815]
[ 1. 0. 0. ]
[ 0. 0.97815 -0.20791]]
That camera, taking the usual (X right, Y down, Z far) frame, would be looking, upside down, out the windshield, and slightly down.
Make sure that eul2rotm() does the right thing (specify axis order as 'XYZ') or that you use something else.
You can use rotvec2mat3d() to build individual rotation matrices from an axis-angle encoding.
Perhaps also review different MATLAB conventions regarding matrix multiplication: https://www.mathworks.com/help/images/migrate-geometric-transformations-to-premultiply-convention.html
I used Python and scipy.spatial.transform.Rotation.from_euler('xyz', [R_roll, R_pitch, R_yaw], degrees=True).as_matrix() to arrive at the sample solution.
Properly, the task should have specified a frame conversion step between vehicle and camera because the differing views are quite confusing, with a car having +X being forward and a camera having +Z being forward...
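One way to double-check the convention (a sketch; assumes the Robotics System Toolbox eul2rotm, whose default rotation sequence is 'ZYX', i.e. eul = [yaw pitch roll]) is to compare it against the explicit product:
roll = 102; pitch = 0; yaw = 90;                  % degrees
Rx = [1 0 0; 0 cosd(roll) -sind(roll); 0 sind(roll) cosd(roll)];
Ry = [cosd(pitch) 0 sind(pitch); 0 1 0; -sind(pitch) 0 cosd(pitch)];
Rz = [cosd(yaw) -sind(yaw) 0; sind(yaw) cosd(yaw) 0; 0 0 1];
R_explicit = Rz * Ry * Rx;                        % roll, then pitch, then yaw
R_toolbox  = eul2rotm(deg2rad([yaw pitch roll])); % default 'ZYX' sequence
max(abs(R_explicit(:) - R_toolbox(:)))            % should be ~0; both match the matrix above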
In addition to Christoph Rackwitz's answer, which is correct and should get all the credit, here is a working MATLAB script:
rot = eul2rotm(deg2rad([90 0 102]));
trans = [500 160 1140]' / 1000; % mm to m
u = 795;
v = 467;
cx = 636;
cy = 548;
fx = 241;
fy = 238;
v_x = (u - cx) / fx;
v_y = (v - cy) / fy;
v_z = 1;
v = [v_x; v_y; v_z];
dist = 2.1;
tmp = v / norm(v) * dist;
rot * tmp + trans
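% result is approximately: 2.030 1.272 0.228, matching the organisers' solution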

Summarize all intersection points in clockwise direction

This is a program that I have. I already asked before how to find the intersections of a circle with my image, somebody answered it (thank you), and now I have another problem...
a = imread('001_4.bmp');
I2 = imcrop(a,[90 5 93 180]);
[i,j]=size(I2);
x_hist=sum(I2,1);
y_hist=(sum(I2,2))';
x=1:j ; y=1:i;
centx=sum(x.*x_hist)/sum(x_hist);
centy=sum(y.*y_hist)/sum(y_hist);
BW = edge(I2,'Canny',0.5);
bw2 = imcomplement(BW);
circle = int32([centx,centy,40]);%<<----------
shapeInserter = vision.ShapeInserter('Fill',false);
release(shapeInserter);
set(shapeInserter,'Shape','Circles');
% construct binary image of circle only
bwCircle = step(shapeInserter,true(size(bw2)),circle);
% find the indexes of the intersection between the binary image and the circle
[i, j] = find ((bw2 | bwCircle) == 0);
figure
imshow(bw2 & bwCircle) % plot the combination of both images
hold on
plot(j, i, 'r*') % plot the intersection points
K = step(shapeInserter,bw2,circle);
[n,m]=size(i);
d=0;
k=1;
while (k < n)
d = d + sqrt((i(k+1)-i(k)).^2 + (j(k+1)-j(k)).^2);
k = k+1;
end
Q: How can I calculate all the existing intersection values (red *) in a clockwise direction?
Not sure, but I think your teacher meant something different from the answer given in the previous question, because:
the CW direction can be a hint, not an additional problem;
I've seen no clue that the circle should be drawn; for small radii, circles are blocky and a simple bw_circle & bw_edges may not work. For details please see the "Steve on Image Processing" blog post "Intersecting curves that don't intersect" [1];
maybe your teacher is rather old and wants a Pascal-style answer.
If my assumptions are correct, the code below should be right:
img = zeros(7,7); %just sample image, edges == 1
img(:,4) = 1; img(sub2ind(size(img),1:7,1:7)) = 1;
ccx = 4; % Please notice: x is for columns, y is for rows
ccy = 3;
rad = 2;
theta = [(pi/2):-0.01:(-3*pi/2)];
% while plotting it will appear CCW, for visual CW and de-facto CCW use
% 0:0.01:2*pi
cx = rad * cos(theta) + ccx; % gives slightly different data as for
cy = rad * sin(theta) + ccy; % (x-xc)^2 + (y-yc)^2 == rad^2 [2]
ccoord=[cy;cx]'; % Once again: x is for columns, y is for rows
[ccoord, rem_idx, ~] =unique(round(ccoord),'rows');
cx = ccoord(:,2);
cy = ccoord(:,1);
circ = zeros(size(img)); circ(sub2ind(size(img),cy,cx))=1;
cross_sum = 0;
figure, imshow(img | circ,'initialmagnification',5000)
hold on,
h = [];
for un_ang = 1:length(cx),
tmp_val= img(cy(un_ang),cx(un_ang));
if tmp_val == 1 %the point belongs to edge of interest
cross_sum = cross_sum + tmp_val;
h = plot(cx(un_ang),cy(un_ang),'ro');
pause,
set(h,'marker','x')
end
end
hold off
[1] https://blogs.mathworks.com/steve/2016/04/12/intersecting-curves-that-dont-intersect/
[2] http://matlab.wikia.com/wiki/FAQ#How_do_I_create_a_circle.3F
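If the goal is then to list the intersection points in clockwise order, one option is simply to sort the points found above by their angle around the circle centre (a sketch; assumes centx, centy and the i, j indices from the question's find call; with image rows pointing down, ascending atan2 angle appears clockwise on screen):
ang = atan2(i - centy, j - centx);  % angle of each intersection point about the centre
[~, order] = sort(ang);             % ascending angle = clockwise on screen (row axis points down)
i_cw = i(order);
j_cw = j(order);
plot(j_cw, i_cw, 'g-o')             % visit the intersection points in clockwise order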

Non overlapping randomly located circles

I need to generate a fixed number of non-overlapping circles located randomly. I can display circles, in this case 20, located randomly with this piece of code,
for i =1:20
x=0 + (5+5)*rand(1)
y=0 + (5+5)*rand(1)
r=0.5
circle3(x,y,r)
hold on
end
however, the circles overlap and I would like to avoid this. This was achieved by previous users with Mathematica (https://mathematica.stackexchange.com/questions/69649/generate-nonoverlapping-random-circles), but I am using MATLAB and I would like to stick to it.
For reproducibility, this is the function, circle3, I am using to draw the circles
function h = circle3(x,y,r)
d = r*2;
px = x-r;
py = y-r;
h = rectangle('Position',[px py d d],'Curvature',[1,1]);
daspect([1,1,1])
Thank you.
You can save a list of all the previously drawn circles. After randomizing a new circle, check that it doesn't intersect the previously drawn circles.
Code example:
nCircles = 20;
circles = zeros(nCircles ,2);
r = 0.5;
for i=1:nCircles
%Flag which holds true whenever a new circle was found
newCircleFound = false;
%loop which runs until finding a circle that doesn't intersect the previous ones
while ~newCircleFound
x = 0 + (5+5)*rand(1);
y = 0 + (5+5)*rand(1);
%calculates distances from previous drawn circles
prevCirclesY = circles(1:i-1,1);
prevCirclesX = circles(1:i-1,2);
distFromPrevCircles = ((prevCirclesX-x).^2+(prevCirclesY-y).^2).^0.5;
%if the distance is not too small - add the new circle to the list
if i==1 || sum(distFromPrevCircles<=2*r)==0
newCircleFound = true;
circles(i,:) = [y x];
circle3(x,y,r)
end
end
hold on
end
Notice that if the number of circles is too big relative to the range from which the x and y coordinates are drawn, the loop may run forever.
In order to avoid that, define this range accordingly (it can be defined as a function of nCircles), as sketched below.
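For example, a rough pre-check along those lines (a sketch; the 0.5 packing factor is an arbitrary assumption, since random placement gives up well before the theoretical packing limit):
areaBox     = 10 * 10;              % x and y are drawn from [0, 10] above
areaCircles = nCircles * pi * r^2;  % total area the circles will occupy
if areaCircles > 0.5 * areaBox
    warning('Too many circles for this area; the rejection loop may not terminate.');
end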
If you're happy with brute-forcing, consider this solution:
N = 60; % number of circles
r = 0.5; % radius
newpt = @() rand([1,2]) * 10; % function to generate a new candidate point
xy = newpt(); % matrix to store XY coordinates
fails = 0; % to avoid looping forever
while size(xy,1) < N
% generate new point and test distance
pt = newpt();
if all(pdist2(xy, pt) > 2*r)
xy = [xy; pt]; % add it
fails = 0; % reset failure counter
else
% increase failure counter,
fails = fails + 1;
% give up if exceeded some threshold
if fails > 1000
error('this is taking too long...');
end
end
end
% plot
plot(xy(:,1), xy(:,2), 'x'), hold on
for i=1:size(xy,1)
circle3(xy(i,1), xy(i,2), r);
end
hold off
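Note that pdist2 comes from the Statistics and Machine Learning Toolbox. If it is not available, the same distance test can be written with basic operations (a sketch; relies on implicit expansion, available since R2016b):
% equivalent to: all(pdist2(xy, pt) > 2*r)
ok = all(sqrt(sum((xy - pt).^2, 2)) > 2*r);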
Slightly amended code from @drorco to make sure the exact number of circles I want is drawn:
nCircles = 20;
circles = zeros(nCircles ,2);
r = 0.5;
c=0;
for i=1:nCircles
%Flag which holds true whenever a new circle was found
newCircleFound = false;
%loop which runs until finding a circle that doesn't intersect the previous ones
while ~newCircleFound && c<=nCircles
x = 0 + (5+5)*rand(1);
y = 0 + (5+5)*rand(1);
%calculates distances from previous drawn circles
prevCirclesY = circles(1:i-1,1);
prevCirclesX = circles(1:i-1,2);
distFromPrevCircles = ((prevCirclesX-x).^2+(prevCirclesY-y).^2).^0.5;
%if the distance is not too small - add the new circle to the list
if i==1 || sum(distFromPrevCircles<=2*r)==0
newCircleFound = true;
c=c+1
circles(i,:) = [y x];
circle3(x,y,r)
end
end
hold on
end
Although this is an old post, because I faced the same problem before I would like to share my solution, which uses anonymous functions: https://github.com/davidnsousa/mcsd/blob/master/mcsd/cells.m . This code allows creating 1-, 2-, or 3-D cell environments from user-defined cell radii distributions. The purpose was to create a complex environment for Monte Carlo simulations of diffusion in biological tissues: https://www.mathworks.com/matlabcentral/fileexchange/67903-davidnsousa-mcsd
A simpler but less flexible version of this code is the simple case of a 2-D environment. The following creates a spatial distribution of N randomly positioned, non-overlapping circles of radius R with a minimum distance D from other cells, all packed in a square region of side length S.
function C = cells(N, R, D, S)
C = @(x, y, r) 0;
for n=1:N
o = randi(S-R,1,2);
while C(o(1),o(2),2 * R + D) ~= 0
o = randi(S-R,1,2);
end
f = @(x, y) sqrt((x - o(1))^2 + (y - o(2))^2);
c = @(x, y, r) f(x, y) .* (f(x, y) < r);
C = @(x, y, r) C(x, y, r) + c(x, y, r);
end
C = @(x, y) C(x, y, R);
end
where the returned C is the combination of the anonymous functions of all circles. Although it is a brute-force solution, it is fast and elegant, I believe.
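For completeness, a minimal usage sketch of the cells function above (the parameter values are arbitrary assumptions; C is evaluated point by point because it is built from scalar anonymous functions):
S = 50; N = 10; R = 3; D = 1;      % hypothetical parameters
C = cells(N, R, D, S);
img = zeros(S, S);
for xi = 1:S
    for yi = 1:S
        img(yi, xi) = C(xi, yi);   % distance to the circle's centre if (xi,yi) is inside a circle, 0 otherwise
    end
end
imagesc(img); axis image           % visualize the generated cell environment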

Rotation and Translation from Essential Matrix incorrect

I currently have a stereo camera setup. I have calibrated both cameras and have the intrinsic matrix for both cameras K1 and K2.
K1 = [2297.311, 0, 319.498;
0, 2297.313, 239.499;
0, 0, 1];
K2 = [2297.304, 0, 319.508;
0, 2297.301, 239.514;
0, 0, 1];
I have also determined the Fundamental matrix F between the two cameras using findFundamentalMat() from OpenCV. I have tested the Epipolar constraint using a pair of corresponding points x1 and x2 (in pixel coordinates) and it is very close to 0.
F = [5.672563368940768e-10, 6.265600996978877e-06, -0.00150188302445251;
6.766518121363063e-06, 4.758206104804563e-08, 0.05516598334827842;
-0.001627120880791009, -0.05934224611334332, 1];
x1 = [133; 75; 1]
x2 = [124.661; 67.6607; 1]
transpose(x2)*F*x1 = -0.0020
From F I am able to obtain the Essential Matrix E as E = K2'*F*K1. I decompose E using the MATLAB SVD function to get the 4 possibilities of rotation and translation of K2 with respect to K1.
E = transpose(K2)*F*K1;
svd(E);
[U,S,V] = svd(E);
diag_110 = [1 0 0; 0 1 0; 0 0 0];
newE = U*diag_110*transpose(V);
[U,S,V] = svd(newE); % perform a second decomposition to get S = diag(1,1,0)
W = [0 -1 0; 1 0 0; 0 0 1];
R1 = U*W*transpose(V);
R2 = U*transpose(W)*transpose(V);
t1 = U(:,3);  % norm = 1
t2 = -U(:,3); % norm = 1
Let's say that K1 is used as the coordinate frame for which we make all measurements. Therefore, the center of K1 is at C1 = (0,0,0). With this it should be possible to apply the correct rotation R and translation t such that C2 = R*(0,0,0)+t (i.e. the center of K2 is measured with respect to the center of K1)
Now let's use my corresponding pair x1 and x2. If I know the length of each pixel in both my cameras, and since I know the focal length from the intrinsic matrix, I should be able to determine two vectors v1 and v2 for both cameras that intersect at the same point, as seen below.
pixel_length = 7.4e-6; % in meters
focal_length = 17e-3;  % in meters
dx1 = (133-319.5)*pixel_length; % x-distance from principal point of the 640x480 image
dy1 = (75-239.5)*pixel_length;  % y-distance from principal point of the 640x480 image
v1 = [dx1; dy1; focal_length] - [0; 0; 0]; % vector from the camera center through the corresponding point on the image plane
dx2 = (124.661-319.5)*pixel_length; % same idea
dy2 = (67.6607-239.5)*pixel_length; % same idea
v2 = R * ([dx2; dy2; focal_length] - [0; 0; 0]) + t; % apply R and t to measure v2 with respect to the K1 frame
With these vectors and knowing the line equation in parametric form, we can then equate the two lines to triangulate, solving for the two scalar quantities s and t with the left-hand divide operator (\) in MATLAB.
C1 + s*v1 = C2 + t*v2
C1-C2 = transpose([v2 v1])*transpose([s t]) % solve the Ax = b system to find s and t
With s and t determined we can find the triangulated point by plugging back into the line equation. However, my process has not been successful as I cannot find a single R and t solution in which the point is in front of both cameras and where both cameras are pointed forwards.
Is there something wrong with my pipeline or thought process? Is it at all possible to obtain each individual pixel ray?
When you decompose the essential matrix into R and t you get 4 different solutions. Three of them project the points behind one or both cameras, and one of them is correct. You have to test which one is correct by triangulating some sample points.
There is a function in the Computer Vision System Toolbox in MATLAB called cameraPose, which will do that for you.
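For reference, here is a sketch of that test done by hand (assumptions: R1, R2, t1, t2 from the decomposition above, K1 and K2 as given, and the matched pair x1, x2 written as homogeneous pixel coordinates):
x1h = [133; 75; 1];
x2h = [124.661; 67.6607; 1];
P1 = K1 * [eye(3), zeros(3,1)];               % camera 1 at the origin
candidates = {R1, t1; R1, t2; R2, t1; R2, t2};
for k = 1:4
    [R, t] = candidates{k, :};
    P2 = K2 * [R, t];
    % linear (DLT) triangulation of the single point pair
    A = [x1h(1)*P1(3,:) - P1(1,:);
         x1h(2)*P1(3,:) - P1(2,:);
         x2h(1)*P2(3,:) - P2(1,:);
         x2h(2)*P2(3,:) - P2(2,:)];
    [~, ~, V] = svd(A);
    X = V(:,end) / V(4,end);                  % triangulated 3-D point (homogeneous, normalized)
    depth1 = X(3);                            % depth in camera 1 (its frame is the world frame)
    Xc2 = [R, t] * X;
    depth2 = Xc2(3);                          % depth in camera 2
    if depth1 > 0 && depth2 > 0
        fprintf('Candidate %d places the point in front of both cameras.\n', k);
    end
end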
Shouldn't it be C1-C2 = transpose([v2; -v1]) * transpose([t s]) (with v1 and v2 as row vectors)? This works.
I checked your code and found that the determinants of both R1 and R2 are -1, which is incorrect because a rotation matrix R should have a determinant equal to +1. Just take R = -R and try again.
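In code, that check might look like this (a sketch, placed right after computing R1 and R2):
% a rotation matrix must have determinant +1; the SVD-based decomposition can return -1
if det(R1) < 0, R1 = -R1; end
if det(R2) < 0, R2 = -R2; end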

Finding the intersection points of a ray and a 5th-order polynomial surface

I am doing ray tracing and I have a screen described in world coordinates as matrices (I had the X, Y, Z in screen coordinates and, by using a transformation and rotation, I got them in world coordinates):
Xw (NxM matrix)
Yw (NxM matrix)
Zw (I have got this polynomial (5th-order polynomial) by fitting the 3D data Xw and Yw; I have it as f(Xw,Yw))
I also have the ray equations, described as usual:
X = Ox + t*Dx
Y = Oy + t*Dy
Z = Oz + t*Dz %(O is the origin point and D is the direction)
So what I did is to substitute the X and Y in the polynomial equation f(Xw,Yw) with the ray expressions and solve it for t, so I can then get the intersection point.
But apparently the method that I used is wrong (the intersection points that I got were somewhere else).
Could anyone please help me and tell me what the mistake is?
Thanks
This is part of the code:
X_World_coordinate_scr = ScreenXCoordinates.*Rotation_matrix_screen(1,1) + ScreenYCoordinates.*Rotation_matrix_screen(1,2) + ScreenZCoordinates.*Rotation_matrix_screen(1,3) + Zerobase_scr(1);
Y_World_coordinate_scr = ScreenXCoordinates.*Rotation_matrix_screen(2,1) + ScreenYCoordinates.*Rotation_matrix_screen(2,2) + ScreenZCoordinates.*Rotation_matrix_screen(2,3) + Zerobase_scr(2);
Z_World_coordinate_scr = ScreenXCoordinates.*Rotation_matrix_screen(3,1) + ScreenYCoordinates.*Rotation_matrix_screen(3,2) + ScreenZCoordinates.*Rotation_matrix_screen(3,3) + Zerobase_scr(3); % converting the screen coordinates to the world coordinates using the rotation matrix and the translation vector
polymodel = polyfitn([X_World_coordinate_scr(:),Y_World_coordinate_scr(:)],Z_World_coordinate_scr(:),5); % polyfitn, a function from the MATLAB File Exchange that I trust; I tried it with different data and it gives me f(Xw,Yw).
ScreenPoly = polyn2sym(polymodel); % Function from Matlab file exchange to give the symbolic shape of the polynomial.
syms X Y Z t Dx Ox Dy Oy Oz Dz z;
tsun = matlabFunction(LayerPoly, 'vars',[X,Y,Z]); % just to substitute the symbols X, Y and Z with (Ox+t*Dx), (Oy+t*Dy) and (Oz+t*Dz) respectively
Equation = tsun((Ox+t*Dx),(Oy+t*Dy),(Oz+t*Dz));
Answer = solve(Equation,t); % solving it for t, but the equation is 5th order and the answer is RootOf(..., z)
a = char(Answer); % preparing it to find the roots (Solutions of t)
R = strrep(a,'RootOf(','');
R1 = strrep(R,', z)','');
b = sym(R1);
PolyCoeffs = coeffs(b,z); % get the coefficients of the polynomial
tfun = matlabFunction(PolyCoeffs, 'vars',[Ox,Oy,Oz,Dx,Dy,Dz]);
tCounter = zeros(length(Directions),1);
NaNIndices = find(isnan(Surface(:,1))==1); %I have NaN values and I am taking them out
tCounter(NaNIndices) = NaN;
NotNaNIndices = find(isnan(Surface(:,1))==0);
for i = NotNaNIndices' % loop over the valid rays to calculate t
OxNew = Surface(i,1);
OyNew = Surface(i,2);
OzNew = Surface(i,3);
DxNew = Directions(i,1);
DyNew = Directions(i,2);
DzNew = Directions(i,3);
P = tfun(OxNew,OyNew,OzNew ,DxNew,DyNew,DzNew);
t = roots(P);
t(imag(t) ~= 0) = []; % getting rid of the complex solutions
tCounter(i) = t;
end
Thanks in advance
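For what it's worth, the same substitute-and-solve idea can also be done numerically, without the symbolic RootOf manipulation (a sketch; assumes polymodel from polyfitn as above, polyvaln from the same File Exchange package, and one hypothetical ray origin O and direction D):
O = [0 0 10];            % hypothetical ray origin
D = [0 0 -1];            % hypothetical ray direction
% g(t) = f(X(t), Y(t)) - Z(t) is zero exactly where the ray meets the fitted surface
g = @(t) polyvaln(polymodel, [O(1) + t*D(1), O(2) + t*D(2)]) - (O(3) + t*D(3));
t_hit = fzero(g, 0);     % needs a sensible starting guess or bracket for each ray
P_hit = O + t_hit*D      % intersection point in world coordinates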