Transformation of camera calibration patterns - matlab

I use camera calibration in MATLAB to detect some checkerboard patterns, and then view the extrinsics with
figure; showExtrinsics(cameraParams, 'CameraCentric');
Now, I want to rotate the checkerboard patterns around the x-axis such that all of them have nearly the same y coordinates in the camera frame.
Method:
I get the positions of all patterns in the camera's frame. Then I run an optimization where the objective function is to minimize the variance in y and the variable is the rotation about x, ranging from 0 to 360 degrees.
Problem:
But when I plot the transformed y-coordinates, they are not even nearly in a line.
Code:
Get the checkerboard points:
%% Get rotation and translation matrices for each image;
T_cw=cell(num_imgs,1); % stores camera to world rotation and translation for each image
pixel_coordinates=zeros(num_imgs,2); % stores the pixel coordinates of each checkerboard origin
for ii=1:num_imgs,
% Detect the checkerboard and compute the extrinsics for this image
im=imread(list_imgs_path{ii});
[imagePoints, boardSize] = detectCheckerboardPoints(im);
[r_wc, t_wc] = extrinsics(imagePoints, worldPoints, cameraParams);
T_wc=[r_wc,t_wc';0 0 0 1];
% World to camera matrix
T_cw{ii} = inv(T_wc);
t_cw{ii}=T_cw{ii}(1:3,4); % x,y,z coordinates in camera's frame
end
Data(num_imgs=10):
t_cw
[-1072.01388542262;1312.20387622761;-1853.34408157349]
[-1052.07856598756;1269.03455126794;-1826.73576892251]
[-1091.85978641218;1351.08261414473;-1668.88197803184]
[-1337.56358084648;1373.78548638383;-1396.87603554914]
[-1555.19509876309;1261.60428874489;-1174.63047408086]
[-1592.39596647158;1066.82210015055;-1165.34417772659]
[-1523.84307918660;963.781819272748;-1207.27444716506]
[-1614.00792252030;893.962075837621;-1114.73528985018]
[-1781.83112607964;708.973204727939;-797.185326205240]
[-1781.83112607964;708.973204727939;-797.185326205240]
Main code (Optimization and transformation):
%% Get theta for rotation
f_obj = @(x)var_ycors(x,t_cw);
opt_theta = fminbnd(f_obj,0,360);
%% Plotting (rotate ycor and check to fix theta)
y_rotated=zeros(1,num_imgs);
for ii=1:num_imgs,
y_rotated(ii)=rotate_cor(opt_theta,t_cw{ii});
end
plot(1:numel(y_rotated),y_rotated);
function var_computed=var_ycors(theta,t_cw)
ycor=zeros(1,numel(t_cw));
for ii =1:numel(t_cw),
ycor(ii)=rotate_cor(theta,t_cw{ii});
end
var_computed=var(ycor);
end
function ycor=rotate_cor(theta,mat)
r_x=[1 0 0; 0 cosd(theta) -sind(theta); 0 sind(theta) cosd(theta)];
rotate_mat=mat'*r_x;
ycor=rotate_mat(2);
end

This is a clear eigenvector problem!
Take your centroids:
t_cw=[-1072.01388542262;1312.20387622761;-1853.34408157349
-1052.07856598756;1269.03455126794;-1826.73576892251
-1091.85978641218;1351.08261414473;-1668.88197803184
-1337.56358084648;1373.78548638383;-1396.87603554914
-1555.19509876309;1261.60428874489;-1174.63047408086
-1592.39596647158;1066.82210015055;-1165.34417772659
-1523.84307918660;963.781819272748;-1207.27444716506
-1614.00792252030;893.962075837621;-1114.73528985018
-1781.83112607964;708.973204727939;-797.185326205240
-1781.83112607964;708.973204727939;-797.185326205240];
t_cw=reshape(t_cw,[3,10])';
Compute PCA on them, so we know the principal components:
[R]=pca(t_cw);
And... that's it! R is now the transformation matrix between your original points and the rotated coordinate system. As an example, I will draw the old points in red and the new ones in blue:
hold on
plot3(t_cw(:,1),t_cw(:,2),t_cw(:,3),'ro')
trans=t_cw*R;
plot3(trans(:,1),trans(:,2),trans(:,3),'bo')
You can see that now the blue ones are in a plane, with the best possible fit to the X direction. If you want them in Y direction, just rotate 90 degrees in Z (I am sure you can figure out how to do this with 2 minutes of Google ;) ).
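For completeness, a minimal sketch of that last 90-degree rotation about Z, reusing trans from the snippet above (Rz90 and trans_y are just illustrative names):
Rz90 = [cosd(90) -sind(90) 0; sind(90) cosd(90) 0; 0 0 1]; % 90-degree rotation about Z
trans_y = trans * Rz90.'; % row-vector convention: the fitted X direction now lies along Y (up to sign)
plot3(trans_y(:,1),trans_y(:,2),trans_y(:,3),'go')
The second column of trans_y then holds the y coordinates the question was trying to optimize for.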
Note: This is mathematically the best possible fit. I know they are not as "in a row" as one would like, but this is because of the data, this is honestly the best possible fit, as that is what the eigenvectors are!

Related

Causing a rotating arc to touch another curve and not intersect it (Matlab code)

I want to write a code where an arc stops rotating as soon as it comes in contact with a semi-circle.
I have written code to do so, but the arc does not just touch the circle; it slightly intersects it.
I have put the rotation of the arc inside a while loop, using linspace() to change theta, and I used polyxpoly() to find the intersection. The condition for the while loop is that as long as the intersection array is empty the loop continues, but as soon as polyxpoly() returns a value the loop stops.
However, by the time the loop stops, the theta value has already exceeded the angle of first contact, so as a result I get an intersection.
How do I modify the code so that the arc will touch the semi-circle and not intersect it?
Here is the output (linked image): the arc intersects the semi-circle where it should only touch.
clc,clear
R = 5; % radius of a circle
r = 10; % radius of arc
aa = 60*pi/180; % arc angle
ap = 0*pi/180; % arc position angle
% defining the semi-circle about the origin
t = linspace(0,pi);
[x,y] = pol2cart(t,R); % circle data
% Shifting circle centre to (3.5,0)
x=x+3.5;
y=y+0;
% defining the arc about the origin
t1 = linspace(0,aa)-aa/2+ap;
[x1,y1] = pol2cart(t1,r); % arc data
% shifting arc-lower-end to (14,0)
delx=14-x1(1); % Finding the x difference between arc-lower-end x-coordinate & 14
dely=0-y1(1); % Finding the y difference between arc-lower-end y-coordinate & 0
x1=x1+delx;
y1=y1+dely;
theta =linspace(0,pi,1000);
i=1;
xc=[];
yc=[];
while isempty(xc)&& isempty(yc)
% create a matrix of these points, which will be useful in future calculations
v = [x1;y1];
% choose a point which will be the center of rotation
x_center = 14;
y_center = 0;
% create a matrix which will be used later in calculations
center = repmat([x_center; y_center], 1, length(x1));
% define a counter-clockwise rotation matrix for the current angle theta(i) (note: this reuses the name R, overwriting the circle radius defined above)
R = [cos(theta(i)) -sin(theta(i)); sin(theta(i)) cos(theta(i))];
% do the rotation...
s = v - center; % shift points in the plane so that the center of rotation is at the origin
so = R*s; % apply the rotation about the origin
vo = so + center; % shift again so the origin goes back to the desired center of rotation
% this can be done in one line as:
% vo = R*(v - center) + center
% pick out the vectors of rotated x- and y-data
x_rotated = vo(1,:);
y_rotated = vo(2,:);
[xc,yc] = polyxpoly(x_rotated,y_rotated,x,y)
[xc1,yc1] = polyxpoly(x1,y1,x,y)
i=i+1;
end
% make a plot
plot(x,y)
hold on
plot(x1, y1, 'k-', x_rotated, y_rotated, 'r-', x_center, y_center, 'bo');
axis equal
I need to find a way for the arc to contact the circle without intersecting it.
The code is in MATLAB.
Any suggestions are welcome.
This is the problem of collision detection. Most, if not all, methods I know of in collision detection require the computer to check for intersections of some sort. It's very difficult (if not impossible) to have two objects "just touch" in simulations, because you would need the precise (analytically solved) location of the boundaries of those objects.
polyxpoly() is a function that returns the intersection points of two polylines or polygons. So unfortunately, if you insist that the arc must touch without ever intersecting, polyxpoly() alone cannot give you that. In that extreme case, you would have to solve a mathematical equation for when the tip of the arc coincides with a point on the circle exactly, then simulate up to that point in time.
But realistically, what you need is a finer simulation (although I personally think what you have alone is good enough). So in every simulation step, you calculate a smaller movement, so that when the arc eventually intersects, only a very small amount of the arc intersects.
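A sketch of that finer-step idea: keep the coarse sweep from the question, then bisect between the last clear angle and the first intersecting angle, so the final pose overlaps the circle by a negligible amount. It reuses the variables from the question's loop (theta, i, x1, y1, x, y, center); lo, hi, Rm, vm, Rt, vt are new illustrative names.
% Bisection refinement after the coarse sweep above
lo = theta(max(i-2,1));   % last angle with no intersection found by the sweep
hi = theta(i-1);          % first angle where polyxpoly reported an intersection
for k = 1:30              % each pass halves the bracket, so the overlap becomes negligible
    mid = (lo + hi)/2;
    Rm  = [cos(mid) -sin(mid); sin(mid) cos(mid)];
    vm  = Rm*([x1; y1] - center) + center;      % rotate the arc about (x_center, y_center)
    if isempty(polyxpoly(vm(1,:), vm(2,:), x, y))
        lo = mid;         % still clear of the circle: raise the lower bound
    else
        hi = mid;         % intersecting: lower the upper bound
    end
end
Rt = [cos(lo) -sin(lo); sin(lo) cos(lo)];
vt = Rt*([x1; y1] - center) + center;
x_rotated = vt(1,:);  y_rotated = vt(2,:);      % arc that touches the circle to within the bracket width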

Rotate image around world x axis

Having this coordinate system:
And this dominant vertical vanishing point:
I would like to rotate the image around x axis so the vanishing point is at infinity. That means that all vertical lines are parallel.
I am using MATLAB. I find the line segments using LSD and the vanishing point using homogeneous coordinates. I would like to use an angle-axis representation, then convert it to a rotation matrix, pass this to imwarp, and get the rotated image. It would also be good to know how to rotate the segments. The segments are given as (x1,y1,x2,y2).
Example for the image above:
Vanishing point in homogeneous coordinates:
(x,y,z) = 1.0e+05 * [0.4992 -2.2012 0.0026]
Vanishing point in Cartesian coordinates (what you see in the image):
(x,y) = [190.1335 -838.3577]
Question: With this vanishing point how do I compute the rotation matrix in the world x axis as explained above?
If all you're doing is rotating the image so that the vector from the origin to the vanishing point, is instead pointing directly vertical, here's an example.
I = imread('cameraman.tif');
figure;imagesc(I);set(gcf,'colormap',gray);
vp=-[190.1335 -838.3577,0]; %3d version,just for cross-product use,-ve ?
y=[0,1,0]; %The vertical axis on the plot
u = cross(vp,y); %you know it's going to be the z-axis
theta = -acos(dot(vp/norm(vp),y)); %-ve ?
rotMat = vrrotvec2mat([u, theta]);
J=imwarp(I,affine2d (rotMat));
figure;imagesc(J);set(gcf,'colormap',gray); %tilted image
You can play with the negatives, and plotting, since I'm not sure about those parts applying to your situation. The negatives may come from plotting upside down, or from rotation of the world vs. camera coordinate system, but I don't have time to think about it right now.
EDIT
If you want rotation about the X-axis, this might work (adapted from https://www.mathworks.com/matlabcentral/answers/113074-how-to-rotate-an-image-along-y-axis), or check out: Rotate image over X, Y and Z axis in Matlab
[rows, columns, numberOfColorChannels] = size(I);
newRows = rows * cos(theta);
rotatedImage = imresize(I, [newRows, columns]);
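Separately, a hedged sketch of applying an actual rotation about the camera's X-axis as a homography with imwarp; the intrinsics fx, fy, cx, cy and the angle thetaX below are placeholder values, not derived from the question's vanishing point:
fx = 1000; fy = 1000;                       % placeholder focal lengths in pixels
cx = size(I,2)/2; cy = size(I,1)/2;         % assume the principal point is the image centre
K  = [fx 0 cx; 0 fy cy; 0 0 1];
thetaX = deg2rad(5);                        % example rotation angle about X
Rx = [1 0 0; 0 cos(thetaX) -sin(thetaX); 0 sin(thetaX) cos(thetaX)];
H  = K*Rx/K;                                % homography induced by a pure rotation: K*Rx*inv(K)
tform = projective2d(H.');                  % projective2d expects the row-vector convention
J2 = imwarp(I, tform);
figure; imagesc(J2); set(gcf,'colormap',gray);
% Segment endpoints can be mapped the same way with transformPointsForward(tform, x, y).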

Calibration of images to obtain a top-view for points that lie on a same plane

Calibration:
I have calibrated the camera using this vision toolbox in Matlab. I used checkerboard images to do so. After calibration I get the cameraParams
which contains:
Camera Extrinsics
RotationMatrices: [3x3x18 double]
TranslationVectors: [18x3 double]
and
Camera Intrinsics
IntrinsicMatrix: [3x3 double]
FocalLength: [1.0446e+03 1.0428e+03]
PrincipalPoint: [604.1474 359.7477]
Skew: 3.5436
Aim:
I have recorded trajectories of some objects in motion using this camera. Each object corresponds to a single point in a frame. Now, I want to project the points such that I get a top-view.
Note that all the points I wish to transform are on the same plane.
ex: [xcor_i, ycor_i]
-101.7000 -77.4040
-102.4200 -77.4040
KEY POINT: This plane is perpendicular to one of the checkerboard images used for calibration. For that image (below), I know the height of the checkerboard's origin from the ground (193.040 cm). The plane to project the points onto is parallel to the ground and perpendicular to this checkerboard.
Code
(Ref: https://stackoverflow.com/a/27260492/3646408 and the answer by @Dima below):
function generate_homographic_matrix()
%% Calibrate camera
% Define images to process
path=['.' filesep 'Images' filesep];
list_imgs=dir([path '*.jpg']);
list_imgs_path=strcat(path,{list_imgs.name});
% Detect checkerboards in images
[imagePoints, boardSize, imagesUsed] = detectCheckerboardPoints(list_imgs_path);
imageFileNames = list_imgs_path(imagesUsed);
% Generate world coordinates of the corners of the squares
squareSize = 27; % in units of 'mm'
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
% Calibrate the camera
[cameraParams, imagesUsed, estimationErrors] = estimateCameraParameters(imagePoints, worldPoints, ...
'EstimateSkew', true, 'EstimateTangentialDistortion', true, ...
'NumRadialDistortionCoefficients', 3, 'WorldUnits', 'mm');
%% Compute homography for the plane perpendicular to the checkerboard
% Detect the checkerboard
im=imread(['.' filesep 'Images' filesep 'exp_19.jpg']); %exp_19.jpg is the checkerboard orthogonal to the floor
[imagePoints, boardSize] = detectCheckerboardPoints(im);
% Compute rotation and translation of the camera.
[Rc, Tc] = extrinsics(imagePoints, worldPoints, cameraParams);
% Rc(rotation of the calibration view w.r.t the camera) = [x y z])
%then the floor has rotation Rf = [z x -y].(Normal vector of the floor goes up.)
Rf=[Rc(:,3),Rc(:,1),Rc(:,2)*-1];
% Translate it to the floor
H=452;%distance btw origin and floor
Fc = Rc * [0; H; 0];
Tc = Tc + Fc';
% Combine rotation and translation into one matrix:
Rf(3, :) = Tc;
% Compute the homography between the checkerboard and the image plane:
H = Rf * cameraParams.IntrinsicMatrix;
save('homographic_matrix.mat','H')
end
%% Transform points
function [x_transf,y_transf] =transform_points(xcor_i,ycor_i)
% creates a projective2D object and then transforms the points forward to
% get a top-view
% xcor_i and ycor_i are 1d vectors comprising of the x-coordinates and
% y-coordinates of trajectories.
data=load('homographic_matrix.mat');
homo_matrix=data.H;
tform=projective2d(inv(homo_matrix));
[x_transf,y_transf] = transformPointsForward(tform,xcor_i,ycor_i);
end
Quoting text from O'Reilly's Learning OpenCV, p. 412:
"Once we have the homography matrix and the height parameter set as we wish, we could
then remove the chessboard and drive the cart around, making a bird’s-eye view video
of the path..."
This what I essentially wish to achieve.
Abhishek,
I don't entirely understand what you are trying to do. Are your points on a plane, and are you trying to create a bird's eye view of that plane?
If so, then you need to know the extrinsics, R and t, describing the relationship between that plane and the camera. One way to get R and t is to place a checkerboard on the plane, and then use the extrinsics function.
After that, you can follow the directions in the question you cited to get the homography. Once you have the homography, you can create a projective2D object, and use its transformPointsForward method to transform your points.
Since you have the size of the squares on the grid, given 2 points that you know are connected by an edge of length E (in real-world units), you can calculate their 3D positions.
Taking the camera intrinsic matrix K, the camera's 3D position C and the camera orientation matrix R, you can calculate a ray to each of the points p by doing:
D = R^T * K^-1 * p
Each 3D point is defined as:
P = C + t*D
and you have the constraint that ||P1-P2|| = E
then it's a matter of solving for t1,t2 and finding the 3D position of the two points.
In order to create a top view, you can take the 3D points and project them using a camera model for that top view to generate a new image.
If all your points are on a single plane, it's enough to calculate the position of 3 points, and you can extrapolate the rest.
If your points are located on a plane that you know one coordinate of, you can do it simply for each point. For example, if you know that your camera is located at height h=C.z, and you want to find the 3D location of points in the frame, given that they are on the floor (z=0), then all you have to do is calculate the direction D as above, and then:
t=abs( (h-0)/D.z )
The 0 represents the height of the plane. Substitute any other value for other planes.
Now that you have the value of t, you can calculate the 3D position of each point: P=C+t*D.
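As a minimal MATLAB sketch of this known-plane case (assuming K, R and the camera centre C are already known in the column-vector convention; the pixel value is made up):
% K: 3x3 intrinsic matrix, R: camera orientation, C: 3x1 camera centre (all assumed known)
p = [400; 380; 1];           % an example pixel in homogeneous coordinates (made up)
D = R.' * (K \ p);           % ray direction in world coordinates: R^T * K^-1 * p
h = C(3);                    % camera height above the plane z = 0
t = abs((h - 0) / D(3));     % the 0 is the plane height; substitute for other planes
P = C + t*D;                 % 3D position of the point on the floor
Repeating this for every [xcor_i, ycor_i] gives the floor coordinates, which can then be projected onto whatever top-view camera is chosen.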
Then, to create a top view, create a new camera position and rotation to match your required projection, and you can project each point onto this camera's image plane.
If you want a full image, you can interpolate positions and fill in the blanks where no feature point was present.
For more details, you can always read: http://www.robots.ox.ac.uk/~vgg/hzbook/index.html

Is there a way to control distortion in Matlab's 3D viewer?

The background of this problem relates to my attempt to combine output from a ray tracer with Matlab's 3d plotters. When doing ray tracing, there is no need to apply a perspective transformation to the rendered image. You see this in the image below. Basically, the intersections of the rays with the viewport will automatically adjust for the perspective scaling.
Suppose I've gone and created a ray-traced image (so I am given my camera, my focal length, viewport dimensions, etc.). How do I create exactly the same view in Matlab's 3d plotting environment?
Here is an example:
clear
close all
evec = [0 200 300]; % Camera position
recw = 200; % cm width of box
recl = 200; % cm length of box
h = 150; % cm height of box
% Create the front face rectangle
front = zeros(3,5);
front(:,1) = [-recw/2; 0; -recl/2];
front(:,2) = [recw/2; 0; -recl/2];
front(:,3) = [recw/2; h; -recl/2];
front(:,4) = [-recw/2; h; -recl/2];
front(:,5) = front(:,1);
% Back face rectangle
back = zeros(3,5);
back(:,1) = [-recw/2; 0; recl/2];
back(:,2) = [recw/2; 0; recl/2];
back(:,3) = [recw/2; h; recl/2];
back(:,4) = [-recw/2; h; recl/2];
back(:,5) = back(:,1);
% Plot the world view
figure(1);
patch(front(1,:), front(2,:), front(3,:), 'r'); hold all
patch(back(1,:), back(2,:), back(3,:), 'b');
plot3(evec(1), evec(2), evec(3), 'bo');
xlabel('x'); ylabel('y'); zlabel('z');
title('world view'); view([-30 40]);
% Plot the camera view
figure(2);
patch(front(1,:), front(2,:), front(3,:), 'r'); hold all
patch(back(1,:), back(2,:), back(3,:), 'b');
xlabel('x'); ylabel('y'); zlabel('z');
title('Camera view');
campos(evec);
camup([0 1 0]); % Up vector is y+
camproj('perspective');
camtarget([evec(1), evec(2), 0]);
title('camera view');
Now you see the world view
and the camera view
I know how to adjust the camera position, the camera view angle, and orientation to match the output from my ray tracer. However, I do not know how to adjust Matlab's built-in perspective command
camproj('perspective')
for different distortions.
Note: within the documentation, there is the viewmtx command, which allows you to output a transformation matrix corresponding to a perspective distortion of a certain angle. This is not quite what I want. I want to do things in 3D and through Matlab's OpenGL viewer. In essence, I want a command like
camproj('perspective', distortionamount)
so I can match up the amount of distortion in Matlab's viewer with the distortion from the ray tracer. If you use the viewmtx command to create the 2D projections, you will not be able to use patch or surf and keep colours and faces intact.
The MATLAB perspective projection works just like your raytracer. You don't need any transformation matrices to use it. Perspective distortion is determined entirely by the camera position and the direction of projection.
In the terminology of the raytracer diagram above, if the CameraPosition matches your raytracer's pinhole coordinates and the vector between CameraPosition and CameraTarget is perpendicular to your raytracer's viewport, the perspective distortion will also match. The rest is just scaling and alignment.
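To make that concrete, a sketch of lining up the MATLAB camera with a pinhole ray tracer by matching the field of view; focal_length and viewport_height are placeholder numbers standing in for the ray tracer's settings:
focal_length    = 35;   % placeholder: ray tracer focal length (same units as the viewport)
viewport_height = 24;   % placeholder: ray tracer viewport height
fov = 2*atand((viewport_height/2)/focal_length);   % vertical field of view in degrees
figure(2);
campos(evec);                       % camera at the pinhole position from the example above
camtarget([evec(1), evec(2), 0]);   % view direction perpendicular to the viewport
camup([0 1 0]);
camproj('perspective');
camva(fov);                         % approximately match the ray tracer's view angle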

Estimating distance to a point using camera calibration

I want to estimate the distance (camera to a point on the ground, i.e. Yw=0) from a given pixel coordinate of that point. For that I used camera calibration methods,
but the results are not meaningful.
I have the following details for calibration:
- focal length x and y, principal point x and y, effective pixel size in meters, yaw and pitch angles, and camera height.
- I have entered the focal length, principal point and translation vector in pixels for the calculation.
- I have multiplied the image point by the camera matrix and then by the rotation|translation matrix (R|t) to get the world point.
Is my procedure correct? What could be wrong?
Result:
image_point(x,y) =400,380
world_point z co ordinate(distance) = 12.53
image_point(x,y) =400,180
world_point z co ordinate(distance) = 5.93
Problem:
I am getting very small values for the z coordinate,
which means the z coordinate is << 1 m (because the effective pixel size in meters = 10^-5).
This is my matlab code
%positive downward pitch
xR = 0.033;
yR = 0;
zR = pi;
%effective pixel size in meters = 10 ^-5 ; focal_length x & y = 0.012 m
% principal point x & y = 320 and 240
intrinsic_params =[1200,0,320;0,1200,240;0,0,1];
Rx=[1,0,0 ; 0,cos(xR),sin(xR); 0,-sin(xR),cos(xR)];
Ry=[cos(yR),0,-sin(yR) ; 0,1,0 ; sin(yR),0,cos(yR)];
Rz=[cos(zR),sin(zR),0 ; -sin(zR),cos(zR),0 ; 0,0,1];
R= Rx * Ry * Rz ;
% The camera is 1.17m above the ground
t=[0;117000;0];
extrinsic_params = horzcat(R,t);
% extrinsic_params is 3 *4 matrix
P = intrinsic_params * extrinsic_params; % P 3*4 matrix
% make it square ....
P_sq = [P; 0,0,0,1];
%image size is 640 x 480
%An arbitrary pixel (400,380) is entered as input
image_point = [400,380,0,1];
% world point will be in the form X Y Z 1
world_point = P_sq * image_point'
Your procedure is kind of right, however it is going in the wrong direction.
See this link. Using your intrinsic and extrinsic calibration matrix you can find the pixel-space position of a real-world vector, NOT the other way around. The exception to this is if your camera is stationary in the global frame and you have the Z position of the feature in the global space.
Stationary camera, known feature Z case: (see also this link)
%% First we simulate a camera feature measurement
K = [0.5 0 320;
0 0.5 240;
0 0 1]; % Example intrinsics
R = rotx(0)*roty(0)*rotz(pi/4); % orientation of camera in global frame
c = [1; 1; 1]; %Pos camera in global frame
rwPt = [ 10; 10; 5]; %position of a feature in global frame
imPtH = K*R*(rwPt - c); %Homogeneous image point
imPt = imPtH(1:2)/imPtH(3) %Actual image point
%% Now we use the simulated image point imPt and the knowledge of the
% features Z coordinate to determine the features X and Y coordinates
%% First determine the scaling term lambda
imPtH2 = [imPt; 1];
z = R.' * inv(K) * imPtH2;
lambda = (rwPt(3)-c(3))/z(3);
%% Now the RW position of the feature is:
rwPt2 = c + lambda*R.' * inv(K) * imPtH2 % Reconstructed RW point
Non-stationary camera case:
To find the real-world position or distance from the camera to a particular feature (given on the image plane) you have to employ some method of reconstructing the 3D data from the 2D image.
The two that come to mind immediately are OpenCV's solvePnP and stereo-vision depth estimation.
solvePnP requires 4 co-planar (in RW space) features to be available in the image, and the positions of the features in RW space known. This may not sound useful as you need to know the RW position of the features, but you can simply define the 4 features with a known offset rather than a position in the global frame - the result will be the relative position of the camera in the frame the features are defined in. solvePnP gives very accurate pose estimation of the camera. See my example.
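For reference, a rough MATLAB analogue of the solvePnP route is estimateWorldCameraPose from the Computer Vision Toolbox; a sketch with made-up feature coordinates (the 0.30 m square and the pixel values are assumptions, and cameraParams must be a calibrated cameraParameters object, not the hand-built matrix from the question):
% Four coplanar features with a known offset: corners of an assumed 0.30 m square
worldPoints = [0 0 0; 0.30 0 0; 0.30 0.30 0; 0 0.30 0];
imagePoints = [412 318; 530 322; 526 441; 408 436];   % made-up detections of those corners
[worldOrientation, worldLocation] = estimateWorldCameraPose(imagePoints, worldPoints, cameraParams);
% worldLocation is the camera position expressed in the frame the four features were defined in,
% so norm(worldLocation - featurePoint) gives the camera-to-feature distance.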
Stereo-vision depth estimation requires the same feature to be found in two spatially separated images, and the transformation between the images in RW space must be known very precisely.
There may be other methods but these are the two I am familiar with.